From: "Alice Ferrazzi" <alicef@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:5.0 commit in: /
Date: Wed, 17 Apr 2019 07:32:31 +0000 (UTC)
Message-ID: <1555486337.f5cf400c13c66c3c62cc0b83cd9894bec6c56983.alicef@gentoo>
commit: f5cf400c13c66c3c62cc0b83cd9894bec6c56983
Author: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Apr 17 07:31:29 2019 +0000
Commit: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Apr 17 07:32:17 2019 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f5cf400c
Linux Patch 5.0.8
Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>
0000_README | 4 +
1007_linux-5.0.8.patch | 36018 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 36022 insertions(+)
diff --git a/0000_README b/0000_README
index 0545dfc..2dd07a5 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch: 1006_linux-5.0.7.patch
From: http://www.kernel.org
Desc: Linux 5.0.7
+Patch: 1007_linux-5.0.8.patch
+From: http://www.kernel.org
+Desc: Linux 5.0.8
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1007_linux-5.0.8.patch b/1007_linux-5.0.8.patch
new file mode 100644
index 0000000..2e45798
--- /dev/null
+++ b/1007_linux-5.0.8.patch
@@ -0,0 +1,36018 @@
+diff --git a/Documentation/DMA-API.txt b/Documentation/DMA-API.txt
+index e133ccd60228..acfe3d0f78d1 100644
+--- a/Documentation/DMA-API.txt
++++ b/Documentation/DMA-API.txt
+@@ -195,6 +195,14 @@ Requesting the required mask does not alter the current mask. If you
+ wish to take advantage of it, you should issue a dma_set_mask()
+ call to set the mask to the value returned.
+
++::
++
++ size_t
++ dma_direct_max_mapping_size(struct device *dev);
++
++Returns the maximum size of a mapping for the device. The size parameter
++of the mapping functions like dma_map_single(), dma_map_page() and
++others should not be larger than the returned value.
+
+ Part Id - Streaming DMA mappings
+ --------------------------------
+diff --git a/Documentation/arm/kernel_mode_neon.txt b/Documentation/arm/kernel_mode_neon.txt
+index 525452726d31..b9e060c5b61e 100644
+--- a/Documentation/arm/kernel_mode_neon.txt
++++ b/Documentation/arm/kernel_mode_neon.txt
+@@ -6,7 +6,7 @@ TL;DR summary
+ * Use only NEON instructions, or VFP instructions that don't rely on support
+ code
+ * Isolate your NEON code in a separate compilation unit, and compile it with
+- '-mfpu=neon -mfloat-abi=softfp'
++ '-march=armv7-a -mfpu=neon -mfloat-abi=softfp'
+ * Put kernel_neon_begin() and kernel_neon_end() calls around the calls into your
+ NEON code
+ * Don't sleep in your NEON code, and be aware that it will be executed with
+@@ -87,7 +87,7 @@ instructions appearing in unexpected places if no special care is taken.
+ Therefore, the recommended and only supported way of using NEON/VFP in the
+ kernel is by adhering to the following rules:
+ * isolate the NEON code in a separate compilation unit and compile it with
+- '-mfpu=neon -mfloat-abi=softfp';
++ '-march=armv7-a -mfpu=neon -mfloat-abi=softfp';
+ * issue the calls to kernel_neon_begin(), kernel_neon_end() as well as the calls
+ into the unit containing the NEON code from a compilation unit which is *not*
+ built with the GCC flag '-mfpu=neon' set.
+diff --git a/Documentation/arm64/silicon-errata.txt b/Documentation/arm64/silicon-errata.txt
+index 1f09d043d086..ddb8ce5333ba 100644
+--- a/Documentation/arm64/silicon-errata.txt
++++ b/Documentation/arm64/silicon-errata.txt
+@@ -44,6 +44,8 @@ stable kernels.
+
+ | Implementor | Component | Erratum ID | Kconfig |
+ +----------------+-----------------+-----------------+-----------------------------+
++| Allwinner | A64/R18 | UNKNOWN1 | SUN50I_ERRATUM_UNKNOWN1 |
++| | | | |
+ | ARM | Cortex-A53 | #826319 | ARM64_ERRATUM_826319 |
+ | ARM | Cortex-A53 | #827319 | ARM64_ERRATUM_827319 |
+ | ARM | Cortex-A53 | #824069 | ARM64_ERRATUM_824069 |
+diff --git a/Documentation/devicetree/bindings/iio/adc/samsung,exynos-adc.txt b/Documentation/devicetree/bindings/iio/adc/samsung,exynos-adc.txt
+index a10c1f89037d..e1fe02f3e3e9 100644
+--- a/Documentation/devicetree/bindings/iio/adc/samsung,exynos-adc.txt
++++ b/Documentation/devicetree/bindings/iio/adc/samsung,exynos-adc.txt
+@@ -11,11 +11,13 @@ New driver handles the following
+
+ Required properties:
+ - compatible: Must be "samsung,exynos-adc-v1"
+- for exynos4412/5250 controllers.
++ for Exynos5250 controllers.
+ Must be "samsung,exynos-adc-v2" for
+ future controllers.
+ Must be "samsung,exynos3250-adc" for
+ controllers compatible with ADC of Exynos3250.
++ Must be "samsung,exynos4212-adc" for
++ controllers compatible with ADC of Exynos4212 and Exynos4412.
+ Must be "samsung,exynos7-adc" for
+ the ADC in Exynos7 and compatibles
+ Must be "samsung,s3c2410-adc" for
+diff --git a/Documentation/process/stable-kernel-rules.rst b/Documentation/process/stable-kernel-rules.rst
+index 0de6f6145cc6..7ba8cd567f84 100644
+--- a/Documentation/process/stable-kernel-rules.rst
++++ b/Documentation/process/stable-kernel-rules.rst
+@@ -38,6 +38,9 @@ Procedure for submitting patches to the -stable tree
+ - If the patch covers files in net/ or drivers/net please follow netdev stable
+ submission guidelines as described in
+ :ref:`Documentation/networking/netdev-FAQ.rst <netdev-FAQ>`
++ after first checking the stable networking queue at
++ https://patchwork.ozlabs.org/bundle/davem/stable/?series=&submitter=&state=*&q=&archive=
++ to ensure the requested patch is not already queued up.
+ - Security patches should not be handled (solely) by the -stable review
+ process but should follow the procedures in
+ :ref:`Documentation/admin-guide/security-bugs.rst <securitybugs>`.
+diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
+index 356156f5c52d..ba8927c0d45c 100644
+--- a/Documentation/virtual/kvm/api.txt
++++ b/Documentation/virtual/kvm/api.txt
+@@ -13,7 +13,7 @@ of a virtual machine. The ioctls belong to three classes
+
+ - VM ioctls: These query and set attributes that affect an entire virtual
+ machine, for example memory layout. In addition a VM ioctl is used to
+- create virtual cpus (vcpus).
++ create virtual cpus (vcpus) and devices.
+
+ Only run VM ioctls from the same process (address space) that was used
+ to create the VM.
+@@ -24,6 +24,11 @@ of a virtual machine. The ioctls belong to three classes
+ Only run vcpu ioctls from the same thread that was used to create the
+ vcpu.
+
++ - device ioctls: These query and set attributes that control the operation
++ of a single device.
++
++ device ioctls must be issued from the same process (address space) that
++ was used to create the VM.
+
+ 2. File descriptors
+ -------------------
+@@ -32,10 +37,11 @@ The kvm API is centered around file descriptors. An initial
+ open("/dev/kvm") obtains a handle to the kvm subsystem; this handle
+ can be used to issue system ioctls. A KVM_CREATE_VM ioctl on this
+ handle will create a VM file descriptor which can be used to issue VM
+-ioctls. A KVM_CREATE_VCPU ioctl on a VM fd will create a virtual cpu
+-and return a file descriptor pointing to it. Finally, ioctls on a vcpu
+-fd can be used to control the vcpu, including the important task of
+-actually running guest code.
++ioctls. A KVM_CREATE_VCPU or KVM_CREATE_DEVICE ioctl on a VM fd will
++create a virtual cpu or device and return a file descriptor pointing to
++the new resource. Finally, ioctls on a vcpu or device fd can be used
++to control the vcpu or device. For vcpus, this includes the important
++task of actually running guest code.
+
+ In general file descriptors can be migrated among processes by means
+ of fork() and the SCM_RIGHTS facility of unix domain socket. These
+diff --git a/Makefile b/Makefile
+index d5713e7b1e50..f7666051de66 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 0
+-SUBLEVEL = 0
++SUBLEVEL = 8
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+
+@@ -15,19 +15,6 @@ NAME = Shy Crocodile
+ PHONY := _all
+ _all:
+
+-# Do not use make's built-in rules and variables
+-# (this increases performance and avoids hard-to-debug behaviour)
+-MAKEFLAGS += -rR
+-
+-# Avoid funny character set dependencies
+-unexport LC_ALL
+-LC_COLLATE=C
+-LC_NUMERIC=C
+-export LC_COLLATE LC_NUMERIC
+-
+-# Avoid interference with shell env settings
+-unexport GREP_OPTIONS
+-
+ # We are using a recursive build, so we need to do a little thinking
+ # to get the ordering right.
+ #
+@@ -44,6 +31,21 @@ unexport GREP_OPTIONS
+ # descending is started. They are now explicitly listed as the
+ # prepare rule.
+
++ifneq ($(sub-make-done),1)
++
++# Do not use make's built-in rules and variables
++# (this increases performance and avoids hard-to-debug behaviour)
++MAKEFLAGS += -rR
++
++# Avoid funny character set dependencies
++unexport LC_ALL
++LC_COLLATE=C
++LC_NUMERIC=C
++export LC_COLLATE LC_NUMERIC
++
++# Avoid interference with shell env settings
++unexport GREP_OPTIONS
++
+ # Beautify output
+ # ---------------------------------------------------------------------------
+ #
+@@ -112,7 +114,6 @@ export quiet Q KBUILD_VERBOSE
+
+ # KBUILD_SRC is not intended to be used by the regular user (for now),
+ # it is set on invocation of make with KBUILD_OUTPUT or O= specified.
+-ifeq ($(KBUILD_SRC),)
+
+ # OK, Make called in directory where kernel src resides
+ # Do we want to locate output files in a separate directory?
+@@ -142,6 +143,24 @@ $(if $(KBUILD_OUTPUT),, \
+ # 'sub-make' below.
+ MAKEFLAGS += --include-dir=$(CURDIR)
+
++need-sub-make := 1
++else
++
++# Do not print "Entering directory ..." at all for in-tree build.
++MAKEFLAGS += --no-print-directory
++
++endif # ifneq ($(KBUILD_OUTPUT),)
++
++ifneq ($(filter 3.%,$(MAKE_VERSION)),)
++# 'MAKEFLAGS += -rR' does not immediately become effective for GNU Make 3.x
++# We need to invoke sub-make to avoid implicit rules in the top Makefile.
++need-sub-make := 1
++# Cancel implicit rules for this Makefile.
++$(lastword $(MAKEFILE_LIST)): ;
++endif
++
++ifeq ($(need-sub-make),1)
++
+ PHONY += $(MAKECMDGOALS) sub-make
+
+ $(filter-out _all sub-make $(CURDIR)/Makefile, $(MAKECMDGOALS)) _all: sub-make
+@@ -149,16 +168,15 @@ $(filter-out _all sub-make $(CURDIR)/Makefile, $(MAKECMDGOALS)) _all: sub-make
+
+ # Invoke a second make in the output directory, passing relevant variables
+ sub-make:
+- $(Q)$(MAKE) -C $(KBUILD_OUTPUT) KBUILD_SRC=$(CURDIR) \
++ $(Q)$(MAKE) sub-make-done=1 \
++ $(if $(KBUILD_OUTPUT),-C $(KBUILD_OUTPUT) KBUILD_SRC=$(CURDIR)) \
+ -f $(CURDIR)/Makefile $(filter-out _all sub-make,$(MAKECMDGOALS))
+
+-# Leave processing to above invocation of make
+-skip-makefile := 1
+-endif # ifneq ($(KBUILD_OUTPUT),)
+-endif # ifeq ($(KBUILD_SRC),)
++endif # need-sub-make
++endif # sub-make-done
+
+ # We process the rest of the Makefile if this is the final invocation of make
+-ifeq ($(skip-makefile),)
++ifeq ($(need-sub-make),)
+
+ # Do not print "Entering directory ...",
+ # but we want to display it when entering to the output directory
+@@ -492,7 +510,7 @@ endif
+ ifneq ($(shell $(CC) --version 2>&1 | head -n 1 | grep clang),)
+ ifneq ($(CROSS_COMPILE),)
+ CLANG_FLAGS := --target=$(notdir $(CROSS_COMPILE:%-=%))
+-GCC_TOOLCHAIN_DIR := $(dir $(shell which $(LD)))
++GCC_TOOLCHAIN_DIR := $(dir $(shell which $(CROSS_COMPILE)elfedit))
+ CLANG_FLAGS += --prefix=$(GCC_TOOLCHAIN_DIR)
+ GCC_TOOLCHAIN := $(realpath $(GCC_TOOLCHAIN_DIR)/..)
+ endif
+@@ -625,12 +643,15 @@ ifeq ($(may-sync-config),1)
+ -include include/config/auto.conf.cmd
+
+ # To avoid any implicit rule to kick in, define an empty command
+-$(KCONFIG_CONFIG) include/config/auto.conf.cmd: ;
++$(KCONFIG_CONFIG): ;
+
+ # The actual configuration files used during the build are stored in
+ # include/generated/ and include/config/. Update them if .config is newer than
+ # include/config/auto.conf (which mirrors .config).
+-include/config/%.conf: $(KCONFIG_CONFIG) include/config/auto.conf.cmd
++#
++# This exploits the 'multi-target pattern rule' trick.
++# The syncconfig should be executed only once to make all the targets.
++%/auto.conf %/auto.conf.cmd %/tristate.conf: $(KCONFIG_CONFIG)
+ $(Q)$(MAKE) -f $(srctree)/Makefile syncconfig
+ else
+ # External modules and some install targets need include/generated/autoconf.h
+@@ -944,9 +965,11 @@ mod_sign_cmd = true
+ endif
+ export mod_sign_cmd
+
++HOST_LIBELF_LIBS = $(shell pkg-config libelf --libs 2>/dev/null || echo -lelf)
++
+ ifdef CONFIG_STACK_VALIDATION
+ has_libelf := $(call try-run,\
+- echo "int main() {}" | $(HOSTCC) -xc -o /dev/null -lelf -,1,0)
++ echo "int main() {}" | $(HOSTCC) -xc -o /dev/null $(HOST_LIBELF_LIBS) -,1,0)
+ ifeq ($(has_libelf),1)
+ objtool_target := tools/objtool FORCE
+ else
+@@ -1754,7 +1777,7 @@ $(cmd_files): ; # Do not try to update included dependency files
+
+ endif # ifeq ($(config-targets),1)
+ endif # ifeq ($(mixed-targets),1)
+-endif # skip-makefile
++endif # need-sub-make
+
+ PHONY += FORCE
+ FORCE:
+diff --git a/arch/alpha/kernel/syscalls/syscall.tbl b/arch/alpha/kernel/syscalls/syscall.tbl
+index 7b56a53be5e3..e09558edae73 100644
+--- a/arch/alpha/kernel/syscalls/syscall.tbl
++++ b/arch/alpha/kernel/syscalls/syscall.tbl
+@@ -451,3 +451,4 @@
+ 520 common preadv2 sys_preadv2
+ 521 common pwritev2 sys_pwritev2
+ 522 common statx sys_statx
++523 common io_pgetevents sys_io_pgetevents
+diff --git a/arch/arm/boot/dts/am335x-evm.dts b/arch/arm/boot/dts/am335x-evm.dts
+index dce5be5df97b..edcff79879e7 100644
+--- a/arch/arm/boot/dts/am335x-evm.dts
++++ b/arch/arm/boot/dts/am335x-evm.dts
+@@ -57,6 +57,24 @@
+ enable-active-high;
+ };
+
++ /* TPS79501 */
++ v1_8d_reg: fixedregulator-v1_8d {
++ compatible = "regulator-fixed";
++ regulator-name = "v1_8d";
++ vin-supply = <&vbat>;
++ regulator-min-microvolt = <1800000>;
++ regulator-max-microvolt = <1800000>;
++ };
++
++ /* TPS79501 */
++ v3_3d_reg: fixedregulator-v3_3d {
++ compatible = "regulator-fixed";
++ regulator-name = "v3_3d";
++ vin-supply = <&vbat>;
++ regulator-min-microvolt = <3300000>;
++ regulator-max-microvolt = <3300000>;
++ };
++
+ matrix_keypad: matrix_keypad0 {
+ compatible = "gpio-matrix-keypad";
+ debounce-delay-ms = <5>;
+@@ -499,10 +517,10 @@
+ status = "okay";
+
+ /* Regulators */
+- AVDD-supply = <&vaux2_reg>;
+- IOVDD-supply = <&vaux2_reg>;
+- DRVDD-supply = <&vaux2_reg>;
+- DVDD-supply = <&vbat>;
++ AVDD-supply = <&v3_3d_reg>;
++ IOVDD-supply = <&v3_3d_reg>;
++ DRVDD-supply = <&v3_3d_reg>;
++ DVDD-supply = <&v1_8d_reg>;
+ };
+ };
+
+diff --git a/arch/arm/boot/dts/am335x-evmsk.dts b/arch/arm/boot/dts/am335x-evmsk.dts
+index b128998097ce..2c2d8b5b8cf5 100644
+--- a/arch/arm/boot/dts/am335x-evmsk.dts
++++ b/arch/arm/boot/dts/am335x-evmsk.dts
+@@ -73,6 +73,24 @@
+ enable-active-high;
+ };
+
++ /* TPS79518 */
++ v1_8d_reg: fixedregulator-v1_8d {
++ compatible = "regulator-fixed";
++ regulator-name = "v1_8d";
++ vin-supply = <&vbat>;
++ regulator-min-microvolt = <1800000>;
++ regulator-max-microvolt = <1800000>;
++ };
++
++ /* TPS78633 */
++ v3_3d_reg: fixedregulator-v3_3d {
++ compatible = "regulator-fixed";
++ regulator-name = "v3_3d";
++ vin-supply = <&vbat>;
++ regulator-min-microvolt = <3300000>;
++ regulator-max-microvolt = <3300000>;
++ };
++
+ leds {
+ pinctrl-names = "default";
+ pinctrl-0 = <&user_leds_s0>;
+@@ -501,10 +519,10 @@
+ status = "okay";
+
+ /* Regulators */
+- AVDD-supply = <&vaux2_reg>;
+- IOVDD-supply = <&vaux2_reg>;
+- DRVDD-supply = <&vaux2_reg>;
+- DVDD-supply = <&vbat>;
++ AVDD-supply = <&v3_3d_reg>;
++ IOVDD-supply = <&v3_3d_reg>;
++ DRVDD-supply = <&v3_3d_reg>;
++ DVDD-supply = <&v1_8d_reg>;
+ };
+ };
+
+diff --git a/arch/arm/boot/dts/exynos3250.dtsi b/arch/arm/boot/dts/exynos3250.dtsi
+index 608d17454179..5892a9f7622f 100644
+--- a/arch/arm/boot/dts/exynos3250.dtsi
++++ b/arch/arm/boot/dts/exynos3250.dtsi
+@@ -168,6 +168,9 @@
+ interrupt-controller;
+ #interrupt-cells = <3>;
+ interrupt-parent = <&gic>;
++ clock-names = "clkout8";
++ clocks = <&cmu CLK_FIN_PLL>;
++ #clock-cells = <1>;
+ };
+
+ mipi_phy: video-phy {
+diff --git a/arch/arm/boot/dts/exynos4412-odroid-common.dtsi b/arch/arm/boot/dts/exynos4412-odroid-common.dtsi
+index 3a9eb1e91c45..8a64c4e8c474 100644
+--- a/arch/arm/boot/dts/exynos4412-odroid-common.dtsi
++++ b/arch/arm/boot/dts/exynos4412-odroid-common.dtsi
+@@ -49,7 +49,7 @@
+ };
+
+ emmc_pwrseq: pwrseq {
+- pinctrl-0 = <&sd1_cd>;
++ pinctrl-0 = <&emmc_rstn>;
+ pinctrl-names = "default";
+ compatible = "mmc-pwrseq-emmc";
+ reset-gpios = <&gpk1 2 GPIO_ACTIVE_LOW>;
+@@ -165,12 +165,6 @@
+ cpu0-supply = <&buck2_reg>;
+ };
+
+-/* RSTN signal for eMMC */
+-&sd1_cd {
+- samsung,pin-pud = <EXYNOS_PIN_PULL_NONE>;
+- samsung,pin-drv = <EXYNOS4_PIN_DRV_LV1>;
+-};
+-
+ &pinctrl_1 {
+ gpio_power_key: power_key {
+ samsung,pins = "gpx1-3";
+@@ -188,6 +182,11 @@
+ samsung,pins = "gpx3-7";
+ samsung,pin-pud = <EXYNOS_PIN_PULL_DOWN>;
+ };
++
++ emmc_rstn: emmc-rstn {
++ samsung,pins = "gpk1-2";
++ samsung,pin-pud = <EXYNOS_PIN_PULL_NONE>;
++ };
+ };
+
+ &ehci {
+diff --git a/arch/arm/boot/dts/exynos5422-odroid-core.dtsi b/arch/arm/boot/dts/exynos5422-odroid-core.dtsi
+index bf09eab90f8a..6bf3661293ee 100644
+--- a/arch/arm/boot/dts/exynos5422-odroid-core.dtsi
++++ b/arch/arm/boot/dts/exynos5422-odroid-core.dtsi
+@@ -468,7 +468,7 @@
+ buck8_reg: BUCK8 {
+ regulator-name = "vdd_1.8v_ldo";
+ regulator-min-microvolt = <800000>;
+- regulator-max-microvolt = <1500000>;
++ regulator-max-microvolt = <2000000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+diff --git a/arch/arm/boot/dts/lpc32xx.dtsi b/arch/arm/boot/dts/lpc32xx.dtsi
+index b7303a4e4236..ed0d6fb20122 100644
+--- a/arch/arm/boot/dts/lpc32xx.dtsi
++++ b/arch/arm/boot/dts/lpc32xx.dtsi
+@@ -230,7 +230,7 @@
+ status = "disabled";
+ };
+
+- i2s1: i2s@2009C000 {
++ i2s1: i2s@2009c000 {
+ compatible = "nxp,lpc3220-i2s";
+ reg = <0x2009C000 0x1000>;
+ };
+@@ -273,7 +273,7 @@
+ status = "disabled";
+ };
+
+- i2c1: i2c@400A0000 {
++ i2c1: i2c@400a0000 {
+ compatible = "nxp,pnx-i2c";
+ reg = <0x400A0000 0x100>;
+ interrupt-parent = <&sic1>;
+@@ -284,7 +284,7 @@
+ clocks = <&clk LPC32XX_CLK_I2C1>;
+ };
+
+- i2c2: i2c@400A8000 {
++ i2c2: i2c@400a8000 {
+ compatible = "nxp,pnx-i2c";
+ reg = <0x400A8000 0x100>;
+ interrupt-parent = <&sic1>;
+@@ -295,7 +295,7 @@
+ clocks = <&clk LPC32XX_CLK_I2C2>;
+ };
+
+- mpwm: mpwm@400E8000 {
++ mpwm: mpwm@400e8000 {
+ compatible = "nxp,lpc3220-motor-pwm";
+ reg = <0x400E8000 0x78>;
+ status = "disabled";
+@@ -394,7 +394,7 @@
+ #gpio-cells = <3>; /* bank, pin, flags */
+ };
+
+- timer4: timer@4002C000 {
++ timer4: timer@4002c000 {
+ compatible = "nxp,lpc3220-timer";
+ reg = <0x4002C000 0x1000>;
+ interrupts = <3 IRQ_TYPE_LEVEL_LOW>;
+@@ -412,7 +412,7 @@
+ status = "disabled";
+ };
+
+- watchdog: watchdog@4003C000 {
++ watchdog: watchdog@4003c000 {
+ compatible = "nxp,pnx4008-wdt";
+ reg = <0x4003C000 0x1000>;
+ clocks = <&clk LPC32XX_CLK_WDOG>;
+@@ -451,7 +451,7 @@
+ status = "disabled";
+ };
+
+- timer1: timer@4004C000 {
++ timer1: timer@4004c000 {
+ compatible = "nxp,lpc3220-timer";
+ reg = <0x4004C000 0x1000>;
+ interrupts = <17 IRQ_TYPE_LEVEL_LOW>;
+@@ -475,7 +475,7 @@
+ status = "disabled";
+ };
+
+- pwm1: pwm@4005C000 {
++ pwm1: pwm@4005c000 {
+ compatible = "nxp,lpc3220-pwm";
+ reg = <0x4005C000 0x4>;
+ clocks = <&clk LPC32XX_CLK_PWM1>;
+@@ -484,7 +484,7 @@
+ status = "disabled";
+ };
+
+- pwm2: pwm@4005C004 {
++ pwm2: pwm@4005c004 {
+ compatible = "nxp,lpc3220-pwm";
+ reg = <0x4005C004 0x4>;
+ clocks = <&clk LPC32XX_CLK_PWM2>;
+diff --git a/arch/arm/boot/dts/meson8b.dtsi b/arch/arm/boot/dts/meson8b.dtsi
+index 22d775460767..dc125769fe85 100644
+--- a/arch/arm/boot/dts/meson8b.dtsi
++++ b/arch/arm/boot/dts/meson8b.dtsi
+@@ -270,9 +270,7 @@
+ groups = "eth_tx_clk",
+ "eth_tx_en",
+ "eth_txd1_0",
+- "eth_txd1_1",
+ "eth_txd0_0",
+- "eth_txd0_1",
+ "eth_rx_clk",
+ "eth_rx_dv",
+ "eth_rxd1",
+@@ -281,7 +279,9 @@
+ "eth_mdc",
+ "eth_ref_clk",
+ "eth_txd2",
+- "eth_txd3";
++ "eth_txd3",
++ "eth_rxd3",
++ "eth_rxd2";
+ function = "ethernet";
+ bias-disable;
+ };
+diff --git a/arch/arm/boot/dts/rk3288-tinker.dtsi b/arch/arm/boot/dts/rk3288-tinker.dtsi
+index aa107ee41b8b..ef653c3209bc 100644
+--- a/arch/arm/boot/dts/rk3288-tinker.dtsi
++++ b/arch/arm/boot/dts/rk3288-tinker.dtsi
+@@ -254,6 +254,7 @@
+ };
+
+ vccio_sd: LDO_REG5 {
++ regulator-boot-on;
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-name = "vccio_sd";
+@@ -430,7 +431,7 @@
+ bus-width = <4>;
+ cap-mmc-highspeed;
+ cap-sd-highspeed;
+- card-detect-delay = <200>;
++ broken-cd;
+ disable-wp; /* wp not hooked up */
+ pinctrl-names = "default";
+ pinctrl-0 = <&sdmmc_clk &sdmmc_cmd &sdmmc_cd &sdmmc_bus4>;
+diff --git a/arch/arm/boot/dts/rk3288.dtsi b/arch/arm/boot/dts/rk3288.dtsi
+index ca7d52daa8fb..09868dcee34b 100644
+--- a/arch/arm/boot/dts/rk3288.dtsi
++++ b/arch/arm/boot/dts/rk3288.dtsi
+@@ -70,7 +70,7 @@
+ compatible = "arm,cortex-a12";
+ reg = <0x501>;
+ resets = <&cru SRST_CORE1>;
+- operating-points = <&cpu_opp_table>;
++ operating-points-v2 = <&cpu_opp_table>;
+ #cooling-cells = <2>; /* min followed by max */
+ clock-latency = <40000>;
+ clocks = <&cru ARMCLK>;
+@@ -80,7 +80,7 @@
+ compatible = "arm,cortex-a12";
+ reg = <0x502>;
+ resets = <&cru SRST_CORE2>;
+- operating-points = <&cpu_opp_table>;
++ operating-points-v2 = <&cpu_opp_table>;
+ #cooling-cells = <2>; /* min followed by max */
+ clock-latency = <40000>;
+ clocks = <&cru ARMCLK>;
+@@ -90,7 +90,7 @@
+ compatible = "arm,cortex-a12";
+ reg = <0x503>;
+ resets = <&cru SRST_CORE3>;
+- operating-points = <&cpu_opp_table>;
++ operating-points-v2 = <&cpu_opp_table>;
+ #cooling-cells = <2>; /* min followed by max */
+ clock-latency = <40000>;
+ clocks = <&cru ARMCLK>;
+diff --git a/arch/arm/boot/dts/sama5d2-pinfunc.h b/arch/arm/boot/dts/sama5d2-pinfunc.h
+index 1c01a6f843d8..28a2e45752fe 100644
+--- a/arch/arm/boot/dts/sama5d2-pinfunc.h
++++ b/arch/arm/boot/dts/sama5d2-pinfunc.h
+@@ -518,7 +518,7 @@
+ #define PIN_PC9__GPIO PINMUX_PIN(PIN_PC9, 0, 0)
+ #define PIN_PC9__FIQ PINMUX_PIN(PIN_PC9, 1, 3)
+ #define PIN_PC9__GTSUCOMP PINMUX_PIN(PIN_PC9, 2, 1)
+-#define PIN_PC9__ISC_D0 PINMUX_PIN(PIN_PC9, 2, 1)
++#define PIN_PC9__ISC_D0 PINMUX_PIN(PIN_PC9, 3, 1)
+ #define PIN_PC9__TIOA4 PINMUX_PIN(PIN_PC9, 4, 2)
+ #define PIN_PC10 74
+ #define PIN_PC10__GPIO PINMUX_PIN(PIN_PC10, 0, 0)
+diff --git a/arch/arm/crypto/crct10dif-ce-core.S b/arch/arm/crypto/crct10dif-ce-core.S
+index ce45ba0c0687..16019b5961e7 100644
+--- a/arch/arm/crypto/crct10dif-ce-core.S
++++ b/arch/arm/crypto/crct10dif-ce-core.S
+@@ -124,10 +124,10 @@ ENTRY(crc_t10dif_pmull)
+ vext.8 q10, qzr, q0, #4
+
+ // receive the initial 64B data, xor the initial crc value
+- vld1.64 {q0-q1}, [arg2, :128]!
+- vld1.64 {q2-q3}, [arg2, :128]!
+- vld1.64 {q4-q5}, [arg2, :128]!
+- vld1.64 {q6-q7}, [arg2, :128]!
++ vld1.64 {q0-q1}, [arg2]!
++ vld1.64 {q2-q3}, [arg2]!
++ vld1.64 {q4-q5}, [arg2]!
++ vld1.64 {q6-q7}, [arg2]!
+ CPU_LE( vrev64.8 q0, q0 )
+ CPU_LE( vrev64.8 q1, q1 )
+ CPU_LE( vrev64.8 q2, q2 )
+@@ -167,7 +167,7 @@ CPU_LE( vrev64.8 q7, q7 )
+ _fold_64_B_loop:
+
+ .macro fold64, reg1, reg2
+- vld1.64 {q11-q12}, [arg2, :128]!
++ vld1.64 {q11-q12}, [arg2]!
+
+ vmull.p64 q8, \reg1\()h, d21
+ vmull.p64 \reg1, \reg1\()l, d20
+@@ -238,7 +238,7 @@ _16B_reduction_loop:
+ vmull.p64 q7, d15, d21
+ veor.8 q7, q7, q8
+
+- vld1.64 {q0}, [arg2, :128]!
++ vld1.64 {q0}, [arg2]!
+ CPU_LE( vrev64.8 q0, q0 )
+ vswp d0, d1
+ veor.8 q7, q7, q0
+@@ -335,7 +335,7 @@ _less_than_128:
+ vmov.i8 q0, #0
+ vmov s3, arg1_low32 // get the initial crc value
+
+- vld1.64 {q7}, [arg2, :128]!
++ vld1.64 {q7}, [arg2]!
+ CPU_LE( vrev64.8 q7, q7 )
+ vswp d14, d15
+ veor.8 q7, q7, q0
+diff --git a/arch/arm/crypto/crct10dif-ce-glue.c b/arch/arm/crypto/crct10dif-ce-glue.c
+index d428355cf38d..14c19c70a841 100644
+--- a/arch/arm/crypto/crct10dif-ce-glue.c
++++ b/arch/arm/crypto/crct10dif-ce-glue.c
+@@ -35,26 +35,15 @@ static int crct10dif_update(struct shash_desc *desc, const u8 *data,
+ unsigned int length)
+ {
+ u16 *crc = shash_desc_ctx(desc);
+- unsigned int l;
+
+- if (!may_use_simd()) {
+- *crc = crc_t10dif_generic(*crc, data, length);
++ if (length >= CRC_T10DIF_PMULL_CHUNK_SIZE && may_use_simd()) {
++ kernel_neon_begin();
++ *crc = crc_t10dif_pmull(*crc, data, length);
++ kernel_neon_end();
+ } else {
+- if (unlikely((u32)data % CRC_T10DIF_PMULL_CHUNK_SIZE)) {
+- l = min_t(u32, length, CRC_T10DIF_PMULL_CHUNK_SIZE -
+- ((u32)data % CRC_T10DIF_PMULL_CHUNK_SIZE));
+-
+- *crc = crc_t10dif_generic(*crc, data, l);
+-
+- length -= l;
+- data += l;
+- }
+- if (length > 0) {
+- kernel_neon_begin();
+- *crc = crc_t10dif_pmull(*crc, data, length);
+- kernel_neon_end();
+- }
++ *crc = crc_t10dif_generic(*crc, data, length);
+ }
++
+ return 0;
+ }
+
+diff --git a/arch/arm/include/asm/barrier.h b/arch/arm/include/asm/barrier.h
+index 69772e742a0a..83ae97c049d9 100644
+--- a/arch/arm/include/asm/barrier.h
++++ b/arch/arm/include/asm/barrier.h
+@@ -11,6 +11,8 @@
+ #define sev() __asm__ __volatile__ ("sev" : : : "memory")
+ #define wfe() __asm__ __volatile__ ("wfe" : : : "memory")
+ #define wfi() __asm__ __volatile__ ("wfi" : : : "memory")
++#else
++#define wfe() do { } while (0)
+ #endif
+
+ #if __LINUX_ARM_ARCH__ >= 7
+diff --git a/arch/arm/include/asm/processor.h b/arch/arm/include/asm/processor.h
+index 120f4c9bbfde..57fe73ea0f72 100644
+--- a/arch/arm/include/asm/processor.h
++++ b/arch/arm/include/asm/processor.h
+@@ -89,7 +89,11 @@ extern void release_thread(struct task_struct *);
+ unsigned long get_wchan(struct task_struct *p);
+
+ #if __LINUX_ARM_ARCH__ == 6 || defined(CONFIG_ARM_ERRATA_754327)
+-#define cpu_relax() smp_mb()
++#define cpu_relax() \
++ do { \
++ smp_mb(); \
++ __asm__ __volatile__("nop; nop; nop; nop; nop; nop; nop; nop; nop; nop;"); \
++ } while (0)
+ #else
+ #define cpu_relax() barrier()
+ #endif
+diff --git a/arch/arm/include/asm/v7m.h b/arch/arm/include/asm/v7m.h
+index 187ccf6496ad..2cb00d15831b 100644
+--- a/arch/arm/include/asm/v7m.h
++++ b/arch/arm/include/asm/v7m.h
+@@ -49,7 +49,7 @@
+ * (0 -> msp; 1 -> psp). Bits [1:0] are fixed to 0b01.
+ */
+ #define EXC_RET_STACK_MASK 0x00000004
+-#define EXC_RET_THREADMODE_PROCESSSTACK 0xfffffffd
++#define EXC_RET_THREADMODE_PROCESSSTACK (3 << 2)
+
+ /* Cache related definitions */
+
+diff --git a/arch/arm/kernel/entry-header.S b/arch/arm/kernel/entry-header.S
+index 773424843d6e..62db1c9746cb 100644
+--- a/arch/arm/kernel/entry-header.S
++++ b/arch/arm/kernel/entry-header.S
+@@ -127,7 +127,8 @@
+ */
+ .macro v7m_exception_slow_exit ret_r0
+ cpsid i
+- ldr lr, =EXC_RET_THREADMODE_PROCESSSTACK
++ ldr lr, =exc_ret
++ ldr lr, [lr]
+
+ @ read original r12, sp, lr, pc and xPSR
+ add r12, sp, #S_IP
+diff --git a/arch/arm/kernel/entry-v7m.S b/arch/arm/kernel/entry-v7m.S
+index abcf47848525..19d2dcd6530d 100644
+--- a/arch/arm/kernel/entry-v7m.S
++++ b/arch/arm/kernel/entry-v7m.S
+@@ -146,3 +146,7 @@ ENTRY(vector_table)
+ .rept CONFIG_CPU_V7M_NUM_IRQ
+ .long __irq_entry @ External Interrupts
+ .endr
++ .align 2
++ .globl exc_ret
++exc_ret:
++ .space 4
+diff --git a/arch/arm/kernel/machine_kexec.c b/arch/arm/kernel/machine_kexec.c
+index dd2eb5f76b9f..76300f3813e8 100644
+--- a/arch/arm/kernel/machine_kexec.c
++++ b/arch/arm/kernel/machine_kexec.c
+@@ -91,8 +91,11 @@ void machine_crash_nonpanic_core(void *unused)
+
+ set_cpu_online(smp_processor_id(), false);
+ atomic_dec(&waiting_for_crash_ipi);
+- while (1)
++
++ while (1) {
+ cpu_relax();
++ wfe();
++ }
+ }
+
+ void crash_smp_send_stop(void)
+diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
+index 1d6f5ea522f4..a3ce7c5365fa 100644
+--- a/arch/arm/kernel/smp.c
++++ b/arch/arm/kernel/smp.c
+@@ -604,8 +604,10 @@ static void ipi_cpu_stop(unsigned int cpu)
+ local_fiq_disable();
+ local_irq_disable();
+
+- while (1)
++ while (1) {
+ cpu_relax();
++ wfe();
++ }
+ }
+
+ static DEFINE_PER_CPU(struct completion *, cpu_completion);
+diff --git a/arch/arm/kernel/unwind.c b/arch/arm/kernel/unwind.c
+index 0bee233fef9a..314cfb232a63 100644
+--- a/arch/arm/kernel/unwind.c
++++ b/arch/arm/kernel/unwind.c
+@@ -93,7 +93,7 @@ extern const struct unwind_idx __start_unwind_idx[];
+ static const struct unwind_idx *__origin_unwind_idx;
+ extern const struct unwind_idx __stop_unwind_idx[];
+
+-static DEFINE_SPINLOCK(unwind_lock);
++static DEFINE_RAW_SPINLOCK(unwind_lock);
+ static LIST_HEAD(unwind_tables);
+
+ /* Convert a prel31 symbol to an absolute address */
+@@ -201,7 +201,7 @@ static const struct unwind_idx *unwind_find_idx(unsigned long addr)
+ /* module unwind tables */
+ struct unwind_table *table;
+
+- spin_lock_irqsave(&unwind_lock, flags);
++ raw_spin_lock_irqsave(&unwind_lock, flags);
+ list_for_each_entry(table, &unwind_tables, list) {
+ if (addr >= table->begin_addr &&
+ addr < table->end_addr) {
+@@ -213,7 +213,7 @@ static const struct unwind_idx *unwind_find_idx(unsigned long addr)
+ break;
+ }
+ }
+- spin_unlock_irqrestore(&unwind_lock, flags);
++ raw_spin_unlock_irqrestore(&unwind_lock, flags);
+ }
+
+ pr_debug("%s: idx = %p\n", __func__, idx);
+@@ -529,9 +529,9 @@ struct unwind_table *unwind_table_add(unsigned long start, unsigned long size,
+ tab->begin_addr = text_addr;
+ tab->end_addr = text_addr + text_size;
+
+- spin_lock_irqsave(&unwind_lock, flags);
++ raw_spin_lock_irqsave(&unwind_lock, flags);
+ list_add_tail(&tab->list, &unwind_tables);
+- spin_unlock_irqrestore(&unwind_lock, flags);
++ raw_spin_unlock_irqrestore(&unwind_lock, flags);
+
+ return tab;
+ }
+@@ -543,9 +543,9 @@ void unwind_table_del(struct unwind_table *tab)
+ if (!tab)
+ return;
+
+- spin_lock_irqsave(&unwind_lock, flags);
++ raw_spin_lock_irqsave(&unwind_lock, flags);
+ list_del(&tab->list);
+- spin_unlock_irqrestore(&unwind_lock, flags);
++ raw_spin_unlock_irqrestore(&unwind_lock, flags);
+
+ kfree(tab);
+ }
+diff --git a/arch/arm/lib/Makefile b/arch/arm/lib/Makefile
+index ad25fd1872c7..0bff0176db2c 100644
+--- a/arch/arm/lib/Makefile
++++ b/arch/arm/lib/Makefile
+@@ -39,7 +39,7 @@ $(obj)/csumpartialcopy.o: $(obj)/csumpartialcopygeneric.S
+ $(obj)/csumpartialcopyuser.o: $(obj)/csumpartialcopygeneric.S
+
+ ifeq ($(CONFIG_KERNEL_MODE_NEON),y)
+- NEON_FLAGS := -mfloat-abi=softfp -mfpu=neon
++ NEON_FLAGS := -march=armv7-a -mfloat-abi=softfp -mfpu=neon
+ CFLAGS_xor-neon.o += $(NEON_FLAGS)
+ obj-$(CONFIG_XOR_BLOCKS) += xor-neon.o
+ endif
+diff --git a/arch/arm/lib/xor-neon.c b/arch/arm/lib/xor-neon.c
+index 2c40aeab3eaa..c691b901092f 100644
+--- a/arch/arm/lib/xor-neon.c
++++ b/arch/arm/lib/xor-neon.c
+@@ -14,7 +14,7 @@
+ MODULE_LICENSE("GPL");
+
+ #ifndef __ARM_NEON__
+-#error You should compile this file with '-mfloat-abi=softfp -mfpu=neon'
++#error You should compile this file with '-march=armv7-a -mfloat-abi=softfp -mfpu=neon'
+ #endif
+
+ /*
+diff --git a/arch/arm/mach-imx/cpuidle-imx6q.c b/arch/arm/mach-imx/cpuidle-imx6q.c
+index bfeb25aaf9a2..326e870d7123 100644
+--- a/arch/arm/mach-imx/cpuidle-imx6q.c
++++ b/arch/arm/mach-imx/cpuidle-imx6q.c
+@@ -16,30 +16,23 @@
+ #include "cpuidle.h"
+ #include "hardware.h"
+
+-static atomic_t master = ATOMIC_INIT(0);
+-static DEFINE_SPINLOCK(master_lock);
++static int num_idle_cpus = 0;
++static DEFINE_SPINLOCK(cpuidle_lock);
+
+ static int imx6q_enter_wait(struct cpuidle_device *dev,
+ struct cpuidle_driver *drv, int index)
+ {
+- if (atomic_inc_return(&master) == num_online_cpus()) {
+- /*
+- * With this lock, we prevent other cpu to exit and enter
+- * this function again and become the master.
+- */
+- if (!spin_trylock(&master_lock))
+- goto idle;
++ spin_lock(&cpuidle_lock);
++ if (++num_idle_cpus == num_online_cpus())
+ imx6_set_lpm(WAIT_UNCLOCKED);
+- cpu_do_idle();
+- imx6_set_lpm(WAIT_CLOCKED);
+- spin_unlock(&master_lock);
+- goto done;
+- }
++ spin_unlock(&cpuidle_lock);
+
+-idle:
+ cpu_do_idle();
+-done:
+- atomic_dec(&master);
++
++ spin_lock(&cpuidle_lock);
++ if (num_idle_cpus-- == num_online_cpus())
++ imx6_set_lpm(WAIT_CLOCKED);
++ spin_unlock(&cpuidle_lock);
+
+ return index;
+ }
+diff --git a/arch/arm/mach-omap1/board-ams-delta.c b/arch/arm/mach-omap1/board-ams-delta.c
+index c4c0a8ea11e4..ee410ae7369e 100644
+--- a/arch/arm/mach-omap1/board-ams-delta.c
++++ b/arch/arm/mach-omap1/board-ams-delta.c
+@@ -182,6 +182,7 @@ static struct resource latch1_resources[] = {
+
+ static struct bgpio_pdata latch1_pdata = {
+ .label = LATCH1_LABEL,
++ .base = -1,
+ .ngpio = LATCH1_NGPIO,
+ };
+
+@@ -219,6 +220,7 @@ static struct resource latch2_resources[] = {
+
+ static struct bgpio_pdata latch2_pdata = {
+ .label = LATCH2_LABEL,
++ .base = -1,
+ .ngpio = LATCH2_NGPIO,
+ };
+
+diff --git a/arch/arm/mach-omap2/prm_common.c b/arch/arm/mach-omap2/prm_common.c
+index 058a37e6d11c..fd6e0671f957 100644
+--- a/arch/arm/mach-omap2/prm_common.c
++++ b/arch/arm/mach-omap2/prm_common.c
+@@ -523,8 +523,10 @@ void omap_prm_reset_system(void)
+
+ prm_ll_data->reset_system();
+
+- while (1)
++ while (1) {
+ cpu_relax();
++ wfe();
++ }
+ }
+
+ /**
+diff --git a/arch/arm/mach-s3c24xx/mach-osiris-dvs.c b/arch/arm/mach-s3c24xx/mach-osiris-dvs.c
+index 058ce73137e8..5d819b6ea428 100644
+--- a/arch/arm/mach-s3c24xx/mach-osiris-dvs.c
++++ b/arch/arm/mach-s3c24xx/mach-osiris-dvs.c
+@@ -65,16 +65,16 @@ static int osiris_dvs_notify(struct notifier_block *nb,
+
+ switch (val) {
+ case CPUFREQ_PRECHANGE:
+- if (old_dvs & !new_dvs ||
+- cur_dvs & !new_dvs) {
++ if ((old_dvs && !new_dvs) ||
++ (cur_dvs && !new_dvs)) {
+ pr_debug("%s: exiting dvs\n", __func__);
+ cur_dvs = false;
+ gpio_set_value(OSIRIS_GPIO_DVS, 1);
+ }
+ break;
+ case CPUFREQ_POSTCHANGE:
+- if (!old_dvs & new_dvs ||
+- !cur_dvs & new_dvs) {
++ if ((!old_dvs && new_dvs) ||
++ (!cur_dvs && new_dvs)) {
+ pr_debug("entering dvs\n");
+ cur_dvs = true;
+ gpio_set_value(OSIRIS_GPIO_DVS, 0);
+diff --git a/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c b/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c
+index 8e50daa99151..dc526ef2e9b3 100644
+--- a/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c
++++ b/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c
+@@ -40,6 +40,7 @@
+ struct regulator_quirk {
+ struct list_head list;
+ const struct of_device_id *id;
++ struct device_node *np;
+ struct of_phandle_args irq_args;
+ struct i2c_msg i2c_msg;
+ bool shared; /* IRQ line is shared */
+@@ -101,6 +102,9 @@ static int regulator_quirk_notify(struct notifier_block *nb,
+ if (!pos->shared)
+ continue;
+
++ if (pos->np->parent != client->dev.parent->of_node)
++ continue;
++
+ dev_info(&client->dev, "clearing %s@0x%02x interrupts\n",
+ pos->id->compatible, pos->i2c_msg.addr);
+
+@@ -165,6 +169,7 @@ static int __init rcar_gen2_regulator_quirk(void)
+ memcpy(&quirk->i2c_msg, id->data, sizeof(quirk->i2c_msg));
+
+ quirk->id = id;
++ quirk->np = np;
+ quirk->i2c_msg.addr = addr;
+
+ ret = of_irq_parse_one(np, 0, argsa);
+diff --git a/arch/arm/mm/copypage-v4mc.c b/arch/arm/mm/copypage-v4mc.c
+index b03202cddddb..f74cdce6d4da 100644
+--- a/arch/arm/mm/copypage-v4mc.c
++++ b/arch/arm/mm/copypage-v4mc.c
+@@ -45,6 +45,7 @@ static void mc_copy_user_page(void *from, void *to)
+ int tmp;
+
+ asm volatile ("\
++ .syntax unified\n\
+ ldmia %0!, {r2, r3, ip, lr} @ 4\n\
+ 1: mcr p15, 0, %1, c7, c6, 1 @ 1 invalidate D line\n\
+ stmia %1!, {r2, r3, ip, lr} @ 4\n\
+@@ -56,7 +57,7 @@ static void mc_copy_user_page(void *from, void *to)
+ ldmia %0!, {r2, r3, ip, lr} @ 4\n\
+ subs %2, %2, #1 @ 1\n\
+ stmia %1!, {r2, r3, ip, lr} @ 4\n\
+- ldmneia %0!, {r2, r3, ip, lr} @ 4\n\
++ ldmiane %0!, {r2, r3, ip, lr} @ 4\n\
+ bne 1b @ "
+ : "+&r" (from), "+&r" (to), "=&r" (tmp)
+ : "2" (PAGE_SIZE / 64)
+diff --git a/arch/arm/mm/copypage-v4wb.c b/arch/arm/mm/copypage-v4wb.c
+index cd3e165afeed..6d336740aae4 100644
+--- a/arch/arm/mm/copypage-v4wb.c
++++ b/arch/arm/mm/copypage-v4wb.c
+@@ -27,6 +27,7 @@ static void v4wb_copy_user_page(void *kto, const void *kfrom)
+ int tmp;
+
+ asm volatile ("\
++ .syntax unified\n\
+ ldmia %1!, {r3, r4, ip, lr} @ 4\n\
+ 1: mcr p15, 0, %0, c7, c6, 1 @ 1 invalidate D line\n\
+ stmia %0!, {r3, r4, ip, lr} @ 4\n\
+@@ -38,7 +39,7 @@ static void v4wb_copy_user_page(void *kto, const void *kfrom)
+ ldmia %1!, {r3, r4, ip, lr} @ 4\n\
+ subs %2, %2, #1 @ 1\n\
+ stmia %0!, {r3, r4, ip, lr} @ 4\n\
+- ldmneia %1!, {r3, r4, ip, lr} @ 4\n\
++ ldmiane %1!, {r3, r4, ip, lr} @ 4\n\
+ bne 1b @ 1\n\
+ mcr p15, 0, %1, c7, c10, 4 @ 1 drain WB"
+ : "+&r" (kto), "+&r" (kfrom), "=&r" (tmp)
+diff --git a/arch/arm/mm/copypage-v4wt.c b/arch/arm/mm/copypage-v4wt.c
+index 8614572e1296..3851bb396442 100644
+--- a/arch/arm/mm/copypage-v4wt.c
++++ b/arch/arm/mm/copypage-v4wt.c
+@@ -25,6 +25,7 @@ static void v4wt_copy_user_page(void *kto, const void *kfrom)
+ int tmp;
+
+ asm volatile ("\
++ .syntax unified\n\
+ ldmia %1!, {r3, r4, ip, lr} @ 4\n\
+ 1: stmia %0!, {r3, r4, ip, lr} @ 4\n\
+ ldmia %1!, {r3, r4, ip, lr} @ 4+1\n\
+@@ -34,7 +35,7 @@ static void v4wt_copy_user_page(void *kto, const void *kfrom)
+ ldmia %1!, {r3, r4, ip, lr} @ 4\n\
+ subs %2, %2, #1 @ 1\n\
+ stmia %0!, {r3, r4, ip, lr} @ 4\n\
+- ldmneia %1!, {r3, r4, ip, lr} @ 4\n\
++ ldmiane %1!, {r3, r4, ip, lr} @ 4\n\
+ bne 1b @ 1\n\
+ mcr p15, 0, %2, c7, c7, 0 @ flush ID cache"
+ : "+&r" (kto), "+&r" (kfrom), "=&r" (tmp)
+diff --git a/arch/arm/mm/proc-v7m.S b/arch/arm/mm/proc-v7m.S
+index 47a5acc64433..92e84181933a 100644
+--- a/arch/arm/mm/proc-v7m.S
++++ b/arch/arm/mm/proc-v7m.S
+@@ -139,6 +139,9 @@ __v7m_setup_cont:
+ cpsie i
+ svc #0
+ 1: cpsid i
++ ldr r0, =exc_ret
++ orr lr, lr, #EXC_RET_THREADMODE_PROCESSSTACK
++ str lr, [r0]
+ ldmia sp, {r0-r3, r12}
+ str r5, [r12, #11 * 4] @ restore the original SVC vector entry
+ mov lr, r6 @ restore LR
+diff --git a/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts b/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts
+index 610235028cc7..c14205cd6bf5 100644
+--- a/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts
++++ b/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts
+@@ -118,6 +118,7 @@
+ reset-gpios = <&gpio0 5 GPIO_ACTIVE_LOW>;
+ clocks = <&pmic>;
+ clock-names = "ext_clock";
++ post-power-on-delay-ms = <10>;
+ power-off-delay-us = <10>;
+ };
+
+@@ -300,7 +301,6 @@
+
+ dwmmc_0: dwmmc0@f723d000 {
+ cap-mmc-highspeed;
+- mmc-hs200-1_8v;
+ non-removable;
+ bus-width = <0x8>;
+ vmmc-supply = <&ldo19>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328-rock64.dts b/arch/arm64/boot/dts/rockchip/rk3328-rock64.dts
+index 040b36ef0dd2..520ed8e474be 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328-rock64.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3328-rock64.dts
+@@ -46,8 +46,7 @@
+
+ vcc_host1_5v: vcc_otg_5v: vcc-host1-5v-regulator {
+ compatible = "regulator-fixed";
+- enable-active-high;
+- gpio = <&gpio0 RK_PA2 GPIO_ACTIVE_HIGH>;
++ gpio = <&gpio0 RK_PA2 GPIO_ACTIVE_LOW>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&usb20_host_drv>;
+ regulator-name = "vcc_host1_5v";
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328.dtsi b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+index ecd7f19c3542..97aa65455b4a 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+@@ -1431,11 +1431,11 @@
+
+ sdmmc0 {
+ sdmmc0_clk: sdmmc0-clk {
+- rockchip,pins = <1 RK_PA6 1 &pcfg_pull_none_4ma>;
++ rockchip,pins = <1 RK_PA6 1 &pcfg_pull_none_8ma>;
+ };
+
+ sdmmc0_cmd: sdmmc0-cmd {
+- rockchip,pins = <1 RK_PA4 1 &pcfg_pull_up_4ma>;
++ rockchip,pins = <1 RK_PA4 1 &pcfg_pull_up_8ma>;
+ };
+
+ sdmmc0_dectn: sdmmc0-dectn {
+@@ -1447,14 +1447,14 @@
+ };
+
+ sdmmc0_bus1: sdmmc0-bus1 {
+- rockchip,pins = <1 RK_PA0 1 &pcfg_pull_up_4ma>;
++ rockchip,pins = <1 RK_PA0 1 &pcfg_pull_up_8ma>;
+ };
+
+ sdmmc0_bus4: sdmmc0-bus4 {
+- rockchip,pins = <1 RK_PA0 1 &pcfg_pull_up_4ma>,
+- <1 RK_PA1 1 &pcfg_pull_up_4ma>,
+- <1 RK_PA2 1 &pcfg_pull_up_4ma>,
+- <1 RK_PA3 1 &pcfg_pull_up_4ma>;
++ rockchip,pins = <1 RK_PA0 1 &pcfg_pull_up_8ma>,
++ <1 RK_PA1 1 &pcfg_pull_up_8ma>,
++ <1 RK_PA2 1 &pcfg_pull_up_8ma>,
++ <1 RK_PA3 1 &pcfg_pull_up_8ma>;
+ };
+
+ sdmmc0_gpio: sdmmc0-gpio {
+@@ -1628,50 +1628,50 @@
+ rgmiim1_pins: rgmiim1-pins {
+ rockchip,pins =
+ /* mac_txclk */
+- <1 RK_PB4 2 &pcfg_pull_none_12ma>,
++ <1 RK_PB4 2 &pcfg_pull_none_8ma>,
+ /* mac_rxclk */
+- <1 RK_PB5 2 &pcfg_pull_none_2ma>,
++ <1 RK_PB5 2 &pcfg_pull_none_4ma>,
+ /* mac_mdio */
+- <1 RK_PC3 2 &pcfg_pull_none_2ma>,
++ <1 RK_PC3 2 &pcfg_pull_none_4ma>,
+ /* mac_txen */
+- <1 RK_PD1 2 &pcfg_pull_none_12ma>,
++ <1 RK_PD1 2 &pcfg_pull_none_8ma>,
+ /* mac_clk */
+- <1 RK_PC5 2 &pcfg_pull_none_2ma>,
++ <1 RK_PC5 2 &pcfg_pull_none_4ma>,
+ /* mac_rxdv */
+- <1 RK_PC6 2 &pcfg_pull_none_2ma>,
++ <1 RK_PC6 2 &pcfg_pull_none_4ma>,
+ /* mac_mdc */
+- <1 RK_PC7 2 &pcfg_pull_none_2ma>,
++ <1 RK_PC7 2 &pcfg_pull_none_4ma>,
+ /* mac_rxd1 */
+- <1 RK_PB2 2 &pcfg_pull_none_2ma>,
++ <1 RK_PB2 2 &pcfg_pull_none_4ma>,
+ /* mac_rxd0 */
+- <1 RK_PB3 2 &pcfg_pull_none_2ma>,
++ <1 RK_PB3 2 &pcfg_pull_none_4ma>,
+ /* mac_txd1 */
+- <1 RK_PB0 2 &pcfg_pull_none_12ma>,
++ <1 RK_PB0 2 &pcfg_pull_none_8ma>,
+ /* mac_txd0 */
+- <1 RK_PB1 2 &pcfg_pull_none_12ma>,
++ <1 RK_PB1 2 &pcfg_pull_none_8ma>,
+ /* mac_rxd3 */
+- <1 RK_PB6 2 &pcfg_pull_none_2ma>,
++ <1 RK_PB6 2 &pcfg_pull_none_4ma>,
+ /* mac_rxd2 */
+- <1 RK_PB7 2 &pcfg_pull_none_2ma>,
++ <1 RK_PB7 2 &pcfg_pull_none_4ma>,
+ /* mac_txd3 */
+- <1 RK_PC0 2 &pcfg_pull_none_12ma>,
++ <1 RK_PC0 2 &pcfg_pull_none_8ma>,
+ /* mac_txd2 */
+- <1 RK_PC1 2 &pcfg_pull_none_12ma>,
++ <1 RK_PC1 2 &pcfg_pull_none_8ma>,
+
+ /* mac_txclk */
+- <0 RK_PB0 1 &pcfg_pull_none>,
++ <0 RK_PB0 1 &pcfg_pull_none_8ma>,
+ /* mac_txen */
+- <0 RK_PB4 1 &pcfg_pull_none>,
++ <0 RK_PB4 1 &pcfg_pull_none_8ma>,
+ /* mac_clk */
+- <0 RK_PD0 1 &pcfg_pull_none>,
++ <0 RK_PD0 1 &pcfg_pull_none_4ma>,
+ /* mac_txd1 */
+- <0 RK_PC0 1 &pcfg_pull_none>,
++ <0 RK_PC0 1 &pcfg_pull_none_8ma>,
+ /* mac_txd0 */
+- <0 RK_PC1 1 &pcfg_pull_none>,
++ <0 RK_PC1 1 &pcfg_pull_none_8ma>,
+ /* mac_txd3 */
+- <0 RK_PC7 1 &pcfg_pull_none>,
++ <0 RK_PC7 1 &pcfg_pull_none_8ma>,
+ /* mac_txd2 */
+- <0 RK_PC6 1 &pcfg_pull_none>;
++ <0 RK_PC6 1 &pcfg_pull_none_8ma>;
+ };
+
+ rmiim1_pins: rmiim1-pins {
+diff --git a/arch/arm64/boot/dts/xilinx/zynqmp-zcu100-revC.dts b/arch/arm64/boot/dts/xilinx/zynqmp-zcu100-revC.dts
+index 13a0a028df98..e5699d0d91e4 100644
+--- a/arch/arm64/boot/dts/xilinx/zynqmp-zcu100-revC.dts
++++ b/arch/arm64/boot/dts/xilinx/zynqmp-zcu100-revC.dts
+@@ -101,6 +101,7 @@
+ sdio_pwrseq: sdio-pwrseq {
+ compatible = "mmc-pwrseq-simple";
+ reset-gpios = <&gpio 7 GPIO_ACTIVE_LOW>; /* WIFI_EN */
++ post-power-on-delay-ms = <10>;
+ };
+ };
+
+diff --git a/arch/arm64/crypto/aes-ce-ccm-core.S b/arch/arm64/crypto/aes-ce-ccm-core.S
+index e3a375c4cb83..1b151442dac1 100644
+--- a/arch/arm64/crypto/aes-ce-ccm-core.S
++++ b/arch/arm64/crypto/aes-ce-ccm-core.S
+@@ -74,12 +74,13 @@ ENTRY(ce_aes_ccm_auth_data)
+ beq 10f
+ ext v0.16b, v0.16b, v0.16b, #1 /* rotate out the mac bytes */
+ b 7b
+-8: mov w7, w8
++8: cbz w8, 91f
++ mov w7, w8
+ add w8, w8, #16
+ 9: ext v1.16b, v1.16b, v1.16b, #1
+ adds w7, w7, #1
+ bne 9b
+- eor v0.16b, v0.16b, v1.16b
++91: eor v0.16b, v0.16b, v1.16b
+ st1 {v0.16b}, [x0]
+ 10: str w8, [x3]
+ ret
+diff --git a/arch/arm64/crypto/aes-ce-ccm-glue.c b/arch/arm64/crypto/aes-ce-ccm-glue.c
+index 68b11aa690e4..986191e8c058 100644
+--- a/arch/arm64/crypto/aes-ce-ccm-glue.c
++++ b/arch/arm64/crypto/aes-ce-ccm-glue.c
+@@ -125,7 +125,7 @@ static void ccm_update_mac(struct crypto_aes_ctx *key, u8 mac[], u8 const in[],
+ abytes -= added;
+ }
+
+- while (abytes > AES_BLOCK_SIZE) {
++ while (abytes >= AES_BLOCK_SIZE) {
+ __aes_arm64_encrypt(key->key_enc, mac, mac,
+ num_rounds(key));
+ crypto_xor(mac, in, AES_BLOCK_SIZE);
+@@ -139,8 +139,6 @@ static void ccm_update_mac(struct crypto_aes_ctx *key, u8 mac[], u8 const in[],
+ num_rounds(key));
+ crypto_xor(mac, in, abytes);
+ *macp = abytes;
+- } else {
+- *macp = 0;
+ }
+ }
+ }
+diff --git a/arch/arm64/crypto/aes-neonbs-core.S b/arch/arm64/crypto/aes-neonbs-core.S
+index e613a87f8b53..8432c8d0dea6 100644
+--- a/arch/arm64/crypto/aes-neonbs-core.S
++++ b/arch/arm64/crypto/aes-neonbs-core.S
+@@ -971,18 +971,22 @@ CPU_LE( rev x8, x8 )
+
+ 8: next_ctr v0
+ st1 {v0.16b}, [x24]
+- cbz x23, 0f
++ cbz x23, .Lctr_done
+
+ cond_yield_neon 98b
+ b 99b
+
+-0: frame_pop
++.Lctr_done:
++ frame_pop
+ ret
+
+ /*
+ * If we are handling the tail of the input (x6 != NULL), return the
+ * final keystream block back to the caller.
+ */
++0: cbz x25, 8b
++ st1 {v0.16b}, [x25]
++ b 8b
+ 1: cbz x25, 8b
+ st1 {v1.16b}, [x25]
+ b 8b
+diff --git a/arch/arm64/crypto/crct10dif-ce-glue.c b/arch/arm64/crypto/crct10dif-ce-glue.c
+index b461d62023f2..567c24f3d224 100644
+--- a/arch/arm64/crypto/crct10dif-ce-glue.c
++++ b/arch/arm64/crypto/crct10dif-ce-glue.c
+@@ -39,26 +39,13 @@ static int crct10dif_update(struct shash_desc *desc, const u8 *data,
+ unsigned int length)
+ {
+ u16 *crc = shash_desc_ctx(desc);
+- unsigned int l;
+
+- if (unlikely((u64)data % CRC_T10DIF_PMULL_CHUNK_SIZE)) {
+- l = min_t(u32, length, CRC_T10DIF_PMULL_CHUNK_SIZE -
+- ((u64)data % CRC_T10DIF_PMULL_CHUNK_SIZE));
+-
+- *crc = crc_t10dif_generic(*crc, data, l);
+-
+- length -= l;
+- data += l;
+- }
+-
+- if (length > 0) {
+- if (may_use_simd()) {
+- kernel_neon_begin();
+- *crc = crc_t10dif_pmull(*crc, data, length);
+- kernel_neon_end();
+- } else {
+- *crc = crc_t10dif_generic(*crc, data, length);
+- }
++ if (length >= CRC_T10DIF_PMULL_CHUNK_SIZE && may_use_simd()) {
++ kernel_neon_begin();
++ *crc = crc_t10dif_pmull(*crc, data, length);
++ kernel_neon_end();
++ } else {
++ *crc = crc_t10dif_generic(*crc, data, length);
+ }
+
+ return 0;
+diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
+index cccb83ad7fa8..e1d95f08f8e1 100644
+--- a/arch/arm64/include/asm/futex.h
++++ b/arch/arm64/include/asm/futex.h
+@@ -30,8 +30,8 @@ do { \
+ " prfm pstl1strm, %2\n" \
+ "1: ldxr %w1, %2\n" \
+ insn "\n" \
+-"2: stlxr %w3, %w0, %2\n" \
+-" cbnz %w3, 1b\n" \
++"2: stlxr %w0, %w3, %2\n" \
++" cbnz %w0, 1b\n" \
+ " dmb ish\n" \
+ "3:\n" \
+ " .pushsection .fixup,\"ax\"\n" \
+@@ -50,30 +50,30 @@ do { \
+ static inline int
+ arch_futex_atomic_op_inuser(int op, int oparg, int *oval, u32 __user *_uaddr)
+ {
+- int oldval = 0, ret, tmp;
++ int oldval, ret, tmp;
+ u32 __user *uaddr = __uaccess_mask_ptr(_uaddr);
+
+ pagefault_disable();
+
+ switch (op) {
+ case FUTEX_OP_SET:
+- __futex_atomic_op("mov %w0, %w4",
++ __futex_atomic_op("mov %w3, %w4",
+ ret, oldval, uaddr, tmp, oparg);
+ break;
+ case FUTEX_OP_ADD:
+- __futex_atomic_op("add %w0, %w1, %w4",
++ __futex_atomic_op("add %w3, %w1, %w4",
+ ret, oldval, uaddr, tmp, oparg);
+ break;
+ case FUTEX_OP_OR:
+- __futex_atomic_op("orr %w0, %w1, %w4",
++ __futex_atomic_op("orr %w3, %w1, %w4",
+ ret, oldval, uaddr, tmp, oparg);
+ break;
+ case FUTEX_OP_ANDN:
+- __futex_atomic_op("and %w0, %w1, %w4",
++ __futex_atomic_op("and %w3, %w1, %w4",
+ ret, oldval, uaddr, tmp, ~oparg);
+ break;
+ case FUTEX_OP_XOR:
+- __futex_atomic_op("eor %w0, %w1, %w4",
++ __futex_atomic_op("eor %w3, %w1, %w4",
+ ret, oldval, uaddr, tmp, oparg);
+ break;
+ default:
+diff --git a/arch/arm64/include/asm/hardirq.h b/arch/arm64/include/asm/hardirq.h
+index 1473fc2f7ab7..89691c86640a 100644
+--- a/arch/arm64/include/asm/hardirq.h
++++ b/arch/arm64/include/asm/hardirq.h
+@@ -17,8 +17,12 @@
+ #define __ASM_HARDIRQ_H
+
+ #include <linux/cache.h>
++#include <linux/percpu.h>
+ #include <linux/threads.h>
++#include <asm/barrier.h>
+ #include <asm/irq.h>
++#include <asm/kvm_arm.h>
++#include <asm/sysreg.h>
+
+ #define NR_IPI 7
+
+@@ -37,6 +41,33 @@ u64 smp_irq_stat_cpu(unsigned int cpu);
+
+ #define __ARCH_IRQ_EXIT_IRQS_DISABLED 1
+
++struct nmi_ctx {
++ u64 hcr;
++};
++
++DECLARE_PER_CPU(struct nmi_ctx, nmi_contexts);
++
++#define arch_nmi_enter() \
++ do { \
++ if (is_kernel_in_hyp_mode()) { \
++ struct nmi_ctx *nmi_ctx = this_cpu_ptr(&nmi_contexts); \
++ nmi_ctx->hcr = read_sysreg(hcr_el2); \
++ if (!(nmi_ctx->hcr & HCR_TGE)) { \
++ write_sysreg(nmi_ctx->hcr | HCR_TGE, hcr_el2); \
++ isb(); \
++ } \
++ } \
++ } while (0)
++
++#define arch_nmi_exit() \
++ do { \
++ if (is_kernel_in_hyp_mode()) { \
++ struct nmi_ctx *nmi_ctx = this_cpu_ptr(&nmi_contexts); \
++ if (!(nmi_ctx->hcr & HCR_TGE)) \
++ write_sysreg(nmi_ctx->hcr, hcr_el2); \
++ } \
++ } while (0)
++
+ static inline void ack_bad_irq(unsigned int irq)
+ {
+ extern unsigned long irq_err_count;
+diff --git a/arch/arm64/include/asm/module.h b/arch/arm64/include/asm/module.h
+index 905e1bb0e7bd..cd9f4e9d04d3 100644
+--- a/arch/arm64/include/asm/module.h
++++ b/arch/arm64/include/asm/module.h
+@@ -73,4 +73,9 @@ static inline bool is_forbidden_offset_for_adrp(void *place)
+ struct plt_entry get_plt_entry(u64 dst, void *pc);
+ bool plt_entries_equal(const struct plt_entry *a, const struct plt_entry *b);
+
++static inline bool plt_entry_is_initialized(const struct plt_entry *e)
++{
++ return e->adrp || e->add || e->br;
++}
++
+ #endif /* __ASM_MODULE_H */
+diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c
+index 8e4431a8821f..07b298120182 100644
+--- a/arch/arm64/kernel/ftrace.c
++++ b/arch/arm64/kernel/ftrace.c
+@@ -107,8 +107,7 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
+ trampoline = get_plt_entry(addr, mod->arch.ftrace_trampoline);
+ if (!plt_entries_equal(mod->arch.ftrace_trampoline,
+ &trampoline)) {
+- if (!plt_entries_equal(mod->arch.ftrace_trampoline,
+- &(struct plt_entry){})) {
++ if (plt_entry_is_initialized(mod->arch.ftrace_trampoline)) {
+ pr_err("ftrace: far branches to multiple entry points unsupported inside a single module\n");
+ return -EINVAL;
+ }
+diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c
+index 780a12f59a8f..92fa81798fb9 100644
+--- a/arch/arm64/kernel/irq.c
++++ b/arch/arm64/kernel/irq.c
+@@ -33,6 +33,9 @@
+
+ unsigned long irq_err_count;
+
++/* Only access this in an NMI enter/exit */
++DEFINE_PER_CPU(struct nmi_ctx, nmi_contexts);
++
+ DEFINE_PER_CPU(unsigned long *, irq_stack_ptr);
+
+ int arch_show_interrupts(struct seq_file *p, int prec)
+diff --git a/arch/arm64/kernel/kgdb.c b/arch/arm64/kernel/kgdb.c
+index ce46c4cdf368..691854b77c7f 100644
+--- a/arch/arm64/kernel/kgdb.c
++++ b/arch/arm64/kernel/kgdb.c
+@@ -244,27 +244,33 @@ int kgdb_arch_handle_exception(int exception_vector, int signo,
+
+ static int kgdb_brk_fn(struct pt_regs *regs, unsigned int esr)
+ {
++ if (user_mode(regs))
++ return DBG_HOOK_ERROR;
++
+ kgdb_handle_exception(1, SIGTRAP, 0, regs);
+- return 0;
++ return DBG_HOOK_HANDLED;
+ }
+ NOKPROBE_SYMBOL(kgdb_brk_fn)
+
+ static int kgdb_compiled_brk_fn(struct pt_regs *regs, unsigned int esr)
+ {
++ if (user_mode(regs))
++ return DBG_HOOK_ERROR;
++
+ compiled_break = 1;
+ kgdb_handle_exception(1, SIGTRAP, 0, regs);
+
+- return 0;
++ return DBG_HOOK_HANDLED;
+ }
+ NOKPROBE_SYMBOL(kgdb_compiled_brk_fn);
+
+ static int kgdb_step_brk_fn(struct pt_regs *regs, unsigned int esr)
+ {
+- if (!kgdb_single_step)
++ if (user_mode(regs) || !kgdb_single_step)
+ return DBG_HOOK_ERROR;
+
+ kgdb_handle_exception(1, SIGTRAP, 0, regs);
+- return 0;
++ return DBG_HOOK_HANDLED;
+ }
+ NOKPROBE_SYMBOL(kgdb_step_brk_fn);
+
+diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
+index f17afb99890c..7fb6f3aa5ceb 100644
+--- a/arch/arm64/kernel/probes/kprobes.c
++++ b/arch/arm64/kernel/probes/kprobes.c
+@@ -450,6 +450,9 @@ kprobe_single_step_handler(struct pt_regs *regs, unsigned int esr)
+ struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+ int retval;
+
++ if (user_mode(regs))
++ return DBG_HOOK_ERROR;
++
+ /* return error if this is not our step */
+ retval = kprobe_ss_hit(kcb, instruction_pointer(regs));
+
+@@ -466,6 +469,9 @@ kprobe_single_step_handler(struct pt_regs *regs, unsigned int esr)
+ int __kprobes
+ kprobe_breakpoint_handler(struct pt_regs *regs, unsigned int esr)
+ {
++ if (user_mode(regs))
++ return DBG_HOOK_ERROR;
++
+ kprobe_handler(regs);
+ return DBG_HOOK_HANDLED;
+ }
+diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
+index 4e2fb877f8d5..92bfeb3e8d7c 100644
+--- a/arch/arm64/kernel/traps.c
++++ b/arch/arm64/kernel/traps.c
+@@ -102,10 +102,16 @@ static void dump_instr(const char *lvl, struct pt_regs *regs)
+ void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk)
+ {
+ struct stackframe frame;
+- int skip;
++ int skip = 0;
+
+ pr_debug("%s(regs = %p tsk = %p)\n", __func__, regs, tsk);
+
++ if (regs) {
++ if (user_mode(regs))
++ return;
++ skip = 1;
++ }
++
+ if (!tsk)
+ tsk = current;
+
+@@ -126,7 +132,6 @@ void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk)
+ frame.graph = 0;
+ #endif
+
+- skip = !!regs;
+ printk("Call trace:\n");
+ do {
+ /* skip until specified stack frame */
+@@ -176,15 +181,13 @@ static int __die(const char *str, int err, struct pt_regs *regs)
+ return ret;
+
+ print_modules();
+- __show_regs(regs);
+ pr_emerg("Process %.*s (pid: %d, stack limit = 0x%p)\n",
+ TASK_COMM_LEN, tsk->comm, task_pid_nr(tsk),
+ end_of_stack(tsk));
++ show_regs(regs);
+
+- if (!user_mode(regs)) {
+- dump_backtrace(regs, tsk);
++ if (!user_mode(regs))
+ dump_instr(KERN_EMERG, regs);
+- }
+
+ return ret;
+ }
+diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
+index c936aa40c3f4..b6dac3a68508 100644
+--- a/arch/arm64/kvm/sys_regs.c
++++ b/arch/arm64/kvm/sys_regs.c
+@@ -1476,7 +1476,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
+
+ { SYS_DESC(SYS_DACR32_EL2), NULL, reset_unknown, DACR32_EL2 },
+ { SYS_DESC(SYS_IFSR32_EL2), NULL, reset_unknown, IFSR32_EL2 },
+- { SYS_DESC(SYS_FPEXC32_EL2), NULL, reset_val, FPEXC32_EL2, 0x70 },
++ { SYS_DESC(SYS_FPEXC32_EL2), NULL, reset_val, FPEXC32_EL2, 0x700 },
+ };
+
+ static bool trap_dbgidr(struct kvm_vcpu *vcpu,
+diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
+index efb7b2cbead5..ef46925096f0 100644
+--- a/arch/arm64/mm/fault.c
++++ b/arch/arm64/mm/fault.c
+@@ -824,11 +824,12 @@ void __init hook_debug_fault_code(int nr,
+ debug_fault_info[nr].name = name;
+ }
+
+-asmlinkage int __exception do_debug_exception(unsigned long addr,
++asmlinkage int __exception do_debug_exception(unsigned long addr_if_watchpoint,
+ unsigned int esr,
+ struct pt_regs *regs)
+ {
+ const struct fault_info *inf = esr_to_debug_fault_info(esr);
++ unsigned long pc = instruction_pointer(regs);
+ int rv;
+
+ /*
+@@ -838,14 +839,14 @@ asmlinkage int __exception do_debug_exception(unsigned long addr,
+ if (interrupts_enabled(regs))
+ trace_hardirqs_off();
+
+- if (user_mode(regs) && !is_ttbr0_addr(instruction_pointer(regs)))
++ if (user_mode(regs) && !is_ttbr0_addr(pc))
+ arm64_apply_bp_hardening();
+
+- if (!inf->fn(addr, esr, regs)) {
++ if (!inf->fn(addr_if_watchpoint, esr, regs)) {
+ rv = 1;
+ } else {
+ arm64_notify_die(inf->name, regs,
+- inf->sig, inf->code, (void __user *)addr, esr);
++ inf->sig, inf->code, (void __user *)pc, esr);
+ rv = 0;
+ }
+
+diff --git a/arch/csky/include/asm/syscall.h b/arch/csky/include/asm/syscall.h
+index d637445737b7..9a9cd81e66c1 100644
+--- a/arch/csky/include/asm/syscall.h
++++ b/arch/csky/include/asm/syscall.h
+@@ -49,10 +49,11 @@ syscall_get_arguments(struct task_struct *task, struct pt_regs *regs,
+ if (i == 0) {
+ args[0] = regs->orig_a0;
+ args++;
+- i++;
+ n--;
++ } else {
++ i--;
+ }
+- memcpy(args, &regs->a1 + i * sizeof(regs->a1), n * sizeof(args[0]));
++ memcpy(args, &regs->a1 + i, n * sizeof(args[0]));
+ }
+
+ static inline void
+@@ -63,10 +64,11 @@ syscall_set_arguments(struct task_struct *task, struct pt_regs *regs,
+ if (i == 0) {
+ regs->orig_a0 = args[0];
+ args++;
+- i++;
+ n--;
++ } else {
++ i--;
+ }
+- memcpy(&regs->a1 + i * sizeof(regs->a1), args, n * sizeof(regs->a0));
++ memcpy(&regs->a1 + i, args, n * sizeof(regs->a1));
+ }
+
+ static inline int
+diff --git a/arch/h8300/Makefile b/arch/h8300/Makefile
+index f801f3708a89..ba0f26cfad61 100644
+--- a/arch/h8300/Makefile
++++ b/arch/h8300/Makefile
+@@ -27,7 +27,7 @@ KBUILD_LDFLAGS += $(ldflags-y)
+ CHECKFLAGS += -msize-long
+
+ ifeq ($(CROSS_COMPILE),)
+-CROSS_COMPILE := h8300-unknown-linux-
++CROSS_COMPILE := $(call cc-cross-prefix, h8300-unknown-linux- h8300-linux-)
+ endif
+
+ core-y += arch/$(ARCH)/kernel/ arch/$(ARCH)/mm/
+diff --git a/arch/m68k/Makefile b/arch/m68k/Makefile
+index f00ca53f8c14..482513b9af2c 100644
+--- a/arch/m68k/Makefile
++++ b/arch/m68k/Makefile
+@@ -58,7 +58,10 @@ cpuflags-$(CONFIG_M5206e) := $(call cc-option,-mcpu=5206e,-m5200)
+ cpuflags-$(CONFIG_M5206) := $(call cc-option,-mcpu=5206,-m5200)
+
+ KBUILD_AFLAGS += $(cpuflags-y)
+-KBUILD_CFLAGS += $(cpuflags-y) -pipe
++KBUILD_CFLAGS += $(cpuflags-y)
++
++KBUILD_CFLAGS += -pipe -ffreestanding
++
+ ifdef CONFIG_MMU
+ # without -fno-strength-reduce the 53c7xx.c driver fails ;-(
+ KBUILD_CFLAGS += -fno-strength-reduce -ffixed-a2
+diff --git a/arch/mips/include/asm/jump_label.h b/arch/mips/include/asm/jump_label.h
+index e77672539e8e..e4456e450f94 100644
+--- a/arch/mips/include/asm/jump_label.h
++++ b/arch/mips/include/asm/jump_label.h
+@@ -21,15 +21,15 @@
+ #endif
+
+ #ifdef CONFIG_CPU_MICROMIPS
+-#define NOP_INSN "nop32"
++#define B_INSN "b32"
+ #else
+-#define NOP_INSN "nop"
++#define B_INSN "b"
+ #endif
+
+ static __always_inline bool arch_static_branch(struct static_key *key, bool branch)
+ {
+- asm_volatile_goto("1:\t" NOP_INSN "\n\t"
+- "nop\n\t"
++ asm_volatile_goto("1:\t" B_INSN " 2f\n\t"
++ "2:\tnop\n\t"
+ ".pushsection __jump_table, \"aw\"\n\t"
+ WORD_INSN " 1b, %l[l_yes], %0\n\t"
+ ".popsection\n\t"
+diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
+index d2abd98471e8..41204a49cf95 100644
+--- a/arch/mips/include/asm/kvm_host.h
++++ b/arch/mips/include/asm/kvm_host.h
+@@ -1134,7 +1134,7 @@ static inline void kvm_arch_hardware_unsetup(void) {}
+ static inline void kvm_arch_sync_events(struct kvm *kvm) {}
+ static inline void kvm_arch_free_memslot(struct kvm *kvm,
+ struct kvm_memory_slot *free, struct kvm_memory_slot *dont) {}
+-static inline void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots) {}
++static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
+ static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
+ static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
+ static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
+diff --git a/arch/mips/kernel/irq.c b/arch/mips/kernel/irq.c
+index ba150c755fcc..85b6c60f285d 100644
+--- a/arch/mips/kernel/irq.c
++++ b/arch/mips/kernel/irq.c
+@@ -52,6 +52,7 @@ asmlinkage void spurious_interrupt(void)
+ void __init init_IRQ(void)
+ {
+ int i;
++ unsigned int order = get_order(IRQ_STACK_SIZE);
+
+ for (i = 0; i < NR_IRQS; i++)
+ irq_set_noprobe(i);
+@@ -62,8 +63,7 @@ void __init init_IRQ(void)
+ arch_init_irq();
+
+ for_each_possible_cpu(i) {
+- int irq_pages = IRQ_STACK_SIZE / PAGE_SIZE;
+- void *s = (void *)__get_free_pages(GFP_KERNEL, irq_pages);
++ void *s = (void *)__get_free_pages(GFP_KERNEL, order);
+
+ irq_stack[i] = s;
+ pr_debug("CPU%d IRQ stack at 0x%p - 0x%p\n", i,
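Context for the MIPS irq stack fix above: __get_free_pages() takes an
allocation order (log2 of the page count), not a number of pages, so
passing IRQ_STACK_SIZE / PAGE_SIZE asked for 2^N pages instead of N.
A user-space illustration of the arithmetic; this get_order() is a
simplified re-derivation for the example, not the <asm/page.h> one:

	#include <stdio.h>

	#define PAGE_SHIFT 12
	#define PAGE_SIZE  (1UL << PAGE_SHIFT)

	/* Smallest order whose allocation covers size bytes. */
	static int get_order(unsigned long size)
	{
		int order = 0;

		size = (size - 1) >> PAGE_SHIFT;
		while (size) {
			order++;
			size >>= 1;
		}
		return order;
	}

	int main(void)
	{
		unsigned long irq_stack_size = 4 * PAGE_SIZE;	/* 16K example */
		unsigned long pages = irq_stack_size / PAGE_SIZE;

		/* The old code passed the page count (4) as the order: */
		printf("order=%lu -> %lu pages\n", pages, 1UL << pages);
		/* The fix passes get_order(16K) = 2 -> exactly 4 pages: */
		printf("order=%d -> %d pages\n", get_order(irq_stack_size),
		       1 << get_order(irq_stack_size));
		return 0;
	}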
+diff --git a/arch/mips/kernel/vmlinux.lds.S b/arch/mips/kernel/vmlinux.lds.S
+index cb7e9ed7a453..33ee0d18fb0a 100644
+--- a/arch/mips/kernel/vmlinux.lds.S
++++ b/arch/mips/kernel/vmlinux.lds.S
+@@ -140,6 +140,13 @@ SECTIONS
+ PERCPU_SECTION(1 << CONFIG_MIPS_L1_CACHE_SHIFT)
+ #endif
+
++#ifdef CONFIG_MIPS_ELF_APPENDED_DTB
++ .appended_dtb : AT(ADDR(.appended_dtb) - LOAD_OFFSET) {
++ *(.appended_dtb)
++ KEEP(*(.appended_dtb))
++ }
++#endif
++
+ #ifdef CONFIG_RELOCATABLE
+ . = ALIGN(4);
+
+@@ -164,11 +171,6 @@ SECTIONS
+ __appended_dtb = .;
+ /* leave space for appended DTB */
+ . += 0x100000;
+-#elif defined(CONFIG_MIPS_ELF_APPENDED_DTB)
+- .appended_dtb : AT(ADDR(.appended_dtb) - LOAD_OFFSET) {
+- *(.appended_dtb)
+- KEEP(*(.appended_dtb))
+- }
+ #endif
+ /*
+ * Align to 64K in attempt to eliminate holes before the
+diff --git a/arch/mips/loongson64/lemote-2f/irq.c b/arch/mips/loongson64/lemote-2f/irq.c
+index 9e33e45aa17c..b213cecb8e3a 100644
+--- a/arch/mips/loongson64/lemote-2f/irq.c
++++ b/arch/mips/loongson64/lemote-2f/irq.c
+@@ -103,7 +103,7 @@ static struct irqaction ip6_irqaction = {
+ static struct irqaction cascade_irqaction = {
+ .handler = no_action,
+ .name = "cascade",
+- .flags = IRQF_NO_THREAD,
++ .flags = IRQF_NO_THREAD | IRQF_NO_SUSPEND,
+ };
+
+ void __init mach_init_irq(void)
+diff --git a/arch/parisc/include/asm/ptrace.h b/arch/parisc/include/asm/ptrace.h
+index 2a27b275ab09..9ff033d261ab 100644
+--- a/arch/parisc/include/asm/ptrace.h
++++ b/arch/parisc/include/asm/ptrace.h
+@@ -22,13 +22,14 @@ unsigned long profile_pc(struct pt_regs *);
+
+ static inline unsigned long regs_return_value(struct pt_regs *regs)
+ {
+- return regs->gr[20];
++ return regs->gr[28];
+ }
+
+ static inline void instruction_pointer_set(struct pt_regs *regs,
+ unsigned long val)
+ {
+- regs->iaoq[0] = val;
++ regs->iaoq[0] = val;
++ regs->iaoq[1] = val + 4;
+ }
+
+ /* Query offset/name of register from its name/offset */
+diff --git a/arch/parisc/kernel/process.c b/arch/parisc/kernel/process.c
+index eb39e7e380d7..841db71958cd 100644
+--- a/arch/parisc/kernel/process.c
++++ b/arch/parisc/kernel/process.c
+@@ -210,12 +210,6 @@ void __cpuidle arch_cpu_idle(void)
+
+ static int __init parisc_idle_init(void)
+ {
+- const char *marker;
+-
+- /* check QEMU/SeaBIOS marker in PAGE0 */
+- marker = (char *) &PAGE0->pad0;
+- running_on_qemu = (memcmp(marker, "SeaBIOS", 8) == 0);
+-
+ if (!running_on_qemu)
+ cpu_idle_poll_ctrl(1);
+
+diff --git a/arch/parisc/kernel/setup.c b/arch/parisc/kernel/setup.c
+index f2cf86ac279b..25946624ce6a 100644
+--- a/arch/parisc/kernel/setup.c
++++ b/arch/parisc/kernel/setup.c
+@@ -396,6 +396,9 @@ void __init start_parisc(void)
+ int ret, cpunum;
+ struct pdc_coproc_cfg coproc_cfg;
+
++ /* check QEMU/SeaBIOS marker in PAGE0 */
++ running_on_qemu = (memcmp(&PAGE0->pad0, "SeaBIOS", 8) == 0);
++
+ cpunum = smp_processor_id();
+
+ init_cpu_topology();
+diff --git a/arch/powerpc/include/asm/book3s/64/hugetlb.h b/arch/powerpc/include/asm/book3s/64/hugetlb.h
+index 5b0177733994..46130ef4941c 100644
+--- a/arch/powerpc/include/asm/book3s/64/hugetlb.h
++++ b/arch/powerpc/include/asm/book3s/64/hugetlb.h
+@@ -35,6 +35,14 @@ static inline int hstate_get_psize(struct hstate *hstate)
+ #ifdef CONFIG_ARCH_HAS_GIGANTIC_PAGE
+ static inline bool gigantic_page_supported(void)
+ {
++ /*
++	 * We use gigantic page reservation with hypervisor assist in some cases.
++	 * We cannot use runtime allocation of gigantic pages on those platforms,
++	 * i.e. on hash translation mode LPARs.
++ */
++ if (firmware_has_feature(FW_FEATURE_LPAR) && !radix_enabled())
++ return false;
++
+ return true;
+ }
+ #endif
+diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
+index 0f98f00da2ea..19693b8add93 100644
+--- a/arch/powerpc/include/asm/kvm_host.h
++++ b/arch/powerpc/include/asm/kvm_host.h
+@@ -837,7 +837,7 @@ struct kvm_vcpu_arch {
+ static inline void kvm_arch_hardware_disable(void) {}
+ static inline void kvm_arch_hardware_unsetup(void) {}
+ static inline void kvm_arch_sync_events(struct kvm *kvm) {}
+-static inline void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots) {}
++static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
+ static inline void kvm_arch_flush_shadow_all(struct kvm *kvm) {}
+ static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
+ static inline void kvm_arch_exit(void) {}
+diff --git a/arch/powerpc/include/asm/powernv.h b/arch/powerpc/include/asm/powernv.h
+index 2f3ff7a27881..d85fcfea32ca 100644
+--- a/arch/powerpc/include/asm/powernv.h
++++ b/arch/powerpc/include/asm/powernv.h
+@@ -23,6 +23,8 @@ extern int pnv_npu2_handle_fault(struct npu_context *context, uintptr_t *ea,
+ unsigned long *flags, unsigned long *status,
+ int count);
+
++void pnv_program_cpu_hotplug_lpcr(unsigned int cpu, u64 lpcr_val);
++
+ void pnv_tm_init(void);
+ #else
+ static inline void powernv_set_nmmu_ptcr(unsigned long ptcr) { }
+diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
+index 19a8834e0398..0690a306f6ca 100644
+--- a/arch/powerpc/include/asm/ppc-opcode.h
++++ b/arch/powerpc/include/asm/ppc-opcode.h
+@@ -302,6 +302,7 @@
+ /* Misc instructions for BPF compiler */
+ #define PPC_INST_LBZ 0x88000000
+ #define PPC_INST_LD 0xe8000000
++#define PPC_INST_LDX 0x7c00002a
+ #define PPC_INST_LHZ 0xa0000000
+ #define PPC_INST_LWZ 0x80000000
+ #define PPC_INST_LHBRX 0x7c00062c
+@@ -309,6 +310,7 @@
+ #define PPC_INST_STB 0x98000000
+ #define PPC_INST_STH 0xb0000000
+ #define PPC_INST_STD 0xf8000000
++#define PPC_INST_STDX 0x7c00012a
+ #define PPC_INST_STDU 0xf8000001
+ #define PPC_INST_STW 0x90000000
+ #define PPC_INST_STWU 0x94000000
+diff --git a/arch/powerpc/include/asm/topology.h b/arch/powerpc/include/asm/topology.h
+index a4a718dbfec6..f85e2b01c3df 100644
+--- a/arch/powerpc/include/asm/topology.h
++++ b/arch/powerpc/include/asm/topology.h
+@@ -132,6 +132,8 @@ static inline void shared_proc_topology_init(void) {}
+ #define topology_sibling_cpumask(cpu) (per_cpu(cpu_sibling_map, cpu))
+ #define topology_core_cpumask(cpu) (per_cpu(cpu_core_map, cpu))
+ #define topology_core_id(cpu) (cpu_to_core_id(cpu))
++
++int dlpar_cpu_readd(int cpu);
+ #endif
+ #endif
+
+diff --git a/arch/powerpc/include/asm/vdso_datapage.h b/arch/powerpc/include/asm/vdso_datapage.h
+index 1afe90ade595..bbc06bd72b1f 100644
+--- a/arch/powerpc/include/asm/vdso_datapage.h
++++ b/arch/powerpc/include/asm/vdso_datapage.h
+@@ -82,10 +82,10 @@ struct vdso_data {
+ __u32 icache_block_size; /* L1 i-cache block size */
+ __u32 dcache_log_block_size; /* L1 d-cache log block size */
+ __u32 icache_log_block_size; /* L1 i-cache log block size */
+- __s32 wtom_clock_sec; /* Wall to monotonic clock */
+- __s32 wtom_clock_nsec;
+- struct timespec stamp_xtime; /* xtime as at tb_orig_stamp */
+- __u32 stamp_sec_fraction; /* fractional seconds of stamp_xtime */
++ __u32 stamp_sec_fraction; /* fractional seconds of stamp_xtime */
++ __s32 wtom_clock_nsec; /* Wall to monotonic clock nsec */
++ __s64 wtom_clock_sec; /* Wall to monotonic clock sec */
++ struct timespec stamp_xtime; /* xtime as at tb_orig_stamp */
+ __u32 syscall_map_64[SYSCALL_MAP_SIZE]; /* map of syscalls */
+ __u32 syscall_map_32[SYSCALL_MAP_SIZE]; /* map of syscalls */
+ };
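The vdso_data change widens wtom_clock_sec from __s32 to __s64 and
reorders the neighbouring fields so the structure keeps a consistent
layout; the matching change to the 64-bit vdso asm appears further
down in this patch, where lwa (a sign-extended 32-bit load) becomes a
full 64-bit ld. A small demonstration of why a 32-bit load of a
64-bit field goes wrong (which half you get depends on endianness, so
the struct layout and the asm must agree):

	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		int64_t wtom_sec = -4294967296LL;	/* needs > 32 bits */
		int32_t low32;

		/* Reading only 4 of the 8 bytes drops half the value. */
		memcpy(&low32, &wtom_sec, sizeof(low32));
		printf("64-bit: %lld  32-bit view: %d\n",
		       (long long)wtom_sec, low32);
		return 0;
	}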
+diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
+index 0768dfd8a64e..fdd528cdb2ee 100644
+--- a/arch/powerpc/kernel/entry_32.S
++++ b/arch/powerpc/kernel/entry_32.S
+@@ -745,6 +745,9 @@ fast_exception_return:
+ mtcr r10
+ lwz r10,_LINK(r11)
+ mtlr r10
++	/* Clear the exception_marker on the stack to avoid confusing the stacktrace */
++ li r10, 0
++ stw r10, 8(r11)
+ REST_GPR(10, r11)
+ #if defined(CONFIG_PPC_8xx) && defined(CONFIG_PERF_EVENTS)
+ mtspr SPRN_NRI, r0
+@@ -982,6 +985,9 @@ END_FTR_SECTION_IFSET(CPU_FTR_NEED_PAIRED_STWCX)
+ mtcrf 0xFF,r10
+ mtlr r11
+
++	/* Clear the exception_marker on the stack to avoid confusing the stacktrace */
++ li r10, 0
++ stw r10, 8(r1)
+ /*
+ * Once we put values in SRR0 and SRR1, we are in a state
+ * where exceptions are not recoverable, since taking an
+@@ -1021,6 +1027,9 @@ exc_exit_restart_end:
+ mtlr r11
+ lwz r10,_CCR(r1)
+ mtcrf 0xff,r10
++	/* Clear the exception_marker on the stack to avoid confusing the stacktrace */
++ li r10, 0
++ stw r10, 8(r1)
+ REST_2GPRS(9, r1)
+ .globl exc_exit_restart
+ exc_exit_restart:
+diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
+index 435927f549c4..a2c168b395d2 100644
+--- a/arch/powerpc/kernel/entry_64.S
++++ b/arch/powerpc/kernel/entry_64.S
+@@ -1002,6 +1002,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
+ ld r2,_NIP(r1)
+ mtspr SPRN_SRR0,r2
+
++ /*
++ * Leaving a stale exception_marker on the stack can confuse
++ * the reliable stack unwinder later on. Clear it.
++ */
++ li r2,0
++ std r2,STACK_FRAME_OVERHEAD-16(r1)
++
+ ld r0,GPR0(r1)
+ ld r2,GPR2(r1)
+ ld r3,GPR3(r1)
+diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
+index afb638778f44..447defdd4503 100644
+--- a/arch/powerpc/kernel/exceptions-64e.S
++++ b/arch/powerpc/kernel/exceptions-64e.S
+@@ -349,6 +349,7 @@ ret_from_mc_except:
+ #define GEN_BTB_FLUSH
+ #define CRIT_BTB_FLUSH
+ #define DBG_BTB_FLUSH
++#define MC_BTB_FLUSH
+ #define GDBELL_BTB_FLUSH
+ #endif
+
+diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
+index 9e253ce27e08..4fee6c9887db 100644
+--- a/arch/powerpc/kernel/exceptions-64s.S
++++ b/arch/powerpc/kernel/exceptions-64s.S
+@@ -612,11 +612,17 @@ EXC_COMMON_BEGIN(data_access_slb_common)
+ ld r4,PACA_EXSLB+EX_DAR(r13)
+ std r4,_DAR(r1)
+ addi r3,r1,STACK_FRAME_OVERHEAD
++BEGIN_MMU_FTR_SECTION
++ /* HPT case, do SLB fault */
+ bl do_slb_fault
+ cmpdi r3,0
+ bne- 1f
+ b fast_exception_return
+ 1: /* Error case */
++MMU_FTR_SECTION_ELSE
++ /* Radix case, access is outside page table range */
++ li r3,-EFAULT
++ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
+ std r3,RESULT(r1)
+ bl save_nvgprs
+ RECONCILE_IRQ_STATE(r10, r11)
+@@ -661,11 +667,17 @@ EXC_COMMON_BEGIN(instruction_access_slb_common)
+ EXCEPTION_PROLOG_COMMON(0x480, PACA_EXSLB)
+ ld r4,_NIP(r1)
+ addi r3,r1,STACK_FRAME_OVERHEAD
++BEGIN_MMU_FTR_SECTION
++ /* HPT case, do SLB fault */
+ bl do_slb_fault
+ cmpdi r3,0
+ bne- 1f
+ b fast_exception_return
+ 1: /* Error case */
++MMU_FTR_SECTION_ELSE
++ /* Radix case, access is outside page table range */
++ li r3,-EFAULT
++ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
+ std r3,RESULT(r1)
+ bl save_nvgprs
+ RECONCILE_IRQ_STATE(r10, r11)
+diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
+index ce393df243aa..71bad4b6f80d 100644
+--- a/arch/powerpc/kernel/process.c
++++ b/arch/powerpc/kernel/process.c
+@@ -176,7 +176,7 @@ static void __giveup_fpu(struct task_struct *tsk)
+
+ save_fpu(tsk);
+ msr = tsk->thread.regs->msr;
+- msr &= ~MSR_FP;
++ msr &= ~(MSR_FP|MSR_FE0|MSR_FE1);
+ #ifdef CONFIG_VSX
+ if (cpu_has_feature(CPU_FTR_VSX))
+ msr &= ~MSR_VSX;
+diff --git a/arch/powerpc/kernel/ptrace.c b/arch/powerpc/kernel/ptrace.c
+index cdd5d1d3ae41..d9ac7d94656e 100644
+--- a/arch/powerpc/kernel/ptrace.c
++++ b/arch/powerpc/kernel/ptrace.c
+@@ -33,6 +33,7 @@
+ #include <linux/hw_breakpoint.h>
+ #include <linux/perf_event.h>
+ #include <linux/context_tracking.h>
++#include <linux/nospec.h>
+
+ #include <linux/uaccess.h>
+ #include <linux/pkeys.h>
+@@ -274,6 +275,8 @@ static int set_user_trap(struct task_struct *task, unsigned long trap)
+ */
+ int ptrace_get_reg(struct task_struct *task, int regno, unsigned long *data)
+ {
++ unsigned int regs_max;
++
+ if ((task->thread.regs == NULL) || !data)
+ return -EIO;
+
+@@ -297,7 +300,9 @@ int ptrace_get_reg(struct task_struct *task, int regno, unsigned long *data)
+ }
+ #endif
+
+- if (regno < (sizeof(struct user_pt_regs) / sizeof(unsigned long))) {
++ regs_max = sizeof(struct user_pt_regs) / sizeof(unsigned long);
++ if (regno < regs_max) {
++ regno = array_index_nospec(regno, regs_max);
+ *data = ((unsigned long *)task->thread.regs)[regno];
+ return 0;
+ }
+@@ -321,6 +326,7 @@ int ptrace_put_reg(struct task_struct *task, int regno, unsigned long data)
+ return set_user_dscr(task, data);
+
+ if (regno <= PT_MAX_PUT_REG) {
++ regno = array_index_nospec(regno, PT_MAX_PUT_REG + 1);
+ ((unsigned long *)task->thread.regs)[regno] = data;
+ return 0;
+ }
+@@ -561,6 +567,7 @@ static int vr_get(struct task_struct *target, const struct user_regset *regset,
+ /*
+ * Copy out only the low-order word of vrsave.
+ */
++ int start, end;
+ union {
+ elf_vrreg_t reg;
+ u32 word;
+@@ -569,8 +576,10 @@ static int vr_get(struct task_struct *target, const struct user_regset *regset,
+
+ vrsave.word = target->thread.vrsave;
+
++ start = 33 * sizeof(vector128);
++ end = start + sizeof(vrsave);
+ ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf, &vrsave,
+- 33 * sizeof(vector128), -1);
++ start, end);
+ }
+
+ return ret;
+@@ -608,6 +617,7 @@ static int vr_set(struct task_struct *target, const struct user_regset *regset,
+ /*
+ * We use only the first word of vrsave.
+ */
++ int start, end;
+ union {
+ elf_vrreg_t reg;
+ u32 word;
+@@ -616,8 +626,10 @@ static int vr_set(struct task_struct *target, const struct user_regset *regset,
+
+ vrsave.word = target->thread.vrsave;
+
++ start = 33 * sizeof(vector128);
++ end = start + sizeof(vrsave);
+ ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &vrsave,
+- 33 * sizeof(vector128), -1);
++ start, end);
+ if (!ret)
+ target->thread.vrsave = vrsave.word;
+ }
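The ptrace hunks above apply the usual Spectre-v1 pattern:
array_index_nospec() clamps the index to zero under misspeculation,
so a mispredicted regno bounds check cannot be abused to read at
attacker-chosen offsets. A user-space sketch of what I understand to
be the kernel's generic C fallback mask (real architectures may
override it with assembly; this copy is for illustration only):

	#include <stdio.h>

	#define BITS_PER_LONG (8 * sizeof(long))

	/* All ones when index < size, zero otherwise -- computed without
	 * a branch so the clamp also holds under speculative execution. */
	static unsigned long index_mask(unsigned long index, unsigned long size)
	{
		return ~(long)(index | (size - 1UL - index)) >> (BITS_PER_LONG - 1);
	}

	int main(void)
	{
		unsigned long regs[44] = { 0 };	/* stand-in register block */
		unsigned long regno = 3;
		unsigned long max = sizeof(regs) / sizeof(regs[0]);

		if (regno < max) {
			regno &= index_mask(regno, max); /* nospec clamp */
			printf("reg %lu = %lu\n", regno, regs[regno]);
		}
		return 0;
	}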
+diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
+index 9b8631533e02..b33bafb8fcea 100644
+--- a/arch/powerpc/kernel/security.c
++++ b/arch/powerpc/kernel/security.c
+@@ -190,29 +190,22 @@ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, c
+ bcs = security_ftr_enabled(SEC_FTR_BCCTRL_SERIALISED);
+ ccd = security_ftr_enabled(SEC_FTR_COUNT_CACHE_DISABLED);
+
+- if (bcs || ccd || count_cache_flush_type != COUNT_CACHE_FLUSH_NONE) {
+- bool comma = false;
++ if (bcs || ccd) {
+ seq_buf_printf(&s, "Mitigation: ");
+
+- if (bcs) {
++ if (bcs)
+ seq_buf_printf(&s, "Indirect branch serialisation (kernel only)");
+- comma = true;
+- }
+
+- if (ccd) {
+- if (comma)
+- seq_buf_printf(&s, ", ");
+- seq_buf_printf(&s, "Indirect branch cache disabled");
+- comma = true;
+- }
+-
+- if (comma)
++ if (bcs && ccd)
+ seq_buf_printf(&s, ", ");
+
+- seq_buf_printf(&s, "Software count cache flush");
++ if (ccd)
++ seq_buf_printf(&s, "Indirect branch cache disabled");
++ } else if (count_cache_flush_type != COUNT_CACHE_FLUSH_NONE) {
++ seq_buf_printf(&s, "Mitigation: Software count cache flush");
+
+ if (count_cache_flush_type == COUNT_CACHE_FLUSH_HW)
+- seq_buf_printf(&s, "(hardware accelerated)");
++ seq_buf_printf(&s, " (hardware accelerated)");
+ } else if (btb_flush_enabled) {
+ seq_buf_printf(&s, "Mitigation: Branch predictor state flush");
+ } else {
+diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
+index 3f15edf25a0d..6e521a3f67ca 100644
+--- a/arch/powerpc/kernel/smp.c
++++ b/arch/powerpc/kernel/smp.c
+@@ -358,13 +358,12 @@ void arch_send_call_function_ipi_mask(const struct cpumask *mask)
+ * NMI IPIs may not be recoverable, so should not be used as ongoing part of
+ * a running system. They can be used for crash, debug, halt/reboot, etc.
+ *
+- * NMI IPIs are globally single threaded. No more than one in progress at
+- * any time.
+- *
+ * The IPI call waits with interrupts disabled until all targets enter the
+- * NMI handler, then the call returns.
++ * NMI handler, then returns. Subsequent IPIs can be issued before targets
++ * have returned from their handlers, so there is no guarantee about
++ * concurrency or re-entrancy.
+ *
+- * No new NMI can be initiated until targets exit the handler.
++ * A new NMI can be issued before all targets exit the handler.
+ *
+ * The IPI call may time out without all targets entering the NMI handler.
+ * In that case, there is some logic to recover (and ignore subsequent
+@@ -375,7 +374,7 @@ void arch_send_call_function_ipi_mask(const struct cpumask *mask)
+
+ static atomic_t __nmi_ipi_lock = ATOMIC_INIT(0);
+ static struct cpumask nmi_ipi_pending_mask;
+-static int nmi_ipi_busy_count = 0;
++static bool nmi_ipi_busy = false;
+ static void (*nmi_ipi_function)(struct pt_regs *) = NULL;
+
+ static void nmi_ipi_lock_start(unsigned long *flags)
+@@ -414,7 +413,7 @@ static void nmi_ipi_unlock_end(unsigned long *flags)
+ */
+ int smp_handle_nmi_ipi(struct pt_regs *regs)
+ {
+- void (*fn)(struct pt_regs *);
++ void (*fn)(struct pt_regs *) = NULL;
+ unsigned long flags;
+ int me = raw_smp_processor_id();
+ int ret = 0;
+@@ -425,29 +424,17 @@ int smp_handle_nmi_ipi(struct pt_regs *regs)
+ * because the caller may have timed out.
+ */
+ nmi_ipi_lock_start(&flags);
+- if (!nmi_ipi_busy_count)
+- goto out;
+- if (!cpumask_test_cpu(me, &nmi_ipi_pending_mask))
+- goto out;
+-
+- fn = nmi_ipi_function;
+- if (!fn)
+- goto out;
+-
+- cpumask_clear_cpu(me, &nmi_ipi_pending_mask);
+- nmi_ipi_busy_count++;
+- nmi_ipi_unlock();
+-
+- ret = 1;
+-
+- fn(regs);
+-
+- nmi_ipi_lock();
+- if (nmi_ipi_busy_count > 1) /* Can race with caller time-out */
+- nmi_ipi_busy_count--;
+-out:
++ if (cpumask_test_cpu(me, &nmi_ipi_pending_mask)) {
++ cpumask_clear_cpu(me, &nmi_ipi_pending_mask);
++ fn = READ_ONCE(nmi_ipi_function);
++ WARN_ON_ONCE(!fn);
++ ret = 1;
++ }
+ nmi_ipi_unlock_end(&flags);
+
++ if (fn)
++ fn(regs);
++
+ return ret;
+ }
+
+@@ -473,7 +460,7 @@ static void do_smp_send_nmi_ipi(int cpu, bool safe)
+ * - cpu is the target CPU (must not be this CPU), or NMI_IPI_ALL_OTHERS.
+ * - fn is the target callback function.
+ * - delay_us > 0 is the delay before giving up waiting for targets to
+- * complete executing the handler, == 0 specifies indefinite delay.
++ * begin executing the handler, == 0 specifies indefinite delay.
+ */
+ int __smp_send_nmi_ipi(int cpu, void (*fn)(struct pt_regs *), u64 delay_us, bool safe)
+ {
+@@ -487,31 +474,33 @@ int __smp_send_nmi_ipi(int cpu, void (*fn)(struct pt_regs *), u64 delay_us, bool
+ if (unlikely(!smp_ops))
+ return 0;
+
+- /* Take the nmi_ipi_busy count/lock with interrupts hard disabled */
+ nmi_ipi_lock_start(&flags);
+- while (nmi_ipi_busy_count) {
++ while (nmi_ipi_busy) {
+ nmi_ipi_unlock_end(&flags);
+- spin_until_cond(nmi_ipi_busy_count == 0);
++ spin_until_cond(!nmi_ipi_busy);
+ nmi_ipi_lock_start(&flags);
+ }
+-
++ nmi_ipi_busy = true;
+ nmi_ipi_function = fn;
+
++ WARN_ON_ONCE(!cpumask_empty(&nmi_ipi_pending_mask));
++
+ if (cpu < 0) {
+ /* ALL_OTHERS */
+ cpumask_copy(&nmi_ipi_pending_mask, cpu_online_mask);
+ cpumask_clear_cpu(me, &nmi_ipi_pending_mask);
+ } else {
+- /* cpumask starts clear */
+ cpumask_set_cpu(cpu, &nmi_ipi_pending_mask);
+ }
+- nmi_ipi_busy_count++;
++
+ nmi_ipi_unlock();
+
++ /* Interrupts remain hard disabled */
++
+ do_smp_send_nmi_ipi(cpu, safe);
+
+ nmi_ipi_lock();
+- /* nmi_ipi_busy_count is held here, so unlock/lock is okay */
++ /* nmi_ipi_busy is set here, so unlock/lock is okay */
+ while (!cpumask_empty(&nmi_ipi_pending_mask)) {
+ nmi_ipi_unlock();
+ udelay(1);
+@@ -523,29 +512,15 @@ int __smp_send_nmi_ipi(int cpu, void (*fn)(struct pt_regs *), u64 delay_us, bool
+ }
+ }
+
+- while (nmi_ipi_busy_count > 1) {
+- nmi_ipi_unlock();
+- udelay(1);
+- nmi_ipi_lock();
+- if (delay_us) {
+- delay_us--;
+- if (!delay_us)
+- break;
+- }
+- }
+-
+ if (!cpumask_empty(&nmi_ipi_pending_mask)) {
+ /* Timeout waiting for CPUs to call smp_handle_nmi_ipi */
+ ret = 0;
+ cpumask_clear(&nmi_ipi_pending_mask);
+ }
+- if (nmi_ipi_busy_count > 1) {
+- /* Timeout waiting for CPUs to execute fn */
+- ret = 0;
+- nmi_ipi_busy_count = 1;
+- }
+
+- nmi_ipi_busy_count--;
++ nmi_ipi_function = NULL;
++ nmi_ipi_busy = false;
++
+ nmi_ipi_unlock_end(&flags);
+
+ return ret;
+@@ -613,17 +588,8 @@ void crash_send_ipi(void (*crash_ipi_callback)(struct pt_regs *))
+ static void nmi_stop_this_cpu(struct pt_regs *regs)
+ {
+ /*
+- * This is a special case because it never returns, so the NMI IPI
+- * handling would never mark it as done, which makes any later
+- * smp_send_nmi_ipi() call spin forever. Mark it done now.
+- *
+ * IRQs are already hard disabled by the smp_handle_nmi_ipi.
+ */
+- nmi_ipi_lock();
+- if (nmi_ipi_busy_count > 1)
+- nmi_ipi_busy_count--;
+- nmi_ipi_unlock();
+-
+ spin_begin();
+ while (1)
+ spin_cpu_relax();
+diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
+index 64936b60d521..7a1de34f38c8 100644
+--- a/arch/powerpc/kernel/traps.c
++++ b/arch/powerpc/kernel/traps.c
+@@ -763,15 +763,15 @@ void machine_check_exception(struct pt_regs *regs)
+ if (check_io_access(regs))
+ goto bail;
+
+- /* Must die if the interrupt is not recoverable */
+- if (!(regs->msr & MSR_RI))
+- nmi_panic(regs, "Unrecoverable Machine check");
+-
+ if (!nested)
+ nmi_exit();
+
+ die("Machine check", regs, SIGBUS);
+
++ /* Must die if the interrupt is not recoverable */
++ if (!(regs->msr & MSR_RI))
++ nmi_panic(regs, "Unrecoverable Machine check");
++
+ return;
+
+ bail:
+@@ -1542,8 +1542,8 @@ bail:
+
+ void StackOverflow(struct pt_regs *regs)
+ {
+- printk(KERN_CRIT "Kernel stack overflow in process %p, r1=%lx\n",
+- current, regs->gpr[1]);
++ pr_crit("Kernel stack overflow in process %s[%d], r1=%lx\n",
++ current->comm, task_pid_nr(current), regs->gpr[1]);
+ debugger(regs);
+ show_regs(regs);
+ panic("kernel stack overflow");
+diff --git a/arch/powerpc/kernel/vdso64/gettimeofday.S b/arch/powerpc/kernel/vdso64/gettimeofday.S
+index a4ed9edfd5f0..1f324c28705b 100644
+--- a/arch/powerpc/kernel/vdso64/gettimeofday.S
++++ b/arch/powerpc/kernel/vdso64/gettimeofday.S
+@@ -92,7 +92,7 @@ V_FUNCTION_BEGIN(__kernel_clock_gettime)
+ * At this point, r4,r5 contain our sec/nsec values.
+ */
+
+- lwa r6,WTOM_CLOCK_SEC(r3)
++ ld r6,WTOM_CLOCK_SEC(r3)
+ lwa r9,WTOM_CLOCK_NSEC(r3)
+
+ /* We now have our result in r6,r9. We create a fake dependency
+@@ -125,7 +125,7 @@ V_FUNCTION_BEGIN(__kernel_clock_gettime)
+ bne cr6,75f
+
+ /* CLOCK_MONOTONIC_COARSE */
+- lwa r6,WTOM_CLOCK_SEC(r3)
++ ld r6,WTOM_CLOCK_SEC(r3)
+ lwa r9,WTOM_CLOCK_NSEC(r3)
+
+ /* check if counter has updated */
+diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+index 9b8d50a7cbaf..45b06e239d1f 100644
+--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
++++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+@@ -58,6 +58,8 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300)
+ #define STACK_SLOT_DAWR (SFS-56)
+ #define STACK_SLOT_DAWRX (SFS-64)
+ #define STACK_SLOT_HFSCR (SFS-72)
++#define STACK_SLOT_AMR (SFS-80)
++#define STACK_SLOT_UAMOR (SFS-88)
+ /* the following is used by the P9 short path */
+ #define STACK_SLOT_NVGPRS (SFS-152) /* 18 gprs */
+
+@@ -726,11 +728,9 @@ BEGIN_FTR_SECTION
+ mfspr r5, SPRN_TIDR
+ mfspr r6, SPRN_PSSCR
+ mfspr r7, SPRN_PID
+- mfspr r8, SPRN_IAMR
+ std r5, STACK_SLOT_TID(r1)
+ std r6, STACK_SLOT_PSSCR(r1)
+ std r7, STACK_SLOT_PID(r1)
+- std r8, STACK_SLOT_IAMR(r1)
+ mfspr r5, SPRN_HFSCR
+ std r5, STACK_SLOT_HFSCR(r1)
+ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
+@@ -738,11 +738,18 @@ BEGIN_FTR_SECTION
+ mfspr r5, SPRN_CIABR
+ mfspr r6, SPRN_DAWR
+ mfspr r7, SPRN_DAWRX
++ mfspr r8, SPRN_IAMR
+ std r5, STACK_SLOT_CIABR(r1)
+ std r6, STACK_SLOT_DAWR(r1)
+ std r7, STACK_SLOT_DAWRX(r1)
++ std r8, STACK_SLOT_IAMR(r1)
+ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
+
++ mfspr r5, SPRN_AMR
++ std r5, STACK_SLOT_AMR(r1)
++ mfspr r6, SPRN_UAMOR
++ std r6, STACK_SLOT_UAMOR(r1)
++
+ BEGIN_FTR_SECTION
+ /* Set partition DABR */
+ /* Do this before re-enabling PMU to avoid P7 DABR corruption bug */
+@@ -1631,22 +1638,25 @@ ALT_FTR_SECTION_END_IFCLR(CPU_FTR_ARCH_300)
+ mtspr SPRN_PSPB, r0
+ mtspr SPRN_WORT, r0
+ BEGIN_FTR_SECTION
+- mtspr SPRN_IAMR, r0
+ mtspr SPRN_TCSCR, r0
+ /* Set MMCRS to 1<<31 to freeze and disable the SPMC counters */
+ li r0, 1
+ sldi r0, r0, 31
+ mtspr SPRN_MMCRS, r0
+ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300)
+-8:
+
+- /* Save and reset AMR and UAMOR before turning on the MMU */
++ /* Save and restore AMR, IAMR and UAMOR before turning on the MMU */
++ ld r8, STACK_SLOT_IAMR(r1)
++ mtspr SPRN_IAMR, r8
++
++8: /* Power7 jumps back in here */
+ mfspr r5,SPRN_AMR
+ mfspr r6,SPRN_UAMOR
+ std r5,VCPU_AMR(r9)
+ std r6,VCPU_UAMOR(r9)
+- li r6,0
+- mtspr SPRN_AMR,r6
++ ld r5,STACK_SLOT_AMR(r1)
++ ld r6,STACK_SLOT_UAMOR(r1)
++ mtspr SPRN_AMR, r5
+ mtspr SPRN_UAMOR, r6
+
+ /* Switch DSCR back to host value */
+@@ -1746,11 +1756,9 @@ BEGIN_FTR_SECTION
+ ld r5, STACK_SLOT_TID(r1)
+ ld r6, STACK_SLOT_PSSCR(r1)
+ ld r7, STACK_SLOT_PID(r1)
+- ld r8, STACK_SLOT_IAMR(r1)
+ mtspr SPRN_TIDR, r5
+ mtspr SPRN_PSSCR, r6
+ mtspr SPRN_PID, r7
+- mtspr SPRN_IAMR, r8
+ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
+
+ #ifdef CONFIG_PPC_RADIX_MMU
+diff --git a/arch/powerpc/lib/memcmp_64.S b/arch/powerpc/lib/memcmp_64.S
+index 844d8e774492..b7f6f6e0b6e8 100644
+--- a/arch/powerpc/lib/memcmp_64.S
++++ b/arch/powerpc/lib/memcmp_64.S
+@@ -215,11 +215,20 @@ _GLOBAL_TOC(memcmp)
+ beq .Lzero
+
+ .Lcmp_rest_lt8bytes:
+- /* Here we have only less than 8 bytes to compare with. at least s1
+- * Address is aligned with 8 bytes.
+- * The next double words are load and shift right with appropriate
+- * bits.
++ /*
++ * Here we have less than 8 bytes to compare. At least s1 is aligned to
++ * 8 bytes, but s2 may not be. We must make sure s2 + 7 doesn't cross a
++ * page boundary, otherwise we might read past the end of the buffer and
++ * trigger a page fault. We use 4K as the conservative minimum page
++ * size. If we detect that case we go to the byte-by-byte loop.
++ *
++ * Otherwise the next double word is loaded from s1 and s2, and shifted
++ * right to compare the appropriate bits.
+ */
++ clrldi r6,r4,(64-12) // r6 = r4 & 0xfff
++ cmpdi r6,0xff8
++ bgt .Lshort
++
+ subfic r6,r5,8
+ slwi r6,r6,3
+ LD rA,0,r3
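The memcmp early-exit above encodes a simple address predicate: taking
4K as the conservative minimum page size, an unaligned 8-byte load
from s2 stays within one page only when s2's offset inside the page is
at most 0xff8, i.e. s2 + 7 does not spill onto the next page. The same
check in C, with illustrative names:

	#include <stdbool.h>
	#include <stdint.h>

	#define MIN_PAGE_SIZE 4096UL	/* conservative minimum, as in the patch */

	/* True when an 8-byte load at addr cannot cross a 4K boundary;
	 * mirrors "clrldi r6,r4,(64-12); cmpdi r6,0xff8; bgt .Lshort". */
	static bool load8_is_safe(uintptr_t addr)
	{
		return (addr & (MIN_PAGE_SIZE - 1)) <= 0xff8;
	}

	int main(void)
	{
		/* 0x1ff8 ends exactly at the page edge; 0x1ff9 crosses it. */
		return !(load8_is_safe(0x1ff8) && !load8_is_safe(0x1ff9));
	}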
+diff --git a/arch/powerpc/mm/hugetlbpage-radix.c b/arch/powerpc/mm/hugetlbpage-radix.c
+index 2486bee0f93e..97c7a39ebc00 100644
+--- a/arch/powerpc/mm/hugetlbpage-radix.c
++++ b/arch/powerpc/mm/hugetlbpage-radix.c
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include <linux/mm.h>
+ #include <linux/hugetlb.h>
++#include <linux/security.h>
+ #include <asm/pgtable.h>
+ #include <asm/pgalloc.h>
+ #include <asm/cacheflush.h>
+@@ -73,7 +74,7 @@ radix__hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
+ if (addr) {
+ addr = ALIGN(addr, huge_page_size(h));
+ vma = find_vma(mm, addr);
+- if (high_limit - len >= addr &&
++ if (high_limit - len >= addr && addr >= mmap_min_addr &&
+ (!vma || addr + len <= vm_start_gap(vma)))
+ return addr;
+ }
+@@ -83,7 +84,7 @@ radix__hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
+ */
+ info.flags = VM_UNMAPPED_AREA_TOPDOWN;
+ info.length = len;
+- info.low_limit = PAGE_SIZE;
++ info.low_limit = max(PAGE_SIZE, mmap_min_addr);
+ info.high_limit = mm->mmap_base + (high_limit - DEFAULT_MAP_WINDOW);
+ info.align_mask = PAGE_MASK & ~huge_page_mask(h);
+ info.align_offset = 0;
+diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
+index 87f0dd004295..b5d1c45c1475 100644
+--- a/arch/powerpc/mm/numa.c
++++ b/arch/powerpc/mm/numa.c
+@@ -1460,13 +1460,6 @@ static void reset_topology_timer(void)
+
+ #ifdef CONFIG_SMP
+
+-static void stage_topology_update(int core_id)
+-{
+- cpumask_or(&cpu_associativity_changes_mask,
+- &cpu_associativity_changes_mask, cpu_sibling_mask(core_id));
+- reset_topology_timer();
+-}
+-
+ static int dt_update_callback(struct notifier_block *nb,
+ unsigned long action, void *data)
+ {
+@@ -1479,7 +1472,7 @@ static int dt_update_callback(struct notifier_block *nb,
+ !of_prop_cmp(update->prop->name, "ibm,associativity")) {
+ u32 core_id;
+ of_property_read_u32(update->dn, "reg", &core_id);
+- stage_topology_update(core_id);
++ rc = dlpar_cpu_readd(core_id);
+ rc = NOTIFY_OK;
+ }
+ break;
+diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
+index bc3914d54e26..5986df48359b 100644
+--- a/arch/powerpc/mm/slb.c
++++ b/arch/powerpc/mm/slb.c
+@@ -69,6 +69,11 @@ static void assert_slb_presence(bool present, unsigned long ea)
+ if (!cpu_has_feature(CPU_FTR_ARCH_206))
+ return;
+
++ /*
++ * slbfee. requires bit 24 (PPC bit 39) be clear in RB. Hardware
++ * ignores all other bits from 0-27, so just clear them all.
++ */
++ ea &= ~((1UL << 28) - 1);
+ asm volatile(__PPC_SLBFEE_DOT(%0, %1) : "=r"(tmp) : "r"(ea) : "cr0");
+
+ WARN_ON(present == (tmp == 0));
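For reference, slbfee. ignores the low-order bits of RB apart from
requiring one of them (bit 24, PPC bit 39, per the comment above) to
be zero, so the fix simply clears the bottom 28 bits -- the offset
within a 256MB segment -- before the lookup. The mask in isolation:

	#include <stdio.h>

	int main(void)
	{
		unsigned long ea = 0xc000000012345678UL;

		/* Drop the 256MB segment offset, keep the ESID bits. */
		ea &= ~((1UL << 28) - 1);
		printf("%#lx\n", ea);	/* 0xc000000010000000 */
		return 0;
	}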
+diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
+index c2d5192ed64f..e52e30bf7d86 100644
+--- a/arch/powerpc/net/bpf_jit.h
++++ b/arch/powerpc/net/bpf_jit.h
+@@ -51,6 +51,8 @@
+ #define PPC_LIS(r, i) PPC_ADDIS(r, 0, i)
+ #define PPC_STD(r, base, i) EMIT(PPC_INST_STD | ___PPC_RS(r) | \
+ ___PPC_RA(base) | ((i) & 0xfffc))
++#define PPC_STDX(r, base, b) EMIT(PPC_INST_STDX | ___PPC_RS(r) | \
++ ___PPC_RA(base) | ___PPC_RB(b))
+ #define PPC_STDU(r, base, i) EMIT(PPC_INST_STDU | ___PPC_RS(r) | \
+ ___PPC_RA(base) | ((i) & 0xfffc))
+ #define PPC_STW(r, base, i) EMIT(PPC_INST_STW | ___PPC_RS(r) | \
+@@ -65,7 +67,9 @@
+ #define PPC_LBZ(r, base, i) EMIT(PPC_INST_LBZ | ___PPC_RT(r) | \
+ ___PPC_RA(base) | IMM_L(i))
+ #define PPC_LD(r, base, i) EMIT(PPC_INST_LD | ___PPC_RT(r) | \
+- ___PPC_RA(base) | IMM_L(i))
++ ___PPC_RA(base) | ((i) & 0xfffc))
++#define PPC_LDX(r, base, b) EMIT(PPC_INST_LDX | ___PPC_RT(r) | \
++ ___PPC_RA(base) | ___PPC_RB(b))
+ #define PPC_LWZ(r, base, i) EMIT(PPC_INST_LWZ | ___PPC_RT(r) | \
+ ___PPC_RA(base) | IMM_L(i))
+ #define PPC_LHZ(r, base, i) EMIT(PPC_INST_LHZ | ___PPC_RT(r) | \
+@@ -85,17 +89,6 @@
+ ___PPC_RA(a) | ___PPC_RB(b))
+ #define PPC_BPF_STDCX(s, a, b) EMIT(PPC_INST_STDCX | ___PPC_RS(s) | \
+ ___PPC_RA(a) | ___PPC_RB(b))
+-
+-#ifdef CONFIG_PPC64
+-#define PPC_BPF_LL(r, base, i) do { PPC_LD(r, base, i); } while(0)
+-#define PPC_BPF_STL(r, base, i) do { PPC_STD(r, base, i); } while(0)
+-#define PPC_BPF_STLU(r, base, i) do { PPC_STDU(r, base, i); } while(0)
+-#else
+-#define PPC_BPF_LL(r, base, i) do { PPC_LWZ(r, base, i); } while(0)
+-#define PPC_BPF_STL(r, base, i) do { PPC_STW(r, base, i); } while(0)
+-#define PPC_BPF_STLU(r, base, i) do { PPC_STWU(r, base, i); } while(0)
+-#endif
+-
+ #define PPC_CMPWI(a, i) EMIT(PPC_INST_CMPWI | ___PPC_RA(a) | IMM_L(i))
+ #define PPC_CMPDI(a, i) EMIT(PPC_INST_CMPDI | ___PPC_RA(a) | IMM_L(i))
+ #define PPC_CMPW(a, b) EMIT(PPC_INST_CMPW | ___PPC_RA(a) | \
+diff --git a/arch/powerpc/net/bpf_jit32.h b/arch/powerpc/net/bpf_jit32.h
+index 6f4daacad296..ade04547703f 100644
+--- a/arch/powerpc/net/bpf_jit32.h
++++ b/arch/powerpc/net/bpf_jit32.h
+@@ -123,6 +123,10 @@ DECLARE_LOAD_FUNC(sk_load_byte_msh);
+ #define PPC_NTOHS_OFFS(r, base, i) PPC_LHZ_OFFS(r, base, i)
+ #endif
+
++#define PPC_BPF_LL(r, base, i) do { PPC_LWZ(r, base, i); } while(0)
++#define PPC_BPF_STL(r, base, i) do { PPC_STW(r, base, i); } while(0)
++#define PPC_BPF_STLU(r, base, i) do { PPC_STWU(r, base, i); } while(0)
++
+ #define SEEN_DATAREF 0x10000 /* might call external helpers */
+ #define SEEN_XREG 0x20000 /* X reg is used */
+ #define SEEN_MEM 0x40000 /* SEEN_MEM+(1<<n) = use mem[n] for temporary
+diff --git a/arch/powerpc/net/bpf_jit64.h b/arch/powerpc/net/bpf_jit64.h
+index 3609be4692b3..47f441f351a6 100644
+--- a/arch/powerpc/net/bpf_jit64.h
++++ b/arch/powerpc/net/bpf_jit64.h
+@@ -68,6 +68,26 @@ static const int b2p[] = {
+ /* PPC NVR range -- update this if we ever use NVRs below r27 */
+ #define BPF_PPC_NVR_MIN 27
+
++/*
++ * WARNING: These can use TMP_REG_2 if the offset is not at a word boundary,
++ * so ensure that it isn't in use already.
++ */
++#define PPC_BPF_LL(r, base, i) do { \
++ if ((i) % 4) { \
++ PPC_LI(b2p[TMP_REG_2], (i)); \
++ PPC_LDX(r, base, b2p[TMP_REG_2]); \
++ } else \
++ PPC_LD(r, base, i); \
++ } while(0)
++#define PPC_BPF_STL(r, base, i) do { \
++ if ((i) % 4) { \
++ PPC_LI(b2p[TMP_REG_2], (i)); \
++ PPC_STDX(r, base, b2p[TMP_REG_2]); \
++ } else \
++ PPC_STD(r, base, i); \
++ } while(0)
++#define PPC_BPF_STLU(r, base, i) do { PPC_STDU(r, base, i); } while(0)
++
+ #define SEEN_FUNC 0x1000 /* might call external helpers */
+ #define SEEN_STACK 0x2000 /* uses BPF stack */
+ #define SEEN_TAILCALL 0x4000 /* uses tail calls */
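Background for PPC_BPF_LL/PPC_BPF_STL above: ld and std are DS-form
instructions, and the low two bits of their displacement field belong
to the opcode, so only offsets that are multiples of 4 can be encoded;
any other offset has to be placed in a register and accessed via the
X-form ldx/stdx, which is why TMP_REG_2 gets clobbered. A toy emitter
showing the same dispatch -- the emit_* helpers just print and are not
the JIT's EMIT() machinery:

	#include <stdio.h>

	#define TMP_REG_2 10	/* illustrative scratch register number */

	static void emit_ld(int rt, int base, int off)
	{
		printf("ld r%d,%d(r%d)\n", rt, off, base);
	}
	static void emit_li(int rt, int imm)  { printf("li r%d,%d\n", rt, imm); }
	static void emit_ldx(int rt, int base, int rb)
	{
		printf("ldx r%d,r%d,r%d\n", rt, base, rb);
	}

	/* Mirrors PPC_BPF_LL(): DS-form ld needs off % 4 == 0. */
	static void bpf_load64(int rt, int base, int off)
	{
		if (off % 4) {
			emit_li(TMP_REG_2, off);
			emit_ldx(rt, base, TMP_REG_2);
		} else {
			emit_ld(rt, base, off);
		}
	}

	int main(void)
	{
		bpf_load64(3, 1, 16);	/* aligned   -> ld       */
		bpf_load64(3, 1, 18);	/* unaligned -> li + ldx */
		return 0;
	}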
+diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
+index 7ce57657d3b8..b1a116eecae2 100644
+--- a/arch/powerpc/net/bpf_jit_comp64.c
++++ b/arch/powerpc/net/bpf_jit_comp64.c
+@@ -252,7 +252,7 @@ static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32
+ * if (tail_call_cnt > MAX_TAIL_CALL_CNT)
+ * goto out;
+ */
+- PPC_LD(b2p[TMP_REG_1], 1, bpf_jit_stack_tailcallcnt(ctx));
++ PPC_BPF_LL(b2p[TMP_REG_1], 1, bpf_jit_stack_tailcallcnt(ctx));
+ PPC_CMPLWI(b2p[TMP_REG_1], MAX_TAIL_CALL_CNT);
+ PPC_BCC(COND_GT, out);
+
+@@ -265,7 +265,7 @@ static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32
+ /* prog = array->ptrs[index]; */
+ PPC_MULI(b2p[TMP_REG_1], b2p_index, 8);
+ PPC_ADD(b2p[TMP_REG_1], b2p[TMP_REG_1], b2p_bpf_array);
+- PPC_LD(b2p[TMP_REG_1], b2p[TMP_REG_1], offsetof(struct bpf_array, ptrs));
++ PPC_BPF_LL(b2p[TMP_REG_1], b2p[TMP_REG_1], offsetof(struct bpf_array, ptrs));
+
+ /*
+ * if (prog == NULL)
+@@ -275,7 +275,7 @@ static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32
+ PPC_BCC(COND_EQ, out);
+
+ /* goto *(prog->bpf_func + prologue_size); */
+- PPC_LD(b2p[TMP_REG_1], b2p[TMP_REG_1], offsetof(struct bpf_prog, bpf_func));
++ PPC_BPF_LL(b2p[TMP_REG_1], b2p[TMP_REG_1], offsetof(struct bpf_prog, bpf_func));
+ #ifdef PPC64_ELF_ABI_v1
+ /* skip past the function descriptor */
+ PPC_ADDI(b2p[TMP_REG_1], b2p[TMP_REG_1],
+@@ -606,7 +606,7 @@ bpf_alu32_trunc:
+ * the instructions generated will remain the
+ * same across all passes
+ */
+- PPC_STD(dst_reg, 1, bpf_jit_stack_local(ctx));
++ PPC_BPF_STL(dst_reg, 1, bpf_jit_stack_local(ctx));
+ PPC_ADDI(b2p[TMP_REG_1], 1, bpf_jit_stack_local(ctx));
+ PPC_LDBRX(dst_reg, 0, b2p[TMP_REG_1]);
+ break;
+@@ -662,7 +662,7 @@ emit_clear:
+ PPC_LI32(b2p[TMP_REG_1], imm);
+ src_reg = b2p[TMP_REG_1];
+ }
+- PPC_STD(src_reg, dst_reg, off);
++ PPC_BPF_STL(src_reg, dst_reg, off);
+ break;
+
+ /*
+@@ -709,7 +709,7 @@ emit_clear:
+ break;
+ /* dst = *(u64 *)(ul) (src + off) */
+ case BPF_LDX | BPF_MEM | BPF_DW:
+- PPC_LD(dst_reg, src_reg, off);
++ PPC_BPF_LL(dst_reg, src_reg, off);
+ break;
+
+ /*
+diff --git a/arch/powerpc/platforms/44x/Kconfig b/arch/powerpc/platforms/44x/Kconfig
+index 4a9a72d01c3c..35be81fd2dc2 100644
+--- a/arch/powerpc/platforms/44x/Kconfig
++++ b/arch/powerpc/platforms/44x/Kconfig
+@@ -180,6 +180,7 @@ config CURRITUCK
+ depends on PPC_47x
+ select SWIOTLB
+ select 476FPE
++ select FORCE_PCI
+ select PPC4xx_PCI_EXPRESS
+ help
+ This option enables support for the IBM Currituck (476fpe) evaluation board
+diff --git a/arch/powerpc/platforms/83xx/suspend-asm.S b/arch/powerpc/platforms/83xx/suspend-asm.S
+index 3d1ecd211776..8137f77abad5 100644
+--- a/arch/powerpc/platforms/83xx/suspend-asm.S
++++ b/arch/powerpc/platforms/83xx/suspend-asm.S
+@@ -26,13 +26,13 @@
+ #define SS_MSR 0x74
+ #define SS_SDR1 0x78
+ #define SS_LR 0x7c
+-#define SS_SPRG 0x80 /* 4 SPRGs */
+-#define SS_DBAT 0x90 /* 8 DBATs */
+-#define SS_IBAT 0xd0 /* 8 IBATs */
+-#define SS_TB 0x110
+-#define SS_CR 0x118
+-#define SS_GPREG 0x11c /* r12-r31 */
+-#define STATE_SAVE_SIZE 0x16c
++#define SS_SPRG 0x80 /* 8 SPRGs */
++#define SS_DBAT 0xa0 /* 8 DBATs */
++#define SS_IBAT 0xe0 /* 8 IBATs */
++#define SS_TB 0x120
++#define SS_CR 0x128
++#define SS_GPREG 0x12c /* r12-r31 */
++#define STATE_SAVE_SIZE 0x17c
+
+ .section .data
+ .align 5
+@@ -103,6 +103,16 @@ _GLOBAL(mpc83xx_enter_deep_sleep)
+ stw r7, SS_SPRG+12(r3)
+ stw r8, SS_SDR1(r3)
+
++ mfspr r4, SPRN_SPRG4
++ mfspr r5, SPRN_SPRG5
++ mfspr r6, SPRN_SPRG6
++ mfspr r7, SPRN_SPRG7
++
++ stw r4, SS_SPRG+16(r3)
++ stw r5, SS_SPRG+20(r3)
++ stw r6, SS_SPRG+24(r3)
++ stw r7, SS_SPRG+28(r3)
++
+ mfspr r4, SPRN_DBAT0U
+ mfspr r5, SPRN_DBAT0L
+ mfspr r6, SPRN_DBAT1U
+@@ -493,6 +503,16 @@ mpc83xx_deep_resume:
+ mtspr SPRN_IBAT7U, r6
+ mtspr SPRN_IBAT7L, r7
+
++ lwz r4, SS_SPRG+16(r3)
++ lwz r5, SS_SPRG+20(r3)
++ lwz r6, SS_SPRG+24(r3)
++ lwz r7, SS_SPRG+28(r3)
++
++ mtspr SPRN_SPRG4, r4
++ mtspr SPRN_SPRG5, r5
++ mtspr SPRN_SPRG6, r6
++ mtspr SPRN_SPRG7, r7
++
+ lwz r4, SS_SPRG+0(r3)
+ lwz r5, SS_SPRG+4(r3)
+ lwz r6, SS_SPRG+8(r3)
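The offset shuffle above follows mechanically from growing SS_SPRG
from four to eight 4-byte slots: everything behind it moves up by
0x10, and STATE_SAVE_SIZE grows to match. One way to sanity-check the
new numbers is to mirror the save area as a C struct and assert the
offsets; the head[] member below just condenses the fields before
SS_MSR and is only a stand-in:

	#include <assert.h>
	#include <stddef.h>

	struct mpc83xx_state {
		unsigned int head[29];		/* 0x00-0x73, condensed */
		unsigned int msr, sdr1, lr;	/* 0x74, 0x78, 0x7c */
		unsigned int sprg[8];		/* SS_SPRG  = 0x80 (was 4 slots) */
		unsigned int dbat[8][2];	/* SS_DBAT  = 0xa0 */
		unsigned int ibat[8][2];	/* SS_IBAT  = 0xe0 */
		unsigned int tb[2];		/* SS_TB    = 0x120 */
		unsigned int cr;		/* SS_CR    = 0x128 */
		unsigned int gpreg[20];		/* SS_GPREG = 0x12c, r12-r31 */
	};

	int main(void)
	{
		assert(offsetof(struct mpc83xx_state, sprg) == 0x80);
		assert(offsetof(struct mpc83xx_state, dbat) == 0xa0);
		assert(offsetof(struct mpc83xx_state, ibat) == 0xe0);
		assert(offsetof(struct mpc83xx_state, tb)   == 0x120);
		assert(sizeof(struct mpc83xx_state) == 0x17c);	/* STATE_SAVE_SIZE */
		return 0;
	}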
+diff --git a/arch/powerpc/platforms/embedded6xx/wii.c b/arch/powerpc/platforms/embedded6xx/wii.c
+index ecf703ee3a76..ac4ee88efc80 100644
+--- a/arch/powerpc/platforms/embedded6xx/wii.c
++++ b/arch/powerpc/platforms/embedded6xx/wii.c
+@@ -83,6 +83,10 @@ unsigned long __init wii_mmu_mapin_mem2(unsigned long top)
+ /* MEM2 64MB@0x10000000 */
+ delta = wii_hole_start + wii_hole_size;
+ size = top - delta;
++
++ if (__map_without_bats)
++ return delta;
++
+ for (bl = 128<<10; bl < max_size; bl <<= 1) {
+ if (bl * 2 > size)
+ break;
+diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c
+index 35f699ebb662..e52f9b06dd9c 100644
+--- a/arch/powerpc/platforms/powernv/idle.c
++++ b/arch/powerpc/platforms/powernv/idle.c
+@@ -458,7 +458,8 @@ EXPORT_SYMBOL_GPL(pnv_power9_force_smt4_release);
+ #endif /* CONFIG_KVM_BOOK3S_HV_POSSIBLE */
+
+ #ifdef CONFIG_HOTPLUG_CPU
+-static void pnv_program_cpu_hotplug_lpcr(unsigned int cpu, u64 lpcr_val)
++
++void pnv_program_cpu_hotplug_lpcr(unsigned int cpu, u64 lpcr_val)
+ {
+ u64 pir = get_hard_smp_processor_id(cpu);
+
+@@ -481,20 +482,6 @@ unsigned long pnv_cpu_offline(unsigned int cpu)
+ {
+ unsigned long srr1;
+ u32 idle_states = pnv_get_supported_cpuidle_states();
+- u64 lpcr_val;
+-
+- /*
+- * We don't want to take decrementer interrupts while we are
+- * offline, so clear LPCR:PECE1. We keep PECE2 (and
+- * LPCR_PECE_HVEE on P9) enabled as to let IPIs in.
+- *
+- * If the CPU gets woken up by a special wakeup, ensure that
+- * the SLW engine sets LPCR with decrementer bit cleared, else
+- * the CPU will come back to the kernel due to a spurious
+- * wakeup.
+- */
+- lpcr_val = mfspr(SPRN_LPCR) & ~(u64)LPCR_PECE1;
+- pnv_program_cpu_hotplug_lpcr(cpu, lpcr_val);
+
+ __ppc64_runlatch_off();
+
+@@ -526,16 +513,6 @@ unsigned long pnv_cpu_offline(unsigned int cpu)
+
+ __ppc64_runlatch_on();
+
+- /*
+- * Re-enable decrementer interrupts in LPCR.
+- *
+- * Further, we want stop states to be woken up by decrementer
+- * for non-hotplug cases. So program the LPCR via stop api as
+- * well.
+- */
+- lpcr_val = mfspr(SPRN_LPCR) | (u64)LPCR_PECE1;
+- pnv_program_cpu_hotplug_lpcr(cpu, lpcr_val);
+-
+ return srr1;
+ }
+ #endif
+diff --git a/arch/powerpc/platforms/powernv/opal-msglog.c b/arch/powerpc/platforms/powernv/opal-msglog.c
+index acd3206dfae3..06628c71cef6 100644
+--- a/arch/powerpc/platforms/powernv/opal-msglog.c
++++ b/arch/powerpc/platforms/powernv/opal-msglog.c
+@@ -98,7 +98,7 @@ static ssize_t opal_msglog_read(struct file *file, struct kobject *kobj,
+ }
+
+ static struct bin_attribute opal_msglog_attr = {
+- .attr = {.name = "msglog", .mode = 0444},
++ .attr = {.name = "msglog", .mode = 0400},
+ .read = opal_msglog_read
+ };
+
+diff --git a/arch/powerpc/platforms/powernv/pci-ioda-tce.c b/arch/powerpc/platforms/powernv/pci-ioda-tce.c
+index 697449afb3f7..e28f03e1eb5e 100644
+--- a/arch/powerpc/platforms/powernv/pci-ioda-tce.c
++++ b/arch/powerpc/platforms/powernv/pci-ioda-tce.c
+@@ -313,7 +313,6 @@ long pnv_pci_ioda2_table_alloc_pages(int nid, __u64 bus_offset,
+ page_shift);
+ tbl->it_level_size = 1ULL << (level_shift - 3);
+ tbl->it_indirect_levels = levels - 1;
+- tbl->it_allocated_size = total_allocated;
+ tbl->it_userspace = uas;
+ tbl->it_nid = nid;
+
+diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
+index 145373f0e5dc..2d62c58f9a4c 100644
+--- a/arch/powerpc/platforms/powernv/pci-ioda.c
++++ b/arch/powerpc/platforms/powernv/pci-ioda.c
+@@ -2594,8 +2594,13 @@ static long pnv_pci_ioda2_create_table_userspace(
+ int num, __u32 page_shift, __u64 window_size, __u32 levels,
+ struct iommu_table **ptbl)
+ {
+- return pnv_pci_ioda2_create_table(table_group,
++ long ret = pnv_pci_ioda2_create_table(table_group,
+ num, page_shift, window_size, levels, true, ptbl);
++
++ if (!ret)
++ (*ptbl)->it_allocated_size = pnv_pci_ioda2_get_table_size(
++ page_shift, window_size, levels);
++ return ret;
+ }
+
+ static void pnv_ioda2_take_ownership(struct iommu_table_group *table_group)
+diff --git a/arch/powerpc/platforms/powernv/smp.c b/arch/powerpc/platforms/powernv/smp.c
+index 0d354e19ef92..db09c7022635 100644
+--- a/arch/powerpc/platforms/powernv/smp.c
++++ b/arch/powerpc/platforms/powernv/smp.c
+@@ -39,6 +39,7 @@
+ #include <asm/cpuidle.h>
+ #include <asm/kexec.h>
+ #include <asm/reg.h>
++#include <asm/powernv.h>
+
+ #include "powernv.h"
+
+@@ -153,6 +154,7 @@ static void pnv_smp_cpu_kill_self(void)
+ {
+ unsigned int cpu;
+ unsigned long srr1, wmask;
++ u64 lpcr_val;
+
+ /* Standard hot unplug procedure */
+ /*
+@@ -174,6 +176,19 @@ static void pnv_smp_cpu_kill_self(void)
+ if (cpu_has_feature(CPU_FTR_ARCH_207S))
+ wmask = SRR1_WAKEMASK_P8;
+
++ /*
++ * We don't want to take decrementer interrupts while we are
++ * offline, so clear LPCR:PECE1. We keep PECE2 (and
++ * LPCR_PECE_HVEE on P9) enabled so as to let IPIs in.
++ *
++ * If the CPU gets woken up by a special wakeup, ensure that
++ * the SLW engine sets LPCR with decrementer bit cleared, else
++ * the CPU will come back to the kernel due to a spurious
++ * wakeup.
++ */
++ lpcr_val = mfspr(SPRN_LPCR) & ~(u64)LPCR_PECE1;
++ pnv_program_cpu_hotplug_lpcr(cpu, lpcr_val);
++
+ while (!generic_check_cpu_restart(cpu)) {
+ /*
+ * Clear IPI flag, since we don't handle IPIs while
+@@ -246,6 +261,16 @@ static void pnv_smp_cpu_kill_self(void)
+
+ }
+
++ /*
++ * Re-enable decrementer interrupts in LPCR.
++ *
++ * Further, we want stop states to be woken up by decrementer
++ * for non-hotplug cases. So program the LPCR via stop api as
++ * well.
++ */
++ lpcr_val = mfspr(SPRN_LPCR) | (u64)LPCR_PECE1;
++ pnv_program_cpu_hotplug_lpcr(cpu, lpcr_val);
++
+ DBG("CPU%d coming online...\n", cpu);
+ }
+
+diff --git a/arch/powerpc/platforms/pseries/hotplug-cpu.c b/arch/powerpc/platforms/pseries/hotplug-cpu.c
+index 2f8e62163602..97feb6e79f1a 100644
+--- a/arch/powerpc/platforms/pseries/hotplug-cpu.c
++++ b/arch/powerpc/platforms/pseries/hotplug-cpu.c
+@@ -802,6 +802,25 @@ static int dlpar_cpu_add_by_count(u32 cpus_to_add)
+ return rc;
+ }
+
++int dlpar_cpu_readd(int cpu)
++{
++ struct device_node *dn;
++ struct device *dev;
++ u32 drc_index;
++ int rc;
++
++ dev = get_cpu_device(cpu);
++ dn = dev->of_node;
++
++ rc = of_property_read_u32(dn, "ibm,my-drc-index", &drc_index);
++
++ rc = dlpar_cpu_remove_by_index(drc_index);
++ if (!rc)
++ rc = dlpar_cpu_add(drc_index);
++
++ return rc;
++}
++
+ int dlpar_cpu(struct pseries_hp_errorlog *hp_elog)
+ {
+ u32 count, drc_index;
+diff --git a/arch/powerpc/platforms/pseries/pseries_energy.c b/arch/powerpc/platforms/pseries/pseries_energy.c
+index 6ed22127391b..921f12182f3e 100644
+--- a/arch/powerpc/platforms/pseries/pseries_energy.c
++++ b/arch/powerpc/platforms/pseries/pseries_energy.c
+@@ -77,18 +77,27 @@ static u32 cpu_to_drc_index(int cpu)
+
+ ret = drc.drc_index_start + (thread_index * drc.sequential_inc);
+ } else {
+- const __be32 *indexes;
+-
+- indexes = of_get_property(dn, "ibm,drc-indexes", NULL);
+- if (indexes == NULL)
+- goto err_of_node_put;
++ u32 nr_drc_indexes, thread_drc_index;
+
+ /*
+- * The first element indexes[0] is the number of drc_indexes
+- * returned in the list. Hence thread_index+1 will get the
+- * drc_index corresponding to core number thread_index.
++	 * The first element of the ibm,drc-indexes array is the
++ * number of drc_indexes returned in the list. Hence
++ * thread_index+1 will get the drc_index corresponding
++ * to core number thread_index.
+ */
+- ret = indexes[thread_index + 1];
++ rc = of_property_read_u32_index(dn, "ibm,drc-indexes",
++ 0, &nr_drc_indexes);
++ if (rc)
++ goto err_of_node_put;
++
++ WARN_ON_ONCE(thread_index > nr_drc_indexes);
++ rc = of_property_read_u32_index(dn, "ibm,drc-indexes",
++ thread_index + 1,
++ &thread_drc_index);
++ if (rc)
++ goto err_of_node_put;
++
++ ret = thread_drc_index;
+ }
+
+ rc = 0;
+diff --git a/arch/powerpc/platforms/pseries/ras.c b/arch/powerpc/platforms/pseries/ras.c
+index d97d52772789..452dcfd7e5dd 100644
+--- a/arch/powerpc/platforms/pseries/ras.c
++++ b/arch/powerpc/platforms/pseries/ras.c
+@@ -550,6 +550,7 @@ static void pseries_print_mce_info(struct pt_regs *regs,
+ "UE",
+ "SLB",
+ "ERAT",
++ "Unknown",
+ "TLB",
+ "D-Cache",
+ "Unknown",
+diff --git a/arch/powerpc/xmon/ppc-dis.c b/arch/powerpc/xmon/ppc-dis.c
+index 9deea5ee13f6..27f1e6415036 100644
+--- a/arch/powerpc/xmon/ppc-dis.c
++++ b/arch/powerpc/xmon/ppc-dis.c
+@@ -158,7 +158,7 @@ int print_insn_powerpc (unsigned long insn, unsigned long memaddr)
+ dialect |= (PPC_OPCODE_POWER5 | PPC_OPCODE_POWER6 | PPC_OPCODE_POWER7
+ | PPC_OPCODE_POWER8 | PPC_OPCODE_POWER9 | PPC_OPCODE_HTM
+ | PPC_OPCODE_ALTIVEC | PPC_OPCODE_ALTIVEC2
+- | PPC_OPCODE_VSX | PPC_OPCODE_VSX3),
++ | PPC_OPCODE_VSX | PPC_OPCODE_VSX3);
+
+ /* Get the major opcode of the insn. */
+ opcode = NULL;
+diff --git a/arch/riscv/include/asm/syscall.h b/arch/riscv/include/asm/syscall.h
+index bba3da6ef157..6ea9e1804233 100644
+--- a/arch/riscv/include/asm/syscall.h
++++ b/arch/riscv/include/asm/syscall.h
+@@ -79,10 +79,11 @@ static inline void syscall_get_arguments(struct task_struct *task,
+ if (i == 0) {
+ args[0] = regs->orig_a0;
+ args++;
+- i++;
+ n--;
++ } else {
++ i--;
+ }
+-	memcpy(args, &regs->a1 + i * sizeof(regs->a1), n * sizeof(args[0]));
++	memcpy(args, &regs->a1 + i, n * sizeof(args[0]));
+ }
+
+ static inline void syscall_set_arguments(struct task_struct *task,
+@@ -94,10 +95,11 @@ static inline void syscall_set_arguments(struct task_struct *task,
+ if (i == 0) {
+ regs->orig_a0 = args[0];
+ args++;
+- i++;
+ n--;
+- }
+-	memcpy(&regs->a1 + i * sizeof(regs->a1), args, n * sizeof(regs->a0));
++ } else {
++ i--;
++ }
++	memcpy(&regs->a1 + i, args, n * sizeof(regs->a1));
+ }
+
+ static inline int syscall_get_arch(void)
+diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
+index d5d24889c3bc..c2b8c8c6c9be 100644
+--- a/arch/s390/include/asm/kvm_host.h
++++ b/arch/s390/include/asm/kvm_host.h
+@@ -878,7 +878,7 @@ static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
+ static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
+ static inline void kvm_arch_free_memslot(struct kvm *kvm,
+ struct kvm_memory_slot *free, struct kvm_memory_slot *dont) {}
+-static inline void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots) {}
++static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
+ static inline void kvm_arch_flush_shadow_all(struct kvm *kvm) {}
+ static inline void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
+ struct kvm_memory_slot *slot) {}
+diff --git a/arch/s390/kernel/perf_cpum_sf.c b/arch/s390/kernel/perf_cpum_sf.c
+index bfabeb1889cc..1266194afb02 100644
+--- a/arch/s390/kernel/perf_cpum_sf.c
++++ b/arch/s390/kernel/perf_cpum_sf.c
+@@ -1600,7 +1600,7 @@ static void aux_sdb_init(unsigned long sdb)
+
+ /*
+ * aux_buffer_setup() - Setup AUX buffer for diagnostic mode sampling
+- * @cpu: On which to allocate, -1 means current
++ * @event: Event the buffer is set up for, event->cpu == -1 means current
+ * @pages: Array of pointers to buffer pages passed from perf core
+ * @nr_pages: Total pages
+ * @snapshot: Flag for snapshot mode
+@@ -1612,8 +1612,8 @@ static void aux_sdb_init(unsigned long sdb)
+ *
+ * Return the private AUX buffer structure if success or NULL if fails.
+ */
+-static void *aux_buffer_setup(int cpu, void **pages, int nr_pages,
+- bool snapshot)
++static void *aux_buffer_setup(struct perf_event *event, void **pages,
++ int nr_pages, bool snapshot)
+ {
+ struct sf_buffer *sfb;
+ struct aux_buffer *aux;
+diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
+index 7ed90a759135..01a3f4964d57 100644
+--- a/arch/s390/kernel/setup.c
++++ b/arch/s390/kernel/setup.c
+@@ -369,7 +369,7 @@ void __init arch_call_rest_init(void)
+ : : [_frame] "a" (frame));
+ }
+
+-static void __init setup_lowcore(void)
++static void __init setup_lowcore_dat_off(void)
+ {
+ struct lowcore *lc;
+
+@@ -380,19 +380,16 @@ static void __init setup_lowcore(void)
+ lc = memblock_alloc_low(sizeof(*lc), sizeof(*lc));
+ lc->restart_psw.mask = PSW_KERNEL_BITS;
+ lc->restart_psw.addr = (unsigned long) restart_int_handler;
+- lc->external_new_psw.mask = PSW_KERNEL_BITS |
+- PSW_MASK_DAT | PSW_MASK_MCHECK;
++ lc->external_new_psw.mask = PSW_KERNEL_BITS | PSW_MASK_MCHECK;
+ lc->external_new_psw.addr = (unsigned long) ext_int_handler;
+ lc->svc_new_psw.mask = PSW_KERNEL_BITS |
+- PSW_MASK_DAT | PSW_MASK_IO | PSW_MASK_EXT | PSW_MASK_MCHECK;
++ PSW_MASK_IO | PSW_MASK_EXT | PSW_MASK_MCHECK;
+ lc->svc_new_psw.addr = (unsigned long) system_call;
+- lc->program_new_psw.mask = PSW_KERNEL_BITS |
+- PSW_MASK_DAT | PSW_MASK_MCHECK;
++ lc->program_new_psw.mask = PSW_KERNEL_BITS | PSW_MASK_MCHECK;
+ lc->program_new_psw.addr = (unsigned long) pgm_check_handler;
+ lc->mcck_new_psw.mask = PSW_KERNEL_BITS;
+ lc->mcck_new_psw.addr = (unsigned long) mcck_int_handler;
+- lc->io_new_psw.mask = PSW_KERNEL_BITS |
+- PSW_MASK_DAT | PSW_MASK_MCHECK;
++ lc->io_new_psw.mask = PSW_KERNEL_BITS | PSW_MASK_MCHECK;
+ lc->io_new_psw.addr = (unsigned long) io_int_handler;
+ lc->clock_comparator = clock_comparator_max;
+ lc->nodat_stack = ((unsigned long) &init_thread_union)
+@@ -452,6 +449,16 @@ static void __init setup_lowcore(void)
+ lowcore_ptr[0] = lc;
+ }
+
++static void __init setup_lowcore_dat_on(void)
++{
++ __ctl_clear_bit(0, 28);
++ S390_lowcore.external_new_psw.mask |= PSW_MASK_DAT;
++ S390_lowcore.svc_new_psw.mask |= PSW_MASK_DAT;
++ S390_lowcore.program_new_psw.mask |= PSW_MASK_DAT;
++ S390_lowcore.io_new_psw.mask |= PSW_MASK_DAT;
++ __ctl_set_bit(0, 28);
++}
++
+ static struct resource code_resource = {
+ .name = "Kernel code",
+ .flags = IORESOURCE_BUSY | IORESOURCE_SYSTEM_RAM,
+@@ -1072,7 +1079,7 @@ void __init setup_arch(char **cmdline_p)
+ #endif
+
+ setup_resources();
+- setup_lowcore();
++ setup_lowcore_dat_off();
+ smp_fill_possible_mask();
+ cpu_detect_mhz_feature();
+ cpu_init();
+@@ -1085,6 +1092,12 @@ void __init setup_arch(char **cmdline_p)
+ */
+ paging_init();
+
++ /*
++ * After paging_init created the kernel page table, the new PSWs
++ * in lowcore can now run with DAT enabled.
++ */
++ setup_lowcore_dat_on();
++
+ /* Setup default console */
+ conmode_default();
+ set_preferred_console();
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 68261430fe6e..64d5a3327030 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -2221,14 +2221,8 @@ config RANDOMIZE_MEMORY_PHYSICAL_PADDING
+ If unsure, leave at the default value.
+
+ config HOTPLUG_CPU
+- bool "Support for hot-pluggable CPUs"
++ def_bool y
+ depends on SMP
+- ---help---
+- Say Y here to allow turning CPUs off and on. CPUs can be
+- controlled through /sys/devices/system/cpu.
+- ( Note: power management support will enable this option
+- automatically on SMP systems. )
+- Say N if you want to disable CPU hotplug.
+
+ config BOOTPARAM_HOTPLUG_CPU0
+ bool "Set default setting of cpu0_hotpluggable"
+diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile
+index 9b5adae9cc40..e2839b5c246c 100644
+--- a/arch/x86/boot/Makefile
++++ b/arch/x86/boot/Makefile
+@@ -100,7 +100,7 @@ $(obj)/zoffset.h: $(obj)/compressed/vmlinux FORCE
+ AFLAGS_header.o += -I$(objtree)/$(obj)
+ $(obj)/header.o: $(obj)/zoffset.h
+
+-LDFLAGS_setup.elf := -T
++LDFLAGS_setup.elf := -m elf_i386 -T
+ $(obj)/setup.elf: $(src)/setup.ld $(SETUP_OBJS) FORCE
+ $(call if_changed,ld)
+
+diff --git a/arch/x86/boot/compressed/pgtable_64.c b/arch/x86/boot/compressed/pgtable_64.c
+index 9e2157371491..f8debf7aeb4c 100644
+--- a/arch/x86/boot/compressed/pgtable_64.c
++++ b/arch/x86/boot/compressed/pgtable_64.c
+@@ -1,5 +1,7 @@
++#include <linux/efi.h>
+ #include <asm/e820/types.h>
+ #include <asm/processor.h>
++#include <asm/efi.h>
+ #include "pgtable.h"
+ #include "../string.h"
+
+@@ -37,9 +39,10 @@ int cmdline_find_option_bool(const char *option);
+
+ static unsigned long find_trampoline_placement(void)
+ {
+- unsigned long bios_start, ebda_start;
++ unsigned long bios_start = 0, ebda_start = 0;
+ unsigned long trampoline_start;
+ struct boot_e820_entry *entry;
++ char *signature;
+ int i;
+
+ /*
+@@ -47,8 +50,18 @@ static unsigned long find_trampoline_placement(void)
+ * This code is based on reserve_bios_regions().
+ */
+
+- ebda_start = *(unsigned short *)0x40e << 4;
+- bios_start = *(unsigned short *)0x413 << 10;
++ /*
++ * EFI systems may not provide legacy ROM. The memory may not be mapped
++ * at all.
++ *
++	 * Only look for values in the legacy ROM for non-EFI systems.
++ */
++ signature = (char *)&boot_params->efi_info.efi_loader_signature;
++ if (strncmp(signature, EFI32_LOADER_SIGNATURE, 4) &&
++ strncmp(signature, EFI64_LOADER_SIGNATURE, 4)) {
++ ebda_start = *(unsigned short *)0x40e << 4;
++ bios_start = *(unsigned short *)0x413 << 10;
++ }
+
+ if (bios_start < BIOS_START_MIN || bios_start > BIOS_START_MAX)
+ bios_start = BIOS_START_MAX;
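The trampoline fix keys off the EFI loader signature that the stub
records in boot_params: under EFI firmware the BIOS data area words at
0x40e and 0x413 may be unmapped or contain junk, so they are read only
when neither signature matches. The shape of the predicate, with the
signature strings as I recall them from <linux/efi.h>; treat the exact
values as an assumption:

	#include <stdio.h>
	#include <string.h>

	#define EFI32_LOADER_SIGNATURE "EL32"	/* assumed, see note above */
	#define EFI64_LOADER_SIGNATURE "EL64"	/* assumed, see note above */

	/* Non-zero when not booted via EFI, i.e. the BDA may be trusted. */
	static int can_read_legacy_bda(const char *sig)
	{
		return strncmp(sig, EFI32_LOADER_SIGNATURE, 4) &&
		       strncmp(sig, EFI64_LOADER_SIGNATURE, 4);
	}

	int main(void)
	{
		printf("%d\n", can_read_legacy_bda("EL64"));	 /* 0: skip BDA */
		printf("%d\n", can_read_legacy_bda("\0\0\0\0")); /* 1: legacy  */
		return 0;
	}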
+diff --git a/arch/x86/crypto/aegis128-aesni-glue.c b/arch/x86/crypto/aegis128-aesni-glue.c
+index 2a356b948720..3ea71b871813 100644
+--- a/arch/x86/crypto/aegis128-aesni-glue.c
++++ b/arch/x86/crypto/aegis128-aesni-glue.c
+@@ -119,31 +119,20 @@ static void crypto_aegis128_aesni_process_ad(
+ }
+
+ static void crypto_aegis128_aesni_process_crypt(
+- struct aegis_state *state, struct aead_request *req,
++ struct aegis_state *state, struct skcipher_walk *walk,
+ const struct aegis_crypt_ops *ops)
+ {
+- struct skcipher_walk walk;
+- u8 *src, *dst;
+- unsigned int chunksize, base;
+-
+- ops->skcipher_walk_init(&walk, req, false);
+-
+- while (walk.nbytes) {
+- src = walk.src.virt.addr;
+- dst = walk.dst.virt.addr;
+- chunksize = walk.nbytes;
+-
+- ops->crypt_blocks(state, chunksize, src, dst);
+-
+- base = chunksize & ~(AEGIS128_BLOCK_SIZE - 1);
+- src += base;
+- dst += base;
+- chunksize &= AEGIS128_BLOCK_SIZE - 1;
+-
+- if (chunksize > 0)
+- ops->crypt_tail(state, chunksize, src, dst);
++ while (walk->nbytes >= AEGIS128_BLOCK_SIZE) {
++ ops->crypt_blocks(state,
++ round_down(walk->nbytes, AEGIS128_BLOCK_SIZE),
++ walk->src.virt.addr, walk->dst.virt.addr);
++ skcipher_walk_done(walk, walk->nbytes % AEGIS128_BLOCK_SIZE);
++ }
+
+- skcipher_walk_done(&walk, 0);
++ if (walk->nbytes) {
++ ops->crypt_tail(state, walk->nbytes, walk->src.virt.addr,
++ walk->dst.virt.addr);
++ skcipher_walk_done(walk, 0);
+ }
+ }
+
+@@ -186,13 +175,16 @@ static void crypto_aegis128_aesni_crypt(struct aead_request *req,
+ {
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ struct aegis_ctx *ctx = crypto_aegis128_aesni_ctx(tfm);
++ struct skcipher_walk walk;
+ struct aegis_state state;
+
++ ops->skcipher_walk_init(&walk, req, true);
++
+ kernel_fpu_begin();
+
+ crypto_aegis128_aesni_init(&state, ctx->key.bytes, req->iv);
+ crypto_aegis128_aesni_process_ad(&state, req->src, req->assoclen);
+- crypto_aegis128_aesni_process_crypt(&state, req, ops);
++ crypto_aegis128_aesni_process_crypt(&state, &walk, ops);
+ crypto_aegis128_aesni_final(&state, tag_xor, req->assoclen, cryptlen);
+
+ kernel_fpu_end();
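The rewritten walk above hands whole blocks to crypt_blocks() and the remainder to crypt_tail(). A minimal user-space sketch of that split, with round_down() re-implemented for plain C (the kernel macro masks with a power-of-two divisor, which is equivalent here):

    #include <stdio.h>

    #define BLOCK_SIZE 16
    #define round_down(x, y) ((x) - ((x) % (y)))

    int main(void)
    {
            unsigned int nbytes = 100;
            unsigned int full = round_down(nbytes, BLOCK_SIZE);

            /* 96 bytes go to crypt_blocks(), the last 4 to crypt_tail() */
            printf("blocks: %u, tail: %u\n", full, nbytes - full);
            return 0;
    }

The same pattern repeats in the aegis128l, aegis256, morus1280 and morus640 glue code below.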
+diff --git a/arch/x86/crypto/aegis128l-aesni-glue.c b/arch/x86/crypto/aegis128l-aesni-glue.c
+index dbe8bb980da1..1b1b39c66c5e 100644
+--- a/arch/x86/crypto/aegis128l-aesni-glue.c
++++ b/arch/x86/crypto/aegis128l-aesni-glue.c
+@@ -119,31 +119,20 @@ static void crypto_aegis128l_aesni_process_ad(
+ }
+
+ static void crypto_aegis128l_aesni_process_crypt(
+- struct aegis_state *state, struct aead_request *req,
++ struct aegis_state *state, struct skcipher_walk *walk,
+ const struct aegis_crypt_ops *ops)
+ {
+- struct skcipher_walk walk;
+- u8 *src, *dst;
+- unsigned int chunksize, base;
+-
+- ops->skcipher_walk_init(&walk, req, false);
+-
+- while (walk.nbytes) {
+- src = walk.src.virt.addr;
+- dst = walk.dst.virt.addr;
+- chunksize = walk.nbytes;
+-
+- ops->crypt_blocks(state, chunksize, src, dst);
+-
+- base = chunksize & ~(AEGIS128L_BLOCK_SIZE - 1);
+- src += base;
+- dst += base;
+- chunksize &= AEGIS128L_BLOCK_SIZE - 1;
+-
+- if (chunksize > 0)
+- ops->crypt_tail(state, chunksize, src, dst);
++ while (walk->nbytes >= AEGIS128L_BLOCK_SIZE) {
++ ops->crypt_blocks(state, round_down(walk->nbytes,
++ AEGIS128L_BLOCK_SIZE),
++ walk->src.virt.addr, walk->dst.virt.addr);
++ skcipher_walk_done(walk, walk->nbytes % AEGIS128L_BLOCK_SIZE);
++ }
+
+- skcipher_walk_done(&walk, 0);
++ if (walk->nbytes) {
++ ops->crypt_tail(state, walk->nbytes, walk->src.virt.addr,
++ walk->dst.virt.addr);
++ skcipher_walk_done(walk, 0);
+ }
+ }
+
+@@ -186,13 +175,16 @@ static void crypto_aegis128l_aesni_crypt(struct aead_request *req,
+ {
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ struct aegis_ctx *ctx = crypto_aegis128l_aesni_ctx(tfm);
++ struct skcipher_walk walk;
+ struct aegis_state state;
+
++ ops->skcipher_walk_init(&walk, req, true);
++
+ kernel_fpu_begin();
+
+ crypto_aegis128l_aesni_init(&state, ctx->key.bytes, req->iv);
+ crypto_aegis128l_aesni_process_ad(&state, req->src, req->assoclen);
+- crypto_aegis128l_aesni_process_crypt(&state, req, ops);
++ crypto_aegis128l_aesni_process_crypt(&state, &walk, ops);
+ crypto_aegis128l_aesni_final(&state, tag_xor, req->assoclen, cryptlen);
+
+ kernel_fpu_end();
+diff --git a/arch/x86/crypto/aegis256-aesni-glue.c b/arch/x86/crypto/aegis256-aesni-glue.c
+index 8bebda2de92f..6227ca3220a0 100644
+--- a/arch/x86/crypto/aegis256-aesni-glue.c
++++ b/arch/x86/crypto/aegis256-aesni-glue.c
+@@ -119,31 +119,20 @@ static void crypto_aegis256_aesni_process_ad(
+ }
+
+ static void crypto_aegis256_aesni_process_crypt(
+- struct aegis_state *state, struct aead_request *req,
++ struct aegis_state *state, struct skcipher_walk *walk,
+ const struct aegis_crypt_ops *ops)
+ {
+- struct skcipher_walk walk;
+- u8 *src, *dst;
+- unsigned int chunksize, base;
+-
+- ops->skcipher_walk_init(&walk, req, false);
+-
+- while (walk.nbytes) {
+- src = walk.src.virt.addr;
+- dst = walk.dst.virt.addr;
+- chunksize = walk.nbytes;
+-
+- ops->crypt_blocks(state, chunksize, src, dst);
+-
+- base = chunksize & ~(AEGIS256_BLOCK_SIZE - 1);
+- src += base;
+- dst += base;
+- chunksize &= AEGIS256_BLOCK_SIZE - 1;
+-
+- if (chunksize > 0)
+- ops->crypt_tail(state, chunksize, src, dst);
++ while (walk->nbytes >= AEGIS256_BLOCK_SIZE) {
++ ops->crypt_blocks(state,
++ round_down(walk->nbytes, AEGIS256_BLOCK_SIZE),
++ walk->src.virt.addr, walk->dst.virt.addr);
++ skcipher_walk_done(walk, walk->nbytes % AEGIS256_BLOCK_SIZE);
++ }
+
+- skcipher_walk_done(&walk, 0);
++ if (walk->nbytes) {
++ ops->crypt_tail(state, walk->nbytes, walk->src.virt.addr,
++ walk->dst.virt.addr);
++ skcipher_walk_done(walk, 0);
+ }
+ }
+
+@@ -186,13 +175,16 @@ static void crypto_aegis256_aesni_crypt(struct aead_request *req,
+ {
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ struct aegis_ctx *ctx = crypto_aegis256_aesni_ctx(tfm);
++ struct skcipher_walk walk;
+ struct aegis_state state;
+
++ ops->skcipher_walk_init(&walk, req, true);
++
+ kernel_fpu_begin();
+
+ crypto_aegis256_aesni_init(&state, ctx->key, req->iv);
+ crypto_aegis256_aesni_process_ad(&state, req->src, req->assoclen);
+- crypto_aegis256_aesni_process_crypt(&state, req, ops);
++ crypto_aegis256_aesni_process_crypt(&state, &walk, ops);
+ crypto_aegis256_aesni_final(&state, tag_xor, req->assoclen, cryptlen);
+
+ kernel_fpu_end();
+diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
+index 1321700d6647..ae30c8b6ec4d 100644
+--- a/arch/x86/crypto/aesni-intel_glue.c
++++ b/arch/x86/crypto/aesni-intel_glue.c
+@@ -821,11 +821,14 @@ static int gcmaes_crypt_by_sg(bool enc, struct aead_request *req,
+ scatterwalk_map_and_copy(assoc, req->src, 0, assoclen, 0);
+ }
+
+- src_sg = scatterwalk_ffwd(src_start, req->src, req->assoclen);
+- scatterwalk_start(&src_sg_walk, src_sg);
+- if (req->src != req->dst) {
+- dst_sg = scatterwalk_ffwd(dst_start, req->dst, req->assoclen);
+- scatterwalk_start(&dst_sg_walk, dst_sg);
++ if (left) {
++ src_sg = scatterwalk_ffwd(src_start, req->src, req->assoclen);
++ scatterwalk_start(&src_sg_walk, src_sg);
++ if (req->src != req->dst) {
++ dst_sg = scatterwalk_ffwd(dst_start, req->dst,
++ req->assoclen);
++ scatterwalk_start(&dst_sg_walk, dst_sg);
++ }
+ }
+
+ kernel_fpu_begin();
+diff --git a/arch/x86/crypto/morus1280_glue.c b/arch/x86/crypto/morus1280_glue.c
+index 0dccdda1eb3a..7e600f8bcdad 100644
+--- a/arch/x86/crypto/morus1280_glue.c
++++ b/arch/x86/crypto/morus1280_glue.c
+@@ -85,31 +85,20 @@ static void crypto_morus1280_glue_process_ad(
+
+ static void crypto_morus1280_glue_process_crypt(struct morus1280_state *state,
+ struct morus1280_ops ops,
+- struct aead_request *req)
++ struct skcipher_walk *walk)
+ {
+- struct skcipher_walk walk;
+- u8 *cursor_src, *cursor_dst;
+- unsigned int chunksize, base;
+-
+- ops.skcipher_walk_init(&walk, req, false);
+-
+- while (walk.nbytes) {
+- cursor_src = walk.src.virt.addr;
+- cursor_dst = walk.dst.virt.addr;
+- chunksize = walk.nbytes;
+-
+- ops.crypt_blocks(state, cursor_src, cursor_dst, chunksize);
+-
+- base = chunksize & ~(MORUS1280_BLOCK_SIZE - 1);
+- cursor_src += base;
+- cursor_dst += base;
+- chunksize &= MORUS1280_BLOCK_SIZE - 1;
+-
+- if (chunksize > 0)
+- ops.crypt_tail(state, cursor_src, cursor_dst,
+- chunksize);
++ while (walk->nbytes >= MORUS1280_BLOCK_SIZE) {
++ ops.crypt_blocks(state, walk->src.virt.addr,
++ walk->dst.virt.addr,
++ round_down(walk->nbytes,
++ MORUS1280_BLOCK_SIZE));
++ skcipher_walk_done(walk, walk->nbytes % MORUS1280_BLOCK_SIZE);
++ }
+
+- skcipher_walk_done(&walk, 0);
++ if (walk->nbytes) {
++ ops.crypt_tail(state, walk->src.virt.addr, walk->dst.virt.addr,
++ walk->nbytes);
++ skcipher_walk_done(walk, 0);
+ }
+ }
+
+@@ -147,12 +136,15 @@ static void crypto_morus1280_glue_crypt(struct aead_request *req,
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ struct morus1280_ctx *ctx = crypto_aead_ctx(tfm);
+ struct morus1280_state state;
++ struct skcipher_walk walk;
++
++ ops.skcipher_walk_init(&walk, req, true);
+
+ kernel_fpu_begin();
+
+ ctx->ops->init(&state, &ctx->key, req->iv);
+ crypto_morus1280_glue_process_ad(&state, ctx->ops, req->src, req->assoclen);
+- crypto_morus1280_glue_process_crypt(&state, ops, req);
++ crypto_morus1280_glue_process_crypt(&state, ops, &walk);
+ ctx->ops->final(&state, tag_xor, req->assoclen, cryptlen);
+
+ kernel_fpu_end();
+diff --git a/arch/x86/crypto/morus640_glue.c b/arch/x86/crypto/morus640_glue.c
+index 7b58fe4d9bd1..cb3a81732016 100644
+--- a/arch/x86/crypto/morus640_glue.c
++++ b/arch/x86/crypto/morus640_glue.c
+@@ -85,31 +85,19 @@ static void crypto_morus640_glue_process_ad(
+
+ static void crypto_morus640_glue_process_crypt(struct morus640_state *state,
+ struct morus640_ops ops,
+- struct aead_request *req)
++ struct skcipher_walk *walk)
+ {
+- struct skcipher_walk walk;
+- u8 *cursor_src, *cursor_dst;
+- unsigned int chunksize, base;
+-
+- ops.skcipher_walk_init(&walk, req, false);
+-
+- while (walk.nbytes) {
+- cursor_src = walk.src.virt.addr;
+- cursor_dst = walk.dst.virt.addr;
+- chunksize = walk.nbytes;
+-
+- ops.crypt_blocks(state, cursor_src, cursor_dst, chunksize);
+-
+- base = chunksize & ~(MORUS640_BLOCK_SIZE - 1);
+- cursor_src += base;
+- cursor_dst += base;
+- chunksize &= MORUS640_BLOCK_SIZE - 1;
+-
+- if (chunksize > 0)
+- ops.crypt_tail(state, cursor_src, cursor_dst,
+- chunksize);
++ while (walk->nbytes >= MORUS640_BLOCK_SIZE) {
++ ops.crypt_blocks(state, walk->src.virt.addr,
++ walk->dst.virt.addr,
++ round_down(walk->nbytes, MORUS640_BLOCK_SIZE));
++ skcipher_walk_done(walk, walk->nbytes % MORUS640_BLOCK_SIZE);
++ }
+
+- skcipher_walk_done(&walk, 0);
++ if (walk->nbytes) {
++ ops.crypt_tail(state, walk->src.virt.addr, walk->dst.virt.addr,
++ walk->nbytes);
++ skcipher_walk_done(walk, 0);
+ }
+ }
+
+@@ -143,12 +131,15 @@ static void crypto_morus640_glue_crypt(struct aead_request *req,
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ struct morus640_ctx *ctx = crypto_aead_ctx(tfm);
+ struct morus640_state state;
++ struct skcipher_walk walk;
++
++ ops.skcipher_walk_init(&walk, req, true);
+
+ kernel_fpu_begin();
+
+ ctx->ops->init(&state, &ctx->key, req->iv);
+ crypto_morus640_glue_process_ad(&state, ctx->ops, req->src, req->assoclen);
+- crypto_morus640_glue_process_crypt(&state, ops, req);
++ crypto_morus640_glue_process_crypt(&state, ops, &walk);
+ ctx->ops->final(&state, tag_xor, req->assoclen, cryptlen);
+
+ kernel_fpu_end();
+diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
+index 7d2d7c801dba..0ecfac84ba91 100644
+--- a/arch/x86/events/amd/core.c
++++ b/arch/x86/events/amd/core.c
+@@ -3,10 +3,14 @@
+ #include <linux/types.h>
+ #include <linux/init.h>
+ #include <linux/slab.h>
++#include <linux/delay.h>
+ #include <asm/apicdef.h>
++#include <asm/nmi.h>
+
+ #include "../perf_event.h"
+
++static DEFINE_PER_CPU(unsigned int, perf_nmi_counter);
++
+ static __initconst const u64 amd_hw_cache_event_ids
+ [PERF_COUNT_HW_CACHE_MAX]
+ [PERF_COUNT_HW_CACHE_OP_MAX]
+@@ -429,6 +433,132 @@ static void amd_pmu_cpu_dead(int cpu)
+ }
+ }
+
++/*
++ * When a PMC counter overflows, an NMI is used to process the event and
++ * reset the counter. NMI latency can result in the counter being updated
++ * before the NMI can run, which can result in what appear to be spurious
++ * NMIs. This function is intended to wait for the NMI to run and reset
++ * the counter to avoid possible unhandled NMI messages.
++ */
++#define OVERFLOW_WAIT_COUNT 50
++
++static void amd_pmu_wait_on_overflow(int idx)
++{
++ unsigned int i;
++ u64 counter;
++
++ /*
++ * Wait for the counter to be reset if it has overflowed. This loop
++ * should exit very, very quickly, but just in case, don't wait
++ * forever...
++ */
++ for (i = 0; i < OVERFLOW_WAIT_COUNT; i++) {
++ rdmsrl(x86_pmu_event_addr(idx), counter);
++ if (counter & (1ULL << (x86_pmu.cntval_bits - 1)))
++ break;
++
++ /* Might be in IRQ context, so can't sleep */
++ udelay(1);
++ }
++}
++
++static void amd_pmu_disable_all(void)
++{
++ struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
++ int idx;
++
++ x86_pmu_disable_all();
++
++ /*
++ * This shouldn't be called from NMI context, but add a safeguard here
++ * to return, since if we're in NMI context we can't wait for an NMI
++ * to reset an overflowed counter value.
++ */
++ if (in_nmi())
++ return;
++
++ /*
++ * Check each counter for overflow and wait for it to be reset by the
++ * NMI if it has overflowed. This relies on the fact that all active
++	 * counters are always enabled when this function is called and
++ * ARCH_PERFMON_EVENTSEL_INT is always set.
++ */
++ for (idx = 0; idx < x86_pmu.num_counters; idx++) {
++ if (!test_bit(idx, cpuc->active_mask))
++ continue;
++
++ amd_pmu_wait_on_overflow(idx);
++ }
++}
++
++static void amd_pmu_disable_event(struct perf_event *event)
++{
++ x86_pmu_disable_event(event);
++
++ /*
++ * This can be called from NMI context (via x86_pmu_stop). The counter
++ * may have overflowed, but either way, we'll never see it get reset
++ * by the NMI if we're already in the NMI. And the NMI latency support
++ * below will take care of any pending NMI that might have been
++ * generated by the overflow.
++ */
++ if (in_nmi())
++ return;
++
++ amd_pmu_wait_on_overflow(event->hw.idx);
++}
++
++/*
++ * Because of NMI latency, if multiple PMC counters are active or other sources
++ * of NMIs are received, the perf NMI handler can handle one or more overflowed
++ * PMC counters outside of the NMI associated with the PMC overflow. If the NMI
++ * doesn't arrive at the LAPIC in time to become a pending NMI, then the kernel
++ * back-to-back NMI support won't be active. This PMC handler needs to take into
++ * account that this can occur, otherwise this could result in unknown NMI
++ * messages being issued. Examples of this are PMC overflow while in the NMI
++ * handler when multiple PMCs are active or PMC overflow while handling some
++ * other source of an NMI.
++ *
++ * Attempt to mitigate this by using the number of active PMCs to determine
++ * whether to return NMI_HANDLED if the perf NMI handler did not handle/reset
++ * any PMCs. The per-CPU perf_nmi_counter variable is set to a minimum of the
++ * number of active PMCs or 2. The value of 2 is used in case an NMI does not
++ * arrive at the LAPIC in time to be collapsed into an already pending NMI.
++ */
++static int amd_pmu_handle_irq(struct pt_regs *regs)
++{
++ struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
++ int active, handled;
++
++ /*
++ * Obtain the active count before calling x86_pmu_handle_irq() since
++ * it is possible that x86_pmu_handle_irq() may make a counter
++ * inactive (through x86_pmu_stop).
++ */
++ active = __bitmap_weight(cpuc->active_mask, X86_PMC_IDX_MAX);
++
++ /* Process any counter overflows */
++ handled = x86_pmu_handle_irq(regs);
++
++ /*
++ * If a counter was handled, record the number of possible remaining
++ * NMIs that can occur.
++ */
++ if (handled) {
++ this_cpu_write(perf_nmi_counter,
++ min_t(unsigned int, 2, active));
++
++ return handled;
++ }
++
++ if (!this_cpu_read(perf_nmi_counter))
++ return NMI_DONE;
++
++ this_cpu_dec(perf_nmi_counter);
++
++ return NMI_HANDLED;
++}
++
+ static struct event_constraint *
+ amd_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
+ struct perf_event *event)
+@@ -621,11 +751,11 @@ static ssize_t amd_event_sysfs_show(char *page, u64 config)
+
+ static __initconst const struct x86_pmu amd_pmu = {
+ .name = "AMD",
+- .handle_irq = x86_pmu_handle_irq,
+- .disable_all = x86_pmu_disable_all,
++ .handle_irq = amd_pmu_handle_irq,
++ .disable_all = amd_pmu_disable_all,
+ .enable_all = x86_pmu_enable_all,
+ .enable = x86_pmu_enable_event,
+- .disable = x86_pmu_disable_event,
++ .disable = amd_pmu_disable_event,
+ .hw_config = amd_pmu_hw_config,
+ .schedule_events = x86_schedule_events,
+ .eventsel = MSR_K7_EVNTSEL0,
+@@ -732,7 +862,7 @@ void amd_pmu_enable_virt(void)
+ cpuc->perf_ctr_virt_mask = 0;
+
+ /* Reload all events */
+- x86_pmu_disable_all();
++ amd_pmu_disable_all();
+ x86_pmu_enable_all(0);
+ }
+ EXPORT_SYMBOL_GPL(amd_pmu_enable_virt);
+@@ -750,7 +880,7 @@ void amd_pmu_disable_virt(void)
+ cpuc->perf_ctr_virt_mask = AMD64_EVENTSEL_HOSTONLY;
+
+ /* Reload all events */
+- x86_pmu_disable_all();
++ amd_pmu_disable_all();
+ x86_pmu_enable_all(0);
+ }
+ EXPORT_SYMBOL_GPL(amd_pmu_disable_virt);
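A toy model of the bookkeeping amd_pmu_handle_irq() introduces, stripped of the per-CPU and perf plumbing (all names here are invented for the sketch): a handled overflow banks up to min(active, 2) credits, and subsequent unhandled NMIs consume a credit instead of being reported as unknown.

    /* returns 1 for NMI_HANDLED, 0 for NMI_DONE */
    static unsigned int nmi_credits;

    static int toy_pmc_nmi(int counters_handled, int active_pmcs)
    {
            if (counters_handled) {
                    nmi_credits = active_pmcs < 2 ? active_pmcs : 2;
                    return 1;
            }
            if (!nmi_credits)
                    return 0;       /* genuinely unknown NMI */
            nmi_credits--;          /* swallow one late PMC NMI */
            return 1;
    }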
+diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
+index b684f0294f35..81911e11a15d 100644
+--- a/arch/x86/events/core.c
++++ b/arch/x86/events/core.c
+@@ -1349,8 +1349,9 @@ void x86_pmu_stop(struct perf_event *event, int flags)
+ struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+ struct hw_perf_event *hwc = &event->hw;
+
+- if (__test_and_clear_bit(hwc->idx, cpuc->active_mask)) {
++ if (test_bit(hwc->idx, cpuc->active_mask)) {
+ x86_pmu.disable(event);
++ __clear_bit(hwc->idx, cpuc->active_mask);
+ cpuc->events[hwc->idx] = NULL;
+ WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED);
+ hwc->state |= PERF_HES_STOPPED;
+@@ -1447,16 +1448,8 @@ int x86_pmu_handle_irq(struct pt_regs *regs)
+ apic_write(APIC_LVTPC, APIC_DM_NMI);
+
+ for (idx = 0; idx < x86_pmu.num_counters; idx++) {
+- if (!test_bit(idx, cpuc->active_mask)) {
+- /*
+- * Though we deactivated the counter some cpus
+- * might still deliver spurious interrupts still
+- * in flight. Catch them:
+- */
+- if (__test_and_clear_bit(idx, cpuc->running))
+- handled++;
++ if (!test_bit(idx, cpuc->active_mask))
+ continue;
+- }
+
+ event = cpuc->events[idx];
+
+@@ -1995,7 +1988,7 @@ static int x86_pmu_commit_txn(struct pmu *pmu)
+ */
+ static void free_fake_cpuc(struct cpu_hw_events *cpuc)
+ {
+- kfree(cpuc->shared_regs);
++ intel_cpuc_finish(cpuc);
+ kfree(cpuc);
+ }
+
+@@ -2007,14 +2000,11 @@ static struct cpu_hw_events *allocate_fake_cpuc(void)
+ cpuc = kzalloc(sizeof(*cpuc), GFP_KERNEL);
+ if (!cpuc)
+ return ERR_PTR(-ENOMEM);
+-
+- /* only needed, if we have extra_regs */
+- if (x86_pmu.extra_regs) {
+- cpuc->shared_regs = allocate_shared_regs(cpu);
+- if (!cpuc->shared_regs)
+- goto error;
+- }
+ cpuc->is_fake = 1;
++
++ if (intel_cpuc_prepare(cpuc, cpu))
++ goto error;
++
+ return cpuc;
+ error:
+ free_fake_cpuc(cpuc);
+diff --git a/arch/x86/events/intel/bts.c b/arch/x86/events/intel/bts.c
+index a01ef1b0f883..7cdd7b13bbda 100644
+--- a/arch/x86/events/intel/bts.c
++++ b/arch/x86/events/intel/bts.c
+@@ -77,10 +77,12 @@ static size_t buf_size(struct page *page)
+ }
+
+ static void *
+-bts_buffer_setup_aux(int cpu, void **pages, int nr_pages, bool overwrite)
++bts_buffer_setup_aux(struct perf_event *event, void **pages,
++ int nr_pages, bool overwrite)
+ {
+ struct bts_buffer *buf;
+ struct page *page;
++ int cpu = event->cpu;
+ int node = (cpu == -1) ? cpu : cpu_to_node(cpu);
+ unsigned long offset;
+ size_t size = nr_pages << PAGE_SHIFT;
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index 730978dff63f..2480feb07df3 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -1999,6 +1999,39 @@ static void intel_pmu_nhm_enable_all(int added)
+ intel_pmu_enable_all(added);
+ }
+
++static void intel_set_tfa(struct cpu_hw_events *cpuc, bool on)
++{
++ u64 val = on ? MSR_TFA_RTM_FORCE_ABORT : 0;
++
++ if (cpuc->tfa_shadow != val) {
++ cpuc->tfa_shadow = val;
++ wrmsrl(MSR_TSX_FORCE_ABORT, val);
++ }
++}
++
++static void intel_tfa_commit_scheduling(struct cpu_hw_events *cpuc, int idx, int cntr)
++{
++ /*
++ * We're going to use PMC3, make sure TFA is set before we touch it.
++ */
++ if (cntr == 3 && !cpuc->is_fake)
++ intel_set_tfa(cpuc, true);
++}
++
++static void intel_tfa_pmu_enable_all(int added)
++{
++ struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
++
++ /*
++ * If we find PMC3 is no longer used when we enable the PMU, we can
++ * clear TFA.
++ */
++ if (!test_bit(3, cpuc->active_mask))
++ intel_set_tfa(cpuc, false);
++
++ intel_pmu_enable_all(added);
++}
++
+ static void enable_counter_freeze(void)
+ {
+ update_debugctlmsr(get_debugctlmsr() |
+@@ -2768,6 +2801,35 @@ intel_stop_scheduling(struct cpu_hw_events *cpuc)
+ raw_spin_unlock(&excl_cntrs->lock);
+ }
+
++static struct event_constraint *
++dyn_constraint(struct cpu_hw_events *cpuc, struct event_constraint *c, int idx)
++{
++ WARN_ON_ONCE(!cpuc->constraint_list);
++
++ if (!(c->flags & PERF_X86_EVENT_DYNAMIC)) {
++ struct event_constraint *cx;
++
++ /*
++ * grab pre-allocated constraint entry
++ */
++ cx = &cpuc->constraint_list[idx];
++
++ /*
++ * initialize dynamic constraint
++ * with static constraint
++ */
++ *cx = *c;
++
++ /*
++ * mark constraint as dynamic
++ */
++ cx->flags |= PERF_X86_EVENT_DYNAMIC;
++ c = cx;
++ }
++
++ return c;
++}
++
+ static struct event_constraint *
+ intel_get_excl_constraints(struct cpu_hw_events *cpuc, struct perf_event *event,
+ int idx, struct event_constraint *c)
+@@ -2798,27 +2860,7 @@ intel_get_excl_constraints(struct cpu_hw_events *cpuc, struct perf_event *event,
+ * only needed when constraint has not yet
+ * been cloned (marked dynamic)
+ */
+- if (!(c->flags & PERF_X86_EVENT_DYNAMIC)) {
+- struct event_constraint *cx;
+-
+- /*
+- * grab pre-allocated constraint entry
+- */
+- cx = &cpuc->constraint_list[idx];
+-
+- /*
+- * initialize dynamic constraint
+- * with static constraint
+- */
+- *cx = *c;
+-
+- /*
+- * mark constraint as dynamic, so we
+- * can free it later on
+- */
+- cx->flags |= PERF_X86_EVENT_DYNAMIC;
+- c = cx;
+- }
++ c = dyn_constraint(cpuc, c, idx);
+
+ /*
+ * From here on, the constraint is dynamic.
+@@ -3345,6 +3387,26 @@ glp_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
+ return c;
+ }
+
++static bool allow_tsx_force_abort = true;
++
++static struct event_constraint *
++tfa_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
++ struct perf_event *event)
++{
++ struct event_constraint *c = hsw_get_event_constraints(cpuc, idx, event);
++
++ /*
++ * Without TFA we must not use PMC3.
++ */
++ if (!allow_tsx_force_abort && test_bit(3, c->idxmsk) && idx >= 0) {
++ c = dyn_constraint(cpuc, c, idx);
++ c->idxmsk64 &= ~(1ULL << 3);
++ c->weight--;
++ }
++
++ return c;
++}
++
+ /*
+ * Broadwell:
+ *
+@@ -3398,7 +3460,7 @@ ssize_t intel_event_sysfs_show(char *page, u64 config)
+ return x86_event_sysfs_show(page, config, event);
+ }
+
+-struct intel_shared_regs *allocate_shared_regs(int cpu)
++static struct intel_shared_regs *allocate_shared_regs(int cpu)
+ {
+ struct intel_shared_regs *regs;
+ int i;
+@@ -3430,23 +3492,24 @@ static struct intel_excl_cntrs *allocate_excl_cntrs(int cpu)
+ return c;
+ }
+
+-static int intel_pmu_cpu_prepare(int cpu)
+-{
+- struct cpu_hw_events *cpuc = &per_cpu(cpu_hw_events, cpu);
+
++int intel_cpuc_prepare(struct cpu_hw_events *cpuc, int cpu)
++{
+ if (x86_pmu.extra_regs || x86_pmu.lbr_sel_map) {
+ cpuc->shared_regs = allocate_shared_regs(cpu);
+ if (!cpuc->shared_regs)
+ goto err;
+ }
+
+- if (x86_pmu.flags & PMU_FL_EXCL_CNTRS) {
++ if (x86_pmu.flags & (PMU_FL_EXCL_CNTRS | PMU_FL_TFA)) {
+ size_t sz = X86_PMC_IDX_MAX * sizeof(struct event_constraint);
+
+- cpuc->constraint_list = kzalloc(sz, GFP_KERNEL);
++ cpuc->constraint_list = kzalloc_node(sz, GFP_KERNEL, cpu_to_node(cpu));
+ if (!cpuc->constraint_list)
+ goto err_shared_regs;
++ }
+
++ if (x86_pmu.flags & PMU_FL_EXCL_CNTRS) {
+ cpuc->excl_cntrs = allocate_excl_cntrs(cpu);
+ if (!cpuc->excl_cntrs)
+ goto err_constraint_list;
+@@ -3468,6 +3531,11 @@ err:
+ return -ENOMEM;
+ }
+
++static int intel_pmu_cpu_prepare(int cpu)
++{
++ return intel_cpuc_prepare(&per_cpu(cpu_hw_events, cpu), cpu);
++}
++
+ static void flip_smm_bit(void *data)
+ {
+ unsigned long set = *(unsigned long *)data;
+@@ -3542,9 +3610,8 @@ static void intel_pmu_cpu_starting(int cpu)
+ }
+ }
+
+-static void free_excl_cntrs(int cpu)
++static void free_excl_cntrs(struct cpu_hw_events *cpuc)
+ {
+- struct cpu_hw_events *cpuc = &per_cpu(cpu_hw_events, cpu);
+ struct intel_excl_cntrs *c;
+
+ c = cpuc->excl_cntrs;
+@@ -3552,9 +3619,10 @@ static void free_excl_cntrs(int cpu)
+ if (c->core_id == -1 || --c->refcnt == 0)
+ kfree(c);
+ cpuc->excl_cntrs = NULL;
+- kfree(cpuc->constraint_list);
+- cpuc->constraint_list = NULL;
+ }
++
++ kfree(cpuc->constraint_list);
++ cpuc->constraint_list = NULL;
+ }
+
+ static void intel_pmu_cpu_dying(int cpu)
+@@ -3565,9 +3633,8 @@ static void intel_pmu_cpu_dying(int cpu)
+ disable_counter_freeze();
+ }
+
+-static void intel_pmu_cpu_dead(int cpu)
++void intel_cpuc_finish(struct cpu_hw_events *cpuc)
+ {
+- struct cpu_hw_events *cpuc = &per_cpu(cpu_hw_events, cpu);
+ struct intel_shared_regs *pc;
+
+ pc = cpuc->shared_regs;
+@@ -3577,7 +3644,12 @@ static void intel_pmu_cpu_dead(int cpu)
+ cpuc->shared_regs = NULL;
+ }
+
+- free_excl_cntrs(cpu);
++ free_excl_cntrs(cpuc);
++}
++
++static void intel_pmu_cpu_dead(int cpu)
++{
++ intel_cpuc_finish(&per_cpu(cpu_hw_events, cpu));
+ }
+
+ static void intel_pmu_sched_task(struct perf_event_context *ctx,
+@@ -4070,8 +4142,11 @@ static struct attribute *intel_pmu_caps_attrs[] = {
+ NULL
+ };
+
++static DEVICE_BOOL_ATTR(allow_tsx_force_abort, 0644, allow_tsx_force_abort);
++
+ static struct attribute *intel_pmu_attrs[] = {
+ &dev_attr_freeze_on_smi.attr,
++ NULL, /* &dev_attr_allow_tsx_force_abort.attr.attr */
+ NULL,
+ };
+
+@@ -4564,6 +4639,15 @@ __init int intel_pmu_init(void)
+ tsx_attr = hsw_tsx_events_attrs;
+ intel_pmu_pebs_data_source_skl(
+ boot_cpu_data.x86_model == INTEL_FAM6_SKYLAKE_X);
++
++ if (boot_cpu_has(X86_FEATURE_TSX_FORCE_ABORT)) {
++ x86_pmu.flags |= PMU_FL_TFA;
++ x86_pmu.get_event_constraints = tfa_get_event_constraints;
++ x86_pmu.enable_all = intel_tfa_pmu_enable_all;
++ x86_pmu.commit_scheduling = intel_tfa_commit_scheduling;
++ intel_pmu_attrs[1] = &dev_attr_allow_tsx_force_abort.attr.attr;
++ }
++
+ pr_cont("Skylake events, ");
+ name = "skylake";
+ break;
+@@ -4715,7 +4799,7 @@ static __init int fixup_ht_bug(void)
+ hardlockup_detector_perf_restart();
+
+ for_each_online_cpu(c)
+- free_excl_cntrs(c);
++ free_excl_cntrs(&per_cpu(cpu_hw_events, c));
+
+ cpus_read_unlock();
+ pr_info("PMU erratum BJ122, BV98, HSD29 workaround disabled, HT off\n");
+diff --git a/arch/x86/events/intel/pt.c b/arch/x86/events/intel/pt.c
+index 9494ca68fd9d..c0e86ff21f81 100644
+--- a/arch/x86/events/intel/pt.c
++++ b/arch/x86/events/intel/pt.c
+@@ -1114,10 +1114,11 @@ static int pt_buffer_init_topa(struct pt_buffer *buf, unsigned long nr_pages,
+ * Return: Our private PT buffer structure.
+ */
+ static void *
+-pt_buffer_setup_aux(int cpu, void **pages, int nr_pages, bool snapshot)
++pt_buffer_setup_aux(struct perf_event *event, void **pages,
++ int nr_pages, bool snapshot)
+ {
+ struct pt_buffer *buf;
+- int node, ret;
++ int node, ret, cpu = event->cpu;
+
+ if (!nr_pages)
+ return NULL;
+diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
+index 27a461414b30..2690135bf83f 100644
+--- a/arch/x86/events/intel/uncore.c
++++ b/arch/x86/events/intel/uncore.c
+@@ -740,6 +740,7 @@ static int uncore_pmu_event_init(struct perf_event *event)
+ /* fixed counters have event field hardcoded to zero */
+ hwc->config = 0ULL;
+ } else if (is_freerunning_event(event)) {
++ hwc->config = event->attr.config;
+ if (!check_valid_freerunning_event(box, event))
+ return -EINVAL;
+ event->hw.idx = UNCORE_PMC_IDX_FREERUNNING;
+diff --git a/arch/x86/events/intel/uncore.h b/arch/x86/events/intel/uncore.h
+index cb46d602a6b8..853a49a8ccf6 100644
+--- a/arch/x86/events/intel/uncore.h
++++ b/arch/x86/events/intel/uncore.h
+@@ -292,8 +292,8 @@ static inline
+ unsigned int uncore_freerunning_counter(struct intel_uncore_box *box,
+ struct perf_event *event)
+ {
+- unsigned int type = uncore_freerunning_type(event->attr.config);
+- unsigned int idx = uncore_freerunning_idx(event->attr.config);
++ unsigned int type = uncore_freerunning_type(event->hw.config);
++ unsigned int idx = uncore_freerunning_idx(event->hw.config);
+ struct intel_uncore_pmu *pmu = box->pmu;
+
+ return pmu->type->freerunning[type].counter_base +
+@@ -377,7 +377,7 @@ static inline
+ unsigned int uncore_freerunning_bits(struct intel_uncore_box *box,
+ struct perf_event *event)
+ {
+- unsigned int type = uncore_freerunning_type(event->attr.config);
++ unsigned int type = uncore_freerunning_type(event->hw.config);
+
+ return box->pmu->type->freerunning[type].bits;
+ }
+@@ -385,7 +385,7 @@ unsigned int uncore_freerunning_bits(struct intel_uncore_box *box,
+ static inline int uncore_num_freerunning(struct intel_uncore_box *box,
+ struct perf_event *event)
+ {
+- unsigned int type = uncore_freerunning_type(event->attr.config);
++ unsigned int type = uncore_freerunning_type(event->hw.config);
+
+ return box->pmu->type->freerunning[type].num_counters;
+ }
+@@ -399,8 +399,8 @@ static inline int uncore_num_freerunning_types(struct intel_uncore_box *box,
+ static inline bool check_valid_freerunning_event(struct intel_uncore_box *box,
+ struct perf_event *event)
+ {
+- unsigned int type = uncore_freerunning_type(event->attr.config);
+- unsigned int idx = uncore_freerunning_idx(event->attr.config);
++ unsigned int type = uncore_freerunning_type(event->hw.config);
++ unsigned int idx = uncore_freerunning_idx(event->hw.config);
+
+ return (type < uncore_num_freerunning_types(box, event)) &&
+ (idx < uncore_num_freerunning(box, event));
+diff --git a/arch/x86/events/intel/uncore_snb.c b/arch/x86/events/intel/uncore_snb.c
+index 2593b0d7aeee..ef7faf486a1a 100644
+--- a/arch/x86/events/intel/uncore_snb.c
++++ b/arch/x86/events/intel/uncore_snb.c
+@@ -448,9 +448,11 @@ static int snb_uncore_imc_event_init(struct perf_event *event)
+
+ /* must be done before validate_group */
+ event->hw.event_base = base;
+- event->hw.config = cfg;
+ event->hw.idx = idx;
+
++ /* Convert to standard encoding format for freerunning counters */
++ event->hw.config = ((cfg - 1) << 8) | 0x10ff;
++
+ /* no group validation needed, we have free running counters */
+
+ return 0;
+diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
+index d46fd6754d92..acd72e669c04 100644
+--- a/arch/x86/events/perf_event.h
++++ b/arch/x86/events/perf_event.h
+@@ -242,6 +242,11 @@ struct cpu_hw_events {
+ struct intel_excl_cntrs *excl_cntrs;
+ int excl_thread_id; /* 0 or 1 */
+
++ /*
++ * SKL TSX_FORCE_ABORT shadow
++ */
++ u64 tfa_shadow;
++
+ /*
+ * AMD specific bits
+ */
+@@ -681,6 +686,7 @@ do { \
+ #define PMU_FL_EXCL_CNTRS 0x4 /* has exclusive counter requirements */
+ #define PMU_FL_EXCL_ENABLED 0x8 /* exclusive counter active */
+ #define PMU_FL_PEBS_ALL 0x10 /* all events are valid PEBS events */
++#define PMU_FL_TFA 0x20 /* deal with TSX force abort */
+
+ #define EVENT_VAR(_id) event_attr_##_id
+ #define EVENT_PTR(_id) &event_attr_##_id.attr.attr
+@@ -889,7 +895,8 @@ struct event_constraint *
+ x86_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
+ struct perf_event *event);
+
+-struct intel_shared_regs *allocate_shared_regs(int cpu);
++extern int intel_cpuc_prepare(struct cpu_hw_events *cpuc, int cpu);
++extern void intel_cpuc_finish(struct cpu_hw_events *cpuc);
+
+ int intel_pmu_init(void);
+
+@@ -1025,9 +1032,13 @@ static inline int intel_pmu_init(void)
+ return 0;
+ }
+
+-static inline struct intel_shared_regs *allocate_shared_regs(int cpu)
++static inline int intel_cpuc_prepare(struct cpu_hw_events *cpuc, int cpu)
++{
++ return 0;
++}
++
++static inline void intel_cpuc_finish(struct cpu_hw_events *cpuc)
+ {
+- return NULL;
+ }
+
+ static inline int is_ht_workaround_enabled(void)
+diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
+index 7abb09e2eeb8..d3f42b6bbdac 100644
+--- a/arch/x86/hyperv/hv_init.c
++++ b/arch/x86/hyperv/hv_init.c
+@@ -406,6 +406,13 @@ void hyperv_cleanup(void)
+ /* Reset our OS id */
+ wrmsrl(HV_X64_MSR_GUEST_OS_ID, 0);
+
++ /*
++	 * Reset the hypercall page reference before resetting the page, so
++	 * that hypercall operations fail safely rather than panicking the
++	 * kernel over an invalid hypercall page
++ */
++ hv_hypercall_pg = NULL;
++
+ /* Reset the hypercall page */
+ hypercall_msr.as_uint64 = 0;
+ wrmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
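The ordering above matters: hv_hypercall_pg is cleared before the MSR backing the page, so a racing hypercall sees NULL and fails cleanly rather than jumping through a torn-down page. The same publish-then-retract idea in miniature (names invented for the sketch; the real code also needs the usual memory-ordering care):

    static void *hv_pg;             /* stand-in for hv_hypercall_pg */

    static long toy_hypercall(void)
    {
            void *pg = hv_pg;       /* sample the pointer once */

            if (!pg)
                    return -1;      /* fail safely, no panic */
            /* ... call through pg ... */
            return 0;
    }

    static void toy_cleanup(void)
    {
            hv_pg = 0;              /* retract the pointer first ... */
            /* ... then tear down the backing page */
    }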
+diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
+index ad7b210aa3f6..8e790ec219a5 100644
+--- a/arch/x86/include/asm/bitops.h
++++ b/arch/x86/include/asm/bitops.h
+@@ -36,22 +36,17 @@
+ * bit 0 is the LSB of addr; bit 32 is the LSB of (addr+1).
+ */
+
+-#if __GNUC__ < 4 || (__GNUC__ == 4 && __GNUC_MINOR__ < 1)
+-/* Technically wrong, but this avoids compilation errors on some gcc
+- versions. */
+-#define BITOP_ADDR(x) "=m" (*(volatile long *) (x))
+-#else
+-#define BITOP_ADDR(x) "+m" (*(volatile long *) (x))
+-#endif
++#define RLONG_ADDR(x) "m" (*(volatile long *) (x))
++#define WBYTE_ADDR(x) "+m" (*(volatile char *) (x))
+
+-#define ADDR BITOP_ADDR(addr)
++#define ADDR RLONG_ADDR(addr)
+
+ /*
+ * We do the locked ops that don't return the old value as
+ * a mask operation on a byte.
+ */
+ #define IS_IMMEDIATE(nr) (__builtin_constant_p(nr))
+-#define CONST_MASK_ADDR(nr, addr) BITOP_ADDR((void *)(addr) + ((nr)>>3))
++#define CONST_MASK_ADDR(nr, addr) WBYTE_ADDR((void *)(addr) + ((nr)>>3))
+ #define CONST_MASK(nr) (1 << ((nr) & 7))
+
+ /**
+@@ -79,7 +74,7 @@ set_bit(long nr, volatile unsigned long *addr)
+ : "memory");
+ } else {
+ asm volatile(LOCK_PREFIX __ASM_SIZE(bts) " %1,%0"
+- : BITOP_ADDR(addr) : "Ir" (nr) : "memory");
++ : : RLONG_ADDR(addr), "Ir" (nr) : "memory");
+ }
+ }
+
+@@ -94,7 +89,7 @@ set_bit(long nr, volatile unsigned long *addr)
+ */
+ static __always_inline void __set_bit(long nr, volatile unsigned long *addr)
+ {
+- asm volatile(__ASM_SIZE(bts) " %1,%0" : ADDR : "Ir" (nr) : "memory");
++ asm volatile(__ASM_SIZE(bts) " %1,%0" : : ADDR, "Ir" (nr) : "memory");
+ }
+
+ /**
+@@ -116,8 +111,7 @@ clear_bit(long nr, volatile unsigned long *addr)
+ : "iq" ((u8)~CONST_MASK(nr)));
+ } else {
+ asm volatile(LOCK_PREFIX __ASM_SIZE(btr) " %1,%0"
+- : BITOP_ADDR(addr)
+- : "Ir" (nr));
++ : : RLONG_ADDR(addr), "Ir" (nr) : "memory");
+ }
+ }
+
+@@ -137,7 +131,7 @@ static __always_inline void clear_bit_unlock(long nr, volatile unsigned long *ad
+
+ static __always_inline void __clear_bit(long nr, volatile unsigned long *addr)
+ {
+- asm volatile(__ASM_SIZE(btr) " %1,%0" : ADDR : "Ir" (nr));
++ asm volatile(__ASM_SIZE(btr) " %1,%0" : : ADDR, "Ir" (nr) : "memory");
+ }
+
+ static __always_inline bool clear_bit_unlock_is_negative_byte(long nr, volatile unsigned long *addr)
+@@ -145,7 +139,7 @@ static __always_inline bool clear_bit_unlock_is_negative_byte(long nr, volatile
+ bool negative;
+ asm volatile(LOCK_PREFIX "andb %2,%1"
+ CC_SET(s)
+- : CC_OUT(s) (negative), ADDR
++ : CC_OUT(s) (negative), WBYTE_ADDR(addr)
+ : "ir" ((char) ~(1 << nr)) : "memory");
+ return negative;
+ }
+@@ -161,13 +155,9 @@ static __always_inline bool clear_bit_unlock_is_negative_byte(long nr, volatile
+ * __clear_bit() is non-atomic and implies release semantics before the memory
+ * operation. It can be used for an unlock if no other CPUs can concurrently
+ * modify other bits in the word.
+- *
+- * No memory barrier is required here, because x86 cannot reorder stores past
+- * older loads. Same principle as spin_unlock.
+ */
+ static __always_inline void __clear_bit_unlock(long nr, volatile unsigned long *addr)
+ {
+- barrier();
+ __clear_bit(nr, addr);
+ }
+
+@@ -182,7 +172,7 @@ static __always_inline void __clear_bit_unlock(long nr, volatile unsigned long *
+ */
+ static __always_inline void __change_bit(long nr, volatile unsigned long *addr)
+ {
+- asm volatile(__ASM_SIZE(btc) " %1,%0" : ADDR : "Ir" (nr));
++ asm volatile(__ASM_SIZE(btc) " %1,%0" : : ADDR, "Ir" (nr) : "memory");
+ }
+
+ /**
+@@ -202,8 +192,7 @@ static __always_inline void change_bit(long nr, volatile unsigned long *addr)
+ : "iq" ((u8)CONST_MASK(nr)));
+ } else {
+ asm volatile(LOCK_PREFIX __ASM_SIZE(btc) " %1,%0"
+- : BITOP_ADDR(addr)
+- : "Ir" (nr));
++ : : RLONG_ADDR(addr), "Ir" (nr) : "memory");
+ }
+ }
+
+@@ -248,8 +237,8 @@ static __always_inline bool __test_and_set_bit(long nr, volatile unsigned long *
+
+ asm(__ASM_SIZE(bts) " %2,%1"
+ CC_SET(c)
+- : CC_OUT(c) (oldbit), ADDR
+- : "Ir" (nr));
++ : CC_OUT(c) (oldbit)
++ : ADDR, "Ir" (nr) : "memory");
+ return oldbit;
+ }
+
+@@ -288,8 +277,8 @@ static __always_inline bool __test_and_clear_bit(long nr, volatile unsigned long
+
+ asm volatile(__ASM_SIZE(btr) " %2,%1"
+ CC_SET(c)
+- : CC_OUT(c) (oldbit), ADDR
+- : "Ir" (nr));
++ : CC_OUT(c) (oldbit)
++ : ADDR, "Ir" (nr) : "memory");
+ return oldbit;
+ }
+
+@@ -300,8 +289,8 @@ static __always_inline bool __test_and_change_bit(long nr, volatile unsigned lon
+
+ asm volatile(__ASM_SIZE(btc) " %2,%1"
+ CC_SET(c)
+- : CC_OUT(c) (oldbit), ADDR
+- : "Ir" (nr) : "memory");
++ : CC_OUT(c) (oldbit)
++ : ADDR, "Ir" (nr) : "memory");
+
+ return oldbit;
+ }
+@@ -332,7 +321,7 @@ static __always_inline bool variable_test_bit(long nr, volatile const unsigned l
+ asm volatile(__ASM_SIZE(bt) " %2,%1"
+ CC_SET(c)
+ : CC_OUT(c) (oldbit)
+- : "m" (*(unsigned long *)addr), "Ir" (nr));
++ : "m" (*(unsigned long *)addr), "Ir" (nr) : "memory");
+
+ return oldbit;
+ }
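The bitops rewrite drops the "+m" output operands in favor of plain "m" inputs plus a "memory" clobber. A hedged stand-alone example of the idiom (GCC/Clang inline asm, x86-64 only):

    static inline void set_bit0(volatile unsigned long *addr)
    {
            /*
             * The operand is listed only as an input, but BTS does write
             * *addr; the "memory" clobber tells the compiler that memory
             * may have changed, so it cannot cache loads across the asm.
             */
            asm volatile("btsq $0, %0" : : "m" (*addr) : "memory");
    }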
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index 6d6122524711..981ff9479648 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -344,6 +344,7 @@
+ /* Intel-defined CPU features, CPUID level 0x00000007:0 (EDX), word 18 */
+ #define X86_FEATURE_AVX512_4VNNIW (18*32+ 2) /* AVX-512 Neural Network Instructions */
+ #define X86_FEATURE_AVX512_4FMAPS (18*32+ 3) /* AVX-512 Multiply Accumulation Single precision */
++#define X86_FEATURE_TSX_FORCE_ABORT (18*32+13) /* "" TSX_FORCE_ABORT */
+ #define X86_FEATURE_PCONFIG (18*32+18) /* Intel PCONFIG */
+ #define X86_FEATURE_SPEC_CTRL (18*32+26) /* "" Speculation Control (IBRS + IBPB) */
+ #define X86_FEATURE_INTEL_STIBP (18*32+27) /* "" Single Thread Indirect Branch Predictors */
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 180373360e34..71d763ad2637 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -352,6 +352,7 @@ struct kvm_mmu_page {
+ };
+
+ struct kvm_pio_request {
++ unsigned long linear_rip;
+ unsigned long count;
+ int in;
+ int port;
+@@ -570,6 +571,7 @@ struct kvm_vcpu_arch {
+ bool tpr_access_reporting;
+ u64 ia32_xss;
+ u64 microcode_version;
++ u64 arch_capabilities;
+
+ /*
+ * Paging state of the vcpu
+@@ -1255,7 +1257,7 @@ void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm,
+ struct kvm_memory_slot *slot,
+ gfn_t gfn_offset, unsigned long mask);
+ void kvm_mmu_zap_all(struct kvm *kvm);
+-void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, struct kvm_memslots *slots);
++void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen);
+ unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm);
+ void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int kvm_nr_mmu_pages);
+
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index 8e40c2446fd1..ca5bc0eacb95 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -666,6 +666,12 @@
+
+ #define MSR_IA32_TSC_DEADLINE 0x000006E0
+
++
++#define MSR_TSX_FORCE_ABORT 0x0000010F
++
++#define MSR_TFA_RTM_FORCE_ABORT_BIT 0
++#define MSR_TFA_RTM_FORCE_ABORT BIT_ULL(MSR_TFA_RTM_FORCE_ABORT_BIT)
++
+ /* P4/Xeon+ specific */
+ #define MSR_IA32_MCG_EAX 0x00000180
+ #define MSR_IA32_MCG_EBX 0x00000181
+diff --git a/arch/x86/include/asm/string_32.h b/arch/x86/include/asm/string_32.h
+index 55d392c6bd29..2fd165f1cffa 100644
+--- a/arch/x86/include/asm/string_32.h
++++ b/arch/x86/include/asm/string_32.h
+@@ -179,14 +179,7 @@ static inline void *__memcpy3d(void *to, const void *from, size_t len)
+ * No 3D Now!
+ */
+
+-#if (__GNUC__ >= 4)
+ #define memcpy(t, f, n) __builtin_memcpy(t, f, n)
+-#else
+-#define memcpy(t, f, n) \
+- (__builtin_constant_p((n)) \
+- ? __constant_memcpy((t), (f), (n)) \
+- : __memcpy((t), (f), (n)))
+-#endif
+
+ #endif
+ #endif /* !CONFIG_FORTIFY_SOURCE */
+@@ -282,12 +275,7 @@ void *__constant_c_and_count_memset(void *s, unsigned long pattern,
+
+ {
+ int d0, d1;
+-#if __GNUC__ == 4 && __GNUC_MINOR__ == 0
+- /* Workaround for broken gcc 4.0 */
+- register unsigned long eax asm("%eax") = pattern;
+-#else
+ unsigned long eax = pattern;
+-#endif
+
+ switch (count % 4) {
+ case 0:
+@@ -321,15 +309,7 @@ void *__constant_c_and_count_memset(void *s, unsigned long pattern,
+ #define __HAVE_ARCH_MEMSET
+ extern void *memset(void *, int, size_t);
+ #ifndef CONFIG_FORTIFY_SOURCE
+-#if (__GNUC__ >= 4)
+ #define memset(s, c, count) __builtin_memset(s, c, count)
+-#else
+-#define memset(s, c, count) \
+- (__builtin_constant_p(c) \
+- ? __constant_c_x_memset((s), (0x01010101UL * (unsigned char)(c)), \
+- (count)) \
+- : __memset((s), (c), (count)))
+-#endif
+ #endif /* !CONFIG_FORTIFY_SOURCE */
+
+ #define __HAVE_ARCH_MEMSET16
+diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h
+index 4e4194e21a09..75314c3dbe47 100644
+--- a/arch/x86/include/asm/string_64.h
++++ b/arch/x86/include/asm/string_64.h
+@@ -14,21 +14,6 @@
+ extern void *memcpy(void *to, const void *from, size_t len);
+ extern void *__memcpy(void *to, const void *from, size_t len);
+
+-#ifndef CONFIG_FORTIFY_SOURCE
+-#if (__GNUC__ == 4 && __GNUC_MINOR__ < 3) || __GNUC__ < 4
+-#define memcpy(dst, src, len) \
+-({ \
+- size_t __len = (len); \
+- void *__ret; \
+- if (__builtin_constant_p(len) && __len >= 64) \
+- __ret = __memcpy((dst), (src), __len); \
+- else \
+- __ret = __builtin_memcpy((dst), (src), __len); \
+- __ret; \
+-})
+-#endif
+-#endif /* !CONFIG_FORTIFY_SOURCE */
+-
+ #define __HAVE_ARCH_MEMSET
+ void *memset(void *s, int c, size_t n);
+ void *__memset(void *s, int c, size_t n);
+diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
+index c1334aaaa78d..f3aed639dccd 100644
+--- a/arch/x86/include/asm/uaccess.h
++++ b/arch/x86/include/asm/uaccess.h
+@@ -76,7 +76,7 @@ static inline bool __chk_range_not_ok(unsigned long addr, unsigned long size, un
+ #endif
+
+ /**
+- * access_ok: - Checks if a user space pointer is valid
++ * access_ok - Checks if a user space pointer is valid
+ * @addr: User space pointer to start of block to check
+ * @size: Size of block to check
+ *
+@@ -85,12 +85,12 @@ static inline bool __chk_range_not_ok(unsigned long addr, unsigned long size, un
+ *
+ * Checks if a pointer to a block of memory in user space is valid.
+ *
+- * Returns true (nonzero) if the memory block may be valid, false (zero)
+- * if it is definitely invalid.
+- *
+ * Note that, depending on architecture, this function probably just
+ * checks that the pointer is in the user space range - after calling
+ * this function, memory access functions may still return -EFAULT.
++ *
++ * Return: true (nonzero) if the memory block may be valid, false (zero)
++ * if it is definitely invalid.
+ */
+ #define access_ok(addr, size) \
+ ({ \
+@@ -135,7 +135,7 @@ extern int __get_user_bad(void);
+ __typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))
+
+ /**
+- * get_user: - Get a simple variable from user space.
++ * get_user - Get a simple variable from user space.
+ * @x: Variable to store result.
+ * @ptr: Source address, in user space.
+ *
+@@ -149,7 +149,7 @@ __typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))
+ * @ptr must have pointer-to-simple-variable type, and the result of
+ * dereferencing @ptr must be assignable to @x without a cast.
+ *
+- * Returns zero on success, or -EFAULT on error.
++ * Return: zero on success, or -EFAULT on error.
+ * On error, the variable @x is set to zero.
+ */
+ /*
+@@ -227,7 +227,7 @@ extern void __put_user_4(void);
+ extern void __put_user_8(void);
+
+ /**
+- * put_user: - Write a simple value into user space.
++ * put_user - Write a simple value into user space.
+ * @x: Value to copy to user space.
+ * @ptr: Destination address, in user space.
+ *
+@@ -241,7 +241,7 @@ extern void __put_user_8(void);
+ * @ptr must have pointer-to-simple-variable type, and @x must be assignable
+ * to the result of dereferencing @ptr.
+ *
+- * Returns zero on success, or -EFAULT on error.
++ * Return: zero on success, or -EFAULT on error.
+ */
+ #define put_user(x, ptr) \
+ ({ \
+@@ -503,7 +503,7 @@ struct __large_struct { unsigned long buf[100]; };
+ } while (0)
+
+ /**
+- * __get_user: - Get a simple variable from user space, with less checking.
++ * __get_user - Get a simple variable from user space, with less checking.
+ * @x: Variable to store result.
+ * @ptr: Source address, in user space.
+ *
+@@ -520,7 +520,7 @@ struct __large_struct { unsigned long buf[100]; };
+ * Caller must check the pointer with access_ok() before calling this
+ * function.
+ *
+- * Returns zero on success, or -EFAULT on error.
++ * Return: zero on success, or -EFAULT on error.
+ * On error, the variable @x is set to zero.
+ */
+
+@@ -528,7 +528,7 @@ struct __large_struct { unsigned long buf[100]; };
+ __get_user_nocheck((x), (ptr), sizeof(*(ptr)))
+
+ /**
+- * __put_user: - Write a simple value into user space, with less checking.
++ * __put_user - Write a simple value into user space, with less checking.
+ * @x: Value to copy to user space.
+ * @ptr: Destination address, in user space.
+ *
+@@ -545,7 +545,7 @@ struct __large_struct { unsigned long buf[100]; };
+ * Caller must check the pointer with access_ok() before calling this
+ * function.
+ *
+- * Returns zero on success, or -EFAULT on error.
++ * Return: zero on success, or -EFAULT on error.
+ */
+
+ #define __put_user(x, ptr) \
+diff --git a/arch/x86/include/asm/unwind.h b/arch/x86/include/asm/unwind.h
+index 1f86e1b0a5cd..499578f7e6d7 100644
+--- a/arch/x86/include/asm/unwind.h
++++ b/arch/x86/include/asm/unwind.h
+@@ -23,6 +23,12 @@ struct unwind_state {
+ #elif defined(CONFIG_UNWINDER_FRAME_POINTER)
+ bool got_irq;
+ unsigned long *bp, *orig_sp, ip;
++ /*
++ * If non-NULL: The current frame is incomplete and doesn't contain a
++ * valid BP. When looking for the next frame, use this instead of the
++ * non-existent saved BP.
++ */
++ unsigned long *next_bp;
+ struct pt_regs *regs;
+ #else
+ unsigned long *sp;
+diff --git a/arch/x86/include/asm/xen/hypercall.h b/arch/x86/include/asm/xen/hypercall.h
+index ef05bea7010d..6b5c710846f5 100644
+--- a/arch/x86/include/asm/xen/hypercall.h
++++ b/arch/x86/include/asm/xen/hypercall.h
+@@ -206,6 +206,9 @@ xen_single_call(unsigned int call,
+ __HYPERCALL_DECLS;
+ __HYPERCALL_5ARG(a1, a2, a3, a4, a5);
+
++ if (call >= PAGE_SIZE / sizeof(hypercall_page[0]))
++ return -EINVAL;
++
+ asm volatile(CALL_NOSPEC
+ : __HYPERCALL_5PARAM
+ : [thunk_target] "a" (&hypercall_page[call])
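The added guard is the usual check-before-index pattern: reject call numbers that would land past the end of the hypercall page. A toy version with stand-in sizes:

    #define TOY_PAGE_SIZE 4096

    static char toy_hypercall_page[TOY_PAGE_SIZE / 32][32]; /* 32-byte slots */

    static long toy_single_call(unsigned int call)
    {
            if (call >= TOY_PAGE_SIZE / sizeof(toy_hypercall_page[0]))
                    return -22;     /* -EINVAL */
            /* ... dispatch through toy_hypercall_page[call] ... */
            return 0;
    }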
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 69f6bbb41be0..01004bfb1a1b 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -819,11 +819,9 @@ static void init_amd_bd(struct cpuinfo_x86 *c)
+ static void init_amd_zn(struct cpuinfo_x86 *c)
+ {
+ set_cpu_cap(c, X86_FEATURE_ZEN);
+- /*
+- * Fix erratum 1076: CPB feature bit not being set in CPUID. It affects
+- * all up to and including B1.
+- */
+- if (c->x86_model <= 1 && c->x86_stepping <= 1)
++
++ /* Fix erratum 1076: CPB feature bit not being set in CPUID. */
++ if (!cpu_has(c, X86_FEATURE_CPB))
+ set_cpu_cap(c, X86_FEATURE_CPB);
+ }
+
+diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
+index 8257a59704ae..763d4264d16a 100644
+--- a/arch/x86/kernel/ftrace.c
++++ b/arch/x86/kernel/ftrace.c
+@@ -49,7 +49,7 @@ int ftrace_arch_code_modify_post_process(void)
+ union ftrace_code_union {
+ char code[MCOUNT_INSN_SIZE];
+ struct {
+- unsigned char e8;
++ unsigned char op;
+ int offset;
+ } __attribute__((packed));
+ };
+@@ -59,20 +59,23 @@ static int ftrace_calc_offset(long ip, long addr)
+ return (int)(addr - ip);
+ }
+
+-static unsigned char *ftrace_call_replace(unsigned long ip, unsigned long addr)
++static unsigned char *
++ftrace_text_replace(unsigned char op, unsigned long ip, unsigned long addr)
+ {
+ static union ftrace_code_union calc;
+
+- calc.e8 = 0xe8;
++ calc.op = op;
+ calc.offset = ftrace_calc_offset(ip + MCOUNT_INSN_SIZE, addr);
+
+- /*
+- * No locking needed, this must be called via kstop_machine
+- * which in essence is like running on a uniprocessor machine.
+- */
+ return calc.code;
+ }
+
++static unsigned char *
++ftrace_call_replace(unsigned long ip, unsigned long addr)
++{
++ return ftrace_text_replace(0xe8, ip, addr);
++}
++
+ static inline int
+ within(unsigned long addr, unsigned long start, unsigned long end)
+ {
+@@ -664,22 +667,6 @@ int __init ftrace_dyn_arch_init(void)
+ return 0;
+ }
+
+-#if defined(CONFIG_X86_64) || defined(CONFIG_FUNCTION_GRAPH_TRACER)
+-static unsigned char *ftrace_jmp_replace(unsigned long ip, unsigned long addr)
+-{
+- static union ftrace_code_union calc;
+-
+- /* Jmp not a call (ignore the .e8) */
+- calc.e8 = 0xe9;
+- calc.offset = ftrace_calc_offset(ip + MCOUNT_INSN_SIZE, addr);
+-
+- /*
+- * ftrace external locks synchronize the access to the static variable.
+- */
+- return calc.code;
+-}
+-#endif
+-
+ /* Currently only x86_64 supports dynamic trampolines */
+ #ifdef CONFIG_X86_64
+
+@@ -891,8 +878,8 @@ static void *addr_from_call(void *ptr)
+ return NULL;
+
+ /* Make sure this is a call */
+- if (WARN_ON_ONCE(calc.e8 != 0xe8)) {
+- pr_warn("Expected e8, got %x\n", calc.e8);
++ if (WARN_ON_ONCE(calc.op != 0xe8)) {
++ pr_warn("Expected e8, got %x\n", calc.op);
+ return NULL;
+ }
+
+@@ -963,6 +950,11 @@ void arch_ftrace_trampoline_free(struct ftrace_ops *ops)
+ #ifdef CONFIG_DYNAMIC_FTRACE
+ extern void ftrace_graph_call(void);
+
++static unsigned char *ftrace_jmp_replace(unsigned long ip, unsigned long addr)
++{
++ return ftrace_text_replace(0xe9, ip, addr);
++}
++
+ static int ftrace_mod_jmp(unsigned long ip, void *func)
+ {
+ unsigned char *new;
+diff --git a/arch/x86/kernel/kexec-bzimage64.c b/arch/x86/kernel/kexec-bzimage64.c
+index 53917a3ebf94..1f3b77367948 100644
+--- a/arch/x86/kernel/kexec-bzimage64.c
++++ b/arch/x86/kernel/kexec-bzimage64.c
+@@ -218,6 +218,9 @@ setup_boot_parameters(struct kimage *image, struct boot_params *params,
+ params->screen_info.ext_mem_k = 0;
+ params->alt_mem_k = 0;
+
++ /* Always fill in RSDP: it is either 0 or a valid value */
++ params->acpi_rsdp_addr = boot_params.acpi_rsdp_addr;
++
+	memset(&params->apm_bios_info, 0, sizeof(params->apm_bios_info));
+ memset(¶ms->apm_bios_info, 0, sizeof(params->apm_bios_info));
+
+@@ -256,7 +259,6 @@ setup_boot_parameters(struct kimage *image, struct boot_params *params,
+ setup_efi_state(params, params_load_addr, efi_map_offset, efi_map_sz,
+ efi_setup_data_offset);
+ #endif
+-
+ /* Setup EDD info */
+ memcpy(params->eddbuf, boot_params.eddbuf,
+ EDDMAXNR * sizeof(struct edd_info));
+diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
+index 6adf6e6c2933..544bd41a514c 100644
+--- a/arch/x86/kernel/kprobes/opt.c
++++ b/arch/x86/kernel/kprobes/opt.c
+@@ -141,6 +141,11 @@ asm (
+
+ void optprobe_template_func(void);
+ STACK_FRAME_NON_STANDARD(optprobe_template_func);
++NOKPROBE_SYMBOL(optprobe_template_func);
++NOKPROBE_SYMBOL(optprobe_template_entry);
++NOKPROBE_SYMBOL(optprobe_template_val);
++NOKPROBE_SYMBOL(optprobe_template_call);
++NOKPROBE_SYMBOL(optprobe_template_end);
+
+ #define TMPL_MOVE_IDX \
+ ((long)optprobe_template_val - (long)optprobe_template_entry)
+diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
+index e811d4d1c824..d908a37bf3f3 100644
+--- a/arch/x86/kernel/kvmclock.c
++++ b/arch/x86/kernel/kvmclock.c
+@@ -104,12 +104,8 @@ static u64 kvm_sched_clock_read(void)
+
+ static inline void kvm_sched_clock_init(bool stable)
+ {
+- if (!stable) {
+- pv_ops.time.sched_clock = kvm_clock_read;
++ if (!stable)
+ clear_sched_clock_stable();
+- return;
+- }
+-
+ kvm_sched_clock_offset = kvm_clock_read();
+ pv_ops.time.sched_clock = kvm_sched_clock_read;
+
+diff --git a/arch/x86/kernel/unwind_frame.c b/arch/x86/kernel/unwind_frame.c
+index 3dc26f95d46e..9b9fd4826e7a 100644
+--- a/arch/x86/kernel/unwind_frame.c
++++ b/arch/x86/kernel/unwind_frame.c
+@@ -320,10 +320,14 @@ bool unwind_next_frame(struct unwind_state *state)
+ }
+
+ /* Get the next frame pointer: */
+- if (state->regs)
++ if (state->next_bp) {
++ next_bp = state->next_bp;
++ state->next_bp = NULL;
++ } else if (state->regs) {
+ next_bp = (unsigned long *)state->regs->bp;
+- else
++ } else {
+ next_bp = (unsigned long *)READ_ONCE_TASK_STACK(state->task, *state->bp);
++ }
+
+ /* Move to the next frame if it's safe: */
+ if (!update_stack_state(state, next_bp))
+@@ -398,6 +402,21 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task,
+
+ bp = get_frame_pointer(task, regs);
+
++ /*
++ * If we crash with IP==0, the last successfully executed instruction
++ * was probably an indirect function call with a NULL function pointer.
++ * That means that SP points into the middle of an incomplete frame:
++ * *SP is a return pointer, and *(SP-sizeof(unsigned long)) is where we
++ * would have written a frame pointer if we hadn't crashed.
++ * Pretend that the frame is complete and that BP points to it, but save
++ * the real BP so that we can use it when looking for the next frame.
++ */
++ if (regs && regs->ip == 0 &&
++ (unsigned long *)kernel_stack_pointer(regs) >= first_frame) {
++ state->next_bp = bp;
++ bp = ((unsigned long *)kernel_stack_pointer(regs)) - 1;
++ }
++
+ /* Initialize stack info and make sure the frame data is accessible: */
+ get_stack_info(bp, state->task, &state->stack_info,
+ &state->stack_mask);
+@@ -410,7 +429,7 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task,
+ */
+ while (!unwind_done(state) &&
+ (!on_stack(&state->stack_info, first_frame, sizeof(long)) ||
+- state->bp < first_frame))
++ (state->next_bp == NULL && state->bp < first_frame)))
+ unwind_next_frame(state);
+ }
+ EXPORT_SYMBOL_GPL(__unwind_start);
+diff --git a/arch/x86/kernel/unwind_orc.c b/arch/x86/kernel/unwind_orc.c
+index 26038eacf74a..89be1be1790c 100644
+--- a/arch/x86/kernel/unwind_orc.c
++++ b/arch/x86/kernel/unwind_orc.c
+@@ -113,6 +113,20 @@ static struct orc_entry *orc_ftrace_find(unsigned long ip)
+ }
+ #endif
+
++/*
++ * If we crash with IP==0, the last successfully executed instruction
++ * was probably an indirect function call with a NULL function pointer,
++ * and we don't have unwind information for NULL.
++ * This hardcoded ORC entry for IP==0 allows us to unwind from a NULL function
++ * pointer into its parent and then continue normally from there.
++ */
++static struct orc_entry null_orc_entry = {
++ .sp_offset = sizeof(long),
++ .sp_reg = ORC_REG_SP,
++ .bp_reg = ORC_REG_UNDEFINED,
++ .type = ORC_TYPE_CALL
++};
++
+ static struct orc_entry *orc_find(unsigned long ip)
+ {
+ static struct orc_entry *orc;
+@@ -120,6 +134,9 @@ static struct orc_entry *orc_find(unsigned long ip)
+ if (!orc_init)
+ return NULL;
+
++ if (ip == 0)
++ return &null_orc_entry;
++
+ /* For non-init vmlinux addresses, use the fast lookup table: */
+ if (ip >= LOOKUP_START_IP && ip < LOOKUP_STOP_IP) {
+ unsigned int idx, start, stop;
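null_orc_entry says, in effect, that at IP==0 the return address sits directly at SP (sp_offset == sizeof(long)), which is exactly the stack state right after a CALL through a NULL function pointer. A sketch of the bug class it covers:

    void (*callback)(void);         /* never assigned: NULL in .bss */

    void trigger(void)
    {
            /*
             * CALL pushes the return address and jumps to address 0; with
             * the hardcoded entry the unwinder can step from IP==0 back
             * into trigger() and continue normally from there.
             */
            callback();
    }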
+diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
+index 0d618ee634ac..ee3b5c7d662e 100644
+--- a/arch/x86/kernel/vmlinux.lds.S
++++ b/arch/x86/kernel/vmlinux.lds.S
+@@ -401,7 +401,7 @@ SECTIONS
+ * Per-cpu symbols which need to be offset from __per_cpu_load
+ * for the boot processor.
+ */
+-#define INIT_PER_CPU(x) init_per_cpu__##x = x + __per_cpu_load
++#define INIT_PER_CPU(x) init_per_cpu__##x = ABSOLUTE(x) + __per_cpu_load
+ INIT_PER_CPU(gdt_page);
+ INIT_PER_CPU(irq_stack_union);
+
+diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
+index f2d1d230d5b8..9ab33cab9486 100644
+--- a/arch/x86/kvm/mmu.c
++++ b/arch/x86/kvm/mmu.c
+@@ -5635,13 +5635,8 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
+ {
+ struct kvm_memslots *slots;
+ struct kvm_memory_slot *memslot;
+- bool flush_tlb = true;
+- bool flush = false;
+ int i;
+
+- if (kvm_available_flush_tlb_with_range())
+- flush_tlb = false;
+-
+ spin_lock(&kvm->mmu_lock);
+ for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+ slots = __kvm_memslots(kvm, i);
+@@ -5653,17 +5648,12 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
+ if (start >= end)
+ continue;
+
+- flush |= slot_handle_level_range(kvm, memslot,
+- kvm_zap_rmapp, PT_PAGE_TABLE_LEVEL,
+- PT_MAX_HUGEPAGE_LEVEL, start,
+- end - 1, flush_tlb);
++ slot_handle_level_range(kvm, memslot, kvm_zap_rmapp,
++ PT_PAGE_TABLE_LEVEL, PT_MAX_HUGEPAGE_LEVEL,
++ start, end - 1, true);
+ }
+ }
+
+- if (flush)
+- kvm_flush_remote_tlbs_with_address(kvm, gfn_start,
+- gfn_end - gfn_start + 1);
+-
+ spin_unlock(&kvm->mmu_lock);
+ }
+
+@@ -5901,13 +5891,30 @@ static bool kvm_has_zapped_obsolete_pages(struct kvm *kvm)
+ return unlikely(!list_empty_careful(&kvm->arch.zapped_obsolete_pages));
+ }
+
+-void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, struct kvm_memslots *slots)
++void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen)
+ {
++ gen &= MMIO_GEN_MASK;
++
++ /*
++ * Shift to eliminate the "update in-progress" flag, which isn't
++ * included in the spte's generation number.
++ */
++ gen >>= 1;
++
++ /*
++ * Generation numbers are incremented in multiples of the number of
++ * address spaces in order to provide unique generations across all
++ * address spaces. Strip what is effectively the address space
++ * modifier prior to checking for a wrap of the MMIO generation so
++ * that a wrap in any address space is detected.
++ */
++ gen &= ~((u64)KVM_ADDRESS_SPACE_NUM - 1);
++
+ /*
+- * The very rare case: if the generation-number is round,
++ * The very rare case: if the MMIO generation number has wrapped,
+ * zap all shadow pages.
+ */
+- if (unlikely((slots->generation & MMIO_GEN_MASK) == 0)) {
++ if (unlikely(gen == 0)) {
+ kvm_debug_ratelimited("kvm: zapping shadow pages for mmio generation wraparound\n");
+ kvm_mmu_invalidate_zap_all_pages(kvm);
+ }
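The new wrap test is pure bit manipulation on the memslots generation: drop the low "update in-progress" bit, drop the address-space modifier bits, and treat a result of zero as a wrap. A standalone sketch of the same arithmetic (the mask and address-space count below are illustrative stand-ins, not the kernel's real constants):

#include <stdbool.h>
#include <stdint.h>

#define NR_ADDR_SPACES	2ULL		/* stand-in for KVM_ADDRESS_SPACE_NUM */
#define GEN_MASK	0x7ffffULL	/* stand-in for MMIO_GEN_MASK */

static bool mmio_gen_wrapped(uint64_t gen)
{
	gen &= GEN_MASK;
	gen >>= 1;			/* strip the update-in-progress flag */
	gen &= ~(NR_ADDR_SPACES - 1);	/* strip the address-space modifier */
	return gen == 0;		/* zero after masking means wraparound */
}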
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index f13a3a24d360..a9b8e38d78ad 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -6422,11 +6422,11 @@ e_free:
+ return ret;
+ }
+
+-static int get_num_contig_pages(int idx, struct page **inpages,
+- unsigned long npages)
++static unsigned long get_num_contig_pages(unsigned long idx,
++ struct page **inpages, unsigned long npages)
+ {
+ unsigned long paddr, next_paddr;
+- int i = idx + 1, pages = 1;
++ unsigned long i = idx + 1, pages = 1;
+
+ /* find the number of contiguous pages starting from idx */
+ paddr = __sme_page_pa(inpages[idx]);
+@@ -6445,12 +6445,12 @@ static int get_num_contig_pages(int idx, struct page **inpages,
+
+ static int sev_launch_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
+ {
+- unsigned long vaddr, vaddr_end, next_vaddr, npages, size;
++ unsigned long vaddr, vaddr_end, next_vaddr, npages, pages, size, i;
+ struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+ struct kvm_sev_launch_update_data params;
+ struct sev_data_launch_update_data *data;
+ struct page **inpages;
+- int i, ret, pages;
++ int ret;
+
+ if (!sev_guest(kvm))
+ return -ENOTTY;
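The type changes above matter because a guest can pass a region large enough that the page count no longer fits in an int; once a signed counter overflows, behavior is undefined and in practice the value goes negative, breaking the contiguity loop. A self-contained demonstration of the narrowing (values picked only to show the wrap):

#include <limits.h>
#include <stdio.h>

int main(void)
{
	unsigned long npages = (unsigned long)INT_MAX + 2;
	int pages = (int)npages;	/* the old, too-narrow type; this
					 * conversion is implementation-
					 * defined and typically negative */

	printf("npages=%lu, as int=%d\n", npages, pages);
	return 0;
}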
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index d737a51a53ca..f90b3a948291 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -500,6 +500,17 @@ static void nested_vmx_disable_intercept_for_msr(unsigned long *msr_bitmap_l1,
+ }
+ }
+
++static inline void enable_x2apic_msr_intercepts(unsigned long *msr_bitmap) {
++ int msr;
++
++ for (msr = 0x800; msr <= 0x8ff; msr += BITS_PER_LONG) {
++ unsigned word = msr / BITS_PER_LONG;
++
++ msr_bitmap[word] = ~0;
++ msr_bitmap[word + (0x800 / sizeof(long))] = ~0;
++ }
++}
++
+ /*
+ * Merge L0's and L1's MSR bitmap, return false to indicate that
+ * we do not use the hardware.
+@@ -541,39 +552,44 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
+ return false;
+
+ msr_bitmap_l1 = (unsigned long *)kmap(page);
+- if (nested_cpu_has_apic_reg_virt(vmcs12)) {
+- /*
+- * L0 need not intercept reads for MSRs between 0x800 and 0x8ff, it
+- * just lets the processor take the value from the virtual-APIC page;
+- * take those 256 bits directly from the L1 bitmap.
+- */
+- for (msr = 0x800; msr <= 0x8ff; msr += BITS_PER_LONG) {
+- unsigned word = msr / BITS_PER_LONG;
+- msr_bitmap_l0[word] = msr_bitmap_l1[word];
+- msr_bitmap_l0[word + (0x800 / sizeof(long))] = ~0;
+- }
+- } else {
+- for (msr = 0x800; msr <= 0x8ff; msr += BITS_PER_LONG) {
+- unsigned word = msr / BITS_PER_LONG;
+- msr_bitmap_l0[word] = ~0;
+- msr_bitmap_l0[word + (0x800 / sizeof(long))] = ~0;
+- }
+- }
+
+- nested_vmx_disable_intercept_for_msr(
+- msr_bitmap_l1, msr_bitmap_l0,
+- X2APIC_MSR(APIC_TASKPRI),
+- MSR_TYPE_W);
++ /*
++ * To keep the control flow simple, pay eight 8-byte writes (sixteen
++ * 4-byte writes on 32-bit systems) up front to enable intercepts for
++ * the x2APIC MSR range and selectively disable them below.
++ */
++ enable_x2apic_msr_intercepts(msr_bitmap_l0);
++
++ if (nested_cpu_has_virt_x2apic_mode(vmcs12)) {
++ if (nested_cpu_has_apic_reg_virt(vmcs12)) {
++ /*
++ * L0 need not intercept reads for MSRs between 0x800
++ * and 0x8ff, it just lets the processor take the value
++ * from the virtual-APIC page; take those 256 bits
++ * directly from the L1 bitmap.
++ */
++ for (msr = 0x800; msr <= 0x8ff; msr += BITS_PER_LONG) {
++ unsigned word = msr / BITS_PER_LONG;
++
++ msr_bitmap_l0[word] = msr_bitmap_l1[word];
++ }
++ }
+
+- if (nested_cpu_has_vid(vmcs12)) {
+- nested_vmx_disable_intercept_for_msr(
+- msr_bitmap_l1, msr_bitmap_l0,
+- X2APIC_MSR(APIC_EOI),
+- MSR_TYPE_W);
+ nested_vmx_disable_intercept_for_msr(
+ msr_bitmap_l1, msr_bitmap_l0,
+- X2APIC_MSR(APIC_SELF_IPI),
+- MSR_TYPE_W);
++ X2APIC_MSR(APIC_TASKPRI),
++ MSR_TYPE_R | MSR_TYPE_W);
++
++ if (nested_cpu_has_vid(vmcs12)) {
++ nested_vmx_disable_intercept_for_msr(
++ msr_bitmap_l1, msr_bitmap_l0,
++ X2APIC_MSR(APIC_EOI),
++ MSR_TYPE_W);
++ nested_vmx_disable_intercept_for_msr(
++ msr_bitmap_l1, msr_bitmap_l0,
++ X2APIC_MSR(APIC_SELF_IPI),
++ MSR_TYPE_W);
++ }
+ }
+
+ if (spec_ctrl)
+@@ -2765,7 +2781,7 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
+ "add $%c[wordsize], %%" _ASM_SP "\n\t" /* un-adjust RSP */
+
+ /* Check if vmlaunch or vmresume is needed */
+- "cmpl $0, %c[launched](%% " _ASM_CX")\n\t"
++ "cmpb $0, %c[launched](%% " _ASM_CX")\n\t"
+
+ "call vmx_vmenter\n\t"
+
+@@ -4035,25 +4051,50 @@ int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsigned long exit_qualification,
+ /* Addr = segment_base + offset */
+ /* offset = base + [index * scale] + displacement */
+ off = exit_qualification; /* holds the displacement */
++ if (addr_size == 1)
++ off = (gva_t)sign_extend64(off, 31);
++ else if (addr_size == 0)
++ off = (gva_t)sign_extend64(off, 15);
+ if (base_is_valid)
+ off += kvm_register_read(vcpu, base_reg);
+ if (index_is_valid)
+ off += kvm_register_read(vcpu, index_reg)<<scaling;
+ vmx_get_segment(vcpu, &s, seg_reg);
+- *ret = s.base + off;
+
++ /*
++ * The effective address, i.e. @off, of a memory operand is truncated
++ * based on the address size of the instruction. Note that this is
++ * the *effective address*, i.e. the address prior to accounting for
++ * the segment's base.
++ */
+ if (addr_size == 1) /* 32 bit */
+- *ret &= 0xffffffff;
++ off &= 0xffffffff;
++ else if (addr_size == 0) /* 16 bit */
++ off &= 0xffff;
+
+ /* Checks for #GP/#SS exceptions. */
+ exn = false;
+ if (is_long_mode(vcpu)) {
++ /*
++ * The virtual/linear address is never truncated in 64-bit
++ * mode, e.g. a 32-bit address size can yield a 64-bit virtual
++ * address when using FS/GS with a non-zero base.
++ */
++ *ret = s.base + off;
++
+ /* Long mode: #GP(0)/#SS(0) if the memory address is in a
+ * non-canonical form. This is the only check on the memory
+ * destination for long mode!
+ */
+ exn = is_noncanonical_address(*ret, vcpu);
+ } else if (is_protmode(vcpu)) {
++ /*
++ * When not in long mode, the virtual/linear address is
++ * unconditionally truncated to 32 bits regardless of the
++ * address size.
++ */
++ *ret = (s.base + off) & 0xffffffff;
++
+ /* Protected mode: apply checks for segment validity in the
+ * following order:
+ * - segment type check (#GP(0) may be thrown)
+@@ -4077,10 +4118,16 @@ int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsigned long exit_qualification,
+ /* Protected mode: #GP(0)/#SS(0) if the segment is unusable.
+ */
+ exn = (s.unusable != 0);
+- /* Protected mode: #GP(0)/#SS(0) if the memory
+- * operand is outside the segment limit.
++
++ /*
++ * Protected mode: #GP(0)/#SS(0) if the memory operand is
++ * outside the segment limit. All CPUs that support VMX ignore
++ * limit checks for flat segments, i.e. segments with base==0,
++ * limit==0xffffffff and of type expand-up data or code.
+ */
+- exn = exn || (off + sizeof(u64) > s.limit);
++ if (!(s.base == 0 && s.limit == 0xffffffff &&
++ ((s.type & 8) || !(s.type & 4))))
++ exn = exn || (off + sizeof(u64) > s.limit);
+ }
+ if (exn) {
+ kvm_queue_exception_e(vcpu,
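The hunk's comments spell out the order of operations: sign-extend the displacement to the address size, add base and scaled index, truncate the *effective* address to the address size, and only then add the segment base (the sum itself being truncated to 32 bits outside long mode). A simplified standalone version of that arithmetic (mirroring the logic; the kernel uses sign_extend64()):

#include <stdint.h>

/* Sign-extend the value held in bits [0..b] of v. */
static uint64_t sign_extend(uint64_t v, unsigned int b)
{
	uint64_t m = 1ULL << b;

	return (v ^ m) - m;
}

/* addr_size: 0 = 16-bit, 1 = 32-bit, 2 = 64-bit, as in the exit info. */
static uint64_t effective_addr(uint64_t disp, uint64_t base, uint64_t index,
			       unsigned int scale, int addr_size)
{
	uint64_t off = disp;

	if (addr_size == 1)
		off = sign_extend(off, 31);
	else if (addr_size == 0)
		off = sign_extend(off, 15);

	off += base + (index << scale);

	if (addr_size == 1)		/* truncate before the segment base */
		off &= 0xffffffffULL;
	else if (addr_size == 0)
		off &= 0xffffULL;

	return off;			/* caller adds the segment base */
}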
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 30a6bcd735ec..a0a770816429 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -1679,12 +1679,6 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+
+ msr_info->data = to_vmx(vcpu)->spec_ctrl;
+ break;
+- case MSR_IA32_ARCH_CAPABILITIES:
+- if (!msr_info->host_initiated &&
+- !guest_cpuid_has(vcpu, X86_FEATURE_ARCH_CAPABILITIES))
+- return 1;
+- msr_info->data = to_vmx(vcpu)->arch_capabilities;
+- break;
+ case MSR_IA32_SYSENTER_CS:
+ msr_info->data = vmcs_read32(GUEST_SYSENTER_CS);
+ break;
+@@ -1891,11 +1885,6 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ vmx_disable_intercept_for_msr(vmx->vmcs01.msr_bitmap, MSR_IA32_PRED_CMD,
+ MSR_TYPE_W);
+ break;
+- case MSR_IA32_ARCH_CAPABILITIES:
+- if (!msr_info->host_initiated)
+- return 1;
+- vmx->arch_capabilities = data;
+- break;
+ case MSR_IA32_CR_PAT:
+ if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT) {
+ if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data))
+@@ -4083,8 +4072,6 @@ static void vmx_vcpu_setup(struct vcpu_vmx *vmx)
+ ++vmx->nmsrs;
+ }
+
+- vmx->arch_capabilities = kvm_get_arch_capabilities();
+-
+ vm_exit_controls_init(vmx, vmx_vmexit_ctrl());
+
+ /* 22.2.1, 20.8.1 */
+@@ -6399,7 +6386,7 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
+ "mov %%" _ASM_AX", %%cr2 \n\t"
+ "3: \n\t"
+ /* Check if vmlaunch or vmresume is needed */
+- "cmpl $0, %c[launched](%%" _ASM_CX ") \n\t"
++ "cmpb $0, %c[launched](%%" _ASM_CX ") \n\t"
+ /* Load guest registers. Don't clobber flags. */
+ "mov %c[rax](%%" _ASM_CX "), %%" _ASM_AX " \n\t"
+ "mov %c[rbx](%%" _ASM_CX "), %%" _ASM_BX " \n\t"
+@@ -6449,10 +6436,15 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
+ "mov %%r13, %c[r13](%%" _ASM_CX ") \n\t"
+ "mov %%r14, %c[r14](%%" _ASM_CX ") \n\t"
+ "mov %%r15, %c[r15](%%" _ASM_CX ") \n\t"
++
+ /*
+- * Clear host registers marked as clobbered to prevent
+- * speculative use.
+- */
++ * Clear all general purpose registers (except RSP, which is loaded by
++ * the CPU during VM-Exit) to prevent speculative use of the guest's
++ * values, even those that are saved/loaded via the stack. In theory,
++ * an L1 cache miss when restoring registers could lead to speculative
++ * execution with the guest's values. Zeroing XORs are dirt cheap,
++ * i.e. the extra paranoia is essentially free.
++ */
+ "xor %%r8d, %%r8d \n\t"
+ "xor %%r9d, %%r9d \n\t"
+ "xor %%r10d, %%r10d \n\t"
+@@ -6467,8 +6459,11 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
+
+ "xor %%eax, %%eax \n\t"
+ "xor %%ebx, %%ebx \n\t"
++ "xor %%ecx, %%ecx \n\t"
++ "xor %%edx, %%edx \n\t"
+ "xor %%esi, %%esi \n\t"
+ "xor %%edi, %%edi \n\t"
++ "xor %%ebp, %%ebp \n\t"
+ "pop %%" _ASM_BP "; pop %%" _ASM_DX " \n\t"
+ : ASM_CALL_CONSTRAINT
+ : "c"(vmx), "d"((unsigned long)HOST_RSP), "S"(evmcs_rsp),
+diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
+index 0ac0a64c7790..1abae731c3e4 100644
+--- a/arch/x86/kvm/vmx/vmx.h
++++ b/arch/x86/kvm/vmx/vmx.h
+@@ -191,7 +191,6 @@ struct vcpu_vmx {
+ u64 msr_guest_kernel_gs_base;
+ #endif
+
+- u64 arch_capabilities;
+ u64 spec_ctrl;
+
+ u32 vm_entry_controls_shadow;
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 941f932373d0..7ee802a92bc8 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -2443,6 +2443,11 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ if (msr_info->host_initiated)
+ vcpu->arch.microcode_version = data;
+ break;
++ case MSR_IA32_ARCH_CAPABILITIES:
++ if (!msr_info->host_initiated)
++ return 1;
++ vcpu->arch.arch_capabilities = data;
++ break;
+ case MSR_EFER:
+ return set_efer(vcpu, data);
+ case MSR_K7_HWCR:
+@@ -2747,6 +2752,12 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ case MSR_IA32_UCODE_REV:
+ msr_info->data = vcpu->arch.microcode_version;
+ break;
++ case MSR_IA32_ARCH_CAPABILITIES:
++ if (!msr_info->host_initiated &&
++ !guest_cpuid_has(vcpu, X86_FEATURE_ARCH_CAPABILITIES))
++ return 1;
++ msr_info->data = vcpu->arch.arch_capabilities;
++ break;
+ case MSR_IA32_TSC:
+ msr_info->data = kvm_scale_tsc(vcpu, rdtsc()) + vcpu->arch.tsc_offset;
+ break;
+@@ -6522,14 +6533,27 @@ int kvm_emulate_instruction_from_buffer(struct kvm_vcpu *vcpu,
+ }
+ EXPORT_SYMBOL_GPL(kvm_emulate_instruction_from_buffer);
+
++static int complete_fast_pio_out(struct kvm_vcpu *vcpu)
++{
++ vcpu->arch.pio.count = 0;
++
++ if (unlikely(!kvm_is_linear_rip(vcpu, vcpu->arch.pio.linear_rip)))
++ return 1;
++
++ return kvm_skip_emulated_instruction(vcpu);
++}
++
+ static int kvm_fast_pio_out(struct kvm_vcpu *vcpu, int size,
+ unsigned short port)
+ {
+ unsigned long val = kvm_register_read(vcpu, VCPU_REGS_RAX);
+ int ret = emulator_pio_out_emulated(&vcpu->arch.emulate_ctxt,
+ size, port, &val, 1);
+- /* do not return to emulator after return from userspace */
+- vcpu->arch.pio.count = 0;
++
++ if (!ret) {
++ vcpu->arch.pio.linear_rip = kvm_get_linear_rip(vcpu);
++ vcpu->arch.complete_userspace_io = complete_fast_pio_out;
++ }
+ return ret;
+ }
+
+@@ -6540,6 +6564,11 @@ static int complete_fast_pio_in(struct kvm_vcpu *vcpu)
+ /* We should only ever be called with arch.pio.count equal to 1 */
+ BUG_ON(vcpu->arch.pio.count != 1);
+
++ if (unlikely(!kvm_is_linear_rip(vcpu, vcpu->arch.pio.linear_rip))) {
++ vcpu->arch.pio.count = 0;
++ return 1;
++ }
++
+ /* For size less than 4 we merge, else we zero extend */
+ val = (vcpu->arch.pio.size < 4) ? kvm_register_read(vcpu, VCPU_REGS_RAX)
+ : 0;
+@@ -6552,7 +6581,7 @@ static int complete_fast_pio_in(struct kvm_vcpu *vcpu)
+ vcpu->arch.pio.port, &val, 1);
+ kvm_register_write(vcpu, VCPU_REGS_RAX, val);
+
+- return 1;
++ return kvm_skip_emulated_instruction(vcpu);
+ }
+
+ static int kvm_fast_pio_in(struct kvm_vcpu *vcpu, int size,
+@@ -6571,6 +6600,7 @@ static int kvm_fast_pio_in(struct kvm_vcpu *vcpu, int size,
+ return ret;
+ }
+
++ vcpu->arch.pio.linear_rip = kvm_get_linear_rip(vcpu);
+ vcpu->arch.complete_userspace_io = complete_fast_pio_in;
+
+ return 0;
+@@ -6578,16 +6608,13 @@ static int kvm_fast_pio_in(struct kvm_vcpu *vcpu, int size,
+
+ int kvm_fast_pio(struct kvm_vcpu *vcpu, int size, unsigned short port, int in)
+ {
+- int ret = kvm_skip_emulated_instruction(vcpu);
++ int ret;
+
+- /*
+- * TODO: we might be squashing a KVM_GUESTDBG_SINGLESTEP-triggered
+- * KVM_EXIT_DEBUG here.
+- */
+ if (in)
+- return kvm_fast_pio_in(vcpu, size, port) && ret;
++ ret = kvm_fast_pio_in(vcpu, size, port);
+ else
+- return kvm_fast_pio_out(vcpu, size, port) && ret;
++ ret = kvm_fast_pio_out(vcpu, size, port);
++ return ret && kvm_skip_emulated_instruction(vcpu);
+ }
+ EXPORT_SYMBOL_GPL(kvm_fast_pio);
+
+@@ -8725,6 +8752,7 @@ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm,
+
+ int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
+ {
++ vcpu->arch.arch_capabilities = kvm_get_arch_capabilities();
+ vcpu->arch.msr_platform_info = MSR_PLATFORM_INFO_CPUID_FAULT;
+ kvm_vcpu_mtrr_init(vcpu);
+ vcpu_load(vcpu);
+@@ -9348,13 +9376,13 @@ out_free:
+ return -ENOMEM;
+ }
+
+-void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots)
++void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen)
+ {
+ /*
+ * memslots->generation has been incremented.
+ * mmio generation may have reached its maximum value.
+ */
+- kvm_mmu_invalidate_mmio_sptes(kvm, slots);
++ kvm_mmu_invalidate_mmio_sptes(kvm, gen);
+ }
+
+ int kvm_arch_prepare_memory_region(struct kvm *kvm,
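The fast-PIO changes reorder instruction skipping: RIP is now advanced from a completion callback after userspace has actually performed the I/O, and only if the vCPU is still sitting on the same linear RIP that queued the exit (a single-step trap, for instance, must not be swallowed). Reduced to its control flow, with stand-in types rather than the real KVM structures:

#include <stdbool.h>
#include <stdint.h>

struct toy_vcpu {
	uint64_t linear_rip;	/* recorded when the PIO exit was queued */
	uint64_t rip;
	int pio_count;
};

static bool complete_pio_out(struct toy_vcpu *v)
{
	v->pio_count = 0;
	if (v->rip != v->linear_rip)
		return true;	/* something moved RIP; do not skip */
	v->rip += 1;		/* stand-in for kvm_skip_emulated_instruction() */
	return true;
}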
+diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
+index 224cd0a47568..20ede17202bf 100644
+--- a/arch/x86/kvm/x86.h
++++ b/arch/x86/kvm/x86.h
+@@ -181,6 +181,11 @@ static inline bool emul_is_noncanonical_address(u64 la,
+ static inline void vcpu_cache_mmio_info(struct kvm_vcpu *vcpu,
+ gva_t gva, gfn_t gfn, unsigned access)
+ {
++ u64 gen = kvm_memslots(vcpu->kvm)->generation;
++
++ if (unlikely(gen & 1))
++ return;
++
+ /*
+ * If this is a shadow nested page table, the "GVA" is
+ * actually a nGPA.
+@@ -188,7 +193,7 @@ static inline void vcpu_cache_mmio_info(struct kvm_vcpu *vcpu,
+ vcpu->arch.mmio_gva = mmu_is_nested(vcpu) ? 0 : gva & PAGE_MASK;
+ vcpu->arch.access = access;
+ vcpu->arch.mmio_gfn = gfn;
+- vcpu->arch.mmio_gen = kvm_memslots(vcpu->kvm)->generation;
++ vcpu->arch.mmio_gen = gen;
+ }
+
+ static inline bool vcpu_match_mmio_gen(struct kvm_vcpu *vcpu)
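The guard added to vcpu_cache_mmio_info() leans on the same convention as the MMU hunk earlier: the low bit of memslots->generation is set while a memslot update is in flight, so an odd snapshot must not be cached — it can never match a stable generation later. As a one-function sketch of the predicate:

#include <stdbool.h>
#include <stdint.h>

/* The low generation bit doubles as an "update in progress" flag. */
static bool generation_is_cacheable(uint64_t gen)
{
	return (gen & 1) == 0;
}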
+diff --git a/arch/x86/lib/usercopy_32.c b/arch/x86/lib/usercopy_32.c
+index bfd94e7812fc..7d290777246d 100644
+--- a/arch/x86/lib/usercopy_32.c
++++ b/arch/x86/lib/usercopy_32.c
+@@ -54,13 +54,13 @@ do { \
+ } while (0)
+
+ /**
+- * clear_user: - Zero a block of memory in user space.
++ * clear_user - Zero a block of memory in user space.
+ * @to: Destination address, in user space.
+ * @n: Number of bytes to zero.
+ *
+ * Zero a block of memory in user space.
+ *
+- * Returns number of bytes that could not be cleared.
++ * Return: number of bytes that could not be cleared.
+ * On success, this will be zero.
+ */
+ unsigned long
+@@ -74,14 +74,14 @@ clear_user(void __user *to, unsigned long n)
+ EXPORT_SYMBOL(clear_user);
+
+ /**
+- * __clear_user: - Zero a block of memory in user space, with less checking.
++ * __clear_user - Zero a block of memory in user space, with less checking.
+ * @to: Destination address, in user space.
+ * @n: Number of bytes to zero.
+ *
+ * Zero a block of memory in user space. Caller must check
+ * the specified block with access_ok() before calling this function.
+ *
+- * Returns number of bytes that could not be cleared.
++ * Return: number of bytes that could not be cleared.
+ * On success, this will be zero.
+ */
+ unsigned long
+diff --git a/arch/x86/pci/fixup.c b/arch/x86/pci/fixup.c
+index 30a5111ae5fd..527e69b12002 100644
+--- a/arch/x86/pci/fixup.c
++++ b/arch/x86/pci/fixup.c
+@@ -635,6 +635,22 @@ static void quirk_no_aersid(struct pci_dev *pdev)
+ DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, PCI_ANY_ID,
+ PCI_CLASS_BRIDGE_PCI, 8, quirk_no_aersid);
+
++static void quirk_intel_th_dnv(struct pci_dev *dev)
++{
++ struct resource *r = &dev->resource[4];
++
++ /*
++ * Denverton reports 2k of RTIT_BAR (intel_th resource 4), which
++ * appears to be 4 MB in reality.
++ */
++ if (r->end == r->start + 0x7ff) {
++ r->start = 0;
++ r->end = 0x3fffff;
++ r->flags |= IORESOURCE_UNSET;
++ }
++}
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x19e1, quirk_intel_th_dnv);
++
+ #ifdef CONFIG_PHYS_ADDR_T_64BIT
+
+ #define AMD_141b_MMIO_BASE(x) (0x80 + (x) * 0x8)
+diff --git a/arch/x86/platform/efi/quirks.c b/arch/x86/platform/efi/quirks.c
+index 17456a1d3f04..6c571ae86947 100644
+--- a/arch/x86/platform/efi/quirks.c
++++ b/arch/x86/platform/efi/quirks.c
+@@ -717,7 +717,7 @@ void efi_recover_from_page_fault(unsigned long phys_addr)
+ * "efi_mm" cannot be used to check if the page fault had occurred
+ * in the firmware context because efi=old_map doesn't use efi_pgd.
+ */
+- if (efi_rts_work.efi_rts_id == NONE)
++ if (efi_rts_work.efi_rts_id == EFI_NONE)
+ return;
+
+ /*
+@@ -742,7 +742,7 @@ void efi_recover_from_page_fault(unsigned long phys_addr)
+ * because this case occurs *very* rarely and hence could be improved
+ * on a need by basis.
+ */
+- if (efi_rts_work.efi_rts_id == RESET_SYSTEM) {
++ if (efi_rts_work.efi_rts_id == EFI_RESET_SYSTEM) {
+ pr_info("efi_reset_system() buggy! Reboot through BIOS\n");
+ machine_real_restart(MRR_BIOS);
+ return;
+diff --git a/arch/x86/realmode/rm/Makefile b/arch/x86/realmode/rm/Makefile
+index 4463fa72db94..96cb20de08af 100644
+--- a/arch/x86/realmode/rm/Makefile
++++ b/arch/x86/realmode/rm/Makefile
+@@ -47,7 +47,7 @@ $(obj)/pasyms.h: $(REALMODE_OBJS) FORCE
+ targets += realmode.lds
+ $(obj)/realmode.lds: $(obj)/pasyms.h
+
+-LDFLAGS_realmode.elf := --emit-relocs -T
++LDFLAGS_realmode.elf := -m elf_i386 --emit-relocs -T
+ CPPFLAGS_realmode.lds += -P -C -I$(objtree)/$(obj)
+
+ targets += realmode.elf
+diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
+index 0f4fe206dcc2..20701977e6c0 100644
+--- a/arch/x86/xen/mmu_pv.c
++++ b/arch/x86/xen/mmu_pv.c
+@@ -2114,10 +2114,10 @@ void __init xen_relocate_p2m(void)
+ pt = early_memremap(pt_phys, PAGE_SIZE);
+ clear_page(pt);
+ for (idx_pte = 0;
+- idx_pte < min(n_pte, PTRS_PER_PTE);
+- idx_pte++) {
+- set_pte(pt + idx_pte,
+- pfn_pte(p2m_pfn, PAGE_KERNEL));
++ idx_pte < min(n_pte, PTRS_PER_PTE);
++ idx_pte++) {
++ pt[idx_pte] = pfn_pte(p2m_pfn,
++ PAGE_KERNEL);
+ p2m_pfn++;
+ }
+ n_pte -= PTRS_PER_PTE;
+@@ -2125,8 +2125,7 @@ void __init xen_relocate_p2m(void)
+ make_lowmem_page_readonly(__va(pt_phys));
+ pin_pagetable_pfn(MMUEXT_PIN_L1_TABLE,
+ PFN_DOWN(pt_phys));
+- set_pmd(pmd + idx_pt,
+- __pmd(_PAGE_TABLE | pt_phys));
++ pmd[idx_pt] = __pmd(_PAGE_TABLE | pt_phys);
+ pt_phys += PAGE_SIZE;
+ }
+ n_pt -= PTRS_PER_PMD;
+@@ -2134,7 +2133,7 @@ void __init xen_relocate_p2m(void)
+ make_lowmem_page_readonly(__va(pmd_phys));
+ pin_pagetable_pfn(MMUEXT_PIN_L2_TABLE,
+ PFN_DOWN(pmd_phys));
+- set_pud(pud + idx_pmd, __pud(_PAGE_TABLE | pmd_phys));
++ pud[idx_pmd] = __pud(_PAGE_TABLE | pmd_phys);
+ pmd_phys += PAGE_SIZE;
+ }
+ n_pmd -= PTRS_PER_PUD;
+diff --git a/arch/xtensa/kernel/process.c b/arch/xtensa/kernel/process.c
+index 74969a437a37..2e73395f0560 100644
+--- a/arch/xtensa/kernel/process.c
++++ b/arch/xtensa/kernel/process.c
+@@ -321,8 +321,8 @@ unsigned long get_wchan(struct task_struct *p)
+
+ /* Stack layout: sp-4: ra, sp-3: sp' */
+
+- pc = MAKE_PC_FROM_RA(*(unsigned long*)sp - 4, sp);
+- sp = *(unsigned long *)sp - 3;
++ pc = MAKE_PC_FROM_RA(SPILL_SLOT(sp, 0), sp);
++ sp = SPILL_SLOT(sp, 1);
+ } while (count++ < 16);
+ return 0;
+ }
+diff --git a/arch/xtensa/kernel/stacktrace.c b/arch/xtensa/kernel/stacktrace.c
+index 174c11f13bba..b9f82510c650 100644
+--- a/arch/xtensa/kernel/stacktrace.c
++++ b/arch/xtensa/kernel/stacktrace.c
+@@ -253,10 +253,14 @@ static int return_address_cb(struct stackframe *frame, void *data)
+ return 1;
+ }
+
++/*
++ * level == 0 is for the return address from the caller of this function,
++ * not from this function itself.
++ */
+ unsigned long return_address(unsigned level)
+ {
+ struct return_addr_data r = {
+- .skip = level + 1,
++ .skip = level,
+ };
+ walk_stackframe(stack_pointer(NULL), return_address_cb, &r);
+ return r.addr;
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index cd307767a134..e5ed28629271 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -747,6 +747,7 @@ void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+
+ inc_counter:
+ bfqq->weight_counter->num_active++;
++ bfqq->ref++;
+ }
+
+ /*
+@@ -771,6 +772,7 @@ void __bfq_weights_tree_remove(struct bfq_data *bfqd,
+
+ reset_entity_pointer:
+ bfqq->weight_counter = NULL;
++ bfq_put_queue(bfqq);
+ }
+
+ /*
+@@ -782,9 +784,6 @@ void bfq_weights_tree_remove(struct bfq_data *bfqd,
+ {
+ struct bfq_entity *entity = bfqq->entity.parent;
+
+- __bfq_weights_tree_remove(bfqd, bfqq,
+- &bfqd->queue_weights_tree);
+-
+ for_each_entity(entity) {
+ struct bfq_sched_data *sd = entity->my_sched_data;
+
+@@ -818,6 +817,15 @@ void bfq_weights_tree_remove(struct bfq_data *bfqd,
+ bfqd->num_groups_with_pending_reqs--;
+ }
+ }
++
++ /*
++ * Next function is invoked last, because it causes bfqq to be
++ * freed if the following holds: bfqq is not in service and
++ * has no dispatched request. DO NOT use bfqq after the next
++ * function invocation.
++ */
++ __bfq_weights_tree_remove(bfqd, bfqq,
++ &bfqd->queue_weights_tree);
+ }
+
+ /*
+@@ -1011,7 +1019,8 @@ bfq_bfqq_resume_state(struct bfq_queue *bfqq, struct bfq_data *bfqd,
+
+ static int bfqq_process_refs(struct bfq_queue *bfqq)
+ {
+- return bfqq->ref - bfqq->allocated - bfqq->entity.on_st;
++ return bfqq->ref - bfqq->allocated - bfqq->entity.on_st -
++ (bfqq->weight_counter != NULL);
+ }
+
+ /* Empty burst list and add just bfqq (see comments on bfq_handle_burst) */
+@@ -2224,7 +2233,8 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+
+ if (in_service_bfqq && in_service_bfqq != bfqq &&
+ likely(in_service_bfqq != &bfqd->oom_bfqq) &&
+- bfq_rq_close_to_sector(io_struct, request, bfqd->last_position) &&
++ bfq_rq_close_to_sector(io_struct, request,
++ bfqd->in_serv_last_pos) &&
+ bfqq->entity.parent == in_service_bfqq->entity.parent &&
+ bfq_may_be_close_cooperator(bfqq, in_service_bfqq)) {
+ new_bfqq = bfq_setup_merge(bfqq, in_service_bfqq);
+@@ -2764,6 +2774,8 @@ update_rate_and_reset:
+ bfq_update_rate_reset(bfqd, rq);
+ update_last_values:
+ bfqd->last_position = blk_rq_pos(rq) + blk_rq_sectors(rq);
++ if (RQ_BFQQ(rq) == bfqd->in_service_queue)
++ bfqd->in_serv_last_pos = bfqd->last_position;
+ bfqd->last_dispatch = now_ns;
+ }
+
+diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
+index 0b02bf302de0..746bd570b85a 100644
+--- a/block/bfq-iosched.h
++++ b/block/bfq-iosched.h
+@@ -537,6 +537,9 @@ struct bfq_data {
+ /* on-disk position of the last served request */
+ sector_t last_position;
+
++ /* position of the last served request for the in-service queue */
++ sector_t in_serv_last_pos;
++
+ /* time of last request completion (ns) */
+ u64 last_completion;
+
+diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
+index 72adbbe975d5..4aab1a8191f0 100644
+--- a/block/bfq-wf2q.c
++++ b/block/bfq-wf2q.c
+@@ -1667,15 +1667,15 @@ void bfq_del_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+
+ bfqd->busy_queues--;
+
+- if (!bfqq->dispatched)
+- bfq_weights_tree_remove(bfqd, bfqq);
+-
+ if (bfqq->wr_coeff > 1)
+ bfqd->wr_busy_queues--;
+
+ bfqg_stats_update_dequeue(bfqq_group(bfqq));
+
+ bfq_deactivate_bfqq(bfqd, bfqq, true, expiration);
++
++ if (!bfqq->dispatched)
++ bfq_weights_tree_remove(bfqd, bfqq);
+ }
+
+ /*
+diff --git a/block/bio.c b/block/bio.c
+index 4db1008309ed..a06f58bd4c72 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -1238,8 +1238,11 @@ struct bio *bio_copy_user_iov(struct request_queue *q,
+ }
+ }
+
+- if (bio_add_pc_page(q, bio, page, bytes, offset) < bytes)
++ if (bio_add_pc_page(q, bio, page, bytes, offset) < bytes) {
++ if (!map_data)
++ __free_page(page);
+ break;
++ }
+
+ len -= bytes;
+ offset = 0;
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 6b78ec56a4f2..5bde73a49399 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -1246,8 +1246,6 @@ static int blk_cloned_rq_check_limits(struct request_queue *q,
+ */
+ blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *rq)
+ {
+- blk_qc_t unused;
+-
+ if (blk_cloned_rq_check_limits(q, rq))
+ return BLK_STS_IOERR;
+
+@@ -1263,7 +1261,7 @@ blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *
+ * bypass a potential scheduler on the bottom device for
+ * insert.
+ */
+- return blk_mq_try_issue_directly(rq->mq_hctx, rq, &unused, true, true);
++ return blk_mq_request_issue_directly(rq, true);
+ }
+ EXPORT_SYMBOL_GPL(blk_insert_cloned_request);
+
+diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
+index 140933e4a7d1..0c98b6c1ca49 100644
+--- a/block/blk-mq-sched.c
++++ b/block/blk-mq-sched.c
+@@ -423,10 +423,12 @@ void blk_mq_sched_insert_requests(struct blk_mq_hw_ctx *hctx,
+ * busy in case of 'none' scheduler, and this way may save
+ * us one extra enqueue & dequeue to sw queue.
+ */
+- if (!hctx->dispatch_busy && !e && !run_queue_async)
++ if (!hctx->dispatch_busy && !e && !run_queue_async) {
+ blk_mq_try_issue_list_directly(hctx, list);
+- else
+- blk_mq_insert_requests(hctx, ctx, list);
++ if (list_empty(list))
++ return;
++ }
++ blk_mq_insert_requests(hctx, ctx, list);
+ }
+
+ blk_mq_run_hw_queue(hctx, run_queue_async);
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 9437a5eb07cf..16f9675c57e6 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -1076,7 +1076,13 @@ static int blk_mq_dispatch_wake(wait_queue_entry_t *wait, unsigned mode,
+ hctx = container_of(wait, struct blk_mq_hw_ctx, dispatch_wait);
+
+ spin_lock(&hctx->dispatch_wait_lock);
+- list_del_init(&wait->entry);
++ if (!list_empty(&wait->entry)) {
++ struct sbitmap_queue *sbq;
++
++ list_del_init(&wait->entry);
++ sbq = &hctx->tags->bitmap_tags;
++ atomic_dec(&sbq->ws_active);
++ }
+ spin_unlock(&hctx->dispatch_wait_lock);
+
+ blk_mq_run_hw_queue(hctx, true);
+@@ -1092,6 +1098,7 @@ static int blk_mq_dispatch_wake(wait_queue_entry_t *wait, unsigned mode,
+ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx *hctx,
+ struct request *rq)
+ {
++ struct sbitmap_queue *sbq = &hctx->tags->bitmap_tags;
+ struct wait_queue_head *wq;
+ wait_queue_entry_t *wait;
+ bool ret;
+@@ -1115,7 +1122,7 @@ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx *hctx,
+ if (!list_empty_careful(&wait->entry))
+ return false;
+
+- wq = &bt_wait_ptr(&hctx->tags->bitmap_tags, hctx)->wait;
++ wq = &bt_wait_ptr(sbq, hctx)->wait;
+
+ spin_lock_irq(&wq->lock);
+ spin_lock(&hctx->dispatch_wait_lock);
+@@ -1125,6 +1132,7 @@ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx *hctx,
+ return false;
+ }
+
++ atomic_inc(&sbq->ws_active);
+ wait->flags &= ~WQ_FLAG_EXCLUSIVE;
+ __add_wait_queue(wq, wait);
+
+@@ -1145,6 +1153,7 @@ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx *hctx,
+ * someone else gets the wakeup.
+ */
+ list_del_init(&wait->entry);
++ atomic_dec(&sbq->ws_active);
+ spin_unlock(&hctx->dispatch_wait_lock);
+ spin_unlock_irq(&wq->lock);
+
+@@ -1796,74 +1805,76 @@ static blk_status_t __blk_mq_issue_directly(struct blk_mq_hw_ctx *hctx,
+ return ret;
+ }
+
+-blk_status_t blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
++static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
+ struct request *rq,
+ blk_qc_t *cookie,
+- bool bypass, bool last)
++ bool bypass_insert, bool last)
+ {
+ struct request_queue *q = rq->q;
+ bool run_queue = true;
+- blk_status_t ret = BLK_STS_RESOURCE;
+- int srcu_idx;
+- bool force = false;
+
+- hctx_lock(hctx, &srcu_idx);
+ /*
+- * hctx_lock is needed before checking quiesced flag.
++ * RCU or SRCU read lock is needed before checking quiesced flag.
+ *
+- * When queue is stopped or quiesced, ignore 'bypass', insert
+- * and return BLK_STS_OK to caller, and avoid driver to try to
+- * dispatch again.
++ * When queue is stopped or quiesced, ignore 'bypass_insert' from
++ * blk_mq_request_issue_directly(), and return BLK_STS_OK to caller,
++ * and avoid driver to try to dispatch again.
+ */
+- if (unlikely(blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(q))) {
++ if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(q)) {
+ run_queue = false;
+- bypass = false;
+- goto out_unlock;
++ bypass_insert = false;
++ goto insert;
+ }
+
+- if (unlikely(q->elevator && !bypass))
+- goto out_unlock;
++ if (q->elevator && !bypass_insert)
++ goto insert;
+
+ if (!blk_mq_get_dispatch_budget(hctx))
+- goto out_unlock;
++ goto insert;
+
+ if (!blk_mq_get_driver_tag(rq)) {
+ blk_mq_put_dispatch_budget(hctx);
+- goto out_unlock;
++ goto insert;
+ }
+
+- /*
+- * Always add a request that has been through
+- *.queue_rq() to the hardware dispatch list.
+- */
+- force = true;
+- ret = __blk_mq_issue_directly(hctx, rq, cookie, last);
+-out_unlock:
++ return __blk_mq_issue_directly(hctx, rq, cookie, last);
++insert:
++ if (bypass_insert)
++ return BLK_STS_RESOURCE;
++
++ blk_mq_request_bypass_insert(rq, run_queue);
++ return BLK_STS_OK;
++}
++
++static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
++ struct request *rq, blk_qc_t *cookie)
++{
++ blk_status_t ret;
++ int srcu_idx;
++
++ might_sleep_if(hctx->flags & BLK_MQ_F_BLOCKING);
++
++ hctx_lock(hctx, &srcu_idx);
++
++ ret = __blk_mq_try_issue_directly(hctx, rq, cookie, false, true);
++ if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE)
++ blk_mq_request_bypass_insert(rq, true);
++ else if (ret != BLK_STS_OK)
++ blk_mq_end_request(rq, ret);
++
++ hctx_unlock(hctx, srcu_idx);
++}
++
++blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last)
++{
++ blk_status_t ret;
++ int srcu_idx;
++ blk_qc_t unused_cookie;
++ struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
++
++ hctx_lock(hctx, &srcu_idx);
++ ret = __blk_mq_try_issue_directly(hctx, rq, &unused_cookie, true, last);
+ hctx_unlock(hctx, srcu_idx);
+- switch (ret) {
+- case BLK_STS_OK:
+- break;
+- case BLK_STS_DEV_RESOURCE:
+- case BLK_STS_RESOURCE:
+- if (force) {
+- blk_mq_request_bypass_insert(rq, run_queue);
+- /*
+- * We have to return BLK_STS_OK for the DM
+- * to avoid livelock. Otherwise, we return
+- * the real result to indicate whether the
+- * request is direct-issued successfully.
+- */
+- ret = bypass ? BLK_STS_OK : ret;
+- } else if (!bypass) {
+- blk_mq_sched_insert_request(rq, false,
+- run_queue, false);
+- }
+- break;
+- default:
+- if (!bypass)
+- blk_mq_end_request(rq, ret);
+- break;
+- }
+
+ return ret;
+ }
+@@ -1871,20 +1882,22 @@ out_unlock:
+ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
+ struct list_head *list)
+ {
+- blk_qc_t unused;
+- blk_status_t ret = BLK_STS_OK;
+-
+ while (!list_empty(list)) {
++ blk_status_t ret;
+ struct request *rq = list_first_entry(list, struct request,
+ queuelist);
+
+ list_del_init(&rq->queuelist);
+- if (ret == BLK_STS_OK)
+- ret = blk_mq_try_issue_directly(hctx, rq, &unused,
+- false,
++ ret = blk_mq_request_issue_directly(rq, list_empty(list));
++ if (ret != BLK_STS_OK) {
++ if (ret == BLK_STS_RESOURCE ||
++ ret == BLK_STS_DEV_RESOURCE) {
++ blk_mq_request_bypass_insert(rq,
+ list_empty(list));
+- else
+- blk_mq_sched_insert_request(rq, false, true, false);
++ break;
++ }
++ blk_mq_end_request(rq, ret);
++ }
+ }
+
+ /*
+@@ -1892,7 +1905,7 @@ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
+ * the driver there was more coming, but that turned out to
+ * be a lie.
+ */
+- if (ret != BLK_STS_OK && hctx->queue->mq_ops->commit_rqs)
++ if (!list_empty(list) && hctx->queue->mq_ops->commit_rqs)
+ hctx->queue->mq_ops->commit_rqs(hctx);
+ }
+
+@@ -2005,13 +2018,13 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
+ if (same_queue_rq) {
+ data.hctx = same_queue_rq->mq_hctx;
+ blk_mq_try_issue_directly(data.hctx, same_queue_rq,
+- &cookie, false, true);
++ &cookie);
+ }
+ } else if ((q->nr_hw_queues > 1 && is_sync) || (!q->elevator &&
+ !data.hctx->dispatch_busy)) {
+ blk_mq_put_ctx(data.ctx);
+ blk_mq_bio_to_request(rq, bio);
+- blk_mq_try_issue_directly(data.hctx, rq, &cookie, false, true);
++ blk_mq_try_issue_directly(data.hctx, rq, &cookie);
+ } else {
+ blk_mq_put_ctx(data.ctx);
+ blk_mq_bio_to_request(rq, bio);
+diff --git a/block/blk-mq.h b/block/blk-mq.h
+index d0b3dd54ef8d..a3a684a8c633 100644
+--- a/block/blk-mq.h
++++ b/block/blk-mq.h
+@@ -67,10 +67,8 @@ void blk_mq_request_bypass_insert(struct request *rq, bool run_queue);
+ void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
+ struct list_head *list);
+
+-blk_status_t blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
+- struct request *rq,
+- blk_qc_t *cookie,
+- bool bypass, bool last);
++/* Used by blk_insert_cloned_request() to issue request directly */
++blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last);
+ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
+ struct list_head *list);
+
+diff --git a/crypto/aead.c b/crypto/aead.c
+index 189c52d1f63a..4908b5e846f0 100644
+--- a/crypto/aead.c
++++ b/crypto/aead.c
+@@ -61,8 +61,10 @@ int crypto_aead_setkey(struct crypto_aead *tfm,
+ else
+ err = crypto_aead_alg(tfm)->setkey(tfm, key, keylen);
+
+- if (err)
++ if (unlikely(err)) {
++ crypto_aead_set_flags(tfm, CRYPTO_TFM_NEED_KEY);
+ return err;
++ }
+
+ crypto_aead_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
+ return 0;
+diff --git a/crypto/aegis128.c b/crypto/aegis128.c
+index c22f4414856d..789716f92e4c 100644
+--- a/crypto/aegis128.c
++++ b/crypto/aegis128.c
+@@ -290,19 +290,19 @@ static void crypto_aegis128_process_crypt(struct aegis_state *state,
+ const struct aegis128_ops *ops)
+ {
+ struct skcipher_walk walk;
+- u8 *src, *dst;
+- unsigned int chunksize;
+
+ ops->skcipher_walk_init(&walk, req, false);
+
+ while (walk.nbytes) {
+- src = walk.src.virt.addr;
+- dst = walk.dst.virt.addr;
+- chunksize = walk.nbytes;
++ unsigned int nbytes = walk.nbytes;
+
+- ops->crypt_chunk(state, dst, src, chunksize);
++ if (nbytes < walk.total)
++ nbytes = round_down(nbytes, walk.stride);
+
+- skcipher_walk_done(&walk, 0);
++ ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr,
++ nbytes);
++
++ skcipher_walk_done(&walk, walk.nbytes - nbytes);
+ }
+ }
+
+diff --git a/crypto/aegis128l.c b/crypto/aegis128l.c
+index b6fb21ebdc3e..73811448cb6b 100644
+--- a/crypto/aegis128l.c
++++ b/crypto/aegis128l.c
+@@ -353,19 +353,19 @@ static void crypto_aegis128l_process_crypt(struct aegis_state *state,
+ const struct aegis128l_ops *ops)
+ {
+ struct skcipher_walk walk;
+- u8 *src, *dst;
+- unsigned int chunksize;
+
+ ops->skcipher_walk_init(&walk, req, false);
+
+ while (walk.nbytes) {
+- src = walk.src.virt.addr;
+- dst = walk.dst.virt.addr;
+- chunksize = walk.nbytes;
++ unsigned int nbytes = walk.nbytes;
+
+- ops->crypt_chunk(state, dst, src, chunksize);
++ if (nbytes < walk.total)
++ nbytes = round_down(nbytes, walk.stride);
+
+- skcipher_walk_done(&walk, 0);
++ ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr,
++ nbytes);
++
++ skcipher_walk_done(&walk, walk.nbytes - nbytes);
+ }
+ }
+
+diff --git a/crypto/aegis256.c b/crypto/aegis256.c
+index 11f0f8ec9c7c..8a71e9c06193 100644
+--- a/crypto/aegis256.c
++++ b/crypto/aegis256.c
+@@ -303,19 +303,19 @@ static void crypto_aegis256_process_crypt(struct aegis_state *state,
+ const struct aegis256_ops *ops)
+ {
+ struct skcipher_walk walk;
+- u8 *src, *dst;
+- unsigned int chunksize;
+
+ ops->skcipher_walk_init(&walk, req, false);
+
+ while (walk.nbytes) {
+- src = walk.src.virt.addr;
+- dst = walk.dst.virt.addr;
+- chunksize = walk.nbytes;
++ unsigned int nbytes = walk.nbytes;
+
+- ops->crypt_chunk(state, dst, src, chunksize);
++ if (nbytes < walk.total)
++ nbytes = round_down(nbytes, walk.stride);
+
+- skcipher_walk_done(&walk, 0);
++ ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr,
++ nbytes);
++
++ skcipher_walk_done(&walk, walk.nbytes - nbytes);
+ }
+ }
+
+diff --git a/crypto/ahash.c b/crypto/ahash.c
+index 5d320a811f75..81e2767e2164 100644
+--- a/crypto/ahash.c
++++ b/crypto/ahash.c
+@@ -86,17 +86,17 @@ static int hash_walk_new_entry(struct crypto_hash_walk *walk)
+ int crypto_hash_walk_done(struct crypto_hash_walk *walk, int err)
+ {
+ unsigned int alignmask = walk->alignmask;
+- unsigned int nbytes = walk->entrylen;
+
+ walk->data -= walk->offset;
+
+- if (nbytes && walk->offset & alignmask && !err) {
+- walk->offset = ALIGN(walk->offset, alignmask + 1);
+- nbytes = min(nbytes,
+- ((unsigned int)(PAGE_SIZE)) - walk->offset);
+- walk->entrylen -= nbytes;
++ if (walk->entrylen && (walk->offset & alignmask) && !err) {
++ unsigned int nbytes;
+
++ walk->offset = ALIGN(walk->offset, alignmask + 1);
++ nbytes = min(walk->entrylen,
++ (unsigned int)(PAGE_SIZE - walk->offset));
+ if (nbytes) {
++ walk->entrylen -= nbytes;
+ walk->data += walk->offset;
+ return nbytes;
+ }
+@@ -116,7 +116,7 @@ int crypto_hash_walk_done(struct crypto_hash_walk *walk, int err)
+ if (err)
+ return err;
+
+- if (nbytes) {
++ if (walk->entrylen) {
+ walk->offset = 0;
+ walk->pg++;
+ return hash_walk_next(walk);
+@@ -190,6 +190,21 @@ static int ahash_setkey_unaligned(struct crypto_ahash *tfm, const u8 *key,
+ return ret;
+ }
+
++static int ahash_nosetkey(struct crypto_ahash *tfm, const u8 *key,
++ unsigned int keylen)
++{
++ return -ENOSYS;
++}
++
++static void ahash_set_needkey(struct crypto_ahash *tfm)
++{
++ const struct hash_alg_common *alg = crypto_hash_alg_common(tfm);
++
++ if (tfm->setkey != ahash_nosetkey &&
++ !(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
++ crypto_ahash_set_flags(tfm, CRYPTO_TFM_NEED_KEY);
++}
++
+ int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key,
+ unsigned int keylen)
+ {
+@@ -201,20 +216,16 @@ int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key,
+ else
+ err = tfm->setkey(tfm, key, keylen);
+
+- if (err)
++ if (unlikely(err)) {
++ ahash_set_needkey(tfm);
+ return err;
++ }
+
+ crypto_ahash_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(crypto_ahash_setkey);
+
+-static int ahash_nosetkey(struct crypto_ahash *tfm, const u8 *key,
+- unsigned int keylen)
+-{
+- return -ENOSYS;
+-}
+-
+ static inline unsigned int ahash_align_buffer_size(unsigned len,
+ unsigned long mask)
+ {
+@@ -489,8 +500,7 @@ static int crypto_ahash_init_tfm(struct crypto_tfm *tfm)
+
+ if (alg->setkey) {
+ hash->setkey = alg->setkey;
+- if (!(alg->halg.base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
+- crypto_ahash_set_flags(hash, CRYPTO_TFM_NEED_KEY);
++ ahash_set_needkey(hash);
+ }
+
+ return 0;
+diff --git a/crypto/cfb.c b/crypto/cfb.c
+index e81e45673498..4abfe32ff845 100644
+--- a/crypto/cfb.c
++++ b/crypto/cfb.c
+@@ -77,12 +77,14 @@ static int crypto_cfb_encrypt_segment(struct skcipher_walk *walk,
+ do {
+ crypto_cfb_encrypt_one(tfm, iv, dst);
+ crypto_xor(dst, src, bsize);
+- memcpy(iv, dst, bsize);
++ iv = dst;
+
+ src += bsize;
+ dst += bsize;
+ } while ((nbytes -= bsize) >= bsize);
+
++ memcpy(walk->iv, iv, bsize);
++
+ return nbytes;
+ }
+
+@@ -162,7 +164,7 @@ static int crypto_cfb_decrypt_inplace(struct skcipher_walk *walk,
+ const unsigned int bsize = crypto_cfb_bsize(tfm);
+ unsigned int nbytes = walk->nbytes;
+ u8 *src = walk->src.virt.addr;
+- u8 *iv = walk->iv;
++ u8 * const iv = walk->iv;
+ u8 tmp[MAX_CIPHER_BLOCKSIZE];
+
+ do {
+@@ -172,8 +174,6 @@ static int crypto_cfb_decrypt_inplace(struct skcipher_walk *walk,
+ src += bsize;
+ } while ((nbytes -= bsize) >= bsize);
+
+- memcpy(walk->iv, iv, bsize);
+-
+ return nbytes;
+ }
+
+@@ -298,6 +298,12 @@ static int crypto_cfb_create(struct crypto_template *tmpl, struct rtattr **tb)
+ inst->alg.base.cra_blocksize = 1;
+ inst->alg.base.cra_alignmask = alg->cra_alignmask;
+
++ /*
++ * To simplify the implementation, configure the skcipher walk to only
++ * give a partial block at the very end, never earlier.
++ */
++ inst->alg.chunksize = alg->cra_blocksize;
++
+ inst->alg.ivsize = alg->cra_blocksize;
+ inst->alg.min_keysize = alg->cra_cipher.cia_min_keysize;
+ inst->alg.max_keysize = alg->cra_cipher.cia_max_keysize;
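The encrypt-segment change works because of how CFB chains: the block-cipher input for block n is ciphertext block n-1, so once dst holds the fresh ciphertext, pointing iv at dst replaces the per-block memcpy, and only the final chaining value needs to be copied back for the next walk step. A toy CFB encryptor making the chaining explicit (the XOR "block cipher" is purely a stand-in):

#include <stdint.h>
#include <string.h>

#define BS 16

static void toy_block_enc(uint8_t out[BS], const uint8_t in[BS])
{
	for (int i = 0; i < BS; i++)
		out[i] = in[i] ^ 0xAA;	/* stand-in for the real cipher */
}

static void cfb_encrypt(uint8_t *dst, const uint8_t *src, size_t nblocks,
			uint8_t iv[BS])
{
	const uint8_t *prev = iv;

	for (size_t n = 0; n < nblocks; n++) {
		toy_block_enc(dst, prev);	/* E(chain value) */
		for (int i = 0; i < BS; i++)
			dst[i] ^= src[i];	/* C[n] = E(C[n-1]) ^ P[n] */
		prev = dst;			/* next chain value is C[n] */
		src += BS;
		dst += BS;
	}
	memcpy(iv, prev, BS);			/* persist once, at the end */
}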
+diff --git a/crypto/morus1280.c b/crypto/morus1280.c
+index 3889c188f266..b83576b4eb55 100644
+--- a/crypto/morus1280.c
++++ b/crypto/morus1280.c
+@@ -366,18 +366,19 @@ static void crypto_morus1280_process_crypt(struct morus1280_state *state,
+ const struct morus1280_ops *ops)
+ {
+ struct skcipher_walk walk;
+- u8 *dst;
+- const u8 *src;
+
+ ops->skcipher_walk_init(&walk, req, false);
+
+ while (walk.nbytes) {
+- src = walk.src.virt.addr;
+- dst = walk.dst.virt.addr;
++ unsigned int nbytes = walk.nbytes;
+
+- ops->crypt_chunk(state, dst, src, walk.nbytes);
++ if (nbytes < walk.total)
++ nbytes = round_down(nbytes, walk.stride);
+
+- skcipher_walk_done(&walk, 0);
++ ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr,
++ nbytes);
++
++ skcipher_walk_done(&walk, walk.nbytes - nbytes);
+ }
+ }
+
+diff --git a/crypto/morus640.c b/crypto/morus640.c
+index da06ec2f6a80..b6a477444f6d 100644
+--- a/crypto/morus640.c
++++ b/crypto/morus640.c
+@@ -365,18 +365,19 @@ static void crypto_morus640_process_crypt(struct morus640_state *state,
+ const struct morus640_ops *ops)
+ {
+ struct skcipher_walk walk;
+- u8 *dst;
+- const u8 *src;
+
+ ops->skcipher_walk_init(&walk, req, false);
+
+ while (walk.nbytes) {
+- src = walk.src.virt.addr;
+- dst = walk.dst.virt.addr;
++ unsigned int nbytes = walk.nbytes;
+
+- ops->crypt_chunk(state, dst, src, walk.nbytes);
++ if (nbytes < walk.total)
++ nbytes = round_down(nbytes, walk.stride);
+
+- skcipher_walk_done(&walk, 0);
++ ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr,
++ nbytes);
++
++ skcipher_walk_done(&walk, walk.nbytes - nbytes);
+ }
+ }
+
+diff --git a/crypto/ofb.c b/crypto/ofb.c
+index 886631708c5e..cab0b80953fe 100644
+--- a/crypto/ofb.c
++++ b/crypto/ofb.c
+@@ -5,9 +5,6 @@
+ *
+ * Copyright (C) 2018 ARM Limited or its affiliates.
+ * All rights reserved.
+- *
+- * Based loosely on public domain code gleaned from libtomcrypt
+- * (https://github.com/libtom/libtomcrypt).
+ */
+
+ #include <crypto/algapi.h>
+@@ -21,7 +18,6 @@
+
+ struct crypto_ofb_ctx {
+ struct crypto_cipher *child;
+- int cnt;
+ };
+
+
+@@ -41,58 +37,40 @@ static int crypto_ofb_setkey(struct crypto_skcipher *parent, const u8 *key,
+ return err;
+ }
+
+-static int crypto_ofb_encrypt_segment(struct crypto_ofb_ctx *ctx,
+- struct skcipher_walk *walk,
+- struct crypto_cipher *tfm)
++static int crypto_ofb_crypt(struct skcipher_request *req)
+ {
+- int bsize = crypto_cipher_blocksize(tfm);
+- int nbytes = walk->nbytes;
+-
+- u8 *src = walk->src.virt.addr;
+- u8 *dst = walk->dst.virt.addr;
+- u8 *iv = walk->iv;
+-
+- do {
+- if (ctx->cnt == bsize) {
+- if (nbytes < bsize)
+- break;
+- crypto_cipher_encrypt_one(tfm, iv, iv);
+- ctx->cnt = 0;
+- }
+- *dst = *src ^ iv[ctx->cnt];
+- src++;
+- dst++;
+- ctx->cnt++;
+- } while (--nbytes);
+- return nbytes;
+-}
+-
+-static int crypto_ofb_encrypt(struct skcipher_request *req)
+-{
+- struct skcipher_walk walk;
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+- unsigned int bsize;
+ struct crypto_ofb_ctx *ctx = crypto_skcipher_ctx(tfm);
+- struct crypto_cipher *child = ctx->child;
+- int ret = 0;
++ struct crypto_cipher *cipher = ctx->child;
++ const unsigned int bsize = crypto_cipher_blocksize(cipher);
++ struct skcipher_walk walk;
++ int err;
+
+- bsize = crypto_cipher_blocksize(child);
+- ctx->cnt = bsize;
++ err = skcipher_walk_virt(&walk, req, false);
+
+- ret = skcipher_walk_virt(&walk, req, false);
++ while (walk.nbytes >= bsize) {
++ const u8 *src = walk.src.virt.addr;
++ u8 *dst = walk.dst.virt.addr;
++ u8 * const iv = walk.iv;
++ unsigned int nbytes = walk.nbytes;
+
+- while (walk.nbytes) {
+- ret = crypto_ofb_encrypt_segment(ctx, &walk, child);
+- ret = skcipher_walk_done(&walk, ret);
+- }
++ do {
++ crypto_cipher_encrypt_one(cipher, iv, iv);
++ crypto_xor_cpy(dst, src, iv, bsize);
++ dst += bsize;
++ src += bsize;
++ } while ((nbytes -= bsize) >= bsize);
+
+- return ret;
+-}
++ err = skcipher_walk_done(&walk, nbytes);
++ }
+
+-/* OFB encrypt and decrypt are identical */
+-static int crypto_ofb_decrypt(struct skcipher_request *req)
+-{
+- return crypto_ofb_encrypt(req);
++ if (walk.nbytes) {
++ crypto_cipher_encrypt_one(cipher, walk.iv, walk.iv);
++ crypto_xor_cpy(walk.dst.virt.addr, walk.src.virt.addr, walk.iv,
++ walk.nbytes);
++ err = skcipher_walk_done(&walk, 0);
++ }
++ return err;
+ }
+
+ static int crypto_ofb_init_tfm(struct crypto_skcipher *tfm)
+@@ -165,13 +143,18 @@ static int crypto_ofb_create(struct crypto_template *tmpl, struct rtattr **tb)
+ if (err)
+ goto err_drop_spawn;
+
++ /* OFB mode is a stream cipher. */
++ inst->alg.base.cra_blocksize = 1;
++
++ /*
++ * To simplify the implementation, configure the skcipher walk to only
++ * give a partial block at the very end, never earlier.
++ */
++ inst->alg.chunksize = alg->cra_blocksize;
++
+ inst->alg.base.cra_priority = alg->cra_priority;
+- inst->alg.base.cra_blocksize = alg->cra_blocksize;
+ inst->alg.base.cra_alignmask = alg->cra_alignmask;
+
+- /* We access the data as u32s when xoring. */
+- inst->alg.base.cra_alignmask |= __alignof__(u32) - 1;
+-
+ inst->alg.ivsize = alg->cra_blocksize;
+ inst->alg.min_keysize = alg->cra_cipher.cia_min_keysize;
+ inst->alg.max_keysize = alg->cra_cipher.cia_max_keysize;
+@@ -182,8 +165,8 @@ static int crypto_ofb_create(struct crypto_template *tmpl, struct rtattr **tb)
+ inst->alg.exit = crypto_ofb_exit_tfm;
+
+ inst->alg.setkey = crypto_ofb_setkey;
+- inst->alg.encrypt = crypto_ofb_encrypt;
+- inst->alg.decrypt = crypto_ofb_decrypt;
++ inst->alg.encrypt = crypto_ofb_crypt;
++ inst->alg.decrypt = crypto_ofb_crypt;
+
+ inst->free = crypto_ofb_free;
+
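Declaring cra_blocksize = 1 here is correct because OFB is a pure stream cipher: the keystream is E applied repeatedly to the IV, independent of the data, so encryption and decryption are the same XOR and a trailing partial block simply uses part of the last keystream block — which is what the rewritten crypto_ofb_crypt() does. A toy version of the whole mode (stand-in block function, not the real cipher):

#include <stddef.h>
#include <stdint.h>

#define BS 16

static void toy_block_enc(uint8_t buf[BS])
{
	for (int i = 0; i < BS; i++)
		buf[i] ^= 0x55;		/* stand-in for the real cipher */
}

/* One function serves both directions, as in the patch. */
static void ofb_crypt(uint8_t *dst, const uint8_t *src, size_t len,
		      uint8_t iv[BS])
{
	while (len) {
		size_t n = len < BS ? len : BS;	/* partial only at the end */

		toy_block_enc(iv);		/* advance the keystream */
		for (size_t i = 0; i < n; i++)
			dst[i] = src[i] ^ iv[i];
		src += n;
		dst += n;
		len -= n;
	}
}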
+diff --git a/crypto/pcbc.c b/crypto/pcbc.c
+index 8aa10144407c..1b182dfedc94 100644
+--- a/crypto/pcbc.c
++++ b/crypto/pcbc.c
+@@ -51,7 +51,7 @@ static int crypto_pcbc_encrypt_segment(struct skcipher_request *req,
+ unsigned int nbytes = walk->nbytes;
+ u8 *src = walk->src.virt.addr;
+ u8 *dst = walk->dst.virt.addr;
+- u8 *iv = walk->iv;
++ u8 * const iv = walk->iv;
+
+ do {
+ crypto_xor(iv, src, bsize);
+@@ -72,7 +72,7 @@ static int crypto_pcbc_encrypt_inplace(struct skcipher_request *req,
+ int bsize = crypto_cipher_blocksize(tfm);
+ unsigned int nbytes = walk->nbytes;
+ u8 *src = walk->src.virt.addr;
+- u8 *iv = walk->iv;
++ u8 * const iv = walk->iv;
+ u8 tmpbuf[MAX_CIPHER_BLOCKSIZE];
+
+ do {
+@@ -84,8 +84,6 @@ static int crypto_pcbc_encrypt_inplace(struct skcipher_request *req,
+ src += bsize;
+ } while ((nbytes -= bsize) >= bsize);
+
+- memcpy(walk->iv, iv, bsize);
+-
+ return nbytes;
+ }
+
+@@ -121,7 +119,7 @@ static int crypto_pcbc_decrypt_segment(struct skcipher_request *req,
+ unsigned int nbytes = walk->nbytes;
+ u8 *src = walk->src.virt.addr;
+ u8 *dst = walk->dst.virt.addr;
+- u8 *iv = walk->iv;
++ u8 * const iv = walk->iv;
+
+ do {
+ crypto_cipher_decrypt_one(tfm, dst, src);
+@@ -132,8 +130,6 @@ static int crypto_pcbc_decrypt_segment(struct skcipher_request *req,
+ dst += bsize;
+ } while ((nbytes -= bsize) >= bsize);
+
+- memcpy(walk->iv, iv, bsize);
+-
+ return nbytes;
+ }
+
+@@ -144,7 +140,7 @@ static int crypto_pcbc_decrypt_inplace(struct skcipher_request *req,
+ int bsize = crypto_cipher_blocksize(tfm);
+ unsigned int nbytes = walk->nbytes;
+ u8 *src = walk->src.virt.addr;
+- u8 *iv = walk->iv;
++ u8 * const iv = walk->iv;
+ u8 tmpbuf[MAX_CIPHER_BLOCKSIZE] __aligned(__alignof__(u32));
+
+ do {
+@@ -156,8 +152,6 @@ static int crypto_pcbc_decrypt_inplace(struct skcipher_request *req,
+ src += bsize;
+ } while ((nbytes -= bsize) >= bsize);
+
+- memcpy(walk->iv, iv, bsize);
+-
+ return nbytes;
+ }
+
+diff --git a/crypto/shash.c b/crypto/shash.c
+index 44d297b82a8f..40311ccad3fa 100644
+--- a/crypto/shash.c
++++ b/crypto/shash.c
+@@ -53,6 +53,13 @@ static int shash_setkey_unaligned(struct crypto_shash *tfm, const u8 *key,
+ return err;
+ }
+
++static void shash_set_needkey(struct crypto_shash *tfm, struct shash_alg *alg)
++{
++ if (crypto_shash_alg_has_setkey(alg) &&
++ !(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
++ crypto_shash_set_flags(tfm, CRYPTO_TFM_NEED_KEY);
++}
++
+ int crypto_shash_setkey(struct crypto_shash *tfm, const u8 *key,
+ unsigned int keylen)
+ {
+@@ -65,8 +72,10 @@ int crypto_shash_setkey(struct crypto_shash *tfm, const u8 *key,
+ else
+ err = shash->setkey(tfm, key, keylen);
+
+- if (err)
++ if (unlikely(err)) {
++ shash_set_needkey(tfm, shash);
+ return err;
++ }
+
+ crypto_shash_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
+ return 0;
+@@ -373,7 +382,8 @@ int crypto_init_shash_ops_async(struct crypto_tfm *tfm)
+ crt->final = shash_async_final;
+ crt->finup = shash_async_finup;
+ crt->digest = shash_async_digest;
+- crt->setkey = shash_async_setkey;
++ if (crypto_shash_alg_has_setkey(alg))
++ crt->setkey = shash_async_setkey;
+
+ crypto_ahash_set_flags(crt, crypto_shash_get_flags(shash) &
+ CRYPTO_TFM_NEED_KEY);
+@@ -395,9 +405,7 @@ static int crypto_shash_init_tfm(struct crypto_tfm *tfm)
+
+ hash->descsize = alg->descsize;
+
+- if (crypto_shash_alg_has_setkey(alg) &&
+- !(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
+- crypto_shash_set_flags(hash, CRYPTO_TFM_NEED_KEY);
++ shash_set_needkey(hash, alg);
+
+ return 0;
+ }
+diff --git a/crypto/skcipher.c b/crypto/skcipher.c
+index 2a969296bc24..de09ff60991e 100644
+--- a/crypto/skcipher.c
++++ b/crypto/skcipher.c
+@@ -585,6 +585,12 @@ static unsigned int crypto_skcipher_extsize(struct crypto_alg *alg)
+ return crypto_alg_extsize(alg);
+ }
+
++static void skcipher_set_needkey(struct crypto_skcipher *tfm)
++{
++ if (tfm->keysize)
++ crypto_skcipher_set_flags(tfm, CRYPTO_TFM_NEED_KEY);
++}
++
+ static int skcipher_setkey_blkcipher(struct crypto_skcipher *tfm,
+ const u8 *key, unsigned int keylen)
+ {
+@@ -598,8 +604,10 @@ static int skcipher_setkey_blkcipher(struct crypto_skcipher *tfm,
+ err = crypto_blkcipher_setkey(blkcipher, key, keylen);
+ crypto_skcipher_set_flags(tfm, crypto_blkcipher_get_flags(blkcipher) &
+ CRYPTO_TFM_RES_MASK);
+- if (err)
++ if (unlikely(err)) {
++ skcipher_set_needkey(tfm);
+ return err;
++ }
+
+ crypto_skcipher_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
+ return 0;
+@@ -677,8 +685,7 @@ static int crypto_init_skcipher_ops_blkcipher(struct crypto_tfm *tfm)
+ skcipher->ivsize = crypto_blkcipher_ivsize(blkcipher);
+ skcipher->keysize = calg->cra_blkcipher.max_keysize;
+
+- if (skcipher->keysize)
+- crypto_skcipher_set_flags(skcipher, CRYPTO_TFM_NEED_KEY);
++ skcipher_set_needkey(skcipher);
+
+ return 0;
+ }
+@@ -698,8 +705,10 @@ static int skcipher_setkey_ablkcipher(struct crypto_skcipher *tfm,
+ crypto_skcipher_set_flags(tfm,
+ crypto_ablkcipher_get_flags(ablkcipher) &
+ CRYPTO_TFM_RES_MASK);
+- if (err)
++ if (unlikely(err)) {
++ skcipher_set_needkey(tfm);
+ return err;
++ }
+
+ crypto_skcipher_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
+ return 0;
+@@ -776,8 +785,7 @@ static int crypto_init_skcipher_ops_ablkcipher(struct crypto_tfm *tfm)
+ sizeof(struct ablkcipher_request);
+ skcipher->keysize = calg->cra_ablkcipher.max_keysize;
+
+- if (skcipher->keysize)
+- crypto_skcipher_set_flags(skcipher, CRYPTO_TFM_NEED_KEY);
++ skcipher_set_needkey(skcipher);
+
+ return 0;
+ }
+@@ -820,8 +828,10 @@ static int skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key,
+ else
+ err = cipher->setkey(tfm, key, keylen);
+
+- if (err)
++ if (unlikely(err)) {
++ skcipher_set_needkey(tfm);
+ return err;
++ }
+
+ crypto_skcipher_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
+ return 0;
+@@ -852,8 +862,7 @@ static int crypto_skcipher_init_tfm(struct crypto_tfm *tfm)
+ skcipher->ivsize = alg->ivsize;
+ skcipher->keysize = alg->max_keysize;
+
+- if (skcipher->keysize)
+- crypto_skcipher_set_flags(skcipher, CRYPTO_TFM_NEED_KEY);
++ skcipher_set_needkey(skcipher);
+
+ if (alg->exit)
+ skcipher->base.exit = crypto_skcipher_exit_tfm;
+diff --git a/crypto/testmgr.c b/crypto/testmgr.c
+index 0f684a414acb..b8e4a3ccbfe0 100644
+--- a/crypto/testmgr.c
++++ b/crypto/testmgr.c
+@@ -1894,14 +1894,21 @@ static int alg_test_crc32c(const struct alg_test_desc *desc,
+
+ err = alg_test_hash(desc, driver, type, mask);
+ if (err)
+- goto out;
++ return err;
+
+ tfm = crypto_alloc_shash(driver, type, mask);
+ if (IS_ERR(tfm)) {
++ if (PTR_ERR(tfm) == -ENOENT) {
++ /*
++ * This crc32c implementation is only available through
++ * ahash API, not the shash API, so the remaining part
++ * of the test is not applicable to it.
++ */
++ return 0;
++ }
+ printk(KERN_ERR "alg: crc32c: Failed to load transform for %s: "
+ "%ld\n", driver, PTR_ERR(tfm));
+- err = PTR_ERR(tfm);
+- goto out;
++ return PTR_ERR(tfm);
+ }
+
+ do {
+@@ -1928,7 +1935,6 @@ static int alg_test_crc32c(const struct alg_test_desc *desc,
+
+ crypto_free_shash(tfm);
+
+-out:
+ return err;
+ }
+
+diff --git a/crypto/testmgr.h b/crypto/testmgr.h
+index e8f47d7b92cd..ca8e8ebef309 100644
+--- a/crypto/testmgr.h
++++ b/crypto/testmgr.h
+@@ -12870,6 +12870,31 @@ static const struct cipher_testvec aes_cfb_tv_template[] = {
+ "\x75\xa3\x85\x74\x1a\xb9\xce\xf8"
+ "\x20\x31\x62\x3d\x55\xb1\xe4\x71",
+ .len = 64,
++ .also_non_np = 1,
++ .np = 2,
++ .tap = { 31, 33 },
++ }, { /* > 16 bytes, not a multiple of 16 bytes */
++ .key = "\x2b\x7e\x15\x16\x28\xae\xd2\xa6"
++ "\xab\xf7\x15\x88\x09\xcf\x4f\x3c",
++ .klen = 16,
++ .iv = "\x00\x01\x02\x03\x04\x05\x06\x07"
++ "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
++ .ptext = "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96"
++ "\xe9\x3d\x7e\x11\x73\x93\x17\x2a"
++ "\xae",
++ .ctext = "\x3b\x3f\xd9\x2e\xb7\x2d\xad\x20"
++ "\x33\x34\x49\xf8\xe8\x3c\xfb\x4a"
++ "\xc8",
++ .len = 17,
++ }, { /* < 16 bytes */
++ .key = "\x2b\x7e\x15\x16\x28\xae\xd2\xa6"
++ "\xab\xf7\x15\x88\x09\xcf\x4f\x3c",
++ .klen = 16,
++ .iv = "\x00\x01\x02\x03\x04\x05\x06\x07"
++ "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
++ .ptext = "\x6b\xc1\xbe\xe2\x2e\x40\x9f",
++ .ctext = "\x3b\x3f\xd9\x2e\xb7\x2d\xad",
++ .len = 7,
+ },
+ };
+
+@@ -16656,8 +16681,7 @@ static const struct cipher_testvec aes_ctr_rfc3686_tv_template[] = {
+ };
+
+ static const struct cipher_testvec aes_ofb_tv_template[] = {
+- /* From NIST Special Publication 800-38A, Appendix F.5 */
+- {
++ { /* From NIST Special Publication 800-38A, Appendix F.5 */
+ .key = "\x2b\x7e\x15\x16\x28\xae\xd2\xa6"
+ "\xab\xf7\x15\x88\x09\xcf\x4f\x3c",
+ .klen = 16,
+@@ -16680,6 +16704,31 @@ static const struct cipher_testvec aes_ofb_tv_template[] = {
+ "\x30\x4c\x65\x28\xf6\x59\xc7\x78"
+ "\x66\xa5\x10\xd9\xc1\xd6\xae\x5e",
+ .len = 64,
++ .also_non_np = 1,
++ .np = 2,
++ .tap = { 31, 33 },
++ }, { /* > 16 bytes, not a multiple of 16 bytes */
++ .key = "\x2b\x7e\x15\x16\x28\xae\xd2\xa6"
++ "\xab\xf7\x15\x88\x09\xcf\x4f\x3c",
++ .klen = 16,
++ .iv = "\x00\x01\x02\x03\x04\x05\x06\x07"
++ "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
++ .ptext = "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96"
++ "\xe9\x3d\x7e\x11\x73\x93\x17\x2a"
++ "\xae",
++ .ctext = "\x3b\x3f\xd9\x2e\xb7\x2d\xad\x20"
++ "\x33\x34\x49\xf8\xe8\x3c\xfb\x4a"
++ "\x77",
++ .len = 17,
++ }, { /* < 16 bytes */
++ .key = "\x2b\x7e\x15\x16\x28\xae\xd2\xa6"
++ "\xab\xf7\x15\x88\x09\xcf\x4f\x3c",
++ .klen = 16,
++ .iv = "\x00\x01\x02\x03\x04\x05\x06\x07"
++ "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
++ .ptext = "\x6b\xc1\xbe\xe2\x2e\x40\x9f",
++ .ctext = "\x3b\x3f\xd9\x2e\xb7\x2d\xad",
++ .len = 7,
+ }
+ };
+
+diff --git a/drivers/acpi/acpi_video.c b/drivers/acpi/acpi_video.c
+index f0b52266b3ac..d73afb562ad9 100644
+--- a/drivers/acpi/acpi_video.c
++++ b/drivers/acpi/acpi_video.c
+@@ -2124,21 +2124,29 @@ static int __init intel_opregion_present(void)
+ return opregion;
+ }
+
++/* Check if the chassis-type indicates there is no builtin LCD panel */
+ static bool dmi_is_desktop(void)
+ {
+ const char *chassis_type;
++ unsigned long type;
+
+ chassis_type = dmi_get_system_info(DMI_CHASSIS_TYPE);
+ if (!chassis_type)
+ return false;
+
+- if (!strcmp(chassis_type, "3") || /* 3: Desktop */
+- !strcmp(chassis_type, "4") || /* 4: Low Profile Desktop */
+- !strcmp(chassis_type, "5") || /* 5: Pizza Box */
+- !strcmp(chassis_type, "6") || /* 6: Mini Tower */
+- !strcmp(chassis_type, "7") || /* 7: Tower */
+- !strcmp(chassis_type, "11")) /* 11: Main Server Chassis */
++ if (kstrtoul(chassis_type, 10, &type) != 0)
++ return false;
++
++ switch (type) {
++ case 0x03: /* Desktop */
++ case 0x04: /* Low Profile Desktop */
++ case 0x05: /* Pizza Box */
++ case 0x06: /* Mini Tower */
++ case 0x07: /* Tower */
++ case 0x10: /* Lunch Box */
++ case 0x11: /* Main Server Chassis */
+ return true;
++ }
+
+ return false;
+ }
+diff --git a/drivers/acpi/acpica/evgpe.c b/drivers/acpi/acpica/evgpe.c
+index e10fec99a182..4424997ecf30 100644
+--- a/drivers/acpi/acpica/evgpe.c
++++ b/drivers/acpi/acpica/evgpe.c
+@@ -81,8 +81,12 @@ acpi_status acpi_ev_enable_gpe(struct acpi_gpe_event_info *gpe_event_info)
+
+ ACPI_FUNCTION_TRACE(ev_enable_gpe);
+
+- /* Enable the requested GPE */
++ /* Clear the GPE status */
++ status = acpi_hw_clear_gpe(gpe_event_info);
++ if (ACPI_FAILURE(status))
++ return_ACPI_STATUS(status);
+
++ /* Enable the requested GPE */
+ status = acpi_hw_low_set_gpe(gpe_event_info, ACPI_GPE_ENABLE);
+ return_ACPI_STATUS(status);
+ }
+diff --git a/drivers/acpi/acpica/nsobject.c b/drivers/acpi/acpica/nsobject.c
+index 8638f43cfc3d..79d86da1c892 100644
+--- a/drivers/acpi/acpica/nsobject.c
++++ b/drivers/acpi/acpica/nsobject.c
+@@ -186,6 +186,10 @@ void acpi_ns_detach_object(struct acpi_namespace_node *node)
+ }
+ }
+
++ if (obj_desc->common.type == ACPI_TYPE_REGION) {
++ acpi_ut_remove_address_range(obj_desc->region.space_id, node);
++ }
++
+ /* Clear the Node entry in all cases */
+
+ node->object = NULL;
+diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
+index 217a782c3e55..7aa08884ed48 100644
+--- a/drivers/acpi/cppc_acpi.c
++++ b/drivers/acpi/cppc_acpi.c
+@@ -1108,8 +1108,13 @@ int cppc_get_perf_caps(int cpunum, struct cppc_perf_caps *perf_caps)
+ cpc_read(cpunum, nominal_reg, &nom);
+ perf_caps->nominal_perf = nom;
+
+- cpc_read(cpunum, guaranteed_reg, &guaranteed);
+- perf_caps->guaranteed_perf = guaranteed;
++ if (guaranteed_reg->type != ACPI_TYPE_BUFFER ||
++ IS_NULL_REG(&guaranteed_reg->cpc_entry.reg)) {
++ perf_caps->guaranteed_perf = 0;
++ } else {
++ cpc_read(cpunum, guaranteed_reg, &guaranteed);
++ perf_caps->guaranteed_perf = guaranteed;
++ }
+
+ cpc_read(cpunum, lowest_non_linear_reg, &min_nonlinear);
+ perf_caps->lowest_nonlinear_perf = min_nonlinear;
+diff --git a/drivers/acpi/device_sysfs.c b/drivers/acpi/device_sysfs.c
+index 545e91420cde..8940054d6250 100644
+--- a/drivers/acpi/device_sysfs.c
++++ b/drivers/acpi/device_sysfs.c
+@@ -202,11 +202,15 @@ static int create_of_modalias(struct acpi_device *acpi_dev, char *modalias,
+ {
+ struct acpi_buffer buf = { ACPI_ALLOCATE_BUFFER };
+ const union acpi_object *of_compatible, *obj;
++ acpi_status status;
+ int len, count;
+ int i, nval;
+ char *c;
+
+- acpi_get_name(acpi_dev->handle, ACPI_SINGLE_NAME, &buf);
++ status = acpi_get_name(acpi_dev->handle, ACPI_SINGLE_NAME, &buf);
++ if (ACPI_FAILURE(status))
++ return -ENODEV;
++
+ /* DT strings are all in lower case */
+ for (c = buf.pointer; *c != '\0'; c++)
+ *c = tolower(*c);
+diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
+index e18ade5d74e9..f75f8f870ce3 100644
+--- a/drivers/acpi/nfit/core.c
++++ b/drivers/acpi/nfit/core.c
+@@ -415,7 +415,7 @@ static int cmd_to_func(struct nfit_mem *nfit_mem, unsigned int cmd,
+ if (call_pkg) {
+ int i;
+
+- if (nfit_mem->family != call_pkg->nd_family)
++ if (nfit_mem && nfit_mem->family != call_pkg->nd_family)
+ return -ENOTTY;
+
+ for (i = 0; i < ARRAY_SIZE(call_pkg->nd_reserved2); i++)
+@@ -424,6 +424,10 @@ static int cmd_to_func(struct nfit_mem *nfit_mem, unsigned int cmd,
+ return call_pkg->nd_command;
+ }
+
++ /* In the !call_pkg case, bus commands == bus functions */
++ if (!nfit_mem)
++ return cmd;
++
+ /* Linux ND commands == NVDIMM_FAMILY_INTEL function numbers */
+ if (nfit_mem->family == NVDIMM_FAMILY_INTEL)
+ return cmd;
+@@ -454,17 +458,18 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,
+ if (cmd_rc)
+ *cmd_rc = -EINVAL;
+
++ if (cmd == ND_CMD_CALL)
++ call_pkg = buf;
++ func = cmd_to_func(nfit_mem, cmd, call_pkg);
++ if (func < 0)
++ return func;
++
+ if (nvdimm) {
+ struct acpi_device *adev = nfit_mem->adev;
+
+ if (!adev)
+ return -ENOTTY;
+
+- if (cmd == ND_CMD_CALL)
+- call_pkg = buf;
+- func = cmd_to_func(nfit_mem, cmd, call_pkg);
+- if (func < 0)
+- return func;
+ dimm_name = nvdimm_name(nvdimm);
+ cmd_name = nvdimm_cmd_name(cmd);
+ cmd_mask = nvdimm_cmd_mask(nvdimm);
+@@ -475,12 +480,9 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,
+ } else {
+ struct acpi_device *adev = to_acpi_dev(acpi_desc);
+
+- func = cmd;
+ cmd_name = nvdimm_bus_cmd_name(cmd);
+ cmd_mask = nd_desc->cmd_mask;
+- dsm_mask = cmd_mask;
+- if (cmd == ND_CMD_CALL)
+- dsm_mask = nd_desc->bus_dsm_mask;
++ dsm_mask = nd_desc->bus_dsm_mask;
+ desc = nd_cmd_bus_desc(cmd);
+ guid = to_nfit_uuid(NFIT_DEV_BUS);
+ handle = adev->handle;
+@@ -554,6 +556,13 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,
+ return -EINVAL;
+ }
+
++ if (out_obj->type != ACPI_TYPE_BUFFER) {
++ dev_dbg(dev, "%s unexpected output object type cmd: %s type: %d\n",
++ dimm_name, cmd_name, out_obj->type);
++ rc = -EINVAL;
++ goto out;
++ }
++
+ if (call_pkg) {
+ call_pkg->nd_fw_size = out_obj->buffer.length;
+ memcpy(call_pkg->nd_payload + call_pkg->nd_size_in,
+@@ -572,13 +581,6 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,
+ return 0;
+ }
+
+- if (out_obj->package.type != ACPI_TYPE_BUFFER) {
+- dev_dbg(dev, "%s unexpected output object type cmd: %s type: %d\n",
+- dimm_name, cmd_name, out_obj->type);
+- rc = -EINVAL;
+- goto out;
+- }
+-
+ dev_dbg(dev, "%s cmd: %s output length: %d\n", dimm_name,
+ cmd_name, out_obj->buffer.length);
+ print_hex_dump_debug(cmd_name, DUMP_PREFIX_OFFSET, 4, 4,
+@@ -1759,14 +1761,14 @@ static bool acpi_nvdimm_has_method(struct acpi_device *adev, char *method)
+
+ __weak void nfit_intel_shutdown_status(struct nfit_mem *nfit_mem)
+ {
++ struct device *dev = &nfit_mem->adev->dev;
+ struct nd_intel_smart smart = { 0 };
+ union acpi_object in_buf = {
+- .type = ACPI_TYPE_BUFFER,
+- .buffer.pointer = (char *) &smart,
+- .buffer.length = sizeof(smart),
++ .buffer.type = ACPI_TYPE_BUFFER,
++ .buffer.length = 0,
+ };
+ union acpi_object in_obj = {
+- .type = ACPI_TYPE_PACKAGE,
++ .package.type = ACPI_TYPE_PACKAGE,
+ .package.count = 1,
+ .package.elements = &in_buf,
+ };
+@@ -1781,8 +1783,15 @@ __weak void nfit_intel_shutdown_status(struct nfit_mem *nfit_mem)
+ return;
+
+ out_obj = acpi_evaluate_dsm(handle, guid, revid, func, &in_obj);
+- if (!out_obj)
++ if (!out_obj || out_obj->type != ACPI_TYPE_BUFFER
++ || out_obj->buffer.length < sizeof(smart)) {
++ dev_dbg(dev->parent, "%s: failed to retrieve initial health\n",
++ dev_name(dev));
++ ACPI_FREE(out_obj);
+ return;
++ }
++ memcpy(&smart, out_obj->buffer.pointer, sizeof(smart));
++ ACPI_FREE(out_obj);
+
+ if (smart.flags & ND_INTEL_SMART_SHUTDOWN_VALID) {
+ if (smart.shutdown_state)
+@@ -1793,7 +1802,6 @@ __weak void nfit_intel_shutdown_status(struct nfit_mem *nfit_mem)
+ set_bit(NFIT_MEM_DIRTY_COUNT, &nfit_mem->flags);
+ nfit_mem->dirty_shutdown = smart.shutdown_count;
+ }
+- ACPI_FREE(out_obj);
+ }
+
+ static void populate_shutdown_status(struct nfit_mem *nfit_mem)
+@@ -1915,18 +1923,19 @@ static int acpi_nfit_add_dimm(struct acpi_nfit_desc *acpi_desc,
+ | 1 << ND_CMD_SET_CONFIG_DATA;
+ if (family == NVDIMM_FAMILY_INTEL
+ && (dsm_mask & label_mask) == label_mask)
+- return 0;
+-
+- if (acpi_nvdimm_has_method(adev_dimm, "_LSI")
+- && acpi_nvdimm_has_method(adev_dimm, "_LSR")) {
+- dev_dbg(dev, "%s: has _LSR\n", dev_name(&adev_dimm->dev));
+- set_bit(NFIT_MEM_LSR, &nfit_mem->flags);
+- }
++ /* skip _LS{I,R,W} enabling */;
++ else {
++ if (acpi_nvdimm_has_method(adev_dimm, "_LSI")
++ && acpi_nvdimm_has_method(adev_dimm, "_LSR")) {
++ dev_dbg(dev, "%s: has _LSR\n", dev_name(&adev_dimm->dev));
++ set_bit(NFIT_MEM_LSR, &nfit_mem->flags);
++ }
+
+- if (test_bit(NFIT_MEM_LSR, &nfit_mem->flags)
+- && acpi_nvdimm_has_method(adev_dimm, "_LSW")) {
+- dev_dbg(dev, "%s: has _LSW\n", dev_name(&adev_dimm->dev));
+- set_bit(NFIT_MEM_LSW, &nfit_mem->flags);
++ if (test_bit(NFIT_MEM_LSR, &nfit_mem->flags)
++ && acpi_nvdimm_has_method(adev_dimm, "_LSW")) {
++ dev_dbg(dev, "%s: has _LSW\n", dev_name(&adev_dimm->dev));
++ set_bit(NFIT_MEM_LSW, &nfit_mem->flags);
++ }
+ }
+
+ populate_shutdown_status(nfit_mem);
+@@ -3004,14 +3013,16 @@ static int ars_register(struct acpi_nfit_desc *acpi_desc,
+ {
+ int rc;
+
+- if (no_init_ars || test_bit(ARS_FAILED, &nfit_spa->ars_state))
++ if (test_bit(ARS_FAILED, &nfit_spa->ars_state))
+ return acpi_nfit_register_region(acpi_desc, nfit_spa);
+
+ set_bit(ARS_REQ_SHORT, &nfit_spa->ars_state);
+- set_bit(ARS_REQ_LONG, &nfit_spa->ars_state);
++ if (!no_init_ars)
++ set_bit(ARS_REQ_LONG, &nfit_spa->ars_state);
+
+ switch (acpi_nfit_query_poison(acpi_desc)) {
+ case 0:
++ case -ENOSPC:
+ case -EAGAIN:
+ rc = ars_start(acpi_desc, nfit_spa, ARS_REQ_SHORT);
+ /* shouldn't happen, try again later */
+@@ -3036,7 +3047,6 @@ static int ars_register(struct acpi_nfit_desc *acpi_desc,
+ break;
+ case -EBUSY:
+ case -ENOMEM:
+- case -ENOSPC:
+ /*
+ * BIOS was using ARS, wait for it to complete (or
+ * resources to become available) and then perform our
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index 4d2b2ad1ee0e..01f80cbd2741 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -329,6 +329,8 @@ struct binder_error {
+ * (invariant after initialized)
+ * @min_priority: minimum scheduling priority
+ * (invariant after initialized)
++ * @txn_security_ctx: require sender's security context
++ * (invariant after initialized)
+ * @async_todo: list of async work items
+ * (protected by @proc->inner_lock)
+ *
+@@ -365,6 +367,7 @@ struct binder_node {
+ * invariant after initialization
+ */
+ u8 accept_fds:1;
++ u8 txn_security_ctx:1;
+ u8 min_priority;
+ };
+ bool has_async_transaction;
+@@ -615,6 +618,7 @@ struct binder_transaction {
+ long saved_priority;
+ kuid_t sender_euid;
+ struct list_head fd_fixups;
++ binder_uintptr_t security_ctx;
+ /**
+ * @lock: protects @from, @to_proc, and @to_thread
+ *
+@@ -1152,6 +1156,7 @@ static struct binder_node *binder_init_node_ilocked(
+ node->work.type = BINDER_WORK_NODE;
+ node->min_priority = flags & FLAT_BINDER_FLAG_PRIORITY_MASK;
+ node->accept_fds = !!(flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);
++ node->txn_security_ctx = !!(flags & FLAT_BINDER_FLAG_TXN_SECURITY_CTX);
+ spin_lock_init(&node->lock);
+ INIT_LIST_HEAD(&node->work.entry);
+ INIT_LIST_HEAD(&node->async_todo);
+@@ -2778,6 +2783,8 @@ static void binder_transaction(struct binder_proc *proc,
+ binder_size_t last_fixup_min_off = 0;
+ struct binder_context *context = proc->context;
+ int t_debug_id = atomic_inc_return(&binder_last_id);
++ char *secctx = NULL;
++ u32 secctx_sz = 0;
+
+ e = binder_transaction_log_add(&binder_transaction_log);
+ e->debug_id = t_debug_id;
+@@ -3020,6 +3027,20 @@ static void binder_transaction(struct binder_proc *proc,
+ t->flags = tr->flags;
+ t->priority = task_nice(current);
+
++ if (target_node && target_node->txn_security_ctx) {
++ u32 secid;
++
++ security_task_getsecid(proc->tsk, &secid);
++ ret = security_secid_to_secctx(secid, &secctx, &secctx_sz);
++ if (ret) {
++ return_error = BR_FAILED_REPLY;
++ return_error_param = ret;
++ return_error_line = __LINE__;
++ goto err_get_secctx_failed;
++ }
++ extra_buffers_size += ALIGN(secctx_sz, sizeof(u64));
++ }
++
+ trace_binder_transaction(reply, t, target_node);
+
+ t->buffer = binder_alloc_new_buf(&target_proc->alloc, tr->data_size,
+@@ -3036,6 +3057,19 @@ static void binder_transaction(struct binder_proc *proc,
+ t->buffer = NULL;
+ goto err_binder_alloc_buf_failed;
+ }
++ if (secctx) {
++ size_t buf_offset = ALIGN(tr->data_size, sizeof(void *)) +
++ ALIGN(tr->offsets_size, sizeof(void *)) +
++ ALIGN(extra_buffers_size, sizeof(void *)) -
++ ALIGN(secctx_sz, sizeof(u64));
++ char *kptr = t->buffer->data + buf_offset;
++
++ t->security_ctx = (uintptr_t)kptr +
++ binder_alloc_get_user_buffer_offset(&target_proc->alloc);
++ memcpy(kptr, secctx, secctx_sz);
++ security_release_secctx(secctx, secctx_sz);
++ secctx = NULL;
++ }
+ t->buffer->debug_id = t->debug_id;
+ t->buffer->transaction = t;
+ t->buffer->target_node = target_node;
+@@ -3305,6 +3339,9 @@ err_copy_data_failed:
+ t->buffer->transaction = NULL;
+ binder_alloc_free_buf(&target_proc->alloc, t->buffer);
+ err_binder_alloc_buf_failed:
++ if (secctx)
++ security_release_secctx(secctx, secctx_sz);
++err_get_secctx_failed:
+ kfree(tcomplete);
+ binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
+ err_alloc_tcomplete_failed:
+@@ -4036,11 +4073,13 @@ retry:
+
+ while (1) {
+ uint32_t cmd;
+- struct binder_transaction_data tr;
++ struct binder_transaction_data_secctx tr;
++ struct binder_transaction_data *trd = &tr.transaction_data;
+ struct binder_work *w = NULL;
+ struct list_head *list = NULL;
+ struct binder_transaction *t = NULL;
+ struct binder_thread *t_from;
++ size_t trsize = sizeof(*trd);
+
+ binder_inner_proc_lock(proc);
+ if (!binder_worklist_empty_ilocked(&thread->todo))
+@@ -4240,8 +4279,8 @@ retry:
+ if (t->buffer->target_node) {
+ struct binder_node *target_node = t->buffer->target_node;
+
+- tr.target.ptr = target_node->ptr;
+- tr.cookie = target_node->cookie;
++ trd->target.ptr = target_node->ptr;
++ trd->cookie = target_node->cookie;
+ t->saved_priority = task_nice(current);
+ if (t->priority < target_node->min_priority &&
+ !(t->flags & TF_ONE_WAY))
+@@ -4251,22 +4290,23 @@ retry:
+ binder_set_nice(target_node->min_priority);
+ cmd = BR_TRANSACTION;
+ } else {
+- tr.target.ptr = 0;
+- tr.cookie = 0;
++ trd->target.ptr = 0;
++ trd->cookie = 0;
+ cmd = BR_REPLY;
+ }
+- tr.code = t->code;
+- tr.flags = t->flags;
+- tr.sender_euid = from_kuid(current_user_ns(), t->sender_euid);
++ trd->code = t->code;
++ trd->flags = t->flags;
++ trd->sender_euid = from_kuid(current_user_ns(), t->sender_euid);
+
+ t_from = binder_get_txn_from(t);
+ if (t_from) {
+ struct task_struct *sender = t_from->proc->tsk;
+
+- tr.sender_pid = task_tgid_nr_ns(sender,
+- task_active_pid_ns(current));
++ trd->sender_pid =
++ task_tgid_nr_ns(sender,
++ task_active_pid_ns(current));
+ } else {
+- tr.sender_pid = 0;
++ trd->sender_pid = 0;
+ }
+
+ ret = binder_apply_fd_fixups(t);
+@@ -4297,15 +4337,20 @@ retry:
+ }
+ continue;
+ }
+- tr.data_size = t->buffer->data_size;
+- tr.offsets_size = t->buffer->offsets_size;
+- tr.data.ptr.buffer = (binder_uintptr_t)
++ trd->data_size = t->buffer->data_size;
++ trd->offsets_size = t->buffer->offsets_size;
++ trd->data.ptr.buffer = (binder_uintptr_t)
+ ((uintptr_t)t->buffer->data +
+ binder_alloc_get_user_buffer_offset(&proc->alloc));
+- tr.data.ptr.offsets = tr.data.ptr.buffer +
++ trd->data.ptr.offsets = trd->data.ptr.buffer +
+ ALIGN(t->buffer->data_size,
+ sizeof(void *));
+
++ tr.secctx = t->security_ctx;
++ if (t->security_ctx) {
++ cmd = BR_TRANSACTION_SEC_CTX;
++ trsize = sizeof(tr);
++ }
+ if (put_user(cmd, (uint32_t __user *)ptr)) {
+ if (t_from)
+ binder_thread_dec_tmpref(t_from);
+@@ -4316,7 +4361,7 @@ retry:
+ return -EFAULT;
+ }
+ ptr += sizeof(uint32_t);
+- if (copy_to_user(ptr, &tr, sizeof(tr))) {
++ if (copy_to_user(ptr, &tr, trsize)) {
+ if (t_from)
+ binder_thread_dec_tmpref(t_from);
+
+@@ -4325,7 +4370,7 @@ retry:
+
+ return -EFAULT;
+ }
+- ptr += sizeof(tr);
++ ptr += trsize;
+
+ trace_binder_transaction_received(t);
+ binder_stat_br(proc, thread, cmd);
+@@ -4333,16 +4378,18 @@ retry:
+ "%d:%d %s %d %d:%d, cmd %d size %zd-%zd ptr %016llx-%016llx\n",
+ proc->pid, thread->pid,
+ (cmd == BR_TRANSACTION) ? "BR_TRANSACTION" :
+- "BR_REPLY",
++ (cmd == BR_TRANSACTION_SEC_CTX) ?
++ "BR_TRANSACTION_SEC_CTX" : "BR_REPLY",
+ t->debug_id, t_from ? t_from->proc->pid : 0,
+ t_from ? t_from->pid : 0, cmd,
+ t->buffer->data_size, t->buffer->offsets_size,
+- (u64)tr.data.ptr.buffer, (u64)tr.data.ptr.offsets);
++ (u64)trd->data.ptr.buffer,
++ (u64)trd->data.ptr.offsets);
+
+ if (t_from)
+ binder_thread_dec_tmpref(t_from);
+ t->buffer->allow_user_free = 1;
+- if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {
++ if (cmd != BR_REPLY && !(t->flags & TF_ONE_WAY)) {
+ binder_inner_proc_lock(thread->proc);
+ t->to_parent = thread->transaction_stack;
+ t->to_thread = thread;
+@@ -4690,7 +4737,8 @@ out:
+ return ret;
+ }
+
+-static int binder_ioctl_set_ctx_mgr(struct file *filp)
++static int binder_ioctl_set_ctx_mgr(struct file *filp,
++ struct flat_binder_object *fbo)
+ {
+ int ret = 0;
+ struct binder_proc *proc = filp->private_data;
+@@ -4719,7 +4767,7 @@ static int binder_ioctl_set_ctx_mgr(struct file *filp)
+ } else {
+ context->binder_context_mgr_uid = curr_euid;
+ }
+- new_node = binder_new_node(proc, NULL);
++ new_node = binder_new_node(proc, fbo);
+ if (!new_node) {
+ ret = -ENOMEM;
+ goto out;
+@@ -4842,8 +4890,20 @@ static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+ binder_inner_proc_unlock(proc);
+ break;
+ }
++ case BINDER_SET_CONTEXT_MGR_EXT: {
++ struct flat_binder_object fbo;
++
++ if (copy_from_user(&fbo, ubuf, sizeof(fbo))) {
++ ret = -EINVAL;
++ goto err;
++ }
++ ret = binder_ioctl_set_ctx_mgr(filp, &fbo);
++ if (ret)
++ goto err;
++ break;
++ }
+ case BINDER_SET_CONTEXT_MGR:
+- ret = binder_ioctl_set_ctx_mgr(filp);
++ ret = binder_ioctl_set_ctx_mgr(filp, NULL);
+ if (ret)
+ goto err;
+ break;
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index 8ac10af17c00..d62487d02455 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -968,9 +968,9 @@ static void __device_release_driver(struct device *dev, struct device *parent)
+ drv->remove(dev);
+
+ device_links_driver_cleanup(dev);
+- arch_teardown_dma_ops(dev);
+
+ devres_release_all(dev);
++ arch_teardown_dma_ops(dev);
+ dev->driver = NULL;
+ dev_set_drvdata(dev, NULL);
+ if (dev->pm_domain && dev->pm_domain->dismiss)
+diff --git a/drivers/base/power/wakeup.c b/drivers/base/power/wakeup.c
+index 5fa1898755a3..7c84f64c74f7 100644
+--- a/drivers/base/power/wakeup.c
++++ b/drivers/base/power/wakeup.c
+@@ -118,7 +118,6 @@ void wakeup_source_drop(struct wakeup_source *ws)
+ if (!ws)
+ return;
+
+- del_timer_sync(&ws->timer);
+ __pm_relax(ws);
+ }
+ EXPORT_SYMBOL_GPL(wakeup_source_drop);
+@@ -205,6 +204,13 @@ void wakeup_source_remove(struct wakeup_source *ws)
+ list_del_rcu(&ws->entry);
+ raw_spin_unlock_irqrestore(&events_lock, flags);
+ synchronize_srcu(&wakeup_srcu);
++
++ del_timer_sync(&ws->timer);
++ /*
++ * Clear timer.function to make wakeup_source_not_registered() treat
++ * this wakeup source as not registered.
++ */
++ ws->timer.function = NULL;
+ }
+ EXPORT_SYMBOL_GPL(wakeup_source_remove);
+
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index cf5538942834..9a8d83bc1e75 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -656,7 +656,7 @@ static int loop_validate_file(struct file *file, struct block_device *bdev)
+ return -EBADF;
+
+ l = f->f_mapping->host->i_bdev->bd_disk->private_data;
+- if (l->lo_state == Lo_unbound) {
++ if (l->lo_state != Lo_bound) {
+ return -EINVAL;
+ }
+ f = l->lo_backing_file;
+@@ -1089,16 +1089,12 @@ static int __loop_clr_fd(struct loop_device *lo, bool release)
+ kobject_uevent(&disk_to_dev(bdev->bd_disk)->kobj, KOBJ_CHANGE);
+ }
+ mapping_set_gfp_mask(filp->f_mapping, gfp);
+- lo->lo_state = Lo_unbound;
+ /* This is safe: open() is still holding a reference. */
+ module_put(THIS_MODULE);
+ blk_mq_unfreeze_queue(lo->lo_queue);
+
+ partscan = lo->lo_flags & LO_FLAGS_PARTSCAN && bdev;
+ lo_number = lo->lo_number;
+- lo->lo_flags = 0;
+- if (!part_shift)
+- lo->lo_disk->flags |= GENHD_FL_NO_PART_SCAN;
+ loop_unprepare_queue(lo);
+ out_unlock:
+ mutex_unlock(&loop_ctl_mutex);
+@@ -1120,6 +1116,23 @@ out_unlock:
+ /* Device is gone, no point in returning error */
+ err = 0;
+ }
++
++ /*
++ * lo->lo_state is set to Lo_unbound here after above partscan has
++ * finished.
++ *
++ * There cannot be anybody else entering __loop_clr_fd() as
++ * lo->lo_backing_file is already cleared and Lo_rundown state
++ * protects us from all the other places trying to change the 'lo'
++ * device.
++ */
++ mutex_lock(&loop_ctl_mutex);
++ lo->lo_flags = 0;
++ if (!part_shift)
++ lo->lo_disk->flags |= GENHD_FL_NO_PART_SCAN;
++ lo->lo_state = Lo_unbound;
++ mutex_unlock(&loop_ctl_mutex);
++
+ /*
+ * Need not hold loop_ctl_mutex to fput backing file.
+ * Calling fput holding loop_ctl_mutex triggers a circular
+diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
+index 04ca65912638..684854d3b0ad 100644
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -290,18 +290,8 @@ static ssize_t idle_store(struct device *dev,
+ struct zram *zram = dev_to_zram(dev);
+ unsigned long nr_pages = zram->disksize >> PAGE_SHIFT;
+ int index;
+- char mode_buf[8];
+- ssize_t sz;
+
+- sz = strscpy(mode_buf, buf, sizeof(mode_buf));
+- if (sz <= 0)
+- return -EINVAL;
+-
+- /* ignore trailing new line */
+- if (mode_buf[sz - 1] == '\n')
+- mode_buf[sz - 1] = 0x00;
+-
+- if (strcmp(mode_buf, "all"))
++ if (!sysfs_streq(buf, "all"))
+ return -EINVAL;
+
+ down_read(&zram->init_lock);
+@@ -635,25 +625,15 @@ static ssize_t writeback_store(struct device *dev,
+ struct bio bio;
+ struct bio_vec bio_vec;
+ struct page *page;
+- ssize_t ret, sz;
+- char mode_buf[8];
+- int mode = -1;
++ ssize_t ret;
++ int mode;
+ unsigned long blk_idx = 0;
+
+- sz = strscpy(mode_buf, buf, sizeof(mode_buf));
+- if (sz <= 0)
+- return -EINVAL;
+-
+- /* ignore trailing newline */
+- if (mode_buf[sz - 1] == '\n')
+- mode_buf[sz - 1] = 0x00;
+-
+- if (!strcmp(mode_buf, "idle"))
++ if (sysfs_streq(buf, "idle"))
+ mode = IDLE_WRITEBACK;
+- else if (!strcmp(mode_buf, "huge"))
++ else if (sysfs_streq(buf, "huge"))
+ mode = HUGE_WRITEBACK;
+-
+- if (mode == -1)
++ else
+ return -EINVAL;
+
+ down_read(&zram->init_lock);
+diff --git a/drivers/bluetooth/btrtl.c b/drivers/bluetooth/btrtl.c
+index 41405de27d66..c91bba00df4e 100644
+--- a/drivers/bluetooth/btrtl.c
++++ b/drivers/bluetooth/btrtl.c
+@@ -552,10 +552,9 @@ struct btrtl_device_info *btrtl_initialize(struct hci_dev *hdev,
+ hdev->bus);
+
+ if (!btrtl_dev->ic_info) {
+- rtl_dev_err(hdev, "rtl: unknown IC info, lmp subver %04x, hci rev %04x, hci ver %04x",
++ rtl_dev_info(hdev, "rtl: unknown IC info, lmp subver %04x, hci rev %04x, hci ver %04x",
+ lmp_subver, hci_rev, hci_ver);
+- ret = -EINVAL;
+- goto err_free;
++ return btrtl_dev;
+ }
+
+ if (btrtl_dev->ic_info->has_rom_version) {
+@@ -610,6 +609,11 @@ int btrtl_download_firmware(struct hci_dev *hdev,
+ * standard btusb. Once that firmware is uploaded, the subver changes
+ * to a different value.
+ */
++ if (!btrtl_dev->ic_info) {
++ rtl_dev_info(hdev, "rtl: assuming no firmware upload needed\n");
++ return 0;
++ }
++
+ switch (btrtl_dev->ic_info->lmp_subver) {
+ case RTL_ROM_LMP_8723A:
+ case RTL_ROM_LMP_3499:
+diff --git a/drivers/bluetooth/h4_recv.h b/drivers/bluetooth/h4_recv.h
+index b432651f8236..307d82166f48 100644
+--- a/drivers/bluetooth/h4_recv.h
++++ b/drivers/bluetooth/h4_recv.h
+@@ -60,6 +60,10 @@ static inline struct sk_buff *h4_recv_buf(struct hci_dev *hdev,
+ const struct h4_recv_pkt *pkts,
+ int pkts_count)
+ {
++ /* Check for error from previous call */
++ if (IS_ERR(skb))
++ skb = NULL;
++
+ while (count) {
+ int i, len;
+
+diff --git a/drivers/bluetooth/hci_h4.c b/drivers/bluetooth/hci_h4.c
+index fb97a3bf069b..5d97d77627c1 100644
+--- a/drivers/bluetooth/hci_h4.c
++++ b/drivers/bluetooth/hci_h4.c
+@@ -174,6 +174,10 @@ struct sk_buff *h4_recv_buf(struct hci_dev *hdev, struct sk_buff *skb,
+ struct hci_uart *hu = hci_get_drvdata(hdev);
+ u8 alignment = hu->alignment ? hu->alignment : 1;
+
++ /* Check for error from previous call */
++ if (IS_ERR(skb))
++ skb = NULL;
++
+ while (count) {
+ int i, len;
+
+diff --git a/drivers/bluetooth/hci_ldisc.c b/drivers/bluetooth/hci_ldisc.c
+index fbf7b4df23ab..9562e72c1ae5 100644
+--- a/drivers/bluetooth/hci_ldisc.c
++++ b/drivers/bluetooth/hci_ldisc.c
+@@ -207,11 +207,11 @@ void hci_uart_init_work(struct work_struct *work)
+ err = hci_register_dev(hu->hdev);
+ if (err < 0) {
+ BT_ERR("Can't register HCI device");
++ clear_bit(HCI_UART_PROTO_READY, &hu->flags);
++ hu->proto->close(hu);
+ hdev = hu->hdev;
+ hu->hdev = NULL;
+ hci_free_dev(hdev);
+- clear_bit(HCI_UART_PROTO_READY, &hu->flags);
+- hu->proto->close(hu);
+ return;
+ }
+
+@@ -616,6 +616,7 @@ static void hci_uart_tty_receive(struct tty_struct *tty, const u8 *data,
+ static int hci_uart_register_dev(struct hci_uart *hu)
+ {
+ struct hci_dev *hdev;
++ int err;
+
+ BT_DBG("");
+
+@@ -659,11 +660,22 @@ static int hci_uart_register_dev(struct hci_uart *hu)
+ else
+ hdev->dev_type = HCI_PRIMARY;
+
++ /* Only call open() for the protocol after hdev is fully initialized as
++ * open() (or a timer/workqueue it starts) may attempt to reference it.
++ */
++ err = hu->proto->open(hu);
++ if (err) {
++ hu->hdev = NULL;
++ hci_free_dev(hdev);
++ return err;
++ }
++
+ if (test_bit(HCI_UART_INIT_PENDING, &hu->hdev_flags))
+ return 0;
+
+ if (hci_register_dev(hdev) < 0) {
+ BT_ERR("Can't register HCI device");
++ hu->proto->close(hu);
+ hu->hdev = NULL;
+ hci_free_dev(hdev);
+ return -ENODEV;
+@@ -683,20 +695,14 @@ static int hci_uart_set_proto(struct hci_uart *hu, int id)
+ if (!p)
+ return -EPROTONOSUPPORT;
+
+- err = p->open(hu);
+- if (err)
+- return err;
+-
+ hu->proto = p;
+- set_bit(HCI_UART_PROTO_READY, &hu->flags);
+
+ err = hci_uart_register_dev(hu);
+ if (err) {
+- clear_bit(HCI_UART_PROTO_READY, &hu->flags);
+- p->close(hu);
+ return err;
+ }
+
++ set_bit(HCI_UART_PROTO_READY, &hu->flags);
+ return 0;
+ }
+
+diff --git a/drivers/cdrom/cdrom.c b/drivers/cdrom/cdrom.c
+index 614ecdbb4ab7..933268b8d6a5 100644
+--- a/drivers/cdrom/cdrom.c
++++ b/drivers/cdrom/cdrom.c
+@@ -265,6 +265,7 @@
+ /* #define ERRLOGMASK (CD_WARNING|CD_OPEN|CD_COUNT_TRACKS|CD_CLOSE) */
+ /* #define ERRLOGMASK (CD_WARNING|CD_REG_UNREG|CD_DO_IOCTL|CD_OPEN|CD_CLOSE|CD_COUNT_TRACKS) */
+
++#include <linux/atomic.h>
+ #include <linux/module.h>
+ #include <linux/fs.h>
+ #include <linux/major.h>
+@@ -3692,9 +3693,9 @@ static struct ctl_table_header *cdrom_sysctl_header;
+
+ static void cdrom_sysctl_register(void)
+ {
+- static int initialized;
++ static atomic_t initialized = ATOMIC_INIT(0);
+
+- if (initialized == 1)
++ if (!atomic_add_unless(&initialized, 1, 1))
+ return;
+
+ cdrom_sysctl_header = register_sysctl_table(cdrom_root_table);
+@@ -3705,8 +3706,6 @@ static void cdrom_sysctl_register(void)
+ cdrom_sysctl_settings.debug = debug;
+ cdrom_sysctl_settings.lock = lockdoor;
+ cdrom_sysctl_settings.check = check_media_type;
+-
+- initialized = 1;
+ }
+
+ static void cdrom_sysctl_unregister(void)
+diff --git a/drivers/char/Kconfig b/drivers/char/Kconfig
+index 2e2ffe7010aa..51c77f0e47b2 100644
+--- a/drivers/char/Kconfig
++++ b/drivers/char/Kconfig
+@@ -351,7 +351,7 @@ config XILINX_HWICAP
+
+ config R3964
+ tristate "Siemens R3964 line discipline"
+- depends on TTY
++ depends on TTY && BROKEN
+ ---help---
+ This driver allows synchronous communication with devices using the
+ Siemens R3964 packet protocol. Unless you are dealing with special
+diff --git a/drivers/char/applicom.c b/drivers/char/applicom.c
+index c0a5b1f3a986..4ccc39e00ced 100644
+--- a/drivers/char/applicom.c
++++ b/drivers/char/applicom.c
+@@ -32,6 +32,7 @@
+ #include <linux/wait.h>
+ #include <linux/init.h>
+ #include <linux/fs.h>
++#include <linux/nospec.h>
+
+ #include <asm/io.h>
+ #include <linux/uaccess.h>
+@@ -386,7 +387,11 @@ static ssize_t ac_write(struct file *file, const char __user *buf, size_t count,
+ TicCard = st_loc.tic_des_from_pc; /* tic number to send */
+ IndexCard = NumCard - 1;
+
+- if((NumCard < 1) || (NumCard > MAX_BOARD) || !apbs[IndexCard].RamIO)
++ if (IndexCard >= MAX_BOARD)
++ return -EINVAL;
++ IndexCard = array_index_nospec(IndexCard, MAX_BOARD);
++
++ if (!apbs[IndexCard].RamIO)
+ return -EINVAL;
+
+ #ifdef DEBUG
+@@ -697,6 +702,7 @@ static long ac_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ unsigned char IndexCard;
+ void __iomem *pmem;
+ int ret = 0;
++ static int warncount = 10;
+ volatile unsigned char byte_reset_it;
+ struct st_ram_io *adgl;
+ void __user *argp = (void __user *)arg;
+@@ -711,16 +717,12 @@ static long ac_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ mutex_lock(&ac_mutex);
+ IndexCard = adgl->num_card-1;
+
+- if(cmd != 6 && ((IndexCard >= MAX_BOARD) || !apbs[IndexCard].RamIO)) {
+- static int warncount = 10;
+- if (warncount) {
+- printk( KERN_WARNING "APPLICOM driver IOCTL, bad board number %d\n",(int)IndexCard+1);
+- warncount--;
+- }
+- kfree(adgl);
+- mutex_unlock(&ac_mutex);
+- return -EINVAL;
+- }
++ if (cmd != 6 && IndexCard >= MAX_BOARD)
++ goto err;
++ IndexCard = array_index_nospec(IndexCard, MAX_BOARD);
++
++ if (cmd != 6 && !apbs[IndexCard].RamIO)
++ goto err;
+
+ switch (cmd) {
+
+@@ -838,5 +840,16 @@ static long ac_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ kfree(adgl);
+ mutex_unlock(&ac_mutex);
+ return 0;
++
++err:
++ if (warncount) {
++ pr_warn("APPLICOM driver IOCTL, bad board number %d\n",
++ (int)IndexCard + 1);
++ warncount--;
++ }
++ kfree(adgl);
++ mutex_unlock(&ac_mutex);
++ return -EINVAL;
++
+ }
+
+diff --git a/drivers/char/hpet.c b/drivers/char/hpet.c
+index 4a22b4b41aef..9bffcd37cc7b 100644
+--- a/drivers/char/hpet.c
++++ b/drivers/char/hpet.c
+@@ -377,7 +377,7 @@ static __init int hpet_mmap_enable(char *str)
+ pr_info("HPET mmap %s\n", hpet_mmap_enabled ? "enabled" : "disabled");
+ return 1;
+ }
+-__setup("hpet_mmap", hpet_mmap_enable);
++__setup("hpet_mmap=", hpet_mmap_enable);
+
+ static int hpet_mmap(struct file *file, struct vm_area_struct *vma)
+ {
+diff --git a/drivers/char/hw_random/virtio-rng.c b/drivers/char/hw_random/virtio-rng.c
+index b89df66ea1ae..7abd604e938c 100644
+--- a/drivers/char/hw_random/virtio-rng.c
++++ b/drivers/char/hw_random/virtio-rng.c
+@@ -73,7 +73,7 @@ static int virtio_read(struct hwrng *rng, void *buf, size_t size, bool wait)
+
+ if (!vi->busy) {
+ vi->busy = true;
+- init_completion(&vi->have_data);
++ reinit_completion(&vi->have_data);
+ register_buffer(vi, buf, size);
+ }
+
+diff --git a/drivers/char/ipmi/ipmi_si.h b/drivers/char/ipmi/ipmi_si.h
+index 52f6152d1fcb..7ae52c17618e 100644
+--- a/drivers/char/ipmi/ipmi_si.h
++++ b/drivers/char/ipmi/ipmi_si.h
+@@ -25,7 +25,9 @@ void ipmi_irq_finish_setup(struct si_sm_io *io);
+ int ipmi_si_remove_by_dev(struct device *dev);
+ void ipmi_si_remove_by_data(int addr_space, enum si_type si_type,
+ unsigned long addr);
+-int ipmi_si_hardcode_find_bmc(void);
++void ipmi_hardcode_init(void);
++void ipmi_si_hardcode_exit(void);
++int ipmi_si_hardcode_match(int addr_type, unsigned long addr);
+ void ipmi_si_platform_init(void);
+ void ipmi_si_platform_shutdown(void);
+
+diff --git a/drivers/char/ipmi/ipmi_si_hardcode.c b/drivers/char/ipmi/ipmi_si_hardcode.c
+index 487642809c58..1e5783961b0d 100644
+--- a/drivers/char/ipmi/ipmi_si_hardcode.c
++++ b/drivers/char/ipmi/ipmi_si_hardcode.c
+@@ -3,6 +3,7 @@
+ #define pr_fmt(fmt) "ipmi_hardcode: " fmt
+
+ #include <linux/moduleparam.h>
++#include <linux/platform_device.h>
+ #include "ipmi_si.h"
+
+ /*
+@@ -12,23 +13,22 @@
+
+ #define SI_MAX_PARMS 4
+
+-static char *si_type[SI_MAX_PARMS];
+ #define MAX_SI_TYPE_STR 30
+-static char si_type_str[MAX_SI_TYPE_STR];
++static char si_type_str[MAX_SI_TYPE_STR] __initdata;
+ static unsigned long addrs[SI_MAX_PARMS];
+ static unsigned int num_addrs;
+ static unsigned int ports[SI_MAX_PARMS];
+ static unsigned int num_ports;
+-static int irqs[SI_MAX_PARMS];
+-static unsigned int num_irqs;
+-static int regspacings[SI_MAX_PARMS];
+-static unsigned int num_regspacings;
+-static int regsizes[SI_MAX_PARMS];
+-static unsigned int num_regsizes;
+-static int regshifts[SI_MAX_PARMS];
+-static unsigned int num_regshifts;
+-static int slave_addrs[SI_MAX_PARMS]; /* Leaving 0 chooses the default value */
+-static unsigned int num_slave_addrs;
++static int irqs[SI_MAX_PARMS] __initdata;
++static unsigned int num_irqs __initdata;
++static int regspacings[SI_MAX_PARMS] __initdata;
++static unsigned int num_regspacings __initdata;
++static int regsizes[SI_MAX_PARMS] __initdata;
++static unsigned int num_regsizes __initdata;
++static int regshifts[SI_MAX_PARMS] __initdata;
++static unsigned int num_regshifts __initdata;
++static int slave_addrs[SI_MAX_PARMS] __initdata;
++static unsigned int num_slave_addrs __initdata;
+
+ module_param_string(type, si_type_str, MAX_SI_TYPE_STR, 0);
+ MODULE_PARM_DESC(type, "Defines the type of each interface, each"
+@@ -73,12 +73,133 @@ MODULE_PARM_DESC(slave_addrs, "Set the default IPMB slave address for"
+ " overridden by this parm. This is an array indexed"
+ " by interface number.");
+
+-int ipmi_si_hardcode_find_bmc(void)
++static struct platform_device *ipmi_hc_pdevs[SI_MAX_PARMS];
++
++static void __init ipmi_hardcode_init_one(const char *si_type_str,
++ unsigned int i,
++ unsigned long addr,
++ unsigned int flags)
+ {
+- int ret = -ENODEV;
+- int i;
+- struct si_sm_io io;
++ struct platform_device *pdev;
++ unsigned int num_r = 1, size;
++ struct resource r[4];
++ struct property_entry p[6];
++ enum si_type si_type;
++ unsigned int regspacing, regsize;
++ int rv;
++
++ memset(p, 0, sizeof(p));
++ memset(r, 0, sizeof(r));
++
++ if (!si_type_str || !*si_type_str || strcmp(si_type_str, "kcs") == 0) {
++ size = 2;
++ si_type = SI_KCS;
++ } else if (strcmp(si_type_str, "smic") == 0) {
++ size = 2;
++ si_type = SI_SMIC;
++ } else if (strcmp(si_type_str, "bt") == 0) {
++ size = 3;
++ si_type = SI_BT;
++ } else if (strcmp(si_type_str, "invalid") == 0) {
++ /*
++ * Allow a firmware-specified interface to be
++ * disabled.
++ */
++ size = 1;
++ si_type = SI_TYPE_INVALID;
++ } else {
++ pr_warn("Interface type specified for interface %d, was invalid: %s\n",
++ i, si_type_str);
++ return;
++ }
++
++ regsize = regsizes[i];
++ if (regsize == 0)
++ regsize = DEFAULT_REGSIZE;
++
++ p[0] = PROPERTY_ENTRY_U8("ipmi-type", si_type);
++ p[1] = PROPERTY_ENTRY_U8("slave-addr", slave_addrs[i]);
++ p[2] = PROPERTY_ENTRY_U8("addr-source", SI_HARDCODED);
++ p[3] = PROPERTY_ENTRY_U8("reg-shift", regshifts[i]);
++ p[4] = PROPERTY_ENTRY_U8("reg-size", regsize);
++ /* Last entry must be left NULL to terminate it. */
++
++ /*
++ * Register spacing is derived from the resources in
++ * the IPMI platform code.
++ */
++ regspacing = regspacings[i];
++ if (regspacing == 0)
++ regspacing = regsize;
++
++ r[0].start = addr;
++ r[0].end = r[0].start + regsize - 1;
++ r[0].name = "IPMI Address 1";
++ r[0].flags = flags;
++
++ if (size > 1) {
++ r[1].start = r[0].start + regspacing;
++ r[1].end = r[1].start + regsize - 1;
++ r[1].name = "IPMI Address 2";
++ r[1].flags = flags;
++ num_r++;
++ }
++
++ if (size > 2) {
++ r[2].start = r[1].start + regspacing;
++ r[2].end = r[2].start + regsize - 1;
++ r[2].name = "IPMI Address 3";
++ r[2].flags = flags;
++ num_r++;
++ }
++
++ if (irqs[i]) {
++ r[num_r].start = irqs[i];
++ r[num_r].end = irqs[i];
++ r[num_r].name = "IPMI IRQ";
++ r[num_r].flags = IORESOURCE_IRQ;
++ num_r++;
++ }
++
++ pdev = platform_device_alloc("hardcode-ipmi-si", i);
++ if (!pdev) {
++ pr_err("Error allocating IPMI platform device %d\n", i);
++ return;
++ }
++
++ rv = platform_device_add_resources(pdev, r, num_r);
++ if (rv) {
++ dev_err(&pdev->dev,
++ "Unable to add hard-code resources: %d\n", rv);
++ goto err;
++ }
++
++ rv = platform_device_add_properties(pdev, p);
++ if (rv) {
++ dev_err(&pdev->dev,
++ "Unable to add hard-code properties: %d\n", rv);
++ goto err;
++ }
++
++ rv = platform_device_add(pdev);
++ if (rv) {
++ dev_err(&pdev->dev,
++ "Unable to add hard-code device: %d\n", rv);
++ goto err;
++ }
++
++ ipmi_hc_pdevs[i] = pdev;
++ return;
++
++err:
++ platform_device_put(pdev);
++}
++
++void __init ipmi_hardcode_init(void)
++{
++ unsigned int i;
+ char *str;
++ char *si_type[SI_MAX_PARMS];
+
+ /* Parse out the si_type string into its components. */
+ str = si_type_str;
+@@ -95,54 +216,45 @@ int ipmi_si_hardcode_find_bmc(void)
+ }
+ }
+
+- memset(&io, 0, sizeof(io));
+ for (i = 0; i < SI_MAX_PARMS; i++) {
+- if (!ports[i] && !addrs[i])
+- continue;
+-
+- io.addr_source = SI_HARDCODED;
+- pr_info("probing via hardcoded address\n");
+-
+- if (!si_type[i] || strcmp(si_type[i], "kcs") == 0) {
+- io.si_type = SI_KCS;
+- } else if (strcmp(si_type[i], "smic") == 0) {
+- io.si_type = SI_SMIC;
+- } else if (strcmp(si_type[i], "bt") == 0) {
+- io.si_type = SI_BT;
+- } else {
+- pr_warn("Interface type specified for interface %d, was invalid: %s\n",
+- i, si_type[i]);
+- continue;
+- }
++ if (i < num_ports && ports[i])
++ ipmi_hardcode_init_one(si_type[i], i, ports[i],
++ IORESOURCE_IO);
++ if (i < num_addrs && addrs[i])
++ ipmi_hardcode_init_one(si_type[i], i, addrs[i],
++ IORESOURCE_MEM);
++ }
++}
+
+- if (ports[i]) {
+- /* An I/O port */
+- io.addr_data = ports[i];
+- io.addr_type = IPMI_IO_ADDR_SPACE;
+- } else if (addrs[i]) {
+- /* A memory port */
+- io.addr_data = addrs[i];
+- io.addr_type = IPMI_MEM_ADDR_SPACE;
+- } else {
+- pr_warn("Interface type specified for interface %d, but port and address were not set or set to zero\n",
+- i);
+- continue;
+- }
++void ipmi_si_hardcode_exit(void)
++{
++ unsigned int i;
+
+- io.addr = NULL;
+- io.regspacing = regspacings[i];
+- if (!io.regspacing)
+- io.regspacing = DEFAULT_REGSPACING;
+- io.regsize = regsizes[i];
+- if (!io.regsize)
+- io.regsize = DEFAULT_REGSIZE;
+- io.regshift = regshifts[i];
+- io.irq = irqs[i];
+- if (io.irq)
+- io.irq_setup = ipmi_std_irq_setup;
+- io.slave_addr = slave_addrs[i];
+-
+- ret = ipmi_si_add_smi(&io);
++ for (i = 0; i < SI_MAX_PARMS; i++) {
++ if (ipmi_hc_pdevs[i])
++ platform_device_unregister(ipmi_hc_pdevs[i]);
+ }
+- return ret;
++}
++
++/*
++ * Returns true if the given address exists as a hardcoded address,
++ * false if not.
++ */
++int ipmi_si_hardcode_match(int addr_type, unsigned long addr)
++{
++ unsigned int i;
++
++ if (addr_type == IPMI_IO_ADDR_SPACE) {
++ for (i = 0; i < num_ports; i++) {
++ if (ports[i] == addr)
++ return 1;
++ }
++ } else {
++ for (i = 0; i < num_addrs; i++) {
++ if (addrs[i] == addr)
++ return 1;
++ }
++ }
++
++ return 0;
+ }
+diff --git a/drivers/char/ipmi/ipmi_si_intf.c b/drivers/char/ipmi/ipmi_si_intf.c
+index dc8603d34320..5294abc4c96c 100644
+--- a/drivers/char/ipmi/ipmi_si_intf.c
++++ b/drivers/char/ipmi/ipmi_si_intf.c
+@@ -1862,6 +1862,18 @@ int ipmi_si_add_smi(struct si_sm_io *io)
+ int rv = 0;
+ struct smi_info *new_smi, *dup;
+
++ /*
++ * If the user gave us a hard-coded device at the same
++ * address, they presumably want us to use it and not what is
++ * in the firmware.
++ */
++ if (io->addr_source != SI_HARDCODED &&
++ ipmi_si_hardcode_match(io->addr_type, io->addr_data)) {
++ dev_info(io->dev,
++ "Hard-coded device at this address already exists");
++ return -ENODEV;
++ }
++
+ if (!io->io_setup) {
+ if (io->addr_type == IPMI_IO_ADDR_SPACE) {
+ io->io_setup = ipmi_si_port_setup;
+@@ -2085,11 +2097,16 @@ static int try_smi_init(struct smi_info *new_smi)
+ WARN_ON(new_smi->io.dev->init_name != NULL);
+
+ out_err:
++ if (rv && new_smi->io.io_cleanup) {
++ new_smi->io.io_cleanup(&new_smi->io);
++ new_smi->io.io_cleanup = NULL;
++ }
++
+ kfree(init_name);
+ return rv;
+ }
+
+-static int init_ipmi_si(void)
++static int __init init_ipmi_si(void)
+ {
+ struct smi_info *e;
+ enum ipmi_addr_src type = SI_INVALID;
+@@ -2097,11 +2114,9 @@ static int init_ipmi_si(void)
+ if (initialized)
+ return 0;
+
+- pr_info("IPMI System Interface driver\n");
++ ipmi_hardcode_init();
+
+- /* If the user gave us a device, they presumably want us to use it */
+- if (!ipmi_si_hardcode_find_bmc())
+- goto do_scan;
++ pr_info("IPMI System Interface driver\n");
+
+ ipmi_si_platform_init();
+
+@@ -2113,7 +2128,6 @@ static int init_ipmi_si(void)
+ with multiple BMCs we assume that there will be several instances
+ of a given type so if we succeed in registering a type then also
+ try to register everything else of the same type */
+-do_scan:
+ mutex_lock(&smi_infos_lock);
+ list_for_each_entry(e, &smi_infos, link) {
+ /* Try to register a device if it has an IRQ and we either
+@@ -2299,6 +2313,8 @@ static void cleanup_ipmi_si(void)
+ list_for_each_entry_safe(e, tmp_e, &smi_infos, link)
+ cleanup_one_si(e);
+ mutex_unlock(&smi_infos_lock);
++
++ ipmi_si_hardcode_exit();
+ }
+ module_exit(cleanup_ipmi_si);
+
+diff --git a/drivers/char/ipmi/ipmi_si_mem_io.c b/drivers/char/ipmi/ipmi_si_mem_io.c
+index fd0ec8d6bf0e..75583612ab10 100644
+--- a/drivers/char/ipmi/ipmi_si_mem_io.c
++++ b/drivers/char/ipmi/ipmi_si_mem_io.c
+@@ -81,8 +81,6 @@ int ipmi_si_mem_setup(struct si_sm_io *io)
+ if (!addr)
+ return -ENODEV;
+
+- io->io_cleanup = mem_cleanup;
+-
+ /*
+ * Figure out the actual readb/readw/readl/etc routine to use based
+ * upon the register size.
+@@ -141,5 +139,8 @@ int ipmi_si_mem_setup(struct si_sm_io *io)
+ mem_region_cleanup(io, io->io_size);
+ return -EIO;
+ }
++
++ io->io_cleanup = mem_cleanup;
++
+ return 0;
+ }
+diff --git a/drivers/char/ipmi/ipmi_si_platform.c b/drivers/char/ipmi/ipmi_si_platform.c
+index 15cf819f884f..8158d03542f4 100644
+--- a/drivers/char/ipmi/ipmi_si_platform.c
++++ b/drivers/char/ipmi/ipmi_si_platform.c
+@@ -128,8 +128,6 @@ ipmi_get_info_from_resources(struct platform_device *pdev,
+ if (res_second->start > io->addr_data)
+ io->regspacing = res_second->start - io->addr_data;
+ }
+- io->regsize = DEFAULT_REGSIZE;
+- io->regshift = 0;
+
+ return res;
+ }
+@@ -137,7 +135,7 @@ ipmi_get_info_from_resources(struct platform_device *pdev,
+ static int platform_ipmi_probe(struct platform_device *pdev)
+ {
+ struct si_sm_io io;
+- u8 type, slave_addr, addr_source;
++ u8 type, slave_addr, addr_source, regsize, regshift;
+ int rv;
+
+ rv = device_property_read_u8(&pdev->dev, "addr-source", &addr_source);
+@@ -149,7 +147,7 @@ static int platform_ipmi_probe(struct platform_device *pdev)
+ if (addr_source == SI_SMBIOS) {
+ if (!si_trydmi)
+ return -ENODEV;
+- } else {
++ } else if (addr_source != SI_HARDCODED) {
+ if (!si_tryplatform)
+ return -ENODEV;
+ }
+@@ -169,11 +167,23 @@ static int platform_ipmi_probe(struct platform_device *pdev)
+ case SI_BT:
+ io.si_type = type;
+ break;
++ case SI_TYPE_INVALID: /* User disabled this in hardcode. */
++ return -ENODEV;
+ default:
+ dev_err(&pdev->dev, "ipmi-type property is invalid\n");
+ return -EINVAL;
+ }
+
++ io.regsize = DEFAULT_REGSIZE;
++ rv = device_property_read_u8(&pdev->dev, "reg-size", ®size);
++ if (!rv)
++ io.regsize = regsize;
++
++ io.regshift = 0;
++ rv = device_property_read_u8(&pdev->dev, "reg-shift", ®shift);
++ if (!rv)
++ io.regshift = regshift;
++
+ if (!ipmi_get_info_from_resources(pdev, &io))
+ return -EINVAL;
+
+@@ -193,7 +203,8 @@ static int platform_ipmi_probe(struct platform_device *pdev)
+
+ io.dev = &pdev->dev;
+
+- pr_info("ipmi_si: SMBIOS: %s %#lx regsize %d spacing %d irq %d\n",
++ pr_info("ipmi_si: %s: %s %#lx regsize %d spacing %d irq %d\n",
++ ipmi_addr_src_to_str(addr_source),
+ (io.addr_type == IPMI_IO_ADDR_SPACE) ? "io" : "mem",
+ io.addr_data, io.regsize, io.regspacing, io.irq);
+
+@@ -358,6 +369,9 @@ static int acpi_ipmi_probe(struct platform_device *pdev)
+ goto err_free;
+ }
+
++ io.regsize = DEFAULT_REGSIZE;
++ io.regshift = 0;
++
+ res = ipmi_get_info_from_resources(pdev, &io);
+ if (!res) {
+ rv = -EINVAL;
+@@ -420,8 +434,9 @@ static int ipmi_remove(struct platform_device *pdev)
+ }
+
+ static const struct platform_device_id si_plat_ids[] = {
+- { "dmi-ipmi-si", 0 },
+- { }
++ { "dmi-ipmi-si", 0 },
++ { "hardcode-ipmi-si", 0 },
++ { }
+ };
+
+ struct platform_driver ipmi_platform_driver = {
+diff --git a/drivers/char/ipmi/ipmi_si_port_io.c b/drivers/char/ipmi/ipmi_si_port_io.c
+index ef6dffcea9fa..03924c32b6e9 100644
+--- a/drivers/char/ipmi/ipmi_si_port_io.c
++++ b/drivers/char/ipmi/ipmi_si_port_io.c
+@@ -68,8 +68,6 @@ int ipmi_si_port_setup(struct si_sm_io *io)
+ if (!addr)
+ return -ENODEV;
+
+- io->io_cleanup = port_cleanup;
+-
+ /*
+ * Figure out the actual inb/inw/inl/etc routine to use based
+ * upon the register size.
+@@ -109,5 +107,8 @@ int ipmi_si_port_setup(struct si_sm_io *io)
+ return -EIO;
+ }
+ }
++
++ io->io_cleanup = port_cleanup;
++
+ return 0;
+ }
+diff --git a/drivers/char/tpm/st33zp24/st33zp24.c b/drivers/char/tpm/st33zp24/st33zp24.c
+index 64dc560859f2..13dc614b7ebc 100644
+--- a/drivers/char/tpm/st33zp24/st33zp24.c
++++ b/drivers/char/tpm/st33zp24/st33zp24.c
+@@ -436,7 +436,7 @@ static int st33zp24_send(struct tpm_chip *chip, unsigned char *buf,
+ goto out_err;
+ }
+
+- return len;
++ return 0;
+ out_err:
+ st33zp24_cancel(chip);
+ release_locality(chip);
+diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
+index d9439f9abe78..88d2e01a651d 100644
+--- a/drivers/char/tpm/tpm-interface.c
++++ b/drivers/char/tpm/tpm-interface.c
+@@ -230,10 +230,19 @@ static ssize_t tpm_try_transmit(struct tpm_chip *chip,
+ if (rc < 0) {
+ if (rc != -EPIPE)
+ dev_err(&chip->dev,
+- "%s: tpm_send: error %d\n", __func__, rc);
++ "%s: send(): error %d\n", __func__, rc);
+ goto out;
+ }
+
++ /* A sanity check. send() should just return zero on success e.g.
++ * not the command length.
++ */
++ if (rc > 0) {
++ dev_warn(&chip->dev,
++ "%s: send(): invalid value %d\n", __func__, rc);
++ rc = 0;
++ }
++
+ if (chip->flags & TPM_CHIP_FLAG_IRQ)
+ goto out_recv;
+
+diff --git a/drivers/char/tpm/tpm_atmel.c b/drivers/char/tpm/tpm_atmel.c
+index 66a14526aaf4..a290b30a0c35 100644
+--- a/drivers/char/tpm/tpm_atmel.c
++++ b/drivers/char/tpm/tpm_atmel.c
+@@ -105,7 +105,7 @@ static int tpm_atml_send(struct tpm_chip *chip, u8 *buf, size_t count)
+ iowrite8(buf[i], priv->iobase);
+ }
+
+- return count;
++ return 0;
+ }
+
+ static void tpm_atml_cancel(struct tpm_chip *chip)
+diff --git a/drivers/char/tpm/tpm_crb.c b/drivers/char/tpm/tpm_crb.c
+index 36952ef98f90..763fc7e6c005 100644
+--- a/drivers/char/tpm/tpm_crb.c
++++ b/drivers/char/tpm/tpm_crb.c
+@@ -287,19 +287,29 @@ static int crb_recv(struct tpm_chip *chip, u8 *buf, size_t count)
+ struct crb_priv *priv = dev_get_drvdata(&chip->dev);
+ unsigned int expected;
+
+- /* sanity check */
+- if (count < 6)
++ /* A sanity check that the upper layer wants to get at least the header
++ * as that is the minimum size for any TPM response.
++ */
++ if (count < TPM_HEADER_SIZE)
+ return -EIO;
+
++ /* If this bit is set, according to the spec, the TPM is in
++ * unrecoverable condition.
++ */
+ if (ioread32(&priv->regs_t->ctrl_sts) & CRB_CTRL_STS_ERROR)
+ return -EIO;
+
+- memcpy_fromio(buf, priv->rsp, 6);
+- expected = be32_to_cpup((__be32 *) &buf[2]);
+- if (expected > count || expected < 6)
++ /* Read the first 8 bytes in order to get the length of the response.
++ * We read exactly a quad word in order to make sure that the remaining
++ * reads will be aligned.
++ */
++ memcpy_fromio(buf, priv->rsp, 8);
++
++ expected = be32_to_cpup((__be32 *)&buf[2]);
++ if (expected > count || expected < TPM_HEADER_SIZE)
+ return -EIO;
+
+- memcpy_fromio(&buf[6], &priv->rsp[6], expected - 6);
++ memcpy_fromio(&buf[8], &priv->rsp[8], expected - 8);
+
+ return expected;
+ }
+diff --git a/drivers/char/tpm/tpm_i2c_atmel.c b/drivers/char/tpm/tpm_i2c_atmel.c
+index 95ce2e9ccdc6..32a8e27c5382 100644
+--- a/drivers/char/tpm/tpm_i2c_atmel.c
++++ b/drivers/char/tpm/tpm_i2c_atmel.c
+@@ -65,7 +65,11 @@ static int i2c_atmel_send(struct tpm_chip *chip, u8 *buf, size_t len)
+ dev_dbg(&chip->dev,
+ "%s(buf=%*ph len=%0zx) -> sts=%d\n", __func__,
+ (int)min_t(size_t, 64, len), buf, len, status);
+- return status;
++
++ if (status < 0)
++ return status;
++
++ return 0;
+ }
+
+ static int i2c_atmel_recv(struct tpm_chip *chip, u8 *buf, size_t count)
+diff --git a/drivers/char/tpm/tpm_i2c_infineon.c b/drivers/char/tpm/tpm_i2c_infineon.c
+index 9086edc9066b..977fd42daa1b 100644
+--- a/drivers/char/tpm/tpm_i2c_infineon.c
++++ b/drivers/char/tpm/tpm_i2c_infineon.c
+@@ -587,7 +587,7 @@ static int tpm_tis_i2c_send(struct tpm_chip *chip, u8 *buf, size_t len)
+ /* go and do it */
+ iic_tpm_write(TPM_STS(tpm_dev.locality), &sts, 1);
+
+- return len;
++ return 0;
+ out_err:
+ tpm_tis_i2c_ready(chip);
+ /* The TPM needs some time to clean up here,
+diff --git a/drivers/char/tpm/tpm_i2c_nuvoton.c b/drivers/char/tpm/tpm_i2c_nuvoton.c
+index 217f7f1cbde8..058220edb8b3 100644
+--- a/drivers/char/tpm/tpm_i2c_nuvoton.c
++++ b/drivers/char/tpm/tpm_i2c_nuvoton.c
+@@ -467,7 +467,7 @@ static int i2c_nuvoton_send(struct tpm_chip *chip, u8 *buf, size_t len)
+ }
+
+ dev_dbg(dev, "%s() -> %zd\n", __func__, len);
+- return len;
++ return 0;
+ }
+
+ static bool i2c_nuvoton_req_canceled(struct tpm_chip *chip, u8 status)
+diff --git a/drivers/char/tpm/tpm_ibmvtpm.c b/drivers/char/tpm/tpm_ibmvtpm.c
+index 07b5a487d0c8..757ca45b39b8 100644
+--- a/drivers/char/tpm/tpm_ibmvtpm.c
++++ b/drivers/char/tpm/tpm_ibmvtpm.c
+@@ -139,14 +139,14 @@ static int tpm_ibmvtpm_recv(struct tpm_chip *chip, u8 *buf, size_t count)
+ }
+
+ /**
+- * tpm_ibmvtpm_send - Send tpm request
+- *
++ * tpm_ibmvtpm_send() - Send a TPM command
+ * @chip: tpm chip struct
+ * @buf: buffer contains data to send
+ * @count: size of buffer
+ *
+ * Return:
+- * Number of bytes sent or < 0 on error.
++ * 0 on success,
++ * -errno on error
+ */
+ static int tpm_ibmvtpm_send(struct tpm_chip *chip, u8 *buf, size_t count)
+ {
+@@ -192,7 +192,7 @@ static int tpm_ibmvtpm_send(struct tpm_chip *chip, u8 *buf, size_t count)
+ rc = 0;
+ ibmvtpm->tpm_processing_cmd = false;
+ } else
+- rc = count;
++ rc = 0;
+
+ spin_unlock(&ibmvtpm->rtce_lock);
+ return rc;
+diff --git a/drivers/char/tpm/tpm_infineon.c b/drivers/char/tpm/tpm_infineon.c
+index d8f10047fbba..97f6d4fe0aee 100644
+--- a/drivers/char/tpm/tpm_infineon.c
++++ b/drivers/char/tpm/tpm_infineon.c
+@@ -354,7 +354,7 @@ static int tpm_inf_send(struct tpm_chip *chip, u8 * buf, size_t count)
+ for (i = 0; i < count; i++) {
+ wait_and_send(chip, buf[i]);
+ }
+- return count;
++ return 0;
+ }
+
+ static void tpm_inf_cancel(struct tpm_chip *chip)
+diff --git a/drivers/char/tpm/tpm_nsc.c b/drivers/char/tpm/tpm_nsc.c
+index 5d6cce74cd3f..9bee3c5eb4bf 100644
+--- a/drivers/char/tpm/tpm_nsc.c
++++ b/drivers/char/tpm/tpm_nsc.c
+@@ -226,7 +226,7 @@ static int tpm_nsc_send(struct tpm_chip *chip, u8 * buf, size_t count)
+ }
+ outb(NSC_COMMAND_EOC, priv->base + NSC_COMMAND);
+
+- return count;
++ return 0;
+ }
+
+ static void tpm_nsc_cancel(struct tpm_chip *chip)
+diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
+index bf7e49cfa643..bb0c2e160562 100644
+--- a/drivers/char/tpm/tpm_tis_core.c
++++ b/drivers/char/tpm/tpm_tis_core.c
+@@ -481,7 +481,7 @@ static int tpm_tis_send_main(struct tpm_chip *chip, const u8 *buf, size_t len)
+ goto out_err;
+ }
+ }
+- return len;
++ return 0;
+ out_err:
+ tpm_tis_ready(chip);
+ return rc;
+diff --git a/drivers/char/tpm/tpm_vtpm_proxy.c b/drivers/char/tpm/tpm_vtpm_proxy.c
+index 87a0ce47f201..ecbb63f8d231 100644
+--- a/drivers/char/tpm/tpm_vtpm_proxy.c
++++ b/drivers/char/tpm/tpm_vtpm_proxy.c
+@@ -335,7 +335,6 @@ static int vtpm_proxy_is_driver_command(struct tpm_chip *chip,
+ static int vtpm_proxy_tpm_op_send(struct tpm_chip *chip, u8 *buf, size_t count)
+ {
+ struct proxy_dev *proxy_dev = dev_get_drvdata(&chip->dev);
+- int rc = 0;
+
+ if (count > sizeof(proxy_dev->buffer)) {
+ dev_err(&chip->dev,
+@@ -366,7 +365,7 @@ static int vtpm_proxy_tpm_op_send(struct tpm_chip *chip, u8 *buf, size_t count)
+
+ wake_up_interruptible(&proxy_dev->wq);
+
+- return rc;
++ return 0;
+ }
+
+ static void vtpm_proxy_tpm_op_cancel(struct tpm_chip *chip)
+diff --git a/drivers/char/tpm/xen-tpmfront.c b/drivers/char/tpm/xen-tpmfront.c
+index b150f87f38f5..5a327eb7f63a 100644
+--- a/drivers/char/tpm/xen-tpmfront.c
++++ b/drivers/char/tpm/xen-tpmfront.c
+@@ -173,7 +173,7 @@ static int vtpm_send(struct tpm_chip *chip, u8 *buf, size_t count)
+ return -ETIME;
+ }
+
+- return count;
++ return 0;
+ }
+
+ static int vtpm_recv(struct tpm_chip *chip, u8 *buf, size_t count)
+diff --git a/drivers/clk/clk-fractional-divider.c b/drivers/clk/clk-fractional-divider.c
+index 545dceec0bbf..fdfe2e423d15 100644
+--- a/drivers/clk/clk-fractional-divider.c
++++ b/drivers/clk/clk-fractional-divider.c
+@@ -79,7 +79,7 @@ static long clk_fd_round_rate(struct clk_hw *hw, unsigned long rate,
+ unsigned long m, n;
+ u64 ret;
+
+- if (!rate || rate >= *parent_rate)
++ if (!rate || (!clk_hw_can_set_rate_parent(hw) && rate >= *parent_rate))
+ return *parent_rate;
+
+ if (fd->approximation)
+diff --git a/drivers/clk/clk-twl6040.c b/drivers/clk/clk-twl6040.c
+index ea846f77750b..0cad5748bf0e 100644
+--- a/drivers/clk/clk-twl6040.c
++++ b/drivers/clk/clk-twl6040.c
+@@ -41,6 +41,43 @@ static int twl6040_pdmclk_is_prepared(struct clk_hw *hw)
+ return pdmclk->enabled;
+ }
+
++static int twl6040_pdmclk_reset_one_clock(struct twl6040_pdmclk *pdmclk,
++ unsigned int reg)
++{
++ const u8 reset_mask = TWL6040_HPLLRST; /* Same for HPPLL and LPPLL */
++ int ret;
++
++ ret = twl6040_set_bits(pdmclk->twl6040, reg, reset_mask);
++ if (ret < 0)
++ return ret;
++
++ ret = twl6040_clear_bits(pdmclk->twl6040, reg, reset_mask);
++ if (ret < 0)
++ return ret;
++
++ return 0;
++}
++
++/*
++ * TWL6040A2 Phoenix Audio IC erratum #6: "PDM Clock Generation Issue At
++ * Cold Temperature". This affects cold boot and deeper idle states it
++ * seems. The workaround consists of resetting HPPLL and LPPLL.
++ */
++static int twl6040_pdmclk_quirk_reset_clocks(struct twl6040_pdmclk *pdmclk)
++{
++ int ret;
++
++ ret = twl6040_pdmclk_reset_one_clock(pdmclk, TWL6040_REG_HPPLLCTL);
++ if (ret)
++ return ret;
++
++ ret = twl6040_pdmclk_reset_one_clock(pdmclk, TWL6040_REG_LPPLLCTL);
++ if (ret)
++ return ret;
++
++ return 0;
++}
++
+ static int twl6040_pdmclk_prepare(struct clk_hw *hw)
+ {
+ struct twl6040_pdmclk *pdmclk = container_of(hw, struct twl6040_pdmclk,
+@@ -48,8 +85,20 @@ static int twl6040_pdmclk_prepare(struct clk_hw *hw)
+ int ret;
+
+ ret = twl6040_power(pdmclk->twl6040, 1);
+- if (!ret)
+- pdmclk->enabled = 1;
++ if (ret)
++ return ret;
++
++ ret = twl6040_pdmclk_quirk_reset_clocks(pdmclk);
++ if (ret)
++ goto out_err;
++
++ pdmclk->enabled = 1;
++
++ return 0;
++
++out_err:
++ dev_err(pdmclk->dev, "%s: error %i\n", __func__, ret);
++ twl6040_power(pdmclk->twl6040, 0);
+
+ return ret;
+ }
+diff --git a/drivers/clk/ingenic/cgu.c b/drivers/clk/ingenic/cgu.c
+index 5ef7d9ba2195..b40160eb3372 100644
+--- a/drivers/clk/ingenic/cgu.c
++++ b/drivers/clk/ingenic/cgu.c
+@@ -426,16 +426,16 @@ ingenic_clk_round_rate(struct clk_hw *hw, unsigned long req_rate,
+ struct ingenic_clk *ingenic_clk = to_ingenic_clk(hw);
+ struct ingenic_cgu *cgu = ingenic_clk->cgu;
+ const struct ingenic_cgu_clk_info *clk_info;
+- long rate = *parent_rate;
++ unsigned int div = 1;
+
+ clk_info = &cgu->clock_info[ingenic_clk->idx];
+
+ if (clk_info->type & CGU_CLK_DIV)
+- rate /= ingenic_clk_calc_div(clk_info, *parent_rate, req_rate);
++ div = ingenic_clk_calc_div(clk_info, *parent_rate, req_rate);
+ else if (clk_info->type & CGU_CLK_FIXDIV)
+- rate /= clk_info->fixdiv.div;
++ div = clk_info->fixdiv.div;
+
+- return rate;
++ return DIV_ROUND_UP(*parent_rate, div);
+ }
+
+ static int
+@@ -455,7 +455,7 @@ ingenic_clk_set_rate(struct clk_hw *hw, unsigned long req_rate,
+
+ if (clk_info->type & CGU_CLK_DIV) {
+ div = ingenic_clk_calc_div(clk_info, parent_rate, req_rate);
+- rate = parent_rate / div;
++ rate = DIV_ROUND_UP(parent_rate, div);
+
+ if (rate != req_rate)
+ return -EINVAL;
+diff --git a/drivers/clk/ingenic/cgu.h b/drivers/clk/ingenic/cgu.h
+index 502bcbb61b04..e12716d8ce3c 100644
+--- a/drivers/clk/ingenic/cgu.h
++++ b/drivers/clk/ingenic/cgu.h
+@@ -80,7 +80,7 @@ struct ingenic_cgu_mux_info {
+ * @reg: offset of the divider control register within the CGU
+ * @shift: number of bits to left shift the divide value by (ie. the index of
+ * the lowest bit of the divide value within its control register)
+- * @div: number of bits to divide the divider value by (i.e. if the
++ * @div: number to divide the divider value by (i.e. if the
+ * effective divider value is the value written to the register
+ * multiplied by some constant)
+ * @bits: the size of the divide value in bits
+diff --git a/drivers/clk/rockchip/clk-rk3328.c b/drivers/clk/rockchip/clk-rk3328.c
+index faa94adb2a37..65ab5c2f48b0 100644
+--- a/drivers/clk/rockchip/clk-rk3328.c
++++ b/drivers/clk/rockchip/clk-rk3328.c
+@@ -78,17 +78,17 @@ static struct rockchip_pll_rate_table rk3328_pll_rates[] = {
+
+ static struct rockchip_pll_rate_table rk3328_pll_frac_rates[] = {
+ /* _mhz, _refdiv, _fbdiv, _postdiv1, _postdiv2, _dsmpd, _frac */
+- RK3036_PLL_RATE(1016064000, 3, 127, 1, 1, 0, 134217),
++ RK3036_PLL_RATE(1016064000, 3, 127, 1, 1, 0, 134218),
+ /* vco = 1016064000 */
+- RK3036_PLL_RATE(983040000, 24, 983, 1, 1, 0, 671088),
++ RK3036_PLL_RATE(983040000, 24, 983, 1, 1, 0, 671089),
+ /* vco = 983040000 */
+- RK3036_PLL_RATE(491520000, 24, 983, 2, 1, 0, 671088),
++ RK3036_PLL_RATE(491520000, 24, 983, 2, 1, 0, 671089),
+ /* vco = 983040000 */
+- RK3036_PLL_RATE(61440000, 6, 215, 7, 2, 0, 671088),
++ RK3036_PLL_RATE(61440000, 6, 215, 7, 2, 0, 671089),
+ /* vco = 860156000 */
+- RK3036_PLL_RATE(56448000, 12, 451, 4, 4, 0, 9797894),
++ RK3036_PLL_RATE(56448000, 12, 451, 4, 4, 0, 9797895),
+ /* vco = 903168000 */
+- RK3036_PLL_RATE(40960000, 12, 409, 4, 5, 0, 10066329),
++ RK3036_PLL_RATE(40960000, 12, 409, 4, 5, 0, 10066330),
+ /* vco = 819200000 */
+ { /* sentinel */ },
+ };
+diff --git a/drivers/clk/samsung/clk-exynos5-subcmu.c b/drivers/clk/samsung/clk-exynos5-subcmu.c
+index 93306283d764..8ae44b5db4c2 100644
+--- a/drivers/clk/samsung/clk-exynos5-subcmu.c
++++ b/drivers/clk/samsung/clk-exynos5-subcmu.c
+@@ -136,15 +136,20 @@ static int __init exynos5_clk_register_subcmu(struct device *parent,
+ {
+ struct of_phandle_args genpdspec = { .np = pd_node };
+ struct platform_device *pdev;
++ int ret;
++
++ pdev = platform_device_alloc("exynos5-subcmu", PLATFORM_DEVID_AUTO);
++ if (!pdev)
++ return -ENOMEM;
+
+- pdev = platform_device_alloc(info->pd_name, -1);
+ pdev->dev.parent = parent;
+- pdev->driver_override = "exynos5-subcmu";
+ platform_set_drvdata(pdev, (void *)info);
+ of_genpd_add_device(&genpdspec, &pdev->dev);
+- platform_device_add(pdev);
++ ret = platform_device_add(pdev);
++ if (ret)
++ platform_device_put(pdev);
+
+- return 0;
++ return ret;
+ }
+
+ static int __init exynos5_clk_probe(struct platform_device *pdev)
+diff --git a/drivers/clk/ti/clkctrl.c b/drivers/clk/ti/clkctrl.c
+index 40630eb950fc..85d7f301149b 100644
+--- a/drivers/clk/ti/clkctrl.c
++++ b/drivers/clk/ti/clkctrl.c
+@@ -530,7 +530,7 @@ static void __init _ti_omap4_clkctrl_setup(struct device_node *node)
+ * Create default clkdm name, replace _cm from end of parent
+ * node name with _clkdm
+ */
+- provider->clkdm_name[strlen(provider->clkdm_name) - 5] = 0;
++ provider->clkdm_name[strlen(provider->clkdm_name) - 2] = 0;
+ } else {
+ provider->clkdm_name = kasprintf(GFP_KERNEL, "%pOFn", node);
+ if (!provider->clkdm_name) {
+diff --git a/drivers/clk/uniphier/clk-uniphier-cpugear.c b/drivers/clk/uniphier/clk-uniphier-cpugear.c
+index ec11f55594ad..5d2d42b7e182 100644
+--- a/drivers/clk/uniphier/clk-uniphier-cpugear.c
++++ b/drivers/clk/uniphier/clk-uniphier-cpugear.c
+@@ -47,7 +47,7 @@ static int uniphier_clk_cpugear_set_parent(struct clk_hw *hw, u8 index)
+ return ret;
+
+ ret = regmap_write_bits(gear->regmap,
+- gear->regbase + UNIPHIER_CLK_CPUGEAR_SET,
++ gear->regbase + UNIPHIER_CLK_CPUGEAR_UPD,
+ UNIPHIER_CLK_CPUGEAR_UPD_BIT,
+ UNIPHIER_CLK_CPUGEAR_UPD_BIT);
+ if (ret)
+diff --git a/drivers/clocksource/Kconfig b/drivers/clocksource/Kconfig
+index a9e26f6a81a1..8dfd3bc448d0 100644
+--- a/drivers/clocksource/Kconfig
++++ b/drivers/clocksource/Kconfig
+@@ -360,6 +360,16 @@ config ARM64_ERRATUM_858921
+ The workaround will be dynamically enabled when an affected
+ core is detected.
+
++config SUN50I_ERRATUM_UNKNOWN1
++ bool "Workaround for Allwinner A64 erratum UNKNOWN1"
++ default y
++ depends on ARM_ARCH_TIMER && ARM64 && ARCH_SUNXI
++ select ARM_ARCH_TIMER_OOL_WORKAROUND
++ help
++ This option enables a workaround for instability in the timer on
++ the Allwinner A64 SoC. The workaround will only be active if the
++ allwinner,erratum-unknown1 property is found in the timer node.
++
+ config ARM_GLOBAL_TIMER
+ bool "Support for the ARM global timer" if COMPILE_TEST
+ select TIMER_OF if OF
+diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c
+index 9a7d4dc00b6e..a8b20b65bd4b 100644
+--- a/drivers/clocksource/arm_arch_timer.c
++++ b/drivers/clocksource/arm_arch_timer.c
+@@ -326,6 +326,48 @@ static u64 notrace arm64_1188873_read_cntvct_el0(void)
+ }
+ #endif
+
++#ifdef CONFIG_SUN50I_ERRATUM_UNKNOWN1
++/*
++ * The low bits of the counter registers are indeterminate while bit 10 or
++ * greater is rolling over. Since the counter value can jump both backward
++ * (7ff -> 000 -> 800) and forward (7ff -> fff -> 800), ignore register values
++ * with all ones or all zeros in the low bits. Bound the loop by the maximum
++ * number of CPU cycles in 3 consecutive 24 MHz counter periods.
++ */
++#define __sun50i_a64_read_reg(reg) ({ \
++ u64 _val; \
++ int _retries = 150; \
++ \
++ do { \
++ _val = read_sysreg(reg); \
++ _retries--; \
++ } while (((_val + 1) & GENMASK(9, 0)) <= 1 && _retries); \
++ \
++ WARN_ON_ONCE(!_retries); \
++ _val; \
++})
++
++static u64 notrace sun50i_a64_read_cntpct_el0(void)
++{
++ return __sun50i_a64_read_reg(cntpct_el0);
++}
++
++static u64 notrace sun50i_a64_read_cntvct_el0(void)
++{
++ return __sun50i_a64_read_reg(cntvct_el0);
++}
++
++static u32 notrace sun50i_a64_read_cntp_tval_el0(void)
++{
++ return read_sysreg(cntp_cval_el0) - sun50i_a64_read_cntpct_el0();
++}
++
++static u32 notrace sun50i_a64_read_cntv_tval_el0(void)
++{
++ return read_sysreg(cntv_cval_el0) - sun50i_a64_read_cntvct_el0();
++}
++#endif
++
+ #ifdef CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND
+ DEFINE_PER_CPU(const struct arch_timer_erratum_workaround *, timer_unstable_counter_workaround);
+ EXPORT_SYMBOL_GPL(timer_unstable_counter_workaround);
+@@ -423,6 +465,19 @@ static const struct arch_timer_erratum_workaround ool_workarounds[] = {
+ .read_cntvct_el0 = arm64_1188873_read_cntvct_el0,
+ },
+ #endif
++#ifdef CONFIG_SUN50I_ERRATUM_UNKNOWN1
++ {
++ .match_type = ate_match_dt,
++ .id = "allwinner,erratum-unknown1",
++ .desc = "Allwinner erratum UNKNOWN1",
++ .read_cntp_tval_el0 = sun50i_a64_read_cntp_tval_el0,
++ .read_cntv_tval_el0 = sun50i_a64_read_cntv_tval_el0,
++ .read_cntpct_el0 = sun50i_a64_read_cntpct_el0,
++ .read_cntvct_el0 = sun50i_a64_read_cntvct_el0,
++ .set_next_event_phys = erratum_set_next_event_tval_phys,
++ .set_next_event_virt = erratum_set_next_event_tval_virt,
++ },
++#endif
+ };
+
+ typedef bool (*ate_match_fn_t)(const struct arch_timer_erratum_workaround *,
+diff --git a/drivers/clocksource/exynos_mct.c b/drivers/clocksource/exynos_mct.c
+index 7a244b681876..d55c30f6981d 100644
+--- a/drivers/clocksource/exynos_mct.c
++++ b/drivers/clocksource/exynos_mct.c
+@@ -388,6 +388,13 @@ static void exynos4_mct_tick_start(unsigned long cycles,
+ exynos4_mct_write(tmp, mevt->base + MCT_L_TCON_OFFSET);
+ }
+
++static void exynos4_mct_tick_clear(struct mct_clock_event_device *mevt)
++{
++ /* Clear the MCT tick interrupt */
++ if (readl_relaxed(reg_base + mevt->base + MCT_L_INT_CSTAT_OFFSET) & 1)
++ exynos4_mct_write(0x1, mevt->base + MCT_L_INT_CSTAT_OFFSET);
++}
++
+ static int exynos4_tick_set_next_event(unsigned long cycles,
+ struct clock_event_device *evt)
+ {
+@@ -404,6 +411,7 @@ static int set_state_shutdown(struct clock_event_device *evt)
+
+ mevt = container_of(evt, struct mct_clock_event_device, evt);
+ exynos4_mct_tick_stop(mevt);
++ exynos4_mct_tick_clear(mevt);
+ return 0;
+ }
+
+@@ -420,8 +428,11 @@ static int set_state_periodic(struct clock_event_device *evt)
+ return 0;
+ }
+
+-static void exynos4_mct_tick_clear(struct mct_clock_event_device *mevt)
++static irqreturn_t exynos4_mct_tick_isr(int irq, void *dev_id)
+ {
++ struct mct_clock_event_device *mevt = dev_id;
++ struct clock_event_device *evt = &mevt->evt;
++
+ /*
+ * This is for supporting oneshot mode.
+ * Mct would generate interrupt periodically
+@@ -430,16 +441,6 @@ static void exynos4_mct_tick_clear(struct mct_clock_event_device *mevt)
+ if (!clockevent_state_periodic(&mevt->evt))
+ exynos4_mct_tick_stop(mevt);
+
+- /* Clear the MCT tick interrupt */
+- if (readl_relaxed(reg_base + mevt->base + MCT_L_INT_CSTAT_OFFSET) & 1)
+- exynos4_mct_write(0x1, mevt->base + MCT_L_INT_CSTAT_OFFSET);
+-}
+-
+-static irqreturn_t exynos4_mct_tick_isr(int irq, void *dev_id)
+-{
+- struct mct_clock_event_device *mevt = dev_id;
+- struct clock_event_device *evt = &mevt->evt;
+-
+ exynos4_mct_tick_clear(mevt);
+
+ evt->event_handler(evt);
+diff --git a/drivers/clocksource/timer-riscv.c b/drivers/clocksource/timer-riscv.c
+index 431892200a08..ead71bfac689 100644
+--- a/drivers/clocksource/timer-riscv.c
++++ b/drivers/clocksource/timer-riscv.c
+@@ -58,7 +58,7 @@ static u64 riscv_sched_clock(void)
+ static DEFINE_PER_CPU(struct clocksource, riscv_clocksource) = {
+ .name = "riscv_clocksource",
+ .rating = 300,
+- .mask = CLOCKSOURCE_MASK(BITS_PER_LONG),
++ .mask = CLOCKSOURCE_MASK(64),
+ .flags = CLOCK_SOURCE_IS_CONTINUOUS,
+ .read = riscv_clocksource_rdtime,
+ };
+@@ -103,8 +103,7 @@ static int __init riscv_timer_init_dt(struct device_node *n)
+ cs = per_cpu_ptr(&riscv_clocksource, cpuid);
+ clocksource_register_hz(cs, riscv_timebase);
+
+- sched_clock_register(riscv_sched_clock,
+- BITS_PER_LONG, riscv_timebase);
++ sched_clock_register(riscv_sched_clock, 64, riscv_timebase);
+
+ error = cpuhp_setup_state(CPUHP_AP_RISCV_TIMER_STARTING,
+ "clockevents/riscv/timer:starting",
+diff --git a/drivers/connector/cn_proc.c b/drivers/connector/cn_proc.c
+index ed5e42461094..ad48fd52cb53 100644
+--- a/drivers/connector/cn_proc.c
++++ b/drivers/connector/cn_proc.c
+@@ -250,6 +250,7 @@ void proc_coredump_connector(struct task_struct *task)
+ {
+ struct cn_msg *msg;
+ struct proc_event *ev;
++ struct task_struct *parent;
+ __u8 buffer[CN_PROC_MSG_SIZE] __aligned(8);
+
+ if (atomic_read(&proc_event_num_listeners) < 1)
+@@ -262,8 +263,14 @@ void proc_coredump_connector(struct task_struct *task)
+ ev->what = PROC_EVENT_COREDUMP;
+ ev->event_data.coredump.process_pid = task->pid;
+ ev->event_data.coredump.process_tgid = task->tgid;
+- ev->event_data.coredump.parent_pid = task->real_parent->pid;
+- ev->event_data.coredump.parent_tgid = task->real_parent->tgid;
++
++ rcu_read_lock();
++ if (pid_alive(task)) {
++ parent = rcu_dereference(task->real_parent);
++ ev->event_data.coredump.parent_pid = parent->pid;
++ ev->event_data.coredump.parent_tgid = parent->tgid;
++ }
++ rcu_read_unlock();
+
+ memcpy(&msg->id, &cn_proc_event_id, sizeof(msg->id));
+ msg->ack = 0; /* not used */
+@@ -276,6 +283,7 @@ void proc_exit_connector(struct task_struct *task)
+ {
+ struct cn_msg *msg;
+ struct proc_event *ev;
++ struct task_struct *parent;
+ __u8 buffer[CN_PROC_MSG_SIZE] __aligned(8);
+
+ if (atomic_read(&proc_event_num_listeners) < 1)
+@@ -290,8 +298,14 @@ void proc_exit_connector(struct task_struct *task)
+ ev->event_data.exit.process_tgid = task->tgid;
+ ev->event_data.exit.exit_code = task->exit_code;
+ ev->event_data.exit.exit_signal = task->exit_signal;
+- ev->event_data.exit.parent_pid = task->real_parent->pid;
+- ev->event_data.exit.parent_tgid = task->real_parent->tgid;
++
++ rcu_read_lock();
++ if (pid_alive(task)) {
++ parent = rcu_dereference(task->real_parent);
++ ev->event_data.exit.parent_pid = parent->pid;
++ ev->event_data.exit.parent_tgid = parent->tgid;
++ }
++ rcu_read_unlock();
+
+ memcpy(&msg->id, &cn_proc_event_id, sizeof(msg->id));
+ msg->ack = 0; /* not used */
+diff --git a/drivers/cpufreq/acpi-cpufreq.c b/drivers/cpufreq/acpi-cpufreq.c
+index d62fd374d5c7..c72258a44ba4 100644
+--- a/drivers/cpufreq/acpi-cpufreq.c
++++ b/drivers/cpufreq/acpi-cpufreq.c
+@@ -916,8 +916,10 @@ static void __init acpi_cpufreq_boost_init(void)
+ {
+ int ret;
+
+- if (!(boot_cpu_has(X86_FEATURE_CPB) || boot_cpu_has(X86_FEATURE_IDA)))
++ if (!(boot_cpu_has(X86_FEATURE_CPB) || boot_cpu_has(X86_FEATURE_IDA))) {
++ pr_debug("Boost capabilities not present in the processor\n");
+ return;
++ }
+
+ acpi_cpufreq_driver.set_boost = set_boost;
+ acpi_cpufreq_driver.boost_enabled = boost_state(0);
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index e35a886e00bc..ef0e33e21b98 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -545,13 +545,13 @@ EXPORT_SYMBOL_GPL(cpufreq_policy_transition_delay_us);
+ * SYSFS INTERFACE *
+ *********************************************************************/
+ static ssize_t show_boost(struct kobject *kobj,
+- struct attribute *attr, char *buf)
++ struct kobj_attribute *attr, char *buf)
+ {
+ return sprintf(buf, "%d\n", cpufreq_driver->boost_enabled);
+ }
+
+-static ssize_t store_boost(struct kobject *kobj, struct attribute *attr,
+- const char *buf, size_t count)
++static ssize_t store_boost(struct kobject *kobj, struct kobj_attribute *attr,
++ const char *buf, size_t count)
+ {
+ int ret, enable;
+
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index dd66decf2087..a579ca4552df 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -383,7 +383,10 @@ static int intel_pstate_get_cppc_guranteed(int cpu)
+ if (ret)
+ return ret;
+
+- return cppc_perf.guaranteed_perf;
++ if (cppc_perf.guaranteed_perf)
++ return cppc_perf.guaranteed_perf;
++
++ return cppc_perf.nominal_perf;
+ }
+
+ #else /* CONFIG_ACPI_CPPC_LIB */
+@@ -895,7 +898,7 @@ static void intel_pstate_update_policies(void)
+ /************************** sysfs begin ************************/
+ #define show_one(file_name, object) \
+ static ssize_t show_##file_name \
+- (struct kobject *kobj, struct attribute *attr, char *buf) \
++ (struct kobject *kobj, struct kobj_attribute *attr, char *buf) \
+ { \
+ return sprintf(buf, "%u\n", global.object); \
+ }
+@@ -904,7 +907,7 @@ static ssize_t intel_pstate_show_status(char *buf);
+ static int intel_pstate_update_status(const char *buf, size_t size);
+
+ static ssize_t show_status(struct kobject *kobj,
+- struct attribute *attr, char *buf)
++ struct kobj_attribute *attr, char *buf)
+ {
+ ssize_t ret;
+
+@@ -915,7 +918,7 @@ static ssize_t show_status(struct kobject *kobj,
+ return ret;
+ }
+
+-static ssize_t store_status(struct kobject *a, struct attribute *b,
++static ssize_t store_status(struct kobject *a, struct kobj_attribute *b,
+ const char *buf, size_t count)
+ {
+ char *p = memchr(buf, '\n', count);
+@@ -929,7 +932,7 @@ static ssize_t store_status(struct kobject *a, struct attribute *b,
+ }
+
+ static ssize_t show_turbo_pct(struct kobject *kobj,
+- struct attribute *attr, char *buf)
++ struct kobj_attribute *attr, char *buf)
+ {
+ struct cpudata *cpu;
+ int total, no_turbo, turbo_pct;
+@@ -955,7 +958,7 @@ static ssize_t show_turbo_pct(struct kobject *kobj,
+ }
+
+ static ssize_t show_num_pstates(struct kobject *kobj,
+- struct attribute *attr, char *buf)
++ struct kobj_attribute *attr, char *buf)
+ {
+ struct cpudata *cpu;
+ int total;
+@@ -976,7 +979,7 @@ static ssize_t show_num_pstates(struct kobject *kobj,
+ }
+
+ static ssize_t show_no_turbo(struct kobject *kobj,
+- struct attribute *attr, char *buf)
++ struct kobj_attribute *attr, char *buf)
+ {
+ ssize_t ret;
+
+@@ -998,7 +1001,7 @@ static ssize_t show_no_turbo(struct kobject *kobj,
+ return ret;
+ }
+
+-static ssize_t store_no_turbo(struct kobject *a, struct attribute *b,
++static ssize_t store_no_turbo(struct kobject *a, struct kobj_attribute *b,
+ const char *buf, size_t count)
+ {
+ unsigned int input;
+@@ -1045,7 +1048,7 @@ static ssize_t store_no_turbo(struct kobject *a, struct attribute *b,
+ return count;
+ }
+
+-static ssize_t store_max_perf_pct(struct kobject *a, struct attribute *b,
++static ssize_t store_max_perf_pct(struct kobject *a, struct kobj_attribute *b,
+ const char *buf, size_t count)
+ {
+ unsigned int input;
+@@ -1075,7 +1078,7 @@ static ssize_t store_max_perf_pct(struct kobject *a, struct attribute *b,
+ return count;
+ }
+
+-static ssize_t store_min_perf_pct(struct kobject *a, struct attribute *b,
++static ssize_t store_min_perf_pct(struct kobject *a, struct kobj_attribute *b,
+ const char *buf, size_t count)
+ {
+ unsigned int input;
+@@ -1107,12 +1110,13 @@ static ssize_t store_min_perf_pct(struct kobject *a, struct attribute *b,
+ }
+
+ static ssize_t show_hwp_dynamic_boost(struct kobject *kobj,
+- struct attribute *attr, char *buf)
++ struct kobj_attribute *attr, char *buf)
+ {
+ return sprintf(buf, "%u\n", hwp_boost);
+ }
+
+-static ssize_t store_hwp_dynamic_boost(struct kobject *a, struct attribute *b,
++static ssize_t store_hwp_dynamic_boost(struct kobject *a,
++ struct kobj_attribute *b,
+ const char *buf, size_t count)
+ {
+ unsigned int input;
+diff --git a/drivers/cpufreq/pxa2xx-cpufreq.c b/drivers/cpufreq/pxa2xx-cpufreq.c
+index 46254e583982..74e0e0c20c46 100644
+--- a/drivers/cpufreq/pxa2xx-cpufreq.c
++++ b/drivers/cpufreq/pxa2xx-cpufreq.c
+@@ -143,7 +143,7 @@ static int pxa_cpufreq_change_voltage(const struct pxa_freqs *pxa_freq)
+ return ret;
+ }
+
+-static void __init pxa_cpufreq_init_voltages(void)
++static void pxa_cpufreq_init_voltages(void)
+ {
+ vcc_core = regulator_get(NULL, "vcc_core");
+ if (IS_ERR(vcc_core)) {
+@@ -159,7 +159,7 @@ static int pxa_cpufreq_change_voltage(const struct pxa_freqs *pxa_freq)
+ return 0;
+ }
+
+-static void __init pxa_cpufreq_init_voltages(void) { }
++static void pxa_cpufreq_init_voltages(void) { }
+ #endif
+
+ static void find_freq_tables(struct cpufreq_frequency_table **freq_table,
+diff --git a/drivers/cpufreq/qcom-cpufreq-kryo.c b/drivers/cpufreq/qcom-cpufreq-kryo.c
+index 2a3675c24032..a472b814058f 100644
+--- a/drivers/cpufreq/qcom-cpufreq-kryo.c
++++ b/drivers/cpufreq/qcom-cpufreq-kryo.c
+@@ -75,7 +75,7 @@ static enum _msm8996_version qcom_cpufreq_kryo_get_msm_id(void)
+
+ static int qcom_cpufreq_kryo_probe(struct platform_device *pdev)
+ {
+- struct opp_table *opp_tables[NR_CPUS] = {0};
++ struct opp_table **opp_tables;
+ enum _msm8996_version msm8996_version;
+ struct nvmem_cell *speedbin_nvmem;
+ struct device_node *np;
+@@ -133,6 +133,10 @@ static int qcom_cpufreq_kryo_probe(struct platform_device *pdev)
+ }
+ kfree(speedbin);
+
++ opp_tables = kcalloc(num_possible_cpus(), sizeof(*opp_tables), GFP_KERNEL);
++ if (!opp_tables)
++ return -ENOMEM;
++
+ for_each_possible_cpu(cpu) {
+ cpu_dev = get_cpu_device(cpu);
+ if (NULL == cpu_dev) {
+@@ -151,8 +155,10 @@ static int qcom_cpufreq_kryo_probe(struct platform_device *pdev)
+
+ cpufreq_dt_pdev = platform_device_register_simple("cpufreq-dt", -1,
+ NULL, 0);
+- if (!IS_ERR(cpufreq_dt_pdev))
++ if (!IS_ERR(cpufreq_dt_pdev)) {
++ platform_set_drvdata(pdev, opp_tables);
+ return 0;
++ }
+
+ ret = PTR_ERR(cpufreq_dt_pdev);
+ dev_err(cpu_dev, "Failed to register platform device\n");
+@@ -163,13 +169,23 @@ free_opp:
+ break;
+ dev_pm_opp_put_supported_hw(opp_tables[cpu]);
+ }
++ kfree(opp_tables);
+
+ return ret;
+ }
+
+ static int qcom_cpufreq_kryo_remove(struct platform_device *pdev)
+ {
++ struct opp_table **opp_tables = platform_get_drvdata(pdev);
++ unsigned int cpu;
++
+ platform_device_unregister(cpufreq_dt_pdev);
++
++ for_each_possible_cpu(cpu)
++ dev_pm_opp_put_supported_hw(opp_tables[cpu]);
++
++ kfree(opp_tables);
++
+ return 0;
+ }
+
+diff --git a/drivers/cpufreq/scpi-cpufreq.c b/drivers/cpufreq/scpi-cpufreq.c
+index 99449738faa4..632ccf82c5d3 100644
+--- a/drivers/cpufreq/scpi-cpufreq.c
++++ b/drivers/cpufreq/scpi-cpufreq.c
+@@ -189,8 +189,8 @@ static int scpi_cpufreq_exit(struct cpufreq_policy *policy)
+ cpufreq_cooling_unregister(priv->cdev);
+ clk_put(priv->clk);
+ dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table);
+- kfree(priv);
+ dev_pm_opp_remove_all_dynamic(priv->cpu_dev);
++ kfree(priv);
+
+ return 0;
+ }
+diff --git a/drivers/cpufreq/tegra124-cpufreq.c b/drivers/cpufreq/tegra124-cpufreq.c
+index 43530254201a..4bb154f6c54c 100644
+--- a/drivers/cpufreq/tegra124-cpufreq.c
++++ b/drivers/cpufreq/tegra124-cpufreq.c
+@@ -134,6 +134,8 @@ static int tegra124_cpufreq_probe(struct platform_device *pdev)
+
+ platform_set_drvdata(pdev, priv);
+
++ of_node_put(np);
++
+ return 0;
+
+ out_switch_to_pllx:
+diff --git a/drivers/cpuidle/governor.c b/drivers/cpuidle/governor.c
+index bb93e5cf6a4a..9fddf828a76f 100644
+--- a/drivers/cpuidle/governor.c
++++ b/drivers/cpuidle/governor.c
+@@ -89,6 +89,7 @@ int cpuidle_register_governor(struct cpuidle_governor *gov)
+ mutex_lock(&cpuidle_lock);
+ if (__cpuidle_find_governor(gov->name) == NULL) {
+ ret = 0;
++ list_add_tail(&gov->governor_list, &cpuidle_governors);
+ if (!cpuidle_curr_governor ||
+ !strncasecmp(param_governor, gov->name, CPUIDLE_NAME_LEN) ||
+ (cpuidle_curr_governor->rating < gov->rating &&
+diff --git a/drivers/crypto/amcc/crypto4xx_trng.c b/drivers/crypto/amcc/crypto4xx_trng.c
+index 5e63742b0d22..53ab1f140a26 100644
+--- a/drivers/crypto/amcc/crypto4xx_trng.c
++++ b/drivers/crypto/amcc/crypto4xx_trng.c
+@@ -80,8 +80,10 @@ void ppc4xx_trng_probe(struct crypto4xx_core_device *core_dev)
+
+ /* Find the TRNG device node and map it */
+ trng = of_find_matching_node(NULL, ppc4xx_trng_match);
+- if (!trng || !of_device_is_available(trng))
++ if (!trng || !of_device_is_available(trng)) {
++ of_node_put(trng);
+ return;
++ }
+
+ dev->trng_base = of_iomap(trng, 0);
+ of_node_put(trng);
+diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
+index 80ae69f906fb..1c4f3a046dc5 100644
+--- a/drivers/crypto/caam/caamalg.c
++++ b/drivers/crypto/caam/caamalg.c
+@@ -1040,6 +1040,7 @@ static void init_aead_job(struct aead_request *req,
+ if (unlikely(req->src != req->dst)) {
+ if (edesc->dst_nents == 1) {
+ dst_dma = sg_dma_address(req->dst);
++ out_options = 0;
+ } else {
+ dst_dma = edesc->sec4_sg_dma +
+ sec4_sg_index *
+diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
+index bb1a2cdf1951..0f11811a3585 100644
+--- a/drivers/crypto/caam/caamhash.c
++++ b/drivers/crypto/caam/caamhash.c
+@@ -113,6 +113,7 @@ struct caam_hash_ctx {
+ struct caam_hash_state {
+ dma_addr_t buf_dma;
+ dma_addr_t ctx_dma;
++ int ctx_dma_len;
+ u8 buf_0[CAAM_MAX_HASH_BLOCK_SIZE] ____cacheline_aligned;
+ int buflen_0;
+ u8 buf_1[CAAM_MAX_HASH_BLOCK_SIZE] ____cacheline_aligned;
+@@ -165,6 +166,7 @@ static inline int map_seq_out_ptr_ctx(u32 *desc, struct device *jrdev,
+ struct caam_hash_state *state,
+ int ctx_len)
+ {
++ state->ctx_dma_len = ctx_len;
+ state->ctx_dma = dma_map_single(jrdev, state->caam_ctx,
+ ctx_len, DMA_FROM_DEVICE);
+ if (dma_mapping_error(jrdev, state->ctx_dma)) {
+@@ -178,18 +180,6 @@ static inline int map_seq_out_ptr_ctx(u32 *desc, struct device *jrdev,
+ return 0;
+ }
+
+-/* Map req->result, and append seq_out_ptr command that points to it */
+-static inline dma_addr_t map_seq_out_ptr_result(u32 *desc, struct device *jrdev,
+- u8 *result, int digestsize)
+-{
+- dma_addr_t dst_dma;
+-
+- dst_dma = dma_map_single(jrdev, result, digestsize, DMA_FROM_DEVICE);
+- append_seq_out_ptr(desc, dst_dma, digestsize, 0);
+-
+- return dst_dma;
+-}
+-
+ /* Map current buffer in state (if length > 0) and put it in link table */
+ static inline int buf_map_to_sec4_sg(struct device *jrdev,
+ struct sec4_sg_entry *sec4_sg,
+@@ -218,6 +208,7 @@ static inline int ctx_map_to_sec4_sg(struct device *jrdev,
+ struct caam_hash_state *state, int ctx_len,
+ struct sec4_sg_entry *sec4_sg, u32 flag)
+ {
++ state->ctx_dma_len = ctx_len;
+ state->ctx_dma = dma_map_single(jrdev, state->caam_ctx, ctx_len, flag);
+ if (dma_mapping_error(jrdev, state->ctx_dma)) {
+ dev_err(jrdev, "unable to map ctx\n");
+@@ -426,7 +417,6 @@ static int ahash_setkey(struct crypto_ahash *ahash,
+
+ /*
+ * ahash_edesc - s/w-extended ahash descriptor
+- * @dst_dma: physical mapped address of req->result
+ * @sec4_sg_dma: physical mapped address of h/w link table
+ * @src_nents: number of segments in input scatterlist
+ * @sec4_sg_bytes: length of dma mapped sec4_sg space
+@@ -434,7 +424,6 @@ static int ahash_setkey(struct crypto_ahash *ahash,
+ * @sec4_sg: h/w link table
+ */
+ struct ahash_edesc {
+- dma_addr_t dst_dma;
+ dma_addr_t sec4_sg_dma;
+ int src_nents;
+ int sec4_sg_bytes;
+@@ -450,8 +439,6 @@ static inline void ahash_unmap(struct device *dev,
+
+ if (edesc->src_nents)
+ dma_unmap_sg(dev, req->src, edesc->src_nents, DMA_TO_DEVICE);
+- if (edesc->dst_dma)
+- dma_unmap_single(dev, edesc->dst_dma, dst_len, DMA_FROM_DEVICE);
+
+ if (edesc->sec4_sg_bytes)
+ dma_unmap_single(dev, edesc->sec4_sg_dma,
+@@ -468,12 +455,10 @@ static inline void ahash_unmap_ctx(struct device *dev,
+ struct ahash_edesc *edesc,
+ struct ahash_request *req, int dst_len, u32 flag)
+ {
+- struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
+- struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+ struct caam_hash_state *state = ahash_request_ctx(req);
+
+ if (state->ctx_dma) {
+- dma_unmap_single(dev, state->ctx_dma, ctx->ctx_len, flag);
++ dma_unmap_single(dev, state->ctx_dma, state->ctx_dma_len, flag);
+ state->ctx_dma = 0;
+ }
+ ahash_unmap(dev, edesc, req, dst_len);
+@@ -486,9 +471,9 @@ static void ahash_done(struct device *jrdev, u32 *desc, u32 err,
+ struct ahash_edesc *edesc;
+ struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
+ int digestsize = crypto_ahash_digestsize(ahash);
++ struct caam_hash_state *state = ahash_request_ctx(req);
+ #ifdef DEBUG
+ struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+- struct caam_hash_state *state = ahash_request_ctx(req);
+
+ dev_err(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
+ #endif
+@@ -497,17 +482,14 @@ static void ahash_done(struct device *jrdev, u32 *desc, u32 err,
+ if (err)
+ caam_jr_strstatus(jrdev, err);
+
+- ahash_unmap(jrdev, edesc, req, digestsize);
++ ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
++ memcpy(req->result, state->caam_ctx, digestsize);
+ kfree(edesc);
+
+ #ifdef DEBUG
+ print_hex_dump(KERN_ERR, "ctx@"__stringify(__LINE__)": ",
+ DUMP_PREFIX_ADDRESS, 16, 4, state->caam_ctx,
+ ctx->ctx_len, 1);
+- if (req->result)
+- print_hex_dump(KERN_ERR, "result@"__stringify(__LINE__)": ",
+- DUMP_PREFIX_ADDRESS, 16, 4, req->result,
+- digestsize, 1);
+ #endif
+
+ req->base.complete(&req->base, err);
+@@ -555,9 +537,9 @@ static void ahash_done_ctx_src(struct device *jrdev, u32 *desc, u32 err,
+ struct ahash_edesc *edesc;
+ struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
+ int digestsize = crypto_ahash_digestsize(ahash);
++ struct caam_hash_state *state = ahash_request_ctx(req);
+ #ifdef DEBUG
+ struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+- struct caam_hash_state *state = ahash_request_ctx(req);
+
+ dev_err(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
+ #endif
+@@ -566,17 +548,14 @@ static void ahash_done_ctx_src(struct device *jrdev, u32 *desc, u32 err,
+ if (err)
+ caam_jr_strstatus(jrdev, err);
+
+- ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_TO_DEVICE);
++ ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL);
++ memcpy(req->result, state->caam_ctx, digestsize);
+ kfree(edesc);
+
+ #ifdef DEBUG
+ print_hex_dump(KERN_ERR, "ctx@"__stringify(__LINE__)": ",
+ DUMP_PREFIX_ADDRESS, 16, 4, state->caam_ctx,
+ ctx->ctx_len, 1);
+- if (req->result)
+- print_hex_dump(KERN_ERR, "result@"__stringify(__LINE__)": ",
+- DUMP_PREFIX_ADDRESS, 16, 4, req->result,
+- digestsize, 1);
+ #endif
+
+ req->base.complete(&req->base, err);
+@@ -837,7 +816,7 @@ static int ahash_final_ctx(struct ahash_request *req)
+ edesc->sec4_sg_bytes = sec4_sg_bytes;
+
+ ret = ctx_map_to_sec4_sg(jrdev, state, ctx->ctx_len,
+- edesc->sec4_sg, DMA_TO_DEVICE);
++ edesc->sec4_sg, DMA_BIDIRECTIONAL);
+ if (ret)
+ goto unmap_ctx;
+
+@@ -857,14 +836,7 @@ static int ahash_final_ctx(struct ahash_request *req)
+
+ append_seq_in_ptr(desc, edesc->sec4_sg_dma, ctx->ctx_len + buflen,
+ LDST_SGF);
+-
+- edesc->dst_dma = map_seq_out_ptr_result(desc, jrdev, req->result,
+- digestsize);
+- if (dma_mapping_error(jrdev, edesc->dst_dma)) {
+- dev_err(jrdev, "unable to map dst\n");
+- ret = -ENOMEM;
+- goto unmap_ctx;
+- }
++ append_seq_out_ptr(desc, state->ctx_dma, digestsize, 0);
+
+ #ifdef DEBUG
+ print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
+@@ -877,7 +849,7 @@ static int ahash_final_ctx(struct ahash_request *req)
+
+ return -EINPROGRESS;
+ unmap_ctx:
+- ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
++ ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL);
+ kfree(edesc);
+ return ret;
+ }
+@@ -931,7 +903,7 @@ static int ahash_finup_ctx(struct ahash_request *req)
+ edesc->src_nents = src_nents;
+
+ ret = ctx_map_to_sec4_sg(jrdev, state, ctx->ctx_len,
+- edesc->sec4_sg, DMA_TO_DEVICE);
++ edesc->sec4_sg, DMA_BIDIRECTIONAL);
+ if (ret)
+ goto unmap_ctx;
+
+@@ -945,13 +917,7 @@ static int ahash_finup_ctx(struct ahash_request *req)
+ if (ret)
+ goto unmap_ctx;
+
+- edesc->dst_dma = map_seq_out_ptr_result(desc, jrdev, req->result,
+- digestsize);
+- if (dma_mapping_error(jrdev, edesc->dst_dma)) {
+- dev_err(jrdev, "unable to map dst\n");
+- ret = -ENOMEM;
+- goto unmap_ctx;
+- }
++ append_seq_out_ptr(desc, state->ctx_dma, digestsize, 0);
+
+ #ifdef DEBUG
+ print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
+@@ -964,7 +930,7 @@ static int ahash_finup_ctx(struct ahash_request *req)
+
+ return -EINPROGRESS;
+ unmap_ctx:
+- ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
++ ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL);
+ kfree(edesc);
+ return ret;
+ }
+@@ -1023,10 +989,8 @@ static int ahash_digest(struct ahash_request *req)
+
+ desc = edesc->hw_desc;
+
+- edesc->dst_dma = map_seq_out_ptr_result(desc, jrdev, req->result,
+- digestsize);
+- if (dma_mapping_error(jrdev, edesc->dst_dma)) {
+- dev_err(jrdev, "unable to map dst\n");
++ ret = map_seq_out_ptr_ctx(desc, jrdev, state, digestsize);
++ if (ret) {
+ ahash_unmap(jrdev, edesc, req, digestsize);
+ kfree(edesc);
+ return -ENOMEM;
+@@ -1041,7 +1005,7 @@ static int ahash_digest(struct ahash_request *req)
+ if (!ret) {
+ ret = -EINPROGRESS;
+ } else {
+- ahash_unmap(jrdev, edesc, req, digestsize);
++ ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
+ kfree(edesc);
+ }
+
+@@ -1083,12 +1047,9 @@ static int ahash_final_no_ctx(struct ahash_request *req)
+ append_seq_in_ptr(desc, state->buf_dma, buflen, 0);
+ }
+
+- edesc->dst_dma = map_seq_out_ptr_result(desc, jrdev, req->result,
+- digestsize);
+- if (dma_mapping_error(jrdev, edesc->dst_dma)) {
+- dev_err(jrdev, "unable to map dst\n");
++ ret = map_seq_out_ptr_ctx(desc, jrdev, state, digestsize);
++ if (ret)
+ goto unmap;
+- }
+
+ #ifdef DEBUG
+ print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
+@@ -1099,7 +1060,7 @@ static int ahash_final_no_ctx(struct ahash_request *req)
+ if (!ret) {
+ ret = -EINPROGRESS;
+ } else {
+- ahash_unmap(jrdev, edesc, req, digestsize);
++ ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
+ kfree(edesc);
+ }
+
+@@ -1298,12 +1259,9 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
+ goto unmap;
+ }
+
+- edesc->dst_dma = map_seq_out_ptr_result(desc, jrdev, req->result,
+- digestsize);
+- if (dma_mapping_error(jrdev, edesc->dst_dma)) {
+- dev_err(jrdev, "unable to map dst\n");
++ ret = map_seq_out_ptr_ctx(desc, jrdev, state, digestsize);
++ if (ret)
+ goto unmap;
+- }
+
+ #ifdef DEBUG
+ print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
+@@ -1314,7 +1272,7 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
+ if (!ret) {
+ ret = -EINPROGRESS;
+ } else {
+- ahash_unmap(jrdev, edesc, req, digestsize);
++ ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
+ kfree(edesc);
+ }
+
+@@ -1446,6 +1404,7 @@ static int ahash_init(struct ahash_request *req)
+ state->final = ahash_final_no_ctx;
+
+ state->ctx_dma = 0;
++ state->ctx_dma_len = 0;
+ state->current_buf = 0;
+ state->buf_dma = 0;
+ state->buflen_0 = 0;
+diff --git a/drivers/crypto/cavium/zip/zip_main.c b/drivers/crypto/cavium/zip/zip_main.c
+index be055b9547f6..6183f9128a8a 100644
+--- a/drivers/crypto/cavium/zip/zip_main.c
++++ b/drivers/crypto/cavium/zip/zip_main.c
+@@ -351,6 +351,7 @@ static struct pci_driver zip_driver = {
+
+ static struct crypto_alg zip_comp_deflate = {
+ .cra_name = "deflate",
++ .cra_driver_name = "deflate-cavium",
+ .cra_flags = CRYPTO_ALG_TYPE_COMPRESS,
+ .cra_ctxsize = sizeof(struct zip_kernel_ctx),
+ .cra_priority = 300,
+@@ -365,6 +366,7 @@ static struct crypto_alg zip_comp_deflate = {
+
+ static struct crypto_alg zip_comp_lzs = {
+ .cra_name = "lzs",
++ .cra_driver_name = "lzs-cavium",
+ .cra_flags = CRYPTO_ALG_TYPE_COMPRESS,
+ .cra_ctxsize = sizeof(struct zip_kernel_ctx),
+ .cra_priority = 300,
+@@ -384,7 +386,7 @@ static struct scomp_alg zip_scomp_deflate = {
+ .decompress = zip_scomp_decompress,
+ .base = {
+ .cra_name = "deflate",
+- .cra_driver_name = "deflate-scomp",
++ .cra_driver_name = "deflate-scomp-cavium",
+ .cra_module = THIS_MODULE,
+ .cra_priority = 300,
+ }
+@@ -397,7 +399,7 @@ static struct scomp_alg zip_scomp_lzs = {
+ .decompress = zip_scomp_decompress,
+ .base = {
+ .cra_name = "lzs",
+- .cra_driver_name = "lzs-scomp",
++ .cra_driver_name = "lzs-scomp-cavium",
+ .cra_module = THIS_MODULE,
+ .cra_priority = 300,
+ }
+diff --git a/drivers/crypto/ccree/cc_buffer_mgr.c b/drivers/crypto/ccree/cc_buffer_mgr.c
+index dd948e1df9e5..3bcb6bce666e 100644
+--- a/drivers/crypto/ccree/cc_buffer_mgr.c
++++ b/drivers/crypto/ccree/cc_buffer_mgr.c
+@@ -614,10 +614,10 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req)
+ hw_iv_size, DMA_BIDIRECTIONAL);
+ }
+
+- /*In case a pool was set, a table was
+- *allocated and should be released
+- */
+- if (areq_ctx->mlli_params.curr_pool) {
++ /* Release pool */
++ if ((areq_ctx->assoc_buff_type == CC_DMA_BUF_MLLI ||
++ areq_ctx->data_buff_type == CC_DMA_BUF_MLLI) &&
++ (areq_ctx->mlli_params.mlli_virt_addr)) {
+ dev_dbg(dev, "free MLLI buffer: dma=%pad virt=%pK\n",
+ &areq_ctx->mlli_params.mlli_dma_addr,
+ areq_ctx->mlli_params.mlli_virt_addr);
+diff --git a/drivers/crypto/ccree/cc_cipher.c b/drivers/crypto/ccree/cc_cipher.c
+index cc92b031fad1..4ec93079daaf 100644
+--- a/drivers/crypto/ccree/cc_cipher.c
++++ b/drivers/crypto/ccree/cc_cipher.c
+@@ -80,6 +80,7 @@ static int validate_keys_sizes(struct cc_cipher_ctx *ctx_p, u32 size)
+ default:
+ break;
+ }
++ break;
+ case S_DIN_to_DES:
+ if (size == DES3_EDE_KEY_SIZE || size == DES_KEY_SIZE)
+ return 0;
+@@ -652,6 +653,8 @@ static void cc_cipher_complete(struct device *dev, void *cc_req, int err)
+ unsigned int ivsize = crypto_skcipher_ivsize(sk_tfm);
+ unsigned int len;
+
++ cc_unmap_cipher_request(dev, req_ctx, ivsize, src, dst);
++
+ switch (ctx_p->cipher_mode) {
+ case DRV_CIPHER_CBC:
+ /*
+@@ -681,7 +684,6 @@ static void cc_cipher_complete(struct device *dev, void *cc_req, int err)
+ break;
+ }
+
+- cc_unmap_cipher_request(dev, req_ctx, ivsize, src, dst);
+ kzfree(req_ctx->iv);
+
+ skcipher_request_complete(req, err);
+@@ -799,7 +801,8 @@ static int cc_cipher_decrypt(struct skcipher_request *req)
+
+ memset(req_ctx, 0, sizeof(*req_ctx));
+
+- if (ctx_p->cipher_mode == DRV_CIPHER_CBC) {
++ if ((ctx_p->cipher_mode == DRV_CIPHER_CBC) &&
++ (req->cryptlen >= ivsize)) {
+
+ /* Allocate and save the last IV sized bytes of the source,
+ * which will be lost in case of in-place decryption.
+diff --git a/drivers/crypto/rockchip/rk3288_crypto.c b/drivers/crypto/rockchip/rk3288_crypto.c
+index c9d622abd90c..0ce4a65b95f5 100644
+--- a/drivers/crypto/rockchip/rk3288_crypto.c
++++ b/drivers/crypto/rockchip/rk3288_crypto.c
+@@ -119,7 +119,7 @@ static int rk_load_data(struct rk_crypto_info *dev,
+ count = (dev->left_bytes > PAGE_SIZE) ?
+ PAGE_SIZE : dev->left_bytes;
+
+- if (!sg_pcopy_to_buffer(dev->first, dev->nents,
++ if (!sg_pcopy_to_buffer(dev->first, dev->src_nents,
+ dev->addr_vir, count,
+ dev->total - dev->left_bytes)) {
+ dev_err(dev->dev, "[%s:%d] pcopy err\n",
+diff --git a/drivers/crypto/rockchip/rk3288_crypto.h b/drivers/crypto/rockchip/rk3288_crypto.h
+index d5fb4013fb42..54ee5b3ed9db 100644
+--- a/drivers/crypto/rockchip/rk3288_crypto.h
++++ b/drivers/crypto/rockchip/rk3288_crypto.h
+@@ -207,7 +207,8 @@ struct rk_crypto_info {
+ void *addr_vir;
+ int aligned;
+ int align_size;
+- size_t nents;
++ size_t src_nents;
++ size_t dst_nents;
+ unsigned int total;
+ unsigned int count;
+ dma_addr_t addr_in;
+@@ -244,6 +245,7 @@ struct rk_cipher_ctx {
+ struct rk_crypto_info *dev;
+ unsigned int keylen;
+ u32 mode;
++ u8 iv[AES_BLOCK_SIZE];
+ };
+
+ enum alg_type {
+diff --git a/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c b/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c
+index 639c15c5364b..23305f22072f 100644
+--- a/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c
++++ b/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c
+@@ -242,6 +242,17 @@ static void crypto_dma_start(struct rk_crypto_info *dev)
+ static int rk_set_data_start(struct rk_crypto_info *dev)
+ {
+ int err;
++ struct ablkcipher_request *req =
++ ablkcipher_request_cast(dev->async_req);
++ struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req);
++ struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm);
++ u32 ivsize = crypto_ablkcipher_ivsize(tfm);
++ u8 *src_last_blk = page_address(sg_page(dev->sg_src)) +
++ dev->sg_src->offset + dev->sg_src->length - ivsize;
++
++ /* store the IV that needs to be updated in chain mode */
++ if (ctx->mode & RK_CRYPTO_DEC)
++ memcpy(ctx->iv, src_last_blk, ivsize);
+
+ err = dev->load_data(dev, dev->sg_src, dev->sg_dst);
+ if (!err)
+@@ -260,8 +271,9 @@ static int rk_ablk_start(struct rk_crypto_info *dev)
+ dev->total = req->nbytes;
+ dev->sg_src = req->src;
+ dev->first = req->src;
+- dev->nents = sg_nents(req->src);
++ dev->src_nents = sg_nents(req->src);
+ dev->sg_dst = req->dst;
++ dev->dst_nents = sg_nents(req->dst);
+ dev->aligned = 1;
+
+ spin_lock_irqsave(&dev->lock, flags);
+@@ -285,6 +297,28 @@ static void rk_iv_copyback(struct rk_crypto_info *dev)
+ memcpy_fromio(req->info, dev->reg + RK_CRYPTO_AES_IV_0, ivsize);
+ }
+
++static void rk_update_iv(struct rk_crypto_info *dev)
++{
++ struct ablkcipher_request *req =
++ ablkcipher_request_cast(dev->async_req);
++ struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req);
++ struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm);
++ u32 ivsize = crypto_ablkcipher_ivsize(tfm);
++ u8 *new_iv = NULL;
++
++ if (ctx->mode & RK_CRYPTO_DEC) {
++ new_iv = ctx->iv;
++ } else {
++ new_iv = page_address(sg_page(dev->sg_dst)) +
++ dev->sg_dst->offset + dev->sg_dst->length - ivsize;
++ }
++
++ if (ivsize == DES_BLOCK_SIZE)
++ memcpy_toio(dev->reg + RK_CRYPTO_TDES_IV_0, new_iv, ivsize);
++ else if (ivsize == AES_BLOCK_SIZE)
++ memcpy_toio(dev->reg + RK_CRYPTO_AES_IV_0, new_iv, ivsize);
++}
++
+ /* return:
+ * true some err was occurred
+ * fault no err, continue
+@@ -297,7 +331,7 @@ static int rk_ablk_rx(struct rk_crypto_info *dev)
+
+ dev->unload_data(dev);
+ if (!dev->aligned) {
+- if (!sg_pcopy_from_buffer(req->dst, dev->nents,
++ if (!sg_pcopy_from_buffer(req->dst, dev->dst_nents,
+ dev->addr_vir, dev->count,
+ dev->total - dev->left_bytes -
+ dev->count)) {
+@@ -306,6 +340,7 @@ static int rk_ablk_rx(struct rk_crypto_info *dev)
+ }
+ }
+ if (dev->left_bytes) {
++ rk_update_iv(dev);
+ if (dev->aligned) {
+ if (sg_is_last(dev->sg_src)) {
+ dev_err(dev->dev, "[%s:%d] Lack of data\n",
+diff --git a/drivers/crypto/rockchip/rk3288_crypto_ahash.c b/drivers/crypto/rockchip/rk3288_crypto_ahash.c
+index 821a506b9e17..c336ae75e361 100644
+--- a/drivers/crypto/rockchip/rk3288_crypto_ahash.c
++++ b/drivers/crypto/rockchip/rk3288_crypto_ahash.c
+@@ -206,7 +206,7 @@ static int rk_ahash_start(struct rk_crypto_info *dev)
+ dev->sg_dst = NULL;
+ dev->sg_src = req->src;
+ dev->first = req->src;
+- dev->nents = sg_nents(req->src);
++ dev->src_nents = sg_nents(req->src);
+ rctx = ahash_request_ctx(req);
+ rctx->mode = 0;
+
+diff --git a/drivers/dma/imx-dma.c b/drivers/dma/imx-dma.c
+index 4a09af3cd546..7b9a7fb28bb9 100644
+--- a/drivers/dma/imx-dma.c
++++ b/drivers/dma/imx-dma.c
+@@ -285,7 +285,7 @@ static inline int imxdma_sg_next(struct imxdma_desc *d)
+ struct scatterlist *sg = d->sg;
+ unsigned long now;
+
+- now = min(d->len, sg_dma_len(sg));
++ now = min_t(size_t, d->len, sg_dma_len(sg));
+ if (d->len != IMX_DMA_LENGTH_LOOP)
+ d->len -= now;
+
+diff --git a/drivers/dma/qcom/hidma.c b/drivers/dma/qcom/hidma.c
+index 43d4b00b8138..411f91fde734 100644
+--- a/drivers/dma/qcom/hidma.c
++++ b/drivers/dma/qcom/hidma.c
+@@ -138,24 +138,25 @@ static void hidma_process_completed(struct hidma_chan *mchan)
+ desc = &mdesc->desc;
+ last_cookie = desc->cookie;
+
++ llstat = hidma_ll_status(mdma->lldev, mdesc->tre_ch);
++
+ spin_lock_irqsave(&mchan->lock, irqflags);
++ if (llstat == DMA_COMPLETE) {
++ mchan->last_success = last_cookie;
++ result.result = DMA_TRANS_NOERROR;
++ } else {
++ result.result = DMA_TRANS_ABORTED;
++ }
++
+ dma_cookie_complete(desc);
+ spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+- llstat = hidma_ll_status(mdma->lldev, mdesc->tre_ch);
+ dmaengine_desc_get_callback(desc, &cb);
+
+ dma_run_dependencies(desc);
+
+ spin_lock_irqsave(&mchan->lock, irqflags);
+ list_move(&mdesc->node, &mchan->free);
+-
+- if (llstat == DMA_COMPLETE) {
+- mchan->last_success = last_cookie;
+- result.result = DMA_TRANS_NOERROR;
+- } else
+- result.result = DMA_TRANS_ABORTED;
+-
+ spin_unlock_irqrestore(&mchan->lock, irqflags);
+
+ dmaengine_desc_callback_invoke(&cb, &result);
+@@ -415,6 +416,7 @@ hidma_prep_dma_memcpy(struct dma_chan *dmach, dma_addr_t dest, dma_addr_t src,
+ if (!mdesc)
+ return NULL;
+
++ mdesc->desc.flags = flags;
+ hidma_ll_set_transfer_params(mdma->lldev, mdesc->tre_ch,
+ src, dest, len, flags,
+ HIDMA_TRE_MEMCPY);
+@@ -447,6 +449,7 @@ hidma_prep_dma_memset(struct dma_chan *dmach, dma_addr_t dest, int value,
+ if (!mdesc)
+ return NULL;
+
++ mdesc->desc.flags = flags;
+ hidma_ll_set_transfer_params(mdma->lldev, mdesc->tre_ch,
+ value, dest, len, flags,
+ HIDMA_TRE_MEMSET);
+diff --git a/drivers/dma/sh/usb-dmac.c b/drivers/dma/sh/usb-dmac.c
+index 7f7184c3cf95..59403f6d008a 100644
+--- a/drivers/dma/sh/usb-dmac.c
++++ b/drivers/dma/sh/usb-dmac.c
+@@ -694,6 +694,8 @@ static int usb_dmac_runtime_resume(struct device *dev)
+ #endif /* CONFIG_PM */
+
+ static const struct dev_pm_ops usb_dmac_pm = {
++ SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
++ pm_runtime_force_resume)
+ SET_RUNTIME_PM_OPS(usb_dmac_runtime_suspend, usb_dmac_runtime_resume,
+ NULL)
+ };
+diff --git a/drivers/dma/tegra20-apb-dma.c b/drivers/dma/tegra20-apb-dma.c
+index 9a558e30c461..8219ab88a507 100644
+--- a/drivers/dma/tegra20-apb-dma.c
++++ b/drivers/dma/tegra20-apb-dma.c
+@@ -636,7 +636,10 @@ static void handle_cont_sngl_cycle_dma_done(struct tegra_dma_channel *tdc,
+
+ sgreq = list_first_entry(&tdc->pending_sg_req, typeof(*sgreq), node);
+ dma_desc = sgreq->dma_desc;
+- dma_desc->bytes_transferred += sgreq->req_len;
++ /* if we dma for long enough the transfer count will wrap */
++ dma_desc->bytes_transferred =
++ (dma_desc->bytes_transferred + sgreq->req_len) %
++ dma_desc->bytes_requested;
+
+ /* Callback need to be call */
+ if (!dma_desc->cb_count)
+diff --git a/drivers/firmware/efi/cper.c b/drivers/firmware/efi/cper.c
+index a7902fccdcfa..6090d25dce85 100644
+--- a/drivers/firmware/efi/cper.c
++++ b/drivers/firmware/efi/cper.c
+@@ -546,19 +546,24 @@ EXPORT_SYMBOL_GPL(cper_estatus_check_header);
+ int cper_estatus_check(const struct acpi_hest_generic_status *estatus)
+ {
+ struct acpi_hest_generic_data *gdata;
+- unsigned int data_len, gedata_len;
++ unsigned int data_len, record_size;
+ int rc;
+
+ rc = cper_estatus_check_header(estatus);
+ if (rc)
+ return rc;
++
+ data_len = estatus->data_length;
+
+ apei_estatus_for_each_section(estatus, gdata) {
+- gedata_len = acpi_hest_get_error_length(gdata);
+- if (gedata_len > data_len - acpi_hest_get_size(gdata))
++ if (sizeof(struct acpi_hest_generic_data) > data_len)
++ return -EINVAL;
++
++ record_size = acpi_hest_get_record_size(gdata);
++ if (record_size > data_len)
+ return -EINVAL;
+- data_len -= acpi_hest_get_record_size(gdata);
++
++ data_len -= record_size;
+ }
+ if (data_len)
+ return -EINVAL;
+diff --git a/drivers/firmware/efi/libstub/arm-stub.c b/drivers/firmware/efi/libstub/arm-stub.c
+index c037c6c5d0b7..04e6ecd72cd9 100644
+--- a/drivers/firmware/efi/libstub/arm-stub.c
++++ b/drivers/firmware/efi/libstub/arm-stub.c
+@@ -367,6 +367,11 @@ void efi_get_virtmap(efi_memory_desc_t *memory_map, unsigned long map_size,
+ paddr = in->phys_addr;
+ size = in->num_pages * EFI_PAGE_SIZE;
+
++ if (novamap()) {
++ in->virt_addr = in->phys_addr;
++ continue;
++ }
++
+ /*
+ * Make the mapping compatible with 64k pages: this allows
+ * a 4k page size kernel to kexec a 64k page size kernel and
+diff --git a/drivers/firmware/efi/libstub/efi-stub-helper.c b/drivers/firmware/efi/libstub/efi-stub-helper.c
+index e94975f4655b..442f51c2a53d 100644
+--- a/drivers/firmware/efi/libstub/efi-stub-helper.c
++++ b/drivers/firmware/efi/libstub/efi-stub-helper.c
+@@ -34,6 +34,7 @@ static unsigned long __chunk_size = EFI_READ_CHUNK_SIZE;
+
+ static int __section(.data) __nokaslr;
+ static int __section(.data) __quiet;
++static int __section(.data) __novamap;
+
+ int __pure nokaslr(void)
+ {
+@@ -43,6 +44,10 @@ int __pure is_quiet(void)
+ {
+ return __quiet;
+ }
++int __pure novamap(void)
++{
++ return __novamap;
++}
+
+ #define EFI_MMAP_NR_SLACK_SLOTS 8
+
+@@ -482,6 +487,11 @@ efi_status_t efi_parse_options(char const *cmdline)
+ __chunk_size = -1UL;
+ }
+
++ if (!strncmp(str, "novamap", 7)) {
++ str += strlen("novamap");
++ __novamap = 1;
++ }
++
+ /* Group words together, delimited by "," */
+ while (*str && *str != ' ' && *str != ',')
+ str++;
+diff --git a/drivers/firmware/efi/libstub/efistub.h b/drivers/firmware/efi/libstub/efistub.h
+index 32799cf039ef..337b52c4702c 100644
+--- a/drivers/firmware/efi/libstub/efistub.h
++++ b/drivers/firmware/efi/libstub/efistub.h
+@@ -27,6 +27,7 @@
+
+ extern int __pure nokaslr(void);
+ extern int __pure is_quiet(void);
++extern int __pure novamap(void);
+
+ #define pr_efi(sys_table, msg) do { \
+ if (!is_quiet()) efi_printk(sys_table, "EFI stub: "msg); \
+diff --git a/drivers/firmware/efi/libstub/fdt.c b/drivers/firmware/efi/libstub/fdt.c
+index 0dc7b4987cc2..f8f89f995e9d 100644
+--- a/drivers/firmware/efi/libstub/fdt.c
++++ b/drivers/firmware/efi/libstub/fdt.c
+@@ -327,6 +327,9 @@ efi_status_t allocate_new_fdt_and_exit_boot(efi_system_table_t *sys_table,
+ if (status == EFI_SUCCESS) {
+ efi_set_virtual_address_map_t *svam;
+
++ if (novamap())
++ return EFI_SUCCESS;
++
+ /* Install the new virtual address map */
+ svam = sys_table->runtime->set_virtual_address_map;
+ status = svam(runtime_entry_count * desc_size, desc_size,
+diff --git a/drivers/firmware/efi/memattr.c b/drivers/firmware/efi/memattr.c
+index 8986757eafaf..aac972b056d9 100644
+--- a/drivers/firmware/efi/memattr.c
++++ b/drivers/firmware/efi/memattr.c
+@@ -94,7 +94,7 @@ static bool entry_is_valid(const efi_memory_desc_t *in, efi_memory_desc_t *out)
+
+ if (!(md->attribute & EFI_MEMORY_RUNTIME))
+ continue;
+- if (md->virt_addr == 0) {
++ if (md->virt_addr == 0 && md->phys_addr != 0) {
+ /* no virtual mapping has been installed by the stub */
+ break;
+ }
+diff --git a/drivers/firmware/efi/runtime-wrappers.c b/drivers/firmware/efi/runtime-wrappers.c
+index e2abfdb5cee6..698745c249e8 100644
+--- a/drivers/firmware/efi/runtime-wrappers.c
++++ b/drivers/firmware/efi/runtime-wrappers.c
+@@ -85,7 +85,7 @@ struct efi_runtime_work efi_rts_work;
+ pr_err("Failed to queue work to efi_rts_wq.\n"); \
+ \
+ exit: \
+- efi_rts_work.efi_rts_id = NONE; \
++ efi_rts_work.efi_rts_id = EFI_NONE; \
+ efi_rts_work.status; \
+ })
+
+@@ -175,50 +175,50 @@ static void efi_call_rts(struct work_struct *work)
+ arg5 = efi_rts_work.arg5;
+
+ switch (efi_rts_work.efi_rts_id) {
+- case GET_TIME:
++ case EFI_GET_TIME:
+ status = efi_call_virt(get_time, (efi_time_t *)arg1,
+ (efi_time_cap_t *)arg2);
+ break;
+- case SET_TIME:
++ case EFI_SET_TIME:
+ status = efi_call_virt(set_time, (efi_time_t *)arg1);
+ break;
+- case GET_WAKEUP_TIME:
++ case EFI_GET_WAKEUP_TIME:
+ status = efi_call_virt(get_wakeup_time, (efi_bool_t *)arg1,
+ (efi_bool_t *)arg2, (efi_time_t *)arg3);
+ break;
+- case SET_WAKEUP_TIME:
++ case EFI_SET_WAKEUP_TIME:
+ status = efi_call_virt(set_wakeup_time, *(efi_bool_t *)arg1,
+ (efi_time_t *)arg2);
+ break;
+- case GET_VARIABLE:
++ case EFI_GET_VARIABLE:
+ status = efi_call_virt(get_variable, (efi_char16_t *)arg1,
+ (efi_guid_t *)arg2, (u32 *)arg3,
+ (unsigned long *)arg4, (void *)arg5);
+ break;
+- case GET_NEXT_VARIABLE:
++ case EFI_GET_NEXT_VARIABLE:
+ status = efi_call_virt(get_next_variable, (unsigned long *)arg1,
+ (efi_char16_t *)arg2,
+ (efi_guid_t *)arg3);
+ break;
+- case SET_VARIABLE:
++ case EFI_SET_VARIABLE:
+ status = efi_call_virt(set_variable, (efi_char16_t *)arg1,
+ (efi_guid_t *)arg2, *(u32 *)arg3,
+ *(unsigned long *)arg4, (void *)arg5);
+ break;
+- case QUERY_VARIABLE_INFO:
++ case EFI_QUERY_VARIABLE_INFO:
+ status = efi_call_virt(query_variable_info, *(u32 *)arg1,
+ (u64 *)arg2, (u64 *)arg3, (u64 *)arg4);
+ break;
+- case GET_NEXT_HIGH_MONO_COUNT:
++ case EFI_GET_NEXT_HIGH_MONO_COUNT:
+ status = efi_call_virt(get_next_high_mono_count, (u32 *)arg1);
+ break;
+- case UPDATE_CAPSULE:
++ case EFI_UPDATE_CAPSULE:
+ status = efi_call_virt(update_capsule,
+ (efi_capsule_header_t **)arg1,
+ *(unsigned long *)arg2,
+ *(unsigned long *)arg3);
+ break;
+- case QUERY_CAPSULE_CAPS:
++ case EFI_QUERY_CAPSULE_CAPS:
+ status = efi_call_virt(query_capsule_caps,
+ (efi_capsule_header_t **)arg1,
+ *(unsigned long *)arg2, (u64 *)arg3,
+@@ -242,7 +242,7 @@ static efi_status_t virt_efi_get_time(efi_time_t *tm, efi_time_cap_t *tc)
+
+ if (down_interruptible(&efi_runtime_lock))
+ return EFI_ABORTED;
+- status = efi_queue_work(GET_TIME, tm, tc, NULL, NULL, NULL);
++ status = efi_queue_work(EFI_GET_TIME, tm, tc, NULL, NULL, NULL);
+ up(&efi_runtime_lock);
+ return status;
+ }
+@@ -253,7 +253,7 @@ static efi_status_t virt_efi_set_time(efi_time_t *tm)
+
+ if (down_interruptible(&efi_runtime_lock))
+ return EFI_ABORTED;
+- status = efi_queue_work(SET_TIME, tm, NULL, NULL, NULL, NULL);
++ status = efi_queue_work(EFI_SET_TIME, tm, NULL, NULL, NULL, NULL);
+ up(&efi_runtime_lock);
+ return status;
+ }
+@@ -266,7 +266,7 @@ static efi_status_t virt_efi_get_wakeup_time(efi_bool_t *enabled,
+
+ if (down_interruptible(&efi_runtime_lock))
+ return EFI_ABORTED;
+- status = efi_queue_work(GET_WAKEUP_TIME, enabled, pending, tm, NULL,
++ status = efi_queue_work(EFI_GET_WAKEUP_TIME, enabled, pending, tm, NULL,
+ NULL);
+ up(&efi_runtime_lock);
+ return status;
+@@ -278,7 +278,7 @@ static efi_status_t virt_efi_set_wakeup_time(efi_bool_t enabled, efi_time_t *tm)
+
+ if (down_interruptible(&efi_runtime_lock))
+ return EFI_ABORTED;
+- status = efi_queue_work(SET_WAKEUP_TIME, &enabled, tm, NULL, NULL,
++ status = efi_queue_work(EFI_SET_WAKEUP_TIME, &enabled, tm, NULL, NULL,
+ NULL);
+ up(&efi_runtime_lock);
+ return status;
+@@ -294,7 +294,7 @@ static efi_status_t virt_efi_get_variable(efi_char16_t *name,
+
+ if (down_interruptible(&efi_runtime_lock))
+ return EFI_ABORTED;
+- status = efi_queue_work(GET_VARIABLE, name, vendor, attr, data_size,
++ status = efi_queue_work(EFI_GET_VARIABLE, name, vendor, attr, data_size,
+ data);
+ up(&efi_runtime_lock);
+ return status;
+@@ -308,7 +308,7 @@ static efi_status_t virt_efi_get_next_variable(unsigned long *name_size,
+
+ if (down_interruptible(&efi_runtime_lock))
+ return EFI_ABORTED;
+- status = efi_queue_work(GET_NEXT_VARIABLE, name_size, name, vendor,
++ status = efi_queue_work(EFI_GET_NEXT_VARIABLE, name_size, name, vendor,
+ NULL, NULL);
+ up(&efi_runtime_lock);
+ return status;
+@@ -324,7 +324,7 @@ static efi_status_t virt_efi_set_variable(efi_char16_t *name,
+
+ if (down_interruptible(&efi_runtime_lock))
+ return EFI_ABORTED;
+- status = efi_queue_work(SET_VARIABLE, name, vendor, &attr, &data_size,
++ status = efi_queue_work(EFI_SET_VARIABLE, name, vendor, &attr, &data_size,
+ data);
+ up(&efi_runtime_lock);
+ return status;
+@@ -359,7 +359,7 @@ static efi_status_t virt_efi_query_variable_info(u32 attr,
+
+ if (down_interruptible(&efi_runtime_lock))
+ return EFI_ABORTED;
+- status = efi_queue_work(QUERY_VARIABLE_INFO, &attr, storage_space,
++ status = efi_queue_work(EFI_QUERY_VARIABLE_INFO, &attr, storage_space,
+ remaining_space, max_variable_size, NULL);
+ up(&efi_runtime_lock);
+ return status;
+@@ -391,7 +391,7 @@ static efi_status_t virt_efi_get_next_high_mono_count(u32 *count)
+
+ if (down_interruptible(&efi_runtime_lock))
+ return EFI_ABORTED;
+- status = efi_queue_work(GET_NEXT_HIGH_MONO_COUNT, count, NULL, NULL,
++ status = efi_queue_work(EFI_GET_NEXT_HIGH_MONO_COUNT, count, NULL, NULL,
+ NULL, NULL);
+ up(&efi_runtime_lock);
+ return status;
+@@ -407,7 +407,7 @@ static void virt_efi_reset_system(int reset_type,
+ "could not get exclusive access to the firmware\n");
+ return;
+ }
+- efi_rts_work.efi_rts_id = RESET_SYSTEM;
++ efi_rts_work.efi_rts_id = EFI_RESET_SYSTEM;
+ __efi_call_virt(reset_system, reset_type, status, data_size, data);
+ up(&efi_runtime_lock);
+ }
+@@ -423,7 +423,7 @@ static efi_status_t virt_efi_update_capsule(efi_capsule_header_t **capsules,
+
+ if (down_interruptible(&efi_runtime_lock))
+ return EFI_ABORTED;
+- status = efi_queue_work(UPDATE_CAPSULE, capsules, &count, &sg_list,
++ status = efi_queue_work(EFI_UPDATE_CAPSULE, capsules, &count, &sg_list,
+ NULL, NULL);
+ up(&efi_runtime_lock);
+ return status;
+@@ -441,7 +441,7 @@ static efi_status_t virt_efi_query_capsule_caps(efi_capsule_header_t **capsules,
+
+ if (down_interruptible(&efi_runtime_lock))
+ return EFI_ABORTED;
+- status = efi_queue_work(QUERY_CAPSULE_CAPS, capsules, &count,
++ status = efi_queue_work(EFI_QUERY_CAPSULE_CAPS, capsules, &count,
+ max_size, reset_type, NULL);
+ up(&efi_runtime_lock);
+ return status;
+diff --git a/drivers/firmware/iscsi_ibft.c b/drivers/firmware/iscsi_ibft.c
+index 6bc8e6640d71..c51462f5aa1e 100644
+--- a/drivers/firmware/iscsi_ibft.c
++++ b/drivers/firmware/iscsi_ibft.c
+@@ -542,6 +542,7 @@ static umode_t __init ibft_check_tgt_for(void *data, int type)
+ case ISCSI_BOOT_TGT_NIC_ASSOC:
+ case ISCSI_BOOT_TGT_CHAP_TYPE:
+ rc = S_IRUGO;
++ break;
+ case ISCSI_BOOT_TGT_NAME:
+ if (tgt->tgt_name_len)
+ rc = S_IRUGO;
+diff --git a/drivers/gnss/sirf.c b/drivers/gnss/sirf.c
+index 226f6e6fe01b..8e3f6a776e02 100644
+--- a/drivers/gnss/sirf.c
++++ b/drivers/gnss/sirf.c
+@@ -310,30 +310,26 @@ static int sirf_probe(struct serdev_device *serdev)
+ ret = -ENODEV;
+ goto err_put_device;
+ }
++
++ ret = regulator_enable(data->vcc);
++ if (ret)
++ goto err_put_device;
++
++ /* Wait for chip to boot into hibernate mode. */
++ msleep(SIRF_BOOT_DELAY);
+ }
+
+ if (data->wakeup) {
+ ret = gpiod_to_irq(data->wakeup);
+ if (ret < 0)
+- goto err_put_device;
+-
++ goto err_disable_vcc;
+ data->irq = ret;
+
+- ret = devm_request_threaded_irq(dev, data->irq, NULL,
+- sirf_wakeup_handler,
++ ret = request_threaded_irq(data->irq, NULL, sirf_wakeup_handler,
+ IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
+ "wakeup", data);
+ if (ret)
+- goto err_put_device;
+- }
+-
+- if (data->on_off) {
+- ret = regulator_enable(data->vcc);
+- if (ret)
+- goto err_put_device;
+-
+- /* Wait for chip to boot into hibernate mode */
+- msleep(SIRF_BOOT_DELAY);
++ goto err_disable_vcc;
+ }
+
+ if (IS_ENABLED(CONFIG_PM)) {
+@@ -342,7 +338,7 @@ static int sirf_probe(struct serdev_device *serdev)
+ } else {
+ ret = sirf_runtime_resume(dev);
+ if (ret < 0)
+- goto err_disable_vcc;
++ goto err_free_irq;
+ }
+
+ ret = gnss_register_device(gdev);
+@@ -356,6 +352,9 @@ err_disable_rpm:
+ pm_runtime_disable(dev);
+ else
+ sirf_runtime_suspend(dev);
++err_free_irq:
++ if (data->wakeup)
++ free_irq(data->irq, data);
+ err_disable_vcc:
+ if (data->on_off)
+ regulator_disable(data->vcc);
+@@ -376,6 +375,9 @@ static void sirf_remove(struct serdev_device *serdev)
+ else
+ sirf_runtime_suspend(&serdev->dev);
+
++ if (data->wakeup)
++ free_irq(data->irq, data);
++
+ if (data->on_off)
+ regulator_disable(data->vcc);
+
+diff --git a/drivers/gpio/gpio-adnp.c b/drivers/gpio/gpio-adnp.c
+index 91b90c0cea73..12acdac85820 100644
+--- a/drivers/gpio/gpio-adnp.c
++++ b/drivers/gpio/gpio-adnp.c
+@@ -132,8 +132,10 @@ static int adnp_gpio_direction_input(struct gpio_chip *chip, unsigned offset)
+ if (err < 0)
+ goto out;
+
+- if (err & BIT(pos))
+- err = -EACCES;
++ if (value & BIT(pos)) {
++ err = -EPERM;
++ goto out;
++ }
+
+ err = 0;
+
+diff --git a/drivers/gpio/gpio-exar.c b/drivers/gpio/gpio-exar.c
+index 0ecd2369c2ca..a09d2f9ebacc 100644
+--- a/drivers/gpio/gpio-exar.c
++++ b/drivers/gpio/gpio-exar.c
+@@ -148,6 +148,8 @@ static int gpio_exar_probe(struct platform_device *pdev)
+ mutex_init(&exar_gpio->lock);
+
+ index = ida_simple_get(&ida_index, 0, 0, GFP_KERNEL);
++ if (index < 0)
++ goto err_destroy;
+
+ sprintf(exar_gpio->name, "exar_gpio%d", index);
+ exar_gpio->gpio_chip.label = exar_gpio->name;
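
The gpio-exar hunk checks the IDA allocation before the index feeds
into the device name. A hedged probe-style sketch of the error path;
fake_ida_get() and the err_destroy label are invented for the example:

#include <stdio.h>

static int fake_ida_get(int fail)
{
	return fail ? -12 /* -ENOMEM */ : 0;
}

/* A negative allocator result must branch to cleanup instead of
 * flowing into sprintf(..., index) with a bogus index. */
static int probe(int fail)
{
	int index = fake_ida_get(fail);

	if (index < 0)
		goto err_destroy;	/* the check the hunk adds */

	printf("exar_gpio%d\n", index);
	return 0;

err_destroy:
	/* the real driver unwinds its earlier setup here */
	return index;
}

int main(void)
{
	return probe(1) == -12 ? 0 : 1;
}
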
+diff --git a/drivers/gpio/gpio-omap.c b/drivers/gpio/gpio-omap.c
+index f4e9921fa966..7f33024b6d83 100644
+--- a/drivers/gpio/gpio-omap.c
++++ b/drivers/gpio/gpio-omap.c
+@@ -883,14 +883,16 @@ static void omap_gpio_unmask_irq(struct irq_data *d)
+ if (trigger)
+ omap_set_gpio_triggering(bank, offset, trigger);
+
+- /* For level-triggered GPIOs, the clearing must be done after
+- * the HW source is cleared, thus after the handler has run */
+- if (bank->level_mask & BIT(offset)) {
+- omap_set_gpio_irqenable(bank, offset, 0);
++ omap_set_gpio_irqenable(bank, offset, 1);
++
++ /*
++ * For level-triggered GPIOs, clearing must be done after the source
++ * is cleared, thus after the handler has run. OMAP4 needs this done
++ * after enabling the interrupt to clear the wakeup status.
++ */
++ if (bank->level_mask & BIT(offset))
+ omap_clear_gpio_irqstatus(bank, offset);
+- }
+
+- omap_set_gpio_irqenable(bank, offset, 1);
+ raw_spin_unlock_irqrestore(&bank->lock, flags);
+ }
+
+diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
+index 0dc96419efe3..d8a985fc6a5d 100644
+--- a/drivers/gpio/gpio-pca953x.c
++++ b/drivers/gpio/gpio-pca953x.c
+@@ -587,7 +587,8 @@ static int pca953x_irq_set_type(struct irq_data *d, unsigned int type)
+
+ static void pca953x_irq_shutdown(struct irq_data *d)
+ {
+- struct pca953x_chip *chip = irq_data_get_irq_chip_data(d);
++ struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
++ struct pca953x_chip *chip = gpiochip_get_data(gc);
+ u8 mask = 1 << (d->hwirq % BANK_SZ);
+
+ chip->irq_trig_raise[d->hwirq / BANK_SZ] &= ~mask;
+diff --git a/drivers/gpio/gpiolib-of.c b/drivers/gpio/gpiolib-of.c
+index a6e1891217e2..a1dd2f1c0d02 100644
+--- a/drivers/gpio/gpiolib-of.c
++++ b/drivers/gpio/gpiolib-of.c
+@@ -86,7 +86,8 @@ static void of_gpio_flags_quirks(struct device_node *np,
+ if (IS_ENABLED(CONFIG_REGULATOR) &&
+ (of_device_is_compatible(np, "regulator-fixed") ||
+ of_device_is_compatible(np, "reg-fixed-voltage") ||
+- of_device_is_compatible(np, "regulator-gpio"))) {
++ (of_device_is_compatible(np, "regulator-gpio") &&
++ strcmp(propname, "enable-gpio") == 0))) {
+ /*
+ * The regulator GPIO handles are specified such that the
+ * presence or absence of "enable-active-high" solely controls
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+index bacdaef77b6c..278dd55ff476 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+@@ -738,7 +738,7 @@ static int gmc_v9_0_allocate_vm_inv_eng(struct amdgpu_device *adev)
+ }
+
+ ring->vm_inv_eng = inv_eng - 1;
+- change_bit(inv_eng - 1, (unsigned long *)(&vm_inv_engs[vmhub]));
++ vm_inv_engs[vmhub] &= ~(1 << ring->vm_inv_eng);
+
+ dev_info(adev->dev, "ring %s uses VM inv eng %u on hub %u\n",
+ ring->name, ring->vm_inv_eng, ring->funcs->vmhub);
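
In the gmc_v9_0 hunk, change_bit(), which toggles, is replaced by an
explicit AND-NOT so claiming an engine always clears its bit. A small
standalone check of the difference (mask values are made up):

#include <assert.h>
#include <stdint.h>

int main(void)
{
	uint32_t free_engs = 0x1e;	/* engines 1-4 marked free */
	unsigned int eng = 3;

	/* Explicit clear, as in the fix: idempotent. */
	uint32_t once  = free_engs & ~(1u << eng);
	uint32_t twice = once & ~(1u << eng);
	assert(once == 0x16 && twice == 0x16);

	/* change_bit() toggles: a second call re-marks eng as free. */
	uint32_t t = free_engs ^ (1u << eng);
	assert((t ^ (1u << eng)) == free_engs);
	return 0;
}
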
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 636d14a60952..83c8a0407537 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -886,6 +886,7 @@ static void emulated_link_detect(struct dc_link *link)
+ return;
+ }
+
++ /* dc_sink_create returns a new reference */
+ link->local_sink = sink;
+
+ edid_status = dm_helpers_read_local_edid(
+@@ -952,6 +953,8 @@ static int dm_resume(void *handle)
+ if (aconnector->fake_enable && aconnector->dc_link->local_sink)
+ aconnector->fake_enable = false;
+
++ if (aconnector->dc_sink)
++ dc_sink_release(aconnector->dc_sink);
+ aconnector->dc_sink = NULL;
+ amdgpu_dm_update_connector_after_detect(aconnector);
+ mutex_unlock(&aconnector->hpd_lock);
+@@ -1061,6 +1064,8 @@ amdgpu_dm_update_connector_after_detect(struct amdgpu_dm_connector *aconnector)
+
+
+ sink = aconnector->dc_link->local_sink;
++ if (sink)
++ dc_sink_retain(sink);
+
+ /*
+ * Edid mgmt connector gets first update only in mode_valid hook and then
+@@ -1085,21 +1090,24 @@ amdgpu_dm_update_connector_after_detect(struct amdgpu_dm_connector *aconnector)
+ * to it anymore after disconnect, so on next crtc to connector
+ * reshuffle by UMD we will get into unwanted dc_sink release
+ */
+- if (aconnector->dc_sink != aconnector->dc_em_sink)
+- dc_sink_release(aconnector->dc_sink);
++ dc_sink_release(aconnector->dc_sink);
+ }
+ aconnector->dc_sink = sink;
++ dc_sink_retain(aconnector->dc_sink);
+ amdgpu_dm_update_freesync_caps(connector,
+ aconnector->edid);
+ } else {
+ amdgpu_dm_update_freesync_caps(connector, NULL);
+- if (!aconnector->dc_sink)
++ if (!aconnector->dc_sink) {
+ aconnector->dc_sink = aconnector->dc_em_sink;
+- else if (aconnector->dc_sink != aconnector->dc_em_sink)
+ dc_sink_retain(aconnector->dc_sink);
++ }
+ }
+
+ mutex_unlock(&dev->mode_config.mutex);
++
++ if (sink)
++ dc_sink_release(sink);
+ return;
+ }
+
+@@ -1107,8 +1115,10 @@ amdgpu_dm_update_connector_after_detect(struct amdgpu_dm_connector *aconnector)
+ * TODO: temporary guard to look for proper fix
+ * if this sink is MST sink, we should not do anything
+ */
+- if (sink && sink->sink_signal == SIGNAL_TYPE_DISPLAY_PORT_MST)
++ if (sink && sink->sink_signal == SIGNAL_TYPE_DISPLAY_PORT_MST) {
++ dc_sink_release(sink);
+ return;
++ }
+
+ if (aconnector->dc_sink == sink) {
+ /*
+@@ -1117,6 +1127,8 @@ amdgpu_dm_update_connector_after_detect(struct amdgpu_dm_connector *aconnector)
+ */
+ DRM_DEBUG_DRIVER("DCHPD: connector_id=%d: dc_sink didn't change.\n",
+ aconnector->connector_id);
++ if (sink)
++ dc_sink_release(sink);
+ return;
+ }
+
+@@ -1138,6 +1150,7 @@ amdgpu_dm_update_connector_after_detect(struct amdgpu_dm_connector *aconnector)
+ amdgpu_dm_update_freesync_caps(connector, NULL);
+
+ aconnector->dc_sink = sink;
++ dc_sink_retain(aconnector->dc_sink);
+ if (sink->dc_edid.length == 0) {
+ aconnector->edid = NULL;
+ drm_dp_cec_unset_edid(&aconnector->dm_dp_aux.aux);
+@@ -1158,11 +1171,15 @@ amdgpu_dm_update_connector_after_detect(struct amdgpu_dm_connector *aconnector)
+ amdgpu_dm_update_freesync_caps(connector, NULL);
+ drm_connector_update_edid_property(connector, NULL);
+ aconnector->num_modes = 0;
++ dc_sink_release(aconnector->dc_sink);
+ aconnector->dc_sink = NULL;
+ aconnector->edid = NULL;
+ }
+
+ mutex_unlock(&dev->mode_config.mutex);
++
++ if (sink)
++ dc_sink_release(sink);
+ }
+
+ static void handle_hpd_irq(void *param)
+@@ -2908,6 +2925,7 @@ create_stream_for_sink(struct amdgpu_dm_connector *aconnector,
+ }
+ } else {
+ sink = aconnector->dc_sink;
++ dc_sink_retain(sink);
+ }
+
+ stream = dc_create_stream_for_sink(sink);
+@@ -2974,8 +2992,7 @@ create_stream_for_sink(struct amdgpu_dm_connector *aconnector,
+ stream->ignore_msa_timing_param = true;
+
+ finish:
+- if (sink && sink->sink_signal == SIGNAL_TYPE_VIRTUAL && aconnector->base.force != DRM_FORCE_ON)
+- dc_sink_release(sink);
++ dc_sink_release(sink);
+
+ return stream;
+ }
+@@ -3233,6 +3250,14 @@ static void amdgpu_dm_connector_destroy(struct drm_connector *connector)
+ dm->backlight_dev = NULL;
+ }
+ #endif
++
++ if (aconnector->dc_em_sink)
++ dc_sink_release(aconnector->dc_em_sink);
++ aconnector->dc_em_sink = NULL;
++ if (aconnector->dc_sink)
++ dc_sink_release(aconnector->dc_sink);
++ aconnector->dc_sink = NULL;
++
+ drm_dp_cec_unregister_connector(&aconnector->dm_dp_aux.aux);
+ drm_connector_unregister(connector);
+ drm_connector_cleanup(connector);
+@@ -3330,10 +3355,12 @@ static void create_eml_sink(struct amdgpu_dm_connector *aconnector)
+ (edid->extensions + 1) * EDID_LENGTH,
+ &init_params);
+
+- if (aconnector->base.force == DRM_FORCE_ON)
++ if (aconnector->base.force == DRM_FORCE_ON) {
+ aconnector->dc_sink = aconnector->dc_link->local_sink ?
+ aconnector->dc_link->local_sink :
+ aconnector->dc_em_sink;
++ dc_sink_retain(aconnector->dc_sink);
++ }
+ }
+
+ static void handle_edid_mgmt(struct amdgpu_dm_connector *aconnector)
+@@ -4948,7 +4975,8 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
+ static void amdgpu_dm_crtc_copy_transient_flags(struct drm_crtc_state *crtc_state,
+ struct dc_stream_state *stream_state)
+ {
+- stream_state->mode_changed = crtc_state->mode_changed;
++ stream_state->mode_changed =
++ crtc_state->mode_changed || crtc_state->active_changed;
+ }
+
+ static int amdgpu_dm_atomic_commit(struct drm_device *dev,
+@@ -4969,10 +4997,22 @@ static int amdgpu_dm_atomic_commit(struct drm_device *dev,
+ */
+ for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
+ struct dm_crtc_state *dm_old_crtc_state = to_dm_crtc_state(old_crtc_state);
++ struct dm_crtc_state *dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
+ struct amdgpu_crtc *acrtc = to_amdgpu_crtc(crtc);
+
+- if (drm_atomic_crtc_needs_modeset(new_crtc_state) && dm_old_crtc_state->stream)
++ if (drm_atomic_crtc_needs_modeset(new_crtc_state)
++ && dm_old_crtc_state->stream) {
++ /*
++ * CRC capture was enabled but not disabled.
++ * Release the vblank reference.
++ */
++ if (dm_new_crtc_state->crc_enabled) {
++ drm_crtc_vblank_put(crtc);
++ dm_new_crtc_state->crc_enabled = false;
++ }
++
+ manage_dm_interrupts(adev, acrtc, false);
++ }
+ }
+ /*
+ * Add check here for SoC's that support hardware cursor plane, to
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
+index f088ac585978..26b651148c67 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
+@@ -66,6 +66,7 @@ int amdgpu_dm_crtc_set_crc_source(struct drm_crtc *crtc, const char *src_name)
+ {
+ struct dm_crtc_state *crtc_state = to_dm_crtc_state(crtc->state);
+ struct dc_stream_state *stream_state = crtc_state->stream;
++ bool enable;
+
+ enum amdgpu_dm_pipe_crc_source source = dm_parse_crc_source(src_name);
+
+@@ -80,28 +81,27 @@ int amdgpu_dm_crtc_set_crc_source(struct drm_crtc *crtc, const char *src_name)
+ return -EINVAL;
+ }
+
++ enable = (source == AMDGPU_DM_PIPE_CRC_SOURCE_AUTO);
++
++ if (!dc_stream_configure_crc(stream_state->ctx->dc, stream_state,
++ enable, enable))
++ return -EINVAL;
++
+ /* When enabling CRC, we should also disable dithering. */
+- if (source == AMDGPU_DM_PIPE_CRC_SOURCE_AUTO) {
+- if (dc_stream_configure_crc(stream_state->ctx->dc,
+- stream_state,
+- true, true)) {
+- crtc_state->crc_enabled = true;
+- dc_stream_set_dither_option(stream_state,
+- DITHER_OPTION_TRUN8);
+- }
+- else
+- return -EINVAL;
+- } else {
+- if (dc_stream_configure_crc(stream_state->ctx->dc,
+- stream_state,
+- false, false)) {
+- crtc_state->crc_enabled = false;
+- dc_stream_set_dither_option(stream_state,
+- DITHER_OPTION_DEFAULT);
+- }
+- else
+- return -EINVAL;
+- }
++ dc_stream_set_dither_option(stream_state,
++ enable ? DITHER_OPTION_TRUN8
++ : DITHER_OPTION_DEFAULT);
++
++ /*
++ * Reading the CRC requires the vblank interrupt handler to be
++ * enabled. Keep a reference until CRC capture stops.
++ */
++ if (!crtc_state->crc_enabled && enable)
++ drm_crtc_vblank_get(crtc);
++ else if (crtc_state->crc_enabled && !enable)
++ drm_crtc_vblank_put(crtc);
++
++ crtc_state->crc_enabled = enable;
+
+ /* Reset crc_skipped on dm state */
+ crtc_state->crc_skip_count = 0;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index 1b0d209d8367..3b95a637b508 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -239,6 +239,7 @@ static int dm_dp_mst_get_modes(struct drm_connector *connector)
+ &init_params);
+
+ dc_sink->priv = aconnector;
++ /* dc_link_add_remote_sink returns a new reference */
+ aconnector->dc_sink = dc_sink;
+
+ if (aconnector->dc_sink)
+diff --git a/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c b/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
+index 43e4a2be0fa6..57cc11d0e9a5 100644
+--- a/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
++++ b/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
+@@ -1355,12 +1355,12 @@ void dcn_bw_update_from_pplib(struct dc *dc)
+ struct dm_pp_clock_levels_with_voltage fclks = {0}, dcfclks = {0};
+ bool res;
+
+- kernel_fpu_begin();
+-
+ /* TODO: This is not the proper way to obtain fabric_and_dram_bandwidth, should be min(fclk, memclk) */
+ res = dm_pp_get_clock_levels_by_type_with_voltage(
+ ctx, DM_PP_CLOCK_TYPE_FCLK, &fclks);
+
++ kernel_fpu_begin();
++
+ if (res)
+ res = verify_clock_values(&fclks);
+
+@@ -1379,9 +1379,13 @@ void dcn_bw_update_from_pplib(struct dc *dc)
+ } else
+ BREAK_TO_DEBUGGER();
+
++ kernel_fpu_end();
++
+ res = dm_pp_get_clock_levels_by_type_with_voltage(
+ ctx, DM_PP_CLOCK_TYPE_DCFCLK, &dcfclks);
+
++ kernel_fpu_begin();
++
+ if (res)
+ res = verify_clock_values(&dcfclks);
+
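
The dcn_calcs reordering keeps the pp clock queries, which may sleep,
outside the kernel_fpu_begin()/kernel_fpu_end() region, since that
region runs with preemption disabled. A userspace analogue of the
invariant, where a plain flag stands in for the preemption state:

#include <assert.h>
#include <stdbool.h>

static bool in_fpu_section;	/* stand-in for "preemption off" */

/* Anything that may sleep must never run inside the FPU section. */
static void query_clocks(void)
{
	assert(!in_fpu_section);
}

int main(void)
{
	query_clocks();			/* first query, outside */
	in_fpu_section = true;		/* kernel_fpu_begin() analogue */
	/* ... pure floating-point math only ... */
	in_fpu_section = false;		/* kernel_fpu_end() analogue */
	query_clocks();			/* second query, outside again */
	return 0;
}
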
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 5fd52094d459..1f92e7e8e3d3 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -1078,6 +1078,9 @@ static enum dc_status dc_commit_state_no_check(struct dc *dc, struct dc_state *c
+ /* pplib is notified if disp_num changed */
+ dc->hwss.optimize_bandwidth(dc, context);
+
++ for (i = 0; i < context->stream_count; i++)
++ context->streams[i]->mode_changed = false;
++
+ dc_release_state(dc->current_state);
+
+ dc->current_state = context;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+index b0265dbebd4c..583eb367850f 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+@@ -792,6 +792,7 @@ bool dc_link_detect(struct dc_link *link, enum dc_detect_reason reason)
+ sink->dongle_max_pix_clk = sink_caps.max_hdmi_pixel_clock;
+ sink->converter_disable_audio = converter_disable_audio;
+
++ /* dc_sink_create returns a new reference */
+ link->local_sink = sink;
+
+ edid_status = dm_helpers_read_local_edid(
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+index 41883c981789..a684b38332ac 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+@@ -2334,9 +2334,10 @@ static void dcn10_apply_ctx_for_surface(
+ }
+ }
+
+- if (!pipe_ctx->plane_state &&
+- old_pipe_ctx->plane_state &&
+- old_pipe_ctx->stream_res.tg == tg) {
++ if ((!pipe_ctx->plane_state ||
++ pipe_ctx->stream_res.tg != old_pipe_ctx->stream_res.tg) &&
++ old_pipe_ctx->plane_state &&
++ old_pipe_ctx->stream_res.tg == tg) {
+
+ dc->hwss.plane_atomic_disconnect(dc, old_pipe_ctx);
+ removed_pipe[i] = true;
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+index c8f5c00dd1e7..86e3fb27c125 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+@@ -3491,14 +3491,14 @@ static int smu7_get_gpu_power(struct pp_hwmgr *hwmgr, u32 *query)
+
+ smum_send_msg_to_smc(hwmgr, PPSMC_MSG_PmStatusLogStart);
+ cgs_write_ind_register(hwmgr->device, CGS_IND_REG__SMC,
+- ixSMU_PM_STATUS_94, 0);
++ ixSMU_PM_STATUS_95, 0);
+
+ for (i = 0; i < 10; i++) {
+- mdelay(1);
++ mdelay(500);
+ smum_send_msg_to_smc(hwmgr, PPSMC_MSG_PmStatusLogSample);
+ tmp = cgs_read_ind_register(hwmgr->device,
+ CGS_IND_REG__SMC,
+- ixSMU_PM_STATUS_94);
++ ixSMU_PM_STATUS_95);
+ if (tmp != 0)
+ break;
+ }
+diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
+index f4290f6b0c38..2323ba9310d9 100644
+--- a/drivers/gpu/drm/drm_atomic_helper.c
++++ b/drivers/gpu/drm/drm_atomic_helper.c
+@@ -1611,6 +1611,15 @@ int drm_atomic_helper_async_check(struct drm_device *dev,
+ if (old_plane_state->fb != new_plane_state->fb)
+ return -EINVAL;
+
++ /*
++ * FIXME: Since prepare_fb and cleanup_fb are always called on
++ * the new_plane_state for async updates we need to block framebuffer
++ * changes. This prevents use of a fb that's been cleaned up and
++ * double cleanups from occurring.
++ */
++ if (old_plane_state->fb != new_plane_state->fb)
++ return -EINVAL;
++
+ funcs = plane->helper_private;
+ if (!funcs->atomic_async_update)
+ return -EINVAL;
+diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
+index 529414556962..1a244c53252c 100644
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -3286,6 +3286,7 @@ static int drm_dp_mst_i2c_xfer(struct i2c_adapter *adapter, struct i2c_msg *msgs
+ msg.u.i2c_read.transactions[i].i2c_dev_id = msgs[i].addr;
+ msg.u.i2c_read.transactions[i].num_bytes = msgs[i].len;
+ msg.u.i2c_read.transactions[i].bytes = msgs[i].buf;
++ msg.u.i2c_read.transactions[i].no_stop_bit = !(msgs[i].flags & I2C_M_STOP);
+ }
+ msg.u.i2c_read.read_i2c_device_id = msgs[num - 1].addr;
+ msg.u.i2c_read.num_bytes_read = msgs[num - 1].len;
+diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
+index d73703a695e8..edd8cb497f3b 100644
+--- a/drivers/gpu/drm/drm_fb_helper.c
++++ b/drivers/gpu/drm/drm_fb_helper.c
+@@ -2891,7 +2891,7 @@ int drm_fb_helper_fbdev_setup(struct drm_device *dev,
+ return 0;
+
+ err_drm_fb_helper_fini:
+- drm_fb_helper_fini(fb_helper);
++ drm_fb_helper_fbdev_teardown(dev);
+
+ return ret;
+ }
+@@ -3170,9 +3170,7 @@ static void drm_fbdev_client_unregister(struct drm_client_dev *client)
+
+ static int drm_fbdev_client_restore(struct drm_client_dev *client)
+ {
+- struct drm_fb_helper *fb_helper = drm_fb_helper_from_client(client);
+-
+- drm_fb_helper_restore_fbdev_mode_unlocked(fb_helper);
++ drm_fb_helper_lastclose(client->dev);
+
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/drm_mode_object.c b/drivers/gpu/drm/drm_mode_object.c
+index 004191d01772..15b919f90c5a 100644
+--- a/drivers/gpu/drm/drm_mode_object.c
++++ b/drivers/gpu/drm/drm_mode_object.c
+@@ -465,6 +465,7 @@ static int set_property_atomic(struct drm_mode_object *obj,
+
+ drm_modeset_acquire_init(&ctx, 0);
+ state->acquire_ctx = &ctx;
++
+ retry:
+ if (prop == state->dev->mode_config.dpms_property) {
+ if (obj->type != DRM_MODE_OBJECT_CONNECTOR) {
+diff --git a/drivers/gpu/drm/drm_plane.c b/drivers/gpu/drm/drm_plane.c
+index 5f650d8fc66b..4cfb56893b7f 100644
+--- a/drivers/gpu/drm/drm_plane.c
++++ b/drivers/gpu/drm/drm_plane.c
+@@ -220,6 +220,9 @@ int drm_universal_plane_init(struct drm_device *dev, struct drm_plane *plane,
+ format_modifier_count++;
+ }
+
++ if (format_modifier_count)
++ config->allow_fb_modifiers = true;
++
+ plane->modifier_count = format_modifier_count;
+ plane->modifiers = kmalloc_array(format_modifier_count,
+ sizeof(format_modifiers[0]),
+diff --git a/drivers/gpu/drm/i915/gvt/cmd_parser.c b/drivers/gpu/drm/i915/gvt/cmd_parser.c
+index 77ae634eb11c..bd95fd6b4ac8 100644
+--- a/drivers/gpu/drm/i915/gvt/cmd_parser.c
++++ b/drivers/gpu/drm/i915/gvt/cmd_parser.c
+@@ -1446,7 +1446,7 @@ static inline int cmd_address_audit(struct parser_exec_state *s,
+ }
+
+ if (index_mode) {
+- if (guest_gma >= I915_GTT_PAGE_SIZE / sizeof(u64)) {
++ if (guest_gma >= I915_GTT_PAGE_SIZE) {
+ ret = -EFAULT;
+ goto err;
+ }
+diff --git a/drivers/gpu/drm/i915/gvt/gtt.c b/drivers/gpu/drm/i915/gvt/gtt.c
+index c7103dd2d8d5..563ab8590061 100644
+--- a/drivers/gpu/drm/i915/gvt/gtt.c
++++ b/drivers/gpu/drm/i915/gvt/gtt.c
+@@ -1942,7 +1942,7 @@ void _intel_vgpu_mm_release(struct kref *mm_ref)
+ */
+ void intel_vgpu_unpin_mm(struct intel_vgpu_mm *mm)
+ {
+- atomic_dec(&mm->pincount);
++ atomic_dec_if_positive(&mm->pincount);
+ }
+
+ /**
+diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
+index 55bb7885e228..8fff49affc11 100644
+--- a/drivers/gpu/drm/i915/gvt/scheduler.c
++++ b/drivers/gpu/drm/i915/gvt/scheduler.c
+@@ -1475,8 +1475,9 @@ intel_vgpu_create_workload(struct intel_vgpu *vgpu, int ring_id,
+ intel_runtime_pm_put(dev_priv);
+ }
+
+- if (ret && (vgpu_is_vm_unhealthy(ret))) {
+- enter_failsafe_mode(vgpu, GVT_FAILSAFE_GUEST_ERR);
++ if (ret) {
++ if (vgpu_is_vm_unhealthy(ret))
++ enter_failsafe_mode(vgpu, GVT_FAILSAFE_GUEST_ERR);
+ intel_vgpu_destroy_workload(workload);
+ return ERR_PTR(ret);
+ }
+diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
+index b1c31967194b..489c1e656ff6 100644
+--- a/drivers/gpu/drm/i915/i915_drv.h
++++ b/drivers/gpu/drm/i915/i915_drv.h
+@@ -2293,7 +2293,8 @@ intel_info(const struct drm_i915_private *dev_priv)
+ INTEL_DEVID(dev_priv) == 0x5915 || \
+ INTEL_DEVID(dev_priv) == 0x591E)
+ #define IS_AML_ULX(dev_priv) (INTEL_DEVID(dev_priv) == 0x591C || \
+- INTEL_DEVID(dev_priv) == 0x87C0)
++ INTEL_DEVID(dev_priv) == 0x87C0 || \
++ INTEL_DEVID(dev_priv) == 0x87CA)
+ #define IS_SKL_GT2(dev_priv) (IS_SKYLAKE(dev_priv) && \
+ (dev_priv)->info.gt == 2)
+ #define IS_SKL_GT3(dev_priv) (IS_SKYLAKE(dev_priv) && \
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index 067054cf4a86..60bed3f27775 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -9205,7 +9205,7 @@ enum skl_power_gate {
+ #define TRANS_DDI_FUNC_CTL2(tran) _MMIO_TRANS2(tran, \
+ _TRANS_DDI_FUNC_CTL2_A)
+ #define PORT_SYNC_MODE_ENABLE (1 << 4)
+-#define PORT_SYNC_MODE_MASTER_SELECT(x) ((x) < 0)
++#define PORT_SYNC_MODE_MASTER_SELECT(x) ((x) << 0)
+ #define PORT_SYNC_MODE_MASTER_SELECT_MASK (0x7 << 0)
+ #define PORT_SYNC_MODE_MASTER_SELECT_SHIFT 0
+
+diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
+index 22a74608c6e4..dcd1df5322e8 100644
+--- a/drivers/gpu/drm/i915/intel_dp.c
++++ b/drivers/gpu/drm/i915/intel_dp.c
+@@ -1845,42 +1845,6 @@ intel_dp_compute_link_config_wide(struct intel_dp *intel_dp,
+ return false;
+ }
+
+-/* Optimize link config in order: max bpp, min lanes, min clock */
+-static bool
+-intel_dp_compute_link_config_fast(struct intel_dp *intel_dp,
+- struct intel_crtc_state *pipe_config,
+- const struct link_config_limits *limits)
+-{
+- struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
+- int bpp, clock, lane_count;
+- int mode_rate, link_clock, link_avail;
+-
+- for (bpp = limits->max_bpp; bpp >= limits->min_bpp; bpp -= 2 * 3) {
+- mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
+- bpp);
+-
+- for (lane_count = limits->min_lane_count;
+- lane_count <= limits->max_lane_count;
+- lane_count <<= 1) {
+- for (clock = limits->min_clock; clock <= limits->max_clock; clock++) {
+- link_clock = intel_dp->common_rates[clock];
+- link_avail = intel_dp_max_data_rate(link_clock,
+- lane_count);
+-
+- if (mode_rate <= link_avail) {
+- pipe_config->lane_count = lane_count;
+- pipe_config->pipe_bpp = bpp;
+- pipe_config->port_clock = link_clock;
+-
+- return true;
+- }
+- }
+- }
+- }
+-
+- return false;
+-}
+-
+ static int intel_dp_dsc_compute_bpp(struct intel_dp *intel_dp, u8 dsc_max_bpc)
+ {
+ int i, num_bpc;
+@@ -2013,15 +1977,13 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
+ limits.min_bpp = 6 * 3;
+ limits.max_bpp = intel_dp_compute_bpp(intel_dp, pipe_config);
+
+- if (intel_dp_is_edp(intel_dp) && intel_dp->edp_dpcd[0] < DP_EDP_14) {
++ if (intel_dp_is_edp(intel_dp)) {
+ /*
+ * Use the maximum clock and number of lanes the eDP panel
+- * advertizes being capable of. The eDP 1.3 and earlier panels
+- * are generally designed to support only a single clock and
+- * lane configuration, and typically these values correspond to
+- * the native resolution of the panel. With eDP 1.4 rate select
+- * and DSC, this is decreasingly the case, and we need to be
+- * able to select less than maximum link config.
++ * advertises being capable of. The panels are generally
++ * designed to support only a single clock and lane
++ * configuration, and typically these values correspond to the
++ * native resolution of the panel.
+ */
+ limits.min_lane_count = limits.max_lane_count;
+ limits.min_clock = limits.max_clock;
+@@ -2035,22 +1997,11 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
+ intel_dp->common_rates[limits.max_clock],
+ limits.max_bpp, adjusted_mode->crtc_clock);
+
+- if (intel_dp_is_edp(intel_dp))
+- /*
+- * Optimize for fast and narrow. eDP 1.3 section 3.3 and eDP 1.4
+- * section A.1: "It is recommended that the minimum number of
+- * lanes be used, using the minimum link rate allowed for that
+- * lane configuration."
+- *
+- * Note that we use the max clock and lane count for eDP 1.3 and
+- * earlier, and fast vs. wide is irrelevant.
+- */
+- ret = intel_dp_compute_link_config_fast(intel_dp, pipe_config,
+- &limits);
+- else
+- /* Optimize for slow and wide. */
+- ret = intel_dp_compute_link_config_wide(intel_dp, pipe_config,
+- &limits);
++ /*
++ * Optimize for slow and wide. This is the place to add alternative
++ * optimization policy.
++ */
++ ret = intel_dp_compute_link_config_wide(intel_dp, pipe_config, &limits);
+
+ /* enable compression if the mode doesn't fit available BW */
+ if (!ret) {
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
+index cb307a2abf06..7316b4ab1b85 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
+@@ -23,11 +23,14 @@ struct dpu_mdss {
+ struct dpu_irq_controller irq_controller;
+ };
+
+-static irqreturn_t dpu_mdss_irq(int irq, void *arg)
++static void dpu_mdss_irq(struct irq_desc *desc)
+ {
+- struct dpu_mdss *dpu_mdss = arg;
++ struct dpu_mdss *dpu_mdss = irq_desc_get_handler_data(desc);
++ struct irq_chip *chip = irq_desc_get_chip(desc);
+ u32 interrupts;
+
++ chained_irq_enter(chip, desc);
++
+ interrupts = readl_relaxed(dpu_mdss->mmio + HW_INTR_STATUS);
+
+ while (interrupts) {
+@@ -39,20 +42,20 @@ static irqreturn_t dpu_mdss_irq(int irq, void *arg)
+ hwirq);
+ if (mapping == 0) {
+ DRM_ERROR("couldn't find irq mapping for %lu\n", hwirq);
+- return IRQ_NONE;
++ break;
+ }
+
+ rc = generic_handle_irq(mapping);
+ if (rc < 0) {
+ DRM_ERROR("handle irq fail: irq=%lu mapping=%u rc=%d\n",
+ hwirq, mapping, rc);
+- return IRQ_NONE;
++ break;
+ }
+
+ interrupts &= ~(1 << hwirq);
+ }
+
+- return IRQ_HANDLED;
++ chained_irq_exit(chip, desc);
+ }
+
+ static void dpu_mdss_irq_mask(struct irq_data *irqd)
+@@ -83,16 +86,16 @@ static struct irq_chip dpu_mdss_irq_chip = {
+ .irq_unmask = dpu_mdss_irq_unmask,
+ };
+
++static struct lock_class_key dpu_mdss_lock_key, dpu_mdss_request_key;
++
+ static int dpu_mdss_irqdomain_map(struct irq_domain *domain,
+ unsigned int irq, irq_hw_number_t hwirq)
+ {
+ struct dpu_mdss *dpu_mdss = domain->host_data;
+- int ret;
+
++ irq_set_lockdep_class(irq, &dpu_mdss_lock_key, &dpu_mdss_request_key);
+ irq_set_chip_and_handler(irq, &dpu_mdss_irq_chip, handle_level_irq);
+- ret = irq_set_chip_data(irq, dpu_mdss);
+-
+- return ret;
++ return irq_set_chip_data(irq, dpu_mdss);
+ }
+
+ static const struct irq_domain_ops dpu_mdss_irqdomain_ops = {
+@@ -159,11 +162,13 @@ static void dpu_mdss_destroy(struct drm_device *dev)
+ struct msm_drm_private *priv = dev->dev_private;
+ struct dpu_mdss *dpu_mdss = to_dpu_mdss(priv->mdss);
+ struct dss_module_power *mp = &dpu_mdss->mp;
++ int irq;
+
+ pm_runtime_suspend(dev->dev);
+ pm_runtime_disable(dev->dev);
+ _dpu_mdss_irq_domain_fini(dpu_mdss);
+- free_irq(platform_get_irq(pdev, 0), dpu_mdss);
++ irq = platform_get_irq(pdev, 0);
++ irq_set_chained_handler_and_data(irq, NULL, NULL);
+ msm_dss_put_clk(mp->clk_config, mp->num_clk);
+ devm_kfree(&pdev->dev, mp->clk_config);
+
+@@ -187,6 +192,7 @@ int dpu_mdss_init(struct drm_device *dev)
+ struct dpu_mdss *dpu_mdss;
+ struct dss_module_power *mp;
+ int ret = 0;
++ int irq;
+
+ dpu_mdss = devm_kzalloc(dev->dev, sizeof(*dpu_mdss), GFP_KERNEL);
+ if (!dpu_mdss)
+@@ -219,12 +225,12 @@ int dpu_mdss_init(struct drm_device *dev)
+ if (ret)
+ goto irq_domain_error;
+
+- ret = request_irq(platform_get_irq(pdev, 0),
+- dpu_mdss_irq, 0, "dpu_mdss_isr", dpu_mdss);
+- if (ret) {
+- DPU_ERROR("failed to init irq: %d\n", ret);
++ irq = platform_get_irq(pdev, 0);
++ if (irq < 0)
+ goto irq_error;
+- }
++
++ irq_set_chained_handler_and_data(irq, dpu_mdss_irq,
++ dpu_mdss);
+
+ pm_runtime_enable(dev->dev);
+
+diff --git a/drivers/gpu/drm/nouveau/dispnv04/tvnv17.c b/drivers/gpu/drm/nouveau/dispnv04/tvnv17.c
+index 6a4ca139cf5d..8fd8124d72ba 100644
+--- a/drivers/gpu/drm/nouveau/dispnv04/tvnv17.c
++++ b/drivers/gpu/drm/nouveau/dispnv04/tvnv17.c
+@@ -750,7 +750,9 @@ static int nv17_tv_set_property(struct drm_encoder *encoder,
+ /* Disable the crtc to ensure a full modeset is
+ * performed whenever it's turned on again. */
+ if (crtc)
+- drm_crtc_force_disable(crtc);
++ drm_crtc_helper_set_mode(crtc, &crtc->mode,
++ crtc->x, crtc->y,
++ crtc->primary->fb);
+ }
+
+ return 0;
+diff --git a/drivers/gpu/drm/radeon/evergreen_cs.c b/drivers/gpu/drm/radeon/evergreen_cs.c
+index f471537c852f..1e14c6921454 100644
+--- a/drivers/gpu/drm/radeon/evergreen_cs.c
++++ b/drivers/gpu/drm/radeon/evergreen_cs.c
+@@ -1299,6 +1299,7 @@ static int evergreen_cs_handle_reg(struct radeon_cs_parser *p, u32 reg, u32 idx)
+ return -EINVAL;
+ }
+ ib[idx] += (u32)((reloc->gpu_offset >> 8) & 0xffffffff);
++ break;
+ case CB_TARGET_MASK:
+ track->cb_target_mask = radeon_get_ib_value(p, idx);
+ track->cb_dirty = true;
+diff --git a/drivers/gpu/drm/rcar-du/rcar_du_kms.c b/drivers/gpu/drm/rcar-du/rcar_du_kms.c
+index 9c7007d45408..f9a90ff24e6d 100644
+--- a/drivers/gpu/drm/rcar-du/rcar_du_kms.c
++++ b/drivers/gpu/drm/rcar-du/rcar_du_kms.c
+@@ -331,6 +331,7 @@ static int rcar_du_encoders_init_one(struct rcar_du_device *rcdu,
+ dev_dbg(rcdu->dev,
+ "connected entity %pOF is disabled, skipping\n",
+ entity);
++ of_node_put(entity);
+ return -ENODEV;
+ }
+
+@@ -366,6 +367,7 @@ static int rcar_du_encoders_init_one(struct rcar_du_device *rcdu,
+ dev_warn(rcdu->dev,
+ "no encoder found for endpoint %pOF, skipping\n",
+ ep->local_node);
++ of_node_put(entity);
+ return -ENODEV;
+ }
+
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+index fb70fb486fbf..cdbb47566cac 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+@@ -511,6 +511,18 @@ static void vop_core_clks_disable(struct vop *vop)
+ clk_disable(vop->hclk);
+ }
+
++static void vop_win_disable(struct vop *vop, const struct vop_win_data *win)
++{
++ if (win->phy->scl && win->phy->scl->ext) {
++ VOP_SCL_SET_EXT(vop, win, yrgb_hor_scl_mode, SCALE_NONE);
++ VOP_SCL_SET_EXT(vop, win, yrgb_ver_scl_mode, SCALE_NONE);
++ VOP_SCL_SET_EXT(vop, win, cbcr_hor_scl_mode, SCALE_NONE);
++ VOP_SCL_SET_EXT(vop, win, cbcr_ver_scl_mode, SCALE_NONE);
++ }
++
++ VOP_WIN_SET(vop, win, enable, 0);
++}
++
+ static int vop_enable(struct drm_crtc *crtc)
+ {
+ struct vop *vop = to_vop(crtc);
+@@ -556,7 +568,7 @@ static int vop_enable(struct drm_crtc *crtc)
+ struct vop_win *vop_win = &vop->win[i];
+ const struct vop_win_data *win = vop_win->data;
+
+- VOP_WIN_SET(vop, win, enable, 0);
++ vop_win_disable(vop, win);
+ }
+ spin_unlock(&vop->reg_lock);
+
+@@ -700,7 +712,7 @@ static void vop_plane_atomic_disable(struct drm_plane *plane,
+
+ spin_lock(&vop->reg_lock);
+
+- VOP_WIN_SET(vop, win, enable, 0);
++ vop_win_disable(vop, win);
+
+ spin_unlock(&vop->reg_lock);
+ }
+@@ -1476,7 +1488,7 @@ static int vop_initial(struct vop *vop)
+ int channel = i * 2 + 1;
+
+ VOP_WIN_SET(vop, win, channel, (channel + 1) << 4 | channel);
+- VOP_WIN_SET(vop, win, enable, 0);
++ vop_win_disable(vop, win);
+ VOP_WIN_SET(vop, win, gate, 1);
+ }
+
+diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
+index e2942c9a11a7..35ddbec1375a 100644
+--- a/drivers/gpu/drm/scheduler/sched_entity.c
++++ b/drivers/gpu/drm/scheduler/sched_entity.c
+@@ -52,12 +52,12 @@ int drm_sched_entity_init(struct drm_sched_entity *entity,
+ {
+ int i;
+
+- if (!(entity && rq_list && num_rq_list > 0 && rq_list[0]))
++ if (!(entity && rq_list && (num_rq_list == 0 || rq_list[0])))
+ return -EINVAL;
+
+ memset(entity, 0, sizeof(struct drm_sched_entity));
+ INIT_LIST_HEAD(&entity->list);
+- entity->rq = rq_list[0];
++ entity->rq = NULL;
+ entity->guilty = guilty;
+ entity->num_rq_list = num_rq_list;
+ entity->rq_list = kcalloc(num_rq_list, sizeof(struct drm_sched_rq *),
+@@ -67,6 +67,10 @@ int drm_sched_entity_init(struct drm_sched_entity *entity,
+
+ for (i = 0; i < num_rq_list; ++i)
+ entity->rq_list[i] = rq_list[i];
++
++ if (num_rq_list)
++ entity->rq = rq_list[0];
++
+ entity->last_scheduled = NULL;
+
+ spin_lock_init(&entity->rq_lock);
+@@ -165,6 +169,9 @@ long drm_sched_entity_flush(struct drm_sched_entity *entity, long timeout)
+ struct task_struct *last_user;
+ long ret = timeout;
+
++ if (!entity->rq)
++ return 0;
++
+ sched = entity->rq->sched;
+ /**
+ * The client will not queue more IBs during this fini, consume existing
+@@ -264,20 +271,24 @@ static void drm_sched_entity_kill_jobs(struct drm_sched_entity *entity)
+ */
+ void drm_sched_entity_fini(struct drm_sched_entity *entity)
+ {
+- struct drm_gpu_scheduler *sched;
++ struct drm_gpu_scheduler *sched = NULL;
+
+- sched = entity->rq->sched;
+- drm_sched_rq_remove_entity(entity->rq, entity);
++ if (entity->rq) {
++ sched = entity->rq->sched;
++ drm_sched_rq_remove_entity(entity->rq, entity);
++ }
+
+ /* Consumption of existing IBs wasn't completed. Forcefully
+ * remove them here.
+ */
+ if (spsc_queue_peek(&entity->job_queue)) {
+- /* Park the kernel for a moment to make sure it isn't processing
+- * our enity.
+- */
+- kthread_park(sched->thread);
+- kthread_unpark(sched->thread);
++ if (sched) {
++ /* Park the kernel thread for a moment to make sure it isn't
++ * processing our entity.
++ */
++ kthread_park(sched->thread);
++ kthread_unpark(sched->thread);
++ }
+ if (entity->dependency) {
+ dma_fence_remove_callback(entity->dependency,
+ &entity->cb);
+@@ -362,9 +373,11 @@ void drm_sched_entity_set_priority(struct drm_sched_entity *entity,
+ for (i = 0; i < entity->num_rq_list; ++i)
+ drm_sched_entity_set_rq_priority(&entity->rq_list[i], priority);
+
+- drm_sched_rq_remove_entity(entity->rq, entity);
+- drm_sched_entity_set_rq_priority(&entity->rq, priority);
+- drm_sched_rq_add_entity(entity->rq, entity);
++ if (entity->rq) {
++ drm_sched_rq_remove_entity(entity->rq, entity);
++ drm_sched_entity_set_rq_priority(&entity->rq, priority);
++ drm_sched_rq_add_entity(entity->rq, entity);
++ }
+
+ spin_unlock(&entity->rq_lock);
+ }
+diff --git a/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c b/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c
+index dc47720c99ba..39d8509d96a0 100644
+--- a/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c
++++ b/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c
+@@ -48,8 +48,13 @@ static enum drm_mode_status
+ sun8i_dw_hdmi_mode_valid_h6(struct drm_connector *connector,
+ const struct drm_display_mode *mode)
+ {
+- /* This is max for HDMI 2.0b (4K@60Hz) */
+- if (mode->clock > 594000)
++ /*
++ * The controller supports at most 594 MHz, which corresponds to
++ * 4K@60Hz 4:4:4 or RGB. However, for frequencies greater than
++ * 340 MHz scrambling has to be enabled. Because scrambling is
++ * not yet implemented, just limit to 340 MHz for now.
++ */
++ if (mode->clock > 340000)
+ return MODE_CLOCK_HIGH;
+
+ return MODE_OK;
+diff --git a/drivers/gpu/drm/udl/udl_drv.c b/drivers/gpu/drm/udl/udl_drv.c
+index a63e3011e971..bd4f0b88bbd7 100644
+--- a/drivers/gpu/drm/udl/udl_drv.c
++++ b/drivers/gpu/drm/udl/udl_drv.c
+@@ -51,6 +51,7 @@ static struct drm_driver driver = {
+ .driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_PRIME,
+ .load = udl_driver_load,
+ .unload = udl_driver_unload,
++ .release = udl_driver_release,
+
+ /* gem hooks */
+ .gem_free_object_unlocked = udl_gem_free_object,
+diff --git a/drivers/gpu/drm/udl/udl_drv.h b/drivers/gpu/drm/udl/udl_drv.h
+index e9e9b1ff678e..4ae67d882eae 100644
+--- a/drivers/gpu/drm/udl/udl_drv.h
++++ b/drivers/gpu/drm/udl/udl_drv.h
+@@ -104,6 +104,7 @@ void udl_urb_completion(struct urb *urb);
+
+ int udl_driver_load(struct drm_device *dev, unsigned long flags);
+ void udl_driver_unload(struct drm_device *dev);
++void udl_driver_release(struct drm_device *dev);
+
+ int udl_fbdev_init(struct drm_device *dev);
+ void udl_fbdev_cleanup(struct drm_device *dev);
+diff --git a/drivers/gpu/drm/udl/udl_main.c b/drivers/gpu/drm/udl/udl_main.c
+index 1b014d92855b..19055dda3140 100644
+--- a/drivers/gpu/drm/udl/udl_main.c
++++ b/drivers/gpu/drm/udl/udl_main.c
+@@ -378,6 +378,12 @@ void udl_driver_unload(struct drm_device *dev)
+ udl_free_urb_list(dev);
+
+ udl_fbdev_cleanup(dev);
+- udl_modeset_cleanup(dev);
+ kfree(udl);
+ }
++
++void udl_driver_release(struct drm_device *dev)
++{
++ udl_modeset_cleanup(dev);
++ drm_dev_fini(dev);
++ kfree(dev);
++}
+diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
+index 5930facd6d2d..11a8f99ba18c 100644
+--- a/drivers/gpu/drm/vgem/vgem_drv.c
++++ b/drivers/gpu/drm/vgem/vgem_drv.c
+@@ -191,13 +191,9 @@ static struct drm_gem_object *vgem_gem_create(struct drm_device *dev,
+ ret = drm_gem_handle_create(file, &obj->base, handle);
+ drm_gem_object_put_unlocked(&obj->base);
+ if (ret)
+- goto err;
++ return ERR_PTR(ret);
+
+ return &obj->base;
+-
+-err:
+- __vgem_gem_destroy(obj);
+- return ERR_PTR(ret);
+ }
+
+ static int vgem_gem_dumb_create(struct drm_file *file, struct drm_device *dev,
+diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
+index f39a183d59c2..e7e946035027 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_object.c
++++ b/drivers/gpu/drm/virtio/virtgpu_object.c
+@@ -28,10 +28,21 @@
+ static int virtio_gpu_resource_id_get(struct virtio_gpu_device *vgdev,
+ uint32_t *resid)
+ {
++#if 0
+ int handle = ida_alloc(&vgdev->resource_ida, GFP_KERNEL);
+
+ if (handle < 0)
+ return handle;
++#else
++ static int handle;
++
++ /*
++ * FIXME: dirty hack to avoid re-using IDs, virglrenderer
++ * can't deal with that. Needs fixing in virglrenderer, also
++ * should figure out a better way to handle that in the guest.
++ */
++ handle++;
++#endif
+
+ *resid = handle + 1;
+ return 0;
+@@ -39,7 +50,9 @@ static int virtio_gpu_resource_id_get(struct virtio_gpu_device *vgdev,
+
+ static void virtio_gpu_resource_id_put(struct virtio_gpu_device *vgdev, uint32_t id)
+ {
++#if 0
+ ida_free(&vgdev->resource_ida, id - 1);
++#endif
+ }
+
+ static void virtio_gpu_ttm_bo_destroy(struct ttm_buffer_object *tbo)
+diff --git a/drivers/gpu/drm/vkms/vkms_crtc.c b/drivers/gpu/drm/vkms/vkms_crtc.c
+index eb56ee893761..1054f535178a 100644
+--- a/drivers/gpu/drm/vkms/vkms_crtc.c
++++ b/drivers/gpu/drm/vkms/vkms_crtc.c
+@@ -4,13 +4,17 @@
+ #include <drm/drm_atomic_helper.h>
+ #include <drm/drm_crtc_helper.h>
+
+-static void _vblank_handle(struct vkms_output *output)
++static enum hrtimer_restart vkms_vblank_simulate(struct hrtimer *timer)
+ {
++ struct vkms_output *output = container_of(timer, struct vkms_output,
++ vblank_hrtimer);
+ struct drm_crtc *crtc = &output->crtc;
+ struct vkms_crtc_state *state = to_vkms_crtc_state(crtc->state);
++ int ret_overrun;
+ bool ret;
+
+ spin_lock(&output->lock);
++
+ ret = drm_crtc_handle_vblank(crtc);
+ if (!ret)
+ DRM_ERROR("vkms failure on handling vblank");
+@@ -31,19 +35,9 @@ static void _vblank_handle(struct vkms_output *output)
+ DRM_WARN("failed to queue vkms_crc_work_handle");
+ }
+
+- spin_unlock(&output->lock);
+-}
+-
+-static enum hrtimer_restart vkms_vblank_simulate(struct hrtimer *timer)
+-{
+- struct vkms_output *output = container_of(timer, struct vkms_output,
+- vblank_hrtimer);
+- int ret_overrun;
+-
+- _vblank_handle(output);
+-
+ ret_overrun = hrtimer_forward_now(&output->vblank_hrtimer,
+ output->period_ns);
++ spin_unlock(&output->lock);
+
+ return HRTIMER_RESTART;
+ }
+@@ -81,6 +75,9 @@ bool vkms_get_vblank_timestamp(struct drm_device *dev, unsigned int pipe,
+
+ *vblank_time = output->vblank_hrtimer.node.expires;
+
++ if (!in_vblank_irq)
++ *vblank_time -= output->period_ns;
++
+ return true;
+ }
+
+@@ -98,6 +95,7 @@ static void vkms_atomic_crtc_reset(struct drm_crtc *crtc)
+ vkms_state = kzalloc(sizeof(*vkms_state), GFP_KERNEL);
+ if (!vkms_state)
+ return;
++ INIT_WORK(&vkms_state->crc_work, vkms_crc_work_handle);
+
+ crtc->state = &vkms_state->base;
+ crtc->state->crtc = crtc;
+diff --git a/drivers/gpu/drm/vkms/vkms_gem.c b/drivers/gpu/drm/vkms/vkms_gem.c
+index 138b0bb325cf..69048e73377d 100644
+--- a/drivers/gpu/drm/vkms/vkms_gem.c
++++ b/drivers/gpu/drm/vkms/vkms_gem.c
+@@ -111,11 +111,8 @@ struct drm_gem_object *vkms_gem_create(struct drm_device *dev,
+
+ ret = drm_gem_handle_create(file, &obj->gem, handle);
+ drm_gem_object_put_unlocked(&obj->gem);
+- if (ret) {
+- drm_gem_object_release(&obj->gem);
+- kfree(obj);
++ if (ret)
+ return ERR_PTR(ret);
+- }
+
+ return &obj->gem;
+ }
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c b/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
+index b913a56f3426..2a9112515f46 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
+@@ -564,11 +564,9 @@ static int vmw_fb_set_par(struct fb_info *info)
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_PVSYNC)
+ };
+- struct drm_display_mode *old_mode;
+ struct drm_display_mode *mode;
+ int ret;
+
+- old_mode = par->set_mode;
+ mode = drm_mode_duplicate(vmw_priv->dev, &new_mode);
+ if (!mode) {
+ DRM_ERROR("Could not create new fb mode.\n");
+@@ -579,11 +577,7 @@ static int vmw_fb_set_par(struct fb_info *info)
+ mode->vdisplay = var->yres;
+ vmw_guess_mode_timing(mode);
+
+- if (old_mode && drm_mode_equal(old_mode, mode)) {
+- drm_mode_destroy(vmw_priv->dev, mode);
+- mode = old_mode;
+- old_mode = NULL;
+- } else if (!vmw_kms_validate_mode_vram(vmw_priv,
++ if (!vmw_kms_validate_mode_vram(vmw_priv,
+ mode->hdisplay *
+ DIV_ROUND_UP(var->bits_per_pixel, 8),
+ mode->vdisplay)) {
+@@ -620,8 +614,8 @@ static int vmw_fb_set_par(struct fb_info *info)
+ schedule_delayed_work(&par->local_work, 0);
+
+ out_unlock:
+- if (old_mode)
+- drm_mode_destroy(vmw_priv->dev, old_mode);
++ if (par->set_mode)
++ drm_mode_destroy(vmw_priv->dev, par->set_mode);
+ par->set_mode = mode;
+
+ mutex_unlock(&par->bo_mutex);
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c b/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
+index b93c558dd86e..7da752ca1c34 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
+@@ -57,7 +57,7 @@ static int vmw_gmrid_man_get_node(struct ttm_mem_type_manager *man,
+
+ id = ida_alloc_max(&gman->gmr_ida, gman->max_gmr_ids - 1, GFP_KERNEL);
+ if (id < 0)
+- return id;
++ return (id != -ENOMEM ? 0 : id);
+
+ spin_lock(&gman->lock);
+
+diff --git a/drivers/hid/hid-logitech-hidpp.c b/drivers/hid/hid-logitech-hidpp.c
+index 15ed6177a7a3..f040c8a7f9a9 100644
+--- a/drivers/hid/hid-logitech-hidpp.c
++++ b/drivers/hid/hid-logitech-hidpp.c
+@@ -2608,8 +2608,9 @@ static int m560_raw_event(struct hid_device *hdev, u8 *data, int size)
+ input_report_rel(mydata->input, REL_Y, v);
+
+ v = hid_snto32(data[6], 8);
+- hidpp_scroll_counter_handle_scroll(
+- &hidpp->vertical_wheel_counter, v);
++ if (v != 0)
++ hidpp_scroll_counter_handle_scroll(
++ &hidpp->vertical_wheel_counter, v);
+
+ input_sync(mydata->input);
+ }
+diff --git a/drivers/hid/intel-ish-hid/ipc/ipc.c b/drivers/hid/intel-ish-hid/ipc/ipc.c
+index 742191bb24c6..45e33c7ba9a6 100644
+--- a/drivers/hid/intel-ish-hid/ipc/ipc.c
++++ b/drivers/hid/intel-ish-hid/ipc/ipc.c
+@@ -91,7 +91,10 @@ static bool check_generated_interrupt(struct ishtp_device *dev)
+ IPC_INT_FROM_ISH_TO_HOST_CHV_AB(pisr_val);
+ } else {
+ pisr_val = ish_reg_read(dev, IPC_REG_PISR_BXT);
+- interrupt_generated = IPC_INT_FROM_ISH_TO_HOST_BXT(pisr_val);
++ interrupt_generated = !!pisr_val;
++ /* only busy-clear bit is RW, others are RO */
++ if (pisr_val)
++ ish_reg_write(dev, IPC_REG_PISR_BXT, pisr_val);
+ }
+
+ return interrupt_generated;
+@@ -839,11 +842,11 @@ int ish_hw_start(struct ishtp_device *dev)
+ {
+ ish_set_host_rdy(dev);
+
++ set_host_ready(dev);
++
+ /* After that we can enable ISH DMA operation and wakeup ISHFW */
+ ish_wakeup(dev);
+
+- set_host_ready(dev);
+-
+ /* wait for FW-initiated reset flow */
+ if (!dev->recvd_hw_ready)
+ wait_event_interruptible_timeout(dev->wait_hw_ready,
+diff --git a/drivers/hid/intel-ish-hid/ishtp/bus.c b/drivers/hid/intel-ish-hid/ishtp/bus.c
+index 728dc6d4561a..a271d6d169b1 100644
+--- a/drivers/hid/intel-ish-hid/ishtp/bus.c
++++ b/drivers/hid/intel-ish-hid/ishtp/bus.c
+@@ -675,7 +675,8 @@ int ishtp_cl_device_bind(struct ishtp_cl *cl)
+ spin_lock_irqsave(&cl->dev->device_list_lock, flags);
+ list_for_each_entry(cl_device, &cl->dev->device_list,
+ device_link) {
+- if (cl_device->fw_client->client_id == cl->fw_client_id) {
++ if (cl_device->fw_client &&
++ cl_device->fw_client->client_id == cl->fw_client_id) {
+ cl->device = cl_device;
+ rv = 0;
+ break;
+@@ -735,6 +736,7 @@ void ishtp_bus_remove_all_clients(struct ishtp_device *ishtp_dev,
+ spin_lock_irqsave(&ishtp_dev->device_list_lock, flags);
+ list_for_each_entry_safe(cl_device, n, &ishtp_dev->device_list,
+ device_link) {
++ cl_device->fw_client = NULL;
+ if (warm_reset && cl_device->reference_count)
+ continue;
+
+diff --git a/drivers/hwmon/Kconfig b/drivers/hwmon/Kconfig
+index 6f929bfa9fcd..d0f1dfe2bcbb 100644
+--- a/drivers/hwmon/Kconfig
++++ b/drivers/hwmon/Kconfig
+@@ -1759,6 +1759,7 @@ config SENSORS_VT8231
+ config SENSORS_W83773G
+ tristate "Nuvoton W83773G"
+ depends on I2C
++ select REGMAP_I2C
+ help
+ If you say yes here you get support for the Nuvoton W83773G hardware
+ monitoring chip.
+diff --git a/drivers/hwmon/occ/common.c b/drivers/hwmon/occ/common.c
+index 391118c8aae8..c888f4aca45c 100644
+--- a/drivers/hwmon/occ/common.c
++++ b/drivers/hwmon/occ/common.c
+@@ -889,6 +889,8 @@ static int occ_setup_sensor_attrs(struct occ *occ)
+ s++;
+ }
+ }
++
++ s = (sensors->power.num_sensors * 4) + 1;
+ } else {
+ for (i = 0; i < sensors->power.num_sensors; ++i) {
+ s = i + 1;
+@@ -917,11 +919,11 @@ static int occ_setup_sensor_attrs(struct occ *occ)
+ show_power, NULL, 3, i);
+ attr++;
+ }
+- }
+
+- if (sensors->caps.num_sensors >= 1) {
+ s = sensors->power.num_sensors + 1;
++ }
+
++ if (sensors->caps.num_sensors >= 1) {
+ snprintf(attr->name, sizeof(attr->name), "power%d_label", s);
+ attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_caps, NULL,
+ 0, 0);
+diff --git a/drivers/hwtracing/coresight/coresight-etm-perf.c b/drivers/hwtracing/coresight/coresight-etm-perf.c
+index abe8249b893b..f21eb28b6782 100644
+--- a/drivers/hwtracing/coresight/coresight-etm-perf.c
++++ b/drivers/hwtracing/coresight/coresight-etm-perf.c
+@@ -177,15 +177,15 @@ static void etm_free_aux(void *data)
+ schedule_work(&event_data->work);
+ }
+
+-static void *etm_setup_aux(int event_cpu, void **pages,
++static void *etm_setup_aux(struct perf_event *event, void **pages,
+ int nr_pages, bool overwrite)
+ {
+- int cpu;
++ int cpu = event->cpu;
+ cpumask_t *mask;
+ struct coresight_device *sink;
+ struct etm_event_data *event_data = NULL;
+
+- event_data = alloc_event_data(event_cpu);
++ event_data = alloc_event_data(cpu);
+ if (!event_data)
+ return NULL;
+ INIT_WORK(&event_data->work, free_event_data);
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x.c b/drivers/hwtracing/coresight/coresight-etm4x.c
+index 53e2fb6e86f6..fe76b176974a 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x.c
++++ b/drivers/hwtracing/coresight/coresight-etm4x.c
+@@ -55,7 +55,8 @@ static void etm4_os_unlock(struct etmv4_drvdata *drvdata)
+
+ static bool etm4_arch_supported(u8 arch)
+ {
+- switch (arch) {
++ /* Mask out the minor version number */
++ switch (arch & 0xf0) {
+ case ETM_ARCH_V4:
+ break;
+ default:
+diff --git a/drivers/hwtracing/intel_th/gth.c b/drivers/hwtracing/intel_th/gth.c
+index 8426b7970c14..cc287cf6eb29 100644
+--- a/drivers/hwtracing/intel_th/gth.c
++++ b/drivers/hwtracing/intel_th/gth.c
+@@ -607,6 +607,7 @@ static void intel_th_gth_unassign(struct intel_th_device *thdev,
+ {
+ struct gth_device *gth = dev_get_drvdata(&thdev->dev);
+ int port = othdev->output.port;
++ int master;
+
+ if (thdev->host_mode)
+ return;
+@@ -615,6 +616,9 @@ static void intel_th_gth_unassign(struct intel_th_device *thdev,
+ othdev->output.port = -1;
+ othdev->output.active = false;
+ gth->output[port].output = NULL;
++ for (master = 0; master < TH_CONFIGURABLE_MASTERS; master++)
++ if (gth->master[master] == port)
++ gth->master[master] = -1;
+ spin_unlock(&gth->gth_lock);
+ }
+
+diff --git a/drivers/hwtracing/stm/core.c b/drivers/hwtracing/stm/core.c
+index 93ce3aa740a9..c7ba8acfd4d5 100644
+--- a/drivers/hwtracing/stm/core.c
++++ b/drivers/hwtracing/stm/core.c
+@@ -244,6 +244,9 @@ static int find_free_channels(unsigned long *bitmap, unsigned int start,
+ ;
+ if (i == width)
+ return pos;
++
++ /* step over [pos..pos+i) to continue search */
++ pos += i;
+ }
+
+ return -1;
+@@ -732,7 +735,7 @@ static int stm_char_policy_set_ioctl(struct stm_file *stmf, void __user *arg)
+ struct stm_device *stm = stmf->stm;
+ struct stp_policy_id *id;
+ char *ids[] = { NULL, NULL };
+- int ret = -EINVAL;
++ int ret = -EINVAL, wlimit = 1;
+ u32 size;
+
+ if (stmf->output.nr_chans)
+@@ -760,8 +763,10 @@ static int stm_char_policy_set_ioctl(struct stm_file *stmf, void __user *arg)
+ if (id->__reserved_0 || id->__reserved_1)
+ goto err_free;
+
+- if (id->width < 1 ||
+- id->width > PAGE_SIZE / stm->data->sw_mmiosz)
++ if (stm->data->sw_mmiosz)
++ wlimit = PAGE_SIZE / stm->data->sw_mmiosz;
++
++ if (id->width < 1 || id->width > wlimit)
+ goto err_free;
+
+ ids[0] = id->id;
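
The stm/core.c changes do two things: find_free_channels() now steps
past an already-scanned window (pos += i), and the policy ioctl only
derives the width limit by division when sw_mmiosz is non-zero. A
sketch of the second fix; width_ok() is an invented wrapper around the
same arithmetic:

#include <assert.h>

static int width_ok(unsigned int width, unsigned int page_size,
		    unsigned int sw_mmiosz)
{
	unsigned int wlimit = 1;

	if (sw_mmiosz)			/* avoids divide-by-zero for */
		wlimit = page_size / sw_mmiosz; /* devices with no MMIO */

	return width >= 1 && width <= wlimit;
}

int main(void)
{
	assert(width_ok(1, 4096, 0));	 /* previously divided by zero */
	assert(!width_ok(9, 4096, 512)); /* 4096/512 caps width at 8 */
	return 0;
}
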
+diff --git a/drivers/i2c/busses/i2c-designware-core.h b/drivers/i2c/busses/i2c-designware-core.h
+index b4a0b2b99a78..6b4ef1d38fb2 100644
+--- a/drivers/i2c/busses/i2c-designware-core.h
++++ b/drivers/i2c/busses/i2c-designware-core.h
+@@ -215,6 +215,7 @@
+ * @disable_int: function to disable all interrupts
+ * @init: function to initialize the I2C hardware
+ * @mode: operation mode - DW_IC_MASTER or DW_IC_SLAVE
++ * @suspended: set to true if the controller is suspended
+ *
+ * HCNT and LCNT parameters can be used if the platform knows more accurate
+ * values than the one computed based only on the input clock frequency.
+@@ -270,6 +271,7 @@ struct dw_i2c_dev {
+ int (*set_sda_hold_time)(struct dw_i2c_dev *dev);
+ int mode;
+ struct i2c_bus_recovery_info rinfo;
++ bool suspended;
+ };
+
+ #define ACCESS_SWAP 0x00000001
+diff --git a/drivers/i2c/busses/i2c-designware-master.c b/drivers/i2c/busses/i2c-designware-master.c
+index 8d1bc44d2530..bb8e3f149979 100644
+--- a/drivers/i2c/busses/i2c-designware-master.c
++++ b/drivers/i2c/busses/i2c-designware-master.c
+@@ -426,6 +426,12 @@ i2c_dw_xfer(struct i2c_adapter *adap, struct i2c_msg msgs[], int num)
+
+ pm_runtime_get_sync(dev->dev);
+
++ if (dev->suspended) {
++ dev_err(dev->dev, "Error %s call while suspended\n", __func__);
++ ret = -ESHUTDOWN;
++ goto done_nolock;
++ }
++
+ reinit_completion(&dev->cmd_complete);
+ dev->msgs = msgs;
+ dev->msgs_num = num;
+diff --git a/drivers/i2c/busses/i2c-designware-pcidrv.c b/drivers/i2c/busses/i2c-designware-pcidrv.c
+index d50f80487214..76810deb2de6 100644
+--- a/drivers/i2c/busses/i2c-designware-pcidrv.c
++++ b/drivers/i2c/busses/i2c-designware-pcidrv.c
+@@ -176,6 +176,7 @@ static int i2c_dw_pci_suspend(struct device *dev)
+ struct pci_dev *pdev = to_pci_dev(dev);
+ struct dw_i2c_dev *i_dev = pci_get_drvdata(pdev);
+
++ i_dev->suspended = true;
+ i_dev->disable(i_dev);
+
+ return 0;
+@@ -185,8 +186,12 @@ static int i2c_dw_pci_resume(struct device *dev)
+ {
+ struct pci_dev *pdev = to_pci_dev(dev);
+ struct dw_i2c_dev *i_dev = pci_get_drvdata(pdev);
++ int ret;
+
+- return i_dev->init(i_dev);
++ ret = i_dev->init(i_dev);
++ i_dev->suspended = false;
++
++ return ret;
+ }
+ #endif
+
+diff --git a/drivers/i2c/busses/i2c-designware-platdrv.c b/drivers/i2c/busses/i2c-designware-platdrv.c
+index 9eaac3be1f63..ead5e7de3e4d 100644
+--- a/drivers/i2c/busses/i2c-designware-platdrv.c
++++ b/drivers/i2c/busses/i2c-designware-platdrv.c
+@@ -454,6 +454,8 @@ static int dw_i2c_plat_suspend(struct device *dev)
+ {
+ struct dw_i2c_dev *i_dev = dev_get_drvdata(dev);
+
++ i_dev->suspended = true;
++
+ if (i_dev->shared_with_punit)
+ return 0;
+
+@@ -471,6 +473,7 @@ static int dw_i2c_plat_resume(struct device *dev)
+ i2c_dw_prepare_clk(i_dev, true);
+
+ i_dev->init(i_dev);
++ i_dev->suspended = false;
+
+ return 0;
+ }
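
The three i2c-designware hunks introduce a "suspended" flag so that a
transfer issued against a suspended controller fails fast instead of
touching powered-down registers. A minimal userspace sketch of that
guard, assuming nothing beyond the flag itself:

#include <assert.h>
#include <errno.h>
#include <stdbool.h>

struct ctrl {
	bool suspended;
};

static int xfer(struct ctrl *c)
{
	if (c->suspended)
		return -ESHUTDOWN;	/* the new early bail-out */
	/* ... program FIFOs, wait for completion ... */
	return 0;
}

int main(void)
{
	struct ctrl c = { .suspended = true };

	assert(xfer(&c) == -ESHUTDOWN);
	c.suspended = false;		/* resume() clears the flag last */
	assert(xfer(&c) == 0);
	return 0;
}
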
+diff --git a/drivers/i2c/busses/i2c-tegra.c b/drivers/i2c/busses/i2c-tegra.c
+index c77adbbea0c7..e85dc8583896 100644
+--- a/drivers/i2c/busses/i2c-tegra.c
++++ b/drivers/i2c/busses/i2c-tegra.c
+@@ -118,6 +118,9 @@
+ #define I2C_MST_FIFO_STATUS_TX_MASK 0xff0000
+ #define I2C_MST_FIFO_STATUS_TX_SHIFT 16
+
++/* Packet header size in bytes */
++#define I2C_PACKET_HEADER_SIZE 12
++
+ /*
+ * msg_end_type: The bus control which need to be send at end of transfer.
+ * @MSG_END_STOP: Send stop pulse at end of transfer.
+@@ -836,12 +839,13 @@ static const struct i2c_algorithm tegra_i2c_algo = {
+ /* payload size is only 12 bit */
+ static const struct i2c_adapter_quirks tegra_i2c_quirks = {
+ .flags = I2C_AQ_NO_ZERO_LEN,
+- .max_read_len = 4096,
+- .max_write_len = 4096,
++ .max_read_len = SZ_4K,
++ .max_write_len = SZ_4K - I2C_PACKET_HEADER_SIZE,
+ };
+
+ static const struct i2c_adapter_quirks tegra194_i2c_quirks = {
+ .flags = I2C_AQ_NO_ZERO_LEN,
++ .max_write_len = SZ_64K - I2C_PACKET_HEADER_SIZE,
+ };
+
+ static const struct tegra_i2c_hw_feature tegra20_i2c_hw = {
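
The Tegra quirk update reserves room for the 12-byte packet header
inside the transfer limit, so the usable write payload shrinks by
I2C_PACKET_HEADER_SIZE. The arithmetic below merely restates the new
limits:

#include <assert.h>

#define SZ_4K			4096
#define SZ_64K			65536
#define I2C_PACKET_HEADER_SIZE	12

int main(void)
{
	assert(SZ_4K - I2C_PACKET_HEADER_SIZE == 4084);
	assert(SZ_64K - I2C_PACKET_HEADER_SIZE == 65524);
	return 0;
}
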
+diff --git a/drivers/i2c/i2c-core-base.c b/drivers/i2c/i2c-core-base.c
+index 28460f6a60cc..af87a16ac3a5 100644
+--- a/drivers/i2c/i2c-core-base.c
++++ b/drivers/i2c/i2c-core-base.c
+@@ -430,7 +430,7 @@ static int i2c_device_remove(struct device *dev)
+ dev_pm_clear_wake_irq(&client->dev);
+ device_init_wakeup(&client->dev, false);
+
+- client->irq = 0;
++ client->irq = client->init_irq;
+
+ return status;
+ }
+@@ -741,10 +741,11 @@ i2c_new_device(struct i2c_adapter *adap, struct i2c_board_info const *info)
+ client->flags = info->flags;
+ client->addr = info->addr;
+
+- client->irq = info->irq;
+- if (!client->irq)
+- client->irq = i2c_dev_irq_from_resources(info->resources,
++ client->init_irq = info->irq;
++ if (!client->init_irq)
++ client->init_irq = i2c_dev_irq_from_resources(info->resources,
+ info->num_resources);
++ client->irq = client->init_irq;
+
+ strlcpy(client->name, info->type, sizeof(client->name));
+
+diff --git a/drivers/i2c/i2c-core-of.c b/drivers/i2c/i2c-core-of.c
+index 6cb7ad608bcd..0f01cdba9d2c 100644
+--- a/drivers/i2c/i2c-core-of.c
++++ b/drivers/i2c/i2c-core-of.c
+@@ -121,6 +121,17 @@ static int of_dev_node_match(struct device *dev, void *data)
+ return dev->of_node == data;
+ }
+
++static int of_dev_or_parent_node_match(struct device *dev, void *data)
++{
++ if (dev->of_node == data)
++ return 1;
++
++ if (dev->parent)
++ return dev->parent->of_node == data;
++
++ return 0;
++}
++
+ /* must call put_device() when done with returned i2c_client device */
+ struct i2c_client *of_find_i2c_device_by_node(struct device_node *node)
+ {
+@@ -145,7 +156,8 @@ struct i2c_adapter *of_find_i2c_adapter_by_node(struct device_node *node)
+ struct device *dev;
+ struct i2c_adapter *adapter;
+
+- dev = bus_find_device(&i2c_bus_type, NULL, node, of_dev_node_match);
++ dev = bus_find_device(&i2c_bus_type, NULL, node,
++ of_dev_or_parent_node_match);
+ if (!dev)
+ return NULL;
+
+diff --git a/drivers/iio/adc/exynos_adc.c b/drivers/iio/adc/exynos_adc.c
+index fa2d2b5767f3..1ca2c4d39f87 100644
+--- a/drivers/iio/adc/exynos_adc.c
++++ b/drivers/iio/adc/exynos_adc.c
+@@ -115,6 +115,7 @@
+ #define MAX_ADC_V2_CHANNELS 10
+ #define MAX_ADC_V1_CHANNELS 8
+ #define MAX_EXYNOS3250_ADC_CHANNELS 2
++#define MAX_EXYNOS4212_ADC_CHANNELS 4
+ #define MAX_S5PV210_ADC_CHANNELS 10
+
+ /* Bit definitions common for ADC_V1 and ADC_V2 */
+@@ -271,6 +272,19 @@ static void exynos_adc_v1_start_conv(struct exynos_adc *info,
+ writel(con1 | ADC_CON_EN_START, ADC_V1_CON(info->regs));
+ }
+
++/* Exynos4212 and 4412 is like ADCv1 but with four channels only */
++static const struct exynos_adc_data exynos4212_adc_data = {
++ .num_channels = MAX_EXYNOS4212_ADC_CHANNELS,
++ .mask = ADC_DATX_MASK, /* 12 bit ADC resolution */
++ .needs_adc_phy = true,
++ .phy_offset = EXYNOS_ADCV1_PHY_OFFSET,
++
++ .init_hw = exynos_adc_v1_init_hw,
++ .exit_hw = exynos_adc_v1_exit_hw,
++ .clear_irq = exynos_adc_v1_clear_irq,
++ .start_conv = exynos_adc_v1_start_conv,
++};
++
+ static const struct exynos_adc_data exynos_adc_v1_data = {
+ .num_channels = MAX_ADC_V1_CHANNELS,
+ .mask = ADC_DATX_MASK, /* 12 bit ADC resolution */
+@@ -492,6 +506,9 @@ static const struct of_device_id exynos_adc_match[] = {
+ }, {
+ .compatible = "samsung,s5pv210-adc",
+ .data = &exynos_adc_s5pv210_data,
++ }, {
++ .compatible = "samsung,exynos4212-adc",
++ .data = &exynos4212_adc_data,
+ }, {
+ .compatible = "samsung,exynos-adc-v1",
+ .data = &exynos_adc_v1_data,
+@@ -929,7 +946,7 @@ static int exynos_adc_remove(struct platform_device *pdev)
+ struct iio_dev *indio_dev = platform_get_drvdata(pdev);
+ struct exynos_adc *info = iio_priv(indio_dev);
+
+- if (IS_REACHABLE(CONFIG_INPUT)) {
++ if (IS_REACHABLE(CONFIG_INPUT) && info->input) {
+ free_irq(info->tsirq, info);
+ input_unregister_device(info->input);
+ }
+diff --git a/drivers/iio/adc/qcom-pm8xxx-xoadc.c b/drivers/iio/adc/qcom-pm8xxx-xoadc.c
+index c30c002f1fef..4735f8a1ca9d 100644
+--- a/drivers/iio/adc/qcom-pm8xxx-xoadc.c
++++ b/drivers/iio/adc/qcom-pm8xxx-xoadc.c
+@@ -423,18 +423,14 @@ static irqreturn_t pm8xxx_eoc_irq(int irq, void *d)
+ static struct pm8xxx_chan_info *
+ pm8xxx_get_channel(struct pm8xxx_xoadc *adc, u8 chan)
+ {
+- struct pm8xxx_chan_info *ch;
+ int i;
+
+ for (i = 0; i < adc->nchans; i++) {
+- ch = &adc->chans[i];
++ struct pm8xxx_chan_info *ch = &adc->chans[i];
+ if (ch->hwchan->amux_channel == chan)
+- break;
++ return ch;
+ }
+- if (i == adc->nchans)
+- return NULL;
+-
+- return ch;
++ return NULL;
+ }
+
+ static int pm8xxx_read_channel_rsv(struct pm8xxx_xoadc *adc,
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index 84f077b2b90a..81bded0d37d1 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -2966,13 +2966,22 @@ static void addr_handler(int status, struct sockaddr *src_addr,
+ {
+ struct rdma_id_private *id_priv = context;
+ struct rdma_cm_event event = {};
++ struct sockaddr *addr;
++ struct sockaddr_storage old_addr;
+
+ mutex_lock(&id_priv->handler_mutex);
+ if (!cma_comp_exch(id_priv, RDMA_CM_ADDR_QUERY,
+ RDMA_CM_ADDR_RESOLVED))
+ goto out;
+
+- memcpy(cma_src_addr(id_priv), src_addr, rdma_addr_size(src_addr));
++ /*
++ * Store the previous src address, so that if we fail to acquire
++ * matching rdma device, old address can be restored back, which helps
++ * to cancel the cma listen operation correctly.
++ */
++ addr = cma_src_addr(id_priv);
++ memcpy(&old_addr, addr, rdma_addr_size(addr));
++ memcpy(addr, src_addr, rdma_addr_size(src_addr));
+ if (!status && !id_priv->cma_dev) {
+ status = cma_acquire_dev_by_src_ip(id_priv);
+ if (status)
+@@ -2983,6 +2992,8 @@ static void addr_handler(int status, struct sockaddr *src_addr,
+ }
+
+ if (status) {
++ memcpy(addr, &old_addr,
++ rdma_addr_size((struct sockaddr *)&old_addr));
+ if (!cma_comp_exch(id_priv, RDMA_CM_ADDR_RESOLVED,
+ RDMA_CM_ADDR_BOUND))
+ goto out;
+diff --git a/drivers/infiniband/hw/cxgb4/cm.c b/drivers/infiniband/hw/cxgb4/cm.c
+index 8221813219e5..25a81fbb0d4d 100644
+--- a/drivers/infiniband/hw/cxgb4/cm.c
++++ b/drivers/infiniband/hw/cxgb4/cm.c
+@@ -1903,8 +1903,10 @@ static int abort_rpl(struct c4iw_dev *dev, struct sk_buff *skb)
+ }
+ mutex_unlock(&ep->com.mutex);
+
+- if (release)
++ if (release) {
++ close_complete_upcall(ep, -ECONNRESET);
+ release_ep_resources(ep);
++ }
+ c4iw_put_ep(&ep->com);
+ return 0;
+ }
+@@ -3606,7 +3608,6 @@ int c4iw_ep_disconnect(struct c4iw_ep *ep, int abrupt, gfp_t gfp)
+ if (close) {
+ if (abrupt) {
+ set_bit(EP_DISC_ABORT, &ep->com.history);
+- close_complete_upcall(ep, -ECONNRESET);
+ ret = send_abort(ep);
+ } else {
+ set_bit(EP_DISC_CLOSE, &ep->com.history);
+diff --git a/drivers/infiniband/hw/hfi1/hfi.h b/drivers/infiniband/hw/hfi1/hfi.h
+index 6db2276f5c13..15ec3e1feb09 100644
+--- a/drivers/infiniband/hw/hfi1/hfi.h
++++ b/drivers/infiniband/hw/hfi1/hfi.h
+@@ -1435,7 +1435,7 @@ void hfi1_init_pportdata(struct pci_dev *pdev, struct hfi1_pportdata *ppd,
+ struct hfi1_devdata *dd, u8 hw_pidx, u8 port);
+ void hfi1_free_ctxtdata(struct hfi1_devdata *dd, struct hfi1_ctxtdata *rcd);
+ int hfi1_rcd_put(struct hfi1_ctxtdata *rcd);
+-void hfi1_rcd_get(struct hfi1_ctxtdata *rcd);
++int hfi1_rcd_get(struct hfi1_ctxtdata *rcd);
+ struct hfi1_ctxtdata *hfi1_rcd_get_by_index_safe(struct hfi1_devdata *dd,
+ u16 ctxt);
+ struct hfi1_ctxtdata *hfi1_rcd_get_by_index(struct hfi1_devdata *dd, u16 ctxt);
+diff --git a/drivers/infiniband/hw/hfi1/init.c b/drivers/infiniband/hw/hfi1/init.c
+index 7835eb52e7c5..c532ceb0bb9a 100644
+--- a/drivers/infiniband/hw/hfi1/init.c
++++ b/drivers/infiniband/hw/hfi1/init.c
+@@ -215,12 +215,12 @@ static void hfi1_rcd_free(struct kref *kref)
+ struct hfi1_ctxtdata *rcd =
+ container_of(kref, struct hfi1_ctxtdata, kref);
+
+- hfi1_free_ctxtdata(rcd->dd, rcd);
+-
+ spin_lock_irqsave(&rcd->dd->uctxt_lock, flags);
+ rcd->dd->rcd[rcd->ctxt] = NULL;
+ spin_unlock_irqrestore(&rcd->dd->uctxt_lock, flags);
+
++ hfi1_free_ctxtdata(rcd->dd, rcd);
++
+ kfree(rcd);
+ }
+
+@@ -243,10 +243,13 @@ int hfi1_rcd_put(struct hfi1_ctxtdata *rcd)
+ * @rcd: pointer to an initialized rcd data structure
+ *
+ * Use this to get a reference after the init.
++ *
++ * Return : reflect kref_get_unless_zero(), which returns non-zero on
++ * increment, otherwise 0.
+ */
+-void hfi1_rcd_get(struct hfi1_ctxtdata *rcd)
++int hfi1_rcd_get(struct hfi1_ctxtdata *rcd)
+ {
+- kref_get(&rcd->kref);
++ return kref_get_unless_zero(&rcd->kref);
+ }
+
+ /**
+@@ -326,7 +329,8 @@ struct hfi1_ctxtdata *hfi1_rcd_get_by_index(struct hfi1_devdata *dd, u16 ctxt)
+ spin_lock_irqsave(&dd->uctxt_lock, flags);
+ if (dd->rcd[ctxt]) {
+ rcd = dd->rcd[ctxt];
+- hfi1_rcd_get(rcd);
++ if (!hfi1_rcd_get(rcd))
++ rcd = NULL;
+ }
+ spin_unlock_irqrestore(&dd->uctxt_lock, flags);
+
+diff --git a/drivers/infiniband/hw/mlx4/cm.c b/drivers/infiniband/hw/mlx4/cm.c
+index fedaf8260105..8c79a480f2b7 100644
+--- a/drivers/infiniband/hw/mlx4/cm.c
++++ b/drivers/infiniband/hw/mlx4/cm.c
+@@ -39,7 +39,7 @@
+
+ #include "mlx4_ib.h"
+
+-#define CM_CLEANUP_CACHE_TIMEOUT (5 * HZ)
++#define CM_CLEANUP_CACHE_TIMEOUT (30 * HZ)
+
+ struct id_map_entry {
+ struct rb_node node;
+diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
+index 4ee32964e1dd..948eb6e25219 100644
+--- a/drivers/infiniband/hw/mlx5/odp.c
++++ b/drivers/infiniband/hw/mlx5/odp.c
+@@ -560,7 +560,7 @@ static int pagefault_mr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr,
+ struct ib_umem_odp *odp_mr = to_ib_umem_odp(mr->umem);
+ bool downgrade = flags & MLX5_PF_FLAGS_DOWNGRADE;
+ bool prefetch = flags & MLX5_PF_FLAGS_PREFETCH;
+- u64 access_mask = ODP_READ_ALLOWED_BIT;
++ u64 access_mask;
+ u64 start_idx, page_mask;
+ struct ib_umem_odp *odp;
+ size_t size;
+@@ -582,6 +582,7 @@ next_mr:
+ page_shift = mr->umem->page_shift;
+ page_mask = ~(BIT(page_shift) - 1);
+ start_idx = (io_virt - (mr->mmkey.iova & page_mask)) >> page_shift;
++ access_mask = ODP_READ_ALLOWED_BIT;
+
+ if (prefetch && !downgrade && !mr->umem->writable) {
+ /* prefetch with write-access must
+diff --git a/drivers/infiniband/sw/rdmavt/qp.c b/drivers/infiniband/sw/rdmavt/qp.c
+index c6cc3e4ab71d..c45b8359b389 100644
+--- a/drivers/infiniband/sw/rdmavt/qp.c
++++ b/drivers/infiniband/sw/rdmavt/qp.c
+@@ -2785,6 +2785,18 @@ again:
+ }
+ EXPORT_SYMBOL(rvt_copy_sge);
+
++static enum ib_wc_status loopback_qp_drop(struct rvt_ibport *rvp,
++ struct rvt_qp *sqp)
++{
++ rvp->n_pkt_drops++;
++ /*
++ * For RC, the requester would timeout and retry so
++ * shortcut the timeouts and just signal too many retries.
++ */
++ return sqp->ibqp.qp_type == IB_QPT_RC ?
++ IB_WC_RETRY_EXC_ERR : IB_WC_SUCCESS;
++}
++
+ /**
+ * ruc_loopback - handle UC and RC loopback requests
+ * @sqp: the sending QP
+@@ -2857,17 +2869,14 @@ again:
+ }
+ spin_unlock_irqrestore(&sqp->s_lock, flags);
+
+- if (!qp || !(ib_rvt_state_ops[qp->state] & RVT_PROCESS_RECV_OK) ||
++ if (!qp) {
++ send_status = loopback_qp_drop(rvp, sqp);
++ goto serr_no_r_lock;
++ }
++ spin_lock_irqsave(&qp->r_lock, flags);
++ if (!(ib_rvt_state_ops[qp->state] & RVT_PROCESS_RECV_OK) ||
+ qp->ibqp.qp_type != sqp->ibqp.qp_type) {
+- rvp->n_pkt_drops++;
+- /*
+- * For RC, the requester would timeout and retry so
+- * shortcut the timeouts and just signal too many retries.
+- */
+- if (sqp->ibqp.qp_type == IB_QPT_RC)
+- send_status = IB_WC_RETRY_EXC_ERR;
+- else
+- send_status = IB_WC_SUCCESS;
++ send_status = loopback_qp_drop(rvp, sqp);
+ goto serr;
+ }
+
+@@ -2893,18 +2902,8 @@ again:
+ goto send_comp;
+
+ case IB_WR_SEND_WITH_INV:
+- if (!rvt_invalidate_rkey(qp, wqe->wr.ex.invalidate_rkey)) {
+- wc.wc_flags = IB_WC_WITH_INVALIDATE;
+- wc.ex.invalidate_rkey = wqe->wr.ex.invalidate_rkey;
+- }
+- goto send;
+-
+ case IB_WR_SEND_WITH_IMM:
+- wc.wc_flags = IB_WC_WITH_IMM;
+- wc.ex.imm_data = wqe->wr.ex.imm_data;
+- /* FALLTHROUGH */
+ case IB_WR_SEND:
+-send:
+ ret = rvt_get_rwqe(qp, false);
+ if (ret < 0)
+ goto op_err;
+@@ -2912,6 +2911,22 @@ send:
+ goto rnr_nak;
+ if (wqe->length > qp->r_len)
+ goto inv_err;
++ switch (wqe->wr.opcode) {
++ case IB_WR_SEND_WITH_INV:
++ if (!rvt_invalidate_rkey(qp,
++ wqe->wr.ex.invalidate_rkey)) {
++ wc.wc_flags = IB_WC_WITH_INVALIDATE;
++ wc.ex.invalidate_rkey =
++ wqe->wr.ex.invalidate_rkey;
++ }
++ break;
++ case IB_WR_SEND_WITH_IMM:
++ wc.wc_flags = IB_WC_WITH_IMM;
++ wc.ex.imm_data = wqe->wr.ex.imm_data;
++ break;
++ default:
++ break;
++ }
+ break;
+
+ case IB_WR_RDMA_WRITE_WITH_IMM:
+@@ -3041,6 +3056,7 @@ do_write:
+ wqe->wr.send_flags & IB_SEND_SOLICITED);
+
+ send_comp:
++ spin_unlock_irqrestore(&qp->r_lock, flags);
+ spin_lock_irqsave(&sqp->s_lock, flags);
+ rvp->n_loop_pkts++;
+ flush_send:
+@@ -3067,6 +3083,7 @@ rnr_nak:
+ }
+ if (sqp->s_rnr_retry_cnt < 7)
+ sqp->s_rnr_retry--;
++ spin_unlock_irqrestore(&qp->r_lock, flags);
+ spin_lock_irqsave(&sqp->s_lock, flags);
+ if (!(ib_rvt_state_ops[sqp->state] & RVT_PROCESS_RECV_OK))
+ goto clr_busy;
+@@ -3095,6 +3112,8 @@ err:
+ rvt_rc_error(qp, wc.status);
+
+ serr:
++ spin_unlock_irqrestore(&qp->r_lock, flags);
++serr_no_r_lock:
+ spin_lock_irqsave(&sqp->s_lock, flags);
+ rvt_send_complete(sqp, wqe, send_status);
+ if (sqp->ibqp.qp_type == IB_QPT_RC) {
+diff --git a/drivers/input/misc/soc_button_array.c b/drivers/input/misc/soc_button_array.c
+index 23520df7650f..55cd6e0b409c 100644
+--- a/drivers/input/misc/soc_button_array.c
++++ b/drivers/input/misc/soc_button_array.c
+@@ -373,7 +373,7 @@ static struct soc_button_info soc_button_PNP0C40[] = {
+ { "home", 1, EV_KEY, KEY_LEFTMETA, false, true },
+ { "volume_up", 2, EV_KEY, KEY_VOLUMEUP, true, false },
+ { "volume_down", 3, EV_KEY, KEY_VOLUMEDOWN, true, false },
+- { "rotation_lock", 4, EV_SW, SW_ROTATE_LOCK, false, false },
++ { "rotation_lock", 4, EV_KEY, KEY_ROTATE_LOCK_TOGGLE, false, false },
+ { }
+ };
+
+diff --git a/drivers/input/mouse/elan_i2c_core.c b/drivers/input/mouse/elan_i2c_core.c
+index 225ae6980182..628ef617bb2f 100644
+--- a/drivers/input/mouse/elan_i2c_core.c
++++ b/drivers/input/mouse/elan_i2c_core.c
+@@ -1337,6 +1337,7 @@ static const struct acpi_device_id elan_acpi_id[] = {
+ { "ELAN0000", 0 },
+ { "ELAN0100", 0 },
+ { "ELAN0600", 0 },
++ { "ELAN0601", 0 },
+ { "ELAN0602", 0 },
+ { "ELAN0605", 0 },
+ { "ELAN0608", 0 },
+diff --git a/drivers/input/tablet/wacom_serial4.c b/drivers/input/tablet/wacom_serial4.c
+index 38bfaca48eab..150f9eecaca7 100644
+--- a/drivers/input/tablet/wacom_serial4.c
++++ b/drivers/input/tablet/wacom_serial4.c
+@@ -187,6 +187,7 @@ enum {
+ MODEL_DIGITIZER_II = 0x5544, /* UD */
+ MODEL_GRAPHIRE = 0x4554, /* ET */
+ MODEL_PENPARTNER = 0x4354, /* CT */
++ MODEL_ARTPAD_II = 0x4B54, /* KT */
+ };
+
+ static void wacom_handle_model_response(struct wacom *wacom)
+@@ -245,6 +246,7 @@ static void wacom_handle_model_response(struct wacom *wacom)
+ wacom->flags = F_HAS_STYLUS2 | F_HAS_SCROLLWHEEL;
+ break;
+
++ case MODEL_ARTPAD_II:
+ case MODEL_DIGITIZER_II:
+ wacom->dev->name = "Wacom Digitizer II";
+ wacom->dev->id.version = MODEL_DIGITIZER_II;
+diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
+index 2a7b78bb98b4..e628ef23418f 100644
+--- a/drivers/iommu/amd_iommu.c
++++ b/drivers/iommu/amd_iommu.c
+@@ -2605,7 +2605,12 @@ static int map_sg(struct device *dev, struct scatterlist *sglist,
+
+ /* Everything is mapped - write the right values into s->dma_address */
+ for_each_sg(sglist, s, nelems, i) {
+- s->dma_address += address + s->offset;
++ /*
++ * Add in the remaining piece of the scatter-gather offset that
++ * was masked out when we were determining the physical address
++ * via (sg_phys(s) & PAGE_MASK) earlier.
++ */
++ s->dma_address += address + (s->offset & ~PAGE_MASK);
+ s->dma_length = s->length;
+ }
+
+diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
+index 78188bf7e90d..dbd6824dfffa 100644
+--- a/drivers/iommu/intel-iommu.c
++++ b/drivers/iommu/intel-iommu.c
+@@ -2485,7 +2485,8 @@ static struct dmar_domain *dmar_insert_one_dev_info(struct intel_iommu *iommu,
+ if (dev && dev_is_pci(dev)) {
+ struct pci_dev *pdev = to_pci_dev(info->dev);
+
+- if (!pci_ats_disabled() &&
++ if (!pdev->untrusted &&
++ !pci_ats_disabled() &&
+ ecap_dev_iotlb_support(iommu->ecap) &&
+ pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ATS) &&
+ dmar_find_matched_atsr_unit(pdev))
+diff --git a/drivers/iommu/io-pgtable-arm-v7s.c b/drivers/iommu/io-pgtable-arm-v7s.c
+index cec29bf45c9b..18a8330e1882 100644
+--- a/drivers/iommu/io-pgtable-arm-v7s.c
++++ b/drivers/iommu/io-pgtable-arm-v7s.c
+@@ -161,6 +161,14 @@
+
+ #define ARM_V7S_TCR_PD1 BIT(5)
+
++#ifdef CONFIG_ZONE_DMA32
++#define ARM_V7S_TABLE_GFP_DMA GFP_DMA32
++#define ARM_V7S_TABLE_SLAB_FLAGS SLAB_CACHE_DMA32
++#else
++#define ARM_V7S_TABLE_GFP_DMA GFP_DMA
++#define ARM_V7S_TABLE_SLAB_FLAGS SLAB_CACHE_DMA
++#endif
++
+ typedef u32 arm_v7s_iopte;
+
+ static bool selftest_running;
+@@ -198,13 +206,16 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
+ void *table = NULL;
+
+ if (lvl == 1)
+- table = (void *)__get_dma_pages(__GFP_ZERO, get_order(size));
++ table = (void *)__get_free_pages(
++ __GFP_ZERO | ARM_V7S_TABLE_GFP_DMA, get_order(size));
+ else if (lvl == 2)
+- table = kmem_cache_zalloc(data->l2_tables, gfp | GFP_DMA);
++ table = kmem_cache_zalloc(data->l2_tables, gfp);
+ phys = virt_to_phys(table);
+- if (phys != (arm_v7s_iopte)phys)
++ if (phys != (arm_v7s_iopte)phys) {
+ /* Doesn't fit in PTE */
++ dev_err(dev, "Page table does not fit in PTE: %pa", &phys);
+ goto out_free;
++ }
+ if (table && !(cfg->quirks & IO_PGTABLE_QUIRK_NO_DMA)) {
+ dma = dma_map_single(dev, table, size, DMA_TO_DEVICE);
+ if (dma_mapping_error(dev, dma))
+@@ -217,7 +228,8 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
+ if (dma != phys)
+ goto out_unmap;
+ }
+- kmemleak_ignore(table);
++ if (lvl == 2)
++ kmemleak_ignore(table);
+ return table;
+
+ out_unmap:
+@@ -733,7 +745,7 @@ static struct io_pgtable *arm_v7s_alloc_pgtable(struct io_pgtable_cfg *cfg,
+ data->l2_tables = kmem_cache_create("io-pgtable_armv7s_l2",
+ ARM_V7S_TABLE_SIZE(2),
+ ARM_V7S_TABLE_SIZE(2),
+- SLAB_CACHE_DMA, NULL);
++ ARM_V7S_TABLE_SLAB_FLAGS, NULL);
+ if (!data->l2_tables)
+ goto out_free_data;
+
+diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
+index f8d3ba247523..2de8122e218f 100644
+--- a/drivers/iommu/iova.c
++++ b/drivers/iommu/iova.c
+@@ -207,8 +207,10 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
+ curr_iova = rb_entry(curr, struct iova, node);
+ } while (curr && new_pfn <= curr_iova->pfn_hi);
+
+- if (limit_pfn < size || new_pfn < iovad->start_pfn)
++ if (limit_pfn < size || new_pfn < iovad->start_pfn) {
++ iovad->max32_alloc_size = size;
+ goto iova32_full;
++ }
+
+ /* pfn_lo will point to size aligned address if size_aligned is set */
+ new->pfn_lo = new_pfn;
+@@ -222,7 +224,6 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
+ return 0;
+
+ iova32_full:
+- iovad->max32_alloc_size = size;
+ spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
+ return -ENOMEM;
+ }
+diff --git a/drivers/irqchip/irq-brcmstb-l2.c b/drivers/irqchip/irq-brcmstb-l2.c
+index 0e65f609352e..83364fedbf0a 100644
+--- a/drivers/irqchip/irq-brcmstb-l2.c
++++ b/drivers/irqchip/irq-brcmstb-l2.c
+@@ -129,8 +129,9 @@ static void brcmstb_l2_intc_suspend(struct irq_data *d)
+ struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
+ struct irq_chip_type *ct = irq_data_get_chip_type(d);
+ struct brcmstb_l2_intc_data *b = gc->private;
++ unsigned long flags;
+
+- irq_gc_lock(gc);
++ irq_gc_lock_irqsave(gc, flags);
+ /* Save the current mask */
+ b->saved_mask = irq_reg_readl(gc, ct->regs.mask);
+
+@@ -139,7 +140,7 @@ static void brcmstb_l2_intc_suspend(struct irq_data *d)
+ irq_reg_writel(gc, ~gc->wake_active, ct->regs.disable);
+ irq_reg_writel(gc, gc->wake_active, ct->regs.enable);
+ }
+- irq_gc_unlock(gc);
++ irq_gc_unlock_irqrestore(gc, flags);
+ }
+
+ static void brcmstb_l2_intc_resume(struct irq_data *d)
+@@ -147,8 +148,9 @@ static void brcmstb_l2_intc_resume(struct irq_data *d)
+ struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
+ struct irq_chip_type *ct = irq_data_get_chip_type(d);
+ struct brcmstb_l2_intc_data *b = gc->private;
++ unsigned long flags;
+
+- irq_gc_lock(gc);
++ irq_gc_lock_irqsave(gc, flags);
+ if (ct->chip.irq_ack) {
+ /* Clear unmasked non-wakeup interrupts */
+ irq_reg_writel(gc, ~b->saved_mask & ~gc->wake_active,
+@@ -158,7 +160,7 @@ static void brcmstb_l2_intc_resume(struct irq_data *d)
+ /* Restore the saved mask */
+ irq_reg_writel(gc, b->saved_mask, ct->regs.disable);
+ irq_reg_writel(gc, ~b->saved_mask, ct->regs.enable);
+- irq_gc_unlock(gc);
++ irq_gc_unlock_irqrestore(gc, flags);
+ }
+
+ static int __init brcmstb_l2_intc_of_init(struct device_node *np,
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index c3aba3fc818d..93e32a59640c 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -1482,7 +1482,7 @@ static int lpi_range_cmp(void *priv, struct list_head *a, struct list_head *b)
+ ra = container_of(a, struct lpi_range, entry);
+ rb = container_of(b, struct lpi_range, entry);
+
+- return rb->base_id - ra->base_id;
++ return ra->base_id - rb->base_id;
+ }
+
+ static void merge_lpi_ranges(void)
+@@ -1955,6 +1955,8 @@ static int its_alloc_tables(struct its_node *its)
+ indirect = its_parse_indirect_baser(its, baser,
+ psz, &order,
+ its->device_ids);
++ break;
++
+ case GITS_BASER_TYPE_VCPU:
+ indirect = its_parse_indirect_baser(its, baser,
+ psz, &order,
+diff --git a/drivers/isdn/hardware/mISDN/hfcmulti.c b/drivers/isdn/hardware/mISDN/hfcmulti.c
+index 4d85645c87f7..0928fd1f0e0c 100644
+--- a/drivers/isdn/hardware/mISDN/hfcmulti.c
++++ b/drivers/isdn/hardware/mISDN/hfcmulti.c
+@@ -4365,7 +4365,8 @@ setup_pci(struct hfc_multi *hc, struct pci_dev *pdev,
+ if (m->clock2)
+ test_and_set_bit(HFC_CHIP_CLOCK2, &hc->chip);
+
+- if (ent->device == 0xB410) {
++ if (ent->vendor == PCI_VENDOR_ID_DIGIUM &&
++ ent->device == PCI_DEVICE_ID_DIGIUM_HFC4S) {
+ test_and_set_bit(HFC_CHIP_B410P, &hc->chip);
+ test_and_set_bit(HFC_CHIP_PCM_MASTER, &hc->chip);
+ test_and_clear_bit(HFC_CHIP_PCM_SLAVE, &hc->chip);
+diff --git a/drivers/leds/leds-lp55xx-common.c b/drivers/leds/leds-lp55xx-common.c
+index 3d79a6380761..723f2f17497a 100644
+--- a/drivers/leds/leds-lp55xx-common.c
++++ b/drivers/leds/leds-lp55xx-common.c
+@@ -201,7 +201,7 @@ static void lp55xx_firmware_loaded(const struct firmware *fw, void *context)
+
+ if (!fw) {
+ dev_err(dev, "firmware request failed\n");
+- goto out;
++ return;
+ }
+
+ /* handling firmware data is chip dependent */
+@@ -214,9 +214,9 @@ static void lp55xx_firmware_loaded(const struct firmware *fw, void *context)
+
+ mutex_unlock(&chip->lock);
+
+-out:
+ /* firmware should be released for other channel use */
+ release_firmware(chip->fw);
++ chip->fw = NULL;
+ }
+
+ static int lp55xx_request_firmware(struct lp55xx_chip *chip)
+diff --git a/drivers/md/bcache/extents.c b/drivers/md/bcache/extents.c
+index 956004366699..886710043025 100644
+--- a/drivers/md/bcache/extents.c
++++ b/drivers/md/bcache/extents.c
+@@ -538,6 +538,7 @@ static bool bch_extent_bad(struct btree_keys *bk, const struct bkey *k)
+ {
+ struct btree *b = container_of(bk, struct btree, keys);
+ unsigned int i, stale;
++ char buf[80];
+
+ if (!KEY_PTRS(k) ||
+ bch_extent_invalid(bk, k))
+@@ -547,19 +548,19 @@ static bool bch_extent_bad(struct btree_keys *bk, const struct bkey *k)
+ if (!ptr_available(b->c, k, i))
+ return true;
+
+- if (!expensive_debug_checks(b->c) && KEY_DIRTY(k))
+- return false;
+-
+ for (i = 0; i < KEY_PTRS(k); i++) {
+ stale = ptr_stale(b->c, k, i);
+
++ if (stale && KEY_DIRTY(k)) {
++ bch_extent_to_text(buf, sizeof(buf), k);
++ pr_info("stale dirty pointer, stale %u, key: %s",
++ stale, buf);
++ }
++
+ btree_bug_on(stale > BUCKET_GC_GEN_MAX, b,
+ "key too stale: %i, need_gc %u",
+ stale, b->c->need_gc);
+
+- btree_bug_on(stale && KEY_DIRTY(k) && KEY_SIZE(k),
+- b, "stale dirty pointer");
+-
+ if (stale)
+ return true;
+
+diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
+index 15070412a32e..f101bfe8657a 100644
+--- a/drivers/md/bcache/request.c
++++ b/drivers/md/bcache/request.c
+@@ -392,10 +392,11 @@ static bool check_should_bypass(struct cached_dev *dc, struct bio *bio)
+
+ /*
+ * Flag for bypass if the IO is for read-ahead or background,
+- * unless the read-ahead request is for metadata (eg, for gfs2).
++ * unless the read-ahead request is for metadata
++ * (eg, for gfs2 or xfs).
+ */
+ if (bio->bi_opf & (REQ_RAHEAD|REQ_BACKGROUND) &&
+- !(bio->bi_opf & REQ_PRIO))
++ !(bio->bi_opf & (REQ_META|REQ_PRIO)))
+ goto skip;
+
+ if (bio->bi_iter.bi_sector & (c->sb.block_size - 1) ||
+@@ -877,7 +878,7 @@ static int cached_dev_cache_miss(struct btree *b, struct search *s,
+ }
+
+ if (!(bio->bi_opf & REQ_RAHEAD) &&
+- !(bio->bi_opf & REQ_PRIO) &&
++ !(bio->bi_opf & (REQ_META|REQ_PRIO)) &&
+ s->iop.c->gc_stats.in_use < CUTOFF_CACHE_READA)
+ reada = min_t(sector_t, dc->readahead >> 9,
+ get_capacity(bio->bi_disk) - bio_end_sector(bio));
+diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
+index 557a8a3270a1..e5daf91310f6 100644
+--- a/drivers/md/bcache/sysfs.c
++++ b/drivers/md/bcache/sysfs.c
+@@ -287,8 +287,12 @@ STORE(__cached_dev)
+ sysfs_strtoul_clamp(writeback_rate_update_seconds,
+ dc->writeback_rate_update_seconds,
+ 1, WRITEBACK_RATE_UPDATE_SECS_MAX);
+- d_strtoul(writeback_rate_i_term_inverse);
+- d_strtoul_nonzero(writeback_rate_p_term_inverse);
++ sysfs_strtoul_clamp(writeback_rate_i_term_inverse,
++ dc->writeback_rate_i_term_inverse,
++ 1, UINT_MAX);
++ sysfs_strtoul_clamp(writeback_rate_p_term_inverse,
++ dc->writeback_rate_p_term_inverse,
++ 1, UINT_MAX);
+ d_strtoul_nonzero(writeback_rate_minimum);
+
+ sysfs_strtoul_clamp(io_error_limit, dc->error_limit, 0, INT_MAX);
+@@ -299,7 +303,9 @@ STORE(__cached_dev)
+ dc->io_disable = v ? 1 : 0;
+ }
+
+- d_strtoi_h(sequential_cutoff);
++ sysfs_strtoul_clamp(sequential_cutoff,
++ dc->sequential_cutoff,
++ 0, UINT_MAX);
+ d_strtoi_h(readahead);
+
+ if (attr == &sysfs_clear_stats)
+@@ -778,8 +784,17 @@ STORE(__bch_cache_set)
+ c->error_limit = strtoul_or_return(buf);
+
+ /* See count_io_errors() for why 88 */
+- if (attr == &sysfs_io_error_halflife)
+- c->error_decay = strtoul_or_return(buf) / 88;
++ if (attr == &sysfs_io_error_halflife) {
++ unsigned long v = 0;
++ ssize_t ret;
++
++ ret = strtoul_safe_clamp(buf, v, 0, UINT_MAX);
++ if (!ret) {
++ c->error_decay = v / 88;
++ return size;
++ }
++ return ret;
++ }
+
+ if (attr == &sysfs_io_disable) {
+ v = strtoul_or_return(buf);
+diff --git a/drivers/md/bcache/sysfs.h b/drivers/md/bcache/sysfs.h
+index 3fe82425859c..0ad2715a884e 100644
+--- a/drivers/md/bcache/sysfs.h
++++ b/drivers/md/bcache/sysfs.h
+@@ -81,9 +81,16 @@ do { \
+
+ #define sysfs_strtoul_clamp(file, var, min, max) \
+ do { \
+- if (attr == &sysfs_ ## file) \
+- return strtoul_safe_clamp(buf, var, min, max) \
+- ?: (ssize_t) size; \
++ if (attr == &sysfs_ ## file) { \
++ unsigned long v = 0; \
++ ssize_t ret; \
++ ret = strtoul_safe_clamp(buf, v, min, max); \
++ if (!ret) { \
++ var = v; \
++ return size; \
++ } \
++ return ret; \
++ } \
+ } while (0)
+
+ #define strtoul_or_return(cp) \
+diff --git a/drivers/md/bcache/writeback.h b/drivers/md/bcache/writeback.h
+index 6a743d3bb338..4e4c6810dc3c 100644
+--- a/drivers/md/bcache/writeback.h
++++ b/drivers/md/bcache/writeback.h
+@@ -71,6 +71,9 @@ static inline bool should_writeback(struct cached_dev *dc, struct bio *bio,
+ in_use > bch_cutoff_writeback_sync)
+ return false;
+
++ if (bio_op(bio) == REQ_OP_DISCARD)
++ return false;
++
+ if (dc->partial_stripes_expensive &&
+ bcache_dev_stripe_dirty(dc, bio->bi_iter.bi_sector,
+ bio_sectors(bio)))
+diff --git a/drivers/md/dm-core.h b/drivers/md/dm-core.h
+index 95c6d86ab5e8..c4ef1fceead6 100644
+--- a/drivers/md/dm-core.h
++++ b/drivers/md/dm-core.h
+@@ -115,6 +115,7 @@ struct mapped_device {
+ struct srcu_struct io_barrier;
+ };
+
++void disable_discard(struct mapped_device *md);
+ void disable_write_same(struct mapped_device *md);
+ void disable_write_zeroes(struct mapped_device *md);
+
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index 457200ca6287..f535fd8ac82d 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -913,7 +913,7 @@ static void copy_from_journal(struct dm_integrity_c *ic, unsigned section, unsig
+ static bool ranges_overlap(struct dm_integrity_range *range1, struct dm_integrity_range *range2)
+ {
+ return range1->logical_sector < range2->logical_sector + range2->n_sectors &&
+- range2->logical_sector + range2->n_sectors > range2->logical_sector;
++ range1->logical_sector + range1->n_sectors > range2->logical_sector;
+ }
+
+ static bool add_new_range(struct dm_integrity_c *ic, struct dm_integrity_range *new_range, bool check_waiting)
+@@ -959,8 +959,6 @@ static void remove_range_unlocked(struct dm_integrity_c *ic, struct dm_integrity
+ struct dm_integrity_range *last_range =
+ list_first_entry(&ic->wait_list, struct dm_integrity_range, wait_entry);
+ struct task_struct *last_range_task;
+- if (!ranges_overlap(range, last_range))
+- break;
+ last_range_task = last_range->task;
+ list_del(&last_range->wait_entry);
+ if (!add_new_range(ic, last_range, false)) {
+@@ -1368,8 +1366,8 @@ again:
+ checksums_ptr - checksums, !dio->write ? TAG_CMP : TAG_WRITE);
+ if (unlikely(r)) {
+ if (r > 0) {
+- DMERR("Checksum failed at sector 0x%llx",
+- (unsigned long long)(sector - ((r + ic->tag_size - 1) / ic->tag_size)));
++ DMERR_LIMIT("Checksum failed at sector 0x%llx",
++ (unsigned long long)(sector - ((r + ic->tag_size - 1) / ic->tag_size)));
+ r = -EILSEQ;
+ atomic64_inc(&ic->number_of_mismatches);
+ }
+@@ -1561,8 +1559,8 @@ retry_kmap:
+
+ integrity_sector_checksum(ic, logical_sector, mem + bv.bv_offset, checksums_onstack);
+ if (unlikely(memcmp(checksums_onstack, journal_entry_tag(ic, je), ic->tag_size))) {
+- DMERR("Checksum failed when reading from journal, at sector 0x%llx",
+- (unsigned long long)logical_sector);
++ DMERR_LIMIT("Checksum failed when reading from journal, at sector 0x%llx",
++ (unsigned long long)logical_sector);
+ }
+ }
+ #endif
+@@ -3185,7 +3183,7 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned argc, char **argv)
+ journal_watermark = val;
+ else if (sscanf(opt_string, "commit_time:%u%c", &val, &dummy) == 1)
+ sync_msec = val;
+- else if (!memcmp(opt_string, "meta_device:", strlen("meta_device:"))) {
++ else if (!strncmp(opt_string, "meta_device:", strlen("meta_device:"))) {
+ if (ic->meta_dev) {
+ dm_put_device(ti, ic->meta_dev);
+ ic->meta_dev = NULL;
+@@ -3204,17 +3202,17 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned argc, char **argv)
+ goto bad;
+ }
+ ic->sectors_per_block = val >> SECTOR_SHIFT;
+- } else if (!memcmp(opt_string, "internal_hash:", strlen("internal_hash:"))) {
++ } else if (!strncmp(opt_string, "internal_hash:", strlen("internal_hash:"))) {
+ r = get_alg_and_key(opt_string, &ic->internal_hash_alg, &ti->error,
+ "Invalid internal_hash argument");
+ if (r)
+ goto bad;
+- } else if (!memcmp(opt_string, "journal_crypt:", strlen("journal_crypt:"))) {
++ } else if (!strncmp(opt_string, "journal_crypt:", strlen("journal_crypt:"))) {
+ r = get_alg_and_key(opt_string, &ic->journal_crypt_alg, &ti->error,
+ "Invalid journal_crypt argument");
+ if (r)
+ goto bad;
+- } else if (!memcmp(opt_string, "journal_mac:", strlen("journal_mac:"))) {
++ } else if (!strncmp(opt_string, "journal_mac:", strlen("journal_mac:"))) {
+ r = get_alg_and_key(opt_string, &ic->journal_mac_alg, &ti->error,
+ "Invalid journal_mac argument");
+ if (r)
+diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
+index a20531e5f3b4..582265e043a6 100644
+--- a/drivers/md/dm-rq.c
++++ b/drivers/md/dm-rq.c
+@@ -206,11 +206,14 @@ static void dm_done(struct request *clone, blk_status_t error, bool mapped)
+ }
+
+ if (unlikely(error == BLK_STS_TARGET)) {
+- if (req_op(clone) == REQ_OP_WRITE_SAME &&
+- !clone->q->limits.max_write_same_sectors)
++ if (req_op(clone) == REQ_OP_DISCARD &&
++ !clone->q->limits.max_discard_sectors)
++ disable_discard(tio->md);
++ else if (req_op(clone) == REQ_OP_WRITE_SAME &&
++ !clone->q->limits.max_write_same_sectors)
+ disable_write_same(tio->md);
+- if (req_op(clone) == REQ_OP_WRITE_ZEROES &&
+- !clone->q->limits.max_write_zeroes_sectors)
++ else if (req_op(clone) == REQ_OP_WRITE_ZEROES &&
++ !clone->q->limits.max_write_zeroes_sectors)
+ disable_write_zeroes(tio->md);
+ }
+
+diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
+index 4b1be754cc41..eb257e4dcb1c 100644
+--- a/drivers/md/dm-table.c
++++ b/drivers/md/dm-table.c
+@@ -1852,6 +1852,36 @@ static bool dm_table_supports_secure_erase(struct dm_table *t)
+ return true;
+ }
+
++static int device_requires_stable_pages(struct dm_target *ti,
++ struct dm_dev *dev, sector_t start,
++ sector_t len, void *data)
++{
++ struct request_queue *q = bdev_get_queue(dev->bdev);
++
++ return q && bdi_cap_stable_pages_required(q->backing_dev_info);
++}
++
++/*
++ * If any underlying device requires stable pages, a table must require
++ * them as well. Only targets that support iterate_devices are considered:
++ * don't want error, zero, etc to require stable pages.
++ */
++static bool dm_table_requires_stable_pages(struct dm_table *t)
++{
++ struct dm_target *ti;
++ unsigned i;
++
++ for (i = 0; i < dm_table_get_num_targets(t); i++) {
++ ti = dm_table_get_target(t, i);
++
++ if (ti->type->iterate_devices &&
++ ti->type->iterate_devices(ti, device_requires_stable_pages, NULL))
++ return true;
++ }
++
++ return false;
++}
++
+ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
+ struct queue_limits *limits)
+ {
+@@ -1909,6 +1939,15 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
+
+ dm_table_verify_integrity(t);
+
++ /*
++ * Some devices don't use blk_integrity but still want stable pages
++ * because they do their own checksumming.
++ */
++ if (dm_table_requires_stable_pages(t))
++ q->backing_dev_info->capabilities |= BDI_CAP_STABLE_WRITES;
++ else
++ q->backing_dev_info->capabilities &= ~BDI_CAP_STABLE_WRITES;
++
+ /*
+ * Determine whether or not this queue's I/O timings contribute
+ * to the entropy pool, Only request-based targets use this.
+diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
+index e83b63608262..254c26eb963a 100644
+--- a/drivers/md/dm-thin.c
++++ b/drivers/md/dm-thin.c
+@@ -3283,6 +3283,13 @@ static int pool_ctr(struct dm_target *ti, unsigned argc, char **argv)
+ as.argc = argc;
+ as.argv = argv;
+
++ /* make sure metadata and data are different devices */
++ if (!strcmp(argv[0], argv[1])) {
++ ti->error = "Error setting metadata or data device";
++ r = -EINVAL;
++ goto out_unlock;
++ }
++
+ /*
+ * Set default pool features.
+ */
+@@ -4167,6 +4174,12 @@ static int thin_ctr(struct dm_target *ti, unsigned argc, char **argv)
+ tc->sort_bio_list = RB_ROOT;
+
+ if (argc == 3) {
++ if (!strcmp(argv[0], argv[2])) {
++ ti->error = "Error setting origin device";
++ r = -EINVAL;
++ goto bad_origin_dev;
++ }
++
+ r = dm_get_device(ti, argv[2], FMODE_READ, &origin_dev);
+ if (r) {
+ ti->error = "Error opening origin device";
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 515e6af9bed2..4986eea520b6 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -963,6 +963,15 @@ static void dec_pending(struct dm_io *io, blk_status_t error)
+ }
+ }
+
++void disable_discard(struct mapped_device *md)
++{
++ struct queue_limits *limits = dm_get_queue_limits(md);
++
++ /* device doesn't really support DISCARD, disable it */
++ limits->max_discard_sectors = 0;
++ blk_queue_flag_clear(QUEUE_FLAG_DISCARD, md->queue);
++}
++
+ void disable_write_same(struct mapped_device *md)
+ {
+ struct queue_limits *limits = dm_get_queue_limits(md);
+@@ -988,11 +997,14 @@ static void clone_endio(struct bio *bio)
+ dm_endio_fn endio = tio->ti->type->end_io;
+
+ if (unlikely(error == BLK_STS_TARGET) && md->type != DM_TYPE_NVME_BIO_BASED) {
+- if (bio_op(bio) == REQ_OP_WRITE_SAME &&
+- !bio->bi_disk->queue->limits.max_write_same_sectors)
++ if (bio_op(bio) == REQ_OP_DISCARD &&
++ !bio->bi_disk->queue->limits.max_discard_sectors)
++ disable_discard(md);
++ else if (bio_op(bio) == REQ_OP_WRITE_SAME &&
++ !bio->bi_disk->queue->limits.max_write_same_sectors)
+ disable_write_same(md);
+- if (bio_op(bio) == REQ_OP_WRITE_ZEROES &&
+- !bio->bi_disk->queue->limits.max_write_zeroes_sectors)
++ else if (bio_op(bio) == REQ_OP_WRITE_ZEROES &&
++ !bio->bi_disk->queue->limits.max_write_zeroes_sectors)
+ disable_write_zeroes(md);
+ }
+
+@@ -1060,15 +1072,7 @@ int dm_set_target_max_io_len(struct dm_target *ti, sector_t len)
+ return -EINVAL;
+ }
+
+- /*
+- * BIO based queue uses its own splitting. When multipage bvecs
+- * is switched on, size of the incoming bio may be too big to
+- * be handled in some targets, such as crypt.
+- *
+- * When these targets are ready for the big bio, we can remove
+- * the limit.
+- */
+- ti->max_io_len = min_t(uint32_t, len, BIO_MAX_PAGES * PAGE_SIZE);
++ ti->max_io_len = (uint32_t) len;
+
+ return 0;
+ }
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index abb5d382f64d..3b6880dd648d 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -3939,6 +3939,8 @@ static int raid10_run(struct mddev *mddev)
+ set_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
+ mddev->sync_thread = md_register_thread(md_do_sync, mddev,
+ "reshape");
++ if (!mddev->sync_thread)
++ goto out_free_conf;
+ }
+
+ return 0;
+@@ -4670,7 +4672,6 @@ read_more:
+ atomic_inc(&r10_bio->remaining);
+ read_bio->bi_next = NULL;
+ generic_make_request(read_bio);
+- sector_nr += nr_sectors;
+ sectors_done += nr_sectors;
+ if (sector_nr <= last)
+ goto read_more;
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index cecea901ab8c..5b68f2d0da60 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -7402,6 +7402,8 @@ static int raid5_run(struct mddev *mddev)
+ set_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
+ mddev->sync_thread = md_register_thread(md_do_sync, mddev,
+ "reshape");
++ if (!mddev->sync_thread)
++ goto abort;
+ }
+
+ /* Ok, everything is just fine now */
+diff --git a/drivers/media/dvb-frontends/lgdt330x.c b/drivers/media/dvb-frontends/lgdt330x.c
+index 96807e134886..8abb1a510a81 100644
+--- a/drivers/media/dvb-frontends/lgdt330x.c
++++ b/drivers/media/dvb-frontends/lgdt330x.c
+@@ -783,7 +783,7 @@ static int lgdt3303_read_status(struct dvb_frontend *fe,
+
+ if ((buf[0] & 0x02) == 0x00)
+ *status |= FE_HAS_SYNC;
+- if ((buf[0] & 0xfd) == 0x01)
++ if ((buf[0] & 0x01) == 0x01)
+ *status |= FE_HAS_VITERBI | FE_HAS_LOCK;
+ break;
+ default:
+diff --git a/drivers/media/i2c/cx25840/cx25840-core.c b/drivers/media/i2c/cx25840/cx25840-core.c
+index b168bf3635b6..8b0b8b5aa531 100644
+--- a/drivers/media/i2c/cx25840/cx25840-core.c
++++ b/drivers/media/i2c/cx25840/cx25840-core.c
+@@ -5216,8 +5216,9 @@ static int cx25840_probe(struct i2c_client *client,
+ * those extra inputs. So, let's add it only when needed.
+ */
+ state->pads[CX25840_PAD_INPUT].flags = MEDIA_PAD_FL_SINK;
++ state->pads[CX25840_PAD_INPUT].sig_type = PAD_SIGNAL_ANALOG;
+ state->pads[CX25840_PAD_VID_OUT].flags = MEDIA_PAD_FL_SOURCE;
+- state->pads[CX25840_PAD_VBI_OUT].flags = MEDIA_PAD_FL_SOURCE;
++ state->pads[CX25840_PAD_VID_OUT].sig_type = PAD_SIGNAL_DV;
+ sd->entity.function = MEDIA_ENT_F_ATV_DECODER;
+
+ ret = media_entity_pads_init(&sd->entity, ARRAY_SIZE(state->pads),
+diff --git a/drivers/media/i2c/cx25840/cx25840-core.h b/drivers/media/i2c/cx25840/cx25840-core.h
+index c323b1af1f83..9efefa15d090 100644
+--- a/drivers/media/i2c/cx25840/cx25840-core.h
++++ b/drivers/media/i2c/cx25840/cx25840-core.h
+@@ -40,7 +40,6 @@ enum cx25840_model {
+ enum cx25840_media_pads {
+ CX25840_PAD_INPUT,
+ CX25840_PAD_VID_OUT,
+- CX25840_PAD_VBI_OUT,
+
+ CX25840_NUM_PADS
+ };
+diff --git a/drivers/media/i2c/mt9m111.c b/drivers/media/i2c/mt9m111.c
+index d639b9bcf64a..7a759b4b88cf 100644
+--- a/drivers/media/i2c/mt9m111.c
++++ b/drivers/media/i2c/mt9m111.c
+@@ -1273,6 +1273,8 @@ static int mt9m111_probe(struct i2c_client *client,
+ mt9m111->rect.top = MT9M111_MIN_DARK_ROWS;
+ mt9m111->rect.width = MT9M111_MAX_WIDTH;
+ mt9m111->rect.height = MT9M111_MAX_HEIGHT;
++ mt9m111->width = mt9m111->rect.width;
++ mt9m111->height = mt9m111->rect.height;
+ mt9m111->fmt = &mt9m111_colour_fmts[0];
+ mt9m111->lastpage = -1;
+ mutex_init(&mt9m111->power_lock);
+diff --git a/drivers/media/i2c/ov5640.c b/drivers/media/i2c/ov5640.c
+index bef3f3aae0ed..9f8fc1ad9b1a 100644
+--- a/drivers/media/i2c/ov5640.c
++++ b/drivers/media/i2c/ov5640.c
+@@ -1893,7 +1893,7 @@ static void ov5640_reset(struct ov5640_dev *sensor)
+ usleep_range(1000, 2000);
+
+ gpiod_set_value_cansleep(sensor->reset_gpio, 0);
+- usleep_range(5000, 10000);
++ usleep_range(20000, 25000);
+ }
+
+ static int ov5640_set_power_on(struct ov5640_dev *sensor)
+diff --git a/drivers/media/i2c/ov7740.c b/drivers/media/i2c/ov7740.c
+index 177688afd9a6..8835b831cdc0 100644
+--- a/drivers/media/i2c/ov7740.c
++++ b/drivers/media/i2c/ov7740.c
+@@ -1101,6 +1101,9 @@ static int ov7740_probe(struct i2c_client *client,
+ if (ret)
+ return ret;
+
++ pm_runtime_set_active(&client->dev);
++ pm_runtime_enable(&client->dev);
++
+ ret = ov7740_detect(ov7740);
+ if (ret)
+ goto error_detect;
+@@ -1123,8 +1126,6 @@ static int ov7740_probe(struct i2c_client *client,
+ if (ret)
+ goto error_async_register;
+
+- pm_runtime_set_active(&client->dev);
+- pm_runtime_enable(&client->dev);
+ pm_runtime_idle(&client->dev);
+
+ return 0;
+@@ -1134,6 +1135,8 @@ error_async_register:
+ error_init_controls:
+ ov7740_free_controls(ov7740);
+ error_detect:
++ pm_runtime_disable(&client->dev);
++ pm_runtime_set_suspended(&client->dev);
+ ov7740_set_power(ov7740, 0);
+ media_entity_cleanup(&ov7740->subdev.entity);
+
+diff --git a/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c b/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c
+index 2a5d5002c27e..f761e4d8bf2a 100644
+--- a/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c
++++ b/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c
+@@ -702,7 +702,7 @@ end:
+ v4l2_m2m_buf_queue(ctx->fh.m2m_ctx, to_vb2_v4l2_buffer(vb));
+ }
+
+-static void *mtk_jpeg_buf_remove(struct mtk_jpeg_ctx *ctx,
++static struct vb2_v4l2_buffer *mtk_jpeg_buf_remove(struct mtk_jpeg_ctx *ctx,
+ enum v4l2_buf_type type)
+ {
+ if (V4L2_TYPE_IS_OUTPUT(type))
+@@ -714,7 +714,7 @@ static void *mtk_jpeg_buf_remove(struct mtk_jpeg_ctx *ctx,
+ static int mtk_jpeg_start_streaming(struct vb2_queue *q, unsigned int count)
+ {
+ struct mtk_jpeg_ctx *ctx = vb2_get_drv_priv(q);
+- struct vb2_buffer *vb;
++ struct vb2_v4l2_buffer *vb;
+ int ret = 0;
+
+ ret = pm_runtime_get_sync(ctx->jpeg->dev);
+@@ -724,14 +724,14 @@ static int mtk_jpeg_start_streaming(struct vb2_queue *q, unsigned int count)
+ return 0;
+ err:
+ while ((vb = mtk_jpeg_buf_remove(ctx, q->type)))
+- v4l2_m2m_buf_done(to_vb2_v4l2_buffer(vb), VB2_BUF_STATE_QUEUED);
++ v4l2_m2m_buf_done(vb, VB2_BUF_STATE_QUEUED);
+ return ret;
+ }
+
+ static void mtk_jpeg_stop_streaming(struct vb2_queue *q)
+ {
+ struct mtk_jpeg_ctx *ctx = vb2_get_drv_priv(q);
+- struct vb2_buffer *vb;
++ struct vb2_v4l2_buffer *vb;
+
+ /*
+ * STREAMOFF is an acknowledgment for source change event.
+@@ -743,7 +743,7 @@ static void mtk_jpeg_stop_streaming(struct vb2_queue *q)
+ struct mtk_jpeg_src_buf *src_buf;
+
+ vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+- src_buf = mtk_jpeg_vb2_to_srcbuf(vb);
++ src_buf = mtk_jpeg_vb2_to_srcbuf(&vb->vb2_buf);
+ mtk_jpeg_set_queue_data(ctx, &src_buf->dec_param);
+ ctx->state = MTK_JPEG_RUNNING;
+ } else if (V4L2_TYPE_IS_OUTPUT(q->type)) {
+@@ -751,7 +751,7 @@ static void mtk_jpeg_stop_streaming(struct vb2_queue *q)
+ }
+
+ while ((vb = mtk_jpeg_buf_remove(ctx, q->type)))
+- v4l2_m2m_buf_done(to_vb2_v4l2_buffer(vb), VB2_BUF_STATE_ERROR);
++ v4l2_m2m_buf_done(vb, VB2_BUF_STATE_ERROR);
+
+ pm_runtime_put_sync(ctx->jpeg->dev);
+ }
+@@ -807,7 +807,7 @@ static void mtk_jpeg_device_run(void *priv)
+ {
+ struct mtk_jpeg_ctx *ctx = priv;
+ struct mtk_jpeg_dev *jpeg = ctx->jpeg;
+- struct vb2_buffer *src_buf, *dst_buf;
++ struct vb2_v4l2_buffer *src_buf, *dst_buf;
+ enum vb2_buffer_state buf_state = VB2_BUF_STATE_ERROR;
+ unsigned long flags;
+ struct mtk_jpeg_src_buf *jpeg_src_buf;
+@@ -817,11 +817,11 @@ static void mtk_jpeg_device_run(void *priv)
+
+ src_buf = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+ dst_buf = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
+- jpeg_src_buf = mtk_jpeg_vb2_to_srcbuf(src_buf);
++ jpeg_src_buf = mtk_jpeg_vb2_to_srcbuf(&src_buf->vb2_buf);
+
+ if (jpeg_src_buf->flags & MTK_JPEG_BUF_FLAGS_LAST_FRAME) {
+- for (i = 0; i < dst_buf->num_planes; i++)
+- vb2_set_plane_payload(dst_buf, i, 0);
++ for (i = 0; i < dst_buf->vb2_buf.num_planes; i++)
++ vb2_set_plane_payload(&dst_buf->vb2_buf, i, 0);
+ buf_state = VB2_BUF_STATE_DONE;
+ goto dec_end;
+ }
+@@ -833,8 +833,8 @@ static void mtk_jpeg_device_run(void *priv)
+ return;
+ }
+
+- mtk_jpeg_set_dec_src(ctx, src_buf, &bs);
+- if (mtk_jpeg_set_dec_dst(ctx, &jpeg_src_buf->dec_param, dst_buf, &fb))
++ mtk_jpeg_set_dec_src(ctx, &src_buf->vb2_buf, &bs);
++ if (mtk_jpeg_set_dec_dst(ctx, &jpeg_src_buf->dec_param, &dst_buf->vb2_buf, &fb))
+ goto dec_end;
+
+ spin_lock_irqsave(&jpeg->hw_lock, flags);
+@@ -849,8 +849,8 @@ static void mtk_jpeg_device_run(void *priv)
+ dec_end:
+ v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
+ v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
+- v4l2_m2m_buf_done(to_vb2_v4l2_buffer(src_buf), buf_state);
+- v4l2_m2m_buf_done(to_vb2_v4l2_buffer(dst_buf), buf_state);
++ v4l2_m2m_buf_done(src_buf, buf_state);
++ v4l2_m2m_buf_done(dst_buf, buf_state);
+ v4l2_m2m_job_finish(jpeg->m2m_dev, ctx->fh.m2m_ctx);
+ }
+
+@@ -921,7 +921,7 @@ static irqreturn_t mtk_jpeg_dec_irq(int irq, void *priv)
+ {
+ struct mtk_jpeg_dev *jpeg = priv;
+ struct mtk_jpeg_ctx *ctx;
+- struct vb2_buffer *src_buf, *dst_buf;
++ struct vb2_v4l2_buffer *src_buf, *dst_buf;
+ struct mtk_jpeg_src_buf *jpeg_src_buf;
+ enum vb2_buffer_state buf_state = VB2_BUF_STATE_ERROR;
+ u32 dec_irq_ret;
+@@ -938,7 +938,7 @@ static irqreturn_t mtk_jpeg_dec_irq(int irq, void *priv)
+
+ src_buf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
+ dst_buf = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
+- jpeg_src_buf = mtk_jpeg_vb2_to_srcbuf(src_buf);
++ jpeg_src_buf = mtk_jpeg_vb2_to_srcbuf(&src_buf->vb2_buf);
+
+ if (dec_irq_ret >= MTK_JPEG_DEC_RESULT_UNDERFLOW)
+ mtk_jpeg_dec_reset(jpeg->dec_reg_base);
+@@ -948,15 +948,15 @@ static irqreturn_t mtk_jpeg_dec_irq(int irq, void *priv)
+ goto dec_end;
+ }
+
+- for (i = 0; i < dst_buf->num_planes; i++)
+- vb2_set_plane_payload(dst_buf, i,
++ for (i = 0; i < dst_buf->vb2_buf.num_planes; i++)
++ vb2_set_plane_payload(&dst_buf->vb2_buf, i,
+ jpeg_src_buf->dec_param.comp_size[i]);
+
+ buf_state = VB2_BUF_STATE_DONE;
+
+ dec_end:
+- v4l2_m2m_buf_done(to_vb2_v4l2_buffer(src_buf), buf_state);
+- v4l2_m2m_buf_done(to_vb2_v4l2_buffer(dst_buf), buf_state);
++ v4l2_m2m_buf_done(src_buf, buf_state);
++ v4l2_m2m_buf_done(dst_buf, buf_state);
+ v4l2_m2m_job_finish(jpeg->m2m_dev, ctx->fh.m2m_ctx);
+ return IRQ_HANDLED;
+ }
+diff --git a/drivers/media/platform/mx2_emmaprp.c b/drivers/media/platform/mx2_emmaprp.c
+index 27b078cf98e3..f60f499c596b 100644
+--- a/drivers/media/platform/mx2_emmaprp.c
++++ b/drivers/media/platform/mx2_emmaprp.c
+@@ -274,7 +274,7 @@ static void emmaprp_device_run(void *priv)
+ {
+ struct emmaprp_ctx *ctx = priv;
+ struct emmaprp_q_data *s_q_data, *d_q_data;
+- struct vb2_buffer *src_buf, *dst_buf;
++ struct vb2_v4l2_buffer *src_buf, *dst_buf;
+ struct emmaprp_dev *pcdev = ctx->dev;
+ unsigned int s_width, s_height;
+ unsigned int d_width, d_height;
+@@ -294,8 +294,8 @@ static void emmaprp_device_run(void *priv)
+ d_height = d_q_data->height;
+ d_size = d_width * d_height;
+
+- p_in = vb2_dma_contig_plane_dma_addr(src_buf, 0);
+- p_out = vb2_dma_contig_plane_dma_addr(dst_buf, 0);
++ p_in = vb2_dma_contig_plane_dma_addr(&src_buf->vb2_buf, 0);
++ p_out = vb2_dma_contig_plane_dma_addr(&dst_buf->vb2_buf, 0);
+ if (!p_in || !p_out) {
+ v4l2_err(&pcdev->v4l2_dev,
+ "Acquiring kernel pointers to buffers failed\n");
+diff --git a/drivers/media/platform/rcar-vin/rcar-core.c b/drivers/media/platform/rcar-vin/rcar-core.c
+index f0719ce24b97..aef8d8dab6ab 100644
+--- a/drivers/media/platform/rcar-vin/rcar-core.c
++++ b/drivers/media/platform/rcar-vin/rcar-core.c
+@@ -131,9 +131,13 @@ static int rvin_group_link_notify(struct media_link *link, u32 flags,
+ !is_media_entity_v4l2_video_device(link->sink->entity))
+ return 0;
+
+- /* If any entity is in use don't allow link changes. */
++ /*
++ * Don't allow link changes if any entity in the graph is
++ * streaming, modifying the CHSEL register fields can disrupt
++ * running streams.
++ */
+ media_device_for_each_entity(entity, &group->mdev)
+- if (entity->use_count)
++ if (entity->stream_count)
+ return -EBUSY;
+
+ mutex_lock(&group->lock);
+diff --git a/drivers/media/platform/rockchip/rga/rga.c b/drivers/media/platform/rockchip/rga/rga.c
+index 5c653287185f..b096227a9722 100644
+--- a/drivers/media/platform/rockchip/rga/rga.c
++++ b/drivers/media/platform/rockchip/rga/rga.c
+@@ -43,7 +43,7 @@ static void device_run(void *prv)
+ {
+ struct rga_ctx *ctx = prv;
+ struct rockchip_rga *rga = ctx->rga;
+- struct vb2_buffer *src, *dst;
++ struct vb2_v4l2_buffer *src, *dst;
+ unsigned long flags;
+
+ spin_lock_irqsave(&rga->ctrl_lock, flags);
+@@ -53,8 +53,8 @@ static void device_run(void *prv)
+ src = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+ dst = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
+
+- rga_buf_map(src);
+- rga_buf_map(dst);
++ rga_buf_map(&src->vb2_buf);
++ rga_buf_map(&dst->vb2_buf);
+
+ rga_hw_start(rga);
+
+diff --git a/drivers/media/platform/s5p-g2d/g2d.c b/drivers/media/platform/s5p-g2d/g2d.c
+index 57ab1d1085d1..971c47165010 100644
+--- a/drivers/media/platform/s5p-g2d/g2d.c
++++ b/drivers/media/platform/s5p-g2d/g2d.c
+@@ -513,7 +513,7 @@ static void device_run(void *prv)
+ {
+ struct g2d_ctx *ctx = prv;
+ struct g2d_dev *dev = ctx->dev;
+- struct vb2_buffer *src, *dst;
++ struct vb2_v4l2_buffer *src, *dst;
+ unsigned long flags;
+ u32 cmd = 0;
+
+@@ -528,10 +528,10 @@ static void device_run(void *prv)
+ spin_lock_irqsave(&dev->ctrl_lock, flags);
+
+ g2d_set_src_size(dev, &ctx->in);
+- g2d_set_src_addr(dev, vb2_dma_contig_plane_dma_addr(src, 0));
++ g2d_set_src_addr(dev, vb2_dma_contig_plane_dma_addr(&src->vb2_buf, 0));
+
+ g2d_set_dst_size(dev, &ctx->out);
+- g2d_set_dst_addr(dev, vb2_dma_contig_plane_dma_addr(dst, 0));
++ g2d_set_dst_addr(dev, vb2_dma_contig_plane_dma_addr(&dst->vb2_buf, 0));
+
+ g2d_set_rop4(dev, ctx->rop);
+ g2d_set_flip(dev, ctx->flip);
+diff --git a/drivers/media/platform/s5p-jpeg/jpeg-core.c b/drivers/media/platform/s5p-jpeg/jpeg-core.c
+index 3f9000b70385..370942b67d86 100644
+--- a/drivers/media/platform/s5p-jpeg/jpeg-core.c
++++ b/drivers/media/platform/s5p-jpeg/jpeg-core.c
+@@ -793,14 +793,14 @@ static void skip(struct s5p_jpeg_buffer *buf, long len);
+ static void exynos4_jpeg_parse_decode_h_tbl(struct s5p_jpeg_ctx *ctx)
+ {
+ struct s5p_jpeg *jpeg = ctx->jpeg;
+- struct vb2_buffer *vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
++ struct vb2_v4l2_buffer *vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+ struct s5p_jpeg_buffer jpeg_buffer;
+ unsigned int word;
+ int c, x, components;
+
+ jpeg_buffer.size = 2; /* Ls */
+ jpeg_buffer.data =
+- (unsigned long)vb2_plane_vaddr(vb, 0) + ctx->out_q.sos + 2;
++ (unsigned long)vb2_plane_vaddr(&vb->vb2_buf, 0) + ctx->out_q.sos + 2;
+ jpeg_buffer.curr = 0;
+
+ word = 0;
+@@ -830,14 +830,14 @@ static void exynos4_jpeg_parse_decode_h_tbl(struct s5p_jpeg_ctx *ctx)
+ static void exynos4_jpeg_parse_huff_tbl(struct s5p_jpeg_ctx *ctx)
+ {
+ struct s5p_jpeg *jpeg = ctx->jpeg;
+- struct vb2_buffer *vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
++ struct vb2_v4l2_buffer *vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+ struct s5p_jpeg_buffer jpeg_buffer;
+ unsigned int word;
+ int c, i, n, j;
+
+ for (j = 0; j < ctx->out_q.dht.n; ++j) {
+ jpeg_buffer.size = ctx->out_q.dht.len[j];
+- jpeg_buffer.data = (unsigned long)vb2_plane_vaddr(vb, 0) +
++ jpeg_buffer.data = (unsigned long)vb2_plane_vaddr(&vb->vb2_buf, 0) +
+ ctx->out_q.dht.marker[j];
+ jpeg_buffer.curr = 0;
+
+@@ -889,13 +889,13 @@ static void exynos4_jpeg_parse_huff_tbl(struct s5p_jpeg_ctx *ctx)
+ static void exynos4_jpeg_parse_decode_q_tbl(struct s5p_jpeg_ctx *ctx)
+ {
+ struct s5p_jpeg *jpeg = ctx->jpeg;
+- struct vb2_buffer *vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
++ struct vb2_v4l2_buffer *vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+ struct s5p_jpeg_buffer jpeg_buffer;
+ int c, x, components;
+
+ jpeg_buffer.size = ctx->out_q.sof_len;
+ jpeg_buffer.data =
+- (unsigned long)vb2_plane_vaddr(vb, 0) + ctx->out_q.sof;
++ (unsigned long)vb2_plane_vaddr(&vb->vb2_buf, 0) + ctx->out_q.sof;
+ jpeg_buffer.curr = 0;
+
+ skip(&jpeg_buffer, 5); /* P, Y, X */
+@@ -920,14 +920,14 @@ static void exynos4_jpeg_parse_decode_q_tbl(struct s5p_jpeg_ctx *ctx)
+ static void exynos4_jpeg_parse_q_tbl(struct s5p_jpeg_ctx *ctx)
+ {
+ struct s5p_jpeg *jpeg = ctx->jpeg;
+- struct vb2_buffer *vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
++ struct vb2_v4l2_buffer *vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+ struct s5p_jpeg_buffer jpeg_buffer;
+ unsigned int word;
+ int c, i, j;
+
+ for (j = 0; j < ctx->out_q.dqt.n; ++j) {
+ jpeg_buffer.size = ctx->out_q.dqt.len[j];
+- jpeg_buffer.data = (unsigned long)vb2_plane_vaddr(vb, 0) +
++ jpeg_buffer.data = (unsigned long)vb2_plane_vaddr(&vb->vb2_buf, 0) +
+ ctx->out_q.dqt.marker[j];
+ jpeg_buffer.curr = 0;
+
+@@ -1293,13 +1293,16 @@ static int s5p_jpeg_querycap(struct file *file, void *priv,
+ return 0;
+ }
+
+-static int enum_fmt(struct s5p_jpeg_fmt *sjpeg_formats, int n,
++static int enum_fmt(struct s5p_jpeg_ctx *ctx,
++ struct s5p_jpeg_fmt *sjpeg_formats, int n,
+ struct v4l2_fmtdesc *f, u32 type)
+ {
+ int i, num = 0;
++ unsigned int fmt_ver_flag = ctx->jpeg->variant->fmt_ver_flag;
+
+ for (i = 0; i < n; ++i) {
+- if (sjpeg_formats[i].flags & type) {
++ if (sjpeg_formats[i].flags & type &&
++ sjpeg_formats[i].flags & fmt_ver_flag) {
+ /* index-th format of type type found ? */
+ if (num == f->index)
+ break;
+@@ -1326,11 +1329,11 @@ static int s5p_jpeg_enum_fmt_vid_cap(struct file *file, void *priv,
+ struct s5p_jpeg_ctx *ctx = fh_to_ctx(priv);
+
+ if (ctx->mode == S5P_JPEG_ENCODE)
+- return enum_fmt(sjpeg_formats, SJPEG_NUM_FORMATS, f,
++ return enum_fmt(ctx, sjpeg_formats, SJPEG_NUM_FORMATS, f,
+ SJPEG_FMT_FLAG_ENC_CAPTURE);
+
+- return enum_fmt(sjpeg_formats, SJPEG_NUM_FORMATS, f,
+- SJPEG_FMT_FLAG_DEC_CAPTURE);
++ return enum_fmt(ctx, sjpeg_formats, SJPEG_NUM_FORMATS, f,
++ SJPEG_FMT_FLAG_DEC_CAPTURE);
+ }
+
+ static int s5p_jpeg_enum_fmt_vid_out(struct file *file, void *priv,
+@@ -1339,11 +1342,11 @@ static int s5p_jpeg_enum_fmt_vid_out(struct file *file, void *priv,
+ struct s5p_jpeg_ctx *ctx = fh_to_ctx(priv);
+
+ if (ctx->mode == S5P_JPEG_ENCODE)
+- return enum_fmt(sjpeg_formats, SJPEG_NUM_FORMATS, f,
++ return enum_fmt(ctx, sjpeg_formats, SJPEG_NUM_FORMATS, f,
+ SJPEG_FMT_FLAG_ENC_OUTPUT);
+
+- return enum_fmt(sjpeg_formats, SJPEG_NUM_FORMATS, f,
+- SJPEG_FMT_FLAG_DEC_OUTPUT);
++ return enum_fmt(ctx, sjpeg_formats, SJPEG_NUM_FORMATS, f,
++ SJPEG_FMT_FLAG_DEC_OUTPUT);
+ }
+
+ static struct s5p_jpeg_q_data *get_q_data(struct s5p_jpeg_ctx *ctx,
+@@ -2072,15 +2075,15 @@ static void s5p_jpeg_device_run(void *priv)
+ {
+ struct s5p_jpeg_ctx *ctx = priv;
+ struct s5p_jpeg *jpeg = ctx->jpeg;
+- struct vb2_buffer *src_buf, *dst_buf;
++ struct vb2_v4l2_buffer *src_buf, *dst_buf;
+ unsigned long src_addr, dst_addr, flags;
+
+ spin_lock_irqsave(&ctx->jpeg->slock, flags);
+
+ src_buf = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+ dst_buf = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
+- src_addr = vb2_dma_contig_plane_dma_addr(src_buf, 0);
+- dst_addr = vb2_dma_contig_plane_dma_addr(dst_buf, 0);
++ src_addr = vb2_dma_contig_plane_dma_addr(&src_buf->vb2_buf, 0);
++ dst_addr = vb2_dma_contig_plane_dma_addr(&dst_buf->vb2_buf, 0);
+
+ s5p_jpeg_reset(jpeg->regs);
+ s5p_jpeg_poweron(jpeg->regs);
+@@ -2153,7 +2156,7 @@ static void exynos4_jpeg_set_img_addr(struct s5p_jpeg_ctx *ctx)
+ {
+ struct s5p_jpeg *jpeg = ctx->jpeg;
+ struct s5p_jpeg_fmt *fmt;
+- struct vb2_buffer *vb;
++ struct vb2_v4l2_buffer *vb;
+ struct s5p_jpeg_addr jpeg_addr = {};
+ u32 pix_size, padding_bytes = 0;
+
+@@ -2172,7 +2175,7 @@ static void exynos4_jpeg_set_img_addr(struct s5p_jpeg_ctx *ctx)
+ vb = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
+ }
+
+- jpeg_addr.y = vb2_dma_contig_plane_dma_addr(vb, 0);
++ jpeg_addr.y = vb2_dma_contig_plane_dma_addr(&vb->vb2_buf, 0);
+
+ if (fmt->colplanes == 2) {
+ jpeg_addr.cb = jpeg_addr.y + pix_size - padding_bytes;
+@@ -2190,7 +2193,7 @@ static void exynos4_jpeg_set_img_addr(struct s5p_jpeg_ctx *ctx)
+ static void exynos4_jpeg_set_jpeg_addr(struct s5p_jpeg_ctx *ctx)
+ {
+ struct s5p_jpeg *jpeg = ctx->jpeg;
+- struct vb2_buffer *vb;
++ struct vb2_v4l2_buffer *vb;
+ unsigned int jpeg_addr = 0;
+
+ if (ctx->mode == S5P_JPEG_ENCODE)
+@@ -2198,7 +2201,7 @@ static void exynos4_jpeg_set_jpeg_addr(struct s5p_jpeg_ctx *ctx)
+ else
+ vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+
+- jpeg_addr = vb2_dma_contig_plane_dma_addr(vb, 0);
++ jpeg_addr = vb2_dma_contig_plane_dma_addr(&vb->vb2_buf, 0);
+ if (jpeg->variant->version == SJPEG_EXYNOS5433 &&
+ ctx->mode == S5P_JPEG_DECODE)
+ jpeg_addr += ctx->out_q.sos;
+@@ -2314,7 +2317,7 @@ static void exynos3250_jpeg_set_img_addr(struct s5p_jpeg_ctx *ctx)
+ {
+ struct s5p_jpeg *jpeg = ctx->jpeg;
+ struct s5p_jpeg_fmt *fmt;
+- struct vb2_buffer *vb;
++ struct vb2_v4l2_buffer *vb;
+ struct s5p_jpeg_addr jpeg_addr = {};
+ u32 pix_size;
+
+@@ -2328,7 +2331,7 @@ static void exynos3250_jpeg_set_img_addr(struct s5p_jpeg_ctx *ctx)
+ fmt = ctx->cap_q.fmt;
+ }
+
+- jpeg_addr.y = vb2_dma_contig_plane_dma_addr(vb, 0);
++ jpeg_addr.y = vb2_dma_contig_plane_dma_addr(&vb->vb2_buf, 0);
+
+ if (fmt->colplanes == 2) {
+ jpeg_addr.cb = jpeg_addr.y + pix_size;
+@@ -2346,7 +2349,7 @@ static void exynos3250_jpeg_set_img_addr(struct s5p_jpeg_ctx *ctx)
+ static void exynos3250_jpeg_set_jpeg_addr(struct s5p_jpeg_ctx *ctx)
+ {
+ struct s5p_jpeg *jpeg = ctx->jpeg;
+- struct vb2_buffer *vb;
++ struct vb2_v4l2_buffer *vb;
+ unsigned int jpeg_addr = 0;
+
+ if (ctx->mode == S5P_JPEG_ENCODE)
+@@ -2354,7 +2357,7 @@ static void exynos3250_jpeg_set_jpeg_addr(struct s5p_jpeg_ctx *ctx)
+ else
+ vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+
+- jpeg_addr = vb2_dma_contig_plane_dma_addr(vb, 0);
++ jpeg_addr = vb2_dma_contig_plane_dma_addr(&vb->vb2_buf, 0);
+ exynos3250_jpeg_jpgadr(jpeg->regs, jpeg_addr);
+ }
+
+diff --git a/drivers/media/platform/sh_veu.c b/drivers/media/platform/sh_veu.c
+index 09ae64a0004c..d277cc674349 100644
+--- a/drivers/media/platform/sh_veu.c
++++ b/drivers/media/platform/sh_veu.c
+@@ -273,13 +273,13 @@ static void sh_veu_process(struct sh_veu_dev *veu,
+ static void sh_veu_device_run(void *priv)
+ {
+ struct sh_veu_dev *veu = priv;
+- struct vb2_buffer *src_buf, *dst_buf;
++ struct vb2_v4l2_buffer *src_buf, *dst_buf;
+
+ src_buf = v4l2_m2m_next_src_buf(veu->m2m_ctx);
+ dst_buf = v4l2_m2m_next_dst_buf(veu->m2m_ctx);
+
+ if (src_buf && dst_buf)
+- sh_veu_process(veu, src_buf, dst_buf);
++ sh_veu_process(veu, &src_buf->vb2_buf, &dst_buf->vb2_buf);
+ }
+
+ /* ========== video ioctls ========== */
+diff --git a/drivers/media/platform/sunxi/sun6i-csi/sun6i_csi.c b/drivers/media/platform/sunxi/sun6i-csi/sun6i_csi.c
+index 6950585edb5a..d16f54cdc3b0 100644
+--- a/drivers/media/platform/sunxi/sun6i-csi/sun6i_csi.c
++++ b/drivers/media/platform/sunxi/sun6i-csi/sun6i_csi.c
+@@ -793,7 +793,7 @@ static const struct regmap_config sun6i_csi_regmap_config = {
+ .reg_bits = 32,
+ .reg_stride = 4,
+ .val_bits = 32,
+- .max_register = 0x1000,
++ .max_register = 0x9c,
+ };
+
+ static int sun6i_csi_resource_request(struct sun6i_csi_dev *sdev,
+diff --git a/drivers/media/platform/vimc/Makefile b/drivers/media/platform/vimc/Makefile
+index 4b2e3de7856e..c4fc8e7d365a 100644
+--- a/drivers/media/platform/vimc/Makefile
++++ b/drivers/media/platform/vimc/Makefile
+@@ -5,6 +5,7 @@ vimc_common-objs := vimc-common.o
+ vimc_debayer-objs := vimc-debayer.o
+ vimc_scaler-objs := vimc-scaler.o
+ vimc_sensor-objs := vimc-sensor.o
++vimc_streamer-objs := vimc-streamer.o
+
+ obj-$(CONFIG_VIDEO_VIMC) += vimc.o vimc_capture.o vimc_common.o vimc-debayer.o \
+- vimc_scaler.o vimc_sensor.o
++ vimc_scaler.o vimc_sensor.o vimc_streamer.o
+diff --git a/drivers/media/platform/vimc/vimc-capture.c b/drivers/media/platform/vimc/vimc-capture.c
+index 3f7e9ed56633..80d7515ec420 100644
+--- a/drivers/media/platform/vimc/vimc-capture.c
++++ b/drivers/media/platform/vimc/vimc-capture.c
+@@ -24,6 +24,7 @@
+ #include <media/videobuf2-vmalloc.h>
+
+ #include "vimc-common.h"
++#include "vimc-streamer.h"
+
+ #define VIMC_CAP_DRV_NAME "vimc-capture"
+
+@@ -44,7 +45,7 @@ struct vimc_cap_device {
+ spinlock_t qlock;
+ struct mutex lock;
+ u32 sequence;
+- struct media_pipeline pipe;
++ struct vimc_stream stream;
+ };
+
+ static const struct v4l2_pix_format fmt_default = {
+@@ -248,14 +249,13 @@ static int vimc_cap_start_streaming(struct vb2_queue *vq, unsigned int count)
+ vcap->sequence = 0;
+
+ /* Start the media pipeline */
+- ret = media_pipeline_start(entity, &vcap->pipe);
++ ret = media_pipeline_start(entity, &vcap->stream.pipe);
+ if (ret) {
+ vimc_cap_return_all_buffers(vcap, VB2_BUF_STATE_QUEUED);
+ return ret;
+ }
+
+- /* Enable streaming from the pipe */
+- ret = vimc_pipeline_s_stream(&vcap->vdev.entity, 1);
++ ret = vimc_streamer_s_stream(&vcap->stream, &vcap->ved, 1);
+ if (ret) {
+ media_pipeline_stop(entity);
+ vimc_cap_return_all_buffers(vcap, VB2_BUF_STATE_QUEUED);
+@@ -273,8 +273,7 @@ static void vimc_cap_stop_streaming(struct vb2_queue *vq)
+ {
+ struct vimc_cap_device *vcap = vb2_get_drv_priv(vq);
+
+- /* Disable streaming from the pipe */
+- vimc_pipeline_s_stream(&vcap->vdev.entity, 0);
++ vimc_streamer_s_stream(&vcap->stream, &vcap->ved, 0);
+
+ /* Stop the media pipeline */
+ media_pipeline_stop(&vcap->vdev.entity);
+@@ -355,8 +354,8 @@ static void vimc_cap_comp_unbind(struct device *comp, struct device *master,
+ kfree(vcap);
+ }
+
+-static void vimc_cap_process_frame(struct vimc_ent_device *ved,
+- struct media_pad *sink, const void *frame)
++static void *vimc_cap_process_frame(struct vimc_ent_device *ved,
++ const void *frame)
+ {
+ struct vimc_cap_device *vcap = container_of(ved, struct vimc_cap_device,
+ ved);
+@@ -370,7 +369,7 @@ static void vimc_cap_process_frame(struct vimc_ent_device *ved,
+ typeof(*vimc_buf), list);
+ if (!vimc_buf) {
+ spin_unlock(&vcap->qlock);
+- return;
++ return ERR_PTR(-EAGAIN);
+ }
+
+ /* Remove this entry from the list */
+@@ -391,6 +390,7 @@ static void vimc_cap_process_frame(struct vimc_ent_device *ved,
+ vb2_set_plane_payload(&vimc_buf->vb2.vb2_buf, 0,
+ vcap->format.sizeimage);
+ vb2_buffer_done(&vimc_buf->vb2.vb2_buf, VB2_BUF_STATE_DONE);
++ return NULL;
+ }
+
+ static int vimc_cap_comp_bind(struct device *comp, struct device *master,
+diff --git a/drivers/media/platform/vimc/vimc-common.c b/drivers/media/platform/vimc/vimc-common.c
+index 867e24dbd6b5..c1a74bb2df58 100644
+--- a/drivers/media/platform/vimc/vimc-common.c
++++ b/drivers/media/platform/vimc/vimc-common.c
+@@ -207,41 +207,6 @@ const struct vimc_pix_map *vimc_pix_map_by_pixelformat(u32 pixelformat)
+ }
+ EXPORT_SYMBOL_GPL(vimc_pix_map_by_pixelformat);
+
+-int vimc_propagate_frame(struct media_pad *src, const void *frame)
+-{
+- struct media_link *link;
+-
+- if (!(src->flags & MEDIA_PAD_FL_SOURCE))
+- return -EINVAL;
+-
+- /* Send this frame to all sink pads that are direct linked */
+- list_for_each_entry(link, &src->entity->links, list) {
+- if (link->source == src &&
+- (link->flags & MEDIA_LNK_FL_ENABLED)) {
+- struct vimc_ent_device *ved = NULL;
+- struct media_entity *entity = link->sink->entity;
+-
+- if (is_media_entity_v4l2_subdev(entity)) {
+- struct v4l2_subdev *sd =
+- container_of(entity, struct v4l2_subdev,
+- entity);
+- ved = v4l2_get_subdevdata(sd);
+- } else if (is_media_entity_v4l2_video_device(entity)) {
+- struct video_device *vdev =
+- container_of(entity,
+- struct video_device,
+- entity);
+- ved = video_get_drvdata(vdev);
+- }
+- if (ved && ved->process_frame)
+- ved->process_frame(ved, link->sink, frame);
+- }
+- }
+-
+- return 0;
+-}
+-EXPORT_SYMBOL_GPL(vimc_propagate_frame);
+-
+ /* Helper function to allocate and initialize pads */
+ struct media_pad *vimc_pads_init(u16 num_pads, const unsigned long *pads_flag)
+ {
+diff --git a/drivers/media/platform/vimc/vimc-common.h b/drivers/media/platform/vimc/vimc-common.h
+index 2e9981b18166..6ed969d9efbb 100644
+--- a/drivers/media/platform/vimc/vimc-common.h
++++ b/drivers/media/platform/vimc/vimc-common.h
+@@ -113,23 +113,12 @@ struct vimc_pix_map {
+ struct vimc_ent_device {
+ struct media_entity *ent;
+ struct media_pad *pads;
+- void (*process_frame)(struct vimc_ent_device *ved,
+- struct media_pad *sink, const void *frame);
++ void * (*process_frame)(struct vimc_ent_device *ved,
++ const void *frame);
+ void (*vdev_get_format)(struct vimc_ent_device *ved,
+ struct v4l2_pix_format *fmt);
+ };
+
+-/**
+- * vimc_propagate_frame - propagate a frame through the topology
+- *
+- * @src: the source pad where the frame is being originated
+- * @frame: the frame to be propagated
+- *
+- * This function will call the process_frame callback from the vimc_ent_device
+- * struct of the nodes directly connected to the @src pad
+- */
+-int vimc_propagate_frame(struct media_pad *src, const void *frame);
+-
+ /**
+ * vimc_pads_init - initialize pads
+ *
+diff --git a/drivers/media/platform/vimc/vimc-debayer.c b/drivers/media/platform/vimc/vimc-debayer.c
+index 77887f66f323..7d77c63b99d2 100644
+--- a/drivers/media/platform/vimc/vimc-debayer.c
++++ b/drivers/media/platform/vimc/vimc-debayer.c
+@@ -321,7 +321,6 @@ static void vimc_deb_set_rgb_mbus_fmt_rgb888_1x24(struct vimc_deb_device *vdeb,
+ static int vimc_deb_s_stream(struct v4l2_subdev *sd, int enable)
+ {
+ struct vimc_deb_device *vdeb = v4l2_get_subdevdata(sd);
+- int ret;
+
+ if (enable) {
+ const struct vimc_pix_map *vpix;
+@@ -351,22 +350,10 @@ static int vimc_deb_s_stream(struct v4l2_subdev *sd, int enable)
+ if (!vdeb->src_frame)
+ return -ENOMEM;
+
+- /* Turn the stream on in the subdevices directly connected */
+- ret = vimc_pipeline_s_stream(&vdeb->sd.entity, 1);
+- if (ret) {
+- vfree(vdeb->src_frame);
+- vdeb->src_frame = NULL;
+- return ret;
+- }
+ } else {
+ if (!vdeb->src_frame)
+ return 0;
+
+- /* Disable streaming from the pipe */
+- ret = vimc_pipeline_s_stream(&vdeb->sd.entity, 0);
+- if (ret)
+- return ret;
+-
+ vfree(vdeb->src_frame);
+ vdeb->src_frame = NULL;
+ }
+@@ -480,9 +467,8 @@ static void vimc_deb_calc_rgb_sink(struct vimc_deb_device *vdeb,
+ }
+ }
+
+-static void vimc_deb_process_frame(struct vimc_ent_device *ved,
+- struct media_pad *sink,
+- const void *sink_frame)
++static void *vimc_deb_process_frame(struct vimc_ent_device *ved,
++ const void *sink_frame)
+ {
+ struct vimc_deb_device *vdeb = container_of(ved, struct vimc_deb_device,
+ ved);
+@@ -491,7 +477,7 @@ static void vimc_deb_process_frame(struct vimc_ent_device *ved,
+
+ /* If the stream in this node is not active, just return */
+ if (!vdeb->src_frame)
+- return;
++ return ERR_PTR(-EINVAL);
+
+ for (i = 0; i < vdeb->sink_fmt.height; i++)
+ for (j = 0; j < vdeb->sink_fmt.width; j++) {
+@@ -499,12 +485,8 @@ static void vimc_deb_process_frame(struct vimc_ent_device *ved,
+ vdeb->set_rgb_src(vdeb, i, j, rgb);
+ }
+
+- /* Propagate the frame through all source pads */
+- for (i = 1; i < vdeb->sd.entity.num_pads; i++) {
+- struct media_pad *pad = &vdeb->sd.entity.pads[i];
++ return vdeb->src_frame;
+
+- vimc_propagate_frame(pad, vdeb->src_frame);
+- }
+ }
+
+ static void vimc_deb_comp_unbind(struct device *comp, struct device *master,
+diff --git a/drivers/media/platform/vimc/vimc-scaler.c b/drivers/media/platform/vimc/vimc-scaler.c
+index b0952ee86296..39b2a73dfcc1 100644
+--- a/drivers/media/platform/vimc/vimc-scaler.c
++++ b/drivers/media/platform/vimc/vimc-scaler.c
+@@ -217,7 +217,6 @@ static const struct v4l2_subdev_pad_ops vimc_sca_pad_ops = {
+ static int vimc_sca_s_stream(struct v4l2_subdev *sd, int enable)
+ {
+ struct vimc_sca_device *vsca = v4l2_get_subdevdata(sd);
+- int ret;
+
+ if (enable) {
+ const struct vimc_pix_map *vpix;
+@@ -245,22 +244,10 @@ static int vimc_sca_s_stream(struct v4l2_subdev *sd, int enable)
+ if (!vsca->src_frame)
+ return -ENOMEM;
+
+- /* Turn the stream on in the subdevices directly connected */
+- ret = vimc_pipeline_s_stream(&vsca->sd.entity, 1);
+- if (ret) {
+- vfree(vsca->src_frame);
+- vsca->src_frame = NULL;
+- return ret;
+- }
+ } else {
+ if (!vsca->src_frame)
+ return 0;
+
+- /* Disable streaming from the pipe */
+- ret = vimc_pipeline_s_stream(&vsca->sd.entity, 0);
+- if (ret)
+- return ret;
+-
+ vfree(vsca->src_frame);
+ vsca->src_frame = NULL;
+ }
+@@ -346,26 +333,19 @@ static void vimc_sca_fill_src_frame(const struct vimc_sca_device *const vsca,
+ vimc_sca_scale_pix(vsca, i, j, sink_frame);
+ }
+
+-static void vimc_sca_process_frame(struct vimc_ent_device *ved,
+- struct media_pad *sink,
+- const void *sink_frame)
++static void *vimc_sca_process_frame(struct vimc_ent_device *ved,
++ const void *sink_frame)
+ {
+ struct vimc_sca_device *vsca = container_of(ved, struct vimc_sca_device,
+ ved);
+- unsigned int i;
+
+ /* If the stream in this node is not active, just return */
+ if (!vsca->src_frame)
+- return;
++ return ERR_PTR(-EINVAL);
+
+ vimc_sca_fill_src_frame(vsca, sink_frame);
+
+- /* Propagate the frame through all source pads */
+- for (i = 1; i < vsca->sd.entity.num_pads; i++) {
+- struct media_pad *pad = &vsca->sd.entity.pads[i];
+-
+- vimc_propagate_frame(pad, vsca->src_frame);
+- }
++ return vsca->src_frame;
+ };
+
+ static void vimc_sca_comp_unbind(struct device *comp, struct device *master,
+diff --git a/drivers/media/platform/vimc/vimc-sensor.c b/drivers/media/platform/vimc/vimc-sensor.c
+index 32ca9c6172b1..93961a1e694f 100644
+--- a/drivers/media/platform/vimc/vimc-sensor.c
++++ b/drivers/media/platform/vimc/vimc-sensor.c
+@@ -16,8 +16,6 @@
+ */
+
+ #include <linux/component.h>
+-#include <linux/freezer.h>
+-#include <linux/kthread.h>
+ #include <linux/module.h>
+ #include <linux/mod_devicetable.h>
+ #include <linux/platform_device.h>
+@@ -201,38 +199,27 @@ static const struct v4l2_subdev_pad_ops vimc_sen_pad_ops = {
+ .set_fmt = vimc_sen_set_fmt,
+ };
+
+-static int vimc_sen_tpg_thread(void *data)
++static void *vimc_sen_process_frame(struct vimc_ent_device *ved,
++ const void *sink_frame)
+ {
+- struct vimc_sen_device *vsen = data;
+- unsigned int i;
+-
+- set_freezable();
+- set_current_state(TASK_UNINTERRUPTIBLE);
+-
+- for (;;) {
+- try_to_freeze();
+- if (kthread_should_stop())
+- break;
+-
+- tpg_fill_plane_buffer(&vsen->tpg, 0, 0, vsen->frame);
++ struct vimc_sen_device *vsen = container_of(ved, struct vimc_sen_device,
++ ved);
++ const struct vimc_pix_map *vpix;
++ unsigned int frame_size;
+
+- /* Send the frame to all source pads */
+- for (i = 0; i < vsen->sd.entity.num_pads; i++)
+- vimc_propagate_frame(&vsen->sd.entity.pads[i],
+- vsen->frame);
++ /* Calculate the frame size */
++ vpix = vimc_pix_map_by_code(vsen->mbus_format.code);
++ frame_size = vsen->mbus_format.width * vpix->bpp *
++ vsen->mbus_format.height;
+
+- /* 60 frames per second */
+- schedule_timeout(HZ/60);
+- }
+-
+- return 0;
++ tpg_fill_plane_buffer(&vsen->tpg, 0, 0, vsen->frame);
++ return vsen->frame;
+ }
+
+ static int vimc_sen_s_stream(struct v4l2_subdev *sd, int enable)
+ {
+ struct vimc_sen_device *vsen =
+ container_of(sd, struct vimc_sen_device, sd);
+- int ret;
+
+ if (enable) {
+ const struct vimc_pix_map *vpix;
+@@ -258,26 +245,8 @@ static int vimc_sen_s_stream(struct v4l2_subdev *sd, int enable)
+ /* configure the test pattern generator */
+ vimc_sen_tpg_s_format(vsen);
+
+- /* Initialize the image generator thread */
+- vsen->kthread_sen = kthread_run(vimc_sen_tpg_thread, vsen,
+- "%s-sen", vsen->sd.v4l2_dev->name);
+- if (IS_ERR(vsen->kthread_sen)) {
+- dev_err(vsen->dev, "%s: kernel_thread() failed\n",
+- vsen->sd.name);
+- vfree(vsen->frame);
+- vsen->frame = NULL;
+- return PTR_ERR(vsen->kthread_sen);
+- }
+ } else {
+- if (!vsen->kthread_sen)
+- return 0;
+-
+- /* Stop image generator */
+- ret = kthread_stop(vsen->kthread_sen);
+- if (ret)
+- return ret;
+
+- vsen->kthread_sen = NULL;
+ vfree(vsen->frame);
+ vsen->frame = NULL;
+ return 0;
+@@ -413,6 +382,7 @@ static int vimc_sen_comp_bind(struct device *comp, struct device *master,
+ if (ret)
+ goto err_free_hdl;
+
++ vsen->ved.process_frame = vimc_sen_process_frame;
+ dev_set_drvdata(comp, &vsen->ved);
+ vsen->dev = comp;
+
+diff --git a/drivers/media/platform/vimc/vimc-streamer.c b/drivers/media/platform/vimc/vimc-streamer.c
+new file mode 100644
+index 000000000000..fcc897fb247b
+--- /dev/null
++++ b/drivers/media/platform/vimc/vimc-streamer.c
+@@ -0,0 +1,188 @@
++// SPDX-License-Identifier: GPL-2.0+
++/*
++ * vimc-streamer.c Virtual Media Controller Driver
++ *
++ * Copyright (C) 2018 Lucas A. M. Magalhães <lucmaga@gmail.com>
++ *
++ */
++
++#include <linux/init.h>
++#include <linux/module.h>
++#include <linux/freezer.h>
++#include <linux/kthread.h>
++
++#include "vimc-streamer.h"
++
++/**
++ * vimc_get_source_entity - get the entity connected with the first sink pad
++ *
++ * @ent: reference media_entity
++ *
++ * Helper function that returns the media entity containing the source pad
++ * linked with the first sink pad from the given media entity pad list.
++ */
++static struct media_entity *vimc_get_source_entity(struct media_entity *ent)
++{
++ struct media_pad *pad;
++ int i;
++
++ for (i = 0; i < ent->num_pads; i++) {
++ if (ent->pads[i].flags & MEDIA_PAD_FL_SOURCE)
++ continue;
++ pad = media_entity_remote_pad(&ent->pads[i]);
++ return pad ? pad->entity : NULL;
++ }
++ return NULL;
++}
++
++/*
++ * vimc_streamer_pipeline_terminate - Disable stream in all entities of the pipeline
++ *
++ * @stream: the pointer to the stream structure with the pipeline to be
++ * disabled.
++ *
++ * Calls s_stream to disable the stream in each entity of the pipeline
++ *
++ */
++static void vimc_streamer_pipeline_terminate(struct vimc_stream *stream)
++{
++ struct media_entity *entity;
++ struct v4l2_subdev *sd;
++
++ while (stream->pipe_size) {
++ stream->pipe_size--;
++ entity = stream->ved_pipeline[stream->pipe_size]->ent;
++ entity = vimc_get_source_entity(entity);
++ stream->ved_pipeline[stream->pipe_size] = NULL;
++
++ if (!is_media_entity_v4l2_subdev(entity))
++ continue;
++
++ sd = media_entity_to_v4l2_subdev(entity);
++ v4l2_subdev_call(sd, video, s_stream, 0);
++ }
++}
++
++/*
++ * vimc_streamer_pipeline_init - initializes the stream structure
++ *
++ * @stream: the pointer to the stream structure to be initialized
++ * @ved: the pointer to the vimc entity initializing the stream
++ *
++ * Initializes the stream structure. Walks through the entity graph to
++ * construct the pipeline used later by the streamer thread.
++ * Calls s_stream to enable stream in all entities of the pipeline.
++ */
++static int vimc_streamer_pipeline_init(struct vimc_stream *stream,
++ struct vimc_ent_device *ved)
++{
++ struct media_entity *entity;
++ struct video_device *vdev;
++ struct v4l2_subdev *sd;
++ int ret = 0;
++
++ stream->pipe_size = 0;
++ while (stream->pipe_size < VIMC_STREAMER_PIPELINE_MAX_SIZE) {
++ if (!ved) {
++ vimc_streamer_pipeline_terminate(stream);
++ return -EINVAL;
++ }
++ stream->ved_pipeline[stream->pipe_size++] = ved;
++
++ entity = vimc_get_source_entity(ved->ent);
++		/* Check if the end of the pipeline was reached */
++ if (!entity)
++ return 0;
++
++ if (is_media_entity_v4l2_subdev(entity)) {
++ sd = media_entity_to_v4l2_subdev(entity);
++ ret = v4l2_subdev_call(sd, video, s_stream, 1);
++ if (ret && ret != -ENOIOCTLCMD) {
++ vimc_streamer_pipeline_terminate(stream);
++ return ret;
++ }
++ ved = v4l2_get_subdevdata(sd);
++ } else {
++ vdev = container_of(entity,
++ struct video_device,
++ entity);
++ ved = video_get_drvdata(vdev);
++ }
++ }
++
++ vimc_streamer_pipeline_terminate(stream);
++ return -EINVAL;
++}
++
++static int vimc_streamer_thread(void *data)
++{
++ struct vimc_stream *stream = data;
++ int i;
++
++ set_freezable();
++ set_current_state(TASK_UNINTERRUPTIBLE);
++
++ for (;;) {
++ try_to_freeze();
++ if (kthread_should_stop())
++ break;
++
++ for (i = stream->pipe_size - 1; i >= 0; i--) {
++ stream->frame = stream->ved_pipeline[i]->process_frame(
++ stream->ved_pipeline[i],
++ stream->frame);
++ if (!stream->frame)
++ break;
++ if (IS_ERR(stream->frame))
++ break;
++ }
++		/* wait one frame period (60 fps) */
++ schedule_timeout(HZ / 60);
++ }
++
++ return 0;
++}
++
++int vimc_streamer_s_stream(struct vimc_stream *stream,
++ struct vimc_ent_device *ved,
++ int enable)
++{
++ int ret;
++
++ if (!stream || !ved)
++ return -EINVAL;
++
++ if (enable) {
++ if (stream->kthread)
++ return 0;
++
++ ret = vimc_streamer_pipeline_init(stream, ved);
++ if (ret)
++ return ret;
++
++ stream->kthread = kthread_run(vimc_streamer_thread, stream,
++ "vimc-streamer thread");
++
++ if (IS_ERR(stream->kthread))
++ return PTR_ERR(stream->kthread);
++
++ } else {
++ if (!stream->kthread)
++ return 0;
++
++ ret = kthread_stop(stream->kthread);
++ if (ret)
++ return ret;
++
++ stream->kthread = NULL;
++
++ vimc_streamer_pipeline_terminate(stream);
++ }
++
++ return 0;
++}
++EXPORT_SYMBOL_GPL(vimc_streamer_s_stream);
++
++MODULE_DESCRIPTION("Virtual Media Controller Driver (VIMC) Streamer");
++MODULE_AUTHOR("Lucas A. M. Magalhães <lucmaga@gmail.com>");
++MODULE_LICENSE("GPL");
+diff --git a/drivers/media/platform/vimc/vimc-streamer.h b/drivers/media/platform/vimc/vimc-streamer.h
+new file mode 100644
+index 000000000000..752af2e2d5a2
+--- /dev/null
++++ b/drivers/media/platform/vimc/vimc-streamer.h
+@@ -0,0 +1,38 @@
++/* SPDX-License-Identifier: GPL-2.0+ */
++/*
++ * vimc-streamer.h Virtual Media Controller Driver
++ *
++ * Copyright (C) 2018 Lucas A. M. Magalhães <lucmaga@gmail.com>
++ *
++ */
++
++#ifndef _VIMC_STREAMER_H_
++#define _VIMC_STREAMER_H_
++
++#include <media/media-device.h>
++
++#include "vimc-common.h"
++
++#define VIMC_STREAMER_PIPELINE_MAX_SIZE 16
++
++struct vimc_stream {
++ struct media_pipeline pipe;
++ struct vimc_ent_device *ved_pipeline[VIMC_STREAMER_PIPELINE_MAX_SIZE];
++ unsigned int pipe_size;
++ u8 *frame;
++ struct task_struct *kthread;
++};
++
++/**
++ * vimc_streamer_s_stream - start/stop the stream
++ *
++ * @stream: the pointer to the stream to start or stop
++ * @ved: The last entity of the streamer pipeline
++ * @enable: non-zero to start the stream, zero to stop it
++ *
++ */
++int vimc_streamer_s_stream(struct vimc_stream *stream,
++ struct vimc_ent_device *ved,
++ int enable);
++
++#endif //_VIMC_STREAMER_H_
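The vimc rework above replaces the per-sensor kthread and vimc_propagate_frame() with a single streamer thread that walks a precomputed ved_pipeline[] built from the capture entity back towards the sensor. Each entity now implements the reworked process_frame() callback: it takes the upstream frame and returns its output frame, NULL when the frame was consumed (the capture node), or an ERR_PTR() when it cannot produce one, which makes the thread skip the rest of that pass. A minimal sketch of an entity under this contract (hypothetical entity and field names, not part of the patch):

    /* Hypothetical vimc entity illustrating the reworked callback */
    static void *example_process_frame(struct vimc_ent_device *ved,
                                       const void *sink_frame)
    {
            struct example_device *edev =
                    container_of(ved, struct example_device, ved);

            /* Stream not active in this node: abort this pass */
            if (!edev->src_frame)
                    return ERR_PTR(-EINVAL);

            /* ... transform sink_frame into edev->src_frame ... */

            /* The returned frame feeds the next entity in ved_pipeline[] */
            return edev->src_frame;
    }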
+diff --git a/drivers/media/rc/rc-main.c b/drivers/media/rc/rc-main.c
+index 66a174979b3c..81745644f720 100644
+--- a/drivers/media/rc/rc-main.c
++++ b/drivers/media/rc/rc-main.c
+@@ -274,6 +274,7 @@ static unsigned int ir_update_mapping(struct rc_dev *dev,
+ unsigned int new_keycode)
+ {
+ int old_keycode = rc_map->scan[index].keycode;
++ int i;
+
+ /* Did the user wish to remove the mapping? */
+ if (new_keycode == KEY_RESERVED || new_keycode == KEY_UNKNOWN) {
+@@ -288,9 +289,20 @@ static unsigned int ir_update_mapping(struct rc_dev *dev,
+ old_keycode == KEY_RESERVED ? "New" : "Replacing",
+ rc_map->scan[index].scancode, new_keycode);
+ rc_map->scan[index].keycode = new_keycode;
++ __set_bit(new_keycode, dev->input_dev->keybit);
+ }
+
+ if (old_keycode != KEY_RESERVED) {
++ /* A previous mapping was updated... */
++ __clear_bit(old_keycode, dev->input_dev->keybit);
++ /* ... but another scancode might use the same keycode */
++ for (i = 0; i < rc_map->len; i++) {
++ if (rc_map->scan[i].keycode == old_keycode) {
++ __set_bit(old_keycode, dev->input_dev->keybit);
++ break;
++ }
++ }
++
+ /* Possibly shrink the keytable, failure is not a problem */
+ ir_resize_table(dev, rc_map, GFP_ATOMIC);
+ }
+@@ -1750,7 +1762,6 @@ static int rc_prepare_rx_device(struct rc_dev *dev)
+ set_bit(EV_REP, dev->input_dev->evbit);
+ set_bit(EV_MSC, dev->input_dev->evbit);
+ set_bit(MSC_SCAN, dev->input_dev->mscbit);
+- bitmap_fill(dev->input_dev->keybit, KEY_CNT);
+
+ /* Pointer/mouse events */
+ set_bit(EV_REL, dev->input_dev->evbit);
+diff --git a/drivers/media/usb/uvc/uvc_ctrl.c b/drivers/media/usb/uvc/uvc_ctrl.c
+index d45415cbe6e7..14cff91b7aea 100644
+--- a/drivers/media/usb/uvc/uvc_ctrl.c
++++ b/drivers/media/usb/uvc/uvc_ctrl.c
+@@ -1212,7 +1212,7 @@ static void uvc_ctrl_fill_event(struct uvc_video_chain *chain,
+
+ __uvc_query_v4l2_ctrl(chain, ctrl, mapping, &v4l2_ctrl);
+
+- memset(ev->reserved, 0, sizeof(ev->reserved));
++ memset(ev, 0, sizeof(*ev));
+ ev->type = V4L2_EVENT_CTRL;
+ ev->id = v4l2_ctrl.id;
+ ev->u.ctrl.value = value;
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index b62cbd800111..33a22c016456 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -1106,11 +1106,19 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
+ return -EINVAL;
+ }
+
+- /* Make sure the terminal type MSB is not null, otherwise it
+- * could be confused with a unit.
++ /*
++ * Reject invalid terminal types that would cause issues:
++ *
++ * - The high byte must be non-zero, otherwise it would be
++ * confused with a unit.
++ *
++ * - Bit 15 must be 0, as we use it internally as a terminal
++ * direction flag.
++ *
++ * Other unknown types are accepted.
+ */
+ type = get_unaligned_le16(&buffer[4]);
+- if ((type & 0xff00) == 0) {
++ if ((type & 0x7f00) == 0 || (type & 0x8000) != 0) {
+ uvc_trace(UVC_TRACE_DESCR, "device %d videocontrol "
+ "interface %d INPUT_TERMINAL %d has invalid "
+ "type 0x%04x, skipping\n", udev->devnum,
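For illustration, the new test accepts, e.g., ITT_CAMERA (0x0201) while rejecting 0x0001 (empty high byte within the low 15 bits) and anything with bit 15 set, such as 0x8201. The same check restated as a standalone predicate (a sketch, not part of the patch):

    /* Sketch of the terminal-type validity test applied above */
    static bool terminal_type_ok(u16 type)
    {
            if ((type & 0x7f00) == 0)       /* high byte (minus bit 15) empty */
                    return false;
            if (type & 0x8000)              /* bit 15 reserved as direction flag */
                    return false;
            return true;
    }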
+diff --git a/drivers/media/usb/uvc/uvc_video.c b/drivers/media/usb/uvc/uvc_video.c
+index 84525ff04745..e314657a1843 100644
+--- a/drivers/media/usb/uvc/uvc_video.c
++++ b/drivers/media/usb/uvc/uvc_video.c
+@@ -676,6 +676,14 @@ void uvc_video_clock_update(struct uvc_streaming *stream,
+ if (!uvc_hw_timestamps_param)
+ return;
+
++ /*
++ * We will get called from __vb2_queue_cancel() if there are buffers
++ * done but not dequeued by the user, but the sample array has already
++ * been released at that time. Just bail out in that case.
++ */
++ if (!clock->samples)
++ return;
++
+ spin_lock_irqsave(&clock->lock, flags);
+
+ if (clock->count < clock->size)
+diff --git a/drivers/media/v4l2-core/v4l2-ctrls.c b/drivers/media/v4l2-core/v4l2-ctrls.c
+index 5e3806feb5d7..8a82427c4d54 100644
+--- a/drivers/media/v4l2-core/v4l2-ctrls.c
++++ b/drivers/media/v4l2-core/v4l2-ctrls.c
+@@ -1387,7 +1387,7 @@ static u32 user_flags(const struct v4l2_ctrl *ctrl)
+
+ static void fill_event(struct v4l2_event *ev, struct v4l2_ctrl *ctrl, u32 changes)
+ {
+- memset(ev->reserved, 0, sizeof(ev->reserved));
++ memset(ev, 0, sizeof(*ev));
+ ev->type = V4L2_EVENT_CTRL;
+ ev->id = ctrl->id;
+ ev->u.ctrl.changes = changes;
+diff --git a/drivers/mfd/sm501.c b/drivers/mfd/sm501.c
+index a530972c5a7e..e0173bf4b0dc 100644
+--- a/drivers/mfd/sm501.c
++++ b/drivers/mfd/sm501.c
+@@ -1145,6 +1145,9 @@ static int sm501_register_gpio_i2c_instance(struct sm501_devdata *sm,
+ lookup = devm_kzalloc(&pdev->dev,
+ sizeof(*lookup) + 3 * sizeof(struct gpiod_lookup),
+ GFP_KERNEL);
++ if (!lookup)
++ return -ENOMEM;
++
+ lookup->dev_id = "i2c-gpio";
+ if (iic->pin_sda < 32)
+ lookup->table[0].chip_label = "SM501-LOW";
+diff --git a/drivers/misc/cxl/guest.c b/drivers/misc/cxl/guest.c
+index 5d28d9e454f5..08f4a512afad 100644
+--- a/drivers/misc/cxl/guest.c
++++ b/drivers/misc/cxl/guest.c
+@@ -267,6 +267,7 @@ static int guest_reset(struct cxl *adapter)
+ int i, rc;
+
+ pr_devel("Adapter reset request\n");
++ spin_lock(&adapter->afu_list_lock);
+ for (i = 0; i < adapter->slices; i++) {
+ if ((afu = adapter->afu[i])) {
+ pci_error_handlers(afu, CXL_ERROR_DETECTED_EVENT,
+@@ -283,6 +284,7 @@ static int guest_reset(struct cxl *adapter)
+ pci_error_handlers(afu, CXL_RESUME_EVENT, 0);
+ }
+ }
++ spin_unlock(&adapter->afu_list_lock);
+ return rc;
+ }
+
+diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
+index c79ba1c699ad..300531d6136f 100644
+--- a/drivers/misc/cxl/pci.c
++++ b/drivers/misc/cxl/pci.c
+@@ -1805,7 +1805,7 @@ static pci_ers_result_t cxl_vphb_error_detected(struct cxl_afu *afu,
+ /* There should only be one entry, but go through the list
+ * anyway
+ */
+- if (afu->phb == NULL)
++ if (afu == NULL || afu->phb == NULL)
+ return result;
+
+ list_for_each_entry(afu_dev, &afu->phb->bus->devices, bus_list) {
+@@ -1832,7 +1832,8 @@ static pci_ers_result_t cxl_pci_error_detected(struct pci_dev *pdev,
+ {
+ struct cxl *adapter = pci_get_drvdata(pdev);
+ struct cxl_afu *afu;
+- pci_ers_result_t result = PCI_ERS_RESULT_NEED_RESET, afu_result;
++ pci_ers_result_t result = PCI_ERS_RESULT_NEED_RESET;
++ pci_ers_result_t afu_result = PCI_ERS_RESULT_NEED_RESET;
+ int i;
+
+ /* At this point, we could still have an interrupt pending.
+@@ -1843,6 +1844,7 @@ static pci_ers_result_t cxl_pci_error_detected(struct pci_dev *pdev,
+
+ /* If we're permanently dead, give up. */
+ if (state == pci_channel_io_perm_failure) {
++ spin_lock(&adapter->afu_list_lock);
+ for (i = 0; i < adapter->slices; i++) {
+ afu = adapter->afu[i];
+ /*
+@@ -1851,6 +1853,7 @@ static pci_ers_result_t cxl_pci_error_detected(struct pci_dev *pdev,
+ */
+ cxl_vphb_error_detected(afu, state);
+ }
++ spin_unlock(&adapter->afu_list_lock);
+ return PCI_ERS_RESULT_DISCONNECT;
+ }
+
+@@ -1932,11 +1935,17 @@ static pci_ers_result_t cxl_pci_error_detected(struct pci_dev *pdev,
+ * * In slot_reset, free the old resources and allocate new ones.
+ * * In resume, clear the flag to allow things to start.
+ */
++
++ /* Make sure no one else changes the afu list */
++ spin_lock(&adapter->afu_list_lock);
++
+ for (i = 0; i < adapter->slices; i++) {
+ afu = adapter->afu[i];
+
+- afu_result = cxl_vphb_error_detected(afu, state);
++ if (afu == NULL)
++ continue;
+
++ afu_result = cxl_vphb_error_detected(afu, state);
+ cxl_context_detach_all(afu);
+ cxl_ops->afu_deactivate_mode(afu, afu->current_mode);
+ pci_deconfigure_afu(afu);
+@@ -1948,6 +1957,7 @@ static pci_ers_result_t cxl_pci_error_detected(struct pci_dev *pdev,
+ (result == PCI_ERS_RESULT_NEED_RESET))
+ result = PCI_ERS_RESULT_NONE;
+ }
++ spin_unlock(&adapter->afu_list_lock);
+
+ /* should take the context lock here */
+ if (cxl_adapter_context_lock(adapter) != 0)
+@@ -1980,14 +1990,18 @@ static pci_ers_result_t cxl_pci_slot_reset(struct pci_dev *pdev)
+ */
+ cxl_adapter_context_unlock(adapter);
+
++ spin_lock(&adapter->afu_list_lock);
+ for (i = 0; i < adapter->slices; i++) {
+ afu = adapter->afu[i];
+
++ if (afu == NULL)
++ continue;
++
+ if (pci_configure_afu(afu, adapter, pdev))
+- goto err;
++ goto err_unlock;
+
+ if (cxl_afu_select_best_mode(afu))
+- goto err;
++ goto err_unlock;
+
+ if (afu->phb == NULL)
+ continue;
+@@ -1999,16 +2013,16 @@ static pci_ers_result_t cxl_pci_slot_reset(struct pci_dev *pdev)
+ ctx = cxl_get_context(afu_dev);
+
+ if (ctx && cxl_release_context(ctx))
+- goto err;
++ goto err_unlock;
+
+ ctx = cxl_dev_context_init(afu_dev);
+ if (IS_ERR(ctx))
+- goto err;
++ goto err_unlock;
+
+ afu_dev->dev.archdata.cxl_ctx = ctx;
+
+ if (cxl_ops->afu_check_and_enable(afu))
+- goto err;
++ goto err_unlock;
+
+ afu_dev->error_state = pci_channel_io_normal;
+
+@@ -2029,8 +2043,13 @@ static pci_ers_result_t cxl_pci_slot_reset(struct pci_dev *pdev)
+ result = PCI_ERS_RESULT_DISCONNECT;
+ }
+ }
++
++ spin_unlock(&adapter->afu_list_lock);
+ return result;
+
++err_unlock:
++ spin_unlock(&adapter->afu_list_lock);
++
+ err:
+ /* All the bits that happen in both error_detected and cxl_remove
+ * should be idempotent, so we don't need to worry about leaving a mix
+@@ -2051,10 +2070,11 @@ static void cxl_pci_resume(struct pci_dev *pdev)
+ * This is not the place to be checking if everything came back up
+ * properly, because there's no return value: do that in slot_reset.
+ */
++ spin_lock(&adapter->afu_list_lock);
+ for (i = 0; i < adapter->slices; i++) {
+ afu = adapter->afu[i];
+
+- if (afu->phb == NULL)
++ if (afu == NULL || afu->phb == NULL)
+ continue;
+
+ list_for_each_entry(afu_dev, &afu->phb->bus->devices, bus_list) {
+@@ -2063,6 +2083,7 @@ static void cxl_pci_resume(struct pci_dev *pdev)
+ afu_dev->driver->err_handler->resume(afu_dev);
+ }
+ }
++ spin_unlock(&adapter->afu_list_lock);
+ }
+
+ static const struct pci_error_handlers cxl_err_handler = {
+diff --git a/drivers/misc/mei/bus.c b/drivers/misc/mei/bus.c
+index fc3872fe7b25..c383322ec2ba 100644
+--- a/drivers/misc/mei/bus.c
++++ b/drivers/misc/mei/bus.c
+@@ -541,17 +541,9 @@ int mei_cldev_enable(struct mei_cl_device *cldev)
+ goto out;
+ }
+
+- if (!mei_cl_bus_module_get(cldev)) {
+- dev_err(&cldev->dev, "get hw module failed");
+- ret = -ENODEV;
+- goto out;
+- }
+-
+ ret = mei_cl_connect(cl, cldev->me_cl, NULL);
+- if (ret < 0) {
++ if (ret < 0)
+ dev_err(&cldev->dev, "cannot connect\n");
+- mei_cl_bus_module_put(cldev);
+- }
+
+ out:
+ mutex_unlock(&bus->device_lock);
+@@ -614,7 +606,6 @@ int mei_cldev_disable(struct mei_cl_device *cldev)
+ if (err < 0)
+ dev_err(bus->dev, "Could not disconnect from the ME client\n");
+
+- mei_cl_bus_module_put(cldev);
+ out:
+ /* Flush queues and remove any pending read */
+ mei_cl_flush_queues(cl, NULL);
+@@ -725,9 +716,16 @@ static int mei_cl_device_probe(struct device *dev)
+ if (!id)
+ return -ENODEV;
+
++ if (!mei_cl_bus_module_get(cldev)) {
++ dev_err(&cldev->dev, "get hw module failed");
++ return -ENODEV;
++ }
++
+ ret = cldrv->probe(cldev, id);
+- if (ret)
++ if (ret) {
++ mei_cl_bus_module_put(cldev);
+ return ret;
++ }
+
+ __module_get(THIS_MODULE);
+ return 0;
+@@ -755,6 +753,7 @@ static int mei_cl_device_remove(struct device *dev)
+
+ mei_cldev_unregister_callbacks(cldev);
+
++ mei_cl_bus_module_put(cldev);
+ module_put(THIS_MODULE);
+ dev->driver = NULL;
+ return ret;
+diff --git a/drivers/misc/mei/hbm.c b/drivers/misc/mei/hbm.c
+index 8f7616557c97..e6207f614816 100644
+--- a/drivers/misc/mei/hbm.c
++++ b/drivers/misc/mei/hbm.c
+@@ -1029,29 +1029,36 @@ static void mei_hbm_config_features(struct mei_device *dev)
+ dev->version.minor_version >= HBM_MINOR_VERSION_PGI)
+ dev->hbm_f_pg_supported = 1;
+
++ dev->hbm_f_dc_supported = 0;
+ if (dev->version.major_version >= HBM_MAJOR_VERSION_DC)
+ dev->hbm_f_dc_supported = 1;
+
++ dev->hbm_f_ie_supported = 0;
+ if (dev->version.major_version >= HBM_MAJOR_VERSION_IE)
+ dev->hbm_f_ie_supported = 1;
+
+ /* disconnect on connect timeout instead of link reset */
++ dev->hbm_f_dot_supported = 0;
+ if (dev->version.major_version >= HBM_MAJOR_VERSION_DOT)
+ dev->hbm_f_dot_supported = 1;
+
+ /* Notification Event Support */
++ dev->hbm_f_ev_supported = 0;
+ if (dev->version.major_version >= HBM_MAJOR_VERSION_EV)
+ dev->hbm_f_ev_supported = 1;
+
+ /* Fixed Address Client Support */
++ dev->hbm_f_fa_supported = 0;
+ if (dev->version.major_version >= HBM_MAJOR_VERSION_FA)
+ dev->hbm_f_fa_supported = 1;
+
+ /* OS ver message Support */
++ dev->hbm_f_os_supported = 0;
+ if (dev->version.major_version >= HBM_MAJOR_VERSION_OS)
+ dev->hbm_f_os_supported = 1;
+
+ /* DMA Ring Support */
++ dev->hbm_f_dr_supported = 0;
+ if (dev->version.major_version > HBM_MAJOR_VERSION_DR ||
+ (dev->version.major_version == HBM_MAJOR_VERSION_DR &&
+ dev->version.minor_version >= HBM_MINOR_VERSION_DR))
+diff --git a/drivers/misc/vmw_balloon.c b/drivers/misc/vmw_balloon.c
+index f8240b87df22..f69acb5d4a50 100644
+--- a/drivers/misc/vmw_balloon.c
++++ b/drivers/misc/vmw_balloon.c
+@@ -1287,7 +1287,7 @@ static void vmballoon_reset(struct vmballoon *b)
+ vmballoon_pop(b);
+
+ if (vmballoon_send_start(b, VMW_BALLOON_CAPABILITIES))
+- return;
++ goto unlock;
+
+ if ((b->capabilities & VMW_BALLOON_BATCHED_CMDS) != 0) {
+ if (vmballoon_init_batching(b)) {
+@@ -1298,7 +1298,7 @@ static void vmballoon_reset(struct vmballoon *b)
+ * The guest will retry in one second.
+ */
+ vmballoon_send_start(b, 0);
+- return;
++ goto unlock;
+ }
+ } else if ((b->capabilities & VMW_BALLOON_BASIC_CMDS) != 0) {
+ vmballoon_deinit_batching(b);
+@@ -1314,6 +1314,7 @@ static void vmballoon_reset(struct vmballoon *b)
+ if (vmballoon_send_guest_id(b))
+ pr_err("failed to send guest ID to the host\n");
+
++unlock:
+ up_write(&b->conf_sem);
+ }
+
+diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
+index b27a1e620233..1e6b07c176dc 100644
+--- a/drivers/mmc/core/core.c
++++ b/drivers/mmc/core/core.c
+@@ -2381,9 +2381,9 @@ unsigned int mmc_calc_max_discard(struct mmc_card *card)
+ return card->pref_erase;
+
+ max_discard = mmc_do_calc_max_discard(card, MMC_ERASE_ARG);
+- if (max_discard && mmc_can_trim(card)) {
++ if (mmc_can_trim(card)) {
+ max_trim = mmc_do_calc_max_discard(card, MMC_TRIM_ARG);
+- if (max_trim < max_discard)
++ if (max_trim < max_discard || max_discard == 0)
+ max_discard = max_trim;
+ } else if (max_discard < card->erase_size) {
+ max_discard = 0;
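The reordering above makes the trim-based limit win not only when it is smaller, but also when the erase-based limit comes out as 0, so a card without a usable erase limit but with TRIM support no longer ends up with discard disabled. Restated as a standalone helper (illustrative only, mirrors the logic above):

    /* Illustrative restatement of the max_discard selection */
    static unsigned int pick_max_discard(unsigned int max_discard,
                                         unsigned int max_trim,
                                         bool can_trim,
                                         unsigned int erase_size)
    {
            if (can_trim) {
                    if (max_trim < max_discard || max_discard == 0)
                            return max_trim;
                    return max_discard;
            }
            return max_discard < erase_size ? 0 : max_discard;
    }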
+diff --git a/drivers/mmc/host/alcor.c b/drivers/mmc/host/alcor.c
+index c712b7deb3a9..7c8f203f9a24 100644
+--- a/drivers/mmc/host/alcor.c
++++ b/drivers/mmc/host/alcor.c
+@@ -48,7 +48,6 @@ struct alcor_sdmmc_host {
+ struct mmc_command *cmd;
+ struct mmc_data *data;
+ unsigned int dma_on:1;
+- unsigned int early_data:1;
+
+ struct mutex cmd_mutex;
+
+@@ -144,8 +143,7 @@ static void alcor_data_set_dma(struct alcor_sdmmc_host *host)
+ host->sg_count--;
+ }
+
+-static void alcor_trigger_data_transfer(struct alcor_sdmmc_host *host,
+- bool early)
++static void alcor_trigger_data_transfer(struct alcor_sdmmc_host *host)
+ {
+ struct alcor_pci_priv *priv = host->alcor_pci;
+ struct mmc_data *data = host->data;
+@@ -155,13 +153,6 @@ static void alcor_trigger_data_transfer(struct alcor_sdmmc_host *host,
+ ctrl |= AU6601_DATA_WRITE;
+
+ if (data->host_cookie == COOKIE_MAPPED) {
+- if (host->early_data) {
+- host->early_data = false;
+- return;
+- }
+-
+- host->early_data = early;
+-
+ alcor_data_set_dma(host);
+ ctrl |= AU6601_DATA_DMA_MODE;
+ host->dma_on = 1;
+@@ -231,6 +222,7 @@ static void alcor_prepare_sg_miter(struct alcor_sdmmc_host *host)
+ static void alcor_prepare_data(struct alcor_sdmmc_host *host,
+ struct mmc_command *cmd)
+ {
++ struct alcor_pci_priv *priv = host->alcor_pci;
+ struct mmc_data *data = cmd->data;
+
+ if (!data)
+@@ -248,7 +240,7 @@ static void alcor_prepare_data(struct alcor_sdmmc_host *host,
+ if (data->host_cookie != COOKIE_MAPPED)
+ alcor_prepare_sg_miter(host);
+
+- alcor_trigger_data_transfer(host, true);
++ alcor_write8(priv, 0, AU6601_DATA_XFER_CTRL);
+ }
+
+ static void alcor_send_cmd(struct alcor_sdmmc_host *host,
+@@ -435,7 +427,7 @@ static int alcor_cmd_irq_done(struct alcor_sdmmc_host *host, u32 intmask)
+ if (!host->data)
+ return false;
+
+- alcor_trigger_data_transfer(host, false);
++ alcor_trigger_data_transfer(host);
+ host->cmd = NULL;
+ return true;
+ }
+@@ -456,7 +448,7 @@ static void alcor_cmd_irq_thread(struct alcor_sdmmc_host *host, u32 intmask)
+ if (!host->data)
+ alcor_request_complete(host, 1);
+ else
+- alcor_trigger_data_transfer(host, false);
++ alcor_trigger_data_transfer(host);
+ host->cmd = NULL;
+ }
+
+@@ -487,15 +479,9 @@ static int alcor_data_irq_done(struct alcor_sdmmc_host *host, u32 intmask)
+ break;
+ case AU6601_INT_READ_BUF_RDY:
+ alcor_trf_block_pio(host, true);
+- if (!host->blocks)
+- break;
+- alcor_trigger_data_transfer(host, false);
+ return 1;
+ case AU6601_INT_WRITE_BUF_RDY:
+ alcor_trf_block_pio(host, false);
+- if (!host->blocks)
+- break;
+- alcor_trigger_data_transfer(host, false);
+ return 1;
+ case AU6601_INT_DMA_END:
+ if (!host->sg_count)
+@@ -508,8 +494,14 @@ static int alcor_data_irq_done(struct alcor_sdmmc_host *host, u32 intmask)
+ break;
+ }
+
+- if (intmask & AU6601_INT_DATA_END)
+- return 0;
++ if (intmask & AU6601_INT_DATA_END) {
++ if (!host->dma_on && host->blocks) {
++ alcor_trigger_data_transfer(host);
++ return 1;
++ } else {
++ return 0;
++ }
++ }
+
+ return 1;
+ }
+@@ -1044,14 +1036,27 @@ static void alcor_init_mmc(struct alcor_sdmmc_host *host)
+ mmc->caps2 = MMC_CAP2_NO_SDIO;
+ mmc->ops = &alcor_sdc_ops;
+
+- /* Hardware cannot do scatter lists */
++ /* The hardware does DMA data transfer of 4096 bytes to/from a single
++ * buffer address. Scatterlists are not supported, but upon DMA
++ * completion (signalled via IRQ), the original vendor driver does
++ * then immediately set up another DMA transfer of the next 4096
++ * bytes.
++ *
++ * This means that we need to handle the I/O in 4096 byte chunks.
++ * Lacking a way to limit the sglist entries to 4096 bytes, we instead
++ * impose that only one segment is provided, with maximum size 4096,
++ * which also happens to be the minimum size. This means that the
++ * single-entry sglist handled by this driver can be handed directly
++ * to the hardware, nice and simple.
++ *
++ * Unfortunately though, that means we only do 4096 bytes I/O per
++ * MMC command. A future improvement would be to make the driver
++ * accept sg lists and entries of any size, and simply iterate
++ * through them 4096 bytes at a time.
++ */
+ mmc->max_segs = AU6601_MAX_DMA_SEGMENTS;
+ mmc->max_seg_size = AU6601_MAX_DMA_BLOCK_SIZE;
+-
+- mmc->max_blk_size = mmc->max_seg_size;
+- mmc->max_blk_count = mmc->max_segs;
+-
+- mmc->max_req_size = mmc->max_seg_size * mmc->max_segs;
++ mmc->max_req_size = mmc->max_seg_size;
+ }
+
+ static int alcor_pci_sdmmc_drv_probe(struct platform_device *pdev)
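With max_segs = 1 and max_seg_size = max_req_size = 4096, each MMC request now moves at most one 4096-byte buffer, matching what the controller can DMA in one shot. The future improvement the comment describes, iterating a larger sg entry in 4096-byte chunks, could look roughly like this (a sketch under that assumption; the chunk helper is hypothetical):

    /* Hypothetical iteration over one sg entry in 4096-byte DMA chunks */
    static void example_xfer_sg_entry(struct alcor_sdmmc_host *host,
                                      struct scatterlist *sg)
    {
            dma_addr_t addr = sg_dma_address(sg);
            unsigned int left = sg_dma_len(sg);

            while (left) {
                    unsigned int len = min_t(unsigned int, left, 4096);

                    /* program one <= 4096 byte transfer, wait for its IRQ */
                    example_dma_one_chunk(host, addr, len); /* hypothetical */
                    addr += len;
                    left -= len;
            }
    }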
+diff --git a/drivers/mmc/host/mxcmmc.c b/drivers/mmc/host/mxcmmc.c
+index 4d17032d15ee..7b530e5a86da 100644
+--- a/drivers/mmc/host/mxcmmc.c
++++ b/drivers/mmc/host/mxcmmc.c
+@@ -292,11 +292,8 @@ static void mxcmci_swap_buffers(struct mmc_data *data)
+ struct scatterlist *sg;
+ int i;
+
+- for_each_sg(data->sg, sg, data->sg_len, i) {
+- void *buf = kmap_atomic(sg_page(sg) + sg->offset);
+- buffer_swap32(buf, sg->length);
+- kunmap_atomic(buf);
+- }
++ for_each_sg(data->sg, sg, data->sg_len, i)
++ buffer_swap32(sg_virt(sg), sg->length);
+ }
+ #else
+ static inline void mxcmci_swap_buffers(struct mmc_data *data) {}
+@@ -613,7 +610,6 @@ static int mxcmci_transfer_data(struct mxcmci_host *host)
+ {
+ struct mmc_data *data = host->req->data;
+ struct scatterlist *sg;
+- void *buf;
+ int stat, i;
+
+ host->data = data;
+@@ -621,18 +617,14 @@ static int mxcmci_transfer_data(struct mxcmci_host *host)
+
+ if (data->flags & MMC_DATA_READ) {
+ for_each_sg(data->sg, sg, data->sg_len, i) {
+- buf = kmap_atomic(sg_page(sg) + sg->offset);
+- stat = mxcmci_pull(host, buf, sg->length);
+- kunmap(buf);
++ stat = mxcmci_pull(host, sg_virt(sg), sg->length);
+ if (stat)
+ return stat;
+ host->datasize += sg->length;
+ }
+ } else {
+ for_each_sg(data->sg, sg, data->sg_len, i) {
+- buf = kmap_atomic(sg_page(sg) + sg->offset);
+- stat = mxcmci_push(host, buf, sg->length);
+- kunmap(buf);
++ stat = mxcmci_push(host, sg_virt(sg), sg->length);
+ if (stat)
+ return stat;
+ host->datasize += sg->length;
+diff --git a/drivers/mmc/host/omap.c b/drivers/mmc/host/omap.c
+index c60a7625b1fa..b2873a2432b6 100644
+--- a/drivers/mmc/host/omap.c
++++ b/drivers/mmc/host/omap.c
+@@ -920,7 +920,7 @@ static inline void set_cmd_timeout(struct mmc_omap_host *host, struct mmc_reques
+ reg &= ~(1 << 5);
+ OMAP_MMC_WRITE(host, SDIO, reg);
+ /* Set maximum timeout */
+- OMAP_MMC_WRITE(host, CTO, 0xff);
++ OMAP_MMC_WRITE(host, CTO, 0xfd);
+ }
+
+ static inline void set_data_timeout(struct mmc_omap_host *host, struct mmc_request *req)
+diff --git a/drivers/mmc/host/pxamci.c b/drivers/mmc/host/pxamci.c
+index 8779bbaa6b69..194a81888792 100644
+--- a/drivers/mmc/host/pxamci.c
++++ b/drivers/mmc/host/pxamci.c
+@@ -162,7 +162,7 @@ static void pxamci_dma_irq(void *param);
+ static void pxamci_setup_data(struct pxamci_host *host, struct mmc_data *data)
+ {
+ struct dma_async_tx_descriptor *tx;
+- enum dma_data_direction direction;
++ enum dma_transfer_direction direction;
+ struct dma_slave_config config;
+ struct dma_chan *chan;
+ unsigned int nob = data->blocks;
+diff --git a/drivers/mmc/host/renesas_sdhi_core.c b/drivers/mmc/host/renesas_sdhi_core.c
+index 31a351a20dc0..d9be22b310e6 100644
+--- a/drivers/mmc/host/renesas_sdhi_core.c
++++ b/drivers/mmc/host/renesas_sdhi_core.c
+@@ -634,6 +634,7 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+ struct renesas_sdhi *priv;
+ struct resource *res;
+ int irq, ret, i;
++ u16 ver;
+
+ of_data = of_device_get_match_data(&pdev->dev);
+
+@@ -723,6 +724,13 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+ host->ops.start_signal_voltage_switch =
+ renesas_sdhi_start_signal_voltage_switch;
+ host->sdcard_irq_setbit_mask = TMIO_STAT_ALWAYS_SET_27;
++
++ /* SDR and HS200/400 registers requires HW reset */
++ if (of_data && of_data->scc_offset) {
++ priv->scc_ctl = host->ctl + of_data->scc_offset;
++ host->mmc->caps |= MMC_CAP_HW_RESET;
++ host->hw_reset = renesas_sdhi_hw_reset;
++ }
+ }
+
+	/* Originally registers were 16 bit apart, could be 32 or 64 nowadays */
+@@ -759,12 +767,17 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+ if (ret)
+ goto efree;
+
++ ver = sd_ctrl_read16(host, CTL_VERSION);
++ /* GEN2_SDR104 is first known SDHI to use 32bit block count */
++ if (ver < SDHI_VER_GEN2_SDR104 && mmc_data->max_blk_count > U16_MAX)
++ mmc_data->max_blk_count = U16_MAX;
++
+ ret = tmio_mmc_host_probe(host);
+ if (ret < 0)
+ goto edisclk;
+
+ /* One Gen2 SDHI incarnation does NOT have a CBSY bit */
+- if (sd_ctrl_read16(host, CTL_VERSION) == SDHI_VER_GEN2_SDR50)
++ if (ver == SDHI_VER_GEN2_SDR50)
+ mmc_data->flags &= ~TMIO_MMC_HAVE_CBSY;
+
+ /* Enable tuning iff we have an SCC and a supported mode */
+@@ -775,8 +788,6 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+ const struct renesas_sdhi_scc *taps = of_data->taps;
+ bool hit = false;
+
+- host->mmc->caps |= MMC_CAP_HW_RESET;
+-
+ for (i = 0; i < of_data->taps_num; i++) {
+ if (taps[i].clk_rate == 0 ||
+ taps[i].clk_rate == host->mmc->f_max) {
+@@ -789,12 +800,10 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+ if (!hit)
+ dev_warn(&host->pdev->dev, "Unknown clock rate for SDR104\n");
+
+- priv->scc_ctl = host->ctl + of_data->scc_offset;
+ host->init_tuning = renesas_sdhi_init_tuning;
+ host->prepare_tuning = renesas_sdhi_prepare_tuning;
+ host->select_tuning = renesas_sdhi_select_tuning;
+ host->check_scc_error = renesas_sdhi_check_scc_error;
+- host->hw_reset = renesas_sdhi_hw_reset;
+ host->prepare_hs400_tuning =
+ renesas_sdhi_prepare_hs400_tuning;
+ host->hs400_downgrade = renesas_sdhi_disable_scc;
+diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c
+index 00d41b312c79..a6f25c796aed 100644
+--- a/drivers/mmc/host/sdhci-esdhc-imx.c
++++ b/drivers/mmc/host/sdhci-esdhc-imx.c
+@@ -979,6 +979,7 @@ static void esdhc_set_uhs_signaling(struct sdhci_host *host, unsigned timing)
+ case MMC_TIMING_UHS_SDR25:
+ case MMC_TIMING_UHS_SDR50:
+ case MMC_TIMING_UHS_SDR104:
++ case MMC_TIMING_MMC_HS:
+ case MMC_TIMING_MMC_HS200:
+ writel(m, host->ioaddr + ESDHC_MIX_CTRL);
+ break;
+diff --git a/drivers/mmc/host/sdhci-omap.c b/drivers/mmc/host/sdhci-omap.c
+index c11c18a9aacb..9ec300ec94ba 100644
+--- a/drivers/mmc/host/sdhci-omap.c
++++ b/drivers/mmc/host/sdhci-omap.c
+@@ -797,6 +797,43 @@ void sdhci_omap_reset(struct sdhci_host *host, u8 mask)
+ sdhci_reset(host, mask);
+ }
+
++#define CMD_ERR_MASK (SDHCI_INT_CRC | SDHCI_INT_END_BIT | SDHCI_INT_INDEX |\
++ SDHCI_INT_TIMEOUT)
++#define CMD_MASK (CMD_ERR_MASK | SDHCI_INT_RESPONSE)
++
++static u32 sdhci_omap_irq(struct sdhci_host *host, u32 intmask)
++{
++ struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
++ struct sdhci_omap_host *omap_host = sdhci_pltfm_priv(pltfm_host);
++
++ if (omap_host->is_tuning && host->cmd && !host->data_early &&
++ (intmask & CMD_ERR_MASK)) {
++
++ /*
++ * Since we are not resetting data lines during tuning
++ * operation, data error or data complete interrupts
++ * might still arrive. Mark this request as a failure
++ * but still wait for the data interrupt
++ */
++ if (intmask & SDHCI_INT_TIMEOUT)
++ host->cmd->error = -ETIMEDOUT;
++ else
++ host->cmd->error = -EILSEQ;
++
++ host->cmd = NULL;
++
++ /*
++ * Sometimes command error interrupts and command complete
++ * interrupt will arrive together. Clear all command related
++ * interrupts here.
++ */
++ sdhci_writel(host, intmask & CMD_MASK, SDHCI_INT_STATUS);
++ intmask &= ~CMD_MASK;
++ }
++
++ return intmask;
++}
++
+ static struct sdhci_ops sdhci_omap_ops = {
+ .set_clock = sdhci_omap_set_clock,
+ .set_power = sdhci_omap_set_power,
+@@ -807,6 +844,7 @@ static struct sdhci_ops sdhci_omap_ops = {
+ .platform_send_init_74_clocks = sdhci_omap_init_74_clocks,
+ .reset = sdhci_omap_reset,
+ .set_uhs_signaling = sdhci_omap_set_uhs_signaling,
++ .irq = sdhci_omap_irq,
+ };
+
+ static int sdhci_omap_set_capabilities(struct sdhci_omap_host *omap_host)
+diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
+index 21bf8ac78380..390e896dadc7 100644
+--- a/drivers/net/Kconfig
++++ b/drivers/net/Kconfig
+@@ -213,8 +213,8 @@ config GENEVE
+
+ config GTP
+ tristate "GPRS Tunneling Protocol datapath (GTP-U)"
+- depends on INET && NET_UDP_TUNNEL
+- select NET_IP_TUNNEL
++ depends on INET
++ select NET_UDP_TUNNEL
+ ---help---
+ This allows one to create gtp virtual interfaces that provide
+ the GPRS Tunneling Protocol datapath (GTP-U). This tunneling protocol
+diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c
+index ddc1f9ca8ebc..4543ac97f077 100644
+--- a/drivers/net/dsa/lantiq_gswip.c
++++ b/drivers/net/dsa/lantiq_gswip.c
+@@ -1069,10 +1069,10 @@ static int gswip_probe(struct platform_device *pdev)
+ version = gswip_switch_r(priv, GSWIP_VERSION);
+
+ /* bring up the mdio bus */
+- gphy_fw_np = of_find_compatible_node(pdev->dev.of_node, NULL,
+- "lantiq,gphy-fw");
++ gphy_fw_np = of_get_compatible_child(dev->of_node, "lantiq,gphy-fw");
+ if (gphy_fw_np) {
+ err = gswip_gphy_fw_list(priv, gphy_fw_np, version);
++ of_node_put(gphy_fw_np);
+ if (err) {
+ dev_err(dev, "gphy fw probe failed\n");
+ return err;
+@@ -1080,13 +1080,12 @@ static int gswip_probe(struct platform_device *pdev)
+ }
+
+ /* bring up the mdio bus */
+- mdio_np = of_find_compatible_node(pdev->dev.of_node, NULL,
+- "lantiq,xrx200-mdio");
++ mdio_np = of_get_compatible_child(dev->of_node, "lantiq,xrx200-mdio");
+ if (mdio_np) {
+ err = gswip_mdio(priv, mdio_np);
+ if (err) {
+ dev_err(dev, "mdio probe failed\n");
+- goto gphy_fw;
++ goto put_mdio_node;
+ }
+ }
+
+@@ -1099,7 +1098,7 @@ static int gswip_probe(struct platform_device *pdev)
+ dev_err(dev, "wrong CPU port defined, HW only supports port: %i",
+ priv->hw_info->cpu_port);
+ err = -EINVAL;
+- goto mdio_bus;
++ goto disable_switch;
+ }
+
+ platform_set_drvdata(pdev, priv);
+@@ -1109,10 +1108,14 @@ static int gswip_probe(struct platform_device *pdev)
+ (version & GSWIP_VERSION_MOD_MASK) >> GSWIP_VERSION_MOD_SHIFT);
+ return 0;
+
++disable_switch:
++ gswip_mdio_mask(priv, GSWIP_MDIO_GLOB_ENABLE, 0, GSWIP_MDIO_GLOB);
++ dsa_unregister_switch(priv->ds);
+ mdio_bus:
+ if (mdio_np)
+ mdiobus_unregister(priv->ds->slave_mii_bus);
+-gphy_fw:
++put_mdio_node:
++ of_node_put(mdio_np);
+ for (i = 0; i < priv->num_gphy_fw; i++)
+ gswip_gphy_fw_remove(priv, &priv->gphy_fw[i]);
+ return err;
+@@ -1131,8 +1134,10 @@ static int gswip_remove(struct platform_device *pdev)
+
+ dsa_unregister_switch(priv->ds);
+
+- if (priv->ds->slave_mii_bus)
++ if (priv->ds->slave_mii_bus) {
+ mdiobus_unregister(priv->ds->slave_mii_bus);
++ of_node_put(priv->ds->slave_mii_bus->dev.of_node);
++ }
+
+ for (i = 0; i < priv->num_gphy_fw; i++)
+ gswip_gphy_fw_remove(priv, &priv->gphy_fw[i]);
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 7e3c00bd9532..6cba05a80892 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -442,12 +442,20 @@ out_mapping:
+
+ static int mv88e6xxx_g1_irq_setup(struct mv88e6xxx_chip *chip)
+ {
++ static struct lock_class_key lock_key;
++ static struct lock_class_key request_key;
+ int err;
+
+ err = mv88e6xxx_g1_irq_setup_common(chip);
+ if (err)
+ return err;
+
++	/* These lock classes tell lockdep that global 1 irqs are in
++ * a different category than their parent GPIO, so it won't
++ * report false recursion.
++ */
++ irq_set_lockdep_class(chip->irq, &lock_key, &request_key);
++
+ err = request_threaded_irq(chip->irq, NULL,
+ mv88e6xxx_g1_irq_thread_fn,
+ IRQF_ONESHOT | IRQF_SHARED,
+@@ -559,6 +567,9 @@ static int mv88e6xxx_port_setup_mac(struct mv88e6xxx_chip *chip, int port,
+ goto restore_link;
+ }
+
++ if (speed == SPEED_MAX && chip->info->ops->port_max_speed_mode)
++ mode = chip->info->ops->port_max_speed_mode(port);
++
+ if (chip->info->ops->port_set_pause) {
+ err = chip->info->ops->port_set_pause(chip, port, pause);
+ if (err)
+@@ -3042,6 +3053,7 @@ static const struct mv88e6xxx_ops mv88e6141_ops = {
+ .port_set_duplex = mv88e6xxx_port_set_duplex,
+ .port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
+ .port_set_speed = mv88e6341_port_set_speed,
++ .port_max_speed_mode = mv88e6341_port_max_speed_mode,
+ .port_tag_remap = mv88e6095_port_tag_remap,
+ .port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ .port_set_egress_floods = mv88e6352_port_set_egress_floods,
+@@ -3360,6 +3372,7 @@ static const struct mv88e6xxx_ops mv88e6190_ops = {
+ .port_set_duplex = mv88e6xxx_port_set_duplex,
+ .port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
+ .port_set_speed = mv88e6390_port_set_speed,
++ .port_max_speed_mode = mv88e6390_port_max_speed_mode,
+ .port_tag_remap = mv88e6390_port_tag_remap,
+ .port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ .port_set_egress_floods = mv88e6352_port_set_egress_floods,
+@@ -3404,6 +3417,7 @@ static const struct mv88e6xxx_ops mv88e6190x_ops = {
+ .port_set_duplex = mv88e6xxx_port_set_duplex,
+ .port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
+ .port_set_speed = mv88e6390x_port_set_speed,
++ .port_max_speed_mode = mv88e6390x_port_max_speed_mode,
+ .port_tag_remap = mv88e6390_port_tag_remap,
+ .port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ .port_set_egress_floods = mv88e6352_port_set_egress_floods,
+@@ -3448,6 +3462,7 @@ static const struct mv88e6xxx_ops mv88e6191_ops = {
+ .port_set_duplex = mv88e6xxx_port_set_duplex,
+ .port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
+ .port_set_speed = mv88e6390_port_set_speed,
++ .port_max_speed_mode = mv88e6390_port_max_speed_mode,
+ .port_tag_remap = mv88e6390_port_tag_remap,
+ .port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ .port_set_egress_floods = mv88e6352_port_set_egress_floods,
+@@ -3541,6 +3556,7 @@ static const struct mv88e6xxx_ops mv88e6290_ops = {
+ .port_set_duplex = mv88e6xxx_port_set_duplex,
+ .port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
+ .port_set_speed = mv88e6390_port_set_speed,
++ .port_max_speed_mode = mv88e6390_port_max_speed_mode,
+ .port_tag_remap = mv88e6390_port_tag_remap,
+ .port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ .port_set_egress_floods = mv88e6352_port_set_egress_floods,
+@@ -3672,6 +3688,7 @@ static const struct mv88e6xxx_ops mv88e6341_ops = {
+ .port_set_duplex = mv88e6xxx_port_set_duplex,
+ .port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
+ .port_set_speed = mv88e6341_port_set_speed,
++ .port_max_speed_mode = mv88e6341_port_max_speed_mode,
+ .port_tag_remap = mv88e6095_port_tag_remap,
+ .port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ .port_set_egress_floods = mv88e6352_port_set_egress_floods,
+@@ -3847,6 +3864,7 @@ static const struct mv88e6xxx_ops mv88e6390_ops = {
+ .port_set_duplex = mv88e6xxx_port_set_duplex,
+ .port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
+ .port_set_speed = mv88e6390_port_set_speed,
++ .port_max_speed_mode = mv88e6390_port_max_speed_mode,
+ .port_tag_remap = mv88e6390_port_tag_remap,
+ .port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ .port_set_egress_floods = mv88e6352_port_set_egress_floods,
+@@ -3895,6 +3913,7 @@ static const struct mv88e6xxx_ops mv88e6390x_ops = {
+ .port_set_duplex = mv88e6xxx_port_set_duplex,
+ .port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
+ .port_set_speed = mv88e6390x_port_set_speed,
++ .port_max_speed_mode = mv88e6390x_port_max_speed_mode,
+ .port_tag_remap = mv88e6390_port_tag_remap,
+ .port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ .port_set_egress_floods = mv88e6352_port_set_egress_floods,
+@@ -4222,7 +4241,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
+ .name = "Marvell 88E6190",
+ .num_databases = 4096,
+ .num_ports = 11, /* 10 + Z80 */
+- .num_internal_phys = 11,
++ .num_internal_phys = 9,
+ .num_gpio = 16,
+ .max_vid = 8191,
+ .port_base_addr = 0x0,
+@@ -4245,7 +4264,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
+ .name = "Marvell 88E6190X",
+ .num_databases = 4096,
+ .num_ports = 11, /* 10 + Z80 */
+- .num_internal_phys = 11,
++ .num_internal_phys = 9,
+ .num_gpio = 16,
+ .max_vid = 8191,
+ .port_base_addr = 0x0,
+@@ -4268,7 +4287,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
+ .name = "Marvell 88E6191",
+ .num_databases = 4096,
+ .num_ports = 11, /* 10 + Z80 */
+- .num_internal_phys = 11,
++ .num_internal_phys = 9,
+ .max_vid = 8191,
+ .port_base_addr = 0x0,
+ .phy_base_addr = 0x0,
+@@ -4315,7 +4334,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
+ .name = "Marvell 88E6290",
+ .num_databases = 4096,
+ .num_ports = 11, /* 10 + Z80 */
+- .num_internal_phys = 11,
++ .num_internal_phys = 9,
+ .num_gpio = 16,
+ .max_vid = 8191,
+ .port_base_addr = 0x0,
+@@ -4477,7 +4496,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
+ .name = "Marvell 88E6390",
+ .num_databases = 4096,
+ .num_ports = 11, /* 10 + Z80 */
+- .num_internal_phys = 11,
++ .num_internal_phys = 9,
+ .num_gpio = 16,
+ .max_vid = 8191,
+ .port_base_addr = 0x0,
+@@ -4500,7 +4519,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
+ .name = "Marvell 88E6390X",
+ .num_databases = 4096,
+ .num_ports = 11, /* 10 + Z80 */
+- .num_internal_phys = 11,
++ .num_internal_phys = 9,
+ .num_gpio = 16,
+ .max_vid = 8191,
+ .port_base_addr = 0x0,
+@@ -4847,6 +4866,7 @@ static int mv88e6xxx_probe(struct mdio_device *mdiodev)
+ if (err)
+ goto out;
+
++ mv88e6xxx_ports_cmode_init(chip);
+ mv88e6xxx_phy_init(chip);
+
+ if (chip->info->ops->get_eeprom) {
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.h b/drivers/net/dsa/mv88e6xxx/chip.h
+index 546651d8c3e1..dfb1af65c205 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.h
++++ b/drivers/net/dsa/mv88e6xxx/chip.h
+@@ -377,6 +377,9 @@ struct mv88e6xxx_ops {
+ */
+ int (*port_set_speed)(struct mv88e6xxx_chip *chip, int port, int speed);
+
++ /* What interface mode should be used for maximum speed? */
++ phy_interface_t (*port_max_speed_mode)(int port);
++
+ int (*port_tag_remap)(struct mv88e6xxx_chip *chip, int port);
+
+ int (*port_set_frame_mode)(struct mv88e6xxx_chip *chip, int port,
+diff --git a/drivers/net/dsa/mv88e6xxx/port.c b/drivers/net/dsa/mv88e6xxx/port.c
+index 79ab51e69aee..c44b2822e4dd 100644
+--- a/drivers/net/dsa/mv88e6xxx/port.c
++++ b/drivers/net/dsa/mv88e6xxx/port.c
+@@ -190,7 +190,7 @@ int mv88e6xxx_port_set_duplex(struct mv88e6xxx_chip *chip, int port, int dup)
+ /* normal duplex detection */
+ break;
+ default:
+- return -EINVAL;
++ return -EOPNOTSUPP;
+ }
+
+ err = mv88e6xxx_port_write(chip, port, MV88E6XXX_PORT_MAC_CTL, reg);
+@@ -312,6 +312,14 @@ int mv88e6341_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed)
+ return mv88e6xxx_port_set_speed(chip, port, speed, !port, true);
+ }
+
++phy_interface_t mv88e6341_port_max_speed_mode(int port)
++{
++ if (port == 5)
++ return PHY_INTERFACE_MODE_2500BASEX;
++
++ return PHY_INTERFACE_MODE_NA;
++}
++
+ /* Support 10, 100, 200, 1000 Mbps (e.g. 88E6352 family) */
+ int mv88e6352_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed)
+ {
+@@ -345,6 +353,14 @@ int mv88e6390_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed)
+ return mv88e6xxx_port_set_speed(chip, port, speed, true, true);
+ }
+
++phy_interface_t mv88e6390_port_max_speed_mode(int port)
++{
++ if (port == 9 || port == 10)
++ return PHY_INTERFACE_MODE_2500BASEX;
++
++ return PHY_INTERFACE_MODE_NA;
++}
++
+ /* Support 10, 100, 200, 1000, 2500, 10000 Mbps (e.g. 88E6190X) */
+ int mv88e6390x_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed)
+ {
+@@ -360,6 +376,14 @@ int mv88e6390x_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed)
+ return mv88e6xxx_port_set_speed(chip, port, speed, true, true);
+ }
+
++phy_interface_t mv88e6390x_port_max_speed_mode(int port)
++{
++ if (port == 9 || port == 10)
++ return PHY_INTERFACE_MODE_XAUI;
++
++ return PHY_INTERFACE_MODE_NA;
++}
++
+ int mv88e6390x_port_set_cmode(struct mv88e6xxx_chip *chip, int port,
+ phy_interface_t mode)
+ {
+@@ -403,18 +427,22 @@ int mv88e6390x_port_set_cmode(struct mv88e6xxx_chip *chip, int port,
+ return 0;
+
+ lane = mv88e6390x_serdes_get_lane(chip, port);
+- if (lane < 0)
++ if (lane < 0 && lane != -ENODEV)
+ return lane;
+
+- if (chip->ports[port].serdes_irq) {
+- err = mv88e6390_serdes_irq_disable(chip, port, lane);
++ if (lane >= 0) {
++ if (chip->ports[port].serdes_irq) {
++ err = mv88e6390_serdes_irq_disable(chip, port, lane);
++ if (err)
++ return err;
++ }
++
++ err = mv88e6390x_serdes_power(chip, port, false);
+ if (err)
+ return err;
+ }
+
+- err = mv88e6390x_serdes_power(chip, port, false);
+- if (err)
+- return err;
++ chip->ports[port].cmode = 0;
+
+ if (cmode) {
+ err = mv88e6xxx_port_read(chip, port, MV88E6XXX_PORT_STS, &reg);
+@@ -428,6 +456,12 @@ int mv88e6390x_port_set_cmode(struct mv88e6xxx_chip *chip, int port,
+ if (err)
+ return err;
+
++ chip->ports[port].cmode = cmode;
++
++ lane = mv88e6390x_serdes_get_lane(chip, port);
++ if (lane < 0)
++ return lane;
++
+ err = mv88e6390x_serdes_power(chip, port, true);
+ if (err)
+ return err;
+@@ -439,8 +473,6 @@ int mv88e6390x_port_set_cmode(struct mv88e6xxx_chip *chip, int port,
+ }
+ }
+
+- chip->ports[port].cmode = cmode;
+-
+ return 0;
+ }
+
+@@ -448,6 +480,8 @@ int mv88e6390_port_set_cmode(struct mv88e6xxx_chip *chip, int port,
+ phy_interface_t mode)
+ {
+ switch (mode) {
++ case PHY_INTERFACE_MODE_NA:
++ return 0;
+ case PHY_INTERFACE_MODE_XGMII:
+ case PHY_INTERFACE_MODE_XAUI:
+ case PHY_INTERFACE_MODE_RXAUI:
+diff --git a/drivers/net/dsa/mv88e6xxx/port.h b/drivers/net/dsa/mv88e6xxx/port.h
+index 4aadf321edb7..c7bed263a0f4 100644
+--- a/drivers/net/dsa/mv88e6xxx/port.h
++++ b/drivers/net/dsa/mv88e6xxx/port.h
+@@ -285,6 +285,10 @@ int mv88e6352_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed);
+ int mv88e6390_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed);
+ int mv88e6390x_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed);
+
++phy_interface_t mv88e6341_port_max_speed_mode(int port);
++phy_interface_t mv88e6390_port_max_speed_mode(int port);
++phy_interface_t mv88e6390x_port_max_speed_mode(int port);
++
+ int mv88e6xxx_port_set_state(struct mv88e6xxx_chip *chip, int port, u8 state);
+
+ int mv88e6xxx_port_set_vlan_map(struct mv88e6xxx_chip *chip, int port, u16 map);
+diff --git a/drivers/net/dsa/qca8k.c b/drivers/net/dsa/qca8k.c
+index 7e97e620bd44..a26850c888cf 100644
+--- a/drivers/net/dsa/qca8k.c
++++ b/drivers/net/dsa/qca8k.c
+@@ -620,22 +620,6 @@ qca8k_adjust_link(struct dsa_switch *ds, int port, struct phy_device *phy)
+ qca8k_port_set_status(priv, port, 1);
+ }
+
+-static int
+-qca8k_phy_read(struct dsa_switch *ds, int phy, int regnum)
+-{
+- struct qca8k_priv *priv = (struct qca8k_priv *)ds->priv;
+-
+- return mdiobus_read(priv->bus, phy, regnum);
+-}
+-
+-static int
+-qca8k_phy_write(struct dsa_switch *ds, int phy, int regnum, u16 val)
+-{
+- struct qca8k_priv *priv = (struct qca8k_priv *)ds->priv;
+-
+- return mdiobus_write(priv->bus, phy, regnum, val);
+-}
+-
+ static void
+ qca8k_get_strings(struct dsa_switch *ds, int port, u32 stringset, uint8_t *data)
+ {
+@@ -876,8 +860,6 @@ static const struct dsa_switch_ops qca8k_switch_ops = {
+ .setup = qca8k_setup,
+ .adjust_link = qca8k_adjust_link,
+ .get_strings = qca8k_get_strings,
+- .phy_read = qca8k_phy_read,
+- .phy_write = qca8k_phy_write,
+ .get_ethtool_stats = qca8k_get_ethtool_stats,
+ .get_sset_count = qca8k_get_sset_count,
+ .get_mac_eee = qca8k_get_mac_eee,
+diff --git a/drivers/net/ethernet/8390/mac8390.c b/drivers/net/ethernet/8390/mac8390.c
+index 342ae08ec3c2..d60a86aa8aa8 100644
+--- a/drivers/net/ethernet/8390/mac8390.c
++++ b/drivers/net/ethernet/8390/mac8390.c
+@@ -153,8 +153,6 @@ static void dayna_block_input(struct net_device *dev, int count,
+ static void dayna_block_output(struct net_device *dev, int count,
+ const unsigned char *buf, int start_page);
+
+-#define memcmp_withio(a, b, c) memcmp((a), (void *)(b), (c))
+-
+ /* Slow Sane (16-bit chunk memory read/write) Cabletron uses this */
+ static void slow_sane_get_8390_hdr(struct net_device *dev,
+ struct e8390_pkt_hdr *hdr, int ring_page);
+@@ -233,19 +231,26 @@ static enum mac8390_type mac8390_ident(struct nubus_rsrc *fres)
+
+ static enum mac8390_access mac8390_testio(unsigned long membase)
+ {
+- unsigned long outdata = 0xA5A0B5B0;
+- unsigned long indata = 0x00000000;
++ u32 outdata = 0xA5A0B5B0;
++ u32 indata = 0;
++
+ /* Try writing 32 bits */
+- memcpy_toio((void __iomem *)membase, &outdata, 4);
+- /* Now compare them */
+- if (memcmp_withio(&outdata, membase, 4) == 0)
++ nubus_writel(outdata, membase);
++ /* Now read it back */
++ indata = nubus_readl(membase);
++ if (outdata == indata)
+ return ACCESS_32;
++
++ outdata = 0xC5C0D5D0;
++ indata = 0;
++
+ /* Write 16 bit output */
+ word_memcpy_tocard(membase, &outdata, 4);
+ /* Now read it back */
+ word_memcpy_fromcard(&indata, membase, 4);
+ if (outdata == indata)
+ return ACCESS_16;
++
+ return ACCESS_UNKNOWN;
+ }
+
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
+index 74550ccc7a20..e2ffb159cbe2 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
+@@ -186,11 +186,12 @@ static void aq_rx_checksum(struct aq_ring_s *self,
+ }
+ if (buff->is_ip_cso) {
+ __skb_incr_checksum_unnecessary(skb);
+- if (buff->is_udp_cso || buff->is_tcp_cso)
+- __skb_incr_checksum_unnecessary(skb);
+ } else {
+ skb->ip_summed = CHECKSUM_NONE;
+ }
++
++ if (buff->is_udp_cso || buff->is_tcp_cso)
++ __skb_incr_checksum_unnecessary(skb);
+ }
+
+ #define AQ_SKB_ALIGN SKB_DATA_ALIGN(sizeof(struct skb_shared_info))
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 803f7990d32b..40ca339ec3df 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -1129,6 +1129,8 @@ static void bnxt_tpa_start(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
+ tpa_info = &rxr->rx_tpa[agg_id];
+
+ if (unlikely(cons != rxr->rx_next_cons)) {
++ netdev_warn(bp->dev, "TPA cons %x != expected cons %x\n",
++ cons, rxr->rx_next_cons);
+ bnxt_sched_reset(bp, rxr);
+ return;
+ }
+@@ -1581,15 +1583,17 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
+ }
+
+ cons = rxcmp->rx_cmp_opaque;
+- rx_buf = &rxr->rx_buf_ring[cons];
+- data = rx_buf->data;
+- data_ptr = rx_buf->data_ptr;
+ if (unlikely(cons != rxr->rx_next_cons)) {
+ int rc1 = bnxt_discard_rx(bp, cpr, raw_cons, rxcmp);
+
++ netdev_warn(bp->dev, "RX cons %x != expected cons %x\n",
++ cons, rxr->rx_next_cons);
+ bnxt_sched_reset(bp, rxr);
+ return rc1;
+ }
++ rx_buf = &rxr->rx_buf_ring[cons];
++ data = rx_buf->data;
++ data_ptr = rx_buf->data_ptr;
+ prefetch(data_ptr);
+
+ misc = le32_to_cpu(rxcmp->rx_cmp_misc_v1);
+@@ -1606,11 +1610,17 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
+
+ rx_buf->data = NULL;
+ if (rxcmp1->rx_cmp_cfa_code_errors_v2 & RX_CMP_L2_ERRORS) {
++ u32 rx_err = le32_to_cpu(rxcmp1->rx_cmp_cfa_code_errors_v2);
++
+ bnxt_reuse_rx_data(rxr, cons, data);
+ if (agg_bufs)
+ bnxt_reuse_rx_agg_bufs(cpr, cp_cons, agg_bufs);
+
+ rc = -EIO;
++ if (rx_err & RX_CMPL_ERRORS_BUFFER_ERROR_MASK) {
++ netdev_warn(bp->dev, "RX buffer error %x\n", rx_err);
++ bnxt_sched_reset(bp, rxr);
++ }
+ goto next_rx;
+ }
+
+diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_main.c b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
+index 503cfadff4ac..d4ee9f9c8c34 100644
+--- a/drivers/net/ethernet/cavium/thunder/nicvf_main.c
++++ b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
+@@ -1328,10 +1328,11 @@ int nicvf_stop(struct net_device *netdev)
+ struct nicvf_cq_poll *cq_poll = NULL;
+ union nic_mbx mbx = {};
+
+- cancel_delayed_work_sync(&nic->link_change_work);
+-
+ /* wait till all queued set_rx_mode tasks completes */
+- drain_workqueue(nic->nicvf_rx_mode_wq);
++ if (nic->nicvf_rx_mode_wq) {
++ cancel_delayed_work_sync(&nic->link_change_work);
++ drain_workqueue(nic->nicvf_rx_mode_wq);
++ }
+
+ mbx.msg.msg = NIC_MBOX_MSG_SHUTDOWN;
+ nicvf_send_msg_to_pf(nic, &mbx);
+@@ -1452,7 +1453,8 @@ int nicvf_open(struct net_device *netdev)
+ struct nicvf_cq_poll *cq_poll = NULL;
+
+ /* wait till all queued set_rx_mode tasks completes if any */
+- drain_workqueue(nic->nicvf_rx_mode_wq);
++ if (nic->nicvf_rx_mode_wq)
++ drain_workqueue(nic->nicvf_rx_mode_wq);
+
+ netif_carrier_off(netdev);
+
+@@ -1550,10 +1552,12 @@ int nicvf_open(struct net_device *netdev)
+ /* Send VF config done msg to PF */
+ nicvf_send_cfg_done(nic);
+
+- INIT_DELAYED_WORK(&nic->link_change_work,
+- nicvf_link_status_check_task);
+- queue_delayed_work(nic->nicvf_rx_mode_wq,
+- &nic->link_change_work, 0);
++ if (nic->nicvf_rx_mode_wq) {
++ INIT_DELAYED_WORK(&nic->link_change_work,
++ nicvf_link_status_check_task);
++ queue_delayed_work(nic->nicvf_rx_mode_wq,
++ &nic->link_change_work, 0);
++ }
+
+ return 0;
+ cleanup:
+diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_queues.c b/drivers/net/ethernet/cavium/thunder/nicvf_queues.c
+index 5b4d3badcb73..e246f9733bb8 100644
+--- a/drivers/net/ethernet/cavium/thunder/nicvf_queues.c
++++ b/drivers/net/ethernet/cavium/thunder/nicvf_queues.c
+@@ -105,20 +105,19 @@ static inline struct pgcache *nicvf_alloc_page(struct nicvf *nic,
+ /* Check if page can be recycled */
+ if (page) {
+ ref_count = page_ref_count(page);
+- /* Check if this page has been used once i.e 'put_page'
+- * called after packet transmission i.e internal ref_count
+- * and page's ref_count are equal i.e page can be recycled.
++ /* This page can be recycled if internal ref_count and page's
++ * ref_count are equal, indicating that the page has been used
++ * once for packet transmission. For non-XDP mode, internal
++ * ref_count is always '1'.
+ */
+- if (rbdr->is_xdp && (ref_count == pgcache->ref_count))
+- pgcache->ref_count--;
+- else
+- page = NULL;
+-
+- /* In non-XDP mode, page's ref_count needs to be '1' for it
+- * to be recycled.
+- */
+- if (!rbdr->is_xdp && (ref_count != 1))
++ if (rbdr->is_xdp) {
++ if (ref_count == pgcache->ref_count)
++ pgcache->ref_count--;
++ else
++ page = NULL;
++ } else if (ref_count != 1) {
+ page = NULL;
++ }
+ }
+
+ if (!page) {
+@@ -365,11 +364,10 @@ static void nicvf_free_rbdr(struct nicvf *nic, struct rbdr *rbdr)
+ while (head < rbdr->pgcnt) {
+ pgcache = &rbdr->pgcache[head];
+ if (pgcache->page && page_ref_count(pgcache->page) != 0) {
+- if (!rbdr->is_xdp) {
+- put_page(pgcache->page);
+- continue;
++ if (rbdr->is_xdp) {
++ page_ref_sub(pgcache->page,
++ pgcache->ref_count - 1);
+ }
+- page_ref_sub(pgcache->page, pgcache->ref_count - 1);
+ put_page(pgcache->page);
+ }
+ head++;
+diff --git a/drivers/net/ethernet/cisco/enic/enic_main.c b/drivers/net/ethernet/cisco/enic/enic_main.c
+index 9a7f70db20c7..733d9172425b 100644
+--- a/drivers/net/ethernet/cisco/enic/enic_main.c
++++ b/drivers/net/ethernet/cisco/enic/enic_main.c
+@@ -119,7 +119,7 @@ static void enic_init_affinity_hint(struct enic *enic)
+
+ for (i = 0; i < enic->intr_count; i++) {
+ if (enic_is_err_intr(enic, i) || enic_is_notify_intr(enic, i) ||
+- (enic->msix[i].affinity_mask &&
++ (cpumask_available(enic->msix[i].affinity_mask) &&
+ !cpumask_empty(enic->msix[i].affinity_mask)))
+ continue;
+ if (zalloc_cpumask_var(&enic->msix[i].affinity_mask,
+@@ -148,7 +148,7 @@ static void enic_set_affinity_hint(struct enic *enic)
+ for (i = 0; i < enic->intr_count; i++) {
+ if (enic_is_err_intr(enic, i) ||
+ enic_is_notify_intr(enic, i) ||
+- !enic->msix[i].affinity_mask ||
++ !cpumask_available(enic->msix[i].affinity_mask) ||
+ cpumask_empty(enic->msix[i].affinity_mask))
+ continue;
+ err = irq_set_affinity_hint(enic->msix_entry[i].vector,
+@@ -161,7 +161,7 @@ static void enic_set_affinity_hint(struct enic *enic)
+ for (i = 0; i < enic->wq_count; i++) {
+ int wq_intr = enic_msix_wq_intr(enic, i);
+
+- if (enic->msix[wq_intr].affinity_mask &&
++ if (cpumask_available(enic->msix[wq_intr].affinity_mask) &&
+ !cpumask_empty(enic->msix[wq_intr].affinity_mask))
+ netif_set_xps_queue(enic->netdev,
+ enic->msix[wq_intr].affinity_mask,
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+index 36eab37d8a40..09c774fe8853 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+@@ -192,6 +192,7 @@ struct hnae3_ae_dev {
+ const struct hnae3_ae_ops *ops;
+ struct list_head node;
+ u32 flag;
++ u8 override_pci_need_reset; /* fix to stop multiple reset happening */
+ enum hnae3_dev_type dev_type;
+ enum hnae3_reset_type reset_type;
+ void *priv;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index 1bf7a5f116a0..d84c50068f66 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -1852,7 +1852,9 @@ static pci_ers_result_t hns3_slot_reset(struct pci_dev *pdev)
+
+ /* request the reset */
+ if (ae_dev->ops->reset_event) {
+- ae_dev->ops->reset_event(pdev, NULL);
++ if (!ae_dev->override_pci_need_reset)
++ ae_dev->ops->reset_event(pdev, NULL);
++
+ return PCI_ERS_RESULT_RECOVERED;
+ }
+
+@@ -2476,6 +2478,8 @@ static int hns3_add_frag(struct hns3_enet_ring *ring, struct hns3_desc *desc,
+ desc = &ring->desc[ring->next_to_clean];
+ desc_cb = &ring->desc_cb[ring->next_to_clean];
+ bd_base_info = le32_to_cpu(desc->rx.bd_base_info);
++ /* make sure HW write desc complete */
++ dma_rmb();
+ if (!hnae3_get_bit(bd_base_info, HNS3_RXD_VLD_B))
+ return -ENXIO;
+
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
+index d0f654123b9b..3ea72e4d9dc4 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
+@@ -1094,10 +1094,10 @@ static int hclge_log_rocee_ovf_error(struct hclge_dev *hdev)
+ return 0;
+ }
+
+-static int hclge_log_and_clear_rocee_ras_error(struct hclge_dev *hdev)
++static enum hnae3_reset_type
++hclge_log_and_clear_rocee_ras_error(struct hclge_dev *hdev)
+ {
+- enum hnae3_reset_type reset_type = HNAE3_FUNC_RESET;
+- struct hnae3_ae_dev *ae_dev = hdev->ae_dev;
++ enum hnae3_reset_type reset_type = HNAE3_NONE_RESET;
+ struct device *dev = &hdev->pdev->dev;
+ struct hclge_desc desc[2];
+ unsigned int status;
+@@ -1110,17 +1110,20 @@ static int hclge_log_and_clear_rocee_ras_error(struct hclge_dev *hdev)
+ if (ret) {
+ dev_err(dev, "failed(%d) to query ROCEE RAS INT SRC\n", ret);
+ /* reset everything for now */
+- HCLGE_SET_DEFAULT_RESET_REQUEST(HNAE3_GLOBAL_RESET);
+- return ret;
++ return HNAE3_GLOBAL_RESET;
+ }
+
+ status = le32_to_cpu(desc[0].data[0]);
+
+- if (status & HCLGE_ROCEE_RERR_INT_MASK)
++ if (status & HCLGE_ROCEE_RERR_INT_MASK) {
+ dev_warn(dev, "ROCEE RAS AXI rresp error\n");
++ reset_type = HNAE3_FUNC_RESET;
++ }
+
+- if (status & HCLGE_ROCEE_BERR_INT_MASK)
++ if (status & HCLGE_ROCEE_BERR_INT_MASK) {
+ dev_warn(dev, "ROCEE RAS AXI bresp error\n");
++ reset_type = HNAE3_FUNC_RESET;
++ }
+
+ if (status & HCLGE_ROCEE_ECC_INT_MASK) {
+ dev_warn(dev, "ROCEE RAS 2bit ECC error\n");
+@@ -1132,9 +1135,9 @@ static int hclge_log_and_clear_rocee_ras_error(struct hclge_dev *hdev)
+ if (ret) {
+ dev_err(dev, "failed(%d) to process ovf error\n", ret);
+ /* reset everything for now */
+- HCLGE_SET_DEFAULT_RESET_REQUEST(HNAE3_GLOBAL_RESET);
+- return ret;
++ return HNAE3_GLOBAL_RESET;
+ }
++ reset_type = HNAE3_FUNC_RESET;
+ }
+
+ /* clear error status */
+@@ -1143,12 +1146,10 @@ static int hclge_log_and_clear_rocee_ras_error(struct hclge_dev *hdev)
+ if (ret) {
+ dev_err(dev, "failed(%d) to clear ROCEE RAS error\n", ret);
+ /* reset everything for now */
+- reset_type = HNAE3_GLOBAL_RESET;
++ return HNAE3_GLOBAL_RESET;
+ }
+
+- HCLGE_SET_DEFAULT_RESET_REQUEST(reset_type);
+-
+- return ret;
++ return reset_type;
+ }
+
+ static int hclge_config_rocee_ras_interrupt(struct hclge_dev *hdev, bool en)
+@@ -1178,15 +1179,18 @@ static int hclge_config_rocee_ras_interrupt(struct hclge_dev *hdev, bool en)
+ return ret;
+ }
+
+-static int hclge_handle_rocee_ras_error(struct hnae3_ae_dev *ae_dev)
++static void hclge_handle_rocee_ras_error(struct hnae3_ae_dev *ae_dev)
+ {
++ enum hnae3_reset_type reset_type = HNAE3_NONE_RESET;
+ struct hclge_dev *hdev = ae_dev->priv;
+
+ if (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state) ||
+ hdev->pdev->revision < 0x21)
+- return HNAE3_NONE_RESET;
++ return;
+
+- return hclge_log_and_clear_rocee_ras_error(hdev);
++ reset_type = hclge_log_and_clear_rocee_ras_error(hdev);
++ if (reset_type != HNAE3_NONE_RESET)
++ HCLGE_SET_DEFAULT_RESET_REQUEST(reset_type);
+ }
+
+ static const struct hclge_hw_blk hw_blk[] = {
+@@ -1259,8 +1263,10 @@ pci_ers_result_t hclge_handle_hw_ras_error(struct hnae3_ae_dev *ae_dev)
+ hclge_handle_all_ras_errors(hdev);
+ } else {
+ if (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state) ||
+- hdev->pdev->revision < 0x21)
++ hdev->pdev->revision < 0x21) {
++ ae_dev->override_pci_need_reset = 1;
+ return PCI_ERS_RESULT_RECOVERED;
++ }
+ }
+
+ if (status & HCLGE_RAS_REG_ROCEE_ERR_MASK) {
+@@ -1269,8 +1275,11 @@ pci_ers_result_t hclge_handle_hw_ras_error(struct hnae3_ae_dev *ae_dev)
+ }
+
+ if (status & HCLGE_RAS_REG_NFE_MASK ||
+- status & HCLGE_RAS_REG_ROCEE_ERR_MASK)
++ status & HCLGE_RAS_REG_ROCEE_ERR_MASK) {
++ ae_dev->override_pci_need_reset = 0;
+ return PCI_ERS_RESULT_NEED_RESET;
++ }
++ ae_dev->override_pci_need_reset = 1;
+
+ return PCI_ERS_RESULT_RECOVERED;
+ }
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 5ecbb1adcf3b..51cfe95f3e24 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -1885,6 +1885,7 @@ static int do_hard_reset(struct ibmvnic_adapter *adapter,
+ */
+ adapter->state = VNIC_PROBED;
+
++ reinit_completion(&adapter->init_done);
+ rc = init_crq_queue(adapter);
+ if (rc) {
+ netdev_err(adapter->netdev,
+@@ -4625,7 +4626,7 @@ static int ibmvnic_reset_init(struct ibmvnic_adapter *adapter)
+ old_num_rx_queues = adapter->req_rx_queues;
+ old_num_tx_queues = adapter->req_tx_queues;
+
+- init_completion(&adapter->init_done);
++ reinit_completion(&adapter->init_done);
+ adapter->init_done_rc = 0;
+ ibmvnic_send_crq_init(adapter);
+ if (!wait_for_completion_timeout(&adapter->init_done, timeout)) {
+@@ -4680,7 +4681,6 @@ static int ibmvnic_init(struct ibmvnic_adapter *adapter)
+
+ adapter->from_passive_init = false;
+
+- init_completion(&adapter->init_done);
+ adapter->init_done_rc = 0;
+ ibmvnic_send_crq_init(adapter);
+ if (!wait_for_completion_timeout(&adapter->init_done, timeout)) {
+@@ -4759,6 +4759,7 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
+ INIT_WORK(&adapter->ibmvnic_reset, __ibmvnic_reset);
+ INIT_LIST_HEAD(&adapter->rwi_list);
+ spin_lock_init(&adapter->rwi_lock);
++ init_completion(&adapter->init_done);
+ adapter->resetting = false;
+
+ adapter->mac_change_pending = false;
+diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
+index 189f231075c2..7acc61e4f645 100644
+--- a/drivers/net/ethernet/intel/e1000e/netdev.c
++++ b/drivers/net/ethernet/intel/e1000e/netdev.c
+@@ -2106,7 +2106,7 @@ static int e1000_request_msix(struct e1000_adapter *adapter)
+ if (strlen(netdev->name) < (IFNAMSIZ - 5))
+ snprintf(adapter->rx_ring->name,
+ sizeof(adapter->rx_ring->name) - 1,
+- "%s-rx-0", netdev->name);
++ "%.14s-rx-0", netdev->name);
+ else
+ memcpy(adapter->rx_ring->name, netdev->name, IFNAMSIZ);
+ err = request_irq(adapter->msix_entries[vector].vector,
+@@ -2122,7 +2122,7 @@ static int e1000_request_msix(struct e1000_adapter *adapter)
+ if (strlen(netdev->name) < (IFNAMSIZ - 5))
+ snprintf(adapter->tx_ring->name,
+ sizeof(adapter->tx_ring->name) - 1,
+- "%s-tx-0", netdev->name);
++ "%.14s-tx-0", netdev->name);
+ else
+ memcpy(adapter->tx_ring->name, netdev->name, IFNAMSIZ);
+ err = request_irq(adapter->msix_entries[vector].vector,
+@@ -5309,8 +5309,13 @@ static void e1000_watchdog_task(struct work_struct *work)
+ /* 8000ES2LAN requires a Rx packet buffer work-around
+ * on link down event; reset the controller to flush
+ * the Rx packet buffer.
++ *
++ * If the link is lost the controller stops DMA, but
++ * if there is queued Tx work it cannot be done. So
++ * reset the controller to flush the Tx packet buffers.
+ */
+- if (adapter->flags & FLAG_RX_NEEDS_RESTART)
++ if ((adapter->flags & FLAG_RX_NEEDS_RESTART) ||
++ e1000_desc_unused(tx_ring) + 1 < tx_ring->count)
+ adapter->flags |= FLAG_RESTART_NOW;
+ else
+ pm_schedule_suspend(netdev->dev.parent,
+@@ -5333,14 +5338,6 @@ link_up:
+ adapter->gotc_old = adapter->stats.gotc;
+ spin_unlock(&adapter->stats64_lock);
+
+- /* If the link is lost the controller stops DMA, but
+- * if there is queued Tx work it cannot be done. So
+- * reset the controller to flush the Tx packet buffers.
+- */
+- if (!netif_carrier_ok(netdev) &&
+- (e1000_desc_unused(tx_ring) + 1 < tx_ring->count))
+- adapter->flags |= FLAG_RESTART_NOW;
+-
+ /* If reset is necessary, do it outside of interrupt context. */
+ if (adapter->flags & FLAG_RESTART_NOW) {
+ schedule_work(&adapter->reset_task);
+@@ -7351,6 +7348,8 @@ static int e1000_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+
+ e1000_print_device_info(adapter);
+
++ dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NEVER_SKIP);
++
+ if (pci_dev_run_wake(pdev))
+ pm_runtime_put_noidle(&pdev->dev);
+
+diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c
+index 2e5693107fa4..8d602247eb44 100644
+--- a/drivers/net/ethernet/intel/ice/ice_switch.c
++++ b/drivers/net/ethernet/intel/ice/ice_switch.c
+@@ -1538,9 +1538,20 @@ ice_remove_rule_internal(struct ice_hw *hw, u8 recp_id,
+ } else if (!list_elem->vsi_list_info) {
+ status = ICE_ERR_DOES_NOT_EXIST;
+ goto exit;
++ } else if (list_elem->vsi_list_info->ref_cnt > 1) {
++ /* a ref_cnt > 1 indicates that the vsi_list is being
++ * shared by multiple rules. Decrement the ref_cnt and
++ * remove this rule, but do not modify the list, as it
++ * is in-use by other rules.
++ */
++ list_elem->vsi_list_info->ref_cnt--;
++ remove_rule = true;
+ } else {
+- if (list_elem->vsi_list_info->ref_cnt > 1)
+- list_elem->vsi_list_info->ref_cnt--;
++ /* a ref_cnt of 1 indicates the vsi_list is only used
++ * by one rule. However, the original removal request is only
++ * for a single VSI. Update the vsi_list first, and only
++ * remove the rule if there are no further VSIs in this list.
++ */
+ vsi_handle = f_entry->fltr_info.vsi_handle;
+ status = ice_rem_update_vsi_list(hw, vsi_handle, list_elem);
+ if (status)
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index 16066c2d5b3a..931beac3359d 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -1380,13 +1380,9 @@ static void mvpp2_port_reset(struct mvpp2_port *port)
+ for (i = 0; i < ARRAY_SIZE(mvpp2_ethtool_regs); i++)
+ mvpp2_read_count(port, &mvpp2_ethtool_regs[i]);
+
+- val = readl(port->base + MVPP2_GMAC_CTRL_2_REG) &
+- ~MVPP2_GMAC_PORT_RESET_MASK;
++ val = readl(port->base + MVPP2_GMAC_CTRL_2_REG) |
++ MVPP2_GMAC_PORT_RESET_MASK;
+ writel(val, port->base + MVPP2_GMAC_CTRL_2_REG);
+-
+- while (readl(port->base + MVPP2_GMAC_CTRL_2_REG) &
+- MVPP2_GMAC_PORT_RESET_MASK)
+- continue;
+ }
+
+ /* Change maximum receive size of the port */
+@@ -4543,12 +4539,15 @@ static void mvpp2_gmac_config(struct mvpp2_port *port, unsigned int mode,
+ const struct phylink_link_state *state)
+ {
+ u32 an, ctrl0, ctrl2, ctrl4;
++ u32 old_ctrl2;
+
+ an = readl(port->base + MVPP2_GMAC_AUTONEG_CONFIG);
+ ctrl0 = readl(port->base + MVPP2_GMAC_CTRL_0_REG);
+ ctrl2 = readl(port->base + MVPP2_GMAC_CTRL_2_REG);
+ ctrl4 = readl(port->base + MVPP22_GMAC_CTRL_4_REG);
+
++ old_ctrl2 = ctrl2;
++
+ /* Force link down */
+ an &= ~MVPP2_GMAC_FORCE_LINK_PASS;
+ an |= MVPP2_GMAC_FORCE_LINK_DOWN;
+@@ -4621,6 +4620,12 @@ static void mvpp2_gmac_config(struct mvpp2_port *port, unsigned int mode,
+ writel(ctrl2, port->base + MVPP2_GMAC_CTRL_2_REG);
+ writel(ctrl4, port->base + MVPP22_GMAC_CTRL_4_REG);
+ writel(an, port->base + MVPP2_GMAC_AUTONEG_CONFIG);
++
++ if (old_ctrl2 & MVPP2_GMAC_PORT_RESET_MASK) {
++ while (readl(port->base + MVPP2_GMAC_CTRL_2_REG) &
++ MVPP2_GMAC_PORT_RESET_MASK)
++ continue;
++ }
+ }
+
+ static void mvpp2_mac_config(struct net_device *dev, unsigned int mode,
+diff --git a/drivers/net/ethernet/marvell/sky2.c b/drivers/net/ethernet/marvell/sky2.c
+index 57727fe1501e..8b3495ee2b6e 100644
+--- a/drivers/net/ethernet/marvell/sky2.c
++++ b/drivers/net/ethernet/marvell/sky2.c
+@@ -46,6 +46,7 @@
+ #include <linux/mii.h>
+ #include <linux/of_device.h>
+ #include <linux/of_net.h>
++#include <linux/dmi.h>
+
+ #include <asm/irq.h>
+
+@@ -93,7 +94,7 @@ static int copybreak __read_mostly = 128;
+ module_param(copybreak, int, 0);
+ MODULE_PARM_DESC(copybreak, "Receive copy threshold");
+
+-static int disable_msi = 0;
++static int disable_msi = -1;
+ module_param(disable_msi, int, 0);
+ MODULE_PARM_DESC(disable_msi, "Disable Message Signaled Interrupt (MSI)");
+
+@@ -4917,6 +4918,24 @@ static const char *sky2_name(u8 chipid, char *buf, int sz)
+ return buf;
+ }
+
++static const struct dmi_system_id msi_blacklist[] = {
++ {
++ .ident = "Dell Inspiron 1545",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 1545"),
++ },
++ },
++ {
++ .ident = "Gateway P-79",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Gateway"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "P-79"),
++ },
++ },
++ {}
++};
++
+ static int sky2_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ {
+ struct net_device *dev, *dev1;
+@@ -5028,6 +5047,9 @@ static int sky2_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ goto err_out_free_pci;
+ }
+
++ if (disable_msi == -1)
++ disable_msi = !!dmi_check_system(msi_blacklist);
++
+ if (!disable_msi && pci_enable_msi(pdev) == 0) {
+ err = sky2_test_msi(hw);
+ if (err) {
+diff --git a/drivers/net/ethernet/mellanox/mlx4/cmd.c b/drivers/net/ethernet/mellanox/mlx4/cmd.c
+index e65bc3c95630..857588e2488d 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx4/cmd.c
+@@ -2645,6 +2645,8 @@ int mlx4_cmd_use_events(struct mlx4_dev *dev)
+ if (!priv->cmd.context)
+ return -ENOMEM;
+
++ if (mlx4_is_mfunc(dev))
++ mutex_lock(&priv->cmd.slave_cmd_mutex);
+ down_write(&priv->cmd.switch_sem);
+ for (i = 0; i < priv->cmd.max_cmds; ++i) {
+ priv->cmd.context[i].token = i;
+@@ -2670,6 +2672,8 @@ int mlx4_cmd_use_events(struct mlx4_dev *dev)
+ down(&priv->cmd.poll_sem);
+ priv->cmd.use_events = 1;
+ up_write(&priv->cmd.switch_sem);
++ if (mlx4_is_mfunc(dev))
++ mutex_unlock(&priv->cmd.slave_cmd_mutex);
+
+ return err;
+ }
+@@ -2682,6 +2686,8 @@ void mlx4_cmd_use_polling(struct mlx4_dev *dev)
+ struct mlx4_priv *priv = mlx4_priv(dev);
+ int i;
+
++ if (mlx4_is_mfunc(dev))
++ mutex_lock(&priv->cmd.slave_cmd_mutex);
+ down_write(&priv->cmd.switch_sem);
+ priv->cmd.use_events = 0;
+
+@@ -2689,9 +2695,12 @@ void mlx4_cmd_use_polling(struct mlx4_dev *dev)
+ down(&priv->cmd.event_sem);
+
+ kfree(priv->cmd.context);
++ priv->cmd.context = NULL;
+
+ up(&priv->cmd.poll_sem);
+ up_write(&priv->cmd.switch_sem);
++ if (mlx4_is_mfunc(dev))
++ mutex_unlock(&priv->cmd.slave_cmd_mutex);
+ }
+
+ struct mlx4_cmd_mailbox *mlx4_alloc_cmd_mailbox(struct mlx4_dev *dev)
+diff --git a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
+index eb13d3618162..4356f3a58002 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
++++ b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
+@@ -2719,13 +2719,13 @@ static int qp_get_mtt_size(struct mlx4_qp_context *qpc)
+ int total_pages;
+ int total_mem;
+ int page_offset = (be32_to_cpu(qpc->params2) >> 6) & 0x3f;
++ int tot;
+
+ sq_size = 1 << (log_sq_size + log_sq_sride + 4);
+ rq_size = (srq|rss|xrc) ? 0 : (1 << (log_rq_size + log_rq_stride + 4));
+ total_mem = sq_size + rq_size;
+- total_pages =
+- roundup_pow_of_two((total_mem + (page_offset << 6)) >>
+- page_shift);
++ tot = (total_mem + (page_offset << 6)) >> page_shift;
++ total_pages = !tot ? 1 : roundup_pow_of_two(tot);
+
+ return total_pages;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
+index eac245a93f91..4ab0d030b544 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
+@@ -122,7 +122,9 @@ out:
+ return err;
+ }
+
+-/* xoff = ((301+2.16 * len [m]) * speed [Gbps] + 2.72 MTU [B]) */
++/* xoff = ((301+2.16 * len [m]) * speed [Gbps] + 2.72 MTU [B])
++ * minimum speed value is 40Gbps
++ */
+ static u32 calculate_xoff(struct mlx5e_priv *priv, unsigned int mtu)
+ {
+ u32 speed;
+@@ -130,10 +132,9 @@ static u32 calculate_xoff(struct mlx5e_priv *priv, unsigned int mtu)
+ int err;
+
+ err = mlx5e_port_linkspeed(priv->mdev, &speed);
+- if (err) {
+- mlx5_core_warn(priv->mdev, "cannot get port speed\n");
+- return 0;
+- }
++ if (err)
++ speed = SPEED_40000;
++ speed = max_t(u32, speed, SPEED_40000);
+
+ xoff = (301 + 216 * priv->dcbx.cable_len / 100) * speed / 1000 + 272 * mtu / 100;
+
+@@ -142,7 +143,7 @@ static u32 calculate_xoff(struct mlx5e_priv *priv, unsigned int mtu)
+ }
+
+ static int update_xoff_threshold(struct mlx5e_port_buffer *port_buffer,
+- u32 xoff, unsigned int mtu)
++ u32 xoff, unsigned int max_mtu)
+ {
+ int i;
+
+@@ -154,11 +155,12 @@ static int update_xoff_threshold(struct mlx5e_port_buffer *port_buffer,
+ }
+
+ if (port_buffer->buffer[i].size <
+- (xoff + mtu + (1 << MLX5E_BUFFER_CELL_SHIFT)))
++ (xoff + max_mtu + (1 << MLX5E_BUFFER_CELL_SHIFT)))
+ return -ENOMEM;
+
+ port_buffer->buffer[i].xoff = port_buffer->buffer[i].size - xoff;
+- port_buffer->buffer[i].xon = port_buffer->buffer[i].xoff - mtu;
++ port_buffer->buffer[i].xon =
++ port_buffer->buffer[i].xoff - max_mtu;
+ }
+
+ return 0;
+@@ -166,7 +168,7 @@ static int update_xoff_threshold(struct mlx5e_port_buffer *port_buffer,
+
+ /**
+ * update_buffer_lossy()
+- * mtu: device's MTU
++ * max_mtu: netdev's max_mtu
+ * pfc_en: <input> current pfc configuration
+ * buffer: <input> current prio to buffer mapping
+ * xoff: <input> xoff value
+@@ -183,7 +185,7 @@ static int update_xoff_threshold(struct mlx5e_port_buffer *port_buffer,
+ * Return 0 if no error.
+ * Set change to true if buffer configuration is modified.
+ */
+-static int update_buffer_lossy(unsigned int mtu,
++static int update_buffer_lossy(unsigned int max_mtu,
+ u8 pfc_en, u8 *buffer, u32 xoff,
+ struct mlx5e_port_buffer *port_buffer,
+ bool *change)
+@@ -220,7 +222,7 @@ static int update_buffer_lossy(unsigned int mtu,
+ }
+
+ if (changed) {
+- err = update_xoff_threshold(port_buffer, xoff, mtu);
++ err = update_xoff_threshold(port_buffer, xoff, max_mtu);
+ if (err)
+ return err;
+
+@@ -230,6 +232,7 @@ static int update_buffer_lossy(unsigned int mtu,
+ return 0;
+ }
+
++#define MINIMUM_MAX_MTU 9216
+ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
+ u32 change, unsigned int mtu,
+ struct ieee_pfc *pfc,
+@@ -241,12 +244,14 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
+ bool update_prio2buffer = false;
+ u8 buffer[MLX5E_MAX_PRIORITY];
+ bool update_buffer = false;
++ unsigned int max_mtu;
+ u32 total_used = 0;
+ u8 curr_pfc_en;
+ int err;
+ int i;
+
+ mlx5e_dbg(HW, priv, "%s: change=%x\n", __func__, change);
++ max_mtu = max_t(unsigned int, priv->netdev->max_mtu, MINIMUM_MAX_MTU);
+
+ err = mlx5e_port_query_buffer(priv, &port_buffer);
+ if (err)
+@@ -254,7 +259,7 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
+
+ if (change & MLX5E_PORT_BUFFER_CABLE_LEN) {
+ update_buffer = true;
+- err = update_xoff_threshold(&port_buffer, xoff, mtu);
++ err = update_xoff_threshold(&port_buffer, xoff, max_mtu);
+ if (err)
+ return err;
+ }
+@@ -264,7 +269,7 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
+ if (err)
+ return err;
+
+- err = update_buffer_lossy(mtu, pfc->pfc_en, buffer, xoff,
++ err = update_buffer_lossy(max_mtu, pfc->pfc_en, buffer, xoff,
+ &port_buffer, &update_buffer);
+ if (err)
+ return err;
+@@ -276,8 +281,8 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
+ if (err)
+ return err;
+
+- err = update_buffer_lossy(mtu, curr_pfc_en, prio2buffer, xoff,
+- &port_buffer, &update_buffer);
++ err = update_buffer_lossy(max_mtu, curr_pfc_en, prio2buffer,
++ xoff, &port_buffer, &update_buffer);
+ if (err)
+ return err;
+ }
+@@ -301,7 +306,7 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
+ return -EINVAL;
+
+ update_buffer = true;
+- err = update_xoff_threshold(&port_buffer, xoff, mtu);
++ err = update_xoff_threshold(&port_buffer, xoff, max_mtu);
+ if (err)
+ return err;
+ }
+@@ -309,7 +314,7 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
+ /* Need to update buffer configuration if xoff value is changed */
+ if (!update_buffer && xoff != priv->dcbx.xoff) {
+ update_buffer = true;
+- err = update_xoff_threshold(&port_buffer, xoff, mtu);
++ err = update_xoff_threshold(&port_buffer, xoff, max_mtu);
+ if (err)
+ return err;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_common.c b/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
+index 3078491cc0d0..1539cf3de5dc 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
+@@ -45,7 +45,9 @@ int mlx5e_create_tir(struct mlx5_core_dev *mdev,
+ if (err)
+ return err;
+
++ mutex_lock(&mdev->mlx5e_res.td.list_lock);
+ list_add(&tir->list, &mdev->mlx5e_res.td.tirs_list);
++ mutex_unlock(&mdev->mlx5e_res.td.list_lock);
+
+ return 0;
+ }
+@@ -53,8 +55,10 @@ int mlx5e_create_tir(struct mlx5_core_dev *mdev,
+ void mlx5e_destroy_tir(struct mlx5_core_dev *mdev,
+ struct mlx5e_tir *tir)
+ {
++ mutex_lock(&mdev->mlx5e_res.td.list_lock);
+ mlx5_core_destroy_tir(mdev, tir->tirn);
+ list_del(&tir->list);
++ mutex_unlock(&mdev->mlx5e_res.td.list_lock);
+ }
+
+ static int mlx5e_create_mkey(struct mlx5_core_dev *mdev, u32 pdn,
+@@ -114,6 +118,7 @@ int mlx5e_create_mdev_resources(struct mlx5_core_dev *mdev)
+ }
+
+ INIT_LIST_HEAD(&mdev->mlx5e_res.td.tirs_list);
++ mutex_init(&mdev->mlx5e_res.td.list_lock);
+
+ return 0;
+
+@@ -141,15 +146,17 @@ int mlx5e_refresh_tirs(struct mlx5e_priv *priv, bool enable_uc_lb)
+ {
+ struct mlx5_core_dev *mdev = priv->mdev;
+ struct mlx5e_tir *tir;
+- int err = -ENOMEM;
++ int err = 0;
+ u32 tirn = 0;
+ int inlen;
+ void *in;
+
+ inlen = MLX5_ST_SZ_BYTES(modify_tir_in);
+ in = kvzalloc(inlen, GFP_KERNEL);
+- if (!in)
++ if (!in) {
++ err = -ENOMEM;
+ goto out;
++ }
+
+ if (enable_uc_lb)
+ MLX5_SET(modify_tir_in, in, ctx.self_lb_block,
+@@ -157,6 +164,7 @@ int mlx5e_refresh_tirs(struct mlx5e_priv *priv, bool enable_uc_lb)
+
+ MLX5_SET(modify_tir_in, in, bitmask.self_lb_en, 1);
+
++ mutex_lock(&mdev->mlx5e_res.td.list_lock);
+ list_for_each_entry(tir, &mdev->mlx5e_res.td.tirs_list, list) {
+ tirn = tir->tirn;
+ err = mlx5_core_modify_tir(mdev, tirn, in, inlen);
+@@ -168,6 +176,7 @@ out:
+ kvfree(in);
+ if (err)
+ netdev_err(priv->netdev, "refresh tir(0x%x) failed, %d\n", tirn, err);
++ mutex_unlock(&mdev->mlx5e_res.td.list_lock);
+
+ return err;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+index 47233b9a4f81..e6099f51d25f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+@@ -357,6 +357,9 @@ int mlx5e_ethtool_set_channels(struct mlx5e_priv *priv,
+
+ if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) {
+ priv->channels.params = new_channels.params;
++ if (!netif_is_rxfh_configured(priv->netdev))
++ mlx5e_build_default_indir_rqt(priv->rss_params.indirection_rqt,
++ MLX5E_INDIR_RQT_SIZE, count);
+ goto out;
+ }
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+index 5b492b67f4e1..13c48883ed61 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+@@ -1812,7 +1812,7 @@ int mlx5_eswitch_set_vport_mac(struct mlx5_eswitch *esw,
+ u64 node_guid;
+ int err = 0;
+
+- if (!MLX5_CAP_GEN(esw->dev, vport_group_manager))
++ if (!esw || !MLX5_CAP_GEN(esw->dev, vport_group_manager))
+ return -EPERM;
+ if (!LEGAL_VPORT(esw, vport) || is_multicast_ether_addr(mac))
+ return -EINVAL;
+@@ -1886,7 +1886,7 @@ int mlx5_eswitch_get_vport_config(struct mlx5_eswitch *esw,
+ {
+ struct mlx5_vport *evport;
+
+- if (!MLX5_CAP_GEN(esw->dev, vport_group_manager))
++ if (!esw || !MLX5_CAP_GEN(esw->dev, vport_group_manager))
+ return -EPERM;
+ if (!LEGAL_VPORT(esw, vport))
+ return -EINVAL;
+@@ -2059,19 +2059,24 @@ static int normalize_vports_min_rate(struct mlx5_eswitch *esw, u32 divider)
+ int mlx5_eswitch_set_vport_rate(struct mlx5_eswitch *esw, int vport,
+ u32 max_rate, u32 min_rate)
+ {
+- u32 fw_max_bw_share = MLX5_CAP_QOS(esw->dev, max_tsar_bw_share);
+- bool min_rate_supported = MLX5_CAP_QOS(esw->dev, esw_bw_share) &&
+- fw_max_bw_share >= MLX5_MIN_BW_SHARE;
+- bool max_rate_supported = MLX5_CAP_QOS(esw->dev, esw_rate_limit);
+ struct mlx5_vport *evport;
++ u32 fw_max_bw_share;
+ u32 previous_min_rate;
+ u32 divider;
++ bool min_rate_supported;
++ bool max_rate_supported;
+ int err = 0;
+
+ if (!ESW_ALLOWED(esw))
+ return -EPERM;
+ if (!LEGAL_VPORT(esw, vport))
+ return -EINVAL;
++
++ fw_max_bw_share = MLX5_CAP_QOS(esw->dev, max_tsar_bw_share);
++ min_rate_supported = MLX5_CAP_QOS(esw->dev, esw_bw_share) &&
++ fw_max_bw_share >= MLX5_MIN_BW_SHARE;
++ max_rate_supported = MLX5_CAP_QOS(esw->dev, esw_rate_limit);
++
+ if ((min_rate && !min_rate_supported) || (max_rate && !max_rate_supported))
+ return -EOPNOTSUPP;
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fpga/tls.c b/drivers/net/ethernet/mellanox/mlx5/core/fpga/tls.c
+index 5cf5f2a9d51f..8de64e88c670 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fpga/tls.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fpga/tls.c
+@@ -217,15 +217,21 @@ int mlx5_fpga_tls_resync_rx(struct mlx5_core_dev *mdev, u32 handle, u32 seq,
+ void *cmd;
+ int ret;
+
++ rcu_read_lock();
++ flow = idr_find(&mdev->fpga->tls->rx_idr, ntohl(handle));
++ rcu_read_unlock();
++
++ if (!flow) {
++ WARN_ONCE(1, "Received NULL pointer for handle\n");
++ return -EINVAL;
++ }
++
+ buf = kzalloc(size, GFP_ATOMIC);
+ if (!buf)
+ return -ENOMEM;
+
+ cmd = (buf + 1);
+
+- rcu_read_lock();
+- flow = idr_find(&mdev->fpga->tls->rx_idr, ntohl(handle));
+- rcu_read_unlock();
+ mlx5_fpga_tls_flow_to_cmd(flow, cmd);
+
+ MLX5_SET(tls_cmd, cmd, swid, ntohl(handle));
+@@ -238,6 +244,8 @@ int mlx5_fpga_tls_resync_rx(struct mlx5_core_dev *mdev, u32 handle, u32 seq,
+ buf->complete = mlx_tls_kfree_complete;
+
+ ret = mlx5_fpga_sbu_conn_sendmsg(mdev->fpga->tls->conn, buf);
++ if (ret < 0)
++ kfree(buf);
+
+ return ret;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index be81b319b0dc..694edd899322 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -163,26 +163,6 @@ static struct mlx5_profile profile[] = {
+ .size = 8,
+ .limit = 4
+ },
+- .mr_cache[16] = {
+- .size = 8,
+- .limit = 4
+- },
+- .mr_cache[17] = {
+- .size = 8,
+- .limit = 4
+- },
+- .mr_cache[18] = {
+- .size = 8,
+- .limit = 4
+- },
+- .mr_cache[19] = {
+- .size = 4,
+- .limit = 2
+- },
+- .mr_cache[20] = {
+- .size = 4,
+- .limit = 2
+- },
+ },
+ };
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/qp.c b/drivers/net/ethernet/mellanox/mlx5/core/qp.c
+index 370ca94b6775..c7c2920c05c4 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/qp.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/qp.c
+@@ -40,6 +40,9 @@
+ #include "mlx5_core.h"
+ #include "lib/eq.h"
+
++static int mlx5_core_drain_dct(struct mlx5_core_dev *dev,
++ struct mlx5_core_dct *dct);
++
+ static struct mlx5_core_rsc_common *
+ mlx5_get_rsc(struct mlx5_qp_table *table, u32 rsn)
+ {
+@@ -227,13 +230,42 @@ static void destroy_resource_common(struct mlx5_core_dev *dev,
+ wait_for_completion(&qp->common.free);
+ }
+
++static int _mlx5_core_destroy_dct(struct mlx5_core_dev *dev,
++ struct mlx5_core_dct *dct, bool need_cleanup)
++{
++ u32 out[MLX5_ST_SZ_DW(destroy_dct_out)] = {0};
++ u32 in[MLX5_ST_SZ_DW(destroy_dct_in)] = {0};
++ struct mlx5_core_qp *qp = &dct->mqp;
++ int err;
++
++ err = mlx5_core_drain_dct(dev, dct);
++ if (err) {
++ if (dev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR) {
++ goto destroy;
++ } else {
++ mlx5_core_warn(
++ dev, "failed drain DCT 0x%x with error 0x%x\n",
++ qp->qpn, err);
++ return err;
++ }
++ }
++ wait_for_completion(&dct->drained);
++destroy:
++ if (need_cleanup)
++ destroy_resource_common(dev, &dct->mqp);
++ MLX5_SET(destroy_dct_in, in, opcode, MLX5_CMD_OP_DESTROY_DCT);
++ MLX5_SET(destroy_dct_in, in, dctn, qp->qpn);
++ MLX5_SET(destroy_dct_in, in, uid, qp->uid);
++ err = mlx5_cmd_exec(dev, (void *)&in, sizeof(in),
++ (void *)&out, sizeof(out));
++ return err;
++}
++
+ int mlx5_core_create_dct(struct mlx5_core_dev *dev,
+ struct mlx5_core_dct *dct,
+ u32 *in, int inlen)
+ {
+ u32 out[MLX5_ST_SZ_DW(create_dct_out)] = {0};
+- u32 din[MLX5_ST_SZ_DW(destroy_dct_in)] = {0};
+- u32 dout[MLX5_ST_SZ_DW(destroy_dct_out)] = {0};
+ struct mlx5_core_qp *qp = &dct->mqp;
+ int err;
+
+@@ -254,11 +286,7 @@ int mlx5_core_create_dct(struct mlx5_core_dev *dev,
+
+ return 0;
+ err_cmd:
+- MLX5_SET(destroy_dct_in, din, opcode, MLX5_CMD_OP_DESTROY_DCT);
+- MLX5_SET(destroy_dct_in, din, dctn, qp->qpn);
+- MLX5_SET(destroy_dct_in, din, uid, qp->uid);
+- mlx5_cmd_exec(dev, (void *)&in, sizeof(din),
+- (void *)&out, sizeof(dout));
++ _mlx5_core_destroy_dct(dev, dct, false);
+ return err;
+ }
+ EXPORT_SYMBOL_GPL(mlx5_core_create_dct);
+@@ -323,29 +351,7 @@ static int mlx5_core_drain_dct(struct mlx5_core_dev *dev,
+ int mlx5_core_destroy_dct(struct mlx5_core_dev *dev,
+ struct mlx5_core_dct *dct)
+ {
+- u32 out[MLX5_ST_SZ_DW(destroy_dct_out)] = {0};
+- u32 in[MLX5_ST_SZ_DW(destroy_dct_in)] = {0};
+- struct mlx5_core_qp *qp = &dct->mqp;
+- int err;
+-
+- err = mlx5_core_drain_dct(dev, dct);
+- if (err) {
+- if (dev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR) {
+- goto destroy;
+- } else {
+- mlx5_core_warn(dev, "failed drain DCT 0x%x with error 0x%x\n", qp->qpn, err);
+- return err;
+- }
+- }
+- wait_for_completion(&dct->drained);
+-destroy:
+- destroy_resource_common(dev, &dct->mqp);
+- MLX5_SET(destroy_dct_in, in, opcode, MLX5_CMD_OP_DESTROY_DCT);
+- MLX5_SET(destroy_dct_in, in, dctn, qp->qpn);
+- MLX5_SET(destroy_dct_in, in, uid, qp->uid);
+- err = mlx5_cmd_exec(dev, (void *)&in, sizeof(in),
+- (void *)&out, sizeof(out));
+- return err;
++ return _mlx5_core_destroy_dct(dev, dct, true);
+ }
+ EXPORT_SYMBOL_GPL(mlx5_core_destroy_dct);
+
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+index b65e274b02e9..cbdee5164be7 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+@@ -2105,7 +2105,7 @@ static void mlxsw_sp_port_get_prio_strings(u8 **p, int prio)
+ int i;
+
+ for (i = 0; i < MLXSW_SP_PORT_HW_PRIO_STATS_LEN; i++) {
+- snprintf(*p, ETH_GSTRING_LEN, "%s_%d",
++ snprintf(*p, ETH_GSTRING_LEN, "%.29s_%.1d",
+ mlxsw_sp_port_hw_prio_stats[i].str, prio);
+ *p += ETH_GSTRING_LEN;
+ }
+@@ -2116,7 +2116,7 @@ static void mlxsw_sp_port_get_tc_strings(u8 **p, int tc)
+ int i;
+
+ for (i = 0; i < MLXSW_SP_PORT_HW_TC_STATS_LEN; i++) {
+- snprintf(*p, ETH_GSTRING_LEN, "%s_%d",
++ snprintf(*p, ETH_GSTRING_LEN, "%.29s_%.1d",
+ mlxsw_sp_port_hw_tc_stats[i].str, tc);
+ *p += ETH_GSTRING_LEN;
+ }
+diff --git a/drivers/net/ethernet/microchip/lan743x_main.c b/drivers/net/ethernet/microchip/lan743x_main.c
+index 4d1b4a24907f..13e6bf13ac4d 100644
+--- a/drivers/net/ethernet/microchip/lan743x_main.c
++++ b/drivers/net/ethernet/microchip/lan743x_main.c
+@@ -585,8 +585,7 @@ static int lan743x_intr_open(struct lan743x_adapter *adapter)
+
+ if (adapter->csr.flags &
+ LAN743X_CSR_FLAG_SUPPORTS_INTR_AUTO_SET_CLR) {
+- flags = LAN743X_VECTOR_FLAG_VECTOR_ENABLE_AUTO_CLEAR |
+- LAN743X_VECTOR_FLAG_VECTOR_ENABLE_AUTO_SET |
++ flags = LAN743X_VECTOR_FLAG_VECTOR_ENABLE_AUTO_SET |
+ LAN743X_VECTOR_FLAG_SOURCE_ENABLE_AUTO_SET |
+ LAN743X_VECTOR_FLAG_SOURCE_ENABLE_AUTO_CLEAR |
+ LAN743X_VECTOR_FLAG_SOURCE_STATUS_AUTO_CLEAR;
+@@ -599,12 +598,6 @@ static int lan743x_intr_open(struct lan743x_adapter *adapter)
+ /* map TX interrupt to vector */
+ int_vec_map1 |= INT_VEC_MAP1_TX_VEC_(index, vector);
+ lan743x_csr_write(adapter, INT_VEC_MAP1, int_vec_map1);
+- if (flags &
+- LAN743X_VECTOR_FLAG_VECTOR_ENABLE_AUTO_CLEAR) {
+- int_vec_en_auto_clr |= INT_VEC_EN_(vector);
+- lan743x_csr_write(adapter, INT_VEC_EN_AUTO_CLR,
+- int_vec_en_auto_clr);
+- }
+
+ /* Remove TX interrupt from shared mask */
+ intr->vector_list[0].int_mask &= ~int_bit;
+@@ -1902,7 +1895,17 @@ static int lan743x_rx_next_index(struct lan743x_rx *rx, int index)
+ return ((++index) % rx->ring_size);
+ }
+
+-static int lan743x_rx_allocate_ring_element(struct lan743x_rx *rx, int index)
++static struct sk_buff *lan743x_rx_allocate_skb(struct lan743x_rx *rx)
++{
++ int length = 0;
++
++ length = (LAN743X_MAX_FRAME_SIZE + ETH_HLEN + 4 + RX_HEAD_PADDING);
++ return __netdev_alloc_skb(rx->adapter->netdev,
++ length, GFP_ATOMIC | GFP_DMA);
++}
++
++static int lan743x_rx_init_ring_element(struct lan743x_rx *rx, int index,
++ struct sk_buff *skb)
+ {
+ struct lan743x_rx_buffer_info *buffer_info;
+ struct lan743x_rx_descriptor *descriptor;
+@@ -1911,9 +1914,7 @@ static int lan743x_rx_allocate_ring_element(struct lan743x_rx *rx, int index)
+ length = (LAN743X_MAX_FRAME_SIZE + ETH_HLEN + 4 + RX_HEAD_PADDING);
+ descriptor = &rx->ring_cpu_ptr[index];
+ buffer_info = &rx->buffer_info[index];
+- buffer_info->skb = __netdev_alloc_skb(rx->adapter->netdev,
+- length,
+- GFP_ATOMIC | GFP_DMA);
++ buffer_info->skb = skb;
+ if (!(buffer_info->skb))
+ return -ENOMEM;
+ buffer_info->dma_ptr = dma_map_single(&rx->adapter->pdev->dev,
+@@ -2060,8 +2061,19 @@ static int lan743x_rx_process_packet(struct lan743x_rx *rx)
+ /* packet is available */
+ if (first_index == last_index) {
+ /* single buffer packet */
++ struct sk_buff *new_skb = NULL;
+ int packet_length;
+
++ new_skb = lan743x_rx_allocate_skb(rx);
++ if (!new_skb) {
++ /* failed to allocate next skb.
++ * Memory is very low.
++ * Drop this packet and reuse buffer.
++ */
++ lan743x_rx_reuse_ring_element(rx, first_index);
++ goto process_extension;
++ }
++
+ buffer_info = &rx->buffer_info[first_index];
+ skb = buffer_info->skb;
+ descriptor = &rx->ring_cpu_ptr[first_index];
+@@ -2081,7 +2093,7 @@ static int lan743x_rx_process_packet(struct lan743x_rx *rx)
+ skb_put(skb, packet_length - 4);
+ skb->protocol = eth_type_trans(skb,
+ rx->adapter->netdev);
+- lan743x_rx_allocate_ring_element(rx, first_index);
++ lan743x_rx_init_ring_element(rx, first_index, new_skb);
+ } else {
+ int index = first_index;
+
+@@ -2094,26 +2106,23 @@ static int lan743x_rx_process_packet(struct lan743x_rx *rx)
+ if (first_index <= last_index) {
+ while ((index >= first_index) &&
+ (index <= last_index)) {
+- lan743x_rx_release_ring_element(rx,
+- index);
+- lan743x_rx_allocate_ring_element(rx,
+- index);
++ lan743x_rx_reuse_ring_element(rx,
++ index);
+ index = lan743x_rx_next_index(rx,
+ index);
+ }
+ } else {
+ while ((index >= first_index) ||
+ (index <= last_index)) {
+- lan743x_rx_release_ring_element(rx,
+- index);
+- lan743x_rx_allocate_ring_element(rx,
+- index);
++ lan743x_rx_reuse_ring_element(rx,
++ index);
+ index = lan743x_rx_next_index(rx,
+ index);
+ }
+ }
+ }
+
++process_extension:
+ if (extension_index >= 0) {
+ descriptor = &rx->ring_cpu_ptr[extension_index];
+ buffer_info = &rx->buffer_info[extension_index];
+@@ -2290,7 +2299,9 @@ static int lan743x_rx_ring_init(struct lan743x_rx *rx)
+
+ rx->last_head = 0;
+ for (index = 0; index < rx->ring_size; index++) {
+- ret = lan743x_rx_allocate_ring_element(rx, index);
++ struct sk_buff *new_skb = lan743x_rx_allocate_skb(rx);
++
++ ret = lan743x_rx_init_ring_element(rx, index, new_skb);
+ if (ret)
+ goto cleanup;
+ }
+diff --git a/drivers/net/ethernet/mscc/ocelot_board.c b/drivers/net/ethernet/mscc/ocelot_board.c
+index ca3ea2fbfcd0..80d87798c62b 100644
+--- a/drivers/net/ethernet/mscc/ocelot_board.c
++++ b/drivers/net/ethernet/mscc/ocelot_board.c
+@@ -267,6 +267,7 @@ static int mscc_ocelot_probe(struct platform_device *pdev)
+ struct phy *serdes;
+ void __iomem *regs;
+ char res_name[8];
++ int phy_mode;
+ u32 port;
+
+ if (of_property_read_u32(portnp, "reg", &port))
+@@ -292,11 +293,11 @@ static int mscc_ocelot_probe(struct platform_device *pdev)
+ if (err)
+ return err;
+
+- err = of_get_phy_mode(portnp);
+- if (err < 0)
++ phy_mode = of_get_phy_mode(portnp);
++ if (phy_mode < 0)
+ ocelot->ports[port]->phy_mode = PHY_INTERFACE_MODE_NA;
+ else
+- ocelot->ports[port]->phy_mode = err;
++ ocelot->ports[port]->phy_mode = phy_mode;
+
+ switch (ocelot->ports[port]->phy_mode) {
+ case PHY_INTERFACE_MODE_NA:
+@@ -304,6 +305,13 @@ static int mscc_ocelot_probe(struct platform_device *pdev)
+ case PHY_INTERFACE_MODE_SGMII:
+ break;
+ case PHY_INTERFACE_MODE_QSGMII:
++ /* Ensure clock signals and speed is set on all
++ * QSGMII links
++ */
++ ocelot_port_writel(ocelot->ports[port],
++ DEV_CLOCK_CFG_LINK_SPEED
++ (OCELOT_SPEED_1000),
++ DEV_CLOCK_CFG);
+ break;
+ default:
+ dev_err(ocelot->dev,
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_repr.c b/drivers/net/ethernet/netronome/nfp/nfp_net_repr.c
+index 69d7aebda09b..73db94e55fd0 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net_repr.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_repr.c
+@@ -196,7 +196,7 @@ static netdev_tx_t nfp_repr_xmit(struct sk_buff *skb, struct net_device *netdev)
+ ret = dev_queue_xmit(skb);
+ nfp_repr_inc_tx_stats(netdev, len, ret);
+
+- return ret;
++ return NETDEV_TX_OK;
+ }
+
+ static int nfp_repr_stop(struct net_device *netdev)
+@@ -384,7 +384,7 @@ int nfp_repr_init(struct nfp_app *app, struct net_device *netdev,
+ netdev->features &= ~(NETIF_F_TSO | NETIF_F_TSO6);
+ netdev->gso_max_segs = NFP_NET_LSO_MAX_SEGS;
+
+- netdev->priv_flags |= IFF_NO_QUEUE;
++ netdev->priv_flags |= IFF_NO_QUEUE | IFF_DISABLE_NETPOLL;
+ netdev->features |= NETIF_F_LLTX;
+
+ if (nfp_app_has_tc(app)) {
+diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
+index 6e36b88ca7c9..365cddbfc684 100644
+--- a/drivers/net/ethernet/realtek/r8169.c
++++ b/drivers/net/ethernet/realtek/r8169.c
+@@ -28,6 +28,7 @@
+ #include <linux/pm_runtime.h>
+ #include <linux/firmware.h>
+ #include <linux/prefetch.h>
++#include <linux/pci-aspm.h>
+ #include <linux/ipv6.h>
+ #include <net/ip6_checksum.h>
+
+@@ -5332,7 +5333,7 @@ static void rtl_hw_start_8168(struct rtl8169_private *tp)
+ tp->cp_cmd |= PktCntrDisable | INTT_1;
+ RTL_W16(tp, CPlusCmd, tp->cp_cmd);
+
+- RTL_W16(tp, IntrMitigate, 0x5151);
++ RTL_W16(tp, IntrMitigate, 0x5100);
+
+ /* Work around for RxFIFO overflow. */
+ if (tp->mac_version == RTL_GIGA_MAC_VER_11) {
+@@ -6435,7 +6436,7 @@ static irqreturn_t rtl8169_interrupt(int irq, void *dev_instance)
+ set_bit(RTL_FLAG_TASK_RESET_PENDING, tp->wk.flags);
+ }
+
+- if (status & RTL_EVENT_NAPI) {
++ if (status & (RTL_EVENT_NAPI | LinkChg)) {
+ rtl_irq_disable(tp);
+ napi_schedule_irqoff(&tp->napi);
+ }
+@@ -7224,6 +7225,11 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ return rc;
+ }
+
++ /* Disable ASPM completely as that cause random device stop working
++ * problems as well as full system hangs for some PCIe devices users.
++ */
++ pci_disable_link_state(pdev, PCIE_LINK_STATE_L0S | PCIE_LINK_STATE_L1);
++
+ /* enable device (incl. PCI PM wakeup and hotplug setup) */
+ rc = pcim_enable_device(pdev);
+ if (rc < 0) {
+diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
+index d28c8f9ca55b..8154b38c08f7 100644
+--- a/drivers/net/ethernet/renesas/ravb_main.c
++++ b/drivers/net/ethernet/renesas/ravb_main.c
+@@ -458,7 +458,7 @@ static int ravb_dmac_init(struct net_device *ndev)
+ RCR_EFFS | RCR_ENCF | RCR_ETS0 | RCR_ESF | 0x18000000, RCR);
+
+ /* Set FIFO size */
+- ravb_write(ndev, TGC_TQP_AVBMODE1 | 0x00222200, TGC);
++ ravb_write(ndev, TGC_TQP_AVBMODE1 | 0x00112200, TGC);
+
+ /* Timestamp enable */
+ ravb_write(ndev, TCCR_TFEN, TCCR);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/ring_mode.c b/drivers/net/ethernet/stmicro/stmmac/ring_mode.c
+index d8c5bc412219..c0c75c111abb 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/ring_mode.c
++++ b/drivers/net/ethernet/stmicro/stmmac/ring_mode.c
+@@ -111,10 +111,11 @@ static unsigned int is_jumbo_frm(int len, int enh_desc)
+
+ static void refill_desc3(void *priv_ptr, struct dma_desc *p)
+ {
+- struct stmmac_priv *priv = (struct stmmac_priv *)priv_ptr;
++ struct stmmac_rx_queue *rx_q = priv_ptr;
++ struct stmmac_priv *priv = rx_q->priv_data;
+
+ /* Fill DES3 in case of RING mode */
+- if (priv->dma_buf_sz >= BUF_SIZE_8KiB)
++ if (priv->dma_buf_sz == BUF_SIZE_16KiB)
+ p->des3 = cpu_to_le32(le32_to_cpu(p->des2) + BUF_SIZE_8KiB);
+ }
+
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 685d20472358..019ab99e65bb 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -474,7 +474,7 @@ static void stmmac_get_tx_hwtstamp(struct stmmac_priv *priv,
+ struct dma_desc *p, struct sk_buff *skb)
+ {
+ struct skb_shared_hwtstamps shhwtstamp;
+- u64 ns;
++ u64 ns = 0;
+
+ if (!priv->hwts_tx_en)
+ return;
+@@ -513,7 +513,7 @@ static void stmmac_get_rx_hwtstamp(struct stmmac_priv *priv, struct dma_desc *p,
+ {
+ struct skb_shared_hwtstamps *shhwtstamp = NULL;
+ struct dma_desc *desc = p;
+- u64 ns;
++ u64 ns = 0;
+
+ if (!priv->hwts_rx_en)
+ return;
+@@ -558,8 +558,8 @@ static int stmmac_hwtstamp_ioctl(struct net_device *dev, struct ifreq *ifr)
+ u32 snap_type_sel = 0;
+ u32 ts_master_en = 0;
+ u32 ts_event_en = 0;
++ u32 sec_inc = 0;
+ u32 value = 0;
+- u32 sec_inc;
+ bool xmac;
+
+ xmac = priv->plat->has_gmac4 || priv->plat->has_xgmac;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
+index 2293e21f789f..cc60b3fb0892 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
+@@ -105,7 +105,7 @@ static int stmmac_get_time(struct ptp_clock_info *ptp, struct timespec64 *ts)
+ struct stmmac_priv *priv =
+ container_of(ptp, struct stmmac_priv, ptp_clock_ops);
+ unsigned long flags;
+- u64 ns;
++ u64 ns = 0;
+
+ spin_lock_irqsave(&priv->ptp_lock, flags);
+ stmmac_get_systime(priv, priv->ptpaddr, &ns);
+diff --git a/drivers/net/hyperv/hyperv_net.h b/drivers/net/hyperv/hyperv_net.h
+index e859ae2e42d5..49f41b64077b 100644
+--- a/drivers/net/hyperv/hyperv_net.h
++++ b/drivers/net/hyperv/hyperv_net.h
+@@ -987,6 +987,7 @@ struct netvsc_device {
+
+ wait_queue_head_t wait_drain;
+ bool destroy;
++ bool tx_disable; /* if true, do not wake up queue again */
+
+ /* Receive buffer allocated by us but manages by NetVSP */
+ void *recv_buf;
+diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
+index 813d195bbd57..e0dce373cdd9 100644
+--- a/drivers/net/hyperv/netvsc.c
++++ b/drivers/net/hyperv/netvsc.c
+@@ -110,6 +110,7 @@ static struct netvsc_device *alloc_net_device(void)
+
+ init_waitqueue_head(&net_device->wait_drain);
+ net_device->destroy = false;
++ net_device->tx_disable = false;
+
+ net_device->max_pkt = RNDIS_MAX_PKT_DEFAULT;
+ net_device->pkt_align = RNDIS_PKT_ALIGN_DEFAULT;
+@@ -719,7 +720,7 @@ static void netvsc_send_tx_complete(struct net_device *ndev,
+ } else {
+ struct netdev_queue *txq = netdev_get_tx_queue(ndev, q_idx);
+
+- if (netif_tx_queue_stopped(txq) &&
++ if (netif_tx_queue_stopped(txq) && !net_device->tx_disable &&
+ (hv_get_avail_to_write_percent(&channel->outbound) >
+ RING_AVAIL_PERCENT_HIWATER || queue_sends < 1)) {
+ netif_tx_wake_queue(txq);
+@@ -874,7 +875,8 @@ static inline int netvsc_send_pkt(
+ } else if (ret == -EAGAIN) {
+ netif_tx_stop_queue(txq);
+ ndev_ctx->eth_stats.stop_queue++;
+- if (atomic_read(&nvchan->queue_sends) < 1) {
++ if (atomic_read(&nvchan->queue_sends) < 1 &&
++ !net_device->tx_disable) {
+ netif_tx_wake_queue(txq);
+ ndev_ctx->eth_stats.wake_queue++;
+ ret = -ENOSPC;
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index cf4897043e83..b20fb0fb595b 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -109,6 +109,15 @@ static void netvsc_set_rx_mode(struct net_device *net)
+ rcu_read_unlock();
+ }
+
++static void netvsc_tx_enable(struct netvsc_device *nvscdev,
++ struct net_device *ndev)
++{
++ nvscdev->tx_disable = false;
++ virt_wmb(); /* ensure queue wake up mechanism is on */
++
++ netif_tx_wake_all_queues(ndev);
++}
++
+ static int netvsc_open(struct net_device *net)
+ {
+ struct net_device_context *ndev_ctx = netdev_priv(net);
+@@ -129,7 +138,7 @@ static int netvsc_open(struct net_device *net)
+ rdev = nvdev->extension;
+ if (!rdev->link_state) {
+ netif_carrier_on(net);
+- netif_tx_wake_all_queues(net);
++ netvsc_tx_enable(nvdev, net);
+ }
+
+ if (vf_netdev) {
+@@ -184,6 +193,17 @@ static int netvsc_wait_until_empty(struct netvsc_device *nvdev)
+ }
+ }
+
++static void netvsc_tx_disable(struct netvsc_device *nvscdev,
++ struct net_device *ndev)
++{
++ if (nvscdev) {
++ nvscdev->tx_disable = true;
++ virt_wmb(); /* ensure txq will not wake up after stop */
++ }
++
++ netif_tx_disable(ndev);
++}
++
+ static int netvsc_close(struct net_device *net)
+ {
+ struct net_device_context *net_device_ctx = netdev_priv(net);
+@@ -192,7 +212,7 @@ static int netvsc_close(struct net_device *net)
+ struct netvsc_device *nvdev = rtnl_dereference(net_device_ctx->nvdev);
+ int ret;
+
+- netif_tx_disable(net);
++ netvsc_tx_disable(nvdev, net);
+
+ /* No need to close rndis filter if it is removed already */
+ if (!nvdev)
+@@ -920,7 +940,7 @@ static int netvsc_detach(struct net_device *ndev,
+
+ /* If device was up (receiving) then shutdown */
+ if (netif_running(ndev)) {
+- netif_tx_disable(ndev);
++ netvsc_tx_disable(nvdev, ndev);
+
+ ret = rndis_filter_close(nvdev);
+ if (ret) {
+@@ -1908,7 +1928,7 @@ static void netvsc_link_change(struct work_struct *w)
+ if (rdev->link_state) {
+ rdev->link_state = false;
+ netif_carrier_on(net);
+- netif_tx_wake_all_queues(net);
++ netvsc_tx_enable(net_device, net);
+ } else {
+ notify = true;
+ }
+@@ -1918,7 +1938,7 @@ static void netvsc_link_change(struct work_struct *w)
+ if (!rdev->link_state) {
+ rdev->link_state = true;
+ netif_carrier_off(net);
+- netif_tx_stop_all_queues(net);
++ netvsc_tx_disable(net_device, net);
+ }
+ kfree(event);
+ break;
+@@ -1927,7 +1947,7 @@ static void netvsc_link_change(struct work_struct *w)
+ if (!rdev->link_state) {
+ rdev->link_state = true;
+ netif_carrier_off(net);
+- netif_tx_stop_all_queues(net);
++ netvsc_tx_disable(net_device, net);
+ event->event = RNDIS_STATUS_MEDIA_CONNECT;
+ spin_lock_irqsave(&ndev_ctx->lock, flags);
+ list_add(&event->list, &ndev_ctx->reconfig_events);
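
The netvsc hunks above pair the new tx_disable flag with virt_wmb() so that a
queue stopped for close or detach is never woken again by a racing TX
completion. A minimal single-file sketch of that ordering, using C11 atomics
as a stand-in for the kernel primitives (all names are illustrative, not the
driver's API):

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct nv_model {
        atomic_bool tx_disable;
        bool queue_awake;
    };

    /* Stop path: raise the flag, then fence (the virt_wmb() role), then
     * stop the queue, so any completion that later observes the stopped
     * queue also observes tx_disable and refuses to wake it. */
    static void model_tx_disable(struct nv_model *d)
    {
        atomic_store_explicit(&d->tx_disable, true, memory_order_release);
        d->queue_awake = false;          /* netif_tx_disable() analogue */
    }

    /* Completion path: only wake a stopped queue when transmission has
     * not been administratively disabled. */
    static void model_tx_complete(struct nv_model *d)
    {
        if (!d->queue_awake &&
            !atomic_load_explicit(&d->tx_disable, memory_order_acquire))
            d->queue_awake = true;       /* netif_tx_wake_queue() analogue */
    }

    int main(void)
    {
        struct nv_model d = { .queue_awake = true };

        model_tx_disable(&d);
        model_tx_complete(&d);           /* queue stays stopped */
        printf("queue awake after disable: %d\n", d.queue_awake);
        return 0;
    }
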
+diff --git a/drivers/net/phy/meson-gxl.c b/drivers/net/phy/meson-gxl.c
+index 3ddaf9595697..68af4c75ffb3 100644
+--- a/drivers/net/phy/meson-gxl.c
++++ b/drivers/net/phy/meson-gxl.c
+@@ -211,6 +211,7 @@ static int meson_gxl_ack_interrupt(struct phy_device *phydev)
+ static int meson_gxl_config_intr(struct phy_device *phydev)
+ {
+ u16 val;
++ int ret;
+
+ if (phydev->interrupts == PHY_INTERRUPT_ENABLED) {
+ val = INTSRC_ANEG_PR
+@@ -223,6 +224,11 @@ static int meson_gxl_config_intr(struct phy_device *phydev)
+ val = 0;
+ }
+
++ /* Ack any pending IRQ */
++ ret = meson_gxl_ack_interrupt(phydev);
++ if (ret)
++ return ret;
++
+ return phy_write(phydev, INTSRC_MASK, val);
+ }
+
+diff --git a/drivers/net/phy/phy-c45.c b/drivers/net/phy/phy-c45.c
+index 03af927fa5ad..e39bf0428dd9 100644
+--- a/drivers/net/phy/phy-c45.c
++++ b/drivers/net/phy/phy-c45.c
+@@ -147,9 +147,15 @@ int genphy_c45_read_link(struct phy_device *phydev, u32 mmd_mask)
+ mmd_mask &= ~BIT(devad);
+
+ /* The link state is latched low so that momentary link
+- * drops can be detected. Do not double-read the status
+- * register if the link is down.
++ * drops can be detected. Do not double-read the status
++ * in polling mode, so that such short link drops are not missed.
+ */
++ if (!phy_polling_mode(phydev)) {
++ val = phy_read_mmd(phydev, devad, MDIO_STAT1);
++ if (val < 0)
++ return val;
++ }
++
+ val = phy_read_mmd(phydev, devad, MDIO_STAT1);
+ if (val < 0)
+ return val;
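
This hunk, and the genphy_update_link() hunk in phy_device.c just below,
apply the same rule: the STAT1/BMSR link bit is latched low, so the extra
"dummy" read is only safe when an interrupt already reported the event; in
polling mode it would clear the latch and hide a short link drop. A runnable
model of the latched-low semantics (names are illustrative):

    #include <stdbool.h>
    #include <stdio.h>

    /* Model of a latched-low status bit: after a link drop it reads 0
     * until read once, then tracks the live state again. */
    struct phy_model {
        bool live_link;
        bool latched_low;
    };

    static bool read_status(struct phy_model *phy)
    {
        bool up = phy->live_link && !phy->latched_low;

        phy->latched_low = false;        /* reading clears the latch */
        return up;
    }

    static bool update_link(struct phy_model *phy, bool polling)
    {
        /* Interrupt mode may discard the stale latched value: the IRQ
         * already reported the event. Polling must keep it, or a drop
         * between two polls is lost. */
        if (!polling)
            (void)read_status(phy);
        return read_status(phy);
    }

    int main(void)
    {
        struct phy_model phy = { .live_link = true, .latched_low = true };

        printf("polling sees drop: link=%d\n", update_link(&phy, true));  /* 0 */
        phy.latched_low = true;
        printf("irq mode: link=%d\n", update_link(&phy, false));          /* 1 */
        return 0;
    }
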
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index 46c86725a693..adf79614c2db 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -1683,10 +1683,15 @@ int genphy_update_link(struct phy_device *phydev)
+ {
+ int status;
+
+- /* Do a fake read */
+- status = phy_read(phydev, MII_BMSR);
+- if (status < 0)
+- return status;
++ /* The link state is latched low so that momentary link
++ * drops can be detected. Do not double-read the status
++ * in polling mode, so that such short link drops are not missed.
++ */
++ if (!phy_polling_mode(phydev)) {
++ status = phy_read(phydev, MII_BMSR);
++ if (status < 0)
++ return status;
++ }
+
+ /* Read link and autonegotiation status */
+ status = phy_read(phydev, MII_BMSR);
+@@ -1827,7 +1832,7 @@ int genphy_soft_reset(struct phy_device *phydev)
+ {
+ int ret;
+
+- ret = phy_write(phydev, MII_BMCR, BMCR_RESET);
++ ret = phy_set_bits(phydev, MII_BMCR, BMCR_RESET);
+ if (ret < 0)
+ return ret;
+
+diff --git a/drivers/net/ppp/pptp.c b/drivers/net/ppp/pptp.c
+index 8f09edd811e9..50c60550f295 100644
+--- a/drivers/net/ppp/pptp.c
++++ b/drivers/net/ppp/pptp.c
+@@ -532,6 +532,7 @@ static void pptp_sock_destruct(struct sock *sk)
+ pppox_unbind_sock(sk);
+ }
+ skb_queue_purge(&sk->sk_receive_queue);
++ dst_release(rcu_dereference_protected(sk->sk_dst_cache, 1));
+ }
+
+ static int pptp_create(struct net *net, struct socket *sock, int kern)
+diff --git a/drivers/net/team/team_mode_loadbalance.c b/drivers/net/team/team_mode_loadbalance.c
+index a5ef97010eb3..5541e1c19936 100644
+--- a/drivers/net/team/team_mode_loadbalance.c
++++ b/drivers/net/team/team_mode_loadbalance.c
+@@ -325,6 +325,20 @@ static int lb_bpf_func_set(struct team *team, struct team_gsetter_ctx *ctx)
+ return 0;
+ }
+
++static void lb_bpf_func_free(struct team *team)
++{
++ struct lb_priv *lb_priv = get_lb_priv(team);
++ struct bpf_prog *fp;
++
++ if (!lb_priv->ex->orig_fprog)
++ return;
++
++ __fprog_destroy(lb_priv->ex->orig_fprog);
++ fp = rcu_dereference_protected(lb_priv->fp,
++ lockdep_is_held(&team->lock));
++ bpf_prog_destroy(fp);
++}
++
+ static int lb_tx_method_get(struct team *team, struct team_gsetter_ctx *ctx)
+ {
+ struct lb_priv *lb_priv = get_lb_priv(team);
+@@ -639,6 +653,7 @@ static void lb_exit(struct team *team)
+
+ team_options_unregister(team, lb_options,
+ ARRAY_SIZE(lb_options));
++ lb_bpf_func_free(team);
+ cancel_delayed_work_sync(&lb_priv->ex->stats.refresh_dw);
+ free_percpu(lb_priv->pcpu_stats);
+ kfree(lb_priv->ex);
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index 53f4f37b0ffd..448d5439ff6a 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -1763,9 +1763,6 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
+ int skb_xdp = 1;
+ bool frags = tun_napi_frags_enabled(tfile);
+
+- if (!(tun->dev->flags & IFF_UP))
+- return -EIO;
+-
+ if (!(tun->flags & IFF_NO_PI)) {
+ if (len < sizeof(pi))
+ return -EINVAL;
+@@ -1867,6 +1864,8 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
+ err = skb_copy_datagram_from_iter(skb, 0, from, len);
+
+ if (err) {
++ err = -EFAULT;
++drop:
+ this_cpu_inc(tun->pcpu_stats->rx_dropped);
+ kfree_skb(skb);
+ if (frags) {
+@@ -1874,7 +1873,7 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
+ mutex_unlock(&tfile->napi_mutex);
+ }
+
+- return -EFAULT;
++ return err;
+ }
+ }
+
+@@ -1958,6 +1957,13 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
+ !tfile->detached)
+ rxhash = __skb_get_hash_symmetric(skb);
+
++ rcu_read_lock();
++ if (unlikely(!(tun->dev->flags & IFF_UP))) {
++ err = -EIO;
++ rcu_read_unlock();
++ goto drop;
++ }
++
+ if (frags) {
+ /* Exercise flow dissector code path. */
+ u32 headlen = eth_get_headlen(skb->data, skb_headlen(skb));
+@@ -1965,6 +1971,7 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
+ if (unlikely(headlen > skb_headlen(skb))) {
+ this_cpu_inc(tun->pcpu_stats->rx_dropped);
+ napi_free_frags(&tfile->napi);
++ rcu_read_unlock();
+ mutex_unlock(&tfile->napi_mutex);
+ WARN_ON(1);
+ return -ENOMEM;
+@@ -1992,6 +1999,7 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
+ } else {
+ netif_rx_ni(skb);
+ }
++ rcu_read_unlock();
+
+ stats = get_cpu_ptr(tun->pcpu_stats);
+ u64_stats_update_begin(&stats->syncp);
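
The tun_get_user() hunks move the IFF_UP test from the top of the function to
just before delivery, inside an RCU read-side critical section, closing the
window where the device could be downed between check and use. A heavily
simplified sketch of the check-inside-the-critical-section shape (the RCU
calls are shown as comments; this is a model, not the driver code):

    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct dev_model {
        bool up;    /* IFF_UP analogue; cleared by the teardown path */
    };

    /* The up-check and the delivery must sit in one read-side critical
     * section: checking early leaves a window where the device goes
     * down (and its rings are torn down) before netif_rx_ni(). */
    static int deliver(struct dev_model *dev)
    {
        int err = 0;

        /* rcu_read_lock() in the real code */
        if (!dev->up)
            err = -EIO;      /* the new "goto drop" path */
        /* else: hand the skb to the stack here */
        /* rcu_read_unlock() in the real code */
        return err;
    }

    int main(void)
    {
        struct dev_model dev = { .up = false };

        printf("deliver while down: %d\n", deliver(&dev)); /* -EIO */
        return 0;
    }
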
+diff --git a/drivers/net/usb/aqc111.c b/drivers/net/usb/aqc111.c
+index 820a2fe7d027..aff995be2a31 100644
+--- a/drivers/net/usb/aqc111.c
++++ b/drivers/net/usb/aqc111.c
+@@ -1301,6 +1301,20 @@ static const struct driver_info trendnet_info = {
+ .tx_fixup = aqc111_tx_fixup,
+ };
+
++static const struct driver_info qnap_info = {
++ .description = "QNAP QNA-UC5G1T USB to 5GbE Adapter",
++ .bind = aqc111_bind,
++ .unbind = aqc111_unbind,
++ .status = aqc111_status,
++ .link_reset = aqc111_link_reset,
++ .reset = aqc111_reset,
++ .stop = aqc111_stop,
++ .flags = FLAG_ETHER | FLAG_FRAMING_AX |
++ FLAG_AVOID_UNLINK_URBS | FLAG_MULTI_PACKET,
++ .rx_fixup = aqc111_rx_fixup,
++ .tx_fixup = aqc111_tx_fixup,
++};
++
+ static int aqc111_suspend(struct usb_interface *intf, pm_message_t message)
+ {
+ struct usbnet *dev = usb_get_intfdata(intf);
+@@ -1455,6 +1469,7 @@ static const struct usb_device_id products[] = {
+ {AQC111_USB_ETH_DEV(0x0b95, 0x2790, asix111_info)},
+ {AQC111_USB_ETH_DEV(0x0b95, 0x2791, asix112_info)},
+ {AQC111_USB_ETH_DEV(0x20f4, 0xe05a, trendnet_info)},
++ {AQC111_USB_ETH_DEV(0x1c04, 0x0015, qnap_info)},
+ { },/* END */
+ };
+ MODULE_DEVICE_TABLE(usb, products);
+diff --git a/drivers/net/usb/cdc_ether.c b/drivers/net/usb/cdc_ether.c
+index 5512a1038721..3e9b2c319e45 100644
+--- a/drivers/net/usb/cdc_ether.c
++++ b/drivers/net/usb/cdc_ether.c
+@@ -851,6 +851,14 @@ static const struct usb_device_id products[] = {
+ .driver_info = 0,
+ },
+
++/* QNAP QNA-UC5G1T USB to 5GbE Adapter (based on AQC111U) */
++{
++ USB_DEVICE_AND_INTERFACE_INFO(0x1c04, 0x0015, USB_CLASS_COMM,
++ USB_CDC_SUBCLASS_ETHERNET,
++ USB_CDC_PROTO_NONE),
++ .driver_info = 0,
++},
++
+ /* WHITELIST!!!
+ *
+ * CDC Ether uses two interfaces, not necessarily consecutive.
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 18af2f8eee96..9195f3476b1d 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -976,6 +976,13 @@ static const struct usb_device_id products[] = {
+ 0xff),
+ .driver_info = (unsigned long)&qmi_wwan_info_quirk_dtr,
+ },
++ { /* Quectel EG12/EM12 */
++ USB_DEVICE_AND_INTERFACE_INFO(0x2c7c, 0x0512,
++ USB_CLASS_VENDOR_SPEC,
++ USB_SUBCLASS_VENDOR_SPEC,
++ 0xff),
++ .driver_info = (unsigned long)&qmi_wwan_info_quirk_dtr,
++ },
+
+ /* 3. Combined interface devices matching on interface number */
+ {QMI_FIXED_INTF(0x0408, 0xea42, 4)}, /* Yota / Megafon M100-1 */
+@@ -1196,6 +1203,7 @@ static const struct usb_device_id products[] = {
+ {QMI_FIXED_INTF(0x19d2, 0x2002, 4)}, /* ZTE (Vodafone) K3765-Z */
+ {QMI_FIXED_INTF(0x2001, 0x7e19, 4)}, /* D-Link DWM-221 B1 */
+ {QMI_FIXED_INTF(0x2001, 0x7e35, 4)}, /* D-Link DWM-222 */
++ {QMI_FIXED_INTF(0x2020, 0x2031, 4)}, /* Olicard 600 */
+ {QMI_FIXED_INTF(0x2020, 0x2033, 4)}, /* BroadMobi BM806U */
+ {QMI_FIXED_INTF(0x0f3d, 0x68a2, 8)}, /* Sierra Wireless MC7700 */
+ {QMI_FIXED_INTF(0x114f, 0x68a2, 8)}, /* Sierra Wireless MC7750 */
+@@ -1343,17 +1351,20 @@ static bool quectel_ec20_detected(struct usb_interface *intf)
+ return false;
+ }
+
+-static bool quectel_ep06_diag_detected(struct usb_interface *intf)
++static bool quectel_diag_detected(struct usb_interface *intf)
+ {
+ struct usb_device *dev = interface_to_usbdev(intf);
+ struct usb_interface_descriptor intf_desc = intf->cur_altsetting->desc;
++ u16 id_vendor = le16_to_cpu(dev->descriptor.idVendor);
++ u16 id_product = le16_to_cpu(dev->descriptor.idProduct);
+
+- if (le16_to_cpu(dev->descriptor.idVendor) == 0x2c7c &&
+- le16_to_cpu(dev->descriptor.idProduct) == 0x0306 &&
+- intf_desc.bNumEndpoints == 2)
+- return true;
++ if (id_vendor != 0x2c7c || intf_desc.bNumEndpoints != 2)
++ return false;
+
+- return false;
++ if (id_product == 0x0306 || id_product == 0x0512)
++ return true;
++ else
++ return false;
+ }
+
+ static int qmi_wwan_probe(struct usb_interface *intf,
+@@ -1390,13 +1401,13 @@ static int qmi_wwan_probe(struct usb_interface *intf,
+ return -ENODEV;
+ }
+
+- /* Quectel EP06/EM06/EG06 supports dynamic interface configuration, so
++ /* Several Quectel modems support dynamic interface configuration, so
+ * we need to match on class/subclass/protocol. These values are
+ * identical for the diagnostic- and QMI-interface, but bNumEndpoints is
+ * different. Ignore the current interface if the number of endpoints
+ * matches the number for the diag interface (two).
+ */
+- if (quectel_ep06_diag_detected(intf))
++ if (quectel_diag_detected(intf))
+ return -ENODEV;
+
+ return usbnet_probe(intf, id);
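
The rewrite above generalizes the EP06-only diagnostic-interface test to a
vendor check plus a product-ID list. A standalone sketch of the resulting
match logic (a model of the helper, not the driver function itself):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* One vendor ID plus a table of product IDs replaces the single
     * hard-coded EP06 check; the two-endpoint signature stays the same. */
    static bool quectel_diag_match(uint16_t vid, uint16_t pid, int num_eps)
    {
        static const uint16_t diag_products[] = {
            0x0306,  /* EP06/EM06/EG06 */
            0x0512,  /* EG12/EM12 */
        };
        size_t i;

        if (vid != 0x2c7c || num_eps != 2)
            return false;
        for (i = 0; i < sizeof(diag_products) / sizeof(diag_products[0]); i++)
            if (pid == diag_products[i])
                return true;
        return false;
    }

    int main(void)
    {
        printf("%d\n", quectel_diag_match(0x2c7c, 0x0512, 2)); /* 1 */
        return 0;
    }
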
+diff --git a/drivers/net/veth.c b/drivers/net/veth.c
+index f412ea1cef18..b203d1867959 100644
+--- a/drivers/net/veth.c
++++ b/drivers/net/veth.c
+@@ -115,7 +115,8 @@ static void veth_get_strings(struct net_device *dev, u32 stringset, u8 *buf)
+ p += sizeof(ethtool_stats_keys);
+ for (i = 0; i < dev->real_num_rx_queues; i++) {
+ for (j = 0; j < VETH_RQ_STATS_LEN; j++) {
+- snprintf(p, ETH_GSTRING_LEN, "rx_queue_%u_%s",
++ snprintf(p, ETH_GSTRING_LEN,
++ "rx_queue_%u_%.11s",
+ i, veth_rq_stats_desc[j].desc);
+ p += ETH_GSTRING_LEN;
+ }
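
The veth change adds a precision to the %s conversion so the composed ethtool
string provably fits the fixed ETH_GSTRING_LEN slot. A runnable illustration
of the same snprintf() pattern:

    #include <stdio.h>

    #define ETH_GSTRING_LEN 32

    int main(void)
    {
        char buf[ETH_GSTRING_LEN];
        const char *desc = "a_potentially_long_stat_name";

        /* "%.11s" caps the string argument at 11 bytes, so the whole
         * formatted name is guaranteed to fit the 32-byte slot even for
         * large queue indices; a plain "%s" could truncate silently. */
        snprintf(buf, sizeof(buf), "rx_queue_%u_%.11s", 4096u, desc);
        puts(buf);
        return 0;
    }
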
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index 7c1430ed0244..cd15c32b2e43 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -1273,9 +1273,14 @@ static void vrf_setup(struct net_device *dev)
+
+ /* default to no qdisc; user can add if desired */
+ dev->priv_flags |= IFF_NO_QUEUE;
++ dev->priv_flags |= IFF_NO_RX_HANDLER;
+
+- dev->min_mtu = 0;
+- dev->max_mtu = 0;
++ /* VRF devices do not care about MTU, but if the MTU is set
++ * too low then the ipv4 and ipv6 protocols are disabled
++ * which breaks networking.
++ */
++ dev->min_mtu = IPV6_MIN_MTU;
++ dev->max_mtu = ETH_MAX_MTU;
+ }
+
+ static int vrf_validate(struct nlattr *tb[], struct nlattr *data[],
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+index 2aae11feff0c..5006daed2e96 100644
+--- a/drivers/net/vxlan.c
++++ b/drivers/net/vxlan.c
+@@ -1657,6 +1657,14 @@ static int vxlan_rcv(struct sock *sk, struct sk_buff *skb)
+ goto drop;
+ }
+
++ rcu_read_lock();
++
++ if (unlikely(!(vxlan->dev->flags & IFF_UP))) {
++ rcu_read_unlock();
++ atomic_long_inc(&vxlan->dev->rx_dropped);
++ goto drop;
++ }
++
+ stats = this_cpu_ptr(vxlan->dev->tstats);
+ u64_stats_update_begin(&stats->syncp);
+ stats->rx_packets++;
+@@ -1664,6 +1672,9 @@ static int vxlan_rcv(struct sock *sk, struct sk_buff *skb)
+ u64_stats_update_end(&stats->syncp);
+
+ gro_cells_receive(&vxlan->gro_cells, skb);
++
++ rcu_read_unlock();
++
+ return 0;
+
+ drop:
+@@ -2693,6 +2704,8 @@ static void vxlan_uninit(struct net_device *dev)
+ {
+ struct vxlan_dev *vxlan = netdev_priv(dev);
+
++ gro_cells_destroy(&vxlan->gro_cells);
++
+ vxlan_fdb_delete_default(vxlan, vxlan->cfg.vni);
+
+ free_percpu(dev->tstats);
+@@ -3794,7 +3807,6 @@ static void vxlan_dellink(struct net_device *dev, struct list_head *head)
+
+ vxlan_flush(vxlan, true);
+
+- gro_cells_destroy(&vxlan->gro_cells);
+ list_del(&vxlan->next);
+ unregister_netdevice_queue(dev, head);
+ }
+@@ -4172,10 +4184,8 @@ static void vxlan_destroy_tunnels(struct net *net, struct list_head *head)
+ /* If vxlan->dev is in the same netns, it has already been added
+ * to the list by the previous loop.
+ */
+- if (!net_eq(dev_net(vxlan->dev), net)) {
+- gro_cells_destroy(&vxlan->gro_cells);
++ if (!net_eq(dev_net(vxlan->dev), net))
+ unregister_netdevice_queue(vxlan->dev, head);
+- }
+ }
+
+ for (h = 0; h < PORT_HASH_SIZE; ++h)
+diff --git a/drivers/net/wireless/ath/ath10k/ce.c b/drivers/net/wireless/ath/ath10k/ce.c
+index 2a5668b4f6bc..1a1ea4bbf8a0 100644
+--- a/drivers/net/wireless/ath/ath10k/ce.c
++++ b/drivers/net/wireless/ath/ath10k/ce.c
+@@ -500,14 +500,8 @@ static int _ath10k_ce_send_nolock(struct ath10k_ce_pipe *ce_state,
+ write_index = CE_RING_IDX_INCR(nentries_mask, write_index);
+
+ /* WORKAROUND */
+- if (!(flags & CE_SEND_FLAG_GATHER)) {
+- if (ar->hw_params.shadow_reg_support)
+- ath10k_ce_shadow_src_ring_write_index_set(ar, ce_state,
+- write_index);
+- else
+- ath10k_ce_src_ring_write_index_set(ar, ctrl_addr,
+- write_index);
+- }
++ if (!(flags & CE_SEND_FLAG_GATHER))
++ ath10k_ce_src_ring_write_index_set(ar, ctrl_addr, write_index);
+
+ src_ring->write_index = write_index;
+ exit:
+@@ -581,8 +575,14 @@ static int _ath10k_ce_send_nolock_64(struct ath10k_ce_pipe *ce_state,
+ /* Update Source Ring Write Index */
+ write_index = CE_RING_IDX_INCR(nentries_mask, write_index);
+
+- if (!(flags & CE_SEND_FLAG_GATHER))
+- ath10k_ce_src_ring_write_index_set(ar, ctrl_addr, write_index);
++ if (!(flags & CE_SEND_FLAG_GATHER)) {
++ if (ar->hw_params.shadow_reg_support)
++ ath10k_ce_shadow_src_ring_write_index_set(ar, ce_state,
++ write_index);
++ else
++ ath10k_ce_src_ring_write_index_set(ar, ctrl_addr,
++ write_index);
++ }
+
+ src_ring->write_index = write_index;
+ exit:
+@@ -1404,12 +1404,12 @@ static int ath10k_ce_alloc_shadow_base(struct ath10k *ar,
+ u32 nentries)
+ {
+ src_ring->shadow_base_unaligned = kcalloc(nentries,
+- sizeof(struct ce_desc),
++ sizeof(struct ce_desc_64),
+ GFP_KERNEL);
+ if (!src_ring->shadow_base_unaligned)
+ return -ENOMEM;
+
+- src_ring->shadow_base = (struct ce_desc *)
++ src_ring->shadow_base = (struct ce_desc_64 *)
+ PTR_ALIGN(src_ring->shadow_base_unaligned,
+ CE_DESC_RING_ALIGN);
+ return 0;
+@@ -1461,7 +1461,7 @@ ath10k_ce_alloc_src_ring(struct ath10k *ar, unsigned int ce_id,
+ ret = ath10k_ce_alloc_shadow_base(ar, src_ring, nentries);
+ if (ret) {
+ dma_free_coherent(ar->dev,
+- (nentries * sizeof(struct ce_desc) +
++ (nentries * sizeof(struct ce_desc_64) +
+ CE_DESC_RING_ALIGN),
+ src_ring->base_addr_owner_space_unaligned,
+ base_addr);
+diff --git a/drivers/net/wireless/ath/ath10k/ce.h b/drivers/net/wireless/ath/ath10k/ce.h
+index ead9987c3259..463e2fc8b501 100644
+--- a/drivers/net/wireless/ath/ath10k/ce.h
++++ b/drivers/net/wireless/ath/ath10k/ce.h
+@@ -118,7 +118,7 @@ struct ath10k_ce_ring {
+ u32 base_addr_ce_space;
+
+ char *shadow_base_unaligned;
+- struct ce_desc *shadow_base;
++ struct ce_desc_64 *shadow_base;
+
+ /* keep last */
+ void *per_transfer_context[0];
+diff --git a/drivers/net/wireless/ath/ath10k/debugfs_sta.c b/drivers/net/wireless/ath/ath10k/debugfs_sta.c
+index 4778a455d81a..068f1a7e07d3 100644
+--- a/drivers/net/wireless/ath/ath10k/debugfs_sta.c
++++ b/drivers/net/wireless/ath/ath10k/debugfs_sta.c
+@@ -696,11 +696,12 @@ static ssize_t ath10k_dbg_sta_dump_tx_stats(struct file *file,
+ " %llu ", stats->ht[j][i]);
+ len += scnprintf(buf + len, size - len, "\n");
+ len += scnprintf(buf + len, size - len,
+- " BW %s (20,40,80,160 MHz)\n", str[j]);
++ " BW %s (20,5,10,40,80,160 MHz)\n", str[j]);
+ len += scnprintf(buf + len, size - len,
+- " %llu %llu %llu %llu\n",
++ " %llu %llu %llu %llu %llu %llu\n",
+ stats->bw[j][0], stats->bw[j][1],
+- stats->bw[j][2], stats->bw[j][3]);
++ stats->bw[j][2], stats->bw[j][3],
++ stats->bw[j][4], stats->bw[j][5]);
+ len += scnprintf(buf + len, size - len,
+ " NSS %s (1x1,2x2,3x3,4x4)\n", str[j]);
+ len += scnprintf(buf + len, size - len,
+diff --git a/drivers/net/wireless/ath/ath10k/htt_rx.c b/drivers/net/wireless/ath/ath10k/htt_rx.c
+index f42bac204ef8..ecf34ce7acf0 100644
+--- a/drivers/net/wireless/ath/ath10k/htt_rx.c
++++ b/drivers/net/wireless/ath/ath10k/htt_rx.c
+@@ -2130,9 +2130,15 @@ static bool ath10k_htt_rx_proc_rx_ind_hl(struct ath10k_htt *htt,
+ hdr = (struct ieee80211_hdr *)skb->data;
+ rx_status = IEEE80211_SKB_RXCB(skb);
+ rx_status->chains |= BIT(0);
+- rx_status->signal = ATH10K_DEFAULT_NOISE_FLOOR +
+- rx->ppdu.combined_rssi;
+- rx_status->flag &= ~RX_FLAG_NO_SIGNAL_VAL;
++ if (rx->ppdu.combined_rssi == 0) {
++ /* SDIO firmware does not provide signal */
++ rx_status->signal = 0;
++ rx_status->flag |= RX_FLAG_NO_SIGNAL_VAL;
++ } else {
++ rx_status->signal = ATH10K_DEFAULT_NOISE_FLOOR +
++ rx->ppdu.combined_rssi;
++ rx_status->flag &= ~RX_FLAG_NO_SIGNAL_VAL;
++ }
+
+ spin_lock_bh(&ar->data_lock);
+ ch = ar->scan_channel;
+diff --git a/drivers/net/wireless/ath/ath10k/wmi.h b/drivers/net/wireless/ath/ath10k/wmi.h
+index 2034ccc7cc72..1d5d0209ebeb 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi.h
++++ b/drivers/net/wireless/ath/ath10k/wmi.h
+@@ -5003,7 +5003,7 @@ enum wmi_rate_preamble {
+ #define ATH10K_FW_SKIPPED_RATE_CTRL(flags) (((flags) >> 6) & 0x1)
+
+ #define ATH10K_VHT_MCS_NUM 10
+-#define ATH10K_BW_NUM 4
++#define ATH10K_BW_NUM 6
+ #define ATH10K_NSS_NUM 4
+ #define ATH10K_LEGACY_NUM 12
+ #define ATH10K_GI_NUM 2
+diff --git a/drivers/net/wireless/ath/ath9k/init.c b/drivers/net/wireless/ath/ath9k/init.c
+index c070a9e51ebf..fae572b38416 100644
+--- a/drivers/net/wireless/ath/ath9k/init.c
++++ b/drivers/net/wireless/ath/ath9k/init.c
+@@ -636,15 +636,15 @@ static int ath9k_of_init(struct ath_softc *sc)
+ ret = ath9k_eeprom_request(sc, eeprom_name);
+ if (ret)
+ return ret;
++
++ ah->ah_flags &= ~AH_USE_EEPROM;
++ ah->ah_flags |= AH_NO_EEP_SWAP;
+ }
+
+ mac = of_get_mac_address(np);
+ if (mac)
+ ether_addr_copy(common->macaddr, mac);
+
+- ah->ah_flags &= ~AH_USE_EEPROM;
+- ah->ah_flags |= AH_NO_EEP_SWAP;
+-
+ return 0;
+ }
+
+diff --git a/drivers/net/wireless/ath/wil6210/cfg80211.c b/drivers/net/wireless/ath/wil6210/cfg80211.c
+index 9b2f9f543952..5a44f9d0ff02 100644
+--- a/drivers/net/wireless/ath/wil6210/cfg80211.c
++++ b/drivers/net/wireless/ath/wil6210/cfg80211.c
+@@ -1580,6 +1580,12 @@ static int _wil_cfg80211_merge_extra_ies(const u8 *ies1, u16 ies1_len,
+ u8 *buf, *dpos;
+ const u8 *spos;
+
++ if (!ies1)
++ ies1_len = 0;
++
++ if (!ies2)
++ ies2_len = 0;
++
+ if (ies1_len == 0 && ies2_len == 0) {
+ *merged_ies = NULL;
+ *merged_len = 0;
+@@ -1589,17 +1595,19 @@ static int _wil_cfg80211_merge_extra_ies(const u8 *ies1, u16 ies1_len,
+ buf = kmalloc(ies1_len + ies2_len, GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+- memcpy(buf, ies1, ies1_len);
++ if (ies1)
++ memcpy(buf, ies1, ies1_len);
+ dpos = buf + ies1_len;
+ spos = ies2;
+- while (spos + 1 < ies2 + ies2_len) {
++ while (spos && (spos + 1 < ies2 + ies2_len)) {
+ /* IE tag at offset 0, length at offset 1 */
+ u16 ielen = 2 + spos[1];
+
+ if (spos + ielen > ies2 + ies2_len)
+ break;
+ if (spos[0] == WLAN_EID_VENDOR_SPECIFIC &&
+- !_wil_cfg80211_find_ie(ies1, ies1_len, spos, ielen)) {
++ (!ies1 || !_wil_cfg80211_find_ie(ies1, ies1_len,
++ spos, ielen))) {
+ memcpy(dpos, spos, ielen);
+ dpos += ielen;
+ }
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
+index 1f1e95a15a17..0ce1d8174e6d 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
+@@ -149,7 +149,7 @@ static int brcmf_c_process_clm_blob(struct brcmf_if *ifp)
+ return err;
+ }
+
+- err = request_firmware(&clm, clm_name, bus->dev);
++ err = firmware_request_nowarn(&clm, clm_name, bus->dev);
+ if (err) {
+ brcmf_info("no clm_blob available (err=%d), device may have limited channels available\n",
+ err);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+index 0d6c313b6669..19ec55cef802 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+@@ -127,13 +127,17 @@ static int iwl_send_rss_cfg_cmd(struct iwl_mvm *mvm)
+
+ static int iwl_configure_rxq(struct iwl_mvm *mvm)
+ {
+- int i, num_queues, size;
++ int i, num_queues, size, ret;
+ struct iwl_rfh_queue_config *cmd;
++ struct iwl_host_cmd hcmd = {
++ .id = WIDE_ID(DATA_PATH_GROUP, RFH_QUEUE_CONFIG_CMD),
++ .dataflags[0] = IWL_HCMD_DFL_NOCOPY,
++ };
+
+ /* Do not configure default queue, it is configured via context info */
+ num_queues = mvm->trans->num_rx_queues - 1;
+
+- size = sizeof(*cmd) + num_queues * sizeof(struct iwl_rfh_queue_data);
++ size = struct_size(cmd, data, num_queues);
+
+ cmd = kzalloc(size, GFP_KERNEL);
+ if (!cmd)
+@@ -154,10 +158,14 @@ static int iwl_configure_rxq(struct iwl_mvm *mvm)
+ cmd->data[i].fr_bd_wid = cpu_to_le32(data.fr_bd_wid);
+ }
+
+- return iwl_mvm_send_cmd_pdu(mvm,
+- WIDE_ID(DATA_PATH_GROUP,
+- RFH_QUEUE_CONFIG_CMD),
+- 0, size, cmd);
++ hcmd.data[0] = cmd;
++ hcmd.len[0] = size;
++
++ ret = iwl_mvm_send_cmd(mvm, &hcmd);
++
++ kfree(cmd);
++
++ return ret;
+ }
+
+ static int iwl_mvm_send_dqa_cmd(struct iwl_mvm *mvm)
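
Two things change in the iwl_configure_rxq() hunk: the command size is
computed with struct_size() instead of open-coded arithmetic, and the buffer
is sent with IWL_HCMD_DFL_NOCOPY and then explicitly freed, fixing a leak. A
userspace sketch of the flexible-array sizing (field names are an
illustrative subset; the kernel helper also saturates on overflow):

    #include <stdint.h>
    #include <stdlib.h>

    struct queue_data { uint32_t q_num, fr_bd_cb; };

    struct queue_cmd {
        uint32_t num_queues;
        struct queue_data data[];        /* flexible array member */
    };

    int main(void)
    {
        int num_queues = 7;
        /* Open-coded equivalent of struct_size(cmd, data, num_queues):
         * the header plus n trailing elements. */
        size_t size = sizeof(struct queue_cmd) +
                      num_queues * sizeof(struct queue_data);
        struct queue_cmd *cmd = calloc(1, size);

        if (!cmd)
            return 1;
        cmd->num_queues = num_queues;
        /* ... fill cmd->data[0..num_queues-1], send, then free ... */
        free(cmd);
        return 0;
    }
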
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+index 9e850c25877b..c596c7b13504 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+@@ -499,7 +499,7 @@ static void iwl_pcie_rx_allocator(struct iwl_trans *trans)
+ struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+ struct iwl_rb_allocator *rba = &trans_pcie->rba;
+ struct list_head local_empty;
+- int pending = atomic_xchg(&rba->req_pending, 0);
++ int pending = atomic_read(&rba->req_pending);
+
+ IWL_DEBUG_RX(trans, "Pending allocation requests = %d\n", pending);
+
+@@ -554,11 +554,13 @@ static void iwl_pcie_rx_allocator(struct iwl_trans *trans)
+ i++;
+ }
+
++ atomic_dec(&rba->req_pending);
+ pending--;
++
+ if (!pending) {
+- pending = atomic_xchg(&rba->req_pending, 0);
++ pending = atomic_read(&rba->req_pending);
+ IWL_DEBUG_RX(trans,
+- "Pending allocation requests = %d\n",
++ "Got more pending allocation requests = %d\n",
+ pending);
+ }
+
+@@ -570,12 +572,15 @@ static void iwl_pcie_rx_allocator(struct iwl_trans *trans)
+ spin_unlock(&rba->lock);
+
+ atomic_inc(&rba->req_ready);
++
+ }
+
+ spin_lock(&rba->lock);
+ /* return unused rbds to the allocator empty list */
+ list_splice_tail(&local_empty, &rba->rbd_empty);
+ spin_unlock(&rba->lock);
++
++ IWL_DEBUG_RX(trans, "%s, exit.\n", __func__);
+ }
+
+ /*
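
The allocator above stops claiming the whole req_pending count with
atomic_xchg() and instead retires one request per completed batch, so
requests posted concurrently are not lost or double-counted. A compact model
of the new accounting:

    #include <stdatomic.h>

    static atomic_int req_pending;

    /* The old code did atomic_xchg(&req_pending, 0), claiming every
     * pending request up front; requests posted while the allocator ran
     * could then be miscounted. The fix retires them one at a time. */
    static void allocator_model(void)
    {
        int pending = atomic_load(&req_pending);

        while (pending) {
            /* ... allocate one batch of RX buffers ... */
            atomic_fetch_sub(&req_pending, 1);
            pending--;
            if (!pending)
                pending = atomic_load(&req_pending); /* pick up new work */
        }
    }

    int main(void)
    {
        atomic_store(&req_pending, 3);
        allocator_model();
        return atomic_load(&req_pending);            /* 0: all retired */
    }
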
+diff --git a/drivers/net/wireless/marvell/libertas_tf/if_usb.c b/drivers/net/wireless/marvell/libertas_tf/if_usb.c
+index 789337ea676a..6ede6168bd85 100644
+--- a/drivers/net/wireless/marvell/libertas_tf/if_usb.c
++++ b/drivers/net/wireless/marvell/libertas_tf/if_usb.c
+@@ -433,8 +433,6 @@ static int __if_usb_submit_rx_urb(struct if_usb_card *cardp,
+ skb_tail_pointer(skb),
+ MRVDRV_ETH_RX_PACKET_BUFFER_SIZE, callbackfn, cardp);
+
+- cardp->rx_urb->transfer_flags |= URB_ZERO_PACKET;
+-
+ lbtf_deb_usb2(&cardp->udev->dev, "Pointer for rx_urb %p\n",
+ cardp->rx_urb);
+ ret = usb_submit_urb(cardp->rx_urb, GFP_ATOMIC);
+diff --git a/drivers/net/wireless/marvell/mwifiex/cfg80211.c b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+index 1467af22e394..883752f640b4 100644
+--- a/drivers/net/wireless/marvell/mwifiex/cfg80211.c
++++ b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+@@ -4310,11 +4310,13 @@ int mwifiex_register_cfg80211(struct mwifiex_adapter *adapter)
+ wiphy->mgmt_stypes = mwifiex_mgmt_stypes;
+ wiphy->max_remain_on_channel_duration = 5000;
+ wiphy->interface_modes = BIT(NL80211_IFTYPE_STATION) |
+- BIT(NL80211_IFTYPE_ADHOC) |
+ BIT(NL80211_IFTYPE_P2P_CLIENT) |
+ BIT(NL80211_IFTYPE_P2P_GO) |
+ BIT(NL80211_IFTYPE_AP);
+
++ if (ISSUPP_ADHOC_ENABLED(adapter->fw_cap_info))
++ wiphy->interface_modes |= BIT(NL80211_IFTYPE_ADHOC);
++
+ wiphy->bands[NL80211_BAND_2GHZ] = &mwifiex_band_2ghz;
+ if (adapter->config_bands & BAND_A)
+ wiphy->bands[NL80211_BAND_5GHZ] = &mwifiex_band_5ghz;
+@@ -4374,11 +4376,13 @@ int mwifiex_register_cfg80211(struct mwifiex_adapter *adapter)
+ wiphy->available_antennas_tx = BIT(adapter->number_of_antenna) - 1;
+ wiphy->available_antennas_rx = BIT(adapter->number_of_antenna) - 1;
+
+- wiphy->features |= NL80211_FEATURE_HT_IBSS |
+- NL80211_FEATURE_INACTIVITY_TIMER |
++ wiphy->features |= NL80211_FEATURE_INACTIVITY_TIMER |
+ NL80211_FEATURE_LOW_PRIORITY_SCAN |
+ NL80211_FEATURE_NEED_OBSS_SCAN;
+
++ if (ISSUPP_ADHOC_ENABLED(adapter->fw_cap_info))
++ wiphy->features |= NL80211_FEATURE_HT_IBSS;
++
+ if (ISSUPP_RANDOM_MAC(adapter->fw_cap_info))
+ wiphy->features |= NL80211_FEATURE_SCAN_RANDOM_MAC_ADDR |
+ NL80211_FEATURE_SCHED_SCAN_RANDOM_MAC_ADDR |
+diff --git a/drivers/net/wireless/mediatek/mt76/eeprom.c b/drivers/net/wireless/mediatek/mt76/eeprom.c
+index 530e5593765c..a1529920d877 100644
+--- a/drivers/net/wireless/mediatek/mt76/eeprom.c
++++ b/drivers/net/wireless/mediatek/mt76/eeprom.c
+@@ -54,22 +54,30 @@ mt76_get_of_eeprom(struct mt76_dev *dev, int len)
+ part = np->name;
+
+ mtd = get_mtd_device_nm(part);
+- if (IS_ERR(mtd))
+- return PTR_ERR(mtd);
++ if (IS_ERR(mtd)) {
++ ret = PTR_ERR(mtd);
++ goto out_put_node;
++ }
+
+- if (size <= sizeof(*list))
+- return -EINVAL;
++ if (size <= sizeof(*list)) {
++ ret = -EINVAL;
++ goto out_put_node;
++ }
+
+ offset = be32_to_cpup(list);
+ ret = mtd_read(mtd, offset, len, &retlen, dev->eeprom.data);
+ put_mtd_device(mtd);
+ if (ret)
+- return ret;
++ goto out_put_node;
+
+- if (retlen < len)
+- return -EINVAL;
++ if (retlen < len) {
++ ret = -EINVAL;
++ goto out_put_node;
++ }
+
+- return 0;
++out_put_node:
++ of_node_put(np);
++ return ret;
+ #else
+ return -ENOENT;
+ #endif
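
The mt76 hunk converts the early returns into a single out_put_node label so
the of_node reference taken at the top of the function is dropped on every
path. A minimal sketch of the goto-unwind pattern with a toy refcount
(illustrative names):

    #include <errno.h>
    #include <stdio.h>

    struct node { int refcount; };

    static void node_put(struct node *np) { np->refcount--; }

    /* Every return path funnels through one label that drops the
     * device-tree node reference; the old early returns leaked it. */
    static int read_cells(struct node *np, int size)
    {
        int ret = 0;

        if (size <= 0) {         /* stand-in for the mtd/size checks */
            ret = -EINVAL;
            goto out_put_node;
        }
        /* ... mtd_read() and length validation ... */

    out_put_node:
        node_put(np);
        return ret;
    }

    int main(void)
    {
        struct node np = { .refcount = 1 };

        read_cells(&np, 0);
        printf("refcount: %d\n", np.refcount);   /* 0: no leak on error */
        return 0;
    }
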
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wireless/mediatek/mt76/mt76.h
+index 5cd508a68609..6d29ba4046c3 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76.h
+@@ -713,6 +713,19 @@ static inline bool mt76u_check_sg(struct mt76_dev *dev)
+ udev->speed == USB_SPEED_WIRELESS));
+ }
+
++static inline int
++mt76u_bulk_msg(struct mt76_dev *dev, void *data, int len, int timeout)
++{
++ struct usb_interface *intf = to_usb_interface(dev->dev);
++ struct usb_device *udev = interface_to_usbdev(intf);
++ struct mt76_usb *usb = &dev->usb;
++ unsigned int pipe;
++ int sent;
++
++ pipe = usb_sndbulkpipe(udev, usb->out_ep[MT_EP_OUT_INBAND_CMD]);
++ return usb_bulk_msg(udev, pipe, data, len, &sent, timeout);
++}
++
+ int mt76u_vendor_request(struct mt76_dev *dev, u8 req,
+ u8 req_type, u16 val, u16 offset,
+ void *buf, size_t len);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c b/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
+index c08bf371e527..7c9dfa54fee8 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
+@@ -309,7 +309,7 @@ void mt76x02_mac_write_txwi(struct mt76x02_dev *dev, struct mt76x02_txwi *txwi,
+ ccmp_pn[6] = pn >> 32;
+ ccmp_pn[7] = pn >> 40;
+ txwi->iv = *((__le32 *)&ccmp_pn[0]);
+- txwi->eiv = *((__le32 *)&ccmp_pn[1]);
++ txwi->eiv = *((__le32 *)&ccmp_pn[4]);
+ }
+
+ spin_lock_bh(&dev->mt76.lock);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_usb_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76x02_usb_mcu.c
+index 6db789f90269..2ca393e267af 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x02_usb_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x02_usb_mcu.c
+@@ -121,18 +121,14 @@ static int
+ __mt76x02u_mcu_send_msg(struct mt76_dev *dev, struct sk_buff *skb,
+ int cmd, bool wait_resp)
+ {
+- struct usb_interface *intf = to_usb_interface(dev->dev);
+- struct usb_device *udev = interface_to_usbdev(intf);
+ struct mt76_usb *usb = &dev->usb;
+- unsigned int pipe;
+- int ret, sent;
++ int ret;
+ u8 seq = 0;
+ u32 info;
+
+ if (test_bit(MT76_REMOVED, &dev->state))
+ return 0;
+
+- pipe = usb_sndbulkpipe(udev, usb->out_ep[MT_EP_OUT_INBAND_CMD]);
+ if (wait_resp) {
+ seq = ++usb->mcu.msg_seq & 0xf;
+ if (!seq)
+@@ -146,7 +142,7 @@ __mt76x02u_mcu_send_msg(struct mt76_dev *dev, struct sk_buff *skb,
+ if (ret)
+ return ret;
+
+- ret = usb_bulk_msg(udev, pipe, skb->data, skb->len, &sent, 500);
++ ret = mt76u_bulk_msg(dev, skb->data, skb->len, 500);
+ if (ret)
+ return ret;
+
+@@ -268,14 +264,12 @@ void mt76x02u_mcu_fw_reset(struct mt76x02_dev *dev)
+ EXPORT_SYMBOL_GPL(mt76x02u_mcu_fw_reset);
+
+ static int
+-__mt76x02u_mcu_fw_send_data(struct mt76x02_dev *dev, struct mt76u_buf *buf,
++__mt76x02u_mcu_fw_send_data(struct mt76x02_dev *dev, u8 *data,
+ const void *fw_data, int len, u32 dst_addr)
+ {
+- u8 *data = sg_virt(&buf->urb->sg[0]);
+- DECLARE_COMPLETION_ONSTACK(cmpl);
+ __le32 info;
+ u32 val;
+- int err;
++ int err, data_len;
+
+ info = cpu_to_le32(FIELD_PREP(MT_MCU_MSG_PORT, CPU_TX_PORT) |
+ FIELD_PREP(MT_MCU_MSG_LEN, len) |
+@@ -291,25 +285,12 @@ __mt76x02u_mcu_fw_send_data(struct mt76x02_dev *dev, struct mt76u_buf *buf,
+ mt76u_single_wr(&dev->mt76, MT_VEND_WRITE_FCE,
+ MT_FCE_DMA_LEN, len << 16);
+
+- buf->len = MT_CMD_HDR_LEN + len + sizeof(info);
+- err = mt76u_submit_buf(&dev->mt76, USB_DIR_OUT,
+- MT_EP_OUT_INBAND_CMD,
+- buf, GFP_KERNEL,
+- mt76u_mcu_complete_urb, &cmpl);
+- if (err < 0)
+- return err;
+-
+- if (!wait_for_completion_timeout(&cmpl,
+- msecs_to_jiffies(1000))) {
+- dev_err(dev->mt76.dev, "firmware upload timed out\n");
+- usb_kill_urb(buf->urb);
+- return -ETIMEDOUT;
+- }
++ data_len = MT_CMD_HDR_LEN + len + sizeof(info);
+
+- if (mt76u_urb_error(buf->urb)) {
+- dev_err(dev->mt76.dev, "firmware upload failed: %d\n",
+- buf->urb->status);
+- return buf->urb->status;
++ err = mt76u_bulk_msg(&dev->mt76, data, data_len, 1000);
++ if (err) {
++ dev_err(dev->mt76.dev, "firmware upload failed: %d\n", err);
++ return err;
+ }
+
+ val = mt76_rr(dev, MT_TX_CPU_FROM_FCE_CPU_DESC_IDX);
+@@ -322,17 +303,16 @@ __mt76x02u_mcu_fw_send_data(struct mt76x02_dev *dev, struct mt76u_buf *buf,
+ int mt76x02u_mcu_fw_send_data(struct mt76x02_dev *dev, const void *data,
+ int data_len, u32 max_payload, u32 offset)
+ {
+- int err, len, pos = 0, max_len = max_payload - 8;
+- struct mt76u_buf buf;
++ int len, err = 0, pos = 0, max_len = max_payload - 8;
++ u8 *buf;
+
+- err = mt76u_buf_alloc(&dev->mt76, &buf, 1, max_payload, max_payload,
+- GFP_KERNEL);
+- if (err < 0)
+- return err;
++ buf = kmalloc(max_payload, GFP_KERNEL);
++ if (!buf)
++ return -ENOMEM;
+
+ while (data_len > 0) {
+ len = min_t(int, data_len, max_len);
+- err = __mt76x02u_mcu_fw_send_data(dev, &buf, data + pos,
++ err = __mt76x02u_mcu_fw_send_data(dev, buf, data + pos,
+ len, offset + pos);
+ if (err < 0)
+ break;
+@@ -341,7 +321,7 @@ int mt76x02u_mcu_fw_send_data(struct mt76x02_dev *dev, const void *data,
+ pos += len;
+ usleep_range(5000, 10000);
+ }
+- mt76u_buf_free(&buf);
++ kfree(buf);
+
+ return err;
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/usb.c b/drivers/net/wireless/mediatek/mt76/usb.c
+index b061263453d4..61cde0f9f58f 100644
+--- a/drivers/net/wireless/mediatek/mt76/usb.c
++++ b/drivers/net/wireless/mediatek/mt76/usb.c
+@@ -326,7 +326,6 @@ int mt76u_buf_alloc(struct mt76_dev *dev, struct mt76u_buf *buf,
+
+ return mt76u_fill_rx_sg(dev, buf, nsgs, len, sglen);
+ }
+-EXPORT_SYMBOL_GPL(mt76u_buf_alloc);
+
+ void mt76u_buf_free(struct mt76u_buf *buf)
+ {
+@@ -838,16 +837,9 @@ int mt76u_alloc_queues(struct mt76_dev *dev)
+
+ err = mt76u_alloc_rx(dev);
+ if (err < 0)
+- goto err;
+-
+- err = mt76u_alloc_tx(dev);
+- if (err < 0)
+- goto err;
++ return err;
+
+- return 0;
+-err:
+- mt76u_queues_deinit(dev);
+- return err;
++ return mt76u_alloc_tx(dev);
+ }
+ EXPORT_SYMBOL_GPL(mt76u_alloc_queues);
+
+diff --git a/drivers/net/wireless/mediatek/mt7601u/eeprom.h b/drivers/net/wireless/mediatek/mt7601u/eeprom.h
+index 662d12703b69..57b503ae63f1 100644
+--- a/drivers/net/wireless/mediatek/mt7601u/eeprom.h
++++ b/drivers/net/wireless/mediatek/mt7601u/eeprom.h
+@@ -17,7 +17,7 @@
+
+ struct mt7601u_dev;
+
+-#define MT7601U_EE_MAX_VER 0x0c
++#define MT7601U_EE_MAX_VER 0x0d
+ #define MT7601U_EEPROM_SIZE 256
+
+ #define MT7601U_DEFAULT_TX_POWER 6
+diff --git a/drivers/net/wireless/ti/wlcore/main.c b/drivers/net/wireless/ti/wlcore/main.c
+index 26b187336875..2e12de813a5b 100644
+--- a/drivers/net/wireless/ti/wlcore/main.c
++++ b/drivers/net/wireless/ti/wlcore/main.c
+@@ -1085,8 +1085,11 @@ static int wl12xx_chip_wakeup(struct wl1271 *wl, bool plt)
+ goto out;
+
+ ret = wl12xx_fetch_firmware(wl, plt);
+- if (ret < 0)
+- goto out;
++ if (ret < 0) {
++ kfree(wl->fw_status);
++ kfree(wl->raw_fw_status);
++ kfree(wl->tx_res_if);
++ }
+
+ out:
+ return ret;
+diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c
+index a11bf4e6b451..6d6e9a12150b 100644
+--- a/drivers/nvdimm/label.c
++++ b/drivers/nvdimm/label.c
+@@ -755,7 +755,7 @@ static const guid_t *to_abstraction_guid(enum nvdimm_claim_class claim_class,
+
+ static int __pmem_label_update(struct nd_region *nd_region,
+ struct nd_mapping *nd_mapping, struct nd_namespace_pmem *nspm,
+- int pos)
++ int pos, unsigned long flags)
+ {
+ struct nd_namespace_common *ndns = &nspm->nsio.common;
+ struct nd_interleave_set *nd_set = nd_region->nd_set;
+@@ -796,7 +796,7 @@ static int __pmem_label_update(struct nd_region *nd_region,
+ memcpy(nd_label->uuid, nspm->uuid, NSLABEL_UUID_LEN);
+ if (nspm->alt_name)
+ memcpy(nd_label->name, nspm->alt_name, NSLABEL_NAME_LEN);
+- nd_label->flags = __cpu_to_le32(NSLABEL_FLAG_UPDATING);
++ nd_label->flags = __cpu_to_le32(flags);
+ nd_label->nlabel = __cpu_to_le16(nd_region->ndr_mappings);
+ nd_label->position = __cpu_to_le16(pos);
+ nd_label->isetcookie = __cpu_to_le64(cookie);
+@@ -1249,13 +1249,13 @@ static int del_labels(struct nd_mapping *nd_mapping, u8 *uuid)
+ int nd_pmem_namespace_label_update(struct nd_region *nd_region,
+ struct nd_namespace_pmem *nspm, resource_size_t size)
+ {
+- int i;
++ int i, rc;
+
+ for (i = 0; i < nd_region->ndr_mappings; i++) {
+ struct nd_mapping *nd_mapping = &nd_region->mapping[i];
+ struct nvdimm_drvdata *ndd = to_ndd(nd_mapping);
+ struct resource *res;
+- int rc, count = 0;
++ int count = 0;
+
+ if (size == 0) {
+ rc = del_labels(nd_mapping, nspm->uuid);
+@@ -1273,7 +1273,20 @@ int nd_pmem_namespace_label_update(struct nd_region *nd_region,
+ if (rc < 0)
+ return rc;
+
+- rc = __pmem_label_update(nd_region, nd_mapping, nspm, i);
++ rc = __pmem_label_update(nd_region, nd_mapping, nspm, i,
++ NSLABEL_FLAG_UPDATING);
++ if (rc)
++ return rc;
++ }
++
++ if (size == 0)
++ return 0;
++
++ /* Clear the UPDATING flag per UEFI 2.7 expectations */
++ for (i = 0; i < nd_region->ndr_mappings; i++) {
++ struct nd_mapping *nd_mapping = &nd_region->mapping[i];
++
++ rc = __pmem_label_update(nd_region, nd_mapping, nspm, i, 0);
+ if (rc)
+ return rc;
+ }
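
The label update above becomes a two-pass protocol: every mapping is first
written with NSLABEL_FLAG_UPDATING set, and only after all writes succeed is
each label rewritten with the flag clear, matching the UEFI 2.7 expectation
that a crash mid-update leaves a detectably incomplete set. A toy model of
the two passes:

    #include <stdio.h>

    #define NSLABEL_FLAG_UPDATING 0x1UL

    static int write_label(int pos, unsigned long flags)
    {
        printf("mapping %d: flags=%#lx\n", pos, flags);
        return 0;                 /* pretend the media write succeeded */
    }

    int main(void)
    {
        int i, mappings = 4;

        for (i = 0; i < mappings; i++)    /* pass 1: mark in flight */
            if (write_label(i, NSLABEL_FLAG_UPDATING))
                return 1;
        for (i = 0; i < mappings; i++)    /* pass 2: commit, clear flag */
            if (write_label(i, 0))
                return 1;
        return 0;
    }
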
+diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
+index 4b077555ac70..33a3b23b3db7 100644
+--- a/drivers/nvdimm/namespace_devs.c
++++ b/drivers/nvdimm/namespace_devs.c
+@@ -138,6 +138,7 @@ bool nd_is_uuid_unique(struct device *dev, u8 *uuid)
+ bool pmem_should_map_pages(struct device *dev)
+ {
+ struct nd_region *nd_region = to_nd_region(dev->parent);
++ struct nd_namespace_common *ndns = to_ndns(dev);
+ struct nd_namespace_io *nsio;
+
+ if (!IS_ENABLED(CONFIG_ZONE_DEVICE))
+@@ -149,6 +150,9 @@ bool pmem_should_map_pages(struct device *dev)
+ if (is_nd_pfn(dev) || is_nd_btt(dev))
+ return false;
+
++ if (ndns->force_raw)
++ return false;
++
+ nsio = to_nd_namespace_io(dev);
+ if (region_intersects(nsio->res.start, resource_size(&nsio->res),
+ IORESOURCE_SYSTEM_RAM,
+diff --git a/drivers/nvdimm/pfn_devs.c b/drivers/nvdimm/pfn_devs.c
+index 6f22272e8d80..7760c1b91853 100644
+--- a/drivers/nvdimm/pfn_devs.c
++++ b/drivers/nvdimm/pfn_devs.c
+@@ -593,7 +593,7 @@ static unsigned long init_altmap_base(resource_size_t base)
+
+ static unsigned long init_altmap_reserve(resource_size_t base)
+ {
+- unsigned long reserve = PHYS_PFN(SZ_8K);
++ unsigned long reserve = PFN_UP(SZ_8K);
+ unsigned long base_pfn = PHYS_PFN(base);
+
+ reserve += base_pfn - PFN_SECTION_ALIGN_DOWN(base_pfn);
+@@ -678,7 +678,7 @@ static void trim_pfn_device(struct nd_pfn *nd_pfn, u32 *start_pad, u32 *end_trun
+ if (region_intersects(start, size, IORESOURCE_SYSTEM_RAM,
+ IORES_DESC_NONE) == REGION_MIXED
+ || !IS_ALIGNED(end, nd_pfn->align)
+- || nd_region_conflict(nd_region, start, size + adjust))
++ || nd_region_conflict(nd_region, start, size))
+ *end_trunc = end - phys_pmem_align_down(nd_pfn, end);
+ }
+
+diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
+index 89accc76d71c..c37d5bbd72ab 100644
+--- a/drivers/nvme/host/fc.c
++++ b/drivers/nvme/host/fc.c
+@@ -3018,7 +3018,10 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts,
+
+ ctrl->ctrl.opts = opts;
+ ctrl->ctrl.nr_reconnects = 0;
+- ctrl->ctrl.numa_node = dev_to_node(lport->dev);
++ if (lport->dev)
++ ctrl->ctrl.numa_node = dev_to_node(lport->dev);
++ else
++ ctrl->ctrl.numa_node = NUMA_NO_NODE;
+ INIT_LIST_HEAD(&ctrl->ctrl_list);
+ ctrl->lport = lport;
+ ctrl->rport = rport;
+diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
+index 88d260f31835..02c63c463222 100644
+--- a/drivers/nvme/target/core.c
++++ b/drivers/nvme/target/core.c
+@@ -1171,6 +1171,15 @@ static void nvmet_release_p2p_ns_map(struct nvmet_ctrl *ctrl)
+ put_device(ctrl->p2p_client);
+ }
+
++static void nvmet_fatal_error_handler(struct work_struct *work)
++{
++ struct nvmet_ctrl *ctrl =
++ container_of(work, struct nvmet_ctrl, fatal_err_work);
++
++ pr_err("ctrl %d fatal error occurred!\n", ctrl->cntlid);
++ ctrl->ops->delete_ctrl(ctrl);
++}
++
+ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
+ struct nvmet_req *req, u32 kato, struct nvmet_ctrl **ctrlp)
+ {
+@@ -1213,6 +1222,7 @@ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
+ INIT_WORK(&ctrl->async_event_work, nvmet_async_event_work);
+ INIT_LIST_HEAD(&ctrl->async_events);
+ INIT_RADIX_TREE(&ctrl->p2p_ns_map, GFP_KERNEL);
++ INIT_WORK(&ctrl->fatal_err_work, nvmet_fatal_error_handler);
+
+ memcpy(ctrl->subsysnqn, subsysnqn, NVMF_NQN_SIZE);
+ memcpy(ctrl->hostnqn, hostnqn, NVMF_NQN_SIZE);
+@@ -1316,21 +1326,11 @@ void nvmet_ctrl_put(struct nvmet_ctrl *ctrl)
+ kref_put(&ctrl->ref, nvmet_ctrl_free);
+ }
+
+-static void nvmet_fatal_error_handler(struct work_struct *work)
+-{
+- struct nvmet_ctrl *ctrl =
+- container_of(work, struct nvmet_ctrl, fatal_err_work);
+-
+- pr_err("ctrl %d fatal error occurred!\n", ctrl->cntlid);
+- ctrl->ops->delete_ctrl(ctrl);
+-}
+-
+ void nvmet_ctrl_fatal_error(struct nvmet_ctrl *ctrl)
+ {
+ mutex_lock(&ctrl->lock);
+ if (!(ctrl->csts & NVME_CSTS_CFS)) {
+ ctrl->csts |= NVME_CSTS_CFS;
+- INIT_WORK(&ctrl->fatal_err_work, nvmet_fatal_error_handler);
+ schedule_work(&ctrl->fatal_err_work);
+ }
+ mutex_unlock(&ctrl->lock);
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index f7301bb4ef3b..3ce65927e11c 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -686,9 +686,7 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
+ if (rval)
+ goto err_remove_cells;
+
+- rval = blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem);
+- if (rval)
+- goto err_remove_cells;
++ blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem);
+
+ return nvmem;
+
+diff --git a/drivers/opp/core.c b/drivers/opp/core.c
+index 18f1639dbc4a..f5d2fa195f5f 100644
+--- a/drivers/opp/core.c
++++ b/drivers/opp/core.c
+@@ -743,7 +743,7 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
+ old_freq, freq);
+
+ /* Scaling up? Configure required OPPs before frequency */
+- if (freq > old_freq) {
++ if (freq >= old_freq) {
+ ret = _set_required_opps(dev, opp_table, opp);
+ if (ret)
+ goto put_opp;
+diff --git a/drivers/parport/parport_pc.c b/drivers/parport/parport_pc.c
+index 9c8249f74479..6296dbb83d47 100644
+--- a/drivers/parport/parport_pc.c
++++ b/drivers/parport/parport_pc.c
+@@ -1377,7 +1377,7 @@ static struct superio_struct *find_superio(struct parport *p)
+ {
+ int i;
+ for (i = 0; i < NR_SUPERIOS; i++)
+- if (superios[i].io != p->base)
++ if (superios[i].io == p->base)
+ return &superios[i];
+ return NULL;
+ }
+diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
+index 721d60a5d9e4..9c5614f21b8e 100644
+--- a/drivers/pci/controller/dwc/pcie-designware-host.c
++++ b/drivers/pci/controller/dwc/pcie-designware-host.c
+@@ -439,7 +439,7 @@ int dw_pcie_host_init(struct pcie_port *pp)
+ if (ret)
+ pci->num_viewport = 2;
+
+- if (IS_ENABLED(CONFIG_PCI_MSI)) {
++ if (IS_ENABLED(CONFIG_PCI_MSI) && pci_msi_enabled()) {
+ /*
+ * If a specific SoC driver needs to change the
+ * default number of vectors, it needs to implement
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index d185ea5fe996..a7f703556790 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -1228,7 +1228,7 @@ static int qcom_pcie_probe(struct platform_device *pdev)
+
+ pcie->ops = of_device_get_match_data(dev);
+
+- pcie->reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_LOW);
++ pcie->reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_HIGH);
+ if (IS_ERR(pcie->reset)) {
+ ret = PTR_ERR(pcie->reset);
+ goto err_pm_runtime_put;
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index 750081c1cb48..6eecae447af3 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -499,7 +499,7 @@ static void advk_sw_pci_bridge_init(struct advk_pcie *pcie)
+ bridge->data = pcie;
+ bridge->ops = &advk_pci_bridge_emul_ops;
+
+- pci_bridge_emul_init(bridge);
++ pci_bridge_emul_init(bridge, 0);
+
+ }
+
+diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c
+index fa0fc46edb0c..d3a0419e42f2 100644
+--- a/drivers/pci/controller/pci-mvebu.c
++++ b/drivers/pci/controller/pci-mvebu.c
+@@ -583,7 +583,7 @@ static void mvebu_pci_bridge_emul_init(struct mvebu_pcie_port *port)
+ bridge->data = port;
+ bridge->ops = &mvebu_pci_bridge_emul_ops;
+
+- pci_bridge_emul_init(bridge);
++ pci_bridge_emul_init(bridge, PCI_BRIDGE_EMUL_NO_PREFETCHABLE_BAR);
+ }
+
+ static inline struct mvebu_pcie *sys_to_pcie(struct pci_sys_data *sys)
+diff --git a/drivers/pci/controller/pcie-mediatek.c b/drivers/pci/controller/pcie-mediatek.c
+index 55e471c18e8d..c42fe5c4319f 100644
+--- a/drivers/pci/controller/pcie-mediatek.c
++++ b/drivers/pci/controller/pcie-mediatek.c
+@@ -654,7 +654,6 @@ static int mtk_pcie_startup_port_v2(struct mtk_pcie_port *port)
+ struct resource *mem = &pcie->mem;
+ const struct mtk_pcie_soc *soc = port->pcie->soc;
+ u32 val;
+- size_t size;
+ int err;
+
+ /* MT7622 platforms need to enable LTSSM and ASPM from PCIe subsys */
+@@ -706,8 +705,8 @@ static int mtk_pcie_startup_port_v2(struct mtk_pcie_port *port)
+ mtk_pcie_enable_msi(port);
+
+ /* Set AHB to PCIe translation windows */
+- size = mem->end - mem->start;
+- val = lower_32_bits(mem->start) | AHB2PCIE_SIZE(fls(size));
++ val = lower_32_bits(mem->start) |
++ AHB2PCIE_SIZE(fls(resource_size(mem)));
+ writel(val, port->base + PCIE_AHB_TRANS_BASE0_L);
+
+ val = upper_32_bits(mem->start);
+diff --git a/drivers/pci/hotplug/pciehp_ctrl.c b/drivers/pci/hotplug/pciehp_ctrl.c
+index 3f3df4c29f6e..905282a8ddaa 100644
+--- a/drivers/pci/hotplug/pciehp_ctrl.c
++++ b/drivers/pci/hotplug/pciehp_ctrl.c
+@@ -115,6 +115,10 @@ static void remove_board(struct controller *ctrl, bool safe_removal)
+ * removed from the slot/adapter.
+ */
+ msleep(1000);
++
++ /* Ignore link or presence changes caused by power off */
++ atomic_and(~(PCI_EXP_SLTSTA_DLLSC | PCI_EXP_SLTSTA_PDC),
++ &ctrl->pending_events);
+ }
+
+ /* turn off Green LED */
+diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
+index 7dd443aea5a5..8bfcb8cd0900 100644
+--- a/drivers/pci/hotplug/pciehp_hpc.c
++++ b/drivers/pci/hotplug/pciehp_hpc.c
+@@ -156,9 +156,9 @@ static void pcie_do_write_cmd(struct controller *ctrl, u16 cmd,
+ slot_ctrl |= (cmd & mask);
+ ctrl->cmd_busy = 1;
+ smp_mb();
++ ctrl->slot_ctrl = slot_ctrl;
+ pcie_capability_write_word(pdev, PCI_EXP_SLTCTL, slot_ctrl);
+ ctrl->cmd_started = jiffies;
+- ctrl->slot_ctrl = slot_ctrl;
+
+ /*
+ * Controllers with the Intel CF118 and similar errata advertise
+@@ -736,12 +736,25 @@ void pcie_clear_hotplug_events(struct controller *ctrl)
+
+ void pcie_enable_interrupt(struct controller *ctrl)
+ {
+- pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_HPIE, PCI_EXP_SLTCTL_HPIE);
++ u16 mask;
++
++ mask = PCI_EXP_SLTCTL_HPIE | PCI_EXP_SLTCTL_DLLSCE;
++ pcie_write_cmd(ctrl, mask, mask);
+ }
+
+ void pcie_disable_interrupt(struct controller *ctrl)
+ {
+- pcie_write_cmd(ctrl, 0, PCI_EXP_SLTCTL_HPIE);
++ u16 mask;
++
++ /*
++ * Mask the hot-plug interrupt to prevent it from triggering
++ * immediately when the link goes inactive (we still get PME when
++ * any of the enabled events is detected). The same goes for the
++ * Data Link Layer State Changed event, which generates PME
++ * immediately when the link goes inactive, so mask it as well.
++ */
++ mask = PCI_EXP_SLTCTL_HPIE | PCI_EXP_SLTCTL_DLLSCE;
++ pcie_write_cmd(ctrl, 0, mask);
+ }
+
+ /*
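
pcie_disable_interrupt() now clears DLLSCE together with HPIE, since a
link-down during suspend would otherwise still raise a PME via the Data Link
Layer State Changed event. A tiny sketch of the combined read-modify-write
(the two bit values are the PCIe Slot Control definitions):

    #include <stdint.h>
    #include <stdio.h>

    #define PCI_EXP_SLTCTL_HPIE   0x0020 /* Hot-Plug Interrupt Enable */
    #define PCI_EXP_SLTCTL_DLLSCE 0x1000 /* Data Link Layer State Changed En. */

    /* Both enables are cleared in one read-modify-write: with only HPIE
     * cleared, a link-down on suspend would still raise a PME through
     * the Data Link Layer State Changed event. */
    static uint16_t slot_irq_disable(uint16_t slot_ctrl)
    {
        return slot_ctrl & (uint16_t)~(PCI_EXP_SLTCTL_HPIE |
                                       PCI_EXP_SLTCTL_DLLSCE);
    }

    int main(void)
    {
        printf("%#x\n", (unsigned)slot_irq_disable(0x1020));   /* 0 */
        return 0;
    }
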
+diff --git a/drivers/pci/pci-bridge-emul.c b/drivers/pci/pci-bridge-emul.c
+index 129738362d90..83fb077d0b41 100644
+--- a/drivers/pci/pci-bridge-emul.c
++++ b/drivers/pci/pci-bridge-emul.c
+@@ -24,29 +24,6 @@
+ #define PCI_CAP_PCIE_START PCI_BRIDGE_CONF_END
+ #define PCI_CAP_PCIE_END (PCI_CAP_PCIE_START + PCI_EXP_SLTSTA2 + 2)
+
+-/*
+- * Initialize a pci_bridge_emul structure to represent a fake PCI
+- * bridge configuration space. The caller needs to have initialized
+- * the PCI configuration space with whatever values make sense
+- * (typically at least vendor, device, revision), the ->ops pointer,
+- * and optionally ->data and ->has_pcie.
+- */
+-void pci_bridge_emul_init(struct pci_bridge_emul *bridge)
+-{
+- bridge->conf.class_revision |= PCI_CLASS_BRIDGE_PCI << 16;
+- bridge->conf.header_type = PCI_HEADER_TYPE_BRIDGE;
+- bridge->conf.cache_line_size = 0x10;
+- bridge->conf.status = PCI_STATUS_CAP_LIST;
+-
+- if (bridge->has_pcie) {
+- bridge->conf.capabilities_pointer = PCI_CAP_PCIE_START;
+- bridge->pcie_conf.cap_id = PCI_CAP_ID_EXP;
+- /* Set PCIe v2, root port, slot support */
+- bridge->pcie_conf.cap = PCI_EXP_TYPE_ROOT_PORT << 4 | 2 |
+- PCI_EXP_FLAGS_SLOT;
+- }
+-}
+-
+ struct pci_bridge_reg_behavior {
+ /* Read-only bits */
+ u32 ro;
+@@ -283,6 +260,61 @@ const static struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = {
+ },
+ };
+
++/*
++ * Initialize a pci_bridge_emul structure to represent a fake PCI
++ * bridge configuration space. The caller needs to have initialized
++ * the PCI configuration space with whatever values make sense
++ * (typically at least vendor, device, revision), the ->ops pointer,
++ * and optionally ->data and ->has_pcie.
++ */
++int pci_bridge_emul_init(struct pci_bridge_emul *bridge,
++ unsigned int flags)
++{
++ bridge->conf.class_revision |= PCI_CLASS_BRIDGE_PCI << 16;
++ bridge->conf.header_type = PCI_HEADER_TYPE_BRIDGE;
++ bridge->conf.cache_line_size = 0x10;
++ bridge->conf.status = PCI_STATUS_CAP_LIST;
++ bridge->pci_regs_behavior = kmemdup(pci_regs_behavior,
++ sizeof(pci_regs_behavior),
++ GFP_KERNEL);
++ if (!bridge->pci_regs_behavior)
++ return -ENOMEM;
++
++ if (bridge->has_pcie) {
++ bridge->conf.capabilities_pointer = PCI_CAP_PCIE_START;
++ bridge->pcie_conf.cap_id = PCI_CAP_ID_EXP;
++ /* Set PCIe v2, root port, slot support */
++ bridge->pcie_conf.cap = PCI_EXP_TYPE_ROOT_PORT << 4 | 2 |
++ PCI_EXP_FLAGS_SLOT;
++ bridge->pcie_cap_regs_behavior =
++ kmemdup(pcie_cap_regs_behavior,
++ sizeof(pcie_cap_regs_behavior),
++ GFP_KERNEL);
++ if (!bridge->pcie_cap_regs_behavior) {
++ kfree(bridge->pci_regs_behavior);
++ return -ENOMEM;
++ }
++ }
++
++ if (flags & PCI_BRIDGE_EMUL_NO_PREFETCHABLE_BAR) {
++ bridge->pci_regs_behavior[PCI_PREF_MEMORY_BASE / 4].ro = ~0;
++ bridge->pci_regs_behavior[PCI_PREF_MEMORY_BASE / 4].rw = 0;
++ }
++
++ return 0;
++}
++
++/*
++ * Cleanup a pci_bridge_emul structure that was previously initialized
++ * using pci_bridge_emul_init().
++ */
++void pci_bridge_emul_cleanup(struct pci_bridge_emul *bridge)
++{
++ if (bridge->has_pcie)
++ kfree(bridge->pcie_cap_regs_behavior);
++ kfree(bridge->pci_regs_behavior);
++}
++
+ /*
+ * Should be called by the PCI controller driver when reading the PCI
+ * configuration space of the fake bridge. It will call back the
+@@ -312,11 +344,11 @@ int pci_bridge_emul_conf_read(struct pci_bridge_emul *bridge, int where,
+ reg -= PCI_CAP_PCIE_START;
+ read_op = bridge->ops->read_pcie;
+ cfgspace = (u32 *) &bridge->pcie_conf;
+- behavior = pcie_cap_regs_behavior;
++ behavior = bridge->pcie_cap_regs_behavior;
+ } else {
+ read_op = bridge->ops->read_base;
+ cfgspace = (u32 *) &bridge->conf;
+- behavior = pci_regs_behavior;
++ behavior = bridge->pci_regs_behavior;
+ }
+
+ if (read_op)
+@@ -383,11 +415,11 @@ int pci_bridge_emul_conf_write(struct pci_bridge_emul *bridge, int where,
+ reg -= PCI_CAP_PCIE_START;
+ write_op = bridge->ops->write_pcie;
+ cfgspace = (u32 *) &bridge->pcie_conf;
+- behavior = pcie_cap_regs_behavior;
++ behavior = bridge->pcie_cap_regs_behavior;
+ } else {
+ write_op = bridge->ops->write_base;
+ cfgspace = (u32 *) &bridge->conf;
+- behavior = pci_regs_behavior;
++ behavior = bridge->pci_regs_behavior;
+ }
+
+ /* Keep all bits, except the RW bits */
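
The emulation core now kmemdup()s the shared const behavior tables into each
pci_bridge_emul instance, which is what lets mvebu mark the prefetchable-BAR
registers read-only without affecting other bridges, paired with a new
cleanup function to free the copies. A userspace sketch of the
duplicate-then-customize pattern (illustrative types):

    #include <stdlib.h>
    #include <string.h>

    struct reg_behavior { unsigned int ro, rw, w1c; };

    /* The const template stays shared; each emulated bridge gets its own
     * mutable copy so per-device quirks cannot leak into other
     * instances. */
    static const struct reg_behavior pci_regs_template[16];

    static struct reg_behavior *behavior_dup(void)
    {
        struct reg_behavior *copy = malloc(sizeof(pci_regs_template));

        if (!copy)
            return NULL;                 /* mirrors the -ENOMEM path */
        memcpy(copy, pci_regs_template, sizeof(pci_regs_template));
        return copy;                     /* kmemdup() analogue */
    }

    int main(void)
    {
        struct reg_behavior *b = behavior_dup();

        if (!b)
            return 1;
        b[0].ro = ~0u;                   /* per-instance quirk */
        free(b);                         /* pci_bridge_emul_cleanup() role */
        return 0;
    }
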
+diff --git a/drivers/pci/pci-bridge-emul.h b/drivers/pci/pci-bridge-emul.h
+index 9d510ccf738b..e65b1b79899d 100644
+--- a/drivers/pci/pci-bridge-emul.h
++++ b/drivers/pci/pci-bridge-emul.h
+@@ -107,15 +107,26 @@ struct pci_bridge_emul_ops {
+ u32 old, u32 new, u32 mask);
+ };
+
++struct pci_bridge_reg_behavior;
++
+ struct pci_bridge_emul {
+ struct pci_bridge_emul_conf conf;
+ struct pci_bridge_emul_pcie_conf pcie_conf;
+ struct pci_bridge_emul_ops *ops;
++ struct pci_bridge_reg_behavior *pci_regs_behavior;
++ struct pci_bridge_reg_behavior *pcie_cap_regs_behavior;
+ void *data;
+ bool has_pcie;
+ };
+
+-void pci_bridge_emul_init(struct pci_bridge_emul *bridge);
++enum {
++ PCI_BRIDGE_EMUL_NO_PREFETCHABLE_BAR = BIT(0),
++};
++
++int pci_bridge_emul_init(struct pci_bridge_emul *bridge,
++ unsigned int flags);
++void pci_bridge_emul_cleanup(struct pci_bridge_emul *bridge);
++
+ int pci_bridge_emul_conf_read(struct pci_bridge_emul *bridge, int where,
+ int size, u32 *value);
+ int pci_bridge_emul_conf_write(struct pci_bridge_emul *bridge, int where,
+diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c
+index e435d12e61a0..7b77754a82de 100644
+--- a/drivers/pci/pcie/dpc.c
++++ b/drivers/pci/pcie/dpc.c
+@@ -202,6 +202,28 @@ static void dpc_process_rp_pio_error(struct dpc_dev *dpc)
+ pci_write_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_STATUS, status);
+ }
+
++static int dpc_get_aer_uncorrect_severity(struct pci_dev *dev,
++ struct aer_err_info *info)
++{
++ int pos = dev->aer_cap;
++ u32 status, mask, sev;
++
++ pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, &status);
++ pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_MASK, &mask);
++ status &= ~mask;
++ if (!status)
++ return 0;
++
++ pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_SEVER, &sev);
++ status &= sev;
++ if (status)
++ info->severity = AER_FATAL;
++ else
++ info->severity = AER_NONFATAL;
++
++ return 1;
++}
++
+ static irqreturn_t dpc_handler(int irq, void *context)
+ {
+ struct aer_err_info info;
+@@ -229,9 +251,12 @@ static irqreturn_t dpc_handler(int irq, void *context)
+ /* show RP PIO error detail information */
+ if (dpc->rp_extensions && reason == 3 && ext_reason == 0)
+ dpc_process_rp_pio_error(dpc);
+- else if (reason == 0 && aer_get_device_error_info(pdev, &info)) {
++ else if (reason == 0 &&
++ dpc_get_aer_uncorrect_severity(pdev, &info) &&
++ aer_get_device_error_info(pdev, &info)) {
+ aer_print_error(pdev, &info);
+ pci_cleanup_aer_uncorrect_error_status(pdev);
++ pci_aer_clear_fatal_status(pdev);
+ }
+
+ /* We configure DPC so it only triggers on ERR_FATAL */
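The severity classification above is plain mask arithmetic: uncorrectable
bits that survive the mask and also appear in the severity register make the
error fatal. A standalone worked example with made-up register values:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
            uint32_t status = 0x00004020; /* two uncorrectable errors latched */
            uint32_t mask   = 0x00000020; /* one of them is masked off */
            uint32_t sev    = 0x00004000; /* the survivor is marked fatal */

            status &= ~mask;              /* unmasked bits only -> 0x00004000 */
            if (!status) {
                    printf("nothing to report\n");
                    return 0;
            }
            printf("%s\n", (status & sev) ? "AER_FATAL" : "AER_NONFATAL");
            return 0;
    }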
+diff --git a/drivers/pci/pcie/pme.c b/drivers/pci/pcie/pme.c
+index 0dbcf429089f..efa5b552914b 100644
+--- a/drivers/pci/pcie/pme.c
++++ b/drivers/pci/pcie/pme.c
+@@ -363,6 +363,16 @@ static bool pcie_pme_check_wakeup(struct pci_bus *bus)
+ return false;
+ }
+
++static void pcie_pme_disable_interrupt(struct pci_dev *port,
++ struct pcie_pme_service_data *data)
++{
++ spin_lock_irq(&data->lock);
++ pcie_pme_interrupt_enable(port, false);
++ pcie_clear_root_pme_status(port);
++ data->noirq = true;
++ spin_unlock_irq(&data->lock);
++}
++
+ /**
+ * pcie_pme_suspend - Suspend PCIe PME service device.
+ * @srv: PCIe service device to suspend.
+@@ -387,11 +397,7 @@ static int pcie_pme_suspend(struct pcie_device *srv)
+ return 0;
+ }
+
+- spin_lock_irq(&data->lock);
+- pcie_pme_interrupt_enable(port, false);
+- pcie_clear_root_pme_status(port);
+- data->noirq = true;
+- spin_unlock_irq(&data->lock);
++ pcie_pme_disable_interrupt(port, data);
+
+ synchronize_irq(srv->irq);
+
+@@ -426,35 +432,12 @@ static int pcie_pme_resume(struct pcie_device *srv)
+ * @srv - PCIe service device to remove.
+ */
+ static void pcie_pme_remove(struct pcie_device *srv)
+-{
+- pcie_pme_suspend(srv);
+- free_irq(srv->irq, srv);
+- kfree(get_service_data(srv));
+-}
+-
+-static int pcie_pme_runtime_suspend(struct pcie_device *srv)
+-{
+- struct pcie_pme_service_data *data = get_service_data(srv);
+-
+- spin_lock_irq(&data->lock);
+- pcie_pme_interrupt_enable(srv->port, false);
+- pcie_clear_root_pme_status(srv->port);
+- data->noirq = true;
+- spin_unlock_irq(&data->lock);
+-
+- return 0;
+-}
+-
+-static int pcie_pme_runtime_resume(struct pcie_device *srv)
+ {
+ struct pcie_pme_service_data *data = get_service_data(srv);
+
+- spin_lock_irq(&data->lock);
+- pcie_pme_interrupt_enable(srv->port, true);
+- data->noirq = false;
+- spin_unlock_irq(&data->lock);
+-
+- return 0;
++ pcie_pme_disable_interrupt(srv->port, data);
++ free_irq(srv->irq, srv);
++ kfree(data);
+ }
+
+ static struct pcie_port_service_driver pcie_pme_driver = {
+@@ -464,8 +447,6 @@ static struct pcie_port_service_driver pcie_pme_driver = {
+
+ .probe = pcie_pme_probe,
+ .suspend = pcie_pme_suspend,
+- .runtime_suspend = pcie_pme_runtime_suspend,
+- .runtime_resume = pcie_pme_runtime_resume,
+ .resume = pcie_pme_resume,
+ .remove = pcie_pme_remove,
+ };
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index 257b9f6f2ebb..c46a3fcb341e 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -2071,11 +2071,8 @@ static void pci_configure_ltr(struct pci_dev *dev)
+ {
+ #ifdef CONFIG_PCIEASPM
+ struct pci_host_bridge *host = pci_find_host_bridge(dev->bus);
+- u32 cap;
+ struct pci_dev *bridge;
+-
+- if (!host->native_ltr)
+- return;
++ u32 cap, ctl;
+
+ if (!pci_is_pcie(dev))
+ return;
+@@ -2084,22 +2081,35 @@ static void pci_configure_ltr(struct pci_dev *dev)
+ if (!(cap & PCI_EXP_DEVCAP2_LTR))
+ return;
+
+- /*
+- * Software must not enable LTR in an Endpoint unless the Root
+- * Complex and all intermediate Switches indicate support for LTR.
+- * PCIe r3.1, sec 6.18.
+- */
+- if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT)
+- dev->ltr_path = 1;
+- else {
++ pcie_capability_read_dword(dev, PCI_EXP_DEVCTL2, &ctl);
++ if (ctl & PCI_EXP_DEVCTL2_LTR_EN) {
++ if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT) {
++ dev->ltr_path = 1;
++ return;
++ }
++
+ bridge = pci_upstream_bridge(dev);
+ if (bridge && bridge->ltr_path)
+ dev->ltr_path = 1;
++
++ return;
+ }
+
+- if (dev->ltr_path)
++ if (!host->native_ltr)
++ return;
++
++ /*
++ * Software must not enable LTR in an Endpoint unless the Root
++ * Complex and all intermediate Switches indicate support for LTR.
++ * PCIe r4.0, sec 6.18.
++ */
++ if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT ||
++ ((bridge = pci_upstream_bridge(dev)) &&
++ bridge->ltr_path)) {
+ pcie_capability_set_word(dev, PCI_EXP_DEVCTL2,
+ PCI_EXP_DEVCTL2_LTR_EN);
++ dev->ltr_path = 1;
++ }
+ #endif
+ }
+
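The reordering above makes a firmware-enabled LTR (LTR_EN already set in
DEVCTL2) trusted even when the OS does not own LTR, while fresh enables still
require native control plus a supporting upstream path. A toy model of that
decision, every type here a hypothetical stand-in:

    #include <stdbool.h>

    struct node {
            bool is_root_port;
            bool fw_enabled;       /* PCI_EXP_DEVCTL2_LTR_EN already set */
            bool ltr_path;         /* LTR usable on the whole upstream path */
            struct node *upstream; /* NULL for a root port */
    };

    static void configure_ltr(struct node *n, bool native_ltr)
    {
            if (n->fw_enabled) {
                    /* Trust firmware: record the path, write nothing back. */
                    if (n->is_root_port ||
                        (n->upstream && n->upstream->ltr_path))
                            n->ltr_path = true;
                    return;
            }
            if (!native_ltr)
                    return;        /* platform kept control of LTR */
            if (n->is_root_port || (n->upstream && n->upstream->ltr_path)) {
                    /* the real code sets PCI_EXP_DEVCTL2_LTR_EN here */
                    n->ltr_path = true;
            }
    }

    int main(void)
    {
            struct node root = { .is_root_port = true, .fw_enabled = true };
            struct node ep   = { .fw_enabled = true, .upstream = &root };

            configure_ltr(&root, false); /* no native control, fw honoured */
            configure_ltr(&ep, false);
            return ep.ltr_path ? 0 : 1;  /* 0: path recorded */
    }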
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index e2a879e93d86..fba03a7d5c7f 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -3877,6 +3877,8 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9128,
+ /* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c14 */
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9130,
+ quirk_dma_func1_alias);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9170,
++ quirk_dma_func1_alias);
+ /* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c47 + c57 */
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9172,
+ quirk_dma_func1_alias);
+diff --git a/drivers/perf/arm_spe_pmu.c b/drivers/perf/arm_spe_pmu.c
+index 8e46a9dad2fa..7cb766dafe85 100644
+--- a/drivers/perf/arm_spe_pmu.c
++++ b/drivers/perf/arm_spe_pmu.c
+@@ -824,10 +824,10 @@ static void arm_spe_pmu_read(struct perf_event *event)
+ {
+ }
+
+-static void *arm_spe_pmu_setup_aux(int cpu, void **pages, int nr_pages,
+- bool snapshot)
++static void *arm_spe_pmu_setup_aux(struct perf_event *event, void **pages,
++ int nr_pages, bool snapshot)
+ {
+- int i;
++ int i, cpu = event->cpu;
+ struct page **pglist;
+ struct arm_spe_pmu_buf *buf;
+
+diff --git a/drivers/phy/allwinner/phy-sun4i-usb.c b/drivers/phy/allwinner/phy-sun4i-usb.c
+index 5163097b43df..4bbd9ede38c8 100644
+--- a/drivers/phy/allwinner/phy-sun4i-usb.c
++++ b/drivers/phy/allwinner/phy-sun4i-usb.c
+@@ -485,8 +485,11 @@ static int sun4i_usb_phy_set_mode(struct phy *_phy,
+ struct sun4i_usb_phy_data *data = to_sun4i_usb_phy_data(phy);
+ int new_mode;
+
+- if (phy->index != 0)
++ if (phy->index != 0) {
++ if (mode == PHY_MODE_USB_HOST)
++ return 0;
+ return -EINVAL;
++ }
+
+ switch (mode) {
+ case PHY_MODE_USB_HOST:
+diff --git a/drivers/pinctrl/meson/pinctrl-meson.c b/drivers/pinctrl/meson/pinctrl-meson.c
+index ea87d739f534..a4ae1ac5369e 100644
+--- a/drivers/pinctrl/meson/pinctrl-meson.c
++++ b/drivers/pinctrl/meson/pinctrl-meson.c
+@@ -31,6 +31,9 @@
+ * In some cases the register ranges for pull enable and pull
+ * direction are the same and thus there are only 3 register ranges.
+ *
++ * Since Meson G12A SoC, the ao register ranges for gpio, pull enable
++ * and pull direction are the same, so there are only 2 register ranges.
++ *
+ * For the pull and GPIO configuration every bank uses a contiguous
+ * set of bits in the register sets described above; the same register
+ * can be shared by more banks with different offsets.
+@@ -488,23 +491,22 @@ static int meson_pinctrl_parse_dt(struct meson_pinctrl *pc,
+ return PTR_ERR(pc->reg_mux);
+ }
+
+- pc->reg_pull = meson_map_resource(pc, gpio_np, "pull");
+- if (IS_ERR(pc->reg_pull)) {
+- dev_err(pc->dev, "pull registers not found\n");
+- return PTR_ERR(pc->reg_pull);
++ pc->reg_gpio = meson_map_resource(pc, gpio_np, "gpio");
++ if (IS_ERR(pc->reg_gpio)) {
++ dev_err(pc->dev, "gpio registers not found\n");
++ return PTR_ERR(pc->reg_gpio);
+ }
+
++ pc->reg_pull = meson_map_resource(pc, gpio_np, "pull");
++ /* Use gpio region if pull one is not present */
++ if (IS_ERR(pc->reg_pull))
++ pc->reg_pull = pc->reg_gpio;
++
+ pc->reg_pullen = meson_map_resource(pc, gpio_np, "pull-enable");
+ /* Use pull region if pull-enable one is not present */
+ if (IS_ERR(pc->reg_pullen))
+ pc->reg_pullen = pc->reg_pull;
+
+- pc->reg_gpio = meson_map_resource(pc, gpio_np, "gpio");
+- if (IS_ERR(pc->reg_gpio)) {
+- dev_err(pc->dev, "gpio registers not found\n");
+- return PTR_ERR(pc->reg_gpio);
+- }
+-
+ return 0;
+ }
+
+diff --git a/drivers/pinctrl/meson/pinctrl-meson8b.c b/drivers/pinctrl/meson/pinctrl-meson8b.c
+index 0f140a802137..7f76000cc12e 100644
+--- a/drivers/pinctrl/meson/pinctrl-meson8b.c
++++ b/drivers/pinctrl/meson/pinctrl-meson8b.c
+@@ -346,6 +346,8 @@ static const unsigned int eth_rx_dv_pins[] = { DIF_1_P };
+ static const unsigned int eth_rx_clk_pins[] = { DIF_1_N };
+ static const unsigned int eth_txd0_1_pins[] = { DIF_2_P };
+ static const unsigned int eth_txd1_1_pins[] = { DIF_2_N };
++static const unsigned int eth_rxd3_pins[] = { DIF_2_P };
++static const unsigned int eth_rxd2_pins[] = { DIF_2_N };
+ static const unsigned int eth_tx_en_pins[] = { DIF_3_P };
+ static const unsigned int eth_ref_clk_pins[] = { DIF_3_N };
+ static const unsigned int eth_mdc_pins[] = { DIF_4_P };
+@@ -599,6 +601,8 @@ static struct meson_pmx_group meson8b_cbus_groups[] = {
+ GROUP(eth_ref_clk, 6, 8),
+ GROUP(eth_mdc, 6, 9),
+ GROUP(eth_mdio_en, 6, 10),
++ GROUP(eth_rxd3, 7, 22),
++ GROUP(eth_rxd2, 7, 23),
+ };
+
+ static struct meson_pmx_group meson8b_aobus_groups[] = {
+@@ -748,7 +752,7 @@ static const char * const ethernet_groups[] = {
+ "eth_tx_clk", "eth_tx_en", "eth_txd1_0", "eth_txd1_1",
+ "eth_txd0_0", "eth_txd0_1", "eth_rx_clk", "eth_rx_dv",
+ "eth_rxd1", "eth_rxd0", "eth_mdio_en", "eth_mdc", "eth_ref_clk",
+- "eth_txd2", "eth_txd3"
++ "eth_txd2", "eth_txd3", "eth_rxd3", "eth_rxd2"
+ };
+
+ static const char * const i2c_a_groups[] = {
+diff --git a/drivers/pinctrl/sh-pfc/pfc-r8a77990.c b/drivers/pinctrl/sh-pfc/pfc-r8a77990.c
+index e40908dc37e0..1ce286f7b286 100644
+--- a/drivers/pinctrl/sh-pfc/pfc-r8a77990.c
++++ b/drivers/pinctrl/sh-pfc/pfc-r8a77990.c
+@@ -391,29 +391,33 @@ FM(IP12_23_20) IP12_23_20 FM(IP13_23_20) IP13_23_20 FM(IP14_23_20) IP14_23_20 FM
+ FM(IP12_27_24) IP12_27_24 FM(IP13_27_24) IP13_27_24 FM(IP14_27_24) IP14_27_24 FM(IP15_27_24) IP15_27_24 \
+ FM(IP12_31_28) IP12_31_28 FM(IP13_31_28) IP13_31_28 FM(IP14_31_28) IP14_31_28 FM(IP15_31_28) IP15_31_28
+
++/* The bit numbering in MOD_SEL fields is reversed */
++#define REV4(f0, f1, f2, f3) f0 f2 f1 f3
++#define REV8(f0, f1, f2, f3, f4, f5, f6, f7) f0 f4 f2 f6 f1 f5 f3 f7
++
+ /* MOD_SEL0 */ /* 0 */ /* 1 */ /* 2 */ /* 3 */ /* 4 */ /* 5 */ /* 6 */ /* 7 */
+-#define MOD_SEL0_30_29 FM(SEL_ADGB_0) FM(SEL_ADGB_1) FM(SEL_ADGB_2) F_(0, 0)
++#define MOD_SEL0_30_29 REV4(FM(SEL_ADGB_0), FM(SEL_ADGB_1), FM(SEL_ADGB_2), F_(0, 0))
+ #define MOD_SEL0_28 FM(SEL_DRIF0_0) FM(SEL_DRIF0_1)
+-#define MOD_SEL0_27_26 FM(SEL_FM_0) FM(SEL_FM_1) FM(SEL_FM_2) F_(0, 0)
++#define MOD_SEL0_27_26 REV4(FM(SEL_FM_0), FM(SEL_FM_1), FM(SEL_FM_2), F_(0, 0))
+ #define MOD_SEL0_25 FM(SEL_FSO_0) FM(SEL_FSO_1)
+ #define MOD_SEL0_24 FM(SEL_HSCIF0_0) FM(SEL_HSCIF0_1)
+ #define MOD_SEL0_23 FM(SEL_HSCIF1_0) FM(SEL_HSCIF1_1)
+ #define MOD_SEL0_22 FM(SEL_HSCIF2_0) FM(SEL_HSCIF2_1)
+-#define MOD_SEL0_21_20 FM(SEL_I2C1_0) FM(SEL_I2C1_1) FM(SEL_I2C1_2) FM(SEL_I2C1_3)
+-#define MOD_SEL0_19_18_17 FM(SEL_I2C2_0) FM(SEL_I2C2_1) FM(SEL_I2C2_2) FM(SEL_I2C2_3) FM(SEL_I2C2_4) F_(0, 0) F_(0, 0) F_(0, 0)
++#define MOD_SEL0_21_20 REV4(FM(SEL_I2C1_0), FM(SEL_I2C1_1), FM(SEL_I2C1_2), FM(SEL_I2C1_3))
++#define MOD_SEL0_19_18_17 REV8(FM(SEL_I2C2_0), FM(SEL_I2C2_1), FM(SEL_I2C2_2), FM(SEL_I2C2_3), FM(SEL_I2C2_4), F_(0, 0), F_(0, 0), F_(0, 0))
+ #define MOD_SEL0_16 FM(SEL_NDFC_0) FM(SEL_NDFC_1)
+ #define MOD_SEL0_15 FM(SEL_PWM0_0) FM(SEL_PWM0_1)
+ #define MOD_SEL0_14 FM(SEL_PWM1_0) FM(SEL_PWM1_1)
+-#define MOD_SEL0_13_12 FM(SEL_PWM2_0) FM(SEL_PWM2_1) FM(SEL_PWM2_2) F_(0, 0)
+-#define MOD_SEL0_11_10 FM(SEL_PWM3_0) FM(SEL_PWM3_1) FM(SEL_PWM3_2) F_(0, 0)
++#define MOD_SEL0_13_12 REV4(FM(SEL_PWM2_0), FM(SEL_PWM2_1), FM(SEL_PWM2_2), F_(0, 0))
++#define MOD_SEL0_11_10 REV4(FM(SEL_PWM3_0), FM(SEL_PWM3_1), FM(SEL_PWM3_2), F_(0, 0))
+ #define MOD_SEL0_9 FM(SEL_PWM4_0) FM(SEL_PWM4_1)
+ #define MOD_SEL0_8 FM(SEL_PWM5_0) FM(SEL_PWM5_1)
+ #define MOD_SEL0_7 FM(SEL_PWM6_0) FM(SEL_PWM6_1)
+-#define MOD_SEL0_6_5 FM(SEL_REMOCON_0) FM(SEL_REMOCON_1) FM(SEL_REMOCON_2) F_(0, 0)
++#define MOD_SEL0_6_5 REV4(FM(SEL_REMOCON_0), FM(SEL_REMOCON_1), FM(SEL_REMOCON_2), F_(0, 0))
+ #define MOD_SEL0_4 FM(SEL_SCIF_0) FM(SEL_SCIF_1)
+ #define MOD_SEL0_3 FM(SEL_SCIF0_0) FM(SEL_SCIF0_1)
+ #define MOD_SEL0_2 FM(SEL_SCIF2_0) FM(SEL_SCIF2_1)
+-#define MOD_SEL0_1_0 FM(SEL_SPEED_PULSE_IF_0) FM(SEL_SPEED_PULSE_IF_1) FM(SEL_SPEED_PULSE_IF_2) F_(0, 0)
++#define MOD_SEL0_1_0 REV4(FM(SEL_SPEED_PULSE_IF_0), FM(SEL_SPEED_PULSE_IF_1), FM(SEL_SPEED_PULSE_IF_2), F_(0, 0))
+
+ /* MOD_SEL1 */ /* 0 */ /* 1 */ /* 2 */ /* 3 */ /* 4 */ /* 5 */ /* 6 */ /* 7 */
+ #define MOD_SEL1_31 FM(SEL_SIMCARD_0) FM(SEL_SIMCARD_1)
+@@ -422,18 +426,18 @@ FM(IP12_31_28) IP12_31_28 FM(IP13_31_28) IP13_31_28 FM(IP14_31_28) IP14_31_28 FM
+ #define MOD_SEL1_28 FM(SEL_USB_20_CH0_0) FM(SEL_USB_20_CH0_1)
+ #define MOD_SEL1_26 FM(SEL_DRIF2_0) FM(SEL_DRIF2_1)
+ #define MOD_SEL1_25 FM(SEL_DRIF3_0) FM(SEL_DRIF3_1)
+-#define MOD_SEL1_24_23_22 FM(SEL_HSCIF3_0) FM(SEL_HSCIF3_1) FM(SEL_HSCIF3_2) FM(SEL_HSCIF3_3) FM(SEL_HSCIF3_4) F_(0, 0) F_(0, 0) F_(0, 0)
+-#define MOD_SEL1_21_20_19 FM(SEL_HSCIF4_0) FM(SEL_HSCIF4_1) FM(SEL_HSCIF4_2) FM(SEL_HSCIF4_3) FM(SEL_HSCIF4_4) F_(0, 0) F_(0, 0) F_(0, 0)
++#define MOD_SEL1_24_23_22 REV8(FM(SEL_HSCIF3_0), FM(SEL_HSCIF3_1), FM(SEL_HSCIF3_2), FM(SEL_HSCIF3_3), FM(SEL_HSCIF3_4), F_(0, 0), F_(0, 0), F_(0, 0))
++#define MOD_SEL1_21_20_19 REV8(FM(SEL_HSCIF4_0), FM(SEL_HSCIF4_1), FM(SEL_HSCIF4_2), FM(SEL_HSCIF4_3), FM(SEL_HSCIF4_4), F_(0, 0), F_(0, 0), F_(0, 0))
+ #define MOD_SEL1_18 FM(SEL_I2C6_0) FM(SEL_I2C6_1)
+ #define MOD_SEL1_17 FM(SEL_I2C7_0) FM(SEL_I2C7_1)
+ #define MOD_SEL1_16 FM(SEL_MSIOF2_0) FM(SEL_MSIOF2_1)
+ #define MOD_SEL1_15 FM(SEL_MSIOF3_0) FM(SEL_MSIOF3_1)
+-#define MOD_SEL1_14_13 FM(SEL_SCIF3_0) FM(SEL_SCIF3_1) FM(SEL_SCIF3_2) F_(0, 0)
+-#define MOD_SEL1_12_11 FM(SEL_SCIF4_0) FM(SEL_SCIF4_1) FM(SEL_SCIF4_2) F_(0, 0)
+-#define MOD_SEL1_10_9 FM(SEL_SCIF5_0) FM(SEL_SCIF5_1) FM(SEL_SCIF5_2) F_(0, 0)
++#define MOD_SEL1_14_13 REV4(FM(SEL_SCIF3_0), FM(SEL_SCIF3_1), FM(SEL_SCIF3_2), F_(0, 0))
++#define MOD_SEL1_12_11 REV4(FM(SEL_SCIF4_0), FM(SEL_SCIF4_1), FM(SEL_SCIF4_2), F_(0, 0))
++#define MOD_SEL1_10_9 REV4(FM(SEL_SCIF5_0), FM(SEL_SCIF5_1), FM(SEL_SCIF5_2), F_(0, 0))
+ #define MOD_SEL1_8 FM(SEL_VIN4_0) FM(SEL_VIN4_1)
+ #define MOD_SEL1_7 FM(SEL_VIN5_0) FM(SEL_VIN5_1)
+-#define MOD_SEL1_6_5 FM(SEL_ADGC_0) FM(SEL_ADGC_1) FM(SEL_ADGC_2) F_(0, 0)
++#define MOD_SEL1_6_5 REV4(FM(SEL_ADGC_0), FM(SEL_ADGC_1), FM(SEL_ADGC_2), F_(0, 0))
+ #define MOD_SEL1_4 FM(SEL_SSI9_0) FM(SEL_SSI9_1)
+
+ #define PINMUX_MOD_SELS \
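REV4/REV8 compensate for the reversed bit numbering by storing entry i in the
slot whose bit-reversed index equals i; e.g. entry f1 lands in slot 2 because
reversing the two bits of 1 (0b01) gives 2 (0b10). An adapted standalone
illustration (commas added; the kernel macro emits bare tokens):

    #include <stdio.h>

    /* Commas added for the demo; the kernel macro emits bare tokens. */
    #define REV4(f0, f1, f2, f3) f0, f2, f1, f3

    int main(void)
    {
            const char *slot[] = {
                    REV4("entry0", "entry1", "entry2", "entry3")
            };

            for (int i = 0; i < 4; i++)
                    printf("field value %d selects %s\n", i, slot[i]);
            return 0;
    }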
+diff --git a/drivers/pinctrl/sh-pfc/pfc-r8a77995.c b/drivers/pinctrl/sh-pfc/pfc-r8a77995.c
+index 84d78db381e3..9e377e3b9cb3 100644
+--- a/drivers/pinctrl/sh-pfc/pfc-r8a77995.c
++++ b/drivers/pinctrl/sh-pfc/pfc-r8a77995.c
+@@ -381,6 +381,9 @@ FM(IP12_23_20) IP12_23_20 \
+ FM(IP12_27_24) IP12_27_24 \
+ FM(IP12_31_28) IP12_31_28 \
+
++/* The bit numbering in MOD_SEL fields is reversed */
++#define REV4(f0, f1, f2, f3) f0 f2 f1 f3
++
+ /* MOD_SEL0 */ /* 0 */ /* 1 */ /* 2 */ /* 3 */
+ #define MOD_SEL0_30 FM(SEL_MSIOF2_0) FM(SEL_MSIOF2_1)
+ #define MOD_SEL0_29 FM(SEL_I2C3_0) FM(SEL_I2C3_1)
+@@ -388,10 +391,10 @@ FM(IP12_31_28) IP12_31_28 \
+ #define MOD_SEL0_27 FM(SEL_MSIOF3_0) FM(SEL_MSIOF3_1)
+ #define MOD_SEL0_26 FM(SEL_HSCIF3_0) FM(SEL_HSCIF3_1)
+ #define MOD_SEL0_25 FM(SEL_SCIF4_0) FM(SEL_SCIF4_1)
+-#define MOD_SEL0_24_23 FM(SEL_PWM0_0) FM(SEL_PWM0_1) FM(SEL_PWM0_2) F_(0, 0)
+-#define MOD_SEL0_22_21 FM(SEL_PWM1_0) FM(SEL_PWM1_1) FM(SEL_PWM1_2) F_(0, 0)
+-#define MOD_SEL0_20_19 FM(SEL_PWM2_0) FM(SEL_PWM2_1) FM(SEL_PWM2_2) F_(0, 0)
+-#define MOD_SEL0_18_17 FM(SEL_PWM3_0) FM(SEL_PWM3_1) FM(SEL_PWM3_2) F_(0, 0)
++#define MOD_SEL0_24_23 REV4(FM(SEL_PWM0_0), FM(SEL_PWM0_1), FM(SEL_PWM0_2), F_(0, 0))
++#define MOD_SEL0_22_21 REV4(FM(SEL_PWM1_0), FM(SEL_PWM1_1), FM(SEL_PWM1_2), F_(0, 0))
++#define MOD_SEL0_20_19 REV4(FM(SEL_PWM2_0), FM(SEL_PWM2_1), FM(SEL_PWM2_2), F_(0, 0))
++#define MOD_SEL0_18_17 REV4(FM(SEL_PWM3_0), FM(SEL_PWM3_1), FM(SEL_PWM3_2), F_(0, 0))
+ #define MOD_SEL0_15 FM(SEL_IRQ_0_0) FM(SEL_IRQ_0_1)
+ #define MOD_SEL0_14 FM(SEL_IRQ_1_0) FM(SEL_IRQ_1_1)
+ #define MOD_SEL0_13 FM(SEL_IRQ_2_0) FM(SEL_IRQ_2_1)
+diff --git a/drivers/platform/mellanox/mlxreg-hotplug.c b/drivers/platform/mellanox/mlxreg-hotplug.c
+index b6d44550d98c..eca16d00e310 100644
+--- a/drivers/platform/mellanox/mlxreg-hotplug.c
++++ b/drivers/platform/mellanox/mlxreg-hotplug.c
+@@ -248,7 +248,8 @@ mlxreg_hotplug_work_helper(struct mlxreg_hotplug_priv_data *priv,
+ struct mlxreg_core_item *item)
+ {
+ struct mlxreg_core_data *data;
+- u32 asserted, regval, bit;
++ unsigned long asserted;
++ u32 regval, bit;
+ int ret;
+
+ /*
+@@ -281,7 +282,7 @@ mlxreg_hotplug_work_helper(struct mlxreg_hotplug_priv_data *priv,
+ asserted = item->cache ^ regval;
+ item->cache = regval;
+
+- for_each_set_bit(bit, (unsigned long *)&asserted, 8) {
++ for_each_set_bit(bit, &asserted, 8) {
+ data = item->data + bit;
+ if (regval & BIT(bit)) {
+ if (item->inversed)
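The old code cast &u32 to unsigned long *, so the bitmap walker read a full
long (8 bytes on 64-bit, past the variable) and scanned the wrong half on
big-endian machines; widening to unsigned long before the walk fixes both. A
userspace sketch of the safe pattern, with walk_bits() standing in for
for_each_set_bit() and the masks made up:

    #include <stdio.h>

    /* Userspace stand-in for the kernel's for_each_set_bit() walk. */
    static void walk_bits(const unsigned long *word, unsigned int nbits)
    {
            for (unsigned int bit = 0; bit < nbits; bit++)
                    if (*word & (1UL << bit))
                            printf("bit %u changed\n", bit);
    }

    int main(void)
    {
            unsigned int cache = 0x0f, regval = 0x3c;
            unsigned long asserted = cache ^ regval; /* widen first */

            walk_bits(&asserted, 8); /* safe: pointer really is to a long */
            return 0;
    }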
+diff --git a/drivers/platform/x86/ideapad-laptop.c b/drivers/platform/x86/ideapad-laptop.c
+index 1589dffab9fa..8b53a9ceb897 100644
+--- a/drivers/platform/x86/ideapad-laptop.c
++++ b/drivers/platform/x86/ideapad-laptop.c
+@@ -989,7 +989,7 @@ static const struct dmi_system_id no_hw_rfkill_list[] = {
+ .ident = "Lenovo RESCUER R720-15IKBN",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+- DMI_MATCH(DMI_BOARD_NAME, "80WW"),
++ DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo R720-15IKBN"),
+ },
+ },
+ {
+diff --git a/drivers/platform/x86/intel-hid.c b/drivers/platform/x86/intel-hid.c
+index e28bcf61b126..bc0d55a59015 100644
+--- a/drivers/platform/x86/intel-hid.c
++++ b/drivers/platform/x86/intel-hid.c
+@@ -363,7 +363,7 @@ wakeup:
+ * the 5-button array, but still send notifies with power button
+ * event code to this device object on power button actions.
+ *
+- * Report the power button press; catch and ignore the button release.
++ * Report the power button press and release.
+ */
+ if (!priv->array) {
+ if (event == 0xce) {
+@@ -372,8 +372,11 @@ wakeup:
+ return;
+ }
+
+- if (event == 0xcf)
++ if (event == 0xcf) {
++ input_report_key(priv->input_dev, KEY_POWER, 0);
++ input_sync(priv->input_dev);
+ return;
++ }
+ }
+
+ /* 0xC0 is for HID events, other values are for 5 button array */
+diff --git a/drivers/platform/x86/intel_pmc_core.c b/drivers/platform/x86/intel_pmc_core.c
+index 22dbf115782e..c37e74ee609d 100644
+--- a/drivers/platform/x86/intel_pmc_core.c
++++ b/drivers/platform/x86/intel_pmc_core.c
+@@ -380,7 +380,8 @@ static int pmc_core_ppfear_show(struct seq_file *s, void *unused)
+ index < PPFEAR_MAX_NUM_ENTRIES; index++, iter++)
+ pf_regs[index] = pmc_core_reg_read_byte(pmcdev, iter);
+
+- for (index = 0; map[index].name; index++)
++ for (index = 0; map[index].name &&
++ index < pmcdev->map->ppfear_buckets * 8; index++)
+ pmc_core_display_map(s, index, pf_regs[index / 8], map);
+
+ return 0;
+diff --git a/drivers/platform/x86/intel_pmc_core.h b/drivers/platform/x86/intel_pmc_core.h
+index 89554cba5758..1a0104d2cbf0 100644
+--- a/drivers/platform/x86/intel_pmc_core.h
++++ b/drivers/platform/x86/intel_pmc_core.h
+@@ -32,7 +32,7 @@
+ #define SPT_PMC_SLP_S0_RES_COUNTER_STEP 0x64
+ #define PMC_BASE_ADDR_MASK ~(SPT_PMC_MMIO_REG_LEN - 1)
+ #define MTPMC_MASK 0xffff0000
+-#define PPFEAR_MAX_NUM_ENTRIES 5
++#define PPFEAR_MAX_NUM_ENTRIES 12
+ #define SPT_PPFEAR_NUM_ENTRIES 5
+ #define SPT_PMC_READ_DISABLE_BIT 0x16
+ #define SPT_PMC_MSG_FULL_STS_BIT 0x18
+diff --git a/drivers/power/supply/cpcap-charger.c b/drivers/power/supply/cpcap-charger.c
+index c843eaff8ad0..c3ed7b476676 100644
+--- a/drivers/power/supply/cpcap-charger.c
++++ b/drivers/power/supply/cpcap-charger.c
+@@ -458,6 +458,7 @@ static void cpcap_usb_detect(struct work_struct *work)
+ goto out_err;
+ }
+
++ power_supply_changed(ddata->usb);
+ return;
+
+ out_err:
+diff --git a/drivers/regulator/act8865-regulator.c b/drivers/regulator/act8865-regulator.c
+index 21e20483bd91..e0239cf3f56d 100644
+--- a/drivers/regulator/act8865-regulator.c
++++ b/drivers/regulator/act8865-regulator.c
+@@ -131,7 +131,7 @@
+ * ACT8865 voltage number
+ */
+ #define ACT8865_VOLTAGE_NUM 64
+-#define ACT8600_SUDCDC_VOLTAGE_NUM 255
++#define ACT8600_SUDCDC_VOLTAGE_NUM 256
+
+ struct act8865 {
+ struct regmap *regmap;
+@@ -222,7 +222,8 @@ static const struct regulator_linear_range act8600_sudcdc_voltage_ranges[] = {
+ REGULATOR_LINEAR_RANGE(3000000, 0, 63, 0),
+ REGULATOR_LINEAR_RANGE(3000000, 64, 159, 100000),
+ REGULATOR_LINEAR_RANGE(12600000, 160, 191, 200000),
+- REGULATOR_LINEAR_RANGE(19000000, 191, 255, 400000),
++ REGULATOR_LINEAR_RANGE(19000000, 192, 247, 400000),
++ REGULATOR_LINEAR_RANGE(41400000, 248, 255, 0),
+ };
+
+ static struct regulator_ops act8865_ops = {
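The arithmetic behind the fixed table: the 200 mV range ends at selector
191 = 12.6 V + 31 x 0.2 V = 18.8 V, the 400 mV range spans 192..247 (19.0 V
up to 19.0 V + 55 x 0.4 V = 41.0 V), and 248..255 sit flat at 41.4 V, for 256
selectors in total. A standalone re-computation of the fixed ranges (values
copied from the table above):

    #include <stdio.h>

    static unsigned int sudcdc_uv(unsigned int sel)
    {
            if (sel <= 63)
                    return 3000000;
            if (sel <= 159)
                    return 3000000 + (sel - 64) * 100000;
            if (sel <= 191)
                    return 12600000 + (sel - 160) * 200000;
            if (sel <= 247)
                    return 19000000 + (sel - 192) * 400000;
            return 41400000;       /* 248..255: flat top step */
    }

    int main(void)
    {
            printf("sel 191 -> %u uV\n", sudcdc_uv(191)); /* 18.8 V */
            printf("sel 192 -> %u uV\n", sudcdc_uv(192)); /* 19.0 V */
            printf("sel 247 -> %u uV\n", sudcdc_uv(247)); /* 41.0 V */
            return 0;
    }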
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index b9d7b45c7295..e2caf11598c7 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -1349,7 +1349,9 @@ static int set_machine_constraints(struct regulator_dev *rdev,
+ * We'll only apply the initial system load if an
+ * initial mode wasn't specified.
+ */
++ regulator_lock(rdev);
+ drms_uA_update(rdev);
++ regulator_unlock(rdev);
+ }
+
+ if ((rdev->constraints->ramp_delay || rdev->constraints->ramp_disable)
+diff --git a/drivers/regulator/max77620-regulator.c b/drivers/regulator/max77620-regulator.c
+index b94e3a721721..cd93cf53e23c 100644
+--- a/drivers/regulator/max77620-regulator.c
++++ b/drivers/regulator/max77620-regulator.c
+@@ -1,7 +1,7 @@
+ /*
+ * Maxim MAX77620 Regulator driver
+ *
+- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
++ * Copyright (c) 2016-2018, NVIDIA CORPORATION. All rights reserved.
+ *
+ * Author: Mallikarjun Kasoju <mkasoju@nvidia.com>
+ * Laxman Dewangan <ldewangan@nvidia.com>
+@@ -803,6 +803,14 @@ static int max77620_regulator_probe(struct platform_device *pdev)
+ rdesc = &rinfo[id].desc;
+ pmic->rinfo[id] = &max77620_regs_info[id];
+ pmic->enable_power_mode[id] = MAX77620_POWER_MODE_NORMAL;
++ pmic->reg_pdata[id].active_fps_src = -1;
++ pmic->reg_pdata[id].active_fps_pd_slot = -1;
++ pmic->reg_pdata[id].active_fps_pu_slot = -1;
++ pmic->reg_pdata[id].suspend_fps_src = -1;
++ pmic->reg_pdata[id].suspend_fps_pd_slot = -1;
++ pmic->reg_pdata[id].suspend_fps_pu_slot = -1;
++ pmic->reg_pdata[id].power_ok = -1;
++ pmic->reg_pdata[id].ramp_rate_setting = -1;
+
+ ret = max77620_read_slew_rate(pmic, id);
+ if (ret < 0)
+diff --git a/drivers/regulator/mcp16502.c b/drivers/regulator/mcp16502.c
+index 3479ae009b0b..0fc4963bd5b0 100644
+--- a/drivers/regulator/mcp16502.c
++++ b/drivers/regulator/mcp16502.c
+@@ -17,6 +17,7 @@
+ #include <linux/regmap.h>
+ #include <linux/regulator/driver.h>
+ #include <linux/suspend.h>
++#include <linux/gpio/consumer.h>
+
+ #define VDD_LOW_SEL 0x0D
+ #define VDD_HIGH_SEL 0x3F
+diff --git a/drivers/regulator/s2mpa01.c b/drivers/regulator/s2mpa01.c
+index 095d25f3d2ea..58a1fe583a6c 100644
+--- a/drivers/regulator/s2mpa01.c
++++ b/drivers/regulator/s2mpa01.c
+@@ -298,13 +298,13 @@ static const struct regulator_desc regulators[] = {
+ regulator_desc_ldo(2, STEP_50_MV),
+ regulator_desc_ldo(3, STEP_50_MV),
+ regulator_desc_ldo(4, STEP_50_MV),
+- regulator_desc_ldo(5, STEP_50_MV),
++ regulator_desc_ldo(5, STEP_25_MV),
+ regulator_desc_ldo(6, STEP_25_MV),
+ regulator_desc_ldo(7, STEP_50_MV),
+ regulator_desc_ldo(8, STEP_50_MV),
+ regulator_desc_ldo(9, STEP_50_MV),
+ regulator_desc_ldo(10, STEP_50_MV),
+- regulator_desc_ldo(11, STEP_25_MV),
++ regulator_desc_ldo(11, STEP_50_MV),
+ regulator_desc_ldo(12, STEP_50_MV),
+ regulator_desc_ldo(13, STEP_50_MV),
+ regulator_desc_ldo(14, STEP_50_MV),
+@@ -315,11 +315,11 @@ static const struct regulator_desc regulators[] = {
+ regulator_desc_ldo(19, STEP_50_MV),
+ regulator_desc_ldo(20, STEP_50_MV),
+ regulator_desc_ldo(21, STEP_50_MV),
+- regulator_desc_ldo(22, STEP_25_MV),
+- regulator_desc_ldo(23, STEP_25_MV),
++ regulator_desc_ldo(22, STEP_50_MV),
++ regulator_desc_ldo(23, STEP_50_MV),
+ regulator_desc_ldo(24, STEP_50_MV),
+ regulator_desc_ldo(25, STEP_50_MV),
+- regulator_desc_ldo(26, STEP_50_MV),
++ regulator_desc_ldo(26, STEP_25_MV),
+ regulator_desc_buck1_4(1),
+ regulator_desc_buck1_4(2),
+ regulator_desc_buck1_4(3),
+diff --git a/drivers/regulator/s2mps11.c b/drivers/regulator/s2mps11.c
+index ee4a23ab0663..134c62db36c5 100644
+--- a/drivers/regulator/s2mps11.c
++++ b/drivers/regulator/s2mps11.c
+@@ -362,7 +362,7 @@ static const struct regulator_desc s2mps11_regulators[] = {
+ regulator_desc_s2mps11_ldo(32, STEP_50_MV),
+ regulator_desc_s2mps11_ldo(33, STEP_50_MV),
+ regulator_desc_s2mps11_ldo(34, STEP_50_MV),
+- regulator_desc_s2mps11_ldo(35, STEP_50_MV),
++ regulator_desc_s2mps11_ldo(35, STEP_25_MV),
+ regulator_desc_s2mps11_ldo(36, STEP_50_MV),
+ regulator_desc_s2mps11_ldo(37, STEP_50_MV),
+ regulator_desc_s2mps11_ldo(38, STEP_50_MV),
+@@ -372,8 +372,8 @@ static const struct regulator_desc s2mps11_regulators[] = {
+ regulator_desc_s2mps11_buck1_4(4),
+ regulator_desc_s2mps11_buck5,
+ regulator_desc_s2mps11_buck67810(6, MIN_600_MV, STEP_6_25_MV),
+- regulator_desc_s2mps11_buck67810(7, MIN_600_MV, STEP_6_25_MV),
+- regulator_desc_s2mps11_buck67810(8, MIN_600_MV, STEP_6_25_MV),
++ regulator_desc_s2mps11_buck67810(7, MIN_600_MV, STEP_12_5_MV),
++ regulator_desc_s2mps11_buck67810(8, MIN_600_MV, STEP_12_5_MV),
+ regulator_desc_s2mps11_buck9,
+ regulator_desc_s2mps11_buck67810(10, MIN_750_MV, STEP_12_5_MV),
+ };
+diff --git a/drivers/s390/cio/vfio_ccw_drv.c b/drivers/s390/cio/vfio_ccw_drv.c
+index a10cec0e86eb..0b3b9de45c60 100644
+--- a/drivers/s390/cio/vfio_ccw_drv.c
++++ b/drivers/s390/cio/vfio_ccw_drv.c
+@@ -72,20 +72,24 @@ static void vfio_ccw_sch_io_todo(struct work_struct *work)
+ {
+ struct vfio_ccw_private *private;
+ struct irb *irb;
++ bool is_final;
+
+ private = container_of(work, struct vfio_ccw_private, io_work);
+ irb = &private->irb;
+
++ is_final = !(scsw_actl(&irb->scsw) &
++ (SCSW_ACTL_DEVACT | SCSW_ACTL_SCHACT));
+ if (scsw_is_solicited(&irb->scsw)) {
+ cp_update_scsw(&private->cp, &irb->scsw);
+- cp_free(&private->cp);
++ if (is_final)
++ cp_free(&private->cp);
+ }
+ memcpy(private->io_region->irb_area, irb, sizeof(*irb));
+
+ if (private->io_trigger)
+ eventfd_signal(private->io_trigger, 1);
+
+- if (private->mdev)
++ if (private->mdev && is_final)
+ private->state = VFIO_CCW_STATE_IDLE;
+ }
+
+diff --git a/drivers/s390/crypto/vfio_ap_drv.c b/drivers/s390/crypto/vfio_ap_drv.c
+index 31c6c847eaca..e9824c35c34f 100644
+--- a/drivers/s390/crypto/vfio_ap_drv.c
++++ b/drivers/s390/crypto/vfio_ap_drv.c
+@@ -15,7 +15,6 @@
+ #include "vfio_ap_private.h"
+
+ #define VFIO_AP_ROOT_NAME "vfio_ap"
+-#define VFIO_AP_DEV_TYPE_NAME "ap_matrix"
+ #define VFIO_AP_DEV_NAME "matrix"
+
+ MODULE_AUTHOR("IBM Corporation");
+@@ -24,10 +23,6 @@ MODULE_LICENSE("GPL v2");
+
+ static struct ap_driver vfio_ap_drv;
+
+-static struct device_type vfio_ap_dev_type = {
+- .name = VFIO_AP_DEV_TYPE_NAME,
+-};
+-
+ struct ap_matrix_dev *matrix_dev;
+
+ /* Only type 10 adapters (CEX4 and later) are supported
+@@ -62,6 +57,22 @@ static void vfio_ap_matrix_dev_release(struct device *dev)
+ kfree(matrix_dev);
+ }
+
++static int matrix_bus_match(struct device *dev, struct device_driver *drv)
++{
++ return 1;
++}
++
++static struct bus_type matrix_bus = {
++ .name = "matrix",
++ .match = &matrix_bus_match,
++};
++
++static struct device_driver matrix_driver = {
++ .name = "vfio_ap",
++ .bus = &matrix_bus,
++ .suppress_bind_attrs = true,
++};
++
+ static int vfio_ap_matrix_dev_create(void)
+ {
+ int ret;
+@@ -71,6 +82,10 @@ static int vfio_ap_matrix_dev_create(void)
+ if (IS_ERR(root_device))
+ return PTR_ERR(root_device);
+
++ ret = bus_register(&matrix_bus);
++ if (ret)
++ goto bus_register_err;
++
+ matrix_dev = kzalloc(sizeof(*matrix_dev), GFP_KERNEL);
+ if (!matrix_dev) {
+ ret = -ENOMEM;
+@@ -87,30 +102,41 @@ static int vfio_ap_matrix_dev_create(void)
+ mutex_init(&matrix_dev->lock);
+ INIT_LIST_HEAD(&matrix_dev->mdev_list);
+
+- matrix_dev->device.type = &vfio_ap_dev_type;
+ dev_set_name(&matrix_dev->device, "%s", VFIO_AP_DEV_NAME);
+ matrix_dev->device.parent = root_device;
++ matrix_dev->device.bus = &matrix_bus;
+ matrix_dev->device.release = vfio_ap_matrix_dev_release;
+- matrix_dev->device.driver = &vfio_ap_drv.driver;
++ matrix_dev->vfio_ap_drv = &vfio_ap_drv;
+
+ ret = device_register(&matrix_dev->device);
+ if (ret)
+ goto matrix_reg_err;
+
++ ret = driver_register(&matrix_driver);
++ if (ret)
++ goto matrix_drv_err;
++
+ return 0;
+
++matrix_drv_err:
++ device_unregister(&matrix_dev->device);
+ matrix_reg_err:
+ put_device(&matrix_dev->device);
+ matrix_alloc_err:
++ bus_unregister(&matrix_bus);
++bus_register_err:
+ root_device_unregister(root_device);
+-
+ return ret;
+ }
+
+ static void vfio_ap_matrix_dev_destroy(void)
+ {
++ struct device *root_device = matrix_dev->device.parent;
++
++ driver_unregister(&matrix_driver);
+ device_unregister(&matrix_dev->device);
+- root_device_unregister(matrix_dev->device.parent);
++ bus_unregister(&matrix_bus);
++ root_device_unregister(root_device);
+ }
+
+ static int __init vfio_ap_init(void)
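The registration sequence follows the usual mirrored-unwind idiom: each new
step adds an error label that undoes everything registered before it, in
reverse order. A generic standalone sketch, all functions hypothetical stubs:

    #include <stdio.h>

    static int  setup_bus(void)    { puts("bus registered");    return 0; }
    static int  setup_device(void) { puts("device registered"); return 0; }
    static int  setup_driver(void) { puts("driver registered"); return 0; }
    static void undo_device(void)  { puts("device unregistered"); }
    static void undo_bus(void)     { puts("bus unregistered"); }

    static int setup_all(void)
    {
            int ret;

            ret = setup_bus();
            if (ret)
                    goto err_bus;
            ret = setup_device();
            if (ret)
                    goto err_device;
            ret = setup_driver();
            if (ret)
                    goto err_driver;
            return 0;

    err_driver:                    /* unwind strictly in reverse order */
            undo_device();
    err_device:
            undo_bus();
    err_bus:
            return ret;
    }

    int main(void) { return setup_all(); }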
+diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_ap_ops.c
+index 272ef427dcc0..900b9cf20ca5 100644
+--- a/drivers/s390/crypto/vfio_ap_ops.c
++++ b/drivers/s390/crypto/vfio_ap_ops.c
+@@ -198,8 +198,8 @@ static int vfio_ap_verify_queue_reserved(unsigned long *apid,
+ qres.apqi = apqi;
+ qres.reserved = false;
+
+- ret = driver_for_each_device(matrix_dev->device.driver, NULL, &qres,
+- vfio_ap_has_queue);
++ ret = driver_for_each_device(&matrix_dev->vfio_ap_drv->driver, NULL,
++ &qres, vfio_ap_has_queue);
+ if (ret)
+ return ret;
+
+diff --git a/drivers/s390/crypto/vfio_ap_private.h b/drivers/s390/crypto/vfio_ap_private.h
+index 5675492233c7..76b7f98e47e9 100644
+--- a/drivers/s390/crypto/vfio_ap_private.h
++++ b/drivers/s390/crypto/vfio_ap_private.h
+@@ -40,6 +40,7 @@ struct ap_matrix_dev {
+ struct ap_config_info info;
+ struct list_head mdev_list;
+ struct mutex lock;
++ struct ap_driver *vfio_ap_drv;
+ };
+
+ extern struct ap_matrix_dev *matrix_dev;
+diff --git a/drivers/s390/net/ism_drv.c b/drivers/s390/net/ism_drv.c
+index ed8e58f09054..3e132592c1fe 100644
+--- a/drivers/s390/net/ism_drv.c
++++ b/drivers/s390/net/ism_drv.c
+@@ -141,10 +141,13 @@ static int register_ieq(struct ism_dev *ism)
+
+ static int unregister_sba(struct ism_dev *ism)
+ {
++ int ret;
++
+ if (!ism->sba)
+ return 0;
+
+- if (ism_cmd_simple(ism, ISM_UNREG_SBA))
++ ret = ism_cmd_simple(ism, ISM_UNREG_SBA);
++ if (ret && ret != ISM_ERROR)
+ return -EIO;
+
+ dma_free_coherent(&ism->pdev->dev, PAGE_SIZE,
+@@ -158,10 +161,13 @@ static int unregister_sba(struct ism_dev *ism)
+
+ static int unregister_ieq(struct ism_dev *ism)
+ {
++ int ret;
++
+ if (!ism->ieq)
+ return 0;
+
+- if (ism_cmd_simple(ism, ISM_UNREG_IEQ))
++ ret = ism_cmd_simple(ism, ISM_UNREG_IEQ);
++ if (ret && ret != ISM_ERROR)
+ return -EIO;
+
+ dma_free_coherent(&ism->pdev->dev, PAGE_SIZE,
+@@ -287,7 +293,7 @@ static int ism_unregister_dmb(struct smcd_dev *smcd, struct smcd_dmb *dmb)
+ cmd.request.dmb_tok = dmb->dmb_tok;
+
+ ret = ism_cmd(ism, &cmd);
+- if (ret)
++ if (ret && ret != ISM_ERROR)
+ goto out;
+
+ ism_free_dmb(ism, dmb);
+diff --git a/drivers/s390/scsi/zfcp_erp.c b/drivers/s390/scsi/zfcp_erp.c
+index 744a64680d5b..e8fc28dba8df 100644
+--- a/drivers/s390/scsi/zfcp_erp.c
++++ b/drivers/s390/scsi/zfcp_erp.c
+@@ -624,6 +624,20 @@ static void zfcp_erp_strategy_memwait(struct zfcp_erp_action *erp_action)
+ add_timer(&erp_action->timer);
+ }
+
++void zfcp_erp_port_forced_reopen_all(struct zfcp_adapter *adapter,
++ int clear, char *dbftag)
++{
++ unsigned long flags;
++ struct zfcp_port *port;
++
++ write_lock_irqsave(&adapter->erp_lock, flags);
++ read_lock(&adapter->port_list_lock);
++ list_for_each_entry(port, &adapter->port_list, list)
++ _zfcp_erp_port_forced_reopen(port, clear, dbftag);
++ read_unlock(&adapter->port_list_lock);
++ write_unlock_irqrestore(&adapter->erp_lock, flags);
++}
++
+ static void _zfcp_erp_port_reopen_all(struct zfcp_adapter *adapter,
+ int clear, char *dbftag)
+ {
+@@ -1341,6 +1355,9 @@ static void zfcp_erp_try_rport_unblock(struct zfcp_port *port)
+ struct zfcp_scsi_dev *zsdev = sdev_to_zfcp(sdev);
+ int lun_status;
+
++ if (sdev->sdev_state == SDEV_DEL ||
++ sdev->sdev_state == SDEV_CANCEL)
++ continue;
+ if (zsdev->port != port)
+ continue;
+ /* LUN under port of interest */
+diff --git a/drivers/s390/scsi/zfcp_ext.h b/drivers/s390/scsi/zfcp_ext.h
+index 3fce47b0b21b..c6acca521ffe 100644
+--- a/drivers/s390/scsi/zfcp_ext.h
++++ b/drivers/s390/scsi/zfcp_ext.h
+@@ -70,6 +70,8 @@ extern void zfcp_erp_port_reopen(struct zfcp_port *port, int clear,
+ char *dbftag);
+ extern void zfcp_erp_port_shutdown(struct zfcp_port *, int, char *);
+ extern void zfcp_erp_port_forced_reopen(struct zfcp_port *, int, char *);
++extern void zfcp_erp_port_forced_reopen_all(struct zfcp_adapter *adapter,
++ int clear, char *dbftag);
+ extern void zfcp_erp_set_lun_status(struct scsi_device *, u32);
+ extern void zfcp_erp_clear_lun_status(struct scsi_device *, u32);
+ extern void zfcp_erp_lun_reopen(struct scsi_device *, int, char *);
+diff --git a/drivers/s390/scsi/zfcp_scsi.c b/drivers/s390/scsi/zfcp_scsi.c
+index f4f6a07c5222..221d0dfb8493 100644
+--- a/drivers/s390/scsi/zfcp_scsi.c
++++ b/drivers/s390/scsi/zfcp_scsi.c
+@@ -368,6 +368,10 @@ static int zfcp_scsi_eh_host_reset_handler(struct scsi_cmnd *scpnt)
+ struct zfcp_adapter *adapter = zfcp_sdev->port->adapter;
+ int ret = SUCCESS, fc_ret;
+
++ if (!(adapter->connection_features & FSF_FEATURE_NPIV_MODE)) {
++ zfcp_erp_port_forced_reopen_all(adapter, 0, "schrh_p");
++ zfcp_erp_wait(adapter);
++ }
+ zfcp_erp_adapter_reopen(adapter, 0, "schrh_1");
+ zfcp_erp_wait(adapter);
+ fc_ret = fc_block_scsi_eh(scpnt);
+diff --git a/drivers/s390/virtio/virtio_ccw.c b/drivers/s390/virtio/virtio_ccw.c
+index ae1d56da671d..1a738fe9f26b 100644
+--- a/drivers/s390/virtio/virtio_ccw.c
++++ b/drivers/s390/virtio/virtio_ccw.c
+@@ -272,6 +272,8 @@ static void virtio_ccw_drop_indicators(struct virtio_ccw_device *vcdev)
+ {
+ struct virtio_ccw_vq_info *info;
+
++ if (!vcdev->airq_info)
++ return;
+ list_for_each_entry(info, &vcdev->virtqueues, node)
+ drop_airq_indicator(info->vq, vcdev->airq_info);
+ }
+@@ -413,7 +415,7 @@ static int virtio_ccw_read_vq_conf(struct virtio_ccw_device *vcdev,
+ ret = ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_READ_VQ_CONF);
+ if (ret)
+ return ret;
+- return vcdev->config_block->num;
++ return vcdev->config_block->num ?: -ENOENT;
+ }
+
+ static void virtio_ccw_del_vq(struct virtqueue *vq, struct ccw1 *ccw)
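The "num ?: -ENOENT" form is the GNU conditional with omitted middle operand:
it yields the left operand when nonzero, else the right, so a zero queue size
now surfaces as an error instead of a bogus success. A tiny demo of the
extension (gcc/clang; -2 stands in for -ENOENT):

    #include <stdio.h>

    int main(void)
    {
            int num = 0;
            int ret = num ?: -2;  /* GNU "x ?: y": x if nonzero, else y */

            printf("%d\n", ret);  /* -2: zero queue size becomes an error */
            return 0;
    }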
+diff --git a/drivers/scsi/aacraid/commsup.c b/drivers/scsi/aacraid/commsup.c
+index d5a6aa9676c8..a3adc954f40f 100644
+--- a/drivers/scsi/aacraid/commsup.c
++++ b/drivers/scsi/aacraid/commsup.c
+@@ -1303,8 +1303,9 @@ static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
+ ADD : DELETE;
+ break;
+ }
+- case AifBuManagerEvent:
+- aac_handle_aif_bu(dev, aifcmd);
++ break;
++ case AifBuManagerEvent:
++ aac_handle_aif_bu(dev, aifcmd);
+ break;
+ }
+
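This is a classic missing-break fix: without the added break the preceding
case fell through and ran the backup-manager handler as well. A minimal
standalone reminder of the hazard:

    #include <stdio.h>

    int main(void)
    {
            int event = 1;

            switch (event) {
            case 1:
                    printf("container event handled\n");
                    break; /* without this, control falls into case 2 */
            case 2:
                    printf("backup-manager event handled\n");
                    break;
            }
            return 0;
    }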
+diff --git a/drivers/scsi/aacraid/linit.c b/drivers/scsi/aacraid/linit.c
+index 7e56a11836c1..ccefface7e31 100644
+--- a/drivers/scsi/aacraid/linit.c
++++ b/drivers/scsi/aacraid/linit.c
+@@ -413,13 +413,16 @@ static int aac_slave_configure(struct scsi_device *sdev)
+ if (chn < AAC_MAX_BUSES && tid < AAC_MAX_TARGETS && aac->sa_firmware) {
+ devtype = aac->hba_map[chn][tid].devtype;
+
+- if (devtype == AAC_DEVTYPE_NATIVE_RAW)
++ if (devtype == AAC_DEVTYPE_NATIVE_RAW) {
+ depth = aac->hba_map[chn][tid].qd_limit;
+- else if (devtype == AAC_DEVTYPE_ARC_RAW)
++ set_timeout = 1;
++ goto common_config;
++ }
++ if (devtype == AAC_DEVTYPE_ARC_RAW) {
+ set_qd_dev_type = true;
+-
+- set_timeout = 1;
+- goto common_config;
++ set_timeout = 1;
++ goto common_config;
++ }
+ }
+
+ if (aac->jbod && (sdev->type == TYPE_DISK))
+diff --git a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
+index 2e4e7159ebf9..a75e74ad1698 100644
+--- a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
++++ b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
+@@ -1438,7 +1438,7 @@ bind_err:
+ static struct bnx2fc_interface *
+ bnx2fc_interface_create(struct bnx2fc_hba *hba,
+ struct net_device *netdev,
+- enum fip_state fip_mode)
++ enum fip_mode fip_mode)
+ {
+ struct fcoe_ctlr_device *ctlr_dev;
+ struct bnx2fc_interface *interface;
+diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c
+index cd19be3f3405..8ba8862d3292 100644
+--- a/drivers/scsi/fcoe/fcoe.c
++++ b/drivers/scsi/fcoe/fcoe.c
+@@ -389,7 +389,7 @@ static int fcoe_interface_setup(struct fcoe_interface *fcoe,
+ * Returns: pointer to a struct fcoe_interface or NULL on error
+ */
+ static struct fcoe_interface *fcoe_interface_create(struct net_device *netdev,
+- enum fip_state fip_mode)
++ enum fip_mode fip_mode)
+ {
+ struct fcoe_ctlr_device *ctlr_dev;
+ struct fcoe_ctlr *ctlr;
+diff --git a/drivers/scsi/fcoe/fcoe_ctlr.c b/drivers/scsi/fcoe/fcoe_ctlr.c
+index 54da3166da8d..7dc4ffa24430 100644
+--- a/drivers/scsi/fcoe/fcoe_ctlr.c
++++ b/drivers/scsi/fcoe/fcoe_ctlr.c
+@@ -147,7 +147,7 @@ static void fcoe_ctlr_map_dest(struct fcoe_ctlr *fip)
+ * fcoe_ctlr_init() - Initialize the FCoE Controller instance
+ * @fip: The FCoE controller to initialize
+ */
+-void fcoe_ctlr_init(struct fcoe_ctlr *fip, enum fip_state mode)
++void fcoe_ctlr_init(struct fcoe_ctlr *fip, enum fip_mode mode)
+ {
+ fcoe_ctlr_set_state(fip, FIP_ST_LINK_WAIT);
+ fip->mode = mode;
+@@ -454,7 +454,10 @@ void fcoe_ctlr_link_up(struct fcoe_ctlr *fip)
+ mutex_unlock(&fip->ctlr_mutex);
+ fc_linkup(fip->lp);
+ } else if (fip->state == FIP_ST_LINK_WAIT) {
+- fcoe_ctlr_set_state(fip, fip->mode);
++ if (fip->mode == FIP_MODE_NON_FIP)
++ fcoe_ctlr_set_state(fip, FIP_ST_NON_FIP);
++ else
++ fcoe_ctlr_set_state(fip, FIP_ST_AUTO);
+ switch (fip->mode) {
+ default:
+ LIBFCOE_FIP_DBG(fip, "invalid mode %d\n", fip->mode);
+diff --git a/drivers/scsi/fcoe/fcoe_transport.c b/drivers/scsi/fcoe/fcoe_transport.c
+index f4909cd206d3..f15d5e1d56b1 100644
+--- a/drivers/scsi/fcoe/fcoe_transport.c
++++ b/drivers/scsi/fcoe/fcoe_transport.c
+@@ -873,7 +873,7 @@ static int fcoe_transport_create(const char *buffer,
+ int rc = -ENODEV;
+ struct net_device *netdev = NULL;
+ struct fcoe_transport *ft = NULL;
+- enum fip_state fip_mode = (enum fip_state)(long)kp->arg;
++ enum fip_mode fip_mode = (enum fip_mode)kp->arg;
+
+ mutex_lock(&ft_mutex);
+
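The run of fip_state -> fip_mode changes closes a silent type confusion: C
happily passes one enum where another is expected, and when the paired
constants differ in value the controller starts in the wrong mode. A
standalone illustration with hypothetical stand-in enums:

    #include <stdio.h>

    /* Stand-ins: members pair up by name, not by value. */
    enum state { ST_DISABLED, ST_LINK_WAIT, ST_AUTO };
    enum mode  { MODE_AUTO, MODE_NON_FIP, MODE_FABRIC };

    static void ctlr_init(enum mode m)
    {
            printf("initialized with mode %d\n", m);
    }

    int main(void)
    {
            ctlr_init(ST_AUTO);   /* compiles, but passes 2 == MODE_FABRIC */
            ctlr_init(MODE_AUTO); /* the intended call */
            return 0;
    }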
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c
+index bc17fa0d8375..62d158574281 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_main.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_main.c
+@@ -10,6 +10,7 @@
+ */
+
+ #include "hisi_sas.h"
++#include "../libsas/sas_internal.h"
+ #define DRV_NAME "hisi_sas"
+
+ #define DEV_IS_GONE(dev) \
+@@ -872,7 +873,8 @@ static void hisi_sas_do_release_task(struct hisi_hba *hisi_hba, struct sas_task
+ spin_lock_irqsave(&task->task_state_lock, flags);
+ task->task_state_flags &=
+ ~(SAS_TASK_STATE_PENDING | SAS_TASK_AT_INITIATOR);
+- task->task_state_flags |= SAS_TASK_STATE_DONE;
++ if (!slot->is_internal && task->task_proto != SAS_PROTOCOL_SMP)
++ task->task_state_flags |= SAS_TASK_STATE_DONE;
+ spin_unlock_irqrestore(&task->task_state_lock, flags);
+ }
+
+@@ -1972,9 +1974,18 @@ static int hisi_sas_write_gpio(struct sas_ha_struct *sha, u8 reg_type,
+
+ static void hisi_sas_phy_disconnected(struct hisi_sas_phy *phy)
+ {
++ struct asd_sas_phy *sas_phy = &phy->sas_phy;
++ struct sas_phy *sphy = sas_phy->phy;
++ struct sas_phy_data *d = sphy->hostdata;
++
+ phy->phy_attached = 0;
+ phy->phy_type = 0;
+ phy->port = NULL;
++
++ if (d->enable)
++ sphy->negotiated_linkrate = SAS_LINK_RATE_UNKNOWN;
++ else
++ sphy->negotiated_linkrate = SAS_PHY_DISABLED;
+ }
+
+ void hisi_sas_phy_down(struct hisi_hba *hisi_hba, int phy_no, int rdy)
+diff --git a/drivers/scsi/ibmvscsi/ibmvscsi.c b/drivers/scsi/ibmvscsi/ibmvscsi.c
+index 1135e74646e2..8cec5230fe31 100644
+--- a/drivers/scsi/ibmvscsi/ibmvscsi.c
++++ b/drivers/scsi/ibmvscsi/ibmvscsi.c
+@@ -96,6 +96,7 @@ static int client_reserve = 1;
+ static char partition_name[96] = "UNKNOWN";
+ static unsigned int partition_number = -1;
+ static LIST_HEAD(ibmvscsi_head);
++static DEFINE_SPINLOCK(ibmvscsi_driver_lock);
+
+ static struct scsi_transport_template *ibmvscsi_transport_template;
+
+@@ -2270,7 +2271,9 @@ static int ibmvscsi_probe(struct vio_dev *vdev, const struct vio_device_id *id)
+ }
+
+ dev_set_drvdata(&vdev->dev, hostdata);
++ spin_lock(&ibmvscsi_driver_lock);
+ list_add_tail(&hostdata->host_list, &ibmvscsi_head);
++ spin_unlock(&ibmvscsi_driver_lock);
+ return 0;
+
+ add_srp_port_failed:
+@@ -2292,15 +2295,27 @@ static int ibmvscsi_probe(struct vio_dev *vdev, const struct vio_device_id *id)
+ static int ibmvscsi_remove(struct vio_dev *vdev)
+ {
+ struct ibmvscsi_host_data *hostdata = dev_get_drvdata(&vdev->dev);
+- list_del(&hostdata->host_list);
+- unmap_persist_bufs(hostdata);
++ unsigned long flags;
++
++ srp_remove_host(hostdata->host);
++ scsi_remove_host(hostdata->host);
++
++ purge_requests(hostdata, DID_ERROR);
++
++ spin_lock_irqsave(hostdata->host->host_lock, flags);
+ release_event_pool(&hostdata->pool, hostdata);
++ spin_unlock_irqrestore(hostdata->host->host_lock, flags);
++
+ ibmvscsi_release_crq_queue(&hostdata->queue, hostdata,
+ max_events);
+
+ kthread_stop(hostdata->work_thread);
+- srp_remove_host(hostdata->host);
+- scsi_remove_host(hostdata->host);
++ unmap_persist_bufs(hostdata);
++
++ spin_lock(&ibmvscsi_driver_lock);
++ list_del(&hostdata->host_list);
++ spin_unlock(&ibmvscsi_driver_lock);
++
+ scsi_host_put(hostdata->host);
+
+ return 0;
+diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
+index fcbff83c0097..c9811d1aa007 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_base.c
++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
+@@ -4188,6 +4188,7 @@ int megasas_alloc_cmds(struct megasas_instance *instance)
+ if (megasas_create_frame_pool(instance)) {
+ dev_printk(KERN_DEBUG, &instance->pdev->dev, "Error creating frame DMA pool\n");
+ megasas_free_cmds(instance);
++ return -ENOMEM;
+ }
+
+ return 0;
+diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
+index 9bbc19fc190b..9f9431a4cc0e 100644
+--- a/drivers/scsi/qedf/qedf_main.c
++++ b/drivers/scsi/qedf/qedf_main.c
+@@ -1418,7 +1418,7 @@ static struct libfc_function_template qedf_lport_template = {
+
+ static void qedf_fcoe_ctlr_setup(struct qedf_ctx *qedf)
+ {
+- fcoe_ctlr_init(&qedf->ctlr, FIP_ST_AUTO);
++ fcoe_ctlr_init(&qedf->ctlr, FIP_MODE_AUTO);
+
+ qedf->ctlr.send = qedf_fip_send;
+ qedf->ctlr.get_src_addr = qedf_get_src_mac;
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 8d1acc802a67..7f8946844a5e 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -644,11 +644,14 @@ static void qla24xx_handle_gnl_done_event(scsi_qla_host_t *vha,
+ break;
+ case DSC_LS_PORT_UNAVAIL:
+ default:
+- if (fcport->loop_id != FC_NO_LOOP_ID)
+- qla2x00_clear_loop_id(fcport);
+-
+- fcport->loop_id = loop_id;
+- fcport->fw_login_state = DSC_LS_PORT_UNAVAIL;
++ if (fcport->loop_id == FC_NO_LOOP_ID) {
++ qla2x00_find_new_loop_id(vha, fcport);
++ fcport->fw_login_state =
++ DSC_LS_PORT_UNAVAIL;
++ }
++ ql_dbg(ql_dbg_disc, vha, 0x20e5,
++ "%s %d %8phC\n", __func__, __LINE__,
++ fcport->port_name);
+ qla24xx_fcport_handle_login(vha, fcport);
+ break;
+ }
+@@ -1471,29 +1474,6 @@ int qla24xx_fcport_handle_login(struct scsi_qla_host *vha, fc_port_t *fcport)
+ return 0;
+ }
+
+-static
+-void qla24xx_handle_rscn_event(fc_port_t *fcport, struct event_arg *ea)
+-{
+- fcport->rscn_gen++;
+-
+- ql_dbg(ql_dbg_disc, fcport->vha, 0x210c,
+- "%s %8phC DS %d LS %d\n",
+- __func__, fcport->port_name, fcport->disc_state,
+- fcport->fw_login_state);
+-
+- if (fcport->flags & FCF_ASYNC_SENT)
+- return;
+-
+- switch (fcport->disc_state) {
+- case DSC_DELETED:
+- case DSC_LOGIN_COMPLETE:
+- qla24xx_post_gpnid_work(fcport->vha, &ea->id);
+- break;
+- default:
+- break;
+- }
+-}
+-
+ int qla24xx_post_newsess_work(struct scsi_qla_host *vha, port_id_t *id,
+ u8 *port_name, u8 *node_name, void *pla, u8 fc4_type)
+ {
+@@ -1560,8 +1540,6 @@ static void qla_handle_els_plogi_done(scsi_qla_host_t *vha,
+
+ void qla2x00_fcport_event_handler(scsi_qla_host_t *vha, struct event_arg *ea)
+ {
+- fc_port_t *f, *tf;
+- uint32_t id = 0, mask, rid;
+ fc_port_t *fcport;
+
+ switch (ea->event) {
+@@ -1574,10 +1552,6 @@ void qla2x00_fcport_event_handler(scsi_qla_host_t *vha, struct event_arg *ea)
+ case FCME_RSCN:
+ if (test_bit(UNLOADING, &vha->dpc_flags))
+ return;
+- switch (ea->id.b.rsvd_1) {
+- case RSCN_PORT_ADDR:
+-#define BIGSCAN 1
+-#if defined BIGSCAN & BIGSCAN > 0
+ {
+ unsigned long flags;
+ fcport = qla2x00_find_fcport_by_nportid
+@@ -1596,59 +1570,6 @@ void qla2x00_fcport_event_handler(scsi_qla_host_t *vha, struct event_arg *ea)
+ }
+ spin_unlock_irqrestore(&vha->work_lock, flags);
+ }
+-#else
+- {
+- int rc;
+- fcport = qla2x00_find_fcport_by_nportid(vha, &ea->id, 1);
+- if (!fcport) {
+- /* cable moved */
+- rc = qla24xx_post_gpnid_work(vha, &ea->id);
+- if (rc) {
+- ql_log(ql_log_warn, vha, 0xd044,
+- "RSCN GPNID work failed %06x\n",
+- ea->id.b24);
+- }
+- } else {
+- ea->fcport = fcport;
+- fcport->scan_needed = 1;
+- qla24xx_handle_rscn_event(fcport, ea);
+- }
+- }
+-#endif
+- break;
+- case RSCN_AREA_ADDR:
+- case RSCN_DOM_ADDR:
+- if (ea->id.b.rsvd_1 == RSCN_AREA_ADDR) {
+- mask = 0xffff00;
+- ql_dbg(ql_dbg_async, vha, 0x5044,
+- "RSCN: Area 0x%06x was affected\n",
+- ea->id.b24);
+- } else {
+- mask = 0xff0000;
+- ql_dbg(ql_dbg_async, vha, 0x507a,
+- "RSCN: Domain 0x%06x was affected\n",
+- ea->id.b24);
+- }
+-
+- rid = ea->id.b24 & mask;
+- list_for_each_entry_safe(f, tf, &vha->vp_fcports,
+- list) {
+- id = f->d_id.b24 & mask;
+- if (rid == id) {
+- ea->fcport = f;
+- qla24xx_handle_rscn_event(f, ea);
+- }
+- }
+- break;
+- case RSCN_FAB_ADDR:
+- default:
+- ql_log(ql_log_warn, vha, 0xd045,
+- "RSCN: Fabric was affected. Addr format %d\n",
+- ea->id.b.rsvd_1);
+- qla2x00_mark_all_devices_lost(vha, 1);
+- set_bit(LOOP_RESYNC_NEEDED, &vha->dpc_flags);
+- set_bit(LOCAL_LOOP_UPDATE, &vha->dpc_flags);
+- }
+ break;
+ case FCME_GNL_DONE:
+ qla24xx_handle_gnl_done_event(vha, ea);
+@@ -1709,11 +1630,7 @@ void qla_rscn_replay(fc_port_t *fcport)
+ ea.event = FCME_RSCN;
+ ea.id = fcport->d_id;
+ ea.id.b.rsvd_1 = RSCN_PORT_ADDR;
+-#if defined BIGSCAN & BIGSCAN > 0
+ qla2x00_fcport_event_handler(fcport->vha, &ea);
+-#else
+- qla24xx_post_gpnid_work(fcport->vha, &ea.id);
+-#endif
+ }
+ }
+
+@@ -5051,6 +4968,13 @@ qla2x00_configure_local_loop(scsi_qla_host_t *vha)
+ (area != vha->d_id.b.area || domain != vha->d_id.b.domain))
+ continue;
+
++ /* Bypass if not same domain and area of adapter. */
++ if (area && domain && ((area != vha->d_id.b.area) ||
++ (domain != vha->d_id.b.domain)) &&
++ (ha->current_topology == ISP_CFG_NL))
++ continue;
++
++
+ /* Bypass invalid local loop ID. */
+ if (loop_id > LAST_LOCAL_LOOP_ID)
+ continue;
+diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
+index 8507c43b918c..1a20e5d8f057 100644
+--- a/drivers/scsi/qla2xxx/qla_isr.c
++++ b/drivers/scsi/qla2xxx/qla_isr.c
+@@ -3410,7 +3410,7 @@ qla24xx_enable_msix(struct qla_hw_data *ha, struct rsp_que *rsp)
+ min_vecs++;
+ }
+
+- if (USER_CTRL_IRQ(ha)) {
++ if (USER_CTRL_IRQ(ha) || !ha->mqiobase) {
+ /* user wants to control IRQ setting for target mode */
+ ret = pci_alloc_irq_vectors(ha->pdev, min_vecs,
+ ha->msix_count, PCI_IRQ_MSIX);
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index c6ef83d0d99b..7e35ce2162d0 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -6936,7 +6936,7 @@ static int qla2xxx_map_queues(struct Scsi_Host *shost)
+ scsi_qla_host_t *vha = (scsi_qla_host_t *)shost->hostdata;
+ struct blk_mq_queue_map *qmap = &shost->tag_set.map[0];
+
+- if (USER_CTRL_IRQ(vha->hw))
++ if (USER_CTRL_IRQ(vha->hw) || !vha->hw->mqiobase)
+ rc = blk_mq_map_queues(qmap);
+ else
+ rc = blk_mq_pci_map_queues(qmap, vha->hw->pdev, vha->irq_offset);
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index a6828391d6b3..5a6e8e12701a 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -2598,8 +2598,10 @@ void scsi_device_resume(struct scsi_device *sdev)
+ * device deleted during suspend)
+ */
+ mutex_lock(&sdev->state_mutex);
+- sdev->quiesced_by = NULL;
+- blk_clear_pm_only(sdev->request_queue);
++ if (sdev->quiesced_by) {
++ sdev->quiesced_by = NULL;
++ blk_clear_pm_only(sdev->request_queue);
++ }
+ if (sdev->sdev_state == SDEV_QUIESCE)
+ scsi_device_set_state(sdev, SDEV_RUNNING);
+ mutex_unlock(&sdev->state_mutex);
+diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
+index dd0d516f65e2..53380e07b40e 100644
+--- a/drivers/scsi/scsi_scan.c
++++ b/drivers/scsi/scsi_scan.c
+@@ -220,7 +220,7 @@ static struct scsi_device *scsi_alloc_sdev(struct scsi_target *starget,
+ struct Scsi_Host *shost = dev_to_shost(starget->dev.parent);
+
+ sdev = kzalloc(sizeof(*sdev) + shost->transportt->device_size,
+- GFP_ATOMIC);
++ GFP_KERNEL);
+ if (!sdev)
+ goto out;
+
+@@ -788,7 +788,7 @@ static int scsi_add_lun(struct scsi_device *sdev, unsigned char *inq_result,
+ */
+ sdev->inquiry = kmemdup(inq_result,
+ max_t(size_t, sdev->inquiry_len, 36),
+- GFP_ATOMIC);
++ GFP_KERNEL);
+ if (sdev->inquiry == NULL)
+ return SCSI_SCAN_NO_RESPONSE;
+
+@@ -1079,7 +1079,7 @@ static int scsi_probe_and_add_lun(struct scsi_target *starget,
+ if (!sdev)
+ goto out;
+
+- result = kmalloc(result_len, GFP_ATOMIC |
++ result = kmalloc(result_len, GFP_KERNEL |
+ ((shost->unchecked_isa_dma) ? __GFP_DMA : 0));
+ if (!result)
+ goto out_free_sdev;
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 5464d467e23e..d64553c0a051 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -1398,11 +1398,6 @@ static void sd_release(struct gendisk *disk, fmode_t mode)
+ scsi_set_medium_removal(sdev, SCSI_REMOVAL_ALLOW);
+ }
+
+- /*
+- * XXX and what if there are packets in flight and this close()
+- * XXX is followed by a "rmmod sd_mod"?
+- */
+-
+ scsi_disk_put(sdkp);
+ }
+
+@@ -3047,6 +3042,58 @@ static void sd_read_security(struct scsi_disk *sdkp, unsigned char *buffer)
+ sdkp->security = 1;
+ }
+
++/*
++ * Determine the device's preferred I/O size for reads and writes
++ * unless the reported value is unreasonably small, large, not a
++ * multiple of the physical block size, or simply garbage.
++ */
++static bool sd_validate_opt_xfer_size(struct scsi_disk *sdkp,
++ unsigned int dev_max)
++{
++ struct scsi_device *sdp = sdkp->device;
++ unsigned int opt_xfer_bytes =
++ logical_to_bytes(sdp, sdkp->opt_xfer_blocks);
++
++ if (sdkp->opt_xfer_blocks == 0)
++ return false;
++
++ if (sdkp->opt_xfer_blocks > dev_max) {
++ sd_first_printk(KERN_WARNING, sdkp,
++ "Optimal transfer size %u logical blocks " \
++ "> dev_max (%u logical blocks)\n",
++ sdkp->opt_xfer_blocks, dev_max);
++ return false;
++ }
++
++ if (sdkp->opt_xfer_blocks > SD_DEF_XFER_BLOCKS) {
++ sd_first_printk(KERN_WARNING, sdkp,
++ "Optimal transfer size %u logical blocks " \
++ "> sd driver limit (%u logical blocks)\n",
++ sdkp->opt_xfer_blocks, SD_DEF_XFER_BLOCKS);
++ return false;
++ }
++
++ if (opt_xfer_bytes < PAGE_SIZE) {
++ sd_first_printk(KERN_WARNING, sdkp,
++ "Optimal transfer size %u bytes < " \
++ "PAGE_SIZE (%u bytes)\n",
++ opt_xfer_bytes, (unsigned int)PAGE_SIZE);
++ return false;
++ }
++
++ if (opt_xfer_bytes & (sdkp->physical_block_size - 1)) {
++ sd_first_printk(KERN_WARNING, sdkp,
++ "Optimal transfer size %u bytes not a " \
++ "multiple of physical block size (%u bytes)\n",
++ opt_xfer_bytes, sdkp->physical_block_size);
++ return false;
++ }
++
++ sd_first_printk(KERN_INFO, sdkp, "Optimal transfer size %u bytes\n",
++ opt_xfer_bytes);
++ return true;
++}
++
+ /**
+ * sd_revalidate_disk - called the first time a new disk is seen,
+ * performs disk spin up, read_capacity, etc.
+@@ -3125,15 +3172,7 @@ static int sd_revalidate_disk(struct gendisk *disk)
+ dev_max = min_not_zero(dev_max, sdkp->max_xfer_blocks);
+ q->limits.max_dev_sectors = logical_to_sectors(sdp, dev_max);
+
+- /*
+- * Determine the device's preferred I/O size for reads and writes
+- * unless the reported value is unreasonably small, large, or
+- * garbage.
+- */
+- if (sdkp->opt_xfer_blocks &&
+- sdkp->opt_xfer_blocks <= dev_max &&
+- sdkp->opt_xfer_blocks <= SD_DEF_XFER_BLOCKS &&
+- logical_to_bytes(sdp, sdkp->opt_xfer_blocks) >= PAGE_SIZE) {
++ if (sd_validate_opt_xfer_size(sdkp, dev_max)) {
+ q->limits.io_opt = logical_to_bytes(sdp, sdkp->opt_xfer_blocks);
+ rw_max = logical_to_sectors(sdp, sdkp->opt_xfer_blocks);
+ } else
+@@ -3447,9 +3486,21 @@ static void scsi_disk_release(struct device *dev)
+ {
+ struct scsi_disk *sdkp = to_scsi_disk(dev);
+ struct gendisk *disk = sdkp->disk;
+-
++ struct request_queue *q = disk->queue;
++
+ ida_free(&sd_index_ida, sdkp->index);
+
++ /*
++ * Wait until all requests that are in progress have completed.
++ * This is necessary to avoid that e.g. scsi_end_request() crashes
++ * due to clearing the disk->private_data pointer. Wait from inside
++ * scsi_disk_release() instead of from sd_release() to avoid that
++ * freezing and unfreezing the request queue affects user space I/O
++ * in case multiple processes open a /dev/sd... node concurrently.
++ */
++ blk_mq_freeze_queue(q);
++ blk_mq_unfreeze_queue(q);
++
+ disk->private_data = NULL;
+ put_disk(disk);
+ put_device(&sdkp->device->sdev_gendev);
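sd_validate_opt_xfer_size() tightens the old ad-hoc checks; in particular the
new alignment test assumes a power-of-two physical block size, which lets it
use a simple mask. A standalone re-statement with a worked example (the
DEF_XFER_BLOCKS value here is a stand-in for the driver limit):

    #include <stdio.h>
    #include <stdbool.h>

    #define PAGE_SZ         4096u
    #define DEF_XFER_BLOCKS 0xffffu  /* stand-in for the driver limit */

    static bool opt_xfer_ok(unsigned int opt_blocks, unsigned int dev_max,
                            unsigned int logical, unsigned int physical)
    {
            unsigned long long bytes =
                    (unsigned long long)opt_blocks * logical;

            return opt_blocks &&
                   opt_blocks <= dev_max &&
                   opt_blocks <= DEF_XFER_BLOCKS &&
                   bytes >= PAGE_SZ &&
                   (bytes & (physical - 1)) == 0; /* pow-2 physical size */
    }

    int main(void)
    {
            /* 512 B logical / 4 KiB physical disk: 65528 blocks is
             * 4 KiB aligned and accepted, 65529 is not. */
            printf("%d %d\n", opt_xfer_ok(65528, 1 << 20, 512, 4096),
                              opt_xfer_ok(65529, 1 << 20, 512, 4096));
            return 0;
    }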
+diff --git a/drivers/scsi/virtio_scsi.c b/drivers/scsi/virtio_scsi.c
+index 772b976e4ee4..464cba521fb6 100644
+--- a/drivers/scsi/virtio_scsi.c
++++ b/drivers/scsi/virtio_scsi.c
+@@ -594,7 +594,6 @@ static int virtscsi_device_reset(struct scsi_cmnd *sc)
+ return FAILED;
+
+ memset(cmd, 0, sizeof(*cmd));
+- cmd->sc = sc;
+ cmd->req.tmf = (struct virtio_scsi_ctrl_tmf_req){
+ .type = VIRTIO_SCSI_T_TMF,
+ .subtype = cpu_to_virtio32(vscsi->vdev,
+@@ -653,7 +652,6 @@ static int virtscsi_abort(struct scsi_cmnd *sc)
+ return FAILED;
+
+ memset(cmd, 0, sizeof(*cmd));
+- cmd->sc = sc;
+ cmd->req.tmf = (struct virtio_scsi_ctrl_tmf_req){
+ .type = VIRTIO_SCSI_T_TMF,
+ .subtype = VIRTIO_SCSI_T_TMF_ABORT_TASK,
+diff --git a/drivers/soc/qcom/qcom_gsbi.c b/drivers/soc/qcom/qcom_gsbi.c
+index 09c669e70d63..038abc377fdb 100644
+--- a/drivers/soc/qcom/qcom_gsbi.c
++++ b/drivers/soc/qcom/qcom_gsbi.c
+@@ -138,7 +138,7 @@ static int gsbi_probe(struct platform_device *pdev)
+ struct resource *res;
+ void __iomem *base;
+ struct gsbi_info *gsbi;
+- int i;
++ int i, ret;
+ u32 mask, gsbi_num;
+ const struct crci_config *config = NULL;
+
+@@ -221,7 +221,10 @@ static int gsbi_probe(struct platform_device *pdev)
+
+ platform_set_drvdata(pdev, gsbi);
+
+- return of_platform_populate(node, NULL, NULL, &pdev->dev);
++ ret = of_platform_populate(node, NULL, NULL, &pdev->dev);
++ if (ret)
++ clk_disable_unprepare(gsbi->hclk);
++ return ret;
+ }
+
+ static int gsbi_remove(struct platform_device *pdev)
+diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
+index c7beb6841289..ab8f731a3426 100644
+--- a/drivers/soc/qcom/rpmh.c
++++ b/drivers/soc/qcom/rpmh.c
+@@ -80,6 +80,7 @@ void rpmh_tx_done(const struct tcs_request *msg, int r)
+ struct rpmh_request *rpm_msg = container_of(msg, struct rpmh_request,
+ msg);
+ struct completion *compl = rpm_msg->completion;
++ bool free = rpm_msg->needs_free;
+
+ rpm_msg->err = r;
+
+@@ -94,7 +95,7 @@ void rpmh_tx_done(const struct tcs_request *msg, int r)
+ complete(compl);
+
+ exit:
+- if (rpm_msg->needs_free)
++ if (free)
+ kfree(rpm_msg);
+ }
+
+@@ -348,11 +349,12 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
+ {
+ struct batch_cache_req *req;
+ struct rpmh_request *rpm_msgs;
+- DECLARE_COMPLETION_ONSTACK(compl);
++ struct completion *compls;
+ struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
+ unsigned long time_left;
+ int count = 0;
+- int ret, i, j;
++ int ret, i;
++ void *ptr;
+
+ if (!cmd || !n)
+ return -EINVAL;
+@@ -362,10 +364,15 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
+ if (!count)
+ return -EINVAL;
+
+- req = kzalloc(sizeof(*req) + count * sizeof(req->rpm_msgs[0]),
++ ptr = kzalloc(sizeof(*req) +
++ count * (sizeof(req->rpm_msgs[0]) + sizeof(*compls)),
+ GFP_ATOMIC);
+- if (!req)
++ if (!ptr)
+ return -ENOMEM;
++
++ req = ptr;
++ compls = ptr + sizeof(*req) + count * sizeof(*rpm_msgs);
++
+ req->count = count;
+ rpm_msgs = req->rpm_msgs;
+
+@@ -380,25 +387,26 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
+ }
+
+ for (i = 0; i < count; i++) {
+- rpm_msgs[i].completion = &compl;
++ struct completion *compl = &compls[i];
++
++ init_completion(compl);
++ rpm_msgs[i].completion = compl;
+ ret = rpmh_rsc_send_data(ctrlr_to_drv(ctrlr), &rpm_msgs[i].msg);
+ if (ret) {
+ pr_err("Error(%d) sending RPMH message addr=%#x\n",
+ ret, rpm_msgs[i].msg.cmds[0].addr);
+- for (j = i; j < count; j++)
+- rpmh_tx_done(&rpm_msgs[j].msg, ret);
+ break;
+ }
+ }
+
+ time_left = RPMH_TIMEOUT_MS;
+- for (i = 0; i < count; i++) {
+- time_left = wait_for_completion_timeout(&compl, time_left);
++ while (i--) {
++ time_left = wait_for_completion_timeout(&compls[i], time_left);
+ if (!time_left) {
+ /*
+ * Better hope they never finish because they'll signal
+- * the completion on our stack and that's bad once
+- * we've returned from the function.
++ * the completion that we're going to free once
++ * we've returned from this function.
+ */
+ WARN_ON(1);
+ ret = -ETIMEDOUT;
+@@ -407,7 +415,7 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
+ }
+
+ exit:
+- kfree(req);
++ kfree(ptr);
+
+ return ret;
+ }
+diff --git a/drivers/soc/tegra/fuse/fuse-tegra.c b/drivers/soc/tegra/fuse/fuse-tegra.c
+index a33ee8ef8b6b..51625703399e 100644
+--- a/drivers/soc/tegra/fuse/fuse-tegra.c
++++ b/drivers/soc/tegra/fuse/fuse-tegra.c
+@@ -137,13 +137,17 @@ static int tegra_fuse_probe(struct platform_device *pdev)
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ fuse->phys = res->start;
+ fuse->base = devm_ioremap_resource(&pdev->dev, res);
+- if (IS_ERR(fuse->base))
+- return PTR_ERR(fuse->base);
++ if (IS_ERR(fuse->base)) {
++ err = PTR_ERR(fuse->base);
++ fuse->base = base;
++ return err;
++ }
+
+ fuse->clk = devm_clk_get(&pdev->dev, "fuse");
+ if (IS_ERR(fuse->clk)) {
+ dev_err(&pdev->dev, "failed to get FUSE clock: %ld",
+ PTR_ERR(fuse->clk));
++ fuse->base = base;
+ return PTR_ERR(fuse->clk);
+ }
+
+@@ -152,8 +156,10 @@ static int tegra_fuse_probe(struct platform_device *pdev)
+
+ if (fuse->soc->probe) {
+ err = fuse->soc->probe(fuse);
+- if (err < 0)
++ if (err < 0) {
++ fuse->base = base;
+ return err;
++ }
+ }
+
+ if (tegra_fuse_create_sysfs(&pdev->dev, fuse->soc->info->size,
+diff --git a/drivers/spi/spi-gpio.c b/drivers/spi/spi-gpio.c
+index a4aee26028cd..53b35c56a557 100644
+--- a/drivers/spi/spi-gpio.c
++++ b/drivers/spi/spi-gpio.c
+@@ -428,7 +428,8 @@ static int spi_gpio_probe(struct platform_device *pdev)
+ return status;
+
+ master->bits_per_word_mask = SPI_BPW_RANGE_MASK(1, 32);
+- master->mode_bits = SPI_3WIRE | SPI_3WIRE_HIZ | SPI_CPHA | SPI_CPOL;
++ master->mode_bits = SPI_3WIRE | SPI_3WIRE_HIZ | SPI_CPHA | SPI_CPOL |
++ SPI_CS_HIGH;
+ master->flags = master_flags;
+ master->bus_num = pdev->id;
+ /* The master needs to think there is a chipselect even if not connected */
+@@ -455,7 +456,6 @@ static int spi_gpio_probe(struct platform_device *pdev)
+ spi_gpio->bitbang.txrx_word[SPI_MODE_3] = spi_gpio_spec_txrx_word_mode3;
+ }
+ spi_gpio->bitbang.setup_transfer = spi_bitbang_setup_transfer;
+- spi_gpio->bitbang.flags = SPI_CS_HIGH;
+
+ status = spi_bitbang_start(&spi_gpio->bitbang);
+ if (status)
+diff --git a/drivers/spi/spi-omap2-mcspi.c b/drivers/spi/spi-omap2-mcspi.c
+index 2fd8881fcd65..8be304379628 100644
+--- a/drivers/spi/spi-omap2-mcspi.c
++++ b/drivers/spi/spi-omap2-mcspi.c
+@@ -623,8 +623,8 @@ omap2_mcspi_txrx_dma(struct spi_device *spi, struct spi_transfer *xfer)
+ cfg.dst_addr = cs->phys + OMAP2_MCSPI_TX0;
+ cfg.src_addr_width = width;
+ cfg.dst_addr_width = width;
+- cfg.src_maxburst = es;
+- cfg.dst_maxburst = es;
++ cfg.src_maxburst = 1;
++ cfg.dst_maxburst = 1;
+
+ rx = xfer->rx_buf;
+ tx = xfer->tx_buf;
+diff --git a/drivers/spi/spi-pxa2xx.c b/drivers/spi/spi-pxa2xx.c
+index d84b893a64d7..3e82eaad0f2d 100644
+--- a/drivers/spi/spi-pxa2xx.c
++++ b/drivers/spi/spi-pxa2xx.c
+@@ -1696,6 +1696,7 @@ static int pxa2xx_spi_probe(struct platform_device *pdev)
+ platform_info->enable_dma = false;
+ } else {
+ master->can_dma = pxa2xx_spi_can_dma;
++ master->max_dma_len = MAX_DMA_LEN;
+ }
+ }
+
+diff --git a/drivers/spi/spi-ti-qspi.c b/drivers/spi/spi-ti-qspi.c
+index 5f19016bbf10..b9fb6493cd6b 100644
+--- a/drivers/spi/spi-ti-qspi.c
++++ b/drivers/spi/spi-ti-qspi.c
+@@ -490,8 +490,8 @@ static void ti_qspi_enable_memory_map(struct spi_device *spi)
+ ti_qspi_write(qspi, MM_SWITCH, QSPI_SPI_SWITCH_REG);
+ if (qspi->ctrl_base) {
+ regmap_update_bits(qspi->ctrl_base, qspi->ctrl_reg,
+- MEM_CS_EN(spi->chip_select),
+- MEM_CS_MASK);
++ MEM_CS_MASK,
++ MEM_CS_EN(spi->chip_select));
+ }
+ qspi->mmap_enabled = true;
+ }
+@@ -503,7 +503,7 @@ static void ti_qspi_disable_memory_map(struct spi_device *spi)
+ ti_qspi_write(qspi, 0, QSPI_SPI_SWITCH_REG);
+ if (qspi->ctrl_base)
+ regmap_update_bits(qspi->ctrl_base, qspi->ctrl_reg,
+- 0, MEM_CS_MASK);
++ MEM_CS_MASK, 0);
+ qspi->mmap_enabled = false;
+ }
+
+diff --git a/drivers/staging/android/ashmem.c b/drivers/staging/android/ashmem.c
+index 90a8a9f1ac7d..910826df4a31 100644
+--- a/drivers/staging/android/ashmem.c
++++ b/drivers/staging/android/ashmem.c
+@@ -75,6 +75,9 @@ struct ashmem_range {
+ /* LRU list of unpinned pages, protected by ashmem_mutex */
+ static LIST_HEAD(ashmem_lru_list);
+
++static atomic_t ashmem_shrink_inflight = ATOMIC_INIT(0);
++static DECLARE_WAIT_QUEUE_HEAD(ashmem_shrink_wait);
++
+ /*
+ * long lru_count - The count of pages on our LRU list.
+ *
+@@ -168,19 +171,15 @@ static inline void lru_del(struct ashmem_range *range)
+ * @end: The ending page (inclusive)
+ *
+ * This function is protected by ashmem_mutex.
+- *
+- * Return: 0 if successful, or -ENOMEM if there is an error
+ */
+-static int range_alloc(struct ashmem_area *asma,
+- struct ashmem_range *prev_range, unsigned int purged,
+- size_t start, size_t end)
++static void range_alloc(struct ashmem_area *asma,
++ struct ashmem_range *prev_range, unsigned int purged,
++ size_t start, size_t end,
++ struct ashmem_range **new_range)
+ {
+- struct ashmem_range *range;
+-
+- range = kmem_cache_zalloc(ashmem_range_cachep, GFP_KERNEL);
+- if (!range)
+- return -ENOMEM;
++ struct ashmem_range *range = *new_range;
+
++ *new_range = NULL;
+ range->asma = asma;
+ range->pgstart = start;
+ range->pgend = end;
+@@ -190,8 +189,6 @@ static int range_alloc(struct ashmem_area *asma,
+
+ if (range_on_lru(range))
+ lru_add(range);
+-
+- return 0;
+ }
+
+ /**
+@@ -438,7 +435,6 @@ out:
+ static unsigned long
+ ashmem_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
+ {
+- struct ashmem_range *range, *next;
+ unsigned long freed = 0;
+
+ /* We might recurse into filesystem code, so bail out if necessary */
+@@ -448,21 +444,33 @@ ashmem_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
+ if (!mutex_trylock(&ashmem_mutex))
+ return -1;
+
+- list_for_each_entry_safe(range, next, &ashmem_lru_list, lru) {
++ while (!list_empty(&ashmem_lru_list)) {
++ struct ashmem_range *range =
++ list_first_entry(&ashmem_lru_list, typeof(*range), lru);
+ loff_t start = range->pgstart * PAGE_SIZE;
+ loff_t end = (range->pgend + 1) * PAGE_SIZE;
++ struct file *f = range->asma->file;
+
+- range->asma->file->f_op->fallocate(range->asma->file,
+- FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
+- start, end - start);
++ get_file(f);
++ atomic_inc(&ashmem_shrink_inflight);
+ range->purged = ASHMEM_WAS_PURGED;
+ lru_del(range);
+
+ freed += range_size(range);
++ mutex_unlock(&ashmem_mutex);
++ f->f_op->fallocate(f,
++ FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
++ start, end - start);
++ fput(f);
++ if (atomic_dec_and_test(&ashmem_shrink_inflight))
++ wake_up_all(&ashmem_shrink_wait);
++ if (!mutex_trylock(&ashmem_mutex))
++ goto out;
+ if (--sc->nr_to_scan <= 0)
+ break;
+ }
+ mutex_unlock(&ashmem_mutex);
++out:
+ return freed;
+ }
+
+@@ -582,7 +590,8 @@ static int get_name(struct ashmem_area *asma, void __user *name)
+ *
+ * Caller must hold ashmem_mutex.
+ */
+-static int ashmem_pin(struct ashmem_area *asma, size_t pgstart, size_t pgend)
++static int ashmem_pin(struct ashmem_area *asma, size_t pgstart, size_t pgend,
++ struct ashmem_range **new_range)
+ {
+ struct ashmem_range *range, *next;
+ int ret = ASHMEM_NOT_PURGED;
+@@ -635,7 +644,7 @@ static int ashmem_pin(struct ashmem_area *asma, size_t pgstart, size_t pgend)
+ * second half and adjust the first chunk's endpoint.
+ */
+ range_alloc(asma, range, range->purged,
+- pgend + 1, range->pgend);
++ pgend + 1, range->pgend, new_range);
+ range_shrink(range, range->pgstart, pgstart - 1);
+ break;
+ }
+@@ -649,7 +658,8 @@ static int ashmem_pin(struct ashmem_area *asma, size_t pgstart, size_t pgend)
+ *
+ * Caller must hold ashmem_mutex.
+ */
+-static int ashmem_unpin(struct ashmem_area *asma, size_t pgstart, size_t pgend)
++static int ashmem_unpin(struct ashmem_area *asma, size_t pgstart, size_t pgend,
++ struct ashmem_range **new_range)
+ {
+ struct ashmem_range *range, *next;
+ unsigned int purged = ASHMEM_NOT_PURGED;
+@@ -675,7 +685,8 @@ restart:
+ }
+ }
+
+- return range_alloc(asma, range, purged, pgstart, pgend);
++ range_alloc(asma, range, purged, pgstart, pgend, new_range);
++ return 0;
+ }
+
+ /*
+@@ -708,11 +719,19 @@ static int ashmem_pin_unpin(struct ashmem_area *asma, unsigned long cmd,
+ struct ashmem_pin pin;
+ size_t pgstart, pgend;
+ int ret = -EINVAL;
++ struct ashmem_range *range = NULL;
+
+ if (copy_from_user(&pin, p, sizeof(pin)))
+ return -EFAULT;
+
++ if (cmd == ASHMEM_PIN || cmd == ASHMEM_UNPIN) {
++ range = kmem_cache_zalloc(ashmem_range_cachep, GFP_KERNEL);
++ if (!range)
++ return -ENOMEM;
++ }
++
+ mutex_lock(&ashmem_mutex);
++ wait_event(ashmem_shrink_wait, !atomic_read(&ashmem_shrink_inflight));
+
+ if (!asma->file)
+ goto out_unlock;
+@@ -735,10 +754,10 @@ static int ashmem_pin_unpin(struct ashmem_area *asma, unsigned long cmd,
+
+ switch (cmd) {
+ case ASHMEM_PIN:
+- ret = ashmem_pin(asma, pgstart, pgend);
++ ret = ashmem_pin(asma, pgstart, pgend, &range);
+ break;
+ case ASHMEM_UNPIN:
+- ret = ashmem_unpin(asma, pgstart, pgend);
++ ret = ashmem_unpin(asma, pgstart, pgend, &range);
+ break;
+ case ASHMEM_GET_PIN_STATUS:
+ ret = ashmem_get_pin_status(asma, pgstart, pgend);
+@@ -747,6 +766,8 @@ static int ashmem_pin_unpin(struct ashmem_area *asma, unsigned long cmd,
+
+ out_unlock:
+ mutex_unlock(&ashmem_mutex);
++ if (range)
++ kmem_cache_free(ashmem_range_cachep, range);
+
+ return ret;
+ }
+diff --git a/drivers/staging/android/ion/ion_system_heap.c b/drivers/staging/android/ion/ion_system_heap.c
+index 0383f7548d48..20f2103a4ebf 100644
+--- a/drivers/staging/android/ion/ion_system_heap.c
++++ b/drivers/staging/android/ion/ion_system_heap.c
+@@ -223,10 +223,10 @@ static void ion_system_heap_destroy_pools(struct ion_page_pool **pools)
+ static int ion_system_heap_create_pools(struct ion_page_pool **pools)
+ {
+ int i;
+- gfp_t gfp_flags = low_order_gfp_flags;
+
+ for (i = 0; i < NUM_ORDERS; i++) {
+ struct ion_page_pool *pool;
++ gfp_t gfp_flags = low_order_gfp_flags;
+
+ if (orders[i] > 4)
+ gfp_flags = high_order_gfp_flags;
+diff --git a/drivers/staging/comedi/comedidev.h b/drivers/staging/comedi/comedidev.h
+index a7d569cfca5d..0dff1ac057cd 100644
+--- a/drivers/staging/comedi/comedidev.h
++++ b/drivers/staging/comedi/comedidev.h
+@@ -1001,6 +1001,8 @@ int comedi_dio_insn_config(struct comedi_device *dev,
+ unsigned int mask);
+ unsigned int comedi_dio_update_state(struct comedi_subdevice *s,
+ unsigned int *data);
++unsigned int comedi_bytes_per_scan_cmd(struct comedi_subdevice *s,
++ struct comedi_cmd *cmd);
+ unsigned int comedi_bytes_per_scan(struct comedi_subdevice *s);
+ unsigned int comedi_nscans_left(struct comedi_subdevice *s,
+ unsigned int nscans);
+diff --git a/drivers/staging/comedi/drivers.c b/drivers/staging/comedi/drivers.c
+index eefa62f42c0f..5a32b8fc000e 100644
+--- a/drivers/staging/comedi/drivers.c
++++ b/drivers/staging/comedi/drivers.c
+@@ -394,11 +394,13 @@ unsigned int comedi_dio_update_state(struct comedi_subdevice *s,
+ EXPORT_SYMBOL_GPL(comedi_dio_update_state);
+
+ /**
+- * comedi_bytes_per_scan() - Get length of asynchronous command "scan" in bytes
++ * comedi_bytes_per_scan_cmd() - Get length of asynchronous command "scan" in
++ * bytes
+ * @s: COMEDI subdevice.
++ * @cmd: COMEDI command.
+ *
+ * Determines the overall scan length according to the subdevice type and the
+- * number of channels in the scan.
++ * number of channels in the scan for the specified command.
+ *
+ * For digital input, output or input/output subdevices, samples for
+ * multiple channels are assumed to be packed into one or more unsigned
+@@ -408,9 +410,9 @@ EXPORT_SYMBOL_GPL(comedi_dio_update_state);
+ *
+ * Returns the overall scan length in bytes.
+ */
+-unsigned int comedi_bytes_per_scan(struct comedi_subdevice *s)
++unsigned int comedi_bytes_per_scan_cmd(struct comedi_subdevice *s,
++ struct comedi_cmd *cmd)
+ {
+- struct comedi_cmd *cmd = &s->async->cmd;
+ unsigned int num_samples;
+ unsigned int bits_per_sample;
+
+@@ -427,6 +429,29 @@ unsigned int comedi_bytes_per_scan(struct comedi_subdevice *s)
+ }
+ return comedi_samples_to_bytes(s, num_samples);
+ }
++EXPORT_SYMBOL_GPL(comedi_bytes_per_scan_cmd);
++
++/**
++ * comedi_bytes_per_scan() - Get length of asynchronous command "scan" in bytes
++ * @s: COMEDI subdevice.
++ *
++ * Determines the overall scan length according to the subdevice type and the
++ * number of channels in the scan for the current command.
++ *
++ * For digital input, output or input/output subdevices, samples for
++ * multiple channels are assumed to be packed into one or more unsigned
++ * short or unsigned int values according to the subdevice's %SDF_LSAMPL
++ * flag. For other types of subdevice, samples are assumed to occupy a
++ * whole unsigned short or unsigned int according to the %SDF_LSAMPL flag.
++ *
++ * Returns the overall scan length in bytes.
++ */
++unsigned int comedi_bytes_per_scan(struct comedi_subdevice *s)
++{
++ struct comedi_cmd *cmd = &s->async->cmd;
++
++ return comedi_bytes_per_scan_cmd(s, cmd);
++}
+ EXPORT_SYMBOL_GPL(comedi_bytes_per_scan);
+
+ static unsigned int __comedi_nscans_left(struct comedi_subdevice *s,
+diff --git a/drivers/staging/comedi/drivers/ni_660x.c b/drivers/staging/comedi/drivers/ni_660x.c
+index e70a461e723f..405573e927cf 100644
+--- a/drivers/staging/comedi/drivers/ni_660x.c
++++ b/drivers/staging/comedi/drivers/ni_660x.c
+@@ -656,6 +656,7 @@ static int ni_660x_set_pfi_routing(struct comedi_device *dev,
+ case NI_660X_PFI_OUTPUT_DIO:
+ if (chan > 31)
+ return -EINVAL;
++ break;
+ default:
+ return -EINVAL;
+ }
+diff --git a/drivers/staging/comedi/drivers/ni_mio_common.c b/drivers/staging/comedi/drivers/ni_mio_common.c
+index 5edf59ac6706..b04dad8c7092 100644
+--- a/drivers/staging/comedi/drivers/ni_mio_common.c
++++ b/drivers/staging/comedi/drivers/ni_mio_common.c
+@@ -3545,6 +3545,7 @@ static int ni_cdio_cmdtest(struct comedi_device *dev,
+ struct comedi_subdevice *s, struct comedi_cmd *cmd)
+ {
+ struct ni_private *devpriv = dev->private;
++ unsigned int bytes_per_scan;
+ int err = 0;
+
+ /* Step 1 : check if triggers are trivially valid */
+@@ -3579,9 +3580,12 @@ static int ni_cdio_cmdtest(struct comedi_device *dev,
+ err |= comedi_check_trigger_arg_is(&cmd->convert_arg, 0);
+ err |= comedi_check_trigger_arg_is(&cmd->scan_end_arg,
+ cmd->chanlist_len);
+- err |= comedi_check_trigger_arg_max(&cmd->stop_arg,
+- s->async->prealloc_bufsz /
+- comedi_bytes_per_scan(s));
++ bytes_per_scan = comedi_bytes_per_scan_cmd(s, cmd);
++ if (bytes_per_scan) {
++ err |= comedi_check_trigger_arg_max(&cmd->stop_arg,
++ s->async->prealloc_bufsz /
++ bytes_per_scan);
++ }
+
+ if (err)
+ return 3;
+diff --git a/drivers/staging/erofs/dir.c b/drivers/staging/erofs/dir.c
+index 833f052f79d0..b21ed5b4c711 100644
+--- a/drivers/staging/erofs/dir.c
++++ b/drivers/staging/erofs/dir.c
+@@ -23,6 +23,21 @@ static const unsigned char erofs_filetype_table[EROFS_FT_MAX] = {
+ [EROFS_FT_SYMLINK] = DT_LNK,
+ };
+
++static void debug_one_dentry(unsigned char d_type, const char *de_name,
++ unsigned int de_namelen)
++{
++#ifdef CONFIG_EROFS_FS_DEBUG
++	/* the on-disk name need not have a trailing '\0' */
++ unsigned char dbg_namebuf[EROFS_NAME_LEN + 1];
++
++ memcpy(dbg_namebuf, de_name, de_namelen);
++ dbg_namebuf[de_namelen] = '\0';
++
++ debugln("found dirent %s de_len %u d_type %d", dbg_namebuf,
++ de_namelen, d_type);
++#endif
++}
++
+ static int erofs_fill_dentries(struct dir_context *ctx,
+ void *dentry_blk, unsigned int *ofs,
+ unsigned int nameoff, unsigned int maxsize)
+@@ -33,14 +48,10 @@ static int erofs_fill_dentries(struct dir_context *ctx,
+ de = dentry_blk + *ofs;
+ while (de < end) {
+ const char *de_name;
+- int de_namelen;
++ unsigned int de_namelen;
+ unsigned char d_type;
+-#ifdef CONFIG_EROFS_FS_DEBUG
+- unsigned int dbg_namelen;
+- unsigned char dbg_namebuf[EROFS_NAME_LEN];
+-#endif
+
+- if (unlikely(de->file_type < EROFS_FT_MAX))
++ if (de->file_type < EROFS_FT_MAX)
+ d_type = erofs_filetype_table[de->file_type];
+ else
+ d_type = DT_UNKNOWN;
+@@ -48,26 +59,20 @@ static int erofs_fill_dentries(struct dir_context *ctx,
+ nameoff = le16_to_cpu(de->nameoff);
+ de_name = (char *)dentry_blk + nameoff;
+
+- de_namelen = unlikely(de + 1 >= end) ?
+- /* last directory entry */
+- strnlen(de_name, maxsize - nameoff) :
+- le16_to_cpu(de[1].nameoff) - nameoff;
++ /* the last dirent in the block? */
++ if (de + 1 >= end)
++ de_namelen = strnlen(de_name, maxsize - nameoff);
++ else
++ de_namelen = le16_to_cpu(de[1].nameoff) - nameoff;
+
+ /* a corrupted entry is found */
+- if (unlikely(de_namelen < 0)) {
++ if (unlikely(nameoff + de_namelen > maxsize ||
++ de_namelen > EROFS_NAME_LEN)) {
+ DBG_BUGON(1);
+ return -EIO;
+ }
+
+-#ifdef CONFIG_EROFS_FS_DEBUG
+- dbg_namelen = min(EROFS_NAME_LEN - 1, de_namelen);
+- memcpy(dbg_namebuf, de_name, dbg_namelen);
+- dbg_namebuf[dbg_namelen] = '\0';
+-
+- debugln("%s, found de_name %s de_len %d d_type %d", __func__,
+- dbg_namebuf, de_namelen, d_type);
+-#endif
+-
++ debug_one_dentry(d_type, de_name, de_namelen);
+ if (!dir_emit(ctx, de_name, de_namelen,
+ le64_to_cpu(de->nid), d_type))
+ /* stopped by some reason */
+diff --git a/drivers/staging/erofs/inode.c b/drivers/staging/erofs/inode.c
+index d7fbf5f4600f..f99954dbfdb5 100644
+--- a/drivers/staging/erofs/inode.c
++++ b/drivers/staging/erofs/inode.c
+@@ -185,16 +185,16 @@ static int fill_inode(struct inode *inode, int isdir)
+ /* setup the new inode */
+ if (S_ISREG(inode->i_mode)) {
+ #ifdef CONFIG_EROFS_FS_XATTR
+- if (vi->xattr_isize)
+- inode->i_op = &erofs_generic_xattr_iops;
++ inode->i_op = &erofs_generic_xattr_iops;
+ #endif
+ inode->i_fop = &generic_ro_fops;
+ } else if (S_ISDIR(inode->i_mode)) {
+ inode->i_op =
+ #ifdef CONFIG_EROFS_FS_XATTR
+- vi->xattr_isize ? &erofs_dir_xattr_iops :
+-#endif
++ &erofs_dir_xattr_iops;
++#else
+ &erofs_dir_iops;
++#endif
+ inode->i_fop = &erofs_dir_fops;
+ } else if (S_ISLNK(inode->i_mode)) {
+ /* by default, page_get_link is used for symlink */
+diff --git a/drivers/staging/erofs/internal.h b/drivers/staging/erofs/internal.h
+index e049d00c087a..16249d7f0895 100644
+--- a/drivers/staging/erofs/internal.h
++++ b/drivers/staging/erofs/internal.h
+@@ -354,12 +354,17 @@ static inline erofs_off_t iloc(struct erofs_sb_info *sbi, erofs_nid_t nid)
+ return blknr_to_addr(sbi->meta_blkaddr) + (nid << sbi->islotbits);
+ }
+
+-#define inode_set_inited_xattr(inode) (EROFS_V(inode)->flags |= 1)
+-#define inode_has_inited_xattr(inode) (EROFS_V(inode)->flags & 1)
++/* atomic flag definitions */
++#define EROFS_V_EA_INITED_BIT 0
++
++/* bitlock definitions (arranged in reverse order) */
++#define EROFS_V_BL_XATTR_BIT (BITS_PER_LONG - 1)
+
+ struct erofs_vnode {
+ erofs_nid_t nid;
+- unsigned int flags;
++
++ /* atomic flags (including bitlocks) */
++ unsigned long flags;
+
+ unsigned char data_mapping_mode;
+ /* inline size in bytes */
+diff --git a/drivers/staging/erofs/namei.c b/drivers/staging/erofs/namei.c
+index 5596c52e246d..ecc51ef0753f 100644
+--- a/drivers/staging/erofs/namei.c
++++ b/drivers/staging/erofs/namei.c
+@@ -15,74 +15,77 @@
+
+ #include <trace/events/erofs.h>
+
+-/* based on the value of qn->len is accurate */
+-static inline int dirnamecmp(struct qstr *qn,
+- struct qstr *qd, unsigned int *matched)
++struct erofs_qstr {
++ const unsigned char *name;
++ const unsigned char *end;
++};
++
++/* assumes that the end of qn is accurate and that qn has the trailing '\0' */
++static inline int dirnamecmp(const struct erofs_qstr *qn,
++ const struct erofs_qstr *qd,
++ unsigned int *matched)
+ {
+- unsigned int i = *matched, len = min(qn->len, qd->len);
+-loop:
+- if (unlikely(i >= len)) {
+- *matched = i;
+- if (qn->len < qd->len) {
+- /*
+- * actually (qn->len == qd->len)
+- * when qd->name[i] == '\0'
+- */
+- return qd->name[i] == '\0' ? 0 : -1;
++ unsigned int i = *matched;
++
++ /*
++	 * on-disk error: BUG_ON only in debugging mode;
++	 * otherwise, return 1 to simply skip the invalid name
++	 * and go on (for the sake of lookup performance).
++ */
++ DBG_BUGON(qd->name > qd->end);
++
++	/* qd need not have a trailing '\0' */
++	/* however, any access below qd->end is absolutely safe */
++ while (qd->name + i < qd->end && qd->name[i] != '\0') {
++ if (qn->name[i] != qd->name[i]) {
++ *matched = i;
++ return qn->name[i] > qd->name[i] ? 1 : -1;
+ }
+- return (qn->len > qd->len);
++ ++i;
+ }
+-
+- if (qn->name[i] != qd->name[i]) {
+- *matched = i;
+- return qn->name[i] > qd->name[i] ? 1 : -1;
+- }
+-
+- ++i;
+- goto loop;
++ *matched = i;
++ /* See comments in __d_alloc on the terminating NUL character */
++ return qn->name[i] == '\0' ? 0 : 1;
+ }
+
+-static struct erofs_dirent *find_target_dirent(
+- struct qstr *name,
+- u8 *data, int maxsize)
++#define nameoff_from_disk(off, sz) (le16_to_cpu(off) & ((sz) - 1))
++
++static struct erofs_dirent *find_target_dirent(struct erofs_qstr *name,
++ u8 *data,
++ unsigned int dirblksize,
++ const int ndirents)
+ {
+- unsigned int ndirents, head, back;
++ int head, back;
+ unsigned int startprfx, endprfx;
+ struct erofs_dirent *const de = (struct erofs_dirent *)data;
+
+- /* make sure that maxsize is valid */
+- BUG_ON(maxsize < sizeof(struct erofs_dirent));
+-
+- ndirents = le16_to_cpu(de->nameoff) / sizeof(*de);
+-
+- /* corrupted dir (may be unnecessary...) */
+- BUG_ON(!ndirents);
+-
+- head = 0;
++ /* since the 1st dirent has been evaluated previously */
++ head = 1;
+ back = ndirents - 1;
+ startprfx = endprfx = 0;
+
+ while (head <= back) {
+- unsigned int mid = head + (back - head) / 2;
+- unsigned int nameoff = le16_to_cpu(de[mid].nameoff);
++ const int mid = head + (back - head) / 2;
++ const int nameoff = nameoff_from_disk(de[mid].nameoff,
++ dirblksize);
+ unsigned int matched = min(startprfx, endprfx);
+-
+- struct qstr dname = QSTR_INIT(data + nameoff,
+- unlikely(mid >= ndirents - 1) ?
+- maxsize - nameoff :
+- le16_to_cpu(de[mid + 1].nameoff) - nameoff);
++ struct erofs_qstr dname = {
++ .name = data + nameoff,
++ .end = unlikely(mid >= ndirents - 1) ?
++ data + dirblksize :
++ data + nameoff_from_disk(de[mid + 1].nameoff,
++ dirblksize)
++ };
+
+ /* string comparison without already matched prefix */
+ int ret = dirnamecmp(name, &dname, &matched);
+
+- if (unlikely(!ret))
++ if (unlikely(!ret)) {
+ return de + mid;
+- else if (ret > 0) {
++ } else if (ret > 0) {
+ head = mid + 1;
+ startprfx = matched;
+- } else if (unlikely(mid < 1)) /* fix "mid" overflow */
+- break;
+- else {
++ } else {
+ back = mid - 1;
+ endprfx = matched;
+ }
+@@ -91,12 +94,12 @@ static struct erofs_dirent *find_target_dirent(
+ return ERR_PTR(-ENOENT);
+ }
+
+-static struct page *find_target_block_classic(
+- struct inode *dir,
+- struct qstr *name, int *_diff)
++static struct page *find_target_block_classic(struct inode *dir,
++ struct erofs_qstr *name,
++ int *_ndirents)
+ {
+ unsigned int startprfx, endprfx;
+- unsigned int head, back;
++ int head, back;
+ struct address_space *const mapping = dir->i_mapping;
+ struct page *candidate = ERR_PTR(-ENOENT);
+
+@@ -105,41 +108,43 @@ static struct page *find_target_block_classic(
+ back = inode_datablocks(dir) - 1;
+
+ while (head <= back) {
+- unsigned int mid = head + (back - head) / 2;
++ const int mid = head + (back - head) / 2;
+ struct page *page = read_mapping_page(mapping, mid, NULL);
+
+- if (IS_ERR(page)) {
+-exact_out:
+- if (!IS_ERR(candidate)) /* valid candidate */
+- put_page(candidate);
+- return page;
+- } else {
+- int diff;
+- unsigned int ndirents, matched;
+- struct qstr dname;
++ if (!IS_ERR(page)) {
+ struct erofs_dirent *de = kmap_atomic(page);
+- unsigned int nameoff = le16_to_cpu(de->nameoff);
+-
+- ndirents = nameoff / sizeof(*de);
++ const int nameoff = nameoff_from_disk(de->nameoff,
++ EROFS_BLKSIZ);
++ const int ndirents = nameoff / sizeof(*de);
++ int diff;
++ unsigned int matched;
++ struct erofs_qstr dname;
+
+- /* corrupted dir (should have one entry at least) */
+- BUG_ON(!ndirents || nameoff > PAGE_SIZE);
++ if (unlikely(!ndirents)) {
++ DBG_BUGON(1);
++ kunmap_atomic(de);
++ put_page(page);
++ page = ERR_PTR(-EIO);
++ goto out;
++ }
+
+ matched = min(startprfx, endprfx);
+
+ dname.name = (u8 *)de + nameoff;
+- dname.len = ndirents == 1 ?
+- /* since the rest of the last page is 0 */
+- EROFS_BLKSIZ - nameoff
+- : le16_to_cpu(de[1].nameoff) - nameoff;
++ if (ndirents == 1)
++ dname.end = (u8 *)de + EROFS_BLKSIZ;
++ else
++ dname.end = (u8 *)de +
++ nameoff_from_disk(de[1].nameoff,
++ EROFS_BLKSIZ);
+
+ /* string comparison without already matched prefix */
+ diff = dirnamecmp(name, &dname, &matched);
+ kunmap_atomic(de);
+
+ if (unlikely(!diff)) {
+- *_diff = 0;
+- goto exact_out;
++ *_ndirents = 0;
++ goto out;
+ } else if (diff > 0) {
+ head = mid + 1;
+ startprfx = matched;
+@@ -147,45 +152,51 @@ exact_out:
+ if (likely(!IS_ERR(candidate)))
+ put_page(candidate);
+ candidate = page;
++ *_ndirents = ndirents;
+ } else {
+ put_page(page);
+
+- if (unlikely(mid < 1)) /* fix "mid" overflow */
+- break;
+-
+ back = mid - 1;
+ endprfx = matched;
+ }
++ continue;
+ }
++out: /* free if the candidate is valid */
++ if (!IS_ERR(candidate))
++ put_page(candidate);
++ return page;
+ }
+- *_diff = 1;
+ return candidate;
+ }
+
+ int erofs_namei(struct inode *dir,
+- struct qstr *name,
+- erofs_nid_t *nid, unsigned int *d_type)
++ struct qstr *name,
++ erofs_nid_t *nid, unsigned int *d_type)
+ {
+- int diff;
++ int ndirents;
+ struct page *page;
+- u8 *data;
++ void *data;
+ struct erofs_dirent *de;
++ struct erofs_qstr qn;
+
+ if (unlikely(!dir->i_size))
+ return -ENOENT;
+
+- diff = 1;
+- page = find_target_block_classic(dir, name, &diff);
++ qn.name = name->name;
++ qn.end = name->name + name->len;
++
++ ndirents = 0;
++ page = find_target_block_classic(dir, &qn, &ndirents);
+
+ if (unlikely(IS_ERR(page)))
+ return PTR_ERR(page);
+
+ data = kmap_atomic(page);
+ /* the target page has been mapped */
+- de = likely(diff) ?
+- /* since the rest of the last page is 0 */
+- find_target_dirent(name, data, EROFS_BLKSIZ) :
+- (struct erofs_dirent *)data;
++ if (ndirents)
++ de = find_target_dirent(&qn, data, EROFS_BLKSIZ, ndirents);
++ else
++ de = (struct erofs_dirent *)data;
+
+ if (likely(!IS_ERR(de))) {
+ *nid = le64_to_cpu(de->nid);
+diff --git a/drivers/staging/erofs/unzip_vle.c b/drivers/staging/erofs/unzip_vle.c
+index 4ac1099a39c6..d850be1abc84 100644
+--- a/drivers/staging/erofs/unzip_vle.c
++++ b/drivers/staging/erofs/unzip_vle.c
+@@ -107,15 +107,30 @@ enum z_erofs_vle_work_role {
+ Z_EROFS_VLE_WORK_SECONDARY,
+ Z_EROFS_VLE_WORK_PRIMARY,
+ /*
+- * The current work has at least been linked with the following
+- * processed chained works, which means if the processing page
+- * is the tail partial page of the work, the current work can
+- * safely use the whole page, as illustrated below:
+- * +--------------+-------------------------------------------+
+- * | tail page | head page (of the previous work) |
+- * +--------------+-------------------------------------------+
+- * /\ which belongs to the current work
+- * [ (*) this page can be used for the current work itself. ]
++	 * The current work was the tail of an existing chain, and the
++	 * previously processed chained works have all been hooked up to it.
++	 * A new chain should be created for the remaining unprocessed works;
++	 * therefore, unlike Z_EROFS_VLE_WORK_PRIMARY_FOLLOWED,
++	 * the next work cannot reuse the whole page in the following scenario:
++ * ________________________________________________________________
++ * | tail (partial) page | head (partial) page |
++ * | (belongs to the next work) | (belongs to the current work) |
++ * |_______PRIMARY_FOLLOWED_______|________PRIMARY_HOOKED___________|
++ */
++ Z_EROFS_VLE_WORK_PRIMARY_HOOKED,
++ /*
++ * The current work has been linked with the processed chained works,
++	 * and could also be linked with the potential remaining works, which
++ * means if the processing page is the tail partial page of the work,
++ * the current work can safely use the whole page (since the next work
++ * is under control) for in-place decompression, as illustrated below:
++ * ________________________________________________________________
++ * | tail (partial) page | head (partial) page |
++ * | (of the current work) | (of the previous work) |
++ * | PRIMARY_FOLLOWED or | |
++ * |_____PRIMARY_HOOKED____|____________PRIMARY_FOLLOWED____________|
++ *
++ * [ (*) the above page can be used for the current work itself. ]
+ */
+ Z_EROFS_VLE_WORK_PRIMARY_FOLLOWED,
+ Z_EROFS_VLE_WORK_MAX
+@@ -315,10 +330,10 @@ static int z_erofs_vle_work_add_page(
+ return ret ? 0 : -EAGAIN;
+ }
+
+-static inline bool try_to_claim_workgroup(
+- struct z_erofs_vle_workgroup *grp,
+- z_erofs_vle_owned_workgrp_t *owned_head,
+- bool *hosted)
++static enum z_erofs_vle_work_role
++try_to_claim_workgroup(struct z_erofs_vle_workgroup *grp,
++ z_erofs_vle_owned_workgrp_t *owned_head,
++ bool *hosted)
+ {
+ DBG_BUGON(*hosted == true);
+
+@@ -332,6 +347,9 @@ retry:
+
+ *owned_head = &grp->next;
+ *hosted = true;
++ /* lucky, I am the followee :) */
++ return Z_EROFS_VLE_WORK_PRIMARY_FOLLOWED;
++
+ } else if (grp->next == Z_EROFS_VLE_WORKGRP_TAIL) {
+ /*
+ * type 2, link to the end of a existing open chain,
+@@ -341,12 +359,11 @@ retry:
+ if (cmpxchg(&grp->next, Z_EROFS_VLE_WORKGRP_TAIL,
+ *owned_head) != Z_EROFS_VLE_WORKGRP_TAIL)
+ goto retry;
+-
+ *owned_head = Z_EROFS_VLE_WORKGRP_TAIL;
+- } else
+- return false; /* :( better luck next time */
++ return Z_EROFS_VLE_WORK_PRIMARY_HOOKED;
++ }
+
+- return true; /* lucky, I am the followee :) */
++ return Z_EROFS_VLE_WORK_PRIMARY; /* :( better luck next time */
+ }
+
+ struct z_erofs_vle_work_finder {
+@@ -424,12 +441,9 @@ z_erofs_vle_work_lookup(const struct z_erofs_vle_work_finder *f)
+ *f->hosted = false;
+ if (!primary)
+ *f->role = Z_EROFS_VLE_WORK_SECONDARY;
+- /* claim the workgroup if possible */
+- else if (try_to_claim_workgroup(grp, f->owned_head, f->hosted))
+- *f->role = Z_EROFS_VLE_WORK_PRIMARY_FOLLOWED;
+- else
+- *f->role = Z_EROFS_VLE_WORK_PRIMARY;
+-
++ else /* claim the workgroup if possible */
++ *f->role = try_to_claim_workgroup(grp, f->owned_head,
++ f->hosted);
+ return work;
+ }
+
+@@ -493,6 +507,9 @@ z_erofs_vle_work_register(const struct z_erofs_vle_work_finder *f,
+ return work;
+ }
+
++#define builder_is_hooked(builder) \
++ ((builder)->role >= Z_EROFS_VLE_WORK_PRIMARY_HOOKED)
++
+ #define builder_is_followed(builder) \
+ ((builder)->role >= Z_EROFS_VLE_WORK_PRIMARY_FOLLOWED)
+
+@@ -686,7 +703,7 @@ static int z_erofs_do_read_page(struct z_erofs_vle_frontend *fe,
+ struct z_erofs_vle_work_builder *const builder = &fe->builder;
+ const loff_t offset = page_offset(page);
+
+- bool tight = builder_is_followed(builder);
++ bool tight = builder_is_hooked(builder);
+ struct z_erofs_vle_work *work = builder->work;
+
+ enum z_erofs_cache_alloctype cache_strategy;
+@@ -704,8 +721,12 @@ repeat:
+
+ /* lucky, within the range of the current map_blocks */
+ if (offset + cur >= map->m_la &&
+- offset + cur < map->m_la + map->m_llen)
++ offset + cur < map->m_la + map->m_llen) {
++ /* didn't get a valid unzip work previously (very rare) */
++ if (!builder->work)
++ goto restart_now;
+ goto hitted;
++ }
+
+ /* go ahead the next map_blocks */
+ debugln("%s: [out-of-range] pos %llu", __func__, offset + cur);
+@@ -719,6 +740,7 @@ repeat:
+ if (unlikely(err))
+ goto err_out;
+
++restart_now:
+ if (unlikely(!(map->m_flags & EROFS_MAP_MAPPED)))
+ goto hitted;
+
+@@ -740,7 +762,7 @@ repeat:
+ map->m_plen / PAGE_SIZE,
+ cache_strategy, page_pool, GFP_KERNEL);
+
+- tight &= builder_is_followed(builder);
++ tight &= builder_is_hooked(builder);
+ work = builder->work;
+ hitted:
+ cur = end - min_t(unsigned int, offset + end - map->m_la, end);
+@@ -755,6 +777,9 @@ hitted:
+ (tight ? Z_EROFS_PAGE_TYPE_EXCLUSIVE :
+ Z_EROFS_VLE_PAGE_TYPE_TAIL_SHARED));
+
++ if (cur)
++ tight &= builder_is_followed(builder);
++
+ retry:
+ err = z_erofs_vle_work_add_page(builder, page, page_type);
+ /* should allocate an additional staging page for pagevec */
+@@ -952,6 +977,7 @@ repeat:
+ overlapped = false;
+ compressed_pages = grp->compressed_pages;
+
++ err = 0;
+ for (i = 0; i < clusterpages; ++i) {
+ unsigned int pagenr;
+
+@@ -961,26 +987,39 @@ repeat:
+ DBG_BUGON(!page);
+ DBG_BUGON(!page->mapping);
+
+- if (z_erofs_is_stagingpage(page))
+- continue;
++ if (!z_erofs_is_stagingpage(page)) {
+ #ifdef EROFS_FS_HAS_MANAGED_CACHE
+- if (page->mapping == MNGD_MAPPING(sbi)) {
+- DBG_BUGON(!PageUptodate(page));
+- continue;
+- }
++ if (page->mapping == MNGD_MAPPING(sbi)) {
++ if (unlikely(!PageUptodate(page)))
++ err = -EIO;
++ continue;
++ }
+ #endif
+
+- /* only non-head page could be reused as a compressed page */
+- pagenr = z_erofs_onlinepage_index(page);
++ /*
++			 * only a non-head page can be selected
++			 * for in-place decompression
++ */
++ pagenr = z_erofs_onlinepage_index(page);
+
+- DBG_BUGON(pagenr >= nr_pages);
+- DBG_BUGON(pages[pagenr]);
+- ++sparsemem_pages;
+- pages[pagenr] = page;
++ DBG_BUGON(pagenr >= nr_pages);
++ DBG_BUGON(pages[pagenr]);
++ ++sparsemem_pages;
++ pages[pagenr] = page;
+
+- overlapped = true;
++ overlapped = true;
++ }
++
++ /* PG_error needs checking for inplaced and staging pages */
++ if (unlikely(PageError(page))) {
++ DBG_BUGON(PageUptodate(page));
++ err = -EIO;
++ }
+ }
+
++ if (unlikely(err))
++ goto out;
++
+ llen = (nr_pages << PAGE_SHIFT) - work->pageofs;
+
+ if (z_erofs_vle_workgrp_fmt(grp) == Z_EROFS_VLE_WORKGRP_FMT_PLAIN) {
+@@ -992,11 +1031,10 @@ repeat:
+ if (llen > grp->llen)
+ llen = grp->llen;
+
+- err = z_erofs_vle_unzip_fast_percpu(compressed_pages,
+- clusterpages, pages, llen, work->pageofs,
+- z_erofs_onlinepage_endio);
++ err = z_erofs_vle_unzip_fast_percpu(compressed_pages, clusterpages,
++ pages, llen, work->pageofs);
+ if (err != -ENOTSUPP)
+- goto out_percpu;
++ goto out;
+
+ if (sparsemem_pages >= nr_pages)
+ goto skip_allocpage;
+@@ -1010,6 +1048,10 @@ repeat:
+
+ skip_allocpage:
+ vout = erofs_vmap(pages, nr_pages);
++ if (!vout) {
++ err = -ENOMEM;
++ goto out;
++ }
+
+ err = z_erofs_vle_unzip_vmap(compressed_pages,
+ clusterpages, vout, llen, work->pageofs, overlapped);
+@@ -1017,8 +1059,25 @@ skip_allocpage:
+ erofs_vunmap(vout, nr_pages);
+
+ out:
++	/* must handle all compressed pages before ending pages */
++ for (i = 0; i < clusterpages; ++i) {
++ page = compressed_pages[i];
++
++#ifdef EROFS_FS_HAS_MANAGED_CACHE
++ if (page->mapping == MNGD_MAPPING(sbi))
++ continue;
++#endif
++ /* recycle all individual staging pages */
++ (void)z_erofs_gather_if_stagingpage(page_pool, page);
++
++ WRITE_ONCE(compressed_pages[i], NULL);
++ }
++
+ for (i = 0; i < nr_pages; ++i) {
+ page = pages[i];
++ if (!page)
++ continue;
++
+ DBG_BUGON(!page->mapping);
+
+ /* recycle all individual staging pages */
+@@ -1031,20 +1090,6 @@ out:
+ z_erofs_onlinepage_endio(page);
+ }
+
+-out_percpu:
+- for (i = 0; i < clusterpages; ++i) {
+- page = compressed_pages[i];
+-
+-#ifdef EROFS_FS_HAS_MANAGED_CACHE
+- if (page->mapping == MNGD_MAPPING(sbi))
+- continue;
+-#endif
+- /* recycle all individual staging pages */
+- (void)z_erofs_gather_if_stagingpage(page_pool, page);
+-
+- WRITE_ONCE(compressed_pages[i], NULL);
+- }
+-
+ if (pages == z_pagemap_global)
+ mutex_unlock(&z_pagemap_global_lock);
+ else if (unlikely(pages != pages_onstack))
+@@ -1172,6 +1217,7 @@ repeat:
+ if (page->mapping == mc) {
+ WRITE_ONCE(grp->compressed_pages[nr], page);
+
++ ClearPageError(page);
+ if (!PagePrivate(page)) {
+ /*
+ * impossible to be !PagePrivate(page) for
+diff --git a/drivers/staging/erofs/unzip_vle.h b/drivers/staging/erofs/unzip_vle.h
+index 5a4e1b62c0d1..c0dfd6906aa8 100644
+--- a/drivers/staging/erofs/unzip_vle.h
++++ b/drivers/staging/erofs/unzip_vle.h
+@@ -218,8 +218,7 @@ extern int z_erofs_vle_plain_copy(struct page **compressed_pages,
+
+ extern int z_erofs_vle_unzip_fast_percpu(struct page **compressed_pages,
+ unsigned clusterpages, struct page **pages,
+- unsigned outlen, unsigned short pageofs,
+- void (*endio)(struct page *));
++ unsigned int outlen, unsigned short pageofs);
+
+ extern int z_erofs_vle_unzip_vmap(struct page **compressed_pages,
+ unsigned clusterpages, void *vaddr, unsigned llen,
+diff --git a/drivers/staging/erofs/unzip_vle_lz4.c b/drivers/staging/erofs/unzip_vle_lz4.c
+index 52797bd89da1..3e8b0ff2efeb 100644
+--- a/drivers/staging/erofs/unzip_vle_lz4.c
++++ b/drivers/staging/erofs/unzip_vle_lz4.c
+@@ -125,8 +125,7 @@ int z_erofs_vle_unzip_fast_percpu(struct page **compressed_pages,
+ unsigned int clusterpages,
+ struct page **pages,
+ unsigned int outlen,
+- unsigned short pageofs,
+- void (*endio)(struct page *))
++ unsigned short pageofs)
+ {
+ void *vin, *vout;
+ unsigned int nr_pages, i, j;
+@@ -137,10 +136,13 @@ int z_erofs_vle_unzip_fast_percpu(struct page **compressed_pages,
+
+ nr_pages = DIV_ROUND_UP(outlen + pageofs, PAGE_SIZE);
+
+- if (clusterpages == 1)
++ if (clusterpages == 1) {
+ vin = kmap_atomic(compressed_pages[0]);
+- else
++ } else {
+ vin = erofs_vmap(compressed_pages, clusterpages);
++ if (!vin)
++ return -ENOMEM;
++ }
+
+ preempt_disable();
+ vout = erofs_pcpubuf[smp_processor_id()].data;
+@@ -148,19 +150,16 @@ int z_erofs_vle_unzip_fast_percpu(struct page **compressed_pages,
+ ret = z_erofs_unzip_lz4(vin, vout + pageofs,
+ clusterpages * PAGE_SIZE, outlen);
+
+- if (ret >= 0) {
+- outlen = ret;
+- ret = 0;
+- }
++ if (ret < 0)
++ goto out;
++ ret = 0;
+
+ for (i = 0; i < nr_pages; ++i) {
+ j = min((unsigned int)PAGE_SIZE - pageofs, outlen);
+
+ if (pages[i]) {
+- if (ret < 0) {
+- SetPageError(pages[i]);
+- } else if (clusterpages == 1 &&
+- pages[i] == compressed_pages[0]) {
++ if (clusterpages == 1 &&
++ pages[i] == compressed_pages[0]) {
+ memcpy(vin + pageofs, vout + pageofs, j);
+ } else {
+ void *dst = kmap_atomic(pages[i]);
+@@ -168,12 +167,13 @@ int z_erofs_vle_unzip_fast_percpu(struct page **compressed_pages,
+ memcpy(dst + pageofs, vout + pageofs, j);
+ kunmap_atomic(dst);
+ }
+- endio(pages[i]);
+ }
+ vout += PAGE_SIZE;
+ outlen -= j;
+ pageofs = 0;
+ }
++
++out:
+ preempt_enable();
+
+ if (clusterpages == 1)
+diff --git a/drivers/staging/erofs/xattr.c b/drivers/staging/erofs/xattr.c
+index 80dca6a4adbe..6cb05ae31233 100644
+--- a/drivers/staging/erofs/xattr.c
++++ b/drivers/staging/erofs/xattr.c
+@@ -44,19 +44,48 @@ static inline void xattr_iter_end_final(struct xattr_iter *it)
+
+ static int init_inode_xattrs(struct inode *inode)
+ {
++ struct erofs_vnode *const vi = EROFS_V(inode);
+ struct xattr_iter it;
+ unsigned int i;
+ struct erofs_xattr_ibody_header *ih;
+ struct super_block *sb;
+ struct erofs_sb_info *sbi;
+- struct erofs_vnode *vi;
+ bool atomic_map;
++ int ret = 0;
+
+- if (likely(inode_has_inited_xattr(inode)))
++	/* in most cases, xattrs of this inode have already been initialized. */
++ if (test_bit(EROFS_V_EA_INITED_BIT, &vi->flags))
+ return 0;
+
+- vi = EROFS_V(inode);
+- BUG_ON(!vi->xattr_isize);
++ if (wait_on_bit_lock(&vi->flags, EROFS_V_BL_XATTR_BIT, TASK_KILLABLE))
++ return -ERESTARTSYS;
++
++ /* someone has initialized xattrs for us? */
++ if (test_bit(EROFS_V_EA_INITED_BIT, &vi->flags))
++ goto out_unlock;
++
++ /*
++ * bypass all xattr operations if ->xattr_isize is not greater than
++	 * sizeof(struct erofs_xattr_ibody_header). In detail:
++	 * 1) if it is too small to contain erofs_xattr_ibody_header, then
++	 *    ->xattr_isize should be 0 (it means no xattr);
++	 * 2) if it is only just large enough for erofs_xattr_ibody_header,
++	 *    the on-disk layout is currently undefined (maybe used later
++	 *    with some new sb feature).
++ */
++ if (vi->xattr_isize == sizeof(struct erofs_xattr_ibody_header)) {
++ errln("xattr_isize %d of nid %llu is not supported yet",
++ vi->xattr_isize, vi->nid);
++ ret = -ENOTSUPP;
++ goto out_unlock;
++ } else if (vi->xattr_isize < sizeof(struct erofs_xattr_ibody_header)) {
++ if (unlikely(vi->xattr_isize)) {
++ DBG_BUGON(1);
++ ret = -EIO;
++ goto out_unlock; /* xattr ondisk layout error */
++ }
++ ret = -ENOATTR;
++ goto out_unlock;
++ }
+
+ sb = inode->i_sb;
+ sbi = EROFS_SB(sb);
+@@ -64,8 +93,10 @@ static int init_inode_xattrs(struct inode *inode)
+ it.ofs = erofs_blkoff(iloc(sbi, vi->nid) + vi->inode_isize);
+
+ it.page = erofs_get_inline_page(inode, it.blkaddr);
+- if (IS_ERR(it.page))
+- return PTR_ERR(it.page);
++ if (IS_ERR(it.page)) {
++ ret = PTR_ERR(it.page);
++ goto out_unlock;
++ }
+
+ /* read in shared xattr array (non-atomic, see kmalloc below) */
+ it.kaddr = kmap(it.page);
+@@ -78,7 +109,8 @@ static int init_inode_xattrs(struct inode *inode)
+ sizeof(uint), GFP_KERNEL);
+ if (vi->xattr_shared_xattrs == NULL) {
+ xattr_iter_end(&it, atomic_map);
+- return -ENOMEM;
++ ret = -ENOMEM;
++ goto out_unlock;
+ }
+
+ /* let's skip ibody header */
+@@ -92,8 +124,12 @@ static int init_inode_xattrs(struct inode *inode)
+
+ it.page = erofs_get_meta_page(sb,
+ ++it.blkaddr, S_ISDIR(inode->i_mode));
+- if (IS_ERR(it.page))
+- return PTR_ERR(it.page);
++ if (IS_ERR(it.page)) {
++ kfree(vi->xattr_shared_xattrs);
++ vi->xattr_shared_xattrs = NULL;
++ ret = PTR_ERR(it.page);
++ goto out_unlock;
++ }
+
+ it.kaddr = kmap_atomic(it.page);
+ atomic_map = true;
+@@ -105,8 +141,11 @@ static int init_inode_xattrs(struct inode *inode)
+ }
+ xattr_iter_end(&it, atomic_map);
+
+- inode_set_inited_xattr(inode);
+- return 0;
++ set_bit(EROFS_V_EA_INITED_BIT, &vi->flags);
++
++out_unlock:
++ clear_and_wake_up_bit(EROFS_V_BL_XATTR_BIT, &vi->flags);
++ return ret;
+ }
+
+ /*
+@@ -422,7 +461,6 @@ static int erofs_xattr_generic_get(const struct xattr_handler *handler,
+ struct dentry *unused, struct inode *inode,
+ const char *name, void *buffer, size_t size)
+ {
+- struct erofs_vnode *const vi = EROFS_V(inode);
+ struct erofs_sb_info *const sbi = EROFS_I_SB(inode);
+
+ switch (handler->flags) {
+@@ -440,9 +478,6 @@ static int erofs_xattr_generic_get(const struct xattr_handler *handler,
+ return -EINVAL;
+ }
+
+- if (!vi->xattr_isize)
+- return -ENOATTR;
+-
+ return erofs_getxattr(inode, handler->flags, name, buffer, size);
+ }
+
+diff --git a/drivers/staging/iio/addac/adt7316.c b/drivers/staging/iio/addac/adt7316.c
+index dc93e85808e0..7839d869d25d 100644
+--- a/drivers/staging/iio/addac/adt7316.c
++++ b/drivers/staging/iio/addac/adt7316.c
+@@ -651,17 +651,10 @@ static ssize_t adt7316_store_da_high_resolution(struct device *dev,
+ u8 config3;
+ int ret;
+
+- chip->dac_bits = 8;
+-
+- if (buf[0] == '1') {
++ if (buf[0] == '1')
+ config3 = chip->config3 | ADT7316_DA_HIGH_RESOLUTION;
+- if (chip->id == ID_ADT7316 || chip->id == ID_ADT7516)
+- chip->dac_bits = 12;
+- else if (chip->id == ID_ADT7317 || chip->id == ID_ADT7517)
+- chip->dac_bits = 10;
+- } else {
++ else
+ config3 = chip->config3 & (~ADT7316_DA_HIGH_RESOLUTION);
+- }
+
+ ret = chip->bus.write(chip->bus.client, ADT7316_CONFIG3, config3);
+ if (ret)
+@@ -2123,6 +2116,13 @@ int adt7316_probe(struct device *dev, struct adt7316_bus *bus,
+ else
+ return -ENODEV;
+
++ if (chip->id == ID_ADT7316 || chip->id == ID_ADT7516)
++ chip->dac_bits = 12;
++ else if (chip->id == ID_ADT7317 || chip->id == ID_ADT7517)
++ chip->dac_bits = 10;
++ else
++ chip->dac_bits = 8;
++
+ chip->ldac_pin = devm_gpiod_get_optional(dev, "adi,ldac", GPIOD_OUT_LOW);
+ if (IS_ERR(chip->ldac_pin)) {
+ ret = PTR_ERR(chip->ldac_pin);
+diff --git a/drivers/staging/media/imx/imx-ic-prpencvf.c b/drivers/staging/media/imx/imx-ic-prpencvf.c
+index 28f41caba05d..fb442499f806 100644
+--- a/drivers/staging/media/imx/imx-ic-prpencvf.c
++++ b/drivers/staging/media/imx/imx-ic-prpencvf.c
+@@ -680,12 +680,23 @@ static int prp_start(struct prp_priv *priv)
+ goto out_free_nfb4eof_irq;
+ }
+
++ /* start upstream */
++ ret = v4l2_subdev_call(priv->src_sd, video, s_stream, 1);
++ ret = (ret && ret != -ENOIOCTLCMD) ? ret : 0;
++ if (ret) {
++ v4l2_err(&ic_priv->sd,
++ "upstream stream on failed: %d\n", ret);
++ goto out_free_eof_irq;
++ }
++
+ /* start the EOF timeout timer */
+ mod_timer(&priv->eof_timeout_timer,
+ jiffies + msecs_to_jiffies(IMX_MEDIA_EOF_TIMEOUT));
+
+ return 0;
+
++out_free_eof_irq:
++ devm_free_irq(ic_priv->dev, priv->eof_irq, priv);
+ out_free_nfb4eof_irq:
+ devm_free_irq(ic_priv->dev, priv->nfb4eof_irq, priv);
+ out_unsetup:
+@@ -717,6 +728,12 @@ static void prp_stop(struct prp_priv *priv)
+ if (ret == 0)
+ v4l2_warn(&ic_priv->sd, "wait last EOF timeout\n");
+
++ /* stop upstream */
++ ret = v4l2_subdev_call(priv->src_sd, video, s_stream, 0);
++ if (ret && ret != -ENOIOCTLCMD)
++ v4l2_warn(&ic_priv->sd,
++ "upstream stream off failed: %d\n", ret);
++
+ devm_free_irq(ic_priv->dev, priv->eof_irq, priv);
+ devm_free_irq(ic_priv->dev, priv->nfb4eof_irq, priv);
+
+@@ -1148,15 +1165,6 @@ static int prp_s_stream(struct v4l2_subdev *sd, int enable)
+ if (ret)
+ goto out;
+
+- /* start/stop upstream */
+- ret = v4l2_subdev_call(priv->src_sd, video, s_stream, enable);
+- ret = (ret && ret != -ENOIOCTLCMD) ? ret : 0;
+- if (ret) {
+- if (enable)
+- prp_stop(priv);
+- goto out;
+- }
+-
+ update_count:
+ priv->stream_count += enable ? 1 : -1;
+ if (priv->stream_count < 0)
+diff --git a/drivers/staging/media/imx/imx-media-csi.c b/drivers/staging/media/imx/imx-media-csi.c
+index 4223f8d418ae..be1e9e52b2a0 100644
+--- a/drivers/staging/media/imx/imx-media-csi.c
++++ b/drivers/staging/media/imx/imx-media-csi.c
+@@ -629,7 +629,7 @@ out_put_ipu:
+ return ret;
+ }
+
+-static void csi_idmac_stop(struct csi_priv *priv)
++static void csi_idmac_wait_last_eof(struct csi_priv *priv)
+ {
+ unsigned long flags;
+ int ret;
+@@ -646,7 +646,10 @@ static void csi_idmac_stop(struct csi_priv *priv)
+ &priv->last_eof_comp, msecs_to_jiffies(IMX_MEDIA_EOF_TIMEOUT));
+ if (ret == 0)
+ v4l2_warn(&priv->sd, "wait last EOF timeout\n");
++}
+
++static void csi_idmac_stop(struct csi_priv *priv)
++{
+ devm_free_irq(priv->dev, priv->eof_irq, priv);
+ devm_free_irq(priv->dev, priv->nfb4eof_irq, priv);
+
+@@ -722,10 +725,16 @@ static int csi_start(struct csi_priv *priv)
+
+ output_fi = &priv->frame_interval[priv->active_output_pad];
+
++ /* start upstream */
++ ret = v4l2_subdev_call(priv->src_sd, video, s_stream, 1);
++ ret = (ret && ret != -ENOIOCTLCMD) ? ret : 0;
++ if (ret)
++ return ret;
++
+ if (priv->dest == IPU_CSI_DEST_IDMAC) {
+ ret = csi_idmac_start(priv);
+ if (ret)
+- return ret;
++ goto stop_upstream;
+ }
+
+ ret = csi_setup(priv);
+@@ -753,11 +762,26 @@ fim_off:
+ idmac_stop:
+ if (priv->dest == IPU_CSI_DEST_IDMAC)
+ csi_idmac_stop(priv);
++stop_upstream:
++ v4l2_subdev_call(priv->src_sd, video, s_stream, 0);
+ return ret;
+ }
+
+ static void csi_stop(struct csi_priv *priv)
+ {
++ if (priv->dest == IPU_CSI_DEST_IDMAC)
++ csi_idmac_wait_last_eof(priv);
++
++ /*
++ * Disable the CSI asap, after syncing with the last EOF.
++	 * Doing so after the IDMA channel is disabled has been shown to
++ * create hard system-wide hangs.
++ */
++ ipu_csi_disable(priv->csi);
++
++ /* stop upstream */
++ v4l2_subdev_call(priv->src_sd, video, s_stream, 0);
++
+ if (priv->dest == IPU_CSI_DEST_IDMAC) {
+ csi_idmac_stop(priv);
+
+@@ -765,8 +789,6 @@ static void csi_stop(struct csi_priv *priv)
+ if (priv->fim)
+ imx_media_fim_set_stream(priv->fim, NULL, false);
+ }
+-
+- ipu_csi_disable(priv->csi);
+ }
+
+ static const struct csi_skip_desc csi_skip[12] = {
+@@ -927,23 +949,13 @@ static int csi_s_stream(struct v4l2_subdev *sd, int enable)
+ goto update_count;
+
+ if (enable) {
+- /* upstream must be started first, before starting CSI */
+- ret = v4l2_subdev_call(priv->src_sd, video, s_stream, 1);
+- ret = (ret && ret != -ENOIOCTLCMD) ? ret : 0;
+- if (ret)
+- goto out;
+-
+ dev_dbg(priv->dev, "stream ON\n");
+ ret = csi_start(priv);
+- if (ret) {
+- v4l2_subdev_call(priv->src_sd, video, s_stream, 0);
++ if (ret)
+ goto out;
+- }
+ } else {
+ dev_dbg(priv->dev, "stream OFF\n");
+- /* CSI must be stopped first, then stop upstream */
+ csi_stop(priv);
+- v4l2_subdev_call(priv->src_sd, video, s_stream, 0);
+ }
+
+ update_count:
+@@ -1787,7 +1799,7 @@ static int imx_csi_parse_endpoint(struct device *dev,
+ struct v4l2_fwnode_endpoint *vep,
+ struct v4l2_async_subdev *asd)
+ {
+- return fwnode_device_is_available(asd->match.fwnode) ? 0 : -EINVAL;
++ return fwnode_device_is_available(asd->match.fwnode) ? 0 : -ENOTCONN;
+ }
+
+ static int imx_csi_async_register(struct csi_priv *priv)
+diff --git a/drivers/staging/media/rockchip/vpu/rk3288_vpu_hw_jpeg_enc.c b/drivers/staging/media/rockchip/vpu/rk3288_vpu_hw_jpeg_enc.c
+index 5282236d1bb1..06daea66fb49 100644
+--- a/drivers/staging/media/rockchip/vpu/rk3288_vpu_hw_jpeg_enc.c
++++ b/drivers/staging/media/rockchip/vpu/rk3288_vpu_hw_jpeg_enc.c
+@@ -80,7 +80,7 @@ rk3288_vpu_jpeg_enc_set_qtable(struct rockchip_vpu_dev *vpu,
+ void rk3288_vpu_jpeg_enc_run(struct rockchip_vpu_ctx *ctx)
+ {
+ struct rockchip_vpu_dev *vpu = ctx->dev;
+- struct vb2_buffer *src_buf, *dst_buf;
++ struct vb2_v4l2_buffer *src_buf, *dst_buf;
+ struct rockchip_vpu_jpeg_ctx jpeg_ctx;
+ u32 reg;
+
+@@ -88,7 +88,7 @@ void rk3288_vpu_jpeg_enc_run(struct rockchip_vpu_ctx *ctx)
+ dst_buf = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
+
+ memset(&jpeg_ctx, 0, sizeof(jpeg_ctx));
+- jpeg_ctx.buffer = vb2_plane_vaddr(dst_buf, 0);
++ jpeg_ctx.buffer = vb2_plane_vaddr(&dst_buf->vb2_buf, 0);
+ jpeg_ctx.width = ctx->dst_fmt.width;
+ jpeg_ctx.height = ctx->dst_fmt.height;
+ jpeg_ctx.quality = ctx->jpeg_quality;
+@@ -99,7 +99,7 @@ void rk3288_vpu_jpeg_enc_run(struct rockchip_vpu_ctx *ctx)
+ VEPU_REG_ENC_CTRL);
+
+ rk3288_vpu_set_src_img_ctrl(vpu, ctx);
+- rk3288_vpu_jpeg_enc_set_buffers(vpu, ctx, src_buf);
++ rk3288_vpu_jpeg_enc_set_buffers(vpu, ctx, &src_buf->vb2_buf);
+ rk3288_vpu_jpeg_enc_set_qtable(vpu,
+ rockchip_vpu_jpeg_get_qtable(&jpeg_ctx, 0),
+ rockchip_vpu_jpeg_get_qtable(&jpeg_ctx, 1));
+diff --git a/drivers/staging/media/rockchip/vpu/rk3399_vpu_hw_jpeg_enc.c b/drivers/staging/media/rockchip/vpu/rk3399_vpu_hw_jpeg_enc.c
+index dbc86d95fe3b..3d438797692e 100644
+--- a/drivers/staging/media/rockchip/vpu/rk3399_vpu_hw_jpeg_enc.c
++++ b/drivers/staging/media/rockchip/vpu/rk3399_vpu_hw_jpeg_enc.c
+@@ -111,7 +111,7 @@ rk3399_vpu_jpeg_enc_set_qtable(struct rockchip_vpu_dev *vpu,
+ void rk3399_vpu_jpeg_enc_run(struct rockchip_vpu_ctx *ctx)
+ {
+ struct rockchip_vpu_dev *vpu = ctx->dev;
+- struct vb2_buffer *src_buf, *dst_buf;
++ struct vb2_v4l2_buffer *src_buf, *dst_buf;
+ struct rockchip_vpu_jpeg_ctx jpeg_ctx;
+ u32 reg;
+
+@@ -119,7 +119,7 @@ void rk3399_vpu_jpeg_enc_run(struct rockchip_vpu_ctx *ctx)
+ dst_buf = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
+
+ memset(&jpeg_ctx, 0, sizeof(jpeg_ctx));
+- jpeg_ctx.buffer = vb2_plane_vaddr(dst_buf, 0);
++ jpeg_ctx.buffer = vb2_plane_vaddr(&dst_buf->vb2_buf, 0);
+ jpeg_ctx.width = ctx->dst_fmt.width;
+ jpeg_ctx.height = ctx->dst_fmt.height;
+ jpeg_ctx.quality = ctx->jpeg_quality;
+@@ -130,7 +130,7 @@ void rk3399_vpu_jpeg_enc_run(struct rockchip_vpu_ctx *ctx)
+ VEPU_REG_ENCODE_START);
+
+ rk3399_vpu_set_src_img_ctrl(vpu, ctx);
+- rk3399_vpu_jpeg_enc_set_buffers(vpu, ctx, src_buf);
++ rk3399_vpu_jpeg_enc_set_buffers(vpu, ctx, &src_buf->vb2_buf);
+ rk3399_vpu_jpeg_enc_set_qtable(vpu,
+ rockchip_vpu_jpeg_get_qtable(&jpeg_ctx, 0),
+ rockchip_vpu_jpeg_get_qtable(&jpeg_ctx, 1));
+diff --git a/drivers/staging/mt7621-spi/spi-mt7621.c b/drivers/staging/mt7621-spi/spi-mt7621.c
+index 513b6e79b985..e1f50efd0922 100644
+--- a/drivers/staging/mt7621-spi/spi-mt7621.c
++++ b/drivers/staging/mt7621-spi/spi-mt7621.c
+@@ -330,6 +330,7 @@ static int mt7621_spi_probe(struct platform_device *pdev)
+ int status = 0;
+ struct clk *clk;
+ struct mt7621_spi_ops *ops;
++ int ret;
+
+ match = of_match_device(mt7621_spi_match, &pdev->dev);
+ if (!match)
+@@ -377,7 +378,11 @@ static int mt7621_spi_probe(struct platform_device *pdev)
+ rs->pending_write = 0;
+ dev_info(&pdev->dev, "sys_freq: %u\n", rs->sys_freq);
+
+- device_reset(&pdev->dev);
++ ret = device_reset(&pdev->dev);
++ if (ret) {
++ dev_err(&pdev->dev, "SPI reset failed!\n");
++ return ret;
++ }
+
+ mt7621_spi_reset(rs);
+
+diff --git a/drivers/staging/olpc_dcon/olpc_dcon_xo_1.c b/drivers/staging/olpc_dcon/olpc_dcon_xo_1.c
+index 80b8d4153414..a54286498a47 100644
+--- a/drivers/staging/olpc_dcon/olpc_dcon_xo_1.c
++++ b/drivers/staging/olpc_dcon/olpc_dcon_xo_1.c
+@@ -45,7 +45,7 @@ static int dcon_init_xo_1(struct dcon_priv *dcon)
+ {
+ unsigned char lob;
+ int ret, i;
+- struct dcon_gpio *pin = &gpios_asis[0];
++ const struct dcon_gpio *pin = &gpios_asis[0];
+
+ for (i = 0; i < ARRAY_SIZE(gpios_asis); i++) {
+ gpios[i] = devm_gpiod_get(&dcon->client->dev, pin[i].name,
+diff --git a/drivers/staging/speakup/speakup_soft.c b/drivers/staging/speakup/speakup_soft.c
+index 947c79532e10..d5383974d40e 100644
+--- a/drivers/staging/speakup/speakup_soft.c
++++ b/drivers/staging/speakup/speakup_soft.c
+@@ -208,12 +208,15 @@ static ssize_t softsynthx_read(struct file *fp, char __user *buf, size_t count,
+ return -EINVAL;
+
+ spin_lock_irqsave(&speakup_info.spinlock, flags);
++ synth_soft.alive = 1;
+ while (1) {
+ prepare_to_wait(&speakup_event, &wait, TASK_INTERRUPTIBLE);
+- if (!unicode)
+- synth_buffer_skip_nonlatin1();
+- if (!synth_buffer_empty() || speakup_info.flushing)
+- break;
++ if (synth_current() == &synth_soft) {
++ if (!unicode)
++ synth_buffer_skip_nonlatin1();
++ if (!synth_buffer_empty() || speakup_info.flushing)
++ break;
++ }
+ spin_unlock_irqrestore(&speakup_info.spinlock, flags);
+ if (fp->f_flags & O_NONBLOCK) {
+ finish_wait(&speakup_event, &wait);
+@@ -233,6 +236,8 @@ static ssize_t softsynthx_read(struct file *fp, char __user *buf, size_t count,
+
+ /* Keep 3 bytes available for a 16bit UTF-8-encoded character */
+ while (chars_sent <= count - bytes_per_ch) {
++ if (synth_current() != &synth_soft)
++ break;
+ if (speakup_info.flushing) {
+ speakup_info.flushing = 0;
+ ch = '\x18';
+@@ -329,7 +334,8 @@ static __poll_t softsynth_poll(struct file *fp, struct poll_table_struct *wait)
+ poll_wait(fp, &speakup_event, wait);
+
+ spin_lock_irqsave(&speakup_info.spinlock, flags);
+- if (!synth_buffer_empty() || speakup_info.flushing)
++ if (synth_current() == &synth_soft &&
++ (!synth_buffer_empty() || speakup_info.flushing))
+ ret = EPOLLIN | EPOLLRDNORM;
+ spin_unlock_irqrestore(&speakup_info.spinlock, flags);
+ return ret;
+diff --git a/drivers/staging/speakup/spk_priv.h b/drivers/staging/speakup/spk_priv.h
+index c8e688878fc7..ac6a74883af4 100644
+--- a/drivers/staging/speakup/spk_priv.h
++++ b/drivers/staging/speakup/spk_priv.h
+@@ -74,6 +74,7 @@ int synth_request_region(unsigned long start, unsigned long n);
+ int synth_release_region(unsigned long start, unsigned long n);
+ int synth_add(struct spk_synth *in_synth);
+ void synth_remove(struct spk_synth *in_synth);
++struct spk_synth *synth_current(void);
+
+ extern struct speakup_info_t speakup_info;
+
+diff --git a/drivers/staging/speakup/synth.c b/drivers/staging/speakup/synth.c
+index 25f259ee4ffc..3568bfb89912 100644
+--- a/drivers/staging/speakup/synth.c
++++ b/drivers/staging/speakup/synth.c
+@@ -481,4 +481,10 @@ void synth_remove(struct spk_synth *in_synth)
+ }
+ EXPORT_SYMBOL_GPL(synth_remove);
+
++struct spk_synth *synth_current(void)
++{
++ return synth;
++}
++EXPORT_SYMBOL_GPL(synth_current);
++
+ short spk_punc_masks[] = { 0, SOME, MOST, PUNC, PUNC | B_SYM };
+diff --git a/drivers/staging/vt6655/device_main.c b/drivers/staging/vt6655/device_main.c
+index c9097e7367d8..2e28fbcdfe8e 100644
+--- a/drivers/staging/vt6655/device_main.c
++++ b/drivers/staging/vt6655/device_main.c
+@@ -1033,8 +1033,6 @@ static void vnt_interrupt_process(struct vnt_private *priv)
+ return;
+ }
+
+- MACvIntDisable(priv->PortOffset);
+-
+ spin_lock_irqsave(&priv->lock, flags);
+
+ /* Read low level stats */
+@@ -1122,8 +1120,6 @@ static void vnt_interrupt_process(struct vnt_private *priv)
+ }
+
+ spin_unlock_irqrestore(&priv->lock, flags);
+-
+- MACvIntEnable(priv->PortOffset, IMR_MASK_VALUE);
+ }
+
+ static void vnt_interrupt_work(struct work_struct *work)
+@@ -1133,14 +1129,17 @@ static void vnt_interrupt_work(struct work_struct *work)
+
+ if (priv->vif)
+ vnt_interrupt_process(priv);
++
++ MACvIntEnable(priv->PortOffset, IMR_MASK_VALUE);
+ }
+
+ static irqreturn_t vnt_interrupt(int irq, void *arg)
+ {
+ struct vnt_private *priv = arg;
+
+- if (priv->vif)
+- schedule_work(&priv->interrupt_work);
++ schedule_work(&priv->interrupt_work);
++
++ MACvIntDisable(priv->PortOffset);
+
+ return IRQ_HANDLED;
+ }
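The vt6655 reordering above moves interrupt masking out of the work item and into the hard IRQ handler: the handler masks the device and schedules deferred work unconditionally, and the device is unmasked only after the work item has finished processing. A hedged sketch of that shape, with generic foo_* helpers standing in for the MACvIntDisable/MACvIntEnable pair:

    static irqreturn_t foo_irq(int irq, void *arg)
    {
            struct foo *priv = arg;

            /* Mask in hard-IRQ context so the line cannot keep firing
             * while the work item is still pending. */
            foo_mask_device_irqs(priv);
            schedule_work(&priv->irq_work);
            return IRQ_HANDLED;
    }

    static void foo_irq_work(struct work_struct *work)
    {
            struct foo *priv = container_of(work, struct foo, irq_work);

            foo_process_events(priv);
            foo_unmask_device_irqs(priv);   /* re-arm only when done */
    }

Note the vif check also moved from the handler into the work path, so masking and re-enabling stay balanced even when no interface is up.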
+diff --git a/drivers/staging/wilc1000/linux_wlan.c b/drivers/staging/wilc1000/linux_wlan.c
+index 721689048648..5e5149c9a92d 100644
+--- a/drivers/staging/wilc1000/linux_wlan.c
++++ b/drivers/staging/wilc1000/linux_wlan.c
+@@ -1086,8 +1086,8 @@ int wilc_netdev_init(struct wilc **wilc, struct device *dev, int io_type,
+ vif->wilc = *wilc;
+ vif->ndev = ndev;
+ wl->vif[i] = vif;
+- wl->vif_num = i;
+- vif->idx = wl->vif_num;
++ wl->vif_num = i + 1;
++ vif->idx = i;
+
+ ndev->netdev_ops = &wilc_netdev_ops;
+
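The two-line wilc1000 fix is a count-versus-index mixup: vif_num is the number of interfaces while idx is a zero-based slot, so for slot i the pair must be idx = i and vif_num = i + 1 (the old code left vif_num one short and gave every vif a stale index). In loop form, purely as illustration:

    for (i = 0; i < nr_ifc; i++) {
            vif = allocate_vif();
            vif->idx = i;           /* zero-based position in the array */
            wl->vif[i] = vif;
            wl->vif_num = i + 1;    /* running count, not the last index */
    }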
+diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
+index bd15a564fe24..3ad2659630e8 100644
+--- a/drivers/target/iscsi/iscsi_target.c
++++ b/drivers/target/iscsi/iscsi_target.c
+@@ -4040,9 +4040,9 @@ static void iscsit_release_commands_from_conn(struct iscsi_conn *conn)
+ struct se_cmd *se_cmd = &cmd->se_cmd;
+
+ if (se_cmd->se_tfo != NULL) {
+- spin_lock(&se_cmd->t_state_lock);
++ spin_lock_irq(&se_cmd->t_state_lock);
+ se_cmd->transport_state |= CMD_T_FABRIC_STOP;
+- spin_unlock(&se_cmd->t_state_lock);
++ spin_unlock_irq(&se_cmd->t_state_lock);
+ }
+ }
+ spin_unlock_bh(&conn->cmd_lock);
+diff --git a/drivers/tty/Kconfig b/drivers/tty/Kconfig
+index 0840d27381ea..e0a04bfc873e 100644
+--- a/drivers/tty/Kconfig
++++ b/drivers/tty/Kconfig
+@@ -441,4 +441,28 @@ config VCC
+ depends on SUN_LDOMS
+ help
+ Support for Sun logical domain consoles.
++
++config LDISC_AUTOLOAD
++ bool "Automatically load TTY Line Disciplines"
++ default y
++ help
++ Historically the kernel has always automatically loaded any
++ line discipline that is in a kernel module when a user asks
++ for it to be loaded with the TIOCSETD ioctl, or through other
++ means. This is not always the best thing to do on systems
++ where you know you will not be using some of the more
++ "ancient" line disciplines, so prevent the kernel from doing
++ this unless the request is coming from a process with the
++ CAP_SYS_MODULE capability.
++
++ Say 'Y' here if you trust your userspace users to do the right
++ thing, or if you have only provided the line disciplines that
++ you know you will be using, or if you wish to continue to use
++ the traditional method of on-demand loading of these modules
++ by any user.
++
++ This functionality can be changed at runtime with the
++ dev.tty.ldisc_autoload sysctl; this configuration option only
++ sets the default value.
++
+ endif # TTY
+diff --git a/drivers/tty/serial/8250/8250_of.c b/drivers/tty/serial/8250/8250_of.c
+index a1a85805d010..2488de1c4bc4 100644
+--- a/drivers/tty/serial/8250/8250_of.c
++++ b/drivers/tty/serial/8250/8250_of.c
+@@ -130,6 +130,10 @@ static int of_platform_serial_setup(struct platform_device *ofdev,
+ port->flags |= UPF_IOREMAP;
+ }
+
++ /* Compatibility with the deprecated pxa and 8250_pxa drivers. */
++ if (of_device_is_compatible(np, "mrvl,mmp-uart"))
++ port->regshift = 2;
++
+ /* Check for registers offset within the devices address range */
+ if (of_property_read_u32(np, "reg-shift", &prop) == 0)
+ port->regshift = prop;
+diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
+index 48bd694a5fa1..bbe5cba21522 100644
+--- a/drivers/tty/serial/8250/8250_pci.c
++++ b/drivers/tty/serial/8250/8250_pci.c
+@@ -2027,6 +2027,111 @@ static struct pci_serial_quirk pci_serial_quirks[] __refdata = {
+ .setup = pci_default_setup,
+ .exit = pci_plx9050_exit,
+ },
++ {
++ .vendor = PCI_VENDOR_ID_ACCESIO,
++ .device = PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SDB,
++ .subvendor = PCI_ANY_ID,
++ .subdevice = PCI_ANY_ID,
++ .setup = pci_pericom_setup,
++ },
++ {
++ .vendor = PCI_VENDOR_ID_ACCESIO,
++ .device = PCI_DEVICE_ID_ACCESIO_MPCIE_COM_4S,
++ .subvendor = PCI_ANY_ID,
++ .subdevice = PCI_ANY_ID,
++ .setup = pci_pericom_setup,
++ },
++ {
++ .vendor = PCI_VENDOR_ID_ACCESIO,
++ .device = PCI_DEVICE_ID_ACCESIO_PCIE_COM232_4DB,
++ .subvendor = PCI_ANY_ID,
++ .subdevice = PCI_ANY_ID,
++ .setup = pci_pericom_setup,
++ },
++ {
++ .vendor = PCI_VENDOR_ID_ACCESIO,
++ .device = PCI_DEVICE_ID_ACCESIO_MPCIE_COM232_4,
++ .subvendor = PCI_ANY_ID,
++ .subdevice = PCI_ANY_ID,
++ .setup = pci_pericom_setup,
++ },
++ {
++ .vendor = PCI_VENDOR_ID_ACCESIO,
++ .device = PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SMDB,
++ .subvendor = PCI_ANY_ID,
++ .subdevice = PCI_ANY_ID,
++ .setup = pci_pericom_setup,
++ },
++ {
++ .vendor = PCI_VENDOR_ID_ACCESIO,
++ .device = PCI_DEVICE_ID_ACCESIO_MPCIE_COM_4SM,
++ .subvendor = PCI_ANY_ID,
++ .subdevice = PCI_ANY_ID,
++ .setup = pci_pericom_setup,
++ },
++ {
++ .vendor = PCI_VENDOR_ID_ACCESIO,
++ .device = PCI_DEVICE_ID_ACCESIO_MPCIE_ICM422_4,
++ .subvendor = PCI_ANY_ID,
++ .subdevice = PCI_ANY_ID,
++ .setup = pci_pericom_setup,
++ },
++ {
++ .vendor = PCI_VENDOR_ID_ACCESIO,
++ .device = PCI_DEVICE_ID_ACCESIO_MPCIE_ICM485_4,
++ .subvendor = PCI_ANY_ID,
++ .subdevice = PCI_ANY_ID,
++ .setup = pci_pericom_setup,
++ },
++ {
++ .vendor = PCI_DEVICE_ID_ACCESIO_PCIE_ICM_4S,
++ .device = PCI_DEVICE_ID_ACCESIO_PCIE_ICM232_4,
++ .subvendor = PCI_ANY_ID,
++ .subdevice = PCI_ANY_ID,
++ .setup = pci_pericom_setup,
++ },
++ {
++ .vendor = PCI_VENDOR_ID_ACCESIO,
++ .device = PCI_DEVICE_ID_ACCESIO_MPCIE_ICM232_4,
++ .subvendor = PCI_ANY_ID,
++ .subdevice = PCI_ANY_ID,
++ .setup = pci_pericom_setup,
++ },
++ {
++ .vendor = PCI_VENDOR_ID_ACCESIO,
++ .device = PCI_DEVICE_ID_ACCESIO_PCIE_COM422_4,
++ .subvendor = PCI_ANY_ID,
++ .subdevice = PCI_ANY_ID,
++ .setup = pci_pericom_setup,
++ },
++ {
++ .vendor = PCI_VENDOR_ID_ACCESIO,
++ .device = PCI_DEVICE_ID_ACCESIO_PCIE_COM485_4,
++ .subvendor = PCI_ANY_ID,
++ .subdevice = PCI_ANY_ID,
++ .setup = pci_pericom_setup,
++ },
++ {
++ .vendor = PCI_VENDOR_ID_ACCESIO,
++ .device = PCI_DEVICE_ID_ACCESIO_PCIE_COM232_4,
++ .subvendor = PCI_ANY_ID,
++ .subdevice = PCI_ANY_ID,
++ .setup = pci_pericom_setup,
++ },
++ {
++ .vendor = PCI_VENDOR_ID_ACCESIO,
++ .device = PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SM,
++ .subvendor = PCI_ANY_ID,
++ .subdevice = PCI_ANY_ID,
++ .setup = pci_pericom_setup,
++ },
++ {
++ .vendor = PCI_VENDOR_ID_ACCESIO,
++ .device = PCI_DEVICE_ID_ACCESIO_PCIE_ICM_4SM,
++ .subvendor = PCI_ANY_ID,
++ .subdevice = PCI_ANY_ID,
++ .setup = pci_pericom_setup,
++ },
+ /*
+ * SBS Technologies, Inc., PMC-OCTALPRO 232
+ */
+@@ -4575,10 +4680,10 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ */
+ { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_2SDB,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+- pbn_pericom_PI7C9X7954 },
++ pbn_pericom_PI7C9X7952 },
+ { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_COM_2S,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+- pbn_pericom_PI7C9X7954 },
++ pbn_pericom_PI7C9X7952 },
+ { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SDB,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ pbn_pericom_PI7C9X7954 },
+@@ -4587,10 +4692,10 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ pbn_pericom_PI7C9X7954 },
+ { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM232_2DB,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+- pbn_pericom_PI7C9X7954 },
++ pbn_pericom_PI7C9X7952 },
+ { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_COM232_2,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+- pbn_pericom_PI7C9X7954 },
++ pbn_pericom_PI7C9X7952 },
+ { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM232_4DB,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ pbn_pericom_PI7C9X7954 },
+@@ -4599,10 +4704,10 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ pbn_pericom_PI7C9X7954 },
+ { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_2SMDB,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+- pbn_pericom_PI7C9X7954 },
++ pbn_pericom_PI7C9X7952 },
+ { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_COM_2SM,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+- pbn_pericom_PI7C9X7954 },
++ pbn_pericom_PI7C9X7952 },
+ { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SMDB,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ pbn_pericom_PI7C9X7954 },
+@@ -4611,13 +4716,13 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ pbn_pericom_PI7C9X7954 },
+ { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_ICM485_1,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+- pbn_pericom_PI7C9X7954 },
++ pbn_pericom_PI7C9X7951 },
+ { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_ICM422_2,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+- pbn_pericom_PI7C9X7954 },
++ pbn_pericom_PI7C9X7952 },
+ { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_ICM485_2,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+- pbn_pericom_PI7C9X7954 },
++ pbn_pericom_PI7C9X7952 },
+ { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_ICM422_4,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ pbn_pericom_PI7C9X7954 },
+@@ -4626,16 +4731,16 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ pbn_pericom_PI7C9X7954 },
+ { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM_2S,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+- pbn_pericom_PI7C9X7954 },
++ pbn_pericom_PI7C9X7952 },
+ { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM_4S,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ pbn_pericom_PI7C9X7954 },
+ { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM232_2,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+- pbn_pericom_PI7C9X7954 },
++ pbn_pericom_PI7C9X7952 },
+ { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_ICM232_2,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+- pbn_pericom_PI7C9X7954 },
++ pbn_pericom_PI7C9X7952 },
+ { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM232_4,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ pbn_pericom_PI7C9X7954 },
+@@ -4644,13 +4749,13 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ pbn_pericom_PI7C9X7954 },
+ { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM_2SM,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+- pbn_pericom_PI7C9X7954 },
++ pbn_pericom_PI7C9X7952 },
+ { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM422_4,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+- pbn_pericom_PI7C9X7958 },
++ pbn_pericom_PI7C9X7954 },
+ { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM485_4,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+- pbn_pericom_PI7C9X7958 },
++ pbn_pericom_PI7C9X7954 },
+ { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM422_8,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ pbn_pericom_PI7C9X7958 },
+@@ -4659,19 +4764,19 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ pbn_pericom_PI7C9X7958 },
+ { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM232_4,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+- pbn_pericom_PI7C9X7958 },
++ pbn_pericom_PI7C9X7954 },
+ { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM232_8,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ pbn_pericom_PI7C9X7958 },
+ { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SM,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+- pbn_pericom_PI7C9X7958 },
++ pbn_pericom_PI7C9X7954 },
+ { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_8SM,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ pbn_pericom_PI7C9X7958 },
+ { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM_4SM,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+- pbn_pericom_PI7C9X7958 },
++ pbn_pericom_PI7C9X7954 },
+ /*
+ * Topic TP560 Data/Fax/Voice 56k modem (reported by Evan Clarke)
+ */
+diff --git a/drivers/tty/serial/8250/8250_pxa.c b/drivers/tty/serial/8250/8250_pxa.c
+index b9bcbe20a2be..c47188860e32 100644
+--- a/drivers/tty/serial/8250/8250_pxa.c
++++ b/drivers/tty/serial/8250/8250_pxa.c
+@@ -113,6 +113,10 @@ static int serial_pxa_probe(struct platform_device *pdev)
+ if (ret)
+ return ret;
+
++ ret = of_alias_get_id(pdev->dev.of_node, "serial");
++ if (ret >= 0)
++ uart.port.line = ret;
++
+ uart.port.type = PORT_XSCALE;
+ uart.port.iotype = UPIO_MEM32;
+ uart.port.mapbase = mmres->start;
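The 8250_pxa hunk pins device numbering to the device tree: of_alias_get_id() returns the numeric suffix of a matching "serial" alias, or a negative errno when no alias exists, in which case the driver keeps its default dynamic line assignment. The idiom, with an illustrative DT alias in the comment:

    /* With:  aliases { serial2 = &uart2; };  this port becomes ttyS2. */
    ret = of_alias_get_id(pdev->dev.of_node, "serial");
    if (ret >= 0)
            uart.port.line = ret;
    /* negative return: no alias, fall back to first-come numbering */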
+diff --git a/drivers/tty/serial/atmel_serial.c b/drivers/tty/serial/atmel_serial.c
+index 05147fe24343..0b4f36905321 100644
+--- a/drivers/tty/serial/atmel_serial.c
++++ b/drivers/tty/serial/atmel_serial.c
+@@ -166,6 +166,8 @@ struct atmel_uart_port {
+ unsigned int pending_status;
+ spinlock_t lock_suspended;
+
++ bool hd_start_rx; /* can start RX during half-duplex operation */
++
+ /* ISO7816 */
+ unsigned int fidi_min;
+ unsigned int fidi_max;
+@@ -231,6 +233,13 @@ static inline void atmel_uart_write_char(struct uart_port *port, u8 value)
+ __raw_writeb(value, port->membase + ATMEL_US_THR);
+ }
+
++static inline int atmel_uart_is_half_duplex(struct uart_port *port)
++{
++ return ((port->rs485.flags & SER_RS485_ENABLED) &&
++ !(port->rs485.flags & SER_RS485_RX_DURING_TX)) ||
++ (port->iso7816.flags & SER_ISO7816_ENABLED);
++}
++
+ #ifdef CONFIG_SERIAL_ATMEL_PDC
+ static bool atmel_use_pdc_rx(struct uart_port *port)
+ {
+@@ -608,10 +617,9 @@ static void atmel_stop_tx(struct uart_port *port)
+ /* Disable interrupts */
+ atmel_uart_writel(port, ATMEL_US_IDR, atmel_port->tx_done_mask);
+
+- if (((port->rs485.flags & SER_RS485_ENABLED) &&
+- !(port->rs485.flags & SER_RS485_RX_DURING_TX)) ||
+- port->iso7816.flags & SER_ISO7816_ENABLED)
++ if (atmel_uart_is_half_duplex(port))
+ atmel_start_rx(port);
++
+ }
+
+ /*
+@@ -628,9 +636,7 @@ static void atmel_start_tx(struct uart_port *port)
+ return;
+
+ if (atmel_use_pdc_tx(port) || atmel_use_dma_tx(port))
+- if (((port->rs485.flags & SER_RS485_ENABLED) &&
+- !(port->rs485.flags & SER_RS485_RX_DURING_TX)) ||
+- port->iso7816.flags & SER_ISO7816_ENABLED)
++ if (atmel_uart_is_half_duplex(port))
+ atmel_stop_rx(port);
+
+ if (atmel_use_pdc_tx(port))
+@@ -928,11 +934,14 @@ static void atmel_complete_tx_dma(void *arg)
+ */
+ if (!uart_circ_empty(xmit))
+ atmel_tasklet_schedule(atmel_port, &atmel_port->tasklet_tx);
+- else if (((port->rs485.flags & SER_RS485_ENABLED) &&
+- !(port->rs485.flags & SER_RS485_RX_DURING_TX)) ||
+- port->iso7816.flags & SER_ISO7816_ENABLED) {
+- /* DMA done, stop TX, start RX for RS485 */
+- atmel_start_rx(port);
++ else if (atmel_uart_is_half_duplex(port)) {
++ /*
++ * DMA done, re-enable TXEMPTY and signal that we can stop
++ * TX and start RX for RS485
++ */
++ atmel_port->hd_start_rx = true;
++ atmel_uart_writel(port, ATMEL_US_IER,
++ atmel_port->tx_done_mask);
+ }
+
+ spin_unlock_irqrestore(&port->lock, flags);
+@@ -1288,6 +1297,10 @@ static int atmel_prepare_rx_dma(struct uart_port *port)
+ sg_dma_len(&atmel_port->sg_rx)/2,
+ DMA_DEV_TO_MEM,
+ DMA_PREP_INTERRUPT);
++ if (!desc) {
++ dev_err(port->dev, "Preparing DMA cyclic failed\n");
++ goto chan_err;
++ }
+ desc->callback = atmel_complete_rx_dma;
+ desc->callback_param = port;
+ atmel_port->desc_rx = desc;
+@@ -1376,9 +1389,20 @@ atmel_handle_transmit(struct uart_port *port, unsigned int pending)
+ struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
+
+ if (pending & atmel_port->tx_done_mask) {
+- /* Either PDC or interrupt transmission */
+ atmel_uart_writel(port, ATMEL_US_IDR,
+ atmel_port->tx_done_mask);
++
++ /* Start RX if flag was set and FIFO is empty */
++ if (atmel_port->hd_start_rx) {
++ if (!(atmel_uart_readl(port, ATMEL_US_CSR)
++ & ATMEL_US_TXEMPTY))
++ dev_warn(port->dev, "Should start RX, but TX fifo is not empty\n");
++
++ atmel_port->hd_start_rx = false;
++ atmel_start_rx(port);
++ return;
++ }
++
+ atmel_tasklet_schedule(atmel_port, &atmel_port->tasklet_tx);
+ }
+ }
+@@ -1508,9 +1532,7 @@ static void atmel_tx_pdc(struct uart_port *port)
+ atmel_uart_writel(port, ATMEL_US_IER,
+ atmel_port->tx_done_mask);
+ } else {
+- if (((port->rs485.flags & SER_RS485_ENABLED) &&
+- !(port->rs485.flags & SER_RS485_RX_DURING_TX)) ||
+- port->iso7816.flags & SER_ISO7816_ENABLED) {
++ if (atmel_uart_is_half_duplex(port)) {
+ /* DMA done, stop TX, start RX for RS485 */
+ atmel_start_rx(port);
+ }
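Two things happen in the atmel_serial hunks: the repeated RS485/ISO7816 triple condition is factored into atmel_uart_is_half_duplex(), and the DMA-complete path no longer turns the line around immediately. Instead it sets a flag and re-enables the TX-done interrupt, and only the interrupt handler, once the transmitter is actually empty, starts RX. The handshake reduced to its skeleton (hypothetical names, not the driver's full logic):

    /* DMA completion: data handed to the FIFO, not yet on the wire */
    port->hd_start_rx = true;
    enable_irq_source(port, TX_DONE);

    /* TX-done interrupt: shifter drained, safe to turn the line around */
    if (port->hd_start_rx) {
            port->hd_start_rx = false;
            start_rx(port);
    }

Switching to RX in the DMA callback would have enabled the receiver while the last bytes were still leaving the shifter, which on a half-duplex RS485 bus means echoing your own transmission back as received data.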
+diff --git a/drivers/tty/serial/kgdboc.c b/drivers/tty/serial/kgdboc.c
+index 6fb312e7af71..bfe5e9e034ec 100644
+--- a/drivers/tty/serial/kgdboc.c
++++ b/drivers/tty/serial/kgdboc.c
+@@ -148,8 +148,10 @@ static int configure_kgdboc(void)
+ char *cptr = config;
+ struct console *cons;
+
+- if (!strlen(config) || isspace(config[0]))
++ if (!strlen(config) || isspace(config[0])) {
++ err = 0;
+ goto noconfig;
++ }
+
+ kgdboc_io_ops.is_console = 0;
+ kgdb_tty_driver = NULL;
+diff --git a/drivers/tty/serial/max310x.c b/drivers/tty/serial/max310x.c
+index 4f479841769a..0fdf3a760aa0 100644
+--- a/drivers/tty/serial/max310x.c
++++ b/drivers/tty/serial/max310x.c
+@@ -1416,6 +1416,8 @@ static int max310x_spi_probe(struct spi_device *spi)
+ if (spi->dev.of_node) {
+ const struct of_device_id *of_id =
+ of_match_device(max310x_dt_ids, &spi->dev);
++ if (!of_id)
++ return -ENODEV;
+
+ devtype = (struct max310x_devtype *)of_id->data;
+ } else {
+diff --git a/drivers/tty/serial/mvebu-uart.c b/drivers/tty/serial/mvebu-uart.c
+index 231f751d1ef4..7e7b1559fa36 100644
+--- a/drivers/tty/serial/mvebu-uart.c
++++ b/drivers/tty/serial/mvebu-uart.c
+@@ -810,6 +810,9 @@ static int mvebu_uart_probe(struct platform_device *pdev)
+ return -EINVAL;
+ }
+
++ if (!match)
++ return -ENODEV;
++
+ /* Assume that all UART ports have a DT alias or none has */
+ id = of_alias_get_id(pdev->dev.of_node, "serial");
+ if (!pdev->dev.of_node || id < 0)
+diff --git a/drivers/tty/serial/mxs-auart.c b/drivers/tty/serial/mxs-auart.c
+index 27235a526cce..4c188f4079b3 100644
+--- a/drivers/tty/serial/mxs-auart.c
++++ b/drivers/tty/serial/mxs-auart.c
+@@ -1686,6 +1686,10 @@ static int mxs_auart_probe(struct platform_device *pdev)
+
+ s->port.mapbase = r->start;
+ s->port.membase = ioremap(r->start, resource_size(r));
++ if (!s->port.membase) {
++ ret = -ENOMEM;
++ goto out_disable_clks;
++ }
+ s->port.ops = &mxs_auart_ops;
+ s->port.iotype = UPIO_MEM;
+ s->port.fifosize = MXS_AUART_FIFO_SIZE;
+diff --git a/drivers/tty/serial/qcom_geni_serial.c b/drivers/tty/serial/qcom_geni_serial.c
+index 38016609c7fa..d30502c58106 100644
+--- a/drivers/tty/serial/qcom_geni_serial.c
++++ b/drivers/tty/serial/qcom_geni_serial.c
+@@ -1117,7 +1117,7 @@ static int __init qcom_geni_console_setup(struct console *co, char *options)
+ {
+ struct uart_port *uport;
+ struct qcom_geni_serial_port *port;
+- int baud;
++ int baud = 9600;
+ int bits = 8;
+ int parity = 'n';
+ int flow = 'n';
+diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
+index 64bbeb7d7e0c..93bd90f1ff14 100644
+--- a/drivers/tty/serial/sh-sci.c
++++ b/drivers/tty/serial/sh-sci.c
+@@ -838,19 +838,9 @@ static void sci_transmit_chars(struct uart_port *port)
+
+ if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
+ uart_write_wakeup(port);
+- if (uart_circ_empty(xmit)) {
++ if (uart_circ_empty(xmit))
+ sci_stop_tx(port);
+- } else {
+- ctrl = serial_port_in(port, SCSCR);
+-
+- if (port->type != PORT_SCI) {
+- serial_port_in(port, SCxSR); /* Dummy read */
+- sci_clear_SCxSR(port, SCxSR_TDxE_CLEAR(port));
+- }
+
+- ctrl |= SCSCR_TIE;
+- serial_port_out(port, SCSCR, ctrl);
+- }
+ }
+
+ /* On SH3, SCIF may read end-of-break as a space->mark char */
+diff --git a/drivers/tty/serial/xilinx_uartps.c b/drivers/tty/serial/xilinx_uartps.c
+index 094f2958cb2b..ee9f18c52d29 100644
+--- a/drivers/tty/serial/xilinx_uartps.c
++++ b/drivers/tty/serial/xilinx_uartps.c
+@@ -364,7 +364,13 @@ static irqreturn_t cdns_uart_isr(int irq, void *dev_id)
+ cdns_uart_handle_tx(dev_id);
+ isrstatus &= ~CDNS_UART_IXR_TXEMPTY;
+ }
+- if (isrstatus & CDNS_UART_IXR_RXMASK)
++
++ /*
++ * Skip RX processing if RX is disabled: RXEMPTY will never be set,
++ * since read bytes are not removed from the FIFO.
++ */
++ if (isrstatus & CDNS_UART_IXR_RXMASK &&
++ !(readl(port->membase + CDNS_UART_CR) & CDNS_UART_CR_RX_DIS))
+ cdns_uart_handle_rx(dev_id, isrstatus);
+
+ spin_unlock(&port->lock);
+diff --git a/drivers/tty/tty_buffer.c b/drivers/tty/tty_buffer.c
+index 77070c2d1240..ec145a59f199 100644
+--- a/drivers/tty/tty_buffer.c
++++ b/drivers/tty/tty_buffer.c
+@@ -26,7 +26,7 @@
+ * Byte threshold to limit memory consumption for flip buffers.
+ * The actual memory limit is > 2x this amount.
+ */
+-#define TTYB_DEFAULT_MEM_LIMIT 65536
++#define TTYB_DEFAULT_MEM_LIMIT (640 * 1024UL)
+
+ /*
+ * We default to dicing tty buffer allocations to this many characters
+diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
+index 21ffcce16927..5fa250157025 100644
+--- a/drivers/tty/tty_io.c
++++ b/drivers/tty/tty_io.c
+@@ -513,6 +513,8 @@ static const struct file_operations hung_up_tty_fops = {
+ static DEFINE_SPINLOCK(redirect_lock);
+ static struct file *redirect;
+
++extern void tty_sysctl_init(void);
++
+ /**
+ * tty_wakeup - request more data
+ * @tty: terminal
+@@ -3483,6 +3485,7 @@ void console_sysfs_notify(void)
+ */
+ int __init tty_init(void)
+ {
++ tty_sysctl_init();
+ cdev_init(&tty_cdev, &tty_fops);
+ if (cdev_add(&tty_cdev, MKDEV(TTYAUX_MAJOR, 0), 1) ||
+ register_chrdev_region(MKDEV(TTYAUX_MAJOR, 0), 1, "/dev/tty") < 0)
+diff --git a/drivers/tty/tty_ldisc.c b/drivers/tty/tty_ldisc.c
+index 45eda69b150c..e38f104db174 100644
+--- a/drivers/tty/tty_ldisc.c
++++ b/drivers/tty/tty_ldisc.c
+@@ -156,6 +156,13 @@ static void put_ldops(struct tty_ldisc_ops *ldops)
+ * takes tty_ldiscs_lock to guard against ldisc races
+ */
+
++#if defined(CONFIG_LDISC_AUTOLOAD)
++ #define INITIAL_AUTOLOAD_STATE 1
++#else
++ #define INITIAL_AUTOLOAD_STATE 0
++#endif
++static int tty_ldisc_autoload = INITIAL_AUTOLOAD_STATE;
++
+ static struct tty_ldisc *tty_ldisc_get(struct tty_struct *tty, int disc)
+ {
+ struct tty_ldisc *ld;
+@@ -170,6 +177,8 @@ static struct tty_ldisc *tty_ldisc_get(struct tty_struct *tty, int disc)
+ */
+ ldops = get_ldops(disc);
+ if (IS_ERR(ldops)) {
++ if (!capable(CAP_SYS_MODULE) && !tty_ldisc_autoload)
++ return ERR_PTR(-EPERM);
+ request_module("tty-ldisc-%d", disc);
+ ldops = get_ldops(disc);
+ if (IS_ERR(ldops))
+@@ -845,3 +854,41 @@ void tty_ldisc_deinit(struct tty_struct *tty)
+ tty_ldisc_put(tty->ldisc);
+ tty->ldisc = NULL;
+ }
++
++static int zero;
++static int one = 1;
++static struct ctl_table tty_table[] = {
++ {
++ .procname = "ldisc_autoload",
++ .data = &tty_ldisc_autoload,
++ .maxlen = sizeof(tty_ldisc_autoload),
++ .mode = 0644,
++ .proc_handler = proc_dointvec,
++ .extra1 = &zero,
++ .extra2 = &one,
++ },
++ { }
++};
++
++static struct ctl_table tty_dir_table[] = {
++ {
++ .procname = "tty",
++ .mode = 0555,
++ .child = tty_table,
++ },
++ { }
++};
++
++static struct ctl_table tty_root_table[] = {
++ {
++ .procname = "dev",
++ .mode = 0555,
++ .child = tty_dir_table,
++ },
++ { }
++};
++
++void tty_sysctl_init(void)
++{
++ register_sysctl_table(tty_root_table);
++}
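The sysctl tables above hang dev.tty.ldisc_autoload off a dev/tty hierarchy built from nested ctl_table arrays. One caveat for anyone copying the shape: plain proc_dointvec does not consult extra1/extra2, so a handler that actually enforces the 0..1 bounds would use proc_dointvec_minmax instead, as in this hedged sketch:

    static int zero, one = 1;
    static struct ctl_table example_table[] = {
            {
                    .procname     = "ldisc_autoload",
                    .data         = &tty_ldisc_autoload,
                    .maxlen       = sizeof(tty_ldisc_autoload),
                    .mode         = 0644,
                    .proc_handler = proc_dointvec_minmax, /* honors extra1/2 */
                    .extra1       = &zero,
                    .extra2       = &one,
            },
            { }
    };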
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index bba75560d11e..9646ff63e77a 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -935,8 +935,11 @@ static void flush_scrollback(struct vc_data *vc)
+ {
+ WARN_CONSOLE_UNLOCKED();
+
++ set_origin(vc);
+ if (vc->vc_sw->con_flush_scrollback)
+ vc->vc_sw->con_flush_scrollback(vc);
++ else
++ vc->vc_sw->con_switch(vc);
+ }
+
+ /*
+@@ -1503,8 +1506,10 @@ static void csi_J(struct vc_data *vc, int vpar)
+ count = ((vc->vc_pos - vc->vc_origin) >> 1) + 1;
+ start = (unsigned short *)vc->vc_origin;
+ break;
++ case 3: /* include scrollback */
++ flush_scrollback(vc);
++ /* fallthrough */
+ case 2: /* erase whole display */
+- case 3: /* (and scrollback buffer later) */
+ vc_uniscr_clear_lines(vc, 0, vc->vc_rows);
+ count = vc->vc_cols * vc->vc_rows;
+ start = (unsigned short *)vc->vc_origin;
+@@ -1513,13 +1518,7 @@ static void csi_J(struct vc_data *vc, int vpar)
+ return;
+ }
+ scr_memsetw(start, vc->vc_video_erase_char, 2 * count);
+- if (vpar == 3) {
+- set_origin(vc);
+- flush_scrollback(vc);
+- if (con_is_visible(vc))
+- update_screen(vc);
+- } else if (con_should_update(vc))
+- do_update_region(vc, (unsigned long) start, count);
++ update_region(vc, (unsigned long) start, count);
+ vc->vc_need_wrap = 0;
+ }
+
+diff --git a/drivers/usb/chipidea/ci_hdrc_tegra.c b/drivers/usb/chipidea/ci_hdrc_tegra.c
+index 772851bee99b..12025358bb3c 100644
+--- a/drivers/usb/chipidea/ci_hdrc_tegra.c
++++ b/drivers/usb/chipidea/ci_hdrc_tegra.c
+@@ -130,6 +130,7 @@ static int tegra_udc_remove(struct platform_device *pdev)
+ {
+ struct tegra_udc *udc = platform_get_drvdata(pdev);
+
++ ci_hdrc_remove_device(udc->dev);
+ usb_phy_set_suspend(udc->phy, 1);
+ clk_disable_unprepare(udc->clk);
+
+diff --git a/drivers/usb/chipidea/core.c b/drivers/usb/chipidea/core.c
+index 7bfcbb23c2a4..016e4004fe9d 100644
+--- a/drivers/usb/chipidea/core.c
++++ b/drivers/usb/chipidea/core.c
+@@ -954,8 +954,15 @@ static int ci_hdrc_probe(struct platform_device *pdev)
+ } else if (ci->platdata->usb_phy) {
+ ci->usb_phy = ci->platdata->usb_phy;
+ } else {
++ ci->usb_phy = devm_usb_get_phy_by_phandle(dev->parent, "phys",
++ 0);
+ ci->phy = devm_phy_get(dev->parent, "usb-phy");
+- ci->usb_phy = devm_usb_get_phy(dev->parent, USB_PHY_TYPE_USB2);
++
++ /* Fallback to grabbing any registered USB2 PHY */
++ if (IS_ERR(ci->usb_phy) &&
++ PTR_ERR(ci->usb_phy) != -EPROBE_DEFER)
++ ci->usb_phy = devm_usb_get_phy(dev->parent,
++ USB_PHY_TYPE_USB2);
+
+ /* if both generic PHY and USB PHY layers aren't enabled */
+ if (PTR_ERR(ci->phy) == -ENOSYS &&
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 739f8960811a..ec666eb4b7b4 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -558,10 +558,8 @@ static void acm_softint(struct work_struct *work)
+ clear_bit(EVENT_RX_STALL, &acm->flags);
+ }
+
+- if (test_bit(EVENT_TTY_WAKEUP, &acm->flags)) {
++ if (test_and_clear_bit(EVENT_TTY_WAKEUP, &acm->flags))
+ tty_port_tty_wakeup(&acm->port);
+- clear_bit(EVENT_TTY_WAKEUP, &acm->flags);
+- }
+ }
+
+ /*
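The cdc-acm change closes a small lost-event window: with a separate test_bit() followed by clear_bit(), a bit set by a completion handler between the two calls is wiped without ever being acted on. test_and_clear_bit() performs the observation and the clear as one atomic step. The difference, sketched:

    /* Racy: an event raised between the test and the clear is lost. */
    if (test_bit(EVENT, &flags)) {
            handle_event();
            clear_bit(EVENT, &flags);   /* may clear a *newer* event */
    }

    /* Atomic: either we observe and consume the event, or we leave it set. */
    if (test_and_clear_bit(EVENT, &flags))
            handle_event();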
+diff --git a/drivers/usb/common/common.c b/drivers/usb/common/common.c
+index 48277bbc15e4..73c8e6591746 100644
+--- a/drivers/usb/common/common.c
++++ b/drivers/usb/common/common.c
+@@ -145,6 +145,8 @@ enum usb_dr_mode of_usb_get_dr_mode_by_phy(struct device_node *np, int arg0)
+
+ do {
+ controller = of_find_node_with_property(controller, "phys");
++ if (!of_device_is_available(controller))
++ continue;
+ index = 0;
+ do {
+ if (arg0 == -1) {
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 6c9b76bcc2e1..8d1dbe36db92 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -3339,6 +3339,8 @@ int dwc3_gadget_init(struct dwc3 *dwc)
+ goto err4;
+ }
+
++ dwc3_gadget_set_speed(&dwc->gadget, dwc->maximum_speed);
++
+ return 0;
+
+ err4:
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index 1e5430438703..0f8d16de7a37 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -1082,6 +1082,7 @@ static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data)
+ * condition with req->complete callback.
+ */
+ usb_ep_dequeue(ep->ep, req);
++ wait_for_completion(&done);
+ interrupted = ep->status < 0;
+ }
+
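The single added line in f_fs matters because usb_ep_dequeue() only requests cancellation; the request's completion callback can still run afterwards, and here that callback completes a completion object living on the interrupted thread's stack. Waiting for it before returning keeps the callback from writing into a stack frame that no longer exists. The reduced shape of the fix (not the full function):

    DECLARE_COMPLETION_ONSTACK(done);           /* req->context points here */
    ...
    if (wait_for_completion_interruptible(&done)) {
            usb_ep_dequeue(ep->ep, req);        /* cancellation is asynchronous */
            wait_for_completion(&done);         /* callback may still fire; wait it out */
            interrupted = ep->status < 0;
    }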
+diff --git a/drivers/usb/gadget/function/f_hid.c b/drivers/usb/gadget/function/f_hid.c
+index 75b113a5b25c..f3816a5c861e 100644
+--- a/drivers/usb/gadget/function/f_hid.c
++++ b/drivers/usb/gadget/function/f_hid.c
+@@ -391,20 +391,20 @@ try_again:
+ req->complete = f_hidg_req_complete;
+ req->context = hidg;
+
++ spin_unlock_irqrestore(&hidg->write_spinlock, flags);
++
+ status = usb_ep_queue(hidg->in_ep, req, GFP_ATOMIC);
+ if (status < 0) {
+ ERROR(hidg->func.config->cdev,
+ "usb_ep_queue error on int endpoint %zd\n", status);
+- goto release_write_pending_unlocked;
++ goto release_write_pending;
+ } else {
+ status = count;
+ }
+- spin_unlock_irqrestore(&hidg->write_spinlock, flags);
+
+ return status;
+ release_write_pending:
+ spin_lock_irqsave(&hidg->write_spinlock, flags);
+-release_write_pending_unlocked:
+ hidg->write_pending = 0;
+ spin_unlock_irqrestore(&hidg->write_spinlock, flags);
+
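The f_hid rework drops write_spinlock before calling usb_ep_queue(). Queueing can invoke the completion callback synchronously (for instance on immediate failure), and that callback takes the same lock, so queueing with it held risks a self-deadlock; the error path now retakes the lock only for bookkeeping. The pattern, sketched with generic names:

    spin_unlock_irqrestore(&lock, flags);       /* never queue under the lock */
    status = usb_ep_queue(ep, req, GFP_ATOMIC);
    if (status < 0) {
            spin_lock_irqsave(&lock, flags);    /* retake just to clear state */
            write_pending = 0;
            spin_unlock_irqrestore(&lock, flags);
    }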
+diff --git a/drivers/usb/host/xhci-dbgcap.c b/drivers/usb/host/xhci-dbgcap.c
+index 86cff5c28eff..ba841c569c48 100644
+--- a/drivers/usb/host/xhci-dbgcap.c
++++ b/drivers/usb/host/xhci-dbgcap.c
+@@ -516,7 +516,6 @@ static int xhci_do_dbc_stop(struct xhci_hcd *xhci)
+ return -1;
+
+ writel(0, &dbc->regs->control);
+- xhci_dbc_mem_cleanup(xhci);
+ dbc->state = DS_DISABLED;
+
+ return 0;
+@@ -562,8 +561,10 @@ static void xhci_dbc_stop(struct xhci_hcd *xhci)
+ ret = xhci_do_dbc_stop(xhci);
+ spin_unlock_irqrestore(&dbc->lock, flags);
+
+- if (!ret)
++ if (!ret) {
++ xhci_dbc_mem_cleanup(xhci);
+ pm_runtime_put_sync(xhci_to_hcd(xhci)->self.controller);
++ }
+ }
+
+ static void
+diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
+index e2eece693655..96a740543183 100644
+--- a/drivers/usb/host/xhci-hub.c
++++ b/drivers/usb/host/xhci-hub.c
+@@ -1545,20 +1545,25 @@ int xhci_bus_suspend(struct usb_hcd *hcd)
+ port_index = max_ports;
+ while (port_index--) {
+ u32 t1, t2;
+-
++ int retries = 10;
++retry:
+ t1 = readl(ports[port_index]->addr);
+ t2 = xhci_port_state_to_neutral(t1);
+ portsc_buf[port_index] = 0;
+
+- /* Bail out if a USB3 port has a new device in link training */
+- if ((hcd->speed >= HCD_USB3) &&
++ /*
++ * Give a USB3 port in link training time to finish, but don't
++ * prevent suspend as port might be stuck
++ */
++ if ((hcd->speed >= HCD_USB3) && retries-- &&
+ (t1 & PORT_PLS_MASK) == XDEV_POLLING) {
+- bus_state->bus_suspended = 0;
+ spin_unlock_irqrestore(&xhci->lock, flags);
+- xhci_dbg(xhci, "Bus suspend bailout, port in polling\n");
+- return -EBUSY;
++ msleep(XHCI_PORT_POLLING_LFPS_TIME);
++ spin_lock_irqsave(&xhci->lock, flags);
++ xhci_dbg(xhci, "port %d polling in bus suspend, waiting\n",
++ port_index);
++ goto retry;
+ }
+-
+ /* suspend ports in U0, or bail out for new connect changes */
+ if ((t1 & PORT_PE) && (t1 & PORT_PLS_MASK) == XDEV_U0) {
+ if ((t1 & PORT_CSC) && wake_enabled) {
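Instead of aborting bus suspend whenever a USB3 port is still in link training, the loop above now re-reads the port up to 10 times with a 36 ms sleep between attempts, matching the 360 ms tPollingLFPSTimeout constant added later in xhci.h, and proceeds with suspend if the port never leaves Polling. A generic bounded-wait skeleton of the same idea:

    int retries = 10;

    while (retries--) {
            if (!port_in_polling(port))
                    break;                      /* training finished */
            spin_unlock_irqrestore(&lock, flags);
            msleep(36);                         /* 10 * 36ms ~= tPollingLFPSTimeout */
            spin_lock_irqsave(&lock, flags);    /* re-read state under the lock */
    }
    /* fall through and suspend even if still polling: the port may be stuck */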
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index a9ec7051f286..c2fe218e051f 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -194,6 +194,7 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ xhci->quirks |= XHCI_SSIC_PORT_UNUSED;
+ if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
+ (pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI ||
++ pdev->device == PCI_DEVICE_ID_INTEL_SUNRISEPOINT_LP_XHCI ||
+ pdev->device == PCI_DEVICE_ID_INTEL_APL_XHCI))
+ xhci->quirks |= XHCI_INTEL_USB_ROLE_SW;
+ if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
+diff --git a/drivers/usb/host/xhci-rcar.c b/drivers/usb/host/xhci-rcar.c
+index a6e463715779..671bce18782c 100644
+--- a/drivers/usb/host/xhci-rcar.c
++++ b/drivers/usb/host/xhci-rcar.c
+@@ -246,6 +246,7 @@ int xhci_rcar_init_quirk(struct usb_hcd *hcd)
+ if (!xhci_rcar_wait_for_pll_active(hcd))
+ return -ETIMEDOUT;
+
++ xhci->quirks |= XHCI_TRUST_TX_LENGTH;
+ return xhci_rcar_download_firmware(hcd);
+ }
+
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 40fa25c4d041..9215a28dad40 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -1647,10 +1647,13 @@ static void handle_port_status(struct xhci_hcd *xhci,
+ }
+ }
+
+- if ((portsc & PORT_PLC) && (portsc & PORT_PLS_MASK) == XDEV_U0 &&
+- DEV_SUPERSPEED_ANY(portsc)) {
++ if ((portsc & PORT_PLC) &&
++ DEV_SUPERSPEED_ANY(portsc) &&
++ ((portsc & PORT_PLS_MASK) == XDEV_U0 ||
++ (portsc & PORT_PLS_MASK) == XDEV_U1 ||
++ (portsc & PORT_PLS_MASK) == XDEV_U2)) {
+ xhci_dbg(xhci, "resume SS port %d finished\n", port_id);
+- /* We've just brought the device into U0 through either the
++ /* We've just brought the device into U0/1/2 through either the
+ * Resume state after a device remote wakeup, or through the
+ * U3Exit state after a host-initiated resume. If it's a device
+ * initiated remote wake, don't pass up the link state change,
+diff --git a/drivers/usb/host/xhci-tegra.c b/drivers/usb/host/xhci-tegra.c
+index 938ff06c0349..efb0cad8710e 100644
+--- a/drivers/usb/host/xhci-tegra.c
++++ b/drivers/usb/host/xhci-tegra.c
+@@ -941,9 +941,9 @@ static void tegra_xusb_powerdomain_remove(struct device *dev,
+ device_link_del(tegra->genpd_dl_ss);
+ if (tegra->genpd_dl_host)
+ device_link_del(tegra->genpd_dl_host);
+- if (tegra->genpd_dev_ss)
++ if (!IS_ERR_OR_NULL(tegra->genpd_dev_ss))
+ dev_pm_domain_detach(tegra->genpd_dev_ss, true);
+- if (tegra->genpd_dev_host)
++ if (!IS_ERR_OR_NULL(tegra->genpd_dev_host))
+ dev_pm_domain_detach(tegra->genpd_dev_host, true);
+ }
+
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 652dc36e3012..9334cdee382a 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -452,6 +452,14 @@ struct xhci_op_regs {
+ */
+ #define XHCI_DEFAULT_BESL 4
+
++/*
++ * USB3 specification define a 360ms tPollingLFPSTiemout for USB3 ports
++ * to complete link training. usually link trainig completes much faster
++ * so check status 10 times with 36ms sleep in places we need to wait for
++ * polling to complete.
++ */
++#define XHCI_PORT_POLLING_LFPS_TIME 36
++
+ /**
+ * struct xhci_intr_reg - Interrupt Register Set
+ * @irq_pending: IMAN - Interrupt Management Register. Used to enable
+diff --git a/drivers/usb/mtu3/Kconfig b/drivers/usb/mtu3/Kconfig
+index 40bbf1f53337..fe58904f350b 100644
+--- a/drivers/usb/mtu3/Kconfig
++++ b/drivers/usb/mtu3/Kconfig
+@@ -4,6 +4,7 @@ config USB_MTU3
+ tristate "MediaTek USB3 Dual Role controller"
+ depends on USB || USB_GADGET
+ depends on ARCH_MEDIATEK || COMPILE_TEST
++ depends on EXTCON || !EXTCON
+ select USB_XHCI_MTK if USB_SUPPORT && USB_XHCI_HCD
+ help
+ Say Y or M here if your system runs on MediaTek SoCs with
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index c0777a374a88..e732949f6567 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -61,6 +61,7 @@ static const struct usb_device_id id_table[] = {
+ { USB_DEVICE(0x08e6, 0x5501) }, /* Gemalto Prox-PU/CU contactless smartcard reader */
+ { USB_DEVICE(0x08FD, 0x000A) }, /* Digianswer A/S , ZigBee/802.15.4 MAC Device */
+ { USB_DEVICE(0x0908, 0x01FF) }, /* Siemens RUGGEDCOM USB Serial Console */
++ { USB_DEVICE(0x0B00, 0x3070) }, /* Ingenico 3070 */
+ { USB_DEVICE(0x0BED, 0x1100) }, /* MEI (TM) Cashflow-SC Bill/Voucher Acceptor */
+ { USB_DEVICE(0x0BED, 0x1101) }, /* MEI series 2000 Combo Acceptor */
+ { USB_DEVICE(0x0FCF, 0x1003) }, /* Dynastream ANT development board */
+@@ -79,6 +80,7 @@ static const struct usb_device_id id_table[] = {
+ { USB_DEVICE(0x10C4, 0x804E) }, /* Software Bisque Paramount ME build-in converter */
+ { USB_DEVICE(0x10C4, 0x8053) }, /* Enfora EDG1228 */
+ { USB_DEVICE(0x10C4, 0x8054) }, /* Enfora GSM2228 */
++ { USB_DEVICE(0x10C4, 0x8056) }, /* Lorenz Messtechnik devices */
+ { USB_DEVICE(0x10C4, 0x8066) }, /* Argussoft In-System Programmer */
+ { USB_DEVICE(0x10C4, 0x806F) }, /* IMS USB to RS422 Converter Cable */
+ { USB_DEVICE(0x10C4, 0x807A) }, /* Crumb128 board */
+@@ -1353,8 +1355,13 @@ static int cp210x_gpio_get(struct gpio_chip *gc, unsigned int gpio)
+ if (priv->partnum == CP210X_PARTNUM_CP2105)
+ req_type = REQTYPE_INTERFACE_TO_HOST;
+
++ result = usb_autopm_get_interface(serial->interface);
++ if (result)
++ return result;
++
+ result = cp210x_read_vendor_block(serial, req_type,
+ CP210X_READ_LATCH, &buf, sizeof(buf));
++ usb_autopm_put_interface(serial->interface);
+ if (result < 0)
+ return result;
+
+@@ -1375,6 +1382,10 @@ static void cp210x_gpio_set(struct gpio_chip *gc, unsigned int gpio, int value)
+
+ buf.mask = BIT(gpio);
+
++ result = usb_autopm_get_interface(serial->interface);
++ if (result)
++ goto out;
++
+ if (priv->partnum == CP210X_PARTNUM_CP2105) {
+ result = cp210x_write_vendor_block(serial,
+ REQTYPE_HOST_TO_INTERFACE,
+@@ -1392,6 +1403,8 @@ static void cp210x_gpio_set(struct gpio_chip *gc, unsigned int gpio, int value)
+ NULL, 0, USB_CTRL_SET_TIMEOUT);
+ }
+
++ usb_autopm_put_interface(serial->interface);
++out:
+ if (result < 0) {
+ dev_err(&serial->interface->dev, "failed to set GPIO value: %d\n",
+ result);
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index 77ef4c481f3c..1d8461ae2c34 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -609,6 +609,8 @@ static const struct usb_device_id id_table_combined[] = {
+ .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
+ { USB_DEVICE(FTDI_VID, FTDI_NT_ORIONLXM_PID),
+ .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
++ { USB_DEVICE(FTDI_VID, FTDI_NT_ORIONLX_PLUS_PID) },
++ { USB_DEVICE(FTDI_VID, FTDI_NT_ORION_IO_PID) },
+ { USB_DEVICE(FTDI_VID, FTDI_SYNAPSE_SS200_PID) },
+ { USB_DEVICE(FTDI_VID, FTDI_CUSTOMWARE_MINIPLEX_PID) },
+ { USB_DEVICE(FTDI_VID, FTDI_CUSTOMWARE_MINIPLEX2_PID) },
+@@ -1025,6 +1027,8 @@ static const struct usb_device_id id_table_combined[] = {
+ { USB_DEVICE(CYPRESS_VID, CYPRESS_WICED_BT_USB_PID) },
+ { USB_DEVICE(CYPRESS_VID, CYPRESS_WICED_WL_USB_PID) },
+ { USB_DEVICE(AIRBUS_DS_VID, AIRBUS_DS_P8GR) },
++ /* EZPrototypes devices */
++ { USB_DEVICE(EZPROTOTYPES_VID, HJELMSLUND_USB485_ISO_PID) },
+ { } /* Terminating entry */
+ };
+
+diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
+index 975d02666c5a..5755f0df0025 100644
+--- a/drivers/usb/serial/ftdi_sio_ids.h
++++ b/drivers/usb/serial/ftdi_sio_ids.h
+@@ -567,7 +567,9 @@
+ /*
+ * NovaTech product ids (FTDI_VID)
+ */
+-#define FTDI_NT_ORIONLXM_PID 0x7c90 /* OrionLXm Substation Automation Platform */
++#define FTDI_NT_ORIONLXM_PID 0x7c90 /* OrionLXm Substation Automation Platform */
++#define FTDI_NT_ORIONLX_PLUS_PID 0x7c91 /* OrionLX+ Substation Automation Platform */
++#define FTDI_NT_ORION_IO_PID 0x7c92 /* Orion I/O */
+
+ /*
+ * Synapse Wireless product ids (FTDI_VID)
+@@ -1308,6 +1310,12 @@
+ #define IONICS_VID 0x1c0c
+ #define IONICS_PLUGCOMPUTER_PID 0x0102
+
++/*
++ * EZPrototypes (PID reseller)
++ */
++#define EZPROTOTYPES_VID 0x1c40
++#define HJELMSLUND_USB485_ISO_PID 0x0477
++
+ /*
+ * Dresden Elektronik Sensor Terminal Board
+ */
+diff --git a/drivers/usb/serial/mos7720.c b/drivers/usb/serial/mos7720.c
+index fc52ac75fbf6..18110225d506 100644
+--- a/drivers/usb/serial/mos7720.c
++++ b/drivers/usb/serial/mos7720.c
+@@ -366,8 +366,6 @@ static int write_parport_reg_nonblock(struct mos7715_parport *mos_parport,
+ if (!urbtrack)
+ return -ENOMEM;
+
+- kref_get(&mos_parport->ref_count);
+- urbtrack->mos_parport = mos_parport;
+ urbtrack->urb = usb_alloc_urb(0, GFP_ATOMIC);
+ if (!urbtrack->urb) {
+ kfree(urbtrack);
+@@ -388,6 +386,8 @@ static int write_parport_reg_nonblock(struct mos7715_parport *mos_parport,
+ usb_sndctrlpipe(usbdev, 0),
+ (unsigned char *)urbtrack->setup,
+ NULL, 0, async_complete, urbtrack);
++ kref_get(&mos_parport->ref_count);
++ urbtrack->mos_parport = mos_parport;
+ kref_init(&urbtrack->ref_count);
+ INIT_LIST_HEAD(&urbtrack->urblist_entry);
+
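The mos7720 hunk is purely about ordering: the kref_get() on the parport is deferred until every allocation in the function has succeeded, so the early-return error paths, which only kfree the tracker, no longer leak a reference. The rule of thumb, sketched with illustrative names:

    obj = kmalloc(sizeof(*obj), GFP_ATOMIC);
    if (!obj)
            return -ENOMEM;
    obj->buf = kmalloc(BUF_LEN, GFP_ATOMIC);
    if (!obj->buf) {
            kfree(obj);
            return -ENOMEM;             /* no ref taken yet: nothing to put */
    }
    kref_get(&parent->ref);             /* take references last ...          */
    obj->parent = parent;               /* ... once failure can't happen     */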
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index aef15497ff31..83869065b802 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -246,6 +246,7 @@ static void option_instat_callback(struct urb *urb);
+ #define QUECTEL_PRODUCT_EC25 0x0125
+ #define QUECTEL_PRODUCT_BG96 0x0296
+ #define QUECTEL_PRODUCT_EP06 0x0306
++#define QUECTEL_PRODUCT_EM12 0x0512
+
+ #define CMOTECH_VENDOR_ID 0x16d8
+ #define CMOTECH_PRODUCT_6001 0x6001
+@@ -1066,7 +1067,8 @@ static const struct usb_device_id option_ids[] = {
+ .driver_info = RSVD(3) },
+ { USB_DEVICE(QUALCOMM_VENDOR_ID, 0x6613)}, /* Onda H600/ZTE MF330 */
+ { USB_DEVICE(QUALCOMM_VENDOR_ID, 0x0023)}, /* ONYX 3G device */
+- { USB_DEVICE(QUALCOMM_VENDOR_ID, 0x9000)}, /* SIMCom SIM5218 */
++ { USB_DEVICE(QUALCOMM_VENDOR_ID, 0x9000), /* SIMCom SIM5218 */
++ .driver_info = NCTRL(0) | NCTRL(1) | NCTRL(2) | NCTRL(3) | RSVD(4) },
+ /* Quectel products using Qualcomm vendor ID */
+ { USB_DEVICE(QUALCOMM_VENDOR_ID, QUECTEL_PRODUCT_UC15)},
+ { USB_DEVICE(QUALCOMM_VENDOR_ID, QUECTEL_PRODUCT_UC20),
+@@ -1087,6 +1089,9 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0xff, 0xff),
+ .driver_info = RSVD(1) | RSVD(2) | RSVD(3) | RSVD(4) | NUMEP2 },
+ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0, 0) },
++ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0xff, 0xff),
++ .driver_info = RSVD(1) | RSVD(2) | RSVD(3) | RSVD(4) | NUMEP2 },
++ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0, 0) },
+ { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6001) },
+ { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CMU_300) },
+ { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6003),
+@@ -1148,6 +1153,8 @@ static const struct usb_device_id option_ids[] = {
+ .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
+ .driver_info = NCTRL(0) | RSVD(3) },
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1102, 0xff), /* Telit ME910 (ECM) */
++ .driver_info = NCTRL(0) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE910),
+ .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE910_USBCFG4),
+@@ -1938,10 +1945,12 @@ static const struct usb_device_id option_ids[] = {
+ .driver_info = RSVD(4) },
+ { USB_DEVICE_INTERFACE_CLASS(0x2001, 0x7e35, 0xff), /* D-Link DWM-222 */
+ .driver_info = RSVD(4) },
+- { USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e01, 0xff, 0xff, 0xff) }, /* D-Link DWM-152/C1 */
+- { USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e02, 0xff, 0xff, 0xff) }, /* D-Link DWM-156/C1 */
+- { USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x7e11, 0xff, 0xff, 0xff) }, /* D-Link DWM-156/A3 */
+- { USB_DEVICE_INTERFACE_CLASS(0x2020, 0x4000, 0xff) }, /* OLICARD300 - MT6225 */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e01, 0xff, 0xff, 0xff) }, /* D-Link DWM-152/C1 */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e02, 0xff, 0xff, 0xff) }, /* D-Link DWM-156/C1 */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x7e11, 0xff, 0xff, 0xff) }, /* D-Link DWM-156/A3 */
++ { USB_DEVICE_INTERFACE_CLASS(0x2020, 0x2031, 0xff), /* Olicard 600 */
++ .driver_info = RSVD(4) },
++ { USB_DEVICE_INTERFACE_CLASS(0x2020, 0x4000, 0xff) }, /* OLICARD300 - MT6225 */
+ { USB_DEVICE(INOVIA_VENDOR_ID, INOVIA_SEW858) },
+ { USB_DEVICE(VIATELECOM_VENDOR_ID, VIATELECOM_PRODUCT_CDS7) },
+ { USB_DEVICE_AND_INTERFACE_INFO(WETELECOM_VENDOR_ID, WETELECOM_PRODUCT_WMD200, 0xff, 0xff, 0xff) },
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index f1c39a3c7534..d34e945e5d09 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -37,6 +37,7 @@
+ S(SRC_ATTACHED), \
+ S(SRC_STARTUP), \
+ S(SRC_SEND_CAPABILITIES), \
++ S(SRC_SEND_CAPABILITIES_TIMEOUT), \
+ S(SRC_NEGOTIATE_CAPABILITIES), \
+ S(SRC_TRANSITION_SUPPLY), \
+ S(SRC_READY), \
+@@ -2966,10 +2967,34 @@ static void run_state_machine(struct tcpm_port *port)
+ /* port->hard_reset_count = 0; */
+ port->caps_count = 0;
+ port->pd_capable = true;
+- tcpm_set_state_cond(port, hard_reset_state(port),
++ tcpm_set_state_cond(port, SRC_SEND_CAPABILITIES_TIMEOUT,
+ PD_T_SEND_SOURCE_CAP);
+ }
+ break;
++ case SRC_SEND_CAPABILITIES_TIMEOUT:
++ /*
++ * Error recovery for a PD_DATA_SOURCE_CAP reply timeout.
++ *
++ * PD 2.0 sinks are supposed to accept src-capabilities with a
++ * 3.0 header and simply ignore any src PDOs which the sink does
++ * not understand such as PPS but some 2.0 sinks instead ignore
++ * the entire PD_DATA_SOURCE_CAP message, causing contract
++ * negotiation to fail.
++ *
++ * After PD_N_HARD_RESET_COUNT hard-reset attempts, we try
++ * sending src-capabilities with a lower PD revision to
++ * make these broken sinks work.
++ */
++ if (port->hard_reset_count < PD_N_HARD_RESET_COUNT) {
++ tcpm_set_state(port, HARD_RESET_SEND, 0);
++ } else if (port->negotiated_rev > PD_REV20) {
++ port->negotiated_rev--;
++ port->hard_reset_count = 0;
++ tcpm_set_state(port, SRC_SEND_CAPABILITIES, 0);
++ } else {
++ tcpm_set_state(port, hard_reset_state(port), 0);
++ }
++ break;
+ case SRC_NEGOTIATE_CAPABILITIES:
+ ret = tcpm_pd_check_request(port);
+ if (ret < 0) {
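The new SRC_SEND_CAPABILITIES_TIMEOUT state turns the reply timeout into two-stage recovery: hard-reset up to PD_N_HARD_RESET_COUNT times at the current revision, and only then step negotiated_rev down (for example 3.0 to 2.0) with a fresh retry budget, which un-wedges sinks that drop whole 3.0-format source-capability messages. The branch logic, restated with inline commentary:

    if (port->hard_reset_count < PD_N_HARD_RESET_COUNT)
            next = HARD_RESET_SEND;             /* retry at current revision */
    else if (port->negotiated_rev > PD_REV20) {
            port->negotiated_rev--;             /* fall back one PD revision */
            port->hard_reset_count = 0;         /* full retry budget again   */
            next = SRC_SEND_CAPABILITIES;
    } else
            next = hard_reset_state(port);      /* out of options            */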
+diff --git a/drivers/usb/typec/tcpm/wcove.c b/drivers/usb/typec/tcpm/wcove.c
+index 423208e19383..6770afd40765 100644
+--- a/drivers/usb/typec/tcpm/wcove.c
++++ b/drivers/usb/typec/tcpm/wcove.c
+@@ -615,8 +615,13 @@ static int wcove_typec_probe(struct platform_device *pdev)
+ wcove->dev = &pdev->dev;
+ wcove->regmap = pmic->regmap;
+
+- irq = regmap_irq_get_virq(pmic->irq_chip_data_chgr,
+- platform_get_irq(pdev, 0));
++ irq = platform_get_irq(pdev, 0);
++ if (irq < 0) {
++ dev_err(&pdev->dev, "Failed to get IRQ: %d\n", irq);
++ return irq;
++ }
++
++ irq = regmap_irq_get_virq(pmic->irq_chip_data_chgr, irq);
+ if (irq < 0)
+ return irq;
+
+diff --git a/drivers/usb/typec/tps6598x.c b/drivers/usb/typec/tps6598x.c
+index 1c0033ad8738..e1109b15636d 100644
+--- a/drivers/usb/typec/tps6598x.c
++++ b/drivers/usb/typec/tps6598x.c
+@@ -110,6 +110,20 @@ tps6598x_block_read(struct tps6598x *tps, u8 reg, void *val, size_t len)
+ return 0;
+ }
+
++static int tps6598x_block_write(struct tps6598x *tps, u8 reg,
++ void *val, size_t len)
++{
++ u8 data[TPS_MAX_LEN + 1];
++
++ if (!tps->i2c_protocol)
++ return regmap_raw_write(tps->regmap, reg, val, len);
++
++ data[0] = len;
++ memcpy(&data[1], val, len);
++
++ return regmap_raw_write(tps->regmap, reg, data, sizeof(data));
++}
++
+ static inline int tps6598x_read16(struct tps6598x *tps, u8 reg, u16 *val)
+ {
+ return tps6598x_block_read(tps, reg, val, sizeof(u16));
+@@ -127,23 +141,23 @@ static inline int tps6598x_read64(struct tps6598x *tps, u8 reg, u64 *val)
+
+ static inline int tps6598x_write16(struct tps6598x *tps, u8 reg, u16 val)
+ {
+- return regmap_raw_write(tps->regmap, reg, &val, sizeof(u16));
++ return tps6598x_block_write(tps, reg, &val, sizeof(u16));
+ }
+
+ static inline int tps6598x_write32(struct tps6598x *tps, u8 reg, u32 val)
+ {
+- return regmap_raw_write(tps->regmap, reg, &val, sizeof(u32));
++ return tps6598x_block_write(tps, reg, &val, sizeof(u32));
+ }
+
+ static inline int tps6598x_write64(struct tps6598x *tps, u8 reg, u64 val)
+ {
+- return regmap_raw_write(tps->regmap, reg, &val, sizeof(u64));
++ return tps6598x_block_write(tps, reg, &val, sizeof(u64));
+ }
+
+ static inline int
+ tps6598x_write_4cc(struct tps6598x *tps, u8 reg, const char *val)
+ {
+- return regmap_raw_write(tps->regmap, reg, &val, sizeof(u32));
++ return tps6598x_block_write(tps, reg, &val, sizeof(u32));
+ }
+
+ static int tps6598x_read_partner_identity(struct tps6598x *tps)
+@@ -229,8 +243,8 @@ static int tps6598x_exec_cmd(struct tps6598x *tps, const char *cmd,
+ return -EBUSY;
+
+ if (in_len) {
+- ret = regmap_raw_write(tps->regmap, TPS_REG_DATA1,
+- in_data, in_len);
++ ret = tps6598x_block_write(tps, TPS_REG_DATA1,
++ in_data, in_len);
+ if (ret)
+ return ret;
+ }
+diff --git a/drivers/video/backlight/pwm_bl.c b/drivers/video/backlight/pwm_bl.c
+index feb90764a811..53b8ceea9bde 100644
+--- a/drivers/video/backlight/pwm_bl.c
++++ b/drivers/video/backlight/pwm_bl.c
+@@ -435,7 +435,7 @@ static int pwm_backlight_initial_power_state(const struct pwm_bl_data *pb)
+ */
+
+ /* if the enable GPIO is disabled, do not enable the backlight */
+- if (pb->enable_gpio && gpiod_get_value(pb->enable_gpio) == 0)
++ if (pb->enable_gpio && gpiod_get_value_cansleep(pb->enable_gpio) == 0)
+ return FB_BLANK_POWERDOWN;
+
+ /* The regulator is disabled, do not enable the backlight */
+diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
+index cb43a2258c51..4721491e6c8c 100644
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -431,6 +431,9 @@ static void fb_do_show_logo(struct fb_info *info, struct fb_image *image,
+ {
+ unsigned int x;
+
++ if (image->width > info->var.xres || image->height > info->var.yres)
++ return;
++
+ if (rotate == FB_ROTATE_UR) {
+ for (x = 0;
+ x < num && image->dx + image->width <= info->var.xres;
+diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
+index a0b07c331255..a38b65b97be0 100644
+--- a/drivers/virtio/virtio_ring.c
++++ b/drivers/virtio/virtio_ring.c
+@@ -871,6 +871,8 @@ static struct virtqueue *vring_create_virtqueue_split(
+ GFP_KERNEL|__GFP_NOWARN|__GFP_ZERO);
+ if (queue)
+ break;
++ if (!may_reduce_num)
++ return NULL;
+ }
+
+ if (!num)
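The virtio fix guards the shrink-and-retry allocation loop: the queue size is halved after a failed allocation only when the caller allowed resizing; otherwise the function must fail outright rather than silently hand back a smaller ring (and, before this fix, loop on a num that never shrinks). A skeleton of the loop, with ring_size() as a stand-in for the real size computation:

    for (; num >= min_num; num /= 2) {
            queue = alloc_pages_exact(ring_size(num),
                                      GFP_KERNEL | __GFP_NOWARN | __GFP_ZERO);
            if (queue)
                    break;                      /* got a ring of 'num' entries */
            if (!may_reduce_num)
                    return NULL;                /* caller requires exactly 'num' */
    }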
+diff --git a/drivers/xen/gntdev-dmabuf.c b/drivers/xen/gntdev-dmabuf.c
+index cba6b586bfbd..d97fcfc5e558 100644
+--- a/drivers/xen/gntdev-dmabuf.c
++++ b/drivers/xen/gntdev-dmabuf.c
+@@ -80,6 +80,12 @@ struct gntdev_dmabuf_priv {
+ struct list_head imp_list;
+ /* This is the lock which protects dma_buf_xxx lists. */
+ struct mutex lock;
++ /*
++ * We reference this file while exporting dma-bufs, so
++ * the grant device context is not destroyed while there are
++ * external users alive.
++ */
++ struct file *filp;
+ };
+
+ /* DMA buffer export support. */
+@@ -311,6 +317,7 @@ static void dmabuf_exp_release(struct kref *kref)
+
+ dmabuf_exp_wait_obj_signal(gntdev_dmabuf->priv, gntdev_dmabuf);
+ list_del(&gntdev_dmabuf->next);
++ fput(gntdev_dmabuf->priv->filp);
+ kfree(gntdev_dmabuf);
+ }
+
+@@ -423,6 +430,7 @@ static int dmabuf_exp_from_pages(struct gntdev_dmabuf_export_args *args)
+ mutex_lock(&args->dmabuf_priv->lock);
+ list_add(&gntdev_dmabuf->next, &args->dmabuf_priv->exp_list);
+ mutex_unlock(&args->dmabuf_priv->lock);
++ get_file(gntdev_dmabuf->priv->filp);
+ return 0;
+
+ fail:
+@@ -834,7 +842,7 @@ long gntdev_ioctl_dmabuf_imp_release(struct gntdev_priv *priv,
+ return dmabuf_imp_release(priv->dmabuf_priv, op.fd);
+ }
+
+-struct gntdev_dmabuf_priv *gntdev_dmabuf_init(void)
++struct gntdev_dmabuf_priv *gntdev_dmabuf_init(struct file *filp)
+ {
+ struct gntdev_dmabuf_priv *priv;
+
+@@ -847,6 +855,8 @@ struct gntdev_dmabuf_priv *gntdev_dmabuf_init(void)
+ INIT_LIST_HEAD(&priv->exp_wait_list);
+ INIT_LIST_HEAD(&priv->imp_list);
+
++ priv->filp = filp;
++
+ return priv;
+ }
+
+diff --git a/drivers/xen/gntdev-dmabuf.h b/drivers/xen/gntdev-dmabuf.h
+index 7220a53d0fc5..3d9b9cf9d5a1 100644
+--- a/drivers/xen/gntdev-dmabuf.h
++++ b/drivers/xen/gntdev-dmabuf.h
+@@ -14,7 +14,7 @@
+ struct gntdev_dmabuf_priv;
+ struct gntdev_priv;
+
+-struct gntdev_dmabuf_priv *gntdev_dmabuf_init(void);
++struct gntdev_dmabuf_priv *gntdev_dmabuf_init(struct file *filp);
+
+ void gntdev_dmabuf_fini(struct gntdev_dmabuf_priv *priv);
+
+diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
+index 5efc5eee9544..7cf9c51318aa 100644
+--- a/drivers/xen/gntdev.c
++++ b/drivers/xen/gntdev.c
+@@ -600,7 +600,7 @@ static int gntdev_open(struct inode *inode, struct file *flip)
+ mutex_init(&priv->lock);
+
+ #ifdef CONFIG_XEN_GNTDEV_DMABUF
+- priv->dmabuf_priv = gntdev_dmabuf_init();
++ priv->dmabuf_priv = gntdev_dmabuf_init(flip);
+ if (IS_ERR(priv->dmabuf_priv)) {
+ ret = PTR_ERR(priv->dmabuf_priv);
+ kfree(priv);
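Taken together, the three gntdev hunks pin the character-device file for as long as any exported dma-buf exists: gntdev_dmabuf_init() now records the struct file, each successful export takes a get_file() reference, and the export's kref release drops it with fput(). That keeps the grant-device context alive even after the local fd is closed while an importer still holds the buffer. The lifetime tie, sketched:

    /* export path: the buffer now holds the device file open */
    get_file(priv->filp);
    list_add(&buf->next, &priv->exp_list);

    /* last dma-buf reference gone: release the pin */
    static void buf_release(struct kref *kref)
    {
            struct buf *b = container_of(kref, struct buf, ref);

            list_del(&b->next);
            fput(b->priv->filp);
            kfree(b);
    }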
+diff --git a/fs/9p/v9fs_vfs.h b/fs/9p/v9fs_vfs.h
+index 5a0db6dec8d1..aaee1e6584e6 100644
+--- a/fs/9p/v9fs_vfs.h
++++ b/fs/9p/v9fs_vfs.h
+@@ -40,6 +40,9 @@
+ */
+ #define P9_LOCK_TIMEOUT (30*HZ)
+
++/* flags for v9fs_stat2inode() & v9fs_stat2inode_dotl() */
++#define V9FS_STAT2INODE_KEEP_ISIZE 1
++
+ extern struct file_system_type v9fs_fs_type;
+ extern const struct address_space_operations v9fs_addr_operations;
+ extern const struct file_operations v9fs_file_operations;
+@@ -61,8 +64,10 @@ int v9fs_init_inode(struct v9fs_session_info *v9ses,
+ struct inode *inode, umode_t mode, dev_t);
+ void v9fs_evict_inode(struct inode *inode);
+ ino_t v9fs_qid2ino(struct p9_qid *qid);
+-void v9fs_stat2inode(struct p9_wstat *, struct inode *, struct super_block *);
+-void v9fs_stat2inode_dotl(struct p9_stat_dotl *, struct inode *);
++void v9fs_stat2inode(struct p9_wstat *stat, struct inode *inode,
++ struct super_block *sb, unsigned int flags);
++void v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode,
++ unsigned int flags);
+ int v9fs_dir_release(struct inode *inode, struct file *filp);
+ int v9fs_file_open(struct inode *inode, struct file *file);
+ void v9fs_inode2stat(struct inode *inode, struct p9_wstat *stat);
+@@ -83,4 +88,18 @@ static inline void v9fs_invalidate_inode_attr(struct inode *inode)
+ }
+
+ int v9fs_open_to_dotl_flags(int flags);
++
++static inline void v9fs_i_size_write(struct inode *inode, loff_t i_size)
++{
++ /*
++ * 32-bit needs the lock; concurrent updates could break the
++ * sequence count and make i_size_read() loop forever.
++ * 64-bit updates are atomic and can skip the locking.
++ */
++ if (sizeof(i_size) > sizeof(long))
++ spin_lock(&inode->i_lock);
++ i_size_write(inode, i_size);
++ if (sizeof(i_size) > sizeof(long))
++ spin_unlock(&inode->i_lock);
++}
+ #endif
+diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c
+index a25efa782fcc..9a1125305d84 100644
+--- a/fs/9p/vfs_file.c
++++ b/fs/9p/vfs_file.c
+@@ -446,7 +446,11 @@ v9fs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ i_size = i_size_read(inode);
+ if (iocb->ki_pos > i_size) {
+ inode_add_bytes(inode, iocb->ki_pos - i_size);
+- i_size_write(inode, iocb->ki_pos);
++ /*
++ * Need to serialize against i_size_write() in
++ * v9fs_stat2inode()
++ */
++ v9fs_i_size_write(inode, iocb->ki_pos);
+ }
+ return retval;
+ }
+diff --git a/fs/9p/vfs_inode.c b/fs/9p/vfs_inode.c
+index 85ff859d3af5..72b779bc0942 100644
+--- a/fs/9p/vfs_inode.c
++++ b/fs/9p/vfs_inode.c
+@@ -538,7 +538,7 @@ static struct inode *v9fs_qid_iget(struct super_block *sb,
+ if (retval)
+ goto error;
+
+- v9fs_stat2inode(st, inode, sb);
++ v9fs_stat2inode(st, inode, sb, 0);
+ v9fs_cache_inode_get_cookie(inode);
+ unlock_new_inode(inode);
+ return inode;
+@@ -1092,7 +1092,7 @@ v9fs_vfs_getattr(const struct path *path, struct kstat *stat,
+ if (IS_ERR(st))
+ return PTR_ERR(st);
+
+- v9fs_stat2inode(st, d_inode(dentry), dentry->d_sb);
++ v9fs_stat2inode(st, d_inode(dentry), dentry->d_sb, 0);
+ generic_fillattr(d_inode(dentry), stat);
+
+ p9stat_free(st);
+@@ -1170,12 +1170,13 @@ static int v9fs_vfs_setattr(struct dentry *dentry, struct iattr *iattr)
+ * @stat: Plan 9 metadata (mistat) structure
+ * @inode: inode to populate
+ * @sb: superblock of filesystem
++ * @flags: control flags (e.g. V9FS_STAT2INODE_KEEP_ISIZE)
+ *
+ */
+
+ void
+ v9fs_stat2inode(struct p9_wstat *stat, struct inode *inode,
+- struct super_block *sb)
++ struct super_block *sb, unsigned int flags)
+ {
+ umode_t mode;
+ char ext[32];
+@@ -1216,10 +1217,11 @@ v9fs_stat2inode(struct p9_wstat *stat, struct inode *inode,
+ mode = p9mode2perm(v9ses, stat);
+ mode |= inode->i_mode & ~S_IALLUGO;
+ inode->i_mode = mode;
+- i_size_write(inode, stat->length);
+
++ if (!(flags & V9FS_STAT2INODE_KEEP_ISIZE))
++ v9fs_i_size_write(inode, stat->length);
+ /* not real number of blocks, but 512 byte ones ... */
+- inode->i_blocks = (i_size_read(inode) + 512 - 1) >> 9;
++ inode->i_blocks = (stat->length + 512 - 1) >> 9;
+ v9inode->cache_validity &= ~V9FS_INO_INVALID_ATTR;
+ }
+
+@@ -1416,9 +1418,9 @@ int v9fs_refresh_inode(struct p9_fid *fid, struct inode *inode)
+ {
+ int umode;
+ dev_t rdev;
+- loff_t i_size;
+ struct p9_wstat *st;
+ struct v9fs_session_info *v9ses;
++ unsigned int flags;
+
+ v9ses = v9fs_inode2v9ses(inode);
+ st = p9_client_stat(fid);
+@@ -1431,16 +1433,13 @@ int v9fs_refresh_inode(struct p9_fid *fid, struct inode *inode)
+ if ((inode->i_mode & S_IFMT) != (umode & S_IFMT))
+ goto out;
+
+- spin_lock(&inode->i_lock);
+ /*
+ * We don't want to refresh inode->i_size,
+ * because we may have cached data
+ */
+- i_size = inode->i_size;
+- v9fs_stat2inode(st, inode, inode->i_sb);
+- if (v9ses->cache == CACHE_LOOSE || v9ses->cache == CACHE_FSCACHE)
+- inode->i_size = i_size;
+- spin_unlock(&inode->i_lock);
++ flags = (v9ses->cache == CACHE_LOOSE || v9ses->cache == CACHE_FSCACHE) ?
++ V9FS_STAT2INODE_KEEP_ISIZE : 0;
++ v9fs_stat2inode(st, inode, inode->i_sb, flags);
+ out:
+ p9stat_free(st);
+ kfree(st);
+diff --git a/fs/9p/vfs_inode_dotl.c b/fs/9p/vfs_inode_dotl.c
+index 4823e1c46999..a950a927a626 100644
+--- a/fs/9p/vfs_inode_dotl.c
++++ b/fs/9p/vfs_inode_dotl.c
+@@ -143,7 +143,7 @@ static struct inode *v9fs_qid_iget_dotl(struct super_block *sb,
+ if (retval)
+ goto error;
+
+- v9fs_stat2inode_dotl(st, inode);
++ v9fs_stat2inode_dotl(st, inode, 0);
+ v9fs_cache_inode_get_cookie(inode);
+ retval = v9fs_get_acl(inode, fid);
+ if (retval)
+@@ -496,7 +496,7 @@ v9fs_vfs_getattr_dotl(const struct path *path, struct kstat *stat,
+ if (IS_ERR(st))
+ return PTR_ERR(st);
+
+- v9fs_stat2inode_dotl(st, d_inode(dentry));
++ v9fs_stat2inode_dotl(st, d_inode(dentry), 0);
+ generic_fillattr(d_inode(dentry), stat);
+ /* Change block size to what the server returned */
+ stat->blksize = st->st_blksize;
+@@ -607,11 +607,13 @@ int v9fs_vfs_setattr_dotl(struct dentry *dentry, struct iattr *iattr)
+ * v9fs_stat2inode_dotl - populate an inode structure with stat info
+ * @stat: stat structure
+ * @inode: inode to populate
++ * @flags: ctrl flags (e.g. V9FS_STAT2INODE_KEEP_ISIZE)
+ *
+ */
+
+ void
+-v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode)
++v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode,
++ unsigned int flags)
+ {
+ umode_t mode;
+ struct v9fs_inode *v9inode = V9FS_I(inode);
+@@ -631,7 +633,8 @@ v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode)
+ mode |= inode->i_mode & ~S_IALLUGO;
+ inode->i_mode = mode;
+
+- i_size_write(inode, stat->st_size);
++ if (!(flags & V9FS_STAT2INODE_KEEP_ISIZE))
++ v9fs_i_size_write(inode, stat->st_size);
+ inode->i_blocks = stat->st_blocks;
+ } else {
+ if (stat->st_result_mask & P9_STATS_ATIME) {
+@@ -661,8 +664,9 @@ v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode)
+ }
+ if (stat->st_result_mask & P9_STATS_RDEV)
+ inode->i_rdev = new_decode_dev(stat->st_rdev);
+- if (stat->st_result_mask & P9_STATS_SIZE)
+- i_size_write(inode, stat->st_size);
++ if (!(flags & V9FS_STAT2INODE_KEEP_ISIZE) &&
++ stat->st_result_mask & P9_STATS_SIZE)
++ v9fs_i_size_write(inode, stat->st_size);
+ if (stat->st_result_mask & P9_STATS_BLOCKS)
+ inode->i_blocks = stat->st_blocks;
+ }
+@@ -928,9 +932,9 @@ v9fs_vfs_get_link_dotl(struct dentry *dentry,
+
+ int v9fs_refresh_inode_dotl(struct p9_fid *fid, struct inode *inode)
+ {
+- loff_t i_size;
+ struct p9_stat_dotl *st;
+ struct v9fs_session_info *v9ses;
++ unsigned int flags;
+
+ v9ses = v9fs_inode2v9ses(inode);
+ st = p9_client_getattr_dotl(fid, P9_STATS_ALL);
+@@ -942,16 +946,13 @@ int v9fs_refresh_inode_dotl(struct p9_fid *fid, struct inode *inode)
+ if ((inode->i_mode & S_IFMT) != (st->st_mode & S_IFMT))
+ goto out;
+
+- spin_lock(&inode->i_lock);
+ /*
+ * We don't want to refresh inode->i_size,
+ * because we may have cached data
+ */
+- i_size = inode->i_size;
+- v9fs_stat2inode_dotl(st, inode);
+- if (v9ses->cache == CACHE_LOOSE || v9ses->cache == CACHE_FSCACHE)
+- inode->i_size = i_size;
+- spin_unlock(&inode->i_lock);
++ flags = (v9ses->cache == CACHE_LOOSE || v9ses->cache == CACHE_FSCACHE) ?
++ V9FS_STAT2INODE_KEEP_ISIZE : 0;
++ v9fs_stat2inode_dotl(st, inode, flags);
+ out:
+ kfree(st);
+ return 0;
+diff --git a/fs/9p/vfs_super.c b/fs/9p/vfs_super.c
+index 48ce50484e80..eeab9953af89 100644
+--- a/fs/9p/vfs_super.c
++++ b/fs/9p/vfs_super.c
+@@ -172,7 +172,7 @@ static struct dentry *v9fs_mount(struct file_system_type *fs_type, int flags,
+ goto release_sb;
+ }
+ d_inode(root)->i_ino = v9fs_qid2ino(&st->qid);
+- v9fs_stat2inode_dotl(st, d_inode(root));
++ v9fs_stat2inode_dotl(st, d_inode(root), 0);
+ kfree(st);
+ } else {
+ struct p9_wstat *st = NULL;
+@@ -183,7 +183,7 @@ static struct dentry *v9fs_mount(struct file_system_type *fs_type, int flags,
+ }
+
+ d_inode(root)->i_ino = v9fs_qid2ino(&st->qid);
+- v9fs_stat2inode(st, d_inode(root), sb);
++ v9fs_stat2inode(st, d_inode(root), sb, 0);
+
+ p9stat_free(st);
+ kfree(st);
+diff --git a/fs/aio.c b/fs/aio.c
+index aaaaf4d12c73..3d9669d011b9 100644
+--- a/fs/aio.c
++++ b/fs/aio.c
+@@ -167,9 +167,13 @@ struct kioctx {
+ unsigned id;
+ };
+
++/*
++ * First field must be the file pointer in all the
++ * iocb unions! See also 'struct kiocb' in <linux/fs.h>
++ */
+ struct fsync_iocb {
+- struct work_struct work;
+ struct file *file;
++ struct work_struct work;
+ bool datasync;
+ };
+
+@@ -183,8 +187,15 @@ struct poll_iocb {
+ struct work_struct work;
+ };
+
++/*
++ * NOTE! Each of the iocb union members has the file pointer
++ * as the first entry in their struct definition. So you can
++ * access the file pointer through any of the sub-structs,
++ * or directly as just 'ki_filp' in this struct.
++ */
+ struct aio_kiocb {
+ union {
++ struct file *ki_filp;
+ struct kiocb rw;
+ struct fsync_iocb fsync;
+ struct poll_iocb poll;
+@@ -1060,6 +1071,8 @@ static inline void iocb_put(struct aio_kiocb *iocb)
+ {
+ if (refcount_read(&iocb->ki_refcnt) == 0 ||
+ refcount_dec_and_test(&iocb->ki_refcnt)) {
++ if (iocb->ki_filp)
++ fput(iocb->ki_filp);
+ percpu_ref_put(&iocb->ki_ctx->reqs);
+ kmem_cache_free(kiocb_cachep, iocb);
+ }
+@@ -1424,7 +1437,6 @@ static void aio_complete_rw(struct kiocb *kiocb, long res, long res2)
+ file_end_write(kiocb->ki_filp);
+ }
+
+- fput(kiocb->ki_filp);
+ aio_complete(iocb, res, res2);
+ }
+
+@@ -1432,9 +1444,6 @@ static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb)
+ {
+ int ret;
+
+- req->ki_filp = fget(iocb->aio_fildes);
+- if (unlikely(!req->ki_filp))
+- return -EBADF;
+ req->ki_complete = aio_complete_rw;
+ req->private = NULL;
+ req->ki_pos = iocb->aio_offset;
+@@ -1451,7 +1460,7 @@ static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb)
+ ret = ioprio_check_cap(iocb->aio_reqprio);
+ if (ret) {
+ pr_debug("aio ioprio check cap error: %d\n", ret);
+- goto out_fput;
++ return ret;
+ }
+
+ req->ki_ioprio = iocb->aio_reqprio;
+@@ -1460,14 +1469,10 @@ static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb)
+
+ ret = kiocb_set_rw_flags(req, iocb->aio_rw_flags);
+ if (unlikely(ret))
+- goto out_fput;
++ return ret;
+
+ req->ki_flags &= ~IOCB_HIPRI; /* no one is going to poll for this I/O */
+ return 0;
+-
+-out_fput:
+- fput(req->ki_filp);
+- return ret;
+ }
+
+ static int aio_setup_rw(int rw, const struct iocb *iocb, struct iovec **iovec,
+@@ -1521,24 +1526,19 @@ static ssize_t aio_read(struct kiocb *req, const struct iocb *iocb,
+ if (ret)
+ return ret;
+ file = req->ki_filp;
+-
+- ret = -EBADF;
+ if (unlikely(!(file->f_mode & FMODE_READ)))
+- goto out_fput;
++ return -EBADF;
+ ret = -EINVAL;
+ if (unlikely(!file->f_op->read_iter))
+- goto out_fput;
++ return -EINVAL;
+
+ ret = aio_setup_rw(READ, iocb, &iovec, vectored, compat, &iter);
+ if (ret)
+- goto out_fput;
++ return ret;
+ ret = rw_verify_area(READ, file, &req->ki_pos, iov_iter_count(&iter));
+ if (!ret)
+ aio_rw_done(req, call_read_iter(file, req, &iter));
+ kfree(iovec);
+-out_fput:
+- if (unlikely(ret))
+- fput(file);
+ return ret;
+ }
+
+@@ -1555,16 +1555,14 @@ static ssize_t aio_write(struct kiocb *req, const struct iocb *iocb,
+ return ret;
+ file = req->ki_filp;
+
+- ret = -EBADF;
+ if (unlikely(!(file->f_mode & FMODE_WRITE)))
+- goto out_fput;
+- ret = -EINVAL;
++ return -EBADF;
+ if (unlikely(!file->f_op->write_iter))
+- goto out_fput;
++ return -EINVAL;
+
+ ret = aio_setup_rw(WRITE, iocb, &iovec, vectored, compat, &iter);
+ if (ret)
+- goto out_fput;
++ return ret;
+ ret = rw_verify_area(WRITE, file, &req->ki_pos, iov_iter_count(&iter));
+ if (!ret) {
+ /*
+@@ -1582,9 +1580,6 @@ static ssize_t aio_write(struct kiocb *req, const struct iocb *iocb,
+ aio_rw_done(req, call_write_iter(file, req, &iter));
+ }
+ kfree(iovec);
+-out_fput:
+- if (unlikely(ret))
+- fput(file);
+ return ret;
+ }
+
+@@ -1594,7 +1589,6 @@ static void aio_fsync_work(struct work_struct *work)
+ int ret;
+
+ ret = vfs_fsync(req->file, req->datasync);
+- fput(req->file);
+ aio_complete(container_of(req, struct aio_kiocb, fsync), ret, 0);
+ }
+
+@@ -1605,13 +1599,8 @@ static int aio_fsync(struct fsync_iocb *req, const struct iocb *iocb,
+ iocb->aio_rw_flags))
+ return -EINVAL;
+
+- req->file = fget(iocb->aio_fildes);
+- if (unlikely(!req->file))
+- return -EBADF;
+- if (unlikely(!req->file->f_op->fsync)) {
+- fput(req->file);
++ if (unlikely(!req->file->f_op->fsync))
+ return -EINVAL;
+- }
+
+ req->datasync = datasync;
+ INIT_WORK(&req->work, aio_fsync_work);
+@@ -1621,10 +1610,7 @@ static int aio_fsync(struct fsync_iocb *req, const struct iocb *iocb,
+
+ static inline void aio_poll_complete(struct aio_kiocb *iocb, __poll_t mask)
+ {
+- struct file *file = iocb->poll.file;
+-
+ aio_complete(iocb, mangle_poll(mask), 0);
+- fput(file);
+ }
+
+ static void aio_poll_complete_work(struct work_struct *work)
+@@ -1680,6 +1666,7 @@ static int aio_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
+ struct poll_iocb *req = container_of(wait, struct poll_iocb, wait);
+ struct aio_kiocb *iocb = container_of(req, struct aio_kiocb, poll);
+ __poll_t mask = key_to_poll(key);
++ unsigned long flags;
+
+ req->woken = true;
+
+@@ -1688,10 +1675,15 @@ static int aio_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
+ if (!(mask & req->events))
+ return 0;
+
+- /* try to complete the iocb inline if we can: */
+- if (spin_trylock(&iocb->ki_ctx->ctx_lock)) {
++ /*
++ * Try to complete the iocb inline if we can. Use
++ * irqsave/irqrestore because not all filesystems (e.g. fuse)
++ * call this function with IRQs disabled and because IRQs
++ * have to be disabled before ctx_lock is obtained.
++ */
++ if (spin_trylock_irqsave(&iocb->ki_ctx->ctx_lock, flags)) {
+ list_del(&iocb->ki_list);
+- spin_unlock(&iocb->ki_ctx->ctx_lock);
++ spin_unlock_irqrestore(&iocb->ki_ctx->ctx_lock, flags);
+
+ list_del_init(&req->wait.entry);
+ aio_poll_complete(iocb, mask);
+@@ -1743,9 +1735,6 @@ static ssize_t aio_poll(struct aio_kiocb *aiocb, const struct iocb *iocb)
+
+ INIT_WORK(&req->work, aio_poll_complete_work);
+ req->events = demangle_poll(iocb->aio_buf) | EPOLLERR | EPOLLHUP;
+- req->file = fget(iocb->aio_fildes);
+- if (unlikely(!req->file))
+- return -EBADF;
+
+ req->head = NULL;
+ req->woken = false;
+@@ -1788,10 +1777,8 @@ static ssize_t aio_poll(struct aio_kiocb *aiocb, const struct iocb *iocb)
+ spin_unlock_irq(&ctx->ctx_lock);
+
+ out:
+- if (unlikely(apt.error)) {
+- fput(req->file);
++ if (unlikely(apt.error))
+ return apt.error;
+- }
+
+ if (mask)
+ aio_poll_complete(aiocb, mask);
+@@ -1829,6 +1816,11 @@ static int __io_submit_one(struct kioctx *ctx, const struct iocb *iocb,
+ if (unlikely(!req))
+ goto out_put_reqs_available;
+
++ req->ki_filp = fget(iocb->aio_fildes);
++ ret = -EBADF;
++ if (unlikely(!req->ki_filp))
++ goto out_put_req;
++
+ if (iocb->aio_flags & IOCB_FLAG_RESFD) {
+ /*
+ * If the IOCB_FLAG_RESFD flag of aio_flags is set, get an
+diff --git a/fs/block_dev.c b/fs/block_dev.c
+index 58a4c1217fa8..06ef48ad1998 100644
+--- a/fs/block_dev.c
++++ b/fs/block_dev.c
+@@ -298,10 +298,10 @@ static void blkdev_bio_end_io(struct bio *bio)
+ struct blkdev_dio *dio = bio->bi_private;
+ bool should_dirty = dio->should_dirty;
+
+- if (dio->multi_bio && !atomic_dec_and_test(&dio->ref)) {
+- if (bio->bi_status && !dio->bio.bi_status)
+- dio->bio.bi_status = bio->bi_status;
+- } else {
++ if (bio->bi_status && !dio->bio.bi_status)
++ dio->bio.bi_status = bio->bi_status;
++
++ if (!dio->multi_bio || atomic_dec_and_test(&dio->ref)) {
+ if (!dio->is_sync) {
+ struct kiocb *iocb = dio->iocb;
+ ssize_t ret;
+diff --git a/fs/btrfs/acl.c b/fs/btrfs/acl.c
+index 3b66c957ea6f..5810463dc6d2 100644
+--- a/fs/btrfs/acl.c
++++ b/fs/btrfs/acl.c
+@@ -9,6 +9,7 @@
+ #include <linux/posix_acl_xattr.h>
+ #include <linux/posix_acl.h>
+ #include <linux/sched.h>
++#include <linux/sched/mm.h>
+ #include <linux/slab.h>
+
+ #include "ctree.h"
+@@ -72,8 +73,16 @@ static int __btrfs_set_acl(struct btrfs_trans_handle *trans,
+ }
+
+ if (acl) {
++ unsigned int nofs_flag;
++
+ size = posix_acl_xattr_size(acl->a_count);
++ /*
++ * We're holding a transaction handle, so use a NOFS memory
++ * allocation context to avoid deadlock if reclaim happens.
++ */
++ nofs_flag = memalloc_nofs_save();
+ value = kmalloc(size, GFP_KERNEL);
++ memalloc_nofs_restore(nofs_flag);
+ if (!value) {
+ ret = -ENOMEM;
+ goto out;
+diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c
+index 8750c835f535..c4dea3b7349e 100644
+--- a/fs/btrfs/dev-replace.c
++++ b/fs/btrfs/dev-replace.c
+@@ -862,6 +862,7 @@ int btrfs_dev_replace_cancel(struct btrfs_fs_info *fs_info)
+ btrfs_destroy_dev_replace_tgtdev(tgt_device);
+ break;
+ default:
++ up_write(&dev_replace->rwsem);
+ result = -EINVAL;
+ }
+
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 6a2a2a951705..888d72dda794 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -17,6 +17,7 @@
+ #include <linux/semaphore.h>
+ #include <linux/error-injection.h>
+ #include <linux/crc32c.h>
++#include <linux/sched/mm.h>
+ #include <asm/unaligned.h>
+ #include "ctree.h"
+ #include "disk-io.h"
+@@ -1258,10 +1259,17 @@ struct btrfs_root *btrfs_create_tree(struct btrfs_trans_handle *trans,
+ struct btrfs_root *tree_root = fs_info->tree_root;
+ struct btrfs_root *root;
+ struct btrfs_key key;
++ unsigned int nofs_flag;
+ int ret = 0;
+ uuid_le uuid = NULL_UUID_LE;
+
++ /*
++ * We're holding a transaction handle, so use a NOFS memory allocation
++ * context to avoid deadlock if reclaim happens.
++ */
++ nofs_flag = memalloc_nofs_save();
+ root = btrfs_alloc_root(fs_info, GFP_KERNEL);
++ memalloc_nofs_restore(nofs_flag);
+ if (!root)
+ return ERR_PTR(-ENOMEM);
+
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index d81035b7ea7d..1b68700bc1c5 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -4808,6 +4808,7 @@ skip_async:
+ }
+
+ struct reserve_ticket {
++ u64 orig_bytes;
+ u64 bytes;
+ int error;
+ struct list_head list;
+@@ -5030,7 +5031,7 @@ static inline int need_do_async_reclaim(struct btrfs_fs_info *fs_info,
+ !test_bit(BTRFS_FS_STATE_REMOUNTING, &fs_info->fs_state));
+ }
+
+-static void wake_all_tickets(struct list_head *head)
++static bool wake_all_tickets(struct list_head *head)
+ {
+ struct reserve_ticket *ticket;
+
+@@ -5039,7 +5040,10 @@ static void wake_all_tickets(struct list_head *head)
+ list_del_init(&ticket->list);
+ ticket->error = -ENOSPC;
+ wake_up(&ticket->wait);
++ if (ticket->bytes != ticket->orig_bytes)
++ return true;
+ }
++ return false;
+ }
+
+ /*
+@@ -5094,8 +5098,12 @@ static void btrfs_async_reclaim_metadata_space(struct work_struct *work)
+ if (flush_state > COMMIT_TRANS) {
+ commit_cycles++;
+ if (commit_cycles > 2) {
+- wake_all_tickets(&space_info->tickets);
+- space_info->flush = 0;
++ if (wake_all_tickets(&space_info->tickets)) {
++ flush_state = FLUSH_DELAYED_ITEMS_NR;
++ commit_cycles--;
++ } else {
++ space_info->flush = 0;
++ }
+ } else {
+ flush_state = FLUSH_DELAYED_ITEMS_NR;
+ }
+@@ -5147,10 +5155,11 @@ static void priority_reclaim_metadata_space(struct btrfs_fs_info *fs_info,
+
+ static int wait_reserve_ticket(struct btrfs_fs_info *fs_info,
+ struct btrfs_space_info *space_info,
+- struct reserve_ticket *ticket, u64 orig_bytes)
++ struct reserve_ticket *ticket)
+
+ {
+ DEFINE_WAIT(wait);
++ u64 reclaim_bytes = 0;
+ int ret = 0;
+
+ spin_lock(&space_info->lock);
+@@ -5171,14 +5180,12 @@ static int wait_reserve_ticket(struct btrfs_fs_info *fs_info,
+ ret = ticket->error;
+ if (!list_empty(&ticket->list))
+ list_del_init(&ticket->list);
+- if (ticket->bytes && ticket->bytes < orig_bytes) {
+- u64 num_bytes = orig_bytes - ticket->bytes;
+- update_bytes_may_use(space_info, -num_bytes);
+- trace_btrfs_space_reservation(fs_info, "space_info",
+- space_info->flags, num_bytes, 0);
+- }
++ if (ticket->bytes && ticket->bytes < ticket->orig_bytes)
++ reclaim_bytes = ticket->orig_bytes - ticket->bytes;
+ spin_unlock(&space_info->lock);
+
++ if (reclaim_bytes)
++ space_info_add_old_bytes(fs_info, space_info, reclaim_bytes);
+ return ret;
+ }
+
+@@ -5204,6 +5211,7 @@ static int __reserve_metadata_bytes(struct btrfs_fs_info *fs_info,
+ {
+ struct reserve_ticket ticket;
+ u64 used;
++ u64 reclaim_bytes = 0;
+ int ret = 0;
+
+ ASSERT(orig_bytes);
+@@ -5239,6 +5247,7 @@ static int __reserve_metadata_bytes(struct btrfs_fs_info *fs_info,
+ * the list and we will do our own flushing further down.
+ */
+ if (ret && flush != BTRFS_RESERVE_NO_FLUSH) {
++ ticket.orig_bytes = orig_bytes;
+ ticket.bytes = orig_bytes;
+ ticket.error = 0;
+ init_waitqueue_head(&ticket.wait);
+@@ -5279,25 +5288,21 @@ static int __reserve_metadata_bytes(struct btrfs_fs_info *fs_info,
+ return ret;
+
+ if (flush == BTRFS_RESERVE_FLUSH_ALL)
+- return wait_reserve_ticket(fs_info, space_info, &ticket,
+- orig_bytes);
++ return wait_reserve_ticket(fs_info, space_info, &ticket);
+
+ ret = 0;
+ priority_reclaim_metadata_space(fs_info, space_info, &ticket);
+ spin_lock(&space_info->lock);
+ if (ticket.bytes) {
+- if (ticket.bytes < orig_bytes) {
+- u64 num_bytes = orig_bytes - ticket.bytes;
+- update_bytes_may_use(space_info, -num_bytes);
+- trace_btrfs_space_reservation(fs_info, "space_info",
+- space_info->flags,
+- num_bytes, 0);
+-
+- }
++ if (ticket.bytes < orig_bytes)
++ reclaim_bytes = orig_bytes - ticket.bytes;
+ list_del_init(&ticket.list);
+ ret = -ENOSPC;
+ }
+ spin_unlock(&space_info->lock);
++
++ if (reclaim_bytes)
++ space_info_add_old_bytes(fs_info, space_info, reclaim_bytes);
+ ASSERT(list_empty(&ticket.list));
+ return ret;
+ }
+@@ -6115,7 +6120,7 @@ static void btrfs_calculate_inode_block_rsv_size(struct btrfs_fs_info *fs_info,
+ *
+ * This is overestimating in most cases.
+ */
+- qgroup_rsv_size = outstanding_extents * fs_info->nodesize;
++ qgroup_rsv_size = (u64)outstanding_extents * fs_info->nodesize;
+
+ spin_lock(&block_rsv->lock);
+ block_rsv->size = reserve_size;
+@@ -8690,6 +8695,8 @@ struct walk_control {
+ u64 refs[BTRFS_MAX_LEVEL];
+ u64 flags[BTRFS_MAX_LEVEL];
+ struct btrfs_key update_progress;
++ struct btrfs_key drop_progress;
++ int drop_level;
+ int stage;
+ int level;
+ int shared_level;
+@@ -9028,6 +9035,16 @@ skip:
+ ret);
+ }
+ }
++
++ /*
++ * We need to update the next key in our walk control so we can
++ * update the drop_progress key accordingly. We don't care if
++ * find_next_key doesn't find a key because that means we're at
++ * the end and are going to clean up now.
++ */
++ wc->drop_level = level;
++ find_next_key(path, level, &wc->drop_progress);
++
+ ret = btrfs_free_extent(trans, root, bytenr, fs_info->nodesize,
+ parent, root->root_key.objectid,
+ level - 1, 0);
+@@ -9378,12 +9395,14 @@ int btrfs_drop_snapshot(struct btrfs_root *root,
+ }
+
+ if (wc->stage == DROP_REFERENCE) {
+- level = wc->level;
+- btrfs_node_key(path->nodes[level],
+- &root_item->drop_progress,
+- path->slots[level]);
+- root_item->drop_level = level;
+- }
++ wc->drop_level = wc->level;
++ btrfs_node_key_to_cpu(path->nodes[wc->drop_level],
++ &wc->drop_progress,
++ path->slots[wc->drop_level]);
++ }
++ btrfs_cpu_key_to_disk(&root_item->drop_progress,
++ &wc->drop_progress);
++ root_item->drop_level = wc->drop_level;
+
+ BUG_ON(wc->level == 0);
+ if (btrfs_should_end_transaction(trans) ||
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 52abe4082680..1bfb7207bbf0 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -2985,11 +2985,11 @@ static int __do_readpage(struct extent_io_tree *tree,
+ */
+ if (test_bit(EXTENT_FLAG_COMPRESSED, &em->flags) &&
+ prev_em_start && *prev_em_start != (u64)-1 &&
+- *prev_em_start != em->orig_start)
++ *prev_em_start != em->start)
+ force_bio_submit = true;
+
+ if (prev_em_start)
+- *prev_em_start = em->orig_start;
++ *prev_em_start = em->start;
+
+ free_extent_map(em);
+ em = NULL;
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 9c8e1734429c..1d64a6b8e413 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -501,6 +501,16 @@ static noinline int btrfs_ioctl_fitrim(struct file *file, void __user *arg)
+ if (!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+
++ /*
++ * If the fs is mounted with nologreplay, which requires it to be
++ * mounted in RO mode as well, we cannot allow discard on free space
++ * inside block groups, because log trees refer to extents that are not
++ * pinned in a block group's free space cache (pinning the extents is
++ * precisely the first phase of replaying a log tree).
++ */
++ if (btrfs_test_opt(fs_info, NOLOGREPLAY))
++ return -EROFS;
++
+ rcu_read_lock();
+ list_for_each_entry_rcu(device, &fs_info->fs_devices->devices,
+ dev_list) {
+@@ -3206,21 +3216,6 @@ out:
+ return ret;
+ }
+
+-static void btrfs_double_inode_unlock(struct inode *inode1, struct inode *inode2)
+-{
+- inode_unlock(inode1);
+- inode_unlock(inode2);
+-}
+-
+-static void btrfs_double_inode_lock(struct inode *inode1, struct inode *inode2)
+-{
+- if (inode1 < inode2)
+- swap(inode1, inode2);
+-
+- inode_lock_nested(inode1, I_MUTEX_PARENT);
+- inode_lock_nested(inode2, I_MUTEX_CHILD);
+-}
+-
+ static void btrfs_double_extent_unlock(struct inode *inode1, u64 loff1,
+ struct inode *inode2, u64 loff2, u64 len)
+ {
+@@ -3989,7 +3984,7 @@ static int btrfs_remap_file_range_prep(struct file *file_in, loff_t pos_in,
+ if (same_inode)
+ inode_lock(inode_in);
+ else
+- btrfs_double_inode_lock(inode_in, inode_out);
++ lock_two_nondirectories(inode_in, inode_out);
+
+ /*
+ * Now that the inodes are locked, we need to start writeback ourselves
+@@ -4039,7 +4034,7 @@ static int btrfs_remap_file_range_prep(struct file *file_in, loff_t pos_in,
+ if (same_inode)
+ inode_unlock(inode_in);
+ else
+- btrfs_double_inode_unlock(inode_in, inode_out);
++ unlock_two_nondirectories(inode_in, inode_out);
+
+ return ret;
+ }
+@@ -4069,7 +4064,7 @@ loff_t btrfs_remap_file_range(struct file *src_file, loff_t off,
+ if (same_inode)
+ inode_unlock(src_inode);
+ else
+- btrfs_double_inode_unlock(src_inode, dst_inode);
++ unlock_two_nondirectories(src_inode, dst_inode);
+
+ return ret < 0 ? ret : len;
+ }
+diff --git a/fs/btrfs/props.c b/fs/btrfs/props.c
+index dc6140013ae8..61d22a56c0ba 100644
+--- a/fs/btrfs/props.c
++++ b/fs/btrfs/props.c
+@@ -366,11 +366,11 @@ int btrfs_subvol_inherit_props(struct btrfs_trans_handle *trans,
+
+ static int prop_compression_validate(const char *value, size_t len)
+ {
+- if (!strncmp("lzo", value, len))
++ if (!strncmp("lzo", value, 3))
+ return 0;
+- else if (!strncmp("zlib", value, len))
++ else if (!strncmp("zlib", value, 4))
+ return 0;
+- else if (!strncmp("zstd", value, len))
++ else if (!strncmp("zstd", value, 4))
+ return 0;
+
+ return -EINVAL;
+@@ -396,7 +396,7 @@ static int prop_compression_apply(struct inode *inode,
+ btrfs_set_fs_incompat(fs_info, COMPRESS_LZO);
+ } else if (!strncmp("zlib", value, 4)) {
+ type = BTRFS_COMPRESS_ZLIB;
+- } else if (!strncmp("zstd", value, len)) {
++ } else if (!strncmp("zstd", value, 4)) {
+ type = BTRFS_COMPRESS_ZSTD;
+ btrfs_set_fs_incompat(fs_info, COMPRESS_ZSTD);
+ } else {
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index 4e473a998219..e28fb43e943b 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -1917,8 +1917,8 @@ static int qgroup_trace_new_subtree_blocks(struct btrfs_trans_handle* trans,
+ int i;
+
+ /* Level sanity check */
+- if (cur_level < 0 || cur_level >= BTRFS_MAX_LEVEL ||
+- root_level < 0 || root_level >= BTRFS_MAX_LEVEL ||
++ if (cur_level < 0 || cur_level >= BTRFS_MAX_LEVEL - 1 ||
++ root_level < 0 || root_level >= BTRFS_MAX_LEVEL - 1 ||
+ root_level < cur_level) {
+ btrfs_err_rl(fs_info,
+ "%s: bad levels, cur_level=%d root_level=%d",
+@@ -2842,16 +2842,15 @@ out:
+ /*
+ * Two limits to commit transaction in advance.
+ *
+- * For RATIO, it will be 1/RATIO of the remaining limit
+- * (excluding data and prealloc meta) as threshold.
++ * For RATIO, it will be 1/RATIO of the remaining limit as threshold.
+ * For SIZE, it will be in byte unit as threshold.
+ */
+-#define QGROUP_PERTRANS_RATIO 32
+-#define QGROUP_PERTRANS_SIZE SZ_32M
++#define QGROUP_FREE_RATIO 32
++#define QGROUP_FREE_SIZE SZ_32M
+ static bool qgroup_check_limits(struct btrfs_fs_info *fs_info,
+ const struct btrfs_qgroup *qg, u64 num_bytes)
+ {
+- u64 limit;
++ u64 free;
+ u64 threshold;
+
+ if ((qg->lim_flags & BTRFS_QGROUP_LIMIT_MAX_RFER) &&
+@@ -2870,20 +2869,21 @@ static bool qgroup_check_limits(struct btrfs_fs_info *fs_info,
+ */
+ if ((qg->lim_flags & (BTRFS_QGROUP_LIMIT_MAX_RFER |
+ BTRFS_QGROUP_LIMIT_MAX_EXCL))) {
+- if (qg->lim_flags & BTRFS_QGROUP_LIMIT_MAX_EXCL)
+- limit = qg->max_excl;
+- else
+- limit = qg->max_rfer;
+- threshold = (limit - qg->rsv.values[BTRFS_QGROUP_RSV_DATA] -
+- qg->rsv.values[BTRFS_QGROUP_RSV_META_PREALLOC]) /
+- QGROUP_PERTRANS_RATIO;
+- threshold = min_t(u64, threshold, QGROUP_PERTRANS_SIZE);
++ if (qg->lim_flags & BTRFS_QGROUP_LIMIT_MAX_EXCL) {
++ free = qg->max_excl - qgroup_rsv_total(qg) - qg->excl;
++ threshold = min_t(u64, qg->max_excl / QGROUP_FREE_RATIO,
++ QGROUP_FREE_SIZE);
++ } else {
++ free = qg->max_rfer - qgroup_rsv_total(qg) - qg->rfer;
++ threshold = min_t(u64, qg->max_rfer / QGROUP_FREE_RATIO,
++ QGROUP_FREE_SIZE);
++ }
+
+ /*
+ * Use transaction_kthread to commit transaction, so we no
+ * longer need to bother nested transaction nor lock context.
+ */
+- if (qg->rsv.values[BTRFS_QGROUP_RSV_META_PERTRANS] > threshold)
++ if (free < threshold)
+ btrfs_commit_transaction_locksafe(fs_info);
+ }
+
+diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
+index e74455eb42f9..6976e2280771 100644
+--- a/fs/btrfs/raid56.c
++++ b/fs/btrfs/raid56.c
+@@ -2429,8 +2429,9 @@ static noinline void finish_parity_scrub(struct btrfs_raid_bio *rbio,
+ bitmap_clear(rbio->dbitmap, pagenr, 1);
+ kunmap(p);
+
+- for (stripe = 0; stripe < rbio->real_stripes; stripe++)
++ for (stripe = 0; stripe < nr_data; stripe++)
+ kunmap(page_in_rbio(rbio, stripe, pagenr, 0));
++ kunmap(p_page);
+ }
+
+ __free_page(p_page);
+diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
+index 6dcd36d7b849..1aeac70d0531 100644
+--- a/fs/btrfs/scrub.c
++++ b/fs/btrfs/scrub.c
+@@ -584,6 +584,7 @@ static noinline_for_stack struct scrub_ctx *scrub_setup_ctx(
+ sctx->pages_per_rd_bio = SCRUB_PAGES_PER_RD_BIO;
+ sctx->curr = -1;
+ sctx->fs_info = fs_info;
++ INIT_LIST_HEAD(&sctx->csum_list);
+ for (i = 0; i < SCRUB_BIOS_PER_SCTX; ++i) {
+ struct scrub_bio *sbio;
+
+@@ -608,7 +609,6 @@ static noinline_for_stack struct scrub_ctx *scrub_setup_ctx(
+ atomic_set(&sctx->workers_pending, 0);
+ atomic_set(&sctx->cancel_req, 0);
+ sctx->csum_size = btrfs_super_csum_size(fs_info->super_copy);
+- INIT_LIST_HEAD(&sctx->csum_list);
+
+ spin_lock_init(&sctx->list_lock);
+ spin_lock_init(&sctx->stat_lock);
+@@ -3770,16 +3770,6 @@ fail_scrub_workers:
+ return -ENOMEM;
+ }
+
+-static noinline_for_stack void scrub_workers_put(struct btrfs_fs_info *fs_info)
+-{
+- if (--fs_info->scrub_workers_refcnt == 0) {
+- btrfs_destroy_workqueue(fs_info->scrub_workers);
+- btrfs_destroy_workqueue(fs_info->scrub_wr_completion_workers);
+- btrfs_destroy_workqueue(fs_info->scrub_parity_workers);
+- }
+- WARN_ON(fs_info->scrub_workers_refcnt < 0);
+-}
+-
+ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
+ u64 end, struct btrfs_scrub_progress *progress,
+ int readonly, int is_dev_replace)
+@@ -3788,6 +3778,9 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
+ int ret;
+ struct btrfs_device *dev;
+ unsigned int nofs_flag;
++ struct btrfs_workqueue *scrub_workers = NULL;
++ struct btrfs_workqueue *scrub_wr_comp = NULL;
++ struct btrfs_workqueue *scrub_parity = NULL;
+
+ if (btrfs_fs_closing(fs_info))
+ return -EINVAL;
+@@ -3927,9 +3920,16 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
+
+ mutex_lock(&fs_info->scrub_lock);
+ dev->scrub_ctx = NULL;
+- scrub_workers_put(fs_info);
++ if (--fs_info->scrub_workers_refcnt == 0) {
++ scrub_workers = fs_info->scrub_workers;
++ scrub_wr_comp = fs_info->scrub_wr_completion_workers;
++ scrub_parity = fs_info->scrub_parity_workers;
++ }
+ mutex_unlock(&fs_info->scrub_lock);
+
++ btrfs_destroy_workqueue(scrub_workers);
++ btrfs_destroy_workqueue(scrub_wr_comp);
++ btrfs_destroy_workqueue(scrub_parity);
+ scrub_put_ctx(sctx);
+
+ return ret;
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index ac232b3d6d7e..7f3b74a55073 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -3517,9 +3517,16 @@ static noinline int log_dir_items(struct btrfs_trans_handle *trans,
+ }
+ btrfs_release_path(path);
+
+- /* find the first key from this transaction again */
++ /*
++ * Find the first key from this transaction again. See the note for
++ * log_new_dir_dentries, if we're logging a directory recursively we
++ * won't be holding its i_mutex, which means we can modify the directory
++ * while we're logging it. If we remove an entry between our first
++ * search and this search we'll not find the key again and can just
++ * bail.
++ */
+ ret = btrfs_search_slot(NULL, root, &min_key, path, 0, 0);
+- if (WARN_ON(ret != 0))
++ if (ret != 0)
+ goto done;
+
+ /*
+@@ -4481,6 +4488,19 @@ static int logged_inode_size(struct btrfs_root *log, struct btrfs_inode *inode,
+ item = btrfs_item_ptr(path->nodes[0], path->slots[0],
+ struct btrfs_inode_item);
+ *size_ret = btrfs_inode_size(path->nodes[0], item);
++ /*
++ * If the in-memory inode's i_size is smaller then the inode
++ * size stored in the btree, return the inode's i_size, so
++ * that we get a correct inode size after replaying the log
++ * when before a power failure we had a shrinking truncate
++ * followed by addition of a new name (rename / new hard link).
++ * Otherwise return the inode size from the btree, to avoid
++ * data loss when replaying a log due to previously doing a
++ * write that expands the inode's size and logging a new name
++ * immediately after.
++ */
++ if (*size_ret > inode->vfs_inode.i_size)
++ *size_ret = inode->vfs_inode.i_size;
+ }
+
+ btrfs_release_path(path);
+@@ -4642,15 +4662,8 @@ static int btrfs_log_trailing_hole(struct btrfs_trans_handle *trans,
+ struct btrfs_file_extent_item);
+
+ if (btrfs_file_extent_type(leaf, extent) ==
+- BTRFS_FILE_EXTENT_INLINE) {
+- len = btrfs_file_extent_ram_bytes(leaf, extent);
+- ASSERT(len == i_size ||
+- (len == fs_info->sectorsize &&
+- btrfs_file_extent_compression(leaf, extent) !=
+- BTRFS_COMPRESS_NONE) ||
+- (len < i_size && i_size < fs_info->sectorsize));
++ BTRFS_FILE_EXTENT_INLINE)
+ return 0;
+- }
+
+ len = btrfs_file_extent_num_bytes(leaf, extent);
+ /* Last extent goes beyond i_size, no need to log a hole. */
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 15561926ab32..88a323a453d8 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -6413,7 +6413,7 @@ static void btrfs_end_bio(struct bio *bio)
+ if (bio_op(bio) == REQ_OP_WRITE)
+ btrfs_dev_stat_inc_and_print(dev,
+ BTRFS_DEV_STAT_WRITE_ERRS);
+- else
++ else if (!(bio->bi_opf & REQ_RAHEAD))
+ btrfs_dev_stat_inc_and_print(dev,
+ BTRFS_DEV_STAT_READ_ERRS);
+ if (bio->bi_opf & REQ_PREFLUSH)
+@@ -6782,10 +6782,10 @@ static int btrfs_check_chunk_valid(struct btrfs_fs_info *fs_info,
+ }
+
+ if ((type & BTRFS_BLOCK_GROUP_RAID10 && sub_stripes != 2) ||
+- (type & BTRFS_BLOCK_GROUP_RAID1 && num_stripes < 1) ||
++ (type & BTRFS_BLOCK_GROUP_RAID1 && num_stripes != 2) ||
+ (type & BTRFS_BLOCK_GROUP_RAID5 && num_stripes < 2) ||
+ (type & BTRFS_BLOCK_GROUP_RAID6 && num_stripes < 3) ||
+- (type & BTRFS_BLOCK_GROUP_DUP && num_stripes > 2) ||
++ (type & BTRFS_BLOCK_GROUP_DUP && num_stripes != 2) ||
+ ((type & BTRFS_BLOCK_GROUP_PROFILE_MASK) == 0 &&
+ num_stripes != 1)) {
+ btrfs_err(fs_info,
+diff --git a/fs/buffer.c b/fs/buffer.c
+index 48318fb74938..cab7a026876b 100644
+--- a/fs/buffer.c
++++ b/fs/buffer.c
+@@ -3027,6 +3027,13 @@ void guard_bio_eod(int op, struct bio *bio)
+ /* Uhhuh. We've got a bio that straddles the device size! */
+ truncated_bytes = bio->bi_iter.bi_size - (maxsector << 9);
+
++ /*
++ * The bio contains more than one segment which spans EOD, just return
++ * and let IO layer turn it into an EIO
++ */
++ if (truncated_bytes > bvec->bv_len)
++ return;
++
+ /* Truncate the bio.. */
+ bio->bi_iter.bi_size -= truncated_bytes;
+ bvec->bv_len -= truncated_bytes;
+diff --git a/fs/cifs/cifs_dfs_ref.c b/fs/cifs/cifs_dfs_ref.c
+index d9b99abe1243..5d83c924cc47 100644
+--- a/fs/cifs/cifs_dfs_ref.c
++++ b/fs/cifs/cifs_dfs_ref.c
+@@ -285,9 +285,9 @@ static void dump_referral(const struct dfs_info3_param *ref)
+ {
+ cifs_dbg(FYI, "DFS: ref path: %s\n", ref->path_name);
+ cifs_dbg(FYI, "DFS: node path: %s\n", ref->node_name);
+- cifs_dbg(FYI, "DFS: fl: %hd, srv_type: %hd\n",
++ cifs_dbg(FYI, "DFS: fl: %d, srv_type: %d\n",
+ ref->flags, ref->server_type);
+- cifs_dbg(FYI, "DFS: ref_flags: %hd, path_consumed: %hd\n",
++ cifs_dbg(FYI, "DFS: ref_flags: %d, path_consumed: %d\n",
+ ref->ref_flag, ref->path_consumed);
+ }
+
+diff --git a/fs/cifs/cifs_fs_sb.h b/fs/cifs/cifs_fs_sb.h
+index 42f0d67f1054..ed49222abecb 100644
+--- a/fs/cifs/cifs_fs_sb.h
++++ b/fs/cifs/cifs_fs_sb.h
+@@ -58,6 +58,7 @@ struct cifs_sb_info {
+ spinlock_t tlink_tree_lock;
+ struct tcon_link *master_tlink;
+ struct nls_table *local_nls;
++ unsigned int bsize;
+ unsigned int rsize;
+ unsigned int wsize;
+ unsigned long actimeo; /* attribute cache timeout (jiffies) */
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index 62d48d486d8f..07cad54b84f1 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -554,10 +554,13 @@ cifs_show_options(struct seq_file *s, struct dentry *root)
+
+ seq_printf(s, ",rsize=%u", cifs_sb->rsize);
+ seq_printf(s, ",wsize=%u", cifs_sb->wsize);
++ seq_printf(s, ",bsize=%u", cifs_sb->bsize);
+ seq_printf(s, ",echo_interval=%lu",
+ tcon->ses->server->echo_interval / HZ);
+ if (tcon->snapshot_time)
+ seq_printf(s, ",snapshot=%llu", tcon->snapshot_time);
++ if (tcon->handle_timeout)
++ seq_printf(s, ",handletimeout=%u", tcon->handle_timeout);
+ /* convert actimeo and display it in seconds */
+ seq_printf(s, ",actimeo=%lu", cifs_sb->actimeo / HZ);
+
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index 94dbdbe5be34..6c934ab3722b 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -59,6 +59,12 @@
+ */
+ #define CIFS_MAX_ACTIMEO (1 << 30)
+
++/*
++ * Max persistent and resilient handle timeout (milliseconds).
++ * Windows durable max was 960000 (16 minutes)
++ */
++#define SMB3_MAX_HANDLE_TIMEOUT 960000
++
+ /*
+ * MAX_REQ is the maximum number of requests that WE will send
+ * on one socket concurrently.
+@@ -236,6 +242,8 @@ struct smb_version_operations {
+ int * (*get_credits_field)(struct TCP_Server_Info *, const int);
+ unsigned int (*get_credits)(struct mid_q_entry *);
+ __u64 (*get_next_mid)(struct TCP_Server_Info *);
++ void (*revert_current_mid)(struct TCP_Server_Info *server,
++ const unsigned int val);
+ /* data offset from read response message */
+ unsigned int (*read_data_offset)(char *);
+ /*
+@@ -557,6 +565,7 @@ struct smb_vol {
+ bool resilient:1; /* noresilient not required since not forced for CA */
+ bool domainauto:1;
+ bool rdma:1;
++ unsigned int bsize;
+ unsigned int rsize;
+ unsigned int wsize;
+ bool sockopt_tcp_nodelay:1;
+@@ -569,6 +578,7 @@ struct smb_vol {
+ struct nls_table *local_nls;
+ unsigned int echo_interval; /* echo interval in secs */
+ __u64 snapshot_time; /* needed for timewarp tokens */
++ __u32 handle_timeout; /* persistent and durable handle timeout in ms */
+ unsigned int max_credits; /* smb3 max_credits 10 < credits < 60000 */
+ };
+
+@@ -770,6 +780,22 @@ get_next_mid(struct TCP_Server_Info *server)
+ return cpu_to_le16(mid);
+ }
+
++static inline void
++revert_current_mid(struct TCP_Server_Info *server, const unsigned int val)
++{
++ if (server->ops->revert_current_mid)
++ server->ops->revert_current_mid(server, val);
++}
++
++static inline void
++revert_current_mid_from_hdr(struct TCP_Server_Info *server,
++ const struct smb2_sync_hdr *shdr)
++{
++ unsigned int num = le16_to_cpu(shdr->CreditCharge);
++
++ return revert_current_mid(server, num > 0 ? num : 1);
++}
++
+ static inline __u16
+ get_mid(const struct smb_hdr *smb)
+ {
+@@ -1009,6 +1035,7 @@ struct cifs_tcon {
+ __u32 vol_serial_number;
+ __le64 vol_create_time;
+ __u64 snapshot_time; /* for timewarp tokens - timestamp of snapshot */
++ __u32 handle_timeout; /* persistent and durable handle timeout in ms */
+ __u32 ss_flags; /* sector size flags */
+ __u32 perf_sector_size; /* best sector size for perf */
+ __u32 max_chunks;
+@@ -1422,6 +1449,7 @@ struct mid_q_entry {
+ struct kref refcount;
+ struct TCP_Server_Info *server; /* server corresponding to this mid */
+ __u64 mid; /* multiplex id */
++ __u16 credits; /* number of credits consumed by this mid */
+ __u32 pid; /* process id */
+ __u32 sequence_number; /* for CIFS signing */
+ unsigned long when_alloc; /* when mid was created */
+diff --git a/fs/cifs/cifssmb.c b/fs/cifs/cifssmb.c
+index bb54ccf8481c..551924beb86f 100644
+--- a/fs/cifs/cifssmb.c
++++ b/fs/cifs/cifssmb.c
+@@ -2125,12 +2125,13 @@ cifs_writev_requeue(struct cifs_writedata *wdata)
+
+ wdata2->cfile = find_writable_file(CIFS_I(inode), false);
+ if (!wdata2->cfile) {
+- cifs_dbg(VFS, "No writable handles for inode\n");
++ cifs_dbg(VFS, "No writable handle to retry writepages\n");
+ rc = -EBADF;
+- break;
++ } else {
++ wdata2->pid = wdata2->cfile->pid;
++ rc = server->ops->async_writev(wdata2,
++ cifs_writedata_release);
+ }
+- wdata2->pid = wdata2->cfile->pid;
+- rc = server->ops->async_writev(wdata2, cifs_writedata_release);
+
+ for (j = 0; j < nr_pages; j++) {
+ unlock_page(wdata2->pages[j]);
+@@ -2145,6 +2146,7 @@ cifs_writev_requeue(struct cifs_writedata *wdata)
+ kref_put(&wdata2->refcount, cifs_writedata_release);
+ if (is_retryable_error(rc))
+ continue;
++ i += nr_pages;
+ break;
+ }
+
+@@ -2152,6 +2154,13 @@ cifs_writev_requeue(struct cifs_writedata *wdata)
+ i += nr_pages;
+ } while (i < wdata->nr_pages);
+
++ /* cleanup remaining pages from the original wdata */
++ for (; i < wdata->nr_pages; i++) {
++ SetPageError(wdata->pages[i]);
++ end_page_writeback(wdata->pages[i]);
++ put_page(wdata->pages[i]);
++ }
++
+ if (rc != 0 && !is_retryable_error(rc))
+ mapping_set_error(inode->i_mapping, rc);
+ kref_put(&wdata->refcount, cifs_writedata_release);
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 8463c940e0e5..44e6ec85f832 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -102,8 +102,8 @@ enum {
+ Opt_backupuid, Opt_backupgid, Opt_uid,
+ Opt_cruid, Opt_gid, Opt_file_mode,
+ Opt_dirmode, Opt_port,
+- Opt_rsize, Opt_wsize, Opt_actimeo,
+- Opt_echo_interval, Opt_max_credits,
++ Opt_blocksize, Opt_rsize, Opt_wsize, Opt_actimeo,
++ Opt_echo_interval, Opt_max_credits, Opt_handletimeout,
+ Opt_snapshot,
+
+ /* Mount options which take string value */
+@@ -204,9 +204,11 @@ static const match_table_t cifs_mount_option_tokens = {
+ { Opt_dirmode, "dirmode=%s" },
+ { Opt_dirmode, "dir_mode=%s" },
+ { Opt_port, "port=%s" },
++ { Opt_blocksize, "bsize=%s" },
+ { Opt_rsize, "rsize=%s" },
+ { Opt_wsize, "wsize=%s" },
+ { Opt_actimeo, "actimeo=%s" },
++ { Opt_handletimeout, "handletimeout=%s" },
+ { Opt_echo_interval, "echo_interval=%s" },
+ { Opt_max_credits, "max_credits=%s" },
+ { Opt_snapshot, "snapshot=%s" },
+@@ -1486,6 +1488,11 @@ cifs_parse_devname(const char *devname, struct smb_vol *vol)
+ const char *delims = "/\\";
+ size_t len;
+
++ if (unlikely(!devname || !*devname)) {
++ cifs_dbg(VFS, "Device name not specified.\n");
++ return -EINVAL;
++ }
++
+ /* make sure we have a valid UNC double delimiter prefix */
+ len = strspn(devname, delims);
+ if (len != 2)
+@@ -1571,7 +1578,7 @@ cifs_parse_mount_options(const char *mountdata, const char *devname,
+ vol->cred_uid = current_uid();
+ vol->linux_uid = current_uid();
+ vol->linux_gid = current_gid();
+-
++ vol->bsize = 1024 * 1024; /* can improve cp performance significantly */
+ /*
+ * default to SFM style remapping of seven reserved characters
+ * unless user overrides it or we negotiate CIFS POSIX where
+@@ -1594,6 +1601,9 @@ cifs_parse_mount_options(const char *mountdata, const char *devname,
+
+ vol->actimeo = CIFS_DEF_ACTIMEO;
+
++ /* Most clients set timeout to 0, allows server to use its default */
++ vol->handle_timeout = 0; /* See MS-SMB2 spec section 2.2.14.2.12 */
++
+ /* offer SMB2.1 and later (SMB3 etc). Secure and widely accepted */
+ vol->ops = &smb30_operations;
+ vol->vals = &smbdefault_values;
+@@ -1944,6 +1954,26 @@ cifs_parse_mount_options(const char *mountdata, const char *devname,
+ }
+ port = (unsigned short)option;
+ break;
++ case Opt_blocksize:
++ if (get_option_ul(args, &option)) {
++ cifs_dbg(VFS, "%s: Invalid blocksize value\n",
++ __func__);
++ goto cifs_parse_mount_err;
++ }
++ /*
++ * inode blocksize realistically should never need to be
++ * less than 16K or greater than 16M and default is 1MB.
++ * Note that small inode block sizes (e.g. 64K) can lead
++ * to very poor performance of common tools like cp and scp
++ */
++ if ((option < CIFS_MAX_MSGSIZE) ||
++ (option > (4 * SMB3_DEFAULT_IOSIZE))) {
++ cifs_dbg(VFS, "%s: Invalid blocksize\n",
++ __func__);
++ goto cifs_parse_mount_err;
++ }
++ vol->bsize = option;
++ break;
+ case Opt_rsize:
+ if (get_option_ul(args, &option)) {
+ cifs_dbg(VFS, "%s: Invalid rsize value\n",
+@@ -1972,6 +2002,18 @@ cifs_parse_mount_options(const char *mountdata, const char *devname,
+ goto cifs_parse_mount_err;
+ }
+ break;
++ case Opt_handletimeout:
++ if (get_option_ul(args, &option)) {
++ cifs_dbg(VFS, "%s: Invalid handletimeout value\n",
++ __func__);
++ goto cifs_parse_mount_err;
++ }
++ vol->handle_timeout = option;
++ if (vol->handle_timeout > SMB3_MAX_HANDLE_TIMEOUT) {
++ cifs_dbg(VFS, "Invalid handle cache timeout, longer than 16 minutes\n");
++ goto cifs_parse_mount_err;
++ }
++ break;
+ case Opt_echo_interval:
+ if (get_option_ul(args, &option)) {
+ cifs_dbg(VFS, "%s: Invalid echo interval value\n",
+@@ -3138,6 +3180,8 @@ static int match_tcon(struct cifs_tcon *tcon, struct smb_vol *volume_info)
+ return 0;
+ if (tcon->snapshot_time != volume_info->snapshot_time)
+ return 0;
++ if (tcon->handle_timeout != volume_info->handle_timeout)
++ return 0;
+ return 1;
+ }
+
+@@ -3252,6 +3296,16 @@ cifs_get_tcon(struct cifs_ses *ses, struct smb_vol *volume_info)
+ tcon->snapshot_time = volume_info->snapshot_time;
+ }
+
++ if (volume_info->handle_timeout) {
++ if (ses->server->vals->protocol_id == 0) {
++ cifs_dbg(VFS,
++ "Use SMB2.1 or later for handle timeout option\n");
++ rc = -EOPNOTSUPP;
++ goto out_fail;
++ } else
++ tcon->handle_timeout = volume_info->handle_timeout;
++ }
++
+ tcon->ses = ses;
+ if (volume_info->password) {
+ tcon->password = kstrdup(volume_info->password, GFP_KERNEL);
+@@ -3839,6 +3893,7 @@ int cifs_setup_cifs_sb(struct smb_vol *pvolume_info,
+ spin_lock_init(&cifs_sb->tlink_tree_lock);
+ cifs_sb->tlink_tree = RB_ROOT;
+
++ cifs_sb->bsize = pvolume_info->bsize;
+ /*
+ * Temporarily set r/wsize for matching superblock. If we end up using
+ * new sb then client will later negotiate it downward if needed.
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index 659ce1b92c44..8d107587208f 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -1645,8 +1645,20 @@ cifs_setlk(struct file *file, struct file_lock *flock, __u32 type,
+ rc = server->ops->mand_unlock_range(cfile, flock, xid);
+
+ out:
+- if (flock->fl_flags & FL_POSIX && !rc)
++ if (flock->fl_flags & FL_POSIX) {
++ /*
++ * If this is a request to remove all locks because we
++ * are closing the file, it doesn't matter if the
++ * unlocking failed as both cifs.ko and the SMB server
++ * remove the lock on file close
++ */
++ if (rc) {
++ cifs_dbg(VFS, "%s failed rc=%d\n", __func__, rc);
++ if (!(flock->fl_flags & FL_CLOSE))
++ return rc;
++ }
+ rc = locks_lock_file_wait(file, flock);
++ }
+ return rc;
+ }
+
+@@ -3028,14 +3040,16 @@ cifs_strict_writev(struct kiocb *iocb, struct iov_iter *from)
+ * these pages but not on the region from pos to ppos+len-1.
+ */
+ written = cifs_user_writev(iocb, from);
+- if (written > 0 && CIFS_CACHE_READ(cinode)) {
++ if (CIFS_CACHE_READ(cinode)) {
+ /*
+- * Windows 7 server can delay breaking level2 oplock if a write
+- * request comes - break it on the client to prevent reading
+- * an old data.
++ * We have read level caching and we have just sent a write
++ * request to the server thus making data in the cache stale.
++ * Zap the cache and set oplock/lease level to NONE to avoid
++ * reading stale data from the cache. All subsequent read
++ * operations will read new data from the server.
+ */
+ cifs_zap_mapping(inode);
+- cifs_dbg(FYI, "Set no oplock for inode=%p after a write operation\n",
++ cifs_dbg(FYI, "Set Oplock/Lease to NONE for inode=%p after write\n",
+ inode);
+ cinode->oplock = 0;
+ }
+diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
+index 478003644916..53fdb5df0d2e 100644
+--- a/fs/cifs/inode.c
++++ b/fs/cifs/inode.c
+@@ -2080,7 +2080,7 @@ int cifs_getattr(const struct path *path, struct kstat *stat,
+ return rc;
+
+ generic_fillattr(inode, stat);
+- stat->blksize = CIFS_MAX_MSGSIZE;
++ stat->blksize = cifs_sb->bsize;
+ stat->ino = CIFS_I(inode)->uniqueid;
+
+ /* old CIFS Unix Extensions doesn't return create time */
+diff --git a/fs/cifs/smb1ops.c b/fs/cifs/smb1ops.c
+index 32a6c020478f..20a88776f04d 100644
+--- a/fs/cifs/smb1ops.c
++++ b/fs/cifs/smb1ops.c
+@@ -308,7 +308,7 @@ coalesce_t2(char *second_buf, struct smb_hdr *target_hdr)
+ remaining = tgt_total_cnt - total_in_tgt;
+
+ if (remaining < 0) {
+- cifs_dbg(FYI, "Server sent too much data. tgt_total_cnt=%hu total_in_tgt=%hu\n",
++ cifs_dbg(FYI, "Server sent too much data. tgt_total_cnt=%hu total_in_tgt=%u\n",
+ tgt_total_cnt, total_in_tgt);
+ return -EPROTO;
+ }
+diff --git a/fs/cifs/smb2file.c b/fs/cifs/smb2file.c
+index b204e84b87fb..b0e76d27d752 100644
+--- a/fs/cifs/smb2file.c
++++ b/fs/cifs/smb2file.c
+@@ -68,7 +68,9 @@ smb2_open_file(const unsigned int xid, struct cifs_open_parms *oparms,
+
+
+ if (oparms->tcon->use_resilient) {
+- nr_ioctl_req.Timeout = 0; /* use server default (120 seconds) */
++ /* default timeout is 0, servers pick default (120 seconds) */
++ nr_ioctl_req.Timeout =
++ cpu_to_le32(oparms->tcon->handle_timeout);
+ nr_ioctl_req.Reserved = 0;
+ rc = SMB2_ioctl(xid, oparms->tcon, fid->persistent_fid,
+ fid->volatile_fid, FSCTL_LMR_REQUEST_RESILIENCY,
+diff --git a/fs/cifs/smb2misc.c b/fs/cifs/smb2misc.c
+index 7b8b58fb4d3f..58700d2ba8cd 100644
+--- a/fs/cifs/smb2misc.c
++++ b/fs/cifs/smb2misc.c
+@@ -517,7 +517,6 @@ smb2_tcon_has_lease(struct cifs_tcon *tcon, struct smb2_lease_break *rsp,
+ __u8 lease_state;
+ struct list_head *tmp;
+ struct cifsFileInfo *cfile;
+- struct TCP_Server_Info *server = tcon->ses->server;
+ struct cifs_pending_open *open;
+ struct cifsInodeInfo *cinode;
+ int ack_req = le32_to_cpu(rsp->Flags &
+@@ -537,13 +536,25 @@ smb2_tcon_has_lease(struct cifs_tcon *tcon, struct smb2_lease_break *rsp,
+ cifs_dbg(FYI, "lease key match, lease break 0x%x\n",
+ le32_to_cpu(rsp->NewLeaseState));
+
+- server->ops->set_oplock_level(cinode, lease_state, 0, NULL);
+-
+ if (ack_req)
+ cfile->oplock_break_cancelled = false;
+ else
+ cfile->oplock_break_cancelled = true;
+
++ set_bit(CIFS_INODE_PENDING_OPLOCK_BREAK, &cinode->flags);
++
++ /*
++ * Set or clear flags depending on the lease state being READ.
++ * HANDLE caching flag should be added when the client starts
++ * to defer closing remote file handles with HANDLE leases.
++ */
++ if (lease_state & SMB2_LEASE_READ_CACHING_HE)
++ set_bit(CIFS_INODE_DOWNGRADE_OPLOCK_TO_L2,
++ &cinode->flags);
++ else
++ clear_bit(CIFS_INODE_DOWNGRADE_OPLOCK_TO_L2,
++ &cinode->flags);
++
+ queue_work(cifsoplockd_wq, &cfile->oplock_break);
+ kfree(lw);
+ return true;
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 6f96e2292856..b29f711ab965 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -219,6 +219,15 @@ smb2_get_next_mid(struct TCP_Server_Info *server)
+ return mid;
+ }
+
++static void
++smb2_revert_current_mid(struct TCP_Server_Info *server, const unsigned int val)
++{
++ spin_lock(&GlobalMid_Lock);
++ if (server->CurrentMid >= val)
++ server->CurrentMid -= val;
++ spin_unlock(&GlobalMid_Lock);
++}
++
+ static struct mid_q_entry *
+ smb2_find_mid(struct TCP_Server_Info *server, char *buf)
+ {
+@@ -2594,6 +2603,15 @@ smb2_downgrade_oplock(struct TCP_Server_Info *server,
+ server->ops->set_oplock_level(cinode, 0, 0, NULL);
+ }
+
++static void
++smb21_downgrade_oplock(struct TCP_Server_Info *server,
++ struct cifsInodeInfo *cinode, bool set_level2)
++{
++ server->ops->set_oplock_level(cinode,
++ set_level2 ? SMB2_LEASE_READ_CACHING_HE :
++ 0, 0, NULL);
++}
++
+ static void
+ smb2_set_oplock_level(struct cifsInodeInfo *cinode, __u32 oplock,
+ unsigned int epoch, bool *purge_cache)
+@@ -3541,6 +3559,7 @@ struct smb_version_operations smb20_operations = {
+ .get_credits = smb2_get_credits,
+ .wait_mtu_credits = cifs_wait_mtu_credits,
+ .get_next_mid = smb2_get_next_mid,
++ .revert_current_mid = smb2_revert_current_mid,
+ .read_data_offset = smb2_read_data_offset,
+ .read_data_length = smb2_read_data_length,
+ .map_error = map_smb2_to_linux_error,
+@@ -3636,6 +3655,7 @@ struct smb_version_operations smb21_operations = {
+ .get_credits = smb2_get_credits,
+ .wait_mtu_credits = smb2_wait_mtu_credits,
+ .get_next_mid = smb2_get_next_mid,
++ .revert_current_mid = smb2_revert_current_mid,
+ .read_data_offset = smb2_read_data_offset,
+ .read_data_length = smb2_read_data_length,
+ .map_error = map_smb2_to_linux_error,
+@@ -3646,7 +3666,7 @@ struct smb_version_operations smb21_operations = {
+ .print_stats = smb2_print_stats,
+ .is_oplock_break = smb2_is_valid_oplock_break,
+ .handle_cancelled_mid = smb2_handle_cancelled_mid,
+- .downgrade_oplock = smb2_downgrade_oplock,
++ .downgrade_oplock = smb21_downgrade_oplock,
+ .need_neg = smb2_need_neg,
+ .negotiate = smb2_negotiate,
+ .negotiate_wsize = smb2_negotiate_wsize,
+@@ -3732,6 +3752,7 @@ struct smb_version_operations smb30_operations = {
+ .get_credits = smb2_get_credits,
+ .wait_mtu_credits = smb2_wait_mtu_credits,
+ .get_next_mid = smb2_get_next_mid,
++ .revert_current_mid = smb2_revert_current_mid,
+ .read_data_offset = smb2_read_data_offset,
+ .read_data_length = smb2_read_data_length,
+ .map_error = map_smb2_to_linux_error,
+@@ -3743,7 +3764,7 @@ struct smb_version_operations smb30_operations = {
+ .dump_share_caps = smb2_dump_share_caps,
+ .is_oplock_break = smb2_is_valid_oplock_break,
+ .handle_cancelled_mid = smb2_handle_cancelled_mid,
+- .downgrade_oplock = smb2_downgrade_oplock,
++ .downgrade_oplock = smb21_downgrade_oplock,
+ .need_neg = smb2_need_neg,
+ .negotiate = smb2_negotiate,
+ .negotiate_wsize = smb3_negotiate_wsize,
+@@ -3837,6 +3858,7 @@ struct smb_version_operations smb311_operations = {
+ .get_credits = smb2_get_credits,
+ .wait_mtu_credits = smb2_wait_mtu_credits,
+ .get_next_mid = smb2_get_next_mid,
++ .revert_current_mid = smb2_revert_current_mid,
+ .read_data_offset = smb2_read_data_offset,
+ .read_data_length = smb2_read_data_length,
+ .map_error = map_smb2_to_linux_error,
+@@ -3848,7 +3870,7 @@ struct smb_version_operations smb311_operations = {
+ .dump_share_caps = smb2_dump_share_caps,
+ .is_oplock_break = smb2_is_valid_oplock_break,
+ .handle_cancelled_mid = smb2_handle_cancelled_mid,
+- .downgrade_oplock = smb2_downgrade_oplock,
++ .downgrade_oplock = smb21_downgrade_oplock,
+ .need_neg = smb2_need_neg,
+ .negotiate = smb2_negotiate,
+ .negotiate_wsize = smb3_negotiate_wsize,
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 77b3aaa39b35..068febe37fe4 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -986,8 +986,14 @@ int smb3_validate_negotiate(const unsigned int xid, struct cifs_tcon *tcon)
+ rc = SMB2_ioctl(xid, tcon, NO_FILE_ID, NO_FILE_ID,
+ FSCTL_VALIDATE_NEGOTIATE_INFO, true /* is_fsctl */,
+ (char *)pneg_inbuf, inbuflen, (char **)&pneg_rsp, &rsplen);
+-
+- if (rc != 0) {
++ if (rc == -EOPNOTSUPP) {
++ /*
++ * Old Windows versions or Netapp SMB server can return
++ * a "not supported" error. The client should accept it.
++ */
++ cifs_dbg(VFS, "Server does not support validate negotiate\n");
++ return 0;
++ } else if (rc != 0) {
+ cifs_dbg(VFS, "validate protocol negotiate failed: %d\n", rc);
+ rc = -EIO;
+ goto out_free_inbuf;
+@@ -1605,9 +1611,16 @@ SMB2_tcon(const unsigned int xid, struct cifs_ses *ses, const char *tree,
+ iov[1].iov_base = unc_path;
+ iov[1].iov_len = unc_path_len;
+
+- /* 3.11 tcon req must be signed if not encrypted. See MS-SMB2 3.2.4.1.1 */
++ /*
++ * 3.11 tcon req must be signed if not encrypted. See MS-SMB2 3.2.4.1.1
++ * unless it is a guest or anonymous user. See MS-SMB2 3.2.5.3.1
++ * (Samba servers don't always set the flag so also check if null user)
++ */
+ if ((ses->server->dialect == SMB311_PROT_ID) &&
+- !smb3_encryption_required(tcon))
++ !smb3_encryption_required(tcon) &&
++ !(ses->session_flags &
++ (SMB2_SESSION_FLAG_IS_GUEST|SMB2_SESSION_FLAG_IS_NULL)) &&
++ ((ses->user_name != NULL) || (ses->sectype == Kerberos)))
+ req->sync_hdr.Flags |= SMB2_FLAGS_SIGNED;
+
+ memset(&rqst, 0, sizeof(struct smb_rqst));
+@@ -1824,8 +1837,9 @@ add_lease_context(struct TCP_Server_Info *server, struct kvec *iov,
+ }
+
+ static struct create_durable_v2 *
+-create_durable_v2_buf(struct cifs_fid *pfid)
++create_durable_v2_buf(struct cifs_open_parms *oparms)
+ {
++ struct cifs_fid *pfid = oparms->fid;
+ struct create_durable_v2 *buf;
+
+ buf = kzalloc(sizeof(struct create_durable_v2), GFP_KERNEL);
+@@ -1839,7 +1853,14 @@ create_durable_v2_buf(struct cifs_fid *pfid)
+ (struct create_durable_v2, Name));
+ buf->ccontext.NameLength = cpu_to_le16(4);
+
+- buf->dcontext.Timeout = 0; /* Should this be configurable by workload */
++ /*
++ * NB: Handle timeout defaults to 0, which allows server to choose
++ * (most servers default to 120 seconds) and most clients default to 0.
++ * This can be overridden at mount ("handletimeout=") if the user wants
++ * a different persistent (or resilient) handle timeout for all opens
++ * on a particular SMB3 mount.
++ */
++ buf->dcontext.Timeout = cpu_to_le32(oparms->tcon->handle_timeout);
+ buf->dcontext.Flags = cpu_to_le32(SMB2_DHANDLE_FLAG_PERSISTENT);
+ generate_random_uuid(buf->dcontext.CreateGuid);
+ memcpy(pfid->create_guid, buf->dcontext.CreateGuid, 16);
+@@ -1892,7 +1913,7 @@ add_durable_v2_context(struct kvec *iov, unsigned int *num_iovec,
+ struct smb2_create_req *req = iov[0].iov_base;
+ unsigned int num = *num_iovec;
+
+- iov[num].iov_base = create_durable_v2_buf(oparms->fid);
++ iov[num].iov_base = create_durable_v2_buf(oparms);
+ if (iov[num].iov_base == NULL)
+ return -ENOMEM;
+ iov[num].iov_len = sizeof(struct create_durable_v2);
+diff --git a/fs/cifs/smb2transport.c b/fs/cifs/smb2transport.c
+index 7b351c65ee46..63264db78b89 100644
+--- a/fs/cifs/smb2transport.c
++++ b/fs/cifs/smb2transport.c
+@@ -576,6 +576,7 @@ smb2_mid_entry_alloc(const struct smb2_sync_hdr *shdr,
+ struct TCP_Server_Info *server)
+ {
+ struct mid_q_entry *temp;
++ unsigned int credits = le16_to_cpu(shdr->CreditCharge);
+
+ if (server == NULL) {
+ cifs_dbg(VFS, "Null TCP session in smb2_mid_entry_alloc\n");
+@@ -586,6 +587,7 @@ smb2_mid_entry_alloc(const struct smb2_sync_hdr *shdr,
+ memset(temp, 0, sizeof(struct mid_q_entry));
+ kref_init(&temp->refcount);
+ temp->mid = le64_to_cpu(shdr->MessageId);
++ temp->credits = credits > 0 ? credits : 1;
+ temp->pid = current->pid;
+ temp->command = shdr->Command; /* Always LE */
+ temp->when_alloc = jiffies;
+@@ -674,13 +676,18 @@ smb2_setup_request(struct cifs_ses *ses, struct smb_rqst *rqst)
+ smb2_seq_num_into_buf(ses->server, shdr);
+
+ rc = smb2_get_mid_entry(ses, shdr, &mid);
+- if (rc)
++ if (rc) {
++ revert_current_mid_from_hdr(ses->server, shdr);
+ return ERR_PTR(rc);
++ }
++
+ rc = smb2_sign_rqst(rqst, ses->server);
+ if (rc) {
++ revert_current_mid_from_hdr(ses->server, shdr);
+ cifs_delete_mid(mid);
+ return ERR_PTR(rc);
+ }
++
+ return mid;
+ }
+
+@@ -695,11 +702,14 @@ smb2_setup_async_request(struct TCP_Server_Info *server, struct smb_rqst *rqst)
+ smb2_seq_num_into_buf(server, shdr);
+
+ mid = smb2_mid_entry_alloc(shdr, server);
+- if (mid == NULL)
++ if (mid == NULL) {
++ revert_current_mid_from_hdr(server, shdr);
+ return ERR_PTR(-ENOMEM);
++ }
+
+ rc = smb2_sign_rqst(rqst, server);
+ if (rc) {
++ revert_current_mid_from_hdr(server, shdr);
+ DeleteMidQEntry(mid);
+ return ERR_PTR(rc);
+ }
+diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c
+index 53532bd3f50d..9544eb99b5a2 100644
+--- a/fs/cifs/transport.c
++++ b/fs/cifs/transport.c
+@@ -647,6 +647,7 @@ cifs_call_async(struct TCP_Server_Info *server, struct smb_rqst *rqst,
+ cifs_in_send_dec(server);
+
+ if (rc < 0) {
++ revert_current_mid(server, mid->credits);
+ server->sequence_number -= 2;
+ cifs_delete_mid(mid);
+ }
+@@ -868,6 +869,7 @@ compound_send_recv(const unsigned int xid, struct cifs_ses *ses,
+ for (i = 0; i < num_rqst; i++) {
+ midQ[i] = ses->server->ops->setup_request(ses, &rqst[i]);
+ if (IS_ERR(midQ[i])) {
++ revert_current_mid(ses->server, i);
+ for (j = 0; j < i; j++)
+ cifs_delete_mid(midQ[j]);
+ mutex_unlock(&ses->server->srv_mutex);
+@@ -897,8 +899,10 @@ compound_send_recv(const unsigned int xid, struct cifs_ses *ses,
+ for (i = 0; i < num_rqst; i++)
+ cifs_save_when_sent(midQ[i]);
+
+- if (rc < 0)
++ if (rc < 0) {
++ revert_current_mid(ses->server, num_rqst);
+ ses->server->sequence_number -= 2;
++ }
+
+ mutex_unlock(&ses->server->srv_mutex);
+
+diff --git a/fs/dax.c b/fs/dax.c
+index 6959837cc465..05cca2214ae3 100644
+--- a/fs/dax.c
++++ b/fs/dax.c
+@@ -843,9 +843,8 @@ unlock_pte:
+ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
+ struct address_space *mapping, void *entry)
+ {
+- unsigned long pfn;
++ unsigned long pfn, index, count;
+ long ret = 0;
+- size_t size;
+
+ /*
+ * A page got tagged dirty in DAX mapping? Something is seriously
+@@ -894,17 +893,18 @@ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
+ xas_unlock_irq(xas);
+
+ /*
+- * Even if dax_writeback_mapping_range() was given a wbc->range_start
+- * in the middle of a PMD, the 'index' we are given will be aligned to
+- * the start index of the PMD, as will the pfn we pull from 'entry'.
++ * If dax_writeback_mapping_range() was given a wbc->range_start
++ * in the middle of a PMD, the 'index' we use needs to be
++ * aligned to the start of the PMD.
+ * This allows us to flush for PMD_SIZE and not have to worry about
+ * partial PMD writebacks.
+ */
+ pfn = dax_to_pfn(entry);
+- size = PAGE_SIZE << dax_entry_order(entry);
++ count = 1UL << dax_entry_order(entry);
++ index = xas->xa_index & ~(count - 1);
+
+- dax_entry_mkclean(mapping, xas->xa_index, pfn);
+- dax_flush(dax_dev, page_address(pfn_to_page(pfn)), size);
++ dax_entry_mkclean(mapping, index, pfn);
++ dax_flush(dax_dev, page_address(pfn_to_page(pfn)), count * PAGE_SIZE);
+ /*
+ * After we have flushed the cache, we can clear the dirty tag. There
+ * cannot be new dirty data in the pfn after the flush has completed as
+@@ -917,8 +917,7 @@ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
+ xas_clear_mark(xas, PAGECACHE_TAG_DIRTY);
+ dax_wake_entry(xas, entry, false);
+
+- trace_dax_writeback_one(mapping->host, xas->xa_index,
+- size >> PAGE_SHIFT);
++ trace_dax_writeback_one(mapping->host, index, count);
+ return ret;
+
+ put_unlocked:
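
The dax_writeback_one() rework above stops trusting the caller-supplied index and derives the flush range from the entry itself: count is the number of pages the entry covers (a power of two), and masking with ~(count - 1) rounds the index down to the entry boundary. A standalone sketch of that round-down, with made-up values:

#include <assert.h>
#include <stdio.h>

/* Round idx down to a multiple of count; count must be a power of two.
 * Same trick as index = xas->xa_index & ~(count - 1) above. */
static unsigned long round_down_pow2(unsigned long idx, unsigned long count)
{
	assert(count && (count & (count - 1)) == 0);
	return idx & ~(count - 1);
}

int main(void)
{
	/* A PMD-sized DAX entry covers 512 4K pages on x86-64. */
	printf("%lu\n", round_down_pow2(517, 512));	/* 512 */
	printf("%lu\n", round_down_pow2(511, 512));	/* 0 */
	return 0;
}
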
+diff --git a/fs/devpts/inode.c b/fs/devpts/inode.c
+index c53814539070..553a3f3300ae 100644
+--- a/fs/devpts/inode.c
++++ b/fs/devpts/inode.c
+@@ -455,6 +455,7 @@ devpts_fill_super(struct super_block *s, void *data, int silent)
+ s->s_blocksize_bits = 10;
+ s->s_magic = DEVPTS_SUPER_MAGIC;
+ s->s_op = &devpts_sops;
++ s->s_d_op = &simple_dentry_operations;
+ s->s_time_gran = 1;
+
+ error = -ENOMEM;
+diff --git a/fs/exec.c b/fs/exec.c
+index fb72d36f7823..bcf383730bea 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -932,7 +932,7 @@ int kernel_read_file(struct file *file, void **buf, loff_t *size,
+ bytes = kernel_read(file, *buf + pos, i_size - pos, &pos);
+ if (bytes < 0) {
+ ret = bytes;
+- goto out;
++ goto out_free;
+ }
+
+ if (bytes == 0)
+diff --git a/fs/ext2/super.c b/fs/ext2/super.c
+index 73b2d528237f..a9ea38182578 100644
+--- a/fs/ext2/super.c
++++ b/fs/ext2/super.c
+@@ -757,7 +757,8 @@ static loff_t ext2_max_size(int bits)
+ {
+ loff_t res = EXT2_NDIR_BLOCKS;
+ int meta_blocks;
+- loff_t upper_limit;
++ unsigned int upper_limit;
++ unsigned int ppb = 1 << (bits-2);
+
+ /* This is calculated to be the largest file size for a
+ * dense, file such that the total number of
+@@ -771,24 +772,34 @@ static loff_t ext2_max_size(int bits)
+ /* total blocks in file system block size */
+ upper_limit >>= (bits - 9);
+
++ /* Compute how many blocks we can address by block tree */
++ res += 1LL << (bits-2);
++ res += 1LL << (2*(bits-2));
++ res += 1LL << (3*(bits-2));
++ /* Does block tree limit file size? */
++ if (res < upper_limit)
++ goto check_lfs;
+
++ res = upper_limit;
++ /* How many metadata blocks are needed for addressing upper_limit? */
++ upper_limit -= EXT2_NDIR_BLOCKS;
+ /* indirect blocks */
+ meta_blocks = 1;
++ upper_limit -= ppb;
+ /* double indirect blocks */
+- meta_blocks += 1 + (1LL << (bits-2));
+- /* tripple indirect blocks */
+- meta_blocks += 1 + (1LL << (bits-2)) + (1LL << (2*(bits-2)));
+-
+- upper_limit -= meta_blocks;
+- upper_limit <<= bits;
+-
+- res += 1LL << (bits-2);
+- res += 1LL << (2*(bits-2));
+- res += 1LL << (3*(bits-2));
++ if (upper_limit < ppb * ppb) {
++ meta_blocks += 1 + DIV_ROUND_UP(upper_limit, ppb);
++ res -= meta_blocks;
++ goto check_lfs;
++ }
++ meta_blocks += 1 + ppb;
++ upper_limit -= ppb * ppb;
++ /* triple indirect blocks for the rest */
++ meta_blocks += 1 + DIV_ROUND_UP(upper_limit, ppb) +
++ DIV_ROUND_UP(upper_limit, ppb*ppb);
++ res -= meta_blocks;
++check_lfs:
+ res <<= bits;
+- if (res > upper_limit)
+- res = upper_limit;
+-
+ if (res > MAX_LFS_FILESIZE)
+ res = MAX_LFS_FILESIZE;
+
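
The reworked ext2_max_size() above takes the smaller of two limits, the blocks addressable by the block tree and the blocks addressable by i_blocks, and only in the second case charges the indirect (metadata) blocks actually needed, instead of always assuming fully populated double and triple indirect trees. A userspace sketch of just the metadata accounting, assuming the 12 direct blocks were already subtracted and blocks > ppb, as holds for the sizes the kernel caller feeds in (helper name and values are illustrative):

#include <stdio.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

/* Indirect blocks needed to map 'blocks' data blocks, with 'ppb'
 * block pointers per indirect block. Mirrors the new accounting. */
static unsigned long long meta_blocks(unsigned long long blocks,
				      unsigned long long ppb)
{
	unsigned long long meta = 1;		/* single indirect block */

	blocks -= ppb;
	if (blocks < ppb * ppb)			/* partial double tree */
		return meta + 1 + DIV_ROUND_UP(blocks, ppb);
	meta += 1 + ppb;			/* full double tree */
	blocks -= ppb * ppb;
	return meta + 1 + DIV_ROUND_UP(blocks, ppb) +	/* partial triple */
	       DIV_ROUND_UP(blocks, ppb * ppb);
}

int main(void)
{
	/* 4K blocks => 1024 block pointers per indirect block. */
	printf("%llu\n", meta_blocks(1ULL << 21, 1024));
	return 0;
}
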
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 185a05d3257e..508a37ec9271 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -426,6 +426,9 @@ struct flex_groups {
+ /* Flags that are appropriate for non-directories/regular files. */
+ #define EXT4_OTHER_FLMASK (EXT4_NODUMP_FL | EXT4_NOATIME_FL)
+
++/* The only flags that should be swapped */
++#define EXT4_FL_SHOULD_SWAP (EXT4_HUGE_FILE_FL | EXT4_EXTENTS_FL)
++
+ /* Mask out flags that are inappropriate for the given type of inode. */
+ static inline __u32 ext4_mask_flags(umode_t mode, __u32 flags)
+ {
+diff --git a/fs/ext4/ext4_jbd2.h b/fs/ext4/ext4_jbd2.h
+index 15b6dd733780..df908ef79cce 100644
+--- a/fs/ext4/ext4_jbd2.h
++++ b/fs/ext4/ext4_jbd2.h
+@@ -384,7 +384,7 @@ static inline void ext4_update_inode_fsync_trans(handle_t *handle,
+ {
+ struct ext4_inode_info *ei = EXT4_I(inode);
+
+- if (ext4_handle_valid(handle)) {
++ if (ext4_handle_valid(handle) && !is_handle_aborted(handle)) {
+ ei->i_sync_tid = handle->h_transaction->t_tid;
+ if (datasync)
+ ei->i_datasync_tid = handle->h_transaction->t_tid;
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 240b6dea5441..252bbbb5a2f4 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -2956,14 +2956,17 @@ again:
+ if (err < 0)
+ goto out;
+
+- } else if (sbi->s_cluster_ratio > 1 && end >= ex_end) {
++ } else if (sbi->s_cluster_ratio > 1 && end >= ex_end &&
++ partial.state == initial) {
+ /*
+- * If there's an extent to the right its first cluster
+- * contains the immediate right boundary of the
+- * truncated/punched region. Set partial_cluster to
+- * its negative value so it won't be freed if shared
+- * with the current extent. The end < ee_block case
+- * is handled in ext4_ext_rm_leaf().
++ * If we're punching, there's an extent to the right.
++ * If the partial cluster hasn't been set, set it to
++ * that extent's first cluster and its state to nofree
++ * so it won't be freed should it contain blocks to be
++ * removed. If it's already set (tofree/nofree), we're
++ * retrying and keep the original partial cluster info
++ * so a cluster marked tofree as a result of earlier
++ * extent removal is not lost.
+ */
+ lblk = ex_end + 1;
+ err = ext4_ext_search_right(inode, path, &lblk, &pblk,
+diff --git a/fs/ext4/file.c b/fs/ext4/file.c
+index 69d65d49837b..98ec11f69cd4 100644
+--- a/fs/ext4/file.c
++++ b/fs/ext4/file.c
+@@ -125,7 +125,7 @@ ext4_unaligned_aio(struct inode *inode, struct iov_iter *from, loff_t pos)
+ struct super_block *sb = inode->i_sb;
+ int blockmask = sb->s_blocksize - 1;
+
+- if (pos >= i_size_read(inode))
++ if (pos >= ALIGN(i_size_read(inode), sb->s_blocksize))
+ return 0;
+
+ if ((pos | iov_iter_alignment(from)) & blockmask)
+diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c
+index bf7fa1507e81..e1801b288847 100644
+--- a/fs/ext4/indirect.c
++++ b/fs/ext4/indirect.c
+@@ -1219,6 +1219,7 @@ int ext4_ind_remove_space(handle_t *handle, struct inode *inode,
+ ext4_lblk_t offsets[4], offsets2[4];
+ Indirect chain[4], chain2[4];
+ Indirect *partial, *partial2;
++ Indirect *p = NULL, *p2 = NULL;
+ ext4_lblk_t max_block;
+ __le32 nr = 0, nr2 = 0;
+ int n = 0, n2 = 0;
+@@ -1260,7 +1261,7 @@ int ext4_ind_remove_space(handle_t *handle, struct inode *inode,
+ }
+
+
+- partial = ext4_find_shared(inode, n, offsets, chain, &nr);
++ partial = p = ext4_find_shared(inode, n, offsets, chain, &nr);
+ if (nr) {
+ if (partial == chain) {
+ /* Shared branch grows from the inode */
+@@ -1285,13 +1286,11 @@ int ext4_ind_remove_space(handle_t *handle, struct inode *inode,
+ partial->p + 1,
+ (__le32 *)partial->bh->b_data+addr_per_block,
+ (chain+n-1) - partial);
+- BUFFER_TRACE(partial->bh, "call brelse");
+- brelse(partial->bh);
+ partial--;
+ }
+
+ end_range:
+- partial2 = ext4_find_shared(inode, n2, offsets2, chain2, &nr2);
++ partial2 = p2 = ext4_find_shared(inode, n2, offsets2, chain2, &nr2);
+ if (nr2) {
+ if (partial2 == chain2) {
+ /*
+@@ -1321,16 +1320,14 @@ end_range:
+ (__le32 *)partial2->bh->b_data,
+ partial2->p,
+ (chain2+n2-1) - partial2);
+- BUFFER_TRACE(partial2->bh, "call brelse");
+- brelse(partial2->bh);
+ partial2--;
+ }
+ goto do_indirects;
+ }
+
+ /* Punch happened within the same level (n == n2) */
+- partial = ext4_find_shared(inode, n, offsets, chain, &nr);
+- partial2 = ext4_find_shared(inode, n2, offsets2, chain2, &nr2);
++ partial = p = ext4_find_shared(inode, n, offsets, chain, &nr);
++ partial2 = p2 = ext4_find_shared(inode, n2, offsets2, chain2, &nr2);
+
+ /* Free top, but only if partial2 isn't its subtree. */
+ if (nr) {
+@@ -1387,11 +1384,7 @@ end_range:
+ partial->p + 1,
+ partial2->p,
+ (chain+n-1) - partial);
+- BUFFER_TRACE(partial->bh, "call brelse");
+- brelse(partial->bh);
+- BUFFER_TRACE(partial2->bh, "call brelse");
+- brelse(partial2->bh);
+- return 0;
++ goto cleanup;
+ }
+
+ /*
+@@ -1406,8 +1399,6 @@ end_range:
+ partial->p + 1,
+ (__le32 *)partial->bh->b_data+addr_per_block,
+ (chain+n-1) - partial);
+- BUFFER_TRACE(partial->bh, "call brelse");
+- brelse(partial->bh);
+ partial--;
+ }
+ if (partial2 > chain2 && depth2 <= depth) {
+@@ -1415,11 +1406,21 @@ end_range:
+ (__le32 *)partial2->bh->b_data,
+ partial2->p,
+ (chain2+n2-1) - partial2);
+- BUFFER_TRACE(partial2->bh, "call brelse");
+- brelse(partial2->bh);
+ partial2--;
+ }
+ }
++
++cleanup:
++ while (p && p > chain) {
++ BUFFER_TRACE(p->bh, "call brelse");
++ brelse(p->bh);
++ p--;
++ }
++ while (p2 && p2 > chain2) {
++ BUFFER_TRACE(p2->bh, "call brelse");
++ brelse(p2->bh);
++ p2--;
++ }
+ return 0;
+
+ do_indirects:
+@@ -1427,7 +1428,7 @@ do_indirects:
+ switch (offsets[0]) {
+ default:
+ if (++n >= n2)
+- return 0;
++ break;
+ nr = i_data[EXT4_IND_BLOCK];
+ if (nr) {
+ ext4_free_branches(handle, inode, NULL, &nr, &nr+1, 1);
+@@ -1435,7 +1436,7 @@ do_indirects:
+ }
+ case EXT4_IND_BLOCK:
+ if (++n >= n2)
+- return 0;
++ break;
+ nr = i_data[EXT4_DIND_BLOCK];
+ if (nr) {
+ ext4_free_branches(handle, inode, NULL, &nr, &nr+1, 2);
+@@ -1443,7 +1444,7 @@ do_indirects:
+ }
+ case EXT4_DIND_BLOCK:
+ if (++n >= n2)
+- return 0;
++ break;
+ nr = i_data[EXT4_TIND_BLOCK];
+ if (nr) {
+ ext4_free_branches(handle, inode, NULL, &nr, &nr+1, 3);
+@@ -1452,5 +1453,5 @@ do_indirects:
+ case EXT4_TIND_BLOCK:
+ ;
+ }
+- return 0;
++ goto cleanup;
+ }
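
The ext4_ind_remove_space() fix above replaces per-path brelse() calls (some of which were missing, leaking buffer heads on the early returns) with two walk pointers, p and p2, released at a single cleanup label. The same shape in plain C, with malloc standing in for the buffer-head references:

#include <stdlib.h>

/* Centralised cleanup: keep each resource reachable from one pointer
 * and release everything at a single label, so no early exit can leak. */
static int walk(int fail_early)
{
	char *p = NULL, *p2 = NULL;
	int ret = -1;

	p = malloc(32);
	if (!p)
		goto cleanup;
	if (fail_early)
		goto cleanup;		/* early exit still frees p */
	p2 = malloc(32);
	if (!p2)
		goto cleanup;
	ret = 0;			/* real work would go here */
cleanup:
	free(p2);
	free(p);
	return ret;
}

int main(void)
{
	walk(1);
	return walk(0);
}
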
+diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
+index d37dafa1d133..2e76fb55d94a 100644
+--- a/fs/ext4/ioctl.c
++++ b/fs/ext4/ioctl.c
+@@ -63,18 +63,20 @@ static void swap_inode_data(struct inode *inode1, struct inode *inode2)
+ loff_t isize;
+ struct ext4_inode_info *ei1;
+ struct ext4_inode_info *ei2;
++ unsigned long tmp;
+
+ ei1 = EXT4_I(inode1);
+ ei2 = EXT4_I(inode2);
+
+ swap(inode1->i_version, inode2->i_version);
+- swap(inode1->i_blocks, inode2->i_blocks);
+- swap(inode1->i_bytes, inode2->i_bytes);
+ swap(inode1->i_atime, inode2->i_atime);
+ swap(inode1->i_mtime, inode2->i_mtime);
+
+ memswap(ei1->i_data, ei2->i_data, sizeof(ei1->i_data));
+- swap(ei1->i_flags, ei2->i_flags);
++ tmp = ei1->i_flags & EXT4_FL_SHOULD_SWAP;
++ ei1->i_flags = (ei2->i_flags & EXT4_FL_SHOULD_SWAP) |
++ (ei1->i_flags & ~EXT4_FL_SHOULD_SWAP);
++ ei2->i_flags = tmp | (ei2->i_flags & ~EXT4_FL_SHOULD_SWAP);
+ swap(ei1->i_disksize, ei2->i_disksize);
+ ext4_es_remove_extent(inode1, 0, EXT_MAX_BLOCKS);
+ ext4_es_remove_extent(inode2, 0, EXT_MAX_BLOCKS);
+@@ -115,28 +117,41 @@ static long swap_inode_boot_loader(struct super_block *sb,
+ int err;
+ struct inode *inode_bl;
+ struct ext4_inode_info *ei_bl;
+-
+- if (inode->i_nlink != 1 || !S_ISREG(inode->i_mode) ||
+- IS_SWAPFILE(inode) || IS_ENCRYPTED(inode) ||
+- ext4_has_inline_data(inode))
+- return -EINVAL;
+-
+- if (IS_RDONLY(inode) || IS_APPEND(inode) || IS_IMMUTABLE(inode) ||
+- !inode_owner_or_capable(inode) || !capable(CAP_SYS_ADMIN))
+- return -EPERM;
++ qsize_t size, size_bl, diff;
++ blkcnt_t blocks;
++ unsigned short bytes;
+
+ inode_bl = ext4_iget(sb, EXT4_BOOT_LOADER_INO, EXT4_IGET_SPECIAL);
+ if (IS_ERR(inode_bl))
+ return PTR_ERR(inode_bl);
+ ei_bl = EXT4_I(inode_bl);
+
+- filemap_flush(inode->i_mapping);
+- filemap_flush(inode_bl->i_mapping);
+-
+ /* Protect orig inodes against a truncate and make sure,
+ * that only 1 swap_inode_boot_loader is running. */
+ lock_two_nondirectories(inode, inode_bl);
+
++ if (inode->i_nlink != 1 || !S_ISREG(inode->i_mode) ||
++ IS_SWAPFILE(inode) || IS_ENCRYPTED(inode) ||
++ ext4_has_inline_data(inode)) {
++ err = -EINVAL;
++ goto journal_err_out;
++ }
++
++ if (IS_RDONLY(inode) || IS_APPEND(inode) || IS_IMMUTABLE(inode) ||
++ !inode_owner_or_capable(inode) || !capable(CAP_SYS_ADMIN)) {
++ err = -EPERM;
++ goto journal_err_out;
++ }
++
++ down_write(&EXT4_I(inode)->i_mmap_sem);
++ err = filemap_write_and_wait(inode->i_mapping);
++ if (err)
++ goto err_out;
++
++ err = filemap_write_and_wait(inode_bl->i_mapping);
++ if (err)
++ goto err_out;
++
+ /* Wait for all existing dio workers */
+ inode_dio_wait(inode);
+ inode_dio_wait(inode_bl);
+@@ -147,7 +162,7 @@ static long swap_inode_boot_loader(struct super_block *sb,
+ handle = ext4_journal_start(inode_bl, EXT4_HT_MOVE_EXTENTS, 2);
+ if (IS_ERR(handle)) {
+ err = -EINVAL;
+- goto journal_err_out;
++ goto err_out;
+ }
+
+ /* Protect extent tree against block allocations via delalloc */
+@@ -170,6 +185,13 @@ static long swap_inode_boot_loader(struct super_block *sb,
+ memset(ei_bl->i_data, 0, sizeof(ei_bl->i_data));
+ }
+
++ err = dquot_initialize(inode);
++ if (err)
++ goto err_out1;
++
++ size = (qsize_t)(inode->i_blocks) * (1 << 9) + inode->i_bytes;
++ size_bl = (qsize_t)(inode_bl->i_blocks) * (1 << 9) + inode_bl->i_bytes;
++ diff = size - size_bl;
+ swap_inode_data(inode, inode_bl);
+
+ inode->i_ctime = inode_bl->i_ctime = current_time(inode);
+@@ -183,27 +205,51 @@ static long swap_inode_boot_loader(struct super_block *sb,
+
+ err = ext4_mark_inode_dirty(handle, inode);
+ if (err < 0) {
++ /* No need to update quota information. */
+ ext4_warning(inode->i_sb,
+ "couldn't mark inode #%lu dirty (err %d)",
+ inode->i_ino, err);
+ /* Revert all changes: */
+ swap_inode_data(inode, inode_bl);
+ ext4_mark_inode_dirty(handle, inode);
+- } else {
+- err = ext4_mark_inode_dirty(handle, inode_bl);
+- if (err < 0) {
+- ext4_warning(inode_bl->i_sb,
+- "couldn't mark inode #%lu dirty (err %d)",
+- inode_bl->i_ino, err);
+- /* Revert all changes: */
+- swap_inode_data(inode, inode_bl);
+- ext4_mark_inode_dirty(handle, inode);
+- ext4_mark_inode_dirty(handle, inode_bl);
+- }
++ goto err_out1;
++ }
++
++ blocks = inode_bl->i_blocks;
++ bytes = inode_bl->i_bytes;
++ inode_bl->i_blocks = inode->i_blocks;
++ inode_bl->i_bytes = inode->i_bytes;
++ err = ext4_mark_inode_dirty(handle, inode_bl);
++ if (err < 0) {
++ /* No need to update quota information. */
++ ext4_warning(inode_bl->i_sb,
++ "couldn't mark inode #%lu dirty (err %d)",
++ inode_bl->i_ino, err);
++ goto revert;
++ }
++
++ /* Bootloader inode should not be counted into quota information. */
++ if (diff > 0)
++ dquot_free_space(inode, diff);
++ else
++ err = dquot_alloc_space(inode, -1 * diff);
++
++ if (err < 0) {
++revert:
++ /* Revert all changes: */
++ inode_bl->i_blocks = blocks;
++ inode_bl->i_bytes = bytes;
++ swap_inode_data(inode, inode_bl);
++ ext4_mark_inode_dirty(handle, inode);
++ ext4_mark_inode_dirty(handle, inode_bl);
+ }
++
++err_out1:
+ ext4_journal_stop(handle);
+ ext4_double_up_write_data_sem(inode, inode_bl);
+
++err_out:
++ up_write(&EXT4_I(inode)->i_mmap_sem);
+ journal_err_out:
+ unlock_two_nondirectories(inode, inode_bl);
+ iput(inode_bl);
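
Two fixes meet in the ioctl.c hunk above: the permission and state checks move under the inode locks, and swap_inode_data() no longer swaps the whole i_flags word, only the EXT4_FL_SHOULD_SWAP bits that actually describe the on-disk data. The masked-swap idiom in isolation; the 0xc0000 value below is EXT4_HUGE_FILE_FL | EXT4_EXTENTS_FL as defined in this kernel's ext4.h, so treat it as illustrative:

#include <stdio.h>

/* Swap only the bits selected by mask between *a and *b,
 * leaving every other bit where it was. */
static void swap_masked(unsigned long *a, unsigned long *b,
			unsigned long mask)
{
	unsigned long tmp = *a & mask;

	*a = (*b & mask) | (*a & ~mask);
	*b = tmp | (*b & ~mask);
}

int main(void)
{
	unsigned long a = 0xc0001, b = 0x00002;

	swap_masked(&a, &b, 0xc0000);
	printf("a=%#lx b=%#lx\n", a, b);	/* a=0x1 b=0xc0002 */
	return 0;
}
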
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index 48421de803b7..3d9b18505c0c 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -1960,7 +1960,8 @@ retry:
+ le16_to_cpu(es->s_reserved_gdt_blocks);
+ n_group = n_desc_blocks * EXT4_DESC_PER_BLOCK(sb);
+ n_blocks_count = (ext4_fsblk_t)n_group *
+- EXT4_BLOCKS_PER_GROUP(sb);
++ EXT4_BLOCKS_PER_GROUP(sb) +
++ le32_to_cpu(es->s_first_data_block);
+ n_group--; /* set to last group number */
+ }
+
+diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c
+index 1cb0fcc67d2d..caf77fe8ac07 100644
+--- a/fs/f2fs/extent_cache.c
++++ b/fs/f2fs/extent_cache.c
+@@ -506,7 +506,7 @@ static void f2fs_update_extent_tree_range(struct inode *inode,
+ unsigned int end = fofs + len;
+ unsigned int pos = (unsigned int)fofs;
+ bool updated = false;
+- bool leftmost;
++ bool leftmost = false;
+
+ if (!et)
+ return;
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 12fabd6735dd..279bc00489cc 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -456,7 +456,6 @@ struct f2fs_flush_device {
+
+ /* for inline stuff */
+ #define DEF_INLINE_RESERVED_SIZE 1
+-#define DEF_MIN_INLINE_SIZE 1
+ static inline int get_extra_isize(struct inode *inode);
+ static inline int get_inline_xattr_addrs(struct inode *inode);
+ #define MAX_INLINE_DATA(inode) (sizeof(__le32) * \
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index bba56b39dcc5..ae2b45e75847 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -1750,10 +1750,12 @@ static int f2fs_ioc_start_atomic_write(struct file *filp)
+
+ down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+
+- if (!get_dirty_pages(inode))
+- goto skip_flush;
+-
+- f2fs_msg(F2FS_I_SB(inode)->sb, KERN_WARNING,
++ /*
++ * Wait for end_io so that F2FS_WB_CP_DATA is counted correctly by
++ * f2fs_is_atomic_file().
++ */
++ if (get_dirty_pages(inode))
++ f2fs_msg(F2FS_I_SB(inode)->sb, KERN_WARNING,
+ "Unexpected flush for atomic writes: ino=%lu, npages=%u",
+ inode->i_ino, get_dirty_pages(inode));
+ ret = filemap_write_and_wait_range(inode->i_mapping, 0, LLONG_MAX);
+@@ -1761,7 +1763,7 @@ static int f2fs_ioc_start_atomic_write(struct file *filp)
+ up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+ goto out;
+ }
+-skip_flush:
++
+ set_inode_flag(inode, FI_ATOMIC_FILE);
+ clear_inode_flag(inode, FI_ATOMIC_REVOKE_REQUEST);
+ up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
+index d636cbcf68f2..aacbb864ec1e 100644
+--- a/fs/f2fs/inline.c
++++ b/fs/f2fs/inline.c
+@@ -659,6 +659,12 @@ int f2fs_read_inline_dir(struct file *file, struct dir_context *ctx,
+ if (IS_ERR(ipage))
+ return PTR_ERR(ipage);
+
++ /*
++ * f2fs_readdir() is protected by inode->i_rwsem, so it is safe to
++ * access ipage without the page lock held.
++ */
++ unlock_page(ipage);
++
+ inline_dentry = inline_data_addr(inode, ipage);
+
+ make_dentry_ptr_inline(inode, &d, inline_dentry);
+@@ -667,7 +673,7 @@ int f2fs_read_inline_dir(struct file *file, struct dir_context *ctx,
+ if (!err)
+ ctx->pos = d.max;
+
+- f2fs_put_page(ipage, 1);
++ f2fs_put_page(ipage, 0);
+ return err < 0 ? err : 0;
+ }
+
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 9b79056d705d..e1b1d390b329 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -215,7 +215,8 @@ void f2fs_register_inmem_page(struct inode *inode, struct page *page)
+ }
+
+ static int __revoke_inmem_pages(struct inode *inode,
+- struct list_head *head, bool drop, bool recover)
++ struct list_head *head, bool drop, bool recover,
++ bool trylock)
+ {
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ struct inmem_pages *cur, *tmp;
+@@ -227,7 +228,16 @@ static int __revoke_inmem_pages(struct inode *inode,
+ if (drop)
+ trace_f2fs_commit_inmem_page(page, INMEM_DROP);
+
+- lock_page(page);
++ if (trylock) {
++ /*
++ * to avoid deadlock between the page lock and
++ * inmem_lock.
++ */
++ if (!trylock_page(page))
++ continue;
++ } else {
++ lock_page(page);
++ }
+
+ f2fs_wait_on_page_writeback(page, DATA, true, true);
+
+@@ -318,13 +328,19 @@ void f2fs_drop_inmem_pages(struct inode *inode)
+ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ struct f2fs_inode_info *fi = F2FS_I(inode);
+
+- mutex_lock(&fi->inmem_lock);
+- __revoke_inmem_pages(inode, &fi->inmem_pages, true, false);
+- spin_lock(&sbi->inode_lock[ATOMIC_FILE]);
+- if (!list_empty(&fi->inmem_ilist))
+- list_del_init(&fi->inmem_ilist);
+- spin_unlock(&sbi->inode_lock[ATOMIC_FILE]);
+- mutex_unlock(&fi->inmem_lock);
++ while (!list_empty(&fi->inmem_pages)) {
++ mutex_lock(&fi->inmem_lock);
++ __revoke_inmem_pages(inode, &fi->inmem_pages,
++ true, false, true);
++
++ if (list_empty(&fi->inmem_pages)) {
++ spin_lock(&sbi->inode_lock[ATOMIC_FILE]);
++ if (!list_empty(&fi->inmem_ilist))
++ list_del_init(&fi->inmem_ilist);
++ spin_unlock(&sbi->inode_lock[ATOMIC_FILE]);
++ }
++ mutex_unlock(&fi->inmem_lock);
++ }
+
+ clear_inode_flag(inode, FI_ATOMIC_FILE);
+ fi->i_gc_failures[GC_FAILURE_ATOMIC] = 0;
+@@ -429,12 +445,15 @@ retry:
+ * recovery or rewrite & commit last transaction. For other
+ * error number, revoking was done by filesystem itself.
+ */
+- err = __revoke_inmem_pages(inode, &revoke_list, false, true);
++ err = __revoke_inmem_pages(inode, &revoke_list,
++ false, true, false);
+
+ /* drop all uncommitted pages */
+- __revoke_inmem_pages(inode, &fi->inmem_pages, true, false);
++ __revoke_inmem_pages(inode, &fi->inmem_pages,
++ true, false, false);
+ } else {
+- __revoke_inmem_pages(inode, &revoke_list, false, false);
++ __revoke_inmem_pages(inode, &revoke_list,
++ false, false, false);
+ }
+
+ return err;
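
f2fs_drop_inmem_pages() above holds inmem_lock and then needs each page lock, while the writeback path takes the page lock first, a textbook ABBA inversion. The fix is the classic trylock-and-back-off loop; a pthread sketch of the same shape, with illustrative lock names:

#include <pthread.h>
#include <sched.h>

static pthread_mutex_t outer = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t inner = PTHREAD_MUTEX_INITIALIZER;

/* Other paths take inner -> outer; we want outer -> inner. Rather
 * than block inside the inversion, try the inner lock and, on
 * failure, drop everything and start over. */
static void lock_both(void)
{
	for (;;) {
		pthread_mutex_lock(&outer);
		if (pthread_mutex_trylock(&inner) == 0)
			return;			/* got both */
		pthread_mutex_unlock(&outer);	/* back off */
		sched_yield();
	}
}

int main(void)
{
	lock_both();
	pthread_mutex_unlock(&inner);
	pthread_mutex_unlock(&outer);
	return 0;
}
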
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index c46a1d4318d4..5892fa3c885f 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -834,12 +834,13 @@ static int parse_options(struct super_block *sb, char *options)
+ "set with inline_xattr option");
+ return -EINVAL;
+ }
+- if (!F2FS_OPTION(sbi).inline_xattr_size ||
+- F2FS_OPTION(sbi).inline_xattr_size >=
+- DEF_ADDRS_PER_INODE -
+- F2FS_TOTAL_EXTRA_ATTR_SIZE -
+- DEF_INLINE_RESERVED_SIZE -
+- DEF_MIN_INLINE_SIZE) {
++ if (F2FS_OPTION(sbi).inline_xattr_size <
++ sizeof(struct f2fs_xattr_header) / sizeof(__le32) ||
++ F2FS_OPTION(sbi).inline_xattr_size >
++ DEF_ADDRS_PER_INODE -
++ F2FS_TOTAL_EXTRA_ATTR_SIZE / sizeof(__le32) -
++ DEF_INLINE_RESERVED_SIZE -
++ MIN_INLINE_DENTRY_SIZE / sizeof(__le32)) {
+ f2fs_msg(sb, KERN_ERR,
+ "inline xattr size is out of range");
+ return -EINVAL;
+@@ -915,6 +916,10 @@ static int f2fs_drop_inode(struct inode *inode)
+ sb_start_intwrite(inode->i_sb);
+ f2fs_i_size_write(inode, 0);
+
++ f2fs_submit_merged_write_cond(F2FS_I_SB(inode),
++ inode, NULL, 0, DATA);
++ truncate_inode_pages_final(inode->i_mapping);
++
+ if (F2FS_HAS_BLOCKS(inode))
+ f2fs_truncate(inode);
+
+diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
+index 0575edbe3ed6..f1ab9000b294 100644
+--- a/fs/f2fs/sysfs.c
++++ b/fs/f2fs/sysfs.c
+@@ -278,10 +278,16 @@ out:
+ return count;
+ }
+
+- *ui = t;
+
+- if (!strcmp(a->attr.name, "iostat_enable") && *ui == 0)
+- f2fs_reset_iostat(sbi);
++ if (!strcmp(a->attr.name, "iostat_enable")) {
++ sbi->iostat_enable = !!t;
++ if (!sbi->iostat_enable)
++ f2fs_reset_iostat(sbi);
++ return count;
++ }
++
++ *ui = (unsigned int)t;
++
+ return count;
+ }
+
+diff --git a/fs/f2fs/trace.c b/fs/f2fs/trace.c
+index ce2a5eb210b6..d0ab533a9ce8 100644
+--- a/fs/f2fs/trace.c
++++ b/fs/f2fs/trace.c
+@@ -14,7 +14,7 @@
+ #include "trace.h"
+
+ static RADIX_TREE(pids, GFP_ATOMIC);
+-static struct mutex pids_lock;
++static spinlock_t pids_lock;
+ static struct last_io_info last_io;
+
+ static inline void __print_last_io(void)
+@@ -58,23 +58,29 @@ void f2fs_trace_pid(struct page *page)
+
+ set_page_private(page, (unsigned long)pid);
+
++retry:
+ if (radix_tree_preload(GFP_NOFS))
+ return;
+
+- mutex_lock(&pids_lock);
++ spin_lock(&pids_lock);
+ p = radix_tree_lookup(&pids, pid);
+ if (p == current)
+ goto out;
+ if (p)
+ radix_tree_delete(&pids, pid);
+
+- f2fs_radix_tree_insert(&pids, pid, current);
++ if (radix_tree_insert(&pids, pid, current)) {
++ spin_unlock(&pids_lock);
++ radix_tree_preload_end();
++ cond_resched();
++ goto retry;
++ }
+
+ trace_printk("%3x:%3x %4x %-16s\n",
+ MAJOR(inode->i_sb->s_dev), MINOR(inode->i_sb->s_dev),
+ pid, current->comm);
+ out:
+- mutex_unlock(&pids_lock);
++ spin_unlock(&pids_lock);
+ radix_tree_preload_end();
+ }
+
+@@ -119,7 +125,7 @@ void f2fs_trace_ios(struct f2fs_io_info *fio, int flush)
+
+ void f2fs_build_trace_ios(void)
+ {
+- mutex_init(&pids_lock);
++ spin_lock_init(&pids_lock);
+ }
+
+ #define PIDVEC_SIZE 128
+@@ -147,7 +153,7 @@ void f2fs_destroy_trace_ios(void)
+ pid_t next_pid = 0;
+ unsigned int found;
+
+- mutex_lock(&pids_lock);
++ spin_lock(&pids_lock);
+ while ((found = gang_lookup_pids(pid, next_pid, PIDVEC_SIZE))) {
+ unsigned idx;
+
+@@ -155,5 +161,5 @@ void f2fs_destroy_trace_ios(void)
+ for (idx = 0; idx < found; idx++)
+ radix_tree_delete(&pids, pid[idx]);
+ }
+- mutex_unlock(&pids_lock);
++ spin_unlock(&pids_lock);
+ }
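
The trace.c change above swaps the sleeping mutex for a spinlock, and because radix_tree_insert() can still fail despite preloading, the insert is retried from the top: drop the lock, end the preload, reschedule, try again. A sketch of that retry-from-the-top shape (try_insert() is a stand-in for the fallible insert, not a kernel API):

#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int slot_used;

/* Stand-in for radix_tree_insert() failing despite preloading. */
static int try_insert(int first_attempt)
{
	return first_attempt ? -1 : 0;
}

/* A spinlock cannot be held while sleeping, so on insertion failure
 * drop the lock, yield, and retry from the top instead of blocking
 * under it. */
static void insert_with_retry(void)
{
	int first = 1;

retry:
	pthread_mutex_lock(&lock);
	if (try_insert(first)) {
		pthread_mutex_unlock(&lock);
		first = 0;
		sched_yield();		/* cond_resched() stand-in */
		goto retry;
	}
	slot_used = 1;
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	insert_with_retry();
	printf("inserted=%d\n", slot_used);
	return 0;
}
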
+diff --git a/fs/f2fs/xattr.c b/fs/f2fs/xattr.c
+index 18d5ffbc5e8c..73b92985198b 100644
+--- a/fs/f2fs/xattr.c
++++ b/fs/f2fs/xattr.c
+@@ -224,11 +224,11 @@ static struct f2fs_xattr_entry *__find_inline_xattr(struct inode *inode,
+ {
+ struct f2fs_xattr_entry *entry;
+ unsigned int inline_size = inline_xattr_size(inode);
++ void *max_addr = base_addr + inline_size;
+
+ list_for_each_xattr(entry, base_addr) {
+- if ((void *)entry + sizeof(__u32) > base_addr + inline_size ||
+- (void *)XATTR_NEXT_ENTRY(entry) + sizeof(__u32) >
+- base_addr + inline_size) {
++ if ((void *)entry + sizeof(__u32) > max_addr ||
++ (void *)XATTR_NEXT_ENTRY(entry) > max_addr) {
+ *last_addr = entry;
+ return NULL;
+ }
+@@ -239,6 +239,13 @@ static struct f2fs_xattr_entry *__find_inline_xattr(struct inode *inode,
+ if (!memcmp(entry->e_name, name, len))
+ break;
+ }
++
++ /* inline xattr header or entry across max inline xattr size */
++ if (IS_XATTR_LAST_ENTRY(entry) &&
++ (void *)entry + sizeof(__u32) > max_addr) {
++ *last_addr = entry;
++ return NULL;
++ }
+ return entry;
+ }
+
+diff --git a/fs/file.c b/fs/file.c
+index 3209ee271c41..a10487aa0a84 100644
+--- a/fs/file.c
++++ b/fs/file.c
+@@ -457,6 +457,7 @@ struct files_struct init_files = {
+ .full_fds_bits = init_files.full_fds_bits_init,
+ },
+ .file_lock = __SPIN_LOCK_UNLOCKED(init_files.file_lock),
++ .resize_wait = __WAIT_QUEUE_HEAD_INITIALIZER(init_files.resize_wait),
+ };
+
+ static unsigned int find_next_fd(struct fdtable *fdt, unsigned int start)
+diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
+index b92740edc416..4b038f25f256 100644
+--- a/fs/gfs2/glock.c
++++ b/fs/gfs2/glock.c
+@@ -107,7 +107,7 @@ static int glock_wake_function(wait_queue_entry_t *wait, unsigned int mode,
+
+ static wait_queue_head_t *glock_waitqueue(struct lm_lockname *name)
+ {
+- u32 hash = jhash2((u32 *)name, sizeof(*name) / 4, 0);
++ u32 hash = jhash2((u32 *)name, ht_parms.key_len / 4, 0);
+
+ return glock_wait_table + hash_32(hash, GLOCK_WAIT_TABLE_BITS);
+ }
+diff --git a/fs/jbd2/commit.c b/fs/jbd2/commit.c
+index 2eb55c3361a8..efd0ce9489ae 100644
+--- a/fs/jbd2/commit.c
++++ b/fs/jbd2/commit.c
+@@ -694,9 +694,11 @@ void jbd2_journal_commit_transaction(journal_t *journal)
+ the last tag we set up. */
+
+ tag->t_flags |= cpu_to_be16(JBD2_FLAG_LAST_TAG);
+-
+- jbd2_descriptor_block_csum_set(journal, descriptor);
+ start_journal_io:
++ if (descriptor)
++ jbd2_descriptor_block_csum_set(journal,
++ descriptor);
++
+ for (i = 0; i < bufs; i++) {
+ struct buffer_head *bh = wbuf[i];
+ /*
+diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
+index 8ef6b6daaa7a..88f2a49338a1 100644
+--- a/fs/jbd2/journal.c
++++ b/fs/jbd2/journal.c
+@@ -1356,6 +1356,10 @@ static int journal_reset(journal_t *journal)
+ return jbd2_journal_start_thread(journal);
+ }
+
++/*
++ * This function expects that the caller will have locked the journal
++ * buffer head, and will return with it unlocked
++ */
+ static int jbd2_write_superblock(journal_t *journal, int write_flags)
+ {
+ struct buffer_head *bh = journal->j_sb_buffer;
+@@ -1365,7 +1369,6 @@ static int jbd2_write_superblock(journal_t *journal, int write_flags)
+ trace_jbd2_write_superblock(journal, write_flags);
+ if (!(journal->j_flags & JBD2_BARRIER))
+ write_flags &= ~(REQ_FUA | REQ_PREFLUSH);
+- lock_buffer(bh);
+ if (buffer_write_io_error(bh)) {
+ /*
+ * Oh, dear. A previous attempt to write the journal
+@@ -1424,6 +1427,7 @@ int jbd2_journal_update_sb_log_tail(journal_t *journal, tid_t tail_tid,
+ jbd_debug(1, "JBD2: updating superblock (start %lu, seq %u)\n",
+ tail_block, tail_tid);
+
++ lock_buffer(journal->j_sb_buffer);
+ sb->s_sequence = cpu_to_be32(tail_tid);
+ sb->s_start = cpu_to_be32(tail_block);
+
+@@ -1454,18 +1458,17 @@ static void jbd2_mark_journal_empty(journal_t *journal, int write_op)
+ journal_superblock_t *sb = journal->j_superblock;
+
+ BUG_ON(!mutex_is_locked(&journal->j_checkpoint_mutex));
+- read_lock(&journal->j_state_lock);
+- /* Is it already empty? */
+- if (sb->s_start == 0) {
+- read_unlock(&journal->j_state_lock);
++ lock_buffer(journal->j_sb_buffer);
++ if (sb->s_start == 0) { /* Is it already empty? */
++ unlock_buffer(journal->j_sb_buffer);
+ return;
+ }
++
+ jbd_debug(1, "JBD2: Marking journal as empty (seq %d)\n",
+ journal->j_tail_sequence);
+
+ sb->s_sequence = cpu_to_be32(journal->j_tail_sequence);
+ sb->s_start = cpu_to_be32(0);
+- read_unlock(&journal->j_state_lock);
+
+ jbd2_write_superblock(journal, write_op);
+
+@@ -1488,9 +1491,8 @@ void jbd2_journal_update_sb_errno(journal_t *journal)
+ journal_superblock_t *sb = journal->j_superblock;
+ int errcode;
+
+- read_lock(&journal->j_state_lock);
++ lock_buffer(journal->j_sb_buffer);
+ errcode = journal->j_errno;
+- read_unlock(&journal->j_state_lock);
+ if (errcode == -ESHUTDOWN)
+ errcode = 0;
+ jbd_debug(1, "JBD2: updating superblock error (errno %d)\n", errcode);
+@@ -1894,28 +1896,27 @@ int jbd2_journal_set_features (journal_t *journal, unsigned long compat,
+
+ sb = journal->j_superblock;
+
++ /* Load the checksum driver if necessary */
++ if ((journal->j_chksum_driver == NULL) &&
++ INCOMPAT_FEATURE_ON(JBD2_FEATURE_INCOMPAT_CSUM_V3)) {
++ journal->j_chksum_driver = crypto_alloc_shash("crc32c", 0, 0);
++ if (IS_ERR(journal->j_chksum_driver)) {
++ printk(KERN_ERR "JBD2: Cannot load crc32c driver.\n");
++ journal->j_chksum_driver = NULL;
++ return 0;
++ }
++ /* Precompute checksum seed for all metadata */
++ journal->j_csum_seed = jbd2_chksum(journal, ~0, sb->s_uuid,
++ sizeof(sb->s_uuid));
++ }
++
++ lock_buffer(journal->j_sb_buffer);
++
+ /* If enabling v3 checksums, update superblock */
+ if (INCOMPAT_FEATURE_ON(JBD2_FEATURE_INCOMPAT_CSUM_V3)) {
+ sb->s_checksum_type = JBD2_CRC32C_CHKSUM;
+ sb->s_feature_compat &=
+ ~cpu_to_be32(JBD2_FEATURE_COMPAT_CHECKSUM);
+-
+- /* Load the checksum driver */
+- if (journal->j_chksum_driver == NULL) {
+- journal->j_chksum_driver = crypto_alloc_shash("crc32c",
+- 0, 0);
+- if (IS_ERR(journal->j_chksum_driver)) {
+- printk(KERN_ERR "JBD2: Cannot load crc32c "
+- "driver.\n");
+- journal->j_chksum_driver = NULL;
+- return 0;
+- }
+-
+- /* Precompute checksum seed for all metadata */
+- journal->j_csum_seed = jbd2_chksum(journal, ~0,
+- sb->s_uuid,
+- sizeof(sb->s_uuid));
+- }
+ }
+
+ /* If enabling v1 checksums, downgrade superblock */
+@@ -1927,6 +1928,7 @@ int jbd2_journal_set_features (journal_t *journal, unsigned long compat,
+ sb->s_feature_compat |= cpu_to_be32(compat);
+ sb->s_feature_ro_compat |= cpu_to_be32(ro);
+ sb->s_feature_incompat |= cpu_to_be32(incompat);
++ unlock_buffer(journal->j_sb_buffer);
+
+ return 1;
+ #undef COMPAT_FEATURE_ON
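
The journal.c hunks above change the superblock locking convention: callers now take the buffer lock, update the superblock fields under it, and jbd2_write_superblock() releases the lock on the way out. A toy sketch of that caller-acquires, callee-releases ownership transfer (names are illustrative):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t sb_lock = PTHREAD_MUTEX_INITIALIZER;
static int sb_sequence;

/* Callee-unlocks: the function is handed a locked object, consumes
 * it, and releases the lock, mirroring jbd2_write_superblock(). */
static void write_superblock(void)
{
	printf("writing sb, seq=%d\n", sb_sequence);
	pthread_mutex_unlock(&sb_lock);		/* callee releases */
}

static void update_sb_log_tail(int tail)
{
	pthread_mutex_lock(&sb_lock);		/* caller acquires */
	sb_sequence = tail;			/* update under the lock */
	write_superblock();			/* consumes the lock */
}

int main(void)
{
	update_sb_log_tail(42);
	return 0;
}
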
+diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
+index cc35537232f2..f0d8dabe1ff5 100644
+--- a/fs/jbd2/transaction.c
++++ b/fs/jbd2/transaction.c
+@@ -1252,11 +1252,12 @@ int jbd2_journal_get_undo_access(handle_t *handle, struct buffer_head *bh)
+ struct journal_head *jh;
+ char *committed_data = NULL;
+
+- JBUFFER_TRACE(jh, "entry");
+ if (jbd2_write_access_granted(handle, bh, true))
+ return 0;
+
+ jh = jbd2_journal_add_journal_head(bh);
++ JBUFFER_TRACE(jh, "entry");
++
+ /*
+ * Do this first --- it can drop the journal lock, so we want to
+ * make sure that obtaining the committed_data is done
+@@ -1367,15 +1368,17 @@ int jbd2_journal_dirty_metadata(handle_t *handle, struct buffer_head *bh)
+
+ if (is_handle_aborted(handle))
+ return -EROFS;
+- if (!buffer_jbd(bh)) {
+- ret = -EUCLEAN;
+- goto out;
+- }
++ if (!buffer_jbd(bh))
++ return -EUCLEAN;
++
+ /*
+ * We don't grab jh reference here since the buffer must be part
+ * of the running transaction.
+ */
+ jh = bh2jh(bh);
++ jbd_debug(5, "journal_head %p\n", jh);
++ JBUFFER_TRACE(jh, "entry");
++
+ /*
+ * This and the following assertions are unreliable since we may see jh
+ * in inconsistent state unless we grab bh_state lock. But this is
+@@ -1409,9 +1412,6 @@ int jbd2_journal_dirty_metadata(handle_t *handle, struct buffer_head *bh)
+ }
+
+ journal = transaction->t_journal;
+- jbd_debug(5, "journal_head %p\n", jh);
+- JBUFFER_TRACE(jh, "entry");
+-
+ jbd_lock_bh_state(bh);
+
+ if (jh->b_modified == 0) {
+@@ -1609,14 +1609,21 @@ int jbd2_journal_forget (handle_t *handle, struct buffer_head *bh)
+ /* However, if the buffer is still owned by a prior
+ * (committing) transaction, we can't drop it yet... */
+ JBUFFER_TRACE(jh, "belongs to older transaction");
+- /* ... but we CAN drop it from the new transaction if we
+- * have also modified it since the original commit. */
++ /* ... but we CAN drop it from the new transaction by marking
++ * the buffer as freed and setting j_next_transaction to the
++ * new transaction, so that the commit code knows it should
++ * clear dirty bits when it is done with the buffer, and the
++ * buffer can be checkpointed only after the new transaction
++ * commits. */
+
+- if (jh->b_next_transaction) {
+- J_ASSERT(jh->b_next_transaction == transaction);
++ set_buffer_freed(bh);
++
++ if (!jh->b_next_transaction) {
+ spin_lock(&journal->j_list_lock);
+- jh->b_next_transaction = NULL;
++ jh->b_next_transaction = transaction;
+ spin_unlock(&journal->j_list_lock);
++ } else {
++ J_ASSERT(jh->b_next_transaction == transaction);
+
+ /*
+ * only drop a reference if this transaction modified
+diff --git a/fs/kernfs/mount.c b/fs/kernfs/mount.c
+index fdf527b6d79c..d71c9405874a 100644
+--- a/fs/kernfs/mount.c
++++ b/fs/kernfs/mount.c
+@@ -196,8 +196,10 @@ struct dentry *kernfs_node_dentry(struct kernfs_node *kn,
+ return dentry;
+
+ knparent = find_next_ancestor(kn, NULL);
+- if (WARN_ON(!knparent))
++ if (WARN_ON(!knparent)) {
++ dput(dentry);
+ return ERR_PTR(-EINVAL);
++ }
+
+ do {
+ struct dentry *dtmp;
+@@ -206,8 +208,10 @@ struct dentry *kernfs_node_dentry(struct kernfs_node *kn,
+ if (kn == knparent)
+ return dentry;
+ kntmp = find_next_ancestor(kn, knparent);
+- if (WARN_ON(!kntmp))
++ if (WARN_ON(!kntmp)) {
++ dput(dentry);
+ return ERR_PTR(-EINVAL);
++ }
+ dtmp = lookup_one_len_unlocked(kntmp->name, dentry,
+ strlen(kntmp->name));
+ dput(dentry);
+diff --git a/fs/lockd/host.c b/fs/lockd/host.c
+index 93fb7cf0b92b..f0b5c987d6ae 100644
+--- a/fs/lockd/host.c
++++ b/fs/lockd/host.c
+@@ -290,12 +290,11 @@ void nlmclnt_release_host(struct nlm_host *host)
+
+ WARN_ON_ONCE(host->h_server);
+
+- if (refcount_dec_and_test(&host->h_count)) {
++ if (refcount_dec_and_mutex_lock(&host->h_count, &nlm_host_mutex)) {
+ WARN_ON_ONCE(!list_empty(&host->h_lockowners));
+ WARN_ON_ONCE(!list_empty(&host->h_granted));
+ WARN_ON_ONCE(!list_empty(&host->h_reclaim));
+
+- mutex_lock(&nlm_host_mutex);
+ nlm_destroy_host_locked(host);
+ mutex_unlock(&nlm_host_mutex);
+ }
+diff --git a/fs/locks.c b/fs/locks.c
+index ff6af2c32601..5f468cd95f68 100644
+--- a/fs/locks.c
++++ b/fs/locks.c
+@@ -1160,6 +1160,11 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
+ */
+ error = -EDEADLK;
+ spin_lock(&blocked_lock_lock);
++ /*
++ * Ensure that we don't find any locks blocked on this
++ * request during deadlock detection.
++ */
++ __locks_wake_up_blocks(request);
+ if (likely(!posix_locks_deadlock(request, fl))) {
+ error = FILE_LOCK_DEFERRED;
+ __locks_insert_block(fl, request,
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 557a5d636183..44258c516305 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -947,6 +947,13 @@ nfs4_sequence_process_interrupted(struct nfs_client *client,
+
+ #endif /* !CONFIG_NFS_V4_1 */
+
++static void nfs41_sequence_res_init(struct nfs4_sequence_res *res)
++{
++ res->sr_timestamp = jiffies;
++ res->sr_status_flags = 0;
++ res->sr_status = 1;
++}
++
+ static
+ void nfs4_sequence_attach_slot(struct nfs4_sequence_args *args,
+ struct nfs4_sequence_res *res,
+@@ -958,10 +965,6 @@ void nfs4_sequence_attach_slot(struct nfs4_sequence_args *args,
+ args->sa_slot = slot;
+
+ res->sr_slot = slot;
+- res->sr_timestamp = jiffies;
+- res->sr_status_flags = 0;
+- res->sr_status = 1;
+-
+ }
+
+ int nfs4_setup_sequence(struct nfs_client *client,
+@@ -1007,6 +1010,7 @@ int nfs4_setup_sequence(struct nfs_client *client,
+
+ trace_nfs4_setup_sequence(session, args);
+ out_start:
++ nfs41_sequence_res_init(res);
+ rpc_call_start(task);
+ return 0;
+
+@@ -2934,7 +2938,8 @@ static int _nfs4_open_and_get_state(struct nfs4_opendata *opendata,
+ }
+
+ out:
+- nfs4_sequence_free_slot(&opendata->o_res.seq_res);
++ if (!opendata->cancelled)
++ nfs4_sequence_free_slot(&opendata->o_res.seq_res);
+ return ret;
+ }
+
+@@ -6302,7 +6307,6 @@ static struct nfs4_unlockdata *nfs4_alloc_unlockdata(struct file_lock *fl,
+ p->arg.seqid = seqid;
+ p->res.seqid = seqid;
+ p->lsp = lsp;
+- refcount_inc(&lsp->ls_count);
+ /* Ensure we don't close file until we're done freeing locks! */
+ p->ctx = get_nfs_open_context(ctx);
+ p->l_ctx = nfs_get_lock_context(ctx);
+@@ -6527,7 +6531,6 @@ static struct nfs4_lockdata *nfs4_alloc_lockdata(struct file_lock *fl,
+ p->res.lock_seqid = p->arg.lock_seqid;
+ p->lsp = lsp;
+ p->server = server;
+- refcount_inc(&lsp->ls_count);
+ p->ctx = get_nfs_open_context(ctx);
+ locks_init_lock(&p->fl);
+ locks_copy_lock(&p->fl, fl);
+diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c
+index e54d899c1848..a8951f1f7b4e 100644
+--- a/fs/nfs/pagelist.c
++++ b/fs/nfs/pagelist.c
+@@ -988,6 +988,17 @@ static void nfs_pageio_doio(struct nfs_pageio_descriptor *desc)
+ }
+ }
+
++static void
++nfs_pageio_cleanup_request(struct nfs_pageio_descriptor *desc,
++ struct nfs_page *req)
++{
++ LIST_HEAD(head);
++
++ nfs_list_remove_request(req);
++ nfs_list_add_request(req, &head);
++ desc->pg_completion_ops->error_cleanup(&head);
++}
++
+ /**
+ * nfs_pageio_add_request - Attempt to coalesce a request into a page list.
+ * @desc: destination io descriptor
+@@ -1025,10 +1036,8 @@ static int __nfs_pageio_add_request(struct nfs_pageio_descriptor *desc,
+ nfs_page_group_unlock(req);
+ desc->pg_moreio = 1;
+ nfs_pageio_doio(desc);
+- if (desc->pg_error < 0)
+- return 0;
+- if (mirror->pg_recoalesce)
+- return 0;
++ if (desc->pg_error < 0 || mirror->pg_recoalesce)
++ goto out_cleanup_subreq;
+ /* retry add_request for this subreq */
+ nfs_page_group_lock(req);
+ continue;
+@@ -1061,6 +1070,10 @@ err_ptr:
+ desc->pg_error = PTR_ERR(subreq);
+ nfs_page_group_unlock(req);
+ return 0;
++out_cleanup_subreq:
++ if (req != subreq)
++ nfs_pageio_cleanup_request(desc, subreq);
++ return 0;
+ }
+
+ static int nfs_do_recoalesce(struct nfs_pageio_descriptor *desc)
+@@ -1079,7 +1092,6 @@ static int nfs_do_recoalesce(struct nfs_pageio_descriptor *desc)
+ struct nfs_page *req;
+
+ req = list_first_entry(&head, struct nfs_page, wb_list);
+- nfs_list_remove_request(req);
+ if (__nfs_pageio_add_request(desc, req))
+ continue;
+ if (desc->pg_error < 0) {
+@@ -1168,11 +1180,14 @@ int nfs_pageio_add_request(struct nfs_pageio_descriptor *desc,
+ if (nfs_pgio_has_mirroring(desc))
+ desc->pg_mirror_idx = midx;
+ if (!nfs_pageio_add_request_mirror(desc, dupreq))
+- goto out_failed;
++ goto out_cleanup_subreq;
+ }
+
+ return 1;
+
++out_cleanup_subreq:
++ if (req != dupreq)
++ nfs_pageio_cleanup_request(desc, dupreq);
+ out_failed:
+ nfs_pageio_error_cleanup(desc);
+ return 0;
+@@ -1194,7 +1209,7 @@ static void nfs_pageio_complete_mirror(struct nfs_pageio_descriptor *desc,
+ desc->pg_mirror_idx = mirror_idx;
+ for (;;) {
+ nfs_pageio_doio(desc);
+- if (!mirror->pg_recoalesce)
++ if (desc->pg_error < 0 || !mirror->pg_recoalesce)
+ break;
+ if (!nfs_do_recoalesce(desc))
+ break;
+diff --git a/fs/nfsd/nfs3proc.c b/fs/nfsd/nfs3proc.c
+index 9eb8086ea841..c9cf46e0c040 100644
+--- a/fs/nfsd/nfs3proc.c
++++ b/fs/nfsd/nfs3proc.c
+@@ -463,8 +463,19 @@ nfsd3_proc_readdir(struct svc_rqst *rqstp)
+ &resp->common, nfs3svc_encode_entry);
+ memcpy(resp->verf, argp->verf, 8);
+ resp->count = resp->buffer - argp->buffer;
+- if (resp->offset)
+- xdr_encode_hyper(resp->offset, argp->cookie);
++ if (resp->offset) {
++ loff_t offset = argp->cookie;
++
++ if (unlikely(resp->offset1)) {
++ /* we ended up with offset on a page boundary */
++ *resp->offset = htonl(offset >> 32);
++ *resp->offset1 = htonl(offset & 0xffffffff);
++ resp->offset1 = NULL;
++ } else {
++ xdr_encode_hyper(resp->offset, offset);
++ }
++ resp->offset = NULL;
++ }
+
+ RETURN_STATUS(nfserr);
+ }
+@@ -533,6 +544,7 @@ nfsd3_proc_readdirplus(struct svc_rqst *rqstp)
+ } else {
+ xdr_encode_hyper(resp->offset, offset);
+ }
++ resp->offset = NULL;
+ }
+
+ RETURN_STATUS(nfserr);
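
The readdir fix above handles the case where the 8-byte cookie lands across a page boundary: the two 32-bit halves of the XDR hyper then live at discontiguous addresses (offset and offset1) and must be stored separately, each in network byte order. The encoding itself, reproduced outside the kernel:

#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

/* An XDR "hyper" is two big-endian 32-bit words. When the encode
 * buffer straddles a page, the words may not be adjacent, so they
 * are written through separate pointers, as in the fixup above. */
static void encode_hyper_split(uint32_t *hi, uint32_t *lo, uint64_t v)
{
	*hi = htonl((uint32_t)(v >> 32));
	*lo = htonl((uint32_t)(v & 0xffffffff));
}

int main(void)
{
	uint32_t w0, w1;

	encode_hyper_split(&w0, &w1, 0x1122334455667788ULL);
	printf("%08x %08x\n", ntohl(w0), ntohl(w1));
	return 0;
}
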
+diff --git a/fs/nfsd/nfs3xdr.c b/fs/nfsd/nfs3xdr.c
+index 9b973f4f7d01..83919116d5cb 100644
+--- a/fs/nfsd/nfs3xdr.c
++++ b/fs/nfsd/nfs3xdr.c
+@@ -921,6 +921,7 @@ encode_entry(struct readdir_cd *ccd, const char *name, int namlen,
+ } else {
+ xdr_encode_hyper(cd->offset, offset64);
+ }
++ cd->offset = NULL;
+ }
+
+ /*
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index fb3c9844c82a..6a45fb00c5fc 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -1544,16 +1544,16 @@ static u32 nfsd4_get_drc_mem(struct nfsd4_channel_attrs *ca)
+ {
+ u32 slotsize = slot_bytes(ca);
+ u32 num = ca->maxreqs;
+- int avail;
++ unsigned long avail, total_avail;
+
+ spin_lock(&nfsd_drc_lock);
+- avail = min((unsigned long)NFSD_MAX_MEM_PER_SESSION,
+- nfsd_drc_max_mem - nfsd_drc_mem_used);
++ total_avail = nfsd_drc_max_mem - nfsd_drc_mem_used;
++ avail = min((unsigned long)NFSD_MAX_MEM_PER_SESSION, total_avail);
+ /*
+ * Never use more than a third of the remaining memory,
+ * unless it's the only way to give this client a slot:
+ */
+- avail = clamp_t(int, avail, slotsize, avail/3);
++ avail = clamp_t(int, avail, slotsize, total_avail/3);
+ num = min_t(int, num, avail / slotsize);
+ nfsd_drc_mem_used += num * slotsize;
+ spin_unlock(&nfsd_drc_lock);
+diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
+index 72a7681f4046..f2feb2d11bae 100644
+--- a/fs/nfsd/nfsctl.c
++++ b/fs/nfsd/nfsctl.c
+@@ -1126,7 +1126,7 @@ static ssize_t write_v4_end_grace(struct file *file, char *buf, size_t size)
+ case 'Y':
+ case 'y':
+ case '1':
+- if (nn->nfsd_serv)
++ if (!nn->nfsd_serv)
+ return -EBUSY;
+ nfsd4_end_grace(nn);
+ break;
+diff --git a/fs/ocfs2/cluster/nodemanager.c b/fs/ocfs2/cluster/nodemanager.c
+index 0e4166cc23a0..4ac775e32240 100644
+--- a/fs/ocfs2/cluster/nodemanager.c
++++ b/fs/ocfs2/cluster/nodemanager.c
+@@ -621,13 +621,15 @@ static void o2nm_node_group_drop_item(struct config_group *group,
+ struct o2nm_node *node = to_o2nm_node(item);
+ struct o2nm_cluster *cluster = to_o2nm_cluster(group->cg_item.ci_parent);
+
+- o2net_disconnect_node(node);
++ if (cluster->cl_nodes[node->nd_num] == node) {
++ o2net_disconnect_node(node);
+
+- if (cluster->cl_has_local &&
+- (cluster->cl_local_node == node->nd_num)) {
+- cluster->cl_has_local = 0;
+- cluster->cl_local_node = O2NM_INVALID_NODE_NUM;
+- o2net_stop_listening(node);
++ if (cluster->cl_has_local &&
++ (cluster->cl_local_node == node->nd_num)) {
++ cluster->cl_has_local = 0;
++ cluster->cl_local_node = O2NM_INVALID_NODE_NUM;
++ o2net_stop_listening(node);
++ }
+ }
+
+ /* XXX call into net to stop this node from trading messages */
+diff --git a/fs/ocfs2/refcounttree.c b/fs/ocfs2/refcounttree.c
+index a35259eebc56..1dc9a08e8bdc 100644
+--- a/fs/ocfs2/refcounttree.c
++++ b/fs/ocfs2/refcounttree.c
+@@ -4719,22 +4719,23 @@ out:
+
+ /* Lock an inode and grab a bh pointing to the inode. */
+ int ocfs2_reflink_inodes_lock(struct inode *s_inode,
+- struct buffer_head **bh1,
++ struct buffer_head **bh_s,
+ struct inode *t_inode,
+- struct buffer_head **bh2)
++ struct buffer_head **bh_t)
+ {
+- struct inode *inode1;
+- struct inode *inode2;
++ struct inode *inode1 = s_inode;
++ struct inode *inode2 = t_inode;
+ struct ocfs2_inode_info *oi1;
+ struct ocfs2_inode_info *oi2;
++ struct buffer_head *bh1 = NULL;
++ struct buffer_head *bh2 = NULL;
+ bool same_inode = (s_inode == t_inode);
++ bool need_swap = (inode1->i_ino > inode2->i_ino);
+ int status;
+
+ /* First grab the VFS and rw locks. */
+ lock_two_nondirectories(s_inode, t_inode);
+- inode1 = s_inode;
+- inode2 = t_inode;
+- if (inode1->i_ino > inode2->i_ino)
++ if (need_swap)
+ swap(inode1, inode2);
+
+ status = ocfs2_rw_lock(inode1, 1);
+@@ -4757,17 +4758,13 @@ int ocfs2_reflink_inodes_lock(struct inode *s_inode,
+ trace_ocfs2_double_lock((unsigned long long)oi1->ip_blkno,
+ (unsigned long long)oi2->ip_blkno);
+
+- if (*bh1)
+- *bh1 = NULL;
+- if (*bh2)
+- *bh2 = NULL;
+-
+ /* We always want to lock the one with the lower lockid first. */
+ if (oi1->ip_blkno > oi2->ip_blkno)
+ mlog_errno(-ENOLCK);
+
+ /* lock id1 */
+- status = ocfs2_inode_lock_nested(inode1, bh1, 1, OI_LS_REFLINK_TARGET);
++ status = ocfs2_inode_lock_nested(inode1, &bh1, 1,
++ OI_LS_REFLINK_TARGET);
+ if (status < 0) {
+ if (status != -ENOENT)
+ mlog_errno(status);
+@@ -4776,15 +4773,25 @@ int ocfs2_reflink_inodes_lock(struct inode *s_inode,
+
+ /* lock id2 */
+ if (!same_inode) {
+- status = ocfs2_inode_lock_nested(inode2, bh2, 1,
++ status = ocfs2_inode_lock_nested(inode2, &bh2, 1,
+ OI_LS_REFLINK_TARGET);
+ if (status < 0) {
+ if (status != -ENOENT)
+ mlog_errno(status);
+ goto out_cl1;
+ }
+- } else
+- *bh2 = *bh1;
++ } else {
++ bh2 = bh1;
++ }
++
++ /*
++ * If we swapped inode order above, we have to swap the buffer heads
++ * before passing them back to the caller.
++ */
++ if (need_swap)
++ swap(bh1, bh2);
++ *bh_s = bh1;
++ *bh_t = bh2;
+
+ trace_ocfs2_double_lock_end(
+ (unsigned long long)oi1->ip_blkno,
+@@ -4794,8 +4801,7 @@ int ocfs2_reflink_inodes_lock(struct inode *s_inode,
+
+ out_cl1:
+ ocfs2_inode_unlock(inode1, 1);
+- brelse(*bh1);
+- *bh1 = NULL;
++ brelse(bh1);
+ out_rw2:
+ ocfs2_rw_unlock(inode2, 1);
+ out_i2:
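
ocfs2_reflink_inodes_lock() above orders the two cluster locks by block number to avoid ABBA deadlocks, but the caller still expects the first buffer head to belong to the source inode, hence the need_swap bookkeeping that swaps bh1/bh2 back before returning. A sketch of acquiring in id order while reporting in caller order (types and names are illustrative):

#include <pthread.h>
#include <stdbool.h>

struct obj {
	unsigned long id;
	pthread_mutex_t lock;
};

/* Always lock the lower id first; if we reordered for locking,
 * hand the outputs back in the caller's order anyway. */
static void lock_pair(struct obj *src, struct obj *tgt,
		      struct obj **src_out, struct obj **tgt_out)
{
	struct obj *a = src, *b = tgt;
	bool need_swap = a->id > b->id;

	if (need_swap) {
		a = tgt;
		b = src;
	}
	pthread_mutex_lock(&a->lock);
	if (a != b)
		pthread_mutex_lock(&b->lock);
	*src_out = src;		/* caller order, not lock order */
	*tgt_out = tgt;
}

int main(void)
{
	struct obj x = { 2, PTHREAD_MUTEX_INITIALIZER };
	struct obj y = { 1, PTHREAD_MUTEX_INITIALIZER };
	struct obj *s, *t;

	lock_pair(&x, &y, &s, &t);
	return !(s == &x && t == &y);
}
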
+diff --git a/fs/open.c b/fs/open.c
+index 0285ce7dbd51..f1c2f855fd43 100644
+--- a/fs/open.c
++++ b/fs/open.c
+@@ -733,6 +733,12 @@ static int do_dentry_open(struct file *f,
+ return 0;
+ }
+
++ /* Any file opened for execve()/uselib() has to be a regular file. */
++ if (unlikely(f->f_flags & FMODE_EXEC && !S_ISREG(inode->i_mode))) {
++ error = -EACCES;
++ goto cleanup_file;
++ }
++
+ if (f->f_mode & FMODE_WRITE && !special_file(inode->i_mode)) {
+ error = get_write_access(inode);
+ if (unlikely(error))
+diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c
+index 9e62dcf06fc4..68b3303e4b46 100644
+--- a/fs/overlayfs/copy_up.c
++++ b/fs/overlayfs/copy_up.c
+@@ -443,6 +443,24 @@ static int ovl_copy_up_inode(struct ovl_copy_up_ctx *c, struct dentry *temp)
+ {
+ int err;
+
++ /*
++ * Copy up the data first and then the xattrs. Writing data after
++ * the xattrs would remove the security.capability xattr automatically.
++ */
++ if (S_ISREG(c->stat.mode) && !c->metacopy) {
++ struct path upperpath, datapath;
++
++ ovl_path_upper(c->dentry, &upperpath);
++ if (WARN_ON(upperpath.dentry != NULL))
++ return -EIO;
++ upperpath.dentry = temp;
++
++ ovl_path_lowerdata(c->dentry, &datapath);
++ err = ovl_copy_up_data(&datapath, &upperpath, c->stat.size);
++ if (err)
++ return err;
++ }
++
+ err = ovl_copy_xattr(c->lowerpath.dentry, temp);
+ if (err)
+ return err;
+@@ -460,19 +478,6 @@ static int ovl_copy_up_inode(struct ovl_copy_up_ctx *c, struct dentry *temp)
+ return err;
+ }
+
+- if (S_ISREG(c->stat.mode) && !c->metacopy) {
+- struct path upperpath, datapath;
+-
+- ovl_path_upper(c->dentry, &upperpath);
+- BUG_ON(upperpath.dentry != NULL);
+- upperpath.dentry = temp;
+-
+- ovl_path_lowerdata(c->dentry, &datapath);
+- err = ovl_copy_up_data(&datapath, &upperpath, c->stat.size);
+- if (err)
+- return err;
+- }
+-
+ if (c->metacopy) {
+ err = ovl_check_setxattr(c->dentry, temp, OVL_XATTR_METACOPY,
+ NULL, 0, -EOPNOTSUPP);
+@@ -737,6 +742,8 @@ static int ovl_copy_up_meta_inode_data(struct ovl_copy_up_ctx *c)
+ {
+ struct path upperpath, datapath;
+ int err;
++ char *capability = NULL;
++ ssize_t uninitialized_var(cap_size);
+
+ ovl_path_upper(c->dentry, &upperpath);
+ if (WARN_ON(upperpath.dentry == NULL))
+@@ -746,15 +753,37 @@ static int ovl_copy_up_meta_inode_data(struct ovl_copy_up_ctx *c)
+ if (WARN_ON(datapath.dentry == NULL))
+ return -EIO;
+
++ if (c->stat.size) {
++ err = cap_size = ovl_getxattr(upperpath.dentry, XATTR_NAME_CAPS,
++ &capability, 0);
++ if (err < 0 && err != -ENODATA)
++ goto out;
++ }
++
+ err = ovl_copy_up_data(&datapath, &upperpath, c->stat.size);
+ if (err)
+- return err;
++ goto out_free;
++
++ /*
++ * Writing to the upper file will clear the security.capability
++ * xattr. We don't want that to happen during a normal copy-up
++ * operation.
++ */
++ if (capability) {
++ err = ovl_do_setxattr(upperpath.dentry, XATTR_NAME_CAPS,
++ capability, cap_size, 0);
++ if (err)
++ goto out_free;
++ }
++
+
+ err = vfs_removexattr(upperpath.dentry, OVL_XATTR_METACOPY);
+ if (err)
+- return err;
++ goto out_free;
+
+ ovl_set_upperdata(d_inode(c->dentry));
++out_free:
++ kfree(capability);
++out:
+ return err;
+ }
+
+diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h
+index 5e45cb3630a0..9c6018287d57 100644
+--- a/fs/overlayfs/overlayfs.h
++++ b/fs/overlayfs/overlayfs.h
+@@ -277,6 +277,8 @@ int ovl_lock_rename_workdir(struct dentry *workdir, struct dentry *upperdir);
+ int ovl_check_metacopy_xattr(struct dentry *dentry);
+ bool ovl_is_metacopy_dentry(struct dentry *dentry);
+ char *ovl_get_redirect_xattr(struct dentry *dentry, int padding);
++ssize_t ovl_getxattr(struct dentry *dentry, char *name, char **value,
++ size_t padding);
+
+ static inline bool ovl_is_impuredir(struct dentry *dentry)
+ {
+diff --git a/fs/overlayfs/util.c b/fs/overlayfs/util.c
+index 7c01327b1852..4035e640f402 100644
+--- a/fs/overlayfs/util.c
++++ b/fs/overlayfs/util.c
+@@ -863,28 +863,49 @@ bool ovl_is_metacopy_dentry(struct dentry *dentry)
+ return (oe->numlower > 1);
+ }
+
+-char *ovl_get_redirect_xattr(struct dentry *dentry, int padding)
++ssize_t ovl_getxattr(struct dentry *dentry, char *name, char **value,
++ size_t padding)
+ {
+- int res;
+- char *s, *next, *buf = NULL;
++ ssize_t res;
++ char *buf = NULL;
+
+- res = vfs_getxattr(dentry, OVL_XATTR_REDIRECT, NULL, 0);
++ res = vfs_getxattr(dentry, name, NULL, 0);
+ if (res < 0) {
+ if (res == -ENODATA || res == -EOPNOTSUPP)
+- return NULL;
++ return -ENODATA;
+ goto fail;
+ }
+
+- buf = kzalloc(res + padding + 1, GFP_KERNEL);
+- if (!buf)
+- return ERR_PTR(-ENOMEM);
++ if (res != 0) {
++ buf = kzalloc(res + padding, GFP_KERNEL);
++ if (!buf)
++ return -ENOMEM;
+
+- if (res == 0)
+- goto invalid;
++ res = vfs_getxattr(dentry, name, buf, res);
++ if (res < 0)
++ goto fail;
++ }
++ *value = buf;
++
++ return res;
++
++fail:
++ pr_warn_ratelimited("overlayfs: failed to get xattr %s: err=%zi)\n",
++ name, res);
++ kfree(buf);
++ return res;
++}
+
+- res = vfs_getxattr(dentry, OVL_XATTR_REDIRECT, buf, res);
++char *ovl_get_redirect_xattr(struct dentry *dentry, int padding)
++{
++ int res;
++ char *s, *next, *buf = NULL;
++
++ res = ovl_getxattr(dentry, OVL_XATTR_REDIRECT, &buf, padding + 1);
++ if (res == -ENODATA)
++ return NULL;
+ if (res < 0)
+- goto fail;
++ return ERR_PTR(res);
+ if (res == 0)
+ goto invalid;
+
+@@ -900,15 +921,9 @@ char *ovl_get_redirect_xattr(struct dentry *dentry, int padding)
+ }
+
+ return buf;
+-
+-err_free:
+- kfree(buf);
+- return ERR_PTR(res);
+-fail:
+- pr_warn_ratelimited("overlayfs: failed to get redirect (%i)\n", res);
+- goto err_free;
+ invalid:
+ pr_warn_ratelimited("overlayfs: invalid redirect (%s)\n", buf);
+ res = -EINVAL;
+- goto err_free;
++ kfree(buf);
++ return ERR_PTR(res);
+ }
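
The new ovl_getxattr() helper above factors out the usual two-pass xattr read: query the size with a NULL buffer, allocate it plus any caller-requested padding, then fetch for real. The same shape against the userspace getxattr(2) API, with error handling simplified:

#include <stdlib.h>
#include <sys/types.h>
#include <sys/xattr.h>

/* Returns the value length and sets *value (caller frees), or -1.
 * The size can race with a concurrent setxattr; the kernel helper
 * has the same window. */
static ssize_t read_xattr(const char *path, const char *name,
			  char **value, size_t padding)
{
	ssize_t res = getxattr(path, name, NULL, 0);	/* size query */
	char *buf;

	*value = NULL;
	if (res < 0)
		return -1;
	buf = calloc(1, res + padding);
	if (!buf)
		return -1;
	if (res > 0) {
		res = getxattr(path, name, buf, res);	/* real fetch */
		if (res < 0) {
			free(buf);
			return -1;
		}
	}
	*value = buf;
	return res;
}

int main(void)
{
	char *v;
	ssize_t n = read_xattr("/", "user.test", &v, 1);

	if (n >= 0)
		free(v);
	return 0;
}
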
+diff --git a/fs/pipe.c b/fs/pipe.c
+index bdc5d3c0977d..c51750ed4011 100644
+--- a/fs/pipe.c
++++ b/fs/pipe.c
+@@ -234,6 +234,14 @@ static const struct pipe_buf_operations anon_pipe_buf_ops = {
+ .get = generic_pipe_buf_get,
+ };
+
++static const struct pipe_buf_operations anon_pipe_buf_nomerge_ops = {
++ .can_merge = 0,
++ .confirm = generic_pipe_buf_confirm,
++ .release = anon_pipe_buf_release,
++ .steal = anon_pipe_buf_steal,
++ .get = generic_pipe_buf_get,
++};
++
+ static const struct pipe_buf_operations packet_pipe_buf_ops = {
+ .can_merge = 0,
+ .confirm = generic_pipe_buf_confirm,
+@@ -242,6 +250,12 @@ static const struct pipe_buf_operations packet_pipe_buf_ops = {
+ .get = generic_pipe_buf_get,
+ };
+
++void pipe_buf_mark_unmergeable(struct pipe_buffer *buf)
++{
++ if (buf->ops == &anon_pipe_buf_ops)
++ buf->ops = &anon_pipe_buf_nomerge_ops;
++}
++
+ static ssize_t
+ pipe_read(struct kiocb *iocb, struct iov_iter *to)
+ {
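
pipe_buf_mark_unmergeable() above changes a buffer's behaviour by retargeting its const ops pointer to a table that differs only in can_merge, rather than adding a per-buffer flag; that lets splice (below) prevent later writes from merging into a borrowed page. The idiom in miniature:

#include <stdio.h>

struct buf_ops { int can_merge; };

static const struct buf_ops merge_ops = { .can_merge = 1 };
static const struct buf_ops nomerge_ops = { .can_merge = 0 };

struct buf { const struct buf_ops *ops; };

/* Behaviour is selected by swapping the ops table; only the
 * default ops are demoted, anything else is left alone. */
static void mark_unmergeable(struct buf *b)
{
	if (b->ops == &merge_ops)
		b->ops = &nomerge_ops;
}

int main(void)
{
	struct buf b = { .ops = &merge_ops };

	mark_unmergeable(&b);
	printf("can_merge=%d\n", b.ops->can_merge);	/* 0 */
	return 0;
}
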
+diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c
+index 4d598a399bbf..d65390727541 100644
+--- a/fs/proc/proc_sysctl.c
++++ b/fs/proc/proc_sysctl.c
+@@ -1626,7 +1626,8 @@ static void drop_sysctl_table(struct ctl_table_header *header)
+ if (--header->nreg)
+ return;
+
+- put_links(header);
++ if (parent)
++ put_links(header);
+ start_unregistering(header);
+ if (!--header->count)
+ kfree_rcu(header, rcu);
+diff --git a/fs/read_write.c b/fs/read_write.c
+index ff3c5e6f87cf..27b69b85d49f 100644
+--- a/fs/read_write.c
++++ b/fs/read_write.c
+@@ -1238,6 +1238,9 @@ COMPAT_SYSCALL_DEFINE5(preadv64v2, unsigned long, fd,
+ const struct compat_iovec __user *,vec,
+ unsigned long, vlen, loff_t, pos, rwf_t, flags)
+ {
++ if (pos == -1)
++ return do_compat_readv(fd, vec, vlen, flags);
++
+ return do_compat_preadv64(fd, vec, vlen, pos, flags);
+ }
+ #endif
+@@ -1344,6 +1347,9 @@ COMPAT_SYSCALL_DEFINE5(pwritev64v2, unsigned long, fd,
+ const struct compat_iovec __user *,vec,
+ unsigned long, vlen, loff_t, pos, rwf_t, flags)
+ {
++ if (pos == -1)
++ return do_compat_writev(fd, vec, vlen, flags);
++
+ return do_compat_pwritev64(fd, vec, vlen, pos, flags);
+ }
+ #endif
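
The compat fix above makes the 32-bit preadv2/pwritev2 entry points honour the documented rule that pos == -1 means "use and advance the file offset", matching the native syscalls. For reference, how a caller exercises that rule; preadv2() needs _GNU_SOURCE and a reasonably recent glibc:

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
	char buf[16] = { 0 };
	struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) - 1 };
	FILE *f = fopen("/etc/hostname", "r");

	if (!f)
		return 1;
	/* pos == -1: read at (and advance) the current file offset. */
	if (preadv2(fileno(f), &iov, 1, -1, 0) < 0)
		perror("preadv2");
	else
		printf("read: %s", buf);
	fclose(f);
	return 0;
}
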
+diff --git a/fs/splice.c b/fs/splice.c
+index de2ede048473..90c29675d573 100644
+--- a/fs/splice.c
++++ b/fs/splice.c
+@@ -1597,6 +1597,8 @@ retry:
+ */
+ obuf->flags &= ~PIPE_BUF_FLAG_GIFT;
+
++ pipe_buf_mark_unmergeable(obuf);
++
+ obuf->len = len;
+ opipe->nrbufs++;
+ ibuf->offset += obuf->len;
+@@ -1671,6 +1673,8 @@ static int link_pipe(struct pipe_inode_info *ipipe,
+ */
+ obuf->flags &= ~PIPE_BUF_FLAG_GIFT;
+
++ pipe_buf_mark_unmergeable(obuf);
++
+ if (obuf->len > len)
+ obuf->len = len;
+
+diff --git a/fs/udf/truncate.c b/fs/udf/truncate.c
+index b647f0bd150c..94220ba85628 100644
+--- a/fs/udf/truncate.c
++++ b/fs/udf/truncate.c
+@@ -260,6 +260,9 @@ void udf_truncate_extents(struct inode *inode)
+ epos.block = eloc;
+ epos.bh = udf_tread(sb,
+ udf_get_lb_pblock(sb, &eloc, 0));
++ /* Error reading indirect block? */
++ if (!epos.bh)
++ return;
+ if (elen)
+ indirect_ext_len =
+ (elen + sb->s_blocksize - 1) >>
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index 3d7a6a9c2370..f8f6f04c4453 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -733,7 +733,7 @@
+ KEEP(*(.orc_unwind_ip)) \
+ __stop_orc_unwind_ip = .; \
+ } \
+- . = ALIGN(6); \
++ . = ALIGN(2); \
+ .orc_unwind : AT(ADDR(.orc_unwind) - LOAD_OFFSET) { \
+ __start_orc_unwind = .; \
+ KEEP(*(.orc_unwind)) \
+diff --git a/include/drm/drm_cache.h b/include/drm/drm_cache.h
+index bfe1639df02d..97fc498dc767 100644
+--- a/include/drm/drm_cache.h
++++ b/include/drm/drm_cache.h
+@@ -47,6 +47,24 @@ static inline bool drm_arch_can_wc_memory(void)
+ return false;
+ #elif defined(CONFIG_MIPS) && defined(CONFIG_CPU_LOONGSON3)
+ return false;
++#elif defined(CONFIG_ARM) || defined(CONFIG_ARM64)
++ /*
++ * The DRM driver stack is designed to work with cache coherent devices
++ * only, but permits an optimization to be enabled in some cases, where
++ * for some buffers, both the CPU and the GPU use uncached mappings,
++ * removing the need for DMA snooping and allocation in the CPU caches.
++ *
++ * The use of uncached GPU mappings relies on the correct implementation
++ * of the PCIe NoSnoop TLP attribute by the platform, otherwise the GPU
++ * will use cached mappings nonetheless. On x86 platforms, this does not
++ * seem to matter, as uncached CPU mappings will snoop the caches in any
++ * case. However, on ARM and arm64, enabling this optimization on a
++ * platform where NoSnoop is ignored results in loss of coherency, which
++ * breaks correct operation of the device. Since we have no way of
++ * detecting whether NoSnoop works or not, just disable this
++ * optimization entirely for ARM and arm64.
++ */
++ return false;
+ #else
+ return true;
+ #endif
+diff --git a/include/linux/atalk.h b/include/linux/atalk.h
+index 23f805562f4e..840cf92307ba 100644
+--- a/include/linux/atalk.h
++++ b/include/linux/atalk.h
+@@ -161,16 +161,26 @@ extern int sysctl_aarp_resolve_time;
+ extern void atalk_register_sysctl(void);
+ extern void atalk_unregister_sysctl(void);
+ #else
+-#define atalk_register_sysctl() do { } while(0)
+-#define atalk_unregister_sysctl() do { } while(0)
++static inline int atalk_register_sysctl(void)
++{
++ return 0;
++}
++static inline void atalk_unregister_sysctl(void)
++{
++}
+ #endif
+
+ #ifdef CONFIG_PROC_FS
+ extern int atalk_proc_init(void);
+ extern void atalk_proc_exit(void);
+ #else
+-#define atalk_proc_init() ({ 0; })
+-#define atalk_proc_exit() do { } while(0)
++static inline int atalk_proc_init(void)
++{
++ return 0;
++}
++static inline void atalk_proc_exit(void)
++{
++}
+ #endif /* CONFIG_PROC_FS */
+
+ #endif /* __LINUX_ATALK_H__ */
+diff --git a/include/linux/bitrev.h b/include/linux/bitrev.h
+index 50fb0dee23e8..d35b8ec1c485 100644
+--- a/include/linux/bitrev.h
++++ b/include/linux/bitrev.h
+@@ -34,41 +34,41 @@ static inline u32 __bitrev32(u32 x)
+
+ #define __constant_bitrev32(x) \
+ ({ \
+- u32 __x = x; \
+- __x = (__x >> 16) | (__x << 16); \
+- __x = ((__x & (u32)0xFF00FF00UL) >> 8) | ((__x & (u32)0x00FF00FFUL) << 8); \
+- __x = ((__x & (u32)0xF0F0F0F0UL) >> 4) | ((__x & (u32)0x0F0F0F0FUL) << 4); \
+- __x = ((__x & (u32)0xCCCCCCCCUL) >> 2) | ((__x & (u32)0x33333333UL) << 2); \
+- __x = ((__x & (u32)0xAAAAAAAAUL) >> 1) | ((__x & (u32)0x55555555UL) << 1); \
+- __x; \
++ u32 ___x = x; \
++ ___x = (___x >> 16) | (___x << 16); \
++ ___x = ((___x & (u32)0xFF00FF00UL) >> 8) | ((___x & (u32)0x00FF00FFUL) << 8); \
++ ___x = ((___x & (u32)0xF0F0F0F0UL) >> 4) | ((___x & (u32)0x0F0F0F0FUL) << 4); \
++ ___x = ((___x & (u32)0xCCCCCCCCUL) >> 2) | ((___x & (u32)0x33333333UL) << 2); \
++ ___x = ((___x & (u32)0xAAAAAAAAUL) >> 1) | ((___x & (u32)0x55555555UL) << 1); \
++ ___x; \
+ })
+
+ #define __constant_bitrev16(x) \
+ ({ \
+- u16 __x = x; \
+- __x = (__x >> 8) | (__x << 8); \
+- __x = ((__x & (u16)0xF0F0U) >> 4) | ((__x & (u16)0x0F0FU) << 4); \
+- __x = ((__x & (u16)0xCCCCU) >> 2) | ((__x & (u16)0x3333U) << 2); \
+- __x = ((__x & (u16)0xAAAAU) >> 1) | ((__x & (u16)0x5555U) << 1); \
+- __x; \
++ u16 ___x = x; \
++ ___x = (___x >> 8) | (___x << 8); \
++ ___x = ((___x & (u16)0xF0F0U) >> 4) | ((___x & (u16)0x0F0FU) << 4); \
++ ___x = ((___x & (u16)0xCCCCU) >> 2) | ((___x & (u16)0x3333U) << 2); \
++ ___x = ((___x & (u16)0xAAAAU) >> 1) | ((___x & (u16)0x5555U) << 1); \
++ ___x; \
+ })
+
+ #define __constant_bitrev8x4(x) \
+ ({ \
+- u32 __x = x; \
+- __x = ((__x & (u32)0xF0F0F0F0UL) >> 4) | ((__x & (u32)0x0F0F0F0FUL) << 4); \
+- __x = ((__x & (u32)0xCCCCCCCCUL) >> 2) | ((__x & (u32)0x33333333UL) << 2); \
+- __x = ((__x & (u32)0xAAAAAAAAUL) >> 1) | ((__x & (u32)0x55555555UL) << 1); \
+- __x; \
++ u32 ___x = x; \
++ ___x = ((___x & (u32)0xF0F0F0F0UL) >> 4) | ((___x & (u32)0x0F0F0F0FUL) << 4); \
++ ___x = ((___x & (u32)0xCCCCCCCCUL) >> 2) | ((___x & (u32)0x33333333UL) << 2); \
++ ___x = ((___x & (u32)0xAAAAAAAAUL) >> 1) | ((___x & (u32)0x55555555UL) << 1); \
++ ___x; \
+ })
+
+ #define __constant_bitrev8(x) \
+ ({ \
+- u8 __x = x; \
+- __x = (__x >> 4) | (__x << 4); \
+- __x = ((__x & (u8)0xCCU) >> 2) | ((__x & (u8)0x33U) << 2); \
+- __x = ((__x & (u8)0xAAU) >> 1) | ((__x & (u8)0x55U) << 1); \
+- __x; \
++ u8 ___x = x; \
++ ___x = (___x >> 4) | (___x << 4); \
++ ___x = ((___x & (u8)0xCCU) >> 2) | ((___x & (u8)0x33U) << 2); \
++ ___x = ((___x & (u8)0xAAU) >> 1) | ((___x & (u8)0x55U) << 1); \
++ ___x; \
+ })
+
+ #define bitrev32(x) \
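The wholesale __x to ___x rename above fixes a name-collision hazard: the bitrev8()/bitrev16()/bitrev32() wrappers themselves declare a local __x and pass it into these __constant_* helpers, so the helper's old "u32 __x = x;" expanded to "u32 __x = __x;", a self-initialization that reads the fresh, uninitialized variable (clang flags it with -Wuninitialized). A standalone sketch of the pattern, with made-up macro names:

    #include <stdio.h>
    #include <stdint.h>

    /* Broken pattern: the helper reuses the caller's temporary name. */
    #define INNER_BAD(x)  ({ uint8_t __x = x; (uint8_t)(__x + 1); })
    #define OUTER_BAD(x)  ({ uint8_t __x = x; INNER_BAD(__x); })
    /*
     * OUTER_BAD expands INNER_BAD(__x) to 'uint8_t __x = __x;': the new
     * inner __x is already in scope at its own initializer, so it reads
     * itself while still uninitialized.
     */

    /* Fixed pattern, mirroring the ___x rename in the hunks above. */
    #define INNER_OK(x)   ({ uint8_t ___x = x; (uint8_t)(___x + 1); })
    #define OUTER_OK(x)   ({ uint8_t __x = x; INNER_OK(__x); })

    int main(void)
    {
            printf("ok: %u (the bad variant would be indeterminate)\n",
                   OUTER_OK(41));
            return 0;
    }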
+diff --git a/include/linux/ceph/libceph.h b/include/linux/ceph/libceph.h
+index a420c07904bc..337d5049ff93 100644
+--- a/include/linux/ceph/libceph.h
++++ b/include/linux/ceph/libceph.h
+@@ -294,6 +294,8 @@ extern void ceph_destroy_client(struct ceph_client *client);
+ extern int __ceph_open_session(struct ceph_client *client,
+ unsigned long started);
+ extern int ceph_open_session(struct ceph_client *client);
++int ceph_wait_for_latest_osdmap(struct ceph_client *client,
++ unsigned long timeout);
+
+ /* pagevec.c */
+ extern void ceph_release_page_vector(struct page **pages, int num_pages);
+diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
+index 8fcbae1b8db0..120d1d40704b 100644
+--- a/include/linux/cgroup-defs.h
++++ b/include/linux/cgroup-defs.h
+@@ -602,7 +602,7 @@ struct cgroup_subsys {
+ void (*cancel_fork)(struct task_struct *task);
+ void (*fork)(struct task_struct *task);
+ void (*exit)(struct task_struct *task);
+- void (*free)(struct task_struct *task);
++ void (*release)(struct task_struct *task);
+ void (*bind)(struct cgroup_subsys_state *root_css);
+
+ bool early_init:1;
+diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
+index 9968332cceed..81f58b4a5418 100644
+--- a/include/linux/cgroup.h
++++ b/include/linux/cgroup.h
+@@ -121,6 +121,7 @@ extern int cgroup_can_fork(struct task_struct *p);
+ extern void cgroup_cancel_fork(struct task_struct *p);
+ extern void cgroup_post_fork(struct task_struct *p);
+ void cgroup_exit(struct task_struct *p);
++void cgroup_release(struct task_struct *p);
+ void cgroup_free(struct task_struct *p);
+
+ int cgroup_init_early(void);
+@@ -697,6 +698,7 @@ static inline int cgroup_can_fork(struct task_struct *p) { return 0; }
+ static inline void cgroup_cancel_fork(struct task_struct *p) {}
+ static inline void cgroup_post_fork(struct task_struct *p) {}
+ static inline void cgroup_exit(struct task_struct *p) {}
++static inline void cgroup_release(struct task_struct *p) {}
+ static inline void cgroup_free(struct task_struct *p) {}
+
+ static inline int cgroup_init_early(void) { return 0; }
+diff --git a/include/linux/clk-provider.h b/include/linux/clk-provider.h
+index e443fa9fa859..b7cf80a71293 100644
+--- a/include/linux/clk-provider.h
++++ b/include/linux/clk-provider.h
+@@ -792,6 +792,9 @@ unsigned int __clk_get_enable_count(struct clk *clk);
+ unsigned long clk_hw_get_rate(const struct clk_hw *hw);
+ unsigned long __clk_get_flags(struct clk *clk);
+ unsigned long clk_hw_get_flags(const struct clk_hw *hw);
++#define clk_hw_can_set_rate_parent(hw) \
++ (clk_hw_get_flags((hw)) & CLK_SET_RATE_PARENT)
++
+ bool clk_hw_is_prepared(const struct clk_hw *hw);
+ bool clk_hw_rate_is_protected(const struct clk_hw *hw);
+ bool clk_hw_is_enabled(const struct clk_hw *hw);
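The new clk_hw_can_set_rate_parent() helper just tests CLK_SET_RATE_PARENT on a clk_hw, saving providers an open-coded flag check. A hedged sketch of the intended use inside a determine_rate callback; the foo_* name is invented:

    #include <linux/clk-provider.h>

    static int foo_determine_rate(struct clk_hw *hw,
                                  struct clk_rate_request *req)
    {
            /* Forward the request upstream only when the framework may
             * change the parent's rate on behalf of this clock. */
            if (clk_hw_can_set_rate_parent(hw))
                    return __clk_determine_rate(clk_hw_get_parent(hw), req);

            req->rate = clk_hw_get_rate(hw);
            return 0;
    }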
+diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h
+index c86d6d8bdfed..0b427d5df0fe 100644
+--- a/include/linux/cpufreq.h
++++ b/include/linux/cpufreq.h
+@@ -254,20 +254,12 @@ __ATTR(_name, 0644, show_##_name, store_##_name)
+ static struct freq_attr _name = \
+ __ATTR(_name, 0200, NULL, store_##_name)
+
+-struct global_attr {
+- struct attribute attr;
+- ssize_t (*show)(struct kobject *kobj,
+- struct attribute *attr, char *buf);
+- ssize_t (*store)(struct kobject *a, struct attribute *b,
+- const char *c, size_t count);
+-};
+-
+ #define define_one_global_ro(_name) \
+-static struct global_attr _name = \
++static struct kobj_attribute _name = \
+ __ATTR(_name, 0444, show_##_name, NULL)
+
+ #define define_one_global_rw(_name) \
+-static struct global_attr _name = \
++static struct kobj_attribute _name = \
+ __ATTR(_name, 0644, show_##_name, store_##_name)
+
+
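Dropping the private struct global_attr matters because kobject sysfs dispatch hands a struct kobj_attribute * to show/store handlers, so callbacks declared against the old struct attribute * signature only worked by type punning. A hedged sketch of a read-only global attribute under the new macros; "status" and its payload are invented:

    #include <linux/kernel.h>
    #include <linux/kobject.h>
    #include <linux/sysfs.h>

    static ssize_t show_status(struct kobject *kobj,
                               struct kobj_attribute *attr, char *buf)
    {
            return sprintf(buf, "ok\n");    /* made-up payload */
    }

    define_one_global_ro(status);           /* now a kobj_attribute */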
+diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
+index e528baebad69..bee4bb9f81bc 100644
+--- a/include/linux/device-mapper.h
++++ b/include/linux/device-mapper.h
+@@ -609,7 +609,7 @@ do { \
+ */
+ #define dm_target_offset(ti, sector) ((sector) - (ti)->begin)
+
+-static inline sector_t to_sector(unsigned long n)
++static inline sector_t to_sector(unsigned long long n)
+ {
+ return (n >> SECTOR_SHIFT);
+ }
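Widening to_sector()'s parameter matters on 32-bit builds, where unsigned long is 32 bits and a byte count of 4 GiB or more was silently truncated before the shift. A userspace sketch of the truncation, with uint32_t standing in for a 32-bit unsigned long:

    #include <stdio.h>
    #include <stdint.h>

    #define SECTOR_SHIFT 9

    /* uint32_t stands in for 'unsigned long' on a 32-bit build */
    static uint64_t to_sector_old(uint32_t n)  { return n >> SECTOR_SHIFT; }
    static uint64_t to_sector_new(uint64_t n)  { return n >> SECTOR_SHIFT; }

    int main(void)
    {
            uint64_t bytes = 5ULL << 30;    /* a 5 GiB device */

            /* old: the byte count is chopped to 32 bits at the call */
            printf("old: %llu sectors\n",
                   (unsigned long long)to_sector_old((uint32_t)bytes));
            printf("new: %llu sectors\n",
                   (unsigned long long)to_sector_new(bytes));
            return 0;
    }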
+diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
+index f6ded992c183..5b21f14802e1 100644
+--- a/include/linux/dma-mapping.h
++++ b/include/linux/dma-mapping.h
+@@ -130,6 +130,7 @@ struct dma_map_ops {
+ enum dma_data_direction direction);
+ int (*dma_supported)(struct device *dev, u64 mask);
+ u64 (*get_required_mask)(struct device *dev);
++ size_t (*max_mapping_size)(struct device *dev);
+ };
+
+ #define DMA_MAPPING_ERROR (~(dma_addr_t)0)
+@@ -257,6 +258,8 @@ static inline void dma_direct_sync_sg_for_cpu(struct device *dev,
+ }
+ #endif
+
++size_t dma_direct_max_mapping_size(struct device *dev);
++
+ #ifdef CONFIG_HAS_DMA
+ #include <asm/dma-mapping.h>
+
+@@ -460,6 +463,7 @@ int dma_supported(struct device *dev, u64 mask);
+ int dma_set_mask(struct device *dev, u64 mask);
+ int dma_set_coherent_mask(struct device *dev, u64 mask);
+ u64 dma_get_required_mask(struct device *dev);
++size_t dma_max_mapping_size(struct device *dev);
+ #else /* CONFIG_HAS_DMA */
+ static inline dma_addr_t dma_map_page_attrs(struct device *dev,
+ struct page *page, size_t offset, size_t size,
+@@ -561,6 +565,10 @@ static inline u64 dma_get_required_mask(struct device *dev)
+ {
+ return 0;
+ }
++static inline size_t dma_max_mapping_size(struct device *dev)
++{
++ return 0;
++}
+ #endif /* CONFIG_HAS_DMA */
+
+ static inline dma_addr_t dma_map_single_attrs(struct device *dev, void *ptr,
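dma_max_mapping_size() gives drivers a portable ceiling for a single mapping; as the swiotlb hunks further down show, bounce buffering caps it well below SIZE_MAX. A hedged sketch of driver-side use; foo_clamp_xfer() is invented:

    #include <linux/dma-mapping.h>
    #include <linux/kernel.h>

    static size_t foo_clamp_xfer(struct device *dev, size_t want)
    {
            /* the !CONFIG_HAS_DMA stub above returns 0: no DMA at all */
            size_t limit = dma_max_mapping_size(dev);

            return limit ? min(want, limit) : 0;
    }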
+diff --git a/include/linux/efi.h b/include/linux/efi.h
+index 28604a8d0aa9..a86485ac7c87 100644
+--- a/include/linux/efi.h
++++ b/include/linux/efi.h
+@@ -1699,19 +1699,19 @@ extern int efi_tpm_eventlog_init(void);
+ * fault happened while executing an efi runtime service.
+ */
+ enum efi_rts_ids {
+- NONE,
+- GET_TIME,
+- SET_TIME,
+- GET_WAKEUP_TIME,
+- SET_WAKEUP_TIME,
+- GET_VARIABLE,
+- GET_NEXT_VARIABLE,
+- SET_VARIABLE,
+- QUERY_VARIABLE_INFO,
+- GET_NEXT_HIGH_MONO_COUNT,
+- RESET_SYSTEM,
+- UPDATE_CAPSULE,
+- QUERY_CAPSULE_CAPS,
++ EFI_NONE,
++ EFI_GET_TIME,
++ EFI_SET_TIME,
++ EFI_GET_WAKEUP_TIME,
++ EFI_SET_WAKEUP_TIME,
++ EFI_GET_VARIABLE,
++ EFI_GET_NEXT_VARIABLE,
++ EFI_SET_VARIABLE,
++ EFI_QUERY_VARIABLE_INFO,
++ EFI_GET_NEXT_HIGH_MONO_COUNT,
++ EFI_RESET_SYSTEM,
++ EFI_UPDATE_CAPSULE,
++ EFI_QUERY_CAPSULE_CAPS,
+ };
+
+ /*
+diff --git a/include/linux/f2fs_fs.h b/include/linux/f2fs_fs.h
+index d7711048ef93..c524ad7d31da 100644
+--- a/include/linux/f2fs_fs.h
++++ b/include/linux/f2fs_fs.h
+@@ -489,12 +489,12 @@ typedef __le32 f2fs_hash_t;
+
+ /*
+ * space utilization of regular dentry and inline dentry (w/o extra reservation)
+- * regular dentry inline dentry
+- * bitmap 1 * 27 = 27 1 * 23 = 23
+- * reserved 1 * 3 = 3 1 * 7 = 7
+- * dentry 11 * 214 = 2354 11 * 182 = 2002
+- * filename 8 * 214 = 1712 8 * 182 = 1456
+- * total 4096 3488
++ * regular dentry inline dentry (def) inline dentry (min)
++ * bitmap 1 * 27 = 27 1 * 23 = 23 1 * 1 = 1
++ * reserved 1 * 3 = 3 1 * 7 = 7 1 * 1 = 1
++ * dentry 11 * 214 = 2354 11 * 182 = 2002 11 * 2 = 22
++ * filename 8 * 214 = 1712 8 * 182 = 1456 8 * 2 = 16
++ * total 4096 3488 40
+ *
+ * Note: there is more reserved space in inline dentry than in regular
+ * dentry, when converting inline dentry we should handle this carefully.
+@@ -506,6 +506,7 @@ typedef __le32 f2fs_hash_t;
+ #define SIZE_OF_RESERVED (PAGE_SIZE - ((SIZE_OF_DIR_ENTRY + \
+ F2FS_SLOT_LEN) * \
+ NR_DENTRY_IN_BLOCK + SIZE_OF_DENTRY_BITMAP))
++#define MIN_INLINE_DENTRY_SIZE 40 /* just include '.' and '..' entries */
+
+ /* One directory entry slot representing F2FS_SLOT_LEN-sized file name */
+ struct f2fs_dir_entry {
+diff --git a/include/linux/filter.h b/include/linux/filter.h
+index e532fcc6e4b5..3358646a8e7a 100644
+--- a/include/linux/filter.h
++++ b/include/linux/filter.h
+@@ -874,7 +874,9 @@ bpf_jit_binary_alloc(unsigned int proglen, u8 **image_ptr,
+ unsigned int alignment,
+ bpf_jit_fill_hole_t bpf_fill_ill_insns);
+ void bpf_jit_binary_free(struct bpf_binary_header *hdr);
+-
++u64 bpf_jit_alloc_exec_limit(void);
++void *bpf_jit_alloc_exec(unsigned long size);
++void bpf_jit_free_exec(void *addr);
+ void bpf_jit_free(struct bpf_prog *fp);
+
+ int bpf_jit_get_func_addr(const struct bpf_prog *prog,
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 29d8e2cfed0e..fd423fec8d83 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -304,13 +304,19 @@ enum rw_hint {
+
+ struct kiocb {
+ struct file *ki_filp;
++
++ /* The 'ki_filp' pointer is shared in a union for aio */
++ randomized_struct_fields_start
++
+ loff_t ki_pos;
+ void (*ki_complete)(struct kiocb *iocb, long ret, long ret2);
+ void *private;
+ int ki_flags;
+ u16 ki_hint;
+ u16 ki_ioprio; /* See linux/ioprio.h */
+-} __randomize_layout;
++
++ randomized_struct_fields_end
++};
+
+ static inline bool is_sync_kiocb(struct kiocb *kiocb)
+ {
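Moving the randomization markers below ki_filp is needed because fs/aio.c overlays struct kiocb in a union whose other members also begin with a struct file pointer; that aliasing only holds if ki_filp stays at offset 0 in every randomized build. A userspace sketch of the invariant, with illustrative stand-in structures (not the real aio ones):

    #include <assert.h>
    #include <stddef.h>

    struct file;                    /* opaque, as in the kernel */

    struct kiocb_like {             /* stands in for struct kiocb */
            struct file *ki_filp;   /* must stay at offset 0 */
            long ki_pos;            /* fields below may be shuffled */
    };

    struct fsync_like {             /* stands in for aio's fsync op */
            struct file *file;      /* aliases ki_filp via the union */
            int datasync;
    };

    union aio_like {
            struct kiocb_like rw;
            struct fsync_like fsync;
    };

    int main(void)
    {
            /* The aliasing is only legal while both share offset 0. */
            assert(offsetof(struct kiocb_like, ki_filp) ==
                   offsetof(struct fsync_like, file));
            return 0;
    }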
+diff --git a/include/linux/hardirq.h b/include/linux/hardirq.h
+index 0fbbcdf0c178..da0af631ded5 100644
+--- a/include/linux/hardirq.h
++++ b/include/linux/hardirq.h
+@@ -60,8 +60,14 @@ extern void irq_enter(void);
+ */
+ extern void irq_exit(void);
+
++#ifndef arch_nmi_enter
++#define arch_nmi_enter() do { } while (0)
++#define arch_nmi_exit() do { } while (0)
++#endif
++
+ #define nmi_enter() \
+ do { \
++ arch_nmi_enter(); \
+ printk_nmi_enter(); \
+ lockdep_off(); \
+ ftrace_nmi_enter(); \
+@@ -80,6 +86,7 @@ extern void irq_exit(void);
+ ftrace_nmi_exit(); \
+ lockdep_on(); \
+ printk_nmi_exit(); \
++ arch_nmi_exit(); \
+ } while (0)
+
+ #endif /* LINUX_HARDIRQ_H */
+diff --git a/include/linux/i2c.h b/include/linux/i2c.h
+index 65b4eaed1d96..7e748648c7d3 100644
+--- a/include/linux/i2c.h
++++ b/include/linux/i2c.h
+@@ -333,6 +333,7 @@ struct i2c_client {
+ char name[I2C_NAME_SIZE];
+ struct i2c_adapter *adapter; /* the adapter we sit on */
+ struct device dev; /* the device structure */
++ int init_irq; /* irq set at initialization */
+ int irq; /* irq issued by device */
+ struct list_head detected;
+ #if IS_ENABLED(CONFIG_I2C_SLAVE)
+diff --git a/include/linux/irqdesc.h b/include/linux/irqdesc.h
+index dd1e40ddac7d..875c41b23f20 100644
+--- a/include/linux/irqdesc.h
++++ b/include/linux/irqdesc.h
+@@ -65,6 +65,7 @@ struct irq_desc {
+ unsigned int core_internal_state__do_not_mess_with_it;
+ unsigned int depth; /* nested irq disables */
+ unsigned int wake_depth; /* nested wake enables */
++ unsigned int tot_count;
+ unsigned int irq_count; /* For detecting broken IRQs */
+ unsigned long last_unhandled; /* Aging timer for unhandled count */
+ unsigned int irqs_unhandled;
+diff --git a/include/linux/kasan-checks.h b/include/linux/kasan-checks.h
+index d314150658a4..a61dc075e2ce 100644
+--- a/include/linux/kasan-checks.h
++++ b/include/linux/kasan-checks.h
+@@ -2,7 +2,7 @@
+ #ifndef _LINUX_KASAN_CHECKS_H
+ #define _LINUX_KASAN_CHECKS_H
+
+-#ifdef CONFIG_KASAN
++#if defined(__SANITIZE_ADDRESS__) || defined(__KASAN_INTERNAL)
+ void kasan_check_read(const volatile void *p, unsigned int size);
+ void kasan_check_write(const volatile void *p, unsigned int size);
+ #else
+diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
+index c38cc5eb7e73..cf761ff58224 100644
+--- a/include/linux/kvm_host.h
++++ b/include/linux/kvm_host.h
+@@ -634,7 +634,7 @@ void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
+ struct kvm_memory_slot *dont);
+ int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
+ unsigned long npages);
+-void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots);
++void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen);
+ int kvm_arch_prepare_memory_region(struct kvm *kvm,
+ struct kvm_memory_slot *memslot,
+ const struct kvm_userspace_memory_region *mem,
+diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
+index 83ae11cbd12c..7391f5fe4eda 100644
+--- a/include/linux/memcontrol.h
++++ b/include/linux/memcontrol.h
+@@ -561,7 +561,10 @@ struct mem_cgroup *lock_page_memcg(struct page *page);
+ void __unlock_page_memcg(struct mem_cgroup *memcg);
+ void unlock_page_memcg(struct page *page);
+
+-/* idx can be of type enum memcg_stat_item or node_stat_item */
++/*
++ * idx can be of type enum memcg_stat_item or node_stat_item.
++ * Keep in sync with memcg_exact_page_state().
++ */
+ static inline unsigned long memcg_page_state(struct mem_cgroup *memcg,
+ int idx)
+ {
+diff --git a/include/linux/mii.h b/include/linux/mii.h
+index 6fee8b1a4400..5cd824c1c0ca 100644
+--- a/include/linux/mii.h
++++ b/include/linux/mii.h
+@@ -469,7 +469,7 @@ static inline u32 linkmode_adv_to_lcl_adv_t(unsigned long *advertising)
+ if (linkmode_test_bit(ETHTOOL_LINK_MODE_Pause_BIT,
+ advertising))
+ lcl_adv |= ADVERTISE_PAUSE_CAP;
+- if (linkmode_test_bit(ETHTOOL_LINK_MODE_Pause_BIT,
++ if (linkmode_test_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
+ advertising))
+ lcl_adv |= ADVERTISE_PAUSE_ASYM;
+
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index 54299251d40d..4f001619f854 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -591,6 +591,8 @@ enum mlx5_pagefault_type_flags {
+ };
+
+ struct mlx5_td {
++ /* protects tirs list changes while tirs refresh */
++ struct mutex list_lock;
+ struct list_head tirs_list;
+ u32 tdn;
+ };
+diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
+index 4eb26d278046..280ae96dc4c3 100644
+--- a/include/linux/page-isolation.h
++++ b/include/linux/page-isolation.h
+@@ -41,16 +41,6 @@ int move_freepages_block(struct zone *zone, struct page *page,
+
+ /*
+ * Changes migrate type in [start_pfn, end_pfn) to be MIGRATE_ISOLATE.
+- * If specified range includes migrate types other than MOVABLE or CMA,
+- * this will fail with -EBUSY.
+- *
+- * For isolating all pages in the range finally, the caller have to
+- * free all pages in the range. test_page_isolated() can be used for
+- * test it.
+- *
+- * The following flags are allowed (they can be combined in a bit mask)
+- * SKIP_HWPOISON - ignore hwpoison pages
+- * REPORT_FAILURE - report details about the failure to isolate the range
+ */
+ int
+ start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
+diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
+index e1a051724f7e..7cbbd891bfcd 100644
+--- a/include/linux/perf_event.h
++++ b/include/linux/perf_event.h
+@@ -409,7 +409,7 @@ struct pmu {
+ /*
+ * Set up pmu-private data structures for an AUX area
+ */
+- void *(*setup_aux) (int cpu, void **pages,
++ void *(*setup_aux) (struct perf_event *event, void **pages,
+ int nr_pages, bool overwrite);
+ /* optional */
+
+diff --git a/include/linux/pipe_fs_i.h b/include/linux/pipe_fs_i.h
+index 5a3bb3b7c9ad..3ecd7ea212ae 100644
+--- a/include/linux/pipe_fs_i.h
++++ b/include/linux/pipe_fs_i.h
+@@ -182,6 +182,7 @@ void generic_pipe_buf_get(struct pipe_inode_info *, struct pipe_buffer *);
+ int generic_pipe_buf_confirm(struct pipe_inode_info *, struct pipe_buffer *);
+ int generic_pipe_buf_steal(struct pipe_inode_info *, struct pipe_buffer *);
+ void generic_pipe_buf_release(struct pipe_inode_info *, struct pipe_buffer *);
++void pipe_buf_mark_unmergeable(struct pipe_buffer *buf);
+
+ extern const struct pipe_buf_operations nosteal_pipe_buf_ops;
+
+diff --git a/include/linux/property.h b/include/linux/property.h
+index 3789ec755fb6..65d3420dd5d1 100644
+--- a/include/linux/property.h
++++ b/include/linux/property.h
+@@ -258,7 +258,7 @@ struct property_entry {
+ #define PROPERTY_ENTRY_STRING(_name_, _val_) \
+ (struct property_entry) { \
+ .name = _name_, \
+- .length = sizeof(_val_), \
++ .length = sizeof(const char *), \
+ .type = DEV_PROP_STRING, \
+ { .value = { .str = _val_ } }, \
+ }
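The PROPERTY_ENTRY_STRING() fix addresses a sizeof pitfall: the stored value is a pointer (.value.str), but sizeof(_val_) on a string-literal argument yields the literal's array size rather than the pointer size, so .length misdescribed the payload. A userspace sketch of the mismatch:

    #include <stdio.h>

    int main(void)
    {
            const char *name = "gpio-line-names";  /* arbitrary example */

            /* With a literal argument, sizeof() is the array size (16
             * here), not the size of the pointer actually stored in
             * .value.str, which is what .length must describe. */
            printf("sizeof literal: %zu\n", sizeof("gpio-line-names"));
            printf("sizeof pointer: %zu\n", sizeof(name));
            return 0;
    }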
+diff --git a/include/linux/relay.h b/include/linux/relay.h
+index e1bdf01a86e2..c759f96e39c1 100644
+--- a/include/linux/relay.h
++++ b/include/linux/relay.h
+@@ -66,7 +66,7 @@ struct rchan
+ struct kref kref; /* channel refcount */
+ void *private_data; /* for user-defined data */
+ size_t last_toobig; /* tried to log event > subbuf size */
+- struct rchan_buf ** __percpu buf; /* per-cpu channel buffers */
++ struct rchan_buf * __percpu *buf; /* per-cpu channel buffers */
+ int is_global; /* One global buffer ? */
+ struct list_head list; /* for channel list */
+ struct dentry *parent; /* parent dentry passed to open */
+diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
+index 5b9ae62272bb..503778920448 100644
+--- a/include/linux/ring_buffer.h
++++ b/include/linux/ring_buffer.h
+@@ -128,7 +128,7 @@ ring_buffer_consume(struct ring_buffer *buffer, int cpu, u64 *ts,
+ unsigned long *lost_events);
+
+ struct ring_buffer_iter *
+-ring_buffer_read_prepare(struct ring_buffer *buffer, int cpu);
++ring_buffer_read_prepare(struct ring_buffer *buffer, int cpu, gfp_t flags);
+ void ring_buffer_read_prepare_sync(void);
+ void ring_buffer_read_start(struct ring_buffer_iter *iter);
+ void ring_buffer_read_finish(struct ring_buffer_iter *iter);
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index f9b43c989577..9b35aff09f70 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1748,9 +1748,9 @@ static __always_inline bool need_resched(void)
+ static inline unsigned int task_cpu(const struct task_struct *p)
+ {
+ #ifdef CONFIG_THREAD_INFO_IN_TASK
+- return p->cpu;
++ return READ_ONCE(p->cpu);
+ #else
+- return task_thread_info(p)->cpu;
++ return READ_ONCE(task_thread_info(p)->cpu);
+ #endif
+ }
+
+diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
+index c31d3a47a47c..57c7ed3fe465 100644
+--- a/include/linux/sched/topology.h
++++ b/include/linux/sched/topology.h
+@@ -176,10 +176,10 @@ typedef int (*sched_domain_flags_f)(void);
+ #define SDTL_OVERLAP 0x01
+
+ struct sd_data {
+- struct sched_domain **__percpu sd;
+- struct sched_domain_shared **__percpu sds;
+- struct sched_group **__percpu sg;
+- struct sched_group_capacity **__percpu sgc;
++ struct sched_domain *__percpu *sd;
++ struct sched_domain_shared *__percpu *sds;
++ struct sched_group *__percpu *sg;
++ struct sched_group_capacity *__percpu *sgc;
+ };
+
+ struct sched_domain_topology_level {
+diff --git a/include/linux/slab.h b/include/linux/slab.h
+index 11b45f7ae405..9449b19c5f10 100644
+--- a/include/linux/slab.h
++++ b/include/linux/slab.h
+@@ -32,6 +32,8 @@
+ #define SLAB_HWCACHE_ALIGN ((slab_flags_t __force)0x00002000U)
+ /* Use GFP_DMA memory */
+ #define SLAB_CACHE_DMA ((slab_flags_t __force)0x00004000U)
++/* Use GFP_DMA32 memory */
++#define SLAB_CACHE_DMA32 ((slab_flags_t __force)0x00008000U)
+ /* DEBUG: Store the last owner for bug hunting */
+ #define SLAB_STORE_USER ((slab_flags_t __force)0x00010000U)
+ /* Panic if kmem_cache_create() fails */
+diff --git a/include/linux/string.h b/include/linux/string.h
+index 7927b875f80c..6ab0a6fa512e 100644
+--- a/include/linux/string.h
++++ b/include/linux/string.h
+@@ -150,6 +150,9 @@ extern void * memscan(void *,int,__kernel_size_t);
+ #ifndef __HAVE_ARCH_MEMCMP
+ extern int memcmp(const void *,const void *,__kernel_size_t);
+ #endif
++#ifndef __HAVE_ARCH_BCMP
++extern int bcmp(const void *,const void *,__kernel_size_t);
++#endif
+ #ifndef __HAVE_ARCH_MEMCHR
+ extern void * memchr(const void *,int,__kernel_size_t);
+ #endif
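bcmp() is declared here because modern compilers may lower memcmp() calls whose result is only tested for equality into bcmp() calls, and the kernel must then provide that symbol. Its contract is weaker than memcmp(): only zero versus non-zero is meaningful. A userspace sketch; my_bcmp mirrors the kernel's fallback, which simply defers to memcmp:

    #include <stdio.h>
    #include <string.h>

    /* Callers must only test the result against zero; sign and
     * magnitude carry no information for bcmp. */
    static int my_bcmp(const void *a, const void *b, size_t len)
    {
            return memcmp(a, b, len);
    }

    int main(void)
    {
            printf("equal:  %d\n", my_bcmp("abc", "abc", 3));
            printf("differ: %s\n",
                   my_bcmp("abc", "abd", 3) ? "nonzero" : "0");
            return 0;
    }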
+diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
+index 7c007ed7505f..29bc3a203283 100644
+--- a/include/linux/swiotlb.h
++++ b/include/linux/swiotlb.h
+@@ -76,6 +76,8 @@ bool swiotlb_map(struct device *dev, phys_addr_t *phys, dma_addr_t *dma_addr,
+ size_t size, enum dma_data_direction dir, unsigned long attrs);
+ void __init swiotlb_exit(void);
+ unsigned int swiotlb_max_segment(void);
++size_t swiotlb_max_mapping_size(struct device *dev);
++bool is_swiotlb_active(void);
+ #else
+ #define swiotlb_force SWIOTLB_NO_FORCE
+ static inline bool is_swiotlb_buffer(phys_addr_t paddr)
+@@ -95,6 +97,15 @@ static inline unsigned int swiotlb_max_segment(void)
+ {
+ return 0;
+ }
++static inline size_t swiotlb_max_mapping_size(struct device *dev)
++{
++ return SIZE_MAX;
++}
++
++static inline bool is_swiotlb_active(void)
++{
++ return false;
++}
+ #endif /* CONFIG_SWIOTLB */
+
+ extern void swiotlb_print_info(void);
+diff --git a/include/linux/virtio_ring.h b/include/linux/virtio_ring.h
+index fab02133a919..3dc70adfe5f5 100644
+--- a/include/linux/virtio_ring.h
++++ b/include/linux/virtio_ring.h
+@@ -63,7 +63,7 @@ struct virtqueue;
+ /*
+ * Creates a virtqueue and allocates the descriptor ring. If
+ * may_reduce_num is set, then this may allocate a smaller ring than
+- * expected. The caller should query virtqueue_get_ring_size to learn
++ * expected. The caller should query virtqueue_get_vring_size to learn
+ * the actual size of the ring.
+ */
+ struct virtqueue *vring_create_virtqueue(unsigned int index,
+diff --git a/include/net/bluetooth/bluetooth.h b/include/net/bluetooth/bluetooth.h
+index ec9d6bc65855..fabee6db0abb 100644
+--- a/include/net/bluetooth/bluetooth.h
++++ b/include/net/bluetooth/bluetooth.h
+@@ -276,7 +276,7 @@ int bt_sock_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg);
+ int bt_sock_wait_state(struct sock *sk, int state, unsigned long timeo);
+ int bt_sock_wait_ready(struct sock *sk, unsigned long flags);
+
+-void bt_accept_enqueue(struct sock *parent, struct sock *sk);
++void bt_accept_enqueue(struct sock *parent, struct sock *sk, bool bh);
+ void bt_accept_unlink(struct sock *sk);
+ struct sock *bt_accept_dequeue(struct sock *parent, struct socket *newsock);
+
+diff --git a/include/net/ip.h b/include/net/ip.h
+index be3cad9c2e4c..583526aad1d0 100644
+--- a/include/net/ip.h
++++ b/include/net/ip.h
+@@ -677,7 +677,7 @@ int ip_options_get_from_user(struct net *net, struct ip_options_rcu **optp,
+ unsigned char __user *data, int optlen);
+ void ip_options_undo(struct ip_options *opt);
+ void ip_forward_options(struct sk_buff *skb);
+-int ip_options_rcv_srr(struct sk_buff *skb);
++int ip_options_rcv_srr(struct sk_buff *skb, struct net_device *dev);
+
+ /*
+ * Functions provided by ip_sockglue.c
+diff --git a/include/net/net_namespace.h b/include/net/net_namespace.h
+index 99d4148e0f90..1c3126c14930 100644
+--- a/include/net/net_namespace.h
++++ b/include/net/net_namespace.h
+@@ -58,6 +58,7 @@ struct net {
+ */
+ spinlock_t rules_mod_lock;
+
++ u32 hash_mix;
+ atomic64_t cookie_gen;
+
+ struct list_head list; /* list of network namespaces */
+diff --git a/include/net/netfilter/br_netfilter.h b/include/net/netfilter/br_netfilter.h
+index 4cd56808ac4e..89808ce293c4 100644
+--- a/include/net/netfilter/br_netfilter.h
++++ b/include/net/netfilter/br_netfilter.h
+@@ -43,7 +43,6 @@ static inline struct rtable *bridge_parent_rtable(const struct net_device *dev)
+ }
+
+ struct net_device *setup_pre_routing(struct sk_buff *skb);
+-void br_netfilter_enable(void);
+
+ #if IS_ENABLED(CONFIG_IPV6)
+ int br_validate_ipv6(struct net *net, struct sk_buff *skb);
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index b4984bbbe157..0612439909dc 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -416,7 +416,8 @@ struct nft_set {
+ unsigned char *udata;
+ /* runtime data below here */
+ const struct nft_set_ops *ops ____cacheline_aligned;
+- u16 flags:14,
++ u16 flags:13,
++ bound:1,
+ genmask:2;
+ u8 klen;
+ u8 dlen;
+@@ -690,10 +691,12 @@ static inline void nft_set_gc_batch_add(struct nft_set_gc_batch *gcb,
+ gcb->elems[gcb->head.cnt++] = elem;
+ }
+
++struct nft_expr_ops;
+ /**
+ * struct nft_expr_type - nf_tables expression type
+ *
+ * @select_ops: function to select nft_expr_ops
++ * @release_ops: release nft_expr_ops
+ * @ops: default ops, used when no select_ops functions is present
+ * @list: used internally
+ * @name: Identifier
+@@ -706,6 +709,7 @@ static inline void nft_set_gc_batch_add(struct nft_set_gc_batch *gcb,
+ struct nft_expr_type {
+ const struct nft_expr_ops *(*select_ops)(const struct nft_ctx *,
+ const struct nlattr * const tb[]);
++ void (*release_ops)(const struct nft_expr_ops *ops);
+ const struct nft_expr_ops *ops;
+ struct list_head list;
+ const char *name;
+@@ -1329,15 +1333,12 @@ struct nft_trans_rule {
+ struct nft_trans_set {
+ struct nft_set *set;
+ u32 set_id;
+- bool bound;
+ };
+
+ #define nft_trans_set(trans) \
+ (((struct nft_trans_set *)trans->data)->set)
+ #define nft_trans_set_id(trans) \
+ (((struct nft_trans_set *)trans->data)->set_id)
+-#define nft_trans_set_bound(trans) \
+- (((struct nft_trans_set *)trans->data)->bound)
+
+ struct nft_trans_chain {
+ bool update;
+diff --git a/include/net/netns/hash.h b/include/net/netns/hash.h
+index 16a842456189..d9b665151f3d 100644
+--- a/include/net/netns/hash.h
++++ b/include/net/netns/hash.h
+@@ -2,16 +2,10 @@
+ #ifndef __NET_NS_HASH_H__
+ #define __NET_NS_HASH_H__
+
+-#include <asm/cache.h>
+-
+-struct net;
++#include <net/net_namespace.h>
+
+ static inline u32 net_hash_mix(const struct net *net)
+ {
+-#ifdef CONFIG_NET_NS
+- return (u32)(((unsigned long)net) >> ilog2(sizeof(*net)));
+-#else
+- return 0;
+-#endif
++ return net->hash_mix;
+ }
+ #endif
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index 9481f2c142e2..e7eb4aa6ccc9 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -51,7 +51,10 @@ struct qdisc_size_table {
+ struct qdisc_skb_head {
+ struct sk_buff *head;
+ struct sk_buff *tail;
+- __u32 qlen;
++ union {
++ u32 qlen;
++ atomic_t atomic_qlen;
++ };
+ spinlock_t lock;
+ };
+
+@@ -408,27 +411,19 @@ static inline void qdisc_cb_private_validate(const struct sk_buff *skb, int sz)
+ BUILD_BUG_ON(sizeof(qcb->data) < sz);
+ }
+
+-static inline int qdisc_qlen_cpu(const struct Qdisc *q)
+-{
+- return this_cpu_ptr(q->cpu_qstats)->qlen;
+-}
+-
+ static inline int qdisc_qlen(const struct Qdisc *q)
+ {
+ return q->q.qlen;
+ }
+
+-static inline int qdisc_qlen_sum(const struct Qdisc *q)
++static inline u32 qdisc_qlen_sum(const struct Qdisc *q)
+ {
+- __u32 qlen = q->qstats.qlen;
+- int i;
++ u32 qlen = q->qstats.qlen;
+
+- if (q->flags & TCQ_F_NOLOCK) {
+- for_each_possible_cpu(i)
+- qlen += per_cpu_ptr(q->cpu_qstats, i)->qlen;
+- } else {
++ if (q->flags & TCQ_F_NOLOCK)
++ qlen += atomic_read(&q->q.atomic_qlen);
++ else
+ qlen += q->q.qlen;
+- }
+
+ return qlen;
+ }
+@@ -825,14 +820,14 @@ static inline void qdisc_qstats_cpu_backlog_inc(struct Qdisc *sch,
+ this_cpu_add(sch->cpu_qstats->backlog, qdisc_pkt_len(skb));
+ }
+
+-static inline void qdisc_qstats_cpu_qlen_inc(struct Qdisc *sch)
++static inline void qdisc_qstats_atomic_qlen_inc(struct Qdisc *sch)
+ {
+- this_cpu_inc(sch->cpu_qstats->qlen);
++ atomic_inc(&sch->q.atomic_qlen);
+ }
+
+-static inline void qdisc_qstats_cpu_qlen_dec(struct Qdisc *sch)
++static inline void qdisc_qstats_atomic_qlen_dec(struct Qdisc *sch)
+ {
+- this_cpu_dec(sch->cpu_qstats->qlen);
++ atomic_dec(&sch->q.atomic_qlen);
+ }
+
+ static inline void qdisc_qstats_cpu_requeues_inc(struct Qdisc *sch)
+diff --git a/include/net/sctp/checksum.h b/include/net/sctp/checksum.h
+index 32ee65a30aff..1c6e6c0766ca 100644
+--- a/include/net/sctp/checksum.h
++++ b/include/net/sctp/checksum.h
+@@ -61,7 +61,7 @@ static inline __wsum sctp_csum_combine(__wsum csum, __wsum csum2,
+ static inline __le32 sctp_compute_cksum(const struct sk_buff *skb,
+ unsigned int offset)
+ {
+- struct sctphdr *sh = sctp_hdr(skb);
++ struct sctphdr *sh = (struct sctphdr *)(skb->data + offset);
+ const struct skb_checksum_ops ops = {
+ .update = sctp_csum_update,
+ .combine = sctp_csum_combine,
+diff --git a/include/net/sock.h b/include/net/sock.h
+index f43f935cb113..89d0d94d5db2 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -710,6 +710,12 @@ static inline void sk_add_node_rcu(struct sock *sk, struct hlist_head *list)
+ hlist_add_head_rcu(&sk->sk_node, list);
+ }
+
++static inline void sk_add_node_tail_rcu(struct sock *sk, struct hlist_head *list)
++{
++ sock_hold(sk);
++ hlist_add_tail_rcu(&sk->sk_node, list);
++}
++
+ static inline void __sk_nulls_add_node_rcu(struct sock *sk, struct hlist_nulls_head *list)
+ {
+ hlist_nulls_add_head_rcu(&sk->sk_nulls_node, list);
+diff --git a/include/scsi/libfcoe.h b/include/scsi/libfcoe.h
+index cb8a273732cf..bb8092fa1e36 100644
+--- a/include/scsi/libfcoe.h
++++ b/include/scsi/libfcoe.h
+@@ -79,7 +79,7 @@ enum fip_state {
+ * It must not change after fcoe_ctlr_init() sets it.
+ */
+ enum fip_mode {
+- FIP_MODE_AUTO = FIP_ST_AUTO,
++ FIP_MODE_AUTO,
+ FIP_MODE_NON_FIP,
+ FIP_MODE_FABRIC,
+ FIP_MODE_VN2VN,
+@@ -250,7 +250,7 @@ struct fcoe_rport {
+ };
+
+ /* FIP API functions */
+-void fcoe_ctlr_init(struct fcoe_ctlr *, enum fip_state);
++void fcoe_ctlr_init(struct fcoe_ctlr *, enum fip_mode);
+ void fcoe_ctlr_destroy(struct fcoe_ctlr *);
+ void fcoe_ctlr_link_up(struct fcoe_ctlr *);
+ int fcoe_ctlr_link_down(struct fcoe_ctlr *);
+diff --git a/include/uapi/linux/android/binder.h b/include/uapi/linux/android/binder.h
+index b9ba520f7e4b..2832134e5397 100644
+--- a/include/uapi/linux/android/binder.h
++++ b/include/uapi/linux/android/binder.h
+@@ -41,6 +41,14 @@ enum {
+ enum {
+ FLAT_BINDER_FLAG_PRIORITY_MASK = 0xff,
+ FLAT_BINDER_FLAG_ACCEPTS_FDS = 0x100,
++
++ /**
++ * @FLAT_BINDER_FLAG_TXN_SECURITY_CTX: request security contexts
++ *
++ * Only when set, causes senders to include their security
++ * context
++ */
++ FLAT_BINDER_FLAG_TXN_SECURITY_CTX = 0x1000,
+ };
+
+ #ifdef BINDER_IPC_32BIT
+@@ -218,6 +226,7 @@ struct binder_node_info_for_ref {
+ #define BINDER_VERSION _IOWR('b', 9, struct binder_version)
+ #define BINDER_GET_NODE_DEBUG_INFO _IOWR('b', 11, struct binder_node_debug_info)
+ #define BINDER_GET_NODE_INFO_FOR_REF _IOWR('b', 12, struct binder_node_info_for_ref)
++#define BINDER_SET_CONTEXT_MGR_EXT _IOW('b', 13, struct flat_binder_object)
+
+ /*
+ * NOTE: Two special error codes you should check for when calling
+@@ -276,6 +285,11 @@ struct binder_transaction_data {
+ } data;
+ };
+
++struct binder_transaction_data_secctx {
++ struct binder_transaction_data transaction_data;
++ binder_uintptr_t secctx;
++};
++
+ struct binder_transaction_data_sg {
+ struct binder_transaction_data transaction_data;
+ binder_size_t buffers_size;
+@@ -311,6 +325,11 @@ enum binder_driver_return_protocol {
+ BR_OK = _IO('r', 1),
+ /* No parameters! */
+
++ BR_TRANSACTION_SEC_CTX = _IOR('r', 2,
++ struct binder_transaction_data_secctx),
++ /*
++ * binder_transaction_data_secctx: the received command.
++ */
+ BR_TRANSACTION = _IOR('r', 2, struct binder_transaction_data),
+ BR_REPLY = _IOR('r', 3, struct binder_transaction_data),
+ /*
+diff --git a/kernel/audit.h b/kernel/audit.h
+index 91421679a168..6ffb70575082 100644
+--- a/kernel/audit.h
++++ b/kernel/audit.h
+@@ -314,7 +314,7 @@ extern void audit_trim_trees(void);
+ extern int audit_tag_tree(char *old, char *new);
+ extern const char *audit_tree_path(struct audit_tree *tree);
+ extern void audit_put_tree(struct audit_tree *tree);
+-extern void audit_kill_trees(struct list_head *list);
++extern void audit_kill_trees(struct audit_context *context);
+ #else
+ #define audit_remove_tree_rule(rule) BUG()
+ #define audit_add_tree_rule(rule) -EINVAL
+@@ -323,7 +323,7 @@ extern void audit_kill_trees(struct list_head *list);
+ #define audit_put_tree(tree) (void)0
+ #define audit_tag_tree(old, new) -EINVAL
+ #define audit_tree_path(rule) "" /* never called */
+-#define audit_kill_trees(list) BUG()
++#define audit_kill_trees(context) BUG()
+ #endif
+
+ extern char *audit_unpack_string(void **bufp, size_t *remain, size_t len);
+diff --git a/kernel/audit_tree.c b/kernel/audit_tree.c
+index d4af4d97f847..abfb112f26aa 100644
+--- a/kernel/audit_tree.c
++++ b/kernel/audit_tree.c
+@@ -524,13 +524,14 @@ static int tag_chunk(struct inode *inode, struct audit_tree *tree)
+ return 0;
+ }
+
+-static void audit_tree_log_remove_rule(struct audit_krule *rule)
++static void audit_tree_log_remove_rule(struct audit_context *context,
++ struct audit_krule *rule)
+ {
+ struct audit_buffer *ab;
+
+ if (!audit_enabled)
+ return;
+- ab = audit_log_start(NULL, GFP_KERNEL, AUDIT_CONFIG_CHANGE);
++ ab = audit_log_start(context, GFP_KERNEL, AUDIT_CONFIG_CHANGE);
+ if (unlikely(!ab))
+ return;
+ audit_log_format(ab, "op=remove_rule dir=");
+@@ -540,7 +541,7 @@ static void audit_tree_log_remove_rule(struct audit_krule *rule)
+ audit_log_end(ab);
+ }
+
+-static void kill_rules(struct audit_tree *tree)
++static void kill_rules(struct audit_context *context, struct audit_tree *tree)
+ {
+ struct audit_krule *rule, *next;
+ struct audit_entry *entry;
+@@ -551,7 +552,7 @@ static void kill_rules(struct audit_tree *tree)
+ list_del_init(&rule->rlist);
+ if (rule->tree) {
+ /* not a half-baked one */
+- audit_tree_log_remove_rule(rule);
++ audit_tree_log_remove_rule(context, rule);
+ if (entry->rule.exe)
+ audit_remove_mark(entry->rule.exe);
+ rule->tree = NULL;
+@@ -633,7 +634,7 @@ static void trim_marked(struct audit_tree *tree)
+ tree->goner = 1;
+ spin_unlock(&hash_lock);
+ mutex_lock(&audit_filter_mutex);
+- kill_rules(tree);
++ kill_rules(audit_context(), tree);
+ list_del_init(&tree->list);
+ mutex_unlock(&audit_filter_mutex);
+ prune_one(tree);
+@@ -973,8 +974,10 @@ static void audit_schedule_prune(void)
+ * ... and that one is done if evict_chunk() decides to delay until the end
+ * of syscall. Runs synchronously.
+ */
+-void audit_kill_trees(struct list_head *list)
++void audit_kill_trees(struct audit_context *context)
+ {
++ struct list_head *list = &context->killed_trees;
++
+ audit_ctl_lock();
+ mutex_lock(&audit_filter_mutex);
+
+@@ -982,7 +985,7 @@ void audit_kill_trees(struct list_head *list)
+ struct audit_tree *victim;
+
+ victim = list_entry(list->next, struct audit_tree, list);
+- kill_rules(victim);
++ kill_rules(context, victim);
+ list_del_init(&victim->list);
+
+ mutex_unlock(&audit_filter_mutex);
+@@ -1017,7 +1020,7 @@ static void evict_chunk(struct audit_chunk *chunk)
+ list_del_init(&owner->same_root);
+ spin_unlock(&hash_lock);
+ if (!postponed) {
+- kill_rules(owner);
++ kill_rules(audit_context(), owner);
+ list_move(&owner->list, &prune_list);
+ need_prune = 1;
+ } else {
+diff --git a/kernel/auditsc.c b/kernel/auditsc.c
+index 6593a5207fb0..b585ceb2f7a2 100644
+--- a/kernel/auditsc.c
++++ b/kernel/auditsc.c
+@@ -1444,6 +1444,9 @@ void __audit_free(struct task_struct *tsk)
+ if (!context)
+ return;
+
++ if (!list_empty(&context->killed_trees))
++ audit_kill_trees(context);
++
+ /* We are called either by do_exit() or the fork() error handling code;
+ * in the former case tsk == current and in the latter tsk is a
+ * random task_struct that doesn't have any meaningful data we
+@@ -1460,9 +1463,6 @@ void __audit_free(struct task_struct *tsk)
+ audit_log_exit();
+ }
+
+- if (!list_empty(&context->killed_trees))
+- audit_kill_trees(&context->killed_trees);
+-
+ audit_set_context(tsk, NULL);
+ audit_free_context(context);
+ }
+@@ -1537,6 +1537,9 @@ void __audit_syscall_exit(int success, long return_code)
+ if (!context)
+ return;
+
++ if (!list_empty(&context->killed_trees))
++ audit_kill_trees(context);
++
+ if (!context->dummy && context->in_syscall) {
+ if (success)
+ context->return_valid = AUDITSC_SUCCESS;
+@@ -1571,9 +1574,6 @@ void __audit_syscall_exit(int success, long return_code)
+ context->in_syscall = 0;
+ context->prio = context->state == AUDIT_RECORD_CONTEXT ? ~0ULL : 0;
+
+- if (!list_empty(&context->killed_trees))
+- audit_kill_trees(&context->killed_trees);
+-
+ audit_free_names(context);
+ unroll_tree_refs(context, NULL, 0);
+ audit_free_aux(context);
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 5fcce2f4209d..d53825b6fcd9 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -3187,7 +3187,7 @@ do_sim:
+ *dst_reg = *ptr_reg;
+ }
+ ret = push_stack(env, env->insn_idx + 1, env->insn_idx, true);
+- if (!ptr_is_dst_reg)
++ if (!ptr_is_dst_reg && ret)
+ *dst_reg = tmp;
+ return !ret ? -EFAULT : 0;
+ }
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index f31bd61c9466..f84bf28f36ba 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -197,7 +197,7 @@ static u64 css_serial_nr_next = 1;
+ */
+ static u16 have_fork_callback __read_mostly;
+ static u16 have_exit_callback __read_mostly;
+-static u16 have_free_callback __read_mostly;
++static u16 have_release_callback __read_mostly;
+ static u16 have_canfork_callback __read_mostly;
+
+ /* cgroup namespace for init task */
+@@ -2033,7 +2033,7 @@ struct dentry *cgroup_do_mount(struct file_system_type *fs_type, int flags,
+ struct cgroup_namespace *ns)
+ {
+ struct dentry *dentry;
+- bool new_sb;
++ bool new_sb = false;
+
+ dentry = kernfs_mount(fs_type, flags, root->kf_root, magic, &new_sb);
+
+@@ -2043,6 +2043,7 @@ struct dentry *cgroup_do_mount(struct file_system_type *fs_type, int flags,
+ */
+ if (!IS_ERR(dentry) && ns != &init_cgroup_ns) {
+ struct dentry *nsdentry;
++ struct super_block *sb = dentry->d_sb;
+ struct cgroup *cgrp;
+
+ mutex_lock(&cgroup_mutex);
+@@ -2053,12 +2054,14 @@ struct dentry *cgroup_do_mount(struct file_system_type *fs_type, int flags,
+ spin_unlock_irq(&css_set_lock);
+ mutex_unlock(&cgroup_mutex);
+
+- nsdentry = kernfs_node_dentry(cgrp->kn, dentry->d_sb);
++ nsdentry = kernfs_node_dentry(cgrp->kn, sb);
+ dput(dentry);
++ if (IS_ERR(nsdentry))
++ deactivate_locked_super(sb);
+ dentry = nsdentry;
+ }
+
+- if (IS_ERR(dentry) || !new_sb)
++ if (!new_sb)
+ cgroup_put(&root->cgrp);
+
+ return dentry;
+@@ -5313,7 +5316,7 @@ static void __init cgroup_init_subsys(struct cgroup_subsys *ss, bool early)
+
+ have_fork_callback |= (bool)ss->fork << ss->id;
+ have_exit_callback |= (bool)ss->exit << ss->id;
+- have_free_callback |= (bool)ss->free << ss->id;
++ have_release_callback |= (bool)ss->release << ss->id;
+ have_canfork_callback |= (bool)ss->can_fork << ss->id;
+
+ /* At system boot, before all subsystems have been
+@@ -5749,16 +5752,19 @@ void cgroup_exit(struct task_struct *tsk)
+ } while_each_subsys_mask();
+ }
+
+-void cgroup_free(struct task_struct *task)
++void cgroup_release(struct task_struct *task)
+ {
+- struct css_set *cset = task_css_set(task);
+ struct cgroup_subsys *ss;
+ int ssid;
+
+- do_each_subsys_mask(ss, ssid, have_free_callback) {
+- ss->free(task);
++ do_each_subsys_mask(ss, ssid, have_release_callback) {
++ ss->release(task);
+ } while_each_subsys_mask();
++}
+
++void cgroup_free(struct task_struct *task)
++{
++ struct css_set *cset = task_css_set(task);
+ put_css_set(cset);
+ }
+
+diff --git a/kernel/cgroup/pids.c b/kernel/cgroup/pids.c
+index 9829c67ebc0a..c9960baaa14f 100644
+--- a/kernel/cgroup/pids.c
++++ b/kernel/cgroup/pids.c
+@@ -247,7 +247,7 @@ static void pids_cancel_fork(struct task_struct *task)
+ pids_uncharge(pids, 1);
+ }
+
+-static void pids_free(struct task_struct *task)
++static void pids_release(struct task_struct *task)
+ {
+ struct pids_cgroup *pids = css_pids(task_css(task, pids_cgrp_id));
+
+@@ -342,7 +342,7 @@ struct cgroup_subsys pids_cgrp_subsys = {
+ .cancel_attach = pids_cancel_attach,
+ .can_fork = pids_can_fork,
+ .cancel_fork = pids_cancel_fork,
+- .free = pids_free,
++ .release = pids_release,
+ .legacy_cftypes = pids_files,
+ .dfl_cftypes = pids_files,
+ .threaded = true,
+diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
+index d503d1a9007c..bb95a35e8c2d 100644
+--- a/kernel/cgroup/rstat.c
++++ b/kernel/cgroup/rstat.c
+@@ -87,7 +87,6 @@ static struct cgroup *cgroup_rstat_cpu_pop_updated(struct cgroup *pos,
+ struct cgroup *root, int cpu)
+ {
+ struct cgroup_rstat_cpu *rstatc;
+- struct cgroup *parent;
+
+ if (pos == root)
+ return NULL;
+@@ -115,8 +114,8 @@ static struct cgroup *cgroup_rstat_cpu_pop_updated(struct cgroup *pos,
+ * However, due to the way we traverse, @pos will be the first
+ * child in most cases. The only exception is @root.
+ */
+- parent = cgroup_parent(pos);
+- if (parent && rstatc->updated_next) {
++ if (rstatc->updated_next) {
++ struct cgroup *parent = cgroup_parent(pos);
+ struct cgroup_rstat_cpu *prstatc = cgroup_rstat_cpu(parent, cpu);
+ struct cgroup_rstat_cpu *nrstatc;
+ struct cgroup **nextp;
+@@ -140,9 +139,12 @@ static struct cgroup *cgroup_rstat_cpu_pop_updated(struct cgroup *pos,
+ * updated stat.
+ */
+ smp_mb();
++
++ return pos;
+ }
+
+- return pos;
++ /* only happens for @root */
++ return NULL;
+ }
+
+ /* see cgroup_rstat_flush() */
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index d1c6d152da89..6754f3ecfd94 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -313,6 +313,15 @@ void cpus_write_unlock(void)
+
+ void lockdep_assert_cpus_held(void)
+ {
++ /*
++ * We can't have hotplug operations before userspace starts running,
++ * and some init codepaths will knowingly not take the hotplug lock.
++ * This is all valid, so mute lockdep until it makes sense to report
++ * unheld locks.
++ */
++ if (system_state < SYSTEM_RUNNING)
++ return;
++
+ percpu_rwsem_assert_held(&cpu_hotplug_lock);
+ }
+
+@@ -555,6 +564,20 @@ static void undo_cpu_up(unsigned int cpu, struct cpuhp_cpu_state *st)
+ cpuhp_invoke_callback(cpu, st->state, false, NULL, NULL);
+ }
+
++static inline bool can_rollback_cpu(struct cpuhp_cpu_state *st)
++{
++ if (IS_ENABLED(CONFIG_HOTPLUG_CPU))
++ return true;
++ /*
++ * When CPU hotplug is disabled, then taking the CPU down is not
++ * possible because takedown_cpu() and the architecture and
++ * subsystem specific mechanisms are not available. So the CPU
++ * which would be completely unplugged again needs to stay around
++ * in the current state.
++ */
++ return st->state <= CPUHP_BRINGUP_CPU;
++}
++
+ static int cpuhp_up_callbacks(unsigned int cpu, struct cpuhp_cpu_state *st,
+ enum cpuhp_state target)
+ {
+@@ -565,8 +588,10 @@ static int cpuhp_up_callbacks(unsigned int cpu, struct cpuhp_cpu_state *st,
+ st->state++;
+ ret = cpuhp_invoke_callback(cpu, st->state, true, NULL, NULL);
+ if (ret) {
+- st->target = prev_state;
+- undo_cpu_up(cpu, st);
++ if (can_rollback_cpu(st)) {
++ st->target = prev_state;
++ undo_cpu_up(cpu, st);
++ }
+ break;
+ }
+ }
+diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
+index 355d16acee6d..6310ad01f915 100644
+--- a/kernel/dma/direct.c
++++ b/kernel/dma/direct.c
+@@ -380,3 +380,14 @@ int dma_direct_supported(struct device *dev, u64 mask)
+ */
+ return mask >= __phys_to_dma(dev, min_mask);
+ }
++
++size_t dma_direct_max_mapping_size(struct device *dev)
++{
++ size_t size = SIZE_MAX;
++
++ /* If SWIOTLB is active, use its maximum mapping size */
++ if (is_swiotlb_active())
++ size = swiotlb_max_mapping_size(dev);
++
++ return size;
++}
+diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
+index a11006b6d8e8..5753008ab286 100644
+--- a/kernel/dma/mapping.c
++++ b/kernel/dma/mapping.c
+@@ -357,3 +357,17 @@ void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
+ ops->cache_sync(dev, vaddr, size, dir);
+ }
+ EXPORT_SYMBOL(dma_cache_sync);
++
++size_t dma_max_mapping_size(struct device *dev)
++{
++ const struct dma_map_ops *ops = get_dma_ops(dev);
++ size_t size = SIZE_MAX;
++
++ if (dma_is_direct(ops))
++ size = dma_direct_max_mapping_size(dev);
++ else if (ops && ops->max_mapping_size)
++ size = ops->max_mapping_size(dev);
++
++ return size;
++}
++EXPORT_SYMBOL_GPL(dma_max_mapping_size);
+diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
+index 1fb6fd68b9c7..c873f9cc2146 100644
+--- a/kernel/dma/swiotlb.c
++++ b/kernel/dma/swiotlb.c
+@@ -662,3 +662,17 @@ swiotlb_dma_supported(struct device *hwdev, u64 mask)
+ {
+ return __phys_to_dma(hwdev, io_tlb_end - 1) <= mask;
+ }
++
++size_t swiotlb_max_mapping_size(struct device *dev)
++{
++ return ((size_t)1 << IO_TLB_SHIFT) * IO_TLB_SEGSIZE;
++}
++
++bool is_swiotlb_active(void)
++{
++ /*
++ * When SWIOTLB is initialized, even if io_tlb_start points to physical
++ * address zero, io_tlb_end surely doesn't.
++ */
++ return io_tlb_end != 0;
++}
+diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
+index 5ab4fe3b1dcc..878c62ec0190 100644
+--- a/kernel/events/ring_buffer.c
++++ b/kernel/events/ring_buffer.c
+@@ -658,7 +658,7 @@ int rb_alloc_aux(struct ring_buffer *rb, struct perf_event *event,
+ goto out;
+ }
+
+- rb->aux_priv = event->pmu->setup_aux(event->cpu, rb->aux_pages, nr_pages,
++ rb->aux_priv = event->pmu->setup_aux(event, rb->aux_pages, nr_pages,
+ overwrite);
+ if (!rb->aux_priv)
+ goto out;
+diff --git a/kernel/exit.c b/kernel/exit.c
+index 2639a30a8aa5..2166c2d92ddc 100644
+--- a/kernel/exit.c
++++ b/kernel/exit.c
+@@ -219,6 +219,7 @@ repeat:
+ }
+
+ write_unlock_irq(&tasklist_lock);
++ cgroup_release(p);
+ release_thread(p);
+ call_rcu(&p->rcu, delayed_put_task_struct);
+
+diff --git a/kernel/futex.c b/kernel/futex.c
+index a0514e01c3eb..52668d44e07b 100644
+--- a/kernel/futex.c
++++ b/kernel/futex.c
+@@ -3440,6 +3440,10 @@ static int handle_futex_death(u32 __user *uaddr, struct task_struct *curr, int p
+ {
+ u32 uval, uninitialized_var(nval), mval;
+
++ /* Futex address must be 32bit aligned */
++ if ((((unsigned long)uaddr) % sizeof(*uaddr)) != 0)
++ return -1;
++
+ retry:
+ if (get_user(uval, uaddr))
+ return -1;
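handle_futex_death() now rejects unaligned entries up front: the robust list is user-supplied, and a futex word that is not 32-bit aligned can never be a valid futex, so it is refused before the get_user()/cmpxchg dance. A userspace sketch of the check:

    #include <stdint.h>
    #include <stdio.h>

    /* Mirrors the new test: uaddr % sizeof(u32) must be zero. */
    static int futex_addr_ok(const void *uaddr)
    {
            return ((uintptr_t)uaddr % sizeof(uint32_t)) == 0;
    }

    int main(void)
    {
            uint32_t word;
            char *p = (char *)&word;

            printf("%p -> %s\n", (void *)p,
                   futex_addr_ok(p) ? "ok" : "rejected");
            printf("%p -> %s\n", (void *)(p + 1),
                   futex_addr_ok(p + 1) ? "ok" : "rejected");
            return 0;
    }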
+diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
+index 34e969069488..b07a2acc4eec 100644
+--- a/kernel/irq/chip.c
++++ b/kernel/irq/chip.c
+@@ -855,7 +855,11 @@ void handle_percpu_irq(struct irq_desc *desc)
+ {
+ struct irq_chip *chip = irq_desc_get_chip(desc);
+
+- kstat_incr_irqs_this_cpu(desc);
++ /*
++ * PER CPU interrupts are not serialized. Do not touch
++ * desc->tot_count.
++ */
++ __kstat_incr_irqs_this_cpu(desc);
+
+ if (chip->irq_ack)
+ chip->irq_ack(&desc->irq_data);
+@@ -884,7 +888,11 @@ void handle_percpu_devid_irq(struct irq_desc *desc)
+ unsigned int irq = irq_desc_get_irq(desc);
+ irqreturn_t res;
+
+- kstat_incr_irqs_this_cpu(desc);
++ /*
++ * PER CPU interrupts are not serialized. Do not touch
++ * desc->tot_count.
++ */
++ __kstat_incr_irqs_this_cpu(desc);
+
+ if (chip->irq_ack)
+ chip->irq_ack(&desc->irq_data);
+@@ -1376,6 +1384,10 @@ int irq_chip_set_vcpu_affinity_parent(struct irq_data *data, void *vcpu_info)
+ int irq_chip_set_wake_parent(struct irq_data *data, unsigned int on)
+ {
+ data = data->parent_data;
++
++ if (data->chip->flags & IRQCHIP_SKIP_SET_WAKE)
++ return 0;
++
+ if (data->chip->irq_set_wake)
+ return data->chip->irq_set_wake(data, on);
+
+diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
+index ca6afa267070..e74e7eea76cf 100644
+--- a/kernel/irq/internals.h
++++ b/kernel/irq/internals.h
+@@ -242,12 +242,18 @@ static inline void irq_state_set_masked(struct irq_desc *desc)
+
+ #undef __irqd_to_state
+
+-static inline void kstat_incr_irqs_this_cpu(struct irq_desc *desc)
++static inline void __kstat_incr_irqs_this_cpu(struct irq_desc *desc)
+ {
+ __this_cpu_inc(*desc->kstat_irqs);
+ __this_cpu_inc(kstat.irqs_sum);
+ }
+
++static inline void kstat_incr_irqs_this_cpu(struct irq_desc *desc)
++{
++ __kstat_incr_irqs_this_cpu(desc);
++ desc->tot_count++;
++}
++
+ static inline int irq_desc_get_node(struct irq_desc *desc)
+ {
+ return irq_common_data_get_node(&desc->irq_common_data);
+diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
+index ef8ad36cadcf..e16e022eae09 100644
+--- a/kernel/irq/irqdesc.c
++++ b/kernel/irq/irqdesc.c
+@@ -119,6 +119,7 @@ static void desc_set_defaults(unsigned int irq, struct irq_desc *desc, int node,
+ desc->depth = 1;
+ desc->irq_count = 0;
+ desc->irqs_unhandled = 0;
++ desc->tot_count = 0;
+ desc->name = NULL;
+ desc->owner = owner;
+ for_each_possible_cpu(cpu)
+@@ -557,6 +558,7 @@ int __init early_irq_init(void)
+ alloc_masks(&desc[i], node);
+ raw_spin_lock_init(&desc[i].lock);
+ lockdep_set_class(&desc[i].lock, &irq_desc_lock_class);
++ mutex_init(&desc[i].request_mutex);
+ desc_set_defaults(i, &desc[i], node, NULL, NULL);
+ }
+ return arch_early_irq_init();
+@@ -919,11 +921,15 @@ unsigned int kstat_irqs_cpu(unsigned int irq, int cpu)
+ unsigned int kstat_irqs(unsigned int irq)
+ {
+ struct irq_desc *desc = irq_to_desc(irq);
+- int cpu;
+ unsigned int sum = 0;
++ int cpu;
+
+ if (!desc || !desc->kstat_irqs)
+ return 0;
++ if (!irq_settings_is_per_cpu_devid(desc) &&
++ !irq_settings_is_per_cpu(desc))
++ return desc->tot_count;
++
+ for_each_possible_cpu(cpu)
+ sum += *per_cpu_ptr(desc->kstat_irqs, cpu);
+ return sum;
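kstat_irqs() can now answer from desc->tot_count for ordinary interrupts because their handlers are serialized per descriptor, so a single total stays consistent; only genuinely per-CPU interrupts still need the O(nr_cpus) walk. A userspace sketch of the trade-off:

    #include <stdio.h>

    #define NCPUS 8

    struct desc_like {
            unsigned long kstat_irqs[NCPUS]; /* per-cpu buckets */
            unsigned long tot_count;         /* bumped under the same
                                                serialization as the
                                                handler itself */
    };

    static unsigned long kstat_sum(const struct desc_like *d)
    {
            unsigned long sum = 0;

            for (int cpu = 0; cpu < NCPUS; cpu++) /* old: walk all CPUs */
                    sum += d->kstat_irqs[cpu];
            return sum;
    }

    int main(void)
    {
            struct desc_like d = { .kstat_irqs = { 3, 5 }, .tot_count = 8 };

            printf("summed: %lu, cached total: %lu\n",
                   kstat_sum(&d), d.tot_count);
            return 0;
    }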
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index 95932333a48b..e805fe3bf87f 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -3535,6 +3535,9 @@ static int __lock_downgrade(struct lockdep_map *lock, unsigned long ip)
+ unsigned int depth;
+ int i;
+
++ if (unlikely(!debug_locks))
++ return 0;
++
+ depth = curr->lockdep_depth;
+ /*
+ * This function is about (re)setting the class of a held lock,
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 9180158756d2..38d44d27e37a 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -1557,14 +1557,23 @@ static bool rcu_future_gp_cleanup(struct rcu_node *rnp)
+ }
+
+ /*
+- * Awaken the grace-period kthread. Don't do a self-awaken, and don't
+- * bother awakening when there is nothing for the grace-period kthread
+- * to do (as in several CPUs raced to awaken, and we lost), and finally
+- * don't try to awaken a kthread that has not yet been created.
++ * Awaken the grace-period kthread. Don't do a self-awaken (unless in
++ * an interrupt or softirq handler), and don't bother awakening when there
++ * is nothing for the grace-period kthread to do (as in several CPUs raced
++ * to awaken, and we lost), and finally don't try to awaken a kthread that
++ * has not yet been created. If all those checks are passed, track some
++ * debug information and awaken.
++ *
++ * So why do the self-wakeup when in an interrupt or softirq handler
++ * in the grace-period kthread's context? Because the kthread might have
++ * been interrupted just as it was going to sleep, and just after the final
++ * pre-sleep check of the awaken condition. In this case, a wakeup really
++ * is required, and is therefore supplied.
+ */
+ static void rcu_gp_kthread_wake(void)
+ {
+- if (current == rcu_state.gp_kthread ||
++ if ((current == rcu_state.gp_kthread &&
++ !in_interrupt() && !in_serving_softirq()) ||
+ !READ_ONCE(rcu_state.gp_flags) ||
+ !rcu_state.gp_kthread)
+ return;
+diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
+index 1971869c4072..f4ca36d92138 100644
+--- a/kernel/rcu/update.c
++++ b/kernel/rcu/update.c
+@@ -52,6 +52,7 @@
+ #include <linux/tick.h>
+ #include <linux/rcupdate_wait.h>
+ #include <linux/sched/isolation.h>
++#include <linux/kprobes.h>
+
+ #define CREATE_TRACE_POINTS
+
+@@ -249,6 +250,7 @@ int notrace debug_lockdep_rcu_enabled(void)
+ current->lockdep_recursion == 0;
+ }
+ EXPORT_SYMBOL_GPL(debug_lockdep_rcu_enabled);
++NOKPROBE_SYMBOL(debug_lockdep_rcu_enabled);
+
+ /**
+ * rcu_read_lock_held() - might we be in RCU read-side critical section?
+diff --git a/kernel/resource.c b/kernel/resource.c
+index 915c02e8e5dd..ca7ed5158cff 100644
+--- a/kernel/resource.c
++++ b/kernel/resource.c
+@@ -382,7 +382,7 @@ static int __walk_iomem_res_desc(resource_size_t start, resource_size_t end,
+ int (*func)(struct resource *, void *))
+ {
+ struct resource res;
+- int ret = -1;
++ int ret = -EINVAL;
+
+ while (start < end &&
+ !find_next_iomem_res(start, end, flags, desc, first_lvl, &res)) {
+@@ -462,7 +462,7 @@ int walk_system_ram_range(unsigned long start_pfn, unsigned long nr_pages,
+ unsigned long flags;
+ struct resource res;
+ unsigned long pfn, end_pfn;
+- int ret = -1;
++ int ret = -EINVAL;
+
+ start = (u64) start_pfn << PAGE_SHIFT;
+ end = ((u64)(start_pfn + nr_pages) << PAGE_SHIFT) - 1;
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index d8d76a65cfdd..01a2489de94e 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -107,11 +107,12 @@ struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
+ * [L] ->on_rq
+ * RELEASE (rq->lock)
+ *
+- * If we observe the old CPU in task_rq_lock, the acquire of
++ * If we observe the old CPU in task_rq_lock(), the acquire of
+ * the old rq->lock will fully serialize against the stores.
+ *
+- * If we observe the new CPU in task_rq_lock, the acquire will
+- * pair with the WMB to ensure we must then also see migrating.
++ * If we observe the new CPU in task_rq_lock(), the address
++ * dependency headed by '[L] rq = task_rq()' and the acquire
++ * will pair with the WMB to ensure we then also see migrating.
+ */
+ if (likely(rq == task_rq(p) && !task_on_rq_migrating(p))) {
+ rq_pin_lock(rq, rf);
+@@ -928,7 +929,7 @@ static struct rq *move_queued_task(struct rq *rq, struct rq_flags *rf,
+ {
+ lockdep_assert_held(&rq->lock);
+
+- p->on_rq = TASK_ON_RQ_MIGRATING;
++ WRITE_ONCE(p->on_rq, TASK_ON_RQ_MIGRATING);
+ dequeue_task(rq, p, DEQUEUE_NOCLOCK);
+ set_task_cpu(p, new_cpu);
+ rq_unlock(rq, rf);
+diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
+index de3de997e245..8039d62ae36e 100644
+--- a/kernel/sched/debug.c
++++ b/kernel/sched/debug.c
+@@ -315,6 +315,7 @@ void register_sched_domain_sysctl(void)
+ {
+ static struct ctl_table *cpu_entries;
+ static struct ctl_table **cpu_idx;
++ static bool init_done = false;
+ char buf[32];
+ int i;
+
+@@ -344,7 +345,10 @@ void register_sched_domain_sysctl(void)
+ if (!cpumask_available(sd_sysctl_cpus)) {
+ if (!alloc_cpumask_var(&sd_sysctl_cpus, GFP_KERNEL))
+ return;
++ }
+
++ if (!init_done) {
++ init_done = true;
+ /* init to possible to not have holes in @cpu_entries */
+ cpumask_copy(sd_sysctl_cpus, cpu_possible_mask);
+ }
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 310d0637fe4b..5e61a1a99e38 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -7713,10 +7713,10 @@ static void update_cfs_rq_h_load(struct cfs_rq *cfs_rq)
+ if (cfs_rq->last_h_load_update == now)
+ return;
+
+- cfs_rq->h_load_next = NULL;
++ WRITE_ONCE(cfs_rq->h_load_next, NULL);
+ for_each_sched_entity(se) {
+ cfs_rq = cfs_rq_of(se);
+- cfs_rq->h_load_next = se;
++ WRITE_ONCE(cfs_rq->h_load_next, se);
+ if (cfs_rq->last_h_load_update == now)
+ break;
+ }
+@@ -7726,7 +7726,7 @@ static void update_cfs_rq_h_load(struct cfs_rq *cfs_rq)
+ cfs_rq->last_h_load_update = now;
+ }
+
+- while ((se = cfs_rq->h_load_next) != NULL) {
++ while ((se = READ_ONCE(cfs_rq->h_load_next)) != NULL) {
+ load = cfs_rq->h_load;
+ load = div64_ul(load * se->avg.load_avg,
+ cfs_rq_load_avg(cfs_rq) + 1);
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index d04530bf251f..425a5589e5f6 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -1460,9 +1460,9 @@ static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
+ */
+ smp_wmb();
+ #ifdef CONFIG_THREAD_INFO_IN_TASK
+- p->cpu = cpu;
++ WRITE_ONCE(p->cpu, cpu);
+ #else
+- task_thread_info(p)->cpu = cpu;
++ WRITE_ONCE(task_thread_info(p)->cpu, cpu);
+ #endif
+ p->wake_cpu = cpu;
+ #endif
+@@ -1563,7 +1563,7 @@ static inline int task_on_rq_queued(struct task_struct *p)
+
+ static inline int task_on_rq_migrating(struct task_struct *p)
+ {
+- return p->on_rq == TASK_ON_RQ_MIGRATING;
++ return READ_ONCE(p->on_rq) == TASK_ON_RQ_MIGRATING;
+ }
+
+ /*
+diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
+index 3f35ba1d8fde..efca2489d881 100644
+--- a/kernel/sched/topology.c
++++ b/kernel/sched/topology.c
+@@ -676,7 +676,7 @@ cpu_attach_domain(struct sched_domain *sd, struct root_domain *rd, int cpu)
+ }
+
+ struct s_data {
+- struct sched_domain ** __percpu sd;
++ struct sched_domain * __percpu *sd;
+ struct root_domain *rd;
+ };
+
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index ba4d9e85feb8..28ec71d914c7 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -127,6 +127,7 @@ static int __maybe_unused one = 1;
+ static int __maybe_unused two = 2;
+ static int __maybe_unused four = 4;
+ static unsigned long one_ul = 1;
++static unsigned long long_max = LONG_MAX;
+ static int one_hundred = 100;
+ static int one_thousand = 1000;
+ #ifdef CONFIG_PRINTK
+@@ -1722,6 +1723,8 @@ static struct ctl_table fs_table[] = {
+ .maxlen = sizeof(files_stat.max_files),
+ .mode = 0644,
+ .proc_handler = proc_doulongvec_minmax,
++ .extra1 = &zero,
++ .extra2 = &long_max,
+ },
+ {
+ .procname = "nr_open",
+@@ -2579,7 +2582,16 @@ static int do_proc_dointvec_minmax_conv(bool *negp, unsigned long *lvalp,
+ {
+ struct do_proc_dointvec_minmax_conv_param *param = data;
+ if (write) {
+- int val = *negp ? -*lvalp : *lvalp;
++ int val;
++ if (*negp) {
++ if (*lvalp > (unsigned long) INT_MAX + 1)
++ return -EINVAL;
++ val = -*lvalp;
++ } else {
++ if (*lvalp > (unsigned long) INT_MAX)
++ return -EINVAL;
++ val = *lvalp;
++ }
+ if ((param->min && *param->min > val) ||
+ (param->max && *param->max < val))
+ return -EINVAL;
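
The do_proc_dointvec_minmax_conv() fix above closes an integer-overflow hole: the parsed value arrives as an unsigned long, and blindly negating or truncating it into an int let out-of-range sysctl writes wrap around. Note the asymmetric bounds: a negative input may go up to (unsigned long)INT_MAX + 1, which negates to INT_MIN, while a positive one must stay at or below INT_MAX. A small runnable model of the same check (user-space C, illustrative names; like the kernel, it assumes the usual two's-complement int):

    #include <limits.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* reject out-of-range input instead of silently truncating it */
    static bool ulong_to_int(unsigned long lval, bool neg, int *out)
    {
            if (neg) {
                    if (lval > (unsigned long)INT_MAX + 1)
                            return false;
                    *out = -lval;           /* INT_MIN is reachable */
            } else {
                    if (lval > (unsigned long)INT_MAX)
                            return false;
                    *out = lval;
            }
            return true;
    }

    int main(void)
    {
            int v;
            printf("%d\n", ulong_to_int((unsigned long)INT_MAX + 1, true, &v));  /* 1: INT_MIN */
            printf("%d\n", ulong_to_int((unsigned long)INT_MAX + 1, false, &v)); /* 0: rejected */
            return 0;
    }
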
+diff --git a/kernel/time/alarmtimer.c b/kernel/time/alarmtimer.c
+index 2c97e8c2d29f..0519a8805aab 100644
+--- a/kernel/time/alarmtimer.c
++++ b/kernel/time/alarmtimer.c
+@@ -594,7 +594,7 @@ static ktime_t alarm_timer_remaining(struct k_itimer *timr, ktime_t now)
+ {
+ struct alarm *alarm = &timr->it.alarm.alarmtimer;
+
+- return ktime_sub(now, alarm->node.expires);
++ return ktime_sub(alarm->node.expires, now);
+ }
+
+ /**
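
The alarmtimer fix above is a plain argument-order bug: time remaining is expires - now, and the old ktime_sub(now, expires) handed back the negated value. Tiny illustration (user-space; ktime_t modelled as signed nanoseconds, as in the kernel):

    #include <stdio.h>

    typedef long long ktime_t;
    static ktime_t ktime_sub(ktime_t a, ktime_t b) { return a - b; }

    int main(void)
    {
            ktime_t now = 1000, expires = 1500;
            printf("%lld\n", ktime_sub(expires, now));  /* 500: remaining time */
            printf("%lld\n", ktime_sub(now, expires));  /* -500: the pre-fix result */
            return 0;
    }
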
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 06e864a334bb..b49affb4666b 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -4205,6 +4205,7 @@ EXPORT_SYMBOL_GPL(ring_buffer_consume);
+ * ring_buffer_read_prepare - Prepare for a non consuming read of the buffer
+ * @buffer: The ring buffer to read from
+ * @cpu: The cpu buffer to iterate over
++ * @flags: gfp flags to use for memory allocation
+ *
+ * This performs the initial preparations necessary to iterate
+ * through the buffer. Memory is allocated, buffer recording
+@@ -4222,7 +4223,7 @@ EXPORT_SYMBOL_GPL(ring_buffer_consume);
+ * This overall must be paired with ring_buffer_read_finish.
+ */
+ struct ring_buffer_iter *
+-ring_buffer_read_prepare(struct ring_buffer *buffer, int cpu)
++ring_buffer_read_prepare(struct ring_buffer *buffer, int cpu, gfp_t flags)
+ {
+ struct ring_buffer_per_cpu *cpu_buffer;
+ struct ring_buffer_iter *iter;
+@@ -4230,7 +4231,7 @@ ring_buffer_read_prepare(struct ring_buffer *buffer, int cpu)
+ if (!cpumask_test_cpu(cpu, buffer->cpumask))
+ return NULL;
+
+- iter = kmalloc(sizeof(*iter), GFP_KERNEL);
++ iter = kmalloc(sizeof(*iter), flags);
+ if (!iter)
+ return NULL;
+
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index c4238b441624..89158aa93fa6 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -3904,7 +3904,8 @@ __tracing_open(struct inode *inode, struct file *file, bool snapshot)
+ if (iter->cpu_file == RING_BUFFER_ALL_CPUS) {
+ for_each_tracing_cpu(cpu) {
+ iter->buffer_iter[cpu] =
+- ring_buffer_read_prepare(iter->trace_buffer->buffer, cpu);
++ ring_buffer_read_prepare(iter->trace_buffer->buffer,
++ cpu, GFP_KERNEL);
+ }
+ ring_buffer_read_prepare_sync();
+ for_each_tracing_cpu(cpu) {
+@@ -3914,7 +3915,8 @@ __tracing_open(struct inode *inode, struct file *file, bool snapshot)
+ } else {
+ cpu = iter->cpu_file;
+ iter->buffer_iter[cpu] =
+- ring_buffer_read_prepare(iter->trace_buffer->buffer, cpu);
++ ring_buffer_read_prepare(iter->trace_buffer->buffer,
++ cpu, GFP_KERNEL);
+ ring_buffer_read_prepare_sync();
+ ring_buffer_read_start(iter->buffer_iter[cpu]);
+ tracing_iter_reset(iter, cpu);
+@@ -5626,7 +5628,6 @@ out:
+ return ret;
+
+ fail:
+- kfree(iter->trace);
+ kfree(iter);
+ __trace_array_put(tr);
+ mutex_unlock(&trace_types_lock);
+diff --git a/kernel/trace/trace_dynevent.c b/kernel/trace/trace_dynevent.c
+index dd1f43588d70..fa100ed3b4de 100644
+--- a/kernel/trace/trace_dynevent.c
++++ b/kernel/trace/trace_dynevent.c
+@@ -74,7 +74,7 @@ int dyn_event_release(int argc, char **argv, struct dyn_event_operations *type)
+ static int create_dyn_event(int argc, char **argv)
+ {
+ struct dyn_event_operations *ops;
+- int ret;
++ int ret = -ENODEV;
+
+ if (argv[0][0] == '-' || argv[0][0] == '!')
+ return dyn_event_release(argc, argv, NULL);
+diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c
+index 76217bbef815..4629a6104474 100644
+--- a/kernel/trace/trace_event_perf.c
++++ b/kernel/trace/trace_event_perf.c
+@@ -299,15 +299,13 @@ int perf_uprobe_init(struct perf_event *p_event,
+
+ if (!p_event->attr.uprobe_path)
+ return -EINVAL;
+- path = kzalloc(PATH_MAX, GFP_KERNEL);
+- if (!path)
+- return -ENOMEM;
+- ret = strncpy_from_user(
+- path, u64_to_user_ptr(p_event->attr.uprobe_path), PATH_MAX);
+- if (ret == PATH_MAX)
+- return -E2BIG;
+- if (ret < 0)
+- goto out;
++
++ path = strndup_user(u64_to_user_ptr(p_event->attr.uprobe_path),
++ PATH_MAX);
++ if (IS_ERR(path)) {
++ ret = PTR_ERR(path);
++ return (ret == -EINVAL) ? -E2BIG : ret;
++ }
+ if (path[0] == '\0') {
+ ret = -EINVAL;
+ goto out;
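
Besides being shorter, the strndup_user() conversion above fixes two defects in the removed code: the ret == PATH_MAX branch returned without freeing the buffer (a leak), and the copy used a fixed PATH_MAX allocation regardless of the string's length. strndup_user() sizes the allocation to the string and returns ERR_PTR(-EINVAL) when the string is not terminated within the limit, which the caller remaps to -E2BIG. The idiom, as a kernel-style sketch with illustrative names:

    #include <linux/err.h>
    #include <linux/limits.h>
    #include <linux/string.h>

    static int copy_user_path(const char __user *upath, char **out)
    {
            char *path = strndup_user(upath, PATH_MAX);

            if (IS_ERR(path))
                    return PTR_ERR(path);   /* -EINVAL, -EFAULT or -ENOMEM */
            *out = path;                    /* caller must kfree() it */
            return 0;
    }
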
+diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c
+index 27821480105e..217ef481fbbb 100644
+--- a/kernel/trace/trace_events_filter.c
++++ b/kernel/trace/trace_events_filter.c
+@@ -1301,7 +1301,7 @@ static int parse_pred(const char *str, void *data,
+ /* go past the last quote */
+ i++;
+
+- } else if (isdigit(str[i])) {
++ } else if (isdigit(str[i]) || str[i] == '-') {
+
+ /* Make sure the field is not a string */
+ if (is_string_field(field)) {
+@@ -1314,6 +1314,9 @@ static int parse_pred(const char *str, void *data,
+ goto err_free;
+ }
+
++ if (str[i] == '-')
++ i++;
++
+ /* We allow 0xDEADBEEF */
+ while (isalnum(str[i]))
+ i++;
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 449d90cfa151..55b72b1c63a0 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -4695,9 +4695,10 @@ static inline void add_to_key(char *compound_key, void *key,
+ /* ensure NULL-termination */
+ if (size > key_field->size - 1)
+ size = key_field->size - 1;
+- }
+
+- memcpy(compound_key + key_field->offset, key, size);
++ strncpy(compound_key + key_field->offset, (char *)key, size);
++ } else
++ memcpy(compound_key + key_field->offset, key, size);
+ }
+
+ static void
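
The add_to_key() change above switches string keys from memcpy() to strncpy() because compound keys are later hashed and compared as fixed-size blobs: memcpy() drags along whatever bytes follow the terminator, so equal strings could yield unequal keys, while strncpy() zero-fills the tail. Runnable demonstration (user-space C):

    #include <stdio.h>
    #include <string.h>

    #define KEYLEN 8

    int main(void)
    {
            char a[KEYLEN], b[KEYLEN];

            memset(a, 0xAA, sizeof(a));   /* stale bytes, as in a reused buffer */
            memset(b, 0xBB, sizeof(b));

            strncpy(a, "cat", KEYLEN);    /* copies "cat", zero-fills the rest */
            strncpy(b, "cat", KEYLEN);

            /* equal as fixed-size keys; memcpy()'d buffers would differ */
            printf("%d\n", memcmp(a, b, KEYLEN) == 0);
            return 0;
    }
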
+diff --git a/kernel/trace/trace_kdb.c b/kernel/trace/trace_kdb.c
+index d953c163a079..810d78a8d14c 100644
+--- a/kernel/trace/trace_kdb.c
++++ b/kernel/trace/trace_kdb.c
+@@ -51,14 +51,16 @@ static void ftrace_dump_buf(int skip_lines, long cpu_file)
+ if (cpu_file == RING_BUFFER_ALL_CPUS) {
+ for_each_tracing_cpu(cpu) {
+ iter.buffer_iter[cpu] =
+- ring_buffer_read_prepare(iter.trace_buffer->buffer, cpu);
++ ring_buffer_read_prepare(iter.trace_buffer->buffer,
++ cpu, GFP_ATOMIC);
+ ring_buffer_read_start(iter.buffer_iter[cpu]);
+ tracing_iter_reset(&iter, cpu);
+ }
+ } else {
+ iter.cpu_file = cpu_file;
+ iter.buffer_iter[cpu_file] =
+- ring_buffer_read_prepare(iter.trace_buffer->buffer, cpu_file);
++ ring_buffer_read_prepare(iter.trace_buffer->buffer,
++ cpu_file, GFP_ATOMIC);
+ ring_buffer_read_start(iter.buffer_iter[cpu_file]);
+ tracing_iter_reset(&iter, cpu_file);
+ }
+diff --git a/kernel/watchdog.c b/kernel/watchdog.c
+index 977918d5d350..bbc4940f21af 100644
+--- a/kernel/watchdog.c
++++ b/kernel/watchdog.c
+@@ -547,13 +547,15 @@ static void softlockup_start_all(void)
+
+ int lockup_detector_online_cpu(unsigned int cpu)
+ {
+- watchdog_enable(cpu);
++ if (cpumask_test_cpu(cpu, &watchdog_allowed_mask))
++ watchdog_enable(cpu);
+ return 0;
+ }
+
+ int lockup_detector_offline_cpu(unsigned int cpu)
+ {
+- watchdog_disable(cpu);
++ if (cpumask_test_cpu(cpu, &watchdog_allowed_mask))
++ watchdog_disable(cpu);
+ return 0;
+ }
+
+diff --git a/lib/bsearch.c b/lib/bsearch.c
+index 18b445b010c3..82512fe7b33c 100644
+--- a/lib/bsearch.c
++++ b/lib/bsearch.c
+@@ -11,6 +11,7 @@
+
+ #include <linux/export.h>
+ #include <linux/bsearch.h>
++#include <linux/kprobes.h>
+
+ /*
+ * bsearch - binary search an array of elements
+@@ -53,3 +54,4 @@ void *bsearch(const void *key, const void *base, size_t num, size_t size,
+ return NULL;
+ }
+ EXPORT_SYMBOL(bsearch);
++NOKPROBE_SYMBOL(bsearch);
+diff --git a/lib/raid6/Makefile b/lib/raid6/Makefile
+index 4e90d443d1b0..e723eacf7868 100644
+--- a/lib/raid6/Makefile
++++ b/lib/raid6/Makefile
+@@ -39,7 +39,7 @@ endif
+ ifeq ($(CONFIG_KERNEL_MODE_NEON),y)
+ NEON_FLAGS := -ffreestanding
+ ifeq ($(ARCH),arm)
+-NEON_FLAGS += -mfloat-abi=softfp -mfpu=neon
++NEON_FLAGS += -march=armv7-a -mfloat-abi=softfp -mfpu=neon
+ endif
+ CFLAGS_recov_neon_inner.o += $(NEON_FLAGS)
+ ifeq ($(ARCH),arm64)
+diff --git a/lib/rhashtable.c b/lib/rhashtable.c
+index 852ffa5160f1..4edcf3310513 100644
+--- a/lib/rhashtable.c
++++ b/lib/rhashtable.c
+@@ -416,8 +416,12 @@ static void rht_deferred_worker(struct work_struct *work)
+ else if (tbl->nest)
+ err = rhashtable_rehash_alloc(ht, tbl, tbl->size);
+
+- if (!err)
+- err = rhashtable_rehash_table(ht);
++ if (!err || err == -EEXIST) {
++ int nerr;
++
++ nerr = rhashtable_rehash_table(ht);
++ err = err ?: nerr;
++ }
+
+ mutex_unlock(&ht->mutex);
+
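
The `err = err ?: nerr;` line above uses the GNU conditional-with-omitted-operand extension (a ?: b evaluates a once and yields it if non-zero, else b): the first error, here a possible -EEXIST from a racing rehash, wins over the later rhashtable_rehash_table() result. Runnable example (user-space C, GCC or Clang):

    #include <stdio.h>

    int main(void)
    {
            int err = 0, nerr = -17;

            err = err ?: nerr;      /* err was 0, take nerr */
            printf("%d\n", err);    /* -17 */

            err = -5;
            err = err ?: nerr;      /* earlier error is preserved */
            printf("%d\n", err);    /* -5 */
            return 0;
    }
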
+diff --git a/lib/string.c b/lib/string.c
+index 38e4ca08e757..3ab861c1a857 100644
+--- a/lib/string.c
++++ b/lib/string.c
+@@ -866,6 +866,26 @@ __visible int memcmp(const void *cs, const void *ct, size_t count)
+ EXPORT_SYMBOL(memcmp);
+ #endif
+
++#ifndef __HAVE_ARCH_BCMP
++/**
++ * bcmp - returns 0 if and only if the buffers have identical contents.
++ * @a: pointer to first buffer.
++ * @b: pointer to second buffer.
++ * @len: size of buffers.
++ *
++ * The sign or magnitude of a non-zero return value has no particular
++ * meaning, and architectures may implement their own more efficient bcmp(). So
++ * while this particular implementation is a simple (tail) call to memcmp, do
++ * not rely on anything but whether the return value is zero or non-zero.
++ */
++#undef bcmp
++int bcmp(const void *a, const void *b, size_t len)
++{
++ return memcmp(a, b, len);
++}
++EXPORT_SYMBOL(bcmp);
++#endif
++
+ #ifndef __HAVE_ARCH_MEMSCAN
+ /**
+ * memscan - Find a character in an area of memory.
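
The new bcmp() above exists because compilers are free to lower a memcmp() whose result is only compared against zero into a bcmp() call, and the kernel previously provided no such symbol. As the comment stresses, only the zero/non-zero distinction is defined; callers must not rely on sign or magnitude. Runnable usage sketch (user-space C, with a local stand-in for the kernel's bcmp):

    #include <stdio.h>
    #include <string.h>

    /* mirror of the patch: correctness needs only zero vs. non-zero */
    static int my_bcmp(const void *a, const void *b, size_t len)
    {
            return memcmp(a, b, len);
    }

    int main(void)
    {
            printf("%d\n", my_bcmp("abc", "abc", 3) == 0);  /* 1 */
            printf("%d\n", my_bcmp("abc", "abd", 3) != 0);  /* 1; the sign is unspecified */
            return 0;
    }
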
+diff --git a/mm/cma.c b/mm/cma.c
+index c7b39dd3b4f6..f4f3a8a57d86 100644
+--- a/mm/cma.c
++++ b/mm/cma.c
+@@ -353,12 +353,14 @@ int __init cma_declare_contiguous(phys_addr_t base,
+
+ ret = cma_init_reserved_mem(base, size, order_per_bit, name, res_cma);
+ if (ret)
+- goto err;
++ goto free_mem;
+
+ pr_info("Reserved %ld MiB at %pa\n", (unsigned long)size / SZ_1M,
+ &base);
+ return 0;
+
++free_mem:
++ memblock_free(base, size);
+ err:
+ pr_err("Failed to reserve %ld MiB\n", (unsigned long)size / SZ_1M);
+ return ret;
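
The cma_declare_contiguous() fix above adds a dedicated unwind label so the memblock reservation made earlier in the function is returned when cma_init_reserved_mem() fails; the pre-existing err label only logged. The usual ordered-unwind shape, as a runnable sketch with illustrative names:

    #include <stdio.h>
    #include <stdlib.h>

    static void *mem_region;

    static int setup(int fail_late)
    {
            void *mem = malloc(64);

            if (!mem)
                    goto err;           /* nothing acquired yet */
            if (fail_late)
                    goto free_mem;      /* undo exactly what was taken */
            mem_region = mem;           /* success: keep the buffer */
            return 0;

    free_mem:
            free(mem);                  /* labels unwind in reverse order */
    err:
            fprintf(stderr, "setup failed\n");
            return -1;
    }

    int main(void) { return setup(1) ? 1 : 0; }
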
+diff --git a/mm/debug.c b/mm/debug.c
+index 1611cf00a137..854d5f84047d 100644
+--- a/mm/debug.c
++++ b/mm/debug.c
+@@ -79,7 +79,7 @@ void __dump_page(struct page *page, const char *reason)
+ pr_warn("ksm ");
+ else if (mapping) {
+ pr_warn("%ps ", mapping->a_ops);
+- if (mapping->host->i_dentry.first) {
++ if (mapping->host && mapping->host->i_dentry.first) {
+ struct dentry *dentry;
+ dentry = container_of(mapping->host->i_dentry.first, struct dentry, d_u.d_alias);
+ pr_warn("name:\"%pd\" ", dentry);
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index faf357eaf0ce..8b03c698f86e 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -753,6 +753,21 @@ static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
+ spinlock_t *ptl;
+
+ ptl = pmd_lock(mm, pmd);
++ if (!pmd_none(*pmd)) {
++ if (write) {
++ if (pmd_pfn(*pmd) != pfn_t_to_pfn(pfn)) {
++ WARN_ON_ONCE(!is_huge_zero_pmd(*pmd));
++ goto out_unlock;
++ }
++ entry = pmd_mkyoung(*pmd);
++ entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
++ if (pmdp_set_access_flags(vma, addr, pmd, entry, 1))
++ update_mmu_cache_pmd(vma, addr, pmd);
++ }
++
++ goto out_unlock;
++ }
++
+ entry = pmd_mkhuge(pfn_t_pmd(pfn, prot));
+ if (pfn_t_devmap(pfn))
+ entry = pmd_mkdevmap(entry);
+@@ -764,11 +779,16 @@ static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
+ if (pgtable) {
+ pgtable_trans_huge_deposit(mm, pmd, pgtable);
+ mm_inc_nr_ptes(mm);
++ pgtable = NULL;
+ }
+
+ set_pmd_at(mm, addr, pmd, entry);
+ update_mmu_cache_pmd(vma, addr, pmd);
++
++out_unlock:
+ spin_unlock(ptl);
++ if (pgtable)
++ pte_free(mm, pgtable);
+ }
+
+ vm_fault_t vmf_insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
+@@ -819,6 +839,20 @@ static void insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
+ spinlock_t *ptl;
+
+ ptl = pud_lock(mm, pud);
++ if (!pud_none(*pud)) {
++ if (write) {
++ if (pud_pfn(*pud) != pfn_t_to_pfn(pfn)) {
++ WARN_ON_ONCE(!is_huge_zero_pud(*pud));
++ goto out_unlock;
++ }
++ entry = pud_mkyoung(*pud);
++ entry = maybe_pud_mkwrite(pud_mkdirty(entry), vma);
++ if (pudp_set_access_flags(vma, addr, pud, entry, 1))
++ update_mmu_cache_pud(vma, addr, pud);
++ }
++ goto out_unlock;
++ }
++
+ entry = pud_mkhuge(pfn_t_pud(pfn, prot));
+ if (pfn_t_devmap(pfn))
+ entry = pud_mkdevmap(entry);
+@@ -828,6 +862,8 @@ static void insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
+ }
+ set_pud_at(mm, addr, pud, entry);
+ update_mmu_cache_pud(vma, addr, pud);
++
++out_unlock:
+ spin_unlock(ptl);
+ }
+
+diff --git a/mm/kasan/common.c b/mm/kasan/common.c
+index 09b534fbba17..80bbe62b16cd 100644
+--- a/mm/kasan/common.c
++++ b/mm/kasan/common.c
+@@ -14,6 +14,8 @@
+ *
+ */
+
++#define __KASAN_INTERNAL
++
+ #include <linux/export.h>
+ #include <linux/interrupt.h>
+ #include <linux/init.h>
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index af7f18b32389..5bbf2de02a0f 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -248,6 +248,12 @@ enum res_type {
+ iter != NULL; \
+ iter = mem_cgroup_iter(NULL, iter, NULL))
+
++static inline bool should_force_charge(void)
++{
++ return tsk_is_oom_victim(current) || fatal_signal_pending(current) ||
++ (current->flags & PF_EXITING);
++}
++
+ /* Some nice accessors for the vmpressure. */
+ struct vmpressure *memcg_to_vmpressure(struct mem_cgroup *memcg)
+ {
+@@ -1389,8 +1395,13 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
+ };
+ bool ret;
+
+- mutex_lock(&oom_lock);
+- ret = out_of_memory(&oc);
++ if (mutex_lock_killable(&oom_lock))
++ return true;
++ /*
++ * A few threads which were not waiting at mutex_lock_killable() can
++ * fail to bail out. Therefore, check again after holding oom_lock.
++ */
++ ret = should_force_charge() || out_of_memory(&oc);
+ mutex_unlock(&oom_lock);
+ return ret;
+ }
+@@ -2209,9 +2220,7 @@ retry:
+ * bypass the last charges so that they can exit quickly and
+ * free their memory.
+ */
+- if (unlikely(tsk_is_oom_victim(current) ||
+- fatal_signal_pending(current) ||
+- current->flags & PF_EXITING))
++ if (unlikely(should_force_charge()))
+ goto force;
+
+ /*
+@@ -3873,6 +3882,22 @@ struct wb_domain *mem_cgroup_wb_domain(struct bdi_writeback *wb)
+ return &memcg->cgwb_domain;
+ }
+
++/*
++ * idx can be of type enum memcg_stat_item or node_stat_item.
++ * Keep in sync with memcg_exact_page().
++ */
++static unsigned long memcg_exact_page_state(struct mem_cgroup *memcg, int idx)
++{
++ long x = atomic_long_read(&memcg->stat[idx]);
++ int cpu;
++
++ for_each_online_cpu(cpu)
++ x += per_cpu_ptr(memcg->stat_cpu, cpu)->count[idx];
++ if (x < 0)
++ x = 0;
++ return x;
++}
++
+ /**
+ * mem_cgroup_wb_stats - retrieve writeback related stats from its memcg
+ * @wb: bdi_writeback in question
+@@ -3898,10 +3923,10 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages,
+ struct mem_cgroup *memcg = mem_cgroup_from_css(wb->memcg_css);
+ struct mem_cgroup *parent;
+
+- *pdirty = memcg_page_state(memcg, NR_FILE_DIRTY);
++ *pdirty = memcg_exact_page_state(memcg, NR_FILE_DIRTY);
+
+ /* this should eventually include NR_UNSTABLE_NFS */
+- *pwriteback = memcg_page_state(memcg, NR_WRITEBACK);
++ *pwriteback = memcg_exact_page_state(memcg, NR_WRITEBACK);
+ *pfilepages = mem_cgroup_nr_lru_pages(memcg, (1 << LRU_INACTIVE_FILE) |
+ (1 << LRU_ACTIVE_FILE));
+ *pheadroom = PAGE_COUNTER_MAX;
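
memcg_exact_page_state() above trades speed for accuracy: each CPU batches stat updates locally before flushing them into the shared atomic, so a cheap read of the total can lag, which let writeback see stale (even transiently negative) dirty/writeback counts. The exact variant folds every CPU's unflushed delta into the total and clamps at zero. A user-space model of such a two-level counter (illustrative):

    #include <stdio.h>

    #define NCPU 4

    static long total;            /* the shared atomic_long_t in the kernel */
    static long percpu[NCPU];     /* per-CPU deltas not yet flushed */

    static long exact_state(void)
    {
            long x = total;
            int cpu;

            for (cpu = 0; cpu < NCPU; cpu++)
                    x += percpu[cpu];
            return x < 0 ? 0 : x; /* racy sums can dip below zero */
    }

    int main(void)
    {
            total = 10;
            percpu[0] = 3;
            percpu[2] = -12;      /* another CPU decremented locally */
            printf("%ld\n", exact_state());   /* 1 */
            return 0;
    }
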
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index 831be5ff5f4d..fc8b51744579 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -1825,19 +1825,17 @@ static int soft_offline_in_use_page(struct page *page, int flags)
+ struct page *hpage = compound_head(page);
+
+ if (!PageHuge(page) && PageTransHuge(hpage)) {
+- lock_page(hpage);
+- if (!PageAnon(hpage) || unlikely(split_huge_page(hpage))) {
+- unlock_page(hpage);
+- if (!PageAnon(hpage))
++ lock_page(page);
++ if (!PageAnon(page) || unlikely(split_huge_page(page))) {
++ unlock_page(page);
++ if (!PageAnon(page))
+ pr_info("soft offline: %#lx: non anonymous thp\n", page_to_pfn(page));
+ else
+ pr_info("soft offline: %#lx: thp split failed\n", page_to_pfn(page));
+- put_hwpoison_page(hpage);
++ put_hwpoison_page(page);
+ return -EBUSY;
+ }
+- unlock_page(hpage);
+- get_hwpoison_page(page);
+- put_hwpoison_page(hpage);
++ unlock_page(page);
+ }
+
+ /*
+diff --git a/mm/memory.c b/mm/memory.c
+index e11ca9dd823f..8d3f38fa530d 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -1546,10 +1546,12 @@ static vm_fault_t insert_pfn(struct vm_area_struct *vma, unsigned long addr,
+ WARN_ON_ONCE(!is_zero_pfn(pte_pfn(*pte)));
+ goto out_unlock;
+ }
+- entry = *pte;
+- goto out_mkwrite;
+- } else
+- goto out_unlock;
++ entry = pte_mkyoung(*pte);
++ entry = maybe_mkwrite(pte_mkdirty(entry), vma);
++ if (ptep_set_access_flags(vma, addr, pte, entry, 1))
++ update_mmu_cache(vma, addr, pte);
++ }
++ goto out_unlock;
+ }
+
+ /* Ok, finally just insert the thing.. */
+@@ -1558,7 +1560,6 @@ static vm_fault_t insert_pfn(struct vm_area_struct *vma, unsigned long addr,
+ else
+ entry = pte_mkspecial(pfn_t_pte(pfn, prot));
+
+-out_mkwrite:
+ if (mkwrite) {
+ entry = pte_mkyoung(entry);
+ entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+@@ -3517,10 +3518,13 @@ static vm_fault_t do_shared_fault(struct vm_fault *vmf)
+ * but allow concurrent faults).
+ * The mmap_sem may have been released depending on flags and our
+ * return value. See filemap_fault() and __lock_page_or_retry().
++ * If mmap_sem is released, vma may become invalid (for example
++ * by other thread calling munmap()).
+ */
+ static vm_fault_t do_fault(struct vm_fault *vmf)
+ {
+ struct vm_area_struct *vma = vmf->vma;
++ struct mm_struct *vm_mm = vma->vm_mm;
+ vm_fault_t ret;
+
+ /*
+@@ -3561,7 +3565,7 @@ static vm_fault_t do_fault(struct vm_fault *vmf)
+
+ /* preallocated pagetable is unused: free it */
+ if (vmf->prealloc_pte) {
+- pte_free(vma->vm_mm, vmf->prealloc_pte);
++ pte_free(vm_mm, vmf->prealloc_pte);
+ vmf->prealloc_pte = NULL;
+ }
+ return ret;
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index 1ad28323fb9f..11593a03c051 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -1560,7 +1560,7 @@ static int __ref __offline_pages(unsigned long start_pfn,
+ {
+ unsigned long pfn, nr_pages;
+ long offlined_pages;
+- int ret, node;
++ int ret, node, nr_isolate_pageblock;
+ unsigned long flags;
+ unsigned long valid_start, valid_end;
+ struct zone *zone;
+@@ -1586,10 +1586,11 @@ static int __ref __offline_pages(unsigned long start_pfn,
+ ret = start_isolate_page_range(start_pfn, end_pfn,
+ MIGRATE_MOVABLE,
+ SKIP_HWPOISON | REPORT_FAILURE);
+- if (ret) {
++ if (ret < 0) {
+ reason = "failure to isolate range";
+ goto failed_removal;
+ }
++ nr_isolate_pageblock = ret;
+
+ arg.start_pfn = start_pfn;
+ arg.nr_pages = nr_pages;
+@@ -1642,8 +1643,16 @@ static int __ref __offline_pages(unsigned long start_pfn,
+ /* Ok, all of our target is isolated.
+ We cannot do rollback at this point. */
+ offline_isolated_pages(start_pfn, end_pfn);
+- /* reset pagetype flags and makes migrate type to be MOVABLE */
+- undo_isolate_page_range(start_pfn, end_pfn, MIGRATE_MOVABLE);
++
++ /*
++ * Onlining will reset pagetype flags and makes migrate type
++ * MOVABLE, so just need to decrease the number of isolated
++ * pageblocks zone counter here.
++ */
++ spin_lock_irqsave(&zone->lock, flags);
++ zone->nr_isolate_pageblock -= nr_isolate_pageblock;
++ spin_unlock_irqrestore(&zone->lock, flags);
++
+ /* removal success */
+ adjust_managed_page_count(pfn_to_page(start_pfn), -offlined_pages);
+ zone->present_pages -= offlined_pages;
+@@ -1675,12 +1684,12 @@ static int __ref __offline_pages(unsigned long start_pfn,
+
+ failed_removal_isolated:
+ undo_isolate_page_range(start_pfn, end_pfn, MIGRATE_MOVABLE);
++ memory_notify(MEM_CANCEL_OFFLINE, &arg);
+ failed_removal:
+ pr_debug("memory offlining [mem %#010llx-%#010llx] failed due to %s\n",
+ (unsigned long long) start_pfn << PAGE_SHIFT,
+ ((unsigned long long) end_pfn << PAGE_SHIFT) - 1,
+ reason);
+- memory_notify(MEM_CANCEL_OFFLINE, &arg);
+ /* pushback to free area */
+ mem_hotplug_done();
+ return ret;
+diff --git a/mm/mempolicy.c b/mm/mempolicy.c
+index ee2bce59d2bf..c2275c1e6d2a 100644
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -350,7 +350,7 @@ static void mpol_rebind_policy(struct mempolicy *pol, const nodemask_t *newmask)
+ {
+ if (!pol)
+ return;
+- if (!mpol_store_user_nodemask(pol) &&
++ if (!mpol_store_user_nodemask(pol) && !(pol->flags & MPOL_F_LOCAL) &&
+ nodes_equal(pol->w.cpuset_mems_allowed, *newmask))
+ return;
+
+@@ -428,6 +428,13 @@ static inline bool queue_pages_required(struct page *page,
+ return node_isset(nid, *qp->nmask) == !(flags & MPOL_MF_INVERT);
+ }
+
++/*
++ * queue_pages_pmd() has three possible return values:
++ * 1 - pages are placed on the right node or queued successfully.
++ * 0 - THP was split.
++ * -EIO - is migration entry or MPOL_MF_STRICT was specified and an existing
++ * page was already on a node that does not follow the policy.
++ */
+ static int queue_pages_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
+ unsigned long end, struct mm_walk *walk)
+ {
+@@ -437,7 +444,7 @@ static int queue_pages_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
+ unsigned long flags;
+
+ if (unlikely(is_pmd_migration_entry(*pmd))) {
+- ret = 1;
++ ret = -EIO;
+ goto unlock;
+ }
+ page = pmd_page(*pmd);
+@@ -454,8 +461,15 @@ static int queue_pages_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
+ ret = 1;
+ flags = qp->flags;
+ /* go to thp migration */
+- if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL))
++ if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
++ if (!vma_migratable(walk->vma)) {
++ ret = -EIO;
++ goto unlock;
++ }
++
+ migrate_page_add(page, qp->pagelist, flags);
++ } else
++ ret = -EIO;
+ unlock:
+ spin_unlock(ptl);
+ out:
+@@ -480,8 +494,10 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
+ ptl = pmd_trans_huge_lock(pmd, vma);
+ if (ptl) {
+ ret = queue_pages_pmd(pmd, ptl, addr, end, walk);
+- if (ret)
++ if (ret > 0)
+ return 0;
++ else if (ret < 0)
++ return ret;
+ }
+
+ if (pmd_trans_unstable(pmd))
+@@ -502,11 +518,16 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
+ continue;
+ if (!queue_pages_required(page, qp))
+ continue;
+- migrate_page_add(page, qp->pagelist, flags);
++ if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
++ if (!vma_migratable(vma))
++ break;
++ migrate_page_add(page, qp->pagelist, flags);
++ } else
++ break;
+ }
+ pte_unmap_unlock(pte - 1, ptl);
+ cond_resched();
+- return 0;
++ return addr != end ? -EIO : 0;
+ }
+
+ static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
+@@ -576,7 +597,12 @@ static int queue_pages_test_walk(unsigned long start, unsigned long end,
+ unsigned long endvma = vma->vm_end;
+ unsigned long flags = qp->flags;
+
+- if (!vma_migratable(vma))
++ /*
++ * Need check MPOL_MF_STRICT to return -EIO if possible
++ * regardless of vma_migratable
++ */
++ if (!vma_migratable(vma) &&
++ !(flags & MPOL_MF_STRICT))
+ return 1;
+
+ if (endvma > end)
+@@ -603,7 +629,7 @@ static int queue_pages_test_walk(unsigned long start, unsigned long end,
+ }
+
+ /* queue pages from current vma */
+- if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL))
++ if (flags & MPOL_MF_VALID)
+ return 0;
+ return 1;
+ }
+diff --git a/mm/migrate.c b/mm/migrate.c
+index 181f5d2718a9..76e237b4610c 100644
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -248,10 +248,8 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
+ pte = swp_entry_to_pte(entry);
+ } else if (is_device_public_page(new)) {
+ pte = pte_mkdevmap(pte);
+- flush_dcache_page(new);
+ }
+- } else
+- flush_dcache_page(new);
++ }
+
+ #ifdef CONFIG_HUGETLB_PAGE
+ if (PageHuge(new)) {
+@@ -995,6 +993,13 @@ static int move_to_new_page(struct page *newpage, struct page *page,
+ */
+ if (!PageMappingFlags(page))
+ page->mapping = NULL;
++
++ if (unlikely(is_zone_device_page(newpage))) {
++ if (is_device_public_page(newpage))
++ flush_dcache_page(newpage);
++ } else
++ flush_dcache_page(newpage);
++
+ }
+ out:
+ return rc;
+diff --git a/mm/oom_kill.c b/mm/oom_kill.c
+index 26ea8636758f..da0e44914085 100644
+--- a/mm/oom_kill.c
++++ b/mm/oom_kill.c
+@@ -928,7 +928,8 @@ static void __oom_kill_process(struct task_struct *victim)
+ */
+ static int oom_kill_memcg_member(struct task_struct *task, void *unused)
+ {
+- if (task->signal->oom_score_adj != OOM_SCORE_ADJ_MIN) {
++ if (task->signal->oom_score_adj != OOM_SCORE_ADJ_MIN &&
++ !is_global_init(task)) {
+ get_task_struct(task);
+ __oom_kill_process(task);
+ }
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 0b9f577b1a2a..20dd3283bb1b 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -1945,8 +1945,8 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
+
+ arch_alloc_page(page, order);
+ kernel_map_pages(page, 1 << order, 1);
+- kernel_poison_pages(page, 1 << order, 1);
+ kasan_alloc_pages(page, order);
++ kernel_poison_pages(page, 1 << order, 1);
+ set_page_owner(page, order, gfp_flags);
+ }
+
+@@ -8160,7 +8160,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
+
+ ret = start_isolate_page_range(pfn_max_align_down(start),
+ pfn_max_align_up(end), migratetype, 0);
+- if (ret)
++ if (ret < 0)
+ return ret;
+
+ /*
+diff --git a/mm/page_ext.c b/mm/page_ext.c
+index 8c78b8d45117..f116431c3dee 100644
+--- a/mm/page_ext.c
++++ b/mm/page_ext.c
+@@ -273,6 +273,7 @@ static void free_page_ext(void *addr)
+ table_size = get_entry_size() * PAGES_PER_SECTION;
+
+ BUG_ON(PageReserved(page));
++ kmemleak_free(addr);
+ free_pages_exact(addr, table_size);
+ }
+ }
+diff --git a/mm/page_isolation.c b/mm/page_isolation.c
+index ce323e56b34d..019280712e1b 100644
+--- a/mm/page_isolation.c
++++ b/mm/page_isolation.c
+@@ -59,7 +59,8 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
+ * FIXME: Now, memory hotplug doesn't call shrink_slab() by itself.
+ * We just check MOVABLE pages.
+ */
+- if (!has_unmovable_pages(zone, page, arg.pages_found, migratetype, flags))
++ if (!has_unmovable_pages(zone, page, arg.pages_found, migratetype,
++ isol_flags))
+ ret = 0;
+
+ /*
+@@ -160,27 +161,36 @@ __first_valid_page(unsigned long pfn, unsigned long nr_pages)
+ return NULL;
+ }
+
+-/*
+- * start_isolate_page_range() -- make page-allocation-type of range of pages
+- * to be MIGRATE_ISOLATE.
+- * @start_pfn: The lower PFN of the range to be isolated.
+- * @end_pfn: The upper PFN of the range to be isolated.
+- * @migratetype: migrate type to set in error recovery.
++/**
++ * start_isolate_page_range() - make page-allocation-type of range of pages to
++ * be MIGRATE_ISOLATE.
++ * @start_pfn: The lower PFN of the range to be isolated.
++ * @end_pfn: The upper PFN of the range to be isolated.
++ * start_pfn/end_pfn must be aligned to pageblock_order.
++ * @migratetype: Migrate type to set in error recovery.
++ * @flags: The following flags are allowed (they can be combined in
++ * a bit mask)
++ * SKIP_HWPOISON - ignore hwpoison pages
++ * REPORT_FAILURE - report details about the failure to
++ * isolate the range
+ *
+ * Making page-allocation-type to be MIGRATE_ISOLATE means free pages in
+ * the range will never be allocated. Any free pages and pages freed in the
+- * future will not be allocated again.
+- *
+- * start_pfn/end_pfn must be aligned to pageblock_order.
+- * Return 0 on success and -EBUSY if any part of range cannot be isolated.
++ * future will not be allocated again. If specified range includes migrate types
++ * other than MOVABLE or CMA, this will fail with -EBUSY. For isolating all
++ * pages in the range finally, the caller have to free all pages in the range.
++ * test_page_isolated() can be used for test it.
+ *
+ * There is no high level synchronization mechanism that prevents two threads
+- * from trying to isolate overlapping ranges. If this happens, one thread
++ * from trying to isolate overlapping ranges. If this happens, one thread
+ * will notice pageblocks in the overlapping range already set to isolate.
+ * This happens in set_migratetype_isolate, and set_migratetype_isolate
+- * returns an error. We then clean up by restoring the migration type on
+- * pageblocks we may have modified and return -EBUSY to caller. This
++ * returns an error. We then clean up by restoring the migration type on
++ * pageblocks we may have modified and return -EBUSY to caller. This
+ * prevents two threads from simultaneously working on overlapping ranges.
++ *
++ * Return: the number of isolated pageblocks on success and -EBUSY if any part
++ * of range cannot be isolated.
+ */
+ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
+ unsigned migratetype, int flags)
+@@ -188,6 +198,7 @@ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
+ unsigned long pfn;
+ unsigned long undo_pfn;
+ struct page *page;
++ int nr_isolate_pageblock = 0;
+
+ BUG_ON(!IS_ALIGNED(start_pfn, pageblock_nr_pages));
+ BUG_ON(!IS_ALIGNED(end_pfn, pageblock_nr_pages));
+@@ -196,13 +207,15 @@ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
+ pfn < end_pfn;
+ pfn += pageblock_nr_pages) {
+ page = __first_valid_page(pfn, pageblock_nr_pages);
+- if (page &&
+- set_migratetype_isolate(page, migratetype, flags)) {
+- undo_pfn = pfn;
+- goto undo;
++ if (page) {
++ if (set_migratetype_isolate(page, migratetype, flags)) {
++ undo_pfn = pfn;
++ goto undo;
++ }
++ nr_isolate_pageblock++;
+ }
+ }
+- return 0;
++ return nr_isolate_pageblock;
+ undo:
+ for (pfn = start_pfn;
+ pfn < undo_pfn;
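
With the change above, start_isolate_page_range() returns the number of pageblocks it isolated instead of 0, so the callers patched earlier (__offline_pages() and alloc_contig_range()) now test ret < 0 rather than ret, and the offline path keeps the count to adjust zone->nr_isolate_pageblock directly. A caller fragment in the new convention (kernel-style sketch, not a complete function):

    int nr = start_isolate_page_range(start_pfn, end_pfn, MIGRATE_MOVABLE,
                                      SKIP_HWPOISON | REPORT_FAILURE);
    if (nr < 0)
            return nr;      /* the old "if (ret)" would now reject success */
    /* ... on success, nr pageblocks were isolated: */
    /* zone->nr_isolate_pageblock -= nr;  (offline path) */
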
+diff --git a/mm/page_poison.c b/mm/page_poison.c
+index f0c15e9017c0..21d4f97cb49b 100644
+--- a/mm/page_poison.c
++++ b/mm/page_poison.c
+@@ -6,6 +6,7 @@
+ #include <linux/page_ext.h>
+ #include <linux/poison.h>
+ #include <linux/ratelimit.h>
++#include <linux/kasan.h>
+
+ static bool want_page_poisoning __read_mostly;
+
+@@ -40,7 +41,10 @@ static void poison_page(struct page *page)
+ {
+ void *addr = kmap_atomic(page);
+
++ /* KASAN still think the page is in-use, so skip it. */
++ kasan_disable_current();
+ memset(addr, PAGE_POISON, PAGE_SIZE);
++ kasan_enable_current();
+ kunmap_atomic(addr);
+ }
+
+diff --git a/mm/slab.c b/mm/slab.c
+index 91c1863df93d..2f2aa8eaf7d9 100644
+--- a/mm/slab.c
++++ b/mm/slab.c
+@@ -550,14 +550,6 @@ static void start_cpu_timer(int cpu)
+
+ static void init_arraycache(struct array_cache *ac, int limit, int batch)
+ {
+- /*
+- * The array_cache structures contain pointers to free object.
+- * However, when such objects are allocated or transferred to another
+- * cache the pointers are not cleared and they could be counted as
+- * valid references during a kmemleak scan. Therefore, kmemleak must
+- * not scan such objects.
+- */
+- kmemleak_no_scan(ac);
+ if (ac) {
+ ac->avail = 0;
+ ac->limit = limit;
+@@ -573,6 +565,14 @@ static struct array_cache *alloc_arraycache(int node, int entries,
+ struct array_cache *ac = NULL;
+
+ ac = kmalloc_node(memsize, gfp, node);
++ /*
++ * The array_cache structures contain pointers to free object.
++ * However, when such objects are allocated or transferred to another
++ * cache the pointers are not cleared and they could be counted as
++ * valid references during a kmemleak scan. Therefore, kmemleak must
++ * not scan such objects.
++ */
++ kmemleak_no_scan(ac);
+ init_arraycache(ac, entries, batchcount);
+ return ac;
+ }
+@@ -667,6 +667,7 @@ static struct alien_cache *__alloc_alien_cache(int node, int entries,
+
+ alc = kmalloc_node(memsize, gfp, node);
+ if (alc) {
++ kmemleak_no_scan(alc);
+ init_arraycache(&alc->ac, entries, batch);
+ spin_lock_init(&alc->lock);
+ }
+@@ -2111,6 +2112,8 @@ done:
+ cachep->allocflags = __GFP_COMP;
+ if (flags & SLAB_CACHE_DMA)
+ cachep->allocflags |= GFP_DMA;
++ if (flags & SLAB_CACHE_DMA32)
++ cachep->allocflags |= GFP_DMA32;
+ if (flags & SLAB_RECLAIM_ACCOUNT)
+ cachep->allocflags |= __GFP_RECLAIMABLE;
+ cachep->size = size;
+diff --git a/mm/slab.h b/mm/slab.h
+index 384105318779..27834ead5f14 100644
+--- a/mm/slab.h
++++ b/mm/slab.h
+@@ -127,7 +127,8 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
+
+
+ /* Legal flag mask for kmem_cache_create(), for various configurations */
+-#define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | SLAB_PANIC | \
++#define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | \
++ SLAB_CACHE_DMA32 | SLAB_PANIC | \
+ SLAB_TYPESAFE_BY_RCU | SLAB_DEBUG_OBJECTS )
+
+ #if defined(CONFIG_DEBUG_SLAB)
+diff --git a/mm/slab_common.c b/mm/slab_common.c
+index f9d89c1b5977..333618231f8d 100644
+--- a/mm/slab_common.c
++++ b/mm/slab_common.c
+@@ -53,7 +53,7 @@ static DECLARE_WORK(slab_caches_to_rcu_destroy_work,
+ SLAB_FAILSLAB | SLAB_KASAN)
+
+ #define SLAB_MERGE_SAME (SLAB_RECLAIM_ACCOUNT | SLAB_CACHE_DMA | \
+- SLAB_ACCOUNT)
++ SLAB_CACHE_DMA32 | SLAB_ACCOUNT)
+
+ /*
+ * Merge control. If this is set then no merging of slab caches will occur.
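
The slab hunks here and in mm/slab.c and mm/slub.c plumb the new SLAB_CACHE_DMA32 flag end to end: it becomes a legal creation flag, maps to GFP_DMA32 at page-allocation time, joins the merge criteria, and gets its own 'D' suffix in generated cache names. A cache whose objects must live below 4 GiB (for example, tables for a 32-bit-only DMA device) would be created like this, as a kernel-style sketch with illustrative names:

    #include <linux/init.h>
    #include <linux/slab.h>
    #include <linux/types.h>

    struct my_desc { u64 dma_addr; u32 len; };

    static struct kmem_cache *desc_cache;

    static int __init desc_cache_init(void)
    {
            /* objects are served from ZONE_DMA32, i.e. below 4 GiB */
            desc_cache = kmem_cache_create("my_desc", sizeof(struct my_desc),
                                           0, SLAB_CACHE_DMA32, NULL);
            return desc_cache ? 0 : -ENOMEM;
    }
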
+diff --git a/mm/slub.c b/mm/slub.c
+index dc777761b6b7..1673100fd534 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -3591,6 +3591,9 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
+ if (s->flags & SLAB_CACHE_DMA)
+ s->allocflags |= GFP_DMA;
+
++ if (s->flags & SLAB_CACHE_DMA32)
++ s->allocflags |= GFP_DMA32;
++
+ if (s->flags & SLAB_RECLAIM_ACCOUNT)
+ s->allocflags |= __GFP_RECLAIMABLE;
+
+@@ -5681,6 +5684,8 @@ static char *create_unique_id(struct kmem_cache *s)
+ */
+ if (s->flags & SLAB_CACHE_DMA)
+ *p++ = 'd';
++ if (s->flags & SLAB_CACHE_DMA32)
++ *p++ = 'D';
+ if (s->flags & SLAB_RECLAIM_ACCOUNT)
+ *p++ = 'a';
+ if (s->flags & SLAB_CONSISTENCY_CHECKS)
+diff --git a/mm/sparse.c b/mm/sparse.c
+index 7ea5dc6c6b19..b3771f35a0ed 100644
+--- a/mm/sparse.c
++++ b/mm/sparse.c
+@@ -197,7 +197,7 @@ static inline int next_present_section_nr(int section_nr)
+ }
+ #define for_each_present_section_nr(start, section_nr) \
+ for (section_nr = next_present_section_nr(start-1); \
+- ((section_nr >= 0) && \
++ ((section_nr != -1) && \
+ (section_nr <= __highest_present_section_nr)); \
+ section_nr = next_present_section_nr(section_nr))
+
+@@ -556,7 +556,7 @@ void online_mem_sections(unsigned long start_pfn, unsigned long end_pfn)
+ }
+
+ #ifdef CONFIG_MEMORY_HOTREMOVE
+-/* Mark all memory sections within the pfn range as online */
++/* Mark all memory sections within the pfn range as offline */
+ void offline_mem_sections(unsigned long start_pfn, unsigned long end_pfn)
+ {
+ unsigned long pfn;
+diff --git a/mm/swapfile.c b/mm/swapfile.c
+index dbac1d49469d..67f60e051814 100644
+--- a/mm/swapfile.c
++++ b/mm/swapfile.c
+@@ -98,6 +98,15 @@ static atomic_t proc_poll_event = ATOMIC_INIT(0);
+
+ atomic_t nr_rotate_swap = ATOMIC_INIT(0);
+
++static struct swap_info_struct *swap_type_to_swap_info(int type)
++{
++ if (type >= READ_ONCE(nr_swapfiles))
++ return NULL;
++
++ smp_rmb(); /* Pairs with smp_wmb in alloc_swap_info. */
++ return READ_ONCE(swap_info[type]);
++}
++
+ static inline unsigned char swap_count(unsigned char ent)
+ {
+ return ent & ~SWAP_HAS_CACHE; /* may include COUNT_CONTINUED flag */
+@@ -1044,12 +1053,14 @@ noswap:
+ /* The only caller of this function is now suspend routine */
+ swp_entry_t get_swap_page_of_type(int type)
+ {
+- struct swap_info_struct *si;
++ struct swap_info_struct *si = swap_type_to_swap_info(type);
+ pgoff_t offset;
+
+- si = swap_info[type];
++ if (!si)
++ goto fail;
++
+ spin_lock(&si->lock);
+- if (si && (si->flags & SWP_WRITEOK)) {
++ if (si->flags & SWP_WRITEOK) {
+ atomic_long_dec(&nr_swap_pages);
+ /* This is called for allocating swap entry, not cache */
+ offset = scan_swap_map(si, 1);
+@@ -1060,6 +1071,7 @@ swp_entry_t get_swap_page_of_type(int type)
+ atomic_long_inc(&nr_swap_pages);
+ }
+ spin_unlock(&si->lock);
++fail:
+ return (swp_entry_t) {0};
+ }
+
+@@ -1071,9 +1083,9 @@ static struct swap_info_struct *__swap_info_get(swp_entry_t entry)
+ if (!entry.val)
+ goto out;
+ type = swp_type(entry);
+- if (type >= nr_swapfiles)
++ p = swap_type_to_swap_info(type);
++ if (!p)
+ goto bad_nofile;
+- p = swap_info[type];
+ if (!(p->flags & SWP_USED))
+ goto bad_device;
+ offset = swp_offset(entry);
+@@ -1697,10 +1709,9 @@ int swap_type_of(dev_t device, sector_t offset, struct block_device **bdev_p)
+ sector_t swapdev_block(int type, pgoff_t offset)
+ {
+ struct block_device *bdev;
++ struct swap_info_struct *si = swap_type_to_swap_info(type);
+
+- if ((unsigned int)type >= nr_swapfiles)
+- return 0;
+- if (!(swap_info[type]->flags & SWP_WRITEOK))
++ if (!si || !(si->flags & SWP_WRITEOK))
+ return 0;
+ return map_swap_entry(swp_entry(type, offset), &bdev);
+ }
+@@ -2258,7 +2269,7 @@ static sector_t map_swap_entry(swp_entry_t entry, struct block_device **bdev)
+ struct swap_extent *se;
+ pgoff_t offset;
+
+- sis = swap_info[swp_type(entry)];
++ sis = swp_swap_info(entry);
+ *bdev = sis->bdev;
+
+ offset = swp_offset(entry);
+@@ -2700,9 +2711,7 @@ static void *swap_start(struct seq_file *swap, loff_t *pos)
+ if (!l)
+ return SEQ_START_TOKEN;
+
+- for (type = 0; type < nr_swapfiles; type++) {
+- smp_rmb(); /* read nr_swapfiles before swap_info[type] */
+- si = swap_info[type];
++ for (type = 0; (si = swap_type_to_swap_info(type)); type++) {
+ if (!(si->flags & SWP_USED) || !si->swap_map)
+ continue;
+ if (!--l)
+@@ -2722,9 +2731,7 @@ static void *swap_next(struct seq_file *swap, void *v, loff_t *pos)
+ else
+ type = si->type + 1;
+
+- for (; type < nr_swapfiles; type++) {
+- smp_rmb(); /* read nr_swapfiles before swap_info[type] */
+- si = swap_info[type];
++ for (; (si = swap_type_to_swap_info(type)); type++) {
+ if (!(si->flags & SWP_USED) || !si->swap_map)
+ continue;
+ ++*pos;
+@@ -2831,14 +2838,14 @@ static struct swap_info_struct *alloc_swap_info(void)
+ }
+ if (type >= nr_swapfiles) {
+ p->type = type;
+- swap_info[type] = p;
++ WRITE_ONCE(swap_info[type], p);
+ /*
+ * Write swap_info[type] before nr_swapfiles, in case a
+ * racing procfs swap_start() or swap_next() is reading them.
+ * (We never shrink nr_swapfiles, we never free this entry.)
+ */
+ smp_wmb();
+- nr_swapfiles++;
++ WRITE_ONCE(nr_swapfiles, nr_swapfiles + 1);
+ } else {
+ kvfree(p);
+ p = swap_info[type];
+@@ -3358,7 +3365,7 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
+ {
+ struct swap_info_struct *p;
+ struct swap_cluster_info *ci;
+- unsigned long offset, type;
++ unsigned long offset;
+ unsigned char count;
+ unsigned char has_cache;
+ int err = -EINVAL;
+@@ -3366,10 +3373,10 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
+ if (non_swap_entry(entry))
+ goto out;
+
+- type = swp_type(entry);
+- if (type >= nr_swapfiles)
++ p = swp_swap_info(entry);
++ if (!p)
+ goto bad_file;
+- p = swap_info[type];
++
+ offset = swp_offset(entry);
+ if (unlikely(offset >= p->max))
+ goto out;
+@@ -3466,7 +3473,7 @@ int swapcache_prepare(swp_entry_t entry)
+
+ struct swap_info_struct *swp_swap_info(swp_entry_t entry)
+ {
+- return swap_info[swp_type(entry)];
++ return swap_type_to_swap_info(swp_type(entry));
+ }
+
+ struct swap_info_struct *page_swap_info(struct page *page)
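
swap_type_to_swap_info() centralizes a publish/subscribe pattern and bounds-checks every lookup: alloc_swap_info() stores the new pointer, issues smp_wmb(), and only then bumps nr_swapfiles, so a reader that observes the larger count and pairs it with smp_rmb() is guaranteed to see the initialized slot. The same ordering in portable C11, with release/acquire standing in for the explicit barriers (illustrative names):

    #include <stdatomic.h>
    #include <stddef.h>

    #define MAX_SWAPFILES 32

    struct swap_info { int type; };

    static struct swap_info *_Atomic slots[MAX_SWAPFILES];
    static atomic_int nfiles;

    /* writer: publish the slot before advertising the new count */
    static void publish(int type, struct swap_info *si)
    {
            atomic_store_explicit(&slots[type], si, memory_order_relaxed);
            atomic_store_explicit(&nfiles, type + 1, memory_order_release);
    }

    /* reader: check the count first; only then is the slot safe to load */
    static struct swap_info *lookup(int type)
    {
            if (type >= atomic_load_explicit(&nfiles, memory_order_acquire))
                    return NULL;
            return atomic_load_explicit(&slots[type], memory_order_relaxed);
    }
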
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index 871e41c55e23..583630bf247d 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -498,7 +498,11 @@ nocache:
+ }
+
+ found:
+- if (addr + size > vend)
++ /*
++ * Check also calculated address against the vstart,
++ * because it can be 0 because of big align request.
++ */
++ if (addr + size > vend || addr < vstart)
+ goto overflow;
+
+ va->va_start = addr;
+@@ -2248,7 +2252,7 @@ int remap_vmalloc_range_partial(struct vm_area_struct *vma, unsigned long uaddr,
+ if (!(area->flags & VM_USERMAP))
+ return -EINVAL;
+
+- if (kaddr + size > area->addr + area->size)
++ if (kaddr + size > area->addr + get_vm_area_size(area))
+ return -EINVAL;
+
+ do {
+diff --git a/net/9p/client.c b/net/9p/client.c
+index 357214a51f13..b85d51f4b8eb 100644
+--- a/net/9p/client.c
++++ b/net/9p/client.c
+@@ -1061,7 +1061,7 @@ struct p9_client *p9_client_create(const char *dev_name, char *options)
+ p9_debug(P9_DEBUG_ERROR,
+ "Please specify a msize of at least 4k\n");
+ err = -EINVAL;
+- goto free_client;
++ goto close_trans;
+ }
+
+ err = p9_client_version(clnt);
+diff --git a/net/bluetooth/af_bluetooth.c b/net/bluetooth/af_bluetooth.c
+index deacc52d7ff1..8d12198eaa94 100644
+--- a/net/bluetooth/af_bluetooth.c
++++ b/net/bluetooth/af_bluetooth.c
+@@ -154,15 +154,25 @@ void bt_sock_unlink(struct bt_sock_list *l, struct sock *sk)
+ }
+ EXPORT_SYMBOL(bt_sock_unlink);
+
+-void bt_accept_enqueue(struct sock *parent, struct sock *sk)
++void bt_accept_enqueue(struct sock *parent, struct sock *sk, bool bh)
+ {
+ BT_DBG("parent %p, sk %p", parent, sk);
+
+ sock_hold(sk);
+- lock_sock_nested(sk, SINGLE_DEPTH_NESTING);
++
++ if (bh)
++ bh_lock_sock_nested(sk);
++ else
++ lock_sock_nested(sk, SINGLE_DEPTH_NESTING);
++
+ list_add_tail(&bt_sk(sk)->accept_q, &bt_sk(parent)->accept_q);
+ bt_sk(sk)->parent = parent;
+- release_sock(sk);
++
++ if (bh)
++ bh_unlock_sock(sk);
++ else
++ release_sock(sk);
++
+ parent->sk_ack_backlog++;
+ }
+ EXPORT_SYMBOL(bt_accept_enqueue);
+diff --git a/net/bluetooth/hci_sock.c b/net/bluetooth/hci_sock.c
+index 1506e1632394..d4e2a166ae17 100644
+--- a/net/bluetooth/hci_sock.c
++++ b/net/bluetooth/hci_sock.c
+@@ -831,8 +831,6 @@ static int hci_sock_release(struct socket *sock)
+ if (!sk)
+ return 0;
+
+- hdev = hci_pi(sk)->hdev;
+-
+ switch (hci_pi(sk)->channel) {
+ case HCI_CHANNEL_MONITOR:
+ atomic_dec(&monitor_promisc);
+@@ -854,6 +852,7 @@ static int hci_sock_release(struct socket *sock)
+
+ bt_sock_unlink(&hci_sk_list, sk);
+
++ hdev = hci_pi(sk)->hdev;
+ if (hdev) {
+ if (hci_pi(sk)->channel == HCI_CHANNEL_USER) {
+ /* When releasing a user channel exclusive access,
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 2a7fb517d460..ccdc5c67d22a 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -3337,16 +3337,22 @@ static int l2cap_parse_conf_req(struct l2cap_chan *chan, void *data, size_t data
+
+ while (len >= L2CAP_CONF_OPT_SIZE) {
+ len -= l2cap_get_conf_opt(&req, &type, &olen, &val);
++ if (len < 0)
++ break;
+
+ hint = type & L2CAP_CONF_HINT;
+ type &= L2CAP_CONF_MASK;
+
+ switch (type) {
+ case L2CAP_CONF_MTU:
++ if (olen != 2)
++ break;
+ mtu = val;
+ break;
+
+ case L2CAP_CONF_FLUSH_TO:
++ if (olen != 2)
++ break;
+ chan->flush_to = val;
+ break;
+
+@@ -3354,26 +3360,30 @@ static int l2cap_parse_conf_req(struct l2cap_chan *chan, void *data, size_t data
+ break;
+
+ case L2CAP_CONF_RFC:
+- if (olen == sizeof(rfc))
+- memcpy(&rfc, (void *) val, olen);
++ if (olen != sizeof(rfc))
++ break;
++ memcpy(&rfc, (void *) val, olen);
+ break;
+
+ case L2CAP_CONF_FCS:
++ if (olen != 1)
++ break;
+ if (val == L2CAP_FCS_NONE)
+ set_bit(CONF_RECV_NO_FCS, &chan->conf_state);
+ break;
+
+ case L2CAP_CONF_EFS:
+- if (olen == sizeof(efs)) {
+- remote_efs = 1;
+- memcpy(&efs, (void *) val, olen);
+- }
++ if (olen != sizeof(efs))
++ break;
++ remote_efs = 1;
++ memcpy(&efs, (void *) val, olen);
+ break;
+
+ case L2CAP_CONF_EWS:
++ if (olen != 2)
++ break;
+ if (!(chan->conn->local_fixed_chan & L2CAP_FC_A2MP))
+ return -ECONNREFUSED;
+-
+ set_bit(FLAG_EXT_CTRL, &chan->flags);
+ set_bit(CONF_EWS_RECV, &chan->conf_state);
+ chan->tx_win_max = L2CAP_DEFAULT_EXT_WINDOW;
+@@ -3383,7 +3393,6 @@ static int l2cap_parse_conf_req(struct l2cap_chan *chan, void *data, size_t data
+ default:
+ if (hint)
+ break;
+-
+ result = L2CAP_CONF_UNKNOWN;
+ *((u8 *) ptr++) = type;
+ break;
+@@ -3548,58 +3557,65 @@ static int l2cap_parse_conf_rsp(struct l2cap_chan *chan, void *rsp, int len,
+
+ while (len >= L2CAP_CONF_OPT_SIZE) {
+ len -= l2cap_get_conf_opt(&rsp, &type, &olen, &val);
++ if (len < 0)
++ break;
+
+ switch (type) {
+ case L2CAP_CONF_MTU:
++ if (olen != 2)
++ break;
+ if (val < L2CAP_DEFAULT_MIN_MTU) {
+ *result = L2CAP_CONF_UNACCEPT;
+ chan->imtu = L2CAP_DEFAULT_MIN_MTU;
+ } else
+ chan->imtu = val;
+- l2cap_add_conf_opt(&ptr, L2CAP_CONF_MTU, 2, chan->imtu, endptr - ptr);
++ l2cap_add_conf_opt(&ptr, L2CAP_CONF_MTU, 2, chan->imtu,
++ endptr - ptr);
+ break;
+
+ case L2CAP_CONF_FLUSH_TO:
++ if (olen != 2)
++ break;
+ chan->flush_to = val;
+- l2cap_add_conf_opt(&ptr, L2CAP_CONF_FLUSH_TO,
+- 2, chan->flush_to, endptr - ptr);
++ l2cap_add_conf_opt(&ptr, L2CAP_CONF_FLUSH_TO, 2,
++ chan->flush_to, endptr - ptr);
+ break;
+
+ case L2CAP_CONF_RFC:
+- if (olen == sizeof(rfc))
+- memcpy(&rfc, (void *)val, olen);
+-
++ if (olen != sizeof(rfc))
++ break;
++ memcpy(&rfc, (void *)val, olen);
+ if (test_bit(CONF_STATE2_DEVICE, &chan->conf_state) &&
+ rfc.mode != chan->mode)
+ return -ECONNREFUSED;
+-
+ chan->fcs = 0;
+-
+- l2cap_add_conf_opt(&ptr, L2CAP_CONF_RFC,
+- sizeof(rfc), (unsigned long) &rfc, endptr - ptr);
++ l2cap_add_conf_opt(&ptr, L2CAP_CONF_RFC, sizeof(rfc),
++ (unsigned long) &rfc, endptr - ptr);
+ break;
+
+ case L2CAP_CONF_EWS:
++ if (olen != 2)
++ break;
+ chan->ack_win = min_t(u16, val, chan->ack_win);
+ l2cap_add_conf_opt(&ptr, L2CAP_CONF_EWS, 2,
+ chan->tx_win, endptr - ptr);
+ break;
+
+ case L2CAP_CONF_EFS:
+- if (olen == sizeof(efs)) {
+- memcpy(&efs, (void *)val, olen);
+-
+- if (chan->local_stype != L2CAP_SERV_NOTRAFIC &&
+- efs.stype != L2CAP_SERV_NOTRAFIC &&
+- efs.stype != chan->local_stype)
+- return -ECONNREFUSED;
+-
+- l2cap_add_conf_opt(&ptr, L2CAP_CONF_EFS, sizeof(efs),
+- (unsigned long) &efs, endptr - ptr);
+- }
++ if (olen != sizeof(efs))
++ break;
++ memcpy(&efs, (void *)val, olen);
++ if (chan->local_stype != L2CAP_SERV_NOTRAFIC &&
++ efs.stype != L2CAP_SERV_NOTRAFIC &&
++ efs.stype != chan->local_stype)
++ return -ECONNREFUSED;
++ l2cap_add_conf_opt(&ptr, L2CAP_CONF_EFS, sizeof(efs),
++ (unsigned long) &efs, endptr - ptr);
+ break;
+
+ case L2CAP_CONF_FCS:
++ if (olen != 1)
++ break;
+ if (*result == L2CAP_CONF_PENDING)
+ if (val == L2CAP_FCS_NONE)
+ set_bit(CONF_RECV_NO_FCS,
+@@ -3728,13 +3744,18 @@ static void l2cap_conf_rfc_get(struct l2cap_chan *chan, void *rsp, int len)
+
+ while (len >= L2CAP_CONF_OPT_SIZE) {
+ len -= l2cap_get_conf_opt(&rsp, &type, &olen, &val);
++ if (len < 0)
++ break;
+
+ switch (type) {
+ case L2CAP_CONF_RFC:
+- if (olen == sizeof(rfc))
+- memcpy(&rfc, (void *)val, olen);
++ if (olen != sizeof(rfc))
++ break;
++ memcpy(&rfc, (void *)val, olen);
+ break;
+ case L2CAP_CONF_EWS:
++ if (olen != 2)
++ break;
+ txwin_ext = val;
+ break;
+ }
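
Every configure-option hunk above enforces one rule: validate the encoded option length against the exact size its type requires before consuming the value, and skip (rather than trust) anything malformed; the added len < 0 checks likewise stop the walk when l2cap_get_conf_opt() reports an over-read. A minimal TLV walker with the same discipline (runnable user-space C; the option format here is illustrative, not the L2CAP wire format):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define OPT_MTU 1   /* this option type requires exactly 2 value bytes */

    static void parse(const uint8_t *buf, size_t len)
    {
            size_t i = 0;

            while (len - i >= 2) {                      /* type + length header */
                    uint8_t type = buf[i], olen = buf[i + 1];

                    if (olen > len - i - 2)
                            break;                      /* option overruns the buffer */
                    if (type == OPT_MTU && olen == 2) { /* reject any other width */
                            uint16_t mtu;
                            memcpy(&mtu, &buf[i + 2], 2);
                            printf("mtu=%u\n", mtu);
                    }
                    i += 2 + olen;
            }
    }

    int main(void)
    {
            const uint8_t good[] = { OPT_MTU, 2, 0xF4, 0x01 };
            const uint8_t bad[]  = { OPT_MTU, 1, 0xF4 };  /* wrong width: ignored */
            parse(good, sizeof(good));
            parse(bad, sizeof(bad));
            return 0;
    }
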
+diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
+index 686bdc6b35b0..a3a2cd55e23a 100644
+--- a/net/bluetooth/l2cap_sock.c
++++ b/net/bluetooth/l2cap_sock.c
+@@ -1252,7 +1252,7 @@ static struct l2cap_chan *l2cap_sock_new_connection_cb(struct l2cap_chan *chan)
+
+ l2cap_sock_init(sk, parent);
+
+- bt_accept_enqueue(parent, sk);
++ bt_accept_enqueue(parent, sk, false);
+
+ release_sock(parent);
+
+diff --git a/net/bluetooth/rfcomm/sock.c b/net/bluetooth/rfcomm/sock.c
+index aa0db1d1bd9b..b1f49fcc0478 100644
+--- a/net/bluetooth/rfcomm/sock.c
++++ b/net/bluetooth/rfcomm/sock.c
+@@ -988,7 +988,7 @@ int rfcomm_connect_ind(struct rfcomm_session *s, u8 channel, struct rfcomm_dlc *
+ rfcomm_pi(sk)->channel = channel;
+
+ sk->sk_state = BT_CONFIG;
+- bt_accept_enqueue(parent, sk);
++ bt_accept_enqueue(parent, sk, true);
+
+ /* Accept connection and return socket DLC */
+ *d = rfcomm_pi(sk)->dlc;
+diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c
+index 529b38996d8b..9a580999ca57 100644
+--- a/net/bluetooth/sco.c
++++ b/net/bluetooth/sco.c
+@@ -193,7 +193,7 @@ static void __sco_chan_add(struct sco_conn *conn, struct sock *sk,
+ conn->sk = sk;
+
+ if (parent)
+- bt_accept_enqueue(parent, sk);
++ bt_accept_enqueue(parent, sk, true);
+ }
+
+ static int sco_chan_add(struct sco_conn *conn, struct sock *sk,
+diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
+index ac92b2eb32b1..e4777614a8a0 100644
+--- a/net/bridge/br_multicast.c
++++ b/net/bridge/br_multicast.c
+@@ -599,6 +599,7 @@ static int br_ip4_multicast_add_group(struct net_bridge *br,
+ if (ipv4_is_local_multicast(group))
+ return 0;
+
++ memset(&br_group, 0, sizeof(br_group));
+ br_group.u.ip4 = group;
+ br_group.proto = htons(ETH_P_IP);
+ br_group.vid = vid;
+@@ -1489,6 +1490,7 @@ static void br_ip4_multicast_leave_group(struct net_bridge *br,
+
+ own_query = port ? &port->ip4_own_query : &br->ip4_own_query;
+
++ memset(&br_group, 0, sizeof(br_group));
+ br_group.u.ip4 = group;
+ br_group.proto = htons(ETH_P_IP);
+ br_group.vid = vid;
+@@ -1512,6 +1514,7 @@ static void br_ip6_multicast_leave_group(struct net_bridge *br,
+
+ own_query = port ? &port->ip6_own_query : &br->ip6_own_query;
+
++ memset(&br_group, 0, sizeof(br_group));
+ br_group.u.ip6 = *group;
+ br_group.proto = htons(ETH_P_IPV6);
+ br_group.vid = vid;
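
The three memset() additions above zero the on-stack br_ip before a few fields are filled in, because the struct is then used as a lookup key: the union's unused bytes (ip6 is wider than ip4) would otherwise hold stack garbage, so two logically identical keys could compare or hash differently. Runnable demonstration with an illustrative struct:

    #include <stdio.h>
    #include <string.h>

    struct key {
            union { unsigned int ip4; unsigned char ip6[16]; } u;
            unsigned short proto;
            unsigned short vid;
    };

    static void fill(struct key *k, unsigned int ip4, int zero_first)
    {
            if (zero_first)
                    memset(k, 0, sizeof(*k));  /* the fix */
            k->u.ip4 = ip4;                    /* leaves u.ip6[4..15] untouched */
            k->proto = 0x0800;
            k->vid = 1;
    }

    int main(void)
    {
            struct key a, b;

            memset(&a, 0x5A, sizeof(a));       /* simulate stack garbage */
            memset(&b, 0xC3, sizeof(b));
            fill(&a, 0x0A000001, 0);
            fill(&b, 0x0A000001, 0);
            printf("no memset, equal: %d\n", memcmp(&a, &b, sizeof(a)) == 0); /* 0 */
            fill(&a, 0x0A000001, 1);
            fill(&b, 0x0A000001, 1);
            printf("memset, equal:    %d\n", memcmp(&a, &b, sizeof(a)) == 0); /* 1 */
            return 0;
    }
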
+diff --git a/net/bridge/br_netfilter_hooks.c b/net/bridge/br_netfilter_hooks.c
+index c93c35bb73dd..40d058378b52 100644
+--- a/net/bridge/br_netfilter_hooks.c
++++ b/net/bridge/br_netfilter_hooks.c
+@@ -881,11 +881,6 @@ static const struct nf_br_ops br_ops = {
+ .br_dev_xmit_hook = br_nf_dev_xmit,
+ };
+
+-void br_netfilter_enable(void)
+-{
+-}
+-EXPORT_SYMBOL_GPL(br_netfilter_enable);
+-
+ /* For br_nf_post_routing, we need (prio = NF_BR_PRI_LAST), because
+ * br_dev_queue_push_xmit is called afterwards */
+ static const struct nf_hook_ops br_nf_ops[] = {
+diff --git a/net/bridge/netfilter/ebtables.c b/net/bridge/netfilter/ebtables.c
+index 6693e209efe8..f77888ec93f1 100644
+--- a/net/bridge/netfilter/ebtables.c
++++ b/net/bridge/netfilter/ebtables.c
+@@ -31,10 +31,6 @@
+ /* needed for logical [in,out]-dev filtering */
+ #include "../br_private.h"
+
+-#define BUGPRINT(format, args...) printk("kernel msg: ebtables bug: please "\
+- "report to author: "format, ## args)
+-/* #define BUGPRINT(format, args...) */
+-
+ /* Each cpu has its own set of counters, so there is no need for write_lock in
+ * the softirq
+ * For reading or updating the counters, the user context needs to
+@@ -466,8 +462,6 @@ static int ebt_verify_pointers(const struct ebt_replace *repl,
+ /* we make userspace set this right,
+ * so there is no misunderstanding
+ */
+- BUGPRINT("EBT_ENTRY_OR_ENTRIES shouldn't be set "
+- "in distinguisher\n");
+ return -EINVAL;
+ }
+ if (i != NF_BR_NUMHOOKS)
+@@ -485,18 +479,14 @@ static int ebt_verify_pointers(const struct ebt_replace *repl,
+ offset += e->next_offset;
+ }
+ }
+- if (offset != limit) {
+- BUGPRINT("entries_size too small\n");
++ if (offset != limit)
+ return -EINVAL;
+- }
+
+ /* check if all valid hooks have a chain */
+ for (i = 0; i < NF_BR_NUMHOOKS; i++) {
+ if (!newinfo->hook_entry[i] &&
+- (valid_hooks & (1 << i))) {
+- BUGPRINT("Valid hook without chain\n");
++ (valid_hooks & (1 << i)))
+ return -EINVAL;
+- }
+ }
+ return 0;
+ }
+@@ -523,26 +513,20 @@ ebt_check_entry_size_and_hooks(const struct ebt_entry *e,
+ /* this checks if the previous chain has as many entries
+ * as it said it has
+ */
+- if (*n != *cnt) {
+- BUGPRINT("nentries does not equal the nr of entries "
+- "in the chain\n");
++ if (*n != *cnt)
+ return -EINVAL;
+- }
++
+ if (((struct ebt_entries *)e)->policy != EBT_DROP &&
+ ((struct ebt_entries *)e)->policy != EBT_ACCEPT) {
+ /* only RETURN from udc */
+ if (i != NF_BR_NUMHOOKS ||
+- ((struct ebt_entries *)e)->policy != EBT_RETURN) {
+- BUGPRINT("bad policy\n");
++ ((struct ebt_entries *)e)->policy != EBT_RETURN)
+ return -EINVAL;
+- }
+ }
+ if (i == NF_BR_NUMHOOKS) /* it's a user defined chain */
+ (*udc_cnt)++;
+- if (((struct ebt_entries *)e)->counter_offset != *totalcnt) {
+- BUGPRINT("counter_offset != totalcnt");
++ if (((struct ebt_entries *)e)->counter_offset != *totalcnt)
+ return -EINVAL;
+- }
+ *n = ((struct ebt_entries *)e)->nentries;
+ *cnt = 0;
+ return 0;
+@@ -550,15 +534,13 @@ ebt_check_entry_size_and_hooks(const struct ebt_entry *e,
+ /* a plain old entry, heh */
+ if (sizeof(struct ebt_entry) > e->watchers_offset ||
+ e->watchers_offset > e->target_offset ||
+- e->target_offset >= e->next_offset) {
+- BUGPRINT("entry offsets not in right order\n");
++ e->target_offset >= e->next_offset)
+ return -EINVAL;
+- }
++
+ /* this is not checked anywhere else */
+- if (e->next_offset - e->target_offset < sizeof(struct ebt_entry_target)) {
+- BUGPRINT("target size too small\n");
++ if (e->next_offset - e->target_offset < sizeof(struct ebt_entry_target))
+ return -EINVAL;
+- }
++
+ (*cnt)++;
+ (*totalcnt)++;
+ return 0;
+@@ -678,18 +660,15 @@ ebt_check_entry(struct ebt_entry *e, struct net *net,
+ if (e->bitmask == 0)
+ return 0;
+
+- if (e->bitmask & ~EBT_F_MASK) {
+- BUGPRINT("Unknown flag for bitmask\n");
++ if (e->bitmask & ~EBT_F_MASK)
+ return -EINVAL;
+- }
+- if (e->invflags & ~EBT_INV_MASK) {
+- BUGPRINT("Unknown flag for inv bitmask\n");
++
++ if (e->invflags & ~EBT_INV_MASK)
+ return -EINVAL;
+- }
+- if ((e->bitmask & EBT_NOPROTO) && (e->bitmask & EBT_802_3)) {
+- BUGPRINT("NOPROTO & 802_3 not allowed\n");
++
++ if ((e->bitmask & EBT_NOPROTO) && (e->bitmask & EBT_802_3))
+ return -EINVAL;
+- }
++
+ /* what hook do we belong to? */
+ for (i = 0; i < NF_BR_NUMHOOKS; i++) {
+ if (!newinfo->hook_entry[i])
+@@ -748,13 +727,11 @@ ebt_check_entry(struct ebt_entry *e, struct net *net,
+ t->u.target = target;
+ if (t->u.target == &ebt_standard_target) {
+ if (gap < sizeof(struct ebt_standard_target)) {
+- BUGPRINT("Standard target size too big\n");
+ ret = -EFAULT;
+ goto cleanup_watchers;
+ }
+ if (((struct ebt_standard_target *)t)->verdict <
+ -NUM_STANDARD_TARGETS) {
+- BUGPRINT("Invalid standard target\n");
+ ret = -EFAULT;
+ goto cleanup_watchers;
+ }
+@@ -813,10 +790,9 @@ static int check_chainloops(const struct ebt_entries *chain, struct ebt_cl_stack
+ if (strcmp(t->u.name, EBT_STANDARD_TARGET))
+ goto letscontinue;
+ if (e->target_offset + sizeof(struct ebt_standard_target) >
+- e->next_offset) {
+- BUGPRINT("Standard target size too big\n");
++ e->next_offset)
+ return -1;
+- }
++
+ verdict = ((struct ebt_standard_target *)t)->verdict;
+ if (verdict >= 0) { /* jump to another chain */
+ struct ebt_entries *hlp2 =
+@@ -825,14 +801,12 @@ static int check_chainloops(const struct ebt_entries *chain, struct ebt_cl_stack
+ if (hlp2 == cl_s[i].cs.chaininfo)
+ break;
+ /* bad destination or loop */
+- if (i == udc_cnt) {
+- BUGPRINT("bad destination\n");
++ if (i == udc_cnt)
+ return -1;
+- }
+- if (cl_s[i].cs.n) {
+- BUGPRINT("loop\n");
++
++ if (cl_s[i].cs.n)
+ return -1;
+- }
++
+ if (cl_s[i].hookmask & (1 << hooknr))
+ goto letscontinue;
+ /* this can't be 0, so the loop test is correct */
+@@ -865,24 +839,21 @@ static int translate_table(struct net *net, const char *name,
+ i = 0;
+ while (i < NF_BR_NUMHOOKS && !newinfo->hook_entry[i])
+ i++;
+- if (i == NF_BR_NUMHOOKS) {
+- BUGPRINT("No valid hooks specified\n");
++ if (i == NF_BR_NUMHOOKS)
+ return -EINVAL;
+- }
+- if (newinfo->hook_entry[i] != (struct ebt_entries *)newinfo->entries) {
+- BUGPRINT("Chains don't start at beginning\n");
++
++ if (newinfo->hook_entry[i] != (struct ebt_entries *)newinfo->entries)
+ return -EINVAL;
+- }
++
+ /* make sure chains are ordered after each other in same order
+ * as their corresponding hooks
+ */
+ for (j = i + 1; j < NF_BR_NUMHOOKS; j++) {
+ if (!newinfo->hook_entry[j])
+ continue;
+- if (newinfo->hook_entry[j] <= newinfo->hook_entry[i]) {
+- BUGPRINT("Hook order must be followed\n");
++ if (newinfo->hook_entry[j] <= newinfo->hook_entry[i])
+ return -EINVAL;
+- }
++
+ i = j;
+ }
+
+@@ -900,15 +871,11 @@ static int translate_table(struct net *net, const char *name,
+ if (ret != 0)
+ return ret;
+
+- if (i != j) {
+- BUGPRINT("nentries does not equal the nr of entries in the "
+- "(last) chain\n");
++ if (i != j)
+ return -EINVAL;
+- }
+- if (k != newinfo->nentries) {
+- BUGPRINT("Total nentries is wrong\n");
++
++ if (k != newinfo->nentries)
+ return -EINVAL;
+- }
+
+ /* get the location of the udc, put them in an array
+ * while we're at it, allocate the chainstack
+@@ -942,7 +909,6 @@ static int translate_table(struct net *net, const char *name,
+ ebt_get_udc_positions, newinfo, &i, cl_s);
+ /* sanity check */
+ if (i != udc_cnt) {
+- BUGPRINT("i != udc_cnt\n");
+ vfree(cl_s);
+ return -EFAULT;
+ }
+@@ -1042,7 +1008,6 @@ static int do_replace_finish(struct net *net, struct ebt_replace *repl,
+ goto free_unlock;
+
+ if (repl->num_counters && repl->num_counters != t->private->nentries) {
+- BUGPRINT("Wrong nr. of counters requested\n");
+ ret = -EINVAL;
+ goto free_unlock;
+ }
+@@ -1118,15 +1083,12 @@ static int do_replace(struct net *net, const void __user *user,
+ if (copy_from_user(&tmp, user, sizeof(tmp)) != 0)
+ return -EFAULT;
+
+- if (len != sizeof(tmp) + tmp.entries_size) {
+- BUGPRINT("Wrong len argument\n");
++ if (len != sizeof(tmp) + tmp.entries_size)
+ return -EINVAL;
+- }
+
+- if (tmp.entries_size == 0) {
+- BUGPRINT("Entries_size never zero\n");
++ if (tmp.entries_size == 0)
+ return -EINVAL;
+- }
++
+ /* overflow check */
+ if (tmp.nentries >= ((INT_MAX - sizeof(struct ebt_table_info)) /
+ NR_CPUS - SMP_CACHE_BYTES) / sizeof(struct ebt_counter))
+@@ -1153,7 +1115,6 @@ static int do_replace(struct net *net, const void __user *user,
+ }
+ if (copy_from_user(
+ newinfo->entries, tmp.entries, tmp.entries_size) != 0) {
+- BUGPRINT("Couldn't copy entries from userspace\n");
+ ret = -EFAULT;
+ goto free_entries;
+ }
+@@ -1194,10 +1155,8 @@ int ebt_register_table(struct net *net, const struct ebt_table *input_table,
+
+ if (input_table == NULL || (repl = input_table->table) == NULL ||
+ repl->entries == NULL || repl->entries_size == 0 ||
+- repl->counters != NULL || input_table->private != NULL) {
+- BUGPRINT("Bad table data for ebt_register_table!!!\n");
++ repl->counters != NULL || input_table->private != NULL)
+ return -EINVAL;
+- }
+
+ /* Don't add one table to multiple lists. */
+ table = kmemdup(input_table, sizeof(struct ebt_table), GFP_KERNEL);
+@@ -1235,13 +1194,10 @@ int ebt_register_table(struct net *net, const struct ebt_table *input_table,
+ ((char *)repl->hook_entry[i] - repl->entries);
+ }
+ ret = translate_table(net, repl->name, newinfo);
+- if (ret != 0) {
+- BUGPRINT("Translate_table failed\n");
++ if (ret != 0)
+ goto free_chainstack;
+- }
+
+ if (table->check && table->check(newinfo, table->valid_hooks)) {
+- BUGPRINT("The table doesn't like its own initial data, lol\n");
+ ret = -EINVAL;
+ goto free_chainstack;
+ }
+@@ -1252,7 +1208,6 @@ int ebt_register_table(struct net *net, const struct ebt_table *input_table,
+ list_for_each_entry(t, &net->xt.tables[NFPROTO_BRIDGE], list) {
+ if (strcmp(t->name, table->name) == 0) {
+ ret = -EEXIST;
+- BUGPRINT("Table name already exists\n");
+ goto free_unlock;
+ }
+ }
+@@ -1320,7 +1275,6 @@ static int do_update_counters(struct net *net, const char *name,
+ goto free_tmp;
+
+ if (num_counters != t->private->nentries) {
+- BUGPRINT("Wrong nr of counters\n");
+ ret = -EINVAL;
+ goto unlock_mutex;
+ }
+@@ -1447,10 +1401,8 @@ static int copy_counters_to_user(struct ebt_table *t,
+ if (num_counters == 0)
+ return 0;
+
+- if (num_counters != nentries) {
+- BUGPRINT("Num_counters wrong\n");
++ if (num_counters != nentries)
+ return -EINVAL;
+- }
+
+ counterstmp = vmalloc(array_size(nentries, sizeof(*counterstmp)));
+ if (!counterstmp)
+@@ -1496,15 +1448,11 @@ static int copy_everything_to_user(struct ebt_table *t, void __user *user,
+ (tmp.num_counters ? nentries * sizeof(struct ebt_counter) : 0))
+ return -EINVAL;
+
+- if (tmp.nentries != nentries) {
+- BUGPRINT("Nentries wrong\n");
++ if (tmp.nentries != nentries)
+ return -EINVAL;
+- }
+
+- if (tmp.entries_size != entries_size) {
+- BUGPRINT("Wrong size\n");
++ if (tmp.entries_size != entries_size)
+ return -EINVAL;
+- }
+
+ ret = copy_counters_to_user(t, oldcounters, tmp.counters,
+ tmp.num_counters, nentries);
+@@ -1576,7 +1524,6 @@ static int do_ebt_get_ctl(struct sock *sk, int cmd, void __user *user, int *len)
+ }
+ mutex_unlock(&ebt_mutex);
+ if (copy_to_user(user, &tmp, *len) != 0) {
+- BUGPRINT("c2u Didn't work\n");
+ ret = -EFAULT;
+ break;
+ }
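
[Editor's note: the ebtables hunks above strip the BUGPRINT debug macro and keep only the bare -EINVAL returns; the substance of every check is the ordering of the per-entry offsets. Below is a minimal, runnable userspace sketch of that invariant. The names check_offsets, hdr_end and tgt_hdr are illustrative, not kernel API.]

	#include <stdint.h>
	#include <stdio.h>

	struct entry {
		uint32_t watchers_offset;
		uint32_t target_offset;
		uint32_t next_offset;
	};

	/* Ordering invariant the hunks above enforce, minus the removed
	 * BUGPRINT logging: header end <= watchers <= target < next, and
	 * the gap after target must hold at least one target header. */
	static int check_offsets(const struct entry *e, uint32_t hdr_end,
				 uint32_t tgt_hdr)
	{
		if (hdr_end > e->watchers_offset ||
		    e->watchers_offset > e->target_offset ||
		    e->target_offset >= e->next_offset)
			return -1;
		if (e->next_offset - e->target_offset < tgt_hdr)
			return -1;
		return 0;
	}

	int main(void)
	{
		struct entry ok  = { 16, 24, 40 };
		struct entry bad = { 48, 24, 40 };	/* watchers past target */

		printf("%d %d\n", check_offsets(&ok, 16, 8),
		       check_offsets(&bad, 16, 8));
		return 0;
	}
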
+diff --git a/net/ceph/ceph_common.c b/net/ceph/ceph_common.c
+index 9cab80207ced..79eac465ec65 100644
+--- a/net/ceph/ceph_common.c
++++ b/net/ceph/ceph_common.c
+@@ -738,7 +738,6 @@ int __ceph_open_session(struct ceph_client *client, unsigned long started)
+ }
+ EXPORT_SYMBOL(__ceph_open_session);
+
+-
+ int ceph_open_session(struct ceph_client *client)
+ {
+ int ret;
+@@ -754,6 +753,23 @@ int ceph_open_session(struct ceph_client *client)
+ }
+ EXPORT_SYMBOL(ceph_open_session);
+
++int ceph_wait_for_latest_osdmap(struct ceph_client *client,
++ unsigned long timeout)
++{
++ u64 newest_epoch;
++ int ret;
++
++ ret = ceph_monc_get_version(&client->monc, "osdmap", &newest_epoch);
++ if (ret)
++ return ret;
++
++ if (client->osdc.osdmap->epoch >= newest_epoch)
++ return 0;
++
++ ceph_osdc_maybe_request_map(&client->osdc);
++ return ceph_monc_wait_osdmap(&client->monc, newest_epoch, timeout);
++}
++EXPORT_SYMBOL(ceph_wait_for_latest_osdmap);
+
+ static int __init init_ceph_lib(void)
+ {
+diff --git a/net/ceph/mon_client.c b/net/ceph/mon_client.c
+index 18deb3d889c4..a53e4fbb6319 100644
+--- a/net/ceph/mon_client.c
++++ b/net/ceph/mon_client.c
+@@ -922,6 +922,15 @@ int ceph_monc_blacklist_add(struct ceph_mon_client *monc,
+ mutex_unlock(&monc->mutex);
+
+ ret = wait_generic_request(req);
++ if (!ret)
++ /*
++ * Make sure we have the osdmap that includes the blacklist
++ * entry. This is needed to ensure that the OSDs pick up the
++ * new blacklist before processing any future requests from
++ * this client.
++ */
++ ret = ceph_wait_for_latest_osdmap(monc->client, 0);
++
+ out:
+ put_generic_request(req);
+ return ret;
+diff --git a/net/core/datagram.c b/net/core/datagram.c
+index b2651bb6d2a3..e657289db4ac 100644
+--- a/net/core/datagram.c
++++ b/net/core/datagram.c
+@@ -279,7 +279,7 @@ struct sk_buff *__skb_try_recv_datagram(struct sock *sk, unsigned int flags,
+ break;
+
+ sk_busy_loop(sk, flags & MSG_DONTWAIT);
+- } while (!skb_queue_empty(&sk->sk_receive_queue));
++ } while (sk->sk_receive_queue.prev != *last);
+
+ error = -EAGAIN;
+
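
[Editor's note: the datagram.c change retries the receive loop only when the queue tail has moved past the last skb already examined; testing mere non-emptiness can spin forever when a peeked packet stays queued. A minimal sketch of the tail-moved test on a kernel-style circular list, assuming head.prev is the tail; all names here are illustrative.]

	#include <stdio.h>

	struct node { struct node *prev, *next; };

	/* Empty circular queue: the head points at itself. */
	static struct node queue = { &queue, &queue };

	static void enqueue(struct node *n)
	{
		n->prev = queue.prev;
		n->next = &queue;
		queue.prev->next = n;
		queue.prev = n;
	}

	/* Retry only if something was appended after 'last_seen'; a plain
	 * "queue non-empty" test would loop forever on a peeked entry. */
	static int should_retry(const struct node *last_seen)
	{
		return queue.prev != last_seen;
	}

	int main(void)
	{
		struct node a;
		const struct node *last = queue.prev;	/* tail before waiting */

		printf("%d\n", should_retry(last));	/* 0: nothing new */
		enqueue(&a);
		printf("%d\n", should_retry(last));	/* 1: tail moved */
		return 0;
	}
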
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 5d03889502eb..12824e007e06 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -5014,8 +5014,10 @@ static inline void __netif_receive_skb_list_ptype(struct list_head *head,
+ if (pt_prev->list_func != NULL)
+ pt_prev->list_func(head, pt_prev, orig_dev);
+ else
+- list_for_each_entry_safe(skb, next, head, list)
++ list_for_each_entry_safe(skb, next, head, list) {
++ skb_list_del_init(skb);
+ pt_prev->func(skb, skb->dev, pt_prev, orig_dev);
++ }
+ }
+
+ static void __netif_receive_skb_list_core(struct list_head *head, bool pfmemalloc)
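
[Editor's note: the dev.c hunk unlinks each skb from the batch list before handing it to a ->func() handler that expects an skb not threaded on any list. The same safe-iteration-plus-unlink shape, as a hedged kernel-style sketch; deliver() is a stand-in for the protocol handler, not a kernel symbol.]

	#include <linux/list.h>

	struct pkt {
		struct list_head list;
		/* ... payload ... */
	};

	static void deliver(struct pkt *p) { /* consumes p */ }

	/* _safe iteration because each node is removed while walking, and
	 * list_del_init() rather than list_del() so the handler sees a
	 * self-linked, provably unlisted node, which is the property the
	 * hunk above restores. */
	static void deliver_all(struct list_head *head)
	{
		struct pkt *p, *next;

		list_for_each_entry_safe(p, next, head, list) {
			list_del_init(&p->list);
			deliver(p);
		}
	}
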
+diff --git a/net/core/ethtool.c b/net/core/ethtool.c
+index 158264f7cfaf..3a7f19a61768 100644
+--- a/net/core/ethtool.c
++++ b/net/core/ethtool.c
+@@ -1794,11 +1794,16 @@ static int ethtool_get_strings(struct net_device *dev, void __user *useraddr)
+ WARN_ON_ONCE(!ret);
+
+ gstrings.len = ret;
+- data = vzalloc(array_size(gstrings.len, ETH_GSTRING_LEN));
+- if (gstrings.len && !data)
+- return -ENOMEM;
+
+- __ethtool_get_strings(dev, gstrings.string_set, data);
++ if (gstrings.len) {
++ data = vzalloc(array_size(gstrings.len, ETH_GSTRING_LEN));
++ if (!data)
++ return -ENOMEM;
++
++ __ethtool_get_strings(dev, gstrings.string_set, data);
++ } else {
++ data = NULL;
++ }
+
+ ret = -EFAULT;
+ if (copy_to_user(useraddr, &gstrings, sizeof(gstrings)))
+@@ -1894,11 +1899,15 @@ static int ethtool_get_stats(struct net_device *dev, void __user *useraddr)
+ return -EFAULT;
+
+ stats.n_stats = n_stats;
+- data = vzalloc(array_size(n_stats, sizeof(u64)));
+- if (n_stats && !data)
+- return -ENOMEM;
+
+- ops->get_ethtool_stats(dev, &stats, data);
++ if (n_stats) {
++ data = vzalloc(array_size(n_stats, sizeof(u64)));
++ if (!data)
++ return -ENOMEM;
++ ops->get_ethtool_stats(dev, &stats, data);
++ } else {
++ data = NULL;
++ }
+
+ ret = -EFAULT;
+ if (copy_to_user(useraddr, &stats, sizeof(stats)))
+@@ -1938,16 +1947,21 @@ static int ethtool_get_phy_stats(struct net_device *dev, void __user *useraddr)
+ return -EFAULT;
+
+ stats.n_stats = n_stats;
+- data = vzalloc(array_size(n_stats, sizeof(u64)));
+- if (n_stats && !data)
+- return -ENOMEM;
+
+- if (dev->phydev && !ops->get_ethtool_phy_stats) {
+- ret = phy_ethtool_get_stats(dev->phydev, &stats, data);
+- if (ret < 0)
+- return ret;
++ if (n_stats) {
++ data = vzalloc(array_size(n_stats, sizeof(u64)));
++ if (!data)
++ return -ENOMEM;
++
++ if (dev->phydev && !ops->get_ethtool_phy_stats) {
++ ret = phy_ethtool_get_stats(dev->phydev, &stats, data);
++ if (ret < 0)
++ goto out;
++ } else {
++ ops->get_ethtool_phy_stats(dev, &stats, data);
++ }
+ } else {
+- ops->get_ethtool_phy_stats(dev, &stats, data);
++ data = NULL;
+ }
+
+ ret = -EFAULT;
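
[Editor's note: all three ethtool hunks converge on one shape: when the reported count is zero, skip both the allocation and the driver callback and copy back only the header. A runnable userspace sketch of that guard; fill() stands in for the driver op and every name is illustrative.]

	#include <stdint.h>
	#include <stdio.h>
	#include <stdlib.h>

	static void fill(uint64_t *data, size_t n)
	{
		for (size_t i = 0; i < n; i++)
			data[i] = i;	/* driver writes n entries */
	}

	/* Allocate and call the driver only for n > 0; for n == 0 hand
	 * back NULL without ever invoking fill(), mirroring the hunks
	 * above. */
	static uint64_t *get_stats(size_t n)
	{
		uint64_t *data;

		if (!n)
			return NULL;
		data = calloc(n, sizeof(*data));
		if (!data)
			return NULL;	/* -ENOMEM in the kernel version */
		fill(data, n);
		return data;
	}

	int main(void)
	{
		uint64_t *d = get_stats(4);

		printf("%p %llu\n", (void *)get_stats(0),
		       d ? (unsigned long long)d[3] : 0);
		free(d);
		return 0;
	}
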
+diff --git a/net/core/gen_stats.c b/net/core/gen_stats.c
+index 9bf1b9ad1780..ac679f74ba47 100644
+--- a/net/core/gen_stats.c
++++ b/net/core/gen_stats.c
+@@ -291,7 +291,6 @@ __gnet_stats_copy_queue_cpu(struct gnet_stats_queue *qstats,
+ for_each_possible_cpu(i) {
+ const struct gnet_stats_queue *qcpu = per_cpu_ptr(q, i);
+
+- qstats->qlen = 0;
+ qstats->backlog += qcpu->backlog;
+ qstats->drops += qcpu->drops;
+ qstats->requeues += qcpu->requeues;
+@@ -307,7 +306,6 @@ void __gnet_stats_copy_queue(struct gnet_stats_queue *qstats,
+ if (cpu) {
+ __gnet_stats_copy_queue_cpu(qstats, cpu);
+ } else {
+- qstats->qlen = q->qlen;
+ qstats->backlog = q->backlog;
+ qstats->drops = q->drops;
+ qstats->requeues = q->requeues;
+diff --git a/net/core/gro_cells.c b/net/core/gro_cells.c
+index acf45ddbe924..e095fb871d91 100644
+--- a/net/core/gro_cells.c
++++ b/net/core/gro_cells.c
+@@ -13,22 +13,36 @@ int gro_cells_receive(struct gro_cells *gcells, struct sk_buff *skb)
+ {
+ struct net_device *dev = skb->dev;
+ struct gro_cell *cell;
++ int res;
+
+- if (!gcells->cells || skb_cloned(skb) || netif_elide_gro(dev))
+- return netif_rx(skb);
++ rcu_read_lock();
++ if (unlikely(!(dev->flags & IFF_UP)))
++ goto drop;
++
++ if (!gcells->cells || skb_cloned(skb) || netif_elide_gro(dev)) {
++ res = netif_rx(skb);
++ goto unlock;
++ }
+
+ cell = this_cpu_ptr(gcells->cells);
+
+ if (skb_queue_len(&cell->napi_skbs) > netdev_max_backlog) {
++drop:
+ atomic_long_inc(&dev->rx_dropped);
+ kfree_skb(skb);
+- return NET_RX_DROP;
++ res = NET_RX_DROP;
++ goto unlock;
+ }
+
+ __skb_queue_tail(&cell->napi_skbs, skb);
+ if (skb_queue_len(&cell->napi_skbs) == 1)
+ napi_schedule(&cell->napi);
+- return NET_RX_SUCCESS;
++
++ res = NET_RX_SUCCESS;
++
++unlock:
++ rcu_read_unlock();
++ return res;
+ }
+ EXPORT_SYMBOL(gro_cells_receive);
+
+diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
+index ff9fd2bb4ce4..aec26584f0ca 100644
+--- a/net/core/net-sysfs.c
++++ b/net/core/net-sysfs.c
+@@ -934,6 +934,8 @@ static int rx_queue_add_kobject(struct net_device *dev, int index)
+ if (error)
+ return error;
+
++ dev_hold(queue->dev);
++
+ if (dev->sysfs_rx_queue_group) {
+ error = sysfs_create_group(kobj, dev->sysfs_rx_queue_group);
+ if (error) {
+@@ -943,7 +945,6 @@ static int rx_queue_add_kobject(struct net_device *dev, int index)
+ }
+
+ kobject_uevent(kobj, KOBJ_ADD);
+- dev_hold(queue->dev);
+
+ return error;
+ }
+@@ -1472,6 +1473,8 @@ static int netdev_queue_add_kobject(struct net_device *dev, int index)
+ if (error)
+ return error;
+
++ dev_hold(queue->dev);
++
+ #ifdef CONFIG_BQL
+ error = sysfs_create_group(kobj, &dql_group);
+ if (error) {
+@@ -1481,7 +1484,6 @@ static int netdev_queue_add_kobject(struct net_device *dev, int index)
+ #endif
+
+ kobject_uevent(kobj, KOBJ_ADD);
+- dev_hold(queue->dev);
+
+ return 0;
+ }
+@@ -1547,6 +1549,9 @@ static int register_queue_kobjects(struct net_device *dev)
+ error:
+ netdev_queue_update_kobjects(dev, txq, 0);
+ net_rx_queue_update_kobjects(dev, rxq, 0);
++#ifdef CONFIG_SYSFS
++ kset_unregister(dev->queues_kset);
++#endif
+ return error;
+ }
+
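
[Editor's note: both net-sysfs hunks hoist dev_hold() to before the first failure path, because the error paths end in kobject_put() and the kobject release callback performs the matching dev_put(); taking the reference late lets the release callback drop a reference that was never acquired. A runnable sketch of the ordering invariant with generic names, not the kernel API.]

	#include <stdlib.h>

	struct dev { int refcnt; };

	static void dev_hold_(struct dev *d) { d->refcnt++; }
	static void dev_put_(struct dev *d)  { d->refcnt--; }

	struct obj { struct dev *dev; };

	/* The object's release handler drops the device reference. */
	static void obj_release(struct obj *o)
	{
		dev_put_(o->dev);
		free(o);
	}

	static int publish(struct obj *o) { (void)o; return -1; /* fails */ }

	/* Take the reference the release handler will drop BEFORE any
	 * path that can reach obj_release(), the ordering the hunks
	 * above fix. */
	static int add_obj(struct dev *d)
	{
		struct obj *o = calloc(1, sizeof(*o));
		int err;

		if (!o)
			return -1;
		o->dev = d;
		dev_hold_(d);	/* paired with dev_put_() in obj_release() */

		err = publish(o);
		if (err) {
			obj_release(o);	/* refcount stays balanced */
			return err;
		}
		return 0;
	}

	int main(void)
	{
		struct dev d = { 1 };

		add_obj(&d);
		return d.refcnt != 1;	/* 0: no underflow */
	}
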
+diff --git a/net/core/net_namespace.c b/net/core/net_namespace.c
+index b02fb19df2cc..40c249c574c1 100644
+--- a/net/core/net_namespace.c
++++ b/net/core/net_namespace.c
+@@ -304,6 +304,7 @@ static __net_init int setup_net(struct net *net, struct user_namespace *user_ns)
+
+ refcount_set(&net->count, 1);
+ refcount_set(&net->passive, 1);
++ get_random_bytes(&net->hash_mix, sizeof(u32));
+ net->dev_base_seq = 1;
+ net->user_ns = user_ns;
+ idr_init(&net->netns_ids);
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 2415d9cb9b89..ef2cd5712098 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -3801,7 +3801,7 @@ int skb_gro_receive(struct sk_buff *p, struct sk_buff *skb)
+ unsigned int delta_truesize;
+ struct sk_buff *lp;
+
+- if (unlikely(p->len + len >= 65536))
++ if (unlikely(p->len + len >= 65536 || NAPI_GRO_CB(skb)->flush))
+ return -E2BIG;
+
+ lp = NAPI_GRO_CB(p)->last;
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index 8c826603bf36..8bc0ba1ebabe 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -545,6 +545,7 @@ static void sk_psock_destroy_deferred(struct work_struct *gc)
+ struct sk_psock *psock = container_of(gc, struct sk_psock, gc);
+
+ /* No sk_callback_lock since already detached. */
++ strp_stop(&psock->parser.strp);
+ strp_done(&psock->parser.strp);
+
+ cancel_work_sync(&psock->work);
+diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c
+index d5740bad5b18..57d84e9b7b6f 100644
+--- a/net/dccp/ipv6.c
++++ b/net/dccp/ipv6.c
+@@ -436,8 +436,8 @@ static struct sock *dccp_v6_request_recv_sock(const struct sock *sk,
+ newnp->ipv6_mc_list = NULL;
+ newnp->ipv6_ac_list = NULL;
+ newnp->ipv6_fl_list = NULL;
+- newnp->mcast_oif = inet6_iif(skb);
+- newnp->mcast_hops = ipv6_hdr(skb)->hop_limit;
++ newnp->mcast_oif = inet_iif(skb);
++ newnp->mcast_hops = ip_hdr(skb)->ttl;
+
+ /*
+ * No need to charge this sock to the relevant IPv6 refcnt debug socks count
+diff --git a/net/hsr/hsr_device.c b/net/hsr/hsr_device.c
+index b8cd43c9ed5b..a97bf326b231 100644
+--- a/net/hsr/hsr_device.c
++++ b/net/hsr/hsr_device.c
+@@ -94,9 +94,8 @@ static void hsr_check_announce(struct net_device *hsr_dev,
+ && (old_operstate != IF_OPER_UP)) {
+ /* Went up */
+ hsr->announce_count = 0;
+- hsr->announce_timer.expires = jiffies +
+- msecs_to_jiffies(HSR_ANNOUNCE_INTERVAL);
+- add_timer(&hsr->announce_timer);
++ mod_timer(&hsr->announce_timer,
++ jiffies + msecs_to_jiffies(HSR_ANNOUNCE_INTERVAL));
+ }
+
+ if ((hsr_dev->operstate != IF_OPER_UP) && (old_operstate == IF_OPER_UP))
+@@ -332,6 +331,7 @@ static void hsr_announce(struct timer_list *t)
+ {
+ struct hsr_priv *hsr;
+ struct hsr_port *master;
++ unsigned long interval;
+
+ hsr = from_timer(hsr, t, announce_timer);
+
+@@ -343,18 +343,16 @@ static void hsr_announce(struct timer_list *t)
+ hsr->protVersion);
+ hsr->announce_count++;
+
+- hsr->announce_timer.expires = jiffies +
+- msecs_to_jiffies(HSR_ANNOUNCE_INTERVAL);
++ interval = msecs_to_jiffies(HSR_ANNOUNCE_INTERVAL);
+ } else {
+ send_hsr_supervision_frame(master, HSR_TLV_LIFE_CHECK,
+ hsr->protVersion);
+
+- hsr->announce_timer.expires = jiffies +
+- msecs_to_jiffies(HSR_LIFE_CHECK_INTERVAL);
++ interval = msecs_to_jiffies(HSR_LIFE_CHECK_INTERVAL);
+ }
+
+ if (is_admin_up(master->dev))
+- add_timer(&hsr->announce_timer);
++ mod_timer(&hsr->announce_timer, jiffies + interval);
+
+ rcu_read_unlock();
+ }
+@@ -486,7 +484,7 @@ int hsr_dev_finalize(struct net_device *hsr_dev, struct net_device *slave[2],
+
+ res = hsr_add_port(hsr, hsr_dev, HSR_PT_MASTER);
+ if (res)
+- return res;
++ goto err_add_port;
+
+ res = register_netdevice(hsr_dev);
+ if (res)
+@@ -506,6 +504,8 @@ int hsr_dev_finalize(struct net_device *hsr_dev, struct net_device *slave[2],
+ fail:
+ hsr_for_each_port(hsr, port)
+ hsr_del_port(port);
++err_add_port:
++ hsr_del_node(&hsr->self_node_db);
+
+ return res;
+ }
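
[Editor's note: the hsr hunks replace the open-coded "set expires, then add_timer()" with mod_timer(); add_timer() on a timer that is already pending corrupts the timer wheel, while mod_timer() re-arms safely in either state. A minimal kernel-style sketch of a timer re-armed from its own callback; the 100 ms interval is illustrative.]

	#include <linux/timer.h>
	#include <linux/jiffies.h>

	static struct timer_list announce_timer;

	static void announce_cb(struct timer_list *t)
	{
		/* ... do the periodic work ... */

		/* mod_timer() is safe whether or not the timer is still
		 * pending; add_timer() here would be a bug if any other
		 * path could have re-armed it in the meantime. */
		mod_timer(&announce_timer, jiffies + msecs_to_jiffies(100));
	}

	static void announce_start(void)
	{
		timer_setup(&announce_timer, announce_cb, 0);
		mod_timer(&announce_timer, jiffies + msecs_to_jiffies(100));
	}
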
+diff --git a/net/hsr/hsr_framereg.c b/net/hsr/hsr_framereg.c
+index 286ceb41ac0c..9af16cb68f76 100644
+--- a/net/hsr/hsr_framereg.c
++++ b/net/hsr/hsr_framereg.c
+@@ -124,6 +124,18 @@ int hsr_create_self_node(struct list_head *self_node_db,
+ return 0;
+ }
+
++void hsr_del_node(struct list_head *self_node_db)
++{
++ struct hsr_node *node;
++
++ rcu_read_lock();
++ node = list_first_or_null_rcu(self_node_db, struct hsr_node, mac_list);
++ rcu_read_unlock();
++ if (node) {
++ list_del_rcu(&node->mac_list);
++ kfree(node);
++ }
++}
+
+ /* Allocate an hsr_node and add it to node_db. 'addr' is the node's AddressA;
+ * seq_out is used to initialize filtering of outgoing duplicate frames
+diff --git a/net/hsr/hsr_framereg.h b/net/hsr/hsr_framereg.h
+index 370b45998121..531fd3dfcac1 100644
+--- a/net/hsr/hsr_framereg.h
++++ b/net/hsr/hsr_framereg.h
+@@ -16,6 +16,7 @@
+
+ struct hsr_node;
+
++void hsr_del_node(struct list_head *self_node_db);
+ struct hsr_node *hsr_add_node(struct list_head *node_db, unsigned char addr[],
+ u16 seq_out);
+ struct hsr_node *hsr_get_node(struct hsr_port *port, struct sk_buff *skb,
+diff --git a/net/ipv4/fou.c b/net/ipv4/fou.c
+index 437070d1ffb1..79e98e21cdd7 100644
+--- a/net/ipv4/fou.c
++++ b/net/ipv4/fou.c
+@@ -1024,7 +1024,7 @@ static int gue_err(struct sk_buff *skb, u32 info)
+ int ret;
+
+ len = sizeof(struct udphdr) + sizeof(struct guehdr);
+- if (!pskb_may_pull(skb, len))
++ if (!pskb_may_pull(skb, transport_offset + len))
+ return -EINVAL;
+
+ guehdr = (struct guehdr *)&udp_hdr(skb)[1];
+@@ -1059,7 +1059,7 @@ static int gue_err(struct sk_buff *skb, u32 info)
+
+ optlen = guehdr->hlen << 2;
+
+- if (!pskb_may_pull(skb, len + optlen))
++ if (!pskb_may_pull(skb, transport_offset + len + optlen))
+ return -EINVAL;
+
+ guehdr = (struct guehdr *)&udp_hdr(skb)[1];
+diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
+index 6ae89f2b541b..2d5734079e6b 100644
+--- a/net/ipv4/ip_gre.c
++++ b/net/ipv4/ip_gre.c
+@@ -259,7 +259,6 @@ static int erspan_rcv(struct sk_buff *skb, struct tnl_ptk_info *tpi,
+ struct net *net = dev_net(skb->dev);
+ struct metadata_dst *tun_dst = NULL;
+ struct erspan_base_hdr *ershdr;
+- struct erspan_metadata *pkt_md;
+ struct ip_tunnel_net *itn;
+ struct ip_tunnel *tunnel;
+ const struct iphdr *iph;
+@@ -282,9 +281,6 @@ static int erspan_rcv(struct sk_buff *skb, struct tnl_ptk_info *tpi,
+ if (unlikely(!pskb_may_pull(skb, len)))
+ return PACKET_REJECT;
+
+- ershdr = (struct erspan_base_hdr *)(skb->data + gre_hdr_len);
+- pkt_md = (struct erspan_metadata *)(ershdr + 1);
+-
+ if (__iptunnel_pull_header(skb,
+ len,
+ htons(ETH_P_TEB),
+@@ -292,8 +288,9 @@ static int erspan_rcv(struct sk_buff *skb, struct tnl_ptk_info *tpi,
+ goto drop;
+
+ if (tunnel->collect_md) {
++ struct erspan_metadata *pkt_md, *md;
+ struct ip_tunnel_info *info;
+- struct erspan_metadata *md;
++ unsigned char *gh;
+ __be64 tun_id;
+ __be16 flags;
+
+@@ -306,6 +303,14 @@ static int erspan_rcv(struct sk_buff *skb, struct tnl_ptk_info *tpi,
+ if (!tun_dst)
+ return PACKET_REJECT;
+
++ /* skb can be uncloned in __iptunnel_pull_header, so
++ * old pkt_md is no longer valid and we need to reset
++ * it
++ */
++ gh = skb_network_header(skb) +
++ skb_network_header_len(skb);
++ pkt_md = (struct erspan_metadata *)(gh + gre_hdr_len +
++ sizeof(*ershdr));
+ md = ip_tunnel_info_opts(&tun_dst->u.tun_info);
+ md->version = ver;
+ md2 = &md->u.md2;
+diff --git a/net/ipv4/ip_input.c b/net/ipv4/ip_input.c
+index 1f4737b77067..ccf0d31b6ce5 100644
+--- a/net/ipv4/ip_input.c
++++ b/net/ipv4/ip_input.c
+@@ -257,11 +257,10 @@ int ip_local_deliver(struct sk_buff *skb)
+ ip_local_deliver_finish);
+ }
+
+-static inline bool ip_rcv_options(struct sk_buff *skb)
++static inline bool ip_rcv_options(struct sk_buff *skb, struct net_device *dev)
+ {
+ struct ip_options *opt;
+ const struct iphdr *iph;
+- struct net_device *dev = skb->dev;
+
+ /* It looks as overkill, because not all
+ IP options require packet mangling.
+@@ -297,7 +296,7 @@ static inline bool ip_rcv_options(struct sk_buff *skb)
+ }
+ }
+
+- if (ip_options_rcv_srr(skb))
++ if (ip_options_rcv_srr(skb, dev))
+ goto drop;
+ }
+
+@@ -353,7 +352,7 @@ static int ip_rcv_finish_core(struct net *net, struct sock *sk,
+ }
+ #endif
+
+- if (iph->ihl > 5 && ip_rcv_options(skb))
++ if (iph->ihl > 5 && ip_rcv_options(skb, dev))
+ goto drop;
+
+ rt = skb_rtable(skb);
+diff --git a/net/ipv4/ip_options.c b/net/ipv4/ip_options.c
+index 32a35043c9f5..3db31bb9df50 100644
+--- a/net/ipv4/ip_options.c
++++ b/net/ipv4/ip_options.c
+@@ -612,7 +612,7 @@ void ip_forward_options(struct sk_buff *skb)
+ }
+ }
+
+-int ip_options_rcv_srr(struct sk_buff *skb)
++int ip_options_rcv_srr(struct sk_buff *skb, struct net_device *dev)
+ {
+ struct ip_options *opt = &(IPCB(skb)->opt);
+ int srrspace, srrptr;
+@@ -647,7 +647,7 @@ int ip_options_rcv_srr(struct sk_buff *skb)
+
+ orefdst = skb->_skb_refdst;
+ skb_dst_set(skb, NULL);
+- err = ip_route_input(skb, nexthop, iph->saddr, iph->tos, skb->dev);
++ err = ip_route_input(skb, nexthop, iph->saddr, iph->tos, dev);
+ rt2 = skb_rtable(skb);
+ if (err || (rt2->rt_type != RTN_UNICAST && rt2->rt_type != RTN_LOCAL)) {
+ skb_dst_drop(skb);
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 7bb9128c8363..e04cdb58a602 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -1303,6 +1303,10 @@ static void ip_del_fnhe(struct fib_nh *nh, __be32 daddr)
+ if (fnhe->fnhe_daddr == daddr) {
+ rcu_assign_pointer(*fnhe_p, rcu_dereference_protected(
+ fnhe->fnhe_next, lockdep_is_held(&fnhe_lock)));
++ /* set fnhe_daddr to 0 to ensure it won't bind with
++ * new dsts in rt_bind_exception().
++ */
++ fnhe->fnhe_daddr = 0;
+ fnhe_flush_routes(fnhe);
+ kfree_rcu(fnhe, rcu);
+ break;
+@@ -2144,12 +2148,13 @@ int ip_route_input_rcu(struct sk_buff *skb, __be32 daddr, __be32 saddr,
+ int our = 0;
+ int err = -EINVAL;
+
+- if (in_dev)
+- our = ip_check_mc_rcu(in_dev, daddr, saddr,
+- ip_hdr(skb)->protocol);
++ if (!in_dev)
++ return err;
++ our = ip_check_mc_rcu(in_dev, daddr, saddr,
++ ip_hdr(skb)->protocol);
+
+ /* check l3 master if no match yet */
+- if ((!in_dev || !our) && netif_is_l3_slave(dev)) {
++ if (!our && netif_is_l3_slave(dev)) {
+ struct in_device *l3_in_dev;
+
+ l3_in_dev = __in_dev_get_rcu(skb->dev);
+diff --git a/net/ipv4/syncookies.c b/net/ipv4/syncookies.c
+index 606f868d9f3f..e531344611a0 100644
+--- a/net/ipv4/syncookies.c
++++ b/net/ipv4/syncookies.c
+@@ -216,7 +216,12 @@ struct sock *tcp_get_cookie_sock(struct sock *sk, struct sk_buff *skb,
+ refcount_set(&req->rsk_refcnt, 1);
+ tcp_sk(child)->tsoffset = tsoff;
+ sock_rps_save_rxhash(child, skb);
+- inet_csk_reqsk_queue_add(sk, req, child);
++ if (!inet_csk_reqsk_queue_add(sk, req, child)) {
++ bh_unlock_sock(child);
++ sock_put(child);
++ child = NULL;
++ reqsk_put(req);
++ }
+ } else {
+ reqsk_free(req);
+ }
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index cf3c5095c10e..ce365cbba1d1 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -1914,6 +1914,11 @@ static int tcp_inq_hint(struct sock *sk)
+ inq = tp->rcv_nxt - tp->copied_seq;
+ release_sock(sk);
+ }
++ /* After receiving a FIN, tell the user-space to continue reading
++ * by returning a non-zero inq.
++ */
++ if (inq == 0 && sock_flag(sk, SOCK_DONE))
++ inq = 1;
+ return inq;
+ }
+
+diff --git a/net/ipv4/tcp_dctcp.c b/net/ipv4/tcp_dctcp.c
+index cd4814f7e962..359da68d7c06 100644
+--- a/net/ipv4/tcp_dctcp.c
++++ b/net/ipv4/tcp_dctcp.c
+@@ -67,11 +67,6 @@ static unsigned int dctcp_alpha_on_init __read_mostly = DCTCP_MAX_ALPHA;
+ module_param(dctcp_alpha_on_init, uint, 0644);
+ MODULE_PARM_DESC(dctcp_alpha_on_init, "parameter for initial alpha value");
+
+-static unsigned int dctcp_clamp_alpha_on_loss __read_mostly;
+-module_param(dctcp_clamp_alpha_on_loss, uint, 0644);
+-MODULE_PARM_DESC(dctcp_clamp_alpha_on_loss,
+- "parameter for clamping alpha on loss");
+-
+ static struct tcp_congestion_ops dctcp_reno;
+
+ static void dctcp_reset(const struct tcp_sock *tp, struct dctcp *ca)
+@@ -164,21 +159,23 @@ static void dctcp_update_alpha(struct sock *sk, u32 flags)
+ }
+ }
+
+-static void dctcp_state(struct sock *sk, u8 new_state)
++static void dctcp_react_to_loss(struct sock *sk)
+ {
+- if (dctcp_clamp_alpha_on_loss && new_state == TCP_CA_Loss) {
+- struct dctcp *ca = inet_csk_ca(sk);
++ struct dctcp *ca = inet_csk_ca(sk);
++ struct tcp_sock *tp = tcp_sk(sk);
+
+- /* If this extension is enabled, we clamp dctcp_alpha to
+- * max on packet loss; the motivation is that dctcp_alpha
+- * is an indicator to the extend of congestion and packet
+- * loss is an indicator of extreme congestion; setting
+- * this in practice turned out to be beneficial, and
+- * effectively assumes total congestion which reduces the
+- * window by half.
+- */
+- ca->dctcp_alpha = DCTCP_MAX_ALPHA;
+- }
++ ca->loss_cwnd = tp->snd_cwnd;
++ tp->snd_ssthresh = max(tp->snd_cwnd >> 1U, 2U);
++}
++
++static void dctcp_state(struct sock *sk, u8 new_state)
++{
++ if (new_state == TCP_CA_Recovery &&
++ new_state != inet_csk(sk)->icsk_ca_state)
++ dctcp_react_to_loss(sk);
++ /* We handle RTO in dctcp_cwnd_event to ensure that we perform only
++ * one loss-adjustment per RTT.
++ */
+ }
+
+ static void dctcp_cwnd_event(struct sock *sk, enum tcp_ca_event ev)
+@@ -190,6 +187,9 @@ static void dctcp_cwnd_event(struct sock *sk, enum tcp_ca_event ev)
+ case CA_EVENT_ECN_NO_CE:
+ dctcp_ece_ack_update(sk, ev, &ca->prior_rcv_nxt, &ca->ce_state);
+ break;
++ case CA_EVENT_LOSS:
++ dctcp_react_to_loss(sk);
++ break;
+ default:
+ /* Don't care for the rest. */
+ break;
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 76858b14ebe9..7b1ef897b398 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -6519,7 +6519,13 @@ int tcp_conn_request(struct request_sock_ops *rsk_ops,
+ af_ops->send_synack(fastopen_sk, dst, &fl, req,
+ &foc, TCP_SYNACK_FASTOPEN);
+ /* Add the child socket directly into the accept queue */
+- inet_csk_reqsk_queue_add(sk, req, fastopen_sk);
++ if (!inet_csk_reqsk_queue_add(sk, req, fastopen_sk)) {
++ reqsk_fastopen_remove(fastopen_sk, req, false);
++ bh_unlock_sock(fastopen_sk);
++ sock_put(fastopen_sk);
++ reqsk_put(req);
++ goto drop;
++ }
+ sk->sk_data_ready(sk);
+ bh_unlock_sock(fastopen_sk);
+ sock_put(fastopen_sk);
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index ec3cea9d6828..00852f47a73d 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -1734,15 +1734,8 @@ EXPORT_SYMBOL(tcp_add_backlog);
+ int tcp_filter(struct sock *sk, struct sk_buff *skb)
+ {
+ struct tcphdr *th = (struct tcphdr *)skb->data;
+- unsigned int eaten = skb->len;
+- int err;
+
+- err = sk_filter_trim_cap(sk, skb, th->doff * 4);
+- if (!err) {
+- eaten -= skb->len;
+- TCP_SKB_CB(skb)->end_seq -= eaten;
+- }
+- return err;
++ return sk_filter_trim_cap(sk, skb, th->doff * 4);
+ }
+ EXPORT_SYMBOL(tcp_filter);
+
+@@ -2585,7 +2578,8 @@ static void __net_exit tcp_sk_exit(struct net *net)
+ {
+ int cpu;
+
+- module_put(net->ipv4.tcp_congestion_control->owner);
++ if (net->ipv4.tcp_congestion_control)
++ module_put(net->ipv4.tcp_congestion_control->owner);
+
+ for_each_possible_cpu(cpu)
+ inet_ctl_sock_destroy(*per_cpu_ptr(net->ipv4.tcp_sk, cpu));
+diff --git a/net/ipv6/fou6.c b/net/ipv6/fou6.c
+index 867474abe269..ec4e2ed95f36 100644
+--- a/net/ipv6/fou6.c
++++ b/net/ipv6/fou6.c
+@@ -94,7 +94,7 @@ static int gue6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ int ret;
+
+ len = sizeof(struct udphdr) + sizeof(struct guehdr);
+- if (!pskb_may_pull(skb, len))
++ if (!pskb_may_pull(skb, transport_offset + len))
+ return -EINVAL;
+
+ guehdr = (struct guehdr *)&udp_hdr(skb)[1];
+@@ -129,7 +129,7 @@ static int gue6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+
+ optlen = guehdr->hlen << 2;
+
+- if (!pskb_may_pull(skb, len + optlen))
++ if (!pskb_may_pull(skb, transport_offset + len + optlen))
+ return -EINVAL;
+
+ guehdr = (struct guehdr *)&udp_hdr(skb)[1];
+diff --git a/net/ipv6/ila/ila_xlat.c b/net/ipv6/ila/ila_xlat.c
+index 17c455ff69ff..7858fa9ea103 100644
+--- a/net/ipv6/ila/ila_xlat.c
++++ b/net/ipv6/ila/ila_xlat.c
+@@ -420,6 +420,7 @@ int ila_xlat_nl_cmd_flush(struct sk_buff *skb, struct genl_info *info)
+
+ done:
+ rhashtable_walk_stop(&iter);
++ rhashtable_walk_exit(&iter);
+ return ret;
+ }
+
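
[Editor's note: the ila hunk pairs rhashtable_walk_stop() with the rhashtable_walk_exit() that undoes rhashtable_walk_enter(); skipping exit leaks the iterator's registration with the table. A hedged sketch of the full walk protocol as used in this kernel series; walk_all() and its processing step are illustrative.]

	#include <linux/rhashtable.h>
	#include <linux/err.h>

	static int walk_all(struct rhashtable *ht)
	{
		struct rhashtable_iter iter;
		void *obj;
		int ret = 0;

		rhashtable_walk_enter(ht, &iter);
		rhashtable_walk_start(&iter);

		while ((obj = rhashtable_walk_next(&iter)) != NULL) {
			if (IS_ERR(obj)) {
				if (PTR_ERR(obj) == -EAGAIN)
					continue;	/* table resized; keep going */
				ret = PTR_ERR(obj);
				break;
			}
			/* ... process obj ... */
		}

		rhashtable_walk_stop(&iter);
		rhashtable_walk_exit(&iter);	/* must pair with _enter() */
		return ret;
	}
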
+diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
+index 26f25b6e2833..438f1a5fd19a 100644
+--- a/net/ipv6/ip6_gre.c
++++ b/net/ipv6/ip6_gre.c
+@@ -524,11 +524,10 @@ static int ip6gre_rcv(struct sk_buff *skb, const struct tnl_ptk_info *tpi)
+ return PACKET_REJECT;
+ }
+
+-static int ip6erspan_rcv(struct sk_buff *skb, int gre_hdr_len,
+- struct tnl_ptk_info *tpi)
++static int ip6erspan_rcv(struct sk_buff *skb, struct tnl_ptk_info *tpi,
++ int gre_hdr_len)
+ {
+ struct erspan_base_hdr *ershdr;
+- struct erspan_metadata *pkt_md;
+ const struct ipv6hdr *ipv6h;
+ struct erspan_md2 *md2;
+ struct ip6_tnl *tunnel;
+@@ -547,18 +546,16 @@ static int ip6erspan_rcv(struct sk_buff *skb, int gre_hdr_len,
+ if (unlikely(!pskb_may_pull(skb, len)))
+ return PACKET_REJECT;
+
+- ershdr = (struct erspan_base_hdr *)skb->data;
+- pkt_md = (struct erspan_metadata *)(ershdr + 1);
+-
+ if (__iptunnel_pull_header(skb, len,
+ htons(ETH_P_TEB),
+ false, false) < 0)
+ return PACKET_REJECT;
+
+ if (tunnel->parms.collect_md) {
++ struct erspan_metadata *pkt_md, *md;
+ struct metadata_dst *tun_dst;
+ struct ip_tunnel_info *info;
+- struct erspan_metadata *md;
++ unsigned char *gh;
+ __be64 tun_id;
+ __be16 flags;
+
+@@ -571,6 +568,14 @@ static int ip6erspan_rcv(struct sk_buff *skb, int gre_hdr_len,
+ if (!tun_dst)
+ return PACKET_REJECT;
+
++ /* skb can be uncloned in __iptunnel_pull_header, so
++ * old pkt_md is no longer valid and we need to reset
++ * it
++ */
++ gh = skb_network_header(skb) +
++ skb_network_header_len(skb);
++ pkt_md = (struct erspan_metadata *)(gh + gre_hdr_len +
++ sizeof(*ershdr));
+ info = &tun_dst->u.tun_info;
+ md = ip_tunnel_info_opts(info);
+ md->version = ver;
+@@ -607,7 +612,7 @@ static int gre_rcv(struct sk_buff *skb)
+
+ if (unlikely(tpi.proto == htons(ETH_P_ERSPAN) ||
+ tpi.proto == htons(ETH_P_ERSPAN2))) {
+- if (ip6erspan_rcv(skb, hdr_len, &tpi) == PACKET_RCVD)
++ if (ip6erspan_rcv(skb, &tpi, hdr_len) == PACKET_RCVD)
+ return 0;
+ goto out;
+ }
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index 5f9fa0302b5a..e71227390bec 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -595,7 +595,7 @@ int ip6_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
+ inet6_sk(skb->sk) : NULL;
+ struct ipv6hdr *tmp_hdr;
+ struct frag_hdr *fh;
+- unsigned int mtu, hlen, left, len;
++ unsigned int mtu, hlen, left, len, nexthdr_offset;
+ int hroom, troom;
+ __be32 frag_id;
+ int ptr, offset = 0, err = 0;
+@@ -606,6 +606,7 @@ int ip6_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
+ goto fail;
+ hlen = err;
+ nexthdr = *prevhdr;
++ nexthdr_offset = prevhdr - skb_network_header(skb);
+
+ mtu = ip6_skb_dst_mtu(skb);
+
+@@ -640,6 +641,7 @@ int ip6_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
+ (err = skb_checksum_help(skb)))
+ goto fail;
+
++ prevhdr = skb_network_header(skb) + nexthdr_offset;
+ hroom = LL_RESERVED_SPACE(rt->dst.dev);
+ if (skb_has_frag_list(skb)) {
+ unsigned int first_len = skb_pagelen(skb);
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index 0c6403cf8b52..ade1390c6348 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -627,7 +627,7 @@ ip4ip6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ rt = ip_route_output_ports(dev_net(skb->dev), &fl4, NULL,
+ eiph->daddr, eiph->saddr, 0, 0,
+ IPPROTO_IPIP, RT_TOS(eiph->tos), 0);
+- if (IS_ERR(rt) || rt->dst.dev->type != ARPHRD_TUNNEL) {
++ if (IS_ERR(rt) || rt->dst.dev->type != ARPHRD_TUNNEL6) {
+ if (!IS_ERR(rt))
+ ip_rt_put(rt);
+ goto out;
+@@ -636,7 +636,7 @@ ip4ip6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ } else {
+ if (ip_route_input(skb2, eiph->daddr, eiph->saddr, eiph->tos,
+ skb2->dev) ||
+- skb_dst(skb2)->dev->type != ARPHRD_TUNNEL)
++ skb_dst(skb2)->dev->type != ARPHRD_TUNNEL6)
+ goto out;
+ }
+
+diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
+index cc01aa3f2b5e..af91a1a402f1 100644
+--- a/net/ipv6/ip6mr.c
++++ b/net/ipv6/ip6mr.c
+@@ -1964,10 +1964,10 @@ int ip6mr_compat_ioctl(struct sock *sk, unsigned int cmd, void __user *arg)
+
+ static inline int ip6mr_forward2_finish(struct net *net, struct sock *sk, struct sk_buff *skb)
+ {
+- __IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)),
+- IPSTATS_MIB_OUTFORWDATAGRAMS);
+- __IP6_ADD_STATS(net, ip6_dst_idev(skb_dst(skb)),
+- IPSTATS_MIB_OUTOCTETS, skb->len);
++ IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)),
++ IPSTATS_MIB_OUTFORWDATAGRAMS);
++ IP6_ADD_STATS(net, ip6_dst_idev(skb_dst(skb)),
++ IPSTATS_MIB_OUTOCTETS, skb->len);
+ return dst_output(net, sk, skb);
+ }
+
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 8dad1d690b78..0086acc16f3c 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -1040,14 +1040,20 @@ static struct rt6_info *ip6_create_rt_rcu(struct fib6_info *rt)
+ struct rt6_info *nrt;
+
+ if (!fib6_info_hold_safe(rt))
+- return NULL;
++ goto fallback;
+
+ nrt = ip6_dst_alloc(dev_net(dev), dev, flags);
+- if (nrt)
+- ip6_rt_copy_init(nrt, rt);
+- else
++ if (!nrt) {
+ fib6_info_release(rt);
++ goto fallback;
++ }
+
++ ip6_rt_copy_init(nrt, rt);
++ return nrt;
++
++fallback:
++ nrt = dev_net(dev)->ipv6.ip6_null_entry;
++ dst_hold(&nrt->dst);
+ return nrt;
+ }
+
+@@ -1096,10 +1102,6 @@ restart:
+ dst_hold(&rt->dst);
+ } else {
+ rt = ip6_create_rt_rcu(f6i);
+- if (!rt) {
+- rt = net->ipv6.ip6_null_entry;
+- dst_hold(&rt->dst);
+- }
+ }
+
+ rcu_read_unlock();
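
[Editor's note: after the route.c change, ip6_create_rt_rcu() never returns NULL: on any failure it hands back the held ip6_null_entry, so the caller's NULL handling collapses away. A runnable sketch of the "never NULL, always holds a reference" contract; every name below is illustrative.]

	#include <stdio.h>
	#include <stdlib.h>

	struct rt { int refcnt; const char *name; };

	static struct rt null_rt = { 1, "null" };

	static struct rt *rt_hold(struct rt *r) { r->refcnt++; return r; }

	/* On any failure fall back to the held null route instead of NULL,
	 * so every caller can unconditionally use and release the result. */
	static struct rt *rt_create(int simulate_oom)
	{
		struct rt *r = simulate_oom ? NULL : malloc(sizeof(*r));

		if (!r)
			return rt_hold(&null_rt);
		r->refcnt = 1;
		r->name = "real";
		return r;
	}

	int main(void)
	{
		printf("%s\n", rt_create(1)->name);	/* "null", not a crash */
		return 0;
	}
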
+diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c
+index 09e440e8dfae..b2109b74857d 100644
+--- a/net/ipv6/sit.c
++++ b/net/ipv6/sit.c
+@@ -669,6 +669,10 @@ static int ipip6_rcv(struct sk_buff *skb)
+ !net_eq(tunnel->net, dev_net(tunnel->dev))))
+ goto out;
+
++ /* skb can be uncloned in iptunnel_pull_header, so
++ * old iph is no longer valid
++ */
++ iph = (const struct iphdr *)skb_mac_header(skb);
+ err = IP_ECN_decapsulate(iph, skb);
+ if (unlikely(err)) {
+ if (log_ecn_error)
+@@ -778,8 +782,9 @@ static bool check_6rd(struct ip_tunnel *tunnel, const struct in6_addr *v6dst,
+ pbw0 = tunnel->ip6rd.prefixlen >> 5;
+ pbi0 = tunnel->ip6rd.prefixlen & 0x1f;
+
+- d = (ntohl(v6dst->s6_addr32[pbw0]) << pbi0) >>
+- tunnel->ip6rd.relay_prefixlen;
++ d = tunnel->ip6rd.relay_prefixlen < 32 ?
++ (ntohl(v6dst->s6_addr32[pbw0]) << pbi0) >>
++ tunnel->ip6rd.relay_prefixlen : 0;
+
+ pbi1 = pbi0 - tunnel->ip6rd.relay_prefixlen;
+ if (pbi1 > 0)
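
[Editor's note: the sit.c hunk guards the case relay_prefixlen == 32, where the right-shift amount would equal the operand width; that shift is undefined behaviour in C, so the result is forced to 0 instead. A runnable illustration; shift_or_zero is an illustrative name and the threshold 32 matches the 32-bit operand.]

	#include <stdint.h>
	#include <stdio.h>

	/* Shifting a 32-bit value by 32 or more is undefined behaviour,
	 * so a shift count derived from a configurable prefix length
	 * needs an explicit guard, the same one the hunk above adds. */
	static uint32_t shift_or_zero(uint32_t word, unsigned int count)
	{
		return count < 32 ? word >> count : 0;
	}

	int main(void)
	{
		printf("%u\n", shift_or_zero(0xdeadbeefu, 32));	/* 0, defined */
		printf("%u\n", shift_or_zero(0xdeadbeefu, 16));	/* 0xdead */
		return 0;
	}
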
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index b81eb7cb815e..8505d96483d5 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -1112,11 +1112,11 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff *
+ newnp->ipv6_fl_list = NULL;
+ newnp->pktoptions = NULL;
+ newnp->opt = NULL;
+- newnp->mcast_oif = tcp_v6_iif(skb);
+- newnp->mcast_hops = ipv6_hdr(skb)->hop_limit;
+- newnp->rcv_flowinfo = ip6_flowinfo(ipv6_hdr(skb));
++ newnp->mcast_oif = inet_iif(skb);
++ newnp->mcast_hops = ip_hdr(skb)->ttl;
++ newnp->rcv_flowinfo = 0;
+ if (np->repflow)
+- newnp->flow_label = ip6_flowlabel(ipv6_hdr(skb));
++ newnp->flow_label = 0;
+
+ /*
+ * No need to charge this sock to the relevant IPv6 refcnt debug socks count
+diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
+index 571d824e4e24..b919db02c7f9 100644
+--- a/net/kcm/kcmsock.c
++++ b/net/kcm/kcmsock.c
+@@ -2054,14 +2054,14 @@ static int __init kcm_init(void)
+ if (err)
+ goto fail;
+
+- err = sock_register(&kcm_family_ops);
+- if (err)
+- goto sock_register_fail;
+-
+ err = register_pernet_device(&kcm_net_ops);
+ if (err)
+ goto net_ops_fail;
+
++ err = sock_register(&kcm_family_ops);
++ if (err)
++ goto sock_register_fail;
++
+ err = kcm_proc_init();
+ if (err)
+ goto proc_init_fail;
+@@ -2069,12 +2069,12 @@ static int __init kcm_init(void)
+ return 0;
+
+ proc_init_fail:
+- unregister_pernet_device(&kcm_net_ops);
+-
+-net_ops_fail:
+ sock_unregister(PF_KCM);
+
+ sock_register_fail:
++ unregister_pernet_device(&kcm_net_ops);
++
++net_ops_fail:
+ proto_unregister(&kcm_proto);
+
+ fail:
+@@ -2090,8 +2090,8 @@ fail:
+ static void __exit kcm_exit(void)
+ {
+ kcm_proc_exit();
+- unregister_pernet_device(&kcm_net_ops);
+ sock_unregister(PF_KCM);
++ unregister_pernet_device(&kcm_net_ops);
+ proto_unregister(&kcm_proto);
+ destroy_workqueue(kcm_wq);
+
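
[Editor's note: the kcm hunks reorder both the init sequence and the error/exit ladders so that teardown is the exact mirror of setup; unwinding in any other order can leave a later-registered facility alive while an earlier dependency is already gone. A runnable sketch of the ladder shape; the step names are illustrative.]

	#include <stdio.h>

	static int  init_proto(void)   { return 0; }
	static void exit_proto(void)   { }
	static int  init_pernet(void)  { return 0; }
	static void exit_pernet(void)  { }
	static int  init_sockreg(void) { return -1; }	/* pretend failure */

	/* Error labels undo steps in exactly the reverse order they
	 * succeeded, the invariant the hunks above restore. */
	static int kcm_like_init(void)
	{
		int err;

		err = init_proto();
		if (err)
			goto fail;
		err = init_pernet();
		if (err)
			goto undo_proto;
		err = init_sockreg();
		if (err)
			goto undo_pernet;
		return 0;

	undo_pernet:
		exit_pernet();
	undo_proto:
		exit_proto();
	fail:
		return err;
	}

	int main(void)
	{
		printf("%d\n", kcm_like_init());	/* -1, cleanly unwound */
		return 0;
	}
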
+diff --git a/net/l2tp/l2tp_ip6.c b/net/l2tp/l2tp_ip6.c
+index 0ae6899edac0..37a69df17cab 100644
+--- a/net/l2tp/l2tp_ip6.c
++++ b/net/l2tp/l2tp_ip6.c
+@@ -674,9 +674,6 @@ static int l2tp_ip6_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ if (flags & MSG_OOB)
+ goto out;
+
+- if (addr_len)
+- *addr_len = sizeof(*lsa);
+-
+ if (flags & MSG_ERRQUEUE)
+ return ipv6_recv_error(sk, msg, len, addr_len);
+
+@@ -706,6 +703,7 @@ static int l2tp_ip6_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ lsa->l2tp_conn_id = 0;
+ if (ipv6_addr_type(&lsa->l2tp_addr) & IPV6_ADDR_LINKLOCAL)
+ lsa->l2tp_scope_id = inet6_iif(skb);
++ *addr_len = sizeof(*lsa);
+ }
+
+ if (np->rxopt.all)
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index db4d46332e86..9dd4c2048a2b 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -901,10 +901,18 @@ __nf_conntrack_confirm(struct sk_buff *skb)
+ * REJECT will give spurious warnings here.
+ */
+
+- /* No external references means no one else could have
+- * confirmed us.
++ /* Another skb with the same unconfirmed conntrack may
++ * win the race. This may happen for bridge(br_flood)
++ * or broadcast/multicast packets do skb_clone with
++ * unconfirmed conntrack.
+ */
+- WARN_ON(nf_ct_is_confirmed(ct));
++ if (unlikely(nf_ct_is_confirmed(ct))) {
++ WARN_ON_ONCE(1);
++ nf_conntrack_double_unlock(hash, reply_hash);
++ local_bh_enable();
++ return NF_DROP;
++ }
++
+ pr_debug("Confirming conntrack %p\n", ct);
+ /* We have to check the DYING flag after unlink to prevent
+ * a race against nf_ct_get_next_corpse() possibly called from
+diff --git a/net/netfilter/nf_conntrack_proto_tcp.c b/net/netfilter/nf_conntrack_proto_tcp.c
+index 4dcbd51a8e97..74fb3fa34db4 100644
+--- a/net/netfilter/nf_conntrack_proto_tcp.c
++++ b/net/netfilter/nf_conntrack_proto_tcp.c
+@@ -828,6 +828,12 @@ static noinline bool tcp_new(struct nf_conn *ct, const struct sk_buff *skb,
+ return true;
+ }
+
++static bool nf_conntrack_tcp_established(const struct nf_conn *ct)
++{
++ return ct->proto.tcp.state == TCP_CONNTRACK_ESTABLISHED &&
++ test_bit(IPS_ASSURED_BIT, &ct->status);
++}
++
+ /* Returns verdict for packet, or -1 for invalid. */
+ static int tcp_packet(struct nf_conn *ct,
+ struct sk_buff *skb,
+@@ -1030,16 +1036,38 @@ static int tcp_packet(struct nf_conn *ct,
+ new_state = TCP_CONNTRACK_ESTABLISHED;
+ break;
+ case TCP_CONNTRACK_CLOSE:
+- if (index == TCP_RST_SET
+- && (ct->proto.tcp.seen[!dir].flags & IP_CT_TCP_FLAG_MAXACK_SET)
+- && before(ntohl(th->seq), ct->proto.tcp.seen[!dir].td_maxack)) {
+- /* Invalid RST */
+- spin_unlock_bh(&ct->lock);
+- nf_ct_l4proto_log_invalid(skb, ct, "invalid rst");
+- return -NF_ACCEPT;
++ if (index != TCP_RST_SET)
++ break;
++
++ if (ct->proto.tcp.seen[!dir].flags & IP_CT_TCP_FLAG_MAXACK_SET) {
++ u32 seq = ntohl(th->seq);
++
++ if (before(seq, ct->proto.tcp.seen[!dir].td_maxack)) {
++ /* Invalid RST */
++ spin_unlock_bh(&ct->lock);
++ nf_ct_l4proto_log_invalid(skb, ct, "invalid rst");
++ return -NF_ACCEPT;
++ }
++
++ if (!nf_conntrack_tcp_established(ct) ||
++ seq == ct->proto.tcp.seen[!dir].td_maxack)
++ break;
++
++ /* Check if rst is part of train, such as
++ * foo:80 > bar:4379: P, 235946583:235946602(19) ack 42
++ * foo:80 > bar:4379: R, 235946602:235946602(0) ack 42
++ */
++ if (ct->proto.tcp.last_index == TCP_ACK_SET &&
++ ct->proto.tcp.last_dir == dir &&
++ seq == ct->proto.tcp.last_end)
++ break;
++
++ /* ... RST sequence number doesn't match exactly, keep
++ * established state to allow a possible challenge ACK.
++ */
++ new_state = old_state;
+ }
+- if (index == TCP_RST_SET
+- && ((test_bit(IPS_SEEN_REPLY_BIT, &ct->status)
++ if (((test_bit(IPS_SEEN_REPLY_BIT, &ct->status)
+ && ct->proto.tcp.last_index == TCP_SYN_SET)
+ || (!test_bit(IPS_ASSURED_BIT, &ct->status)
+ && ct->proto.tcp.last_index == TCP_ACK_SET))
+@@ -1055,7 +1083,7 @@ static int tcp_packet(struct nf_conn *ct,
+ * segments we ignored. */
+ goto in_window;
+ }
+- /* Just fall through */
++ break;
+ default:
+ /* Keep compilers happy. */
+ break;
+@@ -1090,6 +1118,8 @@ static int tcp_packet(struct nf_conn *ct,
+ if (ct->proto.tcp.retrans >= tn->tcp_max_retrans &&
+ timeouts[new_state] > timeouts[TCP_CONNTRACK_RETRANS])
+ timeout = timeouts[TCP_CONNTRACK_RETRANS];
++ else if (unlikely(index == TCP_RST_SET))
++ timeout = timeouts[TCP_CONNTRACK_CLOSE];
+ else if ((ct->proto.tcp.seen[0].flags | ct->proto.tcp.seen[1].flags) &
+ IP_CT_TCP_FLAG_DATA_UNACKNOWLEDGED &&
+ timeouts[new_state] > timeouts[TCP_CONNTRACK_UNACK])
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 4893f248dfdc..acb124ce92ec 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -127,7 +127,7 @@ static void nft_set_trans_bind(const struct nft_ctx *ctx, struct nft_set *set)
+ list_for_each_entry_reverse(trans, &net->nft.commit_list, list) {
+ if (trans->msg_type == NFT_MSG_NEWSET &&
+ nft_trans_set(trans) == set) {
+- nft_trans_set_bound(trans) = true;
++ set->bound = true;
+ break;
+ }
+ }
+@@ -2119,9 +2119,11 @@ err1:
+ static void nf_tables_expr_destroy(const struct nft_ctx *ctx,
+ struct nft_expr *expr)
+ {
++ const struct nft_expr_type *type = expr->ops->type;
++
+ if (expr->ops->destroy)
+ expr->ops->destroy(ctx, expr);
+- module_put(expr->ops->type->owner);
++ module_put(type->owner);
+ }
+
+ struct nft_expr *nft_expr_init(const struct nft_ctx *ctx,
+@@ -2129,6 +2131,7 @@ struct nft_expr *nft_expr_init(const struct nft_ctx *ctx,
+ {
+ struct nft_expr_info info;
+ struct nft_expr *expr;
++ struct module *owner;
+ int err;
+
+ err = nf_tables_expr_parse(ctx, nla, &info);
+@@ -2148,7 +2151,11 @@ struct nft_expr *nft_expr_init(const struct nft_ctx *ctx,
+ err3:
+ kfree(expr);
+ err2:
+- module_put(info.ops->type->owner);
++ owner = info.ops->type->owner;
++ if (info.ops->type->release_ops)
++ info.ops->type->release_ops(info.ops);
++
++ module_put(owner);
+ err1:
+ return ERR_PTR(err);
+ }
+@@ -2746,8 +2753,11 @@ err2:
+ nf_tables_rule_release(&ctx, rule);
+ err1:
+ for (i = 0; i < n; i++) {
+- if (info[i].ops != NULL)
++ if (info[i].ops) {
+ module_put(info[i].ops->type->owner);
++ if (info[i].ops->type->release_ops)
++ info[i].ops->type->release_ops(info[i].ops);
++ }
+ }
+ kvfree(info);
+ return err;
+@@ -6617,8 +6627,7 @@ static void nf_tables_abort_release(struct nft_trans *trans)
+ nf_tables_rule_destroy(&trans->ctx, nft_trans_rule(trans));
+ break;
+ case NFT_MSG_NEWSET:
+- if (!nft_trans_set_bound(trans))
+- nft_set_destroy(nft_trans_set(trans));
++ nft_set_destroy(nft_trans_set(trans));
+ break;
+ case NFT_MSG_NEWSETELEM:
+ nft_set_elem_destroy(nft_trans_elem_set(trans),
+@@ -6691,8 +6700,11 @@ static int __nf_tables_abort(struct net *net)
+ break;
+ case NFT_MSG_NEWSET:
+ trans->ctx.table->use--;
+- if (!nft_trans_set_bound(trans))
+- list_del_rcu(&nft_trans_set(trans)->list);
++ if (nft_trans_set(trans)->bound) {
++ nft_trans_destroy(trans);
++ break;
++ }
++ list_del_rcu(&nft_trans_set(trans)->list);
+ break;
+ case NFT_MSG_DELSET:
+ trans->ctx.table->use++;
+@@ -6700,8 +6712,11 @@ static int __nf_tables_abort(struct net *net)
+ nft_trans_destroy(trans);
+ break;
+ case NFT_MSG_NEWSETELEM:
++ if (nft_trans_elem_set(trans)->bound) {
++ nft_trans_destroy(trans);
++ break;
++ }
+ te = (struct nft_trans_elem *)trans->data;
+-
+ te->set->ops->remove(net, te->set, &te->elem);
+ atomic_dec(&te->set->nelems);
+ break;
+diff --git a/net/netfilter/nf_tables_core.c b/net/netfilter/nf_tables_core.c
+index a50500232b0a..7e8dae82ca52 100644
+--- a/net/netfilter/nf_tables_core.c
++++ b/net/netfilter/nf_tables_core.c
+@@ -98,21 +98,23 @@ static noinline void nft_update_chain_stats(const struct nft_chain *chain,
+ const struct nft_pktinfo *pkt)
+ {
+ struct nft_base_chain *base_chain;
++ struct nft_stats __percpu *pstats;
+ struct nft_stats *stats;
+
+ base_chain = nft_base_chain(chain);
+- if (!rcu_access_pointer(base_chain->stats))
+- return;
+
+- local_bh_disable();
+- stats = this_cpu_ptr(rcu_dereference(base_chain->stats));
+- if (stats) {
++ rcu_read_lock();
++ pstats = READ_ONCE(base_chain->stats);
++ if (pstats) {
++ local_bh_disable();
++ stats = this_cpu_ptr(pstats);
+ u64_stats_update_begin(&stats->syncp);
+ stats->pkts++;
+ stats->bytes += pkt->skb->len;
+ u64_stats_update_end(&stats->syncp);
++ local_bh_enable();
+ }
+- local_bh_enable();
++ rcu_read_unlock();
+ }
+
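
[Editor's note: the nf_tables_core hunk fetches the stats pointer once under an RCU read section that keeps the per-cpu allocation alive, and disables BHs only around the actual per-cpu update. A hedged kernel-style sketch of that writer-side pattern; struct chain_stats and chain_stats_add() are illustrative, not the nft types.]

	#include <linux/percpu.h>
	#include <linux/rcupdate.h>
	#include <linux/u64_stats_sync.h>

	struct chain_stats {
		u64 pkts;
		u64 bytes;
		struct u64_stats_sync syncp;
	};

	static void chain_stats_add(struct chain_stats __percpu **pstats_p,
				    unsigned int len)
	{
		struct chain_stats __percpu *pstats;
		struct chain_stats *stats;

		rcu_read_lock();
		pstats = READ_ONCE(*pstats_p);	/* may be swapped concurrently */
		if (pstats) {
			local_bh_disable();
			stats = this_cpu_ptr(pstats);
			u64_stats_update_begin(&stats->syncp);
			stats->pkts++;
			stats->bytes += len;
			u64_stats_update_end(&stats->syncp);
			local_bh_enable();
		}
		rcu_read_unlock();
	}
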
+ struct nft_jumpstack {
+diff --git a/net/netfilter/nft_compat.c b/net/netfilter/nft_compat.c
+index 0a4bad55a8aa..469f9da5073b 100644
+--- a/net/netfilter/nft_compat.c
++++ b/net/netfilter/nft_compat.c
+@@ -22,23 +22,6 @@
+ #include <linux/netfilter_bridge/ebtables.h>
+ #include <linux/netfilter_arp/arp_tables.h>
+ #include <net/netfilter/nf_tables.h>
+-#include <net/netns/generic.h>
+-
+-struct nft_xt {
+- struct list_head head;
+- struct nft_expr_ops ops;
+- refcount_t refcnt;
+-
+- /* used only when transaction mutex is locked */
+- unsigned int listcnt;
+-
+- /* Unlike other expressions, ops doesn't have static storage duration.
+- * nft core assumes they do. We use kfree_rcu so that nft core can
+- * can check expr->ops->size even after nft_compat->destroy() frees
+- * the nft_xt struct that holds the ops structure.
+- */
+- struct rcu_head rcu_head;
+-};
+
+ /* Used for matches where *info is larger than X byte */
+ #define NFT_MATCH_LARGE_THRESH 192
+@@ -47,46 +30,6 @@ struct nft_xt_match_priv {
+ void *info;
+ };
+
+-struct nft_compat_net {
+- struct list_head nft_target_list;
+- struct list_head nft_match_list;
+-};
+-
+-static unsigned int nft_compat_net_id __read_mostly;
+-static struct nft_expr_type nft_match_type;
+-static struct nft_expr_type nft_target_type;
+-
+-static struct nft_compat_net *nft_compat_pernet(struct net *net)
+-{
+- return net_generic(net, nft_compat_net_id);
+-}
+-
+-static void nft_xt_get(struct nft_xt *xt)
+-{
+- /* refcount_inc() warns on 0 -> 1 transition, but we can't
+- * init the reference count to 1 in .select_ops -- we can't
+- * undo such an increase when another expression inside the same
+- * rule fails afterwards.
+- */
+- if (xt->listcnt == 0)
+- refcount_set(&xt->refcnt, 1);
+- else
+- refcount_inc(&xt->refcnt);
+-
+- xt->listcnt++;
+-}
+-
+-static bool nft_xt_put(struct nft_xt *xt)
+-{
+- if (refcount_dec_and_test(&xt->refcnt)) {
+- WARN_ON_ONCE(!list_empty(&xt->head));
+- kfree_rcu(xt, rcu_head);
+- return true;
+- }
+-
+- return false;
+-}
+-
+ static int nft_compat_chain_validate_dependency(const struct nft_ctx *ctx,
+ const char *tablename)
+ {
+@@ -281,7 +224,6 @@ nft_target_init(const struct nft_ctx *ctx, const struct nft_expr *expr,
+ struct xt_target *target = expr->ops->data;
+ struct xt_tgchk_param par;
+ size_t size = XT_ALIGN(nla_len(tb[NFTA_TARGET_INFO]));
+- struct nft_xt *nft_xt;
+ u16 proto = 0;
+ bool inv = false;
+ union nft_entry e = {};
+@@ -305,8 +247,6 @@ nft_target_init(const struct nft_ctx *ctx, const struct nft_expr *expr,
+ if (!target->target)
+ return -EINVAL;
+
+- nft_xt = container_of(expr->ops, struct nft_xt, ops);
+- nft_xt_get(nft_xt);
+ return 0;
+ }
+
+@@ -325,8 +265,8 @@ nft_target_destroy(const struct nft_ctx *ctx, const struct nft_expr *expr)
+ if (par.target->destroy != NULL)
+ par.target->destroy(&par);
+
+- if (nft_xt_put(container_of(expr->ops, struct nft_xt, ops)))
+- module_put(me);
++ module_put(me);
++ kfree(expr->ops);
+ }
+
+ static int nft_extension_dump_info(struct sk_buff *skb, int attr,
+@@ -499,7 +439,6 @@ __nft_match_init(const struct nft_ctx *ctx, const struct nft_expr *expr,
+ struct xt_match *match = expr->ops->data;
+ struct xt_mtchk_param par;
+ size_t size = XT_ALIGN(nla_len(tb[NFTA_MATCH_INFO]));
+- struct nft_xt *nft_xt;
+ u16 proto = 0;
+ bool inv = false;
+ union nft_entry e = {};
+@@ -515,13 +454,7 @@ __nft_match_init(const struct nft_ctx *ctx, const struct nft_expr *expr,
+
+ nft_match_set_mtchk_param(&par, ctx, match, info, &e, proto, inv);
+
+- ret = xt_check_match(&par, size, proto, inv);
+- if (ret < 0)
+- return ret;
+-
+- nft_xt = container_of(expr->ops, struct nft_xt, ops);
+- nft_xt_get(nft_xt);
+- return 0;
++ return xt_check_match(&par, size, proto, inv);
+ }
+
+ static int
+@@ -564,8 +497,8 @@ __nft_match_destroy(const struct nft_ctx *ctx, const struct nft_expr *expr,
+ if (par.match->destroy != NULL)
+ par.match->destroy(&par);
+
+- if (nft_xt_put(container_of(expr->ops, struct nft_xt, ops)))
+- module_put(me);
++ module_put(me);
++ kfree(expr->ops);
+ }
+
+ static void
+@@ -574,18 +507,6 @@ nft_match_destroy(const struct nft_ctx *ctx, const struct nft_expr *expr)
+ __nft_match_destroy(ctx, expr, nft_expr_priv(expr));
+ }
+
+-static void nft_compat_deactivate(const struct nft_ctx *ctx,
+- const struct nft_expr *expr,
+- enum nft_trans_phase phase)
+-{
+- struct nft_xt *xt = container_of(expr->ops, struct nft_xt, ops);
+-
+- if (phase == NFT_TRANS_ABORT || phase == NFT_TRANS_COMMIT) {
+- if (--xt->listcnt == 0)
+- list_del_init(&xt->head);
+- }
+-}
+-
+ static void
+ nft_match_large_destroy(const struct nft_ctx *ctx, const struct nft_expr *expr)
+ {
+@@ -780,19 +701,13 @@ static const struct nfnetlink_subsystem nfnl_compat_subsys = {
+ .cb = nfnl_nft_compat_cb,
+ };
+
+-static bool nft_match_cmp(const struct xt_match *match,
+- const char *name, u32 rev, u32 family)
+-{
+- return strcmp(match->name, name) == 0 && match->revision == rev &&
+- (match->family == NFPROTO_UNSPEC || match->family == family);
+-}
++static struct nft_expr_type nft_match_type;
+
+ static const struct nft_expr_ops *
+ nft_match_select_ops(const struct nft_ctx *ctx,
+ const struct nlattr * const tb[])
+ {
+- struct nft_compat_net *cn;
+- struct nft_xt *nft_match;
++ struct nft_expr_ops *ops;
+ struct xt_match *match;
+ unsigned int matchsize;
+ char *mt_name;
+@@ -808,16 +723,6 @@ nft_match_select_ops(const struct nft_ctx *ctx,
+ rev = ntohl(nla_get_be32(tb[NFTA_MATCH_REV]));
+ family = ctx->family;
+
+- cn = nft_compat_pernet(ctx->net);
+-
+- /* Re-use the existing match if it's already loaded. */
+- list_for_each_entry(nft_match, &cn->nft_match_list, head) {
+- struct xt_match *match = nft_match->ops.data;
+-
+- if (nft_match_cmp(match, mt_name, rev, family))
+- return &nft_match->ops;
+- }
+-
+ match = xt_request_find_match(family, mt_name, rev);
+ if (IS_ERR(match))
+ return ERR_PTR(-ENOENT);
+@@ -827,65 +732,62 @@ nft_match_select_ops(const struct nft_ctx *ctx,
+ goto err;
+ }
+
+- /* This is the first time we use this match, allocate operations */
+- nft_match = kzalloc(sizeof(struct nft_xt), GFP_KERNEL);
+- if (nft_match == NULL) {
++ ops = kzalloc(sizeof(struct nft_expr_ops), GFP_KERNEL);
++ if (!ops) {
+ err = -ENOMEM;
+ goto err;
+ }
+
+- refcount_set(&nft_match->refcnt, 0);
+- nft_match->ops.type = &nft_match_type;
+- nft_match->ops.eval = nft_match_eval;
+- nft_match->ops.init = nft_match_init;
+- nft_match->ops.destroy = nft_match_destroy;
+- nft_match->ops.deactivate = nft_compat_deactivate;
+- nft_match->ops.dump = nft_match_dump;
+- nft_match->ops.validate = nft_match_validate;
+- nft_match->ops.data = match;
++ ops->type = &nft_match_type;
++ ops->eval = nft_match_eval;
++ ops->init = nft_match_init;
++ ops->destroy = nft_match_destroy;
++ ops->dump = nft_match_dump;
++ ops->validate = nft_match_validate;
++ ops->data = match;
+
+ matchsize = NFT_EXPR_SIZE(XT_ALIGN(match->matchsize));
+ if (matchsize > NFT_MATCH_LARGE_THRESH) {
+ matchsize = NFT_EXPR_SIZE(sizeof(struct nft_xt_match_priv));
+
+- nft_match->ops.eval = nft_match_large_eval;
+- nft_match->ops.init = nft_match_large_init;
+- nft_match->ops.destroy = nft_match_large_destroy;
+- nft_match->ops.dump = nft_match_large_dump;
++ ops->eval = nft_match_large_eval;
++ ops->init = nft_match_large_init;
++ ops->destroy = nft_match_large_destroy;
++ ops->dump = nft_match_large_dump;
+ }
+
+- nft_match->ops.size = matchsize;
++ ops->size = matchsize;
+
+- nft_match->listcnt = 0;
+- list_add(&nft_match->head, &cn->nft_match_list);
+-
+- return &nft_match->ops;
++ return ops;
+ err:
+ module_put(match->me);
+ return ERR_PTR(err);
+ }
+
++static void nft_match_release_ops(const struct nft_expr_ops *ops)
++{
++ struct xt_match *match = ops->data;
++
++ module_put(match->me);
++ kfree(ops);
++}
++
+ static struct nft_expr_type nft_match_type __read_mostly = {
+ .name = "match",
+ .select_ops = nft_match_select_ops,
++ .release_ops = nft_match_release_ops,
+ .policy = nft_match_policy,
+ .maxattr = NFTA_MATCH_MAX,
+ .owner = THIS_MODULE,
+ };
+
+-static bool nft_target_cmp(const struct xt_target *tg,
+- const char *name, u32 rev, u32 family)
+-{
+- return strcmp(tg->name, name) == 0 && tg->revision == rev &&
+- (tg->family == NFPROTO_UNSPEC || tg->family == family);
+-}
++static struct nft_expr_type nft_target_type;
+
+ static const struct nft_expr_ops *
+ nft_target_select_ops(const struct nft_ctx *ctx,
+ const struct nlattr * const tb[])
+ {
+- struct nft_compat_net *cn;
+- struct nft_xt *nft_target;
++ struct nft_expr_ops *ops;
+ struct xt_target *target;
+ char *tg_name;
+ u32 rev, family;
+@@ -905,18 +807,6 @@ nft_target_select_ops(const struct nft_ctx *ctx,
+ strcmp(tg_name, "standard") == 0)
+ return ERR_PTR(-EINVAL);
+
+- cn = nft_compat_pernet(ctx->net);
+- /* Re-use the existing target if it's already loaded. */
+- list_for_each_entry(nft_target, &cn->nft_target_list, head) {
+- struct xt_target *target = nft_target->ops.data;
+-
+- if (!target->target)
+- continue;
+-
+- if (nft_target_cmp(target, tg_name, rev, family))
+- return &nft_target->ops;
+- }
+-
+ target = xt_request_find_target(family, tg_name, rev);
+ if (IS_ERR(target))
+ return ERR_PTR(-ENOENT);
+@@ -931,113 +821,55 @@ nft_target_select_ops(const struct nft_ctx *ctx,
+ goto err;
+ }
+
+- /* This is the first time we use this target, allocate operations */
+- nft_target = kzalloc(sizeof(struct nft_xt), GFP_KERNEL);
+- if (nft_target == NULL) {
++ ops = kzalloc(sizeof(struct nft_expr_ops), GFP_KERNEL);
++ if (!ops) {
+ err = -ENOMEM;
+ goto err;
+ }
+
+- refcount_set(&nft_target->refcnt, 0);
+- nft_target->ops.type = &nft_target_type;
+- nft_target->ops.size = NFT_EXPR_SIZE(XT_ALIGN(target->targetsize));
+- nft_target->ops.init = nft_target_init;
+- nft_target->ops.destroy = nft_target_destroy;
+- nft_target->ops.deactivate = nft_compat_deactivate;
+- nft_target->ops.dump = nft_target_dump;
+- nft_target->ops.validate = nft_target_validate;
+- nft_target->ops.data = target;
++ ops->type = &nft_target_type;
++ ops->size = NFT_EXPR_SIZE(XT_ALIGN(target->targetsize));
++ ops->init = nft_target_init;
++ ops->destroy = nft_target_destroy;
++ ops->dump = nft_target_dump;
++ ops->validate = nft_target_validate;
++ ops->data = target;
+
+ if (family == NFPROTO_BRIDGE)
+- nft_target->ops.eval = nft_target_eval_bridge;
++ ops->eval = nft_target_eval_bridge;
+ else
+- nft_target->ops.eval = nft_target_eval_xt;
+-
+- nft_target->listcnt = 0;
+- list_add(&nft_target->head, &cn->nft_target_list);
++ ops->eval = nft_target_eval_xt;
+
+- return &nft_target->ops;
++ return ops;
+ err:
+ module_put(target->me);
+ return ERR_PTR(err);
+ }
+
++static void nft_target_release_ops(const struct nft_expr_ops *ops)
++{
++ struct xt_target *target = ops->data;
++
++ module_put(target->me);
++ kfree(ops);
++}
++
+ static struct nft_expr_type nft_target_type __read_mostly = {
+ .name = "target",
+ .select_ops = nft_target_select_ops,
++ .release_ops = nft_target_release_ops,
+ .policy = nft_target_policy,
+ .maxattr = NFTA_TARGET_MAX,
+ .owner = THIS_MODULE,
+ };
+
+-static int __net_init nft_compat_init_net(struct net *net)
+-{
+- struct nft_compat_net *cn = nft_compat_pernet(net);
+-
+- INIT_LIST_HEAD(&cn->nft_target_list);
+- INIT_LIST_HEAD(&cn->nft_match_list);
+-
+- return 0;
+-}
+-
+-static void __net_exit nft_compat_exit_net(struct net *net)
+-{
+- struct nft_compat_net *cn = nft_compat_pernet(net);
+- struct nft_xt *xt, *next;
+-
+- if (list_empty(&cn->nft_match_list) &&
+- list_empty(&cn->nft_target_list))
+- return;
+-
+- /* If there was an error that caused nft_xt expr to not be initialized
+- * fully and noone else requested the same expression later, the lists
+- * contain 0-refcount entries that still hold module reference.
+- *
+- * Clean them here.
+- */
+- mutex_lock(&net->nft.commit_mutex);
+- list_for_each_entry_safe(xt, next, &cn->nft_target_list, head) {
+- struct xt_target *target = xt->ops.data;
+-
+- list_del_init(&xt->head);
+-
+- if (refcount_read(&xt->refcnt))
+- continue;
+- module_put(target->me);
+- kfree(xt);
+- }
+-
+- list_for_each_entry_safe(xt, next, &cn->nft_match_list, head) {
+- struct xt_match *match = xt->ops.data;
+-
+- list_del_init(&xt->head);
+-
+- if (refcount_read(&xt->refcnt))
+- continue;
+- module_put(match->me);
+- kfree(xt);
+- }
+- mutex_unlock(&net->nft.commit_mutex);
+-}
+-
+-static struct pernet_operations nft_compat_net_ops = {
+- .init = nft_compat_init_net,
+- .exit = nft_compat_exit_net,
+- .id = &nft_compat_net_id,
+- .size = sizeof(struct nft_compat_net),
+-};
+-
+ static int __init nft_compat_module_init(void)
+ {
+ int ret;
+
+- ret = register_pernet_subsys(&nft_compat_net_ops);
+- if (ret < 0)
+- goto err_target;
+-
+ ret = nft_register_expr(&nft_match_type);
+ if (ret < 0)
+- goto err_pernet;
++ return ret;
+
+ ret = nft_register_expr(&nft_target_type);
+ if (ret < 0)
+@@ -1054,8 +886,6 @@ err_target:
+ nft_unregister_expr(&nft_target_type);
+ err_match:
+ nft_unregister_expr(&nft_match_type);
+-err_pernet:
+- unregister_pernet_subsys(&nft_compat_net_ops);
+ return ret;
+ }
+
+@@ -1064,7 +894,6 @@ static void __exit nft_compat_module_exit(void)
+ nfnetlink_subsys_unregister(&nfnl_compat_subsys);
+ nft_unregister_expr(&nft_target_type);
+ nft_unregister_expr(&nft_match_type);
+- unregister_pernet_subsys(&nft_compat_net_ops);
+ }
+
+ MODULE_ALIAS_NFNL_SUBSYS(NFNL_SUBSYS_NFT_COMPAT);
+diff --git a/net/netfilter/xt_physdev.c b/net/netfilter/xt_physdev.c
+index 4034d70bff39..b2e39cb6a590 100644
+--- a/net/netfilter/xt_physdev.c
++++ b/net/netfilter/xt_physdev.c
+@@ -96,8 +96,7 @@ match_outdev:
+ static int physdev_mt_check(const struct xt_mtchk_param *par)
+ {
+ const struct xt_physdev_info *info = par->matchinfo;
+-
+- br_netfilter_enable();
++ static bool brnf_probed __read_mostly;
+
+ if (!(info->bitmask & XT_PHYSDEV_OP_MASK) ||
+ info->bitmask & ~XT_PHYSDEV_OP_MASK)
+@@ -111,6 +110,12 @@ static int physdev_mt_check(const struct xt_mtchk_param *par)
+ if (par->hook_mask & (1 << NF_INET_LOCAL_OUT))
+ return -EINVAL;
+ }
++
++ if (!brnf_probed) {
++ brnf_probed = true;
++ request_module("br_netfilter");
++ }
++
+ return 0;
+ }
+
+diff --git a/net/netlink/genetlink.c b/net/netlink/genetlink.c
+index 25eeb6d2a75a..f0ec068e1d02 100644
+--- a/net/netlink/genetlink.c
++++ b/net/netlink/genetlink.c
+@@ -366,7 +366,7 @@ int genl_register_family(struct genl_family *family)
+ start, end + 1, GFP_KERNEL);
+ if (family->id < 0) {
+ err = family->id;
+- goto errout_locked;
++ goto errout_free;
+ }
+
+ err = genl_validate_assign_mc_groups(family);
+@@ -385,6 +385,7 @@ int genl_register_family(struct genl_family *family)
+
+ errout_remove:
+ idr_remove(&genl_fam_idr, family->id);
++errout_free:
+ kfree(family->attrbuf);
+ errout_locked:
+ genl_unlock_all();
+diff --git a/net/openvswitch/flow_netlink.c b/net/openvswitch/flow_netlink.c
+index 691da853bef5..4bdf5e3ac208 100644
+--- a/net/openvswitch/flow_netlink.c
++++ b/net/openvswitch/flow_netlink.c
+@@ -2306,14 +2306,14 @@ static struct nlattr *reserve_sfa_size(struct sw_flow_actions **sfa,
+
+ struct sw_flow_actions *acts;
+ int new_acts_size;
+- int req_size = NLA_ALIGN(attr_len);
++ size_t req_size = NLA_ALIGN(attr_len);
+ int next_offset = offsetof(struct sw_flow_actions, actions) +
+ (*sfa)->actions_len;
+
+ if (req_size <= (ksize(*sfa) - next_offset))
+ goto out;
+
+- new_acts_size = ksize(*sfa) * 2;
++ new_acts_size = max(next_offset + req_size, ksize(*sfa) * 2);
+
+ if (new_acts_size > MAX_ACTIONS_BUFSIZE) {
+ if ((MAX_ACTIONS_BUFSIZE - next_offset) < req_size) {
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 1cd1d83a4be0..8406bf11eef4 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -3245,7 +3245,7 @@ static int packet_create(struct net *net, struct socket *sock, int protocol,
+ }
+
+ mutex_lock(&net->packet.sklist_lock);
+- sk_add_node_rcu(sk, &net->packet.sklist);
++ sk_add_node_tail_rcu(sk, &net->packet.sklist);
+ mutex_unlock(&net->packet.sklist_lock);
+
+ preempt_disable();
+@@ -4211,7 +4211,7 @@ static struct pgv *alloc_pg_vec(struct tpacket_req *req, int order)
+ struct pgv *pg_vec;
+ int i;
+
+- pg_vec = kcalloc(block_nr, sizeof(struct pgv), GFP_KERNEL);
++ pg_vec = kcalloc(block_nr, sizeof(struct pgv), GFP_KERNEL | __GFP_NOWARN);
+ if (unlikely(!pg_vec))
+ goto out;
+
+diff --git a/net/rds/tcp.c b/net/rds/tcp.c
+index c16f0a362c32..a729c47db781 100644
+--- a/net/rds/tcp.c
++++ b/net/rds/tcp.c
+@@ -600,7 +600,7 @@ static void rds_tcp_kill_sock(struct net *net)
+ list_for_each_entry_safe(tc, _tc, &rds_tcp_conn_list, t_tcp_node) {
+ struct net *c_net = read_pnet(&tc->t_cpath->cp_conn->c_net);
+
+- if (net != c_net || !tc->t_sock)
++ if (net != c_net)
+ continue;
+ if (!list_has_conn(&tmp_list, tc->t_cpath->cp_conn)) {
+ list_move_tail(&tc->t_tcp_node, &tmp_list);
+diff --git a/net/rose/rose_subr.c b/net/rose/rose_subr.c
+index 7ca57741b2fb..7849f286bb93 100644
+--- a/net/rose/rose_subr.c
++++ b/net/rose/rose_subr.c
+@@ -105,16 +105,17 @@ void rose_write_internal(struct sock *sk, int frametype)
+ struct sk_buff *skb;
+ unsigned char *dptr;
+ unsigned char lci1, lci2;
+- char buffer[100];
+- int len, faclen = 0;
++ int maxfaclen = 0;
++ int len, faclen;
++ int reserve;
+
+- len = AX25_BPQ_HEADER_LEN + AX25_MAX_HEADER_LEN + ROSE_MIN_LEN + 1;
++ reserve = AX25_BPQ_HEADER_LEN + AX25_MAX_HEADER_LEN + 1;
++ len = ROSE_MIN_LEN;
+
+ switch (frametype) {
+ case ROSE_CALL_REQUEST:
+ len += 1 + ROSE_ADDR_LEN + ROSE_ADDR_LEN;
+- faclen = rose_create_facilities(buffer, rose);
+- len += faclen;
++ maxfaclen = 256;
+ break;
+ case ROSE_CALL_ACCEPTED:
+ case ROSE_CLEAR_REQUEST:
+@@ -123,15 +124,16 @@ void rose_write_internal(struct sock *sk, int frametype)
+ break;
+ }
+
+- if ((skb = alloc_skb(len, GFP_ATOMIC)) == NULL)
++ skb = alloc_skb(reserve + len + maxfaclen, GFP_ATOMIC);
++ if (!skb)
+ return;
+
+ /*
+ * Space for AX.25 header and PID.
+ */
+- skb_reserve(skb, AX25_BPQ_HEADER_LEN + AX25_MAX_HEADER_LEN + 1);
++ skb_reserve(skb, reserve);
+
+- dptr = skb_put(skb, skb_tailroom(skb));
++ dptr = skb_put(skb, len);
+
+ lci1 = (rose->lci >> 8) & 0x0F;
+ lci2 = (rose->lci >> 0) & 0xFF;
+@@ -146,7 +148,8 @@ void rose_write_internal(struct sock *sk, int frametype)
+ dptr += ROSE_ADDR_LEN;
+ memcpy(dptr, &rose->source_addr, ROSE_ADDR_LEN);
+ dptr += ROSE_ADDR_LEN;
+- memcpy(dptr, buffer, faclen);
++ faclen = rose_create_facilities(dptr, rose);
++ skb_put(skb, faclen);
+ dptr += faclen;
+ break;
+
+diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c
+index b2adfa825363..5cf6d9f4761d 100644
+--- a/net/rxrpc/conn_client.c
++++ b/net/rxrpc/conn_client.c
+@@ -353,7 +353,7 @@ static int rxrpc_get_client_conn(struct rxrpc_sock *rx,
+ * normally have to take channel_lock but we do this before anyone else
+ * can see the connection.
+ */
+- list_add_tail(&call->chan_wait_link, &candidate->waiting_calls);
++ list_add(&call->chan_wait_link, &candidate->waiting_calls);
+
+ if (cp->exclusive) {
+ call->conn = candidate;
+@@ -432,7 +432,7 @@ found_extant_conn:
+ call->conn = conn;
+ call->security_ix = conn->security_ix;
+ call->service_id = conn->service_id;
+- list_add(&call->chan_wait_link, &conn->waiting_calls);
++ list_add_tail(&call->chan_wait_link, &conn->waiting_calls);
+ spin_unlock(&conn->channel_lock);
+ _leave(" = 0 [extant %d]", conn->debug_id);
+ return 0;
+diff --git a/net/sched/act_sample.c b/net/sched/act_sample.c
+index 1a0c682fd734..fd62fe6c8e73 100644
+--- a/net/sched/act_sample.c
++++ b/net/sched/act_sample.c
+@@ -43,8 +43,8 @@ static int tcf_sample_init(struct net *net, struct nlattr *nla,
+ struct tc_action_net *tn = net_generic(net, sample_net_id);
+ struct nlattr *tb[TCA_SAMPLE_MAX + 1];
+ struct psample_group *psample_group;
++ u32 psample_group_num, rate;
+ struct tc_sample *parm;
+- u32 psample_group_num;
+ struct tcf_sample *s;
+ bool exists = false;
+ int ret, err;
+@@ -80,6 +80,12 @@ static int tcf_sample_init(struct net *net, struct nlattr *nla,
+ return -EEXIST;
+ }
+
++ rate = nla_get_u32(tb[TCA_SAMPLE_RATE]);
++ if (!rate) {
++ NL_SET_ERR_MSG(extack, "invalid sample rate");
++ tcf_idr_release(*a, bind);
++ return -EINVAL;
++ }
+ psample_group_num = nla_get_u32(tb[TCA_SAMPLE_PSAMPLE_GROUP]);
+ psample_group = psample_group_get(net, psample_group_num);
+ if (!psample_group) {
+@@ -91,7 +97,7 @@ static int tcf_sample_init(struct net *net, struct nlattr *nla,
+
+ spin_lock_bh(&s->tcf_lock);
+ s->tcf_action = parm->action;
+- s->rate = nla_get_u32(tb[TCA_SAMPLE_RATE]);
++ s->rate = rate;
+ s->psample_group_num = psample_group_num;
+ RCU_INIT_POINTER(s->psample_group, psample_group);
+
+diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
+index 12ca9d13db83..bf67ae5ac1c3 100644
+--- a/net/sched/cls_flower.c
++++ b/net/sched/cls_flower.c
+@@ -1327,46 +1327,46 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
+ if (err < 0)
+ goto errout;
+
+- if (!handle) {
+- handle = 1;
+- err = idr_alloc_u32(&head->handle_idr, fnew, &handle,
+- INT_MAX, GFP_KERNEL);
+- } else if (!fold) {
+- /* user specifies a handle and it doesn't exist */
+- err = idr_alloc_u32(&head->handle_idr, fnew, &handle,
+- handle, GFP_KERNEL);
+- }
+- if (err)
+- goto errout;
+- fnew->handle = handle;
+-
+ if (tb[TCA_FLOWER_FLAGS]) {
+ fnew->flags = nla_get_u32(tb[TCA_FLOWER_FLAGS]);
+
+ if (!tc_flags_valid(fnew->flags)) {
+ err = -EINVAL;
+- goto errout_idr;
++ goto errout;
+ }
+ }
+
+ err = fl_set_parms(net, tp, fnew, mask, base, tb, tca[TCA_RATE], ovr,
+ tp->chain->tmplt_priv, extack);
+ if (err)
+- goto errout_idr;
++ goto errout;
+
+ err = fl_check_assign_mask(head, fnew, fold, mask);
+ if (err)
+- goto errout_idr;
++ goto errout;
++
++ if (!handle) {
++ handle = 1;
++ err = idr_alloc_u32(&head->handle_idr, fnew, &handle,
++ INT_MAX, GFP_KERNEL);
++ } else if (!fold) {
++ /* user specifies a handle and it doesn't exist */
++ err = idr_alloc_u32(&head->handle_idr, fnew, &handle,
++ handle, GFP_KERNEL);
++ }
++ if (err)
++ goto errout_mask;
++ fnew->handle = handle;
+
+ if (!fold && __fl_lookup(fnew->mask, &fnew->mkey)) {
+ err = -EEXIST;
+- goto errout_mask;
++ goto errout_idr;
+ }
+
+ err = rhashtable_insert_fast(&fnew->mask->ht, &fnew->ht_node,
+ fnew->mask->filter_ht_params);
+ if (err)
+- goto errout_mask;
++ goto errout_idr;
+
+ if (!tc_skip_hw(fnew->flags)) {
+ err = fl_hw_replace_filter(tp, fnew, extack);
+@@ -1405,12 +1405,13 @@ errout_mask_ht:
+ rhashtable_remove_fast(&fnew->mask->ht, &fnew->ht_node,
+ fnew->mask->filter_ht_params);
+
+-errout_mask:
+- fl_mask_put(head, fnew->mask, false);
+-
+ errout_idr:
+ if (!fold)
+ idr_remove(&head->handle_idr, fnew->handle);
++
++errout_mask:
++ fl_mask_put(head, fnew->mask, false);
++
+ errout:
+ tcf_exts_destroy(&fnew->exts);
+ kfree(fnew);
+diff --git a/net/sched/cls_matchall.c b/net/sched/cls_matchall.c
+index 0e408ee9dcec..5ba07cd11e31 100644
+--- a/net/sched/cls_matchall.c
++++ b/net/sched/cls_matchall.c
+@@ -125,6 +125,11 @@ static void mall_destroy(struct tcf_proto *tp, struct netlink_ext_ack *extack)
+
+ static void *mall_get(struct tcf_proto *tp, u32 handle)
+ {
++ struct cls_mall_head *head = rtnl_dereference(tp->root);
++
++ if (head && head->handle == handle)
++ return head;
++
+ return NULL;
+ }
+
+diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
+index 968a85fe4d4a..de31f2f3b973 100644
+--- a/net/sched/sch_generic.c
++++ b/net/sched/sch_generic.c
+@@ -68,7 +68,7 @@ static inline struct sk_buff *__skb_dequeue_bad_txq(struct Qdisc *q)
+ skb = __skb_dequeue(&q->skb_bad_txq);
+ if (qdisc_is_percpu_stats(q)) {
+ qdisc_qstats_cpu_backlog_dec(q, skb);
+- qdisc_qstats_cpu_qlen_dec(q);
++ qdisc_qstats_atomic_qlen_dec(q);
+ } else {
+ qdisc_qstats_backlog_dec(q, skb);
+ q->q.qlen--;
+@@ -108,7 +108,7 @@ static inline void qdisc_enqueue_skb_bad_txq(struct Qdisc *q,
+
+ if (qdisc_is_percpu_stats(q)) {
+ qdisc_qstats_cpu_backlog_inc(q, skb);
+- qdisc_qstats_cpu_qlen_inc(q);
++ qdisc_qstats_atomic_qlen_inc(q);
+ } else {
+ qdisc_qstats_backlog_inc(q, skb);
+ q->q.qlen++;
+@@ -147,7 +147,7 @@ static inline int dev_requeue_skb_locked(struct sk_buff *skb, struct Qdisc *q)
+
+ qdisc_qstats_cpu_requeues_inc(q);
+ qdisc_qstats_cpu_backlog_inc(q, skb);
+- qdisc_qstats_cpu_qlen_inc(q);
++ qdisc_qstats_atomic_qlen_inc(q);
+
+ skb = next;
+ }
+@@ -252,7 +252,7 @@ static struct sk_buff *dequeue_skb(struct Qdisc *q, bool *validate,
+ skb = __skb_dequeue(&q->gso_skb);
+ if (qdisc_is_percpu_stats(q)) {
+ qdisc_qstats_cpu_backlog_dec(q, skb);
+- qdisc_qstats_cpu_qlen_dec(q);
++ qdisc_qstats_atomic_qlen_dec(q);
+ } else {
+ qdisc_qstats_backlog_dec(q, skb);
+ q->q.qlen--;
+@@ -645,7 +645,7 @@ static int pfifo_fast_enqueue(struct sk_buff *skb, struct Qdisc *qdisc,
+ if (unlikely(err))
+ return qdisc_drop_cpu(skb, qdisc, to_free);
+
+- qdisc_qstats_cpu_qlen_inc(qdisc);
++ qdisc_qstats_atomic_qlen_inc(qdisc);
+ /* Note: skb can not be used after skb_array_produce(),
+ * so we better not use qdisc_qstats_cpu_backlog_inc()
+ */
+@@ -670,7 +670,7 @@ static struct sk_buff *pfifo_fast_dequeue(struct Qdisc *qdisc)
+ if (likely(skb)) {
+ qdisc_qstats_cpu_backlog_dec(qdisc, skb);
+ qdisc_bstats_cpu_update(qdisc, skb);
+- qdisc_qstats_cpu_qlen_dec(qdisc);
++ qdisc_qstats_atomic_qlen_dec(qdisc);
+ }
+
+ return skb;
+@@ -714,7 +714,6 @@ static void pfifo_fast_reset(struct Qdisc *qdisc)
+ struct gnet_stats_queue *q = per_cpu_ptr(qdisc->cpu_qstats, i);
+
+ q->backlog = 0;
+- q->qlen = 0;
+ }
+ }
+
+diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c
+index 6abc8b274270..951afdeea5e9 100644
+--- a/net/sctp/protocol.c
++++ b/net/sctp/protocol.c
+@@ -600,6 +600,7 @@ out:
+ static int sctp_v4_addr_to_user(struct sctp_sock *sp, union sctp_addr *addr)
+ {
+ /* No address mapping for V4 sockets */
++ memset(addr->v4.sin_zero, 0, sizeof(addr->v4.sin_zero));
+ return sizeof(struct sockaddr_in);
+ }
+
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 65d6d04546ae..5f68420b4b0d 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -999,7 +999,7 @@ static int sctp_setsockopt_bindx(struct sock *sk,
+ if (unlikely(addrs_size <= 0))
+ return -EINVAL;
+
+- kaddrs = vmemdup_user(addrs, addrs_size);
++ kaddrs = memdup_user(addrs, addrs_size);
+ if (unlikely(IS_ERR(kaddrs)))
+ return PTR_ERR(kaddrs);
+
+@@ -1007,7 +1007,7 @@ static int sctp_setsockopt_bindx(struct sock *sk,
+ addr_buf = kaddrs;
+ while (walk_size < addrs_size) {
+ if (walk_size + sizeof(sa_family_t) > addrs_size) {
+- kvfree(kaddrs);
++ kfree(kaddrs);
+ return -EINVAL;
+ }
+
+@@ -1018,7 +1018,7 @@ static int sctp_setsockopt_bindx(struct sock *sk,
+ * causes the address buffer to overflow return EINVAL.
+ */
+ if (!af || (walk_size + af->sockaddr_len) > addrs_size) {
+- kvfree(kaddrs);
++ kfree(kaddrs);
+ return -EINVAL;
+ }
+ addrcnt++;
+@@ -1054,7 +1054,7 @@ static int sctp_setsockopt_bindx(struct sock *sk,
+ }
+
+ out:
+- kvfree(kaddrs);
++ kfree(kaddrs);
+
+ return err;
+ }
+@@ -1329,7 +1329,7 @@ static int __sctp_setsockopt_connectx(struct sock *sk,
+ if (unlikely(addrs_size <= 0))
+ return -EINVAL;
+
+- kaddrs = vmemdup_user(addrs, addrs_size);
++ kaddrs = memdup_user(addrs, addrs_size);
+ if (unlikely(IS_ERR(kaddrs)))
+ return PTR_ERR(kaddrs);
+
+@@ -1349,7 +1349,7 @@ static int __sctp_setsockopt_connectx(struct sock *sk,
+ err = __sctp_connect(sk, kaddrs, addrs_size, flags, assoc_id);
+
+ out_free:
+- kvfree(kaddrs);
++ kfree(kaddrs);
+
+ return err;
+ }
+@@ -1866,6 +1866,7 @@ static int sctp_sendmsg_check_sflags(struct sctp_association *asoc,
+
+ pr_debug("%s: aborting association:%p\n", __func__, asoc);
+ sctp_primitive_ABORT(net, asoc, chunk);
++ iov_iter_revert(&msg->msg_iter, msg_len);
+
+ return 0;
+ }
+diff --git a/net/sctp/stream.c b/net/sctp/stream.c
+index 2936ed17bf9e..3b47457862cc 100644
+--- a/net/sctp/stream.c
++++ b/net/sctp/stream.c
+@@ -230,8 +230,6 @@ int sctp_stream_init(struct sctp_stream *stream, __u16 outcnt, __u16 incnt,
+ for (i = 0; i < stream->outcnt; i++)
+ SCTP_SO(stream, i)->state = SCTP_STREAM_OPEN;
+
+- sched->init(stream);
+-
+ in:
+ sctp_stream_interleave_init(stream);
+ if (!incnt)
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index d7ec6132c046..d455537c8fc6 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -66,9 +66,6 @@ static void call_decode(struct rpc_task *task);
+ static void call_bind(struct rpc_task *task);
+ static void call_bind_status(struct rpc_task *task);
+ static void call_transmit(struct rpc_task *task);
+-#if defined(CONFIG_SUNRPC_BACKCHANNEL)
+-static void call_bc_transmit(struct rpc_task *task);
+-#endif /* CONFIG_SUNRPC_BACKCHANNEL */
+ static void call_status(struct rpc_task *task);
+ static void call_transmit_status(struct rpc_task *task);
+ static void call_refresh(struct rpc_task *task);
+@@ -80,6 +77,7 @@ static void call_connect_status(struct rpc_task *task);
+ static __be32 *rpc_encode_header(struct rpc_task *task);
+ static __be32 *rpc_verify_header(struct rpc_task *task);
+ static int rpc_ping(struct rpc_clnt *clnt);
++static void rpc_check_timeout(struct rpc_task *task);
+
+ static void rpc_register_client(struct rpc_clnt *clnt)
+ {
+@@ -1131,6 +1129,8 @@ rpc_call_async(struct rpc_clnt *clnt, const struct rpc_message *msg, int flags,
+ EXPORT_SYMBOL_GPL(rpc_call_async);
+
+ #if defined(CONFIG_SUNRPC_BACKCHANNEL)
++static void call_bc_encode(struct rpc_task *task);
++
+ /**
+ * rpc_run_bc_task - Allocate a new RPC task for backchannel use, then run
+ * rpc_execute against it
+@@ -1152,7 +1152,7 @@ struct rpc_task *rpc_run_bc_task(struct rpc_rqst *req)
+ task = rpc_new_task(&task_setup_data);
+ xprt_init_bc_request(req, task);
+
+- task->tk_action = call_bc_transmit;
++ task->tk_action = call_bc_encode;
+ atomic_inc(&task->tk_count);
+ WARN_ON_ONCE(atomic_read(&task->tk_count) != 2);
+ rpc_execute(task);
+@@ -1786,7 +1786,12 @@ call_encode(struct rpc_task *task)
+ xprt_request_enqueue_receive(task);
+ xprt_request_enqueue_transmit(task);
+ out:
+- task->tk_action = call_bind;
++ task->tk_action = call_transmit;
++ /* Check that the connection is OK */
++ if (!xprt_bound(task->tk_xprt))
++ task->tk_action = call_bind;
++ else if (!xprt_connected(task->tk_xprt))
++ task->tk_action = call_connect;
+ }
+
+ /*
+@@ -1937,8 +1942,7 @@ call_connect_status(struct rpc_task *task)
+ break;
+ if (clnt->cl_autobind) {
+ rpc_force_rebind(clnt);
+- task->tk_action = call_bind;
+- return;
++ goto out_retry;
+ }
+ /* fall through */
+ case -ECONNRESET:
+@@ -1958,16 +1962,19 @@ call_connect_status(struct rpc_task *task)
+ /* fall through */
+ case -ENOTCONN:
+ case -EAGAIN:
+- /* Check for timeouts before looping back to call_bind */
+ case -ETIMEDOUT:
+- task->tk_action = call_timeout;
+- return;
++ goto out_retry;
+ case 0:
+ clnt->cl_stats->netreconn++;
+ task->tk_action = call_transmit;
+ return;
+ }
+ rpc_exit(task, status);
++ return;
++out_retry:
++ /* Check for timeouts before looping back to call_bind */
++ task->tk_action = call_bind;
++ rpc_check_timeout(task);
+ }
+
+ /*
+@@ -1978,13 +1985,19 @@ call_transmit(struct rpc_task *task)
+ {
+ dprint_status(task);
+
+- task->tk_status = 0;
++ task->tk_action = call_transmit_status;
+ if (test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate)) {
+ if (!xprt_prepare_transmit(task))
+ return;
+- xprt_transmit(task);
++ task->tk_status = 0;
++ if (test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate)) {
++ if (!xprt_connected(task->tk_xprt)) {
++ task->tk_status = -ENOTCONN;
++ return;
++ }
++ xprt_transmit(task);
++ }
+ }
+- task->tk_action = call_transmit_status;
+ xprt_end_transmit(task);
+ }
+
+@@ -2038,7 +2051,7 @@ call_transmit_status(struct rpc_task *task)
+ trace_xprt_ping(task->tk_xprt,
+ task->tk_status);
+ rpc_exit(task, task->tk_status);
+- break;
++ return;
+ }
+ /* fall through */
+ case -ECONNRESET:
+@@ -2046,11 +2059,24 @@ call_transmit_status(struct rpc_task *task)
+ case -EADDRINUSE:
+ case -ENOTCONN:
+ case -EPIPE:
++ task->tk_action = call_bind;
++ task->tk_status = 0;
+ break;
+ }
++ rpc_check_timeout(task);
+ }
+
+ #if defined(CONFIG_SUNRPC_BACKCHANNEL)
++static void call_bc_transmit(struct rpc_task *task);
++static void call_bc_transmit_status(struct rpc_task *task);
++
++static void
++call_bc_encode(struct rpc_task *task)
++{
++ xprt_request_enqueue_transmit(task);
++ task->tk_action = call_bc_transmit;
++}
++
+ /*
+ * 5b. Send the backchannel RPC reply. On error, drop the reply. In
+ * addition, disconnect on connectivity errors.
+@@ -2058,26 +2084,23 @@ call_transmit_status(struct rpc_task *task)
+ static void
+ call_bc_transmit(struct rpc_task *task)
+ {
+- struct rpc_rqst *req = task->tk_rqstp;
+-
+- if (rpc_task_need_encode(task))
+- xprt_request_enqueue_transmit(task);
+- if (!test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate))
+- goto out_wakeup;
+-
+- if (!xprt_prepare_transmit(task))
+- goto out_retry;
+-
+- if (task->tk_status < 0) {
+- printk(KERN_NOTICE "RPC: Could not send backchannel reply "
+- "error: %d\n", task->tk_status);
+- goto out_done;
++ task->tk_action = call_bc_transmit_status;
++ if (test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate)) {
++ if (!xprt_prepare_transmit(task))
++ return;
++ task->tk_status = 0;
++ xprt_transmit(task);
+ }
++ xprt_end_transmit(task);
++}
+
+- xprt_transmit(task);
++static void
++call_bc_transmit_status(struct rpc_task *task)
++{
++ struct rpc_rqst *req = task->tk_rqstp;
+
+- xprt_end_transmit(task);
+ dprint_status(task);
++
+ switch (task->tk_status) {
+ case 0:
+ /* Success */
+@@ -2091,8 +2114,14 @@ call_bc_transmit(struct rpc_task *task)
+ case -ENOTCONN:
+ case -EPIPE:
+ break;
++ case -ENOBUFS:
++ rpc_delay(task, HZ>>2);
++ /* fall through */
++ case -EBADSLT:
+ case -EAGAIN:
+- goto out_retry;
++ task->tk_status = 0;
++ task->tk_action = call_bc_transmit;
++ return;
+ case -ETIMEDOUT:
+ /*
+ * Problem reaching the server. Disconnect and let the
+@@ -2111,18 +2140,11 @@ call_bc_transmit(struct rpc_task *task)
+ * We were unable to reply and will have to drop the
+ * request. The server should reconnect and retransmit.
+ */
+- WARN_ON_ONCE(task->tk_status == -EAGAIN);
+ printk(KERN_NOTICE "RPC: Could not send backchannel reply "
+ "error: %d\n", task->tk_status);
+ break;
+ }
+-out_wakeup:
+- rpc_wake_up_queued_task(&req->rq_xprt->pending, task);
+-out_done:
+ task->tk_action = rpc_exit_task;
+- return;
+-out_retry:
+- task->tk_status = 0;
+ }
+ #endif /* CONFIG_SUNRPC_BACKCHANNEL */
+
+@@ -2178,7 +2200,7 @@ call_status(struct rpc_task *task)
+ case -EPIPE:
+ case -ENOTCONN:
+ case -EAGAIN:
+- task->tk_action = call_encode;
++ task->tk_action = call_timeout;
+ break;
+ case -EIO:
+ /* shutdown or soft timeout */
+@@ -2192,20 +2214,13 @@ call_status(struct rpc_task *task)
+ }
+ }
+
+-/*
+- * 6a. Handle RPC timeout
+- * We do not release the request slot, so we keep using the
+- * same XID for all retransmits.
+- */
+ static void
+-call_timeout(struct rpc_task *task)
++rpc_check_timeout(struct rpc_task *task)
+ {
+ struct rpc_clnt *clnt = task->tk_client;
+
+- if (xprt_adjust_timeout(task->tk_rqstp) == 0) {
+- dprintk("RPC: %5u call_timeout (minor)\n", task->tk_pid);
+- goto retry;
+- }
++ if (xprt_adjust_timeout(task->tk_rqstp) == 0)
++ return;
+
+ dprintk("RPC: %5u call_timeout (major)\n", task->tk_pid);
+ task->tk_timeouts++;
+@@ -2241,10 +2256,19 @@ call_timeout(struct rpc_task *task)
+ * event? RFC2203 requires the server to drop all such requests.
+ */
+ rpcauth_invalcred(task);
++}
+
+-retry:
++/*
++ * 6a. Handle RPC timeout
++ * We do not release the request slot, so we keep using the
++ * same XID for all retransmits.
++ */
++static void
++call_timeout(struct rpc_task *task)
++{
+ task->tk_action = call_encode;
+ task->tk_status = 0;
++ rpc_check_timeout(task);
+ }
+
+ /*
+diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
+index a6a060925e5d..43590a968b73 100644
+--- a/net/sunrpc/svcsock.c
++++ b/net/sunrpc/svcsock.c
+@@ -349,12 +349,16 @@ static ssize_t svc_recvfrom(struct svc_rqst *rqstp, struct kvec *iov,
+ /*
+ * Set socket snd and rcv buffer lengths
+ */
+-static void svc_sock_setbufsize(struct socket *sock, unsigned int snd,
+- unsigned int rcv)
++static void svc_sock_setbufsize(struct svc_sock *svsk, unsigned int nreqs)
+ {
++ unsigned int max_mesg = svsk->sk_xprt.xpt_server->sv_max_mesg;
++ struct socket *sock = svsk->sk_sock;
++
++ nreqs = min(nreqs, INT_MAX / 2 / max_mesg);
++
+ lock_sock(sock->sk);
+- sock->sk->sk_sndbuf = snd * 2;
+- sock->sk->sk_rcvbuf = rcv * 2;
++ sock->sk->sk_sndbuf = nreqs * max_mesg * 2;
++ sock->sk->sk_rcvbuf = nreqs * max_mesg * 2;
+ sock->sk->sk_write_space(sock->sk);
+ release_sock(sock->sk);
+ }
+@@ -516,9 +520,7 @@ static int svc_udp_recvfrom(struct svc_rqst *rqstp)
+ * provides an upper bound on the number of threads
+ * which will access the socket.
+ */
+- svc_sock_setbufsize(svsk->sk_sock,
+- (serv->sv_nrthreads+3) * serv->sv_max_mesg,
+- (serv->sv_nrthreads+3) * serv->sv_max_mesg);
++ svc_sock_setbufsize(svsk, serv->sv_nrthreads + 3);
+
+ clear_bit(XPT_DATA, &svsk->sk_xprt.xpt_flags);
+ skb = NULL;
+@@ -681,9 +683,7 @@ static void svc_udp_init(struct svc_sock *svsk, struct svc_serv *serv)
+ * receive and respond to one request.
+ * svc_udp_recvfrom will re-adjust if necessary
+ */
+- svc_sock_setbufsize(svsk->sk_sock,
+- 3 * svsk->sk_xprt.xpt_server->sv_max_mesg,
+- 3 * svsk->sk_xprt.xpt_server->sv_max_mesg);
++ svc_sock_setbufsize(svsk, 3);
+
+ /* data might have come in before data_ready set up */
+ set_bit(XPT_DATA, &svsk->sk_xprt.xpt_flags);
+diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
+index 21113bfd4eca..a5ae9c036b9c 100644
+--- a/net/sunrpc/xprtrdma/verbs.c
++++ b/net/sunrpc/xprtrdma/verbs.c
+@@ -90,7 +90,7 @@ static void rpcrdma_xprt_drain(struct rpcrdma_xprt *r_xprt)
+ /* Flush Receives, then wait for deferred Reply work
+ * to complete.
+ */
+- ib_drain_qp(ia->ri_id->qp);
++ ib_drain_rq(ia->ri_id->qp);
+ drain_workqueue(buf->rb_completion_wq);
+
+ /* Deferred Reply processing might have scheduled
+diff --git a/net/tipc/net.c b/net/tipc/net.c
+index f076edb74338..7ce1e86b024f 100644
+--- a/net/tipc/net.c
++++ b/net/tipc/net.c
+@@ -163,12 +163,9 @@ void tipc_sched_net_finalize(struct net *net, u32 addr)
+
+ void tipc_net_stop(struct net *net)
+ {
+- u32 self = tipc_own_addr(net);
+-
+- if (!self)
++ if (!tipc_own_id(net))
+ return;
+
+- tipc_nametbl_withdraw(net, TIPC_CFG_SRV, self, self, self);
+ rtnl_lock();
+ tipc_bearer_stop(net);
+ tipc_node_stop(net);
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 70343ac448b1..4dca9161f99b 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -1333,7 +1333,7 @@ static int __tipc_sendmsg(struct socket *sock, struct msghdr *m, size_t dlen)
+
+ if (unlikely(!dest)) {
+ dest = &tsk->peer;
+- if (!syn || dest->family != AF_TIPC)
++ if (!syn && dest->family != AF_TIPC)
+ return -EDESTADDRREQ;
+ }
+
+@@ -2349,6 +2349,16 @@ static int tipc_wait_for_connect(struct socket *sock, long *timeo_p)
+ return 0;
+ }
+
++static bool tipc_sockaddr_is_sane(struct sockaddr_tipc *addr)
++{
++ if (addr->family != AF_TIPC)
++ return false;
++ if (addr->addrtype == TIPC_SERVICE_RANGE)
++ return (addr->addr.nameseq.lower <= addr->addr.nameseq.upper);
++ return (addr->addrtype == TIPC_SERVICE_ADDR ||
++ addr->addrtype == TIPC_SOCKET_ADDR);
++}
++
+ /**
+ * tipc_connect - establish a connection to another TIPC port
+ * @sock: socket structure
+@@ -2384,18 +2394,18 @@ static int tipc_connect(struct socket *sock, struct sockaddr *dest,
+ if (!tipc_sk_type_connectionless(sk))
+ res = -EINVAL;
+ goto exit;
+- } else if (dst->family != AF_TIPC) {
+- res = -EINVAL;
+ }
+- if (dst->addrtype != TIPC_ADDR_ID && dst->addrtype != TIPC_ADDR_NAME)
++ if (!tipc_sockaddr_is_sane(dst)) {
+ res = -EINVAL;
+- if (res)
+ goto exit;
+-
++ }
+ /* DGRAM/RDM connect(), just save the destaddr */
+ if (tipc_sk_type_connectionless(sk)) {
+ memcpy(&tsk->peer, dest, destlen);
+ goto exit;
++ } else if (dst->addrtype == TIPC_SERVICE_RANGE) {
++ res = -EINVAL;
++ goto exit;
+ }
+
+ previous = sk->sk_state;
+diff --git a/net/tipc/topsrv.c b/net/tipc/topsrv.c
+index a457c0fbbef1..f5edb213d760 100644
+--- a/net/tipc/topsrv.c
++++ b/net/tipc/topsrv.c
+@@ -365,6 +365,7 @@ static int tipc_conn_rcv_sub(struct tipc_topsrv *srv,
+ struct tipc_subscription *sub;
+
+ if (tipc_sub_read(s, filter) & TIPC_SUB_CANCEL) {
++ s->filter &= __constant_ntohl(~TIPC_SUB_CANCEL);
+ tipc_conn_delete_sub(con, s);
+ return 0;
+ }
+diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
+index 3ae3a33da70b..602715fc9a75 100644
+--- a/net/vmw_vsock/virtio_transport_common.c
++++ b/net/vmw_vsock/virtio_transport_common.c
+@@ -662,6 +662,8 @@ static int virtio_transport_reset(struct vsock_sock *vsk,
+ */
+ static int virtio_transport_reset_no_sock(struct virtio_vsock_pkt *pkt)
+ {
++ const struct virtio_transport *t;
++ struct virtio_vsock_pkt *reply;
+ struct virtio_vsock_pkt_info info = {
+ .op = VIRTIO_VSOCK_OP_RST,
+ .type = le16_to_cpu(pkt->hdr.type),
+@@ -672,15 +674,21 @@ static int virtio_transport_reset_no_sock(struct virtio_vsock_pkt *pkt)
+ if (le16_to_cpu(pkt->hdr.op) == VIRTIO_VSOCK_OP_RST)
+ return 0;
+
+- pkt = virtio_transport_alloc_pkt(&info, 0,
+- le64_to_cpu(pkt->hdr.dst_cid),
+- le32_to_cpu(pkt->hdr.dst_port),
+- le64_to_cpu(pkt->hdr.src_cid),
+- le32_to_cpu(pkt->hdr.src_port));
+- if (!pkt)
++ reply = virtio_transport_alloc_pkt(&info, 0,
++ le64_to_cpu(pkt->hdr.dst_cid),
++ le32_to_cpu(pkt->hdr.dst_port),
++ le64_to_cpu(pkt->hdr.src_cid),
++ le32_to_cpu(pkt->hdr.src_port));
++ if (!reply)
+ return -ENOMEM;
+
+- return virtio_transport_get_ops()->send_pkt(pkt);
++ t = virtio_transport_get_ops();
++ if (!t) {
++ virtio_transport_free_pkt(reply);
++ return -ENOTCONN;
++ }
++
++ return t->send_pkt(reply);
+ }
+
+ static void virtio_transport_wait_close(struct sock *sk, long timeout)
+diff --git a/net/x25/af_x25.c b/net/x25/af_x25.c
+index eff31348e20b..20a511398389 100644
+--- a/net/x25/af_x25.c
++++ b/net/x25/af_x25.c
+@@ -820,8 +820,13 @@ static int x25_connect(struct socket *sock, struct sockaddr *uaddr,
+ sock->state = SS_CONNECTED;
+ rc = 0;
+ out_put_neigh:
+- if (rc)
++ if (rc) {
++ read_lock_bh(&x25_list_lock);
+ x25_neigh_put(x25->neighbour);
++ x25->neighbour = NULL;
++ read_unlock_bh(&x25_list_lock);
++ x25->state = X25_STATE_0;
++ }
+ out_put_route:
+ x25_route_put(rt);
+ out:
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index 85e4fe4f18cc..f3031c8907d9 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -407,6 +407,10 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
+ if (sxdp->sxdp_family != AF_XDP)
+ return -EINVAL;
+
++ flags = sxdp->sxdp_flags;
++ if (flags & ~(XDP_SHARED_UMEM | XDP_COPY | XDP_ZEROCOPY))
++ return -EINVAL;
++
+ mutex_lock(&xs->mutex);
+ if (xs->dev) {
+ err = -EBUSY;
+@@ -425,7 +429,6 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
+ }
+
+ qid = sxdp->sxdp_queue_id;
+- flags = sxdp->sxdp_flags;
+
+ if (flags & XDP_SHARED_UMEM) {
+ struct xdp_sock *umem_xs;
+diff --git a/scripts/gdb/linux/constants.py.in b/scripts/gdb/linux/constants.py.in
+index 7aad82406422..d3319a80788a 100644
+--- a/scripts/gdb/linux/constants.py.in
++++ b/scripts/gdb/linux/constants.py.in
+@@ -37,12 +37,12 @@
+ import gdb
+
+ /* linux/fs.h */
+-LX_VALUE(MS_RDONLY)
+-LX_VALUE(MS_SYNCHRONOUS)
+-LX_VALUE(MS_MANDLOCK)
+-LX_VALUE(MS_DIRSYNC)
+-LX_VALUE(MS_NOATIME)
+-LX_VALUE(MS_NODIRATIME)
++LX_VALUE(SB_RDONLY)
++LX_VALUE(SB_SYNCHRONOUS)
++LX_VALUE(SB_MANDLOCK)
++LX_VALUE(SB_DIRSYNC)
++LX_VALUE(SB_NOATIME)
++LX_VALUE(SB_NODIRATIME)
+
+ /* linux/mount.h */
+ LX_VALUE(MNT_NOSUID)
+diff --git a/scripts/gdb/linux/proc.py b/scripts/gdb/linux/proc.py
+index 0aebd7565b03..2f01a958eb22 100644
+--- a/scripts/gdb/linux/proc.py
++++ b/scripts/gdb/linux/proc.py
+@@ -114,11 +114,11 @@ def info_opts(lst, opt):
+ return opts
+
+
+-FS_INFO = {constants.LX_MS_SYNCHRONOUS: ",sync",
+- constants.LX_MS_MANDLOCK: ",mand",
+- constants.LX_MS_DIRSYNC: ",dirsync",
+- constants.LX_MS_NOATIME: ",noatime",
+- constants.LX_MS_NODIRATIME: ",nodiratime"}
++FS_INFO = {constants.LX_SB_SYNCHRONOUS: ",sync",
++ constants.LX_SB_MANDLOCK: ",mand",
++ constants.LX_SB_DIRSYNC: ",dirsync",
++ constants.LX_SB_NOATIME: ",noatime",
++ constants.LX_SB_NODIRATIME: ",nodiratime"}
+
+ MNT_INFO = {constants.LX_MNT_NOSUID: ",nosuid",
+ constants.LX_MNT_NODEV: ",nodev",
+@@ -184,7 +184,7 @@ values of that process namespace"""
+ fstype = superblock['s_type']['name'].string()
+ s_flags = int(superblock['s_flags'])
+ m_flags = int(vfs['mnt']['mnt_flags'])
+- rd = "ro" if (s_flags & constants.LX_MS_RDONLY) else "rw"
++ rd = "ro" if (s_flags & constants.LX_SB_RDONLY) else "rw"
+
+ gdb.write(
+ "{} {} {} {}{}{} 0 0\n"
+diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
+index 26bf886bd168..588a3bc29ecc 100644
+--- a/scripts/mod/modpost.c
++++ b/scripts/mod/modpost.c
+@@ -640,7 +640,7 @@ static void handle_modversions(struct module *mod, struct elf_info *info,
+ info->sechdrs[sym->st_shndx].sh_offset -
+ (info->hdr->e_type != ET_REL ?
+ info->sechdrs[sym->st_shndx].sh_addr : 0);
+- crc = *crcp;
++ crc = TO_NATIVE(*crcp);
+ }
+ sym_update_crc(symname + strlen("__crc_"), mod, crc,
+ export);
+diff --git a/scripts/package/Makefile b/scripts/package/Makefile
+index 453fecee62f0..aa39c2b5e46a 100644
+--- a/scripts/package/Makefile
++++ b/scripts/package/Makefile
+@@ -59,7 +59,7 @@ rpm-pkg: FORCE
+ # binrpm-pkg
+ # ---------------------------------------------------------------------------
+ binrpm-pkg: FORCE
+- $(MAKE) KBUILD_SRC=
++ $(MAKE) -f $(srctree)/Makefile
+ $(CONFIG_SHELL) $(MKSPEC) prebuilt > $(objtree)/binkernel.spec
+ +rpmbuild $(RPMOPTS) --define "_builddir $(objtree)" --target \
+ $(UTS_MACHINE) -bb $(objtree)/binkernel.spec
+@@ -102,7 +102,7 @@ clean-dirs += $(objtree)/snap/
+ # tarball targets
+ # ---------------------------------------------------------------------------
+ tar%pkg: FORCE
+- $(MAKE) KBUILD_SRC=
++ $(MAKE) -f $(srctree)/Makefile
+ $(CONFIG_SHELL) $(srctree)/scripts/package/buildtar $@
+
+ clean-dirs += $(objtree)/tar-install/
+diff --git a/scripts/package/builddeb b/scripts/package/builddeb
+index f43a274f4f1d..8ac25d10a6ad 100755
+--- a/scripts/package/builddeb
++++ b/scripts/package/builddeb
+@@ -86,12 +86,12 @@ cp "$($MAKE -s -f $srctree/Makefile image_name)" "$tmpdir/$installed_image_path"
+ if grep -q "^CONFIG_OF_EARLY_FLATTREE=y" $KCONFIG_CONFIG ; then
+ # Only some architectures with OF support have this target
+ if [ -d "${srctree}/arch/$SRCARCH/boot/dts" ]; then
+- $MAKE KBUILD_SRC= INSTALL_DTBS_PATH="$tmpdir/usr/lib/$packagename" dtbs_install
++ $MAKE -f $srctree/Makefile INSTALL_DTBS_PATH="$tmpdir/usr/lib/$packagename" dtbs_install
+ fi
+ fi
+
+ if grep -q '^CONFIG_MODULES=y' $KCONFIG_CONFIG ; then
+- INSTALL_MOD_PATH="$tmpdir" $MAKE KBUILD_SRC= modules_install
++ INSTALL_MOD_PATH="$tmpdir" $MAKE -f $srctree/Makefile modules_install
+ rm -f "$tmpdir/lib/modules/$version/build"
+ rm -f "$tmpdir/lib/modules/$version/source"
+ if [ "$ARCH" = "um" ] ; then
+@@ -113,14 +113,14 @@ if grep -q '^CONFIG_MODULES=y' $KCONFIG_CONFIG ; then
+ # resign stripped modules
+ MODULE_SIG_ALL="$(grep -s '^CONFIG_MODULE_SIG_ALL=y' $KCONFIG_CONFIG || true)"
+ if [ -n "$MODULE_SIG_ALL" ]; then
+- INSTALL_MOD_PATH="$tmpdir" $MAKE KBUILD_SRC= modules_sign
++ INSTALL_MOD_PATH="$tmpdir" $MAKE -f $srctree/Makefile modules_sign
+ fi
+ fi
+ fi
+
+ if [ "$ARCH" != "um" ]; then
+- $MAKE headers_check KBUILD_SRC=
+- $MAKE headers_install KBUILD_SRC= INSTALL_HDR_PATH="$libc_headers_dir/usr"
++ $MAKE -f $srctree/Makefile headers_check
++ $MAKE -f $srctree/Makefile headers_install INSTALL_HDR_PATH="$libc_headers_dir/usr"
+ fi
+
+ # Install the maintainer scripts
+diff --git a/scripts/package/buildtar b/scripts/package/buildtar
+index d624a07a4e77..cfd2a4a3fe42 100755
+--- a/scripts/package/buildtar
++++ b/scripts/package/buildtar
+@@ -57,7 +57,7 @@ dirs=boot
+ # Try to install modules
+ #
+ if grep -q '^CONFIG_MODULES=y' "${KCONFIG_CONFIG}"; then
+- make ARCH="${ARCH}" O="${objtree}" KBUILD_SRC= INSTALL_MOD_PATH="${tmpdir}" modules_install
++ make ARCH="${ARCH}" -f ${srctree}/Makefile INSTALL_MOD_PATH="${tmpdir}" modules_install
+ dirs="$dirs lib"
+ fi
+
+diff --git a/scripts/package/mkdebian b/scripts/package/mkdebian
+index edcad61fe3cd..f030961c5165 100755
+--- a/scripts/package/mkdebian
++++ b/scripts/package/mkdebian
+@@ -205,13 +205,15 @@ EOF
+ cat <<EOF > debian/rules
+ #!$(command -v $MAKE) -f
+
++srctree ?= .
++
+ build:
+ \$(MAKE) KERNELRELEASE=${version} ARCH=${ARCH} \
+- KBUILD_BUILD_VERSION=${revision} KBUILD_SRC=
++ KBUILD_BUILD_VERSION=${revision} -f \$(srctree)/Makefile
+
+ binary-arch:
+ \$(MAKE) KERNELRELEASE=${version} ARCH=${ARCH} \
+- KBUILD_BUILD_VERSION=${revision} KBUILD_SRC= intdeb-pkg
++ KBUILD_BUILD_VERSION=${revision} -f \$(srctree)/Makefile intdeb-pkg
+
+ clean:
+ rm -rf debian/*tmp debian/files
+diff --git a/security/apparmor/policy_unpack.c b/security/apparmor/policy_unpack.c
+index 379682e2a8d5..f6c2bcb2ab14 100644
+--- a/security/apparmor/policy_unpack.c
++++ b/security/apparmor/policy_unpack.c
+@@ -579,6 +579,7 @@ fail:
+ kfree(profile->secmark[i].label);
+ kfree(profile->secmark);
+ profile->secmark_count = 0;
++ profile->secmark = NULL;
+ }
+
+ e->pos = pos;
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index f0e36c3492ba..07b11b5aaf1f 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -959,8 +959,11 @@ static int selinux_sb_clone_mnt_opts(const struct super_block *oldsb,
+ BUG_ON(!(oldsbsec->flags & SE_SBINITIALIZED));
+
+ /* if fs is reusing a sb, make sure that the contexts match */
+- if (newsbsec->flags & SE_SBINITIALIZED)
++ if (newsbsec->flags & SE_SBINITIALIZED) {
++ if ((kern_flags & SECURITY_LSM_NATIVE_LABELS) && !set_context)
++ *set_kern_flags |= SECURITY_LSM_NATIVE_LABELS;
+ return selinux_cmp_sb_context(oldsb, newsb);
++ }
+
+ mutex_lock(&newsbsec->lock);
+
+@@ -3241,12 +3244,16 @@ static int selinux_inode_setsecurity(struct inode *inode, const char *name,
+ const void *value, size_t size, int flags)
+ {
+ struct inode_security_struct *isec = inode_security_novalidate(inode);
++ struct superblock_security_struct *sbsec = inode->i_sb->s_security;
+ u32 newsid;
+ int rc;
+
+ if (strcmp(name, XATTR_SELINUX_SUFFIX))
+ return -EOPNOTSUPP;
+
++ if (!(sbsec->flags & SBLABEL_MNT))
++ return -EOPNOTSUPP;
++
+ if (!value || !size)
+ return -EACCES;
+
+@@ -5120,6 +5127,9 @@ static int selinux_sctp_bind_connect(struct sock *sk, int optname,
+ return -EINVAL;
+ }
+
++ if (walk_size + len > addrlen)
++ return -EINVAL;
++
+ err = -EINVAL;
+ switch (optname) {
+ /* Bind checks */
+@@ -6392,7 +6402,10 @@ static void selinux_inode_invalidate_secctx(struct inode *inode)
+ */
+ static int selinux_inode_notifysecctx(struct inode *inode, void *ctx, u32 ctxlen)
+ {
+- return selinux_inode_setsecurity(inode, XATTR_SELINUX_SUFFIX, ctx, ctxlen, 0);
++ int rc = selinux_inode_setsecurity(inode, XATTR_SELINUX_SUFFIX,
++ ctx, ctxlen, 0);
++ /* Do not return error when suppressing label (SBLABEL_MNT not set). */
++ return rc == -EOPNOTSUPP ? 0 : rc;
+ }
+
+ /*
+diff --git a/sound/ac97/bus.c b/sound/ac97/bus.c
+index 9f0c480489ef..9cbf6927abe9 100644
+--- a/sound/ac97/bus.c
++++ b/sound/ac97/bus.c
+@@ -84,7 +84,7 @@ ac97_of_get_child_device(struct ac97_controller *ac97_ctrl, int idx,
+ if ((idx != of_property_read_u32(node, "reg", &reg)) ||
+ !of_device_is_compatible(node, compat))
+ continue;
+- return of_node_get(node);
++ return node;
+ }
+
+ return NULL;
+diff --git a/sound/core/oss/pcm_oss.c b/sound/core/oss/pcm_oss.c
+index 467039b342b5..41abb8bd466a 100644
+--- a/sound/core/oss/pcm_oss.c
++++ b/sound/core/oss/pcm_oss.c
+@@ -940,6 +940,28 @@ static int snd_pcm_oss_change_params_locked(struct snd_pcm_substream *substream)
+ oss_frame_size = snd_pcm_format_physical_width(params_format(params)) *
+ params_channels(params) / 8;
+
++ err = snd_pcm_oss_period_size(substream, params, sparams);
++ if (err < 0)
++ goto failure;
++
++ n = snd_pcm_plug_slave_size(substream, runtime->oss.period_bytes / oss_frame_size);
++ err = snd_pcm_hw_param_near(substream, sparams, SNDRV_PCM_HW_PARAM_PERIOD_SIZE, n, NULL);
++ if (err < 0)
++ goto failure;
++
++ err = snd_pcm_hw_param_near(substream, sparams, SNDRV_PCM_HW_PARAM_PERIODS,
++ runtime->oss.periods, NULL);
++ if (err < 0)
++ goto failure;
++
++ snd_pcm_kernel_ioctl(substream, SNDRV_PCM_IOCTL_DROP, NULL);
++
++ err = snd_pcm_kernel_ioctl(substream, SNDRV_PCM_IOCTL_HW_PARAMS, sparams);
++ if (err < 0) {
++ pcm_dbg(substream->pcm, "HW_PARAMS failed: %i\n", err);
++ goto failure;
++ }
++
+ #ifdef CONFIG_SND_PCM_OSS_PLUGINS
+ snd_pcm_oss_plugin_clear(substream);
+ if (!direct) {
+@@ -974,27 +996,6 @@ static int snd_pcm_oss_change_params_locked(struct snd_pcm_substream *substream)
+ }
+ #endif
+
+- err = snd_pcm_oss_period_size(substream, params, sparams);
+- if (err < 0)
+- goto failure;
+-
+- n = snd_pcm_plug_slave_size(substream, runtime->oss.period_bytes / oss_frame_size);
+- err = snd_pcm_hw_param_near(substream, sparams, SNDRV_PCM_HW_PARAM_PERIOD_SIZE, n, NULL);
+- if (err < 0)
+- goto failure;
+-
+- err = snd_pcm_hw_param_near(substream, sparams, SNDRV_PCM_HW_PARAM_PERIODS,
+- runtime->oss.periods, NULL);
+- if (err < 0)
+- goto failure;
+-
+- snd_pcm_kernel_ioctl(substream, SNDRV_PCM_IOCTL_DROP, NULL);
+-
+- if ((err = snd_pcm_kernel_ioctl(substream, SNDRV_PCM_IOCTL_HW_PARAMS, sparams)) < 0) {
+- pcm_dbg(substream->pcm, "HW_PARAMS failed: %i\n", err);
+- goto failure;
+- }
+-
+ if (runtime->oss.trigger) {
+ sw_params->start_threshold = 1;
+ } else {
+diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
+index 818dff1de545..e08c6c6ca029 100644
+--- a/sound/core/pcm_native.c
++++ b/sound/core/pcm_native.c
+@@ -1426,8 +1426,15 @@ static int snd_pcm_pause(struct snd_pcm_substream *substream, int push)
+ static int snd_pcm_pre_suspend(struct snd_pcm_substream *substream, int state)
+ {
+ struct snd_pcm_runtime *runtime = substream->runtime;
+- if (runtime->status->state == SNDRV_PCM_STATE_SUSPENDED)
++ switch (runtime->status->state) {
++ case SNDRV_PCM_STATE_SUSPENDED:
++ return -EBUSY;
++ /* unresumable PCM state; return -EBUSY for skipping suspend */
++ case SNDRV_PCM_STATE_OPEN:
++ case SNDRV_PCM_STATE_SETUP:
++ case SNDRV_PCM_STATE_DISCONNECTED:
+ return -EBUSY;
++ }
+ runtime->trigger_master = substream;
+ return 0;
+ }
+@@ -1506,6 +1513,14 @@ int snd_pcm_suspend_all(struct snd_pcm *pcm)
+ /* FIXME: the open/close code should lock this as well */
+ if (substream->runtime == NULL)
+ continue;
++
++ /*
++ * Skip BE dai link PCM's that are internal and may
++ * not have their substream ops set.
++ */
++ if (!substream->ops)
++ continue;
++
+ err = snd_pcm_suspend(substream);
+ if (err < 0 && err != -EBUSY)
+ return err;
+diff --git a/sound/core/rawmidi.c b/sound/core/rawmidi.c
+index ee601d7f0926..c0690d1ecd55 100644
+--- a/sound/core/rawmidi.c
++++ b/sound/core/rawmidi.c
+@@ -30,6 +30,7 @@
+ #include <linux/module.h>
+ #include <linux/delay.h>
+ #include <linux/mm.h>
++#include <linux/nospec.h>
+ #include <sound/rawmidi.h>
+ #include <sound/info.h>
+ #include <sound/control.h>
+@@ -601,6 +602,7 @@ static int __snd_rawmidi_info_select(struct snd_card *card,
+ return -ENXIO;
+ if (info->stream < 0 || info->stream > 1)
+ return -EINVAL;
++ info->stream = array_index_nospec(info->stream, 2);
+ pstr = &rmidi->streams[info->stream];
+ if (pstr->substream_count == 0)
+ return -ENOENT;
+diff --git a/sound/core/seq/oss/seq_oss_synth.c b/sound/core/seq/oss/seq_oss_synth.c
+index 278ebb993122..c93945917235 100644
+--- a/sound/core/seq/oss/seq_oss_synth.c
++++ b/sound/core/seq/oss/seq_oss_synth.c
+@@ -617,13 +617,14 @@ int
+ snd_seq_oss_synth_make_info(struct seq_oss_devinfo *dp, int dev, struct synth_info *inf)
+ {
+ struct seq_oss_synth *rec;
++ struct seq_oss_synthinfo *info = get_synthinfo_nospec(dp, dev);
+
+- if (dev < 0 || dev >= dp->max_synthdev)
++ if (!info)
+ return -ENXIO;
+
+- if (dp->synths[dev].is_midi) {
++ if (info->is_midi) {
+ struct midi_info minf;
+- snd_seq_oss_midi_make_info(dp, dp->synths[dev].midi_mapped, &minf);
++ snd_seq_oss_midi_make_info(dp, info->midi_mapped, &minf);
+ inf->synth_type = SYNTH_TYPE_MIDI;
+ inf->synth_subtype = 0;
+ inf->nr_voices = 16;
+diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
+index 7d4640d1fe9f..38e7deab6384 100644
+--- a/sound/core/seq/seq_clientmgr.c
++++ b/sound/core/seq/seq_clientmgr.c
+@@ -1252,7 +1252,7 @@ static int snd_seq_ioctl_set_client_info(struct snd_seq_client *client,
+
+ /* fill the info fields */
+ if (client_info->name[0])
+- strlcpy(client->name, client_info->name, sizeof(client->name));
++ strscpy(client->name, client_info->name, sizeof(client->name));
+
+ client->filter = client_info->filter;
+ client->event_lost = client_info->event_lost;
+@@ -1530,7 +1530,7 @@ static int snd_seq_ioctl_create_queue(struct snd_seq_client *client, void *arg)
+ /* set queue name */
+ if (!info->name[0])
+ snprintf(info->name, sizeof(info->name), "Queue-%d", q->queue);
+- strlcpy(q->name, info->name, sizeof(q->name));
++ strscpy(q->name, info->name, sizeof(q->name));
+ snd_use_lock_free(&q->use_lock);
+
+ return 0;
+@@ -1592,7 +1592,7 @@ static int snd_seq_ioctl_set_queue_info(struct snd_seq_client *client,
+ queuefree(q);
+ return -EPERM;
+ }
+- strlcpy(q->name, info->name, sizeof(q->name));
++ strscpy(q->name, info->name, sizeof(q->name));
+ queuefree(q);
+
+ return 0;
+diff --git a/sound/firewire/bebob/bebob.c b/sound/firewire/bebob/bebob.c
+index d91874275d2c..5b46e8dcc2dd 100644
+--- a/sound/firewire/bebob/bebob.c
++++ b/sound/firewire/bebob/bebob.c
+@@ -448,7 +448,19 @@ static const struct ieee1394_device_id bebob_id_table[] = {
+ /* Focusrite, SaffirePro 26 I/O */
+ SND_BEBOB_DEV_ENTRY(VEN_FOCUSRITE, 0x00000003, &saffirepro_26_spec),
+ /* Focusrite, SaffirePro 10 I/O */
+- SND_BEBOB_DEV_ENTRY(VEN_FOCUSRITE, 0x00000006, &saffirepro_10_spec),
++ {
++ // The combination of vendor_id and model_id is the same as the
++ // same as the one of Liquid Saffire 56.
++ .match_flags = IEEE1394_MATCH_VENDOR_ID |
++ IEEE1394_MATCH_MODEL_ID |
++ IEEE1394_MATCH_SPECIFIER_ID |
++ IEEE1394_MATCH_VERSION,
++ .vendor_id = VEN_FOCUSRITE,
++ .model_id = 0x000006,
++ .specifier_id = 0x00a02d,
++ .version = 0x010001,
++ .driver_data = (kernel_ulong_t)&saffirepro_10_spec,
++ },
+ /* Focusrite, Saffire(no label and LE) */
+ SND_BEBOB_DEV_ENTRY(VEN_FOCUSRITE, MODEL_FOCUSRITE_SAFFIRE_BOTH,
+ &saffire_spec),
+diff --git a/sound/firewire/dice/dice.c b/sound/firewire/dice/dice.c
+index ed50b222d36e..eee184b05d93 100644
+--- a/sound/firewire/dice/dice.c
++++ b/sound/firewire/dice/dice.c
+@@ -18,6 +18,7 @@ MODULE_LICENSE("GPL v2");
+ #define OUI_ALESIS 0x000595
+ #define OUI_MAUDIO 0x000d6c
+ #define OUI_MYTEK 0x001ee8
++#define OUI_SSL 0x0050c2 // Actually ID reserved by IEEE.
+
+ #define DICE_CATEGORY_ID 0x04
+ #define WEISS_CATEGORY_ID 0x00
+@@ -196,7 +197,7 @@ static int dice_probe(struct fw_unit *unit,
+ struct snd_dice *dice;
+ int err;
+
+- if (!entry->driver_data) {
++ if (!entry->driver_data && entry->vendor_id != OUI_SSL) {
+ err = check_dice_category(unit);
+ if (err < 0)
+ return -ENODEV;
+@@ -361,6 +362,15 @@ static const struct ieee1394_device_id dice_id_table[] = {
+ .model_id = 0x000002,
+ .driver_data = (kernel_ulong_t)snd_dice_detect_mytek_formats,
+ },
++ // Solid State Logic, Duende Classic and Mini.
++ // NOTE: each field of GUID in config ROM is not compliant to standard
++ // DICE scheme.
++ {
++ .match_flags = IEEE1394_MATCH_VENDOR_ID |
++ IEEE1394_MATCH_MODEL_ID,
++ .vendor_id = OUI_SSL,
++ .model_id = 0x000070,
++ },
+ {
+ .match_flags = IEEE1394_MATCH_VERSION,
+ .version = DICE_INTERFACE,
+diff --git a/sound/firewire/motu/amdtp-motu.c b/sound/firewire/motu/amdtp-motu.c
+index f0555a24d90e..6c9b743ea74b 100644
+--- a/sound/firewire/motu/amdtp-motu.c
++++ b/sound/firewire/motu/amdtp-motu.c
+@@ -136,7 +136,9 @@ static void read_pcm_s32(struct amdtp_stream *s,
+ byte = (u8 *)buffer + p->pcm_byte_offset;
+
+ for (c = 0; c < channels; ++c) {
+- *dst = (byte[0] << 24) | (byte[1] << 16) | byte[2];
++ *dst = (byte[0] << 24) |
++ (byte[1] << 16) |
++ (byte[2] << 8);
+ byte += 3;
+ dst++;
+ }
+diff --git a/sound/firewire/motu/motu.c b/sound/firewire/motu/motu.c
+index 220e61926ea4..513291ba0ab0 100644
+--- a/sound/firewire/motu/motu.c
++++ b/sound/firewire/motu/motu.c
+@@ -36,7 +36,7 @@ static void name_card(struct snd_motu *motu)
+ fw_csr_iterator_init(&it, motu->unit->directory);
+ while (fw_csr_iterator_next(&it, &key, &val)) {
+ switch (key) {
+- case CSR_VERSION:
++ case CSR_MODEL:
+ version = val;
+ break;
+ }
+@@ -46,7 +46,7 @@ static void name_card(struct snd_motu *motu)
+ strcpy(motu->card->shortname, motu->spec->name);
+ strcpy(motu->card->mixername, motu->spec->name);
+ snprintf(motu->card->longname, sizeof(motu->card->longname),
+- "MOTU %s (version:%d), GUID %08x%08x at %s, S%d",
++ "MOTU %s (version:%06x), GUID %08x%08x at %s, S%d",
+ motu->spec->name, version,
+ fw_dev->config_rom[3], fw_dev->config_rom[4],
+ dev_name(&motu->unit->device), 100 << fw_dev->max_speed);
+@@ -237,20 +237,20 @@ static const struct snd_motu_spec motu_audio_express = {
+ #define SND_MOTU_DEV_ENTRY(model, data) \
+ { \
+ .match_flags = IEEE1394_MATCH_VENDOR_ID | \
+- IEEE1394_MATCH_MODEL_ID | \
+- IEEE1394_MATCH_SPECIFIER_ID, \
++ IEEE1394_MATCH_SPECIFIER_ID | \
++ IEEE1394_MATCH_VERSION, \
+ .vendor_id = OUI_MOTU, \
+- .model_id = model, \
+ .specifier_id = OUI_MOTU, \
++ .version = model, \
+ .driver_data = (kernel_ulong_t)data, \
+ }
+
+ static const struct ieee1394_device_id motu_id_table[] = {
+- SND_MOTU_DEV_ENTRY(0x101800, &motu_828mk2),
+- SND_MOTU_DEV_ENTRY(0x107800, &snd_motu_spec_traveler),
+- SND_MOTU_DEV_ENTRY(0x106800, &motu_828mk3), /* FireWire only. */
+- SND_MOTU_DEV_ENTRY(0x100800, &motu_828mk3), /* Hybrid. */
+- SND_MOTU_DEV_ENTRY(0x104800, &motu_audio_express),
++ SND_MOTU_DEV_ENTRY(0x000003, &motu_828mk2),
++ SND_MOTU_DEV_ENTRY(0x000009, &snd_motu_spec_traveler),
++ SND_MOTU_DEV_ENTRY(0x000015, &motu_828mk3), /* FireWire only. */
++ SND_MOTU_DEV_ENTRY(0x000035, &motu_828mk3), /* Hybrid. */
++ SND_MOTU_DEV_ENTRY(0x000033, &motu_audio_express),
+ { }
+ };
+ MODULE_DEVICE_TABLE(ieee1394, motu_id_table);
+diff --git a/sound/hda/hdac_i915.c b/sound/hda/hdac_i915.c
+index 617ff1aa818f..27eb0270a711 100644
+--- a/sound/hda/hdac_i915.c
++++ b/sound/hda/hdac_i915.c
+@@ -144,9 +144,9 @@ int snd_hdac_i915_init(struct hdac_bus *bus)
+ return -ENODEV;
+ if (!acomp->ops) {
+ request_module("i915");
+- /* 10s timeout */
++ /* 60s timeout */
+ wait_for_completion_timeout(&bind_complete,
+- msecs_to_jiffies(10 * 1000));
++ msecs_to_jiffies(60 * 1000));
+ }
+ if (!acomp->ops) {
+ dev_info(bus->dev, "couldn't bind with audio component\n");
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index 9f8d59e7e89f..b238e903b9d7 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -2917,6 +2917,7 @@ static void hda_call_codec_resume(struct hda_codec *codec)
+ hda_jackpoll_work(&codec->jackpoll_work.work);
+ else
+ snd_hda_jack_report_sync(codec);
++ codec->core.dev.power.power_state = PMSG_ON;
+ snd_hdac_leave_pm(&codec->core);
+ }
+
+@@ -2950,10 +2951,62 @@ static int hda_codec_runtime_resume(struct device *dev)
+ }
+ #endif /* CONFIG_PM */
+
++#ifdef CONFIG_PM_SLEEP
++static int hda_codec_force_resume(struct device *dev)
++{
++ int ret;
++
++ /* The get/put pair below enforces the runtime resume even if the
++ * device hasn't been used at suspend time. This trick is needed to
++ * update the jack state change during the sleep.
++ */
++ pm_runtime_get_noresume(dev);
++ ret = pm_runtime_force_resume(dev);
++ pm_runtime_put(dev);
++ return ret;
++}
++
++static int hda_codec_pm_suspend(struct device *dev)
++{
++ dev->power.power_state = PMSG_SUSPEND;
++ return pm_runtime_force_suspend(dev);
++}
++
++static int hda_codec_pm_resume(struct device *dev)
++{
++ dev->power.power_state = PMSG_RESUME;
++ return hda_codec_force_resume(dev);
++}
++
++static int hda_codec_pm_freeze(struct device *dev)
++{
++ dev->power.power_state = PMSG_FREEZE;
++ return pm_runtime_force_suspend(dev);
++}
++
++static int hda_codec_pm_thaw(struct device *dev)
++{
++ dev->power.power_state = PMSG_THAW;
++ return hda_codec_force_resume(dev);
++}
++
++static int hda_codec_pm_restore(struct device *dev)
++{
++ dev->power.power_state = PMSG_RESTORE;
++ return hda_codec_force_resume(dev);
++}
++#endif /* CONFIG_PM_SLEEP */
++
+ /* referred in hda_bind.c */
+ const struct dev_pm_ops hda_codec_driver_pm = {
+- SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
+- pm_runtime_force_resume)
++#ifdef CONFIG_PM_SLEEP
++ .suspend = hda_codec_pm_suspend,
++ .resume = hda_codec_pm_resume,
++ .freeze = hda_codec_pm_freeze,
++ .thaw = hda_codec_pm_thaw,
++ .poweroff = hda_codec_pm_suspend,
++ .restore = hda_codec_pm_restore,
++#endif /* CONFIG_PM_SLEEP */
+ SET_RUNTIME_PM_OPS(hda_codec_runtime_suspend, hda_codec_runtime_resume,
+ NULL)
+ };
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index e5c49003e75f..2ec91085fa3e 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -947,7 +947,7 @@ static void __azx_runtime_suspend(struct azx *chip)
+ display_power(chip, false);
+ }
+
+-static void __azx_runtime_resume(struct azx *chip)
++static void __azx_runtime_resume(struct azx *chip, bool from_rt)
+ {
+ struct hda_intel *hda = container_of(chip, struct hda_intel, chip);
+ struct hdac_bus *bus = azx_bus(chip);
+@@ -964,7 +964,7 @@ static void __azx_runtime_resume(struct azx *chip)
+ azx_init_pci(chip);
+ hda_intel_init_chip(chip, true);
+
+- if (status) {
++ if (status && from_rt) {
+ list_for_each_codec(codec, &chip->bus)
+ if (status & (1 << codec->addr))
+ schedule_delayed_work(&codec->jackpoll_work,
+@@ -1016,7 +1016,7 @@ static int azx_resume(struct device *dev)
+ chip->msi = 0;
+ if (azx_acquire_irq(chip, 1) < 0)
+ return -EIO;
+- __azx_runtime_resume(chip);
++ __azx_runtime_resume(chip, false);
+ snd_power_change_state(card, SNDRV_CTL_POWER_D0);
+
+ trace_azx_resume(chip);
+@@ -1081,7 +1081,7 @@ static int azx_runtime_resume(struct device *dev)
+ chip = card->private_data;
+ if (!azx_has_pm_runtime(chip))
+ return 0;
+- __azx_runtime_resume(chip);
++ __azx_runtime_resume(chip, true);
+
+ /* disable controller Wake Up event*/
+ azx_writew(chip, WAKEEN, azx_readw(chip, WAKEEN) &
+@@ -2142,12 +2142,18 @@ static struct snd_pci_quirk power_save_blacklist[] = {
+ SND_PCI_QUIRK(0x8086, 0x2040, "Intel DZ77BH-55K", 0),
+ /* https://bugzilla.kernel.org/show_bug.cgi?id=199607 */
+ SND_PCI_QUIRK(0x8086, 0x2057, "Intel NUC5i7RYB", 0),
++ /* https://bugs.launchpad.net/bugs/1821663 */
++ SND_PCI_QUIRK(0x8086, 0x2064, "Intel SDP 8086:2064", 0),
+ /* https://bugzilla.redhat.com/show_bug.cgi?id=1520902 */
+ SND_PCI_QUIRK(0x8086, 0x2068, "Intel NUC7i3BNB", 0),
+- /* https://bugzilla.redhat.com/show_bug.cgi?id=1572975 */
+- SND_PCI_QUIRK(0x17aa, 0x36a7, "Lenovo C50 All in one", 0),
+ /* https://bugzilla.kernel.org/show_bug.cgi?id=198611 */
+ SND_PCI_QUIRK(0x17aa, 0x2227, "Lenovo X1 Carbon 3rd Gen", 0),
++ /* https://bugzilla.redhat.com/show_bug.cgi?id=1689623 */
++ SND_PCI_QUIRK(0x17aa, 0x367b, "Lenovo IdeaCentre B550", 0),
++ /* https://bugzilla.redhat.com/show_bug.cgi?id=1572975 */
++ SND_PCI_QUIRK(0x17aa, 0x36a7, "Lenovo C50 All in one", 0),
++ /* https://bugs.launchpad.net/bugs/1821663 */
++ SND_PCI_QUIRK(0x1631, 0xe017, "Packard Bell NEC IMEDIA 5204", 0),
+ {}
+ };
+ #endif /* CONFIG_PM */
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index a4ee7656d9ee..fb65ad31e86c 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -936,6 +936,9 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
+ SND_PCI_QUIRK(0x103c, 0x8299, "HP 800 G3 SFF", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x103c, 0x829a, "HP 800 G3 DM", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x103c, 0x8455, "HP Z2 G4", CXT_FIXUP_HP_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x103c, 0x8456, "HP Z2 G4 SFF", CXT_FIXUP_HP_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x103c, 0x8457, "HP Z2 G4 mini", CXT_FIXUP_HP_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x103c, 0x8458, "HP Z2 G4 mini premium", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x138d, "Asus", CXT_FIXUP_HEADPHONE_MIC_PIN),
+ SND_PCI_QUIRK(0x152d, 0x0833, "OLPC XO-1.5", CXT_FIXUP_OLPC_XO),
+ SND_PCI_QUIRK(0x17aa, 0x20f2, "Lenovo T400", CXT_PINCFG_LENOVO_TP410),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 1ffa36e987b4..84fae0df59e9 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -118,6 +118,7 @@ struct alc_spec {
+ unsigned int has_alc5505_dsp:1;
+ unsigned int no_depop_delay:1;
+ unsigned int done_hp_init:1;
++ unsigned int no_shutup_pins:1;
+
+ /* for PLL fix */
+ hda_nid_t pll_nid;
+@@ -476,6 +477,14 @@ static void alc_auto_setup_eapd(struct hda_codec *codec, bool on)
+ set_eapd(codec, *p, on);
+ }
+
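++/* shut up pins only when the no_shutup_pins quirk flag is unset */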
++static void alc_shutup_pins(struct hda_codec *codec)
++{
++ struct alc_spec *spec = codec->spec;
++
++ if (!spec->no_shutup_pins)
++ snd_hda_shutup_pins(codec);
++}
++
+ /* generic shutup callback;
+ * just turning off EAPD and a little pause for avoiding pop-noise
+ */
+@@ -486,7 +495,7 @@ static void alc_eapd_shutup(struct hda_codec *codec)
+ alc_auto_setup_eapd(codec, false);
+ if (!spec->no_depop_delay)
+ msleep(200);
+- snd_hda_shutup_pins(codec);
++ alc_shutup_pins(codec);
+ }
+
+ /* generic EAPD initialization */
+@@ -814,7 +823,7 @@ static inline void alc_shutup(struct hda_codec *codec)
+ if (spec && spec->shutup)
+ spec->shutup(codec);
+ else
+- snd_hda_shutup_pins(codec);
++ alc_shutup_pins(codec);
+ }
+
+ static void alc_reboot_notify(struct hda_codec *codec)
+@@ -1855,8 +1864,8 @@ enum {
+ ALC887_FIXUP_BASS_CHMAP,
+ ALC1220_FIXUP_GB_DUAL_CODECS,
+ ALC1220_FIXUP_CLEVO_P950,
+- ALC1220_FIXUP_SYSTEM76_ORYP5,
+- ALC1220_FIXUP_SYSTEM76_ORYP5_PINS,
++ ALC1220_FIXUP_CLEVO_PB51ED,
++ ALC1220_FIXUP_CLEVO_PB51ED_PINS,
+ };
+
+ static void alc889_fixup_coef(struct hda_codec *codec,
+@@ -2061,7 +2070,7 @@ static void alc1220_fixup_clevo_p950(struct hda_codec *codec,
+ static void alc_fixup_headset_mode_no_hp_mic(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action);
+
+-static void alc1220_fixup_system76_oryp5(struct hda_codec *codec,
++static void alc1220_fixup_clevo_pb51ed(struct hda_codec *codec,
+ const struct hda_fixup *fix,
+ int action)
+ {
+@@ -2313,18 +2322,18 @@ static const struct hda_fixup alc882_fixups[] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc1220_fixup_clevo_p950,
+ },
+- [ALC1220_FIXUP_SYSTEM76_ORYP5] = {
++ [ALC1220_FIXUP_CLEVO_PB51ED] = {
+ .type = HDA_FIXUP_FUNC,
+- .v.func = alc1220_fixup_system76_oryp5,
++ .v.func = alc1220_fixup_clevo_pb51ed,
+ },
+- [ALC1220_FIXUP_SYSTEM76_ORYP5_PINS] = {
++ [ALC1220_FIXUP_CLEVO_PB51ED_PINS] = {
+ .type = HDA_FIXUP_PINS,
+ .v.pins = (const struct hda_pintbl[]) {
+ { 0x19, 0x01a1913c }, /* use as headset mic, without its own jack detect */
+ {}
+ },
+ .chained = true,
+- .chain_id = ALC1220_FIXUP_SYSTEM76_ORYP5,
++ .chain_id = ALC1220_FIXUP_CLEVO_PB51ED,
+ },
+ };
+
+@@ -2402,8 +2411,9 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1558, 0x9501, "Clevo P950HR", ALC1220_FIXUP_CLEVO_P950),
+ SND_PCI_QUIRK(0x1558, 0x95e1, "Clevo P95xER", ALC1220_FIXUP_CLEVO_P950),
+ SND_PCI_QUIRK(0x1558, 0x95e2, "Clevo P950ER", ALC1220_FIXUP_CLEVO_P950),
+- SND_PCI_QUIRK(0x1558, 0x96e1, "System76 Oryx Pro (oryp5)", ALC1220_FIXUP_SYSTEM76_ORYP5_PINS),
+- SND_PCI_QUIRK(0x1558, 0x97e1, "System76 Oryx Pro (oryp5)", ALC1220_FIXUP_SYSTEM76_ORYP5_PINS),
++ SND_PCI_QUIRK(0x1558, 0x96e1, "System76 Oryx Pro (oryp5)", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++ SND_PCI_QUIRK(0x1558, 0x97e1, "System76 Oryx Pro (oryp5)", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++ SND_PCI_QUIRK(0x1558, 0x65d1, "Tuxedo Book XC1509", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ SND_PCI_QUIRK_VENDOR(0x1558, "Clevo laptop", ALC882_FIXUP_EAPD),
+ SND_PCI_QUIRK(0x161f, 0x2054, "Medion laptop", ALC883_FIXUP_EAPD),
+ SND_PCI_QUIRK(0x17aa, 0x3a0d, "Lenovo Y530", ALC882_FIXUP_LENOVO_Y530),
+@@ -2950,7 +2960,7 @@ static void alc269_shutup(struct hda_codec *codec)
+ (alc_get_coef0(codec) & 0x00ff) == 0x018) {
+ msleep(150);
+ }
+- snd_hda_shutup_pins(codec);
++ alc_shutup_pins(codec);
+ }
+
+ static struct coef_fw alc282_coefs[] = {
+@@ -3053,14 +3063,15 @@ static void alc282_shutup(struct hda_codec *codec)
+ if (hp_pin_sense)
+ msleep(85);
+
+- snd_hda_codec_write(codec, hp_pin, 0,
+- AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
++ if (!spec->no_shutup_pins)
++ snd_hda_codec_write(codec, hp_pin, 0,
++ AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
+
+ if (hp_pin_sense)
+ msleep(100);
+
+ alc_auto_setup_eapd(codec, false);
+- snd_hda_shutup_pins(codec);
++ alc_shutup_pins(codec);
+ alc_write_coef_idx(codec, 0x78, coef78);
+ }
+
+@@ -3166,15 +3177,16 @@ static void alc283_shutup(struct hda_codec *codec)
+ if (hp_pin_sense)
+ msleep(100);
+
+- snd_hda_codec_write(codec, hp_pin, 0,
+- AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
++ if (!spec->no_shutup_pins)
++ snd_hda_codec_write(codec, hp_pin, 0,
++ AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
+
+ alc_update_coef_idx(codec, 0x46, 0, 3 << 12);
+
+ if (hp_pin_sense)
+ msleep(100);
+ alc_auto_setup_eapd(codec, false);
+- snd_hda_shutup_pins(codec);
++ alc_shutup_pins(codec);
+ alc_write_coef_idx(codec, 0x43, 0x9614);
+ }
+
+@@ -3240,14 +3252,15 @@ static void alc256_shutup(struct hda_codec *codec)
+ /* NOTE: call this before clearing the pin, otherwise codec stalls */
+ alc_update_coef_idx(codec, 0x46, 0, 3 << 12);
+
+- snd_hda_codec_write(codec, hp_pin, 0,
+- AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
++ if (!spec->no_shutup_pins)
++ snd_hda_codec_write(codec, hp_pin, 0,
++ AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
+
+ if (hp_pin_sense)
+ msleep(100);
+
+ alc_auto_setup_eapd(codec, false);
+- snd_hda_shutup_pins(codec);
++ alc_shutup_pins(codec);
+ }
+
+ static void alc225_init(struct hda_codec *codec)
+@@ -3334,7 +3347,7 @@ static void alc225_shutup(struct hda_codec *codec)
+ msleep(100);
+
+ alc_auto_setup_eapd(codec, false);
+- snd_hda_shutup_pins(codec);
++ alc_shutup_pins(codec);
+ }
+
+ static void alc_default_init(struct hda_codec *codec)
+@@ -3388,14 +3401,15 @@ static void alc_default_shutup(struct hda_codec *codec)
+ if (hp_pin_sense)
+ msleep(85);
+
+- snd_hda_codec_write(codec, hp_pin, 0,
+- AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
++ if (!spec->no_shutup_pins)
++ snd_hda_codec_write(codec, hp_pin, 0,
++ AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
+
+ if (hp_pin_sense)
+ msleep(100);
+
+ alc_auto_setup_eapd(codec, false);
+- snd_hda_shutup_pins(codec);
++ alc_shutup_pins(codec);
+ }
+
+ static void alc294_hp_init(struct hda_codec *codec)
+@@ -3412,8 +3426,9 @@ static void alc294_hp_init(struct hda_codec *codec)
+
+ msleep(100);
+
+- snd_hda_codec_write(codec, hp_pin, 0,
+- AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
++ if (!spec->no_shutup_pins)
++ snd_hda_codec_write(codec, hp_pin, 0,
++ AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
+
+ alc_update_coef_idx(codec, 0x6f, 0x000f, 0);/* Set HP depop to manual mode */
+ alc_update_coefex_idx(codec, 0x58, 0x00, 0x8000, 0x8000); /* HP depop procedure start */
+@@ -5007,16 +5022,12 @@ static void alc_fixup_auto_mute_via_amp(struct hda_codec *codec,
+ }
+ }
+
+-static void alc_no_shutup(struct hda_codec *codec)
+-{
+-}
+-
+ static void alc_fixup_no_shutup(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+ {
+ if (action == HDA_FIXUP_ACT_PRE_PROBE) {
+ struct alc_spec *spec = codec->spec;
+- spec->shutup = alc_no_shutup;
++ spec->no_shutup_pins = 1;
+ }
+ }
+
+@@ -5479,7 +5490,7 @@ static void alc_headset_btn_callback(struct hda_codec *codec,
+ jack->jack->button_state = report;
+ }
+
+-static void alc_fixup_headset_jack(struct hda_codec *codec,
++static void alc295_fixup_chromebook(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+ {
+
+@@ -5489,6 +5500,16 @@ static void alc_fixup_headset_jack(struct hda_codec *codec,
+ alc_headset_btn_callback);
+ snd_hda_jack_add_kctl(codec, 0x55, "Headset Jack", false,
+ SND_JACK_HEADSET, alc_headset_btn_keymap);
++ switch (codec->core.vendor_id) {
++ case 0x10ec0295:
++ alc_update_coef_idx(codec, 0x4a, 0x8000, 1 << 15); /* Reset HP JD */
++ alc_update_coef_idx(codec, 0x4a, 0x8000, 0 << 15);
++ break;
++ case 0x10ec0236:
++ alc_update_coef_idx(codec, 0x1b, 0x8000, 1 << 15); /* Reset HP JD */
++ alc_update_coef_idx(codec, 0x1b, 0x8000, 0 << 15);
++ break;
++ }
+ break;
+ case HDA_FIXUP_ACT_INIT:
+ switch (codec->core.vendor_id) {
+@@ -5641,6 +5662,7 @@ enum {
+ ALC233_FIXUP_ASUS_MIC_NO_PRESENCE,
+ ALC233_FIXUP_EAPD_COEF_AND_MIC_NO_PRESENCE,
+ ALC233_FIXUP_LENOVO_MULTI_CODECS,
++ ALC233_FIXUP_ACER_HEADSET_MIC,
+ ALC294_FIXUP_LENOVO_MIC_LOCATION,
+ ALC225_FIXUP_DELL_WYSE_MIC_NO_PRESENCE,
+ ALC700_FIXUP_INTEL_REFERENCE,
+@@ -5658,9 +5680,16 @@ enum {
+ ALC294_FIXUP_ASUS_MIC,
+ ALC294_FIXUP_ASUS_HEADSET_MIC,
+ ALC294_FIXUP_ASUS_SPK,
+- ALC225_FIXUP_HEADSET_JACK,
+ ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE,
+ ALC285_FIXUP_LENOVO_PC_BEEP_IN_NOISE,
++ ALC255_FIXUP_ACER_HEADSET_MIC,
++ ALC295_FIXUP_CHROME_BOOK,
++ ALC225_FIXUP_DELL_WYSE_AIO_MIC_NO_PRESENCE,
++ ALC225_FIXUP_WYSE_AUTO_MUTE,
++ ALC225_FIXUP_WYSE_DISABLE_MIC_VREF,
++ ALC286_FIXUP_ACER_AIO_HEADSET_MIC,
++ ALC256_FIXUP_ASUS_MIC_NO_PRESENCE,
++ ALC299_FIXUP_PREDATOR_SPK,
+ };
+
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -6461,6 +6490,16 @@ static const struct hda_fixup alc269_fixups[] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc233_alc662_fixup_lenovo_dual_codecs,
+ },
++ [ALC233_FIXUP_ACER_HEADSET_MIC] = {
++ .type = HDA_FIXUP_VERBS,
++ .v.verbs = (const struct hda_verb[]) {
++ { 0x20, AC_VERB_SET_COEF_INDEX, 0x45 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x5089 },
++ { }
++ },
++ .chained = true,
++ .chain_id = ALC233_FIXUP_ASUS_MIC_NO_PRESENCE
++ },
+ [ALC294_FIXUP_LENOVO_MIC_LOCATION] = {
+ .type = HDA_FIXUP_PINS,
+ .v.pins = (const struct hda_pintbl[]) {
+@@ -6603,9 +6642,9 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC294_FIXUP_ASUS_HEADSET_MIC
+ },
+- [ALC225_FIXUP_HEADSET_JACK] = {
++ [ALC295_FIXUP_CHROME_BOOK] = {
+ .type = HDA_FIXUP_FUNC,
+- .v.func = alc_fixup_headset_jack,
++ .v.func = alc295_fixup_chromebook,
+ },
+ [ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE] = {
+ .type = HDA_FIXUP_PINS,
+@@ -6627,6 +6666,64 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC285_FIXUP_LENOVO_HEADPHONE_NOISE
+ },
++ [ALC255_FIXUP_ACER_HEADSET_MIC] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x19, 0x03a11130 },
++ { 0x1a, 0x90a60140 }, /* use as internal mic */
++ { }
++ },
++ .chained = true,
++ .chain_id = ALC255_FIXUP_HEADSET_MODE_NO_HP_MIC
++ },
++ [ALC225_FIXUP_DELL_WYSE_AIO_MIC_NO_PRESENCE] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x16, 0x01011020 }, /* Rear Line out */
++ { 0x19, 0x01a1913c }, /* use as Front headset mic, without its own jack detect */
++ { }
++ },
++ .chained = true,
++ .chain_id = ALC225_FIXUP_WYSE_AUTO_MUTE
++ },
++ [ALC225_FIXUP_WYSE_AUTO_MUTE] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc_fixup_auto_mute_via_amp,
++ .chained = true,
++ .chain_id = ALC225_FIXUP_WYSE_DISABLE_MIC_VREF
++ },
++ [ALC225_FIXUP_WYSE_DISABLE_MIC_VREF] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc_fixup_disable_mic_vref,
++ .chained = true,
++ .chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC
++ },
++ [ALC286_FIXUP_ACER_AIO_HEADSET_MIC] = {
++ .type = HDA_FIXUP_VERBS,
++ .v.verbs = (const struct hda_verb[]) {
++ { 0x20, AC_VERB_SET_COEF_INDEX, 0x4f },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x5029 },
++ { }
++ },
++ .chained = true,
++ .chain_id = ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE
++ },
++ [ALC256_FIXUP_ASUS_MIC_NO_PRESENCE] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x19, 0x04a11120 }, /* use as headset mic, without its own jack detect */
++ { }
++ },
++ .chained = true,
++ .chain_id = ALC256_FIXUP_ASUS_HEADSET_MODE
++ },
++ [ALC299_FIXUP_PREDATOR_SPK] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x21, 0x90170150 }, /* use as headset mic, without its own jack detect */
++ { }
++ }
++ },
+ };
+
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -6643,9 +6740,15 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1025, 0x079b, "Acer Aspire V5-573G", ALC282_FIXUP_ASPIRE_V5_PINS),
+ SND_PCI_QUIRK(0x1025, 0x102b, "Acer Aspire C24-860", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1025, 0x106d, "Acer Cloudbook 14", ALC283_FIXUP_CHROME_BOOK),
+- SND_PCI_QUIRK(0x1025, 0x128f, "Acer Veriton Z6860G", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE),
+- SND_PCI_QUIRK(0x1025, 0x1290, "Acer Veriton Z4860G", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE),
+- SND_PCI_QUIRK(0x1025, 0x1291, "Acer Veriton Z4660G", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1025, 0x1099, "Acer Aspire E5-523G", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1025, 0x110e, "Acer Aspire ES1-432", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1025, 0x1246, "Acer Predator Helios 500", ALC299_FIXUP_PREDATOR_SPK),
++ SND_PCI_QUIRK(0x1025, 0x128f, "Acer Veriton Z6860G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
++ SND_PCI_QUIRK(0x1025, 0x1290, "Acer Veriton Z4860G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
++ SND_PCI_QUIRK(0x1025, 0x1291, "Acer Veriton Z4660G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
++ SND_PCI_QUIRK(0x1025, 0x1308, "Acer Aspire Z24-890", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
++ SND_PCI_QUIRK(0x1025, 0x132a, "Acer TravelMate B114-21", ALC233_FIXUP_ACER_HEADSET_MIC),
++ SND_PCI_QUIRK(0x1025, 0x1330, "Acer TravelMate X514-51T", ALC255_FIXUP_ACER_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1028, 0x0470, "Dell M101z", ALC269_FIXUP_DELL_M101Z),
+ SND_PCI_QUIRK(0x1028, 0x054b, "Dell XPS one 2710", ALC275_FIXUP_DELL_XPS),
+ SND_PCI_QUIRK(0x1028, 0x05bd, "Dell Latitude E6440", ALC292_FIXUP_DELL_E7X),
+@@ -6677,6 +6780,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1028, 0x0704, "Dell XPS 13 9350", ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE),
+ SND_PCI_QUIRK(0x1028, 0x0706, "Dell Inspiron 7559", ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER),
+ SND_PCI_QUIRK(0x1028, 0x0725, "Dell Inspiron 3162", ALC255_FIXUP_DELL_SPK_NOISE),
++ SND_PCI_QUIRK(0x1028, 0x0738, "Dell Precision 5820", ALC269_FIXUP_NO_SHUTUP),
+ SND_PCI_QUIRK(0x1028, 0x075b, "Dell XPS 13 9360", ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE),
+ SND_PCI_QUIRK(0x1028, 0x075c, "Dell XPS 27 7760", ALC298_FIXUP_SPK_VOLUME),
+ SND_PCI_QUIRK(0x1028, 0x075d, "Dell AIO", ALC298_FIXUP_SPK_VOLUME),
+@@ -6689,6 +6793,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1028, 0x0871, "Dell Precision 3630", ALC255_FIXUP_DELL_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1028, 0x0872, "Dell Precision 3630", ALC255_FIXUP_DELL_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1028, 0x0873, "Dell Precision 3930", ALC255_FIXUP_DUMMY_LINEOUT_VERB),
++ SND_PCI_QUIRK(0x1028, 0x08ad, "Dell WYSE AIO", ALC225_FIXUP_DELL_WYSE_AIO_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1028, 0x08ae, "Dell WYSE NB", ALC225_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x0935, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB),
+ SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+@@ -6751,11 +6857,13 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x2336, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ SND_PCI_QUIRK(0x103c, 0x2337, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ SND_PCI_QUIRK(0x103c, 0x221c, "HP EliteBook 755 G2", ALC280_FIXUP_HP_HEADSET_MIC),
++ SND_PCI_QUIRK(0x103c, 0x802e, "HP Z240 SFF", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x103c, 0x802f, "HP Z240", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x103c, 0x820d, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ SND_PCI_QUIRK(0x103c, 0x8256, "HP", ALC221_FIXUP_HP_FRONT_MIC),
+ SND_PCI_QUIRK(0x103c, 0x827e, "HP x360", ALC295_FIXUP_HP_X360),
+- SND_PCI_QUIRK(0x103c, 0x82bf, "HP", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
+- SND_PCI_QUIRK(0x103c, 0x82c0, "HP", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x103c, 0x82bf, "HP G3 mini", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x103c, 0x82c0, "HP G3 mini premium", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x103c, 0x83b9, "HP Spectre x360", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+ SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
+@@ -6771,7 +6879,6 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x12e0, "ASUS X541SA", ALC256_FIXUP_ASUS_MIC),
+ SND_PCI_QUIRK(0x1043, 0x13b0, "ASUS Z550SA", ALC256_FIXUP_ASUS_MIC),
+ SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_ASUS_ZENBOOK),
+- SND_PCI_QUIRK(0x1043, 0x14a1, "ASUS UX533FD", ALC294_FIXUP_ASUS_SPK),
+ SND_PCI_QUIRK(0x1043, 0x1517, "Asus Zenbook UX31A", ALC269VB_FIXUP_ASUS_ZENBOOK_UX31A),
+ SND_PCI_QUIRK(0x1043, 0x16e3, "ASUS UX50", ALC269_FIXUP_STEREO_DMIC),
+ SND_PCI_QUIRK(0x1043, 0x1a13, "Asus G73Jw", ALC269_FIXUP_ASUS_G73JW),
+@@ -7036,7 +7143,8 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ {.id = ALC255_FIXUP_DUMMY_LINEOUT_VERB, .name = "alc255-dummy-lineout"},
+ {.id = ALC255_FIXUP_DELL_HEADSET_MIC, .name = "alc255-dell-headset"},
+ {.id = ALC295_FIXUP_HP_X360, .name = "alc295-hp-x360"},
+- {.id = ALC225_FIXUP_HEADSET_JACK, .name = "alc-sense-combo"},
++ {.id = ALC295_FIXUP_CHROME_BOOK, .name = "alc-sense-combo"},
++ {.id = ALC299_FIXUP_PREDATOR_SPK, .name = "predator-spk"},
+ {}
+ };
+ #define ALC225_STANDARD_PINS \
+@@ -7257,6 +7365,18 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ {0x14, 0x90170110},
+ {0x1b, 0x90a70130},
+ {0x21, 0x03211020}),
++ SND_HDA_PIN_QUIRK(0x10ec0256, 0x1043, "ASUS", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE,
++ {0x12, 0x90a60130},
++ {0x14, 0x90170110},
++ {0x21, 0x03211020}),
++ SND_HDA_PIN_QUIRK(0x10ec0256, 0x1043, "ASUS", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE,
++ {0x12, 0x90a60130},
++ {0x14, 0x90170110},
++ {0x21, 0x04211020}),
++ SND_HDA_PIN_QUIRK(0x10ec0256, 0x1043, "ASUS", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE,
++ {0x1a, 0x90a70130},
++ {0x1b, 0x90170110},
++ {0x21, 0x03211020}),
+ SND_HDA_PIN_QUIRK(0x10ec0274, 0x1028, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB,
+ {0x12, 0xb7a60130},
+ {0x13, 0xb8a61140},
+@@ -7388,6 +7508,10 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ {0x14, 0x90170110},
+ {0x1b, 0x90a70130},
+ {0x21, 0x04211020}),
++ SND_HDA_PIN_QUIRK(0x10ec0294, 0x1043, "ASUS", ALC294_FIXUP_ASUS_SPK,
++ {0x12, 0x90a60130},
++ {0x17, 0x90170110},
++ {0x21, 0x03211020}),
+ SND_HDA_PIN_QUIRK(0x10ec0294, 0x1043, "ASUS", ALC294_FIXUP_ASUS_SPK,
+ {0x12, 0x90a60130},
+ {0x17, 0x90170110},
+diff --git a/sound/soc/codecs/pcm186x.c b/sound/soc/codecs/pcm186x.c
+index 809b7e9f03ca..c5fcc632f670 100644
+--- a/sound/soc/codecs/pcm186x.c
++++ b/sound/soc/codecs/pcm186x.c
+@@ -42,7 +42,7 @@ struct pcm186x_priv {
+ bool is_master_mode;
+ };
+
+-static const DECLARE_TLV_DB_SCALE(pcm186x_pga_tlv, -1200, 4000, 50);
++static const DECLARE_TLV_DB_SCALE(pcm186x_pga_tlv, -1200, 50, 0);
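++/* args are (name, min, step, mute): -12 dB minimum in 0.5 dB steps, no mute */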
+
+ static const struct snd_kcontrol_new pcm1863_snd_controls[] = {
+ SOC_DOUBLE_R_S_TLV("ADC Capture Volume", PCM186X_PGA_VAL_CH1_L,
+@@ -158,7 +158,7 @@ static const struct snd_soc_dapm_widget pcm1863_dapm_widgets[] = {
+ * Put the codec into SLEEP mode when not in use, allowing the
+ * Energysense mechanism to operate.
+ */
+- SND_SOC_DAPM_ADC("ADC", "HiFi Capture", PCM186X_POWER_CTRL, 1, 0),
++ SND_SOC_DAPM_ADC("ADC", "HiFi Capture", PCM186X_POWER_CTRL, 1, 1),
+ };
+
+ static const struct snd_soc_dapm_widget pcm1865_dapm_widgets[] = {
+@@ -184,8 +184,8 @@ static const struct snd_soc_dapm_widget pcm1865_dapm_widgets[] = {
+ * Put the codec into SLEEP mode when not in use, allowing the
+ * Energysense mechanism to operate.
+ */
+- SND_SOC_DAPM_ADC("ADC1", "HiFi Capture 1", PCM186X_POWER_CTRL, 1, 0),
+- SND_SOC_DAPM_ADC("ADC2", "HiFi Capture 2", PCM186X_POWER_CTRL, 1, 0),
++ SND_SOC_DAPM_ADC("ADC1", "HiFi Capture 1", PCM186X_POWER_CTRL, 1, 1),
++ SND_SOC_DAPM_ADC("ADC2", "HiFi Capture 2", PCM186X_POWER_CTRL, 1, 1),
+ };
+
+ static const struct snd_soc_dapm_route pcm1863_dapm_routes[] = {
+diff --git a/sound/soc/fsl/fsl-asoc-card.c b/sound/soc/fsl/fsl-asoc-card.c
+index 81f2fe2c6d23..60f87a0d99f4 100644
+--- a/sound/soc/fsl/fsl-asoc-card.c
++++ b/sound/soc/fsl/fsl-asoc-card.c
+@@ -689,6 +689,7 @@ static int fsl_asoc_card_probe(struct platform_device *pdev)
+ asrc_fail:
+ of_node_put(asrc_np);
+ of_node_put(codec_np);
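++ /* drop the device reference taken by of_find_device_by_node() */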
++ put_device(&cpu_pdev->dev);
+ fail:
+ of_node_put(cpu_np);
+
+diff --git a/sound/soc/fsl/fsl_esai.c b/sound/soc/fsl/fsl_esai.c
+index 57b484768a58..3623aa9a6f2e 100644
+--- a/sound/soc/fsl/fsl_esai.c
++++ b/sound/soc/fsl/fsl_esai.c
+@@ -54,6 +54,8 @@ struct fsl_esai {
+ u32 fifo_depth;
+ u32 slot_width;
+ u32 slots;
++ u32 tx_mask;
++ u32 rx_mask;
+ u32 hck_rate[2];
+ u32 sck_rate[2];
+ bool hck_dir[2];
+@@ -361,21 +363,13 @@ static int fsl_esai_set_dai_tdm_slot(struct snd_soc_dai *dai, u32 tx_mask,
+ regmap_update_bits(esai_priv->regmap, REG_ESAI_TCCR,
+ ESAI_xCCR_xDC_MASK, ESAI_xCCR_xDC(slots));
+
+- regmap_update_bits(esai_priv->regmap, REG_ESAI_TSMA,
+- ESAI_xSMA_xS_MASK, ESAI_xSMA_xS(tx_mask));
+- regmap_update_bits(esai_priv->regmap, REG_ESAI_TSMB,
+- ESAI_xSMB_xS_MASK, ESAI_xSMB_xS(tx_mask));
+-
+ regmap_update_bits(esai_priv->regmap, REG_ESAI_RCCR,
+ ESAI_xCCR_xDC_MASK, ESAI_xCCR_xDC(slots));
+
+- regmap_update_bits(esai_priv->regmap, REG_ESAI_RSMA,
+- ESAI_xSMA_xS_MASK, ESAI_xSMA_xS(rx_mask));
+- regmap_update_bits(esai_priv->regmap, REG_ESAI_RSMB,
+- ESAI_xSMB_xS_MASK, ESAI_xSMB_xS(rx_mask));
+-
+ esai_priv->slot_width = slot_width;
+ esai_priv->slots = slots;
++ esai_priv->tx_mask = tx_mask;
++ esai_priv->rx_mask = rx_mask;
+
+ return 0;
+ }
+@@ -398,7 +392,8 @@ static int fsl_esai_set_dai_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ break;
+ case SND_SOC_DAIFMT_RIGHT_J:
+ /* Data on rising edge of bclk, frame high, right aligned */
+- xccr |= ESAI_xCCR_xCKP | ESAI_xCCR_xHCKP | ESAI_xCR_xWA;
++ xccr |= ESAI_xCCR_xCKP | ESAI_xCCR_xHCKP;
++ xcr |= ESAI_xCR_xWA;
+ break;
+ case SND_SOC_DAIFMT_DSP_A:
+ /* Data on rising edge of bclk, frame high, 1clk before data */
+@@ -455,12 +450,12 @@ static int fsl_esai_set_dai_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ return -EINVAL;
+ }
+
+- mask = ESAI_xCR_xFSL | ESAI_xCR_xFSR;
++ mask = ESAI_xCR_xFSL | ESAI_xCR_xFSR | ESAI_xCR_xWA;
+ regmap_update_bits(esai_priv->regmap, REG_ESAI_TCR, mask, xcr);
+ regmap_update_bits(esai_priv->regmap, REG_ESAI_RCR, mask, xcr);
+
+ mask = ESAI_xCCR_xCKP | ESAI_xCCR_xHCKP | ESAI_xCCR_xFSP |
+- ESAI_xCCR_xFSD | ESAI_xCCR_xCKD | ESAI_xCR_xWA;
++ ESAI_xCCR_xFSD | ESAI_xCCR_xCKD;
+ regmap_update_bits(esai_priv->regmap, REG_ESAI_TCCR, mask, xccr);
+ regmap_update_bits(esai_priv->regmap, REG_ESAI_RCCR, mask, xccr);
+
+@@ -595,6 +590,7 @@ static int fsl_esai_trigger(struct snd_pcm_substream *substream, int cmd,
+ bool tx = substream->stream == SNDRV_PCM_STREAM_PLAYBACK;
+ u8 i, channels = substream->runtime->channels;
+ u32 pins = DIV_ROUND_UP(channels, esai_priv->slots);
++ u32 mask;
+
+ switch (cmd) {
+ case SNDRV_PCM_TRIGGER_START:
+@@ -607,15 +603,38 @@ static int fsl_esai_trigger(struct snd_pcm_substream *substream, int cmd,
+ for (i = 0; tx && i < channels; i++)
+ regmap_write(esai_priv->regmap, REG_ESAI_ETDR, 0x0);
+
++ /*
++ * If TE/RE is set at the end of the enablement flow, a channel
++ * swap can occur in the multi-data-line case. To work around
++ * this, switch the bit enablement sequence to the following:
++ * 1) clear xSMB & xSMA: done in probe and in the stop
++ * state.
++ * 2) set TE/RE
++ * 3) set xSMB
++ * 4) set xSMA: xSMA is the last step in this flow and
++ * triggers the ESAI to start.
++ */
+ regmap_update_bits(esai_priv->regmap, REG_ESAI_xCR(tx),
+ tx ? ESAI_xCR_TE_MASK : ESAI_xCR_RE_MASK,
+ tx ? ESAI_xCR_TE(pins) : ESAI_xCR_RE(pins));
++ mask = tx ? esai_priv->tx_mask : esai_priv->rx_mask;
++
++ regmap_update_bits(esai_priv->regmap, REG_ESAI_xSMB(tx),
++ ESAI_xSMB_xS_MASK, ESAI_xSMB_xS(mask));
++ regmap_update_bits(esai_priv->regmap, REG_ESAI_xSMA(tx),
++ ESAI_xSMA_xS_MASK, ESAI_xSMA_xS(mask));
++
+ break;
+ case SNDRV_PCM_TRIGGER_SUSPEND:
+ case SNDRV_PCM_TRIGGER_STOP:
+ case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
+ regmap_update_bits(esai_priv->regmap, REG_ESAI_xCR(tx),
+ tx ? ESAI_xCR_TE_MASK : ESAI_xCR_RE_MASK, 0);
++ regmap_update_bits(esai_priv->regmap, REG_ESAI_xSMA(tx),
++ ESAI_xSMA_xS_MASK, 0);
++ regmap_update_bits(esai_priv->regmap, REG_ESAI_xSMB(tx),
++ ESAI_xSMB_xS_MASK, 0);
+
+ /* Disable and reset FIFO */
+ regmap_update_bits(esai_priv->regmap, REG_ESAI_xFCR(tx),
+@@ -905,6 +924,15 @@ static int fsl_esai_probe(struct platform_device *pdev)
+ return ret;
+ }
+
++ esai_priv->tx_mask = 0xFFFFFFFF;
++ esai_priv->rx_mask = 0xFFFFFFFF;
++
++ /* Clear the TSMA, TSMB, RSMA, RSMB */
++ regmap_write(esai_priv->regmap, REG_ESAI_TSMA, 0);
++ regmap_write(esai_priv->regmap, REG_ESAI_TSMB, 0);
++ regmap_write(esai_priv->regmap, REG_ESAI_RSMA, 0);
++ regmap_write(esai_priv->regmap, REG_ESAI_RSMB, 0);
++
+ ret = devm_snd_soc_register_component(&pdev->dev, &fsl_esai_component,
+ &fsl_esai_dai, 1);
+ if (ret) {
+diff --git a/sound/soc/fsl/imx-sgtl5000.c b/sound/soc/fsl/imx-sgtl5000.c
+index c29200cf755a..9b9a7ec52905 100644
+--- a/sound/soc/fsl/imx-sgtl5000.c
++++ b/sound/soc/fsl/imx-sgtl5000.c
+@@ -108,6 +108,7 @@ static int imx_sgtl5000_probe(struct platform_device *pdev)
+ ret = -EPROBE_DEFER;
+ goto fail;
+ }
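++ /* done with ssi_pdev: drop the ref from of_find_device_by_node() */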
++ put_device(&ssi_pdev->dev);
+ codec_dev = of_find_i2c_device_by_node(codec_np);
+ if (!codec_dev) {
+ dev_err(&pdev->dev, "failed to find codec platform device\n");
+diff --git a/sound/soc/generic/simple-card-utils.c b/sound/soc/generic/simple-card-utils.c
+index b807a47515eb..336895f7fd1e 100644
+--- a/sound/soc/generic/simple-card-utils.c
++++ b/sound/soc/generic/simple-card-utils.c
+@@ -283,12 +283,20 @@ static int asoc_simple_card_get_dai_id(struct device_node *ep)
+ /* use endpoint/port reg if exist */
+ ret = of_graph_parse_endpoint(ep, &info);
+ if (ret == 0) {
+- if (info.id)
++ /*
++ * of_graph_parse_endpoint() counts the port/endpoint even when
++ * it has no "reg" property, but from its result alone we can't
++ * tell "no reg" apart from "reg = <0>".
++ * So check the "reg" property explicitly.
++ */
++ if (of_get_property(ep, "reg", NULL))
+ return info.id;
+- if (info.port)
++
++ node = of_get_parent(ep);
++ of_node_put(node);
++ if (of_get_property(node, "reg", NULL))
+ return info.port;
+ }
+-
+ node = of_graph_get_port_parent(ep);
+
+ /*
+diff --git a/sound/soc/intel/atom/sst-mfld-platform-pcm.c b/sound/soc/intel/atom/sst-mfld-platform-pcm.c
+index 91a2436ce952..e9623da911d5 100644
+--- a/sound/soc/intel/atom/sst-mfld-platform-pcm.c
++++ b/sound/soc/intel/atom/sst-mfld-platform-pcm.c
+@@ -711,9 +711,17 @@ static int sst_soc_probe(struct snd_soc_component *component)
+ return sst_dsp_init_v2_dpcm(component);
+ }
+
++static void sst_soc_remove(struct snd_soc_component *component)
++{
++ struct sst_data *drv = dev_get_drvdata(component->dev);
++
++ drv->soc_card = NULL;
++}
++
+ static const struct snd_soc_component_driver sst_soc_platform_drv = {
+ .name = DRV_NAME,
+ .probe = sst_soc_probe,
++ .remove = sst_soc_remove,
+ .ops = &sst_platform_ops,
+ .compr_ops = &sst_platform_compr_ops,
+ .pcm_new = sst_pcm_new,
+diff --git a/sound/soc/qcom/common.c b/sound/soc/qcom/common.c
+index 4715527054e5..5661025e8cec 100644
+--- a/sound/soc/qcom/common.c
++++ b/sound/soc/qcom/common.c
+@@ -42,6 +42,9 @@ int qcom_snd_parse_of(struct snd_soc_card *card)
+ link = card->dai_link;
+ for_each_child_of_node(dev->of_node, np) {
+ cpu = of_get_child_by_name(np, "cpu");
++ platform = of_get_child_by_name(np, "platform");
++ codec = of_get_child_by_name(np, "codec");
++
+ if (!cpu) {
+ dev_err(dev, "Can't find cpu DT node\n");
+ ret = -EINVAL;
+@@ -63,8 +66,6 @@ int qcom_snd_parse_of(struct snd_soc_card *card)
+ goto err;
+ }
+
+- platform = of_get_child_by_name(np, "platform");
+- codec = of_get_child_by_name(np, "codec");
+ if (codec && platform) {
+ link->platform_of_node = of_parse_phandle(platform,
+ "sound-dai",
+@@ -100,10 +101,15 @@ int qcom_snd_parse_of(struct snd_soc_card *card)
+ link->dpcm_capture = 1;
+ link->stream_name = link->name;
+ link++;
++
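++ /* release the child nodes acquired at the top of this iteration */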
++ of_node_put(cpu);
++ of_node_put(codec);
++ of_node_put(platform);
+ }
+
+ return 0;
+ err:
++ of_node_put(np);
+ of_node_put(cpu);
+ of_node_put(codec);
+ of_node_put(platform);
+diff --git a/sound/xen/xen_snd_front_alsa.c b/sound/xen/xen_snd_front_alsa.c
+index a7f413cb704d..b14ab512c2ce 100644
+--- a/sound/xen/xen_snd_front_alsa.c
++++ b/sound/xen/xen_snd_front_alsa.c
+@@ -441,7 +441,7 @@ static int shbuf_setup_backstore(struct xen_snd_front_pcm_stream_info *stream,
+ {
+ int i;
+
+- stream->buffer = alloc_pages_exact(stream->buffer_sz, GFP_KERNEL);
++ stream->buffer = alloc_pages_exact(buffer_sz, GFP_KERNEL);
+ if (!stream->buffer)
+ return -ENOMEM;
+
+diff --git a/tools/build/Makefile.feature b/tools/build/Makefile.feature
+index 5467c6bf9ceb..bb9dca65eb5f 100644
+--- a/tools/build/Makefile.feature
++++ b/tools/build/Makefile.feature
+@@ -70,7 +70,6 @@ FEATURE_TESTS_BASIC := \
+ sched_getcpu \
+ sdt \
+ setns \
+- libopencsd \
+ libaio
+
+ # FEATURE_TESTS_BASIC + FEATURE_TESTS_EXTRA is the complete list
+@@ -84,6 +83,7 @@ FEATURE_TESTS_EXTRA := \
+ libbabeltrace \
+ libbfd-liberty \
+ libbfd-liberty-z \
++ libopencsd \
+ libunwind-debug-frame \
+ libunwind-debug-frame-arm \
+ libunwind-debug-frame-aarch64 \
+diff --git a/tools/build/feature/test-all.c b/tools/build/feature/test-all.c
+index 20cdaa4fc112..e903b86b742f 100644
+--- a/tools/build/feature/test-all.c
++++ b/tools/build/feature/test-all.c
+@@ -170,14 +170,14 @@
+ # include "test-setns.c"
+ #undef main
+
+-#define main main_test_libopencsd
+-# include "test-libopencsd.c"
+-#undef main
+-
+ #define main main_test_libaio
+ # include "test-libaio.c"
+ #undef main
+
++#define main main_test_reallocarray
++# include "test-reallocarray.c"
++#undef main
++
+ int main(int argc, char *argv[])
+ {
+ main_test_libpython();
+@@ -217,8 +217,8 @@ int main(int argc, char *argv[])
+ main_test_sched_getcpu();
+ main_test_sdt();
+ main_test_setns();
+- main_test_libopencsd();
+ main_test_libaio();
++ main_test_reallocarray();
+
+ return 0;
+ }
+diff --git a/tools/build/feature/test-reallocarray.c b/tools/build/feature/test-reallocarray.c
+index 8170de35150d..8f6743e31da7 100644
+--- a/tools/build/feature/test-reallocarray.c
++++ b/tools/build/feature/test-reallocarray.c
+@@ -6,3 +6,5 @@ int main(void)
+ {
+ return !!reallocarray(NULL, 1, 1);
+ }
++
++#undef _GNU_SOURCE
+diff --git a/tools/lib/bpf/Makefile b/tools/lib/bpf/Makefile
+index 34d9c3619c96..78fd86b85087 100644
+--- a/tools/lib/bpf/Makefile
++++ b/tools/lib/bpf/Makefile
+@@ -162,7 +162,8 @@ endif
+
+ TARGETS = $(CMD_TARGETS)
+
+-all: fixdep all_cmd
++all: fixdep
++ $(Q)$(MAKE) all_cmd
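++# the sub-make guarantees fixdep is fully built before all_cmd runs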
+
+ all_cmd: $(CMD_TARGETS) check
+
+diff --git a/tools/lib/lockdep/run_tests.sh b/tools/lib/lockdep/run_tests.sh
+index c8fbd0306960..11f425662b43 100755
+--- a/tools/lib/lockdep/run_tests.sh
++++ b/tools/lib/lockdep/run_tests.sh
+@@ -11,7 +11,7 @@ find tests -name '*.c' | sort | while read -r i; do
+ testname=$(basename "$i" .c)
+ echo -ne "$testname... "
+ if gcc -o "tests/$testname" -pthread "$i" liblockdep.a -Iinclude -D__USE_LIBLOCKDEP &&
+- timeout 1 "tests/$testname" 2>&1 | "tests/${testname}.sh"; then
++ timeout 1 "tests/$testname" 2>&1 | /bin/bash "tests/${testname}.sh"; then
+ echo "PASSED!"
+ else
+ echo "FAILED!"
+@@ -24,7 +24,7 @@ find tests -name '*.c' | sort | while read -r i; do
+ echo -ne "(PRELOAD) $testname... "
+ if gcc -o "tests/$testname" -pthread -Iinclude "$i" &&
+ timeout 1 ./lockdep "tests/$testname" 2>&1 |
+- "tests/${testname}.sh"; then
++ /bin/bash "tests/${testname}.sh"; then
+ echo "PASSED!"
+ else
+ echo "FAILED!"
+@@ -37,7 +37,7 @@ find tests -name '*.c' | sort | while read -r i; do
+ echo -ne "(PRELOAD + Valgrind) $testname... "
+ if gcc -o "tests/$testname" -pthread -Iinclude "$i" &&
+ { timeout 10 valgrind --read-var-info=yes ./lockdep "./tests/$testname" >& "tests/${testname}.vg.out"; true; } &&
+- "tests/${testname}.sh" < "tests/${testname}.vg.out" &&
++ /bin/bash "tests/${testname}.sh" < "tests/${testname}.vg.out" &&
+ ! grep -Eq '(^==[0-9]*== (Invalid |Uninitialised ))|Mismatched free|Source and destination overlap| UME ' "tests/${testname}.vg.out"; then
+ echo "PASSED!"
+ else
+diff --git a/tools/lib/traceevent/event-parse.c b/tools/lib/traceevent/event-parse.c
+index abd4fa5d3088..87494c7c619d 100644
+--- a/tools/lib/traceevent/event-parse.c
++++ b/tools/lib/traceevent/event-parse.c
+@@ -2457,7 +2457,7 @@ static int arg_num_eval(struct tep_print_arg *arg, long long *val)
+ static char *arg_eval (struct tep_print_arg *arg)
+ {
+ long long val;
+- static char buf[20];
++ static char buf[24];
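++ /* big enough for a 20-digit 64-bit value plus sign and NUL */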
+
+ switch (arg->type) {
+ case TEP_PRINT_ATOM:
+diff --git a/tools/objtool/Makefile b/tools/objtool/Makefile
+index c9d038f91af6..53f8be0f4a1f 100644
+--- a/tools/objtool/Makefile
++++ b/tools/objtool/Makefile
+@@ -25,14 +25,17 @@ LIBSUBCMD = $(LIBSUBCMD_OUTPUT)libsubcmd.a
+ OBJTOOL := $(OUTPUT)objtool
+ OBJTOOL_IN := $(OBJTOOL)-in.o
+
++LIBELF_FLAGS := $(shell pkg-config libelf --cflags 2>/dev/null)
++LIBELF_LIBS := $(shell pkg-config libelf --libs 2>/dev/null || echo -lelf)
++
+ all: $(OBJTOOL)
+
+ INCLUDES := -I$(srctree)/tools/include \
+ -I$(srctree)/tools/arch/$(HOSTARCH)/include/uapi \
+ -I$(srctree)/tools/objtool/arch/$(ARCH)/include
+ WARNINGS := $(EXTRA_WARNINGS) -Wno-switch-default -Wno-switch-enum -Wno-packed
+-CFLAGS += -Werror $(WARNINGS) $(KBUILD_HOSTCFLAGS) -g $(INCLUDES)
+-LDFLAGS += -lelf $(LIBSUBCMD) $(KBUILD_HOSTLDFLAGS)
++CFLAGS += -Werror $(WARNINGS) $(KBUILD_HOSTCFLAGS) -g $(INCLUDES) $(LIBELF_FLAGS)
++LDFLAGS += $(LIBELF_LIBS) $(LIBSUBCMD) $(KBUILD_HOSTLDFLAGS)
+
+ # Allow old libelf to be used:
+ elfshdr := $(shell echo '$(pound)include <libelf.h>' | $(CC) $(CFLAGS) -x c -E - | grep elf_getshdr)
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 0414a0d52262..5dde107083c6 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -2184,9 +2184,10 @@ static void cleanup(struct objtool_file *file)
+ elf_close(file->elf);
+ }
+
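++/* file-scope: the embedded hash table makes this struct too large for the stack */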
++static struct objtool_file file;
++
+ int check(const char *_objname, bool orc)
+ {
+- struct objtool_file file;
+ int ret, warnings = 0;
+
+ objname = _objname;
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index b441c88cafa1..cf4a8329c4c0 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -218,6 +218,8 @@ FEATURE_CHECK_LDFLAGS-libpython := $(PYTHON_EMBED_LDOPTS)
+ FEATURE_CHECK_CFLAGS-libpython-version := $(PYTHON_EMBED_CCOPTS)
+ FEATURE_CHECK_LDFLAGS-libpython-version := $(PYTHON_EMBED_LDOPTS)
+
++FEATURE_CHECK_LDFLAGS-libaio = -lrt
++
+ CFLAGS += -fno-omit-frame-pointer
+ CFLAGS += -ggdb3
+ CFLAGS += -funwind-tables
+@@ -386,7 +388,8 @@ ifeq ($(feature-setns), 1)
+ $(call detected,CONFIG_SETNS)
+ endif
+
+-ifndef NO_CORESIGHT
++ifdef CORESIGHT
++ $(call feature_check,libopencsd)
+ ifeq ($(feature-libopencsd), 1)
+ CFLAGS += -DHAVE_CSTRACE_SUPPORT $(LIBOPENCSD_CFLAGS)
+ LDFLAGS += $(LIBOPENCSD_LDFLAGS)
+diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf
+index 0ee6795d82cc..77f8f069f1e7 100644
+--- a/tools/perf/Makefile.perf
++++ b/tools/perf/Makefile.perf
+@@ -102,7 +102,7 @@ include ../scripts/utilities.mak
+ # When selected, pass LLVM_CONFIG=/path/to/llvm-config to `make' if
+ # llvm-config is not in $PATH.
+ #
+-# Define NO_CORESIGHT if you do not want support for CoreSight trace decoding.
++# Define CORESIGHT if you DO WANT support for CoreSight trace decoding.
+ #
+ # Define NO_AIO if you do not want support of Posix AIO based trace
+ # streaming for record mode. Currently Posix AIO trace streaming is
+diff --git a/tools/perf/builtin-c2c.c b/tools/perf/builtin-c2c.c
+index d340d2e42776..13758a0b367b 100644
+--- a/tools/perf/builtin-c2c.c
++++ b/tools/perf/builtin-c2c.c
+@@ -2055,6 +2055,12 @@ static int setup_nodes(struct perf_session *session)
+ if (!set)
+ return -ENOMEM;
+
++ nodes[node] = set;
++
++ /* empty node, skip */
++ if (cpu_map__empty(map))
++ continue;
++
+ for (cpu = 0; cpu < map->nr; cpu++) {
+ set_bit(map->map[cpu], set);
+
+@@ -2063,8 +2069,6 @@ static int setup_nodes(struct perf_session *session)
+
+ cpu2node[map->map[cpu]] = node;
+ }
+-
+- nodes[node] = set;
+ }
+
+ setup_nodes_header();
+diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c
+index ac221f137ed2..cff4d10daf49 100644
+--- a/tools/perf/builtin-script.c
++++ b/tools/perf/builtin-script.c
+@@ -148,6 +148,7 @@ static struct {
+ unsigned int print_ip_opts;
+ u64 fields;
+ u64 invalid_fields;
++ u64 user_set_fields;
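++ /* bitmask of the fields explicitly selected by the user */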
+ } output[OUTPUT_TYPE_MAX] = {
+
+ [PERF_TYPE_HARDWARE] = {
+@@ -344,7 +345,7 @@ static int perf_evsel__do_check_stype(struct perf_evsel *evsel,
+ if (attr->sample_type & sample_type)
+ return 0;
+
+- if (output[type].user_set) {
++ if (output[type].user_set_fields & field) {
+ if (allow_user_set)
+ return 0;
+ evname = perf_evsel__name(evsel);
+@@ -2627,10 +2628,13 @@ parse:
+ pr_warning("\'%s\' not valid for %s events. Ignoring.\n",
+ all_output_options[i].str, event_type(j));
+ } else {
+- if (change == REMOVE)
++ if (change == REMOVE) {
+ output[j].fields &= ~all_output_options[i].field;
+- else
++ output[j].user_set_fields &= ~all_output_options[i].field;
++ } else {
+ output[j].fields |= all_output_options[i].field;
++ output[j].user_set_fields |= all_output_options[i].field;
++ }
+ output[j].user_set = true;
+ output[j].wildcard_set = true;
+ }
+diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
+index b36061cd1ab8..91cdbf504535 100644
+--- a/tools/perf/builtin-trace.c
++++ b/tools/perf/builtin-trace.c
+@@ -1039,6 +1039,9 @@ static const size_t trace__entry_str_size = 2048;
+
+ static struct file *thread_trace__files_entry(struct thread_trace *ttrace, int fd)
+ {
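++ /* negative fds (e.g. from failed syscalls) have no table entry */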
++ if (fd < 0)
++ return NULL;
++
+ if (fd > ttrace->files.max) {
+ struct file *nfiles = realloc(ttrace->files.table, (fd + 1) * sizeof(struct file));
+
+@@ -3865,7 +3868,8 @@ int cmd_trace(int argc, const char **argv)
+ goto init_augmented_syscall_tp;
+ }
+
+- if (strcmp(perf_evsel__name(evsel), "raw_syscalls:sys_enter") == 0) {
++ if (trace.syscalls.events.augmented->priv == NULL &&
++ strstr(perf_evsel__name(evsel), "syscalls:sys_enter")) {
+ struct perf_evsel *augmented = trace.syscalls.events.augmented;
+ if (perf_evsel__init_augmented_syscall_tp(augmented, evsel) ||
+ perf_evsel__init_augmented_syscall_tp_args(augmented))
+diff --git a/tools/perf/tests/evsel-tp-sched.c b/tools/perf/tests/evsel-tp-sched.c
+index 5cbba70bcdd0..ea7acf403727 100644
+--- a/tools/perf/tests/evsel-tp-sched.c
++++ b/tools/perf/tests/evsel-tp-sched.c
+@@ -43,7 +43,7 @@ int test__perf_evsel__tp_sched_test(struct test *test __maybe_unused, int subtes
+ return -1;
+ }
+
+- if (perf_evsel__test_field(evsel, "prev_comm", 16, true))
++ if (perf_evsel__test_field(evsel, "prev_comm", 16, false))
+ ret = -1;
+
+ if (perf_evsel__test_field(evsel, "prev_pid", 4, true))
+@@ -55,7 +55,7 @@ int test__perf_evsel__tp_sched_test(struct test *test __maybe_unused, int subtes
+ if (perf_evsel__test_field(evsel, "prev_state", sizeof(long), true))
+ ret = -1;
+
+- if (perf_evsel__test_field(evsel, "next_comm", 16, true))
++ if (perf_evsel__test_field(evsel, "next_comm", 16, false))
+ ret = -1;
+
+ if (perf_evsel__test_field(evsel, "next_pid", 4, true))
+@@ -73,7 +73,7 @@ int test__perf_evsel__tp_sched_test(struct test *test __maybe_unused, int subtes
+ return -1;
+ }
+
+- if (perf_evsel__test_field(evsel, "comm", 16, true))
++ if (perf_evsel__test_field(evsel, "comm", 16, false))
+ ret = -1;
+
+ if (perf_evsel__test_field(evsel, "pid", 4, true))
+diff --git a/tools/perf/trace/beauty/msg_flags.c b/tools/perf/trace/beauty/msg_flags.c
+index d66c66315987..ea68db08b8e7 100644
+--- a/tools/perf/trace/beauty/msg_flags.c
++++ b/tools/perf/trace/beauty/msg_flags.c
+@@ -29,7 +29,7 @@ static size_t syscall_arg__scnprintf_msg_flags(char *bf, size_t size,
+ return scnprintf(bf, size, "NONE");
+ #define P_MSG_FLAG(n) \
+ if (flags & MSG_##n) { \
+- printed += scnprintf(bf + printed, size - printed, "%s%s", printed ? "|" : "", show_prefix ? prefix : "", #n); \
++ printed += scnprintf(bf + printed, size - printed, "%s%s%s", printed ? "|" : "", show_prefix ? prefix : "", #n); \
+ flags &= ~MSG_##n; \
+ }
+
+diff --git a/tools/perf/trace/beauty/waitid_options.c b/tools/perf/trace/beauty/waitid_options.c
+index 6897fab40dcc..d4d10b33ba0e 100644
+--- a/tools/perf/trace/beauty/waitid_options.c
++++ b/tools/perf/trace/beauty/waitid_options.c
+@@ -11,7 +11,7 @@ static size_t syscall_arg__scnprintf_waitid_options(char *bf, size_t size,
+
+ #define P_OPTION(n) \
+ if (options & W##n) { \
+- printed += scnprintf(bf + printed, size - printed, "%s%s%s", printed ? "|" : "", show_prefix ? prefix : #n); \
++ printed += scnprintf(bf + printed, size - printed, "%s%s%s", printed ? "|" : "", show_prefix ? prefix : "", #n); \
+ options &= ~W##n; \
+ }
+
+diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
+index 70de8f6b3aee..9142fd294e76 100644
+--- a/tools/perf/util/annotate.c
++++ b/tools/perf/util/annotate.c
+@@ -1889,6 +1889,7 @@ int symbol__annotate(struct symbol *sym, struct map *map,
+ struct annotation_options *options,
+ struct arch **parch)
+ {
++ struct annotation *notes = symbol__annotation(sym);
+ struct annotate_args args = {
+ .privsize = privsize,
+ .evsel = evsel,
+@@ -1919,6 +1920,7 @@ int symbol__annotate(struct symbol *sym, struct map *map,
+
+ args.ms.map = map;
+ args.ms.sym = sym;
++ notes->start = map__rip_2objdump(map, sym->start);
+
+ return symbol__disassemble(sym, &args);
+ }
+@@ -2794,8 +2796,6 @@ int symbol__annotate2(struct symbol *sym, struct map *map, struct perf_evsel *ev
+
+ symbol__calc_percent(sym, evsel);
+
+- notes->start = map__rip_2objdump(map, sym->start);
+-
+ annotation__set_offsets(notes, size);
+ annotation__mark_jump_targets(notes, sym);
+ annotation__compute_ipc(notes, size);
+diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c
+index f69961c4a4f3..2921ce08b198 100644
+--- a/tools/perf/util/auxtrace.c
++++ b/tools/perf/util/auxtrace.c
+@@ -1278,9 +1278,9 @@ static int __auxtrace_mmap__read(struct perf_mmap *map,
+ }
+
+ /* padding must be written by fn() e.g. record__process_auxtrace() */
+- padding = size & 7;
++ padding = size & (PERF_AUXTRACE_RECORD_ALIGNMENT - 1);
+ if (padding)
+- padding = 8 - padding;
++ padding = PERF_AUXTRACE_RECORD_ALIGNMENT - padding;
+
+ memset(&ev, 0, sizeof(ev));
+ ev.auxtrace.header.type = PERF_RECORD_AUXTRACE;
+diff --git a/tools/perf/util/auxtrace.h b/tools/perf/util/auxtrace.h
+index 8e50f96d4b23..fac32482db61 100644
+--- a/tools/perf/util/auxtrace.h
++++ b/tools/perf/util/auxtrace.h
+@@ -40,6 +40,9 @@ struct record_opts;
+ struct auxtrace_info_event;
+ struct events_stats;
+
++/* Auxtrace records must have the same alignment as perf event records */
++#define PERF_AUXTRACE_RECORD_ALIGNMENT 8
++
+ enum auxtrace_type {
+ PERF_AUXTRACE_UNKNOWN,
+ PERF_AUXTRACE_INTEL_PT,
+diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+index 4503f3ca45ab..7c0b975dd2f0 100644
+--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
++++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+@@ -26,6 +26,7 @@
+
+ #include "../cache.h"
+ #include "../util.h"
++#include "../auxtrace.h"
+
+ #include "intel-pt-insn-decoder.h"
+ #include "intel-pt-pkt-decoder.h"
+@@ -250,19 +251,15 @@ struct intel_pt_decoder *intel_pt_decoder_new(struct intel_pt_params *params)
+ if (!(decoder->tsc_ctc_ratio_n % decoder->tsc_ctc_ratio_d))
+ decoder->tsc_ctc_mult = decoder->tsc_ctc_ratio_n /
+ decoder->tsc_ctc_ratio_d;
+-
+- /*
+- * Allow for timestamps appearing to backwards because a TSC
+- * packet has slipped past a MTC packet, so allow 2 MTC ticks
+- * or ...
+- */
+- decoder->tsc_slip = multdiv(2 << decoder->mtc_shift,
+- decoder->tsc_ctc_ratio_n,
+- decoder->tsc_ctc_ratio_d);
+ }
+- /* ... or 0x100 paranoia */
+- if (decoder->tsc_slip < 0x100)
+- decoder->tsc_slip = 0x100;
++
++ /*
++ * A TSC packet can slip past MTC packets so that the timestamp appears
++ * to go backwards. One estimate is that it can be up to about 40 CPU
++ * cycles, which is certainly less than 0x1000 TSC ticks, but accept
++ * slippage an order of magnitude more to be on the safe side.
++ */
++ decoder->tsc_slip = 0x10000;
+
+ intel_pt_log("timestamp: mtc_shift %u\n", decoder->mtc_shift);
+ intel_pt_log("timestamp: tsc_ctc_ratio_n %u\n", decoder->tsc_ctc_ratio_n);
+@@ -1394,7 +1391,6 @@ static int intel_pt_overflow(struct intel_pt_decoder *decoder)
+ {
+ intel_pt_log("ERROR: Buffer overflow\n");
+ intel_pt_clear_tx_flags(decoder);
+- decoder->cbr = 0;
+ decoder->timestamp_insn_cnt = 0;
+ decoder->pkt_state = INTEL_PT_STATE_ERR_RESYNC;
+ decoder->overflow = true;
+@@ -2575,6 +2571,34 @@ static int intel_pt_tsc_cmp(uint64_t tsc1, uint64_t tsc2)
+ }
+ }
+
++#define MAX_PADDING (PERF_AUXTRACE_RECORD_ALIGNMENT - 1)
++
++/**
++ * adj_for_padding - adjust overlap to account for padding.
++ * @buf_b: second buffer
++ * @buf_a: first buffer
++ * @len_a: size of first buffer
++ *
++ * @buf_a might have up to 7 bytes of padding appended. Adjust the overlap
++ * accordingly.
++ *
++ * Return: A pointer into @buf_b from where non-overlapped data starts
++ */
++static unsigned char *adj_for_padding(unsigned char *buf_b,
++ unsigned char *buf_a, size_t len_a)
++{
++ unsigned char *p = buf_b - MAX_PADDING;
++ unsigned char *q = buf_a + len_a - MAX_PADDING;
++ int i;
++
++ for (i = MAX_PADDING; i; i--, p++, q++) {
++ if (*p != *q)
++ break;
++ }
++
++ return p;
++}
++
+ /**
+ * intel_pt_find_overlap_tsc - determine start of non-overlapped trace data
+ * using TSC.
+@@ -2625,8 +2649,11 @@ static unsigned char *intel_pt_find_overlap_tsc(unsigned char *buf_a,
+
+ /* Same TSC, so buffers are consecutive */
+ if (!cmp && rem_b >= rem_a) {
++ unsigned char *start;
++
+ *consecutive = true;
+- return buf_b + len_b - (rem_b - rem_a);
++ start = buf_b + len_b - (rem_b - rem_a);
++ return adj_for_padding(start, buf_a, len_a);
+ }
+ if (cmp < 0)
+ return buf_b; /* tsc_a < tsc_b => no overlap */
+@@ -2689,7 +2716,7 @@ unsigned char *intel_pt_find_overlap(unsigned char *buf_a, size_t len_a,
+ found = memmem(buf_a, len_a, buf_b, len_a);
+ if (found) {
+ *consecutive = true;
+- return buf_b + len_a;
++ return adj_for_padding(buf_b + len_a, buf_a, len_a);
+ }
+
+ /* Try again at next PSB in buffer 'a' */
+diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
+index 2e72373ec6df..4493fc13a6fa 100644
+--- a/tools/perf/util/intel-pt.c
++++ b/tools/perf/util/intel-pt.c
+@@ -2522,6 +2522,8 @@ int intel_pt_process_auxtrace_info(union perf_event *event,
+ }
+
+ pt->timeless_decoding = intel_pt_timeless_decoding(pt);
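++ /* a zero time_mult would trip a divide-by-zero in timestamp conversion */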
++ if (pt->timeless_decoding && !pt->tc.time_mult)
++ pt->tc.time_mult = 1;
+ pt->have_tsc = intel_pt_have_tsc(pt);
+ pt->sampling_mode = false;
+ pt->est_tsc = !pt->timeless_decoding;
+diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
+index 11a234740632..ccd3275feeaa 100644
+--- a/tools/perf/util/pmu.c
++++ b/tools/perf/util/pmu.c
+@@ -734,10 +734,20 @@ static void pmu_add_cpu_aliases(struct list_head *head, struct perf_pmu *pmu)
+
+ if (!is_arm_pmu_core(name)) {
+ pname = pe->pmu ? pe->pmu : "cpu";
++
++ /*
++ * an uncore alias may come from a different PMU
++ * that shares a common name prefix
++ */
++ if (pmu_is_uncore(name) &&
++ !strncmp(pname, name, strlen(pname)))
++ goto new_alias;
++
+ if (strcmp(pname, name))
+ continue;
+ }
+
++new_alias:
+ /* need type casts to override 'const' */
+ __perf_pmu__new_alias(head, NULL, (char *)pe->name,
+ (char *)pe->desc, (char *)pe->event,
+diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
+index 18a59fba97ff..cc4773157b9b 100644
+--- a/tools/perf/util/probe-event.c
++++ b/tools/perf/util/probe-event.c
+@@ -157,8 +157,10 @@ static struct map *kernel_get_module_map(const char *module)
+ if (module && strchr(module, '/'))
+ return dso__new_map(module);
+
+- if (!module)
+- module = "kernel";
++ if (!module) {
++ pos = machine__kernel_map(host_machine);
++ return map__get(pos);
++ }
+
+ for (pos = maps__first(maps); pos; pos = map__next(pos)) {
+ /* short_name is "[module]" */
+diff --git a/tools/perf/util/s390-cpumsf.c b/tools/perf/util/s390-cpumsf.c
+index 68b2570304ec..08073a4d59a4 100644
+--- a/tools/perf/util/s390-cpumsf.c
++++ b/tools/perf/util/s390-cpumsf.c
+@@ -301,6 +301,11 @@ static bool s390_cpumsf_validate(int machine_type,
+ *dsdes = 85;
+ *bsdes = 32;
+ break;
++ case 2964:
++ case 2965:
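++ /* z13 (2964) and z13s (2965) */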
++ *dsdes = 112;
++ *bsdes = 32;
++ break;
+ default:
+ /* Illegal trailer entry */
+ return false;
+diff --git a/tools/perf/util/scripting-engines/trace-event-python.c b/tools/perf/util/scripting-engines/trace-event-python.c
+index 87ef16a1b17e..7059d1be2d09 100644
+--- a/tools/perf/util/scripting-engines/trace-event-python.c
++++ b/tools/perf/util/scripting-engines/trace-event-python.c
+@@ -733,8 +733,7 @@ static PyObject *get_perf_sample_dict(struct perf_sample *sample,
+ Py_FatalError("couldn't create Python dictionary");
+
+ pydict_set_item_string_decref(dict, "ev_name", _PyUnicode_FromString(perf_evsel__name(evsel)));
+- pydict_set_item_string_decref(dict, "attr", _PyUnicode_FromStringAndSize(
+- (const char *)&evsel->attr, sizeof(evsel->attr)));
++ pydict_set_item_string_decref(dict, "attr", _PyBytes_FromStringAndSize((const char *)&evsel->attr, sizeof(evsel->attr)));
+
+ pydict_set_item_string_decref(dict_sample, "pid",
+ _PyLong_FromLong(sample->pid));
+@@ -1494,34 +1493,40 @@ static void _free_command_line(wchar_t **command_line, int num)
+ static int python_start_script(const char *script, int argc, const char **argv)
+ {
+ struct tables *tables = &tables_global;
++ PyMODINIT_FUNC (*initfunc)(void);
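++ /* module-init entry point; its name differs between Python 2 and 3 */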
+ #if PY_MAJOR_VERSION < 3
+ const char **command_line;
+ #else
+ wchar_t **command_line;
+ #endif
+- char buf[PATH_MAX];
++ /*
++ * Use a non-const name variable to cope with python 2.6's
++ * PyImport_AppendInittab prototype
++ */
++ char buf[PATH_MAX], name[19] = "perf_trace_context";
+ int i, err = 0;
+ FILE *fp;
+
+ #if PY_MAJOR_VERSION < 3
++ initfunc = initperf_trace_context;
+ command_line = malloc((argc + 1) * sizeof(const char *));
+ command_line[0] = script;
+ for (i = 1; i < argc + 1; i++)
+ command_line[i] = argv[i - 1];
+ #else
++ initfunc = PyInit_perf_trace_context;
+ command_line = malloc((argc + 1) * sizeof(wchar_t *));
+ command_line[0] = Py_DecodeLocale(script, NULL);
+ for (i = 1; i < argc + 1; i++)
+ command_line[i] = Py_DecodeLocale(argv[i - 1], NULL);
+ #endif
+
++ PyImport_AppendInittab(name, initfunc);
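++ /* the builtin module must be registered before Py_Initialize() */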
+ Py_Initialize();
+
+ #if PY_MAJOR_VERSION < 3
+- initperf_trace_context();
+ PySys_SetArgv(argc + 1, (char **)command_line);
+ #else
+- PyInit_perf_trace_context();
+ PySys_SetArgv(argc + 1, command_line);
+ #endif
+
+diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c
+index 6c1a83768eb0..d0334c33da54 100644
+--- a/tools/perf/util/sort.c
++++ b/tools/perf/util/sort.c
+@@ -230,8 +230,14 @@ static int64_t _sort__sym_cmp(struct symbol *sym_l, struct symbol *sym_r)
+ if (sym_l == sym_r)
+ return 0;
+
+- if (sym_l->inlined || sym_r->inlined)
+- return strcmp(sym_l->name, sym_r->name);
++ if (sym_l->inlined || sym_r->inlined) {
++ int ret = strcmp(sym_l->name, sym_r->name);
++
++ if (ret)
++ return ret;
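++ /* same name: overlapping address ranges count as the same inline symbol */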
++ if ((sym_l->start <= sym_r->end) && (sym_l->end >= sym_r->start))
++ return 0;
++ }
+
+ if (sym_l->start != sym_r->start)
+ return (int64_t)(sym_r->start - sym_l->start);
+diff --git a/tools/perf/util/srcline.c b/tools/perf/util/srcline.c
+index dc86597d0cc4..ccf42c4e83f0 100644
+--- a/tools/perf/util/srcline.c
++++ b/tools/perf/util/srcline.c
+@@ -104,7 +104,7 @@ static struct symbol *new_inline_sym(struct dso *dso,
+ } else {
+ /* create a fake symbol for the inline frame */
+ inline_sym = symbol__new(base_sym ? base_sym->start : 0,
+- base_sym ? base_sym->end : 0,
++ base_sym ? (base_sym->end - base_sym->start) : 0,
+ base_sym ? base_sym->binding : 0,
+ base_sym ? base_sym->type : 0,
+ funcname);
+diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
+index 48efad6d0f90..ca5f2e4796ea 100644
+--- a/tools/perf/util/symbol.c
++++ b/tools/perf/util/symbol.c
+@@ -710,6 +710,8 @@ static int map_groups__split_kallsyms_for_kcore(struct map_groups *kmaps, struct
+ }
+
+ pos->start -= curr_map->start - curr_map->pgoff;
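++ /* a symbol must not extend past the end of its kcore map */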
++ if (pos->end > curr_map->end)
++ pos->end = curr_map->end;
+ if (pos->end)
+ pos->end -= curr_map->start - curr_map->pgoff;
+ symbols__insert(&curr_map->dso->symbols, pos);
+diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
+index 41ab7a3668b3..936f726f7cd9 100644
+--- a/tools/testing/selftests/bpf/Makefile
++++ b/tools/testing/selftests/bpf/Makefile
+@@ -96,6 +96,7 @@ $(BPFOBJ): force
+ CLANG ?= clang
+ LLC ?= llc
+ LLVM_OBJCOPY ?= llvm-objcopy
++LLVM_READELF ?= llvm-readelf
+ BTF_PAHOLE ?= pahole
+
+ PROBE := $(shell $(LLC) -march=bpf -mcpu=probe -filetype=null /dev/null 2>&1)
+@@ -132,7 +133,7 @@ BTF_PAHOLE_PROBE := $(shell $(BTF_PAHOLE) --help 2>&1 | grep BTF)
+ BTF_OBJCOPY_PROBE := $(shell $(LLVM_OBJCOPY) --help 2>&1 | grep -i 'usage.*llvm')
+ BTF_LLVM_PROBE := $(shell echo "int main() { return 0; }" | \
+ $(CLANG) -target bpf -O2 -g -c -x c - -o ./llvm_btf_verify.o; \
+- readelf -S ./llvm_btf_verify.o | grep BTF; \
++ $(LLVM_READELF) -S ./llvm_btf_verify.o | grep BTF; \
+ /bin/rm -f ./llvm_btf_verify.o)
+
+ ifneq ($(BTF_LLVM_PROBE),)
+diff --git a/tools/testing/selftests/bpf/test_map_in_map.c b/tools/testing/selftests/bpf/test_map_in_map.c
+index ce923e67e08e..2985f262846e 100644
+--- a/tools/testing/selftests/bpf/test_map_in_map.c
++++ b/tools/testing/selftests/bpf/test_map_in_map.c
+@@ -27,6 +27,7 @@ SEC("xdp_mimtest")
+ int xdp_mimtest0(struct xdp_md *ctx)
+ {
+ int value = 123;
++ int *value_p;
+ int key = 0;
+ void *map;
+
+@@ -35,6 +36,9 @@ int xdp_mimtest0(struct xdp_md *ctx)
+ return XDP_DROP;
+
+ bpf_map_update_elem(map, &key, &value, 0);
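++ /* read the value back to confirm the inner-map update landed */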
++ value_p = bpf_map_lookup_elem(map, &key);
++ if (!value_p || *value_p != 123)
++ return XDP_DROP;
+
+ map = bpf_map_lookup_elem(&mim_hash, &key);
+ if (!map)
+diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c
+index e2b9eee37187..6e05a22b346c 100644
+--- a/tools/testing/selftests/bpf/test_maps.c
++++ b/tools/testing/selftests/bpf/test_maps.c
+@@ -43,7 +43,7 @@ static int map_flags;
+ } \
+ })
+
+-static void test_hashmap(int task, void *data)
++static void test_hashmap(unsigned int task, void *data)
+ {
+ long long key, next_key, first_key, value;
+ int fd;
+@@ -133,7 +133,7 @@ static void test_hashmap(int task, void *data)
+ close(fd);
+ }
+
+-static void test_hashmap_sizes(int task, void *data)
++static void test_hashmap_sizes(unsigned int task, void *data)
+ {
+ int fd, i, j;
+
+@@ -153,7 +153,7 @@ static void test_hashmap_sizes(int task, void *data)
+ }
+ }
+
+-static void test_hashmap_percpu(int task, void *data)
++static void test_hashmap_percpu(unsigned int task, void *data)
+ {
+ unsigned int nr_cpus = bpf_num_possible_cpus();
+ BPF_DECLARE_PERCPU(long, value);
+@@ -280,7 +280,7 @@ static int helper_fill_hashmap(int max_entries)
+ return fd;
+ }
+
+-static void test_hashmap_walk(int task, void *data)
++static void test_hashmap_walk(unsigned int task, void *data)
+ {
+ int fd, i, max_entries = 1000;
+ long long key, value, next_key;
+@@ -351,7 +351,7 @@ static void test_hashmap_zero_seed(void)
+ close(second);
+ }
+
+-static void test_arraymap(int task, void *data)
++static void test_arraymap(unsigned int task, void *data)
+ {
+ int key, next_key, fd;
+ long long value;
+@@ -406,7 +406,7 @@ static void test_arraymap(int task, void *data)
+ close(fd);
+ }
+
+-static void test_arraymap_percpu(int task, void *data)
++static void test_arraymap_percpu(unsigned int task, void *data)
+ {
+ unsigned int nr_cpus = bpf_num_possible_cpus();
+ BPF_DECLARE_PERCPU(long, values);
+@@ -502,7 +502,7 @@ static void test_arraymap_percpu_many_keys(void)
+ close(fd);
+ }
+
+-static void test_devmap(int task, void *data)
++static void test_devmap(unsigned int task, void *data)
+ {
+ int fd;
+ __u32 key, value;
+@@ -517,7 +517,7 @@ static void test_devmap(int task, void *data)
+ close(fd);
+ }
+
+-static void test_queuemap(int task, void *data)
++static void test_queuemap(unsigned int task, void *data)
+ {
+ const int MAP_SIZE = 32;
+ __u32 vals[MAP_SIZE + MAP_SIZE/2], val;
+@@ -575,7 +575,7 @@ static void test_queuemap(int task, void *data)
+ close(fd);
+ }
+
+-static void test_stackmap(int task, void *data)
++static void test_stackmap(unsigned int task, void *data)
+ {
+ const int MAP_SIZE = 32;
+ __u32 vals[MAP_SIZE + MAP_SIZE/2], val;
+@@ -641,7 +641,7 @@ static void test_stackmap(int task, void *data)
+ #define SOCKMAP_PARSE_PROG "./sockmap_parse_prog.o"
+ #define SOCKMAP_VERDICT_PROG "./sockmap_verdict_prog.o"
+ #define SOCKMAP_TCP_MSG_PROG "./sockmap_tcp_msg_prog.o"
+-static void test_sockmap(int tasks, void *data)
++static void test_sockmap(unsigned int tasks, void *data)
+ {
+ struct bpf_map *bpf_map_rx, *bpf_map_tx, *bpf_map_msg, *bpf_map_break;
+ int map_fd_msg = 0, map_fd_rx = 0, map_fd_tx = 0, map_fd_break;
+@@ -1258,10 +1258,11 @@ static void test_map_large(void)
+ }
+
+ #define run_parallel(N, FN, DATA) \
+- printf("Fork %d tasks to '" #FN "'\n", N); \
++ printf("Fork %u tasks to '" #FN "'\n", N); \
+ __run_parallel(N, FN, DATA)
+
+-static void __run_parallel(int tasks, void (*fn)(int task, void *data),
++static void __run_parallel(unsigned int tasks,
++ void (*fn)(unsigned int task, void *data),
+ void *data)
+ {
+ pid_t pid[tasks];
+@@ -1302,7 +1303,7 @@ static void test_map_stress(void)
+ #define DO_UPDATE 1
+ #define DO_DELETE 0
+
+-static void test_update_delete(int fn, void *data)
++static void test_update_delete(unsigned int fn, void *data)
+ {
+ int do_update = ((int *)data)[1];
+ int fd = ((int *)data)[0];
+diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
+index 2fd90d456892..9a967983abed 100644
+--- a/tools/testing/selftests/bpf/test_verifier.c
++++ b/tools/testing/selftests/bpf/test_verifier.c
+@@ -34,6 +34,7 @@
+ #include <linux/if_ether.h>
+
+ #include <bpf/bpf.h>
++#include <bpf/libbpf.h>
+
+ #ifdef HAVE_GENHDR
+ # include "autoconf.h"
+@@ -59,6 +60,7 @@
+
+ #define UNPRIV_SYSCTL "kernel/unprivileged_bpf_disabled"
+ static bool unpriv_disabled = false;
++static int skips;
+
+ struct bpf_test {
+ const char *descr;
+@@ -15946,6 +15948,11 @@ static void do_test_single(struct bpf_test *test, bool unpriv,
+ pflags |= BPF_F_ANY_ALIGNMENT;
+ fd_prog = bpf_verify_program(prog_type, prog, prog_len, pflags,
+ "GPL", 0, bpf_vlog, sizeof(bpf_vlog), 1);
++ if (fd_prog < 0 && !bpf_probe_prog_type(prog_type, 0)) {
++ printf("SKIP (unsupported program type %d)\n", prog_type);
++ skips++;
++ goto close_fds;
++ }
+
+ expected_ret = unpriv && test->result_unpriv != UNDEF ?
+ test->result_unpriv : test->result;
+@@ -16099,7 +16106,7 @@ static bool test_as_unpriv(struct bpf_test *test)
+
+ static int do_test(bool unpriv, unsigned int from, unsigned int to)
+ {
+- int i, passes = 0, errors = 0, skips = 0;
++ int i, passes = 0, errors = 0;
+
+ for (i = from; i < to; i++) {
+ struct bpf_test *test = &tests[i];
+diff --git a/tools/testing/selftests/firmware/config b/tools/testing/selftests/firmware/config
+index 913a25a4a32b..bf634dda0720 100644
+--- a/tools/testing/selftests/firmware/config
++++ b/tools/testing/selftests/firmware/config
+@@ -1,6 +1,5 @@
+ CONFIG_TEST_FIRMWARE=y
+ CONFIG_FW_LOADER=y
+ CONFIG_FW_LOADER_USER_HELPER=y
+-CONFIG_FW_LOADER_USER_HELPER_FALLBACK=y
+ CONFIG_IKCONFIG=y
+ CONFIG_IKCONFIG_PROC=y
+diff --git a/tools/testing/selftests/firmware/fw_filesystem.sh b/tools/testing/selftests/firmware/fw_filesystem.sh
+index 466cf2f91ba0..a4320c4b44dc 100755
+--- a/tools/testing/selftests/firmware/fw_filesystem.sh
++++ b/tools/testing/selftests/firmware/fw_filesystem.sh
+@@ -155,8 +155,11 @@ read_firmwares()
+ {
+ for i in $(seq 0 3); do
+ config_set_read_fw_idx $i
+- # Verify the contents match
+- if ! diff -q "$FW" $DIR/read_firmware 2>/dev/null ; then
++ # Verify the contents are what we expect.
++ # -Z required for now -- check for yourself, md5sum
++ # on $FW and DIR/read_firmware will yield the same. Even
++ # cmp agrees, so something is off.
++ if ! diff -q -Z "$FW" $DIR/read_firmware 2>/dev/null ; then
+ echo "request #$i: firmware was not loaded" >&2
+ exit 1
+ fi
+@@ -168,7 +171,7 @@ read_firmwares_expect_nofile()
+ for i in $(seq 0 3); do
+ config_set_read_fw_idx $i
+ # Ensures contents differ
+- if diff -q "$FW" $DIR/read_firmware 2>/dev/null ; then
++ if diff -q -Z "$FW" $DIR/read_firmware 2>/dev/null ; then
+ echo "request $i: file was not expected to match" >&2
+ exit 1
+ fi
+diff --git a/tools/testing/selftests/firmware/fw_lib.sh b/tools/testing/selftests/firmware/fw_lib.sh
+index 6c5f1b2ffb74..1cbb12e284a6 100755
+--- a/tools/testing/selftests/firmware/fw_lib.sh
++++ b/tools/testing/selftests/firmware/fw_lib.sh
+@@ -91,7 +91,7 @@ verify_reqs()
+ if [ "$TEST_REQS_FW_SYSFS_FALLBACK" = "yes" ]; then
+ if [ ! "$HAS_FW_LOADER_USER_HELPER" = "yes" ]; then
+ echo "usermode helper disabled so ignoring test"
+- exit $ksft_skip
++ exit 0
+ fi
+ fi
+ }
+diff --git a/tools/testing/selftests/ir/ir_loopback.c b/tools/testing/selftests/ir/ir_loopback.c
+index 858c19caf224..8cdf1b89ac9c 100644
+--- a/tools/testing/selftests/ir/ir_loopback.c
++++ b/tools/testing/selftests/ir/ir_loopback.c
+@@ -27,6 +27,8 @@
+
+ #define TEST_SCANCODES 10
+ #define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
++#define SYSFS_PATH_MAX 256
++#define DNAME_PATH_MAX 256
+
+ static const struct {
+ enum rc_proto proto;
+@@ -56,7 +58,7 @@ static const struct {
+ int lirc_open(const char *rc)
+ {
+ struct dirent *dent;
+- char buf[100];
++ char buf[SYSFS_PATH_MAX + DNAME_PATH_MAX];
+ DIR *d;
+ int fd;
+
+diff --git a/tools/testing/selftests/seccomp/seccomp_bpf.c b/tools/testing/selftests/seccomp/seccomp_bpf.c
+index 7e632b465ab4..6d7a81306f8a 100644
+--- a/tools/testing/selftests/seccomp/seccomp_bpf.c
++++ b/tools/testing/selftests/seccomp/seccomp_bpf.c
+@@ -2971,6 +2971,12 @@ TEST(get_metadata)
+ struct seccomp_metadata md;
+ long ret;
+
++ /* Only real root can get metadata. */
++ if (geteuid()) {
++ XFAIL(return, "get_metadata requires real root");
++ return;
++ }
++
+ ASSERT_EQ(0, pipe(pipefd));
+
+ pid = fork();
+diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
+index 30251e288629..5cc22cdaa5ba 100644
+--- a/virt/kvm/arm/mmu.c
++++ b/virt/kvm/arm/mmu.c
+@@ -2353,7 +2353,7 @@ int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
+ return 0;
+ }
+
+-void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots)
++void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen)
+ {
+ }
+
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 076bc38963bf..b4f2d892a1d3 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -874,6 +874,7 @@ static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
+ int as_id, struct kvm_memslots *slots)
+ {
+ struct kvm_memslots *old_memslots = __kvm_memslots(kvm, as_id);
++ u64 gen;
+
+ /*
+ * Set the low bit in the generation, which disables SPTE caching
+@@ -896,9 +897,11 @@ static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
+ * space 0 will use generations 0, 4, 8, ... while * address space 1 will
+ * use generations 2, 6, 10, 14, ...
+ */
+- slots->generation += KVM_ADDRESS_SPACE_NUM * 2 - 1;
++ gen = slots->generation + KVM_ADDRESS_SPACE_NUM * 2 - 1;
+
+- kvm_arch_memslots_updated(kvm, slots);
++ kvm_arch_memslots_updated(kvm, gen);
++
++ slots->generation = gen;
+
+ return old_memslots;
+ }
+@@ -2899,6 +2902,9 @@ static long kvm_device_ioctl(struct file *filp, unsigned int ioctl,
+ {
+ struct kvm_device *dev = filp->private_data;
+
++ if (dev->kvm->mm != current->mm)
++ return -EIO;
++
+ switch (ioctl) {
+ case KVM_SET_DEVICE_ATTR:
+ return kvm_device_ioctl_attr(dev, dev->ops->set_attr, arg);