public inbox for gentoo-commits@lists.gentoo.org
From: "Mike Pagano" <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:4.14 commit in: /
Date: Wed, 14 Nov 2018 14:00:55 +0000 (UTC)
Message-ID: <1542204041.88a451bddccb5814e6fcb6c6edb71df3abf215fa.mpagano@gentoo>

commit:     88a451bddccb5814e6fcb6c6edb71df3abf215fa
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Nov 13 21:18:42 2018 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Nov 14 14:00:41 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=88a451bd

proj/linux-patches: Linux patch 4.14.81

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README              |    4 +
 1080_linux-4.14.81.patch | 6990 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 6994 insertions(+)

diff --git a/0000_README b/0000_README
index 28ef8f2..fd76211 100644
--- a/0000_README
+++ b/0000_README
@@ -363,6 +363,10 @@ Patch:  1079_linux-4.14.80.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.14.80
 
+Patch:  1080_linux-4.14.81.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.14.81
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1080_linux-4.14.81.patch b/1080_linux-4.14.81.patch
new file mode 100644
index 0000000..ad133e4
--- /dev/null
+++ b/1080_linux-4.14.81.patch
@@ -0,0 +1,6990 @@
+diff --git a/Documentation/media/uapi/v4l/biblio.rst b/Documentation/media/uapi/v4l/biblio.rst
+index 1cedcfc04327..386d6cf83e9c 100644
+--- a/Documentation/media/uapi/v4l/biblio.rst
++++ b/Documentation/media/uapi/v4l/biblio.rst
+@@ -226,16 +226,6 @@ xvYCC
+ 
+ :author:    International Electrotechnical Commission (http://www.iec.ch)
+ 
+-.. _adobergb:
+-
+-AdobeRGB
+-========
+-
+-
+-:title:     Adobe© RGB (1998) Color Image Encoding Version 2005-05
+-
+-:author:    Adobe Systems Incorporated (http://www.adobe.com)
+-
+ .. _oprgb:
+ 
+ opRGB
+diff --git a/Documentation/media/uapi/v4l/colorspaces-defs.rst b/Documentation/media/uapi/v4l/colorspaces-defs.rst
+index 410907fe9415..f24615544792 100644
+--- a/Documentation/media/uapi/v4l/colorspaces-defs.rst
++++ b/Documentation/media/uapi/v4l/colorspaces-defs.rst
+@@ -51,8 +51,8 @@ whole range, 0-255, dividing the angular value by 1.41. The enum
+       - See :ref:`col-rec709`.
+     * - ``V4L2_COLORSPACE_SRGB``
+       - See :ref:`col-srgb`.
+-    * - ``V4L2_COLORSPACE_ADOBERGB``
+-      - See :ref:`col-adobergb`.
++    * - ``V4L2_COLORSPACE_OPRGB``
++      - See :ref:`col-oprgb`.
+     * - ``V4L2_COLORSPACE_BT2020``
+       - See :ref:`col-bt2020`.
+     * - ``V4L2_COLORSPACE_DCI_P3``
+@@ -90,8 +90,8 @@ whole range, 0-255, dividing the angular value by 1.41. The enum
+       - Use the Rec. 709 transfer function.
+     * - ``V4L2_XFER_FUNC_SRGB``
+       - Use the sRGB transfer function.
+-    * - ``V4L2_XFER_FUNC_ADOBERGB``
+-      - Use the AdobeRGB transfer function.
++    * - ``V4L2_XFER_FUNC_OPRGB``
++      - Use the opRGB transfer function.
+     * - ``V4L2_XFER_FUNC_SMPTE240M``
+       - Use the SMPTE 240M transfer function.
+     * - ``V4L2_XFER_FUNC_NONE``
+diff --git a/Documentation/media/uapi/v4l/colorspaces-details.rst b/Documentation/media/uapi/v4l/colorspaces-details.rst
+index b5d551b9cc8f..09fabf4cd412 100644
+--- a/Documentation/media/uapi/v4l/colorspaces-details.rst
++++ b/Documentation/media/uapi/v4l/colorspaces-details.rst
+@@ -290,15 +290,14 @@ Y' is clamped to the range [0…1] and Cb and Cr are clamped to the range
+ 170M/BT.601. The Y'CbCr quantization is limited range.
+ 
+ 
+-.. _col-adobergb:
++.. _col-oprgb:
+ 
+-Colorspace Adobe RGB (V4L2_COLORSPACE_ADOBERGB)
++Colorspace opRGB (V4L2_COLORSPACE_OPRGB)
+ ===============================================
+ 
+-The :ref:`adobergb` standard defines the colorspace used by computer
+-graphics that use the AdobeRGB colorspace. This is also known as the
+-:ref:`oprgb` standard. The default transfer function is
+-``V4L2_XFER_FUNC_ADOBERGB``. The default Y'CbCr encoding is
++The :ref:`oprgb` standard defines the colorspace used by computer
++graphics that use the opRGB colorspace. The default transfer function is
++``V4L2_XFER_FUNC_OPRGB``. The default Y'CbCr encoding is
+ ``V4L2_YCBCR_ENC_601``. The default Y'CbCr quantization is limited
+ range.
+ 
+@@ -312,7 +311,7 @@ The chromaticities of the primary colors and the white reference are:
+ 
+ .. tabularcolumns:: |p{4.4cm}|p{4.4cm}|p{8.7cm}|
+ 
+-.. flat-table:: Adobe RGB Chromaticities
++.. flat-table:: opRGB Chromaticities
+     :header-rows:  1
+     :stub-columns: 0
+     :widths:       1 1 2
+diff --git a/Makefile b/Makefile
+index f4cad5e03561..2fe1424d61d2 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 14
+-SUBLEVEL = 80
++SUBLEVEL = 81
+ EXTRAVERSION =
+ NAME = Petit Gorille
+ 
+diff --git a/arch/arm/boot/dts/dra7.dtsi b/arch/arm/boot/dts/dra7.dtsi
+index a5bd8f0205e8..0bf354024ef5 100644
+--- a/arch/arm/boot/dts/dra7.dtsi
++++ b/arch/arm/boot/dts/dra7.dtsi
+@@ -333,7 +333,7 @@
+ 				ti,hwmods = "pcie1";
+ 				phys = <&pcie1_phy>;
+ 				phy-names = "pcie-phy0";
+-				ti,syscon-unaligned-access = <&scm_conf1 0x14 2>;
++				ti,syscon-unaligned-access = <&scm_conf1 0x14 1>;
+ 				status = "disabled";
+ 			};
+ 		};
+diff --git a/arch/arm/boot/dts/exynos3250.dtsi b/arch/arm/boot/dts/exynos3250.dtsi
+index 590ee442d0ae..3ed3d1a0fd40 100644
+--- a/arch/arm/boot/dts/exynos3250.dtsi
++++ b/arch/arm/boot/dts/exynos3250.dtsi
+@@ -82,6 +82,22 @@
+ 			compatible = "arm,cortex-a7";
+ 			reg = <1>;
+ 			clock-frequency = <1000000000>;
++			clocks = <&cmu CLK_ARM_CLK>;
++			clock-names = "cpu";
++			#cooling-cells = <2>;
++
++			operating-points = <
++				1000000 1150000
++				900000  1112500
++				800000  1075000
++				700000  1037500
++				600000  1000000
++				500000  962500
++				400000  925000
++				300000  887500
++				200000  850000
++				100000  850000
++			>;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/exynos4210-origen.dts b/arch/arm/boot/dts/exynos4210-origen.dts
+index 084fcc5574ef..e4876186d5cd 100644
+--- a/arch/arm/boot/dts/exynos4210-origen.dts
++++ b/arch/arm/boot/dts/exynos4210-origen.dts
+@@ -152,6 +152,8 @@
+ 		reg = <0x66>;
+ 		interrupt-parent = <&gpx0>;
+ 		interrupts = <4 IRQ_TYPE_NONE>, <3 IRQ_TYPE_NONE>;
++		pinctrl-names = "default";
++		pinctrl-0 = <&max8997_irq>;
+ 
+ 		max8997,pmic-buck1-dvs-voltage = <1350000>;
+ 		max8997,pmic-buck2-dvs-voltage = <1100000>;
+@@ -289,6 +291,13 @@
+ 	};
+ };
+ 
++&pinctrl_1 {
++	max8997_irq: max8997-irq {
++		samsung,pins = "gpx0-3", "gpx0-4";
++		samsung,pin-pud = <EXYNOS_PIN_PULL_NONE>;
++	};
++};
++
+ &sdhci_0 {
+ 	bus-width = <4>;
+ 	pinctrl-0 = <&sd0_clk &sd0_cmd &sd0_bus4 &sd0_cd>;
+diff --git a/arch/arm/boot/dts/exynos4210.dtsi b/arch/arm/boot/dts/exynos4210.dtsi
+index 768fb075b1fd..27e17471ab7a 100644
+--- a/arch/arm/boot/dts/exynos4210.dtsi
++++ b/arch/arm/boot/dts/exynos4210.dtsi
+@@ -52,8 +52,6 @@
+ 				400000	975000
+ 				200000	950000
+ 			>;
+-			cooling-min-level = <4>;
+-			cooling-max-level = <2>;
+ 			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 
+@@ -61,6 +59,19 @@
+ 			device_type = "cpu";
+ 			compatible = "arm,cortex-a9";
+ 			reg = <0x901>;
++			clocks = <&clock CLK_ARM_CLK>;
++			clock-names = "cpu";
++			clock-latency = <160000>;
++
++			operating-points = <
++				1200000 1250000
++				1000000 1150000
++				800000	1075000
++				500000	975000
++				400000	975000
++				200000	950000
++			>;
++			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/exynos4412.dtsi b/arch/arm/boot/dts/exynos4412.dtsi
+index 7ff03a7e8fb9..1a35e6336e53 100644
+--- a/arch/arm/boot/dts/exynos4412.dtsi
++++ b/arch/arm/boot/dts/exynos4412.dtsi
+@@ -45,8 +45,6 @@
+ 			clocks = <&clock CLK_ARM_CLK>;
+ 			clock-names = "cpu";
+ 			operating-points-v2 = <&cpu0_opp_table>;
+-			cooling-min-level = <13>;
+-			cooling-max-level = <7>;
+ 			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/exynos5250.dtsi b/arch/arm/boot/dts/exynos5250.dtsi
+index 35b1949a3e3c..9f73a8bf6e1c 100644
+--- a/arch/arm/boot/dts/exynos5250.dtsi
++++ b/arch/arm/boot/dts/exynos5250.dtsi
+@@ -57,38 +57,106 @@
+ 			device_type = "cpu";
+ 			compatible = "arm,cortex-a15";
+ 			reg = <0>;
+-			clock-frequency = <1700000000>;
+ 			clocks = <&clock CLK_ARM_CLK>;
+ 			clock-names = "cpu";
+-			clock-latency = <140000>;
+-
+-			operating-points = <
+-				1700000 1300000
+-				1600000 1250000
+-				1500000 1225000
+-				1400000 1200000
+-				1300000 1150000
+-				1200000 1125000
+-				1100000 1100000
+-				1000000 1075000
+-				 900000 1050000
+-				 800000 1025000
+-				 700000 1012500
+-				 600000 1000000
+-				 500000  975000
+-				 400000  950000
+-				 300000  937500
+-				 200000  925000
+-			>;
+-			cooling-min-level = <15>;
+-			cooling-max-level = <9>;
++			operating-points-v2 = <&cpu0_opp_table>;
+ 			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 		cpu@1 {
+ 			device_type = "cpu";
+ 			compatible = "arm,cortex-a15";
+ 			reg = <1>;
+-			clock-frequency = <1700000000>;
++			clocks = <&clock CLK_ARM_CLK>;
++			clock-names = "cpu";
++			operating-points-v2 = <&cpu0_opp_table>;
++			#cooling-cells = <2>; /* min followed by max */
++		};
++	};
++
++	cpu0_opp_table: opp_table0 {
++		compatible = "operating-points-v2";
++		opp-shared;
++
++		opp-200000000 {
++			opp-hz = /bits/ 64 <200000000>;
++			opp-microvolt = <925000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-300000000 {
++			opp-hz = /bits/ 64 <300000000>;
++			opp-microvolt = <937500>;
++			clock-latency-ns = <140000>;
++		};
++		opp-400000000 {
++			opp-hz = /bits/ 64 <400000000>;
++			opp-microvolt = <950000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-500000000 {
++			opp-hz = /bits/ 64 <500000000>;
++			opp-microvolt = <975000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-600000000 {
++			opp-hz = /bits/ 64 <600000000>;
++			opp-microvolt = <1000000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-700000000 {
++			opp-hz = /bits/ 64 <700000000>;
++			opp-microvolt = <1012500>;
++			clock-latency-ns = <140000>;
++		};
++		opp-800000000 {
++			opp-hz = /bits/ 64 <800000000>;
++			opp-microvolt = <1025000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-900000000 {
++			opp-hz = /bits/ 64 <900000000>;
++			opp-microvolt = <1050000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-1000000000 {
++			opp-hz = /bits/ 64 <1000000000>;
++			opp-microvolt = <1075000>;
++			clock-latency-ns = <140000>;
++			opp-suspend;
++		};
++		opp-1100000000 {
++			opp-hz = /bits/ 64 <1100000000>;
++			opp-microvolt = <1100000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-1200000000 {
++			opp-hz = /bits/ 64 <1200000000>;
++			opp-microvolt = <1125000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-1300000000 {
++			opp-hz = /bits/ 64 <1300000000>;
++			opp-microvolt = <1150000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-1400000000 {
++			opp-hz = /bits/ 64 <1400000000>;
++			opp-microvolt = <1200000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-1500000000 {
++			opp-hz = /bits/ 64 <1500000000>;
++			opp-microvolt = <1225000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-1600000000 {
++			opp-hz = /bits/ 64 <1600000000>;
++			opp-microvolt = <1250000>;
++			clock-latency-ns = <140000>;
++		};
++		opp-1700000000 {
++			opp-hz = /bits/ 64 <1700000000>;
++			opp-microvolt = <1300000>;
++			clock-latency-ns = <140000>;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/exynos5420-cpus.dtsi b/arch/arm/boot/dts/exynos5420-cpus.dtsi
+index 5c052d7ff554..7e6b55561b1d 100644
+--- a/arch/arm/boot/dts/exynos5420-cpus.dtsi
++++ b/arch/arm/boot/dts/exynos5420-cpus.dtsi
+@@ -33,8 +33,6 @@
+ 			clock-frequency = <1800000000>;
+ 			cci-control-port = <&cci_control1>;
+ 			operating-points-v2 = <&cluster_a15_opp_table>;
+-			cooling-min-level = <0>;
+-			cooling-max-level = <11>;
+ 			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 
+@@ -45,8 +43,6 @@
+ 			clock-frequency = <1800000000>;
+ 			cci-control-port = <&cci_control1>;
+ 			operating-points-v2 = <&cluster_a15_opp_table>;
+-			cooling-min-level = <0>;
+-			cooling-max-level = <11>;
+ 			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 
+@@ -57,8 +53,6 @@
+ 			clock-frequency = <1800000000>;
+ 			cci-control-port = <&cci_control1>;
+ 			operating-points-v2 = <&cluster_a15_opp_table>;
+-			cooling-min-level = <0>;
+-			cooling-max-level = <11>;
+ 			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 
+@@ -69,8 +63,6 @@
+ 			clock-frequency = <1800000000>;
+ 			cci-control-port = <&cci_control1>;
+ 			operating-points-v2 = <&cluster_a15_opp_table>;
+-			cooling-min-level = <0>;
+-			cooling-max-level = <11>;
+ 			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 
+@@ -82,8 +74,6 @@
+ 			clock-frequency = <1000000000>;
+ 			cci-control-port = <&cci_control0>;
+ 			operating-points-v2 = <&cluster_a7_opp_table>;
+-			cooling-min-level = <0>;
+-			cooling-max-level = <7>;
+ 			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 
+@@ -94,8 +84,6 @@
+ 			clock-frequency = <1000000000>;
+ 			cci-control-port = <&cci_control0>;
+ 			operating-points-v2 = <&cluster_a7_opp_table>;
+-			cooling-min-level = <0>;
+-			cooling-max-level = <7>;
+ 			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 
+@@ -106,8 +94,6 @@
+ 			clock-frequency = <1000000000>;
+ 			cci-control-port = <&cci_control0>;
+ 			operating-points-v2 = <&cluster_a7_opp_table>;
+-			cooling-min-level = <0>;
+-			cooling-max-level = <7>;
+ 			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 
+@@ -118,8 +104,6 @@
+ 			clock-frequency = <1000000000>;
+ 			cci-control-port = <&cci_control0>;
+ 			operating-points-v2 = <&cluster_a7_opp_table>;
+-			cooling-min-level = <0>;
+-			cooling-max-level = <7>;
+ 			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/exynos5422-cpus.dtsi b/arch/arm/boot/dts/exynos5422-cpus.dtsi
+index bf3c6f1ec4ee..c8afdf821a77 100644
+--- a/arch/arm/boot/dts/exynos5422-cpus.dtsi
++++ b/arch/arm/boot/dts/exynos5422-cpus.dtsi
+@@ -32,8 +32,6 @@
+ 			clock-frequency = <1000000000>;
+ 			cci-control-port = <&cci_control0>;
+ 			operating-points-v2 = <&cluster_a7_opp_table>;
+-			cooling-min-level = <0>;
+-			cooling-max-level = <11>;
+ 			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 
+@@ -44,8 +42,6 @@
+ 			clock-frequency = <1000000000>;
+ 			cci-control-port = <&cci_control0>;
+ 			operating-points-v2 = <&cluster_a7_opp_table>;
+-			cooling-min-level = <0>;
+-			cooling-max-level = <11>;
+ 			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 
+@@ -56,8 +52,6 @@
+ 			clock-frequency = <1000000000>;
+ 			cci-control-port = <&cci_control0>;
+ 			operating-points-v2 = <&cluster_a7_opp_table>;
+-			cooling-min-level = <0>;
+-			cooling-max-level = <11>;
+ 			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 
+@@ -68,8 +62,6 @@
+ 			clock-frequency = <1000000000>;
+ 			cci-control-port = <&cci_control0>;
+ 			operating-points-v2 = <&cluster_a7_opp_table>;
+-			cooling-min-level = <0>;
+-			cooling-max-level = <11>;
+ 			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 
+@@ -81,8 +73,6 @@
+ 			clock-frequency = <1800000000>;
+ 			cci-control-port = <&cci_control1>;
+ 			operating-points-v2 = <&cluster_a15_opp_table>;
+-			cooling-min-level = <0>;
+-			cooling-max-level = <15>;
+ 			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 
+@@ -93,8 +83,6 @@
+ 			clock-frequency = <1800000000>;
+ 			cci-control-port = <&cci_control1>;
+ 			operating-points-v2 = <&cluster_a15_opp_table>;
+-			cooling-min-level = <0>;
+-			cooling-max-level = <15>;
+ 			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 
+@@ -105,8 +93,6 @@
+ 			clock-frequency = <1800000000>;
+ 			cci-control-port = <&cci_control1>;
+ 			operating-points-v2 = <&cluster_a15_opp_table>;
+-			cooling-min-level = <0>;
+-			cooling-max-level = <15>;
+ 			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 
+@@ -117,8 +103,6 @@
+ 			clock-frequency = <1800000000>;
+ 			cci-control-port = <&cci_control1>;
+ 			operating-points-v2 = <&cluster_a15_opp_table>;
+-			cooling-min-level = <0>;
+-			cooling-max-level = <15>;
+ 			#cooling-cells = <2>; /* min followed by max */
+ 		};
+ 	};
+diff --git a/arch/arm/boot/dts/socfpga_arria10.dtsi b/arch/arm/boot/dts/socfpga_arria10.dtsi
+index 791ca15c799e..bd1985694bca 100644
+--- a/arch/arm/boot/dts/socfpga_arria10.dtsi
++++ b/arch/arm/boot/dts/socfpga_arria10.dtsi
+@@ -601,7 +601,7 @@
+ 			status = "disabled";
+ 		};
+ 
+-		sdr: sdr@ffc25000 {
++		sdr: sdr@ffcfb100 {
+ 			compatible = "altr,sdr-ctl", "syscon";
+ 			reg = <0xffcfb100 0x80>;
+ 		};
+diff --git a/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi b/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi
+index c2b9bcb0ef61..e79f3defe002 100644
+--- a/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi
++++ b/arch/arm64/boot/dts/altera/socfpga_stratix10.dtsi
+@@ -249,7 +249,7 @@
+ 
+ 		sysmgr: sysmgr@ffd12000 {
+ 			compatible = "altr,sys-mgr", "syscon";
+-			reg = <0xffd12000 0x1000>;
++			reg = <0xffd12000 0x228>;
+ 		};
+ 
+ 		/* Local timer */
+diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
+index 9a8cb96555d6..9a947afaf74c 100644
+--- a/arch/arm64/lib/Makefile
++++ b/arch/arm64/lib/Makefile
+@@ -12,7 +12,7 @@ lib-y		:= bitops.o clear_user.o delay.o copy_from_user.o	\
+ # when supported by the CPU. Result and argument registers are handled
+ # correctly, based on the function prototype.
+ lib-$(CONFIG_ARM64_LSE_ATOMICS) += atomic_ll_sc.o
+-CFLAGS_atomic_ll_sc.o	:= -fcall-used-x0 -ffixed-x1 -ffixed-x2		\
++CFLAGS_atomic_ll_sc.o	:= -ffixed-x1 -ffixed-x2        		\
+ 		   -ffixed-x3 -ffixed-x4 -ffixed-x5 -ffixed-x6		\
+ 		   -ffixed-x7 -fcall-saved-x8 -fcall-saved-x9		\
+ 		   -fcall-saved-x10 -fcall-saved-x11 -fcall-saved-x12	\
+diff --git a/arch/mips/cavium-octeon/executive/cvmx-helper.c b/arch/mips/cavium-octeon/executive/cvmx-helper.c
+index f24be0b5db50..c683c369bca5 100644
+--- a/arch/mips/cavium-octeon/executive/cvmx-helper.c
++++ b/arch/mips/cavium-octeon/executive/cvmx-helper.c
+@@ -67,7 +67,7 @@ void (*cvmx_override_pko_queue_priority) (int pko_port,
+ void (*cvmx_override_ipd_port_setup) (int ipd_port);
+ 
+ /* Port count per interface */
+-static int interface_port_count[5];
++static int interface_port_count[9];
+ 
+ /**
+  * Return the number of interfaces the chip has. Each interface
+diff --git a/arch/parisc/kernel/entry.S b/arch/parisc/kernel/entry.S
+index 1b4732e20137..843825a7e6e2 100644
+--- a/arch/parisc/kernel/entry.S
++++ b/arch/parisc/kernel/entry.S
+@@ -185,7 +185,7 @@
+ 	bv,n	0(%r3)
+ 	nop
+ 	.word	0		/* checksum (will be patched) */
+-	.word	PA(os_hpmc)	/* address of handler */
++	.word	0		/* address of handler */
+ 	.word	0		/* length of handler */
+ 	.endm
+ 
+diff --git a/arch/parisc/kernel/hpmc.S b/arch/parisc/kernel/hpmc.S
+index 781c3b9a3e46..fde654115564 100644
+--- a/arch/parisc/kernel/hpmc.S
++++ b/arch/parisc/kernel/hpmc.S
+@@ -85,7 +85,7 @@ END(hpmc_pim_data)
+ 
+ 	.import intr_save, code
+ 	.align 16
+-ENTRY_CFI(os_hpmc)
++ENTRY(os_hpmc)
+ .os_hpmc:
+ 
+ 	/*
+@@ -302,7 +302,6 @@ os_hpmc_6:
+ 	b .
+ 	nop
+ 	.align 16	/* make function length multiple of 16 bytes */
+-ENDPROC_CFI(os_hpmc)
+ .os_hpmc_end:
+ 
+ 
+diff --git a/arch/parisc/kernel/traps.c b/arch/parisc/kernel/traps.c
+index 8453724b8009..9a898d68f4a0 100644
+--- a/arch/parisc/kernel/traps.c
++++ b/arch/parisc/kernel/traps.c
+@@ -836,7 +836,8 @@ void __init initialize_ivt(const void *iva)
+ 	if (pdc_instr(&instr) == PDC_OK)
+ 		ivap[0] = instr;
+ 
+-	/* Compute Checksum for HPMC handler */
++	/* Setup IVA and compute checksum for HPMC handler */
++	ivap[6] = (u32)__pa(os_hpmc);
+ 	length = os_hpmc_size;
+ 	ivap[7] = length;
+ 
+diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c
+index 13f7854e0d49..cc700f7dda54 100644
+--- a/arch/parisc/mm/init.c
++++ b/arch/parisc/mm/init.c
+@@ -495,12 +495,8 @@ static void __init map_pages(unsigned long start_vaddr,
+ 						pte = pte_mkhuge(pte);
+ 				}
+ 
+-				if (address >= end_paddr) {
+-					if (force)
+-						break;
+-					else
+-						pte_val(pte) = 0;
+-				}
++				if (address >= end_paddr)
++					break;
+ 
+ 				set_pte(pg_table, pte);
+ 
+diff --git a/arch/powerpc/include/asm/mpic.h b/arch/powerpc/include/asm/mpic.h
+index fad8ddd697ac..0abf2e7fd222 100644
+--- a/arch/powerpc/include/asm/mpic.h
++++ b/arch/powerpc/include/asm/mpic.h
+@@ -393,7 +393,14 @@ extern struct bus_type mpic_subsys;
+ #define	MPIC_REGSET_TSI108		MPIC_REGSET(1)	/* Tsi108/109 PIC */
+ 
+ /* Get the version of primary MPIC */
++#ifdef CONFIG_MPIC
+ extern u32 fsl_mpic_primary_get_version(void);
++#else
++static inline u32 fsl_mpic_primary_get_version(void)
++{
++	return 0;
++}
++#endif
+ 
+ /* Allocate the controller structure and setup the linux irq descs
+  * for the range if interrupts passed in. No HW initialization is
+diff --git a/arch/s390/kvm/sthyi.c b/arch/s390/kvm/sthyi.c
+index 395926b8c1ed..ffba4617d108 100644
+--- a/arch/s390/kvm/sthyi.c
++++ b/arch/s390/kvm/sthyi.c
+@@ -174,17 +174,19 @@ static void fill_hdr(struct sthyi_sctns *sctns)
+ static void fill_stsi_mac(struct sthyi_sctns *sctns,
+ 			  struct sysinfo_1_1_1 *sysinfo)
+ {
++	sclp_ocf_cpc_name_copy(sctns->mac.infmname);
++	if (*(u64 *)sctns->mac.infmname != 0)
++		sctns->mac.infmval1 |= MAC_NAME_VLD;
++
+ 	if (stsi(sysinfo, 1, 1, 1))
+ 		return;
+ 
+-	sclp_ocf_cpc_name_copy(sctns->mac.infmname);
+-
+ 	memcpy(sctns->mac.infmtype, sysinfo->type, sizeof(sctns->mac.infmtype));
+ 	memcpy(sctns->mac.infmmanu, sysinfo->manufacturer, sizeof(sctns->mac.infmmanu));
+ 	memcpy(sctns->mac.infmpman, sysinfo->plant, sizeof(sctns->mac.infmpman));
+ 	memcpy(sctns->mac.infmseq, sysinfo->sequence, sizeof(sctns->mac.infmseq));
+ 
+-	sctns->mac.infmval1 |= MAC_ID_VLD | MAC_NAME_VLD;
++	sctns->mac.infmval1 |= MAC_ID_VLD;
+ }
+ 
+ static void fill_stsi_par(struct sthyi_sctns *sctns,
+diff --git a/arch/sparc/include/asm/cpudata_64.h b/arch/sparc/include/asm/cpudata_64.h
+index 666d6b5c0440..9c3fc03abe9a 100644
+--- a/arch/sparc/include/asm/cpudata_64.h
++++ b/arch/sparc/include/asm/cpudata_64.h
+@@ -28,7 +28,7 @@ typedef struct {
+ 	unsigned short	sock_id;	/* physical package */
+ 	unsigned short	core_id;
+ 	unsigned short  max_cache_id;	/* groupings of highest shared cache */
+-	unsigned short	proc_id;	/* strand (aka HW thread) id */
++	signed short	proc_id;	/* strand (aka HW thread) id */
+ } cpuinfo_sparc;
+ 
+ DECLARE_PER_CPU(cpuinfo_sparc, __cpu_data);
+diff --git a/arch/sparc/kernel/perf_event.c b/arch/sparc/kernel/perf_event.c
+index 5c1f54758312..eceb0215bdee 100644
+--- a/arch/sparc/kernel/perf_event.c
++++ b/arch/sparc/kernel/perf_event.c
+@@ -24,6 +24,7 @@
+ #include <asm/cpudata.h>
+ #include <linux/uaccess.h>
+ #include <linux/atomic.h>
++#include <linux/sched/clock.h>
+ #include <asm/nmi.h>
+ #include <asm/pcr.h>
+ #include <asm/cacheflush.h>
+@@ -927,6 +928,8 @@ static void read_in_all_counters(struct cpu_hw_events *cpuc)
+ 			sparc_perf_event_update(cp, &cp->hw,
+ 						cpuc->current_idx[i]);
+ 			cpuc->current_idx[i] = PIC_NO_INDEX;
++			if (cp->hw.state & PERF_HES_STOPPED)
++				cp->hw.state |= PERF_HES_ARCH;
+ 		}
+ 	}
+ }
+@@ -959,10 +962,12 @@ static void calculate_single_pcr(struct cpu_hw_events *cpuc)
+ 
+ 		enc = perf_event_get_enc(cpuc->events[i]);
+ 		cpuc->pcr[0] &= ~mask_for_index(idx);
+-		if (hwc->state & PERF_HES_STOPPED)
++		if (hwc->state & PERF_HES_ARCH) {
+ 			cpuc->pcr[0] |= nop_for_index(idx);
+-		else
++		} else {
+ 			cpuc->pcr[0] |= event_encoding(enc, idx);
++			hwc->state = 0;
++		}
+ 	}
+ out:
+ 	cpuc->pcr[0] |= cpuc->event[0]->hw.config_base;
+@@ -988,6 +993,9 @@ static void calculate_multiple_pcrs(struct cpu_hw_events *cpuc)
+ 
+ 		cpuc->current_idx[i] = idx;
+ 
++		if (cp->hw.state & PERF_HES_ARCH)
++			continue;
++
+ 		sparc_pmu_start(cp, PERF_EF_RELOAD);
+ 	}
+ out:
+@@ -1079,6 +1087,8 @@ static void sparc_pmu_start(struct perf_event *event, int flags)
+ 	event->hw.state = 0;
+ 
+ 	sparc_pmu_enable_event(cpuc, &event->hw, idx);
++
++	perf_event_update_userpage(event);
+ }
+ 
+ static void sparc_pmu_stop(struct perf_event *event, int flags)
+@@ -1371,9 +1381,9 @@ static int sparc_pmu_add(struct perf_event *event, int ef_flags)
+ 	cpuc->events[n0] = event->hw.event_base;
+ 	cpuc->current_idx[n0] = PIC_NO_INDEX;
+ 
+-	event->hw.state = PERF_HES_UPTODATE;
++	event->hw.state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
+ 	if (!(ef_flags & PERF_EF_START))
+-		event->hw.state |= PERF_HES_STOPPED;
++		event->hw.state |= PERF_HES_ARCH;
+ 
+ 	/*
+ 	 * If group events scheduling transaction was started,
+@@ -1603,6 +1613,8 @@ static int __kprobes perf_event_nmi_handler(struct notifier_block *self,
+ 	struct perf_sample_data data;
+ 	struct cpu_hw_events *cpuc;
+ 	struct pt_regs *regs;
++	u64 finish_clock;
++	u64 start_clock;
+ 	int i;
+ 
+ 	if (!atomic_read(&active_events))
+@@ -1616,6 +1628,8 @@ static int __kprobes perf_event_nmi_handler(struct notifier_block *self,
+ 		return NOTIFY_DONE;
+ 	}
+ 
++	start_clock = sched_clock();
++
+ 	regs = args->regs;
+ 
+ 	cpuc = this_cpu_ptr(&cpu_hw_events);
+@@ -1654,6 +1668,10 @@ static int __kprobes perf_event_nmi_handler(struct notifier_block *self,
+ 			sparc_pmu_stop(event, 0);
+ 	}
+ 
++	finish_clock = sched_clock();
++
++	perf_sample_event_took(finish_clock - start_clock);
++
+ 	return NOTIFY_STOP;
+ }
+ 
+diff --git a/arch/x86/boot/tools/build.c b/arch/x86/boot/tools/build.c
+index d4e6cd4577e5..bf0e82400358 100644
+--- a/arch/x86/boot/tools/build.c
++++ b/arch/x86/boot/tools/build.c
+@@ -391,6 +391,13 @@ int main(int argc, char ** argv)
+ 		die("Unable to mmap '%s': %m", argv[2]);
+ 	/* Number of 16-byte paragraphs, including space for a 4-byte CRC */
+ 	sys_size = (sz + 15 + 4) / 16;
++#ifdef CONFIG_EFI_STUB
++	/*
++	 * COFF requires minimum 32-byte alignment of sections, and
++	 * adding a signature is problematic without that alignment.
++	 */
++	sys_size = (sys_size + 1) & ~1;
++#endif
+ 
+ 	/* Patch the setup code with the appropriate size parameters */
+ 	buf[0x1f1] = setup_sectors-1;
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index 8418462298e7..673d6e988196 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -220,6 +220,7 @@
+ #define X86_FEATURE_STIBP		( 7*32+27) /* Single Thread Indirect Branch Predictors */
+ #define X86_FEATURE_ZEN			( 7*32+28) /* "" CPU is AMD family 0x17 (Zen) */
+ #define X86_FEATURE_L1TF_PTEINV		( 7*32+29) /* "" L1TF workaround PTE inversion */
++#define X86_FEATURE_IBRS_ENHANCED	( 7*32+30) /* Enhanced IBRS */
+ 
+ /* Virtualization flags: Linux defined, word 8 */
+ #define X86_FEATURE_TPR_SHADOW		( 8*32+ 0) /* Intel TPR Shadow */
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 4015b88383ce..367cdd263a5c 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -174,6 +174,7 @@ enum {
+ 
+ #define DR6_BD		(1 << 13)
+ #define DR6_BS		(1 << 14)
++#define DR6_BT		(1 << 15)
+ #define DR6_RTM		(1 << 16)
+ #define DR6_FIXED_1	0xfffe0ff0
+ #define DR6_INIT	0xffff0ff0
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index 8b38df98548e..1b4132161c1f 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -215,6 +215,7 @@ enum spectre_v2_mitigation {
+ 	SPECTRE_V2_RETPOLINE_GENERIC,
+ 	SPECTRE_V2_RETPOLINE_AMD,
+ 	SPECTRE_V2_IBRS,
++	SPECTRE_V2_IBRS_ENHANCED,
+ };
+ 
+ /* The Speculative Store Bypass disable variants */
+diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
+index 5f00ecb9d251..2501be609b82 100644
+--- a/arch/x86/include/asm/tlbflush.h
++++ b/arch/x86/include/asm/tlbflush.h
+@@ -466,6 +466,12 @@ static inline void __native_flush_tlb_one_user(unsigned long addr)
+  */
+ static inline void __flush_tlb_all(void)
+ {
++	/*
++	 * This is to catch users with enabled preemption and the PGE feature
++	 * and don't trigger the warning in __native_flush_tlb().
++	 */
++	VM_WARN_ON_ONCE(preemptible());
++
+ 	if (boot_cpu_has(X86_FEATURE_PGE)) {
+ 		__flush_tlb_global();
+ 	} else {
+diff --git a/arch/x86/kernel/check.c b/arch/x86/kernel/check.c
+index 33399426793e..cc8258a5378b 100644
+--- a/arch/x86/kernel/check.c
++++ b/arch/x86/kernel/check.c
+@@ -31,6 +31,11 @@ static __init int set_corruption_check(char *arg)
+ 	ssize_t ret;
+ 	unsigned long val;
+ 
++	if (!arg) {
++		pr_err("memory_corruption_check config string not provided\n");
++		return -EINVAL;
++	}
++
+ 	ret = kstrtoul(arg, 10, &val);
+ 	if (ret)
+ 		return ret;
+@@ -45,6 +50,11 @@ static __init int set_corruption_check_period(char *arg)
+ 	ssize_t ret;
+ 	unsigned long val;
+ 
++	if (!arg) {
++		pr_err("memory_corruption_check_period config string not provided\n");
++		return -EINVAL;
++	}
++
+ 	ret = kstrtoul(arg, 10, &val);
+ 	if (ret)
+ 		return ret;
+@@ -59,6 +69,11 @@ static __init int set_corruption_check_size(char *arg)
+ 	char *end;
+ 	unsigned size;
+ 
++	if (!arg) {
++		pr_err("memory_corruption_check_size config string not provided\n");
++		return -EINVAL;
++	}
++
+ 	size = memparse(arg, &end);
+ 
+ 	if (*end == '\0')
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 3e435f88621d..aa6e7f75bccc 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -34,12 +34,10 @@ static void __init spectre_v2_select_mitigation(void);
+ static void __init ssb_select_mitigation(void);
+ static void __init l1tf_select_mitigation(void);
+ 
+-/*
+- * Our boot-time value of the SPEC_CTRL MSR. We read it once so that any
+- * writes to SPEC_CTRL contain whatever reserved bits have been set.
+- */
+-u64 __ro_after_init x86_spec_ctrl_base;
++/* The base value of the SPEC_CTRL MSR that always has to be preserved. */
++u64 x86_spec_ctrl_base;
+ EXPORT_SYMBOL_GPL(x86_spec_ctrl_base);
++static DEFINE_MUTEX(spec_ctrl_mutex);
+ 
+ /*
+  * The vendor and possibly platform specific bits which can be modified in
+@@ -140,6 +138,7 @@ static const char *spectre_v2_strings[] = {
+ 	[SPECTRE_V2_RETPOLINE_MINIMAL_AMD]	= "Vulnerable: Minimal AMD ASM retpoline",
+ 	[SPECTRE_V2_RETPOLINE_GENERIC]		= "Mitigation: Full generic retpoline",
+ 	[SPECTRE_V2_RETPOLINE_AMD]		= "Mitigation: Full AMD retpoline",
++	[SPECTRE_V2_IBRS_ENHANCED]		= "Mitigation: Enhanced IBRS",
+ };
+ 
+ #undef pr_fmt
+@@ -322,6 +321,46 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
+ 	return cmd;
+ }
+ 
++static bool stibp_needed(void)
++{
++	if (spectre_v2_enabled == SPECTRE_V2_NONE)
++		return false;
++
++	if (!boot_cpu_has(X86_FEATURE_STIBP))
++		return false;
++
++	return true;
++}
++
++static void update_stibp_msr(void *info)
++{
++	wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
++}
++
++void arch_smt_update(void)
++{
++	u64 mask;
++
++	if (!stibp_needed())
++		return;
++
++	mutex_lock(&spec_ctrl_mutex);
++	mask = x86_spec_ctrl_base;
++	if (cpu_smt_control == CPU_SMT_ENABLED)
++		mask |= SPEC_CTRL_STIBP;
++	else
++		mask &= ~SPEC_CTRL_STIBP;
++
++	if (mask != x86_spec_ctrl_base) {
++		pr_info("Spectre v2 cross-process SMT mitigation: %s STIBP\n",
++				cpu_smt_control == CPU_SMT_ENABLED ?
++				"Enabling" : "Disabling");
++		x86_spec_ctrl_base = mask;
++		on_each_cpu(update_stibp_msr, NULL, 1);
++	}
++	mutex_unlock(&spec_ctrl_mutex);
++}
++
+ static void __init spectre_v2_select_mitigation(void)
+ {
+ 	enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
+@@ -341,6 +380,13 @@ static void __init spectre_v2_select_mitigation(void)
+ 
+ 	case SPECTRE_V2_CMD_FORCE:
+ 	case SPECTRE_V2_CMD_AUTO:
++		if (boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) {
++			mode = SPECTRE_V2_IBRS_ENHANCED;
++			/* Force it so VMEXIT will restore correctly */
++			x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
++			wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
++			goto specv2_set_mode;
++		}
+ 		if (IS_ENABLED(CONFIG_RETPOLINE))
+ 			goto retpoline_auto;
+ 		break;
+@@ -378,6 +424,7 @@ retpoline_auto:
+ 		setup_force_cpu_cap(X86_FEATURE_RETPOLINE);
+ 	}
+ 
++specv2_set_mode:
+ 	spectre_v2_enabled = mode;
+ 	pr_info("%s\n", spectre_v2_strings[mode]);
+ 
+@@ -400,12 +447,22 @@ retpoline_auto:
+ 
+ 	/*
+ 	 * Retpoline means the kernel is safe because it has no indirect
+-	 * branches. But firmware isn't, so use IBRS to protect that.
++	 * branches. Enhanced IBRS protects firmware too, so, enable restricted
++	 * speculation around firmware calls only when Enhanced IBRS isn't
++	 * supported.
++	 *
++	 * Use "mode" to check Enhanced IBRS instead of boot_cpu_has(), because
++	 * the user might select retpoline on the kernel command line and if
++	 * the CPU supports Enhanced IBRS, kernel might un-intentionally not
++	 * enable IBRS around firmware calls.
+ 	 */
+-	if (boot_cpu_has(X86_FEATURE_IBRS)) {
++	if (boot_cpu_has(X86_FEATURE_IBRS) && mode != SPECTRE_V2_IBRS_ENHANCED) {
+ 		setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW);
+ 		pr_info("Enabling Restricted Speculation for firmware calls\n");
+ 	}
++
++	/* Enable STIBP if appropriate */
++	arch_smt_update();
+ }
+ 
+ #undef pr_fmt
+@@ -798,6 +855,8 @@ static ssize_t l1tf_show_state(char *buf)
+ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
+ 			       char *buf, unsigned int bug)
+ {
++	int ret;
++
+ 	if (!boot_cpu_has_bug(bug))
+ 		return sprintf(buf, "Not affected\n");
+ 
+@@ -812,10 +871,12 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
+ 		return sprintf(buf, "Mitigation: __user pointer sanitization\n");
+ 
+ 	case X86_BUG_SPECTRE_V2:
+-		return sprintf(buf, "%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
++		ret = sprintf(buf, "%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
+ 			       boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : "",
+ 			       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
++			       (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "",
+ 			       spectre_v2_module_string());
++		return ret;
+ 
+ 	case X86_BUG_SPEC_STORE_BYPASS:
+ 		return sprintf(buf, "%s\n", ssb_strings[ssb_mode]);
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 7d2a7890a823..96643e2c75b8 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -967,6 +967,9 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ 	setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
+ 	setup_force_cpu_bug(X86_BUG_SPECTRE_V2);
+ 
++	if (ia32_cap & ARCH_CAP_IBRS_ALL)
++		setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);
++
+ 	if (x86_match_cpu(cpu_no_meltdown))
+ 		return;
+ 
+diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
+index 23f1691670b6..61a949d84dfa 100644
+--- a/arch/x86/kernel/fpu/signal.c
++++ b/arch/x86/kernel/fpu/signal.c
+@@ -314,7 +314,6 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
+ 		 * thread's fpu state, reconstruct fxstate from the fsave
+ 		 * header. Validate and sanitize the copied state.
+ 		 */
+-		struct fpu *fpu = &tsk->thread.fpu;
+ 		struct user_i387_ia32_struct env;
+ 		int err = 0;
+ 
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index fd46d890296c..ec588cf4fe95 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -2733,10 +2733,13 @@ static int nested_vmx_check_exception(struct kvm_vcpu *vcpu, unsigned long *exit
+ 		}
+ 	} else {
+ 		if (vmcs12->exception_bitmap & (1u << nr)) {
+-			if (nr == DB_VECTOR)
++			if (nr == DB_VECTOR) {
+ 				*exit_qual = vcpu->arch.dr6;
+-			else
++				*exit_qual &= ~(DR6_FIXED_1 | DR6_BT);
++				*exit_qual ^= DR6_RTM;
++			} else {
+ 				*exit_qual = 0;
++			}
+ 			return 1;
+ 		}
+ 	}
+diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
+index 464f53da3a6f..835620ab435f 100644
+--- a/arch/x86/mm/pageattr.c
++++ b/arch/x86/mm/pageattr.c
+@@ -2037,9 +2037,13 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
+ 
+ 	/*
+ 	 * We should perform an IPI and flush all tlbs,
+-	 * but that can deadlock->flush only current cpu:
++	 * but that can deadlock->flush only current cpu.
++	 * Preemption needs to be disabled around __flush_tlb_all() due to
++	 * CR3 reload in __native_flush_tlb().
+ 	 */
++	preempt_disable();
+ 	__flush_tlb_all();
++	preempt_enable();
+ 
+ 	arch_flush_lazy_mmu_mode();
+ }
+diff --git a/arch/x86/platform/olpc/olpc-xo1-rtc.c b/arch/x86/platform/olpc/olpc-xo1-rtc.c
+index a2b4efddd61a..8e7ddd7e313a 100644
+--- a/arch/x86/platform/olpc/olpc-xo1-rtc.c
++++ b/arch/x86/platform/olpc/olpc-xo1-rtc.c
+@@ -16,6 +16,7 @@
+ 
+ #include <asm/msr.h>
+ #include <asm/olpc.h>
++#include <asm/x86_init.h>
+ 
+ static void rtc_wake_on(struct device *dev)
+ {
+@@ -75,6 +76,8 @@ static int __init xo1_rtc_init(void)
+ 	if (r)
+ 		return r;
+ 
++	x86_platform.legacy.rtc = 0;
++
+ 	device_init_wakeup(&xo1_rtc_device.dev, 1);
+ 	return 0;
+ }
+diff --git a/arch/x86/xen/enlighten_pvh.c b/arch/x86/xen/enlighten_pvh.c
+index 7bd3ee08393e..d6d7b29b3be0 100644
+--- a/arch/x86/xen/enlighten_pvh.c
++++ b/arch/x86/xen/enlighten_pvh.c
+@@ -76,7 +76,7 @@ static void __init init_pvh_bootparams(void)
+ 	 * Version 2.12 supports Xen entry point but we will use default x86/PC
+ 	 * environment (i.e. hardware_subarch 0).
+ 	 */
+-	pvh_bootparams.hdr.version = 0x212;
++	pvh_bootparams.hdr.version = (2 << 8) | 12;
+ 	pvh_bootparams.hdr.type_of_loader = (9 << 4) | 0; /* Xen loader */
+ }
+ 
+diff --git a/arch/x86/xen/platform-pci-unplug.c b/arch/x86/xen/platform-pci-unplug.c
+index 33a783c77d96..184b36922397 100644
+--- a/arch/x86/xen/platform-pci-unplug.c
++++ b/arch/x86/xen/platform-pci-unplug.c
+@@ -146,6 +146,10 @@ void xen_unplug_emulated_devices(void)
+ {
+ 	int r;
+ 
++	/* PVH guests don't have emulated devices. */
++	if (xen_pvh_domain())
++		return;
++
+ 	/* user explicitly requested no unplug */
+ 	if (xen_emul_unplug & XEN_UNPLUG_NEVER)
+ 		return;
+diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
+index 08324c64005d..2527540051ff 100644
+--- a/arch/x86/xen/spinlock.c
++++ b/arch/x86/xen/spinlock.c
+@@ -9,6 +9,7 @@
+ #include <linux/log2.h>
+ #include <linux/gfp.h>
+ #include <linux/slab.h>
++#include <linux/atomic.h>
+ 
+ #include <asm/paravirt.h>
+ 
+@@ -20,6 +21,7 @@
+ 
+ static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
+ static DEFINE_PER_CPU(char *, irq_name);
++static DEFINE_PER_CPU(atomic_t, xen_qlock_wait_nest);
+ static bool xen_pvspin = true;
+ 
+ #include <asm/qspinlock.h>
+@@ -41,33 +43,24 @@ static void xen_qlock_kick(int cpu)
+ static void xen_qlock_wait(u8 *byte, u8 val)
+ {
+ 	int irq = __this_cpu_read(lock_kicker_irq);
++	atomic_t *nest_cnt = this_cpu_ptr(&xen_qlock_wait_nest);
+ 
+ 	/* If kicker interrupts not initialized yet, just spin */
+-	if (irq == -1)
++	if (irq == -1 || in_nmi())
+ 		return;
+ 
+-	/* clear pending */
+-	xen_clear_irq_pending(irq);
+-	barrier();
+-
+-	/*
+-	 * We check the byte value after clearing pending IRQ to make sure
+-	 * that we won't miss a wakeup event because of the clearing.
+-	 *
+-	 * The sync_clear_bit() call in xen_clear_irq_pending() is atomic.
+-	 * So it is effectively a memory barrier for x86.
+-	 */
+-	if (READ_ONCE(*byte) != val)
+-		return;
++	/* Detect reentry. */
++	atomic_inc(nest_cnt);
+ 
+-	/*
+-	 * If an interrupt happens here, it will leave the wakeup irq
+-	 * pending, which will cause xen_poll_irq() to return
+-	 * immediately.
+-	 */
++	/* If irq pending already and no nested call clear it. */
++	if (atomic_read(nest_cnt) == 1 && xen_test_irq_pending(irq)) {
++		xen_clear_irq_pending(irq);
++	} else if (READ_ONCE(*byte) == val) {
++		/* Block until irq becomes pending (or a spurious wakeup) */
++		xen_poll_irq(irq);
++	}
+ 
+-	/* Block until irq becomes pending (or perhaps a spurious wakeup) */
+-	xen_poll_irq(irq);
++	atomic_dec(nest_cnt);
+ }
+ 
+ static irqreturn_t dummy_handler(int irq, void *dev_id)
+diff --git a/arch/x86/xen/xen-pvh.S b/arch/x86/xen/xen-pvh.S
+index 5d7554c025fd..7ecbd3dde2ea 100644
+--- a/arch/x86/xen/xen-pvh.S
++++ b/arch/x86/xen/xen-pvh.S
+@@ -178,7 +178,7 @@ canary:
+ 	.fill 48, 1, 0
+ 
+ early_stack:
+-	.fill 256, 1, 0
++	.fill BOOT_STACK_SIZE, 1, 0
+ early_stack_end:
+ 
+ 	ELFNOTE(Xen, XEN_ELFNOTE_PHYS32_ENTRY,
+diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
+index 414ba686a847..c1727604ad14 100644
+--- a/block/bfq-wf2q.c
++++ b/block/bfq-wf2q.c
+@@ -1172,10 +1172,17 @@ bool __bfq_deactivate_entity(struct bfq_entity *entity, bool ins_into_idle_tree)
+ 	st = bfq_entity_service_tree(entity);
+ 	is_in_service = entity == sd->in_service_entity;
+ 
+-	if (is_in_service) {
+-		bfq_calc_finish(entity, entity->service);
++	bfq_calc_finish(entity, entity->service);
++
++	if (is_in_service)
+ 		sd->in_service_entity = NULL;
+-	}
++	else
++		/*
++		 * Non in-service entity: nobody will take care of
++		 * resetting its service counter on expiration. Do it
++		 * now.
++		 */
++		entity->service = 0;
+ 
+ 	if (entity->tree == &st->active)
+ 		bfq_active_extract(st, entity);
+diff --git a/crypto/lrw.c b/crypto/lrw.c
+index fdba6dd6db63..886f91f2426c 100644
+--- a/crypto/lrw.c
++++ b/crypto/lrw.c
+@@ -139,7 +139,12 @@ static inline int get_index128(be128 *block)
+ 		return x + ffz(val);
+ 	}
+ 
+-	return x;
++	/*
++	 * If we get here, then x == 128 and we are incrementing the counter
++	 * from all ones to all zeros. This means we must return index 127, i.e.
++	 * the one corresponding to key2*{ 1,...,1 }.
++	 */
++	return 127;
+ }
+ 
+ static int post_crypt(struct skcipher_request *req)
+diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
+index e339960dcac7..f7affe7cf0b4 100644
+--- a/crypto/tcrypt.c
++++ b/crypto/tcrypt.c
+@@ -727,6 +727,9 @@ static void test_ahash_speed_common(const char *algo, unsigned int secs,
+ 			break;
+ 		}
+ 
++		if (speed[i].klen)
++			crypto_ahash_setkey(tfm, tvmem[0], speed[i].klen);
++
+ 		pr_info("test%3u "
+ 			"(%5u byte blocks,%5u bytes per update,%4u updates): ",
+ 			i, speed[i].blen, speed[i].plen, speed[i].blen / speed[i].plen);
+diff --git a/drivers/acpi/acpi_lpss.c b/drivers/acpi/acpi_lpss.c
+index 75c3cb377b98..a56d3f352765 100644
+--- a/drivers/acpi/acpi_lpss.c
++++ b/drivers/acpi/acpi_lpss.c
+@@ -326,9 +326,11 @@ static const struct acpi_device_id acpi_lpss_device_ids[] = {
+ 	{ "INT33FC", },
+ 
+ 	/* Braswell LPSS devices */
++	{ "80862286", LPSS_ADDR(lpss_dma_desc) },
+ 	{ "80862288", LPSS_ADDR(bsw_pwm_dev_desc) },
+ 	{ "8086228A", LPSS_ADDR(bsw_uart_dev_desc) },
+ 	{ "8086228E", LPSS_ADDR(bsw_spi_dev_desc) },
++	{ "808622C0", LPSS_ADDR(lpss_dma_desc) },
+ 	{ "808622C1", LPSS_ADDR(bsw_i2c_dev_desc) },
+ 
+ 	/* Broadwell LPSS devices */
+diff --git a/drivers/acpi/acpi_processor.c b/drivers/acpi/acpi_processor.c
+index 86c10599d9f8..ccf07674a2a0 100644
+--- a/drivers/acpi/acpi_processor.c
++++ b/drivers/acpi/acpi_processor.c
+@@ -642,7 +642,7 @@ static acpi_status __init acpi_processor_ids_walk(acpi_handle handle,
+ 
+ 	status = acpi_get_type(handle, &acpi_type);
+ 	if (ACPI_FAILURE(status))
+-		return false;
++		return status;
+ 
+ 	switch (acpi_type) {
+ 	case ACPI_TYPE_PROCESSOR:
+@@ -662,11 +662,12 @@ static acpi_status __init acpi_processor_ids_walk(acpi_handle handle,
+ 	}
+ 
+ 	processor_validated_ids_update(uid);
+-	return true;
++	return AE_OK;
+ 
+ err:
++	/* Exit on error, but don't abort the namespace walk */
+ 	acpi_handle_info(handle, "Invalid processor object\n");
+-	return false;
++	return AE_OK;
+ 
+ }
+ 
+diff --git a/drivers/block/ataflop.c b/drivers/block/ataflop.c
+index 92da886180aa..1dacc42e2dcf 100644
+--- a/drivers/block/ataflop.c
++++ b/drivers/block/ataflop.c
+@@ -1935,6 +1935,11 @@ static int __init atari_floppy_init (void)
+ 		unit[i].disk = alloc_disk(1);
+ 		if (!unit[i].disk)
+ 			goto Enomem;
++
++		unit[i].disk->queue = blk_init_queue(do_fd_request,
++						     &ataflop_lock);
++		if (!unit[i].disk->queue)
++			goto Enomem;
+ 	}
+ 
+ 	if (UseTrackbuffer < 0)
+@@ -1966,10 +1971,6 @@ static int __init atari_floppy_init (void)
+ 		sprintf(unit[i].disk->disk_name, "fd%d", i);
+ 		unit[i].disk->fops = &floppy_fops;
+ 		unit[i].disk->private_data = &unit[i];
+-		unit[i].disk->queue = blk_init_queue(do_fd_request,
+-					&ataflop_lock);
+-		if (!unit[i].disk->queue)
+-			goto Enomem;
+ 		set_capacity(unit[i].disk, MAX_DISK_SIZE * 2);
+ 		add_disk(unit[i].disk);
+ 	}
+@@ -1984,13 +1985,17 @@ static int __init atari_floppy_init (void)
+ 
+ 	return 0;
+ Enomem:
+-	while (i--) {
+-		struct request_queue *q = unit[i].disk->queue;
++	do {
++		struct gendisk *disk = unit[i].disk;
+ 
+-		put_disk(unit[i].disk);
+-		if (q)
+-			blk_cleanup_queue(q);
+-	}
++		if (disk) {
++			if (disk->queue) {
++				blk_cleanup_queue(disk->queue);
++				disk->queue = NULL;
++			}
++			put_disk(unit[i].disk);
++		}
++	} while (i--);
+ 
+ 	unregister_blkdev(FLOPPY_MAJOR, "fd");
+ 	return -ENOMEM;
+diff --git a/drivers/block/swim.c b/drivers/block/swim.c
+index e88d50f75a4a..58e308145e95 100644
+--- a/drivers/block/swim.c
++++ b/drivers/block/swim.c
+@@ -887,8 +887,17 @@ static int swim_floppy_init(struct swim_priv *swd)
+ 
+ exit_put_disks:
+ 	unregister_blkdev(FLOPPY_MAJOR, "fd");
+-	while (drive--)
+-		put_disk(swd->unit[drive].disk);
++	do {
++		struct gendisk *disk = swd->unit[drive].disk;
++
++		if (disk) {
++			if (disk->queue) {
++				blk_cleanup_queue(disk->queue);
++				disk->queue = NULL;
++			}
++			put_disk(disk);
++		}
++	} while (drive--);
+ 	return err;
+ }
+ 
+diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
+index 7d23225f79ed..32ac5f551e55 100644
+--- a/drivers/block/xen-blkfront.c
++++ b/drivers/block/xen-blkfront.c
+@@ -1910,6 +1910,7 @@ static int negotiate_mq(struct blkfront_info *info)
+ 	info->rinfo = kzalloc(sizeof(struct blkfront_ring_info) * info->nr_rings, GFP_KERNEL);
+ 	if (!info->rinfo) {
+ 		xenbus_dev_fatal(info->xbdev, -ENOMEM, "allocating ring_info structure");
++		info->nr_rings = 0;
+ 		return -ENOMEM;
+ 	}
+ 
+@@ -2471,6 +2472,9 @@ static int blkfront_remove(struct xenbus_device *xbdev)
+ 
+ 	dev_dbg(&xbdev->dev, "%s removed", xbdev->nodename);
+ 
++	if (!info)
++		return 0;
++
+ 	blkif_free(info, 0);
+ 
+ 	mutex_lock(&info->mutex);
+diff --git a/drivers/bluetooth/btbcm.c b/drivers/bluetooth/btbcm.c
+index cc4bdefa6648..67315cb28826 100644
+--- a/drivers/bluetooth/btbcm.c
++++ b/drivers/bluetooth/btbcm.c
+@@ -325,6 +325,7 @@ static const struct {
+ 	{ 0x4103, "BCM4330B1"	},	/* 002.001.003 */
+ 	{ 0x410e, "BCM43341B0"	},	/* 002.001.014 */
+ 	{ 0x4406, "BCM4324B3"	},	/* 002.004.006 */
++	{ 0x6109, "BCM4335C0"	},	/* 003.001.009 */
+ 	{ 0x610c, "BCM4354"	},	/* 003.001.012 */
+ 	{ 0x2209, "BCM43430A1"  },	/* 001.002.009 */
+ 	{ }
+diff --git a/drivers/char/ipmi/ipmi_ssif.c b/drivers/char/ipmi/ipmi_ssif.c
+index 932678617dfa..0904ab442d31 100644
+--- a/drivers/char/ipmi/ipmi_ssif.c
++++ b/drivers/char/ipmi/ipmi_ssif.c
+@@ -621,8 +621,9 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
+ 			flags = ipmi_ssif_lock_cond(ssif_info, &oflags);
+ 			ssif_info->waiting_alert = true;
+ 			ssif_info->rtc_us_timer = SSIF_MSG_USEC;
+-			mod_timer(&ssif_info->retry_timer,
+-				  jiffies + SSIF_MSG_JIFFIES);
++			if (!ssif_info->stopping)
++				mod_timer(&ssif_info->retry_timer,
++					  jiffies + SSIF_MSG_JIFFIES);
+ 			ipmi_ssif_unlock_cond(ssif_info, flags);
+ 			return;
+ 		}
+@@ -954,8 +955,9 @@ static void msg_written_handler(struct ssif_info *ssif_info, int result,
+ 			ssif_info->waiting_alert = true;
+ 			ssif_info->retries_left = SSIF_RECV_RETRIES;
+ 			ssif_info->rtc_us_timer = SSIF_MSG_PART_USEC;
+-			mod_timer(&ssif_info->retry_timer,
+-				  jiffies + SSIF_MSG_PART_JIFFIES);
++			if (!ssif_info->stopping)
++				mod_timer(&ssif_info->retry_timer,
++					  jiffies + SSIF_MSG_PART_JIFFIES);
+ 			ipmi_ssif_unlock_cond(ssif_info, flags);
+ 		}
+ 	}
+diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
+index 89d5915b1a3f..6e93df272c20 100644
+--- a/drivers/char/tpm/tpm-interface.c
++++ b/drivers/char/tpm/tpm-interface.c
+@@ -653,7 +653,8 @@ ssize_t tpm_transmit_cmd(struct tpm_chip *chip, struct tpm_space *space,
+ 		return len;
+ 
+ 	err = be32_to_cpu(header->return_code);
+-	if (err != 0 && desc)
++	if (err != 0 && err != TPM_ERR_DISABLED && err != TPM_ERR_DEACTIVATED
++	    && desc)
+ 		dev_err(&chip->dev, "A TPM error (%d) occurred %s\n", err,
+ 			desc);
+ 	if (err)
+diff --git a/drivers/char/tpm/xen-tpmfront.c b/drivers/char/tpm/xen-tpmfront.c
+index 656e8af95d52..2cffaf567d99 100644
+--- a/drivers/char/tpm/xen-tpmfront.c
++++ b/drivers/char/tpm/xen-tpmfront.c
+@@ -203,7 +203,7 @@ static int setup_ring(struct xenbus_device *dev, struct tpm_private *priv)
+ 		return -ENOMEM;
+ 	}
+ 
+-	rv = xenbus_grant_ring(dev, &priv->shr, 1, &gref);
++	rv = xenbus_grant_ring(dev, priv->shr, 1, &gref);
+ 	if (rv < 0)
+ 		return rv;
+ 
+diff --git a/drivers/cpufreq/cpufreq-dt.c b/drivers/cpufreq/cpufreq-dt.c
+index d83ab94d041a..ca6ee9f389b6 100644
+--- a/drivers/cpufreq/cpufreq-dt.c
++++ b/drivers/cpufreq/cpufreq-dt.c
+@@ -32,6 +32,7 @@ struct private_data {
+ 	struct device *cpu_dev;
+ 	struct thermal_cooling_device *cdev;
+ 	const char *reg_name;
++	bool have_static_opps;
+ };
+ 
+ static struct freq_attr *cpufreq_dt_attr[] = {
+@@ -196,6 +197,15 @@ static int cpufreq_init(struct cpufreq_policy *policy)
+ 		}
+ 	}
+ 
++	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
++	if (!priv) {
++		ret = -ENOMEM;
++		goto out_put_regulator;
++	}
++
++	priv->reg_name = name;
++	priv->opp_table = opp_table;
++
+ 	/*
+ 	 * Initialize OPP tables for all policy->cpus. They will be shared by
+ 	 * all CPUs which have marked their CPUs shared with OPP bindings.
+@@ -206,7 +216,8 @@ static int cpufreq_init(struct cpufreq_policy *policy)
+ 	 *
+ 	 * OPPs might be populated at runtime, don't check for error here
+ 	 */
+-	dev_pm_opp_of_cpumask_add_table(policy->cpus);
++	if (!dev_pm_opp_of_cpumask_add_table(policy->cpus))
++		priv->have_static_opps = true;
+ 
+ 	/*
+ 	 * But we need OPP table to function so if it is not there let's
+@@ -232,19 +243,10 @@ static int cpufreq_init(struct cpufreq_policy *policy)
+ 				__func__, ret);
+ 	}
+ 
+-	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+-	if (!priv) {
+-		ret = -ENOMEM;
+-		goto out_free_opp;
+-	}
+-
+-	priv->reg_name = name;
+-	priv->opp_table = opp_table;
+-
+ 	ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table);
+ 	if (ret) {
+ 		dev_err(cpu_dev, "failed to init cpufreq table: %d\n", ret);
+-		goto out_free_priv;
++		goto out_free_opp;
+ 	}
+ 
+ 	priv->cpu_dev = cpu_dev;
+@@ -280,10 +282,11 @@ static int cpufreq_init(struct cpufreq_policy *policy)
+ 
+ out_free_cpufreq_table:
+ 	dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table);
+-out_free_priv:
+-	kfree(priv);
+ out_free_opp:
+-	dev_pm_opp_of_cpumask_remove_table(policy->cpus);
++	if (priv->have_static_opps)
++		dev_pm_opp_of_cpumask_remove_table(policy->cpus);
++	kfree(priv);
++out_put_regulator:
+ 	if (name)
+ 		dev_pm_opp_put_regulators(opp_table);
+ out_put_clk:
+@@ -298,7 +301,8 @@ static int cpufreq_exit(struct cpufreq_policy *policy)
+ 
+ 	cpufreq_cooling_unregister(priv->cdev);
+ 	dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table);
+-	dev_pm_opp_of_cpumask_remove_table(policy->related_cpus);
++	if (priv->have_static_opps)
++		dev_pm_opp_of_cpumask_remove_table(policy->related_cpus);
+ 	if (priv->reg_name)
+ 		dev_pm_opp_put_regulators(priv->opp_table);
+ 
+diff --git a/drivers/cpufreq/cpufreq_conservative.c b/drivers/cpufreq/cpufreq_conservative.c
+index f20f20a77d4d..4268f87e99fc 100644
+--- a/drivers/cpufreq/cpufreq_conservative.c
++++ b/drivers/cpufreq/cpufreq_conservative.c
+@@ -80,8 +80,10 @@ static unsigned int cs_dbs_update(struct cpufreq_policy *policy)
+ 	 * changed in the meantime, so fall back to current frequency in that
+ 	 * case.
+ 	 */
+-	if (requested_freq > policy->max || requested_freq < policy->min)
++	if (requested_freq > policy->max || requested_freq < policy->min) {
+ 		requested_freq = policy->cur;
++		dbs_info->requested_freq = requested_freq;
++	}
+ 
+ 	freq_step = get_freq_step(cs_tuners, policy);
+ 
+@@ -92,7 +94,7 @@ static unsigned int cs_dbs_update(struct cpufreq_policy *policy)
+ 	if (policy_dbs->idle_periods < UINT_MAX) {
+ 		unsigned int freq_steps = policy_dbs->idle_periods * freq_step;
+ 
+-		if (requested_freq > freq_steps)
++		if (requested_freq > policy->min + freq_steps)
+ 			requested_freq -= freq_steps;
+ 		else
+ 			requested_freq = policy->min;
+diff --git a/drivers/crypto/caam/regs.h b/drivers/crypto/caam/regs.h
+index fee363865d88..e5513cc59ec3 100644
+--- a/drivers/crypto/caam/regs.h
++++ b/drivers/crypto/caam/regs.h
+@@ -70,22 +70,22 @@
+ extern bool caam_little_end;
+ extern bool caam_imx;
+ 
+-#define caam_to_cpu(len)				\
+-static inline u##len caam##len ## _to_cpu(u##len val)	\
+-{							\
+-	if (caam_little_end)				\
+-		return le##len ## _to_cpu(val);		\
+-	else						\
+-		return be##len ## _to_cpu(val);		\
++#define caam_to_cpu(len)						\
++static inline u##len caam##len ## _to_cpu(u##len val)			\
++{									\
++	if (caam_little_end)						\
++		return le##len ## _to_cpu((__force __le##len)val);	\
++	else								\
++		return be##len ## _to_cpu((__force __be##len)val);	\
+ }
+ 
+-#define cpu_to_caam(len)				\
+-static inline u##len cpu_to_caam##len(u##len val)	\
+-{							\
+-	if (caam_little_end)				\
+-		return cpu_to_le##len(val);		\
+-	else						\
+-		return cpu_to_be##len(val);		\
++#define cpu_to_caam(len)					\
++static inline u##len cpu_to_caam##len(u##len val)		\
++{								\
++	if (caam_little_end)					\
++		return (__force u##len)cpu_to_le##len(val);	\
++	else							\
++		return (__force u##len)cpu_to_be##len(val);	\
+ }
+ 
+ caam_to_cpu(16)
+diff --git a/drivers/dma/dma-jz4780.c b/drivers/dma/dma-jz4780.c
+index 7373b7a555ec..803cfb4523b0 100644
+--- a/drivers/dma/dma-jz4780.c
++++ b/drivers/dma/dma-jz4780.c
+@@ -754,6 +754,11 @@ static int jz4780_dma_probe(struct platform_device *pdev)
+ 	struct resource *res;
+ 	int i, ret;
+ 
++	if (!dev->of_node) {
++		dev_err(dev, "This driver must be probed from devicetree\n");
++		return -EINVAL;
++	}
++
+ 	jzdma = devm_kzalloc(dev, sizeof(*jzdma), GFP_KERNEL);
+ 	if (!jzdma)
+ 		return -ENOMEM;
+diff --git a/drivers/dma/ioat/init.c b/drivers/dma/ioat/init.c
+index 854deb0da07c..68680e4151ea 100644
+--- a/drivers/dma/ioat/init.c
++++ b/drivers/dma/ioat/init.c
+@@ -1205,8 +1205,15 @@ static void ioat_shutdown(struct pci_dev *pdev)
+ 
+ 		spin_lock_bh(&ioat_chan->prep_lock);
+ 		set_bit(IOAT_CHAN_DOWN, &ioat_chan->state);
+-		del_timer_sync(&ioat_chan->timer);
+ 		spin_unlock_bh(&ioat_chan->prep_lock);
++		/*
++		 * Synchronization rule for del_timer_sync():
++		 *  - The caller must not hold locks which would prevent
++		 *    completion of the timer's handler.
++		 * So prep_lock cannot be held before calling it.
++		 */
++		del_timer_sync(&ioat_chan->timer);
++
+ 		/* this should quiesce then reset */
+ 		ioat_reset_hw(ioat_chan);
+ 	}
+diff --git a/drivers/dma/ppc4xx/adma.c b/drivers/dma/ppc4xx/adma.c
+index 4cf0d4d0cecf..25610286979f 100644
+--- a/drivers/dma/ppc4xx/adma.c
++++ b/drivers/dma/ppc4xx/adma.c
+@@ -4360,7 +4360,7 @@ static ssize_t enable_store(struct device_driver *dev, const char *buf,
+ }
+ static DRIVER_ATTR_RW(enable);
+ 
+-static ssize_t poly_store(struct device_driver *dev, char *buf)
++static ssize_t poly_show(struct device_driver *dev, char *buf)
+ {
+ 	ssize_t size = 0;
+ 	u32 reg;
+diff --git a/drivers/dma/stm32-dma.c b/drivers/dma/stm32-dma.c
+index 786fc8fcc38e..32192e98159b 100644
+--- a/drivers/dma/stm32-dma.c
++++ b/drivers/dma/stm32-dma.c
+@@ -429,6 +429,8 @@ static void stm32_dma_dump_reg(struct stm32_dma_chan *chan)
+ 	dev_dbg(chan2dev(chan), "SFCR:  0x%08x\n", sfcr);
+ }
+ 
++static void stm32_dma_configure_next_sg(struct stm32_dma_chan *chan);
++
+ static void stm32_dma_start_transfer(struct stm32_dma_chan *chan)
+ {
+ 	struct stm32_dma_device *dmadev = stm32_dma_get_dev(chan);
+@@ -471,6 +473,9 @@ static void stm32_dma_start_transfer(struct stm32_dma_chan *chan)
+ 	if (status)
+ 		stm32_dma_irq_clear(chan, status);
+ 
++	if (chan->desc->cyclic)
++		stm32_dma_configure_next_sg(chan);
++
+ 	stm32_dma_dump_reg(chan);
+ 
+ 	/* Start DMA */
+@@ -564,8 +569,7 @@ static void stm32_dma_issue_pending(struct dma_chan *c)
+ 	if (vchan_issue_pending(&chan->vchan) && !chan->desc && !chan->busy) {
+ 		dev_dbg(chan2dev(chan), "vchan %p: issued\n", &chan->vchan);
+ 		stm32_dma_start_transfer(chan);
+-		if (chan->desc->cyclic)
+-			stm32_dma_configure_next_sg(chan);
++
+ 	}
+ 	spin_unlock_irqrestore(&chan->vchan.lock, flags);
+ }
+diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c
+index 59ce32e405ac..667f5ba0403c 100644
+--- a/drivers/edac/amd64_edac.c
++++ b/drivers/edac/amd64_edac.c
+@@ -2200,6 +2200,15 @@ static struct amd64_family_type family_types[] = {
+ 			.dbam_to_cs		= f17_base_addr_to_cs_size,
+ 		}
+ 	},
++	[F17_M10H_CPUS] = {
++		.ctl_name = "F17h_M10h",
++		.f0_id = PCI_DEVICE_ID_AMD_17H_M10H_DF_F0,
++		.f6_id = PCI_DEVICE_ID_AMD_17H_M10H_DF_F6,
++		.ops = {
++			.early_channel_count	= f17_early_channel_count,
++			.dbam_to_cs		= f17_base_addr_to_cs_size,
++		}
++	},
+ };
+ 
+ /*
+@@ -3188,6 +3197,11 @@ static struct amd64_family_type *per_family_init(struct amd64_pvt *pvt)
+ 		break;
+ 
+ 	case 0x17:
++		if (pvt->model >= 0x10 && pvt->model <= 0x2f) {
++			fam_type = &family_types[F17_M10H_CPUS];
++			pvt->ops = &family_types[F17_M10H_CPUS].ops;
++			break;
++		}
+ 		fam_type	= &family_types[F17_CPUS];
+ 		pvt->ops	= &family_types[F17_CPUS].ops;
+ 		break;
+diff --git a/drivers/edac/amd64_edac.h b/drivers/edac/amd64_edac.h
+index 1d4b74e9a037..4242f8e39c18 100644
+--- a/drivers/edac/amd64_edac.h
++++ b/drivers/edac/amd64_edac.h
+@@ -115,6 +115,8 @@
+ #define PCI_DEVICE_ID_AMD_16H_M30H_NB_F2 0x1582
+ #define PCI_DEVICE_ID_AMD_17H_DF_F0	0x1460
+ #define PCI_DEVICE_ID_AMD_17H_DF_F6	0x1466
++#define PCI_DEVICE_ID_AMD_17H_M10H_DF_F0 0x15e8
++#define PCI_DEVICE_ID_AMD_17H_M10H_DF_F6 0x15ee
+ 
+ /*
+  * Function 1 - Address Map
+@@ -281,6 +283,7 @@ enum amd_families {
+ 	F16_CPUS,
+ 	F16_M30H_CPUS,
+ 	F17_CPUS,
++	F17_M10H_CPUS,
+ 	NUM_FAMILIES,
+ };
+ 
+diff --git a/drivers/edac/i7core_edac.c b/drivers/edac/i7core_edac.c
+index 6c7d5f20eacb..2054a24b41d7 100644
+--- a/drivers/edac/i7core_edac.c
++++ b/drivers/edac/i7core_edac.c
+@@ -1711,6 +1711,7 @@ static void i7core_mce_output_error(struct mem_ctl_info *mci,
+ 	u32 errnum = find_first_bit(&error, 32);
+ 
+ 	if (uncorrected_error) {
++		core_err_cnt = 1;
+ 		if (ripv)
+ 			tp_event = HW_EVENT_ERR_FATAL;
+ 		else
+diff --git a/drivers/edac/sb_edac.c b/drivers/edac/sb_edac.c
+index 0dc0d595c47c..b0b390a1da15 100644
+--- a/drivers/edac/sb_edac.c
++++ b/drivers/edac/sb_edac.c
+@@ -2891,6 +2891,7 @@ static void sbridge_mce_output_error(struct mem_ctl_info *mci,
+ 		recoverable = GET_BITFIELD(m->status, 56, 56);
+ 
+ 	if (uncorrected_error) {
++		core_err_cnt = 1;
+ 		if (ripv) {
+ 			type = "FATAL";
+ 			tp_event = HW_EVENT_ERR_FATAL;
+diff --git a/drivers/edac/skx_edac.c b/drivers/edac/skx_edac.c
+index 16dea97568a1..5dafd4fa8f5e 100644
+--- a/drivers/edac/skx_edac.c
++++ b/drivers/edac/skx_edac.c
+@@ -604,7 +604,7 @@ sad_found:
+ 			break;
+ 		case 2:
+ 			lchan = (addr >> shift) % 2;
+-			lchan = (lchan << 1) | ~lchan;
++			lchan = (lchan << 1) | !lchan;
+ 			break;
+ 		case 3:
+ 			lchan = ((addr >> shift) % 2) << 1;
+@@ -895,6 +895,7 @@ static void skx_mce_output_error(struct mem_ctl_info *mci,
+ 	recoverable = GET_BITFIELD(m->status, 56, 56);
+ 
+ 	if (uncorrected_error) {
++		core_err_cnt = 1;
+ 		if (ripv) {
+ 			type = "FATAL";
+ 			tp_event = HW_EVENT_ERR_FATAL;
+diff --git a/drivers/hid/usbhid/hiddev.c b/drivers/hid/usbhid/hiddev.c
+index cf307bdc3d53..89761551c15d 100644
+--- a/drivers/hid/usbhid/hiddev.c
++++ b/drivers/hid/usbhid/hiddev.c
+@@ -512,14 +512,24 @@ static noinline int hiddev_ioctl_usage(struct hiddev *hiddev, unsigned int cmd,
+ 			if (cmd == HIDIOCGCOLLECTIONINDEX) {
+ 				if (uref->usage_index >= field->maxusage)
+ 					goto inval;
++				uref->usage_index =
++					array_index_nospec(uref->usage_index,
++							   field->maxusage);
+ 			} else if (uref->usage_index >= field->report_count)
+ 				goto inval;
+ 		}
+ 
+-		if ((cmd == HIDIOCGUSAGES || cmd == HIDIOCSUSAGES) &&
+-		    (uref_multi->num_values > HID_MAX_MULTI_USAGES ||
+-		     uref->usage_index + uref_multi->num_values > field->report_count))
+-			goto inval;
++		if (cmd == HIDIOCGUSAGES || cmd == HIDIOCSUSAGES) {
++			if (uref_multi->num_values > HID_MAX_MULTI_USAGES ||
++			    uref->usage_index + uref_multi->num_values >
++			    field->report_count)
++				goto inval;
++
++			uref->usage_index =
++				array_index_nospec(uref->usage_index,
++						   field->report_count -
++						   uref_multi->num_values);
++		}
+ 
+ 		switch (cmd) {
+ 		case HIDIOCGUSAGE:
+diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
+index 1700b4e7758d..752c52f7353d 100644
+--- a/drivers/hv/channel_mgmt.c
++++ b/drivers/hv/channel_mgmt.c
+@@ -599,16 +599,18 @@ static void init_vp_index(struct vmbus_channel *channel, u16 dev_type)
+ 	bool perf_chn = vmbus_devs[dev_type].perf_device;
+ 	struct vmbus_channel *primary = channel->primary_channel;
+ 	int next_node;
+-	struct cpumask available_mask;
++	cpumask_var_t available_mask;
+ 	struct cpumask *alloced_mask;
+ 
+ 	if ((vmbus_proto_version == VERSION_WS2008) ||
+-	    (vmbus_proto_version == VERSION_WIN7) || (!perf_chn)) {
++	    (vmbus_proto_version == VERSION_WIN7) || (!perf_chn) ||
++	    !alloc_cpumask_var(&available_mask, GFP_KERNEL)) {
+ 		/*
+ 		 * Prior to win8, all channel interrupts are
+ 		 * delivered on cpu 0.
+ 		 * Also if the channel is not a performance critical
+ 		 * channel, bind it to cpu 0.
++		 * In case alloc_cpumask_var() fails, bind it to cpu 0.
+ 		 */
+ 		channel->numa_node = 0;
+ 		channel->target_cpu = 0;
+@@ -646,7 +648,7 @@ static void init_vp_index(struct vmbus_channel *channel, u16 dev_type)
+ 		cpumask_clear(alloced_mask);
+ 	}
+ 
+-	cpumask_xor(&available_mask, alloced_mask,
++	cpumask_xor(available_mask, alloced_mask,
+ 		    cpumask_of_node(primary->numa_node));
+ 
+ 	cur_cpu = -1;
+@@ -664,10 +666,10 @@ static void init_vp_index(struct vmbus_channel *channel, u16 dev_type)
+ 	}
+ 
+ 	while (true) {
+-		cur_cpu = cpumask_next(cur_cpu, &available_mask);
++		cur_cpu = cpumask_next(cur_cpu, available_mask);
+ 		if (cur_cpu >= nr_cpu_ids) {
+ 			cur_cpu = -1;
+-			cpumask_copy(&available_mask,
++			cpumask_copy(available_mask,
+ 				     cpumask_of_node(primary->numa_node));
+ 			continue;
+ 		}
+@@ -697,6 +699,8 @@ static void init_vp_index(struct vmbus_channel *channel, u16 dev_type)
+ 
+ 	channel->target_cpu = cur_cpu;
+ 	channel->target_vp = hv_cpu_number_to_vp_number(cur_cpu);
++
++	free_cpumask_var(available_mask);
+ }
+ 
+ static void vmbus_wait_for_unload(void)
+diff --git a/drivers/hwmon/pmbus/pmbus.c b/drivers/hwmon/pmbus/pmbus.c
+index 7718e58dbda5..7688dab32f6e 100644
+--- a/drivers/hwmon/pmbus/pmbus.c
++++ b/drivers/hwmon/pmbus/pmbus.c
+@@ -118,6 +118,8 @@ static int pmbus_identify(struct i2c_client *client,
+ 		} else {
+ 			info->pages = 1;
+ 		}
++
++		pmbus_clear_faults(client);
+ 	}
+ 
+ 	if (pmbus_check_byte_register(client, 0, PMBUS_VOUT_MODE)) {
+diff --git a/drivers/hwmon/pmbus/pmbus_core.c b/drivers/hwmon/pmbus/pmbus_core.c
+index a139940cd991..924f3ca41c65 100644
+--- a/drivers/hwmon/pmbus/pmbus_core.c
++++ b/drivers/hwmon/pmbus/pmbus_core.c
+@@ -1802,7 +1802,10 @@ static int pmbus_init_common(struct i2c_client *client, struct pmbus_data *data,
+ 	if (ret >= 0 && (ret & PB_CAPABILITY_ERROR_CHECK))
+ 		client->flags |= I2C_CLIENT_PEC;
+ 
+-	pmbus_clear_faults(client);
++	if (data->info->pages)
++		pmbus_clear_faults(client);
++	else
++		pmbus_clear_fault_page(client, -1);
+ 
+ 	if (info->identify) {
+ 		ret = (*info->identify)(client, info);
+diff --git a/drivers/hwmon/pwm-fan.c b/drivers/hwmon/pwm-fan.c
+index 70cc0d134f3c..ca250e7ac511 100644
+--- a/drivers/hwmon/pwm-fan.c
++++ b/drivers/hwmon/pwm-fan.c
+@@ -290,9 +290,19 @@ static int pwm_fan_remove(struct platform_device *pdev)
+ static int pwm_fan_suspend(struct device *dev)
+ {
+ 	struct pwm_fan_ctx *ctx = dev_get_drvdata(dev);
++	struct pwm_args args;
++	int ret;
++
++	pwm_get_args(ctx->pwm, &args);
++
++	if (ctx->pwm_value) {
++		ret = pwm_config(ctx->pwm, 0, args.period);
++		if (ret < 0)
++			return ret;
+ 
+-	if (ctx->pwm_value)
+ 		pwm_disable(ctx->pwm);
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/hwtracing/coresight/coresight-etb10.c b/drivers/hwtracing/coresight/coresight-etb10.c
+index 56ecd7aff5eb..d14a9cb7959a 100644
+--- a/drivers/hwtracing/coresight/coresight-etb10.c
++++ b/drivers/hwtracing/coresight/coresight-etb10.c
+@@ -155,6 +155,10 @@ static int etb_enable(struct coresight_device *csdev, u32 mode)
+ 	if (val == CS_MODE_PERF)
+ 		return -EBUSY;
+ 
++	/* Don't let perf disturb sysFS sessions */
++	if (val == CS_MODE_SYSFS && mode == CS_MODE_PERF)
++		return -EBUSY;
++
+ 	/* Nothing to do, the tracer is already enabled. */
+ 	if (val == CS_MODE_SYSFS)
+ 		goto out;
+diff --git a/drivers/i2c/busses/i2c-rcar.c b/drivers/i2c/busses/i2c-rcar.c
+index 7f044df1ea07..3415733a9364 100644
+--- a/drivers/i2c/busses/i2c-rcar.c
++++ b/drivers/i2c/busses/i2c-rcar.c
+@@ -761,8 +761,12 @@ static int rcar_i2c_master_xfer(struct i2c_adapter *adap,
+ 
+ 	time_left = wait_event_timeout(priv->wait, priv->flags & ID_DONE,
+ 				     num * adap->timeout);
+-	if (!time_left) {
++
++	/* cleanup DMA if it couldn't complete properly due to an error */
++	if (priv->dma_direction != DMA_NONE)
+ 		rcar_i2c_cleanup_dma(priv);
++
++	if (!time_left) {
+ 		rcar_i2c_init(priv);
+ 		ret = -ETIMEDOUT;
+ 	} else if (priv->flags & ID_NACK) {
+diff --git a/drivers/iio/adc/at91_adc.c b/drivers/iio/adc/at91_adc.c
+index 15109728cae7..cd686179aa92 100644
+--- a/drivers/iio/adc/at91_adc.c
++++ b/drivers/iio/adc/at91_adc.c
+@@ -248,12 +248,14 @@ static irqreturn_t at91_adc_trigger_handler(int irq, void *p)
+ 	struct iio_poll_func *pf = p;
+ 	struct iio_dev *idev = pf->indio_dev;
+ 	struct at91_adc_state *st = iio_priv(idev);
++	struct iio_chan_spec const *chan;
+ 	int i, j = 0;
+ 
+ 	for (i = 0; i < idev->masklength; i++) {
+ 		if (!test_bit(i, idev->active_scan_mask))
+ 			continue;
+-		st->buffer[j] = at91_adc_readl(st, AT91_ADC_CHAN(st, i));
++		chan = idev->channels + i;
++		st->buffer[j] = at91_adc_readl(st, AT91_ADC_CHAN(st, chan->channel));
+ 		j++;
+ 	}
+ 
+@@ -279,6 +281,8 @@ static void handle_adc_eoc_trigger(int irq, struct iio_dev *idev)
+ 		iio_trigger_poll(idev->trig);
+ 	} else {
+ 		st->last_value = at91_adc_readl(st, AT91_ADC_CHAN(st, st->chnb));
++		/* Needed to ACK the DRDY interrupt */
++		at91_adc_readl(st, AT91_ADC_LCDR);
+ 		st->done = true;
+ 		wake_up_interruptible(&st->wq_data_avail);
+ 	}
+diff --git a/drivers/iio/adc/fsl-imx25-gcq.c b/drivers/iio/adc/fsl-imx25-gcq.c
+index ea264fa9e567..929c617db364 100644
+--- a/drivers/iio/adc/fsl-imx25-gcq.c
++++ b/drivers/iio/adc/fsl-imx25-gcq.c
+@@ -209,12 +209,14 @@ static int mx25_gcq_setup_cfgs(struct platform_device *pdev,
+ 		ret = of_property_read_u32(child, "reg", &reg);
+ 		if (ret) {
+ 			dev_err(dev, "Failed to get reg property\n");
++			of_node_put(child);
+ 			return ret;
+ 		}
+ 
+ 		if (reg >= MX25_NUM_CFGS) {
+ 			dev_err(dev,
+ 				"reg value is greater than the number of available configuration registers\n");
++			of_node_put(child);
+ 			return -EINVAL;
+ 		}
+ 
+@@ -228,6 +230,7 @@ static int mx25_gcq_setup_cfgs(struct platform_device *pdev,
+ 			if (IS_ERR(priv->vref[refp])) {
+ 				dev_err(dev, "Error, trying to use external voltage reference without a vref-%s regulator.",
+ 					mx25_gcq_refp_names[refp]);
++				of_node_put(child);
+ 				return PTR_ERR(priv->vref[refp]);
+ 			}
+ 			priv->channel_vref_mv[reg] =
+@@ -240,6 +243,7 @@ static int mx25_gcq_setup_cfgs(struct platform_device *pdev,
+ 			break;
+ 		default:
+ 			dev_err(dev, "Invalid positive reference %d\n", refp);
++			of_node_put(child);
+ 			return -EINVAL;
+ 		}
+ 
+@@ -254,10 +258,12 @@ static int mx25_gcq_setup_cfgs(struct platform_device *pdev,
+ 
+ 		if ((refp & MX25_ADCQ_CFG_REFP_MASK) != refp) {
+ 			dev_err(dev, "Invalid fsl,adc-refp property value\n");
++			of_node_put(child);
+ 			return -EINVAL;
+ 		}
+ 		if ((refn & MX25_ADCQ_CFG_REFN_MASK) != refn) {
+ 			dev_err(dev, "Invalid fsl,adc-refn property value\n");
++			of_node_put(child);
+ 			return -EINVAL;
+ 		}
+ 
+diff --git a/drivers/iio/dac/ad5064.c b/drivers/iio/dac/ad5064.c
+index 3f9399c27869..efc762da2ba8 100644
+--- a/drivers/iio/dac/ad5064.c
++++ b/drivers/iio/dac/ad5064.c
+@@ -809,6 +809,40 @@ static int ad5064_set_config(struct ad5064_state *st, unsigned int val)
+ 	return ad5064_write(st, cmd, 0, val, 0);
+ }
+ 
++static int ad5064_request_vref(struct ad5064_state *st, struct device *dev)
++{
++	unsigned int i;
++	int ret;
++
++	for (i = 0; i < ad5064_num_vref(st); ++i)
++		st->vref_reg[i].supply = ad5064_vref_name(st, i);
++
++	if (!st->chip_info->internal_vref)
++		return devm_regulator_bulk_get(dev, ad5064_num_vref(st),
++					       st->vref_reg);
++
++	/*
++	 * This assumes that when the regulator has an internal VREF
++	 * there is only one external VREF connection, which is
++	 * currently the case for all supported devices.
++	 */
++	st->vref_reg[0].consumer = devm_regulator_get_optional(dev, "vref");
++	if (!IS_ERR(st->vref_reg[0].consumer))
++		return 0;
++
++	ret = PTR_ERR(st->vref_reg[0].consumer);
++	if (ret != -ENODEV)
++		return ret;
++
++	/* If no external regulator was supplied use the internal VREF */
++	st->use_internal_vref = true;
++	ret = ad5064_set_config(st, AD5064_CONFIG_INT_VREF_ENABLE);
++	if (ret)
++		dev_err(dev, "Failed to enable internal vref: %d\n", ret);
++
++	return ret;
++}
++
+ static int ad5064_probe(struct device *dev, enum ad5064_type type,
+ 			const char *name, ad5064_write_func write)
+ {
+@@ -829,22 +863,11 @@ static int ad5064_probe(struct device *dev, enum ad5064_type type,
+ 	st->dev = dev;
+ 	st->write = write;
+ 
+-	for (i = 0; i < ad5064_num_vref(st); ++i)
+-		st->vref_reg[i].supply = ad5064_vref_name(st, i);
++	ret = ad5064_request_vref(st, dev);
++	if (ret)
++		return ret;
+ 
+-	ret = devm_regulator_bulk_get(dev, ad5064_num_vref(st),
+-		st->vref_reg);
+-	if (ret) {
+-		if (!st->chip_info->internal_vref)
+-			return ret;
+-		st->use_internal_vref = true;
+-		ret = ad5064_set_config(st, AD5064_CONFIG_INT_VREF_ENABLE);
+-		if (ret) {
+-			dev_err(dev, "Failed to enable internal vref: %d\n",
+-				ret);
+-			return ret;
+-		}
+-	} else {
++	if (!st->use_internal_vref) {
+ 		ret = regulator_bulk_enable(ad5064_num_vref(st), st->vref_reg);
+ 		if (ret)
+ 			return ret;
+diff --git a/drivers/infiniband/core/sysfs.c b/drivers/infiniband/core/sysfs.c
+index 0a1e96c25ca3..f75f99476ad0 100644
+--- a/drivers/infiniband/core/sysfs.c
++++ b/drivers/infiniband/core/sysfs.c
+@@ -489,7 +489,7 @@ static ssize_t show_pma_counter(struct ib_port *p, struct port_attribute *attr,
+ 	ret = get_perf_mad(p->ibdev, p->port_num, tab_attr->attr_id, &data,
+ 			40 + offset / 8, sizeof(data));
+ 	if (ret < 0)
+-		return sprintf(buf, "N/A (no PMA)\n");
++		return ret;
+ 
+ 	switch (width) {
+ 	case 4:
+@@ -1012,10 +1012,12 @@ static int add_port(struct ib_device *device, int port_num,
+ 		goto err_put;
+ 	}
+ 
+-	p->pma_table = get_counter_table(device, port_num);
+-	ret = sysfs_create_group(&p->kobj, p->pma_table);
+-	if (ret)
+-		goto err_put_gid_attrs;
++	if (device->process_mad) {
++		p->pma_table = get_counter_table(device, port_num);
++		ret = sysfs_create_group(&p->kobj, p->pma_table);
++		if (ret)
++			goto err_put_gid_attrs;
++	}
+ 
+ 	p->gid_group.name  = "gids";
+ 	p->gid_group.attrs = alloc_group_attrs(show_port_gid, attr.gid_tbl_len);
+@@ -1128,7 +1130,8 @@ err_free_gid:
+ 	p->gid_group.attrs = NULL;
+ 
+ err_remove_pma:
+-	sysfs_remove_group(&p->kobj, p->pma_table);
++	if (p->pma_table)
++		sysfs_remove_group(&p->kobj, p->pma_table);
+ 
+ err_put_gid_attrs:
+ 	kobject_put(&p->gid_attr_group->kobj);
+@@ -1240,7 +1243,9 @@ static void free_port_list_attributes(struct ib_device *device)
+ 			kfree(port->hw_stats);
+ 			free_hsag(&port->kobj, port->hw_stats_ag);
+ 		}
+-		sysfs_remove_group(p, port->pma_table);
++
++		if (port->pma_table)
++			sysfs_remove_group(p, port->pma_table);
+ 		sysfs_remove_group(p, &port->pkey_group);
+ 		sysfs_remove_group(p, &port->gid_group);
+ 		sysfs_remove_group(&port->gid_attr_group->kobj,
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
+index 8d91733009a4..ad74988837c9 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
+@@ -311,8 +311,17 @@ static int bnxt_qplib_process_qp_event(struct bnxt_qplib_rcfw *rcfw,
+ 		bnxt_qplib_release_cq_locks(qp, &flags);
+ 		break;
+ 	default:
+-		/* Command Response */
+-		spin_lock_irqsave(&cmdq->lock, flags);
++		/*
++		 * Command Response
++		 * cmdq->lock needs to be acquired to synchronize
++		 * the command send and completion reaping. This function
++		 * is always called with creq->lock held. Using
++		 * the nested variant of spin_lock.
++		 *
++		 */
++
++		spin_lock_irqsave_nested(&cmdq->lock, flags,
++					 SINGLE_DEPTH_NESTING);
+ 		cookie = le16_to_cpu(qp_event->cookie);
+ 		mcookie = qp_event->cookie;
+ 		blocked = cookie & RCFW_CMD_IS_BLOCKING;
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index 9866c5d1b99f..e88bb71056cd 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -675,7 +675,6 @@ int mlx5_mr_cache_init(struct mlx5_ib_dev *dev)
+ 		init_completion(&ent->compl);
+ 		INIT_WORK(&ent->work, cache_work_func);
+ 		INIT_DELAYED_WORK(&ent->dwork, delayed_cache_work_func);
+-		queue_work(cache->wq, &ent->work);
+ 
+ 		if (i > MR_CACHE_LAST_STD_ENTRY) {
+ 			mlx5_odp_init_mr_cache_entry(ent);
+@@ -694,6 +693,7 @@ int mlx5_mr_cache_init(struct mlx5_ib_dev *dev)
+ 			ent->limit = dev->mdev->profile->mr_cache[i].limit;
+ 		else
+ 			ent->limit = 0;
++		queue_work(cache->wq, &ent->work);
+ 	}
+ 
+ 	err = mlx5_mr_cache_debugfs_init(dev);
+diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
+index bd43c1c7a42f..4d84b010b3ee 100644
+--- a/drivers/infiniband/sw/rxe/rxe_resp.c
++++ b/drivers/infiniband/sw/rxe/rxe_resp.c
+@@ -683,6 +683,7 @@ static enum resp_states read_reply(struct rxe_qp *qp,
+ 		rxe_advance_resp_resource(qp);
+ 
+ 		res->type		= RXE_READ_MASK;
++		res->replay		= 0;
+ 
+ 		res->read.va		= qp->resp.va;
+ 		res->read.va_org	= qp->resp.va;
+@@ -753,7 +754,8 @@ static enum resp_states read_reply(struct rxe_qp *qp,
+ 		state = RESPST_DONE;
+ 	} else {
+ 		qp->resp.res = NULL;
+-		qp->resp.opcode = -1;
++		if (!res->replay)
++			qp->resp.opcode = -1;
+ 		if (psn_compare(res->cur_psn, qp->resp.psn) >= 0)
+ 			qp->resp.psn = res->cur_psn;
+ 		state = RESPST_CLEANUP;
+@@ -815,6 +817,7 @@ static enum resp_states execute(struct rxe_qp *qp, struct rxe_pkt_info *pkt)
+ 
+ 	/* next expected psn, read handles this separately */
+ 	qp->resp.psn = (pkt->psn + 1) & BTH_PSN_MASK;
++	qp->resp.ack_psn = qp->resp.psn;
+ 
+ 	qp->resp.opcode = pkt->opcode;
+ 	qp->resp.status = IB_WC_SUCCESS;
+@@ -1071,7 +1074,7 @@ static enum resp_states duplicate_request(struct rxe_qp *qp,
+ 					  struct rxe_pkt_info *pkt)
+ {
+ 	enum resp_states rc;
+-	u32 prev_psn = (qp->resp.psn - 1) & BTH_PSN_MASK;
++	u32 prev_psn = (qp->resp.ack_psn - 1) & BTH_PSN_MASK;
+ 
+ 	if (pkt->mask & RXE_SEND_MASK ||
+ 	    pkt->mask & RXE_WRITE_MASK) {
+@@ -1114,6 +1117,7 @@ static enum resp_states duplicate_request(struct rxe_qp *qp,
+ 			res->state = (pkt->psn == res->first_psn) ?
+ 					rdatm_res_state_new :
+ 					rdatm_res_state_replay;
++			res->replay = 1;
+ 
+ 			/* Reset the resource, except length. */
+ 			res->read.va_org = iova;
+diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
+index 1019f5e7dbdd..59f6a24db064 100644
+--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
++++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
+@@ -173,6 +173,7 @@ enum rdatm_res_state {
+ 
+ struct resp_res {
+ 	int			type;
++	int			replay;
+ 	u32			first_psn;
+ 	u32			last_psn;
+ 	u32			cur_psn;
+@@ -197,6 +198,7 @@ struct rxe_resp_info {
+ 	enum rxe_qp_state	state;
+ 	u32			msn;
+ 	u32			psn;
++	u32			ack_psn;
+ 	int			opcode;
+ 	int			drop_msg;
+ 	int			goto_error;
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_cm.c b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
+index 9939f32d0154..0e85b3445c07 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_cm.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
+@@ -1427,11 +1427,15 @@ static void ipoib_cm_skb_reap(struct work_struct *work)
+ 		spin_unlock_irqrestore(&priv->lock, flags);
+ 		netif_tx_unlock_bh(dev);
+ 
+-		if (skb->protocol == htons(ETH_P_IP))
++		if (skb->protocol == htons(ETH_P_IP)) {
++			memset(IPCB(skb), 0, sizeof(*IPCB(skb)));
+ 			icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, htonl(mtu));
++		}
+ #if IS_ENABLED(CONFIG_IPV6)
+-		else if (skb->protocol == htons(ETH_P_IPV6))
++		else if (skb->protocol == htons(ETH_P_IPV6)) {
++			memset(IP6CB(skb), 0, sizeof(*IP6CB(skb)));
+ 			icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
++		}
+ #endif
+ 		dev_kfree_skb_any(skb);
+ 
+diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
+index 2c436376f13e..15b5856475fc 100644
+--- a/drivers/iommu/arm-smmu.c
++++ b/drivers/iommu/arm-smmu.c
+@@ -475,6 +475,9 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
+ 	bool stage1 = cfg->cbar != CBAR_TYPE_S2_TRANS;
+ 	void __iomem *reg = ARM_SMMU_CB(smmu_domain->smmu, cfg->cbndx);
+ 
++	if (smmu_domain->smmu->features & ARM_SMMU_FEAT_COHERENT_WALK)
++		wmb();
++
+ 	if (stage1) {
+ 		reg += leaf ? ARM_SMMU_CB_S1_TLBIVAL : ARM_SMMU_CB_S1_TLBIVA;
+ 
+@@ -516,6 +519,9 @@ static void arm_smmu_tlb_inv_vmid_nosync(unsigned long iova, size_t size,
+ 	struct arm_smmu_domain *smmu_domain = cookie;
+ 	void __iomem *base = ARM_SMMU_GR0(smmu_domain->smmu);
+ 
++	if (smmu_domain->smmu->features & ARM_SMMU_FEAT_COHERENT_WALK)
++		wmb();
++
+ 	writel_relaxed(smmu_domain->cfg.vmid, base + ARM_SMMU_GR0_TLBIVMID);
+ }
+ 
+diff --git a/drivers/lightnvm/pblk-recovery.c b/drivers/lightnvm/pblk-recovery.c
+index cb556e06673e..5d0912bf9eab 100644
+--- a/drivers/lightnvm/pblk-recovery.c
++++ b/drivers/lightnvm/pblk-recovery.c
+@@ -1001,12 +1001,14 @@ next:
+ 		}
+ 	}
+ 
+-	spin_lock(&l_mg->free_lock);
+ 	if (!open_lines) {
++		spin_lock(&l_mg->free_lock);
+ 		WARN_ON_ONCE(!test_and_clear_bit(meta_line,
+ 							&l_mg->meta_bitmap));
++		spin_unlock(&l_mg->free_lock);
+ 		pblk_line_replace_data(pblk);
+ 	} else {
++		spin_lock(&l_mg->free_lock);
+ 		/* Allocate next line for preparation */
+ 		l_mg->data_next = pblk_line_get(pblk);
+ 		if (l_mg->data_next) {
+@@ -1014,8 +1016,8 @@ next:
+ 			l_mg->data_next->type = PBLK_LINETYPE_DATA;
+ 			is_next = 1;
+ 		}
++		spin_unlock(&l_mg->free_lock);
+ 	}
+-	spin_unlock(&l_mg->free_lock);
+ 
+ 	if (is_next) {
+ 		pblk_line_erase(pblk, l_mg->data_next);
+diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
+index 89d088cf95d9..9406326216f1 100644
+--- a/drivers/md/bcache/btree.c
++++ b/drivers/md/bcache/btree.c
+@@ -2371,7 +2371,7 @@ static int refill_keybuf_fn(struct btree_op *op, struct btree *b,
+ 	struct keybuf *buf = refill->buf;
+ 	int ret = MAP_CONTINUE;
+ 
+-	if (bkey_cmp(k, refill->end) >= 0) {
++	if (bkey_cmp(k, refill->end) > 0) {
+ 		ret = MAP_DONE;
+ 		goto out;
+ 	}
+diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
+index 5b63afff46d5..69b336d8c05a 100644
+--- a/drivers/md/bcache/request.c
++++ b/drivers/md/bcache/request.c
+@@ -792,7 +792,7 @@ static void cached_dev_read_done_bh(struct closure *cl)
+ 
+ 	bch_mark_cache_accounting(s->iop.c, s->d,
+ 				  !s->cache_missed, s->iop.bypass);
+-	trace_bcache_read(s->orig_bio, !s->cache_miss, s->iop.bypass);
++	trace_bcache_read(s->orig_bio, !s->cache_missed, s->iop.bypass);
+ 
+ 	if (s->iop.status)
+ 		continue_at_nobarrier(cl, cached_dev_read_error, bcache_wq);
+diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
+index e52676fa9832..ca948155191a 100644
+--- a/drivers/md/dm-ioctl.c
++++ b/drivers/md/dm-ioctl.c
+@@ -1719,8 +1719,7 @@ static void free_params(struct dm_ioctl *param, size_t param_size, int param_fla
+ }
+ 
+ static int copy_params(struct dm_ioctl __user *user, struct dm_ioctl *param_kernel,
+-		       int ioctl_flags,
+-		       struct dm_ioctl **param, int *param_flags)
++		       int ioctl_flags, struct dm_ioctl **param, int *param_flags)
+ {
+ 	struct dm_ioctl *dmi;
+ 	int secure_data;
+@@ -1761,18 +1760,13 @@ static int copy_params(struct dm_ioctl __user *user, struct dm_ioctl *param_kern
+ 
+ 	*param_flags |= DM_PARAMS_MALLOC;
+ 
+-	if (copy_from_user(dmi, user, param_kernel->data_size))
+-		goto bad;
++	/* Copy from param_kernel (which was already copied from user) */
++	memcpy(dmi, param_kernel, minimum_data_size);
+ 
+-data_copied:
+-	/*
+-	 * Abort if something changed the ioctl data while it was being copied.
+-	 */
+-	if (dmi->data_size != param_kernel->data_size) {
+-		DMERR("rejecting ioctl: data size modified while processing parameters");
++	if (copy_from_user(&dmi->data, (char __user *)user + minimum_data_size,
++			   param_kernel->data_size - minimum_data_size))
+ 		goto bad;
+-	}
+-
++data_copied:
+ 	/* Wipe the user buffer so we do not return it to userspace */
+ 	if (secure_data && clear_user(user, param_kernel->data_size))
+ 		goto bad;
+diff --git a/drivers/md/dm-zoned-metadata.c b/drivers/md/dm-zoned-metadata.c
+index 70485de37b66..34968ca6b84a 100644
+--- a/drivers/md/dm-zoned-metadata.c
++++ b/drivers/md/dm-zoned-metadata.c
+@@ -99,7 +99,7 @@ struct dmz_mblock {
+ 	struct rb_node		node;
+ 	struct list_head	link;
+ 	sector_t		no;
+-	atomic_t		ref;
++	unsigned int		ref;
+ 	unsigned long		state;
+ 	struct page		*page;
+ 	void			*data;
+@@ -296,7 +296,7 @@ static struct dmz_mblock *dmz_alloc_mblock(struct dmz_metadata *zmd,
+ 
+ 	RB_CLEAR_NODE(&mblk->node);
+ 	INIT_LIST_HEAD(&mblk->link);
+-	atomic_set(&mblk->ref, 0);
++	mblk->ref = 0;
+ 	mblk->state = 0;
+ 	mblk->no = mblk_no;
+ 	mblk->data = page_address(mblk->page);
+@@ -339,10 +339,11 @@ static void dmz_insert_mblock(struct dmz_metadata *zmd, struct dmz_mblock *mblk)
+ }
+ 
+ /*
+- * Lookup a metadata block in the rbtree.
++ * Lookup a metadata block in the rbtree. If the block is found, increment
++ * its reference count.
+  */
+-static struct dmz_mblock *dmz_lookup_mblock(struct dmz_metadata *zmd,
+-					    sector_t mblk_no)
++static struct dmz_mblock *dmz_get_mblock_fast(struct dmz_metadata *zmd,
++					      sector_t mblk_no)
+ {
+ 	struct rb_root *root = &zmd->mblk_rbtree;
+ 	struct rb_node *node = root->rb_node;
+@@ -350,8 +351,17 @@ static struct dmz_mblock *dmz_lookup_mblock(struct dmz_metadata *zmd,
+ 
+ 	while (node) {
+ 		mblk = container_of(node, struct dmz_mblock, node);
+-		if (mblk->no == mblk_no)
++		if (mblk->no == mblk_no) {
++			/*
++			 * If this is the first reference to the block,
++			 * remove it from the LRU list.
++			 */
++			mblk->ref++;
++			if (mblk->ref == 1 &&
++			    !test_bit(DMZ_META_DIRTY, &mblk->state))
++				list_del_init(&mblk->link);
+ 			return mblk;
++		}
+ 		node = (mblk->no < mblk_no) ? node->rb_left : node->rb_right;
+ 	}
+ 
+@@ -382,32 +392,47 @@ static void dmz_mblock_bio_end_io(struct bio *bio)
+ }
+ 
+ /*
+- * Read a metadata block from disk.
++ * Read an uncached metadata block from disk and add it to the cache.
+  */
+-static struct dmz_mblock *dmz_fetch_mblock(struct dmz_metadata *zmd,
+-					   sector_t mblk_no)
++static struct dmz_mblock *dmz_get_mblock_slow(struct dmz_metadata *zmd,
++					      sector_t mblk_no)
+ {
+-	struct dmz_mblock *mblk;
++	struct dmz_mblock *mblk, *m;
+ 	sector_t block = zmd->sb[zmd->mblk_primary].block + mblk_no;
+ 	struct bio *bio;
+ 
+-	/* Get block and insert it */
++	/* Get a new block and a BIO to read it */
+ 	mblk = dmz_alloc_mblock(zmd, mblk_no);
+ 	if (!mblk)
+ 		return NULL;
+ 
+-	spin_lock(&zmd->mblk_lock);
+-	atomic_inc(&mblk->ref);
+-	set_bit(DMZ_META_READING, &mblk->state);
+-	dmz_insert_mblock(zmd, mblk);
+-	spin_unlock(&zmd->mblk_lock);
+-
+ 	bio = bio_alloc(GFP_NOIO, 1);
+ 	if (!bio) {
+ 		dmz_free_mblock(zmd, mblk);
+ 		return NULL;
+ 	}
+ 
++	spin_lock(&zmd->mblk_lock);
++
++	/*
++	 * Make sure that another context did not start reading
++	 * the block already.
++	 */
++	m = dmz_get_mblock_fast(zmd, mblk_no);
++	if (m) {
++		spin_unlock(&zmd->mblk_lock);
++		dmz_free_mblock(zmd, mblk);
++		bio_put(bio);
++		return m;
++	}
++
++	mblk->ref++;
++	set_bit(DMZ_META_READING, &mblk->state);
++	dmz_insert_mblock(zmd, mblk);
++
++	spin_unlock(&zmd->mblk_lock);
++
++	/* Submit read BIO */
+ 	bio->bi_iter.bi_sector = dmz_blk2sect(block);
+ 	bio_set_dev(bio, zmd->dev->bdev);
+ 	bio->bi_private = mblk;
+@@ -484,7 +509,8 @@ static void dmz_release_mblock(struct dmz_metadata *zmd,
+ 
+ 	spin_lock(&zmd->mblk_lock);
+ 
+-	if (atomic_dec_and_test(&mblk->ref)) {
++	mblk->ref--;
++	if (mblk->ref == 0) {
+ 		if (test_bit(DMZ_META_ERROR, &mblk->state)) {
+ 			rb_erase(&mblk->node, &zmd->mblk_rbtree);
+ 			dmz_free_mblock(zmd, mblk);
+@@ -508,18 +534,12 @@ static struct dmz_mblock *dmz_get_mblock(struct dmz_metadata *zmd,
+ 
+ 	/* Check rbtree */
+ 	spin_lock(&zmd->mblk_lock);
+-	mblk = dmz_lookup_mblock(zmd, mblk_no);
+-	if (mblk) {
+-		/* Cache hit: remove block from LRU list */
+-		if (atomic_inc_return(&mblk->ref) == 1 &&
+-		    !test_bit(DMZ_META_DIRTY, &mblk->state))
+-			list_del_init(&mblk->link);
+-	}
++	mblk = dmz_get_mblock_fast(zmd, mblk_no);
+ 	spin_unlock(&zmd->mblk_lock);
+ 
+ 	if (!mblk) {
+ 		/* Cache miss: read the block from disk */
+-		mblk = dmz_fetch_mblock(zmd, mblk_no);
++		mblk = dmz_get_mblock_slow(zmd, mblk_no);
+ 		if (!mblk)
+ 			return ERR_PTR(-ENOMEM);
+ 	}
+@@ -753,7 +773,7 @@ int dmz_flush_metadata(struct dmz_metadata *zmd)
+ 
+ 		spin_lock(&zmd->mblk_lock);
+ 		clear_bit(DMZ_META_DIRTY, &mblk->state);
+-		if (atomic_read(&mblk->ref) == 0)
++		if (mblk->ref == 0)
+ 			list_add_tail(&mblk->link, &zmd->mblk_lru_list);
+ 		spin_unlock(&zmd->mblk_lock);
+ 	}
+@@ -2308,7 +2328,7 @@ static void dmz_cleanup_metadata(struct dmz_metadata *zmd)
+ 		mblk = list_first_entry(&zmd->mblk_dirty_list,
+ 					struct dmz_mblock, link);
+ 		dmz_dev_warn(zmd->dev, "mblock %llu still in dirty list (ref %u)",
+-			     (u64)mblk->no, atomic_read(&mblk->ref));
++			     (u64)mblk->no, mblk->ref);
+ 		list_del_init(&mblk->link);
+ 		rb_erase(&mblk->node, &zmd->mblk_rbtree);
+ 		dmz_free_mblock(zmd, mblk);
+@@ -2326,8 +2346,8 @@ static void dmz_cleanup_metadata(struct dmz_metadata *zmd)
+ 	root = &zmd->mblk_rbtree;
+ 	rbtree_postorder_for_each_entry_safe(mblk, next, root, node) {
+ 		dmz_dev_warn(zmd->dev, "mblock %llu ref %u still in rbtree",
+-			     (u64)mblk->no, atomic_read(&mblk->ref));
+-		atomic_set(&mblk->ref, 0);
++			     (u64)mblk->no, mblk->ref);
++		mblk->ref = 0;
+ 		dmz_free_mblock(zmd, mblk);
+ 	}
+ 
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index 78d830763704..205f86f1a6cb 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -1725,6 +1725,7 @@ static int raid1_add_disk(struct mddev *mddev, struct md_rdev *rdev)
+ 	 */
+ 	if (rdev->saved_raid_disk >= 0 &&
+ 	    rdev->saved_raid_disk >= first &&
++	    rdev->saved_raid_disk < conf->raid_disks &&
+ 	    conf->mirrors[rdev->saved_raid_disk].rdev == NULL)
+ 		first = last = rdev->saved_raid_disk;
+ 
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index 927b60e9d3ca..e786546bf3b8 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -1775,6 +1775,7 @@ static int raid10_add_disk(struct mddev *mddev, struct md_rdev *rdev)
+ 		first = last = rdev->raid_disk;
+ 
+ 	if (rdev->saved_raid_disk >= first &&
++	    rdev->saved_raid_disk < conf->geo.raid_disks &&
+ 	    conf->mirrors[rdev->saved_raid_disk].rdev == NULL)
+ 		mirror = rdev->saved_raid_disk;
+ 	else
+diff --git a/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c b/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c
+index a772976cfe26..a1aacd6fb96f 100644
+--- a/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c
++++ b/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c
+@@ -1765,7 +1765,7 @@ typedef struct { u16 __; u8 _; } __packed x24;
+ 				pos[7] = (chr & (0x01 << 0) ? fg : bg);	\
+ 			} \
+ 	\
+-			pos += (tpg->hflip ? -8 : 8) / hdiv;	\
++			pos += (tpg->hflip ? -8 : 8) / (int)hdiv;	\
+ 		}	\
+ 	}	\
+ } while (0)
+diff --git a/drivers/media/i2c/tvp5150.c b/drivers/media/i2c/tvp5150.c
+index 59b0c1fce9be..4d3e97f97c76 100644
+--- a/drivers/media/i2c/tvp5150.c
++++ b/drivers/media/i2c/tvp5150.c
+@@ -1530,7 +1530,7 @@ static int tvp5150_probe(struct i2c_client *c,
+ 			27000000, 1, 27000000);
+ 	v4l2_ctrl_new_std_menu_items(&core->hdl, &tvp5150_ctrl_ops,
+ 				     V4L2_CID_TEST_PATTERN,
+-				     ARRAY_SIZE(tvp5150_test_patterns),
++				     ARRAY_SIZE(tvp5150_test_patterns) - 1,
+ 				     0, 0, tvp5150_test_patterns);
+ 	sd->ctrl_handler = &core->hdl;
+ 	if (core->hdl.error) {
+diff --git a/drivers/media/usb/em28xx/em28xx-cards.c b/drivers/media/usb/em28xx/em28xx-cards.c
+index 11a59854a0a6..9747e23aad27 100644
+--- a/drivers/media/usb/em28xx/em28xx-cards.c
++++ b/drivers/media/usb/em28xx/em28xx-cards.c
+@@ -2112,13 +2112,13 @@ struct em28xx_board em28xx_boards[] = {
+ 		.input           = { {
+ 			.type     = EM28XX_VMUX_COMPOSITE,
+ 			.vmux     = TVP5150_COMPOSITE1,
+-			.amux     = EM28XX_AUDIO_SRC_LINE,
++			.amux     = EM28XX_AMUX_LINE_IN,
+ 			.gpio     = terratec_av350_unmute_gpio,
+ 
+ 		}, {
+ 			.type     = EM28XX_VMUX_SVIDEO,
+ 			.vmux     = TVP5150_SVIDEO,
+-			.amux     = EM28XX_AUDIO_SRC_LINE,
++			.amux     = EM28XX_AMUX_LINE_IN,
+ 			.gpio     = terratec_av350_unmute_gpio,
+ 		} },
+ 	},
+diff --git a/drivers/media/usb/em28xx/em28xx-video.c b/drivers/media/usb/em28xx/em28xx-video.c
+index 8d253a5df0a9..92a74bc34527 100644
+--- a/drivers/media/usb/em28xx/em28xx-video.c
++++ b/drivers/media/usb/em28xx/em28xx-video.c
+@@ -900,6 +900,8 @@ static int em28xx_enable_analog_tuner(struct em28xx *dev)
+ 	if (!mdev || !v4l2->decoder)
+ 		return 0;
+ 
++	dev->v4l2->field_count = 0;
++
+ 	/*
+ 	 * This will find the tuner that is connected into the decoder.
+ 	 * Technically, this is not 100% correct, as the device may be
+@@ -1445,9 +1447,9 @@ static int vidioc_try_fmt_vid_cap(struct file *file, void *priv,
+ 
+ 	fmt = format_by_fourcc(f->fmt.pix.pixelformat);
+ 	if (!fmt) {
+-		em28xx_videodbg("Fourcc format (%08x) invalid.\n",
+-				f->fmt.pix.pixelformat);
+-		return -EINVAL;
++		fmt = &format[0];
++		em28xx_videodbg("Fourcc format (%08x) invalid. Using default (%08x).\n",
++				f->fmt.pix.pixelformat, fmt->fourcc);
+ 	}
+ 
+ 	if (dev->board.is_em2800) {
+diff --git a/drivers/mfd/menelaus.c b/drivers/mfd/menelaus.c
+index 29b7164a823b..d28ebe7ecd21 100644
+--- a/drivers/mfd/menelaus.c
++++ b/drivers/mfd/menelaus.c
+@@ -1094,6 +1094,7 @@ static void menelaus_rtc_alarm_work(struct menelaus_chip *m)
+ static inline void menelaus_rtc_init(struct menelaus_chip *m)
+ {
+ 	int	alarm = (m->client->irq > 0);
++	int	err;
+ 
+ 	/* assume 32KDETEN pin is pulled high */
+ 	if (!(menelaus_read_reg(MENELAUS_OSC_CTRL) & 0x80)) {
+@@ -1101,6 +1102,12 @@ static inline void menelaus_rtc_init(struct menelaus_chip *m)
+ 		return;
+ 	}
+ 
++	m->rtc = devm_rtc_allocate_device(&m->client->dev);
++	if (IS_ERR(m->rtc))
++		return;
++
++	m->rtc->ops = &menelaus_rtc_ops;
++
+ 	/* support RTC alarm; it can issue wakeups */
+ 	if (alarm) {
+ 		if (menelaus_add_irq_work(MENELAUS_RTCALM_IRQ,
+@@ -1125,10 +1132,8 @@ static inline void menelaus_rtc_init(struct menelaus_chip *m)
+ 		menelaus_write_reg(MENELAUS_RTC_CTRL, m->rtc_control);
+ 	}
+ 
+-	m->rtc = rtc_device_register(DRIVER_NAME,
+-			&m->client->dev,
+-			&menelaus_rtc_ops, THIS_MODULE);
+-	if (IS_ERR(m->rtc)) {
++	err = rtc_register_device(m->rtc);
++	if (err) {
+ 		if (alarm) {
+ 			menelaus_remove_irq_work(MENELAUS_RTCALM_IRQ);
+ 			device_init_wakeup(&m->client->dev, 0);
+diff --git a/drivers/misc/genwqe/card_base.h b/drivers/misc/genwqe/card_base.h
+index 5813b5f25006..135e02f257c1 100644
+--- a/drivers/misc/genwqe/card_base.h
++++ b/drivers/misc/genwqe/card_base.h
+@@ -403,7 +403,7 @@ struct genwqe_file {
+ 	struct file *filp;
+ 
+ 	struct fasync_struct *async_queue;
+-	struct task_struct *owner;
++	struct pid *opener;
+ 	struct list_head list;		/* entry in list of open files */
+ 
+ 	spinlock_t map_lock;		/* lock for dma_mappings */
+diff --git a/drivers/misc/genwqe/card_dev.c b/drivers/misc/genwqe/card_dev.c
+index dd4617764f14..dbd5eaa69311 100644
+--- a/drivers/misc/genwqe/card_dev.c
++++ b/drivers/misc/genwqe/card_dev.c
+@@ -52,7 +52,7 @@ static void genwqe_add_file(struct genwqe_dev *cd, struct genwqe_file *cfile)
+ {
+ 	unsigned long flags;
+ 
+-	cfile->owner = current;
++	cfile->opener = get_pid(task_tgid(current));
+ 	spin_lock_irqsave(&cd->file_lock, flags);
+ 	list_add(&cfile->list, &cd->file_list);
+ 	spin_unlock_irqrestore(&cd->file_lock, flags);
+@@ -65,6 +65,7 @@ static int genwqe_del_file(struct genwqe_dev *cd, struct genwqe_file *cfile)
+ 	spin_lock_irqsave(&cd->file_lock, flags);
+ 	list_del(&cfile->list);
+ 	spin_unlock_irqrestore(&cd->file_lock, flags);
++	put_pid(cfile->opener);
+ 
+ 	return 0;
+ }
+@@ -275,7 +276,7 @@ static int genwqe_kill_fasync(struct genwqe_dev *cd, int sig)
+ 	return files;
+ }
+ 
+-static int genwqe_force_sig(struct genwqe_dev *cd, int sig)
++static int genwqe_terminate(struct genwqe_dev *cd)
+ {
+ 	unsigned int files = 0;
+ 	unsigned long flags;
+@@ -283,7 +284,7 @@ static int genwqe_force_sig(struct genwqe_dev *cd, int sig)
+ 
+ 	spin_lock_irqsave(&cd->file_lock, flags);
+ 	list_for_each_entry(cfile, &cd->file_list, list) {
+-		force_sig(sig, cfile->owner);
++		kill_pid(cfile->opener, SIGKILL, 1);
+ 		files++;
+ 	}
+ 	spin_unlock_irqrestore(&cd->file_lock, flags);
+@@ -1356,7 +1357,7 @@ static int genwqe_inform_and_stop_processes(struct genwqe_dev *cd)
+ 		dev_warn(&pci_dev->dev,
+ 			 "[%s] send SIGKILL and wait ...\n", __func__);
+ 
+-		rc = genwqe_force_sig(cd, SIGKILL); /* force terminate */
++		rc = genwqe_terminate(cd);
+ 		if (rc) {
+ 			/* Give kill_timout more seconds to end processes */
+ 			for (i = 0; (i < genwqe_kill_timeout) &&
+diff --git a/drivers/misc/vmw_vmci/vmci_driver.c b/drivers/misc/vmw_vmci/vmci_driver.c
+index d7eaf1eb11e7..003bfba40758 100644
+--- a/drivers/misc/vmw_vmci/vmci_driver.c
++++ b/drivers/misc/vmw_vmci/vmci_driver.c
+@@ -113,5 +113,5 @@ module_exit(vmci_drv_exit);
+ 
+ MODULE_AUTHOR("VMware, Inc.");
+ MODULE_DESCRIPTION("VMware Virtual Machine Communication Interface.");
+-MODULE_VERSION("1.1.5.0-k");
++MODULE_VERSION("1.1.6.0-k");
+ MODULE_LICENSE("GPL v2");
+diff --git a/drivers/misc/vmw_vmci/vmci_resource.c b/drivers/misc/vmw_vmci/vmci_resource.c
+index 1ab6e8737a5f..da1ee2e1ba99 100644
+--- a/drivers/misc/vmw_vmci/vmci_resource.c
++++ b/drivers/misc/vmw_vmci/vmci_resource.c
+@@ -57,7 +57,8 @@ static struct vmci_resource *vmci_resource_lookup(struct vmci_handle handle,
+ 
+ 		if (r->type == type &&
+ 		    rid == handle.resource &&
+-		    (cid == handle.context || cid == VMCI_INVALID_ID)) {
++		    (cid == handle.context || cid == VMCI_INVALID_ID ||
++		     handle.context == VMCI_INVALID_ID)) {
+ 			resource = r;
+ 			break;
+ 		}
+diff --git a/drivers/mmc/host/sdhci-pci-o2micro.c b/drivers/mmc/host/sdhci-pci-o2micro.c
+index 14273ca00641..44a809a20d3a 100644
+--- a/drivers/mmc/host/sdhci-pci-o2micro.c
++++ b/drivers/mmc/host/sdhci-pci-o2micro.c
+@@ -334,6 +334,9 @@ int sdhci_pci_o2_probe(struct sdhci_pci_chip *chip)
+ 		pci_write_config_byte(chip->pdev, O2_SD_LOCK_WP, scratch);
+ 		break;
+ 	case PCI_DEVICE_ID_O2_SEABIRD0:
++		if (chip->pdev->revision == 0x01)
++			chip->quirks |= SDHCI_QUIRK_DELAY_AFTER_POWER;
++		/* fall through */
+ 	case PCI_DEVICE_ID_O2_SEABIRD1:
+ 		/* UnLock WP */
+ 		ret = pci_read_config_byte(chip->pdev,
+diff --git a/drivers/mtd/nand/atmel/nand-controller.c b/drivers/mtd/nand/atmel/nand-controller.c
+index 148744418e82..32a2f947a454 100644
+--- a/drivers/mtd/nand/atmel/nand-controller.c
++++ b/drivers/mtd/nand/atmel/nand-controller.c
+@@ -2079,6 +2079,10 @@ atmel_hsmc_nand_controller_legacy_init(struct atmel_hsmc_nand_controller *nc)
+ 	nand_np = dev->of_node;
+ 	nfc_np = of_find_compatible_node(dev->of_node, NULL,
+ 					 "atmel,sama5d3-nfc");
++	if (!nfc_np) {
++		dev_err(dev, "Could not find device node for sama5d3-nfc\n");
++		return -ENODEV;
++	}
+ 
+ 	nc->clk = of_clk_get(nfc_np, 0);
+ 	if (IS_ERR(nc->clk)) {
+diff --git a/drivers/mtd/spi-nor/fsl-quadspi.c b/drivers/mtd/spi-nor/fsl-quadspi.c
+index f17d22435bfc..62f5763482b3 100644
+--- a/drivers/mtd/spi-nor/fsl-quadspi.c
++++ b/drivers/mtd/spi-nor/fsl-quadspi.c
+@@ -468,6 +468,7 @@ static int fsl_qspi_get_seqid(struct fsl_qspi *q, u8 cmd)
+ {
+ 	switch (cmd) {
+ 	case SPINOR_OP_READ_1_1_4:
++	case SPINOR_OP_READ_1_1_4_4B:
+ 		return SEQID_READ;
+ 	case SPINOR_OP_WREN:
+ 		return SEQID_WREN;
+diff --git a/drivers/net/dsa/mv88e6xxx/phy.c b/drivers/net/dsa/mv88e6xxx/phy.c
+index 436668bd50dc..e53ce9610fee 100644
+--- a/drivers/net/dsa/mv88e6xxx/phy.c
++++ b/drivers/net/dsa/mv88e6xxx/phy.c
+@@ -110,6 +110,9 @@ int mv88e6xxx_phy_page_write(struct mv88e6xxx_chip *chip, int phy,
+ 	err = mv88e6xxx_phy_page_get(chip, phy, page);
+ 	if (!err) {
+ 		err = mv88e6xxx_phy_write(chip, phy, MV88E6XXX_PHY_PAGE, page);
++		if (!err)
++			err = mv88e6xxx_phy_write(chip, phy, reg, val);
++
+ 		mv88e6xxx_phy_page_put(chip, phy);
+ 	}
+ 
+diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+index 90be4385bf36..e238f6e85ab6 100644
+--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
++++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+@@ -3427,6 +3427,10 @@ static void ixgbevf_tx_csum(struct ixgbevf_ring *tx_ring,
+ 		skb_checksum_help(skb);
+ 		goto no_csum;
+ 	}
++
++	if (first->protocol == htons(ETH_P_IP))
++		type_tucmd |= IXGBE_ADVTXD_TUCMD_IPV4;
++
+ 	/* update TX checksum flag */
+ 	first->tx_flags |= IXGBE_TX_FLAGS_CSUM;
+ 	vlan_macip_lens = skb_checksum_start_offset(skb) -
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_devlink.c b/drivers/net/ethernet/netronome/nfp/nfp_devlink.c
+index 6c9f29c2e975..90a6c4fbc113 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_devlink.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_devlink.c
+@@ -96,6 +96,7 @@ nfp_devlink_port_split(struct devlink *devlink, unsigned int port_index,
+ {
+ 	struct nfp_pf *pf = devlink_priv(devlink);
+ 	struct nfp_eth_table_port eth_port;
++	unsigned int lanes;
+ 	int ret;
+ 
+ 	if (count < 2)
+@@ -114,8 +115,12 @@ nfp_devlink_port_split(struct devlink *devlink, unsigned int port_index,
+ 		goto out;
+ 	}
+ 
+-	ret = nfp_devlink_set_lanes(pf, eth_port.index,
+-				    eth_port.port_lanes / count);
++	/* Special case the 100G CXP -> 2x40G split */
++	lanes = eth_port.port_lanes / count;
++	if (eth_port.lanes == 10 && count == 2)
++		lanes = 8 / count;
++
++	ret = nfp_devlink_set_lanes(pf, eth_port.index, lanes);
+ out:
+ 	mutex_unlock(&pf->lock);
+ 
+@@ -127,6 +132,7 @@ nfp_devlink_port_unsplit(struct devlink *devlink, unsigned int port_index)
+ {
+ 	struct nfp_pf *pf = devlink_priv(devlink);
+ 	struct nfp_eth_table_port eth_port;
++	unsigned int lanes;
+ 	int ret;
+ 
+ 	mutex_lock(&pf->lock);
+@@ -142,7 +148,12 @@ nfp_devlink_port_unsplit(struct devlink *devlink, unsigned int port_index)
+ 		goto out;
+ 	}
+ 
+-	ret = nfp_devlink_set_lanes(pf, eth_port.index, eth_port.port_lanes);
++	/* Special case the 100G CXP -> 2x40G unsplit */
++	lanes = eth_port.port_lanes;
++	if (eth_port.port_lanes == 8)
++		lanes = 10;
++
++	ret = nfp_devlink_set_lanes(pf, eth_port.index, lanes);
+ out:
+ 	mutex_unlock(&pf->lock);
+ 
+diff --git a/drivers/net/ethernet/qlogic/qla3xxx.c b/drivers/net/ethernet/qlogic/qla3xxx.c
+index 2991179c2fd0..080d00520362 100644
+--- a/drivers/net/ethernet/qlogic/qla3xxx.c
++++ b/drivers/net/ethernet/qlogic/qla3xxx.c
+@@ -380,8 +380,6 @@ static void fm93c56a_select(struct ql3_adapter *qdev)
+ 
+ 	qdev->eeprom_cmd_data = AUBURN_EEPROM_CS_1;
+ 	ql_write_nvram_reg(qdev, spir, ISP_NVRAM_MASK | qdev->eeprom_cmd_data);
+-	ql_write_nvram_reg(qdev, spir,
+-			   ((ISP_NVRAM_MASK << 16) | qdev->eeprom_cmd_data));
+ }
+ 
+ /*
+diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
+index 79f28b9186c6..70ce7da26d1f 100644
+--- a/drivers/net/phy/phylink.c
++++ b/drivers/net/phy/phylink.c
+@@ -747,6 +747,9 @@ void phylink_start(struct phylink *pl)
+ 		    phylink_an_mode_str(pl->link_an_mode),
+ 		    phy_modes(pl->link_config.interface));
+ 
++	/* Always set the carrier off */
++	netif_carrier_off(pl->netdev);
++
+ 	/* Apply the link configuration to the MAC when starting. This allows
+ 	 * a fixed-link to start with the correct parameters, and also
+ 	 * ensures that we set the appropriate advertisment for Serdes links.
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index e0baea2dfd3c..7f8c7e3aa356 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -1814,6 +1814,8 @@ static void tun_setup(struct net_device *dev)
+ static int tun_validate(struct nlattr *tb[], struct nlattr *data[],
+ 			struct netlink_ext_ack *extack)
+ {
++	if (!data)
++		return 0;
+ 	return -EINVAL;
+ }
+ 
+diff --git a/drivers/net/wireless/ath/ath10k/wmi.c b/drivers/net/wireless/ath/ath10k/wmi.c
+index 2ab5311659ea..8cb47858eb00 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi.c
++++ b/drivers/net/wireless/ath/ath10k/wmi.c
+@@ -1852,6 +1852,12 @@ int ath10k_wmi_cmd_send(struct ath10k *ar, struct sk_buff *skb, u32 cmd_id)
+ 	if (ret)
+ 		dev_kfree_skb_any(skb);
+ 
++	if (ret == -EAGAIN) {
++		ath10k_warn(ar, "wmi command %d timeout, restarting hardware\n",
++			    cmd_id);
++		queue_work(ar->workqueue, &ar->restart_work);
++	}
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c b/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c
+index d8b79cb72b58..e7584b842dce 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c
+@@ -77,6 +77,8 @@ static u16 d11ac_bw(enum brcmu_chan_bw bw)
+ 		return BRCMU_CHSPEC_D11AC_BW_40;
+ 	case BRCMU_CHAN_BW_80:
+ 		return BRCMU_CHSPEC_D11AC_BW_80;
++	case BRCMU_CHAN_BW_160:
++		return BRCMU_CHSPEC_D11AC_BW_160;
+ 	default:
+ 		WARN_ON(1);
+ 	}
+@@ -190,8 +192,38 @@ static void brcmu_d11ac_decchspec(struct brcmu_chan *ch)
+ 			break;
+ 		}
+ 		break;
+-	case BRCMU_CHSPEC_D11AC_BW_8080:
+ 	case BRCMU_CHSPEC_D11AC_BW_160:
++		switch (ch->sb) {
++		case BRCMU_CHAN_SB_LLL:
++			ch->control_ch_num -= CH_70MHZ_APART;
++			break;
++		case BRCMU_CHAN_SB_LLU:
++			ch->control_ch_num -= CH_50MHZ_APART;
++			break;
++		case BRCMU_CHAN_SB_LUL:
++			ch->control_ch_num -= CH_30MHZ_APART;
++			break;
++		case BRCMU_CHAN_SB_LUU:
++			ch->control_ch_num -= CH_10MHZ_APART;
++			break;
++		case BRCMU_CHAN_SB_ULL:
++			ch->control_ch_num += CH_10MHZ_APART;
++			break;
++		case BRCMU_CHAN_SB_ULU:
++			ch->control_ch_num += CH_30MHZ_APART;
++			break;
++		case BRCMU_CHAN_SB_UUL:
++			ch->control_ch_num += CH_50MHZ_APART;
++			break;
++		case BRCMU_CHAN_SB_UUU:
++			ch->control_ch_num += CH_70MHZ_APART;
++			break;
++		default:
++			WARN_ON_ONCE(1);
++			break;
++		}
++		break;
++	case BRCMU_CHSPEC_D11AC_BW_8080:
+ 	default:
+ 		WARN_ON_ONCE(1);
+ 		break;
+diff --git a/drivers/net/wireless/broadcom/brcm80211/include/brcmu_wifi.h b/drivers/net/wireless/broadcom/brcm80211/include/brcmu_wifi.h
+index 7b9a77981df1..75b2a0438cfa 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/include/brcmu_wifi.h
++++ b/drivers/net/wireless/broadcom/brcm80211/include/brcmu_wifi.h
+@@ -29,6 +29,8 @@
+ #define CH_UPPER_SB			0x01
+ #define CH_LOWER_SB			0x02
+ #define CH_EWA_VALID			0x04
++#define CH_70MHZ_APART			14
++#define CH_50MHZ_APART			10
+ #define CH_30MHZ_APART			6
+ #define CH_20MHZ_APART			4
+ #define CH_10MHZ_APART			2
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index db1fab9aa1c6..80a653950e86 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -1225,12 +1225,15 @@ void __iwl_mvm_mac_stop(struct iwl_mvm *mvm)
+ 	iwl_mvm_del_aux_sta(mvm);
+ 
+ 	/*
+-	 * Clear IN_HW_RESTART flag when stopping the hw (as restart_complete()
+-	 * won't be called in this case).
++	 * Clear IN_HW_RESTART and HW_RESTART_REQUESTED flag when stopping the
++	 * hw (as restart_complete() won't be called in this case) and mac80211
++	 * won't execute the restart.
+ 	 * But make sure to cleanup interfaces that have gone down before/during
+ 	 * HW restart was requested.
+ 	 */
+-	if (test_and_clear_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status))
++	if (test_and_clear_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status) ||
++	    test_and_clear_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED,
++			       &mvm->status))
+ 		ieee80211_iterate_interfaces(mvm->hw, 0,
+ 					     iwl_mvm_cleanup_iterator, mvm);
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rs.c b/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
+index 386fdee23eb0..bd48cd0eb395 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
+@@ -1226,7 +1226,11 @@ void iwl_mvm_rs_tx_status(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+ 	    !(info->flags & IEEE80211_TX_STAT_AMPDU))
+ 		return;
+ 
+-	rs_rate_from_ucode_rate(tx_resp_hwrate, info->band, &tx_resp_rate);
++	if (rs_rate_from_ucode_rate(tx_resp_hwrate, info->band,
++				    &tx_resp_rate)) {
++		WARN_ON_ONCE(1);
++		return;
++	}
+ 
+ #ifdef CONFIG_MAC80211_DEBUGFS
+ 	/* Disable last tx check if we are debugging with fixed rate but
+@@ -1277,7 +1281,10 @@ void iwl_mvm_rs_tx_status(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+ 	 */
+ 	table = &lq_sta->lq;
+ 	lq_hwrate = le32_to_cpu(table->rs_table[0]);
+-	rs_rate_from_ucode_rate(lq_hwrate, info->band, &lq_rate);
++	if (rs_rate_from_ucode_rate(lq_hwrate, info->band, &lq_rate)) {
++		WARN_ON_ONCE(1);
++		return;
++	}
+ 
+ 	/* Here we actually compare this rate to the latest LQ command */
+ 	if (lq_color != LQ_FLAG_COLOR_GET(table->flags)) {
+@@ -1379,8 +1386,12 @@ void iwl_mvm_rs_tx_status(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
+ 		/* Collect data for each rate used during failed TX attempts */
+ 		for (i = 0; i <= retries; ++i) {
+ 			lq_hwrate = le32_to_cpu(table->rs_table[i]);
+-			rs_rate_from_ucode_rate(lq_hwrate, info->band,
+-						&lq_rate);
++			if (rs_rate_from_ucode_rate(lq_hwrate, info->band,
++						    &lq_rate)) {
++				WARN_ON_ONCE(1);
++				return;
++			}
++
+ 			/*
+ 			 * Only collect stats if retried rate is in the same RS
+ 			 * table as active/search.
+@@ -3244,7 +3255,10 @@ static void rs_build_rates_table_from_fixed(struct iwl_mvm *mvm,
+ 	for (i = 0; i < num_rates; i++)
+ 		lq_cmd->rs_table[i] = ucode_rate_le32;
+ 
+-	rs_rate_from_ucode_rate(ucode_rate, band, &rate);
++	if (rs_rate_from_ucode_rate(ucode_rate, band, &rate)) {
++		WARN_ON_ONCE(1);
++		return;
++	}
+ 
+ 	if (is_mimo(&rate))
+ 		lq_cmd->mimo_delim = num_rates - 1;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+index 6c014c273922..62a6e293cf12 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+@@ -1345,6 +1345,7 @@ static void iwl_mvm_rx_tx_cmd_single(struct iwl_mvm *mvm,
+ 	while (!skb_queue_empty(&skbs)) {
+ 		struct sk_buff *skb = __skb_dequeue(&skbs);
+ 		struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
++		struct ieee80211_hdr *hdr = (void *)skb->data;
+ 		bool flushed = false;
+ 
+ 		skb_freed++;
+@@ -1389,11 +1390,11 @@ static void iwl_mvm_rx_tx_cmd_single(struct iwl_mvm *mvm,
+ 			info->flags |= IEEE80211_TX_STAT_AMPDU_NO_BACK;
+ 		info->flags &= ~IEEE80211_TX_CTL_AMPDU;
+ 
+-		/* W/A FW bug: seq_ctl is wrong when the status isn't success */
+-		if (status != TX_STATUS_SUCCESS) {
+-			struct ieee80211_hdr *hdr = (void *)skb->data;
++		/* W/A FW bug: seq_ctl is wrong upon failure / BAR frame */
++		if (ieee80211_is_back_req(hdr->frame_control))
++			seq_ctl = 0;
++		else if (status != TX_STATUS_SUCCESS)
+ 			seq_ctl = le16_to_cpu(hdr->seq_ctrl);
+-		}
+ 
+ 		if (unlikely(!seq_ctl)) {
+ 			struct ieee80211_hdr *hdr = (void *)skb->data;
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+index ca99c3cf41c2..5a15362ef671 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+@@ -1049,6 +1049,14 @@ void iwl_pcie_rx_free(struct iwl_trans *trans)
+ 	kfree(trans_pcie->rxq);
+ }
+ 
++static void iwl_pcie_rx_move_to_allocator(struct iwl_rxq *rxq,
++					  struct iwl_rb_allocator *rba)
++{
++	spin_lock(&rba->lock);
++	list_splice_tail_init(&rxq->rx_used, &rba->rbd_empty);
++	spin_unlock(&rba->lock);
++}
++
+ /*
+  * iwl_pcie_rx_reuse_rbd - Recycle used RBDs
+  *
+@@ -1080,9 +1088,7 @@ static void iwl_pcie_rx_reuse_rbd(struct iwl_trans *trans,
+ 	if ((rxq->used_count % RX_CLAIM_REQ_ALLOC) == RX_POST_REQ_ALLOC) {
+ 		/* Move the 2 RBDs to the allocator ownership.
+ 		 Allocator has another 6 from pool for the request completion*/
+-		spin_lock(&rba->lock);
+-		list_splice_tail_init(&rxq->rx_used, &rba->rbd_empty);
+-		spin_unlock(&rba->lock);
++		iwl_pcie_rx_move_to_allocator(rxq, rba);
+ 
+ 		atomic_inc(&rba->req_pending);
+ 		queue_work(rba->alloc_wq, &rba->rx_alloc);
+@@ -1260,10 +1266,18 @@ restart:
+ 		IWL_DEBUG_RX(trans, "Q %d: HW = SW = %d\n", rxq->id, r);
+ 
+ 	while (i != r) {
++		struct iwl_rb_allocator *rba = &trans_pcie->rba;
+ 		struct iwl_rx_mem_buffer *rxb;
+-
+-		if (unlikely(rxq->used_count == rxq->queue_size / 2))
++		/* number of RBDs still waiting for page allocation */
++		u32 rb_pending_alloc =
++			atomic_read(&trans_pcie->rba.req_pending) *
++			RX_CLAIM_REQ_ALLOC;
++
++		if (unlikely(rb_pending_alloc >= rxq->queue_size / 2 &&
++			     !emergency)) {
++			iwl_pcie_rx_move_to_allocator(rxq, rba);
+ 			emergency = true;
++		}
+ 
+ 		if (trans->cfg->mq_rx_supported) {
+ 			/*
+@@ -1306,17 +1320,13 @@ restart:
+ 			iwl_pcie_rx_allocator_get(trans, rxq);
+ 
+ 		if (rxq->used_count % RX_CLAIM_REQ_ALLOC == 0 && !emergency) {
+-			struct iwl_rb_allocator *rba = &trans_pcie->rba;
+-
+ 			/* Add the remaining empty RBDs for allocator use */
+-			spin_lock(&rba->lock);
+-			list_splice_tail_init(&rxq->rx_used, &rba->rbd_empty);
+-			spin_unlock(&rba->lock);
++			iwl_pcie_rx_move_to_allocator(rxq, rba);
+ 		} else if (emergency) {
+ 			count++;
+ 			if (count == 8) {
+ 				count = 0;
+-				if (rxq->used_count < rxq->queue_size / 3)
++				if (rb_pending_alloc < rxq->queue_size / 3)
+ 					emergency = false;
+ 
+ 				rxq->read = i;
+diff --git a/drivers/net/wireless/marvell/libertas/if_usb.c b/drivers/net/wireless/marvell/libertas/if_usb.c
+index 16e54c757dd0..e4ae2b5a71c2 100644
+--- a/drivers/net/wireless/marvell/libertas/if_usb.c
++++ b/drivers/net/wireless/marvell/libertas/if_usb.c
+@@ -456,8 +456,6 @@ static int __if_usb_submit_rx_urb(struct if_usb_card *cardp,
+ 			  MRVDRV_ETH_RX_PACKET_BUFFER_SIZE, callbackfn,
+ 			  cardp);
+ 
+-	cardp->rx_urb->transfer_flags |= URB_ZERO_PACKET;
+-
+ 	lbs_deb_usb2(&cardp->udev->dev, "Pointer for rx_urb %p\n", cardp->rx_urb);
+ 	if ((ret = usb_submit_urb(cardp->rx_urb, GFP_ATOMIC))) {
+ 		lbs_deb_usbd(&cardp->udev->dev, "Submit Rx URB failed: %d\n", ret);
+diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c
+index fb5ab5812a22..a6746a1f20ae 100644
+--- a/drivers/nvdimm/bus.c
++++ b/drivers/nvdimm/bus.c
+@@ -484,6 +484,8 @@ static void nd_async_device_register(void *d, async_cookie_t cookie)
+ 		put_device(dev);
+ 	}
+ 	put_device(dev);
++	if (dev->parent)
++		put_device(dev->parent);
+ }
+ 
+ static void nd_async_device_unregister(void *d, async_cookie_t cookie)
+@@ -503,6 +505,8 @@ void __nd_device_register(struct device *dev)
+ 	if (!dev)
+ 		return;
+ 	dev->bus = &nvdimm_bus_type;
++	if (dev->parent)
++		get_device(dev->parent);
+ 	get_device(dev);
+ 	async_schedule_domain(nd_async_device_register, dev,
+ 			&nd_async_domain);
+diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
+index abaf38c61220..050deb56ee62 100644
+--- a/drivers/nvdimm/region_devs.c
++++ b/drivers/nvdimm/region_devs.c
+@@ -513,10 +513,17 @@ static ssize_t region_badblocks_show(struct device *dev,
+ 		struct device_attribute *attr, char *buf)
+ {
+ 	struct nd_region *nd_region = to_nd_region(dev);
++	ssize_t rc;
+ 
+-	return badblocks_show(&nd_region->bb, buf, 0);
+-}
++	device_lock(dev);
++	if (dev->driver)
++		rc = badblocks_show(&nd_region->bb, buf, 0);
++	else
++		rc = -ENXIO;
++	device_unlock(dev);
+ 
++	return rc;
++}
+ static DEVICE_ATTR(badblocks, 0444, region_badblocks_show, NULL);
+ 
+ static ssize_t resource_show(struct device *dev,
+diff --git a/drivers/pci/dwc/pci-dra7xx.c b/drivers/pci/dwc/pci-dra7xx.c
+index 362607f727ee..06eae132aff7 100644
+--- a/drivers/pci/dwc/pci-dra7xx.c
++++ b/drivers/pci/dwc/pci-dra7xx.c
+@@ -546,7 +546,7 @@ static const struct of_device_id of_dra7xx_pcie_match[] = {
+ };
+ 
+ /*
+- * dra7xx_pcie_ep_unaligned_memaccess: workaround for AM572x/AM571x Errata i870
++ * dra7xx_pcie_unaligned_memaccess: workaround for AM572x/AM571x Errata i870
+  * @dra7xx: the dra7xx device where the workaround should be applied
+  *
+  * Access to the PCIe slave port that are not 32-bit aligned will result
+@@ -556,7 +556,7 @@ static const struct of_device_id of_dra7xx_pcie_match[] = {
+  *
+  * To avoid this issue set PCIE_SS1_AXI2OCP_LEGACY_MODE_ENABLE to 1.
+  */
+-static int dra7xx_pcie_ep_unaligned_memaccess(struct device *dev)
++static int dra7xx_pcie_unaligned_memaccess(struct device *dev)
+ {
+ 	int ret;
+ 	struct device_node *np = dev->of_node;
+@@ -707,6 +707,11 @@ static int __init dra7xx_pcie_probe(struct platform_device *pdev)
+ 	case DW_PCIE_RC_TYPE:
+ 		dra7xx_pcie_writel(dra7xx, PCIECTRL_TI_CONF_DEVICE_TYPE,
+ 				   DEVICE_TYPE_RC);
++
++		ret = dra7xx_pcie_unaligned_memaccess(dev);
++		if (ret)
++			dev_err(dev, "WA for Errata i870 not applied\n");
++
+ 		ret = dra7xx_add_pcie_port(dra7xx, pdev);
+ 		if (ret < 0)
+ 			goto err_gpio;
+@@ -715,7 +720,7 @@ static int __init dra7xx_pcie_probe(struct platform_device *pdev)
+ 		dra7xx_pcie_writel(dra7xx, PCIECTRL_TI_CONF_DEVICE_TYPE,
+ 				   DEVICE_TYPE_EP);
+ 
+-		ret = dra7xx_pcie_ep_unaligned_memaccess(dev);
++		ret = dra7xx_pcie_unaligned_memaccess(dev);
+ 		if (ret)
+ 			goto err_gpio;
+ 
+diff --git a/drivers/pci/host/pcie-mediatek.c b/drivers/pci/host/pcie-mediatek.c
+index db93efdf1d63..c896bb9ef968 100644
+--- a/drivers/pci/host/pcie-mediatek.c
++++ b/drivers/pci/host/pcie-mediatek.c
+@@ -333,6 +333,17 @@ static struct mtk_pcie_port *mtk_pcie_find_port(struct pci_bus *bus,
+ {
+ 	struct mtk_pcie *pcie = bus->sysdata;
+ 	struct mtk_pcie_port *port;
++	struct pci_dev *dev = NULL;
++
++	/*
++	 * Walk the bus hierarchy to get the devfn value
++	 * of the port in the root bus.
++	 */
++	while (bus && bus->number) {
++		dev = bus->self;
++		bus = dev->bus;
++		devfn = dev->devfn;
++	}
+ 
+ 	list_for_each_entry(port, &pcie->ports, list)
+ 		if (port->slot == PCI_SLOT(devfn))
+diff --git a/drivers/pci/host/vmd.c b/drivers/pci/host/vmd.c
+index 509893bc3e63..2537b022f42d 100644
+--- a/drivers/pci/host/vmd.c
++++ b/drivers/pci/host/vmd.c
+@@ -183,9 +183,20 @@ static struct vmd_irq_list *vmd_next_irq(struct vmd_dev *vmd, struct msi_desc *d
+ 	int i, best = 1;
+ 	unsigned long flags;
+ 
+-	if (pci_is_bridge(msi_desc_to_pci_dev(desc)) || vmd->msix_count == 1)
++	if (vmd->msix_count == 1)
+ 		return &vmd->irqs[0];
+ 
++	/*
++	 * White list for fast-interrupt handlers. All others will share the
++	 * "slow" interrupt vector.
++	 */
++	switch (msi_desc_to_pci_dev(desc)->class) {
++	case PCI_CLASS_STORAGE_EXPRESS:
++		break;
++	default:
++		return &vmd->irqs[0];
++	}
++
+ 	raw_spin_lock_irqsave(&list_lock, flags);
+ 	for (i = 1; i < vmd->msix_count; i++)
+ 		if (vmd->irqs[i].count < vmd->irqs[best].count)
+diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
+index 496ed9130600..536e9a5cd2b1 100644
+--- a/drivers/pci/msi.c
++++ b/drivers/pci/msi.c
+@@ -958,7 +958,6 @@ static int __pci_enable_msix(struct pci_dev *dev, struct msix_entry *entries,
+ 			}
+ 		}
+ 	}
+-	WARN_ON(!!dev->msix_enabled);
+ 
+ 	/* Check whether driver already requested for MSI irq */
+ 	if (dev->msi_enabled) {
+@@ -1028,8 +1027,6 @@ static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec,
+ 	if (!pci_msi_supported(dev, minvec))
+ 		return -EINVAL;
+ 
+-	WARN_ON(!!dev->msi_enabled);
+-
+ 	/* Check whether driver already requested MSI-X irqs */
+ 	if (dev->msix_enabled) {
+ 		dev_info(&dev->dev,
+@@ -1040,6 +1037,9 @@ static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec,
+ 	if (maxvec < minvec)
+ 		return -ERANGE;
+ 
++	if (WARN_ON_ONCE(dev->msi_enabled))
++		return -EINVAL;
++
+ 	nvec = pci_msi_vec_count(dev);
+ 	if (nvec < 0)
+ 		return nvec;
+@@ -1088,6 +1088,9 @@ static int __pci_enable_msix_range(struct pci_dev *dev,
+ 	if (maxvec < minvec)
+ 		return -ERANGE;
+ 
++	if (WARN_ON_ONCE(dev->msix_enabled))
++		return -EINVAL;
++
+ 	for (;;) {
+ 		if (affd) {
+ 			nvec = irq_calc_affinity_vectors(minvec, nvec, affd);
+diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c
+index 4708eb9df71b..a3cedf8de863 100644
+--- a/drivers/pci/pci-acpi.c
++++ b/drivers/pci/pci-acpi.c
+@@ -738,19 +738,33 @@ static void pci_acpi_setup(struct device *dev)
+ 		return;
+ 
+ 	device_set_wakeup_capable(dev, true);
++	/*
++	 * For bridges that can do D3 we enable wake automatically (as
++	 * we do for the power management itself in that case). The
++	 * reason is that the bridge may have additional methods such as
++	 * _DSW that need to be called.
++	 */
++	if (pci_dev->bridge_d3)
++		device_wakeup_enable(dev);
++
+ 	acpi_pci_wakeup(pci_dev, false);
+ }
+ 
+ static void pci_acpi_cleanup(struct device *dev)
+ {
+ 	struct acpi_device *adev = ACPI_COMPANION(dev);
++	struct pci_dev *pci_dev = to_pci_dev(dev);
+ 
+ 	if (!adev)
+ 		return;
+ 
+ 	pci_acpi_remove_pm_notifier(adev);
+-	if (adev->wakeup.flags.valid)
++	if (adev->wakeup.flags.valid) {
++		if (pci_dev->bridge_d3)
++			device_wakeup_disable(dev);
++
+ 		device_set_wakeup_capable(dev, false);
++	}
+ }
+ 
+ static bool pci_acpi_bus_match(struct device *dev)
+diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
+index 633e55c57b13..c0e1985e4c75 100644
+--- a/drivers/pci/pcie/aspm.c
++++ b/drivers/pci/pcie/aspm.c
+@@ -937,7 +937,7 @@ void pcie_aspm_exit_link_state(struct pci_dev *pdev)
+ 	 * All PCIe functions are in one slot, remove one function will remove
+ 	 * the whole slot, so just wait until we are the last function left.
+ 	 */
+-	if (!list_is_last(&pdev->bus_list, &parent->subordinate->devices))
++	if (!list_empty(&parent->subordinate->devices))
+ 		goto out;
+ 
+ 	link = parent->link_state;
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 35c9b2f4b293..d442afa195ab 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -3163,7 +3163,11 @@ static void disable_igfx_irq(struct pci_dev *dev)
+ 
+ 	pci_iounmap(dev, regs);
+ }
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0042, disable_igfx_irq);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0046, disable_igfx_irq);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x004a, disable_igfx_irq);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0102, disable_igfx_irq);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0106, disable_igfx_irq);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x010a, disable_igfx_irq);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0152, disable_igfx_irq);
+ 
+diff --git a/drivers/pci/remove.c b/drivers/pci/remove.c
+index 2fa0dbde36b7..0911217467bc 100644
+--- a/drivers/pci/remove.c
++++ b/drivers/pci/remove.c
+@@ -24,9 +24,6 @@ static void pci_stop_dev(struct pci_dev *dev)
+ 		pci_remove_sysfs_dev_files(dev);
+ 		dev->is_added = 0;
+ 	}
+-
+-	if (dev->bus->self)
+-		pcie_aspm_exit_link_state(dev);
+ }
+ 
+ static void pci_destroy_dev(struct pci_dev *dev)
+@@ -40,6 +37,7 @@ static void pci_destroy_dev(struct pci_dev *dev)
+ 	list_del(&dev->bus_list);
+ 	up_write(&pci_bus_sem);
+ 
++	pcie_aspm_exit_link_state(dev);
+ 	pci_bridge_d3_update(dev);
+ 	pci_free_resources(dev);
+ 	put_device(&dev->dev);
+diff --git a/drivers/pcmcia/ricoh.h b/drivers/pcmcia/ricoh.h
+index 01098c841f87..8ac7b138c094 100644
+--- a/drivers/pcmcia/ricoh.h
++++ b/drivers/pcmcia/ricoh.h
+@@ -119,6 +119,10 @@
+ #define  RL5C4XX_MISC_CONTROL           0x2F /* 8 bit */
+ #define  RL5C4XX_ZV_ENABLE              0x08
+ 
++/* Misc Control 3 Register */
++#define RL5C4XX_MISC3			0x00A2 /* 16 bit */
++#define  RL5C47X_MISC3_CB_CLKRUN_DIS	BIT(1)
++
+ #ifdef __YENTA_H
+ 
+ #define rl_misc(socket)		((socket)->private[0])
+@@ -156,6 +160,35 @@ static void ricoh_set_zv(struct yenta_socket *socket)
+         }
+ }
+ 
++static void ricoh_set_clkrun(struct yenta_socket *socket, bool quiet)
++{
++	u16 misc3;
++
++	/*
++	 * RL5C475II likely has this setting, too, however no datasheet
++	 * is publicly available for this chip
++	 */
++	if (socket->dev->device != PCI_DEVICE_ID_RICOH_RL5C476 &&
++	    socket->dev->device != PCI_DEVICE_ID_RICOH_RL5C478)
++		return;
++
++	if (socket->dev->revision < 0x80)
++		return;
++
++	misc3 = config_readw(socket, RL5C4XX_MISC3);
++	if (misc3 & RL5C47X_MISC3_CB_CLKRUN_DIS) {
++		if (!quiet)
++			dev_dbg(&socket->dev->dev,
++				"CLKRUN feature already disabled\n");
++	} else if (disable_clkrun) {
++		if (!quiet)
++			dev_info(&socket->dev->dev,
++				 "Disabling CLKRUN feature\n");
++		misc3 |= RL5C47X_MISC3_CB_CLKRUN_DIS;
++		config_writew(socket, RL5C4XX_MISC3, misc3);
++	}
++}
++
+ static void ricoh_save_state(struct yenta_socket *socket)
+ {
+ 	rl_misc(socket) = config_readw(socket, RL5C4XX_MISC);
+@@ -172,6 +205,7 @@ static void ricoh_restore_state(struct yenta_socket *socket)
+ 	config_writew(socket, RL5C4XX_16BIT_IO_0, rl_io(socket));
+ 	config_writew(socket, RL5C4XX_16BIT_MEM_0, rl_mem(socket));
+ 	config_writew(socket, RL5C4XX_CONFIG, rl_config(socket));
++	ricoh_set_clkrun(socket, true);
+ }
+ 
+ 
+@@ -197,6 +231,7 @@ static int ricoh_override(struct yenta_socket *socket)
+ 	config_writew(socket, RL5C4XX_CONFIG, config);
+ 
+ 	ricoh_set_zv(socket);
++	ricoh_set_clkrun(socket, false);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/pcmcia/yenta_socket.c b/drivers/pcmcia/yenta_socket.c
+index 5d6d9b1549bc..5034422a1d96 100644
+--- a/drivers/pcmcia/yenta_socket.c
++++ b/drivers/pcmcia/yenta_socket.c
+@@ -26,7 +26,8 @@
+ 
+ static bool disable_clkrun;
+ module_param(disable_clkrun, bool, 0444);
+-MODULE_PARM_DESC(disable_clkrun, "If PC card doesn't function properly, please try this option");
++MODULE_PARM_DESC(disable_clkrun,
++		 "If PC card doesn't function properly, please try this option (TI and Ricoh bridges only)");
+ 
+ static bool isa_probe = 1;
+ module_param(isa_probe, bool, 0444);
+diff --git a/drivers/pinctrl/qcom/pinctrl-spmi-mpp.c b/drivers/pinctrl/qcom/pinctrl-spmi-mpp.c
+index 6556dbeae65e..ac251c62bc66 100644
+--- a/drivers/pinctrl/qcom/pinctrl-spmi-mpp.c
++++ b/drivers/pinctrl/qcom/pinctrl-spmi-mpp.c
+@@ -319,6 +319,8 @@ static int pmic_mpp_set_mux(struct pinctrl_dev *pctldev, unsigned function,
+ 	pad->function = function;
+ 
+ 	ret = pmic_mpp_write_mode_ctl(state, pad);
++	if (ret < 0)
++		return ret;
+ 
+ 	val = pad->is_enabled << PMIC_MPP_REG_MASTER_EN_SHIFT;
+ 
+@@ -343,13 +345,12 @@ static int pmic_mpp_config_get(struct pinctrl_dev *pctldev,
+ 
+ 	switch (param) {
+ 	case PIN_CONFIG_BIAS_DISABLE:
+-		arg = pad->pullup == PMIC_MPP_PULL_UP_OPEN;
++		if (pad->pullup != PMIC_MPP_PULL_UP_OPEN)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_BIAS_PULL_UP:
+ 		switch (pad->pullup) {
+-		case PMIC_MPP_PULL_UP_OPEN:
+-			arg = 0;
+-			break;
+ 		case PMIC_MPP_PULL_UP_0P6KOHM:
+ 			arg = 600;
+ 			break;
+@@ -364,13 +365,17 @@ static int pmic_mpp_config_get(struct pinctrl_dev *pctldev,
+ 		}
+ 		break;
+ 	case PIN_CONFIG_BIAS_HIGH_IMPEDANCE:
+-		arg = !pad->is_enabled;
++		if (pad->is_enabled)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_POWER_SOURCE:
+ 		arg = pad->power_source;
+ 		break;
+ 	case PIN_CONFIG_INPUT_ENABLE:
+-		arg = pad->input_enabled;
++		if (!pad->input_enabled)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_OUTPUT:
+ 		arg = pad->out_value;
+@@ -382,7 +387,9 @@ static int pmic_mpp_config_get(struct pinctrl_dev *pctldev,
+ 		arg = pad->amux_input;
+ 		break;
+ 	case PMIC_MPP_CONF_PAIRED:
+-		arg = pad->paired;
++		if (!pad->paired)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_DRIVE_STRENGTH:
+ 		arg = pad->drive_strength;
+@@ -455,7 +462,7 @@ static int pmic_mpp_config_set(struct pinctrl_dev *pctldev, unsigned int pin,
+ 			pad->dtest = arg;
+ 			break;
+ 		case PIN_CONFIG_DRIVE_STRENGTH:
+-			arg = pad->drive_strength;
++			pad->drive_strength = arg;
+ 			break;
+ 		case PMIC_MPP_CONF_AMUX_ROUTE:
+ 			if (arg >= PMIC_MPP_AMUX_ROUTE_ABUS4)
+@@ -502,6 +509,10 @@ static int pmic_mpp_config_set(struct pinctrl_dev *pctldev, unsigned int pin,
+ 	if (ret < 0)
+ 		return ret;
+ 
++	ret = pmic_mpp_write(state, pad, PMIC_MPP_REG_SINK_CTL, pad->drive_strength);
++	if (ret < 0)
++		return ret;
++
+ 	val = pad->is_enabled << PMIC_MPP_REG_MASTER_EN_SHIFT;
+ 
+ 	return pmic_mpp_write(state, pad, PMIC_MPP_REG_EN_CTL, val);
+diff --git a/drivers/pinctrl/qcom/pinctrl-ssbi-gpio.c b/drivers/pinctrl/qcom/pinctrl-ssbi-gpio.c
+index f53e32a9d8fc..0e153bae322e 100644
+--- a/drivers/pinctrl/qcom/pinctrl-ssbi-gpio.c
++++ b/drivers/pinctrl/qcom/pinctrl-ssbi-gpio.c
+@@ -260,22 +260,32 @@ static int pm8xxx_pin_config_get(struct pinctrl_dev *pctldev,
+ 
+ 	switch (param) {
+ 	case PIN_CONFIG_BIAS_DISABLE:
+-		arg = pin->bias == PM8XXX_GPIO_BIAS_NP;
++		if (pin->bias != PM8XXX_GPIO_BIAS_NP)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_BIAS_PULL_DOWN:
+-		arg = pin->bias == PM8XXX_GPIO_BIAS_PD;
++		if (pin->bias != PM8XXX_GPIO_BIAS_PD)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_BIAS_PULL_UP:
+-		arg = pin->bias <= PM8XXX_GPIO_BIAS_PU_1P5_30;
++		if (pin->bias > PM8XXX_GPIO_BIAS_PU_1P5_30)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PM8XXX_QCOM_PULL_UP_STRENGTH:
+ 		arg = pin->pull_up_strength;
+ 		break;
+ 	case PIN_CONFIG_BIAS_HIGH_IMPEDANCE:
+-		arg = pin->disable;
++		if (!pin->disable)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_INPUT_ENABLE:
+-		arg = pin->mode == PM8XXX_GPIO_MODE_INPUT;
++		if (pin->mode != PM8XXX_GPIO_MODE_INPUT)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_OUTPUT:
+ 		if (pin->mode & PM8XXX_GPIO_MODE_OUTPUT)
+@@ -290,10 +300,14 @@ static int pm8xxx_pin_config_get(struct pinctrl_dev *pctldev,
+ 		arg = pin->output_strength;
+ 		break;
+ 	case PIN_CONFIG_DRIVE_PUSH_PULL:
+-		arg = !pin->open_drain;
++		if (pin->open_drain)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	case PIN_CONFIG_DRIVE_OPEN_DRAIN:
+-		arg = pin->open_drain;
++		if (!pin->open_drain)
++			return -EINVAL;
++		arg = 1;
+ 		break;
+ 	default:
+ 		return -EINVAL;
+diff --git a/drivers/rpmsg/qcom_smd.c b/drivers/rpmsg/qcom_smd.c
+index f1a2147a6d84..72d02bfeda9e 100644
+--- a/drivers/rpmsg/qcom_smd.c
++++ b/drivers/rpmsg/qcom_smd.c
+@@ -1049,8 +1049,10 @@ static struct qcom_smd_channel *qcom_smd_create_channel(struct qcom_smd_edge *ed
+ 
+ 	channel->edge = edge;
+ 	channel->name = kstrdup(name, GFP_KERNEL);
+-	if (!channel->name)
+-		return ERR_PTR(-ENOMEM);
++	if (!channel->name) {
++		ret = -ENOMEM;
++		goto free_channel;
++	}
+ 
+ 	mutex_init(&channel->tx_lock);
+ 	spin_lock_init(&channel->recv_lock);
+@@ -1099,6 +1101,7 @@ static struct qcom_smd_channel *qcom_smd_create_channel(struct qcom_smd_edge *ed
+ 
+ free_name_and_channel:
+ 	kfree(channel->name);
++free_channel:
+ 	kfree(channel);
+ 
+ 	return ERR_PTR(ret);
+diff --git a/drivers/scsi/esp_scsi.c b/drivers/scsi/esp_scsi.c
+index c3fc34b9964d..9e5d3f7d29ae 100644
+--- a/drivers/scsi/esp_scsi.c
++++ b/drivers/scsi/esp_scsi.c
+@@ -1338,6 +1338,7 @@ static int esp_data_bytes_sent(struct esp *esp, struct esp_cmd_entry *ent,
+ 
+ 	bytes_sent = esp->data_dma_len;
+ 	bytes_sent -= ecount;
++	bytes_sent -= esp->send_cmd_residual;
+ 
+ 	/*
+ 	 * The am53c974 has a DMA 'pecularity'. The doc states:
+diff --git a/drivers/scsi/esp_scsi.h b/drivers/scsi/esp_scsi.h
+index 8163dca2071b..a77772777a30 100644
+--- a/drivers/scsi/esp_scsi.h
++++ b/drivers/scsi/esp_scsi.h
+@@ -540,6 +540,8 @@ struct esp {
+ 
+ 	void			*dma;
+ 	int			dmarev;
++
++	u32			send_cmd_residual;
+ };
+ 
+ /* A front-end driver for the ESP chip should do the following in
+diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c
+index 1a6f122bb25d..4ade13d72deb 100644
+--- a/drivers/scsi/lpfc/lpfc_scsi.c
++++ b/drivers/scsi/lpfc/lpfc_scsi.c
+@@ -4149,9 +4149,17 @@ lpfc_scsi_cmd_iocb_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pIocbIn,
+ 
+ 	lpfc_scsi_unprep_dma_buf(phba, lpfc_cmd);
+ 
+-	spin_lock_irqsave(&phba->hbalock, flags);
+-	lpfc_cmd->pCmd = NULL;
+-	spin_unlock_irqrestore(&phba->hbalock, flags);
++	/* If pCmd was set to NULL from abort path, do not call scsi_done */
++	if (xchg(&lpfc_cmd->pCmd, NULL) == NULL) {
++		lpfc_printf_vlog(vport, KERN_INFO, LOG_FCP,
++				 "0711 FCP cmd already NULL, sid: 0x%06x, "
++				 "did: 0x%06x, oxid: 0x%04x\n",
++				 vport->fc_myDID,
++				 (pnode) ? pnode->nlp_DID : 0,
++				 phba->sli_rev == LPFC_SLI_REV4 ?
++				 lpfc_cmd->cur_iocbq.sli4_xritag : 0xffff);
++		return;
++	}
+ 
+ 	/* The sdev is not guaranteed to be valid post scsi_done upcall. */
+ 	cmd->scsi_done(cmd);
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index dc83498024dc..24b6e56f6e97 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -3585,6 +3585,7 @@ lpfc_sli_handle_slow_ring_event_s4(struct lpfc_hba *phba,
+ 	struct hbq_dmabuf *dmabuf;
+ 	struct lpfc_cq_event *cq_event;
+ 	unsigned long iflag;
++	int count = 0;
+ 
+ 	spin_lock_irqsave(&phba->hbalock, iflag);
+ 	phba->hba_flag &= ~HBA_SP_QUEUE_EVT;
+@@ -3606,16 +3607,22 @@ lpfc_sli_handle_slow_ring_event_s4(struct lpfc_hba *phba,
+ 			if (irspiocbq)
+ 				lpfc_sli_sp_handle_rspiocb(phba, pring,
+ 							   irspiocbq);
++			count++;
+ 			break;
+ 		case CQE_CODE_RECEIVE:
+ 		case CQE_CODE_RECEIVE_V1:
+ 			dmabuf = container_of(cq_event, struct hbq_dmabuf,
+ 					      cq_event);
+ 			lpfc_sli4_handle_received_buffer(phba, dmabuf);
++			count++;
+ 			break;
+ 		default:
+ 			break;
+ 		}
++
++		/* Limit the number of events to 64 to avoid soft lockups */
++		if (count == 64)
++			break;
+ 	}
+ }
+ 
+diff --git a/drivers/scsi/mac_esp.c b/drivers/scsi/mac_esp.c
+index eb551f3cc471..71879f2207e0 100644
+--- a/drivers/scsi/mac_esp.c
++++ b/drivers/scsi/mac_esp.c
+@@ -427,6 +427,8 @@ static void mac_esp_send_pio_cmd(struct esp *esp, u32 addr, u32 esp_count,
+ 			scsi_esp_cmd(esp, ESP_CMD_TI);
+ 		}
+ 	}
++
++	esp->send_cmd_residual = esp_count;
+ }
+ 
+ static int mac_esp_irq_pending(struct esp *esp)
+diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
+index d55c365be238..d0abee3e6ed9 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_base.c
++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
+@@ -7361,6 +7361,9 @@ static int megasas_mgmt_compat_ioctl_fw(struct file *file, unsigned long arg)
+ 		get_user(user_sense_off, &cioc->sense_off))
+ 		return -EFAULT;
+ 
++	if (local_sense_off != user_sense_off)
++		return -EINVAL;
++
+ 	if (local_sense_len) {
+ 		void __user **sense_ioc_ptr =
+ 			(void __user **)((u8 *)((unsigned long)&ioc->frame.raw) + local_sense_off);
+diff --git a/drivers/soc/tegra/pmc.c b/drivers/soc/tegra/pmc.c
+index 0453ff6839a7..7e9ef3431bea 100644
+--- a/drivers/soc/tegra/pmc.c
++++ b/drivers/soc/tegra/pmc.c
+@@ -1321,7 +1321,7 @@ static void tegra_pmc_init_tsense_reset(struct tegra_pmc *pmc)
+ 	if (!pmc->soc->has_tsense_reset)
+ 		return;
+ 
+-	np = of_find_node_by_name(pmc->dev->of_node, "i2c-thermtrip");
++	np = of_get_child_by_name(pmc->dev->of_node, "i2c-thermtrip");
+ 	if (!np) {
+ 		dev_warn(dev, "i2c-thermtrip node not found, %s.\n", disabled);
+ 		return;
+diff --git a/drivers/spi/spi-bcm-qspi.c b/drivers/spi/spi-bcm-qspi.c
+index 6573152ce893..0316fae20cfe 100644
+--- a/drivers/spi/spi-bcm-qspi.c
++++ b/drivers/spi/spi-bcm-qspi.c
+@@ -88,7 +88,7 @@
+ #define BSPI_BPP_MODE_SELECT_MASK		BIT(8)
+ #define BSPI_BPP_ADDR_SELECT_MASK		BIT(16)
+ 
+-#define BSPI_READ_LENGTH			512
++#define BSPI_READ_LENGTH			256
+ 
+ /* MSPI register offsets */
+ #define MSPI_SPCR0_LSB				0x000
+diff --git a/drivers/spi/spi-ep93xx.c b/drivers/spi/spi-ep93xx.c
+index e5cc07357746..ce28c910ee48 100644
+--- a/drivers/spi/spi-ep93xx.c
++++ b/drivers/spi/spi-ep93xx.c
+@@ -246,6 +246,19 @@ static int ep93xx_spi_read_write(struct spi_master *master)
+ 	return -EINPROGRESS;
+ }
+ 
++static enum dma_transfer_direction
++ep93xx_dma_data_to_trans_dir(enum dma_data_direction dir)
++{
++	switch (dir) {
++	case DMA_TO_DEVICE:
++		return DMA_MEM_TO_DEV;
++	case DMA_FROM_DEVICE:
++		return DMA_DEV_TO_MEM;
++	default:
++		return DMA_TRANS_NONE;
++	}
++}
++
+ /**
+  * ep93xx_spi_dma_prepare() - prepares a DMA transfer
+  * @master: SPI master
+@@ -257,7 +270,7 @@ static int ep93xx_spi_read_write(struct spi_master *master)
+  */
+ static struct dma_async_tx_descriptor *
+ ep93xx_spi_dma_prepare(struct spi_master *master,
+-		       enum dma_transfer_direction dir)
++		       enum dma_data_direction dir)
+ {
+ 	struct ep93xx_spi *espi = spi_master_get_devdata(master);
+ 	struct spi_transfer *xfer = master->cur_msg->state;
+@@ -277,9 +290,9 @@ ep93xx_spi_dma_prepare(struct spi_master *master,
+ 		buswidth = DMA_SLAVE_BUSWIDTH_1_BYTE;
+ 
+ 	memset(&conf, 0, sizeof(conf));
+-	conf.direction = dir;
++	conf.direction = ep93xx_dma_data_to_trans_dir(dir);
+ 
+-	if (dir == DMA_DEV_TO_MEM) {
++	if (dir == DMA_FROM_DEVICE) {
+ 		chan = espi->dma_rx;
+ 		buf = xfer->rx_buf;
+ 		sgt = &espi->rx_sgt;
+@@ -343,7 +356,8 @@ ep93xx_spi_dma_prepare(struct spi_master *master,
+ 	if (!nents)
+ 		return ERR_PTR(-ENOMEM);
+ 
+-	txd = dmaengine_prep_slave_sg(chan, sgt->sgl, nents, dir, DMA_CTRL_ACK);
++	txd = dmaengine_prep_slave_sg(chan, sgt->sgl, nents, conf.direction,
++				      DMA_CTRL_ACK);
+ 	if (!txd) {
+ 		dma_unmap_sg(chan->device->dev, sgt->sgl, sgt->nents, dir);
+ 		return ERR_PTR(-ENOMEM);
+@@ -360,13 +374,13 @@ ep93xx_spi_dma_prepare(struct spi_master *master,
+  * unmapped.
+  */
+ static void ep93xx_spi_dma_finish(struct spi_master *master,
+-				  enum dma_transfer_direction dir)
++				  enum dma_data_direction dir)
+ {
+ 	struct ep93xx_spi *espi = spi_master_get_devdata(master);
+ 	struct dma_chan *chan;
+ 	struct sg_table *sgt;
+ 
+-	if (dir == DMA_DEV_TO_MEM) {
++	if (dir == DMA_FROM_DEVICE) {
+ 		chan = espi->dma_rx;
+ 		sgt = &espi->rx_sgt;
+ 	} else {
+@@ -381,8 +395,8 @@ static void ep93xx_spi_dma_callback(void *callback_param)
+ {
+ 	struct spi_master *master = callback_param;
+ 
+-	ep93xx_spi_dma_finish(master, DMA_MEM_TO_DEV);
+-	ep93xx_spi_dma_finish(master, DMA_DEV_TO_MEM);
++	ep93xx_spi_dma_finish(master, DMA_TO_DEVICE);
++	ep93xx_spi_dma_finish(master, DMA_FROM_DEVICE);
+ 
+ 	spi_finalize_current_transfer(master);
+ }
+@@ -392,15 +406,15 @@ static int ep93xx_spi_dma_transfer(struct spi_master *master)
+ 	struct ep93xx_spi *espi = spi_master_get_devdata(master);
+ 	struct dma_async_tx_descriptor *rxd, *txd;
+ 
+-	rxd = ep93xx_spi_dma_prepare(master, DMA_DEV_TO_MEM);
++	rxd = ep93xx_spi_dma_prepare(master, DMA_FROM_DEVICE);
+ 	if (IS_ERR(rxd)) {
+ 		dev_err(&master->dev, "DMA RX failed: %ld\n", PTR_ERR(rxd));
+ 		return PTR_ERR(rxd);
+ 	}
+ 
+-	txd = ep93xx_spi_dma_prepare(master, DMA_MEM_TO_DEV);
++	txd = ep93xx_spi_dma_prepare(master, DMA_TO_DEVICE);
+ 	if (IS_ERR(txd)) {
+-		ep93xx_spi_dma_finish(master, DMA_DEV_TO_MEM);
++		ep93xx_spi_dma_finish(master, DMA_FROM_DEVICE);
+ 		dev_err(&master->dev, "DMA TX failed: %ld\n", PTR_ERR(txd));
+ 		return PTR_ERR(txd);
+ 	}
+diff --git a/drivers/tc/tc.c b/drivers/tc/tc.c
+index 3be9519654e5..cf3fad2cb871 100644
+--- a/drivers/tc/tc.c
++++ b/drivers/tc/tc.c
+@@ -2,7 +2,7 @@
+  *	TURBOchannel bus services.
+  *
+  *	Copyright (c) Harald Koerfgen, 1998
+- *	Copyright (c) 2001, 2003, 2005, 2006  Maciej W. Rozycki
++ *	Copyright (c) 2001, 2003, 2005, 2006, 2018  Maciej W. Rozycki
+  *	Copyright (c) 2005  James Simmons
+  *
+  *	This file is subject to the terms and conditions of the GNU
+@@ -10,6 +10,7 @@
+  *	directory of this archive for more details.
+  */
+ #include <linux/compiler.h>
++#include <linux/dma-mapping.h>
+ #include <linux/errno.h>
+ #include <linux/init.h>
+ #include <linux/ioport.h>
+@@ -92,6 +93,11 @@ static void __init tc_bus_add_devices(struct tc_bus *tbus)
+ 		tdev->dev.bus = &tc_bus_type;
+ 		tdev->slot = slot;
+ 
++		/* TURBOchannel has 34-bit DMA addressing (16GiB space). */
++		tdev->dma_mask = DMA_BIT_MASK(34);
++		tdev->dev.dma_mask = &tdev->dma_mask;
++		tdev->dev.coherent_dma_mask = DMA_BIT_MASK(34);
++
+ 		for (i = 0; i < 8; i++) {
+ 			tdev->firmware[i] =
+ 				readb(module + offset + TC_FIRM_VER + 4 * i);
+diff --git a/drivers/thermal/da9062-thermal.c b/drivers/thermal/da9062-thermal.c
+index dd8dd947b7f0..01b0cb994457 100644
+--- a/drivers/thermal/da9062-thermal.c
++++ b/drivers/thermal/da9062-thermal.c
+@@ -106,7 +106,7 @@ static void da9062_thermal_poll_on(struct work_struct *work)
+ 					   THERMAL_EVENT_UNSPECIFIED);
+ 
+ 		delay = msecs_to_jiffies(thermal->zone->passive_delay);
+-		schedule_delayed_work(&thermal->work, delay);
++		queue_delayed_work(system_freezable_wq, &thermal->work, delay);
+ 		return;
+ 	}
+ 
+@@ -125,7 +125,7 @@ static irqreturn_t da9062_thermal_irq_handler(int irq, void *data)
+ 	struct da9062_thermal *thermal = data;
+ 
+ 	disable_irq_nosync(thermal->irq);
+-	schedule_delayed_work(&thermal->work, 0);
++	queue_delayed_work(system_freezable_wq, &thermal->work, 0);
+ 
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/tty/serial/kgdboc.c b/drivers/tty/serial/kgdboc.c
+index a260cde743e2..2db68dfe497d 100644
+--- a/drivers/tty/serial/kgdboc.c
++++ b/drivers/tty/serial/kgdboc.c
+@@ -133,6 +133,11 @@ static void kgdboc_unregister_kbd(void)
+ 
+ static int kgdboc_option_setup(char *opt)
+ {
++	if (!opt) {
++		pr_err("kgdboc: config string not provided\n");
++		return -EINVAL;
++	}
++
+ 	if (strlen(opt) >= MAX_CONFIG_LEN) {
+ 		printk(KERN_ERR "kgdboc: config string too long\n");
+ 		return -ENOSPC;
+diff --git a/drivers/uio/uio.c b/drivers/uio/uio.c
+index 41784798c789..0a730136646d 100644
+--- a/drivers/uio/uio.c
++++ b/drivers/uio/uio.c
+@@ -249,6 +249,8 @@ static struct class uio_class = {
+ 	.dev_groups = uio_groups,
+ };
+ 
++bool uio_class_registered;
++
+ /*
+  * device functions
+  */
+@@ -780,6 +782,9 @@ static int init_uio_class(void)
+ 		printk(KERN_ERR "class_register failed for uio\n");
+ 		goto err_class_register;
+ 	}
++
++	uio_class_registered = true;
++
+ 	return 0;
+ 
+ err_class_register:
+@@ -790,6 +795,7 @@ exit:
+ 
+ static void release_uio_class(void)
+ {
++	uio_class_registered = false;
+ 	class_unregister(&uio_class);
+ 	uio_major_cleanup();
+ }
+@@ -809,6 +815,9 @@ int __uio_register_device(struct module *owner,
+ 	struct uio_device *idev;
+ 	int ret = 0;
+ 
++	if (!uio_class_registered)
++		return -EPROBE_DEFER;
++
+ 	if (!parent || !info || !info->name || !info->version)
+ 		return -EINVAL;
+ 
+diff --git a/drivers/usb/chipidea/otg.h b/drivers/usb/chipidea/otg.h
+index 9ecb598e48f0..a5557c70034a 100644
+--- a/drivers/usb/chipidea/otg.h
++++ b/drivers/usb/chipidea/otg.h
+@@ -20,7 +20,8 @@ void ci_handle_vbus_change(struct ci_hdrc *ci);
+ static inline void ci_otg_queue_work(struct ci_hdrc *ci)
+ {
+ 	disable_irq_nosync(ci->irq);
+-	queue_work(ci->wq, &ci->work);
++	if (queue_work(ci->wq, &ci->work) == false)
++		enable_irq(ci->irq);
+ }
+ 
+ #endif /* __DRIVERS_USB_CHIPIDEA_OTG_H */
+diff --git a/drivers/usb/gadget/udc/atmel_usba_udc.c b/drivers/usb/gadget/udc/atmel_usba_udc.c
+index a884c022df7a..cb66f982c313 100644
+--- a/drivers/usb/gadget/udc/atmel_usba_udc.c
++++ b/drivers/usb/gadget/udc/atmel_usba_udc.c
+@@ -2071,6 +2071,8 @@ static struct usba_ep * atmel_udc_of_init(struct platform_device *pdev,
+ 
+ 	udc->errata = match->data;
+ 	udc->pmc = syscon_regmap_lookup_by_compatible("atmel,at91sam9g45-pmc");
++	if (IS_ERR(udc->pmc))
++		udc->pmc = syscon_regmap_lookup_by_compatible("atmel,at91sam9rl-pmc");
+ 	if (IS_ERR(udc->pmc))
+ 		udc->pmc = syscon_regmap_lookup_by_compatible("atmel,at91sam9x5-pmc");
+ 	if (udc->errata && IS_ERR(udc->pmc))
+diff --git a/drivers/usb/gadget/udc/renesas_usb3.c b/drivers/usb/gadget/udc/renesas_usb3.c
+index 36a706f475d2..ade0723787e5 100644
+--- a/drivers/usb/gadget/udc/renesas_usb3.c
++++ b/drivers/usb/gadget/udc/renesas_usb3.c
+@@ -2374,6 +2374,9 @@ static ssize_t renesas_usb3_b_device_write(struct file *file,
+ 	else
+ 		usb3->forced_b_device = false;
+ 
++	if (usb3->workaround_for_vbus)
++		usb3_disconnect(usb3);
++
+ 	/* Let this driver call usb3_connect() anyway */
+ 	usb3_check_id(usb3);
+ 
+diff --git a/drivers/usb/host/ohci-at91.c b/drivers/usb/host/ohci-at91.c
+index 5302f988e7e6..e0ebd3d513c6 100644
+--- a/drivers/usb/host/ohci-at91.c
++++ b/drivers/usb/host/ohci-at91.c
+@@ -550,6 +550,8 @@ static int ohci_hcd_at91_drv_probe(struct platform_device *pdev)
+ 		pdata->overcurrent_pin[i] =
+ 			devm_gpiod_get_index_optional(&pdev->dev, "atmel,oc",
+ 						      i, GPIOD_IN);
++		if (!pdata->overcurrent_pin[i])
++			continue;
+ 		if (IS_ERR(pdata->overcurrent_pin[i])) {
+ 			err = PTR_ERR(pdata->overcurrent_pin[i]);
+ 			dev_err(&pdev->dev, "unable to claim gpio \"overcurrent\": %d\n", err);
+diff --git a/drivers/usb/usbip/vudc_main.c b/drivers/usb/usbip/vudc_main.c
+index 9e655714e389..916e2eefc886 100644
+--- a/drivers/usb/usbip/vudc_main.c
++++ b/drivers/usb/usbip/vudc_main.c
+@@ -85,6 +85,10 @@ static int __init init(void)
+ cleanup:
+ 	list_for_each_entry_safe(udc_dev, udc_dev2, &vudc_devices, dev_entry) {
+ 		list_del(&udc_dev->dev_entry);
++		/*
++		 * Just do platform_device_del() here, put_vudc_device()
++		 * calls the platform_device_put()
++		 */
+ 		platform_device_del(udc_dev->pdev);
+ 		put_vudc_device(udc_dev);
+ 	}
+@@ -101,7 +105,11 @@ static void __exit cleanup(void)
+ 
+ 	list_for_each_entry_safe(udc_dev, udc_dev2, &vudc_devices, dev_entry) {
+ 		list_del(&udc_dev->dev_entry);
+-		platform_device_unregister(udc_dev->pdev);
++		/*
++		 * Just do platform_device_del() here, put_vudc_device()
++		 * calls the platform_device_put()
++		 */
++		platform_device_del(udc_dev->pdev);
+ 		put_vudc_device(udc_dev);
+ 	}
+ 	platform_driver_unregister(&vudc_driver);
+diff --git a/drivers/w1/masters/omap_hdq.c b/drivers/w1/masters/omap_hdq.c
+index 83fc9aab34e8..3099052e1243 100644
+--- a/drivers/w1/masters/omap_hdq.c
++++ b/drivers/w1/masters/omap_hdq.c
+@@ -763,6 +763,8 @@ static int omap_hdq_remove(struct platform_device *pdev)
+ 	/* remove module dependency */
+ 	pm_runtime_disable(&pdev->dev);
+ 
++	w1_remove_master_device(&omap_w1_master);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
+index f98b8c135db9..95dbee89b758 100644
+--- a/drivers/xen/swiotlb-xen.c
++++ b/drivers/xen/swiotlb-xen.c
+@@ -317,6 +317,9 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
+ 	*/
+ 	flags &= ~(__GFP_DMA | __GFP_HIGHMEM);
+ 
++	/* Convert the size to actually allocated. */
++	size = 1UL << (order + XEN_PAGE_SHIFT);
++
+ 	/* On ARM this function returns an ioremap'ped virtual address for
+ 	 * which virt_to_phys doesn't return the corresponding physical
+ 	 * address. In fact on ARM virt_to_phys only works for kernel direct
+@@ -365,6 +368,9 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
+ 	 * physical address */
+ 	phys = xen_bus_to_phys(dev_addr);
+ 
++	/* Convert the size to actually allocated. */
++	size = 1UL << (order + XEN_PAGE_SHIFT);
++
+ 	if (((dev_addr + size - 1 <= dma_mask)) ||
+ 	    range_straddles_page_boundary(phys, size))
+ 		xen_destroy_contiguous_region(phys, order);
+diff --git a/drivers/xen/xen-balloon.c b/drivers/xen/xen-balloon.c
+index 294f35ce9e46..cf8ef8cee5a0 100644
+--- a/drivers/xen/xen-balloon.c
++++ b/drivers/xen/xen-balloon.c
+@@ -75,12 +75,15 @@ static void watch_target(struct xenbus_watch *watch,
+ 
+ 	if (!watch_fired) {
+ 		watch_fired = true;
+-		err = xenbus_scanf(XBT_NIL, "memory", "static-max", "%llu",
+-				   &static_max);
+-		if (err != 1)
+-			static_max = new_target;
+-		else
++
++		if ((xenbus_scanf(XBT_NIL, "memory", "static-max",
++				  "%llu", &static_max) == 1) ||
++		    (xenbus_scanf(XBT_NIL, "memory", "memory_static_max",
++				  "%llu", &static_max) == 1))
+ 			static_max >>= PAGE_SHIFT - 10;
++		else
++			static_max = new_target;
++
+ 		target_diff = (xen_pv_domain() || xen_initial_domain()) ? 0
+ 				: static_max - balloon_stats.target_pages;
+ 	}
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index f96f72659693..2cb3569ac548 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -7573,6 +7573,7 @@ static noinline int find_free_extent(struct btrfs_fs_info *fs_info,
+ 	struct btrfs_block_group_cache *block_group = NULL;
+ 	u64 search_start = 0;
+ 	u64 max_extent_size = 0;
++	u64 max_free_space = 0;
+ 	u64 empty_cluster = 0;
+ 	struct btrfs_space_info *space_info;
+ 	int loop = 0;
+@@ -7867,8 +7868,8 @@ unclustered_alloc:
+ 			spin_lock(&ctl->tree_lock);
+ 			if (ctl->free_space <
+ 			    num_bytes + empty_cluster + empty_size) {
+-				if (ctl->free_space > max_extent_size)
+-					max_extent_size = ctl->free_space;
++				max_free_space = max(max_free_space,
++						     ctl->free_space);
+ 				spin_unlock(&ctl->tree_lock);
+ 				goto loop;
+ 			}
+@@ -8037,6 +8038,8 @@ loop:
+ 	}
+ out:
+ 	if (ret == -ENOSPC) {
++		if (!max_extent_size)
++			max_extent_size = max_free_space;
+ 		spin_lock(&space_info->lock);
+ 		space_info->max_extent_size = max_extent_size;
+ 		spin_unlock(&space_info->lock);
+@@ -8398,6 +8401,19 @@ btrfs_init_new_buffer(struct btrfs_trans_handle *trans, struct btrfs_root *root,
+ 	if (IS_ERR(buf))
+ 		return buf;
+ 
++	/*
++	 * Extra safety check in case the extent tree is corrupted and extent
++	 * allocator chooses to use a tree block which is already used and
++	 * locked.
++	 */
++	if (buf->lock_owner == current->pid) {
++		btrfs_err_rl(fs_info,
++"tree block %llu owner %llu already locked by pid=%d, extent tree corruption detected",
++			buf->start, btrfs_header_owner(buf), current->pid);
++		free_extent_buffer(buf);
++		return ERR_PTR(-EUCLEAN);
++	}
++
+ 	btrfs_set_header_generation(buf, trans->transid);
+ 	btrfs_set_buffer_lockdep_class(root->root_key.objectid, buf, level);
+ 	btrfs_tree_lock(buf);
+@@ -9028,15 +9044,14 @@ static noinline int walk_up_proc(struct btrfs_trans_handle *trans,
+ 	if (eb == root->node) {
+ 		if (wc->flags[level] & BTRFS_BLOCK_FLAG_FULL_BACKREF)
+ 			parent = eb->start;
+-		else
+-			BUG_ON(root->root_key.objectid !=
+-			       btrfs_header_owner(eb));
++		else if (root->root_key.objectid != btrfs_header_owner(eb))
++			goto owner_mismatch;
+ 	} else {
+ 		if (wc->flags[level + 1] & BTRFS_BLOCK_FLAG_FULL_BACKREF)
+ 			parent = path->nodes[level + 1]->start;
+-		else
+-			BUG_ON(root->root_key.objectid !=
+-			       btrfs_header_owner(path->nodes[level + 1]));
++		else if (root->root_key.objectid !=
++			 btrfs_header_owner(path->nodes[level + 1]))
++			goto owner_mismatch;
+ 	}
+ 
+ 	btrfs_free_tree_block(trans, root, eb, parent, wc->refs[level] == 1);
+@@ -9044,6 +9059,11 @@ out:
+ 	wc->refs[level] = 0;
+ 	wc->flags[level] = 0;
+ 	return 0;
++
++owner_mismatch:
++	btrfs_err_rl(fs_info, "unexpected tree owner, have %llu expect %llu",
++		     btrfs_header_owner(eb), root->root_key.objectid);
++	return -EUCLEAN;
+ }
+ 
+ static noinline int walk_down_tree(struct btrfs_trans_handle *trans,
+@@ -9097,6 +9117,8 @@ static noinline int walk_up_tree(struct btrfs_trans_handle *trans,
+ 			ret = walk_up_proc(trans, root, path, wc);
+ 			if (ret > 0)
+ 				return 0;
++			if (ret < 0)
++				return ret;
+ 
+ 			if (path->locks[level]) {
+ 				btrfs_tree_unlock_rw(path->nodes[level],
+@@ -9862,6 +9884,7 @@ void btrfs_put_block_group_cache(struct btrfs_fs_info *info)
+ 
+ 		block_group = btrfs_lookup_first_block_group(info, last);
+ 		while (block_group) {
++			wait_block_group_cache_done(block_group);
+ 			spin_lock(&block_group->lock);
+ 			if (block_group->iref)
+ 				break;
+@@ -10250,7 +10273,7 @@ error:
+ void btrfs_create_pending_block_groups(struct btrfs_trans_handle *trans,
+ 				       struct btrfs_fs_info *fs_info)
+ {
+-	struct btrfs_block_group_cache *block_group, *tmp;
++	struct btrfs_block_group_cache *block_group;
+ 	struct btrfs_root *extent_root = fs_info->extent_root;
+ 	struct btrfs_block_group_item item;
+ 	struct btrfs_key key;
+@@ -10258,7 +10281,10 @@ void btrfs_create_pending_block_groups(struct btrfs_trans_handle *trans,
+ 	bool can_flush_pending_bgs = trans->can_flush_pending_bgs;
+ 
+ 	trans->can_flush_pending_bgs = false;
+-	list_for_each_entry_safe(block_group, tmp, &trans->new_bgs, bg_list) {
++	while (!list_empty(&trans->new_bgs)) {
++		block_group = list_first_entry(&trans->new_bgs,
++					       struct btrfs_block_group_cache,
++					       bg_list);
+ 		if (ret)
+ 			goto next;
+ 
+@@ -10957,6 +10983,10 @@ static int btrfs_trim_free_extents(struct btrfs_device *device,
+ 
+ 	*trimmed = 0;
+ 
++	/* Discard not supported = nothing to do. */
++	if (!blk_queue_discard(bdev_get_queue(device->bdev)))
++		return 0;
++
+ 	/* Not writeable = nothing to do. */
+ 	if (!device->writeable)
+ 		return 0;
+@@ -11018,6 +11048,15 @@ static int btrfs_trim_free_extents(struct btrfs_device *device,
+ 	return ret;
+ }
+ 
++/*
++ * Trim the whole filesystem by:
++ * 1) trimming the free space in each block group
++ * 2) trimming the unallocated space on each device
++ *
++ * This will also continue trimming even if a block group or device encounters
++ * an error.  The return value will be the last error, or 0 if nothing bad
++ * happens.
++ */
+ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
+ {
+ 	struct btrfs_block_group_cache *cache = NULL;
+@@ -11027,18 +11066,14 @@ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
+ 	u64 start;
+ 	u64 end;
+ 	u64 trimmed = 0;
+-	u64 total_bytes = btrfs_super_total_bytes(fs_info->super_copy);
++	u64 bg_failed = 0;
++	u64 dev_failed = 0;
++	int bg_ret = 0;
++	int dev_ret = 0;
+ 	int ret = 0;
+ 
+-	/*
+-	 * try to trim all FS space, our block group may start from non-zero.
+-	 */
+-	if (range->len == total_bytes)
+-		cache = btrfs_lookup_first_block_group(fs_info, range->start);
+-	else
+-		cache = btrfs_lookup_block_group(fs_info, range->start);
+-
+-	while (cache) {
++	cache = btrfs_lookup_first_block_group(fs_info, range->start);
++	for (; cache; cache = next_block_group(fs_info, cache)) {
+ 		if (cache->key.objectid >= (range->start + range->len)) {
+ 			btrfs_put_block_group(cache);
+ 			break;
+@@ -11052,13 +11087,15 @@ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
+ 			if (!block_group_cache_done(cache)) {
+ 				ret = cache_block_group(cache, 0);
+ 				if (ret) {
+-					btrfs_put_block_group(cache);
+-					break;
++					bg_failed++;
++					bg_ret = ret;
++					continue;
+ 				}
+ 				ret = wait_block_group_cache_done(cache);
+ 				if (ret) {
+-					btrfs_put_block_group(cache);
+-					break;
++					bg_failed++;
++					bg_ret = ret;
++					continue;
+ 				}
+ 			}
+ 			ret = btrfs_trim_block_group(cache,
+@@ -11069,28 +11106,40 @@ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
+ 
+ 			trimmed += group_trimmed;
+ 			if (ret) {
+-				btrfs_put_block_group(cache);
+-				break;
++				bg_failed++;
++				bg_ret = ret;
++				continue;
+ 			}
+ 		}
+-
+-		cache = next_block_group(fs_info, cache);
+ 	}
+ 
++	if (bg_failed)
++		btrfs_warn(fs_info,
++			"failed to trim %llu block group(s), last error %d",
++			bg_failed, bg_ret);
+ 	mutex_lock(&fs_info->fs_devices->device_list_mutex);
+-	devices = &fs_info->fs_devices->alloc_list;
+-	list_for_each_entry(device, devices, dev_alloc_list) {
++	devices = &fs_info->fs_devices->devices;
++	list_for_each_entry(device, devices, dev_list) {
+ 		ret = btrfs_trim_free_extents(device, range->minlen,
+ 					      &group_trimmed);
+-		if (ret)
++		if (ret) {
++			dev_failed++;
++			dev_ret = ret;
+ 			break;
++		}
+ 
+ 		trimmed += group_trimmed;
+ 	}
+ 	mutex_unlock(&fs_info->fs_devices->device_list_mutex);
+ 
++	if (dev_failed)
++		btrfs_warn(fs_info,
++			"failed to trim %llu device(s), last error %d",
++			dev_failed, dev_ret);
+ 	range->len = trimmed;
+-	return ret;
++	if (bg_ret)
++		return bg_ret;
++	return dev_ret;
+ }
+ 
+ /*
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index 5690feded0de..57e25e83b81a 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -2078,6 +2078,14 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ 		goto out;
+ 
+ 	inode_lock(inode);
++
++	/*
++	 * We take the dio_sem here because the tree log stuff can race with
++	 * lockless dio writes and get an extent map logged for an extent we
++	 * never waited on.  We need it this high up for lockdep reasons.
++	 */
++	down_write(&BTRFS_I(inode)->dio_sem);
++
+ 	atomic_inc(&root->log_batch);
+ 	full_sync = test_bit(BTRFS_INODE_NEEDS_FULL_SYNC,
+ 			     &BTRFS_I(inode)->runtime_flags);
+@@ -2129,6 +2137,7 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ 		ret = start_ordered_ops(inode, start, end);
+ 	}
+ 	if (ret) {
++		up_write(&BTRFS_I(inode)->dio_sem);
+ 		inode_unlock(inode);
+ 		goto out;
+ 	}
+@@ -2184,6 +2193,7 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ 		 * checked called fsync.
+ 		 */
+ 		ret = filemap_check_wb_err(inode->i_mapping, file->f_wb_err);
++		up_write(&BTRFS_I(inode)->dio_sem);
+ 		inode_unlock(inode);
+ 		goto out;
+ 	}
+@@ -2208,6 +2218,7 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ 	trans = btrfs_start_transaction(root, 0);
+ 	if (IS_ERR(trans)) {
+ 		ret = PTR_ERR(trans);
++		up_write(&BTRFS_I(inode)->dio_sem);
+ 		inode_unlock(inode);
+ 		goto out;
+ 	}
+@@ -2229,6 +2240,7 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ 	 * file again, but that will end up using the synchronization
+ 	 * inside btrfs_sync_log to keep things safe.
+ 	 */
++	up_write(&BTRFS_I(inode)->dio_sem);
+ 	inode_unlock(inode);
+ 
+ 	/*
+diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
+index 4426d1c73e50..9f31b81a5e27 100644
+--- a/fs/btrfs/free-space-cache.c
++++ b/fs/btrfs/free-space-cache.c
+@@ -22,6 +22,7 @@
+ #include <linux/slab.h>
+ #include <linux/math64.h>
+ #include <linux/ratelimit.h>
++#include <linux/sched/mm.h>
+ #include "ctree.h"
+ #include "free-space-cache.h"
+ #include "transaction.h"
+@@ -59,6 +60,7 @@ static struct inode *__lookup_free_space_inode(struct btrfs_root *root,
+ 	struct btrfs_free_space_header *header;
+ 	struct extent_buffer *leaf;
+ 	struct inode *inode = NULL;
++	unsigned nofs_flag;
+ 	int ret;
+ 
+ 	key.objectid = BTRFS_FREE_SPACE_OBJECTID;
+@@ -80,7 +82,13 @@ static struct inode *__lookup_free_space_inode(struct btrfs_root *root,
+ 	btrfs_disk_key_to_cpu(&location, &disk_key);
+ 	btrfs_release_path(path);
+ 
++	/*
++	 * We are often under a trans handle at this point, so we need to make
++	 * sure NOFS is set to keep us from deadlocking.
++	 */
++	nofs_flag = memalloc_nofs_save();
+ 	inode = btrfs_iget(fs_info->sb, &location, root, NULL);
++	memalloc_nofs_restore(nofs_flag);
+ 	if (IS_ERR(inode))
+ 		return inode;
+ 	if (is_bad_inode(inode)) {
+@@ -1702,6 +1710,8 @@ static inline void __bitmap_clear_bits(struct btrfs_free_space_ctl *ctl,
+ 	bitmap_clear(info->bitmap, start, count);
+ 
+ 	info->bytes -= bytes;
++	if (info->max_extent_size > ctl->unit)
++		info->max_extent_size = 0;
+ }
+ 
+ static void bitmap_clear_bits(struct btrfs_free_space_ctl *ctl,
+@@ -1785,6 +1795,13 @@ static int search_bitmap(struct btrfs_free_space_ctl *ctl,
+ 	return -1;
+ }
+ 
++static inline u64 get_max_extent_size(struct btrfs_free_space *entry)
++{
++	if (entry->bitmap)
++		return entry->max_extent_size;
++	return entry->bytes;
++}
++
+ /* Cache the size of the max extent in bytes */
+ static struct btrfs_free_space *
+ find_free_space(struct btrfs_free_space_ctl *ctl, u64 *offset, u64 *bytes,
+@@ -1806,8 +1823,8 @@ find_free_space(struct btrfs_free_space_ctl *ctl, u64 *offset, u64 *bytes,
+ 	for (node = &entry->offset_index; node; node = rb_next(node)) {
+ 		entry = rb_entry(node, struct btrfs_free_space, offset_index);
+ 		if (entry->bytes < *bytes) {
+-			if (entry->bytes > *max_extent_size)
+-				*max_extent_size = entry->bytes;
++			*max_extent_size = max(get_max_extent_size(entry),
++					       *max_extent_size);
+ 			continue;
+ 		}
+ 
+@@ -1825,8 +1842,8 @@ find_free_space(struct btrfs_free_space_ctl *ctl, u64 *offset, u64 *bytes,
+ 		}
+ 
+ 		if (entry->bytes < *bytes + align_off) {
+-			if (entry->bytes > *max_extent_size)
+-				*max_extent_size = entry->bytes;
++			*max_extent_size = max(get_max_extent_size(entry),
++					       *max_extent_size);
+ 			continue;
+ 		}
+ 
+@@ -1838,8 +1855,10 @@ find_free_space(struct btrfs_free_space_ctl *ctl, u64 *offset, u64 *bytes,
+ 				*offset = tmp;
+ 				*bytes = size;
+ 				return entry;
+-			} else if (size > *max_extent_size) {
+-				*max_extent_size = size;
++			} else {
++				*max_extent_size =
++					max(get_max_extent_size(entry),
++					    *max_extent_size);
+ 			}
+ 			continue;
+ 		}
+@@ -2463,6 +2482,7 @@ void btrfs_dump_free_space(struct btrfs_block_group_cache *block_group,
+ 	struct rb_node *n;
+ 	int count = 0;
+ 
++	spin_lock(&ctl->tree_lock);
+ 	for (n = rb_first(&ctl->free_space_offset); n; n = rb_next(n)) {
+ 		info = rb_entry(n, struct btrfs_free_space, offset_index);
+ 		if (info->bytes >= bytes && !block_group->ro)
+@@ -2471,6 +2491,7 @@ void btrfs_dump_free_space(struct btrfs_block_group_cache *block_group,
+ 			   info->offset, info->bytes,
+ 		       (info->bitmap) ? "yes" : "no");
+ 	}
++	spin_unlock(&ctl->tree_lock);
+ 	btrfs_info(fs_info, "block group has cluster?: %s",
+ 	       list_empty(&block_group->cluster_list) ? "no" : "yes");
+ 	btrfs_info(fs_info,
+@@ -2699,8 +2720,8 @@ static u64 btrfs_alloc_from_bitmap(struct btrfs_block_group_cache *block_group,
+ 
+ 	err = search_bitmap(ctl, entry, &search_start, &search_bytes, true);
+ 	if (err) {
+-		if (search_bytes > *max_extent_size)
+-			*max_extent_size = search_bytes;
++		*max_extent_size = max(get_max_extent_size(entry),
++				       *max_extent_size);
+ 		return 0;
+ 	}
+ 
+@@ -2737,8 +2758,9 @@ u64 btrfs_alloc_from_cluster(struct btrfs_block_group_cache *block_group,
+ 
+ 	entry = rb_entry(node, struct btrfs_free_space, offset_index);
+ 	while (1) {
+-		if (entry->bytes < bytes && entry->bytes > *max_extent_size)
+-			*max_extent_size = entry->bytes;
++		if (entry->bytes < bytes)
++			*max_extent_size = max(get_max_extent_size(entry),
++					       *max_extent_size);
+ 
+ 		if (entry->bytes < bytes ||
+ 		    (!entry->bitmap && entry->offset < min_start)) {
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index e8bfafa25a71..90568a21fa77 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -524,6 +524,7 @@ again:
+ 		pages = kcalloc(nr_pages, sizeof(struct page *), GFP_NOFS);
+ 		if (!pages) {
+ 			/* just bail out to the uncompressed code */
++			nr_pages = 0;
+ 			goto cont;
+ 		}
+ 
+@@ -2965,6 +2966,7 @@ static int btrfs_finish_ordered_io(struct btrfs_ordered_extent *ordered_extent)
+ 	bool truncated = false;
+ 	bool range_locked = false;
+ 	bool clear_new_delalloc_bytes = false;
++	bool clear_reserved_extent = true;
+ 
+ 	if (!test_bit(BTRFS_ORDERED_NOCOW, &ordered_extent->flags) &&
+ 	    !test_bit(BTRFS_ORDERED_PREALLOC, &ordered_extent->flags) &&
+@@ -3068,10 +3070,12 @@ static int btrfs_finish_ordered_io(struct btrfs_ordered_extent *ordered_extent)
+ 						logical_len, logical_len,
+ 						compress_type, 0, 0,
+ 						BTRFS_FILE_EXTENT_REG);
+-		if (!ret)
++		if (!ret) {
++			clear_reserved_extent = false;
+ 			btrfs_release_delalloc_bytes(fs_info,
+ 						     ordered_extent->start,
+ 						     ordered_extent->disk_len);
++		}
+ 	}
+ 	unpin_extent_cache(&BTRFS_I(inode)->extent_tree,
+ 			   ordered_extent->file_offset, ordered_extent->len,
+@@ -3131,8 +3135,13 @@ out:
+ 		 * wrong we need to return the space for this ordered extent
+ 		 * back to the allocator.  We only free the extent in the
+ 		 * truncated case if we didn't write out the extent at all.
++		 *
++		 * If we made it past insert_reserved_file_extent before we
++		 * errored out then we don't need to do this as the accounting
++		 * has already been done.
+ 		 */
+ 		if ((ret || !logical_len) &&
++		    clear_reserved_extent &&
+ 		    !test_bit(BTRFS_ORDERED_NOCOW, &ordered_extent->flags) &&
+ 		    !test_bit(BTRFS_ORDERED_PREALLOC, &ordered_extent->flags))
+ 			btrfs_free_reserved_extent(fs_info,
+@@ -5326,11 +5335,13 @@ static void evict_inode_truncate_pages(struct inode *inode)
+ 		struct extent_state *cached_state = NULL;
+ 		u64 start;
+ 		u64 end;
++		unsigned state_flags;
+ 
+ 		node = rb_first(&io_tree->state);
+ 		state = rb_entry(node, struct extent_state, rb_node);
+ 		start = state->start;
+ 		end = state->end;
++		state_flags = state->state;
+ 		spin_unlock(&io_tree->lock);
+ 
+ 		lock_extent_bits(io_tree, start, end, &cached_state);
+@@ -5343,7 +5354,7 @@ static void evict_inode_truncate_pages(struct inode *inode)
+ 		 *
+ 		 * Note, end is the bytenr of last byte, so we need + 1 here.
+ 		 */
+-		if (state->state & EXTENT_DELALLOC)
++		if (state_flags & EXTENT_DELALLOC)
+ 			btrfs_qgroup_free_data(inode, NULL, start, end - start + 1);
+ 
+ 		clear_extent_bit(io_tree, start, end,
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index a507c0d25354..9333e4cda68d 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -352,7 +352,6 @@ static noinline int btrfs_ioctl_fitrim(struct file *file, void __user *arg)
+ 	struct fstrim_range range;
+ 	u64 minlen = ULLONG_MAX;
+ 	u64 num_devices = 0;
+-	u64 total_bytes = btrfs_super_total_bytes(fs_info->super_copy);
+ 	int ret;
+ 
+ 	if (!capable(CAP_SYS_ADMIN))
+@@ -376,11 +375,15 @@ static noinline int btrfs_ioctl_fitrim(struct file *file, void __user *arg)
+ 		return -EOPNOTSUPP;
+ 	if (copy_from_user(&range, arg, sizeof(range)))
+ 		return -EFAULT;
+-	if (range.start > total_bytes ||
+-	    range.len < fs_info->sb->s_blocksize)
++
++	/*
++	 * NOTE: Don't truncate the range using super->total_bytes.  Bytenr of
++	 * block group is in the logical address space, which can be any
++	 * sectorsize aligned bytenr in  the range [0, U64_MAX].
++	 */
++	if (range.len < fs_info->sb->s_blocksize)
+ 		return -EINVAL;
+ 
+-	range.len = min(range.len, total_bytes - range.start);
+ 	range.minlen = max(range.minlen, minlen);
+ 	ret = btrfs_trim_fs(fs_info, &range);
+ 	if (ret < 0)
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index 47dec283628d..d6d6e9593e39 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -2763,6 +2763,7 @@ qgroup_rescan_zero_tracking(struct btrfs_fs_info *fs_info)
+ 		qgroup->rfer_cmpr = 0;
+ 		qgroup->excl = 0;
+ 		qgroup->excl_cmpr = 0;
++		qgroup_dirty(fs_info, qgroup);
+ 	}
+ 	spin_unlock(&fs_info->qgroup_lock);
+ }
+@@ -2972,6 +2973,10 @@ static int __btrfs_qgroup_release_data(struct inode *inode,
+ 	int trace_op = QGROUP_RELEASE;
+ 	int ret;
+ 
++	if (!test_bit(BTRFS_FS_QUOTA_ENABLED,
++		      &BTRFS_I(inode)->root->fs_info->flags))
++		return 0;
++
+ 	/* In release case, we shouldn't have @reserved */
+ 	WARN_ON(!free && reserved);
+ 	if (free && reserved)
+diff --git a/fs/btrfs/qgroup.h b/fs/btrfs/qgroup.h
+index d9984e87cddf..83483ade3b19 100644
+--- a/fs/btrfs/qgroup.h
++++ b/fs/btrfs/qgroup.h
+@@ -232,6 +232,8 @@ void btrfs_qgroup_free_refroot(struct btrfs_fs_info *fs_info,
+ static inline void btrfs_qgroup_free_delayed_ref(struct btrfs_fs_info *fs_info,
+ 						 u64 ref_root, u64 num_bytes)
+ {
++	if (!test_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags))
++		return;
+ 	trace_btrfs_qgroup_free_delayed_ref(fs_info, ref_root, num_bytes);
+ 	btrfs_qgroup_free_refroot(fs_info, ref_root, num_bytes);
+ }
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index b80b03e0c5d3..eeae2c3ab17e 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -1334,7 +1334,7 @@ static void __del_reloc_root(struct btrfs_root *root)
+ 	struct mapping_node *node = NULL;
+ 	struct reloc_control *rc = fs_info->reloc_ctl;
+ 
+-	if (rc) {
++	if (rc && root->node) {
+ 		spin_lock(&rc->reloc_root_tree.lock);
+ 		rb_node = tree_search(&rc->reloc_root_tree.rb_root,
+ 				      root->node->start);
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index 27638b96079d..f74005ca8f08 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -2307,15 +2307,6 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans)
+ 
+ 	kmem_cache_free(btrfs_trans_handle_cachep, trans);
+ 
+-	/*
+-	 * If fs has been frozen, we can not handle delayed iputs, otherwise
+-	 * it'll result in deadlock about SB_FREEZE_FS.
+-	 */
+-	if (current != fs_info->transaction_kthread &&
+-	    current != fs_info->cleaner_kthread &&
+-	    !test_bit(BTRFS_FS_FROZEN, &fs_info->flags))
+-		btrfs_run_delayed_iputs(fs_info);
+-
+ 	return ret;
+ 
+ scrub_continue:
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index e1b4a59485df..2109db196449 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -273,6 +273,13 @@ struct walk_control {
+ 	/* what stage of the replay code we're currently in */
+ 	int stage;
+ 
++	/*
++	 * Ignore any items from the inode currently being processed. Needs
++	 * to be set every time we find a BTRFS_INODE_ITEM_KEY and we are in
++	 * the LOG_WALK_REPLAY_INODES stage.
++	 */
++	bool ignore_cur_inode;
++
+ 	/* the root we are currently replaying */
+ 	struct btrfs_root *replay_dest;
+ 
+@@ -2363,6 +2370,20 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
+ 
+ 			inode_item = btrfs_item_ptr(eb, i,
+ 					    struct btrfs_inode_item);
++			/*
++			 * If we have a tmpfile (O_TMPFILE) that got fsync'ed
++			 * and never got linked before the fsync, skip it, as
++			 * replaying it is pointless since it would be deleted
++			 * later. We skip logging tmpfiles, but it's always
++			 * possible we are replaying a log created with a kernel
++			 * that used to log tmpfiles.
++			 */
++			if (btrfs_inode_nlink(eb, inode_item) == 0) {
++				wc->ignore_cur_inode = true;
++				continue;
++			} else {
++				wc->ignore_cur_inode = false;
++			}
+ 			ret = replay_xattr_deletes(wc->trans, root, log,
+ 						   path, key.objectid);
+ 			if (ret)
+@@ -2400,16 +2421,8 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
+ 					     root->fs_info->sectorsize);
+ 				ret = btrfs_drop_extents(wc->trans, root, inode,
+ 							 from, (u64)-1, 1);
+-				/*
+-				 * If the nlink count is zero here, the iput
+-				 * will free the inode.  We bump it to make
+-				 * sure it doesn't get freed until the link
+-				 * count fixup is done.
+-				 */
+ 				if (!ret) {
+-					if (inode->i_nlink == 0)
+-						inc_nlink(inode);
+-					/* Update link count and nbytes. */
++					/* Update the inode's nbytes. */
+ 					ret = btrfs_update_inode(wc->trans,
+ 								 root, inode);
+ 				}
+@@ -2424,6 +2437,9 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb,
+ 				break;
+ 		}
+ 
++		if (wc->ignore_cur_inode)
++			continue;
++
+ 		if (key.type == BTRFS_DIR_INDEX_KEY &&
+ 		    wc->stage == LOG_WALK_REPLAY_DIR_INDEX) {
+ 			ret = replay_one_dir_item(wc->trans, root, path,
+@@ -3078,9 +3094,12 @@ static void free_log_tree(struct btrfs_trans_handle *trans,
+ 	};
+ 
+ 	ret = walk_log_tree(trans, log, &wc);
+-	/* I don't think this can happen but just in case */
+-	if (ret)
+-		btrfs_abort_transaction(trans, ret);
++	if (ret) {
++		if (trans)
++			btrfs_abort_transaction(trans, ret);
++		else
++			btrfs_handle_fs_error(log->fs_info, ret, NULL);
++	}
+ 
+ 	while (1) {
+ 		ret = find_first_extent_bit(&log->dirty_log_pages,
+@@ -3959,6 +3978,36 @@ fill_holes:
+ 			break;
+ 		*last_extent = extent_end;
+ 	}
++
++	/*
++	 * Check if there is a hole between the last extent found in our leaf
++	 * and the first extent in the next leaf. If there is one, we need to
++	 * log an explicit hole so that at replay time we can punch the hole.
++	 */
++	if (ret == 0 &&
++	    key.objectid == btrfs_ino(inode) &&
++	    key.type == BTRFS_EXTENT_DATA_KEY &&
++	    i == btrfs_header_nritems(src_path->nodes[0])) {
++		ret = btrfs_next_leaf(inode->root, src_path);
++		need_find_last_extent = true;
++		if (ret > 0) {
++			ret = 0;
++		} else if (ret == 0) {
++			btrfs_item_key_to_cpu(src_path->nodes[0], &key,
++					      src_path->slots[0]);
++			if (key.objectid == btrfs_ino(inode) &&
++			    key.type == BTRFS_EXTENT_DATA_KEY &&
++			    *last_extent < key.offset) {
++				const u64 len = key.offset - *last_extent;
++
++				ret = btrfs_insert_file_extent(trans, log,
++							       btrfs_ino(inode),
++							       *last_extent, 0,
++							       0, len, 0, len,
++							       0, 0, 0);
++			}
++		}
++	}
+ 	/*
+ 	 * Need to let the callers know we dropped the path so they should
+ 	 * re-search.
+@@ -4343,7 +4392,6 @@ static int btrfs_log_changed_extents(struct btrfs_trans_handle *trans,
+ 
+ 	INIT_LIST_HEAD(&extents);
+ 
+-	down_write(&inode->dio_sem);
+ 	write_lock(&tree->lock);
+ 	test_gen = root->fs_info->last_trans_committed;
+ 	logged_start = start;
+@@ -4424,7 +4472,6 @@ process:
+ 	}
+ 	WARN_ON(!list_empty(&extents));
+ 	write_unlock(&tree->lock);
+-	up_write(&inode->dio_sem);
+ 
+ 	btrfs_release_path(path);
+ 	if (!ret)
+@@ -4622,7 +4669,8 @@ static int btrfs_log_trailing_hole(struct btrfs_trans_handle *trans,
+ 			ASSERT(len == i_size ||
+ 			       (len == fs_info->sectorsize &&
+ 				btrfs_file_extent_compression(leaf, extent) !=
+-				BTRFS_COMPRESS_NONE));
++				BTRFS_COMPRESS_NONE) ||
++			       (len < i_size && i_size < fs_info->sectorsize));
+ 			return 0;
+ 		}
+ 
+@@ -5564,9 +5612,33 @@ static int btrfs_log_all_parents(struct btrfs_trans_handle *trans,
+ 
+ 			dir_inode = btrfs_iget(fs_info->sb, &inode_key,
+ 					       root, NULL);
+-			/* If parent inode was deleted, skip it. */
+-			if (IS_ERR(dir_inode))
+-				continue;
++			/*
++			 * If the parent inode was deleted, return an error to
++			 * fallback to a transaction commit. This is to prevent
++			 * getting an inode that was moved from one parent A to
++			 * a parent B, got its former parent A deleted and then
++			 * it got fsync'ed, from existing at both parents after
++			 * a log replay (and the old parent still existing).
++			 * Example:
++			 *
++			 * mkdir /mnt/A
++			 * mkdir /mnt/B
++			 * touch /mnt/B/bar
++			 * sync
++			 * mv /mnt/B/bar /mnt/A/bar
++			 * mv -T /mnt/A /mnt/B
++			 * fsync /mnt/B/bar
++			 * <power fail>
++			 *
++			 * If we ignore the old parent B which got deleted,
++			 * after a log replay we would have file bar linked
++			 * at both parents and the old parent B would still
++			 * exist.
++			 */
++			if (IS_ERR(dir_inode)) {
++				ret = PTR_ERR(dir_inode);
++				goto out;
++			}
+ 
+ 			if (ctx)
+ 				ctx->log_new_dentries = false;
+@@ -5641,7 +5713,13 @@ static int btrfs_log_inode_parent(struct btrfs_trans_handle *trans,
+ 	if (ret)
+ 		goto end_no_trans;
+ 
+-	if (btrfs_inode_in_log(inode, trans->transid)) {
++	/*
++	 * Skip already logged inodes or inodes corresponding to tmpfiles
++	 * (since logging them is pointless, a link count of 0 means they
++	 * will never be accessible).
++	 */
++	if (btrfs_inode_in_log(inode, trans->transid) ||
++	    inode->vfs_inode.i_nlink == 0) {
+ 		ret = BTRFS_NO_LOG_SYNC;
+ 		goto end_no_trans;
+ 	}
+diff --git a/fs/cifs/cifs_debug.c b/fs/cifs/cifs_debug.c
+index 2565cee702e4..106a715101f9 100644
+--- a/fs/cifs/cifs_debug.c
++++ b/fs/cifs/cifs_debug.c
+@@ -289,6 +289,9 @@ static ssize_t cifs_stats_proc_write(struct file *file,
+ 		atomic_set(&totBufAllocCount, 0);
+ 		atomic_set(&totSmBufAllocCount, 0);
+ #endif /* CONFIG_CIFS_STATS2 */
++		atomic_set(&tcpSesReconnectCount, 0);
++		atomic_set(&tconInfoReconnectCount, 0);
++
+ 		spin_lock(&GlobalMid_Lock);
+ 		GlobalMaxActiveXid = 0;
+ 		GlobalCurrentXid = 0;
+diff --git a/fs/cifs/cifs_spnego.c b/fs/cifs/cifs_spnego.c
+index b611fc2e8984..7f01c6e60791 100644
+--- a/fs/cifs/cifs_spnego.c
++++ b/fs/cifs/cifs_spnego.c
+@@ -147,8 +147,10 @@ cifs_get_spnego_key(struct cifs_ses *sesInfo)
+ 		sprintf(dp, ";sec=krb5");
+ 	else if (server->sec_mskerberos)
+ 		sprintf(dp, ";sec=mskrb5");
+-	else
+-		goto out;
++	else {
++		cifs_dbg(VFS, "unknown or missing server auth type, use krb5\n");
++		sprintf(dp, ";sec=krb5");
++	}
+ 
+ 	dp = description + strlen(description);
+ 	sprintf(dp, ";uid=0x%x",
+diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
+index d01cbca84701..a90a637ae79a 100644
+--- a/fs/cifs/inode.c
++++ b/fs/cifs/inode.c
+@@ -776,7 +776,15 @@ cifs_get_inode_info(struct inode **inode, const char *full_path,
+ 	} else if (rc == -EREMOTE) {
+ 		cifs_create_dfs_fattr(&fattr, sb);
+ 		rc = 0;
+-	} else if (rc == -EACCES && backup_cred(cifs_sb)) {
++	} else if ((rc == -EACCES) && backup_cred(cifs_sb) &&
++		   (strcmp(server->vals->version_string, SMB1_VERSION_STRING)
++		      == 0)) {
++			/*
++			 * For SMB2 and later the backup intent flag is already
++			 * sent if needed on open and there is no path based
++			 * FindFirst operation to use to retry with
++			 */
++
+ 			srchinf = kzalloc(sizeof(struct cifs_search_info),
+ 						GFP_KERNEL);
+ 			if (srchinf == NULL) {
+diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c
+index 7919967488cb..011c6f53dcda 100644
+--- a/fs/cramfs/inode.c
++++ b/fs/cramfs/inode.c
+@@ -186,7 +186,8 @@ static void *cramfs_read(struct super_block *sb, unsigned int offset, unsigned i
+ 			continue;
+ 		blk_offset = (blocknr - buffer_blocknr[i]) << PAGE_SHIFT;
+ 		blk_offset += offset;
+-		if (blk_offset + len > BUFFER_SIZE)
++		if (blk_offset > BUFFER_SIZE ||
++		    blk_offset + len > BUFFER_SIZE)
+ 			continue;
+ 		return read_buffers[i] + blk_offset;
+ 	}
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index c96778c39885..c0c6562b3c44 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -1421,7 +1421,8 @@ struct ext4_sb_info {
+ 	u32 s_min_batch_time;
+ 	struct block_device *journal_bdev;
+ #ifdef CONFIG_QUOTA
+-	char *s_qf_names[EXT4_MAXQUOTAS];	/* Names of quota files with journalled quota */
++	/* Names of quota files with journalled quota */
++	char __rcu *s_qf_names[EXT4_MAXQUOTAS];
+ 	int s_jquota_fmt;			/* Format of quota to use */
+ #endif
+ 	unsigned int s_want_extra_isize; /* New inodes should reserve # bytes */
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index 4e1d62ba0703..ac2e0516c16f 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -869,7 +869,7 @@ int ext4_da_write_inline_data_begin(struct address_space *mapping,
+ 	handle_t *handle;
+ 	struct page *page;
+ 	struct ext4_iloc iloc;
+-	int retries;
++	int retries = 0;
+ 
+ 	ret = ext4_get_inode_loc(inode, &iloc);
+ 	if (ret)
+diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
+index 1eb68e626931..b2a47058e04c 100644
+--- a/fs/ext4/ioctl.c
++++ b/fs/ext4/ioctl.c
+@@ -344,19 +344,14 @@ static int ext4_ioctl_setproject(struct file *filp, __u32 projid)
+ 	if (projid_eq(kprojid, EXT4_I(inode)->i_projid))
+ 		return 0;
+ 
+-	err = mnt_want_write_file(filp);
+-	if (err)
+-		return err;
+-
+ 	err = -EPERM;
+-	inode_lock(inode);
+ 	/* Is it quota file? Do not allow user to mess with it */
+ 	if (ext4_is_quota_file(inode))
+-		goto out_unlock;
++		return err;
+ 
+ 	err = ext4_get_inode_loc(inode, &iloc);
+ 	if (err)
+-		goto out_unlock;
++		return err;
+ 
+ 	raw_inode = ext4_raw_inode(&iloc);
+ 	if (!EXT4_FITS_IN_INODE(raw_inode, ei, i_projid)) {
+@@ -364,20 +359,20 @@ static int ext4_ioctl_setproject(struct file *filp, __u32 projid)
+ 					      EXT4_SB(sb)->s_want_extra_isize,
+ 					      &iloc);
+ 		if (err)
+-			goto out_unlock;
++			return err;
+ 	} else {
+ 		brelse(iloc.bh);
+ 	}
+ 
+-	dquot_initialize(inode);
++	err = dquot_initialize(inode);
++	if (err)
++		return err;
+ 
+ 	handle = ext4_journal_start(inode, EXT4_HT_QUOTA,
+ 		EXT4_QUOTA_INIT_BLOCKS(sb) +
+ 		EXT4_QUOTA_DEL_BLOCKS(sb) + 3);
+-	if (IS_ERR(handle)) {
+-		err = PTR_ERR(handle);
+-		goto out_unlock;
+-	}
++	if (IS_ERR(handle))
++		return PTR_ERR(handle);
+ 
+ 	err = ext4_reserve_inode_write(handle, inode, &iloc);
+ 	if (err)
+@@ -405,9 +400,6 @@ out_dirty:
+ 		err = rc;
+ out_stop:
+ 	ext4_journal_stop(handle);
+-out_unlock:
+-	inode_unlock(inode);
+-	mnt_drop_write_file(filp);
+ 	return err;
+ }
+ #else
+@@ -592,6 +584,30 @@ static int ext4_ioc_getfsmap(struct super_block *sb,
+ 	return 0;
+ }
+ 
++static int ext4_ioctl_check_project(struct inode *inode, struct fsxattr *fa)
++{
++	/*
++	 * Project Quota ID state is only allowed to change from within the init
++	 * namespace. Enforce that restriction only if we are trying to change
++	 * the quota ID state. Everything else is allowed in user namespaces.
++	 */
++	if (current_user_ns() == &init_user_ns)
++		return 0;
++
++	if (__kprojid_val(EXT4_I(inode)->i_projid) != fa->fsx_projid)
++		return -EINVAL;
++
++	if (ext4_test_inode_flag(inode, EXT4_INODE_PROJINHERIT)) {
++		if (!(fa->fsx_xflags & FS_XFLAG_PROJINHERIT))
++			return -EINVAL;
++	} else {
++		if (fa->fsx_xflags & FS_XFLAG_PROJINHERIT)
++			return -EINVAL;
++	}
++
++	return 0;
++}
++
+ long ext4_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+ {
+ 	struct inode *inode = file_inode(filp);
+@@ -1029,19 +1045,19 @@ resizefs_out:
+ 			return err;
+ 
+ 		inode_lock(inode);
++		err = ext4_ioctl_check_project(inode, &fa);
++		if (err)
++			goto out;
+ 		flags = (ei->i_flags & ~EXT4_FL_XFLAG_VISIBLE) |
+ 			 (flags & EXT4_FL_XFLAG_VISIBLE);
+ 		err = ext4_ioctl_setflags(inode, flags);
+-		inode_unlock(inode);
+-		mnt_drop_write_file(filp);
+ 		if (err)
+-			return err;
+-
++			goto out;
+ 		err = ext4_ioctl_setproject(filp, fa.fsx_projid);
+-		if (err)
+-			return err;
+-
+-		return 0;
++out:
++		inode_unlock(inode);
++		mnt_drop_write_file(filp);
++		return err;
+ 	}
+ 	case EXT4_IOC_SHUTDOWN:
+ 		return ext4_shutdown(sb, arg);
+diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c
+index 9bb36909ec92..cd8d481e0c48 100644
+--- a/fs/ext4/move_extent.c
++++ b/fs/ext4/move_extent.c
+@@ -526,9 +526,13 @@ mext_check_arguments(struct inode *orig_inode,
+ 			orig_inode->i_ino, donor_inode->i_ino);
+ 		return -EINVAL;
+ 	}
+-	if (orig_eof < orig_start + *len - 1)
++	if (orig_eof <= orig_start)
++		*len = 0;
++	else if (orig_eof < orig_start + *len - 1)
+ 		*len = orig_eof - orig_start;
+-	if (donor_eof < donor_start + *len - 1)
++	if (donor_eof <= donor_start)
++		*len = 0;
++	else if (donor_eof < donor_start + *len - 1)
+ 		*len = donor_eof - donor_start;
+ 	if (!*len) {
+ 		ext4_debug("ext4 move extent: len should not be 0 "
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 9dbd27f7b778..46ad267ef6d6 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -855,6 +855,18 @@ static inline void ext4_quota_off_umount(struct super_block *sb)
+ 	for (type = 0; type < EXT4_MAXQUOTAS; type++)
+ 		ext4_quota_off(sb, type);
+ }
++
++/*
++ * This is a helper function which is used in the mount/remount
++ * codepaths (which holds s_umount) to fetch the quota file name.
++ */
++static inline char *get_qf_name(struct super_block *sb,
++				struct ext4_sb_info *sbi,
++				int type)
++{
++	return rcu_dereference_protected(sbi->s_qf_names[type],
++					 lockdep_is_held(&sb->s_umount));
++}
+ #else
+ static inline void ext4_quota_off_umount(struct super_block *sb)
+ {
+@@ -907,7 +919,7 @@ static void ext4_put_super(struct super_block *sb)
+ 	percpu_free_rwsem(&sbi->s_journal_flag_rwsem);
+ #ifdef CONFIG_QUOTA
+ 	for (i = 0; i < EXT4_MAXQUOTAS; i++)
+-		kfree(sbi->s_qf_names[i]);
++		kfree(get_qf_name(sb, sbi, i));
+ #endif
+ 
+ 	/* Debugging code just in case the in-memory inode orphan list
+@@ -1473,11 +1485,10 @@ static const char deprecated_msg[] =
+ static int set_qf_name(struct super_block *sb, int qtype, substring_t *args)
+ {
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
+-	char *qname;
++	char *qname, *old_qname = get_qf_name(sb, sbi, qtype);
+ 	int ret = -1;
+ 
+-	if (sb_any_quota_loaded(sb) &&
+-		!sbi->s_qf_names[qtype]) {
++	if (sb_any_quota_loaded(sb) && !old_qname) {
+ 		ext4_msg(sb, KERN_ERR,
+ 			"Cannot change journaled "
+ 			"quota options when quota turned on");
+@@ -1494,8 +1505,8 @@ static int set_qf_name(struct super_block *sb, int qtype, substring_t *args)
+ 			"Not enough memory for storing quotafile name");
+ 		return -1;
+ 	}
+-	if (sbi->s_qf_names[qtype]) {
+-		if (strcmp(sbi->s_qf_names[qtype], qname) == 0)
++	if (old_qname) {
++		if (strcmp(old_qname, qname) == 0)
+ 			ret = 1;
+ 		else
+ 			ext4_msg(sb, KERN_ERR,
+@@ -1508,7 +1519,7 @@ static int set_qf_name(struct super_block *sb, int qtype, substring_t *args)
+ 			"quotafile must be on filesystem root");
+ 		goto errout;
+ 	}
+-	sbi->s_qf_names[qtype] = qname;
++	rcu_assign_pointer(sbi->s_qf_names[qtype], qname);
+ 	set_opt(sb, QUOTA);
+ 	return 1;
+ errout:
+@@ -1520,15 +1531,16 @@ static int clear_qf_name(struct super_block *sb, int qtype)
+ {
+ 
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
++	char *old_qname = get_qf_name(sb, sbi, qtype);
+ 
+-	if (sb_any_quota_loaded(sb) &&
+-		sbi->s_qf_names[qtype]) {
++	if (sb_any_quota_loaded(sb) && old_qname) {
+ 		ext4_msg(sb, KERN_ERR, "Cannot change journaled quota options"
+ 			" when quota turned on");
+ 		return -1;
+ 	}
+-	kfree(sbi->s_qf_names[qtype]);
+-	sbi->s_qf_names[qtype] = NULL;
++	rcu_assign_pointer(sbi->s_qf_names[qtype], NULL);
++	synchronize_rcu();
++	kfree(old_qname);
+ 	return 1;
+ }
+ #endif
+@@ -1901,7 +1913,7 @@ static int parse_options(char *options, struct super_block *sb,
+ 			 int is_remount)
+ {
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
+-	char *p;
++	char *p, __maybe_unused *usr_qf_name, __maybe_unused *grp_qf_name;
+ 	substring_t args[MAX_OPT_ARGS];
+ 	int token;
+ 
+@@ -1932,11 +1944,13 @@ static int parse_options(char *options, struct super_block *sb,
+ 			 "Cannot enable project quota enforcement.");
+ 		return 0;
+ 	}
+-	if (sbi->s_qf_names[USRQUOTA] || sbi->s_qf_names[GRPQUOTA]) {
+-		if (test_opt(sb, USRQUOTA) && sbi->s_qf_names[USRQUOTA])
++	usr_qf_name = get_qf_name(sb, sbi, USRQUOTA);
++	grp_qf_name = get_qf_name(sb, sbi, GRPQUOTA);
++	if (usr_qf_name || grp_qf_name) {
++		if (test_opt(sb, USRQUOTA) && usr_qf_name)
+ 			clear_opt(sb, USRQUOTA);
+ 
+-		if (test_opt(sb, GRPQUOTA) && sbi->s_qf_names[GRPQUOTA])
++		if (test_opt(sb, GRPQUOTA) && grp_qf_name)
+ 			clear_opt(sb, GRPQUOTA);
+ 
+ 		if (test_opt(sb, GRPQUOTA) || test_opt(sb, USRQUOTA)) {
+@@ -1970,6 +1984,7 @@ static inline void ext4_show_quota_options(struct seq_file *seq,
+ {
+ #if defined(CONFIG_QUOTA)
+ 	struct ext4_sb_info *sbi = EXT4_SB(sb);
++	char *usr_qf_name, *grp_qf_name;
+ 
+ 	if (sbi->s_jquota_fmt) {
+ 		char *fmtname = "";
+@@ -1988,11 +2003,14 @@ static inline void ext4_show_quota_options(struct seq_file *seq,
+ 		seq_printf(seq, ",jqfmt=%s", fmtname);
+ 	}
+ 
+-	if (sbi->s_qf_names[USRQUOTA])
+-		seq_show_option(seq, "usrjquota", sbi->s_qf_names[USRQUOTA]);
+-
+-	if (sbi->s_qf_names[GRPQUOTA])
+-		seq_show_option(seq, "grpjquota", sbi->s_qf_names[GRPQUOTA]);
++	rcu_read_lock();
++	usr_qf_name = rcu_dereference(sbi->s_qf_names[USRQUOTA]);
++	grp_qf_name = rcu_dereference(sbi->s_qf_names[GRPQUOTA]);
++	if (usr_qf_name)
++		seq_show_option(seq, "usrjquota", usr_qf_name);
++	if (grp_qf_name)
++		seq_show_option(seq, "grpjquota", grp_qf_name);
++	rcu_read_unlock();
+ #endif
+ }
+ 
+@@ -5038,6 +5056,7 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ 	int err = 0;
+ #ifdef CONFIG_QUOTA
+ 	int i, j;
++	char *to_free[EXT4_MAXQUOTAS];
+ #endif
+ 	char *orig_data = kstrdup(data, GFP_KERNEL);
+ 
+@@ -5054,8 +5073,9 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ 	old_opts.s_jquota_fmt = sbi->s_jquota_fmt;
+ 	for (i = 0; i < EXT4_MAXQUOTAS; i++)
+ 		if (sbi->s_qf_names[i]) {
+-			old_opts.s_qf_names[i] = kstrdup(sbi->s_qf_names[i],
+-							 GFP_KERNEL);
++			char *qf_name = get_qf_name(sb, sbi, i);
++
++			old_opts.s_qf_names[i] = kstrdup(qf_name, GFP_KERNEL);
+ 			if (!old_opts.s_qf_names[i]) {
+ 				for (j = 0; j < i; j++)
+ 					kfree(old_opts.s_qf_names[j]);
+@@ -5277,9 +5297,12 @@ restore_opts:
+ #ifdef CONFIG_QUOTA
+ 	sbi->s_jquota_fmt = old_opts.s_jquota_fmt;
+ 	for (i = 0; i < EXT4_MAXQUOTAS; i++) {
+-		kfree(sbi->s_qf_names[i]);
+-		sbi->s_qf_names[i] = old_opts.s_qf_names[i];
++		to_free[i] = get_qf_name(sb, sbi, i);
++		rcu_assign_pointer(sbi->s_qf_names[i], old_opts.s_qf_names[i]);
+ 	}
++	synchronize_rcu();
++	for (i = 0; i < EXT4_MAXQUOTAS; i++)
++		kfree(to_free[i]);
+ #endif
+ 	kfree(orig_data);
+ 	return err;
+@@ -5469,7 +5492,7 @@ static int ext4_write_info(struct super_block *sb, int type)
+  */
+ static int ext4_quota_on_mount(struct super_block *sb, int type)
+ {
+-	return dquot_quota_on_mount(sb, EXT4_SB(sb)->s_qf_names[type],
++	return dquot_quota_on_mount(sb, get_qf_name(sb, EXT4_SB(sb), type),
+ 					EXT4_SB(sb)->s_jquota_fmt, type);
+ }
+ 
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index e10bd73f0723..6fbb6d75318a 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -381,10 +381,10 @@ int f2fs_submit_page_bio(struct f2fs_io_info *fio)
+ 	}
+ 	bio_set_op_attrs(bio, fio->op, fio->op_flags);
+ 
+-	__submit_bio(fio->sbi, bio, fio->type);
+-
+ 	if (!is_read_io(fio->op))
+ 		inc_page_count(fio->sbi, WB_DATA_TYPE(fio->page));
++
++	__submit_bio(fio->sbi, bio, fio->type);
+ 	return 0;
+ }
+ 
+@@ -2190,10 +2190,6 @@ static int f2fs_set_data_page_dirty(struct page *page)
+ 	if (!PageUptodate(page))
+ 		SetPageUptodate(page);
+ 
+-	/* don't remain PG_checked flag which was set during GC */
+-	if (is_cold_data(page))
+-		clear_cold_data(page);
+-
+ 	if (f2fs_is_atomic_file(inode) && !f2fs_is_commit_atomic_write(inode)) {
+ 		if (!IS_ATOMIC_WRITTEN_PAGE(page)) {
+ 			register_inmem_page(inode, page);
+diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c
+index 9626758bc762..765fadf954af 100644
+--- a/fs/f2fs/recovery.c
++++ b/fs/f2fs/recovery.c
+@@ -210,6 +210,7 @@ static void recover_inode(struct inode *inode, struct page *page)
+ 	inode->i_mtime.tv_nsec = le32_to_cpu(raw->i_mtime_nsec);
+ 
+ 	F2FS_I(inode)->i_advise = raw->i_advise;
++	F2FS_I(inode)->i_flags = le32_to_cpu(raw->i_flags);
+ 
+ 	if (file_enc_name(inode))
+ 		name = "<encrypted>";
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index eae35909fa51..7cda685296b2 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -1488,7 +1488,9 @@ static int f2fs_quota_off(struct super_block *sb, int type)
+ 	if (!inode || !igrab(inode))
+ 		return dquot_quota_off(sb, type);
+ 
+-	f2fs_quota_sync(sb, type);
++	err = f2fs_quota_sync(sb, type);
++	if (err)
++		goto out_put;
+ 
+ 	err = dquot_quota_off(sb, type);
+ 	if (err)
+@@ -1507,9 +1509,20 @@ out_put:
+ void f2fs_quota_off_umount(struct super_block *sb)
+ {
+ 	int type;
++	int err;
+ 
+-	for (type = 0; type < MAXQUOTAS; type++)
+-		f2fs_quota_off(sb, type);
++	for (type = 0; type < MAXQUOTAS; type++) {
++		err = f2fs_quota_off(sb, type);
++		if (err) {
++			int ret = dquot_quota_off(sb, type);
++
++			f2fs_msg(sb, KERN_ERR,
++				"Fail to turn off disk quota "
++				"(type: %d, err: %d, ret:%d), Please "
++				"run fsck to fix it.", type, err, ret);
++			set_sbi_flag(F2FS_SB(sb), SBI_NEED_FSCK);
++		}
++	}
+ }
+ 
+ int f2fs_get_projid(struct inode *inode, kprojid_t *projid)
+diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
+index a3711f543405..28d6c65c8bb3 100644
+--- a/fs/gfs2/ops_fstype.c
++++ b/fs/gfs2/ops_fstype.c
+@@ -1352,6 +1352,9 @@ static struct dentry *gfs2_mount_meta(struct file_system_type *fs_type,
+ 	struct path path;
+ 	int error;
+ 
++	if (!dev_name || !*dev_name)
++		return ERR_PTR(-EINVAL);
++
+ 	error = kern_path(dev_name, LOOKUP_FOLLOW, &path);
+ 	if (error) {
+ 		pr_warn("path_lookup on %s returned error %d\n",
+diff --git a/fs/jbd2/checkpoint.c b/fs/jbd2/checkpoint.c
+index 4055f51617ef..fe4fe155b7fb 100644
+--- a/fs/jbd2/checkpoint.c
++++ b/fs/jbd2/checkpoint.c
+@@ -254,8 +254,8 @@ restart:
+ 		bh = jh2bh(jh);
+ 
+ 		if (buffer_locked(bh)) {
+-			spin_unlock(&journal->j_list_lock);
+ 			get_bh(bh);
++			spin_unlock(&journal->j_list_lock);
+ 			wait_on_buffer(bh);
+ 			/* the journal_head may have gone by now */
+ 			BUFFER_TRACE(bh, "brelse");
+@@ -336,8 +336,8 @@ restart2:
+ 		jh = transaction->t_checkpoint_io_list;
+ 		bh = jh2bh(jh);
+ 		if (buffer_locked(bh)) {
+-			spin_unlock(&journal->j_list_lock);
+ 			get_bh(bh);
++			spin_unlock(&journal->j_list_lock);
+ 			wait_on_buffer(bh);
+ 			/* the journal_head may have gone by now */
+ 			BUFFER_TRACE(bh, "brelse");
+diff --git a/fs/jffs2/super.c b/fs/jffs2/super.c
+index 33e01de576d2..bc00cc385b77 100644
+--- a/fs/jffs2/super.c
++++ b/fs/jffs2/super.c
+@@ -285,10 +285,8 @@ static int jffs2_fill_super(struct super_block *sb, void *data, int silent)
+ 	sb->s_fs_info = c;
+ 
+ 	ret = jffs2_parse_options(c, data);
+-	if (ret) {
+-		kfree(c);
++	if (ret)
+ 		return -EINVAL;
+-	}
+ 
+ 	/* Initialize JFFS2 superblock locks, the further initialization will
+ 	 * be done later */
+diff --git a/fs/lockd/host.c b/fs/lockd/host.c
+index 0d4e590e0549..c4504ed9f680 100644
+--- a/fs/lockd/host.c
++++ b/fs/lockd/host.c
+@@ -341,7 +341,7 @@ struct nlm_host *nlmsvc_lookup_host(const struct svc_rqst *rqstp,
+ 	};
+ 	struct lockd_net *ln = net_generic(net, lockd_net_id);
+ 
+-	dprintk("lockd: %s(host='%*s', vers=%u, proto=%s)\n", __func__,
++	dprintk("lockd: %s(host='%.*s', vers=%u, proto=%s)\n", __func__,
+ 			(int)hostname_len, hostname, rqstp->rq_vers,
+ 			(rqstp->rq_prot == IPPROTO_UDP ? "udp" : "tcp"));
+ 
+diff --git a/fs/nfs/nfs4client.c b/fs/nfs/nfs4client.c
+index fb85d04fdc4c..fed9c8005c17 100644
+--- a/fs/nfs/nfs4client.c
++++ b/fs/nfs/nfs4client.c
+@@ -925,10 +925,10 @@ EXPORT_SYMBOL_GPL(nfs4_set_ds_client);
+ 
+ /*
+  * Session has been established, and the client marked ready.
+- * Set the mount rsize and wsize with negotiated fore channel
+- * attributes which will be bound checked in nfs_server_set_fsinfo.
++ * Limit the mount rsize, wsize and dtsize using negotiated fore
++ * channel attributes.
+  */
+-static void nfs4_session_set_rwsize(struct nfs_server *server)
++static void nfs4_session_limit_rwsize(struct nfs_server *server)
+ {
+ #ifdef CONFIG_NFS_V4_1
+ 	struct nfs4_session *sess;
+@@ -941,9 +941,11 @@ static void nfs4_session_set_rwsize(struct nfs_server *server)
+ 	server_resp_sz = sess->fc_attrs.max_resp_sz - nfs41_maxread_overhead;
+ 	server_rqst_sz = sess->fc_attrs.max_rqst_sz - nfs41_maxwrite_overhead;
+ 
+-	if (!server->rsize || server->rsize > server_resp_sz)
++	if (server->dtsize > server_resp_sz)
++		server->dtsize = server_resp_sz;
++	if (server->rsize > server_resp_sz)
+ 		server->rsize = server_resp_sz;
+-	if (!server->wsize || server->wsize > server_rqst_sz)
++	if (server->wsize > server_rqst_sz)
+ 		server->wsize = server_rqst_sz;
+ #endif /* CONFIG_NFS_V4_1 */
+ }
+@@ -990,12 +992,12 @@ static int nfs4_server_common_setup(struct nfs_server *server,
+ 			(unsigned long long) server->fsid.minor);
+ 	nfs_display_fhandle(mntfh, "Pseudo-fs root FH");
+ 
+-	nfs4_session_set_rwsize(server);
+-
+ 	error = nfs_probe_fsinfo(server, mntfh, fattr);
+ 	if (error < 0)
+ 		goto out;
+ 
++	nfs4_session_limit_rwsize(server);
++
+ 	if (server->namelen == 0 || server->namelen > NFS4_MAXNAMLEN)
+ 		server->namelen = NFS4_MAXNAMLEN;
+ 
+diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c
+index d0543e19098a..37f20d7a26ed 100644
+--- a/fs/nfs/pagelist.c
++++ b/fs/nfs/pagelist.c
+@@ -1110,6 +1110,20 @@ static int nfs_pageio_add_request_mirror(struct nfs_pageio_descriptor *desc,
+ 	return ret;
+ }
+ 
++static void nfs_pageio_error_cleanup(struct nfs_pageio_descriptor *desc)
++{
++	u32 midx;
++	struct nfs_pgio_mirror *mirror;
++
++	if (!desc->pg_error)
++		return;
++
++	for (midx = 0; midx < desc->pg_mirror_count; midx++) {
++		mirror = &desc->pg_mirrors[midx];
++		desc->pg_completion_ops->error_cleanup(&mirror->pg_list);
++	}
++}
++
+ int nfs_pageio_add_request(struct nfs_pageio_descriptor *desc,
+ 			   struct nfs_page *req)
+ {
+@@ -1160,25 +1174,11 @@ int nfs_pageio_add_request(struct nfs_pageio_descriptor *desc,
+ 	return 1;
+ 
+ out_failed:
+-	/*
+-	 * We might have failed before sending any reqs over wire.
+-	 * Clean up rest of the reqs in mirror pg_list.
+-	 */
+-	if (desc->pg_error) {
+-		struct nfs_pgio_mirror *mirror;
+-		void (*func)(struct list_head *);
+-
+-		/* remember fatal errors */
+-		if (nfs_error_is_fatal(desc->pg_error))
+-			nfs_context_set_write_error(req->wb_context,
+-						    desc->pg_error);
+-
+-		func = desc->pg_completion_ops->error_cleanup;
+-		for (midx = 0; midx < desc->pg_mirror_count; midx++) {
+-			mirror = &desc->pg_mirrors[midx];
+-			func(&mirror->pg_list);
+-		}
+-	}
++	/* remember fatal errors */
++	if (nfs_error_is_fatal(desc->pg_error))
++		nfs_context_set_write_error(req->wb_context,
++						desc->pg_error);
++	nfs_pageio_error_cleanup(desc);
+ 	return 0;
+ }
+ 
+@@ -1250,6 +1250,8 @@ void nfs_pageio_complete(struct nfs_pageio_descriptor *desc)
+ 	for (midx = 0; midx < desc->pg_mirror_count; midx++)
+ 		nfs_pageio_complete_mirror(desc, midx);
+ 
++	if (desc->pg_error < 0)
++		nfs_pageio_error_cleanup(desc);
+ 	if (desc->pg_ops->pg_cleanup)
+ 		desc->pg_ops->pg_cleanup(desc);
+ 	nfs_pageio_cleanup_mirroring(desc);
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index 519522d39bde..2b47757c9c68 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -768,6 +768,8 @@ static int show_smap(struct seq_file *m, void *v, int is_pid)
+ 	smaps_walk.private = mss;
+ 
+ #ifdef CONFIG_SHMEM
++	/* In case of smaps_rollup, reset the value from previous vma */
++	mss->check_shmem_swap = false;
+ 	if (vma->vm_file && shmem_mapping(vma->vm_file->f_mapping)) {
+ 		/*
+ 		 * For shared or readonly shmem mappings we know that all
+@@ -783,7 +785,7 @@ static int show_smap(struct seq_file *m, void *v, int is_pid)
+ 
+ 		if (!shmem_swapped || (vma->vm_flags & VM_SHARED) ||
+ 					!(vma->vm_flags & VM_WRITE)) {
+-			mss->swap = shmem_swapped;
++			mss->swap += shmem_swapped;
+ 		} else {
+ 			mss->check_shmem_swap = true;
+ 			smaps_walk.pte_hole = smaps_pte_hole;
+diff --git a/include/linux/compat.h b/include/linux/compat.h
+index 3e838a828459..23909d12f729 100644
+--- a/include/linux/compat.h
++++ b/include/linux/compat.h
+@@ -68,6 +68,9 @@ typedef struct compat_sigaltstack {
+ 	compat_size_t			ss_size;
+ } compat_stack_t;
+ #endif
++#ifndef COMPAT_MINSIGSTKSZ
++#define COMPAT_MINSIGSTKSZ	MINSIGSTKSZ
++#endif
+ 
+ #define compat_jiffies_to_clock_t(x)	\
+ 		(((unsigned long)(x) * COMPAT_USER_HZ) / HZ)
+diff --git a/include/linux/signal.h b/include/linux/signal.h
+index 042968dd98f0..843bd62b1ead 100644
+--- a/include/linux/signal.h
++++ b/include/linux/signal.h
+@@ -34,7 +34,7 @@ enum siginfo_layout {
+ #endif
+ };
+ 
+-enum siginfo_layout siginfo_layout(int sig, int si_code);
++enum siginfo_layout siginfo_layout(unsigned sig, int si_code);
+ 
+ /*
+  * Define some primitives to manipulate sigset_t.
+diff --git a/include/linux/tc.h b/include/linux/tc.h
+index f92511e57cdb..a60639f37963 100644
+--- a/include/linux/tc.h
++++ b/include/linux/tc.h
+@@ -84,6 +84,7 @@ struct tc_dev {
+ 					   device. */
+ 	struct device	dev;		/* Generic device interface. */
+ 	struct resource	resource;	/* Address space of this device. */
++	u64		dma_mask;	/* DMA addressable range. */
+ 	char		vendor[9];
+ 	char		name[9];
+ 	char		firmware[9];
+diff --git a/include/uapi/linux/ndctl.h b/include/uapi/linux/ndctl.h
+index 3f03567631cb..145f242c7c90 100644
+--- a/include/uapi/linux/ndctl.h
++++ b/include/uapi/linux/ndctl.h
+@@ -176,37 +176,31 @@ enum {
+ 
+ static inline const char *nvdimm_bus_cmd_name(unsigned cmd)
+ {
+-	static const char * const names[] = {
+-		[ND_CMD_ARS_CAP] = "ars_cap",
+-		[ND_CMD_ARS_START] = "ars_start",
+-		[ND_CMD_ARS_STATUS] = "ars_status",
+-		[ND_CMD_CLEAR_ERROR] = "clear_error",
+-		[ND_CMD_CALL] = "cmd_call",
+-	};
+-
+-	if (cmd < ARRAY_SIZE(names) && names[cmd])
+-		return names[cmd];
+-	return "unknown";
++	switch (cmd) {
++	case ND_CMD_ARS_CAP:		return "ars_cap";
++	case ND_CMD_ARS_START:		return "ars_start";
++	case ND_CMD_ARS_STATUS:		return "ars_status";
++	case ND_CMD_CLEAR_ERROR:	return "clear_error";
++	case ND_CMD_CALL:		return "cmd_call";
++	default:			return "unknown";
++	}
+ }
+ 
+ static inline const char *nvdimm_cmd_name(unsigned cmd)
+ {
+-	static const char * const names[] = {
+-		[ND_CMD_SMART] = "smart",
+-		[ND_CMD_SMART_THRESHOLD] = "smart_thresh",
+-		[ND_CMD_DIMM_FLAGS] = "flags",
+-		[ND_CMD_GET_CONFIG_SIZE] = "get_size",
+-		[ND_CMD_GET_CONFIG_DATA] = "get_data",
+-		[ND_CMD_SET_CONFIG_DATA] = "set_data",
+-		[ND_CMD_VENDOR_EFFECT_LOG_SIZE] = "effect_size",
+-		[ND_CMD_VENDOR_EFFECT_LOG] = "effect_log",
+-		[ND_CMD_VENDOR] = "vendor",
+-		[ND_CMD_CALL] = "cmd_call",
+-	};
+-
+-	if (cmd < ARRAY_SIZE(names) && names[cmd])
+-		return names[cmd];
+-	return "unknown";
++	switch (cmd) {
++	case ND_CMD_SMART:			return "smart";
++	case ND_CMD_SMART_THRESHOLD:		return "smart_thresh";
++	case ND_CMD_DIMM_FLAGS:			return "flags";
++	case ND_CMD_GET_CONFIG_SIZE:		return "get_size";
++	case ND_CMD_GET_CONFIG_DATA:		return "get_data";
++	case ND_CMD_SET_CONFIG_DATA:		return "set_data";
++	case ND_CMD_VENDOR_EFFECT_LOG_SIZE:	return "effect_size";
++	case ND_CMD_VENDOR_EFFECT_LOG:		return "effect_log";
++	case ND_CMD_VENDOR:			return "vendor";
++	case ND_CMD_CALL:			return "cmd_call";
++	default:				return "unknown";
++	}
+ }
+ 
+ #define ND_IOCTL 'N'
+diff --git a/kernel/bounds.c b/kernel/bounds.c
+index c373e887c066..9795d75b09b2 100644
+--- a/kernel/bounds.c
++++ b/kernel/bounds.c
+@@ -13,7 +13,7 @@
+ #include <linux/log2.h>
+ #include <linux/spinlock_types.h>
+ 
+-void foo(void)
++int main(void)
+ {
+ 	/* The enum constants to put into include/generated/bounds.h */
+ 	DEFINE(NR_PAGEFLAGS, __NR_PAGEFLAGS);
+@@ -23,4 +23,6 @@ void foo(void)
+ #endif
+ 	DEFINE(SPINLOCK_SIZE, sizeof(spinlock_t));
+ 	/* End of constants */
++
++	return 0;
+ }
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index ea22d0b6a9f0..5c9deed4524e 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -519,6 +519,17 @@ err_put:
+ 	return err;
+ }
+ 
++static void maybe_wait_bpf_programs(struct bpf_map *map)
++{
++	/* Wait for any running BPF programs to complete so that
++	 * userspace, when we return to it, knows that all programs
++	 * that could be running use the new map value.
++	 */
++	if (map->map_type == BPF_MAP_TYPE_HASH_OF_MAPS ||
++	    map->map_type == BPF_MAP_TYPE_ARRAY_OF_MAPS)
++		synchronize_rcu();
++}
++
+ #define BPF_MAP_UPDATE_ELEM_LAST_FIELD flags
+ 
+ static int map_update_elem(union bpf_attr *attr)
+@@ -592,6 +603,7 @@ static int map_update_elem(union bpf_attr *attr)
+ 	}
+ 	__this_cpu_dec(bpf_prog_active);
+ 	preempt_enable();
++	maybe_wait_bpf_programs(map);
+ 
+ 	if (!err)
+ 		trace_bpf_map_update_elem(map, ufd, key, value);
+@@ -636,6 +648,7 @@ static int map_delete_elem(union bpf_attr *attr)
+ 	rcu_read_unlock();
+ 	__this_cpu_dec(bpf_prog_active);
+ 	preempt_enable();
++	maybe_wait_bpf_programs(map);
+ 
+ 	if (!err)
+ 		trace_bpf_map_delete_elem(map, ufd, key);
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index f3f389e33343..90cf6a04e08a 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -2045,6 +2045,12 @@ static void cpuhp_online_cpu_device(unsigned int cpu)
+ 	kobject_uevent(&dev->kobj, KOBJ_ONLINE);
+ }
+ 
++/*
++ * Architectures that need SMT-specific errata handling during SMT hotplug
++ * should override this.
++ */
++void __weak arch_smt_update(void) { };
++
+ static int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
+ {
+ 	int cpu, ret = 0;
+@@ -2071,8 +2077,10 @@ static int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
+ 		 */
+ 		cpuhp_offline_cpu_device(cpu);
+ 	}
+-	if (!ret)
++	if (!ret) {
+ 		cpu_smt_control = ctrlval;
++		arch_smt_update();
++	}
+ 	cpu_maps_update_done();
+ 	return ret;
+ }
+@@ -2083,6 +2091,7 @@ static int cpuhp_smt_enable(void)
+ 
+ 	cpu_maps_update_begin();
+ 	cpu_smt_control = CPU_SMT_ENABLED;
++	arch_smt_update();
+ 	for_each_present_cpu(cpu) {
+ 		/* Skip online CPUs and CPUs on offline nodes */
+ 		if (cpu_online(cpu) || !node_online(cpu_to_node(cpu)))
+diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
+index 069311541577..4cd85870f00e 100644
+--- a/kernel/irq/manage.c
++++ b/kernel/irq/manage.c
+@@ -882,6 +882,9 @@ irq_forced_thread_fn(struct irq_desc *desc, struct irqaction *action)
+ 
+ 	local_bh_disable();
+ 	ret = action->thread_fn(action->irq, action->dev_id);
++	if (ret == IRQ_HANDLED)
++		atomic_inc(&desc->threads_handled);
++
+ 	irq_finalize_oneshot(desc, action);
+ 	local_bh_enable();
+ 	return ret;
+@@ -898,6 +901,9 @@ static irqreturn_t irq_thread_fn(struct irq_desc *desc,
+ 	irqreturn_t ret;
+ 
+ 	ret = action->thread_fn(action->irq, action->dev_id);
++	if (ret == IRQ_HANDLED)
++		atomic_inc(&desc->threads_handled);
++
+ 	irq_finalize_oneshot(desc, action);
+ 	return ret;
+ }
+@@ -975,8 +981,6 @@ static int irq_thread(void *data)
+ 		irq_thread_check_affinity(desc, action);
+ 
+ 		action_ret = handler_fn(desc, action);
+-		if (action_ret == IRQ_HANDLED)
+-			atomic_inc(&desc->threads_handled);
+ 		if (action_ret == IRQ_WAKE_THREAD)
+ 			irq_wake_secondary(desc, action);
+ 
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index 5c90765d37e7..5cbad4fb9107 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -700,9 +700,10 @@ static void unoptimize_kprobe(struct kprobe *p, bool force)
+ }
+ 
+ /* Cancel unoptimizing for reusing */
+-static void reuse_unused_kprobe(struct kprobe *ap)
++static int reuse_unused_kprobe(struct kprobe *ap)
+ {
+ 	struct optimized_kprobe *op;
++	int ret;
+ 
+ 	BUG_ON(!kprobe_unused(ap));
+ 	/*
+@@ -716,8 +717,12 @@ static void reuse_unused_kprobe(struct kprobe *ap)
+ 	/* Enable the probe again */
+ 	ap->flags &= ~KPROBE_FLAG_DISABLED;
+ 	/* Optimize it again (remove from op->list) */
+-	BUG_ON(!kprobe_optready(ap));
++	ret = kprobe_optready(ap);
++	if (ret)
++		return ret;
++
+ 	optimize_kprobe(ap);
++	return 0;
+ }
+ 
+ /* Remove optimized instructions */
+@@ -942,11 +947,16 @@ static void __disarm_kprobe(struct kprobe *p, bool reopt)
+ #define kprobe_disarmed(p)			kprobe_disabled(p)
+ #define wait_for_kprobe_optimizer()		do {} while (0)
+ 
+-/* There should be no unused kprobes can be reused without optimization */
+-static void reuse_unused_kprobe(struct kprobe *ap)
++static int reuse_unused_kprobe(struct kprobe *ap)
+ {
++	/*
++	 * If the optimized kprobe is NOT supported, the aggr kprobe is
++	 * released at the same time that the last aggregated kprobe is
++	 * unregistered.
++	 * Thus there should be no chance to reuse unused kprobe.
++	 */
+ 	printk(KERN_ERR "Error: There should be no unused kprobe here.\n");
+-	BUG_ON(kprobe_unused(ap));
++	return -EINVAL;
+ }
+ 
+ static void free_aggr_kprobe(struct kprobe *p)
+@@ -1320,9 +1330,12 @@ static int register_aggr_kprobe(struct kprobe *orig_p, struct kprobe *p)
+ 			goto out;
+ 		}
+ 		init_aggr_kprobe(ap, orig_p);
+-	} else if (kprobe_unused(ap))
++	} else if (kprobe_unused(ap)) {
+ 		/* This probe is going to die. Rescue it */
+-		reuse_unused_kprobe(ap);
++		ret = reuse_unused_kprobe(ap);
++		if (ret)
++			goto out;
++	}
+ 
+ 	if (kprobe_gone(ap)) {
+ 		/*
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index d7c155048ea9..bf694c709b96 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -4215,7 +4215,7 @@ void lock_contended(struct lockdep_map *lock, unsigned long ip)
+ {
+ 	unsigned long flags;
+ 
+-	if (unlikely(!lock_stat))
++	if (unlikely(!lock_stat || !debug_locks))
+ 		return;
+ 
+ 	if (unlikely(current->lockdep_recursion))
+@@ -4235,7 +4235,7 @@ void lock_acquired(struct lockdep_map *lock, unsigned long ip)
+ {
+ 	unsigned long flags;
+ 
+-	if (unlikely(!lock_stat))
++	if (unlikely(!lock_stat || !debug_locks))
+ 		return;
+ 
+ 	if (unlikely(current->lockdep_recursion))
+diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
+index f0223a7d9ed1..7161312593dd 100644
+--- a/kernel/printk/printk.c
++++ b/kernel/printk/printk.c
+@@ -1043,7 +1043,12 @@ static void __init log_buf_len_update(unsigned size)
+ /* save requested log_buf_len since it's too early to process it */
+ static int __init log_buf_len_setup(char *str)
+ {
+-	unsigned size = memparse(str, &str);
++	unsigned int size;
++
++	if (!str)
++		return -EINVAL;
++
++	size = memparse(str, &str);
+ 
+ 	log_buf_len_update(size);
+ 
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 19bfa21f7197..2d4d79420e36 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -3825,7 +3825,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
+ 	 * put back on, and if we advance min_vruntime, we'll be placed back
+ 	 * further than we started -- ie. we'll be penalized.
+ 	 */
+-	if ((flags & (DEQUEUE_SAVE | DEQUEUE_MOVE)) == DEQUEUE_SAVE)
++	if ((flags & (DEQUEUE_SAVE | DEQUEUE_MOVE)) != DEQUEUE_SAVE)
+ 		update_min_vruntime(cfs_rq);
+ }
+ 
+diff --git a/kernel/signal.c b/kernel/signal.c
+index 4439ba9dc5d9..164c36ef0825 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -1003,7 +1003,7 @@ static int __send_signal(int sig, struct siginfo *info, struct task_struct *t,
+ 
+ 	result = TRACE_SIGNAL_IGNORED;
+ 	if (!prepare_signal(sig, t,
+-			from_ancestor_ns || (info == SEND_SIG_FORCED)))
++			from_ancestor_ns || (info == SEND_SIG_PRIV) || (info == SEND_SIG_FORCED)))
+ 		goto ret;
+ 
+ 	pending = group ? &t->signal->shared_pending : &t->pending;
+@@ -2700,7 +2700,7 @@ COMPAT_SYSCALL_DEFINE2(rt_sigpending, compat_sigset_t __user *, uset,
+ }
+ #endif
+ 
+-enum siginfo_layout siginfo_layout(int sig, int si_code)
++enum siginfo_layout siginfo_layout(unsigned sig, int si_code)
+ {
+ 	enum siginfo_layout layout = SIL_KILL;
+ 	if ((si_code > SI_USER) && (si_code < SI_KERNEL)) {
+@@ -3215,7 +3215,8 @@ int do_sigaction(int sig, struct k_sigaction *act, struct k_sigaction *oact)
+ }
+ 
+ static int
+-do_sigaltstack (const stack_t *ss, stack_t *oss, unsigned long sp)
++do_sigaltstack (const stack_t *ss, stack_t *oss, unsigned long sp,
++		size_t min_ss_size)
+ {
+ 	struct task_struct *t = current;
+ 
+@@ -3245,7 +3246,7 @@ do_sigaltstack (const stack_t *ss, stack_t *oss, unsigned long sp)
+ 			ss_size = 0;
+ 			ss_sp = NULL;
+ 		} else {
+-			if (unlikely(ss_size < MINSIGSTKSZ))
++			if (unlikely(ss_size < min_ss_size))
+ 				return -ENOMEM;
+ 		}
+ 
+@@ -3263,7 +3264,8 @@ SYSCALL_DEFINE2(sigaltstack,const stack_t __user *,uss, stack_t __user *,uoss)
+ 	if (uss && copy_from_user(&new, uss, sizeof(stack_t)))
+ 		return -EFAULT;
+ 	err = do_sigaltstack(uss ? &new : NULL, uoss ? &old : NULL,
+-			      current_user_stack_pointer());
++			      current_user_stack_pointer(),
++			      MINSIGSTKSZ);
+ 	if (!err && uoss && copy_to_user(uoss, &old, sizeof(stack_t)))
+ 		err = -EFAULT;
+ 	return err;
+@@ -3274,7 +3276,8 @@ int restore_altstack(const stack_t __user *uss)
+ 	stack_t new;
+ 	if (copy_from_user(&new, uss, sizeof(stack_t)))
+ 		return -EFAULT;
+-	(void)do_sigaltstack(&new, NULL, current_user_stack_pointer());
++	(void)do_sigaltstack(&new, NULL, current_user_stack_pointer(),
++			     MINSIGSTKSZ);
+ 	/* squash all but EFAULT for now */
+ 	return 0;
+ }
+@@ -3309,7 +3312,8 @@ COMPAT_SYSCALL_DEFINE2(sigaltstack,
+ 		uss.ss_size = uss32.ss_size;
+ 	}
+ 	ret = do_sigaltstack(uss_ptr ? &uss : NULL, &uoss,
+-			     compat_user_stack_pointer());
++			     compat_user_stack_pointer(),
++			     COMPAT_MINSIGSTKSZ);
+ 	if (ret >= 0 && uoss_ptr)  {
+ 		compat_stack_t old;
+ 		memset(&old, 0, sizeof(old));
+diff --git a/lib/debug_locks.c b/lib/debug_locks.c
+index 96c4c633d95e..124fdf238b3d 100644
+--- a/lib/debug_locks.c
++++ b/lib/debug_locks.c
+@@ -37,7 +37,7 @@ EXPORT_SYMBOL_GPL(debug_locks_silent);
+  */
+ int debug_locks_off(void)
+ {
+-	if (__debug_locks_off()) {
++	if (debug_locks && __debug_locks_off()) {
+ 		if (!debug_locks_silent) {
+ 			console_verbose();
+ 			return 1;
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 9801dc0250e2..e073099083ca 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -3644,6 +3644,12 @@ int huge_add_to_page_cache(struct page *page, struct address_space *mapping,
+ 		return err;
+ 	ClearPagePrivate(page);
+ 
++	/*
++	 * set page dirty so that it will not be removed from cache/file
++	 * by non-hugetlbfs specific code paths.
++	 */
++	set_page_dirty(page);
++
+ 	spin_lock(&inode->i_lock);
+ 	inode->i_blocks += blocks_per_huge_page(h);
+ 	spin_unlock(&inode->i_lock);
+diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
+index 956015614395..e00d985a51c5 100644
+--- a/mm/page_vma_mapped.c
++++ b/mm/page_vma_mapped.c
+@@ -21,7 +21,29 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw)
+ 			if (!is_swap_pte(*pvmw->pte))
+ 				return false;
+ 		} else {
+-			if (!pte_present(*pvmw->pte))
++			/*
++			 * We get here when we are trying to unmap a private
++			 * device page from the process address space. Such
++			 * page is not CPU accessible and thus is mapped as
++			 * a special swap entry, nonetheless it still does
++			 * count as a valid regular mapping for the page (and
++			 * is accounted as such in page maps count).
++			 *
++			 * So handle this special case as if it was a normal
++			 * page mapping ie lock CPU page table and returns
++			 * true.
++			 *
++			 * For more details on device private memory see HMM
++			 * (include/linux/hmm.h or mm/hmm.c).
++			 */
++			if (is_swap_pte(*pvmw->pte)) {
++				swp_entry_t entry;
++
++				/* Handle un-addressable ZONE_DEVICE memory */
++				entry = pte_to_swp_entry(*pvmw->pte);
++				if (!is_device_private_entry(entry))
++					return false;
++			} else if (!pte_present(*pvmw->pte))
+ 				return false;
+ 		}
+ 	}
+diff --git a/net/core/netclassid_cgroup.c b/net/core/netclassid_cgroup.c
+index 5e4f04004a49..7bf833598615 100644
+--- a/net/core/netclassid_cgroup.c
++++ b/net/core/netclassid_cgroup.c
+@@ -106,6 +106,7 @@ static int write_classid(struct cgroup_subsys_state *css, struct cftype *cft,
+ 		iterate_fd(p->files, 0, update_classid_sock,
+ 			   (void *)(unsigned long)cs->classid);
+ 		task_unlock(p);
++		cond_resched();
+ 	}
+ 	css_task_iter_end(&it);
+ 
+diff --git a/net/ipv4/cipso_ipv4.c b/net/ipv4/cipso_ipv4.c
+index 82178cc69c96..777fa3b7fb13 100644
+--- a/net/ipv4/cipso_ipv4.c
++++ b/net/ipv4/cipso_ipv4.c
+@@ -1512,7 +1512,7 @@ static int cipso_v4_parsetag_loc(const struct cipso_v4_doi *doi_def,
+  *
+  * Description:
+  * Parse the packet's IP header looking for a CIPSO option.  Returns a pointer
+- * to the start of the CIPSO option on success, NULL if one if not found.
++ * to the start of the CIPSO option on success, NULL if one is not found.
+  *
+  */
+ unsigned char *cipso_v4_optptr(const struct sk_buff *skb)
+@@ -1522,10 +1522,8 @@ unsigned char *cipso_v4_optptr(const struct sk_buff *skb)
+ 	int optlen;
+ 	int taglen;
+ 
+-	for (optlen = iph->ihl*4 - sizeof(struct iphdr); optlen > 0; ) {
++	for (optlen = iph->ihl*4 - sizeof(struct iphdr); optlen > 1; ) {
+ 		switch (optptr[0]) {
+-		case IPOPT_CIPSO:
+-			return optptr;
+ 		case IPOPT_END:
+ 			return NULL;
+ 		case IPOPT_NOOP:
+@@ -1534,6 +1532,11 @@ unsigned char *cipso_v4_optptr(const struct sk_buff *skb)
+ 		default:
+ 			taglen = optptr[1];
+ 		}
++		if (!taglen || taglen > optlen)
++			return NULL;
++		if (optptr[0] == IPOPT_CIPSO)
++			return optptr;
++
+ 		optlen -= taglen;
+ 		optptr += taglen;
+ 	}
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index 691ca96f7460..7b4270987ac1 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -1218,7 +1218,6 @@ check_loop_fn(struct Qdisc *q, unsigned long cl, struct qdisc_walker *w)
+ 
+ const struct nla_policy rtm_tca_policy[TCA_MAX + 1] = {
+ 	[TCA_KIND]		= { .type = NLA_STRING },
+-	[TCA_OPTIONS]		= { .type = NLA_NESTED },
+ 	[TCA_RATE]		= { .type = NLA_BINARY,
+ 				    .len = sizeof(struct tc_estimator) },
+ 	[TCA_STAB]		= { .type = NLA_NESTED },
+diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
+index d16a8b423c20..ea7b5a3a53f0 100644
+--- a/net/sunrpc/svc_xprt.c
++++ b/net/sunrpc/svc_xprt.c
+@@ -1040,7 +1040,7 @@ static void call_xpt_users(struct svc_xprt *xprt)
+ 	spin_lock(&xprt->xpt_lock);
+ 	while (!list_empty(&xprt->xpt_users)) {
+ 		u = list_first_entry(&xprt->xpt_users, struct svc_xpt_user, list);
+-		list_del(&u->list);
++		list_del_init(&u->list);
+ 		u->callback(u);
+ 	}
+ 	spin_unlock(&xprt->xpt_lock);
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index 37c32e73aaef..70ec57b887f6 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -626,9 +626,9 @@ static void xfrm_hash_rebuild(struct work_struct *work)
+ 				break;
+ 		}
+ 		if (newpos)
+-			hlist_add_behind(&policy->bydst, newpos);
++			hlist_add_behind_rcu(&policy->bydst, newpos);
+ 		else
+-			hlist_add_head(&policy->bydst, chain);
++			hlist_add_head_rcu(&policy->bydst, chain);
+ 	}
+ 
+ 	spin_unlock_bh(&net->xfrm.xfrm_policy_lock);
+@@ -767,9 +767,9 @@ int xfrm_policy_insert(int dir, struct xfrm_policy *policy, int excl)
+ 			break;
+ 	}
+ 	if (newpos)
+-		hlist_add_behind(&policy->bydst, newpos);
++		hlist_add_behind_rcu(&policy->bydst, newpos);
+ 	else
+-		hlist_add_head(&policy->bydst, chain);
++		hlist_add_head_rcu(&policy->bydst, chain);
+ 	__xfrm_policy_link(policy, dir);
+ 
+ 	/* After previous checking, family can either be AF_INET or AF_INET6 */
+diff --git a/security/integrity/ima/ima_fs.c b/security/integrity/ima/ima_fs.c
+index ad491c51e833..2c4e83f6409e 100644
+--- a/security/integrity/ima/ima_fs.c
++++ b/security/integrity/ima/ima_fs.c
+@@ -39,14 +39,14 @@ static int __init default_canonical_fmt_setup(char *str)
+ __setup("ima_canonical_fmt", default_canonical_fmt_setup);
+ 
+ static int valid_policy = 1;
+-#define TMPBUFLEN 12
++
+ static ssize_t ima_show_htable_value(char __user *buf, size_t count,
+ 				     loff_t *ppos, atomic_long_t *val)
+ {
+-	char tmpbuf[TMPBUFLEN];
++	char tmpbuf[32];	/* greater than largest 'long' string value */
+ 	ssize_t len;
+ 
+-	len = scnprintf(tmpbuf, TMPBUFLEN, "%li\n", atomic_long_read(val));
++	len = scnprintf(tmpbuf, sizeof(tmpbuf), "%li\n", atomic_long_read(val));
+ 	return simple_read_from_buffer(buf, count, ppos, tmpbuf, len);
+ }
+ 
+diff --git a/sound/pci/ca0106/ca0106.h b/sound/pci/ca0106/ca0106.h
+index 04402c14cb23..9847b669cf3c 100644
+--- a/sound/pci/ca0106/ca0106.h
++++ b/sound/pci/ca0106/ca0106.h
+@@ -582,7 +582,7 @@
+ #define SPI_PL_BIT_R_R		(2<<7)	/* right channel = right */
+ #define SPI_PL_BIT_R_C		(3<<7)	/* right channel = (L+R)/2 */
+ #define SPI_IZD_REG		2
+-#define SPI_IZD_BIT		(1<<4)	/* infinite zero detect */
++#define SPI_IZD_BIT		(0<<4)	/* infinite zero detect */
+ 
+ #define SPI_FMT_REG		3
+ #define SPI_FMT_BIT_RJ		(0<<0)	/* right justified mode */
+diff --git a/sound/pci/hda/hda_controller.h b/sound/pci/hda/hda_controller.h
+index a68e75b00ea3..53c3cd28bc99 100644
+--- a/sound/pci/hda/hda_controller.h
++++ b/sound/pci/hda/hda_controller.h
+@@ -160,6 +160,7 @@ struct azx {
+ 	unsigned int msi:1;
+ 	unsigned int probing:1; /* codec probing phase */
+ 	unsigned int snoop:1;
++	unsigned int uc_buffer:1; /* non-cached pages for stream buffers */
+ 	unsigned int align_buffer_size:1;
+ 	unsigned int region_requested:1;
+ 	unsigned int disabled:1; /* disabled by vga_switcheroo */
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 873d9824fbcf..4e38905bc47d 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -410,7 +410,7 @@ static void __mark_pages_wc(struct azx *chip, struct snd_dma_buffer *dmab, bool
+ #ifdef CONFIG_SND_DMA_SGBUF
+ 	if (dmab->dev.type == SNDRV_DMA_TYPE_DEV_SG) {
+ 		struct snd_sg_buf *sgbuf = dmab->private_data;
+-		if (chip->driver_type == AZX_DRIVER_CMEDIA)
++		if (!chip->uc_buffer)
+ 			return; /* deal with only CORB/RIRB buffers */
+ 		if (on)
+ 			set_pages_array_wc(sgbuf->page_table, sgbuf->pages);
+@@ -1634,6 +1634,7 @@ static void azx_check_snoop_available(struct azx *chip)
+ 		dev_info(chip->card->dev, "Force to %s mode by module option\n",
+ 			 snoop ? "snoop" : "non-snoop");
+ 		chip->snoop = snoop;
++		chip->uc_buffer = !snoop;
+ 		return;
+ 	}
+ 
+@@ -1654,8 +1655,12 @@ static void azx_check_snoop_available(struct azx *chip)
+ 		snoop = false;
+ 
+ 	chip->snoop = snoop;
+-	if (!snoop)
++	if (!snoop) {
+ 		dev_info(chip->card->dev, "Force to non-snoop mode\n");
++		/* C-Media requires non-cached pages only for CORB/RIRB */
++		if (chip->driver_type != AZX_DRIVER_CMEDIA)
++			chip->uc_buffer = true;
++	}
+ }
+ 
+ static void azx_probe_work(struct work_struct *work)
+@@ -2094,7 +2099,7 @@ static void pcm_mmap_prepare(struct snd_pcm_substream *substream,
+ #ifdef CONFIG_X86
+ 	struct azx_pcm *apcm = snd_pcm_substream_chip(substream);
+ 	struct azx *chip = apcm->chip;
+-	if (!azx_snoop(chip) && chip->driver_type != AZX_DRIVER_CMEDIA)
++	if (chip->uc_buffer)
+ 		area->vm_page_prot = pgprot_writecombine(area->vm_page_prot);
+ #endif
+ }
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index 16197ad4512a..0cc0ced1f2ed 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -981,6 +981,7 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x21da, "Lenovo X220", CXT_PINCFG_LENOVO_TP410),
+ 	SND_PCI_QUIRK(0x17aa, 0x21db, "Lenovo X220-tablet", CXT_PINCFG_LENOVO_TP410),
+ 	SND_PCI_QUIRK(0x17aa, 0x38af, "Lenovo IdeaPad Z560", CXT_FIXUP_MUTE_LED_EAPD),
++	SND_PCI_QUIRK(0x17aa, 0x3905, "Lenovo G50-30", CXT_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x390b, "Lenovo G50-80", CXT_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3975, "Lenovo U300s", CXT_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3977, "Lenovo IdeaPad U310", CXT_FIXUP_STEREO_DMIC),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index fe5c741fcc6a..eb8807de3ebc 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6629,6 +6629,12 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 		{0x1a, 0x02a11040},
+ 		{0x1b, 0x01014020},
+ 		{0x21, 0x0221101f}),
++	SND_HDA_PIN_QUIRK(0x10ec0235, 0x17aa, "Lenovo", ALC294_FIXUP_LENOVO_MIC_LOCATION,
++		{0x14, 0x90170110},
++		{0x19, 0x02a11030},
++		{0x1a, 0x02a11040},
++		{0x1b, 0x01011020},
++		{0x21, 0x0221101f}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0235, 0x17aa, "Lenovo", ALC294_FIXUP_LENOVO_MIC_LOCATION,
+ 		{0x14, 0x90170110},
+ 		{0x19, 0x02a11020},
+@@ -7515,6 +7521,8 @@ enum {
+ 	ALC662_FIXUP_ASUS_Nx50,
+ 	ALC668_FIXUP_ASUS_Nx51_HEADSET_MODE,
+ 	ALC668_FIXUP_ASUS_Nx51,
++	ALC668_FIXUP_MIC_COEF,
++	ALC668_FIXUP_ASUS_G751,
+ 	ALC891_FIXUP_HEADSET_MODE,
+ 	ALC891_FIXUP_DELL_MIC_NO_PRESENCE,
+ 	ALC662_FIXUP_ACER_VERITON,
+@@ -7784,6 +7792,23 @@ static const struct hda_fixup alc662_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC668_FIXUP_ASUS_Nx51_HEADSET_MODE,
+ 	},
++	[ALC668_FIXUP_MIC_COEF] = {
++		.type = HDA_FIXUP_VERBS,
++		.v.verbs = (const struct hda_verb[]) {
++			{ 0x20, AC_VERB_SET_COEF_INDEX, 0xc3 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x4000 },
++			{}
++		},
++	},
++	[ALC668_FIXUP_ASUS_G751] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x16, 0x0421101f }, /* HP */
++			{}
++		},
++		.chained = true,
++		.chain_id = ALC668_FIXUP_MIC_COEF
++	},
+ 	[ALC891_FIXUP_HEADSET_MODE] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc_fixup_headset_mode,
+@@ -7857,6 +7882,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x11cd, "Asus N550", ALC662_FIXUP_ASUS_Nx50),
+ 	SND_PCI_QUIRK(0x1043, 0x13df, "Asus N550JX", ALC662_FIXUP_BASS_1A),
+ 	SND_PCI_QUIRK(0x1043, 0x129d, "Asus N750", ALC662_FIXUP_ASUS_Nx50),
++	SND_PCI_QUIRK(0x1043, 0x12ff, "ASUS G751", ALC668_FIXUP_ASUS_G751),
+ 	SND_PCI_QUIRK(0x1043, 0x1477, "ASUS N56VZ", ALC662_FIXUP_BASS_MODE4_CHMAP),
+ 	SND_PCI_QUIRK(0x1043, 0x15a7, "ASUS UX51VZH", ALC662_FIXUP_BASS_16),
+ 	SND_PCI_QUIRK(0x1043, 0x177d, "ASUS N551", ALC668_FIXUP_ASUS_Nx51),
+diff --git a/sound/soc/intel/skylake/skl-topology.c b/sound/soc/intel/skylake/skl-topology.c
+index 22f768ca3c73..b45c1ae60f94 100644
+--- a/sound/soc/intel/skylake/skl-topology.c
++++ b/sound/soc/intel/skylake/skl-topology.c
+@@ -2360,6 +2360,7 @@ static int skl_tplg_get_token(struct device *dev,
+ 
+ 	case SKL_TKN_U8_CORE_ID:
+ 		mconfig->core_id = tkn_elem->value;
++		break;
+ 
+ 	case SKL_TKN_U8_MOD_TYPE:
+ 		mconfig->m_type = tkn_elem->value;
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index 63f534a0902f..f362ee46506a 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -795,7 +795,7 @@ ifndef NO_JVMTI
+     JDIR=$(shell /usr/sbin/update-java-alternatives -l | head -1 | awk '{print $$3}')
+   else
+     ifneq (,$(wildcard /usr/sbin/alternatives))
+-      JDIR=$(shell alternatives --display java | tail -1 | cut -d' ' -f 5 | sed 's%/jre/bin/java.%%g')
++      JDIR=$(shell /usr/sbin/alternatives --display java | tail -1 | cut -d' ' -f 5 | sed 's%/jre/bin/java.%%g')
+     endif
+   endif
+   ifndef JDIR
+diff --git a/tools/perf/pmu-events/arch/x86/ivytown/uncore-power.json b/tools/perf/pmu-events/arch/x86/ivytown/uncore-power.json
+index d40498f2cb1e..635c09fda1d9 100644
+--- a/tools/perf/pmu-events/arch/x86/ivytown/uncore-power.json
++++ b/tools/perf/pmu-events/arch/x86/ivytown/uncore-power.json
+@@ -188,7 +188,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xb",
+         "EventName": "UNC_P_FREQ_GE_1200MHZ_CYCLES",
+-        "Filter": "filter_band0=1200",
++        "Filter": "filter_band0=12",
+         "MetricExpr": "(UNC_P_FREQ_GE_1200MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_1200mhz_cycles %",
+         "PerPkg": "1",
+@@ -199,7 +199,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xc",
+         "EventName": "UNC_P_FREQ_GE_2000MHZ_CYCLES",
+-        "Filter": "filter_band1=2000",
++        "Filter": "filter_band1=20",
+         "MetricExpr": "(UNC_P_FREQ_GE_2000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_2000mhz_cycles %",
+         "PerPkg": "1",
+@@ -210,7 +210,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xd",
+         "EventName": "UNC_P_FREQ_GE_3000MHZ_CYCLES",
+-        "Filter": "filter_band2=3000",
++        "Filter": "filter_band2=30",
+         "MetricExpr": "(UNC_P_FREQ_GE_3000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_3000mhz_cycles %",
+         "PerPkg": "1",
+@@ -221,7 +221,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xe",
+         "EventName": "UNC_P_FREQ_GE_4000MHZ_CYCLES",
+-        "Filter": "filter_band3=4000",
++        "Filter": "filter_band3=40",
+         "MetricExpr": "(UNC_P_FREQ_GE_4000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_4000mhz_cycles %",
+         "PerPkg": "1",
+@@ -232,7 +232,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xb",
+         "EventName": "UNC_P_FREQ_GE_1200MHZ_TRANSITIONS",
+-        "Filter": "edge=1,filter_band0=1200",
++        "Filter": "edge=1,filter_band0=12",
+         "MetricExpr": "(UNC_P_FREQ_GE_1200MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_1200mhz_cycles %",
+         "PerPkg": "1",
+@@ -243,7 +243,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xc",
+         "EventName": "UNC_P_FREQ_GE_2000MHZ_TRANSITIONS",
+-        "Filter": "edge=1,filter_band1=2000",
++        "Filter": "edge=1,filter_band1=20",
+         "MetricExpr": "(UNC_P_FREQ_GE_2000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_2000mhz_cycles %",
+         "PerPkg": "1",
+@@ -254,7 +254,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xd",
+         "EventName": "UNC_P_FREQ_GE_3000MHZ_TRANSITIONS",
+-        "Filter": "edge=1,filter_band2=4000",
++        "Filter": "edge=1,filter_band2=30",
+         "MetricExpr": "(UNC_P_FREQ_GE_3000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_3000mhz_cycles %",
+         "PerPkg": "1",
+@@ -265,7 +265,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xe",
+         "EventName": "UNC_P_FREQ_GE_4000MHZ_TRANSITIONS",
+-        "Filter": "edge=1,filter_band3=4000",
++        "Filter": "edge=1,filter_band3=40",
+         "MetricExpr": "(UNC_P_FREQ_GE_4000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_4000mhz_cycles %",
+         "PerPkg": "1",
+diff --git a/tools/perf/pmu-events/arch/x86/jaketown/uncore-power.json b/tools/perf/pmu-events/arch/x86/jaketown/uncore-power.json
+index 16034bfd06dd..8755693d86c6 100644
+--- a/tools/perf/pmu-events/arch/x86/jaketown/uncore-power.json
++++ b/tools/perf/pmu-events/arch/x86/jaketown/uncore-power.json
+@@ -187,7 +187,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xb",
+         "EventName": "UNC_P_FREQ_GE_1200MHZ_CYCLES",
+-        "Filter": "filter_band0=1200",
++        "Filter": "filter_band0=12",
+         "MetricExpr": "(UNC_P_FREQ_GE_1200MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_1200mhz_cycles %",
+         "PerPkg": "1",
+@@ -198,7 +198,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xc",
+         "EventName": "UNC_P_FREQ_GE_2000MHZ_CYCLES",
+-        "Filter": "filter_band1=2000",
++        "Filter": "filter_band1=20",
+         "MetricExpr": "(UNC_P_FREQ_GE_2000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_2000mhz_cycles %",
+         "PerPkg": "1",
+@@ -209,7 +209,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xd",
+         "EventName": "UNC_P_FREQ_GE_3000MHZ_CYCLES",
+-        "Filter": "filter_band2=3000",
++        "Filter": "filter_band2=30",
+         "MetricExpr": "(UNC_P_FREQ_GE_3000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_3000mhz_cycles %",
+         "PerPkg": "1",
+@@ -220,7 +220,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xe",
+         "EventName": "UNC_P_FREQ_GE_4000MHZ_CYCLES",
+-        "Filter": "filter_band3=4000",
++        "Filter": "filter_band3=40",
+         "MetricExpr": "(UNC_P_FREQ_GE_4000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_4000mhz_cycles %",
+         "PerPkg": "1",
+@@ -231,7 +231,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xb",
+         "EventName": "UNC_P_FREQ_GE_1200MHZ_TRANSITIONS",
+-        "Filter": "edge=1,filter_band0=1200",
++        "Filter": "edge=1,filter_band0=12",
+         "MetricExpr": "(UNC_P_FREQ_GE_1200MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_1200mhz_cycles %",
+         "PerPkg": "1",
+@@ -242,7 +242,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xc",
+         "EventName": "UNC_P_FREQ_GE_2000MHZ_TRANSITIONS",
+-        "Filter": "edge=1,filter_band1=2000",
++        "Filter": "edge=1,filter_band1=20",
+         "MetricExpr": "(UNC_P_FREQ_GE_2000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_2000mhz_cycles %",
+         "PerPkg": "1",
+@@ -253,7 +253,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xd",
+         "EventName": "UNC_P_FREQ_GE_3000MHZ_TRANSITIONS",
+-        "Filter": "edge=1,filter_band2=4000",
++        "Filter": "edge=1,filter_band2=30",
+         "MetricExpr": "(UNC_P_FREQ_GE_3000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_3000mhz_cycles %",
+         "PerPkg": "1",
+@@ -264,7 +264,7 @@
+         "Counter": "0,1,2,3",
+         "EventCode": "0xe",
+         "EventName": "UNC_P_FREQ_GE_4000MHZ_TRANSITIONS",
+-        "Filter": "edge=1,filter_band3=4000",
++        "Filter": "edge=1,filter_band3=40",
+         "MetricExpr": "(UNC_P_FREQ_GE_4000MHZ_CYCLES / UNC_P_CLOCKTICKS) * 100.",
+         "MetricName": "freq_ge_4000mhz_cycles %",
+         "PerPkg": "1",
+diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
+index fc690fecbfd6..a19e840db54a 100644
+--- a/tools/perf/util/event.c
++++ b/tools/perf/util/event.c
+@@ -951,6 +951,7 @@ void *cpu_map_data__alloc(struct cpu_map *map, size_t *size, u16 *type, int *max
+ 	}
+ 
+ 	*size += sizeof(struct cpu_map_data);
++	*size = PERF_ALIGN(*size, sizeof(u64));
+ 	return zalloc(*size);
+ }
+ 
+diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
+index d87d458996b7..dceef4725d33 100644
+--- a/tools/perf/util/pmu.c
++++ b/tools/perf/util/pmu.c
+@@ -754,13 +754,14 @@ static void pmu_format_value(unsigned long *format, __u64 value, __u64 *v,
+ 
+ static __u64 pmu_format_max_value(const unsigned long *format)
+ {
+-	__u64 w = 0;
+-	int fbit;
+-
+-	for_each_set_bit(fbit, format, PERF_PMU_FORMAT_BITS)
+-		w |= (1ULL << fbit);
++	int w;
+ 
+-	return w;
++	w = bitmap_weight(format, PERF_PMU_FORMAT_BITS);
++	if (!w)
++		return 0;
++	if (w < 64)
++		return (1ULL << w) - 1;
++	return -1;
+ }
+ 
+ /*
+diff --git a/tools/perf/util/strbuf.c b/tools/perf/util/strbuf.c
+index 3d1cf5bf7f18..9005fbe0780e 100644
+--- a/tools/perf/util/strbuf.c
++++ b/tools/perf/util/strbuf.c
+@@ -98,19 +98,25 @@ static int strbuf_addv(struct strbuf *sb, const char *fmt, va_list ap)
+ 
+ 	va_copy(ap_saved, ap);
+ 	len = vsnprintf(sb->buf + sb->len, sb->alloc - sb->len, fmt, ap);
+-	if (len < 0)
++	if (len < 0) {
++		va_end(ap_saved);
+ 		return len;
++	}
+ 	if (len > strbuf_avail(sb)) {
+ 		ret = strbuf_grow(sb, len);
+-		if (ret)
++		if (ret) {
++			va_end(ap_saved);
+ 			return ret;
++		}
+ 		len = vsnprintf(sb->buf + sb->len, sb->alloc - sb->len, fmt, ap_saved);
+ 		va_end(ap_saved);
+ 		if (len > strbuf_avail(sb)) {
+ 			pr_debug("this should not happen, your vsnprintf is broken");
++			va_end(ap_saved);
+ 			return -EINVAL;
+ 		}
+ 	}
++	va_end(ap_saved);
+ 	return strbuf_setlen(sb, sb->len + len);
+ }
+ 
+diff --git a/tools/perf/util/trace-event-info.c b/tools/perf/util/trace-event-info.c
+index 8f3b7ef221f2..71a5b4863707 100644
+--- a/tools/perf/util/trace-event-info.c
++++ b/tools/perf/util/trace-event-info.c
+@@ -533,12 +533,14 @@ struct tracing_data *tracing_data_get(struct list_head *pattrs,
+ 			 "/tmp/perf-XXXXXX");
+ 		if (!mkstemp(tdata->temp_file)) {
+ 			pr_debug("Can't make temp file");
++			free(tdata);
+ 			return NULL;
+ 		}
+ 
+ 		temp_fd = open(tdata->temp_file, O_RDWR);
+ 		if (temp_fd < 0) {
+ 			pr_debug("Can't read '%s'", tdata->temp_file);
++			free(tdata);
+ 			return NULL;
+ 		}
+ 
+diff --git a/tools/perf/util/trace-event-read.c b/tools/perf/util/trace-event-read.c
+index 8a9a677f7576..6bfd690d63d9 100644
+--- a/tools/perf/util/trace-event-read.c
++++ b/tools/perf/util/trace-event-read.c
+@@ -350,9 +350,12 @@ static int read_event_files(struct pevent *pevent)
+ 		for (x=0; x < count; x++) {
+ 			size = read8(pevent);
+ 			ret = read_event_file(pevent, sys, size);
+-			if (ret)
++			if (ret) {
++				free(sys);
+ 				return ret;
++			}
+ 		}
++		free(sys);
+ 	}
+ 	return 0;
+ }
+diff --git a/tools/power/cpupower/utils/cpufreq-info.c b/tools/power/cpupower/utils/cpufreq-info.c
+index 3e701f0e9c14..5853faa9daf3 100644
+--- a/tools/power/cpupower/utils/cpufreq-info.c
++++ b/tools/power/cpupower/utils/cpufreq-info.c
+@@ -202,6 +202,8 @@ static int get_boost_mode(unsigned int cpu)
+ 		printf(_("    Boost States: %d\n"), b_states);
+ 		printf(_("    Total States: %d\n"), pstate_no);
+ 		for (i = 0; i < pstate_no; i++) {
++			if (!pstates[i])
++				continue;
+ 			if (i < b_states)
+ 				printf(_("    Pstate-Pb%d: %luMHz (boost state)"
+ 					 "\n"), i, pstates[i]);
+diff --git a/tools/power/cpupower/utils/helpers/amd.c b/tools/power/cpupower/utils/helpers/amd.c
+index bb41cdd0df6b..9607ada5b29a 100644
+--- a/tools/power/cpupower/utils/helpers/amd.c
++++ b/tools/power/cpupower/utils/helpers/amd.c
+@@ -33,7 +33,7 @@ union msr_pstate {
+ 		unsigned vid:8;
+ 		unsigned iddval:8;
+ 		unsigned idddiv:2;
+-		unsigned res1:30;
++		unsigned res1:31;
+ 		unsigned en:1;
+ 	} fam17h_bits;
+ 	unsigned long long val;
+@@ -119,6 +119,11 @@ int decode_pstates(unsigned int cpu, unsigned int cpu_family,
+ 		}
+ 		if (read_msr(cpu, MSR_AMD_PSTATE + i, &pstate.val))
+ 			return -1;
++		if ((cpu_family == 0x17) && (!pstate.fam17h_bits.en))
++			continue;
++		else if (!pstate.bits.en)
++			continue;
++
+ 		pstates[i] = get_cof(cpu_family, pstate);
+ 	}
+ 	*no = i;
+diff --git a/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-syntax.tc b/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-syntax.tc
+new file mode 100644
+index 000000000000..88e6c3f43006
+--- /dev/null
++++ b/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-syntax.tc
+@@ -0,0 +1,80 @@
++#!/bin/sh
++# SPDX-License-Identifier: GPL-2.0
++# description: event trigger - test synthetic_events syntax parser
++
++do_reset() {
++    reset_trigger
++    echo > set_event
++    clear_trace
++}
++
++fail() { #msg
++    do_reset
++    echo $1
++    exit_fail
++}
++
++if [ ! -f set_event ]; then
++    echo "event tracing is not supported"
++    exit_unsupported
++fi
++
++if [ ! -f synthetic_events ]; then
++    echo "synthetic event is not supported"
++    exit_unsupported
++fi
++
++reset_tracer
++do_reset
++
++echo "Test synthetic_events syntax parser"
++
++echo > synthetic_events
++
++# synthetic event must have a field
++! echo "myevent" >> synthetic_events
++echo "myevent u64 var1" >> synthetic_events
++
++# synthetic event must be found in synthetic_events
++grep "myevent[[:space:]]u64 var1" synthetic_events
++
++# it is not possible to add same name event
++! echo "myevent u64 var2" >> synthetic_events
++
++# Non-append open will cleanup all events and add new one
++echo "myevent u64 var2" > synthetic_events
++
++# multiple fields with different spaces
++echo "myevent u64 var1; u64 var2;" > synthetic_events
++grep "myevent[[:space:]]u64 var1; u64 var2" synthetic_events
++echo "myevent u64 var1 ; u64 var2 ;" > synthetic_events
++grep "myevent[[:space:]]u64 var1; u64 var2" synthetic_events
++echo "myevent u64 var1 ;u64 var2" > synthetic_events
++grep "myevent[[:space:]]u64 var1; u64 var2" synthetic_events
++
++# test field types
++echo "myevent u32 var" > synthetic_events
++echo "myevent u16 var" > synthetic_events
++echo "myevent u8 var" > synthetic_events
++echo "myevent s64 var" > synthetic_events
++echo "myevent s32 var" > synthetic_events
++echo "myevent s16 var" > synthetic_events
++echo "myevent s8 var" > synthetic_events
++
++echo "myevent char var" > synthetic_events
++echo "myevent int var" > synthetic_events
++echo "myevent long var" > synthetic_events
++echo "myevent pid_t var" > synthetic_events
++
++echo "myevent unsigned char var" > synthetic_events
++echo "myevent unsigned int var" > synthetic_events
++echo "myevent unsigned long var" > synthetic_events
++grep "myevent[[:space:]]unsigned long var" synthetic_events
++
++# test string type
++echo "myevent char var[10]" > synthetic_events
++grep "myevent[[:space:]]char\[10\] var" synthetic_events
++
++do_reset
++
++exit 0
+diff --git a/tools/testing/selftests/net/reuseport_bpf.c b/tools/testing/selftests/net/reuseport_bpf.c
+index cad14cd0ea92..b5277106df1f 100644
+--- a/tools/testing/selftests/net/reuseport_bpf.c
++++ b/tools/testing/selftests/net/reuseport_bpf.c
+@@ -437,14 +437,19 @@ void enable_fastopen(void)
+ 	}
+ }
+ 
+-static struct rlimit rlim_old, rlim_new;
++static struct rlimit rlim_old;
+ 
+ static  __attribute__((constructor)) void main_ctor(void)
+ {
+ 	getrlimit(RLIMIT_MEMLOCK, &rlim_old);
+-	rlim_new.rlim_cur = rlim_old.rlim_cur + (1UL << 20);
+-	rlim_new.rlim_max = rlim_old.rlim_max + (1UL << 20);
+-	setrlimit(RLIMIT_MEMLOCK, &rlim_new);
++
++	if (rlim_old.rlim_cur != RLIM_INFINITY) {
++		struct rlimit rlim_new;
++
++		rlim_new.rlim_cur = rlim_old.rlim_cur + (1UL << 20);
++		rlim_new.rlim_max = rlim_old.rlim_max + (1UL << 20);
++		setrlimit(RLIMIT_MEMLOCK, &rlim_new);
++	}
+ }
+ 
+ static __attribute__((destructor)) void main_dtor(void)
+diff --git a/tools/testing/selftests/powerpc/ptrace/ptrace-tm-spd-gpr.c b/tools/testing/selftests/powerpc/ptrace/ptrace-tm-spd-gpr.c
+index 327fa943c7f3..dbdffa2e2c82 100644
+--- a/tools/testing/selftests/powerpc/ptrace/ptrace-tm-spd-gpr.c
++++ b/tools/testing/selftests/powerpc/ptrace/ptrace-tm-spd-gpr.c
+@@ -67,8 +67,8 @@ trans:
+ 		"3: ;"
+ 		: [res] "=r" (result), [texasr] "=r" (texasr)
+ 		: [gpr_1]"i"(GPR_1), [gpr_2]"i"(GPR_2), [gpr_4]"i"(GPR_4),
+-		[sprn_texasr] "i" (SPRN_TEXASR), [flt_1] "r" (&a),
+-		[flt_2] "r" (&b), [flt_4] "r" (&d)
++		[sprn_texasr] "i" (SPRN_TEXASR), [flt_1] "b" (&a),
++		[flt_4] "b" (&d)
+ 		: "memory", "r5", "r6", "r7",
+ 		"r8", "r9", "r10", "r11", "r12", "r13", "r14", "r15",
+ 		"r16", "r17", "r18", "r19", "r20", "r21", "r22", "r23",
+diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
+index d5f1d8364571..ed42b8cf6f5b 100644
+--- a/virt/kvm/arm/arm.c
++++ b/virt/kvm/arm/arm.c
+@@ -1148,8 +1148,6 @@ static void cpu_init_hyp_mode(void *dummy)
+ 
+ 	__cpu_init_hyp_mode(pgd_ptr, hyp_stack_ptr, vector_ptr);
+ 	__cpu_init_stage2();
+-
+-	kvm_arm_init_debug();
+ }
+ 
+ static void cpu_hyp_reset(void)
+@@ -1173,6 +1171,8 @@ static void cpu_hyp_reinit(void)
+ 		cpu_init_hyp_mode(NULL);
+ 	}
+ 
++	kvm_arm_init_debug();
++
+ 	if (vgic_present)
+ 		kvm_vgic_init_cpu_hardware();
+ }


2018-11-14 14:00 Mike Pagano