From: "Mike Pagano" <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:4.4 commit in: /
Date: Tue, 12 Apr 2016 18:59:59 +0000 (UTC)
Message-ID: <1460487601.3f51af26c90914dc78fa1469e036114de4c1c888.mpagano@gentoo>

commit:     3f51af26c90914dc78fa1469e036114de4c1c888
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Apr 12 19:00:01 2016 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Apr 12 19:00:01 2016 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3f51af26

Linux patch 4.4.7

 0000_README            |    4 +
 1006_linux-4.4.7.patch | 8416 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 8420 insertions(+)

diff --git a/0000_README b/0000_README
index 9dc0b5b..5a8f4cb 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,10 @@ Patch:  1005_linux-4.4.6.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.4.6
 
+Patch:  1006_linux-4.4.7.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.4.7
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1006_linux-4.4.7.patch b/1006_linux-4.4.7.patch
new file mode 100644
index 0000000..9cec5f3
--- /dev/null
+++ b/1006_linux-4.4.7.patch
@@ -0,0 +1,8416 @@
+diff --git a/MAINTAINERS b/MAINTAINERS
+index d826f1b9eb02..4c3e1d2ac31b 100644
+--- a/MAINTAINERS
++++ b/MAINTAINERS
+@@ -230,13 +230,13 @@ F:	kernel/sys_ni.c
+ 
+ ABIT UGURU 1,2 HARDWARE MONITOR DRIVER
+ M:	Hans de Goede <hdegoede@redhat.com>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	drivers/hwmon/abituguru.c
+ 
+ ABIT UGURU 3 HARDWARE MONITOR DRIVER
+ M:	Alistair John Strachan <alistair@devzero.co.uk>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	drivers/hwmon/abituguru3.c
+ 
+@@ -373,14 +373,14 @@ S:	Maintained
+ 
+ ADM1025 HARDWARE MONITOR DRIVER
+ M:	Jean Delvare <jdelvare@suse.com>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/hwmon/adm1025
+ F:	drivers/hwmon/adm1025.c
+ 
+ ADM1029 HARDWARE MONITOR DRIVER
+ M:	Corentin Labbe <clabbe.montjoie@gmail.com>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	drivers/hwmon/adm1029.c
+ 
+@@ -425,7 +425,7 @@ F:	drivers/video/backlight/adp8860_bl.c
+ 
+ ADS1015 HARDWARE MONITOR DRIVER
+ M:	Dirk Eibach <eibach@gdsys.de>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/hwmon/ads1015
+ F:	drivers/hwmon/ads1015.c
+@@ -438,7 +438,7 @@ F:	drivers/macintosh/therm_adt746x.c
+ 
+ ADT7475 HARDWARE MONITOR DRIVER
+ M:	Jean Delvare <jdelvare@suse.com>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/hwmon/adt7475
+ F:	drivers/hwmon/adt7475.c
+@@ -615,7 +615,7 @@ F:	include/linux/ccp.h
+ 
+ AMD FAM15H PROCESSOR POWER MONITORING DRIVER
+ M:	Andreas Herrmann <herrmann.der.user@googlemail.com>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/hwmon/fam15h_power
+ F:	drivers/hwmon/fam15h_power.c
+@@ -779,7 +779,7 @@ F:	drivers/input/mouse/bcm5974.c
+ 
+ APPLE SMC DRIVER
+ M:	Henrik Rydberg <rydberg@bitmath.org>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Odd fixes
+ F:	drivers/hwmon/applesmc.c
+ 
+@@ -1777,7 +1777,7 @@ F:	include/media/as3645a.h
+ 
+ ASC7621 HARDWARE MONITOR DRIVER
+ M:	George Joseph <george.joseph@fairview5.com>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/hwmon/asc7621
+ F:	drivers/hwmon/asc7621.c
+@@ -1864,7 +1864,7 @@ F:	drivers/net/wireless/ath/carl9170/
+ 
+ ATK0110 HWMON DRIVER
+ M:	Luca Tettamanti <kronos.it@gmail.com>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	drivers/hwmon/asus_atk0110.c
+ 
+@@ -2984,7 +2984,7 @@ F:	mm/swap_cgroup.c
+ 
+ CORETEMP HARDWARE MONITORING DRIVER
+ M:	Fenghua Yu <fenghua.yu@intel.com>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/hwmon/coretemp
+ F:	drivers/hwmon/coretemp.c
+@@ -3549,7 +3549,7 @@ T:	git git://git.infradead.org/users/vkoul/slave-dma.git
+ 
+ DME1737 HARDWARE MONITOR DRIVER
+ M:	Juerg Haefliger <juergh@gmail.com>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/hwmon/dme1737
+ F:	drivers/hwmon/dme1737.c
+@@ -4262,7 +4262,7 @@ F:	include/video/exynos_mipi*
+ 
+ F71805F HARDWARE MONITORING DRIVER
+ M:	Jean Delvare <jdelvare@suse.com>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/hwmon/f71805f
+ F:	drivers/hwmon/f71805f.c
+@@ -4341,7 +4341,7 @@ F:	fs/*
+ 
+ FINTEK F75375S HARDWARE MONITOR AND FAN CONTROLLER DRIVER
+ M:	Riku Voipio <riku.voipio@iki.fi>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	drivers/hwmon/f75375s.c
+ F:	include/linux/f75375s.h
+@@ -4883,8 +4883,8 @@ F:	drivers/media/usb/hackrf/
+ HARDWARE MONITORING
+ M:	Jean Delvare <jdelvare@suse.com>
+ M:	Guenter Roeck <linux@roeck-us.net>
+-L:	lm-sensors@lm-sensors.org
+-W:	http://www.lm-sensors.org/
++L:	linux-hwmon@vger.kernel.org
++W:	http://hwmon.wiki.kernel.org/
+ T:	quilt http://jdelvare.nerim.net/devel/linux/jdelvare-hwmon/
+ T:	git git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging.git
+ S:	Maintained
+@@ -5393,7 +5393,7 @@ F:	drivers/usb/atm/ueagle-atm.c
+ 
+ INA209 HARDWARE MONITOR DRIVER
+ M:	Guenter Roeck <linux@roeck-us.net>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/hwmon/ina209
+ F:	Documentation/devicetree/bindings/i2c/ina209.txt
+@@ -5401,7 +5401,7 @@ F:	drivers/hwmon/ina209.c
+ 
+ INA2XX HARDWARE MONITOR DRIVER
+ M:	Guenter Roeck <linux@roeck-us.net>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/hwmon/ina2xx
+ F:	drivers/hwmon/ina2xx.c
+@@ -5884,7 +5884,7 @@ F:	drivers/isdn/hardware/eicon/
+ 
+ IT87 HARDWARE MONITORING DRIVER
+ M:	Jean Delvare <jdelvare@suse.com>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/hwmon/it87
+ F:	drivers/hwmon/it87.c
+@@ -5920,7 +5920,7 @@ F:	drivers/media/dvb-frontends/ix2505v*
+ 
+ JC42.4 TEMPERATURE SENSOR DRIVER
+ M:	Guenter Roeck <linux@roeck-us.net>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	drivers/hwmon/jc42.c
+ F:	Documentation/hwmon/jc42
+@@ -5970,14 +5970,14 @@ F:	drivers/tty/serial/jsm/
+ 
+ K10TEMP HARDWARE MONITORING DRIVER
+ M:	Clemens Ladisch <clemens@ladisch.de>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/hwmon/k10temp
+ F:	drivers/hwmon/k10temp.c
+ 
+ K8TEMP HARDWARE MONITORING DRIVER
+ M:	Rudolf Marek <r.marek@assembler.cz>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/hwmon/k8temp
+ F:	drivers/hwmon/k8temp.c
+@@ -6485,27 +6485,27 @@ F:	net/llc/
+ 
+ LM73 HARDWARE MONITOR DRIVER
+ M:	Guillaume Ligneul <guillaume.ligneul@gmail.com>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	drivers/hwmon/lm73.c
+ 
+ LM78 HARDWARE MONITOR DRIVER
+ M:	Jean Delvare <jdelvare@suse.com>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/hwmon/lm78
+ F:	drivers/hwmon/lm78.c
+ 
+ LM83 HARDWARE MONITOR DRIVER
+ M:	Jean Delvare <jdelvare@suse.com>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/hwmon/lm83
+ F:	drivers/hwmon/lm83.c
+ 
+ LM90 HARDWARE MONITOR DRIVER
+ M:	Jean Delvare <jdelvare@suse.com>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/hwmon/lm90
+ F:	Documentation/devicetree/bindings/hwmon/lm90.txt
+@@ -6513,7 +6513,7 @@ F:	drivers/hwmon/lm90.c
+ 
+ LM95234 HARDWARE MONITOR DRIVER
+ M:	Guenter Roeck <linux@roeck-us.net>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/hwmon/lm95234
+ F:	drivers/hwmon/lm95234.c
+@@ -6580,7 +6580,7 @@ F:	drivers/scsi/sym53c8xx_2/
+ 
+ LTC4261 HARDWARE MONITOR DRIVER
+ M:	Guenter Roeck <linux@roeck-us.net>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/hwmon/ltc4261
+ F:	drivers/hwmon/ltc4261.c
+@@ -6749,28 +6749,28 @@ F:	include/uapi/linux/matroxfb.h
+ 
+ MAX16065 HARDWARE MONITOR DRIVER
+ M:	Guenter Roeck <linux@roeck-us.net>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/hwmon/max16065
+ F:	drivers/hwmon/max16065.c
+ 
+ MAX20751 HARDWARE MONITOR DRIVER
+ M:	Guenter Roeck <linux@roeck-us.net>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/hwmon/max20751
+ F:	drivers/hwmon/max20751.c
+ 
+ MAX6650 HARDWARE MONITOR AND FAN CONTROLLER DRIVER
+ M:	"Hans J. Koch" <hjk@hansjkoch.de>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/hwmon/max6650
+ F:	drivers/hwmon/max6650.c
+ 
+ MAX6697 HARDWARE MONITOR DRIVER
+ M:	Guenter Roeck <linux@roeck-us.net>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/hwmon/max6697
+ F:	Documentation/devicetree/bindings/i2c/max6697.txt
+@@ -7303,7 +7303,7 @@ F:	drivers/scsi/NCR_D700.*
+ 
+ NCT6775 HARDWARE MONITOR DRIVER
+ M:	Guenter Roeck <linux@roeck-us.net>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/hwmon/nct6775
+ F:	drivers/hwmon/nct6775.c
+@@ -8064,7 +8064,7 @@ F:	drivers/video/logo/logo_parisc*
+ 
+ PC87360 HARDWARE MONITORING DRIVER
+ M:	Jim Cromie <jim.cromie@gmail.com>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/hwmon/pc87360
+ F:	drivers/hwmon/pc87360.c
+@@ -8076,7 +8076,7 @@ F:	drivers/char/pc8736x_gpio.c
+ 
+ PC87427 HARDWARE MONITORING DRIVER
+ M:	Jean Delvare <jdelvare@suse.com>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/hwmon/pc87427
+ F:	drivers/hwmon/pc87427.c
+@@ -8415,8 +8415,8 @@ F:	drivers/rtc/rtc-puv3.c
+ 
+ PMBUS HARDWARE MONITORING DRIVERS
+ M:	Guenter Roeck <linux@roeck-us.net>
+-L:	lm-sensors@lm-sensors.org
+-W:	http://www.lm-sensors.org/
++L:	linux-hwmon@vger.kernel.org
++W:	http://hwmon.wiki.kernel.org/
+ W:	http://www.roeck-us.net/linux/drivers/
+ T:	git git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging.git
+ S:	Maintained
+@@ -8610,7 +8610,7 @@ F:	drivers/media/usb/pwc/*
+ 
+ PWM FAN DRIVER
+ M:	Kamil Debski <k.debski@samsung.com>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Supported
+ F:	Documentation/devicetree/bindings/hwmon/pwm-fan.txt
+ F:	Documentation/hwmon/pwm-fan
+@@ -9882,28 +9882,28 @@ F:	Documentation/devicetree/bindings/media/i2c/nokia,smia.txt
+ 
+ SMM665 HARDWARE MONITOR DRIVER
+ M:	Guenter Roeck <linux@roeck-us.net>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/hwmon/smm665
+ F:	drivers/hwmon/smm665.c
+ 
+ SMSC EMC2103 HARDWARE MONITOR DRIVER
+ M:	Steve Glendinning <steve.glendinning@shawell.net>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/hwmon/emc2103
+ F:	drivers/hwmon/emc2103.c
+ 
+ SMSC SCH5627 HARDWARE MONITOR DRIVER
+ M:	Hans de Goede <hdegoede@redhat.com>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Supported
+ F:	Documentation/hwmon/sch5627
+ F:	drivers/hwmon/sch5627.c
+ 
+ SMSC47B397 HARDWARE MONITOR DRIVER
+ M:	Jean Delvare <jdelvare@suse.com>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/hwmon/smsc47b397
+ F:	drivers/hwmon/smsc47b397.c
+@@ -10830,7 +10830,7 @@ F:	include/linux/mmc/sh_mobile_sdhi.h
+ 
+ TMP401 HARDWARE MONITOR DRIVER
+ M:	Guenter Roeck <linux@roeck-us.net>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/hwmon/tmp401
+ F:	drivers/hwmon/tmp401.c
+@@ -11564,14 +11564,14 @@ F:	Documentation/networking/vrf.txt
+ 
+ VT1211 HARDWARE MONITOR DRIVER
+ M:	Juerg Haefliger <juergh@gmail.com>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/hwmon/vt1211
+ F:	drivers/hwmon/vt1211.c
+ 
+ VT8231 HARDWARE MONITOR DRIVER
+ M:	Roger Lucas <vt8231@hiddenengine.co.uk>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	drivers/hwmon/vt8231.c
+ 
+@@ -11590,21 +11590,21 @@ F:	drivers/w1/
+ 
+ W83791D HARDWARE MONITORING DRIVER
+ M:	Marc Hulsman <m.hulsman@tudelft.nl>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/hwmon/w83791d
+ F:	drivers/hwmon/w83791d.c
+ 
+ W83793 HARDWARE MONITORING DRIVER
+ M:	Rudolf Marek <r.marek@assembler.cz>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	Documentation/hwmon/w83793
+ F:	drivers/hwmon/w83793.c
+ 
+ W83795 HARDWARE MONITORING DRIVER
+ M:	Jean Delvare <jdelvare@suse.com>
+-L:	lm-sensors@lm-sensors.org
++L:	linux-hwmon@vger.kernel.org
+ S:	Maintained
+ F:	drivers/hwmon/w83795.c
+ 
+diff --git a/Makefile b/Makefile
+index 87d12b44ab66..5a493e785aca 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 4
+-SUBLEVEL = 6
++SUBLEVEL = 7
+ EXTRAVERSION =
+ NAME = Blurry Fish Butt
+ 
+diff --git a/arch/arc/include/asm/bitops.h b/arch/arc/include/asm/bitops.h
+index 57c1f33844d4..0352fb8d21b9 100644
+--- a/arch/arc/include/asm/bitops.h
++++ b/arch/arc/include/asm/bitops.h
+@@ -35,21 +35,6 @@ static inline void op##_bit(unsigned long nr, volatile unsigned long *m)\
+ 									\
+ 	m += nr >> 5;							\
+ 									\
+-	/*								\
+-	 * ARC ISA micro-optimization:					\
+-	 *								\
+-	 * Instructions dealing with bitpos only consider lower 5 bits	\
+-	 * e.g (x << 33) is handled like (x << 1) by ASL instruction	\
+-	 *  (mem pointer still needs adjustment to point to next word)	\
+-	 *								\
+-	 * Hence the masking to clamp @nr arg can be elided in general.	\
+-	 *								\
+-	 * However if @nr is a constant (above assumed in a register),	\
+-	 * and greater than 31, gcc can optimize away (x << 33) to 0,	\
+-	 * as overflow, given the 32-bit ISA. Thus masking needs to be	\
+-	 * done for const @nr, but no code is generated due to gcc	\
+-	 * const prop.							\
+-	 */								\
+ 	nr &= 0x1f;							\
+ 									\
+ 	__asm__ __volatile__(						\
+diff --git a/arch/arc/include/asm/io.h b/arch/arc/include/asm/io.h
+index 694ece8a0243..27b17adea50d 100644
+--- a/arch/arc/include/asm/io.h
++++ b/arch/arc/include/asm/io.h
+@@ -129,15 +129,23 @@ static inline void __raw_writel(u32 w, volatile void __iomem *addr)
+ #define writel(v,c)		({ __iowmb(); writel_relaxed(v,c); })
+ 
+ /*
+- * Relaxed API for drivers which can handle any ordering themselves
++ * Relaxed API for drivers which can handle barrier ordering themselves
++ *
++ * Also these are defined to perform little endian accesses.
++ * To provide the typical device register semantics of fixed endian,
++ * swap the byte order for Big Endian
++ *
++ * http://lkml.kernel.org/r/201603100845.30602.arnd@arndb.de
+  */
+ #define readb_relaxed(c)	__raw_readb(c)
+-#define readw_relaxed(c)	__raw_readw(c)
+-#define readl_relaxed(c)	__raw_readl(c)
++#define readw_relaxed(c) ({ u16 __r = le16_to_cpu((__force __le16) \
++					__raw_readw(c)); __r; })
++#define readl_relaxed(c) ({ u32 __r = le32_to_cpu((__force __le32) \
++					__raw_readl(c)); __r; })
+ 
+ #define writeb_relaxed(v,c)	__raw_writeb(v,c)
+-#define writew_relaxed(v,c)	__raw_writew(v,c)
+-#define writel_relaxed(v,c)	__raw_writel(v,c)
++#define writew_relaxed(v,c)	__raw_writew((__force u16) cpu_to_le16(v),c)
++#define writel_relaxed(v,c)	__raw_writel((__force u32) cpu_to_le32(v),c)
+ 
+ #include <asm-generic/io.h>
+ 
+diff --git a/arch/arm/boot/dts/at91-sama5d3_xplained.dts b/arch/arm/boot/dts/at91-sama5d3_xplained.dts
+index ff888d21c786..f3e2b96c06a3 100644
+--- a/arch/arm/boot/dts/at91-sama5d3_xplained.dts
++++ b/arch/arm/boot/dts/at91-sama5d3_xplained.dts
+@@ -303,6 +303,7 @@
+ 		regulator-name = "mmc0-card-supply";
+ 		regulator-min-microvolt = <3300000>;
+ 		regulator-max-microvolt = <3300000>;
++		regulator-always-on;
+ 	};
+ 
+ 	gpio_keys {
+diff --git a/arch/arm/boot/dts/at91-sama5d4_xplained.dts b/arch/arm/boot/dts/at91-sama5d4_xplained.dts
+index 569026e8f96c..da84e65b56ef 100644
+--- a/arch/arm/boot/dts/at91-sama5d4_xplained.dts
++++ b/arch/arm/boot/dts/at91-sama5d4_xplained.dts
+@@ -268,5 +268,6 @@
+ 		regulator-min-microvolt = <3300000>;
+ 		regulator-max-microvolt = <3300000>;
+ 		vin-supply = <&vcc_3v3_reg>;
++		regulator-always-on;
+ 	};
+ };
+diff --git a/arch/arm/mach-s3c64xx/dev-audio.c b/arch/arm/mach-s3c64xx/dev-audio.c
+index ff780a8d8366..9a42736ef4ac 100644
+--- a/arch/arm/mach-s3c64xx/dev-audio.c
++++ b/arch/arm/mach-s3c64xx/dev-audio.c
+@@ -54,12 +54,12 @@ static int s3c64xx_i2s_cfg_gpio(struct platform_device *pdev)
+ 
+ static struct resource s3c64xx_iis0_resource[] = {
+ 	[0] = DEFINE_RES_MEM(S3C64XX_PA_IIS0, SZ_256),
+-	[1] = DEFINE_RES_DMA(DMACH_I2S0_OUT),
+-	[2] = DEFINE_RES_DMA(DMACH_I2S0_IN),
+ };
+ 
+-static struct s3c_audio_pdata i2sv3_pdata = {
++static struct s3c_audio_pdata i2s0_pdata = {
+ 	.cfg_gpio = s3c64xx_i2s_cfg_gpio,
++	.dma_playback = DMACH_I2S0_OUT,
++	.dma_capture = DMACH_I2S0_IN,
+ };
+ 
+ struct platform_device s3c64xx_device_iis0 = {
+@@ -68,15 +68,19 @@ struct platform_device s3c64xx_device_iis0 = {
+ 	.num_resources	  = ARRAY_SIZE(s3c64xx_iis0_resource),
+ 	.resource	  = s3c64xx_iis0_resource,
+ 	.dev = {
+-		.platform_data = &i2sv3_pdata,
++		.platform_data = &i2s0_pdata,
+ 	},
+ };
+ EXPORT_SYMBOL(s3c64xx_device_iis0);
+ 
+ static struct resource s3c64xx_iis1_resource[] = {
+ 	[0] = DEFINE_RES_MEM(S3C64XX_PA_IIS1, SZ_256),
+-	[1] = DEFINE_RES_DMA(DMACH_I2S1_OUT),
+-	[2] = DEFINE_RES_DMA(DMACH_I2S1_IN),
++};
++
++static struct s3c_audio_pdata i2s1_pdata = {
++	.cfg_gpio = s3c64xx_i2s_cfg_gpio,
++	.dma_playback = DMACH_I2S1_OUT,
++	.dma_capture = DMACH_I2S1_IN,
+ };
+ 
+ struct platform_device s3c64xx_device_iis1 = {
+@@ -85,19 +89,19 @@ struct platform_device s3c64xx_device_iis1 = {
+ 	.num_resources	  = ARRAY_SIZE(s3c64xx_iis1_resource),
+ 	.resource	  = s3c64xx_iis1_resource,
+ 	.dev = {
+-		.platform_data = &i2sv3_pdata,
++		.platform_data = &i2s1_pdata,
+ 	},
+ };
+ EXPORT_SYMBOL(s3c64xx_device_iis1);
+ 
+ static struct resource s3c64xx_iisv4_resource[] = {
+ 	[0] = DEFINE_RES_MEM(S3C64XX_PA_IISV4, SZ_256),
+-	[1] = DEFINE_RES_DMA(DMACH_HSI_I2SV40_TX),
+-	[2] = DEFINE_RES_DMA(DMACH_HSI_I2SV40_RX),
+ };
+ 
+ static struct s3c_audio_pdata i2sv4_pdata = {
+ 	.cfg_gpio = s3c64xx_i2s_cfg_gpio,
++	.dma_playback = DMACH_HSI_I2SV40_TX,
++	.dma_capture = DMACH_HSI_I2SV40_RX,
+ 	.type = {
+ 		.i2s = {
+ 			.quirks = QUIRK_PRI_6CHAN,
+@@ -142,12 +146,12 @@ static int s3c64xx_pcm_cfg_gpio(struct platform_device *pdev)
+ 
+ static struct resource s3c64xx_pcm0_resource[] = {
+ 	[0] = DEFINE_RES_MEM(S3C64XX_PA_PCM0, SZ_256),
+-	[1] = DEFINE_RES_DMA(DMACH_PCM0_TX),
+-	[2] = DEFINE_RES_DMA(DMACH_PCM0_RX),
+ };
+ 
+ static struct s3c_audio_pdata s3c_pcm0_pdata = {
+ 	.cfg_gpio = s3c64xx_pcm_cfg_gpio,
++	.dma_capture = DMACH_PCM0_RX,
++	.dma_playback = DMACH_PCM0_TX,
+ };
+ 
+ struct platform_device s3c64xx_device_pcm0 = {
+@@ -163,12 +167,12 @@ EXPORT_SYMBOL(s3c64xx_device_pcm0);
+ 
+ static struct resource s3c64xx_pcm1_resource[] = {
+ 	[0] = DEFINE_RES_MEM(S3C64XX_PA_PCM1, SZ_256),
+-	[1] = DEFINE_RES_DMA(DMACH_PCM1_TX),
+-	[2] = DEFINE_RES_DMA(DMACH_PCM1_RX),
+ };
+ 
+ static struct s3c_audio_pdata s3c_pcm1_pdata = {
+ 	.cfg_gpio = s3c64xx_pcm_cfg_gpio,
++	.dma_playback = DMACH_PCM1_TX,
++	.dma_capture = DMACH_PCM1_RX,
+ };
+ 
+ struct platform_device s3c64xx_device_pcm1 = {
+@@ -196,13 +200,14 @@ static int s3c64xx_ac97_cfg_gpe(struct platform_device *pdev)
+ 
+ static struct resource s3c64xx_ac97_resource[] = {
+ 	[0] = DEFINE_RES_MEM(S3C64XX_PA_AC97, SZ_256),
+-	[1] = DEFINE_RES_DMA(DMACH_AC97_PCMOUT),
+-	[2] = DEFINE_RES_DMA(DMACH_AC97_PCMIN),
+-	[3] = DEFINE_RES_DMA(DMACH_AC97_MICIN),
+-	[4] = DEFINE_RES_IRQ(IRQ_AC97),
++	[1] = DEFINE_RES_IRQ(IRQ_AC97),
+ };
+ 
+-static struct s3c_audio_pdata s3c_ac97_pdata;
++static struct s3c_audio_pdata s3c_ac97_pdata = {
++	.dma_playback = DMACH_AC97_PCMOUT,
++	.dma_capture = DMACH_AC97_PCMIN,
++	.dma_capture_mic = DMACH_AC97_MICIN,
++};
+ 
+ static u64 s3c64xx_ac97_dmamask = DMA_BIT_MASK(32);
+ 
+diff --git a/arch/arm/mach-s3c64xx/include/mach/dma.h b/arch/arm/mach-s3c64xx/include/mach/dma.h
+index 096e14073bd9..9c739eafe95c 100644
+--- a/arch/arm/mach-s3c64xx/include/mach/dma.h
++++ b/arch/arm/mach-s3c64xx/include/mach/dma.h
+@@ -14,38 +14,38 @@
+ #define S3C64XX_DMA_CHAN(name)		((unsigned long)(name))
+ 
+ /* DMA0/SDMA0 */
+-#define DMACH_UART0		S3C64XX_DMA_CHAN("uart0_tx")
+-#define DMACH_UART0_SRC2	S3C64XX_DMA_CHAN("uart0_rx")
+-#define DMACH_UART1		S3C64XX_DMA_CHAN("uart1_tx")
+-#define DMACH_UART1_SRC2	S3C64XX_DMA_CHAN("uart1_rx")
+-#define DMACH_UART2		S3C64XX_DMA_CHAN("uart2_tx")
+-#define DMACH_UART2_SRC2	S3C64XX_DMA_CHAN("uart2_rx")
+-#define DMACH_UART3		S3C64XX_DMA_CHAN("uart3_tx")
+-#define DMACH_UART3_SRC2	S3C64XX_DMA_CHAN("uart3_rx")
+-#define DMACH_PCM0_TX		S3C64XX_DMA_CHAN("pcm0_tx")
+-#define DMACH_PCM0_RX		S3C64XX_DMA_CHAN("pcm0_rx")
+-#define DMACH_I2S0_OUT		S3C64XX_DMA_CHAN("i2s0_tx")
+-#define DMACH_I2S0_IN		S3C64XX_DMA_CHAN("i2s0_rx")
++#define DMACH_UART0		"uart0_tx"
++#define DMACH_UART0_SRC2	"uart0_rx"
++#define DMACH_UART1		"uart1_tx"
++#define DMACH_UART1_SRC2	"uart1_rx"
++#define DMACH_UART2		"uart2_tx"
++#define DMACH_UART2_SRC2	"uart2_rx"
++#define DMACH_UART3		"uart3_tx"
++#define DMACH_UART3_SRC2	"uart3_rx"
++#define DMACH_PCM0_TX		"pcm0_tx"
++#define DMACH_PCM0_RX		"pcm0_rx"
++#define DMACH_I2S0_OUT		"i2s0_tx"
++#define DMACH_I2S0_IN		"i2s0_rx"
+ #define DMACH_SPI0_TX		S3C64XX_DMA_CHAN("spi0_tx")
+ #define DMACH_SPI0_RX		S3C64XX_DMA_CHAN("spi0_rx")
+-#define DMACH_HSI_I2SV40_TX	S3C64XX_DMA_CHAN("i2s2_tx")
+-#define DMACH_HSI_I2SV40_RX	S3C64XX_DMA_CHAN("i2s2_rx")
++#define DMACH_HSI_I2SV40_TX	"i2s2_tx"
++#define DMACH_HSI_I2SV40_RX	"i2s2_rx"
+ 
+ /* DMA1/SDMA1 */
+-#define DMACH_PCM1_TX		S3C64XX_DMA_CHAN("pcm1_tx")
+-#define DMACH_PCM1_RX		S3C64XX_DMA_CHAN("pcm1_rx")
+-#define DMACH_I2S1_OUT		S3C64XX_DMA_CHAN("i2s1_tx")
+-#define DMACH_I2S1_IN		S3C64XX_DMA_CHAN("i2s1_rx")
++#define DMACH_PCM1_TX		"pcm1_tx"
++#define DMACH_PCM1_RX		"pcm1_rx"
++#define DMACH_I2S1_OUT		"i2s1_tx"
++#define DMACH_I2S1_IN		"i2s1_rx"
+ #define DMACH_SPI1_TX		S3C64XX_DMA_CHAN("spi1_tx")
+ #define DMACH_SPI1_RX		S3C64XX_DMA_CHAN("spi1_rx")
+-#define DMACH_AC97_PCMOUT	S3C64XX_DMA_CHAN("ac97_out")
+-#define DMACH_AC97_PCMIN	S3C64XX_DMA_CHAN("ac97_in")
+-#define DMACH_AC97_MICIN	S3C64XX_DMA_CHAN("ac97_mic")
+-#define DMACH_PWM		S3C64XX_DMA_CHAN("pwm")
+-#define DMACH_IRDA		S3C64XX_DMA_CHAN("irda")
+-#define DMACH_EXTERNAL		S3C64XX_DMA_CHAN("external")
+-#define DMACH_SECURITY_RX	S3C64XX_DMA_CHAN("sec_rx")
+-#define DMACH_SECURITY_TX	S3C64XX_DMA_CHAN("sec_tx")
++#define DMACH_AC97_PCMOUT	"ac97_out"
++#define DMACH_AC97_PCMIN	"ac97_in"
++#define DMACH_AC97_MICIN	"ac97_mic"
++#define DMACH_PWM		"pwm"
++#define DMACH_IRDA		"irda"
++#define DMACH_EXTERNAL		"external"
++#define DMACH_SECURITY_RX	"sec_rx"
++#define DMACH_SECURITY_TX	"sec_tx"
+ 
+ enum dma_ch {
+ 	DMACH_MAX = 32
+diff --git a/arch/arm/plat-samsung/devs.c b/arch/arm/plat-samsung/devs.c
+index 82074625de5c..e212f9d804bd 100644
+--- a/arch/arm/plat-samsung/devs.c
++++ b/arch/arm/plat-samsung/devs.c
+@@ -65,6 +65,7 @@
+ #include <linux/platform_data/usb-ohci-s3c2410.h>
+ #include <plat/usb-phy.h>
+ #include <plat/regs-spi.h>
++#include <linux/platform_data/asoc-s3c.h>
+ #include <linux/platform_data/spi-s3c64xx.h>
+ 
+ static u64 samsung_device_dma_mask = DMA_BIT_MASK(32);
+@@ -74,9 +75,12 @@ static u64 samsung_device_dma_mask = DMA_BIT_MASK(32);
+ static struct resource s3c_ac97_resource[] = {
+ 	[0] = DEFINE_RES_MEM(S3C2440_PA_AC97, S3C2440_SZ_AC97),
+ 	[1] = DEFINE_RES_IRQ(IRQ_S3C244X_AC97),
+-	[2] = DEFINE_RES_DMA_NAMED(DMACH_PCM_OUT, "PCM out"),
+-	[3] = DEFINE_RES_DMA_NAMED(DMACH_PCM_IN, "PCM in"),
+-	[4] = DEFINE_RES_DMA_NAMED(DMACH_MIC_IN, "Mic in"),
++};
++
++static struct s3c_audio_pdata s3c_ac97_pdata = {
++	.dma_playback = (void *)DMACH_PCM_OUT,
++	.dma_capture = (void *)DMACH_PCM_IN,
++	.dma_capture_mic = (void *)DMACH_MIC_IN,
+ };
+ 
+ struct platform_device s3c_device_ac97 = {
+@@ -87,6 +91,7 @@ struct platform_device s3c_device_ac97 = {
+ 	.dev		= {
+ 		.dma_mask		= &samsung_device_dma_mask,
+ 		.coherent_dma_mask	= DMA_BIT_MASK(32),
++		.platform_data		= &s3c_ac97_pdata,
+ 	}
+ };
+ #endif /* CONFIG_CPU_S3C2440 */
+diff --git a/arch/ia64/include/asm/io.h b/arch/ia64/include/asm/io.h
+index 9041bbe2b7b4..8fdb9c7eeb66 100644
+--- a/arch/ia64/include/asm/io.h
++++ b/arch/ia64/include/asm/io.h
+@@ -436,6 +436,7 @@ static inline void __iomem * ioremap_cache (unsigned long phys_addr, unsigned lo
+ 	return ioremap(phys_addr, size);
+ }
+ #define ioremap_cache ioremap_cache
++#define ioremap_uc ioremap_nocache
+ 
+ 
+ /*
+diff --git a/arch/s390/include/asm/pci.h b/arch/s390/include/asm/pci.h
+index c873e682b67f..2b2ced9dc00a 100644
+--- a/arch/s390/include/asm/pci.h
++++ b/arch/s390/include/asm/pci.h
+@@ -45,7 +45,7 @@ struct zpci_fmb {
+ 	u64 rpcit_ops;
+ 	u64 dma_rbytes;
+ 	u64 dma_wbytes;
+-} __packed __aligned(16);
++} __packed __aligned(64);
+ 
+ enum zpci_state {
+ 	ZPCI_FN_STATE_RESERVED,
+diff --git a/arch/s390/kernel/entry.S b/arch/s390/kernel/entry.S
+index 857b6526d298..424e6809ad07 100644
+--- a/arch/s390/kernel/entry.S
++++ b/arch/s390/kernel/entry.S
+@@ -1197,114 +1197,12 @@ cleanup_critical:
+ 	.quad	.Lpsw_idle_lpsw
+ 
+ .Lcleanup_save_fpu_regs:
+-	TSTMSK	__LC_CPU_FLAGS,_CIF_FPU
+-	bor	%r14
+-	clg	%r9,BASED(.Lcleanup_save_fpu_regs_done)
+-	jhe	5f
+-	clg	%r9,BASED(.Lcleanup_save_fpu_regs_fp)
+-	jhe	4f
+-	clg	%r9,BASED(.Lcleanup_save_fpu_regs_vx_high)
+-	jhe	3f
+-	clg	%r9,BASED(.Lcleanup_save_fpu_regs_vx_low)
+-	jhe	2f
+-	clg	%r9,BASED(.Lcleanup_save_fpu_fpc_end)
+-	jhe	1f
+-	lg	%r2,__LC_CURRENT
+-	aghi	%r2,__TASK_thread
+-0:	# Store floating-point controls
+-	stfpc	__THREAD_FPU_fpc(%r2)
+-1:	# Load register save area and check if VX is active
+-	lg	%r3,__THREAD_FPU_regs(%r2)
+-	TSTMSK	__LC_MACHINE_FLAGS,MACHINE_FLAG_VX
+-	jz	4f			  # no VX -> store FP regs
+-2:	# Store vector registers (V0-V15)
+-	VSTM	%v0,%v15,0,%r3		  # vstm 0,15,0(3)
+-3:	# Store vector registers (V16-V31)
+-	VSTM	%v16,%v31,256,%r3	  # vstm 16,31,256(3)
+-	j	5f			  # -> done, set CIF_FPU flag
+-4:	# Store floating-point registers
+-	std	0,0(%r3)
+-	std	1,8(%r3)
+-	std	2,16(%r3)
+-	std	3,24(%r3)
+-	std	4,32(%r3)
+-	std	5,40(%r3)
+-	std	6,48(%r3)
+-	std	7,56(%r3)
+-	std	8,64(%r3)
+-	std	9,72(%r3)
+-	std	10,80(%r3)
+-	std	11,88(%r3)
+-	std	12,96(%r3)
+-	std	13,104(%r3)
+-	std	14,112(%r3)
+-	std	15,120(%r3)
+-5:	# Set CIF_FPU flag
+-	oi	__LC_CPU_FLAGS+7,_CIF_FPU
+-	lg	%r9,48(%r11)		# return from save_fpu_regs
++	larl	%r9,save_fpu_regs
+ 	br	%r14
+-.Lcleanup_save_fpu_fpc_end:
+-	.quad	.Lsave_fpu_regs_fpc_end
+-.Lcleanup_save_fpu_regs_vx_low:
+-	.quad	.Lsave_fpu_regs_vx_low
+-.Lcleanup_save_fpu_regs_vx_high:
+-	.quad	.Lsave_fpu_regs_vx_high
+-.Lcleanup_save_fpu_regs_fp:
+-	.quad	.Lsave_fpu_regs_fp
+-.Lcleanup_save_fpu_regs_done:
+-	.quad	.Lsave_fpu_regs_done
+ 
+ .Lcleanup_load_fpu_regs:
+-	TSTMSK	__LC_CPU_FLAGS,_CIF_FPU
+-	bnor	%r14
+-	clg	%r9,BASED(.Lcleanup_load_fpu_regs_done)
+-	jhe	1f
+-	clg	%r9,BASED(.Lcleanup_load_fpu_regs_fp)
+-	jhe	2f
+-	clg	%r9,BASED(.Lcleanup_load_fpu_regs_vx_high)
+-	jhe	3f
+-	clg	%r9,BASED(.Lcleanup_load_fpu_regs_vx)
+-	jhe	4f
+-	lg	%r4,__LC_CURRENT
+-	aghi	%r4,__TASK_thread
+-	lfpc	__THREAD_FPU_fpc(%r4)
+-	TSTMSK	__LC_MACHINE_FLAGS,MACHINE_FLAG_VX
+-	lg	%r4,__THREAD_FPU_regs(%r4)	# %r4 <- reg save area
+-	jz	2f				# -> no VX, load FP regs
+-4:	# Load V0 ..V15 registers
+-	VLM	%v0,%v15,0,%r4
+-3:	# Load V16..V31 registers
+-	VLM	%v16,%v31,256,%r4
+-	j	1f
+-2:	# Load floating-point registers
+-	ld	0,0(%r4)
+-	ld	1,8(%r4)
+-	ld	2,16(%r4)
+-	ld	3,24(%r4)
+-	ld	4,32(%r4)
+-	ld	5,40(%r4)
+-	ld	6,48(%r4)
+-	ld	7,56(%r4)
+-	ld	8,64(%r4)
+-	ld	9,72(%r4)
+-	ld	10,80(%r4)
+-	ld	11,88(%r4)
+-	ld	12,96(%r4)
+-	ld	13,104(%r4)
+-	ld	14,112(%r4)
+-	ld	15,120(%r4)
+-1:	# Clear CIF_FPU bit
+-	ni	__LC_CPU_FLAGS+7,255-_CIF_FPU
+-	lg	%r9,48(%r11)		# return from load_fpu_regs
++	larl	%r9,load_fpu_regs
+ 	br	%r14
+-.Lcleanup_load_fpu_regs_vx:
+-	.quad	.Lload_fpu_regs_vx
+-.Lcleanup_load_fpu_regs_vx_high:
+-	.quad	.Lload_fpu_regs_vx_high
+-.Lcleanup_load_fpu_regs_fp:
+-	.quad	.Lload_fpu_regs_fp
+-.Lcleanup_load_fpu_regs_done:
+-	.quad	.Lload_fpu_regs_done
+ 
+ /*
+  * Integer constants
+diff --git a/arch/s390/kernel/head64.S b/arch/s390/kernel/head64.S
+index 58b719fa8067..1ad2407c7f75 100644
+--- a/arch/s390/kernel/head64.S
++++ b/arch/s390/kernel/head64.S
+@@ -16,7 +16,7 @@
+ 
+ __HEAD
+ ENTRY(startup_continue)
+-	tm	__LC_STFL_FAC_LIST+6,0x80	# LPP available ?
++	tm	__LC_STFL_FAC_LIST+5,0x80	# LPP available ?
+ 	jz	0f
+ 	xc	__LC_LPP+1(7,0),__LC_LPP+1	# clear lpp and current_pid
+ 	mvi	__LC_LPP,0x80			#   and set LPP_MAGIC
+diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
+index c837bcacf218..1f581eb61bc2 100644
+--- a/arch/s390/kernel/setup.c
++++ b/arch/s390/kernel/setup.c
+@@ -329,6 +329,7 @@ static void __init setup_lowcore(void)
+ 		+ PAGE_SIZE - STACK_FRAME_OVERHEAD - sizeof(struct pt_regs);
+ 	lc->current_task = (unsigned long) init_thread_union.thread_info.task;
+ 	lc->thread_info = (unsigned long) &init_thread_union;
++	lc->lpp = LPP_MAGIC;
+ 	lc->machine_flags = S390_lowcore.machine_flags;
+ 	lc->stfl_fac_list = S390_lowcore.stfl_fac_list;
+ 	memcpy(lc->stfle_fac_list, S390_lowcore.stfle_fac_list,
+diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
+index 7ef12a3ace3a..19442395f413 100644
+--- a/arch/s390/pci/pci.c
++++ b/arch/s390/pci/pci.c
+@@ -871,8 +871,11 @@ static inline int barsize(u8 size)
+ 
+ static int zpci_mem_init(void)
+ {
++	BUILD_BUG_ON(!is_power_of_2(__alignof__(struct zpci_fmb)) ||
++		     __alignof__(struct zpci_fmb) < sizeof(struct zpci_fmb));
++
+ 	zdev_fmb_cache = kmem_cache_create("PCI_FMB_cache", sizeof(struct zpci_fmb),
+-				16, 0, NULL);
++					   __alignof__(struct zpci_fmb), 0, NULL);
+ 	if (!zdev_fmb_cache)
+ 		goto error_zdev;
+ 
+diff --git a/arch/sh/mm/kmap.c b/arch/sh/mm/kmap.c
+index ec29e14ec5a8..bf25d7c79a2d 100644
+--- a/arch/sh/mm/kmap.c
++++ b/arch/sh/mm/kmap.c
+@@ -36,6 +36,7 @@ void *kmap_coherent(struct page *page, unsigned long addr)
+ 
+ 	BUG_ON(!test_bit(PG_dcache_clean, &page->flags));
+ 
++	preempt_disable();
+ 	pagefault_disable();
+ 
+ 	idx = FIX_CMAP_END -
+@@ -64,4 +65,5 @@ void kunmap_coherent(void *kvaddr)
+ 	}
+ 
+ 	pagefault_enable();
++	preempt_enable();
+ }
+diff --git a/arch/um/drivers/mconsole_kern.c b/arch/um/drivers/mconsole_kern.c
+index 29880c9b324e..e22e57298522 100644
+--- a/arch/um/drivers/mconsole_kern.c
++++ b/arch/um/drivers/mconsole_kern.c
+@@ -133,7 +133,7 @@ void mconsole_proc(struct mc_request *req)
+ 	ptr += strlen("proc");
+ 	ptr = skip_spaces(ptr);
+ 
+-	file = file_open_root(mnt->mnt_root, mnt, ptr, O_RDONLY);
++	file = file_open_root(mnt->mnt_root, mnt, ptr, O_RDONLY, 0);
+ 	if (IS_ERR(file)) {
+ 		mconsole_reply(req, "Failed to open file", 1, 0);
+ 		printk(KERN_ERR "open /proc/%s: %ld\n", ptr, PTR_ERR(file));
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index db3622f22b61..436639a31624 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -1126,22 +1126,23 @@ config MICROCODE
+ 	bool "CPU microcode loading support"
+ 	default y
+ 	depends on CPU_SUP_AMD || CPU_SUP_INTEL
+-	depends on BLK_DEV_INITRD
+ 	select FW_LOADER
+ 	---help---
+-
+ 	  If you say Y here, you will be able to update the microcode on
+-	  certain Intel and AMD processors. The Intel support is for the
+-	  IA32 family, e.g. Pentium Pro, Pentium II, Pentium III, Pentium 4,
+-	  Xeon etc. The AMD support is for families 0x10 and later. You will
+-	  obviously need the actual microcode binary data itself which is not
+-	  shipped with the Linux kernel.
+-
+-	  This option selects the general module only, you need to select
+-	  at least one vendor specific module as well.
+-
+-	  To compile this driver as a module, choose M here: the module
+-	  will be called microcode.
++	  Intel and AMD processors. The Intel support is for the IA32 family,
++	  e.g. Pentium Pro, Pentium II, Pentium III, Pentium 4, Xeon etc. The
++	  AMD support is for families 0x10 and later. You will obviously need
++	  the actual microcode binary data itself which is not shipped with
++	  the Linux kernel.
++
++	  The preferred method to load microcode from a detached initrd is described
++	  in Documentation/x86/early-microcode.txt. For that you need to enable
++	  CONFIG_BLK_DEV_INITRD in order for the loader to be able to scan the
++	  initrd for microcode blobs.
++
++	  In addition, you can build-in the microcode into the kernel. For that you
++	  need to enable FIRMWARE_IN_KERNEL and add the vendor-supplied microcode
++	  to the CONFIG_EXTRA_FIRMWARE config option.
+ 
+ config MICROCODE_INTEL
+ 	bool "Intel microcode loading support"
+diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
+index 03663740c866..1a4477cedc49 100644
+--- a/arch/x86/entry/common.c
++++ b/arch/x86/entry/common.c
+@@ -268,6 +268,7 @@ static void exit_to_usermode_loop(struct pt_regs *regs, u32 cached_flags)
+ /* Called with IRQs disabled. */
+ __visible inline void prepare_exit_to_usermode(struct pt_regs *regs)
+ {
++	struct thread_info *ti = pt_regs_to_thread_info(regs);
+ 	u32 cached_flags;
+ 
+ 	if (IS_ENABLED(CONFIG_PROVE_LOCKING) && WARN_ON(!irqs_disabled()))
+@@ -275,12 +276,22 @@ __visible inline void prepare_exit_to_usermode(struct pt_regs *regs)
+ 
+ 	lockdep_sys_exit();
+ 
+-	cached_flags =
+-		READ_ONCE(pt_regs_to_thread_info(regs)->flags);
++	cached_flags = READ_ONCE(ti->flags);
+ 
+ 	if (unlikely(cached_flags & EXIT_TO_USERMODE_LOOP_FLAGS))
+ 		exit_to_usermode_loop(regs, cached_flags);
+ 
++#ifdef CONFIG_COMPAT
++	/*
++	 * Compat syscalls set TS_COMPAT.  Make sure we clear it before
++	 * returning to user mode.  We need to clear it *after* signal
++	 * handling, because syscall restart has a fixup for compat
++	 * syscalls.  The fixup is exercised by the ptrace_syscall_32
++	 * selftest.
++	 */
++	ti->status &= ~TS_COMPAT;
++#endif
++
+ 	user_enter();
+ }
+ 
+@@ -332,14 +343,6 @@ __visible inline void syscall_return_slowpath(struct pt_regs *regs)
+ 	if (unlikely(cached_flags & SYSCALL_EXIT_WORK_FLAGS))
+ 		syscall_slow_exit_work(regs, cached_flags);
+ 
+-#ifdef CONFIG_COMPAT
+-	/*
+-	 * Compat syscalls set TS_COMPAT.  Make sure we clear it before
+-	 * returning to user mode.
+-	 */
+-	ti->status &= ~TS_COMPAT;
+-#endif
+-
+ 	local_irq_disable();
+ 	prepare_exit_to_usermode(regs);
+ }
+diff --git a/arch/x86/include/asm/apic.h b/arch/x86/include/asm/apic.h
+index a30316bf801a..163769d82475 100644
+--- a/arch/x86/include/asm/apic.h
++++ b/arch/x86/include/asm/apic.h
+@@ -638,8 +638,8 @@ static inline void entering_irq(void)
+ 
+ static inline void entering_ack_irq(void)
+ {
+-	ack_APIC_irq();
+ 	entering_irq();
++	ack_APIC_irq();
+ }
+ 
+ static inline void ipi_entering_ack_irq(void)
+diff --git a/arch/x86/include/asm/hw_irq.h b/arch/x86/include/asm/hw_irq.h
+index 1e3408e88604..59caa55fb9b5 100644
+--- a/arch/x86/include/asm/hw_irq.h
++++ b/arch/x86/include/asm/hw_irq.h
+@@ -136,6 +136,7 @@ struct irq_alloc_info {
+ struct irq_cfg {
+ 	unsigned int		dest_apicid;
+ 	u8			vector;
++	u8			old_vector;
+ };
+ 
+ extern struct irq_cfg *irq_cfg(unsigned int irq);
+diff --git a/arch/x86/include/asm/microcode.h b/arch/x86/include/asm/microcode.h
+index 34e62b1dcfce..712b24ed3a64 100644
+--- a/arch/x86/include/asm/microcode.h
++++ b/arch/x86/include/asm/microcode.h
+@@ -2,6 +2,7 @@
+ #define _ASM_X86_MICROCODE_H
+ 
+ #include <linux/earlycpio.h>
++#include <linux/initrd.h>
+ 
+ #define native_rdmsr(msr, val1, val2)			\
+ do {							\
+@@ -168,4 +169,29 @@ static inline void reload_early_microcode(void)			{ }
+ static inline bool
+ get_builtin_firmware(struct cpio_data *cd, const char *name)	{ return false; }
+ #endif
++
++static inline unsigned long get_initrd_start(void)
++{
++#ifdef CONFIG_BLK_DEV_INITRD
++	return initrd_start;
++#else
++	return 0;
++#endif
++}
++
++static inline unsigned long get_initrd_start_addr(void)
++{
++#ifdef CONFIG_BLK_DEV_INITRD
++#ifdef CONFIG_X86_32
++	unsigned long *initrd_start_p = (unsigned long *)__pa_nodebug(&initrd_start);
++
++	return (unsigned long)__pa_nodebug(*initrd_start_p);
++#else
++	return get_initrd_start();
++#endif
++#else /* CONFIG_BLK_DEV_INITRD */
++	return 0;
++#endif
++}
++
+ #endif /* _ASM_X86_MICROCODE_H */
+diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
+index 7bcb861a04e5..5a2ed3ed2f26 100644
+--- a/arch/x86/include/asm/perf_event.h
++++ b/arch/x86/include/asm/perf_event.h
+@@ -165,6 +165,7 @@ struct x86_pmu_capability {
+ #define GLOBAL_STATUS_ASIF				BIT_ULL(60)
+ #define GLOBAL_STATUS_COUNTERS_FROZEN			BIT_ULL(59)
+ #define GLOBAL_STATUS_LBRS_FROZEN			BIT_ULL(58)
++#define GLOBAL_STATUS_TRACE_TOPAPMI			BIT_ULL(55)
+ 
+ /*
+  * IBS cpuid feature detection
+diff --git a/arch/x86/include/asm/xen/hypervisor.h b/arch/x86/include/asm/xen/hypervisor.h
+index 8b2d4bea9962..39171b3646bb 100644
+--- a/arch/x86/include/asm/xen/hypervisor.h
++++ b/arch/x86/include/asm/xen/hypervisor.h
+@@ -62,4 +62,6 @@ void xen_arch_register_cpu(int num);
+ void xen_arch_unregister_cpu(int num);
+ #endif
+ 
++extern void xen_set_iopl_mask(unsigned mask);
++
+ #endif /* _ASM_X86_XEN_HYPERVISOR_H */
+diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
+index a35f6b5473f4..7af2505f20c2 100644
+--- a/arch/x86/kernel/apic/vector.c
++++ b/arch/x86/kernel/apic/vector.c
+@@ -211,6 +211,7 @@ update:
+ 	 */
+ 	cpumask_and(d->old_domain, d->old_domain, cpu_online_mask);
+ 	d->move_in_progress = !cpumask_empty(d->old_domain);
++	d->cfg.old_vector = d->move_in_progress ? d->cfg.vector : 0;
+ 	d->cfg.vector = vector;
+ 	cpumask_copy(d->domain, vector_cpumask);
+ success:
+@@ -653,46 +654,97 @@ void irq_complete_move(struct irq_cfg *cfg)
+ }
+ 
+ /*
+- * Called with @desc->lock held and interrupts disabled.
++ * Called from fixup_irqs() with @desc->lock held and interrupts disabled.
+  */
+ void irq_force_complete_move(struct irq_desc *desc)
+ {
+ 	struct irq_data *irqdata = irq_desc_get_irq_data(desc);
+ 	struct apic_chip_data *data = apic_chip_data(irqdata);
+ 	struct irq_cfg *cfg = data ? &data->cfg : NULL;
++	unsigned int cpu;
+ 
+ 	if (!cfg)
+ 		return;
+ 
+-	__irq_complete_move(cfg, cfg->vector);
+-
+ 	/*
+ 	 * This is tricky. If the cleanup of @data->old_domain has not been
+ 	 * done yet, then the following setaffinity call will fail with
+ 	 * -EBUSY. This can leave the interrupt in a stale state.
+ 	 *
+-	 * The cleanup cannot make progress because we hold @desc->lock. So in
+-	 * case @data->old_domain is not yet cleaned up, we need to drop the
+-	 * lock and acquire it again. @desc cannot go away, because the
+-	 * hotplug code holds the sparse irq lock.
++	 * All CPUs are stuck in stop machine with interrupts disabled so
++	 * calling __irq_complete_move() would be completely pointless.
+ 	 */
+ 	raw_spin_lock(&vector_lock);
+-	/* Clean out all offline cpus (including ourself) first. */
++	/*
++	 * Clean out all offline cpus (including the outgoing one) from the
++	 * old_domain mask.
++	 */
+ 	cpumask_and(data->old_domain, data->old_domain, cpu_online_mask);
+-	while (!cpumask_empty(data->old_domain)) {
++
++	/*
++	 * If move_in_progress is cleared and the old_domain mask is empty,
++	 * then there is nothing to cleanup. fixup_irqs() will take care of
++	 * the stale vectors on the outgoing cpu.
++	 */
++	if (!data->move_in_progress && cpumask_empty(data->old_domain)) {
+ 		raw_spin_unlock(&vector_lock);
+-		raw_spin_unlock(&desc->lock);
+-		cpu_relax();
+-		raw_spin_lock(&desc->lock);
++		return;
++	}
++
++	/*
++	 * 1) The interrupt is in move_in_progress state. That means that we
++	 *    have not seen an interrupt since the io_apic was reprogrammed to
++	 *    the new vector.
++	 *
++	 * 2) The interrupt has fired on the new vector, but the cleanup IPIs
++	 *    have not been processed yet.
++	 */
++	if (data->move_in_progress) {
+ 		/*
+-		 * Reevaluate apic_chip_data. It might have been cleared after
+-		 * we dropped @desc->lock.
++		 * In theory there is a race:
++		 *
++		 * set_ioapic(new_vector) <-- Interrupt is raised before update
++		 *			      is effective, i.e. it's raised on
++		 *			      the old vector.
++		 *
++		 * So if the target cpu cannot handle that interrupt before
++		 * the old vector is cleaned up, we get a spurious interrupt
++		 * and in the worst case the ioapic irq line becomes stale.
++		 *
++		 * But in case of cpu hotplug this should be a non issue
++		 * because if the affinity update happens right before all
++		 * cpus rendevouz in stop machine, there is no way that the
++		 * interrupt can be blocked on the target cpu because all cpus
++		 * loops first with interrupts enabled in stop machine, so the
++		 * old vector is not yet cleaned up when the interrupt fires.
++		 *
++		 * So the only way to run into this issue is if the delivery
++		 * of the interrupt on the apic/system bus would be delayed
++		 * beyond the point where the target cpu disables interrupts
++		 * in stop machine. I doubt that it can happen, but at least
++		 * there is a theroretical chance. Virtualization might be
++		 * able to expose this, but AFAICT the IOAPIC emulation is not
++		 * as stupid as the real hardware.
++		 *
++		 * Anyway, there is nothing we can do about that at this point
++		 * w/o refactoring the whole fixup_irq() business completely.
++		 * We print at least the irq number and the old vector number,
++		 * so we have the necessary information when a problem in that
++		 * area arises.
+ 		 */
+-		data = apic_chip_data(irqdata);
+-		if (!data)
+-			return;
+-		raw_spin_lock(&vector_lock);
++		pr_warn("IRQ fixup: irq %d move in progress, old vector %d\n",
++			irqdata->irq, cfg->old_vector);
+ 	}
++	/*
++	 * If old_domain is not empty, then other cpus still have the irq
++	 * descriptor set in their vector array. Clean it up.
++	 */
++	for_each_cpu(cpu, data->old_domain)
++		per_cpu(vector_irq, cpu)[cfg->old_vector] = VECTOR_UNUSED;
++
++	/* Cleanup the left overs of the (half finished) move */
++	cpumask_clear(data->old_domain);
++	data->move_in_progress = 0;
+ 	raw_spin_unlock(&vector_lock);
+ }
+ #endif
+diff --git a/arch/x86/kernel/cpu/microcode/intel.c b/arch/x86/kernel/cpu/microcode/intel.c
+index ce47402eb2f9..ac8975a65280 100644
+--- a/arch/x86/kernel/cpu/microcode/intel.c
++++ b/arch/x86/kernel/cpu/microcode/intel.c
+@@ -555,10 +555,14 @@ scan_microcode(struct mc_saved_data *mc_saved_data, unsigned long *initrd,
+ 	cd.data = NULL;
+ 	cd.size = 0;
+ 
+-	cd = find_cpio_data(p, (void *)start, size, &offset);
+-	if (!cd.data) {
++	/* try built-in microcode if no initrd */
++	if (!size) {
+ 		if (!load_builtin_intel_microcode(&cd))
+ 			return UCODE_ERROR;
++	} else {
++		cd = find_cpio_data(p, (void *)start, size, &offset);
++		if (!cd.data)
++			return UCODE_ERROR;
+ 	}
+ 
+ 	return get_matching_model_microcode(0, start, cd.data, cd.size,
+@@ -694,7 +698,7 @@ int __init save_microcode_in_initrd_intel(void)
+ 	if (count == 0)
+ 		return ret;
+ 
+-	copy_initrd_ptrs(mc_saved, mc_saved_in_initrd, initrd_start, count);
++	copy_initrd_ptrs(mc_saved, mc_saved_in_initrd, get_initrd_start(), count);
+ 	ret = save_microcode(&mc_saved_data, mc_saved, count);
+ 	if (ret)
+ 		pr_err("Cannot save microcode patches from initrd.\n");
+@@ -732,16 +736,20 @@ void __init load_ucode_intel_bsp(void)
+ 	struct boot_params *p;
+ 
+ 	p	= (struct boot_params *)__pa_nodebug(&boot_params);
+-	start	= p->hdr.ramdisk_image;
+ 	size	= p->hdr.ramdisk_size;
+ 
+-	_load_ucode_intel_bsp(
+-			(struct mc_saved_data *)__pa_nodebug(&mc_saved_data),
+-			(unsigned long *)__pa_nodebug(&mc_saved_in_initrd),
+-			start, size);
++	/*
++	 * Set start only if we have an initrd image. We cannot use initrd_start
++	 * because it is not set that early yet.
++	 */
++	start	= (size ? p->hdr.ramdisk_image : 0);
++
++	_load_ucode_intel_bsp((struct mc_saved_data *)__pa_nodebug(&mc_saved_data),
++			      (unsigned long *)__pa_nodebug(&mc_saved_in_initrd),
++			      start, size);
+ #else
+-	start	= boot_params.hdr.ramdisk_image + PAGE_OFFSET;
+ 	size	= boot_params.hdr.ramdisk_size;
++	start	= (size ? boot_params.hdr.ramdisk_image + PAGE_OFFSET : 0);
+ 
+ 	_load_ucode_intel_bsp(&mc_saved_data, mc_saved_in_initrd, start, size);
+ #endif
+@@ -752,20 +760,14 @@ void load_ucode_intel_ap(void)
+ 	struct mc_saved_data *mc_saved_data_p;
+ 	struct ucode_cpu_info uci;
+ 	unsigned long *mc_saved_in_initrd_p;
+-	unsigned long initrd_start_addr;
+ 	enum ucode_state ret;
+ #ifdef CONFIG_X86_32
+-	unsigned long *initrd_start_p;
+ 
+-	mc_saved_in_initrd_p =
+-		(unsigned long *)__pa_nodebug(mc_saved_in_initrd);
++	mc_saved_in_initrd_p = (unsigned long *)__pa_nodebug(mc_saved_in_initrd);
+ 	mc_saved_data_p = (struct mc_saved_data *)__pa_nodebug(&mc_saved_data);
+-	initrd_start_p = (unsigned long *)__pa_nodebug(&initrd_start);
+-	initrd_start_addr = (unsigned long)__pa_nodebug(*initrd_start_p);
+ #else
+-	mc_saved_data_p = &mc_saved_data;
+ 	mc_saved_in_initrd_p = mc_saved_in_initrd;
+-	initrd_start_addr = initrd_start;
++	mc_saved_data_p = &mc_saved_data;
+ #endif
+ 
+ 	/*
+@@ -777,7 +779,7 @@ void load_ucode_intel_ap(void)
+ 
+ 	collect_cpu_info_early(&uci);
+ 	ret = load_microcode(mc_saved_data_p, mc_saved_in_initrd_p,
+-			     initrd_start_addr, &uci);
++			     get_initrd_start_addr(), &uci);
+ 
+ 	if (ret != UCODE_OK)
+ 		return;
+diff --git a/arch/x86/kernel/cpu/perf_event.c b/arch/x86/kernel/cpu/perf_event.c
+index 2bf79d7c97df..a3aeb2cc361e 100644
+--- a/arch/x86/kernel/cpu/perf_event.c
++++ b/arch/x86/kernel/cpu/perf_event.c
+@@ -593,6 +593,19 @@ void x86_pmu_disable_all(void)
+ 	}
+ }
+ 
++/*
++ * There may be PMI landing after enabled=0. The PMI hitting could be before or
++ * after disable_all.
++ *
++ * If PMI hits before disable_all, the PMU will be disabled in the NMI handler.
++ * It will not be re-enabled in the NMI handler again, because enabled=0. After
++ * handling the NMI, disable_all will be called, which will not change the
++ * state either. If PMI hits after disable_all, the PMU is already disabled
++ * before entering NMI handler. The NMI handler will not change the state
++ * either.
++ *
++ * So either situation is harmless.
++ */
+ static void x86_pmu_disable(struct pmu *pmu)
+ {
+ 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+diff --git a/arch/x86/kernel/cpu/perf_event.h b/arch/x86/kernel/cpu/perf_event.h
+index d0e35ebb2adb..ee70445fbb1f 100644
+--- a/arch/x86/kernel/cpu/perf_event.h
++++ b/arch/x86/kernel/cpu/perf_event.h
+@@ -591,6 +591,7 @@ struct x86_pmu {
+ 			pebs_active	:1,
+ 			pebs_broken	:1;
+ 	int		pebs_record_size;
++	int		pebs_buffer_size;
+ 	void		(*drain_pebs)(struct pt_regs *regs);
+ 	struct event_constraint *pebs_constraints;
+ 	void		(*pebs_aliases)(struct perf_event *event);
+@@ -907,6 +908,8 @@ void intel_pmu_lbr_init_hsw(void);
+ 
+ void intel_pmu_lbr_init_skl(void);
+ 
++void intel_pmu_pebs_data_source_nhm(void);
++
+ int intel_pmu_setup_lbr_filter(struct perf_event *event);
+ 
+ void intel_pt_interrupt(void);
+diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
+index e2a430021e46..078de2e86b7a 100644
+--- a/arch/x86/kernel/cpu/perf_event_intel.c
++++ b/arch/x86/kernel/cpu/perf_event_intel.c
+@@ -1458,7 +1458,15 @@ static __initconst const u64 slm_hw_cache_event_ids
+ };
+ 
+ /*
+- * Use from PMIs where the LBRs are already disabled.
++ * Used from PMIs where the LBRs are already disabled.
++ *
++ * This function could be called consecutively. It is required to remain in
++ * disabled state if called consecutively.
++ *
++ * During consecutive calls, the same disable value will be written to related
++ * registers, so the PMU state remains unchanged. hw.state in
++ * intel_bts_disable_local will remain PERF_HES_STOPPED too in consecutive
++ * calls.
+  */
+ static void __intel_pmu_disable_all(void)
+ {
+@@ -1840,6 +1848,16 @@ again:
+ 	if (__test_and_clear_bit(62, (unsigned long *)&status)) {
+ 		handled++;
+ 		x86_pmu.drain_pebs(regs);
++		/*
++		 * There are cases where, even though, the PEBS ovfl bit is set
++		 * in GLOBAL_OVF_STATUS, the PEBS events may also have their
++		 * overflow bits set for their counters. We must clear them
++		 * here because they have been processed as exact samples in
++		 * the drain_pebs() routine. They must not be processed again
++		 * in the for_each_bit_set() loop for regular samples below.
++		 */
++		status &= ~cpuc->pebs_enabled;
++		status &= x86_pmu.intel_ctrl | GLOBAL_STATUS_TRACE_TOPAPMI;
+ 	}
+ 
+ 	/*
+@@ -1885,7 +1903,10 @@ again:
+ 		goto again;
+ 
+ done:
+-	__intel_pmu_enable_all(0, true);
++	/* Only restore PMU state when it's active. See x86_pmu_disable(). */
++	if (cpuc->enabled)
++		__intel_pmu_enable_all(0, true);
++
+ 	/*
+ 	 * Only unmask the NMI after the overflow counters
+ 	 * have been reset. This avoids spurious NMIs on
+@@ -3315,6 +3336,7 @@ __init int intel_pmu_init(void)
+ 		intel_perfmon_event_map[PERF_COUNT_HW_STALLED_CYCLES_BACKEND] =
+ 			X86_CONFIG(.event=0xb1, .umask=0x3f, .inv=1, .cmask=1);
+ 
++		intel_pmu_pebs_data_source_nhm();
+ 		x86_add_quirk(intel_nehalem_quirk);
+ 
+ 		pr_cont("Nehalem events, ");
+@@ -3377,6 +3399,7 @@ __init int intel_pmu_init(void)
+ 		intel_perfmon_event_map[PERF_COUNT_HW_STALLED_CYCLES_BACKEND] =
+ 			X86_CONFIG(.event=0xb1, .umask=0x3f, .inv=1, .cmask=1);
+ 
++		intel_pmu_pebs_data_source_nhm();
+ 		pr_cont("Westmere events, ");
+ 		break;
+ 
+diff --git a/arch/x86/kernel/cpu/perf_event_intel_ds.c b/arch/x86/kernel/cpu/perf_event_intel_ds.c
+index 5db1c7755548..7abb2b88572e 100644
+--- a/arch/x86/kernel/cpu/perf_event_intel_ds.c
++++ b/arch/x86/kernel/cpu/perf_event_intel_ds.c
+@@ -51,7 +51,8 @@ union intel_x86_pebs_dse {
+ #define OP_LH (P(OP, LOAD) | P(LVL, HIT))
+ #define SNOOP_NONE_MISS (P(SNOOP, NONE) | P(SNOOP, MISS))
+ 
+-static const u64 pebs_data_source[] = {
++/* Version for Sandy Bridge and later */
++static u64 pebs_data_source[] = {
+ 	P(OP, LOAD) | P(LVL, MISS) | P(LVL, L3) | P(SNOOP, NA),/* 0x00:ukn L3 */
+ 	OP_LH | P(LVL, L1)  | P(SNOOP, NONE),	/* 0x01: L1 local */
+ 	OP_LH | P(LVL, LFB) | P(SNOOP, NONE),	/* 0x02: LFB hit */
+@@ -70,6 +71,14 @@ static const u64 pebs_data_source[] = {
+ 	OP_LH | P(LVL, UNC) | P(SNOOP, NONE), /* 0x0f: uncached */
+ };
+ 
++/* Patch up minor differences in the bits */
++void __init intel_pmu_pebs_data_source_nhm(void)
++{
++	pebs_data_source[0x05] = OP_LH | P(LVL, L3)  | P(SNOOP, HIT);
++	pebs_data_source[0x06] = OP_LH | P(LVL, L3)  | P(SNOOP, HITM);
++	pebs_data_source[0x07] = OP_LH | P(LVL, L3)  | P(SNOOP, HITM);
++}
++
+ static u64 precise_store_data(u64 status)
+ {
+ 	union intel_x86_pebs_dse dse;
+@@ -269,7 +278,7 @@ static int alloc_pebs_buffer(int cpu)
+ 	if (!x86_pmu.pebs)
+ 		return 0;
+ 
+-	buffer = kzalloc_node(PEBS_BUFFER_SIZE, GFP_KERNEL, node);
++	buffer = kzalloc_node(x86_pmu.pebs_buffer_size, GFP_KERNEL, node);
+ 	if (unlikely(!buffer))
+ 		return -ENOMEM;
+ 
+@@ -286,7 +295,7 @@ static int alloc_pebs_buffer(int cpu)
+ 		per_cpu(insn_buffer, cpu) = ibuffer;
+ 	}
+ 
+-	max = PEBS_BUFFER_SIZE / x86_pmu.pebs_record_size;
++	max = x86_pmu.pebs_buffer_size / x86_pmu.pebs_record_size;
+ 
+ 	ds->pebs_buffer_base = (u64)(unsigned long)buffer;
+ 	ds->pebs_index = ds->pebs_buffer_base;
+@@ -1296,6 +1305,7 @@ void __init intel_ds_init(void)
+ 
+ 	x86_pmu.bts  = boot_cpu_has(X86_FEATURE_BTS);
+ 	x86_pmu.pebs = boot_cpu_has(X86_FEATURE_PEBS);
++	x86_pmu.pebs_buffer_size = PEBS_BUFFER_SIZE;
+ 	if (x86_pmu.pebs) {
+ 		char pebs_type = x86_pmu.intel_cap.pebs_trap ?  '+' : '-';
+ 		int format = x86_pmu.intel_cap.pebs_format;
+@@ -1304,6 +1314,14 @@ void __init intel_ds_init(void)
+ 		case 0:
+ 			printk(KERN_CONT "PEBS fmt0%c, ", pebs_type);
+ 			x86_pmu.pebs_record_size = sizeof(struct pebs_record_core);
++			/*
++			 * Using >PAGE_SIZE buffers makes the WRMSR to
++			 * PERF_GLOBAL_CTRL in intel_pmu_enable_all()
++			 * mysteriously hang on Core2.
++			 *
++			 * As a workaround, we don't do this.
++			 */
++			x86_pmu.pebs_buffer_size = PAGE_SIZE;
+ 			x86_pmu.drain_pebs = intel_pmu_drain_pebs_core;
+ 			break;
+ 
+diff --git a/arch/x86/kernel/cpu/perf_event_knc.c b/arch/x86/kernel/cpu/perf_event_knc.c
+index 5b0c232d1ee6..b931095e86d4 100644
+--- a/arch/x86/kernel/cpu/perf_event_knc.c
++++ b/arch/x86/kernel/cpu/perf_event_knc.c
+@@ -263,7 +263,9 @@ again:
+ 		goto again;
+ 
+ done:
+-	knc_pmu_enable_all(0);
++	/* Only restore PMU state when it's active. See x86_pmu_disable(). */
++	if (cpuc->enabled)
++		knc_pmu_enable_all(0);
+ 
+ 	return handled;
+ }
+diff --git a/arch/x86/kernel/ioport.c b/arch/x86/kernel/ioport.c
+index 37dae792dbbe..589b3193f102 100644
+--- a/arch/x86/kernel/ioport.c
++++ b/arch/x86/kernel/ioport.c
+@@ -96,9 +96,14 @@ asmlinkage long sys_ioperm(unsigned long from, unsigned long num, int turn_on)
+ SYSCALL_DEFINE1(iopl, unsigned int, level)
+ {
+ 	struct pt_regs *regs = current_pt_regs();
+-	unsigned int old = (regs->flags >> 12) & 3;
+ 	struct thread_struct *t = &current->thread;
+ 
++	/*
++	 * Careful: the IOPL bits in regs->flags are undefined under Xen PV
++	 * and changing them has no effect.
++	 */
++	unsigned int old = t->iopl >> X86_EFLAGS_IOPL_BIT;
++
+ 	if (level > 3)
+ 		return -EINVAL;
+ 	/* Trying to gain more privileges? */
+@@ -106,8 +111,9 @@ SYSCALL_DEFINE1(iopl, unsigned int, level)
+ 		if (!capable(CAP_SYS_RAWIO))
+ 			return -EPERM;
+ 	}
+-	regs->flags = (regs->flags & ~X86_EFLAGS_IOPL) | (level << 12);
+-	t->iopl = level << 12;
++	regs->flags = (regs->flags & ~X86_EFLAGS_IOPL) |
++		(level << X86_EFLAGS_IOPL_BIT);
++	t->iopl = level << X86_EFLAGS_IOPL_BIT;
+ 	set_iopl_mask(t->iopl);
+ 
+ 	return 0;
+diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
+index e835d263a33b..4cbb60fbff3e 100644
+--- a/arch/x86/kernel/process_64.c
++++ b/arch/x86/kernel/process_64.c
+@@ -48,6 +48,7 @@
+ #include <asm/syscalls.h>
+ #include <asm/debugreg.h>
+ #include <asm/switch_to.h>
++#include <asm/xen/hypervisor.h>
+ 
+ asmlinkage extern void ret_from_fork(void);
+ 
+@@ -411,6 +412,17 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
+ 		     task_thread_info(prev_p)->flags & _TIF_WORK_CTXSW_PREV))
+ 		__switch_to_xtra(prev_p, next_p, tss);
+ 
++#ifdef CONFIG_XEN
++	/*
++	 * On Xen PV, IOPL bits in pt_regs->flags have no effect, and
++	 * current_pt_regs()->flags may not match the current task's
++	 * intended IOPL.  We need to switch it manually.
++	 */
++	if (unlikely(static_cpu_has(X86_FEATURE_XENPV) &&
++		     prev->iopl != next->iopl))
++		xen_set_iopl_mask(next->iopl);
++#endif
++
+ 	if (static_cpu_has_bug(X86_BUG_SYSRET_SS_ATTRS)) {
+ 		/*
+ 		 * AMD CPUs have a misfeature: SYSRET sets the SS selector but
+diff --git a/arch/x86/kvm/i8254.c b/arch/x86/kvm/i8254.c
+index b0ea42b78ccd..ab5318727579 100644
+--- a/arch/x86/kvm/i8254.c
++++ b/arch/x86/kvm/i8254.c
+@@ -245,7 +245,7 @@ static void kvm_pit_ack_irq(struct kvm_irq_ack_notifier *kian)
+ 		 * PIC is being reset.  Handle it gracefully here
+ 		 */
+ 		atomic_inc(&ps->pending);
+-	else if (value > 0)
++	else if (value > 0 && ps->reinject)
+ 		/* in this case, we had multiple outstanding pit interrupts
+ 		 * that we needed to inject.  Reinject
+ 		 */
+@@ -288,7 +288,9 @@ static void pit_do_work(struct kthread_work *work)
+ 	 * last one has been acked.
+ 	 */
+ 	spin_lock(&ps->inject_lock);
+-	if (ps->irq_ack) {
++	if (!ps->reinject)
++		inject = 1;
++	else if (ps->irq_ack) {
+ 		ps->irq_ack = 0;
+ 		inject = 1;
+ 	}
+@@ -317,10 +319,10 @@ static enum hrtimer_restart pit_timer_fn(struct hrtimer *data)
+ 	struct kvm_kpit_state *ps = container_of(data, struct kvm_kpit_state, timer);
+ 	struct kvm_pit *pt = ps->kvm->arch.vpit;
+ 
+-	if (ps->reinject || !atomic_read(&ps->pending)) {
++	if (ps->reinject)
+ 		atomic_inc(&ps->pending);
+-		queue_kthread_work(&pt->worker, &pt->expired);
+-	}
++
++	queue_kthread_work(&pt->worker, &pt->expired);
+ 
+ 	if (ps->is_periodic) {
+ 		hrtimer_add_expires_ns(&ps->timer, ps->period);
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index 0958fa2b7cb7..f34ab71dfd57 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -2637,8 +2637,15 @@ static void nested_vmx_setup_ctls_msrs(struct vcpu_vmx *vmx)
+ 	} else
+ 		vmx->nested.nested_vmx_ept_caps = 0;
+ 
++	/*
++	 * Old versions of KVM use the single-context version without
++	 * checking for support, so declare that it is supported even
++	 * though it is treated as global context.  The alternative is
++	 * not failing the single-context invvpid, and it is worse.
++	 */
+ 	if (enable_vpid)
+ 		vmx->nested.nested_vmx_vpid_caps = VMX_VPID_INVVPID_BIT |
++				VMX_VPID_EXTENT_SINGLE_CONTEXT_BIT |
+ 				VMX_VPID_EXTENT_GLOBAL_CONTEXT_BIT;
+ 	else
+ 		vmx->nested.nested_vmx_vpid_caps = 0;
+@@ -7340,6 +7347,7 @@ static int handle_invept(struct kvm_vcpu *vcpu)
+ 	if (!(types & (1UL << type))) {
+ 		nested_vmx_failValid(vcpu,
+ 				VMXERR_INVALID_OPERAND_TO_INVEPT_INVVPID);
++		skip_emulated_instruction(vcpu);
+ 		return 1;
+ 	}
+ 
+@@ -7398,6 +7406,7 @@ static int handle_invvpid(struct kvm_vcpu *vcpu)
+ 	if (!(types & (1UL << type))) {
+ 		nested_vmx_failValid(vcpu,
+ 			VMXERR_INVALID_OPERAND_TO_INVEPT_INVVPID);
++		skip_emulated_instruction(vcpu);
+ 		return 1;
+ 	}
+ 
+@@ -7414,12 +7423,17 @@ static int handle_invvpid(struct kvm_vcpu *vcpu)
+ 	}
+ 
+ 	switch (type) {
++	case VMX_VPID_EXTENT_SINGLE_CONTEXT:
++		/*
++		 * Old versions of KVM use the single-context version so we
++		 * have to support it; just treat it the same as all-context.
++		 */
+ 	case VMX_VPID_EXTENT_ALL_CONTEXT:
+ 		__vmx_flush_tlb(vcpu, to_vmx(vcpu)->nested.vpid02);
+ 		nested_vmx_succeed(vcpu);
+ 		break;
+ 	default:
+-		/* Trap single context invalidation invvpid calls */
++		/* Trap individual address invalidation invvpid calls */
+ 		BUG_ON(1);
+ 		break;
+ 	}
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index d2945024ed33..8bfc5fc6a39b 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -2736,6 +2736,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+ 	}
+ 
+ 	kvm_make_request(KVM_REQ_STEAL_UPDATE, vcpu);
++	vcpu->arch.switch_db_regs |= KVM_DEBUGREG_RELOAD;
+ }
+ 
+ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
+diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
+index 8f4cc3dfac32..5fb6adaaa796 100644
+--- a/arch/x86/mm/tlb.c
++++ b/arch/x86/mm/tlb.c
+@@ -106,8 +106,6 @@ static void flush_tlb_func(void *info)
+ 
+ 	if (f->flush_mm != this_cpu_read(cpu_tlbstate.active_mm))
+ 		return;
+-	if (!f->flush_end)
+-		f->flush_end = f->flush_start + PAGE_SIZE;
+ 
+ 	count_vm_tlb_event(NR_TLB_REMOTE_FLUSH_RECEIVED);
+ 	if (this_cpu_read(cpu_tlbstate.state) == TLBSTATE_OK) {
+@@ -135,12 +133,20 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
+ 				 unsigned long end)
+ {
+ 	struct flush_tlb_info info;
++
++	if (end == 0)
++		end = start + PAGE_SIZE;
+ 	info.flush_mm = mm;
+ 	info.flush_start = start;
+ 	info.flush_end = end;
+ 
+ 	count_vm_tlb_event(NR_TLB_REMOTE_FLUSH);
+-	trace_tlb_flush(TLB_REMOTE_SEND_IPI, end - start);
++	if (end == TLB_FLUSH_ALL)
++		trace_tlb_flush(TLB_REMOTE_SEND_IPI, TLB_FLUSH_ALL);
++	else
++		trace_tlb_flush(TLB_REMOTE_SEND_IPI,
++				(end - start) >> PAGE_SHIFT);
++
+ 	if (is_uv_system()) {
+ 		unsigned int cpu;
+ 
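
The tlb.c hunk above folds the "end == 0 means one page" convention into native_flush_tlb_others() and reports the flush size to the tracepoint in pages rather than bytes, passing TLB_FLUSH_ALL through as a sentinel. A small sketch of that arithmetic, assuming 4 KiB pages and the kernel's ~0UL sentinel:

#include <stdio.h>

#define PAGE_SHIFT    12
#define PAGE_SIZE     (1UL << PAGE_SHIFT)
#define TLB_FLUSH_ALL (~0UL)	/* sentinel meaning "flush everything" */

/* Number of pages a trace event should report for a flush range. */
static unsigned long flush_trace_pages(unsigned long start, unsigned long end)
{
	if (end == 0)			/* single-page flush requested */
		end = start + PAGE_SIZE;
	if (end == TLB_FLUSH_ALL)	/* full flush: report the sentinel */
		return TLB_FLUSH_ALL;
	return (end - start) >> PAGE_SHIFT;
}

int main(void)
{
	printf("%lu\n", flush_trace_pages(0x1000, 0));		/* 1 */
	printf("%lu\n", flush_trace_pages(0x1000, 0x5000));	/* 4 */
	printf("%#lx\n", flush_trace_pages(0, TLB_FLUSH_ALL));	/* sentinel */
	return 0;
}
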
+diff --git a/arch/x86/pci/fixup.c b/arch/x86/pci/fixup.c
+index e58565556703..0ae7e9fa348d 100644
+--- a/arch/x86/pci/fixup.c
++++ b/arch/x86/pci/fixup.c
+@@ -540,3 +540,10 @@ static void twinhead_reserve_killing_zone(struct pci_dev *dev)
+         }
+ }
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x27B9, twinhead_reserve_killing_zone);
++
++static void pci_bdwep_bar(struct pci_dev *dev)
++{
++	dev->non_compliant_bars = 1;
++}
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x6fa0, pci_bdwep_bar);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x6fc0, pci_bdwep_bar);
+diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
+index b7de78bdc09c..beab8c706ac9 100644
+--- a/arch/x86/xen/enlighten.c
++++ b/arch/x86/xen/enlighten.c
+@@ -961,7 +961,7 @@ static void xen_load_sp0(struct tss_struct *tss,
+ 	tss->x86_tss.sp0 = thread->sp0;
+ }
+ 
+-static void xen_set_iopl_mask(unsigned mask)
++void xen_set_iopl_mask(unsigned mask)
+ {
+ 	struct physdev_set_iopl set_iopl;
+ 
+diff --git a/arch/xtensa/kernel/head.S b/arch/xtensa/kernel/head.S
+index 9ed55649ac8e..05e1df943856 100644
+--- a/arch/xtensa/kernel/head.S
++++ b/arch/xtensa/kernel/head.S
+@@ -128,7 +128,7 @@ ENTRY(_startup)
+ 	wsr	a0, icountlevel
+ 
+ 	.set	_index, 0
+-	.rept	XCHAL_NUM_DBREAK - 1
++	.rept	XCHAL_NUM_DBREAK
+ 	wsr	a0, SREG_DBREAKC + _index
+ 	.set	_index, _index + 1
+ 	.endr
+diff --git a/arch/xtensa/mm/cache.c b/arch/xtensa/mm/cache.c
+index d75aa1476da7..1a804a2f9a5b 100644
+--- a/arch/xtensa/mm/cache.c
++++ b/arch/xtensa/mm/cache.c
+@@ -97,11 +97,11 @@ void clear_user_highpage(struct page *page, unsigned long vaddr)
+ 	unsigned long paddr;
+ 	void *kvaddr = coherent_kvaddr(page, TLBTEMP_BASE_1, vaddr, &paddr);
+ 
+-	pagefault_disable();
++	preempt_disable();
+ 	kmap_invalidate_coherent(page, vaddr);
+ 	set_bit(PG_arch_1, &page->flags);
+ 	clear_page_alias(kvaddr, paddr);
+-	pagefault_enable();
++	preempt_enable();
+ }
+ 
+ void copy_user_highpage(struct page *dst, struct page *src,
+@@ -113,11 +113,11 @@ void copy_user_highpage(struct page *dst, struct page *src,
+ 	void *src_vaddr = coherent_kvaddr(src, TLBTEMP_BASE_2, vaddr,
+ 					  &src_paddr);
+ 
+-	pagefault_disable();
++	preempt_disable();
+ 	kmap_invalidate_coherent(dst, vaddr);
+ 	set_bit(PG_arch_1, &dst->flags);
+ 	copy_page_alias(dst_vaddr, src_vaddr, dst_paddr, src_paddr);
+-	pagefault_enable();
++	preempt_enable();
+ }
+ 
+ #endif /* DCACHE_WAY_SIZE > PAGE_SIZE */
+diff --git a/arch/xtensa/platforms/iss/console.c b/arch/xtensa/platforms/iss/console.c
+index 70cb408bc20d..92d785fefb6d 100644
+--- a/arch/xtensa/platforms/iss/console.c
++++ b/arch/xtensa/platforms/iss/console.c
+@@ -100,21 +100,23 @@ static void rs_poll(unsigned long priv)
+ {
+ 	struct tty_port *port = (struct tty_port *)priv;
+ 	int i = 0;
++	int rd = 1;
+ 	unsigned char c;
+ 
+ 	spin_lock(&timer_lock);
+ 
+ 	while (simc_poll(0)) {
+-		simc_read(0, &c, 1);
++		rd = simc_read(0, &c, 1);
++		if (rd <= 0)
++			break;
+ 		tty_insert_flip_char(port, c, TTY_NORMAL);
+ 		i++;
+ 	}
+ 
+ 	if (i)
+ 		tty_flip_buffer_push(port);
+-
+-
+-	mod_timer(&serial_timer, jiffies + SERIAL_TIMER_VALUE);
++	if (rd)
++		mod_timer(&serial_timer, jiffies + SERIAL_TIMER_VALUE);
+ 	spin_unlock(&timer_lock);
+ }
+ 
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 33e2f62d5062..f8e64cac981a 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -2189,7 +2189,7 @@ int blk_insert_cloned_request(struct request_queue *q, struct request *rq)
+ 	if (q->mq_ops) {
+ 		if (blk_queue_io_stat(q))
+ 			blk_account_io_start(rq, true);
+-		blk_mq_insert_request(rq, false, true, true);
++		blk_mq_insert_request(rq, false, true, false);
+ 		return 0;
+ 	}
+ 
+diff --git a/crypto/asymmetric_keys/x509_cert_parser.c b/crypto/asymmetric_keys/x509_cert_parser.c
+index 021d39c0ba75..13c4e5a5fe8c 100644
+--- a/crypto/asymmetric_keys/x509_cert_parser.c
++++ b/crypto/asymmetric_keys/x509_cert_parser.c
+@@ -494,7 +494,7 @@ int x509_decode_time(time64_t *_t,  size_t hdrlen,
+ 		     unsigned char tag,
+ 		     const unsigned char *value, size_t vlen)
+ {
+-	static const unsigned char month_lengths[] = { 31, 29, 31, 30, 31, 30,
++	static const unsigned char month_lengths[] = { 31, 28, 31, 30, 31, 30,
+ 						       31, 31, 30, 31, 30, 31 };
+ 	const unsigned char *p = value;
+ 	unsigned year, mon, day, hour, min, sec, mon_len;
+@@ -540,9 +540,9 @@ int x509_decode_time(time64_t *_t,  size_t hdrlen,
+ 		if (year % 4 == 0) {
+ 			mon_len = 29;
+ 			if (year % 100 == 0) {
+-				year /= 100;
+-				if (year % 4 != 0)
+-					mon_len = 28;
++				mon_len = 28;
++				if (year % 400 == 0)
++					mon_len = 29;
+ 			}
+ 		}
+ 	}
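
The x509_decode_time() hunks restore the full Gregorian rule (leap every 4 years, except centuries, except every 400 years); the old code divided 'year' by 100 in place and defaulted February to 29 days in the month table. A standalone sketch of the rule for comparison:

#include <stdio.h>

/* Gregorian leap-year rule: every 4 years, except centuries,
 * except centuries divisible by 400. */
static int is_leap_year(unsigned int year)
{
	if (year % 4 != 0)
		return 0;
	if (year % 100 != 0)
		return 1;
	return year % 400 == 0;
}

int main(void)
{
	printf("2016: %d\n", is_leap_year(2016));	/* 1 */
	printf("1900: %d\n", is_leap_year(1900));	/* 0: century, not /400 */
	printf("2000: %d\n", is_leap_year(2000));	/* 1: divisible by 400 */
	printf("2015: %d\n", is_leap_year(2015));	/* 0 */
	return 0;
}
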
+diff --git a/crypto/keywrap.c b/crypto/keywrap.c
+index b1d106ce55f3..72014f963ba7 100644
+--- a/crypto/keywrap.c
++++ b/crypto/keywrap.c
+@@ -212,7 +212,7 @@ static int crypto_kw_decrypt(struct blkcipher_desc *desc,
+ 			  SEMIBSIZE))
+ 		ret = -EBADMSG;
+ 
+-	memzero_explicit(&block, sizeof(struct crypto_kw_block));
++	memzero_explicit(block, sizeof(struct crypto_kw_block));
+ 
+ 	return ret;
+ }
+@@ -297,7 +297,7 @@ static int crypto_kw_encrypt(struct blkcipher_desc *desc,
+ 	/* establish the IV for the caller to pick up */
+ 	memcpy(desc->info, block->A, SEMIBSIZE);
+ 
+-	memzero_explicit(&block, sizeof(struct crypto_kw_block));
++	memzero_explicit(block, sizeof(struct crypto_kw_block));
+ 
+ 	return 0;
+ }
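
Both keywrap hunks fix the same slip: 'block' is a pointer, so memzero_explicit(&block, sizeof(...)) appears to scrub the pointer variable (and whatever sits next to it on the stack) rather than the key material it points to. A tiny sketch of the difference, using memset as a stand-in for memzero_explicit; the broken call is left commented out because it would clobber the stack:

#include <stdio.h>
#include <string.h>

struct kw_block {
	unsigned char A[8];
	unsigned char R[8];
};

int main(void)
{
	struct kw_block storage = { .A = "AAAAAAA", .R = "RRRRRRR" };
	struct kw_block *block = &storage;

	/* Wrong: zeroes sizeof(struct kw_block) bytes starting at the
	 * pointer variable itself -- the secret data survives and
	 * neighbouring stack may be overwritten. */
	/* memset(&block, 0, sizeof(struct kw_block)); */

	/* Right: zeroes the block the pointer refers to. */
	memset(block, 0, sizeof(struct kw_block));

	printf("A[0]=%d R[0]=%d\n", storage.A[0], storage.R[0]);	/* 0 0 */
	return 0;
}
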
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index cdc5c2599beb..627f8fbb5e9a 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -26,8 +26,20 @@
+ 
+ #ifdef CONFIG_X86
+ #define valid_IRQ(i) (((i) != 0) && ((i) != 2))
++static inline bool acpi_iospace_resource_valid(struct resource *res)
++{
++	/* On X86 IO space is limited to the [0 - 64K] IO port range */
++	return res->end < 0x10003;
++}
+ #else
+ #define valid_IRQ(i) (true)
++/*
++ * ACPI IO descriptors on arches other than X86 contain MMIO CPU physical
++ * addresses mapping IO space in CPU physical address space, IO space
++ * resources can be placed anywhere in the 64-bit physical address space.
++ */
++static inline bool
++acpi_iospace_resource_valid(struct resource *res) { return true; }
+ #endif
+ 
+ static bool acpi_dev_resource_len_valid(u64 start, u64 end, u64 len, bool io)
+@@ -126,7 +138,7 @@ static void acpi_dev_ioresource_flags(struct resource *res, u64 len,
+ 	if (!acpi_dev_resource_len_valid(res->start, res->end, len, true))
+ 		res->flags |= IORESOURCE_DISABLED | IORESOURCE_UNSET;
+ 
+-	if (res->end >= 0x10003)
++	if (!acpi_iospace_resource_valid(res))
+ 		res->flags |= IORESOURCE_DISABLED | IORESOURCE_UNSET;
+ 
+ 	if (io_decode == ACPI_DECODE_16)
+diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
+index 0d94621dc856..e3322adaaae0 100644
+--- a/drivers/acpi/sleep.c
++++ b/drivers/acpi/sleep.c
+@@ -714,6 +714,7 @@ static int acpi_hibernation_enter(void)
+ 
+ static void acpi_hibernation_leave(void)
+ {
++	pm_set_resume_via_firmware();
+ 	/*
+ 	 * If ACPI is not enabled by the BIOS and the boot kernel, we need to
+ 	 * enable it here.
+diff --git a/drivers/block/brd.c b/drivers/block/brd.c
+index a5880f4ab40e..1914c63ca8b1 100644
+--- a/drivers/block/brd.c
++++ b/drivers/block/brd.c
+@@ -338,7 +338,7 @@ static blk_qc_t brd_make_request(struct request_queue *q, struct bio *bio)
+ 
+ 	if (unlikely(bio->bi_rw & REQ_DISCARD)) {
+ 		if (sector & ((PAGE_SIZE >> SECTOR_SHIFT) - 1) ||
+-		    bio->bi_iter.bi_size & PAGE_MASK)
++		    bio->bi_iter.bi_size & ~PAGE_MASK)
+ 			goto io_error;
+ 		discard_from_brd(brd, sector, bio->bi_iter.bi_size);
+ 		goto out;
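
The brd hunk flips the discard-size test to ~PAGE_MASK: PAGE_MASK keeps the page-number bits, so the old expression was non-zero for any request of a page or more and rejected perfectly aligned discards, while ~PAGE_MASK isolates the sub-page remainder that actually signals misalignment. A short sketch with 4 KiB pages:

#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))	/* keeps the page-number bits */

/* A size is page-aligned iff its low PAGE_SHIFT bits are zero. */
static int page_aligned(unsigned long size)
{
	return (size & ~PAGE_MASK) == 0;
}

int main(void)
{
	unsigned long sizes[] = { 4096, 8192, 12288, 6000 };

	for (unsigned int i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
		printf("%lu: aligned=%d  (size & PAGE_MASK)=%#lx\n",
		       sizes[i], page_aligned(sizes[i]),
		       sizes[i] & PAGE_MASK);	/* non-zero for any size >= 4096 */
	return 0;
}
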
+diff --git a/drivers/block/mtip32xx/mtip32xx.c b/drivers/block/mtip32xx/mtip32xx.c
+index 3457ac8c03e2..55d3d1da72de 100644
+--- a/drivers/block/mtip32xx/mtip32xx.c
++++ b/drivers/block/mtip32xx/mtip32xx.c
+@@ -173,7 +173,13 @@ static struct mtip_cmd *mtip_get_int_command(struct driver_data *dd)
+ {
+ 	struct request *rq;
+ 
++	if (mtip_check_surprise_removal(dd->pdev))
++		return NULL;
++
+ 	rq = blk_mq_alloc_request(dd->queue, 0, __GFP_RECLAIM, true);
++	if (IS_ERR(rq))
++		return NULL;
++
+ 	return blk_mq_rq_to_pdu(rq);
+ }
+ 
+@@ -233,15 +239,9 @@ static void mtip_async_complete(struct mtip_port *port,
+ 			"Command tag %d failed due to TFE\n", tag);
+ 	}
+ 
+-	/* Unmap the DMA scatter list entries */
+-	dma_unmap_sg(&dd->pdev->dev, cmd->sg, cmd->scatter_ents, cmd->direction);
+-
+ 	rq = mtip_rq_from_tag(dd, tag);
+ 
+-	if (unlikely(cmd->unaligned))
+-		up(&port->cmd_slot_unal);
+-
+-	blk_mq_end_request(rq, status ? -EIO : 0);
++	blk_mq_complete_request(rq, status);
+ }
+ 
+ /*
+@@ -581,6 +581,8 @@ static void mtip_completion(struct mtip_port *port,
+ 		dev_warn(&port->dd->pdev->dev,
+ 			"Internal command %d completed with TFE\n", tag);
+ 
++	command->comp_func = NULL;
++	command->comp_data = NULL;
+ 	complete(waiting);
+ }
+ 
+@@ -618,8 +620,6 @@ static void mtip_handle_tfe(struct driver_data *dd)
+ 
+ 	port = dd->port;
+ 
+-	set_bit(MTIP_PF_EH_ACTIVE_BIT, &port->flags);
+-
+ 	if (test_bit(MTIP_PF_IC_ACTIVE_BIT, &port->flags)) {
+ 		cmd = mtip_cmd_from_tag(dd, MTIP_TAG_INTERNAL);
+ 		dbg_printk(MTIP_DRV_NAME " TFE for the internal command\n");
+@@ -628,7 +628,7 @@ static void mtip_handle_tfe(struct driver_data *dd)
+ 			cmd->comp_func(port, MTIP_TAG_INTERNAL,
+ 					cmd, PORT_IRQ_TF_ERR);
+ 		}
+-		goto handle_tfe_exit;
++		return;
+ 	}
+ 
+ 	/* clear the tag accumulator */
+@@ -701,7 +701,7 @@ static void mtip_handle_tfe(struct driver_data *dd)
+ 			fail_reason = "thermal shutdown";
+ 		}
+ 		if (buf[288] == 0xBF) {
+-			set_bit(MTIP_DDF_SEC_LOCK_BIT, &dd->dd_flag);
++			set_bit(MTIP_DDF_REBUILD_FAILED_BIT, &dd->dd_flag);
+ 			dev_info(&dd->pdev->dev,
+ 				"Drive indicates rebuild has failed. Secure erase required.\n");
+ 			fail_all_ncq_cmds = 1;
+@@ -771,11 +771,6 @@ static void mtip_handle_tfe(struct driver_data *dd)
+ 		}
+ 	}
+ 	print_tags(dd, "reissued (TFE)", tagaccum, cmd_cnt);
+-
+-handle_tfe_exit:
+-	/* clear eh_active */
+-	clear_bit(MTIP_PF_EH_ACTIVE_BIT, &port->flags);
+-	wake_up_interruptible(&port->svc_wait);
+ }
+ 
+ /*
+@@ -1007,6 +1002,7 @@ static bool mtip_pause_ncq(struct mtip_port *port,
+ 			(fis->features == 0x27 || fis->features == 0x72 ||
+ 			 fis->features == 0x62 || fis->features == 0x26))) {
+ 		clear_bit(MTIP_DDF_SEC_LOCK_BIT, &port->dd->dd_flag);
++		clear_bit(MTIP_DDF_REBUILD_FAILED_BIT, &port->dd->dd_flag);
+ 		/* Com reset after secure erase or lowlevel format */
+ 		mtip_restart_port(port);
+ 		clear_bit(MTIP_PF_SE_ACTIVE_BIT, &port->flags);
+@@ -1021,12 +1017,14 @@ static bool mtip_pause_ncq(struct mtip_port *port,
+  *
+  * @port    Pointer to port data structure
+  * @timeout Max duration to wait (ms)
++ * @atomic  gfp_t flag to indicate blockable context or not
+  *
+  * return value
+  *	0	Success
+  *	-EBUSY  Commands still active
+  */
+-static int mtip_quiesce_io(struct mtip_port *port, unsigned long timeout)
++static int mtip_quiesce_io(struct mtip_port *port, unsigned long timeout,
++								gfp_t atomic)
+ {
+ 	unsigned long to;
+ 	unsigned int n;
+@@ -1037,16 +1035,21 @@ static int mtip_quiesce_io(struct mtip_port *port, unsigned long timeout)
+ 	to = jiffies + msecs_to_jiffies(timeout);
+ 	do {
+ 		if (test_bit(MTIP_PF_SVC_THD_ACTIVE_BIT, &port->flags) &&
+-			test_bit(MTIP_PF_ISSUE_CMDS_BIT, &port->flags)) {
++			test_bit(MTIP_PF_ISSUE_CMDS_BIT, &port->flags) &&
++			atomic == GFP_KERNEL) {
+ 			msleep(20);
+ 			continue; /* svc thd is actively issuing commands */
+ 		}
+ 
+-		msleep(100);
++		if (atomic == GFP_KERNEL)
++			msleep(100);
++		else {
++			cpu_relax();
++			udelay(100);
++		}
++
+ 		if (mtip_check_surprise_removal(port->dd->pdev))
+ 			goto err_fault;
+-		if (test_bit(MTIP_DDF_REMOVE_PENDING_BIT, &port->dd->dd_flag))
+-			goto err_fault;
+ 
+ 		/*
+ 		 * Ignore s_active bit 0 of array element 0.
+@@ -1099,6 +1102,7 @@ static int mtip_exec_internal_command(struct mtip_port *port,
+ 	struct mtip_cmd *int_cmd;
+ 	struct driver_data *dd = port->dd;
+ 	int rv = 0;
++	unsigned long start;
+ 
+ 	/* Make sure the buffer is 8 byte aligned. This is asic specific. */
+ 	if (buffer & 0x00000007) {
+@@ -1107,6 +1111,10 @@ static int mtip_exec_internal_command(struct mtip_port *port,
+ 	}
+ 
+ 	int_cmd = mtip_get_int_command(dd);
++	if (!int_cmd) {
++		dbg_printk(MTIP_DRV_NAME "Unable to allocate tag for PIO cmd\n");
++		return -EFAULT;
++	}
+ 
+ 	set_bit(MTIP_PF_IC_ACTIVE_BIT, &port->flags);
+ 
+@@ -1119,7 +1127,7 @@ static int mtip_exec_internal_command(struct mtip_port *port,
+ 		if (fis->command != ATA_CMD_STANDBYNOW1) {
+ 			/* wait for io to complete if non atomic */
+ 			if (mtip_quiesce_io(port,
+-					MTIP_QUIESCE_IO_TIMEOUT_MS) < 0) {
++				MTIP_QUIESCE_IO_TIMEOUT_MS, atomic) < 0) {
+ 				dev_warn(&dd->pdev->dev,
+ 					"Failed to quiesce IO\n");
+ 				mtip_put_int_command(dd, int_cmd);
+@@ -1162,6 +1170,8 @@ static int mtip_exec_internal_command(struct mtip_port *port,
+ 	/* Populate the command header */
+ 	int_cmd->command_header->byte_count = 0;
+ 
++	start = jiffies;
++
+ 	/* Issue the command to the hardware */
+ 	mtip_issue_non_ncq_command(port, MTIP_TAG_INTERNAL);
+ 
+@@ -1170,10 +1180,12 @@ static int mtip_exec_internal_command(struct mtip_port *port,
+ 		if ((rv = wait_for_completion_interruptible_timeout(
+ 				&wait,
+ 				msecs_to_jiffies(timeout))) <= 0) {
++
+ 			if (rv == -ERESTARTSYS) { /* interrupted */
+ 				dev_err(&dd->pdev->dev,
+-					"Internal command [%02X] was interrupted after %lu ms\n",
+-					fis->command, timeout);
++					"Internal command [%02X] was interrupted after %u ms\n",
++					fis->command,
++					jiffies_to_msecs(jiffies - start));
+ 				rv = -EINTR;
+ 				goto exec_ic_exit;
+ 			} else if (rv == 0) /* timeout */
+@@ -2897,6 +2909,42 @@ static int mtip_ftl_rebuild_poll(struct driver_data *dd)
+ 	return -EFAULT;
+ }
+ 
++static void mtip_softirq_done_fn(struct request *rq)
++{
++	struct mtip_cmd *cmd = blk_mq_rq_to_pdu(rq);
++	struct driver_data *dd = rq->q->queuedata;
++
++	/* Unmap the DMA scatter list entries */
++	dma_unmap_sg(&dd->pdev->dev, cmd->sg, cmd->scatter_ents,
++							cmd->direction);
++
++	if (unlikely(cmd->unaligned))
++		up(&dd->port->cmd_slot_unal);
++
++	blk_mq_end_request(rq, rq->errors);
++}
++
++static void mtip_abort_cmd(struct request *req, void *data,
++							bool reserved)
++{
++	struct driver_data *dd = data;
++
++	dbg_printk(MTIP_DRV_NAME " Aborting request, tag = %d\n", req->tag);
++
++	clear_bit(req->tag, dd->port->cmds_to_issue);
++	req->errors = -EIO;
++	mtip_softirq_done_fn(req);
++}
++
++static void mtip_queue_cmd(struct request *req, void *data,
++							bool reserved)
++{
++	struct driver_data *dd = data;
++
++	set_bit(req->tag, dd->port->cmds_to_issue);
++	blk_abort_request(req);
++}
++
+ /*
+  * service thread to issue queued commands
+  *
+@@ -2909,7 +2957,7 @@ static int mtip_ftl_rebuild_poll(struct driver_data *dd)
+ static int mtip_service_thread(void *data)
+ {
+ 	struct driver_data *dd = (struct driver_data *)data;
+-	unsigned long slot, slot_start, slot_wrap;
++	unsigned long slot, slot_start, slot_wrap, to;
+ 	unsigned int num_cmd_slots = dd->slot_groups * 32;
+ 	struct mtip_port *port = dd->port;
+ 
+@@ -2924,9 +2972,7 @@ static int mtip_service_thread(void *data)
+ 		 * is in progress nor error handling is active
+ 		 */
+ 		wait_event_interruptible(port->svc_wait, (port->flags) &&
+-			!(port->flags & MTIP_PF_PAUSE_IO));
+-
+-		set_bit(MTIP_PF_SVC_THD_ACTIVE_BIT, &port->flags);
++			(port->flags & MTIP_PF_SVC_THD_WORK));
+ 
+ 		if (kthread_should_stop() ||
+ 			test_bit(MTIP_PF_SVC_THD_STOP_BIT, &port->flags))
+@@ -2936,6 +2982,8 @@ static int mtip_service_thread(void *data)
+ 				&dd->dd_flag)))
+ 			goto st_out;
+ 
++		set_bit(MTIP_PF_SVC_THD_ACTIVE_BIT, &port->flags);
++
+ restart_eh:
+ 		/* Demux bits: start with error handling */
+ 		if (test_bit(MTIP_PF_EH_ACTIVE_BIT, &port->flags)) {
+@@ -2946,6 +2994,32 @@ restart_eh:
+ 		if (test_bit(MTIP_PF_EH_ACTIVE_BIT, &port->flags))
+ 			goto restart_eh;
+ 
++		if (test_bit(MTIP_PF_TO_ACTIVE_BIT, &port->flags)) {
++			to = jiffies + msecs_to_jiffies(5000);
++
++			do {
++				mdelay(100);
++			} while (atomic_read(&dd->irq_workers_active) != 0 &&
++				time_before(jiffies, to));
++
++			if (atomic_read(&dd->irq_workers_active) != 0)
++				dev_warn(&dd->pdev->dev,
++					"Completion workers still active!");
++
++			spin_lock(dd->queue->queue_lock);
++			blk_mq_all_tag_busy_iter(*dd->tags.tags,
++							mtip_queue_cmd, dd);
++			spin_unlock(dd->queue->queue_lock);
++
++			set_bit(MTIP_PF_ISSUE_CMDS_BIT, &dd->port->flags);
++
++			if (mtip_device_reset(dd))
++				blk_mq_all_tag_busy_iter(*dd->tags.tags,
++							mtip_abort_cmd, dd);
++
++			clear_bit(MTIP_PF_TO_ACTIVE_BIT, &dd->port->flags);
++		}
++
+ 		if (test_bit(MTIP_PF_ISSUE_CMDS_BIT, &port->flags)) {
+ 			slot = 1;
+ 			/* used to restrict the loop to one iteration */
+@@ -2978,10 +3052,8 @@ restart_eh:
+ 		}
+ 
+ 		if (test_bit(MTIP_PF_REBUILD_BIT, &port->flags)) {
+-			if (mtip_ftl_rebuild_poll(dd) < 0)
+-				set_bit(MTIP_DDF_REBUILD_FAILED_BIT,
+-							&dd->dd_flag);
+-			clear_bit(MTIP_PF_REBUILD_BIT, &port->flags);
++			if (mtip_ftl_rebuild_poll(dd) == 0)
++				clear_bit(MTIP_PF_REBUILD_BIT, &port->flags);
+ 		}
+ 	}
+ 
+@@ -3096,7 +3168,7 @@ static int mtip_hw_get_identify(struct driver_data *dd)
+ 		if (buf[288] == 0xBF) {
+ 			dev_info(&dd->pdev->dev,
+ 				"Drive indicates rebuild has failed.\n");
+-			/* TODO */
++			set_bit(MTIP_DDF_REBUILD_FAILED_BIT, &dd->dd_flag);
+ 		}
+ 	}
+ 
+@@ -3270,20 +3342,25 @@ out1:
+ 	return rv;
+ }
+ 
+-static void mtip_standby_drive(struct driver_data *dd)
++static int mtip_standby_drive(struct driver_data *dd)
+ {
+-	if (dd->sr)
+-		return;
++	int rv = 0;
+ 
++	if (dd->sr || !dd->port)
++		return -ENODEV;
+ 	/*
+ 	 * Send standby immediate (E0h) to the drive so that it
+ 	 * saves its state.
+ 	 */
+ 	if (!test_bit(MTIP_PF_REBUILD_BIT, &dd->port->flags) &&
+-	    !test_bit(MTIP_DDF_SEC_LOCK_BIT, &dd->dd_flag))
+-		if (mtip_standby_immediate(dd->port))
++	    !test_bit(MTIP_DDF_REBUILD_FAILED_BIT, &dd->dd_flag) &&
++	    !test_bit(MTIP_DDF_SEC_LOCK_BIT, &dd->dd_flag)) {
++		rv = mtip_standby_immediate(dd->port);
++		if (rv)
+ 			dev_warn(&dd->pdev->dev,
+ 				"STANDBY IMMEDIATE failed\n");
++	}
++	return rv;
+ }
+ 
+ /*
+@@ -3296,10 +3373,6 @@ static void mtip_standby_drive(struct driver_data *dd)
+  */
+ static int mtip_hw_exit(struct driver_data *dd)
+ {
+-	/*
+-	 * Send standby immediate (E0h) to the drive so that it
+-	 * saves its state.
+-	 */
+ 	if (!dd->sr) {
+ 		/* de-initialize the port. */
+ 		mtip_deinit_port(dd->port);
+@@ -3341,8 +3414,7 @@ static int mtip_hw_shutdown(struct driver_data *dd)
+ 	 * Send standby immediate (E0h) to the drive so that it
+ 	 * saves its state.
+ 	 */
+-	if (!dd->sr && dd->port)
+-		mtip_standby_immediate(dd->port);
++	mtip_standby_drive(dd);
+ 
+ 	return 0;
+ }
+@@ -3365,7 +3437,7 @@ static int mtip_hw_suspend(struct driver_data *dd)
+ 	 * Send standby immediate (E0h) to the drive
+ 	 * so that it saves its state.
+ 	 */
+-	if (mtip_standby_immediate(dd->port) != 0) {
++	if (mtip_standby_drive(dd) != 0) {
+ 		dev_err(&dd->pdev->dev,
+ 			"Failed standby-immediate command\n");
+ 		return -EFAULT;
+@@ -3603,6 +3675,28 @@ static int mtip_block_getgeo(struct block_device *dev,
+ 	return 0;
+ }
+ 
++static int mtip_block_open(struct block_device *dev, fmode_t mode)
++{
++	struct driver_data *dd;
++
++	if (dev && dev->bd_disk) {
++		dd = (struct driver_data *) dev->bd_disk->private_data;
++
++		if (dd) {
++			if (test_bit(MTIP_DDF_REMOVAL_BIT,
++							&dd->dd_flag)) {
++				return -ENODEV;
++			}
++			return 0;
++		}
++	}
++	return -ENODEV;
++}
++
++void mtip_block_release(struct gendisk *disk, fmode_t mode)
++{
++}
++
+ /*
+  * Block device operation function.
+  *
+@@ -3610,6 +3704,8 @@ static int mtip_block_getgeo(struct block_device *dev,
+  * layer.
+  */
+ static const struct block_device_operations mtip_block_ops = {
++	.open		= mtip_block_open,
++	.release	= mtip_block_release,
+ 	.ioctl		= mtip_block_ioctl,
+ #ifdef CONFIG_COMPAT
+ 	.compat_ioctl	= mtip_block_compat_ioctl,
+@@ -3671,10 +3767,9 @@ static int mtip_submit_request(struct blk_mq_hw_ctx *hctx, struct request *rq)
+ 				rq_data_dir(rq))) {
+ 			return -ENODATA;
+ 		}
+-		if (unlikely(test_bit(MTIP_DDF_SEC_LOCK_BIT, &dd->dd_flag)))
++		if (unlikely(test_bit(MTIP_DDF_SEC_LOCK_BIT, &dd->dd_flag) ||
++			test_bit(MTIP_DDF_REBUILD_FAILED_BIT, &dd->dd_flag)))
+ 			return -ENODATA;
+-		if (test_bit(MTIP_DDF_REBUILD_FAILED_BIT, &dd->dd_flag))
+-			return -ENXIO;
+ 	}
+ 
+ 	if (rq->cmd_flags & REQ_DISCARD) {
+@@ -3786,11 +3881,33 @@ static int mtip_init_cmd(void *data, struct request *rq, unsigned int hctx_idx,
+ 	return 0;
+ }
+ 
++static enum blk_eh_timer_return mtip_cmd_timeout(struct request *req,
++								bool reserved)
++{
++	struct driver_data *dd = req->q->queuedata;
++	int ret = BLK_EH_RESET_TIMER;
++
++	if (reserved)
++		goto exit_handler;
++
++	if (test_bit(req->tag, dd->port->cmds_to_issue))
++		goto exit_handler;
++
++	if (test_and_set_bit(MTIP_PF_TO_ACTIVE_BIT, &dd->port->flags))
++		goto exit_handler;
++
++	wake_up_interruptible(&dd->port->svc_wait);
++exit_handler:
++	return ret;
++}
++
+ static struct blk_mq_ops mtip_mq_ops = {
+ 	.queue_rq	= mtip_queue_rq,
+ 	.map_queue	= blk_mq_map_queue,
+ 	.init_request	= mtip_init_cmd,
+ 	.exit_request	= mtip_free_cmd,
++	.complete	= mtip_softirq_done_fn,
++	.timeout        = mtip_cmd_timeout,
+ };
+ 
+ /*
+@@ -3857,7 +3974,6 @@ static int mtip_block_initialize(struct driver_data *dd)
+ 
+ 	mtip_hw_debugfs_init(dd);
+ 
+-skip_create_disk:
+ 	memset(&dd->tags, 0, sizeof(dd->tags));
+ 	dd->tags.ops = &mtip_mq_ops;
+ 	dd->tags.nr_hw_queues = 1;
+@@ -3867,12 +3983,13 @@ skip_create_disk:
+ 	dd->tags.numa_node = dd->numa_node;
+ 	dd->tags.flags = BLK_MQ_F_SHOULD_MERGE;
+ 	dd->tags.driver_data = dd;
++	dd->tags.timeout = MTIP_NCQ_CMD_TIMEOUT_MS;
+ 
+ 	rv = blk_mq_alloc_tag_set(&dd->tags);
+ 	if (rv) {
+ 		dev_err(&dd->pdev->dev,
+ 			"Unable to allocate request queue\n");
+-		goto block_queue_alloc_init_error;
++		goto block_queue_alloc_tag_error;
+ 	}
+ 
+ 	/* Allocate the request queue. */
+@@ -3887,6 +4004,7 @@ skip_create_disk:
+ 	dd->disk->queue		= dd->queue;
+ 	dd->queue->queuedata	= dd;
+ 
++skip_create_disk:
+ 	/* Initialize the protocol layer. */
+ 	wait_for_rebuild = mtip_hw_get_identify(dd);
+ 	if (wait_for_rebuild < 0) {
+@@ -3983,8 +4101,9 @@ kthread_run_error:
+ read_capacity_error:
+ init_hw_cmds_error:
+ 	blk_cleanup_queue(dd->queue);
+-	blk_mq_free_tag_set(&dd->tags);
+ block_queue_alloc_init_error:
++	blk_mq_free_tag_set(&dd->tags);
++block_queue_alloc_tag_error:
+ 	mtip_hw_debugfs_exit(dd);
+ disk_index_error:
+ 	spin_lock(&rssd_index_lock);
+@@ -4001,6 +4120,22 @@ protocol_init_error:
+ 	return rv;
+ }
+ 
++static void mtip_no_dev_cleanup(struct request *rq, void *data, bool reserv)
++{
++	struct driver_data *dd = (struct driver_data *)data;
++	struct mtip_cmd *cmd;
++
++	if (likely(!reserv))
++		blk_mq_complete_request(rq, -ENODEV);
++	else if (test_bit(MTIP_PF_IC_ACTIVE_BIT, &dd->port->flags)) {
++
++		cmd = mtip_cmd_from_tag(dd, MTIP_TAG_INTERNAL);
++		if (cmd->comp_func)
++			cmd->comp_func(dd->port, MTIP_TAG_INTERNAL,
++					cmd, -ENODEV);
++	}
++}
++
+ /*
+  * Block layer deinitialization function.
+  *
+@@ -4032,12 +4167,23 @@ static int mtip_block_remove(struct driver_data *dd)
+ 		}
+ 	}
+ 
+-	if (!dd->sr)
+-		mtip_standby_drive(dd);
++	if (!dd->sr) {
++		/*
++		 * Explicitly wait here for IOs to quiesce,
++		 * as mtip_standby_drive usually won't wait for IOs.
++		 */
++		if (!mtip_quiesce_io(dd->port, MTIP_QUIESCE_IO_TIMEOUT_MS,
++								GFP_KERNEL))
++			mtip_standby_drive(dd);
++	}
+ 	else
+ 		dev_info(&dd->pdev->dev, "device %s surprise removal\n",
+ 						dd->disk->disk_name);
+ 
++	blk_mq_freeze_queue_start(dd->queue);
++	blk_mq_stop_hw_queues(dd->queue);
++	blk_mq_all_tag_busy_iter(dd->tags.tags[0], mtip_no_dev_cleanup, dd);
++
+ 	/*
+ 	 * Delete our gendisk structure. This also removes the device
+ 	 * from /dev
+@@ -4047,7 +4193,8 @@ static int mtip_block_remove(struct driver_data *dd)
+ 		dd->bdev = NULL;
+ 	}
+ 	if (dd->disk) {
+-		del_gendisk(dd->disk);
++		if (test_bit(MTIP_DDF_INIT_DONE_BIT, &dd->dd_flag))
++			del_gendisk(dd->disk);
+ 		if (dd->disk->queue) {
+ 			blk_cleanup_queue(dd->queue);
+ 			blk_mq_free_tag_set(&dd->tags);
+@@ -4088,7 +4235,8 @@ static int mtip_block_shutdown(struct driver_data *dd)
+ 		dev_info(&dd->pdev->dev,
+ 			"Shutting down %s ...\n", dd->disk->disk_name);
+ 
+-		del_gendisk(dd->disk);
++		if (test_bit(MTIP_DDF_INIT_DONE_BIT, &dd->dd_flag))
++			del_gendisk(dd->disk);
+ 		if (dd->disk->queue) {
+ 			blk_cleanup_queue(dd->queue);
+ 			blk_mq_free_tag_set(&dd->tags);
+@@ -4433,7 +4581,7 @@ static void mtip_pci_remove(struct pci_dev *pdev)
+ 	struct driver_data *dd = pci_get_drvdata(pdev);
+ 	unsigned long flags, to;
+ 
+-	set_bit(MTIP_DDF_REMOVE_PENDING_BIT, &dd->dd_flag);
++	set_bit(MTIP_DDF_REMOVAL_BIT, &dd->dd_flag);
+ 
+ 	spin_lock_irqsave(&dev_lock, flags);
+ 	list_del_init(&dd->online_list);
+@@ -4450,12 +4598,17 @@ static void mtip_pci_remove(struct pci_dev *pdev)
+ 	} while (atomic_read(&dd->irq_workers_active) != 0 &&
+ 		time_before(jiffies, to));
+ 
++	if (!dd->sr)
++		fsync_bdev(dd->bdev);
++
+ 	if (atomic_read(&dd->irq_workers_active) != 0) {
+ 		dev_warn(&dd->pdev->dev,
+ 			"Completion workers still active!\n");
+ 	}
+ 
+-	blk_mq_stop_hw_queues(dd->queue);
++	blk_set_queue_dying(dd->queue);
++	set_bit(MTIP_DDF_REMOVE_PENDING_BIT, &dd->dd_flag);
++
+ 	/* Clean up the block layer. */
+ 	mtip_block_remove(dd);
+ 
+diff --git a/drivers/block/mtip32xx/mtip32xx.h b/drivers/block/mtip32xx/mtip32xx.h
+index 3274784008eb..7617888f7944 100644
+--- a/drivers/block/mtip32xx/mtip32xx.h
++++ b/drivers/block/mtip32xx/mtip32xx.h
+@@ -134,16 +134,24 @@ enum {
+ 	MTIP_PF_EH_ACTIVE_BIT       = 1, /* error handling */
+ 	MTIP_PF_SE_ACTIVE_BIT       = 2, /* secure erase */
+ 	MTIP_PF_DM_ACTIVE_BIT       = 3, /* download microcde */
++	MTIP_PF_TO_ACTIVE_BIT       = 9, /* timeout handling */
+ 	MTIP_PF_PAUSE_IO      =	((1 << MTIP_PF_IC_ACTIVE_BIT) |
+ 				(1 << MTIP_PF_EH_ACTIVE_BIT) |
+ 				(1 << MTIP_PF_SE_ACTIVE_BIT) |
+-				(1 << MTIP_PF_DM_ACTIVE_BIT)),
++				(1 << MTIP_PF_DM_ACTIVE_BIT) |
++				(1 << MTIP_PF_TO_ACTIVE_BIT)),
+ 
+ 	MTIP_PF_SVC_THD_ACTIVE_BIT  = 4,
+ 	MTIP_PF_ISSUE_CMDS_BIT      = 5,
+ 	MTIP_PF_REBUILD_BIT         = 6,
+ 	MTIP_PF_SVC_THD_STOP_BIT    = 8,
+ 
++	MTIP_PF_SVC_THD_WORK	= ((1 << MTIP_PF_EH_ACTIVE_BIT) |
++				  (1 << MTIP_PF_ISSUE_CMDS_BIT) |
++				  (1 << MTIP_PF_REBUILD_BIT) |
++				  (1 << MTIP_PF_SVC_THD_STOP_BIT) |
++				  (1 << MTIP_PF_TO_ACTIVE_BIT)),
++
+ 	/* below are bit numbers in 'dd_flag' defined in driver_data */
+ 	MTIP_DDF_SEC_LOCK_BIT	    = 0,
+ 	MTIP_DDF_REMOVE_PENDING_BIT = 1,
+@@ -153,6 +161,7 @@ enum {
+ 	MTIP_DDF_RESUME_BIT         = 6,
+ 	MTIP_DDF_INIT_DONE_BIT      = 7,
+ 	MTIP_DDF_REBUILD_FAILED_BIT = 8,
++	MTIP_DDF_REMOVAL_BIT	    = 9,
+ 
+ 	MTIP_DDF_STOP_IO      = ((1 << MTIP_DDF_REMOVE_PENDING_BIT) |
+ 				(1 << MTIP_DDF_SEC_LOCK_BIT) |
+diff --git a/drivers/bluetooth/ath3k.c b/drivers/bluetooth/ath3k.c
+index fa893c3ec408..0beaa52df66b 100644
+--- a/drivers/bluetooth/ath3k.c
++++ b/drivers/bluetooth/ath3k.c
+@@ -82,6 +82,7 @@ static const struct usb_device_id ath3k_table[] = {
+ 	{ USB_DEVICE(0x0489, 0xe05f) },
+ 	{ USB_DEVICE(0x0489, 0xe076) },
+ 	{ USB_DEVICE(0x0489, 0xe078) },
++	{ USB_DEVICE(0x0489, 0xe095) },
+ 	{ USB_DEVICE(0x04c5, 0x1330) },
+ 	{ USB_DEVICE(0x04CA, 0x3004) },
+ 	{ USB_DEVICE(0x04CA, 0x3005) },
+@@ -92,6 +93,7 @@ static const struct usb_device_id ath3k_table[] = {
+ 	{ USB_DEVICE(0x04CA, 0x300d) },
+ 	{ USB_DEVICE(0x04CA, 0x300f) },
+ 	{ USB_DEVICE(0x04CA, 0x3010) },
++	{ USB_DEVICE(0x04CA, 0x3014) },
+ 	{ USB_DEVICE(0x0930, 0x0219) },
+ 	{ USB_DEVICE(0x0930, 0x021c) },
+ 	{ USB_DEVICE(0x0930, 0x0220) },
+@@ -113,10 +115,12 @@ static const struct usb_device_id ath3k_table[] = {
+ 	{ USB_DEVICE(0x13d3, 0x3362) },
+ 	{ USB_DEVICE(0x13d3, 0x3375) },
+ 	{ USB_DEVICE(0x13d3, 0x3393) },
++	{ USB_DEVICE(0x13d3, 0x3395) },
+ 	{ USB_DEVICE(0x13d3, 0x3402) },
+ 	{ USB_DEVICE(0x13d3, 0x3408) },
+ 	{ USB_DEVICE(0x13d3, 0x3423) },
+ 	{ USB_DEVICE(0x13d3, 0x3432) },
++	{ USB_DEVICE(0x13d3, 0x3472) },
+ 	{ USB_DEVICE(0x13d3, 0x3474) },
+ 
+ 	/* Atheros AR5BBU12 with sflash firmware */
+@@ -144,6 +148,7 @@ static const struct usb_device_id ath3k_blist_tbl[] = {
+ 	{ USB_DEVICE(0x0489, 0xe05f), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x0489, 0xe076), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x0489, 0xe078), .driver_info = BTUSB_ATH3012 },
++	{ USB_DEVICE(0x0489, 0xe095), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x04c5, 0x1330), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x04ca, 0x3004), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x04ca, 0x3005), .driver_info = BTUSB_ATH3012 },
+@@ -154,6 +159,7 @@ static const struct usb_device_id ath3k_blist_tbl[] = {
+ 	{ USB_DEVICE(0x04ca, 0x300d), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x04ca, 0x300f), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x04ca, 0x3010), .driver_info = BTUSB_ATH3012 },
++	{ USB_DEVICE(0x04ca, 0x3014), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x0930, 0x0219), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x0930, 0x021c), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x0930, 0x0220), .driver_info = BTUSB_ATH3012 },
+@@ -175,10 +181,12 @@ static const struct usb_device_id ath3k_blist_tbl[] = {
+ 	{ USB_DEVICE(0x13d3, 0x3362), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x13d3, 0x3375), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x13d3, 0x3393), .driver_info = BTUSB_ATH3012 },
++	{ USB_DEVICE(0x13d3, 0x3395), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x13d3, 0x3402), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x13d3, 0x3408), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x13d3, 0x3423), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x13d3, 0x3432), .driver_info = BTUSB_ATH3012 },
++	{ USB_DEVICE(0x13d3, 0x3472), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x13d3, 0x3474), .driver_info = BTUSB_ATH3012 },
+ 
+ 	/* Atheros AR5BBU22 with sflash firmware */
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 968897108c76..79107597a594 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -196,6 +196,7 @@ static const struct usb_device_id blacklist_table[] = {
+ 	{ USB_DEVICE(0x0489, 0xe05f), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x0489, 0xe076), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x0489, 0xe078), .driver_info = BTUSB_ATH3012 },
++	{ USB_DEVICE(0x0489, 0xe095), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x04c5, 0x1330), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x04ca, 0x3004), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x04ca, 0x3005), .driver_info = BTUSB_ATH3012 },
+@@ -206,6 +207,7 @@ static const struct usb_device_id blacklist_table[] = {
+ 	{ USB_DEVICE(0x04ca, 0x300d), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x04ca, 0x300f), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x04ca, 0x3010), .driver_info = BTUSB_ATH3012 },
++	{ USB_DEVICE(0x04ca, 0x3014), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x0930, 0x0219), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x0930, 0x021c), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x0930, 0x0220), .driver_info = BTUSB_ATH3012 },
+@@ -227,10 +229,12 @@ static const struct usb_device_id blacklist_table[] = {
+ 	{ USB_DEVICE(0x13d3, 0x3362), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x13d3, 0x3375), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x13d3, 0x3393), .driver_info = BTUSB_ATH3012 },
++	{ USB_DEVICE(0x13d3, 0x3395), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x13d3, 0x3402), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x13d3, 0x3408), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x13d3, 0x3423), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x13d3, 0x3432), .driver_info = BTUSB_ATH3012 },
++	{ USB_DEVICE(0x13d3, 0x3472), .driver_info = BTUSB_ATH3012 },
+ 	{ USB_DEVICE(0x13d3, 0x3474), .driver_info = BTUSB_ATH3012 },
+ 
+ 	/* Atheros AR5BBU12 with sflash firmware */
+diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
+index 45cc39aabeee..252142524ff2 100644
+--- a/drivers/char/tpm/tpm-chip.c
++++ b/drivers/char/tpm/tpm-chip.c
+@@ -136,11 +136,13 @@ struct tpm_chip *tpmm_chip_alloc(struct device *dev,
+ 	chip->cdev.owner = chip->pdev->driver->owner;
+ 	chip->cdev.kobj.parent = &chip->dev.kobj;
+ 
++	devm_add_action(dev, (void (*)(void *)) put_device, &chip->dev);
++
+ 	return chip;
+ }
+ EXPORT_SYMBOL_GPL(tpmm_chip_alloc);
+ 
+-static int tpm_dev_add_device(struct tpm_chip *chip)
++static int tpm_add_char_device(struct tpm_chip *chip)
+ {
+ 	int rc;
+ 
+@@ -151,7 +153,6 @@ static int tpm_dev_add_device(struct tpm_chip *chip)
+ 			chip->devname, MAJOR(chip->dev.devt),
+ 			MINOR(chip->dev.devt), rc);
+ 
+-		device_unregister(&chip->dev);
+ 		return rc;
+ 	}
+ 
+@@ -162,16 +163,17 @@ static int tpm_dev_add_device(struct tpm_chip *chip)
+ 			chip->devname, MAJOR(chip->dev.devt),
+ 			MINOR(chip->dev.devt), rc);
+ 
++		cdev_del(&chip->cdev);
+ 		return rc;
+ 	}
+ 
+ 	return rc;
+ }
+ 
+-static void tpm_dev_del_device(struct tpm_chip *chip)
++static void tpm_del_char_device(struct tpm_chip *chip)
+ {
+ 	cdev_del(&chip->cdev);
+-	device_unregister(&chip->dev);
++	device_del(&chip->dev);
+ }
+ 
+ static int tpm1_chip_register(struct tpm_chip *chip)
+@@ -222,7 +224,7 @@ int tpm_chip_register(struct tpm_chip *chip)
+ 
+ 	tpm_add_ppi(chip);
+ 
+-	rc = tpm_dev_add_device(chip);
++	rc = tpm_add_char_device(chip);
+ 	if (rc)
+ 		goto out_err;
+ 
+@@ -274,6 +276,6 @@ void tpm_chip_unregister(struct tpm_chip *chip)
+ 		sysfs_remove_link(&chip->pdev->kobj, "ppi");
+ 
+ 	tpm1_chip_unregister(chip);
+-	tpm_dev_del_device(chip);
++	tpm_del_char_device(chip);
+ }
+ EXPORT_SYMBOL_GPL(tpm_chip_unregister);
+diff --git a/drivers/char/tpm/tpm_crb.c b/drivers/char/tpm/tpm_crb.c
+index 4bb9727c1047..61e64293b765 100644
+--- a/drivers/char/tpm/tpm_crb.c
++++ b/drivers/char/tpm/tpm_crb.c
+@@ -310,11 +310,11 @@ static int crb_acpi_remove(struct acpi_device *device)
+ 	struct device *dev = &device->dev;
+ 	struct tpm_chip *chip = dev_get_drvdata(dev);
+ 
+-	tpm_chip_unregister(chip);
+-
+ 	if (chip->flags & TPM_CHIP_FLAG_TPM2)
+ 		tpm2_shutdown(chip, TPM2_SU_CLEAR);
+ 
++	tpm_chip_unregister(chip);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/char/tpm/tpm_eventlog.c b/drivers/char/tpm/tpm_eventlog.c
+index bd72fb04225e..4e6940acf639 100644
+--- a/drivers/char/tpm/tpm_eventlog.c
++++ b/drivers/char/tpm/tpm_eventlog.c
+@@ -232,7 +232,7 @@ static int tpm_binary_bios_measurements_show(struct seq_file *m, void *v)
+ {
+ 	struct tcpa_event *event = v;
+ 	struct tcpa_event temp_event;
+-	char *tempPtr;
++	char *temp_ptr;
+ 	int i;
+ 
+ 	memcpy(&temp_event, event, sizeof(struct tcpa_event));
+@@ -242,10 +242,16 @@ static int tpm_binary_bios_measurements_show(struct seq_file *m, void *v)
+ 	temp_event.event_type = do_endian_conversion(event->event_type);
+ 	temp_event.event_size = do_endian_conversion(event->event_size);
+ 
+-	tempPtr = (char *)&temp_event;
++	temp_ptr = (char *) &temp_event;
+ 
+-	for (i = 0; i < sizeof(struct tcpa_event) + temp_event.event_size; i++)
+-		seq_putc(m, tempPtr[i]);
++	for (i = 0; i < (sizeof(struct tcpa_event) - 1) ; i++)
++		seq_putc(m, temp_ptr[i]);
++
++	temp_ptr = (char *) v;
++
++	for (i = (sizeof(struct tcpa_event) - 1);
++	     i < (sizeof(struct tcpa_event) + temp_event.event_size); i++)
++		seq_putc(m, temp_ptr[i]);
+ 
+ 	return 0;
+ 
+diff --git a/drivers/clk/bcm/clk-bcm2835.c b/drivers/clk/bcm/clk-bcm2835.c
+index 39bf5820297e..4f9830c1b121 100644
+--- a/drivers/clk/bcm/clk-bcm2835.c
++++ b/drivers/clk/bcm/clk-bcm2835.c
+@@ -1097,13 +1097,15 @@ static int bcm2835_pll_divider_set_rate(struct clk_hw *hw,
+ 	struct bcm2835_pll_divider *divider = bcm2835_pll_divider_from_hw(hw);
+ 	struct bcm2835_cprman *cprman = divider->cprman;
+ 	const struct bcm2835_pll_divider_data *data = divider->data;
+-	u32 cm;
+-	int ret;
++	u32 cm, div, max_div = 1 << A2W_PLL_DIV_BITS;
+ 
+-	ret = clk_divider_ops.set_rate(hw, rate, parent_rate);
+-	if (ret)
+-		return ret;
++	div = DIV_ROUND_UP_ULL(parent_rate, rate);
++
++	div = min(div, max_div);
++	if (div == max_div)
++		div = 0;
+ 
++	cprman_write(cprman, data->a2w_reg, div);
+ 	cm = cprman_read(cprman, data->cm_reg);
+ 	cprman_write(cprman, data->cm_reg, cm | data->load_mask);
+ 	cprman_write(cprman, data->cm_reg, cm & ~data->load_mask);
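
The clk-bcm2835 hunk computes the PLL divider directly: round parent_rate/rate up, clamp to the field maximum (1 << A2W_PLL_DIV_BITS), and write 0 to request that maximum. A minimal sketch of the encoding; the 8-bit field width is an assumption taken from the driver's register layout:

#include <stdio.h>
#include <stdint.h>

#define A2W_PLL_DIV_BITS 8	/* divider field width (assumed) */

#define DIV_ROUND_UP_ULL(n, d) (((n) + (d) - 1) / (d))

/* Register encoding of the PLL divider: round the ratio up, clamp to
 * the field maximum, and encode "maximum" as 0. */
static uint32_t pll_div_encode(uint64_t parent_rate, uint64_t rate)
{
	uint32_t max_div = 1u << A2W_PLL_DIV_BITS;	/* 256 */
	uint32_t div = DIV_ROUND_UP_ULL(parent_rate, rate);

	if (div > max_div)
		div = max_div;
	if (div == max_div)
		div = 0;				/* 0 means max */
	return div;
}

int main(void)
{
	printf("%u\n", pll_div_encode(2000000000ULL, 100000000ULL));	/* 20 */
	printf("%u\n", pll_div_encode(2000000000ULL,   7000000ULL));	/* 0: clamped */
	return 0;
}
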
+diff --git a/drivers/clk/rockchip/clk-rk3188.c b/drivers/clk/rockchip/clk-rk3188.c
+index abb47608713b..fe728f8dcbe4 100644
+--- a/drivers/clk/rockchip/clk-rk3188.c
++++ b/drivers/clk/rockchip/clk-rk3188.c
+@@ -718,6 +718,7 @@ static const char *const rk3188_critical_clocks[] __initconst = {
+ 	"hclk_peri",
+ 	"pclk_cpu",
+ 	"pclk_peri",
++	"hclk_cpubus"
+ };
+ 
+ static void __init rk3188_common_clk_init(struct device_node *np)
+diff --git a/drivers/clk/rockchip/clk-rk3368.c b/drivers/clk/rockchip/clk-rk3368.c
+index 7e6b783e6eee..1b148694b633 100644
+--- a/drivers/clk/rockchip/clk-rk3368.c
++++ b/drivers/clk/rockchip/clk-rk3368.c
+@@ -165,7 +165,7 @@ static const struct rockchip_cpuclk_reg_data rk3368_cpuclkb_data = {
+ 	.core_reg = RK3368_CLKSEL_CON(0),
+ 	.div_core_shift = 0,
+ 	.div_core_mask = 0x1f,
+-	.mux_core_shift = 15,
++	.mux_core_shift = 7,
+ };
+ 
+ static const struct rockchip_cpuclk_reg_data rk3368_cpuclkl_data = {
+@@ -218,29 +218,29 @@ static const struct rockchip_cpuclk_reg_data rk3368_cpuclkl_data = {
+ 	}
+ 
+ static struct rockchip_cpuclk_rate_table rk3368_cpuclkb_rates[] __initdata = {
+-	RK3368_CPUCLKB_RATE(1512000000, 2, 6, 6),
+-	RK3368_CPUCLKB_RATE(1488000000, 2, 5, 5),
+-	RK3368_CPUCLKB_RATE(1416000000, 2, 5, 5),
+-	RK3368_CPUCLKB_RATE(1200000000, 2, 4, 4),
+-	RK3368_CPUCLKB_RATE(1008000000, 2, 4, 4),
+-	RK3368_CPUCLKB_RATE( 816000000, 2, 3, 3),
+-	RK3368_CPUCLKB_RATE( 696000000, 2, 3, 3),
+-	RK3368_CPUCLKB_RATE( 600000000, 2, 2, 2),
+-	RK3368_CPUCLKB_RATE( 408000000, 2, 2, 2),
+-	RK3368_CPUCLKB_RATE( 312000000, 2, 2, 2),
++	RK3368_CPUCLKB_RATE(1512000000, 1, 5, 5),
++	RK3368_CPUCLKB_RATE(1488000000, 1, 4, 4),
++	RK3368_CPUCLKB_RATE(1416000000, 1, 4, 4),
++	RK3368_CPUCLKB_RATE(1200000000, 1, 3, 3),
++	RK3368_CPUCLKB_RATE(1008000000, 1, 3, 3),
++	RK3368_CPUCLKB_RATE( 816000000, 1, 2, 2),
++	RK3368_CPUCLKB_RATE( 696000000, 1, 2, 2),
++	RK3368_CPUCLKB_RATE( 600000000, 1, 1, 1),
++	RK3368_CPUCLKB_RATE( 408000000, 1, 1, 1),
++	RK3368_CPUCLKB_RATE( 312000000, 1, 1, 1),
+ };
+ 
+ static struct rockchip_cpuclk_rate_table rk3368_cpuclkl_rates[] __initdata = {
+-	RK3368_CPUCLKL_RATE(1512000000, 2, 7, 7),
+-	RK3368_CPUCLKL_RATE(1488000000, 2, 6, 6),
+-	RK3368_CPUCLKL_RATE(1416000000, 2, 6, 6),
+-	RK3368_CPUCLKL_RATE(1200000000, 2, 5, 5),
+-	RK3368_CPUCLKL_RATE(1008000000, 2, 5, 5),
+-	RK3368_CPUCLKL_RATE( 816000000, 2, 4, 4),
+-	RK3368_CPUCLKL_RATE( 696000000, 2, 3, 3),
+-	RK3368_CPUCLKL_RATE( 600000000, 2, 3, 3),
+-	RK3368_CPUCLKL_RATE( 408000000, 2, 2, 2),
+-	RK3368_CPUCLKL_RATE( 312000000, 2, 2, 2),
++	RK3368_CPUCLKL_RATE(1512000000, 1, 6, 6),
++	RK3368_CPUCLKL_RATE(1488000000, 1, 5, 5),
++	RK3368_CPUCLKL_RATE(1416000000, 1, 5, 5),
++	RK3368_CPUCLKL_RATE(1200000000, 1, 4, 4),
++	RK3368_CPUCLKL_RATE(1008000000, 1, 4, 4),
++	RK3368_CPUCLKL_RATE( 816000000, 1, 3, 3),
++	RK3368_CPUCLKL_RATE( 696000000, 1, 2, 2),
++	RK3368_CPUCLKL_RATE( 600000000, 1, 2, 2),
++	RK3368_CPUCLKL_RATE( 408000000, 1, 1, 1),
++	RK3368_CPUCLKL_RATE( 312000000, 1, 1, 1),
+ };
+ 
+ static struct rockchip_clk_branch rk3368_clk_branches[] __initdata = {
+@@ -384,10 +384,10 @@ static struct rockchip_clk_branch rk3368_clk_branches[] __initdata = {
+ 	 * Clock-Architecture Diagram 3
+ 	 */
+ 
+-	COMPOSITE(0, "aclk_vepu", mux_pll_src_cpll_gpll_usb_p, 0,
++	COMPOSITE(0, "aclk_vepu", mux_pll_src_cpll_gpll_npll_usb_p, 0,
+ 			RK3368_CLKSEL_CON(15), 6, 2, MFLAGS, 0, 5, DFLAGS,
+ 			RK3368_CLKGATE_CON(4), 6, GFLAGS),
+-	COMPOSITE(0, "aclk_vdpu", mux_pll_src_cpll_gpll_usb_p, 0,
++	COMPOSITE(0, "aclk_vdpu", mux_pll_src_cpll_gpll_npll_usb_p, 0,
+ 			RK3368_CLKSEL_CON(15), 14, 2, MFLAGS, 8, 5, DFLAGS,
+ 			RK3368_CLKGATE_CON(4), 7, GFLAGS),
+ 
+@@ -442,7 +442,7 @@ static struct rockchip_clk_branch rk3368_clk_branches[] __initdata = {
+ 	GATE(SCLK_HDMI_HDCP, "sclk_hdmi_hdcp", "xin24m", 0,
+ 			RK3368_CLKGATE_CON(4), 13, GFLAGS),
+ 	GATE(SCLK_HDMI_CEC, "sclk_hdmi_cec", "xin32k", 0,
+-			RK3368_CLKGATE_CON(5), 12, GFLAGS),
++			RK3368_CLKGATE_CON(4), 12, GFLAGS),
+ 
+ 	COMPOSITE_NODIV(0, "vip_src", mux_pll_src_cpll_gpll_p, 0,
+ 			RK3368_CLKSEL_CON(21), 15, 1, MFLAGS,
+diff --git a/drivers/crypto/atmel-aes.c b/drivers/crypto/atmel-aes.c
+index fb16d812c8f5..1dffb13e5c2f 100644
+--- a/drivers/crypto/atmel-aes.c
++++ b/drivers/crypto/atmel-aes.c
+@@ -1396,9 +1396,9 @@ static int atmel_aes_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	aes_dd->io_base = devm_ioremap_resource(&pdev->dev, aes_res);
+-	if (!aes_dd->io_base) {
++	if (IS_ERR(aes_dd->io_base)) {
+ 		dev_err(dev, "can't ioremap\n");
+-		err = -ENOMEM;
++		err = PTR_ERR(aes_dd->io_base);
+ 		goto res_err;
+ 	}
+ 
+diff --git a/drivers/crypto/atmel-sha.c b/drivers/crypto/atmel-sha.c
+index 3178f84d2757..0dadb6332f0e 100644
+--- a/drivers/crypto/atmel-sha.c
++++ b/drivers/crypto/atmel-sha.c
+@@ -1405,9 +1405,9 @@ static int atmel_sha_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	sha_dd->io_base = devm_ioremap_resource(&pdev->dev, sha_res);
+-	if (!sha_dd->io_base) {
++	if (IS_ERR(sha_dd->io_base)) {
+ 		dev_err(dev, "can't ioremap\n");
+-		err = -ENOMEM;
++		err = PTR_ERR(sha_dd->io_base);
+ 		goto res_err;
+ 	}
+ 
+diff --git a/drivers/crypto/atmel-tdes.c b/drivers/crypto/atmel-tdes.c
+index 2c7a628d0375..bf467d7be35c 100644
+--- a/drivers/crypto/atmel-tdes.c
++++ b/drivers/crypto/atmel-tdes.c
+@@ -1417,9 +1417,9 @@ static int atmel_tdes_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	tdes_dd->io_base = devm_ioremap_resource(&pdev->dev, tdes_res);
+-	if (!tdes_dd->io_base) {
++	if (IS_ERR(tdes_dd->io_base)) {
+ 		dev_err(dev, "can't ioremap\n");
+-		err = -ENOMEM;
++		err = PTR_ERR(tdes_dd->io_base);
+ 		goto res_err;
+ 	}
+ 
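
The three atmel hunks above (and the marvell/ux500 ones further down) correct the same pattern: devm_ioremap_resource() reports failure with an ERR_PTR-encoded pointer, never NULL, so the result must be tested with IS_ERR() and decoded with PTR_ERR(). A user-space sketch of that convention, with simplified stand-ins for the kernel's err.h helpers:

#include <stdio.h>
#include <errno.h>
#include <stdint.h>

/* Simplified stand-ins for the kernel's err.h helpers: errors are
 * encoded as addresses in the topmost 4095 bytes of the address space. */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error)     { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

/* Pretend ioremap helper: fails with -ENOMEM instead of returning NULL. */
static void *fake_ioremap_resource(int should_fail)
{
	static char mmio_window[0x100];

	return should_fail ? ERR_PTR(-ENOMEM) : (void *)mmio_window;
}

int main(void)
{
	void *base = fake_ioremap_resource(1);

	if (IS_ERR(base)) {	/* correct check */
		printf("ioremap failed: %ld\n", PTR_ERR(base));	/* -12 */
		return 1;
	}
	/* A '!base' test here would never trigger for an ERR_PTR value. */
	printf("mapped at %p\n", base);
	return 0;
}
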
+diff --git a/drivers/crypto/ccp/ccp-crypto-aes-cmac.c b/drivers/crypto/ccp/ccp-crypto-aes-cmac.c
+index d89f20c04266..3d9acc53d247 100644
+--- a/drivers/crypto/ccp/ccp-crypto-aes-cmac.c
++++ b/drivers/crypto/ccp/ccp-crypto-aes-cmac.c
+@@ -220,6 +220,39 @@ static int ccp_aes_cmac_digest(struct ahash_request *req)
+ 	return ccp_aes_cmac_finup(req);
+ }
+ 
++static int ccp_aes_cmac_export(struct ahash_request *req, void *out)
++{
++	struct ccp_aes_cmac_req_ctx *rctx = ahash_request_ctx(req);
++	struct ccp_aes_cmac_exp_ctx state;
++
++	state.null_msg = rctx->null_msg;
++	memcpy(state.iv, rctx->iv, sizeof(state.iv));
++	state.buf_count = rctx->buf_count;
++	memcpy(state.buf, rctx->buf, sizeof(state.buf));
++
++	/* 'out' may not be aligned so memcpy from local variable */
++	memcpy(out, &state, sizeof(state));
++
++	return 0;
++}
++
++static int ccp_aes_cmac_import(struct ahash_request *req, const void *in)
++{
++	struct ccp_aes_cmac_req_ctx *rctx = ahash_request_ctx(req);
++	struct ccp_aes_cmac_exp_ctx state;
++
++	/* 'in' may not be aligned so memcpy to local variable */
++	memcpy(&state, in, sizeof(state));
++
++	memset(rctx, 0, sizeof(*rctx));
++	rctx->null_msg = state.null_msg;
++	memcpy(rctx->iv, state.iv, sizeof(rctx->iv));
++	rctx->buf_count = state.buf_count;
++	memcpy(rctx->buf, state.buf, sizeof(rctx->buf));
++
++	return 0;
++}
++
+ static int ccp_aes_cmac_setkey(struct crypto_ahash *tfm, const u8 *key,
+ 			       unsigned int key_len)
+ {
+@@ -352,10 +385,13 @@ int ccp_register_aes_cmac_algs(struct list_head *head)
+ 	alg->final = ccp_aes_cmac_final;
+ 	alg->finup = ccp_aes_cmac_finup;
+ 	alg->digest = ccp_aes_cmac_digest;
++	alg->export = ccp_aes_cmac_export;
++	alg->import = ccp_aes_cmac_import;
+ 	alg->setkey = ccp_aes_cmac_setkey;
+ 
+ 	halg = &alg->halg;
+ 	halg->digestsize = AES_BLOCK_SIZE;
++	halg->statesize = sizeof(struct ccp_aes_cmac_exp_ctx);
+ 
+ 	base = &halg->base;
+ 	snprintf(base->cra_name, CRYPTO_MAX_ALG_NAME, "cmac(aes)");
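
The new export/import pair snapshots the request context through a local ccp_aes_cmac_exp_ctx because the 'out'/'in' buffers handed to an ahash export are not guaranteed to be aligned for direct field access. A small sketch of the pattern with a made-up state structure (demo_state and its fields are illustrative, not the driver's):

#include <stdio.h>
#include <string.h>

/* Hypothetical running state kept inside a request context. */
struct demo_state {
	unsigned int count;
	unsigned char buf[16];
};

/* Export: snapshot the live state into a caller-supplied, possibly
 * unaligned buffer via a local copy + memcpy. */
static void demo_export(const struct demo_state *live, void *out)
{
	struct demo_state snap = *live;

	memcpy(out, &snap, sizeof(snap));
}

/* Import: the mirror image -- memcpy into a local, then repopulate. */
static void demo_import(struct demo_state *live, const void *in)
{
	struct demo_state snap;

	memcpy(&snap, in, sizeof(snap));
	memset(live, 0, sizeof(*live));
	live->count = snap.count;
	memcpy(live->buf, snap.buf, sizeof(live->buf));
}

int main(void)
{
	struct demo_state a = { .count = 5, .buf = "partial block" };
	struct demo_state b;
	unsigned char blob[sizeof(struct demo_state) + 1];

	demo_export(&a, blob + 1);	/* deliberately misaligned target */
	demo_import(&b, blob + 1);
	printf("count=%u buf=%s\n", b.count, b.buf);	/* count=5 */
	return 0;
}
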
+diff --git a/drivers/crypto/ccp/ccp-crypto-sha.c b/drivers/crypto/ccp/ccp-crypto-sha.c
+index d14b3f28e010..8ef06fad8b14 100644
+--- a/drivers/crypto/ccp/ccp-crypto-sha.c
++++ b/drivers/crypto/ccp/ccp-crypto-sha.c
+@@ -207,6 +207,43 @@ static int ccp_sha_digest(struct ahash_request *req)
+ 	return ccp_sha_finup(req);
+ }
+ 
++static int ccp_sha_export(struct ahash_request *req, void *out)
++{
++	struct ccp_sha_req_ctx *rctx = ahash_request_ctx(req);
++	struct ccp_sha_exp_ctx state;
++
++	state.type = rctx->type;
++	state.msg_bits = rctx->msg_bits;
++	state.first = rctx->first;
++	memcpy(state.ctx, rctx->ctx, sizeof(state.ctx));
++	state.buf_count = rctx->buf_count;
++	memcpy(state.buf, rctx->buf, sizeof(state.buf));
++
++	/* 'out' may not be aligned so memcpy from local variable */
++	memcpy(out, &state, sizeof(state));
++
++	return 0;
++}
++
++static int ccp_sha_import(struct ahash_request *req, const void *in)
++{
++	struct ccp_sha_req_ctx *rctx = ahash_request_ctx(req);
++	struct ccp_sha_exp_ctx state;
++
++	/* 'in' may not be aligned so memcpy to local variable */
++	memcpy(&state, in, sizeof(state));
++
++	memset(rctx, 0, sizeof(*rctx));
++	rctx->type = state.type;
++	rctx->msg_bits = state.msg_bits;
++	rctx->first = state.first;
++	memcpy(rctx->ctx, state.ctx, sizeof(rctx->ctx));
++	rctx->buf_count = state.buf_count;
++	memcpy(rctx->buf, state.buf, sizeof(rctx->buf));
++
++	return 0;
++}
++
+ static int ccp_sha_setkey(struct crypto_ahash *tfm, const u8 *key,
+ 			  unsigned int key_len)
+ {
+@@ -403,9 +440,12 @@ static int ccp_register_sha_alg(struct list_head *head,
+ 	alg->final = ccp_sha_final;
+ 	alg->finup = ccp_sha_finup;
+ 	alg->digest = ccp_sha_digest;
++	alg->export = ccp_sha_export;
++	alg->import = ccp_sha_import;
+ 
+ 	halg = &alg->halg;
+ 	halg->digestsize = def->digest_size;
++	halg->statesize = sizeof(struct ccp_sha_exp_ctx);
+ 
+ 	base = &halg->base;
+ 	snprintf(base->cra_name, CRYPTO_MAX_ALG_NAME, "%s", def->name);
+diff --git a/drivers/crypto/ccp/ccp-crypto.h b/drivers/crypto/ccp/ccp-crypto.h
+index 76a96f0f44c6..a326ec20bfa8 100644
+--- a/drivers/crypto/ccp/ccp-crypto.h
++++ b/drivers/crypto/ccp/ccp-crypto.h
+@@ -129,6 +129,15 @@ struct ccp_aes_cmac_req_ctx {
+ 	struct ccp_cmd cmd;
+ };
+ 
++struct ccp_aes_cmac_exp_ctx {
++	unsigned int null_msg;
++
++	u8 iv[AES_BLOCK_SIZE];
++
++	unsigned int buf_count;
++	u8 buf[AES_BLOCK_SIZE];
++};
++
+ /***** SHA related defines *****/
+ #define MAX_SHA_CONTEXT_SIZE	SHA256_DIGEST_SIZE
+ #define MAX_SHA_BLOCK_SIZE	SHA256_BLOCK_SIZE
+@@ -171,6 +180,19 @@ struct ccp_sha_req_ctx {
+ 	struct ccp_cmd cmd;
+ };
+ 
++struct ccp_sha_exp_ctx {
++	enum ccp_sha_type type;
++
++	u64 msg_bits;
++
++	unsigned int first;
++
++	u8 ctx[MAX_SHA_CONTEXT_SIZE];
++
++	unsigned int buf_count;
++	u8 buf[MAX_SHA_BLOCK_SIZE];
++};
++
+ /***** Common Context Structure *****/
+ struct ccp_ctx {
+ 	int (*complete)(struct crypto_async_request *req, int ret);
+diff --git a/drivers/crypto/marvell/cesa.c b/drivers/crypto/marvell/cesa.c
+index c0656e7f37b5..80239ae69527 100644
+--- a/drivers/crypto/marvell/cesa.c
++++ b/drivers/crypto/marvell/cesa.c
+@@ -420,7 +420,7 @@ static int mv_cesa_probe(struct platform_device *pdev)
+ 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "regs");
+ 	cesa->regs = devm_ioremap_resource(dev, res);
+ 	if (IS_ERR(cesa->regs))
+-		return -ENOMEM;
++		return PTR_ERR(cesa->regs);
+ 
+ 	ret = mv_cesa_dev_dma_init(cesa);
+ 	if (ret)
+diff --git a/drivers/crypto/ux500/cryp/cryp_core.c b/drivers/crypto/ux500/cryp/cryp_core.c
+index 4c243c1ffc7f..790f7cadc1ed 100644
+--- a/drivers/crypto/ux500/cryp/cryp_core.c
++++ b/drivers/crypto/ux500/cryp/cryp_core.c
+@@ -1440,9 +1440,9 @@ static int ux500_cryp_probe(struct platform_device *pdev)
+ 
+ 	device_data->phybase = res->start;
+ 	device_data->base = devm_ioremap_resource(dev, res);
+-	if (!device_data->base) {
++	if (IS_ERR(device_data->base)) {
+ 		dev_err(dev, "[%s]: ioremap failed!", __func__);
+-		ret = -ENOMEM;
++		ret = PTR_ERR(device_data->base);
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/crypto/ux500/hash/hash_core.c b/drivers/crypto/ux500/hash/hash_core.c
+index f47d112041b2..66b1c3313e2e 100644
+--- a/drivers/crypto/ux500/hash/hash_core.c
++++ b/drivers/crypto/ux500/hash/hash_core.c
+@@ -1675,9 +1675,9 @@ static int ux500_hash_probe(struct platform_device *pdev)
+ 
+ 	device_data->phybase = res->start;
+ 	device_data->base = devm_ioremap_resource(dev, res);
+-	if (!device_data->base) {
++	if (IS_ERR(device_data->base)) {
+ 		dev_err(dev, "%s: ioremap() failed!\n", __func__);
+-		ret = -ENOMEM;
++		ret = PTR_ERR(device_data->base);
+ 		goto out;
+ 	}
+ 	spin_lock_init(&device_data->ctx_lock);
+diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c
+index 9eee13ef83a5..d87a47547ba5 100644
+--- a/drivers/edac/amd64_edac.c
++++ b/drivers/edac/amd64_edac.c
+@@ -1452,7 +1452,7 @@ static u64 f1x_get_norm_dct_addr(struct amd64_pvt *pvt, u8 range,
+ 	u64 chan_off;
+ 	u64 dram_base		= get_dram_base(pvt, range);
+ 	u64 hole_off		= f10_dhar_offset(pvt);
+-	u64 dct_sel_base_off	= (pvt->dct_sel_hi & 0xFFFFFC00) << 16;
++	u64 dct_sel_base_off	= (u64)(pvt->dct_sel_hi & 0xFFFFFC00) << 16;
+ 
+ 	if (hi_rng) {
+ 		/*
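
The amd64_edac one-liner casts to u64 before the 16-bit shift; without it the AND and the shift happen in 32 bits, and the upper bits of the DCT select base offset are discarded before the result is widened. A minimal demonstration of the truncation:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t dct_sel_hi = 0x12345C00;	/* example register contents */

	/* 32-bit shift: the value wraps before the assignment widens it. */
	uint64_t truncated = (dct_sel_hi & 0xFFFFFC00) << 16;

	/* Cast first, then shift: all meaningful bits survive. */
	uint64_t correct = (uint64_t)(dct_sel_hi & 0xFFFFFC00) << 16;

	printf("truncated = %#llx\n", (unsigned long long)truncated);	/* 0x5c000000 */
	printf("correct   = %#llx\n", (unsigned long long)correct);	/* 0x12345c000000 */
	return 0;
}
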
+diff --git a/drivers/edac/sb_edac.c b/drivers/edac/sb_edac.c
+index 429309c62699..cbee3179ec08 100644
+--- a/drivers/edac/sb_edac.c
++++ b/drivers/edac/sb_edac.c
+@@ -1117,8 +1117,8 @@ static void get_memory_layout(const struct mem_ctl_info *mci)
+ 		edac_dbg(0, "TAD#%d: up to %u.%03u GB (0x%016Lx), socket interleave %d, memory interleave %d, TGT: %d, %d, %d, %d, reg=0x%08x\n",
+ 			 n_tads, gb, (mb*1000)/1024,
+ 			 ((u64)tmp_mb) << 20L,
+-			 (u32)TAD_SOCK(reg),
+-			 (u32)TAD_CH(reg),
++			 (u32)(1 << TAD_SOCK(reg)),
++			 (u32)TAD_CH(reg) + 1,
+ 			 (u32)TAD_TGT0(reg),
+ 			 (u32)TAD_TGT1(reg),
+ 			 (u32)TAD_TGT2(reg),
+@@ -1396,7 +1396,7 @@ static int get_memory_error_data(struct mem_ctl_info *mci,
+ 	}
+ 
+ 	ch_way = TAD_CH(reg) + 1;
+-	sck_way = TAD_SOCK(reg) + 1;
++	sck_way = 1 << TAD_SOCK(reg);
+ 
+ 	if (ch_way == 3)
+ 		idx = addr >> 6;
+@@ -1453,7 +1453,7 @@ static int get_memory_error_data(struct mem_ctl_info *mci,
+ 		 n_tads,
+ 		 addr,
+ 		 limit,
+-		 (u32)TAD_SOCK(reg),
++		 sck_way,
+ 		 ch_way,
+ 		 offset,
+ 		 idx,
+@@ -1468,18 +1468,12 @@ static int get_memory_error_data(struct mem_ctl_info *mci,
+ 			offset, addr);
+ 		return -EINVAL;
+ 	}
+-	addr -= offset;
+-	/* Store the low bits [0:6] of the addr */
+-	ch_addr = addr & 0x7f;
+-	/* Remove socket wayness and remove 6 bits */
+-	addr >>= 6;
+-	addr = div_u64(addr, sck_xch);
+-#if 0
+-	/* Divide by channel way */
+-	addr = addr / ch_way;
+-#endif
+-	/* Recover the last 6 bits */
+-	ch_addr |= addr << 6;
++
++	ch_addr = addr - offset;
++	ch_addr >>= (6 + shiftup);
++	ch_addr /= ch_way * sck_way;
++	ch_addr <<= (6 + shiftup);
++	ch_addr |= addr & ((1 << (6 + shiftup)) - 1);
+ 
+ 	/*
+ 	 * Step 3) Decode rank
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c
+index 5a8fbadbd27b..8ac49812a716 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c
+@@ -63,6 +63,10 @@ bool amdgpu_has_atpx(void) {
+ 	return amdgpu_atpx_priv.atpx_detected;
+ }
+ 
++bool amdgpu_has_atpx_dgpu_power_cntl(void) {
++	return amdgpu_atpx_priv.atpx.functions.power_cntl;
++}
++
+ /**
+  * amdgpu_atpx_call - call an ATPX method
+  *
+@@ -142,10 +146,6 @@ static void amdgpu_atpx_parse_functions(struct amdgpu_atpx_functions *f, u32 mas
+  */
+ static int amdgpu_atpx_validate(struct amdgpu_atpx *atpx)
+ {
+-	/* make sure required functions are enabled */
+-	/* dGPU power control is required */
+-	atpx->functions.power_cntl = true;
+-
+ 	if (atpx->functions.px_params) {
+ 		union acpi_object *info;
+ 		struct atpx_px_params output;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index c961fe093e12..9d88023df836 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -61,6 +61,12 @@ static const char *amdgpu_asic_name[] = {
+ 	"LAST",
+ };
+ 
++#if defined(CONFIG_VGA_SWITCHEROO)
++bool amdgpu_has_atpx_dgpu_power_cntl(void);
++#else
++static inline bool amdgpu_has_atpx_dgpu_power_cntl(void) { return false; }
++#endif
++
+ bool amdgpu_device_is_px(struct drm_device *dev)
+ {
+ 	struct amdgpu_device *adev = dev->dev_private;
+@@ -1469,7 +1475,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
+ 
+ 	if (amdgpu_runtime_pm == 1)
+ 		runtime = true;
+-	if (amdgpu_device_is_px(ddev))
++	if (amdgpu_device_is_px(ddev) && amdgpu_has_atpx_dgpu_power_cntl())
+ 		runtime = true;
+ 	vga_switcheroo_register_client(adev->pdev, &amdgpu_switcheroo_ops, runtime);
+ 	if (runtime)
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c b/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c
+index 2cf50180cc51..b1c7a9b3631b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c
+@@ -32,8 +32,8 @@
+ #include "oss/oss_2_4_d.h"
+ #include "oss/oss_2_4_sh_mask.h"
+ 
+-#include "gmc/gmc_8_1_d.h"
+-#include "gmc/gmc_8_1_sh_mask.h"
++#include "gmc/gmc_7_1_d.h"
++#include "gmc/gmc_7_1_sh_mask.h"
+ 
+ #include "gca/gfx_8_0_d.h"
+ #include "gca/gfx_8_0_enum.h"
+diff --git a/drivers/gpu/drm/radeon/atombios_encoders.c b/drivers/gpu/drm/radeon/atombios_encoders.c
+index bb292143997e..adf74f4366bb 100644
+--- a/drivers/gpu/drm/radeon/atombios_encoders.c
++++ b/drivers/gpu/drm/radeon/atombios_encoders.c
+@@ -892,8 +892,6 @@ atombios_dig_encoder_setup2(struct drm_encoder *encoder, int action, int panel_m
+ 			else
+ 				args.v1.ucLaneNum = 4;
+ 
+-			if (ENCODER_MODE_IS_DP(args.v1.ucEncoderMode) && (dp_clock == 270000))
+-				args.v1.ucConfig |= ATOM_ENCODER_CONFIG_DPLINKRATE_2_70GHZ;
+ 			switch (radeon_encoder->encoder_id) {
+ 			case ENCODER_OBJECT_ID_INTERNAL_UNIPHY:
+ 				args.v1.ucConfig = ATOM_ENCODER_CONFIG_V2_TRANSMITTER1;
+@@ -910,6 +908,10 @@ atombios_dig_encoder_setup2(struct drm_encoder *encoder, int action, int panel_m
+ 				args.v1.ucConfig |= ATOM_ENCODER_CONFIG_LINKB;
+ 			else
+ 				args.v1.ucConfig |= ATOM_ENCODER_CONFIG_LINKA;
++
++			if (ENCODER_MODE_IS_DP(args.v1.ucEncoderMode) && (dp_clock == 270000))
++				args.v1.ucConfig |= ATOM_ENCODER_CONFIG_DPLINKRATE_2_70GHZ;
++
+ 			break;
+ 		case 2:
+ 		case 3:
+diff --git a/drivers/gpu/drm/radeon/radeon_atpx_handler.c b/drivers/gpu/drm/radeon/radeon_atpx_handler.c
+index c4b4f298a283..9bc408c9f9f6 100644
+--- a/drivers/gpu/drm/radeon/radeon_atpx_handler.c
++++ b/drivers/gpu/drm/radeon/radeon_atpx_handler.c
+@@ -62,6 +62,10 @@ bool radeon_has_atpx(void) {
+ 	return radeon_atpx_priv.atpx_detected;
+ }
+ 
++bool radeon_has_atpx_dgpu_power_cntl(void) {
++	return radeon_atpx_priv.atpx.functions.power_cntl;
++}
++
+ /**
+  * radeon_atpx_call - call an ATPX method
+  *
+@@ -141,10 +145,6 @@ static void radeon_atpx_parse_functions(struct radeon_atpx_functions *f, u32 mas
+  */
+ static int radeon_atpx_validate(struct radeon_atpx *atpx)
+ {
+-	/* make sure required functions are enabled */
+-	/* dGPU power control is required */
+-	atpx->functions.power_cntl = true;
+-
+ 	if (atpx->functions.px_params) {
+ 		union acpi_object *info;
+ 		struct atpx_px_params output;
+diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c
+index c566993a2ec3..f78f111e68de 100644
+--- a/drivers/gpu/drm/radeon/radeon_device.c
++++ b/drivers/gpu/drm/radeon/radeon_device.c
+@@ -103,6 +103,12 @@ static const char radeon_family_name[][16] = {
+ 	"LAST",
+ };
+ 
++#if defined(CONFIG_VGA_SWITCHEROO)
++bool radeon_has_atpx_dgpu_power_cntl(void);
++#else
++static inline bool radeon_has_atpx_dgpu_power_cntl(void) { return false; }
++#endif
++
+ #define RADEON_PX_QUIRK_DISABLE_PX  (1 << 0)
+ #define RADEON_PX_QUIRK_LONG_WAKEUP (1 << 1)
+ 
+@@ -1433,7 +1439,7 @@ int radeon_device_init(struct radeon_device *rdev,
+ 	 * ignore it */
+ 	vga_client_register(rdev->pdev, rdev, NULL, radeon_vga_set_decode);
+ 
+-	if (rdev->flags & RADEON_IS_PX)
++	if ((rdev->flags & RADEON_IS_PX) && radeon_has_atpx_dgpu_power_cntl())
+ 		runtime = true;
+ 	vga_switcheroo_register_client(rdev->pdev, &radeon_switcheroo_ops, runtime);
+ 	if (runtime)
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index c6f7a694f67a..ec791e169f8f 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -1897,6 +1897,7 @@ static const struct hid_device_id hid_have_special_driver[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_ELITE_KBD) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_CORDLESS_DESKTOP_LX500) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_EXTREME_3D) },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_DUAL_ACTION) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_WHEEL) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_RUMBLEPAD_CORD) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_RUMBLEPAD) },
+@@ -2615,9 +2616,10 @@ int hid_add_device(struct hid_device *hdev)
+ 	/*
+ 	 * Scan generic devices for group information
+ 	 */
+-	if (hid_ignore_special_drivers ||
+-	    (!hdev->group &&
+-	     !hid_match_id(hdev, hid_have_special_driver))) {
++	if (hid_ignore_special_drivers) {
++		hdev->group = HID_GROUP_GENERIC;
++	} else if (!hdev->group &&
++		   !hid_match_id(hdev, hid_have_special_driver)) {
+ 		ret = hid_scan_report(hdev);
+ 		if (ret)
+ 			hid_warn(hdev, "bad device descriptor (%d)\n", ret);
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index 2b8ff18d3713..c5ec4f915594 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -396,6 +396,11 @@ static void mt_feature_mapping(struct hid_device *hdev,
+ 			td->is_buttonpad = true;
+ 
+ 		break;
++	case 0xff0000c5:
++		/* Retrieve the Win8 blob once to enable some devices */
++		if (usage->usage_index == 0)
++			mt_get_feature(hdev, field->report);
++		break;
+ 	}
+ }
+ 
+diff --git a/drivers/hid/i2c-hid/i2c-hid.c b/drivers/hid/i2c-hid/i2c-hid.c
+index 10bd8e6e4c9c..0b80633bae91 100644
+--- a/drivers/hid/i2c-hid/i2c-hid.c
++++ b/drivers/hid/i2c-hid/i2c-hid.c
+@@ -282,17 +282,21 @@ static int i2c_hid_set_or_send_report(struct i2c_client *client, u8 reportType,
+ 	u16 dataRegister = le16_to_cpu(ihid->hdesc.wDataRegister);
+ 	u16 outputRegister = le16_to_cpu(ihid->hdesc.wOutputRegister);
+ 	u16 maxOutputLength = le16_to_cpu(ihid->hdesc.wMaxOutputLength);
++	u16 size;
++	int args_len;
++	int index = 0;
++
++	i2c_hid_dbg(ihid, "%s\n", __func__);
++
++	if (data_len > ihid->bufsize)
++		return -EINVAL;
+ 
+-	/* hid_hw_* already checked that data_len < HID_MAX_BUFFER_SIZE */
+-	u16 size =	2			/* size */ +
++	size =		2			/* size */ +
+ 			(reportID ? 1 : 0)	/* reportID */ +
+ 			data_len		/* buf */;
+-	int args_len =	(reportID >= 0x0F ? 1 : 0) /* optional third byte */ +
++	args_len =	(reportID >= 0x0F ? 1 : 0) /* optional third byte */ +
+ 			2			/* dataRegister */ +
+ 			size			/* args */;
+-	int index = 0;
+-
+-	i2c_hid_dbg(ihid, "%s\n", __func__);
+ 
+ 	if (!use_data && maxOutputLength == 0)
+ 		return -ENOSYS;
+diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c
+index cd4510a63375..146eed70bdf4 100644
+--- a/drivers/idle/intel_idle.c
++++ b/drivers/idle/intel_idle.c
+@@ -65,7 +65,7 @@
+ #include <asm/mwait.h>
+ #include <asm/msr.h>
+ 
+-#define INTEL_IDLE_VERSION "0.4"
++#define INTEL_IDLE_VERSION "0.4.1"
+ #define PREFIX "intel_idle: "
+ 
+ static struct cpuidle_driver intel_idle_driver = {
+@@ -994,36 +994,92 @@ static void intel_idle_cpuidle_devices_uninit(void)
+ }
+ 
+ /*
+- * intel_idle_state_table_update()
+- *
+- * Update the default state_table for this CPU-id
++ * ivt_idle_state_table_update(void)
+  *
+- * Currently used to access tuned IVT multi-socket targets
++ * Tune IVT multi-socket targets
+  * Assumption: num_sockets == (max_package_num + 1)
+  */
+-void intel_idle_state_table_update(void)
++static void ivt_idle_state_table_update(void)
+ {
+ 	/* IVT uses a different table for 1-2, 3-4, and > 4 sockets */
+-	if (boot_cpu_data.x86_model == 0x3e) { /* IVT */
+-		int cpu, package_num, num_sockets = 1;
+-
+-		for_each_online_cpu(cpu) {
+-			package_num = topology_physical_package_id(cpu);
+-			if (package_num + 1 > num_sockets) {
+-				num_sockets = package_num + 1;
+-
+-				if (num_sockets > 4) {
+-					cpuidle_state_table = ivt_cstates_8s;
+-					return;
+-				}
++	int cpu, package_num, num_sockets = 1;
++
++	for_each_online_cpu(cpu) {
++		package_num = topology_physical_package_id(cpu);
++		if (package_num + 1 > num_sockets) {
++			num_sockets = package_num + 1;
++
++			if (num_sockets > 4) {
++				cpuidle_state_table = ivt_cstates_8s;
++				return;
+ 			}
+ 		}
++	}
++
++	if (num_sockets > 2)
++		cpuidle_state_table = ivt_cstates_4s;
++
++	/* else, 1 and 2 socket systems use default ivt_cstates */
++}
++/*
++ * sklh_idle_state_table_update(void)
++ *
++ * On SKL-H (model 0x5e) disable C8 and C9 if:
++ * C10 is enabled and SGX disabled
++ */
++static void sklh_idle_state_table_update(void)
++{
++	unsigned long long msr;
++	unsigned int eax, ebx, ecx, edx;
++
++
++	/* if PC10 disabled via cmdline intel_idle.max_cstate=7 or shallower */
++	if (max_cstate <= 7)
++		return;
++
++	/* if PC10 not present in CPUID.MWAIT.EDX */
++	if ((mwait_substates & (0xF << 28)) == 0)
++		return;
++
++	rdmsrl(MSR_NHM_SNB_PKG_CST_CFG_CTL, msr);
++
++	/* PC10 is not enabled in PKG C-state limit */
++	if ((msr & 0xF) != 8)
++		return;
++
++	ecx = 0;
++	cpuid(7, &eax, &ebx, &ecx, &edx);
++
++	/* if SGX is present */
++	if (ebx & (1 << 2)) {
+ 
+-		if (num_sockets > 2)
+-			cpuidle_state_table = ivt_cstates_4s;
+-		/* else, 1 and 2 socket systems use default ivt_cstates */
++		rdmsrl(MSR_IA32_FEATURE_CONTROL, msr);
++
++		/* if SGX is enabled */
++		if (msr & (1 << 18))
++			return;
++	}
++
++	skl_cstates[5].disabled = 1;	/* C8-SKL */
++	skl_cstates[6].disabled = 1;	/* C9-SKL */
++}
++/*
++ * intel_idle_state_table_update()
++ *
++ * Update the default state_table for this CPU-id
++ */
++
++static void intel_idle_state_table_update(void)
++{
++	switch (boot_cpu_data.x86_model) {
++
++	case 0x3e: /* IVT */
++		ivt_idle_state_table_update();
++		break;
++	case 0x5e: /* SKL-H */
++		sklh_idle_state_table_update();
++		break;
+ 	}
+-	return;
+ }
+ 
+ /*
+@@ -1063,6 +1119,14 @@ static int __init intel_idle_cpuidle_driver_init(void)
+ 		if (num_substates == 0)
+ 			continue;
+ 
++		/* if state marked as disabled, skip it */
++		if (cpuidle_state_table[cstate].disabled != 0) {
++			pr_debug(PREFIX "state %s is disabled",
++				cpuidle_state_table[cstate].name);
++			continue;
++		}
++
++
+ 		if (((mwait_cstate + 1) > 2) &&
+ 			!boot_cpu_has(X86_FEATURE_NONSTOP_TSC))
+ 			mark_tsc_unstable("TSC halts in idle"
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+index f357ca67a41c..87799de90a1d 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+@@ -456,7 +456,10 @@ out_locked:
+ 	return status;
+ }
+ 
+-static void ipoib_mcast_join(struct net_device *dev, struct ipoib_mcast *mcast)
++/*
++ * Caller must hold 'priv->lock'
++ */
++static int ipoib_mcast_join(struct net_device *dev, struct ipoib_mcast *mcast)
+ {
+ 	struct ipoib_dev_priv *priv = netdev_priv(dev);
+ 	struct ib_sa_multicast *multicast;
+@@ -466,6 +469,10 @@ static void ipoib_mcast_join(struct net_device *dev, struct ipoib_mcast *mcast)
+ 	ib_sa_comp_mask comp_mask;
+ 	int ret = 0;
+ 
++	if (!priv->broadcast ||
++	    !test_bit(IPOIB_FLAG_OPER_UP, &priv->flags))
++		return -EINVAL;
++
+ 	ipoib_dbg_mcast(priv, "joining MGID %pI6\n", mcast->mcmember.mgid.raw);
+ 
+ 	rec.mgid     = mcast->mcmember.mgid;
+@@ -525,20 +532,23 @@ static void ipoib_mcast_join(struct net_device *dev, struct ipoib_mcast *mcast)
+ 			rec.join_state = 4;
+ #endif
+ 	}
++	spin_unlock_irq(&priv->lock);
+ 
+ 	multicast = ib_sa_join_multicast(&ipoib_sa_client, priv->ca, priv->port,
+ 					 &rec, comp_mask, GFP_KERNEL,
+ 					 ipoib_mcast_join_complete, mcast);
++	spin_lock_irq(&priv->lock);
+ 	if (IS_ERR(multicast)) {
+ 		ret = PTR_ERR(multicast);
+ 		ipoib_warn(priv, "ib_sa_join_multicast failed, status %d\n", ret);
+-		spin_lock_irq(&priv->lock);
+ 		/* Requeue this join task with a backoff delay */
+ 		__ipoib_mcast_schedule_join_thread(priv, mcast, 1);
+ 		clear_bit(IPOIB_MCAST_FLAG_BUSY, &mcast->flags);
+ 		spin_unlock_irq(&priv->lock);
+ 		complete(&mcast->done);
++		spin_lock_irq(&priv->lock);
+ 	}
++	return 0;
+ }
+ 
+ void ipoib_mcast_join_task(struct work_struct *work)
+@@ -620,9 +630,10 @@ void ipoib_mcast_join_task(struct work_struct *work)
+ 				/* Found the next unjoined group */
+ 				init_completion(&mcast->done);
+ 				set_bit(IPOIB_MCAST_FLAG_BUSY, &mcast->flags);
+-				spin_unlock_irq(&priv->lock);
+-				ipoib_mcast_join(dev, mcast);
+-				spin_lock_irq(&priv->lock);
++				if (ipoib_mcast_join(dev, mcast)) {
++					spin_unlock_irq(&priv->lock);
++					return;
++				}
+ 			} else if (!delay_until ||
+ 				 time_before(mcast->delay_until, delay_until))
+ 				delay_until = mcast->delay_until;
+@@ -641,10 +652,9 @@ out:
+ 	if (mcast) {
+ 		init_completion(&mcast->done);
+ 		set_bit(IPOIB_MCAST_FLAG_BUSY, &mcast->flags);
++		ipoib_mcast_join(dev, mcast);
+ 	}
+ 	spin_unlock_irq(&priv->lock);
+-	if (mcast)
+-		ipoib_mcast_join(dev, mcast);
+ }
+ 
+ int ipoib_mcast_start_thread(struct net_device *dev)
+diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
+index 8a51c3b5d657..b0edb66a291b 100644
+--- a/drivers/infiniband/ulp/isert/ib_isert.c
++++ b/drivers/infiniband/ulp/isert/ib_isert.c
+@@ -66,6 +66,7 @@ isert_rdma_accept(struct isert_conn *isert_conn);
+ struct rdma_cm_id *isert_setup_id(struct isert_np *isert_np);
+ 
+ static void isert_release_work(struct work_struct *work);
++static void isert_wait4flush(struct isert_conn *isert_conn);
+ 
+ static inline bool
+ isert_prot_cmd(struct isert_conn *conn, struct se_cmd *cmd)
+@@ -815,12 +816,31 @@ isert_put_conn(struct isert_conn *isert_conn)
+ 	kref_put(&isert_conn->kref, isert_release_kref);
+ }
+ 
++static void
++isert_handle_unbound_conn(struct isert_conn *isert_conn)
++{
++	struct isert_np *isert_np = isert_conn->cm_id->context;
++
++	mutex_lock(&isert_np->mutex);
++	if (!list_empty(&isert_conn->node)) {
++		/*
++		 * This means iscsi doesn't know this connection
++		 * so schedule a cleanup ourselves
++		 */
++		list_del_init(&isert_conn->node);
++		isert_put_conn(isert_conn);
++		complete(&isert_conn->wait);
++		queue_work(isert_release_wq, &isert_conn->release_work);
++	}
++	mutex_unlock(&isert_np->mutex);
++}
++
+ /**
+  * isert_conn_terminate() - Initiate connection termination
+  * @isert_conn: isert connection struct
+  *
+  * Notes:
+- * In case the connection state is FULL_FEATURE, move state
++ * In case the connection state is BOUND, move state
+  * to TEMINATING and start teardown sequence (rdma_disconnect).
+  * In case the connection state is UP, complete flush as well.
+  *
+@@ -832,23 +852,19 @@ isert_conn_terminate(struct isert_conn *isert_conn)
+ {
+ 	int err;
+ 
+-	switch (isert_conn->state) {
+-	case ISER_CONN_TERMINATING:
+-		break;
+-	case ISER_CONN_UP:
+-	case ISER_CONN_FULL_FEATURE: /* FALLTHRU */
+-		isert_info("Terminating conn %p state %d\n",
+-			   isert_conn, isert_conn->state);
+-		isert_conn->state = ISER_CONN_TERMINATING;
+-		err = rdma_disconnect(isert_conn->cm_id);
+-		if (err)
+-			isert_warn("Failed rdma_disconnect isert_conn %p\n",
+-				   isert_conn);
+-		break;
+-	default:
+-		isert_warn("conn %p teminating in state %d\n",
+-			   isert_conn, isert_conn->state);
+-	}
++	if (isert_conn->state >= ISER_CONN_TERMINATING)
++		return;
++
++	isert_info("Terminating conn %p state %d\n",
++		   isert_conn, isert_conn->state);
++	isert_conn->state = ISER_CONN_TERMINATING;
++	err = rdma_disconnect(isert_conn->cm_id);
++	if (err)
++		isert_warn("Failed rdma_disconnect isert_conn %p\n",
++			   isert_conn);
++
++	isert_info("conn %p completing wait\n", isert_conn);
++	complete(&isert_conn->wait);
+ }
+ 
+ static int
+@@ -882,35 +898,27 @@ static int
+ isert_disconnected_handler(struct rdma_cm_id *cma_id,
+ 			   enum rdma_cm_event_type event)
+ {
+-	struct isert_np *isert_np = cma_id->context;
+-	struct isert_conn *isert_conn;
+-	bool terminating = false;
+-
+-	if (isert_np->cm_id == cma_id)
+-		return isert_np_cma_handler(cma_id->context, event);
+-
+-	isert_conn = cma_id->qp->qp_context;
++	struct isert_conn *isert_conn = cma_id->qp->qp_context;
+ 
+ 	mutex_lock(&isert_conn->mutex);
+-	terminating = (isert_conn->state == ISER_CONN_TERMINATING);
+-	isert_conn_terminate(isert_conn);
+-	mutex_unlock(&isert_conn->mutex);
+-
+-	isert_info("conn %p completing wait\n", isert_conn);
+-	complete(&isert_conn->wait);
+-
+-	if (terminating)
+-		goto out;
+-
+-	mutex_lock(&isert_np->mutex);
+-	if (!list_empty(&isert_conn->node)) {
+-		list_del_init(&isert_conn->node);
+-		isert_put_conn(isert_conn);
+-		queue_work(isert_release_wq, &isert_conn->release_work);
++	switch (isert_conn->state) {
++	case ISER_CONN_TERMINATING:
++		break;
++	case ISER_CONN_UP:
++		isert_conn_terminate(isert_conn);
++		isert_wait4flush(isert_conn);
++		isert_handle_unbound_conn(isert_conn);
++		break;
++	case ISER_CONN_BOUND:
++	case ISER_CONN_FULL_FEATURE: /* FALLTHRU */
++		iscsit_cause_connection_reinstatement(isert_conn->conn, 0);
++		break;
++	default:
++		isert_warn("conn %p teminating in state %d\n",
++			   isert_conn, isert_conn->state);
+ 	}
+-	mutex_unlock(&isert_np->mutex);
++	mutex_unlock(&isert_conn->mutex);
+ 
+-out:
+ 	return 0;
+ }
+ 
+@@ -929,12 +937,16 @@ isert_connect_error(struct rdma_cm_id *cma_id)
+ static int
+ isert_cma_handler(struct rdma_cm_id *cma_id, struct rdma_cm_event *event)
+ {
++	struct isert_np *isert_np = cma_id->context;
+ 	int ret = 0;
+ 
+ 	isert_info("%s (%d): status %d id %p np %p\n",
+ 		   rdma_event_msg(event->event), event->event,
+ 		   event->status, cma_id, cma_id->context);
+ 
++	if (isert_np->cm_id == cma_id)
++		return isert_np_cma_handler(cma_id->context, event->event);
++
+ 	switch (event->event) {
+ 	case RDMA_CM_EVENT_CONNECT_REQUEST:
+ 		ret = isert_connect_request(cma_id, event);
+@@ -980,13 +992,10 @@ isert_post_recvm(struct isert_conn *isert_conn, u32 count)
+ 	rx_wr--;
+ 	rx_wr->next = NULL; /* mark end of work requests list */
+ 
+-	isert_conn->post_recv_buf_count += count;
+ 	ret = ib_post_recv(isert_conn->qp, isert_conn->rx_wr,
+ 			   &rx_wr_failed);
+-	if (ret) {
++	if (ret)
+ 		isert_err("ib_post_recv() failed with ret: %d\n", ret);
+-		isert_conn->post_recv_buf_count -= count;
+-	}
+ 
+ 	return ret;
+ }
+@@ -1002,12 +1011,9 @@ isert_post_recv(struct isert_conn *isert_conn, struct iser_rx_desc *rx_desc)
+ 	rx_wr.num_sge = 1;
+ 	rx_wr.next = NULL;
+ 
+-	isert_conn->post_recv_buf_count++;
+ 	ret = ib_post_recv(isert_conn->qp, &rx_wr, &rx_wr_failed);
+-	if (ret) {
++	if (ret)
+ 		isert_err("ib_post_recv() failed with ret: %d\n", ret);
+-		isert_conn->post_recv_buf_count--;
+-	}
+ 
+ 	return ret;
+ }
+@@ -1120,12 +1126,9 @@ isert_rdma_post_recvl(struct isert_conn *isert_conn)
+ 	rx_wr.sg_list = &sge;
+ 	rx_wr.num_sge = 1;
+ 
+-	isert_conn->post_recv_buf_count++;
+ 	ret = ib_post_recv(isert_conn->qp, &rx_wr, &rx_wr_fail);
+-	if (ret) {
++	if (ret)
+ 		isert_err("ib_post_recv() failed: %d\n", ret);
+-		isert_conn->post_recv_buf_count--;
+-	}
+ 
+ 	return ret;
+ }
+@@ -1620,7 +1623,6 @@ isert_rcv_completion(struct iser_rx_desc *desc,
+ 	ib_dma_sync_single_for_device(ib_dev, rx_dma, rx_buflen,
+ 				      DMA_FROM_DEVICE);
+ 
+-	isert_conn->post_recv_buf_count--;
+ }
+ 
+ static int
+@@ -2035,7 +2037,8 @@ is_isert_tx_desc(struct isert_conn *isert_conn, void *wr_id)
+ 	void *start = isert_conn->rx_descs;
+ 	int len = ISERT_QP_MAX_RECV_DTOS * sizeof(*isert_conn->rx_descs);
+ 
+-	if (wr_id >= start && wr_id < start + len)
++	if ((wr_id >= start && wr_id < start + len) ||
++	    (wr_id == isert_conn->login_req_buf))
+ 		return false;
+ 
+ 	return true;
+@@ -2059,10 +2062,6 @@ isert_cq_comp_err(struct isert_conn *isert_conn, struct ib_wc *wc)
+ 			isert_unmap_tx_desc(desc, ib_dev);
+ 		else
+ 			isert_completion_put(desc, isert_cmd, ib_dev, true);
+-	} else {
+-		isert_conn->post_recv_buf_count--;
+-		if (!isert_conn->post_recv_buf_count)
+-			iscsit_cause_connection_reinstatement(isert_conn->conn, 0);
+ 	}
+ }
+ 
+@@ -3193,6 +3192,7 @@ accept_wait:
+ 
+ 	conn->context = isert_conn;
+ 	isert_conn->conn = conn;
++	isert_conn->state = ISER_CONN_BOUND;
+ 
+ 	isert_set_conn_info(np, conn, isert_conn);
+ 
+diff --git a/drivers/infiniband/ulp/isert/ib_isert.h b/drivers/infiniband/ulp/isert/ib_isert.h
+index 3d7fbc47c343..1874d21daee0 100644
+--- a/drivers/infiniband/ulp/isert/ib_isert.h
++++ b/drivers/infiniband/ulp/isert/ib_isert.h
+@@ -50,6 +50,7 @@ enum iser_ib_op_code {
+ enum iser_conn_state {
+ 	ISER_CONN_INIT,
+ 	ISER_CONN_UP,
++	ISER_CONN_BOUND,
+ 	ISER_CONN_FULL_FEATURE,
+ 	ISER_CONN_TERMINATING,
+ 	ISER_CONN_DOWN,
+@@ -144,7 +145,6 @@ struct isert_device;
+ 
+ struct isert_conn {
+ 	enum iser_conn_state	state;
+-	int			post_recv_buf_count;
+ 	u32			responder_resources;
+ 	u32			initiator_depth;
+ 	bool			pi_support;
+diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
+index 2e2fe818ca9f..eaabf3125846 100644
+--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
++++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
+@@ -1737,47 +1737,6 @@ send_sense:
+ 	return -1;
+ }
+ 
+-/**
+- * srpt_rx_mgmt_fn_tag() - Process a task management function by tag.
+- * @ch: RDMA channel of the task management request.
+- * @fn: Task management function to perform.
+- * @req_tag: Tag of the SRP task management request.
+- * @mgmt_ioctx: I/O context of the task management request.
+- *
+- * Returns zero if the target core will process the task management
+- * request asynchronously.
+- *
+- * Note: It is assumed that the initiator serializes tag-based task management
+- * requests.
+- */
+-static int srpt_rx_mgmt_fn_tag(struct srpt_send_ioctx *ioctx, u64 tag)
+-{
+-	struct srpt_device *sdev;
+-	struct srpt_rdma_ch *ch;
+-	struct srpt_send_ioctx *target;
+-	int ret, i;
+-
+-	ret = -EINVAL;
+-	ch = ioctx->ch;
+-	BUG_ON(!ch);
+-	BUG_ON(!ch->sport);
+-	sdev = ch->sport->sdev;
+-	BUG_ON(!sdev);
+-	spin_lock_irq(&sdev->spinlock);
+-	for (i = 0; i < ch->rq_size; ++i) {
+-		target = ch->ioctx_ring[i];
+-		if (target->cmd.se_lun == ioctx->cmd.se_lun &&
+-		    target->cmd.tag == tag &&
+-		    srpt_get_cmd_state(target) != SRPT_STATE_DONE) {
+-			ret = 0;
+-			/* now let the target core abort &target->cmd; */
+-			break;
+-		}
+-	}
+-	spin_unlock_irq(&sdev->spinlock);
+-	return ret;
+-}
+-
+ static int srp_tmr_to_tcm(int fn)
+ {
+ 	switch (fn) {
+@@ -1812,7 +1771,6 @@ static void srpt_handle_tsk_mgmt(struct srpt_rdma_ch *ch,
+ 	struct se_cmd *cmd;
+ 	struct se_session *sess = ch->sess;
+ 	uint64_t unpacked_lun;
+-	uint32_t tag = 0;
+ 	int tcm_tmr;
+ 	int rc;
+ 
+@@ -1828,25 +1786,10 @@ static void srpt_handle_tsk_mgmt(struct srpt_rdma_ch *ch,
+ 	srpt_set_cmd_state(send_ioctx, SRPT_STATE_MGMT);
+ 	send_ioctx->cmd.tag = srp_tsk->tag;
+ 	tcm_tmr = srp_tmr_to_tcm(srp_tsk->tsk_mgmt_func);
+-	if (tcm_tmr < 0) {
+-		send_ioctx->cmd.se_tmr_req->response =
+-			TMR_TASK_MGMT_FUNCTION_NOT_SUPPORTED;
+-		goto fail;
+-	}
+ 	unpacked_lun = srpt_unpack_lun((uint8_t *)&srp_tsk->lun,
+ 				       sizeof(srp_tsk->lun));
+-
+-	if (srp_tsk->tsk_mgmt_func == SRP_TSK_ABORT_TASK) {
+-		rc = srpt_rx_mgmt_fn_tag(send_ioctx, srp_tsk->task_tag);
+-		if (rc < 0) {
+-			send_ioctx->cmd.se_tmr_req->response =
+-					TMR_TASK_DOES_NOT_EXIST;
+-			goto fail;
+-		}
+-		tag = srp_tsk->task_tag;
+-	}
+ 	rc = target_submit_tmr(&send_ioctx->cmd, sess, NULL, unpacked_lun,
+-				srp_tsk, tcm_tmr, GFP_KERNEL, tag,
++				srp_tsk, tcm_tmr, GFP_KERNEL, srp_tsk->task_tag,
+ 				TARGET_SCF_ACK_KREF);
+ 	if (rc != 0) {
+ 		send_ioctx->cmd.se_tmr_req->response = TMR_FUNCTION_REJECTED;
+diff --git a/drivers/input/misc/ati_remote2.c b/drivers/input/misc/ati_remote2.c
+index cfd58e87da26..1c5914cae853 100644
+--- a/drivers/input/misc/ati_remote2.c
++++ b/drivers/input/misc/ati_remote2.c
+@@ -817,26 +817,49 @@ static int ati_remote2_probe(struct usb_interface *interface, const struct usb_d
+ 
+ 	ar2->udev = udev;
+ 
++	/* Sanity check, first interface must have an endpoint */
++	if (alt->desc.bNumEndpoints < 1 || !alt->endpoint) {
++		dev_err(&interface->dev,
++			"%s(): interface 0 must have an endpoint\n", __func__);
++		r = -ENODEV;
++		goto fail1;
++	}
+ 	ar2->intf[0] = interface;
+ 	ar2->ep[0] = &alt->endpoint[0].desc;
+ 
++	/* Sanity check, the device must have two interfaces */
+ 	ar2->intf[1] = usb_ifnum_to_if(udev, 1);
++	if ((udev->actconfig->desc.bNumInterfaces < 2) || !ar2->intf[1]) {
++		dev_err(&interface->dev, "%s(): need 2 interfaces, found %d\n",
++			__func__, udev->actconfig->desc.bNumInterfaces);
++		r = -ENODEV;
++		goto fail1;
++	}
++
+ 	r = usb_driver_claim_interface(&ati_remote2_driver, ar2->intf[1], ar2);
+ 	if (r)
+ 		goto fail1;
++
++	/* Sanity check, second interface must have an endpoint */
+ 	alt = ar2->intf[1]->cur_altsetting;
++	if (alt->desc.bNumEndpoints < 1 || !alt->endpoint) {
++		dev_err(&interface->dev,
++			"%s(): interface 1 must have an endpoint\n", __func__);
++		r = -ENODEV;
++		goto fail2;
++	}
+ 	ar2->ep[1] = &alt->endpoint[0].desc;
+ 
+ 	r = ati_remote2_urb_init(ar2);
+ 	if (r)
+-		goto fail2;
++		goto fail3;
+ 
+ 	ar2->channel_mask = channel_mask;
+ 	ar2->mode_mask = mode_mask;
+ 
+ 	r = ati_remote2_setup(ar2, ar2->channel_mask);
+ 	if (r)
+-		goto fail2;
++		goto fail3;
+ 
+ 	usb_make_path(udev, ar2->phys, sizeof(ar2->phys));
+ 	strlcat(ar2->phys, "/input0", sizeof(ar2->phys));
+@@ -845,11 +868,11 @@ static int ati_remote2_probe(struct usb_interface *interface, const struct usb_d
+ 
+ 	r = sysfs_create_group(&udev->dev.kobj, &ati_remote2_attr_group);
+ 	if (r)
+-		goto fail2;
++		goto fail3;
+ 
+ 	r = ati_remote2_input_init(ar2);
+ 	if (r)
+-		goto fail3;
++		goto fail4;
+ 
+ 	usb_set_intfdata(interface, ar2);
+ 
+@@ -857,10 +880,11 @@ static int ati_remote2_probe(struct usb_interface *interface, const struct usb_d
+ 
+ 	return 0;
+ 
+- fail3:
++ fail4:
+ 	sysfs_remove_group(&udev->dev.kobj, &ati_remote2_attr_group);
+- fail2:
++ fail3:
+ 	ati_remote2_urb_cleanup(ar2);
++ fail2:
+ 	usb_driver_release_interface(&ati_remote2_driver, ar2->intf[1]);
+  fail1:
+ 	kfree(ar2);
+diff --git a/drivers/input/misc/ims-pcu.c b/drivers/input/misc/ims-pcu.c
+index ac1fa5f44580..9c0ea36913b4 100644
+--- a/drivers/input/misc/ims-pcu.c
++++ b/drivers/input/misc/ims-pcu.c
+@@ -1663,6 +1663,8 @@ static int ims_pcu_parse_cdc_data(struct usb_interface *intf, struct ims_pcu *pc
+ 
+ 	pcu->ctrl_intf = usb_ifnum_to_if(pcu->udev,
+ 					 union_desc->bMasterInterface0);
++	if (!pcu->ctrl_intf)
++		return -EINVAL;
+ 
+ 	alt = pcu->ctrl_intf->cur_altsetting;
+ 	pcu->ep_ctrl = &alt->endpoint[0].desc;
+@@ -1670,6 +1672,8 @@ static int ims_pcu_parse_cdc_data(struct usb_interface *intf, struct ims_pcu *pc
+ 
+ 	pcu->data_intf = usb_ifnum_to_if(pcu->udev,
+ 					 union_desc->bSlaveInterface0);
++	if (!pcu->data_intf)
++		return -EINVAL;
+ 
+ 	alt = pcu->data_intf->cur_altsetting;
+ 	if (alt->desc.bNumEndpoints != 2) {
+diff --git a/drivers/input/misc/powermate.c b/drivers/input/misc/powermate.c
+index 63b539d3daba..84909a12ff36 100644
+--- a/drivers/input/misc/powermate.c
++++ b/drivers/input/misc/powermate.c
+@@ -307,6 +307,9 @@ static int powermate_probe(struct usb_interface *intf, const struct usb_device_i
+ 	int error = -ENOMEM;
+ 
+ 	interface = intf->cur_altsetting;
++	if (interface->desc.bNumEndpoints < 1)
++		return -EINVAL;
++
+ 	endpoint = &interface->endpoint[0].desc;
+ 	if (!usb_endpoint_is_int_in(endpoint))
+ 		return -EIO;
+diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c
+index 6025eb430c0a..a41d8328c064 100644
+--- a/drivers/input/mouse/synaptics.c
++++ b/drivers/input/mouse/synaptics.c
+@@ -862,8 +862,9 @@ static void synaptics_report_ext_buttons(struct psmouse *psmouse,
+ 	if (!SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap))
+ 		return;
+ 
+-	/* Bug in FW 8.1, buttons are reported only when ExtBit is 1 */
+-	if (SYN_ID_FULL(priv->identity) == 0x801 &&
++	/* Bug in FW 8.1 & 8.2, buttons are reported only when ExtBit is 1 */
++	if ((SYN_ID_FULL(priv->identity) == 0x801 ||
++	     SYN_ID_FULL(priv->identity) == 0x802) &&
+ 	    !((psmouse->packet[0] ^ psmouse->packet[3]) & 0x02))
+ 		return;
+ 
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index 8d0ead98eb6e..a296425a7270 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -1015,8 +1015,12 @@ int bch_cached_dev_attach(struct cached_dev *dc, struct cache_set *c)
+ 	 */
+ 	atomic_set(&dc->count, 1);
+ 
+-	if (bch_cached_dev_writeback_start(dc))
++	/* Block writeback thread, but spawn it */
++	down_write(&dc->writeback_lock);
++	if (bch_cached_dev_writeback_start(dc)) {
++		up_write(&dc->writeback_lock);
+ 		return -ENOMEM;
++	}
+ 
+ 	if (BDEV_STATE(&dc->sb) == BDEV_STATE_DIRTY) {
+ 		bch_sectors_dirty_init(dc);
+@@ -1028,6 +1032,9 @@ int bch_cached_dev_attach(struct cached_dev *dc, struct cache_set *c)
+ 	bch_cached_dev_run(dc);
+ 	bcache_device_link(&dc->disk, c, "bdev");
+ 
++	/* Allow the writeback thread to proceed */
++	up_write(&dc->writeback_lock);
++
+ 	pr_info("Caching %s as %s on set %pU",
+ 		bdevname(dc->bdev, buf), dc->disk.disk->disk_name,
+ 		dc->disk.c->sb.set_uuid);
+@@ -1366,6 +1373,9 @@ static void cache_set_flush(struct closure *cl)
+ 	struct btree *b;
+ 	unsigned i;
+ 
++	if (!c)
++		closure_return(cl);
++
+ 	bch_cache_accounting_destroy(&c->accounting);
+ 
+ 	kobject_put(&c->internal);
+@@ -1828,11 +1838,12 @@ static int cache_alloc(struct cache_sb *sb, struct cache *ca)
+ 	return 0;
+ }
+ 
+-static void register_cache(struct cache_sb *sb, struct page *sb_page,
++static int register_cache(struct cache_sb *sb, struct page *sb_page,
+ 				struct block_device *bdev, struct cache *ca)
+ {
+ 	char name[BDEVNAME_SIZE];
+-	const char *err = "cannot allocate memory";
++	const char *err = NULL;
++	int ret = 0;
+ 
+ 	memcpy(&ca->sb, sb, sizeof(struct cache_sb));
+ 	ca->bdev = bdev;
+@@ -1847,27 +1858,35 @@ static void register_cache(struct cache_sb *sb, struct page *sb_page,
+ 	if (blk_queue_discard(bdev_get_queue(ca->bdev)))
+ 		ca->discard = CACHE_DISCARD(&ca->sb);
+ 
+-	if (cache_alloc(sb, ca) != 0)
++	ret = cache_alloc(sb, ca);
++	if (ret != 0)
+ 		goto err;
+ 
+-	err = "error creating kobject";
+-	if (kobject_add(&ca->kobj, &part_to_dev(bdev->bd_part)->kobj, "bcache"))
+-		goto err;
++	if (kobject_add(&ca->kobj, &part_to_dev(bdev->bd_part)->kobj, "bcache")) {
++		err = "error calling kobject_add";
++		ret = -ENOMEM;
++		goto out;
++	}
+ 
+ 	mutex_lock(&bch_register_lock);
+ 	err = register_cache_set(ca);
+ 	mutex_unlock(&bch_register_lock);
+ 
+-	if (err)
+-		goto err;
++	if (err) {
++		ret = -ENODEV;
++		goto out;
++	}
+ 
+ 	pr_info("registered cache device %s", bdevname(bdev, name));
++
+ out:
+ 	kobject_put(&ca->kobj);
+-	return;
++
+ err:
+-	pr_notice("error opening %s: %s", bdevname(bdev, name), err);
+-	goto out;
++	if (err)
++		pr_notice("error opening %s: %s", bdevname(bdev, name), err);
++
++	return ret;
+ }
+ 
+ /* Global interfaces/init */
+@@ -1965,7 +1984,8 @@ static ssize_t register_bcache(struct kobject *k, struct kobj_attribute *attr,
+ 		if (!ca)
+ 			goto err_close;
+ 
+-		register_cache(sb, sb_page, bdev, ca);
++		if (register_cache(sb, sb_page, bdev, ca) != 0)
++			goto err_close;
+ 	}
+ out:
+ 	if (sb_page)
+diff --git a/drivers/md/dm-cache-metadata.c b/drivers/md/dm-cache-metadata.c
+index f6543f3a970f..27f2ef300f8b 100644
+--- a/drivers/md/dm-cache-metadata.c
++++ b/drivers/md/dm-cache-metadata.c
+@@ -867,19 +867,40 @@ static int blocks_are_unmapped_or_clean(struct dm_cache_metadata *cmd,
+ 	return 0;
+ }
+ 
+-#define WRITE_LOCK(cmd) \
+-	if (cmd->fail_io || dm_bm_is_read_only(cmd->bm)) \
++#define WRITE_LOCK(cmd)	\
++	down_write(&cmd->root_lock); \
++	if (cmd->fail_io || dm_bm_is_read_only(cmd->bm)) { \
++		up_write(&cmd->root_lock); \
+ 		return -EINVAL; \
+-	down_write(&cmd->root_lock)
++	}
+ 
+ #define WRITE_LOCK_VOID(cmd) \
+-	if (cmd->fail_io || dm_bm_is_read_only(cmd->bm)) \
++	down_write(&cmd->root_lock); \
++	if (cmd->fail_io || dm_bm_is_read_only(cmd->bm)) { \
++		up_write(&cmd->root_lock); \
+ 		return; \
+-	down_write(&cmd->root_lock)
++	}
+ 
+ #define WRITE_UNLOCK(cmd) \
+ 	up_write(&cmd->root_lock)
+ 
++#define READ_LOCK(cmd) \
++	down_read(&cmd->root_lock); \
++	if (cmd->fail_io || dm_bm_is_read_only(cmd->bm)) { \
++		up_read(&cmd->root_lock); \
++		return -EINVAL; \
++	}
++
++#define READ_LOCK_VOID(cmd)	\
++	down_read(&cmd->root_lock); \
++	if (cmd->fail_io || dm_bm_is_read_only(cmd->bm)) { \
++		up_read(&cmd->root_lock); \
++		return; \
++	}
++
++#define READ_UNLOCK(cmd) \
++	up_read(&cmd->root_lock)
++
+ int dm_cache_resize(struct dm_cache_metadata *cmd, dm_cblock_t new_cache_size)
+ {
+ 	int r;
+@@ -1015,22 +1036,20 @@ int dm_cache_load_discards(struct dm_cache_metadata *cmd,
+ {
+ 	int r;
+ 
+-	down_read(&cmd->root_lock);
++	READ_LOCK(cmd);
+ 	r = __load_discards(cmd, fn, context);
+-	up_read(&cmd->root_lock);
++	READ_UNLOCK(cmd);
+ 
+ 	return r;
+ }
+ 
+-dm_cblock_t dm_cache_size(struct dm_cache_metadata *cmd)
++int dm_cache_size(struct dm_cache_metadata *cmd, dm_cblock_t *result)
+ {
+-	dm_cblock_t r;
++	READ_LOCK(cmd);
++	*result = cmd->cache_blocks;
++	READ_UNLOCK(cmd);
+ 
+-	down_read(&cmd->root_lock);
+-	r = cmd->cache_blocks;
+-	up_read(&cmd->root_lock);
+-
+-	return r;
++	return 0;
+ }
+ 
+ static int __remove(struct dm_cache_metadata *cmd, dm_cblock_t cblock)
+@@ -1188,9 +1207,9 @@ int dm_cache_load_mappings(struct dm_cache_metadata *cmd,
+ {
+ 	int r;
+ 
+-	down_read(&cmd->root_lock);
++	READ_LOCK(cmd);
+ 	r = __load_mappings(cmd, policy, fn, context);
+-	up_read(&cmd->root_lock);
++	READ_UNLOCK(cmd);
+ 
+ 	return r;
+ }
+@@ -1215,18 +1234,18 @@ static int __dump_mappings(struct dm_cache_metadata *cmd)
+ 
+ void dm_cache_dump(struct dm_cache_metadata *cmd)
+ {
+-	down_read(&cmd->root_lock);
++	READ_LOCK_VOID(cmd);
+ 	__dump_mappings(cmd);
+-	up_read(&cmd->root_lock);
++	READ_UNLOCK(cmd);
+ }
+ 
+ int dm_cache_changed_this_transaction(struct dm_cache_metadata *cmd)
+ {
+ 	int r;
+ 
+-	down_read(&cmd->root_lock);
++	READ_LOCK(cmd);
+ 	r = cmd->changed;
+-	up_read(&cmd->root_lock);
++	READ_UNLOCK(cmd);
+ 
+ 	return r;
+ }
+@@ -1276,9 +1295,9 @@ int dm_cache_set_dirty(struct dm_cache_metadata *cmd,
+ void dm_cache_metadata_get_stats(struct dm_cache_metadata *cmd,
+ 				 struct dm_cache_statistics *stats)
+ {
+-	down_read(&cmd->root_lock);
++	READ_LOCK_VOID(cmd);
+ 	*stats = cmd->stats;
+-	up_read(&cmd->root_lock);
++	READ_UNLOCK(cmd);
+ }
+ 
+ void dm_cache_metadata_set_stats(struct dm_cache_metadata *cmd,
+@@ -1312,9 +1331,9 @@ int dm_cache_get_free_metadata_block_count(struct dm_cache_metadata *cmd,
+ {
+ 	int r = -EINVAL;
+ 
+-	down_read(&cmd->root_lock);
++	READ_LOCK(cmd);
+ 	r = dm_sm_get_nr_free(cmd->metadata_sm, result);
+-	up_read(&cmd->root_lock);
++	READ_UNLOCK(cmd);
+ 
+ 	return r;
+ }
+@@ -1324,9 +1343,9 @@ int dm_cache_get_metadata_dev_size(struct dm_cache_metadata *cmd,
+ {
+ 	int r = -EINVAL;
+ 
+-	down_read(&cmd->root_lock);
++	READ_LOCK(cmd);
+ 	r = dm_sm_get_nr_blocks(cmd->metadata_sm, result);
+-	up_read(&cmd->root_lock);
++	READ_UNLOCK(cmd);
+ 
+ 	return r;
+ }
+@@ -1417,7 +1436,13 @@ int dm_cache_write_hints(struct dm_cache_metadata *cmd, struct dm_cache_policy *
+ 
+ int dm_cache_metadata_all_clean(struct dm_cache_metadata *cmd, bool *result)
+ {
+-	return blocks_are_unmapped_or_clean(cmd, 0, cmd->cache_blocks, result);
++	int r;
++
++	READ_LOCK(cmd);
++	r = blocks_are_unmapped_or_clean(cmd, 0, cmd->cache_blocks, result);
++	READ_UNLOCK(cmd);
++
++	return r;
+ }
+ 
+ void dm_cache_metadata_set_read_only(struct dm_cache_metadata *cmd)
+@@ -1440,10 +1465,7 @@ int dm_cache_metadata_set_needs_check(struct dm_cache_metadata *cmd)
+ 	struct dm_block *sblock;
+ 	struct cache_disk_superblock *disk_super;
+ 
+-	/*
+-	 * We ignore fail_io for this function.
+-	 */
+-	down_write(&cmd->root_lock);
++	WRITE_LOCK(cmd);
+ 	set_bit(NEEDS_CHECK, &cmd->flags);
+ 
+ 	r = superblock_lock(cmd, &sblock);
+@@ -1458,19 +1480,17 @@ int dm_cache_metadata_set_needs_check(struct dm_cache_metadata *cmd)
+ 	dm_bm_unlock(sblock);
+ 
+ out:
+-	up_write(&cmd->root_lock);
++	WRITE_UNLOCK(cmd);
+ 	return r;
+ }
+ 
+-bool dm_cache_metadata_needs_check(struct dm_cache_metadata *cmd)
++int dm_cache_metadata_needs_check(struct dm_cache_metadata *cmd, bool *result)
+ {
+-	bool needs_check;
++	READ_LOCK(cmd);
++	*result = !!test_bit(NEEDS_CHECK, &cmd->flags);
++	READ_UNLOCK(cmd);
+ 
+-	down_read(&cmd->root_lock);
+-	needs_check = !!test_bit(NEEDS_CHECK, &cmd->flags);
+-	up_read(&cmd->root_lock);
+-
+-	return needs_check;
++	return 0;
+ }
+ 
+ int dm_cache_metadata_abort(struct dm_cache_metadata *cmd)
+diff --git a/drivers/md/dm-cache-metadata.h b/drivers/md/dm-cache-metadata.h
+index 2ffee21f318d..8528744195e5 100644
+--- a/drivers/md/dm-cache-metadata.h
++++ b/drivers/md/dm-cache-metadata.h
+@@ -66,7 +66,7 @@ void dm_cache_metadata_close(struct dm_cache_metadata *cmd);
+  * origin blocks to map to.
+  */
+ int dm_cache_resize(struct dm_cache_metadata *cmd, dm_cblock_t new_cache_size);
+-dm_cblock_t dm_cache_size(struct dm_cache_metadata *cmd);
++int dm_cache_size(struct dm_cache_metadata *cmd, dm_cblock_t *result);
+ 
+ int dm_cache_discard_bitset_resize(struct dm_cache_metadata *cmd,
+ 				   sector_t discard_block_size,
+@@ -137,7 +137,7 @@ int dm_cache_write_hints(struct dm_cache_metadata *cmd, struct dm_cache_policy *
+  */
+ int dm_cache_metadata_all_clean(struct dm_cache_metadata *cmd, bool *result);
+ 
+-bool dm_cache_metadata_needs_check(struct dm_cache_metadata *cmd);
++int dm_cache_metadata_needs_check(struct dm_cache_metadata *cmd, bool *result);
+ int dm_cache_metadata_set_needs_check(struct dm_cache_metadata *cmd);
+ void dm_cache_metadata_set_read_only(struct dm_cache_metadata *cmd);
+ void dm_cache_metadata_set_read_write(struct dm_cache_metadata *cmd);
+diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c
+index 2fd4c8296144..515f83e7d9ab 100644
+--- a/drivers/md/dm-cache-target.c
++++ b/drivers/md/dm-cache-target.c
+@@ -987,9 +987,14 @@ static void notify_mode_switch(struct cache *cache, enum cache_metadata_mode mod
+ 
+ static void set_cache_mode(struct cache *cache, enum cache_metadata_mode new_mode)
+ {
+-	bool needs_check = dm_cache_metadata_needs_check(cache->cmd);
++	bool needs_check;
+ 	enum cache_metadata_mode old_mode = get_cache_mode(cache);
+ 
++	if (dm_cache_metadata_needs_check(cache->cmd, &needs_check)) {
++		DMERR("unable to read needs_check flag, setting failure mode");
++		new_mode = CM_FAIL;
++	}
++
+ 	if (new_mode == CM_WRITE && needs_check) {
+ 		DMERR("%s: unable to switch cache to write mode until repaired.",
+ 		      cache_device_name(cache));
+@@ -3513,6 +3518,7 @@ static void cache_status(struct dm_target *ti, status_type_t type,
+ 	char buf[BDEVNAME_SIZE];
+ 	struct cache *cache = ti->private;
+ 	dm_cblock_t residency;
++	bool needs_check;
+ 
+ 	switch (type) {
+ 	case STATUSTYPE_INFO:
+@@ -3586,7 +3592,9 @@ static void cache_status(struct dm_target *ti, status_type_t type,
+ 		else
+ 			DMEMIT("rw ");
+ 
+-		if (dm_cache_metadata_needs_check(cache->cmd))
++		r = dm_cache_metadata_needs_check(cache->cmd, &needs_check);
++
++		if (r || needs_check)
+ 			DMEMIT("needs_check ");
+ 		else
+ 			DMEMIT("- ");
+diff --git a/drivers/md/dm-snap.c b/drivers/md/dm-snap.c
+index 61f184ad081c..e108deebbaaa 100644
+--- a/drivers/md/dm-snap.c
++++ b/drivers/md/dm-snap.c
+@@ -1106,6 +1106,7 @@ static int snapshot_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+ 	int i;
+ 	int r = -EINVAL;
+ 	char *origin_path, *cow_path;
++	dev_t origin_dev, cow_dev;
+ 	unsigned args_used, num_flush_bios = 1;
+ 	fmode_t origin_mode = FMODE_READ;
+ 
+@@ -1136,11 +1137,19 @@ static int snapshot_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+ 		ti->error = "Cannot get origin device";
+ 		goto bad_origin;
+ 	}
++	origin_dev = s->origin->bdev->bd_dev;
+ 
+ 	cow_path = argv[0];
+ 	argv++;
+ 	argc--;
+ 
++	cow_dev = dm_get_dev_t(cow_path);
++	if (cow_dev && cow_dev == origin_dev) {
++		ti->error = "COW device cannot be the same as origin device";
++		r = -EINVAL;
++		goto bad_cow;
++	}
++
+ 	r = dm_get_device(ti, cow_path, dm_table_get_mode(ti->table), &s->cow);
+ 	if (r) {
+ 		ti->error = "Cannot get COW device";
+diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
+index 061152a43730..cb5d0daf53bb 100644
+--- a/drivers/md/dm-table.c
++++ b/drivers/md/dm-table.c
+@@ -365,6 +365,26 @@ static int upgrade_mode(struct dm_dev_internal *dd, fmode_t new_mode,
+ }
+ 
+ /*
++ * Convert the path to a device
++ */
++dev_t dm_get_dev_t(const char *path)
++{
++	dev_t uninitialized_var(dev);
++	struct block_device *bdev;
++
++	bdev = lookup_bdev(path);
++	if (IS_ERR(bdev))
++		dev = name_to_dev_t(path);
++	else {
++		dev = bdev->bd_dev;
++		bdput(bdev);
++	}
++
++	return dev;
++}
++EXPORT_SYMBOL_GPL(dm_get_dev_t);
++
++/*
+  * Add a device to the list, or just increment the usage count if
+  * it's already present.
+  */
+@@ -372,23 +392,15 @@ int dm_get_device(struct dm_target *ti, const char *path, fmode_t mode,
+ 		  struct dm_dev **result)
+ {
+ 	int r;
+-	dev_t uninitialized_var(dev);
++	dev_t dev;
+ 	struct dm_dev_internal *dd;
+ 	struct dm_table *t = ti->table;
+-	struct block_device *bdev;
+ 
+ 	BUG_ON(!t);
+ 
+-	/* convert the path to a device */
+-	bdev = lookup_bdev(path);
+-	if (IS_ERR(bdev)) {
+-		dev = name_to_dev_t(path);
+-		if (!dev)
+-			return -ENODEV;
+-	} else {
+-		dev = bdev->bd_dev;
+-		bdput(bdev);
+-	}
++	dev = dm_get_dev_t(path);
++	if (!dev)
++		return -ENODEV;
+ 
+ 	dd = find_device(&t->devices, dev);
+ 	if (!dd) {
+diff --git a/drivers/md/dm-thin-metadata.c b/drivers/md/dm-thin-metadata.c
+index c219a053c7f6..911ada643364 100644
+--- a/drivers/md/dm-thin-metadata.c
++++ b/drivers/md/dm-thin-metadata.c
+@@ -1943,5 +1943,8 @@ bool dm_pool_metadata_needs_check(struct dm_pool_metadata *pmd)
+ 
+ void dm_pool_issue_prefetches(struct dm_pool_metadata *pmd)
+ {
+-	dm_tm_issue_prefetches(pmd->tm);
++	down_read(&pmd->root_lock);
++	if (!pmd->fail_io)
++		dm_tm_issue_prefetches(pmd->tm);
++	up_read(&pmd->root_lock);
+ }
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index dd834927bc66..c338aebb4ccd 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -1109,12 +1109,8 @@ static void rq_completed(struct mapped_device *md, int rw, bool run_queue)
+ 	 * back into ->request_fn() could deadlock attempting to grab the
+ 	 * queue lock again.
+ 	 */
+-	if (run_queue) {
+-		if (md->queue->mq_ops)
+-			blk_mq_run_hw_queues(md->queue, true);
+-		else
+-			blk_run_queue_async(md->queue);
+-	}
++	if (!md->queue->mq_ops && run_queue)
++		blk_run_queue_async(md->queue);
+ 
+ 	/*
+ 	 * dm_put() must be at the end of this function. See the comment above
+@@ -1214,9 +1210,9 @@ static void dm_requeue_original_request(struct mapped_device *md,
+ {
+ 	int rw = rq_data_dir(rq);
+ 
++	rq_end_stats(md, rq);
+ 	dm_unprep_request(rq);
+ 
+-	rq_end_stats(md, rq);
+ 	if (!rq->q->mq_ops)
+ 		old_requeue_request(rq);
+ 	else {
+@@ -1336,7 +1332,10 @@ static void dm_complete_request(struct request *rq, int error)
+ 	struct dm_rq_target_io *tio = tio_from_request(rq);
+ 
+ 	tio->error = error;
+-	blk_complete_request(rq);
++	if (!rq->q->mq_ops)
++		blk_complete_request(rq);
++	else
++		blk_mq_complete_request(rq, error);
+ }
+ 
+ /*
+diff --git a/drivers/md/multipath.c b/drivers/md/multipath.c
+index 0a72ab6e6c20..dd483bb2e111 100644
+--- a/drivers/md/multipath.c
++++ b/drivers/md/multipath.c
+@@ -129,7 +129,9 @@ static void multipath_make_request(struct mddev *mddev, struct bio * bio)
+ 	}
+ 	multipath = conf->multipaths + mp_bh->path;
+ 
+-	mp_bh->bio = *bio;
++	bio_init(&mp_bh->bio);
++	__bio_clone_fast(&mp_bh->bio, bio);
++
+ 	mp_bh->bio.bi_iter.bi_sector += multipath->rdev->data_offset;
+ 	mp_bh->bio.bi_bdev = multipath->rdev->bdev;
+ 	mp_bh->bio.bi_rw |= REQ_FAILFAST_TRANSPORT;
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index c4b913409226..515554c7365b 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -2274,6 +2274,7 @@ static void handle_write_finished(struct r1conf *conf, struct r1bio *r1_bio)
+ 	if (fail) {
+ 		spin_lock_irq(&conf->device_lock);
+ 		list_add(&r1_bio->retry_list, &conf->bio_end_io_list);
++		conf->nr_queued++;
+ 		spin_unlock_irq(&conf->device_lock);
+ 		md_wakeup_thread(conf->mddev->thread);
+ 	} else {
+@@ -2391,8 +2392,10 @@ static void raid1d(struct md_thread *thread)
+ 		LIST_HEAD(tmp);
+ 		spin_lock_irqsave(&conf->device_lock, flags);
+ 		if (!test_bit(MD_CHANGE_PENDING, &mddev->flags)) {
+-			list_add(&tmp, &conf->bio_end_io_list);
+-			list_del_init(&conf->bio_end_io_list);
++			while (!list_empty(&conf->bio_end_io_list)) {
++				list_move(conf->bio_end_io_list.prev, &tmp);
++				conf->nr_queued--;
++			}
+ 		}
+ 		spin_unlock_irqrestore(&conf->device_lock, flags);
+ 		while (!list_empty(&tmp)) {
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index ce959b4ae4df..ebb0dd612ebd 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -2664,6 +2664,7 @@ static void handle_write_completed(struct r10conf *conf, struct r10bio *r10_bio)
+ 		if (fail) {
+ 			spin_lock_irq(&conf->device_lock);
+ 			list_add(&r10_bio->retry_list, &conf->bio_end_io_list);
++			conf->nr_queued++;
+ 			spin_unlock_irq(&conf->device_lock);
+ 			md_wakeup_thread(conf->mddev->thread);
+ 		} else {
+@@ -2691,8 +2692,10 @@ static void raid10d(struct md_thread *thread)
+ 		LIST_HEAD(tmp);
+ 		spin_lock_irqsave(&conf->device_lock, flags);
+ 		if (!test_bit(MD_CHANGE_PENDING, &mddev->flags)) {
+-			list_add(&tmp, &conf->bio_end_io_list);
+-			list_del_init(&conf->bio_end_io_list);
++			while (!list_empty(&conf->bio_end_io_list)) {
++				list_move(conf->bio_end_io_list.prev, &tmp);
++				conf->nr_queued--;
++			}
+ 		}
+ 		spin_unlock_irqrestore(&conf->device_lock, flags);
+ 		while (!list_empty(&tmp)) {
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index 704ef7fcfbf8..10ce885445f6 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -340,8 +340,7 @@ static void release_inactive_stripe_list(struct r5conf *conf,
+ 					 int hash)
+ {
+ 	int size;
+-	unsigned long do_wakeup = 0;
+-	int i = 0;
++	bool do_wakeup = false;
+ 	unsigned long flags;
+ 
+ 	if (hash == NR_STRIPE_HASH_LOCKS) {
+@@ -362,19 +361,15 @@ static void release_inactive_stripe_list(struct r5conf *conf,
+ 			    !list_empty(list))
+ 				atomic_dec(&conf->empty_inactive_list_nr);
+ 			list_splice_tail_init(list, conf->inactive_list + hash);
+-			do_wakeup |= 1 << hash;
++			do_wakeup = true;
+ 			spin_unlock_irqrestore(conf->hash_locks + hash, flags);
+ 		}
+ 		size--;
+ 		hash--;
+ 	}
+ 
+-	for (i = 0; i < NR_STRIPE_HASH_LOCKS; i++) {
+-		if (do_wakeup & (1 << i))
+-			wake_up(&conf->wait_for_stripe[i]);
+-	}
+-
+ 	if (do_wakeup) {
++		wake_up(&conf->wait_for_stripe);
+ 		if (atomic_read(&conf->active_stripes) == 0)
+ 			wake_up(&conf->wait_for_quiescent);
+ 		if (conf->retry_read_aligned)
+@@ -687,15 +682,14 @@ raid5_get_active_stripe(struct r5conf *conf, sector_t sector,
+ 			if (!sh) {
+ 				set_bit(R5_INACTIVE_BLOCKED,
+ 					&conf->cache_state);
+-				wait_event_exclusive_cmd(
+-					conf->wait_for_stripe[hash],
++				wait_event_lock_irq(
++					conf->wait_for_stripe,
+ 					!list_empty(conf->inactive_list + hash) &&
+ 					(atomic_read(&conf->active_stripes)
+ 					 < (conf->max_nr_stripes * 3 / 4)
+ 					 || !test_bit(R5_INACTIVE_BLOCKED,
+ 						      &conf->cache_state)),
+-					spin_unlock_irq(conf->hash_locks + hash),
+-					spin_lock_irq(conf->hash_locks + hash));
++					*(conf->hash_locks + hash));
+ 				clear_bit(R5_INACTIVE_BLOCKED,
+ 					  &conf->cache_state);
+ 			} else {
+@@ -720,9 +714,6 @@ raid5_get_active_stripe(struct r5conf *conf, sector_t sector,
+ 		}
+ 	} while (sh == NULL);
+ 
+-	if (!list_empty(conf->inactive_list + hash))
+-		wake_up(&conf->wait_for_stripe[hash]);
+-
+ 	spin_unlock_irq(conf->hash_locks + hash);
+ 	return sh;
+ }
+@@ -2091,6 +2082,14 @@ static int resize_chunks(struct r5conf *conf, int new_disks, int new_sectors)
+ 	unsigned long cpu;
+ 	int err = 0;
+ 
++	/*
++	 * Never shrink. And mddev_suspend() could deadlock if this is called
++	 * from raid5d. In that case, scribble_disks and scribble_sectors
++	 * should equal to new_disks and new_sectors
++	 */
++	if (conf->scribble_disks >= new_disks &&
++	    conf->scribble_sectors >= new_sectors)
++		return 0;
+ 	mddev_suspend(conf->mddev);
+ 	get_online_cpus();
+ 	for_each_present_cpu(cpu) {
+@@ -2112,6 +2111,10 @@ static int resize_chunks(struct r5conf *conf, int new_disks, int new_sectors)
+ 	}
+ 	put_online_cpus();
+ 	mddev_resume(conf->mddev);
++	if (!err) {
++		conf->scribble_disks = new_disks;
++		conf->scribble_sectors = new_sectors;
++	}
+ 	return err;
+ }
+ 
+@@ -2192,7 +2195,7 @@ static int resize_stripes(struct r5conf *conf, int newsize)
+ 	cnt = 0;
+ 	list_for_each_entry(nsh, &newstripes, lru) {
+ 		lock_device_hash_lock(conf, hash);
+-		wait_event_exclusive_cmd(conf->wait_for_stripe[hash],
++		wait_event_cmd(conf->wait_for_stripe,
+ 				    !list_empty(conf->inactive_list + hash),
+ 				    unlock_device_hash_lock(conf, hash),
+ 				    lock_device_hash_lock(conf, hash));
+@@ -4238,7 +4241,6 @@ static void break_stripe_batch_list(struct stripe_head *head_sh,
+ 		WARN_ON_ONCE(sh->state & ((1 << STRIPE_ACTIVE) |
+ 					  (1 << STRIPE_SYNCING) |
+ 					  (1 << STRIPE_REPLACED) |
+-					  (1 << STRIPE_PREREAD_ACTIVE) |
+ 					  (1 << STRIPE_DELAYED) |
+ 					  (1 << STRIPE_BIT_DELAY) |
+ 					  (1 << STRIPE_FULL_WRITE) |
+@@ -4253,6 +4255,7 @@ static void break_stripe_batch_list(struct stripe_head *head_sh,
+ 					      (1 << STRIPE_REPLACED)));
+ 
+ 		set_mask_bits(&sh->state, ~(STRIPE_EXPAND_SYNC_FLAGS |
++					    (1 << STRIPE_PREREAD_ACTIVE) |
+ 					    (1 << STRIPE_DEGRADED)),
+ 			      head_sh->state & (1 << STRIPE_INSYNC));
+ 
+@@ -6414,6 +6417,12 @@ static int raid5_alloc_percpu(struct r5conf *conf)
+ 	}
+ 	put_online_cpus();
+ 
++	if (!err) {
++		conf->scribble_disks = max(conf->raid_disks,
++			conf->previous_raid_disks);
++		conf->scribble_sectors = max(conf->chunk_sectors,
++			conf->prev_chunk_sectors);
++	}
+ 	return err;
+ }
+ 
+@@ -6504,9 +6513,7 @@ static struct r5conf *setup_conf(struct mddev *mddev)
+ 	seqcount_init(&conf->gen_lock);
+ 	mutex_init(&conf->cache_size_mutex);
+ 	init_waitqueue_head(&conf->wait_for_quiescent);
+-	for (i = 0; i < NR_STRIPE_HASH_LOCKS; i++) {
+-		init_waitqueue_head(&conf->wait_for_stripe[i]);
+-	}
++	init_waitqueue_head(&conf->wait_for_stripe);
+ 	init_waitqueue_head(&conf->wait_for_overlap);
+ 	INIT_LIST_HEAD(&conf->handle_list);
+ 	INIT_LIST_HEAD(&conf->hold_list);
+@@ -7015,8 +7022,8 @@ static int run(struct mddev *mddev)
+ 		}
+ 
+ 		if (discard_supported &&
+-		   mddev->queue->limits.max_discard_sectors >= stripe &&
+-		   mddev->queue->limits.discard_granularity >= stripe)
++		    mddev->queue->limits.max_discard_sectors >= (stripe >> 9) &&
++		    mddev->queue->limits.discard_granularity >= stripe)
+ 			queue_flag_set_unlocked(QUEUE_FLAG_DISCARD,
+ 						mddev->queue);
+ 		else
+diff --git a/drivers/md/raid5.h b/drivers/md/raid5.h
+index a415e1cd39b8..517d4b68a1be 100644
+--- a/drivers/md/raid5.h
++++ b/drivers/md/raid5.h
+@@ -510,6 +510,8 @@ struct r5conf {
+ 					      * conversions
+ 					      */
+ 	} __percpu *percpu;
++	int scribble_disks;
++	int scribble_sectors;
+ #ifdef CONFIG_HOTPLUG_CPU
+ 	struct notifier_block	cpu_notify;
+ #endif
+@@ -522,7 +524,7 @@ struct r5conf {
+ 	atomic_t		empty_inactive_list_nr;
+ 	struct llist_head	released_stripes;
+ 	wait_queue_head_t	wait_for_quiescent;
+-	wait_queue_head_t	wait_for_stripe[NR_STRIPE_HASH_LOCKS];
++	wait_queue_head_t	wait_for_stripe;
+ 	wait_queue_head_t	wait_for_overlap;
+ 	unsigned long		cache_state;
+ #define R5_INACTIVE_BLOCKED	1	/* release of inactive stripes blocked,
+diff --git a/drivers/media/i2c/adv7511.c b/drivers/media/i2c/adv7511.c
+index e4900df1140b..c24839cfcc35 100644
+--- a/drivers/media/i2c/adv7511.c
++++ b/drivers/media/i2c/adv7511.c
+@@ -1161,12 +1161,23 @@ static void adv7511_dbg_dump_edid(int lvl, int debug, struct v4l2_subdev *sd, in
+ 	}
+ }
+ 
++static void adv7511_notify_no_edid(struct v4l2_subdev *sd)
++{
++	struct adv7511_state *state = get_adv7511_state(sd);
++	struct adv7511_edid_detect ed;
++
++	/* We failed to read the EDID, so send an event for this. */
++	ed.present = false;
++	ed.segment = adv7511_rd(sd, 0xc4);
++	v4l2_subdev_notify(sd, ADV7511_EDID_DETECT, (void *)&ed);
++	v4l2_ctrl_s_ctrl(state->have_edid0_ctrl, 0x0);
++}
++
+ static void adv7511_edid_handler(struct work_struct *work)
+ {
+ 	struct delayed_work *dwork = to_delayed_work(work);
+ 	struct adv7511_state *state = container_of(dwork, struct adv7511_state, edid_handler);
+ 	struct v4l2_subdev *sd = &state->sd;
+-	struct adv7511_edid_detect ed;
+ 
+ 	v4l2_dbg(1, debug, sd, "%s:\n", __func__);
+ 
+@@ -1191,9 +1202,7 @@ static void adv7511_edid_handler(struct work_struct *work)
+ 	}
+ 
+ 	/* We failed to read the EDID, so send an event for this. */
+-	ed.present = false;
+-	ed.segment = adv7511_rd(sd, 0xc4);
+-	v4l2_subdev_notify(sd, ADV7511_EDID_DETECT, (void *)&ed);
++	adv7511_notify_no_edid(sd);
+ 	v4l2_dbg(1, debug, sd, "%s: no edid found\n", __func__);
+ }
+ 
+@@ -1264,7 +1273,6 @@ static void adv7511_check_monitor_present_status(struct v4l2_subdev *sd)
+ 	/* update read only ctrls */
+ 	v4l2_ctrl_s_ctrl(state->hotplug_ctrl, adv7511_have_hotplug(sd) ? 0x1 : 0x0);
+ 	v4l2_ctrl_s_ctrl(state->rx_sense_ctrl, adv7511_have_rx_sense(sd) ? 0x1 : 0x0);
+-	v4l2_ctrl_s_ctrl(state->have_edid0_ctrl, state->edid.segments ? 0x1 : 0x0);
+ 
+ 	if ((status & MASK_ADV7511_HPD_DETECT) && ((status & MASK_ADV7511_MSEN_DETECT) || state->edid.segments)) {
+ 		v4l2_dbg(1, debug, sd, "%s: hotplug and (rx-sense or edid)\n", __func__);
+@@ -1294,6 +1302,7 @@ static void adv7511_check_monitor_present_status(struct v4l2_subdev *sd)
+ 		}
+ 		adv7511_s_power(sd, false);
+ 		memset(&state->edid, 0, sizeof(struct adv7511_state_edid));
++		adv7511_notify_no_edid(sd);
+ 	}
+ }
+ 
+@@ -1370,6 +1379,7 @@ static bool adv7511_check_edid_status(struct v4l2_subdev *sd)
+ 		}
+ 		/* one more segment read ok */
+ 		state->edid.segments = segment + 1;
++		v4l2_ctrl_s_ctrl(state->have_edid0_ctrl, 0x1);
+ 		if (((state->edid.data[0x7e] >> 1) + 1) > state->edid.segments) {
+ 			/* Request next EDID segment */
+ 			v4l2_dbg(1, debug, sd, "%s: request segment %d\n", __func__, state->edid.segments);
+@@ -1389,7 +1399,6 @@ static bool adv7511_check_edid_status(struct v4l2_subdev *sd)
+ 		ed.present = true;
+ 		ed.segment = 0;
+ 		state->edid_detect_counter++;
+-		v4l2_ctrl_s_ctrl(state->have_edid0_ctrl, state->edid.segments ? 0x1 : 0x0);
+ 		v4l2_subdev_notify(sd, ADV7511_EDID_DETECT, (void *)&ed);
+ 		return ed.present;
+ 	}
+diff --git a/drivers/media/pci/bt8xx/bttv-driver.c b/drivers/media/pci/bt8xx/bttv-driver.c
+index 15a4ebc2844d..51dbef2f9a48 100644
+--- a/drivers/media/pci/bt8xx/bttv-driver.c
++++ b/drivers/media/pci/bt8xx/bttv-driver.c
+@@ -2334,6 +2334,19 @@ static int bttv_g_fmt_vid_overlay(struct file *file, void *priv,
+ 	return 0;
+ }
+ 
++static void bttv_get_width_mask_vid_cap(const struct bttv_format *fmt,
++					unsigned int *width_mask,
++					unsigned int *width_bias)
++{
++	if (fmt->flags & FORMAT_FLAGS_PLANAR) {
++		*width_mask = ~15; /* width must be a multiple of 16 pixels */
++		*width_bias = 8;   /* nearest */
++	} else {
++		*width_mask = ~3; /* width must be a multiple of 4 pixels */
++		*width_bias = 2;  /* nearest */
++	}
++}
++
+ static int bttv_try_fmt_vid_cap(struct file *file, void *priv,
+ 						struct v4l2_format *f)
+ {
+@@ -2343,6 +2356,7 @@ static int bttv_try_fmt_vid_cap(struct file *file, void *priv,
+ 	enum v4l2_field field;
+ 	__s32 width, height;
+ 	__s32 height2;
++	unsigned int width_mask, width_bias;
+ 	int rc;
+ 
+ 	fmt = format_by_fourcc(f->fmt.pix.pixelformat);
+@@ -2375,9 +2389,9 @@ static int bttv_try_fmt_vid_cap(struct file *file, void *priv,
+ 	width = f->fmt.pix.width;
+ 	height = f->fmt.pix.height;
+ 
++	bttv_get_width_mask_vid_cap(fmt, &width_mask, &width_bias);
+ 	rc = limit_scaled_size_lock(fh, &width, &height, field,
+-			       /* width_mask: 4 pixels */ ~3,
+-			       /* width_bias: nearest */ 2,
++			       width_mask, width_bias,
+ 			       /* adjust_size */ 1,
+ 			       /* adjust_crop */ 0);
+ 	if (0 != rc)
+@@ -2410,6 +2424,7 @@ static int bttv_s_fmt_vid_cap(struct file *file, void *priv,
+ 	struct bttv_fh *fh = priv;
+ 	struct bttv *btv = fh->btv;
+ 	__s32 width, height;
++	unsigned int width_mask, width_bias;
+ 	enum v4l2_field field;
+ 
+ 	retval = bttv_switch_type(fh, f->type);
+@@ -2424,9 +2439,10 @@ static int bttv_s_fmt_vid_cap(struct file *file, void *priv,
+ 	height = f->fmt.pix.height;
+ 	field = f->fmt.pix.field;
+ 
++	fmt = format_by_fourcc(f->fmt.pix.pixelformat);
++	bttv_get_width_mask_vid_cap(fmt, &width_mask, &width_bias);
+ 	retval = limit_scaled_size_lock(fh, &width, &height, f->fmt.pix.field,
+-			       /* width_mask: 4 pixels */ ~3,
+-			       /* width_bias: nearest */ 2,
++			       width_mask, width_bias,
+ 			       /* adjust_size */ 1,
+ 			       /* adjust_crop */ 1);
+ 	if (0 != retval)
+@@ -2434,8 +2450,6 @@ static int bttv_s_fmt_vid_cap(struct file *file, void *priv,
+ 
+ 	f->fmt.pix.field = field;
+ 
+-	fmt = format_by_fourcc(f->fmt.pix.pixelformat);
+-
+ 	/* update our state informations */
+ 	fh->fmt              = fmt;
+ 	fh->cap.field        = f->fmt.pix.field;
+diff --git a/drivers/media/pci/saa7134/saa7134-video.c b/drivers/media/pci/saa7134/saa7134-video.c
+index 518086c7aed5..15e56c07b217 100644
+--- a/drivers/media/pci/saa7134/saa7134-video.c
++++ b/drivers/media/pci/saa7134/saa7134-video.c
+@@ -1219,10 +1219,13 @@ static int saa7134_g_fmt_vid_cap(struct file *file, void *priv,
+ 	f->fmt.pix.height       = dev->height;
+ 	f->fmt.pix.field        = dev->field;
+ 	f->fmt.pix.pixelformat  = dev->fmt->fourcc;
+-	f->fmt.pix.bytesperline =
+-		(f->fmt.pix.width * dev->fmt->depth) >> 3;
++	if (dev->fmt->planar)
++		f->fmt.pix.bytesperline = f->fmt.pix.width;
++	else
++		f->fmt.pix.bytesperline =
++			(f->fmt.pix.width * dev->fmt->depth) / 8;
+ 	f->fmt.pix.sizeimage =
+-		f->fmt.pix.height * f->fmt.pix.bytesperline;
++		(f->fmt.pix.height * f->fmt.pix.width * dev->fmt->depth) / 8;
+ 	f->fmt.pix.colorspace   = V4L2_COLORSPACE_SMPTE170M;
+ 	return 0;
+ }
+@@ -1298,10 +1301,13 @@ static int saa7134_try_fmt_vid_cap(struct file *file, void *priv,
+ 	if (f->fmt.pix.height > maxh)
+ 		f->fmt.pix.height = maxh;
+ 	f->fmt.pix.width &= ~0x03;
+-	f->fmt.pix.bytesperline =
+-		(f->fmt.pix.width * fmt->depth) >> 3;
++	if (fmt->planar)
++		f->fmt.pix.bytesperline = f->fmt.pix.width;
++	else
++		f->fmt.pix.bytesperline =
++			(f->fmt.pix.width * fmt->depth) / 8;
+ 	f->fmt.pix.sizeimage =
+-		f->fmt.pix.height * f->fmt.pix.bytesperline;
++		(f->fmt.pix.height * f->fmt.pix.width * fmt->depth) / 8;
+ 	f->fmt.pix.colorspace   = V4L2_COLORSPACE_SMPTE170M;
+ 
+ 	return 0;
+diff --git a/drivers/media/platform/coda/coda-bit.c b/drivers/media/platform/coda/coda-bit.c
+index 654e964f84a2..d76511c1c1e3 100644
+--- a/drivers/media/platform/coda/coda-bit.c
++++ b/drivers/media/platform/coda/coda-bit.c
+@@ -1342,7 +1342,7 @@ static void coda_finish_encode(struct coda_ctx *ctx)
+ 
+ 	/* Calculate bytesused field */
+ 	if (dst_buf->sequence == 0) {
+-		vb2_set_plane_payload(&dst_buf->vb2_buf, 0,
++		vb2_set_plane_payload(&dst_buf->vb2_buf, 0, wr_ptr - start_ptr +
+ 					ctx->vpu_header_size[0] +
+ 					ctx->vpu_header_size[1] +
+ 					ctx->vpu_header_size[2]);
+diff --git a/drivers/media/usb/pwc/pwc-if.c b/drivers/media/usb/pwc/pwc-if.c
+index b79c36fd8cd2..58f23bcfe94e 100644
+--- a/drivers/media/usb/pwc/pwc-if.c
++++ b/drivers/media/usb/pwc/pwc-if.c
+@@ -91,6 +91,7 @@ static const struct usb_device_id pwc_device_table [] = {
+ 	{ USB_DEVICE(0x0471, 0x0312) },
+ 	{ USB_DEVICE(0x0471, 0x0313) }, /* the 'new' 720K */
+ 	{ USB_DEVICE(0x0471, 0x0329) }, /* Philips SPC 900NC PC Camera */
++	{ USB_DEVICE(0x0471, 0x032C) }, /* Philips SPC 880NC PC Camera */
+ 	{ USB_DEVICE(0x069A, 0x0001) }, /* Askey */
+ 	{ USB_DEVICE(0x046D, 0x08B0) }, /* Logitech QuickCam Pro 3000 */
+ 	{ USB_DEVICE(0x046D, 0x08B1) }, /* Logitech QuickCam Notebook Pro */
+@@ -811,6 +812,11 @@ static int usb_pwc_probe(struct usb_interface *intf, const struct usb_device_id
+ 			name = "Philips SPC 900NC webcam";
+ 			type_id = 740;
+ 			break;
++		case 0x032C:
++			PWC_INFO("Philips SPC 880NC USB webcam detected.\n");
++			name = "Philips SPC 880NC webcam";
++			type_id = 740;
++			break;
+ 		default:
+ 			return -ENODEV;
+ 			break;
+diff --git a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
+index 327e83ac2469..f38c076752ce 100644
+--- a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
++++ b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
+@@ -415,7 +415,8 @@ static int get_v4l2_buffer32(struct v4l2_buffer *kp, struct v4l2_buffer32 __user
+ 		get_user(kp->index, &up->index) ||
+ 		get_user(kp->type, &up->type) ||
+ 		get_user(kp->flags, &up->flags) ||
+-		get_user(kp->memory, &up->memory))
++		get_user(kp->memory, &up->memory) ||
++		get_user(kp->length, &up->length))
+ 			return -EFAULT;
+ 
+ 	if (V4L2_TYPE_IS_OUTPUT(kp->type))
+@@ -427,9 +428,6 @@ static int get_v4l2_buffer32(struct v4l2_buffer *kp, struct v4l2_buffer32 __user
+ 			return -EFAULT;
+ 
+ 	if (V4L2_TYPE_IS_MULTIPLANAR(kp->type)) {
+-		if (get_user(kp->length, &up->length))
+-			return -EFAULT;
+-
+ 		num_planes = kp->length;
+ 		if (num_planes == 0) {
+ 			kp->m.planes = NULL;
+@@ -462,16 +460,14 @@ static int get_v4l2_buffer32(struct v4l2_buffer *kp, struct v4l2_buffer32 __user
+ 	} else {
+ 		switch (kp->memory) {
+ 		case V4L2_MEMORY_MMAP:
+-			if (get_user(kp->length, &up->length) ||
+-				get_user(kp->m.offset, &up->m.offset))
++			if (get_user(kp->m.offset, &up->m.offset))
+ 				return -EFAULT;
+ 			break;
+ 		case V4L2_MEMORY_USERPTR:
+ 			{
+ 			compat_long_t tmp;
+ 
+-			if (get_user(kp->length, &up->length) ||
+-			    get_user(tmp, &up->m.userptr))
++			if (get_user(tmp, &up->m.userptr))
+ 				return -EFAULT;
+ 
+ 			kp->m.userptr = (unsigned long)compat_ptr(tmp);
+@@ -513,7 +509,8 @@ static int put_v4l2_buffer32(struct v4l2_buffer *kp, struct v4l2_buffer32 __user
+ 		copy_to_user(&up->timecode, &kp->timecode, sizeof(struct v4l2_timecode)) ||
+ 		put_user(kp->sequence, &up->sequence) ||
+ 		put_user(kp->reserved2, &up->reserved2) ||
+-		put_user(kp->reserved, &up->reserved))
++		put_user(kp->reserved, &up->reserved) ||
++		put_user(kp->length, &up->length))
+ 			return -EFAULT;
+ 
+ 	if (V4L2_TYPE_IS_MULTIPLANAR(kp->type)) {
+@@ -536,13 +533,11 @@ static int put_v4l2_buffer32(struct v4l2_buffer *kp, struct v4l2_buffer32 __user
+ 	} else {
+ 		switch (kp->memory) {
+ 		case V4L2_MEMORY_MMAP:
+-			if (put_user(kp->length, &up->length) ||
+-				put_user(kp->m.offset, &up->m.offset))
++			if (put_user(kp->m.offset, &up->m.offset))
+ 				return -EFAULT;
+ 			break;
+ 		case V4L2_MEMORY_USERPTR:
+-			if (put_user(kp->length, &up->length) ||
+-				put_user(kp->m.userptr, &up->m.userptr))
++			if (put_user(kp->m.userptr, &up->m.userptr))
+ 				return -EFAULT;
+ 			break;
+ 		case V4L2_MEMORY_OVERLAY:
+diff --git a/drivers/misc/mei/bus.c b/drivers/misc/mei/bus.c
+index 0b05aa938799..1a173d0af694 100644
+--- a/drivers/misc/mei/bus.c
++++ b/drivers/misc/mei/bus.c
+@@ -53,6 +53,11 @@ ssize_t __mei_cl_send(struct mei_cl *cl, u8 *buf, size_t length,
+ 	bus = cl->dev;
+ 
+ 	mutex_lock(&bus->device_lock);
++	if (bus->dev_state != MEI_DEV_ENABLED) {
++		rets = -ENODEV;
++		goto out;
++	}
++
+ 	if (!mei_cl_is_connected(cl)) {
+ 		rets = -ENODEV;
+ 		goto out;
+@@ -109,6 +114,10 @@ ssize_t __mei_cl_recv(struct mei_cl *cl, u8 *buf, size_t length)
+ 	bus = cl->dev;
+ 
+ 	mutex_lock(&bus->device_lock);
++	if (bus->dev_state != MEI_DEV_ENABLED) {
++		rets = -ENODEV;
++		goto out;
++	}
+ 
+ 	cb = mei_cl_read_cb(cl, NULL);
+ 	if (cb)
+diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
+index d8486168415a..553113eb1bdb 100644
+--- a/drivers/mmc/card/block.c
++++ b/drivers/mmc/card/block.c
+@@ -596,6 +596,14 @@ static int mmc_blk_ioctl_cmd(struct block_device *bdev,
+ 	struct mmc_card *card;
+ 	int err = 0, ioc_err = 0;
+ 
++	/*
++	 * The caller must have CAP_SYS_RAWIO, and must be calling this on the
++	 * whole block device, not on a partition.  This prevents overspray
++	 * between sibling partitions.
++	 */
++	if ((!capable(CAP_SYS_RAWIO)) || (bdev != bdev->bd_contains))
++		return -EPERM;
++
+ 	idata = mmc_blk_ioctl_copy_from_user(ic_ptr);
+ 	if (IS_ERR(idata))
+ 		return PTR_ERR(idata);
+@@ -638,6 +646,14 @@ static int mmc_blk_ioctl_multi_cmd(struct block_device *bdev,
+ 	int i, err = 0, ioc_err = 0;
+ 	__u64 num_of_cmds;
+ 
++	/*
++	 * The caller must have CAP_SYS_RAWIO, and must be calling this on the
++	 * whole block device, not on a partition.  This prevents overspray
++	 * between sibling partitions.
++	 */
++	if ((!capable(CAP_SYS_RAWIO)) || (bdev != bdev->bd_contains))
++		return -EPERM;
++
+ 	if (copy_from_user(&num_of_cmds, &user->num_of_cmds,
+ 			   sizeof(num_of_cmds)))
+ 		return -EFAULT;
+@@ -693,14 +709,6 @@ cmd_err:
+ static int mmc_blk_ioctl(struct block_device *bdev, fmode_t mode,
+ 	unsigned int cmd, unsigned long arg)
+ {
+-	/*
+-	 * The caller must have CAP_SYS_RAWIO, and must be calling this on the
+-	 * whole block device, not on a partition.  This prevents overspray
+-	 * between sibling partitions.
+-	 */
+-	if ((!capable(CAP_SYS_RAWIO)) || (bdev != bdev->bd_contains))
+-		return -EPERM;
+-
+ 	switch (cmd) {
+ 	case MMC_IOC_CMD:
+ 		return mmc_blk_ioctl_cmd(bdev,
+diff --git a/drivers/mmc/host/mmc_spi.c b/drivers/mmc/host/mmc_spi.c
+index 1c1b45ef3faf..aad3243a48fc 100644
+--- a/drivers/mmc/host/mmc_spi.c
++++ b/drivers/mmc/host/mmc_spi.c
+@@ -1436,6 +1436,12 @@ static int mmc_spi_probe(struct spi_device *spi)
+ 					     host->pdata->cd_debounce);
+ 		if (status != 0)
+ 			goto fail_add_host;
++
++		/* The platform has a CD GPIO signal that may support
++		 * interrupts, so let mmc_gpiod_request_cd_irq() decide
++		 * if polling is needed or not.
++		 */
++		mmc->caps &= ~MMC_CAP_NEEDS_POLL;
+ 		mmc_gpiod_request_cd_irq(mmc);
+ 	}
+ 
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index 8814eb6b83bf..1a802af827ed 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -666,9 +666,20 @@ static u8 sdhci_calc_timeout(struct sdhci_host *host, struct mmc_command *cmd)
+ 	if (!data)
+ 		target_timeout = cmd->busy_timeout * 1000;
+ 	else {
+-		target_timeout = data->timeout_ns / 1000;
+-		if (host->clock)
+-			target_timeout += data->timeout_clks / host->clock;
++		target_timeout = DIV_ROUND_UP(data->timeout_ns, 1000);
++		if (host->clock && data->timeout_clks) {
++			unsigned long long val;
++
++			/*
++			 * data->timeout_clks is in units of clock cycles.
++			 * host->clock is in Hz.  target_timeout is in us.
++			 * Hence, us = 1000000 * cycles / Hz.  Round up.
++			 */
++			val = 1000000 * data->timeout_clks;
++			if (do_div(val, host->clock))
++				target_timeout++;
++			target_timeout += val;
++		}
+ 	}
+ 
+ 	/*
+@@ -3095,14 +3106,14 @@ int sdhci_add_host(struct sdhci_host *host)
+ 		if (caps[0] & SDHCI_TIMEOUT_CLK_UNIT)
+ 			host->timeout_clk *= 1000;
+ 
++		if (override_timeout_clk)
++			host->timeout_clk = override_timeout_clk;
++
+ 		mmc->max_busy_timeout = host->ops->get_max_timeout_count ?
+ 			host->ops->get_max_timeout_count(host) : 1 << 27;
+ 		mmc->max_busy_timeout /= host->timeout_clk;
+ 	}
+ 
+-	if (override_timeout_clk)
+-		host->timeout_clk = override_timeout_clk;
+-
+ 	mmc->caps |= MMC_CAP_SDIO_IRQ | MMC_CAP_ERASE | MMC_CAP_CMD23;
+ 	mmc->caps2 |= MMC_CAP2_SDIO_IRQ_NOTHREAD;
+ 
+diff --git a/drivers/mmc/host/sh_mmcif.c b/drivers/mmc/host/sh_mmcif.c
+index ad9ffea7d659..6234eab38ff3 100644
+--- a/drivers/mmc/host/sh_mmcif.c
++++ b/drivers/mmc/host/sh_mmcif.c
+@@ -397,38 +397,26 @@ static void sh_mmcif_start_dma_tx(struct sh_mmcif_host *host)
+ }
+ 
+ static struct dma_chan *
+-sh_mmcif_request_dma_one(struct sh_mmcif_host *host,
+-			 struct sh_mmcif_plat_data *pdata,
+-			 enum dma_transfer_direction direction)
++sh_mmcif_request_dma_pdata(struct sh_mmcif_host *host, uintptr_t slave_id)
+ {
+-	struct dma_slave_config cfg = { 0, };
+-	struct dma_chan *chan;
+-	void *slave_data = NULL;
+-	struct resource *res;
+-	struct device *dev = sh_mmcif_host_to_dev(host);
+ 	dma_cap_mask_t mask;
+-	int ret;
+ 
+ 	dma_cap_zero(mask);
+ 	dma_cap_set(DMA_SLAVE, mask);
++	if (slave_id <= 0)
++		return NULL;
+ 
+-	if (pdata)
+-		slave_data = direction == DMA_MEM_TO_DEV ?
+-			(void *)pdata->slave_id_tx :
+-			(void *)pdata->slave_id_rx;
+-
+-	chan = dma_request_slave_channel_compat(mask, shdma_chan_filter,
+-				slave_data, dev,
+-				direction == DMA_MEM_TO_DEV ? "tx" : "rx");
+-
+-	dev_dbg(dev, "%s: %s: got channel %p\n", __func__,
+-		direction == DMA_MEM_TO_DEV ? "TX" : "RX", chan);
++	return dma_request_channel(mask, shdma_chan_filter, (void *)slave_id);
++}
+ 
+-	if (!chan)
+-		return NULL;
++static int sh_mmcif_dma_slave_config(struct sh_mmcif_host *host,
++				     struct dma_chan *chan,
++				     enum dma_transfer_direction direction)
++{
++	struct resource *res;
++	struct dma_slave_config cfg = { 0, };
+ 
+ 	res = platform_get_resource(host->pd, IORESOURCE_MEM, 0);
+-
+ 	cfg.direction = direction;
+ 
+ 	if (direction == DMA_DEV_TO_MEM) {
+@@ -439,38 +427,42 @@ sh_mmcif_request_dma_one(struct sh_mmcif_host *host,
+ 		cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+ 	}
+ 
+-	ret = dmaengine_slave_config(chan, &cfg);
+-	if (ret < 0) {
+-		dma_release_channel(chan);
+-		return NULL;
+-	}
+-
+-	return chan;
++	return dmaengine_slave_config(chan, &cfg);
+ }
+ 
+-static void sh_mmcif_request_dma(struct sh_mmcif_host *host,
+-				 struct sh_mmcif_plat_data *pdata)
++static void sh_mmcif_request_dma(struct sh_mmcif_host *host)
+ {
+ 	struct device *dev = sh_mmcif_host_to_dev(host);
+ 	host->dma_active = false;
+ 
+-	if (pdata) {
+-		if (pdata->slave_id_tx <= 0 || pdata->slave_id_rx <= 0)
+-			return;
+-	} else if (!dev->of_node) {
+-		return;
++	/* We can only either use DMA for both Tx and Rx or not use it at all */
++	if (IS_ENABLED(CONFIG_SUPERH) && dev->platform_data) {
++		struct sh_mmcif_plat_data *pdata = dev->platform_data;
++
++		host->chan_tx = sh_mmcif_request_dma_pdata(host,
++							pdata->slave_id_tx);
++		host->chan_rx = sh_mmcif_request_dma_pdata(host,
++							pdata->slave_id_rx);
++	} else {
++		host->chan_tx = dma_request_slave_channel(dev, "tx");
++		host->chan_rx = dma_request_slave_channel(dev, "rx");
+ 	}
++	dev_dbg(dev, "%s: got channel TX %p RX %p\n", __func__, host->chan_tx,
++		host->chan_rx);
+ 
+-	/* We can only either use DMA for both Tx and Rx or not use it at all */
+-	host->chan_tx = sh_mmcif_request_dma_one(host, pdata, DMA_MEM_TO_DEV);
+-	if (!host->chan_tx)
+-		return;
++	if (!host->chan_tx || !host->chan_rx ||
++	    sh_mmcif_dma_slave_config(host, host->chan_tx, DMA_MEM_TO_DEV) ||
++	    sh_mmcif_dma_slave_config(host, host->chan_rx, DMA_DEV_TO_MEM))
++		goto error;
+ 
+-	host->chan_rx = sh_mmcif_request_dma_one(host, pdata, DMA_DEV_TO_MEM);
+-	if (!host->chan_rx) {
++	return;
++
++error:
++	if (host->chan_tx)
+ 		dma_release_channel(host->chan_tx);
+-		host->chan_tx = NULL;
+-	}
++	if (host->chan_rx)
++		dma_release_channel(host->chan_rx);
++	host->chan_tx = host->chan_rx = NULL;
+ }
+ 
+ static void sh_mmcif_release_dma(struct sh_mmcif_host *host)
+@@ -1102,7 +1094,7 @@ static void sh_mmcif_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
+ 	if (ios->power_mode == MMC_POWER_UP) {
+ 		if (!host->card_present) {
+ 			/* See if we also get DMA */
+-			sh_mmcif_request_dma(host, dev->platform_data);
++			sh_mmcif_request_dma(host);
+ 			host->card_present = true;
+ 		}
+ 		sh_mmcif_set_power(host, ios);
+diff --git a/drivers/mtd/onenand/onenand_base.c b/drivers/mtd/onenand/onenand_base.c
+index 43b3392ffee7..652d01832873 100644
+--- a/drivers/mtd/onenand/onenand_base.c
++++ b/drivers/mtd/onenand/onenand_base.c
+@@ -2599,6 +2599,7 @@ static int onenand_default_block_markbad(struct mtd_info *mtd, loff_t ofs)
+  */
+ static int onenand_block_markbad(struct mtd_info *mtd, loff_t ofs)
+ {
++	struct onenand_chip *this = mtd->priv;
+ 	int ret;
+ 
+ 	ret = onenand_block_isbad(mtd, ofs);
+@@ -2610,7 +2611,7 @@ static int onenand_block_markbad(struct mtd_info *mtd, loff_t ofs)
+ 	}
+ 
+ 	onenand_get_device(mtd, FL_WRITING);
+-	ret = mtd_block_markbad(mtd, ofs);
++	ret = this->block_markbad(mtd, ofs);
+ 	onenand_release_device(mtd);
+ 	return ret;
+ }
+diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
+index ed622fa29dfa..a4ac6fedac75 100644
+--- a/drivers/net/ethernet/marvell/mvneta.c
++++ b/drivers/net/ethernet/marvell/mvneta.c
+@@ -3404,7 +3404,7 @@ static int mvneta_probe(struct platform_device *pdev)
+ 	dev->features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_TSO;
+ 	dev->hw_features |= dev->features;
+ 	dev->vlan_features |= dev->features;
+-	dev->priv_flags |= IFF_UNICAST_FLT;
++	dev->priv_flags |= IFF_UNICAST_FLT | IFF_LIVE_ADDR_CHANGE;
+ 	dev->gso_max_segs = MVNETA_MAX_TSO_SEGS;
+ 
+ 	err = register_netdev(dev);
+diff --git a/drivers/net/irda/irtty-sir.c b/drivers/net/irda/irtty-sir.c
+index 696852eb23c3..7a3f990c1935 100644
+--- a/drivers/net/irda/irtty-sir.c
++++ b/drivers/net/irda/irtty-sir.c
+@@ -430,16 +430,6 @@ static int irtty_open(struct tty_struct *tty)
+ 
+ 	/* Module stuff handled via irda_ldisc.owner - Jean II */
+ 
+-	/* First make sure we're not already connected. */
+-	if (tty->disc_data != NULL) {
+-		priv = tty->disc_data;
+-		if (priv && priv->magic == IRTTY_MAGIC) {
+-			ret = -EEXIST;
+-			goto out;
+-		}
+-		tty->disc_data = NULL;		/* ### */
+-	}
+-
+ 	/* stop the underlying  driver */
+ 	irtty_stop_receiver(tty, TRUE);
+ 	if (tty->ops->stop)
+diff --git a/drivers/net/rionet.c b/drivers/net/rionet.c
+index 01f08a7751f7..e7034c55e796 100644
+--- a/drivers/net/rionet.c
++++ b/drivers/net/rionet.c
+@@ -280,7 +280,7 @@ static void rionet_outb_msg_event(struct rio_mport *mport, void *dev_id, int mbo
+ 	struct net_device *ndev = dev_id;
+ 	struct rionet_private *rnet = netdev_priv(ndev);
+ 
+-	spin_lock(&rnet->lock);
++	spin_lock(&rnet->tx_lock);
+ 
+ 	if (netif_msg_intr(rnet))
+ 		printk(KERN_INFO
+@@ -299,7 +299,7 @@ static void rionet_outb_msg_event(struct rio_mport *mport, void *dev_id, int mbo
+ 	if (rnet->tx_cnt < RIONET_TX_RING_SIZE)
+ 		netif_wake_queue(ndev);
+ 
+-	spin_unlock(&rnet->lock);
++	spin_unlock(&rnet->tx_lock);
+ }
+ 
+ static int rionet_open(struct net_device *ndev)
+diff --git a/drivers/net/wireless/iwlwifi/mvm/fw.c b/drivers/net/wireless/iwlwifi/mvm/fw.c
+index d906fa13ba97..610c442c7ab2 100644
+--- a/drivers/net/wireless/iwlwifi/mvm/fw.c
++++ b/drivers/net/wireless/iwlwifi/mvm/fw.c
+@@ -106,7 +106,7 @@ static int iwl_send_tx_ant_cfg(struct iwl_mvm *mvm, u8 valid_tx_ant)
+ 				    sizeof(tx_ant_cmd), &tx_ant_cmd);
+ }
+ 
+-static void iwl_free_fw_paging(struct iwl_mvm *mvm)
++void iwl_free_fw_paging(struct iwl_mvm *mvm)
+ {
+ 	int i;
+ 
+@@ -126,6 +126,8 @@ static void iwl_free_fw_paging(struct iwl_mvm *mvm)
+ 			     get_order(mvm->fw_paging_db[i].fw_paging_size));
+ 	}
+ 	kfree(mvm->trans->paging_download_buf);
++	mvm->trans->paging_download_buf = NULL;
++
+ 	memset(mvm->fw_paging_db, 0, sizeof(mvm->fw_paging_db));
+ }
+ 
+diff --git a/drivers/net/wireless/iwlwifi/mvm/mvm.h b/drivers/net/wireless/iwlwifi/mvm/mvm.h
+index 4bde2d027dcd..244e26c26821 100644
+--- a/drivers/net/wireless/iwlwifi/mvm/mvm.h
++++ b/drivers/net/wireless/iwlwifi/mvm/mvm.h
+@@ -1190,6 +1190,9 @@ void iwl_mvm_rx_umac_scan_complete_notif(struct iwl_mvm *mvm,
+ void iwl_mvm_rx_umac_scan_iter_complete_notif(struct iwl_mvm *mvm,
+ 					      struct iwl_rx_cmd_buffer *rxb);
+ 
++/* Paging */
++void iwl_free_fw_paging(struct iwl_mvm *mvm);
++
+ /* MVM debugfs */
+ #ifdef CONFIG_IWLWIFI_DEBUGFS
+ int iwl_mvm_dbgfs_register(struct iwl_mvm *mvm, struct dentry *dbgfs_dir);
+diff --git a/drivers/net/wireless/iwlwifi/mvm/ops.c b/drivers/net/wireless/iwlwifi/mvm/ops.c
+index 13c97f665ba8..c3adf2bcdc85 100644
+--- a/drivers/net/wireless/iwlwifi/mvm/ops.c
++++ b/drivers/net/wireless/iwlwifi/mvm/ops.c
+@@ -645,6 +645,8 @@ static void iwl_op_mode_mvm_stop(struct iwl_op_mode *op_mode)
+ 	for (i = 0; i < NVM_MAX_NUM_SECTIONS; i++)
+ 		kfree(mvm->nvm_sections[i].data);
+ 
++	iwl_free_fw_paging(mvm);
++
+ 	iwl_mvm_tof_clean(mvm);
+ 
+ 	ieee80211_free_hw(mvm->hw);
+diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c
+index 7e2c43f701bc..496b9b662dc6 100644
+--- a/drivers/nvdimm/bus.c
++++ b/drivers/nvdimm/bus.c
+@@ -513,10 +513,10 @@ static int __nd_ioctl(struct nvdimm_bus *nvdimm_bus, struct nvdimm *nvdimm,
+ 
+ 	/* fail write commands (when read-only) */
+ 	if (read_only)
+-		switch (ioctl_cmd) {
+-		case ND_IOCTL_VENDOR:
+-		case ND_IOCTL_SET_CONFIG_DATA:
+-		case ND_IOCTL_ARS_START:
++		switch (cmd) {
++		case ND_CMD_VENDOR:
++		case ND_CMD_SET_CONFIG_DATA:
++		case ND_CMD_ARS_START:
+ 			dev_dbg(&nvdimm_bus->dev, "'%s' command while read-only.\n",
+ 					nvdimm ? nvdimm_cmd_name(cmd)
+ 					: nvdimm_bus_cmd_name(cmd));
+diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c
+index 1a3556a9e9ea..ed01c0172e4a 100644
+--- a/drivers/of/of_reserved_mem.c
++++ b/drivers/of/of_reserved_mem.c
+@@ -32,11 +32,13 @@ int __init __weak early_init_dt_alloc_reserved_memory_arch(phys_addr_t size,
+ 	phys_addr_t align, phys_addr_t start, phys_addr_t end, bool nomap,
+ 	phys_addr_t *res_base)
+ {
++	phys_addr_t base;
+ 	/*
+ 	 * We use __memblock_alloc_base() because memblock_alloc_base()
+ 	 * panic()s on allocation failure.
+ 	 */
+-	phys_addr_t base = __memblock_alloc_base(size, align, end);
++	end = !end ? MEMBLOCK_ALLOC_ANYWHERE : end;
++	base = __memblock_alloc_base(size, align, end);
+ 	if (!base)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index edb1984201e9..7aafb5fb9336 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -179,6 +179,9 @@ int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type,
+ 	u16 orig_cmd;
+ 	struct pci_bus_region region, inverted_region;
+ 
++	if (dev->non_compliant_bars)
++		return 0;
++
+ 	mask = type ? PCI_ROM_ADDRESS_MASK : ~0;
+ 
+ 	/* No printks while decoding is disabled! */
+@@ -1174,6 +1177,7 @@ void pci_msi_setup_pci_dev(struct pci_dev *dev)
+ int pci_setup_device(struct pci_dev *dev)
+ {
+ 	u32 class;
++	u16 cmd;
+ 	u8 hdr_type;
+ 	int pos = 0;
+ 	struct pci_bus_region region;
+@@ -1219,6 +1223,16 @@ int pci_setup_device(struct pci_dev *dev)
+ 	/* device class may be changed after fixup */
+ 	class = dev->class >> 8;
+ 
++	if (dev->non_compliant_bars) {
++		pci_read_config_word(dev, PCI_COMMAND, &cmd);
++		if (cmd & (PCI_COMMAND_IO | PCI_COMMAND_MEMORY)) {
++			dev_info(&dev->dev, "device has non-compliant BARs; disabling IO/MEM decoding\n");
++			cmd &= ~PCI_COMMAND_IO;
++			cmd &= ~PCI_COMMAND_MEMORY;
++			pci_write_config_word(dev, PCI_COMMAND, cmd);
++		}
++	}
++
+ 	switch (dev->hdr_type) {		    /* header type */
+ 	case PCI_HEADER_TYPE_NORMAL:		    /* standard header */
+ 		if (class == PCI_CLASS_BRIDGE_PCI)
+diff --git a/drivers/pinctrl/bcm/pinctrl-bcm2835.c b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
+index 2e6ca69635aa..17dd8fe12b54 100644
+--- a/drivers/pinctrl/bcm/pinctrl-bcm2835.c
++++ b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
+@@ -779,7 +779,7 @@ static int bcm2835_pctl_dt_node_to_map(struct pinctrl_dev *pctldev,
+ 		}
+ 		if (num_pulls) {
+ 			err = of_property_read_u32_index(np, "brcm,pull",
+-					(num_funcs > 1) ? i : 0, &pull);
++					(num_pulls > 1) ? i : 0, &pull);
+ 			if (err)
+ 				goto out;
+ 			err = bcm2835_pctl_dt_node_to_map_pull(pc, np, pin,
+diff --git a/drivers/platform/x86/ideapad-laptop.c b/drivers/platform/x86/ideapad-laptop.c
+index d78ee151c9e4..be3bc2f4edd4 100644
+--- a/drivers/platform/x86/ideapad-laptop.c
++++ b/drivers/platform/x86/ideapad-laptop.c
+@@ -865,6 +865,20 @@ static const struct dmi_system_id no_hw_rfkill_list[] = {
+ 		},
+ 	},
+ 	{
++		.ident = "Lenovo ideapad Y700-15ISK",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo ideapad Y700-15ISK"),
++		},
++	},
++	{
++		.ident = "Lenovo ideapad Y700 Touch-15ISK",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo ideapad Y700 Touch-15ISK"),
++		},
++	},
++	{
+ 		.ident = "Lenovo ideapad Y700-17ISK",
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 73b7683355cd..7b94b8ee087c 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -132,24 +132,24 @@ static bool have_full_constraints(void)
+ 	return has_full_constraints || of_have_populated_dt();
+ }
+ 
++static inline struct regulator_dev *rdev_get_supply(struct regulator_dev *rdev)
++{
++	if (rdev && rdev->supply)
++		return rdev->supply->rdev;
++
++	return NULL;
++}
++
+ /**
+  * regulator_lock_supply - lock a regulator and its supplies
+  * @rdev:         regulator source
+  */
+ static void regulator_lock_supply(struct regulator_dev *rdev)
+ {
+-	struct regulator *supply;
+-	int i = 0;
+-
+-	while (1) {
+-		mutex_lock_nested(&rdev->mutex, i++);
+-		supply = rdev->supply;
+-
+-		if (!rdev->supply)
+-			return;
++	int i;
+ 
+-		rdev = supply->rdev;
+-	}
++	for (i = 0; rdev->supply; rdev = rdev_get_supply(rdev), i++)
++		mutex_lock_nested(&rdev->mutex, i);
+ }
+ 
+ /**
+diff --git a/drivers/scsi/aacraid/aacraid.h b/drivers/scsi/aacraid/aacraid.h
+index 074878b55a0b..d044f3f273be 100644
+--- a/drivers/scsi/aacraid/aacraid.h
++++ b/drivers/scsi/aacraid/aacraid.h
+@@ -944,6 +944,7 @@ struct fib {
+ 	 */
+ 	struct list_head	fiblink;
+ 	void			*data;
++	u32			vector_no;
+ 	struct hw_fib		*hw_fib_va;		/* Actual shared object */
+ 	dma_addr_t		hw_fib_pa;		/* physical address of hw_fib*/
+ };
+@@ -2113,6 +2114,7 @@ static inline unsigned int cap_to_cyls(sector_t capacity, unsigned divisor)
+ int aac_acquire_irq(struct aac_dev *dev);
+ void aac_free_irq(struct aac_dev *dev);
+ const char *aac_driverinfo(struct Scsi_Host *);
++void aac_fib_vector_assign(struct aac_dev *dev);
+ struct fib *aac_fib_alloc(struct aac_dev *dev);
+ int aac_fib_setup(struct aac_dev *dev);
+ void aac_fib_map_free(struct aac_dev *dev);
+diff --git a/drivers/scsi/aacraid/commsup.c b/drivers/scsi/aacraid/commsup.c
+index a1f90fe849c9..4cbf54928640 100644
+--- a/drivers/scsi/aacraid/commsup.c
++++ b/drivers/scsi/aacraid/commsup.c
+@@ -83,13 +83,38 @@ static int fib_map_alloc(struct aac_dev *dev)
+ 
+ void aac_fib_map_free(struct aac_dev *dev)
+ {
+-	pci_free_consistent(dev->pdev,
+-	  dev->max_fib_size * (dev->scsi_host_ptr->can_queue + AAC_NUM_MGT_FIB),
+-	  dev->hw_fib_va, dev->hw_fib_pa);
++	if (dev->hw_fib_va && dev->max_fib_size) {
++		pci_free_consistent(dev->pdev,
++		(dev->max_fib_size *
++		(dev->scsi_host_ptr->can_queue + AAC_NUM_MGT_FIB)),
++		dev->hw_fib_va, dev->hw_fib_pa);
++	}
+ 	dev->hw_fib_va = NULL;
+ 	dev->hw_fib_pa = 0;
+ }
+ 
++void aac_fib_vector_assign(struct aac_dev *dev)
++{
++	u32 i = 0;
++	u32 vector = 1;
++	struct fib *fibptr = NULL;
++
++	for (i = 0, fibptr = &dev->fibs[i];
++		i < (dev->scsi_host_ptr->can_queue + AAC_NUM_MGT_FIB);
++		i++, fibptr++) {
++		if ((dev->max_msix == 1) ||
++		  (i > ((dev->scsi_host_ptr->can_queue + AAC_NUM_MGT_FIB - 1)
++			- dev->vector_cap))) {
++			fibptr->vector_no = 0;
++		} else {
++			fibptr->vector_no = vector;
++			vector++;
++			if (vector == dev->max_msix)
++				vector = 1;
++		}
++	}
++}
++
+ /**
+  *	aac_fib_setup	-	setup the fibs
+  *	@dev: Adapter to set up
+@@ -151,6 +176,12 @@ int aac_fib_setup(struct aac_dev * dev)
+ 		hw_fib_pa = hw_fib_pa +
+ 			dev->max_fib_size + sizeof(struct aac_fib_xporthdr);
+ 	}
++
++	/*
++	 *Assign vector numbers to fibs
++	 */
++	aac_fib_vector_assign(dev);
++
+ 	/*
+ 	 *	Add the fib chain to the free list
+ 	 */
+diff --git a/drivers/scsi/aacraid/linit.c b/drivers/scsi/aacraid/linit.c
+index 3b6e5c67e853..aa6eccb8940b 100644
+--- a/drivers/scsi/aacraid/linit.c
++++ b/drivers/scsi/aacraid/linit.c
+@@ -1404,8 +1404,18 @@ static int aac_acquire_resources(struct aac_dev *dev)
+ 
+ 	aac_adapter_enable_int(dev);
+ 
+-	if (!dev->sync_mode)
++	/*max msix may change  after EEH
++	 * Re-assign vectors to fibs
++	 */
++	aac_fib_vector_assign(dev);
++
++	if (!dev->sync_mode) {
++		/* After EEH recovery or suspend resume, max_msix count
++		 * may change, therfore updating in init as well.
++		 */
+ 		aac_adapter_start(dev);
++		dev->init->Sa_MSIXVectors = cpu_to_le32(dev->max_msix);
++	}
+ 	return 0;
+ 
+ error_iounmap:
+diff --git a/drivers/scsi/aacraid/src.c b/drivers/scsi/aacraid/src.c
+index 2aa34ea8ceb1..bc0203f3d243 100644
+--- a/drivers/scsi/aacraid/src.c
++++ b/drivers/scsi/aacraid/src.c
+@@ -156,8 +156,8 @@ static irqreturn_t aac_src_intr_message(int irq, void *dev_id)
+ 				break;
+ 			if (dev->msi_enabled && dev->max_msix > 1)
+ 				atomic_dec(&dev->rrq_outstanding[vector_no]);
+-			aac_intr_normal(dev, handle-1, 0, isFastResponse, NULL);
+ 			dev->host_rrq[index++] = 0;
++			aac_intr_normal(dev, handle-1, 0, isFastResponse, NULL);
+ 			if (index == (vector_no + 1) * dev->vector_cap)
+ 				index = vector_no * dev->vector_cap;
+ 			dev->host_rrq_idx[vector_no] = index;
+@@ -452,36 +452,20 @@ static int aac_src_deliver_message(struct fib *fib)
+ #endif
+ 
+ 	u16 hdr_size = le16_to_cpu(fib->hw_fib_va->header.Size);
++	u16 vector_no;
+ 
+ 	atomic_inc(&q->numpending);
+ 
+ 	if (dev->msi_enabled && fib->hw_fib_va->header.Command != AifRequest &&
+ 	    dev->max_msix > 1) {
+-		u_int16_t vector_no, first_choice = 0xffff;
+-
+-		vector_no = dev->fibs_pushed_no % dev->max_msix;
+-		do {
+-			vector_no += 1;
+-			if (vector_no == dev->max_msix)
+-				vector_no = 1;
+-			if (atomic_read(&dev->rrq_outstanding[vector_no]) <
+-			    dev->vector_cap)
+-				break;
+-			if (0xffff == first_choice)
+-				first_choice = vector_no;
+-			else if (vector_no == first_choice)
+-				break;
+-		} while (1);
+-		if (vector_no == first_choice)
+-			vector_no = 0;
+-		atomic_inc(&dev->rrq_outstanding[vector_no]);
+-		if (dev->fibs_pushed_no == 0xffffffff)
+-			dev->fibs_pushed_no = 0;
+-		else
+-			dev->fibs_pushed_no++;
++		vector_no = fib->vector_no;
+ 		fib->hw_fib_va->header.Handle += (vector_no << 16);
++	} else {
++		vector_no = 0;
+ 	}
+ 
++	atomic_inc(&dev->rrq_outstanding[vector_no]);
++
+ 	if (dev->comm_interface == AAC_COMM_MESSAGE_TYPE2) {
+ 		/* Calculate the amount to the fibsize bits */
+ 		fibsize = (hdr_size + 127) / 128 - 1;
+diff --git a/drivers/scsi/aic7xxx/aic7xxx_osm.c b/drivers/scsi/aic7xxx/aic7xxx_osm.c
+index b846a4683562..fc6a83188c1e 100644
+--- a/drivers/scsi/aic7xxx/aic7xxx_osm.c
++++ b/drivers/scsi/aic7xxx/aic7xxx_osm.c
+@@ -1336,6 +1336,7 @@ ahc_platform_set_tags(struct ahc_softc *ahc, struct scsi_device *sdev,
+ 	case AHC_DEV_Q_TAGGED:
+ 		scsi_change_queue_depth(sdev,
+ 				dev->openings + dev->active);
++		break;
+ 	default:
+ 		/*
+ 		 * We allow the OS to queue 2 untagged transactions to
+diff --git a/drivers/scsi/be2iscsi/be_main.c b/drivers/scsi/be2iscsi/be_main.c
+index fe0c5143f8e6..758f76e88704 100644
+--- a/drivers/scsi/be2iscsi/be_main.c
++++ b/drivers/scsi/be2iscsi/be_main.c
+@@ -4470,6 +4470,7 @@ put_shost:
+ 	scsi_host_put(phba->shost);
+ free_kset:
+ 	iscsi_boot_destroy_kset(phba->boot_kset);
++	phba->boot_kset = NULL;
+ 	return -ENOMEM;
+ }
+ 
+diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c
+index 536cd5a80422..43ac62623bf2 100644
+--- a/drivers/scsi/ipr.c
++++ b/drivers/scsi/ipr.c
+@@ -4003,13 +4003,17 @@ static ssize_t ipr_store_update_fw(struct device *dev,
+ 	struct ipr_sglist *sglist;
+ 	char fname[100];
+ 	char *src;
+-	int len, result, dnld_size;
++	char *endline;
++	int result, dnld_size;
+ 
+ 	if (!capable(CAP_SYS_ADMIN))
+ 		return -EACCES;
+ 
+-	len = snprintf(fname, 99, "%s", buf);
+-	fname[len-1] = '\0';
++	snprintf(fname, sizeof(fname), "%s", buf);
++
++	endline = strchr(fname, '\n');
++	if (endline)
++		*endline = '\0';
+ 
+ 	if (request_firmware(&fw_entry, fname, &ioa_cfg->pdev->dev)) {
+ 		dev_err(&ioa_cfg->pdev->dev, "Firmware file %s not found\n", fname);
+diff --git a/drivers/scsi/scsi_common.c b/drivers/scsi/scsi_common.c
+index c126966130ab..ce79de822e46 100644
+--- a/drivers/scsi/scsi_common.c
++++ b/drivers/scsi/scsi_common.c
+@@ -278,8 +278,16 @@ int scsi_set_sense_information(u8 *buf, int buf_len, u64 info)
+ 		ucp[3] = 0;
+ 		put_unaligned_be64(info, &ucp[4]);
+ 	} else if ((buf[0] & 0x7f) == 0x70) {
+-		buf[0] |= 0x80;
+-		put_unaligned_be64(info, &buf[3]);
++		/*
++		 * Only set the 'VALID' bit if we can represent the value
++		 * correctly; otherwise just fill out the lower bytes and
++		 * clear the 'VALID' flag.
++		 */
++		if (info <= 0xffffffffUL)
++			buf[0] |= 0x80;
++		else
++			buf[0] &= 0x7f;
++		put_unaligned_be32((u32)info, &buf[3]);
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index bb669d32ccd0..cc84ea7d09cc 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -648,7 +648,7 @@ static void sd_config_discard(struct scsi_disk *sdkp, unsigned int mode)
+ 	 */
+ 	if (sdkp->lbprz) {
+ 		q->limits.discard_alignment = 0;
+-		q->limits.discard_granularity = 1;
++		q->limits.discard_granularity = logical_block_size;
+ 	} else {
+ 		q->limits.discard_alignment = sdkp->unmap_alignment *
+ 			logical_block_size;
+diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c
+index 5e820674432c..ae7d9bdf409c 100644
+--- a/drivers/scsi/sg.c
++++ b/drivers/scsi/sg.c
+@@ -652,7 +652,8 @@ sg_write(struct file *filp, const char __user *buf, size_t count, loff_t * ppos)
+ 	else
+ 		hp->dxfer_direction = (mxsize > 0) ? SG_DXFER_FROM_DEV : SG_DXFER_NONE;
+ 	hp->dxfer_len = mxsize;
+-	if (hp->dxfer_direction == SG_DXFER_TO_DEV)
++	if ((hp->dxfer_direction == SG_DXFER_TO_DEV) ||
++	    (hp->dxfer_direction == SG_DXFER_TO_FROM_DEV))
+ 		hp->dxferp = (char __user *)buf + cmd_size;
+ 	else
+ 		hp->dxferp = NULL;
+diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
+index 3fba42ad9fb8..0f636cc4c809 100644
+--- a/drivers/scsi/storvsc_drv.c
++++ b/drivers/scsi/storvsc_drv.c
+@@ -889,8 +889,9 @@ static void storvsc_handle_error(struct vmscsi_request *vm_srb,
+ 		do_work = true;
+ 		process_err_fn = storvsc_remove_lun;
+ 		break;
+-	case (SRB_STATUS_ABORTED | SRB_STATUS_AUTOSENSE_VALID):
+-		if ((asc == 0x2a) && (ascq == 0x9)) {
++	case SRB_STATUS_ABORTED:
++		if (vm_srb->srb_status & SRB_STATUS_AUTOSENSE_VALID &&
++		    (asc == 0x2a) && (ascq == 0x9)) {
+ 			do_work = true;
+ 			process_err_fn = storvsc_device_scan;
+ 			/*
+diff --git a/drivers/staging/android/ion/ion_test.c b/drivers/staging/android/ion/ion_test.c
+index b8dcf5a26cc4..58d46893e5ff 100644
+--- a/drivers/staging/android/ion/ion_test.c
++++ b/drivers/staging/android/ion/ion_test.c
+@@ -285,8 +285,8 @@ static int __init ion_test_init(void)
+ {
+ 	ion_test_pdev = platform_device_register_simple("ion-test",
+ 							-1, NULL, 0);
+-	if (!ion_test_pdev)
+-		return -ENODEV;
++	if (IS_ERR(ion_test_pdev))
++		return PTR_ERR(ion_test_pdev);
+ 
+ 	return platform_driver_probe(&ion_test_platform_driver, ion_test_probe);
+ }
+diff --git a/drivers/staging/comedi/drivers/ni_mio_common.c b/drivers/staging/comedi/drivers/ni_mio_common.c
+index 6cc304a4c59b..27fbf1a81097 100644
+--- a/drivers/staging/comedi/drivers/ni_mio_common.c
++++ b/drivers/staging/comedi/drivers/ni_mio_common.c
+@@ -246,24 +246,24 @@ static void ni_writel(struct comedi_device *dev, uint32_t data, int reg)
+ {
+ 	if (dev->mmio)
+ 		writel(data, dev->mmio + reg);
+-
+-	outl(data, dev->iobase + reg);
++	else
++		outl(data, dev->iobase + reg);
+ }
+ 
+ static void ni_writew(struct comedi_device *dev, uint16_t data, int reg)
+ {
+ 	if (dev->mmio)
+ 		writew(data, dev->mmio + reg);
+-
+-	outw(data, dev->iobase + reg);
++	else
++		outw(data, dev->iobase + reg);
+ }
+ 
+ static void ni_writeb(struct comedi_device *dev, uint8_t data, int reg)
+ {
+ 	if (dev->mmio)
+ 		writeb(data, dev->mmio + reg);
+-
+-	outb(data, dev->iobase + reg);
++	else
++		outb(data, dev->iobase + reg);
+ }
+ 
+ static uint32_t ni_readl(struct comedi_device *dev, int reg)
+diff --git a/drivers/staging/comedi/drivers/ni_tiocmd.c b/drivers/staging/comedi/drivers/ni_tiocmd.c
+index 437f723bb34d..823e47910004 100644
+--- a/drivers/staging/comedi/drivers/ni_tiocmd.c
++++ b/drivers/staging/comedi/drivers/ni_tiocmd.c
+@@ -92,7 +92,7 @@ static int ni_tio_input_inttrig(struct comedi_device *dev,
+ 	unsigned long flags;
+ 	int ret = 0;
+ 
+-	if (trig_num != cmd->start_src)
++	if (trig_num != cmd->start_arg)
+ 		return -EINVAL;
+ 
+ 	spin_lock_irqsave(&counter->lock, flags);
+diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
+index 94f4ffac723f..d151bc3d6971 100644
+--- a/drivers/target/target_core_transport.c
++++ b/drivers/target/target_core_transport.c
+@@ -2618,8 +2618,6 @@ void target_wait_for_sess_cmds(struct se_session *se_sess)
+ 
+ 	list_for_each_entry_safe(se_cmd, tmp_cmd,
+ 				&se_sess->sess_wait_list, se_cmd_list) {
+-		list_del_init(&se_cmd->se_cmd_list);
+-
+ 		pr_debug("Waiting for se_cmd: %p t_state: %d, fabric state:"
+ 			" %d\n", se_cmd, se_cmd->t_state,
+ 			se_cmd->se_tfo->get_cmd_state(se_cmd));
+diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c
+index ba08b5521382..3d5f8f432b5b 100644
+--- a/drivers/thermal/thermal_core.c
++++ b/drivers/thermal/thermal_core.c
+@@ -454,6 +454,10 @@ static void handle_thermal_trip(struct thermal_zone_device *tz, int trip)
+ {
+ 	enum thermal_trip_type type;
+ 
++	/* Ignore disabled trip points */
++	if (test_bit(trip, &tz->trips_disabled))
++		return;
++
+ 	tz->ops->get_trip_type(tz, trip, &type);
+ 
+ 	if (type == THERMAL_TRIP_CRITICAL || type == THERMAL_TRIP_HOT)
+@@ -1796,6 +1800,7 @@ struct thermal_zone_device *thermal_zone_device_register(const char *type,
+ {
+ 	struct thermal_zone_device *tz;
+ 	enum thermal_trip_type trip_type;
++	int trip_temp;
+ 	int result;
+ 	int count;
+ 	int passive = 0;
+@@ -1867,9 +1872,15 @@ struct thermal_zone_device *thermal_zone_device_register(const char *type,
+ 		goto unregister;
+ 
+ 	for (count = 0; count < trips; count++) {
+-		tz->ops->get_trip_type(tz, count, &trip_type);
++		if (tz->ops->get_trip_type(tz, count, &trip_type))
++			set_bit(count, &tz->trips_disabled);
+ 		if (trip_type == THERMAL_TRIP_PASSIVE)
+ 			passive = 1;
++		if (tz->ops->get_trip_temp(tz, count, &trip_temp))
++			set_bit(count, &tz->trips_disabled);
++		/* Check for bogus trip points */
++		if (trip_temp == 0)
++			set_bit(count, &tz->trips_disabled);
+ 	}
+ 
+ 	if (!passive) {
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 52d82d2ac726..56ccbcefdd85 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -713,22 +713,16 @@ static int size_fifo(struct uart_8250_port *up)
+  */
+ static unsigned int autoconfig_read_divisor_id(struct uart_8250_port *p)
+ {
+-	unsigned char old_dll, old_dlm, old_lcr;
+-	unsigned int id;
++	unsigned char old_lcr;
++	unsigned int id, old_dl;
+ 
+ 	old_lcr = serial_in(p, UART_LCR);
+ 	serial_out(p, UART_LCR, UART_LCR_CONF_MODE_A);
++	old_dl = serial_dl_read(p);
++	serial_dl_write(p, 0);
++	id = serial_dl_read(p);
++	serial_dl_write(p, old_dl);
+ 
+-	old_dll = serial_in(p, UART_DLL);
+-	old_dlm = serial_in(p, UART_DLM);
+-
+-	serial_out(p, UART_DLL, 0);
+-	serial_out(p, UART_DLM, 0);
+-
+-	id = serial_in(p, UART_DLL) | serial_in(p, UART_DLM) << 8;
+-
+-	serial_out(p, UART_DLL, old_dll);
+-	serial_out(p, UART_DLM, old_dlm);
+ 	serial_out(p, UART_LCR, old_lcr);
+ 
+ 	return id;
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index fa4e23930614..d37fdcc3143c 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -1114,6 +1114,9 @@ static int acm_probe(struct usb_interface *intf,
+ 	if (quirks == NO_UNION_NORMAL) {
+ 		data_interface = usb_ifnum_to_if(usb_dev, 1);
+ 		control_interface = usb_ifnum_to_if(usb_dev, 0);
++		/* we would crash */
++		if (!data_interface || !control_interface)
++			return -ENODEV;
+ 		goto skip_normal_probe;
+ 	}
+ 
+diff --git a/drivers/usb/core/driver.c b/drivers/usb/core/driver.c
+index 56593a9a8726..2057d91d8336 100644
+--- a/drivers/usb/core/driver.c
++++ b/drivers/usb/core/driver.c
+@@ -502,11 +502,15 @@ static int usb_unbind_interface(struct device *dev)
+ int usb_driver_claim_interface(struct usb_driver *driver,
+ 				struct usb_interface *iface, void *priv)
+ {
+-	struct device *dev = &iface->dev;
++	struct device *dev;
+ 	struct usb_device *udev;
+ 	int retval = 0;
+ 	int lpm_disable_error;
+ 
++	if (!iface)
++		return -ENODEV;
++
++	dev = &iface->dev;
+ 	if (dev->driver)
+ 		return -EBUSY;
+ 
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 1560f3f3e756..2a274884c7ea 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -4277,7 +4277,7 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1,
+ {
+ 	struct usb_device	*hdev = hub->hdev;
+ 	struct usb_hcd		*hcd = bus_to_hcd(hdev->bus);
+-	int			i, j, retval;
++	int			retries, operations, retval, i;
+ 	unsigned		delay = HUB_SHORT_RESET_TIME;
+ 	enum usb_device_speed	oldspeed = udev->speed;
+ 	const char		*speed;
+@@ -4379,7 +4379,7 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1,
+ 	 * first 8 bytes of the device descriptor to get the ep0 maxpacket
+ 	 * value.
+ 	 */
+-	for (i = 0; i < GET_DESCRIPTOR_TRIES; (++i, msleep(100))) {
++	for (retries = 0; retries < GET_DESCRIPTOR_TRIES; (++retries, msleep(100))) {
+ 		bool did_new_scheme = false;
+ 
+ 		if (use_new_scheme(udev, retry_counter)) {
+@@ -4406,7 +4406,7 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1,
+ 			 * 255 is for WUSB devices, we actually need to use
+ 			 * 512 (WUSB1.0[4.8.1]).
+ 			 */
+-			for (j = 0; j < 3; ++j) {
++			for (operations = 0; operations < 3; ++operations) {
+ 				buf->bMaxPacketSize0 = 0;
+ 				r = usb_control_msg(udev, usb_rcvaddr0pipe(),
+ 					USB_REQ_GET_DESCRIPTOR, USB_DIR_IN,
+@@ -4426,7 +4426,13 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1,
+ 						r = -EPROTO;
+ 					break;
+ 				}
+-				if (r == 0)
++				/*
++				 * Some devices time out if they are powered on
++				 * when already connected. They need a second
++				 * reset. But only on the first attempt,
++				 * lest we get into a time out/reset loop
++				 */
++				if (r == 0  || (r == -ETIMEDOUT && retries == 0))
+ 					break;
+ 			}
+ 			udev->descriptor.bMaxPacketSize0 =
+@@ -4458,7 +4464,7 @@ hub_port_init(struct usb_hub *hub, struct usb_device *udev, int port1,
+ 		 * authorization will assign the final address.
+ 		 */
+ 		if (udev->wusb == 0) {
+-			for (j = 0; j < SET_ADDRESS_TRIES; ++j) {
++			for (operations = 0; operations < SET_ADDRESS_TRIES; ++operations) {
+ 				retval = hub_set_address(udev, devnum);
+ 				if (retval >= 0)
+ 					break;
+diff --git a/drivers/usb/misc/iowarrior.c b/drivers/usb/misc/iowarrior.c
+index c6bfd13f6c92..1950e87b4219 100644
+--- a/drivers/usb/misc/iowarrior.c
++++ b/drivers/usb/misc/iowarrior.c
+@@ -787,6 +787,12 @@ static int iowarrior_probe(struct usb_interface *interface,
+ 	iface_desc = interface->cur_altsetting;
+ 	dev->product_id = le16_to_cpu(udev->descriptor.idProduct);
+ 
++	if (iface_desc->desc.bNumEndpoints < 1) {
++		dev_err(&interface->dev, "Invalid number of endpoints\n");
++		retval = -EINVAL;
++		goto error;
++	}
++
+ 	/* set up the endpoint information */
+ 	for (i = 0; i < iface_desc->desc.bNumEndpoints; ++i) {
+ 		endpoint = &iface_desc->endpoint[i].desc;
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index 7a76fe4c2f9e..bdc0f2f24f19 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -164,6 +164,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x18EF, 0xE025) }, /* ELV Marble Sound Board 1 */
+ 	{ USB_DEVICE(0x1901, 0x0190) }, /* GE B850 CP2105 Recorder interface */
+ 	{ USB_DEVICE(0x1901, 0x0193) }, /* GE B650 CP2104 PMC interface */
++	{ USB_DEVICE(0x1901, 0x0194) },	/* GE Healthcare Remote Alarm Box */
+ 	{ USB_DEVICE(0x19CF, 0x3000) }, /* Parrot NMEA GPS Flight Recorder */
+ 	{ USB_DEVICE(0x1ADB, 0x0001) }, /* Schweitzer Engineering C662 Cable */
+ 	{ USB_DEVICE(0x1B1C, 0x1C00) }, /* Corsair USB Dongle */
+diff --git a/drivers/usb/serial/cypress_m8.c b/drivers/usb/serial/cypress_m8.c
+index 01bf53392819..244acb1299a9 100644
+--- a/drivers/usb/serial/cypress_m8.c
++++ b/drivers/usb/serial/cypress_m8.c
+@@ -447,6 +447,11 @@ static int cypress_generic_port_probe(struct usb_serial_port *port)
+ 	struct usb_serial *serial = port->serial;
+ 	struct cypress_private *priv;
+ 
++	if (!port->interrupt_out_urb || !port->interrupt_in_urb) {
++		dev_err(&port->dev, "required endpoint is missing\n");
++		return -ENODEV;
++	}
++
+ 	priv = kzalloc(sizeof(struct cypress_private), GFP_KERNEL);
+ 	if (!priv)
+ 		return -ENOMEM;
+@@ -606,12 +611,6 @@ static int cypress_open(struct tty_struct *tty, struct usb_serial_port *port)
+ 		cypress_set_termios(tty, port, &priv->tmp_termios);
+ 
+ 	/* setup the port and start reading from the device */
+-	if (!port->interrupt_in_urb) {
+-		dev_err(&port->dev, "%s - interrupt_in_urb is empty!\n",
+-			__func__);
+-		return -1;
+-	}
+-
+ 	usb_fill_int_urb(port->interrupt_in_urb, serial->dev,
+ 		usb_rcvintpipe(serial->dev, port->interrupt_in_endpointAddress),
+ 		port->interrupt_in_urb->transfer_buffer,
+diff --git a/drivers/usb/serial/digi_acceleport.c b/drivers/usb/serial/digi_acceleport.c
+index 12b0e67473ba..3df7b7ec178e 100644
+--- a/drivers/usb/serial/digi_acceleport.c
++++ b/drivers/usb/serial/digi_acceleport.c
+@@ -1251,8 +1251,27 @@ static int digi_port_init(struct usb_serial_port *port, unsigned port_num)
+ 
+ static int digi_startup(struct usb_serial *serial)
+ {
++	struct device *dev = &serial->interface->dev;
+ 	struct digi_serial *serial_priv;
+ 	int ret;
++	int i;
++
++	/* check whether the device has the expected number of endpoints */
++	if (serial->num_port_pointers < serial->type->num_ports + 1) {
++		dev_err(dev, "OOB endpoints missing\n");
++		return -ENODEV;
++	}
++
++	for (i = 0; i < serial->type->num_ports + 1 ; i++) {
++		if (!serial->port[i]->read_urb) {
++			dev_err(dev, "bulk-in endpoint missing\n");
++			return -ENODEV;
++		}
++		if (!serial->port[i]->write_urb) {
++			dev_err(dev, "bulk-out endpoint missing\n");
++			return -ENODEV;
++		}
++	}
+ 
+ 	serial_priv = kzalloc(sizeof(*serial_priv), GFP_KERNEL);
+ 	if (!serial_priv)
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index 8c660ae401d8..b61f12160d37 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -1004,6 +1004,10 @@ static const struct usb_device_id id_table_combined[] = {
+ 	{ USB_DEVICE(FTDI_VID, CHETCO_SEASMART_DISPLAY_PID) },
+ 	{ USB_DEVICE(FTDI_VID, CHETCO_SEASMART_LITE_PID) },
+ 	{ USB_DEVICE(FTDI_VID, CHETCO_SEASMART_ANALOG_PID) },
++	/* ICP DAS I-756xU devices */
++	{ USB_DEVICE(ICPDAS_VID, ICPDAS_I7560U_PID) },
++	{ USB_DEVICE(ICPDAS_VID, ICPDAS_I7561U_PID) },
++	{ USB_DEVICE(ICPDAS_VID, ICPDAS_I7563U_PID) },
+ 	{ }					/* Terminating entry */
+ };
+ 
+diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
+index a84df2513994..c5d6c1e73e8e 100644
+--- a/drivers/usb/serial/ftdi_sio_ids.h
++++ b/drivers/usb/serial/ftdi_sio_ids.h
+@@ -872,6 +872,14 @@
+ #define NOVITUS_BONO_E_PID		0x6010
+ 
+ /*
++ * ICPDAS I-756*U devices
++ */
++#define ICPDAS_VID			0x1b5c
++#define ICPDAS_I7560U_PID		0x0103
++#define ICPDAS_I7561U_PID		0x0104
++#define ICPDAS_I7563U_PID		0x0105
++
++/*
+  * RT Systems programming cables for various ham radios
+  */
+ #define RTSYSTEMS_VID		0x2100	/* Vendor ID */
+diff --git a/drivers/usb/serial/mct_u232.c b/drivers/usb/serial/mct_u232.c
+index fd707d6a10e2..89726f702202 100644
+--- a/drivers/usb/serial/mct_u232.c
++++ b/drivers/usb/serial/mct_u232.c
+@@ -376,14 +376,21 @@ static void mct_u232_msr_to_state(struct usb_serial_port *port,
+ 
+ static int mct_u232_port_probe(struct usb_serial_port *port)
+ {
++	struct usb_serial *serial = port->serial;
+ 	struct mct_u232_private *priv;
+ 
++	/* check first to simplify error handling */
++	if (!serial->port[1] || !serial->port[1]->interrupt_in_urb) {
++		dev_err(&port->dev, "expected endpoint missing\n");
++		return -ENODEV;
++	}
++
+ 	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+ 	if (!priv)
+ 		return -ENOMEM;
+ 
+ 	/* Use second interrupt-in endpoint for reading. */
+-	priv->read_urb = port->serial->port[1]->interrupt_in_urb;
++	priv->read_urb = serial->port[1]->interrupt_in_urb;
+ 	priv->read_urb->context = port;
+ 
+ 	spin_lock_init(&priv->lock);
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 348e19834b83..c6f497f16526 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1818,6 +1818,8 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x7d02, 0xff, 0x00, 0x00) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x7d03, 0xff, 0x02, 0x01) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x7d03, 0xff, 0x00, 0x00) },
++	{ USB_DEVICE_INTERFACE_CLASS(0x2001, 0x7e19, 0xff),			/* D-Link DWM-221 B1 */
++	  .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e01, 0xff, 0xff, 0xff) }, /* D-Link DWM-152/C1 */
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e02, 0xff, 0xff, 0xff) }, /* D-Link DWM-156/C1 */
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2020, 0x4000, 0xff) },                /* OLICARD300 - MT6225 */
+diff --git a/drivers/usb/storage/uas.c b/drivers/usb/storage/uas.c
+index 5c66d3f7a6d0..ce0cd6e20d4f 100644
+--- a/drivers/usb/storage/uas.c
++++ b/drivers/usb/storage/uas.c
+@@ -812,7 +812,7 @@ static struct scsi_host_template uas_host_template = {
+ 	.slave_configure = uas_slave_configure,
+ 	.eh_abort_handler = uas_eh_abort_handler,
+ 	.eh_bus_reset_handler = uas_eh_bus_reset_handler,
+-	.can_queue = 65536,	/* Is there a limit on the _host_ ? */
++	.can_queue = MAX_CMNDS,
+ 	.this_id = -1,
+ 	.sg_tablesize = SG_NONE,
+ 	.skip_settle_delay = 1,
+diff --git a/drivers/watchdog/rc32434_wdt.c b/drivers/watchdog/rc32434_wdt.c
+index 71e78ef4b736..3a75f3b53452 100644
+--- a/drivers/watchdog/rc32434_wdt.c
++++ b/drivers/watchdog/rc32434_wdt.c
+@@ -237,7 +237,7 @@ static long rc32434_wdt_ioctl(struct file *file, unsigned int cmd,
+ 			return -EINVAL;
+ 		/* Fall through */
+ 	case WDIOC_GETTIMEOUT:
+-		return copy_to_user(argp, &timeout, sizeof(int));
++		return copy_to_user(argp, &timeout, sizeof(int)) ? -EFAULT : 0;
+ 	default:
+ 		return -ENOTTY;
+ 	}
+diff --git a/fs/coredump.c b/fs/coredump.c
+index 1777331eee76..dfc87c5f5a54 100644
+--- a/fs/coredump.c
++++ b/fs/coredump.c
+@@ -32,6 +32,9 @@
+ #include <linux/pipe_fs_i.h>
+ #include <linux/oom.h>
+ #include <linux/compat.h>
++#include <linux/sched.h>
++#include <linux/fs.h>
++#include <linux/path.h>
+ 
+ #include <asm/uaccess.h>
+ #include <asm/mmu_context.h>
+@@ -627,6 +630,8 @@ void do_coredump(const siginfo_t *siginfo)
+ 		}
+ 	} else {
+ 		struct inode *inode;
++		int open_flags = O_CREAT | O_RDWR | O_NOFOLLOW |
++				 O_LARGEFILE | O_EXCL;
+ 
+ 		if (cprm.limit < binfmt->min_coredump)
+ 			goto fail_unlock;
+@@ -665,10 +670,27 @@ void do_coredump(const siginfo_t *siginfo)
+ 		 * what matters is that at least one of the two processes
+ 		 * writes its coredump successfully, not which one.
+ 		 */
+-		cprm.file = filp_open(cn.corename,
+-				 O_CREAT | 2 | O_NOFOLLOW |
+-				 O_LARGEFILE | O_EXCL,
+-				 0600);
++		if (need_suid_safe) {
++			/*
++			 * Using user namespaces, normal user tasks can change
++			 * their current->fs->root to point to arbitrary
++			 * directories. Since the intention of the "only dump
++			 * with a fully qualified path" rule is to control where
++			 * coredumps may be placed using root privileges,
++			 * current->fs->root must not be used. Instead, use the
++			 * root directory of init_task.
++			 */
++			struct path root;
++
++			task_lock(&init_task);
++			get_fs_root(init_task.fs, &root);
++			task_unlock(&init_task);
++			cprm.file = file_open_root(root.dentry, root.mnt,
++				cn.corename, open_flags, 0600);
++			path_put(&root);
++		} else {
++			cprm.file = filp_open(cn.corename, open_flags, 0600);
++		}
+ 		if (IS_ERR(cprm.file))
+ 			goto fail_unlock;
+ 
+diff --git a/fs/fhandle.c b/fs/fhandle.c
+index d59712dfa3e7..ca3c3dd01789 100644
+--- a/fs/fhandle.c
++++ b/fs/fhandle.c
+@@ -228,7 +228,7 @@ long do_handle_open(int mountdirfd,
+ 		path_put(&path);
+ 		return fd;
+ 	}
+-	file = file_open_root(path.dentry, path.mnt, "", open_flag);
++	file = file_open_root(path.dentry, path.mnt, "", open_flag, 0);
+ 	if (IS_ERR(file)) {
+ 		put_unused_fd(fd);
+ 		retval =  PTR_ERR(file);
+diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
+index 7a8ea1351584..60d6fc2e0e4b 100644
+--- a/fs/fs-writeback.c
++++ b/fs/fs-writeback.c
+@@ -281,13 +281,15 @@ locked_inode_to_wb_and_lock_list(struct inode *inode)
+ 		wb_get(wb);
+ 		spin_unlock(&inode->i_lock);
+ 		spin_lock(&wb->list_lock);
+-		wb_put(wb);		/* not gonna deref it anymore */
+ 
+ 		/* i_wb may have changed inbetween, can't use inode_to_wb() */
+-		if (likely(wb == inode->i_wb))
+-			return wb;	/* @inode already has ref */
++		if (likely(wb == inode->i_wb)) {
++			wb_put(wb);	/* @inode already has ref */
++			return wb;
++		}
+ 
+ 		spin_unlock(&wb->list_lock);
++		wb_put(wb);
+ 		cpu_relax();
+ 		spin_lock(&inode->i_lock);
+ 	}
+@@ -1339,10 +1341,10 @@ __writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
+  * we go e.g. from filesystem. Flusher thread uses __writeback_single_inode()
+  * and does more profound writeback list handling in writeback_sb_inodes().
+  */
+-static int
+-writeback_single_inode(struct inode *inode, struct bdi_writeback *wb,
+-		       struct writeback_control *wbc)
++static int writeback_single_inode(struct inode *inode,
++				  struct writeback_control *wbc)
+ {
++	struct bdi_writeback *wb;
+ 	int ret = 0;
+ 
+ 	spin_lock(&inode->i_lock);
+@@ -1380,7 +1382,8 @@ writeback_single_inode(struct inode *inode, struct bdi_writeback *wb,
+ 	ret = __writeback_single_inode(inode, wbc);
+ 
+ 	wbc_detach_inode(wbc);
+-	spin_lock(&wb->list_lock);
++
++	wb = inode_to_wb_and_lock_list(inode);
+ 	spin_lock(&inode->i_lock);
+ 	/*
+ 	 * If inode is clean, remove it from writeback lists. Otherwise don't
+@@ -1455,6 +1458,7 @@ static long writeback_sb_inodes(struct super_block *sb,
+ 
+ 	while (!list_empty(&wb->b_io)) {
+ 		struct inode *inode = wb_inode(wb->b_io.prev);
++		struct bdi_writeback *tmp_wb;
+ 
+ 		if (inode->i_sb != sb) {
+ 			if (work->sb) {
+@@ -1545,15 +1549,23 @@ static long writeback_sb_inodes(struct super_block *sb,
+ 			cond_resched();
+ 		}
+ 
+-
+-		spin_lock(&wb->list_lock);
++		/*
++		 * Requeue @inode if still dirty.  Be careful as @inode may
++		 * have been switched to another wb in the meantime.
++		 */
++		tmp_wb = inode_to_wb_and_lock_list(inode);
+ 		spin_lock(&inode->i_lock);
+ 		if (!(inode->i_state & I_DIRTY_ALL))
+ 			wrote++;
+-		requeue_inode(inode, wb, &wbc);
++		requeue_inode(inode, tmp_wb, &wbc);
+ 		inode_sync_complete(inode);
+ 		spin_unlock(&inode->i_lock);
+ 
++		if (unlikely(tmp_wb != wb)) {
++			spin_unlock(&tmp_wb->list_lock);
++			spin_lock(&wb->list_lock);
++		}
++
+ 		/*
+ 		 * bail out to wb_writeback() often enough to check
+ 		 * background threshold and other termination conditions.
+@@ -2340,7 +2352,6 @@ EXPORT_SYMBOL(sync_inodes_sb);
+  */
+ int write_inode_now(struct inode *inode, int sync)
+ {
+-	struct bdi_writeback *wb = &inode_to_bdi(inode)->wb;
+ 	struct writeback_control wbc = {
+ 		.nr_to_write = LONG_MAX,
+ 		.sync_mode = sync ? WB_SYNC_ALL : WB_SYNC_NONE,
+@@ -2352,7 +2363,7 @@ int write_inode_now(struct inode *inode, int sync)
+ 		wbc.nr_to_write = 0;
+ 
+ 	might_sleep();
+-	return writeback_single_inode(inode, wb, &wbc);
++	return writeback_single_inode(inode, &wbc);
+ }
+ EXPORT_SYMBOL(write_inode_now);
+ 
+@@ -2369,7 +2380,7 @@ EXPORT_SYMBOL(write_inode_now);
+  */
+ int sync_inode(struct inode *inode, struct writeback_control *wbc)
+ {
+-	return writeback_single_inode(inode, &inode_to_bdi(inode)->wb, wbc);
++	return writeback_single_inode(inode, wbc);
+ }
+ EXPORT_SYMBOL(sync_inode);
+ 
+diff --git a/fs/fuse/cuse.c b/fs/fuse/cuse.c
+index 8e3ee1936c7e..c5b6b7165489 100644
+--- a/fs/fuse/cuse.c
++++ b/fs/fuse/cuse.c
+@@ -90,7 +90,7 @@ static struct list_head *cuse_conntbl_head(dev_t devt)
+ 
+ static ssize_t cuse_read_iter(struct kiocb *kiocb, struct iov_iter *to)
+ {
+-	struct fuse_io_priv io = { .async = 0, .file = kiocb->ki_filp };
++	struct fuse_io_priv io = FUSE_IO_PRIV_SYNC(kiocb->ki_filp);
+ 	loff_t pos = 0;
+ 
+ 	return fuse_direct_io(&io, to, &pos, FUSE_DIO_CUSE);
+@@ -98,7 +98,7 @@ static ssize_t cuse_read_iter(struct kiocb *kiocb, struct iov_iter *to)
+ 
+ static ssize_t cuse_write_iter(struct kiocb *kiocb, struct iov_iter *from)
+ {
+-	struct fuse_io_priv io = { .async = 0, .file = kiocb->ki_filp };
++	struct fuse_io_priv io = FUSE_IO_PRIV_SYNC(kiocb->ki_filp);
+ 	loff_t pos = 0;
+ 	/*
+ 	 * No locking or generic_write_checks(), the server is
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index 570ca4053c80..c2e340d6ec6e 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -528,6 +528,11 @@ static void fuse_release_user_pages(struct fuse_req *req, int write)
+ 	}
+ }
+ 
++static void fuse_io_release(struct kref *kref)
++{
++	kfree(container_of(kref, struct fuse_io_priv, refcnt));
++}
++
+ static ssize_t fuse_get_res_by_io(struct fuse_io_priv *io)
+ {
+ 	if (io->err)
+@@ -585,8 +590,9 @@ static void fuse_aio_complete(struct fuse_io_priv *io, int err, ssize_t pos)
+ 		}
+ 
+ 		io->iocb->ki_complete(io->iocb, res, 0);
+-		kfree(io);
+ 	}
++
++	kref_put(&io->refcnt, fuse_io_release);
+ }
+ 
+ static void fuse_aio_complete_req(struct fuse_conn *fc, struct fuse_req *req)
+@@ -613,6 +619,7 @@ static size_t fuse_async_req_send(struct fuse_conn *fc, struct fuse_req *req,
+ 		size_t num_bytes, struct fuse_io_priv *io)
+ {
+ 	spin_lock(&io->lock);
++	kref_get(&io->refcnt);
+ 	io->size += num_bytes;
+ 	io->reqs++;
+ 	spin_unlock(&io->lock);
+@@ -691,7 +698,7 @@ static void fuse_short_read(struct fuse_req *req, struct inode *inode,
+ 
+ static int fuse_do_readpage(struct file *file, struct page *page)
+ {
+-	struct fuse_io_priv io = { .async = 0, .file = file };
++	struct fuse_io_priv io = FUSE_IO_PRIV_SYNC(file);
+ 	struct inode *inode = page->mapping->host;
+ 	struct fuse_conn *fc = get_fuse_conn(inode);
+ 	struct fuse_req *req;
+@@ -984,7 +991,7 @@ static size_t fuse_send_write_pages(struct fuse_req *req, struct file *file,
+ 	size_t res;
+ 	unsigned offset;
+ 	unsigned i;
+-	struct fuse_io_priv io = { .async = 0, .file = file };
++	struct fuse_io_priv io = FUSE_IO_PRIV_SYNC(file);
+ 
+ 	for (i = 0; i < req->num_pages; i++)
+ 		fuse_wait_on_page_writeback(inode, req->pages[i]->index);
+@@ -1398,7 +1405,7 @@ static ssize_t __fuse_direct_read(struct fuse_io_priv *io,
+ 
+ static ssize_t fuse_direct_read_iter(struct kiocb *iocb, struct iov_iter *to)
+ {
+-	struct fuse_io_priv io = { .async = 0, .file = iocb->ki_filp };
++	struct fuse_io_priv io = FUSE_IO_PRIV_SYNC(iocb->ki_filp);
+ 	return __fuse_direct_read(&io, to, &iocb->ki_pos);
+ }
+ 
+@@ -1406,7 +1413,7 @@ static ssize_t fuse_direct_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ {
+ 	struct file *file = iocb->ki_filp;
+ 	struct inode *inode = file_inode(file);
+-	struct fuse_io_priv io = { .async = 0, .file = file };
++	struct fuse_io_priv io = FUSE_IO_PRIV_SYNC(file);
+ 	ssize_t res;
+ 
+ 	if (is_bad_inode(inode))
+@@ -2786,6 +2793,7 @@ fuse_direct_IO(struct kiocb *iocb, struct iov_iter *iter, loff_t offset)
+ 	loff_t i_size;
+ 	size_t count = iov_iter_count(iter);
+ 	struct fuse_io_priv *io;
++	bool is_sync = is_sync_kiocb(iocb);
+ 
+ 	pos = offset;
+ 	inode = file->f_mapping->host;
+@@ -2806,6 +2814,7 @@ fuse_direct_IO(struct kiocb *iocb, struct iov_iter *iter, loff_t offset)
+ 	if (!io)
+ 		return -ENOMEM;
+ 	spin_lock_init(&io->lock);
++	kref_init(&io->refcnt);
+ 	io->reqs = 1;
+ 	io->bytes = -1;
+ 	io->size = 0;
+@@ -2825,12 +2834,18 @@ fuse_direct_IO(struct kiocb *iocb, struct iov_iter *iter, loff_t offset)
+ 	 * to wait on real async I/O requests, so we must submit this request
+ 	 * synchronously.
+ 	 */
+-	if (!is_sync_kiocb(iocb) && (offset + count > i_size) &&
++	if (!is_sync && (offset + count > i_size) &&
+ 	    iov_iter_rw(iter) == WRITE)
+ 		io->async = false;
+ 
+-	if (io->async && is_sync_kiocb(iocb))
++	if (io->async && is_sync) {
++		/*
++		 * Additional reference to keep io around after
++		 * calling fuse_aio_complete()
++		 */
++		kref_get(&io->refcnt);
+ 		io->done = &wait;
++	}
+ 
+ 	if (iov_iter_rw(iter) == WRITE) {
+ 		ret = fuse_direct_io(io, iter, &pos, FUSE_DIO_WRITE);
+@@ -2843,14 +2858,14 @@ fuse_direct_IO(struct kiocb *iocb, struct iov_iter *iter, loff_t offset)
+ 		fuse_aio_complete(io, ret < 0 ? ret : 0, -1);
+ 
+ 		/* we have a non-extending, async request, so return */
+-		if (!is_sync_kiocb(iocb))
++		if (!is_sync)
+ 			return -EIOCBQUEUED;
+ 
+ 		wait_for_completion(&wait);
+ 		ret = fuse_get_res_by_io(io);
+ 	}
+ 
+-	kfree(io);
++	kref_put(&io->refcnt, fuse_io_release);
+ 
+ 	if (iov_iter_rw(iter) == WRITE) {
+ 		if (ret > 0)
+diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
+index 405113101db8..604cd42dafef 100644
+--- a/fs/fuse/fuse_i.h
++++ b/fs/fuse/fuse_i.h
+@@ -22,6 +22,7 @@
+ #include <linux/rbtree.h>
+ #include <linux/poll.h>
+ #include <linux/workqueue.h>
++#include <linux/kref.h>
+ 
+ /** Max number of pages that can be used in a single read request */
+ #define FUSE_MAX_PAGES_PER_REQ 32
+@@ -243,6 +244,7 @@ struct fuse_args {
+ 
+ /** The request IO state (for asynchronous processing) */
+ struct fuse_io_priv {
++	struct kref refcnt;
+ 	int async;
+ 	spinlock_t lock;
+ 	unsigned reqs;
+@@ -256,6 +258,13 @@ struct fuse_io_priv {
+ 	struct completion *done;
+ };
+ 
++#define FUSE_IO_PRIV_SYNC(f) \
++{					\
++	.refcnt = { ATOMIC_INIT(1) },	\
++	.async = 0,			\
++	.file = f,			\
++}
++
+ /**
+  * Request flags
+  *
+diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
+index 81e622681c82..624a57a9c4aa 100644
+--- a/fs/jbd2/journal.c
++++ b/fs/jbd2/journal.c
+@@ -1408,11 +1408,12 @@ out:
+ /**
+  * jbd2_mark_journal_empty() - Mark on disk journal as empty.
+  * @journal: The journal to update.
++ * @write_op: With which operation should we write the journal sb
+  *
+  * Update a journal's dynamic superblock fields to show that journal is empty.
+  * Write updated superblock to disk waiting for IO to complete.
+  */
+-static void jbd2_mark_journal_empty(journal_t *journal)
++static void jbd2_mark_journal_empty(journal_t *journal, int write_op)
+ {
+ 	journal_superblock_t *sb = journal->j_superblock;
+ 
+@@ -1430,7 +1431,7 @@ static void jbd2_mark_journal_empty(journal_t *journal)
+ 	sb->s_start    = cpu_to_be32(0);
+ 	read_unlock(&journal->j_state_lock);
+ 
+-	jbd2_write_superblock(journal, WRITE_FUA);
++	jbd2_write_superblock(journal, write_op);
+ 
+ 	/* Log is no longer empty */
+ 	write_lock(&journal->j_state_lock);
+@@ -1716,7 +1717,13 @@ int jbd2_journal_destroy(journal_t *journal)
+ 	if (journal->j_sb_buffer) {
+ 		if (!is_journal_aborted(journal)) {
+ 			mutex_lock(&journal->j_checkpoint_mutex);
+-			jbd2_mark_journal_empty(journal);
++
++			write_lock(&journal->j_state_lock);
++			journal->j_tail_sequence =
++				++journal->j_transaction_sequence;
++			write_unlock(&journal->j_state_lock);
++
++			jbd2_mark_journal_empty(journal, WRITE_FLUSH_FUA);
+ 			mutex_unlock(&journal->j_checkpoint_mutex);
+ 		} else
+ 			err = -EIO;
+@@ -1975,7 +1982,7 @@ int jbd2_journal_flush(journal_t *journal)
+ 	 * the magic code for a fully-recovered superblock.  Any future
+ 	 * commits of data to the journal will restore the current
+ 	 * s_start value. */
+-	jbd2_mark_journal_empty(journal);
++	jbd2_mark_journal_empty(journal, WRITE_FUA);
+ 	mutex_unlock(&journal->j_checkpoint_mutex);
+ 	write_lock(&journal->j_state_lock);
+ 	J_ASSERT(!journal->j_running_transaction);
+@@ -2021,7 +2028,7 @@ int jbd2_journal_wipe(journal_t *journal, int write)
+ 	if (write) {
+ 		/* Lock to make assertions happy... */
+ 		mutex_lock(&journal->j_checkpoint_mutex);
+-		jbd2_mark_journal_empty(journal);
++		jbd2_mark_journal_empty(journal, WRITE_FUA);
+ 		mutex_unlock(&journal->j_checkpoint_mutex);
+ 	}
+ 
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index a9f096c7e99f..7d5351cd67fb 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -877,6 +877,7 @@ nfsd4_secinfo(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ 				    &exp, &dentry);
+ 	if (err)
+ 		return err;
++	fh_unlock(&cstate->current_fh);
+ 	if (d_really_is_negative(dentry)) {
+ 		exp_put(exp);
+ 		err = nfserr_noent;
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
+index 51c9e9ca39a4..12935209deca 100644
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -1072,8 +1072,9 @@ nfsd4_decode_rename(struct nfsd4_compoundargs *argp, struct nfsd4_rename *rename
+ 
+ 	READ_BUF(4);
+ 	rename->rn_snamelen = be32_to_cpup(p++);
+-	READ_BUF(rename->rn_snamelen + 4);
++	READ_BUF(rename->rn_snamelen);
+ 	SAVEMEM(rename->rn_sname, rename->rn_snamelen);
++	READ_BUF(4);
+ 	rename->rn_tnamelen = be32_to_cpup(p++);
+ 	READ_BUF(rename->rn_tnamelen);
+ 	SAVEMEM(rename->rn_tname, rename->rn_tnamelen);
+@@ -1155,13 +1156,14 @@ nfsd4_decode_setclientid(struct nfsd4_compoundargs *argp, struct nfsd4_setclient
+ 	READ_BUF(8);
+ 	setclientid->se_callback_prog = be32_to_cpup(p++);
+ 	setclientid->se_callback_netid_len = be32_to_cpup(p++);
+-
+-	READ_BUF(setclientid->se_callback_netid_len + 4);
++	READ_BUF(setclientid->se_callback_netid_len);
+ 	SAVEMEM(setclientid->se_callback_netid_val, setclientid->se_callback_netid_len);
++	READ_BUF(4);
+ 	setclientid->se_callback_addr_len = be32_to_cpup(p++);
+ 
+-	READ_BUF(setclientid->se_callback_addr_len + 4);
++	READ_BUF(setclientid->se_callback_addr_len);
+ 	SAVEMEM(setclientid->se_callback_addr_val, setclientid->se_callback_addr_len);
++	READ_BUF(4);
+ 	setclientid->se_callback_ident = be32_to_cpup(p++);
+ 
+ 	DECODE_TAIL;
+@@ -1815,8 +1817,9 @@ nfsd4_decode_compound(struct nfsd4_compoundargs *argp)
+ 
+ 	READ_BUF(4);
+ 	argp->taglen = be32_to_cpup(p++);
+-	READ_BUF(argp->taglen + 8);
++	READ_BUF(argp->taglen);
+ 	SAVEMEM(argp->tag, argp->taglen);
++	READ_BUF(8);
+ 	argp->minorversion = be32_to_cpup(p++);
+ 	argp->opcnt = be32_to_cpup(p++);
+ 	max_reply += 4 + (XDR_QUADLEN(argp->taglen) << 2);
+diff --git a/fs/ocfs2/dlm/dlmconvert.c b/fs/ocfs2/dlm/dlmconvert.c
+index e36d63ff1783..f90931335c6b 100644
+--- a/fs/ocfs2/dlm/dlmconvert.c
++++ b/fs/ocfs2/dlm/dlmconvert.c
+@@ -262,6 +262,7 @@ enum dlm_status dlmconvert_remote(struct dlm_ctxt *dlm,
+ 				  struct dlm_lock *lock, int flags, int type)
+ {
+ 	enum dlm_status status;
++	u8 old_owner = res->owner;
+ 
+ 	mlog(0, "type=%d, convert_type=%d, busy=%d\n", lock->ml.type,
+ 	     lock->ml.convert_type, res->state & DLM_LOCK_RES_IN_PROGRESS);
+@@ -287,6 +288,19 @@ enum dlm_status dlmconvert_remote(struct dlm_ctxt *dlm,
+ 		status = DLM_DENIED;
+ 		goto bail;
+ 	}
++
++	if (lock->ml.type == type && lock->ml.convert_type == LKM_IVMODE) {
++		mlog(0, "last convert request returned DLM_RECOVERING, but "
++		     "owner has already queued and sent ast to me. res %.*s, "
++		     "(cookie=%u:%llu, type=%d, conv=%d)\n",
++		     res->lockname.len, res->lockname.name,
++		     dlm_get_lock_cookie_node(be64_to_cpu(lock->ml.cookie)),
++		     dlm_get_lock_cookie_seq(be64_to_cpu(lock->ml.cookie)),
++		     lock->ml.type, lock->ml.convert_type);
++		status = DLM_NORMAL;
++		goto bail;
++	}
++
+ 	res->state |= DLM_LOCK_RES_IN_PROGRESS;
+ 	/* move lock to local convert queue */
+ 	/* do not alter lock refcount.  switching lists. */
+@@ -316,11 +330,19 @@ enum dlm_status dlmconvert_remote(struct dlm_ctxt *dlm,
+ 	spin_lock(&res->spinlock);
+ 	res->state &= ~DLM_LOCK_RES_IN_PROGRESS;
+ 	lock->convert_pending = 0;
+-	/* if it failed, move it back to granted queue */
++	/* if it failed, move it back to granted queue.
++	 * if master returns DLM_NORMAL and then down before sending ast,
++	 * it may have already been moved to granted queue, reset to
++	 * DLM_RECOVERING and retry convert */
+ 	if (status != DLM_NORMAL) {
+ 		if (status != DLM_NOTQUEUED)
+ 			dlm_error(status);
+ 		dlm_revert_pending_convert(res, lock);
++	} else if ((res->state & DLM_LOCK_RES_RECOVERING) ||
++			(old_owner != res->owner)) {
++		mlog(0, "res %.*s is in recovering or has been recovered.\n",
++				res->lockname.len, res->lockname.name);
++		status = DLM_RECOVERING;
+ 	}
+ bail:
+ 	spin_unlock(&res->spinlock);
+diff --git a/fs/ocfs2/dlm/dlmrecovery.c b/fs/ocfs2/dlm/dlmrecovery.c
+index 42f0cae93a0a..4a338803e7e9 100644
+--- a/fs/ocfs2/dlm/dlmrecovery.c
++++ b/fs/ocfs2/dlm/dlmrecovery.c
+@@ -2064,7 +2064,6 @@ void dlm_move_lockres_to_recovery_list(struct dlm_ctxt *dlm,
+ 			dlm_lock_get(lock);
+ 			if (lock->convert_pending) {
+ 				/* move converting lock back to granted */
+-				BUG_ON(i != DLM_CONVERTING_LIST);
+ 				mlog(0, "node died with convert pending "
+ 				     "on %.*s. move back to granted list.\n",
+ 				     res->lockname.len, res->lockname.name);
+diff --git a/fs/open.c b/fs/open.c
+index b6f1e96a7c0b..6a24f988d253 100644
+--- a/fs/open.c
++++ b/fs/open.c
+@@ -995,14 +995,12 @@ struct file *filp_open(const char *filename, int flags, umode_t mode)
+ EXPORT_SYMBOL(filp_open);
+ 
+ struct file *file_open_root(struct dentry *dentry, struct vfsmount *mnt,
+-			    const char *filename, int flags)
++			    const char *filename, int flags, umode_t mode)
+ {
+ 	struct open_flags op;
+-	int err = build_open_flags(flags, 0, &op);
++	int err = build_open_flags(flags, mode, &op);
+ 	if (err)
+ 		return ERR_PTR(err);
+-	if (flags & O_CREAT)
+-		return ERR_PTR(-EINVAL);
+ 	return do_file_open_root(dentry, mnt, filename, &op);
+ }
+ EXPORT_SYMBOL(file_open_root);
+diff --git a/fs/proc_namespace.c b/fs/proc_namespace.c
+index 8ebd9a334085..87645955990d 100644
+--- a/fs/proc_namespace.c
++++ b/fs/proc_namespace.c
+@@ -197,6 +197,8 @@ static int show_vfsstat(struct seq_file *m, struct vfsmount *mnt)
+ 	if (sb->s_op->show_devname) {
+ 		seq_puts(m, "device ");
+ 		err = sb->s_op->show_devname(m, mnt_path.dentry);
++		if (err)
++			goto out;
+ 	} else {
+ 		if (r->mnt_devname) {
+ 			seq_puts(m, "device ");
+diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
+index ef0d64b2a6d9..353ff31dcee1 100644
+--- a/fs/quota/dquot.c
++++ b/fs/quota/dquot.c
+@@ -1398,7 +1398,7 @@ static int dquot_active(const struct inode *inode)
+ static int __dquot_initialize(struct inode *inode, int type)
+ {
+ 	int cnt, init_needed = 0;
+-	struct dquot **dquots, *got[MAXQUOTAS];
++	struct dquot **dquots, *got[MAXQUOTAS] = {};
+ 	struct super_block *sb = inode->i_sb;
+ 	qsize_t rsv;
+ 	int ret = 0;
+@@ -1415,7 +1415,6 @@ static int __dquot_initialize(struct inode *inode, int type)
+ 		int rc;
+ 		struct dquot *dquot;
+ 
+-		got[cnt] = NULL;
+ 		if (type != -1 && cnt != type)
+ 			continue;
+ 		/*
+diff --git a/fs/splice.c b/fs/splice.c
+index 4cf700d50b40..0f77e9682857 100644
+--- a/fs/splice.c
++++ b/fs/splice.c
+@@ -185,6 +185,9 @@ ssize_t splice_to_pipe(struct pipe_inode_info *pipe,
+ 	unsigned int spd_pages = spd->nr_pages;
+ 	int ret, do_wakeup, page_nr;
+ 
++	if (!spd_pages)
++		return 0;
++
+ 	ret = 0;
+ 	do_wakeup = 0;
+ 	page_nr = 0;
+diff --git a/fs/xfs/xfs_attr_list.c b/fs/xfs/xfs_attr_list.c
+index 0ef7c2ed3f8a..4fa14820e2e2 100644
+--- a/fs/xfs/xfs_attr_list.c
++++ b/fs/xfs/xfs_attr_list.c
+@@ -202,8 +202,10 @@ xfs_attr_shortform_list(xfs_attr_list_context_t *context)
+ 					sbp->namelen,
+ 					sbp->valuelen,
+ 					&sbp->name[sbp->namelen]);
+-		if (error)
++		if (error) {
++			kmem_free(sbuf);
+ 			return error;
++		}
+ 		if (context->seen_enough)
+ 			break;
+ 		cursor->offset++;
+@@ -454,14 +456,13 @@ xfs_attr3_leaf_list_int(
+ 				args.rmtblkcnt = xfs_attr3_rmt_blocks(
+ 							args.dp->i_mount, valuelen);
+ 				retval = xfs_attr_rmtval_get(&args);
+-				if (retval)
+-					return retval;
+-				retval = context->put_listent(context,
+-						entry->flags,
+-						name_rmt->name,
+-						(int)name_rmt->namelen,
+-						valuelen,
+-						args.value);
++				if (!retval)
++					retval = context->put_listent(context,
++							entry->flags,
++							name_rmt->name,
++							(int)name_rmt->namelen,
++							valuelen,
++							args.value);
+ 				kmem_free(args.value);
+ 			} else {
+ 				retval = context->put_listent(context,
+diff --git a/include/asm-generic/bitops/lock.h b/include/asm-generic/bitops/lock.h
+index c30266e94806..8ef0ccbf8167 100644
+--- a/include/asm-generic/bitops/lock.h
++++ b/include/asm-generic/bitops/lock.h
+@@ -29,16 +29,16 @@ do {					\
+  * @nr: the bit to set
+  * @addr: the address to start counting from
+  *
+- * This operation is like clear_bit_unlock, however it is not atomic.
+- * It does provide release barrier semantics so it can be used to unlock
+- * a bit lock, however it would only be used if no other CPU can modify
+- * any bits in the memory until the lock is released (a good example is
+- * if the bit lock itself protects access to the other bits in the word).
++ * A weaker form of clear_bit_unlock() as used by __bit_lock_unlock(). If all
++ * the bits in the word are protected by this lock some archs can use weaker
++ * ops to safely unlock.
++ *
++ * See for example x86's implementation.
+  */
+ #define __clear_bit_unlock(nr, addr)	\
+ do {					\
+-	smp_mb();			\
+-	__clear_bit(nr, addr);		\
++	smp_mb__before_atomic();	\
++	clear_bit(nr, addr);		\
+ } while (0)
+ 
+ #endif /* _ASM_GENERIC_BITOPS_LOCK_H_ */
+diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
+index 8e30faeab183..a7c7f74808a4 100644
+--- a/include/linux/cgroup-defs.h
++++ b/include/linux/cgroup-defs.h
+@@ -216,6 +216,9 @@ struct css_set {
+ 	/* all css_task_iters currently walking this cset */
+ 	struct list_head task_iters;
+ 
++	/* dead and being drained, ignore for migration */
++	bool dead;
++
+ 	/* For RCU-protected deletion */
+ 	struct rcu_head rcu_head;
+ };
+diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
+index ec1c61c87d89..899ab9f8549e 100644
+--- a/include/linux/device-mapper.h
++++ b/include/linux/device-mapper.h
+@@ -124,6 +124,8 @@ struct dm_dev {
+ 	char name[16];
+ };
+ 
++dev_t dm_get_dev_t(const char *path);
++
+ /*
+  * Constructors should call these functions to ensure destination devices
+  * are opened/closed correctly.
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 3aa514254161..22c5a0cf16e3 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -2217,7 +2217,7 @@ extern long do_sys_open(int dfd, const char __user *filename, int flags,
+ extern struct file *file_open_name(struct filename *, int, umode_t);
+ extern struct file *filp_open(const char *, int, umode_t);
+ extern struct file *file_open_root(struct dentry *, struct vfsmount *,
+-				   const char *, int);
++				   const char *, int, umode_t);
+ extern struct file * dentry_open(const struct path *, int, const struct cred *);
+ extern int filp_close(struct file *, fl_owner_t id);
+ 
+diff --git a/include/linux/kernel.h b/include/linux/kernel.h
+index 350dfb08aee3..924853d33a13 100644
+--- a/include/linux/kernel.h
++++ b/include/linux/kernel.h
+@@ -607,7 +607,7 @@ do {							\
+ 
+ #define do_trace_printk(fmt, args...)					\
+ do {									\
+-	static const char *trace_printk_fmt				\
++	static const char *trace_printk_fmt __used			\
+ 		__attribute__((section("__trace_printk_fmt"))) =	\
+ 		__builtin_constant_p(fmt) ? fmt : NULL;			\
+ 									\
+@@ -651,7 +651,7 @@ int __trace_printk(unsigned long ip, const char *fmt, ...);
+  */
+ 
+ #define trace_puts(str) ({						\
+-	static const char *trace_printk_fmt				\
++	static const char *trace_printk_fmt __used			\
+ 		__attribute__((section("__trace_printk_fmt"))) =	\
+ 		__builtin_constant_p(str) ? str : NULL;			\
+ 									\
+@@ -673,7 +673,7 @@ extern void trace_dump_stack(int skip);
+ #define ftrace_vprintk(fmt, vargs)					\
+ do {									\
+ 	if (__builtin_constant_p(fmt)) {				\
+-		static const char *trace_printk_fmt			\
++		static const char *trace_printk_fmt __used		\
+ 		  __attribute__((section("__trace_printk_fmt"))) =	\
+ 			__builtin_constant_p(fmt) ? fmt : NULL;		\
+ 									\
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index 6ae25aae88fd..4e554bfff129 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -359,6 +359,7 @@ struct pci_dev {
+ 	unsigned int	io_window_1k:1;	/* Intel P2P bridge 1K I/O windows */
+ 	unsigned int	irq_managed:1;
+ 	unsigned int	has_secondary_link:1;
++	unsigned int	non_compliant_bars:1;	/* broken BARs; ignore them */
+ 	pci_dev_flags_t dev_flags;
+ 	atomic_t	enable_cnt;	/* pci_enable_device has been called */
+ 
+diff --git a/include/linux/platform_data/asoc-s3c.h b/include/linux/platform_data/asoc-s3c.h
+index 5e0bc779e6c5..33f88b4479e4 100644
+--- a/include/linux/platform_data/asoc-s3c.h
++++ b/include/linux/platform_data/asoc-s3c.h
+@@ -39,6 +39,10 @@ struct samsung_i2s {
+  */
+ struct s3c_audio_pdata {
+ 	int (*cfg_gpio)(struct platform_device *);
++	void *dma_playback;
++	void *dma_capture;
++	void *dma_play_sec;
++	void *dma_capture_mic;
+ 	union {
+ 		struct samsung_i2s i2s;
+ 	} type;
+diff --git a/include/linux/thermal.h b/include/linux/thermal.h
+index e13a1ace50e9..4a849f19e6c9 100644
+--- a/include/linux/thermal.h
++++ b/include/linux/thermal.h
+@@ -156,6 +156,7 @@ struct thermal_attr {
+  * @trip_hyst_attrs:	attributes for trip points for sysfs: trip hysteresis
+  * @devdata:	private pointer for device private data
+  * @trips:	number of trip points the thermal zone supports
+ * @trips_disabled:	bitmap for disabled trips
+  * @passive_delay:	number of milliseconds to wait between polls when
+  *			performing passive cooling.
+  * @polling_delay:	number of milliseconds to wait between polls when
+@@ -191,6 +192,7 @@ struct thermal_zone_device {
+ 	struct thermal_attr *trip_hyst_attrs;
+ 	void *devdata;
+ 	int trips;
++	unsigned long trips_disabled;	/* bitmap for disabled trips */
+ 	int passive_delay;
+ 	int polling_delay;
+ 	int temperature;
+diff --git a/include/linux/tty.h b/include/linux/tty.h
+index 6b6e811f4575..3bf03b6b52e9 100644
+--- a/include/linux/tty.h
++++ b/include/linux/tty.h
+@@ -594,7 +594,7 @@ static inline int tty_ldisc_receive_buf(struct tty_ldisc *ld, unsigned char *p,
+ 		count = ld->ops->receive_buf2(ld->tty, p, f, count);
+ 	else {
+ 		count = min_t(int, count, ld->tty->receive_room);
+-		if (count)
++		if (count && ld->ops->receive_buf)
+ 			ld->ops->receive_buf(ld->tty, p, f, count);
+ 	}
+ 	return count;
+diff --git a/kernel/cgroup.c b/kernel/cgroup.c
+index fb1ecfd2decd..dc94f8beb097 100644
+--- a/kernel/cgroup.c
++++ b/kernel/cgroup.c
+@@ -2498,6 +2498,14 @@ static void cgroup_migrate_add_src(struct css_set *src_cset,
+ 	lockdep_assert_held(&cgroup_mutex);
+ 	lockdep_assert_held(&css_set_lock);
+ 
++	/*
++	 * If ->dead, @src_set is associated with one or more dead cgroups
++	 * and doesn't contain any migratable tasks.  Ignore it early so
++	 * that the rest of migration path doesn't get confused by it.
++	 */
++	if (src_cset->dead)
++		return;
++
+ 	src_cgrp = cset_cgroup_from_root(src_cset, dst_cgrp->root);
+ 
+ 	if (!list_empty(&src_cset->mg_preload_node))
+@@ -5131,6 +5139,7 @@ static int cgroup_destroy_locked(struct cgroup *cgrp)
+ 	__releases(&cgroup_mutex) __acquires(&cgroup_mutex)
+ {
+ 	struct cgroup_subsys_state *css;
++	struct cgrp_cset_link *link;
+ 	int ssid;
+ 
+ 	lockdep_assert_held(&cgroup_mutex);
+@@ -5151,11 +5160,18 @@ static int cgroup_destroy_locked(struct cgroup *cgrp)
+ 		return -EBUSY;
+ 
+ 	/*
+-	 * Mark @cgrp dead.  This prevents further task migration and child
+-	 * creation by disabling cgroup_lock_live_group().
++	 * Mark @cgrp and the associated csets dead.  The former prevents
++	 * further task migration and child creation by disabling
++	 * cgroup_lock_live_group().  The latter makes the csets ignored by
++	 * the migration path.
+ 	 */
+ 	cgrp->self.flags &= ~CSS_ONLINE;
+ 
++	spin_lock_bh(&css_set_lock);
++	list_for_each_entry(link, &cgrp->cset_links, cset_link)
++		link->cset->dead = true;
++	spin_unlock_bh(&css_set_lock);
++
+ 	/* initiate massacre of all css's */
+ 	for_each_css(css, ssid, cgrp)
+ 		kill_css(css);
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 1087bbeb152b..faf2067fc8e2 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -7979,6 +7979,9 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
+ 		}
+ 	}
+ 
++	/* symmetric to unaccount_event() in _free_event() */
++	account_event(event);
++
+ 	return event;
+ 
+ err_per_task:
+@@ -8342,8 +8345,6 @@ SYSCALL_DEFINE5(perf_event_open,
+ 		}
+ 	}
+ 
+-	account_event(event);
+-
+ 	/*
+ 	 * Special case software events and allow them to be part of
+ 	 * any hardware group.
+@@ -8626,8 +8627,6 @@ perf_event_create_kernel_counter(struct perf_event_attr *attr, int cpu,
+ 	/* Mark owner so we could distinguish it from user events. */
+ 	event->owner = EVENT_OWNER_KERNEL;
+ 
+-	account_event(event);
+-
+ 	ctx = find_get_context(event->pmu, task, event);
+ 	if (IS_ERR(ctx)) {
+ 		err = PTR_ERR(ctx);
+diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
+index b7342a24f559..b7dd5718836e 100644
+--- a/kernel/power/hibernate.c
++++ b/kernel/power/hibernate.c
+@@ -339,6 +339,7 @@ int hibernation_snapshot(int platform_mode)
+ 	pm_message_t msg;
+ 	int error;
+ 
++	pm_suspend_clear_flags();
+ 	error = platform_begin(platform_mode);
+ 	if (error)
+ 		goto Close;
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index eb70592f03f6..70e5e09341f1 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -5525,6 +5525,7 @@ migration_call(struct notifier_block *nfb, unsigned long action, void *hcpu)
+ 
+ 	case CPU_UP_PREPARE:
+ 		rq->calc_load_update = calc_load_update;
++		account_reset_rq(rq);
+ 		break;
+ 
+ 	case CPU_ONLINE:
+diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
+index 05de80b48586..f74ea89e77a8 100644
+--- a/kernel/sched/cputime.c
++++ b/kernel/sched/cputime.c
+@@ -259,21 +259,21 @@ static __always_inline bool steal_account_process_tick(void)
+ #ifdef CONFIG_PARAVIRT
+ 	if (static_key_false(&paravirt_steal_enabled)) {
+ 		u64 steal;
+-		cputime_t steal_ct;
++		unsigned long steal_jiffies;
+ 
+ 		steal = paravirt_steal_clock(smp_processor_id());
+ 		steal -= this_rq()->prev_steal_time;
+ 
+ 		/*
+-		 * cputime_t may be less precise than nsecs (eg: if it's
+-		 * based on jiffies). Lets cast the result to cputime
++		 * steal is in nsecs but our caller is expecting steal
++		 * time in jiffies. Lets cast the result to jiffies
+ 		 * granularity and account the rest on the next rounds.
+ 		 */
+-		steal_ct = nsecs_to_cputime(steal);
+-		this_rq()->prev_steal_time += cputime_to_nsecs(steal_ct);
++		steal_jiffies = nsecs_to_jiffies(steal);
++		this_rq()->prev_steal_time += jiffies_to_nsecs(steal_jiffies);
+ 
+-		account_steal_time(steal_ct);
+-		return steal_ct;
++		account_steal_time(jiffies_to_cputime(steal_jiffies));
++		return steal_jiffies;
+ 	}
+ #endif
+ 	return false;
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index b242775bf670..0517abd7dd73 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -1770,3 +1770,16 @@ static inline u64 irq_time_read(int cpu)
+ }
+ #endif /* CONFIG_64BIT */
+ #endif /* CONFIG_IRQ_TIME_ACCOUNTING */
++
++static inline void account_reset_rq(struct rq *rq)
++{
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++	rq->prev_irq_time = 0;
++#endif
++#ifdef CONFIG_PARAVIRT
++	rq->prev_steal_time = 0;
++#endif
++#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
++	rq->prev_steal_time_rq = 0;
++#endif
++}
+diff --git a/kernel/sysctl_binary.c b/kernel/sysctl_binary.c
+index 7e7746a42a62..10a1d7dc9313 100644
+--- a/kernel/sysctl_binary.c
++++ b/kernel/sysctl_binary.c
+@@ -1321,7 +1321,7 @@ static ssize_t binary_sysctl(const int *name, int nlen,
+ 	}
+ 
+ 	mnt = task_active_pid_ns(current)->proc_mnt;
+-	file = file_open_root(mnt->mnt_root, mnt, pathname, flags);
++	file = file_open_root(mnt->mnt_root, mnt, pathname, flags, 0);
+ 	result = PTR_ERR(file);
+ 	if (IS_ERR(file))
+ 		goto out_putname;
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index d9293402ee68..8305cbb2d5a2 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -4949,7 +4949,10 @@ static ssize_t tracing_splice_read_pipe(struct file *filp,
+ 
+ 	spd.nr_pages = i;
+ 
+-	ret = splice_to_pipe(pipe, &spd);
++	if (i)
++		ret = splice_to_pipe(pipe, &spd);
++	else
++		ret = 0;
+ out:
+ 	splice_shrink_spd(&spd);
+ 	return ret;
+diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
+index e4e56589ec1d..be3222b7d72e 100644
+--- a/kernel/trace/trace_irqsoff.c
++++ b/kernel/trace/trace_irqsoff.c
+@@ -109,8 +109,12 @@ static int func_prolog_dec(struct trace_array *tr,
+ 		return 0;
+ 
+ 	local_save_flags(*flags);
+-	/* slight chance to get a false positive on tracing_cpu */
+-	if (!irqs_disabled_flags(*flags))
++	/*
++	 * Slight chance to get a false positive on tracing_cpu,
++	 * although I'm starting to think there isn't a chance.
++	 * Leave this for now just to be paranoid.
++	 */
++	if (!irqs_disabled_flags(*flags) && !preempt_count())
+ 		return 0;
+ 
+ 	*data = per_cpu_ptr(tr->trace_buffer.data, cpu);
+diff --git a/kernel/trace/trace_printk.c b/kernel/trace/trace_printk.c
+index 060df67dbdd1..f96f0383f6c6 100644
+--- a/kernel/trace/trace_printk.c
++++ b/kernel/trace/trace_printk.c
+@@ -296,6 +296,9 @@ static int t_show(struct seq_file *m, void *v)
+ 	const char *str = *fmt;
+ 	int i;
+ 
++	if (!*fmt)
++		return 0;
++
+ 	seq_printf(m, "0x%lx : \"", *(unsigned long *)fmt);
+ 
+ 	/*
+diff --git a/kernel/watchdog.c b/kernel/watchdog.c
+index 18f34cf75f74..198137b1cadc 100644
+--- a/kernel/watchdog.c
++++ b/kernel/watchdog.c
+@@ -907,6 +907,9 @@ static int proc_watchdog_common(int which, struct ctl_table *table, int write,
+ 		 * both lockup detectors are disabled if proc_watchdog_update()
+ 		 * returns an error.
+ 		 */
++		if (old == new)
++			goto out;
++
+ 		err = proc_watchdog_update();
+ 	}
+ out:
+@@ -951,7 +954,7 @@ int proc_soft_watchdog(struct ctl_table *table, int write,
+ int proc_watchdog_thresh(struct ctl_table *table, int write,
+ 			 void __user *buffer, size_t *lenp, loff_t *ppos)
+ {
+-	int err, old;
++	int err, old, new;
+ 
+ 	get_online_cpus();
+ 	mutex_lock(&watchdog_proc_mutex);
+@@ -971,6 +974,10 @@ int proc_watchdog_thresh(struct ctl_table *table, int write,
+ 	/*
+ 	 * Update the sample period. Restore on failure.
+ 	 */
++	new = ACCESS_ONCE(watchdog_thresh);
++	if (old == new)
++		goto out;
++
+ 	set_sample_period();
+ 	err = proc_watchdog_update();
+ 	if (err) {
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index ee6acd279953..fc0bcc41d57f 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -1332,7 +1332,7 @@ static unsigned long mem_cgroup_get_limit(struct mem_cgroup *memcg)
+ 	return limit;
+ }
+ 
+-static void mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
++static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
+ 				     int order)
+ {
+ 	struct oom_control oc = {
+@@ -1410,6 +1410,7 @@ static void mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
+ 	}
+ unlock:
+ 	mutex_unlock(&oom_lock);
++	return chosen;
+ }
+ 
+ #if MAX_NUMNODES > 1
+@@ -5121,6 +5122,7 @@ static ssize_t memory_high_write(struct kernfs_open_file *of,
+ 				 char *buf, size_t nbytes, loff_t off)
+ {
+ 	struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
++	unsigned long nr_pages;
+ 	unsigned long high;
+ 	int err;
+ 
+@@ -5131,6 +5133,11 @@ static ssize_t memory_high_write(struct kernfs_open_file *of,
+ 
+ 	memcg->high = high;
+ 
++	nr_pages = page_counter_read(&memcg->memory);
++	if (nr_pages > high)
++		try_to_free_mem_cgroup_pages(memcg, nr_pages - high,
++					     GFP_KERNEL, true);
++
+ 	memcg_wb_domain_size_changed(memcg);
+ 	return nbytes;
+ }
+@@ -5152,6 +5159,8 @@ static ssize_t memory_max_write(struct kernfs_open_file *of,
+ 				char *buf, size_t nbytes, loff_t off)
+ {
+ 	struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
++	unsigned int nr_reclaims = MEM_CGROUP_RECLAIM_RETRIES;
++	bool drained = false;
+ 	unsigned long max;
+ 	int err;
+ 
+@@ -5160,9 +5169,36 @@ static ssize_t memory_max_write(struct kernfs_open_file *of,
+ 	if (err)
+ 		return err;
+ 
+-	err = mem_cgroup_resize_limit(memcg, max);
+-	if (err)
+-		return err;
++	xchg(&memcg->memory.limit, max);
++
++	for (;;) {
++		unsigned long nr_pages = page_counter_read(&memcg->memory);
++
++		if (nr_pages <= max)
++			break;
++
++		if (signal_pending(current)) {
++			err = -EINTR;
++			break;
++		}
++
++		if (!drained) {
++			drain_all_stock(memcg);
++			drained = true;
++			continue;
++		}
++
++		if (nr_reclaims) {
++			if (!try_to_free_mem_cgroup_pages(memcg, nr_pages - max,
++							  GFP_KERNEL, true))
++				nr_reclaims--;
++			continue;
++		}
++
++		mem_cgroup_events(memcg, MEMCG_OOM, 1);
++		if (!mem_cgroup_out_of_memory(memcg, GFP_KERNEL, 0))
++			break;
++	}
+ 
+ 	memcg_wb_domain_size_changed(memcg);
+ 	return nbytes;
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 9d666df5ef95..c69531afbd8f 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -662,34 +662,28 @@ static inline void __free_one_page(struct page *page,
+ 	unsigned long combined_idx;
+ 	unsigned long uninitialized_var(buddy_idx);
+ 	struct page *buddy;
+-	unsigned int max_order = MAX_ORDER;
++	unsigned int max_order;
++
++	max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1);
+ 
+ 	VM_BUG_ON(!zone_is_initialized(zone));
+ 	VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);
+ 
+ 	VM_BUG_ON(migratetype == -1);
+-	if (is_migrate_isolate(migratetype)) {
+-		/*
+-		 * We restrict max order of merging to prevent merge
+-		 * between freepages on isolate pageblock and normal
+-		 * pageblock. Without this, pageblock isolation
+-		 * could cause incorrect freepage accounting.
+-		 */
+-		max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1);
+-	} else {
++	if (likely(!is_migrate_isolate(migratetype)))
+ 		__mod_zone_freepage_state(zone, 1 << order, migratetype);
+-	}
+ 
+-	page_idx = pfn & ((1 << max_order) - 1);
++	page_idx = pfn & ((1 << MAX_ORDER) - 1);
+ 
+ 	VM_BUG_ON_PAGE(page_idx & ((1 << order) - 1), page);
+ 	VM_BUG_ON_PAGE(bad_range(zone, page), page);
+ 
++continue_merging:
+ 	while (order < max_order - 1) {
+ 		buddy_idx = __find_buddy_index(page_idx, order);
+ 		buddy = page + (buddy_idx - page_idx);
+ 		if (!page_is_buddy(page, buddy, order))
+-			break;
++			goto done_merging;
+ 		/*
+ 		 * Our buddy is free or it is CONFIG_DEBUG_PAGEALLOC guard page,
+ 		 * merge with it and move up one order.
+@@ -706,6 +700,32 @@ static inline void __free_one_page(struct page *page,
+ 		page_idx = combined_idx;
+ 		order++;
+ 	}
++	if (max_order < MAX_ORDER) {
++		/* If we are here, it means order is >= pageblock_order.
++		 * We want to prevent merge between freepages on isolate
++		 * pageblock and normal pageblock. Without this, pageblock
++		 * isolation could cause incorrect freepage or CMA accounting.
++		 *
++		 * We don't want to hit this code for the more frequent
++		 * low-order merging.
++		 */
++		if (unlikely(has_isolate_pageblock(zone))) {
++			int buddy_mt;
++
++			buddy_idx = __find_buddy_index(page_idx, order);
++			buddy = page + (buddy_idx - page_idx);
++			buddy_mt = get_pageblock_migratetype(buddy);
++
++			if (migratetype != buddy_mt
++					&& (is_migrate_isolate(migratetype) ||
++						is_migrate_isolate(buddy_mt)))
++				goto done_merging;
++		}
++		max_order++;
++		goto continue_merging;
++	}
++
++done_merging:
+ 	set_page_order(page, order);
+ 
+ 	/*
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 7f22119276f3..b1b0a1c0bd8d 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -7155,6 +7155,10 @@ static int add_advertising(struct sock *sk, struct hci_dev *hdev,
+ 		return mgmt_cmd_status(sk, hdev->id, MGMT_OP_ADD_ADVERTISING,
+ 				       status);
+ 
++	if (data_len != sizeof(*cp) + cp->adv_data_len + cp->scan_rsp_len)
++		return mgmt_cmd_status(sk, hdev->id, MGMT_OP_ADD_ADVERTISING,
++				       MGMT_STATUS_INVALID_PARAMS);
++
+ 	flags = __le32_to_cpu(cp->flags);
+ 	timeout = __le16_to_cpu(cp->timeout);
+ 	duration = __le16_to_cpu(cp->duration);
+diff --git a/scripts/coccinelle/iterators/use_after_iter.cocci b/scripts/coccinelle/iterators/use_after_iter.cocci
+index f085f5968c52..ce8cc9c006e5 100644
+--- a/scripts/coccinelle/iterators/use_after_iter.cocci
++++ b/scripts/coccinelle/iterators/use_after_iter.cocci
+@@ -123,7 +123,7 @@ list_remove_head(x,c,...)
+ |
+ sizeof(<+...c...+>)
+ |
+-&c->member
++ &c->member
+ |
+ c = E
+ |
+diff --git a/scripts/kconfig/Makefile b/scripts/kconfig/Makefile
+index d79cba4ce3eb..ebced77deb9c 100644
+--- a/scripts/kconfig/Makefile
++++ b/scripts/kconfig/Makefile
+@@ -96,13 +96,15 @@ savedefconfig: $(obj)/conf
+ defconfig: $(obj)/conf
+ ifeq ($(KBUILD_DEFCONFIG),)
+ 	$< $(silent) --defconfig $(Kconfig)
+-else ifneq ($(wildcard $(srctree)/arch/$(SRCARCH)/configs/$(KBUILD_DEFCONFIG)),)
++else
++ifneq ($(wildcard $(srctree)/arch/$(SRCARCH)/configs/$(KBUILD_DEFCONFIG)),)
+ 	@$(kecho) "*** Default configuration is based on '$(KBUILD_DEFCONFIG)'"
+ 	$(Q)$< $(silent) --defconfig=arch/$(SRCARCH)/configs/$(KBUILD_DEFCONFIG) $(Kconfig)
+ else
+ 	@$(kecho) "*** Default configuration is based on target '$(KBUILD_DEFCONFIG)'"
+ 	$(Q)$(MAKE) -f $(srctree)/Makefile $(KBUILD_DEFCONFIG)
+ endif
++endif
+ 
+ %_defconfig: $(obj)/conf
+ 	$(Q)$< $(silent) --defconfig=arch/$(SRCARCH)/configs/$@ $(Kconfig)
+diff --git a/scripts/package/mkspec b/scripts/package/mkspec
+index 71004daefe31..fe44d68e9344 100755
+--- a/scripts/package/mkspec
++++ b/scripts/package/mkspec
+@@ -131,11 +131,11 @@ echo 'rm -rf $RPM_BUILD_ROOT'
+ echo ""
+ echo "%post"
+ echo "if [ -x /sbin/installkernel -a -r /boot/vmlinuz-$KERNELRELEASE -a -r /boot/System.map-$KERNELRELEASE ]; then"
+-echo "cp /boot/vmlinuz-$KERNELRELEASE /boot/vmlinuz-$KERNELRELEASE-rpm"
+-echo "cp /boot/System.map-$KERNELRELEASE /boot/System.map-$KERNELRELEASE-rpm"
++echo "cp /boot/vmlinuz-$KERNELRELEASE /boot/.vmlinuz-$KERNELRELEASE-rpm"
++echo "cp /boot/System.map-$KERNELRELEASE /boot/.System.map-$KERNELRELEASE-rpm"
+ echo "rm -f /boot/vmlinuz-$KERNELRELEASE /boot/System.map-$KERNELRELEASE"
+-echo "/sbin/installkernel $KERNELRELEASE /boot/vmlinuz-$KERNELRELEASE-rpm /boot/System.map-$KERNELRELEASE-rpm"
+-echo "rm -f /boot/vmlinuz-$KERNELRELEASE-rpm /boot/System.map-$KERNELRELEASE-rpm"
++echo "/sbin/installkernel $KERNELRELEASE /boot/.vmlinuz-$KERNELRELEASE-rpm /boot/.System.map-$KERNELRELEASE-rpm"
++echo "rm -f /boot/.vmlinuz-$KERNELRELEASE-rpm /boot/.System.map-$KERNELRELEASE-rpm"
+ echo "fi"
+ echo ""
+ echo "%files"
+diff --git a/sound/core/pcm_lib.c b/sound/core/pcm_lib.c
+index 6b5a811e01a5..3a9b66c6e09c 100644
+--- a/sound/core/pcm_lib.c
++++ b/sound/core/pcm_lib.c
+@@ -322,7 +322,7 @@ static int snd_pcm_update_hw_ptr0(struct snd_pcm_substream *substream,
+ 			char name[16];
+ 			snd_pcm_debug_name(substream, name, sizeof(name));
+ 			pcm_err(substream->pcm,
+-				"BUG: %s, pos = %ld, buffer size = %ld, period size = %ld\n",
++				"invalid position: %s, pos = %ld, buffer size = %ld, period size = %ld\n",
+ 				name, pos, runtime->buffer_size,
+ 				runtime->period_size);
+ 		}
+diff --git a/sound/pci/hda/patch_cirrus.c b/sound/pci/hda/patch_cirrus.c
+index c1c855a6c0af..a47e8ae0eb30 100644
+--- a/sound/pci/hda/patch_cirrus.c
++++ b/sound/pci/hda/patch_cirrus.c
+@@ -174,8 +174,12 @@ static void cs_automute(struct hda_codec *codec)
+ 	snd_hda_gen_update_outputs(codec);
+ 
+ 	if (spec->gpio_eapd_hp || spec->gpio_eapd_speaker) {
+-		spec->gpio_data = spec->gen.hp_jack_present ?
+-			spec->gpio_eapd_hp : spec->gpio_eapd_speaker;
++		if (spec->gen.automute_speaker)
++			spec->gpio_data = spec->gen.hp_jack_present ?
++				spec->gpio_eapd_hp : spec->gpio_eapd_speaker;
++		else
++			spec->gpio_data =
++				spec->gpio_eapd_hp | spec->gpio_eapd_speaker;
+ 		snd_hda_codec_write(codec, 0x01, 0,
+ 				    AC_VERB_SET_GPIO_DATA, spec->gpio_data);
+ 	}
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index ef198903c0c3..600af5878e75 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -204,8 +204,13 @@ static void cx_auto_reboot_notify(struct hda_codec *codec)
+ {
+ 	struct conexant_spec *spec = codec->spec;
+ 
+-	if (codec->core.vendor_id != 0x14f150f2)
++	switch (codec->core.vendor_id) {
++	case 0x14f150f2: /* CX20722 */
++	case 0x14f150f4: /* CX20724 */
++		break;
++	default:
+ 		return;
++	}
+ 
+ 	/* Turn the CX20722 codec into D3 to avoid spurious noises
+ 	   from the internal speaker during (and after) reboot */
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 70c945603379..f7bcd8dbac14 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -2353,6 +2353,10 @@ static void intel_pin_eld_notify(void *audio_ptr, int port)
+ 	struct hda_codec *codec = audio_ptr;
+ 	int pin_nid = port + 0x04;
+ 
++	/* we assume only from port-B to port-D */
++	if (port < 1 || port > 3)
++		return;
++
+ 	/* skip notification during system suspend (but not in runtime PM);
+ 	 * the state will be updated at resume
+ 	 */
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index c2430b36e1ce..6968b796baa3 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -5529,6 +5529,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x2226, "ThinkPad X250", ALC292_FIXUP_TPT440_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x2233, "Thinkpad", ALC293_FIXUP_LENOVO_SPK_NOISE),
+ 	SND_PCI_QUIRK(0x17aa, 0x30bb, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY),
++	SND_PCI_QUIRK(0x17aa, 0x30e2, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY),
+ 	SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+ 	SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x3978, "IdeaPad Y410P", ALC269_FIXUP_NO_SHUTUP),
+diff --git a/sound/pci/intel8x0.c b/sound/pci/intel8x0.c
+index 42bcbac801a3..ccdab29a8b66 100644
+--- a/sound/pci/intel8x0.c
++++ b/sound/pci/intel8x0.c
+@@ -2879,6 +2879,7 @@ static void intel8x0_measure_ac97_clock(struct intel8x0 *chip)
+ 
+ static struct snd_pci_quirk intel8x0_clock_list[] = {
+ 	SND_PCI_QUIRK(0x0e11, 0x008a, "AD1885", 41000),
++	SND_PCI_QUIRK(0x1014, 0x0581, "AD1981B", 48000),
+ 	SND_PCI_QUIRK(0x1028, 0x00be, "AD1885", 44100),
+ 	SND_PCI_QUIRK(0x1028, 0x0177, "AD1980", 48000),
+ 	SND_PCI_QUIRK(0x1028, 0x01ad, "AD1981B", 48000),
+diff --git a/sound/soc/samsung/ac97.c b/sound/soc/samsung/ac97.c
+index e4145509d63c..9c5219392460 100644
+--- a/sound/soc/samsung/ac97.c
++++ b/sound/soc/samsung/ac97.c
+@@ -324,7 +324,7 @@ static const struct snd_soc_component_driver s3c_ac97_component = {
+ 
+ static int s3c_ac97_probe(struct platform_device *pdev)
+ {
+-	struct resource *mem_res, *dmatx_res, *dmarx_res, *dmamic_res, *irq_res;
++	struct resource *mem_res, *irq_res;
+ 	struct s3c_audio_pdata *ac97_pdata;
+ 	int ret;
+ 
+@@ -335,24 +335,6 @@ static int s3c_ac97_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	/* Check for availability of necessary resource */
+-	dmatx_res = platform_get_resource(pdev, IORESOURCE_DMA, 0);
+-	if (!dmatx_res) {
+-		dev_err(&pdev->dev, "Unable to get AC97-TX dma resource\n");
+-		return -ENXIO;
+-	}
+-
+-	dmarx_res = platform_get_resource(pdev, IORESOURCE_DMA, 1);
+-	if (!dmarx_res) {
+-		dev_err(&pdev->dev, "Unable to get AC97-RX dma resource\n");
+-		return -ENXIO;
+-	}
+-
+-	dmamic_res = platform_get_resource(pdev, IORESOURCE_DMA, 2);
+-	if (!dmamic_res) {
+-		dev_err(&pdev->dev, "Unable to get AC97-MIC dma resource\n");
+-		return -ENXIO;
+-	}
+-
+ 	irq_res = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
+ 	if (!irq_res) {
+ 		dev_err(&pdev->dev, "AC97 IRQ not provided!\n");
+@@ -364,11 +346,11 @@ static int s3c_ac97_probe(struct platform_device *pdev)
+ 	if (IS_ERR(s3c_ac97.regs))
+ 		return PTR_ERR(s3c_ac97.regs);
+ 
+-	s3c_ac97_pcm_out.channel = dmatx_res->start;
++	s3c_ac97_pcm_out.slave = ac97_pdata->dma_playback;
+ 	s3c_ac97_pcm_out.dma_addr = mem_res->start + S3C_AC97_PCM_DATA;
+-	s3c_ac97_pcm_in.channel = dmarx_res->start;
++	s3c_ac97_pcm_in.slave = ac97_pdata->dma_capture;
+ 	s3c_ac97_pcm_in.dma_addr = mem_res->start + S3C_AC97_PCM_DATA;
+-	s3c_ac97_mic_in.channel = dmamic_res->start;
++	s3c_ac97_mic_in.slave = ac97_pdata->dma_capture_mic;
+ 	s3c_ac97_mic_in.dma_addr = mem_res->start + S3C_AC97_MIC_DATA;
+ 
+ 	init_completion(&s3c_ac97.done);
+diff --git a/sound/soc/samsung/dma.h b/sound/soc/samsung/dma.h
+index 0e85dcfec023..085ef30f5ca2 100644
+--- a/sound/soc/samsung/dma.h
++++ b/sound/soc/samsung/dma.h
+@@ -15,7 +15,7 @@
+ #include <sound/dmaengine_pcm.h>
+ 
+ struct s3c_dma_params {
+-	int channel;				/* Channel ID */
++	void *slave;				/* Channel ID */
+ 	dma_addr_t dma_addr;
+ 	int dma_size;			/* Size of the DMA transfer */
+ 	char *ch_name;
+diff --git a/sound/soc/samsung/dmaengine.c b/sound/soc/samsung/dmaengine.c
+index 506f5bf6d082..727008d57d14 100644
+--- a/sound/soc/samsung/dmaengine.c
++++ b/sound/soc/samsung/dmaengine.c
+@@ -50,14 +50,14 @@ void samsung_asoc_init_dma_data(struct snd_soc_dai *dai,
+ 
+ 	if (playback) {
+ 		playback_data = &playback->dma_data;
+-		playback_data->filter_data = (void *)playback->channel;
++		playback_data->filter_data = playback->slave;
+ 		playback_data->chan_name = playback->ch_name;
+ 		playback_data->addr = playback->dma_addr;
+ 		playback_data->addr_width = playback->dma_size;
+ 	}
+ 	if (capture) {
+ 		capture_data = &capture->dma_data;
+-		capture_data->filter_data = (void *)capture->channel;
++		capture_data->filter_data = capture->slave;
+ 		capture_data->chan_name = capture->ch_name;
+ 		capture_data->addr = capture->dma_addr;
+ 		capture_data->addr_width = capture->dma_size;
+diff --git a/sound/soc/samsung/i2s.c b/sound/soc/samsung/i2s.c
+index 7dbf899b2af2..e163b0148c4b 100644
+--- a/sound/soc/samsung/i2s.c
++++ b/sound/soc/samsung/i2s.c
+@@ -1260,27 +1260,14 @@ static int samsung_i2s_probe(struct platform_device *pdev)
+ 	pri_dai->lock = &pri_dai->spinlock;
+ 
+ 	if (!np) {
+-		res = platform_get_resource(pdev, IORESOURCE_DMA, 0);
+-		if (!res) {
+-			dev_err(&pdev->dev,
+-				"Unable to get I2S-TX dma resource\n");
+-			return -ENXIO;
+-		}
+-		pri_dai->dma_playback.channel = res->start;
+-
+-		res = platform_get_resource(pdev, IORESOURCE_DMA, 1);
+-		if (!res) {
+-			dev_err(&pdev->dev,
+-				"Unable to get I2S-RX dma resource\n");
+-			return -ENXIO;
+-		}
+-		pri_dai->dma_capture.channel = res->start;
+-
+ 		if (i2s_pdata == NULL) {
+ 			dev_err(&pdev->dev, "Can't work without s3c_audio_pdata\n");
+ 			return -EINVAL;
+ 		}
+ 
++		pri_dai->dma_playback.slave = i2s_pdata->dma_playback;
++		pri_dai->dma_capture.slave = i2s_pdata->dma_capture;
++
+ 		if (&i2s_pdata->type)
+ 			i2s_cfg = &i2s_pdata->type.i2s;
+ 
+@@ -1341,11 +1328,8 @@ static int samsung_i2s_probe(struct platform_device *pdev)
+ 		sec_dai->dma_playback.dma_addr = regs_base + I2STXDS;
+ 		sec_dai->dma_playback.ch_name = "tx-sec";
+ 
+-		if (!np) {
+-			res = platform_get_resource(pdev, IORESOURCE_DMA, 2);
+-			if (res)
+-				sec_dai->dma_playback.channel = res->start;
+-		}
++		if (!np)
++			sec_dai->dma_playback.slave = i2s_pdata->dma_play_sec;
+ 
+ 		sec_dai->dma_playback.dma_size = 4;
+ 		sec_dai->addr = pri_dai->addr;
+diff --git a/sound/soc/samsung/pcm.c b/sound/soc/samsung/pcm.c
+index b320a9d3fbf8..c77f324e0bb8 100644
+--- a/sound/soc/samsung/pcm.c
++++ b/sound/soc/samsung/pcm.c
+@@ -486,7 +486,7 @@ static const struct snd_soc_component_driver s3c_pcm_component = {
+ static int s3c_pcm_dev_probe(struct platform_device *pdev)
+ {
+ 	struct s3c_pcm_info *pcm;
+-	struct resource *mem_res, *dmatx_res, *dmarx_res;
++	struct resource *mem_res;
+ 	struct s3c_audio_pdata *pcm_pdata;
+ 	int ret;
+ 
+@@ -499,18 +499,6 @@ static int s3c_pcm_dev_probe(struct platform_device *pdev)
+ 	pcm_pdata = pdev->dev.platform_data;
+ 
+ 	/* Check for availability of necessary resource */
+-	dmatx_res = platform_get_resource(pdev, IORESOURCE_DMA, 0);
+-	if (!dmatx_res) {
+-		dev_err(&pdev->dev, "Unable to get PCM-TX dma resource\n");
+-		return -ENXIO;
+-	}
+-
+-	dmarx_res = platform_get_resource(pdev, IORESOURCE_DMA, 1);
+-	if (!dmarx_res) {
+-		dev_err(&pdev->dev, "Unable to get PCM-RX dma resource\n");
+-		return -ENXIO;
+-	}
+-
+ 	mem_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	if (!mem_res) {
+ 		dev_err(&pdev->dev, "Unable to get register resource\n");
+@@ -568,8 +556,10 @@ static int s3c_pcm_dev_probe(struct platform_device *pdev)
+ 	s3c_pcm_stereo_out[pdev->id].dma_addr = mem_res->start
+ 							+ S3C_PCM_TXFIFO;
+ 
+-	s3c_pcm_stereo_in[pdev->id].channel = dmarx_res->start;
+-	s3c_pcm_stereo_out[pdev->id].channel = dmatx_res->start;
++	if (pcm_pdata) {
++		s3c_pcm_stereo_in[pdev->id].slave = pcm_pdata->dma_capture;
++		s3c_pcm_stereo_out[pdev->id].slave = pcm_pdata->dma_playback;
++	}
+ 
+ 	pcm->dma_capture = &s3c_pcm_stereo_in[pdev->id];
+ 	pcm->dma_playback = &s3c_pcm_stereo_out[pdev->id];
+diff --git a/sound/soc/samsung/s3c2412-i2s.c b/sound/soc/samsung/s3c2412-i2s.c
+index 2b766d212ce0..77d27c85a32a 100644
+--- a/sound/soc/samsung/s3c2412-i2s.c
++++ b/sound/soc/samsung/s3c2412-i2s.c
+@@ -34,13 +34,13 @@
+ #include "s3c2412-i2s.h"
+ 
+ static struct s3c_dma_params s3c2412_i2s_pcm_stereo_out = {
+-	.channel	= DMACH_I2S_OUT,
++	.slave		= (void *)(uintptr_t)DMACH_I2S_OUT,
+ 	.ch_name	= "tx",
+ 	.dma_size	= 4,
+ };
+ 
+ static struct s3c_dma_params s3c2412_i2s_pcm_stereo_in = {
+-	.channel	= DMACH_I2S_IN,
++	.slave		= (void *)(uintptr_t)DMACH_I2S_IN,
+ 	.ch_name	= "rx",
+ 	.dma_size	= 4,
+ };
+diff --git a/sound/soc/samsung/s3c24xx-i2s.c b/sound/soc/samsung/s3c24xx-i2s.c
+index 5bf723689692..9da3a77ea2c7 100644
+--- a/sound/soc/samsung/s3c24xx-i2s.c
++++ b/sound/soc/samsung/s3c24xx-i2s.c
+@@ -32,13 +32,13 @@
+ #include "s3c24xx-i2s.h"
+ 
+ static struct s3c_dma_params s3c24xx_i2s_pcm_stereo_out = {
+-	.channel	= DMACH_I2S_OUT,
++	.slave		= (void *)(uintptr_t)DMACH_I2S_OUT,
+ 	.ch_name	= "tx",
+ 	.dma_size	= 2,
+ };
+ 
+ static struct s3c_dma_params s3c24xx_i2s_pcm_stereo_in = {
+-	.channel	= DMACH_I2S_IN,
++	.slave		= (void *)(uintptr_t)DMACH_I2S_IN,
+ 	.ch_name	= "rx",
+ 	.dma_size	= 2,
+ };
+diff --git a/sound/soc/samsung/spdif.c b/sound/soc/samsung/spdif.c
+index 36dbc0e96004..9dd7ee6d03ff 100644
+--- a/sound/soc/samsung/spdif.c
++++ b/sound/soc/samsung/spdif.c
+@@ -359,7 +359,7 @@ static const struct snd_soc_component_driver samsung_spdif_component = {
+ static int spdif_probe(struct platform_device *pdev)
+ {
+ 	struct s3c_audio_pdata *spdif_pdata;
+-	struct resource *mem_res, *dma_res;
++	struct resource *mem_res;
+ 	struct samsung_spdif_info *spdif;
+ 	int ret;
+ 
+@@ -367,12 +367,6 @@ static int spdif_probe(struct platform_device *pdev)
+ 
+ 	dev_dbg(&pdev->dev, "Entered %s\n", __func__);
+ 
+-	dma_res = platform_get_resource(pdev, IORESOURCE_DMA, 0);
+-	if (!dma_res) {
+-		dev_err(&pdev->dev, "Unable to get dma resource.\n");
+-		return -ENXIO;
+-	}
+-
+ 	mem_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	if (!mem_res) {
+ 		dev_err(&pdev->dev, "Unable to get register resource.\n");
+@@ -432,7 +426,7 @@ static int spdif_probe(struct platform_device *pdev)
+ 
+ 	spdif_stereo_out.dma_size = 2;
+ 	spdif_stereo_out.dma_addr = mem_res->start + DATA_OUTBUF;
+-	spdif_stereo_out.channel = dma_res->start;
++	spdif_stereo_out.slave = spdif_pdata ? spdif_pdata->dma_playback : NULL;
+ 
+ 	spdif->dma_playback = &spdif_stereo_out;
+ 
+diff --git a/sound/usb/clock.c b/sound/usb/clock.c
+index 2ed260b10f6d..7ccbcaf6a147 100644
+--- a/sound/usb/clock.c
++++ b/sound/usb/clock.c
+@@ -285,6 +285,8 @@ static int set_sample_rate_v1(struct snd_usb_audio *chip, int iface,
+ 	unsigned char data[3];
+ 	int err, crate;
+ 
++	if (get_iface_desc(alts)->bNumEndpoints < 1)
++		return -EINVAL;
+ 	ep = get_endpoint(alts, 0)->bEndpointAddress;
+ 
+ 	/* if endpoint doesn't have sampling rate control, bail out */
+diff --git a/sound/usb/endpoint.c b/sound/usb/endpoint.c
+index 7b1cb365ffab..c07a7eda42a2 100644
+--- a/sound/usb/endpoint.c
++++ b/sound/usb/endpoint.c
+@@ -438,6 +438,9 @@ exit_clear:
+  *
+  * New endpoints will be added to chip->ep_list and must be freed by
+  * calling snd_usb_endpoint_free().
++ *
++ * For SND_USB_ENDPOINT_TYPE_SYNC, the caller needs to guarantee that
++ * bNumEndpoints > 1 beforehand.
+  */
+ struct snd_usb_endpoint *snd_usb_add_endpoint(struct snd_usb_audio *chip,
+ 					      struct usb_host_interface *alts,
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index 279025650568..f6c3bf79af9a 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -1519,7 +1519,11 @@ static int snd_microii_spdif_default_get(struct snd_kcontrol *kcontrol,
+ 
+ 	/* use known values for that card: interface#1 altsetting#1 */
+ 	iface = usb_ifnum_to_if(chip->dev, 1);
++	if (!iface || iface->num_altsetting < 2)
++		return -EINVAL;
+ 	alts = &iface->altsetting[1];
++	if (get_iface_desc(alts)->bNumEndpoints < 1)
++		return -EINVAL;
+ 	ep = get_endpoint(alts, 0)->bEndpointAddress;
+ 
+ 	err = snd_usb_ctl_msg(chip->dev,
+diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c
+index 9245f52d43bd..44d178ee9177 100644
+--- a/sound/usb/pcm.c
++++ b/sound/usb/pcm.c
+@@ -159,6 +159,8 @@ static int init_pitch_v1(struct snd_usb_audio *chip, int iface,
+ 	unsigned char data[1];
+ 	int err;
+ 
++	if (get_iface_desc(alts)->bNumEndpoints < 1)
++		return -EINVAL;
+ 	ep = get_endpoint(alts, 0)->bEndpointAddress;
+ 
+ 	data[0] = 1;
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index c458d60d5030..cd7eac28edee 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -150,6 +150,7 @@ static int create_fixed_stream_quirk(struct snd_usb_audio *chip,
+ 		usb_audio_err(chip, "cannot memdup\n");
+ 		return -ENOMEM;
+ 	}
++	INIT_LIST_HEAD(&fp->list);
+ 	if (fp->nr_rates > MAX_NR_RATES) {
+ 		kfree(fp);
+ 		return -EINVAL;
+@@ -167,19 +168,20 @@ static int create_fixed_stream_quirk(struct snd_usb_audio *chip,
+ 	stream = (fp->endpoint & USB_DIR_IN)
+ 		? SNDRV_PCM_STREAM_CAPTURE : SNDRV_PCM_STREAM_PLAYBACK;
+ 	err = snd_usb_add_audio_stream(chip, stream, fp);
+-	if (err < 0) {
+-		kfree(fp);
+-		kfree(rate_table);
+-		return err;
+-	}
++	if (err < 0)
++		goto error;
+ 	if (fp->iface != get_iface_desc(&iface->altsetting[0])->bInterfaceNumber ||
+ 	    fp->altset_idx >= iface->num_altsetting) {
+-		kfree(fp);
+-		kfree(rate_table);
+-		return -EINVAL;
++		err = -EINVAL;
++		goto error;
+ 	}
+ 	alts = &iface->altsetting[fp->altset_idx];
+ 	altsd = get_iface_desc(alts);
++	if (altsd->bNumEndpoints < 1) {
++		err = -EINVAL;
++		goto error;
++	}
++
+ 	fp->protocol = altsd->bInterfaceProtocol;
+ 
+ 	if (fp->datainterval == 0)
+@@ -190,6 +192,12 @@ static int create_fixed_stream_quirk(struct snd_usb_audio *chip,
+ 	snd_usb_init_pitch(chip, fp->iface, alts, fp);
+ 	snd_usb_init_sample_rate(chip, fp->iface, alts, fp, fp->rate_max);
+ 	return 0;
++
++ error:
++	list_del(&fp->list); /* unlink for avoiding double-free */
++	kfree(fp);
++	kfree(rate_table);
++	return err;
+ }
+ 
+ static int create_auto_pcm_quirk(struct snd_usb_audio *chip,
+@@ -462,6 +470,7 @@ static int create_uaxx_quirk(struct snd_usb_audio *chip,
+ 	fp->ep_attr = get_endpoint(alts, 0)->bmAttributes;
+ 	fp->datainterval = 0;
+ 	fp->maxpacksize = le16_to_cpu(get_endpoint(alts, 0)->wMaxPacketSize);
++	INIT_LIST_HEAD(&fp->list);
+ 
+ 	switch (fp->maxpacksize) {
+ 	case 0x120:
+@@ -485,6 +494,7 @@ static int create_uaxx_quirk(struct snd_usb_audio *chip,
+ 		? SNDRV_PCM_STREAM_CAPTURE : SNDRV_PCM_STREAM_PLAYBACK;
+ 	err = snd_usb_add_audio_stream(chip, stream, fp);
+ 	if (err < 0) {
++		list_del(&fp->list); /* unlink for avoiding double-free */
+ 		kfree(fp);
+ 		return err;
+ 	}
+@@ -1121,6 +1131,7 @@ bool snd_usb_get_sample_rate_quirk(struct snd_usb_audio *chip)
+ 	switch (chip->usb_id) {
+ 	case USB_ID(0x045E, 0x075D): /* MS Lifecam Cinema  */
+ 	case USB_ID(0x045E, 0x076D): /* MS Lifecam HD-5000 */
++	case USB_ID(0x045E, 0x076E): /* MS Lifecam HD-5001 */
+ 	case USB_ID(0x045E, 0x076F): /* MS Lifecam HD-6000 */
+ 	case USB_ID(0x045E, 0x0772): /* MS Lifecam Studio */
+ 	case USB_ID(0x045E, 0x0779): /* MS Lifecam HD-3000 */
+diff --git a/sound/usb/stream.c b/sound/usb/stream.c
+index 8ee14f2365e7..3b23102230c0 100644
+--- a/sound/usb/stream.c
++++ b/sound/usb/stream.c
+@@ -316,7 +316,9 @@ static struct snd_pcm_chmap_elem *convert_chmap(int channels, unsigned int bits,
+ /*
+  * add this endpoint to the chip instance.
+  * if a stream with the same endpoint already exists, append to it.
+- * if not, create a new pcm stream.
++ * if not, create a new pcm stream. note, fp is added to the substream
++ * fmt_list and will be freed on the chip instance release. do not free
++ * fp or do remove it from the substream fmt_list to avoid double-free.
+  */
+ int snd_usb_add_audio_stream(struct snd_usb_audio *chip,
+ 			     int stream,
+@@ -677,6 +679,7 @@ int snd_usb_parse_audio_interface(struct snd_usb_audio *chip, int iface_no)
+ 					* (fp->maxpacksize & 0x7ff);
+ 		fp->attributes = parse_uac_endpoint_attributes(chip, alts, protocol, iface_no);
+ 		fp->clock = clock;
++		INIT_LIST_HEAD(&fp->list);
+ 
+ 		/* some quirks for attributes here */
+ 
+@@ -725,6 +728,7 @@ int snd_usb_parse_audio_interface(struct snd_usb_audio *chip, int iface_no)
+ 		dev_dbg(&dev->dev, "%u:%d: add audio endpoint %#x\n", iface_no, altno, fp->endpoint);
+ 		err = snd_usb_add_audio_stream(chip, stream, fp);
+ 		if (err < 0) {
++			list_del(&fp->list); /* unlink for avoiding double-free */
+ 			kfree(fp->rate_table);
+ 			kfree(fp->chmap);
+ 			kfree(fp);
+diff --git a/tools/hv/Makefile b/tools/hv/Makefile
+index a8ab79556926..a8c4644022a6 100644
+--- a/tools/hv/Makefile
++++ b/tools/hv/Makefile
+@@ -5,6 +5,8 @@ PTHREAD_LIBS = -lpthread
+ WARNINGS = -Wall -Wextra
+ CFLAGS = $(WARNINGS) -g $(PTHREAD_LIBS) $(shell getconf LFS_CFLAGS)
+ 
++CFLAGS += -D__EXPORTED_HEADERS__ -I../../include/uapi -I../../include
++
+ all: hv_kvp_daemon hv_vss_daemon hv_fcopy_daemon
+ %: %.c
+ 	$(CC) $(CFLAGS) -o $@ $^
+diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
+index b48e87693aa5..a35db828bd0d 100644
+--- a/tools/perf/util/parse-events.c
++++ b/tools/perf/util/parse-events.c
+@@ -2101,11 +2101,11 @@ char *parse_events_formats_error_string(char *additional_terms)
+ 
+ 	/* valid terms */
+ 	if (additional_terms) {
+-		if (!asprintf(&str, "valid terms: %s,%s",
+-			      additional_terms, static_terms))
++		if (asprintf(&str, "valid terms: %s,%s",
++			     additional_terms, static_terms) < 0)
+ 			goto fail;
+ 	} else {
+-		if (!asprintf(&str, "valid terms: %s", static_terms))
++		if (asprintf(&str, "valid terms: %s", static_terms) < 0)
+ 			goto fail;
+ 	}
+ 	return str;
+diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
+index e4b173dec4b9..6f2a0279476c 100644
+--- a/tools/perf/util/pmu.c
++++ b/tools/perf/util/pmu.c
+@@ -283,13 +283,12 @@ static int pmu_aliases_parse(char *dir, struct list_head *head)
+ {
+ 	struct dirent *evt_ent;
+ 	DIR *event_dir;
+-	int ret = 0;
+ 
+ 	event_dir = opendir(dir);
+ 	if (!event_dir)
+ 		return -EINVAL;
+ 
+-	while (!ret && (evt_ent = readdir(event_dir))) {
++	while ((evt_ent = readdir(event_dir))) {
+ 		char path[PATH_MAX];
+ 		char *name = evt_ent->d_name;
+ 		FILE *file;
+@@ -305,17 +304,19 @@ static int pmu_aliases_parse(char *dir, struct list_head *head)
+ 
+ 		snprintf(path, PATH_MAX, "%s/%s", dir, name);
+ 
+-		ret = -EINVAL;
+ 		file = fopen(path, "r");
+-		if (!file)
+-			break;
++		if (!file) {
++			pr_debug("Cannot open %s\n", path);
++			continue;
++		}
+ 
+-		ret = perf_pmu__new_alias(head, dir, name, file);
++		if (perf_pmu__new_alias(head, dir, name, file) < 0)
++			pr_debug("Cannot set up %s\n", name);
+ 		fclose(file);
+ 	}
+ 
+ 	closedir(event_dir);
+-	return ret;
++	return 0;
+ }
+ 
+ /*
+diff --git a/tools/perf/util/setup.py b/tools/perf/util/setup.py
+index 1833103768cb..c8680984d2d6 100644
+--- a/tools/perf/util/setup.py
++++ b/tools/perf/util/setup.py
+@@ -22,6 +22,7 @@ cflags = getenv('CFLAGS', '').split()
+ # switch off several checks (need to be at the end of cflags list)
+ cflags += ['-fno-strict-aliasing', '-Wno-write-strings', '-Wno-unused-parameter' ]
+ 
++src_perf  = getenv('srctree') + '/tools/perf'
+ build_lib = getenv('PYTHON_EXTBUILD_LIB')
+ build_tmp = getenv('PYTHON_EXTBUILD_TMP')
+ libtraceevent = getenv('LIBTRACEEVENT')
+@@ -30,6 +31,9 @@ libapikfs = getenv('LIBAPI')
+ ext_sources = [f.strip() for f in file('util/python-ext-sources')
+ 				if len(f.strip()) > 0 and f[0] != '#']
+ 
++# use full paths with source files
++ext_sources = map(lambda x: '%s/%s' % (src_perf, x) , ext_sources)
++
+ perf = Extension('perf',
+ 		  sources = ext_sources,
+ 		  include_dirs = ['util/include'],
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 7338e30421d8..fefbf2d148ef 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -547,6 +547,16 @@ static struct kvm *kvm_create_vm(unsigned long type)
+ 	if (!kvm)
+ 		return ERR_PTR(-ENOMEM);
+ 
++	spin_lock_init(&kvm->mmu_lock);
++	atomic_inc(&current->mm->mm_count);
++	kvm->mm = current->mm;
++	kvm_eventfd_init(kvm);
++	mutex_init(&kvm->lock);
++	mutex_init(&kvm->irq_lock);
++	mutex_init(&kvm->slots_lock);
++	atomic_set(&kvm->users_count, 1);
++	INIT_LIST_HEAD(&kvm->devices);
++
+ 	r = kvm_arch_init_vm(kvm, type);
+ 	if (r)
+ 		goto out_err_no_disable;
+@@ -579,16 +589,6 @@ static struct kvm *kvm_create_vm(unsigned long type)
+ 			goto out_err;
+ 	}
+ 
+-	spin_lock_init(&kvm->mmu_lock);
+-	kvm->mm = current->mm;
+-	atomic_inc(&kvm->mm->mm_count);
+-	kvm_eventfd_init(kvm);
+-	mutex_init(&kvm->lock);
+-	mutex_init(&kvm->irq_lock);
+-	mutex_init(&kvm->slots_lock);
+-	atomic_set(&kvm->users_count, 1);
+-	INIT_LIST_HEAD(&kvm->devices);
+-
+ 	r = kvm_init_mmu_notifier(kvm);
+ 	if (r)
+ 		goto out_err;
+@@ -613,6 +613,7 @@ out_err_no_disable:
+ 	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++)
+ 		kvm_free_memslots(kvm, kvm->memslots[i]);
+ 	kvm_arch_free_vm(kvm);
++	mmdrop(current->mm);
+ 	return ERR_PTR(r);
+ }
+ 


