public inbox for gentoo-commits@lists.gentoo.org
* [gentoo-commits] proj/linux-patches:5.0 commit in: /
@ 2019-03-04 13:11 Mike Pagano
  0 siblings, 0 replies; 28+ messages in thread
From: Mike Pagano @ 2019-03-04 13:11 UTC (permalink / raw)
  To: gentoo-commits

commit:     216fdd655adbbeeff9a96eb6dd5c9fee223c9add
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Mar  4 13:10:52 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Mar  4 13:10:52 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=216fdd65

proj/linux-patches: Rename cpu opt patch for gcc > v8

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                                             | 2 +-
 ....patch => 5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch | 0
 2 files changed, 1 insertion(+), 1 deletion(-)

diff --git a/0000_README b/0000_README
index b37d2a4..44c405c 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,6 @@ Patch:  4567_distro-Gentoo-Kconfig.patch
 From:   Tom Wijsman <TomWij@gentoo.org>
 Desc:   Add Gentoo Linux support config settings and defaults.
 
-Patch:  5010_enable-additional-cpu-optimizations-for-gcc.patch
+Patch:  5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch
 From:   https://github.com/graysky2/kernel_gcc_patch/
 Desc:   Kernel patch for gcc >= v4.9 enables kernel >= v4.13 optimizations for additional CPUs.

diff --git a/5010_enable-additional-cpu-optimizations-for-gcc.patch b/5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch
similarity index 100%
rename from 5010_enable-additional-cpu-optimizations-for-gcc.patch
rename to 5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [gentoo-commits] proj/linux-patches:5.0 commit in: /
@ 2019-03-04 13:16 Mike Pagano
  0 siblings, 0 replies; 28+ messages in thread
From: Mike Pagano @ 2019-03-04 13:16 UTC (permalink / raw)
  To: gentoo-commits

commit:     d16eb045481cbdffea00353726477d5e2b5d901e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Mar  4 13:15:41 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Mar  4 13:15:41 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d16eb045

proj/linux-patches: CPU Opt patch for gcc >= v8

Kernel patch for gcc >= v8 enables kernel >= v4.13
optimizations for additional CPUs.

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                  |   4 +
 5011_enable-cpu-optimizations-for-gcc8.patch | 569 +++++++++++++++++++++++++++
 2 files changed, 573 insertions(+)

diff --git a/0000_README b/0000_README
index 44c405c..cfba4e3 100644
--- a/0000_README
+++ b/0000_README
@@ -66,3 +66,7 @@ Desc:   Add Gentoo Linux support config settings and defaults.
 Patch:  5010_enable-additional-cpu-optimizations-for-gcc-4.9.patch
 From:   https://github.com/graysky2/kernel_gcc_patch/
 Desc:   Kernel patch for gcc >= v4.9 enables kernel >= v4.13 optimizations for additional CPUs.
+
+Patch:  5011_enable-cpu-optimizations-for-gcc8.patch
+From:   https://github.com/graysky2/kernel_gcc_patch/
+Desc:   Kernel patch for gcc >= v8 enables kernel >= v4.13 optimizations for additional CPUs.

diff --git a/5011_enable-cpu-optimizations-for-gcc8.patch b/5011_enable-cpu-optimizations-for-gcc8.patch
new file mode 100644
index 0000000..bfd2065
--- /dev/null
+++ b/5011_enable-cpu-optimizations-for-gcc8.patch
@@ -0,0 +1,569 @@
+WARNING
+This patch works with gcc versions 8.1+ and with kernel version 4.13+ and should
+NOT be applied when compiling on older versions of gcc due to key name changes
+of the march flags introduced with the version 4.9 release of gcc.[1]
+
+Use the older version of this patch hosted on the same github for older
+versions of gcc.
+
+FEATURES
+This patch adds additional CPU options to the Linux kernel accessible under:
+ Processor type and features  --->
+  Processor family --->
+
+The expanded microarchitectures include:
+* AMD Improved K8-family
+* AMD K10-family
+* AMD Family 10h (Barcelona)
+* AMD Family 14h (Bobcat)
+* AMD Family 16h (Jaguar)
+* AMD Family 15h (Bulldozer)
+* AMD Family 15h (Piledriver)
+* AMD Family 15h (Steamroller)
+* AMD Family 15h (Excavator)
+* AMD Family 17h (Zen)
+* Intel Silvermont low-power processors
+* Intel 1st Gen Core i3/i5/i7 (Nehalem)
+* Intel 1.5 Gen Core i3/i5/i7 (Westmere)
+* Intel 2nd Gen Core i3/i5/i7 (Sandybridge)
+* Intel 3rd Gen Core i3/i5/i7 (Ivybridge)
+* Intel 4th Gen Core i3/i5/i7 (Haswell)
+* Intel 5th Gen Core i3/i5/i7 (Broadwell)
+* Intel 6th Gen Core i3/i5/i7 (Skylake)
+* Intel 6th Gen Core i7/i9 (Skylake X)
+* Intel 8th Gen Core i3/i5/i7 (Cannon Lake)
+* Intel 8th Gen Core i7/i9 (Ice Lake)
+
+It also offers to compile passing the 'native' option which, "selects the CPU
+to generate code for at compilation time by determining the processor type of
+the compiling machine. Using -march=native enables all instruction subsets
+supported by the local machine and will produce code optimized for the local
+machine under the constraints of the selected instruction set."[3]
+
+MINOR NOTES
+This patch also changes 'atom' to 'bonnell' in accordance with the gcc v4.9
+changes. Note that upstream is using the deprecated 'march=atom' flag when I
+believe it should use the newer 'march=bonnell' flag for Atom processors.[2]
+
+It is not recommended to compile on Atom CPUs with the 'native' option.[4] The
+recommendation is to use the 'atom' option instead.
+
+BENEFITS
+Small but real speed increases are measurable using a make-based build-time
+benchmark comparing a generic kernel to one built with one of the respective microarchs.
+
+See the following experimental evidence supporting this statement:
+https://github.com/graysky2/kernel_gcc_patch
+
+REQUIREMENTS
+linux version >=4.20
+gcc version >=8.1
+
+ACKNOWLEDGMENTS
+This patch builds on the seminal work by Jeroen.[5]
+
+REFERENCES
+1. https://gcc.gnu.org/gcc-4.9/changes.html
+2. https://bugzilla.kernel.org/show_bug.cgi?id=77461
+3. https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html
+4. https://github.com/graysky2/kernel_gcc_patch/issues/15
+5. http://www.linuxforge.net/docs/linux/linux-gcc.php
+
+--- a/arch/x86/Makefile_32.cpu	2019-02-22 09:22:03.426937735 -0500
++++ b/arch/x86/Makefile_32.cpu	2019-02-22 09:37:58.680968580 -0500
+@@ -23,7 +23,18 @@ cflags-$(CONFIG_MK6)		+= -march=k6
+ # Please note, that patches that add -march=athlon-xp and friends are pointless.
+ # They make zero difference whatsosever to performance at this time.
+ cflags-$(CONFIG_MK7)		+= -march=athlon
++cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+ cflags-$(CONFIG_MK8)		+= $(call cc-option,-march=k8,-march=athlon)
++cflags-$(CONFIG_MK8SSE3)		+= $(call cc-option,-march=k8-sse3,-march=athlon)
++cflags-$(CONFIG_MK10)	+= $(call cc-option,-march=amdfam10,-march=athlon)
++cflags-$(CONFIG_MBARCELONA)	+= $(call cc-option,-march=barcelona,-march=athlon)
++cflags-$(CONFIG_MBOBCAT)	+= $(call cc-option,-march=btver1,-march=athlon)
++cflags-$(CONFIG_MJAGUAR)	+= $(call cc-option,-march=btver2,-march=athlon)
++cflags-$(CONFIG_MBULLDOZER)	+= $(call cc-option,-march=bdver1,-march=athlon)
++cflags-$(CONFIG_MPILEDRIVER)	+= $(call cc-option,-march=bdver2,-march=athlon)
++cflags-$(CONFIG_MSTEAMROLLER)	+= $(call cc-option,-march=bdver3,-march=athlon)
++cflags-$(CONFIG_MEXCAVATOR)	+= $(call cc-option,-march=bdver4,-march=athlon)
++cflags-$(CONFIG_MZEN)	+= $(call cc-option,-march=znver1,-march=athlon)
+ cflags-$(CONFIG_MCRUSOE)	+= -march=i686 -falign-functions=0 -falign-jumps=0 -falign-loops=0
+ cflags-$(CONFIG_MEFFICEON)	+= -march=i686 $(call tune,pentium3) -falign-functions=0 -falign-jumps=0 -falign-loops=0
+ cflags-$(CONFIG_MWINCHIPC6)	+= $(call cc-option,-march=winchip-c6,-march=i586)
+@@ -32,9 +43,20 @@ cflags-$(CONFIG_MCYRIXIII)	+= $(call cc-
+ cflags-$(CONFIG_MVIAC3_2)	+= $(call cc-option,-march=c3-2,-march=i686)
+ cflags-$(CONFIG_MVIAC7)		+= -march=i686
+ cflags-$(CONFIG_MCORE2)		+= -march=i686 $(call tune,core2)
+-cflags-$(CONFIG_MATOM)		+= $(call cc-option,-march=atom,$(call cc-option,-march=core2,-march=i686)) \
+-	$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
+-
++cflags-$(CONFIG_MNEHALEM)	+= -march=i686 $(call tune,nehalem)
++cflags-$(CONFIG_MWESTMERE)	+= -march=i686 $(call tune,westmere)
++cflags-$(CONFIG_MSILVERMONT)	+= -march=i686 $(call tune,silvermont)
++cflags-$(CONFIG_MSANDYBRIDGE)	+= -march=i686 $(call tune,sandybridge)
++cflags-$(CONFIG_MIVYBRIDGE)	+= -march=i686 $(call tune,ivybridge)
++cflags-$(CONFIG_MHASWELL)	+= -march=i686 $(call tune,haswell)
++cflags-$(CONFIG_MBROADWELL)	+= -march=i686 $(call tune,broadwell)
++cflags-$(CONFIG_MSKYLAKE)	+= -march=i686 $(call tune,skylake)
++cflags-$(CONFIG_MSKYLAKEX)	+= -march=i686 $(call tune,skylake-avx512)
++cflags-$(CONFIG_MCANNONLAKE)	+= -march=i686 $(call tune,cannonlake)
++cflags-$(CONFIG_MICELAKE)	+= -march=i686 $(call tune,icelake)
++cflags-$(CONFIG_MATOM)		+= $(call cc-option,-march=bonnell,$(call cc-option,-march=core2,-march=i686)) \
++	$(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
++ 
+ # AMD Elan support
+ cflags-$(CONFIG_MELAN)		+= -march=i486
+ 
+--- a/arch/x86/Kconfig.cpu	2019-02-22 09:22:11.576958595 -0500
++++ b/arch/x86/Kconfig.cpu	2019-02-22 09:34:16.490003911 -0500
+@@ -116,6 +116,7 @@ config MPENTIUMM
+ config MPENTIUM4
+ 	bool "Pentium-4/Celeron(P4-based)/Pentium-4 M/older Xeon"
+ 	depends on X86_32
++	select X86_P6_NOP
+ 	---help---
+ 	  Select this for Intel Pentium 4 chips.  This includes the
+ 	  Pentium 4, Pentium D, P4-based Celeron and Xeon, and
+@@ -150,7 +151,7 @@ config MPENTIUM4
+ 
+ 
+ config MK6
+-	bool "K6/K6-II/K6-III"
++	bool "AMD K6/K6-II/K6-III"
+ 	depends on X86_32
+ 	---help---
+ 	  Select this for an AMD K6-family processor.  Enables use of
+@@ -158,7 +159,7 @@ config MK6
+ 	  flags to GCC.
+ 
+ config MK7
+-	bool "Athlon/Duron/K7"
++	bool "AMD Athlon/Duron/K7"
+ 	depends on X86_32
+ 	---help---
+ 	  Select this for an AMD Athlon K7-family processor.  Enables use of
+@@ -166,11 +167,81 @@ config MK7
+ 	  flags to GCC.
+ 
+ config MK8
+-	bool "Opteron/Athlon64/Hammer/K8"
++	bool "AMD Opteron/Athlon64/Hammer/K8"
+ 	---help---
+ 	  Select this for an AMD Opteron or Athlon64 Hammer-family processor.
+ 	  Enables use of some extended instructions, and passes appropriate
+ 	  optimization flags to GCC.
++config MK8SSE3
++	bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
++	---help---
++	  Select this for improved AMD Opteron or Athlon64 Hammer-family processors.
++	  Enables use of some extended instructions, and passes appropriate
++	  optimization flags to GCC.
++
++config MK10
++	bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
++	---help---
++	  Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
++		Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
++	  Enables use of some extended instructions, and passes appropriate
++	  optimization flags to GCC.
++
++config MBARCELONA
++	bool "AMD Barcelona"
++	---help---
++	  Select this for AMD Family 10h Barcelona processors.
++
++	  Enables -march=barcelona
++
++config MBOBCAT
++	bool "AMD Bobcat"
++	---help---
++	  Select this for AMD Family 14h Bobcat processors.
++
++	  Enables -march=btver1
++
++config MJAGUAR
++	bool "AMD Jaguar"
++	---help---
++	  Select this for AMD Family 16h Jaguar processors.
++
++	  Enables -march=btver2
++
++config MBULLDOZER
++	bool "AMD Bulldozer"
++	---help---
++	  Select this for AMD Family 15h Bulldozer processors.
++
++	  Enables -march=bdver1
++
++config MPILEDRIVER
++	bool "AMD Piledriver"
++	---help---
++	  Select this for AMD Family 15h Piledriver processors.
++
++	  Enables -march=bdver2
++
++config MSTEAMROLLER
++	bool "AMD Steamroller"
++	---help---
++	  Select this for AMD Family 15h Steamroller processors.
++
++	  Enables -march=bdver3
++
++config MEXCAVATOR
++	bool "AMD Excavator"
++	---help---
++	  Select this for AMD Family 15h Excavator processors.
++
++	  Enables -march=bdver4
++
++config MZEN
++	bool "AMD Zen"
++	---help---
++	  Select this for AMD Family 17h Zen processors.
++
++	  Enables -march=znver1
+ 
+ config MCRUSOE
+ 	bool "Crusoe"
+@@ -253,6 +324,7 @@ config MVIAC7
+ 
+ config MPSC
+ 	bool "Intel P4 / older Netburst based Xeon"
++	select X86_P6_NOP
+ 	depends on X86_64
+ 	---help---
+ 	  Optimize for Intel Pentium 4, Pentium D and older Nocona/Dempsey
+@@ -262,23 +334,126 @@ config MPSC
+ 	  using the cpu family field
+ 	  in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
+ 
++config MATOM
++	bool "Intel Atom"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for the Intel Atom platform. Intel Atom CPUs have an
++	  in-order pipelining architecture and thus can benefit from
++	  accordingly optimized code. Use a recent GCC with specific Atom
++	  support in order to fully benefit from selecting this option.
++
+ config MCORE2
+-	bool "Core 2/newer Xeon"
++	bool "Intel Core 2"
++	select X86_P6_NOP
++
+ 	---help---
+ 
+ 	  Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
+ 	  53xx) CPUs. You can distinguish newer from older Xeons by the CPU
+ 	  family in /proc/cpuinfo. Newer ones have 6 and older ones 15
+ 	  (not a typo)
++	  Enables -march=core2
+ 
+-config MATOM
+-	bool "Intel Atom"
++config MNEHALEM
++	bool "Intel Nehalem"
++	select X86_P6_NOP
+ 	---help---
+ 
+-	  Select this for the Intel Atom platform. Intel Atom CPUs have an
+-	  in-order pipelining architecture and thus can benefit from
+-	  accordingly optimized code. Use a recent GCC with specific Atom
+-	  support in order to fully benefit from selecting this option.
++	  Select this for 1st Gen Core processors in the Nehalem family.
++
++	  Enables -march=nehalem
++
++config MWESTMERE
++	bool "Intel Westmere"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for the Intel Westmere formerly Nehalem-C family.
++
++	  Enables -march=westmere
++
++config MSILVERMONT
++	bool "Intel Silvermont"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for the Intel Silvermont platform.
++
++	  Enables -march=silvermont
++
++config MSANDYBRIDGE
++	bool "Intel Sandy Bridge"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for 2nd Gen Core processors in the Sandy Bridge family.
++
++	  Enables -march=sandybridge
++
++config MIVYBRIDGE
++	bool "Intel Ivy Bridge"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for 3rd Gen Core processors in the Ivy Bridge family.
++
++	  Enables -march=ivybridge
++
++config MHASWELL
++	bool "Intel Haswell"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for 4th Gen Core processors in the Haswell family.
++
++	  Enables -march=haswell
++
++config MBROADWELL
++	bool "Intel Broadwell"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for 5th Gen Core processors in the Broadwell family.
++
++	  Enables -march=broadwell
++
++config MSKYLAKE
++	bool "Intel Skylake"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for 6th Gen Core processors in the Skylake family.
++
++	  Enables -march=skylake
++
++config MSKYLAKEX
++	bool "Intel Skylake X"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for 6th Gen Core processors in the Skylake X family.
++
++	  Enables -march=skylake-avx512
++
++config MCANNONLAKE
++	bool "Intel Cannon Lake"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for 8th Gen Core processors
++
++	  Enables -march=cannonlake
++
++config MICELAKE
++	bool "Intel Ice Lake"
++	select X86_P6_NOP
++	---help---
++
++	  Select this for 8th Gen Core processors in the Ice Lake family.
++
++	  Enables -march=icelake
+ 
+ config GENERIC_CPU
+ 	bool "Generic-x86-64"
+@@ -287,6 +462,19 @@ config GENERIC_CPU
+ 	  Generic x86-64 CPU.
+ 	  Run equally well on all x86-64 CPUs.
+ 
++config MNATIVE
++ bool "Native optimizations autodetected by GCC"
++ ---help---
++
++   GCC 4.2 and above support -march=native, which automatically detects
++   the optimum settings to use based on your processor. -march=native
++   also detects and applies additional settings beyond -march specific
++   to your CPU, (eg. -msse4). Unless you have a specific reason not to
++   (e.g. distcc cross-compiling), you should probably be using
++   -march=native rather than anything listed below.
++
++   Enables -march=native
++
+ endchoice
+ 
+ config X86_GENERIC
+@@ -311,7 +499,7 @@ config X86_INTERNODE_CACHE_SHIFT
+ config X86_L1_CACHE_SHIFT
+ 	int
+ 	default "7" if MPENTIUM4 || MPSC
+-	default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
++	default "6" if MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MJAGUAR || MPENTIUMM || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MNATIVE || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
+ 	default "4" if MELAN || M486 || MGEODEGX1
+ 	default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
+ 
+@@ -329,39 +517,40 @@ config X86_ALIGNMENT_16
+ 
+ config X86_INTEL_USERCOPY
+ 	def_bool y
+-	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
++	depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK8SSE3 || MK7 || MEFFICEON || MCORE2 || MK10 || MBARCELONA || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MNATIVE
+ 
+ config X86_USE_PPRO_CHECKSUM
+ 	def_bool y
+-	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
++	depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MK10 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MATOM || MNATIVE
+ 
+ config X86_USE_3DNOW
+ 	def_bool y
+ 	depends on (MCYRIXIII || MK7 || MGEODE_LX) && !UML
+ 
+-#
+-# P6_NOPs are a relatively minor optimization that require a family >=
+-# 6 processor, except that it is broken on certain VIA chips.
+-# Furthermore, AMD chips prefer a totally different sequence of NOPs
+-# (which work on all CPUs).  In addition, it looks like Virtual PC
+-# does not understand them.
+-#
+-# As a result, disallow these if we're not compiling for X86_64 (these
+-# NOPs do work on all x86-64 capable chips); the list of processors in
+-# the right-hand clause are the cores that benefit from this optimization.
+-#
+ config X86_P6_NOP
+-	def_bool y
+-	depends on X86_64
+-	depends on (MCORE2 || MPENTIUM4 || MPSC)
++	default n
++	bool "Support for P6_NOPs on Intel chips"
++	depends on (MCORE2 || MPENTIUM4 || MPSC || MATOM || MNEHALEM || MWESTMERE || MSILVERMONT  || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MNATIVE)
++	---help---
++	P6_NOPs are a relatively minor optimization that require a family >=
++	6 processor, except that it is broken on certain VIA chips.
++	Furthermore, AMD chips prefer a totally different sequence of NOPs
++	(which work on all CPUs).  In addition, it looks like Virtual PC
++	does not understand them.
++
++	As a result, disallow these if we're not compiling for X86_64 (these
++	NOPs do work on all x86-64 capable chips); the list of processors in
++	the right-hand clause are the cores that benefit from this optimization.
+ 
++	Say Y if you have Intel CPU newer than Pentium Pro, N otherwise.
++ 
+ config X86_TSC
+ 	def_bool y
+-	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
++	depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MNATIVE || MATOM) || X86_64
+ 
+ config X86_CMPXCHG64
+ 	def_bool y
+-	depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586TSC || M586MMX || MATOM || MGEODE_LX || MGEODEGX1 || MK6 || MK7 || MK8
++	depends on (MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MJAGUAR || MK7 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MNATIVE || MATOM || MGEODE_LX)
+ 
+ # this should be set for all -march=.. options where the compiler
+ # generates cmov.
+--- a/arch/x86/Makefile	2019-02-22 09:21:58.196924367 -0500
++++ b/arch/x86/Makefile	2019-02-22 09:36:27.310577832 -0500
+@@ -118,13 +118,46 @@ else
+ 	KBUILD_CFLAGS += $(call cc-option,-mskip-rax-setup)
+ 
+         # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
++		cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+         cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
++        cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-mtune=k8)
++        cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
++        cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
++        cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
++        cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
++        cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
++        cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
++        cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3)
++        cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4)
++        cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1)
+         cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
+ 
+         cflags-$(CONFIG_MCORE2) += \
+-                $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
+-	cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
+-		$(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++                $(call cc-option,-march=core2,$(call cc-option,-mtune=core2))
++        cflags-$(CONFIG_MNEHALEM) += \
++                $(call cc-option,-march=nehalem,$(call cc-option,-mtune=nehalem))
++        cflags-$(CONFIG_MWESTMERE) += \
++                $(call cc-option,-march=westmere,$(call cc-option,-mtune=westmere))
++        cflags-$(CONFIG_MSILVERMONT) += \
++                $(call cc-option,-march=silvermont,$(call cc-option,-mtune=silvermont))
++        cflags-$(CONFIG_MSANDYBRIDGE) += \
++                $(call cc-option,-march=sandybridge,$(call cc-option,-mtune=sandybridge))
++        cflags-$(CONFIG_MIVYBRIDGE) += \
++                $(call cc-option,-march=ivybridge,$(call cc-option,-mtune=ivybridge))
++        cflags-$(CONFIG_MHASWELL) += \
++                $(call cc-option,-march=haswell,$(call cc-option,-mtune=haswell))
++        cflags-$(CONFIG_MBROADWELL) += \
++                $(call cc-option,-march=broadwell,$(call cc-option,-mtune=broadwell))
++        cflags-$(CONFIG_MSKYLAKE) += \
++                $(call cc-option,-march=skylake,$(call cc-option,-mtune=skylake))
++        cflags-$(CONFIG_MSKYLAKEX) += \
++                $(call cc-option,-march=skylake-avx512,$(call cc-option,-mtune=skylake-avx512))
++        cflags-$(CONFIG_MCANNONLAKE) += \
++                $(call cc-option,-march=cannonlake,$(call cc-option,-mtune=cannonlake))
++        cflags-$(CONFIG_MICELAKE) += \
++                $(call cc-option,-march=icelake,$(call cc-option,-mtune=icelake))
++        cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell) \
++                $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+         cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
+         KBUILD_CFLAGS += $(cflags-y)
+ 
+--- a/arch/x86/include/asm/module.h	2019-02-22 09:22:26.726997480 -0500
++++ b/arch/x86/include/asm/module.h	2019-02-22 09:40:04.231493392 -0500
+@@ -25,6 +25,30 @@ struct mod_arch_specific {
+ #define MODULE_PROC_FAMILY "586MMX "
+ #elif defined CONFIG_MCORE2
+ #define MODULE_PROC_FAMILY "CORE2 "
++#elif defined CONFIG_MNATIVE
++#define MODULE_PROC_FAMILY "NATIVE "
++#elif defined CONFIG_MNEHALEM
++#define MODULE_PROC_FAMILY "NEHALEM "
++#elif defined CONFIG_MWESTMERE
++#define MODULE_PROC_FAMILY "WESTMERE "
++#elif defined CONFIG_MSILVERMONT
++#define MODULE_PROC_FAMILY "SILVERMONT "
++#elif defined CONFIG_MSANDYBRIDGE
++#define MODULE_PROC_FAMILY "SANDYBRIDGE "
++#elif defined CONFIG_MIVYBRIDGE
++#define MODULE_PROC_FAMILY "IVYBRIDGE "
++#elif defined CONFIG_MHASWELL
++#define MODULE_PROC_FAMILY "HASWELL "
++#elif defined CONFIG_MBROADWELL
++#define MODULE_PROC_FAMILY "BROADWELL "
++#elif defined CONFIG_MSKYLAKE
++#define MODULE_PROC_FAMILY "SKYLAKE "
++#elif defined CONFIG_MSKYLAKEX
++#define MODULE_PROC_FAMILY "SKYLAKEX "
++#elif defined CONFIG_MCANNONLAKE
++#define MODULE_PROC_FAMILY "CANNONLAKE "
++#elif defined CONFIG_MICELAKE
++#define MODULE_PROC_FAMILY "ICELAKE "
+ #elif defined CONFIG_MATOM
+ #define MODULE_PROC_FAMILY "ATOM "
+ #elif defined CONFIG_M686
+@@ -43,6 +67,26 @@ struct mod_arch_specific {
+ #define MODULE_PROC_FAMILY "K7 "
+ #elif defined CONFIG_MK8
+ #define MODULE_PROC_FAMILY "K8 "
++#elif defined CONFIG_MK8SSE3
++#define MODULE_PROC_FAMILY "K8SSE3 "
++#elif defined CONFIG_MK10
++#define MODULE_PROC_FAMILY "K10 "
++#elif defined CONFIG_MBARCELONA
++#define MODULE_PROC_FAMILY "BARCELONA "
++#elif defined CONFIG_MBOBCAT
++#define MODULE_PROC_FAMILY "BOBCAT "
++#elif defined CONFIG_MBULLDOZER
++#define MODULE_PROC_FAMILY "BULLDOZER "
++#elif defined CONFIG_MPILEDRIVER
++#define MODULE_PROC_FAMILY "PILEDRIVER "
++#elif defined CONFIG_MSTEAMROLLER
++#define MODULE_PROC_FAMILY "STEAMROLLER "
++#elif defined CONFIG_MJAGUAR
++#define MODULE_PROC_FAMILY "JAGUAR "
++#elif defined CONFIG_MEXCAVATOR
++#define MODULE_PROC_FAMILY "EXCAVATOR "
++#elif defined CONFIG_MZEN
++#define MODULE_PROC_FAMILY "ZEN "
+ #elif defined CONFIG_MELAN
+ #define MODULE_PROC_FAMILY "ELAN "
+ #elif defined CONFIG_MCRUSOE


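For readers unfamiliar with the `$(call cc-option,flag,fallback)` helper used throughout the Makefile hunks above: it emits the flag only if the compiler accepts it, otherwise the fallback, which is how old compilers degrade gracefully to `-march=athlon` and friends. A minimal model in Python (the function name and probe details are illustrative, not kernel code):

```python
import os
import subprocess
import tempfile


def cc_option(flag, fallback="", cc="cc"):
    """Model of the kernel's $(call cc-option,flag,fallback) helper:
    return `flag` if the compiler accepts it, else `fallback`."""
    with tempfile.NamedTemporaryFile(suffix=".c", mode="w", delete=False) as f:
        f.write("int main(void) { return 0; }\n")
        src = f.name
    try:
        # Probe by compiling a trivial program with the candidate flag.
        result = subprocess.run(
            [cc, flag, "-o", os.devnull, src],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        return flag if result.returncode == 0 else fallback
    except OSError:
        # No compiler available: behave like a failed probe.
        return fallback
    finally:
        os.unlink(src)
```

For example, `cc_option("-march=znver1", "-march=athlon")` mirrors the `cflags-$(CONFIG_MZEN)` line above: a compiler too old to know `znver1` silently falls back to `-march=athlon`.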

* [gentoo-commits] proj/linux-patches:5.0 commit in: /
@ 2019-03-08 14:36 Mike Pagano
  0 siblings, 0 replies; 28+ messages in thread
From: Mike Pagano @ 2019-03-08 14:36 UTC (permalink / raw)
  To: gentoo-commits

commit:     64ef0319a05b7c75548b7394bf827605777a684a
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Mar  8 14:36:09 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Mar  8 14:36:09 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=64ef0319

proj/linux-patches: netfilter: nf_tables: fix set double-free in abort path

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |   4 +
 ..._tables-fix-set-double-free-in-abort-path.patch | 110 +++++++++++++++++++++
 2 files changed, 114 insertions(+)

diff --git a/0000_README b/0000_README
index cfba4e3..225fb97 100644
--- a/0000_README
+++ b/0000_README
@@ -59,6 +59,10 @@ Patch:  2600_enable-key-swapping-for-apple-mac.patch
 From:   https://github.com/free5lot/hid-apple-patched
 Desc:   This hid-apple patch enables swapping of the FN and left Control keys and some additional on some apple keyboards. See bug #622902
 
+Patch:  2900_netfilter-patch-nf_tables-fix-set-double-free-in-abort-path.patch
+From:   https://www.spinics.net/lists/netfilter-devel/msg58466.html
+Desc:   netfilter: nf_tables: fix set double-free in abort path
+
 Patch:  4567_distro-Gentoo-Kconfig.patch
 From:   Tom Wijsman <TomWij@gentoo.org>
 Desc:   Add Gentoo Linux support config settings and defaults.

diff --git a/2900_netfilter-patch-nf_tables-fix-set-double-free-in-abort-path.patch b/2900_netfilter-patch-nf_tables-fix-set-double-free-in-abort-path.patch
new file mode 100644
index 0000000..8a126bf
--- /dev/null
+++ b/2900_netfilter-patch-nf_tables-fix-set-double-free-in-abort-path.patch
@@ -0,0 +1,110 @@
+From: Florian Westphal <fw@strlen.de>
+To: <netfilter-devel@vger.kernel.org>
+Cc: kfm@plushkava.net, Florian Westphal <fw@strlen.de>
+Subject: [PATCH nf] netfilter: nf_tables: fix set double-free in abort path
+Date: Thu,  7 Mar 2019 20:30:41 +0100
+X-Mailer: git-send-email 2.19.2
+
+The abort path can cause a double-free of an (anon) set.
+
+Added-and-to-be-aborted rule looks like this:
+
+udp dport { 137, 138 } drop
+
+The to-be-aborted transaction list looks like this:
+newset
+newsetelem
+newsetelem
+rule
+
+This gets walked in reverse order, so first pass disables
+the rule, the set elements, then the set.
+
+After synchronize_rcu(), we then destroy those in same order:
+rule, set element, set element, newset.
+
+The problem is that the (anon) set has already been bound to the rule,
+so the rule's lookup expression destructor already frees the set.
+This then causes a use-after-free when trying to delete the elements
+from this set, and a second free of the set when handling the
+newset expression.
+
+To resolve this, check in first phase if the newset is bound already.
+If so, remove the newset transaction from the list, rule destructor
+will handle cleanup.
+
+This still causes the use-after-free on set element removal.
+To handle this, move all affected set elements to an extra list
+and process it first.
+
+This forces strict 'destroy elements, then set' ordering.
+
+Fixes: f6ac8585897684 ("netfilter: nf_tables: unbind set in rule from commit path")
+Bugzilla: https://bugzilla.netfilter.org/show_bug.cgi?id=1325
+Signed-off-by: Florian Westphal <fw@strlen.de>
+
+--- a/net/netfilter/nf_tables_api.c	2019-03-07 21:49:45.776492810 -0000
++++ b/net/netfilter/nf_tables_api.c	2019-03-07 21:49:57.067493081 -0000
+@@ -6634,10 +6634,39 @@ static void nf_tables_abort_release(stru
+ 	kfree(trans);
+ }
+ 
++static void __nf_tables_newset_abort(struct net *net,
++				     struct nft_trans *set_trans,
++				     struct list_head *set_elements)
++{
++	const struct nft_set *set = nft_trans_set(set_trans);
++	struct nft_trans *trans, *next;
++
++	if (!nft_trans_set_bound(set_trans))
++		return;
++
++	/* When abort is in progress, NFT_MSG_NEWRULE will remove the
++	 * set if it's bound, so we need to remove the NEWSET transaction,
++	 * else the set is released twice.  NEWSETELEM need to be moved
++	 * to special list to ensure 'free elements, then set' ordering.
++	 */
++	list_for_each_entry_safe_reverse(trans, next,
++					 &net->nft.commit_list, list) {
++		if (trans == set_trans)
++			break;
++
++		if (trans->msg_type == NFT_MSG_NEWSETELEM &&
++		    nft_trans_set(trans) == set)
++			list_move(&trans->list, set_elements);
++	}
++
++	nft_trans_destroy(set_trans);
++}
++
+ static int __nf_tables_abort(struct net *net)
+ {
+ 	struct nft_trans *trans, *next;
+ 	struct nft_trans_elem *te;
++	LIST_HEAD(set_elements);
+ 
+ 	list_for_each_entry_safe_reverse(trans, next, &net->nft.commit_list,
+ 					 list) {
+@@ -6693,6 +6722,8 @@ static int __nf_tables_abort(struct net
+ 			trans->ctx.table->use--;
+ 			if (!nft_trans_set_bound(trans))
+ 				list_del_rcu(&nft_trans_set(trans)->list);
++
++			__nf_tables_newset_abort(net, trans, &set_elements);
+ 			break;
+ 		case NFT_MSG_DELSET:
+ 			trans->ctx.table->use++;
+@@ -6739,6 +6770,13 @@ static int __nf_tables_abort(struct net
+ 
+ 	synchronize_rcu();
+ 
++	/* free set elements before the set they belong to is freed */
++	list_for_each_entry_safe_reverse(trans, next,
++					 &set_elements, list) {
++		list_del(&trans->list);
++		nf_tables_abort_release(trans);
++	}
++
+ 	list_for_each_entry_safe_reverse(trans, next,
+ 					 &net->nft.commit_list, list) {
+ 		list_del(&trans->list);

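The "destroy elements, then set" ordering enforced by the patch above can be sketched in plain userspace C. Everything here (`struct txn`, `split_out_elems`, the singly linked lists) is an illustrative stand-in, not the kernel's real `nft_trans`/`list_head` API: matching element transactions are first moved off the commit list onto a private list, then released before the set they belong to, mirroring the kernel's `list_move()`-based approach.

```c
/* Userspace sketch of "free set elements before their set" during abort.
 * Types and names are hypothetical stand-ins for nft_trans machinery. */
#include <stdlib.h>

enum txn_type { NEWSET, NEWSETELEM, OTHER };

struct txn {
	enum txn_type type;
	int set_id;		/* which set this transaction refers to */
	struct txn *next;
};

static void push(struct txn **list, struct txn *t)
{
	t->next = *list;
	*list = t;
}

/* Move every NEWSETELEM for @set_id from *commit to *elems, so the
 * elements can be released before the set itself (the analogue of
 * list_move() onto the set_elements list in the patch). */
static void split_out_elems(struct txn **commit, struct txn **elems,
			    int set_id)
{
	struct txn **pp = commit;

	while (*pp) {
		struct txn *t = *pp;

		if (t->type == NEWSETELEM && t->set_id == set_id) {
			*pp = t->next;	/* unlink from commit list */
			push(elems, t);	/* park on the element list */
		} else {
			pp = &t->next;
		}
	}
}

static int freed_order[8];
static int freed_n;

static void release(struct txn *t)
{
	freed_order[freed_n++] = t->type;	/* record destruction order */
	free(t);
}

int demo(void)
{
	struct txn *commit = NULL, *elems = NULL;
	enum txn_type types[] = { NEWSET, NEWSETELEM, NEWSETELEM };
	int i;

	/* build a commit list holding one set and two of its elements */
	for (i = 0; i < 3; i++) {
		struct txn *t = calloc(1, sizeof(*t));

		t->type = types[i];
		t->set_id = 1;
		push(&commit, t);
	}

	split_out_elems(&commit, &elems, 1);

	/* free elements first, then whatever remains (the set) */
	while (elems) {
		struct txn *t = elems;

		elems = t->next;
		release(t);
	}
	while (commit) {
		struct txn *t = commit;

		commit = t->next;
		release(t);
	}
	return freed_n;
}
```

The point of the two-list split is purely ordering: once the elements live on their own list, they can be destroyed in a first pass, guaranteeing the set outlives every element that references it.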

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [gentoo-commits] proj/linux-patches:5.0 commit in: /
@ 2019-03-10 14:12 Mike Pagano
  0 siblings, 0 replies; 28+ messages in thread
From: Mike Pagano @ 2019-03-10 14:12 UTC (permalink / raw
  To: gentoo-commits

commit:     2abc69ce98210c0192dfce305815bdbd671e2d7c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Mar 10 14:12:03 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Mar 10 14:12:03 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2abc69ce

proj/linux-patches: Linux patch 5.0.1

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |    4 +
 1000_linux-5.0.1.patch | 2134 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2138 insertions(+)

diff --git a/0000_README b/0000_README
index 225fb97..99e0bb6 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,10 @@ EXPERIMENTAL
 Individual Patch Descriptions:
 --------------------------------------------------------------------------
 
+Patch:  1000_linux-5.0.1.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.0.1
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1000_linux-5.0.1.patch b/1000_linux-5.0.1.patch
new file mode 100644
index 0000000..1a45071
--- /dev/null
+++ b/1000_linux-5.0.1.patch
@@ -0,0 +1,2134 @@
+diff --git a/Makefile b/Makefile
+index d5713e7b1e506..3cd7163fe1646 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 0
+-SUBLEVEL = 0
++SUBLEVEL = 1
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/arch/alpha/kernel/syscalls/syscall.tbl b/arch/alpha/kernel/syscalls/syscall.tbl
+index 7b56a53be5e30..e09558edae73a 100644
+--- a/arch/alpha/kernel/syscalls/syscall.tbl
++++ b/arch/alpha/kernel/syscalls/syscall.tbl
+@@ -451,3 +451,4 @@
+ 520	common	preadv2				sys_preadv2
+ 521	common	pwritev2			sys_pwritev2
+ 522	common	statx				sys_statx
++523	common	io_pgetevents			sys_io_pgetevents
+diff --git a/arch/mips/kernel/irq.c b/arch/mips/kernel/irq.c
+index ba150c755fcce..85b6c60f285d2 100644
+--- a/arch/mips/kernel/irq.c
++++ b/arch/mips/kernel/irq.c
+@@ -52,6 +52,7 @@ asmlinkage void spurious_interrupt(void)
+ void __init init_IRQ(void)
+ {
+ 	int i;
++	unsigned int order = get_order(IRQ_STACK_SIZE);
+ 
+ 	for (i = 0; i < NR_IRQS; i++)
+ 		irq_set_noprobe(i);
+@@ -62,8 +63,7 @@ void __init init_IRQ(void)
+ 	arch_init_irq();
+ 
+ 	for_each_possible_cpu(i) {
+-		int irq_pages = IRQ_STACK_SIZE / PAGE_SIZE;
+-		void *s = (void *)__get_free_pages(GFP_KERNEL, irq_pages);
++		void *s = (void *)__get_free_pages(GFP_KERNEL, order);
+ 
+ 		irq_stack[i] = s;
+ 		pr_debug("CPU%d IRQ stack at 0x%p - 0x%p\n", i,
+diff --git a/arch/x86/boot/compressed/pgtable_64.c b/arch/x86/boot/compressed/pgtable_64.c
+index 9e21573714910..f8debf7aeb4c1 100644
+--- a/arch/x86/boot/compressed/pgtable_64.c
++++ b/arch/x86/boot/compressed/pgtable_64.c
+@@ -1,5 +1,7 @@
++#include <linux/efi.h>
+ #include <asm/e820/types.h>
+ #include <asm/processor.h>
++#include <asm/efi.h>
+ #include "pgtable.h"
+ #include "../string.h"
+ 
+@@ -37,9 +39,10 @@ int cmdline_find_option_bool(const char *option);
+ 
+ static unsigned long find_trampoline_placement(void)
+ {
+-	unsigned long bios_start, ebda_start;
++	unsigned long bios_start = 0, ebda_start = 0;
+ 	unsigned long trampoline_start;
+ 	struct boot_e820_entry *entry;
++	char *signature;
+ 	int i;
+ 
+ 	/*
+@@ -47,8 +50,18 @@ static unsigned long find_trampoline_placement(void)
+ 	 * This code is based on reserve_bios_regions().
+ 	 */
+ 
+-	ebda_start = *(unsigned short *)0x40e << 4;
+-	bios_start = *(unsigned short *)0x413 << 10;
++	/*
++	 * EFI systems may not provide legacy ROM. The memory may not be mapped
++	 * at all.
++	 *
++	 * Only look for values in the legacy ROM for non-EFI system.
++	 */
++	signature = (char *)&boot_params->efi_info.efi_loader_signature;
++	if (strncmp(signature, EFI32_LOADER_SIGNATURE, 4) &&
++	    strncmp(signature, EFI64_LOADER_SIGNATURE, 4)) {
++		ebda_start = *(unsigned short *)0x40e << 4;
++		bios_start = *(unsigned short *)0x413 << 10;
++	}
+ 
+ 	if (bios_start < BIOS_START_MIN || bios_start > BIOS_START_MAX)
+ 		bios_start = BIOS_START_MAX;
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 69f6bbb41be0b..01004bfb1a1bc 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -819,11 +819,9 @@ static void init_amd_bd(struct cpuinfo_x86 *c)
+ static void init_amd_zn(struct cpuinfo_x86 *c)
+ {
+ 	set_cpu_cap(c, X86_FEATURE_ZEN);
+-	/*
+-	 * Fix erratum 1076: CPB feature bit not being set in CPUID. It affects
+-	 * all up to and including B1.
+-	 */
+-	if (c->x86_model <= 1 && c->x86_stepping <= 1)
++
++	/* Fix erratum 1076: CPB feature bit not being set in CPUID. */
++	if (!cpu_has(c, X86_FEATURE_CPB))
+ 		set_cpu_cap(c, X86_FEATURE_CPB);
+ }
+ 
+diff --git a/arch/xtensa/kernel/process.c b/arch/xtensa/kernel/process.c
+index 74969a437a37c..2e73395f0560c 100644
+--- a/arch/xtensa/kernel/process.c
++++ b/arch/xtensa/kernel/process.c
+@@ -321,8 +321,8 @@ unsigned long get_wchan(struct task_struct *p)
+ 
+ 		/* Stack layout: sp-4: ra, sp-3: sp' */
+ 
+-		pc = MAKE_PC_FROM_RA(*(unsigned long*)sp - 4, sp);
+-		sp = *(unsigned long *)sp - 3;
++		pc = MAKE_PC_FROM_RA(SPILL_SLOT(sp, 0), sp);
++		sp = SPILL_SLOT(sp, 1);
+ 	} while (count++ < 16);
+ 	return 0;
+ }
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index 4d2b2ad1ee0e1..01f80cbd27418 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -329,6 +329,8 @@ struct binder_error {
+  *                        (invariant after initialized)
+  * @min_priority:         minimum scheduling priority
+  *                        (invariant after initialized)
++ * @txn_security_ctx:     require sender's security context
++ *                        (invariant after initialized)
+  * @async_todo:           list of async work items
+  *                        (protected by @proc->inner_lock)
+  *
+@@ -365,6 +367,7 @@ struct binder_node {
+ 		 * invariant after initialization
+ 		 */
+ 		u8 accept_fds:1;
++		u8 txn_security_ctx:1;
+ 		u8 min_priority;
+ 	};
+ 	bool has_async_transaction;
+@@ -615,6 +618,7 @@ struct binder_transaction {
+ 	long	saved_priority;
+ 	kuid_t	sender_euid;
+ 	struct list_head fd_fixups;
++	binder_uintptr_t security_ctx;
+ 	/**
+ 	 * @lock:  protects @from, @to_proc, and @to_thread
+ 	 *
+@@ -1152,6 +1156,7 @@ static struct binder_node *binder_init_node_ilocked(
+ 	node->work.type = BINDER_WORK_NODE;
+ 	node->min_priority = flags & FLAT_BINDER_FLAG_PRIORITY_MASK;
+ 	node->accept_fds = !!(flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);
++	node->txn_security_ctx = !!(flags & FLAT_BINDER_FLAG_TXN_SECURITY_CTX);
+ 	spin_lock_init(&node->lock);
+ 	INIT_LIST_HEAD(&node->work.entry);
+ 	INIT_LIST_HEAD(&node->async_todo);
+@@ -2778,6 +2783,8 @@ static void binder_transaction(struct binder_proc *proc,
+ 	binder_size_t last_fixup_min_off = 0;
+ 	struct binder_context *context = proc->context;
+ 	int t_debug_id = atomic_inc_return(&binder_last_id);
++	char *secctx = NULL;
++	u32 secctx_sz = 0;
+ 
+ 	e = binder_transaction_log_add(&binder_transaction_log);
+ 	e->debug_id = t_debug_id;
+@@ -3020,6 +3027,20 @@ static void binder_transaction(struct binder_proc *proc,
+ 	t->flags = tr->flags;
+ 	t->priority = task_nice(current);
+ 
++	if (target_node && target_node->txn_security_ctx) {
++		u32 secid;
++
++		security_task_getsecid(proc->tsk, &secid);
++		ret = security_secid_to_secctx(secid, &secctx, &secctx_sz);
++		if (ret) {
++			return_error = BR_FAILED_REPLY;
++			return_error_param = ret;
++			return_error_line = __LINE__;
++			goto err_get_secctx_failed;
++		}
++		extra_buffers_size += ALIGN(secctx_sz, sizeof(u64));
++	}
++
+ 	trace_binder_transaction(reply, t, target_node);
+ 
+ 	t->buffer = binder_alloc_new_buf(&target_proc->alloc, tr->data_size,
+@@ -3036,6 +3057,19 @@ static void binder_transaction(struct binder_proc *proc,
+ 		t->buffer = NULL;
+ 		goto err_binder_alloc_buf_failed;
+ 	}
++	if (secctx) {
++		size_t buf_offset = ALIGN(tr->data_size, sizeof(void *)) +
++				    ALIGN(tr->offsets_size, sizeof(void *)) +
++				    ALIGN(extra_buffers_size, sizeof(void *)) -
++				    ALIGN(secctx_sz, sizeof(u64));
++		char *kptr = t->buffer->data + buf_offset;
++
++		t->security_ctx = (uintptr_t)kptr +
++		    binder_alloc_get_user_buffer_offset(&target_proc->alloc);
++		memcpy(kptr, secctx, secctx_sz);
++		security_release_secctx(secctx, secctx_sz);
++		secctx = NULL;
++	}
+ 	t->buffer->debug_id = t->debug_id;
+ 	t->buffer->transaction = t;
+ 	t->buffer->target_node = target_node;
+@@ -3305,6 +3339,9 @@ err_copy_data_failed:
+ 	t->buffer->transaction = NULL;
+ 	binder_alloc_free_buf(&target_proc->alloc, t->buffer);
+ err_binder_alloc_buf_failed:
++	if (secctx)
++		security_release_secctx(secctx, secctx_sz);
++err_get_secctx_failed:
+ 	kfree(tcomplete);
+ 	binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
+ err_alloc_tcomplete_failed:
+@@ -4036,11 +4073,13 @@ retry:
+ 
+ 	while (1) {
+ 		uint32_t cmd;
+-		struct binder_transaction_data tr;
++		struct binder_transaction_data_secctx tr;
++		struct binder_transaction_data *trd = &tr.transaction_data;
+ 		struct binder_work *w = NULL;
+ 		struct list_head *list = NULL;
+ 		struct binder_transaction *t = NULL;
+ 		struct binder_thread *t_from;
++		size_t trsize = sizeof(*trd);
+ 
+ 		binder_inner_proc_lock(proc);
+ 		if (!binder_worklist_empty_ilocked(&thread->todo))
+@@ -4240,8 +4279,8 @@ retry:
+ 		if (t->buffer->target_node) {
+ 			struct binder_node *target_node = t->buffer->target_node;
+ 
+-			tr.target.ptr = target_node->ptr;
+-			tr.cookie =  target_node->cookie;
++			trd->target.ptr = target_node->ptr;
++			trd->cookie =  target_node->cookie;
+ 			t->saved_priority = task_nice(current);
+ 			if (t->priority < target_node->min_priority &&
+ 			    !(t->flags & TF_ONE_WAY))
+@@ -4251,22 +4290,23 @@ retry:
+ 				binder_set_nice(target_node->min_priority);
+ 			cmd = BR_TRANSACTION;
+ 		} else {
+-			tr.target.ptr = 0;
+-			tr.cookie = 0;
++			trd->target.ptr = 0;
++			trd->cookie = 0;
+ 			cmd = BR_REPLY;
+ 		}
+-		tr.code = t->code;
+-		tr.flags = t->flags;
+-		tr.sender_euid = from_kuid(current_user_ns(), t->sender_euid);
++		trd->code = t->code;
++		trd->flags = t->flags;
++		trd->sender_euid = from_kuid(current_user_ns(), t->sender_euid);
+ 
+ 		t_from = binder_get_txn_from(t);
+ 		if (t_from) {
+ 			struct task_struct *sender = t_from->proc->tsk;
+ 
+-			tr.sender_pid = task_tgid_nr_ns(sender,
+-							task_active_pid_ns(current));
++			trd->sender_pid =
++				task_tgid_nr_ns(sender,
++						task_active_pid_ns(current));
+ 		} else {
+-			tr.sender_pid = 0;
++			trd->sender_pid = 0;
+ 		}
+ 
+ 		ret = binder_apply_fd_fixups(t);
+@@ -4297,15 +4337,20 @@ retry:
+ 			}
+ 			continue;
+ 		}
+-		tr.data_size = t->buffer->data_size;
+-		tr.offsets_size = t->buffer->offsets_size;
+-		tr.data.ptr.buffer = (binder_uintptr_t)
++		trd->data_size = t->buffer->data_size;
++		trd->offsets_size = t->buffer->offsets_size;
++		trd->data.ptr.buffer = (binder_uintptr_t)
+ 			((uintptr_t)t->buffer->data +
+ 			binder_alloc_get_user_buffer_offset(&proc->alloc));
+-		tr.data.ptr.offsets = tr.data.ptr.buffer +
++		trd->data.ptr.offsets = trd->data.ptr.buffer +
+ 					ALIGN(t->buffer->data_size,
+ 					    sizeof(void *));
+ 
++		tr.secctx = t->security_ctx;
++		if (t->security_ctx) {
++			cmd = BR_TRANSACTION_SEC_CTX;
++			trsize = sizeof(tr);
++		}
+ 		if (put_user(cmd, (uint32_t __user *)ptr)) {
+ 			if (t_from)
+ 				binder_thread_dec_tmpref(t_from);
+@@ -4316,7 +4361,7 @@ retry:
+ 			return -EFAULT;
+ 		}
+ 		ptr += sizeof(uint32_t);
+-		if (copy_to_user(ptr, &tr, sizeof(tr))) {
++		if (copy_to_user(ptr, &tr, trsize)) {
+ 			if (t_from)
+ 				binder_thread_dec_tmpref(t_from);
+ 
+@@ -4325,7 +4370,7 @@ retry:
+ 
+ 			return -EFAULT;
+ 		}
+-		ptr += sizeof(tr);
++		ptr += trsize;
+ 
+ 		trace_binder_transaction_received(t);
+ 		binder_stat_br(proc, thread, cmd);
+@@ -4333,16 +4378,18 @@ retry:
+ 			     "%d:%d %s %d %d:%d, cmd %d size %zd-%zd ptr %016llx-%016llx\n",
+ 			     proc->pid, thread->pid,
+ 			     (cmd == BR_TRANSACTION) ? "BR_TRANSACTION" :
+-			     "BR_REPLY",
++				(cmd == BR_TRANSACTION_SEC_CTX) ?
++				     "BR_TRANSACTION_SEC_CTX" : "BR_REPLY",
+ 			     t->debug_id, t_from ? t_from->proc->pid : 0,
+ 			     t_from ? t_from->pid : 0, cmd,
+ 			     t->buffer->data_size, t->buffer->offsets_size,
+-			     (u64)tr.data.ptr.buffer, (u64)tr.data.ptr.offsets);
++			     (u64)trd->data.ptr.buffer,
++			     (u64)trd->data.ptr.offsets);
+ 
+ 		if (t_from)
+ 			binder_thread_dec_tmpref(t_from);
+ 		t->buffer->allow_user_free = 1;
+-		if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {
++		if (cmd != BR_REPLY && !(t->flags & TF_ONE_WAY)) {
+ 			binder_inner_proc_lock(thread->proc);
+ 			t->to_parent = thread->transaction_stack;
+ 			t->to_thread = thread;
+@@ -4690,7 +4737,8 @@ out:
+ 	return ret;
+ }
+ 
+-static int binder_ioctl_set_ctx_mgr(struct file *filp)
++static int binder_ioctl_set_ctx_mgr(struct file *filp,
++				    struct flat_binder_object *fbo)
+ {
+ 	int ret = 0;
+ 	struct binder_proc *proc = filp->private_data;
+@@ -4719,7 +4767,7 @@ static int binder_ioctl_set_ctx_mgr(struct file *filp)
+ 	} else {
+ 		context->binder_context_mgr_uid = curr_euid;
+ 	}
+-	new_node = binder_new_node(proc, NULL);
++	new_node = binder_new_node(proc, fbo);
+ 	if (!new_node) {
+ 		ret = -ENOMEM;
+ 		goto out;
+@@ -4842,8 +4890,20 @@ static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+ 		binder_inner_proc_unlock(proc);
+ 		break;
+ 	}
++	case BINDER_SET_CONTEXT_MGR_EXT: {
++		struct flat_binder_object fbo;
++
++		if (copy_from_user(&fbo, ubuf, sizeof(fbo))) {
++			ret = -EINVAL;
++			goto err;
++		}
++		ret = binder_ioctl_set_ctx_mgr(filp, &fbo);
++		if (ret)
++			goto err;
++		break;
++	}
+ 	case BINDER_SET_CONTEXT_MGR:
+-		ret = binder_ioctl_set_ctx_mgr(filp);
++		ret = binder_ioctl_set_ctx_mgr(filp, NULL);
+ 		if (ret)
+ 			goto err;
+ 		break;
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index 8ac10af17c004..d62487d024559 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -968,9 +968,9 @@ static void __device_release_driver(struct device *dev, struct device *parent)
+ 			drv->remove(dev);
+ 
+ 		device_links_driver_cleanup(dev);
+-		arch_teardown_dma_ops(dev);
+ 
+ 		devres_release_all(dev);
++		arch_teardown_dma_ops(dev);
+ 		dev->driver = NULL;
+ 		dev_set_drvdata(dev, NULL);
+ 		if (dev->pm_domain && dev->pm_domain->dismiss)
+diff --git a/drivers/bluetooth/btrtl.c b/drivers/bluetooth/btrtl.c
+index 41405de27d665..c91bba00df4e4 100644
+--- a/drivers/bluetooth/btrtl.c
++++ b/drivers/bluetooth/btrtl.c
+@@ -552,10 +552,9 @@ struct btrtl_device_info *btrtl_initialize(struct hci_dev *hdev,
+ 					    hdev->bus);
+ 
+ 	if (!btrtl_dev->ic_info) {
+-		rtl_dev_err(hdev, "rtl: unknown IC info, lmp subver %04x, hci rev %04x, hci ver %04x",
++		rtl_dev_info(hdev, "rtl: unknown IC info, lmp subver %04x, hci rev %04x, hci ver %04x",
+ 			    lmp_subver, hci_rev, hci_ver);
+-		ret = -EINVAL;
+-		goto err_free;
++		return btrtl_dev;
+ 	}
+ 
+ 	if (btrtl_dev->ic_info->has_rom_version) {
+@@ -610,6 +609,11 @@ int btrtl_download_firmware(struct hci_dev *hdev,
+ 	 * standard btusb. Once that firmware is uploaded, the subver changes
+ 	 * to a different value.
+ 	 */
++	if (!btrtl_dev->ic_info) {
++		rtl_dev_info(hdev, "rtl: assuming no firmware upload needed\n");
++		return 0;
++	}
++
+ 	switch (btrtl_dev->ic_info->lmp_subver) {
+ 	case RTL_ROM_LMP_8723A:
+ 	case RTL_ROM_LMP_3499:
+diff --git a/drivers/char/applicom.c b/drivers/char/applicom.c
+index c0a5b1f3a9863..4ccc39e00ced3 100644
+--- a/drivers/char/applicom.c
++++ b/drivers/char/applicom.c
+@@ -32,6 +32,7 @@
+ #include <linux/wait.h>
+ #include <linux/init.h>
+ #include <linux/fs.h>
++#include <linux/nospec.h>
+ 
+ #include <asm/io.h>
+ #include <linux/uaccess.h>
+@@ -386,7 +387,11 @@ static ssize_t ac_write(struct file *file, const char __user *buf, size_t count,
+ 	TicCard = st_loc.tic_des_from_pc;	/* tic number to send            */
+ 	IndexCard = NumCard - 1;
+ 
+-	if((NumCard < 1) || (NumCard > MAX_BOARD) || !apbs[IndexCard].RamIO)
++	if (IndexCard >= MAX_BOARD)
++		return -EINVAL;
++	IndexCard = array_index_nospec(IndexCard, MAX_BOARD);
++
++	if (!apbs[IndexCard].RamIO)
+ 		return -EINVAL;
+ 
+ #ifdef DEBUG
+@@ -697,6 +702,7 @@ static long ac_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ 	unsigned char IndexCard;
+ 	void __iomem *pmem;
+ 	int ret = 0;
++	static int warncount = 10;
+ 	volatile unsigned char byte_reset_it;
+ 	struct st_ram_io *adgl;
+ 	void __user *argp = (void __user *)arg;
+@@ -711,16 +717,12 @@ static long ac_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ 	mutex_lock(&ac_mutex);	
+ 	IndexCard = adgl->num_card-1;
+ 	 
+-	if(cmd != 6 && ((IndexCard >= MAX_BOARD) || !apbs[IndexCard].RamIO)) {
+-		static int warncount = 10;
+-		if (warncount) {
+-			printk( KERN_WARNING "APPLICOM driver IOCTL, bad board number %d\n",(int)IndexCard+1);
+-			warncount--;
+-		}
+-		kfree(adgl);
+-		mutex_unlock(&ac_mutex);
+-		return -EINVAL;
+-	}
++	if (cmd != 6 && IndexCard >= MAX_BOARD)
++		goto err;
++	IndexCard = array_index_nospec(IndexCard, MAX_BOARD);
++
++	if (cmd != 6 && !apbs[IndexCard].RamIO)
++		goto err;
+ 
+ 	switch (cmd) {
+ 		
+@@ -838,5 +840,16 @@ static long ac_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ 	kfree(adgl);
+ 	mutex_unlock(&ac_mutex);
+ 	return 0;
++
++err:
++	if (warncount) {
++		pr_warn("APPLICOM driver IOCTL, bad board number %d\n",
++			(int)IndexCard + 1);
++		warncount--;
++	}
++	kfree(adgl);
++	mutex_unlock(&ac_mutex);
++	return -EINVAL;
++
+ }
+ 
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index e35a886e00bcf..ef0e33e21b988 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -545,13 +545,13 @@ EXPORT_SYMBOL_GPL(cpufreq_policy_transition_delay_us);
+  *                          SYSFS INTERFACE                          *
+  *********************************************************************/
+ static ssize_t show_boost(struct kobject *kobj,
+-				 struct attribute *attr, char *buf)
++			  struct kobj_attribute *attr, char *buf)
+ {
+ 	return sprintf(buf, "%d\n", cpufreq_driver->boost_enabled);
+ }
+ 
+-static ssize_t store_boost(struct kobject *kobj, struct attribute *attr,
+-				  const char *buf, size_t count)
++static ssize_t store_boost(struct kobject *kobj, struct kobj_attribute *attr,
++			   const char *buf, size_t count)
+ {
+ 	int ret, enable;
+ 
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index dd66decf2087c..5ab6a4fe93aa6 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -895,7 +895,7 @@ static void intel_pstate_update_policies(void)
+ /************************** sysfs begin ************************/
+ #define show_one(file_name, object)					\
+ 	static ssize_t show_##file_name					\
+-	(struct kobject *kobj, struct attribute *attr, char *buf)	\
++	(struct kobject *kobj, struct kobj_attribute *attr, char *buf)	\
+ 	{								\
+ 		return sprintf(buf, "%u\n", global.object);		\
+ 	}
+@@ -904,7 +904,7 @@ static ssize_t intel_pstate_show_status(char *buf);
+ static int intel_pstate_update_status(const char *buf, size_t size);
+ 
+ static ssize_t show_status(struct kobject *kobj,
+-			   struct attribute *attr, char *buf)
++			   struct kobj_attribute *attr, char *buf)
+ {
+ 	ssize_t ret;
+ 
+@@ -915,7 +915,7 @@ static ssize_t show_status(struct kobject *kobj,
+ 	return ret;
+ }
+ 
+-static ssize_t store_status(struct kobject *a, struct attribute *b,
++static ssize_t store_status(struct kobject *a, struct kobj_attribute *b,
+ 			    const char *buf, size_t count)
+ {
+ 	char *p = memchr(buf, '\n', count);
+@@ -929,7 +929,7 @@ static ssize_t store_status(struct kobject *a, struct attribute *b,
+ }
+ 
+ static ssize_t show_turbo_pct(struct kobject *kobj,
+-				struct attribute *attr, char *buf)
++				struct kobj_attribute *attr, char *buf)
+ {
+ 	struct cpudata *cpu;
+ 	int total, no_turbo, turbo_pct;
+@@ -955,7 +955,7 @@ static ssize_t show_turbo_pct(struct kobject *kobj,
+ }
+ 
+ static ssize_t show_num_pstates(struct kobject *kobj,
+-				struct attribute *attr, char *buf)
++				struct kobj_attribute *attr, char *buf)
+ {
+ 	struct cpudata *cpu;
+ 	int total;
+@@ -976,7 +976,7 @@ static ssize_t show_num_pstates(struct kobject *kobj,
+ }
+ 
+ static ssize_t show_no_turbo(struct kobject *kobj,
+-			     struct attribute *attr, char *buf)
++			     struct kobj_attribute *attr, char *buf)
+ {
+ 	ssize_t ret;
+ 
+@@ -998,7 +998,7 @@ static ssize_t show_no_turbo(struct kobject *kobj,
+ 	return ret;
+ }
+ 
+-static ssize_t store_no_turbo(struct kobject *a, struct attribute *b,
++static ssize_t store_no_turbo(struct kobject *a, struct kobj_attribute *b,
+ 			      const char *buf, size_t count)
+ {
+ 	unsigned int input;
+@@ -1045,7 +1045,7 @@ static ssize_t store_no_turbo(struct kobject *a, struct attribute *b,
+ 	return count;
+ }
+ 
+-static ssize_t store_max_perf_pct(struct kobject *a, struct attribute *b,
++static ssize_t store_max_perf_pct(struct kobject *a, struct kobj_attribute *b,
+ 				  const char *buf, size_t count)
+ {
+ 	unsigned int input;
+@@ -1075,7 +1075,7 @@ static ssize_t store_max_perf_pct(struct kobject *a, struct attribute *b,
+ 	return count;
+ }
+ 
+-static ssize_t store_min_perf_pct(struct kobject *a, struct attribute *b,
++static ssize_t store_min_perf_pct(struct kobject *a, struct kobj_attribute *b,
+ 				  const char *buf, size_t count)
+ {
+ 	unsigned int input;
+@@ -1107,12 +1107,13 @@ static ssize_t store_min_perf_pct(struct kobject *a, struct attribute *b,
+ }
+ 
+ static ssize_t show_hwp_dynamic_boost(struct kobject *kobj,
+-				struct attribute *attr, char *buf)
++				struct kobj_attribute *attr, char *buf)
+ {
+ 	return sprintf(buf, "%u\n", hwp_boost);
+ }
+ 
+-static ssize_t store_hwp_dynamic_boost(struct kobject *a, struct attribute *b,
++static ssize_t store_hwp_dynamic_boost(struct kobject *a,
++				       struct kobj_attribute *b,
+ 				       const char *buf, size_t count)
+ {
+ 	unsigned int input;
+diff --git a/drivers/gnss/sirf.c b/drivers/gnss/sirf.c
+index 226f6e6fe01bc..8e3f6a776e02e 100644
+--- a/drivers/gnss/sirf.c
++++ b/drivers/gnss/sirf.c
+@@ -310,30 +310,26 @@ static int sirf_probe(struct serdev_device *serdev)
+ 			ret = -ENODEV;
+ 			goto err_put_device;
+ 		}
++
++		ret = regulator_enable(data->vcc);
++		if (ret)
++			goto err_put_device;
++
++		/* Wait for chip to boot into hibernate mode. */
++		msleep(SIRF_BOOT_DELAY);
+ 	}
+ 
+ 	if (data->wakeup) {
+ 		ret = gpiod_to_irq(data->wakeup);
+ 		if (ret < 0)
+-			goto err_put_device;
+-
++			goto err_disable_vcc;
+ 		data->irq = ret;
+ 
+-		ret = devm_request_threaded_irq(dev, data->irq, NULL,
+-				sirf_wakeup_handler,
++		ret = request_threaded_irq(data->irq, NULL, sirf_wakeup_handler,
+ 				IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
+ 				"wakeup", data);
+ 		if (ret)
+-			goto err_put_device;
+-	}
+-
+-	if (data->on_off) {
+-		ret = regulator_enable(data->vcc);
+-		if (ret)
+-			goto err_put_device;
+-
+-		/* Wait for chip to boot into hibernate mode */
+-		msleep(SIRF_BOOT_DELAY);
++			goto err_disable_vcc;
+ 	}
+ 
+ 	if (IS_ENABLED(CONFIG_PM)) {
+@@ -342,7 +338,7 @@ static int sirf_probe(struct serdev_device *serdev)
+ 	} else {
+ 		ret = sirf_runtime_resume(dev);
+ 		if (ret < 0)
+-			goto err_disable_vcc;
++			goto err_free_irq;
+ 	}
+ 
+ 	ret = gnss_register_device(gdev);
+@@ -356,6 +352,9 @@ err_disable_rpm:
+ 		pm_runtime_disable(dev);
+ 	else
+ 		sirf_runtime_suspend(dev);
++err_free_irq:
++	if (data->wakeup)
++		free_irq(data->irq, data);
+ err_disable_vcc:
+ 	if (data->on_off)
+ 		regulator_disable(data->vcc);
+@@ -376,6 +375,9 @@ static void sirf_remove(struct serdev_device *serdev)
+ 	else
+ 		sirf_runtime_suspend(&serdev->dev);
+ 
++	if (data->wakeup)
++		free_irq(data->irq, data);
++
+ 	if (data->on_off)
+ 		regulator_disable(data->vcc);
+ 
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 7e3c00bd9532a..76cc163b3cf15 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -4222,7 +4222,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
+ 		.name = "Marvell 88E6190",
+ 		.num_databases = 4096,
+ 		.num_ports = 11,	/* 10 + Z80 */
+-		.num_internal_phys = 11,
++		.num_internal_phys = 9,
+ 		.num_gpio = 16,
+ 		.max_vid = 8191,
+ 		.port_base_addr = 0x0,
+@@ -4245,7 +4245,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
+ 		.name = "Marvell 88E6190X",
+ 		.num_databases = 4096,
+ 		.num_ports = 11,	/* 10 + Z80 */
+-		.num_internal_phys = 11,
++		.num_internal_phys = 9,
+ 		.num_gpio = 16,
+ 		.max_vid = 8191,
+ 		.port_base_addr = 0x0,
+@@ -4268,7 +4268,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
+ 		.name = "Marvell 88E6191",
+ 		.num_databases = 4096,
+ 		.num_ports = 11,	/* 10 + Z80 */
+-		.num_internal_phys = 11,
++		.num_internal_phys = 9,
+ 		.max_vid = 8191,
+ 		.port_base_addr = 0x0,
+ 		.phy_base_addr = 0x0,
+@@ -4315,7 +4315,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
+ 		.name = "Marvell 88E6290",
+ 		.num_databases = 4096,
+ 		.num_ports = 11,	/* 10 + Z80 */
+-		.num_internal_phys = 11,
++		.num_internal_phys = 9,
+ 		.num_gpio = 16,
+ 		.max_vid = 8191,
+ 		.port_base_addr = 0x0,
+@@ -4477,7 +4477,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
+ 		.name = "Marvell 88E6390",
+ 		.num_databases = 4096,
+ 		.num_ports = 11,	/* 10 + Z80 */
+-		.num_internal_phys = 11,
++		.num_internal_phys = 9,
+ 		.num_gpio = 16,
+ 		.max_vid = 8191,
+ 		.port_base_addr = 0x0,
+@@ -4500,7 +4500,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
+ 		.name = "Marvell 88E6390X",
+ 		.num_databases = 4096,
+ 		.num_ports = 11,	/* 10 + Z80 */
+-		.num_internal_phys = 11,
++		.num_internal_phys = 9,
+ 		.num_gpio = 16,
+ 		.max_vid = 8191,
+ 		.port_base_addr = 0x0,
+@@ -4847,6 +4847,7 @@ static int mv88e6xxx_probe(struct mdio_device *mdiodev)
+ 	if (err)
+ 		goto out;
+ 
++	mv88e6xxx_ports_cmode_init(chip);
+ 	mv88e6xxx_phy_init(chip);
+ 
+ 	if (chip->info->ops->get_eeprom) {
+diff --git a/drivers/net/dsa/mv88e6xxx/port.c b/drivers/net/dsa/mv88e6xxx/port.c
+index 79ab51e69aee4..184c2b1b31159 100644
+--- a/drivers/net/dsa/mv88e6xxx/port.c
++++ b/drivers/net/dsa/mv88e6xxx/port.c
+@@ -190,7 +190,7 @@ int mv88e6xxx_port_set_duplex(struct mv88e6xxx_chip *chip, int port, int dup)
+ 		/* normal duplex detection */
+ 		break;
+ 	default:
+-		return -EINVAL;
++		return -EOPNOTSUPP;
+ 	}
+ 
+ 	err = mv88e6xxx_port_write(chip, port, MV88E6XXX_PORT_MAC_CTL, reg);
+diff --git a/drivers/net/ethernet/marvell/sky2.c b/drivers/net/ethernet/marvell/sky2.c
+index 57727fe1501ee..8b3495ee2b6eb 100644
+--- a/drivers/net/ethernet/marvell/sky2.c
++++ b/drivers/net/ethernet/marvell/sky2.c
+@@ -46,6 +46,7 @@
+ #include <linux/mii.h>
+ #include <linux/of_device.h>
+ #include <linux/of_net.h>
++#include <linux/dmi.h>
+ 
+ #include <asm/irq.h>
+ 
+@@ -93,7 +94,7 @@ static int copybreak __read_mostly = 128;
+ module_param(copybreak, int, 0);
+ MODULE_PARM_DESC(copybreak, "Receive copy threshold");
+ 
+-static int disable_msi = 0;
++static int disable_msi = -1;
+ module_param(disable_msi, int, 0);
+ MODULE_PARM_DESC(disable_msi, "Disable Message Signaled Interrupt (MSI)");
+ 
+@@ -4917,6 +4918,24 @@ static const char *sky2_name(u8 chipid, char *buf, int sz)
+ 	return buf;
+ }
+ 
++static const struct dmi_system_id msi_blacklist[] = {
++	{
++		.ident = "Dell Inspiron 1545",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 1545"),
++		},
++	},
++	{
++		.ident = "Gateway P-79",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Gateway"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "P-79"),
++		},
++	},
++	{}
++};
++
+ static int sky2_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ {
+ 	struct net_device *dev, *dev1;
+@@ -5028,6 +5047,9 @@ static int sky2_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 		goto err_out_free_pci;
+ 	}
+ 
++	if (disable_msi == -1)
++		disable_msi = !!dmi_check_system(msi_blacklist);
++
+ 	if (!disable_msi && pci_enable_msi(pdev) == 0) {
+ 		err = sky2_test_msi(hw);
+ 		if (err) {
+diff --git a/drivers/net/ethernet/mscc/ocelot_board.c b/drivers/net/ethernet/mscc/ocelot_board.c
+index ca3ea2fbfcd08..80d87798c62b8 100644
+--- a/drivers/net/ethernet/mscc/ocelot_board.c
++++ b/drivers/net/ethernet/mscc/ocelot_board.c
+@@ -267,6 +267,7 @@ static int mscc_ocelot_probe(struct platform_device *pdev)
+ 		struct phy *serdes;
+ 		void __iomem *regs;
+ 		char res_name[8];
++		int phy_mode;
+ 		u32 port;
+ 
+ 		if (of_property_read_u32(portnp, "reg", &port))
+@@ -292,11 +293,11 @@ static int mscc_ocelot_probe(struct platform_device *pdev)
+ 		if (err)
+ 			return err;
+ 
+-		err = of_get_phy_mode(portnp);
+-		if (err < 0)
++		phy_mode = of_get_phy_mode(portnp);
++		if (phy_mode < 0)
+ 			ocelot->ports[port]->phy_mode = PHY_INTERFACE_MODE_NA;
+ 		else
+-			ocelot->ports[port]->phy_mode = err;
++			ocelot->ports[port]->phy_mode = phy_mode;
+ 
+ 		switch (ocelot->ports[port]->phy_mode) {
+ 		case PHY_INTERFACE_MODE_NA:
+@@ -304,6 +305,13 @@ static int mscc_ocelot_probe(struct platform_device *pdev)
+ 		case PHY_INTERFACE_MODE_SGMII:
+ 			break;
+ 		case PHY_INTERFACE_MODE_QSGMII:
++			/* Ensure clock signals and speed is set on all
++			 * QSGMII links
++			 */
++			ocelot_port_writel(ocelot->ports[port],
++					   DEV_CLOCK_CFG_LINK_SPEED
++					   (OCELOT_SPEED_1000),
++					   DEV_CLOCK_CFG);
+ 			break;
+ 		default:
+ 			dev_err(ocelot->dev,
+diff --git a/drivers/net/team/team_mode_loadbalance.c b/drivers/net/team/team_mode_loadbalance.c
+index a5ef97010eb34..5541e1c19936c 100644
+--- a/drivers/net/team/team_mode_loadbalance.c
++++ b/drivers/net/team/team_mode_loadbalance.c
+@@ -325,6 +325,20 @@ static int lb_bpf_func_set(struct team *team, struct team_gsetter_ctx *ctx)
+ 	return 0;
+ }
+ 
++static void lb_bpf_func_free(struct team *team)
++{
++	struct lb_priv *lb_priv = get_lb_priv(team);
++	struct bpf_prog *fp;
++
++	if (!lb_priv->ex->orig_fprog)
++		return;
++
++	__fprog_destroy(lb_priv->ex->orig_fprog);
++	fp = rcu_dereference_protected(lb_priv->fp,
++				       lockdep_is_held(&team->lock));
++	bpf_prog_destroy(fp);
++}
++
+ static int lb_tx_method_get(struct team *team, struct team_gsetter_ctx *ctx)
+ {
+ 	struct lb_priv *lb_priv = get_lb_priv(team);
+@@ -639,6 +653,7 @@ static void lb_exit(struct team *team)
+ 
+ 	team_options_unregister(team, lb_options,
+ 				ARRAY_SIZE(lb_options));
++	lb_bpf_func_free(team);
+ 	cancel_delayed_work_sync(&lb_priv->ex->stats.refresh_dw);
+ 	free_percpu(lb_priv->pcpu_stats);
+ 	kfree(lb_priv->ex);
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 18af2f8eee96a..74bebbdb4b158 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -976,6 +976,13 @@ static const struct usb_device_id products[] = {
+ 					      0xff),
+ 		.driver_info	    = (unsigned long)&qmi_wwan_info_quirk_dtr,
+ 	},
++	{	/* Quectel EG12/EM12 */
++		USB_DEVICE_AND_INTERFACE_INFO(0x2c7c, 0x0512,
++					      USB_CLASS_VENDOR_SPEC,
++					      USB_SUBCLASS_VENDOR_SPEC,
++					      0xff),
++		.driver_info	    = (unsigned long)&qmi_wwan_info_quirk_dtr,
++	},
+ 
+ 	/* 3. Combined interface devices matching on interface number */
+ 	{QMI_FIXED_INTF(0x0408, 0xea42, 4)},	/* Yota / Megafon M100-1 */
+@@ -1343,17 +1350,20 @@ static bool quectel_ec20_detected(struct usb_interface *intf)
+ 	return false;
+ }
+ 
+-static bool quectel_ep06_diag_detected(struct usb_interface *intf)
++static bool quectel_diag_detected(struct usb_interface *intf)
+ {
+ 	struct usb_device *dev = interface_to_usbdev(intf);
+ 	struct usb_interface_descriptor intf_desc = intf->cur_altsetting->desc;
++	u16 id_vendor = le16_to_cpu(dev->descriptor.idVendor);
++	u16 id_product = le16_to_cpu(dev->descriptor.idProduct);
+ 
+-	if (le16_to_cpu(dev->descriptor.idVendor) == 0x2c7c &&
+-	    le16_to_cpu(dev->descriptor.idProduct) == 0x0306 &&
+-	    intf_desc.bNumEndpoints == 2)
+-		return true;
++	if (id_vendor != 0x2c7c || intf_desc.bNumEndpoints != 2)
++		return false;
+ 
+-	return false;
++	if (id_product == 0x0306 || id_product == 0x0512)
++		return true;
++	else
++		return false;
+ }
+ 
+ static int qmi_wwan_probe(struct usb_interface *intf,
+@@ -1390,13 +1400,13 @@ static int qmi_wwan_probe(struct usb_interface *intf,
+ 		return -ENODEV;
+ 	}
+ 
+-	/* Quectel EP06/EM06/EG06 supports dynamic interface configuration, so
++	/* Several Quectel modems supports dynamic interface configuration, so
+ 	 * we need to match on class/subclass/protocol. These values are
+ 	 * identical for the diagnostic- and QMI-interface, but bNumEndpoints is
+ 	 * different. Ignore the current interface if the number of endpoints
+ 	 * equals the number for the diag interface (two).
+ 	 */
+-	if (quectel_ep06_diag_detected(intf))
++	if (quectel_diag_detected(intf))
+ 		return -ENODEV;
+ 
+ 	return usbnet_probe(intf, id);
+diff --git a/drivers/staging/android/ashmem.c b/drivers/staging/android/ashmem.c
+index 90a8a9f1ac7d8..910826df4a316 100644
+--- a/drivers/staging/android/ashmem.c
++++ b/drivers/staging/android/ashmem.c
+@@ -75,6 +75,9 @@ struct ashmem_range {
+ /* LRU list of unpinned pages, protected by ashmem_mutex */
+ static LIST_HEAD(ashmem_lru_list);
+ 
++static atomic_t ashmem_shrink_inflight = ATOMIC_INIT(0);
++static DECLARE_WAIT_QUEUE_HEAD(ashmem_shrink_wait);
++
+ /*
+  * long lru_count - The count of pages on our LRU list.
+  *
+@@ -168,19 +171,15 @@ static inline void lru_del(struct ashmem_range *range)
+  * @end:	   The ending page (inclusive)
+  *
+  * This function is protected by ashmem_mutex.
+- *
+- * Return: 0 if successful, or -ENOMEM if there is an error
+  */
+-static int range_alloc(struct ashmem_area *asma,
+-		       struct ashmem_range *prev_range, unsigned int purged,
+-		       size_t start, size_t end)
++static void range_alloc(struct ashmem_area *asma,
++			struct ashmem_range *prev_range, unsigned int purged,
++			size_t start, size_t end,
++			struct ashmem_range **new_range)
+ {
+-	struct ashmem_range *range;
+-
+-	range = kmem_cache_zalloc(ashmem_range_cachep, GFP_KERNEL);
+-	if (!range)
+-		return -ENOMEM;
++	struct ashmem_range *range = *new_range;
+ 
++	*new_range = NULL;
+ 	range->asma = asma;
+ 	range->pgstart = start;
+ 	range->pgend = end;
+@@ -190,8 +189,6 @@ static int range_alloc(struct ashmem_area *asma,
+ 
+ 	if (range_on_lru(range))
+ 		lru_add(range);
+-
+-	return 0;
+ }
+ 
+ /**
+@@ -438,7 +435,6 @@ out:
+ static unsigned long
+ ashmem_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
+ {
+-	struct ashmem_range *range, *next;
+ 	unsigned long freed = 0;
+ 
+ 	/* We might recurse into filesystem code, so bail out if necessary */
+@@ -448,21 +444,33 @@ ashmem_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
+ 	if (!mutex_trylock(&ashmem_mutex))
+ 		return -1;
+ 
+-	list_for_each_entry_safe(range, next, &ashmem_lru_list, lru) {
++	while (!list_empty(&ashmem_lru_list)) {
++		struct ashmem_range *range =
++			list_first_entry(&ashmem_lru_list, typeof(*range), lru);
+ 		loff_t start = range->pgstart * PAGE_SIZE;
+ 		loff_t end = (range->pgend + 1) * PAGE_SIZE;
++		struct file *f = range->asma->file;
+ 
+-		range->asma->file->f_op->fallocate(range->asma->file,
+-				FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
+-				start, end - start);
++		get_file(f);
++		atomic_inc(&ashmem_shrink_inflight);
+ 		range->purged = ASHMEM_WAS_PURGED;
+ 		lru_del(range);
+ 
+ 		freed += range_size(range);
++		mutex_unlock(&ashmem_mutex);
++		f->f_op->fallocate(f,
++				   FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
++				   start, end - start);
++		fput(f);
++		if (atomic_dec_and_test(&ashmem_shrink_inflight))
++			wake_up_all(&ashmem_shrink_wait);
++		if (!mutex_trylock(&ashmem_mutex))
++			goto out;
+ 		if (--sc->nr_to_scan <= 0)
+ 			break;
+ 	}
+ 	mutex_unlock(&ashmem_mutex);
++out:
+ 	return freed;
+ }
+ 
+@@ -582,7 +590,8 @@ static int get_name(struct ashmem_area *asma, void __user *name)
+  *
+  * Caller must hold ashmem_mutex.
+  */
+-static int ashmem_pin(struct ashmem_area *asma, size_t pgstart, size_t pgend)
++static int ashmem_pin(struct ashmem_area *asma, size_t pgstart, size_t pgend,
++		      struct ashmem_range **new_range)
+ {
+ 	struct ashmem_range *range, *next;
+ 	int ret = ASHMEM_NOT_PURGED;
+@@ -635,7 +644,7 @@ static int ashmem_pin(struct ashmem_area *asma, size_t pgstart, size_t pgend)
+ 			 * second half and adjust the first chunk's endpoint.
+ 			 */
+ 			range_alloc(asma, range, range->purged,
+-				    pgend + 1, range->pgend);
++				    pgend + 1, range->pgend, new_range);
+ 			range_shrink(range, range->pgstart, pgstart - 1);
+ 			break;
+ 		}
+@@ -649,7 +658,8 @@ static int ashmem_pin(struct ashmem_area *asma, size_t pgstart, size_t pgend)
+  *
+  * Caller must hold ashmem_mutex.
+  */
+-static int ashmem_unpin(struct ashmem_area *asma, size_t pgstart, size_t pgend)
++static int ashmem_unpin(struct ashmem_area *asma, size_t pgstart, size_t pgend,
++			struct ashmem_range **new_range)
+ {
+ 	struct ashmem_range *range, *next;
+ 	unsigned int purged = ASHMEM_NOT_PURGED;
+@@ -675,7 +685,8 @@ restart:
+ 		}
+ 	}
+ 
+-	return range_alloc(asma, range, purged, pgstart, pgend);
++	range_alloc(asma, range, purged, pgstart, pgend, new_range);
++	return 0;
+ }
+ 
+ /*
+@@ -708,11 +719,19 @@ static int ashmem_pin_unpin(struct ashmem_area *asma, unsigned long cmd,
+ 	struct ashmem_pin pin;
+ 	size_t pgstart, pgend;
+ 	int ret = -EINVAL;
++	struct ashmem_range *range = NULL;
+ 
+ 	if (copy_from_user(&pin, p, sizeof(pin)))
+ 		return -EFAULT;
+ 
++	if (cmd == ASHMEM_PIN || cmd == ASHMEM_UNPIN) {
++		range = kmem_cache_zalloc(ashmem_range_cachep, GFP_KERNEL);
++		if (!range)
++			return -ENOMEM;
++	}
++
+ 	mutex_lock(&ashmem_mutex);
++	wait_event(ashmem_shrink_wait, !atomic_read(&ashmem_shrink_inflight));
+ 
+ 	if (!asma->file)
+ 		goto out_unlock;
+@@ -735,10 +754,10 @@ static int ashmem_pin_unpin(struct ashmem_area *asma, unsigned long cmd,
+ 
+ 	switch (cmd) {
+ 	case ASHMEM_PIN:
+-		ret = ashmem_pin(asma, pgstart, pgend);
++		ret = ashmem_pin(asma, pgstart, pgend, &range);
+ 		break;
+ 	case ASHMEM_UNPIN:
+-		ret = ashmem_unpin(asma, pgstart, pgend);
++		ret = ashmem_unpin(asma, pgstart, pgend, &range);
+ 		break;
+ 	case ASHMEM_GET_PIN_STATUS:
+ 		ret = ashmem_get_pin_status(asma, pgstart, pgend);
+@@ -747,6 +766,8 @@ static int ashmem_pin_unpin(struct ashmem_area *asma, unsigned long cmd,
+ 
+ out_unlock:
+ 	mutex_unlock(&ashmem_mutex);
++	if (range)
++		kmem_cache_free(ashmem_range_cachep, range);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/staging/android/ion/ion_system_heap.c b/drivers/staging/android/ion/ion_system_heap.c
+index 0383f7548d48e..20f2103a4ebfb 100644
+--- a/drivers/staging/android/ion/ion_system_heap.c
++++ b/drivers/staging/android/ion/ion_system_heap.c
+@@ -223,10 +223,10 @@ static void ion_system_heap_destroy_pools(struct ion_page_pool **pools)
+ static int ion_system_heap_create_pools(struct ion_page_pool **pools)
+ {
+ 	int i;
+-	gfp_t gfp_flags = low_order_gfp_flags;
+ 
+ 	for (i = 0; i < NUM_ORDERS; i++) {
+ 		struct ion_page_pool *pool;
++		gfp_t gfp_flags = low_order_gfp_flags;
+ 
+ 		if (orders[i] > 4)
+ 			gfp_flags = high_order_gfp_flags;
+diff --git a/drivers/staging/comedi/drivers/ni_660x.c b/drivers/staging/comedi/drivers/ni_660x.c
+index e70a461e723f8..405573e927cfc 100644
+--- a/drivers/staging/comedi/drivers/ni_660x.c
++++ b/drivers/staging/comedi/drivers/ni_660x.c
+@@ -656,6 +656,7 @@ static int ni_660x_set_pfi_routing(struct comedi_device *dev,
+ 	case NI_660X_PFI_OUTPUT_DIO:
+ 		if (chan > 31)
+ 			return -EINVAL;
++		break;
+ 	default:
+ 		return -EINVAL;
+ 	}
+diff --git a/drivers/staging/erofs/inode.c b/drivers/staging/erofs/inode.c
+index d7fbf5f4600f3..f99954dbfdb58 100644
+--- a/drivers/staging/erofs/inode.c
++++ b/drivers/staging/erofs/inode.c
+@@ -185,16 +185,16 @@ static int fill_inode(struct inode *inode, int isdir)
+ 		/* setup the new inode */
+ 		if (S_ISREG(inode->i_mode)) {
+ #ifdef CONFIG_EROFS_FS_XATTR
+-			if (vi->xattr_isize)
+-				inode->i_op = &erofs_generic_xattr_iops;
++			inode->i_op = &erofs_generic_xattr_iops;
+ #endif
+ 			inode->i_fop = &generic_ro_fops;
+ 		} else if (S_ISDIR(inode->i_mode)) {
+ 			inode->i_op =
+ #ifdef CONFIG_EROFS_FS_XATTR
+-				vi->xattr_isize ? &erofs_dir_xattr_iops :
+-#endif
++				&erofs_dir_xattr_iops;
++#else
+ 				&erofs_dir_iops;
++#endif
+ 			inode->i_fop = &erofs_dir_fops;
+ 		} else if (S_ISLNK(inode->i_mode)) {
+ 			/* by default, page_get_link is used for symlink */
+diff --git a/drivers/staging/erofs/internal.h b/drivers/staging/erofs/internal.h
+index e049d00c087a0..16249d7f08953 100644
+--- a/drivers/staging/erofs/internal.h
++++ b/drivers/staging/erofs/internal.h
+@@ -354,12 +354,17 @@ static inline erofs_off_t iloc(struct erofs_sb_info *sbi, erofs_nid_t nid)
+ 	return blknr_to_addr(sbi->meta_blkaddr) + (nid << sbi->islotbits);
+ }
+ 
+-#define inode_set_inited_xattr(inode)   (EROFS_V(inode)->flags |= 1)
+-#define inode_has_inited_xattr(inode)   (EROFS_V(inode)->flags & 1)
++/* atomic flag definitions */
++#define EROFS_V_EA_INITED_BIT	0
++
++/* bitlock definitions (arranged in reverse order) */
++#define EROFS_V_BL_XATTR_BIT	(BITS_PER_LONG - 1)
+ 
+ struct erofs_vnode {
+ 	erofs_nid_t nid;
+-	unsigned int flags;
++
++	/* atomic flags (including bitlocks) */
++	unsigned long flags;
+ 
+ 	unsigned char data_mapping_mode;
+ 	/* inline size in bytes */
+diff --git a/drivers/staging/erofs/unzip_vle.c b/drivers/staging/erofs/unzip_vle.c
+index 4ac1099a39c6c..ca2e8fd789591 100644
+--- a/drivers/staging/erofs/unzip_vle.c
++++ b/drivers/staging/erofs/unzip_vle.c
+@@ -107,15 +107,30 @@ enum z_erofs_vle_work_role {
+ 	Z_EROFS_VLE_WORK_SECONDARY,
+ 	Z_EROFS_VLE_WORK_PRIMARY,
+ 	/*
+-	 * The current work has at least been linked with the following
+-	 * processed chained works, which means if the processing page
+-	 * is the tail partial page of the work, the current work can
+-	 * safely use the whole page, as illustrated below:
+-	 * +--------------+-------------------------------------------+
+-	 * |  tail page   |      head page (of the previous work)     |
+-	 * +--------------+-------------------------------------------+
+-	 *   /\  which belongs to the current work
+-	 * [  (*) this page can be used for the current work itself.  ]
++	 * The current work was the tail of an exist chain, and the previous
++	 * processed chained works are all decided to be hooked up to it.
++	 * A new chain should be created for the remaining unprocessed works,
++	 * therefore different from Z_EROFS_VLE_WORK_PRIMARY_FOLLOWED,
++	 * the next work cannot reuse the whole page in the following scenario:
++	 *  ________________________________________________________________
++	 * |      tail (partial) page     |       head (partial) page       |
++	 * |  (belongs to the next work)  |  (belongs to the current work)  |
++	 * |_______PRIMARY_FOLLOWED_______|________PRIMARY_HOOKED___________|
++	 */
++	Z_EROFS_VLE_WORK_PRIMARY_HOOKED,
++	/*
++	 * The current work has been linked with the processed chained works,
++	 * and could be also linked with the potential remaining works, which
++	 * means if the processing page is the tail partial page of the work,
++	 * the current work can safely use the whole page (since the next work
++	 * is under control) for in-place decompression, as illustrated below:
++	 *  ________________________________________________________________
++	 * |  tail (partial) page  |          head (partial) page           |
++	 * | (of the current work) |         (of the previous work)         |
++	 * |  PRIMARY_FOLLOWED or  |                                        |
++	 * |_____PRIMARY_HOOKED____|____________PRIMARY_FOLLOWED____________|
++	 *
++	 * [  (*) the above page can be used for the current work itself.  ]
+ 	 */
+ 	Z_EROFS_VLE_WORK_PRIMARY_FOLLOWED,
+ 	Z_EROFS_VLE_WORK_MAX
+@@ -315,10 +330,10 @@ static int z_erofs_vle_work_add_page(
+ 	return ret ? 0 : -EAGAIN;
+ }
+ 
+-static inline bool try_to_claim_workgroup(
+-	struct z_erofs_vle_workgroup *grp,
+-	z_erofs_vle_owned_workgrp_t *owned_head,
+-	bool *hosted)
++static enum z_erofs_vle_work_role
++try_to_claim_workgroup(struct z_erofs_vle_workgroup *grp,
++		       z_erofs_vle_owned_workgrp_t *owned_head,
++		       bool *hosted)
+ {
+ 	DBG_BUGON(*hosted == true);
+ 
+@@ -332,6 +347,9 @@ retry:
+ 
+ 		*owned_head = &grp->next;
+ 		*hosted = true;
++		/* lucky, I am the followee :) */
++		return Z_EROFS_VLE_WORK_PRIMARY_FOLLOWED;
++
+ 	} else if (grp->next == Z_EROFS_VLE_WORKGRP_TAIL) {
+ 		/*
+ 		 * type 2, link to the end of a existing open chain,
+@@ -341,12 +359,11 @@ retry:
+ 		if (cmpxchg(&grp->next, Z_EROFS_VLE_WORKGRP_TAIL,
+ 			    *owned_head) != Z_EROFS_VLE_WORKGRP_TAIL)
+ 			goto retry;
+-
+ 		*owned_head = Z_EROFS_VLE_WORKGRP_TAIL;
+-	} else
+-		return false;	/* :( better luck next time */
++		return Z_EROFS_VLE_WORK_PRIMARY_HOOKED;
++	}
+ 
+-	return true;	/* lucky, I am the followee :) */
++	return Z_EROFS_VLE_WORK_PRIMARY; /* :( better luck next time */
+ }
+ 
+ struct z_erofs_vle_work_finder {
+@@ -424,12 +441,9 @@ z_erofs_vle_work_lookup(const struct z_erofs_vle_work_finder *f)
+ 	*f->hosted = false;
+ 	if (!primary)
+ 		*f->role = Z_EROFS_VLE_WORK_SECONDARY;
+-	/* claim the workgroup if possible */
+-	else if (try_to_claim_workgroup(grp, f->owned_head, f->hosted))
+-		*f->role = Z_EROFS_VLE_WORK_PRIMARY_FOLLOWED;
+-	else
+-		*f->role = Z_EROFS_VLE_WORK_PRIMARY;
+-
++	else	/* claim the workgroup if possible */
++		*f->role = try_to_claim_workgroup(grp, f->owned_head,
++						  f->hosted);
+ 	return work;
+ }
+ 
+@@ -493,6 +507,9 @@ z_erofs_vle_work_register(const struct z_erofs_vle_work_finder *f,
+ 	return work;
+ }
+ 
++#define builder_is_hooked(builder) \
++	((builder)->role >= Z_EROFS_VLE_WORK_PRIMARY_HOOKED)
++
+ #define builder_is_followed(builder) \
+ 	((builder)->role >= Z_EROFS_VLE_WORK_PRIMARY_FOLLOWED)
+ 
+@@ -686,7 +703,7 @@ static int z_erofs_do_read_page(struct z_erofs_vle_frontend *fe,
+ 	struct z_erofs_vle_work_builder *const builder = &fe->builder;
+ 	const loff_t offset = page_offset(page);
+ 
+-	bool tight = builder_is_followed(builder);
++	bool tight = builder_is_hooked(builder);
+ 	struct z_erofs_vle_work *work = builder->work;
+ 
+ 	enum z_erofs_cache_alloctype cache_strategy;
+@@ -704,8 +721,12 @@ repeat:
+ 
+ 	/* lucky, within the range of the current map_blocks */
+ 	if (offset + cur >= map->m_la &&
+-		offset + cur < map->m_la + map->m_llen)
++		offset + cur < map->m_la + map->m_llen) {
++		/* didn't get a valid unzip work previously (very rare) */
++		if (!builder->work)
++			goto restart_now;
+ 		goto hitted;
++	}
+ 
+ 	/* go ahead the next map_blocks */
+ 	debugln("%s: [out-of-range] pos %llu", __func__, offset + cur);
+@@ -719,6 +740,7 @@ repeat:
+ 	if (unlikely(err))
+ 		goto err_out;
+ 
++restart_now:
+ 	if (unlikely(!(map->m_flags & EROFS_MAP_MAPPED)))
+ 		goto hitted;
+ 
+@@ -740,7 +762,7 @@ repeat:
+ 				 map->m_plen / PAGE_SIZE,
+ 				 cache_strategy, page_pool, GFP_KERNEL);
+ 
+-	tight &= builder_is_followed(builder);
++	tight &= builder_is_hooked(builder);
+ 	work = builder->work;
+ hitted:
+ 	cur = end - min_t(unsigned int, offset + end - map->m_la, end);
+@@ -755,6 +777,9 @@ hitted:
+ 			(tight ? Z_EROFS_PAGE_TYPE_EXCLUSIVE :
+ 				Z_EROFS_VLE_PAGE_TYPE_TAIL_SHARED));
+ 
++	if (cur)
++		tight &= builder_is_followed(builder);
++
+ retry:
+ 	err = z_erofs_vle_work_add_page(builder, page, page_type);
+ 	/* should allocate an additional staging page for pagevec */
+diff --git a/drivers/staging/erofs/xattr.c b/drivers/staging/erofs/xattr.c
+index 80dca6a4adbe2..6cb05ae312338 100644
+--- a/drivers/staging/erofs/xattr.c
++++ b/drivers/staging/erofs/xattr.c
+@@ -44,19 +44,48 @@ static inline void xattr_iter_end_final(struct xattr_iter *it)
+ 
+ static int init_inode_xattrs(struct inode *inode)
+ {
++	struct erofs_vnode *const vi = EROFS_V(inode);
+ 	struct xattr_iter it;
+ 	unsigned int i;
+ 	struct erofs_xattr_ibody_header *ih;
+ 	struct super_block *sb;
+ 	struct erofs_sb_info *sbi;
+-	struct erofs_vnode *vi;
+ 	bool atomic_map;
++	int ret = 0;
+ 
+-	if (likely(inode_has_inited_xattr(inode)))
++	/* the most case is that xattrs of this inode are initialized. */
++	if (test_bit(EROFS_V_EA_INITED_BIT, &vi->flags))
+ 		return 0;
+ 
+-	vi = EROFS_V(inode);
+-	BUG_ON(!vi->xattr_isize);
++	if (wait_on_bit_lock(&vi->flags, EROFS_V_BL_XATTR_BIT, TASK_KILLABLE))
++		return -ERESTARTSYS;
++
++	/* someone has initialized xattrs for us? */
++	if (test_bit(EROFS_V_EA_INITED_BIT, &vi->flags))
++		goto out_unlock;
++
++	/*
++	 * bypass all xattr operations if ->xattr_isize is not greater than
++	 * sizeof(struct erofs_xattr_ibody_header), in detail:
++	 * 1) it is not enough to contain erofs_xattr_ibody_header then
++	 *    ->xattr_isize should be 0 (it means no xattr);
++	 * 2) it is just to contain erofs_xattr_ibody_header, which is on-disk
++	 *    undefined right now (maybe use later with some new sb feature).
++	 */
++	if (vi->xattr_isize == sizeof(struct erofs_xattr_ibody_header)) {
++		errln("xattr_isize %d of nid %llu is not supported yet",
++		      vi->xattr_isize, vi->nid);
++		ret = -ENOTSUPP;
++		goto out_unlock;
++	} else if (vi->xattr_isize < sizeof(struct erofs_xattr_ibody_header)) {
++		if (unlikely(vi->xattr_isize)) {
++			DBG_BUGON(1);
++			ret = -EIO;
++			goto out_unlock;	/* xattr ondisk layout error */
++		}
++		ret = -ENOATTR;
++		goto out_unlock;
++	}
+ 
+ 	sb = inode->i_sb;
+ 	sbi = EROFS_SB(sb);
+@@ -64,8 +93,10 @@ static int init_inode_xattrs(struct inode *inode)
+ 	it.ofs = erofs_blkoff(iloc(sbi, vi->nid) + vi->inode_isize);
+ 
+ 	it.page = erofs_get_inline_page(inode, it.blkaddr);
+-	if (IS_ERR(it.page))
+-		return PTR_ERR(it.page);
++	if (IS_ERR(it.page)) {
++		ret = PTR_ERR(it.page);
++		goto out_unlock;
++	}
+ 
+ 	/* read in shared xattr array (non-atomic, see kmalloc below) */
+ 	it.kaddr = kmap(it.page);
+@@ -78,7 +109,8 @@ static int init_inode_xattrs(struct inode *inode)
+ 						sizeof(uint), GFP_KERNEL);
+ 	if (vi->xattr_shared_xattrs == NULL) {
+ 		xattr_iter_end(&it, atomic_map);
+-		return -ENOMEM;
++		ret = -ENOMEM;
++		goto out_unlock;
+ 	}
+ 
+ 	/* let's skip ibody header */
+@@ -92,8 +124,12 @@ static int init_inode_xattrs(struct inode *inode)
+ 
+ 			it.page = erofs_get_meta_page(sb,
+ 				++it.blkaddr, S_ISDIR(inode->i_mode));
+-			if (IS_ERR(it.page))
+-				return PTR_ERR(it.page);
++			if (IS_ERR(it.page)) {
++				kfree(vi->xattr_shared_xattrs);
++				vi->xattr_shared_xattrs = NULL;
++				ret = PTR_ERR(it.page);
++				goto out_unlock;
++			}
+ 
+ 			it.kaddr = kmap_atomic(it.page);
+ 			atomic_map = true;
+@@ -105,8 +141,11 @@ static int init_inode_xattrs(struct inode *inode)
+ 	}
+ 	xattr_iter_end(&it, atomic_map);
+ 
+-	inode_set_inited_xattr(inode);
+-	return 0;
++	set_bit(EROFS_V_EA_INITED_BIT, &vi->flags);
++
++out_unlock:
++	clear_and_wake_up_bit(EROFS_V_BL_XATTR_BIT, &vi->flags);
++	return ret;
+ }
+ 
+ /*
+@@ -422,7 +461,6 @@ static int erofs_xattr_generic_get(const struct xattr_handler *handler,
+ 		struct dentry *unused, struct inode *inode,
+ 		const char *name, void *buffer, size_t size)
+ {
+-	struct erofs_vnode *const vi = EROFS_V(inode);
+ 	struct erofs_sb_info *const sbi = EROFS_I_SB(inode);
+ 
+ 	switch (handler->flags) {
+@@ -440,9 +478,6 @@ static int erofs_xattr_generic_get(const struct xattr_handler *handler,
+ 		return -EINVAL;
+ 	}
+ 
+-	if (!vi->xattr_isize)
+-		return -ENOATTR;
+-
+ 	return erofs_getxattr(inode, handler->flags, name, buffer, size);
+ }
+ 
+diff --git a/drivers/staging/wilc1000/linux_wlan.c b/drivers/staging/wilc1000/linux_wlan.c
+index 721689048648e..5e5149c9a92d9 100644
+--- a/drivers/staging/wilc1000/linux_wlan.c
++++ b/drivers/staging/wilc1000/linux_wlan.c
+@@ -1086,8 +1086,8 @@ int wilc_netdev_init(struct wilc **wilc, struct device *dev, int io_type,
+ 		vif->wilc = *wilc;
+ 		vif->ndev = ndev;
+ 		wl->vif[i] = vif;
+-		wl->vif_num = i;
+-		vif->idx = wl->vif_num;
++		wl->vif_num = i + 1;
++		vif->idx = i;
+ 
+ 		ndev->netdev_ops = &wilc_netdev_ops;
+ 
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index a9ec7051f2864..c2fe218e051f0 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -194,6 +194,7 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 		xhci->quirks |= XHCI_SSIC_PORT_UNUSED;
+ 	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
+ 	    (pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI ||
++	     pdev->device == PCI_DEVICE_ID_INTEL_SUNRISEPOINT_LP_XHCI ||
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_APL_XHCI))
+ 		xhci->quirks |= XHCI_INTEL_USB_ROLE_SW;
+ 	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
+diff --git a/drivers/usb/host/xhci-tegra.c b/drivers/usb/host/xhci-tegra.c
+index 938ff06c03495..efb0cad8710e3 100644
+--- a/drivers/usb/host/xhci-tegra.c
++++ b/drivers/usb/host/xhci-tegra.c
+@@ -941,9 +941,9 @@ static void tegra_xusb_powerdomain_remove(struct device *dev,
+ 		device_link_del(tegra->genpd_dl_ss);
+ 	if (tegra->genpd_dl_host)
+ 		device_link_del(tegra->genpd_dl_host);
+-	if (tegra->genpd_dev_ss)
++	if (!IS_ERR_OR_NULL(tegra->genpd_dev_ss))
+ 		dev_pm_domain_detach(tegra->genpd_dev_ss, true);
+-	if (tegra->genpd_dev_host)
++	if (!IS_ERR_OR_NULL(tegra->genpd_dev_host))
+ 		dev_pm_domain_detach(tegra->genpd_dev_host, true);
+ }
+ 
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index c0777a374a88f..4c66edf533fe9 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -61,6 +61,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x08e6, 0x5501) }, /* Gemalto Prox-PU/CU contactless smartcard reader */
+ 	{ USB_DEVICE(0x08FD, 0x000A) }, /* Digianswer A/S , ZigBee/802.15.4 MAC Device */
+ 	{ USB_DEVICE(0x0908, 0x01FF) }, /* Siemens RUGGEDCOM USB Serial Console */
++	{ USB_DEVICE(0x0B00, 0x3070) }, /* Ingenico 3070 */
+ 	{ USB_DEVICE(0x0BED, 0x1100) }, /* MEI (TM) Cashflow-SC Bill/Voucher Acceptor */
+ 	{ USB_DEVICE(0x0BED, 0x1101) }, /* MEI series 2000 Combo Acceptor */
+ 	{ USB_DEVICE(0x0FCF, 0x1003) }, /* Dynastream ANT development board */
+@@ -1353,8 +1354,13 @@ static int cp210x_gpio_get(struct gpio_chip *gc, unsigned int gpio)
+ 	if (priv->partnum == CP210X_PARTNUM_CP2105)
+ 		req_type = REQTYPE_INTERFACE_TO_HOST;
+ 
++	result = usb_autopm_get_interface(serial->interface);
++	if (result)
++		return result;
++
+ 	result = cp210x_read_vendor_block(serial, req_type,
+ 					  CP210X_READ_LATCH, &buf, sizeof(buf));
++	usb_autopm_put_interface(serial->interface);
+ 	if (result < 0)
+ 		return result;
+ 
+@@ -1375,6 +1381,10 @@ static void cp210x_gpio_set(struct gpio_chip *gc, unsigned int gpio, int value)
+ 
+ 	buf.mask = BIT(gpio);
+ 
++	result = usb_autopm_get_interface(serial->interface);
++	if (result)
++		goto out;
++
+ 	if (priv->partnum == CP210X_PARTNUM_CP2105) {
+ 		result = cp210x_write_vendor_block(serial,
+ 						   REQTYPE_HOST_TO_INTERFACE,
+@@ -1392,6 +1402,8 @@ static void cp210x_gpio_set(struct gpio_chip *gc, unsigned int gpio, int value)
+ 					 NULL, 0, USB_CTRL_SET_TIMEOUT);
+ 	}
+ 
++	usb_autopm_put_interface(serial->interface);
++out:
+ 	if (result < 0) {
+ 		dev_err(&serial->interface->dev, "failed to set GPIO value: %d\n",
+ 				result);
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index 77ef4c481f3ce..8f5b174717594 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -1025,6 +1025,8 @@ static const struct usb_device_id id_table_combined[] = {
+ 	{ USB_DEVICE(CYPRESS_VID, CYPRESS_WICED_BT_USB_PID) },
+ 	{ USB_DEVICE(CYPRESS_VID, CYPRESS_WICED_WL_USB_PID) },
+ 	{ USB_DEVICE(AIRBUS_DS_VID, AIRBUS_DS_P8GR) },
++	/* EZPrototypes devices */
++	{ USB_DEVICE(EZPROTOTYPES_VID, HJELMSLUND_USB485_ISO_PID) },
+ 	{ }					/* Terminating entry */
+ };
+ 
+diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
+index 975d02666c5a0..b863bedb55a13 100644
+--- a/drivers/usb/serial/ftdi_sio_ids.h
++++ b/drivers/usb/serial/ftdi_sio_ids.h
+@@ -1308,6 +1308,12 @@
+ #define IONICS_VID			0x1c0c
+ #define IONICS_PLUGCOMPUTER_PID		0x0102
+ 
++/*
++ * EZPrototypes (PID reseller)
++ */
++#define EZPROTOTYPES_VID		0x1c40
++#define HJELMSLUND_USB485_ISO_PID	0x0477
++
+ /*
+  * Dresden Elektronik Sensor Terminal Board
+  */
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index aef15497ff31f..11b21d9410f35 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1148,6 +1148,8 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
+ 	  .driver_info = NCTRL(0) | RSVD(3) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1102, 0xff),	/* Telit ME910 (ECM) */
++	  .driver_info = NCTRL(0) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE910),
+ 	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE910_USBCFG4),
+diff --git a/fs/aio.c b/fs/aio.c
+index aaaaf4d12c739..528d03680526f 100644
+--- a/fs/aio.c
++++ b/fs/aio.c
+@@ -1680,6 +1680,7 @@ static int aio_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
+ 	struct poll_iocb *req = container_of(wait, struct poll_iocb, wait);
+ 	struct aio_kiocb *iocb = container_of(req, struct aio_kiocb, poll);
+ 	__poll_t mask = key_to_poll(key);
++	unsigned long flags;
+ 
+ 	req->woken = true;
+ 
+@@ -1688,10 +1689,15 @@ static int aio_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
+ 		if (!(mask & req->events))
+ 			return 0;
+ 
+-		/* try to complete the iocb inline if we can: */
+-		if (spin_trylock(&iocb->ki_ctx->ctx_lock)) {
++		/*
++		 * Try to complete the iocb inline if we can. Use
++		 * irqsave/irqrestore because not all filesystems (e.g. fuse)
++		 * call this function with IRQs disabled and because IRQs
++		 * have to be disabled before ctx_lock is obtained.
++		 */
++		if (spin_trylock_irqsave(&iocb->ki_ctx->ctx_lock, flags)) {
+ 			list_del(&iocb->ki_list);
+-			spin_unlock(&iocb->ki_ctx->ctx_lock);
++			spin_unlock_irqrestore(&iocb->ki_ctx->ctx_lock, flags);
+ 
+ 			list_del_init(&req->wait.entry);
+ 			aio_poll_complete(iocb, mask);
+diff --git a/fs/exec.c b/fs/exec.c
+index fb72d36f7823e..bcf383730bea9 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -932,7 +932,7 @@ int kernel_read_file(struct file *file, void **buf, loff_t *size,
+ 		bytes = kernel_read(file, *buf + pos, i_size - pos, &pos);
+ 		if (bytes < 0) {
+ 			ret = bytes;
+-			goto out;
++			goto out_free;
+ 		}
+ 
+ 		if (bytes == 0)
+diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h
+index c86d6d8bdfed2..0b427d5df0fea 100644
+--- a/include/linux/cpufreq.h
++++ b/include/linux/cpufreq.h
+@@ -254,20 +254,12 @@ __ATTR(_name, 0644, show_##_name, store_##_name)
+ static struct freq_attr _name =			\
+ __ATTR(_name, 0200, NULL, store_##_name)
+ 
+-struct global_attr {
+-	struct attribute attr;
+-	ssize_t (*show)(struct kobject *kobj,
+-			struct attribute *attr, char *buf);
+-	ssize_t (*store)(struct kobject *a, struct attribute *b,
+-			 const char *c, size_t count);
+-};
+-
+ #define define_one_global_ro(_name)		\
+-static struct global_attr _name =		\
++static struct kobj_attribute _name =		\
+ __ATTR(_name, 0444, show_##_name, NULL)
+ 
+ #define define_one_global_rw(_name)		\
+-static struct global_attr _name =		\
++static struct kobj_attribute _name =		\
+ __ATTR(_name, 0644, show_##_name, store_##_name)
+ 
+ 
+diff --git a/include/net/bluetooth/bluetooth.h b/include/net/bluetooth/bluetooth.h
+index ec9d6bc658559..fabee6db0abb7 100644
+--- a/include/net/bluetooth/bluetooth.h
++++ b/include/net/bluetooth/bluetooth.h
+@@ -276,7 +276,7 @@ int  bt_sock_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg);
+ int  bt_sock_wait_state(struct sock *sk, int state, unsigned long timeo);
+ int  bt_sock_wait_ready(struct sock *sk, unsigned long flags);
+ 
+-void bt_accept_enqueue(struct sock *parent, struct sock *sk);
++void bt_accept_enqueue(struct sock *parent, struct sock *sk, bool bh);
+ void bt_accept_unlink(struct sock *sk);
+ struct sock *bt_accept_dequeue(struct sock *parent, struct socket *newsock);
+ 
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index 9481f2c142e26..e7eb4aa6ccc94 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -51,7 +51,10 @@ struct qdisc_size_table {
+ struct qdisc_skb_head {
+ 	struct sk_buff	*head;
+ 	struct sk_buff	*tail;
+-	__u32		qlen;
++	union {
++		u32		qlen;
++		atomic_t	atomic_qlen;
++	};
+ 	spinlock_t	lock;
+ };
+ 
+@@ -408,27 +411,19 @@ static inline void qdisc_cb_private_validate(const struct sk_buff *skb, int sz)
+ 	BUILD_BUG_ON(sizeof(qcb->data) < sz);
+ }
+ 
+-static inline int qdisc_qlen_cpu(const struct Qdisc *q)
+-{
+-	return this_cpu_ptr(q->cpu_qstats)->qlen;
+-}
+-
+ static inline int qdisc_qlen(const struct Qdisc *q)
+ {
+ 	return q->q.qlen;
+ }
+ 
+-static inline int qdisc_qlen_sum(const struct Qdisc *q)
++static inline u32 qdisc_qlen_sum(const struct Qdisc *q)
+ {
+-	__u32 qlen = q->qstats.qlen;
+-	int i;
++	u32 qlen = q->qstats.qlen;
+ 
+-	if (q->flags & TCQ_F_NOLOCK) {
+-		for_each_possible_cpu(i)
+-			qlen += per_cpu_ptr(q->cpu_qstats, i)->qlen;
+-	} else {
++	if (q->flags & TCQ_F_NOLOCK)
++		qlen += atomic_read(&q->q.atomic_qlen);
++	else
+ 		qlen += q->q.qlen;
+-	}
+ 
+ 	return qlen;
+ }
+@@ -825,14 +820,14 @@ static inline void qdisc_qstats_cpu_backlog_inc(struct Qdisc *sch,
+ 	this_cpu_add(sch->cpu_qstats->backlog, qdisc_pkt_len(skb));
+ }
+ 
+-static inline void qdisc_qstats_cpu_qlen_inc(struct Qdisc *sch)
++static inline void qdisc_qstats_atomic_qlen_inc(struct Qdisc *sch)
+ {
+-	this_cpu_inc(sch->cpu_qstats->qlen);
++	atomic_inc(&sch->q.atomic_qlen);
+ }
+ 
+-static inline void qdisc_qstats_cpu_qlen_dec(struct Qdisc *sch)
++static inline void qdisc_qstats_atomic_qlen_dec(struct Qdisc *sch)
+ {
+-	this_cpu_dec(sch->cpu_qstats->qlen);
++	atomic_dec(&sch->q.atomic_qlen);
+ }
+ 
+ static inline void qdisc_qstats_cpu_requeues_inc(struct Qdisc *sch)
+diff --git a/include/uapi/linux/android/binder.h b/include/uapi/linux/android/binder.h
+index b9ba520f7e4bb..2832134e53971 100644
+--- a/include/uapi/linux/android/binder.h
++++ b/include/uapi/linux/android/binder.h
+@@ -41,6 +41,14 @@ enum {
+ enum {
+ 	FLAT_BINDER_FLAG_PRIORITY_MASK = 0xff,
+ 	FLAT_BINDER_FLAG_ACCEPTS_FDS = 0x100,
++
++	/**
++	 * @FLAT_BINDER_FLAG_TXN_SECURITY_CTX: request security contexts
++	 *
++	 * Only when set, causes senders to include their security
++	 * context
++	 */
++	FLAT_BINDER_FLAG_TXN_SECURITY_CTX = 0x1000,
+ };
+ 
+ #ifdef BINDER_IPC_32BIT
+@@ -218,6 +226,7 @@ struct binder_node_info_for_ref {
+ #define BINDER_VERSION			_IOWR('b', 9, struct binder_version)
+ #define BINDER_GET_NODE_DEBUG_INFO	_IOWR('b', 11, struct binder_node_debug_info)
+ #define BINDER_GET_NODE_INFO_FOR_REF	_IOWR('b', 12, struct binder_node_info_for_ref)
++#define BINDER_SET_CONTEXT_MGR_EXT	_IOW('b', 13, struct flat_binder_object)
+ 
+ /*
+  * NOTE: Two special error codes you should check for when calling
+@@ -276,6 +285,11 @@ struct binder_transaction_data {
+ 	} data;
+ };
+ 
++struct binder_transaction_data_secctx {
++	struct binder_transaction_data transaction_data;
++	binder_uintptr_t secctx;
++};
++
+ struct binder_transaction_data_sg {
+ 	struct binder_transaction_data transaction_data;
+ 	binder_size_t buffers_size;
+@@ -311,6 +325,11 @@ enum binder_driver_return_protocol {
+ 	BR_OK = _IO('r', 1),
+ 	/* No parameters! */
+ 
++	BR_TRANSACTION_SEC_CTX = _IOR('r', 2,
++				      struct binder_transaction_data_secctx),
++	/*
++	 * binder_transaction_data_secctx: the received command.
++	 */
+ 	BR_TRANSACTION = _IOR('r', 2, struct binder_transaction_data),
+ 	BR_REPLY = _IOR('r', 3, struct binder_transaction_data),
+ 	/*
+diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c
+index 27821480105e6..217ef481fbbb6 100644
+--- a/kernel/trace/trace_events_filter.c
++++ b/kernel/trace/trace_events_filter.c
+@@ -1301,7 +1301,7 @@ static int parse_pred(const char *str, void *data,
+ 		/* go past the last quote */
+ 		i++;
+ 
+-	} else if (isdigit(str[i])) {
++	} else if (isdigit(str[i]) || str[i] == '-') {
+ 
+ 		/* Make sure the field is not a string */
+ 		if (is_string_field(field)) {
+@@ -1314,6 +1314,9 @@ static int parse_pred(const char *str, void *data,
+ 			goto err_free;
+ 		}
+ 
++		if (str[i] == '-')
++			i++;
++
+ 		/* We allow 0xDEADBEEF */
+ 		while (isalnum(str[i]))
+ 			i++;
+diff --git a/net/bluetooth/af_bluetooth.c b/net/bluetooth/af_bluetooth.c
+index deacc52d7ff18..8d12198eaa949 100644
+--- a/net/bluetooth/af_bluetooth.c
++++ b/net/bluetooth/af_bluetooth.c
+@@ -154,15 +154,25 @@ void bt_sock_unlink(struct bt_sock_list *l, struct sock *sk)
+ }
+ EXPORT_SYMBOL(bt_sock_unlink);
+ 
+-void bt_accept_enqueue(struct sock *parent, struct sock *sk)
++void bt_accept_enqueue(struct sock *parent, struct sock *sk, bool bh)
+ {
+ 	BT_DBG("parent %p, sk %p", parent, sk);
+ 
+ 	sock_hold(sk);
+-	lock_sock_nested(sk, SINGLE_DEPTH_NESTING);
++
++	if (bh)
++		bh_lock_sock_nested(sk);
++	else
++		lock_sock_nested(sk, SINGLE_DEPTH_NESTING);
++
+ 	list_add_tail(&bt_sk(sk)->accept_q, &bt_sk(parent)->accept_q);
+ 	bt_sk(sk)->parent = parent;
+-	release_sock(sk);
++
++	if (bh)
++		bh_unlock_sock(sk);
++	else
++		release_sock(sk);
++
+ 	parent->sk_ack_backlog++;
+ }
+ EXPORT_SYMBOL(bt_accept_enqueue);
+diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
+index 686bdc6b35b03..a3a2cd55e23a9 100644
+--- a/net/bluetooth/l2cap_sock.c
++++ b/net/bluetooth/l2cap_sock.c
+@@ -1252,7 +1252,7 @@ static struct l2cap_chan *l2cap_sock_new_connection_cb(struct l2cap_chan *chan)
+ 
+ 	l2cap_sock_init(sk, parent);
+ 
+-	bt_accept_enqueue(parent, sk);
++	bt_accept_enqueue(parent, sk, false);
+ 
+ 	release_sock(parent);
+ 
+diff --git a/net/bluetooth/rfcomm/sock.c b/net/bluetooth/rfcomm/sock.c
+index aa0db1d1bd9b4..b1f49fcc04780 100644
+--- a/net/bluetooth/rfcomm/sock.c
++++ b/net/bluetooth/rfcomm/sock.c
+@@ -988,7 +988,7 @@ int rfcomm_connect_ind(struct rfcomm_session *s, u8 channel, struct rfcomm_dlc *
+ 	rfcomm_pi(sk)->channel = channel;
+ 
+ 	sk->sk_state = BT_CONFIG;
+-	bt_accept_enqueue(parent, sk);
++	bt_accept_enqueue(parent, sk, true);
+ 
+ 	/* Accept connection and return socket DLC */
+ 	*d = rfcomm_pi(sk)->dlc;
+diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c
+index 529b38996d8bc..9a580999ca57e 100644
+--- a/net/bluetooth/sco.c
++++ b/net/bluetooth/sco.c
+@@ -193,7 +193,7 @@ static void __sco_chan_add(struct sco_conn *conn, struct sock *sk,
+ 	conn->sk = sk;
+ 
+ 	if (parent)
+-		bt_accept_enqueue(parent, sk);
++		bt_accept_enqueue(parent, sk, true);
+ }
+ 
+ static int sco_chan_add(struct sco_conn *conn, struct sock *sk,
+diff --git a/net/core/gen_stats.c b/net/core/gen_stats.c
+index 9bf1b9ad17806..ac679f74ba475 100644
+--- a/net/core/gen_stats.c
++++ b/net/core/gen_stats.c
+@@ -291,7 +291,6 @@ __gnet_stats_copy_queue_cpu(struct gnet_stats_queue *qstats,
+ 	for_each_possible_cpu(i) {
+ 		const struct gnet_stats_queue *qcpu = per_cpu_ptr(q, i);
+ 
+-		qstats->qlen = 0;
+ 		qstats->backlog += qcpu->backlog;
+ 		qstats->drops += qcpu->drops;
+ 		qstats->requeues += qcpu->requeues;
+@@ -307,7 +306,6 @@ void __gnet_stats_copy_queue(struct gnet_stats_queue *qstats,
+ 	if (cpu) {
+ 		__gnet_stats_copy_queue_cpu(qstats, cpu);
+ 	} else {
+-		qstats->qlen = q->qlen;
+ 		qstats->backlog = q->backlog;
+ 		qstats->drops = q->drops;
+ 		qstats->requeues = q->requeues;
+diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
+index ff9fd2bb4ce43..73ad7607dcd1a 100644
+--- a/net/core/net-sysfs.c
++++ b/net/core/net-sysfs.c
+@@ -1547,6 +1547,9 @@ static int register_queue_kobjects(struct net_device *dev)
+ error:
+ 	netdev_queue_update_kobjects(dev, txq, 0);
+ 	net_rx_queue_update_kobjects(dev, rxq, 0);
++#ifdef CONFIG_SYSFS
++	kset_unregister(dev->queues_kset);
++#endif
+ 	return error;
+ }
+ 
+diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
+index cc01aa3f2b5e3..af91a1a402f13 100644
+--- a/net/ipv6/ip6mr.c
++++ b/net/ipv6/ip6mr.c
+@@ -1964,10 +1964,10 @@ int ip6mr_compat_ioctl(struct sock *sk, unsigned int cmd, void __user *arg)
+ 
+ static inline int ip6mr_forward2_finish(struct net *net, struct sock *sk, struct sk_buff *skb)
+ {
+-	__IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)),
+-			IPSTATS_MIB_OUTFORWDATAGRAMS);
+-	__IP6_ADD_STATS(net, ip6_dst_idev(skb_dst(skb)),
+-			IPSTATS_MIB_OUTOCTETS, skb->len);
++	IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)),
++		      IPSTATS_MIB_OUTFORWDATAGRAMS);
++	IP6_ADD_STATS(net, ip6_dst_idev(skb_dst(skb)),
++		      IPSTATS_MIB_OUTOCTETS, skb->len);
+ 	return dst_output(net, sk, skb);
+ }
+ 
+diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
+index 968a85fe4d4a9..de31f2f3b9730 100644
+--- a/net/sched/sch_generic.c
++++ b/net/sched/sch_generic.c
+@@ -68,7 +68,7 @@ static inline struct sk_buff *__skb_dequeue_bad_txq(struct Qdisc *q)
+ 			skb = __skb_dequeue(&q->skb_bad_txq);
+ 			if (qdisc_is_percpu_stats(q)) {
+ 				qdisc_qstats_cpu_backlog_dec(q, skb);
+-				qdisc_qstats_cpu_qlen_dec(q);
++				qdisc_qstats_atomic_qlen_dec(q);
+ 			} else {
+ 				qdisc_qstats_backlog_dec(q, skb);
+ 				q->q.qlen--;
+@@ -108,7 +108,7 @@ static inline void qdisc_enqueue_skb_bad_txq(struct Qdisc *q,
+ 
+ 	if (qdisc_is_percpu_stats(q)) {
+ 		qdisc_qstats_cpu_backlog_inc(q, skb);
+-		qdisc_qstats_cpu_qlen_inc(q);
++		qdisc_qstats_atomic_qlen_inc(q);
+ 	} else {
+ 		qdisc_qstats_backlog_inc(q, skb);
+ 		q->q.qlen++;
+@@ -147,7 +147,7 @@ static inline int dev_requeue_skb_locked(struct sk_buff *skb, struct Qdisc *q)
+ 
+ 		qdisc_qstats_cpu_requeues_inc(q);
+ 		qdisc_qstats_cpu_backlog_inc(q, skb);
+-		qdisc_qstats_cpu_qlen_inc(q);
++		qdisc_qstats_atomic_qlen_inc(q);
+ 
+ 		skb = next;
+ 	}
+@@ -252,7 +252,7 @@ static struct sk_buff *dequeue_skb(struct Qdisc *q, bool *validate,
+ 			skb = __skb_dequeue(&q->gso_skb);
+ 			if (qdisc_is_percpu_stats(q)) {
+ 				qdisc_qstats_cpu_backlog_dec(q, skb);
+-				qdisc_qstats_cpu_qlen_dec(q);
++				qdisc_qstats_atomic_qlen_dec(q);
+ 			} else {
+ 				qdisc_qstats_backlog_dec(q, skb);
+ 				q->q.qlen--;
+@@ -645,7 +645,7 @@ static int pfifo_fast_enqueue(struct sk_buff *skb, struct Qdisc *qdisc,
+ 	if (unlikely(err))
+ 		return qdisc_drop_cpu(skb, qdisc, to_free);
+ 
+-	qdisc_qstats_cpu_qlen_inc(qdisc);
++	qdisc_qstats_atomic_qlen_inc(qdisc);
+ 	/* Note: skb can not be used after skb_array_produce(),
+ 	 * so we better not use qdisc_qstats_cpu_backlog_inc()
+ 	 */
+@@ -670,7 +670,7 @@ static struct sk_buff *pfifo_fast_dequeue(struct Qdisc *qdisc)
+ 	if (likely(skb)) {
+ 		qdisc_qstats_cpu_backlog_dec(qdisc, skb);
+ 		qdisc_bstats_cpu_update(qdisc, skb);
+-		qdisc_qstats_cpu_qlen_dec(qdisc);
++		qdisc_qstats_atomic_qlen_dec(qdisc);
+ 	}
+ 
+ 	return skb;
+@@ -714,7 +714,6 @@ static void pfifo_fast_reset(struct Qdisc *qdisc)
+ 		struct gnet_stats_queue *q = per_cpu_ptr(qdisc->cpu_qstats, i);
+ 
+ 		q->backlog = 0;
+-		q->qlen = 0;
+ 	}
+ }
+ 
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 65d6d04546aee..a2771b3b3c148 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -1866,6 +1866,7 @@ static int sctp_sendmsg_check_sflags(struct sctp_association *asoc,
+ 
+ 		pr_debug("%s: aborting association:%p\n", __func__, asoc);
+ 		sctp_primitive_ABORT(net, asoc, chunk);
++		iov_iter_revert(&msg->msg_iter, msg_len);
+ 
+ 		return 0;
+ 	}
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 70343ac448b18..139694f2c576c 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -1333,7 +1333,7 @@ static int __tipc_sendmsg(struct socket *sock, struct msghdr *m, size_t dlen)
+ 
+ 	if (unlikely(!dest)) {
+ 		dest = &tsk->peer;
+-		if (!syn || dest->family != AF_TIPC)
++		if (!syn && dest->family != AF_TIPC)
+ 			return -EDESTADDRREQ;
+ 	}
+ 
+diff --git a/tools/testing/selftests/firmware/config b/tools/testing/selftests/firmware/config
+index 913a25a4a32be..bf634dda07201 100644
+--- a/tools/testing/selftests/firmware/config
++++ b/tools/testing/selftests/firmware/config
+@@ -1,6 +1,5 @@
+ CONFIG_TEST_FIRMWARE=y
+ CONFIG_FW_LOADER=y
+ CONFIG_FW_LOADER_USER_HELPER=y
+-CONFIG_FW_LOADER_USER_HELPER_FALLBACK=y
+ CONFIG_IKCONFIG=y
+ CONFIG_IKCONFIG_PROC=y
+diff --git a/tools/testing/selftests/firmware/fw_filesystem.sh b/tools/testing/selftests/firmware/fw_filesystem.sh
+index 466cf2f91ba01..a4320c4b44dc9 100755
+--- a/tools/testing/selftests/firmware/fw_filesystem.sh
++++ b/tools/testing/selftests/firmware/fw_filesystem.sh
+@@ -155,8 +155,11 @@ read_firmwares()
+ {
+ 	for i in $(seq 0 3); do
+ 		config_set_read_fw_idx $i
+-		# Verify the contents match
+-		if ! diff -q "$FW" $DIR/read_firmware 2>/dev/null ; then
++		# Verify the contents are what we expect.
++		# -Z required for now -- check for yourself, md5sum
++		# on $FW and DIR/read_firmware will yield the same. Even
++		# cmp agrees, so something is off.
++		if ! diff -q -Z "$FW" $DIR/read_firmware 2>/dev/null ; then
+ 			echo "request #$i: firmware was not loaded" >&2
+ 			exit 1
+ 		fi
+@@ -168,7 +171,7 @@ read_firmwares_expect_nofile()
+ 	for i in $(seq 0 3); do
+ 		config_set_read_fw_idx $i
+ 		# Ensures contents differ
+-		if diff -q "$FW" $DIR/read_firmware 2>/dev/null ; then
++		if diff -q -Z "$FW" $DIR/read_firmware 2>/dev/null ; then
+ 			echo "request $i: file was not expected to match" >&2
+ 			exit 1
+ 		fi
+diff --git a/tools/testing/selftests/firmware/fw_lib.sh b/tools/testing/selftests/firmware/fw_lib.sh
+index 6c5f1b2ffb745..1cbb12e284a68 100755
+--- a/tools/testing/selftests/firmware/fw_lib.sh
++++ b/tools/testing/selftests/firmware/fw_lib.sh
+@@ -91,7 +91,7 @@ verify_reqs()
+ 	if [ "$TEST_REQS_FW_SYSFS_FALLBACK" = "yes" ]; then
+ 		if [ ! "$HAS_FW_LOADER_USER_HELPER" = "yes" ]; then
+ 			echo "usermode helper disabled so ignoring test"
+-			exit $ksft_skip
++			exit 0
+ 		fi
+ 	fi
+ }

* [gentoo-commits] proj/linux-patches:5.0 commit in: /
@ 2019-03-13 22:10 Mike Pagano
  0 siblings, 0 replies; 28+ messages in thread
From: Mike Pagano @ 2019-03-13 22:10 UTC (permalink / raw)
  To: gentoo-commits

commit:     e6ea672694ccf0bad305b4ffeb7b8dac3e3f804e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Mar 13 22:10:33 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Mar 13 22:10:33 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e6ea6726

proj/linux-patches: Linux patch 5.0.2

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |    4 +
 1001_linux-5.0.2.patch | 1235 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1239 insertions(+)

diff --git a/0000_README b/0000_README
index 99e0bb6..04daf20 100644
--- a/0000_README
+++ b/0000_README
@@ -47,6 +47,10 @@ Patch:  1000_linux-5.0.1.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.0.1
 
+Patch:  1001_linux-5.0.2.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.0.2
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1001_linux-5.0.2.patch b/1001_linux-5.0.2.patch
new file mode 100644
index 0000000..4fcf3cb
--- /dev/null
+++ b/1001_linux-5.0.2.patch
@@ -0,0 +1,1235 @@
+diff --git a/Makefile b/Makefile
+index 3cd7163fe164..bb2f7664594a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 0
+-SUBLEVEL = 1
++SUBLEVEL = 2
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/arch/arm/boot/dts/exynos3250.dtsi b/arch/arm/boot/dts/exynos3250.dtsi
+index 608d17454179..5892a9f7622f 100644
+--- a/arch/arm/boot/dts/exynos3250.dtsi
++++ b/arch/arm/boot/dts/exynos3250.dtsi
+@@ -168,6 +168,9 @@
+ 			interrupt-controller;
+ 			#interrupt-cells = <3>;
+ 			interrupt-parent = <&gic>;
++			clock-names = "clkout8";
++			clocks = <&cmu CLK_FIN_PLL>;
++			#clock-cells = <1>;
+ 		};
+ 
+ 		mipi_phy: video-phy {
+diff --git a/arch/arm/boot/dts/exynos4412-odroid-common.dtsi b/arch/arm/boot/dts/exynos4412-odroid-common.dtsi
+index 3a9eb1e91c45..8a64c4e8c474 100644
+--- a/arch/arm/boot/dts/exynos4412-odroid-common.dtsi
++++ b/arch/arm/boot/dts/exynos4412-odroid-common.dtsi
+@@ -49,7 +49,7 @@
+ 	};
+ 
+ 	emmc_pwrseq: pwrseq {
+-		pinctrl-0 = <&sd1_cd>;
++		pinctrl-0 = <&emmc_rstn>;
+ 		pinctrl-names = "default";
+ 		compatible = "mmc-pwrseq-emmc";
+ 		reset-gpios = <&gpk1 2 GPIO_ACTIVE_LOW>;
+@@ -165,12 +165,6 @@
+ 	cpu0-supply = <&buck2_reg>;
+ };
+ 
+-/* RSTN signal for eMMC */
+-&sd1_cd {
+-	samsung,pin-pud = <EXYNOS_PIN_PULL_NONE>;
+-	samsung,pin-drv = <EXYNOS4_PIN_DRV_LV1>;
+-};
+-
+ &pinctrl_1 {
+ 	gpio_power_key: power_key {
+ 		samsung,pins = "gpx1-3";
+@@ -188,6 +182,11 @@
+ 		samsung,pins = "gpx3-7";
+ 		samsung,pin-pud = <EXYNOS_PIN_PULL_DOWN>;
+ 	};
++
++	emmc_rstn: emmc-rstn {
++		samsung,pins = "gpk1-2";
++		samsung,pin-pud = <EXYNOS_PIN_PULL_NONE>;
++	};
+ };
+ 
+ &ehci {
+diff --git a/arch/arm/boot/dts/exynos5422-odroid-core.dtsi b/arch/arm/boot/dts/exynos5422-odroid-core.dtsi
+index bf09eab90f8a..6bf3661293ee 100644
+--- a/arch/arm/boot/dts/exynos5422-odroid-core.dtsi
++++ b/arch/arm/boot/dts/exynos5422-odroid-core.dtsi
+@@ -468,7 +468,7 @@
+ 			buck8_reg: BUCK8 {
+ 				regulator-name = "vdd_1.8v_ldo";
+ 				regulator-min-microvolt = <800000>;
+-				regulator-max-microvolt = <1500000>;
++				regulator-max-microvolt = <2000000>;
+ 				regulator-always-on;
+ 				regulator-boot-on;
+ 			};
+diff --git a/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts b/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts
+index 610235028cc7..c14205cd6bf5 100644
+--- a/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts
++++ b/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts
+@@ -118,6 +118,7 @@
+ 		reset-gpios = <&gpio0 5 GPIO_ACTIVE_LOW>;
+ 		clocks = <&pmic>;
+ 		clock-names = "ext_clock";
++		post-power-on-delay-ms = <10>;
+ 		power-off-delay-us = <10>;
+ 	};
+ 
+@@ -300,7 +301,6 @@
+ 
+ 		dwmmc_0: dwmmc0@f723d000 {
+ 			cap-mmc-highspeed;
+-			mmc-hs200-1_8v;
+ 			non-removable;
+ 			bus-width = <0x8>;
+ 			vmmc-supply = <&ldo19>;
+diff --git a/arch/arm64/boot/dts/xilinx/zynqmp-zcu100-revC.dts b/arch/arm64/boot/dts/xilinx/zynqmp-zcu100-revC.dts
+index 13a0a028df98..e5699d0d91e4 100644
+--- a/arch/arm64/boot/dts/xilinx/zynqmp-zcu100-revC.dts
++++ b/arch/arm64/boot/dts/xilinx/zynqmp-zcu100-revC.dts
+@@ -101,6 +101,7 @@
+ 	sdio_pwrseq: sdio-pwrseq {
+ 		compatible = "mmc-pwrseq-simple";
+ 		reset-gpios = <&gpio 7 GPIO_ACTIVE_LOW>; /* WIFI_EN */
++		post-power-on-delay-ms = <10>;
+ 	};
+ };
+ 
+diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
+index b684f0294f35..e2b1447192a8 100644
+--- a/arch/x86/events/core.c
++++ b/arch/x86/events/core.c
+@@ -1995,7 +1995,7 @@ static int x86_pmu_commit_txn(struct pmu *pmu)
+  */
+ static void free_fake_cpuc(struct cpu_hw_events *cpuc)
+ {
+-	kfree(cpuc->shared_regs);
++	intel_cpuc_finish(cpuc);
+ 	kfree(cpuc);
+ }
+ 
+@@ -2007,14 +2007,11 @@ static struct cpu_hw_events *allocate_fake_cpuc(void)
+ 	cpuc = kzalloc(sizeof(*cpuc), GFP_KERNEL);
+ 	if (!cpuc)
+ 		return ERR_PTR(-ENOMEM);
+-
+-	/* only needed, if we have extra_regs */
+-	if (x86_pmu.extra_regs) {
+-		cpuc->shared_regs = allocate_shared_regs(cpu);
+-		if (!cpuc->shared_regs)
+-			goto error;
+-	}
+ 	cpuc->is_fake = 1;
++
++	if (intel_cpuc_prepare(cpuc, cpu))
++		goto error;
++
+ 	return cpuc;
+ error:
+ 	free_fake_cpuc(cpuc);
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index 730978dff63f..dadb8f7e5a0d 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -1999,6 +1999,39 @@ static void intel_pmu_nhm_enable_all(int added)
+ 	intel_pmu_enable_all(added);
+ }
+ 
++static void intel_set_tfa(struct cpu_hw_events *cpuc, bool on)
++{
++	u64 val = on ? MSR_TFA_RTM_FORCE_ABORT : 0;
++
++	if (cpuc->tfa_shadow != val) {
++		cpuc->tfa_shadow = val;
++		wrmsrl(MSR_TSX_FORCE_ABORT, val);
++	}
++}
++
++static void intel_tfa_commit_scheduling(struct cpu_hw_events *cpuc, int idx, int cntr)
++{
++	/*
++	 * We're going to use PMC3, make sure TFA is set before we touch it.
++	 */
++	if (cntr == 3 && !cpuc->is_fake)
++		intel_set_tfa(cpuc, true);
++}
++
++static void intel_tfa_pmu_enable_all(int added)
++{
++	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
++
++	/*
++	 * If we find PMC3 is no longer used when we enable the PMU, we can
++	 * clear TFA.
++	 */
++	if (!test_bit(3, cpuc->active_mask))
++		intel_set_tfa(cpuc, false);
++
++	intel_pmu_enable_all(added);
++}
++
+ static void enable_counter_freeze(void)
+ {
+ 	update_debugctlmsr(get_debugctlmsr() |
+@@ -2768,6 +2801,35 @@ intel_stop_scheduling(struct cpu_hw_events *cpuc)
+ 	raw_spin_unlock(&excl_cntrs->lock);
+ }
+ 
++static struct event_constraint *
++dyn_constraint(struct cpu_hw_events *cpuc, struct event_constraint *c, int idx)
++{
++	WARN_ON_ONCE(!cpuc->constraint_list);
++
++	if (!(c->flags & PERF_X86_EVENT_DYNAMIC)) {
++		struct event_constraint *cx;
++
++		/*
++		 * grab pre-allocated constraint entry
++		 */
++		cx = &cpuc->constraint_list[idx];
++
++		/*
++		 * initialize dynamic constraint
++		 * with static constraint
++		 */
++		*cx = *c;
++
++		/*
++		 * mark constraint as dynamic
++		 */
++		cx->flags |= PERF_X86_EVENT_DYNAMIC;
++		c = cx;
++	}
++
++	return c;
++}
++
+ static struct event_constraint *
+ intel_get_excl_constraints(struct cpu_hw_events *cpuc, struct perf_event *event,
+ 			   int idx, struct event_constraint *c)
+@@ -2798,27 +2860,7 @@ intel_get_excl_constraints(struct cpu_hw_events *cpuc, struct perf_event *event,
+ 	 * only needed when constraint has not yet
+ 	 * been cloned (marked dynamic)
+ 	 */
+-	if (!(c->flags & PERF_X86_EVENT_DYNAMIC)) {
+-		struct event_constraint *cx;
+-
+-		/*
+-		 * grab pre-allocated constraint entry
+-		 */
+-		cx = &cpuc->constraint_list[idx];
+-
+-		/*
+-		 * initialize dynamic constraint
+-		 * with static constraint
+-		 */
+-		*cx = *c;
+-
+-		/*
+-		 * mark constraint as dynamic, so we
+-		 * can free it later on
+-		 */
+-		cx->flags |= PERF_X86_EVENT_DYNAMIC;
+-		c = cx;
+-	}
++	c = dyn_constraint(cpuc, c, idx);
+ 
+ 	/*
+ 	 * From here on, the constraint is dynamic.
+@@ -3345,6 +3387,26 @@ glp_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
+ 	return c;
+ }
+ 
++static bool allow_tsx_force_abort = true;
++
++static struct event_constraint *
++tfa_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
++			  struct perf_event *event)
++{
++	struct event_constraint *c = hsw_get_event_constraints(cpuc, idx, event);
++
++	/*
++	 * Without TFA we must not use PMC3.
++	 */
++	if (!allow_tsx_force_abort && test_bit(3, c->idxmsk)) {
++		c = dyn_constraint(cpuc, c, idx);
++		c->idxmsk64 &= ~(1ULL << 3);
++		c->weight--;
++	}
++
++	return c;
++}
++
+ /*
+  * Broadwell:
+  *
+@@ -3398,7 +3460,7 @@ ssize_t intel_event_sysfs_show(char *page, u64 config)
+ 	return x86_event_sysfs_show(page, config, event);
+ }
+ 
+-struct intel_shared_regs *allocate_shared_regs(int cpu)
++static struct intel_shared_regs *allocate_shared_regs(int cpu)
+ {
+ 	struct intel_shared_regs *regs;
+ 	int i;
+@@ -3430,23 +3492,24 @@ static struct intel_excl_cntrs *allocate_excl_cntrs(int cpu)
+ 	return c;
+ }
+ 
+-static int intel_pmu_cpu_prepare(int cpu)
+-{
+-	struct cpu_hw_events *cpuc = &per_cpu(cpu_hw_events, cpu);
+ 
++int intel_cpuc_prepare(struct cpu_hw_events *cpuc, int cpu)
++{
+ 	if (x86_pmu.extra_regs || x86_pmu.lbr_sel_map) {
+ 		cpuc->shared_regs = allocate_shared_regs(cpu);
+ 		if (!cpuc->shared_regs)
+ 			goto err;
+ 	}
+ 
+-	if (x86_pmu.flags & PMU_FL_EXCL_CNTRS) {
++	if (x86_pmu.flags & (PMU_FL_EXCL_CNTRS | PMU_FL_TFA)) {
+ 		size_t sz = X86_PMC_IDX_MAX * sizeof(struct event_constraint);
+ 
+-		cpuc->constraint_list = kzalloc(sz, GFP_KERNEL);
++		cpuc->constraint_list = kzalloc_node(sz, GFP_KERNEL, cpu_to_node(cpu));
+ 		if (!cpuc->constraint_list)
+ 			goto err_shared_regs;
++	}
+ 
++	if (x86_pmu.flags & PMU_FL_EXCL_CNTRS) {
+ 		cpuc->excl_cntrs = allocate_excl_cntrs(cpu);
+ 		if (!cpuc->excl_cntrs)
+ 			goto err_constraint_list;
+@@ -3468,6 +3531,11 @@ err:
+ 	return -ENOMEM;
+ }
+ 
++static int intel_pmu_cpu_prepare(int cpu)
++{
++	return intel_cpuc_prepare(&per_cpu(cpu_hw_events, cpu), cpu);
++}
++
+ static void flip_smm_bit(void *data)
+ {
+ 	unsigned long set = *(unsigned long *)data;
+@@ -3542,9 +3610,8 @@ static void intel_pmu_cpu_starting(int cpu)
+ 	}
+ }
+ 
+-static void free_excl_cntrs(int cpu)
++static void free_excl_cntrs(struct cpu_hw_events *cpuc)
+ {
+-	struct cpu_hw_events *cpuc = &per_cpu(cpu_hw_events, cpu);
+ 	struct intel_excl_cntrs *c;
+ 
+ 	c = cpuc->excl_cntrs;
+@@ -3552,9 +3619,10 @@ static void free_excl_cntrs(int cpu)
+ 		if (c->core_id == -1 || --c->refcnt == 0)
+ 			kfree(c);
+ 		cpuc->excl_cntrs = NULL;
+-		kfree(cpuc->constraint_list);
+-		cpuc->constraint_list = NULL;
+ 	}
++
++	kfree(cpuc->constraint_list);
++	cpuc->constraint_list = NULL;
+ }
+ 
+ static void intel_pmu_cpu_dying(int cpu)
+@@ -3565,9 +3633,8 @@ static void intel_pmu_cpu_dying(int cpu)
+ 		disable_counter_freeze();
+ }
+ 
+-static void intel_pmu_cpu_dead(int cpu)
++void intel_cpuc_finish(struct cpu_hw_events *cpuc)
+ {
+-	struct cpu_hw_events *cpuc = &per_cpu(cpu_hw_events, cpu);
+ 	struct intel_shared_regs *pc;
+ 
+ 	pc = cpuc->shared_regs;
+@@ -3577,7 +3644,12 @@ static void intel_pmu_cpu_dead(int cpu)
+ 		cpuc->shared_regs = NULL;
+ 	}
+ 
+-	free_excl_cntrs(cpu);
++	free_excl_cntrs(cpuc);
++}
++
++static void intel_pmu_cpu_dead(int cpu)
++{
++	intel_cpuc_finish(&per_cpu(cpu_hw_events, cpu));
+ }
+ 
+ static void intel_pmu_sched_task(struct perf_event_context *ctx,
+@@ -4070,8 +4142,11 @@ static struct attribute *intel_pmu_caps_attrs[] = {
+        NULL
+ };
+ 
++DEVICE_BOOL_ATTR(allow_tsx_force_abort, 0644, allow_tsx_force_abort);
++
+ static struct attribute *intel_pmu_attrs[] = {
+ 	&dev_attr_freeze_on_smi.attr,
++	NULL, /* &dev_attr_allow_tsx_force_abort.attr.attr */
+ 	NULL,
+ };
+ 
+@@ -4564,6 +4639,15 @@ __init int intel_pmu_init(void)
+ 		tsx_attr = hsw_tsx_events_attrs;
+ 		intel_pmu_pebs_data_source_skl(
+ 			boot_cpu_data.x86_model == INTEL_FAM6_SKYLAKE_X);
++
++		if (boot_cpu_has(X86_FEATURE_TSX_FORCE_ABORT)) {
++			x86_pmu.flags |= PMU_FL_TFA;
++			x86_pmu.get_event_constraints = tfa_get_event_constraints;
++			x86_pmu.enable_all = intel_tfa_pmu_enable_all;
++			x86_pmu.commit_scheduling = intel_tfa_commit_scheduling;
++			intel_pmu_attrs[1] = &dev_attr_allow_tsx_force_abort.attr.attr;
++		}
++
+ 		pr_cont("Skylake events, ");
+ 		name = "skylake";
+ 		break;
+@@ -4715,7 +4799,7 @@ static __init int fixup_ht_bug(void)
+ 	hardlockup_detector_perf_restart();
+ 
+ 	for_each_online_cpu(c)
+-		free_excl_cntrs(c);
++		free_excl_cntrs(&per_cpu(cpu_hw_events, c));
+ 
+ 	cpus_read_unlock();
+ 	pr_info("PMU erratum BJ122, BV98, HSD29 workaround disabled, HT off\n");
+diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
+index d46fd6754d92..a345d079f876 100644
+--- a/arch/x86/events/perf_event.h
++++ b/arch/x86/events/perf_event.h
+@@ -242,6 +242,11 @@ struct cpu_hw_events {
+ 	struct intel_excl_cntrs		*excl_cntrs;
+ 	int excl_thread_id; /* 0 or 1 */
+ 
++	/*
++	 * SKL TSX_FORCE_ABORT shadow
++	 */
++	u64				tfa_shadow;
++
+ 	/*
+ 	 * AMD specific bits
+ 	 */
+@@ -681,6 +686,7 @@ do {									\
+ #define PMU_FL_EXCL_CNTRS	0x4 /* has exclusive counter requirements  */
+ #define PMU_FL_EXCL_ENABLED	0x8 /* exclusive counter active */
+ #define PMU_FL_PEBS_ALL		0x10 /* all events are valid PEBS events */
++#define PMU_FL_TFA		0x20 /* deal with TSX force abort */
+ 
+ #define EVENT_VAR(_id)  event_attr_##_id
+ #define EVENT_PTR(_id) &event_attr_##_id.attr.attr
+@@ -889,7 +895,8 @@ struct event_constraint *
+ x86_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
+ 			  struct perf_event *event);
+ 
+-struct intel_shared_regs *allocate_shared_regs(int cpu);
++extern int intel_cpuc_prepare(struct cpu_hw_events *cpuc, int cpu);
++extern void intel_cpuc_finish(struct cpu_hw_events *cpuc);
+ 
+ int intel_pmu_init(void);
+ 
+@@ -1025,9 +1032,13 @@ static inline int intel_pmu_init(void)
+ 	return 0;
+ }
+ 
+-static inline struct intel_shared_regs *allocate_shared_regs(int cpu)
++static inline int intel_cpuc_prepare(struct cpu_hw_event *cpuc, int cpu)
++{
++	return 0;
++}
++
++static inline void intel_cpuc_finish(struct cpu_hw_event *cpuc)
+ {
+-	return NULL;
+ }
+ 
+ static inline int is_ht_workaround_enabled(void)
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index 6d6122524711..981ff9479648 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -344,6 +344,7 @@
+ /* Intel-defined CPU features, CPUID level 0x00000007:0 (EDX), word 18 */
+ #define X86_FEATURE_AVX512_4VNNIW	(18*32+ 2) /* AVX-512 Neural Network Instructions */
+ #define X86_FEATURE_AVX512_4FMAPS	(18*32+ 3) /* AVX-512 Multiply Accumulation Single precision */
++#define X86_FEATURE_TSX_FORCE_ABORT	(18*32+13) /* "" TSX_FORCE_ABORT */
+ #define X86_FEATURE_PCONFIG		(18*32+18) /* Intel PCONFIG */
+ #define X86_FEATURE_SPEC_CTRL		(18*32+26) /* "" Speculation Control (IBRS + IBPB) */
+ #define X86_FEATURE_INTEL_STIBP		(18*32+27) /* "" Single Thread Indirect Branch Predictors */
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index 8e40c2446fd1..ca5bc0eacb95 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -666,6 +666,12 @@
+ 
+ #define MSR_IA32_TSC_DEADLINE		0x000006E0
+ 
++
++#define MSR_TSX_FORCE_ABORT		0x0000010F
++
++#define MSR_TFA_RTM_FORCE_ABORT_BIT	0
++#define MSR_TFA_RTM_FORCE_ABORT		BIT_ULL(MSR_TFA_RTM_FORCE_ABORT_BIT)
++
+ /* P4/Xeon+ specific */
+ #define MSR_IA32_MCG_EAX		0x00000180
+ #define MSR_IA32_MCG_EBX		0x00000181
+diff --git a/arch/x86/pci/fixup.c b/arch/x86/pci/fixup.c
+index 30a5111ae5fd..527e69b12002 100644
+--- a/arch/x86/pci/fixup.c
++++ b/arch/x86/pci/fixup.c
+@@ -635,6 +635,22 @@ static void quirk_no_aersid(struct pci_dev *pdev)
+ DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, PCI_ANY_ID,
+ 			      PCI_CLASS_BRIDGE_PCI, 8, quirk_no_aersid);
+ 
++static void quirk_intel_th_dnv(struct pci_dev *dev)
++{
++	struct resource *r = &dev->resource[4];
++
++	/*
++	 * Denverton reports 2k of RTIT_BAR (intel_th resource 4), which
++	 * appears to be 4 MB in reality.
++	 */
++	if (r->end == r->start + 0x7ff) {
++		r->start = 0;
++		r->end   = 0x3fffff;
++		r->flags |= IORESOURCE_UNSET;
++	}
++}
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x19e1, quirk_intel_th_dnv);
++
+ #ifdef CONFIG_PHYS_ADDR_T_64BIT
+ 
+ #define AMD_141b_MMIO_BASE(x)	(0x80 + (x) * 0x8)
+diff --git a/drivers/firmware/iscsi_ibft.c b/drivers/firmware/iscsi_ibft.c
+index 6bc8e6640d71..c51462f5aa1e 100644
+--- a/drivers/firmware/iscsi_ibft.c
++++ b/drivers/firmware/iscsi_ibft.c
+@@ -542,6 +542,7 @@ static umode_t __init ibft_check_tgt_for(void *data, int type)
+ 	case ISCSI_BOOT_TGT_NIC_ASSOC:
+ 	case ISCSI_BOOT_TGT_CHAP_TYPE:
+ 		rc = S_IRUGO;
++		break;
+ 	case ISCSI_BOOT_TGT_NAME:
+ 		if (tgt->tgt_name_len)
+ 			rc = S_IRUGO;
+diff --git a/drivers/input/mouse/elan_i2c_core.c b/drivers/input/mouse/elan_i2c_core.c
+index 225ae6980182..628ef617bb2f 100644
+--- a/drivers/input/mouse/elan_i2c_core.c
++++ b/drivers/input/mouse/elan_i2c_core.c
+@@ -1337,6 +1337,7 @@ static const struct acpi_device_id elan_acpi_id[] = {
+ 	{ "ELAN0000", 0 },
+ 	{ "ELAN0100", 0 },
+ 	{ "ELAN0600", 0 },
++	{ "ELAN0601", 0 },
+ 	{ "ELAN0602", 0 },
+ 	{ "ELAN0605", 0 },
+ 	{ "ELAN0608", 0 },
+diff --git a/drivers/input/tablet/wacom_serial4.c b/drivers/input/tablet/wacom_serial4.c
+index 38bfaca48eab..150f9eecaca7 100644
+--- a/drivers/input/tablet/wacom_serial4.c
++++ b/drivers/input/tablet/wacom_serial4.c
+@@ -187,6 +187,7 @@ enum {
+ 	MODEL_DIGITIZER_II	= 0x5544, /* UD */
+ 	MODEL_GRAPHIRE		= 0x4554, /* ET */
+ 	MODEL_PENPARTNER	= 0x4354, /* CT */
++	MODEL_ARTPAD_II		= 0x4B54, /* KT */
+ };
+ 
+ static void wacom_handle_model_response(struct wacom *wacom)
+@@ -245,6 +246,7 @@ static void wacom_handle_model_response(struct wacom *wacom)
+ 		wacom->flags = F_HAS_STYLUS2 | F_HAS_SCROLLWHEEL;
+ 		break;
+ 
++	case MODEL_ARTPAD_II:
+ 	case MODEL_DIGITIZER_II:
+ 		wacom->dev->name = "Wacom Digitizer II";
+ 		wacom->dev->id.version = MODEL_DIGITIZER_II;
+diff --git a/drivers/media/rc/rc-main.c b/drivers/media/rc/rc-main.c
+index 66a174979b3c..81745644f720 100644
+--- a/drivers/media/rc/rc-main.c
++++ b/drivers/media/rc/rc-main.c
+@@ -274,6 +274,7 @@ static unsigned int ir_update_mapping(struct rc_dev *dev,
+ 				      unsigned int new_keycode)
+ {
+ 	int old_keycode = rc_map->scan[index].keycode;
++	int i;
+ 
+ 	/* Did the user wish to remove the mapping? */
+ 	if (new_keycode == KEY_RESERVED || new_keycode == KEY_UNKNOWN) {
+@@ -288,9 +289,20 @@ static unsigned int ir_update_mapping(struct rc_dev *dev,
+ 			old_keycode == KEY_RESERVED ? "New" : "Replacing",
+ 			rc_map->scan[index].scancode, new_keycode);
+ 		rc_map->scan[index].keycode = new_keycode;
++		__set_bit(new_keycode, dev->input_dev->keybit);
+ 	}
+ 
+ 	if (old_keycode != KEY_RESERVED) {
++		/* A previous mapping was updated... */
++		__clear_bit(old_keycode, dev->input_dev->keybit);
++		/* ... but another scancode might use the same keycode */
++		for (i = 0; i < rc_map->len; i++) {
++			if (rc_map->scan[i].keycode == old_keycode) {
++				__set_bit(old_keycode, dev->input_dev->keybit);
++				break;
++			}
++		}
++
+ 		/* Possibly shrink the keytable, failure is not a problem */
+ 		ir_resize_table(dev, rc_map, GFP_ATOMIC);
+ 	}
+@@ -1750,7 +1762,6 @@ static int rc_prepare_rx_device(struct rc_dev *dev)
+ 	set_bit(EV_REP, dev->input_dev->evbit);
+ 	set_bit(EV_MSC, dev->input_dev->evbit);
+ 	set_bit(MSC_SCAN, dev->input_dev->mscbit);
+-	bitmap_fill(dev->input_dev->keybit, KEY_CNT);
+ 
+ 	/* Pointer/mouse events */
+ 	set_bit(EV_REL, dev->input_dev->evbit);
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index b62cbd800111..33a22c016456 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -1106,11 +1106,19 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
+ 			return -EINVAL;
+ 		}
+ 
+-		/* Make sure the terminal type MSB is not null, otherwise it
+-		 * could be confused with a unit.
++		/*
++		 * Reject invalid terminal types that would cause issues:
++		 *
++		 * - The high byte must be non-zero, otherwise it would be
++		 *   confused with a unit.
++		 *
++		 * - Bit 15 must be 0, as we use it internally as a terminal
++		 *   direction flag.
++		 *
++		 * Other unknown types are accepted.
+ 		 */
+ 		type = get_unaligned_le16(&buffer[4]);
+-		if ((type & 0xff00) == 0) {
++		if ((type & 0x7f00) == 0 || (type & 0x8000) != 0) {
+ 			uvc_trace(UVC_TRACE_DESCR, "device %d videocontrol "
+ 				"interface %d INPUT_TERMINAL %d has invalid "
+ 				"type 0x%04x, skipping\n", udev->devnum,
+diff --git a/drivers/net/wireless/ath/ath9k/init.c b/drivers/net/wireless/ath/ath9k/init.c
+index c070a9e51ebf..fae572b38416 100644
+--- a/drivers/net/wireless/ath/ath9k/init.c
++++ b/drivers/net/wireless/ath/ath9k/init.c
+@@ -636,15 +636,15 @@ static int ath9k_of_init(struct ath_softc *sc)
+ 		ret = ath9k_eeprom_request(sc, eeprom_name);
+ 		if (ret)
+ 			return ret;
++
++		ah->ah_flags &= ~AH_USE_EEPROM;
++		ah->ah_flags |= AH_NO_EEP_SWAP;
+ 	}
+ 
+ 	mac = of_get_mac_address(np);
+ 	if (mac)
+ 		ether_addr_copy(common->macaddr, mac);
+ 
+-	ah->ah_flags &= ~AH_USE_EEPROM;
+-	ah->ah_flags |= AH_NO_EEP_SWAP;
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/pci/pcie/pme.c b/drivers/pci/pcie/pme.c
+index 0dbcf429089f..1a8b85051b1b 100644
+--- a/drivers/pci/pcie/pme.c
++++ b/drivers/pci/pcie/pme.c
+@@ -432,31 +432,6 @@ static void pcie_pme_remove(struct pcie_device *srv)
+ 	kfree(get_service_data(srv));
+ }
+ 
+-static int pcie_pme_runtime_suspend(struct pcie_device *srv)
+-{
+-	struct pcie_pme_service_data *data = get_service_data(srv);
+-
+-	spin_lock_irq(&data->lock);
+-	pcie_pme_interrupt_enable(srv->port, false);
+-	pcie_clear_root_pme_status(srv->port);
+-	data->noirq = true;
+-	spin_unlock_irq(&data->lock);
+-
+-	return 0;
+-}
+-
+-static int pcie_pme_runtime_resume(struct pcie_device *srv)
+-{
+-	struct pcie_pme_service_data *data = get_service_data(srv);
+-
+-	spin_lock_irq(&data->lock);
+-	pcie_pme_interrupt_enable(srv->port, true);
+-	data->noirq = false;
+-	spin_unlock_irq(&data->lock);
+-
+-	return 0;
+-}
+-
+ static struct pcie_port_service_driver pcie_pme_driver = {
+ 	.name		= "pcie_pme",
+ 	.port_type	= PCI_EXP_TYPE_ROOT_PORT,
+@@ -464,8 +439,6 @@ static struct pcie_port_service_driver pcie_pme_driver = {
+ 
+ 	.probe		= pcie_pme_probe,
+ 	.suspend	= pcie_pme_suspend,
+-	.runtime_suspend = pcie_pme_runtime_suspend,
+-	.runtime_resume	= pcie_pme_runtime_resume,
+ 	.resume		= pcie_pme_resume,
+ 	.remove		= pcie_pme_remove,
+ };
+diff --git a/drivers/scsi/aacraid/commsup.c b/drivers/scsi/aacraid/commsup.c
+index d5a6aa9676c8..a3adc954f40f 100644
+--- a/drivers/scsi/aacraid/commsup.c
++++ b/drivers/scsi/aacraid/commsup.c
+@@ -1303,8 +1303,9 @@ static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
+ 				  ADD : DELETE;
+ 				break;
+ 			}
+-			case AifBuManagerEvent:
+-				aac_handle_aif_bu(dev, aifcmd);
++			break;
++		case AifBuManagerEvent:
++			aac_handle_aif_bu(dev, aifcmd);
+ 			break;
+ 		}
+ 
+diff --git a/drivers/staging/erofs/namei.c b/drivers/staging/erofs/namei.c
+index 5596c52e246d..ecc51ef0753f 100644
+--- a/drivers/staging/erofs/namei.c
++++ b/drivers/staging/erofs/namei.c
+@@ -15,74 +15,77 @@
+ 
+ #include <trace/events/erofs.h>
+ 
+-/* based on the value of qn->len is accurate */
+-static inline int dirnamecmp(struct qstr *qn,
+-	struct qstr *qd, unsigned int *matched)
++struct erofs_qstr {
++	const unsigned char *name;
++	const unsigned char *end;
++};
++
++/* based on the end of qn is accurate and it must have the trailing '\0' */
++static inline int dirnamecmp(const struct erofs_qstr *qn,
++			     const struct erofs_qstr *qd,
++			     unsigned int *matched)
+ {
+-	unsigned int i = *matched, len = min(qn->len, qd->len);
+-loop:
+-	if (unlikely(i >= len)) {
+-		*matched = i;
+-		if (qn->len < qd->len) {
+-			/*
+-			 * actually (qn->len == qd->len)
+-			 * when qd->name[i] == '\0'
+-			 */
+-			return qd->name[i] == '\0' ? 0 : -1;
++	unsigned int i = *matched;
++
++	/*
++	 * on-disk error, let's only BUG_ON in the debugging mode.
++	 * otherwise, it will return 1 to just skip the invalid name
++	 * and go on (in consideration of the lookup performance).
++	 */
++	DBG_BUGON(qd->name > qd->end);
++
++	/* qd could not have trailing '\0' */
++	/* However it is absolutely safe if < qd->end */
++	while (qd->name + i < qd->end && qd->name[i] != '\0') {
++		if (qn->name[i] != qd->name[i]) {
++			*matched = i;
++			return qn->name[i] > qd->name[i] ? 1 : -1;
+ 		}
+-		return (qn->len > qd->len);
++		++i;
+ 	}
+-
+-	if (qn->name[i] != qd->name[i]) {
+-		*matched = i;
+-		return qn->name[i] > qd->name[i] ? 1 : -1;
+-	}
+-
+-	++i;
+-	goto loop;
++	*matched = i;
++	/* See comments in __d_alloc on the terminating NUL character */
++	return qn->name[i] == '\0' ? 0 : 1;
+ }
+ 
+-static struct erofs_dirent *find_target_dirent(
+-	struct qstr *name,
+-	u8 *data, int maxsize)
++#define nameoff_from_disk(off, sz)	(le16_to_cpu(off) & ((sz) - 1))
++
++static struct erofs_dirent *find_target_dirent(struct erofs_qstr *name,
++					       u8 *data,
++					       unsigned int dirblksize,
++					       const int ndirents)
+ {
+-	unsigned int ndirents, head, back;
++	int head, back;
+ 	unsigned int startprfx, endprfx;
+ 	struct erofs_dirent *const de = (struct erofs_dirent *)data;
+ 
+-	/* make sure that maxsize is valid */
+-	BUG_ON(maxsize < sizeof(struct erofs_dirent));
+-
+-	ndirents = le16_to_cpu(de->nameoff) / sizeof(*de);
+-
+-	/* corrupted dir (may be unnecessary...) */
+-	BUG_ON(!ndirents);
+-
+-	head = 0;
++	/* since the 1st dirent has been evaluated previously */
++	head = 1;
+ 	back = ndirents - 1;
+ 	startprfx = endprfx = 0;
+ 
+ 	while (head <= back) {
+-		unsigned int mid = head + (back - head) / 2;
+-		unsigned int nameoff = le16_to_cpu(de[mid].nameoff);
++		const int mid = head + (back - head) / 2;
++		const int nameoff = nameoff_from_disk(de[mid].nameoff,
++						      dirblksize);
+ 		unsigned int matched = min(startprfx, endprfx);
+-
+-		struct qstr dname = QSTR_INIT(data + nameoff,
+-			unlikely(mid >= ndirents - 1) ?
+-				maxsize - nameoff :
+-				le16_to_cpu(de[mid + 1].nameoff) - nameoff);
++		struct erofs_qstr dname = {
++			.name = data + nameoff,
++			.end = unlikely(mid >= ndirents - 1) ?
++				data + dirblksize :
++				data + nameoff_from_disk(de[mid + 1].nameoff,
++							 dirblksize)
++		};
+ 
+ 		/* string comparison without already matched prefix */
+ 		int ret = dirnamecmp(name, &dname, &matched);
+ 
+-		if (unlikely(!ret))
++		if (unlikely(!ret)) {
+ 			return de + mid;
+-		else if (ret > 0) {
++		} else if (ret > 0) {
+ 			head = mid + 1;
+ 			startprfx = matched;
+-		} else if (unlikely(mid < 1))	/* fix "mid" overflow */
+-			break;
+-		else {
++		} else {
+ 			back = mid - 1;
+ 			endprfx = matched;
+ 		}
+@@ -91,12 +94,12 @@ static struct erofs_dirent *find_target_dirent(
+ 	return ERR_PTR(-ENOENT);
+ }
+ 
+-static struct page *find_target_block_classic(
+-	struct inode *dir,
+-	struct qstr *name, int *_diff)
++static struct page *find_target_block_classic(struct inode *dir,
++					      struct erofs_qstr *name,
++					      int *_ndirents)
+ {
+ 	unsigned int startprfx, endprfx;
+-	unsigned int head, back;
++	int head, back;
+ 	struct address_space *const mapping = dir->i_mapping;
+ 	struct page *candidate = ERR_PTR(-ENOENT);
+ 
+@@ -105,41 +108,43 @@ static struct page *find_target_block_classic(
+ 	back = inode_datablocks(dir) - 1;
+ 
+ 	while (head <= back) {
+-		unsigned int mid = head + (back - head) / 2;
++		const int mid = head + (back - head) / 2;
+ 		struct page *page = read_mapping_page(mapping, mid, NULL);
+ 
+-		if (IS_ERR(page)) {
+-exact_out:
+-			if (!IS_ERR(candidate)) /* valid candidate */
+-				put_page(candidate);
+-			return page;
+-		} else {
+-			int diff;
+-			unsigned int ndirents, matched;
+-			struct qstr dname;
++		if (!IS_ERR(page)) {
+ 			struct erofs_dirent *de = kmap_atomic(page);
+-			unsigned int nameoff = le16_to_cpu(de->nameoff);
+-
+-			ndirents = nameoff / sizeof(*de);
++			const int nameoff = nameoff_from_disk(de->nameoff,
++							      EROFS_BLKSIZ);
++			const int ndirents = nameoff / sizeof(*de);
++			int diff;
++			unsigned int matched;
++			struct erofs_qstr dname;
+ 
+-			/* corrupted dir (should have one entry at least) */
+-			BUG_ON(!ndirents || nameoff > PAGE_SIZE);
++			if (unlikely(!ndirents)) {
++				DBG_BUGON(1);
++				kunmap_atomic(de);
++				put_page(page);
++				page = ERR_PTR(-EIO);
++				goto out;
++			}
+ 
+ 			matched = min(startprfx, endprfx);
+ 
+ 			dname.name = (u8 *)de + nameoff;
+-			dname.len = ndirents == 1 ?
+-				/* since the rest of the last page is 0 */
+-				EROFS_BLKSIZ - nameoff
+-				: le16_to_cpu(de[1].nameoff) - nameoff;
++			if (ndirents == 1)
++				dname.end = (u8 *)de + EROFS_BLKSIZ;
++			else
++				dname.end = (u8 *)de +
++					nameoff_from_disk(de[1].nameoff,
++							  EROFS_BLKSIZ);
+ 
+ 			/* string comparison without already matched prefix */
+ 			diff = dirnamecmp(name, &dname, &matched);
+ 			kunmap_atomic(de);
+ 
+ 			if (unlikely(!diff)) {
+-				*_diff = 0;
+-				goto exact_out;
++				*_ndirents = 0;
++				goto out;
+ 			} else if (diff > 0) {
+ 				head = mid + 1;
+ 				startprfx = matched;
+@@ -147,45 +152,51 @@ exact_out:
+ 				if (likely(!IS_ERR(candidate)))
+ 					put_page(candidate);
+ 				candidate = page;
++				*_ndirents = ndirents;
+ 			} else {
+ 				put_page(page);
+ 
+-				if (unlikely(mid < 1))	/* fix "mid" overflow */
+-					break;
+-
+ 				back = mid - 1;
+ 				endprfx = matched;
+ 			}
++			continue;
+ 		}
++out:		/* free if the candidate is valid */
++		if (!IS_ERR(candidate))
++			put_page(candidate);
++		return page;
+ 	}
+-	*_diff = 1;
+ 	return candidate;
+ }
+ 
+ int erofs_namei(struct inode *dir,
+-	struct qstr *name,
+-	erofs_nid_t *nid, unsigned int *d_type)
++		struct qstr *name,
++		erofs_nid_t *nid, unsigned int *d_type)
+ {
+-	int diff;
++	int ndirents;
+ 	struct page *page;
+-	u8 *data;
++	void *data;
+ 	struct erofs_dirent *de;
++	struct erofs_qstr qn;
+ 
+ 	if (unlikely(!dir->i_size))
+ 		return -ENOENT;
+ 
+-	diff = 1;
+-	page = find_target_block_classic(dir, name, &diff);
++	qn.name = name->name;
++	qn.end = name->name + name->len;
++
++	ndirents = 0;
++	page = find_target_block_classic(dir, &qn, &ndirents);
+ 
+ 	if (unlikely(IS_ERR(page)))
+ 		return PTR_ERR(page);
+ 
+ 	data = kmap_atomic(page);
+ 	/* the target page has been mapped */
+-	de = likely(diff) ?
+-		/* since the rest of the last page is 0 */
+-		find_target_dirent(name, data, EROFS_BLKSIZ) :
+-		(struct erofs_dirent *)data;
++	if (ndirents)
++		de = find_target_dirent(&qn, data, EROFS_BLKSIZ, ndirents);
++	else
++		de = (struct erofs_dirent *)data;
+ 
+ 	if (likely(!IS_ERR(de))) {
+ 		*nid = le64_to_cpu(de->nid);
+diff --git a/drivers/staging/erofs/unzip_vle.c b/drivers/staging/erofs/unzip_vle.c
+index ca2e8fd78959..ab30d14ded06 100644
+--- a/drivers/staging/erofs/unzip_vle.c
++++ b/drivers/staging/erofs/unzip_vle.c
+@@ -1017,11 +1017,10 @@ repeat:
+ 	if (llen > grp->llen)
+ 		llen = grp->llen;
+ 
+-	err = z_erofs_vle_unzip_fast_percpu(compressed_pages,
+-		clusterpages, pages, llen, work->pageofs,
+-		z_erofs_onlinepage_endio);
++	err = z_erofs_vle_unzip_fast_percpu(compressed_pages, clusterpages,
++					    pages, llen, work->pageofs);
+ 	if (err != -ENOTSUPP)
+-		goto out_percpu;
++		goto out;
+ 
+ 	if (sparsemem_pages >= nr_pages)
+ 		goto skip_allocpage;
+@@ -1042,8 +1041,25 @@ skip_allocpage:
+ 	erofs_vunmap(vout, nr_pages);
+ 
+ out:
++	/* must handle all compressed pages before endding pages */
++	for (i = 0; i < clusterpages; ++i) {
++		page = compressed_pages[i];
++
++#ifdef EROFS_FS_HAS_MANAGED_CACHE
++		if (page->mapping == MNGD_MAPPING(sbi))
++			continue;
++#endif
++		/* recycle all individual staging pages */
++		(void)z_erofs_gather_if_stagingpage(page_pool, page);
++
++		WRITE_ONCE(compressed_pages[i], NULL);
++	}
++
+ 	for (i = 0; i < nr_pages; ++i) {
+ 		page = pages[i];
++		if (!page)
++			continue;
++
+ 		DBG_BUGON(!page->mapping);
+ 
+ 		/* recycle all individual staging pages */
+@@ -1056,20 +1072,6 @@ out:
+ 		z_erofs_onlinepage_endio(page);
+ 	}
+ 
+-out_percpu:
+-	for (i = 0; i < clusterpages; ++i) {
+-		page = compressed_pages[i];
+-
+-#ifdef EROFS_FS_HAS_MANAGED_CACHE
+-		if (page->mapping == MNGD_MAPPING(sbi))
+-			continue;
+-#endif
+-		/* recycle all individual staging pages */
+-		(void)z_erofs_gather_if_stagingpage(page_pool, page);
+-
+-		WRITE_ONCE(compressed_pages[i], NULL);
+-	}
+-
+ 	if (pages == z_pagemap_global)
+ 		mutex_unlock(&z_pagemap_global_lock);
+ 	else if (unlikely(pages != pages_onstack))
+diff --git a/drivers/staging/erofs/unzip_vle.h b/drivers/staging/erofs/unzip_vle.h
+index 5a4e1b62c0d1..c0dfd6906aa8 100644
+--- a/drivers/staging/erofs/unzip_vle.h
++++ b/drivers/staging/erofs/unzip_vle.h
+@@ -218,8 +218,7 @@ extern int z_erofs_vle_plain_copy(struct page **compressed_pages,
+ 
+ extern int z_erofs_vle_unzip_fast_percpu(struct page **compressed_pages,
+ 	unsigned clusterpages, struct page **pages,
+-	unsigned outlen, unsigned short pageofs,
+-	void (*endio)(struct page *));
++	unsigned int outlen, unsigned short pageofs);
+ 
+ extern int z_erofs_vle_unzip_vmap(struct page **compressed_pages,
+ 	unsigned clusterpages, void *vaddr, unsigned llen,
+diff --git a/drivers/staging/erofs/unzip_vle_lz4.c b/drivers/staging/erofs/unzip_vle_lz4.c
+index 52797bd89da1..f471b894c848 100644
+--- a/drivers/staging/erofs/unzip_vle_lz4.c
++++ b/drivers/staging/erofs/unzip_vle_lz4.c
+@@ -125,8 +125,7 @@ int z_erofs_vle_unzip_fast_percpu(struct page **compressed_pages,
+ 				  unsigned int clusterpages,
+ 				  struct page **pages,
+ 				  unsigned int outlen,
+-				  unsigned short pageofs,
+-				  void (*endio)(struct page *))
++				  unsigned short pageofs)
+ {
+ 	void *vin, *vout;
+ 	unsigned int nr_pages, i, j;
+@@ -148,19 +147,16 @@ int z_erofs_vle_unzip_fast_percpu(struct page **compressed_pages,
+ 	ret = z_erofs_unzip_lz4(vin, vout + pageofs,
+ 				clusterpages * PAGE_SIZE, outlen);
+ 
+-	if (ret >= 0) {
+-		outlen = ret;
+-		ret = 0;
+-	}
++	if (ret < 0)
++		goto out;
++	ret = 0;
+ 
+ 	for (i = 0; i < nr_pages; ++i) {
+ 		j = min((unsigned int)PAGE_SIZE - pageofs, outlen);
+ 
+ 		if (pages[i]) {
+-			if (ret < 0) {
+-				SetPageError(pages[i]);
+-			} else if (clusterpages == 1 &&
+-				   pages[i] == compressed_pages[0]) {
++			if (clusterpages == 1 &&
++			    pages[i] == compressed_pages[0]) {
+ 				memcpy(vin + pageofs, vout + pageofs, j);
+ 			} else {
+ 				void *dst = kmap_atomic(pages[i]);
+@@ -168,12 +164,13 @@ int z_erofs_vle_unzip_fast_percpu(struct page **compressed_pages,
+ 				memcpy(dst + pageofs, vout + pageofs, j);
+ 				kunmap_atomic(dst);
+ 			}
+-			endio(pages[i]);
+ 		}
+ 		vout += PAGE_SIZE;
+ 		outlen -= j;
+ 		pageofs = 0;
+ 	}
++
++out:
+ 	preempt_enable();
+ 
+ 	if (clusterpages == 1)
+diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
+index b92740edc416..4b038f25f256 100644
+--- a/fs/gfs2/glock.c
++++ b/fs/gfs2/glock.c
+@@ -107,7 +107,7 @@ static int glock_wake_function(wait_queue_entry_t *wait, unsigned int mode,
+ 
+ static wait_queue_head_t *glock_waitqueue(struct lm_lockname *name)
+ {
+-	u32 hash = jhash2((u32 *)name, sizeof(*name) / 4, 0);
++	u32 hash = jhash2((u32 *)name, ht_parms.key_len / 4, 0);
+ 
+ 	return glock_wait_table + hash_32(hash, GLOCK_WAIT_TABLE_BITS);
+ }
+diff --git a/include/drm/drm_cache.h b/include/drm/drm_cache.h
+index bfe1639df02d..97fc498dc767 100644
+--- a/include/drm/drm_cache.h
++++ b/include/drm/drm_cache.h
+@@ -47,6 +47,24 @@ static inline bool drm_arch_can_wc_memory(void)
+ 	return false;
+ #elif defined(CONFIG_MIPS) && defined(CONFIG_CPU_LOONGSON3)
+ 	return false;
++#elif defined(CONFIG_ARM) || defined(CONFIG_ARM64)
++	/*
++	 * The DRM driver stack is designed to work with cache coherent devices
++	 * only, but permits an optimization to be enabled in some cases, where
++	 * for some buffers, both the CPU and the GPU use uncached mappings,
++	 * removing the need for DMA snooping and allocation in the CPU caches.
++	 *
++	 * The use of uncached GPU mappings relies on the correct implementation
++	 * of the PCIe NoSnoop TLP attribute by the platform, otherwise the GPU
++	 * will use cached mappings nonetheless. On x86 platforms, this does not
++	 * seem to matter, as uncached CPU mappings will snoop the caches in any
++	 * case. However, on ARM and arm64, enabling this optimization on a
++	 * platform where NoSnoop is ignored results in loss of coherency, which
++	 * breaks correct operation of the device. Since we have no way of
++	 * detecting whether NoSnoop works or not, just disable this
++	 * optimization entirely for ARM and arm64.
++	 */
++	return false;
+ #else
+ 	return true;
+ #endif
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index 8c826603bf36..8bc0ba1ebabe 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -545,6 +545,7 @@ static void sk_psock_destroy_deferred(struct work_struct *gc)
+ 	struct sk_psock *psock = container_of(gc, struct sk_psock, gc);
+ 
+ 	/* No sk_callback_lock since already detached. */
++	strp_stop(&psock->parser.strp);
+ 	strp_done(&psock->parser.strp);
+ 
+ 	cancel_work_sync(&psock->work);
+diff --git a/scripts/gdb/linux/constants.py.in b/scripts/gdb/linux/constants.py.in
+index 7aad82406422..d3319a80788a 100644
+--- a/scripts/gdb/linux/constants.py.in
++++ b/scripts/gdb/linux/constants.py.in
+@@ -37,12 +37,12 @@
+ import gdb
+ 
+ /* linux/fs.h */
+-LX_VALUE(MS_RDONLY)
+-LX_VALUE(MS_SYNCHRONOUS)
+-LX_VALUE(MS_MANDLOCK)
+-LX_VALUE(MS_DIRSYNC)
+-LX_VALUE(MS_NOATIME)
+-LX_VALUE(MS_NODIRATIME)
++LX_VALUE(SB_RDONLY)
++LX_VALUE(SB_SYNCHRONOUS)
++LX_VALUE(SB_MANDLOCK)
++LX_VALUE(SB_DIRSYNC)
++LX_VALUE(SB_NOATIME)
++LX_VALUE(SB_NODIRATIME)
+ 
+ /* linux/mount.h */
+ LX_VALUE(MNT_NOSUID)
+diff --git a/scripts/gdb/linux/proc.py b/scripts/gdb/linux/proc.py
+index 0aebd7565b03..2f01a958eb22 100644
+--- a/scripts/gdb/linux/proc.py
++++ b/scripts/gdb/linux/proc.py
+@@ -114,11 +114,11 @@ def info_opts(lst, opt):
+     return opts
+ 
+ 
+-FS_INFO = {constants.LX_MS_SYNCHRONOUS: ",sync",
+-           constants.LX_MS_MANDLOCK: ",mand",
+-           constants.LX_MS_DIRSYNC: ",dirsync",
+-           constants.LX_MS_NOATIME: ",noatime",
+-           constants.LX_MS_NODIRATIME: ",nodiratime"}
++FS_INFO = {constants.LX_SB_SYNCHRONOUS: ",sync",
++           constants.LX_SB_MANDLOCK: ",mand",
++           constants.LX_SB_DIRSYNC: ",dirsync",
++           constants.LX_SB_NOATIME: ",noatime",
++           constants.LX_SB_NODIRATIME: ",nodiratime"}
+ 
+ MNT_INFO = {constants.LX_MNT_NOSUID: ",nosuid",
+             constants.LX_MNT_NODEV: ",nodev",
+@@ -184,7 +184,7 @@ values of that process namespace"""
+             fstype = superblock['s_type']['name'].string()
+             s_flags = int(superblock['s_flags'])
+             m_flags = int(vfs['mnt']['mnt_flags'])
+-            rd = "ro" if (s_flags & constants.LX_MS_RDONLY) else "rw"
++            rd = "ro" if (s_flags & constants.LX_SB_RDONLY) else "rw"
+ 
+             gdb.write(
+                 "{} {} {} {}{}{} 0 0\n"



* [gentoo-commits] proj/linux-patches:5.0 commit in: /
@ 2019-03-19 17:01 Mike Pagano
  0 siblings, 0 replies; 28+ messages in thread
From: Mike Pagano @ 2019-03-19 17:01 UTC (permalink / raw
  To: gentoo-commits

commit:     9e73079481bd1f7384d57cde9b6d67984fe872cc
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Mar 19 17:00:45 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Mar 19 17:00:45 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9e730794

proj/linux-patches: Linux patch 5.0.3

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |    4 +
 1002_linux-5.0.3.patch | 1487 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1491 insertions(+)

diff --git a/0000_README b/0000_README
index 04daf20..4989a60 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch:  1001_linux-5.0.2.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.0.2
 
+Patch:  1002_linux-5.0.3.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.0.3
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1002_linux-5.0.3.patch b/1002_linux-5.0.3.patch
new file mode 100644
index 0000000..9019944
--- /dev/null
+++ b/1002_linux-5.0.3.patch
@@ -0,0 +1,1487 @@
+diff --git a/Makefile b/Makefile
+index bb2f7664594a..fb888787e7d1 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 0
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index dadb8f7e5a0d..2480feb07df3 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -3398,7 +3398,7 @@ tfa_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
+ 	/*
+ 	 * Without TFA we must not use PMC3.
+ 	 */
+-	if (!allow_tsx_force_abort && test_bit(3, c->idxmsk)) {
++	if (!allow_tsx_force_abort && test_bit(3, c->idxmsk) && idx >= 0) {
+ 		c = dyn_constraint(cpuc, c, idx);
+ 		c->idxmsk64 &= ~(1ULL << 3);
+ 		c->weight--;
+@@ -4142,7 +4142,7 @@ static struct attribute *intel_pmu_caps_attrs[] = {
+        NULL
+ };
+ 
+-DEVICE_BOOL_ATTR(allow_tsx_force_abort, 0644, allow_tsx_force_abort);
++static DEVICE_BOOL_ATTR(allow_tsx_force_abort, 0644, allow_tsx_force_abort);
+ 
+ static struct attribute *intel_pmu_attrs[] = {
+ 	&dev_attr_freeze_on_smi.attr,
+diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
+index a345d079f876..acd72e669c04 100644
+--- a/arch/x86/events/perf_event.h
++++ b/arch/x86/events/perf_event.h
+@@ -1032,12 +1032,12 @@ static inline int intel_pmu_init(void)
+ 	return 0;
+ }
+ 
+-static inline int intel_cpuc_prepare(struct cpu_hw_event *cpuc, int cpu)
++static inline int intel_cpuc_prepare(struct cpu_hw_events *cpuc, int cpu)
+ {
+ 	return 0;
+ }
+ 
+-static inline void intel_cpuc_finish(struct cpu_hw_event *cpuc)
++static inline void intel_cpuc_finish(struct cpu_hw_events *cpuc)
+ {
+ }
+ 
+diff --git a/drivers/connector/cn_proc.c b/drivers/connector/cn_proc.c
+index ed5e42461094..ad48fd52cb53 100644
+--- a/drivers/connector/cn_proc.c
++++ b/drivers/connector/cn_proc.c
+@@ -250,6 +250,7 @@ void proc_coredump_connector(struct task_struct *task)
+ {
+ 	struct cn_msg *msg;
+ 	struct proc_event *ev;
++	struct task_struct *parent;
+ 	__u8 buffer[CN_PROC_MSG_SIZE] __aligned(8);
+ 
+ 	if (atomic_read(&proc_event_num_listeners) < 1)
+@@ -262,8 +263,14 @@ void proc_coredump_connector(struct task_struct *task)
+ 	ev->what = PROC_EVENT_COREDUMP;
+ 	ev->event_data.coredump.process_pid = task->pid;
+ 	ev->event_data.coredump.process_tgid = task->tgid;
+-	ev->event_data.coredump.parent_pid = task->real_parent->pid;
+-	ev->event_data.coredump.parent_tgid = task->real_parent->tgid;
++
++	rcu_read_lock();
++	if (pid_alive(task)) {
++		parent = rcu_dereference(task->real_parent);
++		ev->event_data.coredump.parent_pid = parent->pid;
++		ev->event_data.coredump.parent_tgid = parent->tgid;
++	}
++	rcu_read_unlock();
+ 
+ 	memcpy(&msg->id, &cn_proc_event_id, sizeof(msg->id));
+ 	msg->ack = 0; /* not used */
+@@ -276,6 +283,7 @@ void proc_exit_connector(struct task_struct *task)
+ {
+ 	struct cn_msg *msg;
+ 	struct proc_event *ev;
++	struct task_struct *parent;
+ 	__u8 buffer[CN_PROC_MSG_SIZE] __aligned(8);
+ 
+ 	if (atomic_read(&proc_event_num_listeners) < 1)
+@@ -290,8 +298,14 @@ void proc_exit_connector(struct task_struct *task)
+ 	ev->event_data.exit.process_tgid = task->tgid;
+ 	ev->event_data.exit.exit_code = task->exit_code;
+ 	ev->event_data.exit.exit_signal = task->exit_signal;
+-	ev->event_data.exit.parent_pid = task->real_parent->pid;
+-	ev->event_data.exit.parent_tgid = task->real_parent->tgid;
++
++	rcu_read_lock();
++	if (pid_alive(task)) {
++		parent = rcu_dereference(task->real_parent);
++		ev->event_data.exit.parent_pid = parent->pid;
++		ev->event_data.exit.parent_tgid = parent->tgid;
++	}
++	rcu_read_unlock();
+ 
+ 	memcpy(&msg->id, &cn_proc_event_id, sizeof(msg->id));
+ 	msg->ack = 0; /* not used */
+diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
+index f4290f6b0c38..2323ba9310d9 100644
+--- a/drivers/gpu/drm/drm_atomic_helper.c
++++ b/drivers/gpu/drm/drm_atomic_helper.c
+@@ -1611,6 +1611,15 @@ int drm_atomic_helper_async_check(struct drm_device *dev,
+ 	if (old_plane_state->fb != new_plane_state->fb)
+ 		return -EINVAL;
+ 
++	/*
++	 * FIXME: Since prepare_fb and cleanup_fb are always called on
++	 * the new_plane_state for async updates we need to block framebuffer
++	 * changes. This prevents use of a fb that's been cleaned up and
++	 * double cleanups from occuring.
++	 */
++	if (old_plane_state->fb != new_plane_state->fb)
++		return -EINVAL;
++
+ 	funcs = plane->helper_private;
+ 	if (!funcs->atomic_async_update)
+ 		return -EINVAL;
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index abb5d382f64d..ecef42bfe19d 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -4670,7 +4670,6 @@ read_more:
+ 	atomic_inc(&r10_bio->remaining);
+ 	read_bio->bi_next = NULL;
+ 	generic_make_request(read_bio);
+-	sector_nr += nr_sectors;
+ 	sectors_done += nr_sectors;
+ 	if (sector_nr <= last)
+ 		goto read_more;
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 76cc163b3cf1..4a0ec8e87c7a 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -559,6 +559,9 @@ static int mv88e6xxx_port_setup_mac(struct mv88e6xxx_chip *chip, int port,
+ 			goto restore_link;
+ 	}
+ 
++	if (speed == SPEED_MAX && chip->info->ops->port_max_speed_mode)
++		mode = chip->info->ops->port_max_speed_mode(port);
++
+ 	if (chip->info->ops->port_set_pause) {
+ 		err = chip->info->ops->port_set_pause(chip, port, pause);
+ 		if (err)
+@@ -3042,6 +3045,7 @@ static const struct mv88e6xxx_ops mv88e6141_ops = {
+ 	.port_set_duplex = mv88e6xxx_port_set_duplex,
+ 	.port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
+ 	.port_set_speed = mv88e6341_port_set_speed,
++	.port_max_speed_mode = mv88e6341_port_max_speed_mode,
+ 	.port_tag_remap = mv88e6095_port_tag_remap,
+ 	.port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ 	.port_set_egress_floods = mv88e6352_port_set_egress_floods,
+@@ -3360,6 +3364,7 @@ static const struct mv88e6xxx_ops mv88e6190_ops = {
+ 	.port_set_duplex = mv88e6xxx_port_set_duplex,
+ 	.port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
+ 	.port_set_speed = mv88e6390_port_set_speed,
++	.port_max_speed_mode = mv88e6390_port_max_speed_mode,
+ 	.port_tag_remap = mv88e6390_port_tag_remap,
+ 	.port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ 	.port_set_egress_floods = mv88e6352_port_set_egress_floods,
+@@ -3404,6 +3409,7 @@ static const struct mv88e6xxx_ops mv88e6190x_ops = {
+ 	.port_set_duplex = mv88e6xxx_port_set_duplex,
+ 	.port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
+ 	.port_set_speed = mv88e6390x_port_set_speed,
++	.port_max_speed_mode = mv88e6390x_port_max_speed_mode,
+ 	.port_tag_remap = mv88e6390_port_tag_remap,
+ 	.port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ 	.port_set_egress_floods = mv88e6352_port_set_egress_floods,
+@@ -3448,6 +3454,7 @@ static const struct mv88e6xxx_ops mv88e6191_ops = {
+ 	.port_set_duplex = mv88e6xxx_port_set_duplex,
+ 	.port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
+ 	.port_set_speed = mv88e6390_port_set_speed,
++	.port_max_speed_mode = mv88e6390_port_max_speed_mode,
+ 	.port_tag_remap = mv88e6390_port_tag_remap,
+ 	.port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ 	.port_set_egress_floods = mv88e6352_port_set_egress_floods,
+@@ -3541,6 +3548,7 @@ static const struct mv88e6xxx_ops mv88e6290_ops = {
+ 	.port_set_duplex = mv88e6xxx_port_set_duplex,
+ 	.port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
+ 	.port_set_speed = mv88e6390_port_set_speed,
++	.port_max_speed_mode = mv88e6390_port_max_speed_mode,
+ 	.port_tag_remap = mv88e6390_port_tag_remap,
+ 	.port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ 	.port_set_egress_floods = mv88e6352_port_set_egress_floods,
+@@ -3672,6 +3680,7 @@ static const struct mv88e6xxx_ops mv88e6341_ops = {
+ 	.port_set_duplex = mv88e6xxx_port_set_duplex,
+ 	.port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
+ 	.port_set_speed = mv88e6341_port_set_speed,
++	.port_max_speed_mode = mv88e6341_port_max_speed_mode,
+ 	.port_tag_remap = mv88e6095_port_tag_remap,
+ 	.port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ 	.port_set_egress_floods = mv88e6352_port_set_egress_floods,
+@@ -3847,6 +3856,7 @@ static const struct mv88e6xxx_ops mv88e6390_ops = {
+ 	.port_set_duplex = mv88e6xxx_port_set_duplex,
+ 	.port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
+ 	.port_set_speed = mv88e6390_port_set_speed,
++	.port_max_speed_mode = mv88e6390_port_max_speed_mode,
+ 	.port_tag_remap = mv88e6390_port_tag_remap,
+ 	.port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ 	.port_set_egress_floods = mv88e6352_port_set_egress_floods,
+@@ -3895,6 +3905,7 @@ static const struct mv88e6xxx_ops mv88e6390x_ops = {
+ 	.port_set_duplex = mv88e6xxx_port_set_duplex,
+ 	.port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
+ 	.port_set_speed = mv88e6390x_port_set_speed,
++	.port_max_speed_mode = mv88e6390x_port_max_speed_mode,
+ 	.port_tag_remap = mv88e6390_port_tag_remap,
+ 	.port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ 	.port_set_egress_floods = mv88e6352_port_set_egress_floods,
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.h b/drivers/net/dsa/mv88e6xxx/chip.h
+index 546651d8c3e1..dfb1af65c205 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.h
++++ b/drivers/net/dsa/mv88e6xxx/chip.h
+@@ -377,6 +377,9 @@ struct mv88e6xxx_ops {
+ 	 */
+ 	int (*port_set_speed)(struct mv88e6xxx_chip *chip, int port, int speed);
+ 
++	/* What interface mode should be used for maximum speed? */
++	phy_interface_t (*port_max_speed_mode)(int port);
++
+ 	int (*port_tag_remap)(struct mv88e6xxx_chip *chip, int port);
+ 
+ 	int (*port_set_frame_mode)(struct mv88e6xxx_chip *chip, int port,
+diff --git a/drivers/net/dsa/mv88e6xxx/port.c b/drivers/net/dsa/mv88e6xxx/port.c
+index 184c2b1b3115..5e921bb6c214 100644
+--- a/drivers/net/dsa/mv88e6xxx/port.c
++++ b/drivers/net/dsa/mv88e6xxx/port.c
+@@ -312,6 +312,14 @@ int mv88e6341_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed)
+ 	return mv88e6xxx_port_set_speed(chip, port, speed, !port, true);
+ }
+ 
++phy_interface_t mv88e6341_port_max_speed_mode(int port)
++{
++	if (port == 5)
++		return PHY_INTERFACE_MODE_2500BASEX;
++
++	return PHY_INTERFACE_MODE_NA;
++}
++
+ /* Support 10, 100, 200, 1000 Mbps (e.g. 88E6352 family) */
+ int mv88e6352_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed)
+ {
+@@ -345,6 +353,14 @@ int mv88e6390_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed)
+ 	return mv88e6xxx_port_set_speed(chip, port, speed, true, true);
+ }
+ 
++phy_interface_t mv88e6390_port_max_speed_mode(int port)
++{
++	if (port == 9 || port == 10)
++		return PHY_INTERFACE_MODE_2500BASEX;
++
++	return PHY_INTERFACE_MODE_NA;
++}
++
+ /* Support 10, 100, 200, 1000, 2500, 10000 Mbps (e.g. 88E6190X) */
+ int mv88e6390x_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed)
+ {
+@@ -360,6 +376,14 @@ int mv88e6390x_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed)
+ 	return mv88e6xxx_port_set_speed(chip, port, speed, true, true);
+ }
+ 
++phy_interface_t mv88e6390x_port_max_speed_mode(int port)
++{
++	if (port == 9 || port == 10)
++		return PHY_INTERFACE_MODE_XAUI;
++
++	return PHY_INTERFACE_MODE_NA;
++}
++
+ int mv88e6390x_port_set_cmode(struct mv88e6xxx_chip *chip, int port,
+ 			      phy_interface_t mode)
+ {
+diff --git a/drivers/net/dsa/mv88e6xxx/port.h b/drivers/net/dsa/mv88e6xxx/port.h
+index 4aadf321edb7..c7bed263a0f4 100644
+--- a/drivers/net/dsa/mv88e6xxx/port.h
++++ b/drivers/net/dsa/mv88e6xxx/port.h
+@@ -285,6 +285,10 @@ int mv88e6352_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed);
+ int mv88e6390_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed);
+ int mv88e6390x_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed);
+ 
++phy_interface_t mv88e6341_port_max_speed_mode(int port);
++phy_interface_t mv88e6390_port_max_speed_mode(int port);
++phy_interface_t mv88e6390x_port_max_speed_mode(int port);
++
+ int mv88e6xxx_port_set_state(struct mv88e6xxx_chip *chip, int port, u8 state);
+ 
+ int mv88e6xxx_port_set_vlan_map(struct mv88e6xxx_chip *chip, int port, u16 map);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+index 36eab37d8a40..09c774fe8853 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+@@ -192,6 +192,7 @@ struct hnae3_ae_dev {
+ 	const struct hnae3_ae_ops *ops;
+ 	struct list_head node;
+ 	u32 flag;
++	u8 override_pci_need_reset; /* fix to stop multiple reset happening */
+ 	enum hnae3_dev_type dev_type;
+ 	enum hnae3_reset_type reset_type;
+ 	void *priv;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index 1bf7a5f116a0..d84c50068f66 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -1852,7 +1852,9 @@ static pci_ers_result_t hns3_slot_reset(struct pci_dev *pdev)
+ 
+ 	/* request the reset */
+ 	if (ae_dev->ops->reset_event) {
+-		ae_dev->ops->reset_event(pdev, NULL);
++		if (!ae_dev->override_pci_need_reset)
++			ae_dev->ops->reset_event(pdev, NULL);
++
+ 		return PCI_ERS_RESULT_RECOVERED;
+ 	}
+ 
+@@ -2476,6 +2478,8 @@ static int hns3_add_frag(struct hns3_enet_ring *ring, struct hns3_desc *desc,
+ 		desc = &ring->desc[ring->next_to_clean];
+ 		desc_cb = &ring->desc_cb[ring->next_to_clean];
+ 		bd_base_info = le32_to_cpu(desc->rx.bd_base_info);
++		/* make sure HW write desc complete */
++		dma_rmb();
+ 		if (!hnae3_get_bit(bd_base_info, HNS3_RXD_VLD_B))
+ 			return -ENXIO;
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
+index d0f654123b9b..efb6c1a25171 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
+@@ -1259,8 +1259,10 @@ pci_ers_result_t hclge_handle_hw_ras_error(struct hnae3_ae_dev *ae_dev)
+ 		hclge_handle_all_ras_errors(hdev);
+ 	} else {
+ 		if (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state) ||
+-		    hdev->pdev->revision < 0x21)
++		    hdev->pdev->revision < 0x21) {
++			ae_dev->override_pci_need_reset = 1;
+ 			return PCI_ERS_RESULT_RECOVERED;
++		}
+ 	}
+ 
+ 	if (status & HCLGE_RAS_REG_ROCEE_ERR_MASK) {
+@@ -1269,8 +1271,11 @@ pci_ers_result_t hclge_handle_hw_ras_error(struct hnae3_ae_dev *ae_dev)
+ 	}
+ 
+ 	if (status & HCLGE_RAS_REG_NFE_MASK ||
+-	    status & HCLGE_RAS_REG_ROCEE_ERR_MASK)
++	    status & HCLGE_RAS_REG_ROCEE_ERR_MASK) {
++		ae_dev->override_pci_need_reset = 0;
+ 		return PCI_ERS_RESULT_NEED_RESET;
++	}
++	ae_dev->override_pci_need_reset = 1;
+ 
+ 	return PCI_ERS_RESULT_RECOVERED;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx4/cmd.c b/drivers/net/ethernet/mellanox/mlx4/cmd.c
+index e65bc3c95630..857588e2488d 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx4/cmd.c
+@@ -2645,6 +2645,8 @@ int mlx4_cmd_use_events(struct mlx4_dev *dev)
+ 	if (!priv->cmd.context)
+ 		return -ENOMEM;
+ 
++	if (mlx4_is_mfunc(dev))
++		mutex_lock(&priv->cmd.slave_cmd_mutex);
+ 	down_write(&priv->cmd.switch_sem);
+ 	for (i = 0; i < priv->cmd.max_cmds; ++i) {
+ 		priv->cmd.context[i].token = i;
+@@ -2670,6 +2672,8 @@ int mlx4_cmd_use_events(struct mlx4_dev *dev)
+ 	down(&priv->cmd.poll_sem);
+ 	priv->cmd.use_events = 1;
+ 	up_write(&priv->cmd.switch_sem);
++	if (mlx4_is_mfunc(dev))
++		mutex_unlock(&priv->cmd.slave_cmd_mutex);
+ 
+ 	return err;
+ }
+@@ -2682,6 +2686,8 @@ void mlx4_cmd_use_polling(struct mlx4_dev *dev)
+ 	struct mlx4_priv *priv = mlx4_priv(dev);
+ 	int i;
+ 
++	if (mlx4_is_mfunc(dev))
++		mutex_lock(&priv->cmd.slave_cmd_mutex);
+ 	down_write(&priv->cmd.switch_sem);
+ 	priv->cmd.use_events = 0;
+ 
+@@ -2689,9 +2695,12 @@ void mlx4_cmd_use_polling(struct mlx4_dev *dev)
+ 		down(&priv->cmd.event_sem);
+ 
+ 	kfree(priv->cmd.context);
++	priv->cmd.context = NULL;
+ 
+ 	up(&priv->cmd.poll_sem);
+ 	up_write(&priv->cmd.switch_sem);
++	if (mlx4_is_mfunc(dev))
++		mutex_unlock(&priv->cmd.slave_cmd_mutex);
+ }
+ 
+ struct mlx4_cmd_mailbox *mlx4_alloc_cmd_mailbox(struct mlx4_dev *dev)
+diff --git a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
+index eb13d3618162..4356f3a58002 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
++++ b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
+@@ -2719,13 +2719,13 @@ static int qp_get_mtt_size(struct mlx4_qp_context *qpc)
+ 	int total_pages;
+ 	int total_mem;
+ 	int page_offset = (be32_to_cpu(qpc->params2) >> 6) & 0x3f;
++	int tot;
+ 
+ 	sq_size = 1 << (log_sq_size + log_sq_sride + 4);
+ 	rq_size = (srq|rss|xrc) ? 0 : (1 << (log_rq_size + log_rq_stride + 4));
+ 	total_mem = sq_size + rq_size;
+-	total_pages =
+-		roundup_pow_of_two((total_mem + (page_offset << 6)) >>
+-				   page_shift);
++	tot = (total_mem + (page_offset << 6)) >> page_shift;
++	total_pages = !tot ? 1 : roundup_pow_of_two(tot);
+ 
+ 	return total_pages;
+ }
+diff --git a/drivers/net/ethernet/microchip/lan743x_main.c b/drivers/net/ethernet/microchip/lan743x_main.c
+index 4d1b4a24907f..13e6bf13ac4d 100644
+--- a/drivers/net/ethernet/microchip/lan743x_main.c
++++ b/drivers/net/ethernet/microchip/lan743x_main.c
+@@ -585,8 +585,7 @@ static int lan743x_intr_open(struct lan743x_adapter *adapter)
+ 
+ 		if (adapter->csr.flags &
+ 		   LAN743X_CSR_FLAG_SUPPORTS_INTR_AUTO_SET_CLR) {
+-			flags = LAN743X_VECTOR_FLAG_VECTOR_ENABLE_AUTO_CLEAR |
+-				LAN743X_VECTOR_FLAG_VECTOR_ENABLE_AUTO_SET |
++			flags = LAN743X_VECTOR_FLAG_VECTOR_ENABLE_AUTO_SET |
+ 				LAN743X_VECTOR_FLAG_SOURCE_ENABLE_AUTO_SET |
+ 				LAN743X_VECTOR_FLAG_SOURCE_ENABLE_AUTO_CLEAR |
+ 				LAN743X_VECTOR_FLAG_SOURCE_STATUS_AUTO_CLEAR;
+@@ -599,12 +598,6 @@ static int lan743x_intr_open(struct lan743x_adapter *adapter)
+ 			/* map TX interrupt to vector */
+ 			int_vec_map1 |= INT_VEC_MAP1_TX_VEC_(index, vector);
+ 			lan743x_csr_write(adapter, INT_VEC_MAP1, int_vec_map1);
+-			if (flags &
+-			    LAN743X_VECTOR_FLAG_VECTOR_ENABLE_AUTO_CLEAR) {
+-				int_vec_en_auto_clr |= INT_VEC_EN_(vector);
+-				lan743x_csr_write(adapter, INT_VEC_EN_AUTO_CLR,
+-						  int_vec_en_auto_clr);
+-			}
+ 
+ 			/* Remove TX interrupt from shared mask */
+ 			intr->vector_list[0].int_mask &= ~int_bit;
+@@ -1902,7 +1895,17 @@ static int lan743x_rx_next_index(struct lan743x_rx *rx, int index)
+ 	return ((++index) % rx->ring_size);
+ }
+ 
+-static int lan743x_rx_allocate_ring_element(struct lan743x_rx *rx, int index)
++static struct sk_buff *lan743x_rx_allocate_skb(struct lan743x_rx *rx)
++{
++	int length = 0;
++
++	length = (LAN743X_MAX_FRAME_SIZE + ETH_HLEN + 4 + RX_HEAD_PADDING);
++	return __netdev_alloc_skb(rx->adapter->netdev,
++				  length, GFP_ATOMIC | GFP_DMA);
++}
++
++static int lan743x_rx_init_ring_element(struct lan743x_rx *rx, int index,
++					struct sk_buff *skb)
+ {
+ 	struct lan743x_rx_buffer_info *buffer_info;
+ 	struct lan743x_rx_descriptor *descriptor;
+@@ -1911,9 +1914,7 @@ static int lan743x_rx_allocate_ring_element(struct lan743x_rx *rx, int index)
+ 	length = (LAN743X_MAX_FRAME_SIZE + ETH_HLEN + 4 + RX_HEAD_PADDING);
+ 	descriptor = &rx->ring_cpu_ptr[index];
+ 	buffer_info = &rx->buffer_info[index];
+-	buffer_info->skb = __netdev_alloc_skb(rx->adapter->netdev,
+-					      length,
+-					      GFP_ATOMIC | GFP_DMA);
++	buffer_info->skb = skb;
+ 	if (!(buffer_info->skb))
+ 		return -ENOMEM;
+ 	buffer_info->dma_ptr = dma_map_single(&rx->adapter->pdev->dev,
+@@ -2060,8 +2061,19 @@ static int lan743x_rx_process_packet(struct lan743x_rx *rx)
+ 		/* packet is available */
+ 		if (first_index == last_index) {
+ 			/* single buffer packet */
++			struct sk_buff *new_skb = NULL;
+ 			int packet_length;
+ 
++			new_skb = lan743x_rx_allocate_skb(rx);
++			if (!new_skb) {
++				/* failed to allocate next skb.
++				 * Memory is very low.
++				 * Drop this packet and reuse buffer.
++				 */
++				lan743x_rx_reuse_ring_element(rx, first_index);
++				goto process_extension;
++			}
++
+ 			buffer_info = &rx->buffer_info[first_index];
+ 			skb = buffer_info->skb;
+ 			descriptor = &rx->ring_cpu_ptr[first_index];
+@@ -2081,7 +2093,7 @@ static int lan743x_rx_process_packet(struct lan743x_rx *rx)
+ 			skb_put(skb, packet_length - 4);
+ 			skb->protocol = eth_type_trans(skb,
+ 						       rx->adapter->netdev);
+-			lan743x_rx_allocate_ring_element(rx, first_index);
++			lan743x_rx_init_ring_element(rx, first_index, new_skb);
+ 		} else {
+ 			int index = first_index;
+ 
+@@ -2094,26 +2106,23 @@ static int lan743x_rx_process_packet(struct lan743x_rx *rx)
+ 			if (first_index <= last_index) {
+ 				while ((index >= first_index) &&
+ 				       (index <= last_index)) {
+-					lan743x_rx_release_ring_element(rx,
+-									index);
+-					lan743x_rx_allocate_ring_element(rx,
+-									 index);
++					lan743x_rx_reuse_ring_element(rx,
++								      index);
+ 					index = lan743x_rx_next_index(rx,
+ 								      index);
+ 				}
+ 			} else {
+ 				while ((index >= first_index) ||
+ 				       (index <= last_index)) {
+-					lan743x_rx_release_ring_element(rx,
+-									index);
+-					lan743x_rx_allocate_ring_element(rx,
+-									 index);
++					lan743x_rx_reuse_ring_element(rx,
++								      index);
+ 					index = lan743x_rx_next_index(rx,
+ 								      index);
+ 				}
+ 			}
+ 		}
+ 
++process_extension:
+ 		if (extension_index >= 0) {
+ 			descriptor = &rx->ring_cpu_ptr[extension_index];
+ 			buffer_info = &rx->buffer_info[extension_index];
+@@ -2290,7 +2299,9 @@ static int lan743x_rx_ring_init(struct lan743x_rx *rx)
+ 
+ 	rx->last_head = 0;
+ 	for (index = 0; index < rx->ring_size; index++) {
+-		ret = lan743x_rx_allocate_ring_element(rx, index);
++		struct sk_buff *new_skb = lan743x_rx_allocate_skb(rx);
++
++		ret = lan743x_rx_init_ring_element(rx, index, new_skb);
+ 		if (ret)
+ 			goto cleanup;
+ 	}
+diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
+index d28c8f9ca55b..8154b38c08f7 100644
+--- a/drivers/net/ethernet/renesas/ravb_main.c
++++ b/drivers/net/ethernet/renesas/ravb_main.c
+@@ -458,7 +458,7 @@ static int ravb_dmac_init(struct net_device *ndev)
+ 		   RCR_EFFS | RCR_ENCF | RCR_ETS0 | RCR_ESF | 0x18000000, RCR);
+ 
+ 	/* Set FIFO size */
+-	ravb_write(ndev, TGC_TQP_AVBMODE1 | 0x00222200, TGC);
++	ravb_write(ndev, TGC_TQP_AVBMODE1 | 0x00112200, TGC);
+ 
+ 	/* Timestamp enable */
+ 	ravb_write(ndev, TCCR_TFEN, TCCR);
+diff --git a/drivers/net/ppp/pptp.c b/drivers/net/ppp/pptp.c
+index 8f09edd811e9..50c60550f295 100644
+--- a/drivers/net/ppp/pptp.c
++++ b/drivers/net/ppp/pptp.c
+@@ -532,6 +532,7 @@ static void pptp_sock_destruct(struct sock *sk)
+ 		pppox_unbind_sock(sk);
+ 	}
+ 	skb_queue_purge(&sk->sk_receive_queue);
++	dst_release(rcu_dereference_protected(sk->sk_dst_cache, 1));
+ }
+ 
+ static int pptp_create(struct net *net, struct socket *sock, int kern)
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+index 2aae11feff0c..d6fb6a89f9b3 100644
+--- a/drivers/net/vxlan.c
++++ b/drivers/net/vxlan.c
+@@ -1657,6 +1657,14 @@ static int vxlan_rcv(struct sock *sk, struct sk_buff *skb)
+ 		goto drop;
+ 	}
+ 
++	rcu_read_lock();
++
++	if (unlikely(!(vxlan->dev->flags & IFF_UP))) {
++		rcu_read_unlock();
++		atomic_long_inc(&vxlan->dev->rx_dropped);
++		goto drop;
++	}
++
+ 	stats = this_cpu_ptr(vxlan->dev->tstats);
+ 	u64_stats_update_begin(&stats->syncp);
+ 	stats->rx_packets++;
+@@ -1664,6 +1672,9 @@ static int vxlan_rcv(struct sock *sk, struct sk_buff *skb)
+ 	u64_stats_update_end(&stats->syncp);
+ 
+ 	gro_cells_receive(&vxlan->gro_cells, skb);
++
++	rcu_read_unlock();
++
+ 	return 0;
+ 
+ drop:
+@@ -2693,6 +2704,8 @@ static void vxlan_uninit(struct net_device *dev)
+ {
+ 	struct vxlan_dev *vxlan = netdev_priv(dev);
+ 
++	gro_cells_destroy(&vxlan->gro_cells);
++
+ 	vxlan_fdb_delete_default(vxlan, vxlan->cfg.vni);
+ 
+ 	free_percpu(dev->tstats);
+@@ -3794,7 +3807,6 @@ static void vxlan_dellink(struct net_device *dev, struct list_head *head)
+ 
+ 	vxlan_flush(vxlan, true);
+ 
+-	gro_cells_destroy(&vxlan->gro_cells);
+ 	list_del(&vxlan->next);
+ 	unregister_netdevice_queue(dev, head);
+ }
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index bba56b39dcc5..ae2b45e75847 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -1750,10 +1750,12 @@ static int f2fs_ioc_start_atomic_write(struct file *filp)
+ 
+ 	down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+ 
+-	if (!get_dirty_pages(inode))
+-		goto skip_flush;
+-
+-	f2fs_msg(F2FS_I_SB(inode)->sb, KERN_WARNING,
++	/*
++	 * Should wait end_io to count F2FS_WB_CP_DATA correctly by
++	 * f2fs_is_atomic_file.
++	 */
++	if (get_dirty_pages(inode))
++		f2fs_msg(F2FS_I_SB(inode)->sb, KERN_WARNING,
+ 		"Unexpected flush for atomic writes: ino=%lu, npages=%u",
+ 					inode->i_ino, get_dirty_pages(inode));
+ 	ret = filemap_write_and_wait_range(inode->i_mapping, 0, LLONG_MAX);
+@@ -1761,7 +1763,7 @@ static int f2fs_ioc_start_atomic_write(struct file *filp)
+ 		up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+ 		goto out;
+ 	}
+-skip_flush:
++
+ 	set_inode_flag(inode, FI_ATOMIC_FILE);
+ 	clear_inode_flag(inode, FI_ATOMIC_REVOKE_REQUEST);
+ 	up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+diff --git a/net/core/gro_cells.c b/net/core/gro_cells.c
+index acf45ddbe924..e095fb871d91 100644
+--- a/net/core/gro_cells.c
++++ b/net/core/gro_cells.c
+@@ -13,22 +13,36 @@ int gro_cells_receive(struct gro_cells *gcells, struct sk_buff *skb)
+ {
+ 	struct net_device *dev = skb->dev;
+ 	struct gro_cell *cell;
++	int res;
+ 
+-	if (!gcells->cells || skb_cloned(skb) || netif_elide_gro(dev))
+-		return netif_rx(skb);
++	rcu_read_lock();
++	if (unlikely(!(dev->flags & IFF_UP)))
++		goto drop;
++
++	if (!gcells->cells || skb_cloned(skb) || netif_elide_gro(dev)) {
++		res = netif_rx(skb);
++		goto unlock;
++	}
+ 
+ 	cell = this_cpu_ptr(gcells->cells);
+ 
+ 	if (skb_queue_len(&cell->napi_skbs) > netdev_max_backlog) {
++drop:
+ 		atomic_long_inc(&dev->rx_dropped);
+ 		kfree_skb(skb);
+-		return NET_RX_DROP;
++		res = NET_RX_DROP;
++		goto unlock;
+ 	}
+ 
+ 	__skb_queue_tail(&cell->napi_skbs, skb);
+ 	if (skb_queue_len(&cell->napi_skbs) == 1)
+ 		napi_schedule(&cell->napi);
+-	return NET_RX_SUCCESS;
++
++	res = NET_RX_SUCCESS;
++
++unlock:
++	rcu_read_unlock();
++	return res;
+ }
+ EXPORT_SYMBOL(gro_cells_receive);
+ 
+diff --git a/net/hsr/hsr_device.c b/net/hsr/hsr_device.c
+index b8cd43c9ed5b..a97bf326b231 100644
+--- a/net/hsr/hsr_device.c
++++ b/net/hsr/hsr_device.c
+@@ -94,9 +94,8 @@ static void hsr_check_announce(struct net_device *hsr_dev,
+ 			&& (old_operstate != IF_OPER_UP)) {
+ 		/* Went up */
+ 		hsr->announce_count = 0;
+-		hsr->announce_timer.expires = jiffies +
+-				msecs_to_jiffies(HSR_ANNOUNCE_INTERVAL);
+-		add_timer(&hsr->announce_timer);
++		mod_timer(&hsr->announce_timer,
++			  jiffies + msecs_to_jiffies(HSR_ANNOUNCE_INTERVAL));
+ 	}
+ 
+ 	if ((hsr_dev->operstate != IF_OPER_UP) && (old_operstate == IF_OPER_UP))
+@@ -332,6 +331,7 @@ static void hsr_announce(struct timer_list *t)
+ {
+ 	struct hsr_priv *hsr;
+ 	struct hsr_port *master;
++	unsigned long interval;
+ 
+ 	hsr = from_timer(hsr, t, announce_timer);
+ 
+@@ -343,18 +343,16 @@ static void hsr_announce(struct timer_list *t)
+ 				hsr->protVersion);
+ 		hsr->announce_count++;
+ 
+-		hsr->announce_timer.expires = jiffies +
+-				msecs_to_jiffies(HSR_ANNOUNCE_INTERVAL);
++		interval = msecs_to_jiffies(HSR_ANNOUNCE_INTERVAL);
+ 	} else {
+ 		send_hsr_supervision_frame(master, HSR_TLV_LIFE_CHECK,
+ 				hsr->protVersion);
+ 
+-		hsr->announce_timer.expires = jiffies +
+-				msecs_to_jiffies(HSR_LIFE_CHECK_INTERVAL);
++		interval = msecs_to_jiffies(HSR_LIFE_CHECK_INTERVAL);
+ 	}
+ 
+ 	if (is_admin_up(master->dev))
+-		add_timer(&hsr->announce_timer);
++		mod_timer(&hsr->announce_timer, jiffies + interval);
+ 
+ 	rcu_read_unlock();
+ }
+@@ -486,7 +484,7 @@ int hsr_dev_finalize(struct net_device *hsr_dev, struct net_device *slave[2],
+ 
+ 	res = hsr_add_port(hsr, hsr_dev, HSR_PT_MASTER);
+ 	if (res)
+-		return res;
++		goto err_add_port;
+ 
+ 	res = register_netdevice(hsr_dev);
+ 	if (res)
+@@ -506,6 +504,8 @@ int hsr_dev_finalize(struct net_device *hsr_dev, struct net_device *slave[2],
+ fail:
+ 	hsr_for_each_port(hsr, port)
+ 		hsr_del_port(port);
++err_add_port:
++	hsr_del_node(&hsr->self_node_db);
+ 
+ 	return res;
+ }
+diff --git a/net/hsr/hsr_framereg.c b/net/hsr/hsr_framereg.c
+index 286ceb41ac0c..9af16cb68f76 100644
+--- a/net/hsr/hsr_framereg.c
++++ b/net/hsr/hsr_framereg.c
+@@ -124,6 +124,18 @@ int hsr_create_self_node(struct list_head *self_node_db,
+ 	return 0;
+ }
+ 
++void hsr_del_node(struct list_head *self_node_db)
++{
++	struct hsr_node *node;
++
++	rcu_read_lock();
++	node = list_first_or_null_rcu(self_node_db, struct hsr_node, mac_list);
++	rcu_read_unlock();
++	if (node) {
++		list_del_rcu(&node->mac_list);
++		kfree(node);
++	}
++}
+ 
+ /* Allocate an hsr_node and add it to node_db. 'addr' is the node's AddressA;
+  * seq_out is used to initialize filtering of outgoing duplicate frames
+diff --git a/net/hsr/hsr_framereg.h b/net/hsr/hsr_framereg.h
+index 370b45998121..531fd3dfcac1 100644
+--- a/net/hsr/hsr_framereg.h
++++ b/net/hsr/hsr_framereg.h
+@@ -16,6 +16,7 @@
+ 
+ struct hsr_node;
+ 
++void hsr_del_node(struct list_head *self_node_db);
+ struct hsr_node *hsr_add_node(struct list_head *node_db, unsigned char addr[],
+ 			      u16 seq_out);
+ struct hsr_node *hsr_get_node(struct hsr_port *port, struct sk_buff *skb,
+diff --git a/net/ipv4/fou.c b/net/ipv4/fou.c
+index 437070d1ffb1..79e98e21cdd7 100644
+--- a/net/ipv4/fou.c
++++ b/net/ipv4/fou.c
+@@ -1024,7 +1024,7 @@ static int gue_err(struct sk_buff *skb, u32 info)
+ 	int ret;
+ 
+ 	len = sizeof(struct udphdr) + sizeof(struct guehdr);
+-	if (!pskb_may_pull(skb, len))
++	if (!pskb_may_pull(skb, transport_offset + len))
+ 		return -EINVAL;
+ 
+ 	guehdr = (struct guehdr *)&udp_hdr(skb)[1];
+@@ -1059,7 +1059,7 @@ static int gue_err(struct sk_buff *skb, u32 info)
+ 
+ 	optlen = guehdr->hlen << 2;
+ 
+-	if (!pskb_may_pull(skb, len + optlen))
++	if (!pskb_may_pull(skb, transport_offset + len + optlen))
+ 		return -EINVAL;
+ 
+ 	guehdr = (struct guehdr *)&udp_hdr(skb)[1];
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 7bb9128c8363..e04cdb58a602 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -1303,6 +1303,10 @@ static void ip_del_fnhe(struct fib_nh *nh, __be32 daddr)
+ 		if (fnhe->fnhe_daddr == daddr) {
+ 			rcu_assign_pointer(*fnhe_p, rcu_dereference_protected(
+ 				fnhe->fnhe_next, lockdep_is_held(&fnhe_lock)));
++			/* set fnhe_daddr to 0 to ensure it won't bind with
++			 * new dsts in rt_bind_exception().
++			 */
++			fnhe->fnhe_daddr = 0;
+ 			fnhe_flush_routes(fnhe);
+ 			kfree_rcu(fnhe, rcu);
+ 			break;
+@@ -2144,12 +2148,13 @@ int ip_route_input_rcu(struct sk_buff *skb, __be32 daddr, __be32 saddr,
+ 		int our = 0;
+ 		int err = -EINVAL;
+ 
+-		if (in_dev)
+-			our = ip_check_mc_rcu(in_dev, daddr, saddr,
+-					      ip_hdr(skb)->protocol);
++		if (!in_dev)
++			return err;
++		our = ip_check_mc_rcu(in_dev, daddr, saddr,
++				      ip_hdr(skb)->protocol);
+ 
+ 		/* check l3 master if no match yet */
+-		if ((!in_dev || !our) && netif_is_l3_slave(dev)) {
++		if (!our && netif_is_l3_slave(dev)) {
+ 			struct in_device *l3_in_dev;
+ 
+ 			l3_in_dev = __in_dev_get_rcu(skb->dev);
+diff --git a/net/ipv4/syncookies.c b/net/ipv4/syncookies.c
+index 606f868d9f3f..e531344611a0 100644
+--- a/net/ipv4/syncookies.c
++++ b/net/ipv4/syncookies.c
+@@ -216,7 +216,12 @@ struct sock *tcp_get_cookie_sock(struct sock *sk, struct sk_buff *skb,
+ 		refcount_set(&req->rsk_refcnt, 1);
+ 		tcp_sk(child)->tsoffset = tsoff;
+ 		sock_rps_save_rxhash(child, skb);
+-		inet_csk_reqsk_queue_add(sk, req, child);
++		if (!inet_csk_reqsk_queue_add(sk, req, child)) {
++			bh_unlock_sock(child);
++			sock_put(child);
++			child = NULL;
++			reqsk_put(req);
++		}
+ 	} else {
+ 		reqsk_free(req);
+ 	}
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index cf3c5095c10e..ce365cbba1d1 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -1914,6 +1914,11 @@ static int tcp_inq_hint(struct sock *sk)
+ 		inq = tp->rcv_nxt - tp->copied_seq;
+ 		release_sock(sk);
+ 	}
++	/* After receiving a FIN, tell the user-space to continue reading
++	 * by returning a non-zero inq.
++	 */
++	if (inq == 0 && sock_flag(sk, SOCK_DONE))
++		inq = 1;
+ 	return inq;
+ }
+ 
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 76858b14ebe9..7b1ef897b398 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -6519,7 +6519,13 @@ int tcp_conn_request(struct request_sock_ops *rsk_ops,
+ 		af_ops->send_synack(fastopen_sk, dst, &fl, req,
+ 				    &foc, TCP_SYNACK_FASTOPEN);
+ 		/* Add the child socket directly into the accept queue */
+-		inet_csk_reqsk_queue_add(sk, req, fastopen_sk);
++		if (!inet_csk_reqsk_queue_add(sk, req, fastopen_sk)) {
++			reqsk_fastopen_remove(fastopen_sk, req, false);
++			bh_unlock_sock(fastopen_sk);
++			sock_put(fastopen_sk);
++			reqsk_put(req);
++			goto drop;
++		}
+ 		sk->sk_data_ready(sk);
+ 		bh_unlock_sock(fastopen_sk);
+ 		sock_put(fastopen_sk);
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index ec3cea9d6828..1aae9ab57fe9 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -1734,15 +1734,8 @@ EXPORT_SYMBOL(tcp_add_backlog);
+ int tcp_filter(struct sock *sk, struct sk_buff *skb)
+ {
+ 	struct tcphdr *th = (struct tcphdr *)skb->data;
+-	unsigned int eaten = skb->len;
+-	int err;
+ 
+-	err = sk_filter_trim_cap(sk, skb, th->doff * 4);
+-	if (!err) {
+-		eaten -= skb->len;
+-		TCP_SKB_CB(skb)->end_seq -= eaten;
+-	}
+-	return err;
++	return sk_filter_trim_cap(sk, skb, th->doff * 4);
+ }
+ EXPORT_SYMBOL(tcp_filter);
+ 
+diff --git a/net/ipv6/fou6.c b/net/ipv6/fou6.c
+index 867474abe269..ec4e2ed95f36 100644
+--- a/net/ipv6/fou6.c
++++ b/net/ipv6/fou6.c
+@@ -94,7 +94,7 @@ static int gue6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ 	int ret;
+ 
+ 	len = sizeof(struct udphdr) + sizeof(struct guehdr);
+-	if (!pskb_may_pull(skb, len))
++	if (!pskb_may_pull(skb, transport_offset + len))
+ 		return -EINVAL;
+ 
+ 	guehdr = (struct guehdr *)&udp_hdr(skb)[1];
+@@ -129,7 +129,7 @@ static int gue6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ 
+ 	optlen = guehdr->hlen << 2;
+ 
+-	if (!pskb_may_pull(skb, len + optlen))
++	if (!pskb_may_pull(skb, transport_offset + len + optlen))
+ 		return -EINVAL;
+ 
+ 	guehdr = (struct guehdr *)&udp_hdr(skb)[1];
+diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c
+index 09e440e8dfae..07e21a82ce4c 100644
+--- a/net/ipv6/sit.c
++++ b/net/ipv6/sit.c
+@@ -778,8 +778,9 @@ static bool check_6rd(struct ip_tunnel *tunnel, const struct in6_addr *v6dst,
+ 		pbw0 = tunnel->ip6rd.prefixlen >> 5;
+ 		pbi0 = tunnel->ip6rd.prefixlen & 0x1f;
+ 
+-		d = (ntohl(v6dst->s6_addr32[pbw0]) << pbi0) >>
+-		    tunnel->ip6rd.relay_prefixlen;
++		d = tunnel->ip6rd.relay_prefixlen < 32 ?
++			(ntohl(v6dst->s6_addr32[pbw0]) << pbi0) >>
++		    tunnel->ip6rd.relay_prefixlen : 0;
+ 
+ 		pbi1 = pbi0 - tunnel->ip6rd.relay_prefixlen;
+ 		if (pbi1 > 0)
+diff --git a/net/l2tp/l2tp_ip6.c b/net/l2tp/l2tp_ip6.c
+index 0ae6899edac0..37a69df17cab 100644
+--- a/net/l2tp/l2tp_ip6.c
++++ b/net/l2tp/l2tp_ip6.c
+@@ -674,9 +674,6 @@ static int l2tp_ip6_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ 	if (flags & MSG_OOB)
+ 		goto out;
+ 
+-	if (addr_len)
+-		*addr_len = sizeof(*lsa);
+-
+ 	if (flags & MSG_ERRQUEUE)
+ 		return ipv6_recv_error(sk, msg, len, addr_len);
+ 
+@@ -706,6 +703,7 @@ static int l2tp_ip6_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ 		lsa->l2tp_conn_id = 0;
+ 		if (ipv6_addr_type(&lsa->l2tp_addr) & IPV6_ADDR_LINKLOCAL)
+ 			lsa->l2tp_scope_id = inet6_iif(skb);
++		*addr_len = sizeof(*lsa);
+ 	}
+ 
+ 	if (np->rxopt.all)
+diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c
+index b2adfa825363..5cf6d9f4761d 100644
+--- a/net/rxrpc/conn_client.c
++++ b/net/rxrpc/conn_client.c
+@@ -353,7 +353,7 @@ static int rxrpc_get_client_conn(struct rxrpc_sock *rx,
+ 	 * normally have to take channel_lock but we do this before anyone else
+ 	 * can see the connection.
+ 	 */
+-	list_add_tail(&call->chan_wait_link, &candidate->waiting_calls);
++	list_add(&call->chan_wait_link, &candidate->waiting_calls);
+ 
+ 	if (cp->exclusive) {
+ 		call->conn = candidate;
+@@ -432,7 +432,7 @@ found_extant_conn:
+ 	call->conn = conn;
+ 	call->security_ix = conn->security_ix;
+ 	call->service_id = conn->service_id;
+-	list_add(&call->chan_wait_link, &conn->waiting_calls);
++	list_add_tail(&call->chan_wait_link, &conn->waiting_calls);
+ 	spin_unlock(&conn->channel_lock);
+ 	_leave(" = 0 [extant %d]", conn->debug_id);
+ 	return 0;
+diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
+index 12ca9d13db83..bf67ae5ac1c3 100644
+--- a/net/sched/cls_flower.c
++++ b/net/sched/cls_flower.c
+@@ -1327,46 +1327,46 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
+ 	if (err < 0)
+ 		goto errout;
+ 
+-	if (!handle) {
+-		handle = 1;
+-		err = idr_alloc_u32(&head->handle_idr, fnew, &handle,
+-				    INT_MAX, GFP_KERNEL);
+-	} else if (!fold) {
+-		/* user specifies a handle and it doesn't exist */
+-		err = idr_alloc_u32(&head->handle_idr, fnew, &handle,
+-				    handle, GFP_KERNEL);
+-	}
+-	if (err)
+-		goto errout;
+-	fnew->handle = handle;
+-
+ 	if (tb[TCA_FLOWER_FLAGS]) {
+ 		fnew->flags = nla_get_u32(tb[TCA_FLOWER_FLAGS]);
+ 
+ 		if (!tc_flags_valid(fnew->flags)) {
+ 			err = -EINVAL;
+-			goto errout_idr;
++			goto errout;
+ 		}
+ 	}
+ 
+ 	err = fl_set_parms(net, tp, fnew, mask, base, tb, tca[TCA_RATE], ovr,
+ 			   tp->chain->tmplt_priv, extack);
+ 	if (err)
+-		goto errout_idr;
++		goto errout;
+ 
+ 	err = fl_check_assign_mask(head, fnew, fold, mask);
+ 	if (err)
+-		goto errout_idr;
++		goto errout;
++
++	if (!handle) {
++		handle = 1;
++		err = idr_alloc_u32(&head->handle_idr, fnew, &handle,
++				    INT_MAX, GFP_KERNEL);
++	} else if (!fold) {
++		/* user specifies a handle and it doesn't exist */
++		err = idr_alloc_u32(&head->handle_idr, fnew, &handle,
++				    handle, GFP_KERNEL);
++	}
++	if (err)
++		goto errout_mask;
++	fnew->handle = handle;
+ 
+ 	if (!fold && __fl_lookup(fnew->mask, &fnew->mkey)) {
+ 		err = -EEXIST;
+-		goto errout_mask;
++		goto errout_idr;
+ 	}
+ 
+ 	err = rhashtable_insert_fast(&fnew->mask->ht, &fnew->ht_node,
+ 				     fnew->mask->filter_ht_params);
+ 	if (err)
+-		goto errout_mask;
++		goto errout_idr;
+ 
+ 	if (!tc_skip_hw(fnew->flags)) {
+ 		err = fl_hw_replace_filter(tp, fnew, extack);
+@@ -1405,12 +1405,13 @@ errout_mask_ht:
+ 	rhashtable_remove_fast(&fnew->mask->ht, &fnew->ht_node,
+ 			       fnew->mask->filter_ht_params);
+ 
+-errout_mask:
+-	fl_mask_put(head, fnew->mask, false);
+-
+ errout_idr:
+ 	if (!fold)
+ 		idr_remove(&head->handle_idr, fnew->handle);
++
++errout_mask:
++	fl_mask_put(head, fnew->mask, false);
++
+ errout:
+ 	tcf_exts_destroy(&fnew->exts);
+ 	kfree(fnew);
+diff --git a/net/sctp/stream.c b/net/sctp/stream.c
+index 2936ed17bf9e..3b47457862cc 100644
+--- a/net/sctp/stream.c
++++ b/net/sctp/stream.c
+@@ -230,8 +230,6 @@ int sctp_stream_init(struct sctp_stream *stream, __u16 outcnt, __u16 incnt,
+ 	for (i = 0; i < stream->outcnt; i++)
+ 		SCTP_SO(stream, i)->state = SCTP_STREAM_OPEN;
+ 
+-	sched->init(stream);
+-
+ in:
+ 	sctp_stream_interleave_init(stream);
+ 	if (!incnt)
+diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
+index 3ae3a33da70b..602715fc9a75 100644
+--- a/net/vmw_vsock/virtio_transport_common.c
++++ b/net/vmw_vsock/virtio_transport_common.c
+@@ -662,6 +662,8 @@ static int virtio_transport_reset(struct vsock_sock *vsk,
+  */
+ static int virtio_transport_reset_no_sock(struct virtio_vsock_pkt *pkt)
+ {
++	const struct virtio_transport *t;
++	struct virtio_vsock_pkt *reply;
+ 	struct virtio_vsock_pkt_info info = {
+ 		.op = VIRTIO_VSOCK_OP_RST,
+ 		.type = le16_to_cpu(pkt->hdr.type),
+@@ -672,15 +674,21 @@ static int virtio_transport_reset_no_sock(struct virtio_vsock_pkt *pkt)
+ 	if (le16_to_cpu(pkt->hdr.op) == VIRTIO_VSOCK_OP_RST)
+ 		return 0;
+ 
+-	pkt = virtio_transport_alloc_pkt(&info, 0,
+-					 le64_to_cpu(pkt->hdr.dst_cid),
+-					 le32_to_cpu(pkt->hdr.dst_port),
+-					 le64_to_cpu(pkt->hdr.src_cid),
+-					 le32_to_cpu(pkt->hdr.src_port));
+-	if (!pkt)
++	reply = virtio_transport_alloc_pkt(&info, 0,
++					   le64_to_cpu(pkt->hdr.dst_cid),
++					   le32_to_cpu(pkt->hdr.dst_port),
++					   le64_to_cpu(pkt->hdr.src_cid),
++					   le32_to_cpu(pkt->hdr.src_port));
++	if (!reply)
+ 		return -ENOMEM;
+ 
+-	return virtio_transport_get_ops()->send_pkt(pkt);
++	t = virtio_transport_get_ops();
++	if (!t) {
++		virtio_transport_free_pkt(reply);
++		return -ENOTCONN;
++	}
++
++	return t->send_pkt(reply);
+ }
+ 
+ static void virtio_transport_wait_close(struct sock *sk, long timeout)
+diff --git a/net/x25/af_x25.c b/net/x25/af_x25.c
+index eff31348e20b..20a511398389 100644
+--- a/net/x25/af_x25.c
++++ b/net/x25/af_x25.c
+@@ -820,8 +820,13 @@ static int x25_connect(struct socket *sock, struct sockaddr *uaddr,
+ 	sock->state = SS_CONNECTED;
+ 	rc = 0;
+ out_put_neigh:
+-	if (rc)
++	if (rc) {
++		read_lock_bh(&x25_list_lock);
+ 		x25_neigh_put(x25->neighbour);
++		x25->neighbour = NULL;
++		read_unlock_bh(&x25_list_lock);
++		x25->state = X25_STATE_0;
++	}
+ out_put_route:
+ 	x25_route_put(rt);
+ out:
+diff --git a/sound/firewire/bebob/bebob.c b/sound/firewire/bebob/bebob.c
+index d91874275d2c..5b46e8dcc2dd 100644
+--- a/sound/firewire/bebob/bebob.c
++++ b/sound/firewire/bebob/bebob.c
+@@ -448,7 +448,19 @@ static const struct ieee1394_device_id bebob_id_table[] = {
+ 	/* Focusrite, SaffirePro 26 I/O */
+ 	SND_BEBOB_DEV_ENTRY(VEN_FOCUSRITE, 0x00000003, &saffirepro_26_spec),
+ 	/* Focusrite, SaffirePro 10 I/O */
+-	SND_BEBOB_DEV_ENTRY(VEN_FOCUSRITE, 0x00000006, &saffirepro_10_spec),
++	{
++		// The combination of vendor_id and model_id is the same
++		// as the one of Liquid Saffire 56.
++		.match_flags	= IEEE1394_MATCH_VENDOR_ID |
++				  IEEE1394_MATCH_MODEL_ID |
++				  IEEE1394_MATCH_SPECIFIER_ID |
++				  IEEE1394_MATCH_VERSION,
++		.vendor_id	= VEN_FOCUSRITE,
++		.model_id	= 0x000006,
++		.specifier_id	= 0x00a02d,
++		.version	= 0x010001,
++		.driver_data	= (kernel_ulong_t)&saffirepro_10_spec,
++	},
+ 	/* Focusrite, Saffire(no label and LE) */
+ 	SND_BEBOB_DEV_ENTRY(VEN_FOCUSRITE, MODEL_FOCUSRITE_SAFFIRE_BOTH,
+ 			    &saffire_spec),
+diff --git a/sound/firewire/motu/amdtp-motu.c b/sound/firewire/motu/amdtp-motu.c
+index f0555a24d90e..6c9b743ea74b 100644
+--- a/sound/firewire/motu/amdtp-motu.c
++++ b/sound/firewire/motu/amdtp-motu.c
+@@ -136,7 +136,9 @@ static void read_pcm_s32(struct amdtp_stream *s,
+ 		byte = (u8 *)buffer + p->pcm_byte_offset;
+ 
+ 		for (c = 0; c < channels; ++c) {
+-			*dst = (byte[0] << 24) | (byte[1] << 16) | byte[2];
++			*dst = (byte[0] << 24) |
++			       (byte[1] << 16) |
++			       (byte[2] << 8);
+ 			byte += 3;
+ 			dst++;
+ 		}
+diff --git a/sound/hda/hdac_i915.c b/sound/hda/hdac_i915.c
+index 617ff1aa818f..27eb0270a711 100644
+--- a/sound/hda/hdac_i915.c
++++ b/sound/hda/hdac_i915.c
+@@ -144,9 +144,9 @@ int snd_hdac_i915_init(struct hdac_bus *bus)
+ 		return -ENODEV;
+ 	if (!acomp->ops) {
+ 		request_module("i915");
+-		/* 10s timeout */
++		/* 60s timeout */
+ 		wait_for_completion_timeout(&bind_complete,
+-					    msecs_to_jiffies(10 * 1000));
++					    msecs_to_jiffies(60 * 1000));
+ 	}
+ 	if (!acomp->ops) {
+ 		dev_info(bus->dev, "couldn't bind with audio component\n");
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index a4ee7656d9ee..fb65ad31e86c 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -936,6 +936,9 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8299, "HP 800 G3 SFF", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x829a, "HP 800 G3 DM", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x8455, "HP Z2 G4", CXT_FIXUP_HP_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x103c, 0x8456, "HP Z2 G4 SFF", CXT_FIXUP_HP_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x103c, 0x8457, "HP Z2 G4 mini", CXT_FIXUP_HP_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x103c, 0x8458, "HP Z2 G4 mini premium", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1043, 0x138d, "Asus", CXT_FIXUP_HEADPHONE_MIC_PIN),
+ 	SND_PCI_QUIRK(0x152d, 0x0833, "OLPC XO-1.5", CXT_FIXUP_OLPC_XO),
+ 	SND_PCI_QUIRK(0x17aa, 0x20f2, "Lenovo T400", CXT_PINCFG_LENOVO_TP410),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 1ffa36e987b4..3a8568d3928f 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -118,6 +118,7 @@ struct alc_spec {
+ 	unsigned int has_alc5505_dsp:1;
+ 	unsigned int no_depop_delay:1;
+ 	unsigned int done_hp_init:1;
++	unsigned int no_shutup_pins:1;
+ 
+ 	/* for PLL fix */
+ 	hda_nid_t pll_nid;
+@@ -476,6 +477,14 @@ static void alc_auto_setup_eapd(struct hda_codec *codec, bool on)
+ 		set_eapd(codec, *p, on);
+ }
+ 
++static void alc_shutup_pins(struct hda_codec *codec)
++{
++	struct alc_spec *spec = codec->spec;
++
++	if (!spec->no_shutup_pins)
++		snd_hda_shutup_pins(codec);
++}
++
+ /* generic shutup callback;
+  * just turning off EAPD and a little pause for avoiding pop-noise
+  */
+@@ -486,7 +495,7 @@ static void alc_eapd_shutup(struct hda_codec *codec)
+ 	alc_auto_setup_eapd(codec, false);
+ 	if (!spec->no_depop_delay)
+ 		msleep(200);
+-	snd_hda_shutup_pins(codec);
++	alc_shutup_pins(codec);
+ }
+ 
+ /* generic EAPD initialization */
+@@ -814,7 +823,7 @@ static inline void alc_shutup(struct hda_codec *codec)
+ 	if (spec && spec->shutup)
+ 		spec->shutup(codec);
+ 	else
+-		snd_hda_shutup_pins(codec);
++		alc_shutup_pins(codec);
+ }
+ 
+ static void alc_reboot_notify(struct hda_codec *codec)
+@@ -2950,7 +2959,7 @@ static void alc269_shutup(struct hda_codec *codec)
+ 			(alc_get_coef0(codec) & 0x00ff) == 0x018) {
+ 		msleep(150);
+ 	}
+-	snd_hda_shutup_pins(codec);
++	alc_shutup_pins(codec);
+ }
+ 
+ static struct coef_fw alc282_coefs[] = {
+@@ -3053,14 +3062,15 @@ static void alc282_shutup(struct hda_codec *codec)
+ 	if (hp_pin_sense)
+ 		msleep(85);
+ 
+-	snd_hda_codec_write(codec, hp_pin, 0,
+-			    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
++	if (!spec->no_shutup_pins)
++		snd_hda_codec_write(codec, hp_pin, 0,
++				    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
+ 
+ 	if (hp_pin_sense)
+ 		msleep(100);
+ 
+ 	alc_auto_setup_eapd(codec, false);
+-	snd_hda_shutup_pins(codec);
++	alc_shutup_pins(codec);
+ 	alc_write_coef_idx(codec, 0x78, coef78);
+ }
+ 
+@@ -3166,15 +3176,16 @@ static void alc283_shutup(struct hda_codec *codec)
+ 	if (hp_pin_sense)
+ 		msleep(100);
+ 
+-	snd_hda_codec_write(codec, hp_pin, 0,
+-			    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
++	if (!spec->no_shutup_pins)
++		snd_hda_codec_write(codec, hp_pin, 0,
++				    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
+ 
+ 	alc_update_coef_idx(codec, 0x46, 0, 3 << 12);
+ 
+ 	if (hp_pin_sense)
+ 		msleep(100);
+ 	alc_auto_setup_eapd(codec, false);
+-	snd_hda_shutup_pins(codec);
++	alc_shutup_pins(codec);
+ 	alc_write_coef_idx(codec, 0x43, 0x9614);
+ }
+ 
+@@ -3240,14 +3251,15 @@ static void alc256_shutup(struct hda_codec *codec)
+ 	/* NOTE: call this before clearing the pin, otherwise codec stalls */
+ 	alc_update_coef_idx(codec, 0x46, 0, 3 << 12);
+ 
+-	snd_hda_codec_write(codec, hp_pin, 0,
+-			    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
++	if (!spec->no_shutup_pins)
++		snd_hda_codec_write(codec, hp_pin, 0,
++				    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
+ 
+ 	if (hp_pin_sense)
+ 		msleep(100);
+ 
+ 	alc_auto_setup_eapd(codec, false);
+-	snd_hda_shutup_pins(codec);
++	alc_shutup_pins(codec);
+ }
+ 
+ static void alc225_init(struct hda_codec *codec)
+@@ -3334,7 +3346,7 @@ static void alc225_shutup(struct hda_codec *codec)
+ 		msleep(100);
+ 
+ 	alc_auto_setup_eapd(codec, false);
+-	snd_hda_shutup_pins(codec);
++	alc_shutup_pins(codec);
+ }
+ 
+ static void alc_default_init(struct hda_codec *codec)
+@@ -3388,14 +3400,15 @@ static void alc_default_shutup(struct hda_codec *codec)
+ 	if (hp_pin_sense)
+ 		msleep(85);
+ 
+-	snd_hda_codec_write(codec, hp_pin, 0,
+-			    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
++	if (!spec->no_shutup_pins)
++		snd_hda_codec_write(codec, hp_pin, 0,
++				    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
+ 
+ 	if (hp_pin_sense)
+ 		msleep(100);
+ 
+ 	alc_auto_setup_eapd(codec, false);
+-	snd_hda_shutup_pins(codec);
++	alc_shutup_pins(codec);
+ }
+ 
+ static void alc294_hp_init(struct hda_codec *codec)
+@@ -3412,8 +3425,9 @@ static void alc294_hp_init(struct hda_codec *codec)
+ 
+ 	msleep(100);
+ 
+-	snd_hda_codec_write(codec, hp_pin, 0,
+-			    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
++	if (!spec->no_shutup_pins)
++		snd_hda_codec_write(codec, hp_pin, 0,
++				    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
+ 
+ 	alc_update_coef_idx(codec, 0x6f, 0x000f, 0);/* Set HP depop to manual mode */
+ 	alc_update_coefex_idx(codec, 0x58, 0x00, 0x8000, 0x8000); /* HP depop procedure start */
+@@ -5007,16 +5021,12 @@ static void alc_fixup_auto_mute_via_amp(struct hda_codec *codec,
+ 	}
+ }
+ 
+-static void alc_no_shutup(struct hda_codec *codec)
+-{
+-}
+-
+ static void alc_fixup_no_shutup(struct hda_codec *codec,
+ 				const struct hda_fixup *fix, int action)
+ {
+ 	if (action == HDA_FIXUP_ACT_PRE_PROBE) {
+ 		struct alc_spec *spec = codec->spec;
+-		spec->shutup = alc_no_shutup;
++		spec->no_shutup_pins = 1;
+ 	}
+ }
+ 
+@@ -5661,6 +5671,7 @@ enum {
+ 	ALC225_FIXUP_HEADSET_JACK,
+ 	ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE,
+ 	ALC285_FIXUP_LENOVO_PC_BEEP_IN_NOISE,
++	ALC255_FIXUP_ACER_HEADSET_MIC,
+ };
+ 
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -6627,6 +6638,16 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC285_FIXUP_LENOVO_HEADPHONE_NOISE
+ 	},
++	[ALC255_FIXUP_ACER_HEADSET_MIC] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x19, 0x03a11130 },
++			{ 0x1a, 0x90a60140 }, /* use as internal mic */
++			{ }
++		},
++		.chained = true,
++		.chain_id = ALC255_FIXUP_HEADSET_MODE_NO_HP_MIC
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -6646,6 +6667,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1025, 0x128f, "Acer Veriton Z6860G", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1025, 0x1290, "Acer Veriton Z4860G", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1025, 0x1291, "Acer Veriton Z4660G", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1025, 0x1330, "Acer TravelMate X514-51T", ALC255_FIXUP_ACER_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1028, 0x0470, "Dell M101z", ALC269_FIXUP_DELL_M101Z),
+ 	SND_PCI_QUIRK(0x1028, 0x054b, "Dell XPS one 2710", ALC275_FIXUP_DELL_XPS),
+ 	SND_PCI_QUIRK(0x1028, 0x05bd, "Dell Latitude E6440", ALC292_FIXUP_DELL_E7X),
+@@ -6677,6 +6699,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x0704, "Dell XPS 13 9350", ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE),
+ 	SND_PCI_QUIRK(0x1028, 0x0706, "Dell Inspiron 7559", ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER),
+ 	SND_PCI_QUIRK(0x1028, 0x0725, "Dell Inspiron 3162", ALC255_FIXUP_DELL_SPK_NOISE),
++	SND_PCI_QUIRK(0x1028, 0x0738, "Dell Precision 5820", ALC269_FIXUP_NO_SHUTUP),
+ 	SND_PCI_QUIRK(0x1028, 0x075b, "Dell XPS 13 9360", ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE),
+ 	SND_PCI_QUIRK(0x1028, 0x075c, "Dell XPS 27 7760", ALC298_FIXUP_SPK_VOLUME),
+ 	SND_PCI_QUIRK(0x1028, 0x075d, "Dell AIO", ALC298_FIXUP_SPK_VOLUME),
+@@ -6751,11 +6774,13 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x2336, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ 	SND_PCI_QUIRK(0x103c, 0x2337, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ 	SND_PCI_QUIRK(0x103c, 0x221c, "HP EliteBook 755 G2", ALC280_FIXUP_HP_HEADSET_MIC),
++	SND_PCI_QUIRK(0x103c, 0x802e, "HP Z240 SFF", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x103c, 0x802f, "HP Z240", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x820d, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x103c, 0x8256, "HP", ALC221_FIXUP_HP_FRONT_MIC),
+ 	SND_PCI_QUIRK(0x103c, 0x827e, "HP x360", ALC295_FIXUP_HP_X360),
+-	SND_PCI_QUIRK(0x103c, 0x82bf, "HP", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
+-	SND_PCI_QUIRK(0x103c, 0x82c0, "HP", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x103c, 0x82bf, "HP G3 mini", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x103c, 0x82c0, "HP G3 mini premium", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x83b9, "HP Spectre x360", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
+@@ -6771,7 +6796,6 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x12e0, "ASUS X541SA", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x13b0, "ASUS Z550SA", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_ASUS_ZENBOOK),
+-	SND_PCI_QUIRK(0x1043, 0x14a1, "ASUS UX533FD", ALC294_FIXUP_ASUS_SPK),
+ 	SND_PCI_QUIRK(0x1043, 0x1517, "Asus Zenbook UX31A", ALC269VB_FIXUP_ASUS_ZENBOOK_UX31A),
+ 	SND_PCI_QUIRK(0x1043, 0x16e3, "ASUS UX50", ALC269_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1a13, "Asus G73Jw", ALC269_FIXUP_ASUS_G73JW),
+@@ -7388,6 +7412,10 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 		{0x14, 0x90170110},
+ 		{0x1b, 0x90a70130},
+ 		{0x21, 0x04211020}),
++	SND_HDA_PIN_QUIRK(0x10ec0294, 0x1043, "ASUS", ALC294_FIXUP_ASUS_SPK,
++		{0x12, 0x90a60130},
++		{0x17, 0x90170110},
++		{0x21, 0x03211020}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0294, 0x1043, "ASUS", ALC294_FIXUP_ASUS_SPK,
+ 		{0x12, 0x90a60130},
+ 		{0x17, 0x90170110},


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [gentoo-commits] proj/linux-patches:5.0 commit in: /
@ 2019-03-23 20:25 Mike Pagano
  0 siblings, 0 replies; 28+ messages in thread
From: Mike Pagano @ 2019-03-23 20:25 UTC (permalink / raw
  To: gentoo-commits

commit:     e32412028254c50ce52ba2ee81c7ed4d13c8843d
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Mar 23 20:24:38 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Mar 23 20:24:38 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e3241202

proj/linux-patches: Linux patch 5.0.4

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |     4 +
 1003_linux-5.0.4.patch | 11152 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 11156 insertions(+)

diff --git a/0000_README b/0000_README
index 4989a60..1974ef5 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch:  1002_linux-5.0.3.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.0.3
 
+Patch:  1003_linux-5.0.4.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.0.4
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1003_linux-5.0.4.patch b/1003_linux-5.0.4.patch
new file mode 100644
index 0000000..4bb590f
--- /dev/null
+++ b/1003_linux-5.0.4.patch
@@ -0,0 +1,11152 @@
+diff --git a/Documentation/DMA-API.txt b/Documentation/DMA-API.txt
+index e133ccd60228..acfe3d0f78d1 100644
+--- a/Documentation/DMA-API.txt
++++ b/Documentation/DMA-API.txt
+@@ -195,6 +195,14 @@ Requesting the required mask does not alter the current mask.  If you
+ wish to take advantage of it, you should issue a dma_set_mask()
+ call to set the mask to the value returned.
+ 
++::
++
++	size_t
++	dma_direct_max_mapping_size(struct device *dev);
++
++Returns the maximum size of a mapping for the device. The size parameter
++of the mapping functions like dma_map_single(), dma_map_page() and
++others should not be larger than the returned value.
+ 
+ Part Id - Streaming DMA mappings
+ --------------------------------
+diff --git a/Documentation/arm64/silicon-errata.txt b/Documentation/arm64/silicon-errata.txt
+index 1f09d043d086..ddb8ce5333ba 100644
+--- a/Documentation/arm64/silicon-errata.txt
++++ b/Documentation/arm64/silicon-errata.txt
+@@ -44,6 +44,8 @@ stable kernels.
+ 
+ | Implementor    | Component       | Erratum ID      | Kconfig                     |
+ +----------------+-----------------+-----------------+-----------------------------+
++| Allwinner      | A64/R18         | UNKNOWN1        | SUN50I_ERRATUM_UNKNOWN1     |
++|                |                 |                 |                             |
+ | ARM            | Cortex-A53      | #826319         | ARM64_ERRATUM_826319        |
+ | ARM            | Cortex-A53      | #827319         | ARM64_ERRATUM_827319        |
+ | ARM            | Cortex-A53      | #824069         | ARM64_ERRATUM_824069        |
+diff --git a/Documentation/devicetree/bindings/iio/adc/samsung,exynos-adc.txt b/Documentation/devicetree/bindings/iio/adc/samsung,exynos-adc.txt
+index a10c1f89037d..e1fe02f3e3e9 100644
+--- a/Documentation/devicetree/bindings/iio/adc/samsung,exynos-adc.txt
++++ b/Documentation/devicetree/bindings/iio/adc/samsung,exynos-adc.txt
+@@ -11,11 +11,13 @@ New driver handles the following
+ 
+ Required properties:
+ - compatible:		Must be "samsung,exynos-adc-v1"
+-				for exynos4412/5250 controllers.
++				for Exynos5250 controllers.
+ 			Must be "samsung,exynos-adc-v2" for
+ 				future controllers.
+ 			Must be "samsung,exynos3250-adc" for
+ 				controllers compatible with ADC of Exynos3250.
++			Must be "samsung,exynos4212-adc" for
++				controllers compatible with ADC of Exynos4212 and Exynos4412.
+ 			Must be "samsung,exynos7-adc" for
+ 				the ADC in Exynos7 and compatibles
+ 			Must be "samsung,s3c2410-adc" for
+diff --git a/Documentation/process/stable-kernel-rules.rst b/Documentation/process/stable-kernel-rules.rst
+index 0de6f6145cc6..7ba8cd567f84 100644
+--- a/Documentation/process/stable-kernel-rules.rst
++++ b/Documentation/process/stable-kernel-rules.rst
+@@ -38,6 +38,9 @@ Procedure for submitting patches to the -stable tree
+  - If the patch covers files in net/ or drivers/net please follow netdev stable
+    submission guidelines as described in
+    :ref:`Documentation/networking/netdev-FAQ.rst <netdev-FAQ>`
++   after first checking the stable networking queue at
++   https://patchwork.ozlabs.org/bundle/davem/stable/?series=&submitter=&state=*&q=&archive=
++   to ensure the requested patch is not already queued up.
+  - Security patches should not be handled (solely) by the -stable review
+    process but should follow the procedures in
+    :ref:`Documentation/admin-guide/security-bugs.rst <securitybugs>`.
+diff --git a/Makefile b/Makefile
+index fb888787e7d1..06fda21614bc 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 0
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/arch/arm/crypto/crct10dif-ce-core.S b/arch/arm/crypto/crct10dif-ce-core.S
+index ce45ba0c0687..16019b5961e7 100644
+--- a/arch/arm/crypto/crct10dif-ce-core.S
++++ b/arch/arm/crypto/crct10dif-ce-core.S
+@@ -124,10 +124,10 @@ ENTRY(crc_t10dif_pmull)
+ 	vext.8		q10, qzr, q0, #4
+ 
+ 	// receive the initial 64B data, xor the initial crc value
+-	vld1.64		{q0-q1}, [arg2, :128]!
+-	vld1.64		{q2-q3}, [arg2, :128]!
+-	vld1.64		{q4-q5}, [arg2, :128]!
+-	vld1.64		{q6-q7}, [arg2, :128]!
++	vld1.64		{q0-q1}, [arg2]!
++	vld1.64		{q2-q3}, [arg2]!
++	vld1.64		{q4-q5}, [arg2]!
++	vld1.64		{q6-q7}, [arg2]!
+ CPU_LE(	vrev64.8	q0, q0			)
+ CPU_LE(	vrev64.8	q1, q1			)
+ CPU_LE(	vrev64.8	q2, q2			)
+@@ -167,7 +167,7 @@ CPU_LE(	vrev64.8	q7, q7			)
+ _fold_64_B_loop:
+ 
+ 	.macro		fold64, reg1, reg2
+-	vld1.64		{q11-q12}, [arg2, :128]!
++	vld1.64		{q11-q12}, [arg2]!
+ 
+ 	vmull.p64	q8, \reg1\()h, d21
+ 	vmull.p64	\reg1, \reg1\()l, d20
+@@ -238,7 +238,7 @@ _16B_reduction_loop:
+ 	vmull.p64	q7, d15, d21
+ 	veor.8		q7, q7, q8
+ 
+-	vld1.64		{q0}, [arg2, :128]!
++	vld1.64		{q0}, [arg2]!
+ CPU_LE(	vrev64.8	q0, q0		)
+ 	vswp		d0, d1
+ 	veor.8		q7, q7, q0
+@@ -335,7 +335,7 @@ _less_than_128:
+ 	vmov.i8		q0, #0
+ 	vmov		s3, arg1_low32		// get the initial crc value
+ 
+-	vld1.64		{q7}, [arg2, :128]!
++	vld1.64		{q7}, [arg2]!
+ CPU_LE(	vrev64.8	q7, q7		)
+ 	vswp		d14, d15
+ 	veor.8		q7, q7, q0
+diff --git a/arch/arm/crypto/crct10dif-ce-glue.c b/arch/arm/crypto/crct10dif-ce-glue.c
+index d428355cf38d..14c19c70a841 100644
+--- a/arch/arm/crypto/crct10dif-ce-glue.c
++++ b/arch/arm/crypto/crct10dif-ce-glue.c
+@@ -35,26 +35,15 @@ static int crct10dif_update(struct shash_desc *desc, const u8 *data,
+ 			    unsigned int length)
+ {
+ 	u16 *crc = shash_desc_ctx(desc);
+-	unsigned int l;
+ 
+-	if (!may_use_simd()) {
+-		*crc = crc_t10dif_generic(*crc, data, length);
++	if (length >= CRC_T10DIF_PMULL_CHUNK_SIZE && may_use_simd()) {
++		kernel_neon_begin();
++		*crc = crc_t10dif_pmull(*crc, data, length);
++		kernel_neon_end();
+ 	} else {
+-		if (unlikely((u32)data % CRC_T10DIF_PMULL_CHUNK_SIZE)) {
+-			l = min_t(u32, length, CRC_T10DIF_PMULL_CHUNK_SIZE -
+-				  ((u32)data % CRC_T10DIF_PMULL_CHUNK_SIZE));
+-
+-			*crc = crc_t10dif_generic(*crc, data, l);
+-
+-			length -= l;
+-			data += l;
+-		}
+-		if (length > 0) {
+-			kernel_neon_begin();
+-			*crc = crc_t10dif_pmull(*crc, data, length);
+-			kernel_neon_end();
+-		}
++		*crc = crc_t10dif_generic(*crc, data, length);
+ 	}
++
+ 	return 0;
+ }
+ 
+diff --git a/arch/arm/mach-s3c24xx/mach-osiris-dvs.c b/arch/arm/mach-s3c24xx/mach-osiris-dvs.c
+index 058ce73137e8..5d819b6ea428 100644
+--- a/arch/arm/mach-s3c24xx/mach-osiris-dvs.c
++++ b/arch/arm/mach-s3c24xx/mach-osiris-dvs.c
+@@ -65,16 +65,16 @@ static int osiris_dvs_notify(struct notifier_block *nb,
+ 
+ 	switch (val) {
+ 	case CPUFREQ_PRECHANGE:
+-		if (old_dvs & !new_dvs ||
+-		    cur_dvs & !new_dvs) {
++		if ((old_dvs && !new_dvs) ||
++		    (cur_dvs && !new_dvs)) {
+ 			pr_debug("%s: exiting dvs\n", __func__);
+ 			cur_dvs = false;
+ 			gpio_set_value(OSIRIS_GPIO_DVS, 1);
+ 		}
+ 		break;
+ 	case CPUFREQ_POSTCHANGE:
+-		if (!old_dvs & new_dvs ||
+-		    !cur_dvs & new_dvs) {
++		if ((!old_dvs && new_dvs) ||
++		    (!cur_dvs && new_dvs)) {
+ 			pr_debug("entering dvs\n");
+ 			cur_dvs = true;
+ 			gpio_set_value(OSIRIS_GPIO_DVS, 0);
+diff --git a/arch/arm64/crypto/aes-ce-ccm-core.S b/arch/arm64/crypto/aes-ce-ccm-core.S
+index e3a375c4cb83..1b151442dac1 100644
+--- a/arch/arm64/crypto/aes-ce-ccm-core.S
++++ b/arch/arm64/crypto/aes-ce-ccm-core.S
+@@ -74,12 +74,13 @@ ENTRY(ce_aes_ccm_auth_data)
+ 	beq	10f
+ 	ext	v0.16b, v0.16b, v0.16b, #1	/* rotate out the mac bytes */
+ 	b	7b
+-8:	mov	w7, w8
++8:	cbz	w8, 91f
++	mov	w7, w8
+ 	add	w8, w8, #16
+ 9:	ext	v1.16b, v1.16b, v1.16b, #1
+ 	adds	w7, w7, #1
+ 	bne	9b
+-	eor	v0.16b, v0.16b, v1.16b
++91:	eor	v0.16b, v0.16b, v1.16b
+ 	st1	{v0.16b}, [x0]
+ 10:	str	w8, [x3]
+ 	ret
+diff --git a/arch/arm64/crypto/aes-ce-ccm-glue.c b/arch/arm64/crypto/aes-ce-ccm-glue.c
+index 68b11aa690e4..986191e8c058 100644
+--- a/arch/arm64/crypto/aes-ce-ccm-glue.c
++++ b/arch/arm64/crypto/aes-ce-ccm-glue.c
+@@ -125,7 +125,7 @@ static void ccm_update_mac(struct crypto_aes_ctx *key, u8 mac[], u8 const in[],
+ 			abytes -= added;
+ 		}
+ 
+-		while (abytes > AES_BLOCK_SIZE) {
++		while (abytes >= AES_BLOCK_SIZE) {
+ 			__aes_arm64_encrypt(key->key_enc, mac, mac,
+ 					    num_rounds(key));
+ 			crypto_xor(mac, in, AES_BLOCK_SIZE);
+@@ -139,8 +139,6 @@ static void ccm_update_mac(struct crypto_aes_ctx *key, u8 mac[], u8 const in[],
+ 					    num_rounds(key));
+ 			crypto_xor(mac, in, abytes);
+ 			*macp = abytes;
+-		} else {
+-			*macp = 0;
+ 		}
+ 	}
+ }
+diff --git a/arch/arm64/crypto/aes-neonbs-core.S b/arch/arm64/crypto/aes-neonbs-core.S
+index e613a87f8b53..8432c8d0dea6 100644
+--- a/arch/arm64/crypto/aes-neonbs-core.S
++++ b/arch/arm64/crypto/aes-neonbs-core.S
+@@ -971,18 +971,22 @@ CPU_LE(	rev		x8, x8		)
+ 
+ 8:	next_ctr	v0
+ 	st1		{v0.16b}, [x24]
+-	cbz		x23, 0f
++	cbz		x23, .Lctr_done
+ 
+ 	cond_yield_neon	98b
+ 	b		99b
+ 
+-0:	frame_pop
++.Lctr_done:
++	frame_pop
+ 	ret
+ 
+ 	/*
+ 	 * If we are handling the tail of the input (x6 != NULL), return the
+ 	 * final keystream block back to the caller.
+ 	 */
++0:	cbz		x25, 8b
++	st1		{v0.16b}, [x25]
++	b		8b
+ 1:	cbz		x25, 8b
+ 	st1		{v1.16b}, [x25]
+ 	b		8b
+diff --git a/arch/arm64/crypto/crct10dif-ce-glue.c b/arch/arm64/crypto/crct10dif-ce-glue.c
+index b461d62023f2..567c24f3d224 100644
+--- a/arch/arm64/crypto/crct10dif-ce-glue.c
++++ b/arch/arm64/crypto/crct10dif-ce-glue.c
+@@ -39,26 +39,13 @@ static int crct10dif_update(struct shash_desc *desc, const u8 *data,
+ 			    unsigned int length)
+ {
+ 	u16 *crc = shash_desc_ctx(desc);
+-	unsigned int l;
+ 
+-	if (unlikely((u64)data % CRC_T10DIF_PMULL_CHUNK_SIZE)) {
+-		l = min_t(u32, length, CRC_T10DIF_PMULL_CHUNK_SIZE -
+-			  ((u64)data % CRC_T10DIF_PMULL_CHUNK_SIZE));
+-
+-		*crc = crc_t10dif_generic(*crc, data, l);
+-
+-		length -= l;
+-		data += l;
+-	}
+-
+-	if (length > 0) {
+-		if (may_use_simd()) {
+-			kernel_neon_begin();
+-			*crc = crc_t10dif_pmull(*crc, data, length);
+-			kernel_neon_end();
+-		} else {
+-			*crc = crc_t10dif_generic(*crc, data, length);
+-		}
++	if (length >= CRC_T10DIF_PMULL_CHUNK_SIZE && may_use_simd()) {
++		kernel_neon_begin();
++		*crc = crc_t10dif_pmull(*crc, data, length);
++		kernel_neon_end();
++	} else {
++		*crc = crc_t10dif_generic(*crc, data, length);
+ 	}
+ 
+ 	return 0;
+diff --git a/arch/arm64/include/asm/hardirq.h b/arch/arm64/include/asm/hardirq.h
+index 1473fc2f7ab7..89691c86640a 100644
+--- a/arch/arm64/include/asm/hardirq.h
++++ b/arch/arm64/include/asm/hardirq.h
+@@ -17,8 +17,12 @@
+ #define __ASM_HARDIRQ_H
+ 
+ #include <linux/cache.h>
++#include <linux/percpu.h>
+ #include <linux/threads.h>
++#include <asm/barrier.h>
+ #include <asm/irq.h>
++#include <asm/kvm_arm.h>
++#include <asm/sysreg.h>
+ 
+ #define NR_IPI	7
+ 
+@@ -37,6 +41,33 @@ u64 smp_irq_stat_cpu(unsigned int cpu);
+ 
+ #define __ARCH_IRQ_EXIT_IRQS_DISABLED	1
+ 
++struct nmi_ctx {
++	u64 hcr;
++};
++
++DECLARE_PER_CPU(struct nmi_ctx, nmi_contexts);
++
++#define arch_nmi_enter()							\
++	do {									\
++		if (is_kernel_in_hyp_mode()) {					\
++			struct nmi_ctx *nmi_ctx = this_cpu_ptr(&nmi_contexts);	\
++			nmi_ctx->hcr = read_sysreg(hcr_el2);			\
++			if (!(nmi_ctx->hcr & HCR_TGE)) {			\
++				write_sysreg(nmi_ctx->hcr | HCR_TGE, hcr_el2);	\
++				isb();						\
++			}							\
++		}								\
++	} while (0)
++
++#define arch_nmi_exit()								\
++	do {									\
++		if (is_kernel_in_hyp_mode()) {					\
++			struct nmi_ctx *nmi_ctx = this_cpu_ptr(&nmi_contexts);	\
++			if (!(nmi_ctx->hcr & HCR_TGE))				\
++				write_sysreg(nmi_ctx->hcr, hcr_el2);		\
++		}								\
++	} while (0)
++
+ static inline void ack_bad_irq(unsigned int irq)
+ {
+ 	extern unsigned long irq_err_count;
+diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c
+index 780a12f59a8f..92fa81798fb9 100644
+--- a/arch/arm64/kernel/irq.c
++++ b/arch/arm64/kernel/irq.c
+@@ -33,6 +33,9 @@
+ 
+ unsigned long irq_err_count;
+ 
++/* Only access this in an NMI enter/exit */
++DEFINE_PER_CPU(struct nmi_ctx, nmi_contexts);
++
+ DEFINE_PER_CPU(unsigned long *, irq_stack_ptr);
+ 
+ int arch_show_interrupts(struct seq_file *p, int prec)
+diff --git a/arch/arm64/kernel/kgdb.c b/arch/arm64/kernel/kgdb.c
+index ce46c4cdf368..691854b77c7f 100644
+--- a/arch/arm64/kernel/kgdb.c
++++ b/arch/arm64/kernel/kgdb.c
+@@ -244,27 +244,33 @@ int kgdb_arch_handle_exception(int exception_vector, int signo,
+ 
+ static int kgdb_brk_fn(struct pt_regs *regs, unsigned int esr)
+ {
++	if (user_mode(regs))
++		return DBG_HOOK_ERROR;
++
+ 	kgdb_handle_exception(1, SIGTRAP, 0, regs);
+-	return 0;
++	return DBG_HOOK_HANDLED;
+ }
+ NOKPROBE_SYMBOL(kgdb_brk_fn)
+ 
+ static int kgdb_compiled_brk_fn(struct pt_regs *regs, unsigned int esr)
+ {
++	if (user_mode(regs))
++		return DBG_HOOK_ERROR;
++
+ 	compiled_break = 1;
+ 	kgdb_handle_exception(1, SIGTRAP, 0, regs);
+ 
+-	return 0;
++	return DBG_HOOK_HANDLED;
+ }
+ NOKPROBE_SYMBOL(kgdb_compiled_brk_fn);
+ 
+ static int kgdb_step_brk_fn(struct pt_regs *regs, unsigned int esr)
+ {
+-	if (!kgdb_single_step)
++	if (user_mode(regs) || !kgdb_single_step)
+ 		return DBG_HOOK_ERROR;
+ 
+ 	kgdb_handle_exception(1, SIGTRAP, 0, regs);
+-	return 0;
++	return DBG_HOOK_HANDLED;
+ }
+ NOKPROBE_SYMBOL(kgdb_step_brk_fn);
+ 
+diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
+index f17afb99890c..7fb6f3aa5ceb 100644
+--- a/arch/arm64/kernel/probes/kprobes.c
++++ b/arch/arm64/kernel/probes/kprobes.c
+@@ -450,6 +450,9 @@ kprobe_single_step_handler(struct pt_regs *regs, unsigned int esr)
+ 	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+ 	int retval;
+ 
++	if (user_mode(regs))
++		return DBG_HOOK_ERROR;
++
+ 	/* return error if this is not our step */
+ 	retval = kprobe_ss_hit(kcb, instruction_pointer(regs));
+ 
+@@ -466,6 +469,9 @@ kprobe_single_step_handler(struct pt_regs *regs, unsigned int esr)
+ int __kprobes
+ kprobe_breakpoint_handler(struct pt_regs *regs, unsigned int esr)
+ {
++	if (user_mode(regs))
++		return DBG_HOOK_ERROR;
++
+ 	kprobe_handler(regs);
+ 	return DBG_HOOK_HANDLED;
+ }
+diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
+index c936aa40c3f4..b6dac3a68508 100644
+--- a/arch/arm64/kvm/sys_regs.c
++++ b/arch/arm64/kvm/sys_regs.c
+@@ -1476,7 +1476,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
+ 
+ 	{ SYS_DESC(SYS_DACR32_EL2), NULL, reset_unknown, DACR32_EL2 },
+ 	{ SYS_DESC(SYS_IFSR32_EL2), NULL, reset_unknown, IFSR32_EL2 },
+-	{ SYS_DESC(SYS_FPEXC32_EL2), NULL, reset_val, FPEXC32_EL2, 0x70 },
++	{ SYS_DESC(SYS_FPEXC32_EL2), NULL, reset_val, FPEXC32_EL2, 0x700 },
+ };
+ 
+ static bool trap_dbgidr(struct kvm_vcpu *vcpu,
+diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
+index efb7b2cbead5..ef46925096f0 100644
+--- a/arch/arm64/mm/fault.c
++++ b/arch/arm64/mm/fault.c
+@@ -824,11 +824,12 @@ void __init hook_debug_fault_code(int nr,
+ 	debug_fault_info[nr].name	= name;
+ }
+ 
+-asmlinkage int __exception do_debug_exception(unsigned long addr,
++asmlinkage int __exception do_debug_exception(unsigned long addr_if_watchpoint,
+ 					      unsigned int esr,
+ 					      struct pt_regs *regs)
+ {
+ 	const struct fault_info *inf = esr_to_debug_fault_info(esr);
++	unsigned long pc = instruction_pointer(regs);
+ 	int rv;
+ 
+ 	/*
+@@ -838,14 +839,14 @@ asmlinkage int __exception do_debug_exception(unsigned long addr,
+ 	if (interrupts_enabled(regs))
+ 		trace_hardirqs_off();
+ 
+-	if (user_mode(regs) && !is_ttbr0_addr(instruction_pointer(regs)))
++	if (user_mode(regs) && !is_ttbr0_addr(pc))
+ 		arm64_apply_bp_hardening();
+ 
+-	if (!inf->fn(addr, esr, regs)) {
++	if (!inf->fn(addr_if_watchpoint, esr, regs)) {
+ 		rv = 1;
+ 	} else {
+ 		arm64_notify_die(inf->name, regs,
+-				 inf->sig, inf->code, (void __user *)addr, esr);
++				 inf->sig, inf->code, (void __user *)pc, esr);
+ 		rv = 0;
+ 	}
+ 
+diff --git a/arch/m68k/Makefile b/arch/m68k/Makefile
+index f00ca53f8c14..482513b9af2c 100644
+--- a/arch/m68k/Makefile
++++ b/arch/m68k/Makefile
+@@ -58,7 +58,10 @@ cpuflags-$(CONFIG_M5206e)	:= $(call cc-option,-mcpu=5206e,-m5200)
+ cpuflags-$(CONFIG_M5206)	:= $(call cc-option,-mcpu=5206,-m5200)
+ 
+ KBUILD_AFLAGS += $(cpuflags-y)
+-KBUILD_CFLAGS += $(cpuflags-y) -pipe
++KBUILD_CFLAGS += $(cpuflags-y)
++
++KBUILD_CFLAGS += -pipe -ffreestanding
++
+ ifdef CONFIG_MMU
+ # without -fno-strength-reduce the 53c7xx.c driver fails ;-(
+ KBUILD_CFLAGS += -fno-strength-reduce -ffixed-a2
+diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
+index d2abd98471e8..41204a49cf95 100644
+--- a/arch/mips/include/asm/kvm_host.h
++++ b/arch/mips/include/asm/kvm_host.h
+@@ -1134,7 +1134,7 @@ static inline void kvm_arch_hardware_unsetup(void) {}
+ static inline void kvm_arch_sync_events(struct kvm *kvm) {}
+ static inline void kvm_arch_free_memslot(struct kvm *kvm,
+ 		struct kvm_memory_slot *free, struct kvm_memory_slot *dont) {}
+-static inline void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots) {}
++static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
+ static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
+ static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
+ static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
+diff --git a/arch/powerpc/include/asm/book3s/64/hugetlb.h b/arch/powerpc/include/asm/book3s/64/hugetlb.h
+index 5b0177733994..46130ef4941c 100644
+--- a/arch/powerpc/include/asm/book3s/64/hugetlb.h
++++ b/arch/powerpc/include/asm/book3s/64/hugetlb.h
+@@ -35,6 +35,14 @@ static inline int hstate_get_psize(struct hstate *hstate)
+ #ifdef CONFIG_ARCH_HAS_GIGANTIC_PAGE
+ static inline bool gigantic_page_supported(void)
+ {
++	/*
++	 * We used gigantic page reservation with hypervisor assist in some case.
++	 * We cannot use runtime allocation of gigantic pages in those platforms
++	 * This is hash translation mode LPARs.
++	 */
++	if (firmware_has_feature(FW_FEATURE_LPAR) && !radix_enabled())
++		return false;
++
+ 	return true;
+ }
+ #endif
+diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
+index 0f98f00da2ea..19693b8add93 100644
+--- a/arch/powerpc/include/asm/kvm_host.h
++++ b/arch/powerpc/include/asm/kvm_host.h
+@@ -837,7 +837,7 @@ struct kvm_vcpu_arch {
+ static inline void kvm_arch_hardware_disable(void) {}
+ static inline void kvm_arch_hardware_unsetup(void) {}
+ static inline void kvm_arch_sync_events(struct kvm *kvm) {}
+-static inline void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots) {}
++static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
+ static inline void kvm_arch_flush_shadow_all(struct kvm *kvm) {}
+ static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
+ static inline void kvm_arch_exit(void) {}
+diff --git a/arch/powerpc/include/asm/powernv.h b/arch/powerpc/include/asm/powernv.h
+index 2f3ff7a27881..d85fcfea32ca 100644
+--- a/arch/powerpc/include/asm/powernv.h
++++ b/arch/powerpc/include/asm/powernv.h
+@@ -23,6 +23,8 @@ extern int pnv_npu2_handle_fault(struct npu_context *context, uintptr_t *ea,
+ 				unsigned long *flags, unsigned long *status,
+ 				int count);
+ 
++void pnv_program_cpu_hotplug_lpcr(unsigned int cpu, u64 lpcr_val);
++
+ void pnv_tm_init(void);
+ #else
+ static inline void powernv_set_nmmu_ptcr(unsigned long ptcr) { }
+diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
+index 0768dfd8a64e..fdd528cdb2ee 100644
+--- a/arch/powerpc/kernel/entry_32.S
++++ b/arch/powerpc/kernel/entry_32.S
+@@ -745,6 +745,9 @@ fast_exception_return:
+ 	mtcr	r10
+ 	lwz	r10,_LINK(r11)
+ 	mtlr	r10
++	/* Clear the exception_marker on the stack to avoid confusing stacktrace */
++	li	r10, 0
++	stw	r10, 8(r11)
+ 	REST_GPR(10, r11)
+ #if defined(CONFIG_PPC_8xx) && defined(CONFIG_PERF_EVENTS)
+ 	mtspr	SPRN_NRI, r0
+@@ -982,6 +985,9 @@ END_FTR_SECTION_IFSET(CPU_FTR_NEED_PAIRED_STWCX)
+ 	mtcrf	0xFF,r10
+ 	mtlr	r11
+ 
++	/* Clear the exception_marker on the stack to avoid confusing stacktrace */
++	li	r10, 0
++	stw	r10, 8(r1)
+ 	/*
+ 	 * Once we put values in SRR0 and SRR1, we are in a state
+ 	 * where exceptions are not recoverable, since taking an
+@@ -1021,6 +1027,9 @@ exc_exit_restart_end:
+ 	mtlr	r11
+ 	lwz	r10,_CCR(r1)
+ 	mtcrf	0xff,r10
++	/* Clear the exception_marker on the stack to avoid confusing stacktrace */
++	li	r10, 0
++	stw	r10, 8(r1)
+ 	REST_2GPRS(9, r1)
+ 	.globl exc_exit_restart
+ exc_exit_restart:
+diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
+index ce393df243aa..71bad4b6f80d 100644
+--- a/arch/powerpc/kernel/process.c
++++ b/arch/powerpc/kernel/process.c
+@@ -176,7 +176,7 @@ static void __giveup_fpu(struct task_struct *tsk)
+ 
+ 	save_fpu(tsk);
+ 	msr = tsk->thread.regs->msr;
+-	msr &= ~MSR_FP;
++	msr &= ~(MSR_FP|MSR_FE0|MSR_FE1);
+ #ifdef CONFIG_VSX
+ 	if (cpu_has_feature(CPU_FTR_VSX))
+ 		msr &= ~MSR_VSX;
+diff --git a/arch/powerpc/kernel/ptrace.c b/arch/powerpc/kernel/ptrace.c
+index cdd5d1d3ae41..53151698bfe0 100644
+--- a/arch/powerpc/kernel/ptrace.c
++++ b/arch/powerpc/kernel/ptrace.c
+@@ -561,6 +561,7 @@ static int vr_get(struct task_struct *target, const struct user_regset *regset,
+ 		/*
+ 		 * Copy out only the low-order word of vrsave.
+ 		 */
++		int start, end;
+ 		union {
+ 			elf_vrreg_t reg;
+ 			u32 word;
+@@ -569,8 +570,10 @@ static int vr_get(struct task_struct *target, const struct user_regset *regset,
+ 
+ 		vrsave.word = target->thread.vrsave;
+ 
++		start = 33 * sizeof(vector128);
++		end = start + sizeof(vrsave);
+ 		ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf, &vrsave,
+-					  33 * sizeof(vector128), -1);
++					  start, end);
+ 	}
+ 
+ 	return ret;
+@@ -608,6 +611,7 @@ static int vr_set(struct task_struct *target, const struct user_regset *regset,
+ 		/*
+ 		 * We use only the first word of vrsave.
+ 		 */
++		int start, end;
+ 		union {
+ 			elf_vrreg_t reg;
+ 			u32 word;
+@@ -616,8 +620,10 @@ static int vr_set(struct task_struct *target, const struct user_regset *regset,
+ 
+ 		vrsave.word = target->thread.vrsave;
+ 
++		start = 33 * sizeof(vector128);
++		end = start + sizeof(vrsave);
+ 		ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &vrsave,
+-					 33 * sizeof(vector128), -1);
++					 start, end);
+ 		if (!ret)
+ 			target->thread.vrsave = vrsave.word;
+ 	}
+diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
+index 3f15edf25a0d..6e521a3f67ca 100644
+--- a/arch/powerpc/kernel/smp.c
++++ b/arch/powerpc/kernel/smp.c
+@@ -358,13 +358,12 @@ void arch_send_call_function_ipi_mask(const struct cpumask *mask)
+  * NMI IPIs may not be recoverable, so should not be used as ongoing part of
+  * a running system. They can be used for crash, debug, halt/reboot, etc.
+  *
+- * NMI IPIs are globally single threaded. No more than one in progress at
+- * any time.
+- *
+  * The IPI call waits with interrupts disabled until all targets enter the
+- * NMI handler, then the call returns.
++ * NMI handler, then returns. Subsequent IPIs can be issued before targets
++ * have returned from their handlers, so there is no guarantee about
++ * concurrency or re-entrancy.
+  *
+- * No new NMI can be initiated until targets exit the handler.
++ * A new NMI can be issued before all targets exit the handler.
+  *
+  * The IPI call may time out without all targets entering the NMI handler.
+  * In that case, there is some logic to recover (and ignore subsequent
+@@ -375,7 +374,7 @@ void arch_send_call_function_ipi_mask(const struct cpumask *mask)
+ 
+ static atomic_t __nmi_ipi_lock = ATOMIC_INIT(0);
+ static struct cpumask nmi_ipi_pending_mask;
+-static int nmi_ipi_busy_count = 0;
++static bool nmi_ipi_busy = false;
+ static void (*nmi_ipi_function)(struct pt_regs *) = NULL;
+ 
+ static void nmi_ipi_lock_start(unsigned long *flags)
+@@ -414,7 +413,7 @@ static void nmi_ipi_unlock_end(unsigned long *flags)
+  */
+ int smp_handle_nmi_ipi(struct pt_regs *regs)
+ {
+-	void (*fn)(struct pt_regs *);
++	void (*fn)(struct pt_regs *) = NULL;
+ 	unsigned long flags;
+ 	int me = raw_smp_processor_id();
+ 	int ret = 0;
+@@ -425,29 +424,17 @@ int smp_handle_nmi_ipi(struct pt_regs *regs)
+ 	 * because the caller may have timed out.
+ 	 */
+ 	nmi_ipi_lock_start(&flags);
+-	if (!nmi_ipi_busy_count)
+-		goto out;
+-	if (!cpumask_test_cpu(me, &nmi_ipi_pending_mask))
+-		goto out;
+-
+-	fn = nmi_ipi_function;
+-	if (!fn)
+-		goto out;
+-
+-	cpumask_clear_cpu(me, &nmi_ipi_pending_mask);
+-	nmi_ipi_busy_count++;
+-	nmi_ipi_unlock();
+-
+-	ret = 1;
+-
+-	fn(regs);
+-
+-	nmi_ipi_lock();
+-	if (nmi_ipi_busy_count > 1) /* Can race with caller time-out */
+-		nmi_ipi_busy_count--;
+-out:
++	if (cpumask_test_cpu(me, &nmi_ipi_pending_mask)) {
++		cpumask_clear_cpu(me, &nmi_ipi_pending_mask);
++		fn = READ_ONCE(nmi_ipi_function);
++		WARN_ON_ONCE(!fn);
++		ret = 1;
++	}
+ 	nmi_ipi_unlock_end(&flags);
+ 
++	if (fn)
++		fn(regs);
++
+ 	return ret;
+ }
+ 
+@@ -473,7 +460,7 @@ static void do_smp_send_nmi_ipi(int cpu, bool safe)
+  * - cpu is the target CPU (must not be this CPU), or NMI_IPI_ALL_OTHERS.
+  * - fn is the target callback function.
+  * - delay_us > 0 is the delay before giving up waiting for targets to
+- *   complete executing the handler, == 0 specifies indefinite delay.
++ *   begin executing the handler, == 0 specifies indefinite delay.
+  */
+ int __smp_send_nmi_ipi(int cpu, void (*fn)(struct pt_regs *), u64 delay_us, bool safe)
+ {
+@@ -487,31 +474,33 @@ int __smp_send_nmi_ipi(int cpu, void (*fn)(struct pt_regs *), u64 delay_us, bool
+ 	if (unlikely(!smp_ops))
+ 		return 0;
+ 
+-	/* Take the nmi_ipi_busy count/lock with interrupts hard disabled */
+ 	nmi_ipi_lock_start(&flags);
+-	while (nmi_ipi_busy_count) {
++	while (nmi_ipi_busy) {
+ 		nmi_ipi_unlock_end(&flags);
+-		spin_until_cond(nmi_ipi_busy_count == 0);
++		spin_until_cond(!nmi_ipi_busy);
+ 		nmi_ipi_lock_start(&flags);
+ 	}
+-
++	nmi_ipi_busy = true;
+ 	nmi_ipi_function = fn;
+ 
++	WARN_ON_ONCE(!cpumask_empty(&nmi_ipi_pending_mask));
++
+ 	if (cpu < 0) {
+ 		/* ALL_OTHERS */
+ 		cpumask_copy(&nmi_ipi_pending_mask, cpu_online_mask);
+ 		cpumask_clear_cpu(me, &nmi_ipi_pending_mask);
+ 	} else {
+-		/* cpumask starts clear */
+ 		cpumask_set_cpu(cpu, &nmi_ipi_pending_mask);
+ 	}
+-	nmi_ipi_busy_count++;
++
+ 	nmi_ipi_unlock();
+ 
++	/* Interrupts remain hard disabled */
++
+ 	do_smp_send_nmi_ipi(cpu, safe);
+ 
+ 	nmi_ipi_lock();
+-	/* nmi_ipi_busy_count is held here, so unlock/lock is okay */
++	/* nmi_ipi_busy is set here, so unlock/lock is okay */
+ 	while (!cpumask_empty(&nmi_ipi_pending_mask)) {
+ 		nmi_ipi_unlock();
+ 		udelay(1);
+@@ -523,29 +512,15 @@ int __smp_send_nmi_ipi(int cpu, void (*fn)(struct pt_regs *), u64 delay_us, bool
+ 		}
+ 	}
+ 
+-	while (nmi_ipi_busy_count > 1) {
+-		nmi_ipi_unlock();
+-		udelay(1);
+-		nmi_ipi_lock();
+-		if (delay_us) {
+-			delay_us--;
+-			if (!delay_us)
+-				break;
+-		}
+-	}
+-
+ 	if (!cpumask_empty(&nmi_ipi_pending_mask)) {
+ 		/* Timeout waiting for CPUs to call smp_handle_nmi_ipi */
+ 		ret = 0;
+ 		cpumask_clear(&nmi_ipi_pending_mask);
+ 	}
+-	if (nmi_ipi_busy_count > 1) {
+-		/* Timeout waiting for CPUs to execute fn */
+-		ret = 0;
+-		nmi_ipi_busy_count = 1;
+-	}
+ 
+-	nmi_ipi_busy_count--;
++	nmi_ipi_function = NULL;
++	nmi_ipi_busy = false;
++
+ 	nmi_ipi_unlock_end(&flags);
+ 
+ 	return ret;
+@@ -613,17 +588,8 @@ void crash_send_ipi(void (*crash_ipi_callback)(struct pt_regs *))
+ static void nmi_stop_this_cpu(struct pt_regs *regs)
+ {
+ 	/*
+-	 * This is a special case because it never returns, so the NMI IPI
+-	 * handling would never mark it as done, which makes any later
+-	 * smp_send_nmi_ipi() call spin forever. Mark it done now.
+-	 *
+ 	 * IRQs are already hard disabled by the smp_handle_nmi_ipi.
+ 	 */
+-	nmi_ipi_lock();
+-	if (nmi_ipi_busy_count > 1)
+-		nmi_ipi_busy_count--;
+-	nmi_ipi_unlock();
+-
+ 	spin_begin();
+ 	while (1)
+ 		spin_cpu_relax();
+diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
+index 64936b60d521..7a1de34f38c8 100644
+--- a/arch/powerpc/kernel/traps.c
++++ b/arch/powerpc/kernel/traps.c
+@@ -763,15 +763,15 @@ void machine_check_exception(struct pt_regs *regs)
+ 	if (check_io_access(regs))
+ 		goto bail;
+ 
+-	/* Must die if the interrupt is not recoverable */
+-	if (!(regs->msr & MSR_RI))
+-		nmi_panic(regs, "Unrecoverable Machine check");
+-
+ 	if (!nested)
+ 		nmi_exit();
+ 
+ 	die("Machine check", regs, SIGBUS);
+ 
++	/* Must die if the interrupt is not recoverable */
++	if (!(regs->msr & MSR_RI))
++		nmi_panic(regs, "Unrecoverable Machine check");
++
+ 	return;
+ 
+ bail:
+@@ -1542,8 +1542,8 @@ bail:
+ 
+ void StackOverflow(struct pt_regs *regs)
+ {
+-	printk(KERN_CRIT "Kernel stack overflow in process %p, r1=%lx\n",
+-	       current, regs->gpr[1]);
++	pr_crit("Kernel stack overflow in process %s[%d], r1=%lx\n",
++		current->comm, task_pid_nr(current), regs->gpr[1]);
+ 	debugger(regs);
+ 	show_regs(regs);
+ 	panic("kernel stack overflow");
+diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+index 9b8d50a7cbaf..45b06e239d1f 100644
+--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
++++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+@@ -58,6 +58,8 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300)
+ #define STACK_SLOT_DAWR		(SFS-56)
+ #define STACK_SLOT_DAWRX	(SFS-64)
+ #define STACK_SLOT_HFSCR	(SFS-72)
++#define STACK_SLOT_AMR		(SFS-80)
++#define STACK_SLOT_UAMOR	(SFS-88)
+ /* the following is used by the P9 short path */
+ #define STACK_SLOT_NVGPRS	(SFS-152)	/* 18 gprs */
+ 
+@@ -726,11 +728,9 @@ BEGIN_FTR_SECTION
+ 	mfspr	r5, SPRN_TIDR
+ 	mfspr	r6, SPRN_PSSCR
+ 	mfspr	r7, SPRN_PID
+-	mfspr	r8, SPRN_IAMR
+ 	std	r5, STACK_SLOT_TID(r1)
+ 	std	r6, STACK_SLOT_PSSCR(r1)
+ 	std	r7, STACK_SLOT_PID(r1)
+-	std	r8, STACK_SLOT_IAMR(r1)
+ 	mfspr	r5, SPRN_HFSCR
+ 	std	r5, STACK_SLOT_HFSCR(r1)
+ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
+@@ -738,11 +738,18 @@ BEGIN_FTR_SECTION
+ 	mfspr	r5, SPRN_CIABR
+ 	mfspr	r6, SPRN_DAWR
+ 	mfspr	r7, SPRN_DAWRX
++	mfspr	r8, SPRN_IAMR
+ 	std	r5, STACK_SLOT_CIABR(r1)
+ 	std	r6, STACK_SLOT_DAWR(r1)
+ 	std	r7, STACK_SLOT_DAWRX(r1)
++	std	r8, STACK_SLOT_IAMR(r1)
+ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
+ 
++	mfspr	r5, SPRN_AMR
++	std	r5, STACK_SLOT_AMR(r1)
++	mfspr	r6, SPRN_UAMOR
++	std	r6, STACK_SLOT_UAMOR(r1)
++
+ BEGIN_FTR_SECTION
+ 	/* Set partition DABR */
+ 	/* Do this before re-enabling PMU to avoid P7 DABR corruption bug */
+@@ -1631,22 +1638,25 @@ ALT_FTR_SECTION_END_IFCLR(CPU_FTR_ARCH_300)
+ 	mtspr	SPRN_PSPB, r0
+ 	mtspr	SPRN_WORT, r0
+ BEGIN_FTR_SECTION
+-	mtspr	SPRN_IAMR, r0
+ 	mtspr	SPRN_TCSCR, r0
+ 	/* Set MMCRS to 1<<31 to freeze and disable the SPMC counters */
+ 	li	r0, 1
+ 	sldi	r0, r0, 31
+ 	mtspr	SPRN_MMCRS, r0
+ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300)
+-8:
+ 
+-	/* Save and reset AMR and UAMOR before turning on the MMU */
++	/* Save and restore AMR, IAMR and UAMOR before turning on the MMU */
++	ld	r8, STACK_SLOT_IAMR(r1)
++	mtspr	SPRN_IAMR, r8
++
++8:	/* Power7 jumps back in here */
+ 	mfspr	r5,SPRN_AMR
+ 	mfspr	r6,SPRN_UAMOR
+ 	std	r5,VCPU_AMR(r9)
+ 	std	r6,VCPU_UAMOR(r9)
+-	li	r6,0
+-	mtspr	SPRN_AMR,r6
++	ld	r5,STACK_SLOT_AMR(r1)
++	ld	r6,STACK_SLOT_UAMOR(r1)
++	mtspr	SPRN_AMR, r5
+ 	mtspr	SPRN_UAMOR, r6
+ 
+ 	/* Switch DSCR back to host value */
+@@ -1746,11 +1756,9 @@ BEGIN_FTR_SECTION
+ 	ld	r5, STACK_SLOT_TID(r1)
+ 	ld	r6, STACK_SLOT_PSSCR(r1)
+ 	ld	r7, STACK_SLOT_PID(r1)
+-	ld	r8, STACK_SLOT_IAMR(r1)
+ 	mtspr	SPRN_TIDR, r5
+ 	mtspr	SPRN_PSSCR, r6
+ 	mtspr	SPRN_PID, r7
+-	mtspr	SPRN_IAMR, r8
+ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
+ 
+ #ifdef CONFIG_PPC_RADIX_MMU
+diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
+index bc3914d54e26..5986df48359b 100644
+--- a/arch/powerpc/mm/slb.c
++++ b/arch/powerpc/mm/slb.c
+@@ -69,6 +69,11 @@ static void assert_slb_presence(bool present, unsigned long ea)
+ 	if (!cpu_has_feature(CPU_FTR_ARCH_206))
+ 		return;
+ 
++	/*
++	 * slbfee. requires bit 24 (PPC bit 39) be clear in RB. Hardware
++	 * ignores all other bits from 0-27, so just clear them all.
++	 */
++	ea &= ~((1UL << 28) - 1);
+ 	asm volatile(__PPC_SLBFEE_DOT(%0, %1) : "=r"(tmp) : "r"(ea) : "cr0");
+ 
+ 	WARN_ON(present == (tmp == 0));
+diff --git a/arch/powerpc/platforms/83xx/suspend-asm.S b/arch/powerpc/platforms/83xx/suspend-asm.S
+index 3d1ecd211776..8137f77abad5 100644
+--- a/arch/powerpc/platforms/83xx/suspend-asm.S
++++ b/arch/powerpc/platforms/83xx/suspend-asm.S
+@@ -26,13 +26,13 @@
+ #define SS_MSR		0x74
+ #define SS_SDR1		0x78
+ #define SS_LR		0x7c
+-#define SS_SPRG		0x80 /* 4 SPRGs */
+-#define SS_DBAT		0x90 /* 8 DBATs */
+-#define SS_IBAT		0xd0 /* 8 IBATs */
+-#define SS_TB		0x110
+-#define SS_CR		0x118
+-#define SS_GPREG	0x11c /* r12-r31 */
+-#define STATE_SAVE_SIZE 0x16c
++#define SS_SPRG		0x80 /* 8 SPRGs */
++#define SS_DBAT		0xa0 /* 8 DBATs */
++#define SS_IBAT		0xe0 /* 8 IBATs */
++#define SS_TB		0x120
++#define SS_CR		0x128
++#define SS_GPREG	0x12c /* r12-r31 */
++#define STATE_SAVE_SIZE 0x17c
+ 
+ 	.section .data
+ 	.align	5
+@@ -103,6 +103,16 @@ _GLOBAL(mpc83xx_enter_deep_sleep)
+ 	stw	r7, SS_SPRG+12(r3)
+ 	stw	r8, SS_SDR1(r3)
+ 
++	mfspr	r4, SPRN_SPRG4
++	mfspr	r5, SPRN_SPRG5
++	mfspr	r6, SPRN_SPRG6
++	mfspr	r7, SPRN_SPRG7
++
++	stw	r4, SS_SPRG+16(r3)
++	stw	r5, SS_SPRG+20(r3)
++	stw	r6, SS_SPRG+24(r3)
++	stw	r7, SS_SPRG+28(r3)
++
+ 	mfspr	r4, SPRN_DBAT0U
+ 	mfspr	r5, SPRN_DBAT0L
+ 	mfspr	r6, SPRN_DBAT1U
+@@ -493,6 +503,16 @@ mpc83xx_deep_resume:
+ 	mtspr	SPRN_IBAT7U, r6
+ 	mtspr	SPRN_IBAT7L, r7
+ 
++	lwz	r4, SS_SPRG+16(r3)
++	lwz	r5, SS_SPRG+20(r3)
++	lwz	r6, SS_SPRG+24(r3)
++	lwz	r7, SS_SPRG+28(r3)
++
++	mtspr	SPRN_SPRG4, r4
++	mtspr	SPRN_SPRG5, r5
++	mtspr	SPRN_SPRG6, r6
++	mtspr	SPRN_SPRG7, r7
++
+ 	lwz	r4, SS_SPRG+0(r3)
+ 	lwz	r5, SS_SPRG+4(r3)
+ 	lwz	r6, SS_SPRG+8(r3)
+diff --git a/arch/powerpc/platforms/embedded6xx/wii.c b/arch/powerpc/platforms/embedded6xx/wii.c
+index ecf703ee3a76..ac4ee88efc80 100644
+--- a/arch/powerpc/platforms/embedded6xx/wii.c
++++ b/arch/powerpc/platforms/embedded6xx/wii.c
+@@ -83,6 +83,10 @@ unsigned long __init wii_mmu_mapin_mem2(unsigned long top)
+ 	/* MEM2 64MB@0x10000000 */
+ 	delta = wii_hole_start + wii_hole_size;
+ 	size = top - delta;
++
++	if (__map_without_bats)
++		return delta;
++
+ 	for (bl = 128<<10; bl < max_size; bl <<= 1) {
+ 		if (bl * 2 > size)
+ 			break;
+diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c
+index 35f699ebb662..e52f9b06dd9c 100644
+--- a/arch/powerpc/platforms/powernv/idle.c
++++ b/arch/powerpc/platforms/powernv/idle.c
+@@ -458,7 +458,8 @@ EXPORT_SYMBOL_GPL(pnv_power9_force_smt4_release);
+ #endif /* CONFIG_KVM_BOOK3S_HV_POSSIBLE */
+ 
+ #ifdef CONFIG_HOTPLUG_CPU
+-static void pnv_program_cpu_hotplug_lpcr(unsigned int cpu, u64 lpcr_val)
++
++void pnv_program_cpu_hotplug_lpcr(unsigned int cpu, u64 lpcr_val)
+ {
+ 	u64 pir = get_hard_smp_processor_id(cpu);
+ 
+@@ -481,20 +482,6 @@ unsigned long pnv_cpu_offline(unsigned int cpu)
+ {
+ 	unsigned long srr1;
+ 	u32 idle_states = pnv_get_supported_cpuidle_states();
+-	u64 lpcr_val;
+-
+-	/*
+-	 * We don't want to take decrementer interrupts while we are
+-	 * offline, so clear LPCR:PECE1. We keep PECE2 (and
+-	 * LPCR_PECE_HVEE on P9) enabled as to let IPIs in.
+-	 *
+-	 * If the CPU gets woken up by a special wakeup, ensure that
+-	 * the SLW engine sets LPCR with decrementer bit cleared, else
+-	 * the CPU will come back to the kernel due to a spurious
+-	 * wakeup.
+-	 */
+-	lpcr_val = mfspr(SPRN_LPCR) & ~(u64)LPCR_PECE1;
+-	pnv_program_cpu_hotplug_lpcr(cpu, lpcr_val);
+ 
+ 	__ppc64_runlatch_off();
+ 
+@@ -526,16 +513,6 @@ unsigned long pnv_cpu_offline(unsigned int cpu)
+ 
+ 	__ppc64_runlatch_on();
+ 
+-	/*
+-	 * Re-enable decrementer interrupts in LPCR.
+-	 *
+-	 * Further, we want stop states to be woken up by decrementer
+-	 * for non-hotplug cases. So program the LPCR via stop api as
+-	 * well.
+-	 */
+-	lpcr_val = mfspr(SPRN_LPCR) | (u64)LPCR_PECE1;
+-	pnv_program_cpu_hotplug_lpcr(cpu, lpcr_val);
+-
+ 	return srr1;
+ }
+ #endif
+diff --git a/arch/powerpc/platforms/powernv/opal-msglog.c b/arch/powerpc/platforms/powernv/opal-msglog.c
+index acd3206dfae3..06628c71cef6 100644
+--- a/arch/powerpc/platforms/powernv/opal-msglog.c
++++ b/arch/powerpc/platforms/powernv/opal-msglog.c
+@@ -98,7 +98,7 @@ static ssize_t opal_msglog_read(struct file *file, struct kobject *kobj,
+ }
+ 
+ static struct bin_attribute opal_msglog_attr = {
+-	.attr = {.name = "msglog", .mode = 0444},
++	.attr = {.name = "msglog", .mode = 0400},
+ 	.read = opal_msglog_read
+ };
+ 
+diff --git a/arch/powerpc/platforms/powernv/smp.c b/arch/powerpc/platforms/powernv/smp.c
+index 0d354e19ef92..db09c7022635 100644
+--- a/arch/powerpc/platforms/powernv/smp.c
++++ b/arch/powerpc/platforms/powernv/smp.c
+@@ -39,6 +39,7 @@
+ #include <asm/cpuidle.h>
+ #include <asm/kexec.h>
+ #include <asm/reg.h>
++#include <asm/powernv.h>
+ 
+ #include "powernv.h"
+ 
+@@ -153,6 +154,7 @@ static void pnv_smp_cpu_kill_self(void)
+ {
+ 	unsigned int cpu;
+ 	unsigned long srr1, wmask;
++	u64 lpcr_val;
+ 
+ 	/* Standard hot unplug procedure */
+ 	/*
+@@ -174,6 +176,19 @@ static void pnv_smp_cpu_kill_self(void)
+ 	if (cpu_has_feature(CPU_FTR_ARCH_207S))
+ 		wmask = SRR1_WAKEMASK_P8;
+ 
++	/*
++	 * We don't want to take decrementer interrupts while we are
++	 * offline, so clear LPCR:PECE1. We keep PECE2 (and
++	 * LPCR_PECE_HVEE on P9) enabled so as to let IPIs in.
++	 *
++	 * If the CPU gets woken up by a special wakeup, ensure that
++	 * the SLW engine sets LPCR with decrementer bit cleared, else
++	 * the CPU will come back to the kernel due to a spurious
++	 * wakeup.
++	 */
++	lpcr_val = mfspr(SPRN_LPCR) & ~(u64)LPCR_PECE1;
++	pnv_program_cpu_hotplug_lpcr(cpu, lpcr_val);
++
+ 	while (!generic_check_cpu_restart(cpu)) {
+ 		/*
+ 		 * Clear IPI flag, since we don't handle IPIs while
+@@ -246,6 +261,16 @@ static void pnv_smp_cpu_kill_self(void)
+ 
+ 	}
+ 
++	/*
++	 * Re-enable decrementer interrupts in LPCR.
++	 *
++	 * Further, we want stop states to be woken up by decrementer
++	 * for non-hotplug cases. So program the LPCR via stop api as
++	 * well.
++	 */
++	lpcr_val = mfspr(SPRN_LPCR) | (u64)LPCR_PECE1;
++	pnv_program_cpu_hotplug_lpcr(cpu, lpcr_val);
++
+ 	DBG("CPU%d coming online...\n", cpu);
+ }
+ 
+diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
+index d5d24889c3bc..c2b8c8c6c9be 100644
+--- a/arch/s390/include/asm/kvm_host.h
++++ b/arch/s390/include/asm/kvm_host.h
+@@ -878,7 +878,7 @@ static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
+ static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
+ static inline void kvm_arch_free_memslot(struct kvm *kvm,
+ 		struct kvm_memory_slot *free, struct kvm_memory_slot *dont) {}
+-static inline void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots) {}
++static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
+ static inline void kvm_arch_flush_shadow_all(struct kvm *kvm) {}
+ static inline void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
+ 		struct kvm_memory_slot *slot) {}
+diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
+index 7ed90a759135..01a3f4964d57 100644
+--- a/arch/s390/kernel/setup.c
++++ b/arch/s390/kernel/setup.c
+@@ -369,7 +369,7 @@ void __init arch_call_rest_init(void)
+ 		: : [_frame] "a" (frame));
+ }
+ 
+-static void __init setup_lowcore(void)
++static void __init setup_lowcore_dat_off(void)
+ {
+ 	struct lowcore *lc;
+ 
+@@ -380,19 +380,16 @@ static void __init setup_lowcore(void)
+ 	lc = memblock_alloc_low(sizeof(*lc), sizeof(*lc));
+ 	lc->restart_psw.mask = PSW_KERNEL_BITS;
+ 	lc->restart_psw.addr = (unsigned long) restart_int_handler;
+-	lc->external_new_psw.mask = PSW_KERNEL_BITS |
+-		PSW_MASK_DAT | PSW_MASK_MCHECK;
++	lc->external_new_psw.mask = PSW_KERNEL_BITS | PSW_MASK_MCHECK;
+ 	lc->external_new_psw.addr = (unsigned long) ext_int_handler;
+ 	lc->svc_new_psw.mask = PSW_KERNEL_BITS |
+-		PSW_MASK_DAT | PSW_MASK_IO | PSW_MASK_EXT | PSW_MASK_MCHECK;
++		PSW_MASK_IO | PSW_MASK_EXT | PSW_MASK_MCHECK;
+ 	lc->svc_new_psw.addr = (unsigned long) system_call;
+-	lc->program_new_psw.mask = PSW_KERNEL_BITS |
+-		PSW_MASK_DAT | PSW_MASK_MCHECK;
++	lc->program_new_psw.mask = PSW_KERNEL_BITS | PSW_MASK_MCHECK;
+ 	lc->program_new_psw.addr = (unsigned long) pgm_check_handler;
+ 	lc->mcck_new_psw.mask = PSW_KERNEL_BITS;
+ 	lc->mcck_new_psw.addr = (unsigned long) mcck_int_handler;
+-	lc->io_new_psw.mask = PSW_KERNEL_BITS |
+-		PSW_MASK_DAT | PSW_MASK_MCHECK;
++	lc->io_new_psw.mask = PSW_KERNEL_BITS | PSW_MASK_MCHECK;
+ 	lc->io_new_psw.addr = (unsigned long) io_int_handler;
+ 	lc->clock_comparator = clock_comparator_max;
+ 	lc->nodat_stack = ((unsigned long) &init_thread_union)
+@@ -452,6 +449,16 @@ static void __init setup_lowcore(void)
+ 	lowcore_ptr[0] = lc;
+ }
+ 
++static void __init setup_lowcore_dat_on(void)
++{
++	__ctl_clear_bit(0, 28);
++	S390_lowcore.external_new_psw.mask |= PSW_MASK_DAT;
++	S390_lowcore.svc_new_psw.mask |= PSW_MASK_DAT;
++	S390_lowcore.program_new_psw.mask |= PSW_MASK_DAT;
++	S390_lowcore.io_new_psw.mask |= PSW_MASK_DAT;
++	__ctl_set_bit(0, 28);
++}
++
+ static struct resource code_resource = {
+ 	.name  = "Kernel code",
+ 	.flags = IORESOURCE_BUSY | IORESOURCE_SYSTEM_RAM,
+@@ -1072,7 +1079,7 @@ void __init setup_arch(char **cmdline_p)
+ #endif
+ 
+ 	setup_resources();
+-	setup_lowcore();
++	setup_lowcore_dat_off();
+ 	smp_fill_possible_mask();
+ 	cpu_detect_mhz_feature();
+         cpu_init();
+@@ -1085,6 +1092,12 @@ void __init setup_arch(char **cmdline_p)
+ 	 */
+         paging_init();
+ 
++	/*
++	 * After paging_init created the kernel page table, the new PSWs
++	 * in lowcore can now run with DAT enabled.
++	 */
++	setup_lowcore_dat_on();
++
+         /* Setup default console */
+ 	conmode_default();
+ 	set_preferred_console();
+diff --git a/arch/x86/crypto/aegis128-aesni-glue.c b/arch/x86/crypto/aegis128-aesni-glue.c
+index 2a356b948720..3ea71b871813 100644
+--- a/arch/x86/crypto/aegis128-aesni-glue.c
++++ b/arch/x86/crypto/aegis128-aesni-glue.c
+@@ -119,31 +119,20 @@ static void crypto_aegis128_aesni_process_ad(
+ }
+ 
+ static void crypto_aegis128_aesni_process_crypt(
+-		struct aegis_state *state, struct aead_request *req,
++		struct aegis_state *state, struct skcipher_walk *walk,
+ 		const struct aegis_crypt_ops *ops)
+ {
+-	struct skcipher_walk walk;
+-	u8 *src, *dst;
+-	unsigned int chunksize, base;
+-
+-	ops->skcipher_walk_init(&walk, req, false);
+-
+-	while (walk.nbytes) {
+-		src = walk.src.virt.addr;
+-		dst = walk.dst.virt.addr;
+-		chunksize = walk.nbytes;
+-
+-		ops->crypt_blocks(state, chunksize, src, dst);
+-
+-		base = chunksize & ~(AEGIS128_BLOCK_SIZE - 1);
+-		src += base;
+-		dst += base;
+-		chunksize &= AEGIS128_BLOCK_SIZE - 1;
+-
+-		if (chunksize > 0)
+-			ops->crypt_tail(state, chunksize, src, dst);
++	while (walk->nbytes >= AEGIS128_BLOCK_SIZE) {
++		ops->crypt_blocks(state,
++				  round_down(walk->nbytes, AEGIS128_BLOCK_SIZE),
++				  walk->src.virt.addr, walk->dst.virt.addr);
++		skcipher_walk_done(walk, walk->nbytes % AEGIS128_BLOCK_SIZE);
++	}
+ 
+-		skcipher_walk_done(&walk, 0);
++	if (walk->nbytes) {
++		ops->crypt_tail(state, walk->nbytes, walk->src.virt.addr,
++				walk->dst.virt.addr);
++		skcipher_walk_done(walk, 0);
+ 	}
+ }
+ 
+@@ -186,13 +175,16 @@ static void crypto_aegis128_aesni_crypt(struct aead_request *req,
+ {
+ 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ 	struct aegis_ctx *ctx = crypto_aegis128_aesni_ctx(tfm);
++	struct skcipher_walk walk;
+ 	struct aegis_state state;
+ 
++	ops->skcipher_walk_init(&walk, req, true);
++
+ 	kernel_fpu_begin();
+ 
+ 	crypto_aegis128_aesni_init(&state, ctx->key.bytes, req->iv);
+ 	crypto_aegis128_aesni_process_ad(&state, req->src, req->assoclen);
+-	crypto_aegis128_aesni_process_crypt(&state, req, ops);
++	crypto_aegis128_aesni_process_crypt(&state, &walk, ops);
+ 	crypto_aegis128_aesni_final(&state, tag_xor, req->assoclen, cryptlen);
+ 
+ 	kernel_fpu_end();
+diff --git a/arch/x86/crypto/aegis128l-aesni-glue.c b/arch/x86/crypto/aegis128l-aesni-glue.c
+index dbe8bb980da1..1b1b39c66c5e 100644
+--- a/arch/x86/crypto/aegis128l-aesni-glue.c
++++ b/arch/x86/crypto/aegis128l-aesni-glue.c
+@@ -119,31 +119,20 @@ static void crypto_aegis128l_aesni_process_ad(
+ }
+ 
+ static void crypto_aegis128l_aesni_process_crypt(
+-		struct aegis_state *state, struct aead_request *req,
++		struct aegis_state *state, struct skcipher_walk *walk,
+ 		const struct aegis_crypt_ops *ops)
+ {
+-	struct skcipher_walk walk;
+-	u8 *src, *dst;
+-	unsigned int chunksize, base;
+-
+-	ops->skcipher_walk_init(&walk, req, false);
+-
+-	while (walk.nbytes) {
+-		src = walk.src.virt.addr;
+-		dst = walk.dst.virt.addr;
+-		chunksize = walk.nbytes;
+-
+-		ops->crypt_blocks(state, chunksize, src, dst);
+-
+-		base = chunksize & ~(AEGIS128L_BLOCK_SIZE - 1);
+-		src += base;
+-		dst += base;
+-		chunksize &= AEGIS128L_BLOCK_SIZE - 1;
+-
+-		if (chunksize > 0)
+-			ops->crypt_tail(state, chunksize, src, dst);
++	while (walk->nbytes >= AEGIS128L_BLOCK_SIZE) {
++		ops->crypt_blocks(state, round_down(walk->nbytes,
++						    AEGIS128L_BLOCK_SIZE),
++				  walk->src.virt.addr, walk->dst.virt.addr);
++		skcipher_walk_done(walk, walk->nbytes % AEGIS128L_BLOCK_SIZE);
++	}
+ 
+-		skcipher_walk_done(&walk, 0);
++	if (walk->nbytes) {
++		ops->crypt_tail(state, walk->nbytes, walk->src.virt.addr,
++				walk->dst.virt.addr);
++		skcipher_walk_done(walk, 0);
+ 	}
+ }
+ 
+@@ -186,13 +175,16 @@ static void crypto_aegis128l_aesni_crypt(struct aead_request *req,
+ {
+ 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ 	struct aegis_ctx *ctx = crypto_aegis128l_aesni_ctx(tfm);
++	struct skcipher_walk walk;
+ 	struct aegis_state state;
+ 
++	ops->skcipher_walk_init(&walk, req, true);
++
+ 	kernel_fpu_begin();
+ 
+ 	crypto_aegis128l_aesni_init(&state, ctx->key.bytes, req->iv);
+ 	crypto_aegis128l_aesni_process_ad(&state, req->src, req->assoclen);
+-	crypto_aegis128l_aesni_process_crypt(&state, req, ops);
++	crypto_aegis128l_aesni_process_crypt(&state, &walk, ops);
+ 	crypto_aegis128l_aesni_final(&state, tag_xor, req->assoclen, cryptlen);
+ 
+ 	kernel_fpu_end();
+diff --git a/arch/x86/crypto/aegis256-aesni-glue.c b/arch/x86/crypto/aegis256-aesni-glue.c
+index 8bebda2de92f..6227ca3220a0 100644
+--- a/arch/x86/crypto/aegis256-aesni-glue.c
++++ b/arch/x86/crypto/aegis256-aesni-glue.c
+@@ -119,31 +119,20 @@ static void crypto_aegis256_aesni_process_ad(
+ }
+ 
+ static void crypto_aegis256_aesni_process_crypt(
+-		struct aegis_state *state, struct aead_request *req,
++		struct aegis_state *state, struct skcipher_walk *walk,
+ 		const struct aegis_crypt_ops *ops)
+ {
+-	struct skcipher_walk walk;
+-	u8 *src, *dst;
+-	unsigned int chunksize, base;
+-
+-	ops->skcipher_walk_init(&walk, req, false);
+-
+-	while (walk.nbytes) {
+-		src = walk.src.virt.addr;
+-		dst = walk.dst.virt.addr;
+-		chunksize = walk.nbytes;
+-
+-		ops->crypt_blocks(state, chunksize, src, dst);
+-
+-		base = chunksize & ~(AEGIS256_BLOCK_SIZE - 1);
+-		src += base;
+-		dst += base;
+-		chunksize &= AEGIS256_BLOCK_SIZE - 1;
+-
+-		if (chunksize > 0)
+-			ops->crypt_tail(state, chunksize, src, dst);
++	while (walk->nbytes >= AEGIS256_BLOCK_SIZE) {
++		ops->crypt_blocks(state,
++				  round_down(walk->nbytes, AEGIS256_BLOCK_SIZE),
++				  walk->src.virt.addr, walk->dst.virt.addr);
++		skcipher_walk_done(walk, walk->nbytes % AEGIS256_BLOCK_SIZE);
++	}
+ 
+-		skcipher_walk_done(&walk, 0);
++	if (walk->nbytes) {
++		ops->crypt_tail(state, walk->nbytes, walk->src.virt.addr,
++				walk->dst.virt.addr);
++		skcipher_walk_done(walk, 0);
+ 	}
+ }
+ 
+@@ -186,13 +175,16 @@ static void crypto_aegis256_aesni_crypt(struct aead_request *req,
+ {
+ 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ 	struct aegis_ctx *ctx = crypto_aegis256_aesni_ctx(tfm);
++	struct skcipher_walk walk;
+ 	struct aegis_state state;
+ 
++	ops->skcipher_walk_init(&walk, req, true);
++
+ 	kernel_fpu_begin();
+ 
+ 	crypto_aegis256_aesni_init(&state, ctx->key, req->iv);
+ 	crypto_aegis256_aesni_process_ad(&state, req->src, req->assoclen);
+-	crypto_aegis256_aesni_process_crypt(&state, req, ops);
++	crypto_aegis256_aesni_process_crypt(&state, &walk, ops);
+ 	crypto_aegis256_aesni_final(&state, tag_xor, req->assoclen, cryptlen);
+ 
+ 	kernel_fpu_end();
+diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
+index 1321700d6647..ae30c8b6ec4d 100644
+--- a/arch/x86/crypto/aesni-intel_glue.c
++++ b/arch/x86/crypto/aesni-intel_glue.c
+@@ -821,11 +821,14 @@ static int gcmaes_crypt_by_sg(bool enc, struct aead_request *req,
+ 		scatterwalk_map_and_copy(assoc, req->src, 0, assoclen, 0);
+ 	}
+ 
+-	src_sg = scatterwalk_ffwd(src_start, req->src, req->assoclen);
+-	scatterwalk_start(&src_sg_walk, src_sg);
+-	if (req->src != req->dst) {
+-		dst_sg = scatterwalk_ffwd(dst_start, req->dst, req->assoclen);
+-		scatterwalk_start(&dst_sg_walk, dst_sg);
++	if (left) {
++		src_sg = scatterwalk_ffwd(src_start, req->src, req->assoclen);
++		scatterwalk_start(&src_sg_walk, src_sg);
++		if (req->src != req->dst) {
++			dst_sg = scatterwalk_ffwd(dst_start, req->dst,
++						  req->assoclen);
++			scatterwalk_start(&dst_sg_walk, dst_sg);
++		}
+ 	}
+ 
+ 	kernel_fpu_begin();
+diff --git a/arch/x86/crypto/morus1280_glue.c b/arch/x86/crypto/morus1280_glue.c
+index 0dccdda1eb3a..7e600f8bcdad 100644
+--- a/arch/x86/crypto/morus1280_glue.c
++++ b/arch/x86/crypto/morus1280_glue.c
+@@ -85,31 +85,20 @@ static void crypto_morus1280_glue_process_ad(
+ 
+ static void crypto_morus1280_glue_process_crypt(struct morus1280_state *state,
+ 						struct morus1280_ops ops,
+-						struct aead_request *req)
++						struct skcipher_walk *walk)
+ {
+-	struct skcipher_walk walk;
+-	u8 *cursor_src, *cursor_dst;
+-	unsigned int chunksize, base;
+-
+-	ops.skcipher_walk_init(&walk, req, false);
+-
+-	while (walk.nbytes) {
+-		cursor_src = walk.src.virt.addr;
+-		cursor_dst = walk.dst.virt.addr;
+-		chunksize = walk.nbytes;
+-
+-		ops.crypt_blocks(state, cursor_src, cursor_dst, chunksize);
+-
+-		base = chunksize & ~(MORUS1280_BLOCK_SIZE - 1);
+-		cursor_src += base;
+-		cursor_dst += base;
+-		chunksize &= MORUS1280_BLOCK_SIZE - 1;
+-
+-		if (chunksize > 0)
+-			ops.crypt_tail(state, cursor_src, cursor_dst,
+-				       chunksize);
++	while (walk->nbytes >= MORUS1280_BLOCK_SIZE) {
++		ops.crypt_blocks(state, walk->src.virt.addr,
++				 walk->dst.virt.addr,
++				 round_down(walk->nbytes,
++					    MORUS1280_BLOCK_SIZE));
++		skcipher_walk_done(walk, walk->nbytes % MORUS1280_BLOCK_SIZE);
++	}
+ 
+-		skcipher_walk_done(&walk, 0);
++	if (walk->nbytes) {
++		ops.crypt_tail(state, walk->src.virt.addr, walk->dst.virt.addr,
++			       walk->nbytes);
++		skcipher_walk_done(walk, 0);
+ 	}
+ }
+ 
+@@ -147,12 +136,15 @@ static void crypto_morus1280_glue_crypt(struct aead_request *req,
+ 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ 	struct morus1280_ctx *ctx = crypto_aead_ctx(tfm);
+ 	struct morus1280_state state;
++	struct skcipher_walk walk;
++
++	ops.skcipher_walk_init(&walk, req, true);
+ 
+ 	kernel_fpu_begin();
+ 
+ 	ctx->ops->init(&state, &ctx->key, req->iv);
+ 	crypto_morus1280_glue_process_ad(&state, ctx->ops, req->src, req->assoclen);
+-	crypto_morus1280_glue_process_crypt(&state, ops, req);
++	crypto_morus1280_glue_process_crypt(&state, ops, &walk);
+ 	ctx->ops->final(&state, tag_xor, req->assoclen, cryptlen);
+ 
+ 	kernel_fpu_end();
+diff --git a/arch/x86/crypto/morus640_glue.c b/arch/x86/crypto/morus640_glue.c
+index 7b58fe4d9bd1..cb3a81732016 100644
+--- a/arch/x86/crypto/morus640_glue.c
++++ b/arch/x86/crypto/morus640_glue.c
+@@ -85,31 +85,19 @@ static void crypto_morus640_glue_process_ad(
+ 
+ static void crypto_morus640_glue_process_crypt(struct morus640_state *state,
+ 					       struct morus640_ops ops,
+-					       struct aead_request *req)
++					       struct skcipher_walk *walk)
+ {
+-	struct skcipher_walk walk;
+-	u8 *cursor_src, *cursor_dst;
+-	unsigned int chunksize, base;
+-
+-	ops.skcipher_walk_init(&walk, req, false);
+-
+-	while (walk.nbytes) {
+-		cursor_src = walk.src.virt.addr;
+-		cursor_dst = walk.dst.virt.addr;
+-		chunksize = walk.nbytes;
+-
+-		ops.crypt_blocks(state, cursor_src, cursor_dst, chunksize);
+-
+-		base = chunksize & ~(MORUS640_BLOCK_SIZE - 1);
+-		cursor_src += base;
+-		cursor_dst += base;
+-		chunksize &= MORUS640_BLOCK_SIZE - 1;
+-
+-		if (chunksize > 0)
+-			ops.crypt_tail(state, cursor_src, cursor_dst,
+-				       chunksize);
++	while (walk->nbytes >= MORUS640_BLOCK_SIZE) {
++		ops.crypt_blocks(state, walk->src.virt.addr,
++				 walk->dst.virt.addr,
++				 round_down(walk->nbytes, MORUS640_BLOCK_SIZE));
++		skcipher_walk_done(walk, walk->nbytes % MORUS640_BLOCK_SIZE);
++	}
+ 
+-		skcipher_walk_done(&walk, 0);
++	if (walk->nbytes) {
++		ops.crypt_tail(state, walk->src.virt.addr, walk->dst.virt.addr,
++			       walk->nbytes);
++		skcipher_walk_done(walk, 0);
+ 	}
+ }
+ 
+@@ -143,12 +131,15 @@ static void crypto_morus640_glue_crypt(struct aead_request *req,
+ 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ 	struct morus640_ctx *ctx = crypto_aead_ctx(tfm);
+ 	struct morus640_state state;
++	struct skcipher_walk walk;
++
++	ops.skcipher_walk_init(&walk, req, true);
+ 
+ 	kernel_fpu_begin();
+ 
+ 	ctx->ops->init(&state, &ctx->key, req->iv);
+ 	crypto_morus640_glue_process_ad(&state, ctx->ops, req->src, req->assoclen);
+-	crypto_morus640_glue_process_crypt(&state, ops, req);
++	crypto_morus640_glue_process_crypt(&state, ops, &walk);
+ 	ctx->ops->final(&state, tag_xor, req->assoclen, cryptlen);
+ 
+ 	kernel_fpu_end();
+diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
+index 27a461414b30..2690135bf83f 100644
+--- a/arch/x86/events/intel/uncore.c
++++ b/arch/x86/events/intel/uncore.c
+@@ -740,6 +740,7 @@ static int uncore_pmu_event_init(struct perf_event *event)
+ 		/* fixed counters have event field hardcoded to zero */
+ 		hwc->config = 0ULL;
+ 	} else if (is_freerunning_event(event)) {
++		hwc->config = event->attr.config;
+ 		if (!check_valid_freerunning_event(box, event))
+ 			return -EINVAL;
+ 		event->hw.idx = UNCORE_PMC_IDX_FREERUNNING;
+diff --git a/arch/x86/events/intel/uncore.h b/arch/x86/events/intel/uncore.h
+index cb46d602a6b8..853a49a8ccf6 100644
+--- a/arch/x86/events/intel/uncore.h
++++ b/arch/x86/events/intel/uncore.h
+@@ -292,8 +292,8 @@ static inline
+ unsigned int uncore_freerunning_counter(struct intel_uncore_box *box,
+ 					struct perf_event *event)
+ {
+-	unsigned int type = uncore_freerunning_type(event->attr.config);
+-	unsigned int idx = uncore_freerunning_idx(event->attr.config);
++	unsigned int type = uncore_freerunning_type(event->hw.config);
++	unsigned int idx = uncore_freerunning_idx(event->hw.config);
+ 	struct intel_uncore_pmu *pmu = box->pmu;
+ 
+ 	return pmu->type->freerunning[type].counter_base +
+@@ -377,7 +377,7 @@ static inline
+ unsigned int uncore_freerunning_bits(struct intel_uncore_box *box,
+ 				     struct perf_event *event)
+ {
+-	unsigned int type = uncore_freerunning_type(event->attr.config);
++	unsigned int type = uncore_freerunning_type(event->hw.config);
+ 
+ 	return box->pmu->type->freerunning[type].bits;
+ }
+@@ -385,7 +385,7 @@ unsigned int uncore_freerunning_bits(struct intel_uncore_box *box,
+ static inline int uncore_num_freerunning(struct intel_uncore_box *box,
+ 					 struct perf_event *event)
+ {
+-	unsigned int type = uncore_freerunning_type(event->attr.config);
++	unsigned int type = uncore_freerunning_type(event->hw.config);
+ 
+ 	return box->pmu->type->freerunning[type].num_counters;
+ }
+@@ -399,8 +399,8 @@ static inline int uncore_num_freerunning_types(struct intel_uncore_box *box,
+ static inline bool check_valid_freerunning_event(struct intel_uncore_box *box,
+ 						 struct perf_event *event)
+ {
+-	unsigned int type = uncore_freerunning_type(event->attr.config);
+-	unsigned int idx = uncore_freerunning_idx(event->attr.config);
++	unsigned int type = uncore_freerunning_type(event->hw.config);
++	unsigned int idx = uncore_freerunning_idx(event->hw.config);
+ 
+ 	return (type < uncore_num_freerunning_types(box, event)) &&
+ 	       (idx < uncore_num_freerunning(box, event));
+diff --git a/arch/x86/events/intel/uncore_snb.c b/arch/x86/events/intel/uncore_snb.c
+index 2593b0d7aeee..ef7faf486a1a 100644
+--- a/arch/x86/events/intel/uncore_snb.c
++++ b/arch/x86/events/intel/uncore_snb.c
+@@ -448,9 +448,11 @@ static int snb_uncore_imc_event_init(struct perf_event *event)
+ 
+ 	/* must be done before validate_group */
+ 	event->hw.event_base = base;
+-	event->hw.config = cfg;
+ 	event->hw.idx = idx;
+ 
++	/* Convert to standard encoding format for freerunning counters */
++	event->hw.config = ((cfg - 1) << 8) | 0x10ff;
++
+ 	/* no group validation needed, we have free running counters */
+ 
+ 	return 0;
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 180373360e34..e40be168c73c 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -1255,7 +1255,7 @@ void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm,
+ 				   struct kvm_memory_slot *slot,
+ 				   gfn_t gfn_offset, unsigned long mask);
+ void kvm_mmu_zap_all(struct kvm *kvm);
+-void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, struct kvm_memslots *slots);
++void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen);
+ unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm);
+ void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int kvm_nr_mmu_pages);
+ 
+diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
+index 8257a59704ae..763d4264d16a 100644
+--- a/arch/x86/kernel/ftrace.c
++++ b/arch/x86/kernel/ftrace.c
+@@ -49,7 +49,7 @@ int ftrace_arch_code_modify_post_process(void)
+ union ftrace_code_union {
+ 	char code[MCOUNT_INSN_SIZE];
+ 	struct {
+-		unsigned char e8;
++		unsigned char op;
+ 		int offset;
+ 	} __attribute__((packed));
+ };
+@@ -59,20 +59,23 @@ static int ftrace_calc_offset(long ip, long addr)
+ 	return (int)(addr - ip);
+ }
+ 
+-static unsigned char *ftrace_call_replace(unsigned long ip, unsigned long addr)
++static unsigned char *
++ftrace_text_replace(unsigned char op, unsigned long ip, unsigned long addr)
+ {
+ 	static union ftrace_code_union calc;
+ 
+-	calc.e8		= 0xe8;
++	calc.op		= op;
+ 	calc.offset	= ftrace_calc_offset(ip + MCOUNT_INSN_SIZE, addr);
+ 
+-	/*
+-	 * No locking needed, this must be called via kstop_machine
+-	 * which in essence is like running on a uniprocessor machine.
+-	 */
+ 	return calc.code;
+ }
+ 
++static unsigned char *
++ftrace_call_replace(unsigned long ip, unsigned long addr)
++{
++	return ftrace_text_replace(0xe8, ip, addr);
++}
++
+ static inline int
+ within(unsigned long addr, unsigned long start, unsigned long end)
+ {
+@@ -664,22 +667,6 @@ int __init ftrace_dyn_arch_init(void)
+ 	return 0;
+ }
+ 
+-#if defined(CONFIG_X86_64) || defined(CONFIG_FUNCTION_GRAPH_TRACER)
+-static unsigned char *ftrace_jmp_replace(unsigned long ip, unsigned long addr)
+-{
+-	static union ftrace_code_union calc;
+-
+-	/* Jmp not a call (ignore the .e8) */
+-	calc.e8		= 0xe9;
+-	calc.offset	= ftrace_calc_offset(ip + MCOUNT_INSN_SIZE, addr);
+-
+-	/*
+-	 * ftrace external locks synchronize the access to the static variable.
+-	 */
+-	return calc.code;
+-}
+-#endif
+-
+ /* Currently only x86_64 supports dynamic trampolines */
+ #ifdef CONFIG_X86_64
+ 
+@@ -891,8 +878,8 @@ static void *addr_from_call(void *ptr)
+ 		return NULL;
+ 
+ 	/* Make sure this is a call */
+-	if (WARN_ON_ONCE(calc.e8 != 0xe8)) {
+-		pr_warn("Expected e8, got %x\n", calc.e8);
++	if (WARN_ON_ONCE(calc.op != 0xe8)) {
++		pr_warn("Expected e8, got %x\n", calc.op);
+ 		return NULL;
+ 	}
+ 
+@@ -963,6 +950,11 @@ void arch_ftrace_trampoline_free(struct ftrace_ops *ops)
+ #ifdef CONFIG_DYNAMIC_FTRACE
+ extern void ftrace_graph_call(void);
+ 
++static unsigned char *ftrace_jmp_replace(unsigned long ip, unsigned long addr)
++{
++	return ftrace_text_replace(0xe9, ip, addr);
++}
++
+ static int ftrace_mod_jmp(unsigned long ip, void *func)
+ {
+ 	unsigned char *new;
+diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
+index 6adf6e6c2933..544bd41a514c 100644
+--- a/arch/x86/kernel/kprobes/opt.c
++++ b/arch/x86/kernel/kprobes/opt.c
+@@ -141,6 +141,11 @@ asm (
+ 
+ void optprobe_template_func(void);
+ STACK_FRAME_NON_STANDARD(optprobe_template_func);
++NOKPROBE_SYMBOL(optprobe_template_func);
++NOKPROBE_SYMBOL(optprobe_template_entry);
++NOKPROBE_SYMBOL(optprobe_template_val);
++NOKPROBE_SYMBOL(optprobe_template_call);
++NOKPROBE_SYMBOL(optprobe_template_end);
+ 
+ #define TMPL_MOVE_IDX \
+ 	((long)optprobe_template_val - (long)optprobe_template_entry)
+diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
+index e811d4d1c824..d908a37bf3f3 100644
+--- a/arch/x86/kernel/kvmclock.c
++++ b/arch/x86/kernel/kvmclock.c
+@@ -104,12 +104,8 @@ static u64 kvm_sched_clock_read(void)
+ 
+ static inline void kvm_sched_clock_init(bool stable)
+ {
+-	if (!stable) {
+-		pv_ops.time.sched_clock = kvm_clock_read;
++	if (!stable)
+ 		clear_sched_clock_stable();
+-		return;
+-	}
+-
+ 	kvm_sched_clock_offset = kvm_clock_read();
+ 	pv_ops.time.sched_clock = kvm_sched_clock_read;
+ 
+diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
+index f2d1d230d5b8..9ab33cab9486 100644
+--- a/arch/x86/kvm/mmu.c
++++ b/arch/x86/kvm/mmu.c
+@@ -5635,13 +5635,8 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
+ {
+ 	struct kvm_memslots *slots;
+ 	struct kvm_memory_slot *memslot;
+-	bool flush_tlb = true;
+-	bool flush = false;
+ 	int i;
+ 
+-	if (kvm_available_flush_tlb_with_range())
+-		flush_tlb = false;
+-
+ 	spin_lock(&kvm->mmu_lock);
+ 	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+ 		slots = __kvm_memslots(kvm, i);
+@@ -5653,17 +5648,12 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
+ 			if (start >= end)
+ 				continue;
+ 
+-			flush |= slot_handle_level_range(kvm, memslot,
+-					kvm_zap_rmapp, PT_PAGE_TABLE_LEVEL,
+-					PT_MAX_HUGEPAGE_LEVEL, start,
+-					end - 1, flush_tlb);
++			slot_handle_level_range(kvm, memslot, kvm_zap_rmapp,
++						PT_PAGE_TABLE_LEVEL, PT_MAX_HUGEPAGE_LEVEL,
++						start, end - 1, true);
+ 		}
+ 	}
+ 
+-	if (flush)
+-		kvm_flush_remote_tlbs_with_address(kvm, gfn_start,
+-				gfn_end - gfn_start + 1);
+-
+ 	spin_unlock(&kvm->mmu_lock);
+ }
+ 
+@@ -5901,13 +5891,30 @@ static bool kvm_has_zapped_obsolete_pages(struct kvm *kvm)
+ 	return unlikely(!list_empty_careful(&kvm->arch.zapped_obsolete_pages));
+ }
+ 
+-void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, struct kvm_memslots *slots)
++void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen)
+ {
++	gen &= MMIO_GEN_MASK;
++
++	/*
++	 * Shift to eliminate the "update in-progress" flag, which isn't
++	 * included in the spte's generation number.
++	 */
++	gen >>= 1;
++
++	/*
++	 * Generation numbers are incremented in multiples of the number of
++	 * address spaces in order to provide unique generations across all
++	 * address spaces.  Strip what is effectively the address space
++	 * modifier prior to checking for a wrap of the MMIO generation so
++	 * that a wrap in any address space is detected.
++	 */
++	gen &= ~((u64)KVM_ADDRESS_SPACE_NUM - 1);
++
+ 	/*
+-	 * The very rare case: if the generation-number is round,
++	 * The very rare case: if the MMIO generation number has wrapped,
+ 	 * zap all shadow pages.
+ 	 */
+-	if (unlikely((slots->generation & MMIO_GEN_MASK) == 0)) {
++	if (unlikely(gen == 0)) {
+ 		kvm_debug_ratelimited("kvm: zapping shadow pages for mmio generation wraparound\n");
+ 		kvm_mmu_invalidate_zap_all_pages(kvm);
+ 	}
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index d737a51a53ca..f014e1aeee96 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -2765,7 +2765,7 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
+ 		"add $%c[wordsize], %%" _ASM_SP "\n\t" /* un-adjust RSP */
+ 
+ 		/* Check if vmlaunch or vmresume is needed */
+-		"cmpl $0, %c[launched](%% " _ASM_CX")\n\t"
++		"cmpb $0, %c[launched](%% " _ASM_CX")\n\t"
+ 
+ 		"call vmx_vmenter\n\t"
+ 
+@@ -4035,25 +4035,50 @@ int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsigned long exit_qualification,
+ 	/* Addr = segment_base + offset */
+ 	/* offset = base + [index * scale] + displacement */
+ 	off = exit_qualification; /* holds the displacement */
++	if (addr_size == 1)
++		off = (gva_t)sign_extend64(off, 31);
++	else if (addr_size == 0)
++		off = (gva_t)sign_extend64(off, 15);
+ 	if (base_is_valid)
+ 		off += kvm_register_read(vcpu, base_reg);
+ 	if (index_is_valid)
+ 		off += kvm_register_read(vcpu, index_reg)<<scaling;
+ 	vmx_get_segment(vcpu, &s, seg_reg);
+-	*ret = s.base + off;
+ 
++	/*
++	 * The effective address, i.e. @off, of a memory operand is truncated
++	 * based on the address size of the instruction.  Note that this is
++	 * the *effective address*, i.e. the address prior to accounting for
++	 * the segment's base.
++	 */
+ 	if (addr_size == 1) /* 32 bit */
+-		*ret &= 0xffffffff;
++		off &= 0xffffffff;
++	else if (addr_size == 0) /* 16 bit */
++		off &= 0xffff;
+ 
+ 	/* Checks for #GP/#SS exceptions. */
+ 	exn = false;
+ 	if (is_long_mode(vcpu)) {
++		/*
++		 * The virtual/linear address is never truncated in 64-bit
++		 * mode, e.g. a 32-bit address size can yield a 64-bit virtual
++		 * address when using FS/GS with a non-zero base.
++		 */
++		*ret = s.base + off;
++
+ 		/* Long mode: #GP(0)/#SS(0) if the memory address is in a
+ 		 * non-canonical form. This is the only check on the memory
+ 		 * destination for long mode!
+ 		 */
+ 		exn = is_noncanonical_address(*ret, vcpu);
+ 	} else if (is_protmode(vcpu)) {
++		/*
++		 * When not in long mode, the virtual/linear address is
++		 * unconditionally truncated to 32 bits regardless of the
++		 * address size.
++		 */
++		*ret = (s.base + off) & 0xffffffff;
++
+ 		/* Protected mode: apply checks for segment validity in the
+ 		 * following order:
+ 		 * - segment type check (#GP(0) may be thrown)
+@@ -4077,10 +4102,16 @@ int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsigned long exit_qualification,
+ 		/* Protected mode: #GP(0)/#SS(0) if the segment is unusable.
+ 		 */
+ 		exn = (s.unusable != 0);
+-		/* Protected mode: #GP(0)/#SS(0) if the memory
+-		 * operand is outside the segment limit.
++
++		/*
++		 * Protected mode: #GP(0)/#SS(0) if the memory operand is
++		 * outside the segment limit.  All CPUs that support VMX ignore
++		 * limit checks for flat segments, i.e. segments with base==0,
++		 * limit==0xffffffff and of type expand-up data or code.
+ 		 */
+-		exn = exn || (off + sizeof(u64) > s.limit);
++		if (!(s.base == 0 && s.limit == 0xffffffff &&
++		     ((s.type & 8) || !(s.type & 4))))
++			exn = exn || (off + sizeof(u64) > s.limit);
+ 	}
+ 	if (exn) {
+ 		kvm_queue_exception_e(vcpu,
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 30a6bcd735ec..d86eee07d327 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -6399,7 +6399,7 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
+ 		"mov %%" _ASM_AX", %%cr2 \n\t"
+ 		"3: \n\t"
+ 		/* Check if vmlaunch or vmresume is needed */
+-		"cmpl $0, %c[launched](%%" _ASM_CX ") \n\t"
++		"cmpb $0, %c[launched](%%" _ASM_CX ") \n\t"
+ 		/* Load guest registers.  Don't clobber flags. */
+ 		"mov %c[rax](%%" _ASM_CX "), %%" _ASM_AX " \n\t"
+ 		"mov %c[rbx](%%" _ASM_CX "), %%" _ASM_BX " \n\t"
+@@ -6449,10 +6449,15 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
+ 		"mov %%r13, %c[r13](%%" _ASM_CX ") \n\t"
+ 		"mov %%r14, %c[r14](%%" _ASM_CX ") \n\t"
+ 		"mov %%r15, %c[r15](%%" _ASM_CX ") \n\t"
++
+ 		/*
+-		* Clear host registers marked as clobbered to prevent
+-		* speculative use.
+-		*/
++		 * Clear all general purpose registers (except RSP, which is loaded by
++		 * the CPU during VM-Exit) to prevent speculative use of the guest's
++		 * values, even those that are saved/loaded via the stack.  In theory,
++		 * an L1 cache miss when restoring registers could lead to speculative
++		 * execution with the guest's values.  Zeroing XORs are dirt cheap,
++		 * i.e. the extra paranoia is essentially free.
++		 */
+ 		"xor %%r8d,  %%r8d \n\t"
+ 		"xor %%r9d,  %%r9d \n\t"
+ 		"xor %%r10d, %%r10d \n\t"
+@@ -6467,8 +6472,11 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
+ 
+ 		"xor %%eax, %%eax \n\t"
+ 		"xor %%ebx, %%ebx \n\t"
++		"xor %%ecx, %%ecx \n\t"
++		"xor %%edx, %%edx \n\t"
+ 		"xor %%esi, %%esi \n\t"
+ 		"xor %%edi, %%edi \n\t"
++		"xor %%ebp, %%ebp \n\t"
+ 		"pop  %%" _ASM_BP "; pop  %%" _ASM_DX " \n\t"
+ 	      : ASM_CALL_CONSTRAINT
+ 	      : "c"(vmx), "d"((unsigned long)HOST_RSP), "S"(evmcs_rsp),
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 941f932373d0..2bcef72a7c40 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -9348,13 +9348,13 @@ out_free:
+ 	return -ENOMEM;
+ }
+ 
+-void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots)
++void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen)
+ {
+ 	/*
+ 	 * memslots->generation has been incremented.
+ 	 * mmio generation may have reached its maximum value.
+ 	 */
+-	kvm_mmu_invalidate_mmio_sptes(kvm, slots);
++	kvm_mmu_invalidate_mmio_sptes(kvm, gen);
+ }
+ 
+ int kvm_arch_prepare_memory_region(struct kvm *kvm,
+diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
+index 224cd0a47568..20ede17202bf 100644
+--- a/arch/x86/kvm/x86.h
++++ b/arch/x86/kvm/x86.h
+@@ -181,6 +181,11 @@ static inline bool emul_is_noncanonical_address(u64 la,
+ static inline void vcpu_cache_mmio_info(struct kvm_vcpu *vcpu,
+ 					gva_t gva, gfn_t gfn, unsigned access)
+ {
++	u64 gen = kvm_memslots(vcpu->kvm)->generation;
++
++	if (unlikely(gen & 1))
++		return;
++
+ 	/*
+ 	 * If this is a shadow nested page table, the "GVA" is
+ 	 * actually a nGPA.
+@@ -188,7 +193,7 @@ static inline void vcpu_cache_mmio_info(struct kvm_vcpu *vcpu,
+ 	vcpu->arch.mmio_gva = mmu_is_nested(vcpu) ? 0 : gva & PAGE_MASK;
+ 	vcpu->arch.access = access;
+ 	vcpu->arch.mmio_gfn = gfn;
+-	vcpu->arch.mmio_gen = kvm_memslots(vcpu->kvm)->generation;
++	vcpu->arch.mmio_gen = gen;
+ }
+ 
+ static inline bool vcpu_match_mmio_gen(struct kvm_vcpu *vcpu)
+diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
+index 0f4fe206dcc2..20701977e6c0 100644
+--- a/arch/x86/xen/mmu_pv.c
++++ b/arch/x86/xen/mmu_pv.c
+@@ -2114,10 +2114,10 @@ void __init xen_relocate_p2m(void)
+ 				pt = early_memremap(pt_phys, PAGE_SIZE);
+ 				clear_page(pt);
+ 				for (idx_pte = 0;
+-						idx_pte < min(n_pte, PTRS_PER_PTE);
+-						idx_pte++) {
+-					set_pte(pt + idx_pte,
+-							pfn_pte(p2m_pfn, PAGE_KERNEL));
++				     idx_pte < min(n_pte, PTRS_PER_PTE);
++				     idx_pte++) {
++					pt[idx_pte] = pfn_pte(p2m_pfn,
++							      PAGE_KERNEL);
+ 					p2m_pfn++;
+ 				}
+ 				n_pte -= PTRS_PER_PTE;
+@@ -2125,8 +2125,7 @@ void __init xen_relocate_p2m(void)
+ 				make_lowmem_page_readonly(__va(pt_phys));
+ 				pin_pagetable_pfn(MMUEXT_PIN_L1_TABLE,
+ 						PFN_DOWN(pt_phys));
+-				set_pmd(pmd + idx_pt,
+-						__pmd(_PAGE_TABLE | pt_phys));
++				pmd[idx_pt] = __pmd(_PAGE_TABLE | pt_phys);
+ 				pt_phys += PAGE_SIZE;
+ 			}
+ 			n_pt -= PTRS_PER_PMD;
+@@ -2134,7 +2133,7 @@ void __init xen_relocate_p2m(void)
+ 			make_lowmem_page_readonly(__va(pmd_phys));
+ 			pin_pagetable_pfn(MMUEXT_PIN_L2_TABLE,
+ 					PFN_DOWN(pmd_phys));
+-			set_pud(pud + idx_pmd, __pud(_PAGE_TABLE | pmd_phys));
++			pud[idx_pmd] = __pud(_PAGE_TABLE | pmd_phys);
+ 			pmd_phys += PAGE_SIZE;
+ 		}
+ 		n_pmd -= PTRS_PER_PUD;
+diff --git a/crypto/aead.c b/crypto/aead.c
+index 189c52d1f63a..4908b5e846f0 100644
+--- a/crypto/aead.c
++++ b/crypto/aead.c
+@@ -61,8 +61,10 @@ int crypto_aead_setkey(struct crypto_aead *tfm,
+ 	else
+ 		err = crypto_aead_alg(tfm)->setkey(tfm, key, keylen);
+ 
+-	if (err)
++	if (unlikely(err)) {
++		crypto_aead_set_flags(tfm, CRYPTO_TFM_NEED_KEY);
+ 		return err;
++	}
+ 
+ 	crypto_aead_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
+ 	return 0;
+diff --git a/crypto/aegis128.c b/crypto/aegis128.c
+index c22f4414856d..789716f92e4c 100644
+--- a/crypto/aegis128.c
++++ b/crypto/aegis128.c
+@@ -290,19 +290,19 @@ static void crypto_aegis128_process_crypt(struct aegis_state *state,
+ 					  const struct aegis128_ops *ops)
+ {
+ 	struct skcipher_walk walk;
+-	u8 *src, *dst;
+-	unsigned int chunksize;
+ 
+ 	ops->skcipher_walk_init(&walk, req, false);
+ 
+ 	while (walk.nbytes) {
+-		src = walk.src.virt.addr;
+-		dst = walk.dst.virt.addr;
+-		chunksize = walk.nbytes;
++		unsigned int nbytes = walk.nbytes;
+ 
+-		ops->crypt_chunk(state, dst, src, chunksize);
++		if (nbytes < walk.total)
++			nbytes = round_down(nbytes, walk.stride);
+ 
+-		skcipher_walk_done(&walk, 0);
++		ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr,
++				 nbytes);
++
++		skcipher_walk_done(&walk, walk.nbytes - nbytes);
+ 	}
+ }
+ 
+diff --git a/crypto/aegis128l.c b/crypto/aegis128l.c
+index b6fb21ebdc3e..73811448cb6b 100644
+--- a/crypto/aegis128l.c
++++ b/crypto/aegis128l.c
+@@ -353,19 +353,19 @@ static void crypto_aegis128l_process_crypt(struct aegis_state *state,
+ 					   const struct aegis128l_ops *ops)
+ {
+ 	struct skcipher_walk walk;
+-	u8 *src, *dst;
+-	unsigned int chunksize;
+ 
+ 	ops->skcipher_walk_init(&walk, req, false);
+ 
+ 	while (walk.nbytes) {
+-		src = walk.src.virt.addr;
+-		dst = walk.dst.virt.addr;
+-		chunksize = walk.nbytes;
++		unsigned int nbytes = walk.nbytes;
+ 
+-		ops->crypt_chunk(state, dst, src, chunksize);
++		if (nbytes < walk.total)
++			nbytes = round_down(nbytes, walk.stride);
+ 
+-		skcipher_walk_done(&walk, 0);
++		ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr,
++				 nbytes);
++
++		skcipher_walk_done(&walk, walk.nbytes - nbytes);
+ 	}
+ }
+ 
+diff --git a/crypto/aegis256.c b/crypto/aegis256.c
+index 11f0f8ec9c7c..8a71e9c06193 100644
+--- a/crypto/aegis256.c
++++ b/crypto/aegis256.c
+@@ -303,19 +303,19 @@ static void crypto_aegis256_process_crypt(struct aegis_state *state,
+ 					  const struct aegis256_ops *ops)
+ {
+ 	struct skcipher_walk walk;
+-	u8 *src, *dst;
+-	unsigned int chunksize;
+ 
+ 	ops->skcipher_walk_init(&walk, req, false);
+ 
+ 	while (walk.nbytes) {
+-		src = walk.src.virt.addr;
+-		dst = walk.dst.virt.addr;
+-		chunksize = walk.nbytes;
++		unsigned int nbytes = walk.nbytes;
+ 
+-		ops->crypt_chunk(state, dst, src, chunksize);
++		if (nbytes < walk.total)
++			nbytes = round_down(nbytes, walk.stride);
+ 
+-		skcipher_walk_done(&walk, 0);
++		ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr,
++				 nbytes);
++
++		skcipher_walk_done(&walk, walk.nbytes - nbytes);
+ 	}
+ }
+ 
+diff --git a/crypto/ahash.c b/crypto/ahash.c
+index 5d320a811f75..81e2767e2164 100644
+--- a/crypto/ahash.c
++++ b/crypto/ahash.c
+@@ -86,17 +86,17 @@ static int hash_walk_new_entry(struct crypto_hash_walk *walk)
+ int crypto_hash_walk_done(struct crypto_hash_walk *walk, int err)
+ {
+ 	unsigned int alignmask = walk->alignmask;
+-	unsigned int nbytes = walk->entrylen;
+ 
+ 	walk->data -= walk->offset;
+ 
+-	if (nbytes && walk->offset & alignmask && !err) {
+-		walk->offset = ALIGN(walk->offset, alignmask + 1);
+-		nbytes = min(nbytes,
+-			     ((unsigned int)(PAGE_SIZE)) - walk->offset);
+-		walk->entrylen -= nbytes;
++	if (walk->entrylen && (walk->offset & alignmask) && !err) {
++		unsigned int nbytes;
+ 
++		walk->offset = ALIGN(walk->offset, alignmask + 1);
++		nbytes = min(walk->entrylen,
++			     (unsigned int)(PAGE_SIZE - walk->offset));
+ 		if (nbytes) {
++			walk->entrylen -= nbytes;
+ 			walk->data += walk->offset;
+ 			return nbytes;
+ 		}
+@@ -116,7 +116,7 @@ int crypto_hash_walk_done(struct crypto_hash_walk *walk, int err)
+ 	if (err)
+ 		return err;
+ 
+-	if (nbytes) {
++	if (walk->entrylen) {
+ 		walk->offset = 0;
+ 		walk->pg++;
+ 		return hash_walk_next(walk);
+@@ -190,6 +190,21 @@ static int ahash_setkey_unaligned(struct crypto_ahash *tfm, const u8 *key,
+ 	return ret;
+ }
+ 
++static int ahash_nosetkey(struct crypto_ahash *tfm, const u8 *key,
++			  unsigned int keylen)
++{
++	return -ENOSYS;
++}
++
++static void ahash_set_needkey(struct crypto_ahash *tfm)
++{
++	const struct hash_alg_common *alg = crypto_hash_alg_common(tfm);
++
++	if (tfm->setkey != ahash_nosetkey &&
++	    !(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
++		crypto_ahash_set_flags(tfm, CRYPTO_TFM_NEED_KEY);
++}
++
+ int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key,
+ 			unsigned int keylen)
+ {
+@@ -201,20 +216,16 @@ int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key,
+ 	else
+ 		err = tfm->setkey(tfm, key, keylen);
+ 
+-	if (err)
++	if (unlikely(err)) {
++		ahash_set_needkey(tfm);
+ 		return err;
++	}
+ 
+ 	crypto_ahash_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(crypto_ahash_setkey);
+ 
+-static int ahash_nosetkey(struct crypto_ahash *tfm, const u8 *key,
+-			  unsigned int keylen)
+-{
+-	return -ENOSYS;
+-}
+-
+ static inline unsigned int ahash_align_buffer_size(unsigned len,
+ 						   unsigned long mask)
+ {
+@@ -489,8 +500,7 @@ static int crypto_ahash_init_tfm(struct crypto_tfm *tfm)
+ 
+ 	if (alg->setkey) {
+ 		hash->setkey = alg->setkey;
+-		if (!(alg->halg.base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
+-			crypto_ahash_set_flags(hash, CRYPTO_TFM_NEED_KEY);
++		ahash_set_needkey(hash);
+ 	}
+ 
+ 	return 0;
+diff --git a/crypto/cfb.c b/crypto/cfb.c
+index e81e45673498..4abfe32ff845 100644
+--- a/crypto/cfb.c
++++ b/crypto/cfb.c
+@@ -77,12 +77,14 @@ static int crypto_cfb_encrypt_segment(struct skcipher_walk *walk,
+ 	do {
+ 		crypto_cfb_encrypt_one(tfm, iv, dst);
+ 		crypto_xor(dst, src, bsize);
+-		memcpy(iv, dst, bsize);
++		iv = dst;
+ 
+ 		src += bsize;
+ 		dst += bsize;
+ 	} while ((nbytes -= bsize) >= bsize);
+ 
++	memcpy(walk->iv, iv, bsize);
++
+ 	return nbytes;
+ }
+ 
+@@ -162,7 +164,7 @@ static int crypto_cfb_decrypt_inplace(struct skcipher_walk *walk,
+ 	const unsigned int bsize = crypto_cfb_bsize(tfm);
+ 	unsigned int nbytes = walk->nbytes;
+ 	u8 *src = walk->src.virt.addr;
+-	u8 *iv = walk->iv;
++	u8 * const iv = walk->iv;
+ 	u8 tmp[MAX_CIPHER_BLOCKSIZE];
+ 
+ 	do {
+@@ -172,8 +174,6 @@ static int crypto_cfb_decrypt_inplace(struct skcipher_walk *walk,
+ 		src += bsize;
+ 	} while ((nbytes -= bsize) >= bsize);
+ 
+-	memcpy(walk->iv, iv, bsize);
+-
+ 	return nbytes;
+ }
+ 
+@@ -298,6 +298,12 @@ static int crypto_cfb_create(struct crypto_template *tmpl, struct rtattr **tb)
+ 	inst->alg.base.cra_blocksize = 1;
+ 	inst->alg.base.cra_alignmask = alg->cra_alignmask;
+ 
++	/*
++	 * To simplify the implementation, configure the skcipher walk to only
++	 * give a partial block at the very end, never earlier.
++	 */
++	inst->alg.chunksize = alg->cra_blocksize;
++
+ 	inst->alg.ivsize = alg->cra_blocksize;
+ 	inst->alg.min_keysize = alg->cra_cipher.cia_min_keysize;
+ 	inst->alg.max_keysize = alg->cra_cipher.cia_max_keysize;
+diff --git a/crypto/morus1280.c b/crypto/morus1280.c
+index 3889c188f266..b83576b4eb55 100644
+--- a/crypto/morus1280.c
++++ b/crypto/morus1280.c
+@@ -366,18 +366,19 @@ static void crypto_morus1280_process_crypt(struct morus1280_state *state,
+ 					   const struct morus1280_ops *ops)
+ {
+ 	struct skcipher_walk walk;
+-	u8 *dst;
+-	const u8 *src;
+ 
+ 	ops->skcipher_walk_init(&walk, req, false);
+ 
+ 	while (walk.nbytes) {
+-		src = walk.src.virt.addr;
+-		dst = walk.dst.virt.addr;
++		unsigned int nbytes = walk.nbytes;
+ 
+-		ops->crypt_chunk(state, dst, src, walk.nbytes);
++		if (nbytes < walk.total)
++			nbytes = round_down(nbytes, walk.stride);
+ 
+-		skcipher_walk_done(&walk, 0);
++		ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr,
++				 nbytes);
++
++		skcipher_walk_done(&walk, walk.nbytes - nbytes);
+ 	}
+ }
+ 
+diff --git a/crypto/morus640.c b/crypto/morus640.c
+index da06ec2f6a80..b6a477444f6d 100644
+--- a/crypto/morus640.c
++++ b/crypto/morus640.c
+@@ -365,18 +365,19 @@ static void crypto_morus640_process_crypt(struct morus640_state *state,
+ 					  const struct morus640_ops *ops)
+ {
+ 	struct skcipher_walk walk;
+-	u8 *dst;
+-	const u8 *src;
+ 
+ 	ops->skcipher_walk_init(&walk, req, false);
+ 
+ 	while (walk.nbytes) {
+-		src = walk.src.virt.addr;
+-		dst = walk.dst.virt.addr;
++		unsigned int nbytes = walk.nbytes;
+ 
+-		ops->crypt_chunk(state, dst, src, walk.nbytes);
++		if (nbytes < walk.total)
++			nbytes = round_down(nbytes, walk.stride);
+ 
+-		skcipher_walk_done(&walk, 0);
++		ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr,
++				 nbytes);
++
++		skcipher_walk_done(&walk, walk.nbytes - nbytes);
+ 	}
+ }
+ 
+diff --git a/crypto/ofb.c b/crypto/ofb.c
+index 886631708c5e..cab0b80953fe 100644
+--- a/crypto/ofb.c
++++ b/crypto/ofb.c
+@@ -5,9 +5,6 @@
+  *
+  * Copyright (C) 2018 ARM Limited or its affiliates.
+  * All rights reserved.
+- *
+- * Based loosely on public domain code gleaned from libtomcrypt
+- * (https://github.com/libtom/libtomcrypt).
+  */
+ 
+ #include <crypto/algapi.h>
+@@ -21,7 +18,6 @@
+ 
+ struct crypto_ofb_ctx {
+ 	struct crypto_cipher *child;
+-	int cnt;
+ };
+ 
+ 
+@@ -41,58 +37,40 @@ static int crypto_ofb_setkey(struct crypto_skcipher *parent, const u8 *key,
+ 	return err;
+ }
+ 
+-static int crypto_ofb_encrypt_segment(struct crypto_ofb_ctx *ctx,
+-				      struct skcipher_walk *walk,
+-				      struct crypto_cipher *tfm)
++static int crypto_ofb_crypt(struct skcipher_request *req)
+ {
+-	int bsize = crypto_cipher_blocksize(tfm);
+-	int nbytes = walk->nbytes;
+-
+-	u8 *src = walk->src.virt.addr;
+-	u8 *dst = walk->dst.virt.addr;
+-	u8 *iv = walk->iv;
+-
+-	do {
+-		if (ctx->cnt == bsize) {
+-			if (nbytes < bsize)
+-				break;
+-			crypto_cipher_encrypt_one(tfm, iv, iv);
+-			ctx->cnt = 0;
+-		}
+-		*dst = *src ^ iv[ctx->cnt];
+-		src++;
+-		dst++;
+-		ctx->cnt++;
+-	} while (--nbytes);
+-	return nbytes;
+-}
+-
+-static int crypto_ofb_encrypt(struct skcipher_request *req)
+-{
+-	struct skcipher_walk walk;
+ 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+-	unsigned int bsize;
+ 	struct crypto_ofb_ctx *ctx = crypto_skcipher_ctx(tfm);
+-	struct crypto_cipher *child = ctx->child;
+-	int ret = 0;
++	struct crypto_cipher *cipher = ctx->child;
++	const unsigned int bsize = crypto_cipher_blocksize(cipher);
++	struct skcipher_walk walk;
++	int err;
+ 
+-	bsize =  crypto_cipher_blocksize(child);
+-	ctx->cnt = bsize;
++	err = skcipher_walk_virt(&walk, req, false);
+ 
+-	ret = skcipher_walk_virt(&walk, req, false);
++	while (walk.nbytes >= bsize) {
++		const u8 *src = walk.src.virt.addr;
++		u8 *dst = walk.dst.virt.addr;
++		u8 * const iv = walk.iv;
++		unsigned int nbytes = walk.nbytes;
+ 
+-	while (walk.nbytes) {
+-		ret = crypto_ofb_encrypt_segment(ctx, &walk, child);
+-		ret = skcipher_walk_done(&walk, ret);
+-	}
++		do {
++			crypto_cipher_encrypt_one(cipher, iv, iv);
++			crypto_xor_cpy(dst, src, iv, bsize);
++			dst += bsize;
++			src += bsize;
++		} while ((nbytes -= bsize) >= bsize);
+ 
+-	return ret;
+-}
++		err = skcipher_walk_done(&walk, nbytes);
++	}
+ 
+-/* OFB encrypt and decrypt are identical */
+-static int crypto_ofb_decrypt(struct skcipher_request *req)
+-{
+-	return crypto_ofb_encrypt(req);
++	if (walk.nbytes) {
++		crypto_cipher_encrypt_one(cipher, walk.iv, walk.iv);
++		crypto_xor_cpy(walk.dst.virt.addr, walk.src.virt.addr, walk.iv,
++			       walk.nbytes);
++		err = skcipher_walk_done(&walk, 0);
++	}
++	return err;
+ }
+ 
+ static int crypto_ofb_init_tfm(struct crypto_skcipher *tfm)
+@@ -165,13 +143,18 @@ static int crypto_ofb_create(struct crypto_template *tmpl, struct rtattr **tb)
+ 	if (err)
+ 		goto err_drop_spawn;
+ 
++	/* OFB mode is a stream cipher. */
++	inst->alg.base.cra_blocksize = 1;
++
++	/*
++	 * To simplify the implementation, configure the skcipher walk to only
++	 * give a partial block at the very end, never earlier.
++	 */
++	inst->alg.chunksize = alg->cra_blocksize;
++
+ 	inst->alg.base.cra_priority = alg->cra_priority;
+-	inst->alg.base.cra_blocksize = alg->cra_blocksize;
+ 	inst->alg.base.cra_alignmask = alg->cra_alignmask;
+ 
+-	/* We access the data as u32s when xoring. */
+-	inst->alg.base.cra_alignmask |= __alignof__(u32) - 1;
+-
+ 	inst->alg.ivsize = alg->cra_blocksize;
+ 	inst->alg.min_keysize = alg->cra_cipher.cia_min_keysize;
+ 	inst->alg.max_keysize = alg->cra_cipher.cia_max_keysize;
+@@ -182,8 +165,8 @@ static int crypto_ofb_create(struct crypto_template *tmpl, struct rtattr **tb)
+ 	inst->alg.exit = crypto_ofb_exit_tfm;
+ 
+ 	inst->alg.setkey = crypto_ofb_setkey;
+-	inst->alg.encrypt = crypto_ofb_encrypt;
+-	inst->alg.decrypt = crypto_ofb_decrypt;
++	inst->alg.encrypt = crypto_ofb_crypt;
++	inst->alg.decrypt = crypto_ofb_crypt;
+ 
+ 	inst->free = crypto_ofb_free;
+ 
+diff --git a/crypto/pcbc.c b/crypto/pcbc.c
+index 8aa10144407c..1b182dfedc94 100644
+--- a/crypto/pcbc.c
++++ b/crypto/pcbc.c
+@@ -51,7 +51,7 @@ static int crypto_pcbc_encrypt_segment(struct skcipher_request *req,
+ 	unsigned int nbytes = walk->nbytes;
+ 	u8 *src = walk->src.virt.addr;
+ 	u8 *dst = walk->dst.virt.addr;
+-	u8 *iv = walk->iv;
++	u8 * const iv = walk->iv;
+ 
+ 	do {
+ 		crypto_xor(iv, src, bsize);
+@@ -72,7 +72,7 @@ static int crypto_pcbc_encrypt_inplace(struct skcipher_request *req,
+ 	int bsize = crypto_cipher_blocksize(tfm);
+ 	unsigned int nbytes = walk->nbytes;
+ 	u8 *src = walk->src.virt.addr;
+-	u8 *iv = walk->iv;
++	u8 * const iv = walk->iv;
+ 	u8 tmpbuf[MAX_CIPHER_BLOCKSIZE];
+ 
+ 	do {
+@@ -84,8 +84,6 @@ static int crypto_pcbc_encrypt_inplace(struct skcipher_request *req,
+ 		src += bsize;
+ 	} while ((nbytes -= bsize) >= bsize);
+ 
+-	memcpy(walk->iv, iv, bsize);
+-
+ 	return nbytes;
+ }
+ 
+@@ -121,7 +119,7 @@ static int crypto_pcbc_decrypt_segment(struct skcipher_request *req,
+ 	unsigned int nbytes = walk->nbytes;
+ 	u8 *src = walk->src.virt.addr;
+ 	u8 *dst = walk->dst.virt.addr;
+-	u8 *iv = walk->iv;
++	u8 * const iv = walk->iv;
+ 
+ 	do {
+ 		crypto_cipher_decrypt_one(tfm, dst, src);
+@@ -132,8 +130,6 @@ static int crypto_pcbc_decrypt_segment(struct skcipher_request *req,
+ 		dst += bsize;
+ 	} while ((nbytes -= bsize) >= bsize);
+ 
+-	memcpy(walk->iv, iv, bsize);
+-
+ 	return nbytes;
+ }
+ 
+@@ -144,7 +140,7 @@ static int crypto_pcbc_decrypt_inplace(struct skcipher_request *req,
+ 	int bsize = crypto_cipher_blocksize(tfm);
+ 	unsigned int nbytes = walk->nbytes;
+ 	u8 *src = walk->src.virt.addr;
+-	u8 *iv = walk->iv;
++	u8 * const iv = walk->iv;
+ 	u8 tmpbuf[MAX_CIPHER_BLOCKSIZE] __aligned(__alignof__(u32));
+ 
+ 	do {
+@@ -156,8 +152,6 @@ static int crypto_pcbc_decrypt_inplace(struct skcipher_request *req,
+ 		src += bsize;
+ 	} while ((nbytes -= bsize) >= bsize);
+ 
+-	memcpy(walk->iv, iv, bsize);
+-
+ 	return nbytes;
+ }
+ 
+diff --git a/crypto/shash.c b/crypto/shash.c
+index 44d297b82a8f..40311ccad3fa 100644
+--- a/crypto/shash.c
++++ b/crypto/shash.c
+@@ -53,6 +53,13 @@ static int shash_setkey_unaligned(struct crypto_shash *tfm, const u8 *key,
+ 	return err;
+ }
+ 
++static void shash_set_needkey(struct crypto_shash *tfm, struct shash_alg *alg)
++{
++	if (crypto_shash_alg_has_setkey(alg) &&
++	    !(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
++		crypto_shash_set_flags(tfm, CRYPTO_TFM_NEED_KEY);
++}
++
+ int crypto_shash_setkey(struct crypto_shash *tfm, const u8 *key,
+ 			unsigned int keylen)
+ {
+@@ -65,8 +72,10 @@ int crypto_shash_setkey(struct crypto_shash *tfm, const u8 *key,
+ 	else
+ 		err = shash->setkey(tfm, key, keylen);
+ 
+-	if (err)
++	if (unlikely(err)) {
++		shash_set_needkey(tfm, shash);
+ 		return err;
++	}
+ 
+ 	crypto_shash_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
+ 	return 0;
+@@ -373,7 +382,8 @@ int crypto_init_shash_ops_async(struct crypto_tfm *tfm)
+ 	crt->final = shash_async_final;
+ 	crt->finup = shash_async_finup;
+ 	crt->digest = shash_async_digest;
+-	crt->setkey = shash_async_setkey;
++	if (crypto_shash_alg_has_setkey(alg))
++		crt->setkey = shash_async_setkey;
+ 
+ 	crypto_ahash_set_flags(crt, crypto_shash_get_flags(shash) &
+ 				    CRYPTO_TFM_NEED_KEY);
+@@ -395,9 +405,7 @@ static int crypto_shash_init_tfm(struct crypto_tfm *tfm)
+ 
+ 	hash->descsize = alg->descsize;
+ 
+-	if (crypto_shash_alg_has_setkey(alg) &&
+-	    !(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
+-		crypto_shash_set_flags(hash, CRYPTO_TFM_NEED_KEY);
++	shash_set_needkey(hash, alg);
+ 
+ 	return 0;
+ }
+diff --git a/crypto/skcipher.c b/crypto/skcipher.c
+index 2a969296bc24..de09ff60991e 100644
+--- a/crypto/skcipher.c
++++ b/crypto/skcipher.c
+@@ -585,6 +585,12 @@ static unsigned int crypto_skcipher_extsize(struct crypto_alg *alg)
+ 	return crypto_alg_extsize(alg);
+ }
+ 
++static void skcipher_set_needkey(struct crypto_skcipher *tfm)
++{
++	if (tfm->keysize)
++		crypto_skcipher_set_flags(tfm, CRYPTO_TFM_NEED_KEY);
++}
++
+ static int skcipher_setkey_blkcipher(struct crypto_skcipher *tfm,
+ 				     const u8 *key, unsigned int keylen)
+ {
+@@ -598,8 +604,10 @@ static int skcipher_setkey_blkcipher(struct crypto_skcipher *tfm,
+ 	err = crypto_blkcipher_setkey(blkcipher, key, keylen);
+ 	crypto_skcipher_set_flags(tfm, crypto_blkcipher_get_flags(blkcipher) &
+ 				       CRYPTO_TFM_RES_MASK);
+-	if (err)
++	if (unlikely(err)) {
++		skcipher_set_needkey(tfm);
+ 		return err;
++	}
+ 
+ 	crypto_skcipher_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
+ 	return 0;
+@@ -677,8 +685,7 @@ static int crypto_init_skcipher_ops_blkcipher(struct crypto_tfm *tfm)
+ 	skcipher->ivsize = crypto_blkcipher_ivsize(blkcipher);
+ 	skcipher->keysize = calg->cra_blkcipher.max_keysize;
+ 
+-	if (skcipher->keysize)
+-		crypto_skcipher_set_flags(skcipher, CRYPTO_TFM_NEED_KEY);
++	skcipher_set_needkey(skcipher);
+ 
+ 	return 0;
+ }
+@@ -698,8 +705,10 @@ static int skcipher_setkey_ablkcipher(struct crypto_skcipher *tfm,
+ 	crypto_skcipher_set_flags(tfm,
+ 				  crypto_ablkcipher_get_flags(ablkcipher) &
+ 				  CRYPTO_TFM_RES_MASK);
+-	if (err)
++	if (unlikely(err)) {
++		skcipher_set_needkey(tfm);
+ 		return err;
++	}
+ 
+ 	crypto_skcipher_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
+ 	return 0;
+@@ -776,8 +785,7 @@ static int crypto_init_skcipher_ops_ablkcipher(struct crypto_tfm *tfm)
+ 			    sizeof(struct ablkcipher_request);
+ 	skcipher->keysize = calg->cra_ablkcipher.max_keysize;
+ 
+-	if (skcipher->keysize)
+-		crypto_skcipher_set_flags(skcipher, CRYPTO_TFM_NEED_KEY);
++	skcipher_set_needkey(skcipher);
+ 
+ 	return 0;
+ }
+@@ -820,8 +828,10 @@ static int skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key,
+ 	else
+ 		err = cipher->setkey(tfm, key, keylen);
+ 
+-	if (err)
++	if (unlikely(err)) {
++		skcipher_set_needkey(tfm);
+ 		return err;
++	}
+ 
+ 	crypto_skcipher_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
+ 	return 0;
+@@ -852,8 +862,7 @@ static int crypto_skcipher_init_tfm(struct crypto_tfm *tfm)
+ 	skcipher->ivsize = alg->ivsize;
+ 	skcipher->keysize = alg->max_keysize;
+ 
+-	if (skcipher->keysize)
+-		crypto_skcipher_set_flags(skcipher, CRYPTO_TFM_NEED_KEY);
++	skcipher_set_needkey(skcipher);
+ 
+ 	if (alg->exit)
+ 		skcipher->base.exit = crypto_skcipher_exit_tfm;
+diff --git a/crypto/testmgr.c b/crypto/testmgr.c
+index 0f684a414acb..b8e4a3ccbfe0 100644
+--- a/crypto/testmgr.c
++++ b/crypto/testmgr.c
+@@ -1894,14 +1894,21 @@ static int alg_test_crc32c(const struct alg_test_desc *desc,
+ 
+ 	err = alg_test_hash(desc, driver, type, mask);
+ 	if (err)
+-		goto out;
++		return err;
+ 
+ 	tfm = crypto_alloc_shash(driver, type, mask);
+ 	if (IS_ERR(tfm)) {
++		if (PTR_ERR(tfm) == -ENOENT) {
++			/*
++			 * This crc32c implementation is only available through
++			 * ahash API, not the shash API, so the remaining part
++			 * of the test is not applicable to it.
++			 */
++			return 0;
++		}
+ 		printk(KERN_ERR "alg: crc32c: Failed to load transform for %s: "
+ 		       "%ld\n", driver, PTR_ERR(tfm));
+-		err = PTR_ERR(tfm);
+-		goto out;
++		return PTR_ERR(tfm);
+ 	}
+ 
+ 	do {
+@@ -1928,7 +1935,6 @@ static int alg_test_crc32c(const struct alg_test_desc *desc,
+ 
+ 	crypto_free_shash(tfm);
+ 
+-out:
+ 	return err;
+ }
+ 
+diff --git a/crypto/testmgr.h b/crypto/testmgr.h
+index e8f47d7b92cd..ca8e8ebef309 100644
+--- a/crypto/testmgr.h
++++ b/crypto/testmgr.h
+@@ -12870,6 +12870,31 @@ static const struct cipher_testvec aes_cfb_tv_template[] = {
+ 			  "\x75\xa3\x85\x74\x1a\xb9\xce\xf8"
+ 			  "\x20\x31\x62\x3d\x55\xb1\xe4\x71",
+ 		.len	= 64,
++		.also_non_np = 1,
++		.np	= 2,
++		.tap	= { 31, 33 },
++	}, { /* > 16 bytes, not a multiple of 16 bytes */
++		.key	= "\x2b\x7e\x15\x16\x28\xae\xd2\xa6"
++			  "\xab\xf7\x15\x88\x09\xcf\x4f\x3c",
++		.klen	= 16,
++		.iv	= "\x00\x01\x02\x03\x04\x05\x06\x07"
++			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
++		.ptext	= "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96"
++			  "\xe9\x3d\x7e\x11\x73\x93\x17\x2a"
++			  "\xae",
++		.ctext	= "\x3b\x3f\xd9\x2e\xb7\x2d\xad\x20"
++			  "\x33\x34\x49\xf8\xe8\x3c\xfb\x4a"
++			  "\xc8",
++		.len	= 17,
++	}, { /* < 16 bytes */
++		.key	= "\x2b\x7e\x15\x16\x28\xae\xd2\xa6"
++			  "\xab\xf7\x15\x88\x09\xcf\x4f\x3c",
++		.klen	= 16,
++		.iv	= "\x00\x01\x02\x03\x04\x05\x06\x07"
++			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
++		.ptext	= "\x6b\xc1\xbe\xe2\x2e\x40\x9f",
++		.ctext	= "\x3b\x3f\xd9\x2e\xb7\x2d\xad",
++		.len	= 7,
+ 	},
+ };
+ 
+@@ -16656,8 +16681,7 @@ static const struct cipher_testvec aes_ctr_rfc3686_tv_template[] = {
+ };
+ 
+ static const struct cipher_testvec aes_ofb_tv_template[] = {
+-	 /* From NIST Special Publication 800-38A, Appendix F.5 */
+-	{
++	{ /* From NIST Special Publication 800-38A, Appendix F.5 */
+ 		.key	= "\x2b\x7e\x15\x16\x28\xae\xd2\xa6"
+ 			  "\xab\xf7\x15\x88\x09\xcf\x4f\x3c",
+ 		.klen	= 16,
+@@ -16680,6 +16704,31 @@ static const struct cipher_testvec aes_ofb_tv_template[] = {
+ 			  "\x30\x4c\x65\x28\xf6\x59\xc7\x78"
+ 			  "\x66\xa5\x10\xd9\xc1\xd6\xae\x5e",
+ 		.len	= 64,
++		.also_non_np = 1,
++		.np	= 2,
++		.tap	= { 31, 33 },
++	}, { /* > 16 bytes, not a multiple of 16 bytes */
++		.key	= "\x2b\x7e\x15\x16\x28\xae\xd2\xa6"
++			  "\xab\xf7\x15\x88\x09\xcf\x4f\x3c",
++		.klen	= 16,
++		.iv	= "\x00\x01\x02\x03\x04\x05\x06\x07"
++			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
++		.ptext	= "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96"
++			  "\xe9\x3d\x7e\x11\x73\x93\x17\x2a"
++			  "\xae",
++		.ctext	= "\x3b\x3f\xd9\x2e\xb7\x2d\xad\x20"
++			  "\x33\x34\x49\xf8\xe8\x3c\xfb\x4a"
++			  "\x77",
++		.len	= 17,
++	}, { /* < 16 bytes */
++		.key	= "\x2b\x7e\x15\x16\x28\xae\xd2\xa6"
++			  "\xab\xf7\x15\x88\x09\xcf\x4f\x3c",
++		.klen	= 16,
++		.iv	= "\x00\x01\x02\x03\x04\x05\x06\x07"
++			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
++		.ptext	= "\x6b\xc1\xbe\xe2\x2e\x40\x9f",
++		.ctext	= "\x3b\x3f\xd9\x2e\xb7\x2d\xad",
++		.len	= 7,
+ 	}
+ };
+ 
+diff --git a/drivers/acpi/device_sysfs.c b/drivers/acpi/device_sysfs.c
+index 545e91420cde..8940054d6250 100644
+--- a/drivers/acpi/device_sysfs.c
++++ b/drivers/acpi/device_sysfs.c
+@@ -202,11 +202,15 @@ static int create_of_modalias(struct acpi_device *acpi_dev, char *modalias,
+ {
+ 	struct acpi_buffer buf = { ACPI_ALLOCATE_BUFFER };
+ 	const union acpi_object *of_compatible, *obj;
++	acpi_status status;
+ 	int len, count;
+ 	int i, nval;
+ 	char *c;
+ 
+-	acpi_get_name(acpi_dev->handle, ACPI_SINGLE_NAME, &buf);
++	status = acpi_get_name(acpi_dev->handle, ACPI_SINGLE_NAME, &buf);
++	if (ACPI_FAILURE(status))
++		return -ENODEV;
++
+ 	/* DT strings are all in lower case */
+ 	for (c = buf.pointer; *c != '\0'; c++)
+ 		*c = tolower(*c);
+diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
+index e18ade5d74e9..f75f8f870ce3 100644
+--- a/drivers/acpi/nfit/core.c
++++ b/drivers/acpi/nfit/core.c
+@@ -415,7 +415,7 @@ static int cmd_to_func(struct nfit_mem *nfit_mem, unsigned int cmd,
+ 	if (call_pkg) {
+ 		int i;
+ 
+-		if (nfit_mem->family != call_pkg->nd_family)
++		if (nfit_mem && nfit_mem->family != call_pkg->nd_family)
+ 			return -ENOTTY;
+ 
+ 		for (i = 0; i < ARRAY_SIZE(call_pkg->nd_reserved2); i++)
+@@ -424,6 +424,10 @@ static int cmd_to_func(struct nfit_mem *nfit_mem, unsigned int cmd,
+ 		return call_pkg->nd_command;
+ 	}
+ 
++	/* In the !call_pkg case, bus commands == bus functions */
++	if (!nfit_mem)
++		return cmd;
++
+ 	/* Linux ND commands == NVDIMM_FAMILY_INTEL function numbers */
+ 	if (nfit_mem->family == NVDIMM_FAMILY_INTEL)
+ 		return cmd;
+@@ -454,17 +458,18 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,
+ 	if (cmd_rc)
+ 		*cmd_rc = -EINVAL;
+ 
++	if (cmd == ND_CMD_CALL)
++		call_pkg = buf;
++	func = cmd_to_func(nfit_mem, cmd, call_pkg);
++	if (func < 0)
++		return func;
++
+ 	if (nvdimm) {
+ 		struct acpi_device *adev = nfit_mem->adev;
+ 
+ 		if (!adev)
+ 			return -ENOTTY;
+ 
+-		if (cmd == ND_CMD_CALL)
+-			call_pkg = buf;
+-		func = cmd_to_func(nfit_mem, cmd, call_pkg);
+-		if (func < 0)
+-			return func;
+ 		dimm_name = nvdimm_name(nvdimm);
+ 		cmd_name = nvdimm_cmd_name(cmd);
+ 		cmd_mask = nvdimm_cmd_mask(nvdimm);
+@@ -475,12 +480,9 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,
+ 	} else {
+ 		struct acpi_device *adev = to_acpi_dev(acpi_desc);
+ 
+-		func = cmd;
+ 		cmd_name = nvdimm_bus_cmd_name(cmd);
+ 		cmd_mask = nd_desc->cmd_mask;
+-		dsm_mask = cmd_mask;
+-		if (cmd == ND_CMD_CALL)
+-			dsm_mask = nd_desc->bus_dsm_mask;
++		dsm_mask = nd_desc->bus_dsm_mask;
+ 		desc = nd_cmd_bus_desc(cmd);
+ 		guid = to_nfit_uuid(NFIT_DEV_BUS);
+ 		handle = adev->handle;
+@@ -554,6 +556,13 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,
+ 		return -EINVAL;
+ 	}
+ 
++	if (out_obj->type != ACPI_TYPE_BUFFER) {
++		dev_dbg(dev, "%s unexpected output object type cmd: %s type: %d\n",
++				dimm_name, cmd_name, out_obj->type);
++		rc = -EINVAL;
++		goto out;
++	}
++
+ 	if (call_pkg) {
+ 		call_pkg->nd_fw_size = out_obj->buffer.length;
+ 		memcpy(call_pkg->nd_payload + call_pkg->nd_size_in,
+@@ -572,13 +581,6 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,
+ 		return 0;
+ 	}
+ 
+-	if (out_obj->package.type != ACPI_TYPE_BUFFER) {
+-		dev_dbg(dev, "%s unexpected output object type cmd: %s type: %d\n",
+-				dimm_name, cmd_name, out_obj->type);
+-		rc = -EINVAL;
+-		goto out;
+-	}
+-
+ 	dev_dbg(dev, "%s cmd: %s output length: %d\n", dimm_name,
+ 			cmd_name, out_obj->buffer.length);
+ 	print_hex_dump_debug(cmd_name, DUMP_PREFIX_OFFSET, 4, 4,
+@@ -1759,14 +1761,14 @@ static bool acpi_nvdimm_has_method(struct acpi_device *adev, char *method)
+ 
+ __weak void nfit_intel_shutdown_status(struct nfit_mem *nfit_mem)
+ {
++	struct device *dev = &nfit_mem->adev->dev;
+ 	struct nd_intel_smart smart = { 0 };
+ 	union acpi_object in_buf = {
+-		.type = ACPI_TYPE_BUFFER,
+-		.buffer.pointer = (char *) &smart,
+-		.buffer.length = sizeof(smart),
++		.buffer.type = ACPI_TYPE_BUFFER,
++		.buffer.length = 0,
+ 	};
+ 	union acpi_object in_obj = {
+-		.type = ACPI_TYPE_PACKAGE,
++		.package.type = ACPI_TYPE_PACKAGE,
+ 		.package.count = 1,
+ 		.package.elements = &in_buf,
+ 	};
+@@ -1781,8 +1783,15 @@ __weak void nfit_intel_shutdown_status(struct nfit_mem *nfit_mem)
+ 		return;
+ 
+ 	out_obj = acpi_evaluate_dsm(handle, guid, revid, func, &in_obj);
+-	if (!out_obj)
++	if (!out_obj || out_obj->type != ACPI_TYPE_BUFFER
++			|| out_obj->buffer.length < sizeof(smart)) {
++		dev_dbg(dev->parent, "%s: failed to retrieve initial health\n",
++				dev_name(dev));
++		ACPI_FREE(out_obj);
+ 		return;
++	}
++	memcpy(&smart, out_obj->buffer.pointer, sizeof(smart));
++	ACPI_FREE(out_obj);
+ 
+ 	if (smart.flags & ND_INTEL_SMART_SHUTDOWN_VALID) {
+ 		if (smart.shutdown_state)
+@@ -1793,7 +1802,6 @@ __weak void nfit_intel_shutdown_status(struct nfit_mem *nfit_mem)
+ 		set_bit(NFIT_MEM_DIRTY_COUNT, &nfit_mem->flags);
+ 		nfit_mem->dirty_shutdown = smart.shutdown_count;
+ 	}
+-	ACPI_FREE(out_obj);
+ }
+ 
+ static void populate_shutdown_status(struct nfit_mem *nfit_mem)
+@@ -1915,18 +1923,19 @@ static int acpi_nfit_add_dimm(struct acpi_nfit_desc *acpi_desc,
+ 		| 1 << ND_CMD_SET_CONFIG_DATA;
+ 	if (family == NVDIMM_FAMILY_INTEL
+ 			&& (dsm_mask & label_mask) == label_mask)
+-		return 0;
+-
+-	if (acpi_nvdimm_has_method(adev_dimm, "_LSI")
+-			&& acpi_nvdimm_has_method(adev_dimm, "_LSR")) {
+-		dev_dbg(dev, "%s: has _LSR\n", dev_name(&adev_dimm->dev));
+-		set_bit(NFIT_MEM_LSR, &nfit_mem->flags);
+-	}
++		/* skip _LS{I,R,W} enabling */;
++	else {
++		if (acpi_nvdimm_has_method(adev_dimm, "_LSI")
++				&& acpi_nvdimm_has_method(adev_dimm, "_LSR")) {
++			dev_dbg(dev, "%s: has _LSR\n", dev_name(&adev_dimm->dev));
++			set_bit(NFIT_MEM_LSR, &nfit_mem->flags);
++		}
+ 
+-	if (test_bit(NFIT_MEM_LSR, &nfit_mem->flags)
+-			&& acpi_nvdimm_has_method(adev_dimm, "_LSW")) {
+-		dev_dbg(dev, "%s: has _LSW\n", dev_name(&adev_dimm->dev));
+-		set_bit(NFIT_MEM_LSW, &nfit_mem->flags);
++		if (test_bit(NFIT_MEM_LSR, &nfit_mem->flags)
++				&& acpi_nvdimm_has_method(adev_dimm, "_LSW")) {
++			dev_dbg(dev, "%s: has _LSW\n", dev_name(&adev_dimm->dev));
++			set_bit(NFIT_MEM_LSW, &nfit_mem->flags);
++		}
+ 	}
+ 
+ 	populate_shutdown_status(nfit_mem);
+@@ -3004,14 +3013,16 @@ static int ars_register(struct acpi_nfit_desc *acpi_desc,
+ {
+ 	int rc;
+ 
+-	if (no_init_ars || test_bit(ARS_FAILED, &nfit_spa->ars_state))
++	if (test_bit(ARS_FAILED, &nfit_spa->ars_state))
+ 		return acpi_nfit_register_region(acpi_desc, nfit_spa);
+ 
+ 	set_bit(ARS_REQ_SHORT, &nfit_spa->ars_state);
+-	set_bit(ARS_REQ_LONG, &nfit_spa->ars_state);
++	if (!no_init_ars)
++		set_bit(ARS_REQ_LONG, &nfit_spa->ars_state);
+ 
+ 	switch (acpi_nfit_query_poison(acpi_desc)) {
+ 	case 0:
++	case -ENOSPC:
+ 	case -EAGAIN:
+ 		rc = ars_start(acpi_desc, nfit_spa, ARS_REQ_SHORT);
+ 		/* shouldn't happen, try again later */
+@@ -3036,7 +3047,6 @@ static int ars_register(struct acpi_nfit_desc *acpi_desc,
+ 		break;
+ 	case -EBUSY:
+ 	case -ENOMEM:
+-	case -ENOSPC:
+ 		/*
+ 		 * BIOS was using ARS, wait for it to complete (or
+ 		 * resources to become available) and then perform our
+diff --git a/drivers/base/power/wakeup.c b/drivers/base/power/wakeup.c
+index 5fa1898755a3..7c84f64c74f7 100644
+--- a/drivers/base/power/wakeup.c
++++ b/drivers/base/power/wakeup.c
+@@ -118,7 +118,6 @@ void wakeup_source_drop(struct wakeup_source *ws)
+ 	if (!ws)
+ 		return;
+ 
+-	del_timer_sync(&ws->timer);
+ 	__pm_relax(ws);
+ }
+ EXPORT_SYMBOL_GPL(wakeup_source_drop);
+@@ -205,6 +204,13 @@ void wakeup_source_remove(struct wakeup_source *ws)
+ 	list_del_rcu(&ws->entry);
+ 	raw_spin_unlock_irqrestore(&events_lock, flags);
+ 	synchronize_srcu(&wakeup_srcu);
++
++	del_timer_sync(&ws->timer);
++	/*
++	 * Clear timer.function to make wakeup_source_not_registered() treat
++	 * this wakeup source as not registered.
++	 */
++	ws->timer.function = NULL;
+ }
+ EXPORT_SYMBOL_GPL(wakeup_source_remove);
+ 
+diff --git a/drivers/char/ipmi/ipmi_si.h b/drivers/char/ipmi/ipmi_si.h
+index 52f6152d1fcb..7ae52c17618e 100644
+--- a/drivers/char/ipmi/ipmi_si.h
++++ b/drivers/char/ipmi/ipmi_si.h
+@@ -25,7 +25,9 @@ void ipmi_irq_finish_setup(struct si_sm_io *io);
+ int ipmi_si_remove_by_dev(struct device *dev);
+ void ipmi_si_remove_by_data(int addr_space, enum si_type si_type,
+ 			    unsigned long addr);
+-int ipmi_si_hardcode_find_bmc(void);
++void ipmi_hardcode_init(void);
++void ipmi_si_hardcode_exit(void);
++int ipmi_si_hardcode_match(int addr_type, unsigned long addr);
+ void ipmi_si_platform_init(void);
+ void ipmi_si_platform_shutdown(void);
+ 
+diff --git a/drivers/char/ipmi/ipmi_si_hardcode.c b/drivers/char/ipmi/ipmi_si_hardcode.c
+index 487642809c58..1e5783961b0d 100644
+--- a/drivers/char/ipmi/ipmi_si_hardcode.c
++++ b/drivers/char/ipmi/ipmi_si_hardcode.c
+@@ -3,6 +3,7 @@
+ #define pr_fmt(fmt) "ipmi_hardcode: " fmt
+ 
+ #include <linux/moduleparam.h>
++#include <linux/platform_device.h>
+ #include "ipmi_si.h"
+ 
+ /*
+@@ -12,23 +13,22 @@
+ 
+ #define SI_MAX_PARMS 4
+ 
+-static char          *si_type[SI_MAX_PARMS];
+ #define MAX_SI_TYPE_STR 30
+-static char          si_type_str[MAX_SI_TYPE_STR];
++static char          si_type_str[MAX_SI_TYPE_STR] __initdata;
+ static unsigned long addrs[SI_MAX_PARMS];
+ static unsigned int num_addrs;
+ static unsigned int  ports[SI_MAX_PARMS];
+ static unsigned int num_ports;
+-static int           irqs[SI_MAX_PARMS];
+-static unsigned int num_irqs;
+-static int           regspacings[SI_MAX_PARMS];
+-static unsigned int num_regspacings;
+-static int           regsizes[SI_MAX_PARMS];
+-static unsigned int num_regsizes;
+-static int           regshifts[SI_MAX_PARMS];
+-static unsigned int num_regshifts;
+-static int slave_addrs[SI_MAX_PARMS]; /* Leaving 0 chooses the default value */
+-static unsigned int num_slave_addrs;
++static int           irqs[SI_MAX_PARMS] __initdata;
++static unsigned int num_irqs __initdata;
++static int           regspacings[SI_MAX_PARMS] __initdata;
++static unsigned int num_regspacings __initdata;
++static int           regsizes[SI_MAX_PARMS] __initdata;
++static unsigned int num_regsizes __initdata;
++static int           regshifts[SI_MAX_PARMS] __initdata;
++static unsigned int num_regshifts __initdata;
++static int slave_addrs[SI_MAX_PARMS] __initdata;
++static unsigned int num_slave_addrs __initdata;
+ 
+ module_param_string(type, si_type_str, MAX_SI_TYPE_STR, 0);
+ MODULE_PARM_DESC(type, "Defines the type of each interface, each"
+@@ -73,12 +73,133 @@ MODULE_PARM_DESC(slave_addrs, "Set the default IPMB slave address for"
+ 		 " overridden by this parm.  This is an array indexed"
+ 		 " by interface number.");
+ 
+-int ipmi_si_hardcode_find_bmc(void)
++static struct platform_device *ipmi_hc_pdevs[SI_MAX_PARMS];
++
++static void __init ipmi_hardcode_init_one(const char *si_type_str,
++					  unsigned int i,
++					  unsigned long addr,
++					  unsigned int flags)
+ {
+-	int ret = -ENODEV;
+-	int             i;
+-	struct si_sm_io io;
++	struct platform_device *pdev;
++	unsigned int num_r = 1, size;
++	struct resource r[4];
++	struct property_entry p[6];
++	enum si_type si_type;
++	unsigned int regspacing, regsize;
++	int rv;
++
++	memset(p, 0, sizeof(p));
++	memset(r, 0, sizeof(r));
++
++	if (!si_type_str || !*si_type_str || strcmp(si_type_str, "kcs") == 0) {
++		size = 2;
++		si_type = SI_KCS;
++	} else if (strcmp(si_type_str, "smic") == 0) {
++		size = 2;
++		si_type = SI_SMIC;
++	} else if (strcmp(si_type_str, "bt") == 0) {
++		size = 3;
++		si_type = SI_BT;
++	} else if (strcmp(si_type_str, "invalid") == 0) {
++		/*
++		 * Allow a firmware-specified interface to be
++		 * disabled.
++		 */
++		size = 1;
++		si_type = SI_TYPE_INVALID;
++	} else {
++		pr_warn("Interface type specified for interface %d, was invalid: %s\n",
++			i, si_type_str);
++		return;
++	}
++
++	regsize = regsizes[i];
++	if (regsize == 0)
++		regsize = DEFAULT_REGSIZE;
++
++	p[0] = PROPERTY_ENTRY_U8("ipmi-type", si_type);
++	p[1] = PROPERTY_ENTRY_U8("slave-addr", slave_addrs[i]);
++	p[2] = PROPERTY_ENTRY_U8("addr-source", SI_HARDCODED);
++	p[3] = PROPERTY_ENTRY_U8("reg-shift", regshifts[i]);
++	p[4] = PROPERTY_ENTRY_U8("reg-size", regsize);
++	/* Last entry must be left NULL to terminate it. */
++
++	/*
++	 * Register spacing is derived from the resources in
++	 * the IPMI platform code.
++	 */
++	regspacing = regspacings[i];
++	if (regspacing == 0)
++		regspacing = regsize;
++
++	r[0].start = addr;
++	r[0].end = r[0].start + regsize - 1;
++	r[0].name = "IPMI Address 1";
++	r[0].flags = flags;
++
++	if (size > 1) {
++		r[1].start = r[0].start + regspacing;
++		r[1].end = r[1].start + regsize - 1;
++		r[1].name = "IPMI Address 2";
++		r[1].flags = flags;
++		num_r++;
++	}
++
++	if (size > 2) {
++		r[2].start = r[1].start + regspacing;
++		r[2].end = r[2].start + regsize - 1;
++		r[2].name = "IPMI Address 3";
++		r[2].flags = flags;
++		num_r++;
++	}
++
++	if (irqs[i]) {
++		r[num_r].start = irqs[i];
++		r[num_r].end = irqs[i];
++		r[num_r].name = "IPMI IRQ";
++		r[num_r].flags = IORESOURCE_IRQ;
++		num_r++;
++	}
++
++	pdev = platform_device_alloc("hardcode-ipmi-si", i);
++	if (!pdev) {
++		pr_err("Error allocating IPMI platform device %d\n", i);
++		return;
++	}
++
++	rv = platform_device_add_resources(pdev, r, num_r);
++	if (rv) {
++		dev_err(&pdev->dev,
++			"Unable to add hard-code resources: %d\n", rv);
++		goto err;
++	}
++
++	rv = platform_device_add_properties(pdev, p);
++	if (rv) {
++		dev_err(&pdev->dev,
++			"Unable to add hard-code properties: %d\n", rv);
++		goto err;
++	}
++
++	rv = platform_device_add(pdev);
++	if (rv) {
++		dev_err(&pdev->dev,
++			"Unable to add hard-code device: %d\n", rv);
++		goto err;
++	}
++
++	ipmi_hc_pdevs[i] = pdev;
++	return;
++
++err:
++	platform_device_put(pdev);
++}
++
++void __init ipmi_hardcode_init(void)
++{
++	unsigned int i;
+ 	char *str;
++	char *si_type[SI_MAX_PARMS];
+ 
+ 	/* Parse out the si_type string into its components. */
+ 	str = si_type_str;
+@@ -95,54 +216,45 @@ int ipmi_si_hardcode_find_bmc(void)
+ 		}
+ 	}
+ 
+-	memset(&io, 0, sizeof(io));
+ 	for (i = 0; i < SI_MAX_PARMS; i++) {
+-		if (!ports[i] && !addrs[i])
+-			continue;
+-
+-		io.addr_source = SI_HARDCODED;
+-		pr_info("probing via hardcoded address\n");
+-
+-		if (!si_type[i] || strcmp(si_type[i], "kcs") == 0) {
+-			io.si_type = SI_KCS;
+-		} else if (strcmp(si_type[i], "smic") == 0) {
+-			io.si_type = SI_SMIC;
+-		} else if (strcmp(si_type[i], "bt") == 0) {
+-			io.si_type = SI_BT;
+-		} else {
+-			pr_warn("Interface type specified for interface %d, was invalid: %s\n",
+-				i, si_type[i]);
+-			continue;
+-		}
++		if (i < num_ports && ports[i])
++			ipmi_hardcode_init_one(si_type[i], i, ports[i],
++					       IORESOURCE_IO);
++		if (i < num_addrs && addrs[i])
++			ipmi_hardcode_init_one(si_type[i], i, addrs[i],
++					       IORESOURCE_MEM);
++	}
++}
+ 
+-		if (ports[i]) {
+-			/* An I/O port */
+-			io.addr_data = ports[i];
+-			io.addr_type = IPMI_IO_ADDR_SPACE;
+-		} else if (addrs[i]) {
+-			/* A memory port */
+-			io.addr_data = addrs[i];
+-			io.addr_type = IPMI_MEM_ADDR_SPACE;
+-		} else {
+-			pr_warn("Interface type specified for interface %d, but port and address were not set or set to zero\n",
+-				i);
+-			continue;
+-		}
++void ipmi_si_hardcode_exit(void)
++{
++	unsigned int i;
+ 
+-		io.addr = NULL;
+-		io.regspacing = regspacings[i];
+-		if (!io.regspacing)
+-			io.regspacing = DEFAULT_REGSPACING;
+-		io.regsize = regsizes[i];
+-		if (!io.regsize)
+-			io.regsize = DEFAULT_REGSIZE;
+-		io.regshift = regshifts[i];
+-		io.irq = irqs[i];
+-		if (io.irq)
+-			io.irq_setup = ipmi_std_irq_setup;
+-		io.slave_addr = slave_addrs[i];
+-
+-		ret = ipmi_si_add_smi(&io);
++	for (i = 0; i < SI_MAX_PARMS; i++) {
++		if (ipmi_hc_pdevs[i])
++			platform_device_unregister(ipmi_hc_pdevs[i]);
+ 	}
+-	return ret;
++}
++
++/*
++ * Returns true if the given address exists as a hardcoded address,
++ * false if not.
++ */
++int ipmi_si_hardcode_match(int addr_type, unsigned long addr)
++{
++	unsigned int i;
++
++	if (addr_type == IPMI_IO_ADDR_SPACE) {
++		for (i = 0; i < num_ports; i++) {
++			if (ports[i] == addr)
++				return 1;
++		}
++	} else {
++		for (i = 0; i < num_addrs; i++) {
++			if (addrs[i] == addr)
++				return 1;
++		}
++	}
++
++	return 0;
+ }
+diff --git a/drivers/char/ipmi/ipmi_si_intf.c b/drivers/char/ipmi/ipmi_si_intf.c
+index dc8603d34320..5294abc4c96c 100644
+--- a/drivers/char/ipmi/ipmi_si_intf.c
++++ b/drivers/char/ipmi/ipmi_si_intf.c
+@@ -1862,6 +1862,18 @@ int ipmi_si_add_smi(struct si_sm_io *io)
+ 	int rv = 0;
+ 	struct smi_info *new_smi, *dup;
+ 
++	/*
++	 * If the user gave us a hard-coded device at the same
++	 * address, they presumably want us to use it and not what is
++	 * in the firmware.
++	 */
++	if (io->addr_source != SI_HARDCODED &&
++	    ipmi_si_hardcode_match(io->addr_type, io->addr_data)) {
++		dev_info(io->dev,
++			 "Hard-coded device at this address already exists");
++		return -ENODEV;
++	}
++
+ 	if (!io->io_setup) {
+ 		if (io->addr_type == IPMI_IO_ADDR_SPACE) {
+ 			io->io_setup = ipmi_si_port_setup;
+@@ -2085,11 +2097,16 @@ static int try_smi_init(struct smi_info *new_smi)
+ 	WARN_ON(new_smi->io.dev->init_name != NULL);
+ 
+  out_err:
++	if (rv && new_smi->io.io_cleanup) {
++		new_smi->io.io_cleanup(&new_smi->io);
++		new_smi->io.io_cleanup = NULL;
++	}
++
+ 	kfree(init_name);
+ 	return rv;
+ }
+ 
+-static int init_ipmi_si(void)
++static int __init init_ipmi_si(void)
+ {
+ 	struct smi_info *e;
+ 	enum ipmi_addr_src type = SI_INVALID;
+@@ -2097,11 +2114,9 @@ static int init_ipmi_si(void)
+ 	if (initialized)
+ 		return 0;
+ 
+-	pr_info("IPMI System Interface driver\n");
++	ipmi_hardcode_init();
+ 
+-	/* If the user gave us a device, they presumably want us to use it */
+-	if (!ipmi_si_hardcode_find_bmc())
+-		goto do_scan;
++	pr_info("IPMI System Interface driver\n");
+ 
+ 	ipmi_si_platform_init();
+ 
+@@ -2113,7 +2128,6 @@ static int init_ipmi_si(void)
+ 	   with multiple BMCs we assume that there will be several instances
+ 	   of a given type so if we succeed in registering a type then also
+ 	   try to register everything else of the same type */
+-do_scan:
+ 	mutex_lock(&smi_infos_lock);
+ 	list_for_each_entry(e, &smi_infos, link) {
+ 		/* Try to register a device if it has an IRQ and we either
+@@ -2299,6 +2313,8 @@ static void cleanup_ipmi_si(void)
+ 	list_for_each_entry_safe(e, tmp_e, &smi_infos, link)
+ 		cleanup_one_si(e);
+ 	mutex_unlock(&smi_infos_lock);
++
++	ipmi_si_hardcode_exit();
+ }
+ module_exit(cleanup_ipmi_si);
+ 
+diff --git a/drivers/char/ipmi/ipmi_si_mem_io.c b/drivers/char/ipmi/ipmi_si_mem_io.c
+index fd0ec8d6bf0e..75583612ab10 100644
+--- a/drivers/char/ipmi/ipmi_si_mem_io.c
++++ b/drivers/char/ipmi/ipmi_si_mem_io.c
+@@ -81,8 +81,6 @@ int ipmi_si_mem_setup(struct si_sm_io *io)
+ 	if (!addr)
+ 		return -ENODEV;
+ 
+-	io->io_cleanup = mem_cleanup;
+-
+ 	/*
+ 	 * Figure out the actual readb/readw/readl/etc routine to use based
+ 	 * upon the register size.
+@@ -141,5 +139,8 @@ int ipmi_si_mem_setup(struct si_sm_io *io)
+ 		mem_region_cleanup(io, io->io_size);
+ 		return -EIO;
+ 	}
++
++	io->io_cleanup = mem_cleanup;
++
+ 	return 0;
+ }
+diff --git a/drivers/char/ipmi/ipmi_si_platform.c b/drivers/char/ipmi/ipmi_si_platform.c
+index 15cf819f884f..8158d03542f4 100644
+--- a/drivers/char/ipmi/ipmi_si_platform.c
++++ b/drivers/char/ipmi/ipmi_si_platform.c
+@@ -128,8 +128,6 @@ ipmi_get_info_from_resources(struct platform_device *pdev,
+ 		if (res_second->start > io->addr_data)
+ 			io->regspacing = res_second->start - io->addr_data;
+ 	}
+-	io->regsize = DEFAULT_REGSIZE;
+-	io->regshift = 0;
+ 
+ 	return res;
+ }
+@@ -137,7 +135,7 @@ ipmi_get_info_from_resources(struct platform_device *pdev,
+ static int platform_ipmi_probe(struct platform_device *pdev)
+ {
+ 	struct si_sm_io io;
+-	u8 type, slave_addr, addr_source;
++	u8 type, slave_addr, addr_source, regsize, regshift;
+ 	int rv;
+ 
+ 	rv = device_property_read_u8(&pdev->dev, "addr-source", &addr_source);
+@@ -149,7 +147,7 @@ static int platform_ipmi_probe(struct platform_device *pdev)
+ 	if (addr_source == SI_SMBIOS) {
+ 		if (!si_trydmi)
+ 			return -ENODEV;
+-	} else {
++	} else if (addr_source != SI_HARDCODED) {
+ 		if (!si_tryplatform)
+ 			return -ENODEV;
+ 	}
+@@ -169,11 +167,23 @@ static int platform_ipmi_probe(struct platform_device *pdev)
+ 	case SI_BT:
+ 		io.si_type = type;
+ 		break;
++	case SI_TYPE_INVALID: /* User disabled this in hardcode. */
++		return -ENODEV;
+ 	default:
+ 		dev_err(&pdev->dev, "ipmi-type property is invalid\n");
+ 		return -EINVAL;
+ 	}
+ 
++	io.regsize = DEFAULT_REGSIZE;
++	rv = device_property_read_u8(&pdev->dev, "reg-size", &regsize);
++	if (!rv)
++		io.regsize = regsize;
++
++	io.regshift = 0;
++	rv = device_property_read_u8(&pdev->dev, "reg-shift", &regshift);
++	if (!rv)
++		io.regshift = regshift;
++
+ 	if (!ipmi_get_info_from_resources(pdev, &io))
+ 		return -EINVAL;
+ 
+@@ -193,7 +203,8 @@ static int platform_ipmi_probe(struct platform_device *pdev)
+ 
+ 	io.dev = &pdev->dev;
+ 
+-	pr_info("ipmi_si: SMBIOS: %s %#lx regsize %d spacing %d irq %d\n",
++	pr_info("ipmi_si: %s: %s %#lx regsize %d spacing %d irq %d\n",
++		ipmi_addr_src_to_str(addr_source),
+ 		(io.addr_type == IPMI_IO_ADDR_SPACE) ? "io" : "mem",
+ 		io.addr_data, io.regsize, io.regspacing, io.irq);
+ 
+@@ -358,6 +369,9 @@ static int acpi_ipmi_probe(struct platform_device *pdev)
+ 		goto err_free;
+ 	}
+ 
++	io.regsize = DEFAULT_REGSIZE;
++	io.regshift = 0;
++
+ 	res = ipmi_get_info_from_resources(pdev, &io);
+ 	if (!res) {
+ 		rv = -EINVAL;
+@@ -420,8 +434,9 @@ static int ipmi_remove(struct platform_device *pdev)
+ }
+ 
+ static const struct platform_device_id si_plat_ids[] = {
+-    { "dmi-ipmi-si", 0 },
+-    { }
++	{ "dmi-ipmi-si", 0 },
++	{ "hardcode-ipmi-si", 0 },
++	{ }
+ };
+ 
+ struct platform_driver ipmi_platform_driver = {
+diff --git a/drivers/char/ipmi/ipmi_si_port_io.c b/drivers/char/ipmi/ipmi_si_port_io.c
+index ef6dffcea9fa..03924c32b6e9 100644
+--- a/drivers/char/ipmi/ipmi_si_port_io.c
++++ b/drivers/char/ipmi/ipmi_si_port_io.c
+@@ -68,8 +68,6 @@ int ipmi_si_port_setup(struct si_sm_io *io)
+ 	if (!addr)
+ 		return -ENODEV;
+ 
+-	io->io_cleanup = port_cleanup;
+-
+ 	/*
+ 	 * Figure out the actual inb/inw/inl/etc routine to use based
+ 	 * upon the register size.
+@@ -109,5 +107,8 @@ int ipmi_si_port_setup(struct si_sm_io *io)
+ 			return -EIO;
+ 		}
+ 	}
++
++	io->io_cleanup = port_cleanup;
++
+ 	return 0;
+ }
+diff --git a/drivers/char/tpm/st33zp24/st33zp24.c b/drivers/char/tpm/st33zp24/st33zp24.c
+index 64dc560859f2..13dc614b7ebc 100644
+--- a/drivers/char/tpm/st33zp24/st33zp24.c
++++ b/drivers/char/tpm/st33zp24/st33zp24.c
+@@ -436,7 +436,7 @@ static int st33zp24_send(struct tpm_chip *chip, unsigned char *buf,
+ 			goto out_err;
+ 	}
+ 
+-	return len;
++	return 0;
+ out_err:
+ 	st33zp24_cancel(chip);
+ 	release_locality(chip);
+diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
+index d9439f9abe78..88d2e01a651d 100644
+--- a/drivers/char/tpm/tpm-interface.c
++++ b/drivers/char/tpm/tpm-interface.c
+@@ -230,10 +230,19 @@ static ssize_t tpm_try_transmit(struct tpm_chip *chip,
+ 	if (rc < 0) {
+ 		if (rc != -EPIPE)
+ 			dev_err(&chip->dev,
+-				"%s: tpm_send: error %d\n", __func__, rc);
++				"%s: send(): error %d\n", __func__, rc);
+ 		goto out;
+ 	}
+ 
++	/* A sanity check. send() should just return zero on success e.g.
++	 * not the command length.
++	 */
++	if (rc > 0) {
++		dev_warn(&chip->dev,
++			 "%s: send(): invalid value %d\n", __func__, rc);
++		rc = 0;
++	}
++
+ 	if (chip->flags & TPM_CHIP_FLAG_IRQ)
+ 		goto out_recv;
+ 
+diff --git a/drivers/char/tpm/tpm_atmel.c b/drivers/char/tpm/tpm_atmel.c
+index 66a14526aaf4..a290b30a0c35 100644
+--- a/drivers/char/tpm/tpm_atmel.c
++++ b/drivers/char/tpm/tpm_atmel.c
+@@ -105,7 +105,7 @@ static int tpm_atml_send(struct tpm_chip *chip, u8 *buf, size_t count)
+ 		iowrite8(buf[i], priv->iobase);
+ 	}
+ 
+-	return count;
++	return 0;
+ }
+ 
+ static void tpm_atml_cancel(struct tpm_chip *chip)
+diff --git a/drivers/char/tpm/tpm_crb.c b/drivers/char/tpm/tpm_crb.c
+index 36952ef98f90..763fc7e6c005 100644
+--- a/drivers/char/tpm/tpm_crb.c
++++ b/drivers/char/tpm/tpm_crb.c
+@@ -287,19 +287,29 @@ static int crb_recv(struct tpm_chip *chip, u8 *buf, size_t count)
+ 	struct crb_priv *priv = dev_get_drvdata(&chip->dev);
+ 	unsigned int expected;
+ 
+-	/* sanity check */
+-	if (count < 6)
++	/* A sanity check that the upper layer wants to get at least the header
++	 * as that is the minimum size for any TPM response.
++	 */
++	if (count < TPM_HEADER_SIZE)
+ 		return -EIO;
+ 
++	/* If this bit is set, according to the spec, the TPM is in
++	 * unrecoverable condition.
++	 */
+ 	if (ioread32(&priv->regs_t->ctrl_sts) & CRB_CTRL_STS_ERROR)
+ 		return -EIO;
+ 
+-	memcpy_fromio(buf, priv->rsp, 6);
+-	expected = be32_to_cpup((__be32 *) &buf[2]);
+-	if (expected > count || expected < 6)
++	/* Read the first 8 bytes in order to get the length of the response.
++	 * We read exactly a quad word in order to make sure that the remaining
++	 * reads will be aligned.
++	 */
++	memcpy_fromio(buf, priv->rsp, 8);
++
++	expected = be32_to_cpup((__be32 *)&buf[2]);
++	if (expected > count || expected < TPM_HEADER_SIZE)
+ 		return -EIO;
+ 
+-	memcpy_fromio(&buf[6], &priv->rsp[6], expected - 6);
++	memcpy_fromio(&buf[8], &priv->rsp[8], expected - 8);
+ 
+ 	return expected;
+ }
+diff --git a/drivers/char/tpm/tpm_i2c_atmel.c b/drivers/char/tpm/tpm_i2c_atmel.c
+index 95ce2e9ccdc6..32a8e27c5382 100644
+--- a/drivers/char/tpm/tpm_i2c_atmel.c
++++ b/drivers/char/tpm/tpm_i2c_atmel.c
+@@ -65,7 +65,11 @@ static int i2c_atmel_send(struct tpm_chip *chip, u8 *buf, size_t len)
+ 	dev_dbg(&chip->dev,
+ 		"%s(buf=%*ph len=%0zx) -> sts=%d\n", __func__,
+ 		(int)min_t(size_t, 64, len), buf, len, status);
+-	return status;
++
++	if (status < 0)
++		return status;
++
++	return 0;
+ }
+ 
+ static int i2c_atmel_recv(struct tpm_chip *chip, u8 *buf, size_t count)
+diff --git a/drivers/char/tpm/tpm_i2c_infineon.c b/drivers/char/tpm/tpm_i2c_infineon.c
+index 9086edc9066b..977fd42daa1b 100644
+--- a/drivers/char/tpm/tpm_i2c_infineon.c
++++ b/drivers/char/tpm/tpm_i2c_infineon.c
+@@ -587,7 +587,7 @@ static int tpm_tis_i2c_send(struct tpm_chip *chip, u8 *buf, size_t len)
+ 	/* go and do it */
+ 	iic_tpm_write(TPM_STS(tpm_dev.locality), &sts, 1);
+ 
+-	return len;
++	return 0;
+ out_err:
+ 	tpm_tis_i2c_ready(chip);
+ 	/* The TPM needs some time to clean up here,
+diff --git a/drivers/char/tpm/tpm_i2c_nuvoton.c b/drivers/char/tpm/tpm_i2c_nuvoton.c
+index 217f7f1cbde8..058220edb8b3 100644
+--- a/drivers/char/tpm/tpm_i2c_nuvoton.c
++++ b/drivers/char/tpm/tpm_i2c_nuvoton.c
+@@ -467,7 +467,7 @@ static int i2c_nuvoton_send(struct tpm_chip *chip, u8 *buf, size_t len)
+ 	}
+ 
+ 	dev_dbg(dev, "%s() -> %zd\n", __func__, len);
+-	return len;
++	return 0;
+ }
+ 
+ static bool i2c_nuvoton_req_canceled(struct tpm_chip *chip, u8 status)
+diff --git a/drivers/char/tpm/tpm_ibmvtpm.c b/drivers/char/tpm/tpm_ibmvtpm.c
+index 07b5a487d0c8..757ca45b39b8 100644
+--- a/drivers/char/tpm/tpm_ibmvtpm.c
++++ b/drivers/char/tpm/tpm_ibmvtpm.c
+@@ -139,14 +139,14 @@ static int tpm_ibmvtpm_recv(struct tpm_chip *chip, u8 *buf, size_t count)
+ }
+ 
+ /**
+- * tpm_ibmvtpm_send - Send tpm request
+- *
++ * tpm_ibmvtpm_send() - Send a TPM command
+  * @chip:	tpm chip struct
+  * @buf:	buffer contains data to send
+  * @count:	size of buffer
+  *
+  * Return:
+- *	Number of bytes sent or < 0 on error.
++ *   0 on success,
++ *   -errno on error
+  */
+ static int tpm_ibmvtpm_send(struct tpm_chip *chip, u8 *buf, size_t count)
+ {
+@@ -192,7 +192,7 @@ static int tpm_ibmvtpm_send(struct tpm_chip *chip, u8 *buf, size_t count)
+ 		rc = 0;
+ 		ibmvtpm->tpm_processing_cmd = false;
+ 	} else
+-		rc = count;
++		rc = 0;
+ 
+ 	spin_unlock(&ibmvtpm->rtce_lock);
+ 	return rc;
+diff --git a/drivers/char/tpm/tpm_infineon.c b/drivers/char/tpm/tpm_infineon.c
+index d8f10047fbba..97f6d4fe0aee 100644
+--- a/drivers/char/tpm/tpm_infineon.c
++++ b/drivers/char/tpm/tpm_infineon.c
+@@ -354,7 +354,7 @@ static int tpm_inf_send(struct tpm_chip *chip, u8 * buf, size_t count)
+ 	for (i = 0; i < count; i++) {
+ 		wait_and_send(chip, buf[i]);
+ 	}
+-	return count;
++	return 0;
+ }
+ 
+ static void tpm_inf_cancel(struct tpm_chip *chip)
+diff --git a/drivers/char/tpm/tpm_nsc.c b/drivers/char/tpm/tpm_nsc.c
+index 5d6cce74cd3f..9bee3c5eb4bf 100644
+--- a/drivers/char/tpm/tpm_nsc.c
++++ b/drivers/char/tpm/tpm_nsc.c
+@@ -226,7 +226,7 @@ static int tpm_nsc_send(struct tpm_chip *chip, u8 * buf, size_t count)
+ 	}
+ 	outb(NSC_COMMAND_EOC, priv->base + NSC_COMMAND);
+ 
+-	return count;
++	return 0;
+ }
+ 
+ static void tpm_nsc_cancel(struct tpm_chip *chip)
+diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
+index bf7e49cfa643..bb0c2e160562 100644
+--- a/drivers/char/tpm/tpm_tis_core.c
++++ b/drivers/char/tpm/tpm_tis_core.c
+@@ -481,7 +481,7 @@ static int tpm_tis_send_main(struct tpm_chip *chip, const u8 *buf, size_t len)
+ 			goto out_err;
+ 		}
+ 	}
+-	return len;
++	return 0;
+ out_err:
+ 	tpm_tis_ready(chip);
+ 	return rc;
+diff --git a/drivers/char/tpm/tpm_vtpm_proxy.c b/drivers/char/tpm/tpm_vtpm_proxy.c
+index 87a0ce47f201..ecbb63f8d231 100644
+--- a/drivers/char/tpm/tpm_vtpm_proxy.c
++++ b/drivers/char/tpm/tpm_vtpm_proxy.c
+@@ -335,7 +335,6 @@ static int vtpm_proxy_is_driver_command(struct tpm_chip *chip,
+ static int vtpm_proxy_tpm_op_send(struct tpm_chip *chip, u8 *buf, size_t count)
+ {
+ 	struct proxy_dev *proxy_dev = dev_get_drvdata(&chip->dev);
+-	int rc = 0;
+ 
+ 	if (count > sizeof(proxy_dev->buffer)) {
+ 		dev_err(&chip->dev,
+@@ -366,7 +365,7 @@ static int vtpm_proxy_tpm_op_send(struct tpm_chip *chip, u8 *buf, size_t count)
+ 
+ 	wake_up_interruptible(&proxy_dev->wq);
+ 
+-	return rc;
++	return 0;
+ }
+ 
+ static void vtpm_proxy_tpm_op_cancel(struct tpm_chip *chip)
+diff --git a/drivers/char/tpm/xen-tpmfront.c b/drivers/char/tpm/xen-tpmfront.c
+index b150f87f38f5..5a327eb7f63a 100644
+--- a/drivers/char/tpm/xen-tpmfront.c
++++ b/drivers/char/tpm/xen-tpmfront.c
+@@ -173,7 +173,7 @@ static int vtpm_send(struct tpm_chip *chip, u8 *buf, size_t count)
+ 		return -ETIME;
+ 	}
+ 
+-	return count;
++	return 0;
+ }
+ 
+ static int vtpm_recv(struct tpm_chip *chip, u8 *buf, size_t count)
+diff --git a/drivers/clk/clk-twl6040.c b/drivers/clk/clk-twl6040.c
+index ea846f77750b..0cad5748bf0e 100644
+--- a/drivers/clk/clk-twl6040.c
++++ b/drivers/clk/clk-twl6040.c
+@@ -41,6 +41,43 @@ static int twl6040_pdmclk_is_prepared(struct clk_hw *hw)
+ 	return pdmclk->enabled;
+ }
+ 
++static int twl6040_pdmclk_reset_one_clock(struct twl6040_pdmclk *pdmclk,
++					  unsigned int reg)
++{
++	const u8 reset_mask = TWL6040_HPLLRST;	/* Same for HPPLL and LPPLL */
++	int ret;
++
++	ret = twl6040_set_bits(pdmclk->twl6040, reg, reset_mask);
++	if (ret < 0)
++		return ret;
++
++	ret = twl6040_clear_bits(pdmclk->twl6040, reg, reset_mask);
++	if (ret < 0)
++		return ret;
++
++	return 0;
++}
++
++/*
++ * TWL6040A2 Phoenix Audio IC erratum #6: "PDM Clock Generation Issue At
++ * Cold Temperature". This affects cold boot and deeper idle states it
++ * seems. The workaround consists of resetting HPPLL and LPPLL.
++ */
++static int twl6040_pdmclk_quirk_reset_clocks(struct twl6040_pdmclk *pdmclk)
++{
++	int ret;
++
++	ret = twl6040_pdmclk_reset_one_clock(pdmclk, TWL6040_REG_HPPLLCTL);
++	if (ret)
++		return ret;
++
++	ret = twl6040_pdmclk_reset_one_clock(pdmclk, TWL6040_REG_LPPLLCTL);
++	if (ret)
++		return ret;
++
++	return 0;
++}
++
+ static int twl6040_pdmclk_prepare(struct clk_hw *hw)
+ {
+ 	struct twl6040_pdmclk *pdmclk = container_of(hw, struct twl6040_pdmclk,
+@@ -48,8 +85,20 @@ static int twl6040_pdmclk_prepare(struct clk_hw *hw)
+ 	int ret;
+ 
+ 	ret = twl6040_power(pdmclk->twl6040, 1);
+-	if (!ret)
+-		pdmclk->enabled = 1;
++	if (ret)
++		return ret;
++
++	ret = twl6040_pdmclk_quirk_reset_clocks(pdmclk);
++	if (ret)
++		goto out_err;
++
++	pdmclk->enabled = 1;
++
++	return 0;
++
++out_err:
++	dev_err(pdmclk->dev, "%s: error %i\n", __func__, ret);
++	twl6040_power(pdmclk->twl6040, 0);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/clk/ingenic/cgu.c b/drivers/clk/ingenic/cgu.c
+index 5ef7d9ba2195..b40160eb3372 100644
+--- a/drivers/clk/ingenic/cgu.c
++++ b/drivers/clk/ingenic/cgu.c
+@@ -426,16 +426,16 @@ ingenic_clk_round_rate(struct clk_hw *hw, unsigned long req_rate,
+ 	struct ingenic_clk *ingenic_clk = to_ingenic_clk(hw);
+ 	struct ingenic_cgu *cgu = ingenic_clk->cgu;
+ 	const struct ingenic_cgu_clk_info *clk_info;
+-	long rate = *parent_rate;
++	unsigned int div = 1;
+ 
+ 	clk_info = &cgu->clock_info[ingenic_clk->idx];
+ 
+ 	if (clk_info->type & CGU_CLK_DIV)
+-		rate /= ingenic_clk_calc_div(clk_info, *parent_rate, req_rate);
++		div = ingenic_clk_calc_div(clk_info, *parent_rate, req_rate);
+ 	else if (clk_info->type & CGU_CLK_FIXDIV)
+-		rate /= clk_info->fixdiv.div;
++		div = clk_info->fixdiv.div;
+ 
+-	return rate;
++	return DIV_ROUND_UP(*parent_rate, div);
+ }
+ 
+ static int
+@@ -455,7 +455,7 @@ ingenic_clk_set_rate(struct clk_hw *hw, unsigned long req_rate,
+ 
+ 	if (clk_info->type & CGU_CLK_DIV) {
+ 		div = ingenic_clk_calc_div(clk_info, parent_rate, req_rate);
+-		rate = parent_rate / div;
++		rate = DIV_ROUND_UP(parent_rate, div);
+ 
+ 		if (rate != req_rate)
+ 			return -EINVAL;
+diff --git a/drivers/clk/ingenic/cgu.h b/drivers/clk/ingenic/cgu.h
+index 502bcbb61b04..e12716d8ce3c 100644
+--- a/drivers/clk/ingenic/cgu.h
++++ b/drivers/clk/ingenic/cgu.h
+@@ -80,7 +80,7 @@ struct ingenic_cgu_mux_info {
+  * @reg: offset of the divider control register within the CGU
+  * @shift: number of bits to left shift the divide value by (ie. the index of
+  *         the lowest bit of the divide value within its control register)
+- * @div: number of bits to divide the divider value by (i.e. if the
++ * @div: number to divide the divider value by (i.e. if the
+  *	 effective divider value is the value written to the register
+  *	 multiplied by some constant)
+  * @bits: the size of the divide value in bits
+diff --git a/drivers/clk/samsung/clk-exynos5-subcmu.c b/drivers/clk/samsung/clk-exynos5-subcmu.c
+index 93306283d764..8ae44b5db4c2 100644
+--- a/drivers/clk/samsung/clk-exynos5-subcmu.c
++++ b/drivers/clk/samsung/clk-exynos5-subcmu.c
+@@ -136,15 +136,20 @@ static int __init exynos5_clk_register_subcmu(struct device *parent,
+ {
+ 	struct of_phandle_args genpdspec = { .np = pd_node };
+ 	struct platform_device *pdev;
++	int ret;
++
++	pdev = platform_device_alloc("exynos5-subcmu", PLATFORM_DEVID_AUTO);
++	if (!pdev)
++		return -ENOMEM;
+ 
+-	pdev = platform_device_alloc(info->pd_name, -1);
+ 	pdev->dev.parent = parent;
+-	pdev->driver_override = "exynos5-subcmu";
+ 	platform_set_drvdata(pdev, (void *)info);
+ 	of_genpd_add_device(&genpdspec, &pdev->dev);
+-	platform_device_add(pdev);
++	ret = platform_device_add(pdev);
++	if (ret)
++		platform_device_put(pdev);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int __init exynos5_clk_probe(struct platform_device *pdev)
+diff --git a/drivers/clk/uniphier/clk-uniphier-cpugear.c b/drivers/clk/uniphier/clk-uniphier-cpugear.c
+index ec11f55594ad..5d2d42b7e182 100644
+--- a/drivers/clk/uniphier/clk-uniphier-cpugear.c
++++ b/drivers/clk/uniphier/clk-uniphier-cpugear.c
+@@ -47,7 +47,7 @@ static int uniphier_clk_cpugear_set_parent(struct clk_hw *hw, u8 index)
+ 		return ret;
+ 
+ 	ret = regmap_write_bits(gear->regmap,
+-				gear->regbase + UNIPHIER_CLK_CPUGEAR_SET,
++				gear->regbase + UNIPHIER_CLK_CPUGEAR_UPD,
+ 				UNIPHIER_CLK_CPUGEAR_UPD_BIT,
+ 				UNIPHIER_CLK_CPUGEAR_UPD_BIT);
+ 	if (ret)
+diff --git a/drivers/clocksource/Kconfig b/drivers/clocksource/Kconfig
+index a9e26f6a81a1..8dfd3bc448d0 100644
+--- a/drivers/clocksource/Kconfig
++++ b/drivers/clocksource/Kconfig
+@@ -360,6 +360,16 @@ config ARM64_ERRATUM_858921
+ 	  The workaround will be dynamically enabled when an affected
+ 	  core is detected.
+ 
++config SUN50I_ERRATUM_UNKNOWN1
++	bool "Workaround for Allwinner A64 erratum UNKNOWN1"
++	default y
++	depends on ARM_ARCH_TIMER && ARM64 && ARCH_SUNXI
++	select ARM_ARCH_TIMER_OOL_WORKAROUND
++	help
++	  This option enables a workaround for instability in the timer on
++	  the Allwinner A64 SoC. The workaround will only be active if the
++	  allwinner,erratum-unknown1 property is found in the timer node.
++
+ config ARM_GLOBAL_TIMER
+ 	bool "Support for the ARM global timer" if COMPILE_TEST
+ 	select TIMER_OF if OF
+diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c
+index 9a7d4dc00b6e..a8b20b65bd4b 100644
+--- a/drivers/clocksource/arm_arch_timer.c
++++ b/drivers/clocksource/arm_arch_timer.c
+@@ -326,6 +326,48 @@ static u64 notrace arm64_1188873_read_cntvct_el0(void)
+ }
+ #endif
+ 
++#ifdef CONFIG_SUN50I_ERRATUM_UNKNOWN1
++/*
++ * The low bits of the counter registers are indeterminate while bit 10 or
++ * greater is rolling over. Since the counter value can jump both backward
++ * (7ff -> 000 -> 800) and forward (7ff -> fff -> 800), ignore register values
++ * with all ones or all zeros in the low bits. Bound the loop by the maximum
++ * number of CPU cycles in 3 consecutive 24 MHz counter periods.
++ */
++#define __sun50i_a64_read_reg(reg) ({					\
++	u64 _val;							\
++	int _retries = 150;						\
++									\
++	do {								\
++		_val = read_sysreg(reg);				\
++		_retries--;						\
++	} while (((_val + 1) & GENMASK(9, 0)) <= 1 && _retries);	\
++									\
++	WARN_ON_ONCE(!_retries);					\
++	_val;								\
++})
++
++static u64 notrace sun50i_a64_read_cntpct_el0(void)
++{
++	return __sun50i_a64_read_reg(cntpct_el0);
++}
++
++static u64 notrace sun50i_a64_read_cntvct_el0(void)
++{
++	return __sun50i_a64_read_reg(cntvct_el0);
++}
++
++static u32 notrace sun50i_a64_read_cntp_tval_el0(void)
++{
++	return read_sysreg(cntp_cval_el0) - sun50i_a64_read_cntpct_el0();
++}
++
++static u32 notrace sun50i_a64_read_cntv_tval_el0(void)
++{
++	return read_sysreg(cntv_cval_el0) - sun50i_a64_read_cntvct_el0();
++}
++#endif
++
+ #ifdef CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND
+ DEFINE_PER_CPU(const struct arch_timer_erratum_workaround *, timer_unstable_counter_workaround);
+ EXPORT_SYMBOL_GPL(timer_unstable_counter_workaround);
+@@ -423,6 +465,19 @@ static const struct arch_timer_erratum_workaround ool_workarounds[] = {
+ 		.read_cntvct_el0 = arm64_1188873_read_cntvct_el0,
+ 	},
+ #endif
++#ifdef CONFIG_SUN50I_ERRATUM_UNKNOWN1
++	{
++		.match_type = ate_match_dt,
++		.id = "allwinner,erratum-unknown1",
++		.desc = "Allwinner erratum UNKNOWN1",
++		.read_cntp_tval_el0 = sun50i_a64_read_cntp_tval_el0,
++		.read_cntv_tval_el0 = sun50i_a64_read_cntv_tval_el0,
++		.read_cntpct_el0 = sun50i_a64_read_cntpct_el0,
++		.read_cntvct_el0 = sun50i_a64_read_cntvct_el0,
++		.set_next_event_phys = erratum_set_next_event_tval_phys,
++		.set_next_event_virt = erratum_set_next_event_tval_virt,
++	},
++#endif
+ };
+ 
+ typedef bool (*ate_match_fn_t)(const struct arch_timer_erratum_workaround *,
+diff --git a/drivers/clocksource/exynos_mct.c b/drivers/clocksource/exynos_mct.c
+index 7a244b681876..d55c30f6981d 100644
+--- a/drivers/clocksource/exynos_mct.c
++++ b/drivers/clocksource/exynos_mct.c
+@@ -388,6 +388,13 @@ static void exynos4_mct_tick_start(unsigned long cycles,
+ 	exynos4_mct_write(tmp, mevt->base + MCT_L_TCON_OFFSET);
+ }
+ 
++static void exynos4_mct_tick_clear(struct mct_clock_event_device *mevt)
++{
++	/* Clear the MCT tick interrupt */
++	if (readl_relaxed(reg_base + mevt->base + MCT_L_INT_CSTAT_OFFSET) & 1)
++		exynos4_mct_write(0x1, mevt->base + MCT_L_INT_CSTAT_OFFSET);
++}
++
+ static int exynos4_tick_set_next_event(unsigned long cycles,
+ 				       struct clock_event_device *evt)
+ {
+@@ -404,6 +411,7 @@ static int set_state_shutdown(struct clock_event_device *evt)
+ 
+ 	mevt = container_of(evt, struct mct_clock_event_device, evt);
+ 	exynos4_mct_tick_stop(mevt);
++	exynos4_mct_tick_clear(mevt);
+ 	return 0;
+ }
+ 
+@@ -420,8 +428,11 @@ static int set_state_periodic(struct clock_event_device *evt)
+ 	return 0;
+ }
+ 
+-static void exynos4_mct_tick_clear(struct mct_clock_event_device *mevt)
++static irqreturn_t exynos4_mct_tick_isr(int irq, void *dev_id)
+ {
++	struct mct_clock_event_device *mevt = dev_id;
++	struct clock_event_device *evt = &mevt->evt;
++
+ 	/*
+ 	 * This is for supporting oneshot mode.
+ 	 * Mct would generate interrupt periodically
+@@ -430,16 +441,6 @@ static void exynos4_mct_tick_clear(struct mct_clock_event_device *mevt)
+ 	if (!clockevent_state_periodic(&mevt->evt))
+ 		exynos4_mct_tick_stop(mevt);
+ 
+-	/* Clear the MCT tick interrupt */
+-	if (readl_relaxed(reg_base + mevt->base + MCT_L_INT_CSTAT_OFFSET) & 1)
+-		exynos4_mct_write(0x1, mevt->base + MCT_L_INT_CSTAT_OFFSET);
+-}
+-
+-static irqreturn_t exynos4_mct_tick_isr(int irq, void *dev_id)
+-{
+-	struct mct_clock_event_device *mevt = dev_id;
+-	struct clock_event_device *evt = &mevt->evt;
+-
+ 	exynos4_mct_tick_clear(mevt);
+ 
+ 	evt->event_handler(evt);
+diff --git a/drivers/cpufreq/pxa2xx-cpufreq.c b/drivers/cpufreq/pxa2xx-cpufreq.c
+index 46254e583982..74e0e0c20c46 100644
+--- a/drivers/cpufreq/pxa2xx-cpufreq.c
++++ b/drivers/cpufreq/pxa2xx-cpufreq.c
+@@ -143,7 +143,7 @@ static int pxa_cpufreq_change_voltage(const struct pxa_freqs *pxa_freq)
+ 	return ret;
+ }
+ 
+-static void __init pxa_cpufreq_init_voltages(void)
++static void pxa_cpufreq_init_voltages(void)
+ {
+ 	vcc_core = regulator_get(NULL, "vcc_core");
+ 	if (IS_ERR(vcc_core)) {
+@@ -159,7 +159,7 @@ static int pxa_cpufreq_change_voltage(const struct pxa_freqs *pxa_freq)
+ 	return 0;
+ }
+ 
+-static void __init pxa_cpufreq_init_voltages(void) { }
++static void pxa_cpufreq_init_voltages(void) { }
+ #endif
+ 
+ static void find_freq_tables(struct cpufreq_frequency_table **freq_table,
+diff --git a/drivers/cpufreq/qcom-cpufreq-kryo.c b/drivers/cpufreq/qcom-cpufreq-kryo.c
+index 2a3675c24032..a472b814058f 100644
+--- a/drivers/cpufreq/qcom-cpufreq-kryo.c
++++ b/drivers/cpufreq/qcom-cpufreq-kryo.c
+@@ -75,7 +75,7 @@ static enum _msm8996_version qcom_cpufreq_kryo_get_msm_id(void)
+ 
+ static int qcom_cpufreq_kryo_probe(struct platform_device *pdev)
+ {
+-	struct opp_table *opp_tables[NR_CPUS] = {0};
++	struct opp_table **opp_tables;
+ 	enum _msm8996_version msm8996_version;
+ 	struct nvmem_cell *speedbin_nvmem;
+ 	struct device_node *np;
+@@ -133,6 +133,10 @@ static int qcom_cpufreq_kryo_probe(struct platform_device *pdev)
+ 	}
+ 	kfree(speedbin);
+ 
++	opp_tables = kcalloc(num_possible_cpus(), sizeof(*opp_tables), GFP_KERNEL);
++	if (!opp_tables)
++		return -ENOMEM;
++
+ 	for_each_possible_cpu(cpu) {
+ 		cpu_dev = get_cpu_device(cpu);
+ 		if (NULL == cpu_dev) {
+@@ -151,8 +155,10 @@ static int qcom_cpufreq_kryo_probe(struct platform_device *pdev)
+ 
+ 	cpufreq_dt_pdev = platform_device_register_simple("cpufreq-dt", -1,
+ 							  NULL, 0);
+-	if (!IS_ERR(cpufreq_dt_pdev))
++	if (!IS_ERR(cpufreq_dt_pdev)) {
++		platform_set_drvdata(pdev, opp_tables);
+ 		return 0;
++	}
+ 
+ 	ret = PTR_ERR(cpufreq_dt_pdev);
+ 	dev_err(cpu_dev, "Failed to register platform device\n");
+@@ -163,13 +169,23 @@ free_opp:
+ 			break;
+ 		dev_pm_opp_put_supported_hw(opp_tables[cpu]);
+ 	}
++	kfree(opp_tables);
+ 
+ 	return ret;
+ }
+ 
+ static int qcom_cpufreq_kryo_remove(struct platform_device *pdev)
+ {
++	struct opp_table **opp_tables = platform_get_drvdata(pdev);
++	unsigned int cpu;
++
+ 	platform_device_unregister(cpufreq_dt_pdev);
++
++	for_each_possible_cpu(cpu)
++		dev_pm_opp_put_supported_hw(opp_tables[cpu]);
++
++	kfree(opp_tables);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/cpufreq/tegra124-cpufreq.c b/drivers/cpufreq/tegra124-cpufreq.c
+index 43530254201a..4bb154f6c54c 100644
+--- a/drivers/cpufreq/tegra124-cpufreq.c
++++ b/drivers/cpufreq/tegra124-cpufreq.c
+@@ -134,6 +134,8 @@ static int tegra124_cpufreq_probe(struct platform_device *pdev)
+ 
+ 	platform_set_drvdata(pdev, priv);
+ 
++	of_node_put(np);
++
+ 	return 0;
+ 
+ out_switch_to_pllx:
+diff --git a/drivers/cpuidle/governor.c b/drivers/cpuidle/governor.c
+index bb93e5cf6a4a..9fddf828a76f 100644
+--- a/drivers/cpuidle/governor.c
++++ b/drivers/cpuidle/governor.c
+@@ -89,6 +89,7 @@ int cpuidle_register_governor(struct cpuidle_governor *gov)
+ 	mutex_lock(&cpuidle_lock);
+ 	if (__cpuidle_find_governor(gov->name) == NULL) {
+ 		ret = 0;
++		list_add_tail(&gov->governor_list, &cpuidle_governors);
+ 		if (!cpuidle_curr_governor ||
+ 		    !strncasecmp(param_governor, gov->name, CPUIDLE_NAME_LEN) ||
+ 		    (cpuidle_curr_governor->rating < gov->rating &&
+diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
+index 80ae69f906fb..1c4f3a046dc5 100644
+--- a/drivers/crypto/caam/caamalg.c
++++ b/drivers/crypto/caam/caamalg.c
+@@ -1040,6 +1040,7 @@ static void init_aead_job(struct aead_request *req,
+ 	if (unlikely(req->src != req->dst)) {
+ 		if (edesc->dst_nents == 1) {
+ 			dst_dma = sg_dma_address(req->dst);
++			out_options = 0;
+ 		} else {
+ 			dst_dma = edesc->sec4_sg_dma +
+ 				  sec4_sg_index *
+diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
+index bb1a2cdf1951..0f11811a3585 100644
+--- a/drivers/crypto/caam/caamhash.c
++++ b/drivers/crypto/caam/caamhash.c
+@@ -113,6 +113,7 @@ struct caam_hash_ctx {
+ struct caam_hash_state {
+ 	dma_addr_t buf_dma;
+ 	dma_addr_t ctx_dma;
++	int ctx_dma_len;
+ 	u8 buf_0[CAAM_MAX_HASH_BLOCK_SIZE] ____cacheline_aligned;
+ 	int buflen_0;
+ 	u8 buf_1[CAAM_MAX_HASH_BLOCK_SIZE] ____cacheline_aligned;
+@@ -165,6 +166,7 @@ static inline int map_seq_out_ptr_ctx(u32 *desc, struct device *jrdev,
+ 				      struct caam_hash_state *state,
+ 				      int ctx_len)
+ {
++	state->ctx_dma_len = ctx_len;
+ 	state->ctx_dma = dma_map_single(jrdev, state->caam_ctx,
+ 					ctx_len, DMA_FROM_DEVICE);
+ 	if (dma_mapping_error(jrdev, state->ctx_dma)) {
+@@ -178,18 +180,6 @@ static inline int map_seq_out_ptr_ctx(u32 *desc, struct device *jrdev,
+ 	return 0;
+ }
+ 
+-/* Map req->result, and append seq_out_ptr command that points to it */
+-static inline dma_addr_t map_seq_out_ptr_result(u32 *desc, struct device *jrdev,
+-						u8 *result, int digestsize)
+-{
+-	dma_addr_t dst_dma;
+-
+-	dst_dma = dma_map_single(jrdev, result, digestsize, DMA_FROM_DEVICE);
+-	append_seq_out_ptr(desc, dst_dma, digestsize, 0);
+-
+-	return dst_dma;
+-}
+-
+ /* Map current buffer in state (if length > 0) and put it in link table */
+ static inline int buf_map_to_sec4_sg(struct device *jrdev,
+ 				     struct sec4_sg_entry *sec4_sg,
+@@ -218,6 +208,7 @@ static inline int ctx_map_to_sec4_sg(struct device *jrdev,
+ 				     struct caam_hash_state *state, int ctx_len,
+ 				     struct sec4_sg_entry *sec4_sg, u32 flag)
+ {
++	state->ctx_dma_len = ctx_len;
+ 	state->ctx_dma = dma_map_single(jrdev, state->caam_ctx, ctx_len, flag);
+ 	if (dma_mapping_error(jrdev, state->ctx_dma)) {
+ 		dev_err(jrdev, "unable to map ctx\n");
+@@ -426,7 +417,6 @@ static int ahash_setkey(struct crypto_ahash *ahash,
+ 
+ /*
+  * ahash_edesc - s/w-extended ahash descriptor
+- * @dst_dma: physical mapped address of req->result
+  * @sec4_sg_dma: physical mapped address of h/w link table
+  * @src_nents: number of segments in input scatterlist
+  * @sec4_sg_bytes: length of dma mapped sec4_sg space
+@@ -434,7 +424,6 @@ static int ahash_setkey(struct crypto_ahash *ahash,
+  * @sec4_sg: h/w link table
+  */
+ struct ahash_edesc {
+-	dma_addr_t dst_dma;
+ 	dma_addr_t sec4_sg_dma;
+ 	int src_nents;
+ 	int sec4_sg_bytes;
+@@ -450,8 +439,6 @@ static inline void ahash_unmap(struct device *dev,
+ 
+ 	if (edesc->src_nents)
+ 		dma_unmap_sg(dev, req->src, edesc->src_nents, DMA_TO_DEVICE);
+-	if (edesc->dst_dma)
+-		dma_unmap_single(dev, edesc->dst_dma, dst_len, DMA_FROM_DEVICE);
+ 
+ 	if (edesc->sec4_sg_bytes)
+ 		dma_unmap_single(dev, edesc->sec4_sg_dma,
+@@ -468,12 +455,10 @@ static inline void ahash_unmap_ctx(struct device *dev,
+ 			struct ahash_edesc *edesc,
+ 			struct ahash_request *req, int dst_len, u32 flag)
+ {
+-	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
+-	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+ 	struct caam_hash_state *state = ahash_request_ctx(req);
+ 
+ 	if (state->ctx_dma) {
+-		dma_unmap_single(dev, state->ctx_dma, ctx->ctx_len, flag);
++		dma_unmap_single(dev, state->ctx_dma, state->ctx_dma_len, flag);
+ 		state->ctx_dma = 0;
+ 	}
+ 	ahash_unmap(dev, edesc, req, dst_len);
+@@ -486,9 +471,9 @@ static void ahash_done(struct device *jrdev, u32 *desc, u32 err,
+ 	struct ahash_edesc *edesc;
+ 	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
+ 	int digestsize = crypto_ahash_digestsize(ahash);
++	struct caam_hash_state *state = ahash_request_ctx(req);
+ #ifdef DEBUG
+ 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+-	struct caam_hash_state *state = ahash_request_ctx(req);
+ 
+ 	dev_err(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
+ #endif
+@@ -497,17 +482,14 @@ static void ahash_done(struct device *jrdev, u32 *desc, u32 err,
+ 	if (err)
+ 		caam_jr_strstatus(jrdev, err);
+ 
+-	ahash_unmap(jrdev, edesc, req, digestsize);
++	ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
++	memcpy(req->result, state->caam_ctx, digestsize);
+ 	kfree(edesc);
+ 
+ #ifdef DEBUG
+ 	print_hex_dump(KERN_ERR, "ctx@"__stringify(__LINE__)": ",
+ 		       DUMP_PREFIX_ADDRESS, 16, 4, state->caam_ctx,
+ 		       ctx->ctx_len, 1);
+-	if (req->result)
+-		print_hex_dump(KERN_ERR, "result@"__stringify(__LINE__)": ",
+-			       DUMP_PREFIX_ADDRESS, 16, 4, req->result,
+-			       digestsize, 1);
+ #endif
+ 
+ 	req->base.complete(&req->base, err);
+@@ -555,9 +537,9 @@ static void ahash_done_ctx_src(struct device *jrdev, u32 *desc, u32 err,
+ 	struct ahash_edesc *edesc;
+ 	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
+ 	int digestsize = crypto_ahash_digestsize(ahash);
++	struct caam_hash_state *state = ahash_request_ctx(req);
+ #ifdef DEBUG
+ 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+-	struct caam_hash_state *state = ahash_request_ctx(req);
+ 
+ 	dev_err(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
+ #endif
+@@ -566,17 +548,14 @@ static void ahash_done_ctx_src(struct device *jrdev, u32 *desc, u32 err,
+ 	if (err)
+ 		caam_jr_strstatus(jrdev, err);
+ 
+-	ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_TO_DEVICE);
++	ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL);
++	memcpy(req->result, state->caam_ctx, digestsize);
+ 	kfree(edesc);
+ 
+ #ifdef DEBUG
+ 	print_hex_dump(KERN_ERR, "ctx@"__stringify(__LINE__)": ",
+ 		       DUMP_PREFIX_ADDRESS, 16, 4, state->caam_ctx,
+ 		       ctx->ctx_len, 1);
+-	if (req->result)
+-		print_hex_dump(KERN_ERR, "result@"__stringify(__LINE__)": ",
+-			       DUMP_PREFIX_ADDRESS, 16, 4, req->result,
+-			       digestsize, 1);
+ #endif
+ 
+ 	req->base.complete(&req->base, err);
+@@ -837,7 +816,7 @@ static int ahash_final_ctx(struct ahash_request *req)
+ 	edesc->sec4_sg_bytes = sec4_sg_bytes;
+ 
+ 	ret = ctx_map_to_sec4_sg(jrdev, state, ctx->ctx_len,
+-				 edesc->sec4_sg, DMA_TO_DEVICE);
++				 edesc->sec4_sg, DMA_BIDIRECTIONAL);
+ 	if (ret)
+ 		goto unmap_ctx;
+ 
+@@ -857,14 +836,7 @@ static int ahash_final_ctx(struct ahash_request *req)
+ 
+ 	append_seq_in_ptr(desc, edesc->sec4_sg_dma, ctx->ctx_len + buflen,
+ 			  LDST_SGF);
+-
+-	edesc->dst_dma = map_seq_out_ptr_result(desc, jrdev, req->result,
+-						digestsize);
+-	if (dma_mapping_error(jrdev, edesc->dst_dma)) {
+-		dev_err(jrdev, "unable to map dst\n");
+-		ret = -ENOMEM;
+-		goto unmap_ctx;
+-	}
++	append_seq_out_ptr(desc, state->ctx_dma, digestsize, 0);
+ 
+ #ifdef DEBUG
+ 	print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
+@@ -877,7 +849,7 @@ static int ahash_final_ctx(struct ahash_request *req)
+ 
+ 	return -EINPROGRESS;
+  unmap_ctx:
+-	ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
++	ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL);
+ 	kfree(edesc);
+ 	return ret;
+ }
+@@ -931,7 +903,7 @@ static int ahash_finup_ctx(struct ahash_request *req)
+ 	edesc->src_nents = src_nents;
+ 
+ 	ret = ctx_map_to_sec4_sg(jrdev, state, ctx->ctx_len,
+-				 edesc->sec4_sg, DMA_TO_DEVICE);
++				 edesc->sec4_sg, DMA_BIDIRECTIONAL);
+ 	if (ret)
+ 		goto unmap_ctx;
+ 
+@@ -945,13 +917,7 @@ static int ahash_finup_ctx(struct ahash_request *req)
+ 	if (ret)
+ 		goto unmap_ctx;
+ 
+-	edesc->dst_dma = map_seq_out_ptr_result(desc, jrdev, req->result,
+-						digestsize);
+-	if (dma_mapping_error(jrdev, edesc->dst_dma)) {
+-		dev_err(jrdev, "unable to map dst\n");
+-		ret = -ENOMEM;
+-		goto unmap_ctx;
+-	}
++	append_seq_out_ptr(desc, state->ctx_dma, digestsize, 0);
+ 
+ #ifdef DEBUG
+ 	print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
+@@ -964,7 +930,7 @@ static int ahash_finup_ctx(struct ahash_request *req)
+ 
+ 	return -EINPROGRESS;
+  unmap_ctx:
+-	ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
++	ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL);
+ 	kfree(edesc);
+ 	return ret;
+ }
+@@ -1023,10 +989,8 @@ static int ahash_digest(struct ahash_request *req)
+ 
+ 	desc = edesc->hw_desc;
+ 
+-	edesc->dst_dma = map_seq_out_ptr_result(desc, jrdev, req->result,
+-						digestsize);
+-	if (dma_mapping_error(jrdev, edesc->dst_dma)) {
+-		dev_err(jrdev, "unable to map dst\n");
++	ret = map_seq_out_ptr_ctx(desc, jrdev, state, digestsize);
++	if (ret) {
+ 		ahash_unmap(jrdev, edesc, req, digestsize);
+ 		kfree(edesc);
+ 		return -ENOMEM;
+@@ -1041,7 +1005,7 @@ static int ahash_digest(struct ahash_request *req)
+ 	if (!ret) {
+ 		ret = -EINPROGRESS;
+ 	} else {
+-		ahash_unmap(jrdev, edesc, req, digestsize);
++		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
+ 		kfree(edesc);
+ 	}
+ 
+@@ -1083,12 +1047,9 @@ static int ahash_final_no_ctx(struct ahash_request *req)
+ 		append_seq_in_ptr(desc, state->buf_dma, buflen, 0);
+ 	}
+ 
+-	edesc->dst_dma = map_seq_out_ptr_result(desc, jrdev, req->result,
+-						digestsize);
+-	if (dma_mapping_error(jrdev, edesc->dst_dma)) {
+-		dev_err(jrdev, "unable to map dst\n");
++	ret = map_seq_out_ptr_ctx(desc, jrdev, state, digestsize);
++	if (ret)
+ 		goto unmap;
+-	}
+ 
+ #ifdef DEBUG
+ 	print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
+@@ -1099,7 +1060,7 @@ static int ahash_final_no_ctx(struct ahash_request *req)
+ 	if (!ret) {
+ 		ret = -EINPROGRESS;
+ 	} else {
+-		ahash_unmap(jrdev, edesc, req, digestsize);
++		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
+ 		kfree(edesc);
+ 	}
+ 
+@@ -1298,12 +1259,9 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
+ 		goto unmap;
+ 	}
+ 
+-	edesc->dst_dma = map_seq_out_ptr_result(desc, jrdev, req->result,
+-						digestsize);
+-	if (dma_mapping_error(jrdev, edesc->dst_dma)) {
+-		dev_err(jrdev, "unable to map dst\n");
++	ret = map_seq_out_ptr_ctx(desc, jrdev, state, digestsize);
++	if (ret)
+ 		goto unmap;
+-	}
+ 
+ #ifdef DEBUG
+ 	print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
+@@ -1314,7 +1272,7 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
+ 	if (!ret) {
+ 		ret = -EINPROGRESS;
+ 	} else {
+-		ahash_unmap(jrdev, edesc, req, digestsize);
++		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
+ 		kfree(edesc);
+ 	}
+ 
+@@ -1446,6 +1404,7 @@ static int ahash_init(struct ahash_request *req)
+ 	state->final = ahash_final_no_ctx;
+ 
+ 	state->ctx_dma = 0;
++	state->ctx_dma_len = 0;
+ 	state->current_buf = 0;
+ 	state->buf_dma = 0;
+ 	state->buflen_0 = 0;
+diff --git a/drivers/crypto/ccree/cc_buffer_mgr.c b/drivers/crypto/ccree/cc_buffer_mgr.c
+index dd948e1df9e5..3bcb6bce666e 100644
+--- a/drivers/crypto/ccree/cc_buffer_mgr.c
++++ b/drivers/crypto/ccree/cc_buffer_mgr.c
+@@ -614,10 +614,10 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req)
+ 				 hw_iv_size, DMA_BIDIRECTIONAL);
+ 	}
+ 
+-	/*In case a pool was set, a table was
+-	 *allocated and should be released
+-	 */
+-	if (areq_ctx->mlli_params.curr_pool) {
++	/* Release pool */
++	if ((areq_ctx->assoc_buff_type == CC_DMA_BUF_MLLI ||
++	     areq_ctx->data_buff_type == CC_DMA_BUF_MLLI) &&
++	    (areq_ctx->mlli_params.mlli_virt_addr)) {
+ 		dev_dbg(dev, "free MLLI buffer: dma=%pad virt=%pK\n",
+ 			&areq_ctx->mlli_params.mlli_dma_addr,
+ 			areq_ctx->mlli_params.mlli_virt_addr);
+diff --git a/drivers/crypto/ccree/cc_cipher.c b/drivers/crypto/ccree/cc_cipher.c
+index cc92b031fad1..4ec93079daaf 100644
+--- a/drivers/crypto/ccree/cc_cipher.c
++++ b/drivers/crypto/ccree/cc_cipher.c
+@@ -80,6 +80,7 @@ static int validate_keys_sizes(struct cc_cipher_ctx *ctx_p, u32 size)
+ 		default:
+ 			break;
+ 		}
++		break;
+ 	case S_DIN_to_DES:
+ 		if (size == DES3_EDE_KEY_SIZE || size == DES_KEY_SIZE)
+ 			return 0;
+@@ -652,6 +653,8 @@ static void cc_cipher_complete(struct device *dev, void *cc_req, int err)
+ 	unsigned int ivsize = crypto_skcipher_ivsize(sk_tfm);
+ 	unsigned int len;
+ 
++	cc_unmap_cipher_request(dev, req_ctx, ivsize, src, dst);
++
+ 	switch (ctx_p->cipher_mode) {
+ 	case DRV_CIPHER_CBC:
+ 		/*
+@@ -681,7 +684,6 @@ static void cc_cipher_complete(struct device *dev, void *cc_req, int err)
+ 		break;
+ 	}
+ 
+-	cc_unmap_cipher_request(dev, req_ctx, ivsize, src, dst);
+ 	kzfree(req_ctx->iv);
+ 
+ 	skcipher_request_complete(req, err);
+@@ -799,7 +801,8 @@ static int cc_cipher_decrypt(struct skcipher_request *req)
+ 
+ 	memset(req_ctx, 0, sizeof(*req_ctx));
+ 
+-	if (ctx_p->cipher_mode == DRV_CIPHER_CBC) {
++	if ((ctx_p->cipher_mode == DRV_CIPHER_CBC) &&
++	    (req->cryptlen >= ivsize)) {
+ 
+ 		/* Allocate and save the last IV sized bytes of the source,
+ 		 * which will be lost in case of in-place decryption.
+diff --git a/drivers/crypto/rockchip/rk3288_crypto.c b/drivers/crypto/rockchip/rk3288_crypto.c
+index c9d622abd90c..0ce4a65b95f5 100644
+--- a/drivers/crypto/rockchip/rk3288_crypto.c
++++ b/drivers/crypto/rockchip/rk3288_crypto.c
+@@ -119,7 +119,7 @@ static int rk_load_data(struct rk_crypto_info *dev,
+ 		count = (dev->left_bytes > PAGE_SIZE) ?
+ 			PAGE_SIZE : dev->left_bytes;
+ 
+-		if (!sg_pcopy_to_buffer(dev->first, dev->nents,
++		if (!sg_pcopy_to_buffer(dev->first, dev->src_nents,
+ 					dev->addr_vir, count,
+ 					dev->total - dev->left_bytes)) {
+ 			dev_err(dev->dev, "[%s:%d] pcopy err\n",
+diff --git a/drivers/crypto/rockchip/rk3288_crypto.h b/drivers/crypto/rockchip/rk3288_crypto.h
+index d5fb4013fb42..54ee5b3ed9db 100644
+--- a/drivers/crypto/rockchip/rk3288_crypto.h
++++ b/drivers/crypto/rockchip/rk3288_crypto.h
+@@ -207,7 +207,8 @@ struct rk_crypto_info {
+ 	void				*addr_vir;
+ 	int				aligned;
+ 	int				align_size;
+-	size_t				nents;
++	size_t				src_nents;
++	size_t				dst_nents;
+ 	unsigned int			total;
+ 	unsigned int			count;
+ 	dma_addr_t			addr_in;
+@@ -244,6 +245,7 @@ struct rk_cipher_ctx {
+ 	struct rk_crypto_info		*dev;
+ 	unsigned int			keylen;
+ 	u32				mode;
++	u8				iv[AES_BLOCK_SIZE];
+ };
+ 
+ enum alg_type {
+diff --git a/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c b/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c
+index 639c15c5364b..23305f22072f 100644
+--- a/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c
++++ b/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c
+@@ -242,6 +242,17 @@ static void crypto_dma_start(struct rk_crypto_info *dev)
+ static int rk_set_data_start(struct rk_crypto_info *dev)
+ {
+ 	int err;
++	struct ablkcipher_request *req =
++		ablkcipher_request_cast(dev->async_req);
++	struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req);
++	struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm);
++	u32 ivsize = crypto_ablkcipher_ivsize(tfm);
++	u8 *src_last_blk = page_address(sg_page(dev->sg_src)) +
++		dev->sg_src->offset + dev->sg_src->length - ivsize;
++
++	/* store the iv that need to be updated in chain mode */
++	if (ctx->mode & RK_CRYPTO_DEC)
++		memcpy(ctx->iv, src_last_blk, ivsize);
+ 
+ 	err = dev->load_data(dev, dev->sg_src, dev->sg_dst);
+ 	if (!err)
+@@ -260,8 +271,9 @@ static int rk_ablk_start(struct rk_crypto_info *dev)
+ 	dev->total = req->nbytes;
+ 	dev->sg_src = req->src;
+ 	dev->first = req->src;
+-	dev->nents = sg_nents(req->src);
++	dev->src_nents = sg_nents(req->src);
+ 	dev->sg_dst = req->dst;
++	dev->dst_nents = sg_nents(req->dst);
+ 	dev->aligned = 1;
+ 
+ 	spin_lock_irqsave(&dev->lock, flags);
+@@ -285,6 +297,28 @@ static void rk_iv_copyback(struct rk_crypto_info *dev)
+ 		memcpy_fromio(req->info, dev->reg + RK_CRYPTO_AES_IV_0, ivsize);
+ }
+ 
++static void rk_update_iv(struct rk_crypto_info *dev)
++{
++	struct ablkcipher_request *req =
++		ablkcipher_request_cast(dev->async_req);
++	struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req);
++	struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm);
++	u32 ivsize = crypto_ablkcipher_ivsize(tfm);
++	u8 *new_iv = NULL;
++
++	if (ctx->mode & RK_CRYPTO_DEC) {
++		new_iv = ctx->iv;
++	} else {
++		new_iv = page_address(sg_page(dev->sg_dst)) +
++			 dev->sg_dst->offset + dev->sg_dst->length - ivsize;
++	}
++
++	if (ivsize == DES_BLOCK_SIZE)
++		memcpy_toio(dev->reg + RK_CRYPTO_TDES_IV_0, new_iv, ivsize);
++	else if (ivsize == AES_BLOCK_SIZE)
++		memcpy_toio(dev->reg + RK_CRYPTO_AES_IV_0, new_iv, ivsize);
++}
++
+ /* return:
+  *	true	some err was occurred
+  *	fault	no err, continue
+@@ -297,7 +331,7 @@ static int rk_ablk_rx(struct rk_crypto_info *dev)
+ 
+ 	dev->unload_data(dev);
+ 	if (!dev->aligned) {
+-		if (!sg_pcopy_from_buffer(req->dst, dev->nents,
++		if (!sg_pcopy_from_buffer(req->dst, dev->dst_nents,
+ 					  dev->addr_vir, dev->count,
+ 					  dev->total - dev->left_bytes -
+ 					  dev->count)) {
+@@ -306,6 +340,7 @@ static int rk_ablk_rx(struct rk_crypto_info *dev)
+ 		}
+ 	}
+ 	if (dev->left_bytes) {
++		rk_update_iv(dev);
+ 		if (dev->aligned) {
+ 			if (sg_is_last(dev->sg_src)) {
+ 				dev_err(dev->dev, "[%s:%d] Lack of data\n",
+diff --git a/drivers/crypto/rockchip/rk3288_crypto_ahash.c b/drivers/crypto/rockchip/rk3288_crypto_ahash.c
+index 821a506b9e17..c336ae75e361 100644
+--- a/drivers/crypto/rockchip/rk3288_crypto_ahash.c
++++ b/drivers/crypto/rockchip/rk3288_crypto_ahash.c
+@@ -206,7 +206,7 @@ static int rk_ahash_start(struct rk_crypto_info *dev)
+ 	dev->sg_dst = NULL;
+ 	dev->sg_src = req->src;
+ 	dev->first = req->src;
+-	dev->nents = sg_nents(req->src);
++	dev->src_nents = sg_nents(req->src);
+ 	rctx = ahash_request_ctx(req);
+ 	rctx->mode = 0;
+ 
+diff --git a/drivers/dma/sh/usb-dmac.c b/drivers/dma/sh/usb-dmac.c
+index 7f7184c3cf95..59403f6d008a 100644
+--- a/drivers/dma/sh/usb-dmac.c
++++ b/drivers/dma/sh/usb-dmac.c
+@@ -694,6 +694,8 @@ static int usb_dmac_runtime_resume(struct device *dev)
+ #endif /* CONFIG_PM */
+ 
+ static const struct dev_pm_ops usb_dmac_pm = {
++	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
++				      pm_runtime_force_resume)
+ 	SET_RUNTIME_PM_OPS(usb_dmac_runtime_suspend, usb_dmac_runtime_resume,
+ 			   NULL)
+ };
+diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
+index 0dc96419efe3..d8a985fc6a5d 100644
+--- a/drivers/gpio/gpio-pca953x.c
++++ b/drivers/gpio/gpio-pca953x.c
+@@ -587,7 +587,8 @@ static int pca953x_irq_set_type(struct irq_data *d, unsigned int type)
+ 
+ static void pca953x_irq_shutdown(struct irq_data *d)
+ {
+-	struct pca953x_chip *chip = irq_data_get_irq_chip_data(d);
++	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
++	struct pca953x_chip *chip = gpiochip_get_data(gc);
+ 	u8 mask = 1 << (d->hwirq % BANK_SZ);
+ 
+ 	chip->irq_trig_raise[d->hwirq / BANK_SZ] &= ~mask;
+diff --git a/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c b/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
+index 43e4a2be0fa6..57cc11d0e9a5 100644
+--- a/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
++++ b/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
+@@ -1355,12 +1355,12 @@ void dcn_bw_update_from_pplib(struct dc *dc)
+ 	struct dm_pp_clock_levels_with_voltage fclks = {0}, dcfclks = {0};
+ 	bool res;
+ 
+-	kernel_fpu_begin();
+-
+ 	/* TODO: This is not the proper way to obtain fabric_and_dram_bandwidth, should be min(fclk, memclk) */
+ 	res = dm_pp_get_clock_levels_by_type_with_voltage(
+ 			ctx, DM_PP_CLOCK_TYPE_FCLK, &fclks);
+ 
++	kernel_fpu_begin();
++
+ 	if (res)
+ 		res = verify_clock_values(&fclks);
+ 
+@@ -1379,9 +1379,13 @@ void dcn_bw_update_from_pplib(struct dc *dc)
+ 	} else
+ 		BREAK_TO_DEBUGGER();
+ 
++	kernel_fpu_end();
++
+ 	res = dm_pp_get_clock_levels_by_type_with_voltage(
+ 			ctx, DM_PP_CLOCK_TYPE_DCFCLK, &dcfclks);
+ 
++	kernel_fpu_begin();
++
+ 	if (res)
+ 		res = verify_clock_values(&dcfclks);
+ 
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+index c8f5c00dd1e7..86e3fb27c125 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+@@ -3491,14 +3491,14 @@ static int smu7_get_gpu_power(struct pp_hwmgr *hwmgr, u32 *query)
+ 
+ 	smum_send_msg_to_smc(hwmgr, PPSMC_MSG_PmStatusLogStart);
+ 	cgs_write_ind_register(hwmgr->device, CGS_IND_REG__SMC,
+-							ixSMU_PM_STATUS_94, 0);
++							ixSMU_PM_STATUS_95, 0);
+ 
+ 	for (i = 0; i < 10; i++) {
+-		mdelay(1);
++		mdelay(500);
+ 		smum_send_msg_to_smc(hwmgr, PPSMC_MSG_PmStatusLogSample);
+ 		tmp = cgs_read_ind_register(hwmgr->device,
+ 						CGS_IND_REG__SMC,
+-						ixSMU_PM_STATUS_94);
++						ixSMU_PM_STATUS_95);
+ 		if (tmp != 0)
+ 			break;
+ 	}
+diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
+index d73703a695e8..70fc8e356b18 100644
+--- a/drivers/gpu/drm/drm_fb_helper.c
++++ b/drivers/gpu/drm/drm_fb_helper.c
+@@ -3170,9 +3170,7 @@ static void drm_fbdev_client_unregister(struct drm_client_dev *client)
+ 
+ static int drm_fbdev_client_restore(struct drm_client_dev *client)
+ {
+-	struct drm_fb_helper *fb_helper = drm_fb_helper_from_client(client);
+-
+-	drm_fb_helper_restore_fbdev_mode_unlocked(fb_helper);
++	drm_fb_helper_lastclose(client->dev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/radeon/evergreen_cs.c b/drivers/gpu/drm/radeon/evergreen_cs.c
+index f471537c852f..1e14c6921454 100644
+--- a/drivers/gpu/drm/radeon/evergreen_cs.c
++++ b/drivers/gpu/drm/radeon/evergreen_cs.c
+@@ -1299,6 +1299,7 @@ static int evergreen_cs_handle_reg(struct radeon_cs_parser *p, u32 reg, u32 idx)
+ 			return -EINVAL;
+ 		}
+ 		ib[idx] += (u32)((reloc->gpu_offset >> 8) & 0xffffffff);
++		break;
+ 	case CB_TARGET_MASK:
+ 		track->cb_target_mask = radeon_get_ib_value(p, idx);
+ 		track->cb_dirty = true;
+diff --git a/drivers/hwtracing/intel_th/gth.c b/drivers/hwtracing/intel_th/gth.c
+index 8426b7970c14..cc287cf6eb29 100644
+--- a/drivers/hwtracing/intel_th/gth.c
++++ b/drivers/hwtracing/intel_th/gth.c
+@@ -607,6 +607,7 @@ static void intel_th_gth_unassign(struct intel_th_device *thdev,
+ {
+ 	struct gth_device *gth = dev_get_drvdata(&thdev->dev);
+ 	int port = othdev->output.port;
++	int master;
+ 
+ 	if (thdev->host_mode)
+ 		return;
+@@ -615,6 +616,9 @@ static void intel_th_gth_unassign(struct intel_th_device *thdev,
+ 	othdev->output.port = -1;
+ 	othdev->output.active = false;
+ 	gth->output[port].output = NULL;
++	for (master = 0; master < TH_CONFIGURABLE_MASTERS; master++)
++		if (gth->master[master] == port)
++			gth->master[master] = -1;
+ 	spin_unlock(&gth->gth_lock);
+ }
+ 
+diff --git a/drivers/hwtracing/stm/core.c b/drivers/hwtracing/stm/core.c
+index 93ce3aa740a9..c7ba8acfd4d5 100644
+--- a/drivers/hwtracing/stm/core.c
++++ b/drivers/hwtracing/stm/core.c
+@@ -244,6 +244,9 @@ static int find_free_channels(unsigned long *bitmap, unsigned int start,
+ 			;
+ 		if (i == width)
+ 			return pos;
++
++		/* step over [pos..pos+i) to continue search */
++		pos += i;
+ 	}
+ 
+ 	return -1;
+@@ -732,7 +735,7 @@ static int stm_char_policy_set_ioctl(struct stm_file *stmf, void __user *arg)
+ 	struct stm_device *stm = stmf->stm;
+ 	struct stp_policy_id *id;
+ 	char *ids[] = { NULL, NULL };
+-	int ret = -EINVAL;
++	int ret = -EINVAL, wlimit = 1;
+ 	u32 size;
+ 
+ 	if (stmf->output.nr_chans)
+@@ -760,8 +763,10 @@ static int stm_char_policy_set_ioctl(struct stm_file *stmf, void __user *arg)
+ 	if (id->__reserved_0 || id->__reserved_1)
+ 		goto err_free;
+ 
+-	if (id->width < 1 ||
+-	    id->width > PAGE_SIZE / stm->data->sw_mmiosz)
++	if (stm->data->sw_mmiosz)
++		wlimit = PAGE_SIZE / stm->data->sw_mmiosz;
++
++	if (id->width < 1 || id->width > wlimit)
+ 		goto err_free;
+ 
+ 	ids[0] = id->id;
+diff --git a/drivers/i2c/busses/i2c-tegra.c b/drivers/i2c/busses/i2c-tegra.c
+index c77adbbea0c7..e85dc8583896 100644
+--- a/drivers/i2c/busses/i2c-tegra.c
++++ b/drivers/i2c/busses/i2c-tegra.c
+@@ -118,6 +118,9 @@
+ #define I2C_MST_FIFO_STATUS_TX_MASK		0xff0000
+ #define I2C_MST_FIFO_STATUS_TX_SHIFT		16
+ 
++/* Packet header size in bytes */
++#define I2C_PACKET_HEADER_SIZE			12
++
+ /*
+  * msg_end_type: The bus control which need to be send at end of transfer.
+  * @MSG_END_STOP: Send stop pulse at end of transfer.
+@@ -836,12 +839,13 @@ static const struct i2c_algorithm tegra_i2c_algo = {
+ /* payload size is only 12 bit */
+ static const struct i2c_adapter_quirks tegra_i2c_quirks = {
+ 	.flags = I2C_AQ_NO_ZERO_LEN,
+-	.max_read_len = 4096,
+-	.max_write_len = 4096,
++	.max_read_len = SZ_4K,
++	.max_write_len = SZ_4K - I2C_PACKET_HEADER_SIZE,
+ };
+ 
+ static const struct i2c_adapter_quirks tegra194_i2c_quirks = {
+ 	.flags = I2C_AQ_NO_ZERO_LEN,
++	.max_write_len = SZ_64K - I2C_PACKET_HEADER_SIZE,
+ };
+ 
+ static const struct tegra_i2c_hw_feature tegra20_i2c_hw = {
+diff --git a/drivers/iio/adc/exynos_adc.c b/drivers/iio/adc/exynos_adc.c
+index fa2d2b5767f3..1ca2c4d39f87 100644
+--- a/drivers/iio/adc/exynos_adc.c
++++ b/drivers/iio/adc/exynos_adc.c
+@@ -115,6 +115,7 @@
+ #define MAX_ADC_V2_CHANNELS		10
+ #define MAX_ADC_V1_CHANNELS		8
+ #define MAX_EXYNOS3250_ADC_CHANNELS	2
++#define MAX_EXYNOS4212_ADC_CHANNELS	4
+ #define MAX_S5PV210_ADC_CHANNELS	10
+ 
+ /* Bit definitions common for ADC_V1 and ADC_V2 */
+@@ -271,6 +272,19 @@ static void exynos_adc_v1_start_conv(struct exynos_adc *info,
+ 	writel(con1 | ADC_CON_EN_START, ADC_V1_CON(info->regs));
+ }
+ 
++/* Exynos4212 and 4412 is like ADCv1 but with four channels only */
++static const struct exynos_adc_data exynos4212_adc_data = {
++	.num_channels	= MAX_EXYNOS4212_ADC_CHANNELS,
++	.mask		= ADC_DATX_MASK,	/* 12 bit ADC resolution */
++	.needs_adc_phy	= true,
++	.phy_offset	= EXYNOS_ADCV1_PHY_OFFSET,
++
++	.init_hw	= exynos_adc_v1_init_hw,
++	.exit_hw	= exynos_adc_v1_exit_hw,
++	.clear_irq	= exynos_adc_v1_clear_irq,
++	.start_conv	= exynos_adc_v1_start_conv,
++};
++
+ static const struct exynos_adc_data exynos_adc_v1_data = {
+ 	.num_channels	= MAX_ADC_V1_CHANNELS,
+ 	.mask		= ADC_DATX_MASK,	/* 12 bit ADC resolution */
+@@ -492,6 +506,9 @@ static const struct of_device_id exynos_adc_match[] = {
+ 	}, {
+ 		.compatible = "samsung,s5pv210-adc",
+ 		.data = &exynos_adc_s5pv210_data,
++	}, {
++		.compatible = "samsung,exynos4212-adc",
++		.data = &exynos4212_adc_data,
+ 	}, {
+ 		.compatible = "samsung,exynos-adc-v1",
+ 		.data = &exynos_adc_v1_data,
+@@ -929,7 +946,7 @@ static int exynos_adc_remove(struct platform_device *pdev)
+ 	struct iio_dev *indio_dev = platform_get_drvdata(pdev);
+ 	struct exynos_adc *info = iio_priv(indio_dev);
+ 
+-	if (IS_REACHABLE(CONFIG_INPUT)) {
++	if (IS_REACHABLE(CONFIG_INPUT) && info->input) {
+ 		free_irq(info->tsirq, info);
+ 		input_unregister_device(info->input);
+ 	}
+diff --git a/drivers/infiniband/hw/hfi1/hfi.h b/drivers/infiniband/hw/hfi1/hfi.h
+index 6db2276f5c13..15ec3e1feb09 100644
+--- a/drivers/infiniband/hw/hfi1/hfi.h
++++ b/drivers/infiniband/hw/hfi1/hfi.h
+@@ -1435,7 +1435,7 @@ void hfi1_init_pportdata(struct pci_dev *pdev, struct hfi1_pportdata *ppd,
+ 			 struct hfi1_devdata *dd, u8 hw_pidx, u8 port);
+ void hfi1_free_ctxtdata(struct hfi1_devdata *dd, struct hfi1_ctxtdata *rcd);
+ int hfi1_rcd_put(struct hfi1_ctxtdata *rcd);
+-void hfi1_rcd_get(struct hfi1_ctxtdata *rcd);
++int hfi1_rcd_get(struct hfi1_ctxtdata *rcd);
+ struct hfi1_ctxtdata *hfi1_rcd_get_by_index_safe(struct hfi1_devdata *dd,
+ 						 u16 ctxt);
+ struct hfi1_ctxtdata *hfi1_rcd_get_by_index(struct hfi1_devdata *dd, u16 ctxt);
+diff --git a/drivers/infiniband/hw/hfi1/init.c b/drivers/infiniband/hw/hfi1/init.c
+index 7835eb52e7c5..c532ceb0bb9a 100644
+--- a/drivers/infiniband/hw/hfi1/init.c
++++ b/drivers/infiniband/hw/hfi1/init.c
+@@ -215,12 +215,12 @@ static void hfi1_rcd_free(struct kref *kref)
+ 	struct hfi1_ctxtdata *rcd =
+ 		container_of(kref, struct hfi1_ctxtdata, kref);
+ 
+-	hfi1_free_ctxtdata(rcd->dd, rcd);
+-
+ 	spin_lock_irqsave(&rcd->dd->uctxt_lock, flags);
+ 	rcd->dd->rcd[rcd->ctxt] = NULL;
+ 	spin_unlock_irqrestore(&rcd->dd->uctxt_lock, flags);
+ 
++	hfi1_free_ctxtdata(rcd->dd, rcd);
++
+ 	kfree(rcd);
+ }
+ 
+@@ -243,10 +243,13 @@ int hfi1_rcd_put(struct hfi1_ctxtdata *rcd)
+  * @rcd: pointer to an initialized rcd data structure
+  *
+  * Use this to get a reference after the init.
++ *
++ * Return : reflect kref_get_unless_zero(), which returns non-zero on
++ * increment, otherwise 0.
+  */
+-void hfi1_rcd_get(struct hfi1_ctxtdata *rcd)
++int hfi1_rcd_get(struct hfi1_ctxtdata *rcd)
+ {
+-	kref_get(&rcd->kref);
++	return kref_get_unless_zero(&rcd->kref);
+ }
+ 
+ /**
+@@ -326,7 +329,8 @@ struct hfi1_ctxtdata *hfi1_rcd_get_by_index(struct hfi1_devdata *dd, u16 ctxt)
+ 	spin_lock_irqsave(&dd->uctxt_lock, flags);
+ 	if (dd->rcd[ctxt]) {
+ 		rcd = dd->rcd[ctxt];
+-		hfi1_rcd_get(rcd);
++		if (!hfi1_rcd_get(rcd))
++			rcd = NULL;
+ 	}
+ 	spin_unlock_irqrestore(&dd->uctxt_lock, flags);
+ 
+diff --git a/drivers/infiniband/sw/rdmavt/qp.c b/drivers/infiniband/sw/rdmavt/qp.c
+index c6cc3e4ab71d..c45b8359b389 100644
+--- a/drivers/infiniband/sw/rdmavt/qp.c
++++ b/drivers/infiniband/sw/rdmavt/qp.c
+@@ -2785,6 +2785,18 @@ again:
+ }
+ EXPORT_SYMBOL(rvt_copy_sge);
+ 
++static enum ib_wc_status loopback_qp_drop(struct rvt_ibport *rvp,
++					  struct rvt_qp *sqp)
++{
++	rvp->n_pkt_drops++;
++	/*
++	 * For RC, the requester would timeout and retry so
++	 * shortcut the timeouts and just signal too many retries.
++	 */
++	return sqp->ibqp.qp_type == IB_QPT_RC ?
++		IB_WC_RETRY_EXC_ERR : IB_WC_SUCCESS;
++}
++
+ /**
+  * ruc_loopback - handle UC and RC loopback requests
+  * @sqp: the sending QP
+@@ -2857,17 +2869,14 @@ again:
+ 	}
+ 	spin_unlock_irqrestore(&sqp->s_lock, flags);
+ 
+-	if (!qp || !(ib_rvt_state_ops[qp->state] & RVT_PROCESS_RECV_OK) ||
++	if (!qp) {
++		send_status = loopback_qp_drop(rvp, sqp);
++		goto serr_no_r_lock;
++	}
++	spin_lock_irqsave(&qp->r_lock, flags);
++	if (!(ib_rvt_state_ops[qp->state] & RVT_PROCESS_RECV_OK) ||
+ 	    qp->ibqp.qp_type != sqp->ibqp.qp_type) {
+-		rvp->n_pkt_drops++;
+-		/*
+-		 * For RC, the requester would timeout and retry so
+-		 * shortcut the timeouts and just signal too many retries.
+-		 */
+-		if (sqp->ibqp.qp_type == IB_QPT_RC)
+-			send_status = IB_WC_RETRY_EXC_ERR;
+-		else
+-			send_status = IB_WC_SUCCESS;
++		send_status = loopback_qp_drop(rvp, sqp);
+ 		goto serr;
+ 	}
+ 
+@@ -2893,18 +2902,8 @@ again:
+ 		goto send_comp;
+ 
+ 	case IB_WR_SEND_WITH_INV:
+-		if (!rvt_invalidate_rkey(qp, wqe->wr.ex.invalidate_rkey)) {
+-			wc.wc_flags = IB_WC_WITH_INVALIDATE;
+-			wc.ex.invalidate_rkey = wqe->wr.ex.invalidate_rkey;
+-		}
+-		goto send;
+-
+ 	case IB_WR_SEND_WITH_IMM:
+-		wc.wc_flags = IB_WC_WITH_IMM;
+-		wc.ex.imm_data = wqe->wr.ex.imm_data;
+-		/* FALLTHROUGH */
+ 	case IB_WR_SEND:
+-send:
+ 		ret = rvt_get_rwqe(qp, false);
+ 		if (ret < 0)
+ 			goto op_err;
+@@ -2912,6 +2911,22 @@ send:
+ 			goto rnr_nak;
+ 		if (wqe->length > qp->r_len)
+ 			goto inv_err;
++		switch (wqe->wr.opcode) {
++		case IB_WR_SEND_WITH_INV:
++			if (!rvt_invalidate_rkey(qp,
++						 wqe->wr.ex.invalidate_rkey)) {
++				wc.wc_flags = IB_WC_WITH_INVALIDATE;
++				wc.ex.invalidate_rkey =
++					wqe->wr.ex.invalidate_rkey;
++			}
++			break;
++		case IB_WR_SEND_WITH_IMM:
++			wc.wc_flags = IB_WC_WITH_IMM;
++			wc.ex.imm_data = wqe->wr.ex.imm_data;
++			break;
++		default:
++			break;
++		}
+ 		break;
+ 
+ 	case IB_WR_RDMA_WRITE_WITH_IMM:
+@@ -3041,6 +3056,7 @@ do_write:
+ 		     wqe->wr.send_flags & IB_SEND_SOLICITED);
+ 
+ send_comp:
++	spin_unlock_irqrestore(&qp->r_lock, flags);
+ 	spin_lock_irqsave(&sqp->s_lock, flags);
+ 	rvp->n_loop_pkts++;
+ flush_send:
+@@ -3067,6 +3083,7 @@ rnr_nak:
+ 	}
+ 	if (sqp->s_rnr_retry_cnt < 7)
+ 		sqp->s_rnr_retry--;
++	spin_unlock_irqrestore(&qp->r_lock, flags);
+ 	spin_lock_irqsave(&sqp->s_lock, flags);
+ 	if (!(ib_rvt_state_ops[sqp->state] & RVT_PROCESS_RECV_OK))
+ 		goto clr_busy;
+@@ -3095,6 +3112,8 @@ err:
+ 	rvt_rc_error(qp, wc.status);
+ 
+ serr:
++	spin_unlock_irqrestore(&qp->r_lock, flags);
++serr_no_r_lock:
+ 	spin_lock_irqsave(&sqp->s_lock, flags);
+ 	rvt_send_complete(sqp, wqe, send_status);
+ 	if (sqp->ibqp.qp_type == IB_QPT_RC) {
+diff --git a/drivers/irqchip/irq-brcmstb-l2.c b/drivers/irqchip/irq-brcmstb-l2.c
+index 0e65f609352e..83364fedbf0a 100644
+--- a/drivers/irqchip/irq-brcmstb-l2.c
++++ b/drivers/irqchip/irq-brcmstb-l2.c
+@@ -129,8 +129,9 @@ static void brcmstb_l2_intc_suspend(struct irq_data *d)
+ 	struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
+ 	struct irq_chip_type *ct = irq_data_get_chip_type(d);
+ 	struct brcmstb_l2_intc_data *b = gc->private;
++	unsigned long flags;
+ 
+-	irq_gc_lock(gc);
++	irq_gc_lock_irqsave(gc, flags);
+ 	/* Save the current mask */
+ 	b->saved_mask = irq_reg_readl(gc, ct->regs.mask);
+ 
+@@ -139,7 +140,7 @@ static void brcmstb_l2_intc_suspend(struct irq_data *d)
+ 		irq_reg_writel(gc, ~gc->wake_active, ct->regs.disable);
+ 		irq_reg_writel(gc, gc->wake_active, ct->regs.enable);
+ 	}
+-	irq_gc_unlock(gc);
++	irq_gc_unlock_irqrestore(gc, flags);
+ }
+ 
+ static void brcmstb_l2_intc_resume(struct irq_data *d)
+@@ -147,8 +148,9 @@ static void brcmstb_l2_intc_resume(struct irq_data *d)
+ 	struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
+ 	struct irq_chip_type *ct = irq_data_get_chip_type(d);
+ 	struct brcmstb_l2_intc_data *b = gc->private;
++	unsigned long flags;
+ 
+-	irq_gc_lock(gc);
++	irq_gc_lock_irqsave(gc, flags);
+ 	if (ct->chip.irq_ack) {
+ 		/* Clear unmasked non-wakeup interrupts */
+ 		irq_reg_writel(gc, ~b->saved_mask & ~gc->wake_active,
+@@ -158,7 +160,7 @@ static void brcmstb_l2_intc_resume(struct irq_data *d)
+ 	/* Restore the saved mask */
+ 	irq_reg_writel(gc, b->saved_mask, ct->regs.disable);
+ 	irq_reg_writel(gc, ~b->saved_mask, ct->regs.enable);
+-	irq_gc_unlock(gc);
++	irq_gc_unlock_irqrestore(gc, flags);
+ }
+ 
+ static int __init brcmstb_l2_intc_of_init(struct device_node *np,
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index c3aba3fc818d..f867d41b0aa1 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -1955,6 +1955,8 @@ static int its_alloc_tables(struct its_node *its)
+ 			indirect = its_parse_indirect_baser(its, baser,
+ 							    psz, &order,
+ 							    its->device_ids);
++			break;
++
+ 		case GITS_BASER_TYPE_VCPU:
+ 			indirect = its_parse_indirect_baser(its, baser,
+ 							    psz, &order,
+diff --git a/drivers/md/bcache/extents.c b/drivers/md/bcache/extents.c
+index 956004366699..886710043025 100644
+--- a/drivers/md/bcache/extents.c
++++ b/drivers/md/bcache/extents.c
+@@ -538,6 +538,7 @@ static bool bch_extent_bad(struct btree_keys *bk, const struct bkey *k)
+ {
+ 	struct btree *b = container_of(bk, struct btree, keys);
+ 	unsigned int i, stale;
++	char buf[80];
+ 
+ 	if (!KEY_PTRS(k) ||
+ 	    bch_extent_invalid(bk, k))
+@@ -547,19 +548,19 @@ static bool bch_extent_bad(struct btree_keys *bk, const struct bkey *k)
+ 		if (!ptr_available(b->c, k, i))
+ 			return true;
+ 
+-	if (!expensive_debug_checks(b->c) && KEY_DIRTY(k))
+-		return false;
+-
+ 	for (i = 0; i < KEY_PTRS(k); i++) {
+ 		stale = ptr_stale(b->c, k, i);
+ 
++		if (stale && KEY_DIRTY(k)) {
++			bch_extent_to_text(buf, sizeof(buf), k);
++			pr_info("stale dirty pointer, stale %u, key: %s",
++				stale, buf);
++		}
++
+ 		btree_bug_on(stale > BUCKET_GC_GEN_MAX, b,
+ 			     "key too stale: %i, need_gc %u",
+ 			     stale, b->c->need_gc);
+ 
+-		btree_bug_on(stale && KEY_DIRTY(k) && KEY_SIZE(k),
+-			     b, "stale dirty pointer");
+-
+ 		if (stale)
+ 			return true;
+ 
+diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
+index 15070412a32e..f101bfe8657a 100644
+--- a/drivers/md/bcache/request.c
++++ b/drivers/md/bcache/request.c
+@@ -392,10 +392,11 @@ static bool check_should_bypass(struct cached_dev *dc, struct bio *bio)
+ 
+ 	/*
+ 	 * Flag for bypass if the IO is for read-ahead or background,
+-	 * unless the read-ahead request is for metadata (eg, for gfs2).
++	 * unless the read-ahead request is for metadata
++	 * (eg, for gfs2 or xfs).
+ 	 */
+ 	if (bio->bi_opf & (REQ_RAHEAD|REQ_BACKGROUND) &&
+-	    !(bio->bi_opf & REQ_PRIO))
++	    !(bio->bi_opf & (REQ_META|REQ_PRIO)))
+ 		goto skip;
+ 
+ 	if (bio->bi_iter.bi_sector & (c->sb.block_size - 1) ||
+@@ -877,7 +878,7 @@ static int cached_dev_cache_miss(struct btree *b, struct search *s,
+ 	}
+ 
+ 	if (!(bio->bi_opf & REQ_RAHEAD) &&
+-	    !(bio->bi_opf & REQ_PRIO) &&
++	    !(bio->bi_opf & (REQ_META|REQ_PRIO)) &&
+ 	    s->iop.c->gc_stats.in_use < CUTOFF_CACHE_READA)
+ 		reada = min_t(sector_t, dc->readahead >> 9,
+ 			      get_capacity(bio->bi_disk) - bio_end_sector(bio));
+diff --git a/drivers/md/bcache/writeback.h b/drivers/md/bcache/writeback.h
+index 6a743d3bb338..4e4c6810dc3c 100644
+--- a/drivers/md/bcache/writeback.h
++++ b/drivers/md/bcache/writeback.h
+@@ -71,6 +71,9 @@ static inline bool should_writeback(struct cached_dev *dc, struct bio *bio,
+ 	    in_use > bch_cutoff_writeback_sync)
+ 		return false;
+ 
++	if (bio_op(bio) == REQ_OP_DISCARD)
++		return false;
++
+ 	if (dc->partial_stripes_expensive &&
+ 	    bcache_dev_stripe_dirty(dc, bio->bi_iter.bi_sector,
+ 				    bio_sectors(bio)))
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index 457200ca6287..2e823252d797 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -1368,8 +1368,8 @@ again:
+ 						checksums_ptr - checksums, !dio->write ? TAG_CMP : TAG_WRITE);
+ 			if (unlikely(r)) {
+ 				if (r > 0) {
+-					DMERR("Checksum failed at sector 0x%llx",
+-					      (unsigned long long)(sector - ((r + ic->tag_size - 1) / ic->tag_size)));
++					DMERR_LIMIT("Checksum failed at sector 0x%llx",
++						    (unsigned long long)(sector - ((r + ic->tag_size - 1) / ic->tag_size)));
+ 					r = -EILSEQ;
+ 					atomic64_inc(&ic->number_of_mismatches);
+ 				}
+@@ -1561,8 +1561,8 @@ retry_kmap:
+ 
+ 					integrity_sector_checksum(ic, logical_sector, mem + bv.bv_offset, checksums_onstack);
+ 					if (unlikely(memcmp(checksums_onstack, journal_entry_tag(ic, je), ic->tag_size))) {
+-						DMERR("Checksum failed when reading from journal, at sector 0x%llx",
+-						      (unsigned long long)logical_sector);
++						DMERR_LIMIT("Checksum failed when reading from journal, at sector 0x%llx",
++							    (unsigned long long)logical_sector);
+ 					}
+ 				}
+ #endif
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index ecef42bfe19d..3b6880dd648d 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -3939,6 +3939,8 @@ static int raid10_run(struct mddev *mddev)
+ 		set_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
+ 		mddev->sync_thread = md_register_thread(md_do_sync, mddev,
+ 							"reshape");
++		if (!mddev->sync_thread)
++			goto out_free_conf;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index cecea901ab8c..5b68f2d0da60 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -7402,6 +7402,8 @@ static int raid5_run(struct mddev *mddev)
+ 		set_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
+ 		mddev->sync_thread = md_register_thread(md_do_sync, mddev,
+ 							"reshape");
++		if (!mddev->sync_thread)
++			goto abort;
+ 	}
+ 
+ 	/* Ok, everything is just fine now */
+diff --git a/drivers/media/dvb-frontends/lgdt330x.c b/drivers/media/dvb-frontends/lgdt330x.c
+index 96807e134886..8abb1a510a81 100644
+--- a/drivers/media/dvb-frontends/lgdt330x.c
++++ b/drivers/media/dvb-frontends/lgdt330x.c
+@@ -783,7 +783,7 @@ static int lgdt3303_read_status(struct dvb_frontend *fe,
+ 
+ 		if ((buf[0] & 0x02) == 0x00)
+ 			*status |= FE_HAS_SYNC;
+-		if ((buf[0] & 0xfd) == 0x01)
++		if ((buf[0] & 0x01) == 0x01)
+ 			*status |= FE_HAS_VITERBI | FE_HAS_LOCK;
+ 		break;
+ 	default:
+diff --git a/drivers/media/i2c/cx25840/cx25840-core.c b/drivers/media/i2c/cx25840/cx25840-core.c
+index b168bf3635b6..8b0b8b5aa531 100644
+--- a/drivers/media/i2c/cx25840/cx25840-core.c
++++ b/drivers/media/i2c/cx25840/cx25840-core.c
+@@ -5216,8 +5216,9 @@ static int cx25840_probe(struct i2c_client *client,
+ 	 * those extra inputs. So, let's add it only when needed.
+ 	 */
+ 	state->pads[CX25840_PAD_INPUT].flags = MEDIA_PAD_FL_SINK;
++	state->pads[CX25840_PAD_INPUT].sig_type = PAD_SIGNAL_ANALOG;
+ 	state->pads[CX25840_PAD_VID_OUT].flags = MEDIA_PAD_FL_SOURCE;
+-	state->pads[CX25840_PAD_VBI_OUT].flags = MEDIA_PAD_FL_SOURCE;
++	state->pads[CX25840_PAD_VID_OUT].sig_type = PAD_SIGNAL_DV;
+ 	sd->entity.function = MEDIA_ENT_F_ATV_DECODER;
+ 
+ 	ret = media_entity_pads_init(&sd->entity, ARRAY_SIZE(state->pads),
+diff --git a/drivers/media/i2c/cx25840/cx25840-core.h b/drivers/media/i2c/cx25840/cx25840-core.h
+index c323b1af1f83..9efefa15d090 100644
+--- a/drivers/media/i2c/cx25840/cx25840-core.h
++++ b/drivers/media/i2c/cx25840/cx25840-core.h
+@@ -40,7 +40,6 @@ enum cx25840_model {
+ enum cx25840_media_pads {
+ 	CX25840_PAD_INPUT,
+ 	CX25840_PAD_VID_OUT,
+-	CX25840_PAD_VBI_OUT,
+ 
+ 	CX25840_NUM_PADS
+ };
+diff --git a/drivers/media/i2c/ov5640.c b/drivers/media/i2c/ov5640.c
+index bef3f3aae0ed..9f8fc1ad9b1a 100644
+--- a/drivers/media/i2c/ov5640.c
++++ b/drivers/media/i2c/ov5640.c
+@@ -1893,7 +1893,7 @@ static void ov5640_reset(struct ov5640_dev *sensor)
+ 	usleep_range(1000, 2000);
+ 
+ 	gpiod_set_value_cansleep(sensor->reset_gpio, 0);
+-	usleep_range(5000, 10000);
++	usleep_range(20000, 25000);
+ }
+ 
+ static int ov5640_set_power_on(struct ov5640_dev *sensor)
+diff --git a/drivers/media/platform/sunxi/sun6i-csi/sun6i_csi.c b/drivers/media/platform/sunxi/sun6i-csi/sun6i_csi.c
+index 6950585edb5a..d16f54cdc3b0 100644
+--- a/drivers/media/platform/sunxi/sun6i-csi/sun6i_csi.c
++++ b/drivers/media/platform/sunxi/sun6i-csi/sun6i_csi.c
+@@ -793,7 +793,7 @@ static const struct regmap_config sun6i_csi_regmap_config = {
+ 	.reg_bits       = 32,
+ 	.reg_stride     = 4,
+ 	.val_bits       = 32,
+-	.max_register	= 0x1000,
++	.max_register	= 0x9c,
+ };
+ 
+ static int sun6i_csi_resource_request(struct sun6i_csi_dev *sdev,
+diff --git a/drivers/media/platform/vimc/Makefile b/drivers/media/platform/vimc/Makefile
+index 4b2e3de7856e..c4fc8e7d365a 100644
+--- a/drivers/media/platform/vimc/Makefile
++++ b/drivers/media/platform/vimc/Makefile
+@@ -5,6 +5,7 @@ vimc_common-objs := vimc-common.o
+ vimc_debayer-objs := vimc-debayer.o
+ vimc_scaler-objs := vimc-scaler.o
+ vimc_sensor-objs := vimc-sensor.o
++vimc_streamer-objs := vimc-streamer.o
+ 
+ obj-$(CONFIG_VIDEO_VIMC) += vimc.o vimc_capture.o vimc_common.o vimc-debayer.o \
+-				vimc_scaler.o vimc_sensor.o
++			    vimc_scaler.o vimc_sensor.o vimc_streamer.o
+diff --git a/drivers/media/platform/vimc/vimc-capture.c b/drivers/media/platform/vimc/vimc-capture.c
+index 3f7e9ed56633..80d7515ec420 100644
+--- a/drivers/media/platform/vimc/vimc-capture.c
++++ b/drivers/media/platform/vimc/vimc-capture.c
+@@ -24,6 +24,7 @@
+ #include <media/videobuf2-vmalloc.h>
+ 
+ #include "vimc-common.h"
++#include "vimc-streamer.h"
+ 
+ #define VIMC_CAP_DRV_NAME "vimc-capture"
+ 
+@@ -44,7 +45,7 @@ struct vimc_cap_device {
+ 	spinlock_t qlock;
+ 	struct mutex lock;
+ 	u32 sequence;
+-	struct media_pipeline pipe;
++	struct vimc_stream stream;
+ };
+ 
+ static const struct v4l2_pix_format fmt_default = {
+@@ -248,14 +249,13 @@ static int vimc_cap_start_streaming(struct vb2_queue *vq, unsigned int count)
+ 	vcap->sequence = 0;
+ 
+ 	/* Start the media pipeline */
+-	ret = media_pipeline_start(entity, &vcap->pipe);
++	ret = media_pipeline_start(entity, &vcap->stream.pipe);
+ 	if (ret) {
+ 		vimc_cap_return_all_buffers(vcap, VB2_BUF_STATE_QUEUED);
+ 		return ret;
+ 	}
+ 
+-	/* Enable streaming from the pipe */
+-	ret = vimc_pipeline_s_stream(&vcap->vdev.entity, 1);
++	ret = vimc_streamer_s_stream(&vcap->stream, &vcap->ved, 1);
+ 	if (ret) {
+ 		media_pipeline_stop(entity);
+ 		vimc_cap_return_all_buffers(vcap, VB2_BUF_STATE_QUEUED);
+@@ -273,8 +273,7 @@ static void vimc_cap_stop_streaming(struct vb2_queue *vq)
+ {
+ 	struct vimc_cap_device *vcap = vb2_get_drv_priv(vq);
+ 
+-	/* Disable streaming from the pipe */
+-	vimc_pipeline_s_stream(&vcap->vdev.entity, 0);
++	vimc_streamer_s_stream(&vcap->stream, &vcap->ved, 0);
+ 
+ 	/* Stop the media pipeline */
+ 	media_pipeline_stop(&vcap->vdev.entity);
+@@ -355,8 +354,8 @@ static void vimc_cap_comp_unbind(struct device *comp, struct device *master,
+ 	kfree(vcap);
+ }
+ 
+-static void vimc_cap_process_frame(struct vimc_ent_device *ved,
+-				   struct media_pad *sink, const void *frame)
++static void *vimc_cap_process_frame(struct vimc_ent_device *ved,
++				    const void *frame)
+ {
+ 	struct vimc_cap_device *vcap = container_of(ved, struct vimc_cap_device,
+ 						    ved);
+@@ -370,7 +369,7 @@ static void vimc_cap_process_frame(struct vimc_ent_device *ved,
+ 					    typeof(*vimc_buf), list);
+ 	if (!vimc_buf) {
+ 		spin_unlock(&vcap->qlock);
+-		return;
++		return ERR_PTR(-EAGAIN);
+ 	}
+ 
+ 	/* Remove this entry from the list */
+@@ -391,6 +390,7 @@ static void vimc_cap_process_frame(struct vimc_ent_device *ved,
+ 	vb2_set_plane_payload(&vimc_buf->vb2.vb2_buf, 0,
+ 			      vcap->format.sizeimage);
+ 	vb2_buffer_done(&vimc_buf->vb2.vb2_buf, VB2_BUF_STATE_DONE);
++	return NULL;
+ }
+ 
+ static int vimc_cap_comp_bind(struct device *comp, struct device *master,
+diff --git a/drivers/media/platform/vimc/vimc-common.c b/drivers/media/platform/vimc/vimc-common.c
+index 867e24dbd6b5..c1a74bb2df58 100644
+--- a/drivers/media/platform/vimc/vimc-common.c
++++ b/drivers/media/platform/vimc/vimc-common.c
+@@ -207,41 +207,6 @@ const struct vimc_pix_map *vimc_pix_map_by_pixelformat(u32 pixelformat)
+ }
+ EXPORT_SYMBOL_GPL(vimc_pix_map_by_pixelformat);
+ 
+-int vimc_propagate_frame(struct media_pad *src, const void *frame)
+-{
+-	struct media_link *link;
+-
+-	if (!(src->flags & MEDIA_PAD_FL_SOURCE))
+-		return -EINVAL;
+-
+-	/* Send this frame to all sink pads that are direct linked */
+-	list_for_each_entry(link, &src->entity->links, list) {
+-		if (link->source == src &&
+-		    (link->flags & MEDIA_LNK_FL_ENABLED)) {
+-			struct vimc_ent_device *ved = NULL;
+-			struct media_entity *entity = link->sink->entity;
+-
+-			if (is_media_entity_v4l2_subdev(entity)) {
+-				struct v4l2_subdev *sd =
+-					container_of(entity, struct v4l2_subdev,
+-						     entity);
+-				ved = v4l2_get_subdevdata(sd);
+-			} else if (is_media_entity_v4l2_video_device(entity)) {
+-				struct video_device *vdev =
+-					container_of(entity,
+-						     struct video_device,
+-						     entity);
+-				ved = video_get_drvdata(vdev);
+-			}
+-			if (ved && ved->process_frame)
+-				ved->process_frame(ved, link->sink, frame);
+-		}
+-	}
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL_GPL(vimc_propagate_frame);
+-
+ /* Helper function to allocate and initialize pads */
+ struct media_pad *vimc_pads_init(u16 num_pads, const unsigned long *pads_flag)
+ {
+diff --git a/drivers/media/platform/vimc/vimc-common.h b/drivers/media/platform/vimc/vimc-common.h
+index 2e9981b18166..6ed969d9efbb 100644
+--- a/drivers/media/platform/vimc/vimc-common.h
++++ b/drivers/media/platform/vimc/vimc-common.h
+@@ -113,23 +113,12 @@ struct vimc_pix_map {
+ struct vimc_ent_device {
+ 	struct media_entity *ent;
+ 	struct media_pad *pads;
+-	void (*process_frame)(struct vimc_ent_device *ved,
+-			      struct media_pad *sink, const void *frame);
++	void * (*process_frame)(struct vimc_ent_device *ved,
++				const void *frame);
+ 	void (*vdev_get_format)(struct vimc_ent_device *ved,
+ 			      struct v4l2_pix_format *fmt);
+ };
+ 
+-/**
+- * vimc_propagate_frame - propagate a frame through the topology
+- *
+- * @src:	the source pad where the frame is being originated
+- * @frame:	the frame to be propagated
+- *
+- * This function will call the process_frame callback from the vimc_ent_device
+- * struct of the nodes directly connected to the @src pad
+- */
+-int vimc_propagate_frame(struct media_pad *src, const void *frame);
+-
+ /**
+  * vimc_pads_init - initialize pads
+  *
+diff --git a/drivers/media/platform/vimc/vimc-debayer.c b/drivers/media/platform/vimc/vimc-debayer.c
+index 77887f66f323..7d77c63b99d2 100644
+--- a/drivers/media/platform/vimc/vimc-debayer.c
++++ b/drivers/media/platform/vimc/vimc-debayer.c
+@@ -321,7 +321,6 @@ static void vimc_deb_set_rgb_mbus_fmt_rgb888_1x24(struct vimc_deb_device *vdeb,
+ static int vimc_deb_s_stream(struct v4l2_subdev *sd, int enable)
+ {
+ 	struct vimc_deb_device *vdeb = v4l2_get_subdevdata(sd);
+-	int ret;
+ 
+ 	if (enable) {
+ 		const struct vimc_pix_map *vpix;
+@@ -351,22 +350,10 @@ static int vimc_deb_s_stream(struct v4l2_subdev *sd, int enable)
+ 		if (!vdeb->src_frame)
+ 			return -ENOMEM;
+ 
+-		/* Turn the stream on in the subdevices directly connected */
+-		ret = vimc_pipeline_s_stream(&vdeb->sd.entity, 1);
+-		if (ret) {
+-			vfree(vdeb->src_frame);
+-			vdeb->src_frame = NULL;
+-			return ret;
+-		}
+ 	} else {
+ 		if (!vdeb->src_frame)
+ 			return 0;
+ 
+-		/* Disable streaming from the pipe */
+-		ret = vimc_pipeline_s_stream(&vdeb->sd.entity, 0);
+-		if (ret)
+-			return ret;
+-
+ 		vfree(vdeb->src_frame);
+ 		vdeb->src_frame = NULL;
+ 	}
+@@ -480,9 +467,8 @@ static void vimc_deb_calc_rgb_sink(struct vimc_deb_device *vdeb,
+ 	}
+ }
+ 
+-static void vimc_deb_process_frame(struct vimc_ent_device *ved,
+-				   struct media_pad *sink,
+-				   const void *sink_frame)
++static void *vimc_deb_process_frame(struct vimc_ent_device *ved,
++				    const void *sink_frame)
+ {
+ 	struct vimc_deb_device *vdeb = container_of(ved, struct vimc_deb_device,
+ 						    ved);
+@@ -491,7 +477,7 @@ static void vimc_deb_process_frame(struct vimc_ent_device *ved,
+ 
+ 	/* If the stream in this node is not active, just return */
+ 	if (!vdeb->src_frame)
+-		return;
++		return ERR_PTR(-EINVAL);
+ 
+ 	for (i = 0; i < vdeb->sink_fmt.height; i++)
+ 		for (j = 0; j < vdeb->sink_fmt.width; j++) {
+@@ -499,12 +485,8 @@ static void vimc_deb_process_frame(struct vimc_ent_device *ved,
+ 			vdeb->set_rgb_src(vdeb, i, j, rgb);
+ 		}
+ 
+-	/* Propagate the frame through all source pads */
+-	for (i = 1; i < vdeb->sd.entity.num_pads; i++) {
+-		struct media_pad *pad = &vdeb->sd.entity.pads[i];
++	return vdeb->src_frame;
+ 
+-		vimc_propagate_frame(pad, vdeb->src_frame);
+-	}
+ }
+ 
+ static void vimc_deb_comp_unbind(struct device *comp, struct device *master,
+diff --git a/drivers/media/platform/vimc/vimc-scaler.c b/drivers/media/platform/vimc/vimc-scaler.c
+index b0952ee86296..39b2a73dfcc1 100644
+--- a/drivers/media/platform/vimc/vimc-scaler.c
++++ b/drivers/media/platform/vimc/vimc-scaler.c
+@@ -217,7 +217,6 @@ static const struct v4l2_subdev_pad_ops vimc_sca_pad_ops = {
+ static int vimc_sca_s_stream(struct v4l2_subdev *sd, int enable)
+ {
+ 	struct vimc_sca_device *vsca = v4l2_get_subdevdata(sd);
+-	int ret;
+ 
+ 	if (enable) {
+ 		const struct vimc_pix_map *vpix;
+@@ -245,22 +244,10 @@ static int vimc_sca_s_stream(struct v4l2_subdev *sd, int enable)
+ 		if (!vsca->src_frame)
+ 			return -ENOMEM;
+ 
+-		/* Turn the stream on in the subdevices directly connected */
+-		ret = vimc_pipeline_s_stream(&vsca->sd.entity, 1);
+-		if (ret) {
+-			vfree(vsca->src_frame);
+-			vsca->src_frame = NULL;
+-			return ret;
+-		}
+ 	} else {
+ 		if (!vsca->src_frame)
+ 			return 0;
+ 
+-		/* Disable streaming from the pipe */
+-		ret = vimc_pipeline_s_stream(&vsca->sd.entity, 0);
+-		if (ret)
+-			return ret;
+-
+ 		vfree(vsca->src_frame);
+ 		vsca->src_frame = NULL;
+ 	}
+@@ -346,26 +333,19 @@ static void vimc_sca_fill_src_frame(const struct vimc_sca_device *const vsca,
+ 			vimc_sca_scale_pix(vsca, i, j, sink_frame);
+ }
+ 
+-static void vimc_sca_process_frame(struct vimc_ent_device *ved,
+-				   struct media_pad *sink,
+-				   const void *sink_frame)
++static void *vimc_sca_process_frame(struct vimc_ent_device *ved,
++				    const void *sink_frame)
+ {
+ 	struct vimc_sca_device *vsca = container_of(ved, struct vimc_sca_device,
+ 						    ved);
+-	unsigned int i;
+ 
+ 	/* If the stream in this node is not active, just return */
+ 	if (!vsca->src_frame)
+-		return;
++		return ERR_PTR(-EINVAL);
+ 
+ 	vimc_sca_fill_src_frame(vsca, sink_frame);
+ 
+-	/* Propagate the frame through all source pads */
+-	for (i = 1; i < vsca->sd.entity.num_pads; i++) {
+-		struct media_pad *pad = &vsca->sd.entity.pads[i];
+-
+-		vimc_propagate_frame(pad, vsca->src_frame);
+-	}
++	return vsca->src_frame;
+ };
+ 
+ static void vimc_sca_comp_unbind(struct device *comp, struct device *master,
+diff --git a/drivers/media/platform/vimc/vimc-sensor.c b/drivers/media/platform/vimc/vimc-sensor.c
+index 32ca9c6172b1..93961a1e694f 100644
+--- a/drivers/media/platform/vimc/vimc-sensor.c
++++ b/drivers/media/platform/vimc/vimc-sensor.c
+@@ -16,8 +16,6 @@
+  */
+ 
+ #include <linux/component.h>
+-#include <linux/freezer.h>
+-#include <linux/kthread.h>
+ #include <linux/module.h>
+ #include <linux/mod_devicetable.h>
+ #include <linux/platform_device.h>
+@@ -201,38 +199,27 @@ static const struct v4l2_subdev_pad_ops vimc_sen_pad_ops = {
+ 	.set_fmt		= vimc_sen_set_fmt,
+ };
+ 
+-static int vimc_sen_tpg_thread(void *data)
++static void *vimc_sen_process_frame(struct vimc_ent_device *ved,
++				    const void *sink_frame)
+ {
+-	struct vimc_sen_device *vsen = data;
+-	unsigned int i;
+-
+-	set_freezable();
+-	set_current_state(TASK_UNINTERRUPTIBLE);
+-
+-	for (;;) {
+-		try_to_freeze();
+-		if (kthread_should_stop())
+-			break;
+-
+-		tpg_fill_plane_buffer(&vsen->tpg, 0, 0, vsen->frame);
++	struct vimc_sen_device *vsen = container_of(ved, struct vimc_sen_device,
++						    ved);
++	const struct vimc_pix_map *vpix;
++	unsigned int frame_size;
+ 
+-		/* Send the frame to all source pads */
+-		for (i = 0; i < vsen->sd.entity.num_pads; i++)
+-			vimc_propagate_frame(&vsen->sd.entity.pads[i],
+-					     vsen->frame);
++	/* Calculate the frame size */
++	vpix = vimc_pix_map_by_code(vsen->mbus_format.code);
++	frame_size = vsen->mbus_format.width * vpix->bpp *
++		     vsen->mbus_format.height;
+ 
+-		/* 60 frames per second */
+-		schedule_timeout(HZ/60);
+-	}
+-
+-	return 0;
++	tpg_fill_plane_buffer(&vsen->tpg, 0, 0, vsen->frame);
++	return vsen->frame;
+ }
+ 
+ static int vimc_sen_s_stream(struct v4l2_subdev *sd, int enable)
+ {
+ 	struct vimc_sen_device *vsen =
+ 				container_of(sd, struct vimc_sen_device, sd);
+-	int ret;
+ 
+ 	if (enable) {
+ 		const struct vimc_pix_map *vpix;
+@@ -258,26 +245,8 @@ static int vimc_sen_s_stream(struct v4l2_subdev *sd, int enable)
+ 		/* configure the test pattern generator */
+ 		vimc_sen_tpg_s_format(vsen);
+ 
+-		/* Initialize the image generator thread */
+-		vsen->kthread_sen = kthread_run(vimc_sen_tpg_thread, vsen,
+-					"%s-sen", vsen->sd.v4l2_dev->name);
+-		if (IS_ERR(vsen->kthread_sen)) {
+-			dev_err(vsen->dev, "%s: kernel_thread() failed\n",
+-				vsen->sd.name);
+-			vfree(vsen->frame);
+-			vsen->frame = NULL;
+-			return PTR_ERR(vsen->kthread_sen);
+-		}
+ 	} else {
+-		if (!vsen->kthread_sen)
+-			return 0;
+-
+-		/* Stop image generator */
+-		ret = kthread_stop(vsen->kthread_sen);
+-		if (ret)
+-			return ret;
+ 
+-		vsen->kthread_sen = NULL;
+ 		vfree(vsen->frame);
+ 		vsen->frame = NULL;
+ 		return 0;
+@@ -413,6 +382,7 @@ static int vimc_sen_comp_bind(struct device *comp, struct device *master,
+ 	if (ret)
+ 		goto err_free_hdl;
+ 
++	vsen->ved.process_frame = vimc_sen_process_frame;
+ 	dev_set_drvdata(comp, &vsen->ved);
+ 	vsen->dev = comp;
+ 
+diff --git a/drivers/media/platform/vimc/vimc-streamer.c b/drivers/media/platform/vimc/vimc-streamer.c
+new file mode 100644
+index 000000000000..fcc897fb247b
+--- /dev/null
++++ b/drivers/media/platform/vimc/vimc-streamer.c
+@@ -0,0 +1,188 @@
++// SPDX-License-Identifier: GPL-2.0+
++/*
++ * vimc-streamer.c Virtual Media Controller Driver
++ *
++ * Copyright (C) 2018 Lucas A. M. Magalhães <lucmaga@gmail.com>
++ *
++ */
++
++#include <linux/init.h>
++#include <linux/module.h>
++#include <linux/freezer.h>
++#include <linux/kthread.h>
++
++#include "vimc-streamer.h"
++
++/**
++ * vimc_get_source_entity - get the entity connected with the first sink pad
++ *
++ * @ent:	reference media_entity
++ *
++ * Helper function that returns the media entity containing the source pad
++ * linked with the first sink pad from the given media entity pad list.
++ */
++static struct media_entity *vimc_get_source_entity(struct media_entity *ent)
++{
++	struct media_pad *pad;
++	int i;
++
++	for (i = 0; i < ent->num_pads; i++) {
++		if (ent->pads[i].flags & MEDIA_PAD_FL_SOURCE)
++			continue;
++		pad = media_entity_remote_pad(&ent->pads[i]);
++		return pad ? pad->entity : NULL;
++	}
++	return NULL;
++}
++
++/*
++ * vimc_streamer_pipeline_terminate - Disable stream in all ved in stream
++ *
++ * @stream: the pointer to the stream structure with the pipeline to be
++ *	    disabled.
++ *
++ * Calls s_stream to disable the stream in each entity of the pipeline
++ *
++ */
++static void vimc_streamer_pipeline_terminate(struct vimc_stream *stream)
++{
++	struct media_entity *entity;
++	struct v4l2_subdev *sd;
++
++	while (stream->pipe_size) {
++		stream->pipe_size--;
++		entity = stream->ved_pipeline[stream->pipe_size]->ent;
++		entity = vimc_get_source_entity(entity);
++		stream->ved_pipeline[stream->pipe_size] = NULL;
++
++		if (!is_media_entity_v4l2_subdev(entity))
++			continue;
++
++		sd = media_entity_to_v4l2_subdev(entity);
++		v4l2_subdev_call(sd, video, s_stream, 0);
++	}
++}
++
++/*
++ * vimc_streamer_pipeline_init - initializes the stream structure
++ *
++ * @stream: the pointer to the stream structure to be initialized
++ * @ved:    the pointer to the vimc entity initializing the stream
++ *
++ * Initializes the stream structure. Walks through the entity graph to
++ * construct the pipeline used later on the streamer thread.
++ * Calls s_stream to enable stream in all entities of the pipeline.
++ */
++static int vimc_streamer_pipeline_init(struct vimc_stream *stream,
++				       struct vimc_ent_device *ved)
++{
++	struct media_entity *entity;
++	struct video_device *vdev;
++	struct v4l2_subdev *sd;
++	int ret = 0;
++
++	stream->pipe_size = 0;
++	while (stream->pipe_size < VIMC_STREAMER_PIPELINE_MAX_SIZE) {
++		if (!ved) {
++			vimc_streamer_pipeline_terminate(stream);
++			return -EINVAL;
++		}
++		stream->ved_pipeline[stream->pipe_size++] = ved;
++
++		entity = vimc_get_source_entity(ved->ent);
++		/* Check if the end of the pipeline was reached */
++		if (!entity)
++			return 0;
++
++		if (is_media_entity_v4l2_subdev(entity)) {
++			sd = media_entity_to_v4l2_subdev(entity);
++			ret = v4l2_subdev_call(sd, video, s_stream, 1);
++			if (ret && ret != -ENOIOCTLCMD) {
++				vimc_streamer_pipeline_terminate(stream);
++				return ret;
++			}
++			ved = v4l2_get_subdevdata(sd);
++		} else {
++			vdev = container_of(entity,
++					    struct video_device,
++					    entity);
++			ved = video_get_drvdata(vdev);
++		}
++	}
++
++	vimc_streamer_pipeline_terminate(stream);
++	return -EINVAL;
++}
++
++static int vimc_streamer_thread(void *data)
++{
++	struct vimc_stream *stream = data;
++	int i;
++
++	set_freezable();
++	set_current_state(TASK_UNINTERRUPTIBLE);
++
++	for (;;) {
++		try_to_freeze();
++		if (kthread_should_stop())
++			break;
++
++		for (i = stream->pipe_size - 1; i >= 0; i--) {
++			stream->frame = stream->ved_pipeline[i]->process_frame(
++					stream->ved_pipeline[i],
++					stream->frame);
++			if (!stream->frame)
++				break;
++			if (IS_ERR(stream->frame))
++				break;
++		}
++		/* run at roughly 60 frames per second */
++		schedule_timeout(HZ / 60);
++	}
++
++	return 0;
++}
++
++int vimc_streamer_s_stream(struct vimc_stream *stream,
++			   struct vimc_ent_device *ved,
++			   int enable)
++{
++	int ret;
++
++	if (!stream || !ved)
++		return -EINVAL;
++
++	if (enable) {
++		if (stream->kthread)
++			return 0;
++
++		ret = vimc_streamer_pipeline_init(stream, ved);
++		if (ret)
++			return ret;
++
++		stream->kthread = kthread_run(vimc_streamer_thread, stream,
++					      "vimc-streamer thread");
++
++		if (IS_ERR(stream->kthread))
++			return PTR_ERR(stream->kthread);
++
++	} else {
++		if (!stream->kthread)
++			return 0;
++
++		ret = kthread_stop(stream->kthread);
++		if (ret)
++			return ret;
++
++		stream->kthread = NULL;
++
++		vimc_streamer_pipeline_terminate(stream);
++	}
++
++	return 0;
++}
++EXPORT_SYMBOL_GPL(vimc_streamer_s_stream);
++
++MODULE_DESCRIPTION("Virtual Media Controller Driver (VIMC) Streamer");
++MODULE_AUTHOR("Lucas A. M. Magalhães <lucmaga@gmail.com>");
++MODULE_LICENSE("GPL");
+diff --git a/drivers/media/platform/vimc/vimc-streamer.h b/drivers/media/platform/vimc/vimc-streamer.h
+new file mode 100644
+index 000000000000..752af2e2d5a2
+--- /dev/null
++++ b/drivers/media/platform/vimc/vimc-streamer.h
+@@ -0,0 +1,38 @@
++/* SPDX-License-Identifier: GPL-2.0+ */
++/*
++ * vimc-streamer.h Virtual Media Controller Driver
++ *
++ * Copyright (C) 2018 Lucas A. M. Magalhães <lucmaga@gmail.com>
++ *
++ */
++
++#ifndef _VIMC_STREAMER_H_
++#define _VIMC_STREAMER_H_
++
++#include <media/media-device.h>
++
++#include "vimc-common.h"
++
++#define VIMC_STREAMER_PIPELINE_MAX_SIZE 16
++
++struct vimc_stream {
++	struct media_pipeline pipe;
++	struct vimc_ent_device *ved_pipeline[VIMC_STREAMER_PIPELINE_MAX_SIZE];
++	unsigned int pipe_size;
++	u8 *frame;
++	struct task_struct *kthread;
++};
++
++/**
++ * vimc_streamer_s_stream - start/stop the stream
++ *
++ * @stream:	the pointer to the stream to start or stop
++ * @ved:	The last entity of the streamer pipeline
++ * @enable:	any non-zero value starts the stream, zero stops it
++ *
++ */
++int vimc_streamer_s_stream(struct vimc_stream *stream,
++			   struct vimc_ent_device *ved,
++			   int enable);
++
++#endif  /* _VIMC_STREAMER_H_ */
+diff --git a/drivers/media/usb/uvc/uvc_video.c b/drivers/media/usb/uvc/uvc_video.c
+index 84525ff04745..e314657a1843 100644
+--- a/drivers/media/usb/uvc/uvc_video.c
++++ b/drivers/media/usb/uvc/uvc_video.c
+@@ -676,6 +676,14 @@ void uvc_video_clock_update(struct uvc_streaming *stream,
+ 	if (!uvc_hw_timestamps_param)
+ 		return;
+ 
++	/*
++	 * We will get called from __vb2_queue_cancel() if there are buffers
++	 * done but not dequeued by the user, but the sample array has already
++	 * been released at that time. Just bail out in that case.
++	 */
++	if (!clock->samples)
++		return;
++
+ 	spin_lock_irqsave(&clock->lock, flags);
+ 
+ 	if (clock->count < clock->size)
+diff --git a/drivers/mfd/sm501.c b/drivers/mfd/sm501.c
+index a530972c5a7e..e0173bf4b0dc 100644
+--- a/drivers/mfd/sm501.c
++++ b/drivers/mfd/sm501.c
+@@ -1145,6 +1145,9 @@ static int sm501_register_gpio_i2c_instance(struct sm501_devdata *sm,
+ 	lookup = devm_kzalloc(&pdev->dev,
+ 			      sizeof(*lookup) + 3 * sizeof(struct gpiod_lookup),
+ 			      GFP_KERNEL);
++	if (!lookup)
++		return -ENOMEM;
++
+ 	lookup->dev_id = "i2c-gpio";
+ 	if (iic->pin_sda < 32)
+ 		lookup->table[0].chip_label = "SM501-LOW";
+diff --git a/drivers/misc/cxl/guest.c b/drivers/misc/cxl/guest.c
+index 5d28d9e454f5..08f4a512afad 100644
+--- a/drivers/misc/cxl/guest.c
++++ b/drivers/misc/cxl/guest.c
+@@ -267,6 +267,7 @@ static int guest_reset(struct cxl *adapter)
+ 	int i, rc;
+ 
+ 	pr_devel("Adapter reset request\n");
++	spin_lock(&adapter->afu_list_lock);
+ 	for (i = 0; i < adapter->slices; i++) {
+ 		if ((afu = adapter->afu[i])) {
+ 			pci_error_handlers(afu, CXL_ERROR_DETECTED_EVENT,
+@@ -283,6 +284,7 @@ static int guest_reset(struct cxl *adapter)
+ 			pci_error_handlers(afu, CXL_RESUME_EVENT, 0);
+ 		}
+ 	}
++	spin_unlock(&adapter->afu_list_lock);
+ 	return rc;
+ }
+ 
+diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
+index c79ba1c699ad..300531d6136f 100644
+--- a/drivers/misc/cxl/pci.c
++++ b/drivers/misc/cxl/pci.c
+@@ -1805,7 +1805,7 @@ static pci_ers_result_t cxl_vphb_error_detected(struct cxl_afu *afu,
+ 	/* There should only be one entry, but go through the list
+ 	 * anyway
+ 	 */
+-	if (afu->phb == NULL)
++	if (afu == NULL || afu->phb == NULL)
+ 		return result;
+ 
+ 	list_for_each_entry(afu_dev, &afu->phb->bus->devices, bus_list) {
+@@ -1832,7 +1832,8 @@ static pci_ers_result_t cxl_pci_error_detected(struct pci_dev *pdev,
+ {
+ 	struct cxl *adapter = pci_get_drvdata(pdev);
+ 	struct cxl_afu *afu;
+-	pci_ers_result_t result = PCI_ERS_RESULT_NEED_RESET, afu_result;
++	pci_ers_result_t result = PCI_ERS_RESULT_NEED_RESET;
++	pci_ers_result_t afu_result = PCI_ERS_RESULT_NEED_RESET;
+ 	int i;
+ 
+ 	/* At this point, we could still have an interrupt pending.
+@@ -1843,6 +1844,7 @@ static pci_ers_result_t cxl_pci_error_detected(struct pci_dev *pdev,
+ 
+ 	/* If we're permanently dead, give up. */
+ 	if (state == pci_channel_io_perm_failure) {
++		spin_lock(&adapter->afu_list_lock);
+ 		for (i = 0; i < adapter->slices; i++) {
+ 			afu = adapter->afu[i];
+ 			/*
+@@ -1851,6 +1853,7 @@ static pci_ers_result_t cxl_pci_error_detected(struct pci_dev *pdev,
+ 			 */
+ 			cxl_vphb_error_detected(afu, state);
+ 		}
++		spin_unlock(&adapter->afu_list_lock);
+ 		return PCI_ERS_RESULT_DISCONNECT;
+ 	}
+ 
+@@ -1932,11 +1935,17 @@ static pci_ers_result_t cxl_pci_error_detected(struct pci_dev *pdev,
+ 	 *     * In slot_reset, free the old resources and allocate new ones.
+ 	 *     * In resume, clear the flag to allow things to start.
+ 	 */
++
++	/* Make sure no one else changes the afu list */
++	spin_lock(&adapter->afu_list_lock);
++
+ 	for (i = 0; i < adapter->slices; i++) {
+ 		afu = adapter->afu[i];
+ 
+-		afu_result = cxl_vphb_error_detected(afu, state);
++		if (afu == NULL)
++			continue;
+ 
++		afu_result = cxl_vphb_error_detected(afu, state);
+ 		cxl_context_detach_all(afu);
+ 		cxl_ops->afu_deactivate_mode(afu, afu->current_mode);
+ 		pci_deconfigure_afu(afu);
+@@ -1948,6 +1957,7 @@ static pci_ers_result_t cxl_pci_error_detected(struct pci_dev *pdev,
+ 			 (result == PCI_ERS_RESULT_NEED_RESET))
+ 			result = PCI_ERS_RESULT_NONE;
+ 	}
++	spin_unlock(&adapter->afu_list_lock);
+ 
+ 	/* should take the context lock here */
+ 	if (cxl_adapter_context_lock(adapter) != 0)
+@@ -1980,14 +1990,18 @@ static pci_ers_result_t cxl_pci_slot_reset(struct pci_dev *pdev)
+ 	 */
+ 	cxl_adapter_context_unlock(adapter);
+ 
++	spin_lock(&adapter->afu_list_lock);
+ 	for (i = 0; i < adapter->slices; i++) {
+ 		afu = adapter->afu[i];
+ 
++		if (afu == NULL)
++			continue;
++
+ 		if (pci_configure_afu(afu, adapter, pdev))
+-			goto err;
++			goto err_unlock;
+ 
+ 		if (cxl_afu_select_best_mode(afu))
+-			goto err;
++			goto err_unlock;
+ 
+ 		if (afu->phb == NULL)
+ 			continue;
+@@ -1999,16 +2013,16 @@ static pci_ers_result_t cxl_pci_slot_reset(struct pci_dev *pdev)
+ 			ctx = cxl_get_context(afu_dev);
+ 
+ 			if (ctx && cxl_release_context(ctx))
+-				goto err;
++				goto err_unlock;
+ 
+ 			ctx = cxl_dev_context_init(afu_dev);
+ 			if (IS_ERR(ctx))
+-				goto err;
++				goto err_unlock;
+ 
+ 			afu_dev->dev.archdata.cxl_ctx = ctx;
+ 
+ 			if (cxl_ops->afu_check_and_enable(afu))
+-				goto err;
++				goto err_unlock;
+ 
+ 			afu_dev->error_state = pci_channel_io_normal;
+ 
+@@ -2029,8 +2043,13 @@ static pci_ers_result_t cxl_pci_slot_reset(struct pci_dev *pdev)
+ 				result = PCI_ERS_RESULT_DISCONNECT;
+ 		}
+ 	}
++
++	spin_unlock(&adapter->afu_list_lock);
+ 	return result;
+ 
++err_unlock:
++	spin_unlock(&adapter->afu_list_lock);
++
+ err:
+ 	/* All the bits that happen in both error_detected and cxl_remove
+ 	 * should be idempotent, so we don't need to worry about leaving a mix
+@@ -2051,10 +2070,11 @@ static void cxl_pci_resume(struct pci_dev *pdev)
+ 	 * This is not the place to be checking if everything came back up
+ 	 * properly, because there's no return value: do that in slot_reset.
+ 	 */
++	spin_lock(&adapter->afu_list_lock);
+ 	for (i = 0; i < adapter->slices; i++) {
+ 		afu = adapter->afu[i];
+ 
+-		if (afu->phb == NULL)
++		if (afu == NULL || afu->phb == NULL)
+ 			continue;
+ 
+ 		list_for_each_entry(afu_dev, &afu->phb->bus->devices, bus_list) {
+@@ -2063,6 +2083,7 @@ static void cxl_pci_resume(struct pci_dev *pdev)
+ 				afu_dev->driver->err_handler->resume(afu_dev);
+ 		}
+ 	}
++	spin_unlock(&adapter->afu_list_lock);
+ }
+ 
+ static const struct pci_error_handlers cxl_err_handler = {
+diff --git a/drivers/misc/mei/bus.c b/drivers/misc/mei/bus.c
+index fc3872fe7b25..c383322ec2ba 100644
+--- a/drivers/misc/mei/bus.c
++++ b/drivers/misc/mei/bus.c
+@@ -541,17 +541,9 @@ int mei_cldev_enable(struct mei_cl_device *cldev)
+ 		goto out;
+ 	}
+ 
+-	if (!mei_cl_bus_module_get(cldev)) {
+-		dev_err(&cldev->dev, "get hw module failed");
+-		ret = -ENODEV;
+-		goto out;
+-	}
+-
+ 	ret = mei_cl_connect(cl, cldev->me_cl, NULL);
+-	if (ret < 0) {
++	if (ret < 0)
+ 		dev_err(&cldev->dev, "cannot connect\n");
+-		mei_cl_bus_module_put(cldev);
+-	}
+ 
+ out:
+ 	mutex_unlock(&bus->device_lock);
+@@ -614,7 +606,6 @@ int mei_cldev_disable(struct mei_cl_device *cldev)
+ 	if (err < 0)
+ 		dev_err(bus->dev, "Could not disconnect from the ME client\n");
+ 
+-	mei_cl_bus_module_put(cldev);
+ out:
+ 	/* Flush queues and remove any pending read */
+ 	mei_cl_flush_queues(cl, NULL);
+@@ -725,9 +716,16 @@ static int mei_cl_device_probe(struct device *dev)
+ 	if (!id)
+ 		return -ENODEV;
+ 
++	if (!mei_cl_bus_module_get(cldev)) {
++		dev_err(&cldev->dev, "get hw module failed");
++		return -ENODEV;
++	}
++
+ 	ret = cldrv->probe(cldev, id);
+-	if (ret)
++	if (ret) {
++		mei_cl_bus_module_put(cldev);
+ 		return ret;
++	}
+ 
+ 	__module_get(THIS_MODULE);
+ 	return 0;
+@@ -755,6 +753,7 @@ static int mei_cl_device_remove(struct device *dev)
+ 
+ 	mei_cldev_unregister_callbacks(cldev);
+ 
++	mei_cl_bus_module_put(cldev);
+ 	module_put(THIS_MODULE);
+ 	dev->driver = NULL;
+ 	return ret;
+diff --git a/drivers/misc/mei/hbm.c b/drivers/misc/mei/hbm.c
+index 8f7616557c97..e6207f614816 100644
+--- a/drivers/misc/mei/hbm.c
++++ b/drivers/misc/mei/hbm.c
+@@ -1029,29 +1029,36 @@ static void mei_hbm_config_features(struct mei_device *dev)
+ 	    dev->version.minor_version >= HBM_MINOR_VERSION_PGI)
+ 		dev->hbm_f_pg_supported = 1;
+ 
++	dev->hbm_f_dc_supported = 0;
+ 	if (dev->version.major_version >= HBM_MAJOR_VERSION_DC)
+ 		dev->hbm_f_dc_supported = 1;
+ 
++	dev->hbm_f_ie_supported = 0;
+ 	if (dev->version.major_version >= HBM_MAJOR_VERSION_IE)
+ 		dev->hbm_f_ie_supported = 1;
+ 
+ 	/* disconnect on connect timeout instead of link reset */
++	dev->hbm_f_dot_supported = 0;
+ 	if (dev->version.major_version >= HBM_MAJOR_VERSION_DOT)
+ 		dev->hbm_f_dot_supported = 1;
+ 
+ 	/* Notification Event Support */
++	dev->hbm_f_ev_supported = 0;
+ 	if (dev->version.major_version >= HBM_MAJOR_VERSION_EV)
+ 		dev->hbm_f_ev_supported = 1;
+ 
+ 	/* Fixed Address Client Support */
++	dev->hbm_f_fa_supported = 0;
+ 	if (dev->version.major_version >= HBM_MAJOR_VERSION_FA)
+ 		dev->hbm_f_fa_supported = 1;
+ 
+ 	/* OS ver message Support */
++	dev->hbm_f_os_supported = 0;
+ 	if (dev->version.major_version >= HBM_MAJOR_VERSION_OS)
+ 		dev->hbm_f_os_supported = 1;
+ 
+ 	/* DMA Ring Support */
++	dev->hbm_f_dr_supported = 0;
+ 	if (dev->version.major_version > HBM_MAJOR_VERSION_DR ||
+ 	    (dev->version.major_version == HBM_MAJOR_VERSION_DR &&
+ 	     dev->version.minor_version >= HBM_MINOR_VERSION_DR))
+diff --git a/drivers/misc/vmw_balloon.c b/drivers/misc/vmw_balloon.c
+index f8240b87df22..f69acb5d4a50 100644
+--- a/drivers/misc/vmw_balloon.c
++++ b/drivers/misc/vmw_balloon.c
+@@ -1287,7 +1287,7 @@ static void vmballoon_reset(struct vmballoon *b)
+ 	vmballoon_pop(b);
+ 
+ 	if (vmballoon_send_start(b, VMW_BALLOON_CAPABILITIES))
+-		return;
++		goto unlock;
+ 
+ 	if ((b->capabilities & VMW_BALLOON_BATCHED_CMDS) != 0) {
+ 		if (vmballoon_init_batching(b)) {
+@@ -1298,7 +1298,7 @@ static void vmballoon_reset(struct vmballoon *b)
+ 			 * The guest will retry in one second.
+ 			 */
+ 			vmballoon_send_start(b, 0);
+-			return;
++			goto unlock;
+ 		}
+ 	} else if ((b->capabilities & VMW_BALLOON_BASIC_CMDS) != 0) {
+ 		vmballoon_deinit_batching(b);
+@@ -1314,6 +1314,7 @@ static void vmballoon_reset(struct vmballoon *b)
+ 	if (vmballoon_send_guest_id(b))
+ 		pr_err("failed to send guest ID to the host\n");
+ 
++unlock:
+ 	up_write(&b->conf_sem);
+ }
+ 
+diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
+index b27a1e620233..1e6b07c176dc 100644
+--- a/drivers/mmc/core/core.c
++++ b/drivers/mmc/core/core.c
+@@ -2381,9 +2381,9 @@ unsigned int mmc_calc_max_discard(struct mmc_card *card)
+ 		return card->pref_erase;
+ 
+ 	max_discard = mmc_do_calc_max_discard(card, MMC_ERASE_ARG);
+-	if (max_discard && mmc_can_trim(card)) {
++	if (mmc_can_trim(card)) {
+ 		max_trim = mmc_do_calc_max_discard(card, MMC_TRIM_ARG);
+-		if (max_trim < max_discard)
++		if (max_trim < max_discard || max_discard == 0)
+ 			max_discard = max_trim;
+ 	} else if (max_discard < card->erase_size) {
+ 		max_discard = 0;
+diff --git a/drivers/mmc/host/renesas_sdhi_core.c b/drivers/mmc/host/renesas_sdhi_core.c
+index 31a351a20dc0..7e2a75c4f36f 100644
+--- a/drivers/mmc/host/renesas_sdhi_core.c
++++ b/drivers/mmc/host/renesas_sdhi_core.c
+@@ -723,6 +723,13 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+ 		host->ops.start_signal_voltage_switch =
+ 			renesas_sdhi_start_signal_voltage_switch;
+ 		host->sdcard_irq_setbit_mask = TMIO_STAT_ALWAYS_SET_27;
++
++		/* SDR and HS200/400 registers requires HW reset */
++		if (of_data && of_data->scc_offset) {
++			priv->scc_ctl = host->ctl + of_data->scc_offset;
++			host->mmc->caps |= MMC_CAP_HW_RESET;
++			host->hw_reset = renesas_sdhi_hw_reset;
++		}
+ 	}
+ 
+ 	/* Orginally registers were 16 bit apart, could be 32 or 64 nowadays */
+@@ -775,8 +782,6 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+ 		const struct renesas_sdhi_scc *taps = of_data->taps;
+ 		bool hit = false;
+ 
+-		host->mmc->caps |= MMC_CAP_HW_RESET;
+-
+ 		for (i = 0; i < of_data->taps_num; i++) {
+ 			if (taps[i].clk_rate == 0 ||
+ 			    taps[i].clk_rate == host->mmc->f_max) {
+@@ -789,12 +794,10 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+ 		if (!hit)
+ 			dev_warn(&host->pdev->dev, "Unknown clock rate for SDR104\n");
+ 
+-		priv->scc_ctl = host->ctl + of_data->scc_offset;
+ 		host->init_tuning = renesas_sdhi_init_tuning;
+ 		host->prepare_tuning = renesas_sdhi_prepare_tuning;
+ 		host->select_tuning = renesas_sdhi_select_tuning;
+ 		host->check_scc_error = renesas_sdhi_check_scc_error;
+-		host->hw_reset = renesas_sdhi_hw_reset;
+ 		host->prepare_hs400_tuning =
+ 			renesas_sdhi_prepare_hs400_tuning;
+ 		host->hs400_downgrade = renesas_sdhi_disable_scc;
+diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c
+index 00d41b312c79..a6f25c796aed 100644
+--- a/drivers/mmc/host/sdhci-esdhc-imx.c
++++ b/drivers/mmc/host/sdhci-esdhc-imx.c
+@@ -979,6 +979,7 @@ static void esdhc_set_uhs_signaling(struct sdhci_host *host, unsigned timing)
+ 	case MMC_TIMING_UHS_SDR25:
+ 	case MMC_TIMING_UHS_SDR50:
+ 	case MMC_TIMING_UHS_SDR104:
++	case MMC_TIMING_MMC_HS:
+ 	case MMC_TIMING_MMC_HS200:
+ 		writel(m, host->ioaddr + ESDHC_MIX_CTRL);
+ 		break;
+diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c
+index ddc1f9ca8ebc..4543ac97f077 100644
+--- a/drivers/net/dsa/lantiq_gswip.c
++++ b/drivers/net/dsa/lantiq_gswip.c
+@@ -1069,10 +1069,10 @@ static int gswip_probe(struct platform_device *pdev)
+ 	version = gswip_switch_r(priv, GSWIP_VERSION);
+ 
+ 	/* bring up the mdio bus */
+-	gphy_fw_np = of_find_compatible_node(pdev->dev.of_node, NULL,
+-					     "lantiq,gphy-fw");
++	gphy_fw_np = of_get_compatible_child(dev->of_node, "lantiq,gphy-fw");
+ 	if (gphy_fw_np) {
+ 		err = gswip_gphy_fw_list(priv, gphy_fw_np, version);
++		of_node_put(gphy_fw_np);
+ 		if (err) {
+ 			dev_err(dev, "gphy fw probe failed\n");
+ 			return err;
+@@ -1080,13 +1080,12 @@ static int gswip_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	/* bring up the mdio bus */
+-	mdio_np = of_find_compatible_node(pdev->dev.of_node, NULL,
+-					  "lantiq,xrx200-mdio");
++	mdio_np = of_get_compatible_child(dev->of_node, "lantiq,xrx200-mdio");
+ 	if (mdio_np) {
+ 		err = gswip_mdio(priv, mdio_np);
+ 		if (err) {
+ 			dev_err(dev, "mdio probe failed\n");
+-			goto gphy_fw;
++			goto put_mdio_node;
+ 		}
+ 	}
+ 
+@@ -1099,7 +1098,7 @@ static int gswip_probe(struct platform_device *pdev)
+ 		dev_err(dev, "wrong CPU port defined, HW only supports port: %i",
+ 			priv->hw_info->cpu_port);
+ 		err = -EINVAL;
+-		goto mdio_bus;
++		goto disable_switch;
+ 	}
+ 
+ 	platform_set_drvdata(pdev, priv);
+@@ -1109,10 +1108,14 @@ static int gswip_probe(struct platform_device *pdev)
+ 		 (version & GSWIP_VERSION_MOD_MASK) >> GSWIP_VERSION_MOD_SHIFT);
+ 	return 0;
+ 
++disable_switch:
++	gswip_mdio_mask(priv, GSWIP_MDIO_GLOB_ENABLE, 0, GSWIP_MDIO_GLOB);
++	dsa_unregister_switch(priv->ds);
+ mdio_bus:
+ 	if (mdio_np)
+ 		mdiobus_unregister(priv->ds->slave_mii_bus);
+-gphy_fw:
++put_mdio_node:
++	of_node_put(mdio_np);
+ 	for (i = 0; i < priv->num_gphy_fw; i++)
+ 		gswip_gphy_fw_remove(priv, &priv->gphy_fw[i]);
+ 	return err;
+@@ -1131,8 +1134,10 @@ static int gswip_remove(struct platform_device *pdev)
+ 
+ 	dsa_unregister_switch(priv->ds);
+ 
+-	if (priv->ds->slave_mii_bus)
++	if (priv->ds->slave_mii_bus) {
+ 		mdiobus_unregister(priv->ds->slave_mii_bus);
++		of_node_put(priv->ds->slave_mii_bus->dev.of_node);
++	}
+ 
+ 	for (i = 0; i < priv->num_gphy_fw; i++)
+ 		gswip_gphy_fw_remove(priv, &priv->gphy_fw[i]);
+diff --git a/drivers/net/wireless/marvell/libertas_tf/if_usb.c b/drivers/net/wireless/marvell/libertas_tf/if_usb.c
+index 789337ea676a..6ede6168bd85 100644
+--- a/drivers/net/wireless/marvell/libertas_tf/if_usb.c
++++ b/drivers/net/wireless/marvell/libertas_tf/if_usb.c
+@@ -433,8 +433,6 @@ static int __if_usb_submit_rx_urb(struct if_usb_card *cardp,
+ 			  skb_tail_pointer(skb),
+ 			  MRVDRV_ETH_RX_PACKET_BUFFER_SIZE, callbackfn, cardp);
+ 
+-	cardp->rx_urb->transfer_flags |= URB_ZERO_PACKET;
+-
+ 	lbtf_deb_usb2(&cardp->udev->dev, "Pointer for rx_urb %p\n",
+ 		cardp->rx_urb);
+ 	ret = usb_submit_urb(cardp->rx_urb, GFP_ATOMIC);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c b/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
+index c08bf371e527..7c9dfa54fee8 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
+@@ -309,7 +309,7 @@ void mt76x02_mac_write_txwi(struct mt76x02_dev *dev, struct mt76x02_txwi *txwi,
+ 		ccmp_pn[6] = pn >> 32;
+ 		ccmp_pn[7] = pn >> 40;
+ 		txwi->iv = *((__le32 *)&ccmp_pn[0]);
+-		txwi->eiv = *((__le32 *)&ccmp_pn[1]);
++		txwi->eiv = *((__le32 *)&ccmp_pn[4]);
+ 	}
+ 
+ 	spin_lock_bh(&dev->mt76.lock);
+diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c
+index a11bf4e6b451..6d6e9a12150b 100644
+--- a/drivers/nvdimm/label.c
++++ b/drivers/nvdimm/label.c
+@@ -755,7 +755,7 @@ static const guid_t *to_abstraction_guid(enum nvdimm_claim_class claim_class,
+ 
+ static int __pmem_label_update(struct nd_region *nd_region,
+ 		struct nd_mapping *nd_mapping, struct nd_namespace_pmem *nspm,
+-		int pos)
++		int pos, unsigned long flags)
+ {
+ 	struct nd_namespace_common *ndns = &nspm->nsio.common;
+ 	struct nd_interleave_set *nd_set = nd_region->nd_set;
+@@ -796,7 +796,7 @@ static int __pmem_label_update(struct nd_region *nd_region,
+ 	memcpy(nd_label->uuid, nspm->uuid, NSLABEL_UUID_LEN);
+ 	if (nspm->alt_name)
+ 		memcpy(nd_label->name, nspm->alt_name, NSLABEL_NAME_LEN);
+-	nd_label->flags = __cpu_to_le32(NSLABEL_FLAG_UPDATING);
++	nd_label->flags = __cpu_to_le32(flags);
+ 	nd_label->nlabel = __cpu_to_le16(nd_region->ndr_mappings);
+ 	nd_label->position = __cpu_to_le16(pos);
+ 	nd_label->isetcookie = __cpu_to_le64(cookie);
+@@ -1249,13 +1249,13 @@ static int del_labels(struct nd_mapping *nd_mapping, u8 *uuid)
+ int nd_pmem_namespace_label_update(struct nd_region *nd_region,
+ 		struct nd_namespace_pmem *nspm, resource_size_t size)
+ {
+-	int i;
++	int i, rc;
+ 
+ 	for (i = 0; i < nd_region->ndr_mappings; i++) {
+ 		struct nd_mapping *nd_mapping = &nd_region->mapping[i];
+ 		struct nvdimm_drvdata *ndd = to_ndd(nd_mapping);
+ 		struct resource *res;
+-		int rc, count = 0;
++		int count = 0;
+ 
+ 		if (size == 0) {
+ 			rc = del_labels(nd_mapping, nspm->uuid);
+@@ -1273,7 +1273,20 @@ int nd_pmem_namespace_label_update(struct nd_region *nd_region,
+ 		if (rc < 0)
+ 			return rc;
+ 
+-		rc = __pmem_label_update(nd_region, nd_mapping, nspm, i);
++		rc = __pmem_label_update(nd_region, nd_mapping, nspm, i,
++				NSLABEL_FLAG_UPDATING);
++		if (rc)
++			return rc;
++	}
++
++	if (size == 0)
++		return 0;
++
++	/* Clear the UPDATING flag per UEFI 2.7 expectations */
++	for (i = 0; i < nd_region->ndr_mappings; i++) {
++		struct nd_mapping *nd_mapping = &nd_region->mapping[i];
++
++		rc = __pmem_label_update(nd_region, nd_mapping, nspm, i, 0);
+ 		if (rc)
+ 			return rc;
+ 	}
+diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
+index 4b077555ac70..33a3b23b3db7 100644
+--- a/drivers/nvdimm/namespace_devs.c
++++ b/drivers/nvdimm/namespace_devs.c
+@@ -138,6 +138,7 @@ bool nd_is_uuid_unique(struct device *dev, u8 *uuid)
+ bool pmem_should_map_pages(struct device *dev)
+ {
+ 	struct nd_region *nd_region = to_nd_region(dev->parent);
++	struct nd_namespace_common *ndns = to_ndns(dev);
+ 	struct nd_namespace_io *nsio;
+ 
+ 	if (!IS_ENABLED(CONFIG_ZONE_DEVICE))
+@@ -149,6 +150,9 @@ bool pmem_should_map_pages(struct device *dev)
+ 	if (is_nd_pfn(dev) || is_nd_btt(dev))
+ 		return false;
+ 
++	if (ndns->force_raw)
++		return false;
++
+ 	nsio = to_nd_namespace_io(dev);
+ 	if (region_intersects(nsio->res.start, resource_size(&nsio->res),
+ 				IORESOURCE_SYSTEM_RAM,
+diff --git a/drivers/nvdimm/pfn_devs.c b/drivers/nvdimm/pfn_devs.c
+index 6f22272e8d80..7760c1b91853 100644
+--- a/drivers/nvdimm/pfn_devs.c
++++ b/drivers/nvdimm/pfn_devs.c
+@@ -593,7 +593,7 @@ static unsigned long init_altmap_base(resource_size_t base)
+ 
+ static unsigned long init_altmap_reserve(resource_size_t base)
+ {
+-	unsigned long reserve = PHYS_PFN(SZ_8K);
++	unsigned long reserve = PFN_UP(SZ_8K);
+ 	unsigned long base_pfn = PHYS_PFN(base);
+ 
+ 	reserve += base_pfn - PFN_SECTION_ALIGN_DOWN(base_pfn);
+@@ -678,7 +678,7 @@ static void trim_pfn_device(struct nd_pfn *nd_pfn, u32 *start_pad, u32 *end_trun
+ 	if (region_intersects(start, size, IORESOURCE_SYSTEM_RAM,
+ 				IORES_DESC_NONE) == REGION_MIXED
+ 			|| !IS_ALIGNED(end, nd_pfn->align)
+-			|| nd_region_conflict(nd_region, start, size + adjust))
++			|| nd_region_conflict(nd_region, start, size))
+ 		*end_trunc = end - phys_pmem_align_down(nd_pfn, end);
+ }
+ 
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index f7301bb4ef3b..3ce65927e11c 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -686,9 +686,7 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
+ 	if (rval)
+ 		goto err_remove_cells;
+ 
+-	rval = blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem);
+-	if (rval)
+-		goto err_remove_cells;
++	blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem);
+ 
+ 	return nvmem;
+ 
+diff --git a/drivers/opp/core.c b/drivers/opp/core.c
+index 18f1639dbc4a..f5d2fa195f5f 100644
+--- a/drivers/opp/core.c
++++ b/drivers/opp/core.c
+@@ -743,7 +743,7 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
+ 		old_freq, freq);
+ 
+ 	/* Scaling up? Configure required OPPs before frequency */
+-	if (freq > old_freq) {
++	if (freq >= old_freq) {
+ 		ret = _set_required_opps(dev, opp_table, opp);
+ 		if (ret)
+ 			goto put_opp;
+diff --git a/drivers/parport/parport_pc.c b/drivers/parport/parport_pc.c
+index 9c8249f74479..6296dbb83d47 100644
+--- a/drivers/parport/parport_pc.c
++++ b/drivers/parport/parport_pc.c
+@@ -1377,7 +1377,7 @@ static struct superio_struct *find_superio(struct parport *p)
+ {
+ 	int i;
+ 	for (i = 0; i < NR_SUPERIOS; i++)
+-		if (superios[i].io != p->base)
++		if (superios[i].io == p->base)
+ 			return &superios[i];
+ 	return NULL;
+ }
+diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
+index 721d60a5d9e4..9c5614f21b8e 100644
+--- a/drivers/pci/controller/dwc/pcie-designware-host.c
++++ b/drivers/pci/controller/dwc/pcie-designware-host.c
+@@ -439,7 +439,7 @@ int dw_pcie_host_init(struct pcie_port *pp)
+ 	if (ret)
+ 		pci->num_viewport = 2;
+ 
+-	if (IS_ENABLED(CONFIG_PCI_MSI)) {
++	if (IS_ENABLED(CONFIG_PCI_MSI) && pci_msi_enabled()) {
+ 		/*
+ 		 * If a specific SoC driver needs to change the
+ 		 * default number of vectors, it needs to implement
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index d185ea5fe996..a7f703556790 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -1228,7 +1228,7 @@ static int qcom_pcie_probe(struct platform_device *pdev)
+ 
+ 	pcie->ops = of_device_get_match_data(dev);
+ 
+-	pcie->reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_LOW);
++	pcie->reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_HIGH);
+ 	if (IS_ERR(pcie->reset)) {
+ 		ret = PTR_ERR(pcie->reset);
+ 		goto err_pm_runtime_put;
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index 750081c1cb48..6eecae447af3 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -499,7 +499,7 @@ static void advk_sw_pci_bridge_init(struct advk_pcie *pcie)
+ 	bridge->data = pcie;
+ 	bridge->ops = &advk_pci_bridge_emul_ops;
+ 
+-	pci_bridge_emul_init(bridge);
++	pci_bridge_emul_init(bridge, 0);
+ 
+ }
+ 
+diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c
+index fa0fc46edb0c..d3a0419e42f2 100644
+--- a/drivers/pci/controller/pci-mvebu.c
++++ b/drivers/pci/controller/pci-mvebu.c
+@@ -583,7 +583,7 @@ static void mvebu_pci_bridge_emul_init(struct mvebu_pcie_port *port)
+ 	bridge->data = port;
+ 	bridge->ops = &mvebu_pci_bridge_emul_ops;
+ 
+-	pci_bridge_emul_init(bridge);
++	pci_bridge_emul_init(bridge, PCI_BRIDGE_EMUL_NO_PREFETCHABLE_BAR);
+ }
+ 
+ static inline struct mvebu_pcie *sys_to_pcie(struct pci_sys_data *sys)
+diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
+index 7dd443aea5a5..c0fb64ace05a 100644
+--- a/drivers/pci/hotplug/pciehp_hpc.c
++++ b/drivers/pci/hotplug/pciehp_hpc.c
+@@ -736,12 +736,25 @@ void pcie_clear_hotplug_events(struct controller *ctrl)
+ 
+ void pcie_enable_interrupt(struct controller *ctrl)
+ {
+-	pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_HPIE, PCI_EXP_SLTCTL_HPIE);
++	u16 mask;
++
++	mask = PCI_EXP_SLTCTL_HPIE | PCI_EXP_SLTCTL_DLLSCE;
++	pcie_write_cmd(ctrl, mask, mask);
+ }
+ 
+ void pcie_disable_interrupt(struct controller *ctrl)
+ {
+-	pcie_write_cmd(ctrl, 0, PCI_EXP_SLTCTL_HPIE);
++	u16 mask;
++
++	/*
++	 * Mask hot-plug interrupt to prevent it triggering immediately
++	 * when the link goes inactive (we still get PME when any of the
++	 * enabled events is detected). Same goes with Link Layer State
++	 * changed event which generates PME immediately when the link goes
++	 * inactive so mask it as well.
++	 */
++	mask = PCI_EXP_SLTCTL_HPIE | PCI_EXP_SLTCTL_DLLSCE;
++	pcie_write_cmd(ctrl, 0, mask);
+ }
+ 
+ /*
+diff --git a/drivers/pci/pci-bridge-emul.c b/drivers/pci/pci-bridge-emul.c
+index 129738362d90..83fb077d0b41 100644
+--- a/drivers/pci/pci-bridge-emul.c
++++ b/drivers/pci/pci-bridge-emul.c
+@@ -24,29 +24,6 @@
+ #define PCI_CAP_PCIE_START	PCI_BRIDGE_CONF_END
+ #define PCI_CAP_PCIE_END	(PCI_CAP_PCIE_START + PCI_EXP_SLTSTA2 + 2)
+ 
+-/*
+- * Initialize a pci_bridge_emul structure to represent a fake PCI
+- * bridge configuration space. The caller needs to have initialized
+- * the PCI configuration space with whatever values make sense
+- * (typically at least vendor, device, revision), the ->ops pointer,
+- * and optionally ->data and ->has_pcie.
+- */
+-void pci_bridge_emul_init(struct pci_bridge_emul *bridge)
+-{
+-	bridge->conf.class_revision |= PCI_CLASS_BRIDGE_PCI << 16;
+-	bridge->conf.header_type = PCI_HEADER_TYPE_BRIDGE;
+-	bridge->conf.cache_line_size = 0x10;
+-	bridge->conf.status = PCI_STATUS_CAP_LIST;
+-
+-	if (bridge->has_pcie) {
+-		bridge->conf.capabilities_pointer = PCI_CAP_PCIE_START;
+-		bridge->pcie_conf.cap_id = PCI_CAP_ID_EXP;
+-		/* Set PCIe v2, root port, slot support */
+-		bridge->pcie_conf.cap = PCI_EXP_TYPE_ROOT_PORT << 4 | 2 |
+-			PCI_EXP_FLAGS_SLOT;
+-	}
+-}
+-
+ struct pci_bridge_reg_behavior {
+ 	/* Read-only bits */
+ 	u32 ro;
+@@ -283,6 +260,61 @@ const static struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = {
+ 	},
+ };
+ 
++/*
++ * Initialize a pci_bridge_emul structure to represent a fake PCI
++ * bridge configuration space. The caller needs to have initialized
++ * the PCI configuration space with whatever values make sense
++ * (typically at least vendor, device, revision), the ->ops pointer,
++ * and optionally ->data and ->has_pcie.
++ */
++int pci_bridge_emul_init(struct pci_bridge_emul *bridge,
++			 unsigned int flags)
++{
++	bridge->conf.class_revision |= PCI_CLASS_BRIDGE_PCI << 16;
++	bridge->conf.header_type = PCI_HEADER_TYPE_BRIDGE;
++	bridge->conf.cache_line_size = 0x10;
++	bridge->conf.status = PCI_STATUS_CAP_LIST;
++	bridge->pci_regs_behavior = kmemdup(pci_regs_behavior,
++					    sizeof(pci_regs_behavior),
++					    GFP_KERNEL);
++	if (!bridge->pci_regs_behavior)
++		return -ENOMEM;
++
++	if (bridge->has_pcie) {
++		bridge->conf.capabilities_pointer = PCI_CAP_PCIE_START;
++		bridge->pcie_conf.cap_id = PCI_CAP_ID_EXP;
++		/* Set PCIe v2, root port, slot support */
++		bridge->pcie_conf.cap = PCI_EXP_TYPE_ROOT_PORT << 4 | 2 |
++			PCI_EXP_FLAGS_SLOT;
++		bridge->pcie_cap_regs_behavior =
++			kmemdup(pcie_cap_regs_behavior,
++				sizeof(pcie_cap_regs_behavior),
++				GFP_KERNEL);
++		if (!bridge->pcie_cap_regs_behavior) {
++			kfree(bridge->pci_regs_behavior);
++			return -ENOMEM;
++		}
++	}
++
++	if (flags & PCI_BRIDGE_EMUL_NO_PREFETCHABLE_BAR) {
++		bridge->pci_regs_behavior[PCI_PREF_MEMORY_BASE / 4].ro = ~0;
++		bridge->pci_regs_behavior[PCI_PREF_MEMORY_BASE / 4].rw = 0;
++	}
++
++	return 0;
++}
++
++/*
++ * Cleanup a pci_bridge_emul structure that was previously initilized
++ * using pci_bridge_emul_init().
++ */
++void pci_bridge_emul_cleanup(struct pci_bridge_emul *bridge)
++{
++	if (bridge->has_pcie)
++		kfree(bridge->pcie_cap_regs_behavior);
++	kfree(bridge->pci_regs_behavior);
++}
++
+ /*
+  * Should be called by the PCI controller driver when reading the PCI
+  * configuration space of the fake bridge. It will call back the
+@@ -312,11 +344,11 @@ int pci_bridge_emul_conf_read(struct pci_bridge_emul *bridge, int where,
+ 		reg -= PCI_CAP_PCIE_START;
+ 		read_op = bridge->ops->read_pcie;
+ 		cfgspace = (u32 *) &bridge->pcie_conf;
+-		behavior = pcie_cap_regs_behavior;
++		behavior = bridge->pcie_cap_regs_behavior;
+ 	} else {
+ 		read_op = bridge->ops->read_base;
+ 		cfgspace = (u32 *) &bridge->conf;
+-		behavior = pci_regs_behavior;
++		behavior = bridge->pci_regs_behavior;
+ 	}
+ 
+ 	if (read_op)
+@@ -383,11 +415,11 @@ int pci_bridge_emul_conf_write(struct pci_bridge_emul *bridge, int where,
+ 		reg -= PCI_CAP_PCIE_START;
+ 		write_op = bridge->ops->write_pcie;
+ 		cfgspace = (u32 *) &bridge->pcie_conf;
+-		behavior = pcie_cap_regs_behavior;
++		behavior = bridge->pcie_cap_regs_behavior;
+ 	} else {
+ 		write_op = bridge->ops->write_base;
+ 		cfgspace = (u32 *) &bridge->conf;
+-		behavior = pci_regs_behavior;
++		behavior = bridge->pci_regs_behavior;
+ 	}
+ 
+ 	/* Keep all bits, except the RW bits */
+diff --git a/drivers/pci/pci-bridge-emul.h b/drivers/pci/pci-bridge-emul.h
+index 9d510ccf738b..e65b1b79899d 100644
+--- a/drivers/pci/pci-bridge-emul.h
++++ b/drivers/pci/pci-bridge-emul.h
+@@ -107,15 +107,26 @@ struct pci_bridge_emul_ops {
+ 			   u32 old, u32 new, u32 mask);
+ };
+ 
++struct pci_bridge_reg_behavior;
++
+ struct pci_bridge_emul {
+ 	struct pci_bridge_emul_conf conf;
+ 	struct pci_bridge_emul_pcie_conf pcie_conf;
+ 	struct pci_bridge_emul_ops *ops;
++	struct pci_bridge_reg_behavior *pci_regs_behavior;
++	struct pci_bridge_reg_behavior *pcie_cap_regs_behavior;
+ 	void *data;
+ 	bool has_pcie;
+ };
+ 
+-void pci_bridge_emul_init(struct pci_bridge_emul *bridge);
++enum {
++	PCI_BRIDGE_EMUL_NO_PREFETCHABLE_BAR = BIT(0),
++};
++
++int pci_bridge_emul_init(struct pci_bridge_emul *bridge,
++			 unsigned int flags);
++void pci_bridge_emul_cleanup(struct pci_bridge_emul *bridge);
++
+ int pci_bridge_emul_conf_read(struct pci_bridge_emul *bridge, int where,
+ 			      int size, u32 *value);
+ int pci_bridge_emul_conf_write(struct pci_bridge_emul *bridge, int where,
+diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c
+index e435d12e61a0..7b77754a82de 100644
+--- a/drivers/pci/pcie/dpc.c
++++ b/drivers/pci/pcie/dpc.c
+@@ -202,6 +202,28 @@ static void dpc_process_rp_pio_error(struct dpc_dev *dpc)
+ 	pci_write_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_STATUS, status);
+ }
+ 
++static int dpc_get_aer_uncorrect_severity(struct pci_dev *dev,
++					  struct aer_err_info *info)
++{
++	int pos = dev->aer_cap;
++	u32 status, mask, sev;
++
++	pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, &status);
++	pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_MASK, &mask);
++	status &= ~mask;
++	if (!status)
++		return 0;
++
++	pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_SEVER, &sev);
++	status &= sev;
++	if (status)
++		info->severity = AER_FATAL;
++	else
++		info->severity = AER_NONFATAL;
++
++	return 1;
++}
++
+ static irqreturn_t dpc_handler(int irq, void *context)
+ {
+ 	struct aer_err_info info;
+@@ -229,9 +251,12 @@ static irqreturn_t dpc_handler(int irq, void *context)
+ 	/* show RP PIO error detail information */
+ 	if (dpc->rp_extensions && reason == 3 && ext_reason == 0)
+ 		dpc_process_rp_pio_error(dpc);
+-	else if (reason == 0 && aer_get_device_error_info(pdev, &info)) {
++	else if (reason == 0 &&
++		 dpc_get_aer_uncorrect_severity(pdev, &info) &&
++		 aer_get_device_error_info(pdev, &info)) {
+ 		aer_print_error(pdev, &info);
+ 		pci_cleanup_aer_uncorrect_error_status(pdev);
++		pci_aer_clear_fatal_status(pdev);
+ 	}
+ 
+ 	/* We configure DPC so it only triggers on ERR_FATAL */
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index 257b9f6f2ebb..c46a3fcb341e 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -2071,11 +2071,8 @@ static void pci_configure_ltr(struct pci_dev *dev)
+ {
+ #ifdef CONFIG_PCIEASPM
+ 	struct pci_host_bridge *host = pci_find_host_bridge(dev->bus);
+-	u32 cap;
+ 	struct pci_dev *bridge;
+-
+-	if (!host->native_ltr)
+-		return;
++	u32 cap, ctl;
+ 
+ 	if (!pci_is_pcie(dev))
+ 		return;
+@@ -2084,22 +2081,35 @@ static void pci_configure_ltr(struct pci_dev *dev)
+ 	if (!(cap & PCI_EXP_DEVCAP2_LTR))
+ 		return;
+ 
+-	/*
+-	 * Software must not enable LTR in an Endpoint unless the Root
+-	 * Complex and all intermediate Switches indicate support for LTR.
+-	 * PCIe r3.1, sec 6.18.
+-	 */
+-	if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT)
+-		dev->ltr_path = 1;
+-	else {
++	pcie_capability_read_dword(dev, PCI_EXP_DEVCTL2, &ctl);
++	if (ctl & PCI_EXP_DEVCTL2_LTR_EN) {
++		if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT) {
++			dev->ltr_path = 1;
++			return;
++		}
++
+ 		bridge = pci_upstream_bridge(dev);
+ 		if (bridge && bridge->ltr_path)
+ 			dev->ltr_path = 1;
++
++		return;
+ 	}
+ 
+-	if (dev->ltr_path)
++	if (!host->native_ltr)
++		return;
++
++	/*
++	 * Software must not enable LTR in an Endpoint unless the Root
++	 * Complex and all intermediate Switches indicate support for LTR.
++	 * PCIe r4.0, sec 6.18.
++	 */
++	if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT ||
++	    ((bridge = pci_upstream_bridge(dev)) &&
++	      bridge->ltr_path)) {
+ 		pcie_capability_set_word(dev, PCI_EXP_DEVCTL2,
+ 					 PCI_EXP_DEVCTL2_LTR_EN);
++		dev->ltr_path = 1;
++	}
+ #endif
+ }
+ 
+diff --git a/drivers/power/supply/cpcap-charger.c b/drivers/power/supply/cpcap-charger.c
+index c843eaff8ad0..c3ed7b476676 100644
+--- a/drivers/power/supply/cpcap-charger.c
++++ b/drivers/power/supply/cpcap-charger.c
+@@ -458,6 +458,7 @@ static void cpcap_usb_detect(struct work_struct *work)
+ 			goto out_err;
+ 	}
+ 
++	power_supply_changed(ddata->usb);
+ 	return;
+ 
+ out_err:
+diff --git a/drivers/regulator/max77620-regulator.c b/drivers/regulator/max77620-regulator.c
+index b94e3a721721..cd93cf53e23c 100644
+--- a/drivers/regulator/max77620-regulator.c
++++ b/drivers/regulator/max77620-regulator.c
+@@ -1,7 +1,7 @@
+ /*
+  * Maxim MAX77620 Regulator driver
+  *
+- * Copyright (c) 2016, NVIDIA CORPORATION.  All rights reserved.
++ * Copyright (c) 2016-2018, NVIDIA CORPORATION.  All rights reserved.
+  *
+  * Author: Mallikarjun Kasoju <mkasoju@nvidia.com>
+  *	Laxman Dewangan <ldewangan@nvidia.com>
+@@ -803,6 +803,14 @@ static int max77620_regulator_probe(struct platform_device *pdev)
+ 		rdesc = &rinfo[id].desc;
+ 		pmic->rinfo[id] = &max77620_regs_info[id];
+ 		pmic->enable_power_mode[id] = MAX77620_POWER_MODE_NORMAL;
++		pmic->reg_pdata[id].active_fps_src = -1;
++		pmic->reg_pdata[id].active_fps_pd_slot = -1;
++		pmic->reg_pdata[id].active_fps_pu_slot = -1;
++		pmic->reg_pdata[id].suspend_fps_src = -1;
++		pmic->reg_pdata[id].suspend_fps_pd_slot = -1;
++		pmic->reg_pdata[id].suspend_fps_pu_slot = -1;
++		pmic->reg_pdata[id].power_ok = -1;
++		pmic->reg_pdata[id].ramp_rate_setting = -1;
+ 
+ 		ret = max77620_read_slew_rate(pmic, id);
+ 		if (ret < 0)
+diff --git a/drivers/regulator/s2mpa01.c b/drivers/regulator/s2mpa01.c
+index 095d25f3d2ea..58a1fe583a6c 100644
+--- a/drivers/regulator/s2mpa01.c
++++ b/drivers/regulator/s2mpa01.c
+@@ -298,13 +298,13 @@ static const struct regulator_desc regulators[] = {
+ 	regulator_desc_ldo(2, STEP_50_MV),
+ 	regulator_desc_ldo(3, STEP_50_MV),
+ 	regulator_desc_ldo(4, STEP_50_MV),
+-	regulator_desc_ldo(5, STEP_50_MV),
++	regulator_desc_ldo(5, STEP_25_MV),
+ 	regulator_desc_ldo(6, STEP_25_MV),
+ 	regulator_desc_ldo(7, STEP_50_MV),
+ 	regulator_desc_ldo(8, STEP_50_MV),
+ 	regulator_desc_ldo(9, STEP_50_MV),
+ 	regulator_desc_ldo(10, STEP_50_MV),
+-	regulator_desc_ldo(11, STEP_25_MV),
++	regulator_desc_ldo(11, STEP_50_MV),
+ 	regulator_desc_ldo(12, STEP_50_MV),
+ 	regulator_desc_ldo(13, STEP_50_MV),
+ 	regulator_desc_ldo(14, STEP_50_MV),
+@@ -315,11 +315,11 @@ static const struct regulator_desc regulators[] = {
+ 	regulator_desc_ldo(19, STEP_50_MV),
+ 	regulator_desc_ldo(20, STEP_50_MV),
+ 	regulator_desc_ldo(21, STEP_50_MV),
+-	regulator_desc_ldo(22, STEP_25_MV),
+-	regulator_desc_ldo(23, STEP_25_MV),
++	regulator_desc_ldo(22, STEP_50_MV),
++	regulator_desc_ldo(23, STEP_50_MV),
+ 	regulator_desc_ldo(24, STEP_50_MV),
+ 	regulator_desc_ldo(25, STEP_50_MV),
+-	regulator_desc_ldo(26, STEP_50_MV),
++	regulator_desc_ldo(26, STEP_25_MV),
+ 	regulator_desc_buck1_4(1),
+ 	regulator_desc_buck1_4(2),
+ 	regulator_desc_buck1_4(3),
+diff --git a/drivers/regulator/s2mps11.c b/drivers/regulator/s2mps11.c
+index ee4a23ab0663..134c62db36c5 100644
+--- a/drivers/regulator/s2mps11.c
++++ b/drivers/regulator/s2mps11.c
+@@ -362,7 +362,7 @@ static const struct regulator_desc s2mps11_regulators[] = {
+ 	regulator_desc_s2mps11_ldo(32, STEP_50_MV),
+ 	regulator_desc_s2mps11_ldo(33, STEP_50_MV),
+ 	regulator_desc_s2mps11_ldo(34, STEP_50_MV),
+-	regulator_desc_s2mps11_ldo(35, STEP_50_MV),
++	regulator_desc_s2mps11_ldo(35, STEP_25_MV),
+ 	regulator_desc_s2mps11_ldo(36, STEP_50_MV),
+ 	regulator_desc_s2mps11_ldo(37, STEP_50_MV),
+ 	regulator_desc_s2mps11_ldo(38, STEP_50_MV),
+@@ -372,8 +372,8 @@ static const struct regulator_desc s2mps11_regulators[] = {
+ 	regulator_desc_s2mps11_buck1_4(4),
+ 	regulator_desc_s2mps11_buck5,
+ 	regulator_desc_s2mps11_buck67810(6, MIN_600_MV, STEP_6_25_MV),
+-	regulator_desc_s2mps11_buck67810(7, MIN_600_MV, STEP_6_25_MV),
+-	regulator_desc_s2mps11_buck67810(8, MIN_600_MV, STEP_6_25_MV),
++	regulator_desc_s2mps11_buck67810(7, MIN_600_MV, STEP_12_5_MV),
++	regulator_desc_s2mps11_buck67810(8, MIN_600_MV, STEP_12_5_MV),
+ 	regulator_desc_s2mps11_buck9,
+ 	regulator_desc_s2mps11_buck67810(10, MIN_750_MV, STEP_12_5_MV),
+ };
+diff --git a/drivers/s390/crypto/vfio_ap_drv.c b/drivers/s390/crypto/vfio_ap_drv.c
+index 31c6c847eaca..e9824c35c34f 100644
+--- a/drivers/s390/crypto/vfio_ap_drv.c
++++ b/drivers/s390/crypto/vfio_ap_drv.c
+@@ -15,7 +15,6 @@
+ #include "vfio_ap_private.h"
+ 
+ #define VFIO_AP_ROOT_NAME "vfio_ap"
+-#define VFIO_AP_DEV_TYPE_NAME "ap_matrix"
+ #define VFIO_AP_DEV_NAME "matrix"
+ 
+ MODULE_AUTHOR("IBM Corporation");
+@@ -24,10 +23,6 @@ MODULE_LICENSE("GPL v2");
+ 
+ static struct ap_driver vfio_ap_drv;
+ 
+-static struct device_type vfio_ap_dev_type = {
+-	.name = VFIO_AP_DEV_TYPE_NAME,
+-};
+-
+ struct ap_matrix_dev *matrix_dev;
+ 
+ /* Only type 10 adapters (CEX4 and later) are supported
+@@ -62,6 +57,22 @@ static void vfio_ap_matrix_dev_release(struct device *dev)
+ 	kfree(matrix_dev);
+ }
+ 
++static int matrix_bus_match(struct device *dev, struct device_driver *drv)
++{
++	return 1;
++}
++
++static struct bus_type matrix_bus = {
++	.name = "matrix",
++	.match = &matrix_bus_match,
++};
++
++static struct device_driver matrix_driver = {
++	.name = "vfio_ap",
++	.bus = &matrix_bus,
++	.suppress_bind_attrs = true,
++};
++
+ static int vfio_ap_matrix_dev_create(void)
+ {
+ 	int ret;
+@@ -71,6 +82,10 @@ static int vfio_ap_matrix_dev_create(void)
+ 	if (IS_ERR(root_device))
+ 		return PTR_ERR(root_device);
+ 
++	ret = bus_register(&matrix_bus);
++	if (ret)
++		goto bus_register_err;
++
+ 	matrix_dev = kzalloc(sizeof(*matrix_dev), GFP_KERNEL);
+ 	if (!matrix_dev) {
+ 		ret = -ENOMEM;
+@@ -87,30 +102,41 @@ static int vfio_ap_matrix_dev_create(void)
+ 	mutex_init(&matrix_dev->lock);
+ 	INIT_LIST_HEAD(&matrix_dev->mdev_list);
+ 
+-	matrix_dev->device.type = &vfio_ap_dev_type;
+ 	dev_set_name(&matrix_dev->device, "%s", VFIO_AP_DEV_NAME);
+ 	matrix_dev->device.parent = root_device;
++	matrix_dev->device.bus = &matrix_bus;
+ 	matrix_dev->device.release = vfio_ap_matrix_dev_release;
+-	matrix_dev->device.driver = &vfio_ap_drv.driver;
++	matrix_dev->vfio_ap_drv = &vfio_ap_drv;
+ 
+ 	ret = device_register(&matrix_dev->device);
+ 	if (ret)
+ 		goto matrix_reg_err;
+ 
++	ret = driver_register(&matrix_driver);
++	if (ret)
++		goto matrix_drv_err;
++
+ 	return 0;
+ 
++matrix_drv_err:
++	device_unregister(&matrix_dev->device);
+ matrix_reg_err:
+ 	put_device(&matrix_dev->device);
+ matrix_alloc_err:
++	bus_unregister(&matrix_bus);
++bus_register_err:
+ 	root_device_unregister(root_device);
+-
+ 	return ret;
+ }
+ 
+ static void vfio_ap_matrix_dev_destroy(void)
+ {
++	struct device *root_device = matrix_dev->device.parent;
++
++	driver_unregister(&matrix_driver);
+ 	device_unregister(&matrix_dev->device);
+-	root_device_unregister(matrix_dev->device.parent);
++	bus_unregister(&matrix_bus);
++	root_device_unregister(root_device);
+ }
+ 
+ static int __init vfio_ap_init(void)
+diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_ap_ops.c
+index 272ef427dcc0..900b9cf20ca5 100644
+--- a/drivers/s390/crypto/vfio_ap_ops.c
++++ b/drivers/s390/crypto/vfio_ap_ops.c
+@@ -198,8 +198,8 @@ static int vfio_ap_verify_queue_reserved(unsigned long *apid,
+ 	qres.apqi = apqi;
+ 	qres.reserved = false;
+ 
+-	ret = driver_for_each_device(matrix_dev->device.driver, NULL, &qres,
+-				     vfio_ap_has_queue);
++	ret = driver_for_each_device(&matrix_dev->vfio_ap_drv->driver, NULL,
++				     &qres, vfio_ap_has_queue);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/s390/crypto/vfio_ap_private.h b/drivers/s390/crypto/vfio_ap_private.h
+index 5675492233c7..76b7f98e47e9 100644
+--- a/drivers/s390/crypto/vfio_ap_private.h
++++ b/drivers/s390/crypto/vfio_ap_private.h
+@@ -40,6 +40,7 @@ struct ap_matrix_dev {
+ 	struct ap_config_info info;
+ 	struct list_head mdev_list;
+ 	struct mutex lock;
++	struct ap_driver  *vfio_ap_drv;
+ };
+ 
+ extern struct ap_matrix_dev *matrix_dev;
+diff --git a/drivers/s390/virtio/virtio_ccw.c b/drivers/s390/virtio/virtio_ccw.c
+index ae1d56da671d..1a738fe9f26b 100644
+--- a/drivers/s390/virtio/virtio_ccw.c
++++ b/drivers/s390/virtio/virtio_ccw.c
+@@ -272,6 +272,8 @@ static void virtio_ccw_drop_indicators(struct virtio_ccw_device *vcdev)
+ {
+ 	struct virtio_ccw_vq_info *info;
+ 
++	if (!vcdev->airq_info)
++		return;
+ 	list_for_each_entry(info, &vcdev->virtqueues, node)
+ 		drop_airq_indicator(info->vq, vcdev->airq_info);
+ }
+@@ -413,7 +415,7 @@ static int virtio_ccw_read_vq_conf(struct virtio_ccw_device *vcdev,
+ 	ret = ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_READ_VQ_CONF);
+ 	if (ret)
+ 		return ret;
+-	return vcdev->config_block->num;
++	return vcdev->config_block->num ?: -ENOENT;
+ }
+ 
+ static void virtio_ccw_del_vq(struct virtqueue *vq, struct ccw1 *ccw)
+diff --git a/drivers/scsi/aacraid/linit.c b/drivers/scsi/aacraid/linit.c
+index 7e56a11836c1..ccefface7e31 100644
+--- a/drivers/scsi/aacraid/linit.c
++++ b/drivers/scsi/aacraid/linit.c
+@@ -413,13 +413,16 @@ static int aac_slave_configure(struct scsi_device *sdev)
+ 	if (chn < AAC_MAX_BUSES && tid < AAC_MAX_TARGETS && aac->sa_firmware) {
+ 		devtype = aac->hba_map[chn][tid].devtype;
+ 
+-		if (devtype == AAC_DEVTYPE_NATIVE_RAW)
++		if (devtype == AAC_DEVTYPE_NATIVE_RAW) {
+ 			depth = aac->hba_map[chn][tid].qd_limit;
+-		else if (devtype == AAC_DEVTYPE_ARC_RAW)
++			set_timeout = 1;
++			goto common_config;
++		}
++		if (devtype == AAC_DEVTYPE_ARC_RAW) {
+ 			set_qd_dev_type = true;
+-
+-		set_timeout = 1;
+-		goto common_config;
++			set_timeout = 1;
++			goto common_config;
++		}
+ 	}
+ 
+ 	if (aac->jbod && (sdev->type == TYPE_DISK))
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 8d1acc802a67..f44e640229e7 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -644,11 +644,14 @@ static void qla24xx_handle_gnl_done_event(scsi_qla_host_t *vha,
+ 				break;
+ 			case DSC_LS_PORT_UNAVAIL:
+ 			default:
+-				if (fcport->loop_id != FC_NO_LOOP_ID)
+-					qla2x00_clear_loop_id(fcport);
+-
+-				fcport->loop_id = loop_id;
+-				fcport->fw_login_state = DSC_LS_PORT_UNAVAIL;
++				if (fcport->loop_id == FC_NO_LOOP_ID) {
++					qla2x00_find_new_loop_id(vha, fcport);
++					fcport->fw_login_state =
++					    DSC_LS_PORT_UNAVAIL;
++				}
++				ql_dbg(ql_dbg_disc, vha, 0x20e5,
++				    "%s %d %8phC\n", __func__, __LINE__,
++				    fcport->port_name);
+ 				qla24xx_fcport_handle_login(vha, fcport);
+ 				break;
+ 			}
+@@ -1471,29 +1474,6 @@ int qla24xx_fcport_handle_login(struct scsi_qla_host *vha, fc_port_t *fcport)
+ 	return 0;
+ }
+ 
+-static
+-void qla24xx_handle_rscn_event(fc_port_t *fcport, struct event_arg *ea)
+-{
+-	fcport->rscn_gen++;
+-
+-	ql_dbg(ql_dbg_disc, fcport->vha, 0x210c,
+-	    "%s %8phC DS %d LS %d\n",
+-	    __func__, fcport->port_name, fcport->disc_state,
+-	    fcport->fw_login_state);
+-
+-	if (fcport->flags & FCF_ASYNC_SENT)
+-		return;
+-
+-	switch (fcport->disc_state) {
+-	case DSC_DELETED:
+-	case DSC_LOGIN_COMPLETE:
+-		qla24xx_post_gpnid_work(fcport->vha, &ea->id);
+-		break;
+-	default:
+-		break;
+-	}
+-}
+-
+ int qla24xx_post_newsess_work(struct scsi_qla_host *vha, port_id_t *id,
+     u8 *port_name, u8 *node_name, void *pla, u8 fc4_type)
+ {
+@@ -1560,8 +1540,6 @@ static void qla_handle_els_plogi_done(scsi_qla_host_t *vha,
+ 
+ void qla2x00_fcport_event_handler(scsi_qla_host_t *vha, struct event_arg *ea)
+ {
+-	fc_port_t *f, *tf;
+-	uint32_t id = 0, mask, rid;
+ 	fc_port_t *fcport;
+ 
+ 	switch (ea->event) {
+@@ -1574,10 +1552,6 @@ void qla2x00_fcport_event_handler(scsi_qla_host_t *vha, struct event_arg *ea)
+ 	case FCME_RSCN:
+ 		if (test_bit(UNLOADING, &vha->dpc_flags))
+ 			return;
+-		switch (ea->id.b.rsvd_1) {
+-		case RSCN_PORT_ADDR:
+-#define BIGSCAN 1
+-#if defined BIGSCAN & BIGSCAN > 0
+ 		{
+ 			unsigned long flags;
+ 			fcport = qla2x00_find_fcport_by_nportid
+@@ -1596,59 +1570,6 @@ void qla2x00_fcport_event_handler(scsi_qla_host_t *vha, struct event_arg *ea)
+ 			}
+ 			spin_unlock_irqrestore(&vha->work_lock, flags);
+ 		}
+-#else
+-		{
+-			int rc;
+-			fcport = qla2x00_find_fcport_by_nportid(vha, &ea->id, 1);
+-			if (!fcport) {
+-				/* cable moved */
+-				 rc = qla24xx_post_gpnid_work(vha, &ea->id);
+-				 if (rc) {
+-					 ql_log(ql_log_warn, vha, 0xd044,
+-					     "RSCN GPNID work failed %06x\n",
+-					     ea->id.b24);
+-				 }
+-			} else {
+-				ea->fcport = fcport;
+-				fcport->scan_needed = 1;
+-				qla24xx_handle_rscn_event(fcport, ea);
+-			}
+-		}
+-#endif
+-			break;
+-		case RSCN_AREA_ADDR:
+-		case RSCN_DOM_ADDR:
+-			if (ea->id.b.rsvd_1 == RSCN_AREA_ADDR) {
+-				mask = 0xffff00;
+-				ql_dbg(ql_dbg_async, vha, 0x5044,
+-				    "RSCN: Area 0x%06x was affected\n",
+-				    ea->id.b24);
+-			} else {
+-				mask = 0xff0000;
+-				ql_dbg(ql_dbg_async, vha, 0x507a,
+-				    "RSCN: Domain 0x%06x was affected\n",
+-				    ea->id.b24);
+-			}
+-
+-			rid = ea->id.b24 & mask;
+-			list_for_each_entry_safe(f, tf, &vha->vp_fcports,
+-			    list) {
+-				id = f->d_id.b24 & mask;
+-				if (rid == id) {
+-					ea->fcport = f;
+-					qla24xx_handle_rscn_event(f, ea);
+-				}
+-			}
+-			break;
+-		case RSCN_FAB_ADDR:
+-		default:
+-			ql_log(ql_log_warn, vha, 0xd045,
+-			    "RSCN: Fabric was affected. Addr format %d\n",
+-			    ea->id.b.rsvd_1);
+-			qla2x00_mark_all_devices_lost(vha, 1);
+-			set_bit(LOOP_RESYNC_NEEDED, &vha->dpc_flags);
+-			set_bit(LOCAL_LOOP_UPDATE, &vha->dpc_flags);
+-		}
+ 		break;
+ 	case FCME_GNL_DONE:
+ 		qla24xx_handle_gnl_done_event(vha, ea);
+@@ -1709,11 +1630,7 @@ void qla_rscn_replay(fc_port_t *fcport)
+                ea.event = FCME_RSCN;
+                ea.id = fcport->d_id;
+                ea.id.b.rsvd_1 = RSCN_PORT_ADDR;
+-#if defined BIGSCAN & BIGSCAN > 0
+                qla2x00_fcport_event_handler(fcport->vha, &ea);
+-#else
+-               qla24xx_post_gpnid_work(fcport->vha, &ea.id);
+-#endif
+ 	}
+ }
+ 
+diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
+index 8507c43b918c..1a20e5d8f057 100644
+--- a/drivers/scsi/qla2xxx/qla_isr.c
++++ b/drivers/scsi/qla2xxx/qla_isr.c
+@@ -3410,7 +3410,7 @@ qla24xx_enable_msix(struct qla_hw_data *ha, struct rsp_que *rsp)
+ 		min_vecs++;
+ 	}
+ 
+-	if (USER_CTRL_IRQ(ha)) {
++	if (USER_CTRL_IRQ(ha) || !ha->mqiobase) {
+ 		/* user wants to control IRQ setting for target mode */
+ 		ret = pci_alloc_irq_vectors(ha->pdev, min_vecs,
+ 		    ha->msix_count, PCI_IRQ_MSIX);
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index c6ef83d0d99b..7e35ce2162d0 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -6936,7 +6936,7 @@ static int qla2xxx_map_queues(struct Scsi_Host *shost)
+ 	scsi_qla_host_t *vha = (scsi_qla_host_t *)shost->hostdata;
+ 	struct blk_mq_queue_map *qmap = &shost->tag_set.map[0];
+ 
+-	if (USER_CTRL_IRQ(vha->hw))
++	if (USER_CTRL_IRQ(vha->hw) || !vha->hw->mqiobase)
+ 		rc = blk_mq_map_queues(qmap);
+ 	else
+ 		rc = blk_mq_pci_map_queues(qmap, vha->hw->pdev, vha->irq_offset);
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 5464d467e23e..b84099479fe0 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -3047,6 +3047,55 @@ static void sd_read_security(struct scsi_disk *sdkp, unsigned char *buffer)
+ 		sdkp->security = 1;
+ }
+ 
++/*
++ * Determine the device's preferred I/O size for reads and writes
++ * unless the reported value is unreasonably small, large, not a
++ * multiple of the physical block size, or simply garbage.
++ */
++static bool sd_validate_opt_xfer_size(struct scsi_disk *sdkp,
++				      unsigned int dev_max)
++{
++	struct scsi_device *sdp = sdkp->device;
++	unsigned int opt_xfer_bytes =
++		logical_to_bytes(sdp, sdkp->opt_xfer_blocks);
++
++	if (sdkp->opt_xfer_blocks > dev_max) {
++		sd_first_printk(KERN_WARNING, sdkp,
++				"Optimal transfer size %u logical blocks " \
++				"> dev_max (%u logical blocks)\n",
++				sdkp->opt_xfer_blocks, dev_max);
++		return false;
++	}
++
++	if (sdkp->opt_xfer_blocks > SD_DEF_XFER_BLOCKS) {
++		sd_first_printk(KERN_WARNING, sdkp,
++				"Optimal transfer size %u logical blocks " \
++				"> sd driver limit (%u logical blocks)\n",
++				sdkp->opt_xfer_blocks, SD_DEF_XFER_BLOCKS);
++		return false;
++	}
++
++	if (opt_xfer_bytes < PAGE_SIZE) {
++		sd_first_printk(KERN_WARNING, sdkp,
++				"Optimal transfer size %u bytes < " \
++				"PAGE_SIZE (%u bytes)\n",
++				opt_xfer_bytes, (unsigned int)PAGE_SIZE);
++		return false;
++	}
++
++	if (opt_xfer_bytes & (sdkp->physical_block_size - 1)) {
++		sd_first_printk(KERN_WARNING, sdkp,
++				"Optimal transfer size %u bytes not a " \
++				"multiple of physical block size (%u bytes)\n",
++				opt_xfer_bytes, sdkp->physical_block_size);
++		return false;
++	}
++
++	sd_first_printk(KERN_INFO, sdkp, "Optimal transfer size %u bytes\n",
++			opt_xfer_bytes);
++	return true;
++}
++
+ /**
+  *	sd_revalidate_disk - called the first time a new disk is seen,
+  *	performs disk spin up, read_capacity, etc.
+@@ -3125,15 +3174,7 @@ static int sd_revalidate_disk(struct gendisk *disk)
+ 	dev_max = min_not_zero(dev_max, sdkp->max_xfer_blocks);
+ 	q->limits.max_dev_sectors = logical_to_sectors(sdp, dev_max);
+ 
+-	/*
+-	 * Determine the device's preferred I/O size for reads and writes
+-	 * unless the reported value is unreasonably small, large, or
+-	 * garbage.
+-	 */
+-	if (sdkp->opt_xfer_blocks &&
+-	    sdkp->opt_xfer_blocks <= dev_max &&
+-	    sdkp->opt_xfer_blocks <= SD_DEF_XFER_BLOCKS &&
+-	    logical_to_bytes(sdp, sdkp->opt_xfer_blocks) >= PAGE_SIZE) {
++	if (sd_validate_opt_xfer_size(sdkp, dev_max)) {
+ 		q->limits.io_opt = logical_to_bytes(sdp, sdkp->opt_xfer_blocks);
+ 		rw_max = logical_to_sectors(sdp, sdkp->opt_xfer_blocks);
+ 	} else
+diff --git a/drivers/scsi/virtio_scsi.c b/drivers/scsi/virtio_scsi.c
+index 772b976e4ee4..464cba521fb6 100644
+--- a/drivers/scsi/virtio_scsi.c
++++ b/drivers/scsi/virtio_scsi.c
+@@ -594,7 +594,6 @@ static int virtscsi_device_reset(struct scsi_cmnd *sc)
+ 		return FAILED;
+ 
+ 	memset(cmd, 0, sizeof(*cmd));
+-	cmd->sc = sc;
+ 	cmd->req.tmf = (struct virtio_scsi_ctrl_tmf_req){
+ 		.type = VIRTIO_SCSI_T_TMF,
+ 		.subtype = cpu_to_virtio32(vscsi->vdev,
+@@ -653,7 +652,6 @@ static int virtscsi_abort(struct scsi_cmnd *sc)
+ 		return FAILED;
+ 
+ 	memset(cmd, 0, sizeof(*cmd));
+-	cmd->sc = sc;
+ 	cmd->req.tmf = (struct virtio_scsi_ctrl_tmf_req){
+ 		.type = VIRTIO_SCSI_T_TMF,
+ 		.subtype = VIRTIO_SCSI_T_TMF_ABORT_TASK,
+diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
+index c7beb6841289..ab8f731a3426 100644
+--- a/drivers/soc/qcom/rpmh.c
++++ b/drivers/soc/qcom/rpmh.c
+@@ -80,6 +80,7 @@ void rpmh_tx_done(const struct tcs_request *msg, int r)
+ 	struct rpmh_request *rpm_msg = container_of(msg, struct rpmh_request,
+ 						    msg);
+ 	struct completion *compl = rpm_msg->completion;
++	bool free = rpm_msg->needs_free;
+ 
+ 	rpm_msg->err = r;
+ 
+@@ -94,7 +95,7 @@ void rpmh_tx_done(const struct tcs_request *msg, int r)
+ 	complete(compl);
+ 
+ exit:
+-	if (rpm_msg->needs_free)
++	if (free)
+ 		kfree(rpm_msg);
+ }
+ 
+@@ -348,11 +349,12 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
+ {
+ 	struct batch_cache_req *req;
+ 	struct rpmh_request *rpm_msgs;
+-	DECLARE_COMPLETION_ONSTACK(compl);
++	struct completion *compls;
+ 	struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
+ 	unsigned long time_left;
+ 	int count = 0;
+-	int ret, i, j;
++	int ret, i;
++	void *ptr;
+ 
+ 	if (!cmd || !n)
+ 		return -EINVAL;
+@@ -362,10 +364,15 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
+ 	if (!count)
+ 		return -EINVAL;
+ 
+-	req = kzalloc(sizeof(*req) + count * sizeof(req->rpm_msgs[0]),
++	ptr = kzalloc(sizeof(*req) +
++		      count * (sizeof(req->rpm_msgs[0]) + sizeof(*compls)),
+ 		      GFP_ATOMIC);
+-	if (!req)
++	if (!ptr)
+ 		return -ENOMEM;
++
++	req = ptr;
++	compls = ptr + sizeof(*req) + count * sizeof(*rpm_msgs);
++
+ 	req->count = count;
+ 	rpm_msgs = req->rpm_msgs;
+ 
+@@ -380,25 +387,26 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
+ 	}
+ 
+ 	for (i = 0; i < count; i++) {
+-		rpm_msgs[i].completion = &compl;
++		struct completion *compl = &compls[i];
++
++		init_completion(compl);
++		rpm_msgs[i].completion = compl;
+ 		ret = rpmh_rsc_send_data(ctrlr_to_drv(ctrlr), &rpm_msgs[i].msg);
+ 		if (ret) {
+ 			pr_err("Error(%d) sending RPMH message addr=%#x\n",
+ 			       ret, rpm_msgs[i].msg.cmds[0].addr);
+-			for (j = i; j < count; j++)
+-				rpmh_tx_done(&rpm_msgs[j].msg, ret);
+ 			break;
+ 		}
+ 	}
+ 
+ 	time_left = RPMH_TIMEOUT_MS;
+-	for (i = 0; i < count; i++) {
+-		time_left = wait_for_completion_timeout(&compl, time_left);
++	while (i--) {
++		time_left = wait_for_completion_timeout(&compls[i], time_left);
+ 		if (!time_left) {
+ 			/*
+ 			 * Better hope they never finish because they'll signal
+-			 * the completion on our stack and that's bad once
+-			 * we've returned from the function.
++			 * the completion that we're going to free once
++			 * we've returned from this function.
+ 			 */
+ 			WARN_ON(1);
+ 			ret = -ETIMEDOUT;
+@@ -407,7 +415,7 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
+ 	}
+ 
+ exit:
+-	kfree(req);
++	kfree(ptr);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/spi/spi-gpio.c b/drivers/spi/spi-gpio.c
+index a4aee26028cd..53b35c56a557 100644
+--- a/drivers/spi/spi-gpio.c
++++ b/drivers/spi/spi-gpio.c
+@@ -428,7 +428,8 @@ static int spi_gpio_probe(struct platform_device *pdev)
+ 		return status;
+ 
+ 	master->bits_per_word_mask = SPI_BPW_RANGE_MASK(1, 32);
+-	master->mode_bits = SPI_3WIRE | SPI_3WIRE_HIZ | SPI_CPHA | SPI_CPOL;
++	master->mode_bits = SPI_3WIRE | SPI_3WIRE_HIZ | SPI_CPHA | SPI_CPOL |
++			    SPI_CS_HIGH;
+ 	master->flags = master_flags;
+ 	master->bus_num = pdev->id;
+ 	/* The master needs to think there is a chipselect even if not connected */
+@@ -455,7 +456,6 @@ static int spi_gpio_probe(struct platform_device *pdev)
+ 		spi_gpio->bitbang.txrx_word[SPI_MODE_3] = spi_gpio_spec_txrx_word_mode3;
+ 	}
+ 	spi_gpio->bitbang.setup_transfer = spi_bitbang_setup_transfer;
+-	spi_gpio->bitbang.flags = SPI_CS_HIGH;
+ 
+ 	status = spi_bitbang_start(&spi_gpio->bitbang);
+ 	if (status)
+diff --git a/drivers/spi/spi-omap2-mcspi.c b/drivers/spi/spi-omap2-mcspi.c
+index 2fd8881fcd65..8be304379628 100644
+--- a/drivers/spi/spi-omap2-mcspi.c
++++ b/drivers/spi/spi-omap2-mcspi.c
+@@ -623,8 +623,8 @@ omap2_mcspi_txrx_dma(struct spi_device *spi, struct spi_transfer *xfer)
+ 	cfg.dst_addr = cs->phys + OMAP2_MCSPI_TX0;
+ 	cfg.src_addr_width = width;
+ 	cfg.dst_addr_width = width;
+-	cfg.src_maxburst = es;
+-	cfg.dst_maxburst = es;
++	cfg.src_maxburst = 1;
++	cfg.dst_maxburst = 1;
+ 
+ 	rx = xfer->rx_buf;
+ 	tx = xfer->tx_buf;
+diff --git a/drivers/spi/spi-pxa2xx.c b/drivers/spi/spi-pxa2xx.c
+index d84b893a64d7..3e82eaad0f2d 100644
+--- a/drivers/spi/spi-pxa2xx.c
++++ b/drivers/spi/spi-pxa2xx.c
+@@ -1696,6 +1696,7 @@ static int pxa2xx_spi_probe(struct platform_device *pdev)
+ 			platform_info->enable_dma = false;
+ 		} else {
+ 			master->can_dma = pxa2xx_spi_can_dma;
++			master->max_dma_len = MAX_DMA_LEN;
+ 		}
+ 	}
+ 
+diff --git a/drivers/spi/spi-ti-qspi.c b/drivers/spi/spi-ti-qspi.c
+index 5f19016bbf10..b9fb6493cd6b 100644
+--- a/drivers/spi/spi-ti-qspi.c
++++ b/drivers/spi/spi-ti-qspi.c
+@@ -490,8 +490,8 @@ static void ti_qspi_enable_memory_map(struct spi_device *spi)
+ 	ti_qspi_write(qspi, MM_SWITCH, QSPI_SPI_SWITCH_REG);
+ 	if (qspi->ctrl_base) {
+ 		regmap_update_bits(qspi->ctrl_base, qspi->ctrl_reg,
+-				   MEM_CS_EN(spi->chip_select),
+-				   MEM_CS_MASK);
++				   MEM_CS_MASK,
++				   MEM_CS_EN(spi->chip_select));
+ 	}
+ 	qspi->mmap_enabled = true;
+ }
+@@ -503,7 +503,7 @@ static void ti_qspi_disable_memory_map(struct spi_device *spi)
+ 	ti_qspi_write(qspi, 0, QSPI_SPI_SWITCH_REG);
+ 	if (qspi->ctrl_base)
+ 		regmap_update_bits(qspi->ctrl_base, qspi->ctrl_reg,
+-				   0, MEM_CS_MASK);
++				   MEM_CS_MASK, 0);
+ 	qspi->mmap_enabled = false;
+ }
+ 
+diff --git a/drivers/staging/media/imx/imx-ic-prpencvf.c b/drivers/staging/media/imx/imx-ic-prpencvf.c
+index 28f41caba05d..fb442499f806 100644
+--- a/drivers/staging/media/imx/imx-ic-prpencvf.c
++++ b/drivers/staging/media/imx/imx-ic-prpencvf.c
+@@ -680,12 +680,23 @@ static int prp_start(struct prp_priv *priv)
+ 		goto out_free_nfb4eof_irq;
+ 	}
+ 
++	/* start upstream */
++	ret = v4l2_subdev_call(priv->src_sd, video, s_stream, 1);
++	ret = (ret && ret != -ENOIOCTLCMD) ? ret : 0;
++	if (ret) {
++		v4l2_err(&ic_priv->sd,
++			 "upstream stream on failed: %d\n", ret);
++		goto out_free_eof_irq;
++	}
++
+ 	/* start the EOF timeout timer */
+ 	mod_timer(&priv->eof_timeout_timer,
+ 		  jiffies + msecs_to_jiffies(IMX_MEDIA_EOF_TIMEOUT));
+ 
+ 	return 0;
+ 
++out_free_eof_irq:
++	devm_free_irq(ic_priv->dev, priv->eof_irq, priv);
+ out_free_nfb4eof_irq:
+ 	devm_free_irq(ic_priv->dev, priv->nfb4eof_irq, priv);
+ out_unsetup:
+@@ -717,6 +728,12 @@ static void prp_stop(struct prp_priv *priv)
+ 	if (ret == 0)
+ 		v4l2_warn(&ic_priv->sd, "wait last EOF timeout\n");
+ 
++	/* stop upstream */
++	ret = v4l2_subdev_call(priv->src_sd, video, s_stream, 0);
++	if (ret && ret != -ENOIOCTLCMD)
++		v4l2_warn(&ic_priv->sd,
++			  "upstream stream off failed: %d\n", ret);
++
+ 	devm_free_irq(ic_priv->dev, priv->eof_irq, priv);
+ 	devm_free_irq(ic_priv->dev, priv->nfb4eof_irq, priv);
+ 
+@@ -1148,15 +1165,6 @@ static int prp_s_stream(struct v4l2_subdev *sd, int enable)
+ 	if (ret)
+ 		goto out;
+ 
+-	/* start/stop upstream */
+-	ret = v4l2_subdev_call(priv->src_sd, video, s_stream, enable);
+-	ret = (ret && ret != -ENOIOCTLCMD) ? ret : 0;
+-	if (ret) {
+-		if (enable)
+-			prp_stop(priv);
+-		goto out;
+-	}
+-
+ update_count:
+ 	priv->stream_count += enable ? 1 : -1;
+ 	if (priv->stream_count < 0)
+diff --git a/drivers/staging/media/imx/imx-media-csi.c b/drivers/staging/media/imx/imx-media-csi.c
+index 4223f8d418ae..be1e9e52b2a0 100644
+--- a/drivers/staging/media/imx/imx-media-csi.c
++++ b/drivers/staging/media/imx/imx-media-csi.c
+@@ -629,7 +629,7 @@ out_put_ipu:
+ 	return ret;
+ }
+ 
+-static void csi_idmac_stop(struct csi_priv *priv)
++static void csi_idmac_wait_last_eof(struct csi_priv *priv)
+ {
+ 	unsigned long flags;
+ 	int ret;
+@@ -646,7 +646,10 @@ static void csi_idmac_stop(struct csi_priv *priv)
+ 		&priv->last_eof_comp, msecs_to_jiffies(IMX_MEDIA_EOF_TIMEOUT));
+ 	if (ret == 0)
+ 		v4l2_warn(&priv->sd, "wait last EOF timeout\n");
++}
+ 
++static void csi_idmac_stop(struct csi_priv *priv)
++{
+ 	devm_free_irq(priv->dev, priv->eof_irq, priv);
+ 	devm_free_irq(priv->dev, priv->nfb4eof_irq, priv);
+ 
+@@ -722,10 +725,16 @@ static int csi_start(struct csi_priv *priv)
+ 
+ 	output_fi = &priv->frame_interval[priv->active_output_pad];
+ 
++	/* start upstream */
++	ret = v4l2_subdev_call(priv->src_sd, video, s_stream, 1);
++	ret = (ret && ret != -ENOIOCTLCMD) ? ret : 0;
++	if (ret)
++		return ret;
++
+ 	if (priv->dest == IPU_CSI_DEST_IDMAC) {
+ 		ret = csi_idmac_start(priv);
+ 		if (ret)
+-			return ret;
++			goto stop_upstream;
+ 	}
+ 
+ 	ret = csi_setup(priv);
+@@ -753,11 +762,26 @@ fim_off:
+ idmac_stop:
+ 	if (priv->dest == IPU_CSI_DEST_IDMAC)
+ 		csi_idmac_stop(priv);
++stop_upstream:
++	v4l2_subdev_call(priv->src_sd, video, s_stream, 0);
+ 	return ret;
+ }
+ 
+ static void csi_stop(struct csi_priv *priv)
+ {
++	if (priv->dest == IPU_CSI_DEST_IDMAC)
++		csi_idmac_wait_last_eof(priv);
++
++	/*
++	 * Disable the CSI asap, after syncing with the last EOF.
++	 * Doing so after the IDMA channel is disabled has shown to
++	 * create hard system-wide hangs.
++	 */
++	ipu_csi_disable(priv->csi);
++
++	/* stop upstream */
++	v4l2_subdev_call(priv->src_sd, video, s_stream, 0);
++
+ 	if (priv->dest == IPU_CSI_DEST_IDMAC) {
+ 		csi_idmac_stop(priv);
+ 
+@@ -765,8 +789,6 @@ static void csi_stop(struct csi_priv *priv)
+ 		if (priv->fim)
+ 			imx_media_fim_set_stream(priv->fim, NULL, false);
+ 	}
+-
+-	ipu_csi_disable(priv->csi);
+ }
+ 
+ static const struct csi_skip_desc csi_skip[12] = {
+@@ -927,23 +949,13 @@ static int csi_s_stream(struct v4l2_subdev *sd, int enable)
+ 		goto update_count;
+ 
+ 	if (enable) {
+-		/* upstream must be started first, before starting CSI */
+-		ret = v4l2_subdev_call(priv->src_sd, video, s_stream, 1);
+-		ret = (ret && ret != -ENOIOCTLCMD) ? ret : 0;
+-		if (ret)
+-			goto out;
+-
+ 		dev_dbg(priv->dev, "stream ON\n");
+ 		ret = csi_start(priv);
+-		if (ret) {
+-			v4l2_subdev_call(priv->src_sd, video, s_stream, 0);
++		if (ret)
+ 			goto out;
+-		}
+ 	} else {
+ 		dev_dbg(priv->dev, "stream OFF\n");
+-		/* CSI must be stopped first, then stop upstream */
+ 		csi_stop(priv);
+-		v4l2_subdev_call(priv->src_sd, video, s_stream, 0);
+ 	}
+ 
+ update_count:
+@@ -1787,7 +1799,7 @@ static int imx_csi_parse_endpoint(struct device *dev,
+ 				  struct v4l2_fwnode_endpoint *vep,
+ 				  struct v4l2_async_subdev *asd)
+ {
+-	return fwnode_device_is_available(asd->match.fwnode) ? 0 : -EINVAL;
++	return fwnode_device_is_available(asd->match.fwnode) ? 0 : -ENOTCONN;
+ }
+ 
+ static int imx_csi_async_register(struct csi_priv *priv)
+diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
+index bd15a564fe24..3ad2659630e8 100644
+--- a/drivers/target/iscsi/iscsi_target.c
++++ b/drivers/target/iscsi/iscsi_target.c
+@@ -4040,9 +4040,9 @@ static void iscsit_release_commands_from_conn(struct iscsi_conn *conn)
+ 		struct se_cmd *se_cmd = &cmd->se_cmd;
+ 
+ 		if (se_cmd->se_tfo != NULL) {
+-			spin_lock(&se_cmd->t_state_lock);
++			spin_lock_irq(&se_cmd->t_state_lock);
+ 			se_cmd->transport_state |= CMD_T_FABRIC_STOP;
+-			spin_unlock(&se_cmd->t_state_lock);
++			spin_unlock_irq(&se_cmd->t_state_lock);
+ 		}
+ 	}
+ 	spin_unlock_bh(&conn->cmd_lock);
+diff --git a/drivers/tty/serial/8250/8250_of.c b/drivers/tty/serial/8250/8250_of.c
+index a1a85805d010..2488de1c4bc4 100644
+--- a/drivers/tty/serial/8250/8250_of.c
++++ b/drivers/tty/serial/8250/8250_of.c
+@@ -130,6 +130,10 @@ static int of_platform_serial_setup(struct platform_device *ofdev,
+ 		port->flags |= UPF_IOREMAP;
+ 	}
+ 
++	/* Compatibility with the deprecated pxa driver and 8250_pxa drivers. */
++	if (of_device_is_compatible(np, "mrvl,mmp-uart"))
++		port->regshift = 2;
++
+ 	/* Check for registers offset within the devices address range */
+ 	if (of_property_read_u32(np, "reg-shift", &prop) == 0)
+ 		port->regshift = prop;
+diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
+index 48bd694a5fa1..bbe5cba21522 100644
+--- a/drivers/tty/serial/8250/8250_pci.c
++++ b/drivers/tty/serial/8250/8250_pci.c
+@@ -2027,6 +2027,111 @@ static struct pci_serial_quirk pci_serial_quirks[] __refdata = {
+ 		.setup		= pci_default_setup,
+ 		.exit		= pci_plx9050_exit,
+ 	},
++	{
++		.vendor     = PCI_VENDOR_ID_ACCESIO,
++		.device     = PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SDB,
++		.subvendor  = PCI_ANY_ID,
++		.subdevice  = PCI_ANY_ID,
++		.setup      = pci_pericom_setup,
++	},
++	{
++		.vendor     = PCI_VENDOR_ID_ACCESIO,
++		.device     = PCI_DEVICE_ID_ACCESIO_MPCIE_COM_4S,
++		.subvendor  = PCI_ANY_ID,
++		.subdevice  = PCI_ANY_ID,
++		.setup      = pci_pericom_setup,
++	},
++	{
++		.vendor     = PCI_VENDOR_ID_ACCESIO,
++		.device     = PCI_DEVICE_ID_ACCESIO_PCIE_COM232_4DB,
++		.subvendor  = PCI_ANY_ID,
++		.subdevice  = PCI_ANY_ID,
++		.setup      = pci_pericom_setup,
++	},
++	{
++		.vendor     = PCI_VENDOR_ID_ACCESIO,
++		.device     = PCI_DEVICE_ID_ACCESIO_MPCIE_COM232_4,
++		.subvendor  = PCI_ANY_ID,
++		.subdevice  = PCI_ANY_ID,
++		.setup      = pci_pericom_setup,
++	},
++	{
++		.vendor     = PCI_VENDOR_ID_ACCESIO,
++		.device     = PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SMDB,
++		.subvendor  = PCI_ANY_ID,
++		.subdevice  = PCI_ANY_ID,
++		.setup      = pci_pericom_setup,
++	},
++	{
++		.vendor     = PCI_VENDOR_ID_ACCESIO,
++		.device     = PCI_DEVICE_ID_ACCESIO_MPCIE_COM_4SM,
++		.subvendor  = PCI_ANY_ID,
++		.subdevice  = PCI_ANY_ID,
++		.setup      = pci_pericom_setup,
++	},
++	{
++		.vendor     = PCI_VENDOR_ID_ACCESIO,
++		.device     = PCI_DEVICE_ID_ACCESIO_MPCIE_ICM422_4,
++		.subvendor  = PCI_ANY_ID,
++		.subdevice  = PCI_ANY_ID,
++		.setup      = pci_pericom_setup,
++	},
++	{
++		.vendor     = PCI_VENDOR_ID_ACCESIO,
++		.device     = PCI_DEVICE_ID_ACCESIO_MPCIE_ICM485_4,
++		.subvendor  = PCI_ANY_ID,
++		.subdevice  = PCI_ANY_ID,
++		.setup      = pci_pericom_setup,
++	},
++	{
++		.vendor     = PCI_DEVICE_ID_ACCESIO_PCIE_ICM_4S,
++		.device     = PCI_DEVICE_ID_ACCESIO_PCIE_ICM232_4,
++		.subvendor  = PCI_ANY_ID,
++		.subdevice  = PCI_ANY_ID,
++		.setup      = pci_pericom_setup,
++	},
++	{
++		.vendor     = PCI_VENDOR_ID_ACCESIO,
++		.device     = PCI_DEVICE_ID_ACCESIO_MPCIE_ICM232_4,
++		.subvendor  = PCI_ANY_ID,
++		.subdevice  = PCI_ANY_ID,
++		.setup      = pci_pericom_setup,
++	},
++	{
++		.vendor     = PCI_VENDOR_ID_ACCESIO,
++		.device     = PCI_DEVICE_ID_ACCESIO_PCIE_COM422_4,
++		.subvendor  = PCI_ANY_ID,
++		.subdevice  = PCI_ANY_ID,
++		.setup      = pci_pericom_setup,
++	},
++	{
++		.vendor     = PCI_VENDOR_ID_ACCESIO,
++		.device     = PCI_DEVICE_ID_ACCESIO_PCIE_COM485_4,
++		.subvendor  = PCI_ANY_ID,
++		.subdevice  = PCI_ANY_ID,
++		.setup      = pci_pericom_setup,
++	},
++	{
++		.vendor     = PCI_VENDOR_ID_ACCESIO,
++		.device     = PCI_DEVICE_ID_ACCESIO_PCIE_COM232_4,
++		.subvendor  = PCI_ANY_ID,
++		.subdevice  = PCI_ANY_ID,
++		.setup      = pci_pericom_setup,
++	},
++	{
++		.vendor     = PCI_VENDOR_ID_ACCESIO,
++		.device     = PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SM,
++		.subvendor  = PCI_ANY_ID,
++		.subdevice  = PCI_ANY_ID,
++		.setup      = pci_pericom_setup,
++	},
++	{
++		.vendor     = PCI_VENDOR_ID_ACCESIO,
++		.device     = PCI_DEVICE_ID_ACCESIO_PCIE_ICM_4SM,
++		.subvendor  = PCI_ANY_ID,
++		.subdevice  = PCI_ANY_ID,
++		.setup      = pci_pericom_setup,
++	},
+ 	/*
+ 	 * SBS Technologies, Inc., PMC-OCTALPRO 232
+ 	 */
+@@ -4575,10 +4680,10 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ 	 */
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_2SDB,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7954 },
++		pbn_pericom_PI7C9X7952 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_COM_2S,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7954 },
++		pbn_pericom_PI7C9X7952 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SDB,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ 		pbn_pericom_PI7C9X7954 },
+@@ -4587,10 +4692,10 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ 		pbn_pericom_PI7C9X7954 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM232_2DB,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7954 },
++		pbn_pericom_PI7C9X7952 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_COM232_2,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7954 },
++		pbn_pericom_PI7C9X7952 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM232_4DB,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ 		pbn_pericom_PI7C9X7954 },
+@@ -4599,10 +4704,10 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ 		pbn_pericom_PI7C9X7954 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_2SMDB,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7954 },
++		pbn_pericom_PI7C9X7952 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_COM_2SM,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7954 },
++		pbn_pericom_PI7C9X7952 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SMDB,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ 		pbn_pericom_PI7C9X7954 },
+@@ -4611,13 +4716,13 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ 		pbn_pericom_PI7C9X7954 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_ICM485_1,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7954 },
++		pbn_pericom_PI7C9X7951 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_ICM422_2,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7954 },
++		pbn_pericom_PI7C9X7952 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_ICM485_2,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7954 },
++		pbn_pericom_PI7C9X7952 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_ICM422_4,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ 		pbn_pericom_PI7C9X7954 },
+@@ -4626,16 +4731,16 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ 		pbn_pericom_PI7C9X7954 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM_2S,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7954 },
++		pbn_pericom_PI7C9X7952 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM_4S,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ 		pbn_pericom_PI7C9X7954 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM232_2,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7954 },
++		pbn_pericom_PI7C9X7952 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_ICM232_2,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7954 },
++		pbn_pericom_PI7C9X7952 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM232_4,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ 		pbn_pericom_PI7C9X7954 },
+@@ -4644,13 +4749,13 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ 		pbn_pericom_PI7C9X7954 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM_2SM,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7954 },
++		pbn_pericom_PI7C9X7952 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM422_4,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7958 },
++		pbn_pericom_PI7C9X7954 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM485_4,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7958 },
++		pbn_pericom_PI7C9X7954 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM422_8,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ 		pbn_pericom_PI7C9X7958 },
+@@ -4659,19 +4764,19 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ 		pbn_pericom_PI7C9X7958 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM232_4,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7958 },
++		pbn_pericom_PI7C9X7954 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM232_8,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ 		pbn_pericom_PI7C9X7958 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SM,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7958 },
++		pbn_pericom_PI7C9X7954 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_8SM,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ 		pbn_pericom_PI7C9X7958 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM_4SM,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7958 },
++		pbn_pericom_PI7C9X7954 },
+ 	/*
+ 	 * Topic TP560 Data/Fax/Voice 56k modem (reported by Evan Clarke)
+ 	 */
+diff --git a/drivers/tty/serial/xilinx_uartps.c b/drivers/tty/serial/xilinx_uartps.c
+index 094f2958cb2b..ee9f18c52d29 100644
+--- a/drivers/tty/serial/xilinx_uartps.c
++++ b/drivers/tty/serial/xilinx_uartps.c
+@@ -364,7 +364,13 @@ static irqreturn_t cdns_uart_isr(int irq, void *dev_id)
+ 		cdns_uart_handle_tx(dev_id);
+ 		isrstatus &= ~CDNS_UART_IXR_TXEMPTY;
+ 	}
+-	if (isrstatus & CDNS_UART_IXR_RXMASK)
++
++	/*
++	 * Skip RX processing if RX is disabled as RXEMPTY will never be set
++	 * as read bytes will not be removed from the FIFO.
++	 */
++	if (isrstatus & CDNS_UART_IXR_RXMASK &&
++	    !(readl(port->membase + CDNS_UART_CR) & CDNS_UART_CR_RX_DIS))
+ 		cdns_uart_handle_rx(dev_id, isrstatus);
+ 
+ 	spin_unlock(&port->lock);
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index bba75560d11e..9646ff63e77a 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -935,8 +935,11 @@ static void flush_scrollback(struct vc_data *vc)
+ {
+ 	WARN_CONSOLE_UNLOCKED();
+ 
++	set_origin(vc);
+ 	if (vc->vc_sw->con_flush_scrollback)
+ 		vc->vc_sw->con_flush_scrollback(vc);
++	else
++		vc->vc_sw->con_switch(vc);
+ }
+ 
+ /*
+@@ -1503,8 +1506,10 @@ static void csi_J(struct vc_data *vc, int vpar)
+ 			count = ((vc->vc_pos - vc->vc_origin) >> 1) + 1;
+ 			start = (unsigned short *)vc->vc_origin;
+ 			break;
++		case 3: /* include scrollback */
++			flush_scrollback(vc);
++			/* fallthrough */
+ 		case 2: /* erase whole display */
+-		case 3: /* (and scrollback buffer later) */
+ 			vc_uniscr_clear_lines(vc, 0, vc->vc_rows);
+ 			count = vc->vc_cols * vc->vc_rows;
+ 			start = (unsigned short *)vc->vc_origin;
+@@ -1513,13 +1518,7 @@ static void csi_J(struct vc_data *vc, int vpar)
+ 			return;
+ 	}
+ 	scr_memsetw(start, vc->vc_video_erase_char, 2 * count);
+-	if (vpar == 3) {
+-		set_origin(vc);
+-		flush_scrollback(vc);
+-		if (con_is_visible(vc))
+-			update_screen(vc);
+-	} else if (con_should_update(vc))
+-		do_update_region(vc, (unsigned long) start, count);
++	update_region(vc, (unsigned long) start, count);
+ 	vc->vc_need_wrap = 0;
+ }
+ 
+diff --git a/drivers/usb/chipidea/ci_hdrc_tegra.c b/drivers/usb/chipidea/ci_hdrc_tegra.c
+index 772851bee99b..12025358bb3c 100644
+--- a/drivers/usb/chipidea/ci_hdrc_tegra.c
++++ b/drivers/usb/chipidea/ci_hdrc_tegra.c
+@@ -130,6 +130,7 @@ static int tegra_udc_remove(struct platform_device *pdev)
+ {
+ 	struct tegra_udc *udc = platform_get_drvdata(pdev);
+ 
++	ci_hdrc_remove_device(udc->dev);
+ 	usb_phy_set_suspend(udc->phy, 1);
+ 	clk_disable_unprepare(udc->clk);
+ 
+diff --git a/drivers/usb/typec/tps6598x.c b/drivers/usb/typec/tps6598x.c
+index 1c0033ad8738..e1109b15636d 100644
+--- a/drivers/usb/typec/tps6598x.c
++++ b/drivers/usb/typec/tps6598x.c
+@@ -110,6 +110,20 @@ tps6598x_block_read(struct tps6598x *tps, u8 reg, void *val, size_t len)
+ 	return 0;
+ }
+ 
++static int tps6598x_block_write(struct tps6598x *tps, u8 reg,
++				void *val, size_t len)
++{
++	u8 data[TPS_MAX_LEN + 1];
++
++	if (!tps->i2c_protocol)
++		return regmap_raw_write(tps->regmap, reg, val, len);
++
++	data[0] = len;
++	memcpy(&data[1], val, len);
++
++	return regmap_raw_write(tps->regmap, reg, data, sizeof(data));
++}
++
+ static inline int tps6598x_read16(struct tps6598x *tps, u8 reg, u16 *val)
+ {
+ 	return tps6598x_block_read(tps, reg, val, sizeof(u16));
+@@ -127,23 +141,23 @@ static inline int tps6598x_read64(struct tps6598x *tps, u8 reg, u64 *val)
+ 
+ static inline int tps6598x_write16(struct tps6598x *tps, u8 reg, u16 val)
+ {
+-	return regmap_raw_write(tps->regmap, reg, &val, sizeof(u16));
++	return tps6598x_block_write(tps, reg, &val, sizeof(u16));
+ }
+ 
+ static inline int tps6598x_write32(struct tps6598x *tps, u8 reg, u32 val)
+ {
+-	return regmap_raw_write(tps->regmap, reg, &val, sizeof(u32));
++	return tps6598x_block_write(tps, reg, &val, sizeof(u32));
+ }
+ 
+ static inline int tps6598x_write64(struct tps6598x *tps, u8 reg, u64 val)
+ {
+-	return regmap_raw_write(tps->regmap, reg, &val, sizeof(u64));
++	return tps6598x_block_write(tps, reg, &val, sizeof(u64));
+ }
+ 
+ static inline int
+ tps6598x_write_4cc(struct tps6598x *tps, u8 reg, const char *val)
+ {
+-	return regmap_raw_write(tps->regmap, reg, &val, sizeof(u32));
++	return tps6598x_block_write(tps, reg, &val, sizeof(u32));
+ }
+ 
+ static int tps6598x_read_partner_identity(struct tps6598x *tps)
+@@ -229,8 +243,8 @@ static int tps6598x_exec_cmd(struct tps6598x *tps, const char *cmd,
+ 		return -EBUSY;
+ 
+ 	if (in_len) {
+-		ret = regmap_raw_write(tps->regmap, TPS_REG_DATA1,
+-				       in_data, in_len);
++		ret = tps6598x_block_write(tps, TPS_REG_DATA1,
++					   in_data, in_len);
+ 		if (ret)
+ 			return ret;
+ 	}
+diff --git a/fs/9p/v9fs_vfs.h b/fs/9p/v9fs_vfs.h
+index 5a0db6dec8d1..aaee1e6584e6 100644
+--- a/fs/9p/v9fs_vfs.h
++++ b/fs/9p/v9fs_vfs.h
+@@ -40,6 +40,9 @@
+  */
+ #define P9_LOCK_TIMEOUT (30*HZ)
+ 
++/* flags for v9fs_stat2inode() & v9fs_stat2inode_dotl() */
++#define V9FS_STAT2INODE_KEEP_ISIZE 1
++
+ extern struct file_system_type v9fs_fs_type;
+ extern const struct address_space_operations v9fs_addr_operations;
+ extern const struct file_operations v9fs_file_operations;
+@@ -61,8 +64,10 @@ int v9fs_init_inode(struct v9fs_session_info *v9ses,
+ 		    struct inode *inode, umode_t mode, dev_t);
+ void v9fs_evict_inode(struct inode *inode);
+ ino_t v9fs_qid2ino(struct p9_qid *qid);
+-void v9fs_stat2inode(struct p9_wstat *, struct inode *, struct super_block *);
+-void v9fs_stat2inode_dotl(struct p9_stat_dotl *, struct inode *);
++void v9fs_stat2inode(struct p9_wstat *stat, struct inode *inode,
++		      struct super_block *sb, unsigned int flags);
++void v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode,
++			   unsigned int flags);
+ int v9fs_dir_release(struct inode *inode, struct file *filp);
+ int v9fs_file_open(struct inode *inode, struct file *file);
+ void v9fs_inode2stat(struct inode *inode, struct p9_wstat *stat);
+@@ -83,4 +88,18 @@ static inline void v9fs_invalidate_inode_attr(struct inode *inode)
+ }
+ 
+ int v9fs_open_to_dotl_flags(int flags);
++
++static inline void v9fs_i_size_write(struct inode *inode, loff_t i_size)
++{
++	/*
++	 * 32-bit need the lock, concurrent updates could break the
++	 * sequences and make i_size_read() loop forever.
++	 * 64-bit updates are atomic and can skip the locking.
++	 */
++	if (sizeof(i_size) > sizeof(long))
++		spin_lock(&inode->i_lock);
++	i_size_write(inode, i_size);
++	if (sizeof(i_size) > sizeof(long))
++		spin_unlock(&inode->i_lock);
++}
+ #endif
+diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c
+index a25efa782fcc..9a1125305d84 100644
+--- a/fs/9p/vfs_file.c
++++ b/fs/9p/vfs_file.c
+@@ -446,7 +446,11 @@ v9fs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ 		i_size = i_size_read(inode);
+ 		if (iocb->ki_pos > i_size) {
+ 			inode_add_bytes(inode, iocb->ki_pos - i_size);
+-			i_size_write(inode, iocb->ki_pos);
++			/*
++			 * Need to serialize against i_size_write() in
++			 * v9fs_stat2inode()
++			 */
++			v9fs_i_size_write(inode, iocb->ki_pos);
+ 		}
+ 		return retval;
+ 	}
+diff --git a/fs/9p/vfs_inode.c b/fs/9p/vfs_inode.c
+index 85ff859d3af5..72b779bc0942 100644
+--- a/fs/9p/vfs_inode.c
++++ b/fs/9p/vfs_inode.c
+@@ -538,7 +538,7 @@ static struct inode *v9fs_qid_iget(struct super_block *sb,
+ 	if (retval)
+ 		goto error;
+ 
+-	v9fs_stat2inode(st, inode, sb);
++	v9fs_stat2inode(st, inode, sb, 0);
+ 	v9fs_cache_inode_get_cookie(inode);
+ 	unlock_new_inode(inode);
+ 	return inode;
+@@ -1092,7 +1092,7 @@ v9fs_vfs_getattr(const struct path *path, struct kstat *stat,
+ 	if (IS_ERR(st))
+ 		return PTR_ERR(st);
+ 
+-	v9fs_stat2inode(st, d_inode(dentry), dentry->d_sb);
++	v9fs_stat2inode(st, d_inode(dentry), dentry->d_sb, 0);
+ 	generic_fillattr(d_inode(dentry), stat);
+ 
+ 	p9stat_free(st);
+@@ -1170,12 +1170,13 @@ static int v9fs_vfs_setattr(struct dentry *dentry, struct iattr *iattr)
+  * @stat: Plan 9 metadata (mistat) structure
+  * @inode: inode to populate
+  * @sb: superblock of filesystem
++ * @flags: control flags (e.g. V9FS_STAT2INODE_KEEP_ISIZE)
+  *
+  */
+ 
+ void
+ v9fs_stat2inode(struct p9_wstat *stat, struct inode *inode,
+-	struct super_block *sb)
++		 struct super_block *sb, unsigned int flags)
+ {
+ 	umode_t mode;
+ 	char ext[32];
+@@ -1216,10 +1217,11 @@ v9fs_stat2inode(struct p9_wstat *stat, struct inode *inode,
+ 	mode = p9mode2perm(v9ses, stat);
+ 	mode |= inode->i_mode & ~S_IALLUGO;
+ 	inode->i_mode = mode;
+-	i_size_write(inode, stat->length);
+ 
++	if (!(flags & V9FS_STAT2INODE_KEEP_ISIZE))
++		v9fs_i_size_write(inode, stat->length);
+ 	/* not real number of blocks, but 512 byte ones ... */
+-	inode->i_blocks = (i_size_read(inode) + 512 - 1) >> 9;
++	inode->i_blocks = (stat->length + 512 - 1) >> 9;
+ 	v9inode->cache_validity &= ~V9FS_INO_INVALID_ATTR;
+ }
+ 
+@@ -1416,9 +1418,9 @@ int v9fs_refresh_inode(struct p9_fid *fid, struct inode *inode)
+ {
+ 	int umode;
+ 	dev_t rdev;
+-	loff_t i_size;
+ 	struct p9_wstat *st;
+ 	struct v9fs_session_info *v9ses;
++	unsigned int flags;
+ 
+ 	v9ses = v9fs_inode2v9ses(inode);
+ 	st = p9_client_stat(fid);
+@@ -1431,16 +1433,13 @@ int v9fs_refresh_inode(struct p9_fid *fid, struct inode *inode)
+ 	if ((inode->i_mode & S_IFMT) != (umode & S_IFMT))
+ 		goto out;
+ 
+-	spin_lock(&inode->i_lock);
+ 	/*
+ 	 * We don't want to refresh inode->i_size,
+ 	 * because we may have cached data
+ 	 */
+-	i_size = inode->i_size;
+-	v9fs_stat2inode(st, inode, inode->i_sb);
+-	if (v9ses->cache == CACHE_LOOSE || v9ses->cache == CACHE_FSCACHE)
+-		inode->i_size = i_size;
+-	spin_unlock(&inode->i_lock);
++	flags = (v9ses->cache == CACHE_LOOSE || v9ses->cache == CACHE_FSCACHE) ?
++		V9FS_STAT2INODE_KEEP_ISIZE : 0;
++	v9fs_stat2inode(st, inode, inode->i_sb, flags);
+ out:
+ 	p9stat_free(st);
+ 	kfree(st);
+diff --git a/fs/9p/vfs_inode_dotl.c b/fs/9p/vfs_inode_dotl.c
+index 4823e1c46999..a950a927a626 100644
+--- a/fs/9p/vfs_inode_dotl.c
++++ b/fs/9p/vfs_inode_dotl.c
+@@ -143,7 +143,7 @@ static struct inode *v9fs_qid_iget_dotl(struct super_block *sb,
+ 	if (retval)
+ 		goto error;
+ 
+-	v9fs_stat2inode_dotl(st, inode);
++	v9fs_stat2inode_dotl(st, inode, 0);
+ 	v9fs_cache_inode_get_cookie(inode);
+ 	retval = v9fs_get_acl(inode, fid);
+ 	if (retval)
+@@ -496,7 +496,7 @@ v9fs_vfs_getattr_dotl(const struct path *path, struct kstat *stat,
+ 	if (IS_ERR(st))
+ 		return PTR_ERR(st);
+ 
+-	v9fs_stat2inode_dotl(st, d_inode(dentry));
++	v9fs_stat2inode_dotl(st, d_inode(dentry), 0);
+ 	generic_fillattr(d_inode(dentry), stat);
+ 	/* Change block size to what the server returned */
+ 	stat->blksize = st->st_blksize;
+@@ -607,11 +607,13 @@ int v9fs_vfs_setattr_dotl(struct dentry *dentry, struct iattr *iattr)
+  * v9fs_stat2inode_dotl - populate an inode structure with stat info
+  * @stat: stat structure
+  * @inode: inode to populate
++ * @flags: ctrl flags (e.g. V9FS_STAT2INODE_KEEP_ISIZE)
+  *
+  */
+ 
+ void
+-v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode)
++v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode,
++		      unsigned int flags)
+ {
+ 	umode_t mode;
+ 	struct v9fs_inode *v9inode = V9FS_I(inode);
+@@ -631,7 +633,8 @@ v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode)
+ 		mode |= inode->i_mode & ~S_IALLUGO;
+ 		inode->i_mode = mode;
+ 
+-		i_size_write(inode, stat->st_size);
++		if (!(flags & V9FS_STAT2INODE_KEEP_ISIZE))
++			v9fs_i_size_write(inode, stat->st_size);
+ 		inode->i_blocks = stat->st_blocks;
+ 	} else {
+ 		if (stat->st_result_mask & P9_STATS_ATIME) {
+@@ -661,8 +664,9 @@ v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode)
+ 		}
+ 		if (stat->st_result_mask & P9_STATS_RDEV)
+ 			inode->i_rdev = new_decode_dev(stat->st_rdev);
+-		if (stat->st_result_mask & P9_STATS_SIZE)
+-			i_size_write(inode, stat->st_size);
++		if (!(flags & V9FS_STAT2INODE_KEEP_ISIZE) &&
++		    stat->st_result_mask & P9_STATS_SIZE)
++			v9fs_i_size_write(inode, stat->st_size);
+ 		if (stat->st_result_mask & P9_STATS_BLOCKS)
+ 			inode->i_blocks = stat->st_blocks;
+ 	}
+@@ -928,9 +932,9 @@ v9fs_vfs_get_link_dotl(struct dentry *dentry,
+ 
+ int v9fs_refresh_inode_dotl(struct p9_fid *fid, struct inode *inode)
+ {
+-	loff_t i_size;
+ 	struct p9_stat_dotl *st;
+ 	struct v9fs_session_info *v9ses;
++	unsigned int flags;
+ 
+ 	v9ses = v9fs_inode2v9ses(inode);
+ 	st = p9_client_getattr_dotl(fid, P9_STATS_ALL);
+@@ -942,16 +946,13 @@ int v9fs_refresh_inode_dotl(struct p9_fid *fid, struct inode *inode)
+ 	if ((inode->i_mode & S_IFMT) != (st->st_mode & S_IFMT))
+ 		goto out;
+ 
+-	spin_lock(&inode->i_lock);
+ 	/*
+ 	 * We don't want to refresh inode->i_size,
+ 	 * because we may have cached data
+ 	 */
+-	i_size = inode->i_size;
+-	v9fs_stat2inode_dotl(st, inode);
+-	if (v9ses->cache == CACHE_LOOSE || v9ses->cache == CACHE_FSCACHE)
+-		inode->i_size = i_size;
+-	spin_unlock(&inode->i_lock);
++	flags = (v9ses->cache == CACHE_LOOSE || v9ses->cache == CACHE_FSCACHE) ?
++		V9FS_STAT2INODE_KEEP_ISIZE : 0;
++	v9fs_stat2inode_dotl(st, inode, flags);
+ out:
+ 	kfree(st);
+ 	return 0;
+diff --git a/fs/9p/vfs_super.c b/fs/9p/vfs_super.c
+index 48ce50484e80..eeab9953af89 100644
+--- a/fs/9p/vfs_super.c
++++ b/fs/9p/vfs_super.c
+@@ -172,7 +172,7 @@ static struct dentry *v9fs_mount(struct file_system_type *fs_type, int flags,
+ 			goto release_sb;
+ 		}
+ 		d_inode(root)->i_ino = v9fs_qid2ino(&st->qid);
+-		v9fs_stat2inode_dotl(st, d_inode(root));
++		v9fs_stat2inode_dotl(st, d_inode(root), 0);
+ 		kfree(st);
+ 	} else {
+ 		struct p9_wstat *st = NULL;
+@@ -183,7 +183,7 @@ static struct dentry *v9fs_mount(struct file_system_type *fs_type, int flags,
+ 		}
+ 
+ 		d_inode(root)->i_ino = v9fs_qid2ino(&st->qid);
+-		v9fs_stat2inode(st, d_inode(root), sb);
++		v9fs_stat2inode(st, d_inode(root), sb, 0);
+ 
+ 		p9stat_free(st);
+ 		kfree(st);
+diff --git a/fs/btrfs/acl.c b/fs/btrfs/acl.c
+index 3b66c957ea6f..5810463dc6d2 100644
+--- a/fs/btrfs/acl.c
++++ b/fs/btrfs/acl.c
+@@ -9,6 +9,7 @@
+ #include <linux/posix_acl_xattr.h>
+ #include <linux/posix_acl.h>
+ #include <linux/sched.h>
++#include <linux/sched/mm.h>
+ #include <linux/slab.h>
+ 
+ #include "ctree.h"
+@@ -72,8 +73,16 @@ static int __btrfs_set_acl(struct btrfs_trans_handle *trans,
+ 	}
+ 
+ 	if (acl) {
++		unsigned int nofs_flag;
++
+ 		size = posix_acl_xattr_size(acl->a_count);
++		/*
++		 * We're holding a transaction handle, so use a NOFS memory
++		 * allocation context to avoid deadlock if reclaim happens.
++		 */
++		nofs_flag = memalloc_nofs_save();
+ 		value = kmalloc(size, GFP_KERNEL);
++		memalloc_nofs_restore(nofs_flag);
+ 		if (!value) {
+ 			ret = -ENOMEM;
+ 			goto out;
+diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c
+index 8750c835f535..c4dea3b7349e 100644
+--- a/fs/btrfs/dev-replace.c
++++ b/fs/btrfs/dev-replace.c
+@@ -862,6 +862,7 @@ int btrfs_dev_replace_cancel(struct btrfs_fs_info *fs_info)
+ 			btrfs_destroy_dev_replace_tgtdev(tgt_device);
+ 		break;
+ 	default:
++		up_write(&dev_replace->rwsem);
+ 		result = -EINVAL;
+ 	}
+ 
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 6a2a2a951705..888d72dda794 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -17,6 +17,7 @@
+ #include <linux/semaphore.h>
+ #include <linux/error-injection.h>
+ #include <linux/crc32c.h>
++#include <linux/sched/mm.h>
+ #include <asm/unaligned.h>
+ #include "ctree.h"
+ #include "disk-io.h"
+@@ -1258,10 +1259,17 @@ struct btrfs_root *btrfs_create_tree(struct btrfs_trans_handle *trans,
+ 	struct btrfs_root *tree_root = fs_info->tree_root;
+ 	struct btrfs_root *root;
+ 	struct btrfs_key key;
++	unsigned int nofs_flag;
+ 	int ret = 0;
+ 	uuid_le uuid = NULL_UUID_LE;
+ 
++	/*
++	 * We're holding a transaction handle, so use a NOFS memory allocation
++	 * context to avoid deadlock if reclaim happens.
++	 */
++	nofs_flag = memalloc_nofs_save();
+ 	root = btrfs_alloc_root(fs_info, GFP_KERNEL);
++	memalloc_nofs_restore(nofs_flag);
+ 	if (!root)
+ 		return ERR_PTR(-ENOMEM);
+ 
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 52abe4082680..1bfb7207bbf0 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -2985,11 +2985,11 @@ static int __do_readpage(struct extent_io_tree *tree,
+ 		 */
+ 		if (test_bit(EXTENT_FLAG_COMPRESSED, &em->flags) &&
+ 		    prev_em_start && *prev_em_start != (u64)-1 &&
+-		    *prev_em_start != em->orig_start)
++		    *prev_em_start != em->start)
+ 			force_bio_submit = true;
+ 
+ 		if (prev_em_start)
+-			*prev_em_start = em->orig_start;
++			*prev_em_start = em->start;
+ 
+ 		free_extent_map(em);
+ 		em = NULL;
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 9c8e1734429c..6e1119496721 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -3206,21 +3206,6 @@ out:
+ 	return ret;
+ }
+ 
+-static void btrfs_double_inode_unlock(struct inode *inode1, struct inode *inode2)
+-{
+-	inode_unlock(inode1);
+-	inode_unlock(inode2);
+-}
+-
+-static void btrfs_double_inode_lock(struct inode *inode1, struct inode *inode2)
+-{
+-	if (inode1 < inode2)
+-		swap(inode1, inode2);
+-
+-	inode_lock_nested(inode1, I_MUTEX_PARENT);
+-	inode_lock_nested(inode2, I_MUTEX_CHILD);
+-}
+-
+ static void btrfs_double_extent_unlock(struct inode *inode1, u64 loff1,
+ 				       struct inode *inode2, u64 loff2, u64 len)
+ {
+@@ -3989,7 +3974,7 @@ static int btrfs_remap_file_range_prep(struct file *file_in, loff_t pos_in,
+ 	if (same_inode)
+ 		inode_lock(inode_in);
+ 	else
+-		btrfs_double_inode_lock(inode_in, inode_out);
++		lock_two_nondirectories(inode_in, inode_out);
+ 
+ 	/*
+ 	 * Now that the inodes are locked, we need to start writeback ourselves
+@@ -4039,7 +4024,7 @@ static int btrfs_remap_file_range_prep(struct file *file_in, loff_t pos_in,
+ 	if (same_inode)
+ 		inode_unlock(inode_in);
+ 	else
+-		btrfs_double_inode_unlock(inode_in, inode_out);
++		unlock_two_nondirectories(inode_in, inode_out);
+ 
+ 	return ret;
+ }
+@@ -4069,7 +4054,7 @@ loff_t btrfs_remap_file_range(struct file *src_file, loff_t off,
+ 	if (same_inode)
+ 		inode_unlock(src_inode);
+ 	else
+-		btrfs_double_inode_unlock(src_inode, dst_inode);
++		unlock_two_nondirectories(src_inode, dst_inode);
+ 
+ 	return ret < 0 ? ret : len;
+ }
+diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
+index 6dcd36d7b849..1aeac70d0531 100644
+--- a/fs/btrfs/scrub.c
++++ b/fs/btrfs/scrub.c
+@@ -584,6 +584,7 @@ static noinline_for_stack struct scrub_ctx *scrub_setup_ctx(
+ 	sctx->pages_per_rd_bio = SCRUB_PAGES_PER_RD_BIO;
+ 	sctx->curr = -1;
+ 	sctx->fs_info = fs_info;
++	INIT_LIST_HEAD(&sctx->csum_list);
+ 	for (i = 0; i < SCRUB_BIOS_PER_SCTX; ++i) {
+ 		struct scrub_bio *sbio;
+ 
+@@ -608,7 +609,6 @@ static noinline_for_stack struct scrub_ctx *scrub_setup_ctx(
+ 	atomic_set(&sctx->workers_pending, 0);
+ 	atomic_set(&sctx->cancel_req, 0);
+ 	sctx->csum_size = btrfs_super_csum_size(fs_info->super_copy);
+-	INIT_LIST_HEAD(&sctx->csum_list);
+ 
+ 	spin_lock_init(&sctx->list_lock);
+ 	spin_lock_init(&sctx->stat_lock);
+@@ -3770,16 +3770,6 @@ fail_scrub_workers:
+ 	return -ENOMEM;
+ }
+ 
+-static noinline_for_stack void scrub_workers_put(struct btrfs_fs_info *fs_info)
+-{
+-	if (--fs_info->scrub_workers_refcnt == 0) {
+-		btrfs_destroy_workqueue(fs_info->scrub_workers);
+-		btrfs_destroy_workqueue(fs_info->scrub_wr_completion_workers);
+-		btrfs_destroy_workqueue(fs_info->scrub_parity_workers);
+-	}
+-	WARN_ON(fs_info->scrub_workers_refcnt < 0);
+-}
+-
+ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
+ 		    u64 end, struct btrfs_scrub_progress *progress,
+ 		    int readonly, int is_dev_replace)
+@@ -3788,6 +3778,9 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
+ 	int ret;
+ 	struct btrfs_device *dev;
+ 	unsigned int nofs_flag;
++	struct btrfs_workqueue *scrub_workers = NULL;
++	struct btrfs_workqueue *scrub_wr_comp = NULL;
++	struct btrfs_workqueue *scrub_parity = NULL;
+ 
+ 	if (btrfs_fs_closing(fs_info))
+ 		return -EINVAL;
+@@ -3927,9 +3920,16 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
+ 
+ 	mutex_lock(&fs_info->scrub_lock);
+ 	dev->scrub_ctx = NULL;
+-	scrub_workers_put(fs_info);
++	if (--fs_info->scrub_workers_refcnt == 0) {
++		scrub_workers = fs_info->scrub_workers;
++		scrub_wr_comp = fs_info->scrub_wr_completion_workers;
++		scrub_parity = fs_info->scrub_parity_workers;
++	}
+ 	mutex_unlock(&fs_info->scrub_lock);
+ 
++	btrfs_destroy_workqueue(scrub_workers);
++	btrfs_destroy_workqueue(scrub_wr_comp);
++	btrfs_destroy_workqueue(scrub_parity);
+ 	scrub_put_ctx(sctx);
+ 
+ 	return ret;
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 15561926ab32..48523bcabae9 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -6782,10 +6782,10 @@ static int btrfs_check_chunk_valid(struct btrfs_fs_info *fs_info,
+ 	}
+ 
+ 	if ((type & BTRFS_BLOCK_GROUP_RAID10 && sub_stripes != 2) ||
+-	    (type & BTRFS_BLOCK_GROUP_RAID1 && num_stripes < 1) ||
++	    (type & BTRFS_BLOCK_GROUP_RAID1 && num_stripes != 2) ||
+ 	    (type & BTRFS_BLOCK_GROUP_RAID5 && num_stripes < 2) ||
+ 	    (type & BTRFS_BLOCK_GROUP_RAID6 && num_stripes < 3) ||
+-	    (type & BTRFS_BLOCK_GROUP_DUP && num_stripes > 2) ||
++	    (type & BTRFS_BLOCK_GROUP_DUP && num_stripes != 2) ||
+ 	    ((type & BTRFS_BLOCK_GROUP_PROFILE_MASK) == 0 &&
+ 	     num_stripes != 1)) {
+ 		btrfs_err(fs_info,
+diff --git a/fs/cifs/cifs_fs_sb.h b/fs/cifs/cifs_fs_sb.h
+index 42f0d67f1054..ed49222abecb 100644
+--- a/fs/cifs/cifs_fs_sb.h
++++ b/fs/cifs/cifs_fs_sb.h
+@@ -58,6 +58,7 @@ struct cifs_sb_info {
+ 	spinlock_t tlink_tree_lock;
+ 	struct tcon_link *master_tlink;
+ 	struct nls_table *local_nls;
++	unsigned int bsize;
+ 	unsigned int rsize;
+ 	unsigned int wsize;
+ 	unsigned long actimeo; /* attribute cache timeout (jiffies) */
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index 62d48d486d8f..f2c0d863fb52 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -554,6 +554,7 @@ cifs_show_options(struct seq_file *s, struct dentry *root)
+ 
+ 	seq_printf(s, ",rsize=%u", cifs_sb->rsize);
+ 	seq_printf(s, ",wsize=%u", cifs_sb->wsize);
++	seq_printf(s, ",bsize=%u", cifs_sb->bsize);
+ 	seq_printf(s, ",echo_interval=%lu",
+ 			tcon->ses->server->echo_interval / HZ);
+ 	if (tcon->snapshot_time)
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index 94dbdbe5be34..1b25e6e95d45 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -236,6 +236,8 @@ struct smb_version_operations {
+ 	int * (*get_credits_field)(struct TCP_Server_Info *, const int);
+ 	unsigned int (*get_credits)(struct mid_q_entry *);
+ 	__u64 (*get_next_mid)(struct TCP_Server_Info *);
++	void (*revert_current_mid)(struct TCP_Server_Info *server,
++				   const unsigned int val);
+ 	/* data offset from read response message */
+ 	unsigned int (*read_data_offset)(char *);
+ 	/*
+@@ -557,6 +559,7 @@ struct smb_vol {
+ 	bool resilient:1; /* noresilient not required since not fored for CA */
+ 	bool domainauto:1;
+ 	bool rdma:1;
++	unsigned int bsize;
+ 	unsigned int rsize;
+ 	unsigned int wsize;
+ 	bool sockopt_tcp_nodelay:1;
+@@ -770,6 +773,22 @@ get_next_mid(struct TCP_Server_Info *server)
+ 	return cpu_to_le16(mid);
+ }
+ 
++static inline void
++revert_current_mid(struct TCP_Server_Info *server, const unsigned int val)
++{
++	if (server->ops->revert_current_mid)
++		server->ops->revert_current_mid(server, val);
++}
++
++static inline void
++revert_current_mid_from_hdr(struct TCP_Server_Info *server,
++			    const struct smb2_sync_hdr *shdr)
++{
++	unsigned int num = le16_to_cpu(shdr->CreditCharge);
++
++	return revert_current_mid(server, num > 0 ? num : 1);
++}
++
+ static inline __u16
+ get_mid(const struct smb_hdr *smb)
+ {
+@@ -1422,6 +1441,7 @@ struct mid_q_entry {
+ 	struct kref refcount;
+ 	struct TCP_Server_Info *server;	/* server corresponding to this mid */
+ 	__u64 mid;		/* multiplex id */
++	__u16 credits;		/* number of credits consumed by this mid */
+ 	__u32 pid;		/* process id */
+ 	__u32 sequence_number;  /* for CIFS signing */
+ 	unsigned long when_alloc;  /* when mid was created */
+diff --git a/fs/cifs/cifssmb.c b/fs/cifs/cifssmb.c
+index bb54ccf8481c..551924beb86f 100644
+--- a/fs/cifs/cifssmb.c
++++ b/fs/cifs/cifssmb.c
+@@ -2125,12 +2125,13 @@ cifs_writev_requeue(struct cifs_writedata *wdata)
+ 
+ 		wdata2->cfile = find_writable_file(CIFS_I(inode), false);
+ 		if (!wdata2->cfile) {
+-			cifs_dbg(VFS, "No writable handles for inode\n");
++			cifs_dbg(VFS, "No writable handle to retry writepages\n");
+ 			rc = -EBADF;
+-			break;
++		} else {
++			wdata2->pid = wdata2->cfile->pid;
++			rc = server->ops->async_writev(wdata2,
++						       cifs_writedata_release);
+ 		}
+-		wdata2->pid = wdata2->cfile->pid;
+-		rc = server->ops->async_writev(wdata2, cifs_writedata_release);
+ 
+ 		for (j = 0; j < nr_pages; j++) {
+ 			unlock_page(wdata2->pages[j]);
+@@ -2145,6 +2146,7 @@ cifs_writev_requeue(struct cifs_writedata *wdata)
+ 			kref_put(&wdata2->refcount, cifs_writedata_release);
+ 			if (is_retryable_error(rc))
+ 				continue;
++			i += nr_pages;
+ 			break;
+ 		}
+ 
+@@ -2152,6 +2154,13 @@ cifs_writev_requeue(struct cifs_writedata *wdata)
+ 		i += nr_pages;
+ 	} while (i < wdata->nr_pages);
+ 
++	/* cleanup remaining pages from the original wdata */
++	for (; i < wdata->nr_pages; i++) {
++		SetPageError(wdata->pages[i]);
++		end_page_writeback(wdata->pages[i]);
++		put_page(wdata->pages[i]);
++	}
++
+ 	if (rc != 0 && !is_retryable_error(rc))
+ 		mapping_set_error(inode->i_mapping, rc);
+ 	kref_put(&wdata->refcount, cifs_writedata_release);
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 8463c940e0e5..e61cd2938c9e 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -102,7 +102,7 @@ enum {
+ 	Opt_backupuid, Opt_backupgid, Opt_uid,
+ 	Opt_cruid, Opt_gid, Opt_file_mode,
+ 	Opt_dirmode, Opt_port,
+-	Opt_rsize, Opt_wsize, Opt_actimeo,
++	Opt_blocksize, Opt_rsize, Opt_wsize, Opt_actimeo,
+ 	Opt_echo_interval, Opt_max_credits,
+ 	Opt_snapshot,
+ 
+@@ -204,6 +204,7 @@ static const match_table_t cifs_mount_option_tokens = {
+ 	{ Opt_dirmode, "dirmode=%s" },
+ 	{ Opt_dirmode, "dir_mode=%s" },
+ 	{ Opt_port, "port=%s" },
++	{ Opt_blocksize, "bsize=%s" },
+ 	{ Opt_rsize, "rsize=%s" },
+ 	{ Opt_wsize, "wsize=%s" },
+ 	{ Opt_actimeo, "actimeo=%s" },
+@@ -1571,7 +1572,7 @@ cifs_parse_mount_options(const char *mountdata, const char *devname,
+ 	vol->cred_uid = current_uid();
+ 	vol->linux_uid = current_uid();
+ 	vol->linux_gid = current_gid();
+-
++	vol->bsize = 1024 * 1024; /* can improve cp performance significantly */
+ 	/*
+ 	 * default to SFM style remapping of seven reserved characters
+ 	 * unless user overrides it or we negotiate CIFS POSIX where
+@@ -1944,6 +1945,26 @@ cifs_parse_mount_options(const char *mountdata, const char *devname,
+ 			}
+ 			port = (unsigned short)option;
+ 			break;
++		case Opt_blocksize:
++			if (get_option_ul(args, &option)) {
++				cifs_dbg(VFS, "%s: Invalid blocksize value\n",
++					__func__);
++				goto cifs_parse_mount_err;
++			}
++			/*
++			 * inode blocksize realistically should never need to be
++			 * less than 16K or greater than 16M and default is 1MB.
++			 * Note that small inode block sizes (e.g. 64K) can lead
++			 * to very poor performance of common tools like cp and scp
++			 */
++			if ((option < CIFS_MAX_MSGSIZE) ||
++			   (option > (4 * SMB3_DEFAULT_IOSIZE))) {
++				cifs_dbg(VFS, "%s: Invalid blocksize\n",
++					__func__);
++				goto cifs_parse_mount_err;
++			}
++			vol->bsize = option;
++			break;
+ 		case Opt_rsize:
+ 			if (get_option_ul(args, &option)) {
+ 				cifs_dbg(VFS, "%s: Invalid rsize value\n",
+@@ -3839,6 +3860,7 @@ int cifs_setup_cifs_sb(struct smb_vol *pvolume_info,
+ 	spin_lock_init(&cifs_sb->tlink_tree_lock);
+ 	cifs_sb->tlink_tree = RB_ROOT;
+ 
++	cifs_sb->bsize = pvolume_info->bsize;
+ 	/*
+ 	 * Temporarily set r/wsize for matching superblock. If we end up using
+ 	 * new sb then client will later negotiate it downward if needed.
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index 659ce1b92c44..95461db80011 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -3028,14 +3028,16 @@ cifs_strict_writev(struct kiocb *iocb, struct iov_iter *from)
+ 	 * these pages but not on the region from pos to ppos+len-1.
+ 	 */
+ 	written = cifs_user_writev(iocb, from);
+-	if (written > 0 && CIFS_CACHE_READ(cinode)) {
++	if (CIFS_CACHE_READ(cinode)) {
+ 		/*
+-		 * Windows 7 server can delay breaking level2 oplock if a write
+-		 * request comes - break it on the client to prevent reading
+-		 * an old data.
++		 * We have read level caching and we have just sent a write
++		 * request to the server thus making data in the cache stale.
++		 * Zap the cache and set oplock/lease level to NONE to avoid
++		 * reading stale data from the cache. All subsequent read
++		 * operations will read new data from the server.
+ 		 */
+ 		cifs_zap_mapping(inode);
+-		cifs_dbg(FYI, "Set no oplock for inode=%p after a write operation\n",
++		cifs_dbg(FYI, "Set Oplock/Lease to NONE for inode=%p after write\n",
+ 			 inode);
+ 		cinode->oplock = 0;
+ 	}
+diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
+index 478003644916..53fdb5df0d2e 100644
+--- a/fs/cifs/inode.c
++++ b/fs/cifs/inode.c
+@@ -2080,7 +2080,7 @@ int cifs_getattr(const struct path *path, struct kstat *stat,
+ 		return rc;
+ 
+ 	generic_fillattr(inode, stat);
+-	stat->blksize = CIFS_MAX_MSGSIZE;
++	stat->blksize = cifs_sb->bsize;
+ 	stat->ino = CIFS_I(inode)->uniqueid;
+ 
+ 	/* old CIFS Unix Extensions doesn't return create time */
+diff --git a/fs/cifs/smb2misc.c b/fs/cifs/smb2misc.c
+index 7b8b58fb4d3f..58700d2ba8cd 100644
+--- a/fs/cifs/smb2misc.c
++++ b/fs/cifs/smb2misc.c
+@@ -517,7 +517,6 @@ smb2_tcon_has_lease(struct cifs_tcon *tcon, struct smb2_lease_break *rsp,
+ 	__u8 lease_state;
+ 	struct list_head *tmp;
+ 	struct cifsFileInfo *cfile;
+-	struct TCP_Server_Info *server = tcon->ses->server;
+ 	struct cifs_pending_open *open;
+ 	struct cifsInodeInfo *cinode;
+ 	int ack_req = le32_to_cpu(rsp->Flags &
+@@ -537,13 +536,25 @@ smb2_tcon_has_lease(struct cifs_tcon *tcon, struct smb2_lease_break *rsp,
+ 		cifs_dbg(FYI, "lease key match, lease break 0x%x\n",
+ 			 le32_to_cpu(rsp->NewLeaseState));
+ 
+-		server->ops->set_oplock_level(cinode, lease_state, 0, NULL);
+-
+ 		if (ack_req)
+ 			cfile->oplock_break_cancelled = false;
+ 		else
+ 			cfile->oplock_break_cancelled = true;
+ 
++		set_bit(CIFS_INODE_PENDING_OPLOCK_BREAK, &cinode->flags);
++
++		/*
++		 * Set or clear flags depending on the lease state being READ.
++		 * HANDLE caching flag should be added when the client starts
++		 * to defer closing remote file handles with HANDLE leases.
++		 */
++		if (lease_state & SMB2_LEASE_READ_CACHING_HE)
++			set_bit(CIFS_INODE_DOWNGRADE_OPLOCK_TO_L2,
++				&cinode->flags);
++		else
++			clear_bit(CIFS_INODE_DOWNGRADE_OPLOCK_TO_L2,
++				  &cinode->flags);
++
+ 		queue_work(cifsoplockd_wq, &cfile->oplock_break);
+ 		kfree(lw);
+ 		return true;
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 6f96e2292856..b29f711ab965 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -219,6 +219,15 @@ smb2_get_next_mid(struct TCP_Server_Info *server)
+ 	return mid;
+ }
+ 
++static void
++smb2_revert_current_mid(struct TCP_Server_Info *server, const unsigned int val)
++{
++	spin_lock(&GlobalMid_Lock);
++	if (server->CurrentMid >= val)
++		server->CurrentMid -= val;
++	spin_unlock(&GlobalMid_Lock);
++}
++
+ static struct mid_q_entry *
+ smb2_find_mid(struct TCP_Server_Info *server, char *buf)
+ {
+@@ -2594,6 +2603,15 @@ smb2_downgrade_oplock(struct TCP_Server_Info *server,
+ 		server->ops->set_oplock_level(cinode, 0, 0, NULL);
+ }
+ 
++static void
++smb21_downgrade_oplock(struct TCP_Server_Info *server,
++		       struct cifsInodeInfo *cinode, bool set_level2)
++{
++	server->ops->set_oplock_level(cinode,
++				      set_level2 ? SMB2_LEASE_READ_CACHING_HE :
++				      0, 0, NULL);
++}
++
+ static void
+ smb2_set_oplock_level(struct cifsInodeInfo *cinode, __u32 oplock,
+ 		      unsigned int epoch, bool *purge_cache)
+@@ -3541,6 +3559,7 @@ struct smb_version_operations smb20_operations = {
+ 	.get_credits = smb2_get_credits,
+ 	.wait_mtu_credits = cifs_wait_mtu_credits,
+ 	.get_next_mid = smb2_get_next_mid,
++	.revert_current_mid = smb2_revert_current_mid,
+ 	.read_data_offset = smb2_read_data_offset,
+ 	.read_data_length = smb2_read_data_length,
+ 	.map_error = map_smb2_to_linux_error,
+@@ -3636,6 +3655,7 @@ struct smb_version_operations smb21_operations = {
+ 	.get_credits = smb2_get_credits,
+ 	.wait_mtu_credits = smb2_wait_mtu_credits,
+ 	.get_next_mid = smb2_get_next_mid,
++	.revert_current_mid = smb2_revert_current_mid,
+ 	.read_data_offset = smb2_read_data_offset,
+ 	.read_data_length = smb2_read_data_length,
+ 	.map_error = map_smb2_to_linux_error,
+@@ -3646,7 +3666,7 @@ struct smb_version_operations smb21_operations = {
+ 	.print_stats = smb2_print_stats,
+ 	.is_oplock_break = smb2_is_valid_oplock_break,
+ 	.handle_cancelled_mid = smb2_handle_cancelled_mid,
+-	.downgrade_oplock = smb2_downgrade_oplock,
++	.downgrade_oplock = smb21_downgrade_oplock,
+ 	.need_neg = smb2_need_neg,
+ 	.negotiate = smb2_negotiate,
+ 	.negotiate_wsize = smb2_negotiate_wsize,
+@@ -3732,6 +3752,7 @@ struct smb_version_operations smb30_operations = {
+ 	.get_credits = smb2_get_credits,
+ 	.wait_mtu_credits = smb2_wait_mtu_credits,
+ 	.get_next_mid = smb2_get_next_mid,
++	.revert_current_mid = smb2_revert_current_mid,
+ 	.read_data_offset = smb2_read_data_offset,
+ 	.read_data_length = smb2_read_data_length,
+ 	.map_error = map_smb2_to_linux_error,
+@@ -3743,7 +3764,7 @@ struct smb_version_operations smb30_operations = {
+ 	.dump_share_caps = smb2_dump_share_caps,
+ 	.is_oplock_break = smb2_is_valid_oplock_break,
+ 	.handle_cancelled_mid = smb2_handle_cancelled_mid,
+-	.downgrade_oplock = smb2_downgrade_oplock,
++	.downgrade_oplock = smb21_downgrade_oplock,
+ 	.need_neg = smb2_need_neg,
+ 	.negotiate = smb2_negotiate,
+ 	.negotiate_wsize = smb3_negotiate_wsize,
+@@ -3837,6 +3858,7 @@ struct smb_version_operations smb311_operations = {
+ 	.get_credits = smb2_get_credits,
+ 	.wait_mtu_credits = smb2_wait_mtu_credits,
+ 	.get_next_mid = smb2_get_next_mid,
++	.revert_current_mid = smb2_revert_current_mid,
+ 	.read_data_offset = smb2_read_data_offset,
+ 	.read_data_length = smb2_read_data_length,
+ 	.map_error = map_smb2_to_linux_error,
+@@ -3848,7 +3870,7 @@ struct smb_version_operations smb311_operations = {
+ 	.dump_share_caps = smb2_dump_share_caps,
+ 	.is_oplock_break = smb2_is_valid_oplock_break,
+ 	.handle_cancelled_mid = smb2_handle_cancelled_mid,
+-	.downgrade_oplock = smb2_downgrade_oplock,
++	.downgrade_oplock = smb21_downgrade_oplock,
+ 	.need_neg = smb2_need_neg,
+ 	.negotiate = smb2_negotiate,
+ 	.negotiate_wsize = smb3_negotiate_wsize,
+diff --git a/fs/cifs/smb2transport.c b/fs/cifs/smb2transport.c
+index 7b351c65ee46..63264db78b89 100644
+--- a/fs/cifs/smb2transport.c
++++ b/fs/cifs/smb2transport.c
+@@ -576,6 +576,7 @@ smb2_mid_entry_alloc(const struct smb2_sync_hdr *shdr,
+ 		     struct TCP_Server_Info *server)
+ {
+ 	struct mid_q_entry *temp;
++	unsigned int credits = le16_to_cpu(shdr->CreditCharge);
+ 
+ 	if (server == NULL) {
+ 		cifs_dbg(VFS, "Null TCP session in smb2_mid_entry_alloc\n");
+@@ -586,6 +587,7 @@ smb2_mid_entry_alloc(const struct smb2_sync_hdr *shdr,
+ 	memset(temp, 0, sizeof(struct mid_q_entry));
+ 	kref_init(&temp->refcount);
+ 	temp->mid = le64_to_cpu(shdr->MessageId);
++	temp->credits = credits > 0 ? credits : 1;
+ 	temp->pid = current->pid;
+ 	temp->command = shdr->Command; /* Always LE */
+ 	temp->when_alloc = jiffies;
+@@ -674,13 +676,18 @@ smb2_setup_request(struct cifs_ses *ses, struct smb_rqst *rqst)
+ 	smb2_seq_num_into_buf(ses->server, shdr);
+ 
+ 	rc = smb2_get_mid_entry(ses, shdr, &mid);
+-	if (rc)
++	if (rc) {
++		revert_current_mid_from_hdr(ses->server, shdr);
+ 		return ERR_PTR(rc);
++	}
++
+ 	rc = smb2_sign_rqst(rqst, ses->server);
+ 	if (rc) {
++		revert_current_mid_from_hdr(ses->server, shdr);
+ 		cifs_delete_mid(mid);
+ 		return ERR_PTR(rc);
+ 	}
++
+ 	return mid;
+ }
+ 
+@@ -695,11 +702,14 @@ smb2_setup_async_request(struct TCP_Server_Info *server, struct smb_rqst *rqst)
+ 	smb2_seq_num_into_buf(server, shdr);
+ 
+ 	mid = smb2_mid_entry_alloc(shdr, server);
+-	if (mid == NULL)
++	if (mid == NULL) {
++		revert_current_mid_from_hdr(server, shdr);
+ 		return ERR_PTR(-ENOMEM);
++	}
+ 
+ 	rc = smb2_sign_rqst(rqst, server);
+ 	if (rc) {
++		revert_current_mid_from_hdr(server, shdr);
+ 		DeleteMidQEntry(mid);
+ 		return ERR_PTR(rc);
+ 	}
+diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c
+index 53532bd3f50d..9544eb99b5a2 100644
+--- a/fs/cifs/transport.c
++++ b/fs/cifs/transport.c
+@@ -647,6 +647,7 @@ cifs_call_async(struct TCP_Server_Info *server, struct smb_rqst *rqst,
+ 	cifs_in_send_dec(server);
+ 
+ 	if (rc < 0) {
++		revert_current_mid(server, mid->credits);
+ 		server->sequence_number -= 2;
+ 		cifs_delete_mid(mid);
+ 	}
+@@ -868,6 +869,7 @@ compound_send_recv(const unsigned int xid, struct cifs_ses *ses,
+ 	for (i = 0; i < num_rqst; i++) {
+ 		midQ[i] = ses->server->ops->setup_request(ses, &rqst[i]);
+ 		if (IS_ERR(midQ[i])) {
++			revert_current_mid(ses->server, i);
+ 			for (j = 0; j < i; j++)
+ 				cifs_delete_mid(midQ[j]);
+ 			mutex_unlock(&ses->server->srv_mutex);
+@@ -897,8 +899,10 @@ compound_send_recv(const unsigned int xid, struct cifs_ses *ses,
+ 	for (i = 0; i < num_rqst; i++)
+ 		cifs_save_when_sent(midQ[i]);
+ 
+-	if (rc < 0)
++	if (rc < 0) {
++		revert_current_mid(ses->server, num_rqst);
+ 		ses->server->sequence_number -= 2;
++	}
+ 
+ 	mutex_unlock(&ses->server->srv_mutex);
+ 
+diff --git a/fs/dax.c b/fs/dax.c
+index 6959837cc465..05cca2214ae3 100644
+--- a/fs/dax.c
++++ b/fs/dax.c
+@@ -843,9 +843,8 @@ unlock_pte:
+ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
+ 		struct address_space *mapping, void *entry)
+ {
+-	unsigned long pfn;
++	unsigned long pfn, index, count;
+ 	long ret = 0;
+-	size_t size;
+ 
+ 	/*
+ 	 * A page got tagged dirty in DAX mapping? Something is seriously
+@@ -894,17 +893,18 @@ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
+ 	xas_unlock_irq(xas);
+ 
+ 	/*
+-	 * Even if dax_writeback_mapping_range() was given a wbc->range_start
+-	 * in the middle of a PMD, the 'index' we are given will be aligned to
+-	 * the start index of the PMD, as will the pfn we pull from 'entry'.
++	 * If dax_writeback_mapping_range() was given a wbc->range_start
++	 * in the middle of a PMD, the 'index' we use needs to be
++	 * aligned to the start of the PMD.
+ 	 * This allows us to flush for PMD_SIZE and not have to worry about
+ 	 * partial PMD writebacks.
+ 	 */
+ 	pfn = dax_to_pfn(entry);
+-	size = PAGE_SIZE << dax_entry_order(entry);
++	count = 1UL << dax_entry_order(entry);
++	index = xas->xa_index & ~(count - 1);
+ 
+-	dax_entry_mkclean(mapping, xas->xa_index, pfn);
+-	dax_flush(dax_dev, page_address(pfn_to_page(pfn)), size);
++	dax_entry_mkclean(mapping, index, pfn);
++	dax_flush(dax_dev, page_address(pfn_to_page(pfn)), count * PAGE_SIZE);
+ 	/*
+ 	 * After we have flushed the cache, we can clear the dirty tag. There
+ 	 * cannot be new dirty data in the pfn after the flush has completed as
+@@ -917,8 +917,7 @@ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
+ 	xas_clear_mark(xas, PAGECACHE_TAG_DIRTY);
+ 	dax_wake_entry(xas, entry, false);
+ 
+-	trace_dax_writeback_one(mapping->host, xas->xa_index,
+-			size >> PAGE_SHIFT);
++	trace_dax_writeback_one(mapping->host, index, count);
+ 	return ret;
+ 
+  put_unlocked:
+diff --git a/fs/devpts/inode.c b/fs/devpts/inode.c
+index c53814539070..553a3f3300ae 100644
+--- a/fs/devpts/inode.c
++++ b/fs/devpts/inode.c
+@@ -455,6 +455,7 @@ devpts_fill_super(struct super_block *s, void *data, int silent)
+ 	s->s_blocksize_bits = 10;
+ 	s->s_magic = DEVPTS_SUPER_MAGIC;
+ 	s->s_op = &devpts_sops;
++	s->s_d_op = &simple_dentry_operations;
+ 	s->s_time_gran = 1;
+ 
+ 	error = -ENOMEM;
+diff --git a/fs/ext2/super.c b/fs/ext2/super.c
+index 73b2d528237f..a9ea38182578 100644
+--- a/fs/ext2/super.c
++++ b/fs/ext2/super.c
+@@ -757,7 +757,8 @@ static loff_t ext2_max_size(int bits)
+ {
+ 	loff_t res = EXT2_NDIR_BLOCKS;
+ 	int meta_blocks;
+-	loff_t upper_limit;
++	unsigned int upper_limit;
++	unsigned int ppb = 1 << (bits-2);
+ 
+ 	/* This is calculated to be the largest file size for a
+ 	 * dense, file such that the total number of
+@@ -771,24 +772,34 @@ static loff_t ext2_max_size(int bits)
+ 	/* total blocks in file system block size */
+ 	upper_limit >>= (bits - 9);
+ 
++	/* Compute how many blocks we can address by block tree */
++	res += 1LL << (bits-2);
++	res += 1LL << (2*(bits-2));
++	res += 1LL << (3*(bits-2));
++	/* Does block tree limit file size? */
++	if (res < upper_limit)
++		goto check_lfs;
+ 
++	res = upper_limit;
++	/* How many metadata blocks are needed for addressing upper_limit? */
++	upper_limit -= EXT2_NDIR_BLOCKS;
+ 	/* indirect blocks */
+ 	meta_blocks = 1;
++	upper_limit -= ppb;
+ 	/* double indirect blocks */
+-	meta_blocks += 1 + (1LL << (bits-2));
+-	/* tripple indirect blocks */
+-	meta_blocks += 1 + (1LL << (bits-2)) + (1LL << (2*(bits-2)));
+-
+-	upper_limit -= meta_blocks;
+-	upper_limit <<= bits;
+-
+-	res += 1LL << (bits-2);
+-	res += 1LL << (2*(bits-2));
+-	res += 1LL << (3*(bits-2));
++	if (upper_limit < ppb * ppb) {
++		meta_blocks += 1 + DIV_ROUND_UP(upper_limit, ppb);
++		res -= meta_blocks;
++		goto check_lfs;
++	}
++	meta_blocks += 1 + ppb;
++	upper_limit -= ppb * ppb;
++	/* tripple indirect blocks for the rest */
++	meta_blocks += 1 + DIV_ROUND_UP(upper_limit, ppb) +
++		DIV_ROUND_UP(upper_limit, ppb*ppb);
++	res -= meta_blocks;
++check_lfs:
+ 	res <<= bits;
+-	if (res > upper_limit)
+-		res = upper_limit;
+-
+ 	if (res > MAX_LFS_FILESIZE)
+ 		res = MAX_LFS_FILESIZE;
+ 
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 185a05d3257e..508a37ec9271 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -426,6 +426,9 @@ struct flex_groups {
+ /* Flags that are appropriate for non-directories/regular files. */
+ #define EXT4_OTHER_FLMASK (EXT4_NODUMP_FL | EXT4_NOATIME_FL)
+ 
++/* The only flags that should be swapped */
++#define EXT4_FL_SHOULD_SWAP (EXT4_HUGE_FILE_FL | EXT4_EXTENTS_FL)
++
+ /* Mask out flags that are inappropriate for the given type of inode. */
+ static inline __u32 ext4_mask_flags(umode_t mode, __u32 flags)
+ {
+diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
+index d37dafa1d133..2e76fb55d94a 100644
+--- a/fs/ext4/ioctl.c
++++ b/fs/ext4/ioctl.c
+@@ -63,18 +63,20 @@ static void swap_inode_data(struct inode *inode1, struct inode *inode2)
+ 	loff_t isize;
+ 	struct ext4_inode_info *ei1;
+ 	struct ext4_inode_info *ei2;
++	unsigned long tmp;
+ 
+ 	ei1 = EXT4_I(inode1);
+ 	ei2 = EXT4_I(inode2);
+ 
+ 	swap(inode1->i_version, inode2->i_version);
+-	swap(inode1->i_blocks, inode2->i_blocks);
+-	swap(inode1->i_bytes, inode2->i_bytes);
+ 	swap(inode1->i_atime, inode2->i_atime);
+ 	swap(inode1->i_mtime, inode2->i_mtime);
+ 
+ 	memswap(ei1->i_data, ei2->i_data, sizeof(ei1->i_data));
+-	swap(ei1->i_flags, ei2->i_flags);
++	tmp = ei1->i_flags & EXT4_FL_SHOULD_SWAP;
++	ei1->i_flags = (ei2->i_flags & EXT4_FL_SHOULD_SWAP) |
++		(ei1->i_flags & ~EXT4_FL_SHOULD_SWAP);
++	ei2->i_flags = tmp | (ei2->i_flags & ~EXT4_FL_SHOULD_SWAP);
+ 	swap(ei1->i_disksize, ei2->i_disksize);
+ 	ext4_es_remove_extent(inode1, 0, EXT_MAX_BLOCKS);
+ 	ext4_es_remove_extent(inode2, 0, EXT_MAX_BLOCKS);
+@@ -115,28 +117,41 @@ static long swap_inode_boot_loader(struct super_block *sb,
+ 	int err;
+ 	struct inode *inode_bl;
+ 	struct ext4_inode_info *ei_bl;
+-
+-	if (inode->i_nlink != 1 || !S_ISREG(inode->i_mode) ||
+-	    IS_SWAPFILE(inode) || IS_ENCRYPTED(inode) ||
+-	    ext4_has_inline_data(inode))
+-		return -EINVAL;
+-
+-	if (IS_RDONLY(inode) || IS_APPEND(inode) || IS_IMMUTABLE(inode) ||
+-	    !inode_owner_or_capable(inode) || !capable(CAP_SYS_ADMIN))
+-		return -EPERM;
++	qsize_t size, size_bl, diff;
++	blkcnt_t blocks;
++	unsigned short bytes;
+ 
+ 	inode_bl = ext4_iget(sb, EXT4_BOOT_LOADER_INO, EXT4_IGET_SPECIAL);
+ 	if (IS_ERR(inode_bl))
+ 		return PTR_ERR(inode_bl);
+ 	ei_bl = EXT4_I(inode_bl);
+ 
+-	filemap_flush(inode->i_mapping);
+-	filemap_flush(inode_bl->i_mapping);
+-
+ 	/* Protect orig inodes against a truncate and make sure,
+ 	 * that only 1 swap_inode_boot_loader is running. */
+ 	lock_two_nondirectories(inode, inode_bl);
+ 
++	if (inode->i_nlink != 1 || !S_ISREG(inode->i_mode) ||
++	    IS_SWAPFILE(inode) || IS_ENCRYPTED(inode) ||
++	    ext4_has_inline_data(inode)) {
++		err = -EINVAL;
++		goto journal_err_out;
++	}
++
++	if (IS_RDONLY(inode) || IS_APPEND(inode) || IS_IMMUTABLE(inode) ||
++	    !inode_owner_or_capable(inode) || !capable(CAP_SYS_ADMIN)) {
++		err = -EPERM;
++		goto journal_err_out;
++	}
++
++	down_write(&EXT4_I(inode)->i_mmap_sem);
++	err = filemap_write_and_wait(inode->i_mapping);
++	if (err)
++		goto err_out;
++
++	err = filemap_write_and_wait(inode_bl->i_mapping);
++	if (err)
++		goto err_out;
++
+ 	/* Wait for all existing dio workers */
+ 	inode_dio_wait(inode);
+ 	inode_dio_wait(inode_bl);
+@@ -147,7 +162,7 @@ static long swap_inode_boot_loader(struct super_block *sb,
+ 	handle = ext4_journal_start(inode_bl, EXT4_HT_MOVE_EXTENTS, 2);
+ 	if (IS_ERR(handle)) {
+ 		err = -EINVAL;
+-		goto journal_err_out;
++		goto err_out;
+ 	}
+ 
+ 	/* Protect extent tree against block allocations via delalloc */
+@@ -170,6 +185,13 @@ static long swap_inode_boot_loader(struct super_block *sb,
+ 			memset(ei_bl->i_data, 0, sizeof(ei_bl->i_data));
+ 	}
+ 
++	err = dquot_initialize(inode);
++	if (err)
++		goto err_out1;
++
++	size = (qsize_t)(inode->i_blocks) * (1 << 9) + inode->i_bytes;
++	size_bl = (qsize_t)(inode_bl->i_blocks) * (1 << 9) + inode_bl->i_bytes;
++	diff = size - size_bl;
+ 	swap_inode_data(inode, inode_bl);
+ 
+ 	inode->i_ctime = inode_bl->i_ctime = current_time(inode);
+@@ -183,27 +205,51 @@ static long swap_inode_boot_loader(struct super_block *sb,
+ 
+ 	err = ext4_mark_inode_dirty(handle, inode);
+ 	if (err < 0) {
++		/* No need to update quota information. */
+ 		ext4_warning(inode->i_sb,
+ 			"couldn't mark inode #%lu dirty (err %d)",
+ 			inode->i_ino, err);
+ 		/* Revert all changes: */
+ 		swap_inode_data(inode, inode_bl);
+ 		ext4_mark_inode_dirty(handle, inode);
+-	} else {
+-		err = ext4_mark_inode_dirty(handle, inode_bl);
+-		if (err < 0) {
+-			ext4_warning(inode_bl->i_sb,
+-				"couldn't mark inode #%lu dirty (err %d)",
+-				inode_bl->i_ino, err);
+-			/* Revert all changes: */
+-			swap_inode_data(inode, inode_bl);
+-			ext4_mark_inode_dirty(handle, inode);
+-			ext4_mark_inode_dirty(handle, inode_bl);
+-		}
++		goto err_out1;
++	}
++
++	blocks = inode_bl->i_blocks;
++	bytes = inode_bl->i_bytes;
++	inode_bl->i_blocks = inode->i_blocks;
++	inode_bl->i_bytes = inode->i_bytes;
++	err = ext4_mark_inode_dirty(handle, inode_bl);
++	if (err < 0) {
++		/* No need to update quota information. */
++		ext4_warning(inode_bl->i_sb,
++			"couldn't mark inode #%lu dirty (err %d)",
++			inode_bl->i_ino, err);
++		goto revert;
++	}
++
++	/* Bootloader inode should not be counted into quota information. */
++	if (diff > 0)
++		dquot_free_space(inode, diff);
++	else
++		err = dquot_alloc_space(inode, -1 * diff);
++
++	if (err < 0) {
++revert:
++		/* Revert all changes: */
++		inode_bl->i_blocks = blocks;
++		inode_bl->i_bytes = bytes;
++		swap_inode_data(inode, inode_bl);
++		ext4_mark_inode_dirty(handle, inode);
++		ext4_mark_inode_dirty(handle, inode_bl);
+ 	}
++
++err_out1:
+ 	ext4_journal_stop(handle);
+ 	ext4_double_up_write_data_sem(inode, inode_bl);
+ 
++err_out:
++	up_write(&EXT4_I(inode)->i_mmap_sem);
+ journal_err_out:
+ 	unlock_two_nondirectories(inode, inode_bl);
+ 	iput(inode_bl);
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index 48421de803b7..3d9b18505c0c 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -1960,7 +1960,8 @@ retry:
+ 				le16_to_cpu(es->s_reserved_gdt_blocks);
+ 			n_group = n_desc_blocks * EXT4_DESC_PER_BLOCK(sb);
+ 			n_blocks_count = (ext4_fsblk_t)n_group *
+-				EXT4_BLOCKS_PER_GROUP(sb);
++				EXT4_BLOCKS_PER_GROUP(sb) +
++				le32_to_cpu(es->s_first_data_block);
+ 			n_group--; /* set to last group number */
+ 		}
+ 
+diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
+index cc35537232f2..f0d8dabe1ff5 100644
+--- a/fs/jbd2/transaction.c
++++ b/fs/jbd2/transaction.c
+@@ -1252,11 +1252,12 @@ int jbd2_journal_get_undo_access(handle_t *handle, struct buffer_head *bh)
+ 	struct journal_head *jh;
+ 	char *committed_data = NULL;
+ 
+-	JBUFFER_TRACE(jh, "entry");
+ 	if (jbd2_write_access_granted(handle, bh, true))
+ 		return 0;
+ 
+ 	jh = jbd2_journal_add_journal_head(bh);
++	JBUFFER_TRACE(jh, "entry");
++
+ 	/*
+ 	 * Do this first --- it can drop the journal lock, so we want to
+ 	 * make sure that obtaining the committed_data is done
+@@ -1367,15 +1368,17 @@ int jbd2_journal_dirty_metadata(handle_t *handle, struct buffer_head *bh)
+ 
+ 	if (is_handle_aborted(handle))
+ 		return -EROFS;
+-	if (!buffer_jbd(bh)) {
+-		ret = -EUCLEAN;
+-		goto out;
+-	}
++	if (!buffer_jbd(bh))
++		return -EUCLEAN;
++
+ 	/*
+ 	 * We don't grab jh reference here since the buffer must be part
+ 	 * of the running transaction.
+ 	 */
+ 	jh = bh2jh(bh);
++	jbd_debug(5, "journal_head %p\n", jh);
++	JBUFFER_TRACE(jh, "entry");
++
+ 	/*
+ 	 * This and the following assertions are unreliable since we may see jh
+ 	 * in inconsistent state unless we grab bh_state lock. But this is
+@@ -1409,9 +1412,6 @@ int jbd2_journal_dirty_metadata(handle_t *handle, struct buffer_head *bh)
+ 	}
+ 
+ 	journal = transaction->t_journal;
+-	jbd_debug(5, "journal_head %p\n", jh);
+-	JBUFFER_TRACE(jh, "entry");
+-
+ 	jbd_lock_bh_state(bh);
+ 
+ 	if (jh->b_modified == 0) {
+@@ -1609,14 +1609,21 @@ int jbd2_journal_forget (handle_t *handle, struct buffer_head *bh)
+ 		/* However, if the buffer is still owned by a prior
+ 		 * (committing) transaction, we can't drop it yet... */
+ 		JBUFFER_TRACE(jh, "belongs to older transaction");
+-		/* ... but we CAN drop it from the new transaction if we
+-		 * have also modified it since the original commit. */
++		/* ... but we CAN drop it from the new transaction through
++		 * marking the buffer as freed and set j_next_transaction to
++		 * the new transaction, so that not only the commit code
++		 * knows it should clear dirty bits when it is done with the
++		 * buffer, but also the buffer can be checkpointed only
++		 * after the new transaction commits. */
+ 
+-		if (jh->b_next_transaction) {
+-			J_ASSERT(jh->b_next_transaction == transaction);
++		set_buffer_freed(bh);
++
++		if (!jh->b_next_transaction) {
+ 			spin_lock(&journal->j_list_lock);
+-			jh->b_next_transaction = NULL;
++			jh->b_next_transaction = transaction;
+ 			spin_unlock(&journal->j_list_lock);
++		} else {
++			J_ASSERT(jh->b_next_transaction == transaction);
+ 
+ 			/*
+ 			 * only drop a reference if this transaction modified
+diff --git a/fs/kernfs/mount.c b/fs/kernfs/mount.c
+index fdf527b6d79c..d71c9405874a 100644
+--- a/fs/kernfs/mount.c
++++ b/fs/kernfs/mount.c
+@@ -196,8 +196,10 @@ struct dentry *kernfs_node_dentry(struct kernfs_node *kn,
+ 		return dentry;
+ 
+ 	knparent = find_next_ancestor(kn, NULL);
+-	if (WARN_ON(!knparent))
++	if (WARN_ON(!knparent)) {
++		dput(dentry);
+ 		return ERR_PTR(-EINVAL);
++	}
+ 
+ 	do {
+ 		struct dentry *dtmp;
+@@ -206,8 +208,10 @@ struct dentry *kernfs_node_dentry(struct kernfs_node *kn,
+ 		if (kn == knparent)
+ 			return dentry;
+ 		kntmp = find_next_ancestor(kn, knparent);
+-		if (WARN_ON(!kntmp))
++		if (WARN_ON(!kntmp)) {
++			dput(dentry);
+ 			return ERR_PTR(-EINVAL);
++		}
+ 		dtmp = lookup_one_len_unlocked(kntmp->name, dentry,
+ 					       strlen(kntmp->name));
+ 		dput(dentry);
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 557a5d636183..64ac80ec6b7b 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -947,6 +947,13 @@ nfs4_sequence_process_interrupted(struct nfs_client *client,
+ 
+ #endif	/* !CONFIG_NFS_V4_1 */
+ 
++static void nfs41_sequence_res_init(struct nfs4_sequence_res *res)
++{
++	res->sr_timestamp = jiffies;
++	res->sr_status_flags = 0;
++	res->sr_status = 1;
++}
++
+ static
+ void nfs4_sequence_attach_slot(struct nfs4_sequence_args *args,
+ 		struct nfs4_sequence_res *res,
+@@ -958,10 +965,6 @@ void nfs4_sequence_attach_slot(struct nfs4_sequence_args *args,
+ 	args->sa_slot = slot;
+ 
+ 	res->sr_slot = slot;
+-	res->sr_timestamp = jiffies;
+-	res->sr_status_flags = 0;
+-	res->sr_status = 1;
+-
+ }
+ 
+ int nfs4_setup_sequence(struct nfs_client *client,
+@@ -1007,6 +1010,7 @@ int nfs4_setup_sequence(struct nfs_client *client,
+ 
+ 	trace_nfs4_setup_sequence(session, args);
+ out_start:
++	nfs41_sequence_res_init(res);
+ 	rpc_call_start(task);
+ 	return 0;
+ 
+diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c
+index e54d899c1848..a8951f1f7b4e 100644
+--- a/fs/nfs/pagelist.c
++++ b/fs/nfs/pagelist.c
+@@ -988,6 +988,17 @@ static void nfs_pageio_doio(struct nfs_pageio_descriptor *desc)
+ 	}
+ }
+ 
++static void
++nfs_pageio_cleanup_request(struct nfs_pageio_descriptor *desc,
++		struct nfs_page *req)
++{
++	LIST_HEAD(head);
++
++	nfs_list_remove_request(req);
++	nfs_list_add_request(req, &head);
++	desc->pg_completion_ops->error_cleanup(&head);
++}
++
+ /**
+  * nfs_pageio_add_request - Attempt to coalesce a request into a page list.
+  * @desc: destination io descriptor
+@@ -1025,10 +1036,8 @@ static int __nfs_pageio_add_request(struct nfs_pageio_descriptor *desc,
+ 			nfs_page_group_unlock(req);
+ 			desc->pg_moreio = 1;
+ 			nfs_pageio_doio(desc);
+-			if (desc->pg_error < 0)
+-				return 0;
+-			if (mirror->pg_recoalesce)
+-				return 0;
++			if (desc->pg_error < 0 || mirror->pg_recoalesce)
++				goto out_cleanup_subreq;
+ 			/* retry add_request for this subreq */
+ 			nfs_page_group_lock(req);
+ 			continue;
+@@ -1061,6 +1070,10 @@ err_ptr:
+ 	desc->pg_error = PTR_ERR(subreq);
+ 	nfs_page_group_unlock(req);
+ 	return 0;
++out_cleanup_subreq:
++	if (req != subreq)
++		nfs_pageio_cleanup_request(desc, subreq);
++	return 0;
+ }
+ 
+ static int nfs_do_recoalesce(struct nfs_pageio_descriptor *desc)
+@@ -1079,7 +1092,6 @@ static int nfs_do_recoalesce(struct nfs_pageio_descriptor *desc)
+ 			struct nfs_page *req;
+ 
+ 			req = list_first_entry(&head, struct nfs_page, wb_list);
+-			nfs_list_remove_request(req);
+ 			if (__nfs_pageio_add_request(desc, req))
+ 				continue;
+ 			if (desc->pg_error < 0) {
+@@ -1168,11 +1180,14 @@ int nfs_pageio_add_request(struct nfs_pageio_descriptor *desc,
+ 		if (nfs_pgio_has_mirroring(desc))
+ 			desc->pg_mirror_idx = midx;
+ 		if (!nfs_pageio_add_request_mirror(desc, dupreq))
+-			goto out_failed;
++			goto out_cleanup_subreq;
+ 	}
+ 
+ 	return 1;
+ 
++out_cleanup_subreq:
++	if (req != dupreq)
++		nfs_pageio_cleanup_request(desc, dupreq);
+ out_failed:
+ 	nfs_pageio_error_cleanup(desc);
+ 	return 0;
+@@ -1194,7 +1209,7 @@ static void nfs_pageio_complete_mirror(struct nfs_pageio_descriptor *desc,
+ 		desc->pg_mirror_idx = mirror_idx;
+ 	for (;;) {
+ 		nfs_pageio_doio(desc);
+-		if (!mirror->pg_recoalesce)
++		if (desc->pg_error < 0 || !mirror->pg_recoalesce)
+ 			break;
+ 		if (!nfs_do_recoalesce(desc))
+ 			break;
+diff --git a/fs/nfsd/nfs3proc.c b/fs/nfsd/nfs3proc.c
+index 9eb8086ea841..c9cf46e0c040 100644
+--- a/fs/nfsd/nfs3proc.c
++++ b/fs/nfsd/nfs3proc.c
+@@ -463,8 +463,19 @@ nfsd3_proc_readdir(struct svc_rqst *rqstp)
+ 					&resp->common, nfs3svc_encode_entry);
+ 	memcpy(resp->verf, argp->verf, 8);
+ 	resp->count = resp->buffer - argp->buffer;
+-	if (resp->offset)
+-		xdr_encode_hyper(resp->offset, argp->cookie);
++	if (resp->offset) {
++		loff_t offset = argp->cookie;
++
++		if (unlikely(resp->offset1)) {
++			/* we ended up with offset on a page boundary */
++			*resp->offset = htonl(offset >> 32);
++			*resp->offset1 = htonl(offset & 0xffffffff);
++			resp->offset1 = NULL;
++		} else {
++			xdr_encode_hyper(resp->offset, offset);
++		}
++		resp->offset = NULL;
++	}
+ 
+ 	RETURN_STATUS(nfserr);
+ }
+@@ -533,6 +544,7 @@ nfsd3_proc_readdirplus(struct svc_rqst *rqstp)
+ 		} else {
+ 			xdr_encode_hyper(resp->offset, offset);
+ 		}
++		resp->offset = NULL;
+ 	}
+ 
+ 	RETURN_STATUS(nfserr);
+diff --git a/fs/nfsd/nfs3xdr.c b/fs/nfsd/nfs3xdr.c
+index 9b973f4f7d01..83919116d5cb 100644
+--- a/fs/nfsd/nfs3xdr.c
++++ b/fs/nfsd/nfs3xdr.c
+@@ -921,6 +921,7 @@ encode_entry(struct readdir_cd *ccd, const char *name, int namlen,
+ 		} else {
+ 			xdr_encode_hyper(cd->offset, offset64);
+ 		}
++		cd->offset = NULL;
+ 	}
+ 
+ 	/*
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index fb3c9844c82a..6a45fb00c5fc 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -1544,16 +1544,16 @@ static u32 nfsd4_get_drc_mem(struct nfsd4_channel_attrs *ca)
+ {
+ 	u32 slotsize = slot_bytes(ca);
+ 	u32 num = ca->maxreqs;
+-	int avail;
++	unsigned long avail, total_avail;
+ 
+ 	spin_lock(&nfsd_drc_lock);
+-	avail = min((unsigned long)NFSD_MAX_MEM_PER_SESSION,
+-		    nfsd_drc_max_mem - nfsd_drc_mem_used);
++	total_avail = nfsd_drc_max_mem - nfsd_drc_mem_used;
++	avail = min((unsigned long)NFSD_MAX_MEM_PER_SESSION, total_avail);
+ 	/*
+ 	 * Never use more than a third of the remaining memory,
+ 	 * unless it's the only way to give this client a slot:
+ 	 */
+-	avail = clamp_t(int, avail, slotsize, avail/3);
++	avail = clamp_t(int, avail, slotsize, total_avail/3);
+ 	num = min_t(int, num, avail / slotsize);
+ 	nfsd_drc_mem_used += num * slotsize;
+ 	spin_unlock(&nfsd_drc_lock);
+diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
+index 72a7681f4046..f2feb2d11bae 100644
+--- a/fs/nfsd/nfsctl.c
++++ b/fs/nfsd/nfsctl.c
+@@ -1126,7 +1126,7 @@ static ssize_t write_v4_end_grace(struct file *file, char *buf, size_t size)
+ 		case 'Y':
+ 		case 'y':
+ 		case '1':
+-			if (nn->nfsd_serv)
++			if (!nn->nfsd_serv)
+ 				return -EBUSY;
+ 			nfsd4_end_grace(nn);
+ 			break;
+diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c
+index 9e62dcf06fc4..68b3303e4b46 100644
+--- a/fs/overlayfs/copy_up.c
++++ b/fs/overlayfs/copy_up.c
+@@ -443,6 +443,24 @@ static int ovl_copy_up_inode(struct ovl_copy_up_ctx *c, struct dentry *temp)
+ {
+ 	int err;
+ 
++	/*
++	 * Copy up data first and then xattrs. Writing data after
++	 * xattrs will remove security.capability xattr automatically.
++	 */
++	if (S_ISREG(c->stat.mode) && !c->metacopy) {
++		struct path upperpath, datapath;
++
++		ovl_path_upper(c->dentry, &upperpath);
++		if (WARN_ON(upperpath.dentry != NULL))
++			return -EIO;
++		upperpath.dentry = temp;
++
++		ovl_path_lowerdata(c->dentry, &datapath);
++		err = ovl_copy_up_data(&datapath, &upperpath, c->stat.size);
++		if (err)
++			return err;
++	}
++
+ 	err = ovl_copy_xattr(c->lowerpath.dentry, temp);
+ 	if (err)
+ 		return err;
+@@ -460,19 +478,6 @@ static int ovl_copy_up_inode(struct ovl_copy_up_ctx *c, struct dentry *temp)
+ 			return err;
+ 	}
+ 
+-	if (S_ISREG(c->stat.mode) && !c->metacopy) {
+-		struct path upperpath, datapath;
+-
+-		ovl_path_upper(c->dentry, &upperpath);
+-		BUG_ON(upperpath.dentry != NULL);
+-		upperpath.dentry = temp;
+-
+-		ovl_path_lowerdata(c->dentry, &datapath);
+-		err = ovl_copy_up_data(&datapath, &upperpath, c->stat.size);
+-		if (err)
+-			return err;
+-	}
+-
+ 	if (c->metacopy) {
+ 		err = ovl_check_setxattr(c->dentry, temp, OVL_XATTR_METACOPY,
+ 					 NULL, 0, -EOPNOTSUPP);
+@@ -737,6 +742,8 @@ static int ovl_copy_up_meta_inode_data(struct ovl_copy_up_ctx *c)
+ {
+ 	struct path upperpath, datapath;
+ 	int err;
++	char *capability = NULL;
++	ssize_t uninitialized_var(cap_size);
+ 
+ 	ovl_path_upper(c->dentry, &upperpath);
+ 	if (WARN_ON(upperpath.dentry == NULL))
+@@ -746,15 +753,37 @@ static int ovl_copy_up_meta_inode_data(struct ovl_copy_up_ctx *c)
+ 	if (WARN_ON(datapath.dentry == NULL))
+ 		return -EIO;
+ 
++	if (c->stat.size) {
++		err = cap_size = ovl_getxattr(upperpath.dentry, XATTR_NAME_CAPS,
++					      &capability, 0);
++		if (err < 0 && err != -ENODATA)
++			goto out;
++	}
++
+ 	err = ovl_copy_up_data(&datapath, &upperpath, c->stat.size);
+ 	if (err)
+-		return err;
++		goto out_free;
++
++	/*
++	 * Writing to upper file will clear security.capability xattr. We
++	 * don't want that to happen for normal copy-up operation.
++	 */
++	if (capability) {
++		err = ovl_do_setxattr(upperpath.dentry, XATTR_NAME_CAPS,
++				      capability, cap_size, 0);
++		if (err)
++			goto out_free;
++	}
++
+ 
+ 	err = vfs_removexattr(upperpath.dentry, OVL_XATTR_METACOPY);
+ 	if (err)
+-		return err;
++		goto out_free;
+ 
+ 	ovl_set_upperdata(d_inode(c->dentry));
++out_free:
++	kfree(capability);
++out:
+ 	return err;
+ }
+ 
+diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h
+index 5e45cb3630a0..9c6018287d57 100644
+--- a/fs/overlayfs/overlayfs.h
++++ b/fs/overlayfs/overlayfs.h
+@@ -277,6 +277,8 @@ int ovl_lock_rename_workdir(struct dentry *workdir, struct dentry *upperdir);
+ int ovl_check_metacopy_xattr(struct dentry *dentry);
+ bool ovl_is_metacopy_dentry(struct dentry *dentry);
+ char *ovl_get_redirect_xattr(struct dentry *dentry, int padding);
++ssize_t ovl_getxattr(struct dentry *dentry, char *name, char **value,
++		     size_t padding);
+ 
+ static inline bool ovl_is_impuredir(struct dentry *dentry)
+ {
+diff --git a/fs/overlayfs/util.c b/fs/overlayfs/util.c
+index 7c01327b1852..4035e640f402 100644
+--- a/fs/overlayfs/util.c
++++ b/fs/overlayfs/util.c
+@@ -863,28 +863,49 @@ bool ovl_is_metacopy_dentry(struct dentry *dentry)
+ 	return (oe->numlower > 1);
+ }
+ 
+-char *ovl_get_redirect_xattr(struct dentry *dentry, int padding)
++ssize_t ovl_getxattr(struct dentry *dentry, char *name, char **value,
++		     size_t padding)
+ {
+-	int res;
+-	char *s, *next, *buf = NULL;
++	ssize_t res;
++	char *buf = NULL;
+ 
+-	res = vfs_getxattr(dentry, OVL_XATTR_REDIRECT, NULL, 0);
++	res = vfs_getxattr(dentry, name, NULL, 0);
+ 	if (res < 0) {
+ 		if (res == -ENODATA || res == -EOPNOTSUPP)
+-			return NULL;
++			return -ENODATA;
+ 		goto fail;
+ 	}
+ 
+-	buf = kzalloc(res + padding + 1, GFP_KERNEL);
+-	if (!buf)
+-		return ERR_PTR(-ENOMEM);
++	if (res != 0) {
++		buf = kzalloc(res + padding, GFP_KERNEL);
++		if (!buf)
++			return -ENOMEM;
+ 
+-	if (res == 0)
+-		goto invalid;
++		res = vfs_getxattr(dentry, name, buf, res);
++		if (res < 0)
++			goto fail;
++	}
++	*value = buf;
++
++	return res;
++
++fail:
++	pr_warn_ratelimited("overlayfs: failed to get xattr %s: err=%zi)\n",
++			    name, res);
++	kfree(buf);
++	return res;
++}
+ 
+-	res = vfs_getxattr(dentry, OVL_XATTR_REDIRECT, buf, res);
++char *ovl_get_redirect_xattr(struct dentry *dentry, int padding)
++{
++	int res;
++	char *s, *next, *buf = NULL;
++
++	res = ovl_getxattr(dentry, OVL_XATTR_REDIRECT, &buf, padding + 1);
++	if (res == -ENODATA)
++		return NULL;
+ 	if (res < 0)
+-		goto fail;
++		return ERR_PTR(res);
+ 	if (res == 0)
+ 		goto invalid;
+ 
+@@ -900,15 +921,9 @@ char *ovl_get_redirect_xattr(struct dentry *dentry, int padding)
+ 	}
+ 
+ 	return buf;
+-
+-err_free:
+-	kfree(buf);
+-	return ERR_PTR(res);
+-fail:
+-	pr_warn_ratelimited("overlayfs: failed to get redirect (%i)\n", res);
+-	goto err_free;
+ invalid:
+ 	pr_warn_ratelimited("overlayfs: invalid redirect (%s)\n", buf);
+ 	res = -EINVAL;
+-	goto err_free;
++	kfree(buf);
++	return ERR_PTR(res);
+ }
+diff --git a/fs/pipe.c b/fs/pipe.c
+index bdc5d3c0977d..c51750ed4011 100644
+--- a/fs/pipe.c
++++ b/fs/pipe.c
+@@ -234,6 +234,14 @@ static const struct pipe_buf_operations anon_pipe_buf_ops = {
+ 	.get = generic_pipe_buf_get,
+ };
+ 
++static const struct pipe_buf_operations anon_pipe_buf_nomerge_ops = {
++	.can_merge = 0,
++	.confirm = generic_pipe_buf_confirm,
++	.release = anon_pipe_buf_release,
++	.steal = anon_pipe_buf_steal,
++	.get = generic_pipe_buf_get,
++};
++
+ static const struct pipe_buf_operations packet_pipe_buf_ops = {
+ 	.can_merge = 0,
+ 	.confirm = generic_pipe_buf_confirm,
+@@ -242,6 +250,12 @@ static const struct pipe_buf_operations packet_pipe_buf_ops = {
+ 	.get = generic_pipe_buf_get,
+ };
+ 
++void pipe_buf_mark_unmergeable(struct pipe_buffer *buf)
++{
++	if (buf->ops == &anon_pipe_buf_ops)
++		buf->ops = &anon_pipe_buf_nomerge_ops;
++}
++
+ static ssize_t
+ pipe_read(struct kiocb *iocb, struct iov_iter *to)
+ {
+diff --git a/fs/splice.c b/fs/splice.c
+index de2ede048473..90c29675d573 100644
+--- a/fs/splice.c
++++ b/fs/splice.c
+@@ -1597,6 +1597,8 @@ retry:
+ 			 */
+ 			obuf->flags &= ~PIPE_BUF_FLAG_GIFT;
+ 
++			pipe_buf_mark_unmergeable(obuf);
++
+ 			obuf->len = len;
+ 			opipe->nrbufs++;
+ 			ibuf->offset += obuf->len;
+@@ -1671,6 +1673,8 @@ static int link_pipe(struct pipe_inode_info *ipipe,
+ 		 */
+ 		obuf->flags &= ~PIPE_BUF_FLAG_GIFT;
+ 
++		pipe_buf_mark_unmergeable(obuf);
++
+ 		if (obuf->len > len)
+ 			obuf->len = len;
+ 
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index 3d7a6a9c2370..f8f6f04c4453 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -733,7 +733,7 @@
+ 		KEEP(*(.orc_unwind_ip))					\
+ 		__stop_orc_unwind_ip = .;				\
+ 	}								\
+-	. = ALIGN(6);							\
++	. = ALIGN(2);							\
+ 	.orc_unwind : AT(ADDR(.orc_unwind) - LOAD_OFFSET) {		\
+ 		__start_orc_unwind = .;					\
+ 		KEEP(*(.orc_unwind))					\
+diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
+index e528baebad69..bee4bb9f81bc 100644
+--- a/include/linux/device-mapper.h
++++ b/include/linux/device-mapper.h
+@@ -609,7 +609,7 @@ do {									\
+  */
+ #define dm_target_offset(ti, sector) ((sector) - (ti)->begin)
+ 
+-static inline sector_t to_sector(unsigned long n)
++static inline sector_t to_sector(unsigned long long n)
+ {
+ 	return (n >> SECTOR_SHIFT);
+ }
+diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
+index f6ded992c183..5b21f14802e1 100644
+--- a/include/linux/dma-mapping.h
++++ b/include/linux/dma-mapping.h
+@@ -130,6 +130,7 @@ struct dma_map_ops {
+ 			enum dma_data_direction direction);
+ 	int (*dma_supported)(struct device *dev, u64 mask);
+ 	u64 (*get_required_mask)(struct device *dev);
++	size_t (*max_mapping_size)(struct device *dev);
+ };
+ 
+ #define DMA_MAPPING_ERROR		(~(dma_addr_t)0)
+@@ -257,6 +258,8 @@ static inline void dma_direct_sync_sg_for_cpu(struct device *dev,
+ }
+ #endif
+ 
++size_t dma_direct_max_mapping_size(struct device *dev);
++
+ #ifdef CONFIG_HAS_DMA
+ #include <asm/dma-mapping.h>
+ 
+@@ -460,6 +463,7 @@ int dma_supported(struct device *dev, u64 mask);
+ int dma_set_mask(struct device *dev, u64 mask);
+ int dma_set_coherent_mask(struct device *dev, u64 mask);
+ u64 dma_get_required_mask(struct device *dev);
++size_t dma_max_mapping_size(struct device *dev);
+ #else /* CONFIG_HAS_DMA */
+ static inline dma_addr_t dma_map_page_attrs(struct device *dev,
+ 		struct page *page, size_t offset, size_t size,
+@@ -561,6 +565,10 @@ static inline u64 dma_get_required_mask(struct device *dev)
+ {
+ 	return 0;
+ }
++static inline size_t dma_max_mapping_size(struct device *dev)
++{
++	return 0;
++}
+ #endif /* CONFIG_HAS_DMA */
+ 
+ static inline dma_addr_t dma_map_single_attrs(struct device *dev, void *ptr,
+diff --git a/include/linux/hardirq.h b/include/linux/hardirq.h
+index 0fbbcdf0c178..da0af631ded5 100644
+--- a/include/linux/hardirq.h
++++ b/include/linux/hardirq.h
+@@ -60,8 +60,14 @@ extern void irq_enter(void);
+  */
+ extern void irq_exit(void);
+ 
++#ifndef arch_nmi_enter
++#define arch_nmi_enter()	do { } while (0)
++#define arch_nmi_exit()		do { } while (0)
++#endif
++
+ #define nmi_enter()						\
+ 	do {							\
++		arch_nmi_enter();				\
+ 		printk_nmi_enter();				\
+ 		lockdep_off();					\
+ 		ftrace_nmi_enter();				\
+@@ -80,6 +86,7 @@ extern void irq_exit(void);
+ 		ftrace_nmi_exit();				\
+ 		lockdep_on();					\
+ 		printk_nmi_exit();				\
++		arch_nmi_exit();				\
+ 	} while (0)
+ 
+ #endif /* LINUX_HARDIRQ_H */
+diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
+index c38cc5eb7e73..cf761ff58224 100644
+--- a/include/linux/kvm_host.h
++++ b/include/linux/kvm_host.h
+@@ -634,7 +634,7 @@ void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
+ 			   struct kvm_memory_slot *dont);
+ int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
+ 			    unsigned long npages);
+-void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots);
++void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen);
+ int kvm_arch_prepare_memory_region(struct kvm *kvm,
+ 				struct kvm_memory_slot *memslot,
+ 				const struct kvm_userspace_memory_region *mem,
+diff --git a/include/linux/pipe_fs_i.h b/include/linux/pipe_fs_i.h
+index 5a3bb3b7c9ad..3ecd7ea212ae 100644
+--- a/include/linux/pipe_fs_i.h
++++ b/include/linux/pipe_fs_i.h
+@@ -182,6 +182,7 @@ void generic_pipe_buf_get(struct pipe_inode_info *, struct pipe_buffer *);
+ int generic_pipe_buf_confirm(struct pipe_inode_info *, struct pipe_buffer *);
+ int generic_pipe_buf_steal(struct pipe_inode_info *, struct pipe_buffer *);
+ void generic_pipe_buf_release(struct pipe_inode_info *, struct pipe_buffer *);
++void pipe_buf_mark_unmergeable(struct pipe_buffer *buf);
+ 
+ extern const struct pipe_buf_operations nosteal_pipe_buf_ops;
+ 
+diff --git a/include/linux/property.h b/include/linux/property.h
+index 3789ec755fb6..65d3420dd5d1 100644
+--- a/include/linux/property.h
++++ b/include/linux/property.h
+@@ -258,7 +258,7 @@ struct property_entry {
+ #define PROPERTY_ENTRY_STRING(_name_, _val_)		\
+ (struct property_entry) {				\
+ 	.name = _name_,					\
+-	.length = sizeof(_val_),			\
++	.length = sizeof(const char *),			\
+ 	.type = DEV_PROP_STRING,			\
+ 	{ .value = { .str = _val_ } },			\
+ }
+diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
+index 7c007ed7505f..29bc3a203283 100644
+--- a/include/linux/swiotlb.h
++++ b/include/linux/swiotlb.h
+@@ -76,6 +76,8 @@ bool swiotlb_map(struct device *dev, phys_addr_t *phys, dma_addr_t *dma_addr,
+ 		size_t size, enum dma_data_direction dir, unsigned long attrs);
+ void __init swiotlb_exit(void);
+ unsigned int swiotlb_max_segment(void);
++size_t swiotlb_max_mapping_size(struct device *dev);
++bool is_swiotlb_active(void);
+ #else
+ #define swiotlb_force SWIOTLB_NO_FORCE
+ static inline bool is_swiotlb_buffer(phys_addr_t paddr)
+@@ -95,6 +97,15 @@ static inline unsigned int swiotlb_max_segment(void)
+ {
+ 	return 0;
+ }
++static inline size_t swiotlb_max_mapping_size(struct device *dev)
++{
++	return SIZE_MAX;
++}
++
++static inline bool is_swiotlb_active(void)
++{
++	return false;
++}
+ #endif /* CONFIG_SWIOTLB */
+ 
+ extern void swiotlb_print_info(void);
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index f31bd61c9466..503bba3c4bae 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -2033,7 +2033,7 @@ struct dentry *cgroup_do_mount(struct file_system_type *fs_type, int flags,
+ 			       struct cgroup_namespace *ns)
+ {
+ 	struct dentry *dentry;
+-	bool new_sb;
++	bool new_sb = false;
+ 
+ 	dentry = kernfs_mount(fs_type, flags, root->kf_root, magic, &new_sb);
+ 
+@@ -2043,6 +2043,7 @@ struct dentry *cgroup_do_mount(struct file_system_type *fs_type, int flags,
+ 	 */
+ 	if (!IS_ERR(dentry) && ns != &init_cgroup_ns) {
+ 		struct dentry *nsdentry;
++		struct super_block *sb = dentry->d_sb;
+ 		struct cgroup *cgrp;
+ 
+ 		mutex_lock(&cgroup_mutex);
+@@ -2053,12 +2054,14 @@ struct dentry *cgroup_do_mount(struct file_system_type *fs_type, int flags,
+ 		spin_unlock_irq(&css_set_lock);
+ 		mutex_unlock(&cgroup_mutex);
+ 
+-		nsdentry = kernfs_node_dentry(cgrp->kn, dentry->d_sb);
++		nsdentry = kernfs_node_dentry(cgrp->kn, sb);
+ 		dput(dentry);
++		if (IS_ERR(nsdentry))
++			deactivate_locked_super(sb);
+ 		dentry = nsdentry;
+ 	}
+ 
+-	if (IS_ERR(dentry) || !new_sb)
++	if (!new_sb)
+ 		cgroup_put(&root->cgrp);
+ 
+ 	return dentry;
+diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
+index 355d16acee6d..6310ad01f915 100644
+--- a/kernel/dma/direct.c
++++ b/kernel/dma/direct.c
+@@ -380,3 +380,14 @@ int dma_direct_supported(struct device *dev, u64 mask)
+ 	 */
+ 	return mask >= __phys_to_dma(dev, min_mask);
+ }
++
++size_t dma_direct_max_mapping_size(struct device *dev)
++{
++	size_t size = SIZE_MAX;
++
++	/* If SWIOTLB is active, use its maximum mapping size */
++	if (is_swiotlb_active())
++		size = swiotlb_max_mapping_size(dev);
++
++	return size;
++}
+diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
+index a11006b6d8e8..5753008ab286 100644
+--- a/kernel/dma/mapping.c
++++ b/kernel/dma/mapping.c
+@@ -357,3 +357,17 @@ void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
+ 		ops->cache_sync(dev, vaddr, size, dir);
+ }
+ EXPORT_SYMBOL(dma_cache_sync);
++
++size_t dma_max_mapping_size(struct device *dev)
++{
++	const struct dma_map_ops *ops = get_dma_ops(dev);
++	size_t size = SIZE_MAX;
++
++	if (dma_is_direct(ops))
++		size = dma_direct_max_mapping_size(dev);
++	else if (ops && ops->max_mapping_size)
++		size = ops->max_mapping_size(dev);
++
++	return size;
++}
++EXPORT_SYMBOL_GPL(dma_max_mapping_size);
+diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
+index 1fb6fd68b9c7..c873f9cc2146 100644
+--- a/kernel/dma/swiotlb.c
++++ b/kernel/dma/swiotlb.c
+@@ -662,3 +662,17 @@ swiotlb_dma_supported(struct device *hwdev, u64 mask)
+ {
+ 	return __phys_to_dma(hwdev, io_tlb_end - 1) <= mask;
+ }
++
++size_t swiotlb_max_mapping_size(struct device *dev)
++{
++	return ((size_t)1 << IO_TLB_SHIFT) * IO_TLB_SEGSIZE;
++}
++
++bool is_swiotlb_active(void)
++{
++	/*
++	 * When SWIOTLB is initialized, even if io_tlb_start points to physical
++	 * address zero, io_tlb_end surely doesn't.
++	 */
++	return io_tlb_end != 0;
++}
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 9180158756d2..38d44d27e37a 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -1557,14 +1557,23 @@ static bool rcu_future_gp_cleanup(struct rcu_node *rnp)
+ }
+ 
+ /*
+- * Awaken the grace-period kthread.  Don't do a self-awaken, and don't
+- * bother awakening when there is nothing for the grace-period kthread
+- * to do (as in several CPUs raced to awaken, and we lost), and finally
+- * don't try to awaken a kthread that has not yet been created.
++ * Awaken the grace-period kthread.  Don't do a self-awaken (unless in
++ * an interrupt or softirq handler), and don't bother awakening when there
++ * is nothing for the grace-period kthread to do (as in several CPUs raced
++ * to awaken, and we lost), and finally don't try to awaken a kthread that
++ * has not yet been created.  If all those checks are passed, track some
++ * debug information and awaken.
++ *
++ * So why do the self-wakeup when in an interrupt or softirq handler
++ * in the grace-period kthread's context?  Because the kthread might have
++ * been interrupted just as it was going to sleep, and just after the final
++ * pre-sleep check of the awaken condition.  In this case, a wakeup really
++ * is required, and is therefore supplied.
+  */
+ static void rcu_gp_kthread_wake(void)
+ {
+-	if (current == rcu_state.gp_kthread ||
++	if ((current == rcu_state.gp_kthread &&
++	     !in_interrupt() && !in_serving_softirq()) ||
+ 	    !READ_ONCE(rcu_state.gp_flags) ||
+ 	    !rcu_state.gp_kthread)
+ 		return;
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index ba4d9e85feb8..d80bee8ff12e 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -2579,7 +2579,16 @@ static int do_proc_dointvec_minmax_conv(bool *negp, unsigned long *lvalp,
+ {
+ 	struct do_proc_dointvec_minmax_conv_param *param = data;
+ 	if (write) {
+-		int val = *negp ? -*lvalp : *lvalp;
++		int val;
++		if (*negp) {
++			if (*lvalp > (unsigned long) INT_MAX + 1)
++				return -EINVAL;
++			val = -*lvalp;
++		} else {
++			if (*lvalp > (unsigned long) INT_MAX)
++				return -EINVAL;
++			val = *lvalp;
++		}
+ 		if ((param->min && *param->min > val) ||
+ 		    (param->max && *param->max < val))
+ 			return -EINVAL;
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index c4238b441624..5f40db27aaf2 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -5626,7 +5626,6 @@ out:
+ 	return ret;
+ 
+ fail:
+-	kfree(iter->trace);
+ 	kfree(iter);
+ 	__trace_array_put(tr);
+ 	mutex_unlock(&trace_types_lock);
+diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c
+index 76217bbef815..4629a6104474 100644
+--- a/kernel/trace/trace_event_perf.c
++++ b/kernel/trace/trace_event_perf.c
+@@ -299,15 +299,13 @@ int perf_uprobe_init(struct perf_event *p_event,
+ 
+ 	if (!p_event->attr.uprobe_path)
+ 		return -EINVAL;
+-	path = kzalloc(PATH_MAX, GFP_KERNEL);
+-	if (!path)
+-		return -ENOMEM;
+-	ret = strncpy_from_user(
+-		path, u64_to_user_ptr(p_event->attr.uprobe_path), PATH_MAX);
+-	if (ret == PATH_MAX)
+-		return -E2BIG;
+-	if (ret < 0)
+-		goto out;
++
++	path = strndup_user(u64_to_user_ptr(p_event->attr.uprobe_path),
++			    PATH_MAX);
++	if (IS_ERR(path)) {
++		ret = PTR_ERR(path);
++		return (ret == -EINVAL) ? -E2BIG : ret;
++	}
+ 	if (path[0] == '\0') {
+ 		ret = -EINVAL;
+ 		goto out;
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 449d90cfa151..55b72b1c63a0 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -4695,9 +4695,10 @@ static inline void add_to_key(char *compound_key, void *key,
+ 		/* ensure NULL-termination */
+ 		if (size > key_field->size - 1)
+ 			size = key_field->size - 1;
+-	}
+ 
+-	memcpy(compound_key + key_field->offset, key, size);
++		strncpy(compound_key + key_field->offset, (char *)key, size);
++	} else
++		memcpy(compound_key + key_field->offset, key, size);
+ }
+ 
+ static void
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index 831be5ff5f4d..fc8b51744579 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -1825,19 +1825,17 @@ static int soft_offline_in_use_page(struct page *page, int flags)
+ 	struct page *hpage = compound_head(page);
+ 
+ 	if (!PageHuge(page) && PageTransHuge(hpage)) {
+-		lock_page(hpage);
+-		if (!PageAnon(hpage) || unlikely(split_huge_page(hpage))) {
+-			unlock_page(hpage);
+-			if (!PageAnon(hpage))
++		lock_page(page);
++		if (!PageAnon(page) || unlikely(split_huge_page(page))) {
++			unlock_page(page);
++			if (!PageAnon(page))
+ 				pr_info("soft offline: %#lx: non anonymous thp\n", page_to_pfn(page));
+ 			else
+ 				pr_info("soft offline: %#lx: thp split failed\n", page_to_pfn(page));
+-			put_hwpoison_page(hpage);
++			put_hwpoison_page(page);
+ 			return -EBUSY;
+ 		}
+-		unlock_page(hpage);
+-		get_hwpoison_page(page);
+-		put_hwpoison_page(hpage);
++		unlock_page(page);
+ 	}
+ 
+ 	/*
+diff --git a/mm/memory.c b/mm/memory.c
+index e11ca9dd823f..e8d69ade5acc 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -3517,10 +3517,13 @@ static vm_fault_t do_shared_fault(struct vm_fault *vmf)
+  * but allow concurrent faults).
+  * The mmap_sem may have been released depending on flags and our
+  * return value.  See filemap_fault() and __lock_page_or_retry().
++ * If mmap_sem is released, vma may become invalid (for example
++ * by other thread calling munmap()).
+  */
+ static vm_fault_t do_fault(struct vm_fault *vmf)
+ {
+ 	struct vm_area_struct *vma = vmf->vma;
++	struct mm_struct *vm_mm = vma->vm_mm;
+ 	vm_fault_t ret;
+ 
+ 	/*
+@@ -3561,7 +3564,7 @@ static vm_fault_t do_fault(struct vm_fault *vmf)
+ 
+ 	/* preallocated pagetable is unused: free it */
+ 	if (vmf->prealloc_pte) {
+-		pte_free(vma->vm_mm, vmf->prealloc_pte);
++		pte_free(vm_mm, vmf->prealloc_pte);
+ 		vmf->prealloc_pte = NULL;
+ 	}
+ 	return ret;
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index 871e41c55e23..2cd24186ba84 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -2248,7 +2248,7 @@ int remap_vmalloc_range_partial(struct vm_area_struct *vma, unsigned long uaddr,
+ 	if (!(area->flags & VM_USERMAP))
+ 		return -EINVAL;
+ 
+-	if (kaddr + size > area->addr + area->size)
++	if (kaddr + size > area->addr + get_vm_area_size(area))
+ 		return -EINVAL;
+ 
+ 	do {
+diff --git a/net/9p/client.c b/net/9p/client.c
+index 357214a51f13..b85d51f4b8eb 100644
+--- a/net/9p/client.c
++++ b/net/9p/client.c
+@@ -1061,7 +1061,7 @@ struct p9_client *p9_client_create(const char *dev_name, char *options)
+ 		p9_debug(P9_DEBUG_ERROR,
+ 			 "Please specify a msize of at least 4k\n");
+ 		err = -EINVAL;
+-		goto free_client;
++		goto close_trans;
+ 	}
+ 
+ 	err = p9_client_version(clnt);
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index d7ec6132c046..d455537c8fc6 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -66,9 +66,6 @@ static void	call_decode(struct rpc_task *task);
+ static void	call_bind(struct rpc_task *task);
+ static void	call_bind_status(struct rpc_task *task);
+ static void	call_transmit(struct rpc_task *task);
+-#if defined(CONFIG_SUNRPC_BACKCHANNEL)
+-static void	call_bc_transmit(struct rpc_task *task);
+-#endif /* CONFIG_SUNRPC_BACKCHANNEL */
+ static void	call_status(struct rpc_task *task);
+ static void	call_transmit_status(struct rpc_task *task);
+ static void	call_refresh(struct rpc_task *task);
+@@ -80,6 +77,7 @@ static void	call_connect_status(struct rpc_task *task);
+ static __be32	*rpc_encode_header(struct rpc_task *task);
+ static __be32	*rpc_verify_header(struct rpc_task *task);
+ static int	rpc_ping(struct rpc_clnt *clnt);
++static void	rpc_check_timeout(struct rpc_task *task);
+ 
+ static void rpc_register_client(struct rpc_clnt *clnt)
+ {
+@@ -1131,6 +1129,8 @@ rpc_call_async(struct rpc_clnt *clnt, const struct rpc_message *msg, int flags,
+ EXPORT_SYMBOL_GPL(rpc_call_async);
+ 
+ #if defined(CONFIG_SUNRPC_BACKCHANNEL)
++static void call_bc_encode(struct rpc_task *task);
++
+ /**
+  * rpc_run_bc_task - Allocate a new RPC task for backchannel use, then run
+  * rpc_execute against it
+@@ -1152,7 +1152,7 @@ struct rpc_task *rpc_run_bc_task(struct rpc_rqst *req)
+ 	task = rpc_new_task(&task_setup_data);
+ 	xprt_init_bc_request(req, task);
+ 
+-	task->tk_action = call_bc_transmit;
++	task->tk_action = call_bc_encode;
+ 	atomic_inc(&task->tk_count);
+ 	WARN_ON_ONCE(atomic_read(&task->tk_count) != 2);
+ 	rpc_execute(task);
+@@ -1786,7 +1786,12 @@ call_encode(struct rpc_task *task)
+ 		xprt_request_enqueue_receive(task);
+ 	xprt_request_enqueue_transmit(task);
+ out:
+-	task->tk_action = call_bind;
++	task->tk_action = call_transmit;
++	/* Check that the connection is OK */
++	if (!xprt_bound(task->tk_xprt))
++		task->tk_action = call_bind;
++	else if (!xprt_connected(task->tk_xprt))
++		task->tk_action = call_connect;
+ }
+ 
+ /*
+@@ -1937,8 +1942,7 @@ call_connect_status(struct rpc_task *task)
+ 			break;
+ 		if (clnt->cl_autobind) {
+ 			rpc_force_rebind(clnt);
+-			task->tk_action = call_bind;
+-			return;
++			goto out_retry;
+ 		}
+ 		/* fall through */
+ 	case -ECONNRESET:
+@@ -1958,16 +1962,19 @@ call_connect_status(struct rpc_task *task)
+ 		/* fall through */
+ 	case -ENOTCONN:
+ 	case -EAGAIN:
+-		/* Check for timeouts before looping back to call_bind */
+ 	case -ETIMEDOUT:
+-		task->tk_action = call_timeout;
+-		return;
++		goto out_retry;
+ 	case 0:
+ 		clnt->cl_stats->netreconn++;
+ 		task->tk_action = call_transmit;
+ 		return;
+ 	}
+ 	rpc_exit(task, status);
++	return;
++out_retry:
++	/* Check for timeouts before looping back to call_bind */
++	task->tk_action = call_bind;
++	rpc_check_timeout(task);
+ }
+ 
+ /*
+@@ -1978,13 +1985,19 @@ call_transmit(struct rpc_task *task)
+ {
+ 	dprint_status(task);
+ 
+-	task->tk_status = 0;
++	task->tk_action = call_transmit_status;
+ 	if (test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate)) {
+ 		if (!xprt_prepare_transmit(task))
+ 			return;
+-		xprt_transmit(task);
++		task->tk_status = 0;
++		if (test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate)) {
++			if (!xprt_connected(task->tk_xprt)) {
++				task->tk_status = -ENOTCONN;
++				return;
++			}
++			xprt_transmit(task);
++		}
+ 	}
+-	task->tk_action = call_transmit_status;
+ 	xprt_end_transmit(task);
+ }
+ 
+@@ -2038,7 +2051,7 @@ call_transmit_status(struct rpc_task *task)
+ 				trace_xprt_ping(task->tk_xprt,
+ 						task->tk_status);
+ 			rpc_exit(task, task->tk_status);
+-			break;
++			return;
+ 		}
+ 		/* fall through */
+ 	case -ECONNRESET:
+@@ -2046,11 +2059,24 @@ call_transmit_status(struct rpc_task *task)
+ 	case -EADDRINUSE:
+ 	case -ENOTCONN:
+ 	case -EPIPE:
++		task->tk_action = call_bind;
++		task->tk_status = 0;
+ 		break;
+ 	}
++	rpc_check_timeout(task);
+ }
+ 
+ #if defined(CONFIG_SUNRPC_BACKCHANNEL)
++static void call_bc_transmit(struct rpc_task *task);
++static void call_bc_transmit_status(struct rpc_task *task);
++
++static void
++call_bc_encode(struct rpc_task *task)
++{
++	xprt_request_enqueue_transmit(task);
++	task->tk_action = call_bc_transmit;
++}
++
+ /*
+  * 5b.	Send the backchannel RPC reply.  On error, drop the reply.  In
+  * addition, disconnect on connectivity errors.
+@@ -2058,26 +2084,23 @@ call_transmit_status(struct rpc_task *task)
+ static void
+ call_bc_transmit(struct rpc_task *task)
+ {
+-	struct rpc_rqst *req = task->tk_rqstp;
+-
+-	if (rpc_task_need_encode(task))
+-		xprt_request_enqueue_transmit(task);
+-	if (!test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate))
+-		goto out_wakeup;
+-
+-	if (!xprt_prepare_transmit(task))
+-		goto out_retry;
+-
+-	if (task->tk_status < 0) {
+-		printk(KERN_NOTICE "RPC: Could not send backchannel reply "
+-			"error: %d\n", task->tk_status);
+-		goto out_done;
++	task->tk_action = call_bc_transmit_status;
++	if (test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate)) {
++		if (!xprt_prepare_transmit(task))
++			return;
++		task->tk_status = 0;
++		xprt_transmit(task);
+ 	}
++	xprt_end_transmit(task);
++}
+ 
+-	xprt_transmit(task);
++static void
++call_bc_transmit_status(struct rpc_task *task)
++{
++	struct rpc_rqst *req = task->tk_rqstp;
+ 
+-	xprt_end_transmit(task);
+ 	dprint_status(task);
++
+ 	switch (task->tk_status) {
+ 	case 0:
+ 		/* Success */
+@@ -2091,8 +2114,14 @@ call_bc_transmit(struct rpc_task *task)
+ 	case -ENOTCONN:
+ 	case -EPIPE:
+ 		break;
++	case -ENOBUFS:
++		rpc_delay(task, HZ>>2);
++		/* fall through */
++	case -EBADSLT:
+ 	case -EAGAIN:
+-		goto out_retry;
++		task->tk_status = 0;
++		task->tk_action = call_bc_transmit;
++		return;
+ 	case -ETIMEDOUT:
+ 		/*
+ 		 * Problem reaching the server.  Disconnect and let the
+@@ -2111,18 +2140,11 @@ call_bc_transmit(struct rpc_task *task)
+ 		 * We were unable to reply and will have to drop the
+ 		 * request.  The server should reconnect and retransmit.
+ 		 */
+-		WARN_ON_ONCE(task->tk_status == -EAGAIN);
+ 		printk(KERN_NOTICE "RPC: Could not send backchannel reply "
+ 			"error: %d\n", task->tk_status);
+ 		break;
+ 	}
+-out_wakeup:
+-	rpc_wake_up_queued_task(&req->rq_xprt->pending, task);
+-out_done:
+ 	task->tk_action = rpc_exit_task;
+-	return;
+-out_retry:
+-	task->tk_status = 0;
+ }
+ #endif /* CONFIG_SUNRPC_BACKCHANNEL */
+ 
+@@ -2178,7 +2200,7 @@ call_status(struct rpc_task *task)
+ 	case -EPIPE:
+ 	case -ENOTCONN:
+ 	case -EAGAIN:
+-		task->tk_action = call_encode;
++		task->tk_action = call_timeout;
+ 		break;
+ 	case -EIO:
+ 		/* shutdown or soft timeout */
+@@ -2192,20 +2214,13 @@ call_status(struct rpc_task *task)
+ 	}
+ }
+ 
+-/*
+- * 6a.	Handle RPC timeout
+- * 	We do not release the request slot, so we keep using the
+- *	same XID for all retransmits.
+- */
+ static void
+-call_timeout(struct rpc_task *task)
++rpc_check_timeout(struct rpc_task *task)
+ {
+ 	struct rpc_clnt	*clnt = task->tk_client;
+ 
+-	if (xprt_adjust_timeout(task->tk_rqstp) == 0) {
+-		dprintk("RPC: %5u call_timeout (minor)\n", task->tk_pid);
+-		goto retry;
+-	}
++	if (xprt_adjust_timeout(task->tk_rqstp) == 0)
++		return;
+ 
+ 	dprintk("RPC: %5u call_timeout (major)\n", task->tk_pid);
+ 	task->tk_timeouts++;
+@@ -2241,10 +2256,19 @@ call_timeout(struct rpc_task *task)
+ 	 * event? RFC2203 requires the server to drop all such requests.
+ 	 */
+ 	rpcauth_invalcred(task);
++}
+ 
+-retry:
++/*
++ * 6a.	Handle RPC timeout
++ * 	We do not release the request slot, so we keep using the
++ *	same XID for all retransmits.
++ */
++static void
++call_timeout(struct rpc_task *task)
++{
+ 	task->tk_action = call_encode;
+ 	task->tk_status = 0;
++	rpc_check_timeout(task);
+ }
+ 
+ /*
+diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
+index a6a060925e5d..43590a968b73 100644
+--- a/net/sunrpc/svcsock.c
++++ b/net/sunrpc/svcsock.c
+@@ -349,12 +349,16 @@ static ssize_t svc_recvfrom(struct svc_rqst *rqstp, struct kvec *iov,
+ /*
+  * Set socket snd and rcv buffer lengths
+  */
+-static void svc_sock_setbufsize(struct socket *sock, unsigned int snd,
+-				unsigned int rcv)
++static void svc_sock_setbufsize(struct svc_sock *svsk, unsigned int nreqs)
+ {
++	unsigned int max_mesg = svsk->sk_xprt.xpt_server->sv_max_mesg;
++	struct socket *sock = svsk->sk_sock;
++
++	nreqs = min(nreqs, INT_MAX / 2 / max_mesg);
++
+ 	lock_sock(sock->sk);
+-	sock->sk->sk_sndbuf = snd * 2;
+-	sock->sk->sk_rcvbuf = rcv * 2;
++	sock->sk->sk_sndbuf = nreqs * max_mesg * 2;
++	sock->sk->sk_rcvbuf = nreqs * max_mesg * 2;
+ 	sock->sk->sk_write_space(sock->sk);
+ 	release_sock(sock->sk);
+ }
+@@ -516,9 +520,7 @@ static int svc_udp_recvfrom(struct svc_rqst *rqstp)
+ 	     * provides an upper bound on the number of threads
+ 	     * which will access the socket.
+ 	     */
+-	    svc_sock_setbufsize(svsk->sk_sock,
+-				(serv->sv_nrthreads+3) * serv->sv_max_mesg,
+-				(serv->sv_nrthreads+3) * serv->sv_max_mesg);
++	    svc_sock_setbufsize(svsk, serv->sv_nrthreads + 3);
+ 
+ 	clear_bit(XPT_DATA, &svsk->sk_xprt.xpt_flags);
+ 	skb = NULL;
+@@ -681,9 +683,7 @@ static void svc_udp_init(struct svc_sock *svsk, struct svc_serv *serv)
+ 	 * receive and respond to one request.
+ 	 * svc_udp_recvfrom will re-adjust if necessary
+ 	 */
+-	svc_sock_setbufsize(svsk->sk_sock,
+-			    3 * svsk->sk_xprt.xpt_server->sv_max_mesg,
+-			    3 * svsk->sk_xprt.xpt_server->sv_max_mesg);
++	svc_sock_setbufsize(svsk, 3);
+ 
+ 	/* data might have come in before data_ready set up */
+ 	set_bit(XPT_DATA, &svsk->sk_xprt.xpt_flags);
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index f0e36c3492ba..cf20dd36a30f 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -959,8 +959,11 @@ static int selinux_sb_clone_mnt_opts(const struct super_block *oldsb,
+ 	BUG_ON(!(oldsbsec->flags & SE_SBINITIALIZED));
+ 
+ 	/* if fs is reusing a sb, make sure that the contexts match */
+-	if (newsbsec->flags & SE_SBINITIALIZED)
++	if (newsbsec->flags & SE_SBINITIALIZED) {
++		if ((kern_flags & SECURITY_LSM_NATIVE_LABELS) && !set_context)
++			*set_kern_flags |= SECURITY_LSM_NATIVE_LABELS;
+ 		return selinux_cmp_sb_context(oldsb, newsb);
++	}
+ 
+ 	mutex_lock(&newsbsec->lock);
+ 
+@@ -5120,6 +5123,9 @@ static int selinux_sctp_bind_connect(struct sock *sk, int optname,
+ 			return -EINVAL;
+ 		}
+ 
++		if (walk_size + len > addrlen)
++			return -EINVAL;
++
+ 		err = -EINVAL;
+ 		switch (optname) {
+ 		/* Bind checks */
+diff --git a/sound/soc/codecs/pcm186x.c b/sound/soc/codecs/pcm186x.c
+index 809b7e9f03ca..c5fcc632f670 100644
+--- a/sound/soc/codecs/pcm186x.c
++++ b/sound/soc/codecs/pcm186x.c
+@@ -42,7 +42,7 @@ struct pcm186x_priv {
+ 	bool is_master_mode;
+ };
+ 
+-static const DECLARE_TLV_DB_SCALE(pcm186x_pga_tlv, -1200, 4000, 50);
++static const DECLARE_TLV_DB_SCALE(pcm186x_pga_tlv, -1200, 50, 0);
+ 
+ static const struct snd_kcontrol_new pcm1863_snd_controls[] = {
+ 	SOC_DOUBLE_R_S_TLV("ADC Capture Volume", PCM186X_PGA_VAL_CH1_L,
+@@ -158,7 +158,7 @@ static const struct snd_soc_dapm_widget pcm1863_dapm_widgets[] = {
+ 	 * Put the codec into SLEEP mode when not in use, allowing the
+ 	 * Energysense mechanism to operate.
+ 	 */
+-	SND_SOC_DAPM_ADC("ADC", "HiFi Capture", PCM186X_POWER_CTRL, 1,  0),
++	SND_SOC_DAPM_ADC("ADC", "HiFi Capture", PCM186X_POWER_CTRL, 1,  1),
+ };
+ 
+ static const struct snd_soc_dapm_widget pcm1865_dapm_widgets[] = {
+@@ -184,8 +184,8 @@ static const struct snd_soc_dapm_widget pcm1865_dapm_widgets[] = {
+ 	 * Put the codec into SLEEP mode when not in use, allowing the
+ 	 * Energysense mechanism to operate.
+ 	 */
+-	SND_SOC_DAPM_ADC("ADC1", "HiFi Capture 1", PCM186X_POWER_CTRL, 1,  0),
+-	SND_SOC_DAPM_ADC("ADC2", "HiFi Capture 2", PCM186X_POWER_CTRL, 1,  0),
++	SND_SOC_DAPM_ADC("ADC1", "HiFi Capture 1", PCM186X_POWER_CTRL, 1,  1),
++	SND_SOC_DAPM_ADC("ADC2", "HiFi Capture 2", PCM186X_POWER_CTRL, 1,  1),
+ };
+ 
+ static const struct snd_soc_dapm_route pcm1863_dapm_routes[] = {
+diff --git a/sound/soc/fsl/fsl_esai.c b/sound/soc/fsl/fsl_esai.c
+index 57b484768a58..afe67c865330 100644
+--- a/sound/soc/fsl/fsl_esai.c
++++ b/sound/soc/fsl/fsl_esai.c
+@@ -398,7 +398,8 @@ static int fsl_esai_set_dai_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ 		break;
+ 	case SND_SOC_DAIFMT_RIGHT_J:
+ 		/* Data on rising edge of bclk, frame high, right aligned */
+-		xccr |= ESAI_xCCR_xCKP | ESAI_xCCR_xHCKP | ESAI_xCR_xWA;
++		xccr |= ESAI_xCCR_xCKP | ESAI_xCCR_xHCKP;
++		xcr  |= ESAI_xCR_xWA;
+ 		break;
+ 	case SND_SOC_DAIFMT_DSP_A:
+ 		/* Data on rising edge of bclk, frame high, 1clk before data */
+@@ -455,12 +456,12 @@ static int fsl_esai_set_dai_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ 		return -EINVAL;
+ 	}
+ 
+-	mask = ESAI_xCR_xFSL | ESAI_xCR_xFSR;
++	mask = ESAI_xCR_xFSL | ESAI_xCR_xFSR | ESAI_xCR_xWA;
+ 	regmap_update_bits(esai_priv->regmap, REG_ESAI_TCR, mask, xcr);
+ 	regmap_update_bits(esai_priv->regmap, REG_ESAI_RCR, mask, xcr);
+ 
+ 	mask = ESAI_xCCR_xCKP | ESAI_xCCR_xHCKP | ESAI_xCCR_xFSP |
+-		ESAI_xCCR_xFSD | ESAI_xCCR_xCKD | ESAI_xCR_xWA;
++		ESAI_xCCR_xFSD | ESAI_xCCR_xCKD;
+ 	regmap_update_bits(esai_priv->regmap, REG_ESAI_TCCR, mask, xccr);
+ 	regmap_update_bits(esai_priv->regmap, REG_ESAI_RCCR, mask, xccr);
+ 
+diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c
+index f69961c4a4f3..2921ce08b198 100644
+--- a/tools/perf/util/auxtrace.c
++++ b/tools/perf/util/auxtrace.c
+@@ -1278,9 +1278,9 @@ static int __auxtrace_mmap__read(struct perf_mmap *map,
+ 	}
+ 
+ 	/* padding must be written by fn() e.g. record__process_auxtrace() */
+-	padding = size & 7;
++	padding = size & (PERF_AUXTRACE_RECORD_ALIGNMENT - 1);
+ 	if (padding)
+-		padding = 8 - padding;
++		padding = PERF_AUXTRACE_RECORD_ALIGNMENT - padding;
+ 
+ 	memset(&ev, 0, sizeof(ev));
+ 	ev.auxtrace.header.type = PERF_RECORD_AUXTRACE;
+diff --git a/tools/perf/util/auxtrace.h b/tools/perf/util/auxtrace.h
+index 8e50f96d4b23..fac32482db61 100644
+--- a/tools/perf/util/auxtrace.h
++++ b/tools/perf/util/auxtrace.h
+@@ -40,6 +40,9 @@ struct record_opts;
+ struct auxtrace_info_event;
+ struct events_stats;
+ 
++/* Auxtrace records must have the same alignment as perf event records */
++#define PERF_AUXTRACE_RECORD_ALIGNMENT 8
++
+ enum auxtrace_type {
+ 	PERF_AUXTRACE_UNKNOWN,
+ 	PERF_AUXTRACE_INTEL_PT,
+diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+index 4503f3ca45ab..a54d6c9a4601 100644
+--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
++++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+@@ -26,6 +26,7 @@
+ 
+ #include "../cache.h"
+ #include "../util.h"
++#include "../auxtrace.h"
+ 
+ #include "intel-pt-insn-decoder.h"
+ #include "intel-pt-pkt-decoder.h"
+@@ -1394,7 +1395,6 @@ static int intel_pt_overflow(struct intel_pt_decoder *decoder)
+ {
+ 	intel_pt_log("ERROR: Buffer overflow\n");
+ 	intel_pt_clear_tx_flags(decoder);
+-	decoder->cbr = 0;
+ 	decoder->timestamp_insn_cnt = 0;
+ 	decoder->pkt_state = INTEL_PT_STATE_ERR_RESYNC;
+ 	decoder->overflow = true;
+@@ -2575,6 +2575,34 @@ static int intel_pt_tsc_cmp(uint64_t tsc1, uint64_t tsc2)
+ 	}
+ }
+ 
++#define MAX_PADDING (PERF_AUXTRACE_RECORD_ALIGNMENT - 1)
++
++/**
++ * adj_for_padding - adjust overlap to account for padding.
++ * @buf_b: second buffer
++ * @buf_a: first buffer
++ * @len_a: size of first buffer
++ *
++ * @buf_a might have up to 7 bytes of padding appended. Adjust the overlap
++ * accordingly.
++ *
++ * Return: A pointer into @buf_b from where non-overlapped data starts
++ */
++static unsigned char *adj_for_padding(unsigned char *buf_b,
++				      unsigned char *buf_a, size_t len_a)
++{
++	unsigned char *p = buf_b - MAX_PADDING;
++	unsigned char *q = buf_a + len_a - MAX_PADDING;
++	int i;
++
++	for (i = MAX_PADDING; i; i--, p++, q++) {
++		if (*p != *q)
++			break;
++	}
++
++	return p;
++}
++
+ /**
+  * intel_pt_find_overlap_tsc - determine start of non-overlapped trace data
+  *                             using TSC.
+@@ -2625,8 +2653,11 @@ static unsigned char *intel_pt_find_overlap_tsc(unsigned char *buf_a,
+ 
+ 			/* Same TSC, so buffers are consecutive */
+ 			if (!cmp && rem_b >= rem_a) {
++				unsigned char *start;
++
+ 				*consecutive = true;
+-				return buf_b + len_b - (rem_b - rem_a);
++				start = buf_b + len_b - (rem_b - rem_a);
++				return adj_for_padding(start, buf_a, len_a);
+ 			}
+ 			if (cmp < 0)
+ 				return buf_b; /* tsc_a < tsc_b => no overlap */
+@@ -2689,7 +2720,7 @@ unsigned char *intel_pt_find_overlap(unsigned char *buf_a, size_t len_a,
+ 		found = memmem(buf_a, len_a, buf_b, len_a);
+ 		if (found) {
+ 			*consecutive = true;
+-			return buf_b + len_a;
++			return adj_for_padding(buf_b + len_a, buf_a, len_a);
+ 		}
+ 
+ 		/* Try again at next PSB in buffer 'a' */
+diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
+index 2e72373ec6df..4493fc13a6fa 100644
+--- a/tools/perf/util/intel-pt.c
++++ b/tools/perf/util/intel-pt.c
+@@ -2522,6 +2522,8 @@ int intel_pt_process_auxtrace_info(union perf_event *event,
+ 	}
+ 
+ 	pt->timeless_decoding = intel_pt_timeless_decoding(pt);
++	if (pt->timeless_decoding && !pt->tc.time_mult)
++		pt->tc.time_mult = 1;
+ 	pt->have_tsc = intel_pt_have_tsc(pt);
+ 	pt->sampling_mode = false;
+ 	pt->est_tsc = !pt->timeless_decoding;
+diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
+index 48efad6d0f90..ca5f2e4796ea 100644
+--- a/tools/perf/util/symbol.c
++++ b/tools/perf/util/symbol.c
+@@ -710,6 +710,8 @@ static int map_groups__split_kallsyms_for_kcore(struct map_groups *kmaps, struct
+ 		}
+ 
+ 		pos->start -= curr_map->start - curr_map->pgoff;
++		if (pos->end > curr_map->end)
++			pos->end = curr_map->end;
+ 		if (pos->end)
+ 			pos->end -= curr_map->start - curr_map->pgoff;
+ 		symbols__insert(&curr_map->dso->symbols, pos);
+diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
+index 30251e288629..5cc22cdaa5ba 100644
+--- a/virt/kvm/arm/mmu.c
++++ b/virt/kvm/arm/mmu.c
+@@ -2353,7 +2353,7 @@ int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
+ 	return 0;
+ }
+ 
+-void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots)
++void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen)
+ {
+ }
+ 
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 076bc38963bf..4e1024dbb73f 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -874,6 +874,7 @@ static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
+ 		int as_id, struct kvm_memslots *slots)
+ {
+ 	struct kvm_memslots *old_memslots = __kvm_memslots(kvm, as_id);
++	u64 gen;
+ 
+ 	/*
+ 	 * Set the low bit in the generation, which disables SPTE caching
+@@ -896,9 +897,11 @@ static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
+ 	 * space 0 will use generations 0, 4, 8, ... while * address space 1 will
+ 	 * use generations 2, 6, 10, 14, ...
+ 	 */
+-	slots->generation += KVM_ADDRESS_SPACE_NUM * 2 - 1;
++	gen = slots->generation + KVM_ADDRESS_SPACE_NUM * 2 - 1;
+ 
+-	kvm_arch_memslots_updated(kvm, slots);
++	kvm_arch_memslots_updated(kvm, gen);
++
++	slots->generation = gen;
+ 
+ 	return old_memslots;
+ }


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [gentoo-commits] proj/linux-patches:5.0 commit in: /
@ 2019-03-27 10:23 Mike Pagano
  0 siblings, 0 replies; 28+ messages in thread
From: Mike Pagano @ 2019-03-27 10:23 UTC (permalink / raw
  To: gentoo-commits

commit:     24cf9478a62681cd1a01f2b4b4954ad318dad479
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Mar 27 10:23:20 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Mar 27 10:23:20 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=24cf9478

Linux patch 5.0.5

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |    4 +
 1004_linux-5.0.5.patch | 2012 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2016 insertions(+)

diff --git a/0000_README b/0000_README
index 1974ef5..f452eee 100644
--- a/0000_README
+++ b/0000_README
@@ -59,6 +59,10 @@ Patch:  1003_linux-5.0.4.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.0.4
 
+Patch:  1004_linux-5.0.5.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.0.5
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1004_linux-5.0.5.patch b/1004_linux-5.0.5.patch
new file mode 100644
index 0000000..37a532b
--- /dev/null
+++ b/1004_linux-5.0.5.patch
@@ -0,0 +1,2012 @@
+diff --git a/Makefile b/Makefile
+index 06fda21614bc..63152c5ca136 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 0
+-SUBLEVEL = 4
++SUBLEVEL = 5
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/arch/mips/include/asm/jump_label.h b/arch/mips/include/asm/jump_label.h
+index e77672539e8e..e4456e450f94 100644
+--- a/arch/mips/include/asm/jump_label.h
++++ b/arch/mips/include/asm/jump_label.h
+@@ -21,15 +21,15 @@
+ #endif
+ 
+ #ifdef CONFIG_CPU_MICROMIPS
+-#define NOP_INSN "nop32"
++#define B_INSN "b32"
+ #else
+-#define NOP_INSN "nop"
++#define B_INSN "b"
+ #endif
+ 
+ static __always_inline bool arch_static_branch(struct static_key *key, bool branch)
+ {
+-	asm_volatile_goto("1:\t" NOP_INSN "\n\t"
+-		"nop\n\t"
++	asm_volatile_goto("1:\t" B_INSN " 2f\n\t"
++		"2:\tnop\n\t"
+ 		".pushsection __jump_table,  \"aw\"\n\t"
+ 		WORD_INSN " 1b, %l[l_yes], %0\n\t"
+ 		".popsection\n\t"
+diff --git a/arch/mips/kernel/vmlinux.lds.S b/arch/mips/kernel/vmlinux.lds.S
+index cb7e9ed7a453..33ee0d18fb0a 100644
+--- a/arch/mips/kernel/vmlinux.lds.S
++++ b/arch/mips/kernel/vmlinux.lds.S
+@@ -140,6 +140,13 @@ SECTIONS
+ 	PERCPU_SECTION(1 << CONFIG_MIPS_L1_CACHE_SHIFT)
+ #endif
+ 
++#ifdef CONFIG_MIPS_ELF_APPENDED_DTB
++	.appended_dtb : AT(ADDR(.appended_dtb) - LOAD_OFFSET) {
++		*(.appended_dtb)
++		KEEP(*(.appended_dtb))
++	}
++#endif
++
+ #ifdef CONFIG_RELOCATABLE
+ 	. = ALIGN(4);
+ 
+@@ -164,11 +171,6 @@ SECTIONS
+ 	__appended_dtb = .;
+ 	/* leave space for appended DTB */
+ 	. += 0x100000;
+-#elif defined(CONFIG_MIPS_ELF_APPENDED_DTB)
+-	.appended_dtb : AT(ADDR(.appended_dtb) - LOAD_OFFSET) {
+-		*(.appended_dtb)
+-		KEEP(*(.appended_dtb))
+-	}
+ #endif
+ 	/*
+ 	 * Align to 64K in attempt to eliminate holes before the
+diff --git a/arch/mips/loongson64/lemote-2f/irq.c b/arch/mips/loongson64/lemote-2f/irq.c
+index 9e33e45aa17c..b213cecb8e3a 100644
+--- a/arch/mips/loongson64/lemote-2f/irq.c
++++ b/arch/mips/loongson64/lemote-2f/irq.c
+@@ -103,7 +103,7 @@ static struct irqaction ip6_irqaction = {
+ static struct irqaction cascade_irqaction = {
+ 	.handler = no_action,
+ 	.name = "cascade",
+-	.flags = IRQF_NO_THREAD,
++	.flags = IRQF_NO_THREAD | IRQF_NO_SUSPEND,
+ };
+ 
+ void __init mach_init_irq(void)
+diff --git a/arch/powerpc/include/asm/vdso_datapage.h b/arch/powerpc/include/asm/vdso_datapage.h
+index 1afe90ade595..bbc06bd72b1f 100644
+--- a/arch/powerpc/include/asm/vdso_datapage.h
++++ b/arch/powerpc/include/asm/vdso_datapage.h
+@@ -82,10 +82,10 @@ struct vdso_data {
+ 	__u32 icache_block_size;		/* L1 i-cache block size     */
+ 	__u32 dcache_log_block_size;		/* L1 d-cache log block size */
+ 	__u32 icache_log_block_size;		/* L1 i-cache log block size */
+-	__s32 wtom_clock_sec;			/* Wall to monotonic clock */
+-	__s32 wtom_clock_nsec;
+-	struct timespec stamp_xtime;	/* xtime as at tb_orig_stamp */
+-	__u32 stamp_sec_fraction;	/* fractional seconds of stamp_xtime */
++	__u32 stamp_sec_fraction;		/* fractional seconds of stamp_xtime */
++	__s32 wtom_clock_nsec;			/* Wall to monotonic clock nsec */
++	__s64 wtom_clock_sec;			/* Wall to monotonic clock sec */
++	struct timespec stamp_xtime;		/* xtime as at tb_orig_stamp */
+    	__u32 syscall_map_64[SYSCALL_MAP_SIZE]; /* map of syscalls  */
+    	__u32 syscall_map_32[SYSCALL_MAP_SIZE]; /* map of syscalls */
+ };
+diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
+index 9b8631533e02..b33bafb8fcea 100644
+--- a/arch/powerpc/kernel/security.c
++++ b/arch/powerpc/kernel/security.c
+@@ -190,29 +190,22 @@ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, c
+ 	bcs = security_ftr_enabled(SEC_FTR_BCCTRL_SERIALISED);
+ 	ccd = security_ftr_enabled(SEC_FTR_COUNT_CACHE_DISABLED);
+ 
+-	if (bcs || ccd || count_cache_flush_type != COUNT_CACHE_FLUSH_NONE) {
+-		bool comma = false;
++	if (bcs || ccd) {
+ 		seq_buf_printf(&s, "Mitigation: ");
+ 
+-		if (bcs) {
++		if (bcs)
+ 			seq_buf_printf(&s, "Indirect branch serialisation (kernel only)");
+-			comma = true;
+-		}
+ 
+-		if (ccd) {
+-			if (comma)
+-				seq_buf_printf(&s, ", ");
+-			seq_buf_printf(&s, "Indirect branch cache disabled");
+-			comma = true;
+-		}
+-
+-		if (comma)
++		if (bcs && ccd)
+ 			seq_buf_printf(&s, ", ");
+ 
+-		seq_buf_printf(&s, "Software count cache flush");
++		if (ccd)
++			seq_buf_printf(&s, "Indirect branch cache disabled");
++	} else if (count_cache_flush_type != COUNT_CACHE_FLUSH_NONE) {
++		seq_buf_printf(&s, "Mitigation: Software count cache flush");
+ 
+ 		if (count_cache_flush_type == COUNT_CACHE_FLUSH_HW)
+-			seq_buf_printf(&s, "(hardware accelerated)");
++			seq_buf_printf(&s, " (hardware accelerated)");
+ 	} else if (btb_flush_enabled) {
+ 		seq_buf_printf(&s, "Mitigation: Branch predictor state flush");
+ 	} else {
+diff --git a/arch/powerpc/kernel/vdso64/gettimeofday.S b/arch/powerpc/kernel/vdso64/gettimeofday.S
+index a4ed9edfd5f0..1f324c28705b 100644
+--- a/arch/powerpc/kernel/vdso64/gettimeofday.S
++++ b/arch/powerpc/kernel/vdso64/gettimeofday.S
+@@ -92,7 +92,7 @@ V_FUNCTION_BEGIN(__kernel_clock_gettime)
+ 	 * At this point, r4,r5 contain our sec/nsec values.
+ 	 */
+ 
+-	lwa	r6,WTOM_CLOCK_SEC(r3)
++	ld	r6,WTOM_CLOCK_SEC(r3)
+ 	lwa	r9,WTOM_CLOCK_NSEC(r3)
+ 
+ 	/* We now have our result in r6,r9. We create a fake dependency
+@@ -125,7 +125,7 @@ V_FUNCTION_BEGIN(__kernel_clock_gettime)
+ 	bne     cr6,75f
+ 
+ 	/* CLOCK_MONOTONIC_COARSE */
+-	lwa     r6,WTOM_CLOCK_SEC(r3)
++	ld	r6,WTOM_CLOCK_SEC(r3)
+ 	lwa     r9,WTOM_CLOCK_NSEC(r3)
+ 
+ 	/* check if counter has updated */
+diff --git a/arch/x86/include/asm/unwind.h b/arch/x86/include/asm/unwind.h
+index 1f86e1b0a5cd..499578f7e6d7 100644
+--- a/arch/x86/include/asm/unwind.h
++++ b/arch/x86/include/asm/unwind.h
+@@ -23,6 +23,12 @@ struct unwind_state {
+ #elif defined(CONFIG_UNWINDER_FRAME_POINTER)
+ 	bool got_irq;
+ 	unsigned long *bp, *orig_sp, ip;
++	/*
++	 * If non-NULL: The current frame is incomplete and doesn't contain a
++	 * valid BP. When looking for the next frame, use this instead of the
++	 * non-existent saved BP.
++	 */
++	unsigned long *next_bp;
+ 	struct pt_regs *regs;
+ #else
+ 	unsigned long *sp;
+diff --git a/arch/x86/kernel/unwind_frame.c b/arch/x86/kernel/unwind_frame.c
+index 3dc26f95d46e..9b9fd4826e7a 100644
+--- a/arch/x86/kernel/unwind_frame.c
++++ b/arch/x86/kernel/unwind_frame.c
+@@ -320,10 +320,14 @@ bool unwind_next_frame(struct unwind_state *state)
+ 	}
+ 
+ 	/* Get the next frame pointer: */
+-	if (state->regs)
++	if (state->next_bp) {
++		next_bp = state->next_bp;
++		state->next_bp = NULL;
++	} else if (state->regs) {
+ 		next_bp = (unsigned long *)state->regs->bp;
+-	else
++	} else {
+ 		next_bp = (unsigned long *)READ_ONCE_TASK_STACK(state->task, *state->bp);
++	}
+ 
+ 	/* Move to the next frame if it's safe: */
+ 	if (!update_stack_state(state, next_bp))
+@@ -398,6 +402,21 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task,
+ 
+ 	bp = get_frame_pointer(task, regs);
+ 
++	/*
++	 * If we crash with IP==0, the last successfully executed instruction
++	 * was probably an indirect function call with a NULL function pointer.
++	 * That means that SP points into the middle of an incomplete frame:
++	 * *SP is a return pointer, and *(SP-sizeof(unsigned long)) is where we
++	 * would have written a frame pointer if we hadn't crashed.
++	 * Pretend that the frame is complete and that BP points to it, but save
++	 * the real BP so that we can use it when looking for the next frame.
++	 */
++	if (regs && regs->ip == 0 &&
++	    (unsigned long *)kernel_stack_pointer(regs) >= first_frame) {
++		state->next_bp = bp;
++		bp = ((unsigned long *)kernel_stack_pointer(regs)) - 1;
++	}
++
+ 	/* Initialize stack info and make sure the frame data is accessible: */
+ 	get_stack_info(bp, state->task, &state->stack_info,
+ 		       &state->stack_mask);
+@@ -410,7 +429,7 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task,
+ 	 */
+ 	while (!unwind_done(state) &&
+ 	       (!on_stack(&state->stack_info, first_frame, sizeof(long)) ||
+-			state->bp < first_frame))
++			(state->next_bp == NULL && state->bp < first_frame)))
+ 		unwind_next_frame(state);
+ }
+ EXPORT_SYMBOL_GPL(__unwind_start);
+diff --git a/arch/x86/kernel/unwind_orc.c b/arch/x86/kernel/unwind_orc.c
+index 26038eacf74a..89be1be1790c 100644
+--- a/arch/x86/kernel/unwind_orc.c
++++ b/arch/x86/kernel/unwind_orc.c
+@@ -113,6 +113,20 @@ static struct orc_entry *orc_ftrace_find(unsigned long ip)
+ }
+ #endif
+ 
++/*
++ * If we crash with IP==0, the last successfully executed instruction
++ * was probably an indirect function call with a NULL function pointer,
++ * and we don't have unwind information for NULL.
++ * This hardcoded ORC entry for IP==0 allows us to unwind from a NULL function
++ * pointer into its parent and then continue normally from there.
++ */
++static struct orc_entry null_orc_entry = {
++	.sp_offset = sizeof(long),
++	.sp_reg = ORC_REG_SP,
++	.bp_reg = ORC_REG_UNDEFINED,
++	.type = ORC_TYPE_CALL
++};
++
+ static struct orc_entry *orc_find(unsigned long ip)
+ {
+ 	static struct orc_entry *orc;
+@@ -120,6 +134,9 @@ static struct orc_entry *orc_find(unsigned long ip)
+ 	if (!orc_init)
+ 		return NULL;
+ 
++	if (ip == 0)
++		return &null_orc_entry;
++
+ 	/* For non-init vmlinux addresses, use the fast lookup table: */
+ 	if (ip >= LOOKUP_START_IP && ip < LOOKUP_STOP_IP) {
+ 		unsigned int idx, start, stop;
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index cf5538942834..2faefdd6f420 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -656,7 +656,7 @@ static int loop_validate_file(struct file *file, struct block_device *bdev)
+ 			return -EBADF;
+ 
+ 		l = f->f_mapping->host->i_bdev->bd_disk->private_data;
+-		if (l->lo_state == Lo_unbound) {
++		if (l->lo_state != Lo_bound) {
+ 			return -EINVAL;
+ 		}
+ 		f = l->lo_backing_file;
+diff --git a/drivers/bluetooth/h4_recv.h b/drivers/bluetooth/h4_recv.h
+index b432651f8236..307d82166f48 100644
+--- a/drivers/bluetooth/h4_recv.h
++++ b/drivers/bluetooth/h4_recv.h
+@@ -60,6 +60,10 @@ static inline struct sk_buff *h4_recv_buf(struct hci_dev *hdev,
+ 					  const struct h4_recv_pkt *pkts,
+ 					  int pkts_count)
+ {
++	/* Check for error from previous call */
++	if (IS_ERR(skb))
++		skb = NULL;
++
+ 	while (count) {
+ 		int i, len;
+ 
+diff --git a/drivers/bluetooth/hci_h4.c b/drivers/bluetooth/hci_h4.c
+index fb97a3bf069b..5d97d77627c1 100644
+--- a/drivers/bluetooth/hci_h4.c
++++ b/drivers/bluetooth/hci_h4.c
+@@ -174,6 +174,10 @@ struct sk_buff *h4_recv_buf(struct hci_dev *hdev, struct sk_buff *skb,
+ 	struct hci_uart *hu = hci_get_drvdata(hdev);
+ 	u8 alignment = hu->alignment ? hu->alignment : 1;
+ 
++	/* Check for error from previous call */
++	if (IS_ERR(skb))
++		skb = NULL;
++
+ 	while (count) {
+ 		int i, len;
+ 
+diff --git a/drivers/bluetooth/hci_ldisc.c b/drivers/bluetooth/hci_ldisc.c
+index fbf7b4df23ab..9562e72c1ae5 100644
+--- a/drivers/bluetooth/hci_ldisc.c
++++ b/drivers/bluetooth/hci_ldisc.c
+@@ -207,11 +207,11 @@ void hci_uart_init_work(struct work_struct *work)
+ 	err = hci_register_dev(hu->hdev);
+ 	if (err < 0) {
+ 		BT_ERR("Can't register HCI device");
++		clear_bit(HCI_UART_PROTO_READY, &hu->flags);
++		hu->proto->close(hu);
+ 		hdev = hu->hdev;
+ 		hu->hdev = NULL;
+ 		hci_free_dev(hdev);
+-		clear_bit(HCI_UART_PROTO_READY, &hu->flags);
+-		hu->proto->close(hu);
+ 		return;
+ 	}
+ 
+@@ -616,6 +616,7 @@ static void hci_uart_tty_receive(struct tty_struct *tty, const u8 *data,
+ static int hci_uart_register_dev(struct hci_uart *hu)
+ {
+ 	struct hci_dev *hdev;
++	int err;
+ 
+ 	BT_DBG("");
+ 
+@@ -659,11 +660,22 @@ static int hci_uart_register_dev(struct hci_uart *hu)
+ 	else
+ 		hdev->dev_type = HCI_PRIMARY;
+ 
++	/* Only call open() for the protocol after hdev is fully initialized as
++	 * open() (or a timer/workqueue it starts) may attempt to reference it.
++	 */
++	err = hu->proto->open(hu);
++	if (err) {
++		hu->hdev = NULL;
++		hci_free_dev(hdev);
++		return err;
++	}
++
+ 	if (test_bit(HCI_UART_INIT_PENDING, &hu->hdev_flags))
+ 		return 0;
+ 
+ 	if (hci_register_dev(hdev) < 0) {
+ 		BT_ERR("Can't register HCI device");
++		hu->proto->close(hu);
+ 		hu->hdev = NULL;
+ 		hci_free_dev(hdev);
+ 		return -ENODEV;
+@@ -683,20 +695,14 @@ static int hci_uart_set_proto(struct hci_uart *hu, int id)
+ 	if (!p)
+ 		return -EPROTONOSUPPORT;
+ 
+-	err = p->open(hu);
+-	if (err)
+-		return err;
+-
+ 	hu->proto = p;
+-	set_bit(HCI_UART_PROTO_READY, &hu->flags);
+ 
+ 	err = hci_uart_register_dev(hu);
+ 	if (err) {
+-		clear_bit(HCI_UART_PROTO_READY, &hu->flags);
+-		p->close(hu);
+ 		return err;
+ 	}
+ 
++	set_bit(HCI_UART_PROTO_READY, &hu->flags);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/clocksource/timer-riscv.c b/drivers/clocksource/timer-riscv.c
+index 431892200a08..ead71bfac689 100644
+--- a/drivers/clocksource/timer-riscv.c
++++ b/drivers/clocksource/timer-riscv.c
+@@ -58,7 +58,7 @@ static u64 riscv_sched_clock(void)
+ static DEFINE_PER_CPU(struct clocksource, riscv_clocksource) = {
+ 	.name		= "riscv_clocksource",
+ 	.rating		= 300,
+-	.mask		= CLOCKSOURCE_MASK(BITS_PER_LONG),
++	.mask		= CLOCKSOURCE_MASK(64),
+ 	.flags		= CLOCK_SOURCE_IS_CONTINUOUS,
+ 	.read		= riscv_clocksource_rdtime,
+ };
+@@ -103,8 +103,7 @@ static int __init riscv_timer_init_dt(struct device_node *n)
+ 	cs = per_cpu_ptr(&riscv_clocksource, cpuid);
+ 	clocksource_register_hz(cs, riscv_timebase);
+ 
+-	sched_clock_register(riscv_sched_clock,
+-			BITS_PER_LONG, riscv_timebase);
++	sched_clock_register(riscv_sched_clock, 64, riscv_timebase);
+ 
+ 	error = cpuhp_setup_state(CPUHP_AP_RISCV_TIMER_STARTING,
+ 			 "clockevents/riscv/timer:starting",
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+index bacdaef77b6c..278dd55ff476 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+@@ -738,7 +738,7 @@ static int gmc_v9_0_allocate_vm_inv_eng(struct amdgpu_device *adev)
+ 		}
+ 
+ 		ring->vm_inv_eng = inv_eng - 1;
+-		change_bit(inv_eng - 1, (unsigned long *)(&vm_inv_engs[vmhub]));
++		vm_inv_engs[vmhub] &= ~(1 << ring->vm_inv_eng);
+ 
+ 		dev_info(adev->dev, "ring %s uses VM inv eng %u on hub %u\n",
+ 			 ring->name, ring->vm_inv_eng, ring->funcs->vmhub);
+diff --git a/drivers/gpu/drm/vkms/vkms_crtc.c b/drivers/gpu/drm/vkms/vkms_crtc.c
+index eb56ee893761..e747a7d16739 100644
+--- a/drivers/gpu/drm/vkms/vkms_crtc.c
++++ b/drivers/gpu/drm/vkms/vkms_crtc.c
+@@ -98,6 +98,7 @@ static void vkms_atomic_crtc_reset(struct drm_crtc *crtc)
+ 	vkms_state = kzalloc(sizeof(*vkms_state), GFP_KERNEL);
+ 	if (!vkms_state)
+ 		return;
++	INIT_WORK(&vkms_state->crc_work, vkms_crc_work_handle);
+ 
+ 	crtc->state = &vkms_state->base;
+ 	crtc->state->crtc = crtc;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c b/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
+index b913a56f3426..2a9112515f46 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
+@@ -564,11 +564,9 @@ static int vmw_fb_set_par(struct fb_info *info)
+ 		0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ 		DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_PVSYNC)
+ 	};
+-	struct drm_display_mode *old_mode;
+ 	struct drm_display_mode *mode;
+ 	int ret;
+ 
+-	old_mode = par->set_mode;
+ 	mode = drm_mode_duplicate(vmw_priv->dev, &new_mode);
+ 	if (!mode) {
+ 		DRM_ERROR("Could not create new fb mode.\n");
+@@ -579,11 +577,7 @@ static int vmw_fb_set_par(struct fb_info *info)
+ 	mode->vdisplay = var->yres;
+ 	vmw_guess_mode_timing(mode);
+ 
+-	if (old_mode && drm_mode_equal(old_mode, mode)) {
+-		drm_mode_destroy(vmw_priv->dev, mode);
+-		mode = old_mode;
+-		old_mode = NULL;
+-	} else if (!vmw_kms_validate_mode_vram(vmw_priv,
++	if (!vmw_kms_validate_mode_vram(vmw_priv,
+ 					mode->hdisplay *
+ 					DIV_ROUND_UP(var->bits_per_pixel, 8),
+ 					mode->vdisplay)) {
+@@ -620,8 +614,8 @@ static int vmw_fb_set_par(struct fb_info *info)
+ 	schedule_delayed_work(&par->local_work, 0);
+ 
+ out_unlock:
+-	if (old_mode)
+-		drm_mode_destroy(vmw_priv->dev, old_mode);
++	if (par->set_mode)
++		drm_mode_destroy(vmw_priv->dev, par->set_mode);
+ 	par->set_mode = mode;
+ 
+ 	mutex_unlock(&par->bo_mutex);
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c b/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
+index b93c558dd86e..7da752ca1c34 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
+@@ -57,7 +57,7 @@ static int vmw_gmrid_man_get_node(struct ttm_mem_type_manager *man,
+ 
+ 	id = ida_alloc_max(&gman->gmr_ida, gman->max_gmr_ids - 1, GFP_KERNEL);
+ 	if (id < 0)
+-		return id;
++		return (id != -ENOMEM ? 0 : id);
+ 
+ 	spin_lock(&gman->lock);
+ 
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index 84f077b2b90a..81bded0d37d1 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -2966,13 +2966,22 @@ static void addr_handler(int status, struct sockaddr *src_addr,
+ {
+ 	struct rdma_id_private *id_priv = context;
+ 	struct rdma_cm_event event = {};
++	struct sockaddr *addr;
++	struct sockaddr_storage old_addr;
+ 
+ 	mutex_lock(&id_priv->handler_mutex);
+ 	if (!cma_comp_exch(id_priv, RDMA_CM_ADDR_QUERY,
+ 			   RDMA_CM_ADDR_RESOLVED))
+ 		goto out;
+ 
+-	memcpy(cma_src_addr(id_priv), src_addr, rdma_addr_size(src_addr));
++	/*
++	 * Store the previous src address, so that if we fail to acquire
++	 * matching rdma device, old address can be restored back, which helps
++	 * to cancel the cma listen operation correctly.
++	 */
++	addr = cma_src_addr(id_priv);
++	memcpy(&old_addr, addr, rdma_addr_size(addr));
++	memcpy(addr, src_addr, rdma_addr_size(src_addr));
+ 	if (!status && !id_priv->cma_dev) {
+ 		status = cma_acquire_dev_by_src_ip(id_priv);
+ 		if (status)
+@@ -2983,6 +2992,8 @@ static void addr_handler(int status, struct sockaddr *src_addr,
+ 	}
+ 
+ 	if (status) {
++		memcpy(addr, &old_addr,
++		       rdma_addr_size((struct sockaddr *)&old_addr));
+ 		if (!cma_comp_exch(id_priv, RDMA_CM_ADDR_RESOLVED,
+ 				   RDMA_CM_ADDR_BOUND))
+ 			goto out;
+diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
+index 2a7b78bb98b4..e628ef23418f 100644
+--- a/drivers/iommu/amd_iommu.c
++++ b/drivers/iommu/amd_iommu.c
+@@ -2605,7 +2605,12 @@ static int map_sg(struct device *dev, struct scatterlist *sglist,
+ 
+ 	/* Everything is mapped - write the right values into s->dma_address */
+ 	for_each_sg(sglist, s, nelems, i) {
+-		s->dma_address += address + s->offset;
++		/*
++		 * Add in the remaining piece of the scatter-gather offset that
++		 * was masked out when we were determining the physical address
++		 * via (sg_phys(s) & PAGE_MASK) earlier.
++		 */
++		s->dma_address += address + (s->offset & ~PAGE_MASK);
+ 		s->dma_length   = s->length;
+ 	}
+ 
+diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
+index f8d3ba247523..2de8122e218f 100644
+--- a/drivers/iommu/iova.c
++++ b/drivers/iommu/iova.c
+@@ -207,8 +207,10 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
+ 		curr_iova = rb_entry(curr, struct iova, node);
+ 	} while (curr && new_pfn <= curr_iova->pfn_hi);
+ 
+-	if (limit_pfn < size || new_pfn < iovad->start_pfn)
++	if (limit_pfn < size || new_pfn < iovad->start_pfn) {
++		iovad->max32_alloc_size = size;
+ 		goto iova32_full;
++	}
+ 
+ 	/* pfn_lo will point to size aligned address if size_aligned is set */
+ 	new->pfn_lo = new_pfn;
+@@ -222,7 +224,6 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
+ 	return 0;
+ 
+ iova32_full:
+-	iovad->max32_alloc_size = size;
+ 	spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
+ 	return -ENOMEM;
+ }
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index f867d41b0aa1..93e32a59640c 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -1482,7 +1482,7 @@ static int lpi_range_cmp(void *priv, struct list_head *a, struct list_head *b)
+ 	ra = container_of(a, struct lpi_range, entry);
+ 	rb = container_of(b, struct lpi_range, entry);
+ 
+-	return rb->base_id - ra->base_id;
++	return ra->base_id - rb->base_id;
+ }
+ 
+ static void merge_lpi_ranges(void)
+diff --git a/drivers/media/usb/uvc/uvc_ctrl.c b/drivers/media/usb/uvc/uvc_ctrl.c
+index d45415cbe6e7..14cff91b7aea 100644
+--- a/drivers/media/usb/uvc/uvc_ctrl.c
++++ b/drivers/media/usb/uvc/uvc_ctrl.c
+@@ -1212,7 +1212,7 @@ static void uvc_ctrl_fill_event(struct uvc_video_chain *chain,
+ 
+ 	__uvc_query_v4l2_ctrl(chain, ctrl, mapping, &v4l2_ctrl);
+ 
+-	memset(ev->reserved, 0, sizeof(ev->reserved));
++	memset(ev, 0, sizeof(*ev));
+ 	ev->type = V4L2_EVENT_CTRL;
+ 	ev->id = v4l2_ctrl.id;
+ 	ev->u.ctrl.value = value;
+diff --git a/drivers/media/v4l2-core/v4l2-ctrls.c b/drivers/media/v4l2-core/v4l2-ctrls.c
+index 5e3806feb5d7..8a82427c4d54 100644
+--- a/drivers/media/v4l2-core/v4l2-ctrls.c
++++ b/drivers/media/v4l2-core/v4l2-ctrls.c
+@@ -1387,7 +1387,7 @@ static u32 user_flags(const struct v4l2_ctrl *ctrl)
+ 
+ static void fill_event(struct v4l2_event *ev, struct v4l2_ctrl *ctrl, u32 changes)
+ {
+-	memset(ev->reserved, 0, sizeof(ev->reserved));
++	memset(ev, 0, sizeof(*ev));
+ 	ev->type = V4L2_EVENT_CTRL;
+ 	ev->id = ctrl->id;
+ 	ev->u.ctrl.changes = changes;
+diff --git a/drivers/mmc/host/alcor.c b/drivers/mmc/host/alcor.c
+index c712b7deb3a9..82a97866e0cf 100644
+--- a/drivers/mmc/host/alcor.c
++++ b/drivers/mmc/host/alcor.c
+@@ -1044,14 +1044,27 @@ static void alcor_init_mmc(struct alcor_sdmmc_host *host)
+ 	mmc->caps2 = MMC_CAP2_NO_SDIO;
+ 	mmc->ops = &alcor_sdc_ops;
+ 
+-	/* Hardware cannot do scatter lists */
++	/* The hardware does DMA data transfer of 4096 bytes to/from a single
++	 * buffer address. Scatterlists are not supported, but upon DMA
++	 * completion (signalled via IRQ), the original vendor driver does
++	 * then immediately set up another DMA transfer of the next 4096
++	 * bytes.
++	 *
++	 * This means that we need to handle the I/O in 4096 byte chunks.
++	 * Lacking a way to limit the sglist entries to 4096 bytes, we instead
++	 * impose that only one segment is provided, with maximum size 4096,
++	 * which also happens to be the minimum size. This means that the
++	 * single-entry sglist handled by this driver can be handed directly
++	 * to the hardware, nice and simple.
++	 *
++	 * Unfortunately though, that means we only do 4096 bytes I/O per
++	 * MMC command. A future improvement would be to make the driver
++	 * accept sg lists and entries of any size, and simply iterate
++	 * through them 4096 bytes at a time.
++	 */
+ 	mmc->max_segs = AU6601_MAX_DMA_SEGMENTS;
+ 	mmc->max_seg_size = AU6601_MAX_DMA_BLOCK_SIZE;
+-
+-	mmc->max_blk_size = mmc->max_seg_size;
+-	mmc->max_blk_count = mmc->max_segs;
+-
+-	mmc->max_req_size = mmc->max_seg_size * mmc->max_segs;
++	mmc->max_req_size = mmc->max_seg_size;
+ }
+ 
+ static int alcor_pci_sdmmc_drv_probe(struct platform_device *pdev)
+diff --git a/drivers/mmc/host/mxcmmc.c b/drivers/mmc/host/mxcmmc.c
+index 4d17032d15ee..7b530e5a86da 100644
+--- a/drivers/mmc/host/mxcmmc.c
++++ b/drivers/mmc/host/mxcmmc.c
+@@ -292,11 +292,8 @@ static void mxcmci_swap_buffers(struct mmc_data *data)
+ 	struct scatterlist *sg;
+ 	int i;
+ 
+-	for_each_sg(data->sg, sg, data->sg_len, i) {
+-		void *buf = kmap_atomic(sg_page(sg) + sg->offset);
+-		buffer_swap32(buf, sg->length);
+-		kunmap_atomic(buf);
+-	}
++	for_each_sg(data->sg, sg, data->sg_len, i)
++		buffer_swap32(sg_virt(sg), sg->length);
+ }
+ #else
+ static inline void mxcmci_swap_buffers(struct mmc_data *data) {}
+@@ -613,7 +610,6 @@ static int mxcmci_transfer_data(struct mxcmci_host *host)
+ {
+ 	struct mmc_data *data = host->req->data;
+ 	struct scatterlist *sg;
+-	void *buf;
+ 	int stat, i;
+ 
+ 	host->data = data;
+@@ -621,18 +617,14 @@ static int mxcmci_transfer_data(struct mxcmci_host *host)
+ 
+ 	if (data->flags & MMC_DATA_READ) {
+ 		for_each_sg(data->sg, sg, data->sg_len, i) {
+-			buf = kmap_atomic(sg_page(sg) + sg->offset);
+-			stat = mxcmci_pull(host, buf, sg->length);
+-			kunmap(buf);
++			stat = mxcmci_pull(host, sg_virt(sg), sg->length);
+ 			if (stat)
+ 				return stat;
+ 			host->datasize += sg->length;
+ 		}
+ 	} else {
+ 		for_each_sg(data->sg, sg, data->sg_len, i) {
+-			buf = kmap_atomic(sg_page(sg) + sg->offset);
+-			stat = mxcmci_push(host, buf, sg->length);
+-			kunmap(buf);
++			stat = mxcmci_push(host, sg_virt(sg), sg->length);
+ 			if (stat)
+ 				return stat;
+ 			host->datasize += sg->length;
+diff --git a/drivers/mmc/host/pxamci.c b/drivers/mmc/host/pxamci.c
+index 8779bbaa6b69..194a81888792 100644
+--- a/drivers/mmc/host/pxamci.c
++++ b/drivers/mmc/host/pxamci.c
+@@ -162,7 +162,7 @@ static void pxamci_dma_irq(void *param);
+ static void pxamci_setup_data(struct pxamci_host *host, struct mmc_data *data)
+ {
+ 	struct dma_async_tx_descriptor *tx;
+-	enum dma_data_direction direction;
++	enum dma_transfer_direction direction;
+ 	struct dma_slave_config	config;
+ 	struct dma_chan *chan;
+ 	unsigned int nob = data->blocks;
+diff --git a/drivers/mmc/host/renesas_sdhi_core.c b/drivers/mmc/host/renesas_sdhi_core.c
+index 7e2a75c4f36f..d9be22b310e6 100644
+--- a/drivers/mmc/host/renesas_sdhi_core.c
++++ b/drivers/mmc/host/renesas_sdhi_core.c
+@@ -634,6 +634,7 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+ 	struct renesas_sdhi *priv;
+ 	struct resource *res;
+ 	int irq, ret, i;
++	u16 ver;
+ 
+ 	of_data = of_device_get_match_data(&pdev->dev);
+ 
+@@ -766,12 +767,17 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+ 	if (ret)
+ 		goto efree;
+ 
++	ver = sd_ctrl_read16(host, CTL_VERSION);
++	/* GEN2_SDR104 is first known SDHI to use 32bit block count */
++	if (ver < SDHI_VER_GEN2_SDR104 && mmc_data->max_blk_count > U16_MAX)
++		mmc_data->max_blk_count = U16_MAX;
++
+ 	ret = tmio_mmc_host_probe(host);
+ 	if (ret < 0)
+ 		goto edisclk;
+ 
+ 	/* One Gen2 SDHI incarnation does NOT have a CBSY bit */
+-	if (sd_ctrl_read16(host, CTL_VERSION) == SDHI_VER_GEN2_SDR50)
++	if (ver == SDHI_VER_GEN2_SDR50)
+ 		mmc_data->flags &= ~TMIO_MMC_HAVE_CBSY;
+ 
+ 	/* Enable tuning iff we have an SCC and a supported mode */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/qp.c b/drivers/net/ethernet/mellanox/mlx5/core/qp.c
+index 370ca94b6775..c7c2920c05c4 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/qp.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/qp.c
+@@ -40,6 +40,9 @@
+ #include "mlx5_core.h"
+ #include "lib/eq.h"
+ 
++static int mlx5_core_drain_dct(struct mlx5_core_dev *dev,
++			       struct mlx5_core_dct *dct);
++
+ static struct mlx5_core_rsc_common *
+ mlx5_get_rsc(struct mlx5_qp_table *table, u32 rsn)
+ {
+@@ -227,13 +230,42 @@ static void destroy_resource_common(struct mlx5_core_dev *dev,
+ 	wait_for_completion(&qp->common.free);
+ }
+ 
++static int _mlx5_core_destroy_dct(struct mlx5_core_dev *dev,
++				  struct mlx5_core_dct *dct, bool need_cleanup)
++{
++	u32 out[MLX5_ST_SZ_DW(destroy_dct_out)] = {0};
++	u32 in[MLX5_ST_SZ_DW(destroy_dct_in)]   = {0};
++	struct mlx5_core_qp *qp = &dct->mqp;
++	int err;
++
++	err = mlx5_core_drain_dct(dev, dct);
++	if (err) {
++		if (dev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR) {
++			goto destroy;
++		} else {
++			mlx5_core_warn(
++				dev, "failed drain DCT 0x%x with error 0x%x\n",
++				qp->qpn, err);
++			return err;
++		}
++	}
++	wait_for_completion(&dct->drained);
++destroy:
++	if (need_cleanup)
++		destroy_resource_common(dev, &dct->mqp);
++	MLX5_SET(destroy_dct_in, in, opcode, MLX5_CMD_OP_DESTROY_DCT);
++	MLX5_SET(destroy_dct_in, in, dctn, qp->qpn);
++	MLX5_SET(destroy_dct_in, in, uid, qp->uid);
++	err = mlx5_cmd_exec(dev, (void *)&in, sizeof(in),
++			    (void *)&out, sizeof(out));
++	return err;
++}
++
+ int mlx5_core_create_dct(struct mlx5_core_dev *dev,
+ 			 struct mlx5_core_dct *dct,
+ 			 u32 *in, int inlen)
+ {
+ 	u32 out[MLX5_ST_SZ_DW(create_dct_out)]   = {0};
+-	u32 din[MLX5_ST_SZ_DW(destroy_dct_in)]   = {0};
+-	u32 dout[MLX5_ST_SZ_DW(destroy_dct_out)] = {0};
+ 	struct mlx5_core_qp *qp = &dct->mqp;
+ 	int err;
+ 
+@@ -254,11 +286,7 @@ int mlx5_core_create_dct(struct mlx5_core_dev *dev,
+ 
+ 	return 0;
+ err_cmd:
+-	MLX5_SET(destroy_dct_in, din, opcode, MLX5_CMD_OP_DESTROY_DCT);
+-	MLX5_SET(destroy_dct_in, din, dctn, qp->qpn);
+-	MLX5_SET(destroy_dct_in, din, uid, qp->uid);
+-	mlx5_cmd_exec(dev, (void *)&in, sizeof(din),
+-		      (void *)&out, sizeof(dout));
++	_mlx5_core_destroy_dct(dev, dct, false);
+ 	return err;
+ }
+ EXPORT_SYMBOL_GPL(mlx5_core_create_dct);
+@@ -323,29 +351,7 @@ static int mlx5_core_drain_dct(struct mlx5_core_dev *dev,
+ int mlx5_core_destroy_dct(struct mlx5_core_dev *dev,
+ 			  struct mlx5_core_dct *dct)
+ {
+-	u32 out[MLX5_ST_SZ_DW(destroy_dct_out)] = {0};
+-	u32 in[MLX5_ST_SZ_DW(destroy_dct_in)]   = {0};
+-	struct mlx5_core_qp *qp = &dct->mqp;
+-	int err;
+-
+-	err = mlx5_core_drain_dct(dev, dct);
+-	if (err) {
+-		if (dev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR) {
+-			goto destroy;
+-		} else {
+-			mlx5_core_warn(dev, "failed drain DCT 0x%x with error 0x%x\n", qp->qpn, err);
+-			return err;
+-		}
+-	}
+-	wait_for_completion(&dct->drained);
+-destroy:
+-	destroy_resource_common(dev, &dct->mqp);
+-	MLX5_SET(destroy_dct_in, in, opcode, MLX5_CMD_OP_DESTROY_DCT);
+-	MLX5_SET(destroy_dct_in, in, dctn, qp->qpn);
+-	MLX5_SET(destroy_dct_in, in, uid, qp->uid);
+-	err = mlx5_cmd_exec(dev, (void *)&in, sizeof(in),
+-			    (void *)&out, sizeof(out));
+-	return err;
++	return _mlx5_core_destroy_dct(dev, dct, true);
+ }
+ EXPORT_SYMBOL_GPL(mlx5_core_destroy_dct);
+ 
+diff --git a/drivers/scsi/ibmvscsi/ibmvscsi.c b/drivers/scsi/ibmvscsi/ibmvscsi.c
+index 1135e74646e2..8cec5230fe31 100644
+--- a/drivers/scsi/ibmvscsi/ibmvscsi.c
++++ b/drivers/scsi/ibmvscsi/ibmvscsi.c
+@@ -96,6 +96,7 @@ static int client_reserve = 1;
+ static char partition_name[96] = "UNKNOWN";
+ static unsigned int partition_number = -1;
+ static LIST_HEAD(ibmvscsi_head);
++static DEFINE_SPINLOCK(ibmvscsi_driver_lock);
+ 
+ static struct scsi_transport_template *ibmvscsi_transport_template;
+ 
+@@ -2270,7 +2271,9 @@ static int ibmvscsi_probe(struct vio_dev *vdev, const struct vio_device_id *id)
+ 	}
+ 
+ 	dev_set_drvdata(&vdev->dev, hostdata);
++	spin_lock(&ibmvscsi_driver_lock);
+ 	list_add_tail(&hostdata->host_list, &ibmvscsi_head);
++	spin_unlock(&ibmvscsi_driver_lock);
+ 	return 0;
+ 
+       add_srp_port_failed:
+@@ -2292,15 +2295,27 @@ static int ibmvscsi_probe(struct vio_dev *vdev, const struct vio_device_id *id)
+ static int ibmvscsi_remove(struct vio_dev *vdev)
+ {
+ 	struct ibmvscsi_host_data *hostdata = dev_get_drvdata(&vdev->dev);
+-	list_del(&hostdata->host_list);
+-	unmap_persist_bufs(hostdata);
++	unsigned long flags;
++
++	srp_remove_host(hostdata->host);
++	scsi_remove_host(hostdata->host);
++
++	purge_requests(hostdata, DID_ERROR);
++
++	spin_lock_irqsave(hostdata->host->host_lock, flags);
+ 	release_event_pool(&hostdata->pool, hostdata);
++	spin_unlock_irqrestore(hostdata->host->host_lock, flags);
++
+ 	ibmvscsi_release_crq_queue(&hostdata->queue, hostdata,
+ 					max_events);
+ 
+ 	kthread_stop(hostdata->work_thread);
+-	srp_remove_host(hostdata->host);
+-	scsi_remove_host(hostdata->host);
++	unmap_persist_bufs(hostdata);
++
++	spin_lock(&ibmvscsi_driver_lock);
++	list_del(&hostdata->host_list);
++	spin_unlock(&ibmvscsi_driver_lock);
++
+ 	scsi_host_put(hostdata->host);
+ 
+ 	return 0;
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index f44e640229e7..7f8946844a5e 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -4968,6 +4968,13 @@ qla2x00_configure_local_loop(scsi_qla_host_t *vha)
+ 		    (area != vha->d_id.b.area || domain != vha->d_id.b.domain))
+ 			continue;
+ 
++		/* Bypass if not same domain and area of adapter. */
++		if (area && domain && ((area != vha->d_id.b.area) ||
++		    (domain != vha->d_id.b.domain)) &&
++		    (ha->current_topology == ISP_CFG_NL))
++			continue;
++
++
+ 		/* Bypass invalid local loop ID. */
+ 		if (loop_id > LAST_LOCAL_LOOP_ID)
+ 			continue;
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index a6828391d6b3..5a6e8e12701a 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -2598,8 +2598,10 @@ void scsi_device_resume(struct scsi_device *sdev)
+ 	 * device deleted during suspend)
+ 	 */
+ 	mutex_lock(&sdev->state_mutex);
+-	sdev->quiesced_by = NULL;
+-	blk_clear_pm_only(sdev->request_queue);
++	if (sdev->quiesced_by) {
++		sdev->quiesced_by = NULL;
++		blk_clear_pm_only(sdev->request_queue);
++	}
+ 	if (sdev->sdev_state == SDEV_QUIESCE)
+ 		scsi_device_set_state(sdev, SDEV_RUNNING);
+ 	mutex_unlock(&sdev->state_mutex);
+diff --git a/fs/aio.c b/fs/aio.c
+index 528d03680526..3d9669d011b9 100644
+--- a/fs/aio.c
++++ b/fs/aio.c
+@@ -167,9 +167,13 @@ struct kioctx {
+ 	unsigned		id;
+ };
+ 
++/*
++ * First field must be the file pointer in all the
++ * iocb unions! See also 'struct kiocb' in <linux/fs.h>
++ */
+ struct fsync_iocb {
+-	struct work_struct	work;
+ 	struct file		*file;
++	struct work_struct	work;
+ 	bool			datasync;
+ };
+ 
+@@ -183,8 +187,15 @@ struct poll_iocb {
+ 	struct work_struct	work;
+ };
+ 
++/*
++ * NOTE! Each of the iocb union members has the file pointer
++ * as the first entry in their struct definition. So you can
++ * access the file pointer through any of the sub-structs,
++ * or directly as just 'ki_filp' in this struct.
++ */
+ struct aio_kiocb {
+ 	union {
++		struct file		*ki_filp;
+ 		struct kiocb		rw;
+ 		struct fsync_iocb	fsync;
+ 		struct poll_iocb	poll;
+@@ -1060,6 +1071,8 @@ static inline void iocb_put(struct aio_kiocb *iocb)
+ {
+ 	if (refcount_read(&iocb->ki_refcnt) == 0 ||
+ 	    refcount_dec_and_test(&iocb->ki_refcnt)) {
++		if (iocb->ki_filp)
++			fput(iocb->ki_filp);
+ 		percpu_ref_put(&iocb->ki_ctx->reqs);
+ 		kmem_cache_free(kiocb_cachep, iocb);
+ 	}
+@@ -1424,7 +1437,6 @@ static void aio_complete_rw(struct kiocb *kiocb, long res, long res2)
+ 		file_end_write(kiocb->ki_filp);
+ 	}
+ 
+-	fput(kiocb->ki_filp);
+ 	aio_complete(iocb, res, res2);
+ }
+ 
+@@ -1432,9 +1444,6 @@ static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb)
+ {
+ 	int ret;
+ 
+-	req->ki_filp = fget(iocb->aio_fildes);
+-	if (unlikely(!req->ki_filp))
+-		return -EBADF;
+ 	req->ki_complete = aio_complete_rw;
+ 	req->private = NULL;
+ 	req->ki_pos = iocb->aio_offset;
+@@ -1451,7 +1460,7 @@ static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb)
+ 		ret = ioprio_check_cap(iocb->aio_reqprio);
+ 		if (ret) {
+ 			pr_debug("aio ioprio check cap error: %d\n", ret);
+-			goto out_fput;
++			return ret;
+ 		}
+ 
+ 		req->ki_ioprio = iocb->aio_reqprio;
+@@ -1460,14 +1469,10 @@ static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb)
+ 
+ 	ret = kiocb_set_rw_flags(req, iocb->aio_rw_flags);
+ 	if (unlikely(ret))
+-		goto out_fput;
++		return ret;
+ 
+ 	req->ki_flags &= ~IOCB_HIPRI; /* no one is going to poll for this I/O */
+ 	return 0;
+-
+-out_fput:
+-	fput(req->ki_filp);
+-	return ret;
+ }
+ 
+ static int aio_setup_rw(int rw, const struct iocb *iocb, struct iovec **iovec,
+@@ -1521,24 +1526,19 @@ static ssize_t aio_read(struct kiocb *req, const struct iocb *iocb,
+ 	if (ret)
+ 		return ret;
+ 	file = req->ki_filp;
+-
+-	ret = -EBADF;
+ 	if (unlikely(!(file->f_mode & FMODE_READ)))
+-		goto out_fput;
++		return -EBADF;
+ 	ret = -EINVAL;
+ 	if (unlikely(!file->f_op->read_iter))
+-		goto out_fput;
++		return -EINVAL;
+ 
+ 	ret = aio_setup_rw(READ, iocb, &iovec, vectored, compat, &iter);
+ 	if (ret)
+-		goto out_fput;
++		return ret;
+ 	ret = rw_verify_area(READ, file, &req->ki_pos, iov_iter_count(&iter));
+ 	if (!ret)
+ 		aio_rw_done(req, call_read_iter(file, req, &iter));
+ 	kfree(iovec);
+-out_fput:
+-	if (unlikely(ret))
+-		fput(file);
+ 	return ret;
+ }
+ 
+@@ -1555,16 +1555,14 @@ static ssize_t aio_write(struct kiocb *req, const struct iocb *iocb,
+ 		return ret;
+ 	file = req->ki_filp;
+ 
+-	ret = -EBADF;
+ 	if (unlikely(!(file->f_mode & FMODE_WRITE)))
+-		goto out_fput;
+-	ret = -EINVAL;
++		return -EBADF;
+ 	if (unlikely(!file->f_op->write_iter))
+-		goto out_fput;
++		return -EINVAL;
+ 
+ 	ret = aio_setup_rw(WRITE, iocb, &iovec, vectored, compat, &iter);
+ 	if (ret)
+-		goto out_fput;
++		return ret;
+ 	ret = rw_verify_area(WRITE, file, &req->ki_pos, iov_iter_count(&iter));
+ 	if (!ret) {
+ 		/*
+@@ -1582,9 +1580,6 @@ static ssize_t aio_write(struct kiocb *req, const struct iocb *iocb,
+ 		aio_rw_done(req, call_write_iter(file, req, &iter));
+ 	}
+ 	kfree(iovec);
+-out_fput:
+-	if (unlikely(ret))
+-		fput(file);
+ 	return ret;
+ }
+ 
+@@ -1594,7 +1589,6 @@ static void aio_fsync_work(struct work_struct *work)
+ 	int ret;
+ 
+ 	ret = vfs_fsync(req->file, req->datasync);
+-	fput(req->file);
+ 	aio_complete(container_of(req, struct aio_kiocb, fsync), ret, 0);
+ }
+ 
+@@ -1605,13 +1599,8 @@ static int aio_fsync(struct fsync_iocb *req, const struct iocb *iocb,
+ 			iocb->aio_rw_flags))
+ 		return -EINVAL;
+ 
+-	req->file = fget(iocb->aio_fildes);
+-	if (unlikely(!req->file))
+-		return -EBADF;
+-	if (unlikely(!req->file->f_op->fsync)) {
+-		fput(req->file);
++	if (unlikely(!req->file->f_op->fsync))
+ 		return -EINVAL;
+-	}
+ 
+ 	req->datasync = datasync;
+ 	INIT_WORK(&req->work, aio_fsync_work);
+@@ -1621,10 +1610,7 @@ static int aio_fsync(struct fsync_iocb *req, const struct iocb *iocb,
+ 
+ static inline void aio_poll_complete(struct aio_kiocb *iocb, __poll_t mask)
+ {
+-	struct file *file = iocb->poll.file;
+-
+ 	aio_complete(iocb, mangle_poll(mask), 0);
+-	fput(file);
+ }
+ 
+ static void aio_poll_complete_work(struct work_struct *work)
+@@ -1749,9 +1735,6 @@ static ssize_t aio_poll(struct aio_kiocb *aiocb, const struct iocb *iocb)
+ 
+ 	INIT_WORK(&req->work, aio_poll_complete_work);
+ 	req->events = demangle_poll(iocb->aio_buf) | EPOLLERR | EPOLLHUP;
+-	req->file = fget(iocb->aio_fildes);
+-	if (unlikely(!req->file))
+-		return -EBADF;
+ 
+ 	req->head = NULL;
+ 	req->woken = false;
+@@ -1794,10 +1777,8 @@ static ssize_t aio_poll(struct aio_kiocb *aiocb, const struct iocb *iocb)
+ 	spin_unlock_irq(&ctx->ctx_lock);
+ 
+ out:
+-	if (unlikely(apt.error)) {
+-		fput(req->file);
++	if (unlikely(apt.error))
+ 		return apt.error;
+-	}
+ 
+ 	if (mask)
+ 		aio_poll_complete(aiocb, mask);
+@@ -1835,6 +1816,11 @@ static int __io_submit_one(struct kioctx *ctx, const struct iocb *iocb,
+ 	if (unlikely(!req))
+ 		goto out_put_reqs_available;
+ 
++	req->ki_filp = fget(iocb->aio_fildes);
++	ret = -EBADF;
++	if (unlikely(!req->ki_filp))
++		goto out_put_req;
++
+ 	if (iocb->aio_flags & IOCB_FLAG_RESFD) {
+ 		/*
+ 		 * If the IOCB_FLAG_RESFD flag of aio_flags is set, get an
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 77b3aaa39b35..104905732fbe 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -1605,9 +1605,16 @@ SMB2_tcon(const unsigned int xid, struct cifs_ses *ses, const char *tree,
+ 	iov[1].iov_base = unc_path;
+ 	iov[1].iov_len = unc_path_len;
+ 
+-	/* 3.11 tcon req must be signed if not encrypted. See MS-SMB2 3.2.4.1.1 */
++	/*
++	 * 3.11 tcon req must be signed if not encrypted. See MS-SMB2 3.2.4.1.1
++	 * unless it is guest or anonymous user. See MS-SMB2 3.2.5.3.1
++	 * (Samba servers don't always set the flag so also check if null user)
++	 */
+ 	if ((ses->server->dialect == SMB311_PROT_ID) &&
+-	    !smb3_encryption_required(tcon))
++	    !smb3_encryption_required(tcon) &&
++	    !(ses->session_flags &
++		    (SMB2_SESSION_FLAG_IS_GUEST|SMB2_SESSION_FLAG_IS_NULL)) &&
++	    ((ses->user_name != NULL) || (ses->sectype == Kerberos)))
+ 		req->sync_hdr.Flags |= SMB2_FLAGS_SIGNED;
+ 
+ 	memset(&rqst, 0, sizeof(struct smb_rqst));
+diff --git a/fs/ext4/ext4_jbd2.h b/fs/ext4/ext4_jbd2.h
+index 15b6dd733780..df908ef79cce 100644
+--- a/fs/ext4/ext4_jbd2.h
++++ b/fs/ext4/ext4_jbd2.h
+@@ -384,7 +384,7 @@ static inline void ext4_update_inode_fsync_trans(handle_t *handle,
+ {
+ 	struct ext4_inode_info *ei = EXT4_I(inode);
+ 
+-	if (ext4_handle_valid(handle)) {
++	if (ext4_handle_valid(handle) && !is_handle_aborted(handle)) {
+ 		ei->i_sync_tid = handle->h_transaction->t_tid;
+ 		if (datasync)
+ 			ei->i_datasync_tid = handle->h_transaction->t_tid;
+diff --git a/fs/ext4/file.c b/fs/ext4/file.c
+index 69d65d49837b..98ec11f69cd4 100644
+--- a/fs/ext4/file.c
++++ b/fs/ext4/file.c
+@@ -125,7 +125,7 @@ ext4_unaligned_aio(struct inode *inode, struct iov_iter *from, loff_t pos)
+ 	struct super_block *sb = inode->i_sb;
+ 	int blockmask = sb->s_blocksize - 1;
+ 
+-	if (pos >= i_size_read(inode))
++	if (pos >= ALIGN(i_size_read(inode), sb->s_blocksize))
+ 		return 0;
+ 
+ 	if ((pos | iov_iter_alignment(from)) & blockmask)
+diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c
+index bf7fa1507e81..9e96a0bd08d9 100644
+--- a/fs/ext4/indirect.c
++++ b/fs/ext4/indirect.c
+@@ -1387,10 +1387,14 @@ end_range:
+ 					   partial->p + 1,
+ 					   partial2->p,
+ 					   (chain+n-1) - partial);
+-			BUFFER_TRACE(partial->bh, "call brelse");
+-			brelse(partial->bh);
+-			BUFFER_TRACE(partial2->bh, "call brelse");
+-			brelse(partial2->bh);
++			while (partial > chain) {
++				BUFFER_TRACE(partial->bh, "call brelse");
++				brelse(partial->bh);
++			}
++			while (partial2 > chain2) {
++				BUFFER_TRACE(partial2->bh, "call brelse");
++				brelse(partial2->bh);
++			}
+ 			return 0;
+ 		}
+ 
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 9b79056d705d..e1b1d390b329 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -215,7 +215,8 @@ void f2fs_register_inmem_page(struct inode *inode, struct page *page)
+ }
+ 
+ static int __revoke_inmem_pages(struct inode *inode,
+-				struct list_head *head, bool drop, bool recover)
++				struct list_head *head, bool drop, bool recover,
++				bool trylock)
+ {
+ 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ 	struct inmem_pages *cur, *tmp;
+@@ -227,7 +228,16 @@ static int __revoke_inmem_pages(struct inode *inode,
+ 		if (drop)
+ 			trace_f2fs_commit_inmem_page(page, INMEM_DROP);
+ 
+-		lock_page(page);
++		if (trylock) {
++			/*
++			 * to avoid deadlock in between page lock and
++			 * inmem_lock.
++			 */
++			if (!trylock_page(page))
++				continue;
++		} else {
++			lock_page(page);
++		}
+ 
+ 		f2fs_wait_on_page_writeback(page, DATA, true, true);
+ 
+@@ -318,13 +328,19 @@ void f2fs_drop_inmem_pages(struct inode *inode)
+ 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ 	struct f2fs_inode_info *fi = F2FS_I(inode);
+ 
+-	mutex_lock(&fi->inmem_lock);
+-	__revoke_inmem_pages(inode, &fi->inmem_pages, true, false);
+-	spin_lock(&sbi->inode_lock[ATOMIC_FILE]);
+-	if (!list_empty(&fi->inmem_ilist))
+-		list_del_init(&fi->inmem_ilist);
+-	spin_unlock(&sbi->inode_lock[ATOMIC_FILE]);
+-	mutex_unlock(&fi->inmem_lock);
++	while (!list_empty(&fi->inmem_pages)) {
++		mutex_lock(&fi->inmem_lock);
++		__revoke_inmem_pages(inode, &fi->inmem_pages,
++						true, false, true);
++
++		if (list_empty(&fi->inmem_pages)) {
++			spin_lock(&sbi->inode_lock[ATOMIC_FILE]);
++			if (!list_empty(&fi->inmem_ilist))
++				list_del_init(&fi->inmem_ilist);
++			spin_unlock(&sbi->inode_lock[ATOMIC_FILE]);
++		}
++		mutex_unlock(&fi->inmem_lock);
++	}
+ 
+ 	clear_inode_flag(inode, FI_ATOMIC_FILE);
+ 	fi->i_gc_failures[GC_FAILURE_ATOMIC] = 0;
+@@ -429,12 +445,15 @@ retry:
+ 		 * recovery or rewrite & commit last transaction. For other
+ 		 * error number, revoking was done by filesystem itself.
+ 		 */
+-		err = __revoke_inmem_pages(inode, &revoke_list, false, true);
++		err = __revoke_inmem_pages(inode, &revoke_list,
++						false, true, false);
+ 
+ 		/* drop all uncommitted pages */
+-		__revoke_inmem_pages(inode, &fi->inmem_pages, true, false);
++		__revoke_inmem_pages(inode, &fi->inmem_pages,
++						true, false, false);
+ 	} else {
+-		__revoke_inmem_pages(inode, &revoke_list, false, false);
++		__revoke_inmem_pages(inode, &revoke_list,
++						false, false, false);
+ 	}
+ 
+ 	return err;
+diff --git a/fs/udf/truncate.c b/fs/udf/truncate.c
+index b647f0bd150c..94220ba85628 100644
+--- a/fs/udf/truncate.c
++++ b/fs/udf/truncate.c
+@@ -260,6 +260,9 @@ void udf_truncate_extents(struct inode *inode)
+ 			epos.block = eloc;
+ 			epos.bh = udf_tread(sb,
+ 					udf_get_lb_pblock(sb, &eloc, 0));
++			/* Error reading indirect block? */
++			if (!epos.bh)
++				return;
+ 			if (elen)
+ 				indirect_ext_len =
+ 					(elen + sb->s_blocksize - 1) >>
+diff --git a/include/linux/ceph/libceph.h b/include/linux/ceph/libceph.h
+index a420c07904bc..337d5049ff93 100644
+--- a/include/linux/ceph/libceph.h
++++ b/include/linux/ceph/libceph.h
+@@ -294,6 +294,8 @@ extern void ceph_destroy_client(struct ceph_client *client);
+ extern int __ceph_open_session(struct ceph_client *client,
+ 			       unsigned long started);
+ extern int ceph_open_session(struct ceph_client *client);
++int ceph_wait_for_latest_osdmap(struct ceph_client *client,
++				unsigned long timeout);
+ 
+ /* pagevec.c */
+ extern void ceph_release_page_vector(struct page **pages, int num_pages);
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 29d8e2cfed0e..fd423fec8d83 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -304,13 +304,19 @@ enum rw_hint {
+ 
+ struct kiocb {
+ 	struct file		*ki_filp;
++
++	/* The 'ki_filp' pointer is shared in a union for aio */
++	randomized_struct_fields_start
++
+ 	loff_t			ki_pos;
+ 	void (*ki_complete)(struct kiocb *iocb, long ret, long ret2);
+ 	void			*private;
+ 	int			ki_flags;
+ 	u16			ki_hint;
+ 	u16			ki_ioprio; /* See linux/ioprio.h */
+-} __randomize_layout;
++
++	randomized_struct_fields_end
++};
+ 
+ static inline bool is_sync_kiocb(struct kiocb *kiocb)
+ {
+diff --git a/kernel/futex.c b/kernel/futex.c
+index a0514e01c3eb..52668d44e07b 100644
+--- a/kernel/futex.c
++++ b/kernel/futex.c
+@@ -3440,6 +3440,10 @@ static int handle_futex_death(u32 __user *uaddr, struct task_struct *curr, int p
+ {
+ 	u32 uval, uninitialized_var(nval), mval;
+ 
++	/* Futex address must be 32bit aligned */
++	if ((((unsigned long)uaddr) % sizeof(*uaddr)) != 0)
++		return -1;
++
+ retry:
+ 	if (get_user(uval, uaddr))
+ 		return -1;
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index 95932333a48b..e805fe3bf87f 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -3535,6 +3535,9 @@ static int __lock_downgrade(struct lockdep_map *lock, unsigned long ip)
+ 	unsigned int depth;
+ 	int i;
+ 
++	if (unlikely(!debug_locks))
++		return 0;
++
+ 	depth = curr->lockdep_depth;
+ 	/*
+ 	 * This function is about (re)setting the class of a held lock,
+diff --git a/net/bluetooth/hci_sock.c b/net/bluetooth/hci_sock.c
+index 1506e1632394..d4e2a166ae17 100644
+--- a/net/bluetooth/hci_sock.c
++++ b/net/bluetooth/hci_sock.c
+@@ -831,8 +831,6 @@ static int hci_sock_release(struct socket *sock)
+ 	if (!sk)
+ 		return 0;
+ 
+-	hdev = hci_pi(sk)->hdev;
+-
+ 	switch (hci_pi(sk)->channel) {
+ 	case HCI_CHANNEL_MONITOR:
+ 		atomic_dec(&monitor_promisc);
+@@ -854,6 +852,7 @@ static int hci_sock_release(struct socket *sock)
+ 
+ 	bt_sock_unlink(&hci_sk_list, sk);
+ 
++	hdev = hci_pi(sk)->hdev;
+ 	if (hdev) {
+ 		if (hci_pi(sk)->channel == HCI_CHANNEL_USER) {
+ 			/* When releasing a user channel exclusive access,
+diff --git a/net/bridge/netfilter/ebtables.c b/net/bridge/netfilter/ebtables.c
+index 6693e209efe8..f77888ec93f1 100644
+--- a/net/bridge/netfilter/ebtables.c
++++ b/net/bridge/netfilter/ebtables.c
+@@ -31,10 +31,6 @@
+ /* needed for logical [in,out]-dev filtering */
+ #include "../br_private.h"
+ 
+-#define BUGPRINT(format, args...) printk("kernel msg: ebtables bug: please "\
+-					 "report to author: "format, ## args)
+-/* #define BUGPRINT(format, args...) */
+-
+ /* Each cpu has its own set of counters, so there is no need for write_lock in
+  * the softirq
+  * For reading or updating the counters, the user context needs to
+@@ -466,8 +462,6 @@ static int ebt_verify_pointers(const struct ebt_replace *repl,
+ 				/* we make userspace set this right,
+ 				 * so there is no misunderstanding
+ 				 */
+-				BUGPRINT("EBT_ENTRY_OR_ENTRIES shouldn't be set "
+-					 "in distinguisher\n");
+ 				return -EINVAL;
+ 			}
+ 			if (i != NF_BR_NUMHOOKS)
+@@ -485,18 +479,14 @@ static int ebt_verify_pointers(const struct ebt_replace *repl,
+ 			offset += e->next_offset;
+ 		}
+ 	}
+-	if (offset != limit) {
+-		BUGPRINT("entries_size too small\n");
++	if (offset != limit)
+ 		return -EINVAL;
+-	}
+ 
+ 	/* check if all valid hooks have a chain */
+ 	for (i = 0; i < NF_BR_NUMHOOKS; i++) {
+ 		if (!newinfo->hook_entry[i] &&
+-		   (valid_hooks & (1 << i))) {
+-			BUGPRINT("Valid hook without chain\n");
++		   (valid_hooks & (1 << i)))
+ 			return -EINVAL;
+-		}
+ 	}
+ 	return 0;
+ }
+@@ -523,26 +513,20 @@ ebt_check_entry_size_and_hooks(const struct ebt_entry *e,
+ 		/* this checks if the previous chain has as many entries
+ 		 * as it said it has
+ 		 */
+-		if (*n != *cnt) {
+-			BUGPRINT("nentries does not equal the nr of entries "
+-				 "in the chain\n");
++		if (*n != *cnt)
+ 			return -EINVAL;
+-		}
++
+ 		if (((struct ebt_entries *)e)->policy != EBT_DROP &&
+ 		   ((struct ebt_entries *)e)->policy != EBT_ACCEPT) {
+ 			/* only RETURN from udc */
+ 			if (i != NF_BR_NUMHOOKS ||
+-			   ((struct ebt_entries *)e)->policy != EBT_RETURN) {
+-				BUGPRINT("bad policy\n");
++			   ((struct ebt_entries *)e)->policy != EBT_RETURN)
+ 				return -EINVAL;
+-			}
+ 		}
+ 		if (i == NF_BR_NUMHOOKS) /* it's a user defined chain */
+ 			(*udc_cnt)++;
+-		if (((struct ebt_entries *)e)->counter_offset != *totalcnt) {
+-			BUGPRINT("counter_offset != totalcnt");
++		if (((struct ebt_entries *)e)->counter_offset != *totalcnt)
+ 			return -EINVAL;
+-		}
+ 		*n = ((struct ebt_entries *)e)->nentries;
+ 		*cnt = 0;
+ 		return 0;
+@@ -550,15 +534,13 @@ ebt_check_entry_size_and_hooks(const struct ebt_entry *e,
+ 	/* a plain old entry, heh */
+ 	if (sizeof(struct ebt_entry) > e->watchers_offset ||
+ 	   e->watchers_offset > e->target_offset ||
+-	   e->target_offset >= e->next_offset) {
+-		BUGPRINT("entry offsets not in right order\n");
++	   e->target_offset >= e->next_offset)
+ 		return -EINVAL;
+-	}
++
+ 	/* this is not checked anywhere else */
+-	if (e->next_offset - e->target_offset < sizeof(struct ebt_entry_target)) {
+-		BUGPRINT("target size too small\n");
++	if (e->next_offset - e->target_offset < sizeof(struct ebt_entry_target))
+ 		return -EINVAL;
+-	}
++
+ 	(*cnt)++;
+ 	(*totalcnt)++;
+ 	return 0;
+@@ -678,18 +660,15 @@ ebt_check_entry(struct ebt_entry *e, struct net *net,
+ 	if (e->bitmask == 0)
+ 		return 0;
+ 
+-	if (e->bitmask & ~EBT_F_MASK) {
+-		BUGPRINT("Unknown flag for bitmask\n");
++	if (e->bitmask & ~EBT_F_MASK)
+ 		return -EINVAL;
+-	}
+-	if (e->invflags & ~EBT_INV_MASK) {
+-		BUGPRINT("Unknown flag for inv bitmask\n");
++
++	if (e->invflags & ~EBT_INV_MASK)
+ 		return -EINVAL;
+-	}
+-	if ((e->bitmask & EBT_NOPROTO) && (e->bitmask & EBT_802_3)) {
+-		BUGPRINT("NOPROTO & 802_3 not allowed\n");
++
++	if ((e->bitmask & EBT_NOPROTO) && (e->bitmask & EBT_802_3))
+ 		return -EINVAL;
+-	}
++
+ 	/* what hook do we belong to? */
+ 	for (i = 0; i < NF_BR_NUMHOOKS; i++) {
+ 		if (!newinfo->hook_entry[i])
+@@ -748,13 +727,11 @@ ebt_check_entry(struct ebt_entry *e, struct net *net,
+ 	t->u.target = target;
+ 	if (t->u.target == &ebt_standard_target) {
+ 		if (gap < sizeof(struct ebt_standard_target)) {
+-			BUGPRINT("Standard target size too big\n");
+ 			ret = -EFAULT;
+ 			goto cleanup_watchers;
+ 		}
+ 		if (((struct ebt_standard_target *)t)->verdict <
+ 		   -NUM_STANDARD_TARGETS) {
+-			BUGPRINT("Invalid standard target\n");
+ 			ret = -EFAULT;
+ 			goto cleanup_watchers;
+ 		}
+@@ -813,10 +790,9 @@ static int check_chainloops(const struct ebt_entries *chain, struct ebt_cl_stack
+ 		if (strcmp(t->u.name, EBT_STANDARD_TARGET))
+ 			goto letscontinue;
+ 		if (e->target_offset + sizeof(struct ebt_standard_target) >
+-		   e->next_offset) {
+-			BUGPRINT("Standard target size too big\n");
++		   e->next_offset)
+ 			return -1;
+-		}
++
+ 		verdict = ((struct ebt_standard_target *)t)->verdict;
+ 		if (verdict >= 0) { /* jump to another chain */
+ 			struct ebt_entries *hlp2 =
+@@ -825,14 +801,12 @@ static int check_chainloops(const struct ebt_entries *chain, struct ebt_cl_stack
+ 				if (hlp2 == cl_s[i].cs.chaininfo)
+ 					break;
+ 			/* bad destination or loop */
+-			if (i == udc_cnt) {
+-				BUGPRINT("bad destination\n");
++			if (i == udc_cnt)
+ 				return -1;
+-			}
+-			if (cl_s[i].cs.n) {
+-				BUGPRINT("loop\n");
++
++			if (cl_s[i].cs.n)
+ 				return -1;
+-			}
++
+ 			if (cl_s[i].hookmask & (1 << hooknr))
+ 				goto letscontinue;
+ 			/* this can't be 0, so the loop test is correct */
+@@ -865,24 +839,21 @@ static int translate_table(struct net *net, const char *name,
+ 	i = 0;
+ 	while (i < NF_BR_NUMHOOKS && !newinfo->hook_entry[i])
+ 		i++;
+-	if (i == NF_BR_NUMHOOKS) {
+-		BUGPRINT("No valid hooks specified\n");
++	if (i == NF_BR_NUMHOOKS)
+ 		return -EINVAL;
+-	}
+-	if (newinfo->hook_entry[i] != (struct ebt_entries *)newinfo->entries) {
+-		BUGPRINT("Chains don't start at beginning\n");
++
++	if (newinfo->hook_entry[i] != (struct ebt_entries *)newinfo->entries)
+ 		return -EINVAL;
+-	}
++
+ 	/* make sure chains are ordered after each other in same order
+ 	 * as their corresponding hooks
+ 	 */
+ 	for (j = i + 1; j < NF_BR_NUMHOOKS; j++) {
+ 		if (!newinfo->hook_entry[j])
+ 			continue;
+-		if (newinfo->hook_entry[j] <= newinfo->hook_entry[i]) {
+-			BUGPRINT("Hook order must be followed\n");
++		if (newinfo->hook_entry[j] <= newinfo->hook_entry[i])
+ 			return -EINVAL;
+-		}
++
+ 		i = j;
+ 	}
+ 
+@@ -900,15 +871,11 @@ static int translate_table(struct net *net, const char *name,
+ 	if (ret != 0)
+ 		return ret;
+ 
+-	if (i != j) {
+-		BUGPRINT("nentries does not equal the nr of entries in the "
+-			 "(last) chain\n");
++	if (i != j)
+ 		return -EINVAL;
+-	}
+-	if (k != newinfo->nentries) {
+-		BUGPRINT("Total nentries is wrong\n");
++
++	if (k != newinfo->nentries)
+ 		return -EINVAL;
+-	}
+ 
+ 	/* get the location of the udc, put them in an array
+ 	 * while we're at it, allocate the chainstack
+@@ -942,7 +909,6 @@ static int translate_table(struct net *net, const char *name,
+ 		   ebt_get_udc_positions, newinfo, &i, cl_s);
+ 		/* sanity check */
+ 		if (i != udc_cnt) {
+-			BUGPRINT("i != udc_cnt\n");
+ 			vfree(cl_s);
+ 			return -EFAULT;
+ 		}
+@@ -1042,7 +1008,6 @@ static int do_replace_finish(struct net *net, struct ebt_replace *repl,
+ 		goto free_unlock;
+ 
+ 	if (repl->num_counters && repl->num_counters != t->private->nentries) {
+-		BUGPRINT("Wrong nr. of counters requested\n");
+ 		ret = -EINVAL;
+ 		goto free_unlock;
+ 	}
+@@ -1118,15 +1083,12 @@ static int do_replace(struct net *net, const void __user *user,
+ 	if (copy_from_user(&tmp, user, sizeof(tmp)) != 0)
+ 		return -EFAULT;
+ 
+-	if (len != sizeof(tmp) + tmp.entries_size) {
+-		BUGPRINT("Wrong len argument\n");
++	if (len != sizeof(tmp) + tmp.entries_size)
+ 		return -EINVAL;
+-	}
+ 
+-	if (tmp.entries_size == 0) {
+-		BUGPRINT("Entries_size never zero\n");
++	if (tmp.entries_size == 0)
+ 		return -EINVAL;
+-	}
++
+ 	/* overflow check */
+ 	if (tmp.nentries >= ((INT_MAX - sizeof(struct ebt_table_info)) /
+ 			NR_CPUS - SMP_CACHE_BYTES) / sizeof(struct ebt_counter))
+@@ -1153,7 +1115,6 @@ static int do_replace(struct net *net, const void __user *user,
+ 	}
+ 	if (copy_from_user(
+ 	   newinfo->entries, tmp.entries, tmp.entries_size) != 0) {
+-		BUGPRINT("Couldn't copy entries from userspace\n");
+ 		ret = -EFAULT;
+ 		goto free_entries;
+ 	}
+@@ -1194,10 +1155,8 @@ int ebt_register_table(struct net *net, const struct ebt_table *input_table,
+ 
+ 	if (input_table == NULL || (repl = input_table->table) == NULL ||
+ 	    repl->entries == NULL || repl->entries_size == 0 ||
+-	    repl->counters != NULL || input_table->private != NULL) {
+-		BUGPRINT("Bad table data for ebt_register_table!!!\n");
++	    repl->counters != NULL || input_table->private != NULL)
+ 		return -EINVAL;
+-	}
+ 
+ 	/* Don't add one table to multiple lists. */
+ 	table = kmemdup(input_table, sizeof(struct ebt_table), GFP_KERNEL);
+@@ -1235,13 +1194,10 @@ int ebt_register_table(struct net *net, const struct ebt_table *input_table,
+ 				((char *)repl->hook_entry[i] - repl->entries);
+ 	}
+ 	ret = translate_table(net, repl->name, newinfo);
+-	if (ret != 0) {
+-		BUGPRINT("Translate_table failed\n");
++	if (ret != 0)
+ 		goto free_chainstack;
+-	}
+ 
+ 	if (table->check && table->check(newinfo, table->valid_hooks)) {
+-		BUGPRINT("The table doesn't like its own initial data, lol\n");
+ 		ret = -EINVAL;
+ 		goto free_chainstack;
+ 	}
+@@ -1252,7 +1208,6 @@ int ebt_register_table(struct net *net, const struct ebt_table *input_table,
+ 	list_for_each_entry(t, &net->xt.tables[NFPROTO_BRIDGE], list) {
+ 		if (strcmp(t->name, table->name) == 0) {
+ 			ret = -EEXIST;
+-			BUGPRINT("Table name already exists\n");
+ 			goto free_unlock;
+ 		}
+ 	}
+@@ -1320,7 +1275,6 @@ static int do_update_counters(struct net *net, const char *name,
+ 		goto free_tmp;
+ 
+ 	if (num_counters != t->private->nentries) {
+-		BUGPRINT("Wrong nr of counters\n");
+ 		ret = -EINVAL;
+ 		goto unlock_mutex;
+ 	}
+@@ -1447,10 +1401,8 @@ static int copy_counters_to_user(struct ebt_table *t,
+ 	if (num_counters == 0)
+ 		return 0;
+ 
+-	if (num_counters != nentries) {
+-		BUGPRINT("Num_counters wrong\n");
++	if (num_counters != nentries)
+ 		return -EINVAL;
+-	}
+ 
+ 	counterstmp = vmalloc(array_size(nentries, sizeof(*counterstmp)));
+ 	if (!counterstmp)
+@@ -1496,15 +1448,11 @@ static int copy_everything_to_user(struct ebt_table *t, void __user *user,
+ 	   (tmp.num_counters ? nentries * sizeof(struct ebt_counter) : 0))
+ 		return -EINVAL;
+ 
+-	if (tmp.nentries != nentries) {
+-		BUGPRINT("Nentries wrong\n");
++	if (tmp.nentries != nentries)
+ 		return -EINVAL;
+-	}
+ 
+-	if (tmp.entries_size != entries_size) {
+-		BUGPRINT("Wrong size\n");
++	if (tmp.entries_size != entries_size)
+ 		return -EINVAL;
+-	}
+ 
+ 	ret = copy_counters_to_user(t, oldcounters, tmp.counters,
+ 					tmp.num_counters, nentries);
+@@ -1576,7 +1524,6 @@ static int do_ebt_get_ctl(struct sock *sk, int cmd, void __user *user, int *len)
+ 		}
+ 		mutex_unlock(&ebt_mutex);
+ 		if (copy_to_user(user, &tmp, *len) != 0) {
+-			BUGPRINT("c2u Didn't work\n");
+ 			ret = -EFAULT;
+ 			break;
+ 		}
+diff --git a/net/ceph/ceph_common.c b/net/ceph/ceph_common.c
+index 9cab80207ced..79eac465ec65 100644
+--- a/net/ceph/ceph_common.c
++++ b/net/ceph/ceph_common.c
+@@ -738,7 +738,6 @@ int __ceph_open_session(struct ceph_client *client, unsigned long started)
+ }
+ EXPORT_SYMBOL(__ceph_open_session);
+ 
+-
+ int ceph_open_session(struct ceph_client *client)
+ {
+ 	int ret;
+@@ -754,6 +753,23 @@ int ceph_open_session(struct ceph_client *client)
+ }
+ EXPORT_SYMBOL(ceph_open_session);
+ 
++int ceph_wait_for_latest_osdmap(struct ceph_client *client,
++				unsigned long timeout)
++{
++	u64 newest_epoch;
++	int ret;
++
++	ret = ceph_monc_get_version(&client->monc, "osdmap", &newest_epoch);
++	if (ret)
++		return ret;
++
++	if (client->osdc.osdmap->epoch >= newest_epoch)
++		return 0;
++
++	ceph_osdc_maybe_request_map(&client->osdc);
++	return ceph_monc_wait_osdmap(&client->monc, newest_epoch, timeout);
++}
++EXPORT_SYMBOL(ceph_wait_for_latest_osdmap);
+ 
+ static int __init init_ceph_lib(void)
+ {
+diff --git a/net/ceph/mon_client.c b/net/ceph/mon_client.c
+index 18deb3d889c4..a53e4fbb6319 100644
+--- a/net/ceph/mon_client.c
++++ b/net/ceph/mon_client.c
+@@ -922,6 +922,15 @@ int ceph_monc_blacklist_add(struct ceph_mon_client *monc,
+ 	mutex_unlock(&monc->mutex);
+ 
+ 	ret = wait_generic_request(req);
++	if (!ret)
++		/*
++		 * Make sure we have the osdmap that includes the blacklist
++		 * entry.  This is needed to ensure that the OSDs pick up the
++		 * new blacklist before processing any future requests from
++		 * this client.
++		 */
++		ret = ceph_wait_for_latest_osdmap(monc->client, 0);
++
+ out:
+ 	put_generic_request(req);
+ 	return ret;
+diff --git a/sound/ac97/bus.c b/sound/ac97/bus.c
+index 9f0c480489ef..9cbf6927abe9 100644
+--- a/sound/ac97/bus.c
++++ b/sound/ac97/bus.c
+@@ -84,7 +84,7 @@ ac97_of_get_child_device(struct ac97_controller *ac97_ctrl, int idx,
+ 		if ((idx != of_property_read_u32(node, "reg", &reg)) ||
+ 		    !of_device_is_compatible(node, compat))
+ 			continue;
+-		return of_node_get(node);
++		return node;
+ 	}
+ 
+ 	return NULL;
+diff --git a/sound/firewire/motu/motu.c b/sound/firewire/motu/motu.c
+index 220e61926ea4..513291ba0ab0 100644
+--- a/sound/firewire/motu/motu.c
++++ b/sound/firewire/motu/motu.c
+@@ -36,7 +36,7 @@ static void name_card(struct snd_motu *motu)
+ 	fw_csr_iterator_init(&it, motu->unit->directory);
+ 	while (fw_csr_iterator_next(&it, &key, &val)) {
+ 		switch (key) {
+-		case CSR_VERSION:
++		case CSR_MODEL:
+ 			version = val;
+ 			break;
+ 		}
+@@ -46,7 +46,7 @@ static void name_card(struct snd_motu *motu)
+ 	strcpy(motu->card->shortname, motu->spec->name);
+ 	strcpy(motu->card->mixername, motu->spec->name);
+ 	snprintf(motu->card->longname, sizeof(motu->card->longname),
+-		 "MOTU %s (version:%d), GUID %08x%08x at %s, S%d",
++		 "MOTU %s (version:%06x), GUID %08x%08x at %s, S%d",
+ 		 motu->spec->name, version,
+ 		 fw_dev->config_rom[3], fw_dev->config_rom[4],
+ 		 dev_name(&motu->unit->device), 100 << fw_dev->max_speed);
+@@ -237,20 +237,20 @@ static const struct snd_motu_spec motu_audio_express = {
+ #define SND_MOTU_DEV_ENTRY(model, data)			\
+ {							\
+ 	.match_flags	= IEEE1394_MATCH_VENDOR_ID |	\
+-			  IEEE1394_MATCH_MODEL_ID |	\
+-			  IEEE1394_MATCH_SPECIFIER_ID,	\
++			  IEEE1394_MATCH_SPECIFIER_ID |	\
++			  IEEE1394_MATCH_VERSION,	\
+ 	.vendor_id	= OUI_MOTU,			\
+-	.model_id	= model,			\
+ 	.specifier_id	= OUI_MOTU,			\
++	.version	= model,			\
+ 	.driver_data	= (kernel_ulong_t)data,		\
+ }
+ 
+ static const struct ieee1394_device_id motu_id_table[] = {
+-	SND_MOTU_DEV_ENTRY(0x101800, &motu_828mk2),
+-	SND_MOTU_DEV_ENTRY(0x107800, &snd_motu_spec_traveler),
+-	SND_MOTU_DEV_ENTRY(0x106800, &motu_828mk3),	/* FireWire only. */
+-	SND_MOTU_DEV_ENTRY(0x100800, &motu_828mk3),	/* Hybrid. */
+-	SND_MOTU_DEV_ENTRY(0x104800, &motu_audio_express),
++	SND_MOTU_DEV_ENTRY(0x000003, &motu_828mk2),
++	SND_MOTU_DEV_ENTRY(0x000009, &snd_motu_spec_traveler),
++	SND_MOTU_DEV_ENTRY(0x000015, &motu_828mk3),	/* FireWire only. */
++	SND_MOTU_DEV_ENTRY(0x000035, &motu_828mk3),	/* Hybrid. */
++	SND_MOTU_DEV_ENTRY(0x000033, &motu_audio_express),
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(ieee1394, motu_id_table);
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index 9f8d59e7e89f..b238e903b9d7 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -2917,6 +2917,7 @@ static void hda_call_codec_resume(struct hda_codec *codec)
+ 		hda_jackpoll_work(&codec->jackpoll_work.work);
+ 	else
+ 		snd_hda_jack_report_sync(codec);
++	codec->core.dev.power.power_state = PMSG_ON;
+ 	snd_hdac_leave_pm(&codec->core);
+ }
+ 
+@@ -2950,10 +2951,62 @@ static int hda_codec_runtime_resume(struct device *dev)
+ }
+ #endif /* CONFIG_PM */
+ 
++#ifdef CONFIG_PM_SLEEP
++static int hda_codec_force_resume(struct device *dev)
++{
++	int ret;
++
++	/* The get/put pair below enforces the runtime resume even if the
++	 * device hasn't been used at suspend time.  This trick is needed to
++	 * update the jack state change during the sleep.
++	 */
++	pm_runtime_get_noresume(dev);
++	ret = pm_runtime_force_resume(dev);
++	pm_runtime_put(dev);
++	return ret;
++}
++
++static int hda_codec_pm_suspend(struct device *dev)
++{
++	dev->power.power_state = PMSG_SUSPEND;
++	return pm_runtime_force_suspend(dev);
++}
++
++static int hda_codec_pm_resume(struct device *dev)
++{
++	dev->power.power_state = PMSG_RESUME;
++	return hda_codec_force_resume(dev);
++}
++
++static int hda_codec_pm_freeze(struct device *dev)
++{
++	dev->power.power_state = PMSG_FREEZE;
++	return pm_runtime_force_suspend(dev);
++}
++
++static int hda_codec_pm_thaw(struct device *dev)
++{
++	dev->power.power_state = PMSG_THAW;
++	return hda_codec_force_resume(dev);
++}
++
++static int hda_codec_pm_restore(struct device *dev)
++{
++	dev->power.power_state = PMSG_RESTORE;
++	return hda_codec_force_resume(dev);
++}
++#endif /* CONFIG_PM_SLEEP */
++
+ /* referred in hda_bind.c */
+ const struct dev_pm_ops hda_codec_driver_pm = {
+-	SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
+-				pm_runtime_force_resume)
++#ifdef CONFIG_PM_SLEEP
++	.suspend = hda_codec_pm_suspend,
++	.resume = hda_codec_pm_resume,
++	.freeze = hda_codec_pm_freeze,
++	.thaw = hda_codec_pm_thaw,
++	.poweroff = hda_codec_pm_suspend,
++	.restore = hda_codec_pm_restore,
++#endif /* CONFIG_PM_SLEEP */
+ 	SET_RUNTIME_PM_OPS(hda_codec_runtime_suspend, hda_codec_runtime_resume,
+ 			   NULL)
+ };
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index e5c49003e75f..ece256a3b48f 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -947,7 +947,7 @@ static void __azx_runtime_suspend(struct azx *chip)
+ 	display_power(chip, false);
+ }
+ 
+-static void __azx_runtime_resume(struct azx *chip)
++static void __azx_runtime_resume(struct azx *chip, bool from_rt)
+ {
+ 	struct hda_intel *hda = container_of(chip, struct hda_intel, chip);
+ 	struct hdac_bus *bus = azx_bus(chip);
+@@ -964,7 +964,7 @@ static void __azx_runtime_resume(struct azx *chip)
+ 	azx_init_pci(chip);
+ 	hda_intel_init_chip(chip, true);
+ 
+-	if (status) {
++	if (status && from_rt) {
+ 		list_for_each_codec(codec, &chip->bus)
+ 			if (status & (1 << codec->addr))
+ 				schedule_delayed_work(&codec->jackpoll_work,
+@@ -1016,7 +1016,7 @@ static int azx_resume(struct device *dev)
+ 			chip->msi = 0;
+ 	if (azx_acquire_irq(chip, 1) < 0)
+ 		return -EIO;
+-	__azx_runtime_resume(chip);
++	__azx_runtime_resume(chip, false);
+ 	snd_power_change_state(card, SNDRV_CTL_POWER_D0);
+ 
+ 	trace_azx_resume(chip);
+@@ -1081,7 +1081,7 @@ static int azx_runtime_resume(struct device *dev)
+ 	chip = card->private_data;
+ 	if (!azx_has_pm_runtime(chip))
+ 		return 0;
+-	__azx_runtime_resume(chip);
++	__azx_runtime_resume(chip, true);
+ 
+ 	/* disable controller Wake Up event*/
+ 	azx_writew(chip, WAKEEN, azx_readw(chip, WAKEEN) &
+@@ -2144,10 +2144,12 @@ static struct snd_pci_quirk power_save_blacklist[] = {
+ 	SND_PCI_QUIRK(0x8086, 0x2057, "Intel NUC5i7RYB", 0),
+ 	/* https://bugzilla.redhat.com/show_bug.cgi?id=1520902 */
+ 	SND_PCI_QUIRK(0x8086, 0x2068, "Intel NUC7i3BNB", 0),
+-	/* https://bugzilla.redhat.com/show_bug.cgi?id=1572975 */
+-	SND_PCI_QUIRK(0x17aa, 0x36a7, "Lenovo C50 All in one", 0),
+ 	/* https://bugzilla.kernel.org/show_bug.cgi?id=198611 */
+ 	SND_PCI_QUIRK(0x17aa, 0x2227, "Lenovo X1 Carbon 3rd Gen", 0),
++	/* https://bugzilla.redhat.com/show_bug.cgi?id=1689623 */
++	SND_PCI_QUIRK(0x17aa, 0x367b, "Lenovo IdeaCentre B550", 0),
++	/* https://bugzilla.redhat.com/show_bug.cgi?id=1572975 */
++	SND_PCI_QUIRK(0x17aa, 0x36a7, "Lenovo C50 All in one", 0),
+ 	{}
+ };
+ #endif /* CONFIG_PM */
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 0414a0d52262..5dde107083c6 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -2184,9 +2184,10 @@ static void cleanup(struct objtool_file *file)
+ 	elf_close(file->elf);
+ }
+ 
++static struct objtool_file file;
++
+ int check(const char *_objname, bool orc)
+ {
+-	struct objtool_file file;
+ 	int ret, warnings = 0;
+ 
+ 	objname = _objname;
+diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
+index 18a59fba97ff..cc4773157b9b 100644
+--- a/tools/perf/util/probe-event.c
++++ b/tools/perf/util/probe-event.c
+@@ -157,8 +157,10 @@ static struct map *kernel_get_module_map(const char *module)
+ 	if (module && strchr(module, '/'))
+ 		return dso__new_map(module);
+ 
+-	if (!module)
+-		module = "kernel";
++	if (!module) {
++		pos = machine__kernel_map(host_machine);
++		return map__get(pos);
++	}
+ 
+ 	for (pos = maps__first(maps); pos; pos = map__next(pos)) {
+ 		/* short_name is "[module]" */



* [gentoo-commits] proj/linux-patches:5.0 commit in: /
@ 2019-03-27 12:20 Mike Pagano
  0 siblings, 0 replies; 28+ messages in thread
From: Mike Pagano @ 2019-03-27 12:20 UTC (permalink / raw)
  To: gentoo-commits

commit:     49c65cd5536daa462058ae4d2fef3d167b10719c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Mar 27 12:19:55 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Mar 27 12:19:55 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=49c65cd5

Update of netfilter patch thanks to kerfamil

Updated patch:
netfilter: nf_tables: fix set double-free in abort path

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 ..._tables-fix-set-double-free-in-abort-path.patch | 189 +++++++++++----------
 1 file changed, 103 insertions(+), 86 deletions(-)

diff --git a/2900_netfilter-patch-nf_tables-fix-set-double-free-in-abort-path.patch b/2900_netfilter-patch-nf_tables-fix-set-double-free-in-abort-path.patch
index 8a126bf..3cc4aef 100644
--- a/2900_netfilter-patch-nf_tables-fix-set-double-free-in-abort-path.patch
+++ b/2900_netfilter-patch-nf_tables-fix-set-double-free-in-abort-path.patch
@@ -1,110 +1,127 @@
-From: Florian Westphal <fw@strlen.de>
-To: <netfilter-devel@vger.kernel.org>
-Cc: kfm@plushkava.net, Florian Westphal <fw@strlen.de>
-Subject: [PATCH nf] netfilter: nf_tables: fix set double-free in abort path
-Date: Thu,  7 Mar 2019 20:30:41 +0100
-X-Mailer: git-send-email 2.19.2
-
-The abort path can cause a double-free of an (anon) set.
+commit 40ba1d9b4d19796afc9b7ece872f5f3e8f5e2c13 upstream.
 
+The abort path can cause a double-free of an anonymous set.
 Added-and-to-be-aborted rule looks like this:
 
 udp dport { 137, 138 } drop
 
 The to-be-aborted transaction list looks like this:
+
 newset
 newsetelem
 newsetelem
 rule
 
-This gets walked in reverse order, so first pass disables
-the rule, the set elements, then the set.
-
-After synchronize_rcu(), we then destroy those in same order:
-rule, set element, set element, newset.
+This gets walked in reverse order, so first pass disables the rule, the
+set elements, then the set.
 
-Problem is that the (anon) set has already been bound to the rule,
-so the rule (lookup expression destructor) already frees the set,
-when then cause use-after-free when trying to delete the elements
-from this set, then try to free the set again when handling the
-newset expression.
+After synchronize_rcu(), we then destroy those in same order: rule, set
+element, set element, newset.
 
-To resolve this, check in first phase if the newset is bound already.
-If so, remove the newset transaction from the list, rule destructor
-will handle cleanup.
+Problem is that the anonymous set has already been bound to the rule, so
+the rule (lookup expression destructor) already frees the set, when then
+cause use-after-free when trying to delete the elements from this set,
+then try to free the set again when handling the newset expression.
 
-This is still causes the use-after-free on set element removal.
-To handle this, move all affected set elements to a extra list
-and process it first.
+Rule releases the bound set in first place from the abort path, this
+causes the use-after-free on set element removal when undoing the new
+element transactions. To handle this, skip new element transaction if
+set is bound from the abort path.
 
-This forces strict 'destroy elements, then set' ordering.
+This is still causes the use-after-free on set element removal.  To
+handle this, remove transaction from the list when the set is already
+bound.
 
-Fixes: f6ac8585897684 ("netfilter: nf_tables: unbind set in rule from commit path")
+Fixes: f6ac85858976 ("netfilter: nf_tables: unbind set in rule from commit path")
 Bugzilla: https://bugzilla.netfilter.org/show_bug.cgi?id=1325
-Signed-off-by: Florian Westphal <fw@strlen.de>
+Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
+---
+Florian, I'm taking your original patch subject and part of the description,
+sending this as v2. Please ack if this looks good to you. Thanks.
 
---- a/net/netfilter/nf_tables_api.c	2019-03-07 21:49:45.776492810 -0000
-+++ b/net/netfilter/nf_tables_api.c	2019-03-07 21:49:57.067493081 -0000
-@@ -6634,10 +6634,39 @@ static void nf_tables_abort_release(stru
- 	kfree(trans);
- }
+ include/net/netfilter/nf_tables.h |  6 ++----
+ net/netfilter/nf_tables_api.c     | 17 +++++++++++------
+ 2 files changed, 13 insertions(+), 10 deletions(-)
+
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index b4984bbbe157..3d58acf94dd2 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -416,7 +416,8 @@ struct nft_set {
+ 	unsigned char			*udata;
+ 	/* runtime data below here */
+ 	const struct nft_set_ops	*ops ____cacheline_aligned;
+-	u16				flags:14,
++	u16				flags:13,
++					bound:1,
+ 					genmask:2;
+ 	u8				klen;
+ 	u8				dlen;
+@@ -1329,15 +1330,12 @@ struct nft_trans_rule {
+ struct nft_trans_set {
+ 	struct nft_set			*set;
+ 	u32				set_id;
+-	bool				bound;
+ };
  
-+static void __nf_tables_newset_abort(struct net *net,
-+				     struct nft_trans *set_trans,
-+				     struct list_head *set_elements)
-+{
-+	const struct nft_set *set = nft_trans_set(set_trans);
-+	struct nft_trans *trans, *next;
-+
-+	if (!nft_trans_set_bound(set_trans))
-+		return;
-+
-+	/* When abort is in progress, NFT_MSG_NEWRULE will remove the
-+	 * set if its bound, so we need to remove the NEWSET transaction,
-+	 * else the set is released twice.  NEWSETELEM need to be moved
-+	 * to special list to ensure 'free elements, then set' ordering.
-+	 */
-+	list_for_each_entry_safe_reverse(trans, next,
-+					 &net->nft.commit_list, list) {
-+		if (trans == set_trans)
-+			break;
-+
-+		if (trans->msg_type == NFT_MSG_NEWSETELEM &&
-+		    nft_trans_set(trans) == set)
-+			list_move(&trans->list, set_elements);
-+	}
-+
-+	nft_trans_destroy(set_trans);
-+}
-+
- static int __nf_tables_abort(struct net *net)
- {
- 	struct nft_trans *trans, *next;
- 	struct nft_trans_elem *te;
-+	LIST_HEAD(set_elements);
+ #define nft_trans_set(trans)	\
+ 	(((struct nft_trans_set *)trans->data)->set)
+ #define nft_trans_set_id(trans)	\
+ 	(((struct nft_trans_set *)trans->data)->set_id)
+-#define nft_trans_set_bound(trans)	\
+-	(((struct nft_trans_set *)trans->data)->bound)
  
- 	list_for_each_entry_safe_reverse(trans, next, &net->nft.commit_list,
- 					 list) {
-@@ -6693,6 +6722,8 @@ static int __nf_tables_abort(struct net
+ struct nft_trans_chain {
+ 	bool				update;
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 4893f248dfdc..e1724f9d8b9d 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -127,7 +127,7 @@ static void nft_set_trans_bind(const struct nft_ctx *ctx, struct nft_set *set)
+ 	list_for_each_entry_reverse(trans, &net->nft.commit_list, list) {
+ 		if (trans->msg_type == NFT_MSG_NEWSET &&
+ 		    nft_trans_set(trans) == set) {
+-			nft_trans_set_bound(trans) = true;
++			set->bound = true;
+ 			break;
+ 		}
+ 	}
+@@ -6617,8 +6617,7 @@ static void nf_tables_abort_release(struct nft_trans *trans)
+ 		nf_tables_rule_destroy(&trans->ctx, nft_trans_rule(trans));
+ 		break;
+ 	case NFT_MSG_NEWSET:
+-		if (!nft_trans_set_bound(trans))
+-			nft_set_destroy(nft_trans_set(trans));
++		nft_set_destroy(nft_trans_set(trans));
+ 		break;
+ 	case NFT_MSG_NEWSETELEM:
+ 		nft_set_elem_destroy(nft_trans_elem_set(trans),
+@@ -6691,8 +6690,11 @@ static int __nf_tables_abort(struct net *net)
+ 			break;
+ 		case NFT_MSG_NEWSET:
  			trans->ctx.table->use--;
- 			if (!nft_trans_set_bound(trans))
- 				list_del_rcu(&nft_trans_set(trans)->list);
-+
-+			__nf_tables_newset_abort(net, trans, &set_elements);
+-			if (!nft_trans_set_bound(trans))
+-				list_del_rcu(&nft_trans_set(trans)->list);
++			if (nft_trans_set(trans)->bound) {
++				nft_trans_destroy(trans);
++				break;
++			}
++			list_del_rcu(&nft_trans_set(trans)->list);
  			break;
  		case NFT_MSG_DELSET:
  			trans->ctx.table->use++;
-@@ -6739,6 +6770,13 @@ static int __nf_tables_abort(struct net
- 
- 	synchronize_rcu();
- 
-+	/* free set elements before the set they belong to is freed */
-+	list_for_each_entry_safe_reverse(trans, next,
-+					 &set_elements, list) {
-+		list_del(&trans->list);
-+		nf_tables_abort_release(trans);
-+	}
-+
- 	list_for_each_entry_safe_reverse(trans, next,
- 					 &net->nft.commit_list, list) {
- 		list_del(&trans->list);
+@@ -6700,8 +6702,11 @@ static int __nf_tables_abort(struct net *net)
+ 			nft_trans_destroy(trans);
+ 			break;
+ 		case NFT_MSG_NEWSETELEM:
++			if (nft_trans_elem_set(trans)->bound) {
++				nft_trans_destroy(trans);
++				break;
++			}
+ 			te = (struct nft_trans_elem *)trans->data;
+-
+ 			te->set->ops->remove(net, te->set, &te->elem);
+ 			atomic_dec(&te->set->nelems);
+ 			break;
+-- 
+2.11.0



* [gentoo-commits] proj/linux-patches:5.0 commit in: /
@ 2019-04-03 11:00 Mike Pagano
  0 siblings, 0 replies; 28+ messages in thread
From: Mike Pagano @ 2019-04-03 11:00 UTC (permalink / raw)
  To: gentoo-commits

commit:     cf22e4462701b06cf6f7b15a399e49cc3a48ca6d
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Apr  3 10:59:59 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Apr  3 10:59:59 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=cf22e446

Linux patch 5.0.6

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |    4 +
 1005_linux-5.0.6.patch | 4609 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4613 insertions(+)

diff --git a/0000_README b/0000_README
index f452eee..8c66a94 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch:  1004_linux-5.0.5.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.0.5
 
+Patch:  1005_linux-5.0.6.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.0.6
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1005_linux-5.0.6.patch b/1005_linux-5.0.6.patch
new file mode 100644
index 0000000..9cd8b16
--- /dev/null
+++ b/1005_linux-5.0.6.patch
@@ -0,0 +1,4609 @@
+diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
+index 356156f5c52d..ba8927c0d45c 100644
+--- a/Documentation/virtual/kvm/api.txt
++++ b/Documentation/virtual/kvm/api.txt
+@@ -13,7 +13,7 @@ of a virtual machine.  The ioctls belong to three classes
+ 
+  - VM ioctls: These query and set attributes that affect an entire virtual
+    machine, for example memory layout.  In addition a VM ioctl is used to
+-   create virtual cpus (vcpus).
++   create virtual cpus (vcpus) and devices.
+ 
+    Only run VM ioctls from the same process (address space) that was used
+    to create the VM.
+@@ -24,6 +24,11 @@ of a virtual machine.  The ioctls belong to three classes
+    Only run vcpu ioctls from the same thread that was used to create the
+    vcpu.
+ 
++ - device ioctls: These query and set attributes that control the operation
++   of a single device.
++
++   device ioctls must be issued from the same process (address space) that
++   was used to create the VM.
+ 
+ 2. File descriptors
+ -------------------
+@@ -32,10 +37,11 @@ The kvm API is centered around file descriptors.  An initial
+ open("/dev/kvm") obtains a handle to the kvm subsystem; this handle
+ can be used to issue system ioctls.  A KVM_CREATE_VM ioctl on this
+ handle will create a VM file descriptor which can be used to issue VM
+-ioctls.  A KVM_CREATE_VCPU ioctl on a VM fd will create a virtual cpu
+-and return a file descriptor pointing to it.  Finally, ioctls on a vcpu
+-fd can be used to control the vcpu, including the important task of
+-actually running guest code.
++ioctls.  A KVM_CREATE_VCPU or KVM_CREATE_DEVICE ioctl on a VM fd will
++create a virtual cpu or device and return a file descriptor pointing to
++the new resource.  Finally, ioctls on a vcpu or device fd can be used
++to control the vcpu or device.  For vcpus, this includes the important
++task of actually running guest code.
+ 
+ In general file descriptors can be migrated among processes by means
+ of fork() and the SCM_RIGHTS facility of unix domain socket.  These
+diff --git a/Makefile b/Makefile
+index 63152c5ca136..3ee390feea61 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 0
+-SUBLEVEL = 5
++SUBLEVEL = 6
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+@@ -944,9 +944,11 @@ mod_sign_cmd = true
+ endif
+ export mod_sign_cmd
+ 
++HOST_LIBELF_LIBS = $(shell pkg-config libelf --libs 2>/dev/null || echo -lelf)
++
+ ifdef CONFIG_STACK_VALIDATION
+   has_libelf := $(call try-run,\
+-		echo "int main() {}" | $(HOSTCC) -xc -o /dev/null -lelf -,1,0)
++		echo "int main() {}" | $(HOSTCC) -xc -o /dev/null $(HOST_LIBELF_LIBS) -,1,0)
+   ifeq ($(has_libelf),1)
+     objtool_target := tools/objtool FORCE
+   else
+diff --git a/arch/arm/mach-imx/cpuidle-imx6q.c b/arch/arm/mach-imx/cpuidle-imx6q.c
+index bfeb25aaf9a2..326e870d7123 100644
+--- a/arch/arm/mach-imx/cpuidle-imx6q.c
++++ b/arch/arm/mach-imx/cpuidle-imx6q.c
+@@ -16,30 +16,23 @@
+ #include "cpuidle.h"
+ #include "hardware.h"
+ 
+-static atomic_t master = ATOMIC_INIT(0);
+-static DEFINE_SPINLOCK(master_lock);
++static int num_idle_cpus = 0;
++static DEFINE_SPINLOCK(cpuidle_lock);
+ 
+ static int imx6q_enter_wait(struct cpuidle_device *dev,
+ 			    struct cpuidle_driver *drv, int index)
+ {
+-	if (atomic_inc_return(&master) == num_online_cpus()) {
+-		/*
+-		 * With this lock, we prevent other cpu to exit and enter
+-		 * this function again and become the master.
+-		 */
+-		if (!spin_trylock(&master_lock))
+-			goto idle;
++	spin_lock(&cpuidle_lock);
++	if (++num_idle_cpus == num_online_cpus())
+ 		imx6_set_lpm(WAIT_UNCLOCKED);
+-		cpu_do_idle();
+-		imx6_set_lpm(WAIT_CLOCKED);
+-		spin_unlock(&master_lock);
+-		goto done;
+-	}
++	spin_unlock(&cpuidle_lock);
+ 
+-idle:
+ 	cpu_do_idle();
+-done:
+-	atomic_dec(&master);
++
++	spin_lock(&cpuidle_lock);
++	if (num_idle_cpus-- == num_online_cpus())
++		imx6_set_lpm(WAIT_CLOCKED);
++	spin_unlock(&cpuidle_lock);
+ 
+ 	return index;
+ }
+diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
+index 19a8834e0398..0690a306f6ca 100644
+--- a/arch/powerpc/include/asm/ppc-opcode.h
++++ b/arch/powerpc/include/asm/ppc-opcode.h
+@@ -302,6 +302,7 @@
+ /* Misc instructions for BPF compiler */
+ #define PPC_INST_LBZ			0x88000000
+ #define PPC_INST_LD			0xe8000000
++#define PPC_INST_LDX			0x7c00002a
+ #define PPC_INST_LHZ			0xa0000000
+ #define PPC_INST_LWZ			0x80000000
+ #define PPC_INST_LHBRX			0x7c00062c
+@@ -309,6 +310,7 @@
+ #define PPC_INST_STB			0x98000000
+ #define PPC_INST_STH			0xb0000000
+ #define PPC_INST_STD			0xf8000000
++#define PPC_INST_STDX			0x7c00012a
+ #define PPC_INST_STDU			0xf8000001
+ #define PPC_INST_STW			0x90000000
+ #define PPC_INST_STWU			0x94000000
+diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
+index afb638778f44..447defdd4503 100644
+--- a/arch/powerpc/kernel/exceptions-64e.S
++++ b/arch/powerpc/kernel/exceptions-64e.S
+@@ -349,6 +349,7 @@ ret_from_mc_except:
+ #define GEN_BTB_FLUSH
+ #define CRIT_BTB_FLUSH
+ #define DBG_BTB_FLUSH
++#define MC_BTB_FLUSH
+ #define GDBELL_BTB_FLUSH
+ #endif
+ 
+diff --git a/arch/powerpc/lib/memcmp_64.S b/arch/powerpc/lib/memcmp_64.S
+index 844d8e774492..b7f6f6e0b6e8 100644
+--- a/arch/powerpc/lib/memcmp_64.S
++++ b/arch/powerpc/lib/memcmp_64.S
+@@ -215,11 +215,20 @@ _GLOBAL_TOC(memcmp)
+ 	beq	.Lzero
+ 
+ .Lcmp_rest_lt8bytes:
+-	/* Here we have only less than 8 bytes to compare with. at least s1
+-	 * Address is aligned with 8 bytes.
+-	 * The next double words are load and shift right with appropriate
+-	 * bits.
++	/*
++	 * Here we have less than 8 bytes to compare. At least s1 is aligned to
++	 * 8 bytes, but s2 may not be. We must make sure s2 + 7 doesn't cross a
++	 * page boundary, otherwise we might read past the end of the buffer and
++	 * trigger a page fault. We use 4K as the conservative minimum page
++	 * size. If we detect that case we go to the byte-by-byte loop.
++	 *
++	 * Otherwise the next double word is loaded from s1 and s2, and shifted
++	 * right to compare the appropriate bits.
+ 	 */
++	clrldi	r6,r4,(64-12)	// r6 = r4 & 0xfff
++	cmpdi	r6,0xff8
++	bgt	.Lshort
++
+ 	subfic  r6,r5,8
+ 	slwi	r6,r6,3
+ 	LD	rA,0,r3
+diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
+index c2d5192ed64f..e52e30bf7d86 100644
+--- a/arch/powerpc/net/bpf_jit.h
++++ b/arch/powerpc/net/bpf_jit.h
+@@ -51,6 +51,8 @@
+ #define PPC_LIS(r, i)		PPC_ADDIS(r, 0, i)
+ #define PPC_STD(r, base, i)	EMIT(PPC_INST_STD | ___PPC_RS(r) |	      \
+ 				     ___PPC_RA(base) | ((i) & 0xfffc))
++#define PPC_STDX(r, base, b)	EMIT(PPC_INST_STDX | ___PPC_RS(r) |	      \
++				     ___PPC_RA(base) | ___PPC_RB(b))
+ #define PPC_STDU(r, base, i)	EMIT(PPC_INST_STDU | ___PPC_RS(r) |	      \
+ 				     ___PPC_RA(base) | ((i) & 0xfffc))
+ #define PPC_STW(r, base, i)	EMIT(PPC_INST_STW | ___PPC_RS(r) |	      \
+@@ -65,7 +67,9 @@
+ #define PPC_LBZ(r, base, i)	EMIT(PPC_INST_LBZ | ___PPC_RT(r) |	      \
+ 				     ___PPC_RA(base) | IMM_L(i))
+ #define PPC_LD(r, base, i)	EMIT(PPC_INST_LD | ___PPC_RT(r) |	      \
+-				     ___PPC_RA(base) | IMM_L(i))
++				     ___PPC_RA(base) | ((i) & 0xfffc))
++#define PPC_LDX(r, base, b)	EMIT(PPC_INST_LDX | ___PPC_RT(r) |	      \
++				     ___PPC_RA(base) | ___PPC_RB(b))
+ #define PPC_LWZ(r, base, i)	EMIT(PPC_INST_LWZ | ___PPC_RT(r) |	      \
+ 				     ___PPC_RA(base) | IMM_L(i))
+ #define PPC_LHZ(r, base, i)	EMIT(PPC_INST_LHZ | ___PPC_RT(r) |	      \
+@@ -85,17 +89,6 @@
+ 					___PPC_RA(a) | ___PPC_RB(b))
+ #define PPC_BPF_STDCX(s, a, b)	EMIT(PPC_INST_STDCX | ___PPC_RS(s) |	      \
+ 					___PPC_RA(a) | ___PPC_RB(b))
+-
+-#ifdef CONFIG_PPC64
+-#define PPC_BPF_LL(r, base, i) do { PPC_LD(r, base, i); } while(0)
+-#define PPC_BPF_STL(r, base, i) do { PPC_STD(r, base, i); } while(0)
+-#define PPC_BPF_STLU(r, base, i) do { PPC_STDU(r, base, i); } while(0)
+-#else
+-#define PPC_BPF_LL(r, base, i) do { PPC_LWZ(r, base, i); } while(0)
+-#define PPC_BPF_STL(r, base, i) do { PPC_STW(r, base, i); } while(0)
+-#define PPC_BPF_STLU(r, base, i) do { PPC_STWU(r, base, i); } while(0)
+-#endif
+-
+ #define PPC_CMPWI(a, i)		EMIT(PPC_INST_CMPWI | ___PPC_RA(a) | IMM_L(i))
+ #define PPC_CMPDI(a, i)		EMIT(PPC_INST_CMPDI | ___PPC_RA(a) | IMM_L(i))
+ #define PPC_CMPW(a, b)		EMIT(PPC_INST_CMPW | ___PPC_RA(a) |	      \
+diff --git a/arch/powerpc/net/bpf_jit32.h b/arch/powerpc/net/bpf_jit32.h
+index 6f4daacad296..ade04547703f 100644
+--- a/arch/powerpc/net/bpf_jit32.h
++++ b/arch/powerpc/net/bpf_jit32.h
+@@ -123,6 +123,10 @@ DECLARE_LOAD_FUNC(sk_load_byte_msh);
+ #define PPC_NTOHS_OFFS(r, base, i)	PPC_LHZ_OFFS(r, base, i)
+ #endif
+ 
++#define PPC_BPF_LL(r, base, i) do { PPC_LWZ(r, base, i); } while(0)
++#define PPC_BPF_STL(r, base, i) do { PPC_STW(r, base, i); } while(0)
++#define PPC_BPF_STLU(r, base, i) do { PPC_STWU(r, base, i); } while(0)
++
+ #define SEEN_DATAREF 0x10000 /* might call external helpers */
+ #define SEEN_XREG    0x20000 /* X reg is used */
+ #define SEEN_MEM     0x40000 /* SEEN_MEM+(1<<n) = use mem[n] for temporary
+diff --git a/arch/powerpc/net/bpf_jit64.h b/arch/powerpc/net/bpf_jit64.h
+index 3609be4692b3..47f441f351a6 100644
+--- a/arch/powerpc/net/bpf_jit64.h
++++ b/arch/powerpc/net/bpf_jit64.h
+@@ -68,6 +68,26 @@ static const int b2p[] = {
+ /* PPC NVR range -- update this if we ever use NVRs below r27 */
+ #define BPF_PPC_NVR_MIN		27
+ 
++/*
++ * WARNING: These can use TMP_REG_2 if the offset is not at word boundary,
++ * so ensure that it isn't in use already.
++ */
++#define PPC_BPF_LL(r, base, i) do {					      \
++				if ((i) % 4) {				      \
++					PPC_LI(b2p[TMP_REG_2], (i));	      \
++					PPC_LDX(r, base, b2p[TMP_REG_2]);     \
++				} else					      \
++					PPC_LD(r, base, i);		      \
++				} while(0)
++#define PPC_BPF_STL(r, base, i) do {					      \
++				if ((i) % 4) {				      \
++					PPC_LI(b2p[TMP_REG_2], (i));	      \
++					PPC_STDX(r, base, b2p[TMP_REG_2]);    \
++				} else					      \
++					PPC_STD(r, base, i);		      \
++				} while(0)
++#define PPC_BPF_STLU(r, base, i) do { PPC_STDU(r, base, i); } while(0)
++
+ #define SEEN_FUNC	0x1000 /* might call external helpers */
+ #define SEEN_STACK	0x2000 /* uses BPF stack */
+ #define SEEN_TAILCALL	0x4000 /* uses tail calls */
+diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
+index 7ce57657d3b8..b1a116eecae2 100644
+--- a/arch/powerpc/net/bpf_jit_comp64.c
++++ b/arch/powerpc/net/bpf_jit_comp64.c
+@@ -252,7 +252,7 @@ static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32
+ 	 * if (tail_call_cnt > MAX_TAIL_CALL_CNT)
+ 	 *   goto out;
+ 	 */
+-	PPC_LD(b2p[TMP_REG_1], 1, bpf_jit_stack_tailcallcnt(ctx));
++	PPC_BPF_LL(b2p[TMP_REG_1], 1, bpf_jit_stack_tailcallcnt(ctx));
+ 	PPC_CMPLWI(b2p[TMP_REG_1], MAX_TAIL_CALL_CNT);
+ 	PPC_BCC(COND_GT, out);
+ 
+@@ -265,7 +265,7 @@ static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32
+ 	/* prog = array->ptrs[index]; */
+ 	PPC_MULI(b2p[TMP_REG_1], b2p_index, 8);
+ 	PPC_ADD(b2p[TMP_REG_1], b2p[TMP_REG_1], b2p_bpf_array);
+-	PPC_LD(b2p[TMP_REG_1], b2p[TMP_REG_1], offsetof(struct bpf_array, ptrs));
++	PPC_BPF_LL(b2p[TMP_REG_1], b2p[TMP_REG_1], offsetof(struct bpf_array, ptrs));
+ 
+ 	/*
+ 	 * if (prog == NULL)
+@@ -275,7 +275,7 @@ static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32
+ 	PPC_BCC(COND_EQ, out);
+ 
+ 	/* goto *(prog->bpf_func + prologue_size); */
+-	PPC_LD(b2p[TMP_REG_1], b2p[TMP_REG_1], offsetof(struct bpf_prog, bpf_func));
++	PPC_BPF_LL(b2p[TMP_REG_1], b2p[TMP_REG_1], offsetof(struct bpf_prog, bpf_func));
+ #ifdef PPC64_ELF_ABI_v1
+ 	/* skip past the function descriptor */
+ 	PPC_ADDI(b2p[TMP_REG_1], b2p[TMP_REG_1],
+@@ -606,7 +606,7 @@ bpf_alu32_trunc:
+ 				 * the instructions generated will remain the
+ 				 * same across all passes
+ 				 */
+-				PPC_STD(dst_reg, 1, bpf_jit_stack_local(ctx));
++				PPC_BPF_STL(dst_reg, 1, bpf_jit_stack_local(ctx));
+ 				PPC_ADDI(b2p[TMP_REG_1], 1, bpf_jit_stack_local(ctx));
+ 				PPC_LDBRX(dst_reg, 0, b2p[TMP_REG_1]);
+ 				break;
+@@ -662,7 +662,7 @@ emit_clear:
+ 				PPC_LI32(b2p[TMP_REG_1], imm);
+ 				src_reg = b2p[TMP_REG_1];
+ 			}
+-			PPC_STD(src_reg, dst_reg, off);
++			PPC_BPF_STL(src_reg, dst_reg, off);
+ 			break;
+ 
+ 		/*
+@@ -709,7 +709,7 @@ emit_clear:
+ 			break;
+ 		/* dst = *(u64 *)(ul) (src + off) */
+ 		case BPF_LDX | BPF_MEM | BPF_DW:
+-			PPC_LD(dst_reg, src_reg, off);
++			PPC_BPF_LL(dst_reg, src_reg, off);
+ 			break;
+ 
+ 		/*
+diff --git a/arch/powerpc/platforms/pseries/pseries_energy.c b/arch/powerpc/platforms/pseries/pseries_energy.c
+index 6ed22127391b..921f12182f3e 100644
+--- a/arch/powerpc/platforms/pseries/pseries_energy.c
++++ b/arch/powerpc/platforms/pseries/pseries_energy.c
+@@ -77,18 +77,27 @@ static u32 cpu_to_drc_index(int cpu)
+ 
+ 		ret = drc.drc_index_start + (thread_index * drc.sequential_inc);
+ 	} else {
+-		const __be32 *indexes;
+-
+-		indexes = of_get_property(dn, "ibm,drc-indexes", NULL);
+-		if (indexes == NULL)
+-			goto err_of_node_put;
++		u32 nr_drc_indexes, thread_drc_index;
+ 
+ 		/*
+-		 * The first element indexes[0] is the number of drc_indexes
+-		 * returned in the list.  Hence thread_index+1 will get the
+-		 * drc_index corresponding to core number thread_index.
++		 * The first element of ibm,drc-indexes array is the
++		 * number of drc_indexes returned in the list.  Hence
++		 * thread_index+1 will get the drc_index corresponding
++		 * to core number thread_index.
+ 		 */
+-		ret = indexes[thread_index + 1];
++		rc = of_property_read_u32_index(dn, "ibm,drc-indexes",
++						0, &nr_drc_indexes);
++		if (rc)
++			goto err_of_node_put;
++
++		WARN_ON_ONCE(thread_index > nr_drc_indexes);
++		rc = of_property_read_u32_index(dn, "ibm,drc-indexes",
++						thread_index + 1,
++						&thread_drc_index);
++		if (rc)
++			goto err_of_node_put;
++
++		ret = thread_drc_index;
+ 	}
+ 
+ 	rc = 0;
+diff --git a/arch/powerpc/platforms/pseries/ras.c b/arch/powerpc/platforms/pseries/ras.c
+index d97d52772789..452dcfd7e5dd 100644
+--- a/arch/powerpc/platforms/pseries/ras.c
++++ b/arch/powerpc/platforms/pseries/ras.c
+@@ -550,6 +550,7 @@ static void pseries_print_mce_info(struct pt_regs *regs,
+ 		"UE",
+ 		"SLB",
+ 		"ERAT",
++		"Unknown",
+ 		"TLB",
+ 		"D-Cache",
+ 		"Unknown",
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 68261430fe6e..64d5a3327030 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -2221,14 +2221,8 @@ config RANDOMIZE_MEMORY_PHYSICAL_PADDING
+ 	   If unsure, leave at the default value.
+ 
+ config HOTPLUG_CPU
+-	bool "Support for hot-pluggable CPUs"
++	def_bool y
+ 	depends on SMP
+-	---help---
+-	  Say Y here to allow turning CPUs off and on. CPUs can be
+-	  controlled through /sys/devices/system/cpu.
+-	  ( Note: power management support will enable this option
+-	    automatically on SMP systems. )
+-	  Say N if you want to disable CPU hotplug.
+ 
+ config BOOTPARAM_HOTPLUG_CPU0
+ 	bool "Set default setting of cpu0_hotpluggable"
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index e40be168c73c..71d763ad2637 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -352,6 +352,7 @@ struct kvm_mmu_page {
+ };
+ 
+ struct kvm_pio_request {
++	unsigned long linear_rip;
+ 	unsigned long count;
+ 	int in;
+ 	int port;
+@@ -570,6 +571,7 @@ struct kvm_vcpu_arch {
+ 	bool tpr_access_reporting;
+ 	u64 ia32_xss;
+ 	u64 microcode_version;
++	u64 arch_capabilities;
+ 
+ 	/*
+ 	 * Paging state of the vcpu
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index d86eee07d327..a0a770816429 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -1679,12 +1679,6 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 
+ 		msr_info->data = to_vmx(vcpu)->spec_ctrl;
+ 		break;
+-	case MSR_IA32_ARCH_CAPABILITIES:
+-		if (!msr_info->host_initiated &&
+-		    !guest_cpuid_has(vcpu, X86_FEATURE_ARCH_CAPABILITIES))
+-			return 1;
+-		msr_info->data = to_vmx(vcpu)->arch_capabilities;
+-		break;
+ 	case MSR_IA32_SYSENTER_CS:
+ 		msr_info->data = vmcs_read32(GUEST_SYSENTER_CS);
+ 		break;
+@@ -1891,11 +1885,6 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 		vmx_disable_intercept_for_msr(vmx->vmcs01.msr_bitmap, MSR_IA32_PRED_CMD,
+ 					      MSR_TYPE_W);
+ 		break;
+-	case MSR_IA32_ARCH_CAPABILITIES:
+-		if (!msr_info->host_initiated)
+-			return 1;
+-		vmx->arch_capabilities = data;
+-		break;
+ 	case MSR_IA32_CR_PAT:
+ 		if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT) {
+ 			if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data))
+@@ -4083,8 +4072,6 @@ static void vmx_vcpu_setup(struct vcpu_vmx *vmx)
+ 		++vmx->nmsrs;
+ 	}
+ 
+-	vmx->arch_capabilities = kvm_get_arch_capabilities();
+-
+ 	vm_exit_controls_init(vmx, vmx_vmexit_ctrl());
+ 
+ 	/* 22.2.1, 20.8.1 */
+diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
+index 0ac0a64c7790..1abae731c3e4 100644
+--- a/arch/x86/kvm/vmx/vmx.h
++++ b/arch/x86/kvm/vmx/vmx.h
+@@ -191,7 +191,6 @@ struct vcpu_vmx {
+ 	u64		      msr_guest_kernel_gs_base;
+ #endif
+ 
+-	u64		      arch_capabilities;
+ 	u64		      spec_ctrl;
+ 
+ 	u32 vm_entry_controls_shadow;
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 2bcef72a7c40..7ee802a92bc8 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -2443,6 +2443,11 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 		if (msr_info->host_initiated)
+ 			vcpu->arch.microcode_version = data;
+ 		break;
++	case MSR_IA32_ARCH_CAPABILITIES:
++		if (!msr_info->host_initiated)
++			return 1;
++		vcpu->arch.arch_capabilities = data;
++		break;
+ 	case MSR_EFER:
+ 		return set_efer(vcpu, data);
+ 	case MSR_K7_HWCR:
+@@ -2747,6 +2752,12 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 	case MSR_IA32_UCODE_REV:
+ 		msr_info->data = vcpu->arch.microcode_version;
+ 		break;
++	case MSR_IA32_ARCH_CAPABILITIES:
++		if (!msr_info->host_initiated &&
++		    !guest_cpuid_has(vcpu, X86_FEATURE_ARCH_CAPABILITIES))
++			return 1;
++		msr_info->data = vcpu->arch.arch_capabilities;
++		break;
+ 	case MSR_IA32_TSC:
+ 		msr_info->data = kvm_scale_tsc(vcpu, rdtsc()) + vcpu->arch.tsc_offset;
+ 		break;
+@@ -6522,14 +6533,27 @@ int kvm_emulate_instruction_from_buffer(struct kvm_vcpu *vcpu,
+ }
+ EXPORT_SYMBOL_GPL(kvm_emulate_instruction_from_buffer);
+ 
++static int complete_fast_pio_out(struct kvm_vcpu *vcpu)
++{
++	vcpu->arch.pio.count = 0;
++
++	if (unlikely(!kvm_is_linear_rip(vcpu, vcpu->arch.pio.linear_rip)))
++		return 1;
++
++	return kvm_skip_emulated_instruction(vcpu);
++}
++
+ static int kvm_fast_pio_out(struct kvm_vcpu *vcpu, int size,
+ 			    unsigned short port)
+ {
+ 	unsigned long val = kvm_register_read(vcpu, VCPU_REGS_RAX);
+ 	int ret = emulator_pio_out_emulated(&vcpu->arch.emulate_ctxt,
+ 					    size, port, &val, 1);
+-	/* do not return to emulator after return from userspace */
+-	vcpu->arch.pio.count = 0;
++
++	if (!ret) {
++		vcpu->arch.pio.linear_rip = kvm_get_linear_rip(vcpu);
++		vcpu->arch.complete_userspace_io = complete_fast_pio_out;
++	}
+ 	return ret;
+ }
+ 
+@@ -6540,6 +6564,11 @@ static int complete_fast_pio_in(struct kvm_vcpu *vcpu)
+ 	/* We should only ever be called with arch.pio.count equal to 1 */
+ 	BUG_ON(vcpu->arch.pio.count != 1);
+ 
++	if (unlikely(!kvm_is_linear_rip(vcpu, vcpu->arch.pio.linear_rip))) {
++		vcpu->arch.pio.count = 0;
++		return 1;
++	}
++
+ 	/* For size less than 4 we merge, else we zero extend */
+ 	val = (vcpu->arch.pio.size < 4) ? kvm_register_read(vcpu, VCPU_REGS_RAX)
+ 					: 0;
+@@ -6552,7 +6581,7 @@ static int complete_fast_pio_in(struct kvm_vcpu *vcpu)
+ 				 vcpu->arch.pio.port, &val, 1);
+ 	kvm_register_write(vcpu, VCPU_REGS_RAX, val);
+ 
+-	return 1;
++	return kvm_skip_emulated_instruction(vcpu);
+ }
+ 
+ static int kvm_fast_pio_in(struct kvm_vcpu *vcpu, int size,
+@@ -6571,6 +6600,7 @@ static int kvm_fast_pio_in(struct kvm_vcpu *vcpu, int size,
+ 		return ret;
+ 	}
+ 
++	vcpu->arch.pio.linear_rip = kvm_get_linear_rip(vcpu);
+ 	vcpu->arch.complete_userspace_io = complete_fast_pio_in;
+ 
+ 	return 0;
+@@ -6578,16 +6608,13 @@ static int kvm_fast_pio_in(struct kvm_vcpu *vcpu, int size,
+ 
+ int kvm_fast_pio(struct kvm_vcpu *vcpu, int size, unsigned short port, int in)
+ {
+-	int ret = kvm_skip_emulated_instruction(vcpu);
++	int ret;
+ 
+-	/*
+-	 * TODO: we might be squashing a KVM_GUESTDBG_SINGLESTEP-triggered
+-	 * KVM_EXIT_DEBUG here.
+-	 */
+ 	if (in)
+-		return kvm_fast_pio_in(vcpu, size, port) && ret;
++		ret = kvm_fast_pio_in(vcpu, size, port);
+ 	else
+-		return kvm_fast_pio_out(vcpu, size, port) && ret;
++		ret = kvm_fast_pio_out(vcpu, size, port);
++	return ret && kvm_skip_emulated_instruction(vcpu);
+ }
+ EXPORT_SYMBOL_GPL(kvm_fast_pio);
+ 
+@@ -8725,6 +8752,7 @@ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm,
+ 
+ int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
+ {
++	vcpu->arch.arch_capabilities = kvm_get_arch_capabilities();
+ 	vcpu->arch.msr_platform_info = MSR_PLATFORM_INFO_CPUID_FAULT;
+ 	kvm_vcpu_mtrr_init(vcpu);
+ 	vcpu_load(vcpu);
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 9437a5eb07cf..b9283b63d116 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -1076,7 +1076,13 @@ static int blk_mq_dispatch_wake(wait_queue_entry_t *wait, unsigned mode,
+ 	hctx = container_of(wait, struct blk_mq_hw_ctx, dispatch_wait);
+ 
+ 	spin_lock(&hctx->dispatch_wait_lock);
+-	list_del_init(&wait->entry);
++	if (!list_empty(&wait->entry)) {
++		struct sbitmap_queue *sbq;
++
++		list_del_init(&wait->entry);
++		sbq = &hctx->tags->bitmap_tags;
++		atomic_dec(&sbq->ws_active);
++	}
+ 	spin_unlock(&hctx->dispatch_wait_lock);
+ 
+ 	blk_mq_run_hw_queue(hctx, true);
+@@ -1092,6 +1098,7 @@ static int blk_mq_dispatch_wake(wait_queue_entry_t *wait, unsigned mode,
+ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx *hctx,
+ 				 struct request *rq)
+ {
++	struct sbitmap_queue *sbq = &hctx->tags->bitmap_tags;
+ 	struct wait_queue_head *wq;
+ 	wait_queue_entry_t *wait;
+ 	bool ret;
+@@ -1115,7 +1122,7 @@ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx *hctx,
+ 	if (!list_empty_careful(&wait->entry))
+ 		return false;
+ 
+-	wq = &bt_wait_ptr(&hctx->tags->bitmap_tags, hctx)->wait;
++	wq = &bt_wait_ptr(sbq, hctx)->wait;
+ 
+ 	spin_lock_irq(&wq->lock);
+ 	spin_lock(&hctx->dispatch_wait_lock);
+@@ -1125,6 +1132,7 @@ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx *hctx,
+ 		return false;
+ 	}
+ 
++	atomic_inc(&sbq->ws_active);
+ 	wait->flags &= ~WQ_FLAG_EXCLUSIVE;
+ 	__add_wait_queue(wq, wait);
+ 
+@@ -1145,6 +1153,7 @@ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx *hctx,
+ 	 * someone else gets the wakeup.
+ 	 */
+ 	list_del_init(&wait->entry);
++	atomic_dec(&sbq->ws_active);
+ 	spin_unlock(&hctx->dispatch_wait_lock);
+ 	spin_unlock_irq(&wq->lock);
+ 
+diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
+index 217a782c3e55..7aa08884ed48 100644
+--- a/drivers/acpi/cppc_acpi.c
++++ b/drivers/acpi/cppc_acpi.c
+@@ -1108,8 +1108,13 @@ int cppc_get_perf_caps(int cpunum, struct cppc_perf_caps *perf_caps)
+ 	cpc_read(cpunum, nominal_reg, &nom);
+ 	perf_caps->nominal_perf = nom;
+ 
+-	cpc_read(cpunum, guaranteed_reg, &guaranteed);
+-	perf_caps->guaranteed_perf = guaranteed;
++	if (guaranteed_reg->type != ACPI_TYPE_BUFFER  ||
++	    IS_NULL_REG(&guaranteed_reg->cpc_entry.reg)) {
++		perf_caps->guaranteed_perf = 0;
++	} else {
++		cpc_read(cpunum, guaranteed_reg, &guaranteed);
++		perf_caps->guaranteed_perf = guaranteed;
++	}
+ 
+ 	cpc_read(cpunum, lowest_non_linear_reg, &min_nonlinear);
+ 	perf_caps->lowest_nonlinear_perf = min_nonlinear;
+diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
+index 04ca65912638..684854d3b0ad 100644
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -290,18 +290,8 @@ static ssize_t idle_store(struct device *dev,
+ 	struct zram *zram = dev_to_zram(dev);
+ 	unsigned long nr_pages = zram->disksize >> PAGE_SHIFT;
+ 	int index;
+-	char mode_buf[8];
+-	ssize_t sz;
+ 
+-	sz = strscpy(mode_buf, buf, sizeof(mode_buf));
+-	if (sz <= 0)
+-		return -EINVAL;
+-
+-	/* ignore trailing new line */
+-	if (mode_buf[sz - 1] == '\n')
+-		mode_buf[sz - 1] = 0x00;
+-
+-	if (strcmp(mode_buf, "all"))
++	if (!sysfs_streq(buf, "all"))
+ 		return -EINVAL;
+ 
+ 	down_read(&zram->init_lock);
+@@ -635,25 +625,15 @@ static ssize_t writeback_store(struct device *dev,
+ 	struct bio bio;
+ 	struct bio_vec bio_vec;
+ 	struct page *page;
+-	ssize_t ret, sz;
+-	char mode_buf[8];
+-	int mode = -1;
++	ssize_t ret;
++	int mode;
+ 	unsigned long blk_idx = 0;
+ 
+-	sz = strscpy(mode_buf, buf, sizeof(mode_buf));
+-	if (sz <= 0)
+-		return -EINVAL;
+-
+-	/* ignore trailing newline */
+-	if (mode_buf[sz - 1] == '\n')
+-		mode_buf[sz - 1] = 0x00;
+-
+-	if (!strcmp(mode_buf, "idle"))
++	if (sysfs_streq(buf, "idle"))
+ 		mode = IDLE_WRITEBACK;
+-	else if (!strcmp(mode_buf, "huge"))
++	else if (sysfs_streq(buf, "huge"))
+ 		mode = HUGE_WRITEBACK;
+-
+-	if (mode == -1)
++	else
+ 		return -EINVAL;
+ 
+ 	down_read(&zram->init_lock);
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index 5ab6a4fe93aa..a579ca4552df 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -383,7 +383,10 @@ static int intel_pstate_get_cppc_guranteed(int cpu)
+ 	if (ret)
+ 		return ret;
+ 
+-	return cppc_perf.guaranteed_perf;
++	if (cppc_perf.guaranteed_perf)
++		return cppc_perf.guaranteed_perf;
++
++	return cppc_perf.nominal_perf;
+ }
+ 
+ #else /* CONFIG_ACPI_CPPC_LIB */
+diff --git a/drivers/cpufreq/scpi-cpufreq.c b/drivers/cpufreq/scpi-cpufreq.c
+index 99449738faa4..632ccf82c5d3 100644
+--- a/drivers/cpufreq/scpi-cpufreq.c
++++ b/drivers/cpufreq/scpi-cpufreq.c
+@@ -189,8 +189,8 @@ static int scpi_cpufreq_exit(struct cpufreq_policy *policy)
+ 	cpufreq_cooling_unregister(priv->cdev);
+ 	clk_put(priv->clk);
+ 	dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table);
+-	kfree(priv);
+ 	dev_pm_opp_remove_all_dynamic(priv->cpu_dev);
++	kfree(priv);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpio/gpio-adnp.c b/drivers/gpio/gpio-adnp.c
+index 91b90c0cea73..12acdac85820 100644
+--- a/drivers/gpio/gpio-adnp.c
++++ b/drivers/gpio/gpio-adnp.c
+@@ -132,8 +132,10 @@ static int adnp_gpio_direction_input(struct gpio_chip *chip, unsigned offset)
+ 	if (err < 0)
+ 		goto out;
+ 
+-	if (err & BIT(pos))
+-		err = -EACCES;
++	if (value & BIT(pos)) {
++		err = -EPERM;
++		goto out;
++	}
+ 
+ 	err = 0;
+ 
+diff --git a/drivers/gpio/gpio-exar.c b/drivers/gpio/gpio-exar.c
+index 0ecd2369c2ca..a09d2f9ebacc 100644
+--- a/drivers/gpio/gpio-exar.c
++++ b/drivers/gpio/gpio-exar.c
+@@ -148,6 +148,8 @@ static int gpio_exar_probe(struct platform_device *pdev)
+ 	mutex_init(&exar_gpio->lock);
+ 
+ 	index = ida_simple_get(&ida_index, 0, 0, GFP_KERNEL);
++	if (index < 0)
++		goto err_destroy;
+ 
+ 	sprintf(exar_gpio->name, "exar_gpio%d", index);
+ 	exar_gpio->gpio_chip.label = exar_gpio->name;
+diff --git a/drivers/gpu/drm/i915/gvt/cmd_parser.c b/drivers/gpu/drm/i915/gvt/cmd_parser.c
+index 77ae634eb11c..bd95fd6b4ac8 100644
+--- a/drivers/gpu/drm/i915/gvt/cmd_parser.c
++++ b/drivers/gpu/drm/i915/gvt/cmd_parser.c
+@@ -1446,7 +1446,7 @@ static inline int cmd_address_audit(struct parser_exec_state *s,
+ 	}
+ 
+ 	if (index_mode)	{
+-		if (guest_gma >= I915_GTT_PAGE_SIZE / sizeof(u64)) {
++		if (guest_gma >= I915_GTT_PAGE_SIZE) {
+ 			ret = -EFAULT;
+ 			goto err;
+ 		}
+diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
+index b1c31967194b..489c1e656ff6 100644
+--- a/drivers/gpu/drm/i915/i915_drv.h
++++ b/drivers/gpu/drm/i915/i915_drv.h
+@@ -2293,7 +2293,8 @@ intel_info(const struct drm_i915_private *dev_priv)
+ 				 INTEL_DEVID(dev_priv) == 0x5915 || \
+ 				 INTEL_DEVID(dev_priv) == 0x591E)
+ #define IS_AML_ULX(dev_priv)	(INTEL_DEVID(dev_priv) == 0x591C || \
+-				 INTEL_DEVID(dev_priv) == 0x87C0)
++				 INTEL_DEVID(dev_priv) == 0x87C0 || \
++				 INTEL_DEVID(dev_priv) == 0x87CA)
+ #define IS_SKL_GT2(dev_priv)	(IS_SKYLAKE(dev_priv) && \
+ 				 (dev_priv)->info.gt == 2)
+ #define IS_SKL_GT3(dev_priv)	(IS_SKYLAKE(dev_priv) && \
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index 067054cf4a86..60bed3f27775 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -9205,7 +9205,7 @@ enum skl_power_gate {
+ #define TRANS_DDI_FUNC_CTL2(tran)	_MMIO_TRANS2(tran, \
+ 						     _TRANS_DDI_FUNC_CTL2_A)
+ #define  PORT_SYNC_MODE_ENABLE			(1 << 4)
+-#define  PORT_SYNC_MODE_MASTER_SELECT(x)	((x) < 0)
++#define  PORT_SYNC_MODE_MASTER_SELECT(x)	((x) << 0)
+ #define  PORT_SYNC_MODE_MASTER_SELECT_MASK	(0x7 << 0)
+ #define  PORT_SYNC_MODE_MASTER_SELECT_SHIFT	0
+ 
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+index fb70fb486fbf..cdbb47566cac 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+@@ -511,6 +511,18 @@ static void vop_core_clks_disable(struct vop *vop)
+ 	clk_disable(vop->hclk);
+ }
+ 
++static void vop_win_disable(struct vop *vop, const struct vop_win_data *win)
++{
++	if (win->phy->scl && win->phy->scl->ext) {
++		VOP_SCL_SET_EXT(vop, win, yrgb_hor_scl_mode, SCALE_NONE);
++		VOP_SCL_SET_EXT(vop, win, yrgb_ver_scl_mode, SCALE_NONE);
++		VOP_SCL_SET_EXT(vop, win, cbcr_hor_scl_mode, SCALE_NONE);
++		VOP_SCL_SET_EXT(vop, win, cbcr_ver_scl_mode, SCALE_NONE);
++	}
++
++	VOP_WIN_SET(vop, win, enable, 0);
++}
++
+ static int vop_enable(struct drm_crtc *crtc)
+ {
+ 	struct vop *vop = to_vop(crtc);
+@@ -556,7 +568,7 @@ static int vop_enable(struct drm_crtc *crtc)
+ 		struct vop_win *vop_win = &vop->win[i];
+ 		const struct vop_win_data *win = vop_win->data;
+ 
+-		VOP_WIN_SET(vop, win, enable, 0);
++		vop_win_disable(vop, win);
+ 	}
+ 	spin_unlock(&vop->reg_lock);
+ 
+@@ -700,7 +712,7 @@ static void vop_plane_atomic_disable(struct drm_plane *plane,
+ 
+ 	spin_lock(&vop->reg_lock);
+ 
+-	VOP_WIN_SET(vop, win, enable, 0);
++	vop_win_disable(vop, win);
+ 
+ 	spin_unlock(&vop->reg_lock);
+ }
+@@ -1476,7 +1488,7 @@ static int vop_initial(struct vop *vop)
+ 		int channel = i * 2 + 1;
+ 
+ 		VOP_WIN_SET(vop, win, channel, (channel + 1) << 4 | channel);
+-		VOP_WIN_SET(vop, win, enable, 0);
++		vop_win_disable(vop, win);
+ 		VOP_WIN_SET(vop, win, gate, 1);
+ 	}
+ 
+diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
+index 5930facd6d2d..11a8f99ba18c 100644
+--- a/drivers/gpu/drm/vgem/vgem_drv.c
++++ b/drivers/gpu/drm/vgem/vgem_drv.c
+@@ -191,13 +191,9 @@ static struct drm_gem_object *vgem_gem_create(struct drm_device *dev,
+ 	ret = drm_gem_handle_create(file, &obj->base, handle);
+ 	drm_gem_object_put_unlocked(&obj->base);
+ 	if (ret)
+-		goto err;
++		return ERR_PTR(ret);
+ 
+ 	return &obj->base;
+-
+-err:
+-	__vgem_gem_destroy(obj);
+-	return ERR_PTR(ret);
+ }
+ 
+ static int vgem_gem_dumb_create(struct drm_file *file, struct drm_device *dev,
+diff --git a/drivers/gpu/drm/vkms/vkms_gem.c b/drivers/gpu/drm/vkms/vkms_gem.c
+index 138b0bb325cf..69048e73377d 100644
+--- a/drivers/gpu/drm/vkms/vkms_gem.c
++++ b/drivers/gpu/drm/vkms/vkms_gem.c
+@@ -111,11 +111,8 @@ struct drm_gem_object *vkms_gem_create(struct drm_device *dev,
+ 
+ 	ret = drm_gem_handle_create(file, &obj->gem, handle);
+ 	drm_gem_object_put_unlocked(&obj->gem);
+-	if (ret) {
+-		drm_gem_object_release(&obj->gem);
+-		kfree(obj);
++	if (ret)
+ 		return ERR_PTR(ret);
+-	}
+ 
+ 	return &obj->gem;
+ }
+diff --git a/drivers/iommu/io-pgtable-arm-v7s.c b/drivers/iommu/io-pgtable-arm-v7s.c
+index cec29bf45c9b..1b9e40a203e0 100644
+--- a/drivers/iommu/io-pgtable-arm-v7s.c
++++ b/drivers/iommu/io-pgtable-arm-v7s.c
+@@ -161,6 +161,14 @@
+ 
+ #define ARM_V7S_TCR_PD1			BIT(5)
+ 
++#ifdef CONFIG_ZONE_DMA32
++#define ARM_V7S_TABLE_GFP_DMA GFP_DMA32
++#define ARM_V7S_TABLE_SLAB_FLAGS SLAB_CACHE_DMA32
++#else
++#define ARM_V7S_TABLE_GFP_DMA GFP_DMA
++#define ARM_V7S_TABLE_SLAB_FLAGS SLAB_CACHE_DMA
++#endif
++
+ typedef u32 arm_v7s_iopte;
+ 
+ static bool selftest_running;
+@@ -198,13 +206,16 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
+ 	void *table = NULL;
+ 
+ 	if (lvl == 1)
+-		table = (void *)__get_dma_pages(__GFP_ZERO, get_order(size));
++		table = (void *)__get_free_pages(
++			__GFP_ZERO | ARM_V7S_TABLE_GFP_DMA, get_order(size));
+ 	else if (lvl == 2)
+-		table = kmem_cache_zalloc(data->l2_tables, gfp | GFP_DMA);
++		table = kmem_cache_zalloc(data->l2_tables, gfp);
+ 	phys = virt_to_phys(table);
+-	if (phys != (arm_v7s_iopte)phys)
++	if (phys != (arm_v7s_iopte)phys) {
+ 		/* Doesn't fit in PTE */
++		dev_err(dev, "Page table does not fit in PTE: %pa", &phys);
+ 		goto out_free;
++	}
+ 	if (table && !(cfg->quirks & IO_PGTABLE_QUIRK_NO_DMA)) {
+ 		dma = dma_map_single(dev, table, size, DMA_TO_DEVICE);
+ 		if (dma_mapping_error(dev, dma))
+@@ -733,7 +744,7 @@ static struct io_pgtable *arm_v7s_alloc_pgtable(struct io_pgtable_cfg *cfg,
+ 	data->l2_tables = kmem_cache_create("io-pgtable_armv7s_l2",
+ 					    ARM_V7S_TABLE_SIZE(2),
+ 					    ARM_V7S_TABLE_SIZE(2),
+-					    SLAB_CACHE_DMA, NULL);
++					    ARM_V7S_TABLE_SLAB_FLAGS, NULL);
+ 	if (!data->l2_tables)
+ 		goto out_free_data;
+ 
+diff --git a/drivers/isdn/hardware/mISDN/hfcmulti.c b/drivers/isdn/hardware/mISDN/hfcmulti.c
+index 4d85645c87f7..0928fd1f0e0c 100644
+--- a/drivers/isdn/hardware/mISDN/hfcmulti.c
++++ b/drivers/isdn/hardware/mISDN/hfcmulti.c
+@@ -4365,7 +4365,8 @@ setup_pci(struct hfc_multi *hc, struct pci_dev *pdev,
+ 	if (m->clock2)
+ 		test_and_set_bit(HFC_CHIP_CLOCK2, &hc->chip);
+ 
+-	if (ent->device == 0xB410) {
++	if (ent->vendor == PCI_VENDOR_ID_DIGIUM &&
++	    ent->device == PCI_DEVICE_ID_DIGIUM_HFC4S) {
+ 		test_and_set_bit(HFC_CHIP_B410P, &hc->chip);
+ 		test_and_set_bit(HFC_CHIP_PCM_MASTER, &hc->chip);
+ 		test_and_clear_bit(HFC_CHIP_PCM_SLAVE, &hc->chip);
+diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
+index 21bf8ac78380..390e896dadc7 100644
+--- a/drivers/net/Kconfig
++++ b/drivers/net/Kconfig
+@@ -213,8 +213,8 @@ config GENEVE
+ 
+ config GTP
+ 	tristate "GPRS Tunneling Protocol datapath (GTP-U)"
+-	depends on INET && NET_UDP_TUNNEL
+-	select NET_IP_TUNNEL
++	depends on INET
++	select NET_UDP_TUNNEL
+ 	---help---
+ 	  This allows one to create gtp virtual interfaces that provide
+ 	  the GPRS Tunneling Protocol datapath (GTP-U). This tunneling protocol
+diff --git a/drivers/net/dsa/mv88e6xxx/port.c b/drivers/net/dsa/mv88e6xxx/port.c
+index 5e921bb6c214..41eee62fed25 100644
+--- a/drivers/net/dsa/mv88e6xxx/port.c
++++ b/drivers/net/dsa/mv88e6xxx/port.c
+@@ -427,18 +427,22 @@ int mv88e6390x_port_set_cmode(struct mv88e6xxx_chip *chip, int port,
+ 		return 0;
+ 
+ 	lane = mv88e6390x_serdes_get_lane(chip, port);
+-	if (lane < 0)
++	if (lane < 0 && lane != -ENODEV)
+ 		return lane;
+ 
+-	if (chip->ports[port].serdes_irq) {
+-		err = mv88e6390_serdes_irq_disable(chip, port, lane);
++	if (lane >= 0) {
++		if (chip->ports[port].serdes_irq) {
++			err = mv88e6390_serdes_irq_disable(chip, port, lane);
++			if (err)
++				return err;
++		}
++
++		err = mv88e6390x_serdes_power(chip, port, false);
+ 		if (err)
+ 			return err;
+ 	}
+ 
+-	err = mv88e6390x_serdes_power(chip, port, false);
+-	if (err)
+-		return err;
++	chip->ports[port].cmode = 0;
+ 
+ 	if (cmode) {
+ 		err = mv88e6xxx_port_read(chip, port, MV88E6XXX_PORT_STS, &reg);
+@@ -452,6 +456,12 @@ int mv88e6390x_port_set_cmode(struct mv88e6xxx_chip *chip, int port,
+ 		if (err)
+ 			return err;
+ 
++		chip->ports[port].cmode = cmode;
++
++		lane = mv88e6390x_serdes_get_lane(chip, port);
++		if (lane < 0)
++			return lane;
++
+ 		err = mv88e6390x_serdes_power(chip, port, true);
+ 		if (err)
+ 			return err;
+@@ -463,8 +473,6 @@ int mv88e6390x_port_set_cmode(struct mv88e6xxx_chip *chip, int port,
+ 		}
+ 	}
+ 
+-	chip->ports[port].cmode = cmode;
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/dsa/qca8k.c b/drivers/net/dsa/qca8k.c
+index 7e97e620bd44..a26850c888cf 100644
+--- a/drivers/net/dsa/qca8k.c
++++ b/drivers/net/dsa/qca8k.c
+@@ -620,22 +620,6 @@ qca8k_adjust_link(struct dsa_switch *ds, int port, struct phy_device *phy)
+ 	qca8k_port_set_status(priv, port, 1);
+ }
+ 
+-static int
+-qca8k_phy_read(struct dsa_switch *ds, int phy, int regnum)
+-{
+-	struct qca8k_priv *priv = (struct qca8k_priv *)ds->priv;
+-
+-	return mdiobus_read(priv->bus, phy, regnum);
+-}
+-
+-static int
+-qca8k_phy_write(struct dsa_switch *ds, int phy, int regnum, u16 val)
+-{
+-	struct qca8k_priv *priv = (struct qca8k_priv *)ds->priv;
+-
+-	return mdiobus_write(priv->bus, phy, regnum, val);
+-}
+-
+ static void
+ qca8k_get_strings(struct dsa_switch *ds, int port, u32 stringset, uint8_t *data)
+ {
+@@ -876,8 +860,6 @@ static const struct dsa_switch_ops qca8k_switch_ops = {
+ 	.setup			= qca8k_setup,
+ 	.adjust_link            = qca8k_adjust_link,
+ 	.get_strings		= qca8k_get_strings,
+-	.phy_read		= qca8k_phy_read,
+-	.phy_write		= qca8k_phy_write,
+ 	.get_ethtool_stats	= qca8k_get_ethtool_stats,
+ 	.get_sset_count		= qca8k_get_sset_count,
+ 	.get_mac_eee		= qca8k_get_mac_eee,
+diff --git a/drivers/net/ethernet/8390/mac8390.c b/drivers/net/ethernet/8390/mac8390.c
+index 342ae08ec3c2..d60a86aa8aa8 100644
+--- a/drivers/net/ethernet/8390/mac8390.c
++++ b/drivers/net/ethernet/8390/mac8390.c
+@@ -153,8 +153,6 @@ static void dayna_block_input(struct net_device *dev, int count,
+ static void dayna_block_output(struct net_device *dev, int count,
+ 			       const unsigned char *buf, int start_page);
+ 
+-#define memcmp_withio(a, b, c)	memcmp((a), (void *)(b), (c))
+-
+ /* Slow Sane (16-bit chunk memory read/write) Cabletron uses this */
+ static void slow_sane_get_8390_hdr(struct net_device *dev,
+ 				   struct e8390_pkt_hdr *hdr, int ring_page);
+@@ -233,19 +231,26 @@ static enum mac8390_type mac8390_ident(struct nubus_rsrc *fres)
+ 
+ static enum mac8390_access mac8390_testio(unsigned long membase)
+ {
+-	unsigned long outdata = 0xA5A0B5B0;
+-	unsigned long indata =  0x00000000;
++	u32 outdata = 0xA5A0B5B0;
++	u32 indata = 0;
++
+ 	/* Try writing 32 bits */
+-	memcpy_toio((void __iomem *)membase, &outdata, 4);
+-	/* Now compare them */
+-	if (memcmp_withio(&outdata, membase, 4) == 0)
++	nubus_writel(outdata, membase);
++	/* Now read it back */
++	indata = nubus_readl(membase);
++	if (outdata == indata)
+ 		return ACCESS_32;
++
++	outdata = 0xC5C0D5D0;
++	indata = 0;
++
+ 	/* Write 16 bit output */
+ 	word_memcpy_tocard(membase, &outdata, 4);
+ 	/* Now read it back */
+ 	word_memcpy_fromcard(&indata, membase, 4);
+ 	if (outdata == indata)
+ 		return ACCESS_16;
++
+ 	return ACCESS_UNKNOWN;
+ }
+ 
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
+index 74550ccc7a20..e2ffb159cbe2 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
+@@ -186,11 +186,12 @@ static void aq_rx_checksum(struct aq_ring_s *self,
+ 	}
+ 	if (buff->is_ip_cso) {
+ 		__skb_incr_checksum_unnecessary(skb);
+-		if (buff->is_udp_cso || buff->is_tcp_cso)
+-			__skb_incr_checksum_unnecessary(skb);
+ 	} else {
+ 		skb->ip_summed = CHECKSUM_NONE;
+ 	}
++
++	if (buff->is_udp_cso || buff->is_tcp_cso)
++		__skb_incr_checksum_unnecessary(skb);
+ }
+ 
+ #define AQ_SKB_ALIGN SKB_DATA_ALIGN(sizeof(struct skb_shared_info))
+diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_queues.c b/drivers/net/ethernet/cavium/thunder/nicvf_queues.c
+index 5b4d3badcb73..e246f9733bb8 100644
+--- a/drivers/net/ethernet/cavium/thunder/nicvf_queues.c
++++ b/drivers/net/ethernet/cavium/thunder/nicvf_queues.c
+@@ -105,20 +105,19 @@ static inline struct pgcache *nicvf_alloc_page(struct nicvf *nic,
+ 	/* Check if page can be recycled */
+ 	if (page) {
+ 		ref_count = page_ref_count(page);
+-		/* Check if this page has been used once i.e 'put_page'
+-		 * called after packet transmission i.e internal ref_count
+-		 * and page's ref_count are equal i.e page can be recycled.
++		/* This page can be recycled if internal ref_count and page's
++		 * ref_count are equal, indicating that the page has been used
++		 * once for packet transmission. For non-XDP mode, internal
++		 * ref_count is always '1'.
+ 		 */
+-		if (rbdr->is_xdp && (ref_count == pgcache->ref_count))
+-			pgcache->ref_count--;
+-		else
+-			page = NULL;
+-
+-		/* In non-XDP mode, page's ref_count needs to be '1' for it
+-		 * to be recycled.
+-		 */
+-		if (!rbdr->is_xdp && (ref_count != 1))
++		if (rbdr->is_xdp) {
++			if (ref_count == pgcache->ref_count)
++				pgcache->ref_count--;
++			else
++				page = NULL;
++		} else if (ref_count != 1) {
+ 			page = NULL;
++		}
+ 	}
+ 
+ 	if (!page) {
+@@ -365,11 +364,10 @@ static void nicvf_free_rbdr(struct nicvf *nic, struct rbdr *rbdr)
+ 	while (head < rbdr->pgcnt) {
+ 		pgcache = &rbdr->pgcache[head];
+ 		if (pgcache->page && page_ref_count(pgcache->page) != 0) {
+-			if (!rbdr->is_xdp) {
+-				put_page(pgcache->page);
+-				continue;
++			if (rbdr->is_xdp) {
++				page_ref_sub(pgcache->page,
++					     pgcache->ref_count - 1);
+ 			}
+-			page_ref_sub(pgcache->page, pgcache->ref_count - 1);
+ 			put_page(pgcache->page);
+ 		}
+ 		head++;
+diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
+index 6e36b88ca7c9..f55d177ae894 100644
+--- a/drivers/net/ethernet/realtek/r8169.c
++++ b/drivers/net/ethernet/realtek/r8169.c
+@@ -6435,7 +6435,7 @@ static irqreturn_t rtl8169_interrupt(int irq, void *dev_instance)
+ 		set_bit(RTL_FLAG_TASK_RESET_PENDING, tp->wk.flags);
+ 	}
+ 
+-	if (status & RTL_EVENT_NAPI) {
++	if (status & (RTL_EVENT_NAPI | LinkChg)) {
+ 		rtl_irq_disable(tp);
+ 		napi_schedule_irqoff(&tp->napi);
+ 	}
+diff --git a/drivers/net/ethernet/stmicro/stmmac/ring_mode.c b/drivers/net/ethernet/stmicro/stmmac/ring_mode.c
+index d8c5bc412219..c0c75c111abb 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/ring_mode.c
++++ b/drivers/net/ethernet/stmicro/stmmac/ring_mode.c
+@@ -111,10 +111,11 @@ static unsigned int is_jumbo_frm(int len, int enh_desc)
+ 
+ static void refill_desc3(void *priv_ptr, struct dma_desc *p)
+ {
+-	struct stmmac_priv *priv = (struct stmmac_priv *)priv_ptr;
++	struct stmmac_rx_queue *rx_q = priv_ptr;
++	struct stmmac_priv *priv = rx_q->priv_data;
+ 
+ 	/* Fill DES3 in case of RING mode */
+-	if (priv->dma_buf_sz >= BUF_SIZE_8KiB)
++	if (priv->dma_buf_sz == BUF_SIZE_16KiB)
+ 		p->des3 = cpu_to_le32(le32_to_cpu(p->des2) + BUF_SIZE_8KiB);
+ }
+ 
+diff --git a/drivers/net/phy/meson-gxl.c b/drivers/net/phy/meson-gxl.c
+index 3ddaf9595697..68af4c75ffb3 100644
+--- a/drivers/net/phy/meson-gxl.c
++++ b/drivers/net/phy/meson-gxl.c
+@@ -211,6 +211,7 @@ static int meson_gxl_ack_interrupt(struct phy_device *phydev)
+ static int meson_gxl_config_intr(struct phy_device *phydev)
+ {
+ 	u16 val;
++	int ret;
+ 
+ 	if (phydev->interrupts == PHY_INTERRUPT_ENABLED) {
+ 		val = INTSRC_ANEG_PR
+@@ -223,6 +224,11 @@ static int meson_gxl_config_intr(struct phy_device *phydev)
+ 		val = 0;
+ 	}
+ 
++	/* Ack any pending IRQ */
++	ret = meson_gxl_ack_interrupt(phydev);
++	if (ret)
++		return ret;
++
+ 	return phy_write(phydev, INTSRC_MASK, val);
+ }
+ 
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index 46c86725a693..739434fe04fa 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -1827,7 +1827,7 @@ int genphy_soft_reset(struct phy_device *phydev)
+ {
+ 	int ret;
+ 
+-	ret = phy_write(phydev, MII_BMCR, BMCR_RESET);
++	ret = phy_set_bits(phydev, MII_BMCR, BMCR_RESET);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index 53f4f37b0ffd..448d5439ff6a 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -1763,9 +1763,6 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
+ 	int skb_xdp = 1;
+ 	bool frags = tun_napi_frags_enabled(tfile);
+ 
+-	if (!(tun->dev->flags & IFF_UP))
+-		return -EIO;
+-
+ 	if (!(tun->flags & IFF_NO_PI)) {
+ 		if (len < sizeof(pi))
+ 			return -EINVAL;
+@@ -1867,6 +1864,8 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
+ 			err = skb_copy_datagram_from_iter(skb, 0, from, len);
+ 
+ 		if (err) {
++			err = -EFAULT;
++drop:
+ 			this_cpu_inc(tun->pcpu_stats->rx_dropped);
+ 			kfree_skb(skb);
+ 			if (frags) {
+@@ -1874,7 +1873,7 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
+ 				mutex_unlock(&tfile->napi_mutex);
+ 			}
+ 
+-			return -EFAULT;
++			return err;
+ 		}
+ 	}
+ 
+@@ -1958,6 +1957,13 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
+ 	    !tfile->detached)
+ 		rxhash = __skb_get_hash_symmetric(skb);
+ 
++	rcu_read_lock();
++	if (unlikely(!(tun->dev->flags & IFF_UP))) {
++		err = -EIO;
++		rcu_read_unlock();
++		goto drop;
++	}
++
+ 	if (frags) {
+ 		/* Exercise flow dissector code path. */
+ 		u32 headlen = eth_get_headlen(skb->data, skb_headlen(skb));
+@@ -1965,6 +1971,7 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
+ 		if (unlikely(headlen > skb_headlen(skb))) {
+ 			this_cpu_inc(tun->pcpu_stats->rx_dropped);
+ 			napi_free_frags(&tfile->napi);
++			rcu_read_unlock();
+ 			mutex_unlock(&tfile->napi_mutex);
+ 			WARN_ON(1);
+ 			return -ENOMEM;
+@@ -1992,6 +1999,7 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
+ 	} else {
+ 		netif_rx_ni(skb);
+ 	}
++	rcu_read_unlock();
+ 
+ 	stats = get_cpu_ptr(tun->pcpu_stats);
+ 	u64_stats_update_begin(&stats->syncp);
+diff --git a/drivers/net/usb/aqc111.c b/drivers/net/usb/aqc111.c
+index 820a2fe7d027..aff995be2a31 100644
+--- a/drivers/net/usb/aqc111.c
++++ b/drivers/net/usb/aqc111.c
+@@ -1301,6 +1301,20 @@ static const struct driver_info trendnet_info = {
+ 	.tx_fixup	= aqc111_tx_fixup,
+ };
+ 
++static const struct driver_info qnap_info = {
++	.description	= "QNAP QNA-UC5G1T USB to 5GbE Adapter",
++	.bind		= aqc111_bind,
++	.unbind		= aqc111_unbind,
++	.status		= aqc111_status,
++	.link_reset	= aqc111_link_reset,
++	.reset		= aqc111_reset,
++	.stop		= aqc111_stop,
++	.flags		= FLAG_ETHER | FLAG_FRAMING_AX |
++			  FLAG_AVOID_UNLINK_URBS | FLAG_MULTI_PACKET,
++	.rx_fixup	= aqc111_rx_fixup,
++	.tx_fixup	= aqc111_tx_fixup,
++};
++
+ static int aqc111_suspend(struct usb_interface *intf, pm_message_t message)
+ {
+ 	struct usbnet *dev = usb_get_intfdata(intf);
+@@ -1455,6 +1469,7 @@ static const struct usb_device_id products[] = {
+ 	{AQC111_USB_ETH_DEV(0x0b95, 0x2790, asix111_info)},
+ 	{AQC111_USB_ETH_DEV(0x0b95, 0x2791, asix112_info)},
+ 	{AQC111_USB_ETH_DEV(0x20f4, 0xe05a, trendnet_info)},
++	{AQC111_USB_ETH_DEV(0x1c04, 0x0015, qnap_info)},
+ 	{ },/* END */
+ };
+ MODULE_DEVICE_TABLE(usb, products);
+diff --git a/drivers/net/usb/cdc_ether.c b/drivers/net/usb/cdc_ether.c
+index 5512a1038721..3e9b2c319e45 100644
+--- a/drivers/net/usb/cdc_ether.c
++++ b/drivers/net/usb/cdc_ether.c
+@@ -851,6 +851,14 @@ static const struct usb_device_id	products[] = {
+ 	.driver_info = 0,
+ },
+ 
++/* QNAP QNA-UC5G1T USB to 5GbE Adapter (based on AQC111U) */
++{
++	USB_DEVICE_AND_INTERFACE_INFO(0x1c04, 0x0015, USB_CLASS_COMM,
++				      USB_CDC_SUBCLASS_ETHERNET,
++				      USB_CDC_PROTO_NONE),
++	.driver_info = 0,
++},
++
+ /* WHITELIST!!!
+  *
+  * CDC Ether uses two interfaces, not necessarily consecutive.
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index 7c1430ed0244..6d1a1abbed27 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -1273,6 +1273,7 @@ static void vrf_setup(struct net_device *dev)
+ 
+ 	/* default to no qdisc; user can add if desired */
+ 	dev->priv_flags |= IFF_NO_QUEUE;
++	dev->priv_flags |= IFF_NO_RX_HANDLER;
+ 
+ 	dev->min_mtu = 0;
+ 	dev->max_mtu = 0;
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+index d6fb6a89f9b3..5006daed2e96 100644
+--- a/drivers/net/vxlan.c
++++ b/drivers/net/vxlan.c
+@@ -4184,10 +4184,8 @@ static void vxlan_destroy_tunnels(struct net *net, struct list_head *head)
+ 		/* If vxlan->dev is in the same netns, it has already been added
+ 		 * to the list by the previous loop.
+ 		 */
+-		if (!net_eq(dev_net(vxlan->dev), net)) {
+-			gro_cells_destroy(&vxlan->gro_cells);
++		if (!net_eq(dev_net(vxlan->dev), net))
+ 			unregister_netdevice_queue(vxlan->dev, head);
+-		}
+ 	}
+ 
+ 	for (h = 0; h < PORT_HASH_SIZE; ++h)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wireless/mediatek/mt76/mt76.h
+index 5cd508a68609..6d29ba4046c3 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76.h
+@@ -713,6 +713,19 @@ static inline bool mt76u_check_sg(struct mt76_dev *dev)
+ 		 udev->speed == USB_SPEED_WIRELESS));
+ }
+ 
++static inline int
++mt76u_bulk_msg(struct mt76_dev *dev, void *data, int len, int timeout)
++{
++	struct usb_interface *intf = to_usb_interface(dev->dev);
++	struct usb_device *udev = interface_to_usbdev(intf);
++	struct mt76_usb *usb = &dev->usb;
++	unsigned int pipe;
++	int sent;
++
++	pipe = usb_sndbulkpipe(udev, usb->out_ep[MT_EP_OUT_INBAND_CMD]);
++	return usb_bulk_msg(udev, pipe, data, len, &sent, timeout);
++}
++
+ int mt76u_vendor_request(struct mt76_dev *dev, u8 req,
+ 			 u8 req_type, u16 val, u16 offset,
+ 			 void *buf, size_t len);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_usb_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76x02_usb_mcu.c
+index 6db789f90269..2ca393e267af 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x02_usb_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x02_usb_mcu.c
+@@ -121,18 +121,14 @@ static int
+ __mt76x02u_mcu_send_msg(struct mt76_dev *dev, struct sk_buff *skb,
+ 			int cmd, bool wait_resp)
+ {
+-	struct usb_interface *intf = to_usb_interface(dev->dev);
+-	struct usb_device *udev = interface_to_usbdev(intf);
+ 	struct mt76_usb *usb = &dev->usb;
+-	unsigned int pipe;
+-	int ret, sent;
++	int ret;
+ 	u8 seq = 0;
+ 	u32 info;
+ 
+ 	if (test_bit(MT76_REMOVED, &dev->state))
+ 		return 0;
+ 
+-	pipe = usb_sndbulkpipe(udev, usb->out_ep[MT_EP_OUT_INBAND_CMD]);
+ 	if (wait_resp) {
+ 		seq = ++usb->mcu.msg_seq & 0xf;
+ 		if (!seq)
+@@ -146,7 +142,7 @@ __mt76x02u_mcu_send_msg(struct mt76_dev *dev, struct sk_buff *skb,
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = usb_bulk_msg(udev, pipe, skb->data, skb->len, &sent, 500);
++	ret = mt76u_bulk_msg(dev, skb->data, skb->len, 500);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -268,14 +264,12 @@ void mt76x02u_mcu_fw_reset(struct mt76x02_dev *dev)
+ EXPORT_SYMBOL_GPL(mt76x02u_mcu_fw_reset);
+ 
+ static int
+-__mt76x02u_mcu_fw_send_data(struct mt76x02_dev *dev, struct mt76u_buf *buf,
++__mt76x02u_mcu_fw_send_data(struct mt76x02_dev *dev, u8 *data,
+ 			    const void *fw_data, int len, u32 dst_addr)
+ {
+-	u8 *data = sg_virt(&buf->urb->sg[0]);
+-	DECLARE_COMPLETION_ONSTACK(cmpl);
+ 	__le32 info;
+ 	u32 val;
+-	int err;
++	int err, data_len;
+ 
+ 	info = cpu_to_le32(FIELD_PREP(MT_MCU_MSG_PORT, CPU_TX_PORT) |
+ 			   FIELD_PREP(MT_MCU_MSG_LEN, len) |
+@@ -291,25 +285,12 @@ __mt76x02u_mcu_fw_send_data(struct mt76x02_dev *dev, struct mt76u_buf *buf,
+ 	mt76u_single_wr(&dev->mt76, MT_VEND_WRITE_FCE,
+ 			MT_FCE_DMA_LEN, len << 16);
+ 
+-	buf->len = MT_CMD_HDR_LEN + len + sizeof(info);
+-	err = mt76u_submit_buf(&dev->mt76, USB_DIR_OUT,
+-			       MT_EP_OUT_INBAND_CMD,
+-			       buf, GFP_KERNEL,
+-			       mt76u_mcu_complete_urb, &cmpl);
+-	if (err < 0)
+-		return err;
+-
+-	if (!wait_for_completion_timeout(&cmpl,
+-					 msecs_to_jiffies(1000))) {
+-		dev_err(dev->mt76.dev, "firmware upload timed out\n");
+-		usb_kill_urb(buf->urb);
+-		return -ETIMEDOUT;
+-	}
++	data_len = MT_CMD_HDR_LEN + len + sizeof(info);
+ 
+-	if (mt76u_urb_error(buf->urb)) {
+-		dev_err(dev->mt76.dev, "firmware upload failed: %d\n",
+-			buf->urb->status);
+-		return buf->urb->status;
++	err = mt76u_bulk_msg(&dev->mt76, data, data_len, 1000);
++	if (err) {
++		dev_err(dev->mt76.dev, "firmware upload failed: %d\n", err);
++		return err;
+ 	}
+ 
+ 	val = mt76_rr(dev, MT_TX_CPU_FROM_FCE_CPU_DESC_IDX);
+@@ -322,17 +303,16 @@ __mt76x02u_mcu_fw_send_data(struct mt76x02_dev *dev, struct mt76u_buf *buf,
+ int mt76x02u_mcu_fw_send_data(struct mt76x02_dev *dev, const void *data,
+ 			      int data_len, u32 max_payload, u32 offset)
+ {
+-	int err, len, pos = 0, max_len = max_payload - 8;
+-	struct mt76u_buf buf;
++	int len, err = 0, pos = 0, max_len = max_payload - 8;
++	u8 *buf;
+ 
+-	err = mt76u_buf_alloc(&dev->mt76, &buf, 1, max_payload, max_payload,
+-			      GFP_KERNEL);
+-	if (err < 0)
+-		return err;
++	buf = kmalloc(max_payload, GFP_KERNEL);
++	if (!buf)
++		return -ENOMEM;
+ 
+ 	while (data_len > 0) {
+ 		len = min_t(int, data_len, max_len);
+-		err = __mt76x02u_mcu_fw_send_data(dev, &buf, data + pos,
++		err = __mt76x02u_mcu_fw_send_data(dev, buf, data + pos,
+ 						  len, offset + pos);
+ 		if (err < 0)
+ 			break;
+@@ -341,7 +321,7 @@ int mt76x02u_mcu_fw_send_data(struct mt76x02_dev *dev, const void *data,
+ 		pos += len;
+ 		usleep_range(5000, 10000);
+ 	}
+-	mt76u_buf_free(&buf);
++	kfree(buf);
+ 
+ 	return err;
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/usb.c b/drivers/net/wireless/mediatek/mt76/usb.c
+index b061263453d4..09923cedd039 100644
+--- a/drivers/net/wireless/mediatek/mt76/usb.c
++++ b/drivers/net/wireless/mediatek/mt76/usb.c
+@@ -326,7 +326,6 @@ int mt76u_buf_alloc(struct mt76_dev *dev, struct mt76u_buf *buf,
+ 
+ 	return mt76u_fill_rx_sg(dev, buf, nsgs, len, sglen);
+ }
+-EXPORT_SYMBOL_GPL(mt76u_buf_alloc);
+ 
+ void mt76u_buf_free(struct mt76u_buf *buf)
+ {
+diff --git a/drivers/phy/allwinner/phy-sun4i-usb.c b/drivers/phy/allwinner/phy-sun4i-usb.c
+index 5163097b43df..4bbd9ede38c8 100644
+--- a/drivers/phy/allwinner/phy-sun4i-usb.c
++++ b/drivers/phy/allwinner/phy-sun4i-usb.c
+@@ -485,8 +485,11 @@ static int sun4i_usb_phy_set_mode(struct phy *_phy,
+ 	struct sun4i_usb_phy_data *data = to_sun4i_usb_phy_data(phy);
+ 	int new_mode;
+ 
+-	if (phy->index != 0)
++	if (phy->index != 0) {
++		if (mode == PHY_MODE_USB_HOST)
++			return 0;
+ 		return -EINVAL;
++	}
+ 
+ 	switch (mode) {
+ 	case PHY_MODE_USB_HOST:
+diff --git a/drivers/s390/cio/vfio_ccw_drv.c b/drivers/s390/cio/vfio_ccw_drv.c
+index a10cec0e86eb..0b3b9de45c60 100644
+--- a/drivers/s390/cio/vfio_ccw_drv.c
++++ b/drivers/s390/cio/vfio_ccw_drv.c
+@@ -72,20 +72,24 @@ static void vfio_ccw_sch_io_todo(struct work_struct *work)
+ {
+ 	struct vfio_ccw_private *private;
+ 	struct irb *irb;
++	bool is_final;
+ 
+ 	private = container_of(work, struct vfio_ccw_private, io_work);
+ 	irb = &private->irb;
+ 
++	is_final = !(scsw_actl(&irb->scsw) &
++		     (SCSW_ACTL_DEVACT | SCSW_ACTL_SCHACT));
+ 	if (scsw_is_solicited(&irb->scsw)) {
+ 		cp_update_scsw(&private->cp, &irb->scsw);
+-		cp_free(&private->cp);
++		if (is_final)
++			cp_free(&private->cp);
+ 	}
+ 	memcpy(private->io_region->irb_area, irb, sizeof(*irb));
+ 
+ 	if (private->io_trigger)
+ 		eventfd_signal(private->io_trigger, 1);
+ 
+-	if (private->mdev)
++	if (private->mdev && is_final)
+ 		private->state = VFIO_CCW_STATE_IDLE;
+ }
+ 
+diff --git a/drivers/s390/scsi/zfcp_erp.c b/drivers/s390/scsi/zfcp_erp.c
+index 744a64680d5b..e8fc28dba8df 100644
+--- a/drivers/s390/scsi/zfcp_erp.c
++++ b/drivers/s390/scsi/zfcp_erp.c
+@@ -624,6 +624,20 @@ static void zfcp_erp_strategy_memwait(struct zfcp_erp_action *erp_action)
+ 	add_timer(&erp_action->timer);
+ }
+ 
++void zfcp_erp_port_forced_reopen_all(struct zfcp_adapter *adapter,
++				     int clear, char *dbftag)
++{
++	unsigned long flags;
++	struct zfcp_port *port;
++
++	write_lock_irqsave(&adapter->erp_lock, flags);
++	read_lock(&adapter->port_list_lock);
++	list_for_each_entry(port, &adapter->port_list, list)
++		_zfcp_erp_port_forced_reopen(port, clear, dbftag);
++	read_unlock(&adapter->port_list_lock);
++	write_unlock_irqrestore(&adapter->erp_lock, flags);
++}
++
+ static void _zfcp_erp_port_reopen_all(struct zfcp_adapter *adapter,
+ 				      int clear, char *dbftag)
+ {
+@@ -1341,6 +1355,9 @@ static void zfcp_erp_try_rport_unblock(struct zfcp_port *port)
+ 		struct zfcp_scsi_dev *zsdev = sdev_to_zfcp(sdev);
+ 		int lun_status;
+ 
++		if (sdev->sdev_state == SDEV_DEL ||
++		    sdev->sdev_state == SDEV_CANCEL)
++			continue;
+ 		if (zsdev->port != port)
+ 			continue;
+ 		/* LUN under port of interest */
+diff --git a/drivers/s390/scsi/zfcp_ext.h b/drivers/s390/scsi/zfcp_ext.h
+index 3fce47b0b21b..c6acca521ffe 100644
+--- a/drivers/s390/scsi/zfcp_ext.h
++++ b/drivers/s390/scsi/zfcp_ext.h
+@@ -70,6 +70,8 @@ extern void zfcp_erp_port_reopen(struct zfcp_port *port, int clear,
+ 				 char *dbftag);
+ extern void zfcp_erp_port_shutdown(struct zfcp_port *, int, char *);
+ extern void zfcp_erp_port_forced_reopen(struct zfcp_port *, int, char *);
++extern void zfcp_erp_port_forced_reopen_all(struct zfcp_adapter *adapter,
++					    int clear, char *dbftag);
+ extern void zfcp_erp_set_lun_status(struct scsi_device *, u32);
+ extern void zfcp_erp_clear_lun_status(struct scsi_device *, u32);
+ extern void zfcp_erp_lun_reopen(struct scsi_device *, int, char *);
+diff --git a/drivers/s390/scsi/zfcp_scsi.c b/drivers/s390/scsi/zfcp_scsi.c
+index f4f6a07c5222..221d0dfb8493 100644
+--- a/drivers/s390/scsi/zfcp_scsi.c
++++ b/drivers/s390/scsi/zfcp_scsi.c
+@@ -368,6 +368,10 @@ static int zfcp_scsi_eh_host_reset_handler(struct scsi_cmnd *scpnt)
+ 	struct zfcp_adapter *adapter = zfcp_sdev->port->adapter;
+ 	int ret = SUCCESS, fc_ret;
+ 
++	if (!(adapter->connection_features & FSF_FEATURE_NPIV_MODE)) {
++		zfcp_erp_port_forced_reopen_all(adapter, 0, "schrh_p");
++		zfcp_erp_wait(adapter);
++	}
+ 	zfcp_erp_adapter_reopen(adapter, 0, "schrh_1");
+ 	zfcp_erp_wait(adapter);
+ 	fc_ret = fc_block_scsi_eh(scpnt);
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index b84099479fe0..d64553c0a051 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -1398,11 +1398,6 @@ static void sd_release(struct gendisk *disk, fmode_t mode)
+ 			scsi_set_medium_removal(sdev, SCSI_REMOVAL_ALLOW);
+ 	}
+ 
+-	/*
+-	 * XXX and what if there are packets in flight and this close()
+-	 * XXX is followed by a "rmmod sd_mod"?
+-	 */
+-
+ 	scsi_disk_put(sdkp);
+ }
+ 
+@@ -3059,6 +3054,9 @@ static bool sd_validate_opt_xfer_size(struct scsi_disk *sdkp,
+ 	unsigned int opt_xfer_bytes =
+ 		logical_to_bytes(sdp, sdkp->opt_xfer_blocks);
+ 
++	if (sdkp->opt_xfer_blocks == 0)
++		return false;
++
+ 	if (sdkp->opt_xfer_blocks > dev_max) {
+ 		sd_first_printk(KERN_WARNING, sdkp,
+ 				"Optimal transfer size %u logical blocks " \
+@@ -3488,9 +3486,21 @@ static void scsi_disk_release(struct device *dev)
+ {
+ 	struct scsi_disk *sdkp = to_scsi_disk(dev);
+ 	struct gendisk *disk = sdkp->disk;
+-	
++	struct request_queue *q = disk->queue;
++
+ 	ida_free(&sd_index_ida, sdkp->index);
+ 
++	/*
++	 * Wait until all requests that are in progress have completed.
++	 * This is necessary to avoid that e.g. scsi_end_request() crashes
++	 * due to clearing the disk->private_data pointer. Wait from inside
++	 * scsi_disk_release() instead of from sd_release() to avoid that
++	 * freezing and unfreezing the request queue affects user space I/O
++	 * in case multiple processes open a /dev/sd... node concurrently.
++	 */
++	blk_mq_freeze_queue(q);
++	blk_mq_unfreeze_queue(q);
++
+ 	disk->private_data = NULL;
+ 	put_disk(disk);
+ 	put_device(&sdkp->device->sdev_gendev);
+diff --git a/drivers/staging/comedi/comedidev.h b/drivers/staging/comedi/comedidev.h
+index a7d569cfca5d..0dff1ac057cd 100644
+--- a/drivers/staging/comedi/comedidev.h
++++ b/drivers/staging/comedi/comedidev.h
+@@ -1001,6 +1001,8 @@ int comedi_dio_insn_config(struct comedi_device *dev,
+ 			   unsigned int mask);
+ unsigned int comedi_dio_update_state(struct comedi_subdevice *s,
+ 				     unsigned int *data);
++unsigned int comedi_bytes_per_scan_cmd(struct comedi_subdevice *s,
++				       struct comedi_cmd *cmd);
+ unsigned int comedi_bytes_per_scan(struct comedi_subdevice *s);
+ unsigned int comedi_nscans_left(struct comedi_subdevice *s,
+ 				unsigned int nscans);
+diff --git a/drivers/staging/comedi/drivers.c b/drivers/staging/comedi/drivers.c
+index eefa62f42c0f..5a32b8fc000e 100644
+--- a/drivers/staging/comedi/drivers.c
++++ b/drivers/staging/comedi/drivers.c
+@@ -394,11 +394,13 @@ unsigned int comedi_dio_update_state(struct comedi_subdevice *s,
+ EXPORT_SYMBOL_GPL(comedi_dio_update_state);
+ 
+ /**
+- * comedi_bytes_per_scan() - Get length of asynchronous command "scan" in bytes
++ * comedi_bytes_per_scan_cmd() - Get length of asynchronous command "scan" in
++ * bytes
+  * @s: COMEDI subdevice.
++ * @cmd: COMEDI command.
+  *
+  * Determines the overall scan length according to the subdevice type and the
+- * number of channels in the scan.
++ * number of channels in the scan for the specified command.
+  *
+  * For digital input, output or input/output subdevices, samples for
+  * multiple channels are assumed to be packed into one or more unsigned
+@@ -408,9 +410,9 @@ EXPORT_SYMBOL_GPL(comedi_dio_update_state);
+  *
+  * Returns the overall scan length in bytes.
+  */
+-unsigned int comedi_bytes_per_scan(struct comedi_subdevice *s)
++unsigned int comedi_bytes_per_scan_cmd(struct comedi_subdevice *s,
++				       struct comedi_cmd *cmd)
+ {
+-	struct comedi_cmd *cmd = &s->async->cmd;
+ 	unsigned int num_samples;
+ 	unsigned int bits_per_sample;
+ 
+@@ -427,6 +429,29 @@ unsigned int comedi_bytes_per_scan(struct comedi_subdevice *s)
+ 	}
+ 	return comedi_samples_to_bytes(s, num_samples);
+ }
++EXPORT_SYMBOL_GPL(comedi_bytes_per_scan_cmd);
++
++/**
++ * comedi_bytes_per_scan() - Get length of asynchronous command "scan" in bytes
++ * @s: COMEDI subdevice.
++ *
++ * Determines the overall scan length according to the subdevice type and the
++ * number of channels in the scan for the current command.
++ *
++ * For digital input, output or input/output subdevices, samples for
++ * multiple channels are assumed to be packed into one or more unsigned
++ * short or unsigned int values according to the subdevice's %SDF_LSAMPL
++ * flag.  For other types of subdevice, samples are assumed to occupy a
++ * whole unsigned short or unsigned int according to the %SDF_LSAMPL flag.
++ *
++ * Returns the overall scan length in bytes.
++ */
++unsigned int comedi_bytes_per_scan(struct comedi_subdevice *s)
++{
++	struct comedi_cmd *cmd = &s->async->cmd;
++
++	return comedi_bytes_per_scan_cmd(s, cmd);
++}
+ EXPORT_SYMBOL_GPL(comedi_bytes_per_scan);
+ 
+ static unsigned int __comedi_nscans_left(struct comedi_subdevice *s,
+diff --git a/drivers/staging/comedi/drivers/ni_mio_common.c b/drivers/staging/comedi/drivers/ni_mio_common.c
+index 5edf59ac6706..b04dad8c7092 100644
+--- a/drivers/staging/comedi/drivers/ni_mio_common.c
++++ b/drivers/staging/comedi/drivers/ni_mio_common.c
+@@ -3545,6 +3545,7 @@ static int ni_cdio_cmdtest(struct comedi_device *dev,
+ 			   struct comedi_subdevice *s, struct comedi_cmd *cmd)
+ {
+ 	struct ni_private *devpriv = dev->private;
++	unsigned int bytes_per_scan;
+ 	int err = 0;
+ 
+ 	/* Step 1 : check if triggers are trivially valid */
+@@ -3579,9 +3580,12 @@ static int ni_cdio_cmdtest(struct comedi_device *dev,
+ 	err |= comedi_check_trigger_arg_is(&cmd->convert_arg, 0);
+ 	err |= comedi_check_trigger_arg_is(&cmd->scan_end_arg,
+ 					   cmd->chanlist_len);
+-	err |= comedi_check_trigger_arg_max(&cmd->stop_arg,
+-					    s->async->prealloc_bufsz /
+-					    comedi_bytes_per_scan(s));
++	bytes_per_scan = comedi_bytes_per_scan_cmd(s, cmd);
++	if (bytes_per_scan) {
++		err |= comedi_check_trigger_arg_max(&cmd->stop_arg,
++						    s->async->prealloc_bufsz /
++						    bytes_per_scan);
++	}
+ 
+ 	if (err)
+ 		return 3;
+diff --git a/drivers/staging/erofs/dir.c b/drivers/staging/erofs/dir.c
+index 833f052f79d0..b21ed5b4c711 100644
+--- a/drivers/staging/erofs/dir.c
++++ b/drivers/staging/erofs/dir.c
+@@ -23,6 +23,21 @@ static const unsigned char erofs_filetype_table[EROFS_FT_MAX] = {
+ 	[EROFS_FT_SYMLINK]	= DT_LNK,
+ };
+ 
++static void debug_one_dentry(unsigned char d_type, const char *de_name,
++			     unsigned int de_namelen)
++{
++#ifdef CONFIG_EROFS_FS_DEBUG
++	/* since the on-disk name could not have the trailing '\0' */
++	unsigned char dbg_namebuf[EROFS_NAME_LEN + 1];
++
++	memcpy(dbg_namebuf, de_name, de_namelen);
++	dbg_namebuf[de_namelen] = '\0';
++
++	debugln("found dirent %s de_len %u d_type %d", dbg_namebuf,
++		de_namelen, d_type);
++#endif
++}
++
+ static int erofs_fill_dentries(struct dir_context *ctx,
+ 	void *dentry_blk, unsigned int *ofs,
+ 	unsigned int nameoff, unsigned int maxsize)
+@@ -33,14 +48,10 @@ static int erofs_fill_dentries(struct dir_context *ctx,
+ 	de = dentry_blk + *ofs;
+ 	while (de < end) {
+ 		const char *de_name;
+-		int de_namelen;
++		unsigned int de_namelen;
+ 		unsigned char d_type;
+-#ifdef CONFIG_EROFS_FS_DEBUG
+-		unsigned int dbg_namelen;
+-		unsigned char dbg_namebuf[EROFS_NAME_LEN];
+-#endif
+ 
+-		if (unlikely(de->file_type < EROFS_FT_MAX))
++		if (de->file_type < EROFS_FT_MAX)
+ 			d_type = erofs_filetype_table[de->file_type];
+ 		else
+ 			d_type = DT_UNKNOWN;
+@@ -48,26 +59,20 @@ static int erofs_fill_dentries(struct dir_context *ctx,
+ 		nameoff = le16_to_cpu(de->nameoff);
+ 		de_name = (char *)dentry_blk + nameoff;
+ 
+-		de_namelen = unlikely(de + 1 >= end) ?
+-			/* last directory entry */
+-			strnlen(de_name, maxsize - nameoff) :
+-			le16_to_cpu(de[1].nameoff) - nameoff;
++		/* the last dirent in the block? */
++		if (de + 1 >= end)
++			de_namelen = strnlen(de_name, maxsize - nameoff);
++		else
++			de_namelen = le16_to_cpu(de[1].nameoff) - nameoff;
+ 
+ 		/* a corrupted entry is found */
+-		if (unlikely(de_namelen < 0)) {
++		if (unlikely(nameoff + de_namelen > maxsize ||
++			     de_namelen > EROFS_NAME_LEN)) {
+ 			DBG_BUGON(1);
+ 			return -EIO;
+ 		}
+ 
+-#ifdef CONFIG_EROFS_FS_DEBUG
+-		dbg_namelen = min(EROFS_NAME_LEN - 1, de_namelen);
+-		memcpy(dbg_namebuf, de_name, dbg_namelen);
+-		dbg_namebuf[dbg_namelen] = '\0';
+-
+-		debugln("%s, found de_name %s de_len %d d_type %d", __func__,
+-			dbg_namebuf, de_namelen, d_type);
+-#endif
+-
++		debug_one_dentry(d_type, de_name, de_namelen);
+ 		if (!dir_emit(ctx, de_name, de_namelen,
+ 			      le64_to_cpu(de->nid), d_type))
+ 			/* stopped by some reason */
+diff --git a/drivers/staging/erofs/unzip_vle.c b/drivers/staging/erofs/unzip_vle.c
+index ab30d14ded06..d850be1abc84 100644
+--- a/drivers/staging/erofs/unzip_vle.c
++++ b/drivers/staging/erofs/unzip_vle.c
+@@ -977,6 +977,7 @@ repeat:
+ 	overlapped = false;
+ 	compressed_pages = grp->compressed_pages;
+ 
++	err = 0;
+ 	for (i = 0; i < clusterpages; ++i) {
+ 		unsigned int pagenr;
+ 
+@@ -986,26 +987,39 @@ repeat:
+ 		DBG_BUGON(!page);
+ 		DBG_BUGON(!page->mapping);
+ 
+-		if (z_erofs_is_stagingpage(page))
+-			continue;
++		if (!z_erofs_is_stagingpage(page)) {
+ #ifdef EROFS_FS_HAS_MANAGED_CACHE
+-		if (page->mapping == MNGD_MAPPING(sbi)) {
+-			DBG_BUGON(!PageUptodate(page));
+-			continue;
+-		}
++			if (page->mapping == MNGD_MAPPING(sbi)) {
++				if (unlikely(!PageUptodate(page)))
++					err = -EIO;
++				continue;
++			}
+ #endif
+ 
+-		/* only non-head page could be reused as a compressed page */
+-		pagenr = z_erofs_onlinepage_index(page);
++			/*
++			 * only if non-head page can be selected
++			 * for inplace decompression
++			 */
++			pagenr = z_erofs_onlinepage_index(page);
+ 
+-		DBG_BUGON(pagenr >= nr_pages);
+-		DBG_BUGON(pages[pagenr]);
+-		++sparsemem_pages;
+-		pages[pagenr] = page;
++			DBG_BUGON(pagenr >= nr_pages);
++			DBG_BUGON(pages[pagenr]);
++			++sparsemem_pages;
++			pages[pagenr] = page;
+ 
+-		overlapped = true;
++			overlapped = true;
++		}
++
++		/* PG_error needs checking for inplaced and staging pages */
++		if (unlikely(PageError(page))) {
++			DBG_BUGON(PageUptodate(page));
++			err = -EIO;
++		}
+ 	}
+ 
++	if (unlikely(err))
++		goto out;
++
+ 	llen = (nr_pages << PAGE_SHIFT) - work->pageofs;
+ 
+ 	if (z_erofs_vle_workgrp_fmt(grp) == Z_EROFS_VLE_WORKGRP_FMT_PLAIN) {
+@@ -1034,6 +1048,10 @@ repeat:
+ 
+ skip_allocpage:
+ 	vout = erofs_vmap(pages, nr_pages);
++	if (!vout) {
++		err = -ENOMEM;
++		goto out;
++	}
+ 
+ 	err = z_erofs_vle_unzip_vmap(compressed_pages,
+ 		clusterpages, vout, llen, work->pageofs, overlapped);
+@@ -1199,6 +1217,7 @@ repeat:
+ 	if (page->mapping == mc) {
+ 		WRITE_ONCE(grp->compressed_pages[nr], page);
+ 
++		ClearPageError(page);
+ 		if (!PagePrivate(page)) {
+ 			/*
+ 			 * impossible to be !PagePrivate(page) for
+diff --git a/drivers/staging/erofs/unzip_vle_lz4.c b/drivers/staging/erofs/unzip_vle_lz4.c
+index f471b894c848..3e8b0ff2efeb 100644
+--- a/drivers/staging/erofs/unzip_vle_lz4.c
++++ b/drivers/staging/erofs/unzip_vle_lz4.c
+@@ -136,10 +136,13 @@ int z_erofs_vle_unzip_fast_percpu(struct page **compressed_pages,
+ 
+ 	nr_pages = DIV_ROUND_UP(outlen + pageofs, PAGE_SIZE);
+ 
+-	if (clusterpages == 1)
++	if (clusterpages == 1) {
+ 		vin = kmap_atomic(compressed_pages[0]);
+-	else
++	} else {
+ 		vin = erofs_vmap(compressed_pages, clusterpages);
++		if (!vin)
++			return -ENOMEM;
++	}
+ 
+ 	preempt_disable();
+ 	vout = erofs_pcpubuf[smp_processor_id()].data;
+diff --git a/drivers/staging/olpc_dcon/olpc_dcon_xo_1.c b/drivers/staging/olpc_dcon/olpc_dcon_xo_1.c
+index 80b8d4153414..a54286498a47 100644
+--- a/drivers/staging/olpc_dcon/olpc_dcon_xo_1.c
++++ b/drivers/staging/olpc_dcon/olpc_dcon_xo_1.c
+@@ -45,7 +45,7 @@ static int dcon_init_xo_1(struct dcon_priv *dcon)
+ {
+ 	unsigned char lob;
+ 	int ret, i;
+-	struct dcon_gpio *pin = &gpios_asis[0];
++	const struct dcon_gpio *pin = &gpios_asis[0];
+ 
+ 	for (i = 0; i < ARRAY_SIZE(gpios_asis); i++) {
+ 		gpios[i] = devm_gpiod_get(&dcon->client->dev, pin[i].name,
+diff --git a/drivers/staging/speakup/speakup_soft.c b/drivers/staging/speakup/speakup_soft.c
+index 947c79532e10..d5383974d40e 100644
+--- a/drivers/staging/speakup/speakup_soft.c
++++ b/drivers/staging/speakup/speakup_soft.c
+@@ -208,12 +208,15 @@ static ssize_t softsynthx_read(struct file *fp, char __user *buf, size_t count,
+ 		return -EINVAL;
+ 
+ 	spin_lock_irqsave(&speakup_info.spinlock, flags);
++	synth_soft.alive = 1;
+ 	while (1) {
+ 		prepare_to_wait(&speakup_event, &wait, TASK_INTERRUPTIBLE);
+-		if (!unicode)
+-			synth_buffer_skip_nonlatin1();
+-		if (!synth_buffer_empty() || speakup_info.flushing)
+-			break;
++		if (synth_current() == &synth_soft) {
++			if (!unicode)
++				synth_buffer_skip_nonlatin1();
++			if (!synth_buffer_empty() || speakup_info.flushing)
++				break;
++		}
+ 		spin_unlock_irqrestore(&speakup_info.spinlock, flags);
+ 		if (fp->f_flags & O_NONBLOCK) {
+ 			finish_wait(&speakup_event, &wait);
+@@ -233,6 +236,8 @@ static ssize_t softsynthx_read(struct file *fp, char __user *buf, size_t count,
+ 
+ 	/* Keep 3 bytes available for a 16bit UTF-8-encoded character */
+ 	while (chars_sent <= count - bytes_per_ch) {
++		if (synth_current() != &synth_soft)
++			break;
+ 		if (speakup_info.flushing) {
+ 			speakup_info.flushing = 0;
+ 			ch = '\x18';
+@@ -329,7 +334,8 @@ static __poll_t softsynth_poll(struct file *fp, struct poll_table_struct *wait)
+ 	poll_wait(fp, &speakup_event, wait);
+ 
+ 	spin_lock_irqsave(&speakup_info.spinlock, flags);
+-	if (!synth_buffer_empty() || speakup_info.flushing)
++	if (synth_current() == &synth_soft &&
++	    (!synth_buffer_empty() || speakup_info.flushing))
+ 		ret = EPOLLIN | EPOLLRDNORM;
+ 	spin_unlock_irqrestore(&speakup_info.spinlock, flags);
+ 	return ret;
+diff --git a/drivers/staging/speakup/spk_priv.h b/drivers/staging/speakup/spk_priv.h
+index c8e688878fc7..ac6a74883af4 100644
+--- a/drivers/staging/speakup/spk_priv.h
++++ b/drivers/staging/speakup/spk_priv.h
+@@ -74,6 +74,7 @@ int synth_request_region(unsigned long start, unsigned long n);
+ int synth_release_region(unsigned long start, unsigned long n);
+ int synth_add(struct spk_synth *in_synth);
+ void synth_remove(struct spk_synth *in_synth);
++struct spk_synth *synth_current(void);
+ 
+ extern struct speakup_info_t speakup_info;
+ 
+diff --git a/drivers/staging/speakup/synth.c b/drivers/staging/speakup/synth.c
+index 25f259ee4ffc..3568bfb89912 100644
+--- a/drivers/staging/speakup/synth.c
++++ b/drivers/staging/speakup/synth.c
+@@ -481,4 +481,10 @@ void synth_remove(struct spk_synth *in_synth)
+ }
+ EXPORT_SYMBOL_GPL(synth_remove);
+ 
++struct spk_synth *synth_current(void)
++{
++	return synth;
++}
++EXPORT_SYMBOL_GPL(synth_current);
++
+ short spk_punc_masks[] = { 0, SOME, MOST, PUNC, PUNC | B_SYM };
+diff --git a/drivers/staging/vt6655/device_main.c b/drivers/staging/vt6655/device_main.c
+index c9097e7367d8..2e28fbcdfe8e 100644
+--- a/drivers/staging/vt6655/device_main.c
++++ b/drivers/staging/vt6655/device_main.c
+@@ -1033,8 +1033,6 @@ static void vnt_interrupt_process(struct vnt_private *priv)
+ 		return;
+ 	}
+ 
+-	MACvIntDisable(priv->PortOffset);
+-
+ 	spin_lock_irqsave(&priv->lock, flags);
+ 
+ 	/* Read low level stats */
+@@ -1122,8 +1120,6 @@ static void vnt_interrupt_process(struct vnt_private *priv)
+ 	}
+ 
+ 	spin_unlock_irqrestore(&priv->lock, flags);
+-
+-	MACvIntEnable(priv->PortOffset, IMR_MASK_VALUE);
+ }
+ 
+ static void vnt_interrupt_work(struct work_struct *work)
+@@ -1133,14 +1129,17 @@ static void vnt_interrupt_work(struct work_struct *work)
+ 
+ 	if (priv->vif)
+ 		vnt_interrupt_process(priv);
++
++	MACvIntEnable(priv->PortOffset, IMR_MASK_VALUE);
+ }
+ 
+ static irqreturn_t vnt_interrupt(int irq,  void *arg)
+ {
+ 	struct vnt_private *priv = arg;
+ 
+-	if (priv->vif)
+-		schedule_work(&priv->interrupt_work);
++	schedule_work(&priv->interrupt_work);
++
++	MACvIntDisable(priv->PortOffset);
+ 
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/tty/serial/atmel_serial.c b/drivers/tty/serial/atmel_serial.c
+index 05147fe24343..0b4f36905321 100644
+--- a/drivers/tty/serial/atmel_serial.c
++++ b/drivers/tty/serial/atmel_serial.c
+@@ -166,6 +166,8 @@ struct atmel_uart_port {
+ 	unsigned int		pending_status;
+ 	spinlock_t		lock_suspended;
+ 
++	bool			hd_start_rx;	/* can start RX during half-duplex operation */
++
+ 	/* ISO7816 */
+ 	unsigned int		fidi_min;
+ 	unsigned int		fidi_max;
+@@ -231,6 +233,13 @@ static inline void atmel_uart_write_char(struct uart_port *port, u8 value)
+ 	__raw_writeb(value, port->membase + ATMEL_US_THR);
+ }
+ 
++static inline int atmel_uart_is_half_duplex(struct uart_port *port)
++{
++	return ((port->rs485.flags & SER_RS485_ENABLED) &&
++		!(port->rs485.flags & SER_RS485_RX_DURING_TX)) ||
++		(port->iso7816.flags & SER_ISO7816_ENABLED);
++}
++
+ #ifdef CONFIG_SERIAL_ATMEL_PDC
+ static bool atmel_use_pdc_rx(struct uart_port *port)
+ {
+@@ -608,10 +617,9 @@ static void atmel_stop_tx(struct uart_port *port)
+ 	/* Disable interrupts */
+ 	atmel_uart_writel(port, ATMEL_US_IDR, atmel_port->tx_done_mask);
+ 
+-	if (((port->rs485.flags & SER_RS485_ENABLED) &&
+-	     !(port->rs485.flags & SER_RS485_RX_DURING_TX)) ||
+-	    port->iso7816.flags & SER_ISO7816_ENABLED)
++	if (atmel_uart_is_half_duplex(port))
+ 		atmel_start_rx(port);
++
+ }
+ 
+ /*
+@@ -628,9 +636,7 @@ static void atmel_start_tx(struct uart_port *port)
+ 		return;
+ 
+ 	if (atmel_use_pdc_tx(port) || atmel_use_dma_tx(port))
+-		if (((port->rs485.flags & SER_RS485_ENABLED) &&
+-		     !(port->rs485.flags & SER_RS485_RX_DURING_TX)) ||
+-		    port->iso7816.flags & SER_ISO7816_ENABLED)
++		if (atmel_uart_is_half_duplex(port))
+ 			atmel_stop_rx(port);
+ 
+ 	if (atmel_use_pdc_tx(port))
+@@ -928,11 +934,14 @@ static void atmel_complete_tx_dma(void *arg)
+ 	 */
+ 	if (!uart_circ_empty(xmit))
+ 		atmel_tasklet_schedule(atmel_port, &atmel_port->tasklet_tx);
+-	else if (((port->rs485.flags & SER_RS485_ENABLED) &&
+-		  !(port->rs485.flags & SER_RS485_RX_DURING_TX)) ||
+-		 port->iso7816.flags & SER_ISO7816_ENABLED) {
+-		/* DMA done, stop TX, start RX for RS485 */
+-		atmel_start_rx(port);
++	else if (atmel_uart_is_half_duplex(port)) {
++		/*
++		 * DMA done, re-enable TXEMPTY and signal that we can stop
++		 * TX and start RX for RS485
++		 */
++		atmel_port->hd_start_rx = true;
++		atmel_uart_writel(port, ATMEL_US_IER,
++				  atmel_port->tx_done_mask);
+ 	}
+ 
+ 	spin_unlock_irqrestore(&port->lock, flags);
+@@ -1288,6 +1297,10 @@ static int atmel_prepare_rx_dma(struct uart_port *port)
+ 					 sg_dma_len(&atmel_port->sg_rx)/2,
+ 					 DMA_DEV_TO_MEM,
+ 					 DMA_PREP_INTERRUPT);
++	if (!desc) {
++		dev_err(port->dev, "Preparing DMA cyclic failed\n");
++		goto chan_err;
++	}
+ 	desc->callback = atmel_complete_rx_dma;
+ 	desc->callback_param = port;
+ 	atmel_port->desc_rx = desc;
+@@ -1376,9 +1389,20 @@ atmel_handle_transmit(struct uart_port *port, unsigned int pending)
+ 	struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
+ 
+ 	if (pending & atmel_port->tx_done_mask) {
+-		/* Either PDC or interrupt transmission */
+ 		atmel_uart_writel(port, ATMEL_US_IDR,
+ 				  atmel_port->tx_done_mask);
++
++		/* Start RX if flag was set and FIFO is empty */
++		if (atmel_port->hd_start_rx) {
++			if (!(atmel_uart_readl(port, ATMEL_US_CSR)
++					& ATMEL_US_TXEMPTY))
++				dev_warn(port->dev, "Should start RX, but TX fifo is not empty\n");
++
++			atmel_port->hd_start_rx = false;
++			atmel_start_rx(port);
++			return;
++		}
++
+ 		atmel_tasklet_schedule(atmel_port, &atmel_port->tasklet_tx);
+ 	}
+ }
+@@ -1508,9 +1532,7 @@ static void atmel_tx_pdc(struct uart_port *port)
+ 		atmel_uart_writel(port, ATMEL_US_IER,
+ 				  atmel_port->tx_done_mask);
+ 	} else {
+-		if (((port->rs485.flags & SER_RS485_ENABLED) &&
+-		     !(port->rs485.flags & SER_RS485_RX_DURING_TX)) ||
+-		    port->iso7816.flags & SER_ISO7816_ENABLED) {
++		if (atmel_uart_is_half_duplex(port)) {
+ 			/* DMA done, stop TX, start RX for RS485 */
+ 			atmel_start_rx(port);
+ 		}
+diff --git a/drivers/tty/serial/kgdboc.c b/drivers/tty/serial/kgdboc.c
+index 6fb312e7af71..bfe5e9e034ec 100644
+--- a/drivers/tty/serial/kgdboc.c
++++ b/drivers/tty/serial/kgdboc.c
+@@ -148,8 +148,10 @@ static int configure_kgdboc(void)
+ 	char *cptr = config;
+ 	struct console *cons;
+ 
+-	if (!strlen(config) || isspace(config[0]))
++	if (!strlen(config) || isspace(config[0])) {
++		err = 0;
+ 		goto noconfig;
++	}
+ 
+ 	kgdboc_io_ops.is_console = 0;
+ 	kgdb_tty_driver = NULL;
+diff --git a/drivers/tty/serial/max310x.c b/drivers/tty/serial/max310x.c
+index 4f479841769a..0fdf3a760aa0 100644
+--- a/drivers/tty/serial/max310x.c
++++ b/drivers/tty/serial/max310x.c
+@@ -1416,6 +1416,8 @@ static int max310x_spi_probe(struct spi_device *spi)
+ 	if (spi->dev.of_node) {
+ 		const struct of_device_id *of_id =
+ 			of_match_device(max310x_dt_ids, &spi->dev);
++		if (!of_id)
++			return -ENODEV;
+ 
+ 		devtype = (struct max310x_devtype *)of_id->data;
+ 	} else {
+diff --git a/drivers/tty/serial/mvebu-uart.c b/drivers/tty/serial/mvebu-uart.c
+index 231f751d1ef4..7e7b1559fa36 100644
+--- a/drivers/tty/serial/mvebu-uart.c
++++ b/drivers/tty/serial/mvebu-uart.c
+@@ -810,6 +810,9 @@ static int mvebu_uart_probe(struct platform_device *pdev)
+ 		return -EINVAL;
+ 	}
+ 
++	if (!match)
++		return -ENODEV;
++
+ 	/* Assume that all UART ports have a DT alias or none has */
+ 	id = of_alias_get_id(pdev->dev.of_node, "serial");
+ 	if (!pdev->dev.of_node || id < 0)
+diff --git a/drivers/tty/serial/mxs-auart.c b/drivers/tty/serial/mxs-auart.c
+index 27235a526cce..4c188f4079b3 100644
+--- a/drivers/tty/serial/mxs-auart.c
++++ b/drivers/tty/serial/mxs-auart.c
+@@ -1686,6 +1686,10 @@ static int mxs_auart_probe(struct platform_device *pdev)
+ 
+ 	s->port.mapbase = r->start;
+ 	s->port.membase = ioremap(r->start, resource_size(r));
++	if (!s->port.membase) {
++		ret = -ENOMEM;
++		goto out_disable_clks;
++	}
+ 	s->port.ops = &mxs_auart_ops;
+ 	s->port.iotype = UPIO_MEM;
+ 	s->port.fifosize = MXS_AUART_FIFO_SIZE;
+diff --git a/drivers/tty/serial/qcom_geni_serial.c b/drivers/tty/serial/qcom_geni_serial.c
+index 38016609c7fa..d30502c58106 100644
+--- a/drivers/tty/serial/qcom_geni_serial.c
++++ b/drivers/tty/serial/qcom_geni_serial.c
+@@ -1117,7 +1117,7 @@ static int __init qcom_geni_console_setup(struct console *co, char *options)
+ {
+ 	struct uart_port *uport;
+ 	struct qcom_geni_serial_port *port;
+-	int baud;
++	int baud = 9600;
+ 	int bits = 8;
+ 	int parity = 'n';
+ 	int flow = 'n';
+diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
+index 64bbeb7d7e0c..93bd90f1ff14 100644
+--- a/drivers/tty/serial/sh-sci.c
++++ b/drivers/tty/serial/sh-sci.c
+@@ -838,19 +838,9 @@ static void sci_transmit_chars(struct uart_port *port)
+ 
+ 	if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
+ 		uart_write_wakeup(port);
+-	if (uart_circ_empty(xmit)) {
++	if (uart_circ_empty(xmit))
+ 		sci_stop_tx(port);
+-	} else {
+-		ctrl = serial_port_in(port, SCSCR);
+-
+-		if (port->type != PORT_SCI) {
+-			serial_port_in(port, SCxSR); /* Dummy read */
+-			sci_clear_SCxSR(port, SCxSR_TDxE_CLEAR(port));
+-		}
+ 
+-		ctrl |= SCSCR_TIE;
+-		serial_port_out(port, SCSCR, ctrl);
+-	}
+ }
+ 
+ /* On SH3, SCIF may read end-of-break as a space->mark char */
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 739f8960811a..ec666eb4b7b4 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -558,10 +558,8 @@ static void acm_softint(struct work_struct *work)
+ 		clear_bit(EVENT_RX_STALL, &acm->flags);
+ 	}
+ 
+-	if (test_bit(EVENT_TTY_WAKEUP, &acm->flags)) {
++	if (test_and_clear_bit(EVENT_TTY_WAKEUP, &acm->flags))
+ 		tty_port_tty_wakeup(&acm->port);
+-		clear_bit(EVENT_TTY_WAKEUP, &acm->flags);
+-	}
+ }
+ 
+ /*
+diff --git a/drivers/usb/common/common.c b/drivers/usb/common/common.c
+index 48277bbc15e4..73c8e6591746 100644
+--- a/drivers/usb/common/common.c
++++ b/drivers/usb/common/common.c
+@@ -145,6 +145,8 @@ enum usb_dr_mode of_usb_get_dr_mode_by_phy(struct device_node *np, int arg0)
+ 
+ 	do {
+ 		controller = of_find_node_with_property(controller, "phys");
++		if (!of_device_is_available(controller))
++			continue;
+ 		index = 0;
+ 		do {
+ 			if (arg0 == -1) {
+diff --git a/drivers/usb/gadget/function/f_hid.c b/drivers/usb/gadget/function/f_hid.c
+index 75b113a5b25c..f3816a5c861e 100644
+--- a/drivers/usb/gadget/function/f_hid.c
++++ b/drivers/usb/gadget/function/f_hid.c
+@@ -391,20 +391,20 @@ try_again:
+ 	req->complete = f_hidg_req_complete;
+ 	req->context  = hidg;
+ 
++	spin_unlock_irqrestore(&hidg->write_spinlock, flags);
++
+ 	status = usb_ep_queue(hidg->in_ep, req, GFP_ATOMIC);
+ 	if (status < 0) {
+ 		ERROR(hidg->func.config->cdev,
+ 			"usb_ep_queue error on int endpoint %zd\n", status);
+-		goto release_write_pending_unlocked;
++		goto release_write_pending;
+ 	} else {
+ 		status = count;
+ 	}
+-	spin_unlock_irqrestore(&hidg->write_spinlock, flags);
+ 
+ 	return status;
+ release_write_pending:
+ 	spin_lock_irqsave(&hidg->write_spinlock, flags);
+-release_write_pending_unlocked:
+ 	hidg->write_pending = 0;
+ 	spin_unlock_irqrestore(&hidg->write_spinlock, flags);
+ 
+diff --git a/drivers/usb/host/xhci-dbgcap.c b/drivers/usb/host/xhci-dbgcap.c
+index 86cff5c28eff..ba841c569c48 100644
+--- a/drivers/usb/host/xhci-dbgcap.c
++++ b/drivers/usb/host/xhci-dbgcap.c
+@@ -516,7 +516,6 @@ static int xhci_do_dbc_stop(struct xhci_hcd *xhci)
+ 		return -1;
+ 
+ 	writel(0, &dbc->regs->control);
+-	xhci_dbc_mem_cleanup(xhci);
+ 	dbc->state = DS_DISABLED;
+ 
+ 	return 0;
+@@ -562,8 +561,10 @@ static void xhci_dbc_stop(struct xhci_hcd *xhci)
+ 	ret = xhci_do_dbc_stop(xhci);
+ 	spin_unlock_irqrestore(&dbc->lock, flags);
+ 
+-	if (!ret)
++	if (!ret) {
++		xhci_dbc_mem_cleanup(xhci);
+ 		pm_runtime_put_sync(xhci_to_hcd(xhci)->self.controller);
++	}
+ }
+ 
+ static void
+diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
+index e2eece693655..96a740543183 100644
+--- a/drivers/usb/host/xhci-hub.c
++++ b/drivers/usb/host/xhci-hub.c
+@@ -1545,20 +1545,25 @@ int xhci_bus_suspend(struct usb_hcd *hcd)
+ 	port_index = max_ports;
+ 	while (port_index--) {
+ 		u32 t1, t2;
+-
++		int retries = 10;
++retry:
+ 		t1 = readl(ports[port_index]->addr);
+ 		t2 = xhci_port_state_to_neutral(t1);
+ 		portsc_buf[port_index] = 0;
+ 
+-		/* Bail out if a USB3 port has a new device in link training */
+-		if ((hcd->speed >= HCD_USB3) &&
++		/*
++		 * Give a USB3 port in link training time to finish, but don't
++		 * prevent suspend as port might be stuck
++		 */
++		if ((hcd->speed >= HCD_USB3) && retries-- &&
+ 		    (t1 & PORT_PLS_MASK) == XDEV_POLLING) {
+-			bus_state->bus_suspended = 0;
+ 			spin_unlock_irqrestore(&xhci->lock, flags);
+-			xhci_dbg(xhci, "Bus suspend bailout, port in polling\n");
+-			return -EBUSY;
++			msleep(XHCI_PORT_POLLING_LFPS_TIME);
++			spin_lock_irqsave(&xhci->lock, flags);
++			xhci_dbg(xhci, "port %d polling in bus suspend, waiting\n",
++				 port_index);
++			goto retry;
+ 		}
+-
+ 		/* suspend ports in U0, or bail out for new connect changes */
+ 		if ((t1 & PORT_PE) && (t1 & PORT_PLS_MASK) == XDEV_U0) {
+ 			if ((t1 & PORT_CSC) && wake_enabled) {
+diff --git a/drivers/usb/host/xhci-rcar.c b/drivers/usb/host/xhci-rcar.c
+index a6e463715779..671bce18782c 100644
+--- a/drivers/usb/host/xhci-rcar.c
++++ b/drivers/usb/host/xhci-rcar.c
+@@ -246,6 +246,7 @@ int xhci_rcar_init_quirk(struct usb_hcd *hcd)
+ 	if (!xhci_rcar_wait_for_pll_active(hcd))
+ 		return -ETIMEDOUT;
+ 
++	xhci->quirks |= XHCI_TRUST_TX_LENGTH;
+ 	return xhci_rcar_download_firmware(hcd);
+ }
+ 
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 40fa25c4d041..9215a28dad40 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -1647,10 +1647,13 @@ static void handle_port_status(struct xhci_hcd *xhci,
+ 		}
+ 	}
+ 
+-	if ((portsc & PORT_PLC) && (portsc & PORT_PLS_MASK) == XDEV_U0 &&
+-			DEV_SUPERSPEED_ANY(portsc)) {
++	if ((portsc & PORT_PLC) &&
++	    DEV_SUPERSPEED_ANY(portsc) &&
++	    ((portsc & PORT_PLS_MASK) == XDEV_U0 ||
++	     (portsc & PORT_PLS_MASK) == XDEV_U1 ||
++	     (portsc & PORT_PLS_MASK) == XDEV_U2)) {
+ 		xhci_dbg(xhci, "resume SS port %d finished\n", port_id);
+-		/* We've just brought the device into U0 through either the
++		/* We've just brought the device into U0/1/2 through either the
+ 		 * Resume state after a device remote wakeup, or through the
+ 		 * U3Exit state after a host-initiated resume.  If it's a device
+ 		 * initiated remote wake, don't pass up the link state change,
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 652dc36e3012..9334cdee382a 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -452,6 +452,14 @@ struct xhci_op_regs {
+  */
+ #define XHCI_DEFAULT_BESL	4
+ 
++/*
++ * USB3 specification define a 360ms tPollingLFPSTiemout for USB3 ports
++ * to complete link training. usually link trainig completes much faster
++ * so check status 10 times with 36ms sleep in places we need to wait for
++ * polling to complete.
++ */
++#define XHCI_PORT_POLLING_LFPS_TIME  36
++
+ /**
+  * struct xhci_intr_reg - Interrupt Register Set
+  * @irq_pending:	IMAN - Interrupt Management Register.  Used to enable
+diff --git a/drivers/usb/mtu3/Kconfig b/drivers/usb/mtu3/Kconfig
+index 40bbf1f53337..fe58904f350b 100644
+--- a/drivers/usb/mtu3/Kconfig
++++ b/drivers/usb/mtu3/Kconfig
+@@ -4,6 +4,7 @@ config USB_MTU3
+ 	tristate "MediaTek USB3 Dual Role controller"
+ 	depends on USB || USB_GADGET
+ 	depends on ARCH_MEDIATEK || COMPILE_TEST
++	depends on EXTCON || !EXTCON
+ 	select USB_XHCI_MTK if USB_SUPPORT && USB_XHCI_HCD
+ 	help
+ 	  Say Y or M here if your system runs on MediaTek SoCs with
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index 4c66edf533fe..e732949f6567 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -80,6 +80,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x10C4, 0x804E) }, /* Software Bisque Paramount ME build-in converter */
+ 	{ USB_DEVICE(0x10C4, 0x8053) }, /* Enfora EDG1228 */
+ 	{ USB_DEVICE(0x10C4, 0x8054) }, /* Enfora GSM2228 */
++	{ USB_DEVICE(0x10C4, 0x8056) }, /* Lorenz Messtechnik devices */
+ 	{ USB_DEVICE(0x10C4, 0x8066) }, /* Argussoft In-System Programmer */
+ 	{ USB_DEVICE(0x10C4, 0x806F) }, /* IMS USB to RS422 Converter Cable */
+ 	{ USB_DEVICE(0x10C4, 0x807A) }, /* Crumb128 board */
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index 8f5b17471759..1d8461ae2c34 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -609,6 +609,8 @@ static const struct usb_device_id id_table_combined[] = {
+ 		.driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
+ 	{ USB_DEVICE(FTDI_VID, FTDI_NT_ORIONLXM_PID),
+ 		.driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
++	{ USB_DEVICE(FTDI_VID, FTDI_NT_ORIONLX_PLUS_PID) },
++	{ USB_DEVICE(FTDI_VID, FTDI_NT_ORION_IO_PID) },
+ 	{ USB_DEVICE(FTDI_VID, FTDI_SYNAPSE_SS200_PID) },
+ 	{ USB_DEVICE(FTDI_VID, FTDI_CUSTOMWARE_MINIPLEX_PID) },
+ 	{ USB_DEVICE(FTDI_VID, FTDI_CUSTOMWARE_MINIPLEX2_PID) },
+diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
+index b863bedb55a1..5755f0df0025 100644
+--- a/drivers/usb/serial/ftdi_sio_ids.h
++++ b/drivers/usb/serial/ftdi_sio_ids.h
+@@ -567,7 +567,9 @@
+ /*
+  * NovaTech product ids (FTDI_VID)
+  */
+-#define FTDI_NT_ORIONLXM_PID	0x7c90	/* OrionLXm Substation Automation Platform */
++#define FTDI_NT_ORIONLXM_PID		0x7c90	/* OrionLXm Substation Automation Platform */
++#define FTDI_NT_ORIONLX_PLUS_PID	0x7c91	/* OrionLX+ Substation Automation Platform */
++#define FTDI_NT_ORION_IO_PID		0x7c92	/* Orion I/O */
+ 
+ /*
+  * Synapse Wireless product ids (FTDI_VID)
+diff --git a/drivers/usb/serial/mos7720.c b/drivers/usb/serial/mos7720.c
+index fc52ac75fbf6..18110225d506 100644
+--- a/drivers/usb/serial/mos7720.c
++++ b/drivers/usb/serial/mos7720.c
+@@ -366,8 +366,6 @@ static int write_parport_reg_nonblock(struct mos7715_parport *mos_parport,
+ 	if (!urbtrack)
+ 		return -ENOMEM;
+ 
+-	kref_get(&mos_parport->ref_count);
+-	urbtrack->mos_parport = mos_parport;
+ 	urbtrack->urb = usb_alloc_urb(0, GFP_ATOMIC);
+ 	if (!urbtrack->urb) {
+ 		kfree(urbtrack);
+@@ -388,6 +386,8 @@ static int write_parport_reg_nonblock(struct mos7715_parport *mos_parport,
+ 			     usb_sndctrlpipe(usbdev, 0),
+ 			     (unsigned char *)urbtrack->setup,
+ 			     NULL, 0, async_complete, urbtrack);
++	kref_get(&mos_parport->ref_count);
++	urbtrack->mos_parport = mos_parport;
+ 	kref_init(&urbtrack->ref_count);
+ 	INIT_LIST_HEAD(&urbtrack->urblist_entry);
+ 
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 11b21d9410f3..83869065b802 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -246,6 +246,7 @@ static void option_instat_callback(struct urb *urb);
+ #define QUECTEL_PRODUCT_EC25			0x0125
+ #define QUECTEL_PRODUCT_BG96			0x0296
+ #define QUECTEL_PRODUCT_EP06			0x0306
++#define QUECTEL_PRODUCT_EM12			0x0512
+ 
+ #define CMOTECH_VENDOR_ID			0x16d8
+ #define CMOTECH_PRODUCT_6001			0x6001
+@@ -1066,7 +1067,8 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = RSVD(3) },
+ 	{ USB_DEVICE(QUALCOMM_VENDOR_ID, 0x6613)}, /* Onda H600/ZTE MF330 */
+ 	{ USB_DEVICE(QUALCOMM_VENDOR_ID, 0x0023)}, /* ONYX 3G device */
+-	{ USB_DEVICE(QUALCOMM_VENDOR_ID, 0x9000)}, /* SIMCom SIM5218 */
++	{ USB_DEVICE(QUALCOMM_VENDOR_ID, 0x9000), /* SIMCom SIM5218 */
++	  .driver_info = NCTRL(0) | NCTRL(1) | NCTRL(2) | NCTRL(3) | RSVD(4) },
+ 	/* Quectel products using Qualcomm vendor ID */
+ 	{ USB_DEVICE(QUALCOMM_VENDOR_ID, QUECTEL_PRODUCT_UC15)},
+ 	{ USB_DEVICE(QUALCOMM_VENDOR_ID, QUECTEL_PRODUCT_UC20),
+@@ -1087,6 +1089,9 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0xff, 0xff),
+ 	  .driver_info = RSVD(1) | RSVD(2) | RSVD(3) | RSVD(4) | NUMEP2 },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0, 0) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0xff, 0xff),
++	  .driver_info = RSVD(1) | RSVD(2) | RSVD(3) | RSVD(4) | NUMEP2 },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0, 0) },
+ 	{ USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6001) },
+ 	{ USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CMU_300) },
+ 	{ USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6003),
+@@ -1940,10 +1945,12 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = RSVD(4) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2001, 0x7e35, 0xff),			/* D-Link DWM-222 */
+ 	  .driver_info = RSVD(4) },
+-	{ USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e01, 0xff, 0xff, 0xff) }, /* D-Link DWM-152/C1 */
+-	{ USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e02, 0xff, 0xff, 0xff) }, /* D-Link DWM-156/C1 */
+-	{ USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x7e11, 0xff, 0xff, 0xff) }, /* D-Link DWM-156/A3 */
+-	{ USB_DEVICE_INTERFACE_CLASS(0x2020, 0x4000, 0xff) },                /* OLICARD300 - MT6225 */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e01, 0xff, 0xff, 0xff) },	/* D-Link DWM-152/C1 */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e02, 0xff, 0xff, 0xff) },	/* D-Link DWM-156/C1 */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x7e11, 0xff, 0xff, 0xff) },	/* D-Link DWM-156/A3 */
++	{ USB_DEVICE_INTERFACE_CLASS(0x2020, 0x2031, 0xff),			/* Olicard 600 */
++	  .driver_info = RSVD(4) },
++	{ USB_DEVICE_INTERFACE_CLASS(0x2020, 0x4000, 0xff) },			/* OLICARD300 - MT6225 */
+ 	{ USB_DEVICE(INOVIA_VENDOR_ID, INOVIA_SEW858) },
+ 	{ USB_DEVICE(VIATELECOM_VENDOR_ID, VIATELECOM_PRODUCT_CDS7) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(WETELECOM_VENDOR_ID, WETELECOM_PRODUCT_WMD200, 0xff, 0xff, 0xff) },
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index f1c39a3c7534..d34e945e5d09 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -37,6 +37,7 @@
+ 	S(SRC_ATTACHED),			\
+ 	S(SRC_STARTUP),				\
+ 	S(SRC_SEND_CAPABILITIES),		\
++	S(SRC_SEND_CAPABILITIES_TIMEOUT),	\
+ 	S(SRC_NEGOTIATE_CAPABILITIES),		\
+ 	S(SRC_TRANSITION_SUPPLY),		\
+ 	S(SRC_READY),				\
+@@ -2966,10 +2967,34 @@ static void run_state_machine(struct tcpm_port *port)
+ 			/* port->hard_reset_count = 0; */
+ 			port->caps_count = 0;
+ 			port->pd_capable = true;
+-			tcpm_set_state_cond(port, hard_reset_state(port),
++			tcpm_set_state_cond(port, SRC_SEND_CAPABILITIES_TIMEOUT,
+ 					    PD_T_SEND_SOURCE_CAP);
+ 		}
+ 		break;
++	case SRC_SEND_CAPABILITIES_TIMEOUT:
++		/*
++		 * Error recovery for a PD_DATA_SOURCE_CAP reply timeout.
++		 *
++		 * PD 2.0 sinks are supposed to accept src-capabilities with a
++		 * 3.0 header and simply ignore any src PDOs which the sink does
++		 * not understand such as PPS but some 2.0 sinks instead ignore
++		 * the entire PD_DATA_SOURCE_CAP message, causing contract
++		 * negotiation to fail.
++		 *
++		 * After PD_N_HARD_RESET_COUNT hard-reset attempts, we try
++		 * sending src-capabilities with a lower PD revision to
++		 * make these broken sinks work.
++		 */
++		if (port->hard_reset_count < PD_N_HARD_RESET_COUNT) {
++			tcpm_set_state(port, HARD_RESET_SEND, 0);
++		} else if (port->negotiated_rev > PD_REV20) {
++			port->negotiated_rev--;
++			port->hard_reset_count = 0;
++			tcpm_set_state(port, SRC_SEND_CAPABILITIES, 0);
++		} else {
++			tcpm_set_state(port, hard_reset_state(port), 0);
++		}
++		break;
+ 	case SRC_NEGOTIATE_CAPABILITIES:
+ 		ret = tcpm_pd_check_request(port);
+ 		if (ret < 0) {
+diff --git a/drivers/usb/typec/tcpm/wcove.c b/drivers/usb/typec/tcpm/wcove.c
+index 423208e19383..6770afd40765 100644
+--- a/drivers/usb/typec/tcpm/wcove.c
++++ b/drivers/usb/typec/tcpm/wcove.c
+@@ -615,8 +615,13 @@ static int wcove_typec_probe(struct platform_device *pdev)
+ 	wcove->dev = &pdev->dev;
+ 	wcove->regmap = pmic->regmap;
+ 
+-	irq = regmap_irq_get_virq(pmic->irq_chip_data_chgr,
+-				  platform_get_irq(pdev, 0));
++	irq = platform_get_irq(pdev, 0);
++	if (irq < 0) {
++		dev_err(&pdev->dev, "Failed to get IRQ: %d\n", irq);
++		return irq;
++	}
++
++	irq = regmap_irq_get_virq(pmic->irq_chip_data_chgr, irq);
+ 	if (irq < 0)
+ 		return irq;
+ 
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index d81035b7ea7d..0a6615573351 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -6115,7 +6115,7 @@ static void btrfs_calculate_inode_block_rsv_size(struct btrfs_fs_info *fs_info,
+ 	 *
+ 	 * This is overestimating in most cases.
+ 	 */
+-	qgroup_rsv_size = outstanding_extents * fs_info->nodesize;
++	qgroup_rsv_size = (u64)outstanding_extents * fs_info->nodesize;
+ 
+ 	spin_lock(&block_rsv->lock);
+ 	block_rsv->size = reserve_size;
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index 4e473a998219..543dd5e66f31 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -1917,8 +1917,8 @@ static int qgroup_trace_new_subtree_blocks(struct btrfs_trans_handle* trans,
+ 	int i;
+ 
+ 	/* Level sanity check */
+-	if (cur_level < 0 || cur_level >= BTRFS_MAX_LEVEL ||
+-	    root_level < 0 || root_level >= BTRFS_MAX_LEVEL ||
++	if (cur_level < 0 || cur_level >= BTRFS_MAX_LEVEL - 1 ||
++	    root_level < 0 || root_level >= BTRFS_MAX_LEVEL - 1 ||
+ 	    root_level < cur_level) {
+ 		btrfs_err_rl(fs_info,
+ 			"%s: bad levels, cur_level=%d root_level=%d",
+diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
+index e74455eb42f9..6976e2280771 100644
+--- a/fs/btrfs/raid56.c
++++ b/fs/btrfs/raid56.c
+@@ -2429,8 +2429,9 @@ static noinline void finish_parity_scrub(struct btrfs_raid_bio *rbio,
+ 			bitmap_clear(rbio->dbitmap, pagenr, 1);
+ 		kunmap(p);
+ 
+-		for (stripe = 0; stripe < rbio->real_stripes; stripe++)
++		for (stripe = 0; stripe < nr_data; stripe++)
+ 			kunmap(page_in_rbio(rbio, stripe, pagenr, 0));
++		kunmap(p_page);
+ 	}
+ 
+ 	__free_page(p_page);
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index ac232b3d6d7e..7f3b74a55073 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -3517,9 +3517,16 @@ static noinline int log_dir_items(struct btrfs_trans_handle *trans,
+ 	}
+ 	btrfs_release_path(path);
+ 
+-	/* find the first key from this transaction again */
++	/*
++	 * Find the first key from this transaction again.  See the note for
++	 * log_new_dir_dentries, if we're logging a directory recursively we
++	 * won't be holding its i_mutex, which means we can modify the directory
++	 * while we're logging it.  If we remove an entry between our first
++	 * search and this search we'll not find the key again and can just
++	 * bail.
++	 */
+ 	ret = btrfs_search_slot(NULL, root, &min_key, path, 0, 0);
+-	if (WARN_ON(ret != 0))
++	if (ret != 0)
+ 		goto done;
+ 
+ 	/*
+@@ -4481,6 +4488,19 @@ static int logged_inode_size(struct btrfs_root *log, struct btrfs_inode *inode,
+ 		item = btrfs_item_ptr(path->nodes[0], path->slots[0],
+ 				      struct btrfs_inode_item);
+ 		*size_ret = btrfs_inode_size(path->nodes[0], item);
++		/*
++		 * If the in-memory inode's i_size is smaller then the inode
++		 * size stored in the btree, return the inode's i_size, so
++		 * that we get a correct inode size after replaying the log
++		 * when before a power failure we had a shrinking truncate
++		 * followed by addition of a new name (rename / new hard link).
++		 * Otherwise return the inode size from the btree, to avoid
++		 * data loss when replaying a log due to previously doing a
++		 * write that expands the inode's size and logging a new name
++		 * immediately after.
++		 */
++		if (*size_ret > inode->vfs_inode.i_size)
++			*size_ret = inode->vfs_inode.i_size;
+ 	}
+ 
+ 	btrfs_release_path(path);
+@@ -4642,15 +4662,8 @@ static int btrfs_log_trailing_hole(struct btrfs_trans_handle *trans,
+ 					struct btrfs_file_extent_item);
+ 
+ 		if (btrfs_file_extent_type(leaf, extent) ==
+-		    BTRFS_FILE_EXTENT_INLINE) {
+-			len = btrfs_file_extent_ram_bytes(leaf, extent);
+-			ASSERT(len == i_size ||
+-			       (len == fs_info->sectorsize &&
+-				btrfs_file_extent_compression(leaf, extent) !=
+-				BTRFS_COMPRESS_NONE) ||
+-			       (len < i_size && i_size < fs_info->sectorsize));
++		    BTRFS_FILE_EXTENT_INLINE)
+ 			return 0;
+-		}
+ 
+ 		len = btrfs_file_extent_num_bytes(leaf, extent);
+ 		/* Last extent goes beyond i_size, no need to log a hole. */
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 48523bcabae9..88a323a453d8 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -6413,7 +6413,7 @@ static void btrfs_end_bio(struct bio *bio)
+ 				if (bio_op(bio) == REQ_OP_WRITE)
+ 					btrfs_dev_stat_inc_and_print(dev,
+ 						BTRFS_DEV_STAT_WRITE_ERRS);
+-				else
++				else if (!(bio->bi_opf & REQ_RAHEAD))
+ 					btrfs_dev_stat_inc_and_print(dev,
+ 						BTRFS_DEV_STAT_READ_ERRS);
+ 				if (bio->bi_opf & REQ_PREFLUSH)
+diff --git a/fs/lockd/host.c b/fs/lockd/host.c
+index 93fb7cf0b92b..f0b5c987d6ae 100644
+--- a/fs/lockd/host.c
++++ b/fs/lockd/host.c
+@@ -290,12 +290,11 @@ void nlmclnt_release_host(struct nlm_host *host)
+ 
+ 	WARN_ON_ONCE(host->h_server);
+ 
+-	if (refcount_dec_and_test(&host->h_count)) {
++	if (refcount_dec_and_mutex_lock(&host->h_count, &nlm_host_mutex)) {
+ 		WARN_ON_ONCE(!list_empty(&host->h_lockowners));
+ 		WARN_ON_ONCE(!list_empty(&host->h_granted));
+ 		WARN_ON_ONCE(!list_empty(&host->h_reclaim));
+ 
+-		mutex_lock(&nlm_host_mutex);
+ 		nlm_destroy_host_locked(host);
+ 		mutex_unlock(&nlm_host_mutex);
+ 	}
+diff --git a/fs/locks.c b/fs/locks.c
+index ff6af2c32601..5f468cd95f68 100644
+--- a/fs/locks.c
++++ b/fs/locks.c
+@@ -1160,6 +1160,11 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
+ 			 */
+ 			error = -EDEADLK;
+ 			spin_lock(&blocked_lock_lock);
++			/*
++			 * Ensure that we don't find any locks blocked on this
++			 * request during deadlock detection.
++			 */
++			__locks_wake_up_blocks(request);
+ 			if (likely(!posix_locks_deadlock(request, fl))) {
+ 				error = FILE_LOCK_DEFERRED;
+ 				__locks_insert_block(fl, request,
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 64ac80ec6b7b..44258c516305 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -2938,7 +2938,8 @@ static int _nfs4_open_and_get_state(struct nfs4_opendata *opendata,
+ 	}
+ 
+ out:
+-	nfs4_sequence_free_slot(&opendata->o_res.seq_res);
++	if (!opendata->cancelled)
++		nfs4_sequence_free_slot(&opendata->o_res.seq_res);
+ 	return ret;
+ }
+ 
+@@ -6306,7 +6307,6 @@ static struct nfs4_unlockdata *nfs4_alloc_unlockdata(struct file_lock *fl,
+ 	p->arg.seqid = seqid;
+ 	p->res.seqid = seqid;
+ 	p->lsp = lsp;
+-	refcount_inc(&lsp->ls_count);
+ 	/* Ensure we don't close file until we're done freeing locks! */
+ 	p->ctx = get_nfs_open_context(ctx);
+ 	p->l_ctx = nfs_get_lock_context(ctx);
+@@ -6531,7 +6531,6 @@ static struct nfs4_lockdata *nfs4_alloc_lockdata(struct file_lock *fl,
+ 	p->res.lock_seqid = p->arg.lock_seqid;
+ 	p->lsp = lsp;
+ 	p->server = server;
+-	refcount_inc(&lsp->ls_count);
+ 	p->ctx = get_nfs_open_context(ctx);
+ 	locks_init_lock(&p->fl);
+ 	locks_copy_lock(&p->fl, fl);
+diff --git a/fs/ocfs2/refcounttree.c b/fs/ocfs2/refcounttree.c
+index a35259eebc56..1dc9a08e8bdc 100644
+--- a/fs/ocfs2/refcounttree.c
++++ b/fs/ocfs2/refcounttree.c
+@@ -4719,22 +4719,23 @@ out:
+ 
+ /* Lock an inode and grab a bh pointing to the inode. */
+ int ocfs2_reflink_inodes_lock(struct inode *s_inode,
+-			      struct buffer_head **bh1,
++			      struct buffer_head **bh_s,
+ 			      struct inode *t_inode,
+-			      struct buffer_head **bh2)
++			      struct buffer_head **bh_t)
+ {
+-	struct inode *inode1;
+-	struct inode *inode2;
++	struct inode *inode1 = s_inode;
++	struct inode *inode2 = t_inode;
+ 	struct ocfs2_inode_info *oi1;
+ 	struct ocfs2_inode_info *oi2;
++	struct buffer_head *bh1 = NULL;
++	struct buffer_head *bh2 = NULL;
+ 	bool same_inode = (s_inode == t_inode);
++	bool need_swap = (inode1->i_ino > inode2->i_ino);
+ 	int status;
+ 
+ 	/* First grab the VFS and rw locks. */
+ 	lock_two_nondirectories(s_inode, t_inode);
+-	inode1 = s_inode;
+-	inode2 = t_inode;
+-	if (inode1->i_ino > inode2->i_ino)
++	if (need_swap)
+ 		swap(inode1, inode2);
+ 
+ 	status = ocfs2_rw_lock(inode1, 1);
+@@ -4757,17 +4758,13 @@ int ocfs2_reflink_inodes_lock(struct inode *s_inode,
+ 	trace_ocfs2_double_lock((unsigned long long)oi1->ip_blkno,
+ 				(unsigned long long)oi2->ip_blkno);
+ 
+-	if (*bh1)
+-		*bh1 = NULL;
+-	if (*bh2)
+-		*bh2 = NULL;
+-
+ 	/* We always want to lock the one with the lower lockid first. */
+ 	if (oi1->ip_blkno > oi2->ip_blkno)
+ 		mlog_errno(-ENOLCK);
+ 
+ 	/* lock id1 */
+-	status = ocfs2_inode_lock_nested(inode1, bh1, 1, OI_LS_REFLINK_TARGET);
++	status = ocfs2_inode_lock_nested(inode1, &bh1, 1,
++					 OI_LS_REFLINK_TARGET);
+ 	if (status < 0) {
+ 		if (status != -ENOENT)
+ 			mlog_errno(status);
+@@ -4776,15 +4773,25 @@ int ocfs2_reflink_inodes_lock(struct inode *s_inode,
+ 
+ 	/* lock id2 */
+ 	if (!same_inode) {
+-		status = ocfs2_inode_lock_nested(inode2, bh2, 1,
++		status = ocfs2_inode_lock_nested(inode2, &bh2, 1,
+ 						 OI_LS_REFLINK_TARGET);
+ 		if (status < 0) {
+ 			if (status != -ENOENT)
+ 				mlog_errno(status);
+ 			goto out_cl1;
+ 		}
+-	} else
+-		*bh2 = *bh1;
++	} else {
++		bh2 = bh1;
++	}
++
++	/*
++	 * If we swapped inode order above, we have to swap the buffer heads
++	 * before passing them back to the caller.
++	 */
++	if (need_swap)
++		swap(bh1, bh2);
++	*bh_s = bh1;
++	*bh_t = bh2;
+ 
+ 	trace_ocfs2_double_lock_end(
+ 			(unsigned long long)oi1->ip_blkno,
+@@ -4794,8 +4801,7 @@ int ocfs2_reflink_inodes_lock(struct inode *s_inode,
+ 
+ out_cl1:
+ 	ocfs2_inode_unlock(inode1, 1);
+-	brelse(*bh1);
+-	*bh1 = NULL;
++	brelse(bh1);
+ out_rw2:
+ 	ocfs2_rw_unlock(inode2, 1);
+ out_i2:
+diff --git a/fs/open.c b/fs/open.c
+index 0285ce7dbd51..f1c2f855fd43 100644
+--- a/fs/open.c
++++ b/fs/open.c
+@@ -733,6 +733,12 @@ static int do_dentry_open(struct file *f,
+ 		return 0;
+ 	}
+ 
++	/* Any file opened for execve()/uselib() has to be a regular file. */
++	if (unlikely(f->f_flags & FMODE_EXEC && !S_ISREG(inode->i_mode))) {
++		error = -EACCES;
++		goto cleanup_file;
++	}
++
+ 	if (f->f_mode & FMODE_WRITE && !special_file(inode->i_mode)) {
+ 		error = get_write_access(inode);
+ 		if (unlikely(error))
+diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c
+index 4d598a399bbf..d65390727541 100644
+--- a/fs/proc/proc_sysctl.c
++++ b/fs/proc/proc_sysctl.c
+@@ -1626,7 +1626,8 @@ static void drop_sysctl_table(struct ctl_table_header *header)
+ 	if (--header->nreg)
+ 		return;
+ 
+-	put_links(header);
++	if (parent)
++		put_links(header);
+ 	start_unregistering(header);
+ 	if (!--header->count)
+ 		kfree_rcu(header, rcu);
+diff --git a/include/linux/mii.h b/include/linux/mii.h
+index 6fee8b1a4400..5cd824c1c0ca 100644
+--- a/include/linux/mii.h
++++ b/include/linux/mii.h
+@@ -469,7 +469,7 @@ static inline u32 linkmode_adv_to_lcl_adv_t(unsigned long *advertising)
+ 	if (linkmode_test_bit(ETHTOOL_LINK_MODE_Pause_BIT,
+ 			      advertising))
+ 		lcl_adv |= ADVERTISE_PAUSE_CAP;
+-	if (linkmode_test_bit(ETHTOOL_LINK_MODE_Pause_BIT,
++	if (linkmode_test_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
+ 			      advertising))
+ 		lcl_adv |= ADVERTISE_PAUSE_ASYM;
+ 
+diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
+index 4eb26d278046..280ae96dc4c3 100644
+--- a/include/linux/page-isolation.h
++++ b/include/linux/page-isolation.h
+@@ -41,16 +41,6 @@ int move_freepages_block(struct zone *zone, struct page *page,
+ 
+ /*
+  * Changes migrate type in [start_pfn, end_pfn) to be MIGRATE_ISOLATE.
+- * If specified range includes migrate types other than MOVABLE or CMA,
+- * this will fail with -EBUSY.
+- *
+- * For isolating all pages in the range finally, the caller have to
+- * free all pages in the range. test_page_isolated() can be used for
+- * test it.
+- *
+- * The following flags are allowed (they can be combined in a bit mask)
+- * SKIP_HWPOISON - ignore hwpoison pages
+- * REPORT_FAILURE - report details about the failure to isolate the range
+  */
+ int
+ start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
+diff --git a/include/linux/slab.h b/include/linux/slab.h
+index 11b45f7ae405..9449b19c5f10 100644
+--- a/include/linux/slab.h
++++ b/include/linux/slab.h
+@@ -32,6 +32,8 @@
+ #define SLAB_HWCACHE_ALIGN	((slab_flags_t __force)0x00002000U)
+ /* Use GFP_DMA memory */
+ #define SLAB_CACHE_DMA		((slab_flags_t __force)0x00004000U)
++/* Use GFP_DMA32 memory */
++#define SLAB_CACHE_DMA32	((slab_flags_t __force)0x00008000U)
+ /* DEBUG: Store the last owner for bug hunting */
+ #define SLAB_STORE_USER		((slab_flags_t __force)0x00010000U)
+ /* Panic if kmem_cache_create() fails */
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index b4984bbbe157..3d58acf94dd2 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -416,7 +416,8 @@ struct nft_set {
+ 	unsigned char			*udata;
+ 	/* runtime data below here */
+ 	const struct nft_set_ops	*ops ____cacheline_aligned;
+-	u16				flags:14,
++	u16				flags:13,
++					bound:1,
+ 					genmask:2;
+ 	u8				klen;
+ 	u8				dlen;
+@@ -1329,15 +1330,12 @@ struct nft_trans_rule {
+ struct nft_trans_set {
+ 	struct nft_set			*set;
+ 	u32				set_id;
+-	bool				bound;
+ };
+ 
+ #define nft_trans_set(trans)	\
+ 	(((struct nft_trans_set *)trans->data)->set)
+ #define nft_trans_set_id(trans)	\
+ 	(((struct nft_trans_set *)trans->data)->set_id)
+-#define nft_trans_set_bound(trans)	\
+-	(((struct nft_trans_set *)trans->data)->bound)
+ 
+ struct nft_trans_chain {
+ 	bool				update;
+diff --git a/include/net/sctp/checksum.h b/include/net/sctp/checksum.h
+index 32ee65a30aff..1c6e6c0766ca 100644
+--- a/include/net/sctp/checksum.h
++++ b/include/net/sctp/checksum.h
+@@ -61,7 +61,7 @@ static inline __wsum sctp_csum_combine(__wsum csum, __wsum csum2,
+ static inline __le32 sctp_compute_cksum(const struct sk_buff *skb,
+ 					unsigned int offset)
+ {
+-	struct sctphdr *sh = sctp_hdr(skb);
++	struct sctphdr *sh = (struct sctphdr *)(skb->data + offset);
+ 	const struct skb_checksum_ops ops = {
+ 		.update  = sctp_csum_update,
+ 		.combine = sctp_csum_combine,
+diff --git a/include/net/sock.h b/include/net/sock.h
+index f43f935cb113..89d0d94d5db2 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -710,6 +710,12 @@ static inline void sk_add_node_rcu(struct sock *sk, struct hlist_head *list)
+ 		hlist_add_head_rcu(&sk->sk_node, list);
+ }
+ 
++static inline void sk_add_node_tail_rcu(struct sock *sk, struct hlist_head *list)
++{
++	sock_hold(sk);
++	hlist_add_tail_rcu(&sk->sk_node, list);
++}
++
+ static inline void __sk_nulls_add_node_rcu(struct sock *sk, struct hlist_nulls_head *list)
+ {
+ 	hlist_nulls_add_head_rcu(&sk->sk_nulls_node, list);
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 5fcce2f4209d..d53825b6fcd9 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -3187,7 +3187,7 @@ do_sim:
+ 		*dst_reg = *ptr_reg;
+ 	}
+ 	ret = push_stack(env, env->insn_idx + 1, env->insn_idx, true);
+-	if (!ptr_is_dst_reg)
++	if (!ptr_is_dst_reg && ret)
+ 		*dst_reg = tmp;
+ 	return !ret ? -EFAULT : 0;
+ }
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index d1c6d152da89..47f695d80dd1 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -555,6 +555,20 @@ static void undo_cpu_up(unsigned int cpu, struct cpuhp_cpu_state *st)
+ 		cpuhp_invoke_callback(cpu, st->state, false, NULL, NULL);
+ }
+ 
++static inline bool can_rollback_cpu(struct cpuhp_cpu_state *st)
++{
++	if (IS_ENABLED(CONFIG_HOTPLUG_CPU))
++		return true;
++	/*
++	 * When CPU hotplug is disabled, then taking the CPU down is not
++	 * possible because takedown_cpu() and the architecture and
++	 * subsystem specific mechanisms are not available. So the CPU
++	 * which would be completely unplugged again needs to stay around
++	 * in the current state.
++	 */
++	return st->state <= CPUHP_BRINGUP_CPU;
++}
++
+ static int cpuhp_up_callbacks(unsigned int cpu, struct cpuhp_cpu_state *st,
+ 			      enum cpuhp_state target)
+ {
+@@ -565,8 +579,10 @@ static int cpuhp_up_callbacks(unsigned int cpu, struct cpuhp_cpu_state *st,
+ 		st->state++;
+ 		ret = cpuhp_invoke_callback(cpu, st->state, true, NULL, NULL);
+ 		if (ret) {
+-			st->target = prev_state;
+-			undo_cpu_up(cpu, st);
++			if (can_rollback_cpu(st)) {
++				st->target = prev_state;
++				undo_cpu_up(cpu, st);
++			}
+ 			break;
+ 		}
+ 	}
+diff --git a/kernel/trace/trace_dynevent.c b/kernel/trace/trace_dynevent.c
+index dd1f43588d70..fa100ed3b4de 100644
+--- a/kernel/trace/trace_dynevent.c
++++ b/kernel/trace/trace_dynevent.c
+@@ -74,7 +74,7 @@ int dyn_event_release(int argc, char **argv, struct dyn_event_operations *type)
+ static int create_dyn_event(int argc, char **argv)
+ {
+ 	struct dyn_event_operations *ops;
+-	int ret;
++	int ret = -ENODEV;
+ 
+ 	if (argv[0][0] == '-' || argv[0][0] == '!')
+ 		return dyn_event_release(argc, argv, NULL);
+diff --git a/kernel/watchdog.c b/kernel/watchdog.c
+index 977918d5d350..bbc4940f21af 100644
+--- a/kernel/watchdog.c
++++ b/kernel/watchdog.c
+@@ -547,13 +547,15 @@ static void softlockup_start_all(void)
+ 
+ int lockup_detector_online_cpu(unsigned int cpu)
+ {
+-	watchdog_enable(cpu);
++	if (cpumask_test_cpu(cpu, &watchdog_allowed_mask))
++		watchdog_enable(cpu);
+ 	return 0;
+ }
+ 
+ int lockup_detector_offline_cpu(unsigned int cpu)
+ {
+-	watchdog_disable(cpu);
++	if (cpumask_test_cpu(cpu, &watchdog_allowed_mask))
++		watchdog_disable(cpu);
+ 	return 0;
+ }
+ 
+diff --git a/lib/rhashtable.c b/lib/rhashtable.c
+index 852ffa5160f1..4edcf3310513 100644
+--- a/lib/rhashtable.c
++++ b/lib/rhashtable.c
+@@ -416,8 +416,12 @@ static void rht_deferred_worker(struct work_struct *work)
+ 	else if (tbl->nest)
+ 		err = rhashtable_rehash_alloc(ht, tbl, tbl->size);
+ 
+-	if (!err)
+-		err = rhashtable_rehash_table(ht);
++	if (!err || err == -EEXIST) {
++		int nerr;
++
++		nerr = rhashtable_rehash_table(ht);
++		err = err ?: nerr;
++	}
+ 
+ 	mutex_unlock(&ht->mutex);
+ 
+diff --git a/mm/debug.c b/mm/debug.c
+index 1611cf00a137..854d5f84047d 100644
+--- a/mm/debug.c
++++ b/mm/debug.c
+@@ -79,7 +79,7 @@ void __dump_page(struct page *page, const char *reason)
+ 		pr_warn("ksm ");
+ 	else if (mapping) {
+ 		pr_warn("%ps ", mapping->a_ops);
+-		if (mapping->host->i_dentry.first) {
++		if (mapping->host && mapping->host->i_dentry.first) {
+ 			struct dentry *dentry;
+ 			dentry = container_of(mapping->host->i_dentry.first, struct dentry, d_u.d_alias);
+ 			pr_warn("name:\"%pd\" ", dentry);
+diff --git a/mm/memory.c b/mm/memory.c
+index e8d69ade5acc..8d3f38fa530d 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -1546,10 +1546,12 @@ static vm_fault_t insert_pfn(struct vm_area_struct *vma, unsigned long addr,
+ 				WARN_ON_ONCE(!is_zero_pfn(pte_pfn(*pte)));
+ 				goto out_unlock;
+ 			}
+-			entry = *pte;
+-			goto out_mkwrite;
+-		} else
+-			goto out_unlock;
++			entry = pte_mkyoung(*pte);
++			entry = maybe_mkwrite(pte_mkdirty(entry), vma);
++			if (ptep_set_access_flags(vma, addr, pte, entry, 1))
++				update_mmu_cache(vma, addr, pte);
++		}
++		goto out_unlock;
+ 	}
+ 
+ 	/* Ok, finally just insert the thing.. */
+@@ -1558,7 +1560,6 @@ static vm_fault_t insert_pfn(struct vm_area_struct *vma, unsigned long addr,
+ 	else
+ 		entry = pte_mkspecial(pfn_t_pte(pfn, prot));
+ 
+-out_mkwrite:
+ 	if (mkwrite) {
+ 		entry = pte_mkyoung(entry);
+ 		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index 1ad28323fb9f..11593a03c051 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -1560,7 +1560,7 @@ static int __ref __offline_pages(unsigned long start_pfn,
+ {
+ 	unsigned long pfn, nr_pages;
+ 	long offlined_pages;
+-	int ret, node;
++	int ret, node, nr_isolate_pageblock;
+ 	unsigned long flags;
+ 	unsigned long valid_start, valid_end;
+ 	struct zone *zone;
+@@ -1586,10 +1586,11 @@ static int __ref __offline_pages(unsigned long start_pfn,
+ 	ret = start_isolate_page_range(start_pfn, end_pfn,
+ 				       MIGRATE_MOVABLE,
+ 				       SKIP_HWPOISON | REPORT_FAILURE);
+-	if (ret) {
++	if (ret < 0) {
+ 		reason = "failure to isolate range";
+ 		goto failed_removal;
+ 	}
++	nr_isolate_pageblock = ret;
+ 
+ 	arg.start_pfn = start_pfn;
+ 	arg.nr_pages = nr_pages;
+@@ -1642,8 +1643,16 @@ static int __ref __offline_pages(unsigned long start_pfn,
+ 	/* Ok, all of our target is isolated.
+ 	   We cannot do rollback at this point. */
+ 	offline_isolated_pages(start_pfn, end_pfn);
+-	/* reset pagetype flags and makes migrate type to be MOVABLE */
+-	undo_isolate_page_range(start_pfn, end_pfn, MIGRATE_MOVABLE);
++
++	/*
++	 * Onlining will reset pagetype flags and makes migrate type
++	 * MOVABLE, so just need to decrease the number of isolated
++	 * pageblocks zone counter here.
++	 */
++	spin_lock_irqsave(&zone->lock, flags);
++	zone->nr_isolate_pageblock -= nr_isolate_pageblock;
++	spin_unlock_irqrestore(&zone->lock, flags);
++
+ 	/* removal success */
+ 	adjust_managed_page_count(pfn_to_page(start_pfn), -offlined_pages);
+ 	zone->present_pages -= offlined_pages;
+@@ -1675,12 +1684,12 @@ static int __ref __offline_pages(unsigned long start_pfn,
+ 
+ failed_removal_isolated:
+ 	undo_isolate_page_range(start_pfn, end_pfn, MIGRATE_MOVABLE);
++	memory_notify(MEM_CANCEL_OFFLINE, &arg);
+ failed_removal:
+ 	pr_debug("memory offlining [mem %#010llx-%#010llx] failed due to %s\n",
+ 		 (unsigned long long) start_pfn << PAGE_SHIFT,
+ 		 ((unsigned long long) end_pfn << PAGE_SHIFT) - 1,
+ 		 reason);
+-	memory_notify(MEM_CANCEL_OFFLINE, &arg);
+ 	/* pushback to free area */
+ 	mem_hotplug_done();
+ 	return ret;
+diff --git a/mm/mempolicy.c b/mm/mempolicy.c
+index ee2bce59d2bf..6bc9786aad6e 100644
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -428,6 +428,13 @@ static inline bool queue_pages_required(struct page *page,
+ 	return node_isset(nid, *qp->nmask) == !(flags & MPOL_MF_INVERT);
+ }
+ 
++/*
++ * queue_pages_pmd() has three possible return values:
++ * 1 - pages are placed on the right node or queued successfully.
++ * 0 - THP was split.
++ * -EIO - is migration entry or MPOL_MF_STRICT was specified and an existing
++ *        page was already on a node that does not follow the policy.
++ */
+ static int queue_pages_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
+ 				unsigned long end, struct mm_walk *walk)
+ {
+@@ -437,7 +444,7 @@ static int queue_pages_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
+ 	unsigned long flags;
+ 
+ 	if (unlikely(is_pmd_migration_entry(*pmd))) {
+-		ret = 1;
++		ret = -EIO;
+ 		goto unlock;
+ 	}
+ 	page = pmd_page(*pmd);
+@@ -454,8 +461,15 @@ static int queue_pages_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
+ 	ret = 1;
+ 	flags = qp->flags;
+ 	/* go to thp migration */
+-	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL))
++	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
++		if (!vma_migratable(walk->vma)) {
++			ret = -EIO;
++			goto unlock;
++		}
++
+ 		migrate_page_add(page, qp->pagelist, flags);
++	} else
++		ret = -EIO;
+ unlock:
+ 	spin_unlock(ptl);
+ out:
+@@ -480,8 +494,10 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
+ 	ptl = pmd_trans_huge_lock(pmd, vma);
+ 	if (ptl) {
+ 		ret = queue_pages_pmd(pmd, ptl, addr, end, walk);
+-		if (ret)
++		if (ret > 0)
+ 			return 0;
++		else if (ret < 0)
++			return ret;
+ 	}
+ 
+ 	if (pmd_trans_unstable(pmd))
+@@ -502,11 +518,16 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
+ 			continue;
+ 		if (!queue_pages_required(page, qp))
+ 			continue;
+-		migrate_page_add(page, qp->pagelist, flags);
++		if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
++			if (!vma_migratable(vma))
++				break;
++			migrate_page_add(page, qp->pagelist, flags);
++		} else
++			break;
+ 	}
+ 	pte_unmap_unlock(pte - 1, ptl);
+ 	cond_resched();
+-	return 0;
++	return addr != end ? -EIO : 0;
+ }
+ 
+ static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
+@@ -576,7 +597,12 @@ static int queue_pages_test_walk(unsigned long start, unsigned long end,
+ 	unsigned long endvma = vma->vm_end;
+ 	unsigned long flags = qp->flags;
+ 
+-	if (!vma_migratable(vma))
++	/*
++	 * Need check MPOL_MF_STRICT to return -EIO if possible
++	 * regardless of vma_migratable
++	 */
++	if (!vma_migratable(vma) &&
++	    !(flags & MPOL_MF_STRICT))
+ 		return 1;
+ 
+ 	if (endvma > end)
+@@ -603,7 +629,7 @@ static int queue_pages_test_walk(unsigned long start, unsigned long end,
+ 	}
+ 
+ 	/* queue pages from current vma */
+-	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL))
++	if (flags & MPOL_MF_VALID)
+ 		return 0;
+ 	return 1;
+ }
+diff --git a/mm/migrate.c b/mm/migrate.c
+index 181f5d2718a9..76e237b4610c 100644
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -248,10 +248,8 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
+ 				pte = swp_entry_to_pte(entry);
+ 			} else if (is_device_public_page(new)) {
+ 				pte = pte_mkdevmap(pte);
+-				flush_dcache_page(new);
+ 			}
+-		} else
+-			flush_dcache_page(new);
++		}
+ 
+ #ifdef CONFIG_HUGETLB_PAGE
+ 		if (PageHuge(new)) {
+@@ -995,6 +993,13 @@ static int move_to_new_page(struct page *newpage, struct page *page,
+ 		 */
+ 		if (!PageMappingFlags(page))
+ 			page->mapping = NULL;
++
++		if (unlikely(is_zone_device_page(newpage))) {
++			if (is_device_public_page(newpage))
++				flush_dcache_page(newpage);
++		} else
++			flush_dcache_page(newpage);
++
+ 	}
+ out:
+ 	return rc;
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 0b9f577b1a2a..11dc3c0e8728 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -8160,7 +8160,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
+ 
+ 	ret = start_isolate_page_range(pfn_max_align_down(start),
+ 				       pfn_max_align_up(end), migratetype, 0);
+-	if (ret)
++	if (ret < 0)
+ 		return ret;
+ 
+ 	/*
+diff --git a/mm/page_isolation.c b/mm/page_isolation.c
+index ce323e56b34d..019280712e1b 100644
+--- a/mm/page_isolation.c
++++ b/mm/page_isolation.c
+@@ -59,7 +59,8 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
+ 	 * FIXME: Now, memory hotplug doesn't call shrink_slab() by itself.
+ 	 * We just check MOVABLE pages.
+ 	 */
+-	if (!has_unmovable_pages(zone, page, arg.pages_found, migratetype, flags))
++	if (!has_unmovable_pages(zone, page, arg.pages_found, migratetype,
++				 isol_flags))
+ 		ret = 0;
+ 
+ 	/*
+@@ -160,27 +161,36 @@ __first_valid_page(unsigned long pfn, unsigned long nr_pages)
+ 	return NULL;
+ }
+ 
+-/*
+- * start_isolate_page_range() -- make page-allocation-type of range of pages
+- * to be MIGRATE_ISOLATE.
+- * @start_pfn: The lower PFN of the range to be isolated.
+- * @end_pfn: The upper PFN of the range to be isolated.
+- * @migratetype: migrate type to set in error recovery.
++/**
++ * start_isolate_page_range() - make page-allocation-type of range of pages to
++ * be MIGRATE_ISOLATE.
++ * @start_pfn:		The lower PFN of the range to be isolated.
++ * @end_pfn:		The upper PFN of the range to be isolated.
++ *			start_pfn/end_pfn must be aligned to pageblock_order.
++ * @migratetype:	Migrate type to set in error recovery.
++ * @flags:		The following flags are allowed (they can be combined in
++ *			a bit mask)
++ *			SKIP_HWPOISON - ignore hwpoison pages
++ *			REPORT_FAILURE - report details about the failure to
++ *			isolate the range
+  *
+  * Making page-allocation-type to be MIGRATE_ISOLATE means free pages in
+  * the range will never be allocated. Any free pages and pages freed in the
+- * future will not be allocated again.
+- *
+- * start_pfn/end_pfn must be aligned to pageblock_order.
+- * Return 0 on success and -EBUSY if any part of range cannot be isolated.
++ * future will not be allocated again. If specified range includes migrate types
++ * other than MOVABLE or CMA, this will fail with -EBUSY. For isolating all
++ * pages in the range finally, the caller have to free all pages in the range.
++ * test_page_isolated() can be used for test it.
+  *
+  * There is no high level synchronization mechanism that prevents two threads
+- * from trying to isolate overlapping ranges.  If this happens, one thread
++ * from trying to isolate overlapping ranges. If this happens, one thread
+  * will notice pageblocks in the overlapping range already set to isolate.
+  * This happens in set_migratetype_isolate, and set_migratetype_isolate
+- * returns an error.  We then clean up by restoring the migration type on
+- * pageblocks we may have modified and return -EBUSY to caller.  This
++ * returns an error. We then clean up by restoring the migration type on
++ * pageblocks we may have modified and return -EBUSY to caller. This
+  * prevents two threads from simultaneously working on overlapping ranges.
++ *
++ * Return: the number of isolated pageblocks on success and -EBUSY if any part
++ * of range cannot be isolated.
+  */
+ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
+ 			     unsigned migratetype, int flags)
+@@ -188,6 +198,7 @@ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
+ 	unsigned long pfn;
+ 	unsigned long undo_pfn;
+ 	struct page *page;
++	int nr_isolate_pageblock = 0;
+ 
+ 	BUG_ON(!IS_ALIGNED(start_pfn, pageblock_nr_pages));
+ 	BUG_ON(!IS_ALIGNED(end_pfn, pageblock_nr_pages));
+@@ -196,13 +207,15 @@ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
+ 	     pfn < end_pfn;
+ 	     pfn += pageblock_nr_pages) {
+ 		page = __first_valid_page(pfn, pageblock_nr_pages);
+-		if (page &&
+-		    set_migratetype_isolate(page, migratetype, flags)) {
+-			undo_pfn = pfn;
+-			goto undo;
++		if (page) {
++			if (set_migratetype_isolate(page, migratetype, flags)) {
++				undo_pfn = pfn;
++				goto undo;
++			}
++			nr_isolate_pageblock++;
+ 		}
+ 	}
+-	return 0;
++	return nr_isolate_pageblock;
+ undo:
+ 	for (pfn = start_pfn;
+ 	     pfn < undo_pfn;
+diff --git a/mm/slab.c b/mm/slab.c
+index 91c1863df93d..b3e74b56a468 100644
+--- a/mm/slab.c
++++ b/mm/slab.c
+@@ -2111,6 +2111,8 @@ done:
+ 	cachep->allocflags = __GFP_COMP;
+ 	if (flags & SLAB_CACHE_DMA)
+ 		cachep->allocflags |= GFP_DMA;
++	if (flags & SLAB_CACHE_DMA32)
++		cachep->allocflags |= GFP_DMA32;
+ 	if (flags & SLAB_RECLAIM_ACCOUNT)
+ 		cachep->allocflags |= __GFP_RECLAIMABLE;
+ 	cachep->size = size;
+diff --git a/mm/slab.h b/mm/slab.h
+index 384105318779..27834ead5f14 100644
+--- a/mm/slab.h
++++ b/mm/slab.h
+@@ -127,7 +127,8 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
+ 
+ 
+ /* Legal flag mask for kmem_cache_create(), for various configurations */
+-#define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | SLAB_PANIC | \
++#define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | \
++			 SLAB_CACHE_DMA32 | SLAB_PANIC | \
+ 			 SLAB_TYPESAFE_BY_RCU | SLAB_DEBUG_OBJECTS )
+ 
+ #if defined(CONFIG_DEBUG_SLAB)
+diff --git a/mm/slab_common.c b/mm/slab_common.c
+index f9d89c1b5977..333618231f8d 100644
+--- a/mm/slab_common.c
++++ b/mm/slab_common.c
+@@ -53,7 +53,7 @@ static DECLARE_WORK(slab_caches_to_rcu_destroy_work,
+ 		SLAB_FAILSLAB | SLAB_KASAN)
+ 
+ #define SLAB_MERGE_SAME (SLAB_RECLAIM_ACCOUNT | SLAB_CACHE_DMA | \
+-			 SLAB_ACCOUNT)
++			 SLAB_CACHE_DMA32 | SLAB_ACCOUNT)
+ 
+ /*
+  * Merge control. If this is set then no merging of slab caches will occur.
+diff --git a/mm/slub.c b/mm/slub.c
+index dc777761b6b7..1673100fd534 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -3591,6 +3591,9 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
+ 	if (s->flags & SLAB_CACHE_DMA)
+ 		s->allocflags |= GFP_DMA;
+ 
++	if (s->flags & SLAB_CACHE_DMA32)
++		s->allocflags |= GFP_DMA32;
++
+ 	if (s->flags & SLAB_RECLAIM_ACCOUNT)
+ 		s->allocflags |= __GFP_RECLAIMABLE;
+ 
+@@ -5681,6 +5684,8 @@ static char *create_unique_id(struct kmem_cache *s)
+ 	 */
+ 	if (s->flags & SLAB_CACHE_DMA)
+ 		*p++ = 'd';
++	if (s->flags & SLAB_CACHE_DMA32)
++		*p++ = 'D';
+ 	if (s->flags & SLAB_RECLAIM_ACCOUNT)
+ 		*p++ = 'a';
+ 	if (s->flags & SLAB_CONSISTENCY_CHECKS)
+diff --git a/mm/sparse.c b/mm/sparse.c
+index 7ea5dc6c6b19..4763519d4399 100644
+--- a/mm/sparse.c
++++ b/mm/sparse.c
+@@ -556,7 +556,7 @@ void online_mem_sections(unsigned long start_pfn, unsigned long end_pfn)
+ }
+ 
+ #ifdef CONFIG_MEMORY_HOTREMOVE
+-/* Mark all memory sections within the pfn range as online */
++/* Mark all memory sections within the pfn range as offline */
+ void offline_mem_sections(unsigned long start_pfn, unsigned long end_pfn)
+ {
+ 	unsigned long pfn;
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 2a7fb517d460..ccdc5c67d22a 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -3337,16 +3337,22 @@ static int l2cap_parse_conf_req(struct l2cap_chan *chan, void *data, size_t data
+ 
+ 	while (len >= L2CAP_CONF_OPT_SIZE) {
+ 		len -= l2cap_get_conf_opt(&req, &type, &olen, &val);
++		if (len < 0)
++			break;
+ 
+ 		hint  = type & L2CAP_CONF_HINT;
+ 		type &= L2CAP_CONF_MASK;
+ 
+ 		switch (type) {
+ 		case L2CAP_CONF_MTU:
++			if (olen != 2)
++				break;
+ 			mtu = val;
+ 			break;
+ 
+ 		case L2CAP_CONF_FLUSH_TO:
++			if (olen != 2)
++				break;
+ 			chan->flush_to = val;
+ 			break;
+ 
+@@ -3354,26 +3360,30 @@ static int l2cap_parse_conf_req(struct l2cap_chan *chan, void *data, size_t data
+ 			break;
+ 
+ 		case L2CAP_CONF_RFC:
+-			if (olen == sizeof(rfc))
+-				memcpy(&rfc, (void *) val, olen);
++			if (olen != sizeof(rfc))
++				break;
++			memcpy(&rfc, (void *) val, olen);
+ 			break;
+ 
+ 		case L2CAP_CONF_FCS:
++			if (olen != 1)
++				break;
+ 			if (val == L2CAP_FCS_NONE)
+ 				set_bit(CONF_RECV_NO_FCS, &chan->conf_state);
+ 			break;
+ 
+ 		case L2CAP_CONF_EFS:
+-			if (olen == sizeof(efs)) {
+-				remote_efs = 1;
+-				memcpy(&efs, (void *) val, olen);
+-			}
++			if (olen != sizeof(efs))
++				break;
++			remote_efs = 1;
++			memcpy(&efs, (void *) val, olen);
+ 			break;
+ 
+ 		case L2CAP_CONF_EWS:
++			if (olen != 2)
++				break;
+ 			if (!(chan->conn->local_fixed_chan & L2CAP_FC_A2MP))
+ 				return -ECONNREFUSED;
+-
+ 			set_bit(FLAG_EXT_CTRL, &chan->flags);
+ 			set_bit(CONF_EWS_RECV, &chan->conf_state);
+ 			chan->tx_win_max = L2CAP_DEFAULT_EXT_WINDOW;
+@@ -3383,7 +3393,6 @@ static int l2cap_parse_conf_req(struct l2cap_chan *chan, void *data, size_t data
+ 		default:
+ 			if (hint)
+ 				break;
+-
+ 			result = L2CAP_CONF_UNKNOWN;
+ 			*((u8 *) ptr++) = type;
+ 			break;
+@@ -3548,58 +3557,65 @@ static int l2cap_parse_conf_rsp(struct l2cap_chan *chan, void *rsp, int len,
+ 
+ 	while (len >= L2CAP_CONF_OPT_SIZE) {
+ 		len -= l2cap_get_conf_opt(&rsp, &type, &olen, &val);
++		if (len < 0)
++			break;
+ 
+ 		switch (type) {
+ 		case L2CAP_CONF_MTU:
++			if (olen != 2)
++				break;
+ 			if (val < L2CAP_DEFAULT_MIN_MTU) {
+ 				*result = L2CAP_CONF_UNACCEPT;
+ 				chan->imtu = L2CAP_DEFAULT_MIN_MTU;
+ 			} else
+ 				chan->imtu = val;
+-			l2cap_add_conf_opt(&ptr, L2CAP_CONF_MTU, 2, chan->imtu, endptr - ptr);
++			l2cap_add_conf_opt(&ptr, L2CAP_CONF_MTU, 2, chan->imtu,
++					   endptr - ptr);
+ 			break;
+ 
+ 		case L2CAP_CONF_FLUSH_TO:
++			if (olen != 2)
++				break;
+ 			chan->flush_to = val;
+-			l2cap_add_conf_opt(&ptr, L2CAP_CONF_FLUSH_TO,
+-					   2, chan->flush_to, endptr - ptr);
++			l2cap_add_conf_opt(&ptr, L2CAP_CONF_FLUSH_TO, 2,
++					   chan->flush_to, endptr - ptr);
+ 			break;
+ 
+ 		case L2CAP_CONF_RFC:
+-			if (olen == sizeof(rfc))
+-				memcpy(&rfc, (void *)val, olen);
+-
++			if (olen != sizeof(rfc))
++				break;
++			memcpy(&rfc, (void *)val, olen);
+ 			if (test_bit(CONF_STATE2_DEVICE, &chan->conf_state) &&
+ 			    rfc.mode != chan->mode)
+ 				return -ECONNREFUSED;
+-
+ 			chan->fcs = 0;
+-
+-			l2cap_add_conf_opt(&ptr, L2CAP_CONF_RFC,
+-					   sizeof(rfc), (unsigned long) &rfc, endptr - ptr);
++			l2cap_add_conf_opt(&ptr, L2CAP_CONF_RFC, sizeof(rfc),
++					   (unsigned long) &rfc, endptr - ptr);
+ 			break;
+ 
+ 		case L2CAP_CONF_EWS:
++			if (olen != 2)
++				break;
+ 			chan->ack_win = min_t(u16, val, chan->ack_win);
+ 			l2cap_add_conf_opt(&ptr, L2CAP_CONF_EWS, 2,
+ 					   chan->tx_win, endptr - ptr);
+ 			break;
+ 
+ 		case L2CAP_CONF_EFS:
+-			if (olen == sizeof(efs)) {
+-				memcpy(&efs, (void *)val, olen);
+-
+-				if (chan->local_stype != L2CAP_SERV_NOTRAFIC &&
+-				    efs.stype != L2CAP_SERV_NOTRAFIC &&
+-				    efs.stype != chan->local_stype)
+-					return -ECONNREFUSED;
+-
+-				l2cap_add_conf_opt(&ptr, L2CAP_CONF_EFS, sizeof(efs),
+-						   (unsigned long) &efs, endptr - ptr);
+-			}
++			if (olen != sizeof(efs))
++				break;
++			memcpy(&efs, (void *)val, olen);
++			if (chan->local_stype != L2CAP_SERV_NOTRAFIC &&
++			    efs.stype != L2CAP_SERV_NOTRAFIC &&
++			    efs.stype != chan->local_stype)
++				return -ECONNREFUSED;
++			l2cap_add_conf_opt(&ptr, L2CAP_CONF_EFS, sizeof(efs),
++					   (unsigned long) &efs, endptr - ptr);
+ 			break;
+ 
+ 		case L2CAP_CONF_FCS:
++			if (olen != 1)
++				break;
+ 			if (*result == L2CAP_CONF_PENDING)
+ 				if (val == L2CAP_FCS_NONE)
+ 					set_bit(CONF_RECV_NO_FCS,
+@@ -3728,13 +3744,18 @@ static void l2cap_conf_rfc_get(struct l2cap_chan *chan, void *rsp, int len)
+ 
+ 	while (len >= L2CAP_CONF_OPT_SIZE) {
+ 		len -= l2cap_get_conf_opt(&rsp, &type, &olen, &val);
++		if (len < 0)
++			break;
+ 
+ 		switch (type) {
+ 		case L2CAP_CONF_RFC:
+-			if (olen == sizeof(rfc))
+-				memcpy(&rfc, (void *)val, olen);
++			if (olen != sizeof(rfc))
++				break;
++			memcpy(&rfc, (void *)val, olen);
+ 			break;
+ 		case L2CAP_CONF_EWS:
++			if (olen != 2)
++				break;
+ 			txwin_ext = val;
+ 			break;
+ 		}
+diff --git a/net/core/datagram.c b/net/core/datagram.c
+index b2651bb6d2a3..e657289db4ac 100644
+--- a/net/core/datagram.c
++++ b/net/core/datagram.c
+@@ -279,7 +279,7 @@ struct sk_buff *__skb_try_recv_datagram(struct sock *sk, unsigned int flags,
+ 			break;
+ 
+ 		sk_busy_loop(sk, flags & MSG_DONTWAIT);
+-	} while (!skb_queue_empty(&sk->sk_receive_queue));
++	} while (sk->sk_receive_queue.prev != *last);
+ 
+ 	error = -EAGAIN;
+ 
+diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
+index 73ad7607dcd1..aec26584f0ca 100644
+--- a/net/core/net-sysfs.c
++++ b/net/core/net-sysfs.c
+@@ -934,6 +934,8 @@ static int rx_queue_add_kobject(struct net_device *dev, int index)
+ 	if (error)
+ 		return error;
+ 
++	dev_hold(queue->dev);
++
+ 	if (dev->sysfs_rx_queue_group) {
+ 		error = sysfs_create_group(kobj, dev->sysfs_rx_queue_group);
+ 		if (error) {
+@@ -943,7 +945,6 @@ static int rx_queue_add_kobject(struct net_device *dev, int index)
+ 	}
+ 
+ 	kobject_uevent(kobj, KOBJ_ADD);
+-	dev_hold(queue->dev);
+ 
+ 	return error;
+ }
+@@ -1472,6 +1473,8 @@ static int netdev_queue_add_kobject(struct net_device *dev, int index)
+ 	if (error)
+ 		return error;
+ 
++	dev_hold(queue->dev);
++
+ #ifdef CONFIG_BQL
+ 	error = sysfs_create_group(kobj, &dql_group);
+ 	if (error) {
+@@ -1481,7 +1484,6 @@ static int netdev_queue_add_kobject(struct net_device *dev, int index)
+ #endif
+ 
+ 	kobject_uevent(kobj, KOBJ_ADD);
+-	dev_hold(queue->dev);
+ 
+ 	return 0;
+ }
+diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c
+index d5740bad5b18..57d84e9b7b6f 100644
+--- a/net/dccp/ipv6.c
++++ b/net/dccp/ipv6.c
+@@ -436,8 +436,8 @@ static struct sock *dccp_v6_request_recv_sock(const struct sock *sk,
+ 		newnp->ipv6_mc_list = NULL;
+ 		newnp->ipv6_ac_list = NULL;
+ 		newnp->ipv6_fl_list = NULL;
+-		newnp->mcast_oif   = inet6_iif(skb);
+-		newnp->mcast_hops  = ipv6_hdr(skb)->hop_limit;
++		newnp->mcast_oif   = inet_iif(skb);
++		newnp->mcast_hops  = ip_hdr(skb)->ttl;
+ 
+ 		/*
+ 		 * No need to charge this sock to the relevant IPv6 refcnt debug socks count
+diff --git a/net/ipv6/ila/ila_xlat.c b/net/ipv6/ila/ila_xlat.c
+index 17c455ff69ff..7858fa9ea103 100644
+--- a/net/ipv6/ila/ila_xlat.c
++++ b/net/ipv6/ila/ila_xlat.c
+@@ -420,6 +420,7 @@ int ila_xlat_nl_cmd_flush(struct sk_buff *skb, struct genl_info *info)
+ 
+ done:
+ 	rhashtable_walk_stop(&iter);
++	rhashtable_walk_exit(&iter);
+ 	return ret;
+ }
+ 
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 8dad1d690b78..0086acc16f3c 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -1040,14 +1040,20 @@ static struct rt6_info *ip6_create_rt_rcu(struct fib6_info *rt)
+ 	struct rt6_info *nrt;
+ 
+ 	if (!fib6_info_hold_safe(rt))
+-		return NULL;
++		goto fallback;
+ 
+ 	nrt = ip6_dst_alloc(dev_net(dev), dev, flags);
+-	if (nrt)
+-		ip6_rt_copy_init(nrt, rt);
+-	else
++	if (!nrt) {
+ 		fib6_info_release(rt);
++		goto fallback;
++	}
+ 
++	ip6_rt_copy_init(nrt, rt);
++	return nrt;
++
++fallback:
++	nrt = dev_net(dev)->ipv6.ip6_null_entry;
++	dst_hold(&nrt->dst);
+ 	return nrt;
+ }
+ 
+@@ -1096,10 +1102,6 @@ restart:
+ 		dst_hold(&rt->dst);
+ 	} else {
+ 		rt = ip6_create_rt_rcu(f6i);
+-		if (!rt) {
+-			rt = net->ipv6.ip6_null_entry;
+-			dst_hold(&rt->dst);
+-		}
+ 	}
+ 
+ 	rcu_read_unlock();
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index b81eb7cb815e..8505d96483d5 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -1112,11 +1112,11 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff *
+ 		newnp->ipv6_fl_list = NULL;
+ 		newnp->pktoptions  = NULL;
+ 		newnp->opt	   = NULL;
+-		newnp->mcast_oif   = tcp_v6_iif(skb);
+-		newnp->mcast_hops  = ipv6_hdr(skb)->hop_limit;
+-		newnp->rcv_flowinfo = ip6_flowinfo(ipv6_hdr(skb));
++		newnp->mcast_oif   = inet_iif(skb);
++		newnp->mcast_hops  = ip_hdr(skb)->ttl;
++		newnp->rcv_flowinfo = 0;
+ 		if (np->repflow)
+-			newnp->flow_label = ip6_flowlabel(ipv6_hdr(skb));
++			newnp->flow_label = 0;
+ 
+ 		/*
+ 		 * No need to charge this sock to the relevant IPv6 refcnt debug socks count
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 4893f248dfdc..e1724f9d8b9d 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -127,7 +127,7 @@ static void nft_set_trans_bind(const struct nft_ctx *ctx, struct nft_set *set)
+ 	list_for_each_entry_reverse(trans, &net->nft.commit_list, list) {
+ 		if (trans->msg_type == NFT_MSG_NEWSET &&
+ 		    nft_trans_set(trans) == set) {
+-			nft_trans_set_bound(trans) = true;
++			set->bound = true;
+ 			break;
+ 		}
+ 	}
+@@ -6617,8 +6617,7 @@ static void nf_tables_abort_release(struct nft_trans *trans)
+ 		nf_tables_rule_destroy(&trans->ctx, nft_trans_rule(trans));
+ 		break;
+ 	case NFT_MSG_NEWSET:
+-		if (!nft_trans_set_bound(trans))
+-			nft_set_destroy(nft_trans_set(trans));
++		nft_set_destroy(nft_trans_set(trans));
+ 		break;
+ 	case NFT_MSG_NEWSETELEM:
+ 		nft_set_elem_destroy(nft_trans_elem_set(trans),
+@@ -6691,8 +6690,11 @@ static int __nf_tables_abort(struct net *net)
+ 			break;
+ 		case NFT_MSG_NEWSET:
+ 			trans->ctx.table->use--;
+-			if (!nft_trans_set_bound(trans))
+-				list_del_rcu(&nft_trans_set(trans)->list);
++			if (nft_trans_set(trans)->bound) {
++				nft_trans_destroy(trans);
++				break;
++			}
++			list_del_rcu(&nft_trans_set(trans)->list);
+ 			break;
+ 		case NFT_MSG_DELSET:
+ 			trans->ctx.table->use++;
+@@ -6700,8 +6702,11 @@ static int __nf_tables_abort(struct net *net)
+ 			nft_trans_destroy(trans);
+ 			break;
+ 		case NFT_MSG_NEWSETELEM:
++			if (nft_trans_elem_set(trans)->bound) {
++				nft_trans_destroy(trans);
++				break;
++			}
+ 			te = (struct nft_trans_elem *)trans->data;
+-
+ 			te->set->ops->remove(net, te->set, &te->elem);
+ 			atomic_dec(&te->set->nelems);
+ 			break;
+diff --git a/net/netlink/genetlink.c b/net/netlink/genetlink.c
+index 25eeb6d2a75a..f0ec068e1d02 100644
+--- a/net/netlink/genetlink.c
++++ b/net/netlink/genetlink.c
+@@ -366,7 +366,7 @@ int genl_register_family(struct genl_family *family)
+ 			       start, end + 1, GFP_KERNEL);
+ 	if (family->id < 0) {
+ 		err = family->id;
+-		goto errout_locked;
++		goto errout_free;
+ 	}
+ 
+ 	err = genl_validate_assign_mc_groups(family);
+@@ -385,6 +385,7 @@ int genl_register_family(struct genl_family *family)
+ 
+ errout_remove:
+ 	idr_remove(&genl_fam_idr, family->id);
++errout_free:
+ 	kfree(family->attrbuf);
+ errout_locked:
+ 	genl_unlock_all();
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 1cd1d83a4be0..8406bf11eef4 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -3245,7 +3245,7 @@ static int packet_create(struct net *net, struct socket *sock, int protocol,
+ 	}
+ 
+ 	mutex_lock(&net->packet.sklist_lock);
+-	sk_add_node_rcu(sk, &net->packet.sklist);
++	sk_add_node_tail_rcu(sk, &net->packet.sklist);
+ 	mutex_unlock(&net->packet.sklist_lock);
+ 
+ 	preempt_disable();
+@@ -4211,7 +4211,7 @@ static struct pgv *alloc_pg_vec(struct tpacket_req *req, int order)
+ 	struct pgv *pg_vec;
+ 	int i;
+ 
+-	pg_vec = kcalloc(block_nr, sizeof(struct pgv), GFP_KERNEL);
++	pg_vec = kcalloc(block_nr, sizeof(struct pgv), GFP_KERNEL | __GFP_NOWARN);
+ 	if (unlikely(!pg_vec))
+ 		goto out;
+ 
+diff --git a/net/rose/rose_subr.c b/net/rose/rose_subr.c
+index 7ca57741b2fb..7849f286bb93 100644
+--- a/net/rose/rose_subr.c
++++ b/net/rose/rose_subr.c
+@@ -105,16 +105,17 @@ void rose_write_internal(struct sock *sk, int frametype)
+ 	struct sk_buff *skb;
+ 	unsigned char  *dptr;
+ 	unsigned char  lci1, lci2;
+-	char buffer[100];
+-	int len, faclen = 0;
++	int maxfaclen = 0;
++	int len, faclen;
++	int reserve;
+ 
+-	len = AX25_BPQ_HEADER_LEN + AX25_MAX_HEADER_LEN + ROSE_MIN_LEN + 1;
++	reserve = AX25_BPQ_HEADER_LEN + AX25_MAX_HEADER_LEN + 1;
++	len = ROSE_MIN_LEN;
+ 
+ 	switch (frametype) {
+ 	case ROSE_CALL_REQUEST:
+ 		len   += 1 + ROSE_ADDR_LEN + ROSE_ADDR_LEN;
+-		faclen = rose_create_facilities(buffer, rose);
+-		len   += faclen;
++		maxfaclen = 256;
+ 		break;
+ 	case ROSE_CALL_ACCEPTED:
+ 	case ROSE_CLEAR_REQUEST:
+@@ -123,15 +124,16 @@ void rose_write_internal(struct sock *sk, int frametype)
+ 		break;
+ 	}
+ 
+-	if ((skb = alloc_skb(len, GFP_ATOMIC)) == NULL)
++	skb = alloc_skb(reserve + len + maxfaclen, GFP_ATOMIC);
++	if (!skb)
+ 		return;
+ 
+ 	/*
+ 	 *	Space for AX.25 header and PID.
+ 	 */
+-	skb_reserve(skb, AX25_BPQ_HEADER_LEN + AX25_MAX_HEADER_LEN + 1);
++	skb_reserve(skb, reserve);
+ 
+-	dptr = skb_put(skb, skb_tailroom(skb));
++	dptr = skb_put(skb, len);
+ 
+ 	lci1 = (rose->lci >> 8) & 0x0F;
+ 	lci2 = (rose->lci >> 0) & 0xFF;
+@@ -146,7 +148,8 @@ void rose_write_internal(struct sock *sk, int frametype)
+ 		dptr   += ROSE_ADDR_LEN;
+ 		memcpy(dptr, &rose->source_addr, ROSE_ADDR_LEN);
+ 		dptr   += ROSE_ADDR_LEN;
+-		memcpy(dptr, buffer, faclen);
++		faclen = rose_create_facilities(dptr, rose);
++		skb_put(skb, faclen);
+ 		dptr   += faclen;
+ 		break;
+ 
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index a2771b3b3c14..5f68420b4b0d 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -999,7 +999,7 @@ static int sctp_setsockopt_bindx(struct sock *sk,
+ 	if (unlikely(addrs_size <= 0))
+ 		return -EINVAL;
+ 
+-	kaddrs = vmemdup_user(addrs, addrs_size);
++	kaddrs = memdup_user(addrs, addrs_size);
+ 	if (unlikely(IS_ERR(kaddrs)))
+ 		return PTR_ERR(kaddrs);
+ 
+@@ -1007,7 +1007,7 @@ static int sctp_setsockopt_bindx(struct sock *sk,
+ 	addr_buf = kaddrs;
+ 	while (walk_size < addrs_size) {
+ 		if (walk_size + sizeof(sa_family_t) > addrs_size) {
+-			kvfree(kaddrs);
++			kfree(kaddrs);
+ 			return -EINVAL;
+ 		}
+ 
+@@ -1018,7 +1018,7 @@ static int sctp_setsockopt_bindx(struct sock *sk,
+ 		 * causes the address buffer to overflow return EINVAL.
+ 		 */
+ 		if (!af || (walk_size + af->sockaddr_len) > addrs_size) {
+-			kvfree(kaddrs);
++			kfree(kaddrs);
+ 			return -EINVAL;
+ 		}
+ 		addrcnt++;
+@@ -1054,7 +1054,7 @@ static int sctp_setsockopt_bindx(struct sock *sk,
+ 	}
+ 
+ out:
+-	kvfree(kaddrs);
++	kfree(kaddrs);
+ 
+ 	return err;
+ }
+@@ -1329,7 +1329,7 @@ static int __sctp_setsockopt_connectx(struct sock *sk,
+ 	if (unlikely(addrs_size <= 0))
+ 		return -EINVAL;
+ 
+-	kaddrs = vmemdup_user(addrs, addrs_size);
++	kaddrs = memdup_user(addrs, addrs_size);
+ 	if (unlikely(IS_ERR(kaddrs)))
+ 		return PTR_ERR(kaddrs);
+ 
+@@ -1349,7 +1349,7 @@ static int __sctp_setsockopt_connectx(struct sock *sk,
+ 	err = __sctp_connect(sk, kaddrs, addrs_size, flags, assoc_id);
+ 
+ out_free:
+-	kvfree(kaddrs);
++	kfree(kaddrs);
+ 
+ 	return err;
+ }
+diff --git a/net/tipc/net.c b/net/tipc/net.c
+index f076edb74338..7ce1e86b024f 100644
+--- a/net/tipc/net.c
++++ b/net/tipc/net.c
+@@ -163,12 +163,9 @@ void tipc_sched_net_finalize(struct net *net, u32 addr)
+ 
+ void tipc_net_stop(struct net *net)
+ {
+-	u32 self = tipc_own_addr(net);
+-
+-	if (!self)
++	if (!tipc_own_id(net))
+ 		return;
+ 
+-	tipc_nametbl_withdraw(net, TIPC_CFG_SRV, self, self, self);
+ 	rtnl_lock();
+ 	tipc_bearer_stop(net);
+ 	tipc_node_stop(net);
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 139694f2c576..4dca9161f99b 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -2349,6 +2349,16 @@ static int tipc_wait_for_connect(struct socket *sock, long *timeo_p)
+ 	return 0;
+ }
+ 
++static bool tipc_sockaddr_is_sane(struct sockaddr_tipc *addr)
++{
++	if (addr->family != AF_TIPC)
++		return false;
++	if (addr->addrtype == TIPC_SERVICE_RANGE)
++		return (addr->addr.nameseq.lower <= addr->addr.nameseq.upper);
++	return (addr->addrtype == TIPC_SERVICE_ADDR ||
++		addr->addrtype == TIPC_SOCKET_ADDR);
++}
++
+ /**
+  * tipc_connect - establish a connection to another TIPC port
+  * @sock: socket structure
+@@ -2384,18 +2394,18 @@ static int tipc_connect(struct socket *sock, struct sockaddr *dest,
+ 		if (!tipc_sk_type_connectionless(sk))
+ 			res = -EINVAL;
+ 		goto exit;
+-	} else if (dst->family != AF_TIPC) {
+-		res = -EINVAL;
+ 	}
+-	if (dst->addrtype != TIPC_ADDR_ID && dst->addrtype != TIPC_ADDR_NAME)
++	if (!tipc_sockaddr_is_sane(dst)) {
+ 		res = -EINVAL;
+-	if (res)
+ 		goto exit;
+-
++	}
+ 	/* DGRAM/RDM connect(), just save the destaddr */
+ 	if (tipc_sk_type_connectionless(sk)) {
+ 		memcpy(&tsk->peer, dest, destlen);
+ 		goto exit;
++	} else if (dst->addrtype == TIPC_SERVICE_RANGE) {
++		res = -EINVAL;
++		goto exit;
+ 	}
+ 
+ 	previous = sk->sk_state;
+diff --git a/net/tipc/topsrv.c b/net/tipc/topsrv.c
+index a457c0fbbef1..f5edb213d760 100644
+--- a/net/tipc/topsrv.c
++++ b/net/tipc/topsrv.c
+@@ -365,6 +365,7 @@ static int tipc_conn_rcv_sub(struct tipc_topsrv *srv,
+ 	struct tipc_subscription *sub;
+ 
+ 	if (tipc_sub_read(s, filter) & TIPC_SUB_CANCEL) {
++		s->filter &= __constant_ntohl(~TIPC_SUB_CANCEL);
+ 		tipc_conn_delete_sub(con, s);
+ 		return 0;
+ 	}
+diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
+index 26bf886bd168..588a3bc29ecc 100644
+--- a/scripts/mod/modpost.c
++++ b/scripts/mod/modpost.c
+@@ -640,7 +640,7 @@ static void handle_modversions(struct module *mod, struct elf_info *info,
+ 			       info->sechdrs[sym->st_shndx].sh_offset -
+ 			       (info->hdr->e_type != ET_REL ?
+ 				info->sechdrs[sym->st_shndx].sh_addr : 0);
+-			crc = *crcp;
++			crc = TO_NATIVE(*crcp);
+ 		}
+ 		sym_update_crc(symname + strlen("__crc_"), mod, crc,
+ 				export);
+diff --git a/sound/core/oss/pcm_oss.c b/sound/core/oss/pcm_oss.c
+index 467039b342b5..41abb8bd466a 100644
+--- a/sound/core/oss/pcm_oss.c
++++ b/sound/core/oss/pcm_oss.c
+@@ -940,6 +940,28 @@ static int snd_pcm_oss_change_params_locked(struct snd_pcm_substream *substream)
+ 	oss_frame_size = snd_pcm_format_physical_width(params_format(params)) *
+ 			 params_channels(params) / 8;
+ 
++	err = snd_pcm_oss_period_size(substream, params, sparams);
++	if (err < 0)
++		goto failure;
++
++	n = snd_pcm_plug_slave_size(substream, runtime->oss.period_bytes / oss_frame_size);
++	err = snd_pcm_hw_param_near(substream, sparams, SNDRV_PCM_HW_PARAM_PERIOD_SIZE, n, NULL);
++	if (err < 0)
++		goto failure;
++
++	err = snd_pcm_hw_param_near(substream, sparams, SNDRV_PCM_HW_PARAM_PERIODS,
++				     runtime->oss.periods, NULL);
++	if (err < 0)
++		goto failure;
++
++	snd_pcm_kernel_ioctl(substream, SNDRV_PCM_IOCTL_DROP, NULL);
++
++	err = snd_pcm_kernel_ioctl(substream, SNDRV_PCM_IOCTL_HW_PARAMS, sparams);
++	if (err < 0) {
++		pcm_dbg(substream->pcm, "HW_PARAMS failed: %i\n", err);
++		goto failure;
++	}
++
+ #ifdef CONFIG_SND_PCM_OSS_PLUGINS
+ 	snd_pcm_oss_plugin_clear(substream);
+ 	if (!direct) {
+@@ -974,27 +996,6 @@ static int snd_pcm_oss_change_params_locked(struct snd_pcm_substream *substream)
+ 	}
+ #endif
+ 
+-	err = snd_pcm_oss_period_size(substream, params, sparams);
+-	if (err < 0)
+-		goto failure;
+-
+-	n = snd_pcm_plug_slave_size(substream, runtime->oss.period_bytes / oss_frame_size);
+-	err = snd_pcm_hw_param_near(substream, sparams, SNDRV_PCM_HW_PARAM_PERIOD_SIZE, n, NULL);
+-	if (err < 0)
+-		goto failure;
+-
+-	err = snd_pcm_hw_param_near(substream, sparams, SNDRV_PCM_HW_PARAM_PERIODS,
+-				     runtime->oss.periods, NULL);
+-	if (err < 0)
+-		goto failure;
+-
+-	snd_pcm_kernel_ioctl(substream, SNDRV_PCM_IOCTL_DROP, NULL);
+-
+-	if ((err = snd_pcm_kernel_ioctl(substream, SNDRV_PCM_IOCTL_HW_PARAMS, sparams)) < 0) {
+-		pcm_dbg(substream->pcm, "HW_PARAMS failed: %i\n", err);
+-		goto failure;
+-	}
+-
+ 	if (runtime->oss.trigger) {
+ 		sw_params->start_threshold = 1;
+ 	} else {
+diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
+index 818dff1de545..b67f6fe08a1b 100644
+--- a/sound/core/pcm_native.c
++++ b/sound/core/pcm_native.c
+@@ -1426,8 +1426,15 @@ static int snd_pcm_pause(struct snd_pcm_substream *substream, int push)
+ static int snd_pcm_pre_suspend(struct snd_pcm_substream *substream, int state)
+ {
+ 	struct snd_pcm_runtime *runtime = substream->runtime;
+-	if (runtime->status->state == SNDRV_PCM_STATE_SUSPENDED)
++	switch (runtime->status->state) {
++	case SNDRV_PCM_STATE_SUSPENDED:
+ 		return -EBUSY;
++	/* unresumable PCM state; return -EBUSY for skipping suspend */
++	case SNDRV_PCM_STATE_OPEN:
++	case SNDRV_PCM_STATE_SETUP:
++	case SNDRV_PCM_STATE_DISCONNECTED:
++		return -EBUSY;
++	}
+ 	runtime->trigger_master = substream;
+ 	return 0;
+ }
+diff --git a/sound/core/rawmidi.c b/sound/core/rawmidi.c
+index ee601d7f0926..c0690d1ecd55 100644
+--- a/sound/core/rawmidi.c
++++ b/sound/core/rawmidi.c
+@@ -30,6 +30,7 @@
+ #include <linux/module.h>
+ #include <linux/delay.h>
+ #include <linux/mm.h>
++#include <linux/nospec.h>
+ #include <sound/rawmidi.h>
+ #include <sound/info.h>
+ #include <sound/control.h>
+@@ -601,6 +602,7 @@ static int __snd_rawmidi_info_select(struct snd_card *card,
+ 		return -ENXIO;
+ 	if (info->stream < 0 || info->stream > 1)
+ 		return -EINVAL;
++	info->stream = array_index_nospec(info->stream, 2);
+ 	pstr = &rmidi->streams[info->stream];
+ 	if (pstr->substream_count == 0)
+ 		return -ENOENT;
+diff --git a/sound/core/seq/oss/seq_oss_synth.c b/sound/core/seq/oss/seq_oss_synth.c
+index 278ebb993122..c93945917235 100644
+--- a/sound/core/seq/oss/seq_oss_synth.c
++++ b/sound/core/seq/oss/seq_oss_synth.c
+@@ -617,13 +617,14 @@ int
+ snd_seq_oss_synth_make_info(struct seq_oss_devinfo *dp, int dev, struct synth_info *inf)
+ {
+ 	struct seq_oss_synth *rec;
++	struct seq_oss_synthinfo *info = get_synthinfo_nospec(dp, dev);
+ 
+-	if (dev < 0 || dev >= dp->max_synthdev)
++	if (!info)
+ 		return -ENXIO;
+ 
+-	if (dp->synths[dev].is_midi) {
++	if (info->is_midi) {
+ 		struct midi_info minf;
+-		snd_seq_oss_midi_make_info(dp, dp->synths[dev].midi_mapped, &minf);
++		snd_seq_oss_midi_make_info(dp, info->midi_mapped, &minf);
+ 		inf->synth_type = SYNTH_TYPE_MIDI;
+ 		inf->synth_subtype = 0;
+ 		inf->nr_voices = 16;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 3a8568d3928f..00c27b3b8c14 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -5489,7 +5489,7 @@ static void alc_headset_btn_callback(struct hda_codec *codec,
+ 	jack->jack->button_state = report;
+ }
+ 
+-static void alc_fixup_headset_jack(struct hda_codec *codec,
++static void alc295_fixup_chromebook(struct hda_codec *codec,
+ 				    const struct hda_fixup *fix, int action)
+ {
+ 
+@@ -5499,6 +5499,16 @@ static void alc_fixup_headset_jack(struct hda_codec *codec,
+ 						    alc_headset_btn_callback);
+ 		snd_hda_jack_add_kctl(codec, 0x55, "Headset Jack", false,
+ 				      SND_JACK_HEADSET, alc_headset_btn_keymap);
++		switch (codec->core.vendor_id) {
++		case 0x10ec0295:
++			alc_update_coef_idx(codec, 0x4a, 0x8000, 1 << 15); /* Reset HP JD */
++			alc_update_coef_idx(codec, 0x4a, 0x8000, 0 << 15);
++			break;
++		case 0x10ec0236:
++			alc_update_coef_idx(codec, 0x1b, 0x8000, 1 << 15); /* Reset HP JD */
++			alc_update_coef_idx(codec, 0x1b, 0x8000, 0 << 15);
++			break;
++		}
+ 		break;
+ 	case HDA_FIXUP_ACT_INIT:
+ 		switch (codec->core.vendor_id) {
+@@ -5668,10 +5678,16 @@ enum {
+ 	ALC294_FIXUP_ASUS_MIC,
+ 	ALC294_FIXUP_ASUS_HEADSET_MIC,
+ 	ALC294_FIXUP_ASUS_SPK,
+-	ALC225_FIXUP_HEADSET_JACK,
+ 	ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE,
+ 	ALC285_FIXUP_LENOVO_PC_BEEP_IN_NOISE,
+ 	ALC255_FIXUP_ACER_HEADSET_MIC,
++	ALC295_FIXUP_CHROME_BOOK,
++	ALC225_FIXUP_DELL_WYSE_AIO_MIC_NO_PRESENCE,
++	ALC225_FIXUP_WYSE_AUTO_MUTE,
++	ALC225_FIXUP_WYSE_DISABLE_MIC_VREF,
++	ALC286_FIXUP_ACER_AIO_HEADSET_MIC,
++	ALC256_FIXUP_ASUS_MIC_NO_PRESENCE,
++	ALC299_FIXUP_PREDATOR_SPK,
+ };
+ 
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -6614,9 +6630,9 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC294_FIXUP_ASUS_HEADSET_MIC
+ 	},
+-	[ALC225_FIXUP_HEADSET_JACK] = {
++	[ALC295_FIXUP_CHROME_BOOK] = {
+ 		.type = HDA_FIXUP_FUNC,
+-		.v.func = alc_fixup_headset_jack,
++		.v.func = alc295_fixup_chromebook,
+ 	},
+ 	[ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE] = {
+ 		.type = HDA_FIXUP_PINS,
+@@ -6648,6 +6664,54 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC255_FIXUP_HEADSET_MODE_NO_HP_MIC
+ 	},
++	[ALC225_FIXUP_DELL_WYSE_AIO_MIC_NO_PRESENCE] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x16, 0x01011020 }, /* Rear Line out */
++			{ 0x19, 0x01a1913c }, /* use as Front headset mic, without its own jack detect */
++			{ }
++		},
++		.chained = true,
++		.chain_id = ALC225_FIXUP_WYSE_AUTO_MUTE
++	},
++	[ALC225_FIXUP_WYSE_AUTO_MUTE] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc_fixup_auto_mute_via_amp,
++		.chained = true,
++		.chain_id = ALC225_FIXUP_WYSE_DISABLE_MIC_VREF
++	},
++	[ALC225_FIXUP_WYSE_DISABLE_MIC_VREF] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc_fixup_disable_mic_vref,
++		.chained = true,
++		.chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC
++	},
++	[ALC286_FIXUP_ACER_AIO_HEADSET_MIC] = {
++		.type = HDA_FIXUP_VERBS,
++		.v.verbs = (const struct hda_verb[]) {
++			{ 0x20, AC_VERB_SET_COEF_INDEX, 0x4f },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x5029 },
++			{ }
++		},
++		.chained = true,
++		.chain_id = ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE
++	},
++	[ALC256_FIXUP_ASUS_MIC_NO_PRESENCE] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x19, 0x04a11120 }, /* use as headset mic, without its own jack detect */
++			{ }
++		},
++		.chained = true,
++		.chain_id = ALC256_FIXUP_ASUS_HEADSET_MODE
++	},
++	[ALC299_FIXUP_PREDATOR_SPK] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x21, 0x90170150 }, /* use as headset mic, without its own jack detect */
++			{ }
++		}
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -6664,9 +6728,13 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1025, 0x079b, "Acer Aspire V5-573G", ALC282_FIXUP_ASPIRE_V5_PINS),
+ 	SND_PCI_QUIRK(0x1025, 0x102b, "Acer Aspire C24-860", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1025, 0x106d, "Acer Cloudbook 14", ALC283_FIXUP_CHROME_BOOK),
+-	SND_PCI_QUIRK(0x1025, 0x128f, "Acer Veriton Z6860G", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE),
+-	SND_PCI_QUIRK(0x1025, 0x1290, "Acer Veriton Z4860G", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE),
+-	SND_PCI_QUIRK(0x1025, 0x1291, "Acer Veriton Z4660G", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1025, 0x1099, "Acer Aspire E5-523G", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1025, 0x110e, "Acer Aspire ES1-432", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1025, 0x1246, "Acer Predator Helios 500", ALC299_FIXUP_PREDATOR_SPK),
++	SND_PCI_QUIRK(0x1025, 0x128f, "Acer Veriton Z6860G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1025, 0x1290, "Acer Veriton Z4860G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1025, 0x1291, "Acer Veriton Z4660G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1025, 0x1308, "Acer Aspire Z24-890", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x1330, "Acer TravelMate X514-51T", ALC255_FIXUP_ACER_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1028, 0x0470, "Dell M101z", ALC269_FIXUP_DELL_M101Z),
+ 	SND_PCI_QUIRK(0x1028, 0x054b, "Dell XPS one 2710", ALC275_FIXUP_DELL_XPS),
+@@ -6712,6 +6780,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x0871, "Dell Precision 3630", ALC255_FIXUP_DELL_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1028, 0x0872, "Dell Precision 3630", ALC255_FIXUP_DELL_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1028, 0x0873, "Dell Precision 3930", ALC255_FIXUP_DUMMY_LINEOUT_VERB),
++	SND_PCI_QUIRK(0x1028, 0x08ad, "Dell WYSE AIO", ALC225_FIXUP_DELL_WYSE_AIO_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1028, 0x08ae, "Dell WYSE NB", ALC225_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x0935, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB),
+ 	SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+@@ -7060,7 +7130,8 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ 	{.id = ALC255_FIXUP_DUMMY_LINEOUT_VERB, .name = "alc255-dummy-lineout"},
+ 	{.id = ALC255_FIXUP_DELL_HEADSET_MIC, .name = "alc255-dell-headset"},
+ 	{.id = ALC295_FIXUP_HP_X360, .name = "alc295-hp-x360"},
+-	{.id = ALC225_FIXUP_HEADSET_JACK, .name = "alc-sense-combo"},
++	{.id = ALC295_FIXUP_CHROME_BOOK, .name = "alc-sense-combo"},
++	{.id = ALC299_FIXUP_PREDATOR_SPK, .name = "predator-spk"},
+ 	{}
+ };
+ #define ALC225_STANDARD_PINS \
+@@ -7281,6 +7352,18 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 		{0x14, 0x90170110},
+ 		{0x1b, 0x90a70130},
+ 		{0x21, 0x03211020}),
++	SND_HDA_PIN_QUIRK(0x10ec0256, 0x1043, "ASUS", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE,
++		{0x12, 0x90a60130},
++		{0x14, 0x90170110},
++		{0x21, 0x03211020}),
++	SND_HDA_PIN_QUIRK(0x10ec0256, 0x1043, "ASUS", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE,
++		{0x12, 0x90a60130},
++		{0x14, 0x90170110},
++		{0x21, 0x04211020}),
++	SND_HDA_PIN_QUIRK(0x10ec0256, 0x1043, "ASUS", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE,
++		{0x1a, 0x90a70130},
++		{0x1b, 0x90170110},
++		{0x21, 0x03211020}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0274, 0x1028, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB,
+ 		{0x12, 0xb7a60130},
+ 		{0x13, 0xb8a61140},
+diff --git a/tools/objtool/Makefile b/tools/objtool/Makefile
+index c9d038f91af6..53f8be0f4a1f 100644
+--- a/tools/objtool/Makefile
++++ b/tools/objtool/Makefile
+@@ -25,14 +25,17 @@ LIBSUBCMD		= $(LIBSUBCMD_OUTPUT)libsubcmd.a
+ OBJTOOL    := $(OUTPUT)objtool
+ OBJTOOL_IN := $(OBJTOOL)-in.o
+ 
++LIBELF_FLAGS := $(shell pkg-config libelf --cflags 2>/dev/null)
++LIBELF_LIBS  := $(shell pkg-config libelf --libs 2>/dev/null || echo -lelf)
++
+ all: $(OBJTOOL)
+ 
+ INCLUDES := -I$(srctree)/tools/include \
+ 	    -I$(srctree)/tools/arch/$(HOSTARCH)/include/uapi \
+ 	    -I$(srctree)/tools/objtool/arch/$(ARCH)/include
+ WARNINGS := $(EXTRA_WARNINGS) -Wno-switch-default -Wno-switch-enum -Wno-packed
+-CFLAGS   += -Werror $(WARNINGS) $(KBUILD_HOSTCFLAGS) -g $(INCLUDES)
+-LDFLAGS  += -lelf $(LIBSUBCMD) $(KBUILD_HOSTLDFLAGS)
++CFLAGS   += -Werror $(WARNINGS) $(KBUILD_HOSTCFLAGS) -g $(INCLUDES) $(LIBELF_FLAGS)
++LDFLAGS  += $(LIBELF_LIBS) $(LIBSUBCMD) $(KBUILD_HOSTLDFLAGS)
+ 
+ # Allow old libelf to be used:
+ elfshdr := $(shell echo '$(pound)include <libelf.h>' | $(CC) $(CFLAGS) -x c -E - | grep elf_getshdr)
+diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+index a54d6c9a4601..7c0b975dd2f0 100644
+--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
++++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+@@ -251,19 +251,15 @@ struct intel_pt_decoder *intel_pt_decoder_new(struct intel_pt_params *params)
+ 		if (!(decoder->tsc_ctc_ratio_n % decoder->tsc_ctc_ratio_d))
+ 			decoder->tsc_ctc_mult = decoder->tsc_ctc_ratio_n /
+ 						decoder->tsc_ctc_ratio_d;
+-
+-		/*
+-		 * Allow for timestamps appearing to backwards because a TSC
+-		 * packet has slipped past a MTC packet, so allow 2 MTC ticks
+-		 * or ...
+-		 */
+-		decoder->tsc_slip = multdiv(2 << decoder->mtc_shift,
+-					decoder->tsc_ctc_ratio_n,
+-					decoder->tsc_ctc_ratio_d);
+ 	}
+-	/* ... or 0x100 paranoia */
+-	if (decoder->tsc_slip < 0x100)
+-		decoder->tsc_slip = 0x100;
++
++	/*
++	 * A TSC packet can slip past MTC packets so that the timestamp appears
++	 * to go backwards. One estimate is that can be up to about 40 CPU
++	 * cycles, which is certainly less than 0x1000 TSC ticks, but accept
++	 * slippage an order of magnitude more to be on the safe side.
++	 */
++	decoder->tsc_slip = 0x10000;
+ 
+ 	intel_pt_log("timestamp: mtc_shift %u\n", decoder->mtc_shift);
+ 	intel_pt_log("timestamp: tsc_ctc_ratio_n %u\n", decoder->tsc_ctc_ratio_n);
+diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
+index 11a234740632..ccd3275feeaa 100644
+--- a/tools/perf/util/pmu.c
++++ b/tools/perf/util/pmu.c
+@@ -734,10 +734,20 @@ static void pmu_add_cpu_aliases(struct list_head *head, struct perf_pmu *pmu)
+ 
+ 		if (!is_arm_pmu_core(name)) {
+ 			pname = pe->pmu ? pe->pmu : "cpu";
++
++			/*
++			 * uncore alias may be from different PMU
++			 * with common prefix
++			 */
++			if (pmu_is_uncore(name) &&
++			    !strncmp(pname, name, strlen(pname)))
++				goto new_alias;
++
+ 			if (strcmp(pname, name))
+ 				continue;
+ 		}
+ 
++new_alias:
+ 		/* need type casts to override 'const' */
+ 		__perf_pmu__new_alias(head, NULL, (char *)pe->name,
+ 				(char *)pe->desc, (char *)pe->event,
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 4e1024dbb73f..b4f2d892a1d3 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -2902,6 +2902,9 @@ static long kvm_device_ioctl(struct file *filp, unsigned int ioctl,
+ {
+ 	struct kvm_device *dev = filp->private_data;
+ 
++	if (dev->kvm->mm != current->mm)
++		return -EIO;
++
+ 	switch (ioctl) {
+ 	case KVM_SET_DEVICE_ATTR:
+ 		return kvm_device_ioctl_attr(dev, dev->ops->set_attr, arg);



* [gentoo-commits] proj/linux-patches:5.0 commit in: /
@ 2019-04-03 11:09 Mike Pagano
  0 siblings, 0 replies; 28+ messages in thread
From: Mike Pagano @ 2019-04-03 11:09 UTC (permalink / raw
  To: gentoo-commits

commit:     eb3023590694db5d00b2c90aef55a1aa33682713
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Apr  3 11:08:46 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Apr  3 11:08:46 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=eb302359

Removal of redundant netfilter patch

Removal:
2900_netfilter-patch-nf_tables-fix-set-
double-free-in-abort-path.patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |   4 -
 ..._tables-fix-set-double-free-in-abort-path.patch | 127 ---------------------
 2 files changed, 131 deletions(-)

diff --git a/0000_README b/0000_README
index 8c66a94..d25ad88 100644
--- a/0000_README
+++ b/0000_README
@@ -83,10 +83,6 @@ Patch:  2600_enable-key-swapping-for-apple-mac.patch
 From:   https://github.com/free5lot/hid-apple-patched
 Desc:   This hid-apple patch enables swapping of the FN and left Control keys and some additional on some apple keyboards. See bug #622902
 
-Patch:  2900_netfilter-patch-nf_tables-fix-set-double-free-in-abort-path.patch
-From:   https://www.spinics.net/lists/netfilter-devel/msg58466.html
-Desc:   netfilter: nf_tables: fix set double-free in abort path
-
 Patch:  4567_distro-Gentoo-Kconfig.patch
 From:   Tom Wijsman <TomWij@gentoo.org>
 Desc:   Add Gentoo Linux support config settings and defaults.

diff --git a/2900_netfilter-patch-nf_tables-fix-set-double-free-in-abort-path.patch b/2900_netfilter-patch-nf_tables-fix-set-double-free-in-abort-path.patch
deleted file mode 100644
index 3cc4aef..0000000
--- a/2900_netfilter-patch-nf_tables-fix-set-double-free-in-abort-path.patch
+++ /dev/null
@@ -1,127 +0,0 @@
-commit 40ba1d9b4d19796afc9b7ece872f5f3e8f5e2c13 upstream.
-
-The abort path can cause a double-free of an anonymous set.
-Added-and-to-be-aborted rule looks like this:
-
-udp dport { 137, 138 } drop
-
-The to-be-aborted transaction list looks like this:
-
-newset
-newsetelem
-newsetelem
-rule
-
-This gets walked in reverse order, so first pass disables the rule, the
-set elements, then the set.
-
-After synchronize_rcu(), we then destroy those in same order: rule, set
-element, set element, newset.
-
-Problem is that the anonymous set has already been bound to the rule, so
-the rule (lookup expression destructor) already frees the set, when then
-cause use-after-free when trying to delete the elements from this set,
-then try to free the set again when handling the newset expression.
-
-Rule releases the bound set in first place from the abort path, this
-causes the use-after-free on set element removal when undoing the new
-element transactions. To handle this, skip new element transaction if
-set is bound from the abort path.
-
-This is still causes the use-after-free on set element removal.  To
-handle this, remove transaction from the list when the set is already
-bound.
-
-Fixes: f6ac85858976 ("netfilter: nf_tables: unbind set in rule from commit path")
-Bugzilla: https://bugzilla.netfilter.org/show_bug.cgi?id=1325
-Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
----
-Florian, I'm taking your original patch subject and part of the description,
-sending this as v2. Please ack if this looks good to you. Thanks.
-
- include/net/netfilter/nf_tables.h |  6 ++----
- net/netfilter/nf_tables_api.c     | 17 +++++++++++------
- 2 files changed, 13 insertions(+), 10 deletions(-)
-
-diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
-index b4984bbbe157..3d58acf94dd2 100644
---- a/include/net/netfilter/nf_tables.h
-+++ b/include/net/netfilter/nf_tables.h
-@@ -416,7 +416,8 @@ struct nft_set {
- 	unsigned char			*udata;
- 	/* runtime data below here */
- 	const struct nft_set_ops	*ops ____cacheline_aligned;
--	u16				flags:14,
-+	u16				flags:13,
-+					bound:1,
- 					genmask:2;
- 	u8				klen;
- 	u8				dlen;
-@@ -1329,15 +1330,12 @@ struct nft_trans_rule {
- struct nft_trans_set {
- 	struct nft_set			*set;
- 	u32				set_id;
--	bool				bound;
- };
- 
- #define nft_trans_set(trans)	\
- 	(((struct nft_trans_set *)trans->data)->set)
- #define nft_trans_set_id(trans)	\
- 	(((struct nft_trans_set *)trans->data)->set_id)
--#define nft_trans_set_bound(trans)	\
--	(((struct nft_trans_set *)trans->data)->bound)
- 
- struct nft_trans_chain {
- 	bool				update;
-diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
-index 4893f248dfdc..e1724f9d8b9d 100644
---- a/net/netfilter/nf_tables_api.c
-+++ b/net/netfilter/nf_tables_api.c
-@@ -127,7 +127,7 @@ static void nft_set_trans_bind(const struct nft_ctx *ctx, struct nft_set *set)
- 	list_for_each_entry_reverse(trans, &net->nft.commit_list, list) {
- 		if (trans->msg_type == NFT_MSG_NEWSET &&
- 		    nft_trans_set(trans) == set) {
--			nft_trans_set_bound(trans) = true;
-+			set->bound = true;
- 			break;
- 		}
- 	}
-@@ -6617,8 +6617,7 @@ static void nf_tables_abort_release(struct nft_trans *trans)
- 		nf_tables_rule_destroy(&trans->ctx, nft_trans_rule(trans));
- 		break;
- 	case NFT_MSG_NEWSET:
--		if (!nft_trans_set_bound(trans))
--			nft_set_destroy(nft_trans_set(trans));
-+		nft_set_destroy(nft_trans_set(trans));
- 		break;
- 	case NFT_MSG_NEWSETELEM:
- 		nft_set_elem_destroy(nft_trans_elem_set(trans),
-@@ -6691,8 +6690,11 @@ static int __nf_tables_abort(struct net *net)
- 			break;
- 		case NFT_MSG_NEWSET:
- 			trans->ctx.table->use--;
--			if (!nft_trans_set_bound(trans))
--				list_del_rcu(&nft_trans_set(trans)->list);
-+			if (nft_trans_set(trans)->bound) {
-+				nft_trans_destroy(trans);
-+				break;
-+			}
-+			list_del_rcu(&nft_trans_set(trans)->list);
- 			break;
- 		case NFT_MSG_DELSET:
- 			trans->ctx.table->use++;
-@@ -6700,8 +6702,11 @@ static int __nf_tables_abort(struct net *net)
- 			nft_trans_destroy(trans);
- 			break;
- 		case NFT_MSG_NEWSETELEM:
-+			if (nft_trans_elem_set(trans)->bound) {
-+				nft_trans_destroy(trans);
-+				break;
-+			}
- 			te = (struct nft_trans_elem *)trans->data;
--
- 			te->set->ops->remove(net, te->set, &te->elem);
- 			atomic_dec(&te->set->nelems);
- 			break;
--- 
-2.11.0



* [gentoo-commits] proj/linux-patches:5.0 commit in: /
@ 2019-04-05 21:47 Mike Pagano
  0 siblings, 0 replies; 28+ messages in thread
From: Mike Pagano @ 2019-04-05 21:47 UTC (permalink / raw
  To: gentoo-commits

commit:     2feb532d735ca946c71fb2e6bdcc397a58cdf191
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Apr  5 21:46:56 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Apr  5 21:46:56 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2feb532d

Linux patch 5.0.7

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |    4 +
 1006_linux-5.0.7.patch | 8838 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 8842 insertions(+)

diff --git a/0000_README b/0000_README
index d25ad88..0545dfc 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,10 @@ Patch:  1005_linux-5.0.6.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.0.6
 
+Patch:  1006_linux-5.0.7.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.0.7
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1006_linux-5.0.7.patch b/1006_linux-5.0.7.patch
new file mode 100644
index 0000000..4f6abf5
--- /dev/null
+++ b/1006_linux-5.0.7.patch
@@ -0,0 +1,8838 @@
+diff --git a/Documentation/arm/kernel_mode_neon.txt b/Documentation/arm/kernel_mode_neon.txt
+index 525452726d31..b9e060c5b61e 100644
+--- a/Documentation/arm/kernel_mode_neon.txt
++++ b/Documentation/arm/kernel_mode_neon.txt
+@@ -6,7 +6,7 @@ TL;DR summary
+ * Use only NEON instructions, or VFP instructions that don't rely on support
+   code
+ * Isolate your NEON code in a separate compilation unit, and compile it with
+-  '-mfpu=neon -mfloat-abi=softfp'
++  '-march=armv7-a -mfpu=neon -mfloat-abi=softfp'
+ * Put kernel_neon_begin() and kernel_neon_end() calls around the calls into your
+   NEON code
+ * Don't sleep in your NEON code, and be aware that it will be executed with
+@@ -87,7 +87,7 @@ instructions appearing in unexpected places if no special care is taken.
+ Therefore, the recommended and only supported way of using NEON/VFP in the
+ kernel is by adhering to the following rules:
+ * isolate the NEON code in a separate compilation unit and compile it with
+-  '-mfpu=neon -mfloat-abi=softfp';
++  '-march=armv7-a -mfpu=neon -mfloat-abi=softfp';
+ * issue the calls to kernel_neon_begin(), kernel_neon_end() as well as the calls
+   into the unit containing the NEON code from a compilation unit which is *not*
+   built with the GCC flag '-mfpu=neon' set.
+diff --git a/Makefile b/Makefile
+index 3ee390feea61..af99c77c7066 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 0
+-SUBLEVEL = 6
++SUBLEVEL = 7
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+@@ -15,19 +15,6 @@ NAME = Shy Crocodile
+ PHONY := _all
+ _all:
+ 
+-# Do not use make's built-in rules and variables
+-# (this increases performance and avoids hard-to-debug behaviour)
+-MAKEFLAGS += -rR
+-
+-# Avoid funny character set dependencies
+-unexport LC_ALL
+-LC_COLLATE=C
+-LC_NUMERIC=C
+-export LC_COLLATE LC_NUMERIC
+-
+-# Avoid interference with shell env settings
+-unexport GREP_OPTIONS
+-
+ # We are using a recursive build, so we need to do a little thinking
+ # to get the ordering right.
+ #
+@@ -44,6 +31,21 @@ unexport GREP_OPTIONS
+ # descending is started. They are now explicitly listed as the
+ # prepare rule.
+ 
++ifneq ($(sub-make-done),1)
++
++# Do not use make's built-in rules and variables
++# (this increases performance and avoids hard-to-debug behaviour)
++MAKEFLAGS += -rR
++
++# Avoid funny character set dependencies
++unexport LC_ALL
++LC_COLLATE=C
++LC_NUMERIC=C
++export LC_COLLATE LC_NUMERIC
++
++# Avoid interference with shell env settings
++unexport GREP_OPTIONS
++
+ # Beautify output
+ # ---------------------------------------------------------------------------
+ #
+@@ -112,7 +114,6 @@ export quiet Q KBUILD_VERBOSE
+ 
+ # KBUILD_SRC is not intended to be used by the regular user (for now),
+ # it is set on invocation of make with KBUILD_OUTPUT or O= specified.
+-ifeq ($(KBUILD_SRC),)
+ 
+ # OK, Make called in directory where kernel src resides
+ # Do we want to locate output files in a separate directory?
+@@ -142,6 +143,24 @@ $(if $(KBUILD_OUTPUT),, \
+ # 'sub-make' below.
+ MAKEFLAGS += --include-dir=$(CURDIR)
+ 
++need-sub-make := 1
++else
++
++# Do not print "Entering directory ..." at all for in-tree build.
++MAKEFLAGS += --no-print-directory
++
++endif # ifneq ($(KBUILD_OUTPUT),)
++
++ifneq ($(filter 3.%,$(MAKE_VERSION)),)
++# 'MAKEFLAGS += -rR' does not immediately become effective for GNU Make 3.x
++# We need to invoke sub-make to avoid implicit rules in the top Makefile.
++need-sub-make := 1
++# Cancel implicit rules for this Makefile.
++$(lastword $(MAKEFILE_LIST)): ;
++endif
++
++ifeq ($(need-sub-make),1)
++
+ PHONY += $(MAKECMDGOALS) sub-make
+ 
+ $(filter-out _all sub-make $(CURDIR)/Makefile, $(MAKECMDGOALS)) _all: sub-make
+@@ -149,16 +168,15 @@ $(filter-out _all sub-make $(CURDIR)/Makefile, $(MAKECMDGOALS)) _all: sub-make
+ 
+ # Invoke a second make in the output directory, passing relevant variables
+ sub-make:
+-	$(Q)$(MAKE) -C $(KBUILD_OUTPUT) KBUILD_SRC=$(CURDIR) \
++	$(Q)$(MAKE) sub-make-done=1 \
++	$(if $(KBUILD_OUTPUT),-C $(KBUILD_OUTPUT) KBUILD_SRC=$(CURDIR)) \
+ 	-f $(CURDIR)/Makefile $(filter-out _all sub-make,$(MAKECMDGOALS))
+ 
+-# Leave processing to above invocation of make
+-skip-makefile := 1
+-endif # ifneq ($(KBUILD_OUTPUT),)
+-endif # ifeq ($(KBUILD_SRC),)
++endif # need-sub-make
++endif # sub-make-done
+ 
+ # We process the rest of the Makefile if this is the final invocation of make
+-ifeq ($(skip-makefile),)
++ifeq ($(need-sub-make),)
+ 
+ # Do not print "Entering directory ...",
+ # but we want to display it when entering to the output directory
+@@ -625,12 +643,15 @@ ifeq ($(may-sync-config),1)
+ -include include/config/auto.conf.cmd
+ 
+ # To avoid any implicit rule to kick in, define an empty command
+-$(KCONFIG_CONFIG) include/config/auto.conf.cmd: ;
++$(KCONFIG_CONFIG): ;
+ 
+ # The actual configuration files used during the build are stored in
+ # include/generated/ and include/config/. Update them if .config is newer than
+ # include/config/auto.conf (which mirrors .config).
+-include/config/%.conf: $(KCONFIG_CONFIG) include/config/auto.conf.cmd
++#
++# This exploits the 'multi-target pattern rule' trick.
++# The syncconfig should be executed only once to make all the targets.
++%/auto.conf %/auto.conf.cmd %/tristate.conf: $(KCONFIG_CONFIG)
+ 	$(Q)$(MAKE) -f $(srctree)/Makefile syncconfig
+ else
+ # External modules and some install targets need include/generated/autoconf.h
+@@ -1756,7 +1777,7 @@ $(cmd_files): ;	# Do not try to update included dependency files
+ 
+ endif   # ifeq ($(config-targets),1)
+ endif   # ifeq ($(mixed-targets),1)
+-endif	# skip-makefile
++endif   # need-sub-make
+ 
+ PHONY += FORCE
+ FORCE:
+diff --git a/arch/arm/boot/dts/lpc32xx.dtsi b/arch/arm/boot/dts/lpc32xx.dtsi
+index b7303a4e4236..ed0d6fb20122 100644
+--- a/arch/arm/boot/dts/lpc32xx.dtsi
++++ b/arch/arm/boot/dts/lpc32xx.dtsi
+@@ -230,7 +230,7 @@
+ 				status = "disabled";
+ 			};
+ 
+-			i2s1: i2s@2009C000 {
++			i2s1: i2s@2009c000 {
+ 				compatible = "nxp,lpc3220-i2s";
+ 				reg = <0x2009C000 0x1000>;
+ 			};
+@@ -273,7 +273,7 @@
+ 				status = "disabled";
+ 			};
+ 
+-			i2c1: i2c@400A0000 {
++			i2c1: i2c@400a0000 {
+ 				compatible = "nxp,pnx-i2c";
+ 				reg = <0x400A0000 0x100>;
+ 				interrupt-parent = <&sic1>;
+@@ -284,7 +284,7 @@
+ 				clocks = <&clk LPC32XX_CLK_I2C1>;
+ 			};
+ 
+-			i2c2: i2c@400A8000 {
++			i2c2: i2c@400a8000 {
+ 				compatible = "nxp,pnx-i2c";
+ 				reg = <0x400A8000 0x100>;
+ 				interrupt-parent = <&sic1>;
+@@ -295,7 +295,7 @@
+ 				clocks = <&clk LPC32XX_CLK_I2C2>;
+ 			};
+ 
+-			mpwm: mpwm@400E8000 {
++			mpwm: mpwm@400e8000 {
+ 				compatible = "nxp,lpc3220-motor-pwm";
+ 				reg = <0x400E8000 0x78>;
+ 				status = "disabled";
+@@ -394,7 +394,7 @@
+ 				#gpio-cells = <3>; /* bank, pin, flags */
+ 			};
+ 
+-			timer4: timer@4002C000 {
++			timer4: timer@4002c000 {
+ 				compatible = "nxp,lpc3220-timer";
+ 				reg = <0x4002C000 0x1000>;
+ 				interrupts = <3 IRQ_TYPE_LEVEL_LOW>;
+@@ -412,7 +412,7 @@
+ 				status = "disabled";
+ 			};
+ 
+-			watchdog: watchdog@4003C000 {
++			watchdog: watchdog@4003c000 {
+ 				compatible = "nxp,pnx4008-wdt";
+ 				reg = <0x4003C000 0x1000>;
+ 				clocks = <&clk LPC32XX_CLK_WDOG>;
+@@ -451,7 +451,7 @@
+ 				status = "disabled";
+ 			};
+ 
+-			timer1: timer@4004C000 {
++			timer1: timer@4004c000 {
+ 				compatible = "nxp,lpc3220-timer";
+ 				reg = <0x4004C000 0x1000>;
+ 				interrupts = <17 IRQ_TYPE_LEVEL_LOW>;
+@@ -475,7 +475,7 @@
+ 				status = "disabled";
+ 			};
+ 
+-			pwm1: pwm@4005C000 {
++			pwm1: pwm@4005c000 {
+ 				compatible = "nxp,lpc3220-pwm";
+ 				reg = <0x4005C000 0x4>;
+ 				clocks = <&clk LPC32XX_CLK_PWM1>;
+@@ -484,7 +484,7 @@
+ 				status = "disabled";
+ 			};
+ 
+-			pwm2: pwm@4005C004 {
++			pwm2: pwm@4005c004 {
+ 				compatible = "nxp,lpc3220-pwm";
+ 				reg = <0x4005C004 0x4>;
+ 				clocks = <&clk LPC32XX_CLK_PWM2>;
+diff --git a/arch/arm/boot/dts/meson8b.dtsi b/arch/arm/boot/dts/meson8b.dtsi
+index 22d775460767..dc125769fe85 100644
+--- a/arch/arm/boot/dts/meson8b.dtsi
++++ b/arch/arm/boot/dts/meson8b.dtsi
+@@ -270,9 +270,7 @@
+ 				groups = "eth_tx_clk",
+ 					 "eth_tx_en",
+ 					 "eth_txd1_0",
+-					 "eth_txd1_1",
+ 					 "eth_txd0_0",
+-					 "eth_txd0_1",
+ 					 "eth_rx_clk",
+ 					 "eth_rx_dv",
+ 					 "eth_rxd1",
+@@ -281,7 +279,9 @@
+ 					 "eth_mdc",
+ 					 "eth_ref_clk",
+ 					 "eth_txd2",
+-					 "eth_txd3";
++					 "eth_txd3",
++					 "eth_rxd3",
++					 "eth_rxd2";
+ 				function = "ethernet";
+ 				bias-disable;
+ 			};
+diff --git a/arch/arm/include/asm/barrier.h b/arch/arm/include/asm/barrier.h
+index 69772e742a0a..83ae97c049d9 100644
+--- a/arch/arm/include/asm/barrier.h
++++ b/arch/arm/include/asm/barrier.h
+@@ -11,6 +11,8 @@
+ #define sev()	__asm__ __volatile__ ("sev" : : : "memory")
+ #define wfe()	__asm__ __volatile__ ("wfe" : : : "memory")
+ #define wfi()	__asm__ __volatile__ ("wfi" : : : "memory")
++#else
++#define wfe()	do { } while (0)
+ #endif
+ 
+ #if __LINUX_ARM_ARCH__ >= 7
+diff --git a/arch/arm/include/asm/processor.h b/arch/arm/include/asm/processor.h
+index 120f4c9bbfde..57fe73ea0f72 100644
+--- a/arch/arm/include/asm/processor.h
++++ b/arch/arm/include/asm/processor.h
+@@ -89,7 +89,11 @@ extern void release_thread(struct task_struct *);
+ unsigned long get_wchan(struct task_struct *p);
+ 
+ #if __LINUX_ARM_ARCH__ == 6 || defined(CONFIG_ARM_ERRATA_754327)
+-#define cpu_relax()			smp_mb()
++#define cpu_relax()						\
++	do {							\
++		smp_mb();					\
++		__asm__ __volatile__("nop; nop; nop; nop; nop; nop; nop; nop; nop; nop;");	\
++	} while (0)
+ #else
+ #define cpu_relax()			barrier()
+ #endif
+diff --git a/arch/arm/include/asm/v7m.h b/arch/arm/include/asm/v7m.h
+index 187ccf6496ad..2cb00d15831b 100644
+--- a/arch/arm/include/asm/v7m.h
++++ b/arch/arm/include/asm/v7m.h
+@@ -49,7 +49,7 @@
+  * (0 -> msp; 1 -> psp). Bits [1:0] are fixed to 0b01.
+  */
+ #define EXC_RET_STACK_MASK			0x00000004
+-#define EXC_RET_THREADMODE_PROCESSSTACK		0xfffffffd
++#define EXC_RET_THREADMODE_PROCESSSTACK		(3 << 2)
+ 
+ /* Cache related definitions */
+ 
+diff --git a/arch/arm/kernel/entry-header.S b/arch/arm/kernel/entry-header.S
+index 773424843d6e..62db1c9746cb 100644
+--- a/arch/arm/kernel/entry-header.S
++++ b/arch/arm/kernel/entry-header.S
+@@ -127,7 +127,8 @@
+          */
+ 	.macro	v7m_exception_slow_exit ret_r0
+ 	cpsid	i
+-	ldr	lr, =EXC_RET_THREADMODE_PROCESSSTACK
++	ldr	lr, =exc_ret
++	ldr	lr, [lr]
+ 
+ 	@ read original r12, sp, lr, pc and xPSR
+ 	add	r12, sp, #S_IP
+diff --git a/arch/arm/kernel/entry-v7m.S b/arch/arm/kernel/entry-v7m.S
+index abcf47848525..19d2dcd6530d 100644
+--- a/arch/arm/kernel/entry-v7m.S
++++ b/arch/arm/kernel/entry-v7m.S
+@@ -146,3 +146,7 @@ ENTRY(vector_table)
+ 	.rept	CONFIG_CPU_V7M_NUM_IRQ
+ 	.long	__irq_entry		@ External Interrupts
+ 	.endr
++	.align	2
++	.globl	exc_ret
++exc_ret:
++	.space	4
+diff --git a/arch/arm/kernel/machine_kexec.c b/arch/arm/kernel/machine_kexec.c
+index dd2eb5f76b9f..76300f3813e8 100644
+--- a/arch/arm/kernel/machine_kexec.c
++++ b/arch/arm/kernel/machine_kexec.c
+@@ -91,8 +91,11 @@ void machine_crash_nonpanic_core(void *unused)
+ 
+ 	set_cpu_online(smp_processor_id(), false);
+ 	atomic_dec(&waiting_for_crash_ipi);
+-	while (1)
++
++	while (1) {
+ 		cpu_relax();
++		wfe();
++	}
+ }
+ 
+ void crash_smp_send_stop(void)
+diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
+index 1d6f5ea522f4..a3ce7c5365fa 100644
+--- a/arch/arm/kernel/smp.c
++++ b/arch/arm/kernel/smp.c
+@@ -604,8 +604,10 @@ static void ipi_cpu_stop(unsigned int cpu)
+ 	local_fiq_disable();
+ 	local_irq_disable();
+ 
+-	while (1)
++	while (1) {
+ 		cpu_relax();
++		wfe();
++	}
+ }
+ 
+ static DEFINE_PER_CPU(struct completion *, cpu_completion);
+diff --git a/arch/arm/kernel/unwind.c b/arch/arm/kernel/unwind.c
+index 0bee233fef9a..314cfb232a63 100644
+--- a/arch/arm/kernel/unwind.c
++++ b/arch/arm/kernel/unwind.c
+@@ -93,7 +93,7 @@ extern const struct unwind_idx __start_unwind_idx[];
+ static const struct unwind_idx *__origin_unwind_idx;
+ extern const struct unwind_idx __stop_unwind_idx[];
+ 
+-static DEFINE_SPINLOCK(unwind_lock);
++static DEFINE_RAW_SPINLOCK(unwind_lock);
+ static LIST_HEAD(unwind_tables);
+ 
+ /* Convert a prel31 symbol to an absolute address */
+@@ -201,7 +201,7 @@ static const struct unwind_idx *unwind_find_idx(unsigned long addr)
+ 		/* module unwind tables */
+ 		struct unwind_table *table;
+ 
+-		spin_lock_irqsave(&unwind_lock, flags);
++		raw_spin_lock_irqsave(&unwind_lock, flags);
+ 		list_for_each_entry(table, &unwind_tables, list) {
+ 			if (addr >= table->begin_addr &&
+ 			    addr < table->end_addr) {
+@@ -213,7 +213,7 @@ static const struct unwind_idx *unwind_find_idx(unsigned long addr)
+ 				break;
+ 			}
+ 		}
+-		spin_unlock_irqrestore(&unwind_lock, flags);
++		raw_spin_unlock_irqrestore(&unwind_lock, flags);
+ 	}
+ 
+ 	pr_debug("%s: idx = %p\n", __func__, idx);
+@@ -529,9 +529,9 @@ struct unwind_table *unwind_table_add(unsigned long start, unsigned long size,
+ 	tab->begin_addr = text_addr;
+ 	tab->end_addr = text_addr + text_size;
+ 
+-	spin_lock_irqsave(&unwind_lock, flags);
++	raw_spin_lock_irqsave(&unwind_lock, flags);
+ 	list_add_tail(&tab->list, &unwind_tables);
+-	spin_unlock_irqrestore(&unwind_lock, flags);
++	raw_spin_unlock_irqrestore(&unwind_lock, flags);
+ 
+ 	return tab;
+ }
+@@ -543,9 +543,9 @@ void unwind_table_del(struct unwind_table *tab)
+ 	if (!tab)
+ 		return;
+ 
+-	spin_lock_irqsave(&unwind_lock, flags);
++	raw_spin_lock_irqsave(&unwind_lock, flags);
+ 	list_del(&tab->list);
+-	spin_unlock_irqrestore(&unwind_lock, flags);
++	raw_spin_unlock_irqrestore(&unwind_lock, flags);
+ 
+ 	kfree(tab);
+ }
+diff --git a/arch/arm/lib/Makefile b/arch/arm/lib/Makefile
+index ad25fd1872c7..0bff0176db2c 100644
+--- a/arch/arm/lib/Makefile
++++ b/arch/arm/lib/Makefile
+@@ -39,7 +39,7 @@ $(obj)/csumpartialcopy.o:	$(obj)/csumpartialcopygeneric.S
+ $(obj)/csumpartialcopyuser.o:	$(obj)/csumpartialcopygeneric.S
+ 
+ ifeq ($(CONFIG_KERNEL_MODE_NEON),y)
+-  NEON_FLAGS			:= -mfloat-abi=softfp -mfpu=neon
++  NEON_FLAGS			:= -march=armv7-a -mfloat-abi=softfp -mfpu=neon
+   CFLAGS_xor-neon.o		+= $(NEON_FLAGS)
+   obj-$(CONFIG_XOR_BLOCKS)	+= xor-neon.o
+ endif
+diff --git a/arch/arm/lib/xor-neon.c b/arch/arm/lib/xor-neon.c
+index 2c40aeab3eaa..c691b901092f 100644
+--- a/arch/arm/lib/xor-neon.c
++++ b/arch/arm/lib/xor-neon.c
+@@ -14,7 +14,7 @@
+ MODULE_LICENSE("GPL");
+ 
+ #ifndef __ARM_NEON__
+-#error You should compile this file with '-mfloat-abi=softfp -mfpu=neon'
++#error You should compile this file with '-march=armv7-a -mfloat-abi=softfp -mfpu=neon'
+ #endif
+ 
+ /*
+diff --git a/arch/arm/mach-omap2/prm_common.c b/arch/arm/mach-omap2/prm_common.c
+index 058a37e6d11c..fd6e0671f957 100644
+--- a/arch/arm/mach-omap2/prm_common.c
++++ b/arch/arm/mach-omap2/prm_common.c
+@@ -523,8 +523,10 @@ void omap_prm_reset_system(void)
+ 
+ 	prm_ll_data->reset_system();
+ 
+-	while (1)
++	while (1) {
+ 		cpu_relax();
++		wfe();
++	}
+ }
+ 
+ /**
+diff --git a/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c b/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c
+index 8e50daa99151..dc526ef2e9b3 100644
+--- a/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c
++++ b/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c
+@@ -40,6 +40,7 @@
+ struct regulator_quirk {
+ 	struct list_head		list;
+ 	const struct of_device_id	*id;
++	struct device_node		*np;
+ 	struct of_phandle_args		irq_args;
+ 	struct i2c_msg			i2c_msg;
+ 	bool				shared;	/* IRQ line is shared */
+@@ -101,6 +102,9 @@ static int regulator_quirk_notify(struct notifier_block *nb,
+ 		if (!pos->shared)
+ 			continue;
+ 
++		if (pos->np->parent != client->dev.parent->of_node)
++			continue;
++
+ 		dev_info(&client->dev, "clearing %s@0x%02x interrupts\n",
+ 			 pos->id->compatible, pos->i2c_msg.addr);
+ 
+@@ -165,6 +169,7 @@ static int __init rcar_gen2_regulator_quirk(void)
+ 		memcpy(&quirk->i2c_msg, id->data, sizeof(quirk->i2c_msg));
+ 
+ 		quirk->id = id;
++		quirk->np = np;
+ 		quirk->i2c_msg.addr = addr;
+ 
+ 		ret = of_irq_parse_one(np, 0, argsa);
+diff --git a/arch/arm/mm/copypage-v4mc.c b/arch/arm/mm/copypage-v4mc.c
+index b03202cddddb..f74cdce6d4da 100644
+--- a/arch/arm/mm/copypage-v4mc.c
++++ b/arch/arm/mm/copypage-v4mc.c
+@@ -45,6 +45,7 @@ static void mc_copy_user_page(void *from, void *to)
+ 	int tmp;
+ 
+ 	asm volatile ("\
++	.syntax unified\n\
+ 	ldmia	%0!, {r2, r3, ip, lr}		@ 4\n\
+ 1:	mcr	p15, 0, %1, c7, c6, 1		@ 1   invalidate D line\n\
+ 	stmia	%1!, {r2, r3, ip, lr}		@ 4\n\
+@@ -56,7 +57,7 @@ static void mc_copy_user_page(void *from, void *to)
+ 	ldmia	%0!, {r2, r3, ip, lr}		@ 4\n\
+ 	subs	%2, %2, #1			@ 1\n\
+ 	stmia	%1!, {r2, r3, ip, lr}		@ 4\n\
+-	ldmneia	%0!, {r2, r3, ip, lr}		@ 4\n\
++	ldmiane	%0!, {r2, r3, ip, lr}		@ 4\n\
+ 	bne	1b				@ "
+ 	: "+&r" (from), "+&r" (to), "=&r" (tmp)
+ 	: "2" (PAGE_SIZE / 64)
+diff --git a/arch/arm/mm/copypage-v4wb.c b/arch/arm/mm/copypage-v4wb.c
+index cd3e165afeed..6d336740aae4 100644
+--- a/arch/arm/mm/copypage-v4wb.c
++++ b/arch/arm/mm/copypage-v4wb.c
+@@ -27,6 +27,7 @@ static void v4wb_copy_user_page(void *kto, const void *kfrom)
+ 	int tmp;
+ 
+ 	asm volatile ("\
++	.syntax unified\n\
+ 	ldmia	%1!, {r3, r4, ip, lr}		@ 4\n\
+ 1:	mcr	p15, 0, %0, c7, c6, 1		@ 1   invalidate D line\n\
+ 	stmia	%0!, {r3, r4, ip, lr}		@ 4\n\
+@@ -38,7 +39,7 @@ static void v4wb_copy_user_page(void *kto, const void *kfrom)
+ 	ldmia	%1!, {r3, r4, ip, lr}		@ 4\n\
+ 	subs	%2, %2, #1			@ 1\n\
+ 	stmia	%0!, {r3, r4, ip, lr}		@ 4\n\
+-	ldmneia	%1!, {r3, r4, ip, lr}		@ 4\n\
++	ldmiane	%1!, {r3, r4, ip, lr}		@ 4\n\
+ 	bne	1b				@ 1\n\
+ 	mcr	p15, 0, %1, c7, c10, 4		@ 1   drain WB"
+ 	: "+&r" (kto), "+&r" (kfrom), "=&r" (tmp)
+diff --git a/arch/arm/mm/copypage-v4wt.c b/arch/arm/mm/copypage-v4wt.c
+index 8614572e1296..3851bb396442 100644
+--- a/arch/arm/mm/copypage-v4wt.c
++++ b/arch/arm/mm/copypage-v4wt.c
+@@ -25,6 +25,7 @@ static void v4wt_copy_user_page(void *kto, const void *kfrom)
+ 	int tmp;
+ 
+ 	asm volatile ("\
++	.syntax unified\n\
+ 	ldmia	%1!, {r3, r4, ip, lr}		@ 4\n\
+ 1:	stmia	%0!, {r3, r4, ip, lr}		@ 4\n\
+ 	ldmia	%1!, {r3, r4, ip, lr}		@ 4+1\n\
+@@ -34,7 +35,7 @@ static void v4wt_copy_user_page(void *kto, const void *kfrom)
+ 	ldmia	%1!, {r3, r4, ip, lr}		@ 4\n\
+ 	subs	%2, %2, #1			@ 1\n\
+ 	stmia	%0!, {r3, r4, ip, lr}		@ 4\n\
+-	ldmneia	%1!, {r3, r4, ip, lr}		@ 4\n\
++	ldmiane	%1!, {r3, r4, ip, lr}		@ 4\n\
+ 	bne	1b				@ 1\n\
+ 	mcr	p15, 0, %2, c7, c7, 0		@ flush ID cache"
+ 	: "+&r" (kto), "+&r" (kfrom), "=&r" (tmp)
+diff --git a/arch/arm/mm/proc-v7m.S b/arch/arm/mm/proc-v7m.S
+index 47a5acc64433..92e84181933a 100644
+--- a/arch/arm/mm/proc-v7m.S
++++ b/arch/arm/mm/proc-v7m.S
+@@ -139,6 +139,9 @@ __v7m_setup_cont:
+ 	cpsie	i
+ 	svc	#0
+ 1:	cpsid	i
++	ldr	r0, =exc_ret
++	orr	lr, lr, #EXC_RET_THREADMODE_PROCESSSTACK
++	str	lr, [r0]
+ 	ldmia	sp, {r0-r3, r12}
+ 	str	r5, [r12, #11 * 4]	@ restore the original SVC vector entry
+ 	mov	lr, r6			@ restore LR
+diff --git a/arch/h8300/Makefile b/arch/h8300/Makefile
+index f801f3708a89..ba0f26cfad61 100644
+--- a/arch/h8300/Makefile
++++ b/arch/h8300/Makefile
+@@ -27,7 +27,7 @@ KBUILD_LDFLAGS += $(ldflags-y)
+ CHECKFLAGS += -msize-long
+ 
+ ifeq ($(CROSS_COMPILE),)
+-CROSS_COMPILE := h8300-unknown-linux-
++CROSS_COMPILE := $(call cc-cross-prefix, h8300-unknown-linux- h8300-linux-)
+ endif
+ 
+ core-y	+= arch/$(ARCH)/kernel/ arch/$(ARCH)/mm/
+diff --git a/arch/powerpc/include/asm/topology.h b/arch/powerpc/include/asm/topology.h
+index a4a718dbfec6..f85e2b01c3df 100644
+--- a/arch/powerpc/include/asm/topology.h
++++ b/arch/powerpc/include/asm/topology.h
+@@ -132,6 +132,8 @@ static inline void shared_proc_topology_init(void) {}
+ #define topology_sibling_cpumask(cpu)	(per_cpu(cpu_sibling_map, cpu))
+ #define topology_core_cpumask(cpu)	(per_cpu(cpu_core_map, cpu))
+ #define topology_core_id(cpu)		(cpu_to_core_id(cpu))
++
++int dlpar_cpu_readd(int cpu);
+ #endif
+ #endif
+ 
+diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
+index 435927f549c4..a2c168b395d2 100644
+--- a/arch/powerpc/kernel/entry_64.S
++++ b/arch/powerpc/kernel/entry_64.S
+@@ -1002,6 +1002,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
+ 	ld	r2,_NIP(r1)
+ 	mtspr	SPRN_SRR0,r2
+ 
++	/*
++	 * Leaving a stale exception_marker on the stack can confuse
++	 * the reliable stack unwinder later on. Clear it.
++	 */
++	li	r2,0
++	std	r2,STACK_FRAME_OVERHEAD-16(r1)
++
+ 	ld	r0,GPR0(r1)
+ 	ld	r2,GPR2(r1)
+ 	ld	r3,GPR3(r1)
+diff --git a/arch/powerpc/kernel/ptrace.c b/arch/powerpc/kernel/ptrace.c
+index 53151698bfe0..d9ac7d94656e 100644
+--- a/arch/powerpc/kernel/ptrace.c
++++ b/arch/powerpc/kernel/ptrace.c
+@@ -33,6 +33,7 @@
+ #include <linux/hw_breakpoint.h>
+ #include <linux/perf_event.h>
+ #include <linux/context_tracking.h>
++#include <linux/nospec.h>
+ 
+ #include <linux/uaccess.h>
+ #include <linux/pkeys.h>
+@@ -274,6 +275,8 @@ static int set_user_trap(struct task_struct *task, unsigned long trap)
+  */
+ int ptrace_get_reg(struct task_struct *task, int regno, unsigned long *data)
+ {
++	unsigned int regs_max;
++
+ 	if ((task->thread.regs == NULL) || !data)
+ 		return -EIO;
+ 
+@@ -297,7 +300,9 @@ int ptrace_get_reg(struct task_struct *task, int regno, unsigned long *data)
+ 	}
+ #endif
+ 
+-	if (regno < (sizeof(struct user_pt_regs) / sizeof(unsigned long))) {
++	regs_max = sizeof(struct user_pt_regs) / sizeof(unsigned long);
++	if (regno < regs_max) {
++		regno = array_index_nospec(regno, regs_max);
+ 		*data = ((unsigned long *)task->thread.regs)[regno];
+ 		return 0;
+ 	}
+@@ -321,6 +326,7 @@ int ptrace_put_reg(struct task_struct *task, int regno, unsigned long data)
+ 		return set_user_dscr(task, data);
+ 
+ 	if (regno <= PT_MAX_PUT_REG) {
++		regno = array_index_nospec(regno, PT_MAX_PUT_REG + 1);
+ 		((unsigned long *)task->thread.regs)[regno] = data;
+ 		return 0;
+ 	}
+diff --git a/arch/powerpc/mm/hugetlbpage-radix.c b/arch/powerpc/mm/hugetlbpage-radix.c
+index 2486bee0f93e..97c7a39ebc00 100644
+--- a/arch/powerpc/mm/hugetlbpage-radix.c
++++ b/arch/powerpc/mm/hugetlbpage-radix.c
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include <linux/mm.h>
+ #include <linux/hugetlb.h>
++#include <linux/security.h>
+ #include <asm/pgtable.h>
+ #include <asm/pgalloc.h>
+ #include <asm/cacheflush.h>
+@@ -73,7 +74,7 @@ radix__hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
+ 	if (addr) {
+ 		addr = ALIGN(addr, huge_page_size(h));
+ 		vma = find_vma(mm, addr);
+-		if (high_limit - len >= addr &&
++		if (high_limit - len >= addr && addr >= mmap_min_addr &&
+ 		    (!vma || addr + len <= vm_start_gap(vma)))
+ 			return addr;
+ 	}
+@@ -83,7 +84,7 @@ radix__hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
+ 	 */
+ 	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
+ 	info.length = len;
+-	info.low_limit = PAGE_SIZE;
++	info.low_limit = max(PAGE_SIZE, mmap_min_addr);
+ 	info.high_limit = mm->mmap_base + (high_limit - DEFAULT_MAP_WINDOW);
+ 	info.align_mask = PAGE_MASK & ~huge_page_mask(h);
+ 	info.align_offset = 0;
+diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
+index 87f0dd004295..b5d1c45c1475 100644
+--- a/arch/powerpc/mm/numa.c
++++ b/arch/powerpc/mm/numa.c
+@@ -1460,13 +1460,6 @@ static void reset_topology_timer(void)
+ 
+ #ifdef CONFIG_SMP
+ 
+-static void stage_topology_update(int core_id)
+-{
+-	cpumask_or(&cpu_associativity_changes_mask,
+-		&cpu_associativity_changes_mask, cpu_sibling_mask(core_id));
+-	reset_topology_timer();
+-}
+-
+ static int dt_update_callback(struct notifier_block *nb,
+ 				unsigned long action, void *data)
+ {
+@@ -1479,7 +1472,7 @@ static int dt_update_callback(struct notifier_block *nb,
+ 		    !of_prop_cmp(update->prop->name, "ibm,associativity")) {
+ 			u32 core_id;
+ 			of_property_read_u32(update->dn, "reg", &core_id);
+-			stage_topology_update(core_id);
++			rc = dlpar_cpu_readd(core_id);
+ 			rc = NOTIFY_OK;
+ 		}
+ 		break;
+diff --git a/arch/powerpc/platforms/44x/Kconfig b/arch/powerpc/platforms/44x/Kconfig
+index 4a9a72d01c3c..35be81fd2dc2 100644
+--- a/arch/powerpc/platforms/44x/Kconfig
++++ b/arch/powerpc/platforms/44x/Kconfig
+@@ -180,6 +180,7 @@ config CURRITUCK
+ 	depends on PPC_47x
+ 	select SWIOTLB
+ 	select 476FPE
++	select FORCE_PCI
+ 	select PPC4xx_PCI_EXPRESS
+ 	help
+ 	  This option enables support for the IBM Currituck (476fpe) evaluation board
+diff --git a/arch/powerpc/platforms/powernv/pci-ioda-tce.c b/arch/powerpc/platforms/powernv/pci-ioda-tce.c
+index 697449afb3f7..e28f03e1eb5e 100644
+--- a/arch/powerpc/platforms/powernv/pci-ioda-tce.c
++++ b/arch/powerpc/platforms/powernv/pci-ioda-tce.c
+@@ -313,7 +313,6 @@ long pnv_pci_ioda2_table_alloc_pages(int nid, __u64 bus_offset,
+ 			page_shift);
+ 	tbl->it_level_size = 1ULL << (level_shift - 3);
+ 	tbl->it_indirect_levels = levels - 1;
+-	tbl->it_allocated_size = total_allocated;
+ 	tbl->it_userspace = uas;
+ 	tbl->it_nid = nid;
+ 
+diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
+index 145373f0e5dc..2d62c58f9a4c 100644
+--- a/arch/powerpc/platforms/powernv/pci-ioda.c
++++ b/arch/powerpc/platforms/powernv/pci-ioda.c
+@@ -2594,8 +2594,13 @@ static long pnv_pci_ioda2_create_table_userspace(
+ 		int num, __u32 page_shift, __u64 window_size, __u32 levels,
+ 		struct iommu_table **ptbl)
+ {
+-	return pnv_pci_ioda2_create_table(table_group,
++	long ret = pnv_pci_ioda2_create_table(table_group,
+ 			num, page_shift, window_size, levels, true, ptbl);
++
++	if (!ret)
++		(*ptbl)->it_allocated_size = pnv_pci_ioda2_get_table_size(
++				page_shift, window_size, levels);
++	return ret;
+ }
+ 
+ static void pnv_ioda2_take_ownership(struct iommu_table_group *table_group)
+diff --git a/arch/powerpc/platforms/pseries/hotplug-cpu.c b/arch/powerpc/platforms/pseries/hotplug-cpu.c
+index 2f8e62163602..97feb6e79f1a 100644
+--- a/arch/powerpc/platforms/pseries/hotplug-cpu.c
++++ b/arch/powerpc/platforms/pseries/hotplug-cpu.c
+@@ -802,6 +802,25 @@ static int dlpar_cpu_add_by_count(u32 cpus_to_add)
+ 	return rc;
+ }
+ 
++int dlpar_cpu_readd(int cpu)
++{
++	struct device_node *dn;
++	struct device *dev;
++	u32 drc_index;
++	int rc;
++
++	dev = get_cpu_device(cpu);
++	dn = dev->of_node;
++
++	rc = of_property_read_u32(dn, "ibm,my-drc-index", &drc_index);
++
++	rc = dlpar_cpu_remove_by_index(drc_index);
++	if (!rc)
++		rc = dlpar_cpu_add(drc_index);
++
++	return rc;
++}
++
+ int dlpar_cpu(struct pseries_hp_errorlog *hp_elog)
+ {
+ 	u32 count, drc_index;
+diff --git a/arch/powerpc/xmon/ppc-dis.c b/arch/powerpc/xmon/ppc-dis.c
+index 9deea5ee13f6..27f1e6415036 100644
+--- a/arch/powerpc/xmon/ppc-dis.c
++++ b/arch/powerpc/xmon/ppc-dis.c
+@@ -158,7 +158,7 @@ int print_insn_powerpc (unsigned long insn, unsigned long memaddr)
+     dialect |= (PPC_OPCODE_POWER5 | PPC_OPCODE_POWER6 | PPC_OPCODE_POWER7
+ 		| PPC_OPCODE_POWER8 | PPC_OPCODE_POWER9 | PPC_OPCODE_HTM
+ 		| PPC_OPCODE_ALTIVEC | PPC_OPCODE_ALTIVEC2
+-		| PPC_OPCODE_VSX | PPC_OPCODE_VSX3),
++		| PPC_OPCODE_VSX | PPC_OPCODE_VSX3);
+ 
+   /* Get the major opcode of the insn.  */
+   opcode = NULL;
+diff --git a/arch/s390/kernel/perf_cpum_sf.c b/arch/s390/kernel/perf_cpum_sf.c
+index bfabeb1889cc..1266194afb02 100644
+--- a/arch/s390/kernel/perf_cpum_sf.c
++++ b/arch/s390/kernel/perf_cpum_sf.c
+@@ -1600,7 +1600,7 @@ static void aux_sdb_init(unsigned long sdb)
+ 
+ /*
+  * aux_buffer_setup() - Setup AUX buffer for diagnostic mode sampling
+- * @cpu:	On which to allocate, -1 means current
++ * @event:	Event the buffer is setup for, event->cpu == -1 means current
+  * @pages:	Array of pointers to buffer pages passed from perf core
+  * @nr_pages:	Total pages
+  * @snapshot:	Flag for snapshot mode
+@@ -1612,8 +1612,8 @@ static void aux_sdb_init(unsigned long sdb)
+  *
+  * Return the private AUX buffer structure if success or NULL if fails.
+  */
+-static void *aux_buffer_setup(int cpu, void **pages, int nr_pages,
+-			      bool snapshot)
++static void *aux_buffer_setup(struct perf_event *event, void **pages,
++			      int nr_pages, bool snapshot)
+ {
+ 	struct sf_buffer *sfb;
+ 	struct aux_buffer *aux;
+diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile
+index 9b5adae9cc40..e2839b5c246c 100644
+--- a/arch/x86/boot/Makefile
++++ b/arch/x86/boot/Makefile
+@@ -100,7 +100,7 @@ $(obj)/zoffset.h: $(obj)/compressed/vmlinux FORCE
+ AFLAGS_header.o += -I$(objtree)/$(obj)
+ $(obj)/header.o: $(obj)/zoffset.h
+ 
+-LDFLAGS_setup.elf	:= -T
++LDFLAGS_setup.elf	:= -m elf_i386 -T
+ $(obj)/setup.elf: $(src)/setup.ld $(SETUP_OBJS) FORCE
+ 	$(call if_changed,ld)
+ 
+diff --git a/arch/x86/events/intel/bts.c b/arch/x86/events/intel/bts.c
+index a01ef1b0f883..7cdd7b13bbda 100644
+--- a/arch/x86/events/intel/bts.c
++++ b/arch/x86/events/intel/bts.c
+@@ -77,10 +77,12 @@ static size_t buf_size(struct page *page)
+ }
+ 
+ static void *
+-bts_buffer_setup_aux(int cpu, void **pages, int nr_pages, bool overwrite)
++bts_buffer_setup_aux(struct perf_event *event, void **pages,
++		     int nr_pages, bool overwrite)
+ {
+ 	struct bts_buffer *buf;
+ 	struct page *page;
++	int cpu = event->cpu;
+ 	int node = (cpu == -1) ? cpu : cpu_to_node(cpu);
+ 	unsigned long offset;
+ 	size_t size = nr_pages << PAGE_SHIFT;
+diff --git a/arch/x86/events/intel/pt.c b/arch/x86/events/intel/pt.c
+index 9494ca68fd9d..c0e86ff21f81 100644
+--- a/arch/x86/events/intel/pt.c
++++ b/arch/x86/events/intel/pt.c
+@@ -1114,10 +1114,11 @@ static int pt_buffer_init_topa(struct pt_buffer *buf, unsigned long nr_pages,
+  * Return:	Our private PT buffer structure.
+  */
+ static void *
+-pt_buffer_setup_aux(int cpu, void **pages, int nr_pages, bool snapshot)
++pt_buffer_setup_aux(struct perf_event *event, void **pages,
++		    int nr_pages, bool snapshot)
+ {
+ 	struct pt_buffer *buf;
+-	int node, ret;
++	int node, ret, cpu = event->cpu;
+ 
+ 	if (!nr_pages)
+ 		return NULL;
+diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
+index 7abb09e2eeb8..d3f42b6bbdac 100644
+--- a/arch/x86/hyperv/hv_init.c
++++ b/arch/x86/hyperv/hv_init.c
+@@ -406,6 +406,13 @@ void hyperv_cleanup(void)
+ 	/* Reset our OS id */
+ 	wrmsrl(HV_X64_MSR_GUEST_OS_ID, 0);
+ 
++	/*
++	 * Reset hypercall page reference before reset the page,
++	 * let hypercall operations fail safely rather than
++	 * panic the kernel for using invalid hypercall page
++	 */
++	hv_hypercall_pg = NULL;
++
+ 	/* Reset the hypercall page */
+ 	hypercall_msr.as_uint64 = 0;
+ 	wrmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
+diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
+index c1334aaaa78d..f3aed639dccd 100644
+--- a/arch/x86/include/asm/uaccess.h
++++ b/arch/x86/include/asm/uaccess.h
+@@ -76,7 +76,7 @@ static inline bool __chk_range_not_ok(unsigned long addr, unsigned long size, un
+ #endif
+ 
+ /**
+- * access_ok: - Checks if a user space pointer is valid
++ * access_ok - Checks if a user space pointer is valid
+  * @addr: User space pointer to start of block to check
+  * @size: Size of block to check
+  *
+@@ -85,12 +85,12 @@ static inline bool __chk_range_not_ok(unsigned long addr, unsigned long size, un
+  *
+  * Checks if a pointer to a block of memory in user space is valid.
+  *
+- * Returns true (nonzero) if the memory block may be valid, false (zero)
+- * if it is definitely invalid.
+- *
+  * Note that, depending on architecture, this function probably just
+  * checks that the pointer is in the user space range - after calling
+  * this function, memory access functions may still return -EFAULT.
++ *
++ * Return: true (nonzero) if the memory block may be valid, false (zero)
++ * if it is definitely invalid.
+  */
+ #define access_ok(addr, size)					\
+ ({									\
+@@ -135,7 +135,7 @@ extern int __get_user_bad(void);
+ __typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))
+ 
+ /**
+- * get_user: - Get a simple variable from user space.
++ * get_user - Get a simple variable from user space.
+  * @x:   Variable to store result.
+  * @ptr: Source address, in user space.
+  *
+@@ -149,7 +149,7 @@ __typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))
+  * @ptr must have pointer-to-simple-variable type, and the result of
+  * dereferencing @ptr must be assignable to @x without a cast.
+  *
+- * Returns zero on success, or -EFAULT on error.
++ * Return: zero on success, or -EFAULT on error.
+  * On error, the variable @x is set to zero.
+  */
+ /*
+@@ -227,7 +227,7 @@ extern void __put_user_4(void);
+ extern void __put_user_8(void);
+ 
+ /**
+- * put_user: - Write a simple value into user space.
++ * put_user - Write a simple value into user space.
+  * @x:   Value to copy to user space.
+  * @ptr: Destination address, in user space.
+  *
+@@ -241,7 +241,7 @@ extern void __put_user_8(void);
+  * @ptr must have pointer-to-simple-variable type, and @x must be assignable
+  * to the result of dereferencing @ptr.
+  *
+- * Returns zero on success, or -EFAULT on error.
++ * Return: zero on success, or -EFAULT on error.
+  */
+ #define put_user(x, ptr)					\
+ ({								\
+@@ -503,7 +503,7 @@ struct __large_struct { unsigned long buf[100]; };
+ } while (0)
+ 
+ /**
+- * __get_user: - Get a simple variable from user space, with less checking.
++ * __get_user - Get a simple variable from user space, with less checking.
+  * @x:   Variable to store result.
+  * @ptr: Source address, in user space.
+  *
+@@ -520,7 +520,7 @@ struct __large_struct { unsigned long buf[100]; };
+  * Caller must check the pointer with access_ok() before calling this
+  * function.
+  *
+- * Returns zero on success, or -EFAULT on error.
++ * Return: zero on success, or -EFAULT on error.
+  * On error, the variable @x is set to zero.
+  */
+ 
+@@ -528,7 +528,7 @@ struct __large_struct { unsigned long buf[100]; };
+ 	__get_user_nocheck((x), (ptr), sizeof(*(ptr)))
+ 
+ /**
+- * __put_user: - Write a simple value into user space, with less checking.
++ * __put_user - Write a simple value into user space, with less checking.
+  * @x:   Value to copy to user space.
+  * @ptr: Destination address, in user space.
+  *
+@@ -545,7 +545,7 @@ struct __large_struct { unsigned long buf[100]; };
+  * Caller must check the pointer with access_ok() before calling this
+  * function.
+  *
+- * Returns zero on success, or -EFAULT on error.
++ * Return: zero on success, or -EFAULT on error.
+  */
+ 
+ #define __put_user(x, ptr)						\
+diff --git a/arch/x86/kernel/kexec-bzimage64.c b/arch/x86/kernel/kexec-bzimage64.c
+index 53917a3ebf94..1f3b77367948 100644
+--- a/arch/x86/kernel/kexec-bzimage64.c
++++ b/arch/x86/kernel/kexec-bzimage64.c
+@@ -218,6 +218,9 @@ setup_boot_parameters(struct kimage *image, struct boot_params *params,
+ 	params->screen_info.ext_mem_k = 0;
+ 	params->alt_mem_k = 0;
+ 
++	/* Always fill in RSDP: it is either 0 or a valid value */
++	params->acpi_rsdp_addr = boot_params.acpi_rsdp_addr;
++
+ 	/* Default APM info */
+ 	memset(&params->apm_bios_info, 0, sizeof(params->apm_bios_info));
+ 
+@@ -256,7 +259,6 @@ setup_boot_parameters(struct kimage *image, struct boot_params *params,
+ 	setup_efi_state(params, params_load_addr, efi_map_offset, efi_map_sz,
+ 			efi_setup_data_offset);
+ #endif
+-
+ 	/* Setup EDD info */
+ 	memcpy(params->eddbuf, boot_params.eddbuf,
+ 				EDDMAXNR * sizeof(struct edd_info));
+diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
+index 0d618ee634ac..ee3b5c7d662e 100644
+--- a/arch/x86/kernel/vmlinux.lds.S
++++ b/arch/x86/kernel/vmlinux.lds.S
+@@ -401,7 +401,7 @@ SECTIONS
+  * Per-cpu symbols which need to be offset from __per_cpu_load
+  * for the boot processor.
+  */
+-#define INIT_PER_CPU(x) init_per_cpu__##x = x + __per_cpu_load
++#define INIT_PER_CPU(x) init_per_cpu__##x = ABSOLUTE(x) + __per_cpu_load
+ INIT_PER_CPU(gdt_page);
+ INIT_PER_CPU(irq_stack_union);
+ 
+diff --git a/arch/x86/lib/usercopy_32.c b/arch/x86/lib/usercopy_32.c
+index bfd94e7812fc..7d290777246d 100644
+--- a/arch/x86/lib/usercopy_32.c
++++ b/arch/x86/lib/usercopy_32.c
+@@ -54,13 +54,13 @@ do {									\
+ } while (0)
+ 
+ /**
+- * clear_user: - Zero a block of memory in user space.
++ * clear_user - Zero a block of memory in user space.
+  * @to:   Destination address, in user space.
+  * @n:    Number of bytes to zero.
+  *
+  * Zero a block of memory in user space.
+  *
+- * Returns number of bytes that could not be cleared.
++ * Return: number of bytes that could not be cleared.
+  * On success, this will be zero.
+  */
+ unsigned long
+@@ -74,14 +74,14 @@ clear_user(void __user *to, unsigned long n)
+ EXPORT_SYMBOL(clear_user);
+ 
+ /**
+- * __clear_user: - Zero a block of memory in user space, with less checking.
++ * __clear_user - Zero a block of memory in user space, with less checking.
+  * @to:   Destination address, in user space.
+  * @n:    Number of bytes to zero.
+  *
+  * Zero a block of memory in user space.  Caller must check
+  * the specified block with access_ok() before calling this function.
+  *
+- * Returns number of bytes that could not be cleared.
++ * Return: number of bytes that could not be cleared.
+  * On success, this will be zero.
+  */
+ unsigned long
+diff --git a/arch/x86/platform/efi/quirks.c b/arch/x86/platform/efi/quirks.c
+index 17456a1d3f04..6c571ae86947 100644
+--- a/arch/x86/platform/efi/quirks.c
++++ b/arch/x86/platform/efi/quirks.c
+@@ -717,7 +717,7 @@ void efi_recover_from_page_fault(unsigned long phys_addr)
+ 	 * "efi_mm" cannot be used to check if the page fault had occurred
+ 	 * in the firmware context because efi=old_map doesn't use efi_pgd.
+ 	 */
+-	if (efi_rts_work.efi_rts_id == NONE)
++	if (efi_rts_work.efi_rts_id == EFI_NONE)
+ 		return;
+ 
+ 	/*
+@@ -742,7 +742,7 @@ void efi_recover_from_page_fault(unsigned long phys_addr)
+ 	 * because this case occurs *very* rarely and hence could be improved
+ 	 * on a need by basis.
+ 	 */
+-	if (efi_rts_work.efi_rts_id == RESET_SYSTEM) {
++	if (efi_rts_work.efi_rts_id == EFI_RESET_SYSTEM) {
+ 		pr_info("efi_reset_system() buggy! Reboot through BIOS\n");
+ 		machine_real_restart(MRR_BIOS);
+ 		return;
+diff --git a/arch/x86/realmode/rm/Makefile b/arch/x86/realmode/rm/Makefile
+index 4463fa72db94..96cb20de08af 100644
+--- a/arch/x86/realmode/rm/Makefile
++++ b/arch/x86/realmode/rm/Makefile
+@@ -47,7 +47,7 @@ $(obj)/pasyms.h: $(REALMODE_OBJS) FORCE
+ targets += realmode.lds
+ $(obj)/realmode.lds: $(obj)/pasyms.h
+ 
+-LDFLAGS_realmode.elf := --emit-relocs -T
++LDFLAGS_realmode.elf := -m elf_i386 --emit-relocs -T
+ CPPFLAGS_realmode.lds += -P -C -I$(objtree)/$(obj)
+ 
+ targets += realmode.elf
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index cd307767a134..e5ed28629271 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -747,6 +747,7 @@ void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 
+ inc_counter:
+ 	bfqq->weight_counter->num_active++;
++	bfqq->ref++;
+ }
+ 
+ /*
+@@ -771,6 +772,7 @@ void __bfq_weights_tree_remove(struct bfq_data *bfqd,
+ 
+ reset_entity_pointer:
+ 	bfqq->weight_counter = NULL;
++	bfq_put_queue(bfqq);
+ }
+ 
+ /*
+@@ -782,9 +784,6 @@ void bfq_weights_tree_remove(struct bfq_data *bfqd,
+ {
+ 	struct bfq_entity *entity = bfqq->entity.parent;
+ 
+-	__bfq_weights_tree_remove(bfqd, bfqq,
+-				  &bfqd->queue_weights_tree);
+-
+ 	for_each_entity(entity) {
+ 		struct bfq_sched_data *sd = entity->my_sched_data;
+ 
+@@ -818,6 +817,15 @@ void bfq_weights_tree_remove(struct bfq_data *bfqd,
+ 			bfqd->num_groups_with_pending_reqs--;
+ 		}
+ 	}
++
++	/*
++	 * Next function is invoked last, because it causes bfqq to be
++	 * freed if the following holds: bfqq is not in service and
++	 * has no dispatched request. DO NOT use bfqq after the next
++	 * function invocation.
++	 */
++	__bfq_weights_tree_remove(bfqd, bfqq,
++				  &bfqd->queue_weights_tree);
+ }
+ 
+ /*
+@@ -1011,7 +1019,8 @@ bfq_bfqq_resume_state(struct bfq_queue *bfqq, struct bfq_data *bfqd,
+ 
+ static int bfqq_process_refs(struct bfq_queue *bfqq)
+ {
+-	return bfqq->ref - bfqq->allocated - bfqq->entity.on_st;
++	return bfqq->ref - bfqq->allocated - bfqq->entity.on_st -
++		(bfqq->weight_counter != NULL);
+ }
+ 
+ /* Empty burst list and add just bfqq (see comments on bfq_handle_burst) */
+@@ -2224,7 +2233,8 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 
+ 	if (in_service_bfqq && in_service_bfqq != bfqq &&
+ 	    likely(in_service_bfqq != &bfqd->oom_bfqq) &&
+-	    bfq_rq_close_to_sector(io_struct, request, bfqd->last_position) &&
++	    bfq_rq_close_to_sector(io_struct, request,
++				   bfqd->in_serv_last_pos) &&
+ 	    bfqq->entity.parent == in_service_bfqq->entity.parent &&
+ 	    bfq_may_be_close_cooperator(bfqq, in_service_bfqq)) {
+ 		new_bfqq = bfq_setup_merge(bfqq, in_service_bfqq);
+@@ -2764,6 +2774,8 @@ update_rate_and_reset:
+ 	bfq_update_rate_reset(bfqd, rq);
+ update_last_values:
+ 	bfqd->last_position = blk_rq_pos(rq) + blk_rq_sectors(rq);
++	if (RQ_BFQQ(rq) == bfqd->in_service_queue)
++		bfqd->in_serv_last_pos = bfqd->last_position;
+ 	bfqd->last_dispatch = now_ns;
+ }
+ 
+diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
+index 0b02bf302de0..746bd570b85a 100644
+--- a/block/bfq-iosched.h
++++ b/block/bfq-iosched.h
+@@ -537,6 +537,9 @@ struct bfq_data {
+ 	/* on-disk position of the last served request */
+ 	sector_t last_position;
+ 
++	/* position of the last served request for the in-service queue */
++	sector_t in_serv_last_pos;
++
+ 	/* time of last request completion (ns) */
+ 	u64 last_completion;
+ 
+diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
+index 72adbbe975d5..4aab1a8191f0 100644
+--- a/block/bfq-wf2q.c
++++ b/block/bfq-wf2q.c
+@@ -1667,15 +1667,15 @@ void bfq_del_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 
+ 	bfqd->busy_queues--;
+ 
+-	if (!bfqq->dispatched)
+-		bfq_weights_tree_remove(bfqd, bfqq);
+-
+ 	if (bfqq->wr_coeff > 1)
+ 		bfqd->wr_busy_queues--;
+ 
+ 	bfqg_stats_update_dequeue(bfqq_group(bfqq));
+ 
+ 	bfq_deactivate_bfqq(bfqd, bfqq, true, expiration);
++
++	if (!bfqq->dispatched)
++		bfq_weights_tree_remove(bfqd, bfqq);
+ }
+ 
+ /*
+diff --git a/drivers/acpi/acpi_video.c b/drivers/acpi/acpi_video.c
+index f0b52266b3ac..d73afb562ad9 100644
+--- a/drivers/acpi/acpi_video.c
++++ b/drivers/acpi/acpi_video.c
+@@ -2124,21 +2124,29 @@ static int __init intel_opregion_present(void)
+ 	return opregion;
+ }
+ 
++/* Check if the chassis-type indicates there is no builtin LCD panel */
+ static bool dmi_is_desktop(void)
+ {
+ 	const char *chassis_type;
++	unsigned long type;
+ 
+ 	chassis_type = dmi_get_system_info(DMI_CHASSIS_TYPE);
+ 	if (!chassis_type)
+ 		return false;
+ 
+-	if (!strcmp(chassis_type, "3") || /*  3: Desktop */
+-	    !strcmp(chassis_type, "4") || /*  4: Low Profile Desktop */
+-	    !strcmp(chassis_type, "5") || /*  5: Pizza Box */
+-	    !strcmp(chassis_type, "6") || /*  6: Mini Tower */
+-	    !strcmp(chassis_type, "7") || /*  7: Tower */
+-	    !strcmp(chassis_type, "11"))  /* 11: Main Server Chassis */
++	if (kstrtoul(chassis_type, 10, &type) != 0)
++		return false;
++
++	switch (type) {
++	case 0x03: /* Desktop */
++	case 0x04: /* Low Profile Desktop */
++	case 0x05: /* Pizza Box */
++	case 0x06: /* Mini Tower */
++	case 0x07: /* Tower */
++	case 0x10: /* Lunch Box */
++	case 0x11: /* Main Server Chassis */
+ 		return true;
++	}
+ 
+ 	return false;
+ }
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index 2faefdd6f420..9a8d83bc1e75 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -1089,16 +1089,12 @@ static int __loop_clr_fd(struct loop_device *lo, bool release)
+ 		kobject_uevent(&disk_to_dev(bdev->bd_disk)->kobj, KOBJ_CHANGE);
+ 	}
+ 	mapping_set_gfp_mask(filp->f_mapping, gfp);
+-	lo->lo_state = Lo_unbound;
+ 	/* This is safe: open() is still holding a reference. */
+ 	module_put(THIS_MODULE);
+ 	blk_mq_unfreeze_queue(lo->lo_queue);
+ 
+ 	partscan = lo->lo_flags & LO_FLAGS_PARTSCAN && bdev;
+ 	lo_number = lo->lo_number;
+-	lo->lo_flags = 0;
+-	if (!part_shift)
+-		lo->lo_disk->flags |= GENHD_FL_NO_PART_SCAN;
+ 	loop_unprepare_queue(lo);
+ out_unlock:
+ 	mutex_unlock(&loop_ctl_mutex);
+@@ -1120,6 +1116,23 @@ out_unlock:
+ 		/* Device is gone, no point in returning error */
+ 		err = 0;
+ 	}
++
++	/*
++	 * lo->lo_state is set to Lo_unbound here after above partscan has
++	 * finished.
++	 *
++	 * There cannot be anybody else entering __loop_clr_fd() as
++	 * lo->lo_backing_file is already cleared and Lo_rundown state
++	 * protects us from all the other places trying to change the 'lo'
++	 * device.
++	 */
++	mutex_lock(&loop_ctl_mutex);
++	lo->lo_flags = 0;
++	if (!part_shift)
++		lo->lo_disk->flags |= GENHD_FL_NO_PART_SCAN;
++	lo->lo_state = Lo_unbound;
++	mutex_unlock(&loop_ctl_mutex);
++
+ 	/*
+ 	 * Need not hold loop_ctl_mutex to fput backing file.
+ 	 * Calling fput holding loop_ctl_mutex triggers a circular
+diff --git a/drivers/cdrom/cdrom.c b/drivers/cdrom/cdrom.c
+index 614ecdbb4ab7..933268b8d6a5 100644
+--- a/drivers/cdrom/cdrom.c
++++ b/drivers/cdrom/cdrom.c
+@@ -265,6 +265,7 @@
+ /* #define ERRLOGMASK (CD_WARNING|CD_OPEN|CD_COUNT_TRACKS|CD_CLOSE) */
+ /* #define ERRLOGMASK (CD_WARNING|CD_REG_UNREG|CD_DO_IOCTL|CD_OPEN|CD_CLOSE|CD_COUNT_TRACKS) */
+ 
++#include <linux/atomic.h>
+ #include <linux/module.h>
+ #include <linux/fs.h>
+ #include <linux/major.h>
+@@ -3692,9 +3693,9 @@ static struct ctl_table_header *cdrom_sysctl_header;
+ 
+ static void cdrom_sysctl_register(void)
+ {
+-	static int initialized;
++	static atomic_t initialized = ATOMIC_INIT(0);
+ 
+-	if (initialized == 1)
++	if (!atomic_add_unless(&initialized, 1, 1))
+ 		return;
+ 
+ 	cdrom_sysctl_header = register_sysctl_table(cdrom_root_table);
+@@ -3705,8 +3706,6 @@ static void cdrom_sysctl_register(void)
+ 	cdrom_sysctl_settings.debug = debug;
+ 	cdrom_sysctl_settings.lock = lockdoor;
+ 	cdrom_sysctl_settings.check = check_media_type;
+-
+-	initialized = 1;
+ }
+ 
+ static void cdrom_sysctl_unregister(void)
+diff --git a/drivers/char/hpet.c b/drivers/char/hpet.c
+index 4a22b4b41aef..9bffcd37cc7b 100644
+--- a/drivers/char/hpet.c
++++ b/drivers/char/hpet.c
+@@ -377,7 +377,7 @@ static __init int hpet_mmap_enable(char *str)
+ 	pr_info("HPET mmap %s\n", hpet_mmap_enabled ? "enabled" : "disabled");
+ 	return 1;
+ }
+-__setup("hpet_mmap", hpet_mmap_enable);
++__setup("hpet_mmap=", hpet_mmap_enable);
+ 
+ static int hpet_mmap(struct file *file, struct vm_area_struct *vma)
+ {
+diff --git a/drivers/char/hw_random/virtio-rng.c b/drivers/char/hw_random/virtio-rng.c
+index b89df66ea1ae..7abd604e938c 100644
+--- a/drivers/char/hw_random/virtio-rng.c
++++ b/drivers/char/hw_random/virtio-rng.c
+@@ -73,7 +73,7 @@ static int virtio_read(struct hwrng *rng, void *buf, size_t size, bool wait)
+ 
+ 	if (!vi->busy) {
+ 		vi->busy = true;
+-		init_completion(&vi->have_data);
++		reinit_completion(&vi->have_data);
+ 		register_buffer(vi, buf, size);
+ 	}
+ 
+diff --git a/drivers/clk/clk-fractional-divider.c b/drivers/clk/clk-fractional-divider.c
+index 545dceec0bbf..fdfe2e423d15 100644
+--- a/drivers/clk/clk-fractional-divider.c
++++ b/drivers/clk/clk-fractional-divider.c
+@@ -79,7 +79,7 @@ static long clk_fd_round_rate(struct clk_hw *hw, unsigned long rate,
+ 	unsigned long m, n;
+ 	u64 ret;
+ 
+-	if (!rate || rate >= *parent_rate)
++	if (!rate || (!clk_hw_can_set_rate_parent(hw) && rate >= *parent_rate))
+ 		return *parent_rate;
+ 
+ 	if (fd->approximation)
+diff --git a/drivers/clk/meson/meson-aoclk.c b/drivers/clk/meson/meson-aoclk.c
+index f965845917e3..258c8d259ea1 100644
+--- a/drivers/clk/meson/meson-aoclk.c
++++ b/drivers/clk/meson/meson-aoclk.c
+@@ -65,15 +65,20 @@ int meson_aoclkc_probe(struct platform_device *pdev)
+ 		return ret;
+ 	}
+ 
+-	/*
+-	 * Populate regmap and register all clks
+-	 */
+-	for (clkid = 0; clkid < data->num_clks; clkid++) {
++	/* Populate regmap */
++	for (clkid = 0; clkid < data->num_clks; clkid++)
+ 		data->clks[clkid]->map = regmap;
+ 
++	/* Register all clks */
++	for (clkid = 0; clkid < data->hw_data->num; clkid++) {
++		if (!data->hw_data->hws[clkid])
++			continue;
++
+ 		ret = devm_clk_hw_register(dev, data->hw_data->hws[clkid]);
+-		if (ret)
++		if (ret) {
++			dev_err(dev, "Clock registration failed\n");
+ 			return ret;
++		}
+ 	}
+ 
+ 	return devm_of_clk_add_hw_provider(dev, of_clk_hw_onecell_get,
+diff --git a/drivers/clk/rockchip/clk-rk3328.c b/drivers/clk/rockchip/clk-rk3328.c
+index faa94adb2a37..65ab5c2f48b0 100644
+--- a/drivers/clk/rockchip/clk-rk3328.c
++++ b/drivers/clk/rockchip/clk-rk3328.c
+@@ -78,17 +78,17 @@ static struct rockchip_pll_rate_table rk3328_pll_rates[] = {
+ 
+ static struct rockchip_pll_rate_table rk3328_pll_frac_rates[] = {
+ 	/* _mhz, _refdiv, _fbdiv, _postdiv1, _postdiv2, _dsmpd, _frac */
+-	RK3036_PLL_RATE(1016064000, 3, 127, 1, 1, 0, 134217),
++	RK3036_PLL_RATE(1016064000, 3, 127, 1, 1, 0, 134218),
+ 	/* vco = 1016064000 */
+-	RK3036_PLL_RATE(983040000, 24, 983, 1, 1, 0, 671088),
++	RK3036_PLL_RATE(983040000, 24, 983, 1, 1, 0, 671089),
+ 	/* vco = 983040000 */
+-	RK3036_PLL_RATE(491520000, 24, 983, 2, 1, 0, 671088),
++	RK3036_PLL_RATE(491520000, 24, 983, 2, 1, 0, 671089),
+ 	/* vco = 983040000 */
+-	RK3036_PLL_RATE(61440000, 6, 215, 7, 2, 0, 671088),
++	RK3036_PLL_RATE(61440000, 6, 215, 7, 2, 0, 671089),
+ 	/* vco = 860156000 */
+-	RK3036_PLL_RATE(56448000, 12, 451, 4, 4, 0, 9797894),
++	RK3036_PLL_RATE(56448000, 12, 451, 4, 4, 0, 9797895),
+ 	/* vco = 903168000 */
+-	RK3036_PLL_RATE(40960000, 12, 409, 4, 5, 0, 10066329),
++	RK3036_PLL_RATE(40960000, 12, 409, 4, 5, 0, 10066330),
+ 	/* vco = 819200000 */
+ 	{ /* sentinel */ },
+ };
+diff --git a/drivers/clk/ti/clkctrl.c b/drivers/clk/ti/clkctrl.c
+index 40630eb950fc..85d7f301149b 100644
+--- a/drivers/clk/ti/clkctrl.c
++++ b/drivers/clk/ti/clkctrl.c
+@@ -530,7 +530,7 @@ static void __init _ti_omap4_clkctrl_setup(struct device_node *node)
+ 		 * Create default clkdm name, replace _cm from end of parent
+ 		 * node name with _clkdm
+ 		 */
+-		provider->clkdm_name[strlen(provider->clkdm_name) - 5] = 0;
++		provider->clkdm_name[strlen(provider->clkdm_name) - 2] = 0;
+ 	} else {
+ 		provider->clkdm_name = kasprintf(GFP_KERNEL, "%pOFn", node);
+ 		if (!provider->clkdm_name) {
+diff --git a/drivers/cpufreq/acpi-cpufreq.c b/drivers/cpufreq/acpi-cpufreq.c
+index d62fd374d5c7..c72258a44ba4 100644
+--- a/drivers/cpufreq/acpi-cpufreq.c
++++ b/drivers/cpufreq/acpi-cpufreq.c
+@@ -916,8 +916,10 @@ static void __init acpi_cpufreq_boost_init(void)
+ {
+ 	int ret;
+ 
+-	if (!(boot_cpu_has(X86_FEATURE_CPB) || boot_cpu_has(X86_FEATURE_IDA)))
++	if (!(boot_cpu_has(X86_FEATURE_CPB) || boot_cpu_has(X86_FEATURE_IDA))) {
++		pr_debug("Boost capabilities not present in the processor\n");
+ 		return;
++	}
+ 
+ 	acpi_cpufreq_driver.set_boost = set_boost;
+ 	acpi_cpufreq_driver.boost_enabled = boost_state(0);
+diff --git a/drivers/crypto/amcc/crypto4xx_trng.c b/drivers/crypto/amcc/crypto4xx_trng.c
+index 5e63742b0d22..53ab1f140a26 100644
+--- a/drivers/crypto/amcc/crypto4xx_trng.c
++++ b/drivers/crypto/amcc/crypto4xx_trng.c
+@@ -80,8 +80,10 @@ void ppc4xx_trng_probe(struct crypto4xx_core_device *core_dev)
+ 
+ 	/* Find the TRNG device node and map it */
+ 	trng = of_find_matching_node(NULL, ppc4xx_trng_match);
+-	if (!trng || !of_device_is_available(trng))
++	if (!trng || !of_device_is_available(trng)) {
++		of_node_put(trng);
+ 		return;
++	}
+ 
+ 	dev->trng_base = of_iomap(trng, 0);
+ 	of_node_put(trng);
+diff --git a/drivers/crypto/cavium/zip/zip_main.c b/drivers/crypto/cavium/zip/zip_main.c
+index be055b9547f6..6183f9128a8a 100644
+--- a/drivers/crypto/cavium/zip/zip_main.c
++++ b/drivers/crypto/cavium/zip/zip_main.c
+@@ -351,6 +351,7 @@ static struct pci_driver zip_driver = {
+ 
+ static struct crypto_alg zip_comp_deflate = {
+ 	.cra_name		= "deflate",
++	.cra_driver_name	= "deflate-cavium",
+ 	.cra_flags		= CRYPTO_ALG_TYPE_COMPRESS,
+ 	.cra_ctxsize		= sizeof(struct zip_kernel_ctx),
+ 	.cra_priority           = 300,
+@@ -365,6 +366,7 @@ static struct crypto_alg zip_comp_deflate = {
+ 
+ static struct crypto_alg zip_comp_lzs = {
+ 	.cra_name		= "lzs",
++	.cra_driver_name	= "lzs-cavium",
+ 	.cra_flags		= CRYPTO_ALG_TYPE_COMPRESS,
+ 	.cra_ctxsize		= sizeof(struct zip_kernel_ctx),
+ 	.cra_priority           = 300,
+@@ -384,7 +386,7 @@ static struct scomp_alg zip_scomp_deflate = {
+ 	.decompress		= zip_scomp_decompress,
+ 	.base			= {
+ 		.cra_name		= "deflate",
+-		.cra_driver_name	= "deflate-scomp",
++		.cra_driver_name	= "deflate-scomp-cavium",
+ 		.cra_module		= THIS_MODULE,
+ 		.cra_priority           = 300,
+ 	}
+@@ -397,7 +399,7 @@ static struct scomp_alg zip_scomp_lzs = {
+ 	.decompress		= zip_scomp_decompress,
+ 	.base			= {
+ 		.cra_name		= "lzs",
+-		.cra_driver_name	= "lzs-scomp",
++		.cra_driver_name	= "lzs-scomp-cavium",
+ 		.cra_module		= THIS_MODULE,
+ 		.cra_priority           = 300,
+ 	}
+diff --git a/drivers/dma/imx-dma.c b/drivers/dma/imx-dma.c
+index 4a09af3cd546..7b9a7fb28bb9 100644
+--- a/drivers/dma/imx-dma.c
++++ b/drivers/dma/imx-dma.c
+@@ -285,7 +285,7 @@ static inline int imxdma_sg_next(struct imxdma_desc *d)
+ 	struct scatterlist *sg = d->sg;
+ 	unsigned long now;
+ 
+-	now = min(d->len, sg_dma_len(sg));
++	now = min_t(size_t, d->len, sg_dma_len(sg));
+ 	if (d->len != IMX_DMA_LENGTH_LOOP)
+ 		d->len -= now;
+ 
+diff --git a/drivers/dma/qcom/hidma.c b/drivers/dma/qcom/hidma.c
+index 43d4b00b8138..411f91fde734 100644
+--- a/drivers/dma/qcom/hidma.c
++++ b/drivers/dma/qcom/hidma.c
+@@ -138,24 +138,25 @@ static void hidma_process_completed(struct hidma_chan *mchan)
+ 		desc = &mdesc->desc;
+ 		last_cookie = desc->cookie;
+ 
++		llstat = hidma_ll_status(mdma->lldev, mdesc->tre_ch);
++
+ 		spin_lock_irqsave(&mchan->lock, irqflags);
++		if (llstat == DMA_COMPLETE) {
++			mchan->last_success = last_cookie;
++			result.result = DMA_TRANS_NOERROR;
++		} else {
++			result.result = DMA_TRANS_ABORTED;
++		}
++
+ 		dma_cookie_complete(desc);
+ 		spin_unlock_irqrestore(&mchan->lock, irqflags);
+ 
+-		llstat = hidma_ll_status(mdma->lldev, mdesc->tre_ch);
+ 		dmaengine_desc_get_callback(desc, &cb);
+ 
+ 		dma_run_dependencies(desc);
+ 
+ 		spin_lock_irqsave(&mchan->lock, irqflags);
+ 		list_move(&mdesc->node, &mchan->free);
+-
+-		if (llstat == DMA_COMPLETE) {
+-			mchan->last_success = last_cookie;
+-			result.result = DMA_TRANS_NOERROR;
+-		} else
+-			result.result = DMA_TRANS_ABORTED;
+-
+ 		spin_unlock_irqrestore(&mchan->lock, irqflags);
+ 
+ 		dmaengine_desc_callback_invoke(&cb, &result);
+@@ -415,6 +416,7 @@ hidma_prep_dma_memcpy(struct dma_chan *dmach, dma_addr_t dest, dma_addr_t src,
+ 	if (!mdesc)
+ 		return NULL;
+ 
++	mdesc->desc.flags = flags;
+ 	hidma_ll_set_transfer_params(mdma->lldev, mdesc->tre_ch,
+ 				     src, dest, len, flags,
+ 				     HIDMA_TRE_MEMCPY);
+@@ -447,6 +449,7 @@ hidma_prep_dma_memset(struct dma_chan *dmach, dma_addr_t dest, int value,
+ 	if (!mdesc)
+ 		return NULL;
+ 
++	mdesc->desc.flags = flags;
+ 	hidma_ll_set_transfer_params(mdma->lldev, mdesc->tre_ch,
+ 				     value, dest, len, flags,
+ 				     HIDMA_TRE_MEMSET);
+diff --git a/drivers/dma/tegra20-apb-dma.c b/drivers/dma/tegra20-apb-dma.c
+index 9a558e30c461..8219ab88a507 100644
+--- a/drivers/dma/tegra20-apb-dma.c
++++ b/drivers/dma/tegra20-apb-dma.c
+@@ -636,7 +636,10 @@ static void handle_cont_sngl_cycle_dma_done(struct tegra_dma_channel *tdc,
+ 
+ 	sgreq = list_first_entry(&tdc->pending_sg_req, typeof(*sgreq), node);
+ 	dma_desc = sgreq->dma_desc;
+-	dma_desc->bytes_transferred += sgreq->req_len;
++	/* if we dma for long enough the transfer count will wrap */
++	dma_desc->bytes_transferred =
++		(dma_desc->bytes_transferred + sgreq->req_len) %
++		dma_desc->bytes_requested;
+ 
+ 	/* Callback need to be call */
+ 	if (!dma_desc->cb_count)
+diff --git a/drivers/firmware/efi/cper.c b/drivers/firmware/efi/cper.c
+index a7902fccdcfa..6090d25dce85 100644
+--- a/drivers/firmware/efi/cper.c
++++ b/drivers/firmware/efi/cper.c
+@@ -546,19 +546,24 @@ EXPORT_SYMBOL_GPL(cper_estatus_check_header);
+ int cper_estatus_check(const struct acpi_hest_generic_status *estatus)
+ {
+ 	struct acpi_hest_generic_data *gdata;
+-	unsigned int data_len, gedata_len;
++	unsigned int data_len, record_size;
+ 	int rc;
+ 
+ 	rc = cper_estatus_check_header(estatus);
+ 	if (rc)
+ 		return rc;
++
+ 	data_len = estatus->data_length;
+ 
+ 	apei_estatus_for_each_section(estatus, gdata) {
+-		gedata_len = acpi_hest_get_error_length(gdata);
+-		if (gedata_len > data_len - acpi_hest_get_size(gdata))
++		if (sizeof(struct acpi_hest_generic_data) > data_len)
++			return -EINVAL;
++
++		record_size = acpi_hest_get_record_size(gdata);
++		if (record_size > data_len)
+ 			return -EINVAL;
+-		data_len -= acpi_hest_get_record_size(gdata);
++
++		data_len -= record_size;
+ 	}
+ 	if (data_len)
+ 		return -EINVAL;
+diff --git a/drivers/firmware/efi/libstub/arm-stub.c b/drivers/firmware/efi/libstub/arm-stub.c
+index c037c6c5d0b7..04e6ecd72cd9 100644
+--- a/drivers/firmware/efi/libstub/arm-stub.c
++++ b/drivers/firmware/efi/libstub/arm-stub.c
+@@ -367,6 +367,11 @@ void efi_get_virtmap(efi_memory_desc_t *memory_map, unsigned long map_size,
+ 		paddr = in->phys_addr;
+ 		size = in->num_pages * EFI_PAGE_SIZE;
+ 
++		if (novamap()) {
++			in->virt_addr = in->phys_addr;
++			continue;
++		}
++
+ 		/*
+ 		 * Make the mapping compatible with 64k pages: this allows
+ 		 * a 4k page size kernel to kexec a 64k page size kernel and
+diff --git a/drivers/firmware/efi/libstub/efi-stub-helper.c b/drivers/firmware/efi/libstub/efi-stub-helper.c
+index e94975f4655b..442f51c2a53d 100644
+--- a/drivers/firmware/efi/libstub/efi-stub-helper.c
++++ b/drivers/firmware/efi/libstub/efi-stub-helper.c
+@@ -34,6 +34,7 @@ static unsigned long __chunk_size = EFI_READ_CHUNK_SIZE;
+ 
+ static int __section(.data) __nokaslr;
+ static int __section(.data) __quiet;
++static int __section(.data) __novamap;
+ 
+ int __pure nokaslr(void)
+ {
+@@ -43,6 +44,10 @@ int __pure is_quiet(void)
+ {
+ 	return __quiet;
+ }
++int __pure novamap(void)
++{
++	return __novamap;
++}
+ 
+ #define EFI_MMAP_NR_SLACK_SLOTS	8
+ 
+@@ -482,6 +487,11 @@ efi_status_t efi_parse_options(char const *cmdline)
+ 			__chunk_size = -1UL;
+ 		}
+ 
++		if (!strncmp(str, "novamap", 7)) {
++			str += strlen("novamap");
++			__novamap = 1;
++		}
++
+ 		/* Group words together, delimited by "," */
+ 		while (*str && *str != ' ' && *str != ',')
+ 			str++;
+diff --git a/drivers/firmware/efi/libstub/efistub.h b/drivers/firmware/efi/libstub/efistub.h
+index 32799cf039ef..337b52c4702c 100644
+--- a/drivers/firmware/efi/libstub/efistub.h
++++ b/drivers/firmware/efi/libstub/efistub.h
+@@ -27,6 +27,7 @@
+ 
+ extern int __pure nokaslr(void);
+ extern int __pure is_quiet(void);
++extern int __pure novamap(void);
+ 
+ #define pr_efi(sys_table, msg)		do {				\
+ 	if (!is_quiet()) efi_printk(sys_table, "EFI stub: "msg);	\
+diff --git a/drivers/firmware/efi/libstub/fdt.c b/drivers/firmware/efi/libstub/fdt.c
+index 0dc7b4987cc2..f8f89f995e9d 100644
+--- a/drivers/firmware/efi/libstub/fdt.c
++++ b/drivers/firmware/efi/libstub/fdt.c
+@@ -327,6 +327,9 @@ efi_status_t allocate_new_fdt_and_exit_boot(efi_system_table_t *sys_table,
+ 	if (status == EFI_SUCCESS) {
+ 		efi_set_virtual_address_map_t *svam;
+ 
++		if (novamap())
++			return EFI_SUCCESS;
++
+ 		/* Install the new virtual address map */
+ 		svam = sys_table->runtime->set_virtual_address_map;
+ 		status = svam(runtime_entry_count * desc_size, desc_size,
+diff --git a/drivers/firmware/efi/memattr.c b/drivers/firmware/efi/memattr.c
+index 8986757eafaf..aac972b056d9 100644
+--- a/drivers/firmware/efi/memattr.c
++++ b/drivers/firmware/efi/memattr.c
+@@ -94,7 +94,7 @@ static bool entry_is_valid(const efi_memory_desc_t *in, efi_memory_desc_t *out)
+ 
+ 		if (!(md->attribute & EFI_MEMORY_RUNTIME))
+ 			continue;
+-		if (md->virt_addr == 0) {
++		if (md->virt_addr == 0 && md->phys_addr != 0) {
+ 			/* no virtual mapping has been installed by the stub */
+ 			break;
+ 		}
+diff --git a/drivers/firmware/efi/runtime-wrappers.c b/drivers/firmware/efi/runtime-wrappers.c
+index e2abfdb5cee6..698745c249e8 100644
+--- a/drivers/firmware/efi/runtime-wrappers.c
++++ b/drivers/firmware/efi/runtime-wrappers.c
+@@ -85,7 +85,7 @@ struct efi_runtime_work efi_rts_work;
+ 		pr_err("Failed to queue work to efi_rts_wq.\n");	\
+ 									\
+ exit:									\
+-	efi_rts_work.efi_rts_id = NONE;					\
++	efi_rts_work.efi_rts_id = EFI_NONE;				\
+ 	efi_rts_work.status;						\
+ })
+ 
+@@ -175,50 +175,50 @@ static void efi_call_rts(struct work_struct *work)
+ 	arg5 = efi_rts_work.arg5;
+ 
+ 	switch (efi_rts_work.efi_rts_id) {
+-	case GET_TIME:
++	case EFI_GET_TIME:
+ 		status = efi_call_virt(get_time, (efi_time_t *)arg1,
+ 				       (efi_time_cap_t *)arg2);
+ 		break;
+-	case SET_TIME:
++	case EFI_SET_TIME:
+ 		status = efi_call_virt(set_time, (efi_time_t *)arg1);
+ 		break;
+-	case GET_WAKEUP_TIME:
++	case EFI_GET_WAKEUP_TIME:
+ 		status = efi_call_virt(get_wakeup_time, (efi_bool_t *)arg1,
+ 				       (efi_bool_t *)arg2, (efi_time_t *)arg3);
+ 		break;
+-	case SET_WAKEUP_TIME:
++	case EFI_SET_WAKEUP_TIME:
+ 		status = efi_call_virt(set_wakeup_time, *(efi_bool_t *)arg1,
+ 				       (efi_time_t *)arg2);
+ 		break;
+-	case GET_VARIABLE:
++	case EFI_GET_VARIABLE:
+ 		status = efi_call_virt(get_variable, (efi_char16_t *)arg1,
+ 				       (efi_guid_t *)arg2, (u32 *)arg3,
+ 				       (unsigned long *)arg4, (void *)arg5);
+ 		break;
+-	case GET_NEXT_VARIABLE:
++	case EFI_GET_NEXT_VARIABLE:
+ 		status = efi_call_virt(get_next_variable, (unsigned long *)arg1,
+ 				       (efi_char16_t *)arg2,
+ 				       (efi_guid_t *)arg3);
+ 		break;
+-	case SET_VARIABLE:
++	case EFI_SET_VARIABLE:
+ 		status = efi_call_virt(set_variable, (efi_char16_t *)arg1,
+ 				       (efi_guid_t *)arg2, *(u32 *)arg3,
+ 				       *(unsigned long *)arg4, (void *)arg5);
+ 		break;
+-	case QUERY_VARIABLE_INFO:
++	case EFI_QUERY_VARIABLE_INFO:
+ 		status = efi_call_virt(query_variable_info, *(u32 *)arg1,
+ 				       (u64 *)arg2, (u64 *)arg3, (u64 *)arg4);
+ 		break;
+-	case GET_NEXT_HIGH_MONO_COUNT:
++	case EFI_GET_NEXT_HIGH_MONO_COUNT:
+ 		status = efi_call_virt(get_next_high_mono_count, (u32 *)arg1);
+ 		break;
+-	case UPDATE_CAPSULE:
++	case EFI_UPDATE_CAPSULE:
+ 		status = efi_call_virt(update_capsule,
+ 				       (efi_capsule_header_t **)arg1,
+ 				       *(unsigned long *)arg2,
+ 				       *(unsigned long *)arg3);
+ 		break;
+-	case QUERY_CAPSULE_CAPS:
++	case EFI_QUERY_CAPSULE_CAPS:
+ 		status = efi_call_virt(query_capsule_caps,
+ 				       (efi_capsule_header_t **)arg1,
+ 				       *(unsigned long *)arg2, (u64 *)arg3,
+@@ -242,7 +242,7 @@ static efi_status_t virt_efi_get_time(efi_time_t *tm, efi_time_cap_t *tc)
+ 
+ 	if (down_interruptible(&efi_runtime_lock))
+ 		return EFI_ABORTED;
+-	status = efi_queue_work(GET_TIME, tm, tc, NULL, NULL, NULL);
++	status = efi_queue_work(EFI_GET_TIME, tm, tc, NULL, NULL, NULL);
+ 	up(&efi_runtime_lock);
+ 	return status;
+ }
+@@ -253,7 +253,7 @@ static efi_status_t virt_efi_set_time(efi_time_t *tm)
+ 
+ 	if (down_interruptible(&efi_runtime_lock))
+ 		return EFI_ABORTED;
+-	status = efi_queue_work(SET_TIME, tm, NULL, NULL, NULL, NULL);
++	status = efi_queue_work(EFI_SET_TIME, tm, NULL, NULL, NULL, NULL);
+ 	up(&efi_runtime_lock);
+ 	return status;
+ }
+@@ -266,7 +266,7 @@ static efi_status_t virt_efi_get_wakeup_time(efi_bool_t *enabled,
+ 
+ 	if (down_interruptible(&efi_runtime_lock))
+ 		return EFI_ABORTED;
+-	status = efi_queue_work(GET_WAKEUP_TIME, enabled, pending, tm, NULL,
++	status = efi_queue_work(EFI_GET_WAKEUP_TIME, enabled, pending, tm, NULL,
+ 				NULL);
+ 	up(&efi_runtime_lock);
+ 	return status;
+@@ -278,7 +278,7 @@ static efi_status_t virt_efi_set_wakeup_time(efi_bool_t enabled, efi_time_t *tm)
+ 
+ 	if (down_interruptible(&efi_runtime_lock))
+ 		return EFI_ABORTED;
+-	status = efi_queue_work(SET_WAKEUP_TIME, &enabled, tm, NULL, NULL,
++	status = efi_queue_work(EFI_SET_WAKEUP_TIME, &enabled, tm, NULL, NULL,
+ 				NULL);
+ 	up(&efi_runtime_lock);
+ 	return status;
+@@ -294,7 +294,7 @@ static efi_status_t virt_efi_get_variable(efi_char16_t *name,
+ 
+ 	if (down_interruptible(&efi_runtime_lock))
+ 		return EFI_ABORTED;
+-	status = efi_queue_work(GET_VARIABLE, name, vendor, attr, data_size,
++	status = efi_queue_work(EFI_GET_VARIABLE, name, vendor, attr, data_size,
+ 				data);
+ 	up(&efi_runtime_lock);
+ 	return status;
+@@ -308,7 +308,7 @@ static efi_status_t virt_efi_get_next_variable(unsigned long *name_size,
+ 
+ 	if (down_interruptible(&efi_runtime_lock))
+ 		return EFI_ABORTED;
+-	status = efi_queue_work(GET_NEXT_VARIABLE, name_size, name, vendor,
++	status = efi_queue_work(EFI_GET_NEXT_VARIABLE, name_size, name, vendor,
+ 				NULL, NULL);
+ 	up(&efi_runtime_lock);
+ 	return status;
+@@ -324,7 +324,7 @@ static efi_status_t virt_efi_set_variable(efi_char16_t *name,
+ 
+ 	if (down_interruptible(&efi_runtime_lock))
+ 		return EFI_ABORTED;
+-	status = efi_queue_work(SET_VARIABLE, name, vendor, &attr, &data_size,
++	status = efi_queue_work(EFI_SET_VARIABLE, name, vendor, &attr, &data_size,
+ 				data);
+ 	up(&efi_runtime_lock);
+ 	return status;
+@@ -359,7 +359,7 @@ static efi_status_t virt_efi_query_variable_info(u32 attr,
+ 
+ 	if (down_interruptible(&efi_runtime_lock))
+ 		return EFI_ABORTED;
+-	status = efi_queue_work(QUERY_VARIABLE_INFO, &attr, storage_space,
++	status = efi_queue_work(EFI_QUERY_VARIABLE_INFO, &attr, storage_space,
+ 				remaining_space, max_variable_size, NULL);
+ 	up(&efi_runtime_lock);
+ 	return status;
+@@ -391,7 +391,7 @@ static efi_status_t virt_efi_get_next_high_mono_count(u32 *count)
+ 
+ 	if (down_interruptible(&efi_runtime_lock))
+ 		return EFI_ABORTED;
+-	status = efi_queue_work(GET_NEXT_HIGH_MONO_COUNT, count, NULL, NULL,
++	status = efi_queue_work(EFI_GET_NEXT_HIGH_MONO_COUNT, count, NULL, NULL,
+ 				NULL, NULL);
+ 	up(&efi_runtime_lock);
+ 	return status;
+@@ -407,7 +407,7 @@ static void virt_efi_reset_system(int reset_type,
+ 			"could not get exclusive access to the firmware\n");
+ 		return;
+ 	}
+-	efi_rts_work.efi_rts_id = RESET_SYSTEM;
++	efi_rts_work.efi_rts_id = EFI_RESET_SYSTEM;
+ 	__efi_call_virt(reset_system, reset_type, status, data_size, data);
+ 	up(&efi_runtime_lock);
+ }
+@@ -423,7 +423,7 @@ static efi_status_t virt_efi_update_capsule(efi_capsule_header_t **capsules,
+ 
+ 	if (down_interruptible(&efi_runtime_lock))
+ 		return EFI_ABORTED;
+-	status = efi_queue_work(UPDATE_CAPSULE, capsules, &count, &sg_list,
++	status = efi_queue_work(EFI_UPDATE_CAPSULE, capsules, &count, &sg_list,
+ 				NULL, NULL);
+ 	up(&efi_runtime_lock);
+ 	return status;
+@@ -441,7 +441,7 @@ static efi_status_t virt_efi_query_capsule_caps(efi_capsule_header_t **capsules,
+ 
+ 	if (down_interruptible(&efi_runtime_lock))
+ 		return EFI_ABORTED;
+-	status = efi_queue_work(QUERY_CAPSULE_CAPS, capsules, &count,
++	status = efi_queue_work(EFI_QUERY_CAPSULE_CAPS, capsules, &count,
+ 				max_size, reset_type, NULL);
+ 	up(&efi_runtime_lock);
+ 	return status;
+diff --git a/drivers/gpio/gpio-omap.c b/drivers/gpio/gpio-omap.c
+index f4e9921fa966..7f33024b6d83 100644
+--- a/drivers/gpio/gpio-omap.c
++++ b/drivers/gpio/gpio-omap.c
+@@ -883,14 +883,16 @@ static void omap_gpio_unmask_irq(struct irq_data *d)
+ 	if (trigger)
+ 		omap_set_gpio_triggering(bank, offset, trigger);
+ 
+-	/* For level-triggered GPIOs, the clearing must be done after
+-	 * the HW source is cleared, thus after the handler has run */
+-	if (bank->level_mask & BIT(offset)) {
+-		omap_set_gpio_irqenable(bank, offset, 0);
++	omap_set_gpio_irqenable(bank, offset, 1);
++
++	/*
++	 * For level-triggered GPIOs, clearing must be done after the source
++	 * is cleared, thus after the handler has run. OMAP4 needs this done
++	 * after enabing the interrupt to clear the wakeup status.
++	 */
++	if (bank->level_mask & BIT(offset))
+ 		omap_clear_gpio_irqstatus(bank, offset);
+-	}
+ 
+-	omap_set_gpio_irqenable(bank, offset, 1);
+ 	raw_spin_unlock_irqrestore(&bank->lock, flags);
+ }
+ 
+diff --git a/drivers/gpio/gpiolib-of.c b/drivers/gpio/gpiolib-of.c
+index a6e1891217e2..a1dd2f1c0d02 100644
+--- a/drivers/gpio/gpiolib-of.c
++++ b/drivers/gpio/gpiolib-of.c
+@@ -86,7 +86,8 @@ static void of_gpio_flags_quirks(struct device_node *np,
+ 	if (IS_ENABLED(CONFIG_REGULATOR) &&
+ 	    (of_device_is_compatible(np, "regulator-fixed") ||
+ 	     of_device_is_compatible(np, "reg-fixed-voltage") ||
+-	     of_device_is_compatible(np, "regulator-gpio"))) {
++	     (of_device_is_compatible(np, "regulator-gpio") &&
++	      strcmp(propname, "enable-gpio") == 0))) {
+ 		/*
+ 		 * The regulator GPIO handles are specified such that the
+ 		 * presence or absence of "enable-active-high" solely controls
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 636d14a60952..83c8a0407537 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -886,6 +886,7 @@ static void emulated_link_detect(struct dc_link *link)
+ 		return;
+ 	}
+ 
++	/* dc_sink_create returns a new reference */
+ 	link->local_sink = sink;
+ 
+ 	edid_status = dm_helpers_read_local_edid(
+@@ -952,6 +953,8 @@ static int dm_resume(void *handle)
+ 		if (aconnector->fake_enable && aconnector->dc_link->local_sink)
+ 			aconnector->fake_enable = false;
+ 
++		if (aconnector->dc_sink)
++			dc_sink_release(aconnector->dc_sink);
+ 		aconnector->dc_sink = NULL;
+ 		amdgpu_dm_update_connector_after_detect(aconnector);
+ 		mutex_unlock(&aconnector->hpd_lock);
+@@ -1061,6 +1064,8 @@ amdgpu_dm_update_connector_after_detect(struct amdgpu_dm_connector *aconnector)
+ 
+ 
+ 	sink = aconnector->dc_link->local_sink;
++	if (sink)
++		dc_sink_retain(sink);
+ 
+ 	/*
+ 	 * Edid mgmt connector gets first update only in mode_valid hook and then
+@@ -1085,21 +1090,24 @@ amdgpu_dm_update_connector_after_detect(struct amdgpu_dm_connector *aconnector)
+ 				 * to it anymore after disconnect, so on next crtc to connector
+ 				 * reshuffle by UMD we will get into unwanted dc_sink release
+ 				 */
+-				if (aconnector->dc_sink != aconnector->dc_em_sink)
+-					dc_sink_release(aconnector->dc_sink);
++				dc_sink_release(aconnector->dc_sink);
+ 			}
+ 			aconnector->dc_sink = sink;
++			dc_sink_retain(aconnector->dc_sink);
+ 			amdgpu_dm_update_freesync_caps(connector,
+ 					aconnector->edid);
+ 		} else {
+ 			amdgpu_dm_update_freesync_caps(connector, NULL);
+-			if (!aconnector->dc_sink)
++			if (!aconnector->dc_sink) {
+ 				aconnector->dc_sink = aconnector->dc_em_sink;
+-			else if (aconnector->dc_sink != aconnector->dc_em_sink)
+ 				dc_sink_retain(aconnector->dc_sink);
++			}
+ 		}
+ 
+ 		mutex_unlock(&dev->mode_config.mutex);
++
++		if (sink)
++			dc_sink_release(sink);
+ 		return;
+ 	}
+ 
+@@ -1107,8 +1115,10 @@ amdgpu_dm_update_connector_after_detect(struct amdgpu_dm_connector *aconnector)
+ 	 * TODO: temporary guard to look for proper fix
+ 	 * if this sink is MST sink, we should not do anything
+ 	 */
+-	if (sink && sink->sink_signal == SIGNAL_TYPE_DISPLAY_PORT_MST)
++	if (sink && sink->sink_signal == SIGNAL_TYPE_DISPLAY_PORT_MST) {
++		dc_sink_release(sink);
+ 		return;
++	}
+ 
+ 	if (aconnector->dc_sink == sink) {
+ 		/*
+@@ -1117,6 +1127,8 @@ amdgpu_dm_update_connector_after_detect(struct amdgpu_dm_connector *aconnector)
+ 		 */
+ 		DRM_DEBUG_DRIVER("DCHPD: connector_id=%d: dc_sink didn't change.\n",
+ 				aconnector->connector_id);
++		if (sink)
++			dc_sink_release(sink);
+ 		return;
+ 	}
+ 
+@@ -1138,6 +1150,7 @@ amdgpu_dm_update_connector_after_detect(struct amdgpu_dm_connector *aconnector)
+ 			amdgpu_dm_update_freesync_caps(connector, NULL);
+ 
+ 		aconnector->dc_sink = sink;
++		dc_sink_retain(aconnector->dc_sink);
+ 		if (sink->dc_edid.length == 0) {
+ 			aconnector->edid = NULL;
+ 			drm_dp_cec_unset_edid(&aconnector->dm_dp_aux.aux);
+@@ -1158,11 +1171,15 @@ amdgpu_dm_update_connector_after_detect(struct amdgpu_dm_connector *aconnector)
+ 		amdgpu_dm_update_freesync_caps(connector, NULL);
+ 		drm_connector_update_edid_property(connector, NULL);
+ 		aconnector->num_modes = 0;
++		dc_sink_release(aconnector->dc_sink);
+ 		aconnector->dc_sink = NULL;
+ 		aconnector->edid = NULL;
+ 	}
+ 
+ 	mutex_unlock(&dev->mode_config.mutex);
++
++	if (sink)
++		dc_sink_release(sink);
+ }
+ 
+ static void handle_hpd_irq(void *param)
+@@ -2908,6 +2925,7 @@ create_stream_for_sink(struct amdgpu_dm_connector *aconnector,
+ 		}
+ 	} else {
+ 		sink = aconnector->dc_sink;
++		dc_sink_retain(sink);
+ 	}
+ 
+ 	stream = dc_create_stream_for_sink(sink);
+@@ -2974,8 +2992,7 @@ create_stream_for_sink(struct amdgpu_dm_connector *aconnector,
+ 		stream->ignore_msa_timing_param = true;
+ 
+ finish:
+-	if (sink && sink->sink_signal == SIGNAL_TYPE_VIRTUAL && aconnector->base.force != DRM_FORCE_ON)
+-		dc_sink_release(sink);
++	dc_sink_release(sink);
+ 
+ 	return stream;
+ }
+@@ -3233,6 +3250,14 @@ static void amdgpu_dm_connector_destroy(struct drm_connector *connector)
+ 		dm->backlight_dev = NULL;
+ 	}
+ #endif
++
++	if (aconnector->dc_em_sink)
++		dc_sink_release(aconnector->dc_em_sink);
++	aconnector->dc_em_sink = NULL;
++	if (aconnector->dc_sink)
++		dc_sink_release(aconnector->dc_sink);
++	aconnector->dc_sink = NULL;
++
+ 	drm_dp_cec_unregister_connector(&aconnector->dm_dp_aux.aux);
+ 	drm_connector_unregister(connector);
+ 	drm_connector_cleanup(connector);
+@@ -3330,10 +3355,12 @@ static void create_eml_sink(struct amdgpu_dm_connector *aconnector)
+ 		(edid->extensions + 1) * EDID_LENGTH,
+ 		&init_params);
+ 
+-	if (aconnector->base.force == DRM_FORCE_ON)
++	if (aconnector->base.force == DRM_FORCE_ON) {
+ 		aconnector->dc_sink = aconnector->dc_link->local_sink ?
+ 		aconnector->dc_link->local_sink :
+ 		aconnector->dc_em_sink;
++		dc_sink_retain(aconnector->dc_sink);
++	}
+ }
+ 
+ static void handle_edid_mgmt(struct amdgpu_dm_connector *aconnector)
+@@ -4948,7 +4975,8 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
+ static void amdgpu_dm_crtc_copy_transient_flags(struct drm_crtc_state *crtc_state,
+ 						struct dc_stream_state *stream_state)
+ {
+-	stream_state->mode_changed = crtc_state->mode_changed;
++	stream_state->mode_changed =
++		crtc_state->mode_changed || crtc_state->active_changed;
+ }
+ 
+ static int amdgpu_dm_atomic_commit(struct drm_device *dev,
+@@ -4969,10 +4997,22 @@ static int amdgpu_dm_atomic_commit(struct drm_device *dev,
+ 	 */
+ 	for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
+ 		struct dm_crtc_state *dm_old_crtc_state = to_dm_crtc_state(old_crtc_state);
++		struct dm_crtc_state *dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
+ 		struct amdgpu_crtc *acrtc = to_amdgpu_crtc(crtc);
+ 
+-		if (drm_atomic_crtc_needs_modeset(new_crtc_state) && dm_old_crtc_state->stream)
++		if (drm_atomic_crtc_needs_modeset(new_crtc_state)
++		    && dm_old_crtc_state->stream) {
++			/*
++			 * CRC capture was enabled but not disabled.
++			 * Release the vblank reference.
++			 */
++			if (dm_new_crtc_state->crc_enabled) {
++				drm_crtc_vblank_put(crtc);
++				dm_new_crtc_state->crc_enabled = false;
++			}
++
+ 			manage_dm_interrupts(adev, acrtc, false);
++		}
+ 	}
+ 	/*
+ 	 * Add check here for SoC's that support hardware cursor plane, to
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
+index f088ac585978..26b651148c67 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
+@@ -66,6 +66,7 @@ int amdgpu_dm_crtc_set_crc_source(struct drm_crtc *crtc, const char *src_name)
+ {
+ 	struct dm_crtc_state *crtc_state = to_dm_crtc_state(crtc->state);
+ 	struct dc_stream_state *stream_state = crtc_state->stream;
++	bool enable;
+ 
+ 	enum amdgpu_dm_pipe_crc_source source = dm_parse_crc_source(src_name);
+ 
+@@ -80,28 +81,27 @@ int amdgpu_dm_crtc_set_crc_source(struct drm_crtc *crtc, const char *src_name)
+ 		return -EINVAL;
+ 	}
+ 
++	enable = (source == AMDGPU_DM_PIPE_CRC_SOURCE_AUTO);
++
++	if (!dc_stream_configure_crc(stream_state->ctx->dc, stream_state,
++				     enable, enable))
++		return -EINVAL;
++
+ 	/* When enabling CRC, we should also disable dithering. */
+-	if (source == AMDGPU_DM_PIPE_CRC_SOURCE_AUTO) {
+-		if (dc_stream_configure_crc(stream_state->ctx->dc,
+-					    stream_state,
+-					    true, true)) {
+-			crtc_state->crc_enabled = true;
+-			dc_stream_set_dither_option(stream_state,
+-						    DITHER_OPTION_TRUN8);
+-		}
+-		else
+-			return -EINVAL;
+-	} else {
+-		if (dc_stream_configure_crc(stream_state->ctx->dc,
+-					    stream_state,
+-					    false, false)) {
+-			crtc_state->crc_enabled = false;
+-			dc_stream_set_dither_option(stream_state,
+-						    DITHER_OPTION_DEFAULT);
+-		}
+-		else
+-			return -EINVAL;
+-	}
++	dc_stream_set_dither_option(stream_state,
++				    enable ? DITHER_OPTION_TRUN8
++					   : DITHER_OPTION_DEFAULT);
++
++	/*
++	 * Reading the CRC requires the vblank interrupt handler to be
++	 * enabled. Keep a reference until CRC capture stops.
++	 */
++	if (!crtc_state->crc_enabled && enable)
++		drm_crtc_vblank_get(crtc);
++	else if (crtc_state->crc_enabled && !enable)
++		drm_crtc_vblank_put(crtc);
++
++	crtc_state->crc_enabled = enable;
+ 
+ 	/* Reset crc_skipped on dm state */
+ 	crtc_state->crc_skip_count = 0;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index 1b0d209d8367..3b95a637b508 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -239,6 +239,7 @@ static int dm_dp_mst_get_modes(struct drm_connector *connector)
+ 			&init_params);
+ 
+ 		dc_sink->priv = aconnector;
++		/* dc_link_add_remote_sink returns a new reference */
+ 		aconnector->dc_sink = dc_sink;
+ 
+ 		if (aconnector->dc_sink)
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 5fd52094d459..1f92e7e8e3d3 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -1078,6 +1078,9 @@ static enum dc_status dc_commit_state_no_check(struct dc *dc, struct dc_state *c
+ 	/* pplib is notified if disp_num changed */
+ 	dc->hwss.optimize_bandwidth(dc, context);
+ 
++	for (i = 0; i < context->stream_count; i++)
++		context->streams[i]->mode_changed = false;
++
+ 	dc_release_state(dc->current_state);
+ 
+ 	dc->current_state = context;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+index b0265dbebd4c..583eb367850f 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+@@ -792,6 +792,7 @@ bool dc_link_detect(struct dc_link *link, enum dc_detect_reason reason)
+ 		sink->dongle_max_pix_clk = sink_caps.max_hdmi_pixel_clock;
+ 		sink->converter_disable_audio = converter_disable_audio;
+ 
++		/* dc_sink_create returns a new reference */
+ 		link->local_sink = sink;
+ 
+ 		edid_status = dm_helpers_read_local_edid(
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+index 41883c981789..a684b38332ac 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+@@ -2334,9 +2334,10 @@ static void dcn10_apply_ctx_for_surface(
+ 			}
+ 		}
+ 
+-		if (!pipe_ctx->plane_state &&
+-			old_pipe_ctx->plane_state &&
+-			old_pipe_ctx->stream_res.tg == tg) {
++		if ((!pipe_ctx->plane_state ||
++		     pipe_ctx->stream_res.tg != old_pipe_ctx->stream_res.tg) &&
++		    old_pipe_ctx->plane_state &&
++		    old_pipe_ctx->stream_res.tg == tg) {
+ 
+ 			dc->hwss.plane_atomic_disconnect(dc, old_pipe_ctx);
+ 			removed_pipe[i] = true;
+diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
+index 529414556962..1a244c53252c 100644
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -3286,6 +3286,7 @@ static int drm_dp_mst_i2c_xfer(struct i2c_adapter *adapter, struct i2c_msg *msgs
+ 		msg.u.i2c_read.transactions[i].i2c_dev_id = msgs[i].addr;
+ 		msg.u.i2c_read.transactions[i].num_bytes = msgs[i].len;
+ 		msg.u.i2c_read.transactions[i].bytes = msgs[i].buf;
++		msg.u.i2c_read.transactions[i].no_stop_bit = !(msgs[i].flags & I2C_M_STOP);
+ 	}
+ 	msg.u.i2c_read.read_i2c_device_id = msgs[num - 1].addr;
+ 	msg.u.i2c_read.num_bytes_read = msgs[num - 1].len;
+diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
+index 70fc8e356b18..edd8cb497f3b 100644
+--- a/drivers/gpu/drm/drm_fb_helper.c
++++ b/drivers/gpu/drm/drm_fb_helper.c
+@@ -2891,7 +2891,7 @@ int drm_fb_helper_fbdev_setup(struct drm_device *dev,
+ 	return 0;
+ 
+ err_drm_fb_helper_fini:
+-	drm_fb_helper_fini(fb_helper);
++	drm_fb_helper_fbdev_teardown(dev);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/gpu/drm/drm_mode_object.c b/drivers/gpu/drm/drm_mode_object.c
+index 004191d01772..15b919f90c5a 100644
+--- a/drivers/gpu/drm/drm_mode_object.c
++++ b/drivers/gpu/drm/drm_mode_object.c
+@@ -465,6 +465,7 @@ static int set_property_atomic(struct drm_mode_object *obj,
+ 
+ 	drm_modeset_acquire_init(&ctx, 0);
+ 	state->acquire_ctx = &ctx;
++
+ retry:
+ 	if (prop == state->dev->mode_config.dpms_property) {
+ 		if (obj->type != DRM_MODE_OBJECT_CONNECTOR) {
+diff --git a/drivers/gpu/drm/drm_plane.c b/drivers/gpu/drm/drm_plane.c
+index 5f650d8fc66b..4cfb56893b7f 100644
+--- a/drivers/gpu/drm/drm_plane.c
++++ b/drivers/gpu/drm/drm_plane.c
+@@ -220,6 +220,9 @@ int drm_universal_plane_init(struct drm_device *dev, struct drm_plane *plane,
+ 			format_modifier_count++;
+ 	}
+ 
++	if (format_modifier_count)
++		config->allow_fb_modifiers = true;
++
+ 	plane->modifier_count = format_modifier_count;
+ 	plane->modifiers = kmalloc_array(format_modifier_count,
+ 					 sizeof(format_modifiers[0]),
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
+index cb307a2abf06..7316b4ab1b85 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
+@@ -23,11 +23,14 @@ struct dpu_mdss {
+ 	struct dpu_irq_controller irq_controller;
+ };
+ 
+-static irqreturn_t dpu_mdss_irq(int irq, void *arg)
++static void dpu_mdss_irq(struct irq_desc *desc)
+ {
+-	struct dpu_mdss *dpu_mdss = arg;
++	struct dpu_mdss *dpu_mdss = irq_desc_get_handler_data(desc);
++	struct irq_chip *chip = irq_desc_get_chip(desc);
+ 	u32 interrupts;
+ 
++	chained_irq_enter(chip, desc);
++
+ 	interrupts = readl_relaxed(dpu_mdss->mmio + HW_INTR_STATUS);
+ 
+ 	while (interrupts) {
+@@ -39,20 +42,20 @@ static irqreturn_t dpu_mdss_irq(int irq, void *arg)
+ 					   hwirq);
+ 		if (mapping == 0) {
+ 			DRM_ERROR("couldn't find irq mapping for %lu\n", hwirq);
+-			return IRQ_NONE;
++			break;
+ 		}
+ 
+ 		rc = generic_handle_irq(mapping);
+ 		if (rc < 0) {
+ 			DRM_ERROR("handle irq fail: irq=%lu mapping=%u rc=%d\n",
+ 				  hwirq, mapping, rc);
+-			return IRQ_NONE;
++			break;
+ 		}
+ 
+ 		interrupts &= ~(1 << hwirq);
+ 	}
+ 
+-	return IRQ_HANDLED;
++	chained_irq_exit(chip, desc);
+ }
+ 
+ static void dpu_mdss_irq_mask(struct irq_data *irqd)
+@@ -83,16 +86,16 @@ static struct irq_chip dpu_mdss_irq_chip = {
+ 	.irq_unmask = dpu_mdss_irq_unmask,
+ };
+ 
++static struct lock_class_key dpu_mdss_lock_key, dpu_mdss_request_key;
++
+ static int dpu_mdss_irqdomain_map(struct irq_domain *domain,
+ 		unsigned int irq, irq_hw_number_t hwirq)
+ {
+ 	struct dpu_mdss *dpu_mdss = domain->host_data;
+-	int ret;
+ 
++	irq_set_lockdep_class(irq, &dpu_mdss_lock_key, &dpu_mdss_request_key);
+ 	irq_set_chip_and_handler(irq, &dpu_mdss_irq_chip, handle_level_irq);
+-	ret = irq_set_chip_data(irq, dpu_mdss);
+-
+-	return ret;
++	return irq_set_chip_data(irq, dpu_mdss);
+ }
+ 
+ static const struct irq_domain_ops dpu_mdss_irqdomain_ops = {
+@@ -159,11 +162,13 @@ static void dpu_mdss_destroy(struct drm_device *dev)
+ 	struct msm_drm_private *priv = dev->dev_private;
+ 	struct dpu_mdss *dpu_mdss = to_dpu_mdss(priv->mdss);
+ 	struct dss_module_power *mp = &dpu_mdss->mp;
++	int irq;
+ 
+ 	pm_runtime_suspend(dev->dev);
+ 	pm_runtime_disable(dev->dev);
+ 	_dpu_mdss_irq_domain_fini(dpu_mdss);
+-	free_irq(platform_get_irq(pdev, 0), dpu_mdss);
++	irq = platform_get_irq(pdev, 0);
++	irq_set_chained_handler_and_data(irq, NULL, NULL);
+ 	msm_dss_put_clk(mp->clk_config, mp->num_clk);
+ 	devm_kfree(&pdev->dev, mp->clk_config);
+ 
+@@ -187,6 +192,7 @@ int dpu_mdss_init(struct drm_device *dev)
+ 	struct dpu_mdss *dpu_mdss;
+ 	struct dss_module_power *mp;
+ 	int ret = 0;
++	int irq;
+ 
+ 	dpu_mdss = devm_kzalloc(dev->dev, sizeof(*dpu_mdss), GFP_KERNEL);
+ 	if (!dpu_mdss)
+@@ -219,12 +225,12 @@ int dpu_mdss_init(struct drm_device *dev)
+ 	if (ret)
+ 		goto irq_domain_error;
+ 
+-	ret = request_irq(platform_get_irq(pdev, 0),
+-			dpu_mdss_irq, 0, "dpu_mdss_isr", dpu_mdss);
+-	if (ret) {
+-		DPU_ERROR("failed to init irq: %d\n", ret);
++	irq = platform_get_irq(pdev, 0);
++	if (irq < 0)
+ 		goto irq_error;
+-	}
++
++	irq_set_chained_handler_and_data(irq, dpu_mdss_irq,
++					 dpu_mdss);
+ 
+ 	pm_runtime_enable(dev->dev);
+ 
+diff --git a/drivers/gpu/drm/nouveau/dispnv04/tvnv17.c b/drivers/gpu/drm/nouveau/dispnv04/tvnv17.c
+index 6a4ca139cf5d..8fd8124d72ba 100644
+--- a/drivers/gpu/drm/nouveau/dispnv04/tvnv17.c
++++ b/drivers/gpu/drm/nouveau/dispnv04/tvnv17.c
+@@ -750,7 +750,9 @@ static int nv17_tv_set_property(struct drm_encoder *encoder,
+ 		/* Disable the crtc to ensure a full modeset is
+ 		 * performed whenever it's turned on again. */
+ 		if (crtc)
+-			drm_crtc_force_disable(crtc);
++			drm_crtc_helper_set_mode(crtc, &crtc->mode,
++						 crtc->x, crtc->y,
++						 crtc->primary->fb);
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/rcar-du/rcar_du_kms.c b/drivers/gpu/drm/rcar-du/rcar_du_kms.c
+index 9c7007d45408..f9a90ff24e6d 100644
+--- a/drivers/gpu/drm/rcar-du/rcar_du_kms.c
++++ b/drivers/gpu/drm/rcar-du/rcar_du_kms.c
+@@ -331,6 +331,7 @@ static int rcar_du_encoders_init_one(struct rcar_du_device *rcdu,
+ 		dev_dbg(rcdu->dev,
+ 			"connected entity %pOF is disabled, skipping\n",
+ 			entity);
++		of_node_put(entity);
+ 		return -ENODEV;
+ 	}
+ 
+@@ -366,6 +367,7 @@ static int rcar_du_encoders_init_one(struct rcar_du_device *rcdu,
+ 		dev_warn(rcdu->dev,
+ 			 "no encoder found for endpoint %pOF, skipping\n",
+ 			 ep->local_node);
++		of_node_put(entity);
+ 		return -ENODEV;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
+index e2942c9a11a7..35ddbec1375a 100644
+--- a/drivers/gpu/drm/scheduler/sched_entity.c
++++ b/drivers/gpu/drm/scheduler/sched_entity.c
+@@ -52,12 +52,12 @@ int drm_sched_entity_init(struct drm_sched_entity *entity,
+ {
+ 	int i;
+ 
+-	if (!(entity && rq_list && num_rq_list > 0 && rq_list[0]))
++	if (!(entity && rq_list && (num_rq_list == 0 || rq_list[0])))
+ 		return -EINVAL;
+ 
+ 	memset(entity, 0, sizeof(struct drm_sched_entity));
+ 	INIT_LIST_HEAD(&entity->list);
+-	entity->rq = rq_list[0];
++	entity->rq = NULL;
+ 	entity->guilty = guilty;
+ 	entity->num_rq_list = num_rq_list;
+ 	entity->rq_list = kcalloc(num_rq_list, sizeof(struct drm_sched_rq *),
+@@ -67,6 +67,10 @@ int drm_sched_entity_init(struct drm_sched_entity *entity,
+ 
+ 	for (i = 0; i < num_rq_list; ++i)
+ 		entity->rq_list[i] = rq_list[i];
++
++	if (num_rq_list)
++		entity->rq = rq_list[0];
++
+ 	entity->last_scheduled = NULL;
+ 
+ 	spin_lock_init(&entity->rq_lock);
+@@ -165,6 +169,9 @@ long drm_sched_entity_flush(struct drm_sched_entity *entity, long timeout)
+ 	struct task_struct *last_user;
+ 	long ret = timeout;
+ 
++	if (!entity->rq)
++		return 0;
++
+ 	sched = entity->rq->sched;
+ 	/**
+ 	 * The client will not queue more IBs during this fini, consume existing
+@@ -264,20 +271,24 @@ static void drm_sched_entity_kill_jobs(struct drm_sched_entity *entity)
+  */
+ void drm_sched_entity_fini(struct drm_sched_entity *entity)
+ {
+-	struct drm_gpu_scheduler *sched;
++	struct drm_gpu_scheduler *sched = NULL;
+ 
+-	sched = entity->rq->sched;
+-	drm_sched_rq_remove_entity(entity->rq, entity);
++	if (entity->rq) {
++		sched = entity->rq->sched;
++		drm_sched_rq_remove_entity(entity->rq, entity);
++	}
+ 
+ 	/* Consumption of existing IBs wasn't completed. Forcefully
+ 	 * remove them here.
+ 	 */
+ 	if (spsc_queue_peek(&entity->job_queue)) {
+-		/* Park the kernel for a moment to make sure it isn't processing
+-		 * our enity.
+-		 */
+-		kthread_park(sched->thread);
+-		kthread_unpark(sched->thread);
++		if (sched) {
++			/* Park the kernel for a moment to make sure it isn't processing
++			 * our enity.
++			 */
++			kthread_park(sched->thread);
++			kthread_unpark(sched->thread);
++		}
+ 		if (entity->dependency) {
+ 			dma_fence_remove_callback(entity->dependency,
+ 						  &entity->cb);
+@@ -362,9 +373,11 @@ void drm_sched_entity_set_priority(struct drm_sched_entity *entity,
+ 	for (i = 0; i < entity->num_rq_list; ++i)
+ 		drm_sched_entity_set_rq_priority(&entity->rq_list[i], priority);
+ 
+-	drm_sched_rq_remove_entity(entity->rq, entity);
+-	drm_sched_entity_set_rq_priority(&entity->rq, priority);
+-	drm_sched_rq_add_entity(entity->rq, entity);
++	if (entity->rq) {
++		drm_sched_rq_remove_entity(entity->rq, entity);
++		drm_sched_entity_set_rq_priority(&entity->rq, priority);
++		drm_sched_rq_add_entity(entity->rq, entity);
++	}
+ 
+ 	spin_unlock(&entity->rq_lock);
+ }
+diff --git a/drivers/gpu/drm/vkms/vkms_crtc.c b/drivers/gpu/drm/vkms/vkms_crtc.c
+index e747a7d16739..1054f535178a 100644
+--- a/drivers/gpu/drm/vkms/vkms_crtc.c
++++ b/drivers/gpu/drm/vkms/vkms_crtc.c
+@@ -4,13 +4,17 @@
+ #include <drm/drm_atomic_helper.h>
+ #include <drm/drm_crtc_helper.h>
+ 
+-static void _vblank_handle(struct vkms_output *output)
++static enum hrtimer_restart vkms_vblank_simulate(struct hrtimer *timer)
+ {
++	struct vkms_output *output = container_of(timer, struct vkms_output,
++						  vblank_hrtimer);
+ 	struct drm_crtc *crtc = &output->crtc;
+ 	struct vkms_crtc_state *state = to_vkms_crtc_state(crtc->state);
++	int ret_overrun;
+ 	bool ret;
+ 
+ 	spin_lock(&output->lock);
++
+ 	ret = drm_crtc_handle_vblank(crtc);
+ 	if (!ret)
+ 		DRM_ERROR("vkms failure on handling vblank");
+@@ -31,19 +35,9 @@ static void _vblank_handle(struct vkms_output *output)
+ 			DRM_WARN("failed to queue vkms_crc_work_handle");
+ 	}
+ 
+-	spin_unlock(&output->lock);
+-}
+-
+-static enum hrtimer_restart vkms_vblank_simulate(struct hrtimer *timer)
+-{
+-	struct vkms_output *output = container_of(timer, struct vkms_output,
+-						  vblank_hrtimer);
+-	int ret_overrun;
+-
+-	_vblank_handle(output);
+-
+ 	ret_overrun = hrtimer_forward_now(&output->vblank_hrtimer,
+ 					  output->period_ns);
++	spin_unlock(&output->lock);
+ 
+ 	return HRTIMER_RESTART;
+ }
+@@ -81,6 +75,9 @@ bool vkms_get_vblank_timestamp(struct drm_device *dev, unsigned int pipe,
+ 
+ 	*vblank_time = output->vblank_hrtimer.node.expires;
+ 
++	if (!in_vblank_irq)
++		*vblank_time -= output->period_ns;
++
+ 	return true;
+ }
+ 
+diff --git a/drivers/hid/intel-ish-hid/ipc/ipc.c b/drivers/hid/intel-ish-hid/ipc/ipc.c
+index 742191bb24c6..45e33c7ba9a6 100644
+--- a/drivers/hid/intel-ish-hid/ipc/ipc.c
++++ b/drivers/hid/intel-ish-hid/ipc/ipc.c
+@@ -91,7 +91,10 @@ static bool check_generated_interrupt(struct ishtp_device *dev)
+ 			IPC_INT_FROM_ISH_TO_HOST_CHV_AB(pisr_val);
+ 	} else {
+ 		pisr_val = ish_reg_read(dev, IPC_REG_PISR_BXT);
+-		interrupt_generated = IPC_INT_FROM_ISH_TO_HOST_BXT(pisr_val);
++		interrupt_generated = !!pisr_val;
++		/* only busy-clear bit is RW, others are RO */
++		if (pisr_val)
++			ish_reg_write(dev, IPC_REG_PISR_BXT, pisr_val);
+ 	}
+ 
+ 	return interrupt_generated;
+@@ -839,11 +842,11 @@ int ish_hw_start(struct ishtp_device *dev)
+ {
+ 	ish_set_host_rdy(dev);
+ 
++	set_host_ready(dev);
++
+ 	/* After that we can enable ISH DMA operation and wakeup ISHFW */
+ 	ish_wakeup(dev);
+ 
+-	set_host_ready(dev);
+-
+ 	/* wait for FW-initiated reset flow */
+ 	if (!dev->recvd_hw_ready)
+ 		wait_event_interruptible_timeout(dev->wait_hw_ready,
+diff --git a/drivers/hid/intel-ish-hid/ishtp/bus.c b/drivers/hid/intel-ish-hid/ishtp/bus.c
+index 728dc6d4561a..a271d6d169b1 100644
+--- a/drivers/hid/intel-ish-hid/ishtp/bus.c
++++ b/drivers/hid/intel-ish-hid/ishtp/bus.c
+@@ -675,7 +675,8 @@ int ishtp_cl_device_bind(struct ishtp_cl *cl)
+ 	spin_lock_irqsave(&cl->dev->device_list_lock, flags);
+ 	list_for_each_entry(cl_device, &cl->dev->device_list,
+ 			device_link) {
+-		if (cl_device->fw_client->client_id == cl->fw_client_id) {
++		if (cl_device->fw_client &&
++		    cl_device->fw_client->client_id == cl->fw_client_id) {
+ 			cl->device = cl_device;
+ 			rv = 0;
+ 			break;
+@@ -735,6 +736,7 @@ void ishtp_bus_remove_all_clients(struct ishtp_device *ishtp_dev,
+ 	spin_lock_irqsave(&ishtp_dev->device_list_lock, flags);
+ 	list_for_each_entry_safe(cl_device, n, &ishtp_dev->device_list,
+ 				 device_link) {
++		cl_device->fw_client = NULL;
+ 		if (warm_reset && cl_device->reference_count)
+ 			continue;
+ 
+diff --git a/drivers/hwtracing/coresight/coresight-etm-perf.c b/drivers/hwtracing/coresight/coresight-etm-perf.c
+index abe8249b893b..f21eb28b6782 100644
+--- a/drivers/hwtracing/coresight/coresight-etm-perf.c
++++ b/drivers/hwtracing/coresight/coresight-etm-perf.c
+@@ -177,15 +177,15 @@ static void etm_free_aux(void *data)
+ 	schedule_work(&event_data->work);
+ }
+ 
+-static void *etm_setup_aux(int event_cpu, void **pages,
++static void *etm_setup_aux(struct perf_event *event, void **pages,
+ 			   int nr_pages, bool overwrite)
+ {
+-	int cpu;
++	int cpu = event->cpu;
+ 	cpumask_t *mask;
+ 	struct coresight_device *sink;
+ 	struct etm_event_data *event_data = NULL;
+ 
+-	event_data = alloc_event_data(event_cpu);
++	event_data = alloc_event_data(cpu);
+ 	if (!event_data)
+ 		return NULL;
+ 	INIT_WORK(&event_data->work, free_event_data);
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x.c b/drivers/hwtracing/coresight/coresight-etm4x.c
+index 53e2fb6e86f6..fe76b176974a 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x.c
++++ b/drivers/hwtracing/coresight/coresight-etm4x.c
+@@ -55,7 +55,8 @@ static void etm4_os_unlock(struct etmv4_drvdata *drvdata)
+ 
+ static bool etm4_arch_supported(u8 arch)
+ {
+-	switch (arch) {
++	/* Mask out the minor version number */
++	switch (arch & 0xf0) {
+ 	case ETM_ARCH_V4:
+ 		break;
+ 	default:
+diff --git a/drivers/i2c/busses/i2c-designware-core.h b/drivers/i2c/busses/i2c-designware-core.h
+index b4a0b2b99a78..6b4ef1d38fb2 100644
+--- a/drivers/i2c/busses/i2c-designware-core.h
++++ b/drivers/i2c/busses/i2c-designware-core.h
+@@ -215,6 +215,7 @@
+  * @disable_int: function to disable all interrupts
+  * @init: function to initialize the I2C hardware
+  * @mode: operation mode - DW_IC_MASTER or DW_IC_SLAVE
++ * @suspended: set to true if the controller is suspended
+  *
+  * HCNT and LCNT parameters can be used if the platform knows more accurate
+  * values than the one computed based only on the input clock frequency.
+@@ -270,6 +271,7 @@ struct dw_i2c_dev {
+ 	int			(*set_sda_hold_time)(struct dw_i2c_dev *dev);
+ 	int			mode;
+ 	struct i2c_bus_recovery_info rinfo;
++	bool			suspended;
+ };
+ 
+ #define ACCESS_SWAP		0x00000001
+diff --git a/drivers/i2c/busses/i2c-designware-master.c b/drivers/i2c/busses/i2c-designware-master.c
+index 8d1bc44d2530..bb8e3f149979 100644
+--- a/drivers/i2c/busses/i2c-designware-master.c
++++ b/drivers/i2c/busses/i2c-designware-master.c
+@@ -426,6 +426,12 @@ i2c_dw_xfer(struct i2c_adapter *adap, struct i2c_msg msgs[], int num)
+ 
+ 	pm_runtime_get_sync(dev->dev);
+ 
++	if (dev->suspended) {
++		dev_err(dev->dev, "Error %s call while suspended\n", __func__);
++		ret = -ESHUTDOWN;
++		goto done_nolock;
++	}
++
+ 	reinit_completion(&dev->cmd_complete);
+ 	dev->msgs = msgs;
+ 	dev->msgs_num = num;
+diff --git a/drivers/i2c/busses/i2c-designware-pcidrv.c b/drivers/i2c/busses/i2c-designware-pcidrv.c
+index d50f80487214..76810deb2de6 100644
+--- a/drivers/i2c/busses/i2c-designware-pcidrv.c
++++ b/drivers/i2c/busses/i2c-designware-pcidrv.c
+@@ -176,6 +176,7 @@ static int i2c_dw_pci_suspend(struct device *dev)
+ 	struct pci_dev *pdev = to_pci_dev(dev);
+ 	struct dw_i2c_dev *i_dev = pci_get_drvdata(pdev);
+ 
++	i_dev->suspended = true;
+ 	i_dev->disable(i_dev);
+ 
+ 	return 0;
+@@ -185,8 +186,12 @@ static int i2c_dw_pci_resume(struct device *dev)
+ {
+ 	struct pci_dev *pdev = to_pci_dev(dev);
+ 	struct dw_i2c_dev *i_dev = pci_get_drvdata(pdev);
++	int ret;
+ 
+-	return i_dev->init(i_dev);
++	ret = i_dev->init(i_dev);
++	i_dev->suspended = false;
++
++	return ret;
+ }
+ #endif
+ 
+diff --git a/drivers/i2c/busses/i2c-designware-platdrv.c b/drivers/i2c/busses/i2c-designware-platdrv.c
+index 9eaac3be1f63..ead5e7de3e4d 100644
+--- a/drivers/i2c/busses/i2c-designware-platdrv.c
++++ b/drivers/i2c/busses/i2c-designware-platdrv.c
+@@ -454,6 +454,8 @@ static int dw_i2c_plat_suspend(struct device *dev)
+ {
+ 	struct dw_i2c_dev *i_dev = dev_get_drvdata(dev);
+ 
++	i_dev->suspended = true;
++
+ 	if (i_dev->shared_with_punit)
+ 		return 0;
+ 
+@@ -471,6 +473,7 @@ static int dw_i2c_plat_resume(struct device *dev)
+ 		i2c_dw_prepare_clk(i_dev, true);
+ 
+ 	i_dev->init(i_dev);
++	i_dev->suspended = false;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/i2c/i2c-core-base.c b/drivers/i2c/i2c-core-base.c
+index 28460f6a60cc..af87a16ac3a5 100644
+--- a/drivers/i2c/i2c-core-base.c
++++ b/drivers/i2c/i2c-core-base.c
+@@ -430,7 +430,7 @@ static int i2c_device_remove(struct device *dev)
+ 	dev_pm_clear_wake_irq(&client->dev);
+ 	device_init_wakeup(&client->dev, false);
+ 
+-	client->irq = 0;
++	client->irq = client->init_irq;
+ 
+ 	return status;
+ }
+@@ -741,10 +741,11 @@ i2c_new_device(struct i2c_adapter *adap, struct i2c_board_info const *info)
+ 	client->flags = info->flags;
+ 	client->addr = info->addr;
+ 
+-	client->irq = info->irq;
+-	if (!client->irq)
+-		client->irq = i2c_dev_irq_from_resources(info->resources,
++	client->init_irq = info->irq;
++	if (!client->init_irq)
++		client->init_irq = i2c_dev_irq_from_resources(info->resources,
+ 							 info->num_resources);
++	client->irq = client->init_irq;
+ 
+ 	strlcpy(client->name, info->type, sizeof(client->name));
+ 
+diff --git a/drivers/i2c/i2c-core-of.c b/drivers/i2c/i2c-core-of.c
+index 6cb7ad608bcd..0f01cdba9d2c 100644
+--- a/drivers/i2c/i2c-core-of.c
++++ b/drivers/i2c/i2c-core-of.c
+@@ -121,6 +121,17 @@ static int of_dev_node_match(struct device *dev, void *data)
+ 	return dev->of_node == data;
+ }
+ 
++static int of_dev_or_parent_node_match(struct device *dev, void *data)
++{
++	if (dev->of_node == data)
++		return 1;
++
++	if (dev->parent)
++		return dev->parent->of_node == data;
++
++	return 0;
++}
++
+ /* must call put_device() when done with returned i2c_client device */
+ struct i2c_client *of_find_i2c_device_by_node(struct device_node *node)
+ {
+@@ -145,7 +156,8 @@ struct i2c_adapter *of_find_i2c_adapter_by_node(struct device_node *node)
+ 	struct device *dev;
+ 	struct i2c_adapter *adapter;
+ 
+-	dev = bus_find_device(&i2c_bus_type, NULL, node, of_dev_node_match);
++	dev = bus_find_device(&i2c_bus_type, NULL, node,
++			      of_dev_or_parent_node_match);
+ 	if (!dev)
+ 		return NULL;
+ 
+diff --git a/drivers/iio/adc/qcom-pm8xxx-xoadc.c b/drivers/iio/adc/qcom-pm8xxx-xoadc.c
+index c30c002f1fef..4735f8a1ca9d 100644
+--- a/drivers/iio/adc/qcom-pm8xxx-xoadc.c
++++ b/drivers/iio/adc/qcom-pm8xxx-xoadc.c
+@@ -423,18 +423,14 @@ static irqreturn_t pm8xxx_eoc_irq(int irq, void *d)
+ static struct pm8xxx_chan_info *
+ pm8xxx_get_channel(struct pm8xxx_xoadc *adc, u8 chan)
+ {
+-	struct pm8xxx_chan_info *ch;
+ 	int i;
+ 
+ 	for (i = 0; i < adc->nchans; i++) {
+-		ch = &adc->chans[i];
++		struct pm8xxx_chan_info *ch = &adc->chans[i];
+ 		if (ch->hwchan->amux_channel == chan)
+-			break;
++			return ch;
+ 	}
+-	if (i == adc->nchans)
+-		return NULL;
+-
+-	return ch;
++	return NULL;
+ }
+ 
+ static int pm8xxx_read_channel_rsv(struct pm8xxx_xoadc *adc,
+diff --git a/drivers/infiniband/hw/cxgb4/cm.c b/drivers/infiniband/hw/cxgb4/cm.c
+index 8221813219e5..25a81fbb0d4d 100644
+--- a/drivers/infiniband/hw/cxgb4/cm.c
++++ b/drivers/infiniband/hw/cxgb4/cm.c
+@@ -1903,8 +1903,10 @@ static int abort_rpl(struct c4iw_dev *dev, struct sk_buff *skb)
+ 	}
+ 	mutex_unlock(&ep->com.mutex);
+ 
+-	if (release)
++	if (release) {
++		close_complete_upcall(ep, -ECONNRESET);
+ 		release_ep_resources(ep);
++	}
+ 	c4iw_put_ep(&ep->com);
+ 	return 0;
+ }
+@@ -3606,7 +3608,6 @@ int c4iw_ep_disconnect(struct c4iw_ep *ep, int abrupt, gfp_t gfp)
+ 	if (close) {
+ 		if (abrupt) {
+ 			set_bit(EP_DISC_ABORT, &ep->com.history);
+-			close_complete_upcall(ep, -ECONNRESET);
+ 			ret = send_abort(ep);
+ 		} else {
+ 			set_bit(EP_DISC_CLOSE, &ep->com.history);
+diff --git a/drivers/infiniband/hw/mlx4/cm.c b/drivers/infiniband/hw/mlx4/cm.c
+index fedaf8260105..8c79a480f2b7 100644
+--- a/drivers/infiniband/hw/mlx4/cm.c
++++ b/drivers/infiniband/hw/mlx4/cm.c
+@@ -39,7 +39,7 @@
+ 
+ #include "mlx4_ib.h"
+ 
+-#define CM_CLEANUP_CACHE_TIMEOUT  (5 * HZ)
++#define CM_CLEANUP_CACHE_TIMEOUT  (30 * HZ)
+ 
+ struct id_map_entry {
+ 	struct rb_node node;
+diff --git a/drivers/input/misc/soc_button_array.c b/drivers/input/misc/soc_button_array.c
+index 23520df7650f..55cd6e0b409c 100644
+--- a/drivers/input/misc/soc_button_array.c
++++ b/drivers/input/misc/soc_button_array.c
+@@ -373,7 +373,7 @@ static struct soc_button_info soc_button_PNP0C40[] = {
+ 	{ "home", 1, EV_KEY, KEY_LEFTMETA, false, true },
+ 	{ "volume_up", 2, EV_KEY, KEY_VOLUMEUP, true, false },
+ 	{ "volume_down", 3, EV_KEY, KEY_VOLUMEDOWN, true, false },
+-	{ "rotation_lock", 4, EV_SW, SW_ROTATE_LOCK, false, false },
++	{ "rotation_lock", 4, EV_KEY, KEY_ROTATE_LOCK_TOGGLE, false, false },
+ 	{ }
+ };
+ 
+diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
+index 78188bf7e90d..dbd6824dfffa 100644
+--- a/drivers/iommu/intel-iommu.c
++++ b/drivers/iommu/intel-iommu.c
+@@ -2485,7 +2485,8 @@ static struct dmar_domain *dmar_insert_one_dev_info(struct intel_iommu *iommu,
+ 	if (dev && dev_is_pci(dev)) {
+ 		struct pci_dev *pdev = to_pci_dev(info->dev);
+ 
+-		if (!pci_ats_disabled() &&
++		if (!pdev->untrusted &&
++		    !pci_ats_disabled() &&
+ 		    ecap_dev_iotlb_support(iommu->ecap) &&
+ 		    pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ATS) &&
+ 		    dmar_find_matched_atsr_unit(pdev))
+diff --git a/drivers/iommu/io-pgtable-arm-v7s.c b/drivers/iommu/io-pgtable-arm-v7s.c
+index 1b9e40a203e0..18a8330e1882 100644
+--- a/drivers/iommu/io-pgtable-arm-v7s.c
++++ b/drivers/iommu/io-pgtable-arm-v7s.c
+@@ -228,7 +228,8 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
+ 		if (dma != phys)
+ 			goto out_unmap;
+ 	}
+-	kmemleak_ignore(table);
++	if (lvl == 2)
++		kmemleak_ignore(table);
+ 	return table;
+ 
+ out_unmap:
+diff --git a/drivers/leds/leds-lp55xx-common.c b/drivers/leds/leds-lp55xx-common.c
+index 3d79a6380761..723f2f17497a 100644
+--- a/drivers/leds/leds-lp55xx-common.c
++++ b/drivers/leds/leds-lp55xx-common.c
+@@ -201,7 +201,7 @@ static void lp55xx_firmware_loaded(const struct firmware *fw, void *context)
+ 
+ 	if (!fw) {
+ 		dev_err(dev, "firmware request failed\n");
+-		goto out;
++		return;
+ 	}
+ 
+ 	/* handling firmware data is chip dependent */
+@@ -214,9 +214,9 @@ static void lp55xx_firmware_loaded(const struct firmware *fw, void *context)
+ 
+ 	mutex_unlock(&chip->lock);
+ 
+-out:
+ 	/* firmware should be released for other channel use */
+ 	release_firmware(chip->fw);
++	chip->fw = NULL;
+ }
+ 
+ static int lp55xx_request_firmware(struct lp55xx_chip *chip)
+diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
+index 557a8a3270a1..e5daf91310f6 100644
+--- a/drivers/md/bcache/sysfs.c
++++ b/drivers/md/bcache/sysfs.c
+@@ -287,8 +287,12 @@ STORE(__cached_dev)
+ 	sysfs_strtoul_clamp(writeback_rate_update_seconds,
+ 			    dc->writeback_rate_update_seconds,
+ 			    1, WRITEBACK_RATE_UPDATE_SECS_MAX);
+-	d_strtoul(writeback_rate_i_term_inverse);
+-	d_strtoul_nonzero(writeback_rate_p_term_inverse);
++	sysfs_strtoul_clamp(writeback_rate_i_term_inverse,
++			    dc->writeback_rate_i_term_inverse,
++			    1, UINT_MAX);
++	sysfs_strtoul_clamp(writeback_rate_p_term_inverse,
++			    dc->writeback_rate_p_term_inverse,
++			    1, UINT_MAX);
+ 	d_strtoul_nonzero(writeback_rate_minimum);
+ 
+ 	sysfs_strtoul_clamp(io_error_limit, dc->error_limit, 0, INT_MAX);
+@@ -299,7 +303,9 @@ STORE(__cached_dev)
+ 		dc->io_disable = v ? 1 : 0;
+ 	}
+ 
+-	d_strtoi_h(sequential_cutoff);
++	sysfs_strtoul_clamp(sequential_cutoff,
++			    dc->sequential_cutoff,
++			    0, UINT_MAX);
+ 	d_strtoi_h(readahead);
+ 
+ 	if (attr == &sysfs_clear_stats)
+@@ -778,8 +784,17 @@ STORE(__bch_cache_set)
+ 		c->error_limit = strtoul_or_return(buf);
+ 
+ 	/* See count_io_errors() for why 88 */
+-	if (attr == &sysfs_io_error_halflife)
+-		c->error_decay = strtoul_or_return(buf) / 88;
++	if (attr == &sysfs_io_error_halflife) {
++		unsigned long v = 0;
++		ssize_t ret;
++
++		ret = strtoul_safe_clamp(buf, v, 0, UINT_MAX);
++		if (!ret) {
++			c->error_decay = v / 88;
++			return size;
++		}
++		return ret;
++	}
+ 
+ 	if (attr == &sysfs_io_disable) {
+ 		v = strtoul_or_return(buf);
+diff --git a/drivers/md/bcache/sysfs.h b/drivers/md/bcache/sysfs.h
+index 3fe82425859c..0ad2715a884e 100644
+--- a/drivers/md/bcache/sysfs.h
++++ b/drivers/md/bcache/sysfs.h
+@@ -81,9 +81,16 @@ do {									\
+ 
+ #define sysfs_strtoul_clamp(file, var, min, max)			\
+ do {									\
+-	if (attr == &sysfs_ ## file)					\
+-		return strtoul_safe_clamp(buf, var, min, max)		\
+-			?: (ssize_t) size;				\
++	if (attr == &sysfs_ ## file) {					\
++		unsigned long v = 0;					\
++		ssize_t ret;						\
++		ret = strtoul_safe_clamp(buf, v, min, max);		\
++		if (!ret) {						\
++			var = v;					\
++			return size;					\
++		}							\
++		return ret;						\
++	}								\
+ } while (0)
+ 
+ #define strtoul_or_return(cp)						\
+diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
+index e83b63608262..254c26eb963a 100644
+--- a/drivers/md/dm-thin.c
++++ b/drivers/md/dm-thin.c
+@@ -3283,6 +3283,13 @@ static int pool_ctr(struct dm_target *ti, unsigned argc, char **argv)
+ 	as.argc = argc;
+ 	as.argv = argv;
+ 
++	/* make sure metadata and data are different devices */
++	if (!strcmp(argv[0], argv[1])) {
++		ti->error = "Error setting metadata or data device";
++		r = -EINVAL;
++		goto out_unlock;
++	}
++
+ 	/*
+ 	 * Set default pool features.
+ 	 */
+@@ -4167,6 +4174,12 @@ static int thin_ctr(struct dm_target *ti, unsigned argc, char **argv)
+ 	tc->sort_bio_list = RB_ROOT;
+ 
+ 	if (argc == 3) {
++		if (!strcmp(argv[0], argv[2])) {
++			ti->error = "Error setting origin device";
++			r = -EINVAL;
++			goto bad_origin_dev;
++		}
++
+ 		r = dm_get_device(ti, argv[2], FMODE_READ, &origin_dev);
+ 		if (r) {
+ 			ti->error = "Error opening origin device";
+diff --git a/drivers/media/i2c/mt9m111.c b/drivers/media/i2c/mt9m111.c
+index d639b9bcf64a..7a759b4b88cf 100644
+--- a/drivers/media/i2c/mt9m111.c
++++ b/drivers/media/i2c/mt9m111.c
+@@ -1273,6 +1273,8 @@ static int mt9m111_probe(struct i2c_client *client,
+ 	mt9m111->rect.top	= MT9M111_MIN_DARK_ROWS;
+ 	mt9m111->rect.width	= MT9M111_MAX_WIDTH;
+ 	mt9m111->rect.height	= MT9M111_MAX_HEIGHT;
++	mt9m111->width		= mt9m111->rect.width;
++	mt9m111->height		= mt9m111->rect.height;
+ 	mt9m111->fmt		= &mt9m111_colour_fmts[0];
+ 	mt9m111->lastpage	= -1;
+ 	mutex_init(&mt9m111->power_lock);
+diff --git a/drivers/media/i2c/ov7740.c b/drivers/media/i2c/ov7740.c
+index 177688afd9a6..8835b831cdc0 100644
+--- a/drivers/media/i2c/ov7740.c
++++ b/drivers/media/i2c/ov7740.c
+@@ -1101,6 +1101,9 @@ static int ov7740_probe(struct i2c_client *client,
+ 	if (ret)
+ 		return ret;
+ 
++	pm_runtime_set_active(&client->dev);
++	pm_runtime_enable(&client->dev);
++
+ 	ret = ov7740_detect(ov7740);
+ 	if (ret)
+ 		goto error_detect;
+@@ -1123,8 +1126,6 @@ static int ov7740_probe(struct i2c_client *client,
+ 	if (ret)
+ 		goto error_async_register;
+ 
+-	pm_runtime_set_active(&client->dev);
+-	pm_runtime_enable(&client->dev);
+ 	pm_runtime_idle(&client->dev);
+ 
+ 	return 0;
+@@ -1134,6 +1135,8 @@ error_async_register:
+ error_init_controls:
+ 	ov7740_free_controls(ov7740);
+ error_detect:
++	pm_runtime_disable(&client->dev);
++	pm_runtime_set_suspended(&client->dev);
+ 	ov7740_set_power(ov7740, 0);
+ 	media_entity_cleanup(&ov7740->subdev.entity);
+ 
+diff --git a/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c b/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c
+index 2a5d5002c27e..f761e4d8bf2a 100644
+--- a/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c
++++ b/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c
+@@ -702,7 +702,7 @@ end:
+ 	v4l2_m2m_buf_queue(ctx->fh.m2m_ctx, to_vb2_v4l2_buffer(vb));
+ }
+ 
+-static void *mtk_jpeg_buf_remove(struct mtk_jpeg_ctx *ctx,
++static struct vb2_v4l2_buffer *mtk_jpeg_buf_remove(struct mtk_jpeg_ctx *ctx,
+ 				 enum v4l2_buf_type type)
+ {
+ 	if (V4L2_TYPE_IS_OUTPUT(type))
+@@ -714,7 +714,7 @@ static void *mtk_jpeg_buf_remove(struct mtk_jpeg_ctx *ctx,
+ static int mtk_jpeg_start_streaming(struct vb2_queue *q, unsigned int count)
+ {
+ 	struct mtk_jpeg_ctx *ctx = vb2_get_drv_priv(q);
+-	struct vb2_buffer *vb;
++	struct vb2_v4l2_buffer *vb;
+ 	int ret = 0;
+ 
+ 	ret = pm_runtime_get_sync(ctx->jpeg->dev);
+@@ -724,14 +724,14 @@ static int mtk_jpeg_start_streaming(struct vb2_queue *q, unsigned int count)
+ 	return 0;
+ err:
+ 	while ((vb = mtk_jpeg_buf_remove(ctx, q->type)))
+-		v4l2_m2m_buf_done(to_vb2_v4l2_buffer(vb), VB2_BUF_STATE_QUEUED);
++		v4l2_m2m_buf_done(vb, VB2_BUF_STATE_QUEUED);
+ 	return ret;
+ }
+ 
+ static void mtk_jpeg_stop_streaming(struct vb2_queue *q)
+ {
+ 	struct mtk_jpeg_ctx *ctx = vb2_get_drv_priv(q);
+-	struct vb2_buffer *vb;
++	struct vb2_v4l2_buffer *vb;
+ 
+ 	/*
+ 	 * STREAMOFF is an acknowledgment for source change event.
+@@ -743,7 +743,7 @@ static void mtk_jpeg_stop_streaming(struct vb2_queue *q)
+ 		struct mtk_jpeg_src_buf *src_buf;
+ 
+ 		vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+-		src_buf = mtk_jpeg_vb2_to_srcbuf(vb);
++		src_buf = mtk_jpeg_vb2_to_srcbuf(&vb->vb2_buf);
+ 		mtk_jpeg_set_queue_data(ctx, &src_buf->dec_param);
+ 		ctx->state = MTK_JPEG_RUNNING;
+ 	} else if (V4L2_TYPE_IS_OUTPUT(q->type)) {
+@@ -751,7 +751,7 @@ static void mtk_jpeg_stop_streaming(struct vb2_queue *q)
+ 	}
+ 
+ 	while ((vb = mtk_jpeg_buf_remove(ctx, q->type)))
+-		v4l2_m2m_buf_done(to_vb2_v4l2_buffer(vb), VB2_BUF_STATE_ERROR);
++		v4l2_m2m_buf_done(vb, VB2_BUF_STATE_ERROR);
+ 
+ 	pm_runtime_put_sync(ctx->jpeg->dev);
+ }
+@@ -807,7 +807,7 @@ static void mtk_jpeg_device_run(void *priv)
+ {
+ 	struct mtk_jpeg_ctx *ctx = priv;
+ 	struct mtk_jpeg_dev *jpeg = ctx->jpeg;
+-	struct vb2_buffer *src_buf, *dst_buf;
++	struct vb2_v4l2_buffer *src_buf, *dst_buf;
+ 	enum vb2_buffer_state buf_state = VB2_BUF_STATE_ERROR;
+ 	unsigned long flags;
+ 	struct mtk_jpeg_src_buf *jpeg_src_buf;
+@@ -817,11 +817,11 @@ static void mtk_jpeg_device_run(void *priv)
+ 
+ 	src_buf = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+ 	dst_buf = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
+-	jpeg_src_buf = mtk_jpeg_vb2_to_srcbuf(src_buf);
++	jpeg_src_buf = mtk_jpeg_vb2_to_srcbuf(&src_buf->vb2_buf);
+ 
+ 	if (jpeg_src_buf->flags & MTK_JPEG_BUF_FLAGS_LAST_FRAME) {
+-		for (i = 0; i < dst_buf->num_planes; i++)
+-			vb2_set_plane_payload(dst_buf, i, 0);
++		for (i = 0; i < dst_buf->vb2_buf.num_planes; i++)
++			vb2_set_plane_payload(&dst_buf->vb2_buf, i, 0);
+ 		buf_state = VB2_BUF_STATE_DONE;
+ 		goto dec_end;
+ 	}
+@@ -833,8 +833,8 @@ static void mtk_jpeg_device_run(void *priv)
+ 		return;
+ 	}
+ 
+-	mtk_jpeg_set_dec_src(ctx, src_buf, &bs);
+-	if (mtk_jpeg_set_dec_dst(ctx, &jpeg_src_buf->dec_param, dst_buf, &fb))
++	mtk_jpeg_set_dec_src(ctx, &src_buf->vb2_buf, &bs);
++	if (mtk_jpeg_set_dec_dst(ctx, &jpeg_src_buf->dec_param, &dst_buf->vb2_buf, &fb))
+ 		goto dec_end;
+ 
+ 	spin_lock_irqsave(&jpeg->hw_lock, flags);
+@@ -849,8 +849,8 @@ static void mtk_jpeg_device_run(void *priv)
+ dec_end:
+ 	v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
+ 	v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
+-	v4l2_m2m_buf_done(to_vb2_v4l2_buffer(src_buf), buf_state);
+-	v4l2_m2m_buf_done(to_vb2_v4l2_buffer(dst_buf), buf_state);
++	v4l2_m2m_buf_done(src_buf, buf_state);
++	v4l2_m2m_buf_done(dst_buf, buf_state);
+ 	v4l2_m2m_job_finish(jpeg->m2m_dev, ctx->fh.m2m_ctx);
+ }
+ 
+@@ -921,7 +921,7 @@ static irqreturn_t mtk_jpeg_dec_irq(int irq, void *priv)
+ {
+ 	struct mtk_jpeg_dev *jpeg = priv;
+ 	struct mtk_jpeg_ctx *ctx;
+-	struct vb2_buffer *src_buf, *dst_buf;
++	struct vb2_v4l2_buffer *src_buf, *dst_buf;
+ 	struct mtk_jpeg_src_buf *jpeg_src_buf;
+ 	enum vb2_buffer_state buf_state = VB2_BUF_STATE_ERROR;
+ 	u32	dec_irq_ret;
+@@ -938,7 +938,7 @@ static irqreturn_t mtk_jpeg_dec_irq(int irq, void *priv)
+ 
+ 	src_buf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
+ 	dst_buf = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
+-	jpeg_src_buf = mtk_jpeg_vb2_to_srcbuf(src_buf);
++	jpeg_src_buf = mtk_jpeg_vb2_to_srcbuf(&src_buf->vb2_buf);
+ 
+ 	if (dec_irq_ret >= MTK_JPEG_DEC_RESULT_UNDERFLOW)
+ 		mtk_jpeg_dec_reset(jpeg->dec_reg_base);
+@@ -948,15 +948,15 @@ static irqreturn_t mtk_jpeg_dec_irq(int irq, void *priv)
+ 		goto dec_end;
+ 	}
+ 
+-	for (i = 0; i < dst_buf->num_planes; i++)
+-		vb2_set_plane_payload(dst_buf, i,
++	for (i = 0; i < dst_buf->vb2_buf.num_planes; i++)
++		vb2_set_plane_payload(&dst_buf->vb2_buf, i,
+ 				      jpeg_src_buf->dec_param.comp_size[i]);
+ 
+ 	buf_state = VB2_BUF_STATE_DONE;
+ 
+ dec_end:
+-	v4l2_m2m_buf_done(to_vb2_v4l2_buffer(src_buf), buf_state);
+-	v4l2_m2m_buf_done(to_vb2_v4l2_buffer(dst_buf), buf_state);
++	v4l2_m2m_buf_done(src_buf, buf_state);
++	v4l2_m2m_buf_done(dst_buf, buf_state);
+ 	v4l2_m2m_job_finish(jpeg->m2m_dev, ctx->fh.m2m_ctx);
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/media/platform/mx2_emmaprp.c b/drivers/media/platform/mx2_emmaprp.c
+index 27b078cf98e3..f60f499c596b 100644
+--- a/drivers/media/platform/mx2_emmaprp.c
++++ b/drivers/media/platform/mx2_emmaprp.c
+@@ -274,7 +274,7 @@ static void emmaprp_device_run(void *priv)
+ {
+ 	struct emmaprp_ctx *ctx = priv;
+ 	struct emmaprp_q_data *s_q_data, *d_q_data;
+-	struct vb2_buffer *src_buf, *dst_buf;
++	struct vb2_v4l2_buffer *src_buf, *dst_buf;
+ 	struct emmaprp_dev *pcdev = ctx->dev;
+ 	unsigned int s_width, s_height;
+ 	unsigned int d_width, d_height;
+@@ -294,8 +294,8 @@ static void emmaprp_device_run(void *priv)
+ 	d_height = d_q_data->height;
+ 	d_size = d_width * d_height;
+ 
+-	p_in = vb2_dma_contig_plane_dma_addr(src_buf, 0);
+-	p_out = vb2_dma_contig_plane_dma_addr(dst_buf, 0);
++	p_in = vb2_dma_contig_plane_dma_addr(&src_buf->vb2_buf, 0);
++	p_out = vb2_dma_contig_plane_dma_addr(&dst_buf->vb2_buf, 0);
+ 	if (!p_in || !p_out) {
+ 		v4l2_err(&pcdev->v4l2_dev,
+ 			 "Acquiring kernel pointers to buffers failed\n");
+diff --git a/drivers/media/platform/rcar-vin/rcar-core.c b/drivers/media/platform/rcar-vin/rcar-core.c
+index f0719ce24b97..aef8d8dab6ab 100644
+--- a/drivers/media/platform/rcar-vin/rcar-core.c
++++ b/drivers/media/platform/rcar-vin/rcar-core.c
+@@ -131,9 +131,13 @@ static int rvin_group_link_notify(struct media_link *link, u32 flags,
+ 	    !is_media_entity_v4l2_video_device(link->sink->entity))
+ 		return 0;
+ 
+-	/* If any entity is in use don't allow link changes. */
++	/*
++	 * Don't allow link changes if any entity in the graph is
++	 * streaming, modifying the CHSEL register fields can disrupt
++	 * running streams.
++	 */
+ 	media_device_for_each_entity(entity, &group->mdev)
+-		if (entity->use_count)
++		if (entity->stream_count)
+ 			return -EBUSY;
+ 
+ 	mutex_lock(&group->lock);
+diff --git a/drivers/media/platform/rockchip/rga/rga.c b/drivers/media/platform/rockchip/rga/rga.c
+index 5c653287185f..b096227a9722 100644
+--- a/drivers/media/platform/rockchip/rga/rga.c
++++ b/drivers/media/platform/rockchip/rga/rga.c
+@@ -43,7 +43,7 @@ static void device_run(void *prv)
+ {
+ 	struct rga_ctx *ctx = prv;
+ 	struct rockchip_rga *rga = ctx->rga;
+-	struct vb2_buffer *src, *dst;
++	struct vb2_v4l2_buffer *src, *dst;
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&rga->ctrl_lock, flags);
+@@ -53,8 +53,8 @@ static void device_run(void *prv)
+ 	src = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+ 	dst = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
+ 
+-	rga_buf_map(src);
+-	rga_buf_map(dst);
++	rga_buf_map(&src->vb2_buf);
++	rga_buf_map(&dst->vb2_buf);
+ 
+ 	rga_hw_start(rga);
+ 
+diff --git a/drivers/media/platform/s5p-g2d/g2d.c b/drivers/media/platform/s5p-g2d/g2d.c
+index 57ab1d1085d1..971c47165010 100644
+--- a/drivers/media/platform/s5p-g2d/g2d.c
++++ b/drivers/media/platform/s5p-g2d/g2d.c
+@@ -513,7 +513,7 @@ static void device_run(void *prv)
+ {
+ 	struct g2d_ctx *ctx = prv;
+ 	struct g2d_dev *dev = ctx->dev;
+-	struct vb2_buffer *src, *dst;
++	struct vb2_v4l2_buffer *src, *dst;
+ 	unsigned long flags;
+ 	u32 cmd = 0;
+ 
+@@ -528,10 +528,10 @@ static void device_run(void *prv)
+ 	spin_lock_irqsave(&dev->ctrl_lock, flags);
+ 
+ 	g2d_set_src_size(dev, &ctx->in);
+-	g2d_set_src_addr(dev, vb2_dma_contig_plane_dma_addr(src, 0));
++	g2d_set_src_addr(dev, vb2_dma_contig_plane_dma_addr(&src->vb2_buf, 0));
+ 
+ 	g2d_set_dst_size(dev, &ctx->out);
+-	g2d_set_dst_addr(dev, vb2_dma_contig_plane_dma_addr(dst, 0));
++	g2d_set_dst_addr(dev, vb2_dma_contig_plane_dma_addr(&dst->vb2_buf, 0));
+ 
+ 	g2d_set_rop4(dev, ctx->rop);
+ 	g2d_set_flip(dev, ctx->flip);
+diff --git a/drivers/media/platform/s5p-jpeg/jpeg-core.c b/drivers/media/platform/s5p-jpeg/jpeg-core.c
+index 3f9000b70385..370942b67d86 100644
+--- a/drivers/media/platform/s5p-jpeg/jpeg-core.c
++++ b/drivers/media/platform/s5p-jpeg/jpeg-core.c
+@@ -793,14 +793,14 @@ static void skip(struct s5p_jpeg_buffer *buf, long len);
+ static void exynos4_jpeg_parse_decode_h_tbl(struct s5p_jpeg_ctx *ctx)
+ {
+ 	struct s5p_jpeg *jpeg = ctx->jpeg;
+-	struct vb2_buffer *vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
++	struct vb2_v4l2_buffer *vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+ 	struct s5p_jpeg_buffer jpeg_buffer;
+ 	unsigned int word;
+ 	int c, x, components;
+ 
+ 	jpeg_buffer.size = 2; /* Ls */
+ 	jpeg_buffer.data =
+-		(unsigned long)vb2_plane_vaddr(vb, 0) + ctx->out_q.sos + 2;
++		(unsigned long)vb2_plane_vaddr(&vb->vb2_buf, 0) + ctx->out_q.sos + 2;
+ 	jpeg_buffer.curr = 0;
+ 
+ 	word = 0;
+@@ -830,14 +830,14 @@ static void exynos4_jpeg_parse_decode_h_tbl(struct s5p_jpeg_ctx *ctx)
+ static void exynos4_jpeg_parse_huff_tbl(struct s5p_jpeg_ctx *ctx)
+ {
+ 	struct s5p_jpeg *jpeg = ctx->jpeg;
+-	struct vb2_buffer *vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
++	struct vb2_v4l2_buffer *vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+ 	struct s5p_jpeg_buffer jpeg_buffer;
+ 	unsigned int word;
+ 	int c, i, n, j;
+ 
+ 	for (j = 0; j < ctx->out_q.dht.n; ++j) {
+ 		jpeg_buffer.size = ctx->out_q.dht.len[j];
+-		jpeg_buffer.data = (unsigned long)vb2_plane_vaddr(vb, 0) +
++		jpeg_buffer.data = (unsigned long)vb2_plane_vaddr(&vb->vb2_buf, 0) +
+ 				   ctx->out_q.dht.marker[j];
+ 		jpeg_buffer.curr = 0;
+ 
+@@ -889,13 +889,13 @@ static void exynos4_jpeg_parse_huff_tbl(struct s5p_jpeg_ctx *ctx)
+ static void exynos4_jpeg_parse_decode_q_tbl(struct s5p_jpeg_ctx *ctx)
+ {
+ 	struct s5p_jpeg *jpeg = ctx->jpeg;
+-	struct vb2_buffer *vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
++	struct vb2_v4l2_buffer *vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+ 	struct s5p_jpeg_buffer jpeg_buffer;
+ 	int c, x, components;
+ 
+ 	jpeg_buffer.size = ctx->out_q.sof_len;
+ 	jpeg_buffer.data =
+-		(unsigned long)vb2_plane_vaddr(vb, 0) + ctx->out_q.sof;
++		(unsigned long)vb2_plane_vaddr(&vb->vb2_buf, 0) + ctx->out_q.sof;
+ 	jpeg_buffer.curr = 0;
+ 
+ 	skip(&jpeg_buffer, 5); /* P, Y, X */
+@@ -920,14 +920,14 @@ static void exynos4_jpeg_parse_decode_q_tbl(struct s5p_jpeg_ctx *ctx)
+ static void exynos4_jpeg_parse_q_tbl(struct s5p_jpeg_ctx *ctx)
+ {
+ 	struct s5p_jpeg *jpeg = ctx->jpeg;
+-	struct vb2_buffer *vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
++	struct vb2_v4l2_buffer *vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+ 	struct s5p_jpeg_buffer jpeg_buffer;
+ 	unsigned int word;
+ 	int c, i, j;
+ 
+ 	for (j = 0; j < ctx->out_q.dqt.n; ++j) {
+ 		jpeg_buffer.size = ctx->out_q.dqt.len[j];
+-		jpeg_buffer.data = (unsigned long)vb2_plane_vaddr(vb, 0) +
++		jpeg_buffer.data = (unsigned long)vb2_plane_vaddr(&vb->vb2_buf, 0) +
+ 				   ctx->out_q.dqt.marker[j];
+ 		jpeg_buffer.curr = 0;
+ 
+@@ -1293,13 +1293,16 @@ static int s5p_jpeg_querycap(struct file *file, void *priv,
+ 	return 0;
+ }
+ 
+-static int enum_fmt(struct s5p_jpeg_fmt *sjpeg_formats, int n,
++static int enum_fmt(struct s5p_jpeg_ctx *ctx,
++		    struct s5p_jpeg_fmt *sjpeg_formats, int n,
+ 		    struct v4l2_fmtdesc *f, u32 type)
+ {
+ 	int i, num = 0;
++	unsigned int fmt_ver_flag = ctx->jpeg->variant->fmt_ver_flag;
+ 
+ 	for (i = 0; i < n; ++i) {
+-		if (sjpeg_formats[i].flags & type) {
++		if (sjpeg_formats[i].flags & type &&
++		    sjpeg_formats[i].flags & fmt_ver_flag) {
+ 			/* index-th format of type type found ? */
+ 			if (num == f->index)
+ 				break;
+@@ -1326,11 +1329,11 @@ static int s5p_jpeg_enum_fmt_vid_cap(struct file *file, void *priv,
+ 	struct s5p_jpeg_ctx *ctx = fh_to_ctx(priv);
+ 
+ 	if (ctx->mode == S5P_JPEG_ENCODE)
+-		return enum_fmt(sjpeg_formats, SJPEG_NUM_FORMATS, f,
++		return enum_fmt(ctx, sjpeg_formats, SJPEG_NUM_FORMATS, f,
+ 				SJPEG_FMT_FLAG_ENC_CAPTURE);
+ 
+-	return enum_fmt(sjpeg_formats, SJPEG_NUM_FORMATS, f,
+-					SJPEG_FMT_FLAG_DEC_CAPTURE);
++	return enum_fmt(ctx, sjpeg_formats, SJPEG_NUM_FORMATS, f,
++			SJPEG_FMT_FLAG_DEC_CAPTURE);
+ }
+ 
+ static int s5p_jpeg_enum_fmt_vid_out(struct file *file, void *priv,
+@@ -1339,11 +1342,11 @@ static int s5p_jpeg_enum_fmt_vid_out(struct file *file, void *priv,
+ 	struct s5p_jpeg_ctx *ctx = fh_to_ctx(priv);
+ 
+ 	if (ctx->mode == S5P_JPEG_ENCODE)
+-		return enum_fmt(sjpeg_formats, SJPEG_NUM_FORMATS, f,
++		return enum_fmt(ctx, sjpeg_formats, SJPEG_NUM_FORMATS, f,
+ 				SJPEG_FMT_FLAG_ENC_OUTPUT);
+ 
+-	return enum_fmt(sjpeg_formats, SJPEG_NUM_FORMATS, f,
+-					SJPEG_FMT_FLAG_DEC_OUTPUT);
++	return enum_fmt(ctx, sjpeg_formats, SJPEG_NUM_FORMATS, f,
++			SJPEG_FMT_FLAG_DEC_OUTPUT);
+ }
+ 
+ static struct s5p_jpeg_q_data *get_q_data(struct s5p_jpeg_ctx *ctx,
+@@ -2072,15 +2075,15 @@ static void s5p_jpeg_device_run(void *priv)
+ {
+ 	struct s5p_jpeg_ctx *ctx = priv;
+ 	struct s5p_jpeg *jpeg = ctx->jpeg;
+-	struct vb2_buffer *src_buf, *dst_buf;
++	struct vb2_v4l2_buffer *src_buf, *dst_buf;
+ 	unsigned long src_addr, dst_addr, flags;
+ 
+ 	spin_lock_irqsave(&ctx->jpeg->slock, flags);
+ 
+ 	src_buf = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+ 	dst_buf = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
+-	src_addr = vb2_dma_contig_plane_dma_addr(src_buf, 0);
+-	dst_addr = vb2_dma_contig_plane_dma_addr(dst_buf, 0);
++	src_addr = vb2_dma_contig_plane_dma_addr(&src_buf->vb2_buf, 0);
++	dst_addr = vb2_dma_contig_plane_dma_addr(&dst_buf->vb2_buf, 0);
+ 
+ 	s5p_jpeg_reset(jpeg->regs);
+ 	s5p_jpeg_poweron(jpeg->regs);
+@@ -2153,7 +2156,7 @@ static void exynos4_jpeg_set_img_addr(struct s5p_jpeg_ctx *ctx)
+ {
+ 	struct s5p_jpeg *jpeg = ctx->jpeg;
+ 	struct s5p_jpeg_fmt *fmt;
+-	struct vb2_buffer *vb;
++	struct vb2_v4l2_buffer *vb;
+ 	struct s5p_jpeg_addr jpeg_addr = {};
+ 	u32 pix_size, padding_bytes = 0;
+ 
+@@ -2172,7 +2175,7 @@ static void exynos4_jpeg_set_img_addr(struct s5p_jpeg_ctx *ctx)
+ 		vb = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
+ 	}
+ 
+-	jpeg_addr.y = vb2_dma_contig_plane_dma_addr(vb, 0);
++	jpeg_addr.y = vb2_dma_contig_plane_dma_addr(&vb->vb2_buf, 0);
+ 
+ 	if (fmt->colplanes == 2) {
+ 		jpeg_addr.cb = jpeg_addr.y + pix_size - padding_bytes;
+@@ -2190,7 +2193,7 @@ static void exynos4_jpeg_set_img_addr(struct s5p_jpeg_ctx *ctx)
+ static void exynos4_jpeg_set_jpeg_addr(struct s5p_jpeg_ctx *ctx)
+ {
+ 	struct s5p_jpeg *jpeg = ctx->jpeg;
+-	struct vb2_buffer *vb;
++	struct vb2_v4l2_buffer *vb;
+ 	unsigned int jpeg_addr = 0;
+ 
+ 	if (ctx->mode == S5P_JPEG_ENCODE)
+@@ -2198,7 +2201,7 @@ static void exynos4_jpeg_set_jpeg_addr(struct s5p_jpeg_ctx *ctx)
+ 	else
+ 		vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+ 
+-	jpeg_addr = vb2_dma_contig_plane_dma_addr(vb, 0);
++	jpeg_addr = vb2_dma_contig_plane_dma_addr(&vb->vb2_buf, 0);
+ 	if (jpeg->variant->version == SJPEG_EXYNOS5433 &&
+ 	    ctx->mode == S5P_JPEG_DECODE)
+ 		jpeg_addr += ctx->out_q.sos;
+@@ -2314,7 +2317,7 @@ static void exynos3250_jpeg_set_img_addr(struct s5p_jpeg_ctx *ctx)
+ {
+ 	struct s5p_jpeg *jpeg = ctx->jpeg;
+ 	struct s5p_jpeg_fmt *fmt;
+-	struct vb2_buffer *vb;
++	struct vb2_v4l2_buffer *vb;
+ 	struct s5p_jpeg_addr jpeg_addr = {};
+ 	u32 pix_size;
+ 
+@@ -2328,7 +2331,7 @@ static void exynos3250_jpeg_set_img_addr(struct s5p_jpeg_ctx *ctx)
+ 		fmt = ctx->cap_q.fmt;
+ 	}
+ 
+-	jpeg_addr.y = vb2_dma_contig_plane_dma_addr(vb, 0);
++	jpeg_addr.y = vb2_dma_contig_plane_dma_addr(&vb->vb2_buf, 0);
+ 
+ 	if (fmt->colplanes == 2) {
+ 		jpeg_addr.cb = jpeg_addr.y + pix_size;
+@@ -2346,7 +2349,7 @@ static void exynos3250_jpeg_set_img_addr(struct s5p_jpeg_ctx *ctx)
+ static void exynos3250_jpeg_set_jpeg_addr(struct s5p_jpeg_ctx *ctx)
+ {
+ 	struct s5p_jpeg *jpeg = ctx->jpeg;
+-	struct vb2_buffer *vb;
++	struct vb2_v4l2_buffer *vb;
+ 	unsigned int jpeg_addr = 0;
+ 
+ 	if (ctx->mode == S5P_JPEG_ENCODE)
+@@ -2354,7 +2357,7 @@ static void exynos3250_jpeg_set_jpeg_addr(struct s5p_jpeg_ctx *ctx)
+ 	else
+ 		vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+ 
+-	jpeg_addr = vb2_dma_contig_plane_dma_addr(vb, 0);
++	jpeg_addr = vb2_dma_contig_plane_dma_addr(&vb->vb2_buf, 0);
+ 	exynos3250_jpeg_jpgadr(jpeg->regs, jpeg_addr);
+ }
+ 
+diff --git a/drivers/media/platform/sh_veu.c b/drivers/media/platform/sh_veu.c
+index 09ae64a0004c..d277cc674349 100644
+--- a/drivers/media/platform/sh_veu.c
++++ b/drivers/media/platform/sh_veu.c
+@@ -273,13 +273,13 @@ static void sh_veu_process(struct sh_veu_dev *veu,
+ static void sh_veu_device_run(void *priv)
+ {
+ 	struct sh_veu_dev *veu = priv;
+-	struct vb2_buffer *src_buf, *dst_buf;
++	struct vb2_v4l2_buffer *src_buf, *dst_buf;
+ 
+ 	src_buf = v4l2_m2m_next_src_buf(veu->m2m_ctx);
+ 	dst_buf = v4l2_m2m_next_dst_buf(veu->m2m_ctx);
+ 
+ 	if (src_buf && dst_buf)
+-		sh_veu_process(veu, src_buf, dst_buf);
++		sh_veu_process(veu, &src_buf->vb2_buf, &dst_buf->vb2_buf);
+ }
+ 
+ 		/* ========== video ioctls ========== */
+diff --git a/drivers/mmc/host/omap.c b/drivers/mmc/host/omap.c
+index c60a7625b1fa..b2873a2432b6 100644
+--- a/drivers/mmc/host/omap.c
++++ b/drivers/mmc/host/omap.c
+@@ -920,7 +920,7 @@ static inline void set_cmd_timeout(struct mmc_omap_host *host, struct mmc_reques
+ 	reg &= ~(1 << 5);
+ 	OMAP_MMC_WRITE(host, SDIO, reg);
+ 	/* Set maximum timeout */
+-	OMAP_MMC_WRITE(host, CTO, 0xff);
++	OMAP_MMC_WRITE(host, CTO, 0xfd);
+ }
+ 
+ static inline void set_data_timeout(struct mmc_omap_host *host, struct mmc_request *req)
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 4a0ec8e87c7a..6cba05a80892 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -442,12 +442,20 @@ out_mapping:
+ 
+ static int mv88e6xxx_g1_irq_setup(struct mv88e6xxx_chip *chip)
+ {
++	static struct lock_class_key lock_key;
++	static struct lock_class_key request_key;
+ 	int err;
+ 
+ 	err = mv88e6xxx_g1_irq_setup_common(chip);
+ 	if (err)
+ 		return err;
+ 
++	/* These lock classes tells lockdep that global 1 irqs are in
++	 * a different category than their parent GPIO, so it won't
++	 * report false recursion.
++	 */
++	irq_set_lockdep_class(chip->irq, &lock_key, &request_key);
++
+ 	err = request_threaded_irq(chip->irq, NULL,
+ 				   mv88e6xxx_g1_irq_thread_fn,
+ 				   IRQF_ONESHOT | IRQF_SHARED,
+diff --git a/drivers/net/dsa/mv88e6xxx/port.c b/drivers/net/dsa/mv88e6xxx/port.c
+index 41eee62fed25..c44b2822e4dd 100644
+--- a/drivers/net/dsa/mv88e6xxx/port.c
++++ b/drivers/net/dsa/mv88e6xxx/port.c
+@@ -480,6 +480,8 @@ int mv88e6390_port_set_cmode(struct mv88e6xxx_chip *chip, int port,
+ 			     phy_interface_t mode)
+ {
+ 	switch (mode) {
++	case PHY_INTERFACE_MODE_NA:
++		return 0;
+ 	case PHY_INTERFACE_MODE_XGMII:
+ 	case PHY_INTERFACE_MODE_XAUI:
+ 	case PHY_INTERFACE_MODE_RXAUI:
+diff --git a/drivers/net/ethernet/cisco/enic/enic_main.c b/drivers/net/ethernet/cisco/enic/enic_main.c
+index 9a7f70db20c7..733d9172425b 100644
+--- a/drivers/net/ethernet/cisco/enic/enic_main.c
++++ b/drivers/net/ethernet/cisco/enic/enic_main.c
+@@ -119,7 +119,7 @@ static void enic_init_affinity_hint(struct enic *enic)
+ 
+ 	for (i = 0; i < enic->intr_count; i++) {
+ 		if (enic_is_err_intr(enic, i) || enic_is_notify_intr(enic, i) ||
+-		    (enic->msix[i].affinity_mask &&
++		    (cpumask_available(enic->msix[i].affinity_mask) &&
+ 		     !cpumask_empty(enic->msix[i].affinity_mask)))
+ 			continue;
+ 		if (zalloc_cpumask_var(&enic->msix[i].affinity_mask,
+@@ -148,7 +148,7 @@ static void enic_set_affinity_hint(struct enic *enic)
+ 	for (i = 0; i < enic->intr_count; i++) {
+ 		if (enic_is_err_intr(enic, i)		||
+ 		    enic_is_notify_intr(enic, i)	||
+-		    !enic->msix[i].affinity_mask	||
++		    !cpumask_available(enic->msix[i].affinity_mask) ||
+ 		    cpumask_empty(enic->msix[i].affinity_mask))
+ 			continue;
+ 		err = irq_set_affinity_hint(enic->msix_entry[i].vector,
+@@ -161,7 +161,7 @@ static void enic_set_affinity_hint(struct enic *enic)
+ 	for (i = 0; i < enic->wq_count; i++) {
+ 		int wq_intr = enic_msix_wq_intr(enic, i);
+ 
+-		if (enic->msix[wq_intr].affinity_mask &&
++		if (cpumask_available(enic->msix[wq_intr].affinity_mask) &&
+ 		    !cpumask_empty(enic->msix[wq_intr].affinity_mask))
+ 			netif_set_xps_queue(enic->netdev,
+ 					    enic->msix[wq_intr].affinity_mask,
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
+index efb6c1a25171..3ea72e4d9dc4 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
+@@ -1094,10 +1094,10 @@ static int hclge_log_rocee_ovf_error(struct hclge_dev *hdev)
+ 	return 0;
+ }
+ 
+-static int hclge_log_and_clear_rocee_ras_error(struct hclge_dev *hdev)
++static enum hnae3_reset_type
++hclge_log_and_clear_rocee_ras_error(struct hclge_dev *hdev)
+ {
+-	enum hnae3_reset_type reset_type = HNAE3_FUNC_RESET;
+-	struct hnae3_ae_dev *ae_dev = hdev->ae_dev;
++	enum hnae3_reset_type reset_type = HNAE3_NONE_RESET;
+ 	struct device *dev = &hdev->pdev->dev;
+ 	struct hclge_desc desc[2];
+ 	unsigned int status;
+@@ -1110,17 +1110,20 @@ static int hclge_log_and_clear_rocee_ras_error(struct hclge_dev *hdev)
+ 	if (ret) {
+ 		dev_err(dev, "failed(%d) to query ROCEE RAS INT SRC\n", ret);
+ 		/* reset everything for now */
+-		HCLGE_SET_DEFAULT_RESET_REQUEST(HNAE3_GLOBAL_RESET);
+-		return ret;
++		return HNAE3_GLOBAL_RESET;
+ 	}
+ 
+ 	status = le32_to_cpu(desc[0].data[0]);
+ 
+-	if (status & HCLGE_ROCEE_RERR_INT_MASK)
++	if (status & HCLGE_ROCEE_RERR_INT_MASK) {
+ 		dev_warn(dev, "ROCEE RAS AXI rresp error\n");
++		reset_type = HNAE3_FUNC_RESET;
++	}
+ 
+-	if (status & HCLGE_ROCEE_BERR_INT_MASK)
++	if (status & HCLGE_ROCEE_BERR_INT_MASK) {
+ 		dev_warn(dev, "ROCEE RAS AXI bresp error\n");
++		reset_type = HNAE3_FUNC_RESET;
++	}
+ 
+ 	if (status & HCLGE_ROCEE_ECC_INT_MASK) {
+ 		dev_warn(dev, "ROCEE RAS 2bit ECC error\n");
+@@ -1132,9 +1135,9 @@ static int hclge_log_and_clear_rocee_ras_error(struct hclge_dev *hdev)
+ 		if (ret) {
+ 			dev_err(dev, "failed(%d) to process ovf error\n", ret);
+ 			/* reset everything for now */
+-			HCLGE_SET_DEFAULT_RESET_REQUEST(HNAE3_GLOBAL_RESET);
+-			return ret;
++			return HNAE3_GLOBAL_RESET;
+ 		}
++		reset_type = HNAE3_FUNC_RESET;
+ 	}
+ 
+ 	/* clear error status */
+@@ -1143,12 +1146,10 @@ static int hclge_log_and_clear_rocee_ras_error(struct hclge_dev *hdev)
+ 	if (ret) {
+ 		dev_err(dev, "failed(%d) to clear ROCEE RAS error\n", ret);
+ 		/* reset everything for now */
+-		reset_type = HNAE3_GLOBAL_RESET;
++		return HNAE3_GLOBAL_RESET;
+ 	}
+ 
+-	HCLGE_SET_DEFAULT_RESET_REQUEST(reset_type);
+-
+-	return ret;
++	return reset_type;
+ }
+ 
+ static int hclge_config_rocee_ras_interrupt(struct hclge_dev *hdev, bool en)
+@@ -1178,15 +1179,18 @@ static int hclge_config_rocee_ras_interrupt(struct hclge_dev *hdev, bool en)
+ 	return ret;
+ }
+ 
+-static int hclge_handle_rocee_ras_error(struct hnae3_ae_dev *ae_dev)
++static void hclge_handle_rocee_ras_error(struct hnae3_ae_dev *ae_dev)
+ {
++	enum hnae3_reset_type reset_type = HNAE3_NONE_RESET;
+ 	struct hclge_dev *hdev = ae_dev->priv;
+ 
+ 	if (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state) ||
+ 	    hdev->pdev->revision < 0x21)
+-		return HNAE3_NONE_RESET;
++		return;
+ 
+-	return hclge_log_and_clear_rocee_ras_error(hdev);
++	reset_type = hclge_log_and_clear_rocee_ras_error(hdev);
++	if (reset_type != HNAE3_NONE_RESET)
++		HCLGE_SET_DEFAULT_RESET_REQUEST(reset_type);
+ }
+ 
+ static const struct hclge_hw_blk hw_blk[] = {
+diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
+index 189f231075c2..7acc61e4f645 100644
+--- a/drivers/net/ethernet/intel/e1000e/netdev.c
++++ b/drivers/net/ethernet/intel/e1000e/netdev.c
+@@ -2106,7 +2106,7 @@ static int e1000_request_msix(struct e1000_adapter *adapter)
+ 	if (strlen(netdev->name) < (IFNAMSIZ - 5))
+ 		snprintf(adapter->rx_ring->name,
+ 			 sizeof(adapter->rx_ring->name) - 1,
+-			 "%s-rx-0", netdev->name);
++			 "%.14s-rx-0", netdev->name);
+ 	else
+ 		memcpy(adapter->rx_ring->name, netdev->name, IFNAMSIZ);
+ 	err = request_irq(adapter->msix_entries[vector].vector,
+@@ -2122,7 +2122,7 @@ static int e1000_request_msix(struct e1000_adapter *adapter)
+ 	if (strlen(netdev->name) < (IFNAMSIZ - 5))
+ 		snprintf(adapter->tx_ring->name,
+ 			 sizeof(adapter->tx_ring->name) - 1,
+-			 "%s-tx-0", netdev->name);
++			 "%.14s-tx-0", netdev->name);
+ 	else
+ 		memcpy(adapter->tx_ring->name, netdev->name, IFNAMSIZ);
+ 	err = request_irq(adapter->msix_entries[vector].vector,
+@@ -5309,8 +5309,13 @@ static void e1000_watchdog_task(struct work_struct *work)
+ 			/* 8000ES2LAN requires a Rx packet buffer work-around
+ 			 * on link down event; reset the controller to flush
+ 			 * the Rx packet buffer.
++			 *
++			 * If the link is lost the controller stops DMA, but
++			 * if there is queued Tx work it cannot be done.  So
++			 * reset the controller to flush the Tx packet buffers.
+ 			 */
+-			if (adapter->flags & FLAG_RX_NEEDS_RESTART)
++			if ((adapter->flags & FLAG_RX_NEEDS_RESTART) ||
++			    e1000_desc_unused(tx_ring) + 1 < tx_ring->count)
+ 				adapter->flags |= FLAG_RESTART_NOW;
+ 			else
+ 				pm_schedule_suspend(netdev->dev.parent,
+@@ -5333,14 +5338,6 @@ link_up:
+ 	adapter->gotc_old = adapter->stats.gotc;
+ 	spin_unlock(&adapter->stats64_lock);
+ 
+-	/* If the link is lost the controller stops DMA, but
+-	 * if there is queued Tx work it cannot be done.  So
+-	 * reset the controller to flush the Tx packet buffers.
+-	 */
+-	if (!netif_carrier_ok(netdev) &&
+-	    (e1000_desc_unused(tx_ring) + 1 < tx_ring->count))
+-		adapter->flags |= FLAG_RESTART_NOW;
+-
+ 	/* If reset is necessary, do it outside of interrupt context. */
+ 	if (adapter->flags & FLAG_RESTART_NOW) {
+ 		schedule_work(&adapter->reset_task);
+@@ -7351,6 +7348,8 @@ static int e1000_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	e1000_print_device_info(adapter);
+ 
++	dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NEVER_SKIP);
++
+ 	if (pci_dev_run_wake(pdev))
+ 		pm_runtime_put_noidle(&pdev->dev);
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c
+index 2e5693107fa4..8d602247eb44 100644
+--- a/drivers/net/ethernet/intel/ice/ice_switch.c
++++ b/drivers/net/ethernet/intel/ice/ice_switch.c
+@@ -1538,9 +1538,20 @@ ice_remove_rule_internal(struct ice_hw *hw, u8 recp_id,
+ 	} else if (!list_elem->vsi_list_info) {
+ 		status = ICE_ERR_DOES_NOT_EXIST;
+ 		goto exit;
++	} else if (list_elem->vsi_list_info->ref_cnt > 1) {
++		/* a ref_cnt > 1 indicates that the vsi_list is being
++		 * shared by multiple rules. Decrement the ref_cnt and
++		 * remove this rule, but do not modify the list, as it
++		 * is in-use by other rules.
++		 */
++		list_elem->vsi_list_info->ref_cnt--;
++		remove_rule = true;
+ 	} else {
+-		if (list_elem->vsi_list_info->ref_cnt > 1)
+-			list_elem->vsi_list_info->ref_cnt--;
++		/* a ref_cnt of 1 indicates the vsi_list is only used
++		 * by one rule. However, the original removal request is only
++		 * for a single VSI. Update the vsi_list first, and only
++		 * remove the rule if there are no further VSIs in this list.
++		 */
+ 		vsi_handle = f_entry->fltr_info.vsi_handle;
+ 		status = ice_rem_update_vsi_list(hw, vsi_handle, list_elem);
+ 		if (status)
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index 16066c2d5b3a..931beac3359d 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -1380,13 +1380,9 @@ static void mvpp2_port_reset(struct mvpp2_port *port)
+ 	for (i = 0; i < ARRAY_SIZE(mvpp2_ethtool_regs); i++)
+ 		mvpp2_read_count(port, &mvpp2_ethtool_regs[i]);
+ 
+-	val = readl(port->base + MVPP2_GMAC_CTRL_2_REG) &
+-		    ~MVPP2_GMAC_PORT_RESET_MASK;
++	val = readl(port->base + MVPP2_GMAC_CTRL_2_REG) |
++	      MVPP2_GMAC_PORT_RESET_MASK;
+ 	writel(val, port->base + MVPP2_GMAC_CTRL_2_REG);
+-
+-	while (readl(port->base + MVPP2_GMAC_CTRL_2_REG) &
+-	       MVPP2_GMAC_PORT_RESET_MASK)
+-		continue;
+ }
+ 
+ /* Change maximum receive size of the port */
+@@ -4543,12 +4539,15 @@ static void mvpp2_gmac_config(struct mvpp2_port *port, unsigned int mode,
+ 			      const struct phylink_link_state *state)
+ {
+ 	u32 an, ctrl0, ctrl2, ctrl4;
++	u32 old_ctrl2;
+ 
+ 	an = readl(port->base + MVPP2_GMAC_AUTONEG_CONFIG);
+ 	ctrl0 = readl(port->base + MVPP2_GMAC_CTRL_0_REG);
+ 	ctrl2 = readl(port->base + MVPP2_GMAC_CTRL_2_REG);
+ 	ctrl4 = readl(port->base + MVPP22_GMAC_CTRL_4_REG);
+ 
++	old_ctrl2 = ctrl2;
++
+ 	/* Force link down */
+ 	an &= ~MVPP2_GMAC_FORCE_LINK_PASS;
+ 	an |= MVPP2_GMAC_FORCE_LINK_DOWN;
+@@ -4621,6 +4620,12 @@ static void mvpp2_gmac_config(struct mvpp2_port *port, unsigned int mode,
+ 	writel(ctrl2, port->base + MVPP2_GMAC_CTRL_2_REG);
+ 	writel(ctrl4, port->base + MVPP22_GMAC_CTRL_4_REG);
+ 	writel(an, port->base + MVPP2_GMAC_AUTONEG_CONFIG);
++
++	if (old_ctrl2 & MVPP2_GMAC_PORT_RESET_MASK) {
++		while (readl(port->base + MVPP2_GMAC_CTRL_2_REG) &
++		       MVPP2_GMAC_PORT_RESET_MASK)
++			continue;
++	}
+ }
+ 
+ static void mvpp2_mac_config(struct net_device *dev, unsigned int mode,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+index 47233b9a4f81..e6099f51d25f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+@@ -357,6 +357,9 @@ int mlx5e_ethtool_set_channels(struct mlx5e_priv *priv,
+ 
+ 	if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) {
+ 		priv->channels.params = new_channels.params;
++		if (!netif_is_rxfh_configured(priv->netdev))
++			mlx5e_build_default_indir_rqt(priv->rss_params.indirection_rqt,
++						      MLX5E_INDIR_RQT_SIZE, count);
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+index 5b492b67f4e1..13c48883ed61 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+@@ -1812,7 +1812,7 @@ int mlx5_eswitch_set_vport_mac(struct mlx5_eswitch *esw,
+ 	u64 node_guid;
+ 	int err = 0;
+ 
+-	if (!MLX5_CAP_GEN(esw->dev, vport_group_manager))
++	if (!esw || !MLX5_CAP_GEN(esw->dev, vport_group_manager))
+ 		return -EPERM;
+ 	if (!LEGAL_VPORT(esw, vport) || is_multicast_ether_addr(mac))
+ 		return -EINVAL;
+@@ -1886,7 +1886,7 @@ int mlx5_eswitch_get_vport_config(struct mlx5_eswitch *esw,
+ {
+ 	struct mlx5_vport *evport;
+ 
+-	if (!MLX5_CAP_GEN(esw->dev, vport_group_manager))
++	if (!esw || !MLX5_CAP_GEN(esw->dev, vport_group_manager))
+ 		return -EPERM;
+ 	if (!LEGAL_VPORT(esw, vport))
+ 		return -EINVAL;
+@@ -2059,19 +2059,24 @@ static int normalize_vports_min_rate(struct mlx5_eswitch *esw, u32 divider)
+ int mlx5_eswitch_set_vport_rate(struct mlx5_eswitch *esw, int vport,
+ 				u32 max_rate, u32 min_rate)
+ {
+-	u32 fw_max_bw_share = MLX5_CAP_QOS(esw->dev, max_tsar_bw_share);
+-	bool min_rate_supported = MLX5_CAP_QOS(esw->dev, esw_bw_share) &&
+-					fw_max_bw_share >= MLX5_MIN_BW_SHARE;
+-	bool max_rate_supported = MLX5_CAP_QOS(esw->dev, esw_rate_limit);
+ 	struct mlx5_vport *evport;
++	u32 fw_max_bw_share;
+ 	u32 previous_min_rate;
+ 	u32 divider;
++	bool min_rate_supported;
++	bool max_rate_supported;
+ 	int err = 0;
+ 
+ 	if (!ESW_ALLOWED(esw))
+ 		return -EPERM;
+ 	if (!LEGAL_VPORT(esw, vport))
+ 		return -EINVAL;
++
++	fw_max_bw_share = MLX5_CAP_QOS(esw->dev, max_tsar_bw_share);
++	min_rate_supported = MLX5_CAP_QOS(esw->dev, esw_bw_share) &&
++				fw_max_bw_share >= MLX5_MIN_BW_SHARE;
++	max_rate_supported = MLX5_CAP_QOS(esw->dev, esw_rate_limit);
++
+ 	if ((min_rate && !min_rate_supported) || (max_rate && !max_rate_supported))
+ 		return -EOPNOTSUPP;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+index b65e274b02e9..cbdee5164be7 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+@@ -2105,7 +2105,7 @@ static void mlxsw_sp_port_get_prio_strings(u8 **p, int prio)
+ 	int i;
+ 
+ 	for (i = 0; i < MLXSW_SP_PORT_HW_PRIO_STATS_LEN; i++) {
+-		snprintf(*p, ETH_GSTRING_LEN, "%s_%d",
++		snprintf(*p, ETH_GSTRING_LEN, "%.29s_%.1d",
+ 			 mlxsw_sp_port_hw_prio_stats[i].str, prio);
+ 		*p += ETH_GSTRING_LEN;
+ 	}
+@@ -2116,7 +2116,7 @@ static void mlxsw_sp_port_get_tc_strings(u8 **p, int tc)
+ 	int i;
+ 
+ 	for (i = 0; i < MLXSW_SP_PORT_HW_TC_STATS_LEN; i++) {
+-		snprintf(*p, ETH_GSTRING_LEN, "%s_%d",
++		snprintf(*p, ETH_GSTRING_LEN, "%.29s_%.1d",
+ 			 mlxsw_sp_port_hw_tc_stats[i].str, tc);
+ 		*p += ETH_GSTRING_LEN;
+ 	}
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 685d20472358..019ab99e65bb 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -474,7 +474,7 @@ static void stmmac_get_tx_hwtstamp(struct stmmac_priv *priv,
+ 				   struct dma_desc *p, struct sk_buff *skb)
+ {
+ 	struct skb_shared_hwtstamps shhwtstamp;
+-	u64 ns;
++	u64 ns = 0;
+ 
+ 	if (!priv->hwts_tx_en)
+ 		return;
+@@ -513,7 +513,7 @@ static void stmmac_get_rx_hwtstamp(struct stmmac_priv *priv, struct dma_desc *p,
+ {
+ 	struct skb_shared_hwtstamps *shhwtstamp = NULL;
+ 	struct dma_desc *desc = p;
+-	u64 ns;
++	u64 ns = 0;
+ 
+ 	if (!priv->hwts_rx_en)
+ 		return;
+@@ -558,8 +558,8 @@ static int stmmac_hwtstamp_ioctl(struct net_device *dev, struct ifreq *ifr)
+ 	u32 snap_type_sel = 0;
+ 	u32 ts_master_en = 0;
+ 	u32 ts_event_en = 0;
++	u32 sec_inc = 0;
+ 	u32 value = 0;
+-	u32 sec_inc;
+ 	bool xmac;
+ 
+ 	xmac = priv->plat->has_gmac4 || priv->plat->has_xgmac;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
+index 2293e21f789f..cc60b3fb0892 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
+@@ -105,7 +105,7 @@ static int stmmac_get_time(struct ptp_clock_info *ptp, struct timespec64 *ts)
+ 	struct stmmac_priv *priv =
+ 	    container_of(ptp, struct stmmac_priv, ptp_clock_ops);
+ 	unsigned long flags;
+-	u64 ns;
++	u64 ns = 0;
+ 
+ 	spin_lock_irqsave(&priv->ptp_lock, flags);
+ 	stmmac_get_systime(priv, priv->ptpaddr, &ns);
+diff --git a/drivers/net/phy/phy-c45.c b/drivers/net/phy/phy-c45.c
+index 03af927fa5ad..e39bf0428dd9 100644
+--- a/drivers/net/phy/phy-c45.c
++++ b/drivers/net/phy/phy-c45.c
+@@ -147,9 +147,15 @@ int genphy_c45_read_link(struct phy_device *phydev, u32 mmd_mask)
+ 		mmd_mask &= ~BIT(devad);
+ 
+ 		/* The link state is latched low so that momentary link
+-		 * drops can be detected.  Do not double-read the status
+-		 * register if the link is down.
++		 * drops can be detected. Do not double-read the status
++		 * in polling mode to detect such short link drops.
+ 		 */
++		if (!phy_polling_mode(phydev)) {
++			val = phy_read_mmd(phydev, devad, MDIO_STAT1);
++			if (val < 0)
++				return val;
++		}
++
+ 		val = phy_read_mmd(phydev, devad, MDIO_STAT1);
+ 		if (val < 0)
+ 			return val;
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index 739434fe04fa..adf79614c2db 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -1683,10 +1683,15 @@ int genphy_update_link(struct phy_device *phydev)
+ {
+ 	int status;
+ 
+-	/* Do a fake read */
+-	status = phy_read(phydev, MII_BMSR);
+-	if (status < 0)
+-		return status;
++	/* The link state is latched low so that momentary link
++	 * drops can be detected. Do not double-read the status
++	 * in polling mode to detect such short link drops.
++	 */
++	if (!phy_polling_mode(phydev)) {
++		status = phy_read(phydev, MII_BMSR);
++		if (status < 0)
++			return status;
++	}
+ 
+ 	/* Read link and autonegotiation status */
+ 	status = phy_read(phydev, MII_BMSR);
+diff --git a/drivers/net/veth.c b/drivers/net/veth.c
+index f412ea1cef18..b203d1867959 100644
+--- a/drivers/net/veth.c
++++ b/drivers/net/veth.c
+@@ -115,7 +115,8 @@ static void veth_get_strings(struct net_device *dev, u32 stringset, u8 *buf)
+ 		p += sizeof(ethtool_stats_keys);
+ 		for (i = 0; i < dev->real_num_rx_queues; i++) {
+ 			for (j = 0; j < VETH_RQ_STATS_LEN; j++) {
+-				snprintf(p, ETH_GSTRING_LEN, "rx_queue_%u_%s",
++				snprintf(p, ETH_GSTRING_LEN,
++					 "rx_queue_%u_%.11s",
+ 					 i, veth_rq_stats_desc[j].desc);
+ 				p += ETH_GSTRING_LEN;
+ 			}
+diff --git a/drivers/net/wireless/ath/ath10k/ce.c b/drivers/net/wireless/ath/ath10k/ce.c
+index 2a5668b4f6bc..1a1ea4bbf8a0 100644
+--- a/drivers/net/wireless/ath/ath10k/ce.c
++++ b/drivers/net/wireless/ath/ath10k/ce.c
+@@ -500,14 +500,8 @@ static int _ath10k_ce_send_nolock(struct ath10k_ce_pipe *ce_state,
+ 	write_index = CE_RING_IDX_INCR(nentries_mask, write_index);
+ 
+ 	/* WORKAROUND */
+-	if (!(flags & CE_SEND_FLAG_GATHER)) {
+-		if (ar->hw_params.shadow_reg_support)
+-			ath10k_ce_shadow_src_ring_write_index_set(ar, ce_state,
+-								  write_index);
+-		else
+-			ath10k_ce_src_ring_write_index_set(ar, ctrl_addr,
+-							   write_index);
+-	}
++	if (!(flags & CE_SEND_FLAG_GATHER))
++		ath10k_ce_src_ring_write_index_set(ar, ctrl_addr, write_index);
+ 
+ 	src_ring->write_index = write_index;
+ exit:
+@@ -581,8 +575,14 @@ static int _ath10k_ce_send_nolock_64(struct ath10k_ce_pipe *ce_state,
+ 	/* Update Source Ring Write Index */
+ 	write_index = CE_RING_IDX_INCR(nentries_mask, write_index);
+ 
+-	if (!(flags & CE_SEND_FLAG_GATHER))
+-		ath10k_ce_src_ring_write_index_set(ar, ctrl_addr, write_index);
++	if (!(flags & CE_SEND_FLAG_GATHER)) {
++		if (ar->hw_params.shadow_reg_support)
++			ath10k_ce_shadow_src_ring_write_index_set(ar, ce_state,
++								  write_index);
++		else
++			ath10k_ce_src_ring_write_index_set(ar, ctrl_addr,
++							   write_index);
++	}
+ 
+ 	src_ring->write_index = write_index;
+ exit:
+@@ -1404,12 +1404,12 @@ static int ath10k_ce_alloc_shadow_base(struct ath10k *ar,
+ 				       u32 nentries)
+ {
+ 	src_ring->shadow_base_unaligned = kcalloc(nentries,
+-						  sizeof(struct ce_desc),
++						  sizeof(struct ce_desc_64),
+ 						  GFP_KERNEL);
+ 	if (!src_ring->shadow_base_unaligned)
+ 		return -ENOMEM;
+ 
+-	src_ring->shadow_base = (struct ce_desc *)
++	src_ring->shadow_base = (struct ce_desc_64 *)
+ 			PTR_ALIGN(src_ring->shadow_base_unaligned,
+ 				  CE_DESC_RING_ALIGN);
+ 	return 0;
+@@ -1461,7 +1461,7 @@ ath10k_ce_alloc_src_ring(struct ath10k *ar, unsigned int ce_id,
+ 		ret = ath10k_ce_alloc_shadow_base(ar, src_ring, nentries);
+ 		if (ret) {
+ 			dma_free_coherent(ar->dev,
+-					  (nentries * sizeof(struct ce_desc) +
++					  (nentries * sizeof(struct ce_desc_64) +
+ 					   CE_DESC_RING_ALIGN),
+ 					  src_ring->base_addr_owner_space_unaligned,
+ 					  base_addr);
+diff --git a/drivers/net/wireless/ath/ath10k/ce.h b/drivers/net/wireless/ath/ath10k/ce.h
+index ead9987c3259..463e2fc8b501 100644
+--- a/drivers/net/wireless/ath/ath10k/ce.h
++++ b/drivers/net/wireless/ath/ath10k/ce.h
+@@ -118,7 +118,7 @@ struct ath10k_ce_ring {
+ 	u32 base_addr_ce_space;
+ 
+ 	char *shadow_base_unaligned;
+-	struct ce_desc *shadow_base;
++	struct ce_desc_64 *shadow_base;
+ 
+ 	/* keep last */
+ 	void *per_transfer_context[0];
+diff --git a/drivers/net/wireless/ath/ath10k/debugfs_sta.c b/drivers/net/wireless/ath/ath10k/debugfs_sta.c
+index 4778a455d81a..068f1a7e07d3 100644
+--- a/drivers/net/wireless/ath/ath10k/debugfs_sta.c
++++ b/drivers/net/wireless/ath/ath10k/debugfs_sta.c
+@@ -696,11 +696,12 @@ static ssize_t ath10k_dbg_sta_dump_tx_stats(struct file *file,
+ 						 "  %llu ", stats->ht[j][i]);
+ 			len += scnprintf(buf + len, size - len, "\n");
+ 			len += scnprintf(buf + len, size - len,
+-					" BW %s (20,40,80,160 MHz)\n", str[j]);
++					" BW %s (20,5,10,40,80,160 MHz)\n", str[j]);
+ 			len += scnprintf(buf + len, size - len,
+-					 "  %llu %llu %llu %llu\n",
++					 "  %llu %llu %llu %llu %llu %llu\n",
+ 					 stats->bw[j][0], stats->bw[j][1],
+-					 stats->bw[j][2], stats->bw[j][3]);
++					 stats->bw[j][2], stats->bw[j][3],
++					 stats->bw[j][4], stats->bw[j][5]);
+ 			len += scnprintf(buf + len, size - len,
+ 					 " NSS %s (1x1,2x2,3x3,4x4)\n", str[j]);
+ 			len += scnprintf(buf + len, size - len,
+diff --git a/drivers/net/wireless/ath/ath10k/htt_rx.c b/drivers/net/wireless/ath/ath10k/htt_rx.c
+index f42bac204ef8..ecf34ce7acf0 100644
+--- a/drivers/net/wireless/ath/ath10k/htt_rx.c
++++ b/drivers/net/wireless/ath/ath10k/htt_rx.c
+@@ -2130,9 +2130,15 @@ static bool ath10k_htt_rx_proc_rx_ind_hl(struct ath10k_htt *htt,
+ 	hdr = (struct ieee80211_hdr *)skb->data;
+ 	rx_status = IEEE80211_SKB_RXCB(skb);
+ 	rx_status->chains |= BIT(0);
+-	rx_status->signal = ATH10K_DEFAULT_NOISE_FLOOR +
+-			    rx->ppdu.combined_rssi;
+-	rx_status->flag &= ~RX_FLAG_NO_SIGNAL_VAL;
++	if (rx->ppdu.combined_rssi == 0) {
++		/* SDIO firmware does not provide signal */
++		rx_status->signal = 0;
++		rx_status->flag |= RX_FLAG_NO_SIGNAL_VAL;
++	} else {
++		rx_status->signal = ATH10K_DEFAULT_NOISE_FLOOR +
++			rx->ppdu.combined_rssi;
++		rx_status->flag &= ~RX_FLAG_NO_SIGNAL_VAL;
++	}
+ 
+ 	spin_lock_bh(&ar->data_lock);
+ 	ch = ar->scan_channel;
+diff --git a/drivers/net/wireless/ath/ath10k/wmi.h b/drivers/net/wireless/ath/ath10k/wmi.h
+index 2034ccc7cc72..1d5d0209ebeb 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi.h
++++ b/drivers/net/wireless/ath/ath10k/wmi.h
+@@ -5003,7 +5003,7 @@ enum wmi_rate_preamble {
+ #define ATH10K_FW_SKIPPED_RATE_CTRL(flags)	(((flags) >> 6) & 0x1)
+ 
+ #define ATH10K_VHT_MCS_NUM	10
+-#define ATH10K_BW_NUM		4
++#define ATH10K_BW_NUM		6
+ #define ATH10K_NSS_NUM		4
+ #define ATH10K_LEGACY_NUM	12
+ #define ATH10K_GI_NUM		2
+diff --git a/drivers/net/wireless/ath/wil6210/cfg80211.c b/drivers/net/wireless/ath/wil6210/cfg80211.c
+index 9b2f9f543952..5a44f9d0ff02 100644
+--- a/drivers/net/wireless/ath/wil6210/cfg80211.c
++++ b/drivers/net/wireless/ath/wil6210/cfg80211.c
+@@ -1580,6 +1580,12 @@ static int _wil_cfg80211_merge_extra_ies(const u8 *ies1, u16 ies1_len,
+ 	u8 *buf, *dpos;
+ 	const u8 *spos;
+ 
++	if (!ies1)
++		ies1_len = 0;
++
++	if (!ies2)
++		ies2_len = 0;
++
+ 	if (ies1_len == 0 && ies2_len == 0) {
+ 		*merged_ies = NULL;
+ 		*merged_len = 0;
+@@ -1589,17 +1595,19 @@ static int _wil_cfg80211_merge_extra_ies(const u8 *ies1, u16 ies1_len,
+ 	buf = kmalloc(ies1_len + ies2_len, GFP_KERNEL);
+ 	if (!buf)
+ 		return -ENOMEM;
+-	memcpy(buf, ies1, ies1_len);
++	if (ies1)
++		memcpy(buf, ies1, ies1_len);
+ 	dpos = buf + ies1_len;
+ 	spos = ies2;
+-	while (spos + 1 < ies2 + ies2_len) {
++	while (spos && (spos + 1 < ies2 + ies2_len)) {
+ 		/* IE tag at offset 0, length at offset 1 */
+ 		u16 ielen = 2 + spos[1];
+ 
+ 		if (spos + ielen > ies2 + ies2_len)
+ 			break;
+ 		if (spos[0] == WLAN_EID_VENDOR_SPECIFIC &&
+-		    !_wil_cfg80211_find_ie(ies1, ies1_len, spos, ielen)) {
++		    (!ies1 || !_wil_cfg80211_find_ie(ies1, ies1_len,
++						     spos, ielen))) {
+ 			memcpy(dpos, spos, ielen);
+ 			dpos += ielen;
+ 		}
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
+index 1f1e95a15a17..0ce1d8174e6d 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
+@@ -149,7 +149,7 @@ static int brcmf_c_process_clm_blob(struct brcmf_if *ifp)
+ 		return err;
+ 	}
+ 
+-	err = request_firmware(&clm, clm_name, bus->dev);
++	err = firmware_request_nowarn(&clm, clm_name, bus->dev);
+ 	if (err) {
+ 		brcmf_info("no clm_blob available (err=%d), device may have limited channels available\n",
+ 			   err);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+index 0d6c313b6669..19ec55cef802 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+@@ -127,13 +127,17 @@ static int iwl_send_rss_cfg_cmd(struct iwl_mvm *mvm)
+ 
+ static int iwl_configure_rxq(struct iwl_mvm *mvm)
+ {
+-	int i, num_queues, size;
++	int i, num_queues, size, ret;
+ 	struct iwl_rfh_queue_config *cmd;
++	struct iwl_host_cmd hcmd = {
++		.id = WIDE_ID(DATA_PATH_GROUP, RFH_QUEUE_CONFIG_CMD),
++		.dataflags[0] = IWL_HCMD_DFL_NOCOPY,
++	};
+ 
+ 	/* Do not configure default queue, it is configured via context info */
+ 	num_queues = mvm->trans->num_rx_queues - 1;
+ 
+-	size = sizeof(*cmd) + num_queues * sizeof(struct iwl_rfh_queue_data);
++	size = struct_size(cmd, data, num_queues);
+ 
+ 	cmd = kzalloc(size, GFP_KERNEL);
+ 	if (!cmd)
+@@ -154,10 +158,14 @@ static int iwl_configure_rxq(struct iwl_mvm *mvm)
+ 		cmd->data[i].fr_bd_wid = cpu_to_le32(data.fr_bd_wid);
+ 	}
+ 
+-	return iwl_mvm_send_cmd_pdu(mvm,
+-				    WIDE_ID(DATA_PATH_GROUP,
+-					    RFH_QUEUE_CONFIG_CMD),
+-				    0, size, cmd);
++	hcmd.data[0] = cmd;
++	hcmd.len[0] = size;
++
++	ret = iwl_mvm_send_cmd(mvm, &hcmd);
++
++	kfree(cmd);
++
++	return ret;
+ }
+ 
+ static int iwl_mvm_send_dqa_cmd(struct iwl_mvm *mvm)
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+index 9e850c25877b..c596c7b13504 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+@@ -499,7 +499,7 @@ static void iwl_pcie_rx_allocator(struct iwl_trans *trans)
+ 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+ 	struct iwl_rb_allocator *rba = &trans_pcie->rba;
+ 	struct list_head local_empty;
+-	int pending = atomic_xchg(&rba->req_pending, 0);
++	int pending = atomic_read(&rba->req_pending);
+ 
+ 	IWL_DEBUG_RX(trans, "Pending allocation requests = %d\n", pending);
+ 
+@@ -554,11 +554,13 @@ static void iwl_pcie_rx_allocator(struct iwl_trans *trans)
+ 			i++;
+ 		}
+ 
++		atomic_dec(&rba->req_pending);
+ 		pending--;
++
+ 		if (!pending) {
+-			pending = atomic_xchg(&rba->req_pending, 0);
++			pending = atomic_read(&rba->req_pending);
+ 			IWL_DEBUG_RX(trans,
+-				     "Pending allocation requests = %d\n",
++				     "Got more pending allocation requests = %d\n",
+ 				     pending);
+ 		}
+ 
+@@ -570,12 +572,15 @@ static void iwl_pcie_rx_allocator(struct iwl_trans *trans)
+ 		spin_unlock(&rba->lock);
+ 
+ 		atomic_inc(&rba->req_ready);
++
+ 	}
+ 
+ 	spin_lock(&rba->lock);
+ 	/* return unused rbds to the allocator empty list */
+ 	list_splice_tail(&local_empty, &rba->rbd_empty);
+ 	spin_unlock(&rba->lock);
++
++	IWL_DEBUG_RX(trans, "%s, exit.\n", __func__);
+ }
+ 
+ /*
+diff --git a/drivers/net/wireless/marvell/mwifiex/cfg80211.c b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+index 1467af22e394..883752f640b4 100644
+--- a/drivers/net/wireless/marvell/mwifiex/cfg80211.c
++++ b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+@@ -4310,11 +4310,13 @@ int mwifiex_register_cfg80211(struct mwifiex_adapter *adapter)
+ 	wiphy->mgmt_stypes = mwifiex_mgmt_stypes;
+ 	wiphy->max_remain_on_channel_duration = 5000;
+ 	wiphy->interface_modes = BIT(NL80211_IFTYPE_STATION) |
+-				 BIT(NL80211_IFTYPE_ADHOC) |
+ 				 BIT(NL80211_IFTYPE_P2P_CLIENT) |
+ 				 BIT(NL80211_IFTYPE_P2P_GO) |
+ 				 BIT(NL80211_IFTYPE_AP);
+ 
++	if (ISSUPP_ADHOC_ENABLED(adapter->fw_cap_info))
++		wiphy->interface_modes |= BIT(NL80211_IFTYPE_ADHOC);
++
+ 	wiphy->bands[NL80211_BAND_2GHZ] = &mwifiex_band_2ghz;
+ 	if (adapter->config_bands & BAND_A)
+ 		wiphy->bands[NL80211_BAND_5GHZ] = &mwifiex_band_5ghz;
+@@ -4374,11 +4376,13 @@ int mwifiex_register_cfg80211(struct mwifiex_adapter *adapter)
+ 	wiphy->available_antennas_tx = BIT(adapter->number_of_antenna) - 1;
+ 	wiphy->available_antennas_rx = BIT(adapter->number_of_antenna) - 1;
+ 
+-	wiphy->features |= NL80211_FEATURE_HT_IBSS |
+-			   NL80211_FEATURE_INACTIVITY_TIMER |
++	wiphy->features |= NL80211_FEATURE_INACTIVITY_TIMER |
+ 			   NL80211_FEATURE_LOW_PRIORITY_SCAN |
+ 			   NL80211_FEATURE_NEED_OBSS_SCAN;
+ 
++	if (ISSUPP_ADHOC_ENABLED(adapter->fw_cap_info))
++		wiphy->features |= NL80211_FEATURE_HT_IBSS;
++
+ 	if (ISSUPP_RANDOM_MAC(adapter->fw_cap_info))
+ 		wiphy->features |= NL80211_FEATURE_SCAN_RANDOM_MAC_ADDR |
+ 				   NL80211_FEATURE_SCHED_SCAN_RANDOM_MAC_ADDR |
+diff --git a/drivers/net/wireless/mediatek/mt76/eeprom.c b/drivers/net/wireless/mediatek/mt76/eeprom.c
+index 530e5593765c..a1529920d877 100644
+--- a/drivers/net/wireless/mediatek/mt76/eeprom.c
++++ b/drivers/net/wireless/mediatek/mt76/eeprom.c
+@@ -54,22 +54,30 @@ mt76_get_of_eeprom(struct mt76_dev *dev, int len)
+ 		part = np->name;
+ 
+ 	mtd = get_mtd_device_nm(part);
+-	if (IS_ERR(mtd))
+-		return PTR_ERR(mtd);
++	if (IS_ERR(mtd)) {
++		ret =  PTR_ERR(mtd);
++		goto out_put_node;
++	}
+ 
+-	if (size <= sizeof(*list))
+-		return -EINVAL;
++	if (size <= sizeof(*list)) {
++		ret = -EINVAL;
++		goto out_put_node;
++	}
+ 
+ 	offset = be32_to_cpup(list);
+ 	ret = mtd_read(mtd, offset, len, &retlen, dev->eeprom.data);
+ 	put_mtd_device(mtd);
+ 	if (ret)
+-		return ret;
++		goto out_put_node;
+ 
+-	if (retlen < len)
+-		return -EINVAL;
++	if (retlen < len) {
++		ret = -EINVAL;
++		goto out_put_node;
++	}
+ 
+-	return 0;
++out_put_node:
++	of_node_put(np);
++	return ret;
+ #else
+ 	return -ENOENT;
+ #endif
+diff --git a/drivers/net/wireless/mediatek/mt76/usb.c b/drivers/net/wireless/mediatek/mt76/usb.c
+index 09923cedd039..61cde0f9f58f 100644
+--- a/drivers/net/wireless/mediatek/mt76/usb.c
++++ b/drivers/net/wireless/mediatek/mt76/usb.c
+@@ -837,16 +837,9 @@ int mt76u_alloc_queues(struct mt76_dev *dev)
+ 
+ 	err = mt76u_alloc_rx(dev);
+ 	if (err < 0)
+-		goto err;
+-
+-	err = mt76u_alloc_tx(dev);
+-	if (err < 0)
+-		goto err;
++		return err;
+ 
+-	return 0;
+-err:
+-	mt76u_queues_deinit(dev);
+-	return err;
++	return mt76u_alloc_tx(dev);
+ }
+ EXPORT_SYMBOL_GPL(mt76u_alloc_queues);
+ 
+diff --git a/drivers/net/wireless/mediatek/mt7601u/eeprom.h b/drivers/net/wireless/mediatek/mt7601u/eeprom.h
+index 662d12703b69..57b503ae63f1 100644
+--- a/drivers/net/wireless/mediatek/mt7601u/eeprom.h
++++ b/drivers/net/wireless/mediatek/mt7601u/eeprom.h
+@@ -17,7 +17,7 @@
+ 
+ struct mt7601u_dev;
+ 
+-#define MT7601U_EE_MAX_VER			0x0c
++#define MT7601U_EE_MAX_VER			0x0d
+ #define MT7601U_EEPROM_SIZE			256
+ 
+ #define MT7601U_DEFAULT_TX_POWER		6
+diff --git a/drivers/net/wireless/ti/wlcore/main.c b/drivers/net/wireless/ti/wlcore/main.c
+index 26b187336875..2e12de813a5b 100644
+--- a/drivers/net/wireless/ti/wlcore/main.c
++++ b/drivers/net/wireless/ti/wlcore/main.c
+@@ -1085,8 +1085,11 @@ static int wl12xx_chip_wakeup(struct wl1271 *wl, bool plt)
+ 		goto out;
+ 
+ 	ret = wl12xx_fetch_firmware(wl, plt);
+-	if (ret < 0)
+-		goto out;
++	if (ret < 0) {
++		kfree(wl->fw_status);
++		kfree(wl->raw_fw_status);
++		kfree(wl->tx_res_if);
++	}
+ 
+ out:
+ 	return ret;
+diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
+index 89accc76d71c..c37d5bbd72ab 100644
+--- a/drivers/nvme/host/fc.c
++++ b/drivers/nvme/host/fc.c
+@@ -3018,7 +3018,10 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts,
+ 
+ 	ctrl->ctrl.opts = opts;
+ 	ctrl->ctrl.nr_reconnects = 0;
+-	ctrl->ctrl.numa_node = dev_to_node(lport->dev);
++	if (lport->dev)
++		ctrl->ctrl.numa_node = dev_to_node(lport->dev);
++	else
++		ctrl->ctrl.numa_node = NUMA_NO_NODE;
+ 	INIT_LIST_HEAD(&ctrl->ctrl_list);
+ 	ctrl->lport = lport;
+ 	ctrl->rport = rport;
+diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
+index 88d260f31835..02c63c463222 100644
+--- a/drivers/nvme/target/core.c
++++ b/drivers/nvme/target/core.c
+@@ -1171,6 +1171,15 @@ static void nvmet_release_p2p_ns_map(struct nvmet_ctrl *ctrl)
+ 	put_device(ctrl->p2p_client);
+ }
+ 
++static void nvmet_fatal_error_handler(struct work_struct *work)
++{
++	struct nvmet_ctrl *ctrl =
++			container_of(work, struct nvmet_ctrl, fatal_err_work);
++
++	pr_err("ctrl %d fatal error occurred!\n", ctrl->cntlid);
++	ctrl->ops->delete_ctrl(ctrl);
++}
++
+ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
+ 		struct nvmet_req *req, u32 kato, struct nvmet_ctrl **ctrlp)
+ {
+@@ -1213,6 +1222,7 @@ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
+ 	INIT_WORK(&ctrl->async_event_work, nvmet_async_event_work);
+ 	INIT_LIST_HEAD(&ctrl->async_events);
+ 	INIT_RADIX_TREE(&ctrl->p2p_ns_map, GFP_KERNEL);
++	INIT_WORK(&ctrl->fatal_err_work, nvmet_fatal_error_handler);
+ 
+ 	memcpy(ctrl->subsysnqn, subsysnqn, NVMF_NQN_SIZE);
+ 	memcpy(ctrl->hostnqn, hostnqn, NVMF_NQN_SIZE);
+@@ -1316,21 +1326,11 @@ void nvmet_ctrl_put(struct nvmet_ctrl *ctrl)
+ 	kref_put(&ctrl->ref, nvmet_ctrl_free);
+ }
+ 
+-static void nvmet_fatal_error_handler(struct work_struct *work)
+-{
+-	struct nvmet_ctrl *ctrl =
+-			container_of(work, struct nvmet_ctrl, fatal_err_work);
+-
+-	pr_err("ctrl %d fatal error occurred!\n", ctrl->cntlid);
+-	ctrl->ops->delete_ctrl(ctrl);
+-}
+-
+ void nvmet_ctrl_fatal_error(struct nvmet_ctrl *ctrl)
+ {
+ 	mutex_lock(&ctrl->lock);
+ 	if (!(ctrl->csts & NVME_CSTS_CFS)) {
+ 		ctrl->csts |= NVME_CSTS_CFS;
+-		INIT_WORK(&ctrl->fatal_err_work, nvmet_fatal_error_handler);
+ 		schedule_work(&ctrl->fatal_err_work);
+ 	}
+ 	mutex_unlock(&ctrl->lock);
+diff --git a/drivers/pci/controller/pcie-mediatek.c b/drivers/pci/controller/pcie-mediatek.c
+index 55e471c18e8d..c42fe5c4319f 100644
+--- a/drivers/pci/controller/pcie-mediatek.c
++++ b/drivers/pci/controller/pcie-mediatek.c
+@@ -654,7 +654,6 @@ static int mtk_pcie_startup_port_v2(struct mtk_pcie_port *port)
+ 	struct resource *mem = &pcie->mem;
+ 	const struct mtk_pcie_soc *soc = port->pcie->soc;
+ 	u32 val;
+-	size_t size;
+ 	int err;
+ 
+ 	/* MT7622 platforms need to enable LTSSM and ASPM from PCIe subsys */
+@@ -706,8 +705,8 @@ static int mtk_pcie_startup_port_v2(struct mtk_pcie_port *port)
+ 		mtk_pcie_enable_msi(port);
+ 
+ 	/* Set AHB to PCIe translation windows */
+-	size = mem->end - mem->start;
+-	val = lower_32_bits(mem->start) | AHB2PCIE_SIZE(fls(size));
++	val = lower_32_bits(mem->start) |
++	      AHB2PCIE_SIZE(fls(resource_size(mem)));
+ 	writel(val, port->base + PCIE_AHB_TRANS_BASE0_L);
+ 
+ 	val = upper_32_bits(mem->start);
+diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
+index c0fb64ace05a..8bfcb8cd0900 100644
+--- a/drivers/pci/hotplug/pciehp_hpc.c
++++ b/drivers/pci/hotplug/pciehp_hpc.c
+@@ -156,9 +156,9 @@ static void pcie_do_write_cmd(struct controller *ctrl, u16 cmd,
+ 	slot_ctrl |= (cmd & mask);
+ 	ctrl->cmd_busy = 1;
+ 	smp_mb();
++	ctrl->slot_ctrl = slot_ctrl;
+ 	pcie_capability_write_word(pdev, PCI_EXP_SLTCTL, slot_ctrl);
+ 	ctrl->cmd_started = jiffies;
+-	ctrl->slot_ctrl = slot_ctrl;
+ 
+ 	/*
+ 	 * Controllers with the Intel CF118 and similar errata advertise
+diff --git a/drivers/pci/pcie/pme.c b/drivers/pci/pcie/pme.c
+index 1a8b85051b1b..efa5b552914b 100644
+--- a/drivers/pci/pcie/pme.c
++++ b/drivers/pci/pcie/pme.c
+@@ -363,6 +363,16 @@ static bool pcie_pme_check_wakeup(struct pci_bus *bus)
+ 	return false;
+ }
+ 
++static void pcie_pme_disable_interrupt(struct pci_dev *port,
++				       struct pcie_pme_service_data *data)
++{
++	spin_lock_irq(&data->lock);
++	pcie_pme_interrupt_enable(port, false);
++	pcie_clear_root_pme_status(port);
++	data->noirq = true;
++	spin_unlock_irq(&data->lock);
++}
++
+ /**
+  * pcie_pme_suspend - Suspend PCIe PME service device.
+  * @srv: PCIe service device to suspend.
+@@ -387,11 +397,7 @@ static int pcie_pme_suspend(struct pcie_device *srv)
+ 			return 0;
+ 	}
+ 
+-	spin_lock_irq(&data->lock);
+-	pcie_pme_interrupt_enable(port, false);
+-	pcie_clear_root_pme_status(port);
+-	data->noirq = true;
+-	spin_unlock_irq(&data->lock);
++	pcie_pme_disable_interrupt(port, data);
+ 
+ 	synchronize_irq(srv->irq);
+ 
+@@ -427,9 +433,11 @@ static int pcie_pme_resume(struct pcie_device *srv)
+  */
+ static void pcie_pme_remove(struct pcie_device *srv)
+ {
+-	pcie_pme_suspend(srv);
++	struct pcie_pme_service_data *data = get_service_data(srv);
++
++	pcie_pme_disable_interrupt(srv->port, data);
+ 	free_irq(srv->irq, srv);
+-	kfree(get_service_data(srv));
++	kfree(data);
+ }
+ 
+ static struct pcie_port_service_driver pcie_pme_driver = {
+diff --git a/drivers/perf/arm_spe_pmu.c b/drivers/perf/arm_spe_pmu.c
+index 8e46a9dad2fa..7cb766dafe85 100644
+--- a/drivers/perf/arm_spe_pmu.c
++++ b/drivers/perf/arm_spe_pmu.c
+@@ -824,10 +824,10 @@ static void arm_spe_pmu_read(struct perf_event *event)
+ {
+ }
+ 
+-static void *arm_spe_pmu_setup_aux(int cpu, void **pages, int nr_pages,
+-				   bool snapshot)
++static void *arm_spe_pmu_setup_aux(struct perf_event *event, void **pages,
++				   int nr_pages, bool snapshot)
+ {
+-	int i;
++	int i, cpu = event->cpu;
+ 	struct page **pglist;
+ 	struct arm_spe_pmu_buf *buf;
+ 
+diff --git a/drivers/pinctrl/meson/pinctrl-meson.c b/drivers/pinctrl/meson/pinctrl-meson.c
+index ea87d739f534..a4ae1ac5369e 100644
+--- a/drivers/pinctrl/meson/pinctrl-meson.c
++++ b/drivers/pinctrl/meson/pinctrl-meson.c
+@@ -31,6 +31,9 @@
+  * In some cases the register ranges for pull enable and pull
+  * direction are the same and thus there are only 3 register ranges.
+  *
++ * Since Meson G12A SoC, the ao register ranges for gpio, pull enable
++ * and pull direction are the same, so there are only 2 register ranges.
++ *
+  * For the pull and GPIO configuration every bank uses a contiguous
+  * set of bits in the register sets described above; the same register
+  * can be shared by more banks with different offsets.
+@@ -488,23 +491,22 @@ static int meson_pinctrl_parse_dt(struct meson_pinctrl *pc,
+ 		return PTR_ERR(pc->reg_mux);
+ 	}
+ 
+-	pc->reg_pull = meson_map_resource(pc, gpio_np, "pull");
+-	if (IS_ERR(pc->reg_pull)) {
+-		dev_err(pc->dev, "pull registers not found\n");
+-		return PTR_ERR(pc->reg_pull);
++	pc->reg_gpio = meson_map_resource(pc, gpio_np, "gpio");
++	if (IS_ERR(pc->reg_gpio)) {
++		dev_err(pc->dev, "gpio registers not found\n");
++		return PTR_ERR(pc->reg_gpio);
+ 	}
+ 
++	pc->reg_pull = meson_map_resource(pc, gpio_np, "pull");
++	/* Use gpio region if pull one is not present */
++	if (IS_ERR(pc->reg_pull))
++		pc->reg_pull = pc->reg_gpio;
++
+ 	pc->reg_pullen = meson_map_resource(pc, gpio_np, "pull-enable");
+ 	/* Use pull region if pull-enable one is not present */
+ 	if (IS_ERR(pc->reg_pullen))
+ 		pc->reg_pullen = pc->reg_pull;
+ 
+-	pc->reg_gpio = meson_map_resource(pc, gpio_np, "gpio");
+-	if (IS_ERR(pc->reg_gpio)) {
+-		dev_err(pc->dev, "gpio registers not found\n");
+-		return PTR_ERR(pc->reg_gpio);
+-	}
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/pinctrl/meson/pinctrl-meson8b.c b/drivers/pinctrl/meson/pinctrl-meson8b.c
+index 0f140a802137..7f76000cc12e 100644
+--- a/drivers/pinctrl/meson/pinctrl-meson8b.c
++++ b/drivers/pinctrl/meson/pinctrl-meson8b.c
+@@ -346,6 +346,8 @@ static const unsigned int eth_rx_dv_pins[]	= { DIF_1_P };
+ static const unsigned int eth_rx_clk_pins[]	= { DIF_1_N };
+ static const unsigned int eth_txd0_1_pins[]	= { DIF_2_P };
+ static const unsigned int eth_txd1_1_pins[]	= { DIF_2_N };
++static const unsigned int eth_rxd3_pins[]	= { DIF_2_P };
++static const unsigned int eth_rxd2_pins[]	= { DIF_2_N };
+ static const unsigned int eth_tx_en_pins[]	= { DIF_3_P };
+ static const unsigned int eth_ref_clk_pins[]	= { DIF_3_N };
+ static const unsigned int eth_mdc_pins[]	= { DIF_4_P };
+@@ -599,6 +601,8 @@ static struct meson_pmx_group meson8b_cbus_groups[] = {
+ 	GROUP(eth_ref_clk,	6,	8),
+ 	GROUP(eth_mdc,		6,	9),
+ 	GROUP(eth_mdio_en,	6,	10),
++	GROUP(eth_rxd3,		7,	22),
++	GROUP(eth_rxd2,		7,	23),
+ };
+ 
+ static struct meson_pmx_group meson8b_aobus_groups[] = {
+@@ -748,7 +752,7 @@ static const char * const ethernet_groups[] = {
+ 	"eth_tx_clk", "eth_tx_en", "eth_txd1_0", "eth_txd1_1",
+ 	"eth_txd0_0", "eth_txd0_1", "eth_rx_clk", "eth_rx_dv",
+ 	"eth_rxd1", "eth_rxd0", "eth_mdio_en", "eth_mdc", "eth_ref_clk",
+-	"eth_txd2", "eth_txd3"
++	"eth_txd2", "eth_txd3", "eth_rxd3", "eth_rxd2"
+ };
+ 
+ static const char * const i2c_a_groups[] = {
+diff --git a/drivers/pinctrl/sh-pfc/pfc-r8a77990.c b/drivers/pinctrl/sh-pfc/pfc-r8a77990.c
+index e40908dc37e0..1ce286f7b286 100644
+--- a/drivers/pinctrl/sh-pfc/pfc-r8a77990.c
++++ b/drivers/pinctrl/sh-pfc/pfc-r8a77990.c
+@@ -391,29 +391,33 @@ FM(IP12_23_20)	IP12_23_20	FM(IP13_23_20)	IP13_23_20	FM(IP14_23_20)	IP14_23_20	FM
+ FM(IP12_27_24)	IP12_27_24	FM(IP13_27_24)	IP13_27_24	FM(IP14_27_24)	IP14_27_24	FM(IP15_27_24)	IP15_27_24 \
+ FM(IP12_31_28)	IP12_31_28	FM(IP13_31_28)	IP13_31_28	FM(IP14_31_28)	IP14_31_28	FM(IP15_31_28)	IP15_31_28
+ 
++/* The bit numbering in MOD_SEL fields is reversed */
++#define REV4(f0, f1, f2, f3)			f0 f2 f1 f3
++#define REV8(f0, f1, f2, f3, f4, f5, f6, f7)	f0 f4 f2 f6 f1 f5 f3 f7
++
+ /* MOD_SEL0 */			/* 0 */				/* 1 */				/* 2 */				/* 3 */			/* 4 */			/* 5 */		/* 6 */		/* 7 */
+-#define MOD_SEL0_30_29		FM(SEL_ADGB_0)			FM(SEL_ADGB_1)			FM(SEL_ADGB_2)			F_(0, 0)
++#define MOD_SEL0_30_29	   REV4(FM(SEL_ADGB_0),			FM(SEL_ADGB_1),			FM(SEL_ADGB_2),			F_(0, 0))
+ #define MOD_SEL0_28		FM(SEL_DRIF0_0)			FM(SEL_DRIF0_1)
+-#define MOD_SEL0_27_26		FM(SEL_FM_0)			FM(SEL_FM_1)			FM(SEL_FM_2)			F_(0, 0)
++#define MOD_SEL0_27_26	   REV4(FM(SEL_FM_0),			FM(SEL_FM_1),			FM(SEL_FM_2),			F_(0, 0))
+ #define MOD_SEL0_25		FM(SEL_FSO_0)			FM(SEL_FSO_1)
+ #define MOD_SEL0_24		FM(SEL_HSCIF0_0)		FM(SEL_HSCIF0_1)
+ #define MOD_SEL0_23		FM(SEL_HSCIF1_0)		FM(SEL_HSCIF1_1)
+ #define MOD_SEL0_22		FM(SEL_HSCIF2_0)		FM(SEL_HSCIF2_1)
+-#define MOD_SEL0_21_20		FM(SEL_I2C1_0)			FM(SEL_I2C1_1)			FM(SEL_I2C1_2)			FM(SEL_I2C1_3)
+-#define MOD_SEL0_19_18_17	FM(SEL_I2C2_0)			FM(SEL_I2C2_1)			FM(SEL_I2C2_2)			FM(SEL_I2C2_3)		FM(SEL_I2C2_4)		F_(0, 0)	F_(0, 0)	F_(0, 0)
++#define MOD_SEL0_21_20	   REV4(FM(SEL_I2C1_0),			FM(SEL_I2C1_1),			FM(SEL_I2C1_2),			FM(SEL_I2C1_3))
++#define MOD_SEL0_19_18_17  REV8(FM(SEL_I2C2_0),			FM(SEL_I2C2_1),			FM(SEL_I2C2_2),			FM(SEL_I2C2_3),		FM(SEL_I2C2_4),		F_(0, 0),	F_(0, 0),	F_(0, 0))
+ #define MOD_SEL0_16		FM(SEL_NDFC_0)			FM(SEL_NDFC_1)
+ #define MOD_SEL0_15		FM(SEL_PWM0_0)			FM(SEL_PWM0_1)
+ #define MOD_SEL0_14		FM(SEL_PWM1_0)			FM(SEL_PWM1_1)
+-#define MOD_SEL0_13_12		FM(SEL_PWM2_0)			FM(SEL_PWM2_1)			FM(SEL_PWM2_2)			F_(0, 0)
+-#define MOD_SEL0_11_10		FM(SEL_PWM3_0)			FM(SEL_PWM3_1)			FM(SEL_PWM3_2)			F_(0, 0)
++#define MOD_SEL0_13_12	   REV4(FM(SEL_PWM2_0),			FM(SEL_PWM2_1),			FM(SEL_PWM2_2),			F_(0, 0))
++#define MOD_SEL0_11_10	   REV4(FM(SEL_PWM3_0),			FM(SEL_PWM3_1),			FM(SEL_PWM3_2),			F_(0, 0))
+ #define MOD_SEL0_9		FM(SEL_PWM4_0)			FM(SEL_PWM4_1)
+ #define MOD_SEL0_8		FM(SEL_PWM5_0)			FM(SEL_PWM5_1)
+ #define MOD_SEL0_7		FM(SEL_PWM6_0)			FM(SEL_PWM6_1)
+-#define MOD_SEL0_6_5		FM(SEL_REMOCON_0)		FM(SEL_REMOCON_1)		FM(SEL_REMOCON_2)		F_(0, 0)
++#define MOD_SEL0_6_5	   REV4(FM(SEL_REMOCON_0),		FM(SEL_REMOCON_1),		FM(SEL_REMOCON_2),		F_(0, 0))
+ #define MOD_SEL0_4		FM(SEL_SCIF_0)			FM(SEL_SCIF_1)
+ #define MOD_SEL0_3		FM(SEL_SCIF0_0)			FM(SEL_SCIF0_1)
+ #define MOD_SEL0_2		FM(SEL_SCIF2_0)			FM(SEL_SCIF2_1)
+-#define MOD_SEL0_1_0		FM(SEL_SPEED_PULSE_IF_0)	FM(SEL_SPEED_PULSE_IF_1)	FM(SEL_SPEED_PULSE_IF_2)	F_(0, 0)
++#define MOD_SEL0_1_0	   REV4(FM(SEL_SPEED_PULSE_IF_0),	FM(SEL_SPEED_PULSE_IF_1),	FM(SEL_SPEED_PULSE_IF_2),	F_(0, 0))
+ 
+ /* MOD_SEL1 */			/* 0 */				/* 1 */				/* 2 */				/* 3 */			/* 4 */			/* 5 */		/* 6 */		/* 7 */
+ #define MOD_SEL1_31		FM(SEL_SIMCARD_0)		FM(SEL_SIMCARD_1)
+@@ -422,18 +426,18 @@ FM(IP12_31_28)	IP12_31_28	FM(IP13_31_28)	IP13_31_28	FM(IP14_31_28)	IP14_31_28	FM
+ #define MOD_SEL1_28		FM(SEL_USB_20_CH0_0)		FM(SEL_USB_20_CH0_1)
+ #define MOD_SEL1_26		FM(SEL_DRIF2_0)			FM(SEL_DRIF2_1)
+ #define MOD_SEL1_25		FM(SEL_DRIF3_0)			FM(SEL_DRIF3_1)
+-#define MOD_SEL1_24_23_22	FM(SEL_HSCIF3_0)		FM(SEL_HSCIF3_1)		FM(SEL_HSCIF3_2)		FM(SEL_HSCIF3_3)	FM(SEL_HSCIF3_4)	F_(0, 0)	F_(0, 0)	F_(0, 0)
+-#define MOD_SEL1_21_20_19	FM(SEL_HSCIF4_0)		FM(SEL_HSCIF4_1)		FM(SEL_HSCIF4_2)		FM(SEL_HSCIF4_3)	FM(SEL_HSCIF4_4)	F_(0, 0)	F_(0, 0)	F_(0, 0)
++#define MOD_SEL1_24_23_22  REV8(FM(SEL_HSCIF3_0),		FM(SEL_HSCIF3_1),		FM(SEL_HSCIF3_2),		FM(SEL_HSCIF3_3),	FM(SEL_HSCIF3_4),	F_(0, 0),	F_(0, 0),	F_(0, 0))
++#define MOD_SEL1_21_20_19  REV8(FM(SEL_HSCIF4_0),		FM(SEL_HSCIF4_1),		FM(SEL_HSCIF4_2),		FM(SEL_HSCIF4_3),	FM(SEL_HSCIF4_4),	F_(0, 0),	F_(0, 0),	F_(0, 0))
+ #define MOD_SEL1_18		FM(SEL_I2C6_0)			FM(SEL_I2C6_1)
+ #define MOD_SEL1_17		FM(SEL_I2C7_0)			FM(SEL_I2C7_1)
+ #define MOD_SEL1_16		FM(SEL_MSIOF2_0)		FM(SEL_MSIOF2_1)
+ #define MOD_SEL1_15		FM(SEL_MSIOF3_0)		FM(SEL_MSIOF3_1)
+-#define MOD_SEL1_14_13		FM(SEL_SCIF3_0)			FM(SEL_SCIF3_1)			FM(SEL_SCIF3_2)			F_(0, 0)
+-#define MOD_SEL1_12_11		FM(SEL_SCIF4_0)			FM(SEL_SCIF4_1)			FM(SEL_SCIF4_2)			F_(0, 0)
+-#define MOD_SEL1_10_9		FM(SEL_SCIF5_0)			FM(SEL_SCIF5_1)			FM(SEL_SCIF5_2)			F_(0, 0)
++#define MOD_SEL1_14_13	   REV4(FM(SEL_SCIF3_0),		FM(SEL_SCIF3_1),		FM(SEL_SCIF3_2),		F_(0, 0))
++#define MOD_SEL1_12_11	   REV4(FM(SEL_SCIF4_0),		FM(SEL_SCIF4_1),		FM(SEL_SCIF4_2),		F_(0, 0))
++#define MOD_SEL1_10_9	   REV4(FM(SEL_SCIF5_0),		FM(SEL_SCIF5_1),		FM(SEL_SCIF5_2),		F_(0, 0))
+ #define MOD_SEL1_8		FM(SEL_VIN4_0)			FM(SEL_VIN4_1)
+ #define MOD_SEL1_7		FM(SEL_VIN5_0)			FM(SEL_VIN5_1)
+-#define MOD_SEL1_6_5		FM(SEL_ADGC_0)			FM(SEL_ADGC_1)			FM(SEL_ADGC_2)			F_(0, 0)
++#define MOD_SEL1_6_5	   REV4(FM(SEL_ADGC_0),			FM(SEL_ADGC_1),			FM(SEL_ADGC_2),			F_(0, 0))
+ #define MOD_SEL1_4		FM(SEL_SSI9_0)			FM(SEL_SSI9_1)
+ 
+ #define PINMUX_MOD_SELS	\
+diff --git a/drivers/pinctrl/sh-pfc/pfc-r8a77995.c b/drivers/pinctrl/sh-pfc/pfc-r8a77995.c
+index 84d78db381e3..9e377e3b9cb3 100644
+--- a/drivers/pinctrl/sh-pfc/pfc-r8a77995.c
++++ b/drivers/pinctrl/sh-pfc/pfc-r8a77995.c
+@@ -381,6 +381,9 @@ FM(IP12_23_20)	IP12_23_20 \
+ FM(IP12_27_24)	IP12_27_24 \
+ FM(IP12_31_28)	IP12_31_28 \
+ 
++/* The bit numbering in MOD_SEL fields is reversed */
++#define REV4(f0, f1, f2, f3)			f0 f2 f1 f3
++
+ /* MOD_SEL0 */			/* 0 */			/* 1 */			/* 2 */			/* 3 */
+ #define MOD_SEL0_30		FM(SEL_MSIOF2_0)	FM(SEL_MSIOF2_1)
+ #define MOD_SEL0_29		FM(SEL_I2C3_0)		FM(SEL_I2C3_1)
+@@ -388,10 +391,10 @@ FM(IP12_31_28)	IP12_31_28 \
+ #define MOD_SEL0_27		FM(SEL_MSIOF3_0)	FM(SEL_MSIOF3_1)
+ #define MOD_SEL0_26		FM(SEL_HSCIF3_0)	FM(SEL_HSCIF3_1)
+ #define MOD_SEL0_25		FM(SEL_SCIF4_0)		FM(SEL_SCIF4_1)
+-#define MOD_SEL0_24_23		FM(SEL_PWM0_0)		FM(SEL_PWM0_1)		FM(SEL_PWM0_2)		F_(0, 0)
+-#define MOD_SEL0_22_21		FM(SEL_PWM1_0)		FM(SEL_PWM1_1)		FM(SEL_PWM1_2)		F_(0, 0)
+-#define MOD_SEL0_20_19		FM(SEL_PWM2_0)		FM(SEL_PWM2_1)		FM(SEL_PWM2_2)		F_(0, 0)
+-#define MOD_SEL0_18_17		FM(SEL_PWM3_0)		FM(SEL_PWM3_1)		FM(SEL_PWM3_2)		F_(0, 0)
++#define MOD_SEL0_24_23	   REV4(FM(SEL_PWM0_0),		FM(SEL_PWM0_1),		FM(SEL_PWM0_2),		F_(0, 0))
++#define MOD_SEL0_22_21	   REV4(FM(SEL_PWM1_0),		FM(SEL_PWM1_1),		FM(SEL_PWM1_2),		F_(0, 0))
++#define MOD_SEL0_20_19	   REV4(FM(SEL_PWM2_0),		FM(SEL_PWM2_1),		FM(SEL_PWM2_2),		F_(0, 0))
++#define MOD_SEL0_18_17	   REV4(FM(SEL_PWM3_0),		FM(SEL_PWM3_1),		FM(SEL_PWM3_2),		F_(0, 0))
+ #define MOD_SEL0_15		FM(SEL_IRQ_0_0)		FM(SEL_IRQ_0_1)
+ #define MOD_SEL0_14		FM(SEL_IRQ_1_0)		FM(SEL_IRQ_1_1)
+ #define MOD_SEL0_13		FM(SEL_IRQ_2_0)		FM(SEL_IRQ_2_1)
+diff --git a/drivers/platform/mellanox/mlxreg-hotplug.c b/drivers/platform/mellanox/mlxreg-hotplug.c
+index b6d44550d98c..eca16d00e310 100644
+--- a/drivers/platform/mellanox/mlxreg-hotplug.c
++++ b/drivers/platform/mellanox/mlxreg-hotplug.c
+@@ -248,7 +248,8 @@ mlxreg_hotplug_work_helper(struct mlxreg_hotplug_priv_data *priv,
+ 			   struct mlxreg_core_item *item)
+ {
+ 	struct mlxreg_core_data *data;
+-	u32 asserted, regval, bit;
++	unsigned long asserted;
++	u32 regval, bit;
+ 	int ret;
+ 
+ 	/*
+@@ -281,7 +282,7 @@ mlxreg_hotplug_work_helper(struct mlxreg_hotplug_priv_data *priv,
+ 	asserted = item->cache ^ regval;
+ 	item->cache = regval;
+ 
+-	for_each_set_bit(bit, (unsigned long *)&asserted, 8) {
++	for_each_set_bit(bit, &asserted, 8) {
+ 		data = item->data + bit;
+ 		if (regval & BIT(bit)) {
+ 			if (item->inversed)
+diff --git a/drivers/platform/x86/ideapad-laptop.c b/drivers/platform/x86/ideapad-laptop.c
+index 1589dffab9fa..8b53a9ceb897 100644
+--- a/drivers/platform/x86/ideapad-laptop.c
++++ b/drivers/platform/x86/ideapad-laptop.c
+@@ -989,7 +989,7 @@ static const struct dmi_system_id no_hw_rfkill_list[] = {
+ 		.ident = "Lenovo RESCUER R720-15IKBN",
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+-			DMI_MATCH(DMI_BOARD_NAME, "80WW"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo R720-15IKBN"),
+ 		},
+ 	},
+ 	{
+diff --git a/drivers/platform/x86/intel-hid.c b/drivers/platform/x86/intel-hid.c
+index e28bcf61b126..bc0d55a59015 100644
+--- a/drivers/platform/x86/intel-hid.c
++++ b/drivers/platform/x86/intel-hid.c
+@@ -363,7 +363,7 @@ wakeup:
+ 	 * the 5-button array, but still send notifies with power button
+ 	 * event code to this device object on power button actions.
+ 	 *
+-	 * Report the power button press; catch and ignore the button release.
++	 * Report the power button press and release.
+ 	 */
+ 	if (!priv->array) {
+ 		if (event == 0xce) {
+@@ -372,8 +372,11 @@ wakeup:
+ 			return;
+ 		}
+ 
+-		if (event == 0xcf)
++		if (event == 0xcf) {
++			input_report_key(priv->input_dev, KEY_POWER, 0);
++			input_sync(priv->input_dev);
+ 			return;
++		}
+ 	}
+ 
+ 	/* 0xC0 is for HID events, other values are for 5 button array */
+diff --git a/drivers/platform/x86/intel_pmc_core.c b/drivers/platform/x86/intel_pmc_core.c
+index 22dbf115782e..c37e74ee609d 100644
+--- a/drivers/platform/x86/intel_pmc_core.c
++++ b/drivers/platform/x86/intel_pmc_core.c
+@@ -380,7 +380,8 @@ static int pmc_core_ppfear_show(struct seq_file *s, void *unused)
+ 	     index < PPFEAR_MAX_NUM_ENTRIES; index++, iter++)
+ 		pf_regs[index] = pmc_core_reg_read_byte(pmcdev, iter);
+ 
+-	for (index = 0; map[index].name; index++)
++	for (index = 0; map[index].name &&
++	     index < pmcdev->map->ppfear_buckets * 8; index++)
+ 		pmc_core_display_map(s, index, pf_regs[index / 8], map);
+ 
+ 	return 0;
+diff --git a/drivers/platform/x86/intel_pmc_core.h b/drivers/platform/x86/intel_pmc_core.h
+index 89554cba5758..1a0104d2cbf0 100644
+--- a/drivers/platform/x86/intel_pmc_core.h
++++ b/drivers/platform/x86/intel_pmc_core.h
+@@ -32,7 +32,7 @@
+ #define SPT_PMC_SLP_S0_RES_COUNTER_STEP		0x64
+ #define PMC_BASE_ADDR_MASK			~(SPT_PMC_MMIO_REG_LEN - 1)
+ #define MTPMC_MASK				0xffff0000
+-#define PPFEAR_MAX_NUM_ENTRIES			5
++#define PPFEAR_MAX_NUM_ENTRIES			12
+ #define SPT_PPFEAR_NUM_ENTRIES			5
+ #define SPT_PMC_READ_DISABLE_BIT		0x16
+ #define SPT_PMC_MSG_FULL_STS_BIT		0x18
+diff --git a/drivers/regulator/act8865-regulator.c b/drivers/regulator/act8865-regulator.c
+index 21e20483bd91..e0239cf3f56d 100644
+--- a/drivers/regulator/act8865-regulator.c
++++ b/drivers/regulator/act8865-regulator.c
+@@ -131,7 +131,7 @@
+  * ACT8865 voltage number
+  */
+ #define	ACT8865_VOLTAGE_NUM	64
+-#define ACT8600_SUDCDC_VOLTAGE_NUM	255
++#define ACT8600_SUDCDC_VOLTAGE_NUM	256
+ 
+ struct act8865 {
+ 	struct regmap *regmap;
+@@ -222,7 +222,8 @@ static const struct regulator_linear_range act8600_sudcdc_voltage_ranges[] = {
+ 	REGULATOR_LINEAR_RANGE(3000000, 0, 63, 0),
+ 	REGULATOR_LINEAR_RANGE(3000000, 64, 159, 100000),
+ 	REGULATOR_LINEAR_RANGE(12600000, 160, 191, 200000),
+-	REGULATOR_LINEAR_RANGE(19000000, 191, 255, 400000),
++	REGULATOR_LINEAR_RANGE(19000000, 192, 247, 400000),
++	REGULATOR_LINEAR_RANGE(41400000, 248, 255, 0),
+ };
+ 
+ static struct regulator_ops act8865_ops = {
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index b9d7b45c7295..e2caf11598c7 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -1349,7 +1349,9 @@ static int set_machine_constraints(struct regulator_dev *rdev,
+ 		 * We'll only apply the initial system load if an
+ 		 * initial mode wasn't specified.
+ 		 */
++		regulator_lock(rdev);
+ 		drms_uA_update(rdev);
++		regulator_unlock(rdev);
+ 	}
+ 
+ 	if ((rdev->constraints->ramp_delay || rdev->constraints->ramp_disable)
+diff --git a/drivers/regulator/mcp16502.c b/drivers/regulator/mcp16502.c
+index 3479ae009b0b..0fc4963bd5b0 100644
+--- a/drivers/regulator/mcp16502.c
++++ b/drivers/regulator/mcp16502.c
+@@ -17,6 +17,7 @@
+ #include <linux/regmap.h>
+ #include <linux/regulator/driver.h>
+ #include <linux/suspend.h>
++#include <linux/gpio/consumer.h>
+ 
+ #define VDD_LOW_SEL 0x0D
+ #define VDD_HIGH_SEL 0x3F
+diff --git a/drivers/s390/net/ism_drv.c b/drivers/s390/net/ism_drv.c
+index ed8e58f09054..3e132592c1fe 100644
+--- a/drivers/s390/net/ism_drv.c
++++ b/drivers/s390/net/ism_drv.c
+@@ -141,10 +141,13 @@ static int register_ieq(struct ism_dev *ism)
+ 
+ static int unregister_sba(struct ism_dev *ism)
+ {
++	int ret;
++
+ 	if (!ism->sba)
+ 		return 0;
+ 
+-	if (ism_cmd_simple(ism, ISM_UNREG_SBA))
++	ret = ism_cmd_simple(ism, ISM_UNREG_SBA);
++	if (ret && ret != ISM_ERROR)
+ 		return -EIO;
+ 
+ 	dma_free_coherent(&ism->pdev->dev, PAGE_SIZE,
+@@ -158,10 +161,13 @@ static int unregister_sba(struct ism_dev *ism)
+ 
+ static int unregister_ieq(struct ism_dev *ism)
+ {
++	int ret;
++
+ 	if (!ism->ieq)
+ 		return 0;
+ 
+-	if (ism_cmd_simple(ism, ISM_UNREG_IEQ))
++	ret = ism_cmd_simple(ism, ISM_UNREG_IEQ);
++	if (ret && ret != ISM_ERROR)
+ 		return -EIO;
+ 
+ 	dma_free_coherent(&ism->pdev->dev, PAGE_SIZE,
+@@ -287,7 +293,7 @@ static int ism_unregister_dmb(struct smcd_dev *smcd, struct smcd_dmb *dmb)
+ 	cmd.request.dmb_tok = dmb->dmb_tok;
+ 
+ 	ret = ism_cmd(ism, &cmd);
+-	if (ret)
++	if (ret && ret != ISM_ERROR)
+ 		goto out;
+ 
+ 	ism_free_dmb(ism, dmb);
+diff --git a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
+index 2e4e7159ebf9..a75e74ad1698 100644
+--- a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
++++ b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
+@@ -1438,7 +1438,7 @@ bind_err:
+ static struct bnx2fc_interface *
+ bnx2fc_interface_create(struct bnx2fc_hba *hba,
+ 			struct net_device *netdev,
+-			enum fip_state fip_mode)
++			enum fip_mode fip_mode)
+ {
+ 	struct fcoe_ctlr_device *ctlr_dev;
+ 	struct bnx2fc_interface *interface;
+diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c
+index cd19be3f3405..8ba8862d3292 100644
+--- a/drivers/scsi/fcoe/fcoe.c
++++ b/drivers/scsi/fcoe/fcoe.c
+@@ -389,7 +389,7 @@ static int fcoe_interface_setup(struct fcoe_interface *fcoe,
+  * Returns: pointer to a struct fcoe_interface or NULL on error
+  */
+ static struct fcoe_interface *fcoe_interface_create(struct net_device *netdev,
+-						    enum fip_state fip_mode)
++						    enum fip_mode fip_mode)
+ {
+ 	struct fcoe_ctlr_device *ctlr_dev;
+ 	struct fcoe_ctlr *ctlr;
+diff --git a/drivers/scsi/fcoe/fcoe_ctlr.c b/drivers/scsi/fcoe/fcoe_ctlr.c
+index 54da3166da8d..7dc4ffa24430 100644
+--- a/drivers/scsi/fcoe/fcoe_ctlr.c
++++ b/drivers/scsi/fcoe/fcoe_ctlr.c
+@@ -147,7 +147,7 @@ static void fcoe_ctlr_map_dest(struct fcoe_ctlr *fip)
+  * fcoe_ctlr_init() - Initialize the FCoE Controller instance
+  * @fip: The FCoE controller to initialize
+  */
+-void fcoe_ctlr_init(struct fcoe_ctlr *fip, enum fip_state mode)
++void fcoe_ctlr_init(struct fcoe_ctlr *fip, enum fip_mode mode)
+ {
+ 	fcoe_ctlr_set_state(fip, FIP_ST_LINK_WAIT);
+ 	fip->mode = mode;
+@@ -454,7 +454,10 @@ void fcoe_ctlr_link_up(struct fcoe_ctlr *fip)
+ 		mutex_unlock(&fip->ctlr_mutex);
+ 		fc_linkup(fip->lp);
+ 	} else if (fip->state == FIP_ST_LINK_WAIT) {
+-		fcoe_ctlr_set_state(fip, fip->mode);
++		if (fip->mode == FIP_MODE_NON_FIP)
++			fcoe_ctlr_set_state(fip, FIP_ST_NON_FIP);
++		else
++			fcoe_ctlr_set_state(fip, FIP_ST_AUTO);
+ 		switch (fip->mode) {
+ 		default:
+ 			LIBFCOE_FIP_DBG(fip, "invalid mode %d\n", fip->mode);
+diff --git a/drivers/scsi/fcoe/fcoe_transport.c b/drivers/scsi/fcoe/fcoe_transport.c
+index f4909cd206d3..f15d5e1d56b1 100644
+--- a/drivers/scsi/fcoe/fcoe_transport.c
++++ b/drivers/scsi/fcoe/fcoe_transport.c
+@@ -873,7 +873,7 @@ static int fcoe_transport_create(const char *buffer,
+ 	int rc = -ENODEV;
+ 	struct net_device *netdev = NULL;
+ 	struct fcoe_transport *ft = NULL;
+-	enum fip_state fip_mode = (enum fip_state)(long)kp->arg;
++	enum fip_mode fip_mode = (enum fip_mode)kp->arg;
+ 
+ 	mutex_lock(&ft_mutex);
+ 
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c
+index bc17fa0d8375..62d158574281 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_main.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_main.c
+@@ -10,6 +10,7 @@
+  */
+ 
+ #include "hisi_sas.h"
++#include "../libsas/sas_internal.h"
+ #define DRV_NAME "hisi_sas"
+ 
+ #define DEV_IS_GONE(dev) \
+@@ -872,7 +873,8 @@ static void hisi_sas_do_release_task(struct hisi_hba *hisi_hba, struct sas_task
+ 		spin_lock_irqsave(&task->task_state_lock, flags);
+ 		task->task_state_flags &=
+ 			~(SAS_TASK_STATE_PENDING | SAS_TASK_AT_INITIATOR);
+-		task->task_state_flags |= SAS_TASK_STATE_DONE;
++		if (!slot->is_internal && task->task_proto != SAS_PROTOCOL_SMP)
++			task->task_state_flags |= SAS_TASK_STATE_DONE;
+ 		spin_unlock_irqrestore(&task->task_state_lock, flags);
+ 	}
+ 
+@@ -1972,9 +1974,18 @@ static int hisi_sas_write_gpio(struct sas_ha_struct *sha, u8 reg_type,
+ 
+ static void hisi_sas_phy_disconnected(struct hisi_sas_phy *phy)
+ {
++	struct asd_sas_phy *sas_phy = &phy->sas_phy;
++	struct sas_phy *sphy = sas_phy->phy;
++	struct sas_phy_data *d = sphy->hostdata;
++
+ 	phy->phy_attached = 0;
+ 	phy->phy_type = 0;
+ 	phy->port = NULL;
++
++	if (d->enable)
++		sphy->negotiated_linkrate = SAS_LINK_RATE_UNKNOWN;
++	else
++		sphy->negotiated_linkrate = SAS_PHY_DISABLED;
+ }
+ 
+ void hisi_sas_phy_down(struct hisi_hba *hisi_hba, int phy_no, int rdy)
+diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
+index fcbff83c0097..c9811d1aa007 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_base.c
++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
+@@ -4188,6 +4188,7 @@ int megasas_alloc_cmds(struct megasas_instance *instance)
+ 	if (megasas_create_frame_pool(instance)) {
+ 		dev_printk(KERN_DEBUG, &instance->pdev->dev, "Error creating frame DMA pool\n");
+ 		megasas_free_cmds(instance);
++		return -ENOMEM;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
+index 9bbc19fc190b..9f9431a4cc0e 100644
+--- a/drivers/scsi/qedf/qedf_main.c
++++ b/drivers/scsi/qedf/qedf_main.c
+@@ -1418,7 +1418,7 @@ static struct libfc_function_template qedf_lport_template = {
+ 
+ static void qedf_fcoe_ctlr_setup(struct qedf_ctx *qedf)
+ {
+-	fcoe_ctlr_init(&qedf->ctlr, FIP_ST_AUTO);
++	fcoe_ctlr_init(&qedf->ctlr, FIP_MODE_AUTO);
+ 
+ 	qedf->ctlr.send = qedf_fip_send;
+ 	qedf->ctlr.get_src_addr = qedf_get_src_mac;
+diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
+index dd0d516f65e2..53380e07b40e 100644
+--- a/drivers/scsi/scsi_scan.c
++++ b/drivers/scsi/scsi_scan.c
+@@ -220,7 +220,7 @@ static struct scsi_device *scsi_alloc_sdev(struct scsi_target *starget,
+ 	struct Scsi_Host *shost = dev_to_shost(starget->dev.parent);
+ 
+ 	sdev = kzalloc(sizeof(*sdev) + shost->transportt->device_size,
+-		       GFP_ATOMIC);
++		       GFP_KERNEL);
+ 	if (!sdev)
+ 		goto out;
+ 
+@@ -788,7 +788,7 @@ static int scsi_add_lun(struct scsi_device *sdev, unsigned char *inq_result,
+ 	 */
+ 	sdev->inquiry = kmemdup(inq_result,
+ 				max_t(size_t, sdev->inquiry_len, 36),
+-				GFP_ATOMIC);
++				GFP_KERNEL);
+ 	if (sdev->inquiry == NULL)
+ 		return SCSI_SCAN_NO_RESPONSE;
+ 
+@@ -1079,7 +1079,7 @@ static int scsi_probe_and_add_lun(struct scsi_target *starget,
+ 	if (!sdev)
+ 		goto out;
+ 
+-	result = kmalloc(result_len, GFP_ATOMIC |
++	result = kmalloc(result_len, GFP_KERNEL |
+ 			((shost->unchecked_isa_dma) ? __GFP_DMA : 0));
+ 	if (!result)
+ 		goto out_free_sdev;
+diff --git a/drivers/soc/qcom/qcom_gsbi.c b/drivers/soc/qcom/qcom_gsbi.c
+index 09c669e70d63..038abc377fdb 100644
+--- a/drivers/soc/qcom/qcom_gsbi.c
++++ b/drivers/soc/qcom/qcom_gsbi.c
+@@ -138,7 +138,7 @@ static int gsbi_probe(struct platform_device *pdev)
+ 	struct resource *res;
+ 	void __iomem *base;
+ 	struct gsbi_info *gsbi;
+-	int i;
++	int i, ret;
+ 	u32 mask, gsbi_num;
+ 	const struct crci_config *config = NULL;
+ 
+@@ -221,7 +221,10 @@ static int gsbi_probe(struct platform_device *pdev)
+ 
+ 	platform_set_drvdata(pdev, gsbi);
+ 
+-	return of_platform_populate(node, NULL, NULL, &pdev->dev);
++	ret = of_platform_populate(node, NULL, NULL, &pdev->dev);
++	if (ret)
++		clk_disable_unprepare(gsbi->hclk);
++	return ret;
+ }
+ 
+ static int gsbi_remove(struct platform_device *pdev)
+diff --git a/drivers/soc/tegra/fuse/fuse-tegra.c b/drivers/soc/tegra/fuse/fuse-tegra.c
+index a33ee8ef8b6b..51625703399e 100644
+--- a/drivers/soc/tegra/fuse/fuse-tegra.c
++++ b/drivers/soc/tegra/fuse/fuse-tegra.c
+@@ -137,13 +137,17 @@ static int tegra_fuse_probe(struct platform_device *pdev)
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	fuse->phys = res->start;
+ 	fuse->base = devm_ioremap_resource(&pdev->dev, res);
+-	if (IS_ERR(fuse->base))
+-		return PTR_ERR(fuse->base);
++	if (IS_ERR(fuse->base)) {
++		err = PTR_ERR(fuse->base);
++		fuse->base = base;
++		return err;
++	}
+ 
+ 	fuse->clk = devm_clk_get(&pdev->dev, "fuse");
+ 	if (IS_ERR(fuse->clk)) {
+ 		dev_err(&pdev->dev, "failed to get FUSE clock: %ld",
+ 			PTR_ERR(fuse->clk));
++		fuse->base = base;
+ 		return PTR_ERR(fuse->clk);
+ 	}
+ 
+@@ -152,8 +156,10 @@ static int tegra_fuse_probe(struct platform_device *pdev)
+ 
+ 	if (fuse->soc->probe) {
+ 		err = fuse->soc->probe(fuse);
+-		if (err < 0)
++		if (err < 0) {
++			fuse->base = base;
+ 			return err;
++		}
+ 	}
+ 
+ 	if (tegra_fuse_create_sysfs(&pdev->dev, fuse->soc->info->size,
+diff --git a/drivers/staging/iio/addac/adt7316.c b/drivers/staging/iio/addac/adt7316.c
+index dc93e85808e0..7839d869d25d 100644
+--- a/drivers/staging/iio/addac/adt7316.c
++++ b/drivers/staging/iio/addac/adt7316.c
+@@ -651,17 +651,10 @@ static ssize_t adt7316_store_da_high_resolution(struct device *dev,
+ 	u8 config3;
+ 	int ret;
+ 
+-	chip->dac_bits = 8;
+-
+-	if (buf[0] == '1') {
++	if (buf[0] == '1')
+ 		config3 = chip->config3 | ADT7316_DA_HIGH_RESOLUTION;
+-		if (chip->id == ID_ADT7316 || chip->id == ID_ADT7516)
+-			chip->dac_bits = 12;
+-		else if (chip->id == ID_ADT7317 || chip->id == ID_ADT7517)
+-			chip->dac_bits = 10;
+-	} else {
++	else
+ 		config3 = chip->config3 & (~ADT7316_DA_HIGH_RESOLUTION);
+-	}
+ 
+ 	ret = chip->bus.write(chip->bus.client, ADT7316_CONFIG3, config3);
+ 	if (ret)
+@@ -2123,6 +2116,13 @@ int adt7316_probe(struct device *dev, struct adt7316_bus *bus,
+ 	else
+ 		return -ENODEV;
+ 
++	if (chip->id == ID_ADT7316 || chip->id == ID_ADT7516)
++		chip->dac_bits = 12;
++	else if (chip->id == ID_ADT7317 || chip->id == ID_ADT7517)
++		chip->dac_bits = 10;
++	else
++		chip->dac_bits = 8;
++
+ 	chip->ldac_pin = devm_gpiod_get_optional(dev, "adi,ldac", GPIOD_OUT_LOW);
+ 	if (IS_ERR(chip->ldac_pin)) {
+ 		ret = PTR_ERR(chip->ldac_pin);
+diff --git a/drivers/staging/media/rockchip/vpu/rk3288_vpu_hw_jpeg_enc.c b/drivers/staging/media/rockchip/vpu/rk3288_vpu_hw_jpeg_enc.c
+index 5282236d1bb1..06daea66fb49 100644
+--- a/drivers/staging/media/rockchip/vpu/rk3288_vpu_hw_jpeg_enc.c
++++ b/drivers/staging/media/rockchip/vpu/rk3288_vpu_hw_jpeg_enc.c
+@@ -80,7 +80,7 @@ rk3288_vpu_jpeg_enc_set_qtable(struct rockchip_vpu_dev *vpu,
+ void rk3288_vpu_jpeg_enc_run(struct rockchip_vpu_ctx *ctx)
+ {
+ 	struct rockchip_vpu_dev *vpu = ctx->dev;
+-	struct vb2_buffer *src_buf, *dst_buf;
++	struct vb2_v4l2_buffer *src_buf, *dst_buf;
+ 	struct rockchip_vpu_jpeg_ctx jpeg_ctx;
+ 	u32 reg;
+ 
+@@ -88,7 +88,7 @@ void rk3288_vpu_jpeg_enc_run(struct rockchip_vpu_ctx *ctx)
+ 	dst_buf = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
+ 
+ 	memset(&jpeg_ctx, 0, sizeof(jpeg_ctx));
+-	jpeg_ctx.buffer = vb2_plane_vaddr(dst_buf, 0);
++	jpeg_ctx.buffer = vb2_plane_vaddr(&dst_buf->vb2_buf, 0);
+ 	jpeg_ctx.width = ctx->dst_fmt.width;
+ 	jpeg_ctx.height = ctx->dst_fmt.height;
+ 	jpeg_ctx.quality = ctx->jpeg_quality;
+@@ -99,7 +99,7 @@ void rk3288_vpu_jpeg_enc_run(struct rockchip_vpu_ctx *ctx)
+ 			   VEPU_REG_ENC_CTRL);
+ 
+ 	rk3288_vpu_set_src_img_ctrl(vpu, ctx);
+-	rk3288_vpu_jpeg_enc_set_buffers(vpu, ctx, src_buf);
++	rk3288_vpu_jpeg_enc_set_buffers(vpu, ctx, &src_buf->vb2_buf);
+ 	rk3288_vpu_jpeg_enc_set_qtable(vpu,
+ 				       rockchip_vpu_jpeg_get_qtable(&jpeg_ctx, 0),
+ 				       rockchip_vpu_jpeg_get_qtable(&jpeg_ctx, 1));
+diff --git a/drivers/staging/media/rockchip/vpu/rk3399_vpu_hw_jpeg_enc.c b/drivers/staging/media/rockchip/vpu/rk3399_vpu_hw_jpeg_enc.c
+index dbc86d95fe3b..3d438797692e 100644
+--- a/drivers/staging/media/rockchip/vpu/rk3399_vpu_hw_jpeg_enc.c
++++ b/drivers/staging/media/rockchip/vpu/rk3399_vpu_hw_jpeg_enc.c
+@@ -111,7 +111,7 @@ rk3399_vpu_jpeg_enc_set_qtable(struct rockchip_vpu_dev *vpu,
+ void rk3399_vpu_jpeg_enc_run(struct rockchip_vpu_ctx *ctx)
+ {
+ 	struct rockchip_vpu_dev *vpu = ctx->dev;
+-	struct vb2_buffer *src_buf, *dst_buf;
++	struct vb2_v4l2_buffer *src_buf, *dst_buf;
+ 	struct rockchip_vpu_jpeg_ctx jpeg_ctx;
+ 	u32 reg;
+ 
+@@ -119,7 +119,7 @@ void rk3399_vpu_jpeg_enc_run(struct rockchip_vpu_ctx *ctx)
+ 	dst_buf = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
+ 
+ 	memset(&jpeg_ctx, 0, sizeof(jpeg_ctx));
+-	jpeg_ctx.buffer = vb2_plane_vaddr(dst_buf, 0);
++	jpeg_ctx.buffer = vb2_plane_vaddr(&dst_buf->vb2_buf, 0);
+ 	jpeg_ctx.width = ctx->dst_fmt.width;
+ 	jpeg_ctx.height = ctx->dst_fmt.height;
+ 	jpeg_ctx.quality = ctx->jpeg_quality;
+@@ -130,7 +130,7 @@ void rk3399_vpu_jpeg_enc_run(struct rockchip_vpu_ctx *ctx)
+ 			   VEPU_REG_ENCODE_START);
+ 
+ 	rk3399_vpu_set_src_img_ctrl(vpu, ctx);
+-	rk3399_vpu_jpeg_enc_set_buffers(vpu, ctx, src_buf);
++	rk3399_vpu_jpeg_enc_set_buffers(vpu, ctx, &src_buf->vb2_buf);
+ 	rk3399_vpu_jpeg_enc_set_qtable(vpu,
+ 				       rockchip_vpu_jpeg_get_qtable(&jpeg_ctx, 0),
+ 				       rockchip_vpu_jpeg_get_qtable(&jpeg_ctx, 1));
+diff --git a/drivers/staging/mt7621-spi/spi-mt7621.c b/drivers/staging/mt7621-spi/spi-mt7621.c
+index 513b6e79b985..e1f50efd0922 100644
+--- a/drivers/staging/mt7621-spi/spi-mt7621.c
++++ b/drivers/staging/mt7621-spi/spi-mt7621.c
+@@ -330,6 +330,7 @@ static int mt7621_spi_probe(struct platform_device *pdev)
+ 	int status = 0;
+ 	struct clk *clk;
+ 	struct mt7621_spi_ops *ops;
++	int ret;
+ 
+ 	match = of_match_device(mt7621_spi_match, &pdev->dev);
+ 	if (!match)
+@@ -377,7 +378,11 @@ static int mt7621_spi_probe(struct platform_device *pdev)
+ 	rs->pending_write = 0;
+ 	dev_info(&pdev->dev, "sys_freq: %u\n", rs->sys_freq);
+ 
+-	device_reset(&pdev->dev);
++	ret = device_reset(&pdev->dev);
++	if (ret) {
++		dev_err(&pdev->dev, "SPI reset failed!\n");
++		return ret;
++	}
+ 
+ 	mt7621_spi_reset(rs);
+ 
+diff --git a/drivers/tty/serial/8250/8250_pxa.c b/drivers/tty/serial/8250/8250_pxa.c
+index b9bcbe20a2be..c47188860e32 100644
+--- a/drivers/tty/serial/8250/8250_pxa.c
++++ b/drivers/tty/serial/8250/8250_pxa.c
+@@ -113,6 +113,10 @@ static int serial_pxa_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		return ret;
+ 
++	ret = of_alias_get_id(pdev->dev.of_node, "serial");
++	if (ret >= 0)
++		uart.port.line = ret;
++
+ 	uart.port.type = PORT_XSCALE;
+ 	uart.port.iotype = UPIO_MEM32;
+ 	uart.port.mapbase = mmres->start;
+diff --git a/drivers/tty/tty_buffer.c b/drivers/tty/tty_buffer.c
+index 77070c2d1240..ec145a59f199 100644
+--- a/drivers/tty/tty_buffer.c
++++ b/drivers/tty/tty_buffer.c
+@@ -26,7 +26,7 @@
+  * Byte threshold to limit memory consumption for flip buffers.
+  * The actual memory limit is > 2x this amount.
+  */
+-#define TTYB_DEFAULT_MEM_LIMIT	65536
++#define TTYB_DEFAULT_MEM_LIMIT	(640 * 1024UL)
+ 
+ /*
+  * We default to dicing tty buffer allocations to this many characters
+diff --git a/drivers/usb/chipidea/core.c b/drivers/usb/chipidea/core.c
+index 7bfcbb23c2a4..016e4004fe9d 100644
+--- a/drivers/usb/chipidea/core.c
++++ b/drivers/usb/chipidea/core.c
+@@ -954,8 +954,15 @@ static int ci_hdrc_probe(struct platform_device *pdev)
+ 	} else if (ci->platdata->usb_phy) {
+ 		ci->usb_phy = ci->platdata->usb_phy;
+ 	} else {
++		ci->usb_phy = devm_usb_get_phy_by_phandle(dev->parent, "phys",
++							  0);
+ 		ci->phy = devm_phy_get(dev->parent, "usb-phy");
+-		ci->usb_phy = devm_usb_get_phy(dev->parent, USB_PHY_TYPE_USB2);
++
++		/* Fallback to grabbing any registered USB2 PHY */
++		if (IS_ERR(ci->usb_phy) &&
++		    PTR_ERR(ci->usb_phy) != -EPROBE_DEFER)
++			ci->usb_phy = devm_usb_get_phy(dev->parent,
++						       USB_PHY_TYPE_USB2);
+ 
+ 		/* if both generic PHY and USB PHY layers aren't enabled */
+ 		if (PTR_ERR(ci->phy) == -ENOSYS &&
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 6c9b76bcc2e1..8d1dbe36db92 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -3339,6 +3339,8 @@ int dwc3_gadget_init(struct dwc3 *dwc)
+ 		goto err4;
+ 	}
+ 
++	dwc3_gadget_set_speed(&dwc->gadget, dwc->maximum_speed);
++
+ 	return 0;
+ 
+ err4:
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index 1e5430438703..0f8d16de7a37 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -1082,6 +1082,7 @@ static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data)
+ 			 * condition with req->complete callback.
+ 			 */
+ 			usb_ep_dequeue(ep->ep, req);
++			wait_for_completion(&done);
+ 			interrupted = ep->status < 0;
+ 		}
+ 
+diff --git a/drivers/video/backlight/pwm_bl.c b/drivers/video/backlight/pwm_bl.c
+index feb90764a811..53b8ceea9bde 100644
+--- a/drivers/video/backlight/pwm_bl.c
++++ b/drivers/video/backlight/pwm_bl.c
+@@ -435,7 +435,7 @@ static int pwm_backlight_initial_power_state(const struct pwm_bl_data *pb)
+ 	 */
+ 
+ 	/* if the enable GPIO is disabled, do not enable the backlight */
+-	if (pb->enable_gpio && gpiod_get_value(pb->enable_gpio) == 0)
++	if (pb->enable_gpio && gpiod_get_value_cansleep(pb->enable_gpio) == 0)
+ 		return FB_BLANK_POWERDOWN;
+ 
+ 	/* The regulator is disabled, do not enable the backlight */
+diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
+index cb43a2258c51..4721491e6c8c 100644
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -431,6 +431,9 @@ static void fb_do_show_logo(struct fb_info *info, struct fb_image *image,
+ {
+ 	unsigned int x;
+ 
++	if (image->width > info->var.xres || image->height > info->var.yres)
++		return;
++
+ 	if (rotate == FB_ROTATE_UR) {
+ 		for (x = 0;
+ 		     x < num && image->dx + image->width <= info->var.xres;
+diff --git a/drivers/xen/gntdev-dmabuf.c b/drivers/xen/gntdev-dmabuf.c
+index cba6b586bfbd..d97fcfc5e558 100644
+--- a/drivers/xen/gntdev-dmabuf.c
++++ b/drivers/xen/gntdev-dmabuf.c
+@@ -80,6 +80,12 @@ struct gntdev_dmabuf_priv {
+ 	struct list_head imp_list;
+ 	/* This is the lock which protects dma_buf_xxx lists. */
+ 	struct mutex lock;
++	/*
++	 * We reference this file while exporting dma-bufs, so
++	 * the grant device context is not destroyed while there are
++	 * external users alive.
++	 */
++	struct file *filp;
+ };
+ 
+ /* DMA buffer export support. */
+@@ -311,6 +317,7 @@ static void dmabuf_exp_release(struct kref *kref)
+ 
+ 	dmabuf_exp_wait_obj_signal(gntdev_dmabuf->priv, gntdev_dmabuf);
+ 	list_del(&gntdev_dmabuf->next);
++	fput(gntdev_dmabuf->priv->filp);
+ 	kfree(gntdev_dmabuf);
+ }
+ 
+@@ -423,6 +430,7 @@ static int dmabuf_exp_from_pages(struct gntdev_dmabuf_export_args *args)
+ 	mutex_lock(&args->dmabuf_priv->lock);
+ 	list_add(&gntdev_dmabuf->next, &args->dmabuf_priv->exp_list);
+ 	mutex_unlock(&args->dmabuf_priv->lock);
++	get_file(gntdev_dmabuf->priv->filp);
+ 	return 0;
+ 
+ fail:
+@@ -834,7 +842,7 @@ long gntdev_ioctl_dmabuf_imp_release(struct gntdev_priv *priv,
+ 	return dmabuf_imp_release(priv->dmabuf_priv, op.fd);
+ }
+ 
+-struct gntdev_dmabuf_priv *gntdev_dmabuf_init(void)
++struct gntdev_dmabuf_priv *gntdev_dmabuf_init(struct file *filp)
+ {
+ 	struct gntdev_dmabuf_priv *priv;
+ 
+@@ -847,6 +855,8 @@ struct gntdev_dmabuf_priv *gntdev_dmabuf_init(void)
+ 	INIT_LIST_HEAD(&priv->exp_wait_list);
+ 	INIT_LIST_HEAD(&priv->imp_list);
+ 
++	priv->filp = filp;
++
+ 	return priv;
+ }
+ 
+diff --git a/drivers/xen/gntdev-dmabuf.h b/drivers/xen/gntdev-dmabuf.h
+index 7220a53d0fc5..3d9b9cf9d5a1 100644
+--- a/drivers/xen/gntdev-dmabuf.h
++++ b/drivers/xen/gntdev-dmabuf.h
+@@ -14,7 +14,7 @@
+ struct gntdev_dmabuf_priv;
+ struct gntdev_priv;
+ 
+-struct gntdev_dmabuf_priv *gntdev_dmabuf_init(void);
++struct gntdev_dmabuf_priv *gntdev_dmabuf_init(struct file *filp);
+ 
+ void gntdev_dmabuf_fini(struct gntdev_dmabuf_priv *priv);
+ 
+diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
+index 5efc5eee9544..7cf9c51318aa 100644
+--- a/drivers/xen/gntdev.c
++++ b/drivers/xen/gntdev.c
+@@ -600,7 +600,7 @@ static int gntdev_open(struct inode *inode, struct file *flip)
+ 	mutex_init(&priv->lock);
+ 
+ #ifdef CONFIG_XEN_GNTDEV_DMABUF
+-	priv->dmabuf_priv = gntdev_dmabuf_init();
++	priv->dmabuf_priv = gntdev_dmabuf_init(flip);
+ 	if (IS_ERR(priv->dmabuf_priv)) {
+ 		ret = PTR_ERR(priv->dmabuf_priv);
+ 		kfree(priv);
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 0a6615573351..1b68700bc1c5 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -4808,6 +4808,7 @@ skip_async:
+ }
+ 
+ struct reserve_ticket {
++	u64 orig_bytes;
+ 	u64 bytes;
+ 	int error;
+ 	struct list_head list;
+@@ -5030,7 +5031,7 @@ static inline int need_do_async_reclaim(struct btrfs_fs_info *fs_info,
+ 		!test_bit(BTRFS_FS_STATE_REMOUNTING, &fs_info->fs_state));
+ }
+ 
+-static void wake_all_tickets(struct list_head *head)
++static bool wake_all_tickets(struct list_head *head)
+ {
+ 	struct reserve_ticket *ticket;
+ 
+@@ -5039,7 +5040,10 @@ static void wake_all_tickets(struct list_head *head)
+ 		list_del_init(&ticket->list);
+ 		ticket->error = -ENOSPC;
+ 		wake_up(&ticket->wait);
++		if (ticket->bytes != ticket->orig_bytes)
++			return true;
+ 	}
++	return false;
+ }
+ 
+ /*
+@@ -5094,8 +5098,12 @@ static void btrfs_async_reclaim_metadata_space(struct work_struct *work)
+ 		if (flush_state > COMMIT_TRANS) {
+ 			commit_cycles++;
+ 			if (commit_cycles > 2) {
+-				wake_all_tickets(&space_info->tickets);
+-				space_info->flush = 0;
++				if (wake_all_tickets(&space_info->tickets)) {
++					flush_state = FLUSH_DELAYED_ITEMS_NR;
++					commit_cycles--;
++				} else {
++					space_info->flush = 0;
++				}
+ 			} else {
+ 				flush_state = FLUSH_DELAYED_ITEMS_NR;
+ 			}
+@@ -5147,10 +5155,11 @@ static void priority_reclaim_metadata_space(struct btrfs_fs_info *fs_info,
+ 
+ static int wait_reserve_ticket(struct btrfs_fs_info *fs_info,
+ 			       struct btrfs_space_info *space_info,
+-			       struct reserve_ticket *ticket, u64 orig_bytes)
++			       struct reserve_ticket *ticket)
+ 
+ {
+ 	DEFINE_WAIT(wait);
++	u64 reclaim_bytes = 0;
+ 	int ret = 0;
+ 
+ 	spin_lock(&space_info->lock);
+@@ -5171,14 +5180,12 @@ static int wait_reserve_ticket(struct btrfs_fs_info *fs_info,
+ 		ret = ticket->error;
+ 	if (!list_empty(&ticket->list))
+ 		list_del_init(&ticket->list);
+-	if (ticket->bytes && ticket->bytes < orig_bytes) {
+-		u64 num_bytes = orig_bytes - ticket->bytes;
+-		update_bytes_may_use(space_info, -num_bytes);
+-		trace_btrfs_space_reservation(fs_info, "space_info",
+-					      space_info->flags, num_bytes, 0);
+-	}
++	if (ticket->bytes && ticket->bytes < ticket->orig_bytes)
++		reclaim_bytes = ticket->orig_bytes - ticket->bytes;
+ 	spin_unlock(&space_info->lock);
+ 
++	if (reclaim_bytes)
++		space_info_add_old_bytes(fs_info, space_info, reclaim_bytes);
+ 	return ret;
+ }
+ 
+@@ -5204,6 +5211,7 @@ static int __reserve_metadata_bytes(struct btrfs_fs_info *fs_info,
+ {
+ 	struct reserve_ticket ticket;
+ 	u64 used;
++	u64 reclaim_bytes = 0;
+ 	int ret = 0;
+ 
+ 	ASSERT(orig_bytes);
+@@ -5239,6 +5247,7 @@ static int __reserve_metadata_bytes(struct btrfs_fs_info *fs_info,
+ 	 * the list and we will do our own flushing further down.
+ 	 */
+ 	if (ret && flush != BTRFS_RESERVE_NO_FLUSH) {
++		ticket.orig_bytes = orig_bytes;
+ 		ticket.bytes = orig_bytes;
+ 		ticket.error = 0;
+ 		init_waitqueue_head(&ticket.wait);
+@@ -5279,25 +5288,21 @@ static int __reserve_metadata_bytes(struct btrfs_fs_info *fs_info,
+ 		return ret;
+ 
+ 	if (flush == BTRFS_RESERVE_FLUSH_ALL)
+-		return wait_reserve_ticket(fs_info, space_info, &ticket,
+-					   orig_bytes);
++		return wait_reserve_ticket(fs_info, space_info, &ticket);
+ 
+ 	ret = 0;
+ 	priority_reclaim_metadata_space(fs_info, space_info, &ticket);
+ 	spin_lock(&space_info->lock);
+ 	if (ticket.bytes) {
+-		if (ticket.bytes < orig_bytes) {
+-			u64 num_bytes = orig_bytes - ticket.bytes;
+-			update_bytes_may_use(space_info, -num_bytes);
+-			trace_btrfs_space_reservation(fs_info, "space_info",
+-						      space_info->flags,
+-						      num_bytes, 0);
+-
+-		}
++		if (ticket.bytes < orig_bytes)
++			reclaim_bytes = orig_bytes - ticket.bytes;
+ 		list_del_init(&ticket.list);
+ 		ret = -ENOSPC;
+ 	}
+ 	spin_unlock(&space_info->lock);
++
++	if (reclaim_bytes)
++		space_info_add_old_bytes(fs_info, space_info, reclaim_bytes);
+ 	ASSERT(list_empty(&ticket.list));
+ 	return ret;
+ }
+@@ -8690,6 +8695,8 @@ struct walk_control {
+ 	u64 refs[BTRFS_MAX_LEVEL];
+ 	u64 flags[BTRFS_MAX_LEVEL];
+ 	struct btrfs_key update_progress;
++	struct btrfs_key drop_progress;
++	int drop_level;
+ 	int stage;
+ 	int level;
+ 	int shared_level;
+@@ -9028,6 +9035,16 @@ skip:
+ 					     ret);
+ 			}
+ 		}
++
++		/*
++		 * We need to update the next key in our walk control so we can
++		 * update the drop_progress key accordingly.  We don't care if
++		 * find_next_key doesn't find a key because that means we're at
++		 * the end and are going to clean up now.
++		 */
++		wc->drop_level = level;
++		find_next_key(path, level, &wc->drop_progress);
++
+ 		ret = btrfs_free_extent(trans, root, bytenr, fs_info->nodesize,
+ 					parent, root->root_key.objectid,
+ 					level - 1, 0);
+@@ -9378,12 +9395,14 @@ int btrfs_drop_snapshot(struct btrfs_root *root,
+ 		}
+ 
+ 		if (wc->stage == DROP_REFERENCE) {
+-			level = wc->level;
+-			btrfs_node_key(path->nodes[level],
+-				       &root_item->drop_progress,
+-				       path->slots[level]);
+-			root_item->drop_level = level;
+-		}
++			wc->drop_level = wc->level;
++			btrfs_node_key_to_cpu(path->nodes[wc->drop_level],
++					      &wc->drop_progress,
++					      path->slots[wc->drop_level]);
++		}
++		btrfs_cpu_key_to_disk(&root_item->drop_progress,
++				      &wc->drop_progress);
++		root_item->drop_level = wc->drop_level;
+ 
+ 		BUG_ON(wc->level == 0);
+ 		if (btrfs_should_end_transaction(trans) ||
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index 543dd5e66f31..e28fb43e943b 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -2842,16 +2842,15 @@ out:
+ /*
+  * Two limits to commit transaction in advance.
+  *
+- * For RATIO, it will be 1/RATIO of the remaining limit
+- * (excluding data and prealloc meta) as threshold.
++ * For RATIO, it will be 1/RATIO of the remaining limit as threshold.
+  * For SIZE, it will be in byte unit as threshold.
+  */
+-#define QGROUP_PERTRANS_RATIO		32
+-#define QGROUP_PERTRANS_SIZE		SZ_32M
++#define QGROUP_FREE_RATIO		32
++#define QGROUP_FREE_SIZE		SZ_32M
+ static bool qgroup_check_limits(struct btrfs_fs_info *fs_info,
+ 				const struct btrfs_qgroup *qg, u64 num_bytes)
+ {
+-	u64 limit;
++	u64 free;
+ 	u64 threshold;
+ 
+ 	if ((qg->lim_flags & BTRFS_QGROUP_LIMIT_MAX_RFER) &&
+@@ -2870,20 +2869,21 @@ static bool qgroup_check_limits(struct btrfs_fs_info *fs_info,
+ 	 */
+ 	if ((qg->lim_flags & (BTRFS_QGROUP_LIMIT_MAX_RFER |
+ 			      BTRFS_QGROUP_LIMIT_MAX_EXCL))) {
+-		if (qg->lim_flags & BTRFS_QGROUP_LIMIT_MAX_EXCL)
+-			limit = qg->max_excl;
+-		else
+-			limit = qg->max_rfer;
+-		threshold = (limit - qg->rsv.values[BTRFS_QGROUP_RSV_DATA] -
+-			    qg->rsv.values[BTRFS_QGROUP_RSV_META_PREALLOC]) /
+-			    QGROUP_PERTRANS_RATIO;
+-		threshold = min_t(u64, threshold, QGROUP_PERTRANS_SIZE);
++		if (qg->lim_flags & BTRFS_QGROUP_LIMIT_MAX_EXCL) {
++			free = qg->max_excl - qgroup_rsv_total(qg) - qg->excl;
++			threshold = min_t(u64, qg->max_excl / QGROUP_FREE_RATIO,
++					  QGROUP_FREE_SIZE);
++		} else {
++			free = qg->max_rfer - qgroup_rsv_total(qg) - qg->rfer;
++			threshold = min_t(u64, qg->max_rfer / QGROUP_FREE_RATIO,
++					  QGROUP_FREE_SIZE);
++		}
+ 
+ 		/*
+ 		 * Use transaction_kthread to commit transaction, so we no
+ 		 * longer need to bother nested transaction nor lock context.
+ 		 */
+-		if (qg->rsv.values[BTRFS_QGROUP_RSV_META_PERTRANS] > threshold)
++		if (free < threshold)
+ 			btrfs_commit_transaction_locksafe(fs_info);
+ 	}
+ 
+diff --git a/fs/buffer.c b/fs/buffer.c
+index 48318fb74938..cab7a026876b 100644
+--- a/fs/buffer.c
++++ b/fs/buffer.c
+@@ -3027,6 +3027,13 @@ void guard_bio_eod(int op, struct bio *bio)
+ 	/* Uhhuh. We've got a bio that straddles the device size! */
+ 	truncated_bytes = bio->bi_iter.bi_size - (maxsector << 9);
+ 
++	/*
++	 * The bio contains more than one segment which spans EOD, just return
++	 * and let IO layer turn it into an EIO
++	 */
++	if (truncated_bytes > bvec->bv_len)
++		return;
++
+ 	/* Truncate the bio.. */
+ 	bio->bi_iter.bi_size -= truncated_bytes;
+ 	bvec->bv_len -= truncated_bytes;
+diff --git a/fs/cifs/cifs_dfs_ref.c b/fs/cifs/cifs_dfs_ref.c
+index d9b99abe1243..5d83c924cc47 100644
+--- a/fs/cifs/cifs_dfs_ref.c
++++ b/fs/cifs/cifs_dfs_ref.c
+@@ -285,9 +285,9 @@ static void dump_referral(const struct dfs_info3_param *ref)
+ {
+ 	cifs_dbg(FYI, "DFS: ref path: %s\n", ref->path_name);
+ 	cifs_dbg(FYI, "DFS: node path: %s\n", ref->node_name);
+-	cifs_dbg(FYI, "DFS: fl: %hd, srv_type: %hd\n",
++	cifs_dbg(FYI, "DFS: fl: %d, srv_type: %d\n",
+ 		 ref->flags, ref->server_type);
+-	cifs_dbg(FYI, "DFS: ref_flags: %hd, path_consumed: %hd\n",
++	cifs_dbg(FYI, "DFS: ref_flags: %d, path_consumed: %d\n",
+ 		 ref->ref_flag, ref->path_consumed);
+ }
+ 
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index e61cd2938c9e..9d4e60123db4 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -1487,6 +1487,11 @@ cifs_parse_devname(const char *devname, struct smb_vol *vol)
+ 	const char *delims = "/\\";
+ 	size_t len;
+ 
++	if (unlikely(!devname || !*devname)) {
++		cifs_dbg(VFS, "Device name not specified.\n");
++		return -EINVAL;
++	}
++
+ 	/* make sure we have a valid UNC double delimiter prefix */
+ 	len = strspn(devname, delims);
+ 	if (len != 2)
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index 95461db80011..8d107587208f 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -1645,8 +1645,20 @@ cifs_setlk(struct file *file, struct file_lock *flock, __u32 type,
+ 		rc = server->ops->mand_unlock_range(cfile, flock, xid);
+ 
+ out:
+-	if (flock->fl_flags & FL_POSIX && !rc)
++	if (flock->fl_flags & FL_POSIX) {
++		/*
++		 * If this is a request to remove all locks because we
++		 * are closing the file, it doesn't matter if the
++		 * unlocking failed as both cifs.ko and the SMB server
++		 * remove the lock on file close
++		 */
++		if (rc) {
++			cifs_dbg(VFS, "%s failed rc=%d\n", __func__, rc);
++			if (!(flock->fl_flags & FL_CLOSE))
++				return rc;
++		}
+ 		rc = locks_lock_file_wait(file, flock);
++	}
+ 	return rc;
+ }
+ 
+diff --git a/fs/cifs/smb1ops.c b/fs/cifs/smb1ops.c
+index 32a6c020478f..20a88776f04d 100644
+--- a/fs/cifs/smb1ops.c
++++ b/fs/cifs/smb1ops.c
+@@ -308,7 +308,7 @@ coalesce_t2(char *second_buf, struct smb_hdr *target_hdr)
+ 	remaining = tgt_total_cnt - total_in_tgt;
+ 
+ 	if (remaining < 0) {
+-		cifs_dbg(FYI, "Server sent too much data. tgt_total_cnt=%hu total_in_tgt=%hu\n",
++		cifs_dbg(FYI, "Server sent too much data. tgt_total_cnt=%hu total_in_tgt=%u\n",
+ 			 tgt_total_cnt, total_in_tgt);
+ 		return -EPROTO;
+ 	}
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 104905732fbe..53642a237bf9 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -986,8 +986,14 @@ int smb3_validate_negotiate(const unsigned int xid, struct cifs_tcon *tcon)
+ 	rc = SMB2_ioctl(xid, tcon, NO_FILE_ID, NO_FILE_ID,
+ 		FSCTL_VALIDATE_NEGOTIATE_INFO, true /* is_fsctl */,
+ 		(char *)pneg_inbuf, inbuflen, (char **)&pneg_rsp, &rsplen);
+-
+-	if (rc != 0) {
++	if (rc == -EOPNOTSUPP) {
++		/*
++		 * Old Windows versions or Netapp SMB server can return
++		 * not supported error. Client should accept it.
++		 */
++		cifs_dbg(VFS, "Server does not support validate negotiate\n");
++		return 0;
++	} else if (rc != 0) {
+ 		cifs_dbg(VFS, "validate protocol negotiate failed: %d\n", rc);
+ 		rc = -EIO;
+ 		goto out_free_inbuf;
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 240b6dea5441..252bbbb5a2f4 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -2956,14 +2956,17 @@ again:
+ 			if (err < 0)
+ 				goto out;
+ 
+-		} else if (sbi->s_cluster_ratio > 1 && end >= ex_end) {
++		} else if (sbi->s_cluster_ratio > 1 && end >= ex_end &&
++			   partial.state == initial) {
+ 			/*
+-			 * If there's an extent to the right its first cluster
+-			 * contains the immediate right boundary of the
+-			 * truncated/punched region.  Set partial_cluster to
+-			 * its negative value so it won't be freed if shared
+-			 * with the current extent.  The end < ee_block case
+-			 * is handled in ext4_ext_rm_leaf().
++			 * If we're punching, there's an extent to the right.
++			 * If the partial cluster hasn't been set, set it to
++			 * that extent's first cluster and its state to nofree
++			 * so it won't be freed should it contain blocks to be
++			 * removed. If it's already set (tofree/nofree), we're
++			 * retrying and keep the original partial cluster info
++			 * so a cluster marked tofree as a result of earlier
++			 * extent removal is not lost.
+ 			 */
+ 			lblk = ex_end + 1;
+ 			err = ext4_ext_search_right(inode, path, &lblk, &pblk,
+diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c
+index 9e96a0bd08d9..e1801b288847 100644
+--- a/fs/ext4/indirect.c
++++ b/fs/ext4/indirect.c
+@@ -1219,6 +1219,7 @@ int ext4_ind_remove_space(handle_t *handle, struct inode *inode,
+ 	ext4_lblk_t offsets[4], offsets2[4];
+ 	Indirect chain[4], chain2[4];
+ 	Indirect *partial, *partial2;
++	Indirect *p = NULL, *p2 = NULL;
+ 	ext4_lblk_t max_block;
+ 	__le32 nr = 0, nr2 = 0;
+ 	int n = 0, n2 = 0;
+@@ -1260,7 +1261,7 @@ int ext4_ind_remove_space(handle_t *handle, struct inode *inode,
+ 		}
+ 
+ 
+-		partial = ext4_find_shared(inode, n, offsets, chain, &nr);
++		partial = p = ext4_find_shared(inode, n, offsets, chain, &nr);
+ 		if (nr) {
+ 			if (partial == chain) {
+ 				/* Shared branch grows from the inode */
+@@ -1285,13 +1286,11 @@ int ext4_ind_remove_space(handle_t *handle, struct inode *inode,
+ 				partial->p + 1,
+ 				(__le32 *)partial->bh->b_data+addr_per_block,
+ 				(chain+n-1) - partial);
+-			BUFFER_TRACE(partial->bh, "call brelse");
+-			brelse(partial->bh);
+ 			partial--;
+ 		}
+ 
+ end_range:
+-		partial2 = ext4_find_shared(inode, n2, offsets2, chain2, &nr2);
++		partial2 = p2 = ext4_find_shared(inode, n2, offsets2, chain2, &nr2);
+ 		if (nr2) {
+ 			if (partial2 == chain2) {
+ 				/*
+@@ -1321,16 +1320,14 @@ end_range:
+ 					   (__le32 *)partial2->bh->b_data,
+ 					   partial2->p,
+ 					   (chain2+n2-1) - partial2);
+-			BUFFER_TRACE(partial2->bh, "call brelse");
+-			brelse(partial2->bh);
+ 			partial2--;
+ 		}
+ 		goto do_indirects;
+ 	}
+ 
+ 	/* Punch happened within the same level (n == n2) */
+-	partial = ext4_find_shared(inode, n, offsets, chain, &nr);
+-	partial2 = ext4_find_shared(inode, n2, offsets2, chain2, &nr2);
++	partial = p = ext4_find_shared(inode, n, offsets, chain, &nr);
++	partial2 = p2 = ext4_find_shared(inode, n2, offsets2, chain2, &nr2);
+ 
+ 	/* Free top, but only if partial2 isn't its subtree. */
+ 	if (nr) {
+@@ -1387,15 +1384,7 @@ end_range:
+ 					   partial->p + 1,
+ 					   partial2->p,
+ 					   (chain+n-1) - partial);
+-			while (partial > chain) {
+-				BUFFER_TRACE(partial->bh, "call brelse");
+-				brelse(partial->bh);
+-			}
+-			while (partial2 > chain2) {
+-				BUFFER_TRACE(partial2->bh, "call brelse");
+-				brelse(partial2->bh);
+-			}
+-			return 0;
++			goto cleanup;
+ 		}
+ 
+ 		/*
+@@ -1410,8 +1399,6 @@ end_range:
+ 					   partial->p + 1,
+ 					   (__le32 *)partial->bh->b_data+addr_per_block,
+ 					   (chain+n-1) - partial);
+-			BUFFER_TRACE(partial->bh, "call brelse");
+-			brelse(partial->bh);
+ 			partial--;
+ 		}
+ 		if (partial2 > chain2 && depth2 <= depth) {
+@@ -1419,11 +1406,21 @@ end_range:
+ 					   (__le32 *)partial2->bh->b_data,
+ 					   partial2->p,
+ 					   (chain2+n2-1) - partial2);
+-			BUFFER_TRACE(partial2->bh, "call brelse");
+-			brelse(partial2->bh);
+ 			partial2--;
+ 		}
+ 	}
++
++cleanup:
++	while (p && p > chain) {
++		BUFFER_TRACE(p->bh, "call brelse");
++		brelse(p->bh);
++		p--;
++	}
++	while (p2 && p2 > chain2) {
++		BUFFER_TRACE(p2->bh, "call brelse");
++		brelse(p2->bh);
++		p2--;
++	}
+ 	return 0;
+ 
+ do_indirects:
+@@ -1431,7 +1428,7 @@ do_indirects:
+ 	switch (offsets[0]) {
+ 	default:
+ 		if (++n >= n2)
+-			return 0;
++			break;
+ 		nr = i_data[EXT4_IND_BLOCK];
+ 		if (nr) {
+ 			ext4_free_branches(handle, inode, NULL, &nr, &nr+1, 1);
+@@ -1439,7 +1436,7 @@ do_indirects:
+ 		}
+ 	case EXT4_IND_BLOCK:
+ 		if (++n >= n2)
+-			return 0;
++			break;
+ 		nr = i_data[EXT4_DIND_BLOCK];
+ 		if (nr) {
+ 			ext4_free_branches(handle, inode, NULL, &nr, &nr+1, 2);
+@@ -1447,7 +1444,7 @@ do_indirects:
+ 		}
+ 	case EXT4_DIND_BLOCK:
+ 		if (++n >= n2)
+-			return 0;
++			break;
+ 		nr = i_data[EXT4_TIND_BLOCK];
+ 		if (nr) {
+ 			ext4_free_branches(handle, inode, NULL, &nr, &nr+1, 3);
+@@ -1456,5 +1453,5 @@ do_indirects:
+ 	case EXT4_TIND_BLOCK:
+ 		;
+ 	}
+-	return 0;
++	goto cleanup;
+ }
+diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c
+index 1cb0fcc67d2d..caf77fe8ac07 100644
+--- a/fs/f2fs/extent_cache.c
++++ b/fs/f2fs/extent_cache.c
+@@ -506,7 +506,7 @@ static void f2fs_update_extent_tree_range(struct inode *inode,
+ 	unsigned int end = fofs + len;
+ 	unsigned int pos = (unsigned int)fofs;
+ 	bool updated = false;
+-	bool leftmost;
++	bool leftmost = false;
+ 
+ 	if (!et)
+ 		return;
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 12fabd6735dd..279bc00489cc 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -456,7 +456,6 @@ struct f2fs_flush_device {
+ 
+ /* for inline stuff */
+ #define DEF_INLINE_RESERVED_SIZE	1
+-#define DEF_MIN_INLINE_SIZE		1
+ static inline int get_extra_isize(struct inode *inode);
+ static inline int get_inline_xattr_addrs(struct inode *inode);
+ #define MAX_INLINE_DATA(inode)	(sizeof(__le32) *			\
+diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
+index d636cbcf68f2..aacbb864ec1e 100644
+--- a/fs/f2fs/inline.c
++++ b/fs/f2fs/inline.c
+@@ -659,6 +659,12 @@ int f2fs_read_inline_dir(struct file *file, struct dir_context *ctx,
+ 	if (IS_ERR(ipage))
+ 		return PTR_ERR(ipage);
+ 
++	/*
++	 * f2fs_readdir was protected by inode.i_rwsem, it is safe to access
++	 * ipage without page's lock held.
++	 */
++	unlock_page(ipage);
++
+ 	inline_dentry = inline_data_addr(inode, ipage);
+ 
+ 	make_dentry_ptr_inline(inode, &d, inline_dentry);
+@@ -667,7 +673,7 @@ int f2fs_read_inline_dir(struct file *file, struct dir_context *ctx,
+ 	if (!err)
+ 		ctx->pos = d.max;
+ 
+-	f2fs_put_page(ipage, 1);
++	f2fs_put_page(ipage, 0);
+ 	return err < 0 ? err : 0;
+ }
+ 
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index c46a1d4318d4..5892fa3c885f 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -834,12 +834,13 @@ static int parse_options(struct super_block *sb, char *options)
+ 					"set with inline_xattr option");
+ 			return -EINVAL;
+ 		}
+-		if (!F2FS_OPTION(sbi).inline_xattr_size ||
+-			F2FS_OPTION(sbi).inline_xattr_size >=
+-					DEF_ADDRS_PER_INODE -
+-					F2FS_TOTAL_EXTRA_ATTR_SIZE -
+-					DEF_INLINE_RESERVED_SIZE -
+-					DEF_MIN_INLINE_SIZE) {
++		if (F2FS_OPTION(sbi).inline_xattr_size <
++			sizeof(struct f2fs_xattr_header) / sizeof(__le32) ||
++			F2FS_OPTION(sbi).inline_xattr_size >
++			DEF_ADDRS_PER_INODE -
++			F2FS_TOTAL_EXTRA_ATTR_SIZE / sizeof(__le32) -
++			DEF_INLINE_RESERVED_SIZE -
++			MIN_INLINE_DENTRY_SIZE / sizeof(__le32)) {
+ 			f2fs_msg(sb, KERN_ERR,
+ 					"inline xattr size is out of range");
+ 			return -EINVAL;
+@@ -915,6 +916,10 @@ static int f2fs_drop_inode(struct inode *inode)
+ 			sb_start_intwrite(inode->i_sb);
+ 			f2fs_i_size_write(inode, 0);
+ 
++			f2fs_submit_merged_write_cond(F2FS_I_SB(inode),
++					inode, NULL, 0, DATA);
++			truncate_inode_pages_final(inode->i_mapping);
++
+ 			if (F2FS_HAS_BLOCKS(inode))
+ 				f2fs_truncate(inode);
+ 
+diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
+index 0575edbe3ed6..f1ab9000b294 100644
+--- a/fs/f2fs/sysfs.c
++++ b/fs/f2fs/sysfs.c
+@@ -278,10 +278,16 @@ out:
+ 		return count;
+ 	}
+ 
+-	*ui = t;
+ 
+-	if (!strcmp(a->attr.name, "iostat_enable") && *ui == 0)
+-		f2fs_reset_iostat(sbi);
++	if (!strcmp(a->attr.name, "iostat_enable")) {
++		sbi->iostat_enable = !!t;
++		if (!sbi->iostat_enable)
++			f2fs_reset_iostat(sbi);
++		return count;
++	}
++
++	*ui = (unsigned int)t;
++
+ 	return count;
+ }
+ 
+diff --git a/fs/f2fs/trace.c b/fs/f2fs/trace.c
+index ce2a5eb210b6..d0ab533a9ce8 100644
+--- a/fs/f2fs/trace.c
++++ b/fs/f2fs/trace.c
+@@ -14,7 +14,7 @@
+ #include "trace.h"
+ 
+ static RADIX_TREE(pids, GFP_ATOMIC);
+-static struct mutex pids_lock;
++static spinlock_t pids_lock;
+ static struct last_io_info last_io;
+ 
+ static inline void __print_last_io(void)
+@@ -58,23 +58,29 @@ void f2fs_trace_pid(struct page *page)
+ 
+ 	set_page_private(page, (unsigned long)pid);
+ 
++retry:
+ 	if (radix_tree_preload(GFP_NOFS))
+ 		return;
+ 
+-	mutex_lock(&pids_lock);
++	spin_lock(&pids_lock);
+ 	p = radix_tree_lookup(&pids, pid);
+ 	if (p == current)
+ 		goto out;
+ 	if (p)
+ 		radix_tree_delete(&pids, pid);
+ 
+-	f2fs_radix_tree_insert(&pids, pid, current);
++	if (radix_tree_insert(&pids, pid, current)) {
++		spin_unlock(&pids_lock);
++		radix_tree_preload_end();
++		cond_resched();
++		goto retry;
++	}
+ 
+ 	trace_printk("%3x:%3x %4x %-16s\n",
+ 			MAJOR(inode->i_sb->s_dev), MINOR(inode->i_sb->s_dev),
+ 			pid, current->comm);
+ out:
+-	mutex_unlock(&pids_lock);
++	spin_unlock(&pids_lock);
+ 	radix_tree_preload_end();
+ }
+ 
+@@ -119,7 +125,7 @@ void f2fs_trace_ios(struct f2fs_io_info *fio, int flush)
+ 
+ void f2fs_build_trace_ios(void)
+ {
+-	mutex_init(&pids_lock);
++	spin_lock_init(&pids_lock);
+ }
+ 
+ #define PIDVEC_SIZE	128
+@@ -147,7 +153,7 @@ void f2fs_destroy_trace_ios(void)
+ 	pid_t next_pid = 0;
+ 	unsigned int found;
+ 
+-	mutex_lock(&pids_lock);
++	spin_lock(&pids_lock);
+ 	while ((found = gang_lookup_pids(pid, next_pid, PIDVEC_SIZE))) {
+ 		unsigned idx;
+ 
+@@ -155,5 +161,5 @@ void f2fs_destroy_trace_ios(void)
+ 		for (idx = 0; idx < found; idx++)
+ 			radix_tree_delete(&pids, pid[idx]);
+ 	}
+-	mutex_unlock(&pids_lock);
++	spin_unlock(&pids_lock);
+ }
+diff --git a/fs/f2fs/xattr.c b/fs/f2fs/xattr.c
+index 18d5ffbc5e8c..73b92985198b 100644
+--- a/fs/f2fs/xattr.c
++++ b/fs/f2fs/xattr.c
+@@ -224,11 +224,11 @@ static struct f2fs_xattr_entry *__find_inline_xattr(struct inode *inode,
+ {
+ 	struct f2fs_xattr_entry *entry;
+ 	unsigned int inline_size = inline_xattr_size(inode);
++	void *max_addr = base_addr + inline_size;
+ 
+ 	list_for_each_xattr(entry, base_addr) {
+-		if ((void *)entry + sizeof(__u32) > base_addr + inline_size ||
+-			(void *)XATTR_NEXT_ENTRY(entry) + sizeof(__u32) >
+-			base_addr + inline_size) {
++		if ((void *)entry + sizeof(__u32) > max_addr ||
++			(void *)XATTR_NEXT_ENTRY(entry) > max_addr) {
+ 			*last_addr = entry;
+ 			return NULL;
+ 		}
+@@ -239,6 +239,13 @@ static struct f2fs_xattr_entry *__find_inline_xattr(struct inode *inode,
+ 		if (!memcmp(entry->e_name, name, len))
+ 			break;
+ 	}
++
++	/* inline xattr header or entry across max inline xattr size */
++	if (IS_XATTR_LAST_ENTRY(entry) &&
++		(void *)entry + sizeof(__u32) > max_addr) {
++		*last_addr = entry;
++		return NULL;
++	}
+ 	return entry;
+ }
+ 
+diff --git a/fs/file.c b/fs/file.c
+index 3209ee271c41..a10487aa0a84 100644
+--- a/fs/file.c
++++ b/fs/file.c
+@@ -457,6 +457,7 @@ struct files_struct init_files = {
+ 		.full_fds_bits	= init_files.full_fds_bits_init,
+ 	},
+ 	.file_lock	= __SPIN_LOCK_UNLOCKED(init_files.file_lock),
++	.resize_wait	= __WAIT_QUEUE_HEAD_INITIALIZER(init_files.resize_wait),
+ };
+ 
+ static unsigned int find_next_fd(struct fdtable *fdt, unsigned int start)
+diff --git a/fs/jbd2/commit.c b/fs/jbd2/commit.c
+index 2eb55c3361a8..efd0ce9489ae 100644
+--- a/fs/jbd2/commit.c
++++ b/fs/jbd2/commit.c
+@@ -694,9 +694,11 @@ void jbd2_journal_commit_transaction(journal_t *journal)
+                            the last tag we set up. */
+ 
+ 			tag->t_flags |= cpu_to_be16(JBD2_FLAG_LAST_TAG);
+-
+-			jbd2_descriptor_block_csum_set(journal, descriptor);
+ start_journal_io:
++			if (descriptor)
++				jbd2_descriptor_block_csum_set(journal,
++							descriptor);
++
+ 			for (i = 0; i < bufs; i++) {
+ 				struct buffer_head *bh = wbuf[i];
+ 				/*
+diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
+index 8ef6b6daaa7a..88f2a49338a1 100644
+--- a/fs/jbd2/journal.c
++++ b/fs/jbd2/journal.c
+@@ -1356,6 +1356,10 @@ static int journal_reset(journal_t *journal)
+ 	return jbd2_journal_start_thread(journal);
+ }
+ 
++/*
++ * This function expects that the caller will have locked the journal
++ * buffer head, and will return with it unlocked
++ */
+ static int jbd2_write_superblock(journal_t *journal, int write_flags)
+ {
+ 	struct buffer_head *bh = journal->j_sb_buffer;
+@@ -1365,7 +1369,6 @@ static int jbd2_write_superblock(journal_t *journal, int write_flags)
+ 	trace_jbd2_write_superblock(journal, write_flags);
+ 	if (!(journal->j_flags & JBD2_BARRIER))
+ 		write_flags &= ~(REQ_FUA | REQ_PREFLUSH);
+-	lock_buffer(bh);
+ 	if (buffer_write_io_error(bh)) {
+ 		/*
+ 		 * Oh, dear.  A previous attempt to write the journal
+@@ -1424,6 +1427,7 @@ int jbd2_journal_update_sb_log_tail(journal_t *journal, tid_t tail_tid,
+ 	jbd_debug(1, "JBD2: updating superblock (start %lu, seq %u)\n",
+ 		  tail_block, tail_tid);
+ 
++	lock_buffer(journal->j_sb_buffer);
+ 	sb->s_sequence = cpu_to_be32(tail_tid);
+ 	sb->s_start    = cpu_to_be32(tail_block);
+ 
+@@ -1454,18 +1458,17 @@ static void jbd2_mark_journal_empty(journal_t *journal, int write_op)
+ 	journal_superblock_t *sb = journal->j_superblock;
+ 
+ 	BUG_ON(!mutex_is_locked(&journal->j_checkpoint_mutex));
+-	read_lock(&journal->j_state_lock);
+-	/* Is it already empty? */
+-	if (sb->s_start == 0) {
+-		read_unlock(&journal->j_state_lock);
++	lock_buffer(journal->j_sb_buffer);
++	if (sb->s_start == 0) {		/* Is it already empty? */
++		unlock_buffer(journal->j_sb_buffer);
+ 		return;
+ 	}
++
+ 	jbd_debug(1, "JBD2: Marking journal as empty (seq %d)\n",
+ 		  journal->j_tail_sequence);
+ 
+ 	sb->s_sequence = cpu_to_be32(journal->j_tail_sequence);
+ 	sb->s_start    = cpu_to_be32(0);
+-	read_unlock(&journal->j_state_lock);
+ 
+ 	jbd2_write_superblock(journal, write_op);
+ 
+@@ -1488,9 +1491,8 @@ void jbd2_journal_update_sb_errno(journal_t *journal)
+ 	journal_superblock_t *sb = journal->j_superblock;
+ 	int errcode;
+ 
+-	read_lock(&journal->j_state_lock);
++	lock_buffer(journal->j_sb_buffer);
+ 	errcode = journal->j_errno;
+-	read_unlock(&journal->j_state_lock);
+ 	if (errcode == -ESHUTDOWN)
+ 		errcode = 0;
+ 	jbd_debug(1, "JBD2: updating superblock error (errno %d)\n", errcode);
+@@ -1894,28 +1896,27 @@ int jbd2_journal_set_features (journal_t *journal, unsigned long compat,
+ 
+ 	sb = journal->j_superblock;
+ 
++	/* Load the checksum driver if necessary */
++	if ((journal->j_chksum_driver == NULL) &&
++	    INCOMPAT_FEATURE_ON(JBD2_FEATURE_INCOMPAT_CSUM_V3)) {
++		journal->j_chksum_driver = crypto_alloc_shash("crc32c", 0, 0);
++		if (IS_ERR(journal->j_chksum_driver)) {
++			printk(KERN_ERR "JBD2: Cannot load crc32c driver.\n");
++			journal->j_chksum_driver = NULL;
++			return 0;
++		}
++		/* Precompute checksum seed for all metadata */
++		journal->j_csum_seed = jbd2_chksum(journal, ~0, sb->s_uuid,
++						   sizeof(sb->s_uuid));
++	}
++
++	lock_buffer(journal->j_sb_buffer);
++
+ 	/* If enabling v3 checksums, update superblock */
+ 	if (INCOMPAT_FEATURE_ON(JBD2_FEATURE_INCOMPAT_CSUM_V3)) {
+ 		sb->s_checksum_type = JBD2_CRC32C_CHKSUM;
+ 		sb->s_feature_compat &=
+ 			~cpu_to_be32(JBD2_FEATURE_COMPAT_CHECKSUM);
+-
+-		/* Load the checksum driver */
+-		if (journal->j_chksum_driver == NULL) {
+-			journal->j_chksum_driver = crypto_alloc_shash("crc32c",
+-								      0, 0);
+-			if (IS_ERR(journal->j_chksum_driver)) {
+-				printk(KERN_ERR "JBD2: Cannot load crc32c "
+-				       "driver.\n");
+-				journal->j_chksum_driver = NULL;
+-				return 0;
+-			}
+-
+-			/* Precompute checksum seed for all metadata */
+-			journal->j_csum_seed = jbd2_chksum(journal, ~0,
+-							   sb->s_uuid,
+-							   sizeof(sb->s_uuid));
+-		}
+ 	}
+ 
+ 	/* If enabling v1 checksums, downgrade superblock */
+@@ -1927,6 +1928,7 @@ int jbd2_journal_set_features (journal_t *journal, unsigned long compat,
+ 	sb->s_feature_compat    |= cpu_to_be32(compat);
+ 	sb->s_feature_ro_compat |= cpu_to_be32(ro);
+ 	sb->s_feature_incompat  |= cpu_to_be32(incompat);
++	unlock_buffer(journal->j_sb_buffer);
+ 
+ 	return 1;
+ #undef COMPAT_FEATURE_ON
+diff --git a/fs/ocfs2/cluster/nodemanager.c b/fs/ocfs2/cluster/nodemanager.c
+index 0e4166cc23a0..4ac775e32240 100644
+--- a/fs/ocfs2/cluster/nodemanager.c
++++ b/fs/ocfs2/cluster/nodemanager.c
+@@ -621,13 +621,15 @@ static void o2nm_node_group_drop_item(struct config_group *group,
+ 	struct o2nm_node *node = to_o2nm_node(item);
+ 	struct o2nm_cluster *cluster = to_o2nm_cluster(group->cg_item.ci_parent);
+ 
+-	o2net_disconnect_node(node);
++	if (cluster->cl_nodes[node->nd_num] == node) {
++		o2net_disconnect_node(node);
+ 
+-	if (cluster->cl_has_local &&
+-	    (cluster->cl_local_node == node->nd_num)) {
+-		cluster->cl_has_local = 0;
+-		cluster->cl_local_node = O2NM_INVALID_NODE_NUM;
+-		o2net_stop_listening(node);
++		if (cluster->cl_has_local &&
++		    (cluster->cl_local_node == node->nd_num)) {
++			cluster->cl_has_local = 0;
++			cluster->cl_local_node = O2NM_INVALID_NODE_NUM;
++			o2net_stop_listening(node);
++		}
+ 	}
+ 
+ 	/* XXX call into net to stop this node from trading messages */
+diff --git a/fs/read_write.c b/fs/read_write.c
+index ff3c5e6f87cf..27b69b85d49f 100644
+--- a/fs/read_write.c
++++ b/fs/read_write.c
+@@ -1238,6 +1238,9 @@ COMPAT_SYSCALL_DEFINE5(preadv64v2, unsigned long, fd,
+ 		const struct compat_iovec __user *,vec,
+ 		unsigned long, vlen, loff_t, pos, rwf_t, flags)
+ {
++	if (pos == -1)
++		return do_compat_readv(fd, vec, vlen, flags);
++
+ 	return do_compat_preadv64(fd, vec, vlen, pos, flags);
+ }
+ #endif
+@@ -1344,6 +1347,9 @@ COMPAT_SYSCALL_DEFINE5(pwritev64v2, unsigned long, fd,
+ 		const struct compat_iovec __user *,vec,
+ 		unsigned long, vlen, loff_t, pos, rwf_t, flags)
+ {
++	if (pos == -1)
++		return do_compat_writev(fd, vec, vlen, flags);
++
+ 	return do_compat_pwritev64(fd, vec, vlen, pos, flags);
+ }
+ #endif
+diff --git a/include/linux/atalk.h b/include/linux/atalk.h
+index 23f805562f4e..840cf92307ba 100644
+--- a/include/linux/atalk.h
++++ b/include/linux/atalk.h
+@@ -161,16 +161,26 @@ extern int sysctl_aarp_resolve_time;
+ extern void atalk_register_sysctl(void);
+ extern void atalk_unregister_sysctl(void);
+ #else
+-#define atalk_register_sysctl()		do { } while(0)
+-#define atalk_unregister_sysctl()	do { } while(0)
++static inline int atalk_register_sysctl(void)
++{
++	return 0;
++}
++static inline void atalk_unregister_sysctl(void)
++{
++}
+ #endif
+ 
+ #ifdef CONFIG_PROC_FS
+ extern int atalk_proc_init(void);
+ extern void atalk_proc_exit(void);
+ #else
+-#define atalk_proc_init()	({ 0; })
+-#define atalk_proc_exit()	do { } while(0)
++static inline int atalk_proc_init(void)
++{
++	return 0;
++}
++static inline void atalk_proc_exit(void)
++{
++}
+ #endif /* CONFIG_PROC_FS */
+ 
+ #endif /* __LINUX_ATALK_H__ */
+diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
+index 8fcbae1b8db0..120d1d40704b 100644
+--- a/include/linux/cgroup-defs.h
++++ b/include/linux/cgroup-defs.h
+@@ -602,7 +602,7 @@ struct cgroup_subsys {
+ 	void (*cancel_fork)(struct task_struct *task);
+ 	void (*fork)(struct task_struct *task);
+ 	void (*exit)(struct task_struct *task);
+-	void (*free)(struct task_struct *task);
++	void (*release)(struct task_struct *task);
+ 	void (*bind)(struct cgroup_subsys_state *root_css);
+ 
+ 	bool early_init:1;
+diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
+index 9968332cceed..81f58b4a5418 100644
+--- a/include/linux/cgroup.h
++++ b/include/linux/cgroup.h
+@@ -121,6 +121,7 @@ extern int cgroup_can_fork(struct task_struct *p);
+ extern void cgroup_cancel_fork(struct task_struct *p);
+ extern void cgroup_post_fork(struct task_struct *p);
+ void cgroup_exit(struct task_struct *p);
++void cgroup_release(struct task_struct *p);
+ void cgroup_free(struct task_struct *p);
+ 
+ int cgroup_init_early(void);
+@@ -697,6 +698,7 @@ static inline int cgroup_can_fork(struct task_struct *p) { return 0; }
+ static inline void cgroup_cancel_fork(struct task_struct *p) {}
+ static inline void cgroup_post_fork(struct task_struct *p) {}
+ static inline void cgroup_exit(struct task_struct *p) {}
++static inline void cgroup_release(struct task_struct *p) {}
+ static inline void cgroup_free(struct task_struct *p) {}
+ 
+ static inline int cgroup_init_early(void) { return 0; }
+diff --git a/include/linux/clk-provider.h b/include/linux/clk-provider.h
+index e443fa9fa859..b7cf80a71293 100644
+--- a/include/linux/clk-provider.h
++++ b/include/linux/clk-provider.h
+@@ -792,6 +792,9 @@ unsigned int __clk_get_enable_count(struct clk *clk);
+ unsigned long clk_hw_get_rate(const struct clk_hw *hw);
+ unsigned long __clk_get_flags(struct clk *clk);
+ unsigned long clk_hw_get_flags(const struct clk_hw *hw);
++#define clk_hw_can_set_rate_parent(hw) \
++	(clk_hw_get_flags((hw)) & CLK_SET_RATE_PARENT)
++
+ bool clk_hw_is_prepared(const struct clk_hw *hw);
+ bool clk_hw_rate_is_protected(const struct clk_hw *hw);
+ bool clk_hw_is_enabled(const struct clk_hw *hw);
+diff --git a/include/linux/efi.h b/include/linux/efi.h
+index 28604a8d0aa9..a86485ac7c87 100644
+--- a/include/linux/efi.h
++++ b/include/linux/efi.h
+@@ -1699,19 +1699,19 @@ extern int efi_tpm_eventlog_init(void);
+  * fault happened while executing an efi runtime service.
+  */
+ enum efi_rts_ids {
+-	NONE,
+-	GET_TIME,
+-	SET_TIME,
+-	GET_WAKEUP_TIME,
+-	SET_WAKEUP_TIME,
+-	GET_VARIABLE,
+-	GET_NEXT_VARIABLE,
+-	SET_VARIABLE,
+-	QUERY_VARIABLE_INFO,
+-	GET_NEXT_HIGH_MONO_COUNT,
+-	RESET_SYSTEM,
+-	UPDATE_CAPSULE,
+-	QUERY_CAPSULE_CAPS,
++	EFI_NONE,
++	EFI_GET_TIME,
++	EFI_SET_TIME,
++	EFI_GET_WAKEUP_TIME,
++	EFI_SET_WAKEUP_TIME,
++	EFI_GET_VARIABLE,
++	EFI_GET_NEXT_VARIABLE,
++	EFI_SET_VARIABLE,
++	EFI_QUERY_VARIABLE_INFO,
++	EFI_GET_NEXT_HIGH_MONO_COUNT,
++	EFI_RESET_SYSTEM,
++	EFI_UPDATE_CAPSULE,
++	EFI_QUERY_CAPSULE_CAPS,
+ };
+ 
+ /*
+diff --git a/include/linux/f2fs_fs.h b/include/linux/f2fs_fs.h
+index d7711048ef93..c524ad7d31da 100644
+--- a/include/linux/f2fs_fs.h
++++ b/include/linux/f2fs_fs.h
+@@ -489,12 +489,12 @@ typedef __le32	f2fs_hash_t;
+ 
+ /*
+  * space utilization of regular dentry and inline dentry (w/o extra reservation)
+- *		regular dentry			inline dentry
+- * bitmap	1 * 27 = 27			1 * 23 = 23
+- * reserved	1 * 3 = 3			1 * 7 = 7
+- * dentry	11 * 214 = 2354			11 * 182 = 2002
+- * filename	8 * 214 = 1712			8 * 182 = 1456
+- * total	4096				3488
++ *		regular dentry		inline dentry (def)	inline dentry (min)
++ * bitmap	1 * 27 = 27		1 * 23 = 23		1 * 1 = 1
++ * reserved	1 * 3 = 3		1 * 7 = 7		1 * 1 = 1
++ * dentry	11 * 214 = 2354		11 * 182 = 2002		11 * 2 = 22
++ * filename	8 * 214 = 1712		8 * 182 = 1456		8 * 2 = 16
++ * total	4096			3488			40
+  *
+  * Note: there are more reserved space in inline dentry than in regular
+  * dentry, when converting inline dentry we should handle this carefully.
+@@ -506,6 +506,7 @@ typedef __le32	f2fs_hash_t;
+ #define SIZE_OF_RESERVED	(PAGE_SIZE - ((SIZE_OF_DIR_ENTRY + \
+ 				F2FS_SLOT_LEN) * \
+ 				NR_DENTRY_IN_BLOCK + SIZE_OF_DENTRY_BITMAP))
++#define MIN_INLINE_DENTRY_SIZE		40	/* just include '.' and '..' entries */
+ 
+ /* One directory entry slot representing F2FS_SLOT_LEN-sized file name */
+ struct f2fs_dir_entry {
+diff --git a/include/linux/filter.h b/include/linux/filter.h
+index e532fcc6e4b5..3358646a8e7a 100644
+--- a/include/linux/filter.h
++++ b/include/linux/filter.h
+@@ -874,7 +874,9 @@ bpf_jit_binary_alloc(unsigned int proglen, u8 **image_ptr,
+ 		     unsigned int alignment,
+ 		     bpf_jit_fill_hole_t bpf_fill_ill_insns);
+ void bpf_jit_binary_free(struct bpf_binary_header *hdr);
+-
++u64 bpf_jit_alloc_exec_limit(void);
++void *bpf_jit_alloc_exec(unsigned long size);
++void bpf_jit_free_exec(void *addr);
+ void bpf_jit_free(struct bpf_prog *fp);
+ 
+ int bpf_jit_get_func_addr(const struct bpf_prog *prog,
+diff --git a/include/linux/i2c.h b/include/linux/i2c.h
+index 65b4eaed1d96..7e748648c7d3 100644
+--- a/include/linux/i2c.h
++++ b/include/linux/i2c.h
+@@ -333,6 +333,7 @@ struct i2c_client {
+ 	char name[I2C_NAME_SIZE];
+ 	struct i2c_adapter *adapter;	/* the adapter we sit on	*/
+ 	struct device dev;		/* the device structure		*/
++	int init_irq;			/* irq set at initialization	*/
+ 	int irq;			/* irq issued by device		*/
+ 	struct list_head detected;
+ #if IS_ENABLED(CONFIG_I2C_SLAVE)
+diff --git a/include/linux/irqdesc.h b/include/linux/irqdesc.h
+index dd1e40ddac7d..875c41b23f20 100644
+--- a/include/linux/irqdesc.h
++++ b/include/linux/irqdesc.h
+@@ -65,6 +65,7 @@ struct irq_desc {
+ 	unsigned int		core_internal_state__do_not_mess_with_it;
+ 	unsigned int		depth;		/* nested irq disables */
+ 	unsigned int		wake_depth;	/* nested wake enables */
++	unsigned int		tot_count;
+ 	unsigned int		irq_count;	/* For detecting broken IRQs */
+ 	unsigned long		last_unhandled;	/* Aging timer for unhandled count */
+ 	unsigned int		irqs_unhandled;
+diff --git a/include/linux/kasan-checks.h b/include/linux/kasan-checks.h
+index d314150658a4..a61dc075e2ce 100644
+--- a/include/linux/kasan-checks.h
++++ b/include/linux/kasan-checks.h
+@@ -2,7 +2,7 @@
+ #ifndef _LINUX_KASAN_CHECKS_H
+ #define _LINUX_KASAN_CHECKS_H
+ 
+-#ifdef CONFIG_KASAN
++#if defined(__SANITIZE_ADDRESS__) || defined(__KASAN_INTERNAL)
+ void kasan_check_read(const volatile void *p, unsigned int size);
+ void kasan_check_write(const volatile void *p, unsigned int size);
+ #else
+diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
+index e1a051724f7e..7cbbd891bfcd 100644
+--- a/include/linux/perf_event.h
++++ b/include/linux/perf_event.h
+@@ -409,7 +409,7 @@ struct pmu {
+ 	/*
+ 	 * Set up pmu-private data structures for an AUX area
+ 	 */
+-	void *(*setup_aux)		(int cpu, void **pages,
++	void *(*setup_aux)		(struct perf_event *event, void **pages,
+ 					 int nr_pages, bool overwrite);
+ 					/* optional */
+ 
+diff --git a/include/linux/relay.h b/include/linux/relay.h
+index e1bdf01a86e2..c759f96e39c1 100644
+--- a/include/linux/relay.h
++++ b/include/linux/relay.h
+@@ -66,7 +66,7 @@ struct rchan
+ 	struct kref kref;		/* channel refcount */
+ 	void *private_data;		/* for user-defined data */
+ 	size_t last_toobig;		/* tried to log event > subbuf size */
+-	struct rchan_buf ** __percpu buf; /* per-cpu channel buffers */
++	struct rchan_buf * __percpu *buf; /* per-cpu channel buffers */
+ 	int is_global;			/* One global buffer ? */
+ 	struct list_head list;		/* for channel list */
+ 	struct dentry *parent;		/* parent dentry passed to open */
+diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
+index 5b9ae62272bb..503778920448 100644
+--- a/include/linux/ring_buffer.h
++++ b/include/linux/ring_buffer.h
+@@ -128,7 +128,7 @@ ring_buffer_consume(struct ring_buffer *buffer, int cpu, u64 *ts,
+ 		    unsigned long *lost_events);
+ 
+ struct ring_buffer_iter *
+-ring_buffer_read_prepare(struct ring_buffer *buffer, int cpu);
++ring_buffer_read_prepare(struct ring_buffer *buffer, int cpu, gfp_t flags);
+ void ring_buffer_read_prepare_sync(void);
+ void ring_buffer_read_start(struct ring_buffer_iter *iter);
+ void ring_buffer_read_finish(struct ring_buffer_iter *iter);
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index f9b43c989577..9b35aff09f70 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1748,9 +1748,9 @@ static __always_inline bool need_resched(void)
+ static inline unsigned int task_cpu(const struct task_struct *p)
+ {
+ #ifdef CONFIG_THREAD_INFO_IN_TASK
+-	return p->cpu;
++	return READ_ONCE(p->cpu);
+ #else
+-	return task_thread_info(p)->cpu;
++	return READ_ONCE(task_thread_info(p)->cpu);
+ #endif
+ }
+ 
+diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
+index c31d3a47a47c..57c7ed3fe465 100644
+--- a/include/linux/sched/topology.h
++++ b/include/linux/sched/topology.h
+@@ -176,10 +176,10 @@ typedef int (*sched_domain_flags_f)(void);
+ #define SDTL_OVERLAP	0x01
+ 
+ struct sd_data {
+-	struct sched_domain **__percpu sd;
+-	struct sched_domain_shared **__percpu sds;
+-	struct sched_group **__percpu sg;
+-	struct sched_group_capacity **__percpu sgc;
++	struct sched_domain *__percpu *sd;
++	struct sched_domain_shared *__percpu *sds;
++	struct sched_group *__percpu *sg;
++	struct sched_group_capacity *__percpu *sgc;
+ };
+ 
+ struct sched_domain_topology_level {
+diff --git a/include/net/netfilter/br_netfilter.h b/include/net/netfilter/br_netfilter.h
+index 4cd56808ac4e..89808ce293c4 100644
+--- a/include/net/netfilter/br_netfilter.h
++++ b/include/net/netfilter/br_netfilter.h
+@@ -43,7 +43,6 @@ static inline struct rtable *bridge_parent_rtable(const struct net_device *dev)
+ }
+ 
+ struct net_device *setup_pre_routing(struct sk_buff *skb);
+-void br_netfilter_enable(void);
+ 
+ #if IS_ENABLED(CONFIG_IPV6)
+ int br_validate_ipv6(struct net *net, struct sk_buff *skb);
+diff --git a/include/scsi/libfcoe.h b/include/scsi/libfcoe.h
+index cb8a273732cf..bb8092fa1e36 100644
+--- a/include/scsi/libfcoe.h
++++ b/include/scsi/libfcoe.h
+@@ -79,7 +79,7 @@ enum fip_state {
+  * It must not change after fcoe_ctlr_init() sets it.
+  */
+ enum fip_mode {
+-	FIP_MODE_AUTO = FIP_ST_AUTO,
++	FIP_MODE_AUTO,
+ 	FIP_MODE_NON_FIP,
+ 	FIP_MODE_FABRIC,
+ 	FIP_MODE_VN2VN,
+@@ -250,7 +250,7 @@ struct fcoe_rport {
+ };
+ 
+ /* FIP API functions */
+-void fcoe_ctlr_init(struct fcoe_ctlr *, enum fip_state);
++void fcoe_ctlr_init(struct fcoe_ctlr *, enum fip_mode);
+ void fcoe_ctlr_destroy(struct fcoe_ctlr *);
+ void fcoe_ctlr_link_up(struct fcoe_ctlr *);
+ int fcoe_ctlr_link_down(struct fcoe_ctlr *);
+diff --git a/kernel/audit.h b/kernel/audit.h
+index 91421679a168..6ffb70575082 100644
+--- a/kernel/audit.h
++++ b/kernel/audit.h
+@@ -314,7 +314,7 @@ extern void audit_trim_trees(void);
+ extern int audit_tag_tree(char *old, char *new);
+ extern const char *audit_tree_path(struct audit_tree *tree);
+ extern void audit_put_tree(struct audit_tree *tree);
+-extern void audit_kill_trees(struct list_head *list);
++extern void audit_kill_trees(struct audit_context *context);
+ #else
+ #define audit_remove_tree_rule(rule) BUG()
+ #define audit_add_tree_rule(rule) -EINVAL
+@@ -323,7 +323,7 @@ extern void audit_kill_trees(struct list_head *list);
+ #define audit_put_tree(tree) (void)0
+ #define audit_tag_tree(old, new) -EINVAL
+ #define audit_tree_path(rule) ""	/* never called */
+-#define audit_kill_trees(list) BUG()
++#define audit_kill_trees(context) BUG()
+ #endif
+ 
+ extern char *audit_unpack_string(void **bufp, size_t *remain, size_t len);
+diff --git a/kernel/audit_tree.c b/kernel/audit_tree.c
+index d4af4d97f847..abfb112f26aa 100644
+--- a/kernel/audit_tree.c
++++ b/kernel/audit_tree.c
+@@ -524,13 +524,14 @@ static int tag_chunk(struct inode *inode, struct audit_tree *tree)
+ 	return 0;
+ }
+ 
+-static void audit_tree_log_remove_rule(struct audit_krule *rule)
++static void audit_tree_log_remove_rule(struct audit_context *context,
++				       struct audit_krule *rule)
+ {
+ 	struct audit_buffer *ab;
+ 
+ 	if (!audit_enabled)
+ 		return;
+-	ab = audit_log_start(NULL, GFP_KERNEL, AUDIT_CONFIG_CHANGE);
++	ab = audit_log_start(context, GFP_KERNEL, AUDIT_CONFIG_CHANGE);
+ 	if (unlikely(!ab))
+ 		return;
+ 	audit_log_format(ab, "op=remove_rule dir=");
+@@ -540,7 +541,7 @@ static void audit_tree_log_remove_rule(struct audit_krule *rule)
+ 	audit_log_end(ab);
+ }
+ 
+-static void kill_rules(struct audit_tree *tree)
++static void kill_rules(struct audit_context *context, struct audit_tree *tree)
+ {
+ 	struct audit_krule *rule, *next;
+ 	struct audit_entry *entry;
+@@ -551,7 +552,7 @@ static void kill_rules(struct audit_tree *tree)
+ 		list_del_init(&rule->rlist);
+ 		if (rule->tree) {
+ 			/* not a half-baked one */
+-			audit_tree_log_remove_rule(rule);
++			audit_tree_log_remove_rule(context, rule);
+ 			if (entry->rule.exe)
+ 				audit_remove_mark(entry->rule.exe);
+ 			rule->tree = NULL;
+@@ -633,7 +634,7 @@ static void trim_marked(struct audit_tree *tree)
+ 		tree->goner = 1;
+ 		spin_unlock(&hash_lock);
+ 		mutex_lock(&audit_filter_mutex);
+-		kill_rules(tree);
++		kill_rules(audit_context(), tree);
+ 		list_del_init(&tree->list);
+ 		mutex_unlock(&audit_filter_mutex);
+ 		prune_one(tree);
+@@ -973,8 +974,10 @@ static void audit_schedule_prune(void)
+  * ... and that one is done if evict_chunk() decides to delay until the end
+  * of syscall.  Runs synchronously.
+  */
+-void audit_kill_trees(struct list_head *list)
++void audit_kill_trees(struct audit_context *context)
+ {
++	struct list_head *list = &context->killed_trees;
++
+ 	audit_ctl_lock();
+ 	mutex_lock(&audit_filter_mutex);
+ 
+@@ -982,7 +985,7 @@ void audit_kill_trees(struct list_head *list)
+ 		struct audit_tree *victim;
+ 
+ 		victim = list_entry(list->next, struct audit_tree, list);
+-		kill_rules(victim);
++		kill_rules(context, victim);
+ 		list_del_init(&victim->list);
+ 
+ 		mutex_unlock(&audit_filter_mutex);
+@@ -1017,7 +1020,7 @@ static void evict_chunk(struct audit_chunk *chunk)
+ 		list_del_init(&owner->same_root);
+ 		spin_unlock(&hash_lock);
+ 		if (!postponed) {
+-			kill_rules(owner);
++			kill_rules(audit_context(), owner);
+ 			list_move(&owner->list, &prune_list);
+ 			need_prune = 1;
+ 		} else {
+diff --git a/kernel/auditsc.c b/kernel/auditsc.c
+index 6593a5207fb0..b585ceb2f7a2 100644
+--- a/kernel/auditsc.c
++++ b/kernel/auditsc.c
+@@ -1444,6 +1444,9 @@ void __audit_free(struct task_struct *tsk)
+ 	if (!context)
+ 		return;
+ 
++	if (!list_empty(&context->killed_trees))
++		audit_kill_trees(context);
++
+ 	/* We are called either by do_exit() or the fork() error handling code;
+ 	 * in the former case tsk == current and in the latter tsk is a
+ 	 * random task_struct that doesn't doesn't have any meaningful data we
+@@ -1460,9 +1463,6 @@ void __audit_free(struct task_struct *tsk)
+ 			audit_log_exit();
+ 	}
+ 
+-	if (!list_empty(&context->killed_trees))
+-		audit_kill_trees(&context->killed_trees);
+-
+ 	audit_set_context(tsk, NULL);
+ 	audit_free_context(context);
+ }
+@@ -1537,6 +1537,9 @@ void __audit_syscall_exit(int success, long return_code)
+ 	if (!context)
+ 		return;
+ 
++	if (!list_empty(&context->killed_trees))
++		audit_kill_trees(context);
++
+ 	if (!context->dummy && context->in_syscall) {
+ 		if (success)
+ 			context->return_valid = AUDITSC_SUCCESS;
+@@ -1571,9 +1574,6 @@ void __audit_syscall_exit(int success, long return_code)
+ 	context->in_syscall = 0;
+ 	context->prio = context->state == AUDIT_RECORD_CONTEXT ? ~0ULL : 0;
+ 
+-	if (!list_empty(&context->killed_trees))
+-		audit_kill_trees(&context->killed_trees);
+-
+ 	audit_free_names(context);
+ 	unroll_tree_refs(context, NULL, 0);
+ 	audit_free_aux(context);
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index 503bba3c4bae..f84bf28f36ba 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -197,7 +197,7 @@ static u64 css_serial_nr_next = 1;
+  */
+ static u16 have_fork_callback __read_mostly;
+ static u16 have_exit_callback __read_mostly;
+-static u16 have_free_callback __read_mostly;
++static u16 have_release_callback __read_mostly;
+ static u16 have_canfork_callback __read_mostly;
+ 
+ /* cgroup namespace for init task */
+@@ -5316,7 +5316,7 @@ static void __init cgroup_init_subsys(struct cgroup_subsys *ss, bool early)
+ 
+ 	have_fork_callback |= (bool)ss->fork << ss->id;
+ 	have_exit_callback |= (bool)ss->exit << ss->id;
+-	have_free_callback |= (bool)ss->free << ss->id;
++	have_release_callback |= (bool)ss->release << ss->id;
+ 	have_canfork_callback |= (bool)ss->can_fork << ss->id;
+ 
+ 	/* At system boot, before all subsystems have been
+@@ -5752,16 +5752,19 @@ void cgroup_exit(struct task_struct *tsk)
+ 	} while_each_subsys_mask();
+ }
+ 
+-void cgroup_free(struct task_struct *task)
++void cgroup_release(struct task_struct *task)
+ {
+-	struct css_set *cset = task_css_set(task);
+ 	struct cgroup_subsys *ss;
+ 	int ssid;
+ 
+-	do_each_subsys_mask(ss, ssid, have_free_callback) {
+-		ss->free(task);
++	do_each_subsys_mask(ss, ssid, have_release_callback) {
++		ss->release(task);
+ 	} while_each_subsys_mask();
++}
+ 
++void cgroup_free(struct task_struct *task)
++{
++	struct css_set *cset = task_css_set(task);
+ 	put_css_set(cset);
+ }
+ 
+diff --git a/kernel/cgroup/pids.c b/kernel/cgroup/pids.c
+index 9829c67ebc0a..c9960baaa14f 100644
+--- a/kernel/cgroup/pids.c
++++ b/kernel/cgroup/pids.c
+@@ -247,7 +247,7 @@ static void pids_cancel_fork(struct task_struct *task)
+ 	pids_uncharge(pids, 1);
+ }
+ 
+-static void pids_free(struct task_struct *task)
++static void pids_release(struct task_struct *task)
+ {
+ 	struct pids_cgroup *pids = css_pids(task_css(task, pids_cgrp_id));
+ 
+@@ -342,7 +342,7 @@ struct cgroup_subsys pids_cgrp_subsys = {
+ 	.cancel_attach 	= pids_cancel_attach,
+ 	.can_fork	= pids_can_fork,
+ 	.cancel_fork	= pids_cancel_fork,
+-	.free		= pids_free,
++	.release	= pids_release,
+ 	.legacy_cftypes	= pids_files,
+ 	.dfl_cftypes	= pids_files,
+ 	.threaded	= true,
+diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
+index d503d1a9007c..bb95a35e8c2d 100644
+--- a/kernel/cgroup/rstat.c
++++ b/kernel/cgroup/rstat.c
+@@ -87,7 +87,6 @@ static struct cgroup *cgroup_rstat_cpu_pop_updated(struct cgroup *pos,
+ 						   struct cgroup *root, int cpu)
+ {
+ 	struct cgroup_rstat_cpu *rstatc;
+-	struct cgroup *parent;
+ 
+ 	if (pos == root)
+ 		return NULL;
+@@ -115,8 +114,8 @@ static struct cgroup *cgroup_rstat_cpu_pop_updated(struct cgroup *pos,
+ 	 * However, due to the way we traverse, @pos will be the first
+ 	 * child in most cases. The only exception is @root.
+ 	 */
+-	parent = cgroup_parent(pos);
+-	if (parent && rstatc->updated_next) {
++	if (rstatc->updated_next) {
++		struct cgroup *parent = cgroup_parent(pos);
+ 		struct cgroup_rstat_cpu *prstatc = cgroup_rstat_cpu(parent, cpu);
+ 		struct cgroup_rstat_cpu *nrstatc;
+ 		struct cgroup **nextp;
+@@ -140,9 +139,12 @@ static struct cgroup *cgroup_rstat_cpu_pop_updated(struct cgroup *pos,
+ 		 * updated stat.
+ 		 */
+ 		smp_mb();
++
++		return pos;
+ 	}
+ 
+-	return pos;
++	/* only happens for @root */
++	return NULL;
+ }
+ 
+ /* see cgroup_rstat_flush() */
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index 47f695d80dd1..6754f3ecfd94 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -313,6 +313,15 @@ void cpus_write_unlock(void)
+ 
+ void lockdep_assert_cpus_held(void)
+ {
++	/*
++	 * We can't have hotplug operations before userspace starts running,
++	 * and some init codepaths will knowingly not take the hotplug lock.
++	 * This is all valid, so mute lockdep until it makes sense to report
++	 * unheld locks.
++	 */
++	if (system_state < SYSTEM_RUNNING)
++		return;
++
+ 	percpu_rwsem_assert_held(&cpu_hotplug_lock);
+ }
+ 
+diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
+index 5ab4fe3b1dcc..878c62ec0190 100644
+--- a/kernel/events/ring_buffer.c
++++ b/kernel/events/ring_buffer.c
+@@ -658,7 +658,7 @@ int rb_alloc_aux(struct ring_buffer *rb, struct perf_event *event,
+ 			goto out;
+ 	}
+ 
+-	rb->aux_priv = event->pmu->setup_aux(event->cpu, rb->aux_pages, nr_pages,
++	rb->aux_priv = event->pmu->setup_aux(event, rb->aux_pages, nr_pages,
+ 					     overwrite);
+ 	if (!rb->aux_priv)
+ 		goto out;
+diff --git a/kernel/exit.c b/kernel/exit.c
+index 2639a30a8aa5..2166c2d92ddc 100644
+--- a/kernel/exit.c
++++ b/kernel/exit.c
+@@ -219,6 +219,7 @@ repeat:
+ 	}
+ 
+ 	write_unlock_irq(&tasklist_lock);
++	cgroup_release(p);
+ 	release_thread(p);
+ 	call_rcu(&p->rcu, delayed_put_task_struct);
+ 
+diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
+index 34e969069488..e960c4f46ee0 100644
+--- a/kernel/irq/chip.c
++++ b/kernel/irq/chip.c
+@@ -855,7 +855,11 @@ void handle_percpu_irq(struct irq_desc *desc)
+ {
+ 	struct irq_chip *chip = irq_desc_get_chip(desc);
+ 
+-	kstat_incr_irqs_this_cpu(desc);
++	/*
++	 * PER CPU interrupts are not serialized. Do not touch
++	 * desc->tot_count.
++	 */
++	__kstat_incr_irqs_this_cpu(desc);
+ 
+ 	if (chip->irq_ack)
+ 		chip->irq_ack(&desc->irq_data);
+@@ -884,7 +888,11 @@ void handle_percpu_devid_irq(struct irq_desc *desc)
+ 	unsigned int irq = irq_desc_get_irq(desc);
+ 	irqreturn_t res;
+ 
+-	kstat_incr_irqs_this_cpu(desc);
++	/*
++	 * PER CPU interrupts are not serialized. Do not touch
++	 * desc->tot_count.
++	 */
++	__kstat_incr_irqs_this_cpu(desc);
+ 
+ 	if (chip->irq_ack)
+ 		chip->irq_ack(&desc->irq_data);
+diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
+index ca6afa267070..e74e7eea76cf 100644
+--- a/kernel/irq/internals.h
++++ b/kernel/irq/internals.h
+@@ -242,12 +242,18 @@ static inline void irq_state_set_masked(struct irq_desc *desc)
+ 
+ #undef __irqd_to_state
+ 
+-static inline void kstat_incr_irqs_this_cpu(struct irq_desc *desc)
++static inline void __kstat_incr_irqs_this_cpu(struct irq_desc *desc)
+ {
+ 	__this_cpu_inc(*desc->kstat_irqs);
+ 	__this_cpu_inc(kstat.irqs_sum);
+ }
+ 
++static inline void kstat_incr_irqs_this_cpu(struct irq_desc *desc)
++{
++	__kstat_incr_irqs_this_cpu(desc);
++	desc->tot_count++;
++}
++
+ static inline int irq_desc_get_node(struct irq_desc *desc)
+ {
+ 	return irq_common_data_get_node(&desc->irq_common_data);
+diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
+index ef8ad36cadcf..84fa255d0329 100644
+--- a/kernel/irq/irqdesc.c
++++ b/kernel/irq/irqdesc.c
+@@ -119,6 +119,7 @@ static void desc_set_defaults(unsigned int irq, struct irq_desc *desc, int node,
+ 	desc->depth = 1;
+ 	desc->irq_count = 0;
+ 	desc->irqs_unhandled = 0;
++	desc->tot_count = 0;
+ 	desc->name = NULL;
+ 	desc->owner = owner;
+ 	for_each_possible_cpu(cpu)
+@@ -919,11 +920,15 @@ unsigned int kstat_irqs_cpu(unsigned int irq, int cpu)
+ unsigned int kstat_irqs(unsigned int irq)
+ {
+ 	struct irq_desc *desc = irq_to_desc(irq);
+-	int cpu;
+ 	unsigned int sum = 0;
++	int cpu;
+ 
+ 	if (!desc || !desc->kstat_irqs)
+ 		return 0;
++	if (!irq_settings_is_per_cpu_devid(desc) &&
++	    !irq_settings_is_per_cpu(desc))
++	    return desc->tot_count;
++
+ 	for_each_possible_cpu(cpu)
+ 		sum += *per_cpu_ptr(desc->kstat_irqs, cpu);
+ 	return sum;
+diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
+index 1971869c4072..f4ca36d92138 100644
+--- a/kernel/rcu/update.c
++++ b/kernel/rcu/update.c
+@@ -52,6 +52,7 @@
+ #include <linux/tick.h>
+ #include <linux/rcupdate_wait.h>
+ #include <linux/sched/isolation.h>
++#include <linux/kprobes.h>
+ 
+ #define CREATE_TRACE_POINTS
+ 
+@@ -249,6 +250,7 @@ int notrace debug_lockdep_rcu_enabled(void)
+ 	       current->lockdep_recursion == 0;
+ }
+ EXPORT_SYMBOL_GPL(debug_lockdep_rcu_enabled);
++NOKPROBE_SYMBOL(debug_lockdep_rcu_enabled);
+ 
+ /**
+  * rcu_read_lock_held() - might we be in RCU read-side critical section?
+diff --git a/kernel/resource.c b/kernel/resource.c
+index 915c02e8e5dd..ca7ed5158cff 100644
+--- a/kernel/resource.c
++++ b/kernel/resource.c
+@@ -382,7 +382,7 @@ static int __walk_iomem_res_desc(resource_size_t start, resource_size_t end,
+ 				 int (*func)(struct resource *, void *))
+ {
+ 	struct resource res;
+-	int ret = -1;
++	int ret = -EINVAL;
+ 
+ 	while (start < end &&
+ 	       !find_next_iomem_res(start, end, flags, desc, first_lvl, &res)) {
+@@ -462,7 +462,7 @@ int walk_system_ram_range(unsigned long start_pfn, unsigned long nr_pages,
+ 	unsigned long flags;
+ 	struct resource res;
+ 	unsigned long pfn, end_pfn;
+-	int ret = -1;
++	int ret = -EINVAL;
+ 
+ 	start = (u64) start_pfn << PAGE_SHIFT;
+ 	end = ((u64)(start_pfn + nr_pages) << PAGE_SHIFT) - 1;
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index d8d76a65cfdd..01a2489de94e 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -107,11 +107,12 @@ struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
+ 		 *					[L] ->on_rq
+ 		 *	RELEASE (rq->lock)
+ 		 *
+-		 * If we observe the old CPU in task_rq_lock, the acquire of
++		 * If we observe the old CPU in task_rq_lock(), the acquire of
+ 		 * the old rq->lock will fully serialize against the stores.
+ 		 *
+-		 * If we observe the new CPU in task_rq_lock, the acquire will
+-		 * pair with the WMB to ensure we must then also see migrating.
++		 * If we observe the new CPU in task_rq_lock(), the address
++		 * dependency headed by '[L] rq = task_rq()' and the acquire
++		 * will pair with the WMB to ensure we then also see migrating.
+ 		 */
+ 		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p))) {
+ 			rq_pin_lock(rq, rf);
+@@ -928,7 +929,7 @@ static struct rq *move_queued_task(struct rq *rq, struct rq_flags *rf,
+ {
+ 	lockdep_assert_held(&rq->lock);
+ 
+-	p->on_rq = TASK_ON_RQ_MIGRATING;
++	WRITE_ONCE(p->on_rq, TASK_ON_RQ_MIGRATING);
+ 	dequeue_task(rq, p, DEQUEUE_NOCLOCK);
+ 	set_task_cpu(p, new_cpu);
+ 	rq_unlock(rq, rf);
+diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
+index de3de997e245..8039d62ae36e 100644
+--- a/kernel/sched/debug.c
++++ b/kernel/sched/debug.c
+@@ -315,6 +315,7 @@ void register_sched_domain_sysctl(void)
+ {
+ 	static struct ctl_table *cpu_entries;
+ 	static struct ctl_table **cpu_idx;
++	static bool init_done = false;
+ 	char buf[32];
+ 	int i;
+ 
+@@ -344,7 +345,10 @@ void register_sched_domain_sysctl(void)
+ 	if (!cpumask_available(sd_sysctl_cpus)) {
+ 		if (!alloc_cpumask_var(&sd_sysctl_cpus, GFP_KERNEL))
+ 			return;
++	}
+ 
++	if (!init_done) {
++		init_done = true;
+ 		/* init to possible to not have holes in @cpu_entries */
+ 		cpumask_copy(sd_sysctl_cpus, cpu_possible_mask);
+ 	}
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index d04530bf251f..425a5589e5f6 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -1460,9 +1460,9 @@ static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
+ 	 */
+ 	smp_wmb();
+ #ifdef CONFIG_THREAD_INFO_IN_TASK
+-	p->cpu = cpu;
++	WRITE_ONCE(p->cpu, cpu);
+ #else
+-	task_thread_info(p)->cpu = cpu;
++	WRITE_ONCE(task_thread_info(p)->cpu, cpu);
+ #endif
+ 	p->wake_cpu = cpu;
+ #endif
+@@ -1563,7 +1563,7 @@ static inline int task_on_rq_queued(struct task_struct *p)
+ 
+ static inline int task_on_rq_migrating(struct task_struct *p)
+ {
+-	return p->on_rq == TASK_ON_RQ_MIGRATING;
++	return READ_ONCE(p->on_rq) == TASK_ON_RQ_MIGRATING;
+ }
+ 
+ /*
+diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
+index 3f35ba1d8fde..efca2489d881 100644
+--- a/kernel/sched/topology.c
++++ b/kernel/sched/topology.c
+@@ -676,7 +676,7 @@ cpu_attach_domain(struct sched_domain *sd, struct root_domain *rd, int cpu)
+ }
+ 
+ struct s_data {
+-	struct sched_domain ** __percpu sd;
++	struct sched_domain * __percpu *sd;
+ 	struct root_domain	*rd;
+ };
+ 
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index d80bee8ff12e..28ec71d914c7 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -127,6 +127,7 @@ static int __maybe_unused one = 1;
+ static int __maybe_unused two = 2;
+ static int __maybe_unused four = 4;
+ static unsigned long one_ul = 1;
++static unsigned long long_max = LONG_MAX;
+ static int one_hundred = 100;
+ static int one_thousand = 1000;
+ #ifdef CONFIG_PRINTK
+@@ -1722,6 +1723,8 @@ static struct ctl_table fs_table[] = {
+ 		.maxlen		= sizeof(files_stat.max_files),
+ 		.mode		= 0644,
+ 		.proc_handler	= proc_doulongvec_minmax,
++		.extra1		= &zero,
++		.extra2		= &long_max,
+ 	},
+ 	{
+ 		.procname	= "nr_open",
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 06e864a334bb..b49affb4666b 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -4205,6 +4205,7 @@ EXPORT_SYMBOL_GPL(ring_buffer_consume);
+  * ring_buffer_read_prepare - Prepare for a non consuming read of the buffer
+  * @buffer: The ring buffer to read from
+  * @cpu: The cpu buffer to iterate over
++ * @flags: gfp flags to use for memory allocation
+  *
+  * This performs the initial preparations necessary to iterate
+  * through the buffer.  Memory is allocated, buffer recording
+@@ -4222,7 +4223,7 @@ EXPORT_SYMBOL_GPL(ring_buffer_consume);
+  * This overall must be paired with ring_buffer_read_finish.
+  */
+ struct ring_buffer_iter *
+-ring_buffer_read_prepare(struct ring_buffer *buffer, int cpu)
++ring_buffer_read_prepare(struct ring_buffer *buffer, int cpu, gfp_t flags)
+ {
+ 	struct ring_buffer_per_cpu *cpu_buffer;
+ 	struct ring_buffer_iter *iter;
+@@ -4230,7 +4231,7 @@ ring_buffer_read_prepare(struct ring_buffer *buffer, int cpu)
+ 	if (!cpumask_test_cpu(cpu, buffer->cpumask))
+ 		return NULL;
+ 
+-	iter = kmalloc(sizeof(*iter), GFP_KERNEL);
++	iter = kmalloc(sizeof(*iter), flags);
+ 	if (!iter)
+ 		return NULL;
+ 
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 5f40db27aaf2..89158aa93fa6 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -3904,7 +3904,8 @@ __tracing_open(struct inode *inode, struct file *file, bool snapshot)
+ 	if (iter->cpu_file == RING_BUFFER_ALL_CPUS) {
+ 		for_each_tracing_cpu(cpu) {
+ 			iter->buffer_iter[cpu] =
+-				ring_buffer_read_prepare(iter->trace_buffer->buffer, cpu);
++				ring_buffer_read_prepare(iter->trace_buffer->buffer,
++							 cpu, GFP_KERNEL);
+ 		}
+ 		ring_buffer_read_prepare_sync();
+ 		for_each_tracing_cpu(cpu) {
+@@ -3914,7 +3915,8 @@ __tracing_open(struct inode *inode, struct file *file, bool snapshot)
+ 	} else {
+ 		cpu = iter->cpu_file;
+ 		iter->buffer_iter[cpu] =
+-			ring_buffer_read_prepare(iter->trace_buffer->buffer, cpu);
++			ring_buffer_read_prepare(iter->trace_buffer->buffer,
++						 cpu, GFP_KERNEL);
+ 		ring_buffer_read_prepare_sync();
+ 		ring_buffer_read_start(iter->buffer_iter[cpu]);
+ 		tracing_iter_reset(iter, cpu);
+diff --git a/kernel/trace/trace_kdb.c b/kernel/trace/trace_kdb.c
+index d953c163a079..810d78a8d14c 100644
+--- a/kernel/trace/trace_kdb.c
++++ b/kernel/trace/trace_kdb.c
+@@ -51,14 +51,16 @@ static void ftrace_dump_buf(int skip_lines, long cpu_file)
+ 	if (cpu_file == RING_BUFFER_ALL_CPUS) {
+ 		for_each_tracing_cpu(cpu) {
+ 			iter.buffer_iter[cpu] =
+-			ring_buffer_read_prepare(iter.trace_buffer->buffer, cpu);
++			ring_buffer_read_prepare(iter.trace_buffer->buffer,
++						 cpu, GFP_ATOMIC);
+ 			ring_buffer_read_start(iter.buffer_iter[cpu]);
+ 			tracing_iter_reset(&iter, cpu);
+ 		}
+ 	} else {
+ 		iter.cpu_file = cpu_file;
+ 		iter.buffer_iter[cpu_file] =
+-			ring_buffer_read_prepare(iter.trace_buffer->buffer, cpu_file);
++			ring_buffer_read_prepare(iter.trace_buffer->buffer,
++						 cpu_file, GFP_ATOMIC);
+ 		ring_buffer_read_start(iter.buffer_iter[cpu_file]);
+ 		tracing_iter_reset(&iter, cpu_file);
+ 	}
+diff --git a/lib/bsearch.c b/lib/bsearch.c
+index 18b445b010c3..82512fe7b33c 100644
+--- a/lib/bsearch.c
++++ b/lib/bsearch.c
+@@ -11,6 +11,7 @@
+ 
+ #include <linux/export.h>
+ #include <linux/bsearch.h>
++#include <linux/kprobes.h>
+ 
+ /*
+  * bsearch - binary search an array of elements
+@@ -53,3 +54,4 @@ void *bsearch(const void *key, const void *base, size_t num, size_t size,
+ 	return NULL;
+ }
+ EXPORT_SYMBOL(bsearch);
++NOKPROBE_SYMBOL(bsearch);
+diff --git a/lib/raid6/Makefile b/lib/raid6/Makefile
+index 4e90d443d1b0..e723eacf7868 100644
+--- a/lib/raid6/Makefile
++++ b/lib/raid6/Makefile
+@@ -39,7 +39,7 @@ endif
+ ifeq ($(CONFIG_KERNEL_MODE_NEON),y)
+ NEON_FLAGS := -ffreestanding
+ ifeq ($(ARCH),arm)
+-NEON_FLAGS += -mfloat-abi=softfp -mfpu=neon
++NEON_FLAGS += -march=armv7-a -mfloat-abi=softfp -mfpu=neon
+ endif
+ CFLAGS_recov_neon_inner.o += $(NEON_FLAGS)
+ ifeq ($(ARCH),arm64)
+diff --git a/mm/cma.c b/mm/cma.c
+index c7b39dd3b4f6..f4f3a8a57d86 100644
+--- a/mm/cma.c
++++ b/mm/cma.c
+@@ -353,12 +353,14 @@ int __init cma_declare_contiguous(phys_addr_t base,
+ 
+ 	ret = cma_init_reserved_mem(base, size, order_per_bit, name, res_cma);
+ 	if (ret)
+-		goto err;
++		goto free_mem;
+ 
+ 	pr_info("Reserved %ld MiB at %pa\n", (unsigned long)size / SZ_1M,
+ 		&base);
+ 	return 0;
+ 
++free_mem:
++	memblock_free(base, size);
+ err:
+ 	pr_err("Failed to reserve %ld MiB\n", (unsigned long)size / SZ_1M);
+ 	return ret;
+diff --git a/mm/kasan/common.c b/mm/kasan/common.c
+index 09b534fbba17..80bbe62b16cd 100644
+--- a/mm/kasan/common.c
++++ b/mm/kasan/common.c
+@@ -14,6 +14,8 @@
+  *
+  */
+ 
++#define __KASAN_INTERNAL
++
+ #include <linux/export.h>
+ #include <linux/interrupt.h>
+ #include <linux/init.h>
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index af7f18b32389..79a7d2a06bba 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -248,6 +248,12 @@ enum res_type {
+ 	     iter != NULL;				\
+ 	     iter = mem_cgroup_iter(NULL, iter, NULL))
+ 
++static inline bool should_force_charge(void)
++{
++	return tsk_is_oom_victim(current) || fatal_signal_pending(current) ||
++		(current->flags & PF_EXITING);
++}
++
+ /* Some nice accessors for the vmpressure. */
+ struct vmpressure *memcg_to_vmpressure(struct mem_cgroup *memcg)
+ {
+@@ -1389,8 +1395,13 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
+ 	};
+ 	bool ret;
+ 
+-	mutex_lock(&oom_lock);
+-	ret = out_of_memory(&oc);
++	if (mutex_lock_killable(&oom_lock))
++		return true;
++	/*
++	 * A few threads which were not waiting at mutex_lock_killable() can
++	 * fail to bail out. Therefore, check again after holding oom_lock.
++	 */
++	ret = should_force_charge() || out_of_memory(&oc);
+ 	mutex_unlock(&oom_lock);
+ 	return ret;
+ }
+@@ -2209,9 +2220,7 @@ retry:
+ 	 * bypass the last charges so that they can exit quickly and
+ 	 * free their memory.
+ 	 */
+-	if (unlikely(tsk_is_oom_victim(current) ||
+-		     fatal_signal_pending(current) ||
+-		     current->flags & PF_EXITING))
++	if (unlikely(should_force_charge()))
+ 		goto force;
+ 
+ 	/*
+diff --git a/mm/mempolicy.c b/mm/mempolicy.c
+index 6bc9786aad6e..c2275c1e6d2a 100644
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -350,7 +350,7 @@ static void mpol_rebind_policy(struct mempolicy *pol, const nodemask_t *newmask)
+ {
+ 	if (!pol)
+ 		return;
+-	if (!mpol_store_user_nodemask(pol) &&
++	if (!mpol_store_user_nodemask(pol) && !(pol->flags & MPOL_F_LOCAL) &&
+ 	    nodes_equal(pol->w.cpuset_mems_allowed, *newmask))
+ 		return;
+ 
+diff --git a/mm/oom_kill.c b/mm/oom_kill.c
+index 26ea8636758f..da0e44914085 100644
+--- a/mm/oom_kill.c
++++ b/mm/oom_kill.c
+@@ -928,7 +928,8 @@ static void __oom_kill_process(struct task_struct *victim)
+  */
+ static int oom_kill_memcg_member(struct task_struct *task, void *unused)
+ {
+-	if (task->signal->oom_score_adj != OOM_SCORE_ADJ_MIN) {
++	if (task->signal->oom_score_adj != OOM_SCORE_ADJ_MIN &&
++	    !is_global_init(task)) {
+ 		get_task_struct(task);
+ 		__oom_kill_process(task);
+ 	}
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 11dc3c0e8728..20dd3283bb1b 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -1945,8 +1945,8 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
+ 
+ 	arch_alloc_page(page, order);
+ 	kernel_map_pages(page, 1 << order, 1);
+-	kernel_poison_pages(page, 1 << order, 1);
+ 	kasan_alloc_pages(page, order);
++	kernel_poison_pages(page, 1 << order, 1);
+ 	set_page_owner(page, order, gfp_flags);
+ }
+ 
+diff --git a/mm/page_ext.c b/mm/page_ext.c
+index 8c78b8d45117..f116431c3dee 100644
+--- a/mm/page_ext.c
++++ b/mm/page_ext.c
+@@ -273,6 +273,7 @@ static void free_page_ext(void *addr)
+ 		table_size = get_entry_size() * PAGES_PER_SECTION;
+ 
+ 		BUG_ON(PageReserved(page));
++		kmemleak_free(addr);
+ 		free_pages_exact(addr, table_size);
+ 	}
+ }
+diff --git a/mm/page_poison.c b/mm/page_poison.c
+index f0c15e9017c0..21d4f97cb49b 100644
+--- a/mm/page_poison.c
++++ b/mm/page_poison.c
+@@ -6,6 +6,7 @@
+ #include <linux/page_ext.h>
+ #include <linux/poison.h>
+ #include <linux/ratelimit.h>
++#include <linux/kasan.h>
+ 
+ static bool want_page_poisoning __read_mostly;
+ 
+@@ -40,7 +41,10 @@ static void poison_page(struct page *page)
+ {
+ 	void *addr = kmap_atomic(page);
+ 
++	/* KASAN still think the page is in-use, so skip it. */
++	kasan_disable_current();
+ 	memset(addr, PAGE_POISON, PAGE_SIZE);
++	kasan_enable_current();
+ 	kunmap_atomic(addr);
+ }
+ 
+diff --git a/mm/slab.c b/mm/slab.c
+index b3e74b56a468..2f2aa8eaf7d9 100644
+--- a/mm/slab.c
++++ b/mm/slab.c
+@@ -550,14 +550,6 @@ static void start_cpu_timer(int cpu)
+ 
+ static void init_arraycache(struct array_cache *ac, int limit, int batch)
+ {
+-	/*
+-	 * The array_cache structures contain pointers to free object.
+-	 * However, when such objects are allocated or transferred to another
+-	 * cache the pointers are not cleared and they could be counted as
+-	 * valid references during a kmemleak scan. Therefore, kmemleak must
+-	 * not scan such objects.
+-	 */
+-	kmemleak_no_scan(ac);
+ 	if (ac) {
+ 		ac->avail = 0;
+ 		ac->limit = limit;
+@@ -573,6 +565,14 @@ static struct array_cache *alloc_arraycache(int node, int entries,
+ 	struct array_cache *ac = NULL;
+ 
+ 	ac = kmalloc_node(memsize, gfp, node);
++	/*
++	 * The array_cache structures contain pointers to free object.
++	 * However, when such objects are allocated or transferred to another
++	 * cache the pointers are not cleared and they could be counted as
++	 * valid references during a kmemleak scan. Therefore, kmemleak must
++	 * not scan such objects.
++	 */
++	kmemleak_no_scan(ac);
+ 	init_arraycache(ac, entries, batchcount);
+ 	return ac;
+ }
+@@ -667,6 +667,7 @@ static struct alien_cache *__alloc_alien_cache(int node, int entries,
+ 
+ 	alc = kmalloc_node(memsize, gfp, node);
+ 	if (alc) {
++		kmemleak_no_scan(alc);
+ 		init_arraycache(&alc->ac, entries, batch);
+ 		spin_lock_init(&alc->lock);
+ 	}
+diff --git a/mm/sparse.c b/mm/sparse.c
+index 4763519d4399..b3771f35a0ed 100644
+--- a/mm/sparse.c
++++ b/mm/sparse.c
+@@ -197,7 +197,7 @@ static inline int next_present_section_nr(int section_nr)
+ }
+ #define for_each_present_section_nr(start, section_nr)		\
+ 	for (section_nr = next_present_section_nr(start-1);	\
+-	     ((section_nr >= 0) &&				\
++	     ((section_nr != -1) &&				\
+ 	      (section_nr <= __highest_present_section_nr));	\
+ 	     section_nr = next_present_section_nr(section_nr))
+ 
+diff --git a/mm/swapfile.c b/mm/swapfile.c
+index dbac1d49469d..67f60e051814 100644
+--- a/mm/swapfile.c
++++ b/mm/swapfile.c
+@@ -98,6 +98,15 @@ static atomic_t proc_poll_event = ATOMIC_INIT(0);
+ 
+ atomic_t nr_rotate_swap = ATOMIC_INIT(0);
+ 
++static struct swap_info_struct *swap_type_to_swap_info(int type)
++{
++	if (type >= READ_ONCE(nr_swapfiles))
++		return NULL;
++
++	smp_rmb();	/* Pairs with smp_wmb in alloc_swap_info. */
++	return READ_ONCE(swap_info[type]);
++}
++
+ static inline unsigned char swap_count(unsigned char ent)
+ {
+ 	return ent & ~SWAP_HAS_CACHE;	/* may include COUNT_CONTINUED flag */
+@@ -1044,12 +1053,14 @@ noswap:
+ /* The only caller of this function is now suspend routine */
+ swp_entry_t get_swap_page_of_type(int type)
+ {
+-	struct swap_info_struct *si;
++	struct swap_info_struct *si = swap_type_to_swap_info(type);
+ 	pgoff_t offset;
+ 
+-	si = swap_info[type];
++	if (!si)
++		goto fail;
++
+ 	spin_lock(&si->lock);
+-	if (si && (si->flags & SWP_WRITEOK)) {
++	if (si->flags & SWP_WRITEOK) {
+ 		atomic_long_dec(&nr_swap_pages);
+ 		/* This is called for allocating swap entry, not cache */
+ 		offset = scan_swap_map(si, 1);
+@@ -1060,6 +1071,7 @@ swp_entry_t get_swap_page_of_type(int type)
+ 		atomic_long_inc(&nr_swap_pages);
+ 	}
+ 	spin_unlock(&si->lock);
++fail:
+ 	return (swp_entry_t) {0};
+ }
+ 
+@@ -1071,9 +1083,9 @@ static struct swap_info_struct *__swap_info_get(swp_entry_t entry)
+ 	if (!entry.val)
+ 		goto out;
+ 	type = swp_type(entry);
+-	if (type >= nr_swapfiles)
++	p = swap_type_to_swap_info(type);
++	if (!p)
+ 		goto bad_nofile;
+-	p = swap_info[type];
+ 	if (!(p->flags & SWP_USED))
+ 		goto bad_device;
+ 	offset = swp_offset(entry);
+@@ -1697,10 +1709,9 @@ int swap_type_of(dev_t device, sector_t offset, struct block_device **bdev_p)
+ sector_t swapdev_block(int type, pgoff_t offset)
+ {
+ 	struct block_device *bdev;
++	struct swap_info_struct *si = swap_type_to_swap_info(type);
+ 
+-	if ((unsigned int)type >= nr_swapfiles)
+-		return 0;
+-	if (!(swap_info[type]->flags & SWP_WRITEOK))
++	if (!si || !(si->flags & SWP_WRITEOK))
+ 		return 0;
+ 	return map_swap_entry(swp_entry(type, offset), &bdev);
+ }
+@@ -2258,7 +2269,7 @@ static sector_t map_swap_entry(swp_entry_t entry, struct block_device **bdev)
+ 	struct swap_extent *se;
+ 	pgoff_t offset;
+ 
+-	sis = swap_info[swp_type(entry)];
++	sis = swp_swap_info(entry);
+ 	*bdev = sis->bdev;
+ 
+ 	offset = swp_offset(entry);
+@@ -2700,9 +2711,7 @@ static void *swap_start(struct seq_file *swap, loff_t *pos)
+ 	if (!l)
+ 		return SEQ_START_TOKEN;
+ 
+-	for (type = 0; type < nr_swapfiles; type++) {
+-		smp_rmb();	/* read nr_swapfiles before swap_info[type] */
+-		si = swap_info[type];
++	for (type = 0; (si = swap_type_to_swap_info(type)); type++) {
+ 		if (!(si->flags & SWP_USED) || !si->swap_map)
+ 			continue;
+ 		if (!--l)
+@@ -2722,9 +2731,7 @@ static void *swap_next(struct seq_file *swap, void *v, loff_t *pos)
+ 	else
+ 		type = si->type + 1;
+ 
+-	for (; type < nr_swapfiles; type++) {
+-		smp_rmb();	/* read nr_swapfiles before swap_info[type] */
+-		si = swap_info[type];
++	for (; (si = swap_type_to_swap_info(type)); type++) {
+ 		if (!(si->flags & SWP_USED) || !si->swap_map)
+ 			continue;
+ 		++*pos;
+@@ -2831,14 +2838,14 @@ static struct swap_info_struct *alloc_swap_info(void)
+ 	}
+ 	if (type >= nr_swapfiles) {
+ 		p->type = type;
+-		swap_info[type] = p;
++		WRITE_ONCE(swap_info[type], p);
+ 		/*
+ 		 * Write swap_info[type] before nr_swapfiles, in case a
+ 		 * racing procfs swap_start() or swap_next() is reading them.
+ 		 * (We never shrink nr_swapfiles, we never free this entry.)
+ 		 */
+ 		smp_wmb();
+-		nr_swapfiles++;
++		WRITE_ONCE(nr_swapfiles, nr_swapfiles + 1);
+ 	} else {
+ 		kvfree(p);
+ 		p = swap_info[type];
+@@ -3358,7 +3365,7 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
+ {
+ 	struct swap_info_struct *p;
+ 	struct swap_cluster_info *ci;
+-	unsigned long offset, type;
++	unsigned long offset;
+ 	unsigned char count;
+ 	unsigned char has_cache;
+ 	int err = -EINVAL;
+@@ -3366,10 +3373,10 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
+ 	if (non_swap_entry(entry))
+ 		goto out;
+ 
+-	type = swp_type(entry);
+-	if (type >= nr_swapfiles)
++	p = swp_swap_info(entry);
++	if (!p)
+ 		goto bad_file;
+-	p = swap_info[type];
++
+ 	offset = swp_offset(entry);
+ 	if (unlikely(offset >= p->max))
+ 		goto out;
+@@ -3466,7 +3473,7 @@ int swapcache_prepare(swp_entry_t entry)
+ 
+ struct swap_info_struct *swp_swap_info(swp_entry_t entry)
+ {
+-	return swap_info[swp_type(entry)];
++	return swap_type_to_swap_info(swp_type(entry));
+ }
+ 
+ struct swap_info_struct *page_swap_info(struct page *page)
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index 2cd24186ba84..583630bf247d 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -498,7 +498,11 @@ nocache:
+ 	}
+ 
+ found:
+-	if (addr + size > vend)
++	/*
++	 * Check also calculated address against the vstart,
++	 * because it can be 0 because of big align request.
++	 */
++	if (addr + size > vend || addr < vstart)
+ 		goto overflow;
+ 
+ 	va->va_start = addr;
+diff --git a/net/bridge/br_netfilter_hooks.c b/net/bridge/br_netfilter_hooks.c
+index c93c35bb73dd..40d058378b52 100644
+--- a/net/bridge/br_netfilter_hooks.c
++++ b/net/bridge/br_netfilter_hooks.c
+@@ -881,11 +881,6 @@ static const struct nf_br_ops br_ops = {
+ 	.br_dev_xmit_hook =	br_nf_dev_xmit,
+ };
+ 
+-void br_netfilter_enable(void)
+-{
+-}
+-EXPORT_SYMBOL_GPL(br_netfilter_enable);
+-
+ /* For br_nf_post_routing, we need (prio = NF_BR_PRI_LAST), because
+  * br_dev_queue_push_xmit is called afterwards */
+ static const struct nf_hook_ops br_nf_ops[] = {
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index db4d46332e86..9dd4c2048a2b 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -901,10 +901,18 @@ __nf_conntrack_confirm(struct sk_buff *skb)
+ 	 * REJECT will give spurious warnings here.
+ 	 */
+ 
+-	/* No external references means no one else could have
+-	 * confirmed us.
++	/* Another skb with the same unconfirmed conntrack may
++	 * win the race. This may happen for bridge(br_flood)
++	 * or broadcast/multicast packets do skb_clone with
++	 * unconfirmed conntrack.
+ 	 */
+-	WARN_ON(nf_ct_is_confirmed(ct));
++	if (unlikely(nf_ct_is_confirmed(ct))) {
++		WARN_ON_ONCE(1);
++		nf_conntrack_double_unlock(hash, reply_hash);
++		local_bh_enable();
++		return NF_DROP;
++	}
++
+ 	pr_debug("Confirming conntrack %p\n", ct);
+ 	/* We have to check the DYING flag after unlink to prevent
+ 	 * a race against nf_ct_get_next_corpse() possibly called from
+diff --git a/net/netfilter/nf_conntrack_proto_tcp.c b/net/netfilter/nf_conntrack_proto_tcp.c
+index 4dcbd51a8e97..74fb3fa34db4 100644
+--- a/net/netfilter/nf_conntrack_proto_tcp.c
++++ b/net/netfilter/nf_conntrack_proto_tcp.c
+@@ -828,6 +828,12 @@ static noinline bool tcp_new(struct nf_conn *ct, const struct sk_buff *skb,
+ 	return true;
+ }
+ 
++static bool nf_conntrack_tcp_established(const struct nf_conn *ct)
++{
++	return ct->proto.tcp.state == TCP_CONNTRACK_ESTABLISHED &&
++	       test_bit(IPS_ASSURED_BIT, &ct->status);
++}
++
+ /* Returns verdict for packet, or -1 for invalid. */
+ static int tcp_packet(struct nf_conn *ct,
+ 		      struct sk_buff *skb,
+@@ -1030,16 +1036,38 @@ static int tcp_packet(struct nf_conn *ct,
+ 			new_state = TCP_CONNTRACK_ESTABLISHED;
+ 		break;
+ 	case TCP_CONNTRACK_CLOSE:
+-		if (index == TCP_RST_SET
+-		    && (ct->proto.tcp.seen[!dir].flags & IP_CT_TCP_FLAG_MAXACK_SET)
+-		    && before(ntohl(th->seq), ct->proto.tcp.seen[!dir].td_maxack)) {
+-			/* Invalid RST  */
+-			spin_unlock_bh(&ct->lock);
+-			nf_ct_l4proto_log_invalid(skb, ct, "invalid rst");
+-			return -NF_ACCEPT;
++		if (index != TCP_RST_SET)
++			break;
++
++		if (ct->proto.tcp.seen[!dir].flags & IP_CT_TCP_FLAG_MAXACK_SET) {
++			u32 seq = ntohl(th->seq);
++
++			if (before(seq, ct->proto.tcp.seen[!dir].td_maxack)) {
++				/* Invalid RST  */
++				spin_unlock_bh(&ct->lock);
++				nf_ct_l4proto_log_invalid(skb, ct, "invalid rst");
++				return -NF_ACCEPT;
++			}
++
++			if (!nf_conntrack_tcp_established(ct) ||
++			    seq == ct->proto.tcp.seen[!dir].td_maxack)
++				break;
++
++			/* Check if rst is part of train, such as
++			 *   foo:80 > bar:4379: P, 235946583:235946602(19) ack 42
++			 *   foo:80 > bar:4379: R, 235946602:235946602(0)  ack 42
++			 */
++			if (ct->proto.tcp.last_index == TCP_ACK_SET &&
++			    ct->proto.tcp.last_dir == dir &&
++			    seq == ct->proto.tcp.last_end)
++				break;
++
++			/* ... RST sequence number doesn't match exactly, keep
++			 * established state to allow a possible challenge ACK.
++			 */
++			new_state = old_state;
+ 		}
+-		if (index == TCP_RST_SET
+-		    && ((test_bit(IPS_SEEN_REPLY_BIT, &ct->status)
++		if (((test_bit(IPS_SEEN_REPLY_BIT, &ct->status)
+ 			 && ct->proto.tcp.last_index == TCP_SYN_SET)
+ 			|| (!test_bit(IPS_ASSURED_BIT, &ct->status)
+ 			    && ct->proto.tcp.last_index == TCP_ACK_SET))
+@@ -1055,7 +1083,7 @@ static int tcp_packet(struct nf_conn *ct,
+ 			 * segments we ignored. */
+ 			goto in_window;
+ 		}
+-		/* Just fall through */
++		break;
+ 	default:
+ 		/* Keep compilers happy. */
+ 		break;
+@@ -1090,6 +1118,8 @@ static int tcp_packet(struct nf_conn *ct,
+ 	if (ct->proto.tcp.retrans >= tn->tcp_max_retrans &&
+ 	    timeouts[new_state] > timeouts[TCP_CONNTRACK_RETRANS])
+ 		timeout = timeouts[TCP_CONNTRACK_RETRANS];
++	else if (unlikely(index == TCP_RST_SET))
++		timeout = timeouts[TCP_CONNTRACK_CLOSE];
+ 	else if ((ct->proto.tcp.seen[0].flags | ct->proto.tcp.seen[1].flags) &
+ 		 IP_CT_TCP_FLAG_DATA_UNACKNOWLEDGED &&
+ 		 timeouts[new_state] > timeouts[TCP_CONNTRACK_UNACK])
+diff --git a/net/netfilter/nf_tables_core.c b/net/netfilter/nf_tables_core.c
+index a50500232b0a..7e8dae82ca52 100644
+--- a/net/netfilter/nf_tables_core.c
++++ b/net/netfilter/nf_tables_core.c
+@@ -98,21 +98,23 @@ static noinline void nft_update_chain_stats(const struct nft_chain *chain,
+ 					    const struct nft_pktinfo *pkt)
+ {
+ 	struct nft_base_chain *base_chain;
++	struct nft_stats __percpu *pstats;
+ 	struct nft_stats *stats;
+ 
+ 	base_chain = nft_base_chain(chain);
+-	if (!rcu_access_pointer(base_chain->stats))
+-		return;
+ 
+-	local_bh_disable();
+-	stats = this_cpu_ptr(rcu_dereference(base_chain->stats));
+-	if (stats) {
++	rcu_read_lock();
++	pstats = READ_ONCE(base_chain->stats);
++	if (pstats) {
++		local_bh_disable();
++		stats = this_cpu_ptr(pstats);
+ 		u64_stats_update_begin(&stats->syncp);
+ 		stats->pkts++;
+ 		stats->bytes += pkt->skb->len;
+ 		u64_stats_update_end(&stats->syncp);
++		local_bh_enable();
+ 	}
+-	local_bh_enable();
++	rcu_read_unlock();
+ }
+ 
+ struct nft_jumpstack {
+diff --git a/net/netfilter/xt_physdev.c b/net/netfilter/xt_physdev.c
+index 4034d70bff39..b2e39cb6a590 100644
+--- a/net/netfilter/xt_physdev.c
++++ b/net/netfilter/xt_physdev.c
+@@ -96,8 +96,7 @@ match_outdev:
+ static int physdev_mt_check(const struct xt_mtchk_param *par)
+ {
+ 	const struct xt_physdev_info *info = par->matchinfo;
+-
+-	br_netfilter_enable();
++	static bool brnf_probed __read_mostly;
+ 
+ 	if (!(info->bitmask & XT_PHYSDEV_OP_MASK) ||
+ 	    info->bitmask & ~XT_PHYSDEV_OP_MASK)
+@@ -111,6 +110,12 @@ static int physdev_mt_check(const struct xt_mtchk_param *par)
+ 		if (par->hook_mask & (1 << NF_INET_LOCAL_OUT))
+ 			return -EINVAL;
+ 	}
++
++	if (!brnf_probed) {
++		brnf_probed = true;
++		request_module("br_netfilter");
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index 85e4fe4f18cc..f3031c8907d9 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -407,6 +407,10 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
+ 	if (sxdp->sxdp_family != AF_XDP)
+ 		return -EINVAL;
+ 
++	flags = sxdp->sxdp_flags;
++	if (flags & ~(XDP_SHARED_UMEM | XDP_COPY | XDP_ZEROCOPY))
++		return -EINVAL;
++
+ 	mutex_lock(&xs->mutex);
+ 	if (xs->dev) {
+ 		err = -EBUSY;
+@@ -425,7 +429,6 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
+ 	}
+ 
+ 	qid = sxdp->sxdp_queue_id;
+-	flags = sxdp->sxdp_flags;
+ 
+ 	if (flags & XDP_SHARED_UMEM) {
+ 		struct xdp_sock *umem_xs;
+diff --git a/security/apparmor/policy_unpack.c b/security/apparmor/policy_unpack.c
+index 379682e2a8d5..f6c2bcb2ab14 100644
+--- a/security/apparmor/policy_unpack.c
++++ b/security/apparmor/policy_unpack.c
+@@ -579,6 +579,7 @@ fail:
+ 			kfree(profile->secmark[i].label);
+ 		kfree(profile->secmark);
+ 		profile->secmark_count = 0;
++		profile->secmark = NULL;
+ 	}
+ 
+ 	e->pos = pos;
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index cf20dd36a30f..07b11b5aaf1f 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -3244,12 +3244,16 @@ static int selinux_inode_setsecurity(struct inode *inode, const char *name,
+ 				     const void *value, size_t size, int flags)
+ {
+ 	struct inode_security_struct *isec = inode_security_novalidate(inode);
++	struct superblock_security_struct *sbsec = inode->i_sb->s_security;
+ 	u32 newsid;
+ 	int rc;
+ 
+ 	if (strcmp(name, XATTR_SELINUX_SUFFIX))
+ 		return -EOPNOTSUPP;
+ 
++	if (!(sbsec->flags & SBLABEL_MNT))
++		return -EOPNOTSUPP;
++
+ 	if (!value || !size)
+ 		return -EACCES;
+ 
+@@ -6398,7 +6402,10 @@ static void selinux_inode_invalidate_secctx(struct inode *inode)
+  */
+ static int selinux_inode_notifysecctx(struct inode *inode, void *ctx, u32 ctxlen)
+ {
+-	return selinux_inode_setsecurity(inode, XATTR_SELINUX_SUFFIX, ctx, ctxlen, 0);
++	int rc = selinux_inode_setsecurity(inode, XATTR_SELINUX_SUFFIX,
++					   ctx, ctxlen, 0);
++	/* Do not return error when suppressing label (SBLABEL_MNT not set). */
++	return rc == -EOPNOTSUPP ? 0 : rc;
+ }
+ 
+ /*
+diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
+index b67f6fe08a1b..e08c6c6ca029 100644
+--- a/sound/core/pcm_native.c
++++ b/sound/core/pcm_native.c
+@@ -1513,6 +1513,14 @@ int snd_pcm_suspend_all(struct snd_pcm *pcm)
+ 			/* FIXME: the open/close code should lock this as well */
+ 			if (substream->runtime == NULL)
+ 				continue;
++
++			/*
++			 * Skip BE dai link PCM's that are internal and may
++			 * not have their substream ops set.
++			 */
++			if (!substream->ops)
++				continue;
++
+ 			err = snd_pcm_suspend(substream);
+ 			if (err < 0 && err != -EBUSY)
+ 				return err;
+diff --git a/sound/firewire/dice/dice.c b/sound/firewire/dice/dice.c
+index ed50b222d36e..eee184b05d93 100644
+--- a/sound/firewire/dice/dice.c
++++ b/sound/firewire/dice/dice.c
+@@ -18,6 +18,7 @@ MODULE_LICENSE("GPL v2");
+ #define OUI_ALESIS		0x000595
+ #define OUI_MAUDIO		0x000d6c
+ #define OUI_MYTEK		0x001ee8
++#define OUI_SSL			0x0050c2	// Actually ID reserved by IEEE.
+ 
+ #define DICE_CATEGORY_ID	0x04
+ #define WEISS_CATEGORY_ID	0x00
+@@ -196,7 +197,7 @@ static int dice_probe(struct fw_unit *unit,
+ 	struct snd_dice *dice;
+ 	int err;
+ 
+-	if (!entry->driver_data) {
++	if (!entry->driver_data && entry->vendor_id != OUI_SSL) {
+ 		err = check_dice_category(unit);
+ 		if (err < 0)
+ 			return -ENODEV;
+@@ -361,6 +362,15 @@ static const struct ieee1394_device_id dice_id_table[] = {
+ 		.model_id	= 0x000002,
+ 		.driver_data = (kernel_ulong_t)snd_dice_detect_mytek_formats,
+ 	},
++	// Solid State Logic, Duende Classic and Mini.
++	// NOTE: each field of GUID in config ROM is not compliant to standard
++	// DICE scheme.
++	{
++		.match_flags	= IEEE1394_MATCH_VENDOR_ID |
++				  IEEE1394_MATCH_MODEL_ID,
++		.vendor_id	= OUI_SSL,
++		.model_id	= 0x000070,
++	},
+ 	{
+ 		.match_flags = IEEE1394_MATCH_VERSION,
+ 		.version     = DICE_INTERFACE,
+diff --git a/sound/soc/fsl/fsl-asoc-card.c b/sound/soc/fsl/fsl-asoc-card.c
+index 81f2fe2c6d23..60f87a0d99f4 100644
+--- a/sound/soc/fsl/fsl-asoc-card.c
++++ b/sound/soc/fsl/fsl-asoc-card.c
+@@ -689,6 +689,7 @@ static int fsl_asoc_card_probe(struct platform_device *pdev)
+ asrc_fail:
+ 	of_node_put(asrc_np);
+ 	of_node_put(codec_np);
++	put_device(&cpu_pdev->dev);
+ fail:
+ 	of_node_put(cpu_np);
+ 
+diff --git a/sound/soc/fsl/imx-sgtl5000.c b/sound/soc/fsl/imx-sgtl5000.c
+index c29200cf755a..9b9a7ec52905 100644
+--- a/sound/soc/fsl/imx-sgtl5000.c
++++ b/sound/soc/fsl/imx-sgtl5000.c
+@@ -108,6 +108,7 @@ static int imx_sgtl5000_probe(struct platform_device *pdev)
+ 		ret = -EPROBE_DEFER;
+ 		goto fail;
+ 	}
++	put_device(&ssi_pdev->dev);
+ 	codec_dev = of_find_i2c_device_by_node(codec_np);
+ 	if (!codec_dev) {
+ 		dev_err(&pdev->dev, "failed to find codec platform device\n");
+diff --git a/sound/soc/generic/simple-card-utils.c b/sound/soc/generic/simple-card-utils.c
+index b807a47515eb..336895f7fd1e 100644
+--- a/sound/soc/generic/simple-card-utils.c
++++ b/sound/soc/generic/simple-card-utils.c
+@@ -283,12 +283,20 @@ static int asoc_simple_card_get_dai_id(struct device_node *ep)
+ 	/* use endpoint/port reg if exist */
+ 	ret = of_graph_parse_endpoint(ep, &info);
+ 	if (ret == 0) {
+-		if (info.id)
++		/*
++		 * Because it will count port/endpoint if it doesn't have "reg".
++		 * But, we can't judge whether it has "no reg", or "reg = <0>"
++		 * only of_graph_parse_endpoint().
++		 * We need to check "reg" property
++		 */
++		if (of_get_property(ep,   "reg", NULL))
+ 			return info.id;
+-		if (info.port)
++
++		node = of_get_parent(ep);
++		of_node_put(node);
++		if (of_get_property(node, "reg", NULL))
+ 			return info.port;
+ 	}
+-
+ 	node = of_graph_get_port_parent(ep);
+ 
+ 	/*
+diff --git a/sound/soc/qcom/common.c b/sound/soc/qcom/common.c
+index 4715527054e5..5661025e8cec 100644
+--- a/sound/soc/qcom/common.c
++++ b/sound/soc/qcom/common.c
+@@ -42,6 +42,9 @@ int qcom_snd_parse_of(struct snd_soc_card *card)
+ 	link = card->dai_link;
+ 	for_each_child_of_node(dev->of_node, np) {
+ 		cpu = of_get_child_by_name(np, "cpu");
++		platform = of_get_child_by_name(np, "platform");
++		codec = of_get_child_by_name(np, "codec");
++
+ 		if (!cpu) {
+ 			dev_err(dev, "Can't find cpu DT node\n");
+ 			ret = -EINVAL;
+@@ -63,8 +66,6 @@ int qcom_snd_parse_of(struct snd_soc_card *card)
+ 			goto err;
+ 		}
+ 
+-		platform = of_get_child_by_name(np, "platform");
+-		codec = of_get_child_by_name(np, "codec");
+ 		if (codec && platform) {
+ 			link->platform_of_node = of_parse_phandle(platform,
+ 					"sound-dai",
+@@ -100,10 +101,15 @@ int qcom_snd_parse_of(struct snd_soc_card *card)
+ 		link->dpcm_capture = 1;
+ 		link->stream_name = link->name;
+ 		link++;
++
++		of_node_put(cpu);
++		of_node_put(codec);
++		of_node_put(platform);
+ 	}
+ 
+ 	return 0;
+ err:
++	of_node_put(np);
+ 	of_node_put(cpu);
+ 	of_node_put(codec);
+ 	of_node_put(platform);
+diff --git a/tools/build/Makefile.feature b/tools/build/Makefile.feature
+index 5467c6bf9ceb..bb9dca65eb5f 100644
+--- a/tools/build/Makefile.feature
++++ b/tools/build/Makefile.feature
+@@ -70,7 +70,6 @@ FEATURE_TESTS_BASIC :=                  \
+         sched_getcpu			\
+         sdt				\
+         setns				\
+-        libopencsd			\
+         libaio
+ 
+ # FEATURE_TESTS_BASIC + FEATURE_TESTS_EXTRA is the complete list
+@@ -84,6 +83,7 @@ FEATURE_TESTS_EXTRA :=                  \
+          libbabeltrace                  \
+          libbfd-liberty                 \
+          libbfd-liberty-z               \
++         libopencsd                     \
+          libunwind-debug-frame          \
+          libunwind-debug-frame-arm      \
+          libunwind-debug-frame-aarch64  \
+diff --git a/tools/build/feature/test-all.c b/tools/build/feature/test-all.c
+index 20cdaa4fc112..e903b86b742f 100644
+--- a/tools/build/feature/test-all.c
++++ b/tools/build/feature/test-all.c
+@@ -170,14 +170,14 @@
+ # include "test-setns.c"
+ #undef main
+ 
+-#define main main_test_libopencsd
+-# include "test-libopencsd.c"
+-#undef main
+-
+ #define main main_test_libaio
+ # include "test-libaio.c"
+ #undef main
+ 
++#define main main_test_reallocarray
++# include "test-reallocarray.c"
++#undef main
++
+ int main(int argc, char *argv[])
+ {
+ 	main_test_libpython();
+@@ -217,8 +217,8 @@ int main(int argc, char *argv[])
+ 	main_test_sched_getcpu();
+ 	main_test_sdt();
+ 	main_test_setns();
+-	main_test_libopencsd();
+ 	main_test_libaio();
++	main_test_reallocarray();
+ 
+ 	return 0;
+ }
+diff --git a/tools/build/feature/test-reallocarray.c b/tools/build/feature/test-reallocarray.c
+index 8170de35150d..8f6743e31da7 100644
+--- a/tools/build/feature/test-reallocarray.c
++++ b/tools/build/feature/test-reallocarray.c
+@@ -6,3 +6,5 @@ int main(void)
+ {
+ 	return !!reallocarray(NULL, 1, 1);
+ }
++
++#undef _GNU_SOURCE
+diff --git a/tools/lib/bpf/Makefile b/tools/lib/bpf/Makefile
+index 34d9c3619c96..78fd86b85087 100644
+--- a/tools/lib/bpf/Makefile
++++ b/tools/lib/bpf/Makefile
+@@ -162,7 +162,8 @@ endif
+ 
+ TARGETS = $(CMD_TARGETS)
+ 
+-all: fixdep all_cmd
++all: fixdep
++	$(Q)$(MAKE) all_cmd
+ 
+ all_cmd: $(CMD_TARGETS) check
+ 
+diff --git a/tools/lib/lockdep/run_tests.sh b/tools/lib/lockdep/run_tests.sh
+index c8fbd0306960..11f425662b43 100755
+--- a/tools/lib/lockdep/run_tests.sh
++++ b/tools/lib/lockdep/run_tests.sh
+@@ -11,7 +11,7 @@ find tests -name '*.c' | sort | while read -r i; do
+ 	testname=$(basename "$i" .c)
+ 	echo -ne "$testname... "
+ 	if gcc -o "tests/$testname" -pthread "$i" liblockdep.a -Iinclude -D__USE_LIBLOCKDEP &&
+-		timeout 1 "tests/$testname" 2>&1 | "tests/${testname}.sh"; then
++		timeout 1 "tests/$testname" 2>&1 | /bin/bash "tests/${testname}.sh"; then
+ 		echo "PASSED!"
+ 	else
+ 		echo "FAILED!"
+@@ -24,7 +24,7 @@ find tests -name '*.c' | sort | while read -r i; do
+ 	echo -ne "(PRELOAD) $testname... "
+ 	if gcc -o "tests/$testname" -pthread -Iinclude "$i" &&
+ 		timeout 1 ./lockdep "tests/$testname" 2>&1 |
+-		"tests/${testname}.sh"; then
++		/bin/bash "tests/${testname}.sh"; then
+ 		echo "PASSED!"
+ 	else
+ 		echo "FAILED!"
+@@ -37,7 +37,7 @@ find tests -name '*.c' | sort | while read -r i; do
+ 	echo -ne "(PRELOAD + Valgrind) $testname... "
+ 	if gcc -o "tests/$testname" -pthread -Iinclude "$i" &&
+ 		{ timeout 10 valgrind --read-var-info=yes ./lockdep "./tests/$testname" >& "tests/${testname}.vg.out"; true; } &&
+-		"tests/${testname}.sh" < "tests/${testname}.vg.out" &&
++		/bin/bash "tests/${testname}.sh" < "tests/${testname}.vg.out" &&
+ 		! grep -Eq '(^==[0-9]*== (Invalid |Uninitialised ))|Mismatched free|Source and destination overlap| UME ' "tests/${testname}.vg.out"; then
+ 		echo "PASSED!"
+ 	else
+diff --git a/tools/lib/traceevent/event-parse.c b/tools/lib/traceevent/event-parse.c
+index abd4fa5d3088..87494c7c619d 100644
+--- a/tools/lib/traceevent/event-parse.c
++++ b/tools/lib/traceevent/event-parse.c
+@@ -2457,7 +2457,7 @@ static int arg_num_eval(struct tep_print_arg *arg, long long *val)
+ static char *arg_eval (struct tep_print_arg *arg)
+ {
+ 	long long val;
+-	static char buf[20];
++	static char buf[24];
+ 
+ 	switch (arg->type) {
+ 	case TEP_PRINT_ATOM:
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index b441c88cafa1..cf4a8329c4c0 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -218,6 +218,8 @@ FEATURE_CHECK_LDFLAGS-libpython := $(PYTHON_EMBED_LDOPTS)
+ FEATURE_CHECK_CFLAGS-libpython-version := $(PYTHON_EMBED_CCOPTS)
+ FEATURE_CHECK_LDFLAGS-libpython-version := $(PYTHON_EMBED_LDOPTS)
+ 
++FEATURE_CHECK_LDFLAGS-libaio = -lrt
++
+ CFLAGS += -fno-omit-frame-pointer
+ CFLAGS += -ggdb3
+ CFLAGS += -funwind-tables
+@@ -386,7 +388,8 @@ ifeq ($(feature-setns), 1)
+   $(call detected,CONFIG_SETNS)
+ endif
+ 
+-ifndef NO_CORESIGHT
++ifdef CORESIGHT
++  $(call feature_check,libopencsd)
+   ifeq ($(feature-libopencsd), 1)
+     CFLAGS += -DHAVE_CSTRACE_SUPPORT $(LIBOPENCSD_CFLAGS)
+     LDFLAGS += $(LIBOPENCSD_LDFLAGS)
+diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf
+index 0ee6795d82cc..77f8f069f1e7 100644
+--- a/tools/perf/Makefile.perf
++++ b/tools/perf/Makefile.perf
+@@ -102,7 +102,7 @@ include ../scripts/utilities.mak
+ # When selected, pass LLVM_CONFIG=/path/to/llvm-config to `make' if
+ # llvm-config is not in $PATH.
+ #
+-# Define NO_CORESIGHT if you do not want support for CoreSight trace decoding.
++# Define CORESIGHT if you DO WANT support for CoreSight trace decoding.
+ #
+ # Define NO_AIO if you do not want support of Posix AIO based trace
+ # streaming for record mode. Currently Posix AIO trace streaming is
+diff --git a/tools/perf/builtin-c2c.c b/tools/perf/builtin-c2c.c
+index d340d2e42776..13758a0b367b 100644
+--- a/tools/perf/builtin-c2c.c
++++ b/tools/perf/builtin-c2c.c
+@@ -2055,6 +2055,12 @@ static int setup_nodes(struct perf_session *session)
+ 		if (!set)
+ 			return -ENOMEM;
+ 
++		nodes[node] = set;
++
++		/* empty node, skip */
++		if (cpu_map__empty(map))
++			continue;
++
+ 		for (cpu = 0; cpu < map->nr; cpu++) {
+ 			set_bit(map->map[cpu], set);
+ 
+@@ -2063,8 +2069,6 @@ static int setup_nodes(struct perf_session *session)
+ 
+ 			cpu2node[map->map[cpu]] = node;
+ 		}
+-
+-		nodes[node] = set;
+ 	}
+ 
+ 	setup_nodes_header();
+diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c
+index ac221f137ed2..cff4d10daf49 100644
+--- a/tools/perf/builtin-script.c
++++ b/tools/perf/builtin-script.c
+@@ -148,6 +148,7 @@ static struct {
+ 	unsigned int print_ip_opts;
+ 	u64 fields;
+ 	u64 invalid_fields;
++	u64 user_set_fields;
+ } output[OUTPUT_TYPE_MAX] = {
+ 
+ 	[PERF_TYPE_HARDWARE] = {
+@@ -344,7 +345,7 @@ static int perf_evsel__do_check_stype(struct perf_evsel *evsel,
+ 	if (attr->sample_type & sample_type)
+ 		return 0;
+ 
+-	if (output[type].user_set) {
++	if (output[type].user_set_fields & field) {
+ 		if (allow_user_set)
+ 			return 0;
+ 		evname = perf_evsel__name(evsel);
+@@ -2627,10 +2628,13 @@ parse:
+ 					pr_warning("\'%s\' not valid for %s events. Ignoring.\n",
+ 						   all_output_options[i].str, event_type(j));
+ 				} else {
+-					if (change == REMOVE)
++					if (change == REMOVE) {
+ 						output[j].fields &= ~all_output_options[i].field;
+-					else
++						output[j].user_set_fields &= ~all_output_options[i].field;
++					} else {
+ 						output[j].fields |= all_output_options[i].field;
++						output[j].user_set_fields |= all_output_options[i].field;
++					}
+ 					output[j].user_set = true;
+ 					output[j].wildcard_set = true;
+ 				}
+diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
+index b36061cd1ab8..91cdbf504535 100644
+--- a/tools/perf/builtin-trace.c
++++ b/tools/perf/builtin-trace.c
+@@ -1039,6 +1039,9 @@ static const size_t trace__entry_str_size = 2048;
+ 
+ static struct file *thread_trace__files_entry(struct thread_trace *ttrace, int fd)
+ {
++	if (fd < 0)
++		return NULL;
++
+ 	if (fd > ttrace->files.max) {
+ 		struct file *nfiles = realloc(ttrace->files.table, (fd + 1) * sizeof(struct file));
+ 
+@@ -3865,7 +3868,8 @@ int cmd_trace(int argc, const char **argv)
+ 				goto init_augmented_syscall_tp;
+ 			}
+ 
+-			if (strcmp(perf_evsel__name(evsel), "raw_syscalls:sys_enter") == 0) {
++			if (trace.syscalls.events.augmented->priv == NULL &&
++			    strstr(perf_evsel__name(evsel), "syscalls:sys_enter")) {
+ 				struct perf_evsel *augmented = trace.syscalls.events.augmented;
+ 				if (perf_evsel__init_augmented_syscall_tp(augmented, evsel) ||
+ 				    perf_evsel__init_augmented_syscall_tp_args(augmented))
+diff --git a/tools/perf/tests/evsel-tp-sched.c b/tools/perf/tests/evsel-tp-sched.c
+index 5cbba70bcdd0..ea7acf403727 100644
+--- a/tools/perf/tests/evsel-tp-sched.c
++++ b/tools/perf/tests/evsel-tp-sched.c
+@@ -43,7 +43,7 @@ int test__perf_evsel__tp_sched_test(struct test *test __maybe_unused, int subtes
+ 		return -1;
+ 	}
+ 
+-	if (perf_evsel__test_field(evsel, "prev_comm", 16, true))
++	if (perf_evsel__test_field(evsel, "prev_comm", 16, false))
+ 		ret = -1;
+ 
+ 	if (perf_evsel__test_field(evsel, "prev_pid", 4, true))
+@@ -55,7 +55,7 @@ int test__perf_evsel__tp_sched_test(struct test *test __maybe_unused, int subtes
+ 	if (perf_evsel__test_field(evsel, "prev_state", sizeof(long), true))
+ 		ret = -1;
+ 
+-	if (perf_evsel__test_field(evsel, "next_comm", 16, true))
++	if (perf_evsel__test_field(evsel, "next_comm", 16, false))
+ 		ret = -1;
+ 
+ 	if (perf_evsel__test_field(evsel, "next_pid", 4, true))
+@@ -73,7 +73,7 @@ int test__perf_evsel__tp_sched_test(struct test *test __maybe_unused, int subtes
+ 		return -1;
+ 	}
+ 
+-	if (perf_evsel__test_field(evsel, "comm", 16, true))
++	if (perf_evsel__test_field(evsel, "comm", 16, false))
+ 		ret = -1;
+ 
+ 	if (perf_evsel__test_field(evsel, "pid", 4, true))
+diff --git a/tools/perf/trace/beauty/msg_flags.c b/tools/perf/trace/beauty/msg_flags.c
+index d66c66315987..ea68db08b8e7 100644
+--- a/tools/perf/trace/beauty/msg_flags.c
++++ b/tools/perf/trace/beauty/msg_flags.c
+@@ -29,7 +29,7 @@ static size_t syscall_arg__scnprintf_msg_flags(char *bf, size_t size,
+ 		return scnprintf(bf, size, "NONE");
+ #define	P_MSG_FLAG(n) \
+ 	if (flags & MSG_##n) { \
+-		printed += scnprintf(bf + printed, size - printed, "%s%s", printed ? "|" : "", show_prefix ? prefix : "", #n); \
++		printed += scnprintf(bf + printed, size - printed, "%s%s%s", printed ? "|" : "", show_prefix ? prefix : "", #n); \
+ 		flags &= ~MSG_##n; \
+ 	}
+ 
+diff --git a/tools/perf/trace/beauty/waitid_options.c b/tools/perf/trace/beauty/waitid_options.c
+index 6897fab40dcc..d4d10b33ba0e 100644
+--- a/tools/perf/trace/beauty/waitid_options.c
++++ b/tools/perf/trace/beauty/waitid_options.c
+@@ -11,7 +11,7 @@ static size_t syscall_arg__scnprintf_waitid_options(char *bf, size_t size,
+ 
+ #define	P_OPTION(n) \
+ 	if (options & W##n) { \
+-		printed += scnprintf(bf + printed, size - printed, "%s%s%s", printed ? "|" : "", show_prefix ? prefix : #n); \
++		printed += scnprintf(bf + printed, size - printed, "%s%s%s", printed ? "|" : "", show_prefix ? prefix : "",  #n); \
+ 		options &= ~W##n; \
+ 	}
+ 
+diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
+index 70de8f6b3aee..9142fd294e76 100644
+--- a/tools/perf/util/annotate.c
++++ b/tools/perf/util/annotate.c
+@@ -1889,6 +1889,7 @@ int symbol__annotate(struct symbol *sym, struct map *map,
+ 		     struct annotation_options *options,
+ 		     struct arch **parch)
+ {
++	struct annotation *notes = symbol__annotation(sym);
+ 	struct annotate_args args = {
+ 		.privsize	= privsize,
+ 		.evsel		= evsel,
+@@ -1919,6 +1920,7 @@ int symbol__annotate(struct symbol *sym, struct map *map,
+ 
+ 	args.ms.map = map;
+ 	args.ms.sym = sym;
++	notes->start = map__rip_2objdump(map, sym->start);
+ 
+ 	return symbol__disassemble(sym, &args);
+ }
+@@ -2794,8 +2796,6 @@ int symbol__annotate2(struct symbol *sym, struct map *map, struct perf_evsel *ev
+ 
+ 	symbol__calc_percent(sym, evsel);
+ 
+-	notes->start = map__rip_2objdump(map, sym->start);
+-
+ 	annotation__set_offsets(notes, size);
+ 	annotation__mark_jump_targets(notes, sym);
+ 	annotation__compute_ipc(notes, size);
+diff --git a/tools/perf/util/s390-cpumsf.c b/tools/perf/util/s390-cpumsf.c
+index 68b2570304ec..08073a4d59a4 100644
+--- a/tools/perf/util/s390-cpumsf.c
++++ b/tools/perf/util/s390-cpumsf.c
+@@ -301,6 +301,11 @@ static bool s390_cpumsf_validate(int machine_type,
+ 			*dsdes = 85;
+ 			*bsdes = 32;
+ 			break;
++		case 2964:
++		case 2965:
++			*dsdes = 112;
++			*bsdes = 32;
++			break;
+ 		default:
+ 			/* Illegal trailer entry */
+ 			return false;
+diff --git a/tools/perf/util/scripting-engines/trace-event-python.c b/tools/perf/util/scripting-engines/trace-event-python.c
+index 87ef16a1b17e..7059d1be2d09 100644
+--- a/tools/perf/util/scripting-engines/trace-event-python.c
++++ b/tools/perf/util/scripting-engines/trace-event-python.c
+@@ -733,8 +733,7 @@ static PyObject *get_perf_sample_dict(struct perf_sample *sample,
+ 		Py_FatalError("couldn't create Python dictionary");
+ 
+ 	pydict_set_item_string_decref(dict, "ev_name", _PyUnicode_FromString(perf_evsel__name(evsel)));
+-	pydict_set_item_string_decref(dict, "attr", _PyUnicode_FromStringAndSize(
+-			(const char *)&evsel->attr, sizeof(evsel->attr)));
++	pydict_set_item_string_decref(dict, "attr", _PyBytes_FromStringAndSize((const char *)&evsel->attr, sizeof(evsel->attr)));
+ 
+ 	pydict_set_item_string_decref(dict_sample, "pid",
+ 			_PyLong_FromLong(sample->pid));
+@@ -1494,34 +1493,40 @@ static void _free_command_line(wchar_t **command_line, int num)
+ static int python_start_script(const char *script, int argc, const char **argv)
+ {
+ 	struct tables *tables = &tables_global;
++	PyMODINIT_FUNC (*initfunc)(void);
+ #if PY_MAJOR_VERSION < 3
+ 	const char **command_line;
+ #else
+ 	wchar_t **command_line;
+ #endif
+-	char buf[PATH_MAX];
++	/*
++	 * Use a non-const name variable to cope with python 2.6's
++	 * PyImport_AppendInittab prototype
++	 */
++	char buf[PATH_MAX], name[19] = "perf_trace_context";
+ 	int i, err = 0;
+ 	FILE *fp;
+ 
+ #if PY_MAJOR_VERSION < 3
++	initfunc = initperf_trace_context;
+ 	command_line = malloc((argc + 1) * sizeof(const char *));
+ 	command_line[0] = script;
+ 	for (i = 1; i < argc + 1; i++)
+ 		command_line[i] = argv[i - 1];
+ #else
++	initfunc = PyInit_perf_trace_context;
+ 	command_line = malloc((argc + 1) * sizeof(wchar_t *));
+ 	command_line[0] = Py_DecodeLocale(script, NULL);
+ 	for (i = 1; i < argc + 1; i++)
+ 		command_line[i] = Py_DecodeLocale(argv[i - 1], NULL);
+ #endif
+ 
++	PyImport_AppendInittab(name, initfunc);
+ 	Py_Initialize();
+ 
+ #if PY_MAJOR_VERSION < 3
+-	initperf_trace_context();
+ 	PySys_SetArgv(argc + 1, (char **)command_line);
+ #else
+-	PyInit_perf_trace_context();
+ 	PySys_SetArgv(argc + 1, command_line);
+ #endif
+ 
+diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c
+index 6c1a83768eb0..d0334c33da54 100644
+--- a/tools/perf/util/sort.c
++++ b/tools/perf/util/sort.c
+@@ -230,8 +230,14 @@ static int64_t _sort__sym_cmp(struct symbol *sym_l, struct symbol *sym_r)
+ 	if (sym_l == sym_r)
+ 		return 0;
+ 
+-	if (sym_l->inlined || sym_r->inlined)
+-		return strcmp(sym_l->name, sym_r->name);
++	if (sym_l->inlined || sym_r->inlined) {
++		int ret = strcmp(sym_l->name, sym_r->name);
++
++		if (ret)
++			return ret;
++		if ((sym_l->start <= sym_r->end) && (sym_l->end >= sym_r->start))
++			return 0;
++	}
+ 
+ 	if (sym_l->start != sym_r->start)
+ 		return (int64_t)(sym_r->start - sym_l->start);
+diff --git a/tools/perf/util/srcline.c b/tools/perf/util/srcline.c
+index dc86597d0cc4..ccf42c4e83f0 100644
+--- a/tools/perf/util/srcline.c
++++ b/tools/perf/util/srcline.c
+@@ -104,7 +104,7 @@ static struct symbol *new_inline_sym(struct dso *dso,
+ 	} else {
+ 		/* create a fake symbol for the inline frame */
+ 		inline_sym = symbol__new(base_sym ? base_sym->start : 0,
+-					 base_sym ? base_sym->end : 0,
++					 base_sym ? (base_sym->end - base_sym->start) : 0,
+ 					 base_sym ? base_sym->binding : 0,
+ 					 base_sym ? base_sym->type : 0,
+ 					 funcname);
+diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
+index 41ab7a3668b3..936f726f7cd9 100644
+--- a/tools/testing/selftests/bpf/Makefile
++++ b/tools/testing/selftests/bpf/Makefile
+@@ -96,6 +96,7 @@ $(BPFOBJ): force
+ CLANG ?= clang
+ LLC   ?= llc
+ LLVM_OBJCOPY ?= llvm-objcopy
++LLVM_READELF ?= llvm-readelf
+ BTF_PAHOLE ?= pahole
+ 
+ PROBE := $(shell $(LLC) -march=bpf -mcpu=probe -filetype=null /dev/null 2>&1)
+@@ -132,7 +133,7 @@ BTF_PAHOLE_PROBE := $(shell $(BTF_PAHOLE) --help 2>&1 | grep BTF)
+ BTF_OBJCOPY_PROBE := $(shell $(LLVM_OBJCOPY) --help 2>&1 | grep -i 'usage.*llvm')
+ BTF_LLVM_PROBE := $(shell echo "int main() { return 0; }" | \
+ 			  $(CLANG) -target bpf -O2 -g -c -x c - -o ./llvm_btf_verify.o; \
+-			  readelf -S ./llvm_btf_verify.o | grep BTF; \
++			  $(LLVM_READELF) -S ./llvm_btf_verify.o | grep BTF; \
+ 			  /bin/rm -f ./llvm_btf_verify.o)
+ 
+ ifneq ($(BTF_LLVM_PROBE),)
+diff --git a/tools/testing/selftests/bpf/test_map_in_map.c b/tools/testing/selftests/bpf/test_map_in_map.c
+index ce923e67e08e..2985f262846e 100644
+--- a/tools/testing/selftests/bpf/test_map_in_map.c
++++ b/tools/testing/selftests/bpf/test_map_in_map.c
+@@ -27,6 +27,7 @@ SEC("xdp_mimtest")
+ int xdp_mimtest0(struct xdp_md *ctx)
+ {
+ 	int value = 123;
++	int *value_p;
+ 	int key = 0;
+ 	void *map;
+ 
+@@ -35,6 +36,9 @@ int xdp_mimtest0(struct xdp_md *ctx)
+ 		return XDP_DROP;
+ 
+ 	bpf_map_update_elem(map, &key, &value, 0);
++	value_p = bpf_map_lookup_elem(map, &key);
++	if (!value_p || *value_p != 123)
++		return XDP_DROP;
+ 
+ 	map = bpf_map_lookup_elem(&mim_hash, &key);
+ 	if (!map)
+diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c
+index e2b9eee37187..6e05a22b346c 100644
+--- a/tools/testing/selftests/bpf/test_maps.c
++++ b/tools/testing/selftests/bpf/test_maps.c
+@@ -43,7 +43,7 @@ static int map_flags;
+ 	}								\
+ })
+ 
+-static void test_hashmap(int task, void *data)
++static void test_hashmap(unsigned int task, void *data)
+ {
+ 	long long key, next_key, first_key, value;
+ 	int fd;
+@@ -133,7 +133,7 @@ static void test_hashmap(int task, void *data)
+ 	close(fd);
+ }
+ 
+-static void test_hashmap_sizes(int task, void *data)
++static void test_hashmap_sizes(unsigned int task, void *data)
+ {
+ 	int fd, i, j;
+ 
+@@ -153,7 +153,7 @@ static void test_hashmap_sizes(int task, void *data)
+ 		}
+ }
+ 
+-static void test_hashmap_percpu(int task, void *data)
++static void test_hashmap_percpu(unsigned int task, void *data)
+ {
+ 	unsigned int nr_cpus = bpf_num_possible_cpus();
+ 	BPF_DECLARE_PERCPU(long, value);
+@@ -280,7 +280,7 @@ static int helper_fill_hashmap(int max_entries)
+ 	return fd;
+ }
+ 
+-static void test_hashmap_walk(int task, void *data)
++static void test_hashmap_walk(unsigned int task, void *data)
+ {
+ 	int fd, i, max_entries = 1000;
+ 	long long key, value, next_key;
+@@ -351,7 +351,7 @@ static void test_hashmap_zero_seed(void)
+ 	close(second);
+ }
+ 
+-static void test_arraymap(int task, void *data)
++static void test_arraymap(unsigned int task, void *data)
+ {
+ 	int key, next_key, fd;
+ 	long long value;
+@@ -406,7 +406,7 @@ static void test_arraymap(int task, void *data)
+ 	close(fd);
+ }
+ 
+-static void test_arraymap_percpu(int task, void *data)
++static void test_arraymap_percpu(unsigned int task, void *data)
+ {
+ 	unsigned int nr_cpus = bpf_num_possible_cpus();
+ 	BPF_DECLARE_PERCPU(long, values);
+@@ -502,7 +502,7 @@ static void test_arraymap_percpu_many_keys(void)
+ 	close(fd);
+ }
+ 
+-static void test_devmap(int task, void *data)
++static void test_devmap(unsigned int task, void *data)
+ {
+ 	int fd;
+ 	__u32 key, value;
+@@ -517,7 +517,7 @@ static void test_devmap(int task, void *data)
+ 	close(fd);
+ }
+ 
+-static void test_queuemap(int task, void *data)
++static void test_queuemap(unsigned int task, void *data)
+ {
+ 	const int MAP_SIZE = 32;
+ 	__u32 vals[MAP_SIZE + MAP_SIZE/2], val;
+@@ -575,7 +575,7 @@ static void test_queuemap(int task, void *data)
+ 	close(fd);
+ }
+ 
+-static void test_stackmap(int task, void *data)
++static void test_stackmap(unsigned int task, void *data)
+ {
+ 	const int MAP_SIZE = 32;
+ 	__u32 vals[MAP_SIZE + MAP_SIZE/2], val;
+@@ -641,7 +641,7 @@ static void test_stackmap(int task, void *data)
+ #define SOCKMAP_PARSE_PROG "./sockmap_parse_prog.o"
+ #define SOCKMAP_VERDICT_PROG "./sockmap_verdict_prog.o"
+ #define SOCKMAP_TCP_MSG_PROG "./sockmap_tcp_msg_prog.o"
+-static void test_sockmap(int tasks, void *data)
++static void test_sockmap(unsigned int tasks, void *data)
+ {
+ 	struct bpf_map *bpf_map_rx, *bpf_map_tx, *bpf_map_msg, *bpf_map_break;
+ 	int map_fd_msg = 0, map_fd_rx = 0, map_fd_tx = 0, map_fd_break;
+@@ -1258,10 +1258,11 @@ static void test_map_large(void)
+ }
+ 
+ #define run_parallel(N, FN, DATA) \
+-	printf("Fork %d tasks to '" #FN "'\n", N); \
++	printf("Fork %u tasks to '" #FN "'\n", N); \
+ 	__run_parallel(N, FN, DATA)
+ 
+-static void __run_parallel(int tasks, void (*fn)(int task, void *data),
++static void __run_parallel(unsigned int tasks,
++			   void (*fn)(unsigned int task, void *data),
+ 			   void *data)
+ {
+ 	pid_t pid[tasks];
+@@ -1302,7 +1303,7 @@ static void test_map_stress(void)
+ #define DO_UPDATE 1
+ #define DO_DELETE 0
+ 
+-static void test_update_delete(int fn, void *data)
++static void test_update_delete(unsigned int fn, void *data)
+ {
+ 	int do_update = ((int *)data)[1];
+ 	int fd = ((int *)data)[0];
+diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
+index 2fd90d456892..9a967983abed 100644
+--- a/tools/testing/selftests/bpf/test_verifier.c
++++ b/tools/testing/selftests/bpf/test_verifier.c
+@@ -34,6 +34,7 @@
+ #include <linux/if_ether.h>
+ 
+ #include <bpf/bpf.h>
++#include <bpf/libbpf.h>
+ 
+ #ifdef HAVE_GENHDR
+ # include "autoconf.h"
+@@ -59,6 +60,7 @@
+ 
+ #define UNPRIV_SYSCTL "kernel/unprivileged_bpf_disabled"
+ static bool unpriv_disabled = false;
++static int skips;
+ 
+ struct bpf_test {
+ 	const char *descr;
+@@ -15946,6 +15948,11 @@ static void do_test_single(struct bpf_test *test, bool unpriv,
+ 		pflags |= BPF_F_ANY_ALIGNMENT;
+ 	fd_prog = bpf_verify_program(prog_type, prog, prog_len, pflags,
+ 				     "GPL", 0, bpf_vlog, sizeof(bpf_vlog), 1);
++	if (fd_prog < 0 && !bpf_probe_prog_type(prog_type, 0)) {
++		printf("SKIP (unsupported program type %d)\n", prog_type);
++		skips++;
++		goto close_fds;
++	}
+ 
+ 	expected_ret = unpriv && test->result_unpriv != UNDEF ?
+ 		       test->result_unpriv : test->result;
+@@ -16099,7 +16106,7 @@ static bool test_as_unpriv(struct bpf_test *test)
+ 
+ static int do_test(bool unpriv, unsigned int from, unsigned int to)
+ {
+-	int i, passes = 0, errors = 0, skips = 0;
++	int i, passes = 0, errors = 0;
+ 
+ 	for (i = from; i < to; i++) {
+ 		struct bpf_test *test = &tests[i];
+diff --git a/tools/testing/selftests/ir/ir_loopback.c b/tools/testing/selftests/ir/ir_loopback.c
+index 858c19caf224..8cdf1b89ac9c 100644
+--- a/tools/testing/selftests/ir/ir_loopback.c
++++ b/tools/testing/selftests/ir/ir_loopback.c
+@@ -27,6 +27,8 @@
+ 
+ #define TEST_SCANCODES	10
+ #define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
++#define SYSFS_PATH_MAX 256
++#define DNAME_PATH_MAX 256
+ 
+ static const struct {
+ 	enum rc_proto proto;
+@@ -56,7 +58,7 @@ static const struct {
+ int lirc_open(const char *rc)
+ {
+ 	struct dirent *dent;
+-	char buf[100];
++	char buf[SYSFS_PATH_MAX + DNAME_PATH_MAX];
+ 	DIR *d;
+ 	int fd;
+ 
+diff --git a/tools/testing/selftests/seccomp/seccomp_bpf.c b/tools/testing/selftests/seccomp/seccomp_bpf.c
+index 7e632b465ab4..6d7a81306f8a 100644
+--- a/tools/testing/selftests/seccomp/seccomp_bpf.c
++++ b/tools/testing/selftests/seccomp/seccomp_bpf.c
+@@ -2971,6 +2971,12 @@ TEST(get_metadata)
+ 	struct seccomp_metadata md;
+ 	long ret;
+ 
++	/* Only real root can get metadata. */
++	if (geteuid()) {
++		XFAIL(return, "get_metadata requires real root");
++		return;
++	}
++
+ 	ASSERT_EQ(0, pipe(pipefd));
+ 
+ 	pid = fork();


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [gentoo-commits] proj/linux-patches:5.0 commit in: /
@ 2019-04-17  7:32 Alice Ferrazzi
  0 siblings, 0 replies; 28+ messages in thread
From: Alice Ferrazzi @ 2019-04-17  7:32 UTC (permalink / raw)
  To: gentoo-commits

commit:     f5cf400c13c66c3c62cc0b83cd9894bec6c56983
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Apr 17 07:31:29 2019 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Apr 17 07:32:17 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f5cf400c

Linux Patch 5.0.8

Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>

 0000_README            |     4 +
 1007_linux-5.0.8.patch | 36018 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 36022 insertions(+)

diff --git a/0000_README b/0000_README
index 0545dfc..2dd07a5 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch:  1006_linux-5.0.7.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.0.7
 
+Patch:  1007_linux-5.0.8.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.0.8
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1007_linux-5.0.8.patch b/1007_linux-5.0.8.patch
new file mode 100644
index 0000000..2e45798
--- /dev/null
+++ b/1007_linux-5.0.8.patch
@@ -0,0 +1,36018 @@
+diff --git a/Documentation/DMA-API.txt b/Documentation/DMA-API.txt
+index e133ccd60228..acfe3d0f78d1 100644
+--- a/Documentation/DMA-API.txt
++++ b/Documentation/DMA-API.txt
+@@ -195,6 +195,14 @@ Requesting the required mask does not alter the current mask.  If you
+ wish to take advantage of it, you should issue a dma_set_mask()
+ call to set the mask to the value returned.
+ 
++::
++
++	size_t
++	dma_direct_max_mapping_size(struct device *dev);
++
++Returns the maximum size of a mapping for the device. The size parameter
++of the mapping functions like dma_map_single(), dma_map_page() and
++others should not be larger than the returned value.
+ 
+ Part Id - Streaming DMA mappings
+ --------------------------------
+diff --git a/Documentation/arm/kernel_mode_neon.txt b/Documentation/arm/kernel_mode_neon.txt
+index 525452726d31..b9e060c5b61e 100644
+--- a/Documentation/arm/kernel_mode_neon.txt
++++ b/Documentation/arm/kernel_mode_neon.txt
+@@ -6,7 +6,7 @@ TL;DR summary
+ * Use only NEON instructions, or VFP instructions that don't rely on support
+   code
+ * Isolate your NEON code in a separate compilation unit, and compile it with
+-  '-mfpu=neon -mfloat-abi=softfp'
++  '-march=armv7-a -mfpu=neon -mfloat-abi=softfp'
+ * Put kernel_neon_begin() and kernel_neon_end() calls around the calls into your
+   NEON code
+ * Don't sleep in your NEON code, and be aware that it will be executed with
+@@ -87,7 +87,7 @@ instructions appearing in unexpected places if no special care is taken.
+ Therefore, the recommended and only supported way of using NEON/VFP in the
+ kernel is by adhering to the following rules:
+ * isolate the NEON code in a separate compilation unit and compile it with
+-  '-mfpu=neon -mfloat-abi=softfp';
++  '-march=armv7-a -mfpu=neon -mfloat-abi=softfp';
+ * issue the calls to kernel_neon_begin(), kernel_neon_end() as well as the calls
+   into the unit containing the NEON code from a compilation unit which is *not*
+   built with the GCC flag '-mfpu=neon' set.
+diff --git a/Documentation/arm64/silicon-errata.txt b/Documentation/arm64/silicon-errata.txt
+index 1f09d043d086..ddb8ce5333ba 100644
+--- a/Documentation/arm64/silicon-errata.txt
++++ b/Documentation/arm64/silicon-errata.txt
+@@ -44,6 +44,8 @@ stable kernels.
+ 
+ | Implementor    | Component       | Erratum ID      | Kconfig                     |
+ +----------------+-----------------+-----------------+-----------------------------+
++| Allwinner      | A64/R18         | UNKNOWN1        | SUN50I_ERRATUM_UNKNOWN1     |
++|                |                 |                 |                             |
+ | ARM            | Cortex-A53      | #826319         | ARM64_ERRATUM_826319        |
+ | ARM            | Cortex-A53      | #827319         | ARM64_ERRATUM_827319        |
+ | ARM            | Cortex-A53      | #824069         | ARM64_ERRATUM_824069        |
+diff --git a/Documentation/devicetree/bindings/iio/adc/samsung,exynos-adc.txt b/Documentation/devicetree/bindings/iio/adc/samsung,exynos-adc.txt
+index a10c1f89037d..e1fe02f3e3e9 100644
+--- a/Documentation/devicetree/bindings/iio/adc/samsung,exynos-adc.txt
++++ b/Documentation/devicetree/bindings/iio/adc/samsung,exynos-adc.txt
+@@ -11,11 +11,13 @@ New driver handles the following
+ 
+ Required properties:
+ - compatible:		Must be "samsung,exynos-adc-v1"
+-				for exynos4412/5250 controllers.
++				for Exynos5250 controllers.
+ 			Must be "samsung,exynos-adc-v2" for
+ 				future controllers.
+ 			Must be "samsung,exynos3250-adc" for
+ 				controllers compatible with ADC of Exynos3250.
++			Must be "samsung,exynos4212-adc" for
++				controllers compatible with ADC of Exynos4212 and Exynos4412.
+ 			Must be "samsung,exynos7-adc" for
+ 				the ADC in Exynos7 and compatibles
+ 			Must be "samsung,s3c2410-adc" for
+diff --git a/Documentation/process/stable-kernel-rules.rst b/Documentation/process/stable-kernel-rules.rst
+index 0de6f6145cc6..7ba8cd567f84 100644
+--- a/Documentation/process/stable-kernel-rules.rst
++++ b/Documentation/process/stable-kernel-rules.rst
+@@ -38,6 +38,9 @@ Procedure for submitting patches to the -stable tree
+  - If the patch covers files in net/ or drivers/net please follow netdev stable
+    submission guidelines as described in
+    :ref:`Documentation/networking/netdev-FAQ.rst <netdev-FAQ>`
++   after first checking the stable networking queue at
++   https://patchwork.ozlabs.org/bundle/davem/stable/?series=&submitter=&state=*&q=&archive=
++   to ensure the requested patch is not already queued up.
+  - Security patches should not be handled (solely) by the -stable review
+    process but should follow the procedures in
+    :ref:`Documentation/admin-guide/security-bugs.rst <securitybugs>`.
+diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
+index 356156f5c52d..ba8927c0d45c 100644
+--- a/Documentation/virtual/kvm/api.txt
++++ b/Documentation/virtual/kvm/api.txt
+@@ -13,7 +13,7 @@ of a virtual machine.  The ioctls belong to three classes
+ 
+  - VM ioctls: These query and set attributes that affect an entire virtual
+    machine, for example memory layout.  In addition a VM ioctl is used to
+-   create virtual cpus (vcpus).
++   create virtual cpus (vcpus) and devices.
+ 
+    Only run VM ioctls from the same process (address space) that was used
+    to create the VM.
+@@ -24,6 +24,11 @@ of a virtual machine.  The ioctls belong to three classes
+    Only run vcpu ioctls from the same thread that was used to create the
+    vcpu.
+ 
++ - device ioctls: These query and set attributes that control the operation
++   of a single device.
++
++   device ioctls must be issued from the same process (address space) that
++   was used to create the VM.
+ 
+ 2. File descriptors
+ -------------------
+@@ -32,10 +37,11 @@ The kvm API is centered around file descriptors.  An initial
+ open("/dev/kvm") obtains a handle to the kvm subsystem; this handle
+ can be used to issue system ioctls.  A KVM_CREATE_VM ioctl on this
+ handle will create a VM file descriptor which can be used to issue VM
+-ioctls.  A KVM_CREATE_VCPU ioctl on a VM fd will create a virtual cpu
+-and return a file descriptor pointing to it.  Finally, ioctls on a vcpu
+-fd can be used to control the vcpu, including the important task of
+-actually running guest code.
++ioctls.  A KVM_CREATE_VCPU or KVM_CREATE_DEVICE ioctl on a VM fd will
++create a virtual cpu or device and return a file descriptor pointing to
++the new resource.  Finally, ioctls on a vcpu or device fd can be used
++to control the vcpu or device.  For vcpus, this includes the important
++task of actually running guest code.
+ 
+ In general file descriptors can be migrated among processes by means
+ of fork() and the SCM_RIGHTS facility of unix domain socket.  These
+diff --git a/Makefile b/Makefile
+index d5713e7b1e50..f7666051de66 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 0
+-SUBLEVEL = 0
++SUBLEVEL = 8
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+@@ -15,19 +15,6 @@ NAME = Shy Crocodile
+ PHONY := _all
+ _all:
+ 
+-# Do not use make's built-in rules and variables
+-# (this increases performance and avoids hard-to-debug behaviour)
+-MAKEFLAGS += -rR
+-
+-# Avoid funny character set dependencies
+-unexport LC_ALL
+-LC_COLLATE=C
+-LC_NUMERIC=C
+-export LC_COLLATE LC_NUMERIC
+-
+-# Avoid interference with shell env settings
+-unexport GREP_OPTIONS
+-
+ # We are using a recursive build, so we need to do a little thinking
+ # to get the ordering right.
+ #
+@@ -44,6 +31,21 @@ unexport GREP_OPTIONS
+ # descending is started. They are now explicitly listed as the
+ # prepare rule.
+ 
++ifneq ($(sub-make-done),1)
++
++# Do not use make's built-in rules and variables
++# (this increases performance and avoids hard-to-debug behaviour)
++MAKEFLAGS += -rR
++
++# Avoid funny character set dependencies
++unexport LC_ALL
++LC_COLLATE=C
++LC_NUMERIC=C
++export LC_COLLATE LC_NUMERIC
++
++# Avoid interference with shell env settings
++unexport GREP_OPTIONS
++
+ # Beautify output
+ # ---------------------------------------------------------------------------
+ #
+@@ -112,7 +114,6 @@ export quiet Q KBUILD_VERBOSE
+ 
+ # KBUILD_SRC is not intended to be used by the regular user (for now),
+ # it is set on invocation of make with KBUILD_OUTPUT or O= specified.
+-ifeq ($(KBUILD_SRC),)
+ 
+ # OK, Make called in directory where kernel src resides
+ # Do we want to locate output files in a separate directory?
+@@ -142,6 +143,24 @@ $(if $(KBUILD_OUTPUT),, \
+ # 'sub-make' below.
+ MAKEFLAGS += --include-dir=$(CURDIR)
+ 
++need-sub-make := 1
++else
++
++# Do not print "Entering directory ..." at all for in-tree build.
++MAKEFLAGS += --no-print-directory
++
++endif # ifneq ($(KBUILD_OUTPUT),)
++
++ifneq ($(filter 3.%,$(MAKE_VERSION)),)
++# 'MAKEFLAGS += -rR' does not immediately become effective for GNU Make 3.x
++# We need to invoke sub-make to avoid implicit rules in the top Makefile.
++need-sub-make := 1
++# Cancel implicit rules for this Makefile.
++$(lastword $(MAKEFILE_LIST)): ;
++endif
++
++ifeq ($(need-sub-make),1)
++
+ PHONY += $(MAKECMDGOALS) sub-make
+ 
+ $(filter-out _all sub-make $(CURDIR)/Makefile, $(MAKECMDGOALS)) _all: sub-make
+@@ -149,16 +168,15 @@ $(filter-out _all sub-make $(CURDIR)/Makefile, $(MAKECMDGOALS)) _all: sub-make
+ 
+ # Invoke a second make in the output directory, passing relevant variables
+ sub-make:
+-	$(Q)$(MAKE) -C $(KBUILD_OUTPUT) KBUILD_SRC=$(CURDIR) \
++	$(Q)$(MAKE) sub-make-done=1 \
++	$(if $(KBUILD_OUTPUT),-C $(KBUILD_OUTPUT) KBUILD_SRC=$(CURDIR)) \
+ 	-f $(CURDIR)/Makefile $(filter-out _all sub-make,$(MAKECMDGOALS))
+ 
+-# Leave processing to above invocation of make
+-skip-makefile := 1
+-endif # ifneq ($(KBUILD_OUTPUT),)
+-endif # ifeq ($(KBUILD_SRC),)
++endif # need-sub-make
++endif # sub-make-done
+ 
+ # We process the rest of the Makefile if this is the final invocation of make
+-ifeq ($(skip-makefile),)
++ifeq ($(need-sub-make),)
+ 
+ # Do not print "Entering directory ...",
+ # but we want to display it when entering to the output directory
+@@ -492,7 +510,7 @@ endif
+ ifneq ($(shell $(CC) --version 2>&1 | head -n 1 | grep clang),)
+ ifneq ($(CROSS_COMPILE),)
+ CLANG_FLAGS	:= --target=$(notdir $(CROSS_COMPILE:%-=%))
+-GCC_TOOLCHAIN_DIR := $(dir $(shell which $(LD)))
++GCC_TOOLCHAIN_DIR := $(dir $(shell which $(CROSS_COMPILE)elfedit))
+ CLANG_FLAGS	+= --prefix=$(GCC_TOOLCHAIN_DIR)
+ GCC_TOOLCHAIN	:= $(realpath $(GCC_TOOLCHAIN_DIR)/..)
+ endif
+@@ -625,12 +643,15 @@ ifeq ($(may-sync-config),1)
+ -include include/config/auto.conf.cmd
+ 
+ # To avoid any implicit rule to kick in, define an empty command
+-$(KCONFIG_CONFIG) include/config/auto.conf.cmd: ;
++$(KCONFIG_CONFIG): ;
+ 
+ # The actual configuration files used during the build are stored in
+ # include/generated/ and include/config/. Update them if .config is newer than
+ # include/config/auto.conf (which mirrors .config).
+-include/config/%.conf: $(KCONFIG_CONFIG) include/config/auto.conf.cmd
++#
++# This exploits the 'multi-target pattern rule' trick.
++# The syncconfig should be executed only once to make all the targets.
++%/auto.conf %/auto.conf.cmd %/tristate.conf: $(KCONFIG_CONFIG)
+ 	$(Q)$(MAKE) -f $(srctree)/Makefile syncconfig
+ else
+ # External modules and some install targets need include/generated/autoconf.h
+@@ -944,9 +965,11 @@ mod_sign_cmd = true
+ endif
+ export mod_sign_cmd
+ 
++HOST_LIBELF_LIBS = $(shell pkg-config libelf --libs 2>/dev/null || echo -lelf)
++
+ ifdef CONFIG_STACK_VALIDATION
+   has_libelf := $(call try-run,\
+-		echo "int main() {}" | $(HOSTCC) -xc -o /dev/null -lelf -,1,0)
++		echo "int main() {}" | $(HOSTCC) -xc -o /dev/null $(HOST_LIBELF_LIBS) -,1,0)
+   ifeq ($(has_libelf),1)
+     objtool_target := tools/objtool FORCE
+   else
+@@ -1754,7 +1777,7 @@ $(cmd_files): ;	# Do not try to update included dependency files
+ 
+ endif   # ifeq ($(config-targets),1)
+ endif   # ifeq ($(mixed-targets),1)
+-endif	# skip-makefile
++endif   # need-sub-make
+ 
+ PHONY += FORCE
+ FORCE:
+diff --git a/arch/alpha/kernel/syscalls/syscall.tbl b/arch/alpha/kernel/syscalls/syscall.tbl
+index 7b56a53be5e3..e09558edae73 100644
+--- a/arch/alpha/kernel/syscalls/syscall.tbl
++++ b/arch/alpha/kernel/syscalls/syscall.tbl
+@@ -451,3 +451,4 @@
+ 520	common	preadv2				sys_preadv2
+ 521	common	pwritev2			sys_pwritev2
+ 522	common	statx				sys_statx
++523	common	io_pgetevents			sys_io_pgetevents
+diff --git a/arch/arm/boot/dts/am335x-evm.dts b/arch/arm/boot/dts/am335x-evm.dts
+index dce5be5df97b..edcff79879e7 100644
+--- a/arch/arm/boot/dts/am335x-evm.dts
++++ b/arch/arm/boot/dts/am335x-evm.dts
+@@ -57,6 +57,24 @@
+ 		enable-active-high;
+ 	};
+ 
++	/* TPS79501 */
++	v1_8d_reg: fixedregulator-v1_8d {
++		compatible = "regulator-fixed";
++		regulator-name = "v1_8d";
++		vin-supply = <&vbat>;
++		regulator-min-microvolt = <1800000>;
++		regulator-max-microvolt = <1800000>;
++	};
++
++	/* TPS79501 */
++	v3_3d_reg: fixedregulator-v3_3d {
++		compatible = "regulator-fixed";
++		regulator-name = "v3_3d";
++		vin-supply = <&vbat>;
++		regulator-min-microvolt = <3300000>;
++		regulator-max-microvolt = <3300000>;
++	};
++
+ 	matrix_keypad: matrix_keypad0 {
+ 		compatible = "gpio-matrix-keypad";
+ 		debounce-delay-ms = <5>;
+@@ -499,10 +517,10 @@
+ 		status = "okay";
+ 
+ 		/* Regulators */
+-		AVDD-supply = <&vaux2_reg>;
+-		IOVDD-supply = <&vaux2_reg>;
+-		DRVDD-supply = <&vaux2_reg>;
+-		DVDD-supply = <&vbat>;
++		AVDD-supply = <&v3_3d_reg>;
++		IOVDD-supply = <&v3_3d_reg>;
++		DRVDD-supply = <&v3_3d_reg>;
++		DVDD-supply = <&v1_8d_reg>;
+ 	};
+ };
+ 
+diff --git a/arch/arm/boot/dts/am335x-evmsk.dts b/arch/arm/boot/dts/am335x-evmsk.dts
+index b128998097ce..2c2d8b5b8cf5 100644
+--- a/arch/arm/boot/dts/am335x-evmsk.dts
++++ b/arch/arm/boot/dts/am335x-evmsk.dts
+@@ -73,6 +73,24 @@
+ 		enable-active-high;
+ 	};
+ 
++	/* TPS79518 */
++	v1_8d_reg: fixedregulator-v1_8d {
++		compatible = "regulator-fixed";
++		regulator-name = "v1_8d";
++		vin-supply = <&vbat>;
++		regulator-min-microvolt = <1800000>;
++		regulator-max-microvolt = <1800000>;
++	};
++
++	/* TPS78633 */
++	v3_3d_reg: fixedregulator-v3_3d {
++		compatible = "regulator-fixed";
++		regulator-name = "v3_3d";
++		vin-supply = <&vbat>;
++		regulator-min-microvolt = <3300000>;
++		regulator-max-microvolt = <3300000>;
++	};
++
+ 	leds {
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&user_leds_s0>;
+@@ -501,10 +519,10 @@
+ 		status = "okay";
+ 
+ 		/* Regulators */
+-		AVDD-supply = <&vaux2_reg>;
+-		IOVDD-supply = <&vaux2_reg>;
+-		DRVDD-supply = <&vaux2_reg>;
+-		DVDD-supply = <&vbat>;
++		AVDD-supply = <&v3_3d_reg>;
++		IOVDD-supply = <&v3_3d_reg>;
++		DRVDD-supply = <&v3_3d_reg>;
++		DVDD-supply = <&v1_8d_reg>;
+ 	};
+ };
+ 
+diff --git a/arch/arm/boot/dts/exynos3250.dtsi b/arch/arm/boot/dts/exynos3250.dtsi
+index 608d17454179..5892a9f7622f 100644
+--- a/arch/arm/boot/dts/exynos3250.dtsi
++++ b/arch/arm/boot/dts/exynos3250.dtsi
+@@ -168,6 +168,9 @@
+ 			interrupt-controller;
+ 			#interrupt-cells = <3>;
+ 			interrupt-parent = <&gic>;
++			clock-names = "clkout8";
++			clocks = <&cmu CLK_FIN_PLL>;
++			#clock-cells = <1>;
+ 		};
+ 
+ 		mipi_phy: video-phy {
+diff --git a/arch/arm/boot/dts/exynos4412-odroid-common.dtsi b/arch/arm/boot/dts/exynos4412-odroid-common.dtsi
+index 3a9eb1e91c45..8a64c4e8c474 100644
+--- a/arch/arm/boot/dts/exynos4412-odroid-common.dtsi
++++ b/arch/arm/boot/dts/exynos4412-odroid-common.dtsi
+@@ -49,7 +49,7 @@
+ 	};
+ 
+ 	emmc_pwrseq: pwrseq {
+-		pinctrl-0 = <&sd1_cd>;
++		pinctrl-0 = <&emmc_rstn>;
+ 		pinctrl-names = "default";
+ 		compatible = "mmc-pwrseq-emmc";
+ 		reset-gpios = <&gpk1 2 GPIO_ACTIVE_LOW>;
+@@ -165,12 +165,6 @@
+ 	cpu0-supply = <&buck2_reg>;
+ };
+ 
+-/* RSTN signal for eMMC */
+-&sd1_cd {
+-	samsung,pin-pud = <EXYNOS_PIN_PULL_NONE>;
+-	samsung,pin-drv = <EXYNOS4_PIN_DRV_LV1>;
+-};
+-
+ &pinctrl_1 {
+ 	gpio_power_key: power_key {
+ 		samsung,pins = "gpx1-3";
+@@ -188,6 +182,11 @@
+ 		samsung,pins = "gpx3-7";
+ 		samsung,pin-pud = <EXYNOS_PIN_PULL_DOWN>;
+ 	};
++
++	emmc_rstn: emmc-rstn {
++		samsung,pins = "gpk1-2";
++		samsung,pin-pud = <EXYNOS_PIN_PULL_NONE>;
++	};
+ };
+ 
+ &ehci {
+diff --git a/arch/arm/boot/dts/exynos5422-odroid-core.dtsi b/arch/arm/boot/dts/exynos5422-odroid-core.dtsi
+index bf09eab90f8a..6bf3661293ee 100644
+--- a/arch/arm/boot/dts/exynos5422-odroid-core.dtsi
++++ b/arch/arm/boot/dts/exynos5422-odroid-core.dtsi
+@@ -468,7 +468,7 @@
+ 			buck8_reg: BUCK8 {
+ 				regulator-name = "vdd_1.8v_ldo";
+ 				regulator-min-microvolt = <800000>;
+-				regulator-max-microvolt = <1500000>;
++				regulator-max-microvolt = <2000000>;
+ 				regulator-always-on;
+ 				regulator-boot-on;
+ 			};
+diff --git a/arch/arm/boot/dts/lpc32xx.dtsi b/arch/arm/boot/dts/lpc32xx.dtsi
+index b7303a4e4236..ed0d6fb20122 100644
+--- a/arch/arm/boot/dts/lpc32xx.dtsi
++++ b/arch/arm/boot/dts/lpc32xx.dtsi
+@@ -230,7 +230,7 @@
+ 				status = "disabled";
+ 			};
+ 
+-			i2s1: i2s@2009C000 {
++			i2s1: i2s@2009c000 {
+ 				compatible = "nxp,lpc3220-i2s";
+ 				reg = <0x2009C000 0x1000>;
+ 			};
+@@ -273,7 +273,7 @@
+ 				status = "disabled";
+ 			};
+ 
+-			i2c1: i2c@400A0000 {
++			i2c1: i2c@400a0000 {
+ 				compatible = "nxp,pnx-i2c";
+ 				reg = <0x400A0000 0x100>;
+ 				interrupt-parent = <&sic1>;
+@@ -284,7 +284,7 @@
+ 				clocks = <&clk LPC32XX_CLK_I2C1>;
+ 			};
+ 
+-			i2c2: i2c@400A8000 {
++			i2c2: i2c@400a8000 {
+ 				compatible = "nxp,pnx-i2c";
+ 				reg = <0x400A8000 0x100>;
+ 				interrupt-parent = <&sic1>;
+@@ -295,7 +295,7 @@
+ 				clocks = <&clk LPC32XX_CLK_I2C2>;
+ 			};
+ 
+-			mpwm: mpwm@400E8000 {
++			mpwm: mpwm@400e8000 {
+ 				compatible = "nxp,lpc3220-motor-pwm";
+ 				reg = <0x400E8000 0x78>;
+ 				status = "disabled";
+@@ -394,7 +394,7 @@
+ 				#gpio-cells = <3>; /* bank, pin, flags */
+ 			};
+ 
+-			timer4: timer@4002C000 {
++			timer4: timer@4002c000 {
+ 				compatible = "nxp,lpc3220-timer";
+ 				reg = <0x4002C000 0x1000>;
+ 				interrupts = <3 IRQ_TYPE_LEVEL_LOW>;
+@@ -412,7 +412,7 @@
+ 				status = "disabled";
+ 			};
+ 
+-			watchdog: watchdog@4003C000 {
++			watchdog: watchdog@4003c000 {
+ 				compatible = "nxp,pnx4008-wdt";
+ 				reg = <0x4003C000 0x1000>;
+ 				clocks = <&clk LPC32XX_CLK_WDOG>;
+@@ -451,7 +451,7 @@
+ 				status = "disabled";
+ 			};
+ 
+-			timer1: timer@4004C000 {
++			timer1: timer@4004c000 {
+ 				compatible = "nxp,lpc3220-timer";
+ 				reg = <0x4004C000 0x1000>;
+ 				interrupts = <17 IRQ_TYPE_LEVEL_LOW>;
+@@ -475,7 +475,7 @@
+ 				status = "disabled";
+ 			};
+ 
+-			pwm1: pwm@4005C000 {
++			pwm1: pwm@4005c000 {
+ 				compatible = "nxp,lpc3220-pwm";
+ 				reg = <0x4005C000 0x4>;
+ 				clocks = <&clk LPC32XX_CLK_PWM1>;
+@@ -484,7 +484,7 @@
+ 				status = "disabled";
+ 			};
+ 
+-			pwm2: pwm@4005C004 {
++			pwm2: pwm@4005c004 {
+ 				compatible = "nxp,lpc3220-pwm";
+ 				reg = <0x4005C004 0x4>;
+ 				clocks = <&clk LPC32XX_CLK_PWM2>;
+diff --git a/arch/arm/boot/dts/meson8b.dtsi b/arch/arm/boot/dts/meson8b.dtsi
+index 22d775460767..dc125769fe85 100644
+--- a/arch/arm/boot/dts/meson8b.dtsi
++++ b/arch/arm/boot/dts/meson8b.dtsi
+@@ -270,9 +270,7 @@
+ 				groups = "eth_tx_clk",
+ 					 "eth_tx_en",
+ 					 "eth_txd1_0",
+-					 "eth_txd1_1",
+ 					 "eth_txd0_0",
+-					 "eth_txd0_1",
+ 					 "eth_rx_clk",
+ 					 "eth_rx_dv",
+ 					 "eth_rxd1",
+@@ -281,7 +279,9 @@
+ 					 "eth_mdc",
+ 					 "eth_ref_clk",
+ 					 "eth_txd2",
+-					 "eth_txd3";
++					 "eth_txd3",
++					 "eth_rxd3",
++					 "eth_rxd2";
+ 				function = "ethernet";
+ 				bias-disable;
+ 			};
+diff --git a/arch/arm/boot/dts/rk3288-tinker.dtsi b/arch/arm/boot/dts/rk3288-tinker.dtsi
+index aa107ee41b8b..ef653c3209bc 100644
+--- a/arch/arm/boot/dts/rk3288-tinker.dtsi
++++ b/arch/arm/boot/dts/rk3288-tinker.dtsi
+@@ -254,6 +254,7 @@
+ 			};
+ 
+ 			vccio_sd: LDO_REG5 {
++				regulator-boot-on;
+ 				regulator-min-microvolt = <1800000>;
+ 				regulator-max-microvolt = <3300000>;
+ 				regulator-name = "vccio_sd";
+@@ -430,7 +431,7 @@
+ 	bus-width = <4>;
+ 	cap-mmc-highspeed;
+ 	cap-sd-highspeed;
+-	card-detect-delay = <200>;
++	broken-cd;
+ 	disable-wp;			/* wp not hooked up */
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&sdmmc_clk &sdmmc_cmd &sdmmc_cd &sdmmc_bus4>;
+diff --git a/arch/arm/boot/dts/rk3288.dtsi b/arch/arm/boot/dts/rk3288.dtsi
+index ca7d52daa8fb..09868dcee34b 100644
+--- a/arch/arm/boot/dts/rk3288.dtsi
++++ b/arch/arm/boot/dts/rk3288.dtsi
+@@ -70,7 +70,7 @@
+ 			compatible = "arm,cortex-a12";
+ 			reg = <0x501>;
+ 			resets = <&cru SRST_CORE1>;
+-			operating-points = <&cpu_opp_table>;
++			operating-points-v2 = <&cpu_opp_table>;
+ 			#cooling-cells = <2>; /* min followed by max */
+ 			clock-latency = <40000>;
+ 			clocks = <&cru ARMCLK>;
+@@ -80,7 +80,7 @@
+ 			compatible = "arm,cortex-a12";
+ 			reg = <0x502>;
+ 			resets = <&cru SRST_CORE2>;
+-			operating-points = <&cpu_opp_table>;
++			operating-points-v2 = <&cpu_opp_table>;
+ 			#cooling-cells = <2>; /* min followed by max */
+ 			clock-latency = <40000>;
+ 			clocks = <&cru ARMCLK>;
+@@ -90,7 +90,7 @@
+ 			compatible = "arm,cortex-a12";
+ 			reg = <0x503>;
+ 			resets = <&cru SRST_CORE3>;
+-			operating-points = <&cpu_opp_table>;
++			operating-points-v2 = <&cpu_opp_table>;
+ 			#cooling-cells = <2>; /* min followed by max */
+ 			clock-latency = <40000>;
+ 			clocks = <&cru ARMCLK>;
+diff --git a/arch/arm/boot/dts/sama5d2-pinfunc.h b/arch/arm/boot/dts/sama5d2-pinfunc.h
+index 1c01a6f843d8..28a2e45752fe 100644
+--- a/arch/arm/boot/dts/sama5d2-pinfunc.h
++++ b/arch/arm/boot/dts/sama5d2-pinfunc.h
+@@ -518,7 +518,7 @@
+ #define PIN_PC9__GPIO			PINMUX_PIN(PIN_PC9, 0, 0)
+ #define PIN_PC9__FIQ			PINMUX_PIN(PIN_PC9, 1, 3)
+ #define PIN_PC9__GTSUCOMP		PINMUX_PIN(PIN_PC9, 2, 1)
+-#define PIN_PC9__ISC_D0			PINMUX_PIN(PIN_PC9, 2, 1)
++#define PIN_PC9__ISC_D0			PINMUX_PIN(PIN_PC9, 3, 1)
+ #define PIN_PC9__TIOA4			PINMUX_PIN(PIN_PC9, 4, 2)
+ #define PIN_PC10			74
+ #define PIN_PC10__GPIO			PINMUX_PIN(PIN_PC10, 0, 0)
+diff --git a/arch/arm/crypto/crct10dif-ce-core.S b/arch/arm/crypto/crct10dif-ce-core.S
+index ce45ba0c0687..16019b5961e7 100644
+--- a/arch/arm/crypto/crct10dif-ce-core.S
++++ b/arch/arm/crypto/crct10dif-ce-core.S
+@@ -124,10 +124,10 @@ ENTRY(crc_t10dif_pmull)
+ 	vext.8		q10, qzr, q0, #4
+ 
+ 	// receive the initial 64B data, xor the initial crc value
+-	vld1.64		{q0-q1}, [arg2, :128]!
+-	vld1.64		{q2-q3}, [arg2, :128]!
+-	vld1.64		{q4-q5}, [arg2, :128]!
+-	vld1.64		{q6-q7}, [arg2, :128]!
++	vld1.64		{q0-q1}, [arg2]!
++	vld1.64		{q2-q3}, [arg2]!
++	vld1.64		{q4-q5}, [arg2]!
++	vld1.64		{q6-q7}, [arg2]!
+ CPU_LE(	vrev64.8	q0, q0			)
+ CPU_LE(	vrev64.8	q1, q1			)
+ CPU_LE(	vrev64.8	q2, q2			)
+@@ -167,7 +167,7 @@ CPU_LE(	vrev64.8	q7, q7			)
+ _fold_64_B_loop:
+ 
+ 	.macro		fold64, reg1, reg2
+-	vld1.64		{q11-q12}, [arg2, :128]!
++	vld1.64		{q11-q12}, [arg2]!
+ 
+ 	vmull.p64	q8, \reg1\()h, d21
+ 	vmull.p64	\reg1, \reg1\()l, d20
+@@ -238,7 +238,7 @@ _16B_reduction_loop:
+ 	vmull.p64	q7, d15, d21
+ 	veor.8		q7, q7, q8
+ 
+-	vld1.64		{q0}, [arg2, :128]!
++	vld1.64		{q0}, [arg2]!
+ CPU_LE(	vrev64.8	q0, q0		)
+ 	vswp		d0, d1
+ 	veor.8		q7, q7, q0
+@@ -335,7 +335,7 @@ _less_than_128:
+ 	vmov.i8		q0, #0
+ 	vmov		s3, arg1_low32		// get the initial crc value
+ 
+-	vld1.64		{q7}, [arg2, :128]!
++	vld1.64		{q7}, [arg2]!
+ CPU_LE(	vrev64.8	q7, q7		)
+ 	vswp		d14, d15
+ 	veor.8		q7, q7, q0
+diff --git a/arch/arm/crypto/crct10dif-ce-glue.c b/arch/arm/crypto/crct10dif-ce-glue.c
+index d428355cf38d..14c19c70a841 100644
+--- a/arch/arm/crypto/crct10dif-ce-glue.c
++++ b/arch/arm/crypto/crct10dif-ce-glue.c
+@@ -35,26 +35,15 @@ static int crct10dif_update(struct shash_desc *desc, const u8 *data,
+ 			    unsigned int length)
+ {
+ 	u16 *crc = shash_desc_ctx(desc);
+-	unsigned int l;
+ 
+-	if (!may_use_simd()) {
+-		*crc = crc_t10dif_generic(*crc, data, length);
++	if (length >= CRC_T10DIF_PMULL_CHUNK_SIZE && may_use_simd()) {
++		kernel_neon_begin();
++		*crc = crc_t10dif_pmull(*crc, data, length);
++		kernel_neon_end();
+ 	} else {
+-		if (unlikely((u32)data % CRC_T10DIF_PMULL_CHUNK_SIZE)) {
+-			l = min_t(u32, length, CRC_T10DIF_PMULL_CHUNK_SIZE -
+-				  ((u32)data % CRC_T10DIF_PMULL_CHUNK_SIZE));
+-
+-			*crc = crc_t10dif_generic(*crc, data, l);
+-
+-			length -= l;
+-			data += l;
+-		}
+-		if (length > 0) {
+-			kernel_neon_begin();
+-			*crc = crc_t10dif_pmull(*crc, data, length);
+-			kernel_neon_end();
+-		}
++		*crc = crc_t10dif_generic(*crc, data, length);
+ 	}
++
+ 	return 0;
+ }
+ 
+diff --git a/arch/arm/include/asm/barrier.h b/arch/arm/include/asm/barrier.h
+index 69772e742a0a..83ae97c049d9 100644
+--- a/arch/arm/include/asm/barrier.h
++++ b/arch/arm/include/asm/barrier.h
+@@ -11,6 +11,8 @@
+ #define sev()	__asm__ __volatile__ ("sev" : : : "memory")
+ #define wfe()	__asm__ __volatile__ ("wfe" : : : "memory")
+ #define wfi()	__asm__ __volatile__ ("wfi" : : : "memory")
++#else
++#define wfe()	do { } while (0)
+ #endif
+ 
+ #if __LINUX_ARM_ARCH__ >= 7
+diff --git a/arch/arm/include/asm/processor.h b/arch/arm/include/asm/processor.h
+index 120f4c9bbfde..57fe73ea0f72 100644
+--- a/arch/arm/include/asm/processor.h
++++ b/arch/arm/include/asm/processor.h
+@@ -89,7 +89,11 @@ extern void release_thread(struct task_struct *);
+ unsigned long get_wchan(struct task_struct *p);
+ 
+ #if __LINUX_ARM_ARCH__ == 6 || defined(CONFIG_ARM_ERRATA_754327)
+-#define cpu_relax()			smp_mb()
++#define cpu_relax()						\
++	do {							\
++		smp_mb();					\
++		__asm__ __volatile__("nop; nop; nop; nop; nop; nop; nop; nop; nop; nop;");	\
++	} while (0)
+ #else
+ #define cpu_relax()			barrier()
+ #endif
+diff --git a/arch/arm/include/asm/v7m.h b/arch/arm/include/asm/v7m.h
+index 187ccf6496ad..2cb00d15831b 100644
+--- a/arch/arm/include/asm/v7m.h
++++ b/arch/arm/include/asm/v7m.h
+@@ -49,7 +49,7 @@
+  * (0 -> msp; 1 -> psp). Bits [1:0] are fixed to 0b01.
+  */
+ #define EXC_RET_STACK_MASK			0x00000004
+-#define EXC_RET_THREADMODE_PROCESSSTACK		0xfffffffd
++#define EXC_RET_THREADMODE_PROCESSSTACK		(3 << 2)
+ 
+ /* Cache related definitions */
+ 
+diff --git a/arch/arm/kernel/entry-header.S b/arch/arm/kernel/entry-header.S
+index 773424843d6e..62db1c9746cb 100644
+--- a/arch/arm/kernel/entry-header.S
++++ b/arch/arm/kernel/entry-header.S
+@@ -127,7 +127,8 @@
+          */
+ 	.macro	v7m_exception_slow_exit ret_r0
+ 	cpsid	i
+-	ldr	lr, =EXC_RET_THREADMODE_PROCESSSTACK
++	ldr	lr, =exc_ret
++	ldr	lr, [lr]
+ 
+ 	@ read original r12, sp, lr, pc and xPSR
+ 	add	r12, sp, #S_IP
+diff --git a/arch/arm/kernel/entry-v7m.S b/arch/arm/kernel/entry-v7m.S
+index abcf47848525..19d2dcd6530d 100644
+--- a/arch/arm/kernel/entry-v7m.S
++++ b/arch/arm/kernel/entry-v7m.S
+@@ -146,3 +146,7 @@ ENTRY(vector_table)
+ 	.rept	CONFIG_CPU_V7M_NUM_IRQ
+ 	.long	__irq_entry		@ External Interrupts
+ 	.endr
++	.align	2
++	.globl	exc_ret
++exc_ret:
++	.space	4
+diff --git a/arch/arm/kernel/machine_kexec.c b/arch/arm/kernel/machine_kexec.c
+index dd2eb5f76b9f..76300f3813e8 100644
+--- a/arch/arm/kernel/machine_kexec.c
++++ b/arch/arm/kernel/machine_kexec.c
+@@ -91,8 +91,11 @@ void machine_crash_nonpanic_core(void *unused)
+ 
+ 	set_cpu_online(smp_processor_id(), false);
+ 	atomic_dec(&waiting_for_crash_ipi);
+-	while (1)
++
++	while (1) {
+ 		cpu_relax();
++		wfe();
++	}
+ }
+ 
+ void crash_smp_send_stop(void)
+diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
+index 1d6f5ea522f4..a3ce7c5365fa 100644
+--- a/arch/arm/kernel/smp.c
++++ b/arch/arm/kernel/smp.c
+@@ -604,8 +604,10 @@ static void ipi_cpu_stop(unsigned int cpu)
+ 	local_fiq_disable();
+ 	local_irq_disable();
+ 
+-	while (1)
++	while (1) {
+ 		cpu_relax();
++		wfe();
++	}
+ }
+ 
+ static DEFINE_PER_CPU(struct completion *, cpu_completion);
+diff --git a/arch/arm/kernel/unwind.c b/arch/arm/kernel/unwind.c
+index 0bee233fef9a..314cfb232a63 100644
+--- a/arch/arm/kernel/unwind.c
++++ b/arch/arm/kernel/unwind.c
+@@ -93,7 +93,7 @@ extern const struct unwind_idx __start_unwind_idx[];
+ static const struct unwind_idx *__origin_unwind_idx;
+ extern const struct unwind_idx __stop_unwind_idx[];
+ 
+-static DEFINE_SPINLOCK(unwind_lock);
++static DEFINE_RAW_SPINLOCK(unwind_lock);
+ static LIST_HEAD(unwind_tables);
+ 
+ /* Convert a prel31 symbol to an absolute address */
+@@ -201,7 +201,7 @@ static const struct unwind_idx *unwind_find_idx(unsigned long addr)
+ 		/* module unwind tables */
+ 		struct unwind_table *table;
+ 
+-		spin_lock_irqsave(&unwind_lock, flags);
++		raw_spin_lock_irqsave(&unwind_lock, flags);
+ 		list_for_each_entry(table, &unwind_tables, list) {
+ 			if (addr >= table->begin_addr &&
+ 			    addr < table->end_addr) {
+@@ -213,7 +213,7 @@ static const struct unwind_idx *unwind_find_idx(unsigned long addr)
+ 				break;
+ 			}
+ 		}
+-		spin_unlock_irqrestore(&unwind_lock, flags);
++		raw_spin_unlock_irqrestore(&unwind_lock, flags);
+ 	}
+ 
+ 	pr_debug("%s: idx = %p\n", __func__, idx);
+@@ -529,9 +529,9 @@ struct unwind_table *unwind_table_add(unsigned long start, unsigned long size,
+ 	tab->begin_addr = text_addr;
+ 	tab->end_addr = text_addr + text_size;
+ 
+-	spin_lock_irqsave(&unwind_lock, flags);
++	raw_spin_lock_irqsave(&unwind_lock, flags);
+ 	list_add_tail(&tab->list, &unwind_tables);
+-	spin_unlock_irqrestore(&unwind_lock, flags);
++	raw_spin_unlock_irqrestore(&unwind_lock, flags);
+ 
+ 	return tab;
+ }
+@@ -543,9 +543,9 @@ void unwind_table_del(struct unwind_table *tab)
+ 	if (!tab)
+ 		return;
+ 
+-	spin_lock_irqsave(&unwind_lock, flags);
++	raw_spin_lock_irqsave(&unwind_lock, flags);
+ 	list_del(&tab->list);
+-	spin_unlock_irqrestore(&unwind_lock, flags);
++	raw_spin_unlock_irqrestore(&unwind_lock, flags);
+ 
+ 	kfree(tab);
+ }
+diff --git a/arch/arm/lib/Makefile b/arch/arm/lib/Makefile
+index ad25fd1872c7..0bff0176db2c 100644
+--- a/arch/arm/lib/Makefile
++++ b/arch/arm/lib/Makefile
+@@ -39,7 +39,7 @@ $(obj)/csumpartialcopy.o:	$(obj)/csumpartialcopygeneric.S
+ $(obj)/csumpartialcopyuser.o:	$(obj)/csumpartialcopygeneric.S
+ 
+ ifeq ($(CONFIG_KERNEL_MODE_NEON),y)
+-  NEON_FLAGS			:= -mfloat-abi=softfp -mfpu=neon
++  NEON_FLAGS			:= -march=armv7-a -mfloat-abi=softfp -mfpu=neon
+   CFLAGS_xor-neon.o		+= $(NEON_FLAGS)
+   obj-$(CONFIG_XOR_BLOCKS)	+= xor-neon.o
+ endif
+diff --git a/arch/arm/lib/xor-neon.c b/arch/arm/lib/xor-neon.c
+index 2c40aeab3eaa..c691b901092f 100644
+--- a/arch/arm/lib/xor-neon.c
++++ b/arch/arm/lib/xor-neon.c
+@@ -14,7 +14,7 @@
+ MODULE_LICENSE("GPL");
+ 
+ #ifndef __ARM_NEON__
+-#error You should compile this file with '-mfloat-abi=softfp -mfpu=neon'
++#error You should compile this file with '-march=armv7-a -mfloat-abi=softfp -mfpu=neon'
+ #endif
+ 
+ /*
+diff --git a/arch/arm/mach-imx/cpuidle-imx6q.c b/arch/arm/mach-imx/cpuidle-imx6q.c
+index bfeb25aaf9a2..326e870d7123 100644
+--- a/arch/arm/mach-imx/cpuidle-imx6q.c
++++ b/arch/arm/mach-imx/cpuidle-imx6q.c
+@@ -16,30 +16,23 @@
+ #include "cpuidle.h"
+ #include "hardware.h"
+ 
+-static atomic_t master = ATOMIC_INIT(0);
+-static DEFINE_SPINLOCK(master_lock);
++static int num_idle_cpus = 0;
++static DEFINE_SPINLOCK(cpuidle_lock);
+ 
+ static int imx6q_enter_wait(struct cpuidle_device *dev,
+ 			    struct cpuidle_driver *drv, int index)
+ {
+-	if (atomic_inc_return(&master) == num_online_cpus()) {
+-		/*
+-		 * With this lock, we prevent other cpu to exit and enter
+-		 * this function again and become the master.
+-		 */
+-		if (!spin_trylock(&master_lock))
+-			goto idle;
++	spin_lock(&cpuidle_lock);
++	if (++num_idle_cpus == num_online_cpus())
+ 		imx6_set_lpm(WAIT_UNCLOCKED);
+-		cpu_do_idle();
+-		imx6_set_lpm(WAIT_CLOCKED);
+-		spin_unlock(&master_lock);
+-		goto done;
+-	}
++	spin_unlock(&cpuidle_lock);
+ 
+-idle:
+ 	cpu_do_idle();
+-done:
+-	atomic_dec(&master);
++
++	spin_lock(&cpuidle_lock);
++	if (num_idle_cpus-- == num_online_cpus())
++		imx6_set_lpm(WAIT_CLOCKED);
++	spin_unlock(&cpuidle_lock);
+ 
+ 	return index;
+ }
+diff --git a/arch/arm/mach-omap1/board-ams-delta.c b/arch/arm/mach-omap1/board-ams-delta.c
+index c4c0a8ea11e4..ee410ae7369e 100644
+--- a/arch/arm/mach-omap1/board-ams-delta.c
++++ b/arch/arm/mach-omap1/board-ams-delta.c
+@@ -182,6 +182,7 @@ static struct resource latch1_resources[] = {
+ 
+ static struct bgpio_pdata latch1_pdata = {
+ 	.label	= LATCH1_LABEL,
++	.base	= -1,
+ 	.ngpio	= LATCH1_NGPIO,
+ };
+ 
+@@ -219,6 +220,7 @@ static struct resource latch2_resources[] = {
+ 
+ static struct bgpio_pdata latch2_pdata = {
+ 	.label	= LATCH2_LABEL,
++	.base	= -1,
+ 	.ngpio	= LATCH2_NGPIO,
+ };
+ 
+diff --git a/arch/arm/mach-omap2/prm_common.c b/arch/arm/mach-omap2/prm_common.c
+index 058a37e6d11c..fd6e0671f957 100644
+--- a/arch/arm/mach-omap2/prm_common.c
++++ b/arch/arm/mach-omap2/prm_common.c
+@@ -523,8 +523,10 @@ void omap_prm_reset_system(void)
+ 
+ 	prm_ll_data->reset_system();
+ 
+-	while (1)
++	while (1) {
+ 		cpu_relax();
++		wfe();
++	}
+ }
+ 
+ /**
+diff --git a/arch/arm/mach-s3c24xx/mach-osiris-dvs.c b/arch/arm/mach-s3c24xx/mach-osiris-dvs.c
+index 058ce73137e8..5d819b6ea428 100644
+--- a/arch/arm/mach-s3c24xx/mach-osiris-dvs.c
++++ b/arch/arm/mach-s3c24xx/mach-osiris-dvs.c
+@@ -65,16 +65,16 @@ static int osiris_dvs_notify(struct notifier_block *nb,
+ 
+ 	switch (val) {
+ 	case CPUFREQ_PRECHANGE:
+-		if (old_dvs & !new_dvs ||
+-		    cur_dvs & !new_dvs) {
++		if ((old_dvs && !new_dvs) ||
++		    (cur_dvs && !new_dvs)) {
+ 			pr_debug("%s: exiting dvs\n", __func__);
+ 			cur_dvs = false;
+ 			gpio_set_value(OSIRIS_GPIO_DVS, 1);
+ 		}
+ 		break;
+ 	case CPUFREQ_POSTCHANGE:
+-		if (!old_dvs & new_dvs ||
+-		    !cur_dvs & new_dvs) {
++		if ((!old_dvs && new_dvs) ||
++		    (!cur_dvs && new_dvs)) {
+ 			pr_debug("entering dvs\n");
+ 			cur_dvs = true;
+ 			gpio_set_value(OSIRIS_GPIO_DVS, 0);
+diff --git a/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c b/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c
+index 8e50daa99151..dc526ef2e9b3 100644
+--- a/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c
++++ b/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c
+@@ -40,6 +40,7 @@
+ struct regulator_quirk {
+ 	struct list_head		list;
+ 	const struct of_device_id	*id;
++	struct device_node		*np;
+ 	struct of_phandle_args		irq_args;
+ 	struct i2c_msg			i2c_msg;
+ 	bool				shared;	/* IRQ line is shared */
+@@ -101,6 +102,9 @@ static int regulator_quirk_notify(struct notifier_block *nb,
+ 		if (!pos->shared)
+ 			continue;
+ 
++		if (pos->np->parent != client->dev.parent->of_node)
++			continue;
++
+ 		dev_info(&client->dev, "clearing %s@0x%02x interrupts\n",
+ 			 pos->id->compatible, pos->i2c_msg.addr);
+ 
+@@ -165,6 +169,7 @@ static int __init rcar_gen2_regulator_quirk(void)
+ 		memcpy(&quirk->i2c_msg, id->data, sizeof(quirk->i2c_msg));
+ 
+ 		quirk->id = id;
++		quirk->np = np;
+ 		quirk->i2c_msg.addr = addr;
+ 
+ 		ret = of_irq_parse_one(np, 0, argsa);
+diff --git a/arch/arm/mm/copypage-v4mc.c b/arch/arm/mm/copypage-v4mc.c
+index b03202cddddb..f74cdce6d4da 100644
+--- a/arch/arm/mm/copypage-v4mc.c
++++ b/arch/arm/mm/copypage-v4mc.c
+@@ -45,6 +45,7 @@ static void mc_copy_user_page(void *from, void *to)
+ 	int tmp;
+ 
+ 	asm volatile ("\
++	.syntax unified\n\
+ 	ldmia	%0!, {r2, r3, ip, lr}		@ 4\n\
+ 1:	mcr	p15, 0, %1, c7, c6, 1		@ 1   invalidate D line\n\
+ 	stmia	%1!, {r2, r3, ip, lr}		@ 4\n\
+@@ -56,7 +57,7 @@ static void mc_copy_user_page(void *from, void *to)
+ 	ldmia	%0!, {r2, r3, ip, lr}		@ 4\n\
+ 	subs	%2, %2, #1			@ 1\n\
+ 	stmia	%1!, {r2, r3, ip, lr}		@ 4\n\
+-	ldmneia	%0!, {r2, r3, ip, lr}		@ 4\n\
++	ldmiane	%0!, {r2, r3, ip, lr}		@ 4\n\
+ 	bne	1b				@ "
+ 	: "+&r" (from), "+&r" (to), "=&r" (tmp)
+ 	: "2" (PAGE_SIZE / 64)
+diff --git a/arch/arm/mm/copypage-v4wb.c b/arch/arm/mm/copypage-v4wb.c
+index cd3e165afeed..6d336740aae4 100644
+--- a/arch/arm/mm/copypage-v4wb.c
++++ b/arch/arm/mm/copypage-v4wb.c
+@@ -27,6 +27,7 @@ static void v4wb_copy_user_page(void *kto, const void *kfrom)
+ 	int tmp;
+ 
+ 	asm volatile ("\
++	.syntax unified\n\
+ 	ldmia	%1!, {r3, r4, ip, lr}		@ 4\n\
+ 1:	mcr	p15, 0, %0, c7, c6, 1		@ 1   invalidate D line\n\
+ 	stmia	%0!, {r3, r4, ip, lr}		@ 4\n\
+@@ -38,7 +39,7 @@ static void v4wb_copy_user_page(void *kto, const void *kfrom)
+ 	ldmia	%1!, {r3, r4, ip, lr}		@ 4\n\
+ 	subs	%2, %2, #1			@ 1\n\
+ 	stmia	%0!, {r3, r4, ip, lr}		@ 4\n\
+-	ldmneia	%1!, {r3, r4, ip, lr}		@ 4\n\
++	ldmiane	%1!, {r3, r4, ip, lr}		@ 4\n\
+ 	bne	1b				@ 1\n\
+ 	mcr	p15, 0, %1, c7, c10, 4		@ 1   drain WB"
+ 	: "+&r" (kto), "+&r" (kfrom), "=&r" (tmp)
+diff --git a/arch/arm/mm/copypage-v4wt.c b/arch/arm/mm/copypage-v4wt.c
+index 8614572e1296..3851bb396442 100644
+--- a/arch/arm/mm/copypage-v4wt.c
++++ b/arch/arm/mm/copypage-v4wt.c
+@@ -25,6 +25,7 @@ static void v4wt_copy_user_page(void *kto, const void *kfrom)
+ 	int tmp;
+ 
+ 	asm volatile ("\
++	.syntax unified\n\
+ 	ldmia	%1!, {r3, r4, ip, lr}		@ 4\n\
+ 1:	stmia	%0!, {r3, r4, ip, lr}		@ 4\n\
+ 	ldmia	%1!, {r3, r4, ip, lr}		@ 4+1\n\
+@@ -34,7 +35,7 @@ static void v4wt_copy_user_page(void *kto, const void *kfrom)
+ 	ldmia	%1!, {r3, r4, ip, lr}		@ 4\n\
+ 	subs	%2, %2, #1			@ 1\n\
+ 	stmia	%0!, {r3, r4, ip, lr}		@ 4\n\
+-	ldmneia	%1!, {r3, r4, ip, lr}		@ 4\n\
++	ldmiane	%1!, {r3, r4, ip, lr}		@ 4\n\
+ 	bne	1b				@ 1\n\
+ 	mcr	p15, 0, %2, c7, c7, 0		@ flush ID cache"
+ 	: "+&r" (kto), "+&r" (kfrom), "=&r" (tmp)
+diff --git a/arch/arm/mm/proc-v7m.S b/arch/arm/mm/proc-v7m.S
+index 47a5acc64433..92e84181933a 100644
+--- a/arch/arm/mm/proc-v7m.S
++++ b/arch/arm/mm/proc-v7m.S
+@@ -139,6 +139,9 @@ __v7m_setup_cont:
+ 	cpsie	i
+ 	svc	#0
+ 1:	cpsid	i
++	ldr	r0, =exc_ret
++	orr	lr, lr, #EXC_RET_THREADMODE_PROCESSSTACK
++	str	lr, [r0]
+ 	ldmia	sp, {r0-r3, r12}
+ 	str	r5, [r12, #11 * 4]	@ restore the original SVC vector entry
+ 	mov	lr, r6			@ restore LR
+diff --git a/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts b/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts
+index 610235028cc7..c14205cd6bf5 100644
+--- a/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts
++++ b/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts
+@@ -118,6 +118,7 @@
+ 		reset-gpios = <&gpio0 5 GPIO_ACTIVE_LOW>;
+ 		clocks = <&pmic>;
+ 		clock-names = "ext_clock";
++		post-power-on-delay-ms = <10>;
+ 		power-off-delay-us = <10>;
+ 	};
+ 
+@@ -300,7 +301,6 @@
+ 
+ 		dwmmc_0: dwmmc0@f723d000 {
+ 			cap-mmc-highspeed;
+-			mmc-hs200-1_8v;
+ 			non-removable;
+ 			bus-width = <0x8>;
+ 			vmmc-supply = <&ldo19>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328-rock64.dts b/arch/arm64/boot/dts/rockchip/rk3328-rock64.dts
+index 040b36ef0dd2..520ed8e474be 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328-rock64.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3328-rock64.dts
+@@ -46,8 +46,7 @@
+ 
+ 	vcc_host1_5v: vcc_otg_5v: vcc-host1-5v-regulator {
+ 		compatible = "regulator-fixed";
+-		enable-active-high;
+-		gpio = <&gpio0 RK_PA2 GPIO_ACTIVE_HIGH>;
++		gpio = <&gpio0 RK_PA2 GPIO_ACTIVE_LOW>;
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&usb20_host_drv>;
+ 		regulator-name = "vcc_host1_5v";
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328.dtsi b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+index ecd7f19c3542..97aa65455b4a 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+@@ -1431,11 +1431,11 @@
+ 
+ 		sdmmc0 {
+ 			sdmmc0_clk: sdmmc0-clk {
+-				rockchip,pins = <1 RK_PA6 1 &pcfg_pull_none_4ma>;
++				rockchip,pins = <1 RK_PA6 1 &pcfg_pull_none_8ma>;
+ 			};
+ 
+ 			sdmmc0_cmd: sdmmc0-cmd {
+-				rockchip,pins = <1 RK_PA4 1 &pcfg_pull_up_4ma>;
++				rockchip,pins = <1 RK_PA4 1 &pcfg_pull_up_8ma>;
+ 			};
+ 
+ 			sdmmc0_dectn: sdmmc0-dectn {
+@@ -1447,14 +1447,14 @@
+ 			};
+ 
+ 			sdmmc0_bus1: sdmmc0-bus1 {
+-				rockchip,pins = <1 RK_PA0 1 &pcfg_pull_up_4ma>;
++				rockchip,pins = <1 RK_PA0 1 &pcfg_pull_up_8ma>;
+ 			};
+ 
+ 			sdmmc0_bus4: sdmmc0-bus4 {
+-				rockchip,pins = <1 RK_PA0 1 &pcfg_pull_up_4ma>,
+-						<1 RK_PA1 1 &pcfg_pull_up_4ma>,
+-						<1 RK_PA2 1 &pcfg_pull_up_4ma>,
+-						<1 RK_PA3 1 &pcfg_pull_up_4ma>;
++				rockchip,pins = <1 RK_PA0 1 &pcfg_pull_up_8ma>,
++						<1 RK_PA1 1 &pcfg_pull_up_8ma>,
++						<1 RK_PA2 1 &pcfg_pull_up_8ma>,
++						<1 RK_PA3 1 &pcfg_pull_up_8ma>;
+ 			};
+ 
+ 			sdmmc0_gpio: sdmmc0-gpio {
+@@ -1628,50 +1628,50 @@
+ 			rgmiim1_pins: rgmiim1-pins {
+ 				rockchip,pins =
+ 					/* mac_txclk */
+-					<1 RK_PB4 2 &pcfg_pull_none_12ma>,
++					<1 RK_PB4 2 &pcfg_pull_none_8ma>,
+ 					/* mac_rxclk */
+-					<1 RK_PB5 2 &pcfg_pull_none_2ma>,
++					<1 RK_PB5 2 &pcfg_pull_none_4ma>,
+ 					/* mac_mdio */
+-					<1 RK_PC3 2 &pcfg_pull_none_2ma>,
++					<1 RK_PC3 2 &pcfg_pull_none_4ma>,
+ 					/* mac_txen */
+-					<1 RK_PD1 2 &pcfg_pull_none_12ma>,
++					<1 RK_PD1 2 &pcfg_pull_none_8ma>,
+ 					/* mac_clk */
+-					<1 RK_PC5 2 &pcfg_pull_none_2ma>,
++					<1 RK_PC5 2 &pcfg_pull_none_4ma>,
+ 					/* mac_rxdv */
+-					<1 RK_PC6 2 &pcfg_pull_none_2ma>,
++					<1 RK_PC6 2 &pcfg_pull_none_4ma>,
+ 					/* mac_mdc */
+-					<1 RK_PC7 2 &pcfg_pull_none_2ma>,
++					<1 RK_PC7 2 &pcfg_pull_none_4ma>,
+ 					/* mac_rxd1 */
+-					<1 RK_PB2 2 &pcfg_pull_none_2ma>,
++					<1 RK_PB2 2 &pcfg_pull_none_4ma>,
+ 					/* mac_rxd0 */
+-					<1 RK_PB3 2 &pcfg_pull_none_2ma>,
++					<1 RK_PB3 2 &pcfg_pull_none_4ma>,
+ 					/* mac_txd1 */
+-					<1 RK_PB0 2 &pcfg_pull_none_12ma>,
++					<1 RK_PB0 2 &pcfg_pull_none_8ma>,
+ 					/* mac_txd0 */
+-					<1 RK_PB1 2 &pcfg_pull_none_12ma>,
++					<1 RK_PB1 2 &pcfg_pull_none_8ma>,
+ 					/* mac_rxd3 */
+-					<1 RK_PB6 2 &pcfg_pull_none_2ma>,
++					<1 RK_PB6 2 &pcfg_pull_none_4ma>,
+ 					/* mac_rxd2 */
+-					<1 RK_PB7 2 &pcfg_pull_none_2ma>,
++					<1 RK_PB7 2 &pcfg_pull_none_4ma>,
+ 					/* mac_txd3 */
+-					<1 RK_PC0 2 &pcfg_pull_none_12ma>,
++					<1 RK_PC0 2 &pcfg_pull_none_8ma>,
+ 					/* mac_txd2 */
+-					<1 RK_PC1 2 &pcfg_pull_none_12ma>,
++					<1 RK_PC1 2 &pcfg_pull_none_8ma>,
+ 
+ 					/* mac_txclk */
+-					<0 RK_PB0 1 &pcfg_pull_none>,
++					<0 RK_PB0 1 &pcfg_pull_none_8ma>,
+ 					/* mac_txen */
+-					<0 RK_PB4 1 &pcfg_pull_none>,
++					<0 RK_PB4 1 &pcfg_pull_none_8ma>,
+ 					/* mac_clk */
+-					<0 RK_PD0 1 &pcfg_pull_none>,
++					<0 RK_PD0 1 &pcfg_pull_none_4ma>,
+ 					/* mac_txd1 */
+-					<0 RK_PC0 1 &pcfg_pull_none>,
++					<0 RK_PC0 1 &pcfg_pull_none_8ma>,
+ 					/* mac_txd0 */
+-					<0 RK_PC1 1 &pcfg_pull_none>,
++					<0 RK_PC1 1 &pcfg_pull_none_8ma>,
+ 					/* mac_txd3 */
+-					<0 RK_PC7 1 &pcfg_pull_none>,
++					<0 RK_PC7 1 &pcfg_pull_none_8ma>,
+ 					/* mac_txd2 */
+-					<0 RK_PC6 1 &pcfg_pull_none>;
++					<0 RK_PC6 1 &pcfg_pull_none_8ma>;
+ 			};
+ 
+ 			rmiim1_pins: rmiim1-pins {
+diff --git a/arch/arm64/boot/dts/xilinx/zynqmp-zcu100-revC.dts b/arch/arm64/boot/dts/xilinx/zynqmp-zcu100-revC.dts
+index 13a0a028df98..e5699d0d91e4 100644
+--- a/arch/arm64/boot/dts/xilinx/zynqmp-zcu100-revC.dts
++++ b/arch/arm64/boot/dts/xilinx/zynqmp-zcu100-revC.dts
+@@ -101,6 +101,7 @@
+ 	sdio_pwrseq: sdio-pwrseq {
+ 		compatible = "mmc-pwrseq-simple";
+ 		reset-gpios = <&gpio 7 GPIO_ACTIVE_LOW>; /* WIFI_EN */
++		post-power-on-delay-ms = <10>;
+ 	};
+ };
+ 
+diff --git a/arch/arm64/crypto/aes-ce-ccm-core.S b/arch/arm64/crypto/aes-ce-ccm-core.S
+index e3a375c4cb83..1b151442dac1 100644
+--- a/arch/arm64/crypto/aes-ce-ccm-core.S
++++ b/arch/arm64/crypto/aes-ce-ccm-core.S
+@@ -74,12 +74,13 @@ ENTRY(ce_aes_ccm_auth_data)
+ 	beq	10f
+ 	ext	v0.16b, v0.16b, v0.16b, #1	/* rotate out the mac bytes */
+ 	b	7b
+-8:	mov	w7, w8
++8:	cbz	w8, 91f
++	mov	w7, w8
+ 	add	w8, w8, #16
+ 9:	ext	v1.16b, v1.16b, v1.16b, #1
+ 	adds	w7, w7, #1
+ 	bne	9b
+-	eor	v0.16b, v0.16b, v1.16b
++91:	eor	v0.16b, v0.16b, v1.16b
+ 	st1	{v0.16b}, [x0]
+ 10:	str	w8, [x3]
+ 	ret
+diff --git a/arch/arm64/crypto/aes-ce-ccm-glue.c b/arch/arm64/crypto/aes-ce-ccm-glue.c
+index 68b11aa690e4..986191e8c058 100644
+--- a/arch/arm64/crypto/aes-ce-ccm-glue.c
++++ b/arch/arm64/crypto/aes-ce-ccm-glue.c
+@@ -125,7 +125,7 @@ static void ccm_update_mac(struct crypto_aes_ctx *key, u8 mac[], u8 const in[],
+ 			abytes -= added;
+ 		}
+ 
+-		while (abytes > AES_BLOCK_SIZE) {
++		while (abytes >= AES_BLOCK_SIZE) {
+ 			__aes_arm64_encrypt(key->key_enc, mac, mac,
+ 					    num_rounds(key));
+ 			crypto_xor(mac, in, AES_BLOCK_SIZE);
+@@ -139,8 +139,6 @@ static void ccm_update_mac(struct crypto_aes_ctx *key, u8 mac[], u8 const in[],
+ 					    num_rounds(key));
+ 			crypto_xor(mac, in, abytes);
+ 			*macp = abytes;
+-		} else {
+-			*macp = 0;
+ 		}
+ 	}
+ }
+diff --git a/arch/arm64/crypto/aes-neonbs-core.S b/arch/arm64/crypto/aes-neonbs-core.S
+index e613a87f8b53..8432c8d0dea6 100644
+--- a/arch/arm64/crypto/aes-neonbs-core.S
++++ b/arch/arm64/crypto/aes-neonbs-core.S
+@@ -971,18 +971,22 @@ CPU_LE(	rev		x8, x8		)
+ 
+ 8:	next_ctr	v0
+ 	st1		{v0.16b}, [x24]
+-	cbz		x23, 0f
++	cbz		x23, .Lctr_done
+ 
+ 	cond_yield_neon	98b
+ 	b		99b
+ 
+-0:	frame_pop
++.Lctr_done:
++	frame_pop
+ 	ret
+ 
+ 	/*
+ 	 * If we are handling the tail of the input (x6 != NULL), return the
+ 	 * final keystream block back to the caller.
+ 	 */
++0:	cbz		x25, 8b
++	st1		{v0.16b}, [x25]
++	b		8b
+ 1:	cbz		x25, 8b
+ 	st1		{v1.16b}, [x25]
+ 	b		8b
+diff --git a/arch/arm64/crypto/crct10dif-ce-glue.c b/arch/arm64/crypto/crct10dif-ce-glue.c
+index b461d62023f2..567c24f3d224 100644
+--- a/arch/arm64/crypto/crct10dif-ce-glue.c
++++ b/arch/arm64/crypto/crct10dif-ce-glue.c
+@@ -39,26 +39,13 @@ static int crct10dif_update(struct shash_desc *desc, const u8 *data,
+ 			    unsigned int length)
+ {
+ 	u16 *crc = shash_desc_ctx(desc);
+-	unsigned int l;
+ 
+-	if (unlikely((u64)data % CRC_T10DIF_PMULL_CHUNK_SIZE)) {
+-		l = min_t(u32, length, CRC_T10DIF_PMULL_CHUNK_SIZE -
+-			  ((u64)data % CRC_T10DIF_PMULL_CHUNK_SIZE));
+-
+-		*crc = crc_t10dif_generic(*crc, data, l);
+-
+-		length -= l;
+-		data += l;
+-	}
+-
+-	if (length > 0) {
+-		if (may_use_simd()) {
+-			kernel_neon_begin();
+-			*crc = crc_t10dif_pmull(*crc, data, length);
+-			kernel_neon_end();
+-		} else {
+-			*crc = crc_t10dif_generic(*crc, data, length);
+-		}
++	if (length >= CRC_T10DIF_PMULL_CHUNK_SIZE && may_use_simd()) {
++		kernel_neon_begin();
++		*crc = crc_t10dif_pmull(*crc, data, length);
++		kernel_neon_end();
++	} else {
++		*crc = crc_t10dif_generic(*crc, data, length);
+ 	}
+ 
+ 	return 0;
+diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
+index cccb83ad7fa8..e1d95f08f8e1 100644
+--- a/arch/arm64/include/asm/futex.h
++++ b/arch/arm64/include/asm/futex.h
+@@ -30,8 +30,8 @@ do {									\
+ "	prfm	pstl1strm, %2\n"					\
+ "1:	ldxr	%w1, %2\n"						\
+ 	insn "\n"							\
+-"2:	stlxr	%w3, %w0, %2\n"						\
+-"	cbnz	%w3, 1b\n"						\
++"2:	stlxr	%w0, %w3, %2\n"						\
++"	cbnz	%w0, 1b\n"						\
+ "	dmb	ish\n"							\
+ "3:\n"									\
+ "	.pushsection .fixup,\"ax\"\n"					\
+@@ -50,30 +50,30 @@ do {									\
+ static inline int
+ arch_futex_atomic_op_inuser(int op, int oparg, int *oval, u32 __user *_uaddr)
+ {
+-	int oldval = 0, ret, tmp;
++	int oldval, ret, tmp;
+ 	u32 __user *uaddr = __uaccess_mask_ptr(_uaddr);
+ 
+ 	pagefault_disable();
+ 
+ 	switch (op) {
+ 	case FUTEX_OP_SET:
+-		__futex_atomic_op("mov	%w0, %w4",
++		__futex_atomic_op("mov	%w3, %w4",
+ 				  ret, oldval, uaddr, tmp, oparg);
+ 		break;
+ 	case FUTEX_OP_ADD:
+-		__futex_atomic_op("add	%w0, %w1, %w4",
++		__futex_atomic_op("add	%w3, %w1, %w4",
+ 				  ret, oldval, uaddr, tmp, oparg);
+ 		break;
+ 	case FUTEX_OP_OR:
+-		__futex_atomic_op("orr	%w0, %w1, %w4",
++		__futex_atomic_op("orr	%w3, %w1, %w4",
+ 				  ret, oldval, uaddr, tmp, oparg);
+ 		break;
+ 	case FUTEX_OP_ANDN:
+-		__futex_atomic_op("and	%w0, %w1, %w4",
++		__futex_atomic_op("and	%w3, %w1, %w4",
+ 				  ret, oldval, uaddr, tmp, ~oparg);
+ 		break;
+ 	case FUTEX_OP_XOR:
+-		__futex_atomic_op("eor	%w0, %w1, %w4",
++		__futex_atomic_op("eor	%w3, %w1, %w4",
+ 				  ret, oldval, uaddr, tmp, oparg);
+ 		break;
+ 	default:
+diff --git a/arch/arm64/include/asm/hardirq.h b/arch/arm64/include/asm/hardirq.h
+index 1473fc2f7ab7..89691c86640a 100644
+--- a/arch/arm64/include/asm/hardirq.h
++++ b/arch/arm64/include/asm/hardirq.h
+@@ -17,8 +17,12 @@
+ #define __ASM_HARDIRQ_H
+ 
+ #include <linux/cache.h>
++#include <linux/percpu.h>
+ #include <linux/threads.h>
++#include <asm/barrier.h>
+ #include <asm/irq.h>
++#include <asm/kvm_arm.h>
++#include <asm/sysreg.h>
+ 
+ #define NR_IPI	7
+ 
+@@ -37,6 +41,33 @@ u64 smp_irq_stat_cpu(unsigned int cpu);
+ 
+ #define __ARCH_IRQ_EXIT_IRQS_DISABLED	1
+ 
++struct nmi_ctx {
++	u64 hcr;
++};
++
++DECLARE_PER_CPU(struct nmi_ctx, nmi_contexts);
++
++#define arch_nmi_enter()							\
++	do {									\
++		if (is_kernel_in_hyp_mode()) {					\
++			struct nmi_ctx *nmi_ctx = this_cpu_ptr(&nmi_contexts);	\
++			nmi_ctx->hcr = read_sysreg(hcr_el2);			\
++			if (!(nmi_ctx->hcr & HCR_TGE)) {			\
++				write_sysreg(nmi_ctx->hcr | HCR_TGE, hcr_el2);	\
++				isb();						\
++			}							\
++		}								\
++	} while (0)
++
++#define arch_nmi_exit()								\
++	do {									\
++		if (is_kernel_in_hyp_mode()) {					\
++			struct nmi_ctx *nmi_ctx = this_cpu_ptr(&nmi_contexts);	\
++			if (!(nmi_ctx->hcr & HCR_TGE))				\
++				write_sysreg(nmi_ctx->hcr, hcr_el2);		\
++		}								\
++	} while (0)
++
+ static inline void ack_bad_irq(unsigned int irq)
+ {
+ 	extern unsigned long irq_err_count;
+diff --git a/arch/arm64/include/asm/module.h b/arch/arm64/include/asm/module.h
+index 905e1bb0e7bd..cd9f4e9d04d3 100644
+--- a/arch/arm64/include/asm/module.h
++++ b/arch/arm64/include/asm/module.h
+@@ -73,4 +73,9 @@ static inline bool is_forbidden_offset_for_adrp(void *place)
+ struct plt_entry get_plt_entry(u64 dst, void *pc);
+ bool plt_entries_equal(const struct plt_entry *a, const struct plt_entry *b);
+ 
++static inline bool plt_entry_is_initialized(const struct plt_entry *e)
++{
++	return e->adrp || e->add || e->br;
++}
++
+ #endif /* __ASM_MODULE_H */
+diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c
+index 8e4431a8821f..07b298120182 100644
+--- a/arch/arm64/kernel/ftrace.c
++++ b/arch/arm64/kernel/ftrace.c
+@@ -107,8 +107,7 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
+ 		trampoline = get_plt_entry(addr, mod->arch.ftrace_trampoline);
+ 		if (!plt_entries_equal(mod->arch.ftrace_trampoline,
+ 				       &trampoline)) {
+-			if (!plt_entries_equal(mod->arch.ftrace_trampoline,
+-					       &(struct plt_entry){})) {
++			if (plt_entry_is_initialized(mod->arch.ftrace_trampoline)) {
+ 				pr_err("ftrace: far branches to multiple entry points unsupported inside a single module\n");
+ 				return -EINVAL;
+ 			}
+diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c
+index 780a12f59a8f..92fa81798fb9 100644
+--- a/arch/arm64/kernel/irq.c
++++ b/arch/arm64/kernel/irq.c
+@@ -33,6 +33,9 @@
+ 
+ unsigned long irq_err_count;
+ 
++/* Only access this in an NMI enter/exit */
++DEFINE_PER_CPU(struct nmi_ctx, nmi_contexts);
++
+ DEFINE_PER_CPU(unsigned long *, irq_stack_ptr);
+ 
+ int arch_show_interrupts(struct seq_file *p, int prec)
+diff --git a/arch/arm64/kernel/kgdb.c b/arch/arm64/kernel/kgdb.c
+index ce46c4cdf368..691854b77c7f 100644
+--- a/arch/arm64/kernel/kgdb.c
++++ b/arch/arm64/kernel/kgdb.c
+@@ -244,27 +244,33 @@ int kgdb_arch_handle_exception(int exception_vector, int signo,
+ 
+ static int kgdb_brk_fn(struct pt_regs *regs, unsigned int esr)
+ {
++	if (user_mode(regs))
++		return DBG_HOOK_ERROR;
++
+ 	kgdb_handle_exception(1, SIGTRAP, 0, regs);
+-	return 0;
++	return DBG_HOOK_HANDLED;
+ }
+ NOKPROBE_SYMBOL(kgdb_brk_fn)
+ 
+ static int kgdb_compiled_brk_fn(struct pt_regs *regs, unsigned int esr)
+ {
++	if (user_mode(regs))
++		return DBG_HOOK_ERROR;
++
+ 	compiled_break = 1;
+ 	kgdb_handle_exception(1, SIGTRAP, 0, regs);
+ 
+-	return 0;
++	return DBG_HOOK_HANDLED;
+ }
+ NOKPROBE_SYMBOL(kgdb_compiled_brk_fn);
+ 
+ static int kgdb_step_brk_fn(struct pt_regs *regs, unsigned int esr)
+ {
+-	if (!kgdb_single_step)
++	if (user_mode(regs) || !kgdb_single_step)
+ 		return DBG_HOOK_ERROR;
+ 
+ 	kgdb_handle_exception(1, SIGTRAP, 0, regs);
+-	return 0;
++	return DBG_HOOK_HANDLED;
+ }
+ NOKPROBE_SYMBOL(kgdb_step_brk_fn);
+ 
+diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
+index f17afb99890c..7fb6f3aa5ceb 100644
+--- a/arch/arm64/kernel/probes/kprobes.c
++++ b/arch/arm64/kernel/probes/kprobes.c
+@@ -450,6 +450,9 @@ kprobe_single_step_handler(struct pt_regs *regs, unsigned int esr)
+ 	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+ 	int retval;
+ 
++	if (user_mode(regs))
++		return DBG_HOOK_ERROR;
++
+ 	/* return error if this is not our step */
+ 	retval = kprobe_ss_hit(kcb, instruction_pointer(regs));
+ 
+@@ -466,6 +469,9 @@ kprobe_single_step_handler(struct pt_regs *regs, unsigned int esr)
+ int __kprobes
+ kprobe_breakpoint_handler(struct pt_regs *regs, unsigned int esr)
+ {
++	if (user_mode(regs))
++		return DBG_HOOK_ERROR;
++
+ 	kprobe_handler(regs);
+ 	return DBG_HOOK_HANDLED;
+ }
+diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
+index 4e2fb877f8d5..92bfeb3e8d7c 100644
+--- a/arch/arm64/kernel/traps.c
++++ b/arch/arm64/kernel/traps.c
+@@ -102,10 +102,16 @@ static void dump_instr(const char *lvl, struct pt_regs *regs)
+ void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk)
+ {
+ 	struct stackframe frame;
+-	int skip;
++	int skip = 0;
+ 
+ 	pr_debug("%s(regs = %p tsk = %p)\n", __func__, regs, tsk);
+ 
++	if (regs) {
++		if (user_mode(regs))
++			return;
++		skip = 1;
++	}
++
+ 	if (!tsk)
+ 		tsk = current;
+ 
+@@ -126,7 +132,6 @@ void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk)
+ 	frame.graph = 0;
+ #endif
+ 
+-	skip = !!regs;
+ 	printk("Call trace:\n");
+ 	do {
+ 		/* skip until specified stack frame */
+@@ -176,15 +181,13 @@ static int __die(const char *str, int err, struct pt_regs *regs)
+ 		return ret;
+ 
+ 	print_modules();
+-	__show_regs(regs);
+ 	pr_emerg("Process %.*s (pid: %d, stack limit = 0x%p)\n",
+ 		 TASK_COMM_LEN, tsk->comm, task_pid_nr(tsk),
+ 		 end_of_stack(tsk));
++	show_regs(regs);
+ 
+-	if (!user_mode(regs)) {
+-		dump_backtrace(regs, tsk);
++	if (!user_mode(regs))
+ 		dump_instr(KERN_EMERG, regs);
+-	}
+ 
+ 	return ret;
+ }
+diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
+index c936aa40c3f4..b6dac3a68508 100644
+--- a/arch/arm64/kvm/sys_regs.c
++++ b/arch/arm64/kvm/sys_regs.c
+@@ -1476,7 +1476,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
+ 
+ 	{ SYS_DESC(SYS_DACR32_EL2), NULL, reset_unknown, DACR32_EL2 },
+ 	{ SYS_DESC(SYS_IFSR32_EL2), NULL, reset_unknown, IFSR32_EL2 },
+-	{ SYS_DESC(SYS_FPEXC32_EL2), NULL, reset_val, FPEXC32_EL2, 0x70 },
++	{ SYS_DESC(SYS_FPEXC32_EL2), NULL, reset_val, FPEXC32_EL2, 0x700 },
+ };
+ 
+ static bool trap_dbgidr(struct kvm_vcpu *vcpu,
+diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
+index efb7b2cbead5..ef46925096f0 100644
+--- a/arch/arm64/mm/fault.c
++++ b/arch/arm64/mm/fault.c
+@@ -824,11 +824,12 @@ void __init hook_debug_fault_code(int nr,
+ 	debug_fault_info[nr].name	= name;
+ }
+ 
+-asmlinkage int __exception do_debug_exception(unsigned long addr,
++asmlinkage int __exception do_debug_exception(unsigned long addr_if_watchpoint,
+ 					      unsigned int esr,
+ 					      struct pt_regs *regs)
+ {
+ 	const struct fault_info *inf = esr_to_debug_fault_info(esr);
++	unsigned long pc = instruction_pointer(regs);
+ 	int rv;
+ 
+ 	/*
+@@ -838,14 +839,14 @@ asmlinkage int __exception do_debug_exception(unsigned long addr,
+ 	if (interrupts_enabled(regs))
+ 		trace_hardirqs_off();
+ 
+-	if (user_mode(regs) && !is_ttbr0_addr(instruction_pointer(regs)))
++	if (user_mode(regs) && !is_ttbr0_addr(pc))
+ 		arm64_apply_bp_hardening();
+ 
+-	if (!inf->fn(addr, esr, regs)) {
++	if (!inf->fn(addr_if_watchpoint, esr, regs)) {
+ 		rv = 1;
+ 	} else {
+ 		arm64_notify_die(inf->name, regs,
+-				 inf->sig, inf->code, (void __user *)addr, esr);
++				 inf->sig, inf->code, (void __user *)pc, esr);
+ 		rv = 0;
+ 	}
+ 
+diff --git a/arch/csky/include/asm/syscall.h b/arch/csky/include/asm/syscall.h
+index d637445737b7..9a9cd81e66c1 100644
+--- a/arch/csky/include/asm/syscall.h
++++ b/arch/csky/include/asm/syscall.h
+@@ -49,10 +49,11 @@ syscall_get_arguments(struct task_struct *task, struct pt_regs *regs,
+ 	if (i == 0) {
+ 		args[0] = regs->orig_a0;
+ 		args++;
+-		i++;
+ 		n--;
++	} else {
++		i--;
+ 	}
+-	memcpy(args, &regs->a1 + i * sizeof(regs->a1), n * sizeof(args[0]));
++	memcpy(args, &regs->a1 + i, n * sizeof(args[0]));
+ }
+ 
+ static inline void
+@@ -63,10 +64,11 @@ syscall_set_arguments(struct task_struct *task, struct pt_regs *regs,
+ 	if (i == 0) {
+ 		regs->orig_a0 = args[0];
+ 		args++;
+-		i++;
+ 		n--;
++	} else {
++		i--;
+ 	}
+-	memcpy(&regs->a1 + i * sizeof(regs->a1), args, n * sizeof(regs->a0));
++	memcpy(&regs->a1 + i, args, n * sizeof(regs->a1));
+ }
+ 
+ static inline int
+diff --git a/arch/h8300/Makefile b/arch/h8300/Makefile
+index f801f3708a89..ba0f26cfad61 100644
+--- a/arch/h8300/Makefile
++++ b/arch/h8300/Makefile
+@@ -27,7 +27,7 @@ KBUILD_LDFLAGS += $(ldflags-y)
+ CHECKFLAGS += -msize-long
+ 
+ ifeq ($(CROSS_COMPILE),)
+-CROSS_COMPILE := h8300-unknown-linux-
++CROSS_COMPILE := $(call cc-cross-prefix, h8300-unknown-linux- h8300-linux-)
+ endif
+ 
+ core-y	+= arch/$(ARCH)/kernel/ arch/$(ARCH)/mm/
+diff --git a/arch/m68k/Makefile b/arch/m68k/Makefile
+index f00ca53f8c14..482513b9af2c 100644
+--- a/arch/m68k/Makefile
++++ b/arch/m68k/Makefile
+@@ -58,7 +58,10 @@ cpuflags-$(CONFIG_M5206e)	:= $(call cc-option,-mcpu=5206e,-m5200)
+ cpuflags-$(CONFIG_M5206)	:= $(call cc-option,-mcpu=5206,-m5200)
+ 
+ KBUILD_AFLAGS += $(cpuflags-y)
+-KBUILD_CFLAGS += $(cpuflags-y) -pipe
++KBUILD_CFLAGS += $(cpuflags-y)
++
++KBUILD_CFLAGS += -pipe -ffreestanding
++
+ ifdef CONFIG_MMU
+ # without -fno-strength-reduce the 53c7xx.c driver fails ;-(
+ KBUILD_CFLAGS += -fno-strength-reduce -ffixed-a2
+diff --git a/arch/mips/include/asm/jump_label.h b/arch/mips/include/asm/jump_label.h
+index e77672539e8e..e4456e450f94 100644
+--- a/arch/mips/include/asm/jump_label.h
++++ b/arch/mips/include/asm/jump_label.h
+@@ -21,15 +21,15 @@
+ #endif
+ 
+ #ifdef CONFIG_CPU_MICROMIPS
+-#define NOP_INSN "nop32"
++#define B_INSN "b32"
+ #else
+-#define NOP_INSN "nop"
++#define B_INSN "b"
+ #endif
+ 
+ static __always_inline bool arch_static_branch(struct static_key *key, bool branch)
+ {
+-	asm_volatile_goto("1:\t" NOP_INSN "\n\t"
+-		"nop\n\t"
++	asm_volatile_goto("1:\t" B_INSN " 2f\n\t"
++		"2:\tnop\n\t"
+ 		".pushsection __jump_table,  \"aw\"\n\t"
+ 		WORD_INSN " 1b, %l[l_yes], %0\n\t"
+ 		".popsection\n\t"
+diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
+index d2abd98471e8..41204a49cf95 100644
+--- a/arch/mips/include/asm/kvm_host.h
++++ b/arch/mips/include/asm/kvm_host.h
+@@ -1134,7 +1134,7 @@ static inline void kvm_arch_hardware_unsetup(void) {}
+ static inline void kvm_arch_sync_events(struct kvm *kvm) {}
+ static inline void kvm_arch_free_memslot(struct kvm *kvm,
+ 		struct kvm_memory_slot *free, struct kvm_memory_slot *dont) {}
+-static inline void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots) {}
++static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
+ static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
+ static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
+ static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
+diff --git a/arch/mips/kernel/irq.c b/arch/mips/kernel/irq.c
+index ba150c755fcc..85b6c60f285d 100644
+--- a/arch/mips/kernel/irq.c
++++ b/arch/mips/kernel/irq.c
+@@ -52,6 +52,7 @@ asmlinkage void spurious_interrupt(void)
+ void __init init_IRQ(void)
+ {
+ 	int i;
++	unsigned int order = get_order(IRQ_STACK_SIZE);
+ 
+ 	for (i = 0; i < NR_IRQS; i++)
+ 		irq_set_noprobe(i);
+@@ -62,8 +63,7 @@ void __init init_IRQ(void)
+ 	arch_init_irq();
+ 
+ 	for_each_possible_cpu(i) {
+-		int irq_pages = IRQ_STACK_SIZE / PAGE_SIZE;
+-		void *s = (void *)__get_free_pages(GFP_KERNEL, irq_pages);
++		void *s = (void *)__get_free_pages(GFP_KERNEL, order);
+ 
+ 		irq_stack[i] = s;
+ 		pr_debug("CPU%d IRQ stack at 0x%p - 0x%p\n", i,
+diff --git a/arch/mips/kernel/vmlinux.lds.S b/arch/mips/kernel/vmlinux.lds.S
+index cb7e9ed7a453..33ee0d18fb0a 100644
+--- a/arch/mips/kernel/vmlinux.lds.S
++++ b/arch/mips/kernel/vmlinux.lds.S
+@@ -140,6 +140,13 @@ SECTIONS
+ 	PERCPU_SECTION(1 << CONFIG_MIPS_L1_CACHE_SHIFT)
+ #endif
+ 
++#ifdef CONFIG_MIPS_ELF_APPENDED_DTB
++	.appended_dtb : AT(ADDR(.appended_dtb) - LOAD_OFFSET) {
++		*(.appended_dtb)
++		KEEP(*(.appended_dtb))
++	}
++#endif
++
+ #ifdef CONFIG_RELOCATABLE
+ 	. = ALIGN(4);
+ 
+@@ -164,11 +171,6 @@ SECTIONS
+ 	__appended_dtb = .;
+ 	/* leave space for appended DTB */
+ 	. += 0x100000;
+-#elif defined(CONFIG_MIPS_ELF_APPENDED_DTB)
+-	.appended_dtb : AT(ADDR(.appended_dtb) - LOAD_OFFSET) {
+-		*(.appended_dtb)
+-		KEEP(*(.appended_dtb))
+-	}
+ #endif
+ 	/*
+ 	 * Align to 64K in attempt to eliminate holes before the
+diff --git a/arch/mips/loongson64/lemote-2f/irq.c b/arch/mips/loongson64/lemote-2f/irq.c
+index 9e33e45aa17c..b213cecb8e3a 100644
+--- a/arch/mips/loongson64/lemote-2f/irq.c
++++ b/arch/mips/loongson64/lemote-2f/irq.c
+@@ -103,7 +103,7 @@ static struct irqaction ip6_irqaction = {
+ static struct irqaction cascade_irqaction = {
+ 	.handler = no_action,
+ 	.name = "cascade",
+-	.flags = IRQF_NO_THREAD,
++	.flags = IRQF_NO_THREAD | IRQF_NO_SUSPEND,
+ };
+ 
+ void __init mach_init_irq(void)
+diff --git a/arch/parisc/include/asm/ptrace.h b/arch/parisc/include/asm/ptrace.h
+index 2a27b275ab09..9ff033d261ab 100644
+--- a/arch/parisc/include/asm/ptrace.h
++++ b/arch/parisc/include/asm/ptrace.h
+@@ -22,13 +22,14 @@ unsigned long profile_pc(struct pt_regs *);
+ 
+ static inline unsigned long regs_return_value(struct pt_regs *regs)
+ {
+-	return regs->gr[20];
++	return regs->gr[28];
+ }
+ 
+ static inline void instruction_pointer_set(struct pt_regs *regs,
+ 						unsigned long val)
+ {
+-        regs->iaoq[0] = val;
++	regs->iaoq[0] = val;
++	regs->iaoq[1] = val + 4;
+ }
+ 
+ /* Query offset/name of register from its name/offset */
+diff --git a/arch/parisc/kernel/process.c b/arch/parisc/kernel/process.c
+index eb39e7e380d7..841db71958cd 100644
+--- a/arch/parisc/kernel/process.c
++++ b/arch/parisc/kernel/process.c
+@@ -210,12 +210,6 @@ void __cpuidle arch_cpu_idle(void)
+ 
+ static int __init parisc_idle_init(void)
+ {
+-	const char *marker;
+-
+-	/* check QEMU/SeaBIOS marker in PAGE0 */
+-	marker = (char *) &PAGE0->pad0;
+-	running_on_qemu = (memcmp(marker, "SeaBIOS", 8) == 0);
+-
+ 	if (!running_on_qemu)
+ 		cpu_idle_poll_ctrl(1);
+ 
+diff --git a/arch/parisc/kernel/setup.c b/arch/parisc/kernel/setup.c
+index f2cf86ac279b..25946624ce6a 100644
+--- a/arch/parisc/kernel/setup.c
++++ b/arch/parisc/kernel/setup.c
+@@ -396,6 +396,9 @@ void __init start_parisc(void)
+ 	int ret, cpunum;
+ 	struct pdc_coproc_cfg coproc_cfg;
+ 
++	/* check QEMU/SeaBIOS marker in PAGE0 */
++	running_on_qemu = (memcmp(&PAGE0->pad0, "SeaBIOS", 8) == 0);
++
+ 	cpunum = smp_processor_id();
+ 
+ 	init_cpu_topology();
+diff --git a/arch/powerpc/include/asm/book3s/64/hugetlb.h b/arch/powerpc/include/asm/book3s/64/hugetlb.h
+index 5b0177733994..46130ef4941c 100644
+--- a/arch/powerpc/include/asm/book3s/64/hugetlb.h
++++ b/arch/powerpc/include/asm/book3s/64/hugetlb.h
+@@ -35,6 +35,14 @@ static inline int hstate_get_psize(struct hstate *hstate)
+ #ifdef CONFIG_ARCH_HAS_GIGANTIC_PAGE
+ static inline bool gigantic_page_supported(void)
+ {
++	/*
++	 * We used gigantic page reservation with hypervisor assist in some case.
++	 * We cannot use runtime allocation of gigantic pages in those platforms
++	 * This is hash translation mode LPARs.
++	 */
++	if (firmware_has_feature(FW_FEATURE_LPAR) && !radix_enabled())
++		return false;
++
+ 	return true;
+ }
+ #endif
+diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
+index 0f98f00da2ea..19693b8add93 100644
+--- a/arch/powerpc/include/asm/kvm_host.h
++++ b/arch/powerpc/include/asm/kvm_host.h
+@@ -837,7 +837,7 @@ struct kvm_vcpu_arch {
+ static inline void kvm_arch_hardware_disable(void) {}
+ static inline void kvm_arch_hardware_unsetup(void) {}
+ static inline void kvm_arch_sync_events(struct kvm *kvm) {}
+-static inline void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots) {}
++static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
+ static inline void kvm_arch_flush_shadow_all(struct kvm *kvm) {}
+ static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
+ static inline void kvm_arch_exit(void) {}
+diff --git a/arch/powerpc/include/asm/powernv.h b/arch/powerpc/include/asm/powernv.h
+index 2f3ff7a27881..d85fcfea32ca 100644
+--- a/arch/powerpc/include/asm/powernv.h
++++ b/arch/powerpc/include/asm/powernv.h
+@@ -23,6 +23,8 @@ extern int pnv_npu2_handle_fault(struct npu_context *context, uintptr_t *ea,
+ 				unsigned long *flags, unsigned long *status,
+ 				int count);
+ 
++void pnv_program_cpu_hotplug_lpcr(unsigned int cpu, u64 lpcr_val);
++
+ void pnv_tm_init(void);
+ #else
+ static inline void powernv_set_nmmu_ptcr(unsigned long ptcr) { }
+diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
+index 19a8834e0398..0690a306f6ca 100644
+--- a/arch/powerpc/include/asm/ppc-opcode.h
++++ b/arch/powerpc/include/asm/ppc-opcode.h
+@@ -302,6 +302,7 @@
+ /* Misc instructions for BPF compiler */
+ #define PPC_INST_LBZ			0x88000000
+ #define PPC_INST_LD			0xe8000000
++#define PPC_INST_LDX			0x7c00002a
+ #define PPC_INST_LHZ			0xa0000000
+ #define PPC_INST_LWZ			0x80000000
+ #define PPC_INST_LHBRX			0x7c00062c
+@@ -309,6 +310,7 @@
+ #define PPC_INST_STB			0x98000000
+ #define PPC_INST_STH			0xb0000000
+ #define PPC_INST_STD			0xf8000000
++#define PPC_INST_STDX			0x7c00012a
+ #define PPC_INST_STDU			0xf8000001
+ #define PPC_INST_STW			0x90000000
+ #define PPC_INST_STWU			0x94000000
+diff --git a/arch/powerpc/include/asm/topology.h b/arch/powerpc/include/asm/topology.h
+index a4a718dbfec6..f85e2b01c3df 100644
+--- a/arch/powerpc/include/asm/topology.h
++++ b/arch/powerpc/include/asm/topology.h
+@@ -132,6 +132,8 @@ static inline void shared_proc_topology_init(void) {}
+ #define topology_sibling_cpumask(cpu)	(per_cpu(cpu_sibling_map, cpu))
+ #define topology_core_cpumask(cpu)	(per_cpu(cpu_core_map, cpu))
+ #define topology_core_id(cpu)		(cpu_to_core_id(cpu))
++
++int dlpar_cpu_readd(int cpu);
+ #endif
+ #endif
+ 
+diff --git a/arch/powerpc/include/asm/vdso_datapage.h b/arch/powerpc/include/asm/vdso_datapage.h
+index 1afe90ade595..bbc06bd72b1f 100644
+--- a/arch/powerpc/include/asm/vdso_datapage.h
++++ b/arch/powerpc/include/asm/vdso_datapage.h
+@@ -82,10 +82,10 @@ struct vdso_data {
+ 	__u32 icache_block_size;		/* L1 i-cache block size     */
+ 	__u32 dcache_log_block_size;		/* L1 d-cache log block size */
+ 	__u32 icache_log_block_size;		/* L1 i-cache log block size */
+-	__s32 wtom_clock_sec;			/* Wall to monotonic clock */
+-	__s32 wtom_clock_nsec;
+-	struct timespec stamp_xtime;	/* xtime as at tb_orig_stamp */
+-	__u32 stamp_sec_fraction;	/* fractional seconds of stamp_xtime */
++	__u32 stamp_sec_fraction;		/* fractional seconds of stamp_xtime */
++	__s32 wtom_clock_nsec;			/* Wall to monotonic clock nsec */
++	__s64 wtom_clock_sec;			/* Wall to monotonic clock sec */
++	struct timespec stamp_xtime;		/* xtime as at tb_orig_stamp */
+    	__u32 syscall_map_64[SYSCALL_MAP_SIZE]; /* map of syscalls  */
+    	__u32 syscall_map_32[SYSCALL_MAP_SIZE]; /* map of syscalls */
+ };
+diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
+index 0768dfd8a64e..fdd528cdb2ee 100644
+--- a/arch/powerpc/kernel/entry_32.S
++++ b/arch/powerpc/kernel/entry_32.S
+@@ -745,6 +745,9 @@ fast_exception_return:
+ 	mtcr	r10
+ 	lwz	r10,_LINK(r11)
+ 	mtlr	r10
++	/* Clear the exception_marker on the stack to avoid confusing stacktrace */
++	li	r10, 0
++	stw	r10, 8(r11)
+ 	REST_GPR(10, r11)
+ #if defined(CONFIG_PPC_8xx) && defined(CONFIG_PERF_EVENTS)
+ 	mtspr	SPRN_NRI, r0
+@@ -982,6 +985,9 @@ END_FTR_SECTION_IFSET(CPU_FTR_NEED_PAIRED_STWCX)
+ 	mtcrf	0xFF,r10
+ 	mtlr	r11
+ 
++	/* Clear the exception_marker on the stack to avoid confusing stacktrace */
++	li	r10, 0
++	stw	r10, 8(r1)
+ 	/*
+ 	 * Once we put values in SRR0 and SRR1, we are in a state
+ 	 * where exceptions are not recoverable, since taking an
+@@ -1021,6 +1027,9 @@ exc_exit_restart_end:
+ 	mtlr	r11
+ 	lwz	r10,_CCR(r1)
+ 	mtcrf	0xff,r10
++	/* Clear the exception_marker on the stack to avoid confusing stacktrace */
++	li	r10, 0
++	stw	r10, 8(r1)
+ 	REST_2GPRS(9, r1)
+ 	.globl exc_exit_restart
+ exc_exit_restart:
+diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
+index 435927f549c4..a2c168b395d2 100644
+--- a/arch/powerpc/kernel/entry_64.S
++++ b/arch/powerpc/kernel/entry_64.S
+@@ -1002,6 +1002,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
+ 	ld	r2,_NIP(r1)
+ 	mtspr	SPRN_SRR0,r2
+ 
++	/*
++	 * Leaving a stale exception_marker on the stack can confuse
++	 * the reliable stack unwinder later on. Clear it.
++	 */
++	li	r2,0
++	std	r2,STACK_FRAME_OVERHEAD-16(r1)
++
+ 	ld	r0,GPR0(r1)
+ 	ld	r2,GPR2(r1)
+ 	ld	r3,GPR3(r1)
+diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
+index afb638778f44..447defdd4503 100644
+--- a/arch/powerpc/kernel/exceptions-64e.S
++++ b/arch/powerpc/kernel/exceptions-64e.S
+@@ -349,6 +349,7 @@ ret_from_mc_except:
+ #define GEN_BTB_FLUSH
+ #define CRIT_BTB_FLUSH
+ #define DBG_BTB_FLUSH
++#define MC_BTB_FLUSH
+ #define GDBELL_BTB_FLUSH
+ #endif
+ 
+diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
+index 9e253ce27e08..4fee6c9887db 100644
+--- a/arch/powerpc/kernel/exceptions-64s.S
++++ b/arch/powerpc/kernel/exceptions-64s.S
+@@ -612,11 +612,17 @@ EXC_COMMON_BEGIN(data_access_slb_common)
+ 	ld	r4,PACA_EXSLB+EX_DAR(r13)
+ 	std	r4,_DAR(r1)
+ 	addi	r3,r1,STACK_FRAME_OVERHEAD
++BEGIN_MMU_FTR_SECTION
++	/* HPT case, do SLB fault */
+ 	bl	do_slb_fault
+ 	cmpdi	r3,0
+ 	bne-	1f
+ 	b	fast_exception_return
+ 1:	/* Error case */
++MMU_FTR_SECTION_ELSE
++	/* Radix case, access is outside page table range */
++	li	r3,-EFAULT
++ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
+ 	std	r3,RESULT(r1)
+ 	bl	save_nvgprs
+ 	RECONCILE_IRQ_STATE(r10, r11)
+@@ -661,11 +667,17 @@ EXC_COMMON_BEGIN(instruction_access_slb_common)
+ 	EXCEPTION_PROLOG_COMMON(0x480, PACA_EXSLB)
+ 	ld	r4,_NIP(r1)
+ 	addi	r3,r1,STACK_FRAME_OVERHEAD
++BEGIN_MMU_FTR_SECTION
++	/* HPT case, do SLB fault */
+ 	bl	do_slb_fault
+ 	cmpdi	r3,0
+ 	bne-	1f
+ 	b	fast_exception_return
+ 1:	/* Error case */
++MMU_FTR_SECTION_ELSE
++	/* Radix case, access is outside page table range */
++	li	r3,-EFAULT
++ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
+ 	std	r3,RESULT(r1)
+ 	bl	save_nvgprs
+ 	RECONCILE_IRQ_STATE(r10, r11)
+diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
+index ce393df243aa..71bad4b6f80d 100644
+--- a/arch/powerpc/kernel/process.c
++++ b/arch/powerpc/kernel/process.c
+@@ -176,7 +176,7 @@ static void __giveup_fpu(struct task_struct *tsk)
+ 
+ 	save_fpu(tsk);
+ 	msr = tsk->thread.regs->msr;
+-	msr &= ~MSR_FP;
++	msr &= ~(MSR_FP|MSR_FE0|MSR_FE1);
+ #ifdef CONFIG_VSX
+ 	if (cpu_has_feature(CPU_FTR_VSX))
+ 		msr &= ~MSR_VSX;
+diff --git a/arch/powerpc/kernel/ptrace.c b/arch/powerpc/kernel/ptrace.c
+index cdd5d1d3ae41..d9ac7d94656e 100644
+--- a/arch/powerpc/kernel/ptrace.c
++++ b/arch/powerpc/kernel/ptrace.c
+@@ -33,6 +33,7 @@
+ #include <linux/hw_breakpoint.h>
+ #include <linux/perf_event.h>
+ #include <linux/context_tracking.h>
++#include <linux/nospec.h>
+ 
+ #include <linux/uaccess.h>
+ #include <linux/pkeys.h>
+@@ -274,6 +275,8 @@ static int set_user_trap(struct task_struct *task, unsigned long trap)
+  */
+ int ptrace_get_reg(struct task_struct *task, int regno, unsigned long *data)
+ {
++	unsigned int regs_max;
++
+ 	if ((task->thread.regs == NULL) || !data)
+ 		return -EIO;
+ 
+@@ -297,7 +300,9 @@ int ptrace_get_reg(struct task_struct *task, int regno, unsigned long *data)
+ 	}
+ #endif
+ 
+-	if (regno < (sizeof(struct user_pt_regs) / sizeof(unsigned long))) {
++	regs_max = sizeof(struct user_pt_regs) / sizeof(unsigned long);
++	if (regno < regs_max) {
++		regno = array_index_nospec(regno, regs_max);
+ 		*data = ((unsigned long *)task->thread.regs)[regno];
+ 		return 0;
+ 	}
+@@ -321,6 +326,7 @@ int ptrace_put_reg(struct task_struct *task, int regno, unsigned long data)
+ 		return set_user_dscr(task, data);
+ 
+ 	if (regno <= PT_MAX_PUT_REG) {
++		regno = array_index_nospec(regno, PT_MAX_PUT_REG + 1);
+ 		((unsigned long *)task->thread.regs)[regno] = data;
+ 		return 0;
+ 	}
+@@ -561,6 +567,7 @@ static int vr_get(struct task_struct *target, const struct user_regset *regset,
+ 		/*
+ 		 * Copy out only the low-order word of vrsave.
+ 		 */
++		int start, end;
+ 		union {
+ 			elf_vrreg_t reg;
+ 			u32 word;
+@@ -569,8 +576,10 @@ static int vr_get(struct task_struct *target, const struct user_regset *regset,
+ 
+ 		vrsave.word = target->thread.vrsave;
+ 
++		start = 33 * sizeof(vector128);
++		end = start + sizeof(vrsave);
+ 		ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf, &vrsave,
+-					  33 * sizeof(vector128), -1);
++					  start, end);
+ 	}
+ 
+ 	return ret;
+@@ -608,6 +617,7 @@ static int vr_set(struct task_struct *target, const struct user_regset *regset,
+ 		/*
+ 		 * We use only the first word of vrsave.
+ 		 */
++		int start, end;
+ 		union {
+ 			elf_vrreg_t reg;
+ 			u32 word;
+@@ -616,8 +626,10 @@ static int vr_set(struct task_struct *target, const struct user_regset *regset,
+ 
+ 		vrsave.word = target->thread.vrsave;
+ 
++		start = 33 * sizeof(vector128);
++		end = start + sizeof(vrsave);
+ 		ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &vrsave,
+-					 33 * sizeof(vector128), -1);
++					 start, end);
+ 		if (!ret)
+ 			target->thread.vrsave = vrsave.word;
+ 	}
+diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
+index 9b8631533e02..b33bafb8fcea 100644
+--- a/arch/powerpc/kernel/security.c
++++ b/arch/powerpc/kernel/security.c
+@@ -190,29 +190,22 @@ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, c
+ 	bcs = security_ftr_enabled(SEC_FTR_BCCTRL_SERIALISED);
+ 	ccd = security_ftr_enabled(SEC_FTR_COUNT_CACHE_DISABLED);
+ 
+-	if (bcs || ccd || count_cache_flush_type != COUNT_CACHE_FLUSH_NONE) {
+-		bool comma = false;
++	if (bcs || ccd) {
+ 		seq_buf_printf(&s, "Mitigation: ");
+ 
+-		if (bcs) {
++		if (bcs)
+ 			seq_buf_printf(&s, "Indirect branch serialisation (kernel only)");
+-			comma = true;
+-		}
+ 
+-		if (ccd) {
+-			if (comma)
+-				seq_buf_printf(&s, ", ");
+-			seq_buf_printf(&s, "Indirect branch cache disabled");
+-			comma = true;
+-		}
+-
+-		if (comma)
++		if (bcs && ccd)
+ 			seq_buf_printf(&s, ", ");
+ 
+-		seq_buf_printf(&s, "Software count cache flush");
++		if (ccd)
++			seq_buf_printf(&s, "Indirect branch cache disabled");
++	} else if (count_cache_flush_type != COUNT_CACHE_FLUSH_NONE) {
++		seq_buf_printf(&s, "Mitigation: Software count cache flush");
+ 
+ 		if (count_cache_flush_type == COUNT_CACHE_FLUSH_HW)
+-			seq_buf_printf(&s, "(hardware accelerated)");
++			seq_buf_printf(&s, " (hardware accelerated)");
+ 	} else if (btb_flush_enabled) {
+ 		seq_buf_printf(&s, "Mitigation: Branch predictor state flush");
+ 	} else {
+diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
+index 3f15edf25a0d..6e521a3f67ca 100644
+--- a/arch/powerpc/kernel/smp.c
++++ b/arch/powerpc/kernel/smp.c
+@@ -358,13 +358,12 @@ void arch_send_call_function_ipi_mask(const struct cpumask *mask)
+  * NMI IPIs may not be recoverable, so should not be used as ongoing part of
+  * a running system. They can be used for crash, debug, halt/reboot, etc.
+  *
+- * NMI IPIs are globally single threaded. No more than one in progress at
+- * any time.
+- *
+  * The IPI call waits with interrupts disabled until all targets enter the
+- * NMI handler, then the call returns.
++ * NMI handler, then returns. Subsequent IPIs can be issued before targets
++ * have returned from their handlers, so there is no guarantee about
++ * concurrency or re-entrancy.
+  *
+- * No new NMI can be initiated until targets exit the handler.
++ * A new NMI can be issued before all targets exit the handler.
+  *
+  * The IPI call may time out without all targets entering the NMI handler.
+  * In that case, there is some logic to recover (and ignore subsequent
+@@ -375,7 +374,7 @@ void arch_send_call_function_ipi_mask(const struct cpumask *mask)
+ 
+ static atomic_t __nmi_ipi_lock = ATOMIC_INIT(0);
+ static struct cpumask nmi_ipi_pending_mask;
+-static int nmi_ipi_busy_count = 0;
++static bool nmi_ipi_busy = false;
+ static void (*nmi_ipi_function)(struct pt_regs *) = NULL;
+ 
+ static void nmi_ipi_lock_start(unsigned long *flags)
+@@ -414,7 +413,7 @@ static void nmi_ipi_unlock_end(unsigned long *flags)
+  */
+ int smp_handle_nmi_ipi(struct pt_regs *regs)
+ {
+-	void (*fn)(struct pt_regs *);
++	void (*fn)(struct pt_regs *) = NULL;
+ 	unsigned long flags;
+ 	int me = raw_smp_processor_id();
+ 	int ret = 0;
+@@ -425,29 +424,17 @@ int smp_handle_nmi_ipi(struct pt_regs *regs)
+ 	 * because the caller may have timed out.
+ 	 */
+ 	nmi_ipi_lock_start(&flags);
+-	if (!nmi_ipi_busy_count)
+-		goto out;
+-	if (!cpumask_test_cpu(me, &nmi_ipi_pending_mask))
+-		goto out;
+-
+-	fn = nmi_ipi_function;
+-	if (!fn)
+-		goto out;
+-
+-	cpumask_clear_cpu(me, &nmi_ipi_pending_mask);
+-	nmi_ipi_busy_count++;
+-	nmi_ipi_unlock();
+-
+-	ret = 1;
+-
+-	fn(regs);
+-
+-	nmi_ipi_lock();
+-	if (nmi_ipi_busy_count > 1) /* Can race with caller time-out */
+-		nmi_ipi_busy_count--;
+-out:
++	if (cpumask_test_cpu(me, &nmi_ipi_pending_mask)) {
++		cpumask_clear_cpu(me, &nmi_ipi_pending_mask);
++		fn = READ_ONCE(nmi_ipi_function);
++		WARN_ON_ONCE(!fn);
++		ret = 1;
++	}
+ 	nmi_ipi_unlock_end(&flags);
+ 
++	if (fn)
++		fn(regs);
++
+ 	return ret;
+ }
+ 
+@@ -473,7 +460,7 @@ static void do_smp_send_nmi_ipi(int cpu, bool safe)
+  * - cpu is the target CPU (must not be this CPU), or NMI_IPI_ALL_OTHERS.
+  * - fn is the target callback function.
+  * - delay_us > 0 is the delay before giving up waiting for targets to
+- *   complete executing the handler, == 0 specifies indefinite delay.
++ *   begin executing the handler, == 0 specifies indefinite delay.
+  */
+ int __smp_send_nmi_ipi(int cpu, void (*fn)(struct pt_regs *), u64 delay_us, bool safe)
+ {
+@@ -487,31 +474,33 @@ int __smp_send_nmi_ipi(int cpu, void (*fn)(struct pt_regs *), u64 delay_us, bool
+ 	if (unlikely(!smp_ops))
+ 		return 0;
+ 
+-	/* Take the nmi_ipi_busy count/lock with interrupts hard disabled */
+ 	nmi_ipi_lock_start(&flags);
+-	while (nmi_ipi_busy_count) {
++	while (nmi_ipi_busy) {
+ 		nmi_ipi_unlock_end(&flags);
+-		spin_until_cond(nmi_ipi_busy_count == 0);
++		spin_until_cond(!nmi_ipi_busy);
+ 		nmi_ipi_lock_start(&flags);
+ 	}
+-
++	nmi_ipi_busy = true;
+ 	nmi_ipi_function = fn;
+ 
++	WARN_ON_ONCE(!cpumask_empty(&nmi_ipi_pending_mask));
++
+ 	if (cpu < 0) {
+ 		/* ALL_OTHERS */
+ 		cpumask_copy(&nmi_ipi_pending_mask, cpu_online_mask);
+ 		cpumask_clear_cpu(me, &nmi_ipi_pending_mask);
+ 	} else {
+-		/* cpumask starts clear */
+ 		cpumask_set_cpu(cpu, &nmi_ipi_pending_mask);
+ 	}
+-	nmi_ipi_busy_count++;
++
+ 	nmi_ipi_unlock();
+ 
++	/* Interrupts remain hard disabled */
++
+ 	do_smp_send_nmi_ipi(cpu, safe);
+ 
+ 	nmi_ipi_lock();
+-	/* nmi_ipi_busy_count is held here, so unlock/lock is okay */
++	/* nmi_ipi_busy is set here, so unlock/lock is okay */
+ 	while (!cpumask_empty(&nmi_ipi_pending_mask)) {
+ 		nmi_ipi_unlock();
+ 		udelay(1);
+@@ -523,29 +512,15 @@ int __smp_send_nmi_ipi(int cpu, void (*fn)(struct pt_regs *), u64 delay_us, bool
+ 		}
+ 	}
+ 
+-	while (nmi_ipi_busy_count > 1) {
+-		nmi_ipi_unlock();
+-		udelay(1);
+-		nmi_ipi_lock();
+-		if (delay_us) {
+-			delay_us--;
+-			if (!delay_us)
+-				break;
+-		}
+-	}
+-
+ 	if (!cpumask_empty(&nmi_ipi_pending_mask)) {
+ 		/* Timeout waiting for CPUs to call smp_handle_nmi_ipi */
+ 		ret = 0;
+ 		cpumask_clear(&nmi_ipi_pending_mask);
+ 	}
+-	if (nmi_ipi_busy_count > 1) {
+-		/* Timeout waiting for CPUs to execute fn */
+-		ret = 0;
+-		nmi_ipi_busy_count = 1;
+-	}
+ 
+-	nmi_ipi_busy_count--;
++	nmi_ipi_function = NULL;
++	nmi_ipi_busy = false;
++
+ 	nmi_ipi_unlock_end(&flags);
+ 
+ 	return ret;
+@@ -613,17 +588,8 @@ void crash_send_ipi(void (*crash_ipi_callback)(struct pt_regs *))
+ static void nmi_stop_this_cpu(struct pt_regs *regs)
+ {
+ 	/*
+-	 * This is a special case because it never returns, so the NMI IPI
+-	 * handling would never mark it as done, which makes any later
+-	 * smp_send_nmi_ipi() call spin forever. Mark it done now.
+-	 *
+ 	 * IRQs are already hard disabled by the smp_handle_nmi_ipi.
+ 	 */
+-	nmi_ipi_lock();
+-	if (nmi_ipi_busy_count > 1)
+-		nmi_ipi_busy_count--;
+-	nmi_ipi_unlock();
+-
+ 	spin_begin();
+ 	while (1)
+ 		spin_cpu_relax();
+diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
+index 64936b60d521..7a1de34f38c8 100644
+--- a/arch/powerpc/kernel/traps.c
++++ b/arch/powerpc/kernel/traps.c
+@@ -763,15 +763,15 @@ void machine_check_exception(struct pt_regs *regs)
+ 	if (check_io_access(regs))
+ 		goto bail;
+ 
+-	/* Must die if the interrupt is not recoverable */
+-	if (!(regs->msr & MSR_RI))
+-		nmi_panic(regs, "Unrecoverable Machine check");
+-
+ 	if (!nested)
+ 		nmi_exit();
+ 
+ 	die("Machine check", regs, SIGBUS);
+ 
++	/* Must die if the interrupt is not recoverable */
++	if (!(regs->msr & MSR_RI))
++		nmi_panic(regs, "Unrecoverable Machine check");
++
+ 	return;
+ 
+ bail:
+@@ -1542,8 +1542,8 @@ bail:
+ 
+ void StackOverflow(struct pt_regs *regs)
+ {
+-	printk(KERN_CRIT "Kernel stack overflow in process %p, r1=%lx\n",
+-	       current, regs->gpr[1]);
++	pr_crit("Kernel stack overflow in process %s[%d], r1=%lx\n",
++		current->comm, task_pid_nr(current), regs->gpr[1]);
+ 	debugger(regs);
+ 	show_regs(regs);
+ 	panic("kernel stack overflow");
+diff --git a/arch/powerpc/kernel/vdso64/gettimeofday.S b/arch/powerpc/kernel/vdso64/gettimeofday.S
+index a4ed9edfd5f0..1f324c28705b 100644
+--- a/arch/powerpc/kernel/vdso64/gettimeofday.S
++++ b/arch/powerpc/kernel/vdso64/gettimeofday.S
+@@ -92,7 +92,7 @@ V_FUNCTION_BEGIN(__kernel_clock_gettime)
+ 	 * At this point, r4,r5 contain our sec/nsec values.
+ 	 */
+ 
+-	lwa	r6,WTOM_CLOCK_SEC(r3)
++	ld	r6,WTOM_CLOCK_SEC(r3)
+ 	lwa	r9,WTOM_CLOCK_NSEC(r3)
+ 
+ 	/* We now have our result in r6,r9. We create a fake dependency
+@@ -125,7 +125,7 @@ V_FUNCTION_BEGIN(__kernel_clock_gettime)
+ 	bne     cr6,75f
+ 
+ 	/* CLOCK_MONOTONIC_COARSE */
+-	lwa     r6,WTOM_CLOCK_SEC(r3)
++	ld	r6,WTOM_CLOCK_SEC(r3)
+ 	lwa     r9,WTOM_CLOCK_NSEC(r3)
+ 
+ 	/* check if counter has updated */
+diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+index 9b8d50a7cbaf..45b06e239d1f 100644
+--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
++++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+@@ -58,6 +58,8 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300)
+ #define STACK_SLOT_DAWR		(SFS-56)
+ #define STACK_SLOT_DAWRX	(SFS-64)
+ #define STACK_SLOT_HFSCR	(SFS-72)
++#define STACK_SLOT_AMR		(SFS-80)
++#define STACK_SLOT_UAMOR	(SFS-88)
+ /* the following is used by the P9 short path */
+ #define STACK_SLOT_NVGPRS	(SFS-152)	/* 18 gprs */
+ 
+@@ -726,11 +728,9 @@ BEGIN_FTR_SECTION
+ 	mfspr	r5, SPRN_TIDR
+ 	mfspr	r6, SPRN_PSSCR
+ 	mfspr	r7, SPRN_PID
+-	mfspr	r8, SPRN_IAMR
+ 	std	r5, STACK_SLOT_TID(r1)
+ 	std	r6, STACK_SLOT_PSSCR(r1)
+ 	std	r7, STACK_SLOT_PID(r1)
+-	std	r8, STACK_SLOT_IAMR(r1)
+ 	mfspr	r5, SPRN_HFSCR
+ 	std	r5, STACK_SLOT_HFSCR(r1)
+ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
+@@ -738,11 +738,18 @@ BEGIN_FTR_SECTION
+ 	mfspr	r5, SPRN_CIABR
+ 	mfspr	r6, SPRN_DAWR
+ 	mfspr	r7, SPRN_DAWRX
++	mfspr	r8, SPRN_IAMR
+ 	std	r5, STACK_SLOT_CIABR(r1)
+ 	std	r6, STACK_SLOT_DAWR(r1)
+ 	std	r7, STACK_SLOT_DAWRX(r1)
++	std	r8, STACK_SLOT_IAMR(r1)
+ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
+ 
++	mfspr	r5, SPRN_AMR
++	std	r5, STACK_SLOT_AMR(r1)
++	mfspr	r6, SPRN_UAMOR
++	std	r6, STACK_SLOT_UAMOR(r1)
++
+ BEGIN_FTR_SECTION
+ 	/* Set partition DABR */
+ 	/* Do this before re-enabling PMU to avoid P7 DABR corruption bug */
+@@ -1631,22 +1638,25 @@ ALT_FTR_SECTION_END_IFCLR(CPU_FTR_ARCH_300)
+ 	mtspr	SPRN_PSPB, r0
+ 	mtspr	SPRN_WORT, r0
+ BEGIN_FTR_SECTION
+-	mtspr	SPRN_IAMR, r0
+ 	mtspr	SPRN_TCSCR, r0
+ 	/* Set MMCRS to 1<<31 to freeze and disable the SPMC counters */
+ 	li	r0, 1
+ 	sldi	r0, r0, 31
+ 	mtspr	SPRN_MMCRS, r0
+ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300)
+-8:
+ 
+-	/* Save and reset AMR and UAMOR before turning on the MMU */
++	/* Save and restore AMR, IAMR and UAMOR before turning on the MMU */
++	ld	r8, STACK_SLOT_IAMR(r1)
++	mtspr	SPRN_IAMR, r8
++
++8:	/* Power7 jumps back in here */
+ 	mfspr	r5,SPRN_AMR
+ 	mfspr	r6,SPRN_UAMOR
+ 	std	r5,VCPU_AMR(r9)
+ 	std	r6,VCPU_UAMOR(r9)
+-	li	r6,0
+-	mtspr	SPRN_AMR,r6
++	ld	r5,STACK_SLOT_AMR(r1)
++	ld	r6,STACK_SLOT_UAMOR(r1)
++	mtspr	SPRN_AMR, r5
+ 	mtspr	SPRN_UAMOR, r6
+ 
+ 	/* Switch DSCR back to host value */
+@@ -1746,11 +1756,9 @@ BEGIN_FTR_SECTION
+ 	ld	r5, STACK_SLOT_TID(r1)
+ 	ld	r6, STACK_SLOT_PSSCR(r1)
+ 	ld	r7, STACK_SLOT_PID(r1)
+-	ld	r8, STACK_SLOT_IAMR(r1)
+ 	mtspr	SPRN_TIDR, r5
+ 	mtspr	SPRN_PSSCR, r6
+ 	mtspr	SPRN_PID, r7
+-	mtspr	SPRN_IAMR, r8
+ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
+ 
+ #ifdef CONFIG_PPC_RADIX_MMU
+diff --git a/arch/powerpc/lib/memcmp_64.S b/arch/powerpc/lib/memcmp_64.S
+index 844d8e774492..b7f6f6e0b6e8 100644
+--- a/arch/powerpc/lib/memcmp_64.S
++++ b/arch/powerpc/lib/memcmp_64.S
+@@ -215,11 +215,20 @@ _GLOBAL_TOC(memcmp)
+ 	beq	.Lzero
+ 
+ .Lcmp_rest_lt8bytes:
+-	/* Here we have only less than 8 bytes to compare with. at least s1
+-	 * Address is aligned with 8 bytes.
+-	 * The next double words are load and shift right with appropriate
+-	 * bits.
++	/*
++	 * Here we have less than 8 bytes to compare. At least s1 is aligned to
++	 * 8 bytes, but s2 may not be. We must make sure s2 + 7 doesn't cross a
++	 * page boundary, otherwise we might read past the end of the buffer and
++	 * trigger a page fault. We use 4K as the conservative minimum page
++	 * size. If we detect that case we go to the byte-by-byte loop.
++	 *
++	 * Otherwise the next double word is loaded from s1 and s2, and shifted
++	 * right to compare the appropriate bits.
+ 	 */
++	clrldi	r6,r4,(64-12)	// r6 = r4 & 0xfff
++	cmpdi	r6,0xff8
++	bgt	.Lshort
++
+ 	subfic  r6,r5,8
+ 	slwi	r6,r6,3
+ 	LD	rA,0,r3
+diff --git a/arch/powerpc/mm/hugetlbpage-radix.c b/arch/powerpc/mm/hugetlbpage-radix.c
+index 2486bee0f93e..97c7a39ebc00 100644
+--- a/arch/powerpc/mm/hugetlbpage-radix.c
++++ b/arch/powerpc/mm/hugetlbpage-radix.c
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include <linux/mm.h>
+ #include <linux/hugetlb.h>
++#include <linux/security.h>
+ #include <asm/pgtable.h>
+ #include <asm/pgalloc.h>
+ #include <asm/cacheflush.h>
+@@ -73,7 +74,7 @@ radix__hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
+ 	if (addr) {
+ 		addr = ALIGN(addr, huge_page_size(h));
+ 		vma = find_vma(mm, addr);
+-		if (high_limit - len >= addr &&
++		if (high_limit - len >= addr && addr >= mmap_min_addr &&
+ 		    (!vma || addr + len <= vm_start_gap(vma)))
+ 			return addr;
+ 	}
+@@ -83,7 +84,7 @@ radix__hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
+ 	 */
+ 	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
+ 	info.length = len;
+-	info.low_limit = PAGE_SIZE;
++	info.low_limit = max(PAGE_SIZE, mmap_min_addr);
+ 	info.high_limit = mm->mmap_base + (high_limit - DEFAULT_MAP_WINDOW);
+ 	info.align_mask = PAGE_MASK & ~huge_page_mask(h);
+ 	info.align_offset = 0;
+diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
+index 87f0dd004295..b5d1c45c1475 100644
+--- a/arch/powerpc/mm/numa.c
++++ b/arch/powerpc/mm/numa.c
+@@ -1460,13 +1460,6 @@ static void reset_topology_timer(void)
+ 
+ #ifdef CONFIG_SMP
+ 
+-static void stage_topology_update(int core_id)
+-{
+-	cpumask_or(&cpu_associativity_changes_mask,
+-		&cpu_associativity_changes_mask, cpu_sibling_mask(core_id));
+-	reset_topology_timer();
+-}
+-
+ static int dt_update_callback(struct notifier_block *nb,
+ 				unsigned long action, void *data)
+ {
+@@ -1479,7 +1472,7 @@ static int dt_update_callback(struct notifier_block *nb,
+ 		    !of_prop_cmp(update->prop->name, "ibm,associativity")) {
+ 			u32 core_id;
+ 			of_property_read_u32(update->dn, "reg", &core_id);
+-			stage_topology_update(core_id);
++			rc = dlpar_cpu_readd(core_id);
+ 			rc = NOTIFY_OK;
+ 		}
+ 		break;
+diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
+index bc3914d54e26..5986df48359b 100644
+--- a/arch/powerpc/mm/slb.c
++++ b/arch/powerpc/mm/slb.c
+@@ -69,6 +69,11 @@ static void assert_slb_presence(bool present, unsigned long ea)
+ 	if (!cpu_has_feature(CPU_FTR_ARCH_206))
+ 		return;
+ 
++	/*
++	 * slbfee. requires bit 24 (PPC bit 39) be clear in RB. Hardware
++	 * ignores all other bits from 0-27, so just clear them all.
++	 */
++	ea &= ~((1UL << 28) - 1);
+ 	asm volatile(__PPC_SLBFEE_DOT(%0, %1) : "=r"(tmp) : "r"(ea) : "cr0");
+ 
+ 	WARN_ON(present == (tmp == 0));
+diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
+index c2d5192ed64f..e52e30bf7d86 100644
+--- a/arch/powerpc/net/bpf_jit.h
++++ b/arch/powerpc/net/bpf_jit.h
+@@ -51,6 +51,8 @@
+ #define PPC_LIS(r, i)		PPC_ADDIS(r, 0, i)
+ #define PPC_STD(r, base, i)	EMIT(PPC_INST_STD | ___PPC_RS(r) |	      \
+ 				     ___PPC_RA(base) | ((i) & 0xfffc))
++#define PPC_STDX(r, base, b)	EMIT(PPC_INST_STDX | ___PPC_RS(r) |	      \
++				     ___PPC_RA(base) | ___PPC_RB(b))
+ #define PPC_STDU(r, base, i)	EMIT(PPC_INST_STDU | ___PPC_RS(r) |	      \
+ 				     ___PPC_RA(base) | ((i) & 0xfffc))
+ #define PPC_STW(r, base, i)	EMIT(PPC_INST_STW | ___PPC_RS(r) |	      \
+@@ -65,7 +67,9 @@
+ #define PPC_LBZ(r, base, i)	EMIT(PPC_INST_LBZ | ___PPC_RT(r) |	      \
+ 				     ___PPC_RA(base) | IMM_L(i))
+ #define PPC_LD(r, base, i)	EMIT(PPC_INST_LD | ___PPC_RT(r) |	      \
+-				     ___PPC_RA(base) | IMM_L(i))
++				     ___PPC_RA(base) | ((i) & 0xfffc))
++#define PPC_LDX(r, base, b)	EMIT(PPC_INST_LDX | ___PPC_RT(r) |	      \
++				     ___PPC_RA(base) | ___PPC_RB(b))
+ #define PPC_LWZ(r, base, i)	EMIT(PPC_INST_LWZ | ___PPC_RT(r) |	      \
+ 				     ___PPC_RA(base) | IMM_L(i))
+ #define PPC_LHZ(r, base, i)	EMIT(PPC_INST_LHZ | ___PPC_RT(r) |	      \
+@@ -85,17 +89,6 @@
+ 					___PPC_RA(a) | ___PPC_RB(b))
+ #define PPC_BPF_STDCX(s, a, b)	EMIT(PPC_INST_STDCX | ___PPC_RS(s) |	      \
+ 					___PPC_RA(a) | ___PPC_RB(b))
+-
+-#ifdef CONFIG_PPC64
+-#define PPC_BPF_LL(r, base, i) do { PPC_LD(r, base, i); } while(0)
+-#define PPC_BPF_STL(r, base, i) do { PPC_STD(r, base, i); } while(0)
+-#define PPC_BPF_STLU(r, base, i) do { PPC_STDU(r, base, i); } while(0)
+-#else
+-#define PPC_BPF_LL(r, base, i) do { PPC_LWZ(r, base, i); } while(0)
+-#define PPC_BPF_STL(r, base, i) do { PPC_STW(r, base, i); } while(0)
+-#define PPC_BPF_STLU(r, base, i) do { PPC_STWU(r, base, i); } while(0)
+-#endif
+-
+ #define PPC_CMPWI(a, i)		EMIT(PPC_INST_CMPWI | ___PPC_RA(a) | IMM_L(i))
+ #define PPC_CMPDI(a, i)		EMIT(PPC_INST_CMPDI | ___PPC_RA(a) | IMM_L(i))
+ #define PPC_CMPW(a, b)		EMIT(PPC_INST_CMPW | ___PPC_RA(a) |	      \
+diff --git a/arch/powerpc/net/bpf_jit32.h b/arch/powerpc/net/bpf_jit32.h
+index 6f4daacad296..ade04547703f 100644
+--- a/arch/powerpc/net/bpf_jit32.h
++++ b/arch/powerpc/net/bpf_jit32.h
+@@ -123,6 +123,10 @@ DECLARE_LOAD_FUNC(sk_load_byte_msh);
+ #define PPC_NTOHS_OFFS(r, base, i)	PPC_LHZ_OFFS(r, base, i)
+ #endif
+ 
++#define PPC_BPF_LL(r, base, i) do { PPC_LWZ(r, base, i); } while(0)
++#define PPC_BPF_STL(r, base, i) do { PPC_STW(r, base, i); } while(0)
++#define PPC_BPF_STLU(r, base, i) do { PPC_STWU(r, base, i); } while(0)
++
+ #define SEEN_DATAREF 0x10000 /* might call external helpers */
+ #define SEEN_XREG    0x20000 /* X reg is used */
+ #define SEEN_MEM     0x40000 /* SEEN_MEM+(1<<n) = use mem[n] for temporary
+diff --git a/arch/powerpc/net/bpf_jit64.h b/arch/powerpc/net/bpf_jit64.h
+index 3609be4692b3..47f441f351a6 100644
+--- a/arch/powerpc/net/bpf_jit64.h
++++ b/arch/powerpc/net/bpf_jit64.h
+@@ -68,6 +68,26 @@ static const int b2p[] = {
+ /* PPC NVR range -- update this if we ever use NVRs below r27 */
+ #define BPF_PPC_NVR_MIN		27
+ 
++/*
++ * WARNING: These can use TMP_REG_2 if the offset is not at word boundary,
++ * so ensure that it isn't in use already.
++ */
++#define PPC_BPF_LL(r, base, i) do {					      \
++				if ((i) % 4) {				      \
++					PPC_LI(b2p[TMP_REG_2], (i));	      \
++					PPC_LDX(r, base, b2p[TMP_REG_2]);     \
++				} else					      \
++					PPC_LD(r, base, i);		      \
++				} while(0)
++#define PPC_BPF_STL(r, base, i) do {					      \
++				if ((i) % 4) {				      \
++					PPC_LI(b2p[TMP_REG_2], (i));	      \
++					PPC_STDX(r, base, b2p[TMP_REG_2]);    \
++				} else					      \
++					PPC_STD(r, base, i);		      \
++				} while(0)
++#define PPC_BPF_STLU(r, base, i) do { PPC_STDU(r, base, i); } while(0)
++
+ #define SEEN_FUNC	0x1000 /* might call external helpers */
+ #define SEEN_STACK	0x2000 /* uses BPF stack */
+ #define SEEN_TAILCALL	0x4000 /* uses tail calls */
+diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
+index 7ce57657d3b8..b1a116eecae2 100644
+--- a/arch/powerpc/net/bpf_jit_comp64.c
++++ b/arch/powerpc/net/bpf_jit_comp64.c
+@@ -252,7 +252,7 @@ static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32
+ 	 * if (tail_call_cnt > MAX_TAIL_CALL_CNT)
+ 	 *   goto out;
+ 	 */
+-	PPC_LD(b2p[TMP_REG_1], 1, bpf_jit_stack_tailcallcnt(ctx));
++	PPC_BPF_LL(b2p[TMP_REG_1], 1, bpf_jit_stack_tailcallcnt(ctx));
+ 	PPC_CMPLWI(b2p[TMP_REG_1], MAX_TAIL_CALL_CNT);
+ 	PPC_BCC(COND_GT, out);
+ 
+@@ -265,7 +265,7 @@ static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32
+ 	/* prog = array->ptrs[index]; */
+ 	PPC_MULI(b2p[TMP_REG_1], b2p_index, 8);
+ 	PPC_ADD(b2p[TMP_REG_1], b2p[TMP_REG_1], b2p_bpf_array);
+-	PPC_LD(b2p[TMP_REG_1], b2p[TMP_REG_1], offsetof(struct bpf_array, ptrs));
++	PPC_BPF_LL(b2p[TMP_REG_1], b2p[TMP_REG_1], offsetof(struct bpf_array, ptrs));
+ 
+ 	/*
+ 	 * if (prog == NULL)
+@@ -275,7 +275,7 @@ static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32
+ 	PPC_BCC(COND_EQ, out);
+ 
+ 	/* goto *(prog->bpf_func + prologue_size); */
+-	PPC_LD(b2p[TMP_REG_1], b2p[TMP_REG_1], offsetof(struct bpf_prog, bpf_func));
++	PPC_BPF_LL(b2p[TMP_REG_1], b2p[TMP_REG_1], offsetof(struct bpf_prog, bpf_func));
+ #ifdef PPC64_ELF_ABI_v1
+ 	/* skip past the function descriptor */
+ 	PPC_ADDI(b2p[TMP_REG_1], b2p[TMP_REG_1],
+@@ -606,7 +606,7 @@ bpf_alu32_trunc:
+ 				 * the instructions generated will remain the
+ 				 * same across all passes
+ 				 */
+-				PPC_STD(dst_reg, 1, bpf_jit_stack_local(ctx));
++				PPC_BPF_STL(dst_reg, 1, bpf_jit_stack_local(ctx));
+ 				PPC_ADDI(b2p[TMP_REG_1], 1, bpf_jit_stack_local(ctx));
+ 				PPC_LDBRX(dst_reg, 0, b2p[TMP_REG_1]);
+ 				break;
+@@ -662,7 +662,7 @@ emit_clear:
+ 				PPC_LI32(b2p[TMP_REG_1], imm);
+ 				src_reg = b2p[TMP_REG_1];
+ 			}
+-			PPC_STD(src_reg, dst_reg, off);
++			PPC_BPF_STL(src_reg, dst_reg, off);
+ 			break;
+ 
+ 		/*
+@@ -709,7 +709,7 @@ emit_clear:
+ 			break;
+ 		/* dst = *(u64 *)(ul) (src + off) */
+ 		case BPF_LDX | BPF_MEM | BPF_DW:
+-			PPC_LD(dst_reg, src_reg, off);
++			PPC_BPF_LL(dst_reg, src_reg, off);
+ 			break;
+ 
+ 		/*
+diff --git a/arch/powerpc/platforms/44x/Kconfig b/arch/powerpc/platforms/44x/Kconfig
+index 4a9a72d01c3c..35be81fd2dc2 100644
+--- a/arch/powerpc/platforms/44x/Kconfig
++++ b/arch/powerpc/platforms/44x/Kconfig
+@@ -180,6 +180,7 @@ config CURRITUCK
+ 	depends on PPC_47x
+ 	select SWIOTLB
+ 	select 476FPE
++	select FORCE_PCI
+ 	select PPC4xx_PCI_EXPRESS
+ 	help
+ 	  This option enables support for the IBM Currituck (476fpe) evaluation board
+diff --git a/arch/powerpc/platforms/83xx/suspend-asm.S b/arch/powerpc/platforms/83xx/suspend-asm.S
+index 3d1ecd211776..8137f77abad5 100644
+--- a/arch/powerpc/platforms/83xx/suspend-asm.S
++++ b/arch/powerpc/platforms/83xx/suspend-asm.S
+@@ -26,13 +26,13 @@
+ #define SS_MSR		0x74
+ #define SS_SDR1		0x78
+ #define SS_LR		0x7c
+-#define SS_SPRG		0x80 /* 4 SPRGs */
+-#define SS_DBAT		0x90 /* 8 DBATs */
+-#define SS_IBAT		0xd0 /* 8 IBATs */
+-#define SS_TB		0x110
+-#define SS_CR		0x118
+-#define SS_GPREG	0x11c /* r12-r31 */
+-#define STATE_SAVE_SIZE 0x16c
++#define SS_SPRG		0x80 /* 8 SPRGs */
++#define SS_DBAT		0xa0 /* 8 DBATs */
++#define SS_IBAT		0xe0 /* 8 IBATs */
++#define SS_TB		0x120
++#define SS_CR		0x128
++#define SS_GPREG	0x12c /* r12-r31 */
++#define STATE_SAVE_SIZE 0x17c
+ 
+ 	.section .data
+ 	.align	5
+@@ -103,6 +103,16 @@ _GLOBAL(mpc83xx_enter_deep_sleep)
+ 	stw	r7, SS_SPRG+12(r3)
+ 	stw	r8, SS_SDR1(r3)
+ 
++	mfspr	r4, SPRN_SPRG4
++	mfspr	r5, SPRN_SPRG5
++	mfspr	r6, SPRN_SPRG6
++	mfspr	r7, SPRN_SPRG7
++
++	stw	r4, SS_SPRG+16(r3)
++	stw	r5, SS_SPRG+20(r3)
++	stw	r6, SS_SPRG+24(r3)
++	stw	r7, SS_SPRG+28(r3)
++
+ 	mfspr	r4, SPRN_DBAT0U
+ 	mfspr	r5, SPRN_DBAT0L
+ 	mfspr	r6, SPRN_DBAT1U
+@@ -493,6 +503,16 @@ mpc83xx_deep_resume:
+ 	mtspr	SPRN_IBAT7U, r6
+ 	mtspr	SPRN_IBAT7L, r7
+ 
++	lwz	r4, SS_SPRG+16(r3)
++	lwz	r5, SS_SPRG+20(r3)
++	lwz	r6, SS_SPRG+24(r3)
++	lwz	r7, SS_SPRG+28(r3)
++
++	mtspr	SPRN_SPRG4, r4
++	mtspr	SPRN_SPRG5, r5
++	mtspr	SPRN_SPRG6, r6
++	mtspr	SPRN_SPRG7, r7
++
+ 	lwz	r4, SS_SPRG+0(r3)
+ 	lwz	r5, SS_SPRG+4(r3)
+ 	lwz	r6, SS_SPRG+8(r3)
+diff --git a/arch/powerpc/platforms/embedded6xx/wii.c b/arch/powerpc/platforms/embedded6xx/wii.c
+index ecf703ee3a76..ac4ee88efc80 100644
+--- a/arch/powerpc/platforms/embedded6xx/wii.c
++++ b/arch/powerpc/platforms/embedded6xx/wii.c
+@@ -83,6 +83,10 @@ unsigned long __init wii_mmu_mapin_mem2(unsigned long top)
+ 	/* MEM2 64MB@0x10000000 */
+ 	delta = wii_hole_start + wii_hole_size;
+ 	size = top - delta;
++
++	if (__map_without_bats)
++		return delta;
++
+ 	for (bl = 128<<10; bl < max_size; bl <<= 1) {
+ 		if (bl * 2 > size)
+ 			break;
+diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c
+index 35f699ebb662..e52f9b06dd9c 100644
+--- a/arch/powerpc/platforms/powernv/idle.c
++++ b/arch/powerpc/platforms/powernv/idle.c
+@@ -458,7 +458,8 @@ EXPORT_SYMBOL_GPL(pnv_power9_force_smt4_release);
+ #endif /* CONFIG_KVM_BOOK3S_HV_POSSIBLE */
+ 
+ #ifdef CONFIG_HOTPLUG_CPU
+-static void pnv_program_cpu_hotplug_lpcr(unsigned int cpu, u64 lpcr_val)
++
++void pnv_program_cpu_hotplug_lpcr(unsigned int cpu, u64 lpcr_val)
+ {
+ 	u64 pir = get_hard_smp_processor_id(cpu);
+ 
+@@ -481,20 +482,6 @@ unsigned long pnv_cpu_offline(unsigned int cpu)
+ {
+ 	unsigned long srr1;
+ 	u32 idle_states = pnv_get_supported_cpuidle_states();
+-	u64 lpcr_val;
+-
+-	/*
+-	 * We don't want to take decrementer interrupts while we are
+-	 * offline, so clear LPCR:PECE1. We keep PECE2 (and
+-	 * LPCR_PECE_HVEE on P9) enabled as to let IPIs in.
+-	 *
+-	 * If the CPU gets woken up by a special wakeup, ensure that
+-	 * the SLW engine sets LPCR with decrementer bit cleared, else
+-	 * the CPU will come back to the kernel due to a spurious
+-	 * wakeup.
+-	 */
+-	lpcr_val = mfspr(SPRN_LPCR) & ~(u64)LPCR_PECE1;
+-	pnv_program_cpu_hotplug_lpcr(cpu, lpcr_val);
+ 
+ 	__ppc64_runlatch_off();
+ 
+@@ -526,16 +513,6 @@ unsigned long pnv_cpu_offline(unsigned int cpu)
+ 
+ 	__ppc64_runlatch_on();
+ 
+-	/*
+-	 * Re-enable decrementer interrupts in LPCR.
+-	 *
+-	 * Further, we want stop states to be woken up by decrementer
+-	 * for non-hotplug cases. So program the LPCR via stop api as
+-	 * well.
+-	 */
+-	lpcr_val = mfspr(SPRN_LPCR) | (u64)LPCR_PECE1;
+-	pnv_program_cpu_hotplug_lpcr(cpu, lpcr_val);
+-
+ 	return srr1;
+ }
+ #endif
+diff --git a/arch/powerpc/platforms/powernv/opal-msglog.c b/arch/powerpc/platforms/powernv/opal-msglog.c
+index acd3206dfae3..06628c71cef6 100644
+--- a/arch/powerpc/platforms/powernv/opal-msglog.c
++++ b/arch/powerpc/platforms/powernv/opal-msglog.c
+@@ -98,7 +98,7 @@ static ssize_t opal_msglog_read(struct file *file, struct kobject *kobj,
+ }
+ 
+ static struct bin_attribute opal_msglog_attr = {
+-	.attr = {.name = "msglog", .mode = 0444},
++	.attr = {.name = "msglog", .mode = 0400},
+ 	.read = opal_msglog_read
+ };
+ 
+diff --git a/arch/powerpc/platforms/powernv/pci-ioda-tce.c b/arch/powerpc/platforms/powernv/pci-ioda-tce.c
+index 697449afb3f7..e28f03e1eb5e 100644
+--- a/arch/powerpc/platforms/powernv/pci-ioda-tce.c
++++ b/arch/powerpc/platforms/powernv/pci-ioda-tce.c
+@@ -313,7 +313,6 @@ long pnv_pci_ioda2_table_alloc_pages(int nid, __u64 bus_offset,
+ 			page_shift);
+ 	tbl->it_level_size = 1ULL << (level_shift - 3);
+ 	tbl->it_indirect_levels = levels - 1;
+-	tbl->it_allocated_size = total_allocated;
+ 	tbl->it_userspace = uas;
+ 	tbl->it_nid = nid;
+ 
+diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
+index 145373f0e5dc..2d62c58f9a4c 100644
+--- a/arch/powerpc/platforms/powernv/pci-ioda.c
++++ b/arch/powerpc/platforms/powernv/pci-ioda.c
+@@ -2594,8 +2594,13 @@ static long pnv_pci_ioda2_create_table_userspace(
+ 		int num, __u32 page_shift, __u64 window_size, __u32 levels,
+ 		struct iommu_table **ptbl)
+ {
+-	return pnv_pci_ioda2_create_table(table_group,
++	long ret = pnv_pci_ioda2_create_table(table_group,
+ 			num, page_shift, window_size, levels, true, ptbl);
++
++	if (!ret)
++		(*ptbl)->it_allocated_size = pnv_pci_ioda2_get_table_size(
++				page_shift, window_size, levels);
++	return ret;
+ }
+ 
+ static void pnv_ioda2_take_ownership(struct iommu_table_group *table_group)
+diff --git a/arch/powerpc/platforms/powernv/smp.c b/arch/powerpc/platforms/powernv/smp.c
+index 0d354e19ef92..db09c7022635 100644
+--- a/arch/powerpc/platforms/powernv/smp.c
++++ b/arch/powerpc/platforms/powernv/smp.c
+@@ -39,6 +39,7 @@
+ #include <asm/cpuidle.h>
+ #include <asm/kexec.h>
+ #include <asm/reg.h>
++#include <asm/powernv.h>
+ 
+ #include "powernv.h"
+ 
+@@ -153,6 +154,7 @@ static void pnv_smp_cpu_kill_self(void)
+ {
+ 	unsigned int cpu;
+ 	unsigned long srr1, wmask;
++	u64 lpcr_val;
+ 
+ 	/* Standard hot unplug procedure */
+ 	/*
+@@ -174,6 +176,19 @@ static void pnv_smp_cpu_kill_self(void)
+ 	if (cpu_has_feature(CPU_FTR_ARCH_207S))
+ 		wmask = SRR1_WAKEMASK_P8;
+ 
++	/*
++	 * We don't want to take decrementer interrupts while we are
++	 * offline, so clear LPCR:PECE1. We keep PECE2 (and
++	 * LPCR_PECE_HVEE on P9) enabled so as to let IPIs in.
++	 *
++	 * If the CPU gets woken up by a special wakeup, ensure that
++	 * the SLW engine sets LPCR with decrementer bit cleared, else
++	 * the CPU will come back to the kernel due to a spurious
++	 * wakeup.
++	 */
++	lpcr_val = mfspr(SPRN_LPCR) & ~(u64)LPCR_PECE1;
++	pnv_program_cpu_hotplug_lpcr(cpu, lpcr_val);
++
+ 	while (!generic_check_cpu_restart(cpu)) {
+ 		/*
+ 		 * Clear IPI flag, since we don't handle IPIs while
+@@ -246,6 +261,16 @@ static void pnv_smp_cpu_kill_self(void)
+ 
+ 	}
+ 
++	/*
++	 * Re-enable decrementer interrupts in LPCR.
++	 *
++	 * Further, we want stop states to be woken up by decrementer
++	 * for non-hotplug cases. So program the LPCR via stop api as
++	 * well.
++	 */
++	lpcr_val = mfspr(SPRN_LPCR) | (u64)LPCR_PECE1;
++	pnv_program_cpu_hotplug_lpcr(cpu, lpcr_val);
++
+ 	DBG("CPU%d coming online...\n", cpu);
+ }
+ 
+diff --git a/arch/powerpc/platforms/pseries/hotplug-cpu.c b/arch/powerpc/platforms/pseries/hotplug-cpu.c
+index 2f8e62163602..97feb6e79f1a 100644
+--- a/arch/powerpc/platforms/pseries/hotplug-cpu.c
++++ b/arch/powerpc/platforms/pseries/hotplug-cpu.c
+@@ -802,6 +802,25 @@ static int dlpar_cpu_add_by_count(u32 cpus_to_add)
+ 	return rc;
+ }
+ 
++int dlpar_cpu_readd(int cpu)
++{
++	struct device_node *dn;
++	struct device *dev;
++	u32 drc_index;
++	int rc;
++
++	dev = get_cpu_device(cpu);
++	dn = dev->of_node;
++
++	rc = of_property_read_u32(dn, "ibm,my-drc-index", &drc_index);
++
++	rc = dlpar_cpu_remove_by_index(drc_index);
++	if (!rc)
++		rc = dlpar_cpu_add(drc_index);
++
++	return rc;
++}
++
+ int dlpar_cpu(struct pseries_hp_errorlog *hp_elog)
+ {
+ 	u32 count, drc_index;
+diff --git a/arch/powerpc/platforms/pseries/pseries_energy.c b/arch/powerpc/platforms/pseries/pseries_energy.c
+index 6ed22127391b..921f12182f3e 100644
+--- a/arch/powerpc/platforms/pseries/pseries_energy.c
++++ b/arch/powerpc/platforms/pseries/pseries_energy.c
+@@ -77,18 +77,27 @@ static u32 cpu_to_drc_index(int cpu)
+ 
+ 		ret = drc.drc_index_start + (thread_index * drc.sequential_inc);
+ 	} else {
+-		const __be32 *indexes;
+-
+-		indexes = of_get_property(dn, "ibm,drc-indexes", NULL);
+-		if (indexes == NULL)
+-			goto err_of_node_put;
++		u32 nr_drc_indexes, thread_drc_index;
+ 
+ 		/*
+-		 * The first element indexes[0] is the number of drc_indexes
+-		 * returned in the list.  Hence thread_index+1 will get the
+-		 * drc_index corresponding to core number thread_index.
++		 * The first element of ibm,drc-indexes array is the
++		 * number of drc_indexes returned in the list.  Hence
++		 * thread_index+1 will get the drc_index corresponding
++		 * to core number thread_index.
+ 		 */
+-		ret = indexes[thread_index + 1];
++		rc = of_property_read_u32_index(dn, "ibm,drc-indexes",
++						0, &nr_drc_indexes);
++		if (rc)
++			goto err_of_node_put;
++
++		WARN_ON_ONCE(thread_index > nr_drc_indexes);
++		rc = of_property_read_u32_index(dn, "ibm,drc-indexes",
++						thread_index + 1,
++						&thread_drc_index);
++		if (rc)
++			goto err_of_node_put;
++
++		ret = thread_drc_index;
+ 	}
+ 
+ 	rc = 0;
+diff --git a/arch/powerpc/platforms/pseries/ras.c b/arch/powerpc/platforms/pseries/ras.c
+index d97d52772789..452dcfd7e5dd 100644
+--- a/arch/powerpc/platforms/pseries/ras.c
++++ b/arch/powerpc/platforms/pseries/ras.c
+@@ -550,6 +550,7 @@ static void pseries_print_mce_info(struct pt_regs *regs,
+ 		"UE",
+ 		"SLB",
+ 		"ERAT",
++		"Unknown",
+ 		"TLB",
+ 		"D-Cache",
+ 		"Unknown",
+diff --git a/arch/powerpc/xmon/ppc-dis.c b/arch/powerpc/xmon/ppc-dis.c
+index 9deea5ee13f6..27f1e6415036 100644
+--- a/arch/powerpc/xmon/ppc-dis.c
++++ b/arch/powerpc/xmon/ppc-dis.c
+@@ -158,7 +158,7 @@ int print_insn_powerpc (unsigned long insn, unsigned long memaddr)
+     dialect |= (PPC_OPCODE_POWER5 | PPC_OPCODE_POWER6 | PPC_OPCODE_POWER7
+ 		| PPC_OPCODE_POWER8 | PPC_OPCODE_POWER9 | PPC_OPCODE_HTM
+ 		| PPC_OPCODE_ALTIVEC | PPC_OPCODE_ALTIVEC2
+-		| PPC_OPCODE_VSX | PPC_OPCODE_VSX3),
++		| PPC_OPCODE_VSX | PPC_OPCODE_VSX3);
+ 
+   /* Get the major opcode of the insn.  */
+   opcode = NULL;
+diff --git a/arch/riscv/include/asm/syscall.h b/arch/riscv/include/asm/syscall.h
+index bba3da6ef157..6ea9e1804233 100644
+--- a/arch/riscv/include/asm/syscall.h
++++ b/arch/riscv/include/asm/syscall.h
+@@ -79,10 +79,11 @@ static inline void syscall_get_arguments(struct task_struct *task,
+ 	if (i == 0) {
+ 		args[0] = regs->orig_a0;
+ 		args++;
+-		i++;
+ 		n--;
++	} else {
++		i--;
+ 	}
+-	memcpy(args, &regs->a1 + i * sizeof(regs->a1), n * sizeof(args[0]));
++	memcpy(args, &regs->a1 + i, n * sizeof(args[0]));
+ }
+ 
+ static inline void syscall_set_arguments(struct task_struct *task,
+@@ -94,10 +95,11 @@ static inline void syscall_set_arguments(struct task_struct *task,
+         if (i == 0) {
+                 regs->orig_a0 = args[0];
+                 args++;
+-                i++;
+                 n--;
+-        }
+-	memcpy(&regs->a1 + i * sizeof(regs->a1), args, n * sizeof(regs->a0));
++	} else {
++		i--;
++	}
++	memcpy(&regs->a1 + i, args, n * sizeof(regs->a1));
+ }
+ 
+ static inline int syscall_get_arch(void)
+diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
+index d5d24889c3bc..c2b8c8c6c9be 100644
+--- a/arch/s390/include/asm/kvm_host.h
++++ b/arch/s390/include/asm/kvm_host.h
+@@ -878,7 +878,7 @@ static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
+ static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
+ static inline void kvm_arch_free_memslot(struct kvm *kvm,
+ 		struct kvm_memory_slot *free, struct kvm_memory_slot *dont) {}
+-static inline void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots) {}
++static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
+ static inline void kvm_arch_flush_shadow_all(struct kvm *kvm) {}
+ static inline void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
+ 		struct kvm_memory_slot *slot) {}
+diff --git a/arch/s390/kernel/perf_cpum_sf.c b/arch/s390/kernel/perf_cpum_sf.c
+index bfabeb1889cc..1266194afb02 100644
+--- a/arch/s390/kernel/perf_cpum_sf.c
++++ b/arch/s390/kernel/perf_cpum_sf.c
+@@ -1600,7 +1600,7 @@ static void aux_sdb_init(unsigned long sdb)
+ 
+ /*
+  * aux_buffer_setup() - Setup AUX buffer for diagnostic mode sampling
+- * @cpu:	On which to allocate, -1 means current
++ * @event:	Event the buffer is setup for, event->cpu == -1 means current
+  * @pages:	Array of pointers to buffer pages passed from perf core
+  * @nr_pages:	Total pages
+  * @snapshot:	Flag for snapshot mode
+@@ -1612,8 +1612,8 @@ static void aux_sdb_init(unsigned long sdb)
+  *
+  * Return the private AUX buffer structure if success or NULL if fails.
+  */
+-static void *aux_buffer_setup(int cpu, void **pages, int nr_pages,
+-			      bool snapshot)
++static void *aux_buffer_setup(struct perf_event *event, void **pages,
++			      int nr_pages, bool snapshot)
+ {
+ 	struct sf_buffer *sfb;
+ 	struct aux_buffer *aux;
+diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
+index 7ed90a759135..01a3f4964d57 100644
+--- a/arch/s390/kernel/setup.c
++++ b/arch/s390/kernel/setup.c
+@@ -369,7 +369,7 @@ void __init arch_call_rest_init(void)
+ 		: : [_frame] "a" (frame));
+ }
+ 
+-static void __init setup_lowcore(void)
++static void __init setup_lowcore_dat_off(void)
+ {
+ 	struct lowcore *lc;
+ 
+@@ -380,19 +380,16 @@ static void __init setup_lowcore(void)
+ 	lc = memblock_alloc_low(sizeof(*lc), sizeof(*lc));
+ 	lc->restart_psw.mask = PSW_KERNEL_BITS;
+ 	lc->restart_psw.addr = (unsigned long) restart_int_handler;
+-	lc->external_new_psw.mask = PSW_KERNEL_BITS |
+-		PSW_MASK_DAT | PSW_MASK_MCHECK;
++	lc->external_new_psw.mask = PSW_KERNEL_BITS | PSW_MASK_MCHECK;
+ 	lc->external_new_psw.addr = (unsigned long) ext_int_handler;
+ 	lc->svc_new_psw.mask = PSW_KERNEL_BITS |
+-		PSW_MASK_DAT | PSW_MASK_IO | PSW_MASK_EXT | PSW_MASK_MCHECK;
++		PSW_MASK_IO | PSW_MASK_EXT | PSW_MASK_MCHECK;
+ 	lc->svc_new_psw.addr = (unsigned long) system_call;
+-	lc->program_new_psw.mask = PSW_KERNEL_BITS |
+-		PSW_MASK_DAT | PSW_MASK_MCHECK;
++	lc->program_new_psw.mask = PSW_KERNEL_BITS | PSW_MASK_MCHECK;
+ 	lc->program_new_psw.addr = (unsigned long) pgm_check_handler;
+ 	lc->mcck_new_psw.mask = PSW_KERNEL_BITS;
+ 	lc->mcck_new_psw.addr = (unsigned long) mcck_int_handler;
+-	lc->io_new_psw.mask = PSW_KERNEL_BITS |
+-		PSW_MASK_DAT | PSW_MASK_MCHECK;
++	lc->io_new_psw.mask = PSW_KERNEL_BITS | PSW_MASK_MCHECK;
+ 	lc->io_new_psw.addr = (unsigned long) io_int_handler;
+ 	lc->clock_comparator = clock_comparator_max;
+ 	lc->nodat_stack = ((unsigned long) &init_thread_union)
+@@ -452,6 +449,16 @@ static void __init setup_lowcore(void)
+ 	lowcore_ptr[0] = lc;
+ }
+ 
++static void __init setup_lowcore_dat_on(void)
++{
++	__ctl_clear_bit(0, 28);
++	S390_lowcore.external_new_psw.mask |= PSW_MASK_DAT;
++	S390_lowcore.svc_new_psw.mask |= PSW_MASK_DAT;
++	S390_lowcore.program_new_psw.mask |= PSW_MASK_DAT;
++	S390_lowcore.io_new_psw.mask |= PSW_MASK_DAT;
++	__ctl_set_bit(0, 28);
++}
++
+ static struct resource code_resource = {
+ 	.name  = "Kernel code",
+ 	.flags = IORESOURCE_BUSY | IORESOURCE_SYSTEM_RAM,
+@@ -1072,7 +1079,7 @@ void __init setup_arch(char **cmdline_p)
+ #endif
+ 
+ 	setup_resources();
+-	setup_lowcore();
++	setup_lowcore_dat_off();
+ 	smp_fill_possible_mask();
+ 	cpu_detect_mhz_feature();
+         cpu_init();
+@@ -1085,6 +1092,12 @@ void __init setup_arch(char **cmdline_p)
+ 	 */
+         paging_init();
+ 
++	/*
++	 * After paging_init created the kernel page table, the new PSWs
++	 * in lowcore can now run with DAT enabled.
++	 */
++	setup_lowcore_dat_on();
++
+         /* Setup default console */
+ 	conmode_default();
+ 	set_preferred_console();
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 68261430fe6e..64d5a3327030 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -2221,14 +2221,8 @@ config RANDOMIZE_MEMORY_PHYSICAL_PADDING
+ 	   If unsure, leave at the default value.
+ 
+ config HOTPLUG_CPU
+-	bool "Support for hot-pluggable CPUs"
++	def_bool y
+ 	depends on SMP
+-	---help---
+-	  Say Y here to allow turning CPUs off and on. CPUs can be
+-	  controlled through /sys/devices/system/cpu.
+-	  ( Note: power management support will enable this option
+-	    automatically on SMP systems. )
+-	  Say N if you want to disable CPU hotplug.
+ 
+ config BOOTPARAM_HOTPLUG_CPU0
+ 	bool "Set default setting of cpu0_hotpluggable"
+diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile
+index 9b5adae9cc40..e2839b5c246c 100644
+--- a/arch/x86/boot/Makefile
++++ b/arch/x86/boot/Makefile
+@@ -100,7 +100,7 @@ $(obj)/zoffset.h: $(obj)/compressed/vmlinux FORCE
+ AFLAGS_header.o += -I$(objtree)/$(obj)
+ $(obj)/header.o: $(obj)/zoffset.h
+ 
+-LDFLAGS_setup.elf	:= -T
++LDFLAGS_setup.elf	:= -m elf_i386 -T
+ $(obj)/setup.elf: $(src)/setup.ld $(SETUP_OBJS) FORCE
+ 	$(call if_changed,ld)
+ 
+diff --git a/arch/x86/boot/compressed/pgtable_64.c b/arch/x86/boot/compressed/pgtable_64.c
+index 9e2157371491..f8debf7aeb4c 100644
+--- a/arch/x86/boot/compressed/pgtable_64.c
++++ b/arch/x86/boot/compressed/pgtable_64.c
+@@ -1,5 +1,7 @@
++#include <linux/efi.h>
+ #include <asm/e820/types.h>
+ #include <asm/processor.h>
++#include <asm/efi.h>
+ #include "pgtable.h"
+ #include "../string.h"
+ 
+@@ -37,9 +39,10 @@ int cmdline_find_option_bool(const char *option);
+ 
+ static unsigned long find_trampoline_placement(void)
+ {
+-	unsigned long bios_start, ebda_start;
++	unsigned long bios_start = 0, ebda_start = 0;
+ 	unsigned long trampoline_start;
+ 	struct boot_e820_entry *entry;
++	char *signature;
+ 	int i;
+ 
+ 	/*
+@@ -47,8 +50,18 @@ static unsigned long find_trampoline_placement(void)
+ 	 * This code is based on reserve_bios_regions().
+ 	 */
+ 
+-	ebda_start = *(unsigned short *)0x40e << 4;
+-	bios_start = *(unsigned short *)0x413 << 10;
++	/*
++	 * EFI systems may not provide legacy ROM. The memory may not be mapped
++	 * at all.
++	 *
++	 * Only look for values in the legacy ROM for non-EFI systems.
++	 */
++	signature = (char *)&boot_params->efi_info.efi_loader_signature;
++	if (strncmp(signature, EFI32_LOADER_SIGNATURE, 4) &&
++	    strncmp(signature, EFI64_LOADER_SIGNATURE, 4)) {
++		ebda_start = *(unsigned short *)0x40e << 4;
++		bios_start = *(unsigned short *)0x413 << 10;
++	}
+ 
+ 	if (bios_start < BIOS_START_MIN || bios_start > BIOS_START_MAX)
+ 		bios_start = BIOS_START_MAX;
+diff --git a/arch/x86/crypto/aegis128-aesni-glue.c b/arch/x86/crypto/aegis128-aesni-glue.c
+index 2a356b948720..3ea71b871813 100644
+--- a/arch/x86/crypto/aegis128-aesni-glue.c
++++ b/arch/x86/crypto/aegis128-aesni-glue.c
+@@ -119,31 +119,20 @@ static void crypto_aegis128_aesni_process_ad(
+ }
+ 
+ static void crypto_aegis128_aesni_process_crypt(
+-		struct aegis_state *state, struct aead_request *req,
++		struct aegis_state *state, struct skcipher_walk *walk,
+ 		const struct aegis_crypt_ops *ops)
+ {
+-	struct skcipher_walk walk;
+-	u8 *src, *dst;
+-	unsigned int chunksize, base;
+-
+-	ops->skcipher_walk_init(&walk, req, false);
+-
+-	while (walk.nbytes) {
+-		src = walk.src.virt.addr;
+-		dst = walk.dst.virt.addr;
+-		chunksize = walk.nbytes;
+-
+-		ops->crypt_blocks(state, chunksize, src, dst);
+-
+-		base = chunksize & ~(AEGIS128_BLOCK_SIZE - 1);
+-		src += base;
+-		dst += base;
+-		chunksize &= AEGIS128_BLOCK_SIZE - 1;
+-
+-		if (chunksize > 0)
+-			ops->crypt_tail(state, chunksize, src, dst);
++	while (walk->nbytes >= AEGIS128_BLOCK_SIZE) {
++		ops->crypt_blocks(state,
++				  round_down(walk->nbytes, AEGIS128_BLOCK_SIZE),
++				  walk->src.virt.addr, walk->dst.virt.addr);
++		skcipher_walk_done(walk, walk->nbytes % AEGIS128_BLOCK_SIZE);
++	}
+ 
+-		skcipher_walk_done(&walk, 0);
++	if (walk->nbytes) {
++		ops->crypt_tail(state, walk->nbytes, walk->src.virt.addr,
++				walk->dst.virt.addr);
++		skcipher_walk_done(walk, 0);
+ 	}
+ }
+ 
+@@ -186,13 +175,16 @@ static void crypto_aegis128_aesni_crypt(struct aead_request *req,
+ {
+ 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ 	struct aegis_ctx *ctx = crypto_aegis128_aesni_ctx(tfm);
++	struct skcipher_walk walk;
+ 	struct aegis_state state;
+ 
++	ops->skcipher_walk_init(&walk, req, true);
++
+ 	kernel_fpu_begin();
+ 
+ 	crypto_aegis128_aesni_init(&state, ctx->key.bytes, req->iv);
+ 	crypto_aegis128_aesni_process_ad(&state, req->src, req->assoclen);
+-	crypto_aegis128_aesni_process_crypt(&state, req, ops);
++	crypto_aegis128_aesni_process_crypt(&state, &walk, ops);
+ 	crypto_aegis128_aesni_final(&state, tag_xor, req->assoclen, cryptlen);
+ 
+ 	kernel_fpu_end();
+diff --git a/arch/x86/crypto/aegis128l-aesni-glue.c b/arch/x86/crypto/aegis128l-aesni-glue.c
+index dbe8bb980da1..1b1b39c66c5e 100644
+--- a/arch/x86/crypto/aegis128l-aesni-glue.c
++++ b/arch/x86/crypto/aegis128l-aesni-glue.c
+@@ -119,31 +119,20 @@ static void crypto_aegis128l_aesni_process_ad(
+ }
+ 
+ static void crypto_aegis128l_aesni_process_crypt(
+-		struct aegis_state *state, struct aead_request *req,
++		struct aegis_state *state, struct skcipher_walk *walk,
+ 		const struct aegis_crypt_ops *ops)
+ {
+-	struct skcipher_walk walk;
+-	u8 *src, *dst;
+-	unsigned int chunksize, base;
+-
+-	ops->skcipher_walk_init(&walk, req, false);
+-
+-	while (walk.nbytes) {
+-		src = walk.src.virt.addr;
+-		dst = walk.dst.virt.addr;
+-		chunksize = walk.nbytes;
+-
+-		ops->crypt_blocks(state, chunksize, src, dst);
+-
+-		base = chunksize & ~(AEGIS128L_BLOCK_SIZE - 1);
+-		src += base;
+-		dst += base;
+-		chunksize &= AEGIS128L_BLOCK_SIZE - 1;
+-
+-		if (chunksize > 0)
+-			ops->crypt_tail(state, chunksize, src, dst);
++	while (walk->nbytes >= AEGIS128L_BLOCK_SIZE) {
++		ops->crypt_blocks(state, round_down(walk->nbytes,
++						    AEGIS128L_BLOCK_SIZE),
++				  walk->src.virt.addr, walk->dst.virt.addr);
++		skcipher_walk_done(walk, walk->nbytes % AEGIS128L_BLOCK_SIZE);
++	}
+ 
+-		skcipher_walk_done(&walk, 0);
++	if (walk->nbytes) {
++		ops->crypt_tail(state, walk->nbytes, walk->src.virt.addr,
++				walk->dst.virt.addr);
++		skcipher_walk_done(walk, 0);
+ 	}
+ }
+ 
+@@ -186,13 +175,16 @@ static void crypto_aegis128l_aesni_crypt(struct aead_request *req,
+ {
+ 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ 	struct aegis_ctx *ctx = crypto_aegis128l_aesni_ctx(tfm);
++	struct skcipher_walk walk;
+ 	struct aegis_state state;
+ 
++	ops->skcipher_walk_init(&walk, req, true);
++
+ 	kernel_fpu_begin();
+ 
+ 	crypto_aegis128l_aesni_init(&state, ctx->key.bytes, req->iv);
+ 	crypto_aegis128l_aesni_process_ad(&state, req->src, req->assoclen);
+-	crypto_aegis128l_aesni_process_crypt(&state, req, ops);
++	crypto_aegis128l_aesni_process_crypt(&state, &walk, ops);
+ 	crypto_aegis128l_aesni_final(&state, tag_xor, req->assoclen, cryptlen);
+ 
+ 	kernel_fpu_end();
+diff --git a/arch/x86/crypto/aegis256-aesni-glue.c b/arch/x86/crypto/aegis256-aesni-glue.c
+index 8bebda2de92f..6227ca3220a0 100644
+--- a/arch/x86/crypto/aegis256-aesni-glue.c
++++ b/arch/x86/crypto/aegis256-aesni-glue.c
+@@ -119,31 +119,20 @@ static void crypto_aegis256_aesni_process_ad(
+ }
+ 
+ static void crypto_aegis256_aesni_process_crypt(
+-		struct aegis_state *state, struct aead_request *req,
++		struct aegis_state *state, struct skcipher_walk *walk,
+ 		const struct aegis_crypt_ops *ops)
+ {
+-	struct skcipher_walk walk;
+-	u8 *src, *dst;
+-	unsigned int chunksize, base;
+-
+-	ops->skcipher_walk_init(&walk, req, false);
+-
+-	while (walk.nbytes) {
+-		src = walk.src.virt.addr;
+-		dst = walk.dst.virt.addr;
+-		chunksize = walk.nbytes;
+-
+-		ops->crypt_blocks(state, chunksize, src, dst);
+-
+-		base = chunksize & ~(AEGIS256_BLOCK_SIZE - 1);
+-		src += base;
+-		dst += base;
+-		chunksize &= AEGIS256_BLOCK_SIZE - 1;
+-
+-		if (chunksize > 0)
+-			ops->crypt_tail(state, chunksize, src, dst);
++	while (walk->nbytes >= AEGIS256_BLOCK_SIZE) {
++		ops->crypt_blocks(state,
++				  round_down(walk->nbytes, AEGIS256_BLOCK_SIZE),
++				  walk->src.virt.addr, walk->dst.virt.addr);
++		skcipher_walk_done(walk, walk->nbytes % AEGIS256_BLOCK_SIZE);
++	}
+ 
+-		skcipher_walk_done(&walk, 0);
++	if (walk->nbytes) {
++		ops->crypt_tail(state, walk->nbytes, walk->src.virt.addr,
++				walk->dst.virt.addr);
++		skcipher_walk_done(walk, 0);
+ 	}
+ }
+ 
+@@ -186,13 +175,16 @@ static void crypto_aegis256_aesni_crypt(struct aead_request *req,
+ {
+ 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ 	struct aegis_ctx *ctx = crypto_aegis256_aesni_ctx(tfm);
++	struct skcipher_walk walk;
+ 	struct aegis_state state;
+ 
++	ops->skcipher_walk_init(&walk, req, true);
++
+ 	kernel_fpu_begin();
+ 
+ 	crypto_aegis256_aesni_init(&state, ctx->key, req->iv);
+ 	crypto_aegis256_aesni_process_ad(&state, req->src, req->assoclen);
+-	crypto_aegis256_aesni_process_crypt(&state, req, ops);
++	crypto_aegis256_aesni_process_crypt(&state, &walk, ops);
+ 	crypto_aegis256_aesni_final(&state, tag_xor, req->assoclen, cryptlen);
+ 
+ 	kernel_fpu_end();
+diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
+index 1321700d6647..ae30c8b6ec4d 100644
+--- a/arch/x86/crypto/aesni-intel_glue.c
++++ b/arch/x86/crypto/aesni-intel_glue.c
+@@ -821,11 +821,14 @@ static int gcmaes_crypt_by_sg(bool enc, struct aead_request *req,
+ 		scatterwalk_map_and_copy(assoc, req->src, 0, assoclen, 0);
+ 	}
+ 
+-	src_sg = scatterwalk_ffwd(src_start, req->src, req->assoclen);
+-	scatterwalk_start(&src_sg_walk, src_sg);
+-	if (req->src != req->dst) {
+-		dst_sg = scatterwalk_ffwd(dst_start, req->dst, req->assoclen);
+-		scatterwalk_start(&dst_sg_walk, dst_sg);
++	if (left) {
++		src_sg = scatterwalk_ffwd(src_start, req->src, req->assoclen);
++		scatterwalk_start(&src_sg_walk, src_sg);
++		if (req->src != req->dst) {
++			dst_sg = scatterwalk_ffwd(dst_start, req->dst,
++						  req->assoclen);
++			scatterwalk_start(&dst_sg_walk, dst_sg);
++		}
+ 	}
+ 
+ 	kernel_fpu_begin();
+diff --git a/arch/x86/crypto/morus1280_glue.c b/arch/x86/crypto/morus1280_glue.c
+index 0dccdda1eb3a..7e600f8bcdad 100644
+--- a/arch/x86/crypto/morus1280_glue.c
++++ b/arch/x86/crypto/morus1280_glue.c
+@@ -85,31 +85,20 @@ static void crypto_morus1280_glue_process_ad(
+ 
+ static void crypto_morus1280_glue_process_crypt(struct morus1280_state *state,
+ 						struct morus1280_ops ops,
+-						struct aead_request *req)
++						struct skcipher_walk *walk)
+ {
+-	struct skcipher_walk walk;
+-	u8 *cursor_src, *cursor_dst;
+-	unsigned int chunksize, base;
+-
+-	ops.skcipher_walk_init(&walk, req, false);
+-
+-	while (walk.nbytes) {
+-		cursor_src = walk.src.virt.addr;
+-		cursor_dst = walk.dst.virt.addr;
+-		chunksize = walk.nbytes;
+-
+-		ops.crypt_blocks(state, cursor_src, cursor_dst, chunksize);
+-
+-		base = chunksize & ~(MORUS1280_BLOCK_SIZE - 1);
+-		cursor_src += base;
+-		cursor_dst += base;
+-		chunksize &= MORUS1280_BLOCK_SIZE - 1;
+-
+-		if (chunksize > 0)
+-			ops.crypt_tail(state, cursor_src, cursor_dst,
+-				       chunksize);
++	while (walk->nbytes >= MORUS1280_BLOCK_SIZE) {
++		ops.crypt_blocks(state, walk->src.virt.addr,
++				 walk->dst.virt.addr,
++				 round_down(walk->nbytes,
++					    MORUS1280_BLOCK_SIZE));
++		skcipher_walk_done(walk, walk->nbytes % MORUS1280_BLOCK_SIZE);
++	}
+ 
+-		skcipher_walk_done(&walk, 0);
++	if (walk->nbytes) {
++		ops.crypt_tail(state, walk->src.virt.addr, walk->dst.virt.addr,
++			       walk->nbytes);
++		skcipher_walk_done(walk, 0);
+ 	}
+ }
+ 
+@@ -147,12 +136,15 @@ static void crypto_morus1280_glue_crypt(struct aead_request *req,
+ 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ 	struct morus1280_ctx *ctx = crypto_aead_ctx(tfm);
+ 	struct morus1280_state state;
++	struct skcipher_walk walk;
++
++	ops.skcipher_walk_init(&walk, req, true);
+ 
+ 	kernel_fpu_begin();
+ 
+ 	ctx->ops->init(&state, &ctx->key, req->iv);
+ 	crypto_morus1280_glue_process_ad(&state, ctx->ops, req->src, req->assoclen);
+-	crypto_morus1280_glue_process_crypt(&state, ops, req);
++	crypto_morus1280_glue_process_crypt(&state, ops, &walk);
+ 	ctx->ops->final(&state, tag_xor, req->assoclen, cryptlen);
+ 
+ 	kernel_fpu_end();
+diff --git a/arch/x86/crypto/morus640_glue.c b/arch/x86/crypto/morus640_glue.c
+index 7b58fe4d9bd1..cb3a81732016 100644
+--- a/arch/x86/crypto/morus640_glue.c
++++ b/arch/x86/crypto/morus640_glue.c
+@@ -85,31 +85,19 @@ static void crypto_morus640_glue_process_ad(
+ 
+ static void crypto_morus640_glue_process_crypt(struct morus640_state *state,
+ 					       struct morus640_ops ops,
+-					       struct aead_request *req)
++					       struct skcipher_walk *walk)
+ {
+-	struct skcipher_walk walk;
+-	u8 *cursor_src, *cursor_dst;
+-	unsigned int chunksize, base;
+-
+-	ops.skcipher_walk_init(&walk, req, false);
+-
+-	while (walk.nbytes) {
+-		cursor_src = walk.src.virt.addr;
+-		cursor_dst = walk.dst.virt.addr;
+-		chunksize = walk.nbytes;
+-
+-		ops.crypt_blocks(state, cursor_src, cursor_dst, chunksize);
+-
+-		base = chunksize & ~(MORUS640_BLOCK_SIZE - 1);
+-		cursor_src += base;
+-		cursor_dst += base;
+-		chunksize &= MORUS640_BLOCK_SIZE - 1;
+-
+-		if (chunksize > 0)
+-			ops.crypt_tail(state, cursor_src, cursor_dst,
+-				       chunksize);
++	while (walk->nbytes >= MORUS640_BLOCK_SIZE) {
++		ops.crypt_blocks(state, walk->src.virt.addr,
++				 walk->dst.virt.addr,
++				 round_down(walk->nbytes, MORUS640_BLOCK_SIZE));
++		skcipher_walk_done(walk, walk->nbytes % MORUS640_BLOCK_SIZE);
++	}
+ 
+-		skcipher_walk_done(&walk, 0);
++	if (walk->nbytes) {
++		ops.crypt_tail(state, walk->src.virt.addr, walk->dst.virt.addr,
++			       walk->nbytes);
++		skcipher_walk_done(walk, 0);
+ 	}
+ }
+ 
+@@ -143,12 +131,15 @@ static void crypto_morus640_glue_crypt(struct aead_request *req,
+ 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ 	struct morus640_ctx *ctx = crypto_aead_ctx(tfm);
+ 	struct morus640_state state;
++	struct skcipher_walk walk;
++
++	ops.skcipher_walk_init(&walk, req, true);
+ 
+ 	kernel_fpu_begin();
+ 
+ 	ctx->ops->init(&state, &ctx->key, req->iv);
+ 	crypto_morus640_glue_process_ad(&state, ctx->ops, req->src, req->assoclen);
+-	crypto_morus640_glue_process_crypt(&state, ops, req);
++	crypto_morus640_glue_process_crypt(&state, ops, &walk);
+ 	ctx->ops->final(&state, tag_xor, req->assoclen, cryptlen);
+ 
+ 	kernel_fpu_end();
+diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
+index 7d2d7c801dba..0ecfac84ba91 100644
+--- a/arch/x86/events/amd/core.c
++++ b/arch/x86/events/amd/core.c
+@@ -3,10 +3,14 @@
+ #include <linux/types.h>
+ #include <linux/init.h>
+ #include <linux/slab.h>
++#include <linux/delay.h>
+ #include <asm/apicdef.h>
++#include <asm/nmi.h>
+ 
+ #include "../perf_event.h"
+ 
++static DEFINE_PER_CPU(unsigned int, perf_nmi_counter);
++
+ static __initconst const u64 amd_hw_cache_event_ids
+ 				[PERF_COUNT_HW_CACHE_MAX]
+ 				[PERF_COUNT_HW_CACHE_OP_MAX]
+@@ -429,6 +433,132 @@ static void amd_pmu_cpu_dead(int cpu)
+ 	}
+ }
+ 
++/*
++ * When a PMC counter overflows, an NMI is used to process the event and
++ * reset the counter. NMI latency can result in the counter being updated
++ * before the NMI can run, which can result in what appear to be spurious
++ * NMIs. This function is intended to wait for the NMI to run and reset
++ * the counter to avoid possible unhandled NMI messages.
++ */
++#define OVERFLOW_WAIT_COUNT	50
++
++static void amd_pmu_wait_on_overflow(int idx)
++{
++	unsigned int i;
++	u64 counter;
++
++	/*
++	 * Wait for the counter to be reset if it has overflowed. This loop
++	 * should exit very, very quickly, but just in case, don't wait
++	 * forever...
++	 */
++	for (i = 0; i < OVERFLOW_WAIT_COUNT; i++) {
++		rdmsrl(x86_pmu_event_addr(idx), counter);
++		if (counter & (1ULL << (x86_pmu.cntval_bits - 1)))
++			break;
++
++		/* Might be in IRQ context, so can't sleep */
++		udelay(1);
++	}
++}
++
++static void amd_pmu_disable_all(void)
++{
++	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
++	int idx;
++
++	x86_pmu_disable_all();
++
++	/*
++	 * This shouldn't be called from NMI context, but add a safeguard here
++	 * to return, since if we're in NMI context we can't wait for an NMI
++	 * to reset an overflowed counter value.
++	 */
++	if (in_nmi())
++		return;
++
++	/*
++	 * Check each counter for overflow and wait for it to be reset by the
++	 * NMI if it has overflowed. This relies on the fact that all active
++	 * counters are always enabled when this function is called and
++	 * ARCH_PERFMON_EVENTSEL_INT is always set.
++	 */
++	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
++		if (!test_bit(idx, cpuc->active_mask))
++			continue;
++
++		amd_pmu_wait_on_overflow(idx);
++	}
++}
++
++static void amd_pmu_disable_event(struct perf_event *event)
++{
++	x86_pmu_disable_event(event);
++
++	/*
++	 * This can be called from NMI context (via x86_pmu_stop). The counter
++	 * may have overflowed, but either way, we'll never see it get reset
++	 * by the NMI if we're already in the NMI. And the NMI latency support
++	 * below will take care of any pending NMI that might have been
++	 * generated by the overflow.
++	 */
++	if (in_nmi())
++		return;
++
++	amd_pmu_wait_on_overflow(event->hw.idx);
++}
++
++/*
++ * Because of NMI latency, if multiple PMC counters are active or other sources
++ * of NMIs are received, the perf NMI handler can handle one or more overflowed
++ * PMC counters outside of the NMI associated with the PMC overflow. If the NMI
++ * doesn't arrive at the LAPIC in time to become a pending NMI, then the kernel
++ * back-to-back NMI support won't be active. This PMC handler needs to take into
++ * account that this can occur, otherwise this could result in unknown NMI
++ * messages being issued. Examples of this are PMC overflow while in the NMI
++ * handler when multiple PMCs are active or PMC overflow while handling some
++ * other source of an NMI.
++ *
++ * Attempt to mitigate this by using the number of active PMCs to determine
++ * whether to return NMI_HANDLED if the perf NMI handler did not handle/reset
++ * any PMCs. The per-CPU perf_nmi_counter variable is set to a minimum of the
++ * number of active PMCs or 2. The value of 2 is used in case an NMI does not
++ * arrive at the LAPIC in time to be collapsed into an already pending NMI.
++ */
++static int amd_pmu_handle_irq(struct pt_regs *regs)
++{
++	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
++	int active, handled;
++
++	/*
++	 * Obtain the active count before calling x86_pmu_handle_irq() since
++	 * it is possible that x86_pmu_handle_irq() may make a counter
++	 * inactive (through x86_pmu_stop).
++	 */
++	active = __bitmap_weight(cpuc->active_mask, X86_PMC_IDX_MAX);
++
++	/* Process any counter overflows */
++	handled = x86_pmu_handle_irq(regs);
++
++	/*
++	 * If a counter was handled, record the number of possible remaining
++	 * NMIs that can occur.
++	 */
++	if (handled) {
++		this_cpu_write(perf_nmi_counter,
++			       min_t(unsigned int, 2, active));
++
++		return handled;
++	}
++
++	if (!this_cpu_read(perf_nmi_counter))
++		return NMI_DONE;
++
++	this_cpu_dec(perf_nmi_counter);
++
++	return NMI_HANDLED;
++}
++
+ static struct event_constraint *
+ amd_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
+ 			  struct perf_event *event)
+@@ -621,11 +751,11 @@ static ssize_t amd_event_sysfs_show(char *page, u64 config)
+ 
+ static __initconst const struct x86_pmu amd_pmu = {
+ 	.name			= "AMD",
+-	.handle_irq		= x86_pmu_handle_irq,
+-	.disable_all		= x86_pmu_disable_all,
++	.handle_irq		= amd_pmu_handle_irq,
++	.disable_all		= amd_pmu_disable_all,
+ 	.enable_all		= x86_pmu_enable_all,
+ 	.enable			= x86_pmu_enable_event,
+-	.disable		= x86_pmu_disable_event,
++	.disable		= amd_pmu_disable_event,
+ 	.hw_config		= amd_pmu_hw_config,
+ 	.schedule_events	= x86_schedule_events,
+ 	.eventsel		= MSR_K7_EVNTSEL0,
+@@ -732,7 +862,7 @@ void amd_pmu_enable_virt(void)
+ 	cpuc->perf_ctr_virt_mask = 0;
+ 
+ 	/* Reload all events */
+-	x86_pmu_disable_all();
++	amd_pmu_disable_all();
+ 	x86_pmu_enable_all(0);
+ }
+ EXPORT_SYMBOL_GPL(amd_pmu_enable_virt);
+@@ -750,7 +880,7 @@ void amd_pmu_disable_virt(void)
+ 	cpuc->perf_ctr_virt_mask = AMD64_EVENTSEL_HOSTONLY;
+ 
+ 	/* Reload all events */
+-	x86_pmu_disable_all();
++	amd_pmu_disable_all();
+ 	x86_pmu_enable_all(0);
+ }
+ EXPORT_SYMBOL_GPL(amd_pmu_disable_virt);
+diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
+index b684f0294f35..81911e11a15d 100644
+--- a/arch/x86/events/core.c
++++ b/arch/x86/events/core.c
+@@ -1349,8 +1349,9 @@ void x86_pmu_stop(struct perf_event *event, int flags)
+ 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+ 	struct hw_perf_event *hwc = &event->hw;
+ 
+-	if (__test_and_clear_bit(hwc->idx, cpuc->active_mask)) {
++	if (test_bit(hwc->idx, cpuc->active_mask)) {
+ 		x86_pmu.disable(event);
++		__clear_bit(hwc->idx, cpuc->active_mask);
+ 		cpuc->events[hwc->idx] = NULL;
+ 		WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED);
+ 		hwc->state |= PERF_HES_STOPPED;
+@@ -1447,16 +1448,8 @@ int x86_pmu_handle_irq(struct pt_regs *regs)
+ 	apic_write(APIC_LVTPC, APIC_DM_NMI);
+ 
+ 	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
+-		if (!test_bit(idx, cpuc->active_mask)) {
+-			/*
+-			 * Though we deactivated the counter some cpus
+-			 * might still deliver spurious interrupts still
+-			 * in flight. Catch them:
+-			 */
+-			if (__test_and_clear_bit(idx, cpuc->running))
+-				handled++;
++		if (!test_bit(idx, cpuc->active_mask))
+ 			continue;
+-		}
+ 
+ 		event = cpuc->events[idx];
+ 
+@@ -1995,7 +1988,7 @@ static int x86_pmu_commit_txn(struct pmu *pmu)
+  */
+ static void free_fake_cpuc(struct cpu_hw_events *cpuc)
+ {
+-	kfree(cpuc->shared_regs);
++	intel_cpuc_finish(cpuc);
+ 	kfree(cpuc);
+ }
+ 
+@@ -2007,14 +2000,11 @@ static struct cpu_hw_events *allocate_fake_cpuc(void)
+ 	cpuc = kzalloc(sizeof(*cpuc), GFP_KERNEL);
+ 	if (!cpuc)
+ 		return ERR_PTR(-ENOMEM);
+-
+-	/* only needed, if we have extra_regs */
+-	if (x86_pmu.extra_regs) {
+-		cpuc->shared_regs = allocate_shared_regs(cpu);
+-		if (!cpuc->shared_regs)
+-			goto error;
+-	}
+ 	cpuc->is_fake = 1;
++
++	if (intel_cpuc_prepare(cpuc, cpu))
++		goto error;
++
+ 	return cpuc;
+ error:
+ 	free_fake_cpuc(cpuc);
+diff --git a/arch/x86/events/intel/bts.c b/arch/x86/events/intel/bts.c
+index a01ef1b0f883..7cdd7b13bbda 100644
+--- a/arch/x86/events/intel/bts.c
++++ b/arch/x86/events/intel/bts.c
+@@ -77,10 +77,12 @@ static size_t buf_size(struct page *page)
+ }
+ 
+ static void *
+-bts_buffer_setup_aux(int cpu, void **pages, int nr_pages, bool overwrite)
++bts_buffer_setup_aux(struct perf_event *event, void **pages,
++		     int nr_pages, bool overwrite)
+ {
+ 	struct bts_buffer *buf;
+ 	struct page *page;
++	int cpu = event->cpu;
+ 	int node = (cpu == -1) ? cpu : cpu_to_node(cpu);
+ 	unsigned long offset;
+ 	size_t size = nr_pages << PAGE_SHIFT;
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index 730978dff63f..2480feb07df3 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -1999,6 +1999,39 @@ static void intel_pmu_nhm_enable_all(int added)
+ 	intel_pmu_enable_all(added);
+ }
+ 
++static void intel_set_tfa(struct cpu_hw_events *cpuc, bool on)
++{
++	u64 val = on ? MSR_TFA_RTM_FORCE_ABORT : 0;
++
++	if (cpuc->tfa_shadow != val) {
++		cpuc->tfa_shadow = val;
++		wrmsrl(MSR_TSX_FORCE_ABORT, val);
++	}
++}
++
++static void intel_tfa_commit_scheduling(struct cpu_hw_events *cpuc, int idx, int cntr)
++{
++	/*
++	 * We're going to use PMC3, make sure TFA is set before we touch it.
++	 */
++	if (cntr == 3 && !cpuc->is_fake)
++		intel_set_tfa(cpuc, true);
++}
++
++static void intel_tfa_pmu_enable_all(int added)
++{
++	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
++
++	/*
++	 * If we find PMC3 is no longer used when we enable the PMU, we can
++	 * clear TFA.
++	 */
++	if (!test_bit(3, cpuc->active_mask))
++		intel_set_tfa(cpuc, false);
++
++	intel_pmu_enable_all(added);
++}
++
+ static void enable_counter_freeze(void)
+ {
+ 	update_debugctlmsr(get_debugctlmsr() |
+@@ -2768,6 +2801,35 @@ intel_stop_scheduling(struct cpu_hw_events *cpuc)
+ 	raw_spin_unlock(&excl_cntrs->lock);
+ }
+ 
++static struct event_constraint *
++dyn_constraint(struct cpu_hw_events *cpuc, struct event_constraint *c, int idx)
++{
++	WARN_ON_ONCE(!cpuc->constraint_list);
++
++	if (!(c->flags & PERF_X86_EVENT_DYNAMIC)) {
++		struct event_constraint *cx;
++
++		/*
++		 * grab pre-allocated constraint entry
++		 */
++		cx = &cpuc->constraint_list[idx];
++
++		/*
++		 * initialize dynamic constraint
++		 * with static constraint
++		 */
++		*cx = *c;
++
++		/*
++		 * mark constraint as dynamic
++		 */
++		cx->flags |= PERF_X86_EVENT_DYNAMIC;
++		c = cx;
++	}
++
++	return c;
++}
++
+ static struct event_constraint *
+ intel_get_excl_constraints(struct cpu_hw_events *cpuc, struct perf_event *event,
+ 			   int idx, struct event_constraint *c)
+@@ -2798,27 +2860,7 @@ intel_get_excl_constraints(struct cpu_hw_events *cpuc, struct perf_event *event,
+ 	 * only needed when constraint has not yet
+ 	 * been cloned (marked dynamic)
+ 	 */
+-	if (!(c->flags & PERF_X86_EVENT_DYNAMIC)) {
+-		struct event_constraint *cx;
+-
+-		/*
+-		 * grab pre-allocated constraint entry
+-		 */
+-		cx = &cpuc->constraint_list[idx];
+-
+-		/*
+-		 * initialize dynamic constraint
+-		 * with static constraint
+-		 */
+-		*cx = *c;
+-
+-		/*
+-		 * mark constraint as dynamic, so we
+-		 * can free it later on
+-		 */
+-		cx->flags |= PERF_X86_EVENT_DYNAMIC;
+-		c = cx;
+-	}
++	c = dyn_constraint(cpuc, c, idx);
+ 
+ 	/*
+ 	 * From here on, the constraint is dynamic.
+@@ -3345,6 +3387,26 @@ glp_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
+ 	return c;
+ }
+ 
++static bool allow_tsx_force_abort = true;
++
++static struct event_constraint *
++tfa_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
++			  struct perf_event *event)
++{
++	struct event_constraint *c = hsw_get_event_constraints(cpuc, idx, event);
++
++	/*
++	 * Without TFA we must not use PMC3.
++	 */
++	if (!allow_tsx_force_abort && test_bit(3, c->idxmsk) && idx >= 0) {
++		c = dyn_constraint(cpuc, c, idx);
++		c->idxmsk64 &= ~(1ULL << 3);
++		c->weight--;
++	}
++
++	return c;
++}
++
+ /*
+  * Broadwell:
+  *
+@@ -3398,7 +3460,7 @@ ssize_t intel_event_sysfs_show(char *page, u64 config)
+ 	return x86_event_sysfs_show(page, config, event);
+ }
+ 
+-struct intel_shared_regs *allocate_shared_regs(int cpu)
++static struct intel_shared_regs *allocate_shared_regs(int cpu)
+ {
+ 	struct intel_shared_regs *regs;
+ 	int i;
+@@ -3430,23 +3492,24 @@ static struct intel_excl_cntrs *allocate_excl_cntrs(int cpu)
+ 	return c;
+ }
+ 
+-static int intel_pmu_cpu_prepare(int cpu)
+-{
+-	struct cpu_hw_events *cpuc = &per_cpu(cpu_hw_events, cpu);
+ 
++int intel_cpuc_prepare(struct cpu_hw_events *cpuc, int cpu)
++{
+ 	if (x86_pmu.extra_regs || x86_pmu.lbr_sel_map) {
+ 		cpuc->shared_regs = allocate_shared_regs(cpu);
+ 		if (!cpuc->shared_regs)
+ 			goto err;
+ 	}
+ 
+-	if (x86_pmu.flags & PMU_FL_EXCL_CNTRS) {
++	if (x86_pmu.flags & (PMU_FL_EXCL_CNTRS | PMU_FL_TFA)) {
+ 		size_t sz = X86_PMC_IDX_MAX * sizeof(struct event_constraint);
+ 
+-		cpuc->constraint_list = kzalloc(sz, GFP_KERNEL);
++		cpuc->constraint_list = kzalloc_node(sz, GFP_KERNEL, cpu_to_node(cpu));
+ 		if (!cpuc->constraint_list)
+ 			goto err_shared_regs;
++	}
+ 
++	if (x86_pmu.flags & PMU_FL_EXCL_CNTRS) {
+ 		cpuc->excl_cntrs = allocate_excl_cntrs(cpu);
+ 		if (!cpuc->excl_cntrs)
+ 			goto err_constraint_list;
+@@ -3468,6 +3531,11 @@ err:
+ 	return -ENOMEM;
+ }
+ 
++static int intel_pmu_cpu_prepare(int cpu)
++{
++	return intel_cpuc_prepare(&per_cpu(cpu_hw_events, cpu), cpu);
++}
++
+ static void flip_smm_bit(void *data)
+ {
+ 	unsigned long set = *(unsigned long *)data;
+@@ -3542,9 +3610,8 @@ static void intel_pmu_cpu_starting(int cpu)
+ 	}
+ }
+ 
+-static void free_excl_cntrs(int cpu)
++static void free_excl_cntrs(struct cpu_hw_events *cpuc)
+ {
+-	struct cpu_hw_events *cpuc = &per_cpu(cpu_hw_events, cpu);
+ 	struct intel_excl_cntrs *c;
+ 
+ 	c = cpuc->excl_cntrs;
+@@ -3552,9 +3619,10 @@ static void free_excl_cntrs(int cpu)
+ 		if (c->core_id == -1 || --c->refcnt == 0)
+ 			kfree(c);
+ 		cpuc->excl_cntrs = NULL;
+-		kfree(cpuc->constraint_list);
+-		cpuc->constraint_list = NULL;
+ 	}
++
++	kfree(cpuc->constraint_list);
++	cpuc->constraint_list = NULL;
+ }
+ 
+ static void intel_pmu_cpu_dying(int cpu)
+@@ -3565,9 +3633,8 @@ static void intel_pmu_cpu_dying(int cpu)
+ 		disable_counter_freeze();
+ }
+ 
+-static void intel_pmu_cpu_dead(int cpu)
++void intel_cpuc_finish(struct cpu_hw_events *cpuc)
+ {
+-	struct cpu_hw_events *cpuc = &per_cpu(cpu_hw_events, cpu);
+ 	struct intel_shared_regs *pc;
+ 
+ 	pc = cpuc->shared_regs;
+@@ -3577,7 +3644,12 @@ static void intel_pmu_cpu_dead(int cpu)
+ 		cpuc->shared_regs = NULL;
+ 	}
+ 
+-	free_excl_cntrs(cpu);
++	free_excl_cntrs(cpuc);
++}
++
++static void intel_pmu_cpu_dead(int cpu)
++{
++	intel_cpuc_finish(&per_cpu(cpu_hw_events, cpu));
+ }
+ 
+ static void intel_pmu_sched_task(struct perf_event_context *ctx,
+@@ -4070,8 +4142,11 @@ static struct attribute *intel_pmu_caps_attrs[] = {
+        NULL
+ };
+ 
++static DEVICE_BOOL_ATTR(allow_tsx_force_abort, 0644, allow_tsx_force_abort);
++
+ static struct attribute *intel_pmu_attrs[] = {
+ 	&dev_attr_freeze_on_smi.attr,
++	NULL, /* &dev_attr_allow_tsx_force_abort.attr.attr */
+ 	NULL,
+ };
+ 
+@@ -4564,6 +4639,15 @@ __init int intel_pmu_init(void)
+ 		tsx_attr = hsw_tsx_events_attrs;
+ 		intel_pmu_pebs_data_source_skl(
+ 			boot_cpu_data.x86_model == INTEL_FAM6_SKYLAKE_X);
++
++		if (boot_cpu_has(X86_FEATURE_TSX_FORCE_ABORT)) {
++			x86_pmu.flags |= PMU_FL_TFA;
++			x86_pmu.get_event_constraints = tfa_get_event_constraints;
++			x86_pmu.enable_all = intel_tfa_pmu_enable_all;
++			x86_pmu.commit_scheduling = intel_tfa_commit_scheduling;
++			intel_pmu_attrs[1] = &dev_attr_allow_tsx_force_abort.attr.attr;
++		}
++
+ 		pr_cont("Skylake events, ");
+ 		name = "skylake";
+ 		break;
+@@ -4715,7 +4799,7 @@ static __init int fixup_ht_bug(void)
+ 	hardlockup_detector_perf_restart();
+ 
+ 	for_each_online_cpu(c)
+-		free_excl_cntrs(c);
++		free_excl_cntrs(&per_cpu(cpu_hw_events, c));
+ 
+ 	cpus_read_unlock();
+ 	pr_info("PMU erratum BJ122, BV98, HSD29 workaround disabled, HT off\n");
+diff --git a/arch/x86/events/intel/pt.c b/arch/x86/events/intel/pt.c
+index 9494ca68fd9d..c0e86ff21f81 100644
+--- a/arch/x86/events/intel/pt.c
++++ b/arch/x86/events/intel/pt.c
+@@ -1114,10 +1114,11 @@ static int pt_buffer_init_topa(struct pt_buffer *buf, unsigned long nr_pages,
+  * Return:	Our private PT buffer structure.
+  */
+ static void *
+-pt_buffer_setup_aux(int cpu, void **pages, int nr_pages, bool snapshot)
++pt_buffer_setup_aux(struct perf_event *event, void **pages,
++		    int nr_pages, bool snapshot)
+ {
+ 	struct pt_buffer *buf;
+-	int node, ret;
++	int node, ret, cpu = event->cpu;
+ 
+ 	if (!nr_pages)
+ 		return NULL;
+diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
+index 27a461414b30..2690135bf83f 100644
+--- a/arch/x86/events/intel/uncore.c
++++ b/arch/x86/events/intel/uncore.c
+@@ -740,6 +740,7 @@ static int uncore_pmu_event_init(struct perf_event *event)
+ 		/* fixed counters have event field hardcoded to zero */
+ 		hwc->config = 0ULL;
+ 	} else if (is_freerunning_event(event)) {
++		hwc->config = event->attr.config;
+ 		if (!check_valid_freerunning_event(box, event))
+ 			return -EINVAL;
+ 		event->hw.idx = UNCORE_PMC_IDX_FREERUNNING;
+diff --git a/arch/x86/events/intel/uncore.h b/arch/x86/events/intel/uncore.h
+index cb46d602a6b8..853a49a8ccf6 100644
+--- a/arch/x86/events/intel/uncore.h
++++ b/arch/x86/events/intel/uncore.h
+@@ -292,8 +292,8 @@ static inline
+ unsigned int uncore_freerunning_counter(struct intel_uncore_box *box,
+ 					struct perf_event *event)
+ {
+-	unsigned int type = uncore_freerunning_type(event->attr.config);
+-	unsigned int idx = uncore_freerunning_idx(event->attr.config);
++	unsigned int type = uncore_freerunning_type(event->hw.config);
++	unsigned int idx = uncore_freerunning_idx(event->hw.config);
+ 	struct intel_uncore_pmu *pmu = box->pmu;
+ 
+ 	return pmu->type->freerunning[type].counter_base +
+@@ -377,7 +377,7 @@ static inline
+ unsigned int uncore_freerunning_bits(struct intel_uncore_box *box,
+ 				     struct perf_event *event)
+ {
+-	unsigned int type = uncore_freerunning_type(event->attr.config);
++	unsigned int type = uncore_freerunning_type(event->hw.config);
+ 
+ 	return box->pmu->type->freerunning[type].bits;
+ }
+@@ -385,7 +385,7 @@ unsigned int uncore_freerunning_bits(struct intel_uncore_box *box,
+ static inline int uncore_num_freerunning(struct intel_uncore_box *box,
+ 					 struct perf_event *event)
+ {
+-	unsigned int type = uncore_freerunning_type(event->attr.config);
++	unsigned int type = uncore_freerunning_type(event->hw.config);
+ 
+ 	return box->pmu->type->freerunning[type].num_counters;
+ }
+@@ -399,8 +399,8 @@ static inline int uncore_num_freerunning_types(struct intel_uncore_box *box,
+ static inline bool check_valid_freerunning_event(struct intel_uncore_box *box,
+ 						 struct perf_event *event)
+ {
+-	unsigned int type = uncore_freerunning_type(event->attr.config);
+-	unsigned int idx = uncore_freerunning_idx(event->attr.config);
++	unsigned int type = uncore_freerunning_type(event->hw.config);
++	unsigned int idx = uncore_freerunning_idx(event->hw.config);
+ 
+ 	return (type < uncore_num_freerunning_types(box, event)) &&
+ 	       (idx < uncore_num_freerunning(box, event));
+diff --git a/arch/x86/events/intel/uncore_snb.c b/arch/x86/events/intel/uncore_snb.c
+index 2593b0d7aeee..ef7faf486a1a 100644
+--- a/arch/x86/events/intel/uncore_snb.c
++++ b/arch/x86/events/intel/uncore_snb.c
+@@ -448,9 +448,11 @@ static int snb_uncore_imc_event_init(struct perf_event *event)
+ 
+ 	/* must be done before validate_group */
+ 	event->hw.event_base = base;
+-	event->hw.config = cfg;
+ 	event->hw.idx = idx;
+ 
++	/* Convert to standard encoding format for freerunning counters */
++	event->hw.config = ((cfg - 1) << 8) | 0x10ff;
++
+ 	/* no group validation needed, we have free running counters */
+ 
+ 	return 0;
+diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
+index d46fd6754d92..acd72e669c04 100644
+--- a/arch/x86/events/perf_event.h
++++ b/arch/x86/events/perf_event.h
+@@ -242,6 +242,11 @@ struct cpu_hw_events {
+ 	struct intel_excl_cntrs		*excl_cntrs;
+ 	int excl_thread_id; /* 0 or 1 */
+ 
++	/*
++	 * SKL TSX_FORCE_ABORT shadow
++	 */
++	u64				tfa_shadow;
++
+ 	/*
+ 	 * AMD specific bits
+ 	 */
+@@ -681,6 +686,7 @@ do {									\
+ #define PMU_FL_EXCL_CNTRS	0x4 /* has exclusive counter requirements  */
+ #define PMU_FL_EXCL_ENABLED	0x8 /* exclusive counter active */
+ #define PMU_FL_PEBS_ALL		0x10 /* all events are valid PEBS events */
++#define PMU_FL_TFA		0x20 /* deal with TSX force abort */
+ 
+ #define EVENT_VAR(_id)  event_attr_##_id
+ #define EVENT_PTR(_id) &event_attr_##_id.attr.attr
+@@ -889,7 +895,8 @@ struct event_constraint *
+ x86_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
+ 			  struct perf_event *event);
+ 
+-struct intel_shared_regs *allocate_shared_regs(int cpu);
++extern int intel_cpuc_prepare(struct cpu_hw_events *cpuc, int cpu);
++extern void intel_cpuc_finish(struct cpu_hw_events *cpuc);
+ 
+ int intel_pmu_init(void);
+ 
+@@ -1025,9 +1032,13 @@ static inline int intel_pmu_init(void)
+ 	return 0;
+ }
+ 
+-static inline struct intel_shared_regs *allocate_shared_regs(int cpu)
++static inline int intel_cpuc_prepare(struct cpu_hw_events *cpuc, int cpu)
++{
++	return 0;
++}
++
++static inline void intel_cpuc_finish(struct cpu_hw_events *cpuc)
+ {
+-	return NULL;
+ }
+ 
+ static inline int is_ht_workaround_enabled(void)
+diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
+index 7abb09e2eeb8..d3f42b6bbdac 100644
+--- a/arch/x86/hyperv/hv_init.c
++++ b/arch/x86/hyperv/hv_init.c
+@@ -406,6 +406,13 @@ void hyperv_cleanup(void)
+ 	/* Reset our OS id */
+ 	wrmsrl(HV_X64_MSR_GUEST_OS_ID, 0);
+ 
++	/*
++	 * Reset hypercall page reference before reset the page,
++	 * let hypercall operations fail safely rather than
++	 * panic the kernel for using invalid hypercall page
++	 */
++	hv_hypercall_pg = NULL;
++
+ 	/* Reset the hypercall page */
+ 	hypercall_msr.as_uint64 = 0;
+ 	wrmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
+diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
+index ad7b210aa3f6..8e790ec219a5 100644
+--- a/arch/x86/include/asm/bitops.h
++++ b/arch/x86/include/asm/bitops.h
+@@ -36,22 +36,17 @@
+  * bit 0 is the LSB of addr; bit 32 is the LSB of (addr+1).
+  */
+ 
+-#if __GNUC__ < 4 || (__GNUC__ == 4 && __GNUC_MINOR__ < 1)
+-/* Technically wrong, but this avoids compilation errors on some gcc
+-   versions. */
+-#define BITOP_ADDR(x) "=m" (*(volatile long *) (x))
+-#else
+-#define BITOP_ADDR(x) "+m" (*(volatile long *) (x))
+-#endif
++#define RLONG_ADDR(x)			 "m" (*(volatile long *) (x))
++#define WBYTE_ADDR(x)			"+m" (*(volatile char *) (x))
+ 
+-#define ADDR				BITOP_ADDR(addr)
++#define ADDR				RLONG_ADDR(addr)
+ 
+ /*
+  * We do the locked ops that don't return the old value as
+  * a mask operation on a byte.
+  */
+ #define IS_IMMEDIATE(nr)		(__builtin_constant_p(nr))
+-#define CONST_MASK_ADDR(nr, addr)	BITOP_ADDR((void *)(addr) + ((nr)>>3))
++#define CONST_MASK_ADDR(nr, addr)	WBYTE_ADDR((void *)(addr) + ((nr)>>3))
+ #define CONST_MASK(nr)			(1 << ((nr) & 7))
+ 
+ /**
+@@ -79,7 +74,7 @@ set_bit(long nr, volatile unsigned long *addr)
+ 			: "memory");
+ 	} else {
+ 		asm volatile(LOCK_PREFIX __ASM_SIZE(bts) " %1,%0"
+-			: BITOP_ADDR(addr) : "Ir" (nr) : "memory");
++			: : RLONG_ADDR(addr), "Ir" (nr) : "memory");
+ 	}
+ }
+ 
+@@ -94,7 +89,7 @@ set_bit(long nr, volatile unsigned long *addr)
+  */
+ static __always_inline void __set_bit(long nr, volatile unsigned long *addr)
+ {
+-	asm volatile(__ASM_SIZE(bts) " %1,%0" : ADDR : "Ir" (nr) : "memory");
++	asm volatile(__ASM_SIZE(bts) " %1,%0" : : ADDR, "Ir" (nr) : "memory");
+ }
+ 
+ /**
+@@ -116,8 +111,7 @@ clear_bit(long nr, volatile unsigned long *addr)
+ 			: "iq" ((u8)~CONST_MASK(nr)));
+ 	} else {
+ 		asm volatile(LOCK_PREFIX __ASM_SIZE(btr) " %1,%0"
+-			: BITOP_ADDR(addr)
+-			: "Ir" (nr));
++			: : RLONG_ADDR(addr), "Ir" (nr) : "memory");
+ 	}
+ }
+ 
+@@ -137,7 +131,7 @@ static __always_inline void clear_bit_unlock(long nr, volatile unsigned long *ad
+ 
+ static __always_inline void __clear_bit(long nr, volatile unsigned long *addr)
+ {
+-	asm volatile(__ASM_SIZE(btr) " %1,%0" : ADDR : "Ir" (nr));
++	asm volatile(__ASM_SIZE(btr) " %1,%0" : : ADDR, "Ir" (nr) : "memory");
+ }
+ 
+ static __always_inline bool clear_bit_unlock_is_negative_byte(long nr, volatile unsigned long *addr)
+@@ -145,7 +139,7 @@ static __always_inline bool clear_bit_unlock_is_negative_byte(long nr, volatile
+ 	bool negative;
+ 	asm volatile(LOCK_PREFIX "andb %2,%1"
+ 		CC_SET(s)
+-		: CC_OUT(s) (negative), ADDR
++		: CC_OUT(s) (negative), WBYTE_ADDR(addr)
+ 		: "ir" ((char) ~(1 << nr)) : "memory");
+ 	return negative;
+ }
+@@ -161,13 +155,9 @@ static __always_inline bool clear_bit_unlock_is_negative_byte(long nr, volatile
+  * __clear_bit() is non-atomic and implies release semantics before the memory
+  * operation. It can be used for an unlock if no other CPUs can concurrently
+  * modify other bits in the word.
+- *
+- * No memory barrier is required here, because x86 cannot reorder stores past
+- * older loads. Same principle as spin_unlock.
+  */
+ static __always_inline void __clear_bit_unlock(long nr, volatile unsigned long *addr)
+ {
+-	barrier();
+ 	__clear_bit(nr, addr);
+ }
+ 
+@@ -182,7 +172,7 @@ static __always_inline void __clear_bit_unlock(long nr, volatile unsigned long *
+  */
+ static __always_inline void __change_bit(long nr, volatile unsigned long *addr)
+ {
+-	asm volatile(__ASM_SIZE(btc) " %1,%0" : ADDR : "Ir" (nr));
++	asm volatile(__ASM_SIZE(btc) " %1,%0" : : ADDR, "Ir" (nr) : "memory");
+ }
+ 
+ /**
+@@ -202,8 +192,7 @@ static __always_inline void change_bit(long nr, volatile unsigned long *addr)
+ 			: "iq" ((u8)CONST_MASK(nr)));
+ 	} else {
+ 		asm volatile(LOCK_PREFIX __ASM_SIZE(btc) " %1,%0"
+-			: BITOP_ADDR(addr)
+-			: "Ir" (nr));
++			: : RLONG_ADDR(addr), "Ir" (nr) : "memory");
+ 	}
+ }
+ 
+@@ -248,8 +237,8 @@ static __always_inline bool __test_and_set_bit(long nr, volatile unsigned long *
+ 
+ 	asm(__ASM_SIZE(bts) " %2,%1"
+ 	    CC_SET(c)
+-	    : CC_OUT(c) (oldbit), ADDR
+-	    : "Ir" (nr));
++	    : CC_OUT(c) (oldbit)
++	    : ADDR, "Ir" (nr) : "memory");
+ 	return oldbit;
+ }
+ 
+@@ -288,8 +277,8 @@ static __always_inline bool __test_and_clear_bit(long nr, volatile unsigned long
+ 
+ 	asm volatile(__ASM_SIZE(btr) " %2,%1"
+ 		     CC_SET(c)
+-		     : CC_OUT(c) (oldbit), ADDR
+-		     : "Ir" (nr));
++		     : CC_OUT(c) (oldbit)
++		     : ADDR, "Ir" (nr) : "memory");
+ 	return oldbit;
+ }
+ 
+@@ -300,8 +289,8 @@ static __always_inline bool __test_and_change_bit(long nr, volatile unsigned lon
+ 
+ 	asm volatile(__ASM_SIZE(btc) " %2,%1"
+ 		     CC_SET(c)
+-		     : CC_OUT(c) (oldbit), ADDR
+-		     : "Ir" (nr) : "memory");
++		     : CC_OUT(c) (oldbit)
++		     : ADDR, "Ir" (nr) : "memory");
+ 
+ 	return oldbit;
+ }
+@@ -332,7 +321,7 @@ static __always_inline bool variable_test_bit(long nr, volatile const unsigned l
+ 	asm volatile(__ASM_SIZE(bt) " %2,%1"
+ 		     CC_SET(c)
+ 		     : CC_OUT(c) (oldbit)
+-		     : "m" (*(unsigned long *)addr), "Ir" (nr));
++		     : "m" (*(unsigned long *)addr), "Ir" (nr) : "memory");
+ 
+ 	return oldbit;
+ }
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index 6d6122524711..981ff9479648 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -344,6 +344,7 @@
+ /* Intel-defined CPU features, CPUID level 0x00000007:0 (EDX), word 18 */
+ #define X86_FEATURE_AVX512_4VNNIW	(18*32+ 2) /* AVX-512 Neural Network Instructions */
+ #define X86_FEATURE_AVX512_4FMAPS	(18*32+ 3) /* AVX-512 Multiply Accumulation Single precision */
++#define X86_FEATURE_TSX_FORCE_ABORT	(18*32+13) /* "" TSX_FORCE_ABORT */
+ #define X86_FEATURE_PCONFIG		(18*32+18) /* Intel PCONFIG */
+ #define X86_FEATURE_SPEC_CTRL		(18*32+26) /* "" Speculation Control (IBRS + IBPB) */
+ #define X86_FEATURE_INTEL_STIBP		(18*32+27) /* "" Single Thread Indirect Branch Predictors */
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 180373360e34..71d763ad2637 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -352,6 +352,7 @@ struct kvm_mmu_page {
+ };
+ 
+ struct kvm_pio_request {
++	unsigned long linear_rip;
+ 	unsigned long count;
+ 	int in;
+ 	int port;
+@@ -570,6 +571,7 @@ struct kvm_vcpu_arch {
+ 	bool tpr_access_reporting;
+ 	u64 ia32_xss;
+ 	u64 microcode_version;
++	u64 arch_capabilities;
+ 
+ 	/*
+ 	 * Paging state of the vcpu
+@@ -1255,7 +1257,7 @@ void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm,
+ 				   struct kvm_memory_slot *slot,
+ 				   gfn_t gfn_offset, unsigned long mask);
+ void kvm_mmu_zap_all(struct kvm *kvm);
+-void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, struct kvm_memslots *slots);
++void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen);
+ unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm);
+ void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int kvm_nr_mmu_pages);
+ 
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index 8e40c2446fd1..ca5bc0eacb95 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -666,6 +666,12 @@
+ 
+ #define MSR_IA32_TSC_DEADLINE		0x000006E0
+ 
++
++#define MSR_TSX_FORCE_ABORT		0x0000010F
++
++#define MSR_TFA_RTM_FORCE_ABORT_BIT	0
++#define MSR_TFA_RTM_FORCE_ABORT		BIT_ULL(MSR_TFA_RTM_FORCE_ABORT_BIT)
++
+ /* P4/Xeon+ specific */
+ #define MSR_IA32_MCG_EAX		0x00000180
+ #define MSR_IA32_MCG_EBX		0x00000181
+diff --git a/arch/x86/include/asm/string_32.h b/arch/x86/include/asm/string_32.h
+index 55d392c6bd29..2fd165f1cffa 100644
+--- a/arch/x86/include/asm/string_32.h
++++ b/arch/x86/include/asm/string_32.h
+@@ -179,14 +179,7 @@ static inline void *__memcpy3d(void *to, const void *from, size_t len)
+  *	No 3D Now!
+  */
+ 
+-#if (__GNUC__ >= 4)
+ #define memcpy(t, f, n) __builtin_memcpy(t, f, n)
+-#else
+-#define memcpy(t, f, n)				\
+-	(__builtin_constant_p((n))		\
+-	 ? __constant_memcpy((t), (f), (n))	\
+-	 : __memcpy((t), (f), (n)))
+-#endif
+ 
+ #endif
+ #endif /* !CONFIG_FORTIFY_SOURCE */
+@@ -282,12 +275,7 @@ void *__constant_c_and_count_memset(void *s, unsigned long pattern,
+ 
+ 	{
+ 		int d0, d1;
+-#if __GNUC__ == 4 && __GNUC_MINOR__ == 0
+-		/* Workaround for broken gcc 4.0 */
+-		register unsigned long eax asm("%eax") = pattern;
+-#else
+ 		unsigned long eax = pattern;
+-#endif
+ 
+ 		switch (count % 4) {
+ 		case 0:
+@@ -321,15 +309,7 @@ void *__constant_c_and_count_memset(void *s, unsigned long pattern,
+ #define __HAVE_ARCH_MEMSET
+ extern void *memset(void *, int, size_t);
+ #ifndef CONFIG_FORTIFY_SOURCE
+-#if (__GNUC__ >= 4)
+ #define memset(s, c, count) __builtin_memset(s, c, count)
+-#else
+-#define memset(s, c, count)						\
+-	(__builtin_constant_p(c)					\
+-	 ? __constant_c_x_memset((s), (0x01010101UL * (unsigned char)(c)), \
+-				 (count))				\
+-	 : __memset((s), (c), (count)))
+-#endif
+ #endif /* !CONFIG_FORTIFY_SOURCE */
+ 
+ #define __HAVE_ARCH_MEMSET16
+diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h
+index 4e4194e21a09..75314c3dbe47 100644
+--- a/arch/x86/include/asm/string_64.h
++++ b/arch/x86/include/asm/string_64.h
+@@ -14,21 +14,6 @@
+ extern void *memcpy(void *to, const void *from, size_t len);
+ extern void *__memcpy(void *to, const void *from, size_t len);
+ 
+-#ifndef CONFIG_FORTIFY_SOURCE
+-#if (__GNUC__ == 4 && __GNUC_MINOR__ < 3) || __GNUC__ < 4
+-#define memcpy(dst, src, len)					\
+-({								\
+-	size_t __len = (len);					\
+-	void *__ret;						\
+-	if (__builtin_constant_p(len) && __len >= 64)		\
+-		__ret = __memcpy((dst), (src), __len);		\
+-	else							\
+-		__ret = __builtin_memcpy((dst), (src), __len);	\
+-	__ret;							\
+-})
+-#endif
+-#endif /* !CONFIG_FORTIFY_SOURCE */
+-
+ #define __HAVE_ARCH_MEMSET
+ void *memset(void *s, int c, size_t n);
+ void *__memset(void *s, int c, size_t n);
+diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
+index c1334aaaa78d..f3aed639dccd 100644
+--- a/arch/x86/include/asm/uaccess.h
++++ b/arch/x86/include/asm/uaccess.h
+@@ -76,7 +76,7 @@ static inline bool __chk_range_not_ok(unsigned long addr, unsigned long size, un
+ #endif
+ 
+ /**
+- * access_ok: - Checks if a user space pointer is valid
++ * access_ok - Checks if a user space pointer is valid
+  * @addr: User space pointer to start of block to check
+  * @size: Size of block to check
+  *
+@@ -85,12 +85,12 @@ static inline bool __chk_range_not_ok(unsigned long addr, unsigned long size, un
+  *
+  * Checks if a pointer to a block of memory in user space is valid.
+  *
+- * Returns true (nonzero) if the memory block may be valid, false (zero)
+- * if it is definitely invalid.
+- *
+  * Note that, depending on architecture, this function probably just
+  * checks that the pointer is in the user space range - after calling
+  * this function, memory access functions may still return -EFAULT.
++ *
++ * Return: true (nonzero) if the memory block may be valid, false (zero)
++ * if it is definitely invalid.
+  */
+ #define access_ok(addr, size)					\
+ ({									\
+@@ -135,7 +135,7 @@ extern int __get_user_bad(void);
+ __typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))
+ 
+ /**
+- * get_user: - Get a simple variable from user space.
++ * get_user - Get a simple variable from user space.
+  * @x:   Variable to store result.
+  * @ptr: Source address, in user space.
+  *
+@@ -149,7 +149,7 @@ __typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))
+  * @ptr must have pointer-to-simple-variable type, and the result of
+  * dereferencing @ptr must be assignable to @x without a cast.
+  *
+- * Returns zero on success, or -EFAULT on error.
++ * Return: zero on success, or -EFAULT on error.
+  * On error, the variable @x is set to zero.
+  */
+ /*
+@@ -227,7 +227,7 @@ extern void __put_user_4(void);
+ extern void __put_user_8(void);
+ 
+ /**
+- * put_user: - Write a simple value into user space.
++ * put_user - Write a simple value into user space.
+  * @x:   Value to copy to user space.
+  * @ptr: Destination address, in user space.
+  *
+@@ -241,7 +241,7 @@ extern void __put_user_8(void);
+  * @ptr must have pointer-to-simple-variable type, and @x must be assignable
+  * to the result of dereferencing @ptr.
+  *
+- * Returns zero on success, or -EFAULT on error.
++ * Return: zero on success, or -EFAULT on error.
+  */
+ #define put_user(x, ptr)					\
+ ({								\
+@@ -503,7 +503,7 @@ struct __large_struct { unsigned long buf[100]; };
+ } while (0)
+ 
+ /**
+- * __get_user: - Get a simple variable from user space, with less checking.
++ * __get_user - Get a simple variable from user space, with less checking.
+  * @x:   Variable to store result.
+  * @ptr: Source address, in user space.
+  *
+@@ -520,7 +520,7 @@ struct __large_struct { unsigned long buf[100]; };
+  * Caller must check the pointer with access_ok() before calling this
+  * function.
+  *
+- * Returns zero on success, or -EFAULT on error.
++ * Return: zero on success, or -EFAULT on error.
+  * On error, the variable @x is set to zero.
+  */
+ 
+@@ -528,7 +528,7 @@ struct __large_struct { unsigned long buf[100]; };
+ 	__get_user_nocheck((x), (ptr), sizeof(*(ptr)))
+ 
+ /**
+- * __put_user: - Write a simple value into user space, with less checking.
++ * __put_user - Write a simple value into user space, with less checking.
+  * @x:   Value to copy to user space.
+  * @ptr: Destination address, in user space.
+  *
+@@ -545,7 +545,7 @@ struct __large_struct { unsigned long buf[100]; };
+  * Caller must check the pointer with access_ok() before calling this
+  * function.
+  *
+- * Returns zero on success, or -EFAULT on error.
++ * Return: zero on success, or -EFAULT on error.
+  */
+ 
+ #define __put_user(x, ptr)						\
+diff --git a/arch/x86/include/asm/unwind.h b/arch/x86/include/asm/unwind.h
+index 1f86e1b0a5cd..499578f7e6d7 100644
+--- a/arch/x86/include/asm/unwind.h
++++ b/arch/x86/include/asm/unwind.h
+@@ -23,6 +23,12 @@ struct unwind_state {
+ #elif defined(CONFIG_UNWINDER_FRAME_POINTER)
+ 	bool got_irq;
+ 	unsigned long *bp, *orig_sp, ip;
++	/*
++	 * If non-NULL: The current frame is incomplete and doesn't contain a
++	 * valid BP. When looking for the next frame, use this instead of the
++	 * non-existent saved BP.
++	 */
++	unsigned long *next_bp;
+ 	struct pt_regs *regs;
+ #else
+ 	unsigned long *sp;
+diff --git a/arch/x86/include/asm/xen/hypercall.h b/arch/x86/include/asm/xen/hypercall.h
+index ef05bea7010d..6b5c710846f5 100644
+--- a/arch/x86/include/asm/xen/hypercall.h
++++ b/arch/x86/include/asm/xen/hypercall.h
+@@ -206,6 +206,9 @@ xen_single_call(unsigned int call,
+ 	__HYPERCALL_DECLS;
+ 	__HYPERCALL_5ARG(a1, a2, a3, a4, a5);
+ 
++	if (call >= PAGE_SIZE / sizeof(hypercall_page[0]))
++		return -EINVAL;
++
+ 	asm volatile(CALL_NOSPEC
+ 		     : __HYPERCALL_5PARAM
+ 		     : [thunk_target] "a" (&hypercall_page[call])
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 69f6bbb41be0..01004bfb1a1b 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -819,11 +819,9 @@ static void init_amd_bd(struct cpuinfo_x86 *c)
+ static void init_amd_zn(struct cpuinfo_x86 *c)
+ {
+ 	set_cpu_cap(c, X86_FEATURE_ZEN);
+-	/*
+-	 * Fix erratum 1076: CPB feature bit not being set in CPUID. It affects
+-	 * all up to and including B1.
+-	 */
+-	if (c->x86_model <= 1 && c->x86_stepping <= 1)
++
++	/* Fix erratum 1076: CPB feature bit not being set in CPUID. */
++	if (!cpu_has(c, X86_FEATURE_CPB))
+ 		set_cpu_cap(c, X86_FEATURE_CPB);
+ }
+ 
+diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
+index 8257a59704ae..763d4264d16a 100644
+--- a/arch/x86/kernel/ftrace.c
++++ b/arch/x86/kernel/ftrace.c
+@@ -49,7 +49,7 @@ int ftrace_arch_code_modify_post_process(void)
+ union ftrace_code_union {
+ 	char code[MCOUNT_INSN_SIZE];
+ 	struct {
+-		unsigned char e8;
++		unsigned char op;
+ 		int offset;
+ 	} __attribute__((packed));
+ };
+@@ -59,20 +59,23 @@ static int ftrace_calc_offset(long ip, long addr)
+ 	return (int)(addr - ip);
+ }
+ 
+-static unsigned char *ftrace_call_replace(unsigned long ip, unsigned long addr)
++static unsigned char *
++ftrace_text_replace(unsigned char op, unsigned long ip, unsigned long addr)
+ {
+ 	static union ftrace_code_union calc;
+ 
+-	calc.e8		= 0xe8;
++	calc.op		= op;
+ 	calc.offset	= ftrace_calc_offset(ip + MCOUNT_INSN_SIZE, addr);
+ 
+-	/*
+-	 * No locking needed, this must be called via kstop_machine
+-	 * which in essence is like running on a uniprocessor machine.
+-	 */
+ 	return calc.code;
+ }
+ 
++static unsigned char *
++ftrace_call_replace(unsigned long ip, unsigned long addr)
++{
++	return ftrace_text_replace(0xe8, ip, addr);
++}
++
+ static inline int
+ within(unsigned long addr, unsigned long start, unsigned long end)
+ {
+@@ -664,22 +667,6 @@ int __init ftrace_dyn_arch_init(void)
+ 	return 0;
+ }
+ 
+-#if defined(CONFIG_X86_64) || defined(CONFIG_FUNCTION_GRAPH_TRACER)
+-static unsigned char *ftrace_jmp_replace(unsigned long ip, unsigned long addr)
+-{
+-	static union ftrace_code_union calc;
+-
+-	/* Jmp not a call (ignore the .e8) */
+-	calc.e8		= 0xe9;
+-	calc.offset	= ftrace_calc_offset(ip + MCOUNT_INSN_SIZE, addr);
+-
+-	/*
+-	 * ftrace external locks synchronize the access to the static variable.
+-	 */
+-	return calc.code;
+-}
+-#endif
+-
+ /* Currently only x86_64 supports dynamic trampolines */
+ #ifdef CONFIG_X86_64
+ 
+@@ -891,8 +878,8 @@ static void *addr_from_call(void *ptr)
+ 		return NULL;
+ 
+ 	/* Make sure this is a call */
+-	if (WARN_ON_ONCE(calc.e8 != 0xe8)) {
+-		pr_warn("Expected e8, got %x\n", calc.e8);
++	if (WARN_ON_ONCE(calc.op != 0xe8)) {
++		pr_warn("Expected e8, got %x\n", calc.op);
+ 		return NULL;
+ 	}
+ 
+@@ -963,6 +950,11 @@ void arch_ftrace_trampoline_free(struct ftrace_ops *ops)
+ #ifdef CONFIG_DYNAMIC_FTRACE
+ extern void ftrace_graph_call(void);
+ 
++static unsigned char *ftrace_jmp_replace(unsigned long ip, unsigned long addr)
++{
++	return ftrace_text_replace(0xe9, ip, addr);
++}
++
+ static int ftrace_mod_jmp(unsigned long ip, void *func)
+ {
+ 	unsigned char *new;
+diff --git a/arch/x86/kernel/kexec-bzimage64.c b/arch/x86/kernel/kexec-bzimage64.c
+index 53917a3ebf94..1f3b77367948 100644
+--- a/arch/x86/kernel/kexec-bzimage64.c
++++ b/arch/x86/kernel/kexec-bzimage64.c
+@@ -218,6 +218,9 @@ setup_boot_parameters(struct kimage *image, struct boot_params *params,
+ 	params->screen_info.ext_mem_k = 0;
+ 	params->alt_mem_k = 0;
+ 
++	/* Always fill in RSDP: it is either 0 or a valid value */
++	params->acpi_rsdp_addr = boot_params.acpi_rsdp_addr;
++
+ 	/* Default APM info */
+ 	memset(&params->apm_bios_info, 0, sizeof(params->apm_bios_info));
+ 
+@@ -256,7 +259,6 @@ setup_boot_parameters(struct kimage *image, struct boot_params *params,
+ 	setup_efi_state(params, params_load_addr, efi_map_offset, efi_map_sz,
+ 			efi_setup_data_offset);
+ #endif
+-
+ 	/* Setup EDD info */
+ 	memcpy(params->eddbuf, boot_params.eddbuf,
+ 				EDDMAXNR * sizeof(struct edd_info));
+diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
+index 6adf6e6c2933..544bd41a514c 100644
+--- a/arch/x86/kernel/kprobes/opt.c
++++ b/arch/x86/kernel/kprobes/opt.c
+@@ -141,6 +141,11 @@ asm (
+ 
+ void optprobe_template_func(void);
+ STACK_FRAME_NON_STANDARD(optprobe_template_func);
++NOKPROBE_SYMBOL(optprobe_template_func);
++NOKPROBE_SYMBOL(optprobe_template_entry);
++NOKPROBE_SYMBOL(optprobe_template_val);
++NOKPROBE_SYMBOL(optprobe_template_call);
++NOKPROBE_SYMBOL(optprobe_template_end);
+ 
+ #define TMPL_MOVE_IDX \
+ 	((long)optprobe_template_val - (long)optprobe_template_entry)
+diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
+index e811d4d1c824..d908a37bf3f3 100644
+--- a/arch/x86/kernel/kvmclock.c
++++ b/arch/x86/kernel/kvmclock.c
+@@ -104,12 +104,8 @@ static u64 kvm_sched_clock_read(void)
+ 
+ static inline void kvm_sched_clock_init(bool stable)
+ {
+-	if (!stable) {
+-		pv_ops.time.sched_clock = kvm_clock_read;
++	if (!stable)
+ 		clear_sched_clock_stable();
+-		return;
+-	}
+-
+ 	kvm_sched_clock_offset = kvm_clock_read();
+ 	pv_ops.time.sched_clock = kvm_sched_clock_read;
+ 
+diff --git a/arch/x86/kernel/unwind_frame.c b/arch/x86/kernel/unwind_frame.c
+index 3dc26f95d46e..9b9fd4826e7a 100644
+--- a/arch/x86/kernel/unwind_frame.c
++++ b/arch/x86/kernel/unwind_frame.c
+@@ -320,10 +320,14 @@ bool unwind_next_frame(struct unwind_state *state)
+ 	}
+ 
+ 	/* Get the next frame pointer: */
+-	if (state->regs)
++	if (state->next_bp) {
++		next_bp = state->next_bp;
++		state->next_bp = NULL;
++	} else if (state->regs) {
+ 		next_bp = (unsigned long *)state->regs->bp;
+-	else
++	} else {
+ 		next_bp = (unsigned long *)READ_ONCE_TASK_STACK(state->task, *state->bp);
++	}
+ 
+ 	/* Move to the next frame if it's safe: */
+ 	if (!update_stack_state(state, next_bp))
+@@ -398,6 +402,21 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task,
+ 
+ 	bp = get_frame_pointer(task, regs);
+ 
++	/*
++	 * If we crash with IP==0, the last successfully executed instruction
++	 * was probably an indirect function call with a NULL function pointer.
++	 * That means that SP points into the middle of an incomplete frame:
++	 * *SP is a return pointer, and *(SP-sizeof(unsigned long)) is where we
++	 * would have written a frame pointer if we hadn't crashed.
++	 * Pretend that the frame is complete and that BP points to it, but save
++	 * the real BP so that we can use it when looking for the next frame.
++	 */
++	if (regs && regs->ip == 0 &&
++	    (unsigned long *)kernel_stack_pointer(regs) >= first_frame) {
++		state->next_bp = bp;
++		bp = ((unsigned long *)kernel_stack_pointer(regs)) - 1;
++	}
++
+ 	/* Initialize stack info and make sure the frame data is accessible: */
+ 	get_stack_info(bp, state->task, &state->stack_info,
+ 		       &state->stack_mask);
+@@ -410,7 +429,7 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task,
+ 	 */
+ 	while (!unwind_done(state) &&
+ 	       (!on_stack(&state->stack_info, first_frame, sizeof(long)) ||
+-			state->bp < first_frame))
++			(state->next_bp == NULL && state->bp < first_frame)))
+ 		unwind_next_frame(state);
+ }
+ EXPORT_SYMBOL_GPL(__unwind_start);
+diff --git a/arch/x86/kernel/unwind_orc.c b/arch/x86/kernel/unwind_orc.c
+index 26038eacf74a..89be1be1790c 100644
+--- a/arch/x86/kernel/unwind_orc.c
++++ b/arch/x86/kernel/unwind_orc.c
+@@ -113,6 +113,20 @@ static struct orc_entry *orc_ftrace_find(unsigned long ip)
+ }
+ #endif
+ 
++/*
++ * If we crash with IP==0, the last successfully executed instruction
++ * was probably an indirect function call with a NULL function pointer,
++ * and we don't have unwind information for NULL.
++ * This hardcoded ORC entry for IP==0 allows us to unwind from a NULL function
++ * pointer into its parent and then continue normally from there.
++ */
++static struct orc_entry null_orc_entry = {
++	.sp_offset = sizeof(long),
++	.sp_reg = ORC_REG_SP,
++	.bp_reg = ORC_REG_UNDEFINED,
++	.type = ORC_TYPE_CALL
++};
++
+ static struct orc_entry *orc_find(unsigned long ip)
+ {
+ 	static struct orc_entry *orc;
+@@ -120,6 +134,9 @@ static struct orc_entry *orc_find(unsigned long ip)
+ 	if (!orc_init)
+ 		return NULL;
+ 
++	if (ip == 0)
++		return &null_orc_entry;
++
+ 	/* For non-init vmlinux addresses, use the fast lookup table: */
+ 	if (ip >= LOOKUP_START_IP && ip < LOOKUP_STOP_IP) {
+ 		unsigned int idx, start, stop;
+diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
+index 0d618ee634ac..ee3b5c7d662e 100644
+--- a/arch/x86/kernel/vmlinux.lds.S
++++ b/arch/x86/kernel/vmlinux.lds.S
+@@ -401,7 +401,7 @@ SECTIONS
+  * Per-cpu symbols which need to be offset from __per_cpu_load
+  * for the boot processor.
+  */
+-#define INIT_PER_CPU(x) init_per_cpu__##x = x + __per_cpu_load
++#define INIT_PER_CPU(x) init_per_cpu__##x = ABSOLUTE(x) + __per_cpu_load
+ INIT_PER_CPU(gdt_page);
+ INIT_PER_CPU(irq_stack_union);
+ 
+diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
+index f2d1d230d5b8..9ab33cab9486 100644
+--- a/arch/x86/kvm/mmu.c
++++ b/arch/x86/kvm/mmu.c
+@@ -5635,13 +5635,8 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
+ {
+ 	struct kvm_memslots *slots;
+ 	struct kvm_memory_slot *memslot;
+-	bool flush_tlb = true;
+-	bool flush = false;
+ 	int i;
+ 
+-	if (kvm_available_flush_tlb_with_range())
+-		flush_tlb = false;
+-
+ 	spin_lock(&kvm->mmu_lock);
+ 	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+ 		slots = __kvm_memslots(kvm, i);
+@@ -5653,17 +5648,12 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
+ 			if (start >= end)
+ 				continue;
+ 
+-			flush |= slot_handle_level_range(kvm, memslot,
+-					kvm_zap_rmapp, PT_PAGE_TABLE_LEVEL,
+-					PT_MAX_HUGEPAGE_LEVEL, start,
+-					end - 1, flush_tlb);
++			slot_handle_level_range(kvm, memslot, kvm_zap_rmapp,
++						PT_PAGE_TABLE_LEVEL, PT_MAX_HUGEPAGE_LEVEL,
++						start, end - 1, true);
+ 		}
+ 	}
+ 
+-	if (flush)
+-		kvm_flush_remote_tlbs_with_address(kvm, gfn_start,
+-				gfn_end - gfn_start + 1);
+-
+ 	spin_unlock(&kvm->mmu_lock);
+ }
+ 
+@@ -5901,13 +5891,30 @@ static bool kvm_has_zapped_obsolete_pages(struct kvm *kvm)
+ 	return unlikely(!list_empty_careful(&kvm->arch.zapped_obsolete_pages));
+ }
+ 
+-void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, struct kvm_memslots *slots)
++void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen)
+ {
++	gen &= MMIO_GEN_MASK;
++
++	/*
++	 * Shift to eliminate the "update in-progress" flag, which isn't
++	 * included in the spte's generation number.
++	 */
++	gen >>= 1;
++
++	/*
++	 * Generation numbers are incremented in multiples of the number of
++	 * address spaces in order to provide unique generations across all
++	 * address spaces.  Strip what is effectively the address space
++	 * modifier prior to checking for a wrap of the MMIO generation so
++	 * that a wrap in any address space is detected.
++	 */
++	gen &= ~((u64)KVM_ADDRESS_SPACE_NUM - 1);
++
+ 	/*
+-	 * The very rare case: if the generation-number is round,
++	 * The very rare case: if the MMIO generation number has wrapped,
+ 	 * zap all shadow pages.
+ 	 */
+-	if (unlikely((slots->generation & MMIO_GEN_MASK) == 0)) {
++	if (unlikely(gen == 0)) {
+ 		kvm_debug_ratelimited("kvm: zapping shadow pages for mmio generation wraparound\n");
+ 		kvm_mmu_invalidate_zap_all_pages(kvm);
+ 	}
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index f13a3a24d360..a9b8e38d78ad 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -6422,11 +6422,11 @@ e_free:
+ 	return ret;
+ }
+ 
+-static int get_num_contig_pages(int idx, struct page **inpages,
+-				unsigned long npages)
++static unsigned long get_num_contig_pages(unsigned long idx,
++				struct page **inpages, unsigned long npages)
+ {
+ 	unsigned long paddr, next_paddr;
+-	int i = idx + 1, pages = 1;
++	unsigned long i = idx + 1, pages = 1;
+ 
+ 	/* find the number of contiguous pages starting from idx */
+ 	paddr = __sme_page_pa(inpages[idx]);
+@@ -6445,12 +6445,12 @@ static int get_num_contig_pages(int idx, struct page **inpages,
+ 
+ static int sev_launch_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
+ {
+-	unsigned long vaddr, vaddr_end, next_vaddr, npages, size;
++	unsigned long vaddr, vaddr_end, next_vaddr, npages, pages, size, i;
+ 	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+ 	struct kvm_sev_launch_update_data params;
+ 	struct sev_data_launch_update_data *data;
+ 	struct page **inpages;
+-	int i, ret, pages;
++	int ret;
+ 
+ 	if (!sev_guest(kvm))
+ 		return -ENOTTY;
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index d737a51a53ca..f90b3a948291 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -500,6 +500,17 @@ static void nested_vmx_disable_intercept_for_msr(unsigned long *msr_bitmap_l1,
+ 	}
+ }
+ 
++static inline void enable_x2apic_msr_intercepts(unsigned long *msr_bitmap) {
++	int msr;
++
++	for (msr = 0x800; msr <= 0x8ff; msr += BITS_PER_LONG) {
++		unsigned word = msr / BITS_PER_LONG;
++
++		msr_bitmap[word] = ~0;
++		msr_bitmap[word + (0x800 / sizeof(long))] = ~0;
++	}
++}
++
+ /*
+  * Merge L0's and L1's MSR bitmap, return false to indicate that
+  * we do not use the hardware.
+@@ -541,39 +552,44 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
+ 		return false;
+ 
+ 	msr_bitmap_l1 = (unsigned long *)kmap(page);
+-	if (nested_cpu_has_apic_reg_virt(vmcs12)) {
+-		/*
+-		 * L0 need not intercept reads for MSRs between 0x800 and 0x8ff, it
+-		 * just lets the processor take the value from the virtual-APIC page;
+-		 * take those 256 bits directly from the L1 bitmap.
+-		 */
+-		for (msr = 0x800; msr <= 0x8ff; msr += BITS_PER_LONG) {
+-			unsigned word = msr / BITS_PER_LONG;
+-			msr_bitmap_l0[word] = msr_bitmap_l1[word];
+-			msr_bitmap_l0[word + (0x800 / sizeof(long))] = ~0;
+-		}
+-	} else {
+-		for (msr = 0x800; msr <= 0x8ff; msr += BITS_PER_LONG) {
+-			unsigned word = msr / BITS_PER_LONG;
+-			msr_bitmap_l0[word] = ~0;
+-			msr_bitmap_l0[word + (0x800 / sizeof(long))] = ~0;
+-		}
+-	}
+ 
+-	nested_vmx_disable_intercept_for_msr(
+-		msr_bitmap_l1, msr_bitmap_l0,
+-		X2APIC_MSR(APIC_TASKPRI),
+-		MSR_TYPE_W);
++	/*
++	 * To keep the control flow simple, pay eight 8-byte writes (sixteen
++	 * 4-byte writes on 32-bit systems) up front to enable intercepts for
++	 * the x2APIC MSR range and selectively disable them below.
++	 */
++	enable_x2apic_msr_intercepts(msr_bitmap_l0);
++
++	if (nested_cpu_has_virt_x2apic_mode(vmcs12)) {
++		if (nested_cpu_has_apic_reg_virt(vmcs12)) {
++			/*
++			 * L0 need not intercept reads for MSRs between 0x800
++			 * and 0x8ff, it just lets the processor take the value
++			 * from the virtual-APIC page; take those 256 bits
++			 * directly from the L1 bitmap.
++			 */
++			for (msr = 0x800; msr <= 0x8ff; msr += BITS_PER_LONG) {
++				unsigned word = msr / BITS_PER_LONG;
++
++				msr_bitmap_l0[word] = msr_bitmap_l1[word];
++			}
++		}
+ 
+-	if (nested_cpu_has_vid(vmcs12)) {
+-		nested_vmx_disable_intercept_for_msr(
+-			msr_bitmap_l1, msr_bitmap_l0,
+-			X2APIC_MSR(APIC_EOI),
+-			MSR_TYPE_W);
+ 		nested_vmx_disable_intercept_for_msr(
+ 			msr_bitmap_l1, msr_bitmap_l0,
+-			X2APIC_MSR(APIC_SELF_IPI),
+-			MSR_TYPE_W);
++			X2APIC_MSR(APIC_TASKPRI),
++			MSR_TYPE_R | MSR_TYPE_W);
++
++		if (nested_cpu_has_vid(vmcs12)) {
++			nested_vmx_disable_intercept_for_msr(
++				msr_bitmap_l1, msr_bitmap_l0,
++				X2APIC_MSR(APIC_EOI),
++				MSR_TYPE_W);
++			nested_vmx_disable_intercept_for_msr(
++				msr_bitmap_l1, msr_bitmap_l0,
++				X2APIC_MSR(APIC_SELF_IPI),
++				MSR_TYPE_W);
++		}
+ 	}
+ 
+ 	if (spec_ctrl)
+@@ -2765,7 +2781,7 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
+ 		"add $%c[wordsize], %%" _ASM_SP "\n\t" /* un-adjust RSP */
+ 
+ 		/* Check if vmlaunch or vmresume is needed */
+-		"cmpl $0, %c[launched](%% " _ASM_CX")\n\t"
++		"cmpb $0, %c[launched](%% " _ASM_CX")\n\t"
+ 
+ 		"call vmx_vmenter\n\t"
+ 
+@@ -4035,25 +4051,50 @@ int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsigned long exit_qualification,
+ 	/* Addr = segment_base + offset */
+ 	/* offset = base + [index * scale] + displacement */
+ 	off = exit_qualification; /* holds the displacement */
++	if (addr_size == 1)
++		off = (gva_t)sign_extend64(off, 31);
++	else if (addr_size == 0)
++		off = (gva_t)sign_extend64(off, 15);
+ 	if (base_is_valid)
+ 		off += kvm_register_read(vcpu, base_reg);
+ 	if (index_is_valid)
+ 		off += kvm_register_read(vcpu, index_reg)<<scaling;
+ 	vmx_get_segment(vcpu, &s, seg_reg);
+-	*ret = s.base + off;
+ 
++	/*
++	 * The effective address, i.e. @off, of a memory operand is truncated
++	 * based on the address size of the instruction.  Note that this is
++	 * the *effective address*, i.e. the address prior to accounting for
++	 * the segment's base.
++	 */
+ 	if (addr_size == 1) /* 32 bit */
+-		*ret &= 0xffffffff;
++		off &= 0xffffffff;
++	else if (addr_size == 0) /* 16 bit */
++		off &= 0xffff;
+ 
+ 	/* Checks for #GP/#SS exceptions. */
+ 	exn = false;
+ 	if (is_long_mode(vcpu)) {
++		/*
++		 * The virtual/linear address is never truncated in 64-bit
++		 * mode, e.g. a 32-bit address size can yield a 64-bit virtual
++		 * address when using FS/GS with a non-zero base.
++		 */
++		*ret = s.base + off;
++
+ 		/* Long mode: #GP(0)/#SS(0) if the memory address is in a
+ 		 * non-canonical form. This is the only check on the memory
+ 		 * destination for long mode!
+ 		 */
+ 		exn = is_noncanonical_address(*ret, vcpu);
+ 	} else if (is_protmode(vcpu)) {
++		/*
++		 * When not in long mode, the virtual/linear address is
++		 * unconditionally truncated to 32 bits regardless of the
++		 * address size.
++		 */
++		*ret = (s.base + off) & 0xffffffff;
++
+ 		/* Protected mode: apply checks for segment validity in the
+ 		 * following order:
+ 		 * - segment type check (#GP(0) may be thrown)
+@@ -4077,10 +4118,16 @@ int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsigned long exit_qualification,
+ 		/* Protected mode: #GP(0)/#SS(0) if the segment is unusable.
+ 		 */
+ 		exn = (s.unusable != 0);
+-		/* Protected mode: #GP(0)/#SS(0) if the memory
+-		 * operand is outside the segment limit.
++
++		/*
++		 * Protected mode: #GP(0)/#SS(0) if the memory operand is
++		 * outside the segment limit.  All CPUs that support VMX ignore
++		 * limit checks for flat segments, i.e. segments with base==0,
++		 * limit==0xffffffff and of type expand-up data or code.
+ 		 */
+-		exn = exn || (off + sizeof(u64) > s.limit);
++		if (!(s.base == 0 && s.limit == 0xffffffff &&
++		     ((s.type & 8) || !(s.type & 4))))
++			exn = exn || (off + sizeof(u64) > s.limit);
+ 	}
+ 	if (exn) {
+ 		kvm_queue_exception_e(vcpu,
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 30a6bcd735ec..a0a770816429 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -1679,12 +1679,6 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 
+ 		msr_info->data = to_vmx(vcpu)->spec_ctrl;
+ 		break;
+-	case MSR_IA32_ARCH_CAPABILITIES:
+-		if (!msr_info->host_initiated &&
+-		    !guest_cpuid_has(vcpu, X86_FEATURE_ARCH_CAPABILITIES))
+-			return 1;
+-		msr_info->data = to_vmx(vcpu)->arch_capabilities;
+-		break;
+ 	case MSR_IA32_SYSENTER_CS:
+ 		msr_info->data = vmcs_read32(GUEST_SYSENTER_CS);
+ 		break;
+@@ -1891,11 +1885,6 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 		vmx_disable_intercept_for_msr(vmx->vmcs01.msr_bitmap, MSR_IA32_PRED_CMD,
+ 					      MSR_TYPE_W);
+ 		break;
+-	case MSR_IA32_ARCH_CAPABILITIES:
+-		if (!msr_info->host_initiated)
+-			return 1;
+-		vmx->arch_capabilities = data;
+-		break;
+ 	case MSR_IA32_CR_PAT:
+ 		if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT) {
+ 			if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data))
+@@ -4083,8 +4072,6 @@ static void vmx_vcpu_setup(struct vcpu_vmx *vmx)
+ 		++vmx->nmsrs;
+ 	}
+ 
+-	vmx->arch_capabilities = kvm_get_arch_capabilities();
+-
+ 	vm_exit_controls_init(vmx, vmx_vmexit_ctrl());
+ 
+ 	/* 22.2.1, 20.8.1 */
+@@ -6399,7 +6386,7 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
+ 		"mov %%" _ASM_AX", %%cr2 \n\t"
+ 		"3: \n\t"
+ 		/* Check if vmlaunch or vmresume is needed */
+-		"cmpl $0, %c[launched](%%" _ASM_CX ") \n\t"
++		"cmpb $0, %c[launched](%%" _ASM_CX ") \n\t"
+ 		/* Load guest registers.  Don't clobber flags. */
+ 		"mov %c[rax](%%" _ASM_CX "), %%" _ASM_AX " \n\t"
+ 		"mov %c[rbx](%%" _ASM_CX "), %%" _ASM_BX " \n\t"
+@@ -6449,10 +6436,15 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
+ 		"mov %%r13, %c[r13](%%" _ASM_CX ") \n\t"
+ 		"mov %%r14, %c[r14](%%" _ASM_CX ") \n\t"
+ 		"mov %%r15, %c[r15](%%" _ASM_CX ") \n\t"
++
+ 		/*
+-		* Clear host registers marked as clobbered to prevent
+-		* speculative use.
+-		*/
++		 * Clear all general purpose registers (except RSP, which is loaded by
++		 * the CPU during VM-Exit) to prevent speculative use of the guest's
++		 * values, even those that are saved/loaded via the stack.  In theory,
++		 * an L1 cache miss when restoring registers could lead to speculative
++		 * execution with the guest's values.  Zeroing XORs are dirt cheap,
++		 * i.e. the extra paranoia is essentially free.
++		 */
+ 		"xor %%r8d,  %%r8d \n\t"
+ 		"xor %%r9d,  %%r9d \n\t"
+ 		"xor %%r10d, %%r10d \n\t"
+@@ -6467,8 +6459,11 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
+ 
+ 		"xor %%eax, %%eax \n\t"
+ 		"xor %%ebx, %%ebx \n\t"
++		"xor %%ecx, %%ecx \n\t"
++		"xor %%edx, %%edx \n\t"
+ 		"xor %%esi, %%esi \n\t"
+ 		"xor %%edi, %%edi \n\t"
++		"xor %%ebp, %%ebp \n\t"
+ 		"pop  %%" _ASM_BP "; pop  %%" _ASM_DX " \n\t"
+ 	      : ASM_CALL_CONSTRAINT
+ 	      : "c"(vmx), "d"((unsigned long)HOST_RSP), "S"(evmcs_rsp),
+diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
+index 0ac0a64c7790..1abae731c3e4 100644
+--- a/arch/x86/kvm/vmx/vmx.h
++++ b/arch/x86/kvm/vmx/vmx.h
+@@ -191,7 +191,6 @@ struct vcpu_vmx {
+ 	u64		      msr_guest_kernel_gs_base;
+ #endif
+ 
+-	u64		      arch_capabilities;
+ 	u64		      spec_ctrl;
+ 
+ 	u32 vm_entry_controls_shadow;
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 941f932373d0..7ee802a92bc8 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -2443,6 +2443,11 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 		if (msr_info->host_initiated)
+ 			vcpu->arch.microcode_version = data;
+ 		break;
++	case MSR_IA32_ARCH_CAPABILITIES:
++		if (!msr_info->host_initiated)
++			return 1;
++		vcpu->arch.arch_capabilities = data;
++		break;
+ 	case MSR_EFER:
+ 		return set_efer(vcpu, data);
+ 	case MSR_K7_HWCR:
+@@ -2747,6 +2752,12 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 	case MSR_IA32_UCODE_REV:
+ 		msr_info->data = vcpu->arch.microcode_version;
+ 		break;
++	case MSR_IA32_ARCH_CAPABILITIES:
++		if (!msr_info->host_initiated &&
++		    !guest_cpuid_has(vcpu, X86_FEATURE_ARCH_CAPABILITIES))
++			return 1;
++		msr_info->data = vcpu->arch.arch_capabilities;
++		break;
+ 	case MSR_IA32_TSC:
+ 		msr_info->data = kvm_scale_tsc(vcpu, rdtsc()) + vcpu->arch.tsc_offset;
+ 		break;
+@@ -6522,14 +6533,27 @@ int kvm_emulate_instruction_from_buffer(struct kvm_vcpu *vcpu,
+ }
+ EXPORT_SYMBOL_GPL(kvm_emulate_instruction_from_buffer);
+ 
++static int complete_fast_pio_out(struct kvm_vcpu *vcpu)
++{
++	vcpu->arch.pio.count = 0;
++
++	if (unlikely(!kvm_is_linear_rip(vcpu, vcpu->arch.pio.linear_rip)))
++		return 1;
++
++	return kvm_skip_emulated_instruction(vcpu);
++}
++
+ static int kvm_fast_pio_out(struct kvm_vcpu *vcpu, int size,
+ 			    unsigned short port)
+ {
+ 	unsigned long val = kvm_register_read(vcpu, VCPU_REGS_RAX);
+ 	int ret = emulator_pio_out_emulated(&vcpu->arch.emulate_ctxt,
+ 					    size, port, &val, 1);
+-	/* do not return to emulator after return from userspace */
+-	vcpu->arch.pio.count = 0;
++
++	if (!ret) {
++		vcpu->arch.pio.linear_rip = kvm_get_linear_rip(vcpu);
++		vcpu->arch.complete_userspace_io = complete_fast_pio_out;
++	}
+ 	return ret;
+ }
+ 
+@@ -6540,6 +6564,11 @@ static int complete_fast_pio_in(struct kvm_vcpu *vcpu)
+ 	/* We should only ever be called with arch.pio.count equal to 1 */
+ 	BUG_ON(vcpu->arch.pio.count != 1);
+ 
++	if (unlikely(!kvm_is_linear_rip(vcpu, vcpu->arch.pio.linear_rip))) {
++		vcpu->arch.pio.count = 0;
++		return 1;
++	}
++
+ 	/* For size less than 4 we merge, else we zero extend */
+ 	val = (vcpu->arch.pio.size < 4) ? kvm_register_read(vcpu, VCPU_REGS_RAX)
+ 					: 0;
+@@ -6552,7 +6581,7 @@ static int complete_fast_pio_in(struct kvm_vcpu *vcpu)
+ 				 vcpu->arch.pio.port, &val, 1);
+ 	kvm_register_write(vcpu, VCPU_REGS_RAX, val);
+ 
+-	return 1;
++	return kvm_skip_emulated_instruction(vcpu);
+ }
+ 
+ static int kvm_fast_pio_in(struct kvm_vcpu *vcpu, int size,
+@@ -6571,6 +6600,7 @@ static int kvm_fast_pio_in(struct kvm_vcpu *vcpu, int size,
+ 		return ret;
+ 	}
+ 
++	vcpu->arch.pio.linear_rip = kvm_get_linear_rip(vcpu);
+ 	vcpu->arch.complete_userspace_io = complete_fast_pio_in;
+ 
+ 	return 0;
+@@ -6578,16 +6608,13 @@ static int kvm_fast_pio_in(struct kvm_vcpu *vcpu, int size,
+ 
+ int kvm_fast_pio(struct kvm_vcpu *vcpu, int size, unsigned short port, int in)
+ {
+-	int ret = kvm_skip_emulated_instruction(vcpu);
++	int ret;
+ 
+-	/*
+-	 * TODO: we might be squashing a KVM_GUESTDBG_SINGLESTEP-triggered
+-	 * KVM_EXIT_DEBUG here.
+-	 */
+ 	if (in)
+-		return kvm_fast_pio_in(vcpu, size, port) && ret;
++		ret = kvm_fast_pio_in(vcpu, size, port);
+ 	else
+-		return kvm_fast_pio_out(vcpu, size, port) && ret;
++		ret = kvm_fast_pio_out(vcpu, size, port);
++	return ret && kvm_skip_emulated_instruction(vcpu);
+ }
+ EXPORT_SYMBOL_GPL(kvm_fast_pio);
+ 
+@@ -8725,6 +8752,7 @@ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm,
+ 
+ int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
+ {
++	vcpu->arch.arch_capabilities = kvm_get_arch_capabilities();
+ 	vcpu->arch.msr_platform_info = MSR_PLATFORM_INFO_CPUID_FAULT;
+ 	kvm_vcpu_mtrr_init(vcpu);
+ 	vcpu_load(vcpu);
+@@ -9348,13 +9376,13 @@ out_free:
+ 	return -ENOMEM;
+ }
+ 
+-void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots)
++void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen)
+ {
+ 	/*
+ 	 * memslots->generation has been incremented.
+ 	 * mmio generation may have reached its maximum value.
+ 	 */
+-	kvm_mmu_invalidate_mmio_sptes(kvm, slots);
++	kvm_mmu_invalidate_mmio_sptes(kvm, gen);
+ }
+ 
+ int kvm_arch_prepare_memory_region(struct kvm *kvm,
+diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
+index 224cd0a47568..20ede17202bf 100644
+--- a/arch/x86/kvm/x86.h
++++ b/arch/x86/kvm/x86.h
+@@ -181,6 +181,11 @@ static inline bool emul_is_noncanonical_address(u64 la,
+ static inline void vcpu_cache_mmio_info(struct kvm_vcpu *vcpu,
+ 					gva_t gva, gfn_t gfn, unsigned access)
+ {
++	u64 gen = kvm_memslots(vcpu->kvm)->generation;
++
++	if (unlikely(gen & 1))
++		return;
++
+ 	/*
+ 	 * If this is a shadow nested page table, the "GVA" is
+ 	 * actually a nGPA.
+@@ -188,7 +193,7 @@ static inline void vcpu_cache_mmio_info(struct kvm_vcpu *vcpu,
+ 	vcpu->arch.mmio_gva = mmu_is_nested(vcpu) ? 0 : gva & PAGE_MASK;
+ 	vcpu->arch.access = access;
+ 	vcpu->arch.mmio_gfn = gfn;
+-	vcpu->arch.mmio_gen = kvm_memslots(vcpu->kvm)->generation;
++	vcpu->arch.mmio_gen = gen;
+ }
+ 
+ static inline bool vcpu_match_mmio_gen(struct kvm_vcpu *vcpu)
+diff --git a/arch/x86/lib/usercopy_32.c b/arch/x86/lib/usercopy_32.c
+index bfd94e7812fc..7d290777246d 100644
+--- a/arch/x86/lib/usercopy_32.c
++++ b/arch/x86/lib/usercopy_32.c
+@@ -54,13 +54,13 @@ do {									\
+ } while (0)
+ 
+ /**
+- * clear_user: - Zero a block of memory in user space.
++ * clear_user - Zero a block of memory in user space.
+  * @to:   Destination address, in user space.
+  * @n:    Number of bytes to zero.
+  *
+  * Zero a block of memory in user space.
+  *
+- * Returns number of bytes that could not be cleared.
++ * Return: number of bytes that could not be cleared.
+  * On success, this will be zero.
+  */
+ unsigned long
+@@ -74,14 +74,14 @@ clear_user(void __user *to, unsigned long n)
+ EXPORT_SYMBOL(clear_user);
+ 
+ /**
+- * __clear_user: - Zero a block of memory in user space, with less checking.
++ * __clear_user - Zero a block of memory in user space, with less checking.
+  * @to:   Destination address, in user space.
+  * @n:    Number of bytes to zero.
+  *
+  * Zero a block of memory in user space.  Caller must check
+  * the specified block with access_ok() before calling this function.
+  *
+- * Returns number of bytes that could not be cleared.
++ * Return: number of bytes that could not be cleared.
+  * On success, this will be zero.
+  */
+ unsigned long
+diff --git a/arch/x86/pci/fixup.c b/arch/x86/pci/fixup.c
+index 30a5111ae5fd..527e69b12002 100644
+--- a/arch/x86/pci/fixup.c
++++ b/arch/x86/pci/fixup.c
+@@ -635,6 +635,22 @@ static void quirk_no_aersid(struct pci_dev *pdev)
+ DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, PCI_ANY_ID,
+ 			      PCI_CLASS_BRIDGE_PCI, 8, quirk_no_aersid);
+ 
++static void quirk_intel_th_dnv(struct pci_dev *dev)
++{
++	struct resource *r = &dev->resource[4];
++
++	/*
++	 * Denverton reports 2k of RTIT_BAR (intel_th resource 4), which
++	 * appears to be 4 MB in reality.
++	 */
++	if (r->end == r->start + 0x7ff) {
++		r->start = 0;
++		r->end   = 0x3fffff;
++		r->flags |= IORESOURCE_UNSET;
++	}
++}
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x19e1, quirk_intel_th_dnv);
++
+ #ifdef CONFIG_PHYS_ADDR_T_64BIT
+ 
+ #define AMD_141b_MMIO_BASE(x)	(0x80 + (x) * 0x8)
+diff --git a/arch/x86/platform/efi/quirks.c b/arch/x86/platform/efi/quirks.c
+index 17456a1d3f04..6c571ae86947 100644
+--- a/arch/x86/platform/efi/quirks.c
++++ b/arch/x86/platform/efi/quirks.c
+@@ -717,7 +717,7 @@ void efi_recover_from_page_fault(unsigned long phys_addr)
+ 	 * "efi_mm" cannot be used to check if the page fault had occurred
+ 	 * in the firmware context because efi=old_map doesn't use efi_pgd.
+ 	 */
+-	if (efi_rts_work.efi_rts_id == NONE)
++	if (efi_rts_work.efi_rts_id == EFI_NONE)
+ 		return;
+ 
+ 	/*
+@@ -742,7 +742,7 @@ void efi_recover_from_page_fault(unsigned long phys_addr)
+ 	 * because this case occurs *very* rarely and hence could be improved
+ 	 * on a need by basis.
+ 	 */
+-	if (efi_rts_work.efi_rts_id == RESET_SYSTEM) {
++	if (efi_rts_work.efi_rts_id == EFI_RESET_SYSTEM) {
+ 		pr_info("efi_reset_system() buggy! Reboot through BIOS\n");
+ 		machine_real_restart(MRR_BIOS);
+ 		return;
+diff --git a/arch/x86/realmode/rm/Makefile b/arch/x86/realmode/rm/Makefile
+index 4463fa72db94..96cb20de08af 100644
+--- a/arch/x86/realmode/rm/Makefile
++++ b/arch/x86/realmode/rm/Makefile
+@@ -47,7 +47,7 @@ $(obj)/pasyms.h: $(REALMODE_OBJS) FORCE
+ targets += realmode.lds
+ $(obj)/realmode.lds: $(obj)/pasyms.h
+ 
+-LDFLAGS_realmode.elf := --emit-relocs -T
++LDFLAGS_realmode.elf := -m elf_i386 --emit-relocs -T
+ CPPFLAGS_realmode.lds += -P -C -I$(objtree)/$(obj)
+ 
+ targets += realmode.elf
+diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
+index 0f4fe206dcc2..20701977e6c0 100644
+--- a/arch/x86/xen/mmu_pv.c
++++ b/arch/x86/xen/mmu_pv.c
+@@ -2114,10 +2114,10 @@ void __init xen_relocate_p2m(void)
+ 				pt = early_memremap(pt_phys, PAGE_SIZE);
+ 				clear_page(pt);
+ 				for (idx_pte = 0;
+-						idx_pte < min(n_pte, PTRS_PER_PTE);
+-						idx_pte++) {
+-					set_pte(pt + idx_pte,
+-							pfn_pte(p2m_pfn, PAGE_KERNEL));
++				     idx_pte < min(n_pte, PTRS_PER_PTE);
++				     idx_pte++) {
++					pt[idx_pte] = pfn_pte(p2m_pfn,
++							      PAGE_KERNEL);
+ 					p2m_pfn++;
+ 				}
+ 				n_pte -= PTRS_PER_PTE;
+@@ -2125,8 +2125,7 @@ void __init xen_relocate_p2m(void)
+ 				make_lowmem_page_readonly(__va(pt_phys));
+ 				pin_pagetable_pfn(MMUEXT_PIN_L1_TABLE,
+ 						PFN_DOWN(pt_phys));
+-				set_pmd(pmd + idx_pt,
+-						__pmd(_PAGE_TABLE | pt_phys));
++				pmd[idx_pt] = __pmd(_PAGE_TABLE | pt_phys);
+ 				pt_phys += PAGE_SIZE;
+ 			}
+ 			n_pt -= PTRS_PER_PMD;
+@@ -2134,7 +2133,7 @@ void __init xen_relocate_p2m(void)
+ 			make_lowmem_page_readonly(__va(pmd_phys));
+ 			pin_pagetable_pfn(MMUEXT_PIN_L2_TABLE,
+ 					PFN_DOWN(pmd_phys));
+-			set_pud(pud + idx_pmd, __pud(_PAGE_TABLE | pmd_phys));
++			pud[idx_pmd] = __pud(_PAGE_TABLE | pmd_phys);
+ 			pmd_phys += PAGE_SIZE;
+ 		}
+ 		n_pmd -= PTRS_PER_PUD;
+diff --git a/arch/xtensa/kernel/process.c b/arch/xtensa/kernel/process.c
+index 74969a437a37..2e73395f0560 100644
+--- a/arch/xtensa/kernel/process.c
++++ b/arch/xtensa/kernel/process.c
+@@ -321,8 +321,8 @@ unsigned long get_wchan(struct task_struct *p)
+ 
+ 		/* Stack layout: sp-4: ra, sp-3: sp' */
+ 
+-		pc = MAKE_PC_FROM_RA(*(unsigned long*)sp - 4, sp);
+-		sp = *(unsigned long *)sp - 3;
++		pc = MAKE_PC_FROM_RA(SPILL_SLOT(sp, 0), sp);
++		sp = SPILL_SLOT(sp, 1);
+ 	} while (count++ < 16);
+ 	return 0;
+ }
+diff --git a/arch/xtensa/kernel/stacktrace.c b/arch/xtensa/kernel/stacktrace.c
+index 174c11f13bba..b9f82510c650 100644
+--- a/arch/xtensa/kernel/stacktrace.c
++++ b/arch/xtensa/kernel/stacktrace.c
+@@ -253,10 +253,14 @@ static int return_address_cb(struct stackframe *frame, void *data)
+ 	return 1;
+ }
+ 
++/*
++ * level == 0 is for the return address from the caller of this function,
++ * not from this function itself.
++ */
+ unsigned long return_address(unsigned level)
+ {
+ 	struct return_addr_data r = {
+-		.skip = level + 1,
++		.skip = level,
+ 	};
+ 	walk_stackframe(stack_pointer(NULL), return_address_cb, &r);
+ 	return r.addr;
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index cd307767a134..e5ed28629271 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -747,6 +747,7 @@ void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 
+ inc_counter:
+ 	bfqq->weight_counter->num_active++;
++	bfqq->ref++;
+ }
+ 
+ /*
+@@ -771,6 +772,7 @@ void __bfq_weights_tree_remove(struct bfq_data *bfqd,
+ 
+ reset_entity_pointer:
+ 	bfqq->weight_counter = NULL;
++	bfq_put_queue(bfqq);
+ }
+ 
+ /*
+@@ -782,9 +784,6 @@ void bfq_weights_tree_remove(struct bfq_data *bfqd,
+ {
+ 	struct bfq_entity *entity = bfqq->entity.parent;
+ 
+-	__bfq_weights_tree_remove(bfqd, bfqq,
+-				  &bfqd->queue_weights_tree);
+-
+ 	for_each_entity(entity) {
+ 		struct bfq_sched_data *sd = entity->my_sched_data;
+ 
+@@ -818,6 +817,15 @@ void bfq_weights_tree_remove(struct bfq_data *bfqd,
+ 			bfqd->num_groups_with_pending_reqs--;
+ 		}
+ 	}
++
++	/*
++	 * Next function is invoked last, because it causes bfqq to be
++	 * freed if the following holds: bfqq is not in service and
++	 * has no dispatched request. DO NOT use bfqq after the next
++	 * function invocation.
++	 */
++	__bfq_weights_tree_remove(bfqd, bfqq,
++				  &bfqd->queue_weights_tree);
+ }
+ 
+ /*
+@@ -1011,7 +1019,8 @@ bfq_bfqq_resume_state(struct bfq_queue *bfqq, struct bfq_data *bfqd,
+ 
+ static int bfqq_process_refs(struct bfq_queue *bfqq)
+ {
+-	return bfqq->ref - bfqq->allocated - bfqq->entity.on_st;
++	return bfqq->ref - bfqq->allocated - bfqq->entity.on_st -
++		(bfqq->weight_counter != NULL);
+ }
+ 
+ /* Empty burst list and add just bfqq (see comments on bfq_handle_burst) */
+@@ -2224,7 +2233,8 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 
+ 	if (in_service_bfqq && in_service_bfqq != bfqq &&
+ 	    likely(in_service_bfqq != &bfqd->oom_bfqq) &&
+-	    bfq_rq_close_to_sector(io_struct, request, bfqd->last_position) &&
++	    bfq_rq_close_to_sector(io_struct, request,
++				   bfqd->in_serv_last_pos) &&
+ 	    bfqq->entity.parent == in_service_bfqq->entity.parent &&
+ 	    bfq_may_be_close_cooperator(bfqq, in_service_bfqq)) {
+ 		new_bfqq = bfq_setup_merge(bfqq, in_service_bfqq);
+@@ -2764,6 +2774,8 @@ update_rate_and_reset:
+ 	bfq_update_rate_reset(bfqd, rq);
+ update_last_values:
+ 	bfqd->last_position = blk_rq_pos(rq) + blk_rq_sectors(rq);
++	if (RQ_BFQQ(rq) == bfqd->in_service_queue)
++		bfqd->in_serv_last_pos = bfqd->last_position;
+ 	bfqd->last_dispatch = now_ns;
+ }
+ 
+diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
+index 0b02bf302de0..746bd570b85a 100644
+--- a/block/bfq-iosched.h
++++ b/block/bfq-iosched.h
+@@ -537,6 +537,9 @@ struct bfq_data {
+ 	/* on-disk position of the last served request */
+ 	sector_t last_position;
+ 
++	/* position of the last served request for the in-service queue */
++	sector_t in_serv_last_pos;
++
+ 	/* time of last request completion (ns) */
+ 	u64 last_completion;
+ 
+diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
+index 72adbbe975d5..4aab1a8191f0 100644
+--- a/block/bfq-wf2q.c
++++ b/block/bfq-wf2q.c
+@@ -1667,15 +1667,15 @@ void bfq_del_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 
+ 	bfqd->busy_queues--;
+ 
+-	if (!bfqq->dispatched)
+-		bfq_weights_tree_remove(bfqd, bfqq);
+-
+ 	if (bfqq->wr_coeff > 1)
+ 		bfqd->wr_busy_queues--;
+ 
+ 	bfqg_stats_update_dequeue(bfqq_group(bfqq));
+ 
+ 	bfq_deactivate_bfqq(bfqd, bfqq, true, expiration);
++
++	if (!bfqq->dispatched)
++		bfq_weights_tree_remove(bfqd, bfqq);
+ }
+ 
+ /*
+diff --git a/block/bio.c b/block/bio.c
+index 4db1008309ed..a06f58bd4c72 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -1238,8 +1238,11 @@ struct bio *bio_copy_user_iov(struct request_queue *q,
+ 			}
+ 		}
+ 
+-		if (bio_add_pc_page(q, bio, page, bytes, offset) < bytes)
++		if (bio_add_pc_page(q, bio, page, bytes, offset) < bytes) {
++			if (!map_data)
++				__free_page(page);
+ 			break;
++		}
+ 
+ 		len -= bytes;
+ 		offset = 0;
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 6b78ec56a4f2..5bde73a49399 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -1246,8 +1246,6 @@ static int blk_cloned_rq_check_limits(struct request_queue *q,
+  */
+ blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *rq)
+ {
+-	blk_qc_t unused;
+-
+ 	if (blk_cloned_rq_check_limits(q, rq))
+ 		return BLK_STS_IOERR;
+ 
+@@ -1263,7 +1261,7 @@ blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *
+ 	 * bypass a potential scheduler on the bottom device for
+ 	 * insert.
+ 	 */
+-	return blk_mq_try_issue_directly(rq->mq_hctx, rq, &unused, true, true);
++	return blk_mq_request_issue_directly(rq, true);
+ }
+ EXPORT_SYMBOL_GPL(blk_insert_cloned_request);
+ 
+diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
+index 140933e4a7d1..0c98b6c1ca49 100644
+--- a/block/blk-mq-sched.c
++++ b/block/blk-mq-sched.c
+@@ -423,10 +423,12 @@ void blk_mq_sched_insert_requests(struct blk_mq_hw_ctx *hctx,
+ 		 * busy in case of 'none' scheduler, and this way may save
+ 		 * us one extra enqueue & dequeue to sw queue.
+ 		 */
+-		if (!hctx->dispatch_busy && !e && !run_queue_async)
++		if (!hctx->dispatch_busy && !e && !run_queue_async) {
+ 			blk_mq_try_issue_list_directly(hctx, list);
+-		else
+-			blk_mq_insert_requests(hctx, ctx, list);
++			if (list_empty(list))
++				return;
++		}
++		blk_mq_insert_requests(hctx, ctx, list);
+ 	}
+ 
+ 	blk_mq_run_hw_queue(hctx, run_queue_async);
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 9437a5eb07cf..16f9675c57e6 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -1076,7 +1076,13 @@ static int blk_mq_dispatch_wake(wait_queue_entry_t *wait, unsigned mode,
+ 	hctx = container_of(wait, struct blk_mq_hw_ctx, dispatch_wait);
+ 
+ 	spin_lock(&hctx->dispatch_wait_lock);
+-	list_del_init(&wait->entry);
++	if (!list_empty(&wait->entry)) {
++		struct sbitmap_queue *sbq;
++
++		list_del_init(&wait->entry);
++		sbq = &hctx->tags->bitmap_tags;
++		atomic_dec(&sbq->ws_active);
++	}
+ 	spin_unlock(&hctx->dispatch_wait_lock);
+ 
+ 	blk_mq_run_hw_queue(hctx, true);
+@@ -1092,6 +1098,7 @@ static int blk_mq_dispatch_wake(wait_queue_entry_t *wait, unsigned mode,
+ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx *hctx,
+ 				 struct request *rq)
+ {
++	struct sbitmap_queue *sbq = &hctx->tags->bitmap_tags;
+ 	struct wait_queue_head *wq;
+ 	wait_queue_entry_t *wait;
+ 	bool ret;
+@@ -1115,7 +1122,7 @@ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx *hctx,
+ 	if (!list_empty_careful(&wait->entry))
+ 		return false;
+ 
+-	wq = &bt_wait_ptr(&hctx->tags->bitmap_tags, hctx)->wait;
++	wq = &bt_wait_ptr(sbq, hctx)->wait;
+ 
+ 	spin_lock_irq(&wq->lock);
+ 	spin_lock(&hctx->dispatch_wait_lock);
+@@ -1125,6 +1132,7 @@ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx *hctx,
+ 		return false;
+ 	}
+ 
++	atomic_inc(&sbq->ws_active);
+ 	wait->flags &= ~WQ_FLAG_EXCLUSIVE;
+ 	__add_wait_queue(wq, wait);
+ 
+@@ -1145,6 +1153,7 @@ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx *hctx,
+ 	 * someone else gets the wakeup.
+ 	 */
+ 	list_del_init(&wait->entry);
++	atomic_dec(&sbq->ws_active);
+ 	spin_unlock(&hctx->dispatch_wait_lock);
+ 	spin_unlock_irq(&wq->lock);
+ 
+@@ -1796,74 +1805,76 @@ static blk_status_t __blk_mq_issue_directly(struct blk_mq_hw_ctx *hctx,
+ 	return ret;
+ }
+ 
+-blk_status_t blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
++static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
+ 						struct request *rq,
+ 						blk_qc_t *cookie,
+-						bool bypass, bool last)
++						bool bypass_insert, bool last)
+ {
+ 	struct request_queue *q = rq->q;
+ 	bool run_queue = true;
+-	blk_status_t ret = BLK_STS_RESOURCE;
+-	int srcu_idx;
+-	bool force = false;
+ 
+-	hctx_lock(hctx, &srcu_idx);
+ 	/*
+-	 * hctx_lock is needed before checking quiesced flag.
++	 * RCU or SRCU read lock is needed before checking quiesced flag.
+ 	 *
+-	 * When queue is stopped or quiesced, ignore 'bypass', insert
+-	 * and return BLK_STS_OK to caller, and avoid driver to try to
+-	 * dispatch again.
++	 * When queue is stopped or quiesced, ignore 'bypass_insert' from
++	 * blk_mq_request_issue_directly(), and return BLK_STS_OK to caller,
++	 * and avoid driver to try to dispatch again.
+ 	 */
+-	if (unlikely(blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(q))) {
++	if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(q)) {
+ 		run_queue = false;
+-		bypass = false;
+-		goto out_unlock;
++		bypass_insert = false;
++		goto insert;
+ 	}
+ 
+-	if (unlikely(q->elevator && !bypass))
+-		goto out_unlock;
++	if (q->elevator && !bypass_insert)
++		goto insert;
+ 
+ 	if (!blk_mq_get_dispatch_budget(hctx))
+-		goto out_unlock;
++		goto insert;
+ 
+ 	if (!blk_mq_get_driver_tag(rq)) {
+ 		blk_mq_put_dispatch_budget(hctx);
+-		goto out_unlock;
++		goto insert;
+ 	}
+ 
+-	/*
+-	 * Always add a request that has been through
+-	 *.queue_rq() to the hardware dispatch list.
+-	 */
+-	force = true;
+-	ret = __blk_mq_issue_directly(hctx, rq, cookie, last);
+-out_unlock:
++	return __blk_mq_issue_directly(hctx, rq, cookie, last);
++insert:
++	if (bypass_insert)
++		return BLK_STS_RESOURCE;
++
++	blk_mq_request_bypass_insert(rq, run_queue);
++	return BLK_STS_OK;
++}
++
++static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
++		struct request *rq, blk_qc_t *cookie)
++{
++	blk_status_t ret;
++	int srcu_idx;
++
++	might_sleep_if(hctx->flags & BLK_MQ_F_BLOCKING);
++
++	hctx_lock(hctx, &srcu_idx);
++
++	ret = __blk_mq_try_issue_directly(hctx, rq, cookie, false, true);
++	if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE)
++		blk_mq_request_bypass_insert(rq, true);
++	else if (ret != BLK_STS_OK)
++		blk_mq_end_request(rq, ret);
++
++	hctx_unlock(hctx, srcu_idx);
++}
++
++blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last)
++{
++	blk_status_t ret;
++	int srcu_idx;
++	blk_qc_t unused_cookie;
++	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
++
++	hctx_lock(hctx, &srcu_idx);
++	ret = __blk_mq_try_issue_directly(hctx, rq, &unused_cookie, true, last);
+ 	hctx_unlock(hctx, srcu_idx);
+-	switch (ret) {
+-	case BLK_STS_OK:
+-		break;
+-	case BLK_STS_DEV_RESOURCE:
+-	case BLK_STS_RESOURCE:
+-		if (force) {
+-			blk_mq_request_bypass_insert(rq, run_queue);
+-			/*
+-			 * We have to return BLK_STS_OK for the DM
+-			 * to avoid livelock. Otherwise, we return
+-			 * the real result to indicate whether the
+-			 * request is direct-issued successfully.
+-			 */
+-			ret = bypass ? BLK_STS_OK : ret;
+-		} else if (!bypass) {
+-			blk_mq_sched_insert_request(rq, false,
+-						    run_queue, false);
+-		}
+-		break;
+-	default:
+-		if (!bypass)
+-			blk_mq_end_request(rq, ret);
+-		break;
+-	}
+ 
+ 	return ret;
+ }
+@@ -1871,20 +1882,22 @@ out_unlock:
+ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
+ 		struct list_head *list)
+ {
+-	blk_qc_t unused;
+-	blk_status_t ret = BLK_STS_OK;
+-
+ 	while (!list_empty(list)) {
++		blk_status_t ret;
+ 		struct request *rq = list_first_entry(list, struct request,
+ 				queuelist);
+ 
+ 		list_del_init(&rq->queuelist);
+-		if (ret == BLK_STS_OK)
+-			ret = blk_mq_try_issue_directly(hctx, rq, &unused,
+-							false,
++		ret = blk_mq_request_issue_directly(rq, list_empty(list));
++		if (ret != BLK_STS_OK) {
++			if (ret == BLK_STS_RESOURCE ||
++					ret == BLK_STS_DEV_RESOURCE) {
++				blk_mq_request_bypass_insert(rq,
+ 							list_empty(list));
+-		else
+-			blk_mq_sched_insert_request(rq, false, true, false);
++				break;
++			}
++			blk_mq_end_request(rq, ret);
++		}
+ 	}
+ 
+ 	/*
+@@ -1892,7 +1905,7 @@ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
+ 	 * the driver there was more coming, but that turned out to
+ 	 * be a lie.
+ 	 */
+-	if (ret != BLK_STS_OK && hctx->queue->mq_ops->commit_rqs)
++	if (!list_empty(list) && hctx->queue->mq_ops->commit_rqs)
+ 		hctx->queue->mq_ops->commit_rqs(hctx);
+ }
+ 
+@@ -2005,13 +2018,13 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
+ 		if (same_queue_rq) {
+ 			data.hctx = same_queue_rq->mq_hctx;
+ 			blk_mq_try_issue_directly(data.hctx, same_queue_rq,
+-					&cookie, false, true);
++					&cookie);
+ 		}
+ 	} else if ((q->nr_hw_queues > 1 && is_sync) || (!q->elevator &&
+ 			!data.hctx->dispatch_busy)) {
+ 		blk_mq_put_ctx(data.ctx);
+ 		blk_mq_bio_to_request(rq, bio);
+-		blk_mq_try_issue_directly(data.hctx, rq, &cookie, false, true);
++		blk_mq_try_issue_directly(data.hctx, rq, &cookie);
+ 	} else {
+ 		blk_mq_put_ctx(data.ctx);
+ 		blk_mq_bio_to_request(rq, bio);
+diff --git a/block/blk-mq.h b/block/blk-mq.h
+index d0b3dd54ef8d..a3a684a8c633 100644
+--- a/block/blk-mq.h
++++ b/block/blk-mq.h
+@@ -67,10 +67,8 @@ void blk_mq_request_bypass_insert(struct request *rq, bool run_queue);
+ void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
+ 				struct list_head *list);
+ 
+-blk_status_t blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
+-						struct request *rq,
+-						blk_qc_t *cookie,
+-						bool bypass, bool last);
++/* Used by blk_insert_cloned_request() to issue request directly */
++blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last);
+ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
+ 				    struct list_head *list);
+ 
+diff --git a/crypto/aead.c b/crypto/aead.c
+index 189c52d1f63a..4908b5e846f0 100644
+--- a/crypto/aead.c
++++ b/crypto/aead.c
+@@ -61,8 +61,10 @@ int crypto_aead_setkey(struct crypto_aead *tfm,
+ 	else
+ 		err = crypto_aead_alg(tfm)->setkey(tfm, key, keylen);
+ 
+-	if (err)
++	if (unlikely(err)) {
++		crypto_aead_set_flags(tfm, CRYPTO_TFM_NEED_KEY);
+ 		return err;
++	}
+ 
+ 	crypto_aead_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
+ 	return 0;
+diff --git a/crypto/aegis128.c b/crypto/aegis128.c
+index c22f4414856d..789716f92e4c 100644
+--- a/crypto/aegis128.c
++++ b/crypto/aegis128.c
+@@ -290,19 +290,19 @@ static void crypto_aegis128_process_crypt(struct aegis_state *state,
+ 					  const struct aegis128_ops *ops)
+ {
+ 	struct skcipher_walk walk;
+-	u8 *src, *dst;
+-	unsigned int chunksize;
+ 
+ 	ops->skcipher_walk_init(&walk, req, false);
+ 
+ 	while (walk.nbytes) {
+-		src = walk.src.virt.addr;
+-		dst = walk.dst.virt.addr;
+-		chunksize = walk.nbytes;
++		unsigned int nbytes = walk.nbytes;
+ 
+-		ops->crypt_chunk(state, dst, src, chunksize);
++		if (nbytes < walk.total)
++			nbytes = round_down(nbytes, walk.stride);
+ 
+-		skcipher_walk_done(&walk, 0);
++		ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr,
++				 nbytes);
++
++		skcipher_walk_done(&walk, walk.nbytes - nbytes);
+ 	}
+ }
+ 
+diff --git a/crypto/aegis128l.c b/crypto/aegis128l.c
+index b6fb21ebdc3e..73811448cb6b 100644
+--- a/crypto/aegis128l.c
++++ b/crypto/aegis128l.c
+@@ -353,19 +353,19 @@ static void crypto_aegis128l_process_crypt(struct aegis_state *state,
+ 					   const struct aegis128l_ops *ops)
+ {
+ 	struct skcipher_walk walk;
+-	u8 *src, *dst;
+-	unsigned int chunksize;
+ 
+ 	ops->skcipher_walk_init(&walk, req, false);
+ 
+ 	while (walk.nbytes) {
+-		src = walk.src.virt.addr;
+-		dst = walk.dst.virt.addr;
+-		chunksize = walk.nbytes;
++		unsigned int nbytes = walk.nbytes;
+ 
+-		ops->crypt_chunk(state, dst, src, chunksize);
++		if (nbytes < walk.total)
++			nbytes = round_down(nbytes, walk.stride);
+ 
+-		skcipher_walk_done(&walk, 0);
++		ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr,
++				 nbytes);
++
++		skcipher_walk_done(&walk, walk.nbytes - nbytes);
+ 	}
+ }
+ 
+diff --git a/crypto/aegis256.c b/crypto/aegis256.c
+index 11f0f8ec9c7c..8a71e9c06193 100644
+--- a/crypto/aegis256.c
++++ b/crypto/aegis256.c
+@@ -303,19 +303,19 @@ static void crypto_aegis256_process_crypt(struct aegis_state *state,
+ 					  const struct aegis256_ops *ops)
+ {
+ 	struct skcipher_walk walk;
+-	u8 *src, *dst;
+-	unsigned int chunksize;
+ 
+ 	ops->skcipher_walk_init(&walk, req, false);
+ 
+ 	while (walk.nbytes) {
+-		src = walk.src.virt.addr;
+-		dst = walk.dst.virt.addr;
+-		chunksize = walk.nbytes;
++		unsigned int nbytes = walk.nbytes;
+ 
+-		ops->crypt_chunk(state, dst, src, chunksize);
++		if (nbytes < walk.total)
++			nbytes = round_down(nbytes, walk.stride);
+ 
+-		skcipher_walk_done(&walk, 0);
++		ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr,
++				 nbytes);
++
++		skcipher_walk_done(&walk, walk.nbytes - nbytes);
+ 	}
+ }
+ 
+diff --git a/crypto/ahash.c b/crypto/ahash.c
+index 5d320a811f75..81e2767e2164 100644
+--- a/crypto/ahash.c
++++ b/crypto/ahash.c
+@@ -86,17 +86,17 @@ static int hash_walk_new_entry(struct crypto_hash_walk *walk)
+ int crypto_hash_walk_done(struct crypto_hash_walk *walk, int err)
+ {
+ 	unsigned int alignmask = walk->alignmask;
+-	unsigned int nbytes = walk->entrylen;
+ 
+ 	walk->data -= walk->offset;
+ 
+-	if (nbytes && walk->offset & alignmask && !err) {
+-		walk->offset = ALIGN(walk->offset, alignmask + 1);
+-		nbytes = min(nbytes,
+-			     ((unsigned int)(PAGE_SIZE)) - walk->offset);
+-		walk->entrylen -= nbytes;
++	if (walk->entrylen && (walk->offset & alignmask) && !err) {
++		unsigned int nbytes;
+ 
++		walk->offset = ALIGN(walk->offset, alignmask + 1);
++		nbytes = min(walk->entrylen,
++			     (unsigned int)(PAGE_SIZE - walk->offset));
+ 		if (nbytes) {
++			walk->entrylen -= nbytes;
+ 			walk->data += walk->offset;
+ 			return nbytes;
+ 		}
+@@ -116,7 +116,7 @@ int crypto_hash_walk_done(struct crypto_hash_walk *walk, int err)
+ 	if (err)
+ 		return err;
+ 
+-	if (nbytes) {
++	if (walk->entrylen) {
+ 		walk->offset = 0;
+ 		walk->pg++;
+ 		return hash_walk_next(walk);
+@@ -190,6 +190,21 @@ static int ahash_setkey_unaligned(struct crypto_ahash *tfm, const u8 *key,
+ 	return ret;
+ }
+ 
++static int ahash_nosetkey(struct crypto_ahash *tfm, const u8 *key,
++			  unsigned int keylen)
++{
++	return -ENOSYS;
++}
++
++static void ahash_set_needkey(struct crypto_ahash *tfm)
++{
++	const struct hash_alg_common *alg = crypto_hash_alg_common(tfm);
++
++	if (tfm->setkey != ahash_nosetkey &&
++	    !(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
++		crypto_ahash_set_flags(tfm, CRYPTO_TFM_NEED_KEY);
++}
++
+ int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key,
+ 			unsigned int keylen)
+ {
+@@ -201,20 +216,16 @@ int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key,
+ 	else
+ 		err = tfm->setkey(tfm, key, keylen);
+ 
+-	if (err)
++	if (unlikely(err)) {
++		ahash_set_needkey(tfm);
+ 		return err;
++	}
+ 
+ 	crypto_ahash_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(crypto_ahash_setkey);
+ 
+-static int ahash_nosetkey(struct crypto_ahash *tfm, const u8 *key,
+-			  unsigned int keylen)
+-{
+-	return -ENOSYS;
+-}
+-
+ static inline unsigned int ahash_align_buffer_size(unsigned len,
+ 						   unsigned long mask)
+ {
+@@ -489,8 +500,7 @@ static int crypto_ahash_init_tfm(struct crypto_tfm *tfm)
+ 
+ 	if (alg->setkey) {
+ 		hash->setkey = alg->setkey;
+-		if (!(alg->halg.base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
+-			crypto_ahash_set_flags(hash, CRYPTO_TFM_NEED_KEY);
++		ahash_set_needkey(hash);
+ 	}
+ 
+ 	return 0;
+diff --git a/crypto/cfb.c b/crypto/cfb.c
+index e81e45673498..4abfe32ff845 100644
+--- a/crypto/cfb.c
++++ b/crypto/cfb.c
+@@ -77,12 +77,14 @@ static int crypto_cfb_encrypt_segment(struct skcipher_walk *walk,
+ 	do {
+ 		crypto_cfb_encrypt_one(tfm, iv, dst);
+ 		crypto_xor(dst, src, bsize);
+-		memcpy(iv, dst, bsize);
++		iv = dst;
+ 
+ 		src += bsize;
+ 		dst += bsize;
+ 	} while ((nbytes -= bsize) >= bsize);
+ 
++	memcpy(walk->iv, iv, bsize);
++
+ 	return nbytes;
+ }
+ 
+@@ -162,7 +164,7 @@ static int crypto_cfb_decrypt_inplace(struct skcipher_walk *walk,
+ 	const unsigned int bsize = crypto_cfb_bsize(tfm);
+ 	unsigned int nbytes = walk->nbytes;
+ 	u8 *src = walk->src.virt.addr;
+-	u8 *iv = walk->iv;
++	u8 * const iv = walk->iv;
+ 	u8 tmp[MAX_CIPHER_BLOCKSIZE];
+ 
+ 	do {
+@@ -172,8 +174,6 @@ static int crypto_cfb_decrypt_inplace(struct skcipher_walk *walk,
+ 		src += bsize;
+ 	} while ((nbytes -= bsize) >= bsize);
+ 
+-	memcpy(walk->iv, iv, bsize);
+-
+ 	return nbytes;
+ }
+ 
+@@ -298,6 +298,12 @@ static int crypto_cfb_create(struct crypto_template *tmpl, struct rtattr **tb)
+ 	inst->alg.base.cra_blocksize = 1;
+ 	inst->alg.base.cra_alignmask = alg->cra_alignmask;
+ 
++	/*
++	 * To simplify the implementation, configure the skcipher walk to only
++	 * give a partial block at the very end, never earlier.
++	 */
++	inst->alg.chunksize = alg->cra_blocksize;
++
+ 	inst->alg.ivsize = alg->cra_blocksize;
+ 	inst->alg.min_keysize = alg->cra_cipher.cia_min_keysize;
+ 	inst->alg.max_keysize = alg->cra_cipher.cia_max_keysize;
+diff --git a/crypto/morus1280.c b/crypto/morus1280.c
+index 3889c188f266..b83576b4eb55 100644
+--- a/crypto/morus1280.c
++++ b/crypto/morus1280.c
+@@ -366,18 +366,19 @@ static void crypto_morus1280_process_crypt(struct morus1280_state *state,
+ 					   const struct morus1280_ops *ops)
+ {
+ 	struct skcipher_walk walk;
+-	u8 *dst;
+-	const u8 *src;
+ 
+ 	ops->skcipher_walk_init(&walk, req, false);
+ 
+ 	while (walk.nbytes) {
+-		src = walk.src.virt.addr;
+-		dst = walk.dst.virt.addr;
++		unsigned int nbytes = walk.nbytes;
+ 
+-		ops->crypt_chunk(state, dst, src, walk.nbytes);
++		if (nbytes < walk.total)
++			nbytes = round_down(nbytes, walk.stride);
+ 
+-		skcipher_walk_done(&walk, 0);
++		ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr,
++				 nbytes);
++
++		skcipher_walk_done(&walk, walk.nbytes - nbytes);
+ 	}
+ }
+ 
+diff --git a/crypto/morus640.c b/crypto/morus640.c
+index da06ec2f6a80..b6a477444f6d 100644
+--- a/crypto/morus640.c
++++ b/crypto/morus640.c
+@@ -365,18 +365,19 @@ static void crypto_morus640_process_crypt(struct morus640_state *state,
+ 					  const struct morus640_ops *ops)
+ {
+ 	struct skcipher_walk walk;
+-	u8 *dst;
+-	const u8 *src;
+ 
+ 	ops->skcipher_walk_init(&walk, req, false);
+ 
+ 	while (walk.nbytes) {
+-		src = walk.src.virt.addr;
+-		dst = walk.dst.virt.addr;
++		unsigned int nbytes = walk.nbytes;
+ 
+-		ops->crypt_chunk(state, dst, src, walk.nbytes);
++		if (nbytes < walk.total)
++			nbytes = round_down(nbytes, walk.stride);
+ 
+-		skcipher_walk_done(&walk, 0);
++		ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr,
++				 nbytes);
++
++		skcipher_walk_done(&walk, walk.nbytes - nbytes);
+ 	}
+ }
+ 
+diff --git a/crypto/ofb.c b/crypto/ofb.c
+index 886631708c5e..cab0b80953fe 100644
+--- a/crypto/ofb.c
++++ b/crypto/ofb.c
+@@ -5,9 +5,6 @@
+  *
+  * Copyright (C) 2018 ARM Limited or its affiliates.
+  * All rights reserved.
+- *
+- * Based loosely on public domain code gleaned from libtomcrypt
+- * (https://github.com/libtom/libtomcrypt).
+  */
+ 
+ #include <crypto/algapi.h>
+@@ -21,7 +18,6 @@
+ 
+ struct crypto_ofb_ctx {
+ 	struct crypto_cipher *child;
+-	int cnt;
+ };
+ 
+ 
+@@ -41,58 +37,40 @@ static int crypto_ofb_setkey(struct crypto_skcipher *parent, const u8 *key,
+ 	return err;
+ }
+ 
+-static int crypto_ofb_encrypt_segment(struct crypto_ofb_ctx *ctx,
+-				      struct skcipher_walk *walk,
+-				      struct crypto_cipher *tfm)
++static int crypto_ofb_crypt(struct skcipher_request *req)
+ {
+-	int bsize = crypto_cipher_blocksize(tfm);
+-	int nbytes = walk->nbytes;
+-
+-	u8 *src = walk->src.virt.addr;
+-	u8 *dst = walk->dst.virt.addr;
+-	u8 *iv = walk->iv;
+-
+-	do {
+-		if (ctx->cnt == bsize) {
+-			if (nbytes < bsize)
+-				break;
+-			crypto_cipher_encrypt_one(tfm, iv, iv);
+-			ctx->cnt = 0;
+-		}
+-		*dst = *src ^ iv[ctx->cnt];
+-		src++;
+-		dst++;
+-		ctx->cnt++;
+-	} while (--nbytes);
+-	return nbytes;
+-}
+-
+-static int crypto_ofb_encrypt(struct skcipher_request *req)
+-{
+-	struct skcipher_walk walk;
+ 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+-	unsigned int bsize;
+ 	struct crypto_ofb_ctx *ctx = crypto_skcipher_ctx(tfm);
+-	struct crypto_cipher *child = ctx->child;
+-	int ret = 0;
++	struct crypto_cipher *cipher = ctx->child;
++	const unsigned int bsize = crypto_cipher_blocksize(cipher);
++	struct skcipher_walk walk;
++	int err;
+ 
+-	bsize =  crypto_cipher_blocksize(child);
+-	ctx->cnt = bsize;
++	err = skcipher_walk_virt(&walk, req, false);
+ 
+-	ret = skcipher_walk_virt(&walk, req, false);
++	while (walk.nbytes >= bsize) {
++		const u8 *src = walk.src.virt.addr;
++		u8 *dst = walk.dst.virt.addr;
++		u8 * const iv = walk.iv;
++		unsigned int nbytes = walk.nbytes;
+ 
+-	while (walk.nbytes) {
+-		ret = crypto_ofb_encrypt_segment(ctx, &walk, child);
+-		ret = skcipher_walk_done(&walk, ret);
+-	}
++		do {
++			crypto_cipher_encrypt_one(cipher, iv, iv);
++			crypto_xor_cpy(dst, src, iv, bsize);
++			dst += bsize;
++			src += bsize;
++		} while ((nbytes -= bsize) >= bsize);
+ 
+-	return ret;
+-}
++		err = skcipher_walk_done(&walk, nbytes);
++	}
+ 
+-/* OFB encrypt and decrypt are identical */
+-static int crypto_ofb_decrypt(struct skcipher_request *req)
+-{
+-	return crypto_ofb_encrypt(req);
++	if (walk.nbytes) {
++		crypto_cipher_encrypt_one(cipher, walk.iv, walk.iv);
++		crypto_xor_cpy(walk.dst.virt.addr, walk.src.virt.addr, walk.iv,
++			       walk.nbytes);
++		err = skcipher_walk_done(&walk, 0);
++	}
++	return err;
+ }
+ 
+ static int crypto_ofb_init_tfm(struct crypto_skcipher *tfm)
+@@ -165,13 +143,18 @@ static int crypto_ofb_create(struct crypto_template *tmpl, struct rtattr **tb)
+ 	if (err)
+ 		goto err_drop_spawn;
+ 
++	/* OFB mode is a stream cipher. */
++	inst->alg.base.cra_blocksize = 1;
++
++	/*
++	 * To simplify the implementation, configure the skcipher walk to only
++	 * give a partial block at the very end, never earlier.
++	 */
++	inst->alg.chunksize = alg->cra_blocksize;
++
+ 	inst->alg.base.cra_priority = alg->cra_priority;
+-	inst->alg.base.cra_blocksize = alg->cra_blocksize;
+ 	inst->alg.base.cra_alignmask = alg->cra_alignmask;
+ 
+-	/* We access the data as u32s when xoring. */
+-	inst->alg.base.cra_alignmask |= __alignof__(u32) - 1;
+-
+ 	inst->alg.ivsize = alg->cra_blocksize;
+ 	inst->alg.min_keysize = alg->cra_cipher.cia_min_keysize;
+ 	inst->alg.max_keysize = alg->cra_cipher.cia_max_keysize;
+@@ -182,8 +165,8 @@ static int crypto_ofb_create(struct crypto_template *tmpl, struct rtattr **tb)
+ 	inst->alg.exit = crypto_ofb_exit_tfm;
+ 
+ 	inst->alg.setkey = crypto_ofb_setkey;
+-	inst->alg.encrypt = crypto_ofb_encrypt;
+-	inst->alg.decrypt = crypto_ofb_decrypt;
++	inst->alg.encrypt = crypto_ofb_crypt;
++	inst->alg.decrypt = crypto_ofb_crypt;
+ 
+ 	inst->free = crypto_ofb_free;
+ 
+diff --git a/crypto/pcbc.c b/crypto/pcbc.c
+index 8aa10144407c..1b182dfedc94 100644
+--- a/crypto/pcbc.c
++++ b/crypto/pcbc.c
+@@ -51,7 +51,7 @@ static int crypto_pcbc_encrypt_segment(struct skcipher_request *req,
+ 	unsigned int nbytes = walk->nbytes;
+ 	u8 *src = walk->src.virt.addr;
+ 	u8 *dst = walk->dst.virt.addr;
+-	u8 *iv = walk->iv;
++	u8 * const iv = walk->iv;
+ 
+ 	do {
+ 		crypto_xor(iv, src, bsize);
+@@ -72,7 +72,7 @@ static int crypto_pcbc_encrypt_inplace(struct skcipher_request *req,
+ 	int bsize = crypto_cipher_blocksize(tfm);
+ 	unsigned int nbytes = walk->nbytes;
+ 	u8 *src = walk->src.virt.addr;
+-	u8 *iv = walk->iv;
++	u8 * const iv = walk->iv;
+ 	u8 tmpbuf[MAX_CIPHER_BLOCKSIZE];
+ 
+ 	do {
+@@ -84,8 +84,6 @@ static int crypto_pcbc_encrypt_inplace(struct skcipher_request *req,
+ 		src += bsize;
+ 	} while ((nbytes -= bsize) >= bsize);
+ 
+-	memcpy(walk->iv, iv, bsize);
+-
+ 	return nbytes;
+ }
+ 
+@@ -121,7 +119,7 @@ static int crypto_pcbc_decrypt_segment(struct skcipher_request *req,
+ 	unsigned int nbytes = walk->nbytes;
+ 	u8 *src = walk->src.virt.addr;
+ 	u8 *dst = walk->dst.virt.addr;
+-	u8 *iv = walk->iv;
++	u8 * const iv = walk->iv;
+ 
+ 	do {
+ 		crypto_cipher_decrypt_one(tfm, dst, src);
+@@ -132,8 +130,6 @@ static int crypto_pcbc_decrypt_segment(struct skcipher_request *req,
+ 		dst += bsize;
+ 	} while ((nbytes -= bsize) >= bsize);
+ 
+-	memcpy(walk->iv, iv, bsize);
+-
+ 	return nbytes;
+ }
+ 
+@@ -144,7 +140,7 @@ static int crypto_pcbc_decrypt_inplace(struct skcipher_request *req,
+ 	int bsize = crypto_cipher_blocksize(tfm);
+ 	unsigned int nbytes = walk->nbytes;
+ 	u8 *src = walk->src.virt.addr;
+-	u8 *iv = walk->iv;
++	u8 * const iv = walk->iv;
+ 	u8 tmpbuf[MAX_CIPHER_BLOCKSIZE] __aligned(__alignof__(u32));
+ 
+ 	do {
+@@ -156,8 +152,6 @@ static int crypto_pcbc_decrypt_inplace(struct skcipher_request *req,
+ 		src += bsize;
+ 	} while ((nbytes -= bsize) >= bsize);
+ 
+-	memcpy(walk->iv, iv, bsize);
+-
+ 	return nbytes;
+ }
+ 
+diff --git a/crypto/shash.c b/crypto/shash.c
+index 44d297b82a8f..40311ccad3fa 100644
+--- a/crypto/shash.c
++++ b/crypto/shash.c
+@@ -53,6 +53,13 @@ static int shash_setkey_unaligned(struct crypto_shash *tfm, const u8 *key,
+ 	return err;
+ }
+ 
++static void shash_set_needkey(struct crypto_shash *tfm, struct shash_alg *alg)
++{
++	if (crypto_shash_alg_has_setkey(alg) &&
++	    !(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
++		crypto_shash_set_flags(tfm, CRYPTO_TFM_NEED_KEY);
++}
++
+ int crypto_shash_setkey(struct crypto_shash *tfm, const u8 *key,
+ 			unsigned int keylen)
+ {
+@@ -65,8 +72,10 @@ int crypto_shash_setkey(struct crypto_shash *tfm, const u8 *key,
+ 	else
+ 		err = shash->setkey(tfm, key, keylen);
+ 
+-	if (err)
++	if (unlikely(err)) {
++		shash_set_needkey(tfm, shash);
+ 		return err;
++	}
+ 
+ 	crypto_shash_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
+ 	return 0;
+@@ -373,7 +382,8 @@ int crypto_init_shash_ops_async(struct crypto_tfm *tfm)
+ 	crt->final = shash_async_final;
+ 	crt->finup = shash_async_finup;
+ 	crt->digest = shash_async_digest;
+-	crt->setkey = shash_async_setkey;
++	if (crypto_shash_alg_has_setkey(alg))
++		crt->setkey = shash_async_setkey;
+ 
+ 	crypto_ahash_set_flags(crt, crypto_shash_get_flags(shash) &
+ 				    CRYPTO_TFM_NEED_KEY);
+@@ -395,9 +405,7 @@ static int crypto_shash_init_tfm(struct crypto_tfm *tfm)
+ 
+ 	hash->descsize = alg->descsize;
+ 
+-	if (crypto_shash_alg_has_setkey(alg) &&
+-	    !(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
+-		crypto_shash_set_flags(hash, CRYPTO_TFM_NEED_KEY);
++	shash_set_needkey(hash, alg);
+ 
+ 	return 0;
+ }
+diff --git a/crypto/skcipher.c b/crypto/skcipher.c
+index 2a969296bc24..de09ff60991e 100644
+--- a/crypto/skcipher.c
++++ b/crypto/skcipher.c
+@@ -585,6 +585,12 @@ static unsigned int crypto_skcipher_extsize(struct crypto_alg *alg)
+ 	return crypto_alg_extsize(alg);
+ }
+ 
++static void skcipher_set_needkey(struct crypto_skcipher *tfm)
++{
++	if (tfm->keysize)
++		crypto_skcipher_set_flags(tfm, CRYPTO_TFM_NEED_KEY);
++}
++
+ static int skcipher_setkey_blkcipher(struct crypto_skcipher *tfm,
+ 				     const u8 *key, unsigned int keylen)
+ {
+@@ -598,8 +604,10 @@ static int skcipher_setkey_blkcipher(struct crypto_skcipher *tfm,
+ 	err = crypto_blkcipher_setkey(blkcipher, key, keylen);
+ 	crypto_skcipher_set_flags(tfm, crypto_blkcipher_get_flags(blkcipher) &
+ 				       CRYPTO_TFM_RES_MASK);
+-	if (err)
++	if (unlikely(err)) {
++		skcipher_set_needkey(tfm);
+ 		return err;
++	}
+ 
+ 	crypto_skcipher_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
+ 	return 0;
+@@ -677,8 +685,7 @@ static int crypto_init_skcipher_ops_blkcipher(struct crypto_tfm *tfm)
+ 	skcipher->ivsize = crypto_blkcipher_ivsize(blkcipher);
+ 	skcipher->keysize = calg->cra_blkcipher.max_keysize;
+ 
+-	if (skcipher->keysize)
+-		crypto_skcipher_set_flags(skcipher, CRYPTO_TFM_NEED_KEY);
++	skcipher_set_needkey(skcipher);
+ 
+ 	return 0;
+ }
+@@ -698,8 +705,10 @@ static int skcipher_setkey_ablkcipher(struct crypto_skcipher *tfm,
+ 	crypto_skcipher_set_flags(tfm,
+ 				  crypto_ablkcipher_get_flags(ablkcipher) &
+ 				  CRYPTO_TFM_RES_MASK);
+-	if (err)
++	if (unlikely(err)) {
++		skcipher_set_needkey(tfm);
+ 		return err;
++	}
+ 
+ 	crypto_skcipher_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
+ 	return 0;
+@@ -776,8 +785,7 @@ static int crypto_init_skcipher_ops_ablkcipher(struct crypto_tfm *tfm)
+ 			    sizeof(struct ablkcipher_request);
+ 	skcipher->keysize = calg->cra_ablkcipher.max_keysize;
+ 
+-	if (skcipher->keysize)
+-		crypto_skcipher_set_flags(skcipher, CRYPTO_TFM_NEED_KEY);
++	skcipher_set_needkey(skcipher);
+ 
+ 	return 0;
+ }
+@@ -820,8 +828,10 @@ static int skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key,
+ 	else
+ 		err = cipher->setkey(tfm, key, keylen);
+ 
+-	if (err)
++	if (unlikely(err)) {
++		skcipher_set_needkey(tfm);
+ 		return err;
++	}
+ 
+ 	crypto_skcipher_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
+ 	return 0;
+@@ -852,8 +862,7 @@ static int crypto_skcipher_init_tfm(struct crypto_tfm *tfm)
+ 	skcipher->ivsize = alg->ivsize;
+ 	skcipher->keysize = alg->max_keysize;
+ 
+-	if (skcipher->keysize)
+-		crypto_skcipher_set_flags(skcipher, CRYPTO_TFM_NEED_KEY);
++	skcipher_set_needkey(skcipher);
+ 
+ 	if (alg->exit)
+ 		skcipher->base.exit = crypto_skcipher_exit_tfm;
+diff --git a/crypto/testmgr.c b/crypto/testmgr.c
+index 0f684a414acb..b8e4a3ccbfe0 100644
+--- a/crypto/testmgr.c
++++ b/crypto/testmgr.c
+@@ -1894,14 +1894,21 @@ static int alg_test_crc32c(const struct alg_test_desc *desc,
+ 
+ 	err = alg_test_hash(desc, driver, type, mask);
+ 	if (err)
+-		goto out;
++		return err;
+ 
+ 	tfm = crypto_alloc_shash(driver, type, mask);
+ 	if (IS_ERR(tfm)) {
++		if (PTR_ERR(tfm) == -ENOENT) {
++			/*
++			 * This crc32c implementation is only available through
++			 * ahash API, not the shash API, so the remaining part
++			 * of the test is not applicable to it.
++			 */
++			return 0;
++		}
+ 		printk(KERN_ERR "alg: crc32c: Failed to load transform for %s: "
+ 		       "%ld\n", driver, PTR_ERR(tfm));
+-		err = PTR_ERR(tfm);
+-		goto out;
++		return PTR_ERR(tfm);
+ 	}
+ 
+ 	do {
+@@ -1928,7 +1935,6 @@ static int alg_test_crc32c(const struct alg_test_desc *desc,
+ 
+ 	crypto_free_shash(tfm);
+ 
+-out:
+ 	return err;
+ }
+ 
+diff --git a/crypto/testmgr.h b/crypto/testmgr.h
+index e8f47d7b92cd..ca8e8ebef309 100644
+--- a/crypto/testmgr.h
++++ b/crypto/testmgr.h
+@@ -12870,6 +12870,31 @@ static const struct cipher_testvec aes_cfb_tv_template[] = {
+ 			  "\x75\xa3\x85\x74\x1a\xb9\xce\xf8"
+ 			  "\x20\x31\x62\x3d\x55\xb1\xe4\x71",
+ 		.len	= 64,
++		.also_non_np = 1,
++		.np	= 2,
++		.tap	= { 31, 33 },
++	}, { /* > 16 bytes, not a multiple of 16 bytes */
++		.key	= "\x2b\x7e\x15\x16\x28\xae\xd2\xa6"
++			  "\xab\xf7\x15\x88\x09\xcf\x4f\x3c",
++		.klen	= 16,
++		.iv	= "\x00\x01\x02\x03\x04\x05\x06\x07"
++			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
++		.ptext	= "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96"
++			  "\xe9\x3d\x7e\x11\x73\x93\x17\x2a"
++			  "\xae",
++		.ctext	= "\x3b\x3f\xd9\x2e\xb7\x2d\xad\x20"
++			  "\x33\x34\x49\xf8\xe8\x3c\xfb\x4a"
++			  "\xc8",
++		.len	= 17,
++	}, { /* < 16 bytes */
++		.key	= "\x2b\x7e\x15\x16\x28\xae\xd2\xa6"
++			  "\xab\xf7\x15\x88\x09\xcf\x4f\x3c",
++		.klen	= 16,
++		.iv	= "\x00\x01\x02\x03\x04\x05\x06\x07"
++			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
++		.ptext	= "\x6b\xc1\xbe\xe2\x2e\x40\x9f",
++		.ctext	= "\x3b\x3f\xd9\x2e\xb7\x2d\xad",
++		.len	= 7,
+ 	},
+ };
+ 
+@@ -16656,8 +16681,7 @@ static const struct cipher_testvec aes_ctr_rfc3686_tv_template[] = {
+ };
+ 
+ static const struct cipher_testvec aes_ofb_tv_template[] = {
+-	 /* From NIST Special Publication 800-38A, Appendix F.5 */
+-	{
++	{ /* From NIST Special Publication 800-38A, Appendix F.5 */
+ 		.key	= "\x2b\x7e\x15\x16\x28\xae\xd2\xa6"
+ 			  "\xab\xf7\x15\x88\x09\xcf\x4f\x3c",
+ 		.klen	= 16,
+@@ -16680,6 +16704,31 @@ static const struct cipher_testvec aes_ofb_tv_template[] = {
+ 			  "\x30\x4c\x65\x28\xf6\x59\xc7\x78"
+ 			  "\x66\xa5\x10\xd9\xc1\xd6\xae\x5e",
+ 		.len	= 64,
++		.also_non_np = 1,
++		.np	= 2,
++		.tap	= { 31, 33 },
++	}, { /* > 16 bytes, not a multiple of 16 bytes */
++		.key	= "\x2b\x7e\x15\x16\x28\xae\xd2\xa6"
++			  "\xab\xf7\x15\x88\x09\xcf\x4f\x3c",
++		.klen	= 16,
++		.iv	= "\x00\x01\x02\x03\x04\x05\x06\x07"
++			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
++		.ptext	= "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96"
++			  "\xe9\x3d\x7e\x11\x73\x93\x17\x2a"
++			  "\xae",
++		.ctext	= "\x3b\x3f\xd9\x2e\xb7\x2d\xad\x20"
++			  "\x33\x34\x49\xf8\xe8\x3c\xfb\x4a"
++			  "\x77",
++		.len	= 17,
++	}, { /* < 16 bytes */
++		.key	= "\x2b\x7e\x15\x16\x28\xae\xd2\xa6"
++			  "\xab\xf7\x15\x88\x09\xcf\x4f\x3c",
++		.klen	= 16,
++		.iv	= "\x00\x01\x02\x03\x04\x05\x06\x07"
++			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
++		.ptext	= "\x6b\xc1\xbe\xe2\x2e\x40\x9f",
++		.ctext	= "\x3b\x3f\xd9\x2e\xb7\x2d\xad",
++		.len	= 7,
+ 	}
+ };
+ 
+diff --git a/drivers/acpi/acpi_video.c b/drivers/acpi/acpi_video.c
+index f0b52266b3ac..d73afb562ad9 100644
+--- a/drivers/acpi/acpi_video.c
++++ b/drivers/acpi/acpi_video.c
+@@ -2124,21 +2124,29 @@ static int __init intel_opregion_present(void)
+ 	return opregion;
+ }
+ 
++/* Check if the chassis-type indicates there is no builtin LCD panel */
+ static bool dmi_is_desktop(void)
+ {
+ 	const char *chassis_type;
++	unsigned long type;
+ 
+ 	chassis_type = dmi_get_system_info(DMI_CHASSIS_TYPE);
+ 	if (!chassis_type)
+ 		return false;
+ 
+-	if (!strcmp(chassis_type, "3") || /*  3: Desktop */
+-	    !strcmp(chassis_type, "4") || /*  4: Low Profile Desktop */
+-	    !strcmp(chassis_type, "5") || /*  5: Pizza Box */
+-	    !strcmp(chassis_type, "6") || /*  6: Mini Tower */
+-	    !strcmp(chassis_type, "7") || /*  7: Tower */
+-	    !strcmp(chassis_type, "11"))  /* 11: Main Server Chassis */
++	if (kstrtoul(chassis_type, 10, &type) != 0)
++		return false;
++
++	switch (type) {
++	case 0x03: /* Desktop */
++	case 0x04: /* Low Profile Desktop */
++	case 0x05: /* Pizza Box */
++	case 0x06: /* Mini Tower */
++	case 0x07: /* Tower */
++	case 0x10: /* Lunch Box */
++	case 0x11: /* Main Server Chassis */
+ 		return true;
++	}
+ 
+ 	return false;
+ }
+diff --git a/drivers/acpi/acpica/evgpe.c b/drivers/acpi/acpica/evgpe.c
+index e10fec99a182..4424997ecf30 100644
+--- a/drivers/acpi/acpica/evgpe.c
++++ b/drivers/acpi/acpica/evgpe.c
+@@ -81,8 +81,12 @@ acpi_status acpi_ev_enable_gpe(struct acpi_gpe_event_info *gpe_event_info)
+ 
+ 	ACPI_FUNCTION_TRACE(ev_enable_gpe);
+ 
+-	/* Enable the requested GPE */
++	/* Clear the GPE status */
++	status = acpi_hw_clear_gpe(gpe_event_info);
++	if (ACPI_FAILURE(status))
++		return_ACPI_STATUS(status);
+ 
++	/* Enable the requested GPE */
+ 	status = acpi_hw_low_set_gpe(gpe_event_info, ACPI_GPE_ENABLE);
+ 	return_ACPI_STATUS(status);
+ }
+diff --git a/drivers/acpi/acpica/nsobject.c b/drivers/acpi/acpica/nsobject.c
+index 8638f43cfc3d..79d86da1c892 100644
+--- a/drivers/acpi/acpica/nsobject.c
++++ b/drivers/acpi/acpica/nsobject.c
+@@ -186,6 +186,10 @@ void acpi_ns_detach_object(struct acpi_namespace_node *node)
+ 		}
+ 	}
+ 
++	if (obj_desc->common.type == ACPI_TYPE_REGION) {
++		acpi_ut_remove_address_range(obj_desc->region.space_id, node);
++	}
++
+ 	/* Clear the Node entry in all cases */
+ 
+ 	node->object = NULL;
+diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
+index 217a782c3e55..7aa08884ed48 100644
+--- a/drivers/acpi/cppc_acpi.c
++++ b/drivers/acpi/cppc_acpi.c
+@@ -1108,8 +1108,13 @@ int cppc_get_perf_caps(int cpunum, struct cppc_perf_caps *perf_caps)
+ 	cpc_read(cpunum, nominal_reg, &nom);
+ 	perf_caps->nominal_perf = nom;
+ 
+-	cpc_read(cpunum, guaranteed_reg, &guaranteed);
+-	perf_caps->guaranteed_perf = guaranteed;
++	if (guaranteed_reg->type != ACPI_TYPE_BUFFER  ||
++	    IS_NULL_REG(&guaranteed_reg->cpc_entry.reg)) {
++		perf_caps->guaranteed_perf = 0;
++	} else {
++		cpc_read(cpunum, guaranteed_reg, &guaranteed);
++		perf_caps->guaranteed_perf = guaranteed;
++	}
+ 
+ 	cpc_read(cpunum, lowest_non_linear_reg, &min_nonlinear);
+ 	perf_caps->lowest_nonlinear_perf = min_nonlinear;
+diff --git a/drivers/acpi/device_sysfs.c b/drivers/acpi/device_sysfs.c
+index 545e91420cde..8940054d6250 100644
+--- a/drivers/acpi/device_sysfs.c
++++ b/drivers/acpi/device_sysfs.c
+@@ -202,11 +202,15 @@ static int create_of_modalias(struct acpi_device *acpi_dev, char *modalias,
+ {
+ 	struct acpi_buffer buf = { ACPI_ALLOCATE_BUFFER };
+ 	const union acpi_object *of_compatible, *obj;
++	acpi_status status;
+ 	int len, count;
+ 	int i, nval;
+ 	char *c;
+ 
+-	acpi_get_name(acpi_dev->handle, ACPI_SINGLE_NAME, &buf);
++	status = acpi_get_name(acpi_dev->handle, ACPI_SINGLE_NAME, &buf);
++	if (ACPI_FAILURE(status))
++		return -ENODEV;
++
+ 	/* DT strings are all in lower case */
+ 	for (c = buf.pointer; *c != '\0'; c++)
+ 		*c = tolower(*c);
+diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
+index e18ade5d74e9..f75f8f870ce3 100644
+--- a/drivers/acpi/nfit/core.c
++++ b/drivers/acpi/nfit/core.c
+@@ -415,7 +415,7 @@ static int cmd_to_func(struct nfit_mem *nfit_mem, unsigned int cmd,
+ 	if (call_pkg) {
+ 		int i;
+ 
+-		if (nfit_mem->family != call_pkg->nd_family)
++		if (nfit_mem && nfit_mem->family != call_pkg->nd_family)
+ 			return -ENOTTY;
+ 
+ 		for (i = 0; i < ARRAY_SIZE(call_pkg->nd_reserved2); i++)
+@@ -424,6 +424,10 @@ static int cmd_to_func(struct nfit_mem *nfit_mem, unsigned int cmd,
+ 		return call_pkg->nd_command;
+ 	}
+ 
++	/* In the !call_pkg case, bus commands == bus functions */
++	if (!nfit_mem)
++		return cmd;
++
+ 	/* Linux ND commands == NVDIMM_FAMILY_INTEL function numbers */
+ 	if (nfit_mem->family == NVDIMM_FAMILY_INTEL)
+ 		return cmd;
+@@ -454,17 +458,18 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,
+ 	if (cmd_rc)
+ 		*cmd_rc = -EINVAL;
+ 
++	if (cmd == ND_CMD_CALL)
++		call_pkg = buf;
++	func = cmd_to_func(nfit_mem, cmd, call_pkg);
++	if (func < 0)
++		return func;
++
+ 	if (nvdimm) {
+ 		struct acpi_device *adev = nfit_mem->adev;
+ 
+ 		if (!adev)
+ 			return -ENOTTY;
+ 
+-		if (cmd == ND_CMD_CALL)
+-			call_pkg = buf;
+-		func = cmd_to_func(nfit_mem, cmd, call_pkg);
+-		if (func < 0)
+-			return func;
+ 		dimm_name = nvdimm_name(nvdimm);
+ 		cmd_name = nvdimm_cmd_name(cmd);
+ 		cmd_mask = nvdimm_cmd_mask(nvdimm);
+@@ -475,12 +480,9 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,
+ 	} else {
+ 		struct acpi_device *adev = to_acpi_dev(acpi_desc);
+ 
+-		func = cmd;
+ 		cmd_name = nvdimm_bus_cmd_name(cmd);
+ 		cmd_mask = nd_desc->cmd_mask;
+-		dsm_mask = cmd_mask;
+-		if (cmd == ND_CMD_CALL)
+-			dsm_mask = nd_desc->bus_dsm_mask;
++		dsm_mask = nd_desc->bus_dsm_mask;
+ 		desc = nd_cmd_bus_desc(cmd);
+ 		guid = to_nfit_uuid(NFIT_DEV_BUS);
+ 		handle = adev->handle;
+@@ -554,6 +556,13 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,
+ 		return -EINVAL;
+ 	}
+ 
++	if (out_obj->type != ACPI_TYPE_BUFFER) {
++		dev_dbg(dev, "%s unexpected output object type cmd: %s type: %d\n",
++				dimm_name, cmd_name, out_obj->type);
++		rc = -EINVAL;
++		goto out;
++	}
++
+ 	if (call_pkg) {
+ 		call_pkg->nd_fw_size = out_obj->buffer.length;
+ 		memcpy(call_pkg->nd_payload + call_pkg->nd_size_in,
+@@ -572,13 +581,6 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,
+ 		return 0;
+ 	}
+ 
+-	if (out_obj->package.type != ACPI_TYPE_BUFFER) {
+-		dev_dbg(dev, "%s unexpected output object type cmd: %s type: %d\n",
+-				dimm_name, cmd_name, out_obj->type);
+-		rc = -EINVAL;
+-		goto out;
+-	}
+-
+ 	dev_dbg(dev, "%s cmd: %s output length: %d\n", dimm_name,
+ 			cmd_name, out_obj->buffer.length);
+ 	print_hex_dump_debug(cmd_name, DUMP_PREFIX_OFFSET, 4, 4,
+@@ -1759,14 +1761,14 @@ static bool acpi_nvdimm_has_method(struct acpi_device *adev, char *method)
+ 
+ __weak void nfit_intel_shutdown_status(struct nfit_mem *nfit_mem)
+ {
++	struct device *dev = &nfit_mem->adev->dev;
+ 	struct nd_intel_smart smart = { 0 };
+ 	union acpi_object in_buf = {
+-		.type = ACPI_TYPE_BUFFER,
+-		.buffer.pointer = (char *) &smart,
+-		.buffer.length = sizeof(smart),
++		.buffer.type = ACPI_TYPE_BUFFER,
++		.buffer.length = 0,
+ 	};
+ 	union acpi_object in_obj = {
+-		.type = ACPI_TYPE_PACKAGE,
++		.package.type = ACPI_TYPE_PACKAGE,
+ 		.package.count = 1,
+ 		.package.elements = &in_buf,
+ 	};
+@@ -1781,8 +1783,15 @@ __weak void nfit_intel_shutdown_status(struct nfit_mem *nfit_mem)
+ 		return;
+ 
+ 	out_obj = acpi_evaluate_dsm(handle, guid, revid, func, &in_obj);
+-	if (!out_obj)
++	if (!out_obj || out_obj->type != ACPI_TYPE_BUFFER
++			|| out_obj->buffer.length < sizeof(smart)) {
++		dev_dbg(dev->parent, "%s: failed to retrieve initial health\n",
++				dev_name(dev));
++		ACPI_FREE(out_obj);
+ 		return;
++	}
++	memcpy(&smart, out_obj->buffer.pointer, sizeof(smart));
++	ACPI_FREE(out_obj);
+ 
+ 	if (smart.flags & ND_INTEL_SMART_SHUTDOWN_VALID) {
+ 		if (smart.shutdown_state)
+@@ -1793,7 +1802,6 @@ __weak void nfit_intel_shutdown_status(struct nfit_mem *nfit_mem)
+ 		set_bit(NFIT_MEM_DIRTY_COUNT, &nfit_mem->flags);
+ 		nfit_mem->dirty_shutdown = smart.shutdown_count;
+ 	}
+-	ACPI_FREE(out_obj);
+ }
+ 
+ static void populate_shutdown_status(struct nfit_mem *nfit_mem)
+@@ -1915,18 +1923,19 @@ static int acpi_nfit_add_dimm(struct acpi_nfit_desc *acpi_desc,
+ 		| 1 << ND_CMD_SET_CONFIG_DATA;
+ 	if (family == NVDIMM_FAMILY_INTEL
+ 			&& (dsm_mask & label_mask) == label_mask)
+-		return 0;
+-
+-	if (acpi_nvdimm_has_method(adev_dimm, "_LSI")
+-			&& acpi_nvdimm_has_method(adev_dimm, "_LSR")) {
+-		dev_dbg(dev, "%s: has _LSR\n", dev_name(&adev_dimm->dev));
+-		set_bit(NFIT_MEM_LSR, &nfit_mem->flags);
+-	}
++		/* skip _LS{I,R,W} enabling */;
++	else {
++		if (acpi_nvdimm_has_method(adev_dimm, "_LSI")
++				&& acpi_nvdimm_has_method(adev_dimm, "_LSR")) {
++			dev_dbg(dev, "%s: has _LSR\n", dev_name(&adev_dimm->dev));
++			set_bit(NFIT_MEM_LSR, &nfit_mem->flags);
++		}
+ 
+-	if (test_bit(NFIT_MEM_LSR, &nfit_mem->flags)
+-			&& acpi_nvdimm_has_method(adev_dimm, "_LSW")) {
+-		dev_dbg(dev, "%s: has _LSW\n", dev_name(&adev_dimm->dev));
+-		set_bit(NFIT_MEM_LSW, &nfit_mem->flags);
++		if (test_bit(NFIT_MEM_LSR, &nfit_mem->flags)
++				&& acpi_nvdimm_has_method(adev_dimm, "_LSW")) {
++			dev_dbg(dev, "%s: has _LSW\n", dev_name(&adev_dimm->dev));
++			set_bit(NFIT_MEM_LSW, &nfit_mem->flags);
++		}
+ 	}
+ 
+ 	populate_shutdown_status(nfit_mem);
+@@ -3004,14 +3013,16 @@ static int ars_register(struct acpi_nfit_desc *acpi_desc,
+ {
+ 	int rc;
+ 
+-	if (no_init_ars || test_bit(ARS_FAILED, &nfit_spa->ars_state))
++	if (test_bit(ARS_FAILED, &nfit_spa->ars_state))
+ 		return acpi_nfit_register_region(acpi_desc, nfit_spa);
+ 
+ 	set_bit(ARS_REQ_SHORT, &nfit_spa->ars_state);
+-	set_bit(ARS_REQ_LONG, &nfit_spa->ars_state);
++	if (!no_init_ars)
++		set_bit(ARS_REQ_LONG, &nfit_spa->ars_state);
+ 
+ 	switch (acpi_nfit_query_poison(acpi_desc)) {
+ 	case 0:
++	case -ENOSPC:
+ 	case -EAGAIN:
+ 		rc = ars_start(acpi_desc, nfit_spa, ARS_REQ_SHORT);
+ 		/* shouldn't happen, try again later */
+@@ -3036,7 +3047,6 @@ static int ars_register(struct acpi_nfit_desc *acpi_desc,
+ 		break;
+ 	case -EBUSY:
+ 	case -ENOMEM:
+-	case -ENOSPC:
+ 		/*
+ 		 * BIOS was using ARS, wait for it to complete (or
+ 		 * resources to become available) and then perform our
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index 4d2b2ad1ee0e..01f80cbd2741 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -329,6 +329,8 @@ struct binder_error {
+  *                        (invariant after initialized)
+  * @min_priority:         minimum scheduling priority
+  *                        (invariant after initialized)
++ * @txn_security_ctx:     require sender's security context
++ *                        (invariant after initialized)
+  * @async_todo:           list of async work items
+  *                        (protected by @proc->inner_lock)
+  *
+@@ -365,6 +367,7 @@ struct binder_node {
+ 		 * invariant after initialization
+ 		 */
+ 		u8 accept_fds:1;
++		u8 txn_security_ctx:1;
+ 		u8 min_priority;
+ 	};
+ 	bool has_async_transaction;
+@@ -615,6 +618,7 @@ struct binder_transaction {
+ 	long	saved_priority;
+ 	kuid_t	sender_euid;
+ 	struct list_head fd_fixups;
++	binder_uintptr_t security_ctx;
+ 	/**
+ 	 * @lock:  protects @from, @to_proc, and @to_thread
+ 	 *
+@@ -1152,6 +1156,7 @@ static struct binder_node *binder_init_node_ilocked(
+ 	node->work.type = BINDER_WORK_NODE;
+ 	node->min_priority = flags & FLAT_BINDER_FLAG_PRIORITY_MASK;
+ 	node->accept_fds = !!(flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);
++	node->txn_security_ctx = !!(flags & FLAT_BINDER_FLAG_TXN_SECURITY_CTX);
+ 	spin_lock_init(&node->lock);
+ 	INIT_LIST_HEAD(&node->work.entry);
+ 	INIT_LIST_HEAD(&node->async_todo);
+@@ -2778,6 +2783,8 @@ static void binder_transaction(struct binder_proc *proc,
+ 	binder_size_t last_fixup_min_off = 0;
+ 	struct binder_context *context = proc->context;
+ 	int t_debug_id = atomic_inc_return(&binder_last_id);
++	char *secctx = NULL;
++	u32 secctx_sz = 0;
+ 
+ 	e = binder_transaction_log_add(&binder_transaction_log);
+ 	e->debug_id = t_debug_id;
+@@ -3020,6 +3027,20 @@ static void binder_transaction(struct binder_proc *proc,
+ 	t->flags = tr->flags;
+ 	t->priority = task_nice(current);
+ 
++	if (target_node && target_node->txn_security_ctx) {
++		u32 secid;
++
++		security_task_getsecid(proc->tsk, &secid);
++		ret = security_secid_to_secctx(secid, &secctx, &secctx_sz);
++		if (ret) {
++			return_error = BR_FAILED_REPLY;
++			return_error_param = ret;
++			return_error_line = __LINE__;
++			goto err_get_secctx_failed;
++		}
++		extra_buffers_size += ALIGN(secctx_sz, sizeof(u64));
++	}
++
+ 	trace_binder_transaction(reply, t, target_node);
+ 
+ 	t->buffer = binder_alloc_new_buf(&target_proc->alloc, tr->data_size,
+@@ -3036,6 +3057,19 @@ static void binder_transaction(struct binder_proc *proc,
+ 		t->buffer = NULL;
+ 		goto err_binder_alloc_buf_failed;
+ 	}
++	if (secctx) {
++		size_t buf_offset = ALIGN(tr->data_size, sizeof(void *)) +
++				    ALIGN(tr->offsets_size, sizeof(void *)) +
++				    ALIGN(extra_buffers_size, sizeof(void *)) -
++				    ALIGN(secctx_sz, sizeof(u64));
++		char *kptr = t->buffer->data + buf_offset;
++
++		t->security_ctx = (uintptr_t)kptr +
++		    binder_alloc_get_user_buffer_offset(&target_proc->alloc);
++		memcpy(kptr, secctx, secctx_sz);
++		security_release_secctx(secctx, secctx_sz);
++		secctx = NULL;
++	}
+ 	t->buffer->debug_id = t->debug_id;
+ 	t->buffer->transaction = t;
+ 	t->buffer->target_node = target_node;
+@@ -3305,6 +3339,9 @@ err_copy_data_failed:
+ 	t->buffer->transaction = NULL;
+ 	binder_alloc_free_buf(&target_proc->alloc, t->buffer);
+ err_binder_alloc_buf_failed:
++	if (secctx)
++		security_release_secctx(secctx, secctx_sz);
++err_get_secctx_failed:
+ 	kfree(tcomplete);
+ 	binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
+ err_alloc_tcomplete_failed:
+@@ -4036,11 +4073,13 @@ retry:
+ 
+ 	while (1) {
+ 		uint32_t cmd;
+-		struct binder_transaction_data tr;
++		struct binder_transaction_data_secctx tr;
++		struct binder_transaction_data *trd = &tr.transaction_data;
+ 		struct binder_work *w = NULL;
+ 		struct list_head *list = NULL;
+ 		struct binder_transaction *t = NULL;
+ 		struct binder_thread *t_from;
++		size_t trsize = sizeof(*trd);
+ 
+ 		binder_inner_proc_lock(proc);
+ 		if (!binder_worklist_empty_ilocked(&thread->todo))
+@@ -4240,8 +4279,8 @@ retry:
+ 		if (t->buffer->target_node) {
+ 			struct binder_node *target_node = t->buffer->target_node;
+ 
+-			tr.target.ptr = target_node->ptr;
+-			tr.cookie =  target_node->cookie;
++			trd->target.ptr = target_node->ptr;
++			trd->cookie =  target_node->cookie;
+ 			t->saved_priority = task_nice(current);
+ 			if (t->priority < target_node->min_priority &&
+ 			    !(t->flags & TF_ONE_WAY))
+@@ -4251,22 +4290,23 @@ retry:
+ 				binder_set_nice(target_node->min_priority);
+ 			cmd = BR_TRANSACTION;
+ 		} else {
+-			tr.target.ptr = 0;
+-			tr.cookie = 0;
++			trd->target.ptr = 0;
++			trd->cookie = 0;
+ 			cmd = BR_REPLY;
+ 		}
+-		tr.code = t->code;
+-		tr.flags = t->flags;
+-		tr.sender_euid = from_kuid(current_user_ns(), t->sender_euid);
++		trd->code = t->code;
++		trd->flags = t->flags;
++		trd->sender_euid = from_kuid(current_user_ns(), t->sender_euid);
+ 
+ 		t_from = binder_get_txn_from(t);
+ 		if (t_from) {
+ 			struct task_struct *sender = t_from->proc->tsk;
+ 
+-			tr.sender_pid = task_tgid_nr_ns(sender,
+-							task_active_pid_ns(current));
++			trd->sender_pid =
++				task_tgid_nr_ns(sender,
++						task_active_pid_ns(current));
+ 		} else {
+-			tr.sender_pid = 0;
++			trd->sender_pid = 0;
+ 		}
+ 
+ 		ret = binder_apply_fd_fixups(t);
+@@ -4297,15 +4337,20 @@ retry:
+ 			}
+ 			continue;
+ 		}
+-		tr.data_size = t->buffer->data_size;
+-		tr.offsets_size = t->buffer->offsets_size;
+-		tr.data.ptr.buffer = (binder_uintptr_t)
++		trd->data_size = t->buffer->data_size;
++		trd->offsets_size = t->buffer->offsets_size;
++		trd->data.ptr.buffer = (binder_uintptr_t)
+ 			((uintptr_t)t->buffer->data +
+ 			binder_alloc_get_user_buffer_offset(&proc->alloc));
+-		tr.data.ptr.offsets = tr.data.ptr.buffer +
++		trd->data.ptr.offsets = trd->data.ptr.buffer +
+ 					ALIGN(t->buffer->data_size,
+ 					    sizeof(void *));
+ 
++		tr.secctx = t->security_ctx;
++		if (t->security_ctx) {
++			cmd = BR_TRANSACTION_SEC_CTX;
++			trsize = sizeof(tr);
++		}
+ 		if (put_user(cmd, (uint32_t __user *)ptr)) {
+ 			if (t_from)
+ 				binder_thread_dec_tmpref(t_from);
+@@ -4316,7 +4361,7 @@ retry:
+ 			return -EFAULT;
+ 		}
+ 		ptr += sizeof(uint32_t);
+-		if (copy_to_user(ptr, &tr, sizeof(tr))) {
++		if (copy_to_user(ptr, &tr, trsize)) {
+ 			if (t_from)
+ 				binder_thread_dec_tmpref(t_from);
+ 
+@@ -4325,7 +4370,7 @@ retry:
+ 
+ 			return -EFAULT;
+ 		}
+-		ptr += sizeof(tr);
++		ptr += trsize;
+ 
+ 		trace_binder_transaction_received(t);
+ 		binder_stat_br(proc, thread, cmd);
+@@ -4333,16 +4378,18 @@ retry:
+ 			     "%d:%d %s %d %d:%d, cmd %d size %zd-%zd ptr %016llx-%016llx\n",
+ 			     proc->pid, thread->pid,
+ 			     (cmd == BR_TRANSACTION) ? "BR_TRANSACTION" :
+-			     "BR_REPLY",
++				(cmd == BR_TRANSACTION_SEC_CTX) ?
++				     "BR_TRANSACTION_SEC_CTX" : "BR_REPLY",
+ 			     t->debug_id, t_from ? t_from->proc->pid : 0,
+ 			     t_from ? t_from->pid : 0, cmd,
+ 			     t->buffer->data_size, t->buffer->offsets_size,
+-			     (u64)tr.data.ptr.buffer, (u64)tr.data.ptr.offsets);
++			     (u64)trd->data.ptr.buffer,
++			     (u64)trd->data.ptr.offsets);
+ 
+ 		if (t_from)
+ 			binder_thread_dec_tmpref(t_from);
+ 		t->buffer->allow_user_free = 1;
+-		if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {
++		if (cmd != BR_REPLY && !(t->flags & TF_ONE_WAY)) {
+ 			binder_inner_proc_lock(thread->proc);
+ 			t->to_parent = thread->transaction_stack;
+ 			t->to_thread = thread;
+@@ -4690,7 +4737,8 @@ out:
+ 	return ret;
+ }
+ 
+-static int binder_ioctl_set_ctx_mgr(struct file *filp)
++static int binder_ioctl_set_ctx_mgr(struct file *filp,
++				    struct flat_binder_object *fbo)
+ {
+ 	int ret = 0;
+ 	struct binder_proc *proc = filp->private_data;
+@@ -4719,7 +4767,7 @@ static int binder_ioctl_set_ctx_mgr(struct file *filp)
+ 	} else {
+ 		context->binder_context_mgr_uid = curr_euid;
+ 	}
+-	new_node = binder_new_node(proc, NULL);
++	new_node = binder_new_node(proc, fbo);
+ 	if (!new_node) {
+ 		ret = -ENOMEM;
+ 		goto out;
+@@ -4842,8 +4890,20 @@ static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+ 		binder_inner_proc_unlock(proc);
+ 		break;
+ 	}
++	case BINDER_SET_CONTEXT_MGR_EXT: {
++		struct flat_binder_object fbo;
++
++		if (copy_from_user(&fbo, ubuf, sizeof(fbo))) {
++			ret = -EINVAL;
++			goto err;
++		}
++		ret = binder_ioctl_set_ctx_mgr(filp, &fbo);
++		if (ret)
++			goto err;
++		break;
++	}
+ 	case BINDER_SET_CONTEXT_MGR:
+-		ret = binder_ioctl_set_ctx_mgr(filp);
++		ret = binder_ioctl_set_ctx_mgr(filp, NULL);
+ 		if (ret)
+ 			goto err;
+ 		break;
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index 8ac10af17c00..d62487d02455 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -968,9 +968,9 @@ static void __device_release_driver(struct device *dev, struct device *parent)
+ 			drv->remove(dev);
+ 
+ 		device_links_driver_cleanup(dev);
+-		arch_teardown_dma_ops(dev);
+ 
+ 		devres_release_all(dev);
++		arch_teardown_dma_ops(dev);
+ 		dev->driver = NULL;
+ 		dev_set_drvdata(dev, NULL);
+ 		if (dev->pm_domain && dev->pm_domain->dismiss)
+diff --git a/drivers/base/power/wakeup.c b/drivers/base/power/wakeup.c
+index 5fa1898755a3..7c84f64c74f7 100644
+--- a/drivers/base/power/wakeup.c
++++ b/drivers/base/power/wakeup.c
+@@ -118,7 +118,6 @@ void wakeup_source_drop(struct wakeup_source *ws)
+ 	if (!ws)
+ 		return;
+ 
+-	del_timer_sync(&ws->timer);
+ 	__pm_relax(ws);
+ }
+ EXPORT_SYMBOL_GPL(wakeup_source_drop);
+@@ -205,6 +204,13 @@ void wakeup_source_remove(struct wakeup_source *ws)
+ 	list_del_rcu(&ws->entry);
+ 	raw_spin_unlock_irqrestore(&events_lock, flags);
+ 	synchronize_srcu(&wakeup_srcu);
++
++	del_timer_sync(&ws->timer);
++	/*
++	 * Clear timer.function to make wakeup_source_not_registered() treat
++	 * this wakeup source as not registered.
++	 */
++	ws->timer.function = NULL;
+ }
+ EXPORT_SYMBOL_GPL(wakeup_source_remove);
+ 
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index cf5538942834..9a8d83bc1e75 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -656,7 +656,7 @@ static int loop_validate_file(struct file *file, struct block_device *bdev)
+ 			return -EBADF;
+ 
+ 		l = f->f_mapping->host->i_bdev->bd_disk->private_data;
+-		if (l->lo_state == Lo_unbound) {
++		if (l->lo_state != Lo_bound) {
+ 			return -EINVAL;
+ 		}
+ 		f = l->lo_backing_file;
+@@ -1089,16 +1089,12 @@ static int __loop_clr_fd(struct loop_device *lo, bool release)
+ 		kobject_uevent(&disk_to_dev(bdev->bd_disk)->kobj, KOBJ_CHANGE);
+ 	}
+ 	mapping_set_gfp_mask(filp->f_mapping, gfp);
+-	lo->lo_state = Lo_unbound;
+ 	/* This is safe: open() is still holding a reference. */
+ 	module_put(THIS_MODULE);
+ 	blk_mq_unfreeze_queue(lo->lo_queue);
+ 
+ 	partscan = lo->lo_flags & LO_FLAGS_PARTSCAN && bdev;
+ 	lo_number = lo->lo_number;
+-	lo->lo_flags = 0;
+-	if (!part_shift)
+-		lo->lo_disk->flags |= GENHD_FL_NO_PART_SCAN;
+ 	loop_unprepare_queue(lo);
+ out_unlock:
+ 	mutex_unlock(&loop_ctl_mutex);
+@@ -1120,6 +1116,23 @@ out_unlock:
+ 		/* Device is gone, no point in returning error */
+ 		err = 0;
+ 	}
++
++	/*
++	 * lo->lo_state is set to Lo_unbound here after above partscan has
++	 * finished.
++	 *
++	 * There cannot be anybody else entering __loop_clr_fd() as
++	 * lo->lo_backing_file is already cleared and Lo_rundown state
++	 * protects us from all the other places trying to change the 'lo'
++	 * device.
++	 */
++	mutex_lock(&loop_ctl_mutex);
++	lo->lo_flags = 0;
++	if (!part_shift)
++		lo->lo_disk->flags |= GENHD_FL_NO_PART_SCAN;
++	lo->lo_state = Lo_unbound;
++	mutex_unlock(&loop_ctl_mutex);
++
+ 	/*
+ 	 * Need not hold loop_ctl_mutex to fput backing file.
+ 	 * Calling fput holding loop_ctl_mutex triggers a circular
+diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
+index 04ca65912638..684854d3b0ad 100644
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -290,18 +290,8 @@ static ssize_t idle_store(struct device *dev,
+ 	struct zram *zram = dev_to_zram(dev);
+ 	unsigned long nr_pages = zram->disksize >> PAGE_SHIFT;
+ 	int index;
+-	char mode_buf[8];
+-	ssize_t sz;
+ 
+-	sz = strscpy(mode_buf, buf, sizeof(mode_buf));
+-	if (sz <= 0)
+-		return -EINVAL;
+-
+-	/* ignore trailing new line */
+-	if (mode_buf[sz - 1] == '\n')
+-		mode_buf[sz - 1] = 0x00;
+-
+-	if (strcmp(mode_buf, "all"))
++	if (!sysfs_streq(buf, "all"))
+ 		return -EINVAL;
+ 
+ 	down_read(&zram->init_lock);
+@@ -635,25 +625,15 @@ static ssize_t writeback_store(struct device *dev,
+ 	struct bio bio;
+ 	struct bio_vec bio_vec;
+ 	struct page *page;
+-	ssize_t ret, sz;
+-	char mode_buf[8];
+-	int mode = -1;
++	ssize_t ret;
++	int mode;
+ 	unsigned long blk_idx = 0;
+ 
+-	sz = strscpy(mode_buf, buf, sizeof(mode_buf));
+-	if (sz <= 0)
+-		return -EINVAL;
+-
+-	/* ignore trailing newline */
+-	if (mode_buf[sz - 1] == '\n')
+-		mode_buf[sz - 1] = 0x00;
+-
+-	if (!strcmp(mode_buf, "idle"))
++	if (sysfs_streq(buf, "idle"))
+ 		mode = IDLE_WRITEBACK;
+-	else if (!strcmp(mode_buf, "huge"))
++	else if (sysfs_streq(buf, "huge"))
+ 		mode = HUGE_WRITEBACK;
+-
+-	if (mode == -1)
++	else
+ 		return -EINVAL;
+ 
+ 	down_read(&zram->init_lock);
+diff --git a/drivers/bluetooth/btrtl.c b/drivers/bluetooth/btrtl.c
+index 41405de27d66..c91bba00df4e 100644
+--- a/drivers/bluetooth/btrtl.c
++++ b/drivers/bluetooth/btrtl.c
+@@ -552,10 +552,9 @@ struct btrtl_device_info *btrtl_initialize(struct hci_dev *hdev,
+ 					    hdev->bus);
+ 
+ 	if (!btrtl_dev->ic_info) {
+-		rtl_dev_err(hdev, "rtl: unknown IC info, lmp subver %04x, hci rev %04x, hci ver %04x",
++		rtl_dev_info(hdev, "rtl: unknown IC info, lmp subver %04x, hci rev %04x, hci ver %04x",
+ 			    lmp_subver, hci_rev, hci_ver);
+-		ret = -EINVAL;
+-		goto err_free;
++		return btrtl_dev;
+ 	}
+ 
+ 	if (btrtl_dev->ic_info->has_rom_version) {
+@@ -610,6 +609,11 @@ int btrtl_download_firmware(struct hci_dev *hdev,
+ 	 * standard btusb. Once that firmware is uploaded, the subver changes
+ 	 * to a different value.
+ 	 */
++	if (!btrtl_dev->ic_info) {
++		rtl_dev_info(hdev, "rtl: assuming no firmware upload needed\n");
++		return 0;
++	}
++
+ 	switch (btrtl_dev->ic_info->lmp_subver) {
+ 	case RTL_ROM_LMP_8723A:
+ 	case RTL_ROM_LMP_3499:
+diff --git a/drivers/bluetooth/h4_recv.h b/drivers/bluetooth/h4_recv.h
+index b432651f8236..307d82166f48 100644
+--- a/drivers/bluetooth/h4_recv.h
++++ b/drivers/bluetooth/h4_recv.h
+@@ -60,6 +60,10 @@ static inline struct sk_buff *h4_recv_buf(struct hci_dev *hdev,
+ 					  const struct h4_recv_pkt *pkts,
+ 					  int pkts_count)
+ {
++	/* Check for error from previous call */
++	if (IS_ERR(skb))
++		skb = NULL;
++
+ 	while (count) {
+ 		int i, len;
+ 
+diff --git a/drivers/bluetooth/hci_h4.c b/drivers/bluetooth/hci_h4.c
+index fb97a3bf069b..5d97d77627c1 100644
+--- a/drivers/bluetooth/hci_h4.c
++++ b/drivers/bluetooth/hci_h4.c
+@@ -174,6 +174,10 @@ struct sk_buff *h4_recv_buf(struct hci_dev *hdev, struct sk_buff *skb,
+ 	struct hci_uart *hu = hci_get_drvdata(hdev);
+ 	u8 alignment = hu->alignment ? hu->alignment : 1;
+ 
++	/* Check for error from previous call */
++	if (IS_ERR(skb))
++		skb = NULL;
++
+ 	while (count) {
+ 		int i, len;
+ 
+diff --git a/drivers/bluetooth/hci_ldisc.c b/drivers/bluetooth/hci_ldisc.c
+index fbf7b4df23ab..9562e72c1ae5 100644
+--- a/drivers/bluetooth/hci_ldisc.c
++++ b/drivers/bluetooth/hci_ldisc.c
+@@ -207,11 +207,11 @@ void hci_uart_init_work(struct work_struct *work)
+ 	err = hci_register_dev(hu->hdev);
+ 	if (err < 0) {
+ 		BT_ERR("Can't register HCI device");
++		clear_bit(HCI_UART_PROTO_READY, &hu->flags);
++		hu->proto->close(hu);
+ 		hdev = hu->hdev;
+ 		hu->hdev = NULL;
+ 		hci_free_dev(hdev);
+-		clear_bit(HCI_UART_PROTO_READY, &hu->flags);
+-		hu->proto->close(hu);
+ 		return;
+ 	}
+ 
+@@ -616,6 +616,7 @@ static void hci_uart_tty_receive(struct tty_struct *tty, const u8 *data,
+ static int hci_uart_register_dev(struct hci_uart *hu)
+ {
+ 	struct hci_dev *hdev;
++	int err;
+ 
+ 	BT_DBG("");
+ 
+@@ -659,11 +660,22 @@ static int hci_uart_register_dev(struct hci_uart *hu)
+ 	else
+ 		hdev->dev_type = HCI_PRIMARY;
+ 
++	/* Only call open() for the protocol after hdev is fully initialized as
++	 * open() (or a timer/workqueue it starts) may attempt to reference it.
++	 */
++	err = hu->proto->open(hu);
++	if (err) {
++		hu->hdev = NULL;
++		hci_free_dev(hdev);
++		return err;
++	}
++
+ 	if (test_bit(HCI_UART_INIT_PENDING, &hu->hdev_flags))
+ 		return 0;
+ 
+ 	if (hci_register_dev(hdev) < 0) {
+ 		BT_ERR("Can't register HCI device");
++		hu->proto->close(hu);
+ 		hu->hdev = NULL;
+ 		hci_free_dev(hdev);
+ 		return -ENODEV;
+@@ -683,20 +695,14 @@ static int hci_uart_set_proto(struct hci_uart *hu, int id)
+ 	if (!p)
+ 		return -EPROTONOSUPPORT;
+ 
+-	err = p->open(hu);
+-	if (err)
+-		return err;
+-
+ 	hu->proto = p;
+-	set_bit(HCI_UART_PROTO_READY, &hu->flags);
+ 
+ 	err = hci_uart_register_dev(hu);
+ 	if (err) {
+-		clear_bit(HCI_UART_PROTO_READY, &hu->flags);
+-		p->close(hu);
+ 		return err;
+ 	}
+ 
++	set_bit(HCI_UART_PROTO_READY, &hu->flags);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/cdrom/cdrom.c b/drivers/cdrom/cdrom.c
+index 614ecdbb4ab7..933268b8d6a5 100644
+--- a/drivers/cdrom/cdrom.c
++++ b/drivers/cdrom/cdrom.c
+@@ -265,6 +265,7 @@
+ /* #define ERRLOGMASK (CD_WARNING|CD_OPEN|CD_COUNT_TRACKS|CD_CLOSE) */
+ /* #define ERRLOGMASK (CD_WARNING|CD_REG_UNREG|CD_DO_IOCTL|CD_OPEN|CD_CLOSE|CD_COUNT_TRACKS) */
+ 
++#include <linux/atomic.h>
+ #include <linux/module.h>
+ #include <linux/fs.h>
+ #include <linux/major.h>
+@@ -3692,9 +3693,9 @@ static struct ctl_table_header *cdrom_sysctl_header;
+ 
+ static void cdrom_sysctl_register(void)
+ {
+-	static int initialized;
++	static atomic_t initialized = ATOMIC_INIT(0);
+ 
+-	if (initialized == 1)
++	if (!atomic_add_unless(&initialized, 1, 1))
+ 		return;
+ 
+ 	cdrom_sysctl_header = register_sysctl_table(cdrom_root_table);
+@@ -3705,8 +3706,6 @@ static void cdrom_sysctl_register(void)
+ 	cdrom_sysctl_settings.debug = debug;
+ 	cdrom_sysctl_settings.lock = lockdoor;
+ 	cdrom_sysctl_settings.check = check_media_type;
+-
+-	initialized = 1;
+ }
+ 
+ static void cdrom_sysctl_unregister(void)
+diff --git a/drivers/char/Kconfig b/drivers/char/Kconfig
+index 2e2ffe7010aa..51c77f0e47b2 100644
+--- a/drivers/char/Kconfig
++++ b/drivers/char/Kconfig
+@@ -351,7 +351,7 @@ config XILINX_HWICAP
+ 
+ config R3964
+ 	tristate "Siemens R3964 line discipline"
+-	depends on TTY
++	depends on TTY && BROKEN
+ 	---help---
+ 	  This driver allows synchronous communication with devices using the
+ 	  Siemens R3964 packet protocol. Unless you are dealing with special
+diff --git a/drivers/char/applicom.c b/drivers/char/applicom.c
+index c0a5b1f3a986..4ccc39e00ced 100644
+--- a/drivers/char/applicom.c
++++ b/drivers/char/applicom.c
+@@ -32,6 +32,7 @@
+ #include <linux/wait.h>
+ #include <linux/init.h>
+ #include <linux/fs.h>
++#include <linux/nospec.h>
+ 
+ #include <asm/io.h>
+ #include <linux/uaccess.h>
+@@ -386,7 +387,11 @@ static ssize_t ac_write(struct file *file, const char __user *buf, size_t count,
+ 	TicCard = st_loc.tic_des_from_pc;	/* tic number to send            */
+ 	IndexCard = NumCard - 1;
+ 
+-	if((NumCard < 1) || (NumCard > MAX_BOARD) || !apbs[IndexCard].RamIO)
++	if (IndexCard >= MAX_BOARD)
++		return -EINVAL;
++	IndexCard = array_index_nospec(IndexCard, MAX_BOARD);
++
++	if (!apbs[IndexCard].RamIO)
+ 		return -EINVAL;
+ 
+ #ifdef DEBUG
+@@ -697,6 +702,7 @@ static long ac_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ 	unsigned char IndexCard;
+ 	void __iomem *pmem;
+ 	int ret = 0;
++	static int warncount = 10;
+ 	volatile unsigned char byte_reset_it;
+ 	struct st_ram_io *adgl;
+ 	void __user *argp = (void __user *)arg;
+@@ -711,16 +717,12 @@ static long ac_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ 	mutex_lock(&ac_mutex);	
+ 	IndexCard = adgl->num_card-1;
+ 	 
+-	if(cmd != 6 && ((IndexCard >= MAX_BOARD) || !apbs[IndexCard].RamIO)) {
+-		static int warncount = 10;
+-		if (warncount) {
+-			printk( KERN_WARNING "APPLICOM driver IOCTL, bad board number %d\n",(int)IndexCard+1);
+-			warncount--;
+-		}
+-		kfree(adgl);
+-		mutex_unlock(&ac_mutex);
+-		return -EINVAL;
+-	}
++	if (cmd != 6 && IndexCard >= MAX_BOARD)
++		goto err;
++	IndexCard = array_index_nospec(IndexCard, MAX_BOARD);
++
++	if (cmd != 6 && !apbs[IndexCard].RamIO)
++		goto err;
+ 
+ 	switch (cmd) {
+ 		
+@@ -838,5 +840,16 @@ static long ac_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ 	kfree(adgl);
+ 	mutex_unlock(&ac_mutex);
+ 	return 0;
++
++err:
++	if (warncount) {
++		pr_warn("APPLICOM driver IOCTL, bad board number %d\n",
++			(int)IndexCard + 1);
++		warncount--;
++	}
++	kfree(adgl);
++	mutex_unlock(&ac_mutex);
++	return -EINVAL;
++
+ }
+ 
+diff --git a/drivers/char/hpet.c b/drivers/char/hpet.c
+index 4a22b4b41aef..9bffcd37cc7b 100644
+--- a/drivers/char/hpet.c
++++ b/drivers/char/hpet.c
+@@ -377,7 +377,7 @@ static __init int hpet_mmap_enable(char *str)
+ 	pr_info("HPET mmap %s\n", hpet_mmap_enabled ? "enabled" : "disabled");
+ 	return 1;
+ }
+-__setup("hpet_mmap", hpet_mmap_enable);
++__setup("hpet_mmap=", hpet_mmap_enable);
+ 
+ static int hpet_mmap(struct file *file, struct vm_area_struct *vma)
+ {
+diff --git a/drivers/char/hw_random/virtio-rng.c b/drivers/char/hw_random/virtio-rng.c
+index b89df66ea1ae..7abd604e938c 100644
+--- a/drivers/char/hw_random/virtio-rng.c
++++ b/drivers/char/hw_random/virtio-rng.c
+@@ -73,7 +73,7 @@ static int virtio_read(struct hwrng *rng, void *buf, size_t size, bool wait)
+ 
+ 	if (!vi->busy) {
+ 		vi->busy = true;
+-		init_completion(&vi->have_data);
++		reinit_completion(&vi->have_data);
+ 		register_buffer(vi, buf, size);
+ 	}
+ 
+diff --git a/drivers/char/ipmi/ipmi_si.h b/drivers/char/ipmi/ipmi_si.h
+index 52f6152d1fcb..7ae52c17618e 100644
+--- a/drivers/char/ipmi/ipmi_si.h
++++ b/drivers/char/ipmi/ipmi_si.h
+@@ -25,7 +25,9 @@ void ipmi_irq_finish_setup(struct si_sm_io *io);
+ int ipmi_si_remove_by_dev(struct device *dev);
+ void ipmi_si_remove_by_data(int addr_space, enum si_type si_type,
+ 			    unsigned long addr);
+-int ipmi_si_hardcode_find_bmc(void);
++void ipmi_hardcode_init(void);
++void ipmi_si_hardcode_exit(void);
++int ipmi_si_hardcode_match(int addr_type, unsigned long addr);
+ void ipmi_si_platform_init(void);
+ void ipmi_si_platform_shutdown(void);
+ 
+diff --git a/drivers/char/ipmi/ipmi_si_hardcode.c b/drivers/char/ipmi/ipmi_si_hardcode.c
+index 487642809c58..1e5783961b0d 100644
+--- a/drivers/char/ipmi/ipmi_si_hardcode.c
++++ b/drivers/char/ipmi/ipmi_si_hardcode.c
+@@ -3,6 +3,7 @@
+ #define pr_fmt(fmt) "ipmi_hardcode: " fmt
+ 
+ #include <linux/moduleparam.h>
++#include <linux/platform_device.h>
+ #include "ipmi_si.h"
+ 
+ /*
+@@ -12,23 +13,22 @@
+ 
+ #define SI_MAX_PARMS 4
+ 
+-static char          *si_type[SI_MAX_PARMS];
+ #define MAX_SI_TYPE_STR 30
+-static char          si_type_str[MAX_SI_TYPE_STR];
++static char          si_type_str[MAX_SI_TYPE_STR] __initdata;
+ static unsigned long addrs[SI_MAX_PARMS];
+ static unsigned int num_addrs;
+ static unsigned int  ports[SI_MAX_PARMS];
+ static unsigned int num_ports;
+-static int           irqs[SI_MAX_PARMS];
+-static unsigned int num_irqs;
+-static int           regspacings[SI_MAX_PARMS];
+-static unsigned int num_regspacings;
+-static int           regsizes[SI_MAX_PARMS];
+-static unsigned int num_regsizes;
+-static int           regshifts[SI_MAX_PARMS];
+-static unsigned int num_regshifts;
+-static int slave_addrs[SI_MAX_PARMS]; /* Leaving 0 chooses the default value */
+-static unsigned int num_slave_addrs;
++static int           irqs[SI_MAX_PARMS] __initdata;
++static unsigned int num_irqs __initdata;
++static int           regspacings[SI_MAX_PARMS] __initdata;
++static unsigned int num_regspacings __initdata;
++static int           regsizes[SI_MAX_PARMS] __initdata;
++static unsigned int num_regsizes __initdata;
++static int           regshifts[SI_MAX_PARMS] __initdata;
++static unsigned int num_regshifts __initdata;
++static int slave_addrs[SI_MAX_PARMS] __initdata;
++static unsigned int num_slave_addrs __initdata;
+ 
+ module_param_string(type, si_type_str, MAX_SI_TYPE_STR, 0);
+ MODULE_PARM_DESC(type, "Defines the type of each interface, each"
+@@ -73,12 +73,133 @@ MODULE_PARM_DESC(slave_addrs, "Set the default IPMB slave address for"
+ 		 " overridden by this parm.  This is an array indexed"
+ 		 " by interface number.");
+ 
+-int ipmi_si_hardcode_find_bmc(void)
++static struct platform_device *ipmi_hc_pdevs[SI_MAX_PARMS];
++
++static void __init ipmi_hardcode_init_one(const char *si_type_str,
++					  unsigned int i,
++					  unsigned long addr,
++					  unsigned int flags)
+ {
+-	int ret = -ENODEV;
+-	int             i;
+-	struct si_sm_io io;
++	struct platform_device *pdev;
++	unsigned int num_r = 1, size;
++	struct resource r[4];
++	struct property_entry p[6];
++	enum si_type si_type;
++	unsigned int regspacing, regsize;
++	int rv;
++
++	memset(p, 0, sizeof(p));
++	memset(r, 0, sizeof(r));
++
++	if (!si_type_str || !*si_type_str || strcmp(si_type_str, "kcs") == 0) {
++		size = 2;
++		si_type = SI_KCS;
++	} else if (strcmp(si_type_str, "smic") == 0) {
++		size = 2;
++		si_type = SI_SMIC;
++	} else if (strcmp(si_type_str, "bt") == 0) {
++		size = 3;
++		si_type = SI_BT;
++	} else if (strcmp(si_type_str, "invalid") == 0) {
++		/*
++		 * Allow a firmware-specified interface to be
++		 * disabled.
++		 */
++		size = 1;
++		si_type = SI_TYPE_INVALID;
++	} else {
++		pr_warn("Interface type specified for interface %d, was invalid: %s\n",
++			i, si_type_str);
++		return;
++	}
++
++	regsize = regsizes[i];
++	if (regsize == 0)
++		regsize = DEFAULT_REGSIZE;
++
++	p[0] = PROPERTY_ENTRY_U8("ipmi-type", si_type);
++	p[1] = PROPERTY_ENTRY_U8("slave-addr", slave_addrs[i]);
++	p[2] = PROPERTY_ENTRY_U8("addr-source", SI_HARDCODED);
++	p[3] = PROPERTY_ENTRY_U8("reg-shift", regshifts[i]);
++	p[4] = PROPERTY_ENTRY_U8("reg-size", regsize);
++	/* Last entry must be left NULL to terminate it. */
++
++	/*
++	 * Register spacing is derived from the resources in
++	 * the IPMI platform code.
++	 */
++	regspacing = regspacings[i];
++	if (regspacing == 0)
++		regspacing = regsize;
++
++	r[0].start = addr;
++	r[0].end = r[0].start + regsize - 1;
++	r[0].name = "IPMI Address 1";
++	r[0].flags = flags;
++
++	if (size > 1) {
++		r[1].start = r[0].start + regspacing;
++		r[1].end = r[1].start + regsize - 1;
++		r[1].name = "IPMI Address 2";
++		r[1].flags = flags;
++		num_r++;
++	}
++
++	if (size > 2) {
++		r[2].start = r[1].start + regspacing;
++		r[2].end = r[2].start + regsize - 1;
++		r[2].name = "IPMI Address 3";
++		r[2].flags = flags;
++		num_r++;
++	}
++
++	if (irqs[i]) {
++		r[num_r].start = irqs[i];
++		r[num_r].end = irqs[i];
++		r[num_r].name = "IPMI IRQ";
++		r[num_r].flags = IORESOURCE_IRQ;
++		num_r++;
++	}
++
++	pdev = platform_device_alloc("hardcode-ipmi-si", i);
++	if (!pdev) {
++		pr_err("Error allocating IPMI platform device %d\n", i);
++		return;
++	}
++
++	rv = platform_device_add_resources(pdev, r, num_r);
++	if (rv) {
++		dev_err(&pdev->dev,
++			"Unable to add hard-code resources: %d\n", rv);
++		goto err;
++	}
++
++	rv = platform_device_add_properties(pdev, p);
++	if (rv) {
++		dev_err(&pdev->dev,
++			"Unable to add hard-code properties: %d\n", rv);
++		goto err;
++	}
++
++	rv = platform_device_add(pdev);
++	if (rv) {
++		dev_err(&pdev->dev,
++			"Unable to add hard-code device: %d\n", rv);
++		goto err;
++	}
++
++	ipmi_hc_pdevs[i] = pdev;
++	return;
++
++err:
++	platform_device_put(pdev);
++}
++
++void __init ipmi_hardcode_init(void)
++{
++	unsigned int i;
+ 	char *str;
++	char *si_type[SI_MAX_PARMS];
+ 
+ 	/* Parse out the si_type string into its components. */
+ 	str = si_type_str;
+@@ -95,54 +216,45 @@ int ipmi_si_hardcode_find_bmc(void)
+ 		}
+ 	}
+ 
+-	memset(&io, 0, sizeof(io));
+ 	for (i = 0; i < SI_MAX_PARMS; i++) {
+-		if (!ports[i] && !addrs[i])
+-			continue;
+-
+-		io.addr_source = SI_HARDCODED;
+-		pr_info("probing via hardcoded address\n");
+-
+-		if (!si_type[i] || strcmp(si_type[i], "kcs") == 0) {
+-			io.si_type = SI_KCS;
+-		} else if (strcmp(si_type[i], "smic") == 0) {
+-			io.si_type = SI_SMIC;
+-		} else if (strcmp(si_type[i], "bt") == 0) {
+-			io.si_type = SI_BT;
+-		} else {
+-			pr_warn("Interface type specified for interface %d, was invalid: %s\n",
+-				i, si_type[i]);
+-			continue;
+-		}
++		if (i < num_ports && ports[i])
++			ipmi_hardcode_init_one(si_type[i], i, ports[i],
++					       IORESOURCE_IO);
++		if (i < num_addrs && addrs[i])
++			ipmi_hardcode_init_one(si_type[i], i, addrs[i],
++					       IORESOURCE_MEM);
++	}
++}
+ 
+-		if (ports[i]) {
+-			/* An I/O port */
+-			io.addr_data = ports[i];
+-			io.addr_type = IPMI_IO_ADDR_SPACE;
+-		} else if (addrs[i]) {
+-			/* A memory port */
+-			io.addr_data = addrs[i];
+-			io.addr_type = IPMI_MEM_ADDR_SPACE;
+-		} else {
+-			pr_warn("Interface type specified for interface %d, but port and address were not set or set to zero\n",
+-				i);
+-			continue;
+-		}
++void ipmi_si_hardcode_exit(void)
++{
++	unsigned int i;
+ 
+-		io.addr = NULL;
+-		io.regspacing = regspacings[i];
+-		if (!io.regspacing)
+-			io.regspacing = DEFAULT_REGSPACING;
+-		io.regsize = regsizes[i];
+-		if (!io.regsize)
+-			io.regsize = DEFAULT_REGSIZE;
+-		io.regshift = regshifts[i];
+-		io.irq = irqs[i];
+-		if (io.irq)
+-			io.irq_setup = ipmi_std_irq_setup;
+-		io.slave_addr = slave_addrs[i];
+-
+-		ret = ipmi_si_add_smi(&io);
++	for (i = 0; i < SI_MAX_PARMS; i++) {
++		if (ipmi_hc_pdevs[i])
++			platform_device_unregister(ipmi_hc_pdevs[i]);
+ 	}
+-	return ret;
++}
++
++/*
++ * Returns true of the given address exists as a hardcoded address,
++ * false if not.
++ */
++int ipmi_si_hardcode_match(int addr_type, unsigned long addr)
++{
++	unsigned int i;
++
++	if (addr_type == IPMI_IO_ADDR_SPACE) {
++		for (i = 0; i < num_ports; i++) {
++			if (ports[i] == addr)
++				return 1;
++		}
++	} else {
++		for (i = 0; i < num_addrs; i++) {
++			if (addrs[i] == addr)
++				return 1;
++		}
++	}
++
++	return 0;
+ }
+diff --git a/drivers/char/ipmi/ipmi_si_intf.c b/drivers/char/ipmi/ipmi_si_intf.c
+index dc8603d34320..5294abc4c96c 100644
+--- a/drivers/char/ipmi/ipmi_si_intf.c
++++ b/drivers/char/ipmi/ipmi_si_intf.c
+@@ -1862,6 +1862,18 @@ int ipmi_si_add_smi(struct si_sm_io *io)
+ 	int rv = 0;
+ 	struct smi_info *new_smi, *dup;
+ 
++	/*
++	 * If the user gave us a hard-coded device at the same
++	 * address, they presumably want us to use it and not what is
++	 * in the firmware.
++	 */
++	if (io->addr_source != SI_HARDCODED &&
++	    ipmi_si_hardcode_match(io->addr_type, io->addr_data)) {
++		dev_info(io->dev,
++			 "Hard-coded device at this address already exists");
++		return -ENODEV;
++	}
++
+ 	if (!io->io_setup) {
+ 		if (io->addr_type == IPMI_IO_ADDR_SPACE) {
+ 			io->io_setup = ipmi_si_port_setup;
+@@ -2085,11 +2097,16 @@ static int try_smi_init(struct smi_info *new_smi)
+ 	WARN_ON(new_smi->io.dev->init_name != NULL);
+ 
+  out_err:
++	if (rv && new_smi->io.io_cleanup) {
++		new_smi->io.io_cleanup(&new_smi->io);
++		new_smi->io.io_cleanup = NULL;
++	}
++
+ 	kfree(init_name);
+ 	return rv;
+ }
+ 
+-static int init_ipmi_si(void)
++static int __init init_ipmi_si(void)
+ {
+ 	struct smi_info *e;
+ 	enum ipmi_addr_src type = SI_INVALID;
+@@ -2097,11 +2114,9 @@ static int init_ipmi_si(void)
+ 	if (initialized)
+ 		return 0;
+ 
+-	pr_info("IPMI System Interface driver\n");
++	ipmi_hardcode_init();
+ 
+-	/* If the user gave us a device, they presumably want us to use it */
+-	if (!ipmi_si_hardcode_find_bmc())
+-		goto do_scan;
++	pr_info("IPMI System Interface driver\n");
+ 
+ 	ipmi_si_platform_init();
+ 
+@@ -2113,7 +2128,6 @@ static int init_ipmi_si(void)
+ 	   with multiple BMCs we assume that there will be several instances
+ 	   of a given type so if we succeed in registering a type then also
+ 	   try to register everything else of the same type */
+-do_scan:
+ 	mutex_lock(&smi_infos_lock);
+ 	list_for_each_entry(e, &smi_infos, link) {
+ 		/* Try to register a device if it has an IRQ and we either
+@@ -2299,6 +2313,8 @@ static void cleanup_ipmi_si(void)
+ 	list_for_each_entry_safe(e, tmp_e, &smi_infos, link)
+ 		cleanup_one_si(e);
+ 	mutex_unlock(&smi_infos_lock);
++
++	ipmi_si_hardcode_exit();
+ }
+ module_exit(cleanup_ipmi_si);
+ 
+diff --git a/drivers/char/ipmi/ipmi_si_mem_io.c b/drivers/char/ipmi/ipmi_si_mem_io.c
+index fd0ec8d6bf0e..75583612ab10 100644
+--- a/drivers/char/ipmi/ipmi_si_mem_io.c
++++ b/drivers/char/ipmi/ipmi_si_mem_io.c
+@@ -81,8 +81,6 @@ int ipmi_si_mem_setup(struct si_sm_io *io)
+ 	if (!addr)
+ 		return -ENODEV;
+ 
+-	io->io_cleanup = mem_cleanup;
+-
+ 	/*
+ 	 * Figure out the actual readb/readw/readl/etc routine to use based
+ 	 * upon the register size.
+@@ -141,5 +139,8 @@ int ipmi_si_mem_setup(struct si_sm_io *io)
+ 		mem_region_cleanup(io, io->io_size);
+ 		return -EIO;
+ 	}
++
++	io->io_cleanup = mem_cleanup;
++
+ 	return 0;
+ }
+diff --git a/drivers/char/ipmi/ipmi_si_platform.c b/drivers/char/ipmi/ipmi_si_platform.c
+index 15cf819f884f..8158d03542f4 100644
+--- a/drivers/char/ipmi/ipmi_si_platform.c
++++ b/drivers/char/ipmi/ipmi_si_platform.c
+@@ -128,8 +128,6 @@ ipmi_get_info_from_resources(struct platform_device *pdev,
+ 		if (res_second->start > io->addr_data)
+ 			io->regspacing = res_second->start - io->addr_data;
+ 	}
+-	io->regsize = DEFAULT_REGSIZE;
+-	io->regshift = 0;
+ 
+ 	return res;
+ }
+@@ -137,7 +135,7 @@ ipmi_get_info_from_resources(struct platform_device *pdev,
+ static int platform_ipmi_probe(struct platform_device *pdev)
+ {
+ 	struct si_sm_io io;
+-	u8 type, slave_addr, addr_source;
++	u8 type, slave_addr, addr_source, regsize, regshift;
+ 	int rv;
+ 
+ 	rv = device_property_read_u8(&pdev->dev, "addr-source", &addr_source);
+@@ -149,7 +147,7 @@ static int platform_ipmi_probe(struct platform_device *pdev)
+ 	if (addr_source == SI_SMBIOS) {
+ 		if (!si_trydmi)
+ 			return -ENODEV;
+-	} else {
++	} else if (addr_source != SI_HARDCODED) {
+ 		if (!si_tryplatform)
+ 			return -ENODEV;
+ 	}
+@@ -169,11 +167,23 @@ static int platform_ipmi_probe(struct platform_device *pdev)
+ 	case SI_BT:
+ 		io.si_type = type;
+ 		break;
++	case SI_TYPE_INVALID: /* User disabled this in hardcode. */
++		return -ENODEV;
+ 	default:
+ 		dev_err(&pdev->dev, "ipmi-type property is invalid\n");
+ 		return -EINVAL;
+ 	}
+ 
++	io.regsize = DEFAULT_REGSIZE;
++	rv = device_property_read_u8(&pdev->dev, "reg-size", &regsize);
++	if (!rv)
++		io.regsize = regsize;
++
++	io.regshift = 0;
++	rv = device_property_read_u8(&pdev->dev, "reg-shift", &regshift);
++	if (!rv)
++		io.regshift = regshift;
++
+ 	if (!ipmi_get_info_from_resources(pdev, &io))
+ 		return -EINVAL;
+ 
+@@ -193,7 +203,8 @@ static int platform_ipmi_probe(struct platform_device *pdev)
+ 
+ 	io.dev = &pdev->dev;
+ 
+-	pr_info("ipmi_si: SMBIOS: %s %#lx regsize %d spacing %d irq %d\n",
++	pr_info("ipmi_si: %s: %s %#lx regsize %d spacing %d irq %d\n",
++		ipmi_addr_src_to_str(addr_source),
+ 		(io.addr_type == IPMI_IO_ADDR_SPACE) ? "io" : "mem",
+ 		io.addr_data, io.regsize, io.regspacing, io.irq);
+ 
+@@ -358,6 +369,9 @@ static int acpi_ipmi_probe(struct platform_device *pdev)
+ 		goto err_free;
+ 	}
+ 
++	io.regsize = DEFAULT_REGSIZE;
++	io.regshift = 0;
++
+ 	res = ipmi_get_info_from_resources(pdev, &io);
+ 	if (!res) {
+ 		rv = -EINVAL;
+@@ -420,8 +434,9 @@ static int ipmi_remove(struct platform_device *pdev)
+ }
+ 
+ static const struct platform_device_id si_plat_ids[] = {
+-    { "dmi-ipmi-si", 0 },
+-    { }
++	{ "dmi-ipmi-si", 0 },
++	{ "hardcode-ipmi-si", 0 },
++	{ }
+ };
+ 
+ struct platform_driver ipmi_platform_driver = {
+diff --git a/drivers/char/ipmi/ipmi_si_port_io.c b/drivers/char/ipmi/ipmi_si_port_io.c
+index ef6dffcea9fa..03924c32b6e9 100644
+--- a/drivers/char/ipmi/ipmi_si_port_io.c
++++ b/drivers/char/ipmi/ipmi_si_port_io.c
+@@ -68,8 +68,6 @@ int ipmi_si_port_setup(struct si_sm_io *io)
+ 	if (!addr)
+ 		return -ENODEV;
+ 
+-	io->io_cleanup = port_cleanup;
+-
+ 	/*
+ 	 * Figure out the actual inb/inw/inl/etc routine to use based
+ 	 * upon the register size.
+@@ -109,5 +107,8 @@ int ipmi_si_port_setup(struct si_sm_io *io)
+ 			return -EIO;
+ 		}
+ 	}
++
++	io->io_cleanup = port_cleanup;
++
+ 	return 0;
+ }
+diff --git a/drivers/char/tpm/st33zp24/st33zp24.c b/drivers/char/tpm/st33zp24/st33zp24.c
+index 64dc560859f2..13dc614b7ebc 100644
+--- a/drivers/char/tpm/st33zp24/st33zp24.c
++++ b/drivers/char/tpm/st33zp24/st33zp24.c
+@@ -436,7 +436,7 @@ static int st33zp24_send(struct tpm_chip *chip, unsigned char *buf,
+ 			goto out_err;
+ 	}
+ 
+-	return len;
++	return 0;
+ out_err:
+ 	st33zp24_cancel(chip);
+ 	release_locality(chip);
+diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
+index d9439f9abe78..88d2e01a651d 100644
+--- a/drivers/char/tpm/tpm-interface.c
++++ b/drivers/char/tpm/tpm-interface.c
+@@ -230,10 +230,19 @@ static ssize_t tpm_try_transmit(struct tpm_chip *chip,
+ 	if (rc < 0) {
+ 		if (rc != -EPIPE)
+ 			dev_err(&chip->dev,
+-				"%s: tpm_send: error %d\n", __func__, rc);
++				"%s: send(): error %d\n", __func__, rc);
+ 		goto out;
+ 	}
+ 
++	/* A sanity check. send() should just return zero on success e.g.
++	 * not the command length.
++	 */
++	if (rc > 0) {
++		dev_warn(&chip->dev,
++			 "%s: send(): invalid value %d\n", __func__, rc);
++		rc = 0;
++	}
++
+ 	if (chip->flags & TPM_CHIP_FLAG_IRQ)
+ 		goto out_recv;
+ 
+diff --git a/drivers/char/tpm/tpm_atmel.c b/drivers/char/tpm/tpm_atmel.c
+index 66a14526aaf4..a290b30a0c35 100644
+--- a/drivers/char/tpm/tpm_atmel.c
++++ b/drivers/char/tpm/tpm_atmel.c
+@@ -105,7 +105,7 @@ static int tpm_atml_send(struct tpm_chip *chip, u8 *buf, size_t count)
+ 		iowrite8(buf[i], priv->iobase);
+ 	}
+ 
+-	return count;
++	return 0;
+ }
+ 
+ static void tpm_atml_cancel(struct tpm_chip *chip)
+diff --git a/drivers/char/tpm/tpm_crb.c b/drivers/char/tpm/tpm_crb.c
+index 36952ef98f90..763fc7e6c005 100644
+--- a/drivers/char/tpm/tpm_crb.c
++++ b/drivers/char/tpm/tpm_crb.c
+@@ -287,19 +287,29 @@ static int crb_recv(struct tpm_chip *chip, u8 *buf, size_t count)
+ 	struct crb_priv *priv = dev_get_drvdata(&chip->dev);
+ 	unsigned int expected;
+ 
+-	/* sanity check */
+-	if (count < 6)
++	/* A sanity check that the upper layer wants to get at least the header
++	 * as that is the minimum size for any TPM response.
++	 */
++	if (count < TPM_HEADER_SIZE)
+ 		return -EIO;
+ 
++	/* If this bit is set, according to the spec, the TPM is in
++	 * unrecoverable condition.
++	 */
+ 	if (ioread32(&priv->regs_t->ctrl_sts) & CRB_CTRL_STS_ERROR)
+ 		return -EIO;
+ 
+-	memcpy_fromio(buf, priv->rsp, 6);
+-	expected = be32_to_cpup((__be32 *) &buf[2]);
+-	if (expected > count || expected < 6)
++	/* Read the first 8 bytes in order to get the length of the response.
++	 * We read exactly a quad word in order to make sure that the remaining
++	 * reads will be aligned.
++	 */
++	memcpy_fromio(buf, priv->rsp, 8);
++
++	expected = be32_to_cpup((__be32 *)&buf[2]);
++	if (expected > count || expected < TPM_HEADER_SIZE)
+ 		return -EIO;
+ 
+-	memcpy_fromio(&buf[6], &priv->rsp[6], expected - 6);
++	memcpy_fromio(&buf[8], &priv->rsp[8], expected - 8);
+ 
+ 	return expected;
+ }
+diff --git a/drivers/char/tpm/tpm_i2c_atmel.c b/drivers/char/tpm/tpm_i2c_atmel.c
+index 95ce2e9ccdc6..32a8e27c5382 100644
+--- a/drivers/char/tpm/tpm_i2c_atmel.c
++++ b/drivers/char/tpm/tpm_i2c_atmel.c
+@@ -65,7 +65,11 @@ static int i2c_atmel_send(struct tpm_chip *chip, u8 *buf, size_t len)
+ 	dev_dbg(&chip->dev,
+ 		"%s(buf=%*ph len=%0zx) -> sts=%d\n", __func__,
+ 		(int)min_t(size_t, 64, len), buf, len, status);
+-	return status;
++
++	if (status < 0)
++		return status;
++
++	return 0;
+ }
+ 
+ static int i2c_atmel_recv(struct tpm_chip *chip, u8 *buf, size_t count)
+diff --git a/drivers/char/tpm/tpm_i2c_infineon.c b/drivers/char/tpm/tpm_i2c_infineon.c
+index 9086edc9066b..977fd42daa1b 100644
+--- a/drivers/char/tpm/tpm_i2c_infineon.c
++++ b/drivers/char/tpm/tpm_i2c_infineon.c
+@@ -587,7 +587,7 @@ static int tpm_tis_i2c_send(struct tpm_chip *chip, u8 *buf, size_t len)
+ 	/* go and do it */
+ 	iic_tpm_write(TPM_STS(tpm_dev.locality), &sts, 1);
+ 
+-	return len;
++	return 0;
+ out_err:
+ 	tpm_tis_i2c_ready(chip);
+ 	/* The TPM needs some time to clean up here,
+diff --git a/drivers/char/tpm/tpm_i2c_nuvoton.c b/drivers/char/tpm/tpm_i2c_nuvoton.c
+index 217f7f1cbde8..058220edb8b3 100644
+--- a/drivers/char/tpm/tpm_i2c_nuvoton.c
++++ b/drivers/char/tpm/tpm_i2c_nuvoton.c
+@@ -467,7 +467,7 @@ static int i2c_nuvoton_send(struct tpm_chip *chip, u8 *buf, size_t len)
+ 	}
+ 
+ 	dev_dbg(dev, "%s() -> %zd\n", __func__, len);
+-	return len;
++	return 0;
+ }
+ 
+ static bool i2c_nuvoton_req_canceled(struct tpm_chip *chip, u8 status)
+diff --git a/drivers/char/tpm/tpm_ibmvtpm.c b/drivers/char/tpm/tpm_ibmvtpm.c
+index 07b5a487d0c8..757ca45b39b8 100644
+--- a/drivers/char/tpm/tpm_ibmvtpm.c
++++ b/drivers/char/tpm/tpm_ibmvtpm.c
+@@ -139,14 +139,14 @@ static int tpm_ibmvtpm_recv(struct tpm_chip *chip, u8 *buf, size_t count)
+ }
+ 
+ /**
+- * tpm_ibmvtpm_send - Send tpm request
+- *
++ * tpm_ibmvtpm_send() - Send a TPM command
+  * @chip:	tpm chip struct
+  * @buf:	buffer contains data to send
+  * @count:	size of buffer
+  *
+  * Return:
+- *	Number of bytes sent or < 0 on error.
++ *   0 on success,
++ *   -errno on error
+  */
+ static int tpm_ibmvtpm_send(struct tpm_chip *chip, u8 *buf, size_t count)
+ {
+@@ -192,7 +192,7 @@ static int tpm_ibmvtpm_send(struct tpm_chip *chip, u8 *buf, size_t count)
+ 		rc = 0;
+ 		ibmvtpm->tpm_processing_cmd = false;
+ 	} else
+-		rc = count;
++		rc = 0;
+ 
+ 	spin_unlock(&ibmvtpm->rtce_lock);
+ 	return rc;
+diff --git a/drivers/char/tpm/tpm_infineon.c b/drivers/char/tpm/tpm_infineon.c
+index d8f10047fbba..97f6d4fe0aee 100644
+--- a/drivers/char/tpm/tpm_infineon.c
++++ b/drivers/char/tpm/tpm_infineon.c
+@@ -354,7 +354,7 @@ static int tpm_inf_send(struct tpm_chip *chip, u8 * buf, size_t count)
+ 	for (i = 0; i < count; i++) {
+ 		wait_and_send(chip, buf[i]);
+ 	}
+-	return count;
++	return 0;
+ }
+ 
+ static void tpm_inf_cancel(struct tpm_chip *chip)
+diff --git a/drivers/char/tpm/tpm_nsc.c b/drivers/char/tpm/tpm_nsc.c
+index 5d6cce74cd3f..9bee3c5eb4bf 100644
+--- a/drivers/char/tpm/tpm_nsc.c
++++ b/drivers/char/tpm/tpm_nsc.c
+@@ -226,7 +226,7 @@ static int tpm_nsc_send(struct tpm_chip *chip, u8 * buf, size_t count)
+ 	}
+ 	outb(NSC_COMMAND_EOC, priv->base + NSC_COMMAND);
+ 
+-	return count;
++	return 0;
+ }
+ 
+ static void tpm_nsc_cancel(struct tpm_chip *chip)
+diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
+index bf7e49cfa643..bb0c2e160562 100644
+--- a/drivers/char/tpm/tpm_tis_core.c
++++ b/drivers/char/tpm/tpm_tis_core.c
+@@ -481,7 +481,7 @@ static int tpm_tis_send_main(struct tpm_chip *chip, const u8 *buf, size_t len)
+ 			goto out_err;
+ 		}
+ 	}
+-	return len;
++	return 0;
+ out_err:
+ 	tpm_tis_ready(chip);
+ 	return rc;
+diff --git a/drivers/char/tpm/tpm_vtpm_proxy.c b/drivers/char/tpm/tpm_vtpm_proxy.c
+index 87a0ce47f201..ecbb63f8d231 100644
+--- a/drivers/char/tpm/tpm_vtpm_proxy.c
++++ b/drivers/char/tpm/tpm_vtpm_proxy.c
+@@ -335,7 +335,6 @@ static int vtpm_proxy_is_driver_command(struct tpm_chip *chip,
+ static int vtpm_proxy_tpm_op_send(struct tpm_chip *chip, u8 *buf, size_t count)
+ {
+ 	struct proxy_dev *proxy_dev = dev_get_drvdata(&chip->dev);
+-	int rc = 0;
+ 
+ 	if (count > sizeof(proxy_dev->buffer)) {
+ 		dev_err(&chip->dev,
+@@ -366,7 +365,7 @@ static int vtpm_proxy_tpm_op_send(struct tpm_chip *chip, u8 *buf, size_t count)
+ 
+ 	wake_up_interruptible(&proxy_dev->wq);
+ 
+-	return rc;
++	return 0;
+ }
+ 
+ static void vtpm_proxy_tpm_op_cancel(struct tpm_chip *chip)
+diff --git a/drivers/char/tpm/xen-tpmfront.c b/drivers/char/tpm/xen-tpmfront.c
+index b150f87f38f5..5a327eb7f63a 100644
+--- a/drivers/char/tpm/xen-tpmfront.c
++++ b/drivers/char/tpm/xen-tpmfront.c
+@@ -173,7 +173,7 @@ static int vtpm_send(struct tpm_chip *chip, u8 *buf, size_t count)
+ 		return -ETIME;
+ 	}
+ 
+-	return count;
++	return 0;
+ }
+ 
+ static int vtpm_recv(struct tpm_chip *chip, u8 *buf, size_t count)
+diff --git a/drivers/clk/clk-fractional-divider.c b/drivers/clk/clk-fractional-divider.c
+index 545dceec0bbf..fdfe2e423d15 100644
+--- a/drivers/clk/clk-fractional-divider.c
++++ b/drivers/clk/clk-fractional-divider.c
+@@ -79,7 +79,7 @@ static long clk_fd_round_rate(struct clk_hw *hw, unsigned long rate,
+ 	unsigned long m, n;
+ 	u64 ret;
+ 
+-	if (!rate || rate >= *parent_rate)
++	if (!rate || (!clk_hw_can_set_rate_parent(hw) && rate >= *parent_rate))
+ 		return *parent_rate;
+ 
+ 	if (fd->approximation)
+diff --git a/drivers/clk/clk-twl6040.c b/drivers/clk/clk-twl6040.c
+index ea846f77750b..0cad5748bf0e 100644
+--- a/drivers/clk/clk-twl6040.c
++++ b/drivers/clk/clk-twl6040.c
+@@ -41,6 +41,43 @@ static int twl6040_pdmclk_is_prepared(struct clk_hw *hw)
+ 	return pdmclk->enabled;
+ }
+ 
++static int twl6040_pdmclk_reset_one_clock(struct twl6040_pdmclk *pdmclk,
++					  unsigned int reg)
++{
++	const u8 reset_mask = TWL6040_HPLLRST;	/* Same for HPPLL and LPPLL */
++	int ret;
++
++	ret = twl6040_set_bits(pdmclk->twl6040, reg, reset_mask);
++	if (ret < 0)
++		return ret;
++
++	ret = twl6040_clear_bits(pdmclk->twl6040, reg, reset_mask);
++	if (ret < 0)
++		return ret;
++
++	return 0;
++}
++
++/*
++ * TWL6040A2 Phoenix Audio IC erratum #6: "PDM Clock Generation Issue At
++ * Cold Temperature". This affects cold boot and deeper idle states it
++ * seems. The workaround consists of resetting HPPLL and LPPLL.
++ */
++static int twl6040_pdmclk_quirk_reset_clocks(struct twl6040_pdmclk *pdmclk)
++{
++	int ret;
++
++	ret = twl6040_pdmclk_reset_one_clock(pdmclk, TWL6040_REG_HPPLLCTL);
++	if (ret)
++		return ret;
++
++	ret = twl6040_pdmclk_reset_one_clock(pdmclk, TWL6040_REG_LPPLLCTL);
++	if (ret)
++		return ret;
++
++	return 0;
++}
++
+ static int twl6040_pdmclk_prepare(struct clk_hw *hw)
+ {
+ 	struct twl6040_pdmclk *pdmclk = container_of(hw, struct twl6040_pdmclk,
+@@ -48,8 +85,20 @@ static int twl6040_pdmclk_prepare(struct clk_hw *hw)
+ 	int ret;
+ 
+ 	ret = twl6040_power(pdmclk->twl6040, 1);
+-	if (!ret)
+-		pdmclk->enabled = 1;
++	if (ret)
++		return ret;
++
++	ret = twl6040_pdmclk_quirk_reset_clocks(pdmclk);
++	if (ret)
++		goto out_err;
++
++	pdmclk->enabled = 1;
++
++	return 0;
++
++out_err:
++	dev_err(pdmclk->dev, "%s: error %i\n", __func__, ret);
++	twl6040_power(pdmclk->twl6040, 0);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/clk/ingenic/cgu.c b/drivers/clk/ingenic/cgu.c
+index 5ef7d9ba2195..b40160eb3372 100644
+--- a/drivers/clk/ingenic/cgu.c
++++ b/drivers/clk/ingenic/cgu.c
+@@ -426,16 +426,16 @@ ingenic_clk_round_rate(struct clk_hw *hw, unsigned long req_rate,
+ 	struct ingenic_clk *ingenic_clk = to_ingenic_clk(hw);
+ 	struct ingenic_cgu *cgu = ingenic_clk->cgu;
+ 	const struct ingenic_cgu_clk_info *clk_info;
+-	long rate = *parent_rate;
++	unsigned int div = 1;
+ 
+ 	clk_info = &cgu->clock_info[ingenic_clk->idx];
+ 
+ 	if (clk_info->type & CGU_CLK_DIV)
+-		rate /= ingenic_clk_calc_div(clk_info, *parent_rate, req_rate);
++		div = ingenic_clk_calc_div(clk_info, *parent_rate, req_rate);
+ 	else if (clk_info->type & CGU_CLK_FIXDIV)
+-		rate /= clk_info->fixdiv.div;
++		div = clk_info->fixdiv.div;
+ 
+-	return rate;
++	return DIV_ROUND_UP(*parent_rate, div);
+ }
+ 
+ static int
+@@ -455,7 +455,7 @@ ingenic_clk_set_rate(struct clk_hw *hw, unsigned long req_rate,
+ 
+ 	if (clk_info->type & CGU_CLK_DIV) {
+ 		div = ingenic_clk_calc_div(clk_info, parent_rate, req_rate);
+-		rate = parent_rate / div;
++		rate = DIV_ROUND_UP(parent_rate, div);
+ 
+ 		if (rate != req_rate)
+ 			return -EINVAL;
+diff --git a/drivers/clk/ingenic/cgu.h b/drivers/clk/ingenic/cgu.h
+index 502bcbb61b04..e12716d8ce3c 100644
+--- a/drivers/clk/ingenic/cgu.h
++++ b/drivers/clk/ingenic/cgu.h
+@@ -80,7 +80,7 @@ struct ingenic_cgu_mux_info {
+  * @reg: offset of the divider control register within the CGU
+  * @shift: number of bits to left shift the divide value by (ie. the index of
+  *         the lowest bit of the divide value within its control register)
+- * @div: number of bits to divide the divider value by (i.e. if the
++ * @div: number to divide the divider value by (i.e. if the
+  *	 effective divider value is the value written to the register
+  *	 multiplied by some constant)
+  * @bits: the size of the divide value in bits
+diff --git a/drivers/clk/rockchip/clk-rk3328.c b/drivers/clk/rockchip/clk-rk3328.c
+index faa94adb2a37..65ab5c2f48b0 100644
+--- a/drivers/clk/rockchip/clk-rk3328.c
++++ b/drivers/clk/rockchip/clk-rk3328.c
+@@ -78,17 +78,17 @@ static struct rockchip_pll_rate_table rk3328_pll_rates[] = {
+ 
+ static struct rockchip_pll_rate_table rk3328_pll_frac_rates[] = {
+ 	/* _mhz, _refdiv, _fbdiv, _postdiv1, _postdiv2, _dsmpd, _frac */
+-	RK3036_PLL_RATE(1016064000, 3, 127, 1, 1, 0, 134217),
++	RK3036_PLL_RATE(1016064000, 3, 127, 1, 1, 0, 134218),
+ 	/* vco = 1016064000 */
+-	RK3036_PLL_RATE(983040000, 24, 983, 1, 1, 0, 671088),
++	RK3036_PLL_RATE(983040000, 24, 983, 1, 1, 0, 671089),
+ 	/* vco = 983040000 */
+-	RK3036_PLL_RATE(491520000, 24, 983, 2, 1, 0, 671088),
++	RK3036_PLL_RATE(491520000, 24, 983, 2, 1, 0, 671089),
+ 	/* vco = 983040000 */
+-	RK3036_PLL_RATE(61440000, 6, 215, 7, 2, 0, 671088),
++	RK3036_PLL_RATE(61440000, 6, 215, 7, 2, 0, 671089),
+ 	/* vco = 860156000 */
+-	RK3036_PLL_RATE(56448000, 12, 451, 4, 4, 0, 9797894),
++	RK3036_PLL_RATE(56448000, 12, 451, 4, 4, 0, 9797895),
+ 	/* vco = 903168000 */
+-	RK3036_PLL_RATE(40960000, 12, 409, 4, 5, 0, 10066329),
++	RK3036_PLL_RATE(40960000, 12, 409, 4, 5, 0, 10066330),
+ 	/* vco = 819200000 */
+ 	{ /* sentinel */ },
+ };
+diff --git a/drivers/clk/samsung/clk-exynos5-subcmu.c b/drivers/clk/samsung/clk-exynos5-subcmu.c
+index 93306283d764..8ae44b5db4c2 100644
+--- a/drivers/clk/samsung/clk-exynos5-subcmu.c
++++ b/drivers/clk/samsung/clk-exynos5-subcmu.c
+@@ -136,15 +136,20 @@ static int __init exynos5_clk_register_subcmu(struct device *parent,
+ {
+ 	struct of_phandle_args genpdspec = { .np = pd_node };
+ 	struct platform_device *pdev;
++	int ret;
++
++	pdev = platform_device_alloc("exynos5-subcmu", PLATFORM_DEVID_AUTO);
++	if (!pdev)
++		return -ENOMEM;
+ 
+-	pdev = platform_device_alloc(info->pd_name, -1);
+ 	pdev->dev.parent = parent;
+-	pdev->driver_override = "exynos5-subcmu";
+ 	platform_set_drvdata(pdev, (void *)info);
+ 	of_genpd_add_device(&genpdspec, &pdev->dev);
+-	platform_device_add(pdev);
++	ret = platform_device_add(pdev);
++	if (ret)
++		platform_device_put(pdev);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int __init exynos5_clk_probe(struct platform_device *pdev)
+diff --git a/drivers/clk/ti/clkctrl.c b/drivers/clk/ti/clkctrl.c
+index 40630eb950fc..85d7f301149b 100644
+--- a/drivers/clk/ti/clkctrl.c
++++ b/drivers/clk/ti/clkctrl.c
+@@ -530,7 +530,7 @@ static void __init _ti_omap4_clkctrl_setup(struct device_node *node)
+ 		 * Create default clkdm name, replace _cm from end of parent
+ 		 * node name with _clkdm
+ 		 */
+-		provider->clkdm_name[strlen(provider->clkdm_name) - 5] = 0;
++		provider->clkdm_name[strlen(provider->clkdm_name) - 2] = 0;
+ 	} else {
+ 		provider->clkdm_name = kasprintf(GFP_KERNEL, "%pOFn", node);
+ 		if (!provider->clkdm_name) {
+diff --git a/drivers/clk/uniphier/clk-uniphier-cpugear.c b/drivers/clk/uniphier/clk-uniphier-cpugear.c
+index ec11f55594ad..5d2d42b7e182 100644
+--- a/drivers/clk/uniphier/clk-uniphier-cpugear.c
++++ b/drivers/clk/uniphier/clk-uniphier-cpugear.c
+@@ -47,7 +47,7 @@ static int uniphier_clk_cpugear_set_parent(struct clk_hw *hw, u8 index)
+ 		return ret;
+ 
+ 	ret = regmap_write_bits(gear->regmap,
+-				gear->regbase + UNIPHIER_CLK_CPUGEAR_SET,
++				gear->regbase + UNIPHIER_CLK_CPUGEAR_UPD,
+ 				UNIPHIER_CLK_CPUGEAR_UPD_BIT,
+ 				UNIPHIER_CLK_CPUGEAR_UPD_BIT);
+ 	if (ret)
+diff --git a/drivers/clocksource/Kconfig b/drivers/clocksource/Kconfig
+index a9e26f6a81a1..8dfd3bc448d0 100644
+--- a/drivers/clocksource/Kconfig
++++ b/drivers/clocksource/Kconfig
+@@ -360,6 +360,16 @@ config ARM64_ERRATUM_858921
+ 	  The workaround will be dynamically enabled when an affected
+ 	  core is detected.
+ 
++config SUN50I_ERRATUM_UNKNOWN1
++	bool "Workaround for Allwinner A64 erratum UNKNOWN1"
++	default y
++	depends on ARM_ARCH_TIMER && ARM64 && ARCH_SUNXI
++	select ARM_ARCH_TIMER_OOL_WORKAROUND
++	help
++	  This option enables a workaround for instability in the timer on
++	  the Allwinner A64 SoC. The workaround will only be active if the
++	  allwinner,erratum-unknown1 property is found in the timer node.
++
+ config ARM_GLOBAL_TIMER
+ 	bool "Support for the ARM global timer" if COMPILE_TEST
+ 	select TIMER_OF if OF
+diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c
+index 9a7d4dc00b6e..a8b20b65bd4b 100644
+--- a/drivers/clocksource/arm_arch_timer.c
++++ b/drivers/clocksource/arm_arch_timer.c
+@@ -326,6 +326,48 @@ static u64 notrace arm64_1188873_read_cntvct_el0(void)
+ }
+ #endif
+ 
++#ifdef CONFIG_SUN50I_ERRATUM_UNKNOWN1
++/*
++ * The low bits of the counter registers are indeterminate while bit 10 or
++ * greater is rolling over. Since the counter value can jump both backward
++ * (7ff -> 000 -> 800) and forward (7ff -> fff -> 800), ignore register values
++ * with all ones or all zeros in the low bits. Bound the loop by the maximum
++ * number of CPU cycles in 3 consecutive 24 MHz counter periods.
++ */
++#define __sun50i_a64_read_reg(reg) ({					\
++	u64 _val;							\
++	int _retries = 150;						\
++									\
++	do {								\
++		_val = read_sysreg(reg);				\
++		_retries--;						\
++	} while (((_val + 1) & GENMASK(9, 0)) <= 1 && _retries);	\
++									\
++	WARN_ON_ONCE(!_retries);					\
++	_val;								\
++})
++
++static u64 notrace sun50i_a64_read_cntpct_el0(void)
++{
++	return __sun50i_a64_read_reg(cntpct_el0);
++}
++
++static u64 notrace sun50i_a64_read_cntvct_el0(void)
++{
++	return __sun50i_a64_read_reg(cntvct_el0);
++}
++
++static u32 notrace sun50i_a64_read_cntp_tval_el0(void)
++{
++	return read_sysreg(cntp_cval_el0) - sun50i_a64_read_cntpct_el0();
++}
++
++static u32 notrace sun50i_a64_read_cntv_tval_el0(void)
++{
++	return read_sysreg(cntv_cval_el0) - sun50i_a64_read_cntvct_el0();
++}
++#endif
++
+ #ifdef CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND
+ DEFINE_PER_CPU(const struct arch_timer_erratum_workaround *, timer_unstable_counter_workaround);
+ EXPORT_SYMBOL_GPL(timer_unstable_counter_workaround);
+@@ -423,6 +465,19 @@ static const struct arch_timer_erratum_workaround ool_workarounds[] = {
+ 		.read_cntvct_el0 = arm64_1188873_read_cntvct_el0,
+ 	},
+ #endif
++#ifdef CONFIG_SUN50I_ERRATUM_UNKNOWN1
++	{
++		.match_type = ate_match_dt,
++		.id = "allwinner,erratum-unknown1",
++		.desc = "Allwinner erratum UNKNOWN1",
++		.read_cntp_tval_el0 = sun50i_a64_read_cntp_tval_el0,
++		.read_cntv_tval_el0 = sun50i_a64_read_cntv_tval_el0,
++		.read_cntpct_el0 = sun50i_a64_read_cntpct_el0,
++		.read_cntvct_el0 = sun50i_a64_read_cntvct_el0,
++		.set_next_event_phys = erratum_set_next_event_tval_phys,
++		.set_next_event_virt = erratum_set_next_event_tval_virt,
++	},
++#endif
+ };
+ 
+ typedef bool (*ate_match_fn_t)(const struct arch_timer_erratum_workaround *,
+diff --git a/drivers/clocksource/exynos_mct.c b/drivers/clocksource/exynos_mct.c
+index 7a244b681876..d55c30f6981d 100644
+--- a/drivers/clocksource/exynos_mct.c
++++ b/drivers/clocksource/exynos_mct.c
+@@ -388,6 +388,13 @@ static void exynos4_mct_tick_start(unsigned long cycles,
+ 	exynos4_mct_write(tmp, mevt->base + MCT_L_TCON_OFFSET);
+ }
+ 
++static void exynos4_mct_tick_clear(struct mct_clock_event_device *mevt)
++{
++	/* Clear the MCT tick interrupt */
++	if (readl_relaxed(reg_base + mevt->base + MCT_L_INT_CSTAT_OFFSET) & 1)
++		exynos4_mct_write(0x1, mevt->base + MCT_L_INT_CSTAT_OFFSET);
++}
++
+ static int exynos4_tick_set_next_event(unsigned long cycles,
+ 				       struct clock_event_device *evt)
+ {
+@@ -404,6 +411,7 @@ static int set_state_shutdown(struct clock_event_device *evt)
+ 
+ 	mevt = container_of(evt, struct mct_clock_event_device, evt);
+ 	exynos4_mct_tick_stop(mevt);
++	exynos4_mct_tick_clear(mevt);
+ 	return 0;
+ }
+ 
+@@ -420,8 +428,11 @@ static int set_state_periodic(struct clock_event_device *evt)
+ 	return 0;
+ }
+ 
+-static void exynos4_mct_tick_clear(struct mct_clock_event_device *mevt)
++static irqreturn_t exynos4_mct_tick_isr(int irq, void *dev_id)
+ {
++	struct mct_clock_event_device *mevt = dev_id;
++	struct clock_event_device *evt = &mevt->evt;
++
+ 	/*
+ 	 * This is for supporting oneshot mode.
+ 	 * Mct would generate interrupt periodically
+@@ -430,16 +441,6 @@ static void exynos4_mct_tick_clear(struct mct_clock_event_device *mevt)
+ 	if (!clockevent_state_periodic(&mevt->evt))
+ 		exynos4_mct_tick_stop(mevt);
+ 
+-	/* Clear the MCT tick interrupt */
+-	if (readl_relaxed(reg_base + mevt->base + MCT_L_INT_CSTAT_OFFSET) & 1)
+-		exynos4_mct_write(0x1, mevt->base + MCT_L_INT_CSTAT_OFFSET);
+-}
+-
+-static irqreturn_t exynos4_mct_tick_isr(int irq, void *dev_id)
+-{
+-	struct mct_clock_event_device *mevt = dev_id;
+-	struct clock_event_device *evt = &mevt->evt;
+-
+ 	exynos4_mct_tick_clear(mevt);
+ 
+ 	evt->event_handler(evt);
+diff --git a/drivers/clocksource/timer-riscv.c b/drivers/clocksource/timer-riscv.c
+index 431892200a08..ead71bfac689 100644
+--- a/drivers/clocksource/timer-riscv.c
++++ b/drivers/clocksource/timer-riscv.c
+@@ -58,7 +58,7 @@ static u64 riscv_sched_clock(void)
+ static DEFINE_PER_CPU(struct clocksource, riscv_clocksource) = {
+ 	.name		= "riscv_clocksource",
+ 	.rating		= 300,
+-	.mask		= CLOCKSOURCE_MASK(BITS_PER_LONG),
++	.mask		= CLOCKSOURCE_MASK(64),
+ 	.flags		= CLOCK_SOURCE_IS_CONTINUOUS,
+ 	.read		= riscv_clocksource_rdtime,
+ };
+@@ -103,8 +103,7 @@ static int __init riscv_timer_init_dt(struct device_node *n)
+ 	cs = per_cpu_ptr(&riscv_clocksource, cpuid);
+ 	clocksource_register_hz(cs, riscv_timebase);
+ 
+-	sched_clock_register(riscv_sched_clock,
+-			BITS_PER_LONG, riscv_timebase);
++	sched_clock_register(riscv_sched_clock, 64, riscv_timebase);
+ 
+ 	error = cpuhp_setup_state(CPUHP_AP_RISCV_TIMER_STARTING,
+ 			 "clockevents/riscv/timer:starting",
+diff --git a/drivers/connector/cn_proc.c b/drivers/connector/cn_proc.c
+index ed5e42461094..ad48fd52cb53 100644
+--- a/drivers/connector/cn_proc.c
++++ b/drivers/connector/cn_proc.c
+@@ -250,6 +250,7 @@ void proc_coredump_connector(struct task_struct *task)
+ {
+ 	struct cn_msg *msg;
+ 	struct proc_event *ev;
++	struct task_struct *parent;
+ 	__u8 buffer[CN_PROC_MSG_SIZE] __aligned(8);
+ 
+ 	if (atomic_read(&proc_event_num_listeners) < 1)
+@@ -262,8 +263,14 @@ void proc_coredump_connector(struct task_struct *task)
+ 	ev->what = PROC_EVENT_COREDUMP;
+ 	ev->event_data.coredump.process_pid = task->pid;
+ 	ev->event_data.coredump.process_tgid = task->tgid;
+-	ev->event_data.coredump.parent_pid = task->real_parent->pid;
+-	ev->event_data.coredump.parent_tgid = task->real_parent->tgid;
++
++	rcu_read_lock();
++	if (pid_alive(task)) {
++		parent = rcu_dereference(task->real_parent);
++		ev->event_data.coredump.parent_pid = parent->pid;
++		ev->event_data.coredump.parent_tgid = parent->tgid;
++	}
++	rcu_read_unlock();
+ 
+ 	memcpy(&msg->id, &cn_proc_event_id, sizeof(msg->id));
+ 	msg->ack = 0; /* not used */
+@@ -276,6 +283,7 @@ void proc_exit_connector(struct task_struct *task)
+ {
+ 	struct cn_msg *msg;
+ 	struct proc_event *ev;
++	struct task_struct *parent;
+ 	__u8 buffer[CN_PROC_MSG_SIZE] __aligned(8);
+ 
+ 	if (atomic_read(&proc_event_num_listeners) < 1)
+@@ -290,8 +298,14 @@ void proc_exit_connector(struct task_struct *task)
+ 	ev->event_data.exit.process_tgid = task->tgid;
+ 	ev->event_data.exit.exit_code = task->exit_code;
+ 	ev->event_data.exit.exit_signal = task->exit_signal;
+-	ev->event_data.exit.parent_pid = task->real_parent->pid;
+-	ev->event_data.exit.parent_tgid = task->real_parent->tgid;
++
++	rcu_read_lock();
++	if (pid_alive(task)) {
++		parent = rcu_dereference(task->real_parent);
++		ev->event_data.exit.parent_pid = parent->pid;
++		ev->event_data.exit.parent_tgid = parent->tgid;
++	}
++	rcu_read_unlock();
+ 
+ 	memcpy(&msg->id, &cn_proc_event_id, sizeof(msg->id));
+ 	msg->ack = 0; /* not used */
+diff --git a/drivers/cpufreq/acpi-cpufreq.c b/drivers/cpufreq/acpi-cpufreq.c
+index d62fd374d5c7..c72258a44ba4 100644
+--- a/drivers/cpufreq/acpi-cpufreq.c
++++ b/drivers/cpufreq/acpi-cpufreq.c
+@@ -916,8 +916,10 @@ static void __init acpi_cpufreq_boost_init(void)
+ {
+ 	int ret;
+ 
+-	if (!(boot_cpu_has(X86_FEATURE_CPB) || boot_cpu_has(X86_FEATURE_IDA)))
++	if (!(boot_cpu_has(X86_FEATURE_CPB) || boot_cpu_has(X86_FEATURE_IDA))) {
++		pr_debug("Boost capabilities not present in the processor\n");
+ 		return;
++	}
+ 
+ 	acpi_cpufreq_driver.set_boost = set_boost;
+ 	acpi_cpufreq_driver.boost_enabled = boost_state(0);
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index e35a886e00bc..ef0e33e21b98 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -545,13 +545,13 @@ EXPORT_SYMBOL_GPL(cpufreq_policy_transition_delay_us);
+  *                          SYSFS INTERFACE                          *
+  *********************************************************************/
+ static ssize_t show_boost(struct kobject *kobj,
+-				 struct attribute *attr, char *buf)
++			  struct kobj_attribute *attr, char *buf)
+ {
+ 	return sprintf(buf, "%d\n", cpufreq_driver->boost_enabled);
+ }
+ 
+-static ssize_t store_boost(struct kobject *kobj, struct attribute *attr,
+-				  const char *buf, size_t count)
++static ssize_t store_boost(struct kobject *kobj, struct kobj_attribute *attr,
++			   const char *buf, size_t count)
+ {
+ 	int ret, enable;
+ 
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index dd66decf2087..a579ca4552df 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -383,7 +383,10 @@ static int intel_pstate_get_cppc_guranteed(int cpu)
+ 	if (ret)
+ 		return ret;
+ 
+-	return cppc_perf.guaranteed_perf;
++	if (cppc_perf.guaranteed_perf)
++		return cppc_perf.guaranteed_perf;
++
++	return cppc_perf.nominal_perf;
+ }
+ 
+ #else /* CONFIG_ACPI_CPPC_LIB */
+@@ -895,7 +898,7 @@ static void intel_pstate_update_policies(void)
+ /************************** sysfs begin ************************/
+ #define show_one(file_name, object)					\
+ 	static ssize_t show_##file_name					\
+-	(struct kobject *kobj, struct attribute *attr, char *buf)	\
++	(struct kobject *kobj, struct kobj_attribute *attr, char *buf)	\
+ 	{								\
+ 		return sprintf(buf, "%u\n", global.object);		\
+ 	}
+@@ -904,7 +907,7 @@ static ssize_t intel_pstate_show_status(char *buf);
+ static int intel_pstate_update_status(const char *buf, size_t size);
+ 
+ static ssize_t show_status(struct kobject *kobj,
+-			   struct attribute *attr, char *buf)
++			   struct kobj_attribute *attr, char *buf)
+ {
+ 	ssize_t ret;
+ 
+@@ -915,7 +918,7 @@ static ssize_t show_status(struct kobject *kobj,
+ 	return ret;
+ }
+ 
+-static ssize_t store_status(struct kobject *a, struct attribute *b,
++static ssize_t store_status(struct kobject *a, struct kobj_attribute *b,
+ 			    const char *buf, size_t count)
+ {
+ 	char *p = memchr(buf, '\n', count);
+@@ -929,7 +932,7 @@ static ssize_t store_status(struct kobject *a, struct attribute *b,
+ }
+ 
+ static ssize_t show_turbo_pct(struct kobject *kobj,
+-				struct attribute *attr, char *buf)
++				struct kobj_attribute *attr, char *buf)
+ {
+ 	struct cpudata *cpu;
+ 	int total, no_turbo, turbo_pct;
+@@ -955,7 +958,7 @@ static ssize_t show_turbo_pct(struct kobject *kobj,
+ }
+ 
+ static ssize_t show_num_pstates(struct kobject *kobj,
+-				struct attribute *attr, char *buf)
++				struct kobj_attribute *attr, char *buf)
+ {
+ 	struct cpudata *cpu;
+ 	int total;
+@@ -976,7 +979,7 @@ static ssize_t show_num_pstates(struct kobject *kobj,
+ }
+ 
+ static ssize_t show_no_turbo(struct kobject *kobj,
+-			     struct attribute *attr, char *buf)
++			     struct kobj_attribute *attr, char *buf)
+ {
+ 	ssize_t ret;
+ 
+@@ -998,7 +1001,7 @@ static ssize_t show_no_turbo(struct kobject *kobj,
+ 	return ret;
+ }
+ 
+-static ssize_t store_no_turbo(struct kobject *a, struct attribute *b,
++static ssize_t store_no_turbo(struct kobject *a, struct kobj_attribute *b,
+ 			      const char *buf, size_t count)
+ {
+ 	unsigned int input;
+@@ -1045,7 +1048,7 @@ static ssize_t store_no_turbo(struct kobject *a, struct attribute *b,
+ 	return count;
+ }
+ 
+-static ssize_t store_max_perf_pct(struct kobject *a, struct attribute *b,
++static ssize_t store_max_perf_pct(struct kobject *a, struct kobj_attribute *b,
+ 				  const char *buf, size_t count)
+ {
+ 	unsigned int input;
+@@ -1075,7 +1078,7 @@ static ssize_t store_max_perf_pct(struct kobject *a, struct attribute *b,
+ 	return count;
+ }
+ 
+-static ssize_t store_min_perf_pct(struct kobject *a, struct attribute *b,
++static ssize_t store_min_perf_pct(struct kobject *a, struct kobj_attribute *b,
+ 				  const char *buf, size_t count)
+ {
+ 	unsigned int input;
+@@ -1107,12 +1110,13 @@ static ssize_t store_min_perf_pct(struct kobject *a, struct attribute *b,
+ }
+ 
+ static ssize_t show_hwp_dynamic_boost(struct kobject *kobj,
+-				struct attribute *attr, char *buf)
++				struct kobj_attribute *attr, char *buf)
+ {
+ 	return sprintf(buf, "%u\n", hwp_boost);
+ }
+ 
+-static ssize_t store_hwp_dynamic_boost(struct kobject *a, struct attribute *b,
++static ssize_t store_hwp_dynamic_boost(struct kobject *a,
++				       struct kobj_attribute *b,
+ 				       const char *buf, size_t count)
+ {
+ 	unsigned int input;
+diff --git a/drivers/cpufreq/pxa2xx-cpufreq.c b/drivers/cpufreq/pxa2xx-cpufreq.c
+index 46254e583982..74e0e0c20c46 100644
+--- a/drivers/cpufreq/pxa2xx-cpufreq.c
++++ b/drivers/cpufreq/pxa2xx-cpufreq.c
+@@ -143,7 +143,7 @@ static int pxa_cpufreq_change_voltage(const struct pxa_freqs *pxa_freq)
+ 	return ret;
+ }
+ 
+-static void __init pxa_cpufreq_init_voltages(void)
++static void pxa_cpufreq_init_voltages(void)
+ {
+ 	vcc_core = regulator_get(NULL, "vcc_core");
+ 	if (IS_ERR(vcc_core)) {
+@@ -159,7 +159,7 @@ static int pxa_cpufreq_change_voltage(const struct pxa_freqs *pxa_freq)
+ 	return 0;
+ }
+ 
+-static void __init pxa_cpufreq_init_voltages(void) { }
++static void pxa_cpufreq_init_voltages(void) { }
+ #endif
+ 
+ static void find_freq_tables(struct cpufreq_frequency_table **freq_table,
+diff --git a/drivers/cpufreq/qcom-cpufreq-kryo.c b/drivers/cpufreq/qcom-cpufreq-kryo.c
+index 2a3675c24032..a472b814058f 100644
+--- a/drivers/cpufreq/qcom-cpufreq-kryo.c
++++ b/drivers/cpufreq/qcom-cpufreq-kryo.c
+@@ -75,7 +75,7 @@ static enum _msm8996_version qcom_cpufreq_kryo_get_msm_id(void)
+ 
+ static int qcom_cpufreq_kryo_probe(struct platform_device *pdev)
+ {
+-	struct opp_table *opp_tables[NR_CPUS] = {0};
++	struct opp_table **opp_tables;
+ 	enum _msm8996_version msm8996_version;
+ 	struct nvmem_cell *speedbin_nvmem;
+ 	struct device_node *np;
+@@ -133,6 +133,10 @@ static int qcom_cpufreq_kryo_probe(struct platform_device *pdev)
+ 	}
+ 	kfree(speedbin);
+ 
++	opp_tables = kcalloc(num_possible_cpus(), sizeof(*opp_tables), GFP_KERNEL);
++	if (!opp_tables)
++		return -ENOMEM;
++
+ 	for_each_possible_cpu(cpu) {
+ 		cpu_dev = get_cpu_device(cpu);
+ 		if (NULL == cpu_dev) {
+@@ -151,8 +155,10 @@ static int qcom_cpufreq_kryo_probe(struct platform_device *pdev)
+ 
+ 	cpufreq_dt_pdev = platform_device_register_simple("cpufreq-dt", -1,
+ 							  NULL, 0);
+-	if (!IS_ERR(cpufreq_dt_pdev))
++	if (!IS_ERR(cpufreq_dt_pdev)) {
++		platform_set_drvdata(pdev, opp_tables);
+ 		return 0;
++	}
+ 
+ 	ret = PTR_ERR(cpufreq_dt_pdev);
+ 	dev_err(cpu_dev, "Failed to register platform device\n");
+@@ -163,13 +169,23 @@ free_opp:
+ 			break;
+ 		dev_pm_opp_put_supported_hw(opp_tables[cpu]);
+ 	}
++	kfree(opp_tables);
+ 
+ 	return ret;
+ }
+ 
+ static int qcom_cpufreq_kryo_remove(struct platform_device *pdev)
+ {
++	struct opp_table **opp_tables = platform_get_drvdata(pdev);
++	unsigned int cpu;
++
+ 	platform_device_unregister(cpufreq_dt_pdev);
++
++	for_each_possible_cpu(cpu)
++		dev_pm_opp_put_supported_hw(opp_tables[cpu]);
++
++	kfree(opp_tables);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/cpufreq/scpi-cpufreq.c b/drivers/cpufreq/scpi-cpufreq.c
+index 99449738faa4..632ccf82c5d3 100644
+--- a/drivers/cpufreq/scpi-cpufreq.c
++++ b/drivers/cpufreq/scpi-cpufreq.c
+@@ -189,8 +189,8 @@ static int scpi_cpufreq_exit(struct cpufreq_policy *policy)
+ 	cpufreq_cooling_unregister(priv->cdev);
+ 	clk_put(priv->clk);
+ 	dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table);
+-	kfree(priv);
+ 	dev_pm_opp_remove_all_dynamic(priv->cpu_dev);
++	kfree(priv);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/cpufreq/tegra124-cpufreq.c b/drivers/cpufreq/tegra124-cpufreq.c
+index 43530254201a..4bb154f6c54c 100644
+--- a/drivers/cpufreq/tegra124-cpufreq.c
++++ b/drivers/cpufreq/tegra124-cpufreq.c
+@@ -134,6 +134,8 @@ static int tegra124_cpufreq_probe(struct platform_device *pdev)
+ 
+ 	platform_set_drvdata(pdev, priv);
+ 
++	of_node_put(np);
++
+ 	return 0;
+ 
+ out_switch_to_pllx:
+diff --git a/drivers/cpuidle/governor.c b/drivers/cpuidle/governor.c
+index bb93e5cf6a4a..9fddf828a76f 100644
+--- a/drivers/cpuidle/governor.c
++++ b/drivers/cpuidle/governor.c
+@@ -89,6 +89,7 @@ int cpuidle_register_governor(struct cpuidle_governor *gov)
+ 	mutex_lock(&cpuidle_lock);
+ 	if (__cpuidle_find_governor(gov->name) == NULL) {
+ 		ret = 0;
++		list_add_tail(&gov->governor_list, &cpuidle_governors);
+ 		if (!cpuidle_curr_governor ||
+ 		    !strncasecmp(param_governor, gov->name, CPUIDLE_NAME_LEN) ||
+ 		    (cpuidle_curr_governor->rating < gov->rating &&
+diff --git a/drivers/crypto/amcc/crypto4xx_trng.c b/drivers/crypto/amcc/crypto4xx_trng.c
+index 5e63742b0d22..53ab1f140a26 100644
+--- a/drivers/crypto/amcc/crypto4xx_trng.c
++++ b/drivers/crypto/amcc/crypto4xx_trng.c
+@@ -80,8 +80,10 @@ void ppc4xx_trng_probe(struct crypto4xx_core_device *core_dev)
+ 
+ 	/* Find the TRNG device node and map it */
+ 	trng = of_find_matching_node(NULL, ppc4xx_trng_match);
+-	if (!trng || !of_device_is_available(trng))
++	if (!trng || !of_device_is_available(trng)) {
++		of_node_put(trng);
+ 		return;
++	}
+ 
+ 	dev->trng_base = of_iomap(trng, 0);
+ 	of_node_put(trng);
+diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
+index 80ae69f906fb..1c4f3a046dc5 100644
+--- a/drivers/crypto/caam/caamalg.c
++++ b/drivers/crypto/caam/caamalg.c
+@@ -1040,6 +1040,7 @@ static void init_aead_job(struct aead_request *req,
+ 	if (unlikely(req->src != req->dst)) {
+ 		if (edesc->dst_nents == 1) {
+ 			dst_dma = sg_dma_address(req->dst);
++			out_options = 0;
+ 		} else {
+ 			dst_dma = edesc->sec4_sg_dma +
+ 				  sec4_sg_index *
+diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
+index bb1a2cdf1951..0f11811a3585 100644
+--- a/drivers/crypto/caam/caamhash.c
++++ b/drivers/crypto/caam/caamhash.c
+@@ -113,6 +113,7 @@ struct caam_hash_ctx {
+ struct caam_hash_state {
+ 	dma_addr_t buf_dma;
+ 	dma_addr_t ctx_dma;
++	int ctx_dma_len;
+ 	u8 buf_0[CAAM_MAX_HASH_BLOCK_SIZE] ____cacheline_aligned;
+ 	int buflen_0;
+ 	u8 buf_1[CAAM_MAX_HASH_BLOCK_SIZE] ____cacheline_aligned;
+@@ -165,6 +166,7 @@ static inline int map_seq_out_ptr_ctx(u32 *desc, struct device *jrdev,
+ 				      struct caam_hash_state *state,
+ 				      int ctx_len)
+ {
++	state->ctx_dma_len = ctx_len;
+ 	state->ctx_dma = dma_map_single(jrdev, state->caam_ctx,
+ 					ctx_len, DMA_FROM_DEVICE);
+ 	if (dma_mapping_error(jrdev, state->ctx_dma)) {
+@@ -178,18 +180,6 @@ static inline int map_seq_out_ptr_ctx(u32 *desc, struct device *jrdev,
+ 	return 0;
+ }
+ 
+-/* Map req->result, and append seq_out_ptr command that points to it */
+-static inline dma_addr_t map_seq_out_ptr_result(u32 *desc, struct device *jrdev,
+-						u8 *result, int digestsize)
+-{
+-	dma_addr_t dst_dma;
+-
+-	dst_dma = dma_map_single(jrdev, result, digestsize, DMA_FROM_DEVICE);
+-	append_seq_out_ptr(desc, dst_dma, digestsize, 0);
+-
+-	return dst_dma;
+-}
+-
+ /* Map current buffer in state (if length > 0) and put it in link table */
+ static inline int buf_map_to_sec4_sg(struct device *jrdev,
+ 				     struct sec4_sg_entry *sec4_sg,
+@@ -218,6 +208,7 @@ static inline int ctx_map_to_sec4_sg(struct device *jrdev,
+ 				     struct caam_hash_state *state, int ctx_len,
+ 				     struct sec4_sg_entry *sec4_sg, u32 flag)
+ {
++	state->ctx_dma_len = ctx_len;
+ 	state->ctx_dma = dma_map_single(jrdev, state->caam_ctx, ctx_len, flag);
+ 	if (dma_mapping_error(jrdev, state->ctx_dma)) {
+ 		dev_err(jrdev, "unable to map ctx\n");
+@@ -426,7 +417,6 @@ static int ahash_setkey(struct crypto_ahash *ahash,
+ 
+ /*
+  * ahash_edesc - s/w-extended ahash descriptor
+- * @dst_dma: physical mapped address of req->result
+  * @sec4_sg_dma: physical mapped address of h/w link table
+  * @src_nents: number of segments in input scatterlist
+  * @sec4_sg_bytes: length of dma mapped sec4_sg space
+@@ -434,7 +424,6 @@ static int ahash_setkey(struct crypto_ahash *ahash,
+  * @sec4_sg: h/w link table
+  */
+ struct ahash_edesc {
+-	dma_addr_t dst_dma;
+ 	dma_addr_t sec4_sg_dma;
+ 	int src_nents;
+ 	int sec4_sg_bytes;
+@@ -450,8 +439,6 @@ static inline void ahash_unmap(struct device *dev,
+ 
+ 	if (edesc->src_nents)
+ 		dma_unmap_sg(dev, req->src, edesc->src_nents, DMA_TO_DEVICE);
+-	if (edesc->dst_dma)
+-		dma_unmap_single(dev, edesc->dst_dma, dst_len, DMA_FROM_DEVICE);
+ 
+ 	if (edesc->sec4_sg_bytes)
+ 		dma_unmap_single(dev, edesc->sec4_sg_dma,
+@@ -468,12 +455,10 @@ static inline void ahash_unmap_ctx(struct device *dev,
+ 			struct ahash_edesc *edesc,
+ 			struct ahash_request *req, int dst_len, u32 flag)
+ {
+-	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
+-	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+ 	struct caam_hash_state *state = ahash_request_ctx(req);
+ 
+ 	if (state->ctx_dma) {
+-		dma_unmap_single(dev, state->ctx_dma, ctx->ctx_len, flag);
++		dma_unmap_single(dev, state->ctx_dma, state->ctx_dma_len, flag);
+ 		state->ctx_dma = 0;
+ 	}
+ 	ahash_unmap(dev, edesc, req, dst_len);
+@@ -486,9 +471,9 @@ static void ahash_done(struct device *jrdev, u32 *desc, u32 err,
+ 	struct ahash_edesc *edesc;
+ 	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
+ 	int digestsize = crypto_ahash_digestsize(ahash);
++	struct caam_hash_state *state = ahash_request_ctx(req);
+ #ifdef DEBUG
+ 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+-	struct caam_hash_state *state = ahash_request_ctx(req);
+ 
+ 	dev_err(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
+ #endif
+@@ -497,17 +482,14 @@ static void ahash_done(struct device *jrdev, u32 *desc, u32 err,
+ 	if (err)
+ 		caam_jr_strstatus(jrdev, err);
+ 
+-	ahash_unmap(jrdev, edesc, req, digestsize);
++	ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
++	memcpy(req->result, state->caam_ctx, digestsize);
+ 	kfree(edesc);
+ 
+ #ifdef DEBUG
+ 	print_hex_dump(KERN_ERR, "ctx@"__stringify(__LINE__)": ",
+ 		       DUMP_PREFIX_ADDRESS, 16, 4, state->caam_ctx,
+ 		       ctx->ctx_len, 1);
+-	if (req->result)
+-		print_hex_dump(KERN_ERR, "result@"__stringify(__LINE__)": ",
+-			       DUMP_PREFIX_ADDRESS, 16, 4, req->result,
+-			       digestsize, 1);
+ #endif
+ 
+ 	req->base.complete(&req->base, err);
+@@ -555,9 +537,9 @@ static void ahash_done_ctx_src(struct device *jrdev, u32 *desc, u32 err,
+ 	struct ahash_edesc *edesc;
+ 	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
+ 	int digestsize = crypto_ahash_digestsize(ahash);
++	struct caam_hash_state *state = ahash_request_ctx(req);
+ #ifdef DEBUG
+ 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+-	struct caam_hash_state *state = ahash_request_ctx(req);
+ 
+ 	dev_err(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
+ #endif
+@@ -566,17 +548,14 @@ static void ahash_done_ctx_src(struct device *jrdev, u32 *desc, u32 err,
+ 	if (err)
+ 		caam_jr_strstatus(jrdev, err);
+ 
+-	ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_TO_DEVICE);
++	ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL);
++	memcpy(req->result, state->caam_ctx, digestsize);
+ 	kfree(edesc);
+ 
+ #ifdef DEBUG
+ 	print_hex_dump(KERN_ERR, "ctx@"__stringify(__LINE__)": ",
+ 		       DUMP_PREFIX_ADDRESS, 16, 4, state->caam_ctx,
+ 		       ctx->ctx_len, 1);
+-	if (req->result)
+-		print_hex_dump(KERN_ERR, "result@"__stringify(__LINE__)": ",
+-			       DUMP_PREFIX_ADDRESS, 16, 4, req->result,
+-			       digestsize, 1);
+ #endif
+ 
+ 	req->base.complete(&req->base, err);
+@@ -837,7 +816,7 @@ static int ahash_final_ctx(struct ahash_request *req)
+ 	edesc->sec4_sg_bytes = sec4_sg_bytes;
+ 
+ 	ret = ctx_map_to_sec4_sg(jrdev, state, ctx->ctx_len,
+-				 edesc->sec4_sg, DMA_TO_DEVICE);
++				 edesc->sec4_sg, DMA_BIDIRECTIONAL);
+ 	if (ret)
+ 		goto unmap_ctx;
+ 
+@@ -857,14 +836,7 @@ static int ahash_final_ctx(struct ahash_request *req)
+ 
+ 	append_seq_in_ptr(desc, edesc->sec4_sg_dma, ctx->ctx_len + buflen,
+ 			  LDST_SGF);
+-
+-	edesc->dst_dma = map_seq_out_ptr_result(desc, jrdev, req->result,
+-						digestsize);
+-	if (dma_mapping_error(jrdev, edesc->dst_dma)) {
+-		dev_err(jrdev, "unable to map dst\n");
+-		ret = -ENOMEM;
+-		goto unmap_ctx;
+-	}
++	append_seq_out_ptr(desc, state->ctx_dma, digestsize, 0);
+ 
+ #ifdef DEBUG
+ 	print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
+@@ -877,7 +849,7 @@ static int ahash_final_ctx(struct ahash_request *req)
+ 
+ 	return -EINPROGRESS;
+  unmap_ctx:
+-	ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
++	ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL);
+ 	kfree(edesc);
+ 	return ret;
+ }
+@@ -931,7 +903,7 @@ static int ahash_finup_ctx(struct ahash_request *req)
+ 	edesc->src_nents = src_nents;
+ 
+ 	ret = ctx_map_to_sec4_sg(jrdev, state, ctx->ctx_len,
+-				 edesc->sec4_sg, DMA_TO_DEVICE);
++				 edesc->sec4_sg, DMA_BIDIRECTIONAL);
+ 	if (ret)
+ 		goto unmap_ctx;
+ 
+@@ -945,13 +917,7 @@ static int ahash_finup_ctx(struct ahash_request *req)
+ 	if (ret)
+ 		goto unmap_ctx;
+ 
+-	edesc->dst_dma = map_seq_out_ptr_result(desc, jrdev, req->result,
+-						digestsize);
+-	if (dma_mapping_error(jrdev, edesc->dst_dma)) {
+-		dev_err(jrdev, "unable to map dst\n");
+-		ret = -ENOMEM;
+-		goto unmap_ctx;
+-	}
++	append_seq_out_ptr(desc, state->ctx_dma, digestsize, 0);
+ 
+ #ifdef DEBUG
+ 	print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
+@@ -964,7 +930,7 @@ static int ahash_finup_ctx(struct ahash_request *req)
+ 
+ 	return -EINPROGRESS;
+  unmap_ctx:
+-	ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
++	ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL);
+ 	kfree(edesc);
+ 	return ret;
+ }
+@@ -1023,10 +989,8 @@ static int ahash_digest(struct ahash_request *req)
+ 
+ 	desc = edesc->hw_desc;
+ 
+-	edesc->dst_dma = map_seq_out_ptr_result(desc, jrdev, req->result,
+-						digestsize);
+-	if (dma_mapping_error(jrdev, edesc->dst_dma)) {
+-		dev_err(jrdev, "unable to map dst\n");
++	ret = map_seq_out_ptr_ctx(desc, jrdev, state, digestsize);
++	if (ret) {
+ 		ahash_unmap(jrdev, edesc, req, digestsize);
+ 		kfree(edesc);
+ 		return -ENOMEM;
+@@ -1041,7 +1005,7 @@ static int ahash_digest(struct ahash_request *req)
+ 	if (!ret) {
+ 		ret = -EINPROGRESS;
+ 	} else {
+-		ahash_unmap(jrdev, edesc, req, digestsize);
++		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
+ 		kfree(edesc);
+ 	}
+ 
+@@ -1083,12 +1047,9 @@ static int ahash_final_no_ctx(struct ahash_request *req)
+ 		append_seq_in_ptr(desc, state->buf_dma, buflen, 0);
+ 	}
+ 
+-	edesc->dst_dma = map_seq_out_ptr_result(desc, jrdev, req->result,
+-						digestsize);
+-	if (dma_mapping_error(jrdev, edesc->dst_dma)) {
+-		dev_err(jrdev, "unable to map dst\n");
++	ret = map_seq_out_ptr_ctx(desc, jrdev, state, digestsize);
++	if (ret)
+ 		goto unmap;
+-	}
+ 
+ #ifdef DEBUG
+ 	print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
+@@ -1099,7 +1060,7 @@ static int ahash_final_no_ctx(struct ahash_request *req)
+ 	if (!ret) {
+ 		ret = -EINPROGRESS;
+ 	} else {
+-		ahash_unmap(jrdev, edesc, req, digestsize);
++		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
+ 		kfree(edesc);
+ 	}
+ 
+@@ -1298,12 +1259,9 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
+ 		goto unmap;
+ 	}
+ 
+-	edesc->dst_dma = map_seq_out_ptr_result(desc, jrdev, req->result,
+-						digestsize);
+-	if (dma_mapping_error(jrdev, edesc->dst_dma)) {
+-		dev_err(jrdev, "unable to map dst\n");
++	ret = map_seq_out_ptr_ctx(desc, jrdev, state, digestsize);
++	if (ret)
+ 		goto unmap;
+-	}
+ 
+ #ifdef DEBUG
+ 	print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
+@@ -1314,7 +1272,7 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
+ 	if (!ret) {
+ 		ret = -EINPROGRESS;
+ 	} else {
+-		ahash_unmap(jrdev, edesc, req, digestsize);
++		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
+ 		kfree(edesc);
+ 	}
+ 
+@@ -1446,6 +1404,7 @@ static int ahash_init(struct ahash_request *req)
+ 	state->final = ahash_final_no_ctx;
+ 
+ 	state->ctx_dma = 0;
++	state->ctx_dma_len = 0;
+ 	state->current_buf = 0;
+ 	state->buf_dma = 0;
+ 	state->buflen_0 = 0;
+diff --git a/drivers/crypto/cavium/zip/zip_main.c b/drivers/crypto/cavium/zip/zip_main.c
+index be055b9547f6..6183f9128a8a 100644
+--- a/drivers/crypto/cavium/zip/zip_main.c
++++ b/drivers/crypto/cavium/zip/zip_main.c
+@@ -351,6 +351,7 @@ static struct pci_driver zip_driver = {
+ 
+ static struct crypto_alg zip_comp_deflate = {
+ 	.cra_name		= "deflate",
++	.cra_driver_name	= "deflate-cavium",
+ 	.cra_flags		= CRYPTO_ALG_TYPE_COMPRESS,
+ 	.cra_ctxsize		= sizeof(struct zip_kernel_ctx),
+ 	.cra_priority           = 300,
+@@ -365,6 +366,7 @@ static struct crypto_alg zip_comp_deflate = {
+ 
+ static struct crypto_alg zip_comp_lzs = {
+ 	.cra_name		= "lzs",
++	.cra_driver_name	= "lzs-cavium",
+ 	.cra_flags		= CRYPTO_ALG_TYPE_COMPRESS,
+ 	.cra_ctxsize		= sizeof(struct zip_kernel_ctx),
+ 	.cra_priority           = 300,
+@@ -384,7 +386,7 @@ static struct scomp_alg zip_scomp_deflate = {
+ 	.decompress		= zip_scomp_decompress,
+ 	.base			= {
+ 		.cra_name		= "deflate",
+-		.cra_driver_name	= "deflate-scomp",
++		.cra_driver_name	= "deflate-scomp-cavium",
+ 		.cra_module		= THIS_MODULE,
+ 		.cra_priority           = 300,
+ 	}
+@@ -397,7 +399,7 @@ static struct scomp_alg zip_scomp_lzs = {
+ 	.decompress		= zip_scomp_decompress,
+ 	.base			= {
+ 		.cra_name		= "lzs",
+-		.cra_driver_name	= "lzs-scomp",
++		.cra_driver_name	= "lzs-scomp-cavium",
+ 		.cra_module		= THIS_MODULE,
+ 		.cra_priority           = 300,
+ 	}
+diff --git a/drivers/crypto/ccree/cc_buffer_mgr.c b/drivers/crypto/ccree/cc_buffer_mgr.c
+index dd948e1df9e5..3bcb6bce666e 100644
+--- a/drivers/crypto/ccree/cc_buffer_mgr.c
++++ b/drivers/crypto/ccree/cc_buffer_mgr.c
+@@ -614,10 +614,10 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req)
+ 				 hw_iv_size, DMA_BIDIRECTIONAL);
+ 	}
+ 
+-	/*In case a pool was set, a table was
+-	 *allocated and should be released
+-	 */
+-	if (areq_ctx->mlli_params.curr_pool) {
++	/* Release pool */
++	if ((areq_ctx->assoc_buff_type == CC_DMA_BUF_MLLI ||
++	     areq_ctx->data_buff_type == CC_DMA_BUF_MLLI) &&
++	    (areq_ctx->mlli_params.mlli_virt_addr)) {
+ 		dev_dbg(dev, "free MLLI buffer: dma=%pad virt=%pK\n",
+ 			&areq_ctx->mlli_params.mlli_dma_addr,
+ 			areq_ctx->mlli_params.mlli_virt_addr);
+diff --git a/drivers/crypto/ccree/cc_cipher.c b/drivers/crypto/ccree/cc_cipher.c
+index cc92b031fad1..4ec93079daaf 100644
+--- a/drivers/crypto/ccree/cc_cipher.c
++++ b/drivers/crypto/ccree/cc_cipher.c
+@@ -80,6 +80,7 @@ static int validate_keys_sizes(struct cc_cipher_ctx *ctx_p, u32 size)
+ 		default:
+ 			break;
+ 		}
++		break;
+ 	case S_DIN_to_DES:
+ 		if (size == DES3_EDE_KEY_SIZE || size == DES_KEY_SIZE)
+ 			return 0;
+@@ -652,6 +653,8 @@ static void cc_cipher_complete(struct device *dev, void *cc_req, int err)
+ 	unsigned int ivsize = crypto_skcipher_ivsize(sk_tfm);
+ 	unsigned int len;
+ 
++	cc_unmap_cipher_request(dev, req_ctx, ivsize, src, dst);
++
+ 	switch (ctx_p->cipher_mode) {
+ 	case DRV_CIPHER_CBC:
+ 		/*
+@@ -681,7 +684,6 @@ static void cc_cipher_complete(struct device *dev, void *cc_req, int err)
+ 		break;
+ 	}
+ 
+-	cc_unmap_cipher_request(dev, req_ctx, ivsize, src, dst);
+ 	kzfree(req_ctx->iv);
+ 
+ 	skcipher_request_complete(req, err);
+@@ -799,7 +801,8 @@ static int cc_cipher_decrypt(struct skcipher_request *req)
+ 
+ 	memset(req_ctx, 0, sizeof(*req_ctx));
+ 
+-	if (ctx_p->cipher_mode == DRV_CIPHER_CBC) {
++	if ((ctx_p->cipher_mode == DRV_CIPHER_CBC) &&
++	    (req->cryptlen >= ivsize)) {
+ 
+ 		/* Allocate and save the last IV sized bytes of the source,
+ 		 * which will be lost in case of in-place decryption.
+diff --git a/drivers/crypto/rockchip/rk3288_crypto.c b/drivers/crypto/rockchip/rk3288_crypto.c
+index c9d622abd90c..0ce4a65b95f5 100644
+--- a/drivers/crypto/rockchip/rk3288_crypto.c
++++ b/drivers/crypto/rockchip/rk3288_crypto.c
+@@ -119,7 +119,7 @@ static int rk_load_data(struct rk_crypto_info *dev,
+ 		count = (dev->left_bytes > PAGE_SIZE) ?
+ 			PAGE_SIZE : dev->left_bytes;
+ 
+-		if (!sg_pcopy_to_buffer(dev->first, dev->nents,
++		if (!sg_pcopy_to_buffer(dev->first, dev->src_nents,
+ 					dev->addr_vir, count,
+ 					dev->total - dev->left_bytes)) {
+ 			dev_err(dev->dev, "[%s:%d] pcopy err\n",
+diff --git a/drivers/crypto/rockchip/rk3288_crypto.h b/drivers/crypto/rockchip/rk3288_crypto.h
+index d5fb4013fb42..54ee5b3ed9db 100644
+--- a/drivers/crypto/rockchip/rk3288_crypto.h
++++ b/drivers/crypto/rockchip/rk3288_crypto.h
+@@ -207,7 +207,8 @@ struct rk_crypto_info {
+ 	void				*addr_vir;
+ 	int				aligned;
+ 	int				align_size;
+-	size_t				nents;
++	size_t				src_nents;
++	size_t				dst_nents;
+ 	unsigned int			total;
+ 	unsigned int			count;
+ 	dma_addr_t			addr_in;
+@@ -244,6 +245,7 @@ struct rk_cipher_ctx {
+ 	struct rk_crypto_info		*dev;
+ 	unsigned int			keylen;
+ 	u32				mode;
++	u8				iv[AES_BLOCK_SIZE];
+ };
+ 
+ enum alg_type {
+diff --git a/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c b/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c
+index 639c15c5364b..23305f22072f 100644
+--- a/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c
++++ b/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c
+@@ -242,6 +242,17 @@ static void crypto_dma_start(struct rk_crypto_info *dev)
+ static int rk_set_data_start(struct rk_crypto_info *dev)
+ {
+ 	int err;
++	struct ablkcipher_request *req =
++		ablkcipher_request_cast(dev->async_req);
++	struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req);
++	struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm);
++	u32 ivsize = crypto_ablkcipher_ivsize(tfm);
++	u8 *src_last_blk = page_address(sg_page(dev->sg_src)) +
++		dev->sg_src->offset + dev->sg_src->length - ivsize;
++
++	/* store the IV that needs to be updated in chain mode */
++	if (ctx->mode & RK_CRYPTO_DEC)
++		memcpy(ctx->iv, src_last_blk, ivsize);
+ 
+ 	err = dev->load_data(dev, dev->sg_src, dev->sg_dst);
+ 	if (!err)
+@@ -260,8 +271,9 @@ static int rk_ablk_start(struct rk_crypto_info *dev)
+ 	dev->total = req->nbytes;
+ 	dev->sg_src = req->src;
+ 	dev->first = req->src;
+-	dev->nents = sg_nents(req->src);
++	dev->src_nents = sg_nents(req->src);
+ 	dev->sg_dst = req->dst;
++	dev->dst_nents = sg_nents(req->dst);
+ 	dev->aligned = 1;
+ 
+ 	spin_lock_irqsave(&dev->lock, flags);
+@@ -285,6 +297,28 @@ static void rk_iv_copyback(struct rk_crypto_info *dev)
+ 		memcpy_fromio(req->info, dev->reg + RK_CRYPTO_AES_IV_0, ivsize);
+ }
+ 
++static void rk_update_iv(struct rk_crypto_info *dev)
++{
++	struct ablkcipher_request *req =
++		ablkcipher_request_cast(dev->async_req);
++	struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req);
++	struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm);
++	u32 ivsize = crypto_ablkcipher_ivsize(tfm);
++	u8 *new_iv = NULL;
++
++	if (ctx->mode & RK_CRYPTO_DEC) {
++		new_iv = ctx->iv;
++	} else {
++		new_iv = page_address(sg_page(dev->sg_dst)) +
++			 dev->sg_dst->offset + dev->sg_dst->length - ivsize;
++	}
++
++	if (ivsize == DES_BLOCK_SIZE)
++		memcpy_toio(dev->reg + RK_CRYPTO_TDES_IV_0, new_iv, ivsize);
++	else if (ivsize == AES_BLOCK_SIZE)
++		memcpy_toio(dev->reg + RK_CRYPTO_AES_IV_0, new_iv, ivsize);
++}
++
+ /* return:
+  *	true	some err was occurred
+  *	fault	no err, continue
+@@ -297,7 +331,7 @@ static int rk_ablk_rx(struct rk_crypto_info *dev)
+ 
+ 	dev->unload_data(dev);
+ 	if (!dev->aligned) {
+-		if (!sg_pcopy_from_buffer(req->dst, dev->nents,
++		if (!sg_pcopy_from_buffer(req->dst, dev->dst_nents,
+ 					  dev->addr_vir, dev->count,
+ 					  dev->total - dev->left_bytes -
+ 					  dev->count)) {
+@@ -306,6 +340,7 @@ static int rk_ablk_rx(struct rk_crypto_info *dev)
+ 		}
+ 	}
+ 	if (dev->left_bytes) {
++		rk_update_iv(dev);
+ 		if (dev->aligned) {
+ 			if (sg_is_last(dev->sg_src)) {
+ 				dev_err(dev->dev, "[%s:%d] Lack of data\n",
+diff --git a/drivers/crypto/rockchip/rk3288_crypto_ahash.c b/drivers/crypto/rockchip/rk3288_crypto_ahash.c
+index 821a506b9e17..c336ae75e361 100644
+--- a/drivers/crypto/rockchip/rk3288_crypto_ahash.c
++++ b/drivers/crypto/rockchip/rk3288_crypto_ahash.c
+@@ -206,7 +206,7 @@ static int rk_ahash_start(struct rk_crypto_info *dev)
+ 	dev->sg_dst = NULL;
+ 	dev->sg_src = req->src;
+ 	dev->first = req->src;
+-	dev->nents = sg_nents(req->src);
++	dev->src_nents = sg_nents(req->src);
+ 	rctx = ahash_request_ctx(req);
+ 	rctx->mode = 0;
+ 
+diff --git a/drivers/dma/imx-dma.c b/drivers/dma/imx-dma.c
+index 4a09af3cd546..7b9a7fb28bb9 100644
+--- a/drivers/dma/imx-dma.c
++++ b/drivers/dma/imx-dma.c
+@@ -285,7 +285,7 @@ static inline int imxdma_sg_next(struct imxdma_desc *d)
+ 	struct scatterlist *sg = d->sg;
+ 	unsigned long now;
+ 
+-	now = min(d->len, sg_dma_len(sg));
++	now = min_t(size_t, d->len, sg_dma_len(sg));
+ 	if (d->len != IMX_DMA_LENGTH_LOOP)
+ 		d->len -= now;
+ 
+diff --git a/drivers/dma/qcom/hidma.c b/drivers/dma/qcom/hidma.c
+index 43d4b00b8138..411f91fde734 100644
+--- a/drivers/dma/qcom/hidma.c
++++ b/drivers/dma/qcom/hidma.c
+@@ -138,24 +138,25 @@ static void hidma_process_completed(struct hidma_chan *mchan)
+ 		desc = &mdesc->desc;
+ 		last_cookie = desc->cookie;
+ 
++		llstat = hidma_ll_status(mdma->lldev, mdesc->tre_ch);
++
+ 		spin_lock_irqsave(&mchan->lock, irqflags);
++		if (llstat == DMA_COMPLETE) {
++			mchan->last_success = last_cookie;
++			result.result = DMA_TRANS_NOERROR;
++		} else {
++			result.result = DMA_TRANS_ABORTED;
++		}
++
+ 		dma_cookie_complete(desc);
+ 		spin_unlock_irqrestore(&mchan->lock, irqflags);
+ 
+-		llstat = hidma_ll_status(mdma->lldev, mdesc->tre_ch);
+ 		dmaengine_desc_get_callback(desc, &cb);
+ 
+ 		dma_run_dependencies(desc);
+ 
+ 		spin_lock_irqsave(&mchan->lock, irqflags);
+ 		list_move(&mdesc->node, &mchan->free);
+-
+-		if (llstat == DMA_COMPLETE) {
+-			mchan->last_success = last_cookie;
+-			result.result = DMA_TRANS_NOERROR;
+-		} else
+-			result.result = DMA_TRANS_ABORTED;
+-
+ 		spin_unlock_irqrestore(&mchan->lock, irqflags);
+ 
+ 		dmaengine_desc_callback_invoke(&cb, &result);
+@@ -415,6 +416,7 @@ hidma_prep_dma_memcpy(struct dma_chan *dmach, dma_addr_t dest, dma_addr_t src,
+ 	if (!mdesc)
+ 		return NULL;
+ 
++	mdesc->desc.flags = flags;
+ 	hidma_ll_set_transfer_params(mdma->lldev, mdesc->tre_ch,
+ 				     src, dest, len, flags,
+ 				     HIDMA_TRE_MEMCPY);
+@@ -447,6 +449,7 @@ hidma_prep_dma_memset(struct dma_chan *dmach, dma_addr_t dest, int value,
+ 	if (!mdesc)
+ 		return NULL;
+ 
++	mdesc->desc.flags = flags;
+ 	hidma_ll_set_transfer_params(mdma->lldev, mdesc->tre_ch,
+ 				     value, dest, len, flags,
+ 				     HIDMA_TRE_MEMSET);
+diff --git a/drivers/dma/sh/usb-dmac.c b/drivers/dma/sh/usb-dmac.c
+index 7f7184c3cf95..59403f6d008a 100644
+--- a/drivers/dma/sh/usb-dmac.c
++++ b/drivers/dma/sh/usb-dmac.c
+@@ -694,6 +694,8 @@ static int usb_dmac_runtime_resume(struct device *dev)
+ #endif /* CONFIG_PM */
+ 
+ static const struct dev_pm_ops usb_dmac_pm = {
++	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
++				      pm_runtime_force_resume)
+ 	SET_RUNTIME_PM_OPS(usb_dmac_runtime_suspend, usb_dmac_runtime_resume,
+ 			   NULL)
+ };
+diff --git a/drivers/dma/tegra20-apb-dma.c b/drivers/dma/tegra20-apb-dma.c
+index 9a558e30c461..8219ab88a507 100644
+--- a/drivers/dma/tegra20-apb-dma.c
++++ b/drivers/dma/tegra20-apb-dma.c
+@@ -636,7 +636,10 @@ static void handle_cont_sngl_cycle_dma_done(struct tegra_dma_channel *tdc,
+ 
+ 	sgreq = list_first_entry(&tdc->pending_sg_req, typeof(*sgreq), node);
+ 	dma_desc = sgreq->dma_desc;
+-	dma_desc->bytes_transferred += sgreq->req_len;
++	/* if we dma for long enough the transfer count will wrap */
++	dma_desc->bytes_transferred =
++		(dma_desc->bytes_transferred + sgreq->req_len) %
++		dma_desc->bytes_requested;
+ 
+ 	/* Callback need to be call */
+ 	if (!dma_desc->cb_count)
+diff --git a/drivers/firmware/efi/cper.c b/drivers/firmware/efi/cper.c
+index a7902fccdcfa..6090d25dce85 100644
+--- a/drivers/firmware/efi/cper.c
++++ b/drivers/firmware/efi/cper.c
+@@ -546,19 +546,24 @@ EXPORT_SYMBOL_GPL(cper_estatus_check_header);
+ int cper_estatus_check(const struct acpi_hest_generic_status *estatus)
+ {
+ 	struct acpi_hest_generic_data *gdata;
+-	unsigned int data_len, gedata_len;
++	unsigned int data_len, record_size;
+ 	int rc;
+ 
+ 	rc = cper_estatus_check_header(estatus);
+ 	if (rc)
+ 		return rc;
++
+ 	data_len = estatus->data_length;
+ 
+ 	apei_estatus_for_each_section(estatus, gdata) {
+-		gedata_len = acpi_hest_get_error_length(gdata);
+-		if (gedata_len > data_len - acpi_hest_get_size(gdata))
++		if (sizeof(struct acpi_hest_generic_data) > data_len)
++			return -EINVAL;
++
++		record_size = acpi_hest_get_record_size(gdata);
++		if (record_size > data_len)
+ 			return -EINVAL;
+-		data_len -= acpi_hest_get_record_size(gdata);
++
++		data_len -= record_size;
+ 	}
+ 	if (data_len)
+ 		return -EINVAL;
+diff --git a/drivers/firmware/efi/libstub/arm-stub.c b/drivers/firmware/efi/libstub/arm-stub.c
+index c037c6c5d0b7..04e6ecd72cd9 100644
+--- a/drivers/firmware/efi/libstub/arm-stub.c
++++ b/drivers/firmware/efi/libstub/arm-stub.c
+@@ -367,6 +367,11 @@ void efi_get_virtmap(efi_memory_desc_t *memory_map, unsigned long map_size,
+ 		paddr = in->phys_addr;
+ 		size = in->num_pages * EFI_PAGE_SIZE;
+ 
++		if (novamap()) {
++			in->virt_addr = in->phys_addr;
++			continue;
++		}
++
+ 		/*
+ 		 * Make the mapping compatible with 64k pages: this allows
+ 		 * a 4k page size kernel to kexec a 64k page size kernel and
+diff --git a/drivers/firmware/efi/libstub/efi-stub-helper.c b/drivers/firmware/efi/libstub/efi-stub-helper.c
+index e94975f4655b..442f51c2a53d 100644
+--- a/drivers/firmware/efi/libstub/efi-stub-helper.c
++++ b/drivers/firmware/efi/libstub/efi-stub-helper.c
+@@ -34,6 +34,7 @@ static unsigned long __chunk_size = EFI_READ_CHUNK_SIZE;
+ 
+ static int __section(.data) __nokaslr;
+ static int __section(.data) __quiet;
++static int __section(.data) __novamap;
+ 
+ int __pure nokaslr(void)
+ {
+@@ -43,6 +44,10 @@ int __pure is_quiet(void)
+ {
+ 	return __quiet;
+ }
++int __pure novamap(void)
++{
++	return __novamap;
++}
+ 
+ #define EFI_MMAP_NR_SLACK_SLOTS	8
+ 
+@@ -482,6 +487,11 @@ efi_status_t efi_parse_options(char const *cmdline)
+ 			__chunk_size = -1UL;
+ 		}
+ 
++		if (!strncmp(str, "novamap", 7)) {
++			str += strlen("novamap");
++			__novamap = 1;
++		}
++
+ 		/* Group words together, delimited by "," */
+ 		while (*str && *str != ' ' && *str != ',')
+ 			str++;
+diff --git a/drivers/firmware/efi/libstub/efistub.h b/drivers/firmware/efi/libstub/efistub.h
+index 32799cf039ef..337b52c4702c 100644
+--- a/drivers/firmware/efi/libstub/efistub.h
++++ b/drivers/firmware/efi/libstub/efistub.h
+@@ -27,6 +27,7 @@
+ 
+ extern int __pure nokaslr(void);
+ extern int __pure is_quiet(void);
++extern int __pure novamap(void);
+ 
+ #define pr_efi(sys_table, msg)		do {				\
+ 	if (!is_quiet()) efi_printk(sys_table, "EFI stub: "msg);	\
+diff --git a/drivers/firmware/efi/libstub/fdt.c b/drivers/firmware/efi/libstub/fdt.c
+index 0dc7b4987cc2..f8f89f995e9d 100644
+--- a/drivers/firmware/efi/libstub/fdt.c
++++ b/drivers/firmware/efi/libstub/fdt.c
+@@ -327,6 +327,9 @@ efi_status_t allocate_new_fdt_and_exit_boot(efi_system_table_t *sys_table,
+ 	if (status == EFI_SUCCESS) {
+ 		efi_set_virtual_address_map_t *svam;
+ 
++		if (novamap())
++			return EFI_SUCCESS;
++
+ 		/* Install the new virtual address map */
+ 		svam = sys_table->runtime->set_virtual_address_map;
+ 		status = svam(runtime_entry_count * desc_size, desc_size,
+diff --git a/drivers/firmware/efi/memattr.c b/drivers/firmware/efi/memattr.c
+index 8986757eafaf..aac972b056d9 100644
+--- a/drivers/firmware/efi/memattr.c
++++ b/drivers/firmware/efi/memattr.c
+@@ -94,7 +94,7 @@ static bool entry_is_valid(const efi_memory_desc_t *in, efi_memory_desc_t *out)
+ 
+ 		if (!(md->attribute & EFI_MEMORY_RUNTIME))
+ 			continue;
+-		if (md->virt_addr == 0) {
++		if (md->virt_addr == 0 && md->phys_addr != 0) {
+ 			/* no virtual mapping has been installed by the stub */
+ 			break;
+ 		}
+diff --git a/drivers/firmware/efi/runtime-wrappers.c b/drivers/firmware/efi/runtime-wrappers.c
+index e2abfdb5cee6..698745c249e8 100644
+--- a/drivers/firmware/efi/runtime-wrappers.c
++++ b/drivers/firmware/efi/runtime-wrappers.c
+@@ -85,7 +85,7 @@ struct efi_runtime_work efi_rts_work;
+ 		pr_err("Failed to queue work to efi_rts_wq.\n");	\
+ 									\
+ exit:									\
+-	efi_rts_work.efi_rts_id = NONE;					\
++	efi_rts_work.efi_rts_id = EFI_NONE;				\
+ 	efi_rts_work.status;						\
+ })
+ 
+@@ -175,50 +175,50 @@ static void efi_call_rts(struct work_struct *work)
+ 	arg5 = efi_rts_work.arg5;
+ 
+ 	switch (efi_rts_work.efi_rts_id) {
+-	case GET_TIME:
++	case EFI_GET_TIME:
+ 		status = efi_call_virt(get_time, (efi_time_t *)arg1,
+ 				       (efi_time_cap_t *)arg2);
+ 		break;
+-	case SET_TIME:
++	case EFI_SET_TIME:
+ 		status = efi_call_virt(set_time, (efi_time_t *)arg1);
+ 		break;
+-	case GET_WAKEUP_TIME:
++	case EFI_GET_WAKEUP_TIME:
+ 		status = efi_call_virt(get_wakeup_time, (efi_bool_t *)arg1,
+ 				       (efi_bool_t *)arg2, (efi_time_t *)arg3);
+ 		break;
+-	case SET_WAKEUP_TIME:
++	case EFI_SET_WAKEUP_TIME:
+ 		status = efi_call_virt(set_wakeup_time, *(efi_bool_t *)arg1,
+ 				       (efi_time_t *)arg2);
+ 		break;
+-	case GET_VARIABLE:
++	case EFI_GET_VARIABLE:
+ 		status = efi_call_virt(get_variable, (efi_char16_t *)arg1,
+ 				       (efi_guid_t *)arg2, (u32 *)arg3,
+ 				       (unsigned long *)arg4, (void *)arg5);
+ 		break;
+-	case GET_NEXT_VARIABLE:
++	case EFI_GET_NEXT_VARIABLE:
+ 		status = efi_call_virt(get_next_variable, (unsigned long *)arg1,
+ 				       (efi_char16_t *)arg2,
+ 				       (efi_guid_t *)arg3);
+ 		break;
+-	case SET_VARIABLE:
++	case EFI_SET_VARIABLE:
+ 		status = efi_call_virt(set_variable, (efi_char16_t *)arg1,
+ 				       (efi_guid_t *)arg2, *(u32 *)arg3,
+ 				       *(unsigned long *)arg4, (void *)arg5);
+ 		break;
+-	case QUERY_VARIABLE_INFO:
++	case EFI_QUERY_VARIABLE_INFO:
+ 		status = efi_call_virt(query_variable_info, *(u32 *)arg1,
+ 				       (u64 *)arg2, (u64 *)arg3, (u64 *)arg4);
+ 		break;
+-	case GET_NEXT_HIGH_MONO_COUNT:
++	case EFI_GET_NEXT_HIGH_MONO_COUNT:
+ 		status = efi_call_virt(get_next_high_mono_count, (u32 *)arg1);
+ 		break;
+-	case UPDATE_CAPSULE:
++	case EFI_UPDATE_CAPSULE:
+ 		status = efi_call_virt(update_capsule,
+ 				       (efi_capsule_header_t **)arg1,
+ 				       *(unsigned long *)arg2,
+ 				       *(unsigned long *)arg3);
+ 		break;
+-	case QUERY_CAPSULE_CAPS:
++	case EFI_QUERY_CAPSULE_CAPS:
+ 		status = efi_call_virt(query_capsule_caps,
+ 				       (efi_capsule_header_t **)arg1,
+ 				       *(unsigned long *)arg2, (u64 *)arg3,
+@@ -242,7 +242,7 @@ static efi_status_t virt_efi_get_time(efi_time_t *tm, efi_time_cap_t *tc)
+ 
+ 	if (down_interruptible(&efi_runtime_lock))
+ 		return EFI_ABORTED;
+-	status = efi_queue_work(GET_TIME, tm, tc, NULL, NULL, NULL);
++	status = efi_queue_work(EFI_GET_TIME, tm, tc, NULL, NULL, NULL);
+ 	up(&efi_runtime_lock);
+ 	return status;
+ }
+@@ -253,7 +253,7 @@ static efi_status_t virt_efi_set_time(efi_time_t *tm)
+ 
+ 	if (down_interruptible(&efi_runtime_lock))
+ 		return EFI_ABORTED;
+-	status = efi_queue_work(SET_TIME, tm, NULL, NULL, NULL, NULL);
++	status = efi_queue_work(EFI_SET_TIME, tm, NULL, NULL, NULL, NULL);
+ 	up(&efi_runtime_lock);
+ 	return status;
+ }
+@@ -266,7 +266,7 @@ static efi_status_t virt_efi_get_wakeup_time(efi_bool_t *enabled,
+ 
+ 	if (down_interruptible(&efi_runtime_lock))
+ 		return EFI_ABORTED;
+-	status = efi_queue_work(GET_WAKEUP_TIME, enabled, pending, tm, NULL,
++	status = efi_queue_work(EFI_GET_WAKEUP_TIME, enabled, pending, tm, NULL,
+ 				NULL);
+ 	up(&efi_runtime_lock);
+ 	return status;
+@@ -278,7 +278,7 @@ static efi_status_t virt_efi_set_wakeup_time(efi_bool_t enabled, efi_time_t *tm)
+ 
+ 	if (down_interruptible(&efi_runtime_lock))
+ 		return EFI_ABORTED;
+-	status = efi_queue_work(SET_WAKEUP_TIME, &enabled, tm, NULL, NULL,
++	status = efi_queue_work(EFI_SET_WAKEUP_TIME, &enabled, tm, NULL, NULL,
+ 				NULL);
+ 	up(&efi_runtime_lock);
+ 	return status;
+@@ -294,7 +294,7 @@ static efi_status_t virt_efi_get_variable(efi_char16_t *name,
+ 
+ 	if (down_interruptible(&efi_runtime_lock))
+ 		return EFI_ABORTED;
+-	status = efi_queue_work(GET_VARIABLE, name, vendor, attr, data_size,
++	status = efi_queue_work(EFI_GET_VARIABLE, name, vendor, attr, data_size,
+ 				data);
+ 	up(&efi_runtime_lock);
+ 	return status;
+@@ -308,7 +308,7 @@ static efi_status_t virt_efi_get_next_variable(unsigned long *name_size,
+ 
+ 	if (down_interruptible(&efi_runtime_lock))
+ 		return EFI_ABORTED;
+-	status = efi_queue_work(GET_NEXT_VARIABLE, name_size, name, vendor,
++	status = efi_queue_work(EFI_GET_NEXT_VARIABLE, name_size, name, vendor,
+ 				NULL, NULL);
+ 	up(&efi_runtime_lock);
+ 	return status;
+@@ -324,7 +324,7 @@ static efi_status_t virt_efi_set_variable(efi_char16_t *name,
+ 
+ 	if (down_interruptible(&efi_runtime_lock))
+ 		return EFI_ABORTED;
+-	status = efi_queue_work(SET_VARIABLE, name, vendor, &attr, &data_size,
++	status = efi_queue_work(EFI_SET_VARIABLE, name, vendor, &attr, &data_size,
+ 				data);
+ 	up(&efi_runtime_lock);
+ 	return status;
+@@ -359,7 +359,7 @@ static efi_status_t virt_efi_query_variable_info(u32 attr,
+ 
+ 	if (down_interruptible(&efi_runtime_lock))
+ 		return EFI_ABORTED;
+-	status = efi_queue_work(QUERY_VARIABLE_INFO, &attr, storage_space,
++	status = efi_queue_work(EFI_QUERY_VARIABLE_INFO, &attr, storage_space,
+ 				remaining_space, max_variable_size, NULL);
+ 	up(&efi_runtime_lock);
+ 	return status;
+@@ -391,7 +391,7 @@ static efi_status_t virt_efi_get_next_high_mono_count(u32 *count)
+ 
+ 	if (down_interruptible(&efi_runtime_lock))
+ 		return EFI_ABORTED;
+-	status = efi_queue_work(GET_NEXT_HIGH_MONO_COUNT, count, NULL, NULL,
++	status = efi_queue_work(EFI_GET_NEXT_HIGH_MONO_COUNT, count, NULL, NULL,
+ 				NULL, NULL);
+ 	up(&efi_runtime_lock);
+ 	return status;
+@@ -407,7 +407,7 @@ static void virt_efi_reset_system(int reset_type,
+ 			"could not get exclusive access to the firmware\n");
+ 		return;
+ 	}
+-	efi_rts_work.efi_rts_id = RESET_SYSTEM;
++	efi_rts_work.efi_rts_id = EFI_RESET_SYSTEM;
+ 	__efi_call_virt(reset_system, reset_type, status, data_size, data);
+ 	up(&efi_runtime_lock);
+ }
+@@ -423,7 +423,7 @@ static efi_status_t virt_efi_update_capsule(efi_capsule_header_t **capsules,
+ 
+ 	if (down_interruptible(&efi_runtime_lock))
+ 		return EFI_ABORTED;
+-	status = efi_queue_work(UPDATE_CAPSULE, capsules, &count, &sg_list,
++	status = efi_queue_work(EFI_UPDATE_CAPSULE, capsules, &count, &sg_list,
+ 				NULL, NULL);
+ 	up(&efi_runtime_lock);
+ 	return status;
+@@ -441,7 +441,7 @@ static efi_status_t virt_efi_query_capsule_caps(efi_capsule_header_t **capsules,
+ 
+ 	if (down_interruptible(&efi_runtime_lock))
+ 		return EFI_ABORTED;
+-	status = efi_queue_work(QUERY_CAPSULE_CAPS, capsules, &count,
++	status = efi_queue_work(EFI_QUERY_CAPSULE_CAPS, capsules, &count,
+ 				max_size, reset_type, NULL);
+ 	up(&efi_runtime_lock);
+ 	return status;
+diff --git a/drivers/firmware/iscsi_ibft.c b/drivers/firmware/iscsi_ibft.c
+index 6bc8e6640d71..c51462f5aa1e 100644
+--- a/drivers/firmware/iscsi_ibft.c
++++ b/drivers/firmware/iscsi_ibft.c
+@@ -542,6 +542,7 @@ static umode_t __init ibft_check_tgt_for(void *data, int type)
+ 	case ISCSI_BOOT_TGT_NIC_ASSOC:
+ 	case ISCSI_BOOT_TGT_CHAP_TYPE:
+ 		rc = S_IRUGO;
++		break;
+ 	case ISCSI_BOOT_TGT_NAME:
+ 		if (tgt->tgt_name_len)
+ 			rc = S_IRUGO;
+diff --git a/drivers/gnss/sirf.c b/drivers/gnss/sirf.c
+index 226f6e6fe01b..8e3f6a776e02 100644
+--- a/drivers/gnss/sirf.c
++++ b/drivers/gnss/sirf.c
+@@ -310,30 +310,26 @@ static int sirf_probe(struct serdev_device *serdev)
+ 			ret = -ENODEV;
+ 			goto err_put_device;
+ 		}
++
++		ret = regulator_enable(data->vcc);
++		if (ret)
++			goto err_put_device;
++
++		/* Wait for chip to boot into hibernate mode. */
++		msleep(SIRF_BOOT_DELAY);
+ 	}
+ 
+ 	if (data->wakeup) {
+ 		ret = gpiod_to_irq(data->wakeup);
+ 		if (ret < 0)
+-			goto err_put_device;
+-
++			goto err_disable_vcc;
+ 		data->irq = ret;
+ 
+-		ret = devm_request_threaded_irq(dev, data->irq, NULL,
+-				sirf_wakeup_handler,
++		ret = request_threaded_irq(data->irq, NULL, sirf_wakeup_handler,
+ 				IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
+ 				"wakeup", data);
+ 		if (ret)
+-			goto err_put_device;
+-	}
+-
+-	if (data->on_off) {
+-		ret = regulator_enable(data->vcc);
+-		if (ret)
+-			goto err_put_device;
+-
+-		/* Wait for chip to boot into hibernate mode */
+-		msleep(SIRF_BOOT_DELAY);
++			goto err_disable_vcc;
+ 	}
+ 
+ 	if (IS_ENABLED(CONFIG_PM)) {
+@@ -342,7 +338,7 @@ static int sirf_probe(struct serdev_device *serdev)
+ 	} else {
+ 		ret = sirf_runtime_resume(dev);
+ 		if (ret < 0)
+-			goto err_disable_vcc;
++			goto err_free_irq;
+ 	}
+ 
+ 	ret = gnss_register_device(gdev);
+@@ -356,6 +352,9 @@ err_disable_rpm:
+ 		pm_runtime_disable(dev);
+ 	else
+ 		sirf_runtime_suspend(dev);
++err_free_irq:
++	if (data->wakeup)
++		free_irq(data->irq, data);
+ err_disable_vcc:
+ 	if (data->on_off)
+ 		regulator_disable(data->vcc);
+@@ -376,6 +375,9 @@ static void sirf_remove(struct serdev_device *serdev)
+ 	else
+ 		sirf_runtime_suspend(&serdev->dev);
+ 
++	if (data->wakeup)
++		free_irq(data->irq, data);
++
+ 	if (data->on_off)
+ 		regulator_disable(data->vcc);
+ 
+diff --git a/drivers/gpio/gpio-adnp.c b/drivers/gpio/gpio-adnp.c
+index 91b90c0cea73..12acdac85820 100644
+--- a/drivers/gpio/gpio-adnp.c
++++ b/drivers/gpio/gpio-adnp.c
+@@ -132,8 +132,10 @@ static int adnp_gpio_direction_input(struct gpio_chip *chip, unsigned offset)
+ 	if (err < 0)
+ 		goto out;
+ 
+-	if (err & BIT(pos))
+-		err = -EACCES;
++	if (value & BIT(pos)) {
++		err = -EPERM;
++		goto out;
++	}
+ 
+ 	err = 0;
+ 
+diff --git a/drivers/gpio/gpio-exar.c b/drivers/gpio/gpio-exar.c
+index 0ecd2369c2ca..a09d2f9ebacc 100644
+--- a/drivers/gpio/gpio-exar.c
++++ b/drivers/gpio/gpio-exar.c
+@@ -148,6 +148,8 @@ static int gpio_exar_probe(struct platform_device *pdev)
+ 	mutex_init(&exar_gpio->lock);
+ 
+ 	index = ida_simple_get(&ida_index, 0, 0, GFP_KERNEL);
++	if (index < 0)
++		goto err_destroy;
+ 
+ 	sprintf(exar_gpio->name, "exar_gpio%d", index);
+ 	exar_gpio->gpio_chip.label = exar_gpio->name;
+diff --git a/drivers/gpio/gpio-omap.c b/drivers/gpio/gpio-omap.c
+index f4e9921fa966..7f33024b6d83 100644
+--- a/drivers/gpio/gpio-omap.c
++++ b/drivers/gpio/gpio-omap.c
+@@ -883,14 +883,16 @@ static void omap_gpio_unmask_irq(struct irq_data *d)
+ 	if (trigger)
+ 		omap_set_gpio_triggering(bank, offset, trigger);
+ 
+-	/* For level-triggered GPIOs, the clearing must be done after
+-	 * the HW source is cleared, thus after the handler has run */
+-	if (bank->level_mask & BIT(offset)) {
+-		omap_set_gpio_irqenable(bank, offset, 0);
++	omap_set_gpio_irqenable(bank, offset, 1);
++
++	/*
++	 * For level-triggered GPIOs, clearing must be done after the source
++	 * is cleared, thus after the handler has run. OMAP4 needs this done
++	 * after enabling the interrupt to clear the wakeup status.
++	 */
++	if (bank->level_mask & BIT(offset))
+ 		omap_clear_gpio_irqstatus(bank, offset);
+-	}
+ 
+-	omap_set_gpio_irqenable(bank, offset, 1);
+ 	raw_spin_unlock_irqrestore(&bank->lock, flags);
+ }
+ 
+diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
+index 0dc96419efe3..d8a985fc6a5d 100644
+--- a/drivers/gpio/gpio-pca953x.c
++++ b/drivers/gpio/gpio-pca953x.c
+@@ -587,7 +587,8 @@ static int pca953x_irq_set_type(struct irq_data *d, unsigned int type)
+ 
+ static void pca953x_irq_shutdown(struct irq_data *d)
+ {
+-	struct pca953x_chip *chip = irq_data_get_irq_chip_data(d);
++	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
++	struct pca953x_chip *chip = gpiochip_get_data(gc);
+ 	u8 mask = 1 << (d->hwirq % BANK_SZ);
+ 
+ 	chip->irq_trig_raise[d->hwirq / BANK_SZ] &= ~mask;
+diff --git a/drivers/gpio/gpiolib-of.c b/drivers/gpio/gpiolib-of.c
+index a6e1891217e2..a1dd2f1c0d02 100644
+--- a/drivers/gpio/gpiolib-of.c
++++ b/drivers/gpio/gpiolib-of.c
+@@ -86,7 +86,8 @@ static void of_gpio_flags_quirks(struct device_node *np,
+ 	if (IS_ENABLED(CONFIG_REGULATOR) &&
+ 	    (of_device_is_compatible(np, "regulator-fixed") ||
+ 	     of_device_is_compatible(np, "reg-fixed-voltage") ||
+-	     of_device_is_compatible(np, "regulator-gpio"))) {
++	     (of_device_is_compatible(np, "regulator-gpio") &&
++	      strcmp(propname, "enable-gpio") == 0))) {
+ 		/*
+ 		 * The regulator GPIO handles are specified such that the
+ 		 * presence or absence of "enable-active-high" solely controls
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+index bacdaef77b6c..278dd55ff476 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+@@ -738,7 +738,7 @@ static int gmc_v9_0_allocate_vm_inv_eng(struct amdgpu_device *adev)
+ 		}
+ 
+ 		ring->vm_inv_eng = inv_eng - 1;
+-		change_bit(inv_eng - 1, (unsigned long *)(&vm_inv_engs[vmhub]));
++		vm_inv_engs[vmhub] &= ~(1 << ring->vm_inv_eng);
+ 
+ 		dev_info(adev->dev, "ring %s uses VM inv eng %u on hub %u\n",
+ 			 ring->name, ring->vm_inv_eng, ring->funcs->vmhub);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 636d14a60952..83c8a0407537 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -886,6 +886,7 @@ static void emulated_link_detect(struct dc_link *link)
+ 		return;
+ 	}
+ 
++	/* dc_sink_create returns a new reference */
+ 	link->local_sink = sink;
+ 
+ 	edid_status = dm_helpers_read_local_edid(
+@@ -952,6 +953,8 @@ static int dm_resume(void *handle)
+ 		if (aconnector->fake_enable && aconnector->dc_link->local_sink)
+ 			aconnector->fake_enable = false;
+ 
++		if (aconnector->dc_sink)
++			dc_sink_release(aconnector->dc_sink);
+ 		aconnector->dc_sink = NULL;
+ 		amdgpu_dm_update_connector_after_detect(aconnector);
+ 		mutex_unlock(&aconnector->hpd_lock);
+@@ -1061,6 +1064,8 @@ amdgpu_dm_update_connector_after_detect(struct amdgpu_dm_connector *aconnector)
+ 
+ 
+ 	sink = aconnector->dc_link->local_sink;
++	if (sink)
++		dc_sink_retain(sink);
+ 
+ 	/*
+ 	 * Edid mgmt connector gets first update only in mode_valid hook and then
+@@ -1085,21 +1090,24 @@ amdgpu_dm_update_connector_after_detect(struct amdgpu_dm_connector *aconnector)
+ 				 * to it anymore after disconnect, so on next crtc to connector
+ 				 * reshuffle by UMD we will get into unwanted dc_sink release
+ 				 */
+-				if (aconnector->dc_sink != aconnector->dc_em_sink)
+-					dc_sink_release(aconnector->dc_sink);
++				dc_sink_release(aconnector->dc_sink);
+ 			}
+ 			aconnector->dc_sink = sink;
++			dc_sink_retain(aconnector->dc_sink);
+ 			amdgpu_dm_update_freesync_caps(connector,
+ 					aconnector->edid);
+ 		} else {
+ 			amdgpu_dm_update_freesync_caps(connector, NULL);
+-			if (!aconnector->dc_sink)
++			if (!aconnector->dc_sink) {
+ 				aconnector->dc_sink = aconnector->dc_em_sink;
+-			else if (aconnector->dc_sink != aconnector->dc_em_sink)
+ 				dc_sink_retain(aconnector->dc_sink);
++			}
+ 		}
+ 
+ 		mutex_unlock(&dev->mode_config.mutex);
++
++		if (sink)
++			dc_sink_release(sink);
+ 		return;
+ 	}
+ 
+@@ -1107,8 +1115,10 @@ amdgpu_dm_update_connector_after_detect(struct amdgpu_dm_connector *aconnector)
+ 	 * TODO: temporary guard to look for proper fix
+ 	 * if this sink is MST sink, we should not do anything
+ 	 */
+-	if (sink && sink->sink_signal == SIGNAL_TYPE_DISPLAY_PORT_MST)
++	if (sink && sink->sink_signal == SIGNAL_TYPE_DISPLAY_PORT_MST) {
++		dc_sink_release(sink);
+ 		return;
++	}
+ 
+ 	if (aconnector->dc_sink == sink) {
+ 		/*
+@@ -1117,6 +1127,8 @@ amdgpu_dm_update_connector_after_detect(struct amdgpu_dm_connector *aconnector)
+ 		 */
+ 		DRM_DEBUG_DRIVER("DCHPD: connector_id=%d: dc_sink didn't change.\n",
+ 				aconnector->connector_id);
++		if (sink)
++			dc_sink_release(sink);
+ 		return;
+ 	}
+ 
+@@ -1138,6 +1150,7 @@ amdgpu_dm_update_connector_after_detect(struct amdgpu_dm_connector *aconnector)
+ 			amdgpu_dm_update_freesync_caps(connector, NULL);
+ 
+ 		aconnector->dc_sink = sink;
++		dc_sink_retain(aconnector->dc_sink);
+ 		if (sink->dc_edid.length == 0) {
+ 			aconnector->edid = NULL;
+ 			drm_dp_cec_unset_edid(&aconnector->dm_dp_aux.aux);
+@@ -1158,11 +1171,15 @@ amdgpu_dm_update_connector_after_detect(struct amdgpu_dm_connector *aconnector)
+ 		amdgpu_dm_update_freesync_caps(connector, NULL);
+ 		drm_connector_update_edid_property(connector, NULL);
+ 		aconnector->num_modes = 0;
++		dc_sink_release(aconnector->dc_sink);
+ 		aconnector->dc_sink = NULL;
+ 		aconnector->edid = NULL;
+ 	}
+ 
+ 	mutex_unlock(&dev->mode_config.mutex);
++
++	if (sink)
++		dc_sink_release(sink);
+ }
+ 
+ static void handle_hpd_irq(void *param)
+@@ -2908,6 +2925,7 @@ create_stream_for_sink(struct amdgpu_dm_connector *aconnector,
+ 		}
+ 	} else {
+ 		sink = aconnector->dc_sink;
++		dc_sink_retain(sink);
+ 	}
+ 
+ 	stream = dc_create_stream_for_sink(sink);
+@@ -2974,8 +2992,7 @@ create_stream_for_sink(struct amdgpu_dm_connector *aconnector,
+ 		stream->ignore_msa_timing_param = true;
+ 
+ finish:
+-	if (sink && sink->sink_signal == SIGNAL_TYPE_VIRTUAL && aconnector->base.force != DRM_FORCE_ON)
+-		dc_sink_release(sink);
++	dc_sink_release(sink);
+ 
+ 	return stream;
+ }
+@@ -3233,6 +3250,14 @@ static void amdgpu_dm_connector_destroy(struct drm_connector *connector)
+ 		dm->backlight_dev = NULL;
+ 	}
+ #endif
++
++	if (aconnector->dc_em_sink)
++		dc_sink_release(aconnector->dc_em_sink);
++	aconnector->dc_em_sink = NULL;
++	if (aconnector->dc_sink)
++		dc_sink_release(aconnector->dc_sink);
++	aconnector->dc_sink = NULL;
++
+ 	drm_dp_cec_unregister_connector(&aconnector->dm_dp_aux.aux);
+ 	drm_connector_unregister(connector);
+ 	drm_connector_cleanup(connector);
+@@ -3330,10 +3355,12 @@ static void create_eml_sink(struct amdgpu_dm_connector *aconnector)
+ 		(edid->extensions + 1) * EDID_LENGTH,
+ 		&init_params);
+ 
+-	if (aconnector->base.force == DRM_FORCE_ON)
++	if (aconnector->base.force == DRM_FORCE_ON) {
+ 		aconnector->dc_sink = aconnector->dc_link->local_sink ?
+ 		aconnector->dc_link->local_sink :
+ 		aconnector->dc_em_sink;
++		dc_sink_retain(aconnector->dc_sink);
++	}
+ }
+ 
+ static void handle_edid_mgmt(struct amdgpu_dm_connector *aconnector)
+@@ -4948,7 +4975,8 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
+ static void amdgpu_dm_crtc_copy_transient_flags(struct drm_crtc_state *crtc_state,
+ 						struct dc_stream_state *stream_state)
+ {
+-	stream_state->mode_changed = crtc_state->mode_changed;
++	stream_state->mode_changed =
++		crtc_state->mode_changed || crtc_state->active_changed;
+ }
+ 
+ static int amdgpu_dm_atomic_commit(struct drm_device *dev,
+@@ -4969,10 +4997,22 @@ static int amdgpu_dm_atomic_commit(struct drm_device *dev,
+ 	 */
+ 	for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
+ 		struct dm_crtc_state *dm_old_crtc_state = to_dm_crtc_state(old_crtc_state);
++		struct dm_crtc_state *dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
+ 		struct amdgpu_crtc *acrtc = to_amdgpu_crtc(crtc);
+ 
+-		if (drm_atomic_crtc_needs_modeset(new_crtc_state) && dm_old_crtc_state->stream)
++		if (drm_atomic_crtc_needs_modeset(new_crtc_state)
++		    && dm_old_crtc_state->stream) {
++			/*
++			 * CRC capture was enabled but not disabled.
++			 * Release the vblank reference.
++			 */
++			if (dm_new_crtc_state->crc_enabled) {
++				drm_crtc_vblank_put(crtc);
++				dm_new_crtc_state->crc_enabled = false;
++			}
++
+ 			manage_dm_interrupts(adev, acrtc, false);
++		}
+ 	}
+ 	/*
+ 	 * Add check here for SoC's that support hardware cursor plane, to
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
+index f088ac585978..26b651148c67 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
+@@ -66,6 +66,7 @@ int amdgpu_dm_crtc_set_crc_source(struct drm_crtc *crtc, const char *src_name)
+ {
+ 	struct dm_crtc_state *crtc_state = to_dm_crtc_state(crtc->state);
+ 	struct dc_stream_state *stream_state = crtc_state->stream;
++	bool enable;
+ 
+ 	enum amdgpu_dm_pipe_crc_source source = dm_parse_crc_source(src_name);
+ 
+@@ -80,28 +81,27 @@ int amdgpu_dm_crtc_set_crc_source(struct drm_crtc *crtc, const char *src_name)
+ 		return -EINVAL;
+ 	}
+ 
++	enable = (source == AMDGPU_DM_PIPE_CRC_SOURCE_AUTO);
++
++	if (!dc_stream_configure_crc(stream_state->ctx->dc, stream_state,
++				     enable, enable))
++		return -EINVAL;
++
+ 	/* When enabling CRC, we should also disable dithering. */
+-	if (source == AMDGPU_DM_PIPE_CRC_SOURCE_AUTO) {
+-		if (dc_stream_configure_crc(stream_state->ctx->dc,
+-					    stream_state,
+-					    true, true)) {
+-			crtc_state->crc_enabled = true;
+-			dc_stream_set_dither_option(stream_state,
+-						    DITHER_OPTION_TRUN8);
+-		}
+-		else
+-			return -EINVAL;
+-	} else {
+-		if (dc_stream_configure_crc(stream_state->ctx->dc,
+-					    stream_state,
+-					    false, false)) {
+-			crtc_state->crc_enabled = false;
+-			dc_stream_set_dither_option(stream_state,
+-						    DITHER_OPTION_DEFAULT);
+-		}
+-		else
+-			return -EINVAL;
+-	}
++	dc_stream_set_dither_option(stream_state,
++				    enable ? DITHER_OPTION_TRUN8
++					   : DITHER_OPTION_DEFAULT);
++
++	/*
++	 * Reading the CRC requires the vblank interrupt handler to be
++	 * enabled. Keep a reference until CRC capture stops.
++	 */
++	if (!crtc_state->crc_enabled && enable)
++		drm_crtc_vblank_get(crtc);
++	else if (crtc_state->crc_enabled && !enable)
++		drm_crtc_vblank_put(crtc);
++
++	crtc_state->crc_enabled = enable;
+ 
+ 	/* Reset crc_skipped on dm state */
+ 	crtc_state->crc_skip_count = 0;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index 1b0d209d8367..3b95a637b508 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -239,6 +239,7 @@ static int dm_dp_mst_get_modes(struct drm_connector *connector)
+ 			&init_params);
+ 
+ 		dc_sink->priv = aconnector;
++		/* dc_link_add_remote_sink returns a new reference */
+ 		aconnector->dc_sink = dc_sink;
+ 
+ 		if (aconnector->dc_sink)
+diff --git a/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c b/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
+index 43e4a2be0fa6..57cc11d0e9a5 100644
+--- a/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
++++ b/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
+@@ -1355,12 +1355,12 @@ void dcn_bw_update_from_pplib(struct dc *dc)
+ 	struct dm_pp_clock_levels_with_voltage fclks = {0}, dcfclks = {0};
+ 	bool res;
+ 
+-	kernel_fpu_begin();
+-
+ 	/* TODO: This is not the proper way to obtain fabric_and_dram_bandwidth, should be min(fclk, memclk) */
+ 	res = dm_pp_get_clock_levels_by_type_with_voltage(
+ 			ctx, DM_PP_CLOCK_TYPE_FCLK, &fclks);
+ 
++	kernel_fpu_begin();
++
+ 	if (res)
+ 		res = verify_clock_values(&fclks);
+ 
+@@ -1379,9 +1379,13 @@ void dcn_bw_update_from_pplib(struct dc *dc)
+ 	} else
+ 		BREAK_TO_DEBUGGER();
+ 
++	kernel_fpu_end();
++
+ 	res = dm_pp_get_clock_levels_by_type_with_voltage(
+ 			ctx, DM_PP_CLOCK_TYPE_DCFCLK, &dcfclks);
+ 
++	kernel_fpu_begin();
++
+ 	if (res)
+ 		res = verify_clock_values(&dcfclks);
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 5fd52094d459..1f92e7e8e3d3 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -1078,6 +1078,9 @@ static enum dc_status dc_commit_state_no_check(struct dc *dc, struct dc_state *c
+ 	/* pplib is notified if disp_num changed */
+ 	dc->hwss.optimize_bandwidth(dc, context);
+ 
++	for (i = 0; i < context->stream_count; i++)
++		context->streams[i]->mode_changed = false;
++
+ 	dc_release_state(dc->current_state);
+ 
+ 	dc->current_state = context;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+index b0265dbebd4c..583eb367850f 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+@@ -792,6 +792,7 @@ bool dc_link_detect(struct dc_link *link, enum dc_detect_reason reason)
+ 		sink->dongle_max_pix_clk = sink_caps.max_hdmi_pixel_clock;
+ 		sink->converter_disable_audio = converter_disable_audio;
+ 
++		/* dc_sink_create returns a new reference */
+ 		link->local_sink = sink;
+ 
+ 		edid_status = dm_helpers_read_local_edid(
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+index 41883c981789..a684b38332ac 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+@@ -2334,9 +2334,10 @@ static void dcn10_apply_ctx_for_surface(
+ 			}
+ 		}
+ 
+-		if (!pipe_ctx->plane_state &&
+-			old_pipe_ctx->plane_state &&
+-			old_pipe_ctx->stream_res.tg == tg) {
++		if ((!pipe_ctx->plane_state ||
++		     pipe_ctx->stream_res.tg != old_pipe_ctx->stream_res.tg) &&
++		    old_pipe_ctx->plane_state &&
++		    old_pipe_ctx->stream_res.tg == tg) {
+ 
+ 			dc->hwss.plane_atomic_disconnect(dc, old_pipe_ctx);
+ 			removed_pipe[i] = true;
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+index c8f5c00dd1e7..86e3fb27c125 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+@@ -3491,14 +3491,14 @@ static int smu7_get_gpu_power(struct pp_hwmgr *hwmgr, u32 *query)
+ 
+ 	smum_send_msg_to_smc(hwmgr, PPSMC_MSG_PmStatusLogStart);
+ 	cgs_write_ind_register(hwmgr->device, CGS_IND_REG__SMC,
+-							ixSMU_PM_STATUS_94, 0);
++							ixSMU_PM_STATUS_95, 0);
+ 
+ 	for (i = 0; i < 10; i++) {
+-		mdelay(1);
++		mdelay(500);
+ 		smum_send_msg_to_smc(hwmgr, PPSMC_MSG_PmStatusLogSample);
+ 		tmp = cgs_read_ind_register(hwmgr->device,
+ 						CGS_IND_REG__SMC,
+-						ixSMU_PM_STATUS_94);
++						ixSMU_PM_STATUS_95);
+ 		if (tmp != 0)
+ 			break;
+ 	}
+diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
+index f4290f6b0c38..2323ba9310d9 100644
+--- a/drivers/gpu/drm/drm_atomic_helper.c
++++ b/drivers/gpu/drm/drm_atomic_helper.c
+@@ -1611,6 +1611,15 @@ int drm_atomic_helper_async_check(struct drm_device *dev,
+ 	if (old_plane_state->fb != new_plane_state->fb)
+ 		return -EINVAL;
+ 
++	/*
++	 * FIXME: Since prepare_fb and cleanup_fb are always called on
++	 * the new_plane_state for async updates we need to block framebuffer
++	 * changes. This prevents use of a fb that's been cleaned up and
++	 * double cleanups from occuring.
++	 */
++	if (old_plane_state->fb != new_plane_state->fb)
++		return -EINVAL;
++
+ 	funcs = plane->helper_private;
+ 	if (!funcs->atomic_async_update)
+ 		return -EINVAL;
+diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
+index 529414556962..1a244c53252c 100644
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -3286,6 +3286,7 @@ static int drm_dp_mst_i2c_xfer(struct i2c_adapter *adapter, struct i2c_msg *msgs
+ 		msg.u.i2c_read.transactions[i].i2c_dev_id = msgs[i].addr;
+ 		msg.u.i2c_read.transactions[i].num_bytes = msgs[i].len;
+ 		msg.u.i2c_read.transactions[i].bytes = msgs[i].buf;
++		msg.u.i2c_read.transactions[i].no_stop_bit = !(msgs[i].flags & I2C_M_STOP);
+ 	}
+ 	msg.u.i2c_read.read_i2c_device_id = msgs[num - 1].addr;
+ 	msg.u.i2c_read.num_bytes_read = msgs[num - 1].len;
+diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
+index d73703a695e8..edd8cb497f3b 100644
+--- a/drivers/gpu/drm/drm_fb_helper.c
++++ b/drivers/gpu/drm/drm_fb_helper.c
+@@ -2891,7 +2891,7 @@ int drm_fb_helper_fbdev_setup(struct drm_device *dev,
+ 	return 0;
+ 
+ err_drm_fb_helper_fini:
+-	drm_fb_helper_fini(fb_helper);
++	drm_fb_helper_fbdev_teardown(dev);
+ 
+ 	return ret;
+ }
+@@ -3170,9 +3170,7 @@ static void drm_fbdev_client_unregister(struct drm_client_dev *client)
+ 
+ static int drm_fbdev_client_restore(struct drm_client_dev *client)
+ {
+-	struct drm_fb_helper *fb_helper = drm_fb_helper_from_client(client);
+-
+-	drm_fb_helper_restore_fbdev_mode_unlocked(fb_helper);
++	drm_fb_helper_lastclose(client->dev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/drm_mode_object.c b/drivers/gpu/drm/drm_mode_object.c
+index 004191d01772..15b919f90c5a 100644
+--- a/drivers/gpu/drm/drm_mode_object.c
++++ b/drivers/gpu/drm/drm_mode_object.c
+@@ -465,6 +465,7 @@ static int set_property_atomic(struct drm_mode_object *obj,
+ 
+ 	drm_modeset_acquire_init(&ctx, 0);
+ 	state->acquire_ctx = &ctx;
++
+ retry:
+ 	if (prop == state->dev->mode_config.dpms_property) {
+ 		if (obj->type != DRM_MODE_OBJECT_CONNECTOR) {
+diff --git a/drivers/gpu/drm/drm_plane.c b/drivers/gpu/drm/drm_plane.c
+index 5f650d8fc66b..4cfb56893b7f 100644
+--- a/drivers/gpu/drm/drm_plane.c
++++ b/drivers/gpu/drm/drm_plane.c
+@@ -220,6 +220,9 @@ int drm_universal_plane_init(struct drm_device *dev, struct drm_plane *plane,
+ 			format_modifier_count++;
+ 	}
+ 
++	if (format_modifier_count)
++		config->allow_fb_modifiers = true;
++
+ 	plane->modifier_count = format_modifier_count;
+ 	plane->modifiers = kmalloc_array(format_modifier_count,
+ 					 sizeof(format_modifiers[0]),
+diff --git a/drivers/gpu/drm/i915/gvt/cmd_parser.c b/drivers/gpu/drm/i915/gvt/cmd_parser.c
+index 77ae634eb11c..bd95fd6b4ac8 100644
+--- a/drivers/gpu/drm/i915/gvt/cmd_parser.c
++++ b/drivers/gpu/drm/i915/gvt/cmd_parser.c
+@@ -1446,7 +1446,7 @@ static inline int cmd_address_audit(struct parser_exec_state *s,
+ 	}
+ 
+ 	if (index_mode)	{
+-		if (guest_gma >= I915_GTT_PAGE_SIZE / sizeof(u64)) {
++		if (guest_gma >= I915_GTT_PAGE_SIZE) {
+ 			ret = -EFAULT;
+ 			goto err;
+ 		}
+diff --git a/drivers/gpu/drm/i915/gvt/gtt.c b/drivers/gpu/drm/i915/gvt/gtt.c
+index c7103dd2d8d5..563ab8590061 100644
+--- a/drivers/gpu/drm/i915/gvt/gtt.c
++++ b/drivers/gpu/drm/i915/gvt/gtt.c
+@@ -1942,7 +1942,7 @@ void _intel_vgpu_mm_release(struct kref *mm_ref)
+  */
+ void intel_vgpu_unpin_mm(struct intel_vgpu_mm *mm)
+ {
+-	atomic_dec(&mm->pincount);
++	atomic_dec_if_positive(&mm->pincount);
+ }
+ 
+ /**
+diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
+index 55bb7885e228..8fff49affc11 100644
+--- a/drivers/gpu/drm/i915/gvt/scheduler.c
++++ b/drivers/gpu/drm/i915/gvt/scheduler.c
+@@ -1475,8 +1475,9 @@ intel_vgpu_create_workload(struct intel_vgpu *vgpu, int ring_id,
+ 		intel_runtime_pm_put(dev_priv);
+ 	}
+ 
+-	if (ret && (vgpu_is_vm_unhealthy(ret))) {
+-		enter_failsafe_mode(vgpu, GVT_FAILSAFE_GUEST_ERR);
++	if (ret) {
++		if (vgpu_is_vm_unhealthy(ret))
++			enter_failsafe_mode(vgpu, GVT_FAILSAFE_GUEST_ERR);
+ 		intel_vgpu_destroy_workload(workload);
+ 		return ERR_PTR(ret);
+ 	}
+diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
+index b1c31967194b..489c1e656ff6 100644
+--- a/drivers/gpu/drm/i915/i915_drv.h
++++ b/drivers/gpu/drm/i915/i915_drv.h
+@@ -2293,7 +2293,8 @@ intel_info(const struct drm_i915_private *dev_priv)
+ 				 INTEL_DEVID(dev_priv) == 0x5915 || \
+ 				 INTEL_DEVID(dev_priv) == 0x591E)
+ #define IS_AML_ULX(dev_priv)	(INTEL_DEVID(dev_priv) == 0x591C || \
+-				 INTEL_DEVID(dev_priv) == 0x87C0)
++				 INTEL_DEVID(dev_priv) == 0x87C0 || \
++				 INTEL_DEVID(dev_priv) == 0x87CA)
+ #define IS_SKL_GT2(dev_priv)	(IS_SKYLAKE(dev_priv) && \
+ 				 (dev_priv)->info.gt == 2)
+ #define IS_SKL_GT3(dev_priv)	(IS_SKYLAKE(dev_priv) && \
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index 067054cf4a86..60bed3f27775 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -9205,7 +9205,7 @@ enum skl_power_gate {
+ #define TRANS_DDI_FUNC_CTL2(tran)	_MMIO_TRANS2(tran, \
+ 						     _TRANS_DDI_FUNC_CTL2_A)
+ #define  PORT_SYNC_MODE_ENABLE			(1 << 4)
+-#define  PORT_SYNC_MODE_MASTER_SELECT(x)	((x) < 0)
++#define  PORT_SYNC_MODE_MASTER_SELECT(x)	((x) << 0)
+ #define  PORT_SYNC_MODE_MASTER_SELECT_MASK	(0x7 << 0)
+ #define  PORT_SYNC_MODE_MASTER_SELECT_SHIFT	0
+ 
+diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
+index 22a74608c6e4..dcd1df5322e8 100644
+--- a/drivers/gpu/drm/i915/intel_dp.c
++++ b/drivers/gpu/drm/i915/intel_dp.c
+@@ -1845,42 +1845,6 @@ intel_dp_compute_link_config_wide(struct intel_dp *intel_dp,
+ 	return false;
+ }
+ 
+-/* Optimize link config in order: max bpp, min lanes, min clock */
+-static bool
+-intel_dp_compute_link_config_fast(struct intel_dp *intel_dp,
+-				  struct intel_crtc_state *pipe_config,
+-				  const struct link_config_limits *limits)
+-{
+-	struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
+-	int bpp, clock, lane_count;
+-	int mode_rate, link_clock, link_avail;
+-
+-	for (bpp = limits->max_bpp; bpp >= limits->min_bpp; bpp -= 2 * 3) {
+-		mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
+-						   bpp);
+-
+-		for (lane_count = limits->min_lane_count;
+-		     lane_count <= limits->max_lane_count;
+-		     lane_count <<= 1) {
+-			for (clock = limits->min_clock; clock <= limits->max_clock; clock++) {
+-				link_clock = intel_dp->common_rates[clock];
+-				link_avail = intel_dp_max_data_rate(link_clock,
+-								    lane_count);
+-
+-				if (mode_rate <= link_avail) {
+-					pipe_config->lane_count = lane_count;
+-					pipe_config->pipe_bpp = bpp;
+-					pipe_config->port_clock = link_clock;
+-
+-					return true;
+-				}
+-			}
+-		}
+-	}
+-
+-	return false;
+-}
+-
+ static int intel_dp_dsc_compute_bpp(struct intel_dp *intel_dp, u8 dsc_max_bpc)
+ {
+ 	int i, num_bpc;
+@@ -2013,15 +1977,13 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
+ 	limits.min_bpp = 6 * 3;
+ 	limits.max_bpp = intel_dp_compute_bpp(intel_dp, pipe_config);
+ 
+-	if (intel_dp_is_edp(intel_dp) && intel_dp->edp_dpcd[0] < DP_EDP_14) {
++	if (intel_dp_is_edp(intel_dp)) {
+ 		/*
+ 		 * Use the maximum clock and number of lanes the eDP panel
+-		 * advertizes being capable of. The eDP 1.3 and earlier panels
+-		 * are generally designed to support only a single clock and
+-		 * lane configuration, and typically these values correspond to
+-		 * the native resolution of the panel. With eDP 1.4 rate select
+-		 * and DSC, this is decreasingly the case, and we need to be
+-		 * able to select less than maximum link config.
++		 * advertizes being capable of. The panels are generally
++		 * designed to support only a single clock and lane
++		 * configuration, and typically these values correspond to the
++		 * native resolution of the panel.
+ 		 */
+ 		limits.min_lane_count = limits.max_lane_count;
+ 		limits.min_clock = limits.max_clock;
+@@ -2035,22 +1997,11 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
+ 		      intel_dp->common_rates[limits.max_clock],
+ 		      limits.max_bpp, adjusted_mode->crtc_clock);
+ 
+-	if (intel_dp_is_edp(intel_dp))
+-		/*
+-		 * Optimize for fast and narrow. eDP 1.3 section 3.3 and eDP 1.4
+-		 * section A.1: "It is recommended that the minimum number of
+-		 * lanes be used, using the minimum link rate allowed for that
+-		 * lane configuration."
+-		 *
+-		 * Note that we use the max clock and lane count for eDP 1.3 and
+-		 * earlier, and fast vs. wide is irrelevant.
+-		 */
+-		ret = intel_dp_compute_link_config_fast(intel_dp, pipe_config,
+-							&limits);
+-	else
+-		/* Optimize for slow and wide. */
+-		ret = intel_dp_compute_link_config_wide(intel_dp, pipe_config,
+-							&limits);
++	/*
++	 * Optimize for slow and wide. This is the place to add alternative
++	 * optimization policy.
++	 */
++	ret = intel_dp_compute_link_config_wide(intel_dp, pipe_config, &limits);
+ 
+ 	/* enable compression if the mode doesn't fit available BW */
+ 	if (!ret) {
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
+index cb307a2abf06..7316b4ab1b85 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
+@@ -23,11 +23,14 @@ struct dpu_mdss {
+ 	struct dpu_irq_controller irq_controller;
+ };
+ 
+-static irqreturn_t dpu_mdss_irq(int irq, void *arg)
++static void dpu_mdss_irq(struct irq_desc *desc)
+ {
+-	struct dpu_mdss *dpu_mdss = arg;
++	struct dpu_mdss *dpu_mdss = irq_desc_get_handler_data(desc);
++	struct irq_chip *chip = irq_desc_get_chip(desc);
+ 	u32 interrupts;
+ 
++	chained_irq_enter(chip, desc);
++
+ 	interrupts = readl_relaxed(dpu_mdss->mmio + HW_INTR_STATUS);
+ 
+ 	while (interrupts) {
+@@ -39,20 +42,20 @@ static irqreturn_t dpu_mdss_irq(int irq, void *arg)
+ 					   hwirq);
+ 		if (mapping == 0) {
+ 			DRM_ERROR("couldn't find irq mapping for %lu\n", hwirq);
+-			return IRQ_NONE;
++			break;
+ 		}
+ 
+ 		rc = generic_handle_irq(mapping);
+ 		if (rc < 0) {
+ 			DRM_ERROR("handle irq fail: irq=%lu mapping=%u rc=%d\n",
+ 				  hwirq, mapping, rc);
+-			return IRQ_NONE;
++			break;
+ 		}
+ 
+ 		interrupts &= ~(1 << hwirq);
+ 	}
+ 
+-	return IRQ_HANDLED;
++	chained_irq_exit(chip, desc);
+ }
+ 
+ static void dpu_mdss_irq_mask(struct irq_data *irqd)
+@@ -83,16 +86,16 @@ static struct irq_chip dpu_mdss_irq_chip = {
+ 	.irq_unmask = dpu_mdss_irq_unmask,
+ };
+ 
++static struct lock_class_key dpu_mdss_lock_key, dpu_mdss_request_key;
++
+ static int dpu_mdss_irqdomain_map(struct irq_domain *domain,
+ 		unsigned int irq, irq_hw_number_t hwirq)
+ {
+ 	struct dpu_mdss *dpu_mdss = domain->host_data;
+-	int ret;
+ 
++	irq_set_lockdep_class(irq, &dpu_mdss_lock_key, &dpu_mdss_request_key);
+ 	irq_set_chip_and_handler(irq, &dpu_mdss_irq_chip, handle_level_irq);
+-	ret = irq_set_chip_data(irq, dpu_mdss);
+-
+-	return ret;
++	return irq_set_chip_data(irq, dpu_mdss);
+ }
+ 
+ static const struct irq_domain_ops dpu_mdss_irqdomain_ops = {
+@@ -159,11 +162,13 @@ static void dpu_mdss_destroy(struct drm_device *dev)
+ 	struct msm_drm_private *priv = dev->dev_private;
+ 	struct dpu_mdss *dpu_mdss = to_dpu_mdss(priv->mdss);
+ 	struct dss_module_power *mp = &dpu_mdss->mp;
++	int irq;
+ 
+ 	pm_runtime_suspend(dev->dev);
+ 	pm_runtime_disable(dev->dev);
+ 	_dpu_mdss_irq_domain_fini(dpu_mdss);
+-	free_irq(platform_get_irq(pdev, 0), dpu_mdss);
++	irq = platform_get_irq(pdev, 0);
++	irq_set_chained_handler_and_data(irq, NULL, NULL);
+ 	msm_dss_put_clk(mp->clk_config, mp->num_clk);
+ 	devm_kfree(&pdev->dev, mp->clk_config);
+ 
+@@ -187,6 +192,7 @@ int dpu_mdss_init(struct drm_device *dev)
+ 	struct dpu_mdss *dpu_mdss;
+ 	struct dss_module_power *mp;
+ 	int ret = 0;
++	int irq;
+ 
+ 	dpu_mdss = devm_kzalloc(dev->dev, sizeof(*dpu_mdss), GFP_KERNEL);
+ 	if (!dpu_mdss)
+@@ -219,12 +225,12 @@ int dpu_mdss_init(struct drm_device *dev)
+ 	if (ret)
+ 		goto irq_domain_error;
+ 
+-	ret = request_irq(platform_get_irq(pdev, 0),
+-			dpu_mdss_irq, 0, "dpu_mdss_isr", dpu_mdss);
+-	if (ret) {
+-		DPU_ERROR("failed to init irq: %d\n", ret);
++	irq = platform_get_irq(pdev, 0);
++	if (irq < 0)
+ 		goto irq_error;
+-	}
++
++	irq_set_chained_handler_and_data(irq, dpu_mdss_irq,
++					 dpu_mdss);
+ 
+ 	pm_runtime_enable(dev->dev);
+ 
+diff --git a/drivers/gpu/drm/nouveau/dispnv04/tvnv17.c b/drivers/gpu/drm/nouveau/dispnv04/tvnv17.c
+index 6a4ca139cf5d..8fd8124d72ba 100644
+--- a/drivers/gpu/drm/nouveau/dispnv04/tvnv17.c
++++ b/drivers/gpu/drm/nouveau/dispnv04/tvnv17.c
+@@ -750,7 +750,9 @@ static int nv17_tv_set_property(struct drm_encoder *encoder,
+ 		/* Disable the crtc to ensure a full modeset is
+ 		 * performed whenever it's turned on again. */
+ 		if (crtc)
+-			drm_crtc_force_disable(crtc);
++			drm_crtc_helper_set_mode(crtc, &crtc->mode,
++						 crtc->x, crtc->y,
++						 crtc->primary->fb);
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/radeon/evergreen_cs.c b/drivers/gpu/drm/radeon/evergreen_cs.c
+index f471537c852f..1e14c6921454 100644
+--- a/drivers/gpu/drm/radeon/evergreen_cs.c
++++ b/drivers/gpu/drm/radeon/evergreen_cs.c
+@@ -1299,6 +1299,7 @@ static int evergreen_cs_handle_reg(struct radeon_cs_parser *p, u32 reg, u32 idx)
+ 			return -EINVAL;
+ 		}
+ 		ib[idx] += (u32)((reloc->gpu_offset >> 8) & 0xffffffff);
++		break;
+ 	case CB_TARGET_MASK:
+ 		track->cb_target_mask = radeon_get_ib_value(p, idx);
+ 		track->cb_dirty = true;
+diff --git a/drivers/gpu/drm/rcar-du/rcar_du_kms.c b/drivers/gpu/drm/rcar-du/rcar_du_kms.c
+index 9c7007d45408..f9a90ff24e6d 100644
+--- a/drivers/gpu/drm/rcar-du/rcar_du_kms.c
++++ b/drivers/gpu/drm/rcar-du/rcar_du_kms.c
+@@ -331,6 +331,7 @@ static int rcar_du_encoders_init_one(struct rcar_du_device *rcdu,
+ 		dev_dbg(rcdu->dev,
+ 			"connected entity %pOF is disabled, skipping\n",
+ 			entity);
++		of_node_put(entity);
+ 		return -ENODEV;
+ 	}
+ 
+@@ -366,6 +367,7 @@ static int rcar_du_encoders_init_one(struct rcar_du_device *rcdu,
+ 		dev_warn(rcdu->dev,
+ 			 "no encoder found for endpoint %pOF, skipping\n",
+ 			 ep->local_node);
++		of_node_put(entity);
+ 		return -ENODEV;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+index fb70fb486fbf..cdbb47566cac 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+@@ -511,6 +511,18 @@ static void vop_core_clks_disable(struct vop *vop)
+ 	clk_disable(vop->hclk);
+ }
+ 
++static void vop_win_disable(struct vop *vop, const struct vop_win_data *win)
++{
++	if (win->phy->scl && win->phy->scl->ext) {
++		VOP_SCL_SET_EXT(vop, win, yrgb_hor_scl_mode, SCALE_NONE);
++		VOP_SCL_SET_EXT(vop, win, yrgb_ver_scl_mode, SCALE_NONE);
++		VOP_SCL_SET_EXT(vop, win, cbcr_hor_scl_mode, SCALE_NONE);
++		VOP_SCL_SET_EXT(vop, win, cbcr_ver_scl_mode, SCALE_NONE);
++	}
++
++	VOP_WIN_SET(vop, win, enable, 0);
++}
++
+ static int vop_enable(struct drm_crtc *crtc)
+ {
+ 	struct vop *vop = to_vop(crtc);
+@@ -556,7 +568,7 @@ static int vop_enable(struct drm_crtc *crtc)
+ 		struct vop_win *vop_win = &vop->win[i];
+ 		const struct vop_win_data *win = vop_win->data;
+ 
+-		VOP_WIN_SET(vop, win, enable, 0);
++		vop_win_disable(vop, win);
+ 	}
+ 	spin_unlock(&vop->reg_lock);
+ 
+@@ -700,7 +712,7 @@ static void vop_plane_atomic_disable(struct drm_plane *plane,
+ 
+ 	spin_lock(&vop->reg_lock);
+ 
+-	VOP_WIN_SET(vop, win, enable, 0);
++	vop_win_disable(vop, win);
+ 
+ 	spin_unlock(&vop->reg_lock);
+ }
+@@ -1476,7 +1488,7 @@ static int vop_initial(struct vop *vop)
+ 		int channel = i * 2 + 1;
+ 
+ 		VOP_WIN_SET(vop, win, channel, (channel + 1) << 4 | channel);
+-		VOP_WIN_SET(vop, win, enable, 0);
++		vop_win_disable(vop, win);
+ 		VOP_WIN_SET(vop, win, gate, 1);
+ 	}
+ 
+diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
+index e2942c9a11a7..35ddbec1375a 100644
+--- a/drivers/gpu/drm/scheduler/sched_entity.c
++++ b/drivers/gpu/drm/scheduler/sched_entity.c
+@@ -52,12 +52,12 @@ int drm_sched_entity_init(struct drm_sched_entity *entity,
+ {
+ 	int i;
+ 
+-	if (!(entity && rq_list && num_rq_list > 0 && rq_list[0]))
++	if (!(entity && rq_list && (num_rq_list == 0 || rq_list[0])))
+ 		return -EINVAL;
+ 
+ 	memset(entity, 0, sizeof(struct drm_sched_entity));
+ 	INIT_LIST_HEAD(&entity->list);
+-	entity->rq = rq_list[0];
++	entity->rq = NULL;
+ 	entity->guilty = guilty;
+ 	entity->num_rq_list = num_rq_list;
+ 	entity->rq_list = kcalloc(num_rq_list, sizeof(struct drm_sched_rq *),
+@@ -67,6 +67,10 @@ int drm_sched_entity_init(struct drm_sched_entity *entity,
+ 
+ 	for (i = 0; i < num_rq_list; ++i)
+ 		entity->rq_list[i] = rq_list[i];
++
++	if (num_rq_list)
++		entity->rq = rq_list[0];
++
+ 	entity->last_scheduled = NULL;
+ 
+ 	spin_lock_init(&entity->rq_lock);
+@@ -165,6 +169,9 @@ long drm_sched_entity_flush(struct drm_sched_entity *entity, long timeout)
+ 	struct task_struct *last_user;
+ 	long ret = timeout;
+ 
++	if (!entity->rq)
++		return 0;
++
+ 	sched = entity->rq->sched;
+ 	/**
+ 	 * The client will not queue more IBs during this fini, consume existing
+@@ -264,20 +271,24 @@ static void drm_sched_entity_kill_jobs(struct drm_sched_entity *entity)
+  */
+ void drm_sched_entity_fini(struct drm_sched_entity *entity)
+ {
+-	struct drm_gpu_scheduler *sched;
++	struct drm_gpu_scheduler *sched = NULL;
+ 
+-	sched = entity->rq->sched;
+-	drm_sched_rq_remove_entity(entity->rq, entity);
++	if (entity->rq) {
++		sched = entity->rq->sched;
++		drm_sched_rq_remove_entity(entity->rq, entity);
++	}
+ 
+ 	/* Consumption of existing IBs wasn't completed. Forcefully
+ 	 * remove them here.
+ 	 */
+ 	if (spsc_queue_peek(&entity->job_queue)) {
+-		/* Park the kernel for a moment to make sure it isn't processing
+-		 * our enity.
+-		 */
+-		kthread_park(sched->thread);
+-		kthread_unpark(sched->thread);
++		if (sched) {
++			/* Park the kernel for a moment to make sure it isn't processing
++			 * our enity.
++			 */
++			kthread_park(sched->thread);
++			kthread_unpark(sched->thread);
++		}
+ 		if (entity->dependency) {
+ 			dma_fence_remove_callback(entity->dependency,
+ 						  &entity->cb);
+@@ -362,9 +373,11 @@ void drm_sched_entity_set_priority(struct drm_sched_entity *entity,
+ 	for (i = 0; i < entity->num_rq_list; ++i)
+ 		drm_sched_entity_set_rq_priority(&entity->rq_list[i], priority);
+ 
+-	drm_sched_rq_remove_entity(entity->rq, entity);
+-	drm_sched_entity_set_rq_priority(&entity->rq, priority);
+-	drm_sched_rq_add_entity(entity->rq, entity);
++	if (entity->rq) {
++		drm_sched_rq_remove_entity(entity->rq, entity);
++		drm_sched_entity_set_rq_priority(&entity->rq, priority);
++		drm_sched_rq_add_entity(entity->rq, entity);
++	}
+ 
+ 	spin_unlock(&entity->rq_lock);
+ }
+diff --git a/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c b/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c
+index dc47720c99ba..39d8509d96a0 100644
+--- a/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c
++++ b/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c
+@@ -48,8 +48,13 @@ static enum drm_mode_status
+ sun8i_dw_hdmi_mode_valid_h6(struct drm_connector *connector,
+ 			    const struct drm_display_mode *mode)
+ {
+-	/* This is max for HDMI 2.0b (4K@60Hz) */
+-	if (mode->clock > 594000)
++	/*
++	 * Controller support maximum of 594 MHz, which correlates to
++	 * 4K@60Hz 4:4:4 or RGB. However, for frequencies greater than
++	 * 340 MHz scrambling has to be enabled. Because scrambling is
++	 * not yet implemented, just limit to 340 MHz for now.
++	 */
++	if (mode->clock > 340000)
+ 		return MODE_CLOCK_HIGH;
+ 
+ 	return MODE_OK;
+diff --git a/drivers/gpu/drm/udl/udl_drv.c b/drivers/gpu/drm/udl/udl_drv.c
+index a63e3011e971..bd4f0b88bbd7 100644
+--- a/drivers/gpu/drm/udl/udl_drv.c
++++ b/drivers/gpu/drm/udl/udl_drv.c
+@@ -51,6 +51,7 @@ static struct drm_driver driver = {
+ 	.driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_PRIME,
+ 	.load = udl_driver_load,
+ 	.unload = udl_driver_unload,
++	.release = udl_driver_release,
+ 
+ 	/* gem hooks */
+ 	.gem_free_object_unlocked = udl_gem_free_object,
+diff --git a/drivers/gpu/drm/udl/udl_drv.h b/drivers/gpu/drm/udl/udl_drv.h
+index e9e9b1ff678e..4ae67d882eae 100644
+--- a/drivers/gpu/drm/udl/udl_drv.h
++++ b/drivers/gpu/drm/udl/udl_drv.h
+@@ -104,6 +104,7 @@ void udl_urb_completion(struct urb *urb);
+ 
+ int udl_driver_load(struct drm_device *dev, unsigned long flags);
+ void udl_driver_unload(struct drm_device *dev);
++void udl_driver_release(struct drm_device *dev);
+ 
+ int udl_fbdev_init(struct drm_device *dev);
+ void udl_fbdev_cleanup(struct drm_device *dev);
+diff --git a/drivers/gpu/drm/udl/udl_main.c b/drivers/gpu/drm/udl/udl_main.c
+index 1b014d92855b..19055dda3140 100644
+--- a/drivers/gpu/drm/udl/udl_main.c
++++ b/drivers/gpu/drm/udl/udl_main.c
+@@ -378,6 +378,12 @@ void udl_driver_unload(struct drm_device *dev)
+ 		udl_free_urb_list(dev);
+ 
+ 	udl_fbdev_cleanup(dev);
+-	udl_modeset_cleanup(dev);
+ 	kfree(udl);
+ }
++
++void udl_driver_release(struct drm_device *dev)
++{
++	udl_modeset_cleanup(dev);
++	drm_dev_fini(dev);
++	kfree(dev);
++}
+diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
+index 5930facd6d2d..11a8f99ba18c 100644
+--- a/drivers/gpu/drm/vgem/vgem_drv.c
++++ b/drivers/gpu/drm/vgem/vgem_drv.c
+@@ -191,13 +191,9 @@ static struct drm_gem_object *vgem_gem_create(struct drm_device *dev,
+ 	ret = drm_gem_handle_create(file, &obj->base, handle);
+ 	drm_gem_object_put_unlocked(&obj->base);
+ 	if (ret)
+-		goto err;
++		return ERR_PTR(ret);
+ 
+ 	return &obj->base;
+-
+-err:
+-	__vgem_gem_destroy(obj);
+-	return ERR_PTR(ret);
+ }
+ 
+ static int vgem_gem_dumb_create(struct drm_file *file, struct drm_device *dev,
+diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
+index f39a183d59c2..e7e946035027 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_object.c
++++ b/drivers/gpu/drm/virtio/virtgpu_object.c
+@@ -28,10 +28,21 @@
+ static int virtio_gpu_resource_id_get(struct virtio_gpu_device *vgdev,
+ 				       uint32_t *resid)
+ {
++#if 0
+ 	int handle = ida_alloc(&vgdev->resource_ida, GFP_KERNEL);
+ 
+ 	if (handle < 0)
+ 		return handle;
++#else
++	static int handle;
++
++	/*
++	 * FIXME: dirty hack to avoid re-using IDs, virglrenderer
++	 * can't deal with that.  Needs fixing in virglrenderer, also
++	 * should figure out a better way to handle that in the guest.
++	 */
++	handle++;
++#endif
+ 
+ 	*resid = handle + 1;
+ 	return 0;
+@@ -39,7 +50,9 @@ static int virtio_gpu_resource_id_get(struct virtio_gpu_device *vgdev,
+ 
+ static void virtio_gpu_resource_id_put(struct virtio_gpu_device *vgdev, uint32_t id)
+ {
++#if 0
+ 	ida_free(&vgdev->resource_ida, id - 1);
++#endif
+ }
+ 
+ static void virtio_gpu_ttm_bo_destroy(struct ttm_buffer_object *tbo)
+diff --git a/drivers/gpu/drm/vkms/vkms_crtc.c b/drivers/gpu/drm/vkms/vkms_crtc.c
+index eb56ee893761..1054f535178a 100644
+--- a/drivers/gpu/drm/vkms/vkms_crtc.c
++++ b/drivers/gpu/drm/vkms/vkms_crtc.c
+@@ -4,13 +4,17 @@
+ #include <drm/drm_atomic_helper.h>
+ #include <drm/drm_crtc_helper.h>
+ 
+-static void _vblank_handle(struct vkms_output *output)
++static enum hrtimer_restart vkms_vblank_simulate(struct hrtimer *timer)
+ {
++	struct vkms_output *output = container_of(timer, struct vkms_output,
++						  vblank_hrtimer);
+ 	struct drm_crtc *crtc = &output->crtc;
+ 	struct vkms_crtc_state *state = to_vkms_crtc_state(crtc->state);
++	int ret_overrun;
+ 	bool ret;
+ 
+ 	spin_lock(&output->lock);
++
+ 	ret = drm_crtc_handle_vblank(crtc);
+ 	if (!ret)
+ 		DRM_ERROR("vkms failure on handling vblank");
+@@ -31,19 +35,9 @@ static void _vblank_handle(struct vkms_output *output)
+ 			DRM_WARN("failed to queue vkms_crc_work_handle");
+ 	}
+ 
+-	spin_unlock(&output->lock);
+-}
+-
+-static enum hrtimer_restart vkms_vblank_simulate(struct hrtimer *timer)
+-{
+-	struct vkms_output *output = container_of(timer, struct vkms_output,
+-						  vblank_hrtimer);
+-	int ret_overrun;
+-
+-	_vblank_handle(output);
+-
+ 	ret_overrun = hrtimer_forward_now(&output->vblank_hrtimer,
+ 					  output->period_ns);
++	spin_unlock(&output->lock);
+ 
+ 	return HRTIMER_RESTART;
+ }
+@@ -81,6 +75,9 @@ bool vkms_get_vblank_timestamp(struct drm_device *dev, unsigned int pipe,
+ 
+ 	*vblank_time = output->vblank_hrtimer.node.expires;
+ 
++	if (!in_vblank_irq)
++		*vblank_time -= output->period_ns;
++
+ 	return true;
+ }
+ 
+@@ -98,6 +95,7 @@ static void vkms_atomic_crtc_reset(struct drm_crtc *crtc)
+ 	vkms_state = kzalloc(sizeof(*vkms_state), GFP_KERNEL);
+ 	if (!vkms_state)
+ 		return;
++	INIT_WORK(&vkms_state->crc_work, vkms_crc_work_handle);
+ 
+ 	crtc->state = &vkms_state->base;
+ 	crtc->state->crtc = crtc;
+diff --git a/drivers/gpu/drm/vkms/vkms_gem.c b/drivers/gpu/drm/vkms/vkms_gem.c
+index 138b0bb325cf..69048e73377d 100644
+--- a/drivers/gpu/drm/vkms/vkms_gem.c
++++ b/drivers/gpu/drm/vkms/vkms_gem.c
+@@ -111,11 +111,8 @@ struct drm_gem_object *vkms_gem_create(struct drm_device *dev,
+ 
+ 	ret = drm_gem_handle_create(file, &obj->gem, handle);
+ 	drm_gem_object_put_unlocked(&obj->gem);
+-	if (ret) {
+-		drm_gem_object_release(&obj->gem);
+-		kfree(obj);
++	if (ret)
+ 		return ERR_PTR(ret);
+-	}
+ 
+ 	return &obj->gem;
+ }
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c b/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
+index b913a56f3426..2a9112515f46 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
+@@ -564,11 +564,9 @@ static int vmw_fb_set_par(struct fb_info *info)
+ 		0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ 		DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_PVSYNC)
+ 	};
+-	struct drm_display_mode *old_mode;
+ 	struct drm_display_mode *mode;
+ 	int ret;
+ 
+-	old_mode = par->set_mode;
+ 	mode = drm_mode_duplicate(vmw_priv->dev, &new_mode);
+ 	if (!mode) {
+ 		DRM_ERROR("Could not create new fb mode.\n");
+@@ -579,11 +577,7 @@ static int vmw_fb_set_par(struct fb_info *info)
+ 	mode->vdisplay = var->yres;
+ 	vmw_guess_mode_timing(mode);
+ 
+-	if (old_mode && drm_mode_equal(old_mode, mode)) {
+-		drm_mode_destroy(vmw_priv->dev, mode);
+-		mode = old_mode;
+-		old_mode = NULL;
+-	} else if (!vmw_kms_validate_mode_vram(vmw_priv,
++	if (!vmw_kms_validate_mode_vram(vmw_priv,
+ 					mode->hdisplay *
+ 					DIV_ROUND_UP(var->bits_per_pixel, 8),
+ 					mode->vdisplay)) {
+@@ -620,8 +614,8 @@ static int vmw_fb_set_par(struct fb_info *info)
+ 	schedule_delayed_work(&par->local_work, 0);
+ 
+ out_unlock:
+-	if (old_mode)
+-		drm_mode_destroy(vmw_priv->dev, old_mode);
++	if (par->set_mode)
++		drm_mode_destroy(vmw_priv->dev, par->set_mode);
+ 	par->set_mode = mode;
+ 
+ 	mutex_unlock(&par->bo_mutex);
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c b/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
+index b93c558dd86e..7da752ca1c34 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
+@@ -57,7 +57,7 @@ static int vmw_gmrid_man_get_node(struct ttm_mem_type_manager *man,
+ 
+ 	id = ida_alloc_max(&gman->gmr_ida, gman->max_gmr_ids - 1, GFP_KERNEL);
+ 	if (id < 0)
+-		return id;
++		return (id != -ENOMEM ? 0 : id);
+ 
+ 	spin_lock(&gman->lock);
+ 
+diff --git a/drivers/hid/hid-logitech-hidpp.c b/drivers/hid/hid-logitech-hidpp.c
+index 15ed6177a7a3..f040c8a7f9a9 100644
+--- a/drivers/hid/hid-logitech-hidpp.c
++++ b/drivers/hid/hid-logitech-hidpp.c
+@@ -2608,8 +2608,9 @@ static int m560_raw_event(struct hid_device *hdev, u8 *data, int size)
+ 		input_report_rel(mydata->input, REL_Y, v);
+ 
+ 		v = hid_snto32(data[6], 8);
+-		hidpp_scroll_counter_handle_scroll(
+-				&hidpp->vertical_wheel_counter, v);
++		if (v != 0)
++			hidpp_scroll_counter_handle_scroll(
++					&hidpp->vertical_wheel_counter, v);
+ 
+ 		input_sync(mydata->input);
+ 	}
+diff --git a/drivers/hid/intel-ish-hid/ipc/ipc.c b/drivers/hid/intel-ish-hid/ipc/ipc.c
+index 742191bb24c6..45e33c7ba9a6 100644
+--- a/drivers/hid/intel-ish-hid/ipc/ipc.c
++++ b/drivers/hid/intel-ish-hid/ipc/ipc.c
+@@ -91,7 +91,10 @@ static bool check_generated_interrupt(struct ishtp_device *dev)
+ 			IPC_INT_FROM_ISH_TO_HOST_CHV_AB(pisr_val);
+ 	} else {
+ 		pisr_val = ish_reg_read(dev, IPC_REG_PISR_BXT);
+-		interrupt_generated = IPC_INT_FROM_ISH_TO_HOST_BXT(pisr_val);
++		interrupt_generated = !!pisr_val;
++		/* only busy-clear bit is RW, others are RO */
++		if (pisr_val)
++			ish_reg_write(dev, IPC_REG_PISR_BXT, pisr_val);
+ 	}
+ 
+ 	return interrupt_generated;
+@@ -839,11 +842,11 @@ int ish_hw_start(struct ishtp_device *dev)
+ {
+ 	ish_set_host_rdy(dev);
+ 
++	set_host_ready(dev);
++
+ 	/* After that we can enable ISH DMA operation and wakeup ISHFW */
+ 	ish_wakeup(dev);
+ 
+-	set_host_ready(dev);
+-
+ 	/* wait for FW-initiated reset flow */
+ 	if (!dev->recvd_hw_ready)
+ 		wait_event_interruptible_timeout(dev->wait_hw_ready,
+diff --git a/drivers/hid/intel-ish-hid/ishtp/bus.c b/drivers/hid/intel-ish-hid/ishtp/bus.c
+index 728dc6d4561a..a271d6d169b1 100644
+--- a/drivers/hid/intel-ish-hid/ishtp/bus.c
++++ b/drivers/hid/intel-ish-hid/ishtp/bus.c
+@@ -675,7 +675,8 @@ int ishtp_cl_device_bind(struct ishtp_cl *cl)
+ 	spin_lock_irqsave(&cl->dev->device_list_lock, flags);
+ 	list_for_each_entry(cl_device, &cl->dev->device_list,
+ 			device_link) {
+-		if (cl_device->fw_client->client_id == cl->fw_client_id) {
++		if (cl_device->fw_client &&
++		    cl_device->fw_client->client_id == cl->fw_client_id) {
+ 			cl->device = cl_device;
+ 			rv = 0;
+ 			break;
+@@ -735,6 +736,7 @@ void ishtp_bus_remove_all_clients(struct ishtp_device *ishtp_dev,
+ 	spin_lock_irqsave(&ishtp_dev->device_list_lock, flags);
+ 	list_for_each_entry_safe(cl_device, n, &ishtp_dev->device_list,
+ 				 device_link) {
++		cl_device->fw_client = NULL;
+ 		if (warm_reset && cl_device->reference_count)
+ 			continue;
+ 
+diff --git a/drivers/hwmon/Kconfig b/drivers/hwmon/Kconfig
+index 6f929bfa9fcd..d0f1dfe2bcbb 100644
+--- a/drivers/hwmon/Kconfig
++++ b/drivers/hwmon/Kconfig
+@@ -1759,6 +1759,7 @@ config SENSORS_VT8231
+ config SENSORS_W83773G
+ 	tristate "Nuvoton W83773G"
+ 	depends on I2C
++	select REGMAP_I2C
+ 	help
+ 	  If you say yes here you get support for the Nuvoton W83773G hardware
+ 	  monitoring chip.
+diff --git a/drivers/hwmon/occ/common.c b/drivers/hwmon/occ/common.c
+index 391118c8aae8..c888f4aca45c 100644
+--- a/drivers/hwmon/occ/common.c
++++ b/drivers/hwmon/occ/common.c
+@@ -889,6 +889,8 @@ static int occ_setup_sensor_attrs(struct occ *occ)
+ 				s++;
+ 			}
+ 		}
++
++		s = (sensors->power.num_sensors * 4) + 1;
+ 	} else {
+ 		for (i = 0; i < sensors->power.num_sensors; ++i) {
+ 			s = i + 1;
+@@ -917,11 +919,11 @@ static int occ_setup_sensor_attrs(struct occ *occ)
+ 						     show_power, NULL, 3, i);
+ 			attr++;
+ 		}
+-	}
+ 
+-	if (sensors->caps.num_sensors >= 1) {
+ 		s = sensors->power.num_sensors + 1;
++	}
+ 
++	if (sensors->caps.num_sensors >= 1) {
+ 		snprintf(attr->name, sizeof(attr->name), "power%d_label", s);
+ 		attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_caps, NULL,
+ 					     0, 0);
+diff --git a/drivers/hwtracing/coresight/coresight-etm-perf.c b/drivers/hwtracing/coresight/coresight-etm-perf.c
+index abe8249b893b..f21eb28b6782 100644
+--- a/drivers/hwtracing/coresight/coresight-etm-perf.c
++++ b/drivers/hwtracing/coresight/coresight-etm-perf.c
+@@ -177,15 +177,15 @@ static void etm_free_aux(void *data)
+ 	schedule_work(&event_data->work);
+ }
+ 
+-static void *etm_setup_aux(int event_cpu, void **pages,
++static void *etm_setup_aux(struct perf_event *event, void **pages,
+ 			   int nr_pages, bool overwrite)
+ {
+-	int cpu;
++	int cpu = event->cpu;
+ 	cpumask_t *mask;
+ 	struct coresight_device *sink;
+ 	struct etm_event_data *event_data = NULL;
+ 
+-	event_data = alloc_event_data(event_cpu);
++	event_data = alloc_event_data(cpu);
+ 	if (!event_data)
+ 		return NULL;
+ 	INIT_WORK(&event_data->work, free_event_data);
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x.c b/drivers/hwtracing/coresight/coresight-etm4x.c
+index 53e2fb6e86f6..fe76b176974a 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x.c
++++ b/drivers/hwtracing/coresight/coresight-etm4x.c
+@@ -55,7 +55,8 @@ static void etm4_os_unlock(struct etmv4_drvdata *drvdata)
+ 
+ static bool etm4_arch_supported(u8 arch)
+ {
+-	switch (arch) {
++	/* Mask out the minor version number */
++	switch (arch & 0xf0) {
+ 	case ETM_ARCH_V4:
+ 		break;
+ 	default:
+diff --git a/drivers/hwtracing/intel_th/gth.c b/drivers/hwtracing/intel_th/gth.c
+index 8426b7970c14..cc287cf6eb29 100644
+--- a/drivers/hwtracing/intel_th/gth.c
++++ b/drivers/hwtracing/intel_th/gth.c
+@@ -607,6 +607,7 @@ static void intel_th_gth_unassign(struct intel_th_device *thdev,
+ {
+ 	struct gth_device *gth = dev_get_drvdata(&thdev->dev);
+ 	int port = othdev->output.port;
++	int master;
+ 
+ 	if (thdev->host_mode)
+ 		return;
+@@ -615,6 +616,9 @@ static void intel_th_gth_unassign(struct intel_th_device *thdev,
+ 	othdev->output.port = -1;
+ 	othdev->output.active = false;
+ 	gth->output[port].output = NULL;
++	for (master = 0; master < TH_CONFIGURABLE_MASTERS; master++)
++		if (gth->master[master] == port)
++			gth->master[master] = -1;
+ 	spin_unlock(&gth->gth_lock);
+ }
+ 
+diff --git a/drivers/hwtracing/stm/core.c b/drivers/hwtracing/stm/core.c
+index 93ce3aa740a9..c7ba8acfd4d5 100644
+--- a/drivers/hwtracing/stm/core.c
++++ b/drivers/hwtracing/stm/core.c
+@@ -244,6 +244,9 @@ static int find_free_channels(unsigned long *bitmap, unsigned int start,
+ 			;
+ 		if (i == width)
+ 			return pos;
++
++		/* step over [pos..pos+i) to continue search */
++		pos += i;
+ 	}
+ 
+ 	return -1;
+@@ -732,7 +735,7 @@ static int stm_char_policy_set_ioctl(struct stm_file *stmf, void __user *arg)
+ 	struct stm_device *stm = stmf->stm;
+ 	struct stp_policy_id *id;
+ 	char *ids[] = { NULL, NULL };
+-	int ret = -EINVAL;
++	int ret = -EINVAL, wlimit = 1;
+ 	u32 size;
+ 
+ 	if (stmf->output.nr_chans)
+@@ -760,8 +763,10 @@ static int stm_char_policy_set_ioctl(struct stm_file *stmf, void __user *arg)
+ 	if (id->__reserved_0 || id->__reserved_1)
+ 		goto err_free;
+ 
+-	if (id->width < 1 ||
+-	    id->width > PAGE_SIZE / stm->data->sw_mmiosz)
++	if (stm->data->sw_mmiosz)
++		wlimit = PAGE_SIZE / stm->data->sw_mmiosz;
++
++	if (id->width < 1 || id->width > wlimit)
+ 		goto err_free;
+ 
+ 	ids[0] = id->id;
+diff --git a/drivers/i2c/busses/i2c-designware-core.h b/drivers/i2c/busses/i2c-designware-core.h
+index b4a0b2b99a78..6b4ef1d38fb2 100644
+--- a/drivers/i2c/busses/i2c-designware-core.h
++++ b/drivers/i2c/busses/i2c-designware-core.h
+@@ -215,6 +215,7 @@
+  * @disable_int: function to disable all interrupts
+  * @init: function to initialize the I2C hardware
+  * @mode: operation mode - DW_IC_MASTER or DW_IC_SLAVE
++ * @suspended: set to true if the controller is suspended
+  *
+  * HCNT and LCNT parameters can be used if the platform knows more accurate
+  * values than the one computed based only on the input clock frequency.
+@@ -270,6 +271,7 @@ struct dw_i2c_dev {
+ 	int			(*set_sda_hold_time)(struct dw_i2c_dev *dev);
+ 	int			mode;
+ 	struct i2c_bus_recovery_info rinfo;
++	bool			suspended;
+ };
+ 
+ #define ACCESS_SWAP		0x00000001
+diff --git a/drivers/i2c/busses/i2c-designware-master.c b/drivers/i2c/busses/i2c-designware-master.c
+index 8d1bc44d2530..bb8e3f149979 100644
+--- a/drivers/i2c/busses/i2c-designware-master.c
++++ b/drivers/i2c/busses/i2c-designware-master.c
+@@ -426,6 +426,12 @@ i2c_dw_xfer(struct i2c_adapter *adap, struct i2c_msg msgs[], int num)
+ 
+ 	pm_runtime_get_sync(dev->dev);
+ 
++	if (dev->suspended) {
++		dev_err(dev->dev, "Error %s call while suspended\n", __func__);
++		ret = -ESHUTDOWN;
++		goto done_nolock;
++	}
++
+ 	reinit_completion(&dev->cmd_complete);
+ 	dev->msgs = msgs;
+ 	dev->msgs_num = num;
+diff --git a/drivers/i2c/busses/i2c-designware-pcidrv.c b/drivers/i2c/busses/i2c-designware-pcidrv.c
+index d50f80487214..76810deb2de6 100644
+--- a/drivers/i2c/busses/i2c-designware-pcidrv.c
++++ b/drivers/i2c/busses/i2c-designware-pcidrv.c
+@@ -176,6 +176,7 @@ static int i2c_dw_pci_suspend(struct device *dev)
+ 	struct pci_dev *pdev = to_pci_dev(dev);
+ 	struct dw_i2c_dev *i_dev = pci_get_drvdata(pdev);
+ 
++	i_dev->suspended = true;
+ 	i_dev->disable(i_dev);
+ 
+ 	return 0;
+@@ -185,8 +186,12 @@ static int i2c_dw_pci_resume(struct device *dev)
+ {
+ 	struct pci_dev *pdev = to_pci_dev(dev);
+ 	struct dw_i2c_dev *i_dev = pci_get_drvdata(pdev);
++	int ret;
+ 
+-	return i_dev->init(i_dev);
++	ret = i_dev->init(i_dev);
++	i_dev->suspended = false;
++
++	return ret;
+ }
+ #endif
+ 
+diff --git a/drivers/i2c/busses/i2c-designware-platdrv.c b/drivers/i2c/busses/i2c-designware-platdrv.c
+index 9eaac3be1f63..ead5e7de3e4d 100644
+--- a/drivers/i2c/busses/i2c-designware-platdrv.c
++++ b/drivers/i2c/busses/i2c-designware-platdrv.c
+@@ -454,6 +454,8 @@ static int dw_i2c_plat_suspend(struct device *dev)
+ {
+ 	struct dw_i2c_dev *i_dev = dev_get_drvdata(dev);
+ 
++	i_dev->suspended = true;
++
+ 	if (i_dev->shared_with_punit)
+ 		return 0;
+ 
+@@ -471,6 +473,7 @@ static int dw_i2c_plat_resume(struct device *dev)
+ 		i2c_dw_prepare_clk(i_dev, true);
+ 
+ 	i_dev->init(i_dev);
++	i_dev->suspended = false;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/i2c/busses/i2c-tegra.c b/drivers/i2c/busses/i2c-tegra.c
+index c77adbbea0c7..e85dc8583896 100644
+--- a/drivers/i2c/busses/i2c-tegra.c
++++ b/drivers/i2c/busses/i2c-tegra.c
+@@ -118,6 +118,9 @@
+ #define I2C_MST_FIFO_STATUS_TX_MASK		0xff0000
+ #define I2C_MST_FIFO_STATUS_TX_SHIFT		16
+ 
++/* Packet header size in bytes */
++#define I2C_PACKET_HEADER_SIZE			12
++
+ /*
+  * msg_end_type: The bus control which need to be send at end of transfer.
+  * @MSG_END_STOP: Send stop pulse at end of transfer.
+@@ -836,12 +839,13 @@ static const struct i2c_algorithm tegra_i2c_algo = {
+ /* payload size is only 12 bit */
+ static const struct i2c_adapter_quirks tegra_i2c_quirks = {
+ 	.flags = I2C_AQ_NO_ZERO_LEN,
+-	.max_read_len = 4096,
+-	.max_write_len = 4096,
++	.max_read_len = SZ_4K,
++	.max_write_len = SZ_4K - I2C_PACKET_HEADER_SIZE,
+ };
+ 
+ static const struct i2c_adapter_quirks tegra194_i2c_quirks = {
+ 	.flags = I2C_AQ_NO_ZERO_LEN,
++	.max_write_len = SZ_64K - I2C_PACKET_HEADER_SIZE,
+ };
+ 
+ static const struct tegra_i2c_hw_feature tegra20_i2c_hw = {
+diff --git a/drivers/i2c/i2c-core-base.c b/drivers/i2c/i2c-core-base.c
+index 28460f6a60cc..af87a16ac3a5 100644
+--- a/drivers/i2c/i2c-core-base.c
++++ b/drivers/i2c/i2c-core-base.c
+@@ -430,7 +430,7 @@ static int i2c_device_remove(struct device *dev)
+ 	dev_pm_clear_wake_irq(&client->dev);
+ 	device_init_wakeup(&client->dev, false);
+ 
+-	client->irq = 0;
++	client->irq = client->init_irq;
+ 
+ 	return status;
+ }
+@@ -741,10 +741,11 @@ i2c_new_device(struct i2c_adapter *adap, struct i2c_board_info const *info)
+ 	client->flags = info->flags;
+ 	client->addr = info->addr;
+ 
+-	client->irq = info->irq;
+-	if (!client->irq)
+-		client->irq = i2c_dev_irq_from_resources(info->resources,
++	client->init_irq = info->irq;
++	if (!client->init_irq)
++		client->init_irq = i2c_dev_irq_from_resources(info->resources,
+ 							 info->num_resources);
++	client->irq = client->init_irq;
+ 
+ 	strlcpy(client->name, info->type, sizeof(client->name));
+ 
+diff --git a/drivers/i2c/i2c-core-of.c b/drivers/i2c/i2c-core-of.c
+index 6cb7ad608bcd..0f01cdba9d2c 100644
+--- a/drivers/i2c/i2c-core-of.c
++++ b/drivers/i2c/i2c-core-of.c
+@@ -121,6 +121,17 @@ static int of_dev_node_match(struct device *dev, void *data)
+ 	return dev->of_node == data;
+ }
+ 
++static int of_dev_or_parent_node_match(struct device *dev, void *data)
++{
++	if (dev->of_node == data)
++		return 1;
++
++	if (dev->parent)
++		return dev->parent->of_node == data;
++
++	return 0;
++}
++
+ /* must call put_device() when done with returned i2c_client device */
+ struct i2c_client *of_find_i2c_device_by_node(struct device_node *node)
+ {
+@@ -145,7 +156,8 @@ struct i2c_adapter *of_find_i2c_adapter_by_node(struct device_node *node)
+ 	struct device *dev;
+ 	struct i2c_adapter *adapter;
+ 
+-	dev = bus_find_device(&i2c_bus_type, NULL, node, of_dev_node_match);
++	dev = bus_find_device(&i2c_bus_type, NULL, node,
++			      of_dev_or_parent_node_match);
+ 	if (!dev)
+ 		return NULL;
+ 
+diff --git a/drivers/iio/adc/exynos_adc.c b/drivers/iio/adc/exynos_adc.c
+index fa2d2b5767f3..1ca2c4d39f87 100644
+--- a/drivers/iio/adc/exynos_adc.c
++++ b/drivers/iio/adc/exynos_adc.c
+@@ -115,6 +115,7 @@
+ #define MAX_ADC_V2_CHANNELS		10
+ #define MAX_ADC_V1_CHANNELS		8
+ #define MAX_EXYNOS3250_ADC_CHANNELS	2
++#define MAX_EXYNOS4212_ADC_CHANNELS	4
+ #define MAX_S5PV210_ADC_CHANNELS	10
+ 
+ /* Bit definitions common for ADC_V1 and ADC_V2 */
+@@ -271,6 +272,19 @@ static void exynos_adc_v1_start_conv(struct exynos_adc *info,
+ 	writel(con1 | ADC_CON_EN_START, ADC_V1_CON(info->regs));
+ }
+ 
++/* Exynos4212 and 4412 are like ADCv1 but with four channels only */
++static const struct exynos_adc_data exynos4212_adc_data = {
++	.num_channels	= MAX_EXYNOS4212_ADC_CHANNELS,
++	.mask		= ADC_DATX_MASK,	/* 12 bit ADC resolution */
++	.needs_adc_phy	= true,
++	.phy_offset	= EXYNOS_ADCV1_PHY_OFFSET,
++
++	.init_hw	= exynos_adc_v1_init_hw,
++	.exit_hw	= exynos_adc_v1_exit_hw,
++	.clear_irq	= exynos_adc_v1_clear_irq,
++	.start_conv	= exynos_adc_v1_start_conv,
++};
++
+ static const struct exynos_adc_data exynos_adc_v1_data = {
+ 	.num_channels	= MAX_ADC_V1_CHANNELS,
+ 	.mask		= ADC_DATX_MASK,	/* 12 bit ADC resolution */
+@@ -492,6 +506,9 @@ static const struct of_device_id exynos_adc_match[] = {
+ 	}, {
+ 		.compatible = "samsung,s5pv210-adc",
+ 		.data = &exynos_adc_s5pv210_data,
++	}, {
++		.compatible = "samsung,exynos4212-adc",
++		.data = &exynos4212_adc_data,
+ 	}, {
+ 		.compatible = "samsung,exynos-adc-v1",
+ 		.data = &exynos_adc_v1_data,
+@@ -929,7 +946,7 @@ static int exynos_adc_remove(struct platform_device *pdev)
+ 	struct iio_dev *indio_dev = platform_get_drvdata(pdev);
+ 	struct exynos_adc *info = iio_priv(indio_dev);
+ 
+-	if (IS_REACHABLE(CONFIG_INPUT)) {
++	if (IS_REACHABLE(CONFIG_INPUT) && info->input) {
+ 		free_irq(info->tsirq, info);
+ 		input_unregister_device(info->input);
+ 	}
+diff --git a/drivers/iio/adc/qcom-pm8xxx-xoadc.c b/drivers/iio/adc/qcom-pm8xxx-xoadc.c
+index c30c002f1fef..4735f8a1ca9d 100644
+--- a/drivers/iio/adc/qcom-pm8xxx-xoadc.c
++++ b/drivers/iio/adc/qcom-pm8xxx-xoadc.c
+@@ -423,18 +423,14 @@ static irqreturn_t pm8xxx_eoc_irq(int irq, void *d)
+ static struct pm8xxx_chan_info *
+ pm8xxx_get_channel(struct pm8xxx_xoadc *adc, u8 chan)
+ {
+-	struct pm8xxx_chan_info *ch;
+ 	int i;
+ 
+ 	for (i = 0; i < adc->nchans; i++) {
+-		ch = &adc->chans[i];
++		struct pm8xxx_chan_info *ch = &adc->chans[i];
+ 		if (ch->hwchan->amux_channel == chan)
+-			break;
++			return ch;
+ 	}
+-	if (i == adc->nchans)
+-		return NULL;
+-
+-	return ch;
++	return NULL;
+ }
+ 
+ static int pm8xxx_read_channel_rsv(struct pm8xxx_xoadc *adc,
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index 84f077b2b90a..81bded0d37d1 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -2966,13 +2966,22 @@ static void addr_handler(int status, struct sockaddr *src_addr,
+ {
+ 	struct rdma_id_private *id_priv = context;
+ 	struct rdma_cm_event event = {};
++	struct sockaddr *addr;
++	struct sockaddr_storage old_addr;
+ 
+ 	mutex_lock(&id_priv->handler_mutex);
+ 	if (!cma_comp_exch(id_priv, RDMA_CM_ADDR_QUERY,
+ 			   RDMA_CM_ADDR_RESOLVED))
+ 		goto out;
+ 
+-	memcpy(cma_src_addr(id_priv), src_addr, rdma_addr_size(src_addr));
++	/*
++	 * Store the previous src address, so that if we fail to acquire a
++	 * matching rdma device, the old address can be restored, which helps
++	 * to cancel the cma listen operation correctly.
++	 */
++	addr = cma_src_addr(id_priv);
++	memcpy(&old_addr, addr, rdma_addr_size(addr));
++	memcpy(addr, src_addr, rdma_addr_size(src_addr));
+ 	if (!status && !id_priv->cma_dev) {
+ 		status = cma_acquire_dev_by_src_ip(id_priv);
+ 		if (status)
+@@ -2983,6 +2992,8 @@ static void addr_handler(int status, struct sockaddr *src_addr,
+ 	}
+ 
+ 	if (status) {
++		memcpy(addr, &old_addr,
++		       rdma_addr_size((struct sockaddr *)&old_addr));
+ 		if (!cma_comp_exch(id_priv, RDMA_CM_ADDR_RESOLVED,
+ 				   RDMA_CM_ADDR_BOUND))
+ 			goto out;
+diff --git a/drivers/infiniband/hw/cxgb4/cm.c b/drivers/infiniband/hw/cxgb4/cm.c
+index 8221813219e5..25a81fbb0d4d 100644
+--- a/drivers/infiniband/hw/cxgb4/cm.c
++++ b/drivers/infiniband/hw/cxgb4/cm.c
+@@ -1903,8 +1903,10 @@ static int abort_rpl(struct c4iw_dev *dev, struct sk_buff *skb)
+ 	}
+ 	mutex_unlock(&ep->com.mutex);
+ 
+-	if (release)
++	if (release) {
++		close_complete_upcall(ep, -ECONNRESET);
+ 		release_ep_resources(ep);
++	}
+ 	c4iw_put_ep(&ep->com);
+ 	return 0;
+ }
+@@ -3606,7 +3608,6 @@ int c4iw_ep_disconnect(struct c4iw_ep *ep, int abrupt, gfp_t gfp)
+ 	if (close) {
+ 		if (abrupt) {
+ 			set_bit(EP_DISC_ABORT, &ep->com.history);
+-			close_complete_upcall(ep, -ECONNRESET);
+ 			ret = send_abort(ep);
+ 		} else {
+ 			set_bit(EP_DISC_CLOSE, &ep->com.history);
+diff --git a/drivers/infiniband/hw/hfi1/hfi.h b/drivers/infiniband/hw/hfi1/hfi.h
+index 6db2276f5c13..15ec3e1feb09 100644
+--- a/drivers/infiniband/hw/hfi1/hfi.h
++++ b/drivers/infiniband/hw/hfi1/hfi.h
+@@ -1435,7 +1435,7 @@ void hfi1_init_pportdata(struct pci_dev *pdev, struct hfi1_pportdata *ppd,
+ 			 struct hfi1_devdata *dd, u8 hw_pidx, u8 port);
+ void hfi1_free_ctxtdata(struct hfi1_devdata *dd, struct hfi1_ctxtdata *rcd);
+ int hfi1_rcd_put(struct hfi1_ctxtdata *rcd);
+-void hfi1_rcd_get(struct hfi1_ctxtdata *rcd);
++int hfi1_rcd_get(struct hfi1_ctxtdata *rcd);
+ struct hfi1_ctxtdata *hfi1_rcd_get_by_index_safe(struct hfi1_devdata *dd,
+ 						 u16 ctxt);
+ struct hfi1_ctxtdata *hfi1_rcd_get_by_index(struct hfi1_devdata *dd, u16 ctxt);
+diff --git a/drivers/infiniband/hw/hfi1/init.c b/drivers/infiniband/hw/hfi1/init.c
+index 7835eb52e7c5..c532ceb0bb9a 100644
+--- a/drivers/infiniband/hw/hfi1/init.c
++++ b/drivers/infiniband/hw/hfi1/init.c
+@@ -215,12 +215,12 @@ static void hfi1_rcd_free(struct kref *kref)
+ 	struct hfi1_ctxtdata *rcd =
+ 		container_of(kref, struct hfi1_ctxtdata, kref);
+ 
+-	hfi1_free_ctxtdata(rcd->dd, rcd);
+-
+ 	spin_lock_irqsave(&rcd->dd->uctxt_lock, flags);
+ 	rcd->dd->rcd[rcd->ctxt] = NULL;
+ 	spin_unlock_irqrestore(&rcd->dd->uctxt_lock, flags);
+ 
++	hfi1_free_ctxtdata(rcd->dd, rcd);
++
+ 	kfree(rcd);
+ }
+ 
+@@ -243,10 +243,13 @@ int hfi1_rcd_put(struct hfi1_ctxtdata *rcd)
+  * @rcd: pointer to an initialized rcd data structure
+  *
+  * Use this to get a reference after the init.
++ *
++ * Return: reflects kref_get_unless_zero(), which returns non-zero on
++ * increment, otherwise 0.
+  */
+-void hfi1_rcd_get(struct hfi1_ctxtdata *rcd)
++int hfi1_rcd_get(struct hfi1_ctxtdata *rcd)
+ {
+-	kref_get(&rcd->kref);
++	return kref_get_unless_zero(&rcd->kref);
+ }
+ 
+ /**
+@@ -326,7 +329,8 @@ struct hfi1_ctxtdata *hfi1_rcd_get_by_index(struct hfi1_devdata *dd, u16 ctxt)
+ 	spin_lock_irqsave(&dd->uctxt_lock, flags);
+ 	if (dd->rcd[ctxt]) {
+ 		rcd = dd->rcd[ctxt];
+-		hfi1_rcd_get(rcd);
++		if (!hfi1_rcd_get(rcd))
++			rcd = NULL;
+ 	}
+ 	spin_unlock_irqrestore(&dd->uctxt_lock, flags);
+ 
+diff --git a/drivers/infiniband/hw/mlx4/cm.c b/drivers/infiniband/hw/mlx4/cm.c
+index fedaf8260105..8c79a480f2b7 100644
+--- a/drivers/infiniband/hw/mlx4/cm.c
++++ b/drivers/infiniband/hw/mlx4/cm.c
+@@ -39,7 +39,7 @@
+ 
+ #include "mlx4_ib.h"
+ 
+-#define CM_CLEANUP_CACHE_TIMEOUT  (5 * HZ)
++#define CM_CLEANUP_CACHE_TIMEOUT  (30 * HZ)
+ 
+ struct id_map_entry {
+ 	struct rb_node node;
+diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
+index 4ee32964e1dd..948eb6e25219 100644
+--- a/drivers/infiniband/hw/mlx5/odp.c
++++ b/drivers/infiniband/hw/mlx5/odp.c
+@@ -560,7 +560,7 @@ static int pagefault_mr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr,
+ 	struct ib_umem_odp *odp_mr = to_ib_umem_odp(mr->umem);
+ 	bool downgrade = flags & MLX5_PF_FLAGS_DOWNGRADE;
+ 	bool prefetch = flags & MLX5_PF_FLAGS_PREFETCH;
+-	u64 access_mask = ODP_READ_ALLOWED_BIT;
++	u64 access_mask;
+ 	u64 start_idx, page_mask;
+ 	struct ib_umem_odp *odp;
+ 	size_t size;
+@@ -582,6 +582,7 @@ next_mr:
+ 	page_shift = mr->umem->page_shift;
+ 	page_mask = ~(BIT(page_shift) - 1);
+ 	start_idx = (io_virt - (mr->mmkey.iova & page_mask)) >> page_shift;
++	access_mask = ODP_READ_ALLOWED_BIT;
+ 
+ 	if (prefetch && !downgrade && !mr->umem->writable) {
+ 		/* prefetch with write-access must
+diff --git a/drivers/infiniband/sw/rdmavt/qp.c b/drivers/infiniband/sw/rdmavt/qp.c
+index c6cc3e4ab71d..c45b8359b389 100644
+--- a/drivers/infiniband/sw/rdmavt/qp.c
++++ b/drivers/infiniband/sw/rdmavt/qp.c
+@@ -2785,6 +2785,18 @@ again:
+ }
+ EXPORT_SYMBOL(rvt_copy_sge);
+ 
++static enum ib_wc_status loopback_qp_drop(struct rvt_ibport *rvp,
++					  struct rvt_qp *sqp)
++{
++	rvp->n_pkt_drops++;
++	/*
++	 * For RC, the requester would timeout and retry so
++	 * shortcut the timeouts and just signal too many retries.
++	 */
++	return sqp->ibqp.qp_type == IB_QPT_RC ?
++		IB_WC_RETRY_EXC_ERR : IB_WC_SUCCESS;
++}
++
+ /**
+  * ruc_loopback - handle UC and RC loopback requests
+  * @sqp: the sending QP
+@@ -2857,17 +2869,14 @@ again:
+ 	}
+ 	spin_unlock_irqrestore(&sqp->s_lock, flags);
+ 
+-	if (!qp || !(ib_rvt_state_ops[qp->state] & RVT_PROCESS_RECV_OK) ||
++	if (!qp) {
++		send_status = loopback_qp_drop(rvp, sqp);
++		goto serr_no_r_lock;
++	}
++	spin_lock_irqsave(&qp->r_lock, flags);
++	if (!(ib_rvt_state_ops[qp->state] & RVT_PROCESS_RECV_OK) ||
+ 	    qp->ibqp.qp_type != sqp->ibqp.qp_type) {
+-		rvp->n_pkt_drops++;
+-		/*
+-		 * For RC, the requester would timeout and retry so
+-		 * shortcut the timeouts and just signal too many retries.
+-		 */
+-		if (sqp->ibqp.qp_type == IB_QPT_RC)
+-			send_status = IB_WC_RETRY_EXC_ERR;
+-		else
+-			send_status = IB_WC_SUCCESS;
++		send_status = loopback_qp_drop(rvp, sqp);
+ 		goto serr;
+ 	}
+ 
+@@ -2893,18 +2902,8 @@ again:
+ 		goto send_comp;
+ 
+ 	case IB_WR_SEND_WITH_INV:
+-		if (!rvt_invalidate_rkey(qp, wqe->wr.ex.invalidate_rkey)) {
+-			wc.wc_flags = IB_WC_WITH_INVALIDATE;
+-			wc.ex.invalidate_rkey = wqe->wr.ex.invalidate_rkey;
+-		}
+-		goto send;
+-
+ 	case IB_WR_SEND_WITH_IMM:
+-		wc.wc_flags = IB_WC_WITH_IMM;
+-		wc.ex.imm_data = wqe->wr.ex.imm_data;
+-		/* FALLTHROUGH */
+ 	case IB_WR_SEND:
+-send:
+ 		ret = rvt_get_rwqe(qp, false);
+ 		if (ret < 0)
+ 			goto op_err;
+@@ -2912,6 +2911,22 @@ send:
+ 			goto rnr_nak;
+ 		if (wqe->length > qp->r_len)
+ 			goto inv_err;
++		switch (wqe->wr.opcode) {
++		case IB_WR_SEND_WITH_INV:
++			if (!rvt_invalidate_rkey(qp,
++						 wqe->wr.ex.invalidate_rkey)) {
++				wc.wc_flags = IB_WC_WITH_INVALIDATE;
++				wc.ex.invalidate_rkey =
++					wqe->wr.ex.invalidate_rkey;
++			}
++			break;
++		case IB_WR_SEND_WITH_IMM:
++			wc.wc_flags = IB_WC_WITH_IMM;
++			wc.ex.imm_data = wqe->wr.ex.imm_data;
++			break;
++		default:
++			break;
++		}
+ 		break;
+ 
+ 	case IB_WR_RDMA_WRITE_WITH_IMM:
+@@ -3041,6 +3056,7 @@ do_write:
+ 		     wqe->wr.send_flags & IB_SEND_SOLICITED);
+ 
+ send_comp:
++	spin_unlock_irqrestore(&qp->r_lock, flags);
+ 	spin_lock_irqsave(&sqp->s_lock, flags);
+ 	rvp->n_loop_pkts++;
+ flush_send:
+@@ -3067,6 +3083,7 @@ rnr_nak:
+ 	}
+ 	if (sqp->s_rnr_retry_cnt < 7)
+ 		sqp->s_rnr_retry--;
++	spin_unlock_irqrestore(&qp->r_lock, flags);
+ 	spin_lock_irqsave(&sqp->s_lock, flags);
+ 	if (!(ib_rvt_state_ops[sqp->state] & RVT_PROCESS_RECV_OK))
+ 		goto clr_busy;
+@@ -3095,6 +3112,8 @@ err:
+ 	rvt_rc_error(qp, wc.status);
+ 
+ serr:
++	spin_unlock_irqrestore(&qp->r_lock, flags);
++serr_no_r_lock:
+ 	spin_lock_irqsave(&sqp->s_lock, flags);
+ 	rvt_send_complete(sqp, wqe, send_status);
+ 	if (sqp->ibqp.qp_type == IB_QPT_RC) {
+diff --git a/drivers/input/misc/soc_button_array.c b/drivers/input/misc/soc_button_array.c
+index 23520df7650f..55cd6e0b409c 100644
+--- a/drivers/input/misc/soc_button_array.c
++++ b/drivers/input/misc/soc_button_array.c
+@@ -373,7 +373,7 @@ static struct soc_button_info soc_button_PNP0C40[] = {
+ 	{ "home", 1, EV_KEY, KEY_LEFTMETA, false, true },
+ 	{ "volume_up", 2, EV_KEY, KEY_VOLUMEUP, true, false },
+ 	{ "volume_down", 3, EV_KEY, KEY_VOLUMEDOWN, true, false },
+-	{ "rotation_lock", 4, EV_SW, SW_ROTATE_LOCK, false, false },
++	{ "rotation_lock", 4, EV_KEY, KEY_ROTATE_LOCK_TOGGLE, false, false },
+ 	{ }
+ };
+ 
+diff --git a/drivers/input/mouse/elan_i2c_core.c b/drivers/input/mouse/elan_i2c_core.c
+index 225ae6980182..628ef617bb2f 100644
+--- a/drivers/input/mouse/elan_i2c_core.c
++++ b/drivers/input/mouse/elan_i2c_core.c
+@@ -1337,6 +1337,7 @@ static const struct acpi_device_id elan_acpi_id[] = {
+ 	{ "ELAN0000", 0 },
+ 	{ "ELAN0100", 0 },
+ 	{ "ELAN0600", 0 },
++	{ "ELAN0601", 0 },
+ 	{ "ELAN0602", 0 },
+ 	{ "ELAN0605", 0 },
+ 	{ "ELAN0608", 0 },
+diff --git a/drivers/input/tablet/wacom_serial4.c b/drivers/input/tablet/wacom_serial4.c
+index 38bfaca48eab..150f9eecaca7 100644
+--- a/drivers/input/tablet/wacom_serial4.c
++++ b/drivers/input/tablet/wacom_serial4.c
+@@ -187,6 +187,7 @@ enum {
+ 	MODEL_DIGITIZER_II	= 0x5544, /* UD */
+ 	MODEL_GRAPHIRE		= 0x4554, /* ET */
+ 	MODEL_PENPARTNER	= 0x4354, /* CT */
++	MODEL_ARTPAD_II		= 0x4B54, /* KT */
+ };
+ 
+ static void wacom_handle_model_response(struct wacom *wacom)
+@@ -245,6 +246,7 @@ static void wacom_handle_model_response(struct wacom *wacom)
+ 		wacom->flags = F_HAS_STYLUS2 | F_HAS_SCROLLWHEEL;
+ 		break;
+ 
++	case MODEL_ARTPAD_II:
+ 	case MODEL_DIGITIZER_II:
+ 		wacom->dev->name = "Wacom Digitizer II";
+ 		wacom->dev->id.version = MODEL_DIGITIZER_II;
+diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
+index 2a7b78bb98b4..e628ef23418f 100644
+--- a/drivers/iommu/amd_iommu.c
++++ b/drivers/iommu/amd_iommu.c
+@@ -2605,7 +2605,12 @@ static int map_sg(struct device *dev, struct scatterlist *sglist,
+ 
+ 	/* Everything is mapped - write the right values into s->dma_address */
+ 	for_each_sg(sglist, s, nelems, i) {
+-		s->dma_address += address + s->offset;
++		/*
++		 * Add in the remaining piece of the scatter-gather offset that
++		 * was masked out when we were determining the physical address
++		 * via (sg_phys(s) & PAGE_MASK) earlier.
++		 */
++		s->dma_address += address + (s->offset & ~PAGE_MASK);
+ 		s->dma_length   = s->length;
+ 	}
+ 
+diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
+index 78188bf7e90d..dbd6824dfffa 100644
+--- a/drivers/iommu/intel-iommu.c
++++ b/drivers/iommu/intel-iommu.c
+@@ -2485,7 +2485,8 @@ static struct dmar_domain *dmar_insert_one_dev_info(struct intel_iommu *iommu,
+ 	if (dev && dev_is_pci(dev)) {
+ 		struct pci_dev *pdev = to_pci_dev(info->dev);
+ 
+-		if (!pci_ats_disabled() &&
++		if (!pdev->untrusted &&
++		    !pci_ats_disabled() &&
+ 		    ecap_dev_iotlb_support(iommu->ecap) &&
+ 		    pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ATS) &&
+ 		    dmar_find_matched_atsr_unit(pdev))
+diff --git a/drivers/iommu/io-pgtable-arm-v7s.c b/drivers/iommu/io-pgtable-arm-v7s.c
+index cec29bf45c9b..18a8330e1882 100644
+--- a/drivers/iommu/io-pgtable-arm-v7s.c
++++ b/drivers/iommu/io-pgtable-arm-v7s.c
+@@ -161,6 +161,14 @@
+ 
+ #define ARM_V7S_TCR_PD1			BIT(5)
+ 
++#ifdef CONFIG_ZONE_DMA32
++#define ARM_V7S_TABLE_GFP_DMA GFP_DMA32
++#define ARM_V7S_TABLE_SLAB_FLAGS SLAB_CACHE_DMA32
++#else
++#define ARM_V7S_TABLE_GFP_DMA GFP_DMA
++#define ARM_V7S_TABLE_SLAB_FLAGS SLAB_CACHE_DMA
++#endif
++
+ typedef u32 arm_v7s_iopte;
+ 
+ static bool selftest_running;
+@@ -198,13 +206,16 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
+ 	void *table = NULL;
+ 
+ 	if (lvl == 1)
+-		table = (void *)__get_dma_pages(__GFP_ZERO, get_order(size));
++		table = (void *)__get_free_pages(
++			__GFP_ZERO | ARM_V7S_TABLE_GFP_DMA, get_order(size));
+ 	else if (lvl == 2)
+-		table = kmem_cache_zalloc(data->l2_tables, gfp | GFP_DMA);
++		table = kmem_cache_zalloc(data->l2_tables, gfp);
+ 	phys = virt_to_phys(table);
+-	if (phys != (arm_v7s_iopte)phys)
++	if (phys != (arm_v7s_iopte)phys) {
+ 		/* Doesn't fit in PTE */
++		dev_err(dev, "Page table does not fit in PTE: %pa", &phys);
+ 		goto out_free;
++	}
+ 	if (table && !(cfg->quirks & IO_PGTABLE_QUIRK_NO_DMA)) {
+ 		dma = dma_map_single(dev, table, size, DMA_TO_DEVICE);
+ 		if (dma_mapping_error(dev, dma))
+@@ -217,7 +228,8 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
+ 		if (dma != phys)
+ 			goto out_unmap;
+ 	}
+-	kmemleak_ignore(table);
++	if (lvl == 2)
++		kmemleak_ignore(table);
+ 	return table;
+ 
+ out_unmap:
+@@ -733,7 +745,7 @@ static struct io_pgtable *arm_v7s_alloc_pgtable(struct io_pgtable_cfg *cfg,
+ 	data->l2_tables = kmem_cache_create("io-pgtable_armv7s_l2",
+ 					    ARM_V7S_TABLE_SIZE(2),
+ 					    ARM_V7S_TABLE_SIZE(2),
+-					    SLAB_CACHE_DMA, NULL);
++					    ARM_V7S_TABLE_SLAB_FLAGS, NULL);
+ 	if (!data->l2_tables)
+ 		goto out_free_data;
+ 
+diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
+index f8d3ba247523..2de8122e218f 100644
+--- a/drivers/iommu/iova.c
++++ b/drivers/iommu/iova.c
+@@ -207,8 +207,10 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
+ 		curr_iova = rb_entry(curr, struct iova, node);
+ 	} while (curr && new_pfn <= curr_iova->pfn_hi);
+ 
+-	if (limit_pfn < size || new_pfn < iovad->start_pfn)
++	if (limit_pfn < size || new_pfn < iovad->start_pfn) {
++		iovad->max32_alloc_size = size;
+ 		goto iova32_full;
++	}
+ 
+ 	/* pfn_lo will point to size aligned address if size_aligned is set */
+ 	new->pfn_lo = new_pfn;
+@@ -222,7 +224,6 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
+ 	return 0;
+ 
+ iova32_full:
+-	iovad->max32_alloc_size = size;
+ 	spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
+ 	return -ENOMEM;
+ }
+diff --git a/drivers/irqchip/irq-brcmstb-l2.c b/drivers/irqchip/irq-brcmstb-l2.c
+index 0e65f609352e..83364fedbf0a 100644
+--- a/drivers/irqchip/irq-brcmstb-l2.c
++++ b/drivers/irqchip/irq-brcmstb-l2.c
+@@ -129,8 +129,9 @@ static void brcmstb_l2_intc_suspend(struct irq_data *d)
+ 	struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
+ 	struct irq_chip_type *ct = irq_data_get_chip_type(d);
+ 	struct brcmstb_l2_intc_data *b = gc->private;
++	unsigned long flags;
+ 
+-	irq_gc_lock(gc);
++	irq_gc_lock_irqsave(gc, flags);
+ 	/* Save the current mask */
+ 	b->saved_mask = irq_reg_readl(gc, ct->regs.mask);
+ 
+@@ -139,7 +140,7 @@ static void brcmstb_l2_intc_suspend(struct irq_data *d)
+ 		irq_reg_writel(gc, ~gc->wake_active, ct->regs.disable);
+ 		irq_reg_writel(gc, gc->wake_active, ct->regs.enable);
+ 	}
+-	irq_gc_unlock(gc);
++	irq_gc_unlock_irqrestore(gc, flags);
+ }
+ 
+ static void brcmstb_l2_intc_resume(struct irq_data *d)
+@@ -147,8 +148,9 @@ static void brcmstb_l2_intc_resume(struct irq_data *d)
+ 	struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
+ 	struct irq_chip_type *ct = irq_data_get_chip_type(d);
+ 	struct brcmstb_l2_intc_data *b = gc->private;
++	unsigned long flags;
+ 
+-	irq_gc_lock(gc);
++	irq_gc_lock_irqsave(gc, flags);
+ 	if (ct->chip.irq_ack) {
+ 		/* Clear unmasked non-wakeup interrupts */
+ 		irq_reg_writel(gc, ~b->saved_mask & ~gc->wake_active,
+@@ -158,7 +160,7 @@ static void brcmstb_l2_intc_resume(struct irq_data *d)
+ 	/* Restore the saved mask */
+ 	irq_reg_writel(gc, b->saved_mask, ct->regs.disable);
+ 	irq_reg_writel(gc, ~b->saved_mask, ct->regs.enable);
+-	irq_gc_unlock(gc);
++	irq_gc_unlock_irqrestore(gc, flags);
+ }
+ 
+ static int __init brcmstb_l2_intc_of_init(struct device_node *np,
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index c3aba3fc818d..93e32a59640c 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -1482,7 +1482,7 @@ static int lpi_range_cmp(void *priv, struct list_head *a, struct list_head *b)
+ 	ra = container_of(a, struct lpi_range, entry);
+ 	rb = container_of(b, struct lpi_range, entry);
+ 
+-	return rb->base_id - ra->base_id;
++	return ra->base_id - rb->base_id;
+ }
+ 
+ static void merge_lpi_ranges(void)
+@@ -1955,6 +1955,8 @@ static int its_alloc_tables(struct its_node *its)
+ 			indirect = its_parse_indirect_baser(its, baser,
+ 							    psz, &order,
+ 							    its->device_ids);
++			break;
++
+ 		case GITS_BASER_TYPE_VCPU:
+ 			indirect = its_parse_indirect_baser(its, baser,
+ 							    psz, &order,
+diff --git a/drivers/isdn/hardware/mISDN/hfcmulti.c b/drivers/isdn/hardware/mISDN/hfcmulti.c
+index 4d85645c87f7..0928fd1f0e0c 100644
+--- a/drivers/isdn/hardware/mISDN/hfcmulti.c
++++ b/drivers/isdn/hardware/mISDN/hfcmulti.c
+@@ -4365,7 +4365,8 @@ setup_pci(struct hfc_multi *hc, struct pci_dev *pdev,
+ 	if (m->clock2)
+ 		test_and_set_bit(HFC_CHIP_CLOCK2, &hc->chip);
+ 
+-	if (ent->device == 0xB410) {
++	if (ent->vendor == PCI_VENDOR_ID_DIGIUM &&
++	    ent->device == PCI_DEVICE_ID_DIGIUM_HFC4S) {
+ 		test_and_set_bit(HFC_CHIP_B410P, &hc->chip);
+ 		test_and_set_bit(HFC_CHIP_PCM_MASTER, &hc->chip);
+ 		test_and_clear_bit(HFC_CHIP_PCM_SLAVE, &hc->chip);
+diff --git a/drivers/leds/leds-lp55xx-common.c b/drivers/leds/leds-lp55xx-common.c
+index 3d79a6380761..723f2f17497a 100644
+--- a/drivers/leds/leds-lp55xx-common.c
++++ b/drivers/leds/leds-lp55xx-common.c
+@@ -201,7 +201,7 @@ static void lp55xx_firmware_loaded(const struct firmware *fw, void *context)
+ 
+ 	if (!fw) {
+ 		dev_err(dev, "firmware request failed\n");
+-		goto out;
++		return;
+ 	}
+ 
+ 	/* handling firmware data is chip dependent */
+@@ -214,9 +214,9 @@ static void lp55xx_firmware_loaded(const struct firmware *fw, void *context)
+ 
+ 	mutex_unlock(&chip->lock);
+ 
+-out:
+ 	/* firmware should be released for other channel use */
+ 	release_firmware(chip->fw);
++	chip->fw = NULL;
+ }
+ 
+ static int lp55xx_request_firmware(struct lp55xx_chip *chip)
+diff --git a/drivers/md/bcache/extents.c b/drivers/md/bcache/extents.c
+index 956004366699..886710043025 100644
+--- a/drivers/md/bcache/extents.c
++++ b/drivers/md/bcache/extents.c
+@@ -538,6 +538,7 @@ static bool bch_extent_bad(struct btree_keys *bk, const struct bkey *k)
+ {
+ 	struct btree *b = container_of(bk, struct btree, keys);
+ 	unsigned int i, stale;
++	char buf[80];
+ 
+ 	if (!KEY_PTRS(k) ||
+ 	    bch_extent_invalid(bk, k))
+@@ -547,19 +548,19 @@ static bool bch_extent_bad(struct btree_keys *bk, const struct bkey *k)
+ 		if (!ptr_available(b->c, k, i))
+ 			return true;
+ 
+-	if (!expensive_debug_checks(b->c) && KEY_DIRTY(k))
+-		return false;
+-
+ 	for (i = 0; i < KEY_PTRS(k); i++) {
+ 		stale = ptr_stale(b->c, k, i);
+ 
++		if (stale && KEY_DIRTY(k)) {
++			bch_extent_to_text(buf, sizeof(buf), k);
++			pr_info("stale dirty pointer, stale %u, key: %s",
++				stale, buf);
++		}
++
+ 		btree_bug_on(stale > BUCKET_GC_GEN_MAX, b,
+ 			     "key too stale: %i, need_gc %u",
+ 			     stale, b->c->need_gc);
+ 
+-		btree_bug_on(stale && KEY_DIRTY(k) && KEY_SIZE(k),
+-			     b, "stale dirty pointer");
+-
+ 		if (stale)
+ 			return true;
+ 
+diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
+index 15070412a32e..f101bfe8657a 100644
+--- a/drivers/md/bcache/request.c
++++ b/drivers/md/bcache/request.c
+@@ -392,10 +392,11 @@ static bool check_should_bypass(struct cached_dev *dc, struct bio *bio)
+ 
+ 	/*
+ 	 * Flag for bypass if the IO is for read-ahead or background,
+-	 * unless the read-ahead request is for metadata (eg, for gfs2).
++	 * unless the read-ahead request is for metadata
++	 * (eg, for gfs2 or xfs).
+ 	 */
+ 	if (bio->bi_opf & (REQ_RAHEAD|REQ_BACKGROUND) &&
+-	    !(bio->bi_opf & REQ_PRIO))
++	    !(bio->bi_opf & (REQ_META|REQ_PRIO)))
+ 		goto skip;
+ 
+ 	if (bio->bi_iter.bi_sector & (c->sb.block_size - 1) ||
+@@ -877,7 +878,7 @@ static int cached_dev_cache_miss(struct btree *b, struct search *s,
+ 	}
+ 
+ 	if (!(bio->bi_opf & REQ_RAHEAD) &&
+-	    !(bio->bi_opf & REQ_PRIO) &&
++	    !(bio->bi_opf & (REQ_META|REQ_PRIO)) &&
+ 	    s->iop.c->gc_stats.in_use < CUTOFF_CACHE_READA)
+ 		reada = min_t(sector_t, dc->readahead >> 9,
+ 			      get_capacity(bio->bi_disk) - bio_end_sector(bio));
+diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
+index 557a8a3270a1..e5daf91310f6 100644
+--- a/drivers/md/bcache/sysfs.c
++++ b/drivers/md/bcache/sysfs.c
+@@ -287,8 +287,12 @@ STORE(__cached_dev)
+ 	sysfs_strtoul_clamp(writeback_rate_update_seconds,
+ 			    dc->writeback_rate_update_seconds,
+ 			    1, WRITEBACK_RATE_UPDATE_SECS_MAX);
+-	d_strtoul(writeback_rate_i_term_inverse);
+-	d_strtoul_nonzero(writeback_rate_p_term_inverse);
++	sysfs_strtoul_clamp(writeback_rate_i_term_inverse,
++			    dc->writeback_rate_i_term_inverse,
++			    1, UINT_MAX);
++	sysfs_strtoul_clamp(writeback_rate_p_term_inverse,
++			    dc->writeback_rate_p_term_inverse,
++			    1, UINT_MAX);
+ 	d_strtoul_nonzero(writeback_rate_minimum);
+ 
+ 	sysfs_strtoul_clamp(io_error_limit, dc->error_limit, 0, INT_MAX);
+@@ -299,7 +303,9 @@ STORE(__cached_dev)
+ 		dc->io_disable = v ? 1 : 0;
+ 	}
+ 
+-	d_strtoi_h(sequential_cutoff);
++	sysfs_strtoul_clamp(sequential_cutoff,
++			    dc->sequential_cutoff,
++			    0, UINT_MAX);
+ 	d_strtoi_h(readahead);
+ 
+ 	if (attr == &sysfs_clear_stats)
+@@ -778,8 +784,17 @@ STORE(__bch_cache_set)
+ 		c->error_limit = strtoul_or_return(buf);
+ 
+ 	/* See count_io_errors() for why 88 */
+-	if (attr == &sysfs_io_error_halflife)
+-		c->error_decay = strtoul_or_return(buf) / 88;
++	if (attr == &sysfs_io_error_halflife) {
++		unsigned long v = 0;
++		ssize_t ret;
++
++		ret = strtoul_safe_clamp(buf, v, 0, UINT_MAX);
++		if (!ret) {
++			c->error_decay = v / 88;
++			return size;
++		}
++		return ret;
++	}
+ 
+ 	if (attr == &sysfs_io_disable) {
+ 		v = strtoul_or_return(buf);
+diff --git a/drivers/md/bcache/sysfs.h b/drivers/md/bcache/sysfs.h
+index 3fe82425859c..0ad2715a884e 100644
+--- a/drivers/md/bcache/sysfs.h
++++ b/drivers/md/bcache/sysfs.h
+@@ -81,9 +81,16 @@ do {									\
+ 
+ #define sysfs_strtoul_clamp(file, var, min, max)			\
+ do {									\
+-	if (attr == &sysfs_ ## file)					\
+-		return strtoul_safe_clamp(buf, var, min, max)		\
+-			?: (ssize_t) size;				\
++	if (attr == &sysfs_ ## file) {					\
++		unsigned long v = 0;					\
++		ssize_t ret;						\
++		ret = strtoul_safe_clamp(buf, v, min, max);		\
++		if (!ret) {						\
++			var = v;					\
++			return size;					\
++		}							\
++		return ret;						\
++	}								\
+ } while (0)
+ 
+ #define strtoul_or_return(cp)						\
+diff --git a/drivers/md/bcache/writeback.h b/drivers/md/bcache/writeback.h
+index 6a743d3bb338..4e4c6810dc3c 100644
+--- a/drivers/md/bcache/writeback.h
++++ b/drivers/md/bcache/writeback.h
+@@ -71,6 +71,9 @@ static inline bool should_writeback(struct cached_dev *dc, struct bio *bio,
+ 	    in_use > bch_cutoff_writeback_sync)
+ 		return false;
+ 
++	if (bio_op(bio) == REQ_OP_DISCARD)
++		return false;
++
+ 	if (dc->partial_stripes_expensive &&
+ 	    bcache_dev_stripe_dirty(dc, bio->bi_iter.bi_sector,
+ 				    bio_sectors(bio)))
+diff --git a/drivers/md/dm-core.h b/drivers/md/dm-core.h
+index 95c6d86ab5e8..c4ef1fceead6 100644
+--- a/drivers/md/dm-core.h
++++ b/drivers/md/dm-core.h
+@@ -115,6 +115,7 @@ struct mapped_device {
+ 	struct srcu_struct io_barrier;
+ };
+ 
++void disable_discard(struct mapped_device *md);
+ void disable_write_same(struct mapped_device *md);
+ void disable_write_zeroes(struct mapped_device *md);
+ 
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index 457200ca6287..f535fd8ac82d 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -913,7 +913,7 @@ static void copy_from_journal(struct dm_integrity_c *ic, unsigned section, unsig
+ static bool ranges_overlap(struct dm_integrity_range *range1, struct dm_integrity_range *range2)
+ {
+ 	return range1->logical_sector < range2->logical_sector + range2->n_sectors &&
+-	       range2->logical_sector + range2->n_sectors > range2->logical_sector;
++	       range1->logical_sector + range1->n_sectors > range2->logical_sector;
+ }
+ 
+ static bool add_new_range(struct dm_integrity_c *ic, struct dm_integrity_range *new_range, bool check_waiting)
+@@ -959,8 +959,6 @@ static void remove_range_unlocked(struct dm_integrity_c *ic, struct dm_integrity
+ 		struct dm_integrity_range *last_range =
+ 			list_first_entry(&ic->wait_list, struct dm_integrity_range, wait_entry);
+ 		struct task_struct *last_range_task;
+-		if (!ranges_overlap(range, last_range))
+-			break;
+ 		last_range_task = last_range->task;
+ 		list_del(&last_range->wait_entry);
+ 		if (!add_new_range(ic, last_range, false)) {
+@@ -1368,8 +1366,8 @@ again:
+ 						checksums_ptr - checksums, !dio->write ? TAG_CMP : TAG_WRITE);
+ 			if (unlikely(r)) {
+ 				if (r > 0) {
+-					DMERR("Checksum failed at sector 0x%llx",
+-					      (unsigned long long)(sector - ((r + ic->tag_size - 1) / ic->tag_size)));
++					DMERR_LIMIT("Checksum failed at sector 0x%llx",
++						    (unsigned long long)(sector - ((r + ic->tag_size - 1) / ic->tag_size)));
+ 					r = -EILSEQ;
+ 					atomic64_inc(&ic->number_of_mismatches);
+ 				}
+@@ -1561,8 +1559,8 @@ retry_kmap:
+ 
+ 					integrity_sector_checksum(ic, logical_sector, mem + bv.bv_offset, checksums_onstack);
+ 					if (unlikely(memcmp(checksums_onstack, journal_entry_tag(ic, je), ic->tag_size))) {
+-						DMERR("Checksum failed when reading from journal, at sector 0x%llx",
+-						      (unsigned long long)logical_sector);
++						DMERR_LIMIT("Checksum failed when reading from journal, at sector 0x%llx",
++							    (unsigned long long)logical_sector);
+ 					}
+ 				}
+ #endif
+@@ -3185,7 +3183,7 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned argc, char **argv)
+ 			journal_watermark = val;
+ 		else if (sscanf(opt_string, "commit_time:%u%c", &val, &dummy) == 1)
+ 			sync_msec = val;
+-		else if (!memcmp(opt_string, "meta_device:", strlen("meta_device:"))) {
++		else if (!strncmp(opt_string, "meta_device:", strlen("meta_device:"))) {
+ 			if (ic->meta_dev) {
+ 				dm_put_device(ti, ic->meta_dev);
+ 				ic->meta_dev = NULL;
+@@ -3204,17 +3202,17 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned argc, char **argv)
+ 				goto bad;
+ 			}
+ 			ic->sectors_per_block = val >> SECTOR_SHIFT;
+-		} else if (!memcmp(opt_string, "internal_hash:", strlen("internal_hash:"))) {
++		} else if (!strncmp(opt_string, "internal_hash:", strlen("internal_hash:"))) {
+ 			r = get_alg_and_key(opt_string, &ic->internal_hash_alg, &ti->error,
+ 					    "Invalid internal_hash argument");
+ 			if (r)
+ 				goto bad;
+-		} else if (!memcmp(opt_string, "journal_crypt:", strlen("journal_crypt:"))) {
++		} else if (!strncmp(opt_string, "journal_crypt:", strlen("journal_crypt:"))) {
+ 			r = get_alg_and_key(opt_string, &ic->journal_crypt_alg, &ti->error,
+ 					    "Invalid journal_crypt argument");
+ 			if (r)
+ 				goto bad;
+-		} else if (!memcmp(opt_string, "journal_mac:", strlen("journal_mac:"))) {
++		} else if (!strncmp(opt_string, "journal_mac:", strlen("journal_mac:"))) {
+ 			r = get_alg_and_key(opt_string, &ic->journal_mac_alg,  &ti->error,
+ 					    "Invalid journal_mac argument");
+ 			if (r)
+diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
+index a20531e5f3b4..582265e043a6 100644
+--- a/drivers/md/dm-rq.c
++++ b/drivers/md/dm-rq.c
+@@ -206,11 +206,14 @@ static void dm_done(struct request *clone, blk_status_t error, bool mapped)
+ 	}
+ 
+ 	if (unlikely(error == BLK_STS_TARGET)) {
+-		if (req_op(clone) == REQ_OP_WRITE_SAME &&
+-		    !clone->q->limits.max_write_same_sectors)
++		if (req_op(clone) == REQ_OP_DISCARD &&
++		    !clone->q->limits.max_discard_sectors)
++			disable_discard(tio->md);
++		else if (req_op(clone) == REQ_OP_WRITE_SAME &&
++			 !clone->q->limits.max_write_same_sectors)
+ 			disable_write_same(tio->md);
+-		if (req_op(clone) == REQ_OP_WRITE_ZEROES &&
+-		    !clone->q->limits.max_write_zeroes_sectors)
++		else if (req_op(clone) == REQ_OP_WRITE_ZEROES &&
++			 !clone->q->limits.max_write_zeroes_sectors)
+ 			disable_write_zeroes(tio->md);
+ 	}
+ 
+diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
+index 4b1be754cc41..eb257e4dcb1c 100644
+--- a/drivers/md/dm-table.c
++++ b/drivers/md/dm-table.c
+@@ -1852,6 +1852,36 @@ static bool dm_table_supports_secure_erase(struct dm_table *t)
+ 	return true;
+ }
+ 
++static int device_requires_stable_pages(struct dm_target *ti,
++					struct dm_dev *dev, sector_t start,
++					sector_t len, void *data)
++{
++	struct request_queue *q = bdev_get_queue(dev->bdev);
++
++	return q && bdi_cap_stable_pages_required(q->backing_dev_info);
++}
++
++/*
++ * If any underlying device requires stable pages, a table must require
++ * them as well.  Only targets that support iterate_devices are considered:
++ * don't want error, zero, etc to require stable pages.
++ */
++static bool dm_table_requires_stable_pages(struct dm_table *t)
++{
++	struct dm_target *ti;
++	unsigned i;
++
++	for (i = 0; i < dm_table_get_num_targets(t); i++) {
++		ti = dm_table_get_target(t, i);
++
++		if (ti->type->iterate_devices &&
++		    ti->type->iterate_devices(ti, device_requires_stable_pages, NULL))
++			return true;
++	}
++
++	return false;
++}
++
+ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
+ 			       struct queue_limits *limits)
+ {
+@@ -1909,6 +1939,15 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
+ 
+ 	dm_table_verify_integrity(t);
+ 
++	/*
++	 * Some devices don't use blk_integrity but still want stable pages
++	 * because they do their own checksumming.
++	 */
++	if (dm_table_requires_stable_pages(t))
++		q->backing_dev_info->capabilities |= BDI_CAP_STABLE_WRITES;
++	else
++		q->backing_dev_info->capabilities &= ~BDI_CAP_STABLE_WRITES;
++
+ 	/*
+ 	 * Determine whether or not this queue's I/O timings contribute
+ 	 * to the entropy pool, Only request-based targets use this.
+diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
+index e83b63608262..254c26eb963a 100644
+--- a/drivers/md/dm-thin.c
++++ b/drivers/md/dm-thin.c
+@@ -3283,6 +3283,13 @@ static int pool_ctr(struct dm_target *ti, unsigned argc, char **argv)
+ 	as.argc = argc;
+ 	as.argv = argv;
+ 
++	/* make sure metadata and data are different devices */
++	if (!strcmp(argv[0], argv[1])) {
++		ti->error = "Error setting metadata or data device";
++		r = -EINVAL;
++		goto out_unlock;
++	}
++
+ 	/*
+ 	 * Set default pool features.
+ 	 */
+@@ -4167,6 +4174,12 @@ static int thin_ctr(struct dm_target *ti, unsigned argc, char **argv)
+ 	tc->sort_bio_list = RB_ROOT;
+ 
+ 	if (argc == 3) {
++		if (!strcmp(argv[0], argv[2])) {
++			ti->error = "Error setting origin device";
++			r = -EINVAL;
++			goto bad_origin_dev;
++		}
++
+ 		r = dm_get_device(ti, argv[2], FMODE_READ, &origin_dev);
+ 		if (r) {
+ 			ti->error = "Error opening origin device";
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 515e6af9bed2..4986eea520b6 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -963,6 +963,15 @@ static void dec_pending(struct dm_io *io, blk_status_t error)
+ 	}
+ }
+ 
++void disable_discard(struct mapped_device *md)
++{
++	struct queue_limits *limits = dm_get_queue_limits(md);
++
++	/* device doesn't really support DISCARD, disable it */
++	limits->max_discard_sectors = 0;
++	blk_queue_flag_clear(QUEUE_FLAG_DISCARD, md->queue);
++}
++
+ void disable_write_same(struct mapped_device *md)
+ {
+ 	struct queue_limits *limits = dm_get_queue_limits(md);
+@@ -988,11 +997,14 @@ static void clone_endio(struct bio *bio)
+ 	dm_endio_fn endio = tio->ti->type->end_io;
+ 
+ 	if (unlikely(error == BLK_STS_TARGET) && md->type != DM_TYPE_NVME_BIO_BASED) {
+-		if (bio_op(bio) == REQ_OP_WRITE_SAME &&
+-		    !bio->bi_disk->queue->limits.max_write_same_sectors)
++		if (bio_op(bio) == REQ_OP_DISCARD &&
++		    !bio->bi_disk->queue->limits.max_discard_sectors)
++			disable_discard(md);
++		else if (bio_op(bio) == REQ_OP_WRITE_SAME &&
++			 !bio->bi_disk->queue->limits.max_write_same_sectors)
+ 			disable_write_same(md);
+-		if (bio_op(bio) == REQ_OP_WRITE_ZEROES &&
+-		    !bio->bi_disk->queue->limits.max_write_zeroes_sectors)
++		else if (bio_op(bio) == REQ_OP_WRITE_ZEROES &&
++			 !bio->bi_disk->queue->limits.max_write_zeroes_sectors)
+ 			disable_write_zeroes(md);
+ 	}
+ 
+@@ -1060,15 +1072,7 @@ int dm_set_target_max_io_len(struct dm_target *ti, sector_t len)
+ 		return -EINVAL;
+ 	}
+ 
+-	/*
+-	 * BIO based queue uses its own splitting. When multipage bvecs
+-	 * is switched on, size of the incoming bio may be too big to
+-	 * be handled in some targets, such as crypt.
+-	 *
+-	 * When these targets are ready for the big bio, we can remove
+-	 * the limit.
+-	 */
+-	ti->max_io_len = min_t(uint32_t, len, BIO_MAX_PAGES * PAGE_SIZE);
++	ti->max_io_len = (uint32_t) len;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index abb5d382f64d..3b6880dd648d 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -3939,6 +3939,8 @@ static int raid10_run(struct mddev *mddev)
+ 		set_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
+ 		mddev->sync_thread = md_register_thread(md_do_sync, mddev,
+ 							"reshape");
++		if (!mddev->sync_thread)
++			goto out_free_conf;
+ 	}
+ 
+ 	return 0;
+@@ -4670,7 +4672,6 @@ read_more:
+ 	atomic_inc(&r10_bio->remaining);
+ 	read_bio->bi_next = NULL;
+ 	generic_make_request(read_bio);
+-	sector_nr += nr_sectors;
+ 	sectors_done += nr_sectors;
+ 	if (sector_nr <= last)
+ 		goto read_more;
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index cecea901ab8c..5b68f2d0da60 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -7402,6 +7402,8 @@ static int raid5_run(struct mddev *mddev)
+ 		set_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
+ 		mddev->sync_thread = md_register_thread(md_do_sync, mddev,
+ 							"reshape");
++		if (!mddev->sync_thread)
++			goto abort;
+ 	}
+ 
+ 	/* Ok, everything is just fine now */
+diff --git a/drivers/media/dvb-frontends/lgdt330x.c b/drivers/media/dvb-frontends/lgdt330x.c
+index 96807e134886..8abb1a510a81 100644
+--- a/drivers/media/dvb-frontends/lgdt330x.c
++++ b/drivers/media/dvb-frontends/lgdt330x.c
+@@ -783,7 +783,7 @@ static int lgdt3303_read_status(struct dvb_frontend *fe,
+ 
+ 		if ((buf[0] & 0x02) == 0x00)
+ 			*status |= FE_HAS_SYNC;
+-		if ((buf[0] & 0xfd) == 0x01)
++		if ((buf[0] & 0x01) == 0x01)
+ 			*status |= FE_HAS_VITERBI | FE_HAS_LOCK;
+ 		break;
+ 	default:
+diff --git a/drivers/media/i2c/cx25840/cx25840-core.c b/drivers/media/i2c/cx25840/cx25840-core.c
+index b168bf3635b6..8b0b8b5aa531 100644
+--- a/drivers/media/i2c/cx25840/cx25840-core.c
++++ b/drivers/media/i2c/cx25840/cx25840-core.c
+@@ -5216,8 +5216,9 @@ static int cx25840_probe(struct i2c_client *client,
+ 	 * those extra inputs. So, let's add it only when needed.
+ 	 */
+ 	state->pads[CX25840_PAD_INPUT].flags = MEDIA_PAD_FL_SINK;
++	state->pads[CX25840_PAD_INPUT].sig_type = PAD_SIGNAL_ANALOG;
+ 	state->pads[CX25840_PAD_VID_OUT].flags = MEDIA_PAD_FL_SOURCE;
+-	state->pads[CX25840_PAD_VBI_OUT].flags = MEDIA_PAD_FL_SOURCE;
++	state->pads[CX25840_PAD_VID_OUT].sig_type = PAD_SIGNAL_DV;
+ 	sd->entity.function = MEDIA_ENT_F_ATV_DECODER;
+ 
+ 	ret = media_entity_pads_init(&sd->entity, ARRAY_SIZE(state->pads),
+diff --git a/drivers/media/i2c/cx25840/cx25840-core.h b/drivers/media/i2c/cx25840/cx25840-core.h
+index c323b1af1f83..9efefa15d090 100644
+--- a/drivers/media/i2c/cx25840/cx25840-core.h
++++ b/drivers/media/i2c/cx25840/cx25840-core.h
+@@ -40,7 +40,6 @@ enum cx25840_model {
+ enum cx25840_media_pads {
+ 	CX25840_PAD_INPUT,
+ 	CX25840_PAD_VID_OUT,
+-	CX25840_PAD_VBI_OUT,
+ 
+ 	CX25840_NUM_PADS
+ };
+diff --git a/drivers/media/i2c/mt9m111.c b/drivers/media/i2c/mt9m111.c
+index d639b9bcf64a..7a759b4b88cf 100644
+--- a/drivers/media/i2c/mt9m111.c
++++ b/drivers/media/i2c/mt9m111.c
+@@ -1273,6 +1273,8 @@ static int mt9m111_probe(struct i2c_client *client,
+ 	mt9m111->rect.top	= MT9M111_MIN_DARK_ROWS;
+ 	mt9m111->rect.width	= MT9M111_MAX_WIDTH;
+ 	mt9m111->rect.height	= MT9M111_MAX_HEIGHT;
++	mt9m111->width		= mt9m111->rect.width;
++	mt9m111->height		= mt9m111->rect.height;
+ 	mt9m111->fmt		= &mt9m111_colour_fmts[0];
+ 	mt9m111->lastpage	= -1;
+ 	mutex_init(&mt9m111->power_lock);
+diff --git a/drivers/media/i2c/ov5640.c b/drivers/media/i2c/ov5640.c
+index bef3f3aae0ed..9f8fc1ad9b1a 100644
+--- a/drivers/media/i2c/ov5640.c
++++ b/drivers/media/i2c/ov5640.c
+@@ -1893,7 +1893,7 @@ static void ov5640_reset(struct ov5640_dev *sensor)
+ 	usleep_range(1000, 2000);
+ 
+ 	gpiod_set_value_cansleep(sensor->reset_gpio, 0);
+-	usleep_range(5000, 10000);
++	usleep_range(20000, 25000);
+ }
+ 
+ static int ov5640_set_power_on(struct ov5640_dev *sensor)
+diff --git a/drivers/media/i2c/ov7740.c b/drivers/media/i2c/ov7740.c
+index 177688afd9a6..8835b831cdc0 100644
+--- a/drivers/media/i2c/ov7740.c
++++ b/drivers/media/i2c/ov7740.c
+@@ -1101,6 +1101,9 @@ static int ov7740_probe(struct i2c_client *client,
+ 	if (ret)
+ 		return ret;
+ 
++	pm_runtime_set_active(&client->dev);
++	pm_runtime_enable(&client->dev);
++
+ 	ret = ov7740_detect(ov7740);
+ 	if (ret)
+ 		goto error_detect;
+@@ -1123,8 +1126,6 @@ static int ov7740_probe(struct i2c_client *client,
+ 	if (ret)
+ 		goto error_async_register;
+ 
+-	pm_runtime_set_active(&client->dev);
+-	pm_runtime_enable(&client->dev);
+ 	pm_runtime_idle(&client->dev);
+ 
+ 	return 0;
+@@ -1134,6 +1135,8 @@ error_async_register:
+ error_init_controls:
+ 	ov7740_free_controls(ov7740);
+ error_detect:
++	pm_runtime_disable(&client->dev);
++	pm_runtime_set_suspended(&client->dev);
+ 	ov7740_set_power(ov7740, 0);
+ 	media_entity_cleanup(&ov7740->subdev.entity);
+ 
+diff --git a/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c b/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c
+index 2a5d5002c27e..f761e4d8bf2a 100644
+--- a/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c
++++ b/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c
+@@ -702,7 +702,7 @@ end:
+ 	v4l2_m2m_buf_queue(ctx->fh.m2m_ctx, to_vb2_v4l2_buffer(vb));
+ }
+ 
+-static void *mtk_jpeg_buf_remove(struct mtk_jpeg_ctx *ctx,
++static struct vb2_v4l2_buffer *mtk_jpeg_buf_remove(struct mtk_jpeg_ctx *ctx,
+ 				 enum v4l2_buf_type type)
+ {
+ 	if (V4L2_TYPE_IS_OUTPUT(type))
+@@ -714,7 +714,7 @@ static void *mtk_jpeg_buf_remove(struct mtk_jpeg_ctx *ctx,
+ static int mtk_jpeg_start_streaming(struct vb2_queue *q, unsigned int count)
+ {
+ 	struct mtk_jpeg_ctx *ctx = vb2_get_drv_priv(q);
+-	struct vb2_buffer *vb;
++	struct vb2_v4l2_buffer *vb;
+ 	int ret = 0;
+ 
+ 	ret = pm_runtime_get_sync(ctx->jpeg->dev);
+@@ -724,14 +724,14 @@ static int mtk_jpeg_start_streaming(struct vb2_queue *q, unsigned int count)
+ 	return 0;
+ err:
+ 	while ((vb = mtk_jpeg_buf_remove(ctx, q->type)))
+-		v4l2_m2m_buf_done(to_vb2_v4l2_buffer(vb), VB2_BUF_STATE_QUEUED);
++		v4l2_m2m_buf_done(vb, VB2_BUF_STATE_QUEUED);
+ 	return ret;
+ }
+ 
+ static void mtk_jpeg_stop_streaming(struct vb2_queue *q)
+ {
+ 	struct mtk_jpeg_ctx *ctx = vb2_get_drv_priv(q);
+-	struct vb2_buffer *vb;
++	struct vb2_v4l2_buffer *vb;
+ 
+ 	/*
+ 	 * STREAMOFF is an acknowledgment for source change event.
+@@ -743,7 +743,7 @@ static void mtk_jpeg_stop_streaming(struct vb2_queue *q)
+ 		struct mtk_jpeg_src_buf *src_buf;
+ 
+ 		vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+-		src_buf = mtk_jpeg_vb2_to_srcbuf(vb);
++		src_buf = mtk_jpeg_vb2_to_srcbuf(&vb->vb2_buf);
+ 		mtk_jpeg_set_queue_data(ctx, &src_buf->dec_param);
+ 		ctx->state = MTK_JPEG_RUNNING;
+ 	} else if (V4L2_TYPE_IS_OUTPUT(q->type)) {
+@@ -751,7 +751,7 @@ static void mtk_jpeg_stop_streaming(struct vb2_queue *q)
+ 	}
+ 
+ 	while ((vb = mtk_jpeg_buf_remove(ctx, q->type)))
+-		v4l2_m2m_buf_done(to_vb2_v4l2_buffer(vb), VB2_BUF_STATE_ERROR);
++		v4l2_m2m_buf_done(vb, VB2_BUF_STATE_ERROR);
+ 
+ 	pm_runtime_put_sync(ctx->jpeg->dev);
+ }
+@@ -807,7 +807,7 @@ static void mtk_jpeg_device_run(void *priv)
+ {
+ 	struct mtk_jpeg_ctx *ctx = priv;
+ 	struct mtk_jpeg_dev *jpeg = ctx->jpeg;
+-	struct vb2_buffer *src_buf, *dst_buf;
++	struct vb2_v4l2_buffer *src_buf, *dst_buf;
+ 	enum vb2_buffer_state buf_state = VB2_BUF_STATE_ERROR;
+ 	unsigned long flags;
+ 	struct mtk_jpeg_src_buf *jpeg_src_buf;
+@@ -817,11 +817,11 @@ static void mtk_jpeg_device_run(void *priv)
+ 
+ 	src_buf = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+ 	dst_buf = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
+-	jpeg_src_buf = mtk_jpeg_vb2_to_srcbuf(src_buf);
++	jpeg_src_buf = mtk_jpeg_vb2_to_srcbuf(&src_buf->vb2_buf);
+ 
+ 	if (jpeg_src_buf->flags & MTK_JPEG_BUF_FLAGS_LAST_FRAME) {
+-		for (i = 0; i < dst_buf->num_planes; i++)
+-			vb2_set_plane_payload(dst_buf, i, 0);
++		for (i = 0; i < dst_buf->vb2_buf.num_planes; i++)
++			vb2_set_plane_payload(&dst_buf->vb2_buf, i, 0);
+ 		buf_state = VB2_BUF_STATE_DONE;
+ 		goto dec_end;
+ 	}
+@@ -833,8 +833,8 @@ static void mtk_jpeg_device_run(void *priv)
+ 		return;
+ 	}
+ 
+-	mtk_jpeg_set_dec_src(ctx, src_buf, &bs);
+-	if (mtk_jpeg_set_dec_dst(ctx, &jpeg_src_buf->dec_param, dst_buf, &fb))
++	mtk_jpeg_set_dec_src(ctx, &src_buf->vb2_buf, &bs);
++	if (mtk_jpeg_set_dec_dst(ctx, &jpeg_src_buf->dec_param, &dst_buf->vb2_buf, &fb))
+ 		goto dec_end;
+ 
+ 	spin_lock_irqsave(&jpeg->hw_lock, flags);
+@@ -849,8 +849,8 @@ static void mtk_jpeg_device_run(void *priv)
+ dec_end:
+ 	v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
+ 	v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
+-	v4l2_m2m_buf_done(to_vb2_v4l2_buffer(src_buf), buf_state);
+-	v4l2_m2m_buf_done(to_vb2_v4l2_buffer(dst_buf), buf_state);
++	v4l2_m2m_buf_done(src_buf, buf_state);
++	v4l2_m2m_buf_done(dst_buf, buf_state);
+ 	v4l2_m2m_job_finish(jpeg->m2m_dev, ctx->fh.m2m_ctx);
+ }
+ 
+@@ -921,7 +921,7 @@ static irqreturn_t mtk_jpeg_dec_irq(int irq, void *priv)
+ {
+ 	struct mtk_jpeg_dev *jpeg = priv;
+ 	struct mtk_jpeg_ctx *ctx;
+-	struct vb2_buffer *src_buf, *dst_buf;
++	struct vb2_v4l2_buffer *src_buf, *dst_buf;
+ 	struct mtk_jpeg_src_buf *jpeg_src_buf;
+ 	enum vb2_buffer_state buf_state = VB2_BUF_STATE_ERROR;
+ 	u32	dec_irq_ret;
+@@ -938,7 +938,7 @@ static irqreturn_t mtk_jpeg_dec_irq(int irq, void *priv)
+ 
+ 	src_buf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
+ 	dst_buf = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
+-	jpeg_src_buf = mtk_jpeg_vb2_to_srcbuf(src_buf);
++	jpeg_src_buf = mtk_jpeg_vb2_to_srcbuf(&src_buf->vb2_buf);
+ 
+ 	if (dec_irq_ret >= MTK_JPEG_DEC_RESULT_UNDERFLOW)
+ 		mtk_jpeg_dec_reset(jpeg->dec_reg_base);
+@@ -948,15 +948,15 @@ static irqreturn_t mtk_jpeg_dec_irq(int irq, void *priv)
+ 		goto dec_end;
+ 	}
+ 
+-	for (i = 0; i < dst_buf->num_planes; i++)
+-		vb2_set_plane_payload(dst_buf, i,
++	for (i = 0; i < dst_buf->vb2_buf.num_planes; i++)
++		vb2_set_plane_payload(&dst_buf->vb2_buf, i,
+ 				      jpeg_src_buf->dec_param.comp_size[i]);
+ 
+ 	buf_state = VB2_BUF_STATE_DONE;
+ 
+ dec_end:
+-	v4l2_m2m_buf_done(to_vb2_v4l2_buffer(src_buf), buf_state);
+-	v4l2_m2m_buf_done(to_vb2_v4l2_buffer(dst_buf), buf_state);
++	v4l2_m2m_buf_done(src_buf, buf_state);
++	v4l2_m2m_buf_done(dst_buf, buf_state);
+ 	v4l2_m2m_job_finish(jpeg->m2m_dev, ctx->fh.m2m_ctx);
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/media/platform/mx2_emmaprp.c b/drivers/media/platform/mx2_emmaprp.c
+index 27b078cf98e3..f60f499c596b 100644
+--- a/drivers/media/platform/mx2_emmaprp.c
++++ b/drivers/media/platform/mx2_emmaprp.c
+@@ -274,7 +274,7 @@ static void emmaprp_device_run(void *priv)
+ {
+ 	struct emmaprp_ctx *ctx = priv;
+ 	struct emmaprp_q_data *s_q_data, *d_q_data;
+-	struct vb2_buffer *src_buf, *dst_buf;
++	struct vb2_v4l2_buffer *src_buf, *dst_buf;
+ 	struct emmaprp_dev *pcdev = ctx->dev;
+ 	unsigned int s_width, s_height;
+ 	unsigned int d_width, d_height;
+@@ -294,8 +294,8 @@ static void emmaprp_device_run(void *priv)
+ 	d_height = d_q_data->height;
+ 	d_size = d_width * d_height;
+ 
+-	p_in = vb2_dma_contig_plane_dma_addr(src_buf, 0);
+-	p_out = vb2_dma_contig_plane_dma_addr(dst_buf, 0);
++	p_in = vb2_dma_contig_plane_dma_addr(&src_buf->vb2_buf, 0);
++	p_out = vb2_dma_contig_plane_dma_addr(&dst_buf->vb2_buf, 0);
+ 	if (!p_in || !p_out) {
+ 		v4l2_err(&pcdev->v4l2_dev,
+ 			 "Acquiring kernel pointers to buffers failed\n");
+diff --git a/drivers/media/platform/rcar-vin/rcar-core.c b/drivers/media/platform/rcar-vin/rcar-core.c
+index f0719ce24b97..aef8d8dab6ab 100644
+--- a/drivers/media/platform/rcar-vin/rcar-core.c
++++ b/drivers/media/platform/rcar-vin/rcar-core.c
+@@ -131,9 +131,13 @@ static int rvin_group_link_notify(struct media_link *link, u32 flags,
+ 	    !is_media_entity_v4l2_video_device(link->sink->entity))
+ 		return 0;
+ 
+-	/* If any entity is in use don't allow link changes. */
++	/*
++	 * Don't allow link changes if any entity in the graph is
++	 * streaming, modifying the CHSEL register fields can disrupt
++	 * running streams.
++	 */
+ 	media_device_for_each_entity(entity, &group->mdev)
+-		if (entity->use_count)
++		if (entity->stream_count)
+ 			return -EBUSY;
+ 
+ 	mutex_lock(&group->lock);
+diff --git a/drivers/media/platform/rockchip/rga/rga.c b/drivers/media/platform/rockchip/rga/rga.c
+index 5c653287185f..b096227a9722 100644
+--- a/drivers/media/platform/rockchip/rga/rga.c
++++ b/drivers/media/platform/rockchip/rga/rga.c
+@@ -43,7 +43,7 @@ static void device_run(void *prv)
+ {
+ 	struct rga_ctx *ctx = prv;
+ 	struct rockchip_rga *rga = ctx->rga;
+-	struct vb2_buffer *src, *dst;
++	struct vb2_v4l2_buffer *src, *dst;
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&rga->ctrl_lock, flags);
+@@ -53,8 +53,8 @@ static void device_run(void *prv)
+ 	src = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+ 	dst = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
+ 
+-	rga_buf_map(src);
+-	rga_buf_map(dst);
++	rga_buf_map(&src->vb2_buf);
++	rga_buf_map(&dst->vb2_buf);
+ 
+ 	rga_hw_start(rga);
+ 
+diff --git a/drivers/media/platform/s5p-g2d/g2d.c b/drivers/media/platform/s5p-g2d/g2d.c
+index 57ab1d1085d1..971c47165010 100644
+--- a/drivers/media/platform/s5p-g2d/g2d.c
++++ b/drivers/media/platform/s5p-g2d/g2d.c
+@@ -513,7 +513,7 @@ static void device_run(void *prv)
+ {
+ 	struct g2d_ctx *ctx = prv;
+ 	struct g2d_dev *dev = ctx->dev;
+-	struct vb2_buffer *src, *dst;
++	struct vb2_v4l2_buffer *src, *dst;
+ 	unsigned long flags;
+ 	u32 cmd = 0;
+ 
+@@ -528,10 +528,10 @@ static void device_run(void *prv)
+ 	spin_lock_irqsave(&dev->ctrl_lock, flags);
+ 
+ 	g2d_set_src_size(dev, &ctx->in);
+-	g2d_set_src_addr(dev, vb2_dma_contig_plane_dma_addr(src, 0));
++	g2d_set_src_addr(dev, vb2_dma_contig_plane_dma_addr(&src->vb2_buf, 0));
+ 
+ 	g2d_set_dst_size(dev, &ctx->out);
+-	g2d_set_dst_addr(dev, vb2_dma_contig_plane_dma_addr(dst, 0));
++	g2d_set_dst_addr(dev, vb2_dma_contig_plane_dma_addr(&dst->vb2_buf, 0));
+ 
+ 	g2d_set_rop4(dev, ctx->rop);
+ 	g2d_set_flip(dev, ctx->flip);
+diff --git a/drivers/media/platform/s5p-jpeg/jpeg-core.c b/drivers/media/platform/s5p-jpeg/jpeg-core.c
+index 3f9000b70385..370942b67d86 100644
+--- a/drivers/media/platform/s5p-jpeg/jpeg-core.c
++++ b/drivers/media/platform/s5p-jpeg/jpeg-core.c
+@@ -793,14 +793,14 @@ static void skip(struct s5p_jpeg_buffer *buf, long len);
+ static void exynos4_jpeg_parse_decode_h_tbl(struct s5p_jpeg_ctx *ctx)
+ {
+ 	struct s5p_jpeg *jpeg = ctx->jpeg;
+-	struct vb2_buffer *vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
++	struct vb2_v4l2_buffer *vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+ 	struct s5p_jpeg_buffer jpeg_buffer;
+ 	unsigned int word;
+ 	int c, x, components;
+ 
+ 	jpeg_buffer.size = 2; /* Ls */
+ 	jpeg_buffer.data =
+-		(unsigned long)vb2_plane_vaddr(vb, 0) + ctx->out_q.sos + 2;
++		(unsigned long)vb2_plane_vaddr(&vb->vb2_buf, 0) + ctx->out_q.sos + 2;
+ 	jpeg_buffer.curr = 0;
+ 
+ 	word = 0;
+@@ -830,14 +830,14 @@ static void exynos4_jpeg_parse_decode_h_tbl(struct s5p_jpeg_ctx *ctx)
+ static void exynos4_jpeg_parse_huff_tbl(struct s5p_jpeg_ctx *ctx)
+ {
+ 	struct s5p_jpeg *jpeg = ctx->jpeg;
+-	struct vb2_buffer *vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
++	struct vb2_v4l2_buffer *vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+ 	struct s5p_jpeg_buffer jpeg_buffer;
+ 	unsigned int word;
+ 	int c, i, n, j;
+ 
+ 	for (j = 0; j < ctx->out_q.dht.n; ++j) {
+ 		jpeg_buffer.size = ctx->out_q.dht.len[j];
+-		jpeg_buffer.data = (unsigned long)vb2_plane_vaddr(vb, 0) +
++		jpeg_buffer.data = (unsigned long)vb2_plane_vaddr(&vb->vb2_buf, 0) +
+ 				   ctx->out_q.dht.marker[j];
+ 		jpeg_buffer.curr = 0;
+ 
+@@ -889,13 +889,13 @@ static void exynos4_jpeg_parse_huff_tbl(struct s5p_jpeg_ctx *ctx)
+ static void exynos4_jpeg_parse_decode_q_tbl(struct s5p_jpeg_ctx *ctx)
+ {
+ 	struct s5p_jpeg *jpeg = ctx->jpeg;
+-	struct vb2_buffer *vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
++	struct vb2_v4l2_buffer *vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+ 	struct s5p_jpeg_buffer jpeg_buffer;
+ 	int c, x, components;
+ 
+ 	jpeg_buffer.size = ctx->out_q.sof_len;
+ 	jpeg_buffer.data =
+-		(unsigned long)vb2_plane_vaddr(vb, 0) + ctx->out_q.sof;
++		(unsigned long)vb2_plane_vaddr(&vb->vb2_buf, 0) + ctx->out_q.sof;
+ 	jpeg_buffer.curr = 0;
+ 
+ 	skip(&jpeg_buffer, 5); /* P, Y, X */
+@@ -920,14 +920,14 @@ static void exynos4_jpeg_parse_decode_q_tbl(struct s5p_jpeg_ctx *ctx)
+ static void exynos4_jpeg_parse_q_tbl(struct s5p_jpeg_ctx *ctx)
+ {
+ 	struct s5p_jpeg *jpeg = ctx->jpeg;
+-	struct vb2_buffer *vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
++	struct vb2_v4l2_buffer *vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+ 	struct s5p_jpeg_buffer jpeg_buffer;
+ 	unsigned int word;
+ 	int c, i, j;
+ 
+ 	for (j = 0; j < ctx->out_q.dqt.n; ++j) {
+ 		jpeg_buffer.size = ctx->out_q.dqt.len[j];
+-		jpeg_buffer.data = (unsigned long)vb2_plane_vaddr(vb, 0) +
++		jpeg_buffer.data = (unsigned long)vb2_plane_vaddr(&vb->vb2_buf, 0) +
+ 				   ctx->out_q.dqt.marker[j];
+ 		jpeg_buffer.curr = 0;
+ 
+@@ -1293,13 +1293,16 @@ static int s5p_jpeg_querycap(struct file *file, void *priv,
+ 	return 0;
+ }
+ 
+-static int enum_fmt(struct s5p_jpeg_fmt *sjpeg_formats, int n,
++static int enum_fmt(struct s5p_jpeg_ctx *ctx,
++		    struct s5p_jpeg_fmt *sjpeg_formats, int n,
+ 		    struct v4l2_fmtdesc *f, u32 type)
+ {
+ 	int i, num = 0;
++	unsigned int fmt_ver_flag = ctx->jpeg->variant->fmt_ver_flag;
+ 
+ 	for (i = 0; i < n; ++i) {
+-		if (sjpeg_formats[i].flags & type) {
++		if (sjpeg_formats[i].flags & type &&
++		    sjpeg_formats[i].flags & fmt_ver_flag) {
+ 			/* index-th format of type type found ? */
+ 			if (num == f->index)
+ 				break;
+@@ -1326,11 +1329,11 @@ static int s5p_jpeg_enum_fmt_vid_cap(struct file *file, void *priv,
+ 	struct s5p_jpeg_ctx *ctx = fh_to_ctx(priv);
+ 
+ 	if (ctx->mode == S5P_JPEG_ENCODE)
+-		return enum_fmt(sjpeg_formats, SJPEG_NUM_FORMATS, f,
++		return enum_fmt(ctx, sjpeg_formats, SJPEG_NUM_FORMATS, f,
+ 				SJPEG_FMT_FLAG_ENC_CAPTURE);
+ 
+-	return enum_fmt(sjpeg_formats, SJPEG_NUM_FORMATS, f,
+-					SJPEG_FMT_FLAG_DEC_CAPTURE);
++	return enum_fmt(ctx, sjpeg_formats, SJPEG_NUM_FORMATS, f,
++			SJPEG_FMT_FLAG_DEC_CAPTURE);
+ }
+ 
+ static int s5p_jpeg_enum_fmt_vid_out(struct file *file, void *priv,
+@@ -1339,11 +1342,11 @@ static int s5p_jpeg_enum_fmt_vid_out(struct file *file, void *priv,
+ 	struct s5p_jpeg_ctx *ctx = fh_to_ctx(priv);
+ 
+ 	if (ctx->mode == S5P_JPEG_ENCODE)
+-		return enum_fmt(sjpeg_formats, SJPEG_NUM_FORMATS, f,
++		return enum_fmt(ctx, sjpeg_formats, SJPEG_NUM_FORMATS, f,
+ 				SJPEG_FMT_FLAG_ENC_OUTPUT);
+ 
+-	return enum_fmt(sjpeg_formats, SJPEG_NUM_FORMATS, f,
+-					SJPEG_FMT_FLAG_DEC_OUTPUT);
++	return enum_fmt(ctx, sjpeg_formats, SJPEG_NUM_FORMATS, f,
++			SJPEG_FMT_FLAG_DEC_OUTPUT);
+ }
+ 
+ static struct s5p_jpeg_q_data *get_q_data(struct s5p_jpeg_ctx *ctx,
+@@ -2072,15 +2075,15 @@ static void s5p_jpeg_device_run(void *priv)
+ {
+ 	struct s5p_jpeg_ctx *ctx = priv;
+ 	struct s5p_jpeg *jpeg = ctx->jpeg;
+-	struct vb2_buffer *src_buf, *dst_buf;
++	struct vb2_v4l2_buffer *src_buf, *dst_buf;
+ 	unsigned long src_addr, dst_addr, flags;
+ 
+ 	spin_lock_irqsave(&ctx->jpeg->slock, flags);
+ 
+ 	src_buf = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+ 	dst_buf = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
+-	src_addr = vb2_dma_contig_plane_dma_addr(src_buf, 0);
+-	dst_addr = vb2_dma_contig_plane_dma_addr(dst_buf, 0);
++	src_addr = vb2_dma_contig_plane_dma_addr(&src_buf->vb2_buf, 0);
++	dst_addr = vb2_dma_contig_plane_dma_addr(&dst_buf->vb2_buf, 0);
+ 
+ 	s5p_jpeg_reset(jpeg->regs);
+ 	s5p_jpeg_poweron(jpeg->regs);
+@@ -2153,7 +2156,7 @@ static void exynos4_jpeg_set_img_addr(struct s5p_jpeg_ctx *ctx)
+ {
+ 	struct s5p_jpeg *jpeg = ctx->jpeg;
+ 	struct s5p_jpeg_fmt *fmt;
+-	struct vb2_buffer *vb;
++	struct vb2_v4l2_buffer *vb;
+ 	struct s5p_jpeg_addr jpeg_addr = {};
+ 	u32 pix_size, padding_bytes = 0;
+ 
+@@ -2172,7 +2175,7 @@ static void exynos4_jpeg_set_img_addr(struct s5p_jpeg_ctx *ctx)
+ 		vb = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
+ 	}
+ 
+-	jpeg_addr.y = vb2_dma_contig_plane_dma_addr(vb, 0);
++	jpeg_addr.y = vb2_dma_contig_plane_dma_addr(&vb->vb2_buf, 0);
+ 
+ 	if (fmt->colplanes == 2) {
+ 		jpeg_addr.cb = jpeg_addr.y + pix_size - padding_bytes;
+@@ -2190,7 +2193,7 @@ static void exynos4_jpeg_set_img_addr(struct s5p_jpeg_ctx *ctx)
+ static void exynos4_jpeg_set_jpeg_addr(struct s5p_jpeg_ctx *ctx)
+ {
+ 	struct s5p_jpeg *jpeg = ctx->jpeg;
+-	struct vb2_buffer *vb;
++	struct vb2_v4l2_buffer *vb;
+ 	unsigned int jpeg_addr = 0;
+ 
+ 	if (ctx->mode == S5P_JPEG_ENCODE)
+@@ -2198,7 +2201,7 @@ static void exynos4_jpeg_set_jpeg_addr(struct s5p_jpeg_ctx *ctx)
+ 	else
+ 		vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+ 
+-	jpeg_addr = vb2_dma_contig_plane_dma_addr(vb, 0);
++	jpeg_addr = vb2_dma_contig_plane_dma_addr(&vb->vb2_buf, 0);
+ 	if (jpeg->variant->version == SJPEG_EXYNOS5433 &&
+ 	    ctx->mode == S5P_JPEG_DECODE)
+ 		jpeg_addr += ctx->out_q.sos;
+@@ -2314,7 +2317,7 @@ static void exynos3250_jpeg_set_img_addr(struct s5p_jpeg_ctx *ctx)
+ {
+ 	struct s5p_jpeg *jpeg = ctx->jpeg;
+ 	struct s5p_jpeg_fmt *fmt;
+-	struct vb2_buffer *vb;
++	struct vb2_v4l2_buffer *vb;
+ 	struct s5p_jpeg_addr jpeg_addr = {};
+ 	u32 pix_size;
+ 
+@@ -2328,7 +2331,7 @@ static void exynos3250_jpeg_set_img_addr(struct s5p_jpeg_ctx *ctx)
+ 		fmt = ctx->cap_q.fmt;
+ 	}
+ 
+-	jpeg_addr.y = vb2_dma_contig_plane_dma_addr(vb, 0);
++	jpeg_addr.y = vb2_dma_contig_plane_dma_addr(&vb->vb2_buf, 0);
+ 
+ 	if (fmt->colplanes == 2) {
+ 		jpeg_addr.cb = jpeg_addr.y + pix_size;
+@@ -2346,7 +2349,7 @@ static void exynos3250_jpeg_set_img_addr(struct s5p_jpeg_ctx *ctx)
+ static void exynos3250_jpeg_set_jpeg_addr(struct s5p_jpeg_ctx *ctx)
+ {
+ 	struct s5p_jpeg *jpeg = ctx->jpeg;
+-	struct vb2_buffer *vb;
++	struct vb2_v4l2_buffer *vb;
+ 	unsigned int jpeg_addr = 0;
+ 
+ 	if (ctx->mode == S5P_JPEG_ENCODE)
+@@ -2354,7 +2357,7 @@ static void exynos3250_jpeg_set_jpeg_addr(struct s5p_jpeg_ctx *ctx)
+ 	else
+ 		vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+ 
+-	jpeg_addr = vb2_dma_contig_plane_dma_addr(vb, 0);
++	jpeg_addr = vb2_dma_contig_plane_dma_addr(&vb->vb2_buf, 0);
+ 	exynos3250_jpeg_jpgadr(jpeg->regs, jpeg_addr);
+ }
+ 
+diff --git a/drivers/media/platform/sh_veu.c b/drivers/media/platform/sh_veu.c
+index 09ae64a0004c..d277cc674349 100644
+--- a/drivers/media/platform/sh_veu.c
++++ b/drivers/media/platform/sh_veu.c
+@@ -273,13 +273,13 @@ static void sh_veu_process(struct sh_veu_dev *veu,
+ static void sh_veu_device_run(void *priv)
+ {
+ 	struct sh_veu_dev *veu = priv;
+-	struct vb2_buffer *src_buf, *dst_buf;
++	struct vb2_v4l2_buffer *src_buf, *dst_buf;
+ 
+ 	src_buf = v4l2_m2m_next_src_buf(veu->m2m_ctx);
+ 	dst_buf = v4l2_m2m_next_dst_buf(veu->m2m_ctx);
+ 
+ 	if (src_buf && dst_buf)
+-		sh_veu_process(veu, src_buf, dst_buf);
++		sh_veu_process(veu, &src_buf->vb2_buf, &dst_buf->vb2_buf);
+ }
+ 
+ 		/* ========== video ioctls ========== */
+diff --git a/drivers/media/platform/sunxi/sun6i-csi/sun6i_csi.c b/drivers/media/platform/sunxi/sun6i-csi/sun6i_csi.c
+index 6950585edb5a..d16f54cdc3b0 100644
+--- a/drivers/media/platform/sunxi/sun6i-csi/sun6i_csi.c
++++ b/drivers/media/platform/sunxi/sun6i-csi/sun6i_csi.c
+@@ -793,7 +793,7 @@ static const struct regmap_config sun6i_csi_regmap_config = {
+ 	.reg_bits       = 32,
+ 	.reg_stride     = 4,
+ 	.val_bits       = 32,
+-	.max_register	= 0x1000,
++	.max_register	= 0x9c,
+ };
+ 
+ static int sun6i_csi_resource_request(struct sun6i_csi_dev *sdev,
+diff --git a/drivers/media/platform/vimc/Makefile b/drivers/media/platform/vimc/Makefile
+index 4b2e3de7856e..c4fc8e7d365a 100644
+--- a/drivers/media/platform/vimc/Makefile
++++ b/drivers/media/platform/vimc/Makefile
+@@ -5,6 +5,7 @@ vimc_common-objs := vimc-common.o
+ vimc_debayer-objs := vimc-debayer.o
+ vimc_scaler-objs := vimc-scaler.o
+ vimc_sensor-objs := vimc-sensor.o
++vimc_streamer-objs := vimc-streamer.o
+ 
+ obj-$(CONFIG_VIDEO_VIMC) += vimc.o vimc_capture.o vimc_common.o vimc-debayer.o \
+-				vimc_scaler.o vimc_sensor.o
++			    vimc_scaler.o vimc_sensor.o vimc_streamer.o
+diff --git a/drivers/media/platform/vimc/vimc-capture.c b/drivers/media/platform/vimc/vimc-capture.c
+index 3f7e9ed56633..80d7515ec420 100644
+--- a/drivers/media/platform/vimc/vimc-capture.c
++++ b/drivers/media/platform/vimc/vimc-capture.c
+@@ -24,6 +24,7 @@
+ #include <media/videobuf2-vmalloc.h>
+ 
+ #include "vimc-common.h"
++#include "vimc-streamer.h"
+ 
+ #define VIMC_CAP_DRV_NAME "vimc-capture"
+ 
+@@ -44,7 +45,7 @@ struct vimc_cap_device {
+ 	spinlock_t qlock;
+ 	struct mutex lock;
+ 	u32 sequence;
+-	struct media_pipeline pipe;
++	struct vimc_stream stream;
+ };
+ 
+ static const struct v4l2_pix_format fmt_default = {
+@@ -248,14 +249,13 @@ static int vimc_cap_start_streaming(struct vb2_queue *vq, unsigned int count)
+ 	vcap->sequence = 0;
+ 
+ 	/* Start the media pipeline */
+-	ret = media_pipeline_start(entity, &vcap->pipe);
++	ret = media_pipeline_start(entity, &vcap->stream.pipe);
+ 	if (ret) {
+ 		vimc_cap_return_all_buffers(vcap, VB2_BUF_STATE_QUEUED);
+ 		return ret;
+ 	}
+ 
+-	/* Enable streaming from the pipe */
+-	ret = vimc_pipeline_s_stream(&vcap->vdev.entity, 1);
++	ret = vimc_streamer_s_stream(&vcap->stream, &vcap->ved, 1);
+ 	if (ret) {
+ 		media_pipeline_stop(entity);
+ 		vimc_cap_return_all_buffers(vcap, VB2_BUF_STATE_QUEUED);
+@@ -273,8 +273,7 @@ static void vimc_cap_stop_streaming(struct vb2_queue *vq)
+ {
+ 	struct vimc_cap_device *vcap = vb2_get_drv_priv(vq);
+ 
+-	/* Disable streaming from the pipe */
+-	vimc_pipeline_s_stream(&vcap->vdev.entity, 0);
++	vimc_streamer_s_stream(&vcap->stream, &vcap->ved, 0);
+ 
+ 	/* Stop the media pipeline */
+ 	media_pipeline_stop(&vcap->vdev.entity);
+@@ -355,8 +354,8 @@ static void vimc_cap_comp_unbind(struct device *comp, struct device *master,
+ 	kfree(vcap);
+ }
+ 
+-static void vimc_cap_process_frame(struct vimc_ent_device *ved,
+-				   struct media_pad *sink, const void *frame)
++static void *vimc_cap_process_frame(struct vimc_ent_device *ved,
++				    const void *frame)
+ {
+ 	struct vimc_cap_device *vcap = container_of(ved, struct vimc_cap_device,
+ 						    ved);
+@@ -370,7 +369,7 @@ static void vimc_cap_process_frame(struct vimc_ent_device *ved,
+ 					    typeof(*vimc_buf), list);
+ 	if (!vimc_buf) {
+ 		spin_unlock(&vcap->qlock);
+-		return;
++		return ERR_PTR(-EAGAIN);
+ 	}
+ 
+ 	/* Remove this entry from the list */
+@@ -391,6 +390,7 @@ static void vimc_cap_process_frame(struct vimc_ent_device *ved,
+ 	vb2_set_plane_payload(&vimc_buf->vb2.vb2_buf, 0,
+ 			      vcap->format.sizeimage);
+ 	vb2_buffer_done(&vimc_buf->vb2.vb2_buf, VB2_BUF_STATE_DONE);
++	return NULL;
+ }
+ 
+ static int vimc_cap_comp_bind(struct device *comp, struct device *master,
+diff --git a/drivers/media/platform/vimc/vimc-common.c b/drivers/media/platform/vimc/vimc-common.c
+index 867e24dbd6b5..c1a74bb2df58 100644
+--- a/drivers/media/platform/vimc/vimc-common.c
++++ b/drivers/media/platform/vimc/vimc-common.c
+@@ -207,41 +207,6 @@ const struct vimc_pix_map *vimc_pix_map_by_pixelformat(u32 pixelformat)
+ }
+ EXPORT_SYMBOL_GPL(vimc_pix_map_by_pixelformat);
+ 
+-int vimc_propagate_frame(struct media_pad *src, const void *frame)
+-{
+-	struct media_link *link;
+-
+-	if (!(src->flags & MEDIA_PAD_FL_SOURCE))
+-		return -EINVAL;
+-
+-	/* Send this frame to all sink pads that are direct linked */
+-	list_for_each_entry(link, &src->entity->links, list) {
+-		if (link->source == src &&
+-		    (link->flags & MEDIA_LNK_FL_ENABLED)) {
+-			struct vimc_ent_device *ved = NULL;
+-			struct media_entity *entity = link->sink->entity;
+-
+-			if (is_media_entity_v4l2_subdev(entity)) {
+-				struct v4l2_subdev *sd =
+-					container_of(entity, struct v4l2_subdev,
+-						     entity);
+-				ved = v4l2_get_subdevdata(sd);
+-			} else if (is_media_entity_v4l2_video_device(entity)) {
+-				struct video_device *vdev =
+-					container_of(entity,
+-						     struct video_device,
+-						     entity);
+-				ved = video_get_drvdata(vdev);
+-			}
+-			if (ved && ved->process_frame)
+-				ved->process_frame(ved, link->sink, frame);
+-		}
+-	}
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL_GPL(vimc_propagate_frame);
+-
+ /* Helper function to allocate and initialize pads */
+ struct media_pad *vimc_pads_init(u16 num_pads, const unsigned long *pads_flag)
+ {
+diff --git a/drivers/media/platform/vimc/vimc-common.h b/drivers/media/platform/vimc/vimc-common.h
+index 2e9981b18166..6ed969d9efbb 100644
+--- a/drivers/media/platform/vimc/vimc-common.h
++++ b/drivers/media/platform/vimc/vimc-common.h
+@@ -113,23 +113,12 @@ struct vimc_pix_map {
+ struct vimc_ent_device {
+ 	struct media_entity *ent;
+ 	struct media_pad *pads;
+-	void (*process_frame)(struct vimc_ent_device *ved,
+-			      struct media_pad *sink, const void *frame);
++	void * (*process_frame)(struct vimc_ent_device *ved,
++				const void *frame);
+ 	void (*vdev_get_format)(struct vimc_ent_device *ved,
+ 			      struct v4l2_pix_format *fmt);
+ };
+ 
+-/**
+- * vimc_propagate_frame - propagate a frame through the topology
+- *
+- * @src:	the source pad where the frame is being originated
+- * @frame:	the frame to be propagated
+- *
+- * This function will call the process_frame callback from the vimc_ent_device
+- * struct of the nodes directly connected to the @src pad
+- */
+-int vimc_propagate_frame(struct media_pad *src, const void *frame);
+-
+ /**
+  * vimc_pads_init - initialize pads
+  *
+diff --git a/drivers/media/platform/vimc/vimc-debayer.c b/drivers/media/platform/vimc/vimc-debayer.c
+index 77887f66f323..7d77c63b99d2 100644
+--- a/drivers/media/platform/vimc/vimc-debayer.c
++++ b/drivers/media/platform/vimc/vimc-debayer.c
+@@ -321,7 +321,6 @@ static void vimc_deb_set_rgb_mbus_fmt_rgb888_1x24(struct vimc_deb_device *vdeb,
+ static int vimc_deb_s_stream(struct v4l2_subdev *sd, int enable)
+ {
+ 	struct vimc_deb_device *vdeb = v4l2_get_subdevdata(sd);
+-	int ret;
+ 
+ 	if (enable) {
+ 		const struct vimc_pix_map *vpix;
+@@ -351,22 +350,10 @@ static int vimc_deb_s_stream(struct v4l2_subdev *sd, int enable)
+ 		if (!vdeb->src_frame)
+ 			return -ENOMEM;
+ 
+-		/* Turn the stream on in the subdevices directly connected */
+-		ret = vimc_pipeline_s_stream(&vdeb->sd.entity, 1);
+-		if (ret) {
+-			vfree(vdeb->src_frame);
+-			vdeb->src_frame = NULL;
+-			return ret;
+-		}
+ 	} else {
+ 		if (!vdeb->src_frame)
+ 			return 0;
+ 
+-		/* Disable streaming from the pipe */
+-		ret = vimc_pipeline_s_stream(&vdeb->sd.entity, 0);
+-		if (ret)
+-			return ret;
+-
+ 		vfree(vdeb->src_frame);
+ 		vdeb->src_frame = NULL;
+ 	}
+@@ -480,9 +467,8 @@ static void vimc_deb_calc_rgb_sink(struct vimc_deb_device *vdeb,
+ 	}
+ }
+ 
+-static void vimc_deb_process_frame(struct vimc_ent_device *ved,
+-				   struct media_pad *sink,
+-				   const void *sink_frame)
++static void *vimc_deb_process_frame(struct vimc_ent_device *ved,
++				    const void *sink_frame)
+ {
+ 	struct vimc_deb_device *vdeb = container_of(ved, struct vimc_deb_device,
+ 						    ved);
+@@ -491,7 +477,7 @@ static void vimc_deb_process_frame(struct vimc_ent_device *ved,
+ 
+ 	/* If the stream in this node is not active, just return */
+ 	if (!vdeb->src_frame)
+-		return;
++		return ERR_PTR(-EINVAL);
+ 
+ 	for (i = 0; i < vdeb->sink_fmt.height; i++)
+ 		for (j = 0; j < vdeb->sink_fmt.width; j++) {
+@@ -499,12 +485,8 @@ static void vimc_deb_process_frame(struct vimc_ent_device *ved,
+ 			vdeb->set_rgb_src(vdeb, i, j, rgb);
+ 		}
+ 
+-	/* Propagate the frame through all source pads */
+-	for (i = 1; i < vdeb->sd.entity.num_pads; i++) {
+-		struct media_pad *pad = &vdeb->sd.entity.pads[i];
++	return vdeb->src_frame;
+ 
+-		vimc_propagate_frame(pad, vdeb->src_frame);
+-	}
+ }
+ 
+ static void vimc_deb_comp_unbind(struct device *comp, struct device *master,
+diff --git a/drivers/media/platform/vimc/vimc-scaler.c b/drivers/media/platform/vimc/vimc-scaler.c
+index b0952ee86296..39b2a73dfcc1 100644
+--- a/drivers/media/platform/vimc/vimc-scaler.c
++++ b/drivers/media/platform/vimc/vimc-scaler.c
+@@ -217,7 +217,6 @@ static const struct v4l2_subdev_pad_ops vimc_sca_pad_ops = {
+ static int vimc_sca_s_stream(struct v4l2_subdev *sd, int enable)
+ {
+ 	struct vimc_sca_device *vsca = v4l2_get_subdevdata(sd);
+-	int ret;
+ 
+ 	if (enable) {
+ 		const struct vimc_pix_map *vpix;
+@@ -245,22 +244,10 @@ static int vimc_sca_s_stream(struct v4l2_subdev *sd, int enable)
+ 		if (!vsca->src_frame)
+ 			return -ENOMEM;
+ 
+-		/* Turn the stream on in the subdevices directly connected */
+-		ret = vimc_pipeline_s_stream(&vsca->sd.entity, 1);
+-		if (ret) {
+-			vfree(vsca->src_frame);
+-			vsca->src_frame = NULL;
+-			return ret;
+-		}
+ 	} else {
+ 		if (!vsca->src_frame)
+ 			return 0;
+ 
+-		/* Disable streaming from the pipe */
+-		ret = vimc_pipeline_s_stream(&vsca->sd.entity, 0);
+-		if (ret)
+-			return ret;
+-
+ 		vfree(vsca->src_frame);
+ 		vsca->src_frame = NULL;
+ 	}
+@@ -346,26 +333,19 @@ static void vimc_sca_fill_src_frame(const struct vimc_sca_device *const vsca,
+ 			vimc_sca_scale_pix(vsca, i, j, sink_frame);
+ }
+ 
+-static void vimc_sca_process_frame(struct vimc_ent_device *ved,
+-				   struct media_pad *sink,
+-				   const void *sink_frame)
++static void *vimc_sca_process_frame(struct vimc_ent_device *ved,
++				    const void *sink_frame)
+ {
+ 	struct vimc_sca_device *vsca = container_of(ved, struct vimc_sca_device,
+ 						    ved);
+-	unsigned int i;
+ 
+ 	/* If the stream in this node is not active, just return */
+ 	if (!vsca->src_frame)
+-		return;
++		return ERR_PTR(-EINVAL);
+ 
+ 	vimc_sca_fill_src_frame(vsca, sink_frame);
+ 
+-	/* Propagate the frame through all source pads */
+-	for (i = 1; i < vsca->sd.entity.num_pads; i++) {
+-		struct media_pad *pad = &vsca->sd.entity.pads[i];
+-
+-		vimc_propagate_frame(pad, vsca->src_frame);
+-	}
++	return vsca->src_frame;
+ };
+ 
+ static void vimc_sca_comp_unbind(struct device *comp, struct device *master,
+diff --git a/drivers/media/platform/vimc/vimc-sensor.c b/drivers/media/platform/vimc/vimc-sensor.c
+index 32ca9c6172b1..93961a1e694f 100644
+--- a/drivers/media/platform/vimc/vimc-sensor.c
++++ b/drivers/media/platform/vimc/vimc-sensor.c
+@@ -16,8 +16,6 @@
+  */
+ 
+ #include <linux/component.h>
+-#include <linux/freezer.h>
+-#include <linux/kthread.h>
+ #include <linux/module.h>
+ #include <linux/mod_devicetable.h>
+ #include <linux/platform_device.h>
+@@ -201,38 +199,27 @@ static const struct v4l2_subdev_pad_ops vimc_sen_pad_ops = {
+ 	.set_fmt		= vimc_sen_set_fmt,
+ };
+ 
+-static int vimc_sen_tpg_thread(void *data)
++static void *vimc_sen_process_frame(struct vimc_ent_device *ved,
++				    const void *sink_frame)
+ {
+-	struct vimc_sen_device *vsen = data;
+-	unsigned int i;
+-
+-	set_freezable();
+-	set_current_state(TASK_UNINTERRUPTIBLE);
+-
+-	for (;;) {
+-		try_to_freeze();
+-		if (kthread_should_stop())
+-			break;
+-
+-		tpg_fill_plane_buffer(&vsen->tpg, 0, 0, vsen->frame);
++	struct vimc_sen_device *vsen = container_of(ved, struct vimc_sen_device,
++						    ved);
++	const struct vimc_pix_map *vpix;
++	unsigned int frame_size;
+ 
+-		/* Send the frame to all source pads */
+-		for (i = 0; i < vsen->sd.entity.num_pads; i++)
+-			vimc_propagate_frame(&vsen->sd.entity.pads[i],
+-					     vsen->frame);
++	/* Calculate the frame size */
++	vpix = vimc_pix_map_by_code(vsen->mbus_format.code);
++	frame_size = vsen->mbus_format.width * vpix->bpp *
++		     vsen->mbus_format.height;
+ 
+-		/* 60 frames per second */
+-		schedule_timeout(HZ/60);
+-	}
+-
+-	return 0;
++	tpg_fill_plane_buffer(&vsen->tpg, 0, 0, vsen->frame);
++	return vsen->frame;
+ }
+ 
+ static int vimc_sen_s_stream(struct v4l2_subdev *sd, int enable)
+ {
+ 	struct vimc_sen_device *vsen =
+ 				container_of(sd, struct vimc_sen_device, sd);
+-	int ret;
+ 
+ 	if (enable) {
+ 		const struct vimc_pix_map *vpix;
+@@ -258,26 +245,8 @@ static int vimc_sen_s_stream(struct v4l2_subdev *sd, int enable)
+ 		/* configure the test pattern generator */
+ 		vimc_sen_tpg_s_format(vsen);
+ 
+-		/* Initialize the image generator thread */
+-		vsen->kthread_sen = kthread_run(vimc_sen_tpg_thread, vsen,
+-					"%s-sen", vsen->sd.v4l2_dev->name);
+-		if (IS_ERR(vsen->kthread_sen)) {
+-			dev_err(vsen->dev, "%s: kernel_thread() failed\n",
+-				vsen->sd.name);
+-			vfree(vsen->frame);
+-			vsen->frame = NULL;
+-			return PTR_ERR(vsen->kthread_sen);
+-		}
+ 	} else {
+-		if (!vsen->kthread_sen)
+-			return 0;
+-
+-		/* Stop image generator */
+-		ret = kthread_stop(vsen->kthread_sen);
+-		if (ret)
+-			return ret;
+ 
+-		vsen->kthread_sen = NULL;
+ 		vfree(vsen->frame);
+ 		vsen->frame = NULL;
+ 		return 0;
+@@ -413,6 +382,7 @@ static int vimc_sen_comp_bind(struct device *comp, struct device *master,
+ 	if (ret)
+ 		goto err_free_hdl;
+ 
++	vsen->ved.process_frame = vimc_sen_process_frame;
+ 	dev_set_drvdata(comp, &vsen->ved);
+ 	vsen->dev = comp;
+ 
+diff --git a/drivers/media/platform/vimc/vimc-streamer.c b/drivers/media/platform/vimc/vimc-streamer.c
+new file mode 100644
+index 000000000000..fcc897fb247b
+--- /dev/null
++++ b/drivers/media/platform/vimc/vimc-streamer.c
+@@ -0,0 +1,188 @@
++// SPDX-License-Identifier: GPL-2.0+
++/*
++ * vimc-streamer.c Virtual Media Controller Driver
++ *
++ * Copyright (C) 2018 Lucas A. M. Magalhães <lucmaga@gmail.com>
++ *
++ */
++
++#include <linux/init.h>
++#include <linux/module.h>
++#include <linux/freezer.h>
++#include <linux/kthread.h>
++
++#include "vimc-streamer.h"
++
++/**
++ * vimc_get_source_entity - get the entity connected with the first sink pad
++ *
++ * @ent:	reference media_entity
++ *
++ * Helper function that returns the media entity containing the source pad
++ * linked with the first sink pad from the given media entity pad list.
++ */
++static struct media_entity *vimc_get_source_entity(struct media_entity *ent)
++{
++	struct media_pad *pad;
++	int i;
++
++	for (i = 0; i < ent->num_pads; i++) {
++		if (ent->pads[i].flags & MEDIA_PAD_FL_SOURCE)
++			continue;
++		pad = media_entity_remote_pad(&ent->pads[i]);
++		return pad ? pad->entity : NULL;
++	}
++	return NULL;
++}
++
++/*
++ * vimc_streamer_pipeline_terminate - Disable stream in all ved in stream
++ *
++ * @stream: the pointer to the stream structure with the pipeline to be
++ *	    disabled.
++ *
++ * Calls s_stream to disable the stream in each entity of the pipeline
++ *
++ */
++static void vimc_streamer_pipeline_terminate(struct vimc_stream *stream)
++{
++	struct media_entity *entity;
++	struct v4l2_subdev *sd;
++
++	while (stream->pipe_size) {
++		stream->pipe_size--;
++		entity = stream->ved_pipeline[stream->pipe_size]->ent;
++		entity = vimc_get_source_entity(entity);
++		stream->ved_pipeline[stream->pipe_size] = NULL;
++
++		if (!is_media_entity_v4l2_subdev(entity))
++			continue;
++
++		sd = media_entity_to_v4l2_subdev(entity);
++		v4l2_subdev_call(sd, video, s_stream, 0);
++	}
++}
++
++/*
++ * vimc_streamer_pipeline_init - initializes the stream structure
++ *
++ * @stream: the pointer to the stream structure to be initialized
++ * @ved:    the pointer to the vimc entity initializing the stream
++ *
++ * Initializes the stream structure. Walks through the entity graph to
++ * construct the pipeline used later on the streamer thread.
++ * Calls s_stream to enable stream in all entities of the pipeline.
++ */
++static int vimc_streamer_pipeline_init(struct vimc_stream *stream,
++				       struct vimc_ent_device *ved)
++{
++	struct media_entity *entity;
++	struct video_device *vdev;
++	struct v4l2_subdev *sd;
++	int ret = 0;
++
++	stream->pipe_size = 0;
++	while (stream->pipe_size < VIMC_STREAMER_PIPELINE_MAX_SIZE) {
++		if (!ved) {
++			vimc_streamer_pipeline_terminate(stream);
++			return -EINVAL;
++		}
++		stream->ved_pipeline[stream->pipe_size++] = ved;
++
++		entity = vimc_get_source_entity(ved->ent);
++		/* Check if the end of the pipeline was reached*/
++		if (!entity)
++			return 0;
++
++		if (is_media_entity_v4l2_subdev(entity)) {
++			sd = media_entity_to_v4l2_subdev(entity);
++			ret = v4l2_subdev_call(sd, video, s_stream, 1);
++			if (ret && ret != -ENOIOCTLCMD) {
++				vimc_streamer_pipeline_terminate(stream);
++				return ret;
++			}
++			ved = v4l2_get_subdevdata(sd);
++		} else {
++			vdev = container_of(entity,
++					    struct video_device,
++					    entity);
++			ved = video_get_drvdata(vdev);
++		}
++	}
++
++	vimc_streamer_pipeline_terminate(stream);
++	return -EINVAL;
++}
++
++static int vimc_streamer_thread(void *data)
++{
++	struct vimc_stream *stream = data;
++	int i;
++
++	set_freezable();
++	set_current_state(TASK_UNINTERRUPTIBLE);
++
++	for (;;) {
++		try_to_freeze();
++		if (kthread_should_stop())
++			break;
++
++		for (i = stream->pipe_size - 1; i >= 0; i--) {
++			stream->frame = stream->ved_pipeline[i]->process_frame(
++					stream->ved_pipeline[i],
++					stream->frame);
++			if (!stream->frame)
++				break;
++			if (IS_ERR(stream->frame))
++				break;
++		}
++		// wait for the next frame period (60 Hz)
++		schedule_timeout(HZ / 60);
++	}
++
++	return 0;
++}
++
++int vimc_streamer_s_stream(struct vimc_stream *stream,
++			   struct vimc_ent_device *ved,
++			   int enable)
++{
++	int ret;
++
++	if (!stream || !ved)
++		return -EINVAL;
++
++	if (enable) {
++		if (stream->kthread)
++			return 0;
++
++		ret = vimc_streamer_pipeline_init(stream, ved);
++		if (ret)
++			return ret;
++
++		stream->kthread = kthread_run(vimc_streamer_thread, stream,
++					      "vimc-streamer thread");
++
++		if (IS_ERR(stream->kthread))
++			return PTR_ERR(stream->kthread);
++
++	} else {
++		if (!stream->kthread)
++			return 0;
++
++		ret = kthread_stop(stream->kthread);
++		if (ret)
++			return ret;
++
++		stream->kthread = NULL;
++
++		vimc_streamer_pipeline_terminate(stream);
++	}
++
++	return 0;
++}
++EXPORT_SYMBOL_GPL(vimc_streamer_s_stream);
++
++MODULE_DESCRIPTION("Virtual Media Controller Driver (VIMC) Streamer");
++MODULE_AUTHOR("Lucas A. M. Magalhães <lucmaga@gmail.com>");
++MODULE_LICENSE("GPL");
+diff --git a/drivers/media/platform/vimc/vimc-streamer.h b/drivers/media/platform/vimc/vimc-streamer.h
+new file mode 100644
+index 000000000000..752af2e2d5a2
+--- /dev/null
++++ b/drivers/media/platform/vimc/vimc-streamer.h
+@@ -0,0 +1,38 @@
++/* SPDX-License-Identifier: GPL-2.0+ */
++/*
++ * vimc-streamer.h Virtual Media Controller Driver
++ *
++ * Copyright (C) 2018 Lucas A. M. Magalhães <lucmaga@gmail.com>
++ *
++ */
++
++#ifndef _VIMC_STREAMER_H_
++#define _VIMC_STREAMER_H_
++
++#include <media/media-device.h>
++
++#include "vimc-common.h"
++
++#define VIMC_STREAMER_PIPELINE_MAX_SIZE 16
++
++struct vimc_stream {
++	struct media_pipeline pipe;
++	struct vimc_ent_device *ved_pipeline[VIMC_STREAMER_PIPELINE_MAX_SIZE];
++	unsigned int pipe_size;
++	u8 *frame;
++	struct task_struct *kthread;
++};
++
++/**
++ * vimc_streamer_s_stream - start/stop the stream
++ *
++ * @stream:	the pointer to the stream to start or stop
++ * @ved:	The last entity of the streamer pipeline
++ * @enable:	a non-zero value starts the stream, zero stops it
++ *
++ */
++int vimc_streamer_s_stream(struct vimc_stream *stream,
++			   struct vimc_ent_device *ved,
++			   int enable);
++
++#endif  //_VIMC_STREAMER_H_
+diff --git a/drivers/media/rc/rc-main.c b/drivers/media/rc/rc-main.c
+index 66a174979b3c..81745644f720 100644
+--- a/drivers/media/rc/rc-main.c
++++ b/drivers/media/rc/rc-main.c
+@@ -274,6 +274,7 @@ static unsigned int ir_update_mapping(struct rc_dev *dev,
+ 				      unsigned int new_keycode)
+ {
+ 	int old_keycode = rc_map->scan[index].keycode;
++	int i;
+ 
+ 	/* Did the user wish to remove the mapping? */
+ 	if (new_keycode == KEY_RESERVED || new_keycode == KEY_UNKNOWN) {
+@@ -288,9 +289,20 @@ static unsigned int ir_update_mapping(struct rc_dev *dev,
+ 			old_keycode == KEY_RESERVED ? "New" : "Replacing",
+ 			rc_map->scan[index].scancode, new_keycode);
+ 		rc_map->scan[index].keycode = new_keycode;
++		__set_bit(new_keycode, dev->input_dev->keybit);
+ 	}
+ 
+ 	if (old_keycode != KEY_RESERVED) {
++		/* A previous mapping was updated... */
++		__clear_bit(old_keycode, dev->input_dev->keybit);
++		/* ... but another scancode might use the same keycode */
++		for (i = 0; i < rc_map->len; i++) {
++			if (rc_map->scan[i].keycode == old_keycode) {
++				__set_bit(old_keycode, dev->input_dev->keybit);
++				break;
++			}
++		}
++
+ 		/* Possibly shrink the keytable, failure is not a problem */
+ 		ir_resize_table(dev, rc_map, GFP_ATOMIC);
+ 	}
+@@ -1750,7 +1762,6 @@ static int rc_prepare_rx_device(struct rc_dev *dev)
+ 	set_bit(EV_REP, dev->input_dev->evbit);
+ 	set_bit(EV_MSC, dev->input_dev->evbit);
+ 	set_bit(MSC_SCAN, dev->input_dev->mscbit);
+-	bitmap_fill(dev->input_dev->keybit, KEY_CNT);
+ 
+ 	/* Pointer/mouse events */
+ 	set_bit(EV_REL, dev->input_dev->evbit);
+diff --git a/drivers/media/usb/uvc/uvc_ctrl.c b/drivers/media/usb/uvc/uvc_ctrl.c
+index d45415cbe6e7..14cff91b7aea 100644
+--- a/drivers/media/usb/uvc/uvc_ctrl.c
++++ b/drivers/media/usb/uvc/uvc_ctrl.c
+@@ -1212,7 +1212,7 @@ static void uvc_ctrl_fill_event(struct uvc_video_chain *chain,
+ 
+ 	__uvc_query_v4l2_ctrl(chain, ctrl, mapping, &v4l2_ctrl);
+ 
+-	memset(ev->reserved, 0, sizeof(ev->reserved));
++	memset(ev, 0, sizeof(*ev));
+ 	ev->type = V4L2_EVENT_CTRL;
+ 	ev->id = v4l2_ctrl.id;
+ 	ev->u.ctrl.value = value;
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index b62cbd800111..33a22c016456 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -1106,11 +1106,19 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
+ 			return -EINVAL;
+ 		}
+ 
+-		/* Make sure the terminal type MSB is not null, otherwise it
+-		 * could be confused with a unit.
++		/*
++		 * Reject invalid terminal types that would cause issues:
++		 *
++		 * - The high byte must be non-zero, otherwise it would be
++		 *   confused with a unit.
++		 *
++		 * - Bit 15 must be 0, as we use it internally as a terminal
++		 *   direction flag.
++		 *
++		 * Other unknown types are accepted.
+ 		 */
+ 		type = get_unaligned_le16(&buffer[4]);
+-		if ((type & 0xff00) == 0) {
++		if ((type & 0x7f00) == 0 || (type & 0x8000) != 0) {
+ 			uvc_trace(UVC_TRACE_DESCR, "device %d videocontrol "
+ 				"interface %d INPUT_TERMINAL %d has invalid "
+ 				"type 0x%04x, skipping\n", udev->devnum,
+diff --git a/drivers/media/usb/uvc/uvc_video.c b/drivers/media/usb/uvc/uvc_video.c
+index 84525ff04745..e314657a1843 100644
+--- a/drivers/media/usb/uvc/uvc_video.c
++++ b/drivers/media/usb/uvc/uvc_video.c
+@@ -676,6 +676,14 @@ void uvc_video_clock_update(struct uvc_streaming *stream,
+ 	if (!uvc_hw_timestamps_param)
+ 		return;
+ 
++	/*
++	 * We will get called from __vb2_queue_cancel() if there are buffers
++	 * done but not dequeued by the user, but the sample array has already
++	 * been released at that time. Just bail out in that case.
++	 */
++	if (!clock->samples)
++		return;
++
+ 	spin_lock_irqsave(&clock->lock, flags);
+ 
+ 	if (clock->count < clock->size)
+diff --git a/drivers/media/v4l2-core/v4l2-ctrls.c b/drivers/media/v4l2-core/v4l2-ctrls.c
+index 5e3806feb5d7..8a82427c4d54 100644
+--- a/drivers/media/v4l2-core/v4l2-ctrls.c
++++ b/drivers/media/v4l2-core/v4l2-ctrls.c
+@@ -1387,7 +1387,7 @@ static u32 user_flags(const struct v4l2_ctrl *ctrl)
+ 
+ static void fill_event(struct v4l2_event *ev, struct v4l2_ctrl *ctrl, u32 changes)
+ {
+-	memset(ev->reserved, 0, sizeof(ev->reserved));
++	memset(ev, 0, sizeof(*ev));
+ 	ev->type = V4L2_EVENT_CTRL;
+ 	ev->id = ctrl->id;
+ 	ev->u.ctrl.changes = changes;
+diff --git a/drivers/mfd/sm501.c b/drivers/mfd/sm501.c
+index a530972c5a7e..e0173bf4b0dc 100644
+--- a/drivers/mfd/sm501.c
++++ b/drivers/mfd/sm501.c
+@@ -1145,6 +1145,9 @@ static int sm501_register_gpio_i2c_instance(struct sm501_devdata *sm,
+ 	lookup = devm_kzalloc(&pdev->dev,
+ 			      sizeof(*lookup) + 3 * sizeof(struct gpiod_lookup),
+ 			      GFP_KERNEL);
++	if (!lookup)
++		return -ENOMEM;
++
+ 	lookup->dev_id = "i2c-gpio";
+ 	if (iic->pin_sda < 32)
+ 		lookup->table[0].chip_label = "SM501-LOW";
+diff --git a/drivers/misc/cxl/guest.c b/drivers/misc/cxl/guest.c
+index 5d28d9e454f5..08f4a512afad 100644
+--- a/drivers/misc/cxl/guest.c
++++ b/drivers/misc/cxl/guest.c
+@@ -267,6 +267,7 @@ static int guest_reset(struct cxl *adapter)
+ 	int i, rc;
+ 
+ 	pr_devel("Adapter reset request\n");
++	spin_lock(&adapter->afu_list_lock);
+ 	for (i = 0; i < adapter->slices; i++) {
+ 		if ((afu = adapter->afu[i])) {
+ 			pci_error_handlers(afu, CXL_ERROR_DETECTED_EVENT,
+@@ -283,6 +284,7 @@ static int guest_reset(struct cxl *adapter)
+ 			pci_error_handlers(afu, CXL_RESUME_EVENT, 0);
+ 		}
+ 	}
++	spin_unlock(&adapter->afu_list_lock);
+ 	return rc;
+ }
+ 
+diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
+index c79ba1c699ad..300531d6136f 100644
+--- a/drivers/misc/cxl/pci.c
++++ b/drivers/misc/cxl/pci.c
+@@ -1805,7 +1805,7 @@ static pci_ers_result_t cxl_vphb_error_detected(struct cxl_afu *afu,
+ 	/* There should only be one entry, but go through the list
+ 	 * anyway
+ 	 */
+-	if (afu->phb == NULL)
++	if (afu == NULL || afu->phb == NULL)
+ 		return result;
+ 
+ 	list_for_each_entry(afu_dev, &afu->phb->bus->devices, bus_list) {
+@@ -1832,7 +1832,8 @@ static pci_ers_result_t cxl_pci_error_detected(struct pci_dev *pdev,
+ {
+ 	struct cxl *adapter = pci_get_drvdata(pdev);
+ 	struct cxl_afu *afu;
+-	pci_ers_result_t result = PCI_ERS_RESULT_NEED_RESET, afu_result;
++	pci_ers_result_t result = PCI_ERS_RESULT_NEED_RESET;
++	pci_ers_result_t afu_result = PCI_ERS_RESULT_NEED_RESET;
+ 	int i;
+ 
+ 	/* At this point, we could still have an interrupt pending.
+@@ -1843,6 +1844,7 @@ static pci_ers_result_t cxl_pci_error_detected(struct pci_dev *pdev,
+ 
+ 	/* If we're permanently dead, give up. */
+ 	if (state == pci_channel_io_perm_failure) {
++		spin_lock(&adapter->afu_list_lock);
+ 		for (i = 0; i < adapter->slices; i++) {
+ 			afu = adapter->afu[i];
+ 			/*
+@@ -1851,6 +1853,7 @@ static pci_ers_result_t cxl_pci_error_detected(struct pci_dev *pdev,
+ 			 */
+ 			cxl_vphb_error_detected(afu, state);
+ 		}
++		spin_unlock(&adapter->afu_list_lock);
+ 		return PCI_ERS_RESULT_DISCONNECT;
+ 	}
+ 
+@@ -1932,11 +1935,17 @@ static pci_ers_result_t cxl_pci_error_detected(struct pci_dev *pdev,
+ 	 *     * In slot_reset, free the old resources and allocate new ones.
+ 	 *     * In resume, clear the flag to allow things to start.
+ 	 */
++
++	/* Make sure no one else changes the afu list */
++	spin_lock(&adapter->afu_list_lock);
++
+ 	for (i = 0; i < adapter->slices; i++) {
+ 		afu = adapter->afu[i];
+ 
+-		afu_result = cxl_vphb_error_detected(afu, state);
++		if (afu == NULL)
++			continue;
+ 
++		afu_result = cxl_vphb_error_detected(afu, state);
+ 		cxl_context_detach_all(afu);
+ 		cxl_ops->afu_deactivate_mode(afu, afu->current_mode);
+ 		pci_deconfigure_afu(afu);
+@@ -1948,6 +1957,7 @@ static pci_ers_result_t cxl_pci_error_detected(struct pci_dev *pdev,
+ 			 (result == PCI_ERS_RESULT_NEED_RESET))
+ 			result = PCI_ERS_RESULT_NONE;
+ 	}
++	spin_unlock(&adapter->afu_list_lock);
+ 
+ 	/* should take the context lock here */
+ 	if (cxl_adapter_context_lock(adapter) != 0)
+@@ -1980,14 +1990,18 @@ static pci_ers_result_t cxl_pci_slot_reset(struct pci_dev *pdev)
+ 	 */
+ 	cxl_adapter_context_unlock(adapter);
+ 
++	spin_lock(&adapter->afu_list_lock);
+ 	for (i = 0; i < adapter->slices; i++) {
+ 		afu = adapter->afu[i];
+ 
++		if (afu == NULL)
++			continue;
++
+ 		if (pci_configure_afu(afu, adapter, pdev))
+-			goto err;
++			goto err_unlock;
+ 
+ 		if (cxl_afu_select_best_mode(afu))
+-			goto err;
++			goto err_unlock;
+ 
+ 		if (afu->phb == NULL)
+ 			continue;
+@@ -1999,16 +2013,16 @@ static pci_ers_result_t cxl_pci_slot_reset(struct pci_dev *pdev)
+ 			ctx = cxl_get_context(afu_dev);
+ 
+ 			if (ctx && cxl_release_context(ctx))
+-				goto err;
++				goto err_unlock;
+ 
+ 			ctx = cxl_dev_context_init(afu_dev);
+ 			if (IS_ERR(ctx))
+-				goto err;
++				goto err_unlock;
+ 
+ 			afu_dev->dev.archdata.cxl_ctx = ctx;
+ 
+ 			if (cxl_ops->afu_check_and_enable(afu))
+-				goto err;
++				goto err_unlock;
+ 
+ 			afu_dev->error_state = pci_channel_io_normal;
+ 
+@@ -2029,8 +2043,13 @@ static pci_ers_result_t cxl_pci_slot_reset(struct pci_dev *pdev)
+ 				result = PCI_ERS_RESULT_DISCONNECT;
+ 		}
+ 	}
++
++	spin_unlock(&adapter->afu_list_lock);
+ 	return result;
+ 
++err_unlock:
++	spin_unlock(&adapter->afu_list_lock);
++
+ err:
+ 	/* All the bits that happen in both error_detected and cxl_remove
+ 	 * should be idempotent, so we don't need to worry about leaving a mix
+@@ -2051,10 +2070,11 @@ static void cxl_pci_resume(struct pci_dev *pdev)
+ 	 * This is not the place to be checking if everything came back up
+ 	 * properly, because there's no return value: do that in slot_reset.
+ 	 */
++	spin_lock(&adapter->afu_list_lock);
+ 	for (i = 0; i < adapter->slices; i++) {
+ 		afu = adapter->afu[i];
+ 
+-		if (afu->phb == NULL)
++		if (afu == NULL || afu->phb == NULL)
+ 			continue;
+ 
+ 		list_for_each_entry(afu_dev, &afu->phb->bus->devices, bus_list) {
+@@ -2063,6 +2083,7 @@ static void cxl_pci_resume(struct pci_dev *pdev)
+ 				afu_dev->driver->err_handler->resume(afu_dev);
+ 		}
+ 	}
++	spin_unlock(&adapter->afu_list_lock);
+ }
+ 
+ static const struct pci_error_handlers cxl_err_handler = {
+diff --git a/drivers/misc/mei/bus.c b/drivers/misc/mei/bus.c
+index fc3872fe7b25..c383322ec2ba 100644
+--- a/drivers/misc/mei/bus.c
++++ b/drivers/misc/mei/bus.c
+@@ -541,17 +541,9 @@ int mei_cldev_enable(struct mei_cl_device *cldev)
+ 		goto out;
+ 	}
+ 
+-	if (!mei_cl_bus_module_get(cldev)) {
+-		dev_err(&cldev->dev, "get hw module failed");
+-		ret = -ENODEV;
+-		goto out;
+-	}
+-
+ 	ret = mei_cl_connect(cl, cldev->me_cl, NULL);
+-	if (ret < 0) {
++	if (ret < 0)
+ 		dev_err(&cldev->dev, "cannot connect\n");
+-		mei_cl_bus_module_put(cldev);
+-	}
+ 
+ out:
+ 	mutex_unlock(&bus->device_lock);
+@@ -614,7 +606,6 @@ int mei_cldev_disable(struct mei_cl_device *cldev)
+ 	if (err < 0)
+ 		dev_err(bus->dev, "Could not disconnect from the ME client\n");
+ 
+-	mei_cl_bus_module_put(cldev);
+ out:
+ 	/* Flush queues and remove any pending read */
+ 	mei_cl_flush_queues(cl, NULL);
+@@ -725,9 +716,16 @@ static int mei_cl_device_probe(struct device *dev)
+ 	if (!id)
+ 		return -ENODEV;
+ 
++	if (!mei_cl_bus_module_get(cldev)) {
++		dev_err(&cldev->dev, "get hw module failed");
++		return -ENODEV;
++	}
++
+ 	ret = cldrv->probe(cldev, id);
+-	if (ret)
++	if (ret) {
++		mei_cl_bus_module_put(cldev);
+ 		return ret;
++	}
+ 
+ 	__module_get(THIS_MODULE);
+ 	return 0;
+@@ -755,6 +753,7 @@ static int mei_cl_device_remove(struct device *dev)
+ 
+ 	mei_cldev_unregister_callbacks(cldev);
+ 
++	mei_cl_bus_module_put(cldev);
+ 	module_put(THIS_MODULE);
+ 	dev->driver = NULL;
+ 	return ret;
+diff --git a/drivers/misc/mei/hbm.c b/drivers/misc/mei/hbm.c
+index 8f7616557c97..e6207f614816 100644
+--- a/drivers/misc/mei/hbm.c
++++ b/drivers/misc/mei/hbm.c
+@@ -1029,29 +1029,36 @@ static void mei_hbm_config_features(struct mei_device *dev)
+ 	    dev->version.minor_version >= HBM_MINOR_VERSION_PGI)
+ 		dev->hbm_f_pg_supported = 1;
+ 
++	dev->hbm_f_dc_supported = 0;
+ 	if (dev->version.major_version >= HBM_MAJOR_VERSION_DC)
+ 		dev->hbm_f_dc_supported = 1;
+ 
++	dev->hbm_f_ie_supported = 0;
+ 	if (dev->version.major_version >= HBM_MAJOR_VERSION_IE)
+ 		dev->hbm_f_ie_supported = 1;
+ 
+ 	/* disconnect on connect timeout instead of link reset */
++	dev->hbm_f_dot_supported = 0;
+ 	if (dev->version.major_version >= HBM_MAJOR_VERSION_DOT)
+ 		dev->hbm_f_dot_supported = 1;
+ 
+ 	/* Notification Event Support */
++	dev->hbm_f_ev_supported = 0;
+ 	if (dev->version.major_version >= HBM_MAJOR_VERSION_EV)
+ 		dev->hbm_f_ev_supported = 1;
+ 
+ 	/* Fixed Address Client Support */
++	dev->hbm_f_fa_supported = 0;
+ 	if (dev->version.major_version >= HBM_MAJOR_VERSION_FA)
+ 		dev->hbm_f_fa_supported = 1;
+ 
+ 	/* OS ver message Support */
++	dev->hbm_f_os_supported = 0;
+ 	if (dev->version.major_version >= HBM_MAJOR_VERSION_OS)
+ 		dev->hbm_f_os_supported = 1;
+ 
+ 	/* DMA Ring Support */
++	dev->hbm_f_dr_supported = 0;
+ 	if (dev->version.major_version > HBM_MAJOR_VERSION_DR ||
+ 	    (dev->version.major_version == HBM_MAJOR_VERSION_DR &&
+ 	     dev->version.minor_version >= HBM_MINOR_VERSION_DR))
+diff --git a/drivers/misc/vmw_balloon.c b/drivers/misc/vmw_balloon.c
+index f8240b87df22..f69acb5d4a50 100644
+--- a/drivers/misc/vmw_balloon.c
++++ b/drivers/misc/vmw_balloon.c
+@@ -1287,7 +1287,7 @@ static void vmballoon_reset(struct vmballoon *b)
+ 	vmballoon_pop(b);
+ 
+ 	if (vmballoon_send_start(b, VMW_BALLOON_CAPABILITIES))
+-		return;
++		goto unlock;
+ 
+ 	if ((b->capabilities & VMW_BALLOON_BATCHED_CMDS) != 0) {
+ 		if (vmballoon_init_batching(b)) {
+@@ -1298,7 +1298,7 @@ static void vmballoon_reset(struct vmballoon *b)
+ 			 * The guest will retry in one second.
+ 			 */
+ 			vmballoon_send_start(b, 0);
+-			return;
++			goto unlock;
+ 		}
+ 	} else if ((b->capabilities & VMW_BALLOON_BASIC_CMDS) != 0) {
+ 		vmballoon_deinit_batching(b);
+@@ -1314,6 +1314,7 @@ static void vmballoon_reset(struct vmballoon *b)
+ 	if (vmballoon_send_guest_id(b))
+ 		pr_err("failed to send guest ID to the host\n");
+ 
++unlock:
+ 	up_write(&b->conf_sem);
+ }
+ 
+diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
+index b27a1e620233..1e6b07c176dc 100644
+--- a/drivers/mmc/core/core.c
++++ b/drivers/mmc/core/core.c
+@@ -2381,9 +2381,9 @@ unsigned int mmc_calc_max_discard(struct mmc_card *card)
+ 		return card->pref_erase;
+ 
+ 	max_discard = mmc_do_calc_max_discard(card, MMC_ERASE_ARG);
+-	if (max_discard && mmc_can_trim(card)) {
++	if (mmc_can_trim(card)) {
+ 		max_trim = mmc_do_calc_max_discard(card, MMC_TRIM_ARG);
+-		if (max_trim < max_discard)
++		if (max_trim < max_discard || max_discard == 0)
+ 			max_discard = max_trim;
+ 	} else if (max_discard < card->erase_size) {
+ 		max_discard = 0;
+diff --git a/drivers/mmc/host/alcor.c b/drivers/mmc/host/alcor.c
+index c712b7deb3a9..7c8f203f9a24 100644
+--- a/drivers/mmc/host/alcor.c
++++ b/drivers/mmc/host/alcor.c
+@@ -48,7 +48,6 @@ struct alcor_sdmmc_host {
+ 	struct mmc_command *cmd;
+ 	struct mmc_data *data;
+ 	unsigned int dma_on:1;
+-	unsigned int early_data:1;
+ 
+ 	struct mutex cmd_mutex;
+ 
+@@ -144,8 +143,7 @@ static void alcor_data_set_dma(struct alcor_sdmmc_host *host)
+ 	host->sg_count--;
+ }
+ 
+-static void alcor_trigger_data_transfer(struct alcor_sdmmc_host *host,
+-					bool early)
++static void alcor_trigger_data_transfer(struct alcor_sdmmc_host *host)
+ {
+ 	struct alcor_pci_priv *priv = host->alcor_pci;
+ 	struct mmc_data *data = host->data;
+@@ -155,13 +153,6 @@ static void alcor_trigger_data_transfer(struct alcor_sdmmc_host *host,
+ 		ctrl |= AU6601_DATA_WRITE;
+ 
+ 	if (data->host_cookie == COOKIE_MAPPED) {
+-		if (host->early_data) {
+-			host->early_data = false;
+-			return;
+-		}
+-
+-		host->early_data = early;
+-
+ 		alcor_data_set_dma(host);
+ 		ctrl |= AU6601_DATA_DMA_MODE;
+ 		host->dma_on = 1;
+@@ -231,6 +222,7 @@ static void alcor_prepare_sg_miter(struct alcor_sdmmc_host *host)
+ static void alcor_prepare_data(struct alcor_sdmmc_host *host,
+ 			       struct mmc_command *cmd)
+ {
++	struct alcor_pci_priv *priv = host->alcor_pci;
+ 	struct mmc_data *data = cmd->data;
+ 
+ 	if (!data)
+@@ -248,7 +240,7 @@ static void alcor_prepare_data(struct alcor_sdmmc_host *host,
+ 	if (data->host_cookie != COOKIE_MAPPED)
+ 		alcor_prepare_sg_miter(host);
+ 
+-	alcor_trigger_data_transfer(host, true);
++	alcor_write8(priv, 0, AU6601_DATA_XFER_CTRL);
+ }
+ 
+ static void alcor_send_cmd(struct alcor_sdmmc_host *host,
+@@ -435,7 +427,7 @@ static int alcor_cmd_irq_done(struct alcor_sdmmc_host *host, u32 intmask)
+ 	if (!host->data)
+ 		return false;
+ 
+-	alcor_trigger_data_transfer(host, false);
++	alcor_trigger_data_transfer(host);
+ 	host->cmd = NULL;
+ 	return true;
+ }
+@@ -456,7 +448,7 @@ static void alcor_cmd_irq_thread(struct alcor_sdmmc_host *host, u32 intmask)
+ 	if (!host->data)
+ 		alcor_request_complete(host, 1);
+ 	else
+-		alcor_trigger_data_transfer(host, false);
++		alcor_trigger_data_transfer(host);
+ 	host->cmd = NULL;
+ }
+ 
+@@ -487,15 +479,9 @@ static int alcor_data_irq_done(struct alcor_sdmmc_host *host, u32 intmask)
+ 		break;
+ 	case AU6601_INT_READ_BUF_RDY:
+ 		alcor_trf_block_pio(host, true);
+-		if (!host->blocks)
+-			break;
+-		alcor_trigger_data_transfer(host, false);
+ 		return 1;
+ 	case AU6601_INT_WRITE_BUF_RDY:
+ 		alcor_trf_block_pio(host, false);
+-		if (!host->blocks)
+-			break;
+-		alcor_trigger_data_transfer(host, false);
+ 		return 1;
+ 	case AU6601_INT_DMA_END:
+ 		if (!host->sg_count)
+@@ -508,8 +494,14 @@ static int alcor_data_irq_done(struct alcor_sdmmc_host *host, u32 intmask)
+ 		break;
+ 	}
+ 
+-	if (intmask & AU6601_INT_DATA_END)
+-		return 0;
++	if (intmask & AU6601_INT_DATA_END) {
++		if (!host->dma_on && host->blocks) {
++			alcor_trigger_data_transfer(host);
++			return 1;
++		} else {
++			return 0;
++		}
++	}
+ 
+ 	return 1;
+ }
+@@ -1044,14 +1036,27 @@ static void alcor_init_mmc(struct alcor_sdmmc_host *host)
+ 	mmc->caps2 = MMC_CAP2_NO_SDIO;
+ 	mmc->ops = &alcor_sdc_ops;
+ 
+-	/* Hardware cannot do scatter lists */
++	/* The hardware does DMA data transfer of 4096 bytes to/from a single
++	 * buffer address. Scatterlists are not supported, but upon DMA
++	 * completion (signalled via IRQ), the original vendor driver does
++	 * then immediately set up another DMA transfer of the next 4096
++	 * bytes.
++	 *
++	 * This means that we need to handle the I/O in 4096 byte chunks.
++	 * Lacking a way to limit the sglist entries to 4096 bytes, we instead
++	 * impose that only one segment is provided, with maximum size 4096,
++	 * which also happens to be the minimum size. This means that the
++	 * single-entry sglist handled by this driver can be handed directly
++	 * to the hardware, nice and simple.
++	 *
++	 * Unfortunately though, that means we only do 4096 bytes I/O per
++	 * MMC command. A future improvement would be to make the driver
++	 * accept sg lists and entries of any size, and simply iterate
++	 * through them 4096 bytes at a time.
++	 */
+ 	mmc->max_segs = AU6601_MAX_DMA_SEGMENTS;
+ 	mmc->max_seg_size = AU6601_MAX_DMA_BLOCK_SIZE;
+-
+-	mmc->max_blk_size = mmc->max_seg_size;
+-	mmc->max_blk_count = mmc->max_segs;
+-
+-	mmc->max_req_size = mmc->max_seg_size * mmc->max_segs;
++	mmc->max_req_size = mmc->max_seg_size;
+ }
+ 
+ static int alcor_pci_sdmmc_drv_probe(struct platform_device *pdev)
+diff --git a/drivers/mmc/host/mxcmmc.c b/drivers/mmc/host/mxcmmc.c
+index 4d17032d15ee..7b530e5a86da 100644
+--- a/drivers/mmc/host/mxcmmc.c
++++ b/drivers/mmc/host/mxcmmc.c
+@@ -292,11 +292,8 @@ static void mxcmci_swap_buffers(struct mmc_data *data)
+ 	struct scatterlist *sg;
+ 	int i;
+ 
+-	for_each_sg(data->sg, sg, data->sg_len, i) {
+-		void *buf = kmap_atomic(sg_page(sg) + sg->offset);
+-		buffer_swap32(buf, sg->length);
+-		kunmap_atomic(buf);
+-	}
++	for_each_sg(data->sg, sg, data->sg_len, i)
++		buffer_swap32(sg_virt(sg), sg->length);
+ }
+ #else
+ static inline void mxcmci_swap_buffers(struct mmc_data *data) {}
+@@ -613,7 +610,6 @@ static int mxcmci_transfer_data(struct mxcmci_host *host)
+ {
+ 	struct mmc_data *data = host->req->data;
+ 	struct scatterlist *sg;
+-	void *buf;
+ 	int stat, i;
+ 
+ 	host->data = data;
+@@ -621,18 +617,14 @@ static int mxcmci_transfer_data(struct mxcmci_host *host)
+ 
+ 	if (data->flags & MMC_DATA_READ) {
+ 		for_each_sg(data->sg, sg, data->sg_len, i) {
+-			buf = kmap_atomic(sg_page(sg) + sg->offset);
+-			stat = mxcmci_pull(host, buf, sg->length);
+-			kunmap(buf);
++			stat = mxcmci_pull(host, sg_virt(sg), sg->length);
+ 			if (stat)
+ 				return stat;
+ 			host->datasize += sg->length;
+ 		}
+ 	} else {
+ 		for_each_sg(data->sg, sg, data->sg_len, i) {
+-			buf = kmap_atomic(sg_page(sg) + sg->offset);
+-			stat = mxcmci_push(host, buf, sg->length);
+-			kunmap(buf);
++			stat = mxcmci_push(host, sg_virt(sg), sg->length);
+ 			if (stat)
+ 				return stat;
+ 			host->datasize += sg->length;
+diff --git a/drivers/mmc/host/omap.c b/drivers/mmc/host/omap.c
+index c60a7625b1fa..b2873a2432b6 100644
+--- a/drivers/mmc/host/omap.c
++++ b/drivers/mmc/host/omap.c
+@@ -920,7 +920,7 @@ static inline void set_cmd_timeout(struct mmc_omap_host *host, struct mmc_reques
+ 	reg &= ~(1 << 5);
+ 	OMAP_MMC_WRITE(host, SDIO, reg);
+ 	/* Set maximum timeout */
+-	OMAP_MMC_WRITE(host, CTO, 0xff);
++	OMAP_MMC_WRITE(host, CTO, 0xfd);
+ }
+ 
+ static inline void set_data_timeout(struct mmc_omap_host *host, struct mmc_request *req)
+diff --git a/drivers/mmc/host/pxamci.c b/drivers/mmc/host/pxamci.c
+index 8779bbaa6b69..194a81888792 100644
+--- a/drivers/mmc/host/pxamci.c
++++ b/drivers/mmc/host/pxamci.c
+@@ -162,7 +162,7 @@ static void pxamci_dma_irq(void *param);
+ static void pxamci_setup_data(struct pxamci_host *host, struct mmc_data *data)
+ {
+ 	struct dma_async_tx_descriptor *tx;
+-	enum dma_data_direction direction;
++	enum dma_transfer_direction direction;
+ 	struct dma_slave_config	config;
+ 	struct dma_chan *chan;
+ 	unsigned int nob = data->blocks;
+diff --git a/drivers/mmc/host/renesas_sdhi_core.c b/drivers/mmc/host/renesas_sdhi_core.c
+index 31a351a20dc0..d9be22b310e6 100644
+--- a/drivers/mmc/host/renesas_sdhi_core.c
++++ b/drivers/mmc/host/renesas_sdhi_core.c
+@@ -634,6 +634,7 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+ 	struct renesas_sdhi *priv;
+ 	struct resource *res;
+ 	int irq, ret, i;
++	u16 ver;
+ 
+ 	of_data = of_device_get_match_data(&pdev->dev);
+ 
+@@ -723,6 +724,13 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+ 		host->ops.start_signal_voltage_switch =
+ 			renesas_sdhi_start_signal_voltage_switch;
+ 		host->sdcard_irq_setbit_mask = TMIO_STAT_ALWAYS_SET_27;
++
++		/* SDR and HS200/400 registers require HW reset */
++		if (of_data && of_data->scc_offset) {
++			priv->scc_ctl = host->ctl + of_data->scc_offset;
++			host->mmc->caps |= MMC_CAP_HW_RESET;
++			host->hw_reset = renesas_sdhi_hw_reset;
++		}
+ 	}
+ 
+ 	/* Orginally registers were 16 bit apart, could be 32 or 64 nowadays */
+@@ -759,12 +767,17 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+ 	if (ret)
+ 		goto efree;
+ 
++	ver = sd_ctrl_read16(host, CTL_VERSION);
++	/* GEN2_SDR104 is first known SDHI to use 32bit block count */
++	if (ver < SDHI_VER_GEN2_SDR104 && mmc_data->max_blk_count > U16_MAX)
++		mmc_data->max_blk_count = U16_MAX;
++
+ 	ret = tmio_mmc_host_probe(host);
+ 	if (ret < 0)
+ 		goto edisclk;
+ 
+ 	/* One Gen2 SDHI incarnation does NOT have a CBSY bit */
+-	if (sd_ctrl_read16(host, CTL_VERSION) == SDHI_VER_GEN2_SDR50)
++	if (ver == SDHI_VER_GEN2_SDR50)
+ 		mmc_data->flags &= ~TMIO_MMC_HAVE_CBSY;
+ 
+ 	/* Enable tuning iff we have an SCC and a supported mode */
+@@ -775,8 +788,6 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+ 		const struct renesas_sdhi_scc *taps = of_data->taps;
+ 		bool hit = false;
+ 
+-		host->mmc->caps |= MMC_CAP_HW_RESET;
+-
+ 		for (i = 0; i < of_data->taps_num; i++) {
+ 			if (taps[i].clk_rate == 0 ||
+ 			    taps[i].clk_rate == host->mmc->f_max) {
+@@ -789,12 +800,10 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+ 		if (!hit)
+ 			dev_warn(&host->pdev->dev, "Unknown clock rate for SDR104\n");
+ 
+-		priv->scc_ctl = host->ctl + of_data->scc_offset;
+ 		host->init_tuning = renesas_sdhi_init_tuning;
+ 		host->prepare_tuning = renesas_sdhi_prepare_tuning;
+ 		host->select_tuning = renesas_sdhi_select_tuning;
+ 		host->check_scc_error = renesas_sdhi_check_scc_error;
+-		host->hw_reset = renesas_sdhi_hw_reset;
+ 		host->prepare_hs400_tuning =
+ 			renesas_sdhi_prepare_hs400_tuning;
+ 		host->hs400_downgrade = renesas_sdhi_disable_scc;
+diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c
+index 00d41b312c79..a6f25c796aed 100644
+--- a/drivers/mmc/host/sdhci-esdhc-imx.c
++++ b/drivers/mmc/host/sdhci-esdhc-imx.c
+@@ -979,6 +979,7 @@ static void esdhc_set_uhs_signaling(struct sdhci_host *host, unsigned timing)
+ 	case MMC_TIMING_UHS_SDR25:
+ 	case MMC_TIMING_UHS_SDR50:
+ 	case MMC_TIMING_UHS_SDR104:
++	case MMC_TIMING_MMC_HS:
+ 	case MMC_TIMING_MMC_HS200:
+ 		writel(m, host->ioaddr + ESDHC_MIX_CTRL);
+ 		break;
+diff --git a/drivers/mmc/host/sdhci-omap.c b/drivers/mmc/host/sdhci-omap.c
+index c11c18a9aacb..9ec300ec94ba 100644
+--- a/drivers/mmc/host/sdhci-omap.c
++++ b/drivers/mmc/host/sdhci-omap.c
+@@ -797,6 +797,43 @@ void sdhci_omap_reset(struct sdhci_host *host, u8 mask)
+ 	sdhci_reset(host, mask);
+ }
+ 
++#define CMD_ERR_MASK (SDHCI_INT_CRC | SDHCI_INT_END_BIT | SDHCI_INT_INDEX |\
++		      SDHCI_INT_TIMEOUT)
++#define CMD_MASK (CMD_ERR_MASK | SDHCI_INT_RESPONSE)
++
++static u32 sdhci_omap_irq(struct sdhci_host *host, u32 intmask)
++{
++	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
++	struct sdhci_omap_host *omap_host = sdhci_pltfm_priv(pltfm_host);
++
++	if (omap_host->is_tuning && host->cmd && !host->data_early &&
++	    (intmask & CMD_ERR_MASK)) {
++
++		/*
++		 * Since we are not resetting data lines during tuning
++		 * operation, data error or data complete interrupts
++		 * might still arrive. Mark this request as a failure
++		 * but still wait for the data interrupt
++		 */
++		if (intmask & SDHCI_INT_TIMEOUT)
++			host->cmd->error = -ETIMEDOUT;
++		else
++			host->cmd->error = -EILSEQ;
++
++		host->cmd = NULL;
++
++		/*
++		 * Sometimes command error interrupts and command complete
++		 * interrupt will arrive together. Clear all command related
++		 * interrupts here.
++		 */
++		sdhci_writel(host, intmask & CMD_MASK, SDHCI_INT_STATUS);
++		intmask &= ~CMD_MASK;
++	}
++
++	return intmask;
++}
++
+ static struct sdhci_ops sdhci_omap_ops = {
+ 	.set_clock = sdhci_omap_set_clock,
+ 	.set_power = sdhci_omap_set_power,
+@@ -807,6 +844,7 @@ static struct sdhci_ops sdhci_omap_ops = {
+ 	.platform_send_init_74_clocks = sdhci_omap_init_74_clocks,
+ 	.reset = sdhci_omap_reset,
+ 	.set_uhs_signaling = sdhci_omap_set_uhs_signaling,
++	.irq = sdhci_omap_irq,
+ };
+ 
+ static int sdhci_omap_set_capabilities(struct sdhci_omap_host *omap_host)
+diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
+index 21bf8ac78380..390e896dadc7 100644
+--- a/drivers/net/Kconfig
++++ b/drivers/net/Kconfig
+@@ -213,8 +213,8 @@ config GENEVE
+ 
+ config GTP
+ 	tristate "GPRS Tunneling Protocol datapath (GTP-U)"
+-	depends on INET && NET_UDP_TUNNEL
+-	select NET_IP_TUNNEL
++	depends on INET
++	select NET_UDP_TUNNEL
+ 	---help---
+ 	  This allows one to create gtp virtual interfaces that provide
+ 	  the GPRS Tunneling Protocol datapath (GTP-U). This tunneling protocol
+diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c
+index ddc1f9ca8ebc..4543ac97f077 100644
+--- a/drivers/net/dsa/lantiq_gswip.c
++++ b/drivers/net/dsa/lantiq_gswip.c
+@@ -1069,10 +1069,10 @@ static int gswip_probe(struct platform_device *pdev)
+ 	version = gswip_switch_r(priv, GSWIP_VERSION);
+ 
+ 	/* bring up the mdio bus */
+-	gphy_fw_np = of_find_compatible_node(pdev->dev.of_node, NULL,
+-					     "lantiq,gphy-fw");
++	gphy_fw_np = of_get_compatible_child(dev->of_node, "lantiq,gphy-fw");
+ 	if (gphy_fw_np) {
+ 		err = gswip_gphy_fw_list(priv, gphy_fw_np, version);
++		of_node_put(gphy_fw_np);
+ 		if (err) {
+ 			dev_err(dev, "gphy fw probe failed\n");
+ 			return err;
+@@ -1080,13 +1080,12 @@ static int gswip_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	/* bring up the mdio bus */
+-	mdio_np = of_find_compatible_node(pdev->dev.of_node, NULL,
+-					  "lantiq,xrx200-mdio");
++	mdio_np = of_get_compatible_child(dev->of_node, "lantiq,xrx200-mdio");
+ 	if (mdio_np) {
+ 		err = gswip_mdio(priv, mdio_np);
+ 		if (err) {
+ 			dev_err(dev, "mdio probe failed\n");
+-			goto gphy_fw;
++			goto put_mdio_node;
+ 		}
+ 	}
+ 
+@@ -1099,7 +1098,7 @@ static int gswip_probe(struct platform_device *pdev)
+ 		dev_err(dev, "wrong CPU port defined, HW only supports port: %i",
+ 			priv->hw_info->cpu_port);
+ 		err = -EINVAL;
+-		goto mdio_bus;
++		goto disable_switch;
+ 	}
+ 
+ 	platform_set_drvdata(pdev, priv);
+@@ -1109,10 +1108,14 @@ static int gswip_probe(struct platform_device *pdev)
+ 		 (version & GSWIP_VERSION_MOD_MASK) >> GSWIP_VERSION_MOD_SHIFT);
+ 	return 0;
+ 
++disable_switch:
++	gswip_mdio_mask(priv, GSWIP_MDIO_GLOB_ENABLE, 0, GSWIP_MDIO_GLOB);
++	dsa_unregister_switch(priv->ds);
+ mdio_bus:
+ 	if (mdio_np)
+ 		mdiobus_unregister(priv->ds->slave_mii_bus);
+-gphy_fw:
++put_mdio_node:
++	of_node_put(mdio_np);
+ 	for (i = 0; i < priv->num_gphy_fw; i++)
+ 		gswip_gphy_fw_remove(priv, &priv->gphy_fw[i]);
+ 	return err;
+@@ -1131,8 +1134,10 @@ static int gswip_remove(struct platform_device *pdev)
+ 
+ 	dsa_unregister_switch(priv->ds);
+ 
+-	if (priv->ds->slave_mii_bus)
++	if (priv->ds->slave_mii_bus) {
+ 		mdiobus_unregister(priv->ds->slave_mii_bus);
++		of_node_put(priv->ds->slave_mii_bus->dev.of_node);
++	}
+ 
+ 	for (i = 0; i < priv->num_gphy_fw; i++)
+ 		gswip_gphy_fw_remove(priv, &priv->gphy_fw[i]);
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 7e3c00bd9532..6cba05a80892 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -442,12 +442,20 @@ out_mapping:
+ 
+ static int mv88e6xxx_g1_irq_setup(struct mv88e6xxx_chip *chip)
+ {
++	static struct lock_class_key lock_key;
++	static struct lock_class_key request_key;
+ 	int err;
+ 
+ 	err = mv88e6xxx_g1_irq_setup_common(chip);
+ 	if (err)
+ 		return err;
+ 
++	/* These lock classes tells lockdep that global 1 irqs are in
++	 * a different category than their parent GPIO, so it won't
++	 * report false recursion.
++	 */
++	irq_set_lockdep_class(chip->irq, &lock_key, &request_key);
++
+ 	err = request_threaded_irq(chip->irq, NULL,
+ 				   mv88e6xxx_g1_irq_thread_fn,
+ 				   IRQF_ONESHOT | IRQF_SHARED,
+@@ -559,6 +567,9 @@ static int mv88e6xxx_port_setup_mac(struct mv88e6xxx_chip *chip, int port,
+ 			goto restore_link;
+ 	}
+ 
++	if (speed == SPEED_MAX && chip->info->ops->port_max_speed_mode)
++		mode = chip->info->ops->port_max_speed_mode(port);
++
+ 	if (chip->info->ops->port_set_pause) {
+ 		err = chip->info->ops->port_set_pause(chip, port, pause);
+ 		if (err)
+@@ -3042,6 +3053,7 @@ static const struct mv88e6xxx_ops mv88e6141_ops = {
+ 	.port_set_duplex = mv88e6xxx_port_set_duplex,
+ 	.port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
+ 	.port_set_speed = mv88e6341_port_set_speed,
++	.port_max_speed_mode = mv88e6341_port_max_speed_mode,
+ 	.port_tag_remap = mv88e6095_port_tag_remap,
+ 	.port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ 	.port_set_egress_floods = mv88e6352_port_set_egress_floods,
+@@ -3360,6 +3372,7 @@ static const struct mv88e6xxx_ops mv88e6190_ops = {
+ 	.port_set_duplex = mv88e6xxx_port_set_duplex,
+ 	.port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
+ 	.port_set_speed = mv88e6390_port_set_speed,
++	.port_max_speed_mode = mv88e6390_port_max_speed_mode,
+ 	.port_tag_remap = mv88e6390_port_tag_remap,
+ 	.port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ 	.port_set_egress_floods = mv88e6352_port_set_egress_floods,
+@@ -3404,6 +3417,7 @@ static const struct mv88e6xxx_ops mv88e6190x_ops = {
+ 	.port_set_duplex = mv88e6xxx_port_set_duplex,
+ 	.port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
+ 	.port_set_speed = mv88e6390x_port_set_speed,
++	.port_max_speed_mode = mv88e6390x_port_max_speed_mode,
+ 	.port_tag_remap = mv88e6390_port_tag_remap,
+ 	.port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ 	.port_set_egress_floods = mv88e6352_port_set_egress_floods,
+@@ -3448,6 +3462,7 @@ static const struct mv88e6xxx_ops mv88e6191_ops = {
+ 	.port_set_duplex = mv88e6xxx_port_set_duplex,
+ 	.port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
+ 	.port_set_speed = mv88e6390_port_set_speed,
++	.port_max_speed_mode = mv88e6390_port_max_speed_mode,
+ 	.port_tag_remap = mv88e6390_port_tag_remap,
+ 	.port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ 	.port_set_egress_floods = mv88e6352_port_set_egress_floods,
+@@ -3541,6 +3556,7 @@ static const struct mv88e6xxx_ops mv88e6290_ops = {
+ 	.port_set_duplex = mv88e6xxx_port_set_duplex,
+ 	.port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
+ 	.port_set_speed = mv88e6390_port_set_speed,
++	.port_max_speed_mode = mv88e6390_port_max_speed_mode,
+ 	.port_tag_remap = mv88e6390_port_tag_remap,
+ 	.port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ 	.port_set_egress_floods = mv88e6352_port_set_egress_floods,
+@@ -3672,6 +3688,7 @@ static const struct mv88e6xxx_ops mv88e6341_ops = {
+ 	.port_set_duplex = mv88e6xxx_port_set_duplex,
+ 	.port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
+ 	.port_set_speed = mv88e6341_port_set_speed,
++	.port_max_speed_mode = mv88e6341_port_max_speed_mode,
+ 	.port_tag_remap = mv88e6095_port_tag_remap,
+ 	.port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ 	.port_set_egress_floods = mv88e6352_port_set_egress_floods,
+@@ -3847,6 +3864,7 @@ static const struct mv88e6xxx_ops mv88e6390_ops = {
+ 	.port_set_duplex = mv88e6xxx_port_set_duplex,
+ 	.port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
+ 	.port_set_speed = mv88e6390_port_set_speed,
++	.port_max_speed_mode = mv88e6390_port_max_speed_mode,
+ 	.port_tag_remap = mv88e6390_port_tag_remap,
+ 	.port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ 	.port_set_egress_floods = mv88e6352_port_set_egress_floods,
+@@ -3895,6 +3913,7 @@ static const struct mv88e6xxx_ops mv88e6390x_ops = {
+ 	.port_set_duplex = mv88e6xxx_port_set_duplex,
+ 	.port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
+ 	.port_set_speed = mv88e6390x_port_set_speed,
++	.port_max_speed_mode = mv88e6390x_port_max_speed_mode,
+ 	.port_tag_remap = mv88e6390_port_tag_remap,
+ 	.port_set_frame_mode = mv88e6351_port_set_frame_mode,
+ 	.port_set_egress_floods = mv88e6352_port_set_egress_floods,
+@@ -4222,7 +4241,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
+ 		.name = "Marvell 88E6190",
+ 		.num_databases = 4096,
+ 		.num_ports = 11,	/* 10 + Z80 */
+-		.num_internal_phys = 11,
++		.num_internal_phys = 9,
+ 		.num_gpio = 16,
+ 		.max_vid = 8191,
+ 		.port_base_addr = 0x0,
+@@ -4245,7 +4264,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
+ 		.name = "Marvell 88E6190X",
+ 		.num_databases = 4096,
+ 		.num_ports = 11,	/* 10 + Z80 */
+-		.num_internal_phys = 11,
++		.num_internal_phys = 9,
+ 		.num_gpio = 16,
+ 		.max_vid = 8191,
+ 		.port_base_addr = 0x0,
+@@ -4268,7 +4287,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
+ 		.name = "Marvell 88E6191",
+ 		.num_databases = 4096,
+ 		.num_ports = 11,	/* 10 + Z80 */
+-		.num_internal_phys = 11,
++		.num_internal_phys = 9,
+ 		.max_vid = 8191,
+ 		.port_base_addr = 0x0,
+ 		.phy_base_addr = 0x0,
+@@ -4315,7 +4334,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
+ 		.name = "Marvell 88E6290",
+ 		.num_databases = 4096,
+ 		.num_ports = 11,	/* 10 + Z80 */
+-		.num_internal_phys = 11,
++		.num_internal_phys = 9,
+ 		.num_gpio = 16,
+ 		.max_vid = 8191,
+ 		.port_base_addr = 0x0,
+@@ -4477,7 +4496,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
+ 		.name = "Marvell 88E6390",
+ 		.num_databases = 4096,
+ 		.num_ports = 11,	/* 10 + Z80 */
+-		.num_internal_phys = 11,
++		.num_internal_phys = 9,
+ 		.num_gpio = 16,
+ 		.max_vid = 8191,
+ 		.port_base_addr = 0x0,
+@@ -4500,7 +4519,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
+ 		.name = "Marvell 88E6390X",
+ 		.num_databases = 4096,
+ 		.num_ports = 11,	/* 10 + Z80 */
+-		.num_internal_phys = 11,
++		.num_internal_phys = 9,
+ 		.num_gpio = 16,
+ 		.max_vid = 8191,
+ 		.port_base_addr = 0x0,
+@@ -4847,6 +4866,7 @@ static int mv88e6xxx_probe(struct mdio_device *mdiodev)
+ 	if (err)
+ 		goto out;
+ 
++	mv88e6xxx_ports_cmode_init(chip);
+ 	mv88e6xxx_phy_init(chip);
+ 
+ 	if (chip->info->ops->get_eeprom) {
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.h b/drivers/net/dsa/mv88e6xxx/chip.h
+index 546651d8c3e1..dfb1af65c205 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.h
++++ b/drivers/net/dsa/mv88e6xxx/chip.h
+@@ -377,6 +377,9 @@ struct mv88e6xxx_ops {
+ 	 */
+ 	int (*port_set_speed)(struct mv88e6xxx_chip *chip, int port, int speed);
+ 
++	/* What interface mode should be used for maximum speed? */
++	phy_interface_t (*port_max_speed_mode)(int port);
++
+ 	int (*port_tag_remap)(struct mv88e6xxx_chip *chip, int port);
+ 
+ 	int (*port_set_frame_mode)(struct mv88e6xxx_chip *chip, int port,
+diff --git a/drivers/net/dsa/mv88e6xxx/port.c b/drivers/net/dsa/mv88e6xxx/port.c
+index 79ab51e69aee..c44b2822e4dd 100644
+--- a/drivers/net/dsa/mv88e6xxx/port.c
++++ b/drivers/net/dsa/mv88e6xxx/port.c
+@@ -190,7 +190,7 @@ int mv88e6xxx_port_set_duplex(struct mv88e6xxx_chip *chip, int port, int dup)
+ 		/* normal duplex detection */
+ 		break;
+ 	default:
+-		return -EINVAL;
++		return -EOPNOTSUPP;
+ 	}
+ 
+ 	err = mv88e6xxx_port_write(chip, port, MV88E6XXX_PORT_MAC_CTL, reg);
+@@ -312,6 +312,14 @@ int mv88e6341_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed)
+ 	return mv88e6xxx_port_set_speed(chip, port, speed, !port, true);
+ }
+ 
++phy_interface_t mv88e6341_port_max_speed_mode(int port)
++{
++	if (port == 5)
++		return PHY_INTERFACE_MODE_2500BASEX;
++
++	return PHY_INTERFACE_MODE_NA;
++}
++
+ /* Support 10, 100, 200, 1000 Mbps (e.g. 88E6352 family) */
+ int mv88e6352_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed)
+ {
+@@ -345,6 +353,14 @@ int mv88e6390_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed)
+ 	return mv88e6xxx_port_set_speed(chip, port, speed, true, true);
+ }
+ 
++phy_interface_t mv88e6390_port_max_speed_mode(int port)
++{
++	if (port == 9 || port == 10)
++		return PHY_INTERFACE_MODE_2500BASEX;
++
++	return PHY_INTERFACE_MODE_NA;
++}
++
+ /* Support 10, 100, 200, 1000, 2500, 10000 Mbps (e.g. 88E6190X) */
+ int mv88e6390x_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed)
+ {
+@@ -360,6 +376,14 @@ int mv88e6390x_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed)
+ 	return mv88e6xxx_port_set_speed(chip, port, speed, true, true);
+ }
+ 
++phy_interface_t mv88e6390x_port_max_speed_mode(int port)
++{
++	if (port == 9 || port == 10)
++		return PHY_INTERFACE_MODE_XAUI;
++
++	return PHY_INTERFACE_MODE_NA;
++}
++
+ int mv88e6390x_port_set_cmode(struct mv88e6xxx_chip *chip, int port,
+ 			      phy_interface_t mode)
+ {
+@@ -403,18 +427,22 @@ int mv88e6390x_port_set_cmode(struct mv88e6xxx_chip *chip, int port,
+ 		return 0;
+ 
+ 	lane = mv88e6390x_serdes_get_lane(chip, port);
+-	if (lane < 0)
++	if (lane < 0 && lane != -ENODEV)
+ 		return lane;
+ 
+-	if (chip->ports[port].serdes_irq) {
+-		err = mv88e6390_serdes_irq_disable(chip, port, lane);
++	if (lane >= 0) {
++		if (chip->ports[port].serdes_irq) {
++			err = mv88e6390_serdes_irq_disable(chip, port, lane);
++			if (err)
++				return err;
++		}
++
++		err = mv88e6390x_serdes_power(chip, port, false);
+ 		if (err)
+ 			return err;
+ 	}
+ 
+-	err = mv88e6390x_serdes_power(chip, port, false);
+-	if (err)
+-		return err;
++	chip->ports[port].cmode = 0;
+ 
+ 	if (cmode) {
+ 		err = mv88e6xxx_port_read(chip, port, MV88E6XXX_PORT_STS, &reg);
+@@ -428,6 +456,12 @@ int mv88e6390x_port_set_cmode(struct mv88e6xxx_chip *chip, int port,
+ 		if (err)
+ 			return err;
+ 
++		chip->ports[port].cmode = cmode;
++
++		lane = mv88e6390x_serdes_get_lane(chip, port);
++		if (lane < 0)
++			return lane;
++
+ 		err = mv88e6390x_serdes_power(chip, port, true);
+ 		if (err)
+ 			return err;
+@@ -439,8 +473,6 @@ int mv88e6390x_port_set_cmode(struct mv88e6xxx_chip *chip, int port,
+ 		}
+ 	}
+ 
+-	chip->ports[port].cmode = cmode;
+-
+ 	return 0;
+ }
+ 
+@@ -448,6 +480,8 @@ int mv88e6390_port_set_cmode(struct mv88e6xxx_chip *chip, int port,
+ 			     phy_interface_t mode)
+ {
+ 	switch (mode) {
++	case PHY_INTERFACE_MODE_NA:
++		return 0;
+ 	case PHY_INTERFACE_MODE_XGMII:
+ 	case PHY_INTERFACE_MODE_XAUI:
+ 	case PHY_INTERFACE_MODE_RXAUI:
+diff --git a/drivers/net/dsa/mv88e6xxx/port.h b/drivers/net/dsa/mv88e6xxx/port.h
+index 4aadf321edb7..c7bed263a0f4 100644
+--- a/drivers/net/dsa/mv88e6xxx/port.h
++++ b/drivers/net/dsa/mv88e6xxx/port.h
+@@ -285,6 +285,10 @@ int mv88e6352_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed);
+ int mv88e6390_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed);
+ int mv88e6390x_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed);
+ 
++phy_interface_t mv88e6341_port_max_speed_mode(int port);
++phy_interface_t mv88e6390_port_max_speed_mode(int port);
++phy_interface_t mv88e6390x_port_max_speed_mode(int port);
++
+ int mv88e6xxx_port_set_state(struct mv88e6xxx_chip *chip, int port, u8 state);
+ 
+ int mv88e6xxx_port_set_vlan_map(struct mv88e6xxx_chip *chip, int port, u16 map);
+diff --git a/drivers/net/dsa/qca8k.c b/drivers/net/dsa/qca8k.c
+index 7e97e620bd44..a26850c888cf 100644
+--- a/drivers/net/dsa/qca8k.c
++++ b/drivers/net/dsa/qca8k.c
+@@ -620,22 +620,6 @@ qca8k_adjust_link(struct dsa_switch *ds, int port, struct phy_device *phy)
+ 	qca8k_port_set_status(priv, port, 1);
+ }
+ 
+-static int
+-qca8k_phy_read(struct dsa_switch *ds, int phy, int regnum)
+-{
+-	struct qca8k_priv *priv = (struct qca8k_priv *)ds->priv;
+-
+-	return mdiobus_read(priv->bus, phy, regnum);
+-}
+-
+-static int
+-qca8k_phy_write(struct dsa_switch *ds, int phy, int regnum, u16 val)
+-{
+-	struct qca8k_priv *priv = (struct qca8k_priv *)ds->priv;
+-
+-	return mdiobus_write(priv->bus, phy, regnum, val);
+-}
+-
+ static void
+ qca8k_get_strings(struct dsa_switch *ds, int port, u32 stringset, uint8_t *data)
+ {
+@@ -876,8 +860,6 @@ static const struct dsa_switch_ops qca8k_switch_ops = {
+ 	.setup			= qca8k_setup,
+ 	.adjust_link            = qca8k_adjust_link,
+ 	.get_strings		= qca8k_get_strings,
+-	.phy_read		= qca8k_phy_read,
+-	.phy_write		= qca8k_phy_write,
+ 	.get_ethtool_stats	= qca8k_get_ethtool_stats,
+ 	.get_sset_count		= qca8k_get_sset_count,
+ 	.get_mac_eee		= qca8k_get_mac_eee,
+diff --git a/drivers/net/ethernet/8390/mac8390.c b/drivers/net/ethernet/8390/mac8390.c
+index 342ae08ec3c2..d60a86aa8aa8 100644
+--- a/drivers/net/ethernet/8390/mac8390.c
++++ b/drivers/net/ethernet/8390/mac8390.c
+@@ -153,8 +153,6 @@ static void dayna_block_input(struct net_device *dev, int count,
+ static void dayna_block_output(struct net_device *dev, int count,
+ 			       const unsigned char *buf, int start_page);
+ 
+-#define memcmp_withio(a, b, c)	memcmp((a), (void *)(b), (c))
+-
+ /* Slow Sane (16-bit chunk memory read/write) Cabletron uses this */
+ static void slow_sane_get_8390_hdr(struct net_device *dev,
+ 				   struct e8390_pkt_hdr *hdr, int ring_page);
+@@ -233,19 +231,26 @@ static enum mac8390_type mac8390_ident(struct nubus_rsrc *fres)
+ 
+ static enum mac8390_access mac8390_testio(unsigned long membase)
+ {
+-	unsigned long outdata = 0xA5A0B5B0;
+-	unsigned long indata =  0x00000000;
++	u32 outdata = 0xA5A0B5B0;
++	u32 indata = 0;
++
+ 	/* Try writing 32 bits */
+-	memcpy_toio((void __iomem *)membase, &outdata, 4);
+-	/* Now compare them */
+-	if (memcmp_withio(&outdata, membase, 4) == 0)
++	nubus_writel(outdata, membase);
++	/* Now read it back */
++	indata = nubus_readl(membase);
++	if (outdata == indata)
+ 		return ACCESS_32;
++
++	outdata = 0xC5C0D5D0;
++	indata = 0;
++
+ 	/* Write 16 bit output */
+ 	word_memcpy_tocard(membase, &outdata, 4);
+ 	/* Now read it back */
+ 	word_memcpy_fromcard(&indata, membase, 4);
+ 	if (outdata == indata)
+ 		return ACCESS_16;
++
+ 	return ACCESS_UNKNOWN;
+ }
+ 
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
+index 74550ccc7a20..e2ffb159cbe2 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
+@@ -186,11 +186,12 @@ static void aq_rx_checksum(struct aq_ring_s *self,
+ 	}
+ 	if (buff->is_ip_cso) {
+ 		__skb_incr_checksum_unnecessary(skb);
+-		if (buff->is_udp_cso || buff->is_tcp_cso)
+-			__skb_incr_checksum_unnecessary(skb);
+ 	} else {
+ 		skb->ip_summed = CHECKSUM_NONE;
+ 	}
++
++	if (buff->is_udp_cso || buff->is_tcp_cso)
++		__skb_incr_checksum_unnecessary(skb);
+ }
+ 
+ #define AQ_SKB_ALIGN SKB_DATA_ALIGN(sizeof(struct skb_shared_info))
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 803f7990d32b..40ca339ec3df 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -1129,6 +1129,8 @@ static void bnxt_tpa_start(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
+ 	tpa_info = &rxr->rx_tpa[agg_id];
+ 
+ 	if (unlikely(cons != rxr->rx_next_cons)) {
++		netdev_warn(bp->dev, "TPA cons %x != expected cons %x\n",
++			    cons, rxr->rx_next_cons);
+ 		bnxt_sched_reset(bp, rxr);
+ 		return;
+ 	}
+@@ -1581,15 +1583,17 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
+ 	}
+ 
+ 	cons = rxcmp->rx_cmp_opaque;
+-	rx_buf = &rxr->rx_buf_ring[cons];
+-	data = rx_buf->data;
+-	data_ptr = rx_buf->data_ptr;
+ 	if (unlikely(cons != rxr->rx_next_cons)) {
+ 		int rc1 = bnxt_discard_rx(bp, cpr, raw_cons, rxcmp);
+ 
++		netdev_warn(bp->dev, "RX cons %x != expected cons %x\n",
++			    cons, rxr->rx_next_cons);
+ 		bnxt_sched_reset(bp, rxr);
+ 		return rc1;
+ 	}
++	rx_buf = &rxr->rx_buf_ring[cons];
++	data = rx_buf->data;
++	data_ptr = rx_buf->data_ptr;
+ 	prefetch(data_ptr);
+ 
+ 	misc = le32_to_cpu(rxcmp->rx_cmp_misc_v1);
+@@ -1606,11 +1610,17 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
+ 
+ 	rx_buf->data = NULL;
+ 	if (rxcmp1->rx_cmp_cfa_code_errors_v2 & RX_CMP_L2_ERRORS) {
++		u32 rx_err = le32_to_cpu(rxcmp1->rx_cmp_cfa_code_errors_v2);
++
+ 		bnxt_reuse_rx_data(rxr, cons, data);
+ 		if (agg_bufs)
+ 			bnxt_reuse_rx_agg_bufs(cpr, cp_cons, agg_bufs);
+ 
+ 		rc = -EIO;
++		if (rx_err & RX_CMPL_ERRORS_BUFFER_ERROR_MASK) {
++			netdev_warn(bp->dev, "RX buffer error %x\n", rx_err);
++			bnxt_sched_reset(bp, rxr);
++		}
+ 		goto next_rx;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_main.c b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
+index 503cfadff4ac..d4ee9f9c8c34 100644
+--- a/drivers/net/ethernet/cavium/thunder/nicvf_main.c
++++ b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
+@@ -1328,10 +1328,11 @@ int nicvf_stop(struct net_device *netdev)
+ 	struct nicvf_cq_poll *cq_poll = NULL;
+ 	union nic_mbx mbx = {};
+ 
+-	cancel_delayed_work_sync(&nic->link_change_work);
+-
+ 	/* wait till all queued set_rx_mode tasks completes */
+-	drain_workqueue(nic->nicvf_rx_mode_wq);
++	if (nic->nicvf_rx_mode_wq) {
++		cancel_delayed_work_sync(&nic->link_change_work);
++		drain_workqueue(nic->nicvf_rx_mode_wq);
++	}
+ 
+ 	mbx.msg.msg = NIC_MBOX_MSG_SHUTDOWN;
+ 	nicvf_send_msg_to_pf(nic, &mbx);
+@@ -1452,7 +1453,8 @@ int nicvf_open(struct net_device *netdev)
+ 	struct nicvf_cq_poll *cq_poll = NULL;
+ 
+ 	/* wait till all queued set_rx_mode tasks completes if any */
+-	drain_workqueue(nic->nicvf_rx_mode_wq);
++	if (nic->nicvf_rx_mode_wq)
++		drain_workqueue(nic->nicvf_rx_mode_wq);
+ 
+ 	netif_carrier_off(netdev);
+ 
+@@ -1550,10 +1552,12 @@ int nicvf_open(struct net_device *netdev)
+ 	/* Send VF config done msg to PF */
+ 	nicvf_send_cfg_done(nic);
+ 
+-	INIT_DELAYED_WORK(&nic->link_change_work,
+-			  nicvf_link_status_check_task);
+-	queue_delayed_work(nic->nicvf_rx_mode_wq,
+-			   &nic->link_change_work, 0);
++	if (nic->nicvf_rx_mode_wq) {
++		INIT_DELAYED_WORK(&nic->link_change_work,
++				  nicvf_link_status_check_task);
++		queue_delayed_work(nic->nicvf_rx_mode_wq,
++				   &nic->link_change_work, 0);
++	}
+ 
+ 	return 0;
+ cleanup:
+diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_queues.c b/drivers/net/ethernet/cavium/thunder/nicvf_queues.c
+index 5b4d3badcb73..e246f9733bb8 100644
+--- a/drivers/net/ethernet/cavium/thunder/nicvf_queues.c
++++ b/drivers/net/ethernet/cavium/thunder/nicvf_queues.c
+@@ -105,20 +105,19 @@ static inline struct pgcache *nicvf_alloc_page(struct nicvf *nic,
+ 	/* Check if page can be recycled */
+ 	if (page) {
+ 		ref_count = page_ref_count(page);
+-		/* Check if this page has been used once i.e 'put_page'
+-		 * called after packet transmission i.e internal ref_count
+-		 * and page's ref_count are equal i.e page can be recycled.
++		/* This page can be recycled if internal ref_count and page's
++		 * ref_count are equal, indicating that the page has been used
++		 * once for packet transmission. For non-XDP mode, internal
++		 * ref_count is always '1'.
+ 		 */
+-		if (rbdr->is_xdp && (ref_count == pgcache->ref_count))
+-			pgcache->ref_count--;
+-		else
+-			page = NULL;
+-
+-		/* In non-XDP mode, page's ref_count needs to be '1' for it
+-		 * to be recycled.
+-		 */
+-		if (!rbdr->is_xdp && (ref_count != 1))
++		if (rbdr->is_xdp) {
++			if (ref_count == pgcache->ref_count)
++				pgcache->ref_count--;
++			else
++				page = NULL;
++		} else if (ref_count != 1) {
+ 			page = NULL;
++		}
+ 	}
+ 
+ 	if (!page) {
+@@ -365,11 +364,10 @@ static void nicvf_free_rbdr(struct nicvf *nic, struct rbdr *rbdr)
+ 	while (head < rbdr->pgcnt) {
+ 		pgcache = &rbdr->pgcache[head];
+ 		if (pgcache->page && page_ref_count(pgcache->page) != 0) {
+-			if (!rbdr->is_xdp) {
+-				put_page(pgcache->page);
+-				continue;
++			if (rbdr->is_xdp) {
++				page_ref_sub(pgcache->page,
++					     pgcache->ref_count - 1);
+ 			}
+-			page_ref_sub(pgcache->page, pgcache->ref_count - 1);
+ 			put_page(pgcache->page);
+ 		}
+ 		head++;
+diff --git a/drivers/net/ethernet/cisco/enic/enic_main.c b/drivers/net/ethernet/cisco/enic/enic_main.c
+index 9a7f70db20c7..733d9172425b 100644
+--- a/drivers/net/ethernet/cisco/enic/enic_main.c
++++ b/drivers/net/ethernet/cisco/enic/enic_main.c
+@@ -119,7 +119,7 @@ static void enic_init_affinity_hint(struct enic *enic)
+ 
+ 	for (i = 0; i < enic->intr_count; i++) {
+ 		if (enic_is_err_intr(enic, i) || enic_is_notify_intr(enic, i) ||
+-		    (enic->msix[i].affinity_mask &&
++		    (cpumask_available(enic->msix[i].affinity_mask) &&
+ 		     !cpumask_empty(enic->msix[i].affinity_mask)))
+ 			continue;
+ 		if (zalloc_cpumask_var(&enic->msix[i].affinity_mask,
+@@ -148,7 +148,7 @@ static void enic_set_affinity_hint(struct enic *enic)
+ 	for (i = 0; i < enic->intr_count; i++) {
+ 		if (enic_is_err_intr(enic, i)		||
+ 		    enic_is_notify_intr(enic, i)	||
+-		    !enic->msix[i].affinity_mask	||
++		    !cpumask_available(enic->msix[i].affinity_mask) ||
+ 		    cpumask_empty(enic->msix[i].affinity_mask))
+ 			continue;
+ 		err = irq_set_affinity_hint(enic->msix_entry[i].vector,
+@@ -161,7 +161,7 @@ static void enic_set_affinity_hint(struct enic *enic)
+ 	for (i = 0; i < enic->wq_count; i++) {
+ 		int wq_intr = enic_msix_wq_intr(enic, i);
+ 
+-		if (enic->msix[wq_intr].affinity_mask &&
++		if (cpumask_available(enic->msix[wq_intr].affinity_mask) &&
+ 		    !cpumask_empty(enic->msix[wq_intr].affinity_mask))
+ 			netif_set_xps_queue(enic->netdev,
+ 					    enic->msix[wq_intr].affinity_mask,
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+index 36eab37d8a40..09c774fe8853 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+@@ -192,6 +192,7 @@ struct hnae3_ae_dev {
+ 	const struct hnae3_ae_ops *ops;
+ 	struct list_head node;
+ 	u32 flag;
++	u8 override_pci_need_reset; /* fix to stop multiple reset happening */
+ 	enum hnae3_dev_type dev_type;
+ 	enum hnae3_reset_type reset_type;
+ 	void *priv;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index 1bf7a5f116a0..d84c50068f66 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -1852,7 +1852,9 @@ static pci_ers_result_t hns3_slot_reset(struct pci_dev *pdev)
+ 
+ 	/* request the reset */
+ 	if (ae_dev->ops->reset_event) {
+-		ae_dev->ops->reset_event(pdev, NULL);
++		if (!ae_dev->override_pci_need_reset)
++			ae_dev->ops->reset_event(pdev, NULL);
++
+ 		return PCI_ERS_RESULT_RECOVERED;
+ 	}
+ 
+@@ -2476,6 +2478,8 @@ static int hns3_add_frag(struct hns3_enet_ring *ring, struct hns3_desc *desc,
+ 		desc = &ring->desc[ring->next_to_clean];
+ 		desc_cb = &ring->desc_cb[ring->next_to_clean];
+ 		bd_base_info = le32_to_cpu(desc->rx.bd_base_info);
++		/* make sure HW write desc complete */
++		dma_rmb();
+ 		if (!hnae3_get_bit(bd_base_info, HNS3_RXD_VLD_B))
+ 			return -ENXIO;
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
+index d0f654123b9b..3ea72e4d9dc4 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
+@@ -1094,10 +1094,10 @@ static int hclge_log_rocee_ovf_error(struct hclge_dev *hdev)
+ 	return 0;
+ }
+ 
+-static int hclge_log_and_clear_rocee_ras_error(struct hclge_dev *hdev)
++static enum hnae3_reset_type
++hclge_log_and_clear_rocee_ras_error(struct hclge_dev *hdev)
+ {
+-	enum hnae3_reset_type reset_type = HNAE3_FUNC_RESET;
+-	struct hnae3_ae_dev *ae_dev = hdev->ae_dev;
++	enum hnae3_reset_type reset_type = HNAE3_NONE_RESET;
+ 	struct device *dev = &hdev->pdev->dev;
+ 	struct hclge_desc desc[2];
+ 	unsigned int status;
+@@ -1110,17 +1110,20 @@ static int hclge_log_and_clear_rocee_ras_error(struct hclge_dev *hdev)
+ 	if (ret) {
+ 		dev_err(dev, "failed(%d) to query ROCEE RAS INT SRC\n", ret);
+ 		/* reset everything for now */
+-		HCLGE_SET_DEFAULT_RESET_REQUEST(HNAE3_GLOBAL_RESET);
+-		return ret;
++		return HNAE3_GLOBAL_RESET;
+ 	}
+ 
+ 	status = le32_to_cpu(desc[0].data[0]);
+ 
+-	if (status & HCLGE_ROCEE_RERR_INT_MASK)
++	if (status & HCLGE_ROCEE_RERR_INT_MASK) {
+ 		dev_warn(dev, "ROCEE RAS AXI rresp error\n");
++		reset_type = HNAE3_FUNC_RESET;
++	}
+ 
+-	if (status & HCLGE_ROCEE_BERR_INT_MASK)
++	if (status & HCLGE_ROCEE_BERR_INT_MASK) {
+ 		dev_warn(dev, "ROCEE RAS AXI bresp error\n");
++		reset_type = HNAE3_FUNC_RESET;
++	}
+ 
+ 	if (status & HCLGE_ROCEE_ECC_INT_MASK) {
+ 		dev_warn(dev, "ROCEE RAS 2bit ECC error\n");
+@@ -1132,9 +1135,9 @@ static int hclge_log_and_clear_rocee_ras_error(struct hclge_dev *hdev)
+ 		if (ret) {
+ 			dev_err(dev, "failed(%d) to process ovf error\n", ret);
+ 			/* reset everything for now */
+-			HCLGE_SET_DEFAULT_RESET_REQUEST(HNAE3_GLOBAL_RESET);
+-			return ret;
++			return HNAE3_GLOBAL_RESET;
+ 		}
++		reset_type = HNAE3_FUNC_RESET;
+ 	}
+ 
+ 	/* clear error status */
+@@ -1143,12 +1146,10 @@ static int hclge_log_and_clear_rocee_ras_error(struct hclge_dev *hdev)
+ 	if (ret) {
+ 		dev_err(dev, "failed(%d) to clear ROCEE RAS error\n", ret);
+ 		/* reset everything for now */
+-		reset_type = HNAE3_GLOBAL_RESET;
++		return HNAE3_GLOBAL_RESET;
+ 	}
+ 
+-	HCLGE_SET_DEFAULT_RESET_REQUEST(reset_type);
+-
+-	return ret;
++	return reset_type;
+ }
+ 
+ static int hclge_config_rocee_ras_interrupt(struct hclge_dev *hdev, bool en)
+@@ -1178,15 +1179,18 @@ static int hclge_config_rocee_ras_interrupt(struct hclge_dev *hdev, bool en)
+ 	return ret;
+ }
+ 
+-static int hclge_handle_rocee_ras_error(struct hnae3_ae_dev *ae_dev)
++static void hclge_handle_rocee_ras_error(struct hnae3_ae_dev *ae_dev)
+ {
++	enum hnae3_reset_type reset_type = HNAE3_NONE_RESET;
+ 	struct hclge_dev *hdev = ae_dev->priv;
+ 
+ 	if (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state) ||
+ 	    hdev->pdev->revision < 0x21)
+-		return HNAE3_NONE_RESET;
++		return;
+ 
+-	return hclge_log_and_clear_rocee_ras_error(hdev);
++	reset_type = hclge_log_and_clear_rocee_ras_error(hdev);
++	if (reset_type != HNAE3_NONE_RESET)
++		HCLGE_SET_DEFAULT_RESET_REQUEST(reset_type);
+ }
+ 
+ static const struct hclge_hw_blk hw_blk[] = {
+@@ -1259,8 +1263,10 @@ pci_ers_result_t hclge_handle_hw_ras_error(struct hnae3_ae_dev *ae_dev)
+ 		hclge_handle_all_ras_errors(hdev);
+ 	} else {
+ 		if (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state) ||
+-		    hdev->pdev->revision < 0x21)
++		    hdev->pdev->revision < 0x21) {
++			ae_dev->override_pci_need_reset = 1;
+ 			return PCI_ERS_RESULT_RECOVERED;
++		}
+ 	}
+ 
+ 	if (status & HCLGE_RAS_REG_ROCEE_ERR_MASK) {
+@@ -1269,8 +1275,11 @@ pci_ers_result_t hclge_handle_hw_ras_error(struct hnae3_ae_dev *ae_dev)
+ 	}
+ 
+ 	if (status & HCLGE_RAS_REG_NFE_MASK ||
+-	    status & HCLGE_RAS_REG_ROCEE_ERR_MASK)
++	    status & HCLGE_RAS_REG_ROCEE_ERR_MASK) {
++		ae_dev->override_pci_need_reset = 0;
+ 		return PCI_ERS_RESULT_NEED_RESET;
++	}
++	ae_dev->override_pci_need_reset = 1;
+ 
+ 	return PCI_ERS_RESULT_RECOVERED;
+ }
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 5ecbb1adcf3b..51cfe95f3e24 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -1885,6 +1885,7 @@ static int do_hard_reset(struct ibmvnic_adapter *adapter,
+ 	 */
+ 	adapter->state = VNIC_PROBED;
+ 
++	reinit_completion(&adapter->init_done);
+ 	rc = init_crq_queue(adapter);
+ 	if (rc) {
+ 		netdev_err(adapter->netdev,
+@@ -4625,7 +4626,7 @@ static int ibmvnic_reset_init(struct ibmvnic_adapter *adapter)
+ 	old_num_rx_queues = adapter->req_rx_queues;
+ 	old_num_tx_queues = adapter->req_tx_queues;
+ 
+-	init_completion(&adapter->init_done);
++	reinit_completion(&adapter->init_done);
+ 	adapter->init_done_rc = 0;
+ 	ibmvnic_send_crq_init(adapter);
+ 	if (!wait_for_completion_timeout(&adapter->init_done, timeout)) {
+@@ -4680,7 +4681,6 @@ static int ibmvnic_init(struct ibmvnic_adapter *adapter)
+ 
+ 	adapter->from_passive_init = false;
+ 
+-	init_completion(&adapter->init_done);
+ 	adapter->init_done_rc = 0;
+ 	ibmvnic_send_crq_init(adapter);
+ 	if (!wait_for_completion_timeout(&adapter->init_done, timeout)) {
+@@ -4759,6 +4759,7 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
+ 	INIT_WORK(&adapter->ibmvnic_reset, __ibmvnic_reset);
+ 	INIT_LIST_HEAD(&adapter->rwi_list);
+ 	spin_lock_init(&adapter->rwi_lock);
++	init_completion(&adapter->init_done);
+ 	adapter->resetting = false;
+ 
+ 	adapter->mac_change_pending = false;
+diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
+index 189f231075c2..7acc61e4f645 100644
+--- a/drivers/net/ethernet/intel/e1000e/netdev.c
++++ b/drivers/net/ethernet/intel/e1000e/netdev.c
+@@ -2106,7 +2106,7 @@ static int e1000_request_msix(struct e1000_adapter *adapter)
+ 	if (strlen(netdev->name) < (IFNAMSIZ - 5))
+ 		snprintf(adapter->rx_ring->name,
+ 			 sizeof(adapter->rx_ring->name) - 1,
+-			 "%s-rx-0", netdev->name);
++			 "%.14s-rx-0", netdev->name);
+ 	else
+ 		memcpy(adapter->rx_ring->name, netdev->name, IFNAMSIZ);
+ 	err = request_irq(adapter->msix_entries[vector].vector,
+@@ -2122,7 +2122,7 @@ static int e1000_request_msix(struct e1000_adapter *adapter)
+ 	if (strlen(netdev->name) < (IFNAMSIZ - 5))
+ 		snprintf(adapter->tx_ring->name,
+ 			 sizeof(adapter->tx_ring->name) - 1,
+-			 "%s-tx-0", netdev->name);
++			 "%.14s-tx-0", netdev->name);
+ 	else
+ 		memcpy(adapter->tx_ring->name, netdev->name, IFNAMSIZ);
+ 	err = request_irq(adapter->msix_entries[vector].vector,
+@@ -5309,8 +5309,13 @@ static void e1000_watchdog_task(struct work_struct *work)
+ 			/* 8000ES2LAN requires a Rx packet buffer work-around
+ 			 * on link down event; reset the controller to flush
+ 			 * the Rx packet buffer.
++			 *
++			 * If the link is lost the controller stops DMA, but
++			 * if there is queued Tx work it cannot be done.  So
++			 * reset the controller to flush the Tx packet buffers.
+ 			 */
+-			if (adapter->flags & FLAG_RX_NEEDS_RESTART)
++			if ((adapter->flags & FLAG_RX_NEEDS_RESTART) ||
++			    e1000_desc_unused(tx_ring) + 1 < tx_ring->count)
+ 				adapter->flags |= FLAG_RESTART_NOW;
+ 			else
+ 				pm_schedule_suspend(netdev->dev.parent,
+@@ -5333,14 +5338,6 @@ link_up:
+ 	adapter->gotc_old = adapter->stats.gotc;
+ 	spin_unlock(&adapter->stats64_lock);
+ 
+-	/* If the link is lost the controller stops DMA, but
+-	 * if there is queued Tx work it cannot be done.  So
+-	 * reset the controller to flush the Tx packet buffers.
+-	 */
+-	if (!netif_carrier_ok(netdev) &&
+-	    (e1000_desc_unused(tx_ring) + 1 < tx_ring->count))
+-		adapter->flags |= FLAG_RESTART_NOW;
+-
+ 	/* If reset is necessary, do it outside of interrupt context. */
+ 	if (adapter->flags & FLAG_RESTART_NOW) {
+ 		schedule_work(&adapter->reset_task);
+@@ -7351,6 +7348,8 @@ static int e1000_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	e1000_print_device_info(adapter);
+ 
++	dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NEVER_SKIP);
++
+ 	if (pci_dev_run_wake(pdev))
+ 		pm_runtime_put_noidle(&pdev->dev);
+ 
+diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c
+index 2e5693107fa4..8d602247eb44 100644
+--- a/drivers/net/ethernet/intel/ice/ice_switch.c
++++ b/drivers/net/ethernet/intel/ice/ice_switch.c
+@@ -1538,9 +1538,20 @@ ice_remove_rule_internal(struct ice_hw *hw, u8 recp_id,
+ 	} else if (!list_elem->vsi_list_info) {
+ 		status = ICE_ERR_DOES_NOT_EXIST;
+ 		goto exit;
++	} else if (list_elem->vsi_list_info->ref_cnt > 1) {
++		/* a ref_cnt > 1 indicates that the vsi_list is being
++		 * shared by multiple rules. Decrement the ref_cnt and
++		 * remove this rule, but do not modify the list, as it
++		 * is in-use by other rules.
++		 */
++		list_elem->vsi_list_info->ref_cnt--;
++		remove_rule = true;
+ 	} else {
+-		if (list_elem->vsi_list_info->ref_cnt > 1)
+-			list_elem->vsi_list_info->ref_cnt--;
++		/* a ref_cnt of 1 indicates the vsi_list is only used
++		 * by one rule. However, the original removal request is only
++		 * for a single VSI. Update the vsi_list first, and only
++		 * remove the rule if there are no further VSIs in this list.
++		 */
+ 		vsi_handle = f_entry->fltr_info.vsi_handle;
+ 		status = ice_rem_update_vsi_list(hw, vsi_handle, list_elem);
+ 		if (status)
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index 16066c2d5b3a..931beac3359d 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -1380,13 +1380,9 @@ static void mvpp2_port_reset(struct mvpp2_port *port)
+ 	for (i = 0; i < ARRAY_SIZE(mvpp2_ethtool_regs); i++)
+ 		mvpp2_read_count(port, &mvpp2_ethtool_regs[i]);
+ 
+-	val = readl(port->base + MVPP2_GMAC_CTRL_2_REG) &
+-		    ~MVPP2_GMAC_PORT_RESET_MASK;
++	val = readl(port->base + MVPP2_GMAC_CTRL_2_REG) |
++	      MVPP2_GMAC_PORT_RESET_MASK;
+ 	writel(val, port->base + MVPP2_GMAC_CTRL_2_REG);
+-
+-	while (readl(port->base + MVPP2_GMAC_CTRL_2_REG) &
+-	       MVPP2_GMAC_PORT_RESET_MASK)
+-		continue;
+ }
+ 
+ /* Change maximum receive size of the port */
+@@ -4543,12 +4539,15 @@ static void mvpp2_gmac_config(struct mvpp2_port *port, unsigned int mode,
+ 			      const struct phylink_link_state *state)
+ {
+ 	u32 an, ctrl0, ctrl2, ctrl4;
++	u32 old_ctrl2;
+ 
+ 	an = readl(port->base + MVPP2_GMAC_AUTONEG_CONFIG);
+ 	ctrl0 = readl(port->base + MVPP2_GMAC_CTRL_0_REG);
+ 	ctrl2 = readl(port->base + MVPP2_GMAC_CTRL_2_REG);
+ 	ctrl4 = readl(port->base + MVPP22_GMAC_CTRL_4_REG);
+ 
++	old_ctrl2 = ctrl2;
++
+ 	/* Force link down */
+ 	an &= ~MVPP2_GMAC_FORCE_LINK_PASS;
+ 	an |= MVPP2_GMAC_FORCE_LINK_DOWN;
+@@ -4621,6 +4620,12 @@ static void mvpp2_gmac_config(struct mvpp2_port *port, unsigned int mode,
+ 	writel(ctrl2, port->base + MVPP2_GMAC_CTRL_2_REG);
+ 	writel(ctrl4, port->base + MVPP22_GMAC_CTRL_4_REG);
+ 	writel(an, port->base + MVPP2_GMAC_AUTONEG_CONFIG);
++
++	if (old_ctrl2 & MVPP2_GMAC_PORT_RESET_MASK) {
++		while (readl(port->base + MVPP2_GMAC_CTRL_2_REG) &
++		       MVPP2_GMAC_PORT_RESET_MASK)
++			continue;
++	}
+ }
+ 
+ static void mvpp2_mac_config(struct net_device *dev, unsigned int mode,
+diff --git a/drivers/net/ethernet/marvell/sky2.c b/drivers/net/ethernet/marvell/sky2.c
+index 57727fe1501e..8b3495ee2b6e 100644
+--- a/drivers/net/ethernet/marvell/sky2.c
++++ b/drivers/net/ethernet/marvell/sky2.c
+@@ -46,6 +46,7 @@
+ #include <linux/mii.h>
+ #include <linux/of_device.h>
+ #include <linux/of_net.h>
++#include <linux/dmi.h>
+ 
+ #include <asm/irq.h>
+ 
+@@ -93,7 +94,7 @@ static int copybreak __read_mostly = 128;
+ module_param(copybreak, int, 0);
+ MODULE_PARM_DESC(copybreak, "Receive copy threshold");
+ 
+-static int disable_msi = 0;
++static int disable_msi = -1;
+ module_param(disable_msi, int, 0);
+ MODULE_PARM_DESC(disable_msi, "Disable Message Signaled Interrupt (MSI)");
+ 
+@@ -4917,6 +4918,24 @@ static const char *sky2_name(u8 chipid, char *buf, int sz)
+ 	return buf;
+ }
+ 
++static const struct dmi_system_id msi_blacklist[] = {
++	{
++		.ident = "Dell Inspiron 1545",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 1545"),
++		},
++	},
++	{
++		.ident = "Gateway P-79",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Gateway"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "P-79"),
++		},
++	},
++	{}
++};
++
+ static int sky2_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ {
+ 	struct net_device *dev, *dev1;
+@@ -5028,6 +5047,9 @@ static int sky2_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 		goto err_out_free_pci;
+ 	}
+ 
++	if (disable_msi == -1)
++		disable_msi = !!dmi_check_system(msi_blacklist);
++
+ 	if (!disable_msi && pci_enable_msi(pdev) == 0) {
+ 		err = sky2_test_msi(hw);
+ 		if (err) {
+diff --git a/drivers/net/ethernet/mellanox/mlx4/cmd.c b/drivers/net/ethernet/mellanox/mlx4/cmd.c
+index e65bc3c95630..857588e2488d 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx4/cmd.c
+@@ -2645,6 +2645,8 @@ int mlx4_cmd_use_events(struct mlx4_dev *dev)
+ 	if (!priv->cmd.context)
+ 		return -ENOMEM;
+ 
++	if (mlx4_is_mfunc(dev))
++		mutex_lock(&priv->cmd.slave_cmd_mutex);
+ 	down_write(&priv->cmd.switch_sem);
+ 	for (i = 0; i < priv->cmd.max_cmds; ++i) {
+ 		priv->cmd.context[i].token = i;
+@@ -2670,6 +2672,8 @@ int mlx4_cmd_use_events(struct mlx4_dev *dev)
+ 	down(&priv->cmd.poll_sem);
+ 	priv->cmd.use_events = 1;
+ 	up_write(&priv->cmd.switch_sem);
++	if (mlx4_is_mfunc(dev))
++		mutex_unlock(&priv->cmd.slave_cmd_mutex);
+ 
+ 	return err;
+ }
+@@ -2682,6 +2686,8 @@ void mlx4_cmd_use_polling(struct mlx4_dev *dev)
+ 	struct mlx4_priv *priv = mlx4_priv(dev);
+ 	int i;
+ 
++	if (mlx4_is_mfunc(dev))
++		mutex_lock(&priv->cmd.slave_cmd_mutex);
+ 	down_write(&priv->cmd.switch_sem);
+ 	priv->cmd.use_events = 0;
+ 
+@@ -2689,9 +2695,12 @@ void mlx4_cmd_use_polling(struct mlx4_dev *dev)
+ 		down(&priv->cmd.event_sem);
+ 
+ 	kfree(priv->cmd.context);
++	priv->cmd.context = NULL;
+ 
+ 	up(&priv->cmd.poll_sem);
+ 	up_write(&priv->cmd.switch_sem);
++	if (mlx4_is_mfunc(dev))
++		mutex_unlock(&priv->cmd.slave_cmd_mutex);
+ }
+ 
+ struct mlx4_cmd_mailbox *mlx4_alloc_cmd_mailbox(struct mlx4_dev *dev)
+diff --git a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
+index eb13d3618162..4356f3a58002 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
++++ b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
+@@ -2719,13 +2719,13 @@ static int qp_get_mtt_size(struct mlx4_qp_context *qpc)
+ 	int total_pages;
+ 	int total_mem;
+ 	int page_offset = (be32_to_cpu(qpc->params2) >> 6) & 0x3f;
++	int tot;
+ 
+ 	sq_size = 1 << (log_sq_size + log_sq_sride + 4);
+ 	rq_size = (srq|rss|xrc) ? 0 : (1 << (log_rq_size + log_rq_stride + 4));
+ 	total_mem = sq_size + rq_size;
+-	total_pages =
+-		roundup_pow_of_two((total_mem + (page_offset << 6)) >>
+-				   page_shift);
++	tot = (total_mem + (page_offset << 6)) >> page_shift;
++	total_pages = !tot ? 1 : roundup_pow_of_two(tot);
+ 
+ 	return total_pages;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
+index eac245a93f91..4ab0d030b544 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
+@@ -122,7 +122,9 @@ out:
+ 	return err;
+ }
+ 
+-/* xoff = ((301+2.16 * len [m]) * speed [Gbps] + 2.72 MTU [B]) */
++/* xoff = ((301+2.16 * len [m]) * speed [Gbps] + 2.72 MTU [B])
++ * minimum speed value is 40Gbps
++ */
+ static u32 calculate_xoff(struct mlx5e_priv *priv, unsigned int mtu)
+ {
+ 	u32 speed;
+@@ -130,10 +132,9 @@ static u32 calculate_xoff(struct mlx5e_priv *priv, unsigned int mtu)
+ 	int err;
+ 
+ 	err = mlx5e_port_linkspeed(priv->mdev, &speed);
+-	if (err) {
+-		mlx5_core_warn(priv->mdev, "cannot get port speed\n");
+-		return 0;
+-	}
++	if (err)
++		speed = SPEED_40000;
++	speed = max_t(u32, speed, SPEED_40000);
+ 
+ 	xoff = (301 + 216 * priv->dcbx.cable_len / 100) * speed / 1000 + 272 * mtu / 100;
+ 
+@@ -142,7 +143,7 @@ static u32 calculate_xoff(struct mlx5e_priv *priv, unsigned int mtu)
+ }
+ 
+ static int update_xoff_threshold(struct mlx5e_port_buffer *port_buffer,
+-				 u32 xoff, unsigned int mtu)
++				 u32 xoff, unsigned int max_mtu)
+ {
+ 	int i;
+ 
+@@ -154,11 +155,12 @@ static int update_xoff_threshold(struct mlx5e_port_buffer *port_buffer,
+ 		}
+ 
+ 		if (port_buffer->buffer[i].size <
+-		    (xoff + mtu + (1 << MLX5E_BUFFER_CELL_SHIFT)))
++		    (xoff + max_mtu + (1 << MLX5E_BUFFER_CELL_SHIFT)))
+ 			return -ENOMEM;
+ 
+ 		port_buffer->buffer[i].xoff = port_buffer->buffer[i].size - xoff;
+-		port_buffer->buffer[i].xon  = port_buffer->buffer[i].xoff - mtu;
++		port_buffer->buffer[i].xon  =
++			port_buffer->buffer[i].xoff - max_mtu;
+ 	}
+ 
+ 	return 0;
+@@ -166,7 +168,7 @@ static int update_xoff_threshold(struct mlx5e_port_buffer *port_buffer,
+ 
+ /**
+  * update_buffer_lossy()
+- *   mtu: device's MTU
++ *   max_mtu: netdev's max_mtu
+  *   pfc_en: <input> current pfc configuration
+  *   buffer: <input> current prio to buffer mapping
+  *   xoff:   <input> xoff value
+@@ -183,7 +185,7 @@ static int update_xoff_threshold(struct mlx5e_port_buffer *port_buffer,
+  *     Return 0 if no error.
+  *     Set change to true if buffer configuration is modified.
+  */
+-static int update_buffer_lossy(unsigned int mtu,
++static int update_buffer_lossy(unsigned int max_mtu,
+ 			       u8 pfc_en, u8 *buffer, u32 xoff,
+ 			       struct mlx5e_port_buffer *port_buffer,
+ 			       bool *change)
+@@ -220,7 +222,7 @@ static int update_buffer_lossy(unsigned int mtu,
+ 	}
+ 
+ 	if (changed) {
+-		err = update_xoff_threshold(port_buffer, xoff, mtu);
++		err = update_xoff_threshold(port_buffer, xoff, max_mtu);
+ 		if (err)
+ 			return err;
+ 
+@@ -230,6 +232,7 @@ static int update_buffer_lossy(unsigned int mtu,
+ 	return 0;
+ }
+ 
++#define MINIMUM_MAX_MTU 9216
+ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
+ 				    u32 change, unsigned int mtu,
+ 				    struct ieee_pfc *pfc,
+@@ -241,12 +244,14 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
+ 	bool update_prio2buffer = false;
+ 	u8 buffer[MLX5E_MAX_PRIORITY];
+ 	bool update_buffer = false;
++	unsigned int max_mtu;
+ 	u32 total_used = 0;
+ 	u8 curr_pfc_en;
+ 	int err;
+ 	int i;
+ 
+ 	mlx5e_dbg(HW, priv, "%s: change=%x\n", __func__, change);
++	max_mtu = max_t(unsigned int, priv->netdev->max_mtu, MINIMUM_MAX_MTU);
+ 
+ 	err = mlx5e_port_query_buffer(priv, &port_buffer);
+ 	if (err)
+@@ -254,7 +259,7 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
+ 
+ 	if (change & MLX5E_PORT_BUFFER_CABLE_LEN) {
+ 		update_buffer = true;
+-		err = update_xoff_threshold(&port_buffer, xoff, mtu);
++		err = update_xoff_threshold(&port_buffer, xoff, max_mtu);
+ 		if (err)
+ 			return err;
+ 	}
+@@ -264,7 +269,7 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
+ 		if (err)
+ 			return err;
+ 
+-		err = update_buffer_lossy(mtu, pfc->pfc_en, buffer, xoff,
++		err = update_buffer_lossy(max_mtu, pfc->pfc_en, buffer, xoff,
+ 					  &port_buffer, &update_buffer);
+ 		if (err)
+ 			return err;
+@@ -276,8 +281,8 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
+ 		if (err)
+ 			return err;
+ 
+-		err = update_buffer_lossy(mtu, curr_pfc_en, prio2buffer, xoff,
+-					  &port_buffer, &update_buffer);
++		err = update_buffer_lossy(max_mtu, curr_pfc_en, prio2buffer,
++					  xoff, &port_buffer, &update_buffer);
+ 		if (err)
+ 			return err;
+ 	}
+@@ -301,7 +306,7 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
+ 			return -EINVAL;
+ 
+ 		update_buffer = true;
+-		err = update_xoff_threshold(&port_buffer, xoff, mtu);
++		err = update_xoff_threshold(&port_buffer, xoff, max_mtu);
+ 		if (err)
+ 			return err;
+ 	}
+@@ -309,7 +314,7 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
+ 	/* Need to update buffer configuration if xoff value is changed */
+ 	if (!update_buffer && xoff != priv->dcbx.xoff) {
+ 		update_buffer = true;
+-		err = update_xoff_threshold(&port_buffer, xoff, mtu);
++		err = update_xoff_threshold(&port_buffer, xoff, max_mtu);
+ 		if (err)
+ 			return err;
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_common.c b/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
+index 3078491cc0d0..1539cf3de5dc 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
+@@ -45,7 +45,9 @@ int mlx5e_create_tir(struct mlx5_core_dev *mdev,
+ 	if (err)
+ 		return err;
+ 
++	mutex_lock(&mdev->mlx5e_res.td.list_lock);
+ 	list_add(&tir->list, &mdev->mlx5e_res.td.tirs_list);
++	mutex_unlock(&mdev->mlx5e_res.td.list_lock);
+ 
+ 	return 0;
+ }
+@@ -53,8 +55,10 @@ int mlx5e_create_tir(struct mlx5_core_dev *mdev,
+ void mlx5e_destroy_tir(struct mlx5_core_dev *mdev,
+ 		       struct mlx5e_tir *tir)
+ {
++	mutex_lock(&mdev->mlx5e_res.td.list_lock);
+ 	mlx5_core_destroy_tir(mdev, tir->tirn);
+ 	list_del(&tir->list);
++	mutex_unlock(&mdev->mlx5e_res.td.list_lock);
+ }
+ 
+ static int mlx5e_create_mkey(struct mlx5_core_dev *mdev, u32 pdn,
+@@ -114,6 +118,7 @@ int mlx5e_create_mdev_resources(struct mlx5_core_dev *mdev)
+ 	}
+ 
+ 	INIT_LIST_HEAD(&mdev->mlx5e_res.td.tirs_list);
++	mutex_init(&mdev->mlx5e_res.td.list_lock);
+ 
+ 	return 0;
+ 
+@@ -141,15 +146,17 @@ int mlx5e_refresh_tirs(struct mlx5e_priv *priv, bool enable_uc_lb)
+ {
+ 	struct mlx5_core_dev *mdev = priv->mdev;
+ 	struct mlx5e_tir *tir;
+-	int err  = -ENOMEM;
++	int err  = 0;
+ 	u32 tirn = 0;
+ 	int inlen;
+ 	void *in;
+ 
+ 	inlen = MLX5_ST_SZ_BYTES(modify_tir_in);
+ 	in = kvzalloc(inlen, GFP_KERNEL);
+-	if (!in)
++	if (!in) {
++		err = -ENOMEM;
+ 		goto out;
++	}
+ 
+ 	if (enable_uc_lb)
+ 		MLX5_SET(modify_tir_in, in, ctx.self_lb_block,
+@@ -157,6 +164,7 @@ int mlx5e_refresh_tirs(struct mlx5e_priv *priv, bool enable_uc_lb)
+ 
+ 	MLX5_SET(modify_tir_in, in, bitmask.self_lb_en, 1);
+ 
++	mutex_lock(&mdev->mlx5e_res.td.list_lock);
+ 	list_for_each_entry(tir, &mdev->mlx5e_res.td.tirs_list, list) {
+ 		tirn = tir->tirn;
+ 		err = mlx5_core_modify_tir(mdev, tirn, in, inlen);
+@@ -168,6 +176,7 @@ out:
+ 	kvfree(in);
+ 	if (err)
+ 		netdev_err(priv->netdev, "refresh tir(0x%x) failed, %d\n", tirn, err);
++	mutex_unlock(&mdev->mlx5e_res.td.list_lock);
+ 
+ 	return err;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+index 47233b9a4f81..e6099f51d25f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+@@ -357,6 +357,9 @@ int mlx5e_ethtool_set_channels(struct mlx5e_priv *priv,
+ 
+ 	if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) {
+ 		priv->channels.params = new_channels.params;
++		if (!netif_is_rxfh_configured(priv->netdev))
++			mlx5e_build_default_indir_rqt(priv->rss_params.indirection_rqt,
++						      MLX5E_INDIR_RQT_SIZE, count);
+ 		goto out;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+index 5b492b67f4e1..13c48883ed61 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+@@ -1812,7 +1812,7 @@ int mlx5_eswitch_set_vport_mac(struct mlx5_eswitch *esw,
+ 	u64 node_guid;
+ 	int err = 0;
+ 
+-	if (!MLX5_CAP_GEN(esw->dev, vport_group_manager))
++	if (!esw || !MLX5_CAP_GEN(esw->dev, vport_group_manager))
+ 		return -EPERM;
+ 	if (!LEGAL_VPORT(esw, vport) || is_multicast_ether_addr(mac))
+ 		return -EINVAL;
+@@ -1886,7 +1886,7 @@ int mlx5_eswitch_get_vport_config(struct mlx5_eswitch *esw,
+ {
+ 	struct mlx5_vport *evport;
+ 
+-	if (!MLX5_CAP_GEN(esw->dev, vport_group_manager))
++	if (!esw || !MLX5_CAP_GEN(esw->dev, vport_group_manager))
+ 		return -EPERM;
+ 	if (!LEGAL_VPORT(esw, vport))
+ 		return -EINVAL;
+@@ -2059,19 +2059,24 @@ static int normalize_vports_min_rate(struct mlx5_eswitch *esw, u32 divider)
+ int mlx5_eswitch_set_vport_rate(struct mlx5_eswitch *esw, int vport,
+ 				u32 max_rate, u32 min_rate)
+ {
+-	u32 fw_max_bw_share = MLX5_CAP_QOS(esw->dev, max_tsar_bw_share);
+-	bool min_rate_supported = MLX5_CAP_QOS(esw->dev, esw_bw_share) &&
+-					fw_max_bw_share >= MLX5_MIN_BW_SHARE;
+-	bool max_rate_supported = MLX5_CAP_QOS(esw->dev, esw_rate_limit);
+ 	struct mlx5_vport *evport;
++	u32 fw_max_bw_share;
+ 	u32 previous_min_rate;
+ 	u32 divider;
++	bool min_rate_supported;
++	bool max_rate_supported;
+ 	int err = 0;
+ 
+ 	if (!ESW_ALLOWED(esw))
+ 		return -EPERM;
+ 	if (!LEGAL_VPORT(esw, vport))
+ 		return -EINVAL;
++
++	fw_max_bw_share = MLX5_CAP_QOS(esw->dev, max_tsar_bw_share);
++	min_rate_supported = MLX5_CAP_QOS(esw->dev, esw_bw_share) &&
++				fw_max_bw_share >= MLX5_MIN_BW_SHARE;
++	max_rate_supported = MLX5_CAP_QOS(esw->dev, esw_rate_limit);
++
+ 	if ((min_rate && !min_rate_supported) || (max_rate && !max_rate_supported))
+ 		return -EOPNOTSUPP;
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fpga/tls.c b/drivers/net/ethernet/mellanox/mlx5/core/fpga/tls.c
+index 5cf5f2a9d51f..8de64e88c670 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fpga/tls.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fpga/tls.c
+@@ -217,15 +217,21 @@ int mlx5_fpga_tls_resync_rx(struct mlx5_core_dev *mdev, u32 handle, u32 seq,
+ 	void *cmd;
+ 	int ret;
+ 
++	rcu_read_lock();
++	flow = idr_find(&mdev->fpga->tls->rx_idr, ntohl(handle));
++	rcu_read_unlock();
++
++	if (!flow) {
++		WARN_ONCE(1, "Received NULL pointer for handle\n");
++		return -EINVAL;
++	}
++
+ 	buf = kzalloc(size, GFP_ATOMIC);
+ 	if (!buf)
+ 		return -ENOMEM;
+ 
+ 	cmd = (buf + 1);
+ 
+-	rcu_read_lock();
+-	flow = idr_find(&mdev->fpga->tls->rx_idr, ntohl(handle));
+-	rcu_read_unlock();
+ 	mlx5_fpga_tls_flow_to_cmd(flow, cmd);
+ 
+ 	MLX5_SET(tls_cmd, cmd, swid, ntohl(handle));
+@@ -238,6 +244,8 @@ int mlx5_fpga_tls_resync_rx(struct mlx5_core_dev *mdev, u32 handle, u32 seq,
+ 	buf->complete = mlx_tls_kfree_complete;
+ 
+ 	ret = mlx5_fpga_sbu_conn_sendmsg(mdev->fpga->tls->conn, buf);
++	if (ret < 0)
++		kfree(buf);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index be81b319b0dc..694edd899322 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -163,26 +163,6 @@ static struct mlx5_profile profile[] = {
+ 			.size	= 8,
+ 			.limit	= 4
+ 		},
+-		.mr_cache[16]	= {
+-			.size	= 8,
+-			.limit	= 4
+-		},
+-		.mr_cache[17]	= {
+-			.size	= 8,
+-			.limit	= 4
+-		},
+-		.mr_cache[18]	= {
+-			.size	= 8,
+-			.limit	= 4
+-		},
+-		.mr_cache[19]	= {
+-			.size	= 4,
+-			.limit	= 2
+-		},
+-		.mr_cache[20]	= {
+-			.size	= 4,
+-			.limit	= 2
+-		},
+ 	},
+ };
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/qp.c b/drivers/net/ethernet/mellanox/mlx5/core/qp.c
+index 370ca94b6775..c7c2920c05c4 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/qp.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/qp.c
+@@ -40,6 +40,9 @@
+ #include "mlx5_core.h"
+ #include "lib/eq.h"
+ 
++static int mlx5_core_drain_dct(struct mlx5_core_dev *dev,
++			       struct mlx5_core_dct *dct);
++
+ static struct mlx5_core_rsc_common *
+ mlx5_get_rsc(struct mlx5_qp_table *table, u32 rsn)
+ {
+@@ -227,13 +230,42 @@ static void destroy_resource_common(struct mlx5_core_dev *dev,
+ 	wait_for_completion(&qp->common.free);
+ }
+ 
++static int _mlx5_core_destroy_dct(struct mlx5_core_dev *dev,
++				  struct mlx5_core_dct *dct, bool need_cleanup)
++{
++	u32 out[MLX5_ST_SZ_DW(destroy_dct_out)] = {0};
++	u32 in[MLX5_ST_SZ_DW(destroy_dct_in)]   = {0};
++	struct mlx5_core_qp *qp = &dct->mqp;
++	int err;
++
++	err = mlx5_core_drain_dct(dev, dct);
++	if (err) {
++		if (dev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR) {
++			goto destroy;
++		} else {
++			mlx5_core_warn(
++				dev, "failed drain DCT 0x%x with error 0x%x\n",
++				qp->qpn, err);
++			return err;
++		}
++	}
++	wait_for_completion(&dct->drained);
++destroy:
++	if (need_cleanup)
++		destroy_resource_common(dev, &dct->mqp);
++	MLX5_SET(destroy_dct_in, in, opcode, MLX5_CMD_OP_DESTROY_DCT);
++	MLX5_SET(destroy_dct_in, in, dctn, qp->qpn);
++	MLX5_SET(destroy_dct_in, in, uid, qp->uid);
++	err = mlx5_cmd_exec(dev, (void *)&in, sizeof(in),
++			    (void *)&out, sizeof(out));
++	return err;
++}
++
+ int mlx5_core_create_dct(struct mlx5_core_dev *dev,
+ 			 struct mlx5_core_dct *dct,
+ 			 u32 *in, int inlen)
+ {
+ 	u32 out[MLX5_ST_SZ_DW(create_dct_out)]   = {0};
+-	u32 din[MLX5_ST_SZ_DW(destroy_dct_in)]   = {0};
+-	u32 dout[MLX5_ST_SZ_DW(destroy_dct_out)] = {0};
+ 	struct mlx5_core_qp *qp = &dct->mqp;
+ 	int err;
+ 
+@@ -254,11 +286,7 @@ int mlx5_core_create_dct(struct mlx5_core_dev *dev,
+ 
+ 	return 0;
+ err_cmd:
+-	MLX5_SET(destroy_dct_in, din, opcode, MLX5_CMD_OP_DESTROY_DCT);
+-	MLX5_SET(destroy_dct_in, din, dctn, qp->qpn);
+-	MLX5_SET(destroy_dct_in, din, uid, qp->uid);
+-	mlx5_cmd_exec(dev, (void *)&in, sizeof(din),
+-		      (void *)&out, sizeof(dout));
++	_mlx5_core_destroy_dct(dev, dct, false);
+ 	return err;
+ }
+ EXPORT_SYMBOL_GPL(mlx5_core_create_dct);
+@@ -323,29 +351,7 @@ static int mlx5_core_drain_dct(struct mlx5_core_dev *dev,
+ int mlx5_core_destroy_dct(struct mlx5_core_dev *dev,
+ 			  struct mlx5_core_dct *dct)
+ {
+-	u32 out[MLX5_ST_SZ_DW(destroy_dct_out)] = {0};
+-	u32 in[MLX5_ST_SZ_DW(destroy_dct_in)]   = {0};
+-	struct mlx5_core_qp *qp = &dct->mqp;
+-	int err;
+-
+-	err = mlx5_core_drain_dct(dev, dct);
+-	if (err) {
+-		if (dev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR) {
+-			goto destroy;
+-		} else {
+-			mlx5_core_warn(dev, "failed drain DCT 0x%x with error 0x%x\n", qp->qpn, err);
+-			return err;
+-		}
+-	}
+-	wait_for_completion(&dct->drained);
+-destroy:
+-	destroy_resource_common(dev, &dct->mqp);
+-	MLX5_SET(destroy_dct_in, in, opcode, MLX5_CMD_OP_DESTROY_DCT);
+-	MLX5_SET(destroy_dct_in, in, dctn, qp->qpn);
+-	MLX5_SET(destroy_dct_in, in, uid, qp->uid);
+-	err = mlx5_cmd_exec(dev, (void *)&in, sizeof(in),
+-			    (void *)&out, sizeof(out));
+-	return err;
++	return _mlx5_core_destroy_dct(dev, dct, true);
+ }
+ EXPORT_SYMBOL_GPL(mlx5_core_destroy_dct);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+index b65e274b02e9..cbdee5164be7 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+@@ -2105,7 +2105,7 @@ static void mlxsw_sp_port_get_prio_strings(u8 **p, int prio)
+ 	int i;
+ 
+ 	for (i = 0; i < MLXSW_SP_PORT_HW_PRIO_STATS_LEN; i++) {
+-		snprintf(*p, ETH_GSTRING_LEN, "%s_%d",
++		snprintf(*p, ETH_GSTRING_LEN, "%.29s_%.1d",
+ 			 mlxsw_sp_port_hw_prio_stats[i].str, prio);
+ 		*p += ETH_GSTRING_LEN;
+ 	}
+@@ -2116,7 +2116,7 @@ static void mlxsw_sp_port_get_tc_strings(u8 **p, int tc)
+ 	int i;
+ 
+ 	for (i = 0; i < MLXSW_SP_PORT_HW_TC_STATS_LEN; i++) {
+-		snprintf(*p, ETH_GSTRING_LEN, "%s_%d",
++		snprintf(*p, ETH_GSTRING_LEN, "%.29s_%.1d",
+ 			 mlxsw_sp_port_hw_tc_stats[i].str, tc);
+ 		*p += ETH_GSTRING_LEN;
+ 	}
+diff --git a/drivers/net/ethernet/microchip/lan743x_main.c b/drivers/net/ethernet/microchip/lan743x_main.c
+index 4d1b4a24907f..13e6bf13ac4d 100644
+--- a/drivers/net/ethernet/microchip/lan743x_main.c
++++ b/drivers/net/ethernet/microchip/lan743x_main.c
+@@ -585,8 +585,7 @@ static int lan743x_intr_open(struct lan743x_adapter *adapter)
+ 
+ 		if (adapter->csr.flags &
+ 		   LAN743X_CSR_FLAG_SUPPORTS_INTR_AUTO_SET_CLR) {
+-			flags = LAN743X_VECTOR_FLAG_VECTOR_ENABLE_AUTO_CLEAR |
+-				LAN743X_VECTOR_FLAG_VECTOR_ENABLE_AUTO_SET |
++			flags = LAN743X_VECTOR_FLAG_VECTOR_ENABLE_AUTO_SET |
+ 				LAN743X_VECTOR_FLAG_SOURCE_ENABLE_AUTO_SET |
+ 				LAN743X_VECTOR_FLAG_SOURCE_ENABLE_AUTO_CLEAR |
+ 				LAN743X_VECTOR_FLAG_SOURCE_STATUS_AUTO_CLEAR;
+@@ -599,12 +598,6 @@ static int lan743x_intr_open(struct lan743x_adapter *adapter)
+ 			/* map TX interrupt to vector */
+ 			int_vec_map1 |= INT_VEC_MAP1_TX_VEC_(index, vector);
+ 			lan743x_csr_write(adapter, INT_VEC_MAP1, int_vec_map1);
+-			if (flags &
+-			    LAN743X_VECTOR_FLAG_VECTOR_ENABLE_AUTO_CLEAR) {
+-				int_vec_en_auto_clr |= INT_VEC_EN_(vector);
+-				lan743x_csr_write(adapter, INT_VEC_EN_AUTO_CLR,
+-						  int_vec_en_auto_clr);
+-			}
+ 
+ 			/* Remove TX interrupt from shared mask */
+ 			intr->vector_list[0].int_mask &= ~int_bit;
+@@ -1902,7 +1895,17 @@ static int lan743x_rx_next_index(struct lan743x_rx *rx, int index)
+ 	return ((++index) % rx->ring_size);
+ }
+ 
+-static int lan743x_rx_allocate_ring_element(struct lan743x_rx *rx, int index)
++static struct sk_buff *lan743x_rx_allocate_skb(struct lan743x_rx *rx)
++{
++	int length = 0;
++
++	length = (LAN743X_MAX_FRAME_SIZE + ETH_HLEN + 4 + RX_HEAD_PADDING);
++	return __netdev_alloc_skb(rx->adapter->netdev,
++				  length, GFP_ATOMIC | GFP_DMA);
++}
++
++static int lan743x_rx_init_ring_element(struct lan743x_rx *rx, int index,
++					struct sk_buff *skb)
+ {
+ 	struct lan743x_rx_buffer_info *buffer_info;
+ 	struct lan743x_rx_descriptor *descriptor;
+@@ -1911,9 +1914,7 @@ static int lan743x_rx_allocate_ring_element(struct lan743x_rx *rx, int index)
+ 	length = (LAN743X_MAX_FRAME_SIZE + ETH_HLEN + 4 + RX_HEAD_PADDING);
+ 	descriptor = &rx->ring_cpu_ptr[index];
+ 	buffer_info = &rx->buffer_info[index];
+-	buffer_info->skb = __netdev_alloc_skb(rx->adapter->netdev,
+-					      length,
+-					      GFP_ATOMIC | GFP_DMA);
++	buffer_info->skb = skb;
+ 	if (!(buffer_info->skb))
+ 		return -ENOMEM;
+ 	buffer_info->dma_ptr = dma_map_single(&rx->adapter->pdev->dev,
+@@ -2060,8 +2061,19 @@ static int lan743x_rx_process_packet(struct lan743x_rx *rx)
+ 		/* packet is available */
+ 		if (first_index == last_index) {
+ 			/* single buffer packet */
++			struct sk_buff *new_skb = NULL;
+ 			int packet_length;
+ 
++			new_skb = lan743x_rx_allocate_skb(rx);
++			if (!new_skb) {
++				/* failed to allocate next skb.
++				 * Memory is very low.
++				 * Drop this packet and reuse buffer.
++				 */
++				lan743x_rx_reuse_ring_element(rx, first_index);
++				goto process_extension;
++			}
++
+ 			buffer_info = &rx->buffer_info[first_index];
+ 			skb = buffer_info->skb;
+ 			descriptor = &rx->ring_cpu_ptr[first_index];
+@@ -2081,7 +2093,7 @@ static int lan743x_rx_process_packet(struct lan743x_rx *rx)
+ 			skb_put(skb, packet_length - 4);
+ 			skb->protocol = eth_type_trans(skb,
+ 						       rx->adapter->netdev);
+-			lan743x_rx_allocate_ring_element(rx, first_index);
++			lan743x_rx_init_ring_element(rx, first_index, new_skb);
+ 		} else {
+ 			int index = first_index;
+ 
+@@ -2094,26 +2106,23 @@ static int lan743x_rx_process_packet(struct lan743x_rx *rx)
+ 			if (first_index <= last_index) {
+ 				while ((index >= first_index) &&
+ 				       (index <= last_index)) {
+-					lan743x_rx_release_ring_element(rx,
+-									index);
+-					lan743x_rx_allocate_ring_element(rx,
+-									 index);
++					lan743x_rx_reuse_ring_element(rx,
++								      index);
+ 					index = lan743x_rx_next_index(rx,
+ 								      index);
+ 				}
+ 			} else {
+ 				while ((index >= first_index) ||
+ 				       (index <= last_index)) {
+-					lan743x_rx_release_ring_element(rx,
+-									index);
+-					lan743x_rx_allocate_ring_element(rx,
+-									 index);
++					lan743x_rx_reuse_ring_element(rx,
++								      index);
+ 					index = lan743x_rx_next_index(rx,
+ 								      index);
+ 				}
+ 			}
+ 		}
+ 
++process_extension:
+ 		if (extension_index >= 0) {
+ 			descriptor = &rx->ring_cpu_ptr[extension_index];
+ 			buffer_info = &rx->buffer_info[extension_index];
+@@ -2290,7 +2299,9 @@ static int lan743x_rx_ring_init(struct lan743x_rx *rx)
+ 
+ 	rx->last_head = 0;
+ 	for (index = 0; index < rx->ring_size; index++) {
+-		ret = lan743x_rx_allocate_ring_element(rx, index);
++		struct sk_buff *new_skb = lan743x_rx_allocate_skb(rx);
++
++		ret = lan743x_rx_init_ring_element(rx, index, new_skb);
+ 		if (ret)
+ 			goto cleanup;
+ 	}
+diff --git a/drivers/net/ethernet/mscc/ocelot_board.c b/drivers/net/ethernet/mscc/ocelot_board.c
+index ca3ea2fbfcd0..80d87798c62b 100644
+--- a/drivers/net/ethernet/mscc/ocelot_board.c
++++ b/drivers/net/ethernet/mscc/ocelot_board.c
+@@ -267,6 +267,7 @@ static int mscc_ocelot_probe(struct platform_device *pdev)
+ 		struct phy *serdes;
+ 		void __iomem *regs;
+ 		char res_name[8];
++		int phy_mode;
+ 		u32 port;
+ 
+ 		if (of_property_read_u32(portnp, "reg", &port))
+@@ -292,11 +293,11 @@ static int mscc_ocelot_probe(struct platform_device *pdev)
+ 		if (err)
+ 			return err;
+ 
+-		err = of_get_phy_mode(portnp);
+-		if (err < 0)
++		phy_mode = of_get_phy_mode(portnp);
++		if (phy_mode < 0)
+ 			ocelot->ports[port]->phy_mode = PHY_INTERFACE_MODE_NA;
+ 		else
+-			ocelot->ports[port]->phy_mode = err;
++			ocelot->ports[port]->phy_mode = phy_mode;
+ 
+ 		switch (ocelot->ports[port]->phy_mode) {
+ 		case PHY_INTERFACE_MODE_NA:
+@@ -304,6 +305,13 @@ static int mscc_ocelot_probe(struct platform_device *pdev)
+ 		case PHY_INTERFACE_MODE_SGMII:
+ 			break;
+ 		case PHY_INTERFACE_MODE_QSGMII:
++			/* Ensure clock signals and speed is set on all
++			 * QSGMII links
++			 */
++			ocelot_port_writel(ocelot->ports[port],
++					   DEV_CLOCK_CFG_LINK_SPEED
++					   (OCELOT_SPEED_1000),
++					   DEV_CLOCK_CFG);
+ 			break;
+ 		default:
+ 			dev_err(ocelot->dev,
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_repr.c b/drivers/net/ethernet/netronome/nfp/nfp_net_repr.c
+index 69d7aebda09b..73db94e55fd0 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net_repr.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_repr.c
+@@ -196,7 +196,7 @@ static netdev_tx_t nfp_repr_xmit(struct sk_buff *skb, struct net_device *netdev)
+ 	ret = dev_queue_xmit(skb);
+ 	nfp_repr_inc_tx_stats(netdev, len, ret);
+ 
+-	return ret;
++	return NETDEV_TX_OK;
+ }
+ 
+ static int nfp_repr_stop(struct net_device *netdev)
+@@ -384,7 +384,7 @@ int nfp_repr_init(struct nfp_app *app, struct net_device *netdev,
+ 	netdev->features &= ~(NETIF_F_TSO | NETIF_F_TSO6);
+ 	netdev->gso_max_segs = NFP_NET_LSO_MAX_SEGS;
+ 
+-	netdev->priv_flags |= IFF_NO_QUEUE;
++	netdev->priv_flags |= IFF_NO_QUEUE | IFF_DISABLE_NETPOLL;
+ 	netdev->features |= NETIF_F_LLTX;
+ 
+ 	if (nfp_app_has_tc(app)) {
+diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
+index 6e36b88ca7c9..365cddbfc684 100644
+--- a/drivers/net/ethernet/realtek/r8169.c
++++ b/drivers/net/ethernet/realtek/r8169.c
+@@ -28,6 +28,7 @@
+ #include <linux/pm_runtime.h>
+ #include <linux/firmware.h>
+ #include <linux/prefetch.h>
++#include <linux/pci-aspm.h>
+ #include <linux/ipv6.h>
+ #include <net/ip6_checksum.h>
+ 
+@@ -5332,7 +5333,7 @@ static void rtl_hw_start_8168(struct rtl8169_private *tp)
+ 	tp->cp_cmd |= PktCntrDisable | INTT_1;
+ 	RTL_W16(tp, CPlusCmd, tp->cp_cmd);
+ 
+-	RTL_W16(tp, IntrMitigate, 0x5151);
++	RTL_W16(tp, IntrMitigate, 0x5100);
+ 
+ 	/* Work around for RxFIFO overflow. */
+ 	if (tp->mac_version == RTL_GIGA_MAC_VER_11) {
+@@ -6435,7 +6436,7 @@ static irqreturn_t rtl8169_interrupt(int irq, void *dev_instance)
+ 		set_bit(RTL_FLAG_TASK_RESET_PENDING, tp->wk.flags);
+ 	}
+ 
+-	if (status & RTL_EVENT_NAPI) {
++	if (status & (RTL_EVENT_NAPI | LinkChg)) {
+ 		rtl_irq_disable(tp);
+ 		napi_schedule_irqoff(&tp->napi);
+ 	}
+@@ -7224,6 +7225,11 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 			return rc;
+ 	}
+ 
++	/* Disable ASPM completely as it causes devices to randomly stop
++	 * working, as well as full system hangs, for some PCIe device users.
++	 */
++	pci_disable_link_state(pdev, PCIE_LINK_STATE_L0S | PCIE_LINK_STATE_L1);
++
+ 	/* enable device (incl. PCI PM wakeup and hotplug setup) */
+ 	rc = pcim_enable_device(pdev);
+ 	if (rc < 0) {
+diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
+index d28c8f9ca55b..8154b38c08f7 100644
+--- a/drivers/net/ethernet/renesas/ravb_main.c
++++ b/drivers/net/ethernet/renesas/ravb_main.c
+@@ -458,7 +458,7 @@ static int ravb_dmac_init(struct net_device *ndev)
+ 		   RCR_EFFS | RCR_ENCF | RCR_ETS0 | RCR_ESF | 0x18000000, RCR);
+ 
+ 	/* Set FIFO size */
+-	ravb_write(ndev, TGC_TQP_AVBMODE1 | 0x00222200, TGC);
++	ravb_write(ndev, TGC_TQP_AVBMODE1 | 0x00112200, TGC);
+ 
+ 	/* Timestamp enable */
+ 	ravb_write(ndev, TCCR_TFEN, TCCR);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/ring_mode.c b/drivers/net/ethernet/stmicro/stmmac/ring_mode.c
+index d8c5bc412219..c0c75c111abb 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/ring_mode.c
++++ b/drivers/net/ethernet/stmicro/stmmac/ring_mode.c
+@@ -111,10 +111,11 @@ static unsigned int is_jumbo_frm(int len, int enh_desc)
+ 
+ static void refill_desc3(void *priv_ptr, struct dma_desc *p)
+ {
+-	struct stmmac_priv *priv = (struct stmmac_priv *)priv_ptr;
++	struct stmmac_rx_queue *rx_q = priv_ptr;
++	struct stmmac_priv *priv = rx_q->priv_data;
+ 
+ 	/* Fill DES3 in case of RING mode */
+-	if (priv->dma_buf_sz >= BUF_SIZE_8KiB)
++	if (priv->dma_buf_sz == BUF_SIZE_16KiB)
+ 		p->des3 = cpu_to_le32(le32_to_cpu(p->des2) + BUF_SIZE_8KiB);
+ }
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 685d20472358..019ab99e65bb 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -474,7 +474,7 @@ static void stmmac_get_tx_hwtstamp(struct stmmac_priv *priv,
+ 				   struct dma_desc *p, struct sk_buff *skb)
+ {
+ 	struct skb_shared_hwtstamps shhwtstamp;
+-	u64 ns;
++	u64 ns = 0;
+ 
+ 	if (!priv->hwts_tx_en)
+ 		return;
+@@ -513,7 +513,7 @@ static void stmmac_get_rx_hwtstamp(struct stmmac_priv *priv, struct dma_desc *p,
+ {
+ 	struct skb_shared_hwtstamps *shhwtstamp = NULL;
+ 	struct dma_desc *desc = p;
+-	u64 ns;
++	u64 ns = 0;
+ 
+ 	if (!priv->hwts_rx_en)
+ 		return;
+@@ -558,8 +558,8 @@ static int stmmac_hwtstamp_ioctl(struct net_device *dev, struct ifreq *ifr)
+ 	u32 snap_type_sel = 0;
+ 	u32 ts_master_en = 0;
+ 	u32 ts_event_en = 0;
++	u32 sec_inc = 0;
+ 	u32 value = 0;
+-	u32 sec_inc;
+ 	bool xmac;
+ 
+ 	xmac = priv->plat->has_gmac4 || priv->plat->has_xgmac;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
+index 2293e21f789f..cc60b3fb0892 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
+@@ -105,7 +105,7 @@ static int stmmac_get_time(struct ptp_clock_info *ptp, struct timespec64 *ts)
+ 	struct stmmac_priv *priv =
+ 	    container_of(ptp, struct stmmac_priv, ptp_clock_ops);
+ 	unsigned long flags;
+-	u64 ns;
++	u64 ns = 0;
+ 
+ 	spin_lock_irqsave(&priv->ptp_lock, flags);
+ 	stmmac_get_systime(priv, priv->ptpaddr, &ns);
+diff --git a/drivers/net/hyperv/hyperv_net.h b/drivers/net/hyperv/hyperv_net.h
+index e859ae2e42d5..49f41b64077b 100644
+--- a/drivers/net/hyperv/hyperv_net.h
++++ b/drivers/net/hyperv/hyperv_net.h
+@@ -987,6 +987,7 @@ struct netvsc_device {
+ 
+ 	wait_queue_head_t wait_drain;
+ 	bool destroy;
++	bool tx_disable; /* if true, do not wake up queue again */
+ 
+ 	/* Receive buffer allocated by us but manages by NetVSP */
+ 	void *recv_buf;
+diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
+index 813d195bbd57..e0dce373cdd9 100644
+--- a/drivers/net/hyperv/netvsc.c
++++ b/drivers/net/hyperv/netvsc.c
+@@ -110,6 +110,7 @@ static struct netvsc_device *alloc_net_device(void)
+ 
+ 	init_waitqueue_head(&net_device->wait_drain);
+ 	net_device->destroy = false;
++	net_device->tx_disable = false;
+ 
+ 	net_device->max_pkt = RNDIS_MAX_PKT_DEFAULT;
+ 	net_device->pkt_align = RNDIS_PKT_ALIGN_DEFAULT;
+@@ -719,7 +720,7 @@ static void netvsc_send_tx_complete(struct net_device *ndev,
+ 	} else {
+ 		struct netdev_queue *txq = netdev_get_tx_queue(ndev, q_idx);
+ 
+-		if (netif_tx_queue_stopped(txq) &&
++		if (netif_tx_queue_stopped(txq) && !net_device->tx_disable &&
+ 		    (hv_get_avail_to_write_percent(&channel->outbound) >
+ 		     RING_AVAIL_PERCENT_HIWATER || queue_sends < 1)) {
+ 			netif_tx_wake_queue(txq);
+@@ -874,7 +875,8 @@ static inline int netvsc_send_pkt(
+ 	} else if (ret == -EAGAIN) {
+ 		netif_tx_stop_queue(txq);
+ 		ndev_ctx->eth_stats.stop_queue++;
+-		if (atomic_read(&nvchan->queue_sends) < 1) {
++		if (atomic_read(&nvchan->queue_sends) < 1 &&
++		    !net_device->tx_disable) {
+ 			netif_tx_wake_queue(txq);
+ 			ndev_ctx->eth_stats.wake_queue++;
+ 			ret = -ENOSPC;
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index cf4897043e83..b20fb0fb595b 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -109,6 +109,15 @@ static void netvsc_set_rx_mode(struct net_device *net)
+ 	rcu_read_unlock();
+ }
+ 
++static void netvsc_tx_enable(struct netvsc_device *nvscdev,
++			     struct net_device *ndev)
++{
++	nvscdev->tx_disable = false;
++	virt_wmb(); /* ensure queue wake up mechanism is on */
++
++	netif_tx_wake_all_queues(ndev);
++}
++
+ static int netvsc_open(struct net_device *net)
+ {
+ 	struct net_device_context *ndev_ctx = netdev_priv(net);
+@@ -129,7 +138,7 @@ static int netvsc_open(struct net_device *net)
+ 	rdev = nvdev->extension;
+ 	if (!rdev->link_state) {
+ 		netif_carrier_on(net);
+-		netif_tx_wake_all_queues(net);
++		netvsc_tx_enable(nvdev, net);
+ 	}
+ 
+ 	if (vf_netdev) {
+@@ -184,6 +193,17 @@ static int netvsc_wait_until_empty(struct netvsc_device *nvdev)
+ 	}
+ }
+ 
++static void netvsc_tx_disable(struct netvsc_device *nvscdev,
++			      struct net_device *ndev)
++{
++	if (nvscdev) {
++		nvscdev->tx_disable = true;
++		virt_wmb(); /* ensure txq will not wake up after stop */
++	}
++
++	netif_tx_disable(ndev);
++}
++
+ static int netvsc_close(struct net_device *net)
+ {
+ 	struct net_device_context *net_device_ctx = netdev_priv(net);
+@@ -192,7 +212,7 @@ static int netvsc_close(struct net_device *net)
+ 	struct netvsc_device *nvdev = rtnl_dereference(net_device_ctx->nvdev);
+ 	int ret;
+ 
+-	netif_tx_disable(net);
++	netvsc_tx_disable(nvdev, net);
+ 
+ 	/* No need to close rndis filter if it is removed already */
+ 	if (!nvdev)
+@@ -920,7 +940,7 @@ static int netvsc_detach(struct net_device *ndev,
+ 
+ 	/* If device was up (receiving) then shutdown */
+ 	if (netif_running(ndev)) {
+-		netif_tx_disable(ndev);
++		netvsc_tx_disable(nvdev, ndev);
+ 
+ 		ret = rndis_filter_close(nvdev);
+ 		if (ret) {
+@@ -1908,7 +1928,7 @@ static void netvsc_link_change(struct work_struct *w)
+ 		if (rdev->link_state) {
+ 			rdev->link_state = false;
+ 			netif_carrier_on(net);
+-			netif_tx_wake_all_queues(net);
++			netvsc_tx_enable(net_device, net);
+ 		} else {
+ 			notify = true;
+ 		}
+@@ -1918,7 +1938,7 @@ static void netvsc_link_change(struct work_struct *w)
+ 		if (!rdev->link_state) {
+ 			rdev->link_state = true;
+ 			netif_carrier_off(net);
+-			netif_tx_stop_all_queues(net);
++			netvsc_tx_disable(net_device, net);
+ 		}
+ 		kfree(event);
+ 		break;
+@@ -1927,7 +1947,7 @@ static void netvsc_link_change(struct work_struct *w)
+ 		if (!rdev->link_state) {
+ 			rdev->link_state = true;
+ 			netif_carrier_off(net);
+-			netif_tx_stop_all_queues(net);
++			netvsc_tx_disable(net_device, net);
+ 			event->event = RNDIS_STATUS_MEDIA_CONNECT;
+ 			spin_lock_irqsave(&ndev_ctx->lock, flags);
+ 			list_add(&event->list, &ndev_ctx->reconfig_events);
+diff --git a/drivers/net/phy/meson-gxl.c b/drivers/net/phy/meson-gxl.c
+index 3ddaf9595697..68af4c75ffb3 100644
+--- a/drivers/net/phy/meson-gxl.c
++++ b/drivers/net/phy/meson-gxl.c
+@@ -211,6 +211,7 @@ static int meson_gxl_ack_interrupt(struct phy_device *phydev)
+ static int meson_gxl_config_intr(struct phy_device *phydev)
+ {
+ 	u16 val;
++	int ret;
+ 
+ 	if (phydev->interrupts == PHY_INTERRUPT_ENABLED) {
+ 		val = INTSRC_ANEG_PR
+@@ -223,6 +224,11 @@ static int meson_gxl_config_intr(struct phy_device *phydev)
+ 		val = 0;
+ 	}
+ 
++	/* Ack any pending IRQ */
++	ret = meson_gxl_ack_interrupt(phydev);
++	if (ret)
++		return ret;
++
+ 	return phy_write(phydev, INTSRC_MASK, val);
+ }
+ 
+diff --git a/drivers/net/phy/phy-c45.c b/drivers/net/phy/phy-c45.c
+index 03af927fa5ad..e39bf0428dd9 100644
+--- a/drivers/net/phy/phy-c45.c
++++ b/drivers/net/phy/phy-c45.c
+@@ -147,9 +147,15 @@ int genphy_c45_read_link(struct phy_device *phydev, u32 mmd_mask)
+ 		mmd_mask &= ~BIT(devad);
+ 
+ 		/* The link state is latched low so that momentary link
+-		 * drops can be detected.  Do not double-read the status
+-		 * register if the link is down.
++		 * drops can be detected. Do not double-read the status
++		 * in polling mode to detect such short link drops.
+ 		 */
++		if (!phy_polling_mode(phydev)) {
++			val = phy_read_mmd(phydev, devad, MDIO_STAT1);
++			if (val < 0)
++				return val;
++		}
++
+ 		val = phy_read_mmd(phydev, devad, MDIO_STAT1);
+ 		if (val < 0)
+ 			return val;
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index 46c86725a693..adf79614c2db 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -1683,10 +1683,15 @@ int genphy_update_link(struct phy_device *phydev)
+ {
+ 	int status;
+ 
+-	/* Do a fake read */
+-	status = phy_read(phydev, MII_BMSR);
+-	if (status < 0)
+-		return status;
++	/* The link state is latched low so that momentary link
++	 * drops can be detected. Do not double-read the status
++	 * in polling mode to detect such short link drops.
++	 */
++	if (!phy_polling_mode(phydev)) {
++		status = phy_read(phydev, MII_BMSR);
++		if (status < 0)
++			return status;
++	}
+ 
+ 	/* Read link and autonegotiation status */
+ 	status = phy_read(phydev, MII_BMSR);
+@@ -1827,7 +1832,7 @@ int genphy_soft_reset(struct phy_device *phydev)
+ {
+ 	int ret;
+ 
+-	ret = phy_write(phydev, MII_BMCR, BMCR_RESET);
++	ret = phy_set_bits(phydev, MII_BMCR, BMCR_RESET);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/net/ppp/pptp.c b/drivers/net/ppp/pptp.c
+index 8f09edd811e9..50c60550f295 100644
+--- a/drivers/net/ppp/pptp.c
++++ b/drivers/net/ppp/pptp.c
+@@ -532,6 +532,7 @@ static void pptp_sock_destruct(struct sock *sk)
+ 		pppox_unbind_sock(sk);
+ 	}
+ 	skb_queue_purge(&sk->sk_receive_queue);
++	dst_release(rcu_dereference_protected(sk->sk_dst_cache, 1));
+ }
+ 
+ static int pptp_create(struct net *net, struct socket *sock, int kern)
+diff --git a/drivers/net/team/team_mode_loadbalance.c b/drivers/net/team/team_mode_loadbalance.c
+index a5ef97010eb3..5541e1c19936 100644
+--- a/drivers/net/team/team_mode_loadbalance.c
++++ b/drivers/net/team/team_mode_loadbalance.c
+@@ -325,6 +325,20 @@ static int lb_bpf_func_set(struct team *team, struct team_gsetter_ctx *ctx)
+ 	return 0;
+ }
+ 
++static void lb_bpf_func_free(struct team *team)
++{
++	struct lb_priv *lb_priv = get_lb_priv(team);
++	struct bpf_prog *fp;
++
++	if (!lb_priv->ex->orig_fprog)
++		return;
++
++	__fprog_destroy(lb_priv->ex->orig_fprog);
++	fp = rcu_dereference_protected(lb_priv->fp,
++				       lockdep_is_held(&team->lock));
++	bpf_prog_destroy(fp);
++}
++
+ static int lb_tx_method_get(struct team *team, struct team_gsetter_ctx *ctx)
+ {
+ 	struct lb_priv *lb_priv = get_lb_priv(team);
+@@ -639,6 +653,7 @@ static void lb_exit(struct team *team)
+ 
+ 	team_options_unregister(team, lb_options,
+ 				ARRAY_SIZE(lb_options));
++	lb_bpf_func_free(team);
+ 	cancel_delayed_work_sync(&lb_priv->ex->stats.refresh_dw);
+ 	free_percpu(lb_priv->pcpu_stats);
+ 	kfree(lb_priv->ex);
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index 53f4f37b0ffd..448d5439ff6a 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -1763,9 +1763,6 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
+ 	int skb_xdp = 1;
+ 	bool frags = tun_napi_frags_enabled(tfile);
+ 
+-	if (!(tun->dev->flags & IFF_UP))
+-		return -EIO;
+-
+ 	if (!(tun->flags & IFF_NO_PI)) {
+ 		if (len < sizeof(pi))
+ 			return -EINVAL;
+@@ -1867,6 +1864,8 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
+ 			err = skb_copy_datagram_from_iter(skb, 0, from, len);
+ 
+ 		if (err) {
++			err = -EFAULT;
++drop:
+ 			this_cpu_inc(tun->pcpu_stats->rx_dropped);
+ 			kfree_skb(skb);
+ 			if (frags) {
+@@ -1874,7 +1873,7 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
+ 				mutex_unlock(&tfile->napi_mutex);
+ 			}
+ 
+-			return -EFAULT;
++			return err;
+ 		}
+ 	}
+ 
+@@ -1958,6 +1957,13 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
+ 	    !tfile->detached)
+ 		rxhash = __skb_get_hash_symmetric(skb);
+ 
++	rcu_read_lock();
++	if (unlikely(!(tun->dev->flags & IFF_UP))) {
++		err = -EIO;
++		rcu_read_unlock();
++		goto drop;
++	}
++
+ 	if (frags) {
+ 		/* Exercise flow dissector code path. */
+ 		u32 headlen = eth_get_headlen(skb->data, skb_headlen(skb));
+@@ -1965,6 +1971,7 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
+ 		if (unlikely(headlen > skb_headlen(skb))) {
+ 			this_cpu_inc(tun->pcpu_stats->rx_dropped);
+ 			napi_free_frags(&tfile->napi);
++			rcu_read_unlock();
+ 			mutex_unlock(&tfile->napi_mutex);
+ 			WARN_ON(1);
+ 			return -ENOMEM;
+@@ -1992,6 +1999,7 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
+ 	} else {
+ 		netif_rx_ni(skb);
+ 	}
++	rcu_read_unlock();
+ 
+ 	stats = get_cpu_ptr(tun->pcpu_stats);
+ 	u64_stats_update_begin(&stats->syncp);
+diff --git a/drivers/net/usb/aqc111.c b/drivers/net/usb/aqc111.c
+index 820a2fe7d027..aff995be2a31 100644
+--- a/drivers/net/usb/aqc111.c
++++ b/drivers/net/usb/aqc111.c
+@@ -1301,6 +1301,20 @@ static const struct driver_info trendnet_info = {
+ 	.tx_fixup	= aqc111_tx_fixup,
+ };
+ 
++static const struct driver_info qnap_info = {
++	.description	= "QNAP QNA-UC5G1T USB to 5GbE Adapter",
++	.bind		= aqc111_bind,
++	.unbind		= aqc111_unbind,
++	.status		= aqc111_status,
++	.link_reset	= aqc111_link_reset,
++	.reset		= aqc111_reset,
++	.stop		= aqc111_stop,
++	.flags		= FLAG_ETHER | FLAG_FRAMING_AX |
++			  FLAG_AVOID_UNLINK_URBS | FLAG_MULTI_PACKET,
++	.rx_fixup	= aqc111_rx_fixup,
++	.tx_fixup	= aqc111_tx_fixup,
++};
++
+ static int aqc111_suspend(struct usb_interface *intf, pm_message_t message)
+ {
+ 	struct usbnet *dev = usb_get_intfdata(intf);
+@@ -1455,6 +1469,7 @@ static const struct usb_device_id products[] = {
+ 	{AQC111_USB_ETH_DEV(0x0b95, 0x2790, asix111_info)},
+ 	{AQC111_USB_ETH_DEV(0x0b95, 0x2791, asix112_info)},
+ 	{AQC111_USB_ETH_DEV(0x20f4, 0xe05a, trendnet_info)},
++	{AQC111_USB_ETH_DEV(0x1c04, 0x0015, qnap_info)},
+ 	{ },/* END */
+ };
+ MODULE_DEVICE_TABLE(usb, products);
+diff --git a/drivers/net/usb/cdc_ether.c b/drivers/net/usb/cdc_ether.c
+index 5512a1038721..3e9b2c319e45 100644
+--- a/drivers/net/usb/cdc_ether.c
++++ b/drivers/net/usb/cdc_ether.c
+@@ -851,6 +851,14 @@ static const struct usb_device_id	products[] = {
+ 	.driver_info = 0,
+ },
+ 
++/* QNAP QNA-UC5G1T USB to 5GbE Adapter (based on AQC111U) */
++{
++	USB_DEVICE_AND_INTERFACE_INFO(0x1c04, 0x0015, USB_CLASS_COMM,
++				      USB_CDC_SUBCLASS_ETHERNET,
++				      USB_CDC_PROTO_NONE),
++	.driver_info = 0,
++},
++
+ /* WHITELIST!!!
+  *
+  * CDC Ether uses two interfaces, not necessarily consecutive.
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 18af2f8eee96..9195f3476b1d 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -976,6 +976,13 @@ static const struct usb_device_id products[] = {
+ 					      0xff),
+ 		.driver_info	    = (unsigned long)&qmi_wwan_info_quirk_dtr,
+ 	},
++	{	/* Quectel EG12/EM12 */
++		USB_DEVICE_AND_INTERFACE_INFO(0x2c7c, 0x0512,
++					      USB_CLASS_VENDOR_SPEC,
++					      USB_SUBCLASS_VENDOR_SPEC,
++					      0xff),
++		.driver_info	    = (unsigned long)&qmi_wwan_info_quirk_dtr,
++	},
+ 
+ 	/* 3. Combined interface devices matching on interface number */
+ 	{QMI_FIXED_INTF(0x0408, 0xea42, 4)},	/* Yota / Megafon M100-1 */
+@@ -1196,6 +1203,7 @@ static const struct usb_device_id products[] = {
+ 	{QMI_FIXED_INTF(0x19d2, 0x2002, 4)},	/* ZTE (Vodafone) K3765-Z */
+ 	{QMI_FIXED_INTF(0x2001, 0x7e19, 4)},	/* D-Link DWM-221 B1 */
+ 	{QMI_FIXED_INTF(0x2001, 0x7e35, 4)},	/* D-Link DWM-222 */
++	{QMI_FIXED_INTF(0x2020, 0x2031, 4)},	/* Olicard 600 */
+ 	{QMI_FIXED_INTF(0x2020, 0x2033, 4)},	/* BroadMobi BM806U */
+ 	{QMI_FIXED_INTF(0x0f3d, 0x68a2, 8)},    /* Sierra Wireless MC7700 */
+ 	{QMI_FIXED_INTF(0x114f, 0x68a2, 8)},    /* Sierra Wireless MC7750 */
+@@ -1343,17 +1351,20 @@ static bool quectel_ec20_detected(struct usb_interface *intf)
+ 	return false;
+ }
+ 
+-static bool quectel_ep06_diag_detected(struct usb_interface *intf)
++static bool quectel_diag_detected(struct usb_interface *intf)
+ {
+ 	struct usb_device *dev = interface_to_usbdev(intf);
+ 	struct usb_interface_descriptor intf_desc = intf->cur_altsetting->desc;
++	u16 id_vendor = le16_to_cpu(dev->descriptor.idVendor);
++	u16 id_product = le16_to_cpu(dev->descriptor.idProduct);
+ 
+-	if (le16_to_cpu(dev->descriptor.idVendor) == 0x2c7c &&
+-	    le16_to_cpu(dev->descriptor.idProduct) == 0x0306 &&
+-	    intf_desc.bNumEndpoints == 2)
+-		return true;
++	if (id_vendor != 0x2c7c || intf_desc.bNumEndpoints != 2)
++		return false;
+ 
+-	return false;
++	if (id_product == 0x0306 || id_product == 0x0512)
++		return true;
++	else
++		return false;
+ }
+ 
+ static int qmi_wwan_probe(struct usb_interface *intf,
+@@ -1390,13 +1401,13 @@ static int qmi_wwan_probe(struct usb_interface *intf,
+ 		return -ENODEV;
+ 	}
+ 
+-	/* Quectel EP06/EM06/EG06 supports dynamic interface configuration, so
++	/* Several Quectel modems support dynamic interface configuration, so
+ 	 * we need to match on class/subclass/protocol. These values are
+ 	 * identical for the diagnostic- and QMI-interface, but bNumEndpoints is
+ 	 * different. Ignore the current interface if the number of endpoints
+ 	 * the number for the diag interface (two).
+ 	 */
+-	if (quectel_ep06_diag_detected(intf))
++	if (quectel_diag_detected(intf))
+ 		return -ENODEV;
+ 
+ 	return usbnet_probe(intf, id);
+diff --git a/drivers/net/veth.c b/drivers/net/veth.c
+index f412ea1cef18..b203d1867959 100644
+--- a/drivers/net/veth.c
++++ b/drivers/net/veth.c
+@@ -115,7 +115,8 @@ static void veth_get_strings(struct net_device *dev, u32 stringset, u8 *buf)
+ 		p += sizeof(ethtool_stats_keys);
+ 		for (i = 0; i < dev->real_num_rx_queues; i++) {
+ 			for (j = 0; j < VETH_RQ_STATS_LEN; j++) {
+-				snprintf(p, ETH_GSTRING_LEN, "rx_queue_%u_%s",
++				snprintf(p, ETH_GSTRING_LEN,
++					 "rx_queue_%u_%.11s",
+ 					 i, veth_rq_stats_desc[j].desc);
+ 				p += ETH_GSTRING_LEN;
+ 			}
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index 7c1430ed0244..cd15c32b2e43 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -1273,9 +1273,14 @@ static void vrf_setup(struct net_device *dev)
+ 
+ 	/* default to no qdisc; user can add if desired */
+ 	dev->priv_flags |= IFF_NO_QUEUE;
++	dev->priv_flags |= IFF_NO_RX_HANDLER;
+ 
+-	dev->min_mtu = 0;
+-	dev->max_mtu = 0;
++	/* VRF devices do not care about MTU, but if the MTU is set
++	 * too low then the ipv4 and ipv6 protocols are disabled
++	 * which breaks networking.
++	 */
++	dev->min_mtu = IPV6_MIN_MTU;
++	dev->max_mtu = ETH_MAX_MTU;
+ }
+ 
+ static int vrf_validate(struct nlattr *tb[], struct nlattr *data[],
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+index 2aae11feff0c..5006daed2e96 100644
+--- a/drivers/net/vxlan.c
++++ b/drivers/net/vxlan.c
+@@ -1657,6 +1657,14 @@ static int vxlan_rcv(struct sock *sk, struct sk_buff *skb)
+ 		goto drop;
+ 	}
+ 
++	rcu_read_lock();
++
++	if (unlikely(!(vxlan->dev->flags & IFF_UP))) {
++		rcu_read_unlock();
++		atomic_long_inc(&vxlan->dev->rx_dropped);
++		goto drop;
++	}
++
+ 	stats = this_cpu_ptr(vxlan->dev->tstats);
+ 	u64_stats_update_begin(&stats->syncp);
+ 	stats->rx_packets++;
+@@ -1664,6 +1672,9 @@ static int vxlan_rcv(struct sock *sk, struct sk_buff *skb)
+ 	u64_stats_update_end(&stats->syncp);
+ 
+ 	gro_cells_receive(&vxlan->gro_cells, skb);
++
++	rcu_read_unlock();
++
+ 	return 0;
+ 
+ drop:
+@@ -2693,6 +2704,8 @@ static void vxlan_uninit(struct net_device *dev)
+ {
+ 	struct vxlan_dev *vxlan = netdev_priv(dev);
+ 
++	gro_cells_destroy(&vxlan->gro_cells);
++
+ 	vxlan_fdb_delete_default(vxlan, vxlan->cfg.vni);
+ 
+ 	free_percpu(dev->tstats);
+@@ -3794,7 +3807,6 @@ static void vxlan_dellink(struct net_device *dev, struct list_head *head)
+ 
+ 	vxlan_flush(vxlan, true);
+ 
+-	gro_cells_destroy(&vxlan->gro_cells);
+ 	list_del(&vxlan->next);
+ 	unregister_netdevice_queue(dev, head);
+ }
+@@ -4172,10 +4184,8 @@ static void vxlan_destroy_tunnels(struct net *net, struct list_head *head)
+ 		/* If vxlan->dev is in the same netns, it has already been added
+ 		 * to the list by the previous loop.
+ 		 */
+-		if (!net_eq(dev_net(vxlan->dev), net)) {
+-			gro_cells_destroy(&vxlan->gro_cells);
++		if (!net_eq(dev_net(vxlan->dev), net))
+ 			unregister_netdevice_queue(vxlan->dev, head);
+-		}
+ 	}
+ 
+ 	for (h = 0; h < PORT_HASH_SIZE; ++h)
+diff --git a/drivers/net/wireless/ath/ath10k/ce.c b/drivers/net/wireless/ath/ath10k/ce.c
+index 2a5668b4f6bc..1a1ea4bbf8a0 100644
+--- a/drivers/net/wireless/ath/ath10k/ce.c
++++ b/drivers/net/wireless/ath/ath10k/ce.c
+@@ -500,14 +500,8 @@ static int _ath10k_ce_send_nolock(struct ath10k_ce_pipe *ce_state,
+ 	write_index = CE_RING_IDX_INCR(nentries_mask, write_index);
+ 
+ 	/* WORKAROUND */
+-	if (!(flags & CE_SEND_FLAG_GATHER)) {
+-		if (ar->hw_params.shadow_reg_support)
+-			ath10k_ce_shadow_src_ring_write_index_set(ar, ce_state,
+-								  write_index);
+-		else
+-			ath10k_ce_src_ring_write_index_set(ar, ctrl_addr,
+-							   write_index);
+-	}
++	if (!(flags & CE_SEND_FLAG_GATHER))
++		ath10k_ce_src_ring_write_index_set(ar, ctrl_addr, write_index);
+ 
+ 	src_ring->write_index = write_index;
+ exit:
+@@ -581,8 +575,14 @@ static int _ath10k_ce_send_nolock_64(struct ath10k_ce_pipe *ce_state,
+ 	/* Update Source Ring Write Index */
+ 	write_index = CE_RING_IDX_INCR(nentries_mask, write_index);
+ 
+-	if (!(flags & CE_SEND_FLAG_GATHER))
+-		ath10k_ce_src_ring_write_index_set(ar, ctrl_addr, write_index);
++	if (!(flags & CE_SEND_FLAG_GATHER)) {
++		if (ar->hw_params.shadow_reg_support)
++			ath10k_ce_shadow_src_ring_write_index_set(ar, ce_state,
++								  write_index);
++		else
++			ath10k_ce_src_ring_write_index_set(ar, ctrl_addr,
++							   write_index);
++	}
+ 
+ 	src_ring->write_index = write_index;
+ exit:
+@@ -1404,12 +1404,12 @@ static int ath10k_ce_alloc_shadow_base(struct ath10k *ar,
+ 				       u32 nentries)
+ {
+ 	src_ring->shadow_base_unaligned = kcalloc(nentries,
+-						  sizeof(struct ce_desc),
++						  sizeof(struct ce_desc_64),
+ 						  GFP_KERNEL);
+ 	if (!src_ring->shadow_base_unaligned)
+ 		return -ENOMEM;
+ 
+-	src_ring->shadow_base = (struct ce_desc *)
++	src_ring->shadow_base = (struct ce_desc_64 *)
+ 			PTR_ALIGN(src_ring->shadow_base_unaligned,
+ 				  CE_DESC_RING_ALIGN);
+ 	return 0;
+@@ -1461,7 +1461,7 @@ ath10k_ce_alloc_src_ring(struct ath10k *ar, unsigned int ce_id,
+ 		ret = ath10k_ce_alloc_shadow_base(ar, src_ring, nentries);
+ 		if (ret) {
+ 			dma_free_coherent(ar->dev,
+-					  (nentries * sizeof(struct ce_desc) +
++					  (nentries * sizeof(struct ce_desc_64) +
+ 					   CE_DESC_RING_ALIGN),
+ 					  src_ring->base_addr_owner_space_unaligned,
+ 					  base_addr);
+diff --git a/drivers/net/wireless/ath/ath10k/ce.h b/drivers/net/wireless/ath/ath10k/ce.h
+index ead9987c3259..463e2fc8b501 100644
+--- a/drivers/net/wireless/ath/ath10k/ce.h
++++ b/drivers/net/wireless/ath/ath10k/ce.h
+@@ -118,7 +118,7 @@ struct ath10k_ce_ring {
+ 	u32 base_addr_ce_space;
+ 
+ 	char *shadow_base_unaligned;
+-	struct ce_desc *shadow_base;
++	struct ce_desc_64 *shadow_base;
+ 
+ 	/* keep last */
+ 	void *per_transfer_context[0];
+diff --git a/drivers/net/wireless/ath/ath10k/debugfs_sta.c b/drivers/net/wireless/ath/ath10k/debugfs_sta.c
+index 4778a455d81a..068f1a7e07d3 100644
+--- a/drivers/net/wireless/ath/ath10k/debugfs_sta.c
++++ b/drivers/net/wireless/ath/ath10k/debugfs_sta.c
+@@ -696,11 +696,12 @@ static ssize_t ath10k_dbg_sta_dump_tx_stats(struct file *file,
+ 						 "  %llu ", stats->ht[j][i]);
+ 			len += scnprintf(buf + len, size - len, "\n");
+ 			len += scnprintf(buf + len, size - len,
+-					" BW %s (20,40,80,160 MHz)\n", str[j]);
++					" BW %s (20,5,10,40,80,160 MHz)\n", str[j]);
+ 			len += scnprintf(buf + len, size - len,
+-					 "  %llu %llu %llu %llu\n",
++					 "  %llu %llu %llu %llu %llu %llu\n",
+ 					 stats->bw[j][0], stats->bw[j][1],
+-					 stats->bw[j][2], stats->bw[j][3]);
++					 stats->bw[j][2], stats->bw[j][3],
++					 stats->bw[j][4], stats->bw[j][5]);
+ 			len += scnprintf(buf + len, size - len,
+ 					 " NSS %s (1x1,2x2,3x3,4x4)\n", str[j]);
+ 			len += scnprintf(buf + len, size - len,
+diff --git a/drivers/net/wireless/ath/ath10k/htt_rx.c b/drivers/net/wireless/ath/ath10k/htt_rx.c
+index f42bac204ef8..ecf34ce7acf0 100644
+--- a/drivers/net/wireless/ath/ath10k/htt_rx.c
++++ b/drivers/net/wireless/ath/ath10k/htt_rx.c
+@@ -2130,9 +2130,15 @@ static bool ath10k_htt_rx_proc_rx_ind_hl(struct ath10k_htt *htt,
+ 	hdr = (struct ieee80211_hdr *)skb->data;
+ 	rx_status = IEEE80211_SKB_RXCB(skb);
+ 	rx_status->chains |= BIT(0);
+-	rx_status->signal = ATH10K_DEFAULT_NOISE_FLOOR +
+-			    rx->ppdu.combined_rssi;
+-	rx_status->flag &= ~RX_FLAG_NO_SIGNAL_VAL;
++	if (rx->ppdu.combined_rssi == 0) {
++		/* SDIO firmware does not provide signal */
++		rx_status->signal = 0;
++		rx_status->flag |= RX_FLAG_NO_SIGNAL_VAL;
++	} else {
++		rx_status->signal = ATH10K_DEFAULT_NOISE_FLOOR +
++			rx->ppdu.combined_rssi;
++		rx_status->flag &= ~RX_FLAG_NO_SIGNAL_VAL;
++	}
+ 
+ 	spin_lock_bh(&ar->data_lock);
+ 	ch = ar->scan_channel;
+diff --git a/drivers/net/wireless/ath/ath10k/wmi.h b/drivers/net/wireless/ath/ath10k/wmi.h
+index 2034ccc7cc72..1d5d0209ebeb 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi.h
++++ b/drivers/net/wireless/ath/ath10k/wmi.h
+@@ -5003,7 +5003,7 @@ enum wmi_rate_preamble {
+ #define ATH10K_FW_SKIPPED_RATE_CTRL(flags)	(((flags) >> 6) & 0x1)
+ 
+ #define ATH10K_VHT_MCS_NUM	10
+-#define ATH10K_BW_NUM		4
++#define ATH10K_BW_NUM		6
+ #define ATH10K_NSS_NUM		4
+ #define ATH10K_LEGACY_NUM	12
+ #define ATH10K_GI_NUM		2
+diff --git a/drivers/net/wireless/ath/ath9k/init.c b/drivers/net/wireless/ath/ath9k/init.c
+index c070a9e51ebf..fae572b38416 100644
+--- a/drivers/net/wireless/ath/ath9k/init.c
++++ b/drivers/net/wireless/ath/ath9k/init.c
+@@ -636,15 +636,15 @@ static int ath9k_of_init(struct ath_softc *sc)
+ 		ret = ath9k_eeprom_request(sc, eeprom_name);
+ 		if (ret)
+ 			return ret;
++
++		ah->ah_flags &= ~AH_USE_EEPROM;
++		ah->ah_flags |= AH_NO_EEP_SWAP;
+ 	}
+ 
+ 	mac = of_get_mac_address(np);
+ 	if (mac)
+ 		ether_addr_copy(common->macaddr, mac);
+ 
+-	ah->ah_flags &= ~AH_USE_EEPROM;
+-	ah->ah_flags |= AH_NO_EEP_SWAP;
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/wireless/ath/wil6210/cfg80211.c b/drivers/net/wireless/ath/wil6210/cfg80211.c
+index 9b2f9f543952..5a44f9d0ff02 100644
+--- a/drivers/net/wireless/ath/wil6210/cfg80211.c
++++ b/drivers/net/wireless/ath/wil6210/cfg80211.c
+@@ -1580,6 +1580,12 @@ static int _wil_cfg80211_merge_extra_ies(const u8 *ies1, u16 ies1_len,
+ 	u8 *buf, *dpos;
+ 	const u8 *spos;
+ 
++	if (!ies1)
++		ies1_len = 0;
++
++	if (!ies2)
++		ies2_len = 0;
++
+ 	if (ies1_len == 0 && ies2_len == 0) {
+ 		*merged_ies = NULL;
+ 		*merged_len = 0;
+@@ -1589,17 +1595,19 @@ static int _wil_cfg80211_merge_extra_ies(const u8 *ies1, u16 ies1_len,
+ 	buf = kmalloc(ies1_len + ies2_len, GFP_KERNEL);
+ 	if (!buf)
+ 		return -ENOMEM;
+-	memcpy(buf, ies1, ies1_len);
++	if (ies1)
++		memcpy(buf, ies1, ies1_len);
+ 	dpos = buf + ies1_len;
+ 	spos = ies2;
+-	while (spos + 1 < ies2 + ies2_len) {
++	while (spos && (spos + 1 < ies2 + ies2_len)) {
+ 		/* IE tag at offset 0, length at offset 1 */
+ 		u16 ielen = 2 + spos[1];
+ 
+ 		if (spos + ielen > ies2 + ies2_len)
+ 			break;
+ 		if (spos[0] == WLAN_EID_VENDOR_SPECIFIC &&
+-		    !_wil_cfg80211_find_ie(ies1, ies1_len, spos, ielen)) {
++		    (!ies1 || !_wil_cfg80211_find_ie(ies1, ies1_len,
++						     spos, ielen))) {
+ 			memcpy(dpos, spos, ielen);
+ 			dpos += ielen;
+ 		}
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
+index 1f1e95a15a17..0ce1d8174e6d 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
+@@ -149,7 +149,7 @@ static int brcmf_c_process_clm_blob(struct brcmf_if *ifp)
+ 		return err;
+ 	}
+ 
+-	err = request_firmware(&clm, clm_name, bus->dev);
++	err = firmware_request_nowarn(&clm, clm_name, bus->dev);
+ 	if (err) {
+ 		brcmf_info("no clm_blob available (err=%d), device may have limited channels available\n",
+ 			   err);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+index 0d6c313b6669..19ec55cef802 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+@@ -127,13 +127,17 @@ static int iwl_send_rss_cfg_cmd(struct iwl_mvm *mvm)
+ 
+ static int iwl_configure_rxq(struct iwl_mvm *mvm)
+ {
+-	int i, num_queues, size;
++	int i, num_queues, size, ret;
+ 	struct iwl_rfh_queue_config *cmd;
++	struct iwl_host_cmd hcmd = {
++		.id = WIDE_ID(DATA_PATH_GROUP, RFH_QUEUE_CONFIG_CMD),
++		.dataflags[0] = IWL_HCMD_DFL_NOCOPY,
++	};
+ 
+ 	/* Do not configure default queue, it is configured via context info */
+ 	num_queues = mvm->trans->num_rx_queues - 1;
+ 
+-	size = sizeof(*cmd) + num_queues * sizeof(struct iwl_rfh_queue_data);
++	size = struct_size(cmd, data, num_queues);
+ 
+ 	cmd = kzalloc(size, GFP_KERNEL);
+ 	if (!cmd)
+@@ -154,10 +158,14 @@ static int iwl_configure_rxq(struct iwl_mvm *mvm)
+ 		cmd->data[i].fr_bd_wid = cpu_to_le32(data.fr_bd_wid);
+ 	}
+ 
+-	return iwl_mvm_send_cmd_pdu(mvm,
+-				    WIDE_ID(DATA_PATH_GROUP,
+-					    RFH_QUEUE_CONFIG_CMD),
+-				    0, size, cmd);
++	hcmd.data[0] = cmd;
++	hcmd.len[0] = size;
++
++	ret = iwl_mvm_send_cmd(mvm, &hcmd);
++
++	kfree(cmd);
++
++	return ret;
+ }
+ 
+ static int iwl_mvm_send_dqa_cmd(struct iwl_mvm *mvm)
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+index 9e850c25877b..c596c7b13504 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+@@ -499,7 +499,7 @@ static void iwl_pcie_rx_allocator(struct iwl_trans *trans)
+ 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+ 	struct iwl_rb_allocator *rba = &trans_pcie->rba;
+ 	struct list_head local_empty;
+-	int pending = atomic_xchg(&rba->req_pending, 0);
++	int pending = atomic_read(&rba->req_pending);
+ 
+ 	IWL_DEBUG_RX(trans, "Pending allocation requests = %d\n", pending);
+ 
+@@ -554,11 +554,13 @@ static void iwl_pcie_rx_allocator(struct iwl_trans *trans)
+ 			i++;
+ 		}
+ 
++		atomic_dec(&rba->req_pending);
+ 		pending--;
++
+ 		if (!pending) {
+-			pending = atomic_xchg(&rba->req_pending, 0);
++			pending = atomic_read(&rba->req_pending);
+ 			IWL_DEBUG_RX(trans,
+-				     "Pending allocation requests = %d\n",
++				     "Got more pending allocation requests = %d\n",
+ 				     pending);
+ 		}
+ 
+@@ -570,12 +572,15 @@ static void iwl_pcie_rx_allocator(struct iwl_trans *trans)
+ 		spin_unlock(&rba->lock);
+ 
+ 		atomic_inc(&rba->req_ready);
++
+ 	}
+ 
+ 	spin_lock(&rba->lock);
+ 	/* return unused rbds to the allocator empty list */
+ 	list_splice_tail(&local_empty, &rba->rbd_empty);
+ 	spin_unlock(&rba->lock);
++
++	IWL_DEBUG_RX(trans, "%s, exit.\n", __func__);
+ }
+ 
+ /*
+diff --git a/drivers/net/wireless/marvell/libertas_tf/if_usb.c b/drivers/net/wireless/marvell/libertas_tf/if_usb.c
+index 789337ea676a..6ede6168bd85 100644
+--- a/drivers/net/wireless/marvell/libertas_tf/if_usb.c
++++ b/drivers/net/wireless/marvell/libertas_tf/if_usb.c
+@@ -433,8 +433,6 @@ static int __if_usb_submit_rx_urb(struct if_usb_card *cardp,
+ 			  skb_tail_pointer(skb),
+ 			  MRVDRV_ETH_RX_PACKET_BUFFER_SIZE, callbackfn, cardp);
+ 
+-	cardp->rx_urb->transfer_flags |= URB_ZERO_PACKET;
+-
+ 	lbtf_deb_usb2(&cardp->udev->dev, "Pointer for rx_urb %p\n",
+ 		cardp->rx_urb);
+ 	ret = usb_submit_urb(cardp->rx_urb, GFP_ATOMIC);
+diff --git a/drivers/net/wireless/marvell/mwifiex/cfg80211.c b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+index 1467af22e394..883752f640b4 100644
+--- a/drivers/net/wireless/marvell/mwifiex/cfg80211.c
++++ b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+@@ -4310,11 +4310,13 @@ int mwifiex_register_cfg80211(struct mwifiex_adapter *adapter)
+ 	wiphy->mgmt_stypes = mwifiex_mgmt_stypes;
+ 	wiphy->max_remain_on_channel_duration = 5000;
+ 	wiphy->interface_modes = BIT(NL80211_IFTYPE_STATION) |
+-				 BIT(NL80211_IFTYPE_ADHOC) |
+ 				 BIT(NL80211_IFTYPE_P2P_CLIENT) |
+ 				 BIT(NL80211_IFTYPE_P2P_GO) |
+ 				 BIT(NL80211_IFTYPE_AP);
+ 
++	if (ISSUPP_ADHOC_ENABLED(adapter->fw_cap_info))
++		wiphy->interface_modes |= BIT(NL80211_IFTYPE_ADHOC);
++
+ 	wiphy->bands[NL80211_BAND_2GHZ] = &mwifiex_band_2ghz;
+ 	if (adapter->config_bands & BAND_A)
+ 		wiphy->bands[NL80211_BAND_5GHZ] = &mwifiex_band_5ghz;
+@@ -4374,11 +4376,13 @@ int mwifiex_register_cfg80211(struct mwifiex_adapter *adapter)
+ 	wiphy->available_antennas_tx = BIT(adapter->number_of_antenna) - 1;
+ 	wiphy->available_antennas_rx = BIT(adapter->number_of_antenna) - 1;
+ 
+-	wiphy->features |= NL80211_FEATURE_HT_IBSS |
+-			   NL80211_FEATURE_INACTIVITY_TIMER |
++	wiphy->features |= NL80211_FEATURE_INACTIVITY_TIMER |
+ 			   NL80211_FEATURE_LOW_PRIORITY_SCAN |
+ 			   NL80211_FEATURE_NEED_OBSS_SCAN;
+ 
++	if (ISSUPP_ADHOC_ENABLED(adapter->fw_cap_info))
++		wiphy->features |= NL80211_FEATURE_HT_IBSS;
++
+ 	if (ISSUPP_RANDOM_MAC(adapter->fw_cap_info))
+ 		wiphy->features |= NL80211_FEATURE_SCAN_RANDOM_MAC_ADDR |
+ 				   NL80211_FEATURE_SCHED_SCAN_RANDOM_MAC_ADDR |
+diff --git a/drivers/net/wireless/mediatek/mt76/eeprom.c b/drivers/net/wireless/mediatek/mt76/eeprom.c
+index 530e5593765c..a1529920d877 100644
+--- a/drivers/net/wireless/mediatek/mt76/eeprom.c
++++ b/drivers/net/wireless/mediatek/mt76/eeprom.c
+@@ -54,22 +54,30 @@ mt76_get_of_eeprom(struct mt76_dev *dev, int len)
+ 		part = np->name;
+ 
+ 	mtd = get_mtd_device_nm(part);
+-	if (IS_ERR(mtd))
+-		return PTR_ERR(mtd);
++	if (IS_ERR(mtd)) {
++		ret =  PTR_ERR(mtd);
++		goto out_put_node;
++	}
+ 
+-	if (size <= sizeof(*list))
+-		return -EINVAL;
++	if (size <= sizeof(*list)) {
++		ret = -EINVAL;
++		goto out_put_node;
++	}
+ 
+ 	offset = be32_to_cpup(list);
+ 	ret = mtd_read(mtd, offset, len, &retlen, dev->eeprom.data);
+ 	put_mtd_device(mtd);
+ 	if (ret)
+-		return ret;
++		goto out_put_node;
+ 
+-	if (retlen < len)
+-		return -EINVAL;
++	if (retlen < len) {
++		ret = -EINVAL;
++		goto out_put_node;
++	}
+ 
+-	return 0;
++out_put_node:
++	of_node_put(np);
++	return ret;
+ #else
+ 	return -ENOENT;
+ #endif
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wireless/mediatek/mt76/mt76.h
+index 5cd508a68609..6d29ba4046c3 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76.h
+@@ -713,6 +713,19 @@ static inline bool mt76u_check_sg(struct mt76_dev *dev)
+ 		 udev->speed == USB_SPEED_WIRELESS));
+ }
+ 
++static inline int
++mt76u_bulk_msg(struct mt76_dev *dev, void *data, int len, int timeout)
++{
++	struct usb_interface *intf = to_usb_interface(dev->dev);
++	struct usb_device *udev = interface_to_usbdev(intf);
++	struct mt76_usb *usb = &dev->usb;
++	unsigned int pipe;
++	int sent;
++
++	pipe = usb_sndbulkpipe(udev, usb->out_ep[MT_EP_OUT_INBAND_CMD]);
++	return usb_bulk_msg(udev, pipe, data, len, &sent, timeout);
++}
++
+ int mt76u_vendor_request(struct mt76_dev *dev, u8 req,
+ 			 u8 req_type, u16 val, u16 offset,
+ 			 void *buf, size_t len);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c b/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
+index c08bf371e527..7c9dfa54fee8 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
+@@ -309,7 +309,7 @@ void mt76x02_mac_write_txwi(struct mt76x02_dev *dev, struct mt76x02_txwi *txwi,
+ 		ccmp_pn[6] = pn >> 32;
+ 		ccmp_pn[7] = pn >> 40;
+ 		txwi->iv = *((__le32 *)&ccmp_pn[0]);
+-		txwi->eiv = *((__le32 *)&ccmp_pn[1]);
++		txwi->eiv = *((__le32 *)&ccmp_pn[4]);
+ 	}
+ 
+ 	spin_lock_bh(&dev->mt76.lock);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_usb_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76x02_usb_mcu.c
+index 6db789f90269..2ca393e267af 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x02_usb_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x02_usb_mcu.c
+@@ -121,18 +121,14 @@ static int
+ __mt76x02u_mcu_send_msg(struct mt76_dev *dev, struct sk_buff *skb,
+ 			int cmd, bool wait_resp)
+ {
+-	struct usb_interface *intf = to_usb_interface(dev->dev);
+-	struct usb_device *udev = interface_to_usbdev(intf);
+ 	struct mt76_usb *usb = &dev->usb;
+-	unsigned int pipe;
+-	int ret, sent;
++	int ret;
+ 	u8 seq = 0;
+ 	u32 info;
+ 
+ 	if (test_bit(MT76_REMOVED, &dev->state))
+ 		return 0;
+ 
+-	pipe = usb_sndbulkpipe(udev, usb->out_ep[MT_EP_OUT_INBAND_CMD]);
+ 	if (wait_resp) {
+ 		seq = ++usb->mcu.msg_seq & 0xf;
+ 		if (!seq)
+@@ -146,7 +142,7 @@ __mt76x02u_mcu_send_msg(struct mt76_dev *dev, struct sk_buff *skb,
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = usb_bulk_msg(udev, pipe, skb->data, skb->len, &sent, 500);
++	ret = mt76u_bulk_msg(dev, skb->data, skb->len, 500);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -268,14 +264,12 @@ void mt76x02u_mcu_fw_reset(struct mt76x02_dev *dev)
+ EXPORT_SYMBOL_GPL(mt76x02u_mcu_fw_reset);
+ 
+ static int
+-__mt76x02u_mcu_fw_send_data(struct mt76x02_dev *dev, struct mt76u_buf *buf,
++__mt76x02u_mcu_fw_send_data(struct mt76x02_dev *dev, u8 *data,
+ 			    const void *fw_data, int len, u32 dst_addr)
+ {
+-	u8 *data = sg_virt(&buf->urb->sg[0]);
+-	DECLARE_COMPLETION_ONSTACK(cmpl);
+ 	__le32 info;
+ 	u32 val;
+-	int err;
++	int err, data_len;
+ 
+ 	info = cpu_to_le32(FIELD_PREP(MT_MCU_MSG_PORT, CPU_TX_PORT) |
+ 			   FIELD_PREP(MT_MCU_MSG_LEN, len) |
+@@ -291,25 +285,12 @@ __mt76x02u_mcu_fw_send_data(struct mt76x02_dev *dev, struct mt76u_buf *buf,
+ 	mt76u_single_wr(&dev->mt76, MT_VEND_WRITE_FCE,
+ 			MT_FCE_DMA_LEN, len << 16);
+ 
+-	buf->len = MT_CMD_HDR_LEN + len + sizeof(info);
+-	err = mt76u_submit_buf(&dev->mt76, USB_DIR_OUT,
+-			       MT_EP_OUT_INBAND_CMD,
+-			       buf, GFP_KERNEL,
+-			       mt76u_mcu_complete_urb, &cmpl);
+-	if (err < 0)
+-		return err;
+-
+-	if (!wait_for_completion_timeout(&cmpl,
+-					 msecs_to_jiffies(1000))) {
+-		dev_err(dev->mt76.dev, "firmware upload timed out\n");
+-		usb_kill_urb(buf->urb);
+-		return -ETIMEDOUT;
+-	}
++	data_len = MT_CMD_HDR_LEN + len + sizeof(info);
+ 
+-	if (mt76u_urb_error(buf->urb)) {
+-		dev_err(dev->mt76.dev, "firmware upload failed: %d\n",
+-			buf->urb->status);
+-		return buf->urb->status;
++	err = mt76u_bulk_msg(&dev->mt76, data, data_len, 1000);
++	if (err) {
++		dev_err(dev->mt76.dev, "firmware upload failed: %d\n", err);
++		return err;
+ 	}
+ 
+ 	val = mt76_rr(dev, MT_TX_CPU_FROM_FCE_CPU_DESC_IDX);
+@@ -322,17 +303,16 @@ __mt76x02u_mcu_fw_send_data(struct mt76x02_dev *dev, struct mt76u_buf *buf,
+ int mt76x02u_mcu_fw_send_data(struct mt76x02_dev *dev, const void *data,
+ 			      int data_len, u32 max_payload, u32 offset)
+ {
+-	int err, len, pos = 0, max_len = max_payload - 8;
+-	struct mt76u_buf buf;
++	int len, err = 0, pos = 0, max_len = max_payload - 8;
++	u8 *buf;
+ 
+-	err = mt76u_buf_alloc(&dev->mt76, &buf, 1, max_payload, max_payload,
+-			      GFP_KERNEL);
+-	if (err < 0)
+-		return err;
++	buf = kmalloc(max_payload, GFP_KERNEL);
++	if (!buf)
++		return -ENOMEM;
+ 
+ 	while (data_len > 0) {
+ 		len = min_t(int, data_len, max_len);
+-		err = __mt76x02u_mcu_fw_send_data(dev, &buf, data + pos,
++		err = __mt76x02u_mcu_fw_send_data(dev, buf, data + pos,
+ 						  len, offset + pos);
+ 		if (err < 0)
+ 			break;
+@@ -341,7 +321,7 @@ int mt76x02u_mcu_fw_send_data(struct mt76x02_dev *dev, const void *data,
+ 		pos += len;
+ 		usleep_range(5000, 10000);
+ 	}
+-	mt76u_buf_free(&buf);
++	kfree(buf);
+ 
+ 	return err;
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/usb.c b/drivers/net/wireless/mediatek/mt76/usb.c
+index b061263453d4..61cde0f9f58f 100644
+--- a/drivers/net/wireless/mediatek/mt76/usb.c
++++ b/drivers/net/wireless/mediatek/mt76/usb.c
+@@ -326,7 +326,6 @@ int mt76u_buf_alloc(struct mt76_dev *dev, struct mt76u_buf *buf,
+ 
+ 	return mt76u_fill_rx_sg(dev, buf, nsgs, len, sglen);
+ }
+-EXPORT_SYMBOL_GPL(mt76u_buf_alloc);
+ 
+ void mt76u_buf_free(struct mt76u_buf *buf)
+ {
+@@ -838,16 +837,9 @@ int mt76u_alloc_queues(struct mt76_dev *dev)
+ 
+ 	err = mt76u_alloc_rx(dev);
+ 	if (err < 0)
+-		goto err;
+-
+-	err = mt76u_alloc_tx(dev);
+-	if (err < 0)
+-		goto err;
++		return err;
+ 
+-	return 0;
+-err:
+-	mt76u_queues_deinit(dev);
+-	return err;
++	return mt76u_alloc_tx(dev);
+ }
+ EXPORT_SYMBOL_GPL(mt76u_alloc_queues);
+ 
+diff --git a/drivers/net/wireless/mediatek/mt7601u/eeprom.h b/drivers/net/wireless/mediatek/mt7601u/eeprom.h
+index 662d12703b69..57b503ae63f1 100644
+--- a/drivers/net/wireless/mediatek/mt7601u/eeprom.h
++++ b/drivers/net/wireless/mediatek/mt7601u/eeprom.h
+@@ -17,7 +17,7 @@
+ 
+ struct mt7601u_dev;
+ 
+-#define MT7601U_EE_MAX_VER			0x0c
++#define MT7601U_EE_MAX_VER			0x0d
+ #define MT7601U_EEPROM_SIZE			256
+ 
+ #define MT7601U_DEFAULT_TX_POWER		6
+diff --git a/drivers/net/wireless/ti/wlcore/main.c b/drivers/net/wireless/ti/wlcore/main.c
+index 26b187336875..2e12de813a5b 100644
+--- a/drivers/net/wireless/ti/wlcore/main.c
++++ b/drivers/net/wireless/ti/wlcore/main.c
+@@ -1085,8 +1085,11 @@ static int wl12xx_chip_wakeup(struct wl1271 *wl, bool plt)
+ 		goto out;
+ 
+ 	ret = wl12xx_fetch_firmware(wl, plt);
+-	if (ret < 0)
+-		goto out;
++	if (ret < 0) {
++		kfree(wl->fw_status);
++		kfree(wl->raw_fw_status);
++		kfree(wl->tx_res_if);
++	}
+ 
+ out:
+ 	return ret;
+diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c
+index a11bf4e6b451..6d6e9a12150b 100644
+--- a/drivers/nvdimm/label.c
++++ b/drivers/nvdimm/label.c
+@@ -755,7 +755,7 @@ static const guid_t *to_abstraction_guid(enum nvdimm_claim_class claim_class,
+ 
+ static int __pmem_label_update(struct nd_region *nd_region,
+ 		struct nd_mapping *nd_mapping, struct nd_namespace_pmem *nspm,
+-		int pos)
++		int pos, unsigned long flags)
+ {
+ 	struct nd_namespace_common *ndns = &nspm->nsio.common;
+ 	struct nd_interleave_set *nd_set = nd_region->nd_set;
+@@ -796,7 +796,7 @@ static int __pmem_label_update(struct nd_region *nd_region,
+ 	memcpy(nd_label->uuid, nspm->uuid, NSLABEL_UUID_LEN);
+ 	if (nspm->alt_name)
+ 		memcpy(nd_label->name, nspm->alt_name, NSLABEL_NAME_LEN);
+-	nd_label->flags = __cpu_to_le32(NSLABEL_FLAG_UPDATING);
++	nd_label->flags = __cpu_to_le32(flags);
+ 	nd_label->nlabel = __cpu_to_le16(nd_region->ndr_mappings);
+ 	nd_label->position = __cpu_to_le16(pos);
+ 	nd_label->isetcookie = __cpu_to_le64(cookie);
+@@ -1249,13 +1249,13 @@ static int del_labels(struct nd_mapping *nd_mapping, u8 *uuid)
+ int nd_pmem_namespace_label_update(struct nd_region *nd_region,
+ 		struct nd_namespace_pmem *nspm, resource_size_t size)
+ {
+-	int i;
++	int i, rc;
+ 
+ 	for (i = 0; i < nd_region->ndr_mappings; i++) {
+ 		struct nd_mapping *nd_mapping = &nd_region->mapping[i];
+ 		struct nvdimm_drvdata *ndd = to_ndd(nd_mapping);
+ 		struct resource *res;
+-		int rc, count = 0;
++		int count = 0;
+ 
+ 		if (size == 0) {
+ 			rc = del_labels(nd_mapping, nspm->uuid);
+@@ -1273,7 +1273,20 @@ int nd_pmem_namespace_label_update(struct nd_region *nd_region,
+ 		if (rc < 0)
+ 			return rc;
+ 
+-		rc = __pmem_label_update(nd_region, nd_mapping, nspm, i);
++		rc = __pmem_label_update(nd_region, nd_mapping, nspm, i,
++				NSLABEL_FLAG_UPDATING);
++		if (rc)
++			return rc;
++	}
++
++	if (size == 0)
++		return 0;
++
++	/* Clear the UPDATING flag per UEFI 2.7 expectations */
++	for (i = 0; i < nd_region->ndr_mappings; i++) {
++		struct nd_mapping *nd_mapping = &nd_region->mapping[i];
++
++		rc = __pmem_label_update(nd_region, nd_mapping, nspm, i, 0);
+ 		if (rc)
+ 			return rc;
+ 	}
+diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
+index 4b077555ac70..33a3b23b3db7 100644
+--- a/drivers/nvdimm/namespace_devs.c
++++ b/drivers/nvdimm/namespace_devs.c
+@@ -138,6 +138,7 @@ bool nd_is_uuid_unique(struct device *dev, u8 *uuid)
+ bool pmem_should_map_pages(struct device *dev)
+ {
+ 	struct nd_region *nd_region = to_nd_region(dev->parent);
++	struct nd_namespace_common *ndns = to_ndns(dev);
+ 	struct nd_namespace_io *nsio;
+ 
+ 	if (!IS_ENABLED(CONFIG_ZONE_DEVICE))
+@@ -149,6 +150,9 @@ bool pmem_should_map_pages(struct device *dev)
+ 	if (is_nd_pfn(dev) || is_nd_btt(dev))
+ 		return false;
+ 
++	if (ndns->force_raw)
++		return false;
++
+ 	nsio = to_nd_namespace_io(dev);
+ 	if (region_intersects(nsio->res.start, resource_size(&nsio->res),
+ 				IORESOURCE_SYSTEM_RAM,
+diff --git a/drivers/nvdimm/pfn_devs.c b/drivers/nvdimm/pfn_devs.c
+index 6f22272e8d80..7760c1b91853 100644
+--- a/drivers/nvdimm/pfn_devs.c
++++ b/drivers/nvdimm/pfn_devs.c
+@@ -593,7 +593,7 @@ static unsigned long init_altmap_base(resource_size_t base)
+ 
+ static unsigned long init_altmap_reserve(resource_size_t base)
+ {
+-	unsigned long reserve = PHYS_PFN(SZ_8K);
++	unsigned long reserve = PFN_UP(SZ_8K);
+ 	unsigned long base_pfn = PHYS_PFN(base);
+ 
+ 	reserve += base_pfn - PFN_SECTION_ALIGN_DOWN(base_pfn);
+@@ -678,7 +678,7 @@ static void trim_pfn_device(struct nd_pfn *nd_pfn, u32 *start_pad, u32 *end_trun
+ 	if (region_intersects(start, size, IORESOURCE_SYSTEM_RAM,
+ 				IORES_DESC_NONE) == REGION_MIXED
+ 			|| !IS_ALIGNED(end, nd_pfn->align)
+-			|| nd_region_conflict(nd_region, start, size + adjust))
++			|| nd_region_conflict(nd_region, start, size))
+ 		*end_trunc = end - phys_pmem_align_down(nd_pfn, end);
+ }
+ 
+diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
+index 89accc76d71c..c37d5bbd72ab 100644
+--- a/drivers/nvme/host/fc.c
++++ b/drivers/nvme/host/fc.c
+@@ -3018,7 +3018,10 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts,
+ 
+ 	ctrl->ctrl.opts = opts;
+ 	ctrl->ctrl.nr_reconnects = 0;
+-	ctrl->ctrl.numa_node = dev_to_node(lport->dev);
++	if (lport->dev)
++		ctrl->ctrl.numa_node = dev_to_node(lport->dev);
++	else
++		ctrl->ctrl.numa_node = NUMA_NO_NODE;
+ 	INIT_LIST_HEAD(&ctrl->ctrl_list);
+ 	ctrl->lport = lport;
+ 	ctrl->rport = rport;
+diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
+index 88d260f31835..02c63c463222 100644
+--- a/drivers/nvme/target/core.c
++++ b/drivers/nvme/target/core.c
+@@ -1171,6 +1171,15 @@ static void nvmet_release_p2p_ns_map(struct nvmet_ctrl *ctrl)
+ 	put_device(ctrl->p2p_client);
+ }
+ 
++static void nvmet_fatal_error_handler(struct work_struct *work)
++{
++	struct nvmet_ctrl *ctrl =
++			container_of(work, struct nvmet_ctrl, fatal_err_work);
++
++	pr_err("ctrl %d fatal error occurred!\n", ctrl->cntlid);
++	ctrl->ops->delete_ctrl(ctrl);
++}
++
+ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
+ 		struct nvmet_req *req, u32 kato, struct nvmet_ctrl **ctrlp)
+ {
+@@ -1213,6 +1222,7 @@ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
+ 	INIT_WORK(&ctrl->async_event_work, nvmet_async_event_work);
+ 	INIT_LIST_HEAD(&ctrl->async_events);
+ 	INIT_RADIX_TREE(&ctrl->p2p_ns_map, GFP_KERNEL);
++	INIT_WORK(&ctrl->fatal_err_work, nvmet_fatal_error_handler);
+ 
+ 	memcpy(ctrl->subsysnqn, subsysnqn, NVMF_NQN_SIZE);
+ 	memcpy(ctrl->hostnqn, hostnqn, NVMF_NQN_SIZE);
+@@ -1316,21 +1326,11 @@ void nvmet_ctrl_put(struct nvmet_ctrl *ctrl)
+ 	kref_put(&ctrl->ref, nvmet_ctrl_free);
+ }
+ 
+-static void nvmet_fatal_error_handler(struct work_struct *work)
+-{
+-	struct nvmet_ctrl *ctrl =
+-			container_of(work, struct nvmet_ctrl, fatal_err_work);
+-
+-	pr_err("ctrl %d fatal error occurred!\n", ctrl->cntlid);
+-	ctrl->ops->delete_ctrl(ctrl);
+-}
+-
+ void nvmet_ctrl_fatal_error(struct nvmet_ctrl *ctrl)
+ {
+ 	mutex_lock(&ctrl->lock);
+ 	if (!(ctrl->csts & NVME_CSTS_CFS)) {
+ 		ctrl->csts |= NVME_CSTS_CFS;
+-		INIT_WORK(&ctrl->fatal_err_work, nvmet_fatal_error_handler);
+ 		schedule_work(&ctrl->fatal_err_work);
+ 	}
+ 	mutex_unlock(&ctrl->lock);
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index f7301bb4ef3b..3ce65927e11c 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -686,9 +686,7 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
+ 	if (rval)
+ 		goto err_remove_cells;
+ 
+-	rval = blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem);
+-	if (rval)
+-		goto err_remove_cells;
++	blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem);
+ 
+ 	return nvmem;
+ 
+diff --git a/drivers/opp/core.c b/drivers/opp/core.c
+index 18f1639dbc4a..f5d2fa195f5f 100644
+--- a/drivers/opp/core.c
++++ b/drivers/opp/core.c
+@@ -743,7 +743,7 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
+ 		old_freq, freq);
+ 
+ 	/* Scaling up? Configure required OPPs before frequency */
+-	if (freq > old_freq) {
++	if (freq >= old_freq) {
+ 		ret = _set_required_opps(dev, opp_table, opp);
+ 		if (ret)
+ 			goto put_opp;
+diff --git a/drivers/parport/parport_pc.c b/drivers/parport/parport_pc.c
+index 9c8249f74479..6296dbb83d47 100644
+--- a/drivers/parport/parport_pc.c
++++ b/drivers/parport/parport_pc.c
+@@ -1377,7 +1377,7 @@ static struct superio_struct *find_superio(struct parport *p)
+ {
+ 	int i;
+ 	for (i = 0; i < NR_SUPERIOS; i++)
+-		if (superios[i].io != p->base)
++		if (superios[i].io == p->base)
+ 			return &superios[i];
+ 	return NULL;
+ }
+diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
+index 721d60a5d9e4..9c5614f21b8e 100644
+--- a/drivers/pci/controller/dwc/pcie-designware-host.c
++++ b/drivers/pci/controller/dwc/pcie-designware-host.c
+@@ -439,7 +439,7 @@ int dw_pcie_host_init(struct pcie_port *pp)
+ 	if (ret)
+ 		pci->num_viewport = 2;
+ 
+-	if (IS_ENABLED(CONFIG_PCI_MSI)) {
++	if (IS_ENABLED(CONFIG_PCI_MSI) && pci_msi_enabled()) {
+ 		/*
+ 		 * If a specific SoC driver needs to change the
+ 		 * default number of vectors, it needs to implement
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index d185ea5fe996..a7f703556790 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -1228,7 +1228,7 @@ static int qcom_pcie_probe(struct platform_device *pdev)
+ 
+ 	pcie->ops = of_device_get_match_data(dev);
+ 
+-	pcie->reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_LOW);
++	pcie->reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_HIGH);
+ 	if (IS_ERR(pcie->reset)) {
+ 		ret = PTR_ERR(pcie->reset);
+ 		goto err_pm_runtime_put;
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index 750081c1cb48..6eecae447af3 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -499,7 +499,7 @@ static void advk_sw_pci_bridge_init(struct advk_pcie *pcie)
+ 	bridge->data = pcie;
+ 	bridge->ops = &advk_pci_bridge_emul_ops;
+ 
+-	pci_bridge_emul_init(bridge);
++	pci_bridge_emul_init(bridge, 0);
+ 
+ }
+ 
+diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c
+index fa0fc46edb0c..d3a0419e42f2 100644
+--- a/drivers/pci/controller/pci-mvebu.c
++++ b/drivers/pci/controller/pci-mvebu.c
+@@ -583,7 +583,7 @@ static void mvebu_pci_bridge_emul_init(struct mvebu_pcie_port *port)
+ 	bridge->data = port;
+ 	bridge->ops = &mvebu_pci_bridge_emul_ops;
+ 
+-	pci_bridge_emul_init(bridge);
++	pci_bridge_emul_init(bridge, PCI_BRIDGE_EMUL_NO_PREFETCHABLE_BAR);
+ }
+ 
+ static inline struct mvebu_pcie *sys_to_pcie(struct pci_sys_data *sys)
+diff --git a/drivers/pci/controller/pcie-mediatek.c b/drivers/pci/controller/pcie-mediatek.c
+index 55e471c18e8d..c42fe5c4319f 100644
+--- a/drivers/pci/controller/pcie-mediatek.c
++++ b/drivers/pci/controller/pcie-mediatek.c
+@@ -654,7 +654,6 @@ static int mtk_pcie_startup_port_v2(struct mtk_pcie_port *port)
+ 	struct resource *mem = &pcie->mem;
+ 	const struct mtk_pcie_soc *soc = port->pcie->soc;
+ 	u32 val;
+-	size_t size;
+ 	int err;
+ 
+ 	/* MT7622 platforms need to enable LTSSM and ASPM from PCIe subsys */
+@@ -706,8 +705,8 @@ static int mtk_pcie_startup_port_v2(struct mtk_pcie_port *port)
+ 		mtk_pcie_enable_msi(port);
+ 
+ 	/* Set AHB to PCIe translation windows */
+-	size = mem->end - mem->start;
+-	val = lower_32_bits(mem->start) | AHB2PCIE_SIZE(fls(size));
++	val = lower_32_bits(mem->start) |
++	      AHB2PCIE_SIZE(fls(resource_size(mem)));
+ 	writel(val, port->base + PCIE_AHB_TRANS_BASE0_L);
+ 
+ 	val = upper_32_bits(mem->start);
+diff --git a/drivers/pci/hotplug/pciehp_ctrl.c b/drivers/pci/hotplug/pciehp_ctrl.c
+index 3f3df4c29f6e..905282a8ddaa 100644
+--- a/drivers/pci/hotplug/pciehp_ctrl.c
++++ b/drivers/pci/hotplug/pciehp_ctrl.c
+@@ -115,6 +115,10 @@ static void remove_board(struct controller *ctrl, bool safe_removal)
+ 		 * removed from the slot/adapter.
+ 		 */
+ 		msleep(1000);
++
++		/* Ignore link or presence changes caused by power off */
++		atomic_and(~(PCI_EXP_SLTSTA_DLLSC | PCI_EXP_SLTSTA_PDC),
++			   &ctrl->pending_events);
+ 	}
+ 
+ 	/* turn off Green LED */
+diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
+index 7dd443aea5a5..8bfcb8cd0900 100644
+--- a/drivers/pci/hotplug/pciehp_hpc.c
++++ b/drivers/pci/hotplug/pciehp_hpc.c
+@@ -156,9 +156,9 @@ static void pcie_do_write_cmd(struct controller *ctrl, u16 cmd,
+ 	slot_ctrl |= (cmd & mask);
+ 	ctrl->cmd_busy = 1;
+ 	smp_mb();
++	ctrl->slot_ctrl = slot_ctrl;
+ 	pcie_capability_write_word(pdev, PCI_EXP_SLTCTL, slot_ctrl);
+ 	ctrl->cmd_started = jiffies;
+-	ctrl->slot_ctrl = slot_ctrl;
+ 
+ 	/*
+ 	 * Controllers with the Intel CF118 and similar errata advertise
+@@ -736,12 +736,25 @@ void pcie_clear_hotplug_events(struct controller *ctrl)
+ 
+ void pcie_enable_interrupt(struct controller *ctrl)
+ {
+-	pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_HPIE, PCI_EXP_SLTCTL_HPIE);
++	u16 mask;
++
++	mask = PCI_EXP_SLTCTL_HPIE | PCI_EXP_SLTCTL_DLLSCE;
++	pcie_write_cmd(ctrl, mask, mask);
+ }
+ 
+ void pcie_disable_interrupt(struct controller *ctrl)
+ {
+-	pcie_write_cmd(ctrl, 0, PCI_EXP_SLTCTL_HPIE);
++	u16 mask;
++
++	/*
++	 * Mask hot-plug interrupt to prevent it triggering immediately
++	 * when the link goes inactive (we still get PME when any of the
++	 * enabled events is detected). Same goes with Link Layer State
++	 * changed event which generates PME immediately when the link goes
++	 * inactive so mask it as well.
++	 */
++	mask = PCI_EXP_SLTCTL_HPIE | PCI_EXP_SLTCTL_DLLSCE;
++	pcie_write_cmd(ctrl, 0, mask);
+ }
+ 
+ /*
+diff --git a/drivers/pci/pci-bridge-emul.c b/drivers/pci/pci-bridge-emul.c
+index 129738362d90..83fb077d0b41 100644
+--- a/drivers/pci/pci-bridge-emul.c
++++ b/drivers/pci/pci-bridge-emul.c
+@@ -24,29 +24,6 @@
+ #define PCI_CAP_PCIE_START	PCI_BRIDGE_CONF_END
+ #define PCI_CAP_PCIE_END	(PCI_CAP_PCIE_START + PCI_EXP_SLTSTA2 + 2)
+ 
+-/*
+- * Initialize a pci_bridge_emul structure to represent a fake PCI
+- * bridge configuration space. The caller needs to have initialized
+- * the PCI configuration space with whatever values make sense
+- * (typically at least vendor, device, revision), the ->ops pointer,
+- * and optionally ->data and ->has_pcie.
+- */
+-void pci_bridge_emul_init(struct pci_bridge_emul *bridge)
+-{
+-	bridge->conf.class_revision |= PCI_CLASS_BRIDGE_PCI << 16;
+-	bridge->conf.header_type = PCI_HEADER_TYPE_BRIDGE;
+-	bridge->conf.cache_line_size = 0x10;
+-	bridge->conf.status = PCI_STATUS_CAP_LIST;
+-
+-	if (bridge->has_pcie) {
+-		bridge->conf.capabilities_pointer = PCI_CAP_PCIE_START;
+-		bridge->pcie_conf.cap_id = PCI_CAP_ID_EXP;
+-		/* Set PCIe v2, root port, slot support */
+-		bridge->pcie_conf.cap = PCI_EXP_TYPE_ROOT_PORT << 4 | 2 |
+-			PCI_EXP_FLAGS_SLOT;
+-	}
+-}
+-
+ struct pci_bridge_reg_behavior {
+ 	/* Read-only bits */
+ 	u32 ro;
+@@ -283,6 +260,61 @@ const static struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = {
+ 	},
+ };
+ 
++/*
++ * Initialize a pci_bridge_emul structure to represent a fake PCI
++ * bridge configuration space. The caller needs to have initialized
++ * the PCI configuration space with whatever values make sense
++ * (typically at least vendor, device, revision), the ->ops pointer,
++ * and optionally ->data and ->has_pcie.
++ */
++int pci_bridge_emul_init(struct pci_bridge_emul *bridge,
++			 unsigned int flags)
++{
++	bridge->conf.class_revision |= PCI_CLASS_BRIDGE_PCI << 16;
++	bridge->conf.header_type = PCI_HEADER_TYPE_BRIDGE;
++	bridge->conf.cache_line_size = 0x10;
++	bridge->conf.status = PCI_STATUS_CAP_LIST;
++	bridge->pci_regs_behavior = kmemdup(pci_regs_behavior,
++					    sizeof(pci_regs_behavior),
++					    GFP_KERNEL);
++	if (!bridge->pci_regs_behavior)
++		return -ENOMEM;
++
++	if (bridge->has_pcie) {
++		bridge->conf.capabilities_pointer = PCI_CAP_PCIE_START;
++		bridge->pcie_conf.cap_id = PCI_CAP_ID_EXP;
++		/* Set PCIe v2, root port, slot support */
++		bridge->pcie_conf.cap = PCI_EXP_TYPE_ROOT_PORT << 4 | 2 |
++			PCI_EXP_FLAGS_SLOT;
++		bridge->pcie_cap_regs_behavior =
++			kmemdup(pcie_cap_regs_behavior,
++				sizeof(pcie_cap_regs_behavior),
++				GFP_KERNEL);
++		if (!bridge->pcie_cap_regs_behavior) {
++			kfree(bridge->pci_regs_behavior);
++			return -ENOMEM;
++		}
++	}
++
++	if (flags & PCI_BRIDGE_EMUL_NO_PREFETCHABLE_BAR) {
++		bridge->pci_regs_behavior[PCI_PREF_MEMORY_BASE / 4].ro = ~0;
++		bridge->pci_regs_behavior[PCI_PREF_MEMORY_BASE / 4].rw = 0;
++	}
++
++	return 0;
++}
++
++/*
++ * Cleanup a pci_bridge_emul structure that was previously initilized
++ * using pci_bridge_emul_init().
++ */
++void pci_bridge_emul_cleanup(struct pci_bridge_emul *bridge)
++{
++	if (bridge->has_pcie)
++		kfree(bridge->pcie_cap_regs_behavior);
++	kfree(bridge->pci_regs_behavior);
++}
++
+ /*
+  * Should be called by the PCI controller driver when reading the PCI
+  * configuration space of the fake bridge. It will call back the
+@@ -312,11 +344,11 @@ int pci_bridge_emul_conf_read(struct pci_bridge_emul *bridge, int where,
+ 		reg -= PCI_CAP_PCIE_START;
+ 		read_op = bridge->ops->read_pcie;
+ 		cfgspace = (u32 *) &bridge->pcie_conf;
+-		behavior = pcie_cap_regs_behavior;
++		behavior = bridge->pcie_cap_regs_behavior;
+ 	} else {
+ 		read_op = bridge->ops->read_base;
+ 		cfgspace = (u32 *) &bridge->conf;
+-		behavior = pci_regs_behavior;
++		behavior = bridge->pci_regs_behavior;
+ 	}
+ 
+ 	if (read_op)
+@@ -383,11 +415,11 @@ int pci_bridge_emul_conf_write(struct pci_bridge_emul *bridge, int where,
+ 		reg -= PCI_CAP_PCIE_START;
+ 		write_op = bridge->ops->write_pcie;
+ 		cfgspace = (u32 *) &bridge->pcie_conf;
+-		behavior = pcie_cap_regs_behavior;
++		behavior = bridge->pcie_cap_regs_behavior;
+ 	} else {
+ 		write_op = bridge->ops->write_base;
+ 		cfgspace = (u32 *) &bridge->conf;
+-		behavior = pci_regs_behavior;
++		behavior = bridge->pci_regs_behavior;
+ 	}
+ 
+ 	/* Keep all bits, except the RW bits */
+diff --git a/drivers/pci/pci-bridge-emul.h b/drivers/pci/pci-bridge-emul.h
+index 9d510ccf738b..e65b1b79899d 100644
+--- a/drivers/pci/pci-bridge-emul.h
++++ b/drivers/pci/pci-bridge-emul.h
+@@ -107,15 +107,26 @@ struct pci_bridge_emul_ops {
+ 			   u32 old, u32 new, u32 mask);
+ };
+ 
++struct pci_bridge_reg_behavior;
++
+ struct pci_bridge_emul {
+ 	struct pci_bridge_emul_conf conf;
+ 	struct pci_bridge_emul_pcie_conf pcie_conf;
+ 	struct pci_bridge_emul_ops *ops;
++	struct pci_bridge_reg_behavior *pci_regs_behavior;
++	struct pci_bridge_reg_behavior *pcie_cap_regs_behavior;
+ 	void *data;
+ 	bool has_pcie;
+ };
+ 
+-void pci_bridge_emul_init(struct pci_bridge_emul *bridge);
++enum {
++	PCI_BRIDGE_EMUL_NO_PREFETCHABLE_BAR = BIT(0),
++};
++
++int pci_bridge_emul_init(struct pci_bridge_emul *bridge,
++			 unsigned int flags);
++void pci_bridge_emul_cleanup(struct pci_bridge_emul *bridge);
++
+ int pci_bridge_emul_conf_read(struct pci_bridge_emul *bridge, int where,
+ 			      int size, u32 *value);
+ int pci_bridge_emul_conf_write(struct pci_bridge_emul *bridge, int where,
+diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c
+index e435d12e61a0..7b77754a82de 100644
+--- a/drivers/pci/pcie/dpc.c
++++ b/drivers/pci/pcie/dpc.c
+@@ -202,6 +202,28 @@ static void dpc_process_rp_pio_error(struct dpc_dev *dpc)
+ 	pci_write_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_STATUS, status);
+ }
+ 
++static int dpc_get_aer_uncorrect_severity(struct pci_dev *dev,
++					  struct aer_err_info *info)
++{
++	int pos = dev->aer_cap;
++	u32 status, mask, sev;
++
++	pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, &status);
++	pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_MASK, &mask);
++	status &= ~mask;
++	if (!status)
++		return 0;
++
++	pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_SEVER, &sev);
++	status &= sev;
++	if (status)
++		info->severity = AER_FATAL;
++	else
++		info->severity = AER_NONFATAL;
++
++	return 1;
++}
++
+ static irqreturn_t dpc_handler(int irq, void *context)
+ {
+ 	struct aer_err_info info;
+@@ -229,9 +251,12 @@ static irqreturn_t dpc_handler(int irq, void *context)
+ 	/* show RP PIO error detail information */
+ 	if (dpc->rp_extensions && reason == 3 && ext_reason == 0)
+ 		dpc_process_rp_pio_error(dpc);
+-	else if (reason == 0 && aer_get_device_error_info(pdev, &info)) {
++	else if (reason == 0 &&
++		 dpc_get_aer_uncorrect_severity(pdev, &info) &&
++		 aer_get_device_error_info(pdev, &info)) {
+ 		aer_print_error(pdev, &info);
+ 		pci_cleanup_aer_uncorrect_error_status(pdev);
++		pci_aer_clear_fatal_status(pdev);
+ 	}
+ 
+ 	/* We configure DPC so it only triggers on ERR_FATAL */
+diff --git a/drivers/pci/pcie/pme.c b/drivers/pci/pcie/pme.c
+index 0dbcf429089f..efa5b552914b 100644
+--- a/drivers/pci/pcie/pme.c
++++ b/drivers/pci/pcie/pme.c
+@@ -363,6 +363,16 @@ static bool pcie_pme_check_wakeup(struct pci_bus *bus)
+ 	return false;
+ }
+ 
++static void pcie_pme_disable_interrupt(struct pci_dev *port,
++				       struct pcie_pme_service_data *data)
++{
++	spin_lock_irq(&data->lock);
++	pcie_pme_interrupt_enable(port, false);
++	pcie_clear_root_pme_status(port);
++	data->noirq = true;
++	spin_unlock_irq(&data->lock);
++}
++
+ /**
+  * pcie_pme_suspend - Suspend PCIe PME service device.
+  * @srv: PCIe service device to suspend.
+@@ -387,11 +397,7 @@ static int pcie_pme_suspend(struct pcie_device *srv)
+ 			return 0;
+ 	}
+ 
+-	spin_lock_irq(&data->lock);
+-	pcie_pme_interrupt_enable(port, false);
+-	pcie_clear_root_pme_status(port);
+-	data->noirq = true;
+-	spin_unlock_irq(&data->lock);
++	pcie_pme_disable_interrupt(port, data);
+ 
+ 	synchronize_irq(srv->irq);
+ 
+@@ -426,35 +432,12 @@ static int pcie_pme_resume(struct pcie_device *srv)
+  * @srv - PCIe service device to remove.
+  */
+ static void pcie_pme_remove(struct pcie_device *srv)
+-{
+-	pcie_pme_suspend(srv);
+-	free_irq(srv->irq, srv);
+-	kfree(get_service_data(srv));
+-}
+-
+-static int pcie_pme_runtime_suspend(struct pcie_device *srv)
+-{
+-	struct pcie_pme_service_data *data = get_service_data(srv);
+-
+-	spin_lock_irq(&data->lock);
+-	pcie_pme_interrupt_enable(srv->port, false);
+-	pcie_clear_root_pme_status(srv->port);
+-	data->noirq = true;
+-	spin_unlock_irq(&data->lock);
+-
+-	return 0;
+-}
+-
+-static int pcie_pme_runtime_resume(struct pcie_device *srv)
+ {
+ 	struct pcie_pme_service_data *data = get_service_data(srv);
+ 
+-	spin_lock_irq(&data->lock);
+-	pcie_pme_interrupt_enable(srv->port, true);
+-	data->noirq = false;
+-	spin_unlock_irq(&data->lock);
+-
+-	return 0;
++	pcie_pme_disable_interrupt(srv->port, data);
++	free_irq(srv->irq, srv);
++	kfree(data);
+ }
+ 
+ static struct pcie_port_service_driver pcie_pme_driver = {
+@@ -464,8 +447,6 @@ static struct pcie_port_service_driver pcie_pme_driver = {
+ 
+ 	.probe		= pcie_pme_probe,
+ 	.suspend	= pcie_pme_suspend,
+-	.runtime_suspend = pcie_pme_runtime_suspend,
+-	.runtime_resume	= pcie_pme_runtime_resume,
+ 	.resume		= pcie_pme_resume,
+ 	.remove		= pcie_pme_remove,
+ };
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index 257b9f6f2ebb..c46a3fcb341e 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -2071,11 +2071,8 @@ static void pci_configure_ltr(struct pci_dev *dev)
+ {
+ #ifdef CONFIG_PCIEASPM
+ 	struct pci_host_bridge *host = pci_find_host_bridge(dev->bus);
+-	u32 cap;
+ 	struct pci_dev *bridge;
+-
+-	if (!host->native_ltr)
+-		return;
++	u32 cap, ctl;
+ 
+ 	if (!pci_is_pcie(dev))
+ 		return;
+@@ -2084,22 +2081,35 @@ static void pci_configure_ltr(struct pci_dev *dev)
+ 	if (!(cap & PCI_EXP_DEVCAP2_LTR))
+ 		return;
+ 
+-	/*
+-	 * Software must not enable LTR in an Endpoint unless the Root
+-	 * Complex and all intermediate Switches indicate support for LTR.
+-	 * PCIe r3.1, sec 6.18.
+-	 */
+-	if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT)
+-		dev->ltr_path = 1;
+-	else {
++	pcie_capability_read_dword(dev, PCI_EXP_DEVCTL2, &ctl);
++	if (ctl & PCI_EXP_DEVCTL2_LTR_EN) {
++		if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT) {
++			dev->ltr_path = 1;
++			return;
++		}
++
+ 		bridge = pci_upstream_bridge(dev);
+ 		if (bridge && bridge->ltr_path)
+ 			dev->ltr_path = 1;
++
++		return;
+ 	}
+ 
+-	if (dev->ltr_path)
++	if (!host->native_ltr)
++		return;
++
++	/*
++	 * Software must not enable LTR in an Endpoint unless the Root
++	 * Complex and all intermediate Switches indicate support for LTR.
++	 * PCIe r4.0, sec 6.18.
++	 */
++	if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT ||
++	    ((bridge = pci_upstream_bridge(dev)) &&
++	      bridge->ltr_path)) {
+ 		pcie_capability_set_word(dev, PCI_EXP_DEVCTL2,
+ 					 PCI_EXP_DEVCTL2_LTR_EN);
++		dev->ltr_path = 1;
++	}
+ #endif
+ }
+ 
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index e2a879e93d86..fba03a7d5c7f 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -3877,6 +3877,8 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9128,
+ /* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c14 */
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9130,
+ 			 quirk_dma_func1_alias);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9170,
++			 quirk_dma_func1_alias);
+ /* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c47 + c57 */
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9172,
+ 			 quirk_dma_func1_alias);
+diff --git a/drivers/perf/arm_spe_pmu.c b/drivers/perf/arm_spe_pmu.c
+index 8e46a9dad2fa..7cb766dafe85 100644
+--- a/drivers/perf/arm_spe_pmu.c
++++ b/drivers/perf/arm_spe_pmu.c
+@@ -824,10 +824,10 @@ static void arm_spe_pmu_read(struct perf_event *event)
+ {
+ }
+ 
+-static void *arm_spe_pmu_setup_aux(int cpu, void **pages, int nr_pages,
+-				   bool snapshot)
++static void *arm_spe_pmu_setup_aux(struct perf_event *event, void **pages,
++				   int nr_pages, bool snapshot)
+ {
+-	int i;
++	int i, cpu = event->cpu;
+ 	struct page **pglist;
+ 	struct arm_spe_pmu_buf *buf;
+ 
+diff --git a/drivers/phy/allwinner/phy-sun4i-usb.c b/drivers/phy/allwinner/phy-sun4i-usb.c
+index 5163097b43df..4bbd9ede38c8 100644
+--- a/drivers/phy/allwinner/phy-sun4i-usb.c
++++ b/drivers/phy/allwinner/phy-sun4i-usb.c
+@@ -485,8 +485,11 @@ static int sun4i_usb_phy_set_mode(struct phy *_phy,
+ 	struct sun4i_usb_phy_data *data = to_sun4i_usb_phy_data(phy);
+ 	int new_mode;
+ 
+-	if (phy->index != 0)
++	if (phy->index != 0) {
++		if (mode == PHY_MODE_USB_HOST)
++			return 0;
+ 		return -EINVAL;
++	}
+ 
+ 	switch (mode) {
+ 	case PHY_MODE_USB_HOST:
+diff --git a/drivers/pinctrl/meson/pinctrl-meson.c b/drivers/pinctrl/meson/pinctrl-meson.c
+index ea87d739f534..a4ae1ac5369e 100644
+--- a/drivers/pinctrl/meson/pinctrl-meson.c
++++ b/drivers/pinctrl/meson/pinctrl-meson.c
+@@ -31,6 +31,9 @@
+  * In some cases the register ranges for pull enable and pull
+  * direction are the same and thus there are only 3 register ranges.
+  *
++ * Since Meson G12A SoC, the ao register ranges for gpio, pull enable
++ * and pull direction are the same, so there are only 2 register ranges.
++ *
+  * For the pull and GPIO configuration every bank uses a contiguous
+  * set of bits in the register sets described above; the same register
+  * can be shared by more banks with different offsets.
+@@ -488,23 +491,22 @@ static int meson_pinctrl_parse_dt(struct meson_pinctrl *pc,
+ 		return PTR_ERR(pc->reg_mux);
+ 	}
+ 
+-	pc->reg_pull = meson_map_resource(pc, gpio_np, "pull");
+-	if (IS_ERR(pc->reg_pull)) {
+-		dev_err(pc->dev, "pull registers not found\n");
+-		return PTR_ERR(pc->reg_pull);
++	pc->reg_gpio = meson_map_resource(pc, gpio_np, "gpio");
++	if (IS_ERR(pc->reg_gpio)) {
++		dev_err(pc->dev, "gpio registers not found\n");
++		return PTR_ERR(pc->reg_gpio);
+ 	}
+ 
++	pc->reg_pull = meson_map_resource(pc, gpio_np, "pull");
++	/* Use gpio region if pull one is not present */
++	if (IS_ERR(pc->reg_pull))
++		pc->reg_pull = pc->reg_gpio;
++
+ 	pc->reg_pullen = meson_map_resource(pc, gpio_np, "pull-enable");
+ 	/* Use pull region if pull-enable one is not present */
+ 	if (IS_ERR(pc->reg_pullen))
+ 		pc->reg_pullen = pc->reg_pull;
+ 
+-	pc->reg_gpio = meson_map_resource(pc, gpio_np, "gpio");
+-	if (IS_ERR(pc->reg_gpio)) {
+-		dev_err(pc->dev, "gpio registers not found\n");
+-		return PTR_ERR(pc->reg_gpio);
+-	}
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/pinctrl/meson/pinctrl-meson8b.c b/drivers/pinctrl/meson/pinctrl-meson8b.c
+index 0f140a802137..7f76000cc12e 100644
+--- a/drivers/pinctrl/meson/pinctrl-meson8b.c
++++ b/drivers/pinctrl/meson/pinctrl-meson8b.c
+@@ -346,6 +346,8 @@ static const unsigned int eth_rx_dv_pins[]	= { DIF_1_P };
+ static const unsigned int eth_rx_clk_pins[]	= { DIF_1_N };
+ static const unsigned int eth_txd0_1_pins[]	= { DIF_2_P };
+ static const unsigned int eth_txd1_1_pins[]	= { DIF_2_N };
++static const unsigned int eth_rxd3_pins[]	= { DIF_2_P };
++static const unsigned int eth_rxd2_pins[]	= { DIF_2_N };
+ static const unsigned int eth_tx_en_pins[]	= { DIF_3_P };
+ static const unsigned int eth_ref_clk_pins[]	= { DIF_3_N };
+ static const unsigned int eth_mdc_pins[]	= { DIF_4_P };
+@@ -599,6 +601,8 @@ static struct meson_pmx_group meson8b_cbus_groups[] = {
+ 	GROUP(eth_ref_clk,	6,	8),
+ 	GROUP(eth_mdc,		6,	9),
+ 	GROUP(eth_mdio_en,	6,	10),
++	GROUP(eth_rxd3,		7,	22),
++	GROUP(eth_rxd2,		7,	23),
+ };
+ 
+ static struct meson_pmx_group meson8b_aobus_groups[] = {
+@@ -748,7 +752,7 @@ static const char * const ethernet_groups[] = {
+ 	"eth_tx_clk", "eth_tx_en", "eth_txd1_0", "eth_txd1_1",
+ 	"eth_txd0_0", "eth_txd0_1", "eth_rx_clk", "eth_rx_dv",
+ 	"eth_rxd1", "eth_rxd0", "eth_mdio_en", "eth_mdc", "eth_ref_clk",
+-	"eth_txd2", "eth_txd3"
++	"eth_txd2", "eth_txd3", "eth_rxd3", "eth_rxd2"
+ };
+ 
+ static const char * const i2c_a_groups[] = {
+diff --git a/drivers/pinctrl/sh-pfc/pfc-r8a77990.c b/drivers/pinctrl/sh-pfc/pfc-r8a77990.c
+index e40908dc37e0..1ce286f7b286 100644
+--- a/drivers/pinctrl/sh-pfc/pfc-r8a77990.c
++++ b/drivers/pinctrl/sh-pfc/pfc-r8a77990.c
+@@ -391,29 +391,33 @@ FM(IP12_23_20)	IP12_23_20	FM(IP13_23_20)	IP13_23_20	FM(IP14_23_20)	IP14_23_20	FM
+ FM(IP12_27_24)	IP12_27_24	FM(IP13_27_24)	IP13_27_24	FM(IP14_27_24)	IP14_27_24	FM(IP15_27_24)	IP15_27_24 \
+ FM(IP12_31_28)	IP12_31_28	FM(IP13_31_28)	IP13_31_28	FM(IP14_31_28)	IP14_31_28	FM(IP15_31_28)	IP15_31_28
+ 
++/* The bit numbering in MOD_SEL fields is reversed */
++#define REV4(f0, f1, f2, f3)			f0 f2 f1 f3
++#define REV8(f0, f1, f2, f3, f4, f5, f6, f7)	f0 f4 f2 f6 f1 f5 f3 f7
++
+ /* MOD_SEL0 */			/* 0 */				/* 1 */				/* 2 */				/* 3 */			/* 4 */			/* 5 */		/* 6 */		/* 7 */
+-#define MOD_SEL0_30_29		FM(SEL_ADGB_0)			FM(SEL_ADGB_1)			FM(SEL_ADGB_2)			F_(0, 0)
++#define MOD_SEL0_30_29	   REV4(FM(SEL_ADGB_0),			FM(SEL_ADGB_1),			FM(SEL_ADGB_2),			F_(0, 0))
+ #define MOD_SEL0_28		FM(SEL_DRIF0_0)			FM(SEL_DRIF0_1)
+-#define MOD_SEL0_27_26		FM(SEL_FM_0)			FM(SEL_FM_1)			FM(SEL_FM_2)			F_(0, 0)
++#define MOD_SEL0_27_26	   REV4(FM(SEL_FM_0),			FM(SEL_FM_1),			FM(SEL_FM_2),			F_(0, 0))
+ #define MOD_SEL0_25		FM(SEL_FSO_0)			FM(SEL_FSO_1)
+ #define MOD_SEL0_24		FM(SEL_HSCIF0_0)		FM(SEL_HSCIF0_1)
+ #define MOD_SEL0_23		FM(SEL_HSCIF1_0)		FM(SEL_HSCIF1_1)
+ #define MOD_SEL0_22		FM(SEL_HSCIF2_0)		FM(SEL_HSCIF2_1)
+-#define MOD_SEL0_21_20		FM(SEL_I2C1_0)			FM(SEL_I2C1_1)			FM(SEL_I2C1_2)			FM(SEL_I2C1_3)
+-#define MOD_SEL0_19_18_17	FM(SEL_I2C2_0)			FM(SEL_I2C2_1)			FM(SEL_I2C2_2)			FM(SEL_I2C2_3)		FM(SEL_I2C2_4)		F_(0, 0)	F_(0, 0)	F_(0, 0)
++#define MOD_SEL0_21_20	   REV4(FM(SEL_I2C1_0),			FM(SEL_I2C1_1),			FM(SEL_I2C1_2),			FM(SEL_I2C1_3))
++#define MOD_SEL0_19_18_17  REV8(FM(SEL_I2C2_0),			FM(SEL_I2C2_1),			FM(SEL_I2C2_2),			FM(SEL_I2C2_3),		FM(SEL_I2C2_4),		F_(0, 0),	F_(0, 0),	F_(0, 0))
+ #define MOD_SEL0_16		FM(SEL_NDFC_0)			FM(SEL_NDFC_1)
+ #define MOD_SEL0_15		FM(SEL_PWM0_0)			FM(SEL_PWM0_1)
+ #define MOD_SEL0_14		FM(SEL_PWM1_0)			FM(SEL_PWM1_1)
+-#define MOD_SEL0_13_12		FM(SEL_PWM2_0)			FM(SEL_PWM2_1)			FM(SEL_PWM2_2)			F_(0, 0)
+-#define MOD_SEL0_11_10		FM(SEL_PWM3_0)			FM(SEL_PWM3_1)			FM(SEL_PWM3_2)			F_(0, 0)
++#define MOD_SEL0_13_12	   REV4(FM(SEL_PWM2_0),			FM(SEL_PWM2_1),			FM(SEL_PWM2_2),			F_(0, 0))
++#define MOD_SEL0_11_10	   REV4(FM(SEL_PWM3_0),			FM(SEL_PWM3_1),			FM(SEL_PWM3_2),			F_(0, 0))
+ #define MOD_SEL0_9		FM(SEL_PWM4_0)			FM(SEL_PWM4_1)
+ #define MOD_SEL0_8		FM(SEL_PWM5_0)			FM(SEL_PWM5_1)
+ #define MOD_SEL0_7		FM(SEL_PWM6_0)			FM(SEL_PWM6_1)
+-#define MOD_SEL0_6_5		FM(SEL_REMOCON_0)		FM(SEL_REMOCON_1)		FM(SEL_REMOCON_2)		F_(0, 0)
++#define MOD_SEL0_6_5	   REV4(FM(SEL_REMOCON_0),		FM(SEL_REMOCON_1),		FM(SEL_REMOCON_2),		F_(0, 0))
+ #define MOD_SEL0_4		FM(SEL_SCIF_0)			FM(SEL_SCIF_1)
+ #define MOD_SEL0_3		FM(SEL_SCIF0_0)			FM(SEL_SCIF0_1)
+ #define MOD_SEL0_2		FM(SEL_SCIF2_0)			FM(SEL_SCIF2_1)
+-#define MOD_SEL0_1_0		FM(SEL_SPEED_PULSE_IF_0)	FM(SEL_SPEED_PULSE_IF_1)	FM(SEL_SPEED_PULSE_IF_2)	F_(0, 0)
++#define MOD_SEL0_1_0	   REV4(FM(SEL_SPEED_PULSE_IF_0),	FM(SEL_SPEED_PULSE_IF_1),	FM(SEL_SPEED_PULSE_IF_2),	F_(0, 0))
+ 
+ /* MOD_SEL1 */			/* 0 */				/* 1 */				/* 2 */				/* 3 */			/* 4 */			/* 5 */		/* 6 */		/* 7 */
+ #define MOD_SEL1_31		FM(SEL_SIMCARD_0)		FM(SEL_SIMCARD_1)
+@@ -422,18 +426,18 @@ FM(IP12_31_28)	IP12_31_28	FM(IP13_31_28)	IP13_31_28	FM(IP14_31_28)	IP14_31_28	FM
+ #define MOD_SEL1_28		FM(SEL_USB_20_CH0_0)		FM(SEL_USB_20_CH0_1)
+ #define MOD_SEL1_26		FM(SEL_DRIF2_0)			FM(SEL_DRIF2_1)
+ #define MOD_SEL1_25		FM(SEL_DRIF3_0)			FM(SEL_DRIF3_1)
+-#define MOD_SEL1_24_23_22	FM(SEL_HSCIF3_0)		FM(SEL_HSCIF3_1)		FM(SEL_HSCIF3_2)		FM(SEL_HSCIF3_3)	FM(SEL_HSCIF3_4)	F_(0, 0)	F_(0, 0)	F_(0, 0)
+-#define MOD_SEL1_21_20_19	FM(SEL_HSCIF4_0)		FM(SEL_HSCIF4_1)		FM(SEL_HSCIF4_2)		FM(SEL_HSCIF4_3)	FM(SEL_HSCIF4_4)	F_(0, 0)	F_(0, 0)	F_(0, 0)
++#define MOD_SEL1_24_23_22  REV8(FM(SEL_HSCIF3_0),		FM(SEL_HSCIF3_1),		FM(SEL_HSCIF3_2),		FM(SEL_HSCIF3_3),	FM(SEL_HSCIF3_4),	F_(0, 0),	F_(0, 0),	F_(0, 0))
++#define MOD_SEL1_21_20_19  REV8(FM(SEL_HSCIF4_0),		FM(SEL_HSCIF4_1),		FM(SEL_HSCIF4_2),		FM(SEL_HSCIF4_3),	FM(SEL_HSCIF4_4),	F_(0, 0),	F_(0, 0),	F_(0, 0))
+ #define MOD_SEL1_18		FM(SEL_I2C6_0)			FM(SEL_I2C6_1)
+ #define MOD_SEL1_17		FM(SEL_I2C7_0)			FM(SEL_I2C7_1)
+ #define MOD_SEL1_16		FM(SEL_MSIOF2_0)		FM(SEL_MSIOF2_1)
+ #define MOD_SEL1_15		FM(SEL_MSIOF3_0)		FM(SEL_MSIOF3_1)
+-#define MOD_SEL1_14_13		FM(SEL_SCIF3_0)			FM(SEL_SCIF3_1)			FM(SEL_SCIF3_2)			F_(0, 0)
+-#define MOD_SEL1_12_11		FM(SEL_SCIF4_0)			FM(SEL_SCIF4_1)			FM(SEL_SCIF4_2)			F_(0, 0)
+-#define MOD_SEL1_10_9		FM(SEL_SCIF5_0)			FM(SEL_SCIF5_1)			FM(SEL_SCIF5_2)			F_(0, 0)
++#define MOD_SEL1_14_13	   REV4(FM(SEL_SCIF3_0),		FM(SEL_SCIF3_1),		FM(SEL_SCIF3_2),		F_(0, 0))
++#define MOD_SEL1_12_11	   REV4(FM(SEL_SCIF4_0),		FM(SEL_SCIF4_1),		FM(SEL_SCIF4_2),		F_(0, 0))
++#define MOD_SEL1_10_9	   REV4(FM(SEL_SCIF5_0),		FM(SEL_SCIF5_1),		FM(SEL_SCIF5_2),		F_(0, 0))
+ #define MOD_SEL1_8		FM(SEL_VIN4_0)			FM(SEL_VIN4_1)
+ #define MOD_SEL1_7		FM(SEL_VIN5_0)			FM(SEL_VIN5_1)
+-#define MOD_SEL1_6_5		FM(SEL_ADGC_0)			FM(SEL_ADGC_1)			FM(SEL_ADGC_2)			F_(0, 0)
++#define MOD_SEL1_6_5	   REV4(FM(SEL_ADGC_0),			FM(SEL_ADGC_1),			FM(SEL_ADGC_2),			F_(0, 0))
+ #define MOD_SEL1_4		FM(SEL_SSI9_0)			FM(SEL_SSI9_1)
+ 
+ #define PINMUX_MOD_SELS	\
+diff --git a/drivers/pinctrl/sh-pfc/pfc-r8a77995.c b/drivers/pinctrl/sh-pfc/pfc-r8a77995.c
+index 84d78db381e3..9e377e3b9cb3 100644
+--- a/drivers/pinctrl/sh-pfc/pfc-r8a77995.c
++++ b/drivers/pinctrl/sh-pfc/pfc-r8a77995.c
+@@ -381,6 +381,9 @@ FM(IP12_23_20)	IP12_23_20 \
+ FM(IP12_27_24)	IP12_27_24 \
+ FM(IP12_31_28)	IP12_31_28 \
+ 
++/* The bit numbering in MOD_SEL fields is reversed */
++#define REV4(f0, f1, f2, f3)			f0 f2 f1 f3
++
+ /* MOD_SEL0 */			/* 0 */			/* 1 */			/* 2 */			/* 3 */
+ #define MOD_SEL0_30		FM(SEL_MSIOF2_0)	FM(SEL_MSIOF2_1)
+ #define MOD_SEL0_29		FM(SEL_I2C3_0)		FM(SEL_I2C3_1)
+@@ -388,10 +391,10 @@ FM(IP12_31_28)	IP12_31_28 \
+ #define MOD_SEL0_27		FM(SEL_MSIOF3_0)	FM(SEL_MSIOF3_1)
+ #define MOD_SEL0_26		FM(SEL_HSCIF3_0)	FM(SEL_HSCIF3_1)
+ #define MOD_SEL0_25		FM(SEL_SCIF4_0)		FM(SEL_SCIF4_1)
+-#define MOD_SEL0_24_23		FM(SEL_PWM0_0)		FM(SEL_PWM0_1)		FM(SEL_PWM0_2)		F_(0, 0)
+-#define MOD_SEL0_22_21		FM(SEL_PWM1_0)		FM(SEL_PWM1_1)		FM(SEL_PWM1_2)		F_(0, 0)
+-#define MOD_SEL0_20_19		FM(SEL_PWM2_0)		FM(SEL_PWM2_1)		FM(SEL_PWM2_2)		F_(0, 0)
+-#define MOD_SEL0_18_17		FM(SEL_PWM3_0)		FM(SEL_PWM3_1)		FM(SEL_PWM3_2)		F_(0, 0)
++#define MOD_SEL0_24_23	   REV4(FM(SEL_PWM0_0),		FM(SEL_PWM0_1),		FM(SEL_PWM0_2),		F_(0, 0))
++#define MOD_SEL0_22_21	   REV4(FM(SEL_PWM1_0),		FM(SEL_PWM1_1),		FM(SEL_PWM1_2),		F_(0, 0))
++#define MOD_SEL0_20_19	   REV4(FM(SEL_PWM2_0),		FM(SEL_PWM2_1),		FM(SEL_PWM2_2),		F_(0, 0))
++#define MOD_SEL0_18_17	   REV4(FM(SEL_PWM3_0),		FM(SEL_PWM3_1),		FM(SEL_PWM3_2),		F_(0, 0))
+ #define MOD_SEL0_15		FM(SEL_IRQ_0_0)		FM(SEL_IRQ_0_1)
+ #define MOD_SEL0_14		FM(SEL_IRQ_1_0)		FM(SEL_IRQ_1_1)
+ #define MOD_SEL0_13		FM(SEL_IRQ_2_0)		FM(SEL_IRQ_2_1)
+diff --git a/drivers/platform/mellanox/mlxreg-hotplug.c b/drivers/platform/mellanox/mlxreg-hotplug.c
+index b6d44550d98c..eca16d00e310 100644
+--- a/drivers/platform/mellanox/mlxreg-hotplug.c
++++ b/drivers/platform/mellanox/mlxreg-hotplug.c
+@@ -248,7 +248,8 @@ mlxreg_hotplug_work_helper(struct mlxreg_hotplug_priv_data *priv,
+ 			   struct mlxreg_core_item *item)
+ {
+ 	struct mlxreg_core_data *data;
+-	u32 asserted, regval, bit;
++	unsigned long asserted;
++	u32 regval, bit;
+ 	int ret;
+ 
+ 	/*
+@@ -281,7 +282,7 @@ mlxreg_hotplug_work_helper(struct mlxreg_hotplug_priv_data *priv,
+ 	asserted = item->cache ^ regval;
+ 	item->cache = regval;
+ 
+-	for_each_set_bit(bit, (unsigned long *)&asserted, 8) {
++	for_each_set_bit(bit, &asserted, 8) {
+ 		data = item->data + bit;
+ 		if (regval & BIT(bit)) {
+ 			if (item->inversed)
+diff --git a/drivers/platform/x86/ideapad-laptop.c b/drivers/platform/x86/ideapad-laptop.c
+index 1589dffab9fa..8b53a9ceb897 100644
+--- a/drivers/platform/x86/ideapad-laptop.c
++++ b/drivers/platform/x86/ideapad-laptop.c
+@@ -989,7 +989,7 @@ static const struct dmi_system_id no_hw_rfkill_list[] = {
+ 		.ident = "Lenovo RESCUER R720-15IKBN",
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+-			DMI_MATCH(DMI_BOARD_NAME, "80WW"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo R720-15IKBN"),
+ 		},
+ 	},
+ 	{
+diff --git a/drivers/platform/x86/intel-hid.c b/drivers/platform/x86/intel-hid.c
+index e28bcf61b126..bc0d55a59015 100644
+--- a/drivers/platform/x86/intel-hid.c
++++ b/drivers/platform/x86/intel-hid.c
+@@ -363,7 +363,7 @@ wakeup:
+ 	 * the 5-button array, but still send notifies with power button
+ 	 * event code to this device object on power button actions.
+ 	 *
+-	 * Report the power button press; catch and ignore the button release.
++	 * Report the power button press and release.
+ 	 */
+ 	if (!priv->array) {
+ 		if (event == 0xce) {
+@@ -372,8 +372,11 @@ wakeup:
+ 			return;
+ 		}
+ 
+-		if (event == 0xcf)
++		if (event == 0xcf) {
++			input_report_key(priv->input_dev, KEY_POWER, 0);
++			input_sync(priv->input_dev);
+ 			return;
++		}
+ 	}
+ 
+ 	/* 0xC0 is for HID events, other values are for 5 button array */
+diff --git a/drivers/platform/x86/intel_pmc_core.c b/drivers/platform/x86/intel_pmc_core.c
+index 22dbf115782e..c37e74ee609d 100644
+--- a/drivers/platform/x86/intel_pmc_core.c
++++ b/drivers/platform/x86/intel_pmc_core.c
+@@ -380,7 +380,8 @@ static int pmc_core_ppfear_show(struct seq_file *s, void *unused)
+ 	     index < PPFEAR_MAX_NUM_ENTRIES; index++, iter++)
+ 		pf_regs[index] = pmc_core_reg_read_byte(pmcdev, iter);
+ 
+-	for (index = 0; map[index].name; index++)
++	for (index = 0; map[index].name &&
++	     index < pmcdev->map->ppfear_buckets * 8; index++)
+ 		pmc_core_display_map(s, index, pf_regs[index / 8], map);
+ 
+ 	return 0;
+diff --git a/drivers/platform/x86/intel_pmc_core.h b/drivers/platform/x86/intel_pmc_core.h
+index 89554cba5758..1a0104d2cbf0 100644
+--- a/drivers/platform/x86/intel_pmc_core.h
++++ b/drivers/platform/x86/intel_pmc_core.h
+@@ -32,7 +32,7 @@
+ #define SPT_PMC_SLP_S0_RES_COUNTER_STEP		0x64
+ #define PMC_BASE_ADDR_MASK			~(SPT_PMC_MMIO_REG_LEN - 1)
+ #define MTPMC_MASK				0xffff0000
+-#define PPFEAR_MAX_NUM_ENTRIES			5
++#define PPFEAR_MAX_NUM_ENTRIES			12
+ #define SPT_PPFEAR_NUM_ENTRIES			5
+ #define SPT_PMC_READ_DISABLE_BIT		0x16
+ #define SPT_PMC_MSG_FULL_STS_BIT		0x18
+diff --git a/drivers/power/supply/cpcap-charger.c b/drivers/power/supply/cpcap-charger.c
+index c843eaff8ad0..c3ed7b476676 100644
+--- a/drivers/power/supply/cpcap-charger.c
++++ b/drivers/power/supply/cpcap-charger.c
+@@ -458,6 +458,7 @@ static void cpcap_usb_detect(struct work_struct *work)
+ 			goto out_err;
+ 	}
+ 
++	power_supply_changed(ddata->usb);
+ 	return;
+ 
+ out_err:
+diff --git a/drivers/regulator/act8865-regulator.c b/drivers/regulator/act8865-regulator.c
+index 21e20483bd91..e0239cf3f56d 100644
+--- a/drivers/regulator/act8865-regulator.c
++++ b/drivers/regulator/act8865-regulator.c
+@@ -131,7 +131,7 @@
+  * ACT8865 voltage number
+  */
+ #define	ACT8865_VOLTAGE_NUM	64
+-#define ACT8600_SUDCDC_VOLTAGE_NUM	255
++#define ACT8600_SUDCDC_VOLTAGE_NUM	256
+ 
+ struct act8865 {
+ 	struct regmap *regmap;
+@@ -222,7 +222,8 @@ static const struct regulator_linear_range act8600_sudcdc_voltage_ranges[] = {
+ 	REGULATOR_LINEAR_RANGE(3000000, 0, 63, 0),
+ 	REGULATOR_LINEAR_RANGE(3000000, 64, 159, 100000),
+ 	REGULATOR_LINEAR_RANGE(12600000, 160, 191, 200000),
+-	REGULATOR_LINEAR_RANGE(19000000, 191, 255, 400000),
++	REGULATOR_LINEAR_RANGE(19000000, 192, 247, 400000),
++	REGULATOR_LINEAR_RANGE(41400000, 248, 255, 0),
+ };
+ 
+ static struct regulator_ops act8865_ops = {
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index b9d7b45c7295..e2caf11598c7 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -1349,7 +1349,9 @@ static int set_machine_constraints(struct regulator_dev *rdev,
+ 		 * We'll only apply the initial system load if an
+ 		 * initial mode wasn't specified.
+ 		 */
++		regulator_lock(rdev);
+ 		drms_uA_update(rdev);
++		regulator_unlock(rdev);
+ 	}
+ 
+ 	if ((rdev->constraints->ramp_delay || rdev->constraints->ramp_disable)
+diff --git a/drivers/regulator/max77620-regulator.c b/drivers/regulator/max77620-regulator.c
+index b94e3a721721..cd93cf53e23c 100644
+--- a/drivers/regulator/max77620-regulator.c
++++ b/drivers/regulator/max77620-regulator.c
+@@ -1,7 +1,7 @@
+ /*
+  * Maxim MAX77620 Regulator driver
+  *
+- * Copyright (c) 2016, NVIDIA CORPORATION.  All rights reserved.
++ * Copyright (c) 2016-2018, NVIDIA CORPORATION.  All rights reserved.
+  *
+  * Author: Mallikarjun Kasoju <mkasoju@nvidia.com>
+  *	Laxman Dewangan <ldewangan@nvidia.com>
+@@ -803,6 +803,14 @@ static int max77620_regulator_probe(struct platform_device *pdev)
+ 		rdesc = &rinfo[id].desc;
+ 		pmic->rinfo[id] = &max77620_regs_info[id];
+ 		pmic->enable_power_mode[id] = MAX77620_POWER_MODE_NORMAL;
++		pmic->reg_pdata[id].active_fps_src = -1;
++		pmic->reg_pdata[id].active_fps_pd_slot = -1;
++		pmic->reg_pdata[id].active_fps_pu_slot = -1;
++		pmic->reg_pdata[id].suspend_fps_src = -1;
++		pmic->reg_pdata[id].suspend_fps_pd_slot = -1;
++		pmic->reg_pdata[id].suspend_fps_pu_slot = -1;
++		pmic->reg_pdata[id].power_ok = -1;
++		pmic->reg_pdata[id].ramp_rate_setting = -1;
+ 
+ 		ret = max77620_read_slew_rate(pmic, id);
+ 		if (ret < 0)
+diff --git a/drivers/regulator/mcp16502.c b/drivers/regulator/mcp16502.c
+index 3479ae009b0b..0fc4963bd5b0 100644
+--- a/drivers/regulator/mcp16502.c
++++ b/drivers/regulator/mcp16502.c
+@@ -17,6 +17,7 @@
+ #include <linux/regmap.h>
+ #include <linux/regulator/driver.h>
+ #include <linux/suspend.h>
++#include <linux/gpio/consumer.h>
+ 
+ #define VDD_LOW_SEL 0x0D
+ #define VDD_HIGH_SEL 0x3F
+diff --git a/drivers/regulator/s2mpa01.c b/drivers/regulator/s2mpa01.c
+index 095d25f3d2ea..58a1fe583a6c 100644
+--- a/drivers/regulator/s2mpa01.c
++++ b/drivers/regulator/s2mpa01.c
+@@ -298,13 +298,13 @@ static const struct regulator_desc regulators[] = {
+ 	regulator_desc_ldo(2, STEP_50_MV),
+ 	regulator_desc_ldo(3, STEP_50_MV),
+ 	regulator_desc_ldo(4, STEP_50_MV),
+-	regulator_desc_ldo(5, STEP_50_MV),
++	regulator_desc_ldo(5, STEP_25_MV),
+ 	regulator_desc_ldo(6, STEP_25_MV),
+ 	regulator_desc_ldo(7, STEP_50_MV),
+ 	regulator_desc_ldo(8, STEP_50_MV),
+ 	regulator_desc_ldo(9, STEP_50_MV),
+ 	regulator_desc_ldo(10, STEP_50_MV),
+-	regulator_desc_ldo(11, STEP_25_MV),
++	regulator_desc_ldo(11, STEP_50_MV),
+ 	regulator_desc_ldo(12, STEP_50_MV),
+ 	regulator_desc_ldo(13, STEP_50_MV),
+ 	regulator_desc_ldo(14, STEP_50_MV),
+@@ -315,11 +315,11 @@ static const struct regulator_desc regulators[] = {
+ 	regulator_desc_ldo(19, STEP_50_MV),
+ 	regulator_desc_ldo(20, STEP_50_MV),
+ 	regulator_desc_ldo(21, STEP_50_MV),
+-	regulator_desc_ldo(22, STEP_25_MV),
+-	regulator_desc_ldo(23, STEP_25_MV),
++	regulator_desc_ldo(22, STEP_50_MV),
++	regulator_desc_ldo(23, STEP_50_MV),
+ 	regulator_desc_ldo(24, STEP_50_MV),
+ 	regulator_desc_ldo(25, STEP_50_MV),
+-	regulator_desc_ldo(26, STEP_50_MV),
++	regulator_desc_ldo(26, STEP_25_MV),
+ 	regulator_desc_buck1_4(1),
+ 	regulator_desc_buck1_4(2),
+ 	regulator_desc_buck1_4(3),
+diff --git a/drivers/regulator/s2mps11.c b/drivers/regulator/s2mps11.c
+index ee4a23ab0663..134c62db36c5 100644
+--- a/drivers/regulator/s2mps11.c
++++ b/drivers/regulator/s2mps11.c
+@@ -362,7 +362,7 @@ static const struct regulator_desc s2mps11_regulators[] = {
+ 	regulator_desc_s2mps11_ldo(32, STEP_50_MV),
+ 	regulator_desc_s2mps11_ldo(33, STEP_50_MV),
+ 	regulator_desc_s2mps11_ldo(34, STEP_50_MV),
+-	regulator_desc_s2mps11_ldo(35, STEP_50_MV),
++	regulator_desc_s2mps11_ldo(35, STEP_25_MV),
+ 	regulator_desc_s2mps11_ldo(36, STEP_50_MV),
+ 	regulator_desc_s2mps11_ldo(37, STEP_50_MV),
+ 	regulator_desc_s2mps11_ldo(38, STEP_50_MV),
+@@ -372,8 +372,8 @@ static const struct regulator_desc s2mps11_regulators[] = {
+ 	regulator_desc_s2mps11_buck1_4(4),
+ 	regulator_desc_s2mps11_buck5,
+ 	regulator_desc_s2mps11_buck67810(6, MIN_600_MV, STEP_6_25_MV),
+-	regulator_desc_s2mps11_buck67810(7, MIN_600_MV, STEP_6_25_MV),
+-	regulator_desc_s2mps11_buck67810(8, MIN_600_MV, STEP_6_25_MV),
++	regulator_desc_s2mps11_buck67810(7, MIN_600_MV, STEP_12_5_MV),
++	regulator_desc_s2mps11_buck67810(8, MIN_600_MV, STEP_12_5_MV),
+ 	regulator_desc_s2mps11_buck9,
+ 	regulator_desc_s2mps11_buck67810(10, MIN_750_MV, STEP_12_5_MV),
+ };
+diff --git a/drivers/s390/cio/vfio_ccw_drv.c b/drivers/s390/cio/vfio_ccw_drv.c
+index a10cec0e86eb..0b3b9de45c60 100644
+--- a/drivers/s390/cio/vfio_ccw_drv.c
++++ b/drivers/s390/cio/vfio_ccw_drv.c
+@@ -72,20 +72,24 @@ static void vfio_ccw_sch_io_todo(struct work_struct *work)
+ {
+ 	struct vfio_ccw_private *private;
+ 	struct irb *irb;
++	bool is_final;
+ 
+ 	private = container_of(work, struct vfio_ccw_private, io_work);
+ 	irb = &private->irb;
+ 
++	is_final = !(scsw_actl(&irb->scsw) &
++		     (SCSW_ACTL_DEVACT | SCSW_ACTL_SCHACT));
+ 	if (scsw_is_solicited(&irb->scsw)) {
+ 		cp_update_scsw(&private->cp, &irb->scsw);
+-		cp_free(&private->cp);
++		if (is_final)
++			cp_free(&private->cp);
+ 	}
+ 	memcpy(private->io_region->irb_area, irb, sizeof(*irb));
+ 
+ 	if (private->io_trigger)
+ 		eventfd_signal(private->io_trigger, 1);
+ 
+-	if (private->mdev)
++	if (private->mdev && is_final)
+ 		private->state = VFIO_CCW_STATE_IDLE;
+ }
+ 
+diff --git a/drivers/s390/crypto/vfio_ap_drv.c b/drivers/s390/crypto/vfio_ap_drv.c
+index 31c6c847eaca..e9824c35c34f 100644
+--- a/drivers/s390/crypto/vfio_ap_drv.c
++++ b/drivers/s390/crypto/vfio_ap_drv.c
+@@ -15,7 +15,6 @@
+ #include "vfio_ap_private.h"
+ 
+ #define VFIO_AP_ROOT_NAME "vfio_ap"
+-#define VFIO_AP_DEV_TYPE_NAME "ap_matrix"
+ #define VFIO_AP_DEV_NAME "matrix"
+ 
+ MODULE_AUTHOR("IBM Corporation");
+@@ -24,10 +23,6 @@ MODULE_LICENSE("GPL v2");
+ 
+ static struct ap_driver vfio_ap_drv;
+ 
+-static struct device_type vfio_ap_dev_type = {
+-	.name = VFIO_AP_DEV_TYPE_NAME,
+-};
+-
+ struct ap_matrix_dev *matrix_dev;
+ 
+ /* Only type 10 adapters (CEX4 and later) are supported
+@@ -62,6 +57,22 @@ static void vfio_ap_matrix_dev_release(struct device *dev)
+ 	kfree(matrix_dev);
+ }
+ 
++static int matrix_bus_match(struct device *dev, struct device_driver *drv)
++{
++	return 1;
++}
++
++static struct bus_type matrix_bus = {
++	.name = "matrix",
++	.match = &matrix_bus_match,
++};
++
++static struct device_driver matrix_driver = {
++	.name = "vfio_ap",
++	.bus = &matrix_bus,
++	.suppress_bind_attrs = true,
++};
++
+ static int vfio_ap_matrix_dev_create(void)
+ {
+ 	int ret;
+@@ -71,6 +82,10 @@ static int vfio_ap_matrix_dev_create(void)
+ 	if (IS_ERR(root_device))
+ 		return PTR_ERR(root_device);
+ 
++	ret = bus_register(&matrix_bus);
++	if (ret)
++		goto bus_register_err;
++
+ 	matrix_dev = kzalloc(sizeof(*matrix_dev), GFP_KERNEL);
+ 	if (!matrix_dev) {
+ 		ret = -ENOMEM;
+@@ -87,30 +102,41 @@ static int vfio_ap_matrix_dev_create(void)
+ 	mutex_init(&matrix_dev->lock);
+ 	INIT_LIST_HEAD(&matrix_dev->mdev_list);
+ 
+-	matrix_dev->device.type = &vfio_ap_dev_type;
+ 	dev_set_name(&matrix_dev->device, "%s", VFIO_AP_DEV_NAME);
+ 	matrix_dev->device.parent = root_device;
++	matrix_dev->device.bus = &matrix_bus;
+ 	matrix_dev->device.release = vfio_ap_matrix_dev_release;
+-	matrix_dev->device.driver = &vfio_ap_drv.driver;
++	matrix_dev->vfio_ap_drv = &vfio_ap_drv;
+ 
+ 	ret = device_register(&matrix_dev->device);
+ 	if (ret)
+ 		goto matrix_reg_err;
+ 
++	ret = driver_register(&matrix_driver);
++	if (ret)
++		goto matrix_drv_err;
++
+ 	return 0;
+ 
++matrix_drv_err:
++	device_unregister(&matrix_dev->device);
+ matrix_reg_err:
+ 	put_device(&matrix_dev->device);
+ matrix_alloc_err:
++	bus_unregister(&matrix_bus);
++bus_register_err:
+ 	root_device_unregister(root_device);
+-
+ 	return ret;
+ }
+ 
+ static void vfio_ap_matrix_dev_destroy(void)
+ {
++	struct device *root_device = matrix_dev->device.parent;
++
++	driver_unregister(&matrix_driver);
+ 	device_unregister(&matrix_dev->device);
+-	root_device_unregister(matrix_dev->device.parent);
++	bus_unregister(&matrix_bus);
++	root_device_unregister(root_device);
+ }
+ 
+ static int __init vfio_ap_init(void)
+diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_ap_ops.c
+index 272ef427dcc0..900b9cf20ca5 100644
+--- a/drivers/s390/crypto/vfio_ap_ops.c
++++ b/drivers/s390/crypto/vfio_ap_ops.c
+@@ -198,8 +198,8 @@ static int vfio_ap_verify_queue_reserved(unsigned long *apid,
+ 	qres.apqi = apqi;
+ 	qres.reserved = false;
+ 
+-	ret = driver_for_each_device(matrix_dev->device.driver, NULL, &qres,
+-				     vfio_ap_has_queue);
++	ret = driver_for_each_device(&matrix_dev->vfio_ap_drv->driver, NULL,
++				     &qres, vfio_ap_has_queue);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/s390/crypto/vfio_ap_private.h b/drivers/s390/crypto/vfio_ap_private.h
+index 5675492233c7..76b7f98e47e9 100644
+--- a/drivers/s390/crypto/vfio_ap_private.h
++++ b/drivers/s390/crypto/vfio_ap_private.h
+@@ -40,6 +40,7 @@ struct ap_matrix_dev {
+ 	struct ap_config_info info;
+ 	struct list_head mdev_list;
+ 	struct mutex lock;
++	struct ap_driver  *vfio_ap_drv;
+ };
+ 
+ extern struct ap_matrix_dev *matrix_dev;
+diff --git a/drivers/s390/net/ism_drv.c b/drivers/s390/net/ism_drv.c
+index ed8e58f09054..3e132592c1fe 100644
+--- a/drivers/s390/net/ism_drv.c
++++ b/drivers/s390/net/ism_drv.c
+@@ -141,10 +141,13 @@ static int register_ieq(struct ism_dev *ism)
+ 
+ static int unregister_sba(struct ism_dev *ism)
+ {
++	int ret;
++
+ 	if (!ism->sba)
+ 		return 0;
+ 
+-	if (ism_cmd_simple(ism, ISM_UNREG_SBA))
++	ret = ism_cmd_simple(ism, ISM_UNREG_SBA);
++	if (ret && ret != ISM_ERROR)
+ 		return -EIO;
+ 
+ 	dma_free_coherent(&ism->pdev->dev, PAGE_SIZE,
+@@ -158,10 +161,13 @@ static int unregister_sba(struct ism_dev *ism)
+ 
+ static int unregister_ieq(struct ism_dev *ism)
+ {
++	int ret;
++
+ 	if (!ism->ieq)
+ 		return 0;
+ 
+-	if (ism_cmd_simple(ism, ISM_UNREG_IEQ))
++	ret = ism_cmd_simple(ism, ISM_UNREG_IEQ);
++	if (ret && ret != ISM_ERROR)
+ 		return -EIO;
+ 
+ 	dma_free_coherent(&ism->pdev->dev, PAGE_SIZE,
+@@ -287,7 +293,7 @@ static int ism_unregister_dmb(struct smcd_dev *smcd, struct smcd_dmb *dmb)
+ 	cmd.request.dmb_tok = dmb->dmb_tok;
+ 
+ 	ret = ism_cmd(ism, &cmd);
+-	if (ret)
++	if (ret && ret != ISM_ERROR)
+ 		goto out;
+ 
+ 	ism_free_dmb(ism, dmb);
+diff --git a/drivers/s390/scsi/zfcp_erp.c b/drivers/s390/scsi/zfcp_erp.c
+index 744a64680d5b..e8fc28dba8df 100644
+--- a/drivers/s390/scsi/zfcp_erp.c
++++ b/drivers/s390/scsi/zfcp_erp.c
+@@ -624,6 +624,20 @@ static void zfcp_erp_strategy_memwait(struct zfcp_erp_action *erp_action)
+ 	add_timer(&erp_action->timer);
+ }
+ 
++void zfcp_erp_port_forced_reopen_all(struct zfcp_adapter *adapter,
++				     int clear, char *dbftag)
++{
++	unsigned long flags;
++	struct zfcp_port *port;
++
++	write_lock_irqsave(&adapter->erp_lock, flags);
++	read_lock(&adapter->port_list_lock);
++	list_for_each_entry(port, &adapter->port_list, list)
++		_zfcp_erp_port_forced_reopen(port, clear, dbftag);
++	read_unlock(&adapter->port_list_lock);
++	write_unlock_irqrestore(&adapter->erp_lock, flags);
++}
++
+ static void _zfcp_erp_port_reopen_all(struct zfcp_adapter *adapter,
+ 				      int clear, char *dbftag)
+ {
+@@ -1341,6 +1355,9 @@ static void zfcp_erp_try_rport_unblock(struct zfcp_port *port)
+ 		struct zfcp_scsi_dev *zsdev = sdev_to_zfcp(sdev);
+ 		int lun_status;
+ 
++		if (sdev->sdev_state == SDEV_DEL ||
++		    sdev->sdev_state == SDEV_CANCEL)
++			continue;
+ 		if (zsdev->port != port)
+ 			continue;
+ 		/* LUN under port of interest */
+diff --git a/drivers/s390/scsi/zfcp_ext.h b/drivers/s390/scsi/zfcp_ext.h
+index 3fce47b0b21b..c6acca521ffe 100644
+--- a/drivers/s390/scsi/zfcp_ext.h
++++ b/drivers/s390/scsi/zfcp_ext.h
+@@ -70,6 +70,8 @@ extern void zfcp_erp_port_reopen(struct zfcp_port *port, int clear,
+ 				 char *dbftag);
+ extern void zfcp_erp_port_shutdown(struct zfcp_port *, int, char *);
+ extern void zfcp_erp_port_forced_reopen(struct zfcp_port *, int, char *);
++extern void zfcp_erp_port_forced_reopen_all(struct zfcp_adapter *adapter,
++					    int clear, char *dbftag);
+ extern void zfcp_erp_set_lun_status(struct scsi_device *, u32);
+ extern void zfcp_erp_clear_lun_status(struct scsi_device *, u32);
+ extern void zfcp_erp_lun_reopen(struct scsi_device *, int, char *);
+diff --git a/drivers/s390/scsi/zfcp_scsi.c b/drivers/s390/scsi/zfcp_scsi.c
+index f4f6a07c5222..221d0dfb8493 100644
+--- a/drivers/s390/scsi/zfcp_scsi.c
++++ b/drivers/s390/scsi/zfcp_scsi.c
+@@ -368,6 +368,10 @@ static int zfcp_scsi_eh_host_reset_handler(struct scsi_cmnd *scpnt)
+ 	struct zfcp_adapter *adapter = zfcp_sdev->port->adapter;
+ 	int ret = SUCCESS, fc_ret;
+ 
++	if (!(adapter->connection_features & FSF_FEATURE_NPIV_MODE)) {
++		zfcp_erp_port_forced_reopen_all(adapter, 0, "schrh_p");
++		zfcp_erp_wait(adapter);
++	}
+ 	zfcp_erp_adapter_reopen(adapter, 0, "schrh_1");
+ 	zfcp_erp_wait(adapter);
+ 	fc_ret = fc_block_scsi_eh(scpnt);
+diff --git a/drivers/s390/virtio/virtio_ccw.c b/drivers/s390/virtio/virtio_ccw.c
+index ae1d56da671d..1a738fe9f26b 100644
+--- a/drivers/s390/virtio/virtio_ccw.c
++++ b/drivers/s390/virtio/virtio_ccw.c
+@@ -272,6 +272,8 @@ static void virtio_ccw_drop_indicators(struct virtio_ccw_device *vcdev)
+ {
+ 	struct virtio_ccw_vq_info *info;
+ 
++	if (!vcdev->airq_info)
++		return;
+ 	list_for_each_entry(info, &vcdev->virtqueues, node)
+ 		drop_airq_indicator(info->vq, vcdev->airq_info);
+ }
+@@ -413,7 +415,7 @@ static int virtio_ccw_read_vq_conf(struct virtio_ccw_device *vcdev,
+ 	ret = ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_READ_VQ_CONF);
+ 	if (ret)
+ 		return ret;
+-	return vcdev->config_block->num;
++	return vcdev->config_block->num ?: -ENOENT;
+ }
+ 
+ static void virtio_ccw_del_vq(struct virtqueue *vq, struct ccw1 *ccw)
+diff --git a/drivers/scsi/aacraid/commsup.c b/drivers/scsi/aacraid/commsup.c
+index d5a6aa9676c8..a3adc954f40f 100644
+--- a/drivers/scsi/aacraid/commsup.c
++++ b/drivers/scsi/aacraid/commsup.c
+@@ -1303,8 +1303,9 @@ static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
+ 				  ADD : DELETE;
+ 				break;
+ 			}
+-			case AifBuManagerEvent:
+-				aac_handle_aif_bu(dev, aifcmd);
++			break;
++		case AifBuManagerEvent:
++			aac_handle_aif_bu(dev, aifcmd);
+ 			break;
+ 		}
+ 
+diff --git a/drivers/scsi/aacraid/linit.c b/drivers/scsi/aacraid/linit.c
+index 7e56a11836c1..ccefface7e31 100644
+--- a/drivers/scsi/aacraid/linit.c
++++ b/drivers/scsi/aacraid/linit.c
+@@ -413,13 +413,16 @@ static int aac_slave_configure(struct scsi_device *sdev)
+ 	if (chn < AAC_MAX_BUSES && tid < AAC_MAX_TARGETS && aac->sa_firmware) {
+ 		devtype = aac->hba_map[chn][tid].devtype;
+ 
+-		if (devtype == AAC_DEVTYPE_NATIVE_RAW)
++		if (devtype == AAC_DEVTYPE_NATIVE_RAW) {
+ 			depth = aac->hba_map[chn][tid].qd_limit;
+-		else if (devtype == AAC_DEVTYPE_ARC_RAW)
++			set_timeout = 1;
++			goto common_config;
++		}
++		if (devtype == AAC_DEVTYPE_ARC_RAW) {
+ 			set_qd_dev_type = true;
+-
+-		set_timeout = 1;
+-		goto common_config;
++			set_timeout = 1;
++			goto common_config;
++		}
+ 	}
+ 
+ 	if (aac->jbod && (sdev->type == TYPE_DISK))
+diff --git a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
+index 2e4e7159ebf9..a75e74ad1698 100644
+--- a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
++++ b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
+@@ -1438,7 +1438,7 @@ bind_err:
+ static struct bnx2fc_interface *
+ bnx2fc_interface_create(struct bnx2fc_hba *hba,
+ 			struct net_device *netdev,
+-			enum fip_state fip_mode)
++			enum fip_mode fip_mode)
+ {
+ 	struct fcoe_ctlr_device *ctlr_dev;
+ 	struct bnx2fc_interface *interface;
+diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c
+index cd19be3f3405..8ba8862d3292 100644
+--- a/drivers/scsi/fcoe/fcoe.c
++++ b/drivers/scsi/fcoe/fcoe.c
+@@ -389,7 +389,7 @@ static int fcoe_interface_setup(struct fcoe_interface *fcoe,
+  * Returns: pointer to a struct fcoe_interface or NULL on error
+  */
+ static struct fcoe_interface *fcoe_interface_create(struct net_device *netdev,
+-						    enum fip_state fip_mode)
++						    enum fip_mode fip_mode)
+ {
+ 	struct fcoe_ctlr_device *ctlr_dev;
+ 	struct fcoe_ctlr *ctlr;
+diff --git a/drivers/scsi/fcoe/fcoe_ctlr.c b/drivers/scsi/fcoe/fcoe_ctlr.c
+index 54da3166da8d..7dc4ffa24430 100644
+--- a/drivers/scsi/fcoe/fcoe_ctlr.c
++++ b/drivers/scsi/fcoe/fcoe_ctlr.c
+@@ -147,7 +147,7 @@ static void fcoe_ctlr_map_dest(struct fcoe_ctlr *fip)
+  * fcoe_ctlr_init() - Initialize the FCoE Controller instance
+  * @fip: The FCoE controller to initialize
+  */
+-void fcoe_ctlr_init(struct fcoe_ctlr *fip, enum fip_state mode)
++void fcoe_ctlr_init(struct fcoe_ctlr *fip, enum fip_mode mode)
+ {
+ 	fcoe_ctlr_set_state(fip, FIP_ST_LINK_WAIT);
+ 	fip->mode = mode;
+@@ -454,7 +454,10 @@ void fcoe_ctlr_link_up(struct fcoe_ctlr *fip)
+ 		mutex_unlock(&fip->ctlr_mutex);
+ 		fc_linkup(fip->lp);
+ 	} else if (fip->state == FIP_ST_LINK_WAIT) {
+-		fcoe_ctlr_set_state(fip, fip->mode);
++		if (fip->mode == FIP_MODE_NON_FIP)
++			fcoe_ctlr_set_state(fip, FIP_ST_NON_FIP);
++		else
++			fcoe_ctlr_set_state(fip, FIP_ST_AUTO);
+ 		switch (fip->mode) {
+ 		default:
+ 			LIBFCOE_FIP_DBG(fip, "invalid mode %d\n", fip->mode);
+diff --git a/drivers/scsi/fcoe/fcoe_transport.c b/drivers/scsi/fcoe/fcoe_transport.c
+index f4909cd206d3..f15d5e1d56b1 100644
+--- a/drivers/scsi/fcoe/fcoe_transport.c
++++ b/drivers/scsi/fcoe/fcoe_transport.c
+@@ -873,7 +873,7 @@ static int fcoe_transport_create(const char *buffer,
+ 	int rc = -ENODEV;
+ 	struct net_device *netdev = NULL;
+ 	struct fcoe_transport *ft = NULL;
+-	enum fip_state fip_mode = (enum fip_state)(long)kp->arg;
++	enum fip_mode fip_mode = (enum fip_mode)kp->arg;
+ 
+ 	mutex_lock(&ft_mutex);
+ 
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c
+index bc17fa0d8375..62d158574281 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_main.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_main.c
+@@ -10,6 +10,7 @@
+  */
+ 
+ #include "hisi_sas.h"
++#include "../libsas/sas_internal.h"
+ #define DRV_NAME "hisi_sas"
+ 
+ #define DEV_IS_GONE(dev) \
+@@ -872,7 +873,8 @@ static void hisi_sas_do_release_task(struct hisi_hba *hisi_hba, struct sas_task
+ 		spin_lock_irqsave(&task->task_state_lock, flags);
+ 		task->task_state_flags &=
+ 			~(SAS_TASK_STATE_PENDING | SAS_TASK_AT_INITIATOR);
+-		task->task_state_flags |= SAS_TASK_STATE_DONE;
++		if (!slot->is_internal && task->task_proto != SAS_PROTOCOL_SMP)
++			task->task_state_flags |= SAS_TASK_STATE_DONE;
+ 		spin_unlock_irqrestore(&task->task_state_lock, flags);
+ 	}
+ 
+@@ -1972,9 +1974,18 @@ static int hisi_sas_write_gpio(struct sas_ha_struct *sha, u8 reg_type,
+ 
+ static void hisi_sas_phy_disconnected(struct hisi_sas_phy *phy)
+ {
++	struct asd_sas_phy *sas_phy = &phy->sas_phy;
++	struct sas_phy *sphy = sas_phy->phy;
++	struct sas_phy_data *d = sphy->hostdata;
++
+ 	phy->phy_attached = 0;
+ 	phy->phy_type = 0;
+ 	phy->port = NULL;
++
++	if (d->enable)
++		sphy->negotiated_linkrate = SAS_LINK_RATE_UNKNOWN;
++	else
++		sphy->negotiated_linkrate = SAS_PHY_DISABLED;
+ }
+ 
+ void hisi_sas_phy_down(struct hisi_hba *hisi_hba, int phy_no, int rdy)
+diff --git a/drivers/scsi/ibmvscsi/ibmvscsi.c b/drivers/scsi/ibmvscsi/ibmvscsi.c
+index 1135e74646e2..8cec5230fe31 100644
+--- a/drivers/scsi/ibmvscsi/ibmvscsi.c
++++ b/drivers/scsi/ibmvscsi/ibmvscsi.c
+@@ -96,6 +96,7 @@ static int client_reserve = 1;
+ static char partition_name[96] = "UNKNOWN";
+ static unsigned int partition_number = -1;
+ static LIST_HEAD(ibmvscsi_head);
++static DEFINE_SPINLOCK(ibmvscsi_driver_lock);
+ 
+ static struct scsi_transport_template *ibmvscsi_transport_template;
+ 
+@@ -2270,7 +2271,9 @@ static int ibmvscsi_probe(struct vio_dev *vdev, const struct vio_device_id *id)
+ 	}
+ 
+ 	dev_set_drvdata(&vdev->dev, hostdata);
++	spin_lock(&ibmvscsi_driver_lock);
+ 	list_add_tail(&hostdata->host_list, &ibmvscsi_head);
++	spin_unlock(&ibmvscsi_driver_lock);
+ 	return 0;
+ 
+       add_srp_port_failed:
+@@ -2292,15 +2295,27 @@ static int ibmvscsi_probe(struct vio_dev *vdev, const struct vio_device_id *id)
+ static int ibmvscsi_remove(struct vio_dev *vdev)
+ {
+ 	struct ibmvscsi_host_data *hostdata = dev_get_drvdata(&vdev->dev);
+-	list_del(&hostdata->host_list);
+-	unmap_persist_bufs(hostdata);
++	unsigned long flags;
++
++	srp_remove_host(hostdata->host);
++	scsi_remove_host(hostdata->host);
++
++	purge_requests(hostdata, DID_ERROR);
++
++	spin_lock_irqsave(hostdata->host->host_lock, flags);
+ 	release_event_pool(&hostdata->pool, hostdata);
++	spin_unlock_irqrestore(hostdata->host->host_lock, flags);
++
+ 	ibmvscsi_release_crq_queue(&hostdata->queue, hostdata,
+ 					max_events);
+ 
+ 	kthread_stop(hostdata->work_thread);
+-	srp_remove_host(hostdata->host);
+-	scsi_remove_host(hostdata->host);
++	unmap_persist_bufs(hostdata);
++
++	spin_lock(&ibmvscsi_driver_lock);
++	list_del(&hostdata->host_list);
++	spin_unlock(&ibmvscsi_driver_lock);
++
+ 	scsi_host_put(hostdata->host);
+ 
+ 	return 0;
+diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
+index fcbff83c0097..c9811d1aa007 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_base.c
++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
+@@ -4188,6 +4188,7 @@ int megasas_alloc_cmds(struct megasas_instance *instance)
+ 	if (megasas_create_frame_pool(instance)) {
+ 		dev_printk(KERN_DEBUG, &instance->pdev->dev, "Error creating frame DMA pool\n");
+ 		megasas_free_cmds(instance);
++		return -ENOMEM;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
+index 9bbc19fc190b..9f9431a4cc0e 100644
+--- a/drivers/scsi/qedf/qedf_main.c
++++ b/drivers/scsi/qedf/qedf_main.c
+@@ -1418,7 +1418,7 @@ static struct libfc_function_template qedf_lport_template = {
+ 
+ static void qedf_fcoe_ctlr_setup(struct qedf_ctx *qedf)
+ {
+-	fcoe_ctlr_init(&qedf->ctlr, FIP_ST_AUTO);
++	fcoe_ctlr_init(&qedf->ctlr, FIP_MODE_AUTO);
+ 
+ 	qedf->ctlr.send = qedf_fip_send;
+ 	qedf->ctlr.get_src_addr = qedf_get_src_mac;
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 8d1acc802a67..7f8946844a5e 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -644,11 +644,14 @@ static void qla24xx_handle_gnl_done_event(scsi_qla_host_t *vha,
+ 				break;
+ 			case DSC_LS_PORT_UNAVAIL:
+ 			default:
+-				if (fcport->loop_id != FC_NO_LOOP_ID)
+-					qla2x00_clear_loop_id(fcport);
+-
+-				fcport->loop_id = loop_id;
+-				fcport->fw_login_state = DSC_LS_PORT_UNAVAIL;
++				if (fcport->loop_id == FC_NO_LOOP_ID) {
++					qla2x00_find_new_loop_id(vha, fcport);
++					fcport->fw_login_state =
++					    DSC_LS_PORT_UNAVAIL;
++				}
++				ql_dbg(ql_dbg_disc, vha, 0x20e5,
++				    "%s %d %8phC\n", __func__, __LINE__,
++				    fcport->port_name);
+ 				qla24xx_fcport_handle_login(vha, fcport);
+ 				break;
+ 			}
+@@ -1471,29 +1474,6 @@ int qla24xx_fcport_handle_login(struct scsi_qla_host *vha, fc_port_t *fcport)
+ 	return 0;
+ }
+ 
+-static
+-void qla24xx_handle_rscn_event(fc_port_t *fcport, struct event_arg *ea)
+-{
+-	fcport->rscn_gen++;
+-
+-	ql_dbg(ql_dbg_disc, fcport->vha, 0x210c,
+-	    "%s %8phC DS %d LS %d\n",
+-	    __func__, fcport->port_name, fcport->disc_state,
+-	    fcport->fw_login_state);
+-
+-	if (fcport->flags & FCF_ASYNC_SENT)
+-		return;
+-
+-	switch (fcport->disc_state) {
+-	case DSC_DELETED:
+-	case DSC_LOGIN_COMPLETE:
+-		qla24xx_post_gpnid_work(fcport->vha, &ea->id);
+-		break;
+-	default:
+-		break;
+-	}
+-}
+-
+ int qla24xx_post_newsess_work(struct scsi_qla_host *vha, port_id_t *id,
+     u8 *port_name, u8 *node_name, void *pla, u8 fc4_type)
+ {
+@@ -1560,8 +1540,6 @@ static void qla_handle_els_plogi_done(scsi_qla_host_t *vha,
+ 
+ void qla2x00_fcport_event_handler(scsi_qla_host_t *vha, struct event_arg *ea)
+ {
+-	fc_port_t *f, *tf;
+-	uint32_t id = 0, mask, rid;
+ 	fc_port_t *fcport;
+ 
+ 	switch (ea->event) {
+@@ -1574,10 +1552,6 @@ void qla2x00_fcport_event_handler(scsi_qla_host_t *vha, struct event_arg *ea)
+ 	case FCME_RSCN:
+ 		if (test_bit(UNLOADING, &vha->dpc_flags))
+ 			return;
+-		switch (ea->id.b.rsvd_1) {
+-		case RSCN_PORT_ADDR:
+-#define BIGSCAN 1
+-#if defined BIGSCAN & BIGSCAN > 0
+ 		{
+ 			unsigned long flags;
+ 			fcport = qla2x00_find_fcport_by_nportid
+@@ -1596,59 +1570,6 @@ void qla2x00_fcport_event_handler(scsi_qla_host_t *vha, struct event_arg *ea)
+ 			}
+ 			spin_unlock_irqrestore(&vha->work_lock, flags);
+ 		}
+-#else
+-		{
+-			int rc;
+-			fcport = qla2x00_find_fcport_by_nportid(vha, &ea->id, 1);
+-			if (!fcport) {
+-				/* cable moved */
+-				 rc = qla24xx_post_gpnid_work(vha, &ea->id);
+-				 if (rc) {
+-					 ql_log(ql_log_warn, vha, 0xd044,
+-					     "RSCN GPNID work failed %06x\n",
+-					     ea->id.b24);
+-				 }
+-			} else {
+-				ea->fcport = fcport;
+-				fcport->scan_needed = 1;
+-				qla24xx_handle_rscn_event(fcport, ea);
+-			}
+-		}
+-#endif
+-			break;
+-		case RSCN_AREA_ADDR:
+-		case RSCN_DOM_ADDR:
+-			if (ea->id.b.rsvd_1 == RSCN_AREA_ADDR) {
+-				mask = 0xffff00;
+-				ql_dbg(ql_dbg_async, vha, 0x5044,
+-				    "RSCN: Area 0x%06x was affected\n",
+-				    ea->id.b24);
+-			} else {
+-				mask = 0xff0000;
+-				ql_dbg(ql_dbg_async, vha, 0x507a,
+-				    "RSCN: Domain 0x%06x was affected\n",
+-				    ea->id.b24);
+-			}
+-
+-			rid = ea->id.b24 & mask;
+-			list_for_each_entry_safe(f, tf, &vha->vp_fcports,
+-			    list) {
+-				id = f->d_id.b24 & mask;
+-				if (rid == id) {
+-					ea->fcport = f;
+-					qla24xx_handle_rscn_event(f, ea);
+-				}
+-			}
+-			break;
+-		case RSCN_FAB_ADDR:
+-		default:
+-			ql_log(ql_log_warn, vha, 0xd045,
+-			    "RSCN: Fabric was affected. Addr format %d\n",
+-			    ea->id.b.rsvd_1);
+-			qla2x00_mark_all_devices_lost(vha, 1);
+-			set_bit(LOOP_RESYNC_NEEDED, &vha->dpc_flags);
+-			set_bit(LOCAL_LOOP_UPDATE, &vha->dpc_flags);
+-		}
+ 		break;
+ 	case FCME_GNL_DONE:
+ 		qla24xx_handle_gnl_done_event(vha, ea);
+@@ -1709,11 +1630,7 @@ void qla_rscn_replay(fc_port_t *fcport)
+                ea.event = FCME_RSCN;
+                ea.id = fcport->d_id;
+                ea.id.b.rsvd_1 = RSCN_PORT_ADDR;
+-#if defined BIGSCAN & BIGSCAN > 0
+                qla2x00_fcport_event_handler(fcport->vha, &ea);
+-#else
+-               qla24xx_post_gpnid_work(fcport->vha, &ea.id);
+-#endif
+ 	}
+ }
+ 
+@@ -5051,6 +4968,13 @@ qla2x00_configure_local_loop(scsi_qla_host_t *vha)
+ 		    (area != vha->d_id.b.area || domain != vha->d_id.b.domain))
+ 			continue;
+ 
++		/* Bypass if not same domain and area of adapter. */
++		if (area && domain && ((area != vha->d_id.b.area) ||
++		    (domain != vha->d_id.b.domain)) &&
++		    (ha->current_topology == ISP_CFG_NL))
++			continue;
++
++
+ 		/* Bypass invalid local loop ID. */
+ 		if (loop_id > LAST_LOCAL_LOOP_ID)
+ 			continue;
+diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
+index 8507c43b918c..1a20e5d8f057 100644
+--- a/drivers/scsi/qla2xxx/qla_isr.c
++++ b/drivers/scsi/qla2xxx/qla_isr.c
+@@ -3410,7 +3410,7 @@ qla24xx_enable_msix(struct qla_hw_data *ha, struct rsp_que *rsp)
+ 		min_vecs++;
+ 	}
+ 
+-	if (USER_CTRL_IRQ(ha)) {
++	if (USER_CTRL_IRQ(ha) || !ha->mqiobase) {
+ 		/* user wants to control IRQ setting for target mode */
+ 		ret = pci_alloc_irq_vectors(ha->pdev, min_vecs,
+ 		    ha->msix_count, PCI_IRQ_MSIX);
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index c6ef83d0d99b..7e35ce2162d0 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -6936,7 +6936,7 @@ static int qla2xxx_map_queues(struct Scsi_Host *shost)
+ 	scsi_qla_host_t *vha = (scsi_qla_host_t *)shost->hostdata;
+ 	struct blk_mq_queue_map *qmap = &shost->tag_set.map[0];
+ 
+-	if (USER_CTRL_IRQ(vha->hw))
++	if (USER_CTRL_IRQ(vha->hw) || !vha->hw->mqiobase)
+ 		rc = blk_mq_map_queues(qmap);
+ 	else
+ 		rc = blk_mq_pci_map_queues(qmap, vha->hw->pdev, vha->irq_offset);
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index a6828391d6b3..5a6e8e12701a 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -2598,8 +2598,10 @@ void scsi_device_resume(struct scsi_device *sdev)
+ 	 * device deleted during suspend)
+ 	 */
+ 	mutex_lock(&sdev->state_mutex);
+-	sdev->quiesced_by = NULL;
+-	blk_clear_pm_only(sdev->request_queue);
++	if (sdev->quiesced_by) {
++		sdev->quiesced_by = NULL;
++		blk_clear_pm_only(sdev->request_queue);
++	}
+ 	if (sdev->sdev_state == SDEV_QUIESCE)
+ 		scsi_device_set_state(sdev, SDEV_RUNNING);
+ 	mutex_unlock(&sdev->state_mutex);
+diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
+index dd0d516f65e2..53380e07b40e 100644
+--- a/drivers/scsi/scsi_scan.c
++++ b/drivers/scsi/scsi_scan.c
+@@ -220,7 +220,7 @@ static struct scsi_device *scsi_alloc_sdev(struct scsi_target *starget,
+ 	struct Scsi_Host *shost = dev_to_shost(starget->dev.parent);
+ 
+ 	sdev = kzalloc(sizeof(*sdev) + shost->transportt->device_size,
+-		       GFP_ATOMIC);
++		       GFP_KERNEL);
+ 	if (!sdev)
+ 		goto out;
+ 
+@@ -788,7 +788,7 @@ static int scsi_add_lun(struct scsi_device *sdev, unsigned char *inq_result,
+ 	 */
+ 	sdev->inquiry = kmemdup(inq_result,
+ 				max_t(size_t, sdev->inquiry_len, 36),
+-				GFP_ATOMIC);
++				GFP_KERNEL);
+ 	if (sdev->inquiry == NULL)
+ 		return SCSI_SCAN_NO_RESPONSE;
+ 
+@@ -1079,7 +1079,7 @@ static int scsi_probe_and_add_lun(struct scsi_target *starget,
+ 	if (!sdev)
+ 		goto out;
+ 
+-	result = kmalloc(result_len, GFP_ATOMIC |
++	result = kmalloc(result_len, GFP_KERNEL |
+ 			((shost->unchecked_isa_dma) ? __GFP_DMA : 0));
+ 	if (!result)
+ 		goto out_free_sdev;
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 5464d467e23e..d64553c0a051 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -1398,11 +1398,6 @@ static void sd_release(struct gendisk *disk, fmode_t mode)
+ 			scsi_set_medium_removal(sdev, SCSI_REMOVAL_ALLOW);
+ 	}
+ 
+-	/*
+-	 * XXX and what if there are packets in flight and this close()
+-	 * XXX is followed by a "rmmod sd_mod"?
+-	 */
+-
+ 	scsi_disk_put(sdkp);
+ }
+ 
+@@ -3047,6 +3042,58 @@ static void sd_read_security(struct scsi_disk *sdkp, unsigned char *buffer)
+ 		sdkp->security = 1;
+ }
+ 
++/*
++ * Determine the device's preferred I/O size for reads and writes
++ * unless the reported value is unreasonably small, large, not a
++ * multiple of the physical block size, or simply garbage.
++ */
++static bool sd_validate_opt_xfer_size(struct scsi_disk *sdkp,
++				      unsigned int dev_max)
++{
++	struct scsi_device *sdp = sdkp->device;
++	unsigned int opt_xfer_bytes =
++		logical_to_bytes(sdp, sdkp->opt_xfer_blocks);
++
++	if (sdkp->opt_xfer_blocks == 0)
++		return false;
++
++	if (sdkp->opt_xfer_blocks > dev_max) {
++		sd_first_printk(KERN_WARNING, sdkp,
++				"Optimal transfer size %u logical blocks " \
++				"> dev_max (%u logical blocks)\n",
++				sdkp->opt_xfer_blocks, dev_max);
++		return false;
++	}
++
++	if (sdkp->opt_xfer_blocks > SD_DEF_XFER_BLOCKS) {
++		sd_first_printk(KERN_WARNING, sdkp,
++				"Optimal transfer size %u logical blocks " \
++				"> sd driver limit (%u logical blocks)\n",
++				sdkp->opt_xfer_blocks, SD_DEF_XFER_BLOCKS);
++		return false;
++	}
++
++	if (opt_xfer_bytes < PAGE_SIZE) {
++		sd_first_printk(KERN_WARNING, sdkp,
++				"Optimal transfer size %u bytes < " \
++				"PAGE_SIZE (%u bytes)\n",
++				opt_xfer_bytes, (unsigned int)PAGE_SIZE);
++		return false;
++	}
++
++	if (opt_xfer_bytes & (sdkp->physical_block_size - 1)) {
++		sd_first_printk(KERN_WARNING, sdkp,
++				"Optimal transfer size %u bytes not a " \
++				"multiple of physical block size (%u bytes)\n",
++				opt_xfer_bytes, sdkp->physical_block_size);
++		return false;
++	}
++
++	sd_first_printk(KERN_INFO, sdkp, "Optimal transfer size %u bytes\n",
++			opt_xfer_bytes);
++	return true;
++}
++
+ /**
+  *	sd_revalidate_disk - called the first time a new disk is seen,
+  *	performs disk spin up, read_capacity, etc.
+@@ -3125,15 +3172,7 @@ static int sd_revalidate_disk(struct gendisk *disk)
+ 	dev_max = min_not_zero(dev_max, sdkp->max_xfer_blocks);
+ 	q->limits.max_dev_sectors = logical_to_sectors(sdp, dev_max);
+ 
+-	/*
+-	 * Determine the device's preferred I/O size for reads and writes
+-	 * unless the reported value is unreasonably small, large, or
+-	 * garbage.
+-	 */
+-	if (sdkp->opt_xfer_blocks &&
+-	    sdkp->opt_xfer_blocks <= dev_max &&
+-	    sdkp->opt_xfer_blocks <= SD_DEF_XFER_BLOCKS &&
+-	    logical_to_bytes(sdp, sdkp->opt_xfer_blocks) >= PAGE_SIZE) {
++	if (sd_validate_opt_xfer_size(sdkp, dev_max)) {
+ 		q->limits.io_opt = logical_to_bytes(sdp, sdkp->opt_xfer_blocks);
+ 		rw_max = logical_to_sectors(sdp, sdkp->opt_xfer_blocks);
+ 	} else
+@@ -3447,9 +3486,21 @@ static void scsi_disk_release(struct device *dev)
+ {
+ 	struct scsi_disk *sdkp = to_scsi_disk(dev);
+ 	struct gendisk *disk = sdkp->disk;
+-	
++	struct request_queue *q = disk->queue;
++
+ 	ida_free(&sd_index_ida, sdkp->index);
+ 
++	/*
++	 * Wait until all requests that are in progress have completed.
++	 * This is necessary to avoid that e.g. scsi_end_request() crashes
++	 * due to clearing the disk->private_data pointer. Wait from inside
++	 * scsi_disk_release() instead of from sd_release() to avoid that
++	 * freezing and unfreezing the request queue affects user space I/O
++	 * in case multiple processes open a /dev/sd... node concurrently.
++	 */
++	blk_mq_freeze_queue(q);
++	blk_mq_unfreeze_queue(q);
++
+ 	disk->private_data = NULL;
+ 	put_disk(disk);
+ 	put_device(&sdkp->device->sdev_gendev);
+diff --git a/drivers/scsi/virtio_scsi.c b/drivers/scsi/virtio_scsi.c
+index 772b976e4ee4..464cba521fb6 100644
+--- a/drivers/scsi/virtio_scsi.c
++++ b/drivers/scsi/virtio_scsi.c
+@@ -594,7 +594,6 @@ static int virtscsi_device_reset(struct scsi_cmnd *sc)
+ 		return FAILED;
+ 
+ 	memset(cmd, 0, sizeof(*cmd));
+-	cmd->sc = sc;
+ 	cmd->req.tmf = (struct virtio_scsi_ctrl_tmf_req){
+ 		.type = VIRTIO_SCSI_T_TMF,
+ 		.subtype = cpu_to_virtio32(vscsi->vdev,
+@@ -653,7 +652,6 @@ static int virtscsi_abort(struct scsi_cmnd *sc)
+ 		return FAILED;
+ 
+ 	memset(cmd, 0, sizeof(*cmd));
+-	cmd->sc = sc;
+ 	cmd->req.tmf = (struct virtio_scsi_ctrl_tmf_req){
+ 		.type = VIRTIO_SCSI_T_TMF,
+ 		.subtype = VIRTIO_SCSI_T_TMF_ABORT_TASK,
+diff --git a/drivers/soc/qcom/qcom_gsbi.c b/drivers/soc/qcom/qcom_gsbi.c
+index 09c669e70d63..038abc377fdb 100644
+--- a/drivers/soc/qcom/qcom_gsbi.c
++++ b/drivers/soc/qcom/qcom_gsbi.c
+@@ -138,7 +138,7 @@ static int gsbi_probe(struct platform_device *pdev)
+ 	struct resource *res;
+ 	void __iomem *base;
+ 	struct gsbi_info *gsbi;
+-	int i;
++	int i, ret;
+ 	u32 mask, gsbi_num;
+ 	const struct crci_config *config = NULL;
+ 
+@@ -221,7 +221,10 @@ static int gsbi_probe(struct platform_device *pdev)
+ 
+ 	platform_set_drvdata(pdev, gsbi);
+ 
+-	return of_platform_populate(node, NULL, NULL, &pdev->dev);
++	ret = of_platform_populate(node, NULL, NULL, &pdev->dev);
++	if (ret)
++		clk_disable_unprepare(gsbi->hclk);
++	return ret;
+ }
+ 
+ static int gsbi_remove(struct platform_device *pdev)
+diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
+index c7beb6841289..ab8f731a3426 100644
+--- a/drivers/soc/qcom/rpmh.c
++++ b/drivers/soc/qcom/rpmh.c
+@@ -80,6 +80,7 @@ void rpmh_tx_done(const struct tcs_request *msg, int r)
+ 	struct rpmh_request *rpm_msg = container_of(msg, struct rpmh_request,
+ 						    msg);
+ 	struct completion *compl = rpm_msg->completion;
++	bool free = rpm_msg->needs_free;
+ 
+ 	rpm_msg->err = r;
+ 
+@@ -94,7 +95,7 @@ void rpmh_tx_done(const struct tcs_request *msg, int r)
+ 	complete(compl);
+ 
+ exit:
+-	if (rpm_msg->needs_free)
++	if (free)
+ 		kfree(rpm_msg);
+ }
+ 
+@@ -348,11 +349,12 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
+ {
+ 	struct batch_cache_req *req;
+ 	struct rpmh_request *rpm_msgs;
+-	DECLARE_COMPLETION_ONSTACK(compl);
++	struct completion *compls;
+ 	struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
+ 	unsigned long time_left;
+ 	int count = 0;
+-	int ret, i, j;
++	int ret, i;
++	void *ptr;
+ 
+ 	if (!cmd || !n)
+ 		return -EINVAL;
+@@ -362,10 +364,15 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
+ 	if (!count)
+ 		return -EINVAL;
+ 
+-	req = kzalloc(sizeof(*req) + count * sizeof(req->rpm_msgs[0]),
++	ptr = kzalloc(sizeof(*req) +
++		      count * (sizeof(req->rpm_msgs[0]) + sizeof(*compls)),
+ 		      GFP_ATOMIC);
+-	if (!req)
++	if (!ptr)
+ 		return -ENOMEM;
++
++	req = ptr;
++	compls = ptr + sizeof(*req) + count * sizeof(*rpm_msgs);
++
+ 	req->count = count;
+ 	rpm_msgs = req->rpm_msgs;
+ 
+@@ -380,25 +387,26 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
+ 	}
+ 
+ 	for (i = 0; i < count; i++) {
+-		rpm_msgs[i].completion = &compl;
++		struct completion *compl = &compls[i];
++
++		init_completion(compl);
++		rpm_msgs[i].completion = compl;
+ 		ret = rpmh_rsc_send_data(ctrlr_to_drv(ctrlr), &rpm_msgs[i].msg);
+ 		if (ret) {
+ 			pr_err("Error(%d) sending RPMH message addr=%#x\n",
+ 			       ret, rpm_msgs[i].msg.cmds[0].addr);
+-			for (j = i; j < count; j++)
+-				rpmh_tx_done(&rpm_msgs[j].msg, ret);
+ 			break;
+ 		}
+ 	}
+ 
+ 	time_left = RPMH_TIMEOUT_MS;
+-	for (i = 0; i < count; i++) {
+-		time_left = wait_for_completion_timeout(&compl, time_left);
++	while (i--) {
++		time_left = wait_for_completion_timeout(&compls[i], time_left);
+ 		if (!time_left) {
+ 			/*
+ 			 * Better hope they never finish because they'll signal
+-			 * the completion on our stack and that's bad once
+-			 * we've returned from the function.
++			 * the completion that we're going to free once
++			 * we've returned from this function.
+ 			 */
+ 			WARN_ON(1);
+ 			ret = -ETIMEDOUT;
+@@ -407,7 +415,7 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
+ 	}
+ 
+ exit:
+-	kfree(req);
++	kfree(ptr);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/soc/tegra/fuse/fuse-tegra.c b/drivers/soc/tegra/fuse/fuse-tegra.c
+index a33ee8ef8b6b..51625703399e 100644
+--- a/drivers/soc/tegra/fuse/fuse-tegra.c
++++ b/drivers/soc/tegra/fuse/fuse-tegra.c
+@@ -137,13 +137,17 @@ static int tegra_fuse_probe(struct platform_device *pdev)
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	fuse->phys = res->start;
+ 	fuse->base = devm_ioremap_resource(&pdev->dev, res);
+-	if (IS_ERR(fuse->base))
+-		return PTR_ERR(fuse->base);
++	if (IS_ERR(fuse->base)) {
++		err = PTR_ERR(fuse->base);
++		fuse->base = base;
++		return err;
++	}
+ 
+ 	fuse->clk = devm_clk_get(&pdev->dev, "fuse");
+ 	if (IS_ERR(fuse->clk)) {
+ 		dev_err(&pdev->dev, "failed to get FUSE clock: %ld",
+ 			PTR_ERR(fuse->clk));
++		fuse->base = base;
+ 		return PTR_ERR(fuse->clk);
+ 	}
+ 
+@@ -152,8 +156,10 @@ static int tegra_fuse_probe(struct platform_device *pdev)
+ 
+ 	if (fuse->soc->probe) {
+ 		err = fuse->soc->probe(fuse);
+-		if (err < 0)
++		if (err < 0) {
++			fuse->base = base;
+ 			return err;
++		}
+ 	}
+ 
+ 	if (tegra_fuse_create_sysfs(&pdev->dev, fuse->soc->info->size,
+diff --git a/drivers/spi/spi-gpio.c b/drivers/spi/spi-gpio.c
+index a4aee26028cd..53b35c56a557 100644
+--- a/drivers/spi/spi-gpio.c
++++ b/drivers/spi/spi-gpio.c
+@@ -428,7 +428,8 @@ static int spi_gpio_probe(struct platform_device *pdev)
+ 		return status;
+ 
+ 	master->bits_per_word_mask = SPI_BPW_RANGE_MASK(1, 32);
+-	master->mode_bits = SPI_3WIRE | SPI_3WIRE_HIZ | SPI_CPHA | SPI_CPOL;
++	master->mode_bits = SPI_3WIRE | SPI_3WIRE_HIZ | SPI_CPHA | SPI_CPOL |
++			    SPI_CS_HIGH;
+ 	master->flags = master_flags;
+ 	master->bus_num = pdev->id;
+ 	/* The master needs to think there is a chipselect even if not connected */
+@@ -455,7 +456,6 @@ static int spi_gpio_probe(struct platform_device *pdev)
+ 		spi_gpio->bitbang.txrx_word[SPI_MODE_3] = spi_gpio_spec_txrx_word_mode3;
+ 	}
+ 	spi_gpio->bitbang.setup_transfer = spi_bitbang_setup_transfer;
+-	spi_gpio->bitbang.flags = SPI_CS_HIGH;
+ 
+ 	status = spi_bitbang_start(&spi_gpio->bitbang);
+ 	if (status)
+diff --git a/drivers/spi/spi-omap2-mcspi.c b/drivers/spi/spi-omap2-mcspi.c
+index 2fd8881fcd65..8be304379628 100644
+--- a/drivers/spi/spi-omap2-mcspi.c
++++ b/drivers/spi/spi-omap2-mcspi.c
+@@ -623,8 +623,8 @@ omap2_mcspi_txrx_dma(struct spi_device *spi, struct spi_transfer *xfer)
+ 	cfg.dst_addr = cs->phys + OMAP2_MCSPI_TX0;
+ 	cfg.src_addr_width = width;
+ 	cfg.dst_addr_width = width;
+-	cfg.src_maxburst = es;
+-	cfg.dst_maxburst = es;
++	cfg.src_maxburst = 1;
++	cfg.dst_maxburst = 1;
+ 
+ 	rx = xfer->rx_buf;
+ 	tx = xfer->tx_buf;
+diff --git a/drivers/spi/spi-pxa2xx.c b/drivers/spi/spi-pxa2xx.c
+index d84b893a64d7..3e82eaad0f2d 100644
+--- a/drivers/spi/spi-pxa2xx.c
++++ b/drivers/spi/spi-pxa2xx.c
+@@ -1696,6 +1696,7 @@ static int pxa2xx_spi_probe(struct platform_device *pdev)
+ 			platform_info->enable_dma = false;
+ 		} else {
+ 			master->can_dma = pxa2xx_spi_can_dma;
++			master->max_dma_len = MAX_DMA_LEN;
+ 		}
+ 	}
+ 
+diff --git a/drivers/spi/spi-ti-qspi.c b/drivers/spi/spi-ti-qspi.c
+index 5f19016bbf10..b9fb6493cd6b 100644
+--- a/drivers/spi/spi-ti-qspi.c
++++ b/drivers/spi/spi-ti-qspi.c
+@@ -490,8 +490,8 @@ static void ti_qspi_enable_memory_map(struct spi_device *spi)
+ 	ti_qspi_write(qspi, MM_SWITCH, QSPI_SPI_SWITCH_REG);
+ 	if (qspi->ctrl_base) {
+ 		regmap_update_bits(qspi->ctrl_base, qspi->ctrl_reg,
+-				   MEM_CS_EN(spi->chip_select),
+-				   MEM_CS_MASK);
++				   MEM_CS_MASK,
++				   MEM_CS_EN(spi->chip_select));
+ 	}
+ 	qspi->mmap_enabled = true;
+ }
+@@ -503,7 +503,7 @@ static void ti_qspi_disable_memory_map(struct spi_device *spi)
+ 	ti_qspi_write(qspi, 0, QSPI_SPI_SWITCH_REG);
+ 	if (qspi->ctrl_base)
+ 		regmap_update_bits(qspi->ctrl_base, qspi->ctrl_reg,
+-				   0, MEM_CS_MASK);
++				   MEM_CS_MASK, 0);
+ 	qspi->mmap_enabled = false;
+ }
+ 
+diff --git a/drivers/staging/android/ashmem.c b/drivers/staging/android/ashmem.c
+index 90a8a9f1ac7d..910826df4a31 100644
+--- a/drivers/staging/android/ashmem.c
++++ b/drivers/staging/android/ashmem.c
+@@ -75,6 +75,9 @@ struct ashmem_range {
+ /* LRU list of unpinned pages, protected by ashmem_mutex */
+ static LIST_HEAD(ashmem_lru_list);
+ 
++static atomic_t ashmem_shrink_inflight = ATOMIC_INIT(0);
++static DECLARE_WAIT_QUEUE_HEAD(ashmem_shrink_wait);
++
+ /*
+  * long lru_count - The count of pages on our LRU list.
+  *
+@@ -168,19 +171,15 @@ static inline void lru_del(struct ashmem_range *range)
+  * @end:	   The ending page (inclusive)
+  *
+  * This function is protected by ashmem_mutex.
+- *
+- * Return: 0 if successful, or -ENOMEM if there is an error
+  */
+-static int range_alloc(struct ashmem_area *asma,
+-		       struct ashmem_range *prev_range, unsigned int purged,
+-		       size_t start, size_t end)
++static void range_alloc(struct ashmem_area *asma,
++			struct ashmem_range *prev_range, unsigned int purged,
++			size_t start, size_t end,
++			struct ashmem_range **new_range)
+ {
+-	struct ashmem_range *range;
+-
+-	range = kmem_cache_zalloc(ashmem_range_cachep, GFP_KERNEL);
+-	if (!range)
+-		return -ENOMEM;
++	struct ashmem_range *range = *new_range;
+ 
++	*new_range = NULL;
+ 	range->asma = asma;
+ 	range->pgstart = start;
+ 	range->pgend = end;
+@@ -190,8 +189,6 @@ static int range_alloc(struct ashmem_area *asma,
+ 
+ 	if (range_on_lru(range))
+ 		lru_add(range);
+-
+-	return 0;
+ }
+ 
+ /**
+@@ -438,7 +435,6 @@ out:
+ static unsigned long
+ ashmem_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
+ {
+-	struct ashmem_range *range, *next;
+ 	unsigned long freed = 0;
+ 
+ 	/* We might recurse into filesystem code, so bail out if necessary */
+@@ -448,21 +444,33 @@ ashmem_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
+ 	if (!mutex_trylock(&ashmem_mutex))
+ 		return -1;
+ 
+-	list_for_each_entry_safe(range, next, &ashmem_lru_list, lru) {
++	while (!list_empty(&ashmem_lru_list)) {
++		struct ashmem_range *range =
++			list_first_entry(&ashmem_lru_list, typeof(*range), lru);
+ 		loff_t start = range->pgstart * PAGE_SIZE;
+ 		loff_t end = (range->pgend + 1) * PAGE_SIZE;
++		struct file *f = range->asma->file;
+ 
+-		range->asma->file->f_op->fallocate(range->asma->file,
+-				FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
+-				start, end - start);
++		get_file(f);
++		atomic_inc(&ashmem_shrink_inflight);
+ 		range->purged = ASHMEM_WAS_PURGED;
+ 		lru_del(range);
+ 
+ 		freed += range_size(range);
++		mutex_unlock(&ashmem_mutex);
++		f->f_op->fallocate(f,
++				   FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
++				   start, end - start);
++		fput(f);
++		if (atomic_dec_and_test(&ashmem_shrink_inflight))
++			wake_up_all(&ashmem_shrink_wait);
++		if (!mutex_trylock(&ashmem_mutex))
++			goto out;
+ 		if (--sc->nr_to_scan <= 0)
+ 			break;
+ 	}
+ 	mutex_unlock(&ashmem_mutex);
++out:
+ 	return freed;
+ }
+ 
+@@ -582,7 +590,8 @@ static int get_name(struct ashmem_area *asma, void __user *name)
+  *
+  * Caller must hold ashmem_mutex.
+  */
+-static int ashmem_pin(struct ashmem_area *asma, size_t pgstart, size_t pgend)
++static int ashmem_pin(struct ashmem_area *asma, size_t pgstart, size_t pgend,
++		      struct ashmem_range **new_range)
+ {
+ 	struct ashmem_range *range, *next;
+ 	int ret = ASHMEM_NOT_PURGED;
+@@ -635,7 +644,7 @@ static int ashmem_pin(struct ashmem_area *asma, size_t pgstart, size_t pgend)
+ 			 * second half and adjust the first chunk's endpoint.
+ 			 */
+ 			range_alloc(asma, range, range->purged,
+-				    pgend + 1, range->pgend);
++				    pgend + 1, range->pgend, new_range);
+ 			range_shrink(range, range->pgstart, pgstart - 1);
+ 			break;
+ 		}
+@@ -649,7 +658,8 @@ static int ashmem_pin(struct ashmem_area *asma, size_t pgstart, size_t pgend)
+  *
+  * Caller must hold ashmem_mutex.
+  */
+-static int ashmem_unpin(struct ashmem_area *asma, size_t pgstart, size_t pgend)
++static int ashmem_unpin(struct ashmem_area *asma, size_t pgstart, size_t pgend,
++			struct ashmem_range **new_range)
+ {
+ 	struct ashmem_range *range, *next;
+ 	unsigned int purged = ASHMEM_NOT_PURGED;
+@@ -675,7 +685,8 @@ restart:
+ 		}
+ 	}
+ 
+-	return range_alloc(asma, range, purged, pgstart, pgend);
++	range_alloc(asma, range, purged, pgstart, pgend, new_range);
++	return 0;
+ }
+ 
+ /*
+@@ -708,11 +719,19 @@ static int ashmem_pin_unpin(struct ashmem_area *asma, unsigned long cmd,
+ 	struct ashmem_pin pin;
+ 	size_t pgstart, pgend;
+ 	int ret = -EINVAL;
++	struct ashmem_range *range = NULL;
+ 
+ 	if (copy_from_user(&pin, p, sizeof(pin)))
+ 		return -EFAULT;
+ 
++	if (cmd == ASHMEM_PIN || cmd == ASHMEM_UNPIN) {
++		range = kmem_cache_zalloc(ashmem_range_cachep, GFP_KERNEL);
++		if (!range)
++			return -ENOMEM;
++	}
++
+ 	mutex_lock(&ashmem_mutex);
++	wait_event(ashmem_shrink_wait, !atomic_read(&ashmem_shrink_inflight));
+ 
+ 	if (!asma->file)
+ 		goto out_unlock;
+@@ -735,10 +754,10 @@ static int ashmem_pin_unpin(struct ashmem_area *asma, unsigned long cmd,
+ 
+ 	switch (cmd) {
+ 	case ASHMEM_PIN:
+-		ret = ashmem_pin(asma, pgstart, pgend);
++		ret = ashmem_pin(asma, pgstart, pgend, &range);
+ 		break;
+ 	case ASHMEM_UNPIN:
+-		ret = ashmem_unpin(asma, pgstart, pgend);
++		ret = ashmem_unpin(asma, pgstart, pgend, &range);
+ 		break;
+ 	case ASHMEM_GET_PIN_STATUS:
+ 		ret = ashmem_get_pin_status(asma, pgstart, pgend);
+@@ -747,6 +766,8 @@ static int ashmem_pin_unpin(struct ashmem_area *asma, unsigned long cmd,
+ 
+ out_unlock:
+ 	mutex_unlock(&ashmem_mutex);
++	if (range)
++		kmem_cache_free(ashmem_range_cachep, range);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/staging/android/ion/ion_system_heap.c b/drivers/staging/android/ion/ion_system_heap.c
+index 0383f7548d48..20f2103a4ebf 100644
+--- a/drivers/staging/android/ion/ion_system_heap.c
++++ b/drivers/staging/android/ion/ion_system_heap.c
+@@ -223,10 +223,10 @@ static void ion_system_heap_destroy_pools(struct ion_page_pool **pools)
+ static int ion_system_heap_create_pools(struct ion_page_pool **pools)
+ {
+ 	int i;
+-	gfp_t gfp_flags = low_order_gfp_flags;
+ 
+ 	for (i = 0; i < NUM_ORDERS; i++) {
+ 		struct ion_page_pool *pool;
++		gfp_t gfp_flags = low_order_gfp_flags;
+ 
+ 		if (orders[i] > 4)
+ 			gfp_flags = high_order_gfp_flags;
+diff --git a/drivers/staging/comedi/comedidev.h b/drivers/staging/comedi/comedidev.h
+index a7d569cfca5d..0dff1ac057cd 100644
+--- a/drivers/staging/comedi/comedidev.h
++++ b/drivers/staging/comedi/comedidev.h
+@@ -1001,6 +1001,8 @@ int comedi_dio_insn_config(struct comedi_device *dev,
+ 			   unsigned int mask);
+ unsigned int comedi_dio_update_state(struct comedi_subdevice *s,
+ 				     unsigned int *data);
++unsigned int comedi_bytes_per_scan_cmd(struct comedi_subdevice *s,
++				       struct comedi_cmd *cmd);
+ unsigned int comedi_bytes_per_scan(struct comedi_subdevice *s);
+ unsigned int comedi_nscans_left(struct comedi_subdevice *s,
+ 				unsigned int nscans);
+diff --git a/drivers/staging/comedi/drivers.c b/drivers/staging/comedi/drivers.c
+index eefa62f42c0f..5a32b8fc000e 100644
+--- a/drivers/staging/comedi/drivers.c
++++ b/drivers/staging/comedi/drivers.c
+@@ -394,11 +394,13 @@ unsigned int comedi_dio_update_state(struct comedi_subdevice *s,
+ EXPORT_SYMBOL_GPL(comedi_dio_update_state);
+ 
+ /**
+- * comedi_bytes_per_scan() - Get length of asynchronous command "scan" in bytes
++ * comedi_bytes_per_scan_cmd() - Get length of asynchronous command "scan" in
++ * bytes
+  * @s: COMEDI subdevice.
++ * @cmd: COMEDI command.
+  *
+  * Determines the overall scan length according to the subdevice type and the
+- * number of channels in the scan.
++ * number of channels in the scan for the specified command.
+  *
+  * For digital input, output or input/output subdevices, samples for
+  * multiple channels are assumed to be packed into one or more unsigned
+@@ -408,9 +410,9 @@ EXPORT_SYMBOL_GPL(comedi_dio_update_state);
+  *
+  * Returns the overall scan length in bytes.
+  */
+-unsigned int comedi_bytes_per_scan(struct comedi_subdevice *s)
++unsigned int comedi_bytes_per_scan_cmd(struct comedi_subdevice *s,
++				       struct comedi_cmd *cmd)
+ {
+-	struct comedi_cmd *cmd = &s->async->cmd;
+ 	unsigned int num_samples;
+ 	unsigned int bits_per_sample;
+ 
+@@ -427,6 +429,29 @@ unsigned int comedi_bytes_per_scan(struct comedi_subdevice *s)
+ 	}
+ 	return comedi_samples_to_bytes(s, num_samples);
+ }
++EXPORT_SYMBOL_GPL(comedi_bytes_per_scan_cmd);
++
++/**
++ * comedi_bytes_per_scan() - Get length of asynchronous command "scan" in bytes
++ * @s: COMEDI subdevice.
++ *
++ * Determines the overall scan length according to the subdevice type and the
++ * number of channels in the scan for the current command.
++ *
++ * For digital input, output or input/output subdevices, samples for
++ * multiple channels are assumed to be packed into one or more unsigned
++ * short or unsigned int values according to the subdevice's %SDF_LSAMPL
++ * flag.  For other types of subdevice, samples are assumed to occupy a
++ * whole unsigned short or unsigned int according to the %SDF_LSAMPL flag.
++ *
++ * Returns the overall scan length in bytes.
++ */
++unsigned int comedi_bytes_per_scan(struct comedi_subdevice *s)
++{
++	struct comedi_cmd *cmd = &s->async->cmd;
++
++	return comedi_bytes_per_scan_cmd(s, cmd);
++}
+ EXPORT_SYMBOL_GPL(comedi_bytes_per_scan);
+ 
+ static unsigned int __comedi_nscans_left(struct comedi_subdevice *s,
+diff --git a/drivers/staging/comedi/drivers/ni_660x.c b/drivers/staging/comedi/drivers/ni_660x.c
+index e70a461e723f..405573e927cf 100644
+--- a/drivers/staging/comedi/drivers/ni_660x.c
++++ b/drivers/staging/comedi/drivers/ni_660x.c
+@@ -656,6 +656,7 @@ static int ni_660x_set_pfi_routing(struct comedi_device *dev,
+ 	case NI_660X_PFI_OUTPUT_DIO:
+ 		if (chan > 31)
+ 			return -EINVAL;
++		break;
+ 	default:
+ 		return -EINVAL;
+ 	}
+diff --git a/drivers/staging/comedi/drivers/ni_mio_common.c b/drivers/staging/comedi/drivers/ni_mio_common.c
+index 5edf59ac6706..b04dad8c7092 100644
+--- a/drivers/staging/comedi/drivers/ni_mio_common.c
++++ b/drivers/staging/comedi/drivers/ni_mio_common.c
+@@ -3545,6 +3545,7 @@ static int ni_cdio_cmdtest(struct comedi_device *dev,
+ 			   struct comedi_subdevice *s, struct comedi_cmd *cmd)
+ {
+ 	struct ni_private *devpriv = dev->private;
++	unsigned int bytes_per_scan;
+ 	int err = 0;
+ 
+ 	/* Step 1 : check if triggers are trivially valid */
+@@ -3579,9 +3580,12 @@ static int ni_cdio_cmdtest(struct comedi_device *dev,
+ 	err |= comedi_check_trigger_arg_is(&cmd->convert_arg, 0);
+ 	err |= comedi_check_trigger_arg_is(&cmd->scan_end_arg,
+ 					   cmd->chanlist_len);
+-	err |= comedi_check_trigger_arg_max(&cmd->stop_arg,
+-					    s->async->prealloc_bufsz /
+-					    comedi_bytes_per_scan(s));
++	bytes_per_scan = comedi_bytes_per_scan_cmd(s, cmd);
++	if (bytes_per_scan) {
++		err |= comedi_check_trigger_arg_max(&cmd->stop_arg,
++						    s->async->prealloc_bufsz /
++						    bytes_per_scan);
++	}
+ 
+ 	if (err)
+ 		return 3;
+diff --git a/drivers/staging/erofs/dir.c b/drivers/staging/erofs/dir.c
+index 833f052f79d0..b21ed5b4c711 100644
+--- a/drivers/staging/erofs/dir.c
++++ b/drivers/staging/erofs/dir.c
+@@ -23,6 +23,21 @@ static const unsigned char erofs_filetype_table[EROFS_FT_MAX] = {
+ 	[EROFS_FT_SYMLINK]	= DT_LNK,
+ };
+ 
++static void debug_one_dentry(unsigned char d_type, const char *de_name,
++			     unsigned int de_namelen)
++{
++#ifdef CONFIG_EROFS_FS_DEBUG
++	/* since the on-disk name could not have the trailing '\0' */
++	unsigned char dbg_namebuf[EROFS_NAME_LEN + 1];
++
++	memcpy(dbg_namebuf, de_name, de_namelen);
++	dbg_namebuf[de_namelen] = '\0';
++
++	debugln("found dirent %s de_len %u d_type %d", dbg_namebuf,
++		de_namelen, d_type);
++#endif
++}
++
+ static int erofs_fill_dentries(struct dir_context *ctx,
+ 	void *dentry_blk, unsigned int *ofs,
+ 	unsigned int nameoff, unsigned int maxsize)
+@@ -33,14 +48,10 @@ static int erofs_fill_dentries(struct dir_context *ctx,
+ 	de = dentry_blk + *ofs;
+ 	while (de < end) {
+ 		const char *de_name;
+-		int de_namelen;
++		unsigned int de_namelen;
+ 		unsigned char d_type;
+-#ifdef CONFIG_EROFS_FS_DEBUG
+-		unsigned int dbg_namelen;
+-		unsigned char dbg_namebuf[EROFS_NAME_LEN];
+-#endif
+ 
+-		if (unlikely(de->file_type < EROFS_FT_MAX))
++		if (de->file_type < EROFS_FT_MAX)
+ 			d_type = erofs_filetype_table[de->file_type];
+ 		else
+ 			d_type = DT_UNKNOWN;
+@@ -48,26 +59,20 @@ static int erofs_fill_dentries(struct dir_context *ctx,
+ 		nameoff = le16_to_cpu(de->nameoff);
+ 		de_name = (char *)dentry_blk + nameoff;
+ 
+-		de_namelen = unlikely(de + 1 >= end) ?
+-			/* last directory entry */
+-			strnlen(de_name, maxsize - nameoff) :
+-			le16_to_cpu(de[1].nameoff) - nameoff;
++		/* the last dirent in the block? */
++		if (de + 1 >= end)
++			de_namelen = strnlen(de_name, maxsize - nameoff);
++		else
++			de_namelen = le16_to_cpu(de[1].nameoff) - nameoff;
+ 
+ 		/* a corrupted entry is found */
+-		if (unlikely(de_namelen < 0)) {
++		if (unlikely(nameoff + de_namelen > maxsize ||
++			     de_namelen > EROFS_NAME_LEN)) {
+ 			DBG_BUGON(1);
+ 			return -EIO;
+ 		}
+ 
+-#ifdef CONFIG_EROFS_FS_DEBUG
+-		dbg_namelen = min(EROFS_NAME_LEN - 1, de_namelen);
+-		memcpy(dbg_namebuf, de_name, dbg_namelen);
+-		dbg_namebuf[dbg_namelen] = '\0';
+-
+-		debugln("%s, found de_name %s de_len %d d_type %d", __func__,
+-			dbg_namebuf, de_namelen, d_type);
+-#endif
+-
++		debug_one_dentry(d_type, de_name, de_namelen);
+ 		if (!dir_emit(ctx, de_name, de_namelen,
+ 			      le64_to_cpu(de->nid), d_type))
+ 			/* stopped by some reason */
+diff --git a/drivers/staging/erofs/inode.c b/drivers/staging/erofs/inode.c
+index d7fbf5f4600f..f99954dbfdb5 100644
+--- a/drivers/staging/erofs/inode.c
++++ b/drivers/staging/erofs/inode.c
+@@ -185,16 +185,16 @@ static int fill_inode(struct inode *inode, int isdir)
+ 		/* setup the new inode */
+ 		if (S_ISREG(inode->i_mode)) {
+ #ifdef CONFIG_EROFS_FS_XATTR
+-			if (vi->xattr_isize)
+-				inode->i_op = &erofs_generic_xattr_iops;
++			inode->i_op = &erofs_generic_xattr_iops;
+ #endif
+ 			inode->i_fop = &generic_ro_fops;
+ 		} else if (S_ISDIR(inode->i_mode)) {
+ 			inode->i_op =
+ #ifdef CONFIG_EROFS_FS_XATTR
+-				vi->xattr_isize ? &erofs_dir_xattr_iops :
+-#endif
++				&erofs_dir_xattr_iops;
++#else
+ 				&erofs_dir_iops;
++#endif
+ 			inode->i_fop = &erofs_dir_fops;
+ 		} else if (S_ISLNK(inode->i_mode)) {
+ 			/* by default, page_get_link is used for symlink */
+diff --git a/drivers/staging/erofs/internal.h b/drivers/staging/erofs/internal.h
+index e049d00c087a..16249d7f0895 100644
+--- a/drivers/staging/erofs/internal.h
++++ b/drivers/staging/erofs/internal.h
+@@ -354,12 +354,17 @@ static inline erofs_off_t iloc(struct erofs_sb_info *sbi, erofs_nid_t nid)
+ 	return blknr_to_addr(sbi->meta_blkaddr) + (nid << sbi->islotbits);
+ }
+ 
+-#define inode_set_inited_xattr(inode)   (EROFS_V(inode)->flags |= 1)
+-#define inode_has_inited_xattr(inode)   (EROFS_V(inode)->flags & 1)
++/* atomic flag definitions */
++#define EROFS_V_EA_INITED_BIT	0
++
++/* bitlock definitions (arranged in reverse order) */
++#define EROFS_V_BL_XATTR_BIT	(BITS_PER_LONG - 1)
+ 
+ struct erofs_vnode {
+ 	erofs_nid_t nid;
+-	unsigned int flags;
++
++	/* atomic flags (including bitlocks) */
++	unsigned long flags;
+ 
+ 	unsigned char data_mapping_mode;
+ 	/* inline size in bytes */
+diff --git a/drivers/staging/erofs/namei.c b/drivers/staging/erofs/namei.c
+index 5596c52e246d..ecc51ef0753f 100644
+--- a/drivers/staging/erofs/namei.c
++++ b/drivers/staging/erofs/namei.c
+@@ -15,74 +15,77 @@
+ 
+ #include <trace/events/erofs.h>
+ 
+-/* based on the value of qn->len is accurate */
+-static inline int dirnamecmp(struct qstr *qn,
+-	struct qstr *qd, unsigned int *matched)
++struct erofs_qstr {
++	const unsigned char *name;
++	const unsigned char *end;
++};
++
++/* based on the end of qn is accurate and it must have the trailing '\0' */
++static inline int dirnamecmp(const struct erofs_qstr *qn,
++			     const struct erofs_qstr *qd,
++			     unsigned int *matched)
+ {
+-	unsigned int i = *matched, len = min(qn->len, qd->len);
+-loop:
+-	if (unlikely(i >= len)) {
+-		*matched = i;
+-		if (qn->len < qd->len) {
+-			/*
+-			 * actually (qn->len == qd->len)
+-			 * when qd->name[i] == '\0'
+-			 */
+-			return qd->name[i] == '\0' ? 0 : -1;
++	unsigned int i = *matched;
++
++	/*
++	 * on-disk error, let's only BUG_ON in the debugging mode.
++	 * otherwise, it will return 1 to just skip the invalid name
++	 * and go on (in consideration of the lookup performance).
++	 */
++	DBG_BUGON(qd->name > qd->end);
++
++	/* qd could not have trailing '\0' */
++	/* However it is absolutely safe if < qd->end */
++	while (qd->name + i < qd->end && qd->name[i] != '\0') {
++		if (qn->name[i] != qd->name[i]) {
++			*matched = i;
++			return qn->name[i] > qd->name[i] ? 1 : -1;
+ 		}
+-		return (qn->len > qd->len);
++		++i;
+ 	}
+-
+-	if (qn->name[i] != qd->name[i]) {
+-		*matched = i;
+-		return qn->name[i] > qd->name[i] ? 1 : -1;
+-	}
+-
+-	++i;
+-	goto loop;
++	*matched = i;
++	/* See comments in __d_alloc on the terminating NUL character */
++	return qn->name[i] == '\0' ? 0 : 1;
+ }
+ 
+-static struct erofs_dirent *find_target_dirent(
+-	struct qstr *name,
+-	u8 *data, int maxsize)
++#define nameoff_from_disk(off, sz)	(le16_to_cpu(off) & ((sz) - 1))
++
++static struct erofs_dirent *find_target_dirent(struct erofs_qstr *name,
++					       u8 *data,
++					       unsigned int dirblksize,
++					       const int ndirents)
+ {
+-	unsigned int ndirents, head, back;
++	int head, back;
+ 	unsigned int startprfx, endprfx;
+ 	struct erofs_dirent *const de = (struct erofs_dirent *)data;
+ 
+-	/* make sure that maxsize is valid */
+-	BUG_ON(maxsize < sizeof(struct erofs_dirent));
+-
+-	ndirents = le16_to_cpu(de->nameoff) / sizeof(*de);
+-
+-	/* corrupted dir (may be unnecessary...) */
+-	BUG_ON(!ndirents);
+-
+-	head = 0;
++	/* since the 1st dirent has been evaluated previously */
++	head = 1;
+ 	back = ndirents - 1;
+ 	startprfx = endprfx = 0;
+ 
+ 	while (head <= back) {
+-		unsigned int mid = head + (back - head) / 2;
+-		unsigned int nameoff = le16_to_cpu(de[mid].nameoff);
++		const int mid = head + (back - head) / 2;
++		const int nameoff = nameoff_from_disk(de[mid].nameoff,
++						      dirblksize);
+ 		unsigned int matched = min(startprfx, endprfx);
+-
+-		struct qstr dname = QSTR_INIT(data + nameoff,
+-			unlikely(mid >= ndirents - 1) ?
+-				maxsize - nameoff :
+-				le16_to_cpu(de[mid + 1].nameoff) - nameoff);
++		struct erofs_qstr dname = {
++			.name = data + nameoff,
++			.end = unlikely(mid >= ndirents - 1) ?
++				data + dirblksize :
++				data + nameoff_from_disk(de[mid + 1].nameoff,
++							 dirblksize)
++		};
+ 
+ 		/* string comparison without already matched prefix */
+ 		int ret = dirnamecmp(name, &dname, &matched);
+ 
+-		if (unlikely(!ret))
++		if (unlikely(!ret)) {
+ 			return de + mid;
+-		else if (ret > 0) {
++		} else if (ret > 0) {
+ 			head = mid + 1;
+ 			startprfx = matched;
+-		} else if (unlikely(mid < 1))	/* fix "mid" overflow */
+-			break;
+-		else {
++		} else {
+ 			back = mid - 1;
+ 			endprfx = matched;
+ 		}
+@@ -91,12 +94,12 @@ static struct erofs_dirent *find_target_dirent(
+ 	return ERR_PTR(-ENOENT);
+ }
+ 
+-static struct page *find_target_block_classic(
+-	struct inode *dir,
+-	struct qstr *name, int *_diff)
++static struct page *find_target_block_classic(struct inode *dir,
++					      struct erofs_qstr *name,
++					      int *_ndirents)
+ {
+ 	unsigned int startprfx, endprfx;
+-	unsigned int head, back;
++	int head, back;
+ 	struct address_space *const mapping = dir->i_mapping;
+ 	struct page *candidate = ERR_PTR(-ENOENT);
+ 
+@@ -105,41 +108,43 @@ static struct page *find_target_block_classic(
+ 	back = inode_datablocks(dir) - 1;
+ 
+ 	while (head <= back) {
+-		unsigned int mid = head + (back - head) / 2;
++		const int mid = head + (back - head) / 2;
+ 		struct page *page = read_mapping_page(mapping, mid, NULL);
+ 
+-		if (IS_ERR(page)) {
+-exact_out:
+-			if (!IS_ERR(candidate)) /* valid candidate */
+-				put_page(candidate);
+-			return page;
+-		} else {
+-			int diff;
+-			unsigned int ndirents, matched;
+-			struct qstr dname;
++		if (!IS_ERR(page)) {
+ 			struct erofs_dirent *de = kmap_atomic(page);
+-			unsigned int nameoff = le16_to_cpu(de->nameoff);
+-
+-			ndirents = nameoff / sizeof(*de);
++			const int nameoff = nameoff_from_disk(de->nameoff,
++							      EROFS_BLKSIZ);
++			const int ndirents = nameoff / sizeof(*de);
++			int diff;
++			unsigned int matched;
++			struct erofs_qstr dname;
+ 
+-			/* corrupted dir (should have one entry at least) */
+-			BUG_ON(!ndirents || nameoff > PAGE_SIZE);
++			if (unlikely(!ndirents)) {
++				DBG_BUGON(1);
++				kunmap_atomic(de);
++				put_page(page);
++				page = ERR_PTR(-EIO);
++				goto out;
++			}
+ 
+ 			matched = min(startprfx, endprfx);
+ 
+ 			dname.name = (u8 *)de + nameoff;
+-			dname.len = ndirents == 1 ?
+-				/* since the rest of the last page is 0 */
+-				EROFS_BLKSIZ - nameoff
+-				: le16_to_cpu(de[1].nameoff) - nameoff;
++			if (ndirents == 1)
++				dname.end = (u8 *)de + EROFS_BLKSIZ;
++			else
++				dname.end = (u8 *)de +
++					nameoff_from_disk(de[1].nameoff,
++							  EROFS_BLKSIZ);
+ 
+ 			/* string comparison without already matched prefix */
+ 			diff = dirnamecmp(name, &dname, &matched);
+ 			kunmap_atomic(de);
+ 
+ 			if (unlikely(!diff)) {
+-				*_diff = 0;
+-				goto exact_out;
++				*_ndirents = 0;
++				goto out;
+ 			} else if (diff > 0) {
+ 				head = mid + 1;
+ 				startprfx = matched;
+@@ -147,45 +152,51 @@ exact_out:
+ 				if (likely(!IS_ERR(candidate)))
+ 					put_page(candidate);
+ 				candidate = page;
++				*_ndirents = ndirents;
+ 			} else {
+ 				put_page(page);
+ 
+-				if (unlikely(mid < 1))	/* fix "mid" overflow */
+-					break;
+-
+ 				back = mid - 1;
+ 				endprfx = matched;
+ 			}
++			continue;
+ 		}
++out:		/* free if the candidate is valid */
++		if (!IS_ERR(candidate))
++			put_page(candidate);
++		return page;
+ 	}
+-	*_diff = 1;
+ 	return candidate;
+ }
+ 
+ int erofs_namei(struct inode *dir,
+-	struct qstr *name,
+-	erofs_nid_t *nid, unsigned int *d_type)
++		struct qstr *name,
++		erofs_nid_t *nid, unsigned int *d_type)
+ {
+-	int diff;
++	int ndirents;
+ 	struct page *page;
+-	u8 *data;
++	void *data;
+ 	struct erofs_dirent *de;
++	struct erofs_qstr qn;
+ 
+ 	if (unlikely(!dir->i_size))
+ 		return -ENOENT;
+ 
+-	diff = 1;
+-	page = find_target_block_classic(dir, name, &diff);
++	qn.name = name->name;
++	qn.end = name->name + name->len;
++
++	ndirents = 0;
++	page = find_target_block_classic(dir, &qn, &ndirents);
+ 
+ 	if (unlikely(IS_ERR(page)))
+ 		return PTR_ERR(page);
+ 
+ 	data = kmap_atomic(page);
+ 	/* the target page has been mapped */
+-	de = likely(diff) ?
+-		/* since the rest of the last page is 0 */
+-		find_target_dirent(name, data, EROFS_BLKSIZ) :
+-		(struct erofs_dirent *)data;
++	if (ndirents)
++		de = find_target_dirent(&qn, data, EROFS_BLKSIZ, ndirents);
++	else
++		de = (struct erofs_dirent *)data;
+ 
+ 	if (likely(!IS_ERR(de))) {
+ 		*nid = le64_to_cpu(de->nid);
+diff --git a/drivers/staging/erofs/unzip_vle.c b/drivers/staging/erofs/unzip_vle.c
+index 4ac1099a39c6..d850be1abc84 100644
+--- a/drivers/staging/erofs/unzip_vle.c
++++ b/drivers/staging/erofs/unzip_vle.c
+@@ -107,15 +107,30 @@ enum z_erofs_vle_work_role {
+ 	Z_EROFS_VLE_WORK_SECONDARY,
+ 	Z_EROFS_VLE_WORK_PRIMARY,
+ 	/*
+-	 * The current work has at least been linked with the following
+-	 * processed chained works, which means if the processing page
+-	 * is the tail partial page of the work, the current work can
+-	 * safely use the whole page, as illustrated below:
+-	 * +--------------+-------------------------------------------+
+-	 * |  tail page   |      head page (of the previous work)     |
+-	 * +--------------+-------------------------------------------+
+-	 *   /\  which belongs to the current work
+-	 * [  (*) this page can be used for the current work itself.  ]
+	 * The current work was the tail of an existing chain, and the previous
++	 * processed chained works are all decided to be hooked up to it.
++	 * A new chain should be created for the remaining unprocessed works,
++	 * therefore different from Z_EROFS_VLE_WORK_PRIMARY_FOLLOWED,
++	 * the next work cannot reuse the whole page in the following scenario:
++	 *  ________________________________________________________________
++	 * |      tail (partial) page     |       head (partial) page       |
++	 * |  (belongs to the next work)  |  (belongs to the current work)  |
++	 * |_______PRIMARY_FOLLOWED_______|________PRIMARY_HOOKED___________|
++	 */
++	Z_EROFS_VLE_WORK_PRIMARY_HOOKED,
++	/*
++	 * The current work has been linked with the processed chained works,
++	 * and could be also linked with the potential remaining works, which
++	 * means if the processing page is the tail partial page of the work,
++	 * the current work can safely use the whole page (since the next work
++	 * is under control) for in-place decompression, as illustrated below:
++	 *  ________________________________________________________________
++	 * |  tail (partial) page  |          head (partial) page           |
++	 * | (of the current work) |         (of the previous work)         |
++	 * |  PRIMARY_FOLLOWED or  |                                        |
++	 * |_____PRIMARY_HOOKED____|____________PRIMARY_FOLLOWED____________|
++	 *
++	 * [  (*) the above page can be used for the current work itself.  ]
+ 	 */
+ 	Z_EROFS_VLE_WORK_PRIMARY_FOLLOWED,
+ 	Z_EROFS_VLE_WORK_MAX
+@@ -315,10 +330,10 @@ static int z_erofs_vle_work_add_page(
+ 	return ret ? 0 : -EAGAIN;
+ }
+ 
+-static inline bool try_to_claim_workgroup(
+-	struct z_erofs_vle_workgroup *grp,
+-	z_erofs_vle_owned_workgrp_t *owned_head,
+-	bool *hosted)
++static enum z_erofs_vle_work_role
++try_to_claim_workgroup(struct z_erofs_vle_workgroup *grp,
++		       z_erofs_vle_owned_workgrp_t *owned_head,
++		       bool *hosted)
+ {
+ 	DBG_BUGON(*hosted == true);
+ 
+@@ -332,6 +347,9 @@ retry:
+ 
+ 		*owned_head = &grp->next;
+ 		*hosted = true;
++		/* lucky, I am the followee :) */
++		return Z_EROFS_VLE_WORK_PRIMARY_FOLLOWED;
++
+ 	} else if (grp->next == Z_EROFS_VLE_WORKGRP_TAIL) {
+ 		/*
+ 		 * type 2, link to the end of a existing open chain,
+@@ -341,12 +359,11 @@ retry:
+ 		if (cmpxchg(&grp->next, Z_EROFS_VLE_WORKGRP_TAIL,
+ 			    *owned_head) != Z_EROFS_VLE_WORKGRP_TAIL)
+ 			goto retry;
+-
+ 		*owned_head = Z_EROFS_VLE_WORKGRP_TAIL;
+-	} else
+-		return false;	/* :( better luck next time */
++		return Z_EROFS_VLE_WORK_PRIMARY_HOOKED;
++	}
+ 
+-	return true;	/* lucky, I am the followee :) */
++	return Z_EROFS_VLE_WORK_PRIMARY; /* :( better luck next time */
+ }
+ 
+ struct z_erofs_vle_work_finder {
+@@ -424,12 +441,9 @@ z_erofs_vle_work_lookup(const struct z_erofs_vle_work_finder *f)
+ 	*f->hosted = false;
+ 	if (!primary)
+ 		*f->role = Z_EROFS_VLE_WORK_SECONDARY;
+-	/* claim the workgroup if possible */
+-	else if (try_to_claim_workgroup(grp, f->owned_head, f->hosted))
+-		*f->role = Z_EROFS_VLE_WORK_PRIMARY_FOLLOWED;
+-	else
+-		*f->role = Z_EROFS_VLE_WORK_PRIMARY;
+-
++	else	/* claim the workgroup if possible */
++		*f->role = try_to_claim_workgroup(grp, f->owned_head,
++						  f->hosted);
+ 	return work;
+ }
+ 
+@@ -493,6 +507,9 @@ z_erofs_vle_work_register(const struct z_erofs_vle_work_finder *f,
+ 	return work;
+ }
+ 
++#define builder_is_hooked(builder) \
++	((builder)->role >= Z_EROFS_VLE_WORK_PRIMARY_HOOKED)
++
+ #define builder_is_followed(builder) \
+ 	((builder)->role >= Z_EROFS_VLE_WORK_PRIMARY_FOLLOWED)
+ 
+@@ -686,7 +703,7 @@ static int z_erofs_do_read_page(struct z_erofs_vle_frontend *fe,
+ 	struct z_erofs_vle_work_builder *const builder = &fe->builder;
+ 	const loff_t offset = page_offset(page);
+ 
+-	bool tight = builder_is_followed(builder);
++	bool tight = builder_is_hooked(builder);
+ 	struct z_erofs_vle_work *work = builder->work;
+ 
+ 	enum z_erofs_cache_alloctype cache_strategy;
+@@ -704,8 +721,12 @@ repeat:
+ 
+ 	/* lucky, within the range of the current map_blocks */
+ 	if (offset + cur >= map->m_la &&
+-		offset + cur < map->m_la + map->m_llen)
++		offset + cur < map->m_la + map->m_llen) {
++		/* didn't get a valid unzip work previously (very rare) */
++		if (!builder->work)
++			goto restart_now;
+ 		goto hitted;
++	}
+ 
+ 	/* go ahead the next map_blocks */
+ 	debugln("%s: [out-of-range] pos %llu", __func__, offset + cur);
+@@ -719,6 +740,7 @@ repeat:
+ 	if (unlikely(err))
+ 		goto err_out;
+ 
++restart_now:
+ 	if (unlikely(!(map->m_flags & EROFS_MAP_MAPPED)))
+ 		goto hitted;
+ 
+@@ -740,7 +762,7 @@ repeat:
+ 				 map->m_plen / PAGE_SIZE,
+ 				 cache_strategy, page_pool, GFP_KERNEL);
+ 
+-	tight &= builder_is_followed(builder);
++	tight &= builder_is_hooked(builder);
+ 	work = builder->work;
+ hitted:
+ 	cur = end - min_t(unsigned int, offset + end - map->m_la, end);
+@@ -755,6 +777,9 @@ hitted:
+ 			(tight ? Z_EROFS_PAGE_TYPE_EXCLUSIVE :
+ 				Z_EROFS_VLE_PAGE_TYPE_TAIL_SHARED));
+ 
++	if (cur)
++		tight &= builder_is_followed(builder);
++
+ retry:
+ 	err = z_erofs_vle_work_add_page(builder, page, page_type);
+ 	/* should allocate an additional staging page for pagevec */
+@@ -952,6 +977,7 @@ repeat:
+ 	overlapped = false;
+ 	compressed_pages = grp->compressed_pages;
+ 
++	err = 0;
+ 	for (i = 0; i < clusterpages; ++i) {
+ 		unsigned int pagenr;
+ 
+@@ -961,26 +987,39 @@ repeat:
+ 		DBG_BUGON(!page);
+ 		DBG_BUGON(!page->mapping);
+ 
+-		if (z_erofs_is_stagingpage(page))
+-			continue;
++		if (!z_erofs_is_stagingpage(page)) {
+ #ifdef EROFS_FS_HAS_MANAGED_CACHE
+-		if (page->mapping == MNGD_MAPPING(sbi)) {
+-			DBG_BUGON(!PageUptodate(page));
+-			continue;
+-		}
++			if (page->mapping == MNGD_MAPPING(sbi)) {
++				if (unlikely(!PageUptodate(page)))
++					err = -EIO;
++				continue;
++			}
+ #endif
+ 
+-		/* only non-head page could be reused as a compressed page */
+-		pagenr = z_erofs_onlinepage_index(page);
++			/*
++			 * only if non-head page can be selected
++			 * for inplace decompression
++			 */
++			pagenr = z_erofs_onlinepage_index(page);
+ 
+-		DBG_BUGON(pagenr >= nr_pages);
+-		DBG_BUGON(pages[pagenr]);
+-		++sparsemem_pages;
+-		pages[pagenr] = page;
++			DBG_BUGON(pagenr >= nr_pages);
++			DBG_BUGON(pages[pagenr]);
++			++sparsemem_pages;
++			pages[pagenr] = page;
+ 
+-		overlapped = true;
++			overlapped = true;
++		}
++
++		/* PG_error needs checking for inplaced and staging pages */
++		if (unlikely(PageError(page))) {
++			DBG_BUGON(PageUptodate(page));
++			err = -EIO;
++		}
+ 	}
+ 
++	if (unlikely(err))
++		goto out;
++
+ 	llen = (nr_pages << PAGE_SHIFT) - work->pageofs;
+ 
+ 	if (z_erofs_vle_workgrp_fmt(grp) == Z_EROFS_VLE_WORKGRP_FMT_PLAIN) {
+@@ -992,11 +1031,10 @@ repeat:
+ 	if (llen > grp->llen)
+ 		llen = grp->llen;
+ 
+-	err = z_erofs_vle_unzip_fast_percpu(compressed_pages,
+-		clusterpages, pages, llen, work->pageofs,
+-		z_erofs_onlinepage_endio);
++	err = z_erofs_vle_unzip_fast_percpu(compressed_pages, clusterpages,
++					    pages, llen, work->pageofs);
+ 	if (err != -ENOTSUPP)
+-		goto out_percpu;
++		goto out;
+ 
+ 	if (sparsemem_pages >= nr_pages)
+ 		goto skip_allocpage;
+@@ -1010,6 +1048,10 @@ repeat:
+ 
+ skip_allocpage:
+ 	vout = erofs_vmap(pages, nr_pages);
++	if (!vout) {
++		err = -ENOMEM;
++		goto out;
++	}
+ 
+ 	err = z_erofs_vle_unzip_vmap(compressed_pages,
+ 		clusterpages, vout, llen, work->pageofs, overlapped);
+@@ -1017,8 +1059,25 @@ skip_allocpage:
+ 	erofs_vunmap(vout, nr_pages);
+ 
+ out:
++	/* must handle all compressed pages before ending pages */
++	for (i = 0; i < clusterpages; ++i) {
++		page = compressed_pages[i];
++
++#ifdef EROFS_FS_HAS_MANAGED_CACHE
++		if (page->mapping == MNGD_MAPPING(sbi))
++			continue;
++#endif
++		/* recycle all individual staging pages */
++		(void)z_erofs_gather_if_stagingpage(page_pool, page);
++
++		WRITE_ONCE(compressed_pages[i], NULL);
++	}
++
+ 	for (i = 0; i < nr_pages; ++i) {
+ 		page = pages[i];
++		if (!page)
++			continue;
++
+ 		DBG_BUGON(!page->mapping);
+ 
+ 		/* recycle all individual staging pages */
+@@ -1031,20 +1090,6 @@ out:
+ 		z_erofs_onlinepage_endio(page);
+ 	}
+ 
+-out_percpu:
+-	for (i = 0; i < clusterpages; ++i) {
+-		page = compressed_pages[i];
+-
+-#ifdef EROFS_FS_HAS_MANAGED_CACHE
+-		if (page->mapping == MNGD_MAPPING(sbi))
+-			continue;
+-#endif
+-		/* recycle all individual staging pages */
+-		(void)z_erofs_gather_if_stagingpage(page_pool, page);
+-
+-		WRITE_ONCE(compressed_pages[i], NULL);
+-	}
+-
+ 	if (pages == z_pagemap_global)
+ 		mutex_unlock(&z_pagemap_global_lock);
+ 	else if (unlikely(pages != pages_onstack))
+@@ -1172,6 +1217,7 @@ repeat:
+ 	if (page->mapping == mc) {
+ 		WRITE_ONCE(grp->compressed_pages[nr], page);
+ 
++		ClearPageError(page);
+ 		if (!PagePrivate(page)) {
+ 			/*
+ 			 * impossible to be !PagePrivate(page) for
+diff --git a/drivers/staging/erofs/unzip_vle.h b/drivers/staging/erofs/unzip_vle.h
+index 5a4e1b62c0d1..c0dfd6906aa8 100644
+--- a/drivers/staging/erofs/unzip_vle.h
++++ b/drivers/staging/erofs/unzip_vle.h
+@@ -218,8 +218,7 @@ extern int z_erofs_vle_plain_copy(struct page **compressed_pages,
+ 
+ extern int z_erofs_vle_unzip_fast_percpu(struct page **compressed_pages,
+ 	unsigned clusterpages, struct page **pages,
+-	unsigned outlen, unsigned short pageofs,
+-	void (*endio)(struct page *));
++	unsigned int outlen, unsigned short pageofs);
+ 
+ extern int z_erofs_vle_unzip_vmap(struct page **compressed_pages,
+ 	unsigned clusterpages, void *vaddr, unsigned llen,
+diff --git a/drivers/staging/erofs/unzip_vle_lz4.c b/drivers/staging/erofs/unzip_vle_lz4.c
+index 52797bd89da1..3e8b0ff2efeb 100644
+--- a/drivers/staging/erofs/unzip_vle_lz4.c
++++ b/drivers/staging/erofs/unzip_vle_lz4.c
+@@ -125,8 +125,7 @@ int z_erofs_vle_unzip_fast_percpu(struct page **compressed_pages,
+ 				  unsigned int clusterpages,
+ 				  struct page **pages,
+ 				  unsigned int outlen,
+-				  unsigned short pageofs,
+-				  void (*endio)(struct page *))
++				  unsigned short pageofs)
+ {
+ 	void *vin, *vout;
+ 	unsigned int nr_pages, i, j;
+@@ -137,10 +136,13 @@ int z_erofs_vle_unzip_fast_percpu(struct page **compressed_pages,
+ 
+ 	nr_pages = DIV_ROUND_UP(outlen + pageofs, PAGE_SIZE);
+ 
+-	if (clusterpages == 1)
++	if (clusterpages == 1) {
+ 		vin = kmap_atomic(compressed_pages[0]);
+-	else
++	} else {
+ 		vin = erofs_vmap(compressed_pages, clusterpages);
++		if (!vin)
++			return -ENOMEM;
++	}
+ 
+ 	preempt_disable();
+ 	vout = erofs_pcpubuf[smp_processor_id()].data;
+@@ -148,19 +150,16 @@ int z_erofs_vle_unzip_fast_percpu(struct page **compressed_pages,
+ 	ret = z_erofs_unzip_lz4(vin, vout + pageofs,
+ 				clusterpages * PAGE_SIZE, outlen);
+ 
+-	if (ret >= 0) {
+-		outlen = ret;
+-		ret = 0;
+-	}
++	if (ret < 0)
++		goto out;
++	ret = 0;
+ 
+ 	for (i = 0; i < nr_pages; ++i) {
+ 		j = min((unsigned int)PAGE_SIZE - pageofs, outlen);
+ 
+ 		if (pages[i]) {
+-			if (ret < 0) {
+-				SetPageError(pages[i]);
+-			} else if (clusterpages == 1 &&
+-				   pages[i] == compressed_pages[0]) {
++			if (clusterpages == 1 &&
++			    pages[i] == compressed_pages[0]) {
+ 				memcpy(vin + pageofs, vout + pageofs, j);
+ 			} else {
+ 				void *dst = kmap_atomic(pages[i]);
+@@ -168,12 +167,13 @@ int z_erofs_vle_unzip_fast_percpu(struct page **compressed_pages,
+ 				memcpy(dst + pageofs, vout + pageofs, j);
+ 				kunmap_atomic(dst);
+ 			}
+-			endio(pages[i]);
+ 		}
+ 		vout += PAGE_SIZE;
+ 		outlen -= j;
+ 		pageofs = 0;
+ 	}
++
++out:
+ 	preempt_enable();
+ 
+ 	if (clusterpages == 1)
+diff --git a/drivers/staging/erofs/xattr.c b/drivers/staging/erofs/xattr.c
+index 80dca6a4adbe..6cb05ae31233 100644
+--- a/drivers/staging/erofs/xattr.c
++++ b/drivers/staging/erofs/xattr.c
+@@ -44,19 +44,48 @@ static inline void xattr_iter_end_final(struct xattr_iter *it)
+ 
+ static int init_inode_xattrs(struct inode *inode)
+ {
++	struct erofs_vnode *const vi = EROFS_V(inode);
+ 	struct xattr_iter it;
+ 	unsigned int i;
+ 	struct erofs_xattr_ibody_header *ih;
+ 	struct super_block *sb;
+ 	struct erofs_sb_info *sbi;
+-	struct erofs_vnode *vi;
+ 	bool atomic_map;
++	int ret = 0;
+ 
+-	if (likely(inode_has_inited_xattr(inode)))
++	/* the most case is that xattrs of this inode are initialized. */
++	if (test_bit(EROFS_V_EA_INITED_BIT, &vi->flags))
+ 		return 0;
+ 
+-	vi = EROFS_V(inode);
+-	BUG_ON(!vi->xattr_isize);
++	if (wait_on_bit_lock(&vi->flags, EROFS_V_BL_XATTR_BIT, TASK_KILLABLE))
++		return -ERESTARTSYS;
++
++	/* someone has initialized xattrs for us? */
++	if (test_bit(EROFS_V_EA_INITED_BIT, &vi->flags))
++		goto out_unlock;
++
++	/*
++	 * bypass all xattr operations if ->xattr_isize is not greater than
++	 * sizeof(struct erofs_xattr_ibody_header), in detail:
++	 * 1) it is not enough to contain erofs_xattr_ibody_header then
++	 *    ->xattr_isize should be 0 (it means no xattr);
++	 * 2) it is just to contain erofs_xattr_ibody_header, which is on-disk
++	 *    undefined right now (maybe use later with some new sb feature).
++	 */
++	if (vi->xattr_isize == sizeof(struct erofs_xattr_ibody_header)) {
++		errln("xattr_isize %d of nid %llu is not supported yet",
++		      vi->xattr_isize, vi->nid);
++		ret = -ENOTSUPP;
++		goto out_unlock;
++	} else if (vi->xattr_isize < sizeof(struct erofs_xattr_ibody_header)) {
++		if (unlikely(vi->xattr_isize)) {
++			DBG_BUGON(1);
++			ret = -EIO;
++			goto out_unlock;	/* xattr ondisk layout error */
++		}
++		ret = -ENOATTR;
++		goto out_unlock;
++	}
+ 
+ 	sb = inode->i_sb;
+ 	sbi = EROFS_SB(sb);
+@@ -64,8 +93,10 @@ static int init_inode_xattrs(struct inode *inode)
+ 	it.ofs = erofs_blkoff(iloc(sbi, vi->nid) + vi->inode_isize);
+ 
+ 	it.page = erofs_get_inline_page(inode, it.blkaddr);
+-	if (IS_ERR(it.page))
+-		return PTR_ERR(it.page);
++	if (IS_ERR(it.page)) {
++		ret = PTR_ERR(it.page);
++		goto out_unlock;
++	}
+ 
+ 	/* read in shared xattr array (non-atomic, see kmalloc below) */
+ 	it.kaddr = kmap(it.page);
+@@ -78,7 +109,8 @@ static int init_inode_xattrs(struct inode *inode)
+ 						sizeof(uint), GFP_KERNEL);
+ 	if (vi->xattr_shared_xattrs == NULL) {
+ 		xattr_iter_end(&it, atomic_map);
+-		return -ENOMEM;
++		ret = -ENOMEM;
++		goto out_unlock;
+ 	}
+ 
+ 	/* let's skip ibody header */
+@@ -92,8 +124,12 @@ static int init_inode_xattrs(struct inode *inode)
+ 
+ 			it.page = erofs_get_meta_page(sb,
+ 				++it.blkaddr, S_ISDIR(inode->i_mode));
+-			if (IS_ERR(it.page))
+-				return PTR_ERR(it.page);
++			if (IS_ERR(it.page)) {
++				kfree(vi->xattr_shared_xattrs);
++				vi->xattr_shared_xattrs = NULL;
++				ret = PTR_ERR(it.page);
++				goto out_unlock;
++			}
+ 
+ 			it.kaddr = kmap_atomic(it.page);
+ 			atomic_map = true;
+@@ -105,8 +141,11 @@ static int init_inode_xattrs(struct inode *inode)
+ 	}
+ 	xattr_iter_end(&it, atomic_map);
+ 
+-	inode_set_inited_xattr(inode);
+-	return 0;
++	set_bit(EROFS_V_EA_INITED_BIT, &vi->flags);
++
++out_unlock:
++	clear_and_wake_up_bit(EROFS_V_BL_XATTR_BIT, &vi->flags);
++	return ret;
+ }
+ 
+ /*
+@@ -422,7 +461,6 @@ static int erofs_xattr_generic_get(const struct xattr_handler *handler,
+ 		struct dentry *unused, struct inode *inode,
+ 		const char *name, void *buffer, size_t size)
+ {
+-	struct erofs_vnode *const vi = EROFS_V(inode);
+ 	struct erofs_sb_info *const sbi = EROFS_I_SB(inode);
+ 
+ 	switch (handler->flags) {
+@@ -440,9 +478,6 @@ static int erofs_xattr_generic_get(const struct xattr_handler *handler,
+ 		return -EINVAL;
+ 	}
+ 
+-	if (!vi->xattr_isize)
+-		return -ENOATTR;
+-
+ 	return erofs_getxattr(inode, handler->flags, name, buffer, size);
+ }
+ 
+diff --git a/drivers/staging/iio/addac/adt7316.c b/drivers/staging/iio/addac/adt7316.c
+index dc93e85808e0..7839d869d25d 100644
+--- a/drivers/staging/iio/addac/adt7316.c
++++ b/drivers/staging/iio/addac/adt7316.c
+@@ -651,17 +651,10 @@ static ssize_t adt7316_store_da_high_resolution(struct device *dev,
+ 	u8 config3;
+ 	int ret;
+ 
+-	chip->dac_bits = 8;
+-
+-	if (buf[0] == '1') {
++	if (buf[0] == '1')
+ 		config3 = chip->config3 | ADT7316_DA_HIGH_RESOLUTION;
+-		if (chip->id == ID_ADT7316 || chip->id == ID_ADT7516)
+-			chip->dac_bits = 12;
+-		else if (chip->id == ID_ADT7317 || chip->id == ID_ADT7517)
+-			chip->dac_bits = 10;
+-	} else {
++	else
+ 		config3 = chip->config3 & (~ADT7316_DA_HIGH_RESOLUTION);
+-	}
+ 
+ 	ret = chip->bus.write(chip->bus.client, ADT7316_CONFIG3, config3);
+ 	if (ret)
+@@ -2123,6 +2116,13 @@ int adt7316_probe(struct device *dev, struct adt7316_bus *bus,
+ 	else
+ 		return -ENODEV;
+ 
++	if (chip->id == ID_ADT7316 || chip->id == ID_ADT7516)
++		chip->dac_bits = 12;
++	else if (chip->id == ID_ADT7317 || chip->id == ID_ADT7517)
++		chip->dac_bits = 10;
++	else
++		chip->dac_bits = 8;
++
+ 	chip->ldac_pin = devm_gpiod_get_optional(dev, "adi,ldac", GPIOD_OUT_LOW);
+ 	if (IS_ERR(chip->ldac_pin)) {
+ 		ret = PTR_ERR(chip->ldac_pin);
+diff --git a/drivers/staging/media/imx/imx-ic-prpencvf.c b/drivers/staging/media/imx/imx-ic-prpencvf.c
+index 28f41caba05d..fb442499f806 100644
+--- a/drivers/staging/media/imx/imx-ic-prpencvf.c
++++ b/drivers/staging/media/imx/imx-ic-prpencvf.c
+@@ -680,12 +680,23 @@ static int prp_start(struct prp_priv *priv)
+ 		goto out_free_nfb4eof_irq;
+ 	}
+ 
++	/* start upstream */
++	ret = v4l2_subdev_call(priv->src_sd, video, s_stream, 1);
++	ret = (ret && ret != -ENOIOCTLCMD) ? ret : 0;
++	if (ret) {
++		v4l2_err(&ic_priv->sd,
++			 "upstream stream on failed: %d\n", ret);
++		goto out_free_eof_irq;
++	}
++
+ 	/* start the EOF timeout timer */
+ 	mod_timer(&priv->eof_timeout_timer,
+ 		  jiffies + msecs_to_jiffies(IMX_MEDIA_EOF_TIMEOUT));
+ 
+ 	return 0;
+ 
++out_free_eof_irq:
++	devm_free_irq(ic_priv->dev, priv->eof_irq, priv);
+ out_free_nfb4eof_irq:
+ 	devm_free_irq(ic_priv->dev, priv->nfb4eof_irq, priv);
+ out_unsetup:
+@@ -717,6 +728,12 @@ static void prp_stop(struct prp_priv *priv)
+ 	if (ret == 0)
+ 		v4l2_warn(&ic_priv->sd, "wait last EOF timeout\n");
+ 
++	/* stop upstream */
++	ret = v4l2_subdev_call(priv->src_sd, video, s_stream, 0);
++	if (ret && ret != -ENOIOCTLCMD)
++		v4l2_warn(&ic_priv->sd,
++			  "upstream stream off failed: %d\n", ret);
++
+ 	devm_free_irq(ic_priv->dev, priv->eof_irq, priv);
+ 	devm_free_irq(ic_priv->dev, priv->nfb4eof_irq, priv);
+ 
+@@ -1148,15 +1165,6 @@ static int prp_s_stream(struct v4l2_subdev *sd, int enable)
+ 	if (ret)
+ 		goto out;
+ 
+-	/* start/stop upstream */
+-	ret = v4l2_subdev_call(priv->src_sd, video, s_stream, enable);
+-	ret = (ret && ret != -ENOIOCTLCMD) ? ret : 0;
+-	if (ret) {
+-		if (enable)
+-			prp_stop(priv);
+-		goto out;
+-	}
+-
+ update_count:
+ 	priv->stream_count += enable ? 1 : -1;
+ 	if (priv->stream_count < 0)
+diff --git a/drivers/staging/media/imx/imx-media-csi.c b/drivers/staging/media/imx/imx-media-csi.c
+index 4223f8d418ae..be1e9e52b2a0 100644
+--- a/drivers/staging/media/imx/imx-media-csi.c
++++ b/drivers/staging/media/imx/imx-media-csi.c
+@@ -629,7 +629,7 @@ out_put_ipu:
+ 	return ret;
+ }
+ 
+-static void csi_idmac_stop(struct csi_priv *priv)
++static void csi_idmac_wait_last_eof(struct csi_priv *priv)
+ {
+ 	unsigned long flags;
+ 	int ret;
+@@ -646,7 +646,10 @@ static void csi_idmac_stop(struct csi_priv *priv)
+ 		&priv->last_eof_comp, msecs_to_jiffies(IMX_MEDIA_EOF_TIMEOUT));
+ 	if (ret == 0)
+ 		v4l2_warn(&priv->sd, "wait last EOF timeout\n");
++}
+ 
++static void csi_idmac_stop(struct csi_priv *priv)
++{
+ 	devm_free_irq(priv->dev, priv->eof_irq, priv);
+ 	devm_free_irq(priv->dev, priv->nfb4eof_irq, priv);
+ 
+@@ -722,10 +725,16 @@ static int csi_start(struct csi_priv *priv)
+ 
+ 	output_fi = &priv->frame_interval[priv->active_output_pad];
+ 
++	/* start upstream */
++	ret = v4l2_subdev_call(priv->src_sd, video, s_stream, 1);
++	ret = (ret && ret != -ENOIOCTLCMD) ? ret : 0;
++	if (ret)
++		return ret;
++
+ 	if (priv->dest == IPU_CSI_DEST_IDMAC) {
+ 		ret = csi_idmac_start(priv);
+ 		if (ret)
+-			return ret;
++			goto stop_upstream;
+ 	}
+ 
+ 	ret = csi_setup(priv);
+@@ -753,11 +762,26 @@ fim_off:
+ idmac_stop:
+ 	if (priv->dest == IPU_CSI_DEST_IDMAC)
+ 		csi_idmac_stop(priv);
++stop_upstream:
++	v4l2_subdev_call(priv->src_sd, video, s_stream, 0);
+ 	return ret;
+ }
+ 
+ static void csi_stop(struct csi_priv *priv)
+ {
++	if (priv->dest == IPU_CSI_DEST_IDMAC)
++		csi_idmac_wait_last_eof(priv);
++
++	/*
++	 * Disable the CSI asap, after syncing with the last EOF.
++	 * Doing so after the IDMA channel is disabled has shown to
++	 * create hard system-wide hangs.
++	 */
++	ipu_csi_disable(priv->csi);
++
++	/* stop upstream */
++	v4l2_subdev_call(priv->src_sd, video, s_stream, 0);
++
+ 	if (priv->dest == IPU_CSI_DEST_IDMAC) {
+ 		csi_idmac_stop(priv);
+ 
+@@ -765,8 +789,6 @@ static void csi_stop(struct csi_priv *priv)
+ 		if (priv->fim)
+ 			imx_media_fim_set_stream(priv->fim, NULL, false);
+ 	}
+-
+-	ipu_csi_disable(priv->csi);
+ }
+ 
+ static const struct csi_skip_desc csi_skip[12] = {
+@@ -927,23 +949,13 @@ static int csi_s_stream(struct v4l2_subdev *sd, int enable)
+ 		goto update_count;
+ 
+ 	if (enable) {
+-		/* upstream must be started first, before starting CSI */
+-		ret = v4l2_subdev_call(priv->src_sd, video, s_stream, 1);
+-		ret = (ret && ret != -ENOIOCTLCMD) ? ret : 0;
+-		if (ret)
+-			goto out;
+-
+ 		dev_dbg(priv->dev, "stream ON\n");
+ 		ret = csi_start(priv);
+-		if (ret) {
+-			v4l2_subdev_call(priv->src_sd, video, s_stream, 0);
++		if (ret)
+ 			goto out;
+-		}
+ 	} else {
+ 		dev_dbg(priv->dev, "stream OFF\n");
+-		/* CSI must be stopped first, then stop upstream */
+ 		csi_stop(priv);
+-		v4l2_subdev_call(priv->src_sd, video, s_stream, 0);
+ 	}
+ 
+ update_count:
+@@ -1787,7 +1799,7 @@ static int imx_csi_parse_endpoint(struct device *dev,
+ 				  struct v4l2_fwnode_endpoint *vep,
+ 				  struct v4l2_async_subdev *asd)
+ {
+-	return fwnode_device_is_available(asd->match.fwnode) ? 0 : -EINVAL;
++	return fwnode_device_is_available(asd->match.fwnode) ? 0 : -ENOTCONN;
+ }
+ 
+ static int imx_csi_async_register(struct csi_priv *priv)
+diff --git a/drivers/staging/media/rockchip/vpu/rk3288_vpu_hw_jpeg_enc.c b/drivers/staging/media/rockchip/vpu/rk3288_vpu_hw_jpeg_enc.c
+index 5282236d1bb1..06daea66fb49 100644
+--- a/drivers/staging/media/rockchip/vpu/rk3288_vpu_hw_jpeg_enc.c
++++ b/drivers/staging/media/rockchip/vpu/rk3288_vpu_hw_jpeg_enc.c
+@@ -80,7 +80,7 @@ rk3288_vpu_jpeg_enc_set_qtable(struct rockchip_vpu_dev *vpu,
+ void rk3288_vpu_jpeg_enc_run(struct rockchip_vpu_ctx *ctx)
+ {
+ 	struct rockchip_vpu_dev *vpu = ctx->dev;
+-	struct vb2_buffer *src_buf, *dst_buf;
++	struct vb2_v4l2_buffer *src_buf, *dst_buf;
+ 	struct rockchip_vpu_jpeg_ctx jpeg_ctx;
+ 	u32 reg;
+ 
+@@ -88,7 +88,7 @@ void rk3288_vpu_jpeg_enc_run(struct rockchip_vpu_ctx *ctx)
+ 	dst_buf = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
+ 
+ 	memset(&jpeg_ctx, 0, sizeof(jpeg_ctx));
+-	jpeg_ctx.buffer = vb2_plane_vaddr(dst_buf, 0);
++	jpeg_ctx.buffer = vb2_plane_vaddr(&dst_buf->vb2_buf, 0);
+ 	jpeg_ctx.width = ctx->dst_fmt.width;
+ 	jpeg_ctx.height = ctx->dst_fmt.height;
+ 	jpeg_ctx.quality = ctx->jpeg_quality;
+@@ -99,7 +99,7 @@ void rk3288_vpu_jpeg_enc_run(struct rockchip_vpu_ctx *ctx)
+ 			   VEPU_REG_ENC_CTRL);
+ 
+ 	rk3288_vpu_set_src_img_ctrl(vpu, ctx);
+-	rk3288_vpu_jpeg_enc_set_buffers(vpu, ctx, src_buf);
++	rk3288_vpu_jpeg_enc_set_buffers(vpu, ctx, &src_buf->vb2_buf);
+ 	rk3288_vpu_jpeg_enc_set_qtable(vpu,
+ 				       rockchip_vpu_jpeg_get_qtable(&jpeg_ctx, 0),
+ 				       rockchip_vpu_jpeg_get_qtable(&jpeg_ctx, 1));
+diff --git a/drivers/staging/media/rockchip/vpu/rk3399_vpu_hw_jpeg_enc.c b/drivers/staging/media/rockchip/vpu/rk3399_vpu_hw_jpeg_enc.c
+index dbc86d95fe3b..3d438797692e 100644
+--- a/drivers/staging/media/rockchip/vpu/rk3399_vpu_hw_jpeg_enc.c
++++ b/drivers/staging/media/rockchip/vpu/rk3399_vpu_hw_jpeg_enc.c
+@@ -111,7 +111,7 @@ rk3399_vpu_jpeg_enc_set_qtable(struct rockchip_vpu_dev *vpu,
+ void rk3399_vpu_jpeg_enc_run(struct rockchip_vpu_ctx *ctx)
+ {
+ 	struct rockchip_vpu_dev *vpu = ctx->dev;
+-	struct vb2_buffer *src_buf, *dst_buf;
++	struct vb2_v4l2_buffer *src_buf, *dst_buf;
+ 	struct rockchip_vpu_jpeg_ctx jpeg_ctx;
+ 	u32 reg;
+ 
+@@ -119,7 +119,7 @@ void rk3399_vpu_jpeg_enc_run(struct rockchip_vpu_ctx *ctx)
+ 	dst_buf = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
+ 
+ 	memset(&jpeg_ctx, 0, sizeof(jpeg_ctx));
+-	jpeg_ctx.buffer = vb2_plane_vaddr(dst_buf, 0);
++	jpeg_ctx.buffer = vb2_plane_vaddr(&dst_buf->vb2_buf, 0);
+ 	jpeg_ctx.width = ctx->dst_fmt.width;
+ 	jpeg_ctx.height = ctx->dst_fmt.height;
+ 	jpeg_ctx.quality = ctx->jpeg_quality;
+@@ -130,7 +130,7 @@ void rk3399_vpu_jpeg_enc_run(struct rockchip_vpu_ctx *ctx)
+ 			   VEPU_REG_ENCODE_START);
+ 
+ 	rk3399_vpu_set_src_img_ctrl(vpu, ctx);
+-	rk3399_vpu_jpeg_enc_set_buffers(vpu, ctx, src_buf);
++	rk3399_vpu_jpeg_enc_set_buffers(vpu, ctx, &src_buf->vb2_buf);
+ 	rk3399_vpu_jpeg_enc_set_qtable(vpu,
+ 				       rockchip_vpu_jpeg_get_qtable(&jpeg_ctx, 0),
+ 				       rockchip_vpu_jpeg_get_qtable(&jpeg_ctx, 1));
+diff --git a/drivers/staging/mt7621-spi/spi-mt7621.c b/drivers/staging/mt7621-spi/spi-mt7621.c
+index 513b6e79b985..e1f50efd0922 100644
+--- a/drivers/staging/mt7621-spi/spi-mt7621.c
++++ b/drivers/staging/mt7621-spi/spi-mt7621.c
+@@ -330,6 +330,7 @@ static int mt7621_spi_probe(struct platform_device *pdev)
+ 	int status = 0;
+ 	struct clk *clk;
+ 	struct mt7621_spi_ops *ops;
++	int ret;
+ 
+ 	match = of_match_device(mt7621_spi_match, &pdev->dev);
+ 	if (!match)
+@@ -377,7 +378,11 @@ static int mt7621_spi_probe(struct platform_device *pdev)
+ 	rs->pending_write = 0;
+ 	dev_info(&pdev->dev, "sys_freq: %u\n", rs->sys_freq);
+ 
+-	device_reset(&pdev->dev);
++	ret = device_reset(&pdev->dev);
++	if (ret) {
++		dev_err(&pdev->dev, "SPI reset failed!\n");
++		return ret;
++	}
+ 
+ 	mt7621_spi_reset(rs);
+ 
+diff --git a/drivers/staging/olpc_dcon/olpc_dcon_xo_1.c b/drivers/staging/olpc_dcon/olpc_dcon_xo_1.c
+index 80b8d4153414..a54286498a47 100644
+--- a/drivers/staging/olpc_dcon/olpc_dcon_xo_1.c
++++ b/drivers/staging/olpc_dcon/olpc_dcon_xo_1.c
+@@ -45,7 +45,7 @@ static int dcon_init_xo_1(struct dcon_priv *dcon)
+ {
+ 	unsigned char lob;
+ 	int ret, i;
+-	struct dcon_gpio *pin = &gpios_asis[0];
++	const struct dcon_gpio *pin = &gpios_asis[0];
+ 
+ 	for (i = 0; i < ARRAY_SIZE(gpios_asis); i++) {
+ 		gpios[i] = devm_gpiod_get(&dcon->client->dev, pin[i].name,
+diff --git a/drivers/staging/speakup/speakup_soft.c b/drivers/staging/speakup/speakup_soft.c
+index 947c79532e10..d5383974d40e 100644
+--- a/drivers/staging/speakup/speakup_soft.c
++++ b/drivers/staging/speakup/speakup_soft.c
+@@ -208,12 +208,15 @@ static ssize_t softsynthx_read(struct file *fp, char __user *buf, size_t count,
+ 		return -EINVAL;
+ 
+ 	spin_lock_irqsave(&speakup_info.spinlock, flags);
++	synth_soft.alive = 1;
+ 	while (1) {
+ 		prepare_to_wait(&speakup_event, &wait, TASK_INTERRUPTIBLE);
+-		if (!unicode)
+-			synth_buffer_skip_nonlatin1();
+-		if (!synth_buffer_empty() || speakup_info.flushing)
+-			break;
++		if (synth_current() == &synth_soft) {
++			if (!unicode)
++				synth_buffer_skip_nonlatin1();
++			if (!synth_buffer_empty() || speakup_info.flushing)
++				break;
++		}
+ 		spin_unlock_irqrestore(&speakup_info.spinlock, flags);
+ 		if (fp->f_flags & O_NONBLOCK) {
+ 			finish_wait(&speakup_event, &wait);
+@@ -233,6 +236,8 @@ static ssize_t softsynthx_read(struct file *fp, char __user *buf, size_t count,
+ 
+ 	/* Keep 3 bytes available for a 16bit UTF-8-encoded character */
+ 	while (chars_sent <= count - bytes_per_ch) {
++		if (synth_current() != &synth_soft)
++			break;
+ 		if (speakup_info.flushing) {
+ 			speakup_info.flushing = 0;
+ 			ch = '\x18';
+@@ -329,7 +334,8 @@ static __poll_t softsynth_poll(struct file *fp, struct poll_table_struct *wait)
+ 	poll_wait(fp, &speakup_event, wait);
+ 
+ 	spin_lock_irqsave(&speakup_info.spinlock, flags);
+-	if (!synth_buffer_empty() || speakup_info.flushing)
++	if (synth_current() == &synth_soft &&
++	    (!synth_buffer_empty() || speakup_info.flushing))
+ 		ret = EPOLLIN | EPOLLRDNORM;
+ 	spin_unlock_irqrestore(&speakup_info.spinlock, flags);
+ 	return ret;
+diff --git a/drivers/staging/speakup/spk_priv.h b/drivers/staging/speakup/spk_priv.h
+index c8e688878fc7..ac6a74883af4 100644
+--- a/drivers/staging/speakup/spk_priv.h
++++ b/drivers/staging/speakup/spk_priv.h
+@@ -74,6 +74,7 @@ int synth_request_region(unsigned long start, unsigned long n);
+ int synth_release_region(unsigned long start, unsigned long n);
+ int synth_add(struct spk_synth *in_synth);
+ void synth_remove(struct spk_synth *in_synth);
++struct spk_synth *synth_current(void);
+ 
+ extern struct speakup_info_t speakup_info;
+ 
+diff --git a/drivers/staging/speakup/synth.c b/drivers/staging/speakup/synth.c
+index 25f259ee4ffc..3568bfb89912 100644
+--- a/drivers/staging/speakup/synth.c
++++ b/drivers/staging/speakup/synth.c
+@@ -481,4 +481,10 @@ void synth_remove(struct spk_synth *in_synth)
+ }
+ EXPORT_SYMBOL_GPL(synth_remove);
+ 
++struct spk_synth *synth_current(void)
++{
++	return synth;
++}
++EXPORT_SYMBOL_GPL(synth_current);
++
+ short spk_punc_masks[] = { 0, SOME, MOST, PUNC, PUNC | B_SYM };
+diff --git a/drivers/staging/vt6655/device_main.c b/drivers/staging/vt6655/device_main.c
+index c9097e7367d8..2e28fbcdfe8e 100644
+--- a/drivers/staging/vt6655/device_main.c
++++ b/drivers/staging/vt6655/device_main.c
+@@ -1033,8 +1033,6 @@ static void vnt_interrupt_process(struct vnt_private *priv)
+ 		return;
+ 	}
+ 
+-	MACvIntDisable(priv->PortOffset);
+-
+ 	spin_lock_irqsave(&priv->lock, flags);
+ 
+ 	/* Read low level stats */
+@@ -1122,8 +1120,6 @@ static void vnt_interrupt_process(struct vnt_private *priv)
+ 	}
+ 
+ 	spin_unlock_irqrestore(&priv->lock, flags);
+-
+-	MACvIntEnable(priv->PortOffset, IMR_MASK_VALUE);
+ }
+ 
+ static void vnt_interrupt_work(struct work_struct *work)
+@@ -1133,14 +1129,17 @@ static void vnt_interrupt_work(struct work_struct *work)
+ 
+ 	if (priv->vif)
+ 		vnt_interrupt_process(priv);
++
++	MACvIntEnable(priv->PortOffset, IMR_MASK_VALUE);
+ }
+ 
+ static irqreturn_t vnt_interrupt(int irq,  void *arg)
+ {
+ 	struct vnt_private *priv = arg;
+ 
+-	if (priv->vif)
+-		schedule_work(&priv->interrupt_work);
++	schedule_work(&priv->interrupt_work);
++
++	MACvIntDisable(priv->PortOffset);
+ 
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/staging/wilc1000/linux_wlan.c b/drivers/staging/wilc1000/linux_wlan.c
+index 721689048648..5e5149c9a92d 100644
+--- a/drivers/staging/wilc1000/linux_wlan.c
++++ b/drivers/staging/wilc1000/linux_wlan.c
+@@ -1086,8 +1086,8 @@ int wilc_netdev_init(struct wilc **wilc, struct device *dev, int io_type,
+ 		vif->wilc = *wilc;
+ 		vif->ndev = ndev;
+ 		wl->vif[i] = vif;
+-		wl->vif_num = i;
+-		vif->idx = wl->vif_num;
++		wl->vif_num = i + 1;
++		vif->idx = i;
+ 
+ 		ndev->netdev_ops = &wilc_netdev_ops;
+ 
+diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
+index bd15a564fe24..3ad2659630e8 100644
+--- a/drivers/target/iscsi/iscsi_target.c
++++ b/drivers/target/iscsi/iscsi_target.c
+@@ -4040,9 +4040,9 @@ static void iscsit_release_commands_from_conn(struct iscsi_conn *conn)
+ 		struct se_cmd *se_cmd = &cmd->se_cmd;
+ 
+ 		if (se_cmd->se_tfo != NULL) {
+-			spin_lock(&se_cmd->t_state_lock);
++			spin_lock_irq(&se_cmd->t_state_lock);
+ 			se_cmd->transport_state |= CMD_T_FABRIC_STOP;
+-			spin_unlock(&se_cmd->t_state_lock);
++			spin_unlock_irq(&se_cmd->t_state_lock);
+ 		}
+ 	}
+ 	spin_unlock_bh(&conn->cmd_lock);
+diff --git a/drivers/tty/Kconfig b/drivers/tty/Kconfig
+index 0840d27381ea..e0a04bfc873e 100644
+--- a/drivers/tty/Kconfig
++++ b/drivers/tty/Kconfig
+@@ -441,4 +441,28 @@ config VCC
+ 	depends on SUN_LDOMS
+ 	help
+ 	  Support for Sun logical domain consoles.
++
++config LDISC_AUTOLOAD
++	bool "Automatically load TTY Line Disciplines"
++	default y
++	help
++	  Historically the kernel has always automatically loaded any
++	  line discipline that is in a kernel module when a user asks
++	  for it to be loaded with the TIOCSETD ioctl, or through other
++	  means.  This is not always the best thing to do on systems
++	  where you know you will not be using some of the more
++	  "ancient" line disciplines, so prevent the kernel from doing
++	  this unless the request is coming from a process with the
++	  CAP_SYS_MODULE permissions.
++
++	  Say 'Y' here if you trust your userspace users to do the right
++	  thing, or if you have only provided the line disciplines that
++	  you know you will be using, or if you wish to continue to use
++	  the traditional method of on-demand loading of these modules
++	  by any user.
++
++	  This functionality can be changed at runtime with the
++	  dev.tty.ldisc_autoload sysctl, this configuration option will
++	  only set the default value of this functionality.
++
+ endif # TTY
+diff --git a/drivers/tty/serial/8250/8250_of.c b/drivers/tty/serial/8250/8250_of.c
+index a1a85805d010..2488de1c4bc4 100644
+--- a/drivers/tty/serial/8250/8250_of.c
++++ b/drivers/tty/serial/8250/8250_of.c
+@@ -130,6 +130,10 @@ static int of_platform_serial_setup(struct platform_device *ofdev,
+ 		port->flags |= UPF_IOREMAP;
+ 	}
+ 
++	/* Compatibility with the deprecated pxa driver and 8250_pxa drivers. */
++	if (of_device_is_compatible(np, "mrvl,mmp-uart"))
++		port->regshift = 2;
++
+ 	/* Check for registers offset within the devices address range */
+ 	if (of_property_read_u32(np, "reg-shift", &prop) == 0)
+ 		port->regshift = prop;
+diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
+index 48bd694a5fa1..bbe5cba21522 100644
+--- a/drivers/tty/serial/8250/8250_pci.c
++++ b/drivers/tty/serial/8250/8250_pci.c
+@@ -2027,6 +2027,111 @@ static struct pci_serial_quirk pci_serial_quirks[] __refdata = {
+ 		.setup		= pci_default_setup,
+ 		.exit		= pci_plx9050_exit,
+ 	},
++	{
++		.vendor     = PCI_VENDOR_ID_ACCESIO,
++		.device     = PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SDB,
++		.subvendor  = PCI_ANY_ID,
++		.subdevice  = PCI_ANY_ID,
++		.setup      = pci_pericom_setup,
++	},
++	{
++		.vendor     = PCI_VENDOR_ID_ACCESIO,
++		.device     = PCI_DEVICE_ID_ACCESIO_MPCIE_COM_4S,
++		.subvendor  = PCI_ANY_ID,
++		.subdevice  = PCI_ANY_ID,
++		.setup      = pci_pericom_setup,
++	},
++	{
++		.vendor     = PCI_VENDOR_ID_ACCESIO,
++		.device     = PCI_DEVICE_ID_ACCESIO_PCIE_COM232_4DB,
++		.subvendor  = PCI_ANY_ID,
++		.subdevice  = PCI_ANY_ID,
++		.setup      = pci_pericom_setup,
++	},
++	{
++		.vendor     = PCI_VENDOR_ID_ACCESIO,
++		.device     = PCI_DEVICE_ID_ACCESIO_MPCIE_COM232_4,
++		.subvendor  = PCI_ANY_ID,
++		.subdevice  = PCI_ANY_ID,
++		.setup      = pci_pericom_setup,
++	},
++	{
++		.vendor     = PCI_VENDOR_ID_ACCESIO,
++		.device     = PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SMDB,
++		.subvendor  = PCI_ANY_ID,
++		.subdevice  = PCI_ANY_ID,
++		.setup      = pci_pericom_setup,
++	},
++	{
++		.vendor     = PCI_VENDOR_ID_ACCESIO,
++		.device     = PCI_DEVICE_ID_ACCESIO_MPCIE_COM_4SM,
++		.subvendor  = PCI_ANY_ID,
++		.subdevice  = PCI_ANY_ID,
++		.setup      = pci_pericom_setup,
++	},
++	{
++		.vendor     = PCI_VENDOR_ID_ACCESIO,
++		.device     = PCI_DEVICE_ID_ACCESIO_MPCIE_ICM422_4,
++		.subvendor  = PCI_ANY_ID,
++		.subdevice  = PCI_ANY_ID,
++		.setup      = pci_pericom_setup,
++	},
++	{
++		.vendor     = PCI_VENDOR_ID_ACCESIO,
++		.device     = PCI_DEVICE_ID_ACCESIO_MPCIE_ICM485_4,
++		.subvendor  = PCI_ANY_ID,
++		.subdevice  = PCI_ANY_ID,
++		.setup      = pci_pericom_setup,
++	},
++	{
++		.vendor     = PCI_DEVICE_ID_ACCESIO_PCIE_ICM_4S,
++		.device     = PCI_DEVICE_ID_ACCESIO_PCIE_ICM232_4,
++		.subvendor  = PCI_ANY_ID,
++		.subdevice  = PCI_ANY_ID,
++		.setup      = pci_pericom_setup,
++	},
++	{
++		.vendor     = PCI_VENDOR_ID_ACCESIO,
++		.device     = PCI_DEVICE_ID_ACCESIO_MPCIE_ICM232_4,
++		.subvendor  = PCI_ANY_ID,
++		.subdevice  = PCI_ANY_ID,
++		.setup      = pci_pericom_setup,
++	},
++	{
++		.vendor     = PCI_VENDOR_ID_ACCESIO,
++		.device     = PCI_DEVICE_ID_ACCESIO_PCIE_COM422_4,
++		.subvendor  = PCI_ANY_ID,
++		.subdevice  = PCI_ANY_ID,
++		.setup      = pci_pericom_setup,
++	},
++	{
++		.vendor     = PCI_VENDOR_ID_ACCESIO,
++		.device     = PCI_DEVICE_ID_ACCESIO_PCIE_COM485_4,
++		.subvendor  = PCI_ANY_ID,
++		.subdevice  = PCI_ANY_ID,
++		.setup      = pci_pericom_setup,
++	},
++	{
++		.vendor     = PCI_VENDOR_ID_ACCESIO,
++		.device     = PCI_DEVICE_ID_ACCESIO_PCIE_COM232_4,
++		.subvendor  = PCI_ANY_ID,
++		.subdevice  = PCI_ANY_ID,
++		.setup      = pci_pericom_setup,
++	},
++	{
++		.vendor     = PCI_VENDOR_ID_ACCESIO,
++		.device     = PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SM,
++		.subvendor  = PCI_ANY_ID,
++		.subdevice  = PCI_ANY_ID,
++		.setup      = pci_pericom_setup,
++	},
++	{
++		.vendor     = PCI_VENDOR_ID_ACCESIO,
++		.device     = PCI_DEVICE_ID_ACCESIO_PCIE_ICM_4SM,
++		.subvendor  = PCI_ANY_ID,
++		.subdevice  = PCI_ANY_ID,
++		.setup      = pci_pericom_setup,
++	},
+ 	/*
+ 	 * SBS Technologies, Inc., PMC-OCTALPRO 232
+ 	 */
+@@ -4575,10 +4680,10 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ 	 */
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_2SDB,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7954 },
++		pbn_pericom_PI7C9X7952 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_COM_2S,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7954 },
++		pbn_pericom_PI7C9X7952 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SDB,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ 		pbn_pericom_PI7C9X7954 },
+@@ -4587,10 +4692,10 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ 		pbn_pericom_PI7C9X7954 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM232_2DB,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7954 },
++		pbn_pericom_PI7C9X7952 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_COM232_2,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7954 },
++		pbn_pericom_PI7C9X7952 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM232_4DB,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ 		pbn_pericom_PI7C9X7954 },
+@@ -4599,10 +4704,10 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ 		pbn_pericom_PI7C9X7954 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_2SMDB,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7954 },
++		pbn_pericom_PI7C9X7952 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_COM_2SM,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7954 },
++		pbn_pericom_PI7C9X7952 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SMDB,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ 		pbn_pericom_PI7C9X7954 },
+@@ -4611,13 +4716,13 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ 		pbn_pericom_PI7C9X7954 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_ICM485_1,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7954 },
++		pbn_pericom_PI7C9X7951 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_ICM422_2,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7954 },
++		pbn_pericom_PI7C9X7952 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_ICM485_2,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7954 },
++		pbn_pericom_PI7C9X7952 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_ICM422_4,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ 		pbn_pericom_PI7C9X7954 },
+@@ -4626,16 +4731,16 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ 		pbn_pericom_PI7C9X7954 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM_2S,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7954 },
++		pbn_pericom_PI7C9X7952 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM_4S,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ 		pbn_pericom_PI7C9X7954 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM232_2,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7954 },
++		pbn_pericom_PI7C9X7952 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_ICM232_2,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7954 },
++		pbn_pericom_PI7C9X7952 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM232_4,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ 		pbn_pericom_PI7C9X7954 },
+@@ -4644,13 +4749,13 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ 		pbn_pericom_PI7C9X7954 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM_2SM,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7954 },
++		pbn_pericom_PI7C9X7952 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM422_4,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7958 },
++		pbn_pericom_PI7C9X7954 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM485_4,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7958 },
++		pbn_pericom_PI7C9X7954 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM422_8,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ 		pbn_pericom_PI7C9X7958 },
+@@ -4659,19 +4764,19 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ 		pbn_pericom_PI7C9X7958 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM232_4,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7958 },
++		pbn_pericom_PI7C9X7954 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM232_8,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ 		pbn_pericom_PI7C9X7958 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SM,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7958 },
++		pbn_pericom_PI7C9X7954 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_8SM,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ 		pbn_pericom_PI7C9X7958 },
+ 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM_4SM,
+ 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+-		pbn_pericom_PI7C9X7958 },
++		pbn_pericom_PI7C9X7954 },
+ 	/*
+ 	 * Topic TP560 Data/Fax/Voice 56k modem (reported by Evan Clarke)
+ 	 */
+diff --git a/drivers/tty/serial/8250/8250_pxa.c b/drivers/tty/serial/8250/8250_pxa.c
+index b9bcbe20a2be..c47188860e32 100644
+--- a/drivers/tty/serial/8250/8250_pxa.c
++++ b/drivers/tty/serial/8250/8250_pxa.c
+@@ -113,6 +113,10 @@ static int serial_pxa_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		return ret;
+ 
++	ret = of_alias_get_id(pdev->dev.of_node, "serial");
++	if (ret >= 0)
++		uart.port.line = ret;
++
+ 	uart.port.type = PORT_XSCALE;
+ 	uart.port.iotype = UPIO_MEM32;
+ 	uart.port.mapbase = mmres->start;
+diff --git a/drivers/tty/serial/atmel_serial.c b/drivers/tty/serial/atmel_serial.c
+index 05147fe24343..0b4f36905321 100644
+--- a/drivers/tty/serial/atmel_serial.c
++++ b/drivers/tty/serial/atmel_serial.c
+@@ -166,6 +166,8 @@ struct atmel_uart_port {
+ 	unsigned int		pending_status;
+ 	spinlock_t		lock_suspended;
+ 
++	bool			hd_start_rx;	/* can start RX during half-duplex operation */
++
+ 	/* ISO7816 */
+ 	unsigned int		fidi_min;
+ 	unsigned int		fidi_max;
+@@ -231,6 +233,13 @@ static inline void atmel_uart_write_char(struct uart_port *port, u8 value)
+ 	__raw_writeb(value, port->membase + ATMEL_US_THR);
+ }
+ 
++static inline int atmel_uart_is_half_duplex(struct uart_port *port)
++{
++	return ((port->rs485.flags & SER_RS485_ENABLED) &&
++		!(port->rs485.flags & SER_RS485_RX_DURING_TX)) ||
++		(port->iso7816.flags & SER_ISO7816_ENABLED);
++}
++
+ #ifdef CONFIG_SERIAL_ATMEL_PDC
+ static bool atmel_use_pdc_rx(struct uart_port *port)
+ {
+@@ -608,10 +617,9 @@ static void atmel_stop_tx(struct uart_port *port)
+ 	/* Disable interrupts */
+ 	atmel_uart_writel(port, ATMEL_US_IDR, atmel_port->tx_done_mask);
+ 
+-	if (((port->rs485.flags & SER_RS485_ENABLED) &&
+-	     !(port->rs485.flags & SER_RS485_RX_DURING_TX)) ||
+-	    port->iso7816.flags & SER_ISO7816_ENABLED)
++	if (atmel_uart_is_half_duplex(port))
+ 		atmel_start_rx(port);
++
+ }
+ 
+ /*
+@@ -628,9 +636,7 @@ static void atmel_start_tx(struct uart_port *port)
+ 		return;
+ 
+ 	if (atmel_use_pdc_tx(port) || atmel_use_dma_tx(port))
+-		if (((port->rs485.flags & SER_RS485_ENABLED) &&
+-		     !(port->rs485.flags & SER_RS485_RX_DURING_TX)) ||
+-		    port->iso7816.flags & SER_ISO7816_ENABLED)
++		if (atmel_uart_is_half_duplex(port))
+ 			atmel_stop_rx(port);
+ 
+ 	if (atmel_use_pdc_tx(port))
+@@ -928,11 +934,14 @@ static void atmel_complete_tx_dma(void *arg)
+ 	 */
+ 	if (!uart_circ_empty(xmit))
+ 		atmel_tasklet_schedule(atmel_port, &atmel_port->tasklet_tx);
+-	else if (((port->rs485.flags & SER_RS485_ENABLED) &&
+-		  !(port->rs485.flags & SER_RS485_RX_DURING_TX)) ||
+-		 port->iso7816.flags & SER_ISO7816_ENABLED) {
+-		/* DMA done, stop TX, start RX for RS485 */
+-		atmel_start_rx(port);
++	else if (atmel_uart_is_half_duplex(port)) {
++		/*
++		 * DMA done, re-enable TXEMPTY and signal that we can stop
++		 * TX and start RX for RS485
++		 */
++		atmel_port->hd_start_rx = true;
++		atmel_uart_writel(port, ATMEL_US_IER,
++				  atmel_port->tx_done_mask);
+ 	}
+ 
+ 	spin_unlock_irqrestore(&port->lock, flags);
+@@ -1288,6 +1297,10 @@ static int atmel_prepare_rx_dma(struct uart_port *port)
+ 					 sg_dma_len(&atmel_port->sg_rx)/2,
+ 					 DMA_DEV_TO_MEM,
+ 					 DMA_PREP_INTERRUPT);
++	if (!desc) {
++		dev_err(port->dev, "Preparing DMA cyclic failed\n");
++		goto chan_err;
++	}
+ 	desc->callback = atmel_complete_rx_dma;
+ 	desc->callback_param = port;
+ 	atmel_port->desc_rx = desc;
+@@ -1376,9 +1389,20 @@ atmel_handle_transmit(struct uart_port *port, unsigned int pending)
+ 	struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
+ 
+ 	if (pending & atmel_port->tx_done_mask) {
+-		/* Either PDC or interrupt transmission */
+ 		atmel_uart_writel(port, ATMEL_US_IDR,
+ 				  atmel_port->tx_done_mask);
++
++		/* Start RX if flag was set and FIFO is empty */
++		if (atmel_port->hd_start_rx) {
++			if (!(atmel_uart_readl(port, ATMEL_US_CSR)
++					& ATMEL_US_TXEMPTY))
++				dev_warn(port->dev, "Should start RX, but TX fifo is not empty\n");
++
++			atmel_port->hd_start_rx = false;
++			atmel_start_rx(port);
++			return;
++		}
++
+ 		atmel_tasklet_schedule(atmel_port, &atmel_port->tasklet_tx);
+ 	}
+ }
+@@ -1508,9 +1532,7 @@ static void atmel_tx_pdc(struct uart_port *port)
+ 		atmel_uart_writel(port, ATMEL_US_IER,
+ 				  atmel_port->tx_done_mask);
+ 	} else {
+-		if (((port->rs485.flags & SER_RS485_ENABLED) &&
+-		     !(port->rs485.flags & SER_RS485_RX_DURING_TX)) ||
+-		    port->iso7816.flags & SER_ISO7816_ENABLED) {
++		if (atmel_uart_is_half_duplex(port)) {
+ 			/* DMA done, stop TX, start RX for RS485 */
+ 			atmel_start_rx(port);
+ 		}
+diff --git a/drivers/tty/serial/kgdboc.c b/drivers/tty/serial/kgdboc.c
+index 6fb312e7af71..bfe5e9e034ec 100644
+--- a/drivers/tty/serial/kgdboc.c
++++ b/drivers/tty/serial/kgdboc.c
+@@ -148,8 +148,10 @@ static int configure_kgdboc(void)
+ 	char *cptr = config;
+ 	struct console *cons;
+ 
+-	if (!strlen(config) || isspace(config[0]))
++	if (!strlen(config) || isspace(config[0])) {
++		err = 0;
+ 		goto noconfig;
++	}
+ 
+ 	kgdboc_io_ops.is_console = 0;
+ 	kgdb_tty_driver = NULL;
+diff --git a/drivers/tty/serial/max310x.c b/drivers/tty/serial/max310x.c
+index 4f479841769a..0fdf3a760aa0 100644
+--- a/drivers/tty/serial/max310x.c
++++ b/drivers/tty/serial/max310x.c
+@@ -1416,6 +1416,8 @@ static int max310x_spi_probe(struct spi_device *spi)
+ 	if (spi->dev.of_node) {
+ 		const struct of_device_id *of_id =
+ 			of_match_device(max310x_dt_ids, &spi->dev);
++		if (!of_id)
++			return -ENODEV;
+ 
+ 		devtype = (struct max310x_devtype *)of_id->data;
+ 	} else {
+diff --git a/drivers/tty/serial/mvebu-uart.c b/drivers/tty/serial/mvebu-uart.c
+index 231f751d1ef4..7e7b1559fa36 100644
+--- a/drivers/tty/serial/mvebu-uart.c
++++ b/drivers/tty/serial/mvebu-uart.c
+@@ -810,6 +810,9 @@ static int mvebu_uart_probe(struct platform_device *pdev)
+ 		return -EINVAL;
+ 	}
+ 
++	if (!match)
++		return -ENODEV;
++
+ 	/* Assume that all UART ports have a DT alias or none has */
+ 	id = of_alias_get_id(pdev->dev.of_node, "serial");
+ 	if (!pdev->dev.of_node || id < 0)
+diff --git a/drivers/tty/serial/mxs-auart.c b/drivers/tty/serial/mxs-auart.c
+index 27235a526cce..4c188f4079b3 100644
+--- a/drivers/tty/serial/mxs-auart.c
++++ b/drivers/tty/serial/mxs-auart.c
+@@ -1686,6 +1686,10 @@ static int mxs_auart_probe(struct platform_device *pdev)
+ 
+ 	s->port.mapbase = r->start;
+ 	s->port.membase = ioremap(r->start, resource_size(r));
++	if (!s->port.membase) {
++		ret = -ENOMEM;
++		goto out_disable_clks;
++	}
+ 	s->port.ops = &mxs_auart_ops;
+ 	s->port.iotype = UPIO_MEM;
+ 	s->port.fifosize = MXS_AUART_FIFO_SIZE;
+diff --git a/drivers/tty/serial/qcom_geni_serial.c b/drivers/tty/serial/qcom_geni_serial.c
+index 38016609c7fa..d30502c58106 100644
+--- a/drivers/tty/serial/qcom_geni_serial.c
++++ b/drivers/tty/serial/qcom_geni_serial.c
+@@ -1117,7 +1117,7 @@ static int __init qcom_geni_console_setup(struct console *co, char *options)
+ {
+ 	struct uart_port *uport;
+ 	struct qcom_geni_serial_port *port;
+-	int baud;
++	int baud = 9600;
+ 	int bits = 8;
+ 	int parity = 'n';
+ 	int flow = 'n';
+diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
+index 64bbeb7d7e0c..93bd90f1ff14 100644
+--- a/drivers/tty/serial/sh-sci.c
++++ b/drivers/tty/serial/sh-sci.c
+@@ -838,19 +838,9 @@ static void sci_transmit_chars(struct uart_port *port)
+ 
+ 	if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
+ 		uart_write_wakeup(port);
+-	if (uart_circ_empty(xmit)) {
++	if (uart_circ_empty(xmit))
+ 		sci_stop_tx(port);
+-	} else {
+-		ctrl = serial_port_in(port, SCSCR);
+-
+-		if (port->type != PORT_SCI) {
+-			serial_port_in(port, SCxSR); /* Dummy read */
+-			sci_clear_SCxSR(port, SCxSR_TDxE_CLEAR(port));
+-		}
+ 
+-		ctrl |= SCSCR_TIE;
+-		serial_port_out(port, SCSCR, ctrl);
+-	}
+ }
+ 
+ /* On SH3, SCIF may read end-of-break as a space->mark char */
+diff --git a/drivers/tty/serial/xilinx_uartps.c b/drivers/tty/serial/xilinx_uartps.c
+index 094f2958cb2b..ee9f18c52d29 100644
+--- a/drivers/tty/serial/xilinx_uartps.c
++++ b/drivers/tty/serial/xilinx_uartps.c
+@@ -364,7 +364,13 @@ static irqreturn_t cdns_uart_isr(int irq, void *dev_id)
+ 		cdns_uart_handle_tx(dev_id);
+ 		isrstatus &= ~CDNS_UART_IXR_TXEMPTY;
+ 	}
+-	if (isrstatus & CDNS_UART_IXR_RXMASK)
++
++	/*
++	 * Skip RX processing if RX is disabled as RXEMPTY will never be set
++	 * as read bytes will not be removed from the FIFO.
++	 */
++	if (isrstatus & CDNS_UART_IXR_RXMASK &&
++	    !(readl(port->membase + CDNS_UART_CR) & CDNS_UART_CR_RX_DIS))
+ 		cdns_uart_handle_rx(dev_id, isrstatus);
+ 
+ 	spin_unlock(&port->lock);
+diff --git a/drivers/tty/tty_buffer.c b/drivers/tty/tty_buffer.c
+index 77070c2d1240..ec145a59f199 100644
+--- a/drivers/tty/tty_buffer.c
++++ b/drivers/tty/tty_buffer.c
+@@ -26,7 +26,7 @@
+  * Byte threshold to limit memory consumption for flip buffers.
+  * The actual memory limit is > 2x this amount.
+  */
+-#define TTYB_DEFAULT_MEM_LIMIT	65536
++#define TTYB_DEFAULT_MEM_LIMIT	(640 * 1024UL)
+ 
+ /*
+  * We default to dicing tty buffer allocations to this many characters
+diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
+index 21ffcce16927..5fa250157025 100644
+--- a/drivers/tty/tty_io.c
++++ b/drivers/tty/tty_io.c
+@@ -513,6 +513,8 @@ static const struct file_operations hung_up_tty_fops = {
+ static DEFINE_SPINLOCK(redirect_lock);
+ static struct file *redirect;
+ 
++extern void tty_sysctl_init(void);
++
+ /**
+  *	tty_wakeup	-	request more data
+  *	@tty: terminal
+@@ -3483,6 +3485,7 @@ void console_sysfs_notify(void)
+  */
+ int __init tty_init(void)
+ {
++	tty_sysctl_init();
+ 	cdev_init(&tty_cdev, &tty_fops);
+ 	if (cdev_add(&tty_cdev, MKDEV(TTYAUX_MAJOR, 0), 1) ||
+ 	    register_chrdev_region(MKDEV(TTYAUX_MAJOR, 0), 1, "/dev/tty") < 0)
+diff --git a/drivers/tty/tty_ldisc.c b/drivers/tty/tty_ldisc.c
+index 45eda69b150c..e38f104db174 100644
+--- a/drivers/tty/tty_ldisc.c
++++ b/drivers/tty/tty_ldisc.c
+@@ -156,6 +156,13 @@ static void put_ldops(struct tty_ldisc_ops *ldops)
+  *		takes tty_ldiscs_lock to guard against ldisc races
+  */
+ 
++#if defined(CONFIG_LDISC_AUTOLOAD)
++	#define INITIAL_AUTOLOAD_STATE	1
++#else
++	#define INITIAL_AUTOLOAD_STATE	0
++#endif
++static int tty_ldisc_autoload = INITIAL_AUTOLOAD_STATE;
++
+ static struct tty_ldisc *tty_ldisc_get(struct tty_struct *tty, int disc)
+ {
+ 	struct tty_ldisc *ld;
+@@ -170,6 +177,8 @@ static struct tty_ldisc *tty_ldisc_get(struct tty_struct *tty, int disc)
+ 	 */
+ 	ldops = get_ldops(disc);
+ 	if (IS_ERR(ldops)) {
++		if (!capable(CAP_SYS_MODULE) && !tty_ldisc_autoload)
++			return ERR_PTR(-EPERM);
+ 		request_module("tty-ldisc-%d", disc);
+ 		ldops = get_ldops(disc);
+ 		if (IS_ERR(ldops))
+@@ -845,3 +854,41 @@ void tty_ldisc_deinit(struct tty_struct *tty)
+ 		tty_ldisc_put(tty->ldisc);
+ 	tty->ldisc = NULL;
+ }
++
++static int zero;
++static int one = 1;
++static struct ctl_table tty_table[] = {
++	{
++		.procname	= "ldisc_autoload",
++		.data		= &tty_ldisc_autoload,
++		.maxlen		= sizeof(tty_ldisc_autoload),
++		.mode		= 0644,
++		.proc_handler	= proc_dointvec,
++		.extra1		= &zero,
++		.extra2		= &one,
++	},
++	{ }
++};
++
++static struct ctl_table tty_dir_table[] = {
++	{
++		.procname	= "tty",
++		.mode		= 0555,
++		.child		= tty_table,
++	},
++	{ }
++};
++
++static struct ctl_table tty_root_table[] = {
++	{
++		.procname	= "dev",
++		.mode		= 0555,
++		.child		= tty_dir_table,
++	},
++	{ }
++};
++
++void tty_sysctl_init(void)
++{
++	register_sysctl_table(tty_root_table);
++}
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index bba75560d11e..9646ff63e77a 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -935,8 +935,11 @@ static void flush_scrollback(struct vc_data *vc)
+ {
+ 	WARN_CONSOLE_UNLOCKED();
+ 
++	set_origin(vc);
+ 	if (vc->vc_sw->con_flush_scrollback)
+ 		vc->vc_sw->con_flush_scrollback(vc);
++	else
++		vc->vc_sw->con_switch(vc);
+ }
+ 
+ /*
+@@ -1503,8 +1506,10 @@ static void csi_J(struct vc_data *vc, int vpar)
+ 			count = ((vc->vc_pos - vc->vc_origin) >> 1) + 1;
+ 			start = (unsigned short *)vc->vc_origin;
+ 			break;
++		case 3: /* include scrollback */
++			flush_scrollback(vc);
++			/* fallthrough */
+ 		case 2: /* erase whole display */
+-		case 3: /* (and scrollback buffer later) */
+ 			vc_uniscr_clear_lines(vc, 0, vc->vc_rows);
+ 			count = vc->vc_cols * vc->vc_rows;
+ 			start = (unsigned short *)vc->vc_origin;
+@@ -1513,13 +1518,7 @@ static void csi_J(struct vc_data *vc, int vpar)
+ 			return;
+ 	}
+ 	scr_memsetw(start, vc->vc_video_erase_char, 2 * count);
+-	if (vpar == 3) {
+-		set_origin(vc);
+-		flush_scrollback(vc);
+-		if (con_is_visible(vc))
+-			update_screen(vc);
+-	} else if (con_should_update(vc))
+-		do_update_region(vc, (unsigned long) start, count);
++	update_region(vc, (unsigned long) start, count);
+ 	vc->vc_need_wrap = 0;
+ }
+ 
+diff --git a/drivers/usb/chipidea/ci_hdrc_tegra.c b/drivers/usb/chipidea/ci_hdrc_tegra.c
+index 772851bee99b..12025358bb3c 100644
+--- a/drivers/usb/chipidea/ci_hdrc_tegra.c
++++ b/drivers/usb/chipidea/ci_hdrc_tegra.c
+@@ -130,6 +130,7 @@ static int tegra_udc_remove(struct platform_device *pdev)
+ {
+ 	struct tegra_udc *udc = platform_get_drvdata(pdev);
+ 
++	ci_hdrc_remove_device(udc->dev);
+ 	usb_phy_set_suspend(udc->phy, 1);
+ 	clk_disable_unprepare(udc->clk);
+ 
+diff --git a/drivers/usb/chipidea/core.c b/drivers/usb/chipidea/core.c
+index 7bfcbb23c2a4..016e4004fe9d 100644
+--- a/drivers/usb/chipidea/core.c
++++ b/drivers/usb/chipidea/core.c
+@@ -954,8 +954,15 @@ static int ci_hdrc_probe(struct platform_device *pdev)
+ 	} else if (ci->platdata->usb_phy) {
+ 		ci->usb_phy = ci->platdata->usb_phy;
+ 	} else {
++		ci->usb_phy = devm_usb_get_phy_by_phandle(dev->parent, "phys",
++							  0);
+ 		ci->phy = devm_phy_get(dev->parent, "usb-phy");
+-		ci->usb_phy = devm_usb_get_phy(dev->parent, USB_PHY_TYPE_USB2);
++
++		/* Fallback to grabbing any registered USB2 PHY */
++		if (IS_ERR(ci->usb_phy) &&
++		    PTR_ERR(ci->usb_phy) != -EPROBE_DEFER)
++			ci->usb_phy = devm_usb_get_phy(dev->parent,
++						       USB_PHY_TYPE_USB2);
+ 
+ 		/* if both generic PHY and USB PHY layers aren't enabled */
+ 		if (PTR_ERR(ci->phy) == -ENOSYS &&
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 739f8960811a..ec666eb4b7b4 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -558,10 +558,8 @@ static void acm_softint(struct work_struct *work)
+ 		clear_bit(EVENT_RX_STALL, &acm->flags);
+ 	}
+ 
+-	if (test_bit(EVENT_TTY_WAKEUP, &acm->flags)) {
++	if (test_and_clear_bit(EVENT_TTY_WAKEUP, &acm->flags))
+ 		tty_port_tty_wakeup(&acm->port);
+-		clear_bit(EVENT_TTY_WAKEUP, &acm->flags);
+-	}
+ }
+ 
+ /*
+diff --git a/drivers/usb/common/common.c b/drivers/usb/common/common.c
+index 48277bbc15e4..73c8e6591746 100644
+--- a/drivers/usb/common/common.c
++++ b/drivers/usb/common/common.c
+@@ -145,6 +145,8 @@ enum usb_dr_mode of_usb_get_dr_mode_by_phy(struct device_node *np, int arg0)
+ 
+ 	do {
+ 		controller = of_find_node_with_property(controller, "phys");
++		if (!of_device_is_available(controller))
++			continue;
+ 		index = 0;
+ 		do {
+ 			if (arg0 == -1) {
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 6c9b76bcc2e1..8d1dbe36db92 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -3339,6 +3339,8 @@ int dwc3_gadget_init(struct dwc3 *dwc)
+ 		goto err4;
+ 	}
+ 
++	dwc3_gadget_set_speed(&dwc->gadget, dwc->maximum_speed);
++
+ 	return 0;
+ 
+ err4:
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index 1e5430438703..0f8d16de7a37 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -1082,6 +1082,7 @@ static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data)
+ 			 * condition with req->complete callback.
+ 			 */
+ 			usb_ep_dequeue(ep->ep, req);
++			wait_for_completion(&done);
+ 			interrupted = ep->status < 0;
+ 		}
+ 
+diff --git a/drivers/usb/gadget/function/f_hid.c b/drivers/usb/gadget/function/f_hid.c
+index 75b113a5b25c..f3816a5c861e 100644
+--- a/drivers/usb/gadget/function/f_hid.c
++++ b/drivers/usb/gadget/function/f_hid.c
+@@ -391,20 +391,20 @@ try_again:
+ 	req->complete = f_hidg_req_complete;
+ 	req->context  = hidg;
+ 
++	spin_unlock_irqrestore(&hidg->write_spinlock, flags);
++
+ 	status = usb_ep_queue(hidg->in_ep, req, GFP_ATOMIC);
+ 	if (status < 0) {
+ 		ERROR(hidg->func.config->cdev,
+ 			"usb_ep_queue error on int endpoint %zd\n", status);
+-		goto release_write_pending_unlocked;
++		goto release_write_pending;
+ 	} else {
+ 		status = count;
+ 	}
+-	spin_unlock_irqrestore(&hidg->write_spinlock, flags);
+ 
+ 	return status;
+ release_write_pending:
+ 	spin_lock_irqsave(&hidg->write_spinlock, flags);
+-release_write_pending_unlocked:
+ 	hidg->write_pending = 0;
+ 	spin_unlock_irqrestore(&hidg->write_spinlock, flags);
+ 
+diff --git a/drivers/usb/host/xhci-dbgcap.c b/drivers/usb/host/xhci-dbgcap.c
+index 86cff5c28eff..ba841c569c48 100644
+--- a/drivers/usb/host/xhci-dbgcap.c
++++ b/drivers/usb/host/xhci-dbgcap.c
+@@ -516,7 +516,6 @@ static int xhci_do_dbc_stop(struct xhci_hcd *xhci)
+ 		return -1;
+ 
+ 	writel(0, &dbc->regs->control);
+-	xhci_dbc_mem_cleanup(xhci);
+ 	dbc->state = DS_DISABLED;
+ 
+ 	return 0;
+@@ -562,8 +561,10 @@ static void xhci_dbc_stop(struct xhci_hcd *xhci)
+ 	ret = xhci_do_dbc_stop(xhci);
+ 	spin_unlock_irqrestore(&dbc->lock, flags);
+ 
+-	if (!ret)
++	if (!ret) {
++		xhci_dbc_mem_cleanup(xhci);
+ 		pm_runtime_put_sync(xhci_to_hcd(xhci)->self.controller);
++	}
+ }
+ 
+ static void
+diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
+index e2eece693655..96a740543183 100644
+--- a/drivers/usb/host/xhci-hub.c
++++ b/drivers/usb/host/xhci-hub.c
+@@ -1545,20 +1545,25 @@ int xhci_bus_suspend(struct usb_hcd *hcd)
+ 	port_index = max_ports;
+ 	while (port_index--) {
+ 		u32 t1, t2;
+-
++		int retries = 10;
++retry:
+ 		t1 = readl(ports[port_index]->addr);
+ 		t2 = xhci_port_state_to_neutral(t1);
+ 		portsc_buf[port_index] = 0;
+ 
+-		/* Bail out if a USB3 port has a new device in link training */
+-		if ((hcd->speed >= HCD_USB3) &&
++		/*
++		 * Give a USB3 port in link training time to finish, but don't
++		 * prevent suspend as port might be stuck
++		 */
++		if ((hcd->speed >= HCD_USB3) && retries-- &&
+ 		    (t1 & PORT_PLS_MASK) == XDEV_POLLING) {
+-			bus_state->bus_suspended = 0;
+ 			spin_unlock_irqrestore(&xhci->lock, flags);
+-			xhci_dbg(xhci, "Bus suspend bailout, port in polling\n");
+-			return -EBUSY;
++			msleep(XHCI_PORT_POLLING_LFPS_TIME);
++			spin_lock_irqsave(&xhci->lock, flags);
++			xhci_dbg(xhci, "port %d polling in bus suspend, waiting\n",
++				 port_index);
++			goto retry;
+ 		}
+-
+ 		/* suspend ports in U0, or bail out for new connect changes */
+ 		if ((t1 & PORT_PE) && (t1 & PORT_PLS_MASK) == XDEV_U0) {
+ 			if ((t1 & PORT_CSC) && wake_enabled) {
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index a9ec7051f286..c2fe218e051f 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -194,6 +194,7 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ 		xhci->quirks |= XHCI_SSIC_PORT_UNUSED;
+ 	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
+ 	    (pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI ||
++	     pdev->device == PCI_DEVICE_ID_INTEL_SUNRISEPOINT_LP_XHCI ||
+ 	     pdev->device == PCI_DEVICE_ID_INTEL_APL_XHCI))
+ 		xhci->quirks |= XHCI_INTEL_USB_ROLE_SW;
+ 	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
+diff --git a/drivers/usb/host/xhci-rcar.c b/drivers/usb/host/xhci-rcar.c
+index a6e463715779..671bce18782c 100644
+--- a/drivers/usb/host/xhci-rcar.c
++++ b/drivers/usb/host/xhci-rcar.c
+@@ -246,6 +246,7 @@ int xhci_rcar_init_quirk(struct usb_hcd *hcd)
+ 	if (!xhci_rcar_wait_for_pll_active(hcd))
+ 		return -ETIMEDOUT;
+ 
++	xhci->quirks |= XHCI_TRUST_TX_LENGTH;
+ 	return xhci_rcar_download_firmware(hcd);
+ }
+ 
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 40fa25c4d041..9215a28dad40 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -1647,10 +1647,13 @@ static void handle_port_status(struct xhci_hcd *xhci,
+ 		}
+ 	}
+ 
+-	if ((portsc & PORT_PLC) && (portsc & PORT_PLS_MASK) == XDEV_U0 &&
+-			DEV_SUPERSPEED_ANY(portsc)) {
++	if ((portsc & PORT_PLC) &&
++	    DEV_SUPERSPEED_ANY(portsc) &&
++	    ((portsc & PORT_PLS_MASK) == XDEV_U0 ||
++	     (portsc & PORT_PLS_MASK) == XDEV_U1 ||
++	     (portsc & PORT_PLS_MASK) == XDEV_U2)) {
+ 		xhci_dbg(xhci, "resume SS port %d finished\n", port_id);
+-		/* We've just brought the device into U0 through either the
++		/* We've just brought the device into U0/1/2 through either the
+ 		 * Resume state after a device remote wakeup, or through the
+ 		 * U3Exit state after a host-initiated resume.  If it's a device
+ 		 * initiated remote wake, don't pass up the link state change,
+diff --git a/drivers/usb/host/xhci-tegra.c b/drivers/usb/host/xhci-tegra.c
+index 938ff06c0349..efb0cad8710e 100644
+--- a/drivers/usb/host/xhci-tegra.c
++++ b/drivers/usb/host/xhci-tegra.c
+@@ -941,9 +941,9 @@ static void tegra_xusb_powerdomain_remove(struct device *dev,
+ 		device_link_del(tegra->genpd_dl_ss);
+ 	if (tegra->genpd_dl_host)
+ 		device_link_del(tegra->genpd_dl_host);
+-	if (tegra->genpd_dev_ss)
++	if (!IS_ERR_OR_NULL(tegra->genpd_dev_ss))
+ 		dev_pm_domain_detach(tegra->genpd_dev_ss, true);
+-	if (tegra->genpd_dev_host)
++	if (!IS_ERR_OR_NULL(tegra->genpd_dev_host))
+ 		dev_pm_domain_detach(tegra->genpd_dev_host, true);
+ }
+ 
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 652dc36e3012..9334cdee382a 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -452,6 +452,14 @@ struct xhci_op_regs {
+  */
+ #define XHCI_DEFAULT_BESL	4
+ 
++/*
++ * The USB3 specification defines a 360 ms tPollingLFPSTimeout for USB3
++ * ports to complete link training. Link training usually completes much
++ * faster, so check status 10 times with a 36 ms sleep in places where we
++ * need to wait for polling to complete.
++ */
++#define XHCI_PORT_POLLING_LFPS_TIME  36
++
+ /**
+  * struct xhci_intr_reg - Interrupt Register Set
+  * @irq_pending:	IMAN - Interrupt Management Register.  Used to enable
+diff --git a/drivers/usb/mtu3/Kconfig b/drivers/usb/mtu3/Kconfig
+index 40bbf1f53337..fe58904f350b 100644
+--- a/drivers/usb/mtu3/Kconfig
++++ b/drivers/usb/mtu3/Kconfig
+@@ -4,6 +4,7 @@ config USB_MTU3
+ 	tristate "MediaTek USB3 Dual Role controller"
+ 	depends on USB || USB_GADGET
+ 	depends on ARCH_MEDIATEK || COMPILE_TEST
++	depends on EXTCON || !EXTCON
+ 	select USB_XHCI_MTK if USB_SUPPORT && USB_XHCI_HCD
+ 	help
+ 	  Say Y or M here if your system runs on MediaTek SoCs with
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index c0777a374a88..e732949f6567 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -61,6 +61,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x08e6, 0x5501) }, /* Gemalto Prox-PU/CU contactless smartcard reader */
+ 	{ USB_DEVICE(0x08FD, 0x000A) }, /* Digianswer A/S , ZigBee/802.15.4 MAC Device */
+ 	{ USB_DEVICE(0x0908, 0x01FF) }, /* Siemens RUGGEDCOM USB Serial Console */
++	{ USB_DEVICE(0x0B00, 0x3070) }, /* Ingenico 3070 */
+ 	{ USB_DEVICE(0x0BED, 0x1100) }, /* MEI (TM) Cashflow-SC Bill/Voucher Acceptor */
+ 	{ USB_DEVICE(0x0BED, 0x1101) }, /* MEI series 2000 Combo Acceptor */
+ 	{ USB_DEVICE(0x0FCF, 0x1003) }, /* Dynastream ANT development board */
+@@ -79,6 +80,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x10C4, 0x804E) }, /* Software Bisque Paramount ME build-in converter */
+ 	{ USB_DEVICE(0x10C4, 0x8053) }, /* Enfora EDG1228 */
+ 	{ USB_DEVICE(0x10C4, 0x8054) }, /* Enfora GSM2228 */
++	{ USB_DEVICE(0x10C4, 0x8056) }, /* Lorenz Messtechnik devices */
+ 	{ USB_DEVICE(0x10C4, 0x8066) }, /* Argussoft In-System Programmer */
+ 	{ USB_DEVICE(0x10C4, 0x806F) }, /* IMS USB to RS422 Converter Cable */
+ 	{ USB_DEVICE(0x10C4, 0x807A) }, /* Crumb128 board */
+@@ -1353,8 +1355,13 @@ static int cp210x_gpio_get(struct gpio_chip *gc, unsigned int gpio)
+ 	if (priv->partnum == CP210X_PARTNUM_CP2105)
+ 		req_type = REQTYPE_INTERFACE_TO_HOST;
+ 
++	result = usb_autopm_get_interface(serial->interface);
++	if (result)
++		return result;
++
+ 	result = cp210x_read_vendor_block(serial, req_type,
+ 					  CP210X_READ_LATCH, &buf, sizeof(buf));
++	usb_autopm_put_interface(serial->interface);
+ 	if (result < 0)
+ 		return result;
+ 
+@@ -1375,6 +1382,10 @@ static void cp210x_gpio_set(struct gpio_chip *gc, unsigned int gpio, int value)
+ 
+ 	buf.mask = BIT(gpio);
+ 
++	result = usb_autopm_get_interface(serial->interface);
++	if (result)
++		goto out;
++
+ 	if (priv->partnum == CP210X_PARTNUM_CP2105) {
+ 		result = cp210x_write_vendor_block(serial,
+ 						   REQTYPE_HOST_TO_INTERFACE,
+@@ -1392,6 +1403,8 @@ static void cp210x_gpio_set(struct gpio_chip *gc, unsigned int gpio, int value)
+ 					 NULL, 0, USB_CTRL_SET_TIMEOUT);
+ 	}
+ 
++	usb_autopm_put_interface(serial->interface);
++out:
+ 	if (result < 0) {
+ 		dev_err(&serial->interface->dev, "failed to set GPIO value: %d\n",
+ 				result);
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index 77ef4c481f3c..1d8461ae2c34 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -609,6 +609,8 @@ static const struct usb_device_id id_table_combined[] = {
+ 		.driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
+ 	{ USB_DEVICE(FTDI_VID, FTDI_NT_ORIONLXM_PID),
+ 		.driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
++	{ USB_DEVICE(FTDI_VID, FTDI_NT_ORIONLX_PLUS_PID) },
++	{ USB_DEVICE(FTDI_VID, FTDI_NT_ORION_IO_PID) },
+ 	{ USB_DEVICE(FTDI_VID, FTDI_SYNAPSE_SS200_PID) },
+ 	{ USB_DEVICE(FTDI_VID, FTDI_CUSTOMWARE_MINIPLEX_PID) },
+ 	{ USB_DEVICE(FTDI_VID, FTDI_CUSTOMWARE_MINIPLEX2_PID) },
+@@ -1025,6 +1027,8 @@ static const struct usb_device_id id_table_combined[] = {
+ 	{ USB_DEVICE(CYPRESS_VID, CYPRESS_WICED_BT_USB_PID) },
+ 	{ USB_DEVICE(CYPRESS_VID, CYPRESS_WICED_WL_USB_PID) },
+ 	{ USB_DEVICE(AIRBUS_DS_VID, AIRBUS_DS_P8GR) },
++	/* EZPrototypes devices */
++	{ USB_DEVICE(EZPROTOTYPES_VID, HJELMSLUND_USB485_ISO_PID) },
+ 	{ }					/* Terminating entry */
+ };
+ 
+diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
+index 975d02666c5a..5755f0df0025 100644
+--- a/drivers/usb/serial/ftdi_sio_ids.h
++++ b/drivers/usb/serial/ftdi_sio_ids.h
+@@ -567,7 +567,9 @@
+ /*
+  * NovaTech product ids (FTDI_VID)
+  */
+-#define FTDI_NT_ORIONLXM_PID	0x7c90	/* OrionLXm Substation Automation Platform */
++#define FTDI_NT_ORIONLXM_PID		0x7c90	/* OrionLXm Substation Automation Platform */
++#define FTDI_NT_ORIONLX_PLUS_PID	0x7c91	/* OrionLX+ Substation Automation Platform */
++#define FTDI_NT_ORION_IO_PID		0x7c92	/* Orion I/O */
+ 
+ /*
+  * Synapse Wireless product ids (FTDI_VID)
+@@ -1308,6 +1310,12 @@
+ #define IONICS_VID			0x1c0c
+ #define IONICS_PLUGCOMPUTER_PID		0x0102
+ 
++/*
++ * EZPrototypes (PID reseller)
++ */
++#define EZPROTOTYPES_VID		0x1c40
++#define HJELMSLUND_USB485_ISO_PID	0x0477
++
+ /*
+  * Dresden Elektronik Sensor Terminal Board
+  */
+diff --git a/drivers/usb/serial/mos7720.c b/drivers/usb/serial/mos7720.c
+index fc52ac75fbf6..18110225d506 100644
+--- a/drivers/usb/serial/mos7720.c
++++ b/drivers/usb/serial/mos7720.c
+@@ -366,8 +366,6 @@ static int write_parport_reg_nonblock(struct mos7715_parport *mos_parport,
+ 	if (!urbtrack)
+ 		return -ENOMEM;
+ 
+-	kref_get(&mos_parport->ref_count);
+-	urbtrack->mos_parport = mos_parport;
+ 	urbtrack->urb = usb_alloc_urb(0, GFP_ATOMIC);
+ 	if (!urbtrack->urb) {
+ 		kfree(urbtrack);
+@@ -388,6 +386,8 @@ static int write_parport_reg_nonblock(struct mos7715_parport *mos_parport,
+ 			     usb_sndctrlpipe(usbdev, 0),
+ 			     (unsigned char *)urbtrack->setup,
+ 			     NULL, 0, async_complete, urbtrack);
++	kref_get(&mos_parport->ref_count);
++	urbtrack->mos_parport = mos_parport;
+ 	kref_init(&urbtrack->ref_count);
+ 	INIT_LIST_HEAD(&urbtrack->urblist_entry);
+ 
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index aef15497ff31..83869065b802 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -246,6 +246,7 @@ static void option_instat_callback(struct urb *urb);
+ #define QUECTEL_PRODUCT_EC25			0x0125
+ #define QUECTEL_PRODUCT_BG96			0x0296
+ #define QUECTEL_PRODUCT_EP06			0x0306
++#define QUECTEL_PRODUCT_EM12			0x0512
+ 
+ #define CMOTECH_VENDOR_ID			0x16d8
+ #define CMOTECH_PRODUCT_6001			0x6001
+@@ -1066,7 +1067,8 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = RSVD(3) },
+ 	{ USB_DEVICE(QUALCOMM_VENDOR_ID, 0x6613)}, /* Onda H600/ZTE MF330 */
+ 	{ USB_DEVICE(QUALCOMM_VENDOR_ID, 0x0023)}, /* ONYX 3G device */
+-	{ USB_DEVICE(QUALCOMM_VENDOR_ID, 0x9000)}, /* SIMCom SIM5218 */
++	{ USB_DEVICE(QUALCOMM_VENDOR_ID, 0x9000), /* SIMCom SIM5218 */
++	  .driver_info = NCTRL(0) | NCTRL(1) | NCTRL(2) | NCTRL(3) | RSVD(4) },
+ 	/* Quectel products using Qualcomm vendor ID */
+ 	{ USB_DEVICE(QUALCOMM_VENDOR_ID, QUECTEL_PRODUCT_UC15)},
+ 	{ USB_DEVICE(QUALCOMM_VENDOR_ID, QUECTEL_PRODUCT_UC20),
+@@ -1087,6 +1089,9 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0xff, 0xff),
+ 	  .driver_info = RSVD(1) | RSVD(2) | RSVD(3) | RSVD(4) | NUMEP2 },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0, 0) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0xff, 0xff),
++	  .driver_info = RSVD(1) | RSVD(2) | RSVD(3) | RSVD(4) | NUMEP2 },
++	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0, 0) },
+ 	{ USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6001) },
+ 	{ USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CMU_300) },
+ 	{ USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6003),
+@@ -1148,6 +1153,8 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
+ 	  .driver_info = NCTRL(0) | RSVD(3) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1102, 0xff),	/* Telit ME910 (ECM) */
++	  .driver_info = NCTRL(0) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE910),
+ 	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE910_USBCFG4),
+@@ -1938,10 +1945,12 @@ static const struct usb_device_id option_ids[] = {
+ 	  .driver_info = RSVD(4) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2001, 0x7e35, 0xff),			/* D-Link DWM-222 */
+ 	  .driver_info = RSVD(4) },
+-	{ USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e01, 0xff, 0xff, 0xff) }, /* D-Link DWM-152/C1 */
+-	{ USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e02, 0xff, 0xff, 0xff) }, /* D-Link DWM-156/C1 */
+-	{ USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x7e11, 0xff, 0xff, 0xff) }, /* D-Link DWM-156/A3 */
+-	{ USB_DEVICE_INTERFACE_CLASS(0x2020, 0x4000, 0xff) },                /* OLICARD300 - MT6225 */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e01, 0xff, 0xff, 0xff) },	/* D-Link DWM-152/C1 */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e02, 0xff, 0xff, 0xff) },	/* D-Link DWM-156/C1 */
++	{ USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x7e11, 0xff, 0xff, 0xff) },	/* D-Link DWM-156/A3 */
++	{ USB_DEVICE_INTERFACE_CLASS(0x2020, 0x2031, 0xff),			/* Olicard 600 */
++	  .driver_info = RSVD(4) },
++	{ USB_DEVICE_INTERFACE_CLASS(0x2020, 0x4000, 0xff) },			/* OLICARD300 - MT6225 */
+ 	{ USB_DEVICE(INOVIA_VENDOR_ID, INOVIA_SEW858) },
+ 	{ USB_DEVICE(VIATELECOM_VENDOR_ID, VIATELECOM_PRODUCT_CDS7) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(WETELECOM_VENDOR_ID, WETELECOM_PRODUCT_WMD200, 0xff, 0xff, 0xff) },
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index f1c39a3c7534..d34e945e5d09 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -37,6 +37,7 @@
+ 	S(SRC_ATTACHED),			\
+ 	S(SRC_STARTUP),				\
+ 	S(SRC_SEND_CAPABILITIES),		\
++	S(SRC_SEND_CAPABILITIES_TIMEOUT),	\
+ 	S(SRC_NEGOTIATE_CAPABILITIES),		\
+ 	S(SRC_TRANSITION_SUPPLY),		\
+ 	S(SRC_READY),				\
+@@ -2966,10 +2967,34 @@ static void run_state_machine(struct tcpm_port *port)
+ 			/* port->hard_reset_count = 0; */
+ 			port->caps_count = 0;
+ 			port->pd_capable = true;
+-			tcpm_set_state_cond(port, hard_reset_state(port),
++			tcpm_set_state_cond(port, SRC_SEND_CAPABILITIES_TIMEOUT,
+ 					    PD_T_SEND_SOURCE_CAP);
+ 		}
+ 		break;
++	case SRC_SEND_CAPABILITIES_TIMEOUT:
++		/*
++		 * Error recovery for a PD_DATA_SOURCE_CAP reply timeout.
++		 *
++		 * PD 2.0 sinks are supposed to accept src-capabilities with a
++		 * 3.0 header and simply ignore any src PDOs which the sink does
++		 * not understand such as PPS but some 2.0 sinks instead ignore
++		 * the entire PD_DATA_SOURCE_CAP message, causing contract
++		 * negotiation to fail.
++		 *
++		 * After PD_N_HARD_RESET_COUNT hard-reset attempts, we try
++		 * sending src-capabilities with a lower PD revision to
++		 * make these broken sinks work.
++		 */
++		if (port->hard_reset_count < PD_N_HARD_RESET_COUNT) {
++			tcpm_set_state(port, HARD_RESET_SEND, 0);
++		} else if (port->negotiated_rev > PD_REV20) {
++			port->negotiated_rev--;
++			port->hard_reset_count = 0;
++			tcpm_set_state(port, SRC_SEND_CAPABILITIES, 0);
++		} else {
++			tcpm_set_state(port, hard_reset_state(port), 0);
++		}
++		break;
+ 	case SRC_NEGOTIATE_CAPABILITIES:
+ 		ret = tcpm_pd_check_request(port);
+ 		if (ret < 0) {
+diff --git a/drivers/usb/typec/tcpm/wcove.c b/drivers/usb/typec/tcpm/wcove.c
+index 423208e19383..6770afd40765 100644
+--- a/drivers/usb/typec/tcpm/wcove.c
++++ b/drivers/usb/typec/tcpm/wcove.c
+@@ -615,8 +615,13 @@ static int wcove_typec_probe(struct platform_device *pdev)
+ 	wcove->dev = &pdev->dev;
+ 	wcove->regmap = pmic->regmap;
+ 
+-	irq = regmap_irq_get_virq(pmic->irq_chip_data_chgr,
+-				  platform_get_irq(pdev, 0));
++	irq = platform_get_irq(pdev, 0);
++	if (irq < 0) {
++		dev_err(&pdev->dev, "Failed to get IRQ: %d\n", irq);
++		return irq;
++	}
++
++	irq = regmap_irq_get_virq(pmic->irq_chip_data_chgr, irq);
+ 	if (irq < 0)
+ 		return irq;
+ 
+diff --git a/drivers/usb/typec/tps6598x.c b/drivers/usb/typec/tps6598x.c
+index 1c0033ad8738..e1109b15636d 100644
+--- a/drivers/usb/typec/tps6598x.c
++++ b/drivers/usb/typec/tps6598x.c
+@@ -110,6 +110,20 @@ tps6598x_block_read(struct tps6598x *tps, u8 reg, void *val, size_t len)
+ 	return 0;
+ }
+ 
++static int tps6598x_block_write(struct tps6598x *tps, u8 reg,
++				void *val, size_t len)
++{
++	u8 data[TPS_MAX_LEN + 1];
++
++	if (!tps->i2c_protocol)
++		return regmap_raw_write(tps->regmap, reg, val, len);
++
++	data[0] = len;
++	memcpy(&data[1], val, len);
++
++	return regmap_raw_write(tps->regmap, reg, data, sizeof(data));
++}
++
+ static inline int tps6598x_read16(struct tps6598x *tps, u8 reg, u16 *val)
+ {
+ 	return tps6598x_block_read(tps, reg, val, sizeof(u16));
+@@ -127,23 +141,23 @@ static inline int tps6598x_read64(struct tps6598x *tps, u8 reg, u64 *val)
+ 
+ static inline int tps6598x_write16(struct tps6598x *tps, u8 reg, u16 val)
+ {
+-	return regmap_raw_write(tps->regmap, reg, &val, sizeof(u16));
++	return tps6598x_block_write(tps, reg, &val, sizeof(u16));
+ }
+ 
+ static inline int tps6598x_write32(struct tps6598x *tps, u8 reg, u32 val)
+ {
+-	return regmap_raw_write(tps->regmap, reg, &val, sizeof(u32));
++	return tps6598x_block_write(tps, reg, &val, sizeof(u32));
+ }
+ 
+ static inline int tps6598x_write64(struct tps6598x *tps, u8 reg, u64 val)
+ {
+-	return regmap_raw_write(tps->regmap, reg, &val, sizeof(u64));
++	return tps6598x_block_write(tps, reg, &val, sizeof(u64));
+ }
+ 
+ static inline int
+ tps6598x_write_4cc(struct tps6598x *tps, u8 reg, const char *val)
+ {
+-	return regmap_raw_write(tps->regmap, reg, &val, sizeof(u32));
++	return tps6598x_block_write(tps, reg, &val, sizeof(u32));
+ }
+ 
+ static int tps6598x_read_partner_identity(struct tps6598x *tps)
+@@ -229,8 +243,8 @@ static int tps6598x_exec_cmd(struct tps6598x *tps, const char *cmd,
+ 		return -EBUSY;
+ 
+ 	if (in_len) {
+-		ret = regmap_raw_write(tps->regmap, TPS_REG_DATA1,
+-				       in_data, in_len);
++		ret = tps6598x_block_write(tps, TPS_REG_DATA1,
++					   in_data, in_len);
+ 		if (ret)
+ 			return ret;
+ 	}
+diff --git a/drivers/video/backlight/pwm_bl.c b/drivers/video/backlight/pwm_bl.c
+index feb90764a811..53b8ceea9bde 100644
+--- a/drivers/video/backlight/pwm_bl.c
++++ b/drivers/video/backlight/pwm_bl.c
+@@ -435,7 +435,7 @@ static int pwm_backlight_initial_power_state(const struct pwm_bl_data *pb)
+ 	 */
+ 
+ 	/* if the enable GPIO is disabled, do not enable the backlight */
+-	if (pb->enable_gpio && gpiod_get_value(pb->enable_gpio) == 0)
++	if (pb->enable_gpio && gpiod_get_value_cansleep(pb->enable_gpio) == 0)
+ 		return FB_BLANK_POWERDOWN;
+ 
+ 	/* The regulator is disabled, do not enable the backlight */
+diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
+index cb43a2258c51..4721491e6c8c 100644
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -431,6 +431,9 @@ static void fb_do_show_logo(struct fb_info *info, struct fb_image *image,
+ {
+ 	unsigned int x;
+ 
++	if (image->width > info->var.xres || image->height > info->var.yres)
++		return;
++
+ 	if (rotate == FB_ROTATE_UR) {
+ 		for (x = 0;
+ 		     x < num && image->dx + image->width <= info->var.xres;
+diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
+index a0b07c331255..a38b65b97be0 100644
+--- a/drivers/virtio/virtio_ring.c
++++ b/drivers/virtio/virtio_ring.c
+@@ -871,6 +871,8 @@ static struct virtqueue *vring_create_virtqueue_split(
+ 					  GFP_KERNEL|__GFP_NOWARN|__GFP_ZERO);
+ 		if (queue)
+ 			break;
++		if (!may_reduce_num)
++			return NULL;
+ 	}
+ 
+ 	if (!num)
+diff --git a/drivers/xen/gntdev-dmabuf.c b/drivers/xen/gntdev-dmabuf.c
+index cba6b586bfbd..d97fcfc5e558 100644
+--- a/drivers/xen/gntdev-dmabuf.c
++++ b/drivers/xen/gntdev-dmabuf.c
+@@ -80,6 +80,12 @@ struct gntdev_dmabuf_priv {
+ 	struct list_head imp_list;
+ 	/* This is the lock which protects dma_buf_xxx lists. */
+ 	struct mutex lock;
++	/*
++	 * We reference this file while exporting dma-bufs, so
++	 * the grant device context is not destroyed while there are
++	 * external users alive.
++	 */
++	struct file *filp;
+ };
+ 
+ /* DMA buffer export support. */
+@@ -311,6 +317,7 @@ static void dmabuf_exp_release(struct kref *kref)
+ 
+ 	dmabuf_exp_wait_obj_signal(gntdev_dmabuf->priv, gntdev_dmabuf);
+ 	list_del(&gntdev_dmabuf->next);
++	fput(gntdev_dmabuf->priv->filp);
+ 	kfree(gntdev_dmabuf);
+ }
+ 
+@@ -423,6 +430,7 @@ static int dmabuf_exp_from_pages(struct gntdev_dmabuf_export_args *args)
+ 	mutex_lock(&args->dmabuf_priv->lock);
+ 	list_add(&gntdev_dmabuf->next, &args->dmabuf_priv->exp_list);
+ 	mutex_unlock(&args->dmabuf_priv->lock);
++	get_file(gntdev_dmabuf->priv->filp);
+ 	return 0;
+ 
+ fail:
+@@ -834,7 +842,7 @@ long gntdev_ioctl_dmabuf_imp_release(struct gntdev_priv *priv,
+ 	return dmabuf_imp_release(priv->dmabuf_priv, op.fd);
+ }
+ 
+-struct gntdev_dmabuf_priv *gntdev_dmabuf_init(void)
++struct gntdev_dmabuf_priv *gntdev_dmabuf_init(struct file *filp)
+ {
+ 	struct gntdev_dmabuf_priv *priv;
+ 
+@@ -847,6 +855,8 @@ struct gntdev_dmabuf_priv *gntdev_dmabuf_init(void)
+ 	INIT_LIST_HEAD(&priv->exp_wait_list);
+ 	INIT_LIST_HEAD(&priv->imp_list);
+ 
++	priv->filp = filp;
++
+ 	return priv;
+ }
+ 
+diff --git a/drivers/xen/gntdev-dmabuf.h b/drivers/xen/gntdev-dmabuf.h
+index 7220a53d0fc5..3d9b9cf9d5a1 100644
+--- a/drivers/xen/gntdev-dmabuf.h
++++ b/drivers/xen/gntdev-dmabuf.h
+@@ -14,7 +14,7 @@
+ struct gntdev_dmabuf_priv;
+ struct gntdev_priv;
+ 
+-struct gntdev_dmabuf_priv *gntdev_dmabuf_init(void);
++struct gntdev_dmabuf_priv *gntdev_dmabuf_init(struct file *filp);
+ 
+ void gntdev_dmabuf_fini(struct gntdev_dmabuf_priv *priv);
+ 
+diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
+index 5efc5eee9544..7cf9c51318aa 100644
+--- a/drivers/xen/gntdev.c
++++ b/drivers/xen/gntdev.c
+@@ -600,7 +600,7 @@ static int gntdev_open(struct inode *inode, struct file *flip)
+ 	mutex_init(&priv->lock);
+ 
+ #ifdef CONFIG_XEN_GNTDEV_DMABUF
+-	priv->dmabuf_priv = gntdev_dmabuf_init();
++	priv->dmabuf_priv = gntdev_dmabuf_init(flip);
+ 	if (IS_ERR(priv->dmabuf_priv)) {
+ 		ret = PTR_ERR(priv->dmabuf_priv);
+ 		kfree(priv);
+diff --git a/fs/9p/v9fs_vfs.h b/fs/9p/v9fs_vfs.h
+index 5a0db6dec8d1..aaee1e6584e6 100644
+--- a/fs/9p/v9fs_vfs.h
++++ b/fs/9p/v9fs_vfs.h
+@@ -40,6 +40,9 @@
+  */
+ #define P9_LOCK_TIMEOUT (30*HZ)
+ 
++/* flags for v9fs_stat2inode() & v9fs_stat2inode_dotl() */
++#define V9FS_STAT2INODE_KEEP_ISIZE 1
++
+ extern struct file_system_type v9fs_fs_type;
+ extern const struct address_space_operations v9fs_addr_operations;
+ extern const struct file_operations v9fs_file_operations;
+@@ -61,8 +64,10 @@ int v9fs_init_inode(struct v9fs_session_info *v9ses,
+ 		    struct inode *inode, umode_t mode, dev_t);
+ void v9fs_evict_inode(struct inode *inode);
+ ino_t v9fs_qid2ino(struct p9_qid *qid);
+-void v9fs_stat2inode(struct p9_wstat *, struct inode *, struct super_block *);
+-void v9fs_stat2inode_dotl(struct p9_stat_dotl *, struct inode *);
++void v9fs_stat2inode(struct p9_wstat *stat, struct inode *inode,
++		      struct super_block *sb, unsigned int flags);
++void v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode,
++			   unsigned int flags);
+ int v9fs_dir_release(struct inode *inode, struct file *filp);
+ int v9fs_file_open(struct inode *inode, struct file *file);
+ void v9fs_inode2stat(struct inode *inode, struct p9_wstat *stat);
+@@ -83,4 +88,18 @@ static inline void v9fs_invalidate_inode_attr(struct inode *inode)
+ }
+ 
+ int v9fs_open_to_dotl_flags(int flags);
++
++static inline void v9fs_i_size_write(struct inode *inode, loff_t i_size)
++{
++	/*
++	 * 32-bit need the lock, concurrent updates could break the
++	 * sequences and make i_size_read() loop forever.
++	 * 64-bit updates are atomic and can skip the locking.
++	 */
++	if (sizeof(i_size) > sizeof(long))
++		spin_lock(&inode->i_lock);
++	i_size_write(inode, i_size);
++	if (sizeof(i_size) > sizeof(long))
++		spin_unlock(&inode->i_lock);
++}
+ #endif
+diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c
+index a25efa782fcc..9a1125305d84 100644
+--- a/fs/9p/vfs_file.c
++++ b/fs/9p/vfs_file.c
+@@ -446,7 +446,11 @@ v9fs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ 		i_size = i_size_read(inode);
+ 		if (iocb->ki_pos > i_size) {
+ 			inode_add_bytes(inode, iocb->ki_pos - i_size);
+-			i_size_write(inode, iocb->ki_pos);
++			/*
++			 * Need to serialize against i_size_write() in
++			 * v9fs_stat2inode()
++			 */
++			v9fs_i_size_write(inode, iocb->ki_pos);
+ 		}
+ 		return retval;
+ 	}
+diff --git a/fs/9p/vfs_inode.c b/fs/9p/vfs_inode.c
+index 85ff859d3af5..72b779bc0942 100644
+--- a/fs/9p/vfs_inode.c
++++ b/fs/9p/vfs_inode.c
+@@ -538,7 +538,7 @@ static struct inode *v9fs_qid_iget(struct super_block *sb,
+ 	if (retval)
+ 		goto error;
+ 
+-	v9fs_stat2inode(st, inode, sb);
++	v9fs_stat2inode(st, inode, sb, 0);
+ 	v9fs_cache_inode_get_cookie(inode);
+ 	unlock_new_inode(inode);
+ 	return inode;
+@@ -1092,7 +1092,7 @@ v9fs_vfs_getattr(const struct path *path, struct kstat *stat,
+ 	if (IS_ERR(st))
+ 		return PTR_ERR(st);
+ 
+-	v9fs_stat2inode(st, d_inode(dentry), dentry->d_sb);
++	v9fs_stat2inode(st, d_inode(dentry), dentry->d_sb, 0);
+ 	generic_fillattr(d_inode(dentry), stat);
+ 
+ 	p9stat_free(st);
+@@ -1170,12 +1170,13 @@ static int v9fs_vfs_setattr(struct dentry *dentry, struct iattr *iattr)
+  * @stat: Plan 9 metadata (mistat) structure
+  * @inode: inode to populate
+  * @sb: superblock of filesystem
++ * @flags: control flags (e.g. V9FS_STAT2INODE_KEEP_ISIZE)
+  *
+  */
+ 
+ void
+ v9fs_stat2inode(struct p9_wstat *stat, struct inode *inode,
+-	struct super_block *sb)
++		 struct super_block *sb, unsigned int flags)
+ {
+ 	umode_t mode;
+ 	char ext[32];
+@@ -1216,10 +1217,11 @@ v9fs_stat2inode(struct p9_wstat *stat, struct inode *inode,
+ 	mode = p9mode2perm(v9ses, stat);
+ 	mode |= inode->i_mode & ~S_IALLUGO;
+ 	inode->i_mode = mode;
+-	i_size_write(inode, stat->length);
+ 
++	if (!(flags & V9FS_STAT2INODE_KEEP_ISIZE))
++		v9fs_i_size_write(inode, stat->length);
+ 	/* not real number of blocks, but 512 byte ones ... */
+-	inode->i_blocks = (i_size_read(inode) + 512 - 1) >> 9;
++	inode->i_blocks = (stat->length + 512 - 1) >> 9;
+ 	v9inode->cache_validity &= ~V9FS_INO_INVALID_ATTR;
+ }
+ 
+@@ -1416,9 +1418,9 @@ int v9fs_refresh_inode(struct p9_fid *fid, struct inode *inode)
+ {
+ 	int umode;
+ 	dev_t rdev;
+-	loff_t i_size;
+ 	struct p9_wstat *st;
+ 	struct v9fs_session_info *v9ses;
++	unsigned int flags;
+ 
+ 	v9ses = v9fs_inode2v9ses(inode);
+ 	st = p9_client_stat(fid);
+@@ -1431,16 +1433,13 @@ int v9fs_refresh_inode(struct p9_fid *fid, struct inode *inode)
+ 	if ((inode->i_mode & S_IFMT) != (umode & S_IFMT))
+ 		goto out;
+ 
+-	spin_lock(&inode->i_lock);
+ 	/*
+ 	 * We don't want to refresh inode->i_size,
+ 	 * because we may have cached data
+ 	 */
+-	i_size = inode->i_size;
+-	v9fs_stat2inode(st, inode, inode->i_sb);
+-	if (v9ses->cache == CACHE_LOOSE || v9ses->cache == CACHE_FSCACHE)
+-		inode->i_size = i_size;
+-	spin_unlock(&inode->i_lock);
++	flags = (v9ses->cache == CACHE_LOOSE || v9ses->cache == CACHE_FSCACHE) ?
++		V9FS_STAT2INODE_KEEP_ISIZE : 0;
++	v9fs_stat2inode(st, inode, inode->i_sb, flags);
+ out:
+ 	p9stat_free(st);
+ 	kfree(st);
+diff --git a/fs/9p/vfs_inode_dotl.c b/fs/9p/vfs_inode_dotl.c
+index 4823e1c46999..a950a927a626 100644
+--- a/fs/9p/vfs_inode_dotl.c
++++ b/fs/9p/vfs_inode_dotl.c
+@@ -143,7 +143,7 @@ static struct inode *v9fs_qid_iget_dotl(struct super_block *sb,
+ 	if (retval)
+ 		goto error;
+ 
+-	v9fs_stat2inode_dotl(st, inode);
++	v9fs_stat2inode_dotl(st, inode, 0);
+ 	v9fs_cache_inode_get_cookie(inode);
+ 	retval = v9fs_get_acl(inode, fid);
+ 	if (retval)
+@@ -496,7 +496,7 @@ v9fs_vfs_getattr_dotl(const struct path *path, struct kstat *stat,
+ 	if (IS_ERR(st))
+ 		return PTR_ERR(st);
+ 
+-	v9fs_stat2inode_dotl(st, d_inode(dentry));
++	v9fs_stat2inode_dotl(st, d_inode(dentry), 0);
+ 	generic_fillattr(d_inode(dentry), stat);
+ 	/* Change block size to what the server returned */
+ 	stat->blksize = st->st_blksize;
+@@ -607,11 +607,13 @@ int v9fs_vfs_setattr_dotl(struct dentry *dentry, struct iattr *iattr)
+  * v9fs_stat2inode_dotl - populate an inode structure with stat info
+  * @stat: stat structure
+  * @inode: inode to populate
++ * @flags: ctrl flags (e.g. V9FS_STAT2INODE_KEEP_ISIZE)
+  *
+  */
+ 
+ void
+-v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode)
++v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode,
++		      unsigned int flags)
+ {
+ 	umode_t mode;
+ 	struct v9fs_inode *v9inode = V9FS_I(inode);
+@@ -631,7 +633,8 @@ v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode)
+ 		mode |= inode->i_mode & ~S_IALLUGO;
+ 		inode->i_mode = mode;
+ 
+-		i_size_write(inode, stat->st_size);
++		if (!(flags & V9FS_STAT2INODE_KEEP_ISIZE))
++			v9fs_i_size_write(inode, stat->st_size);
+ 		inode->i_blocks = stat->st_blocks;
+ 	} else {
+ 		if (stat->st_result_mask & P9_STATS_ATIME) {
+@@ -661,8 +664,9 @@ v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode)
+ 		}
+ 		if (stat->st_result_mask & P9_STATS_RDEV)
+ 			inode->i_rdev = new_decode_dev(stat->st_rdev);
+-		if (stat->st_result_mask & P9_STATS_SIZE)
+-			i_size_write(inode, stat->st_size);
++		if (!(flags & V9FS_STAT2INODE_KEEP_ISIZE) &&
++		    stat->st_result_mask & P9_STATS_SIZE)
++			v9fs_i_size_write(inode, stat->st_size);
+ 		if (stat->st_result_mask & P9_STATS_BLOCKS)
+ 			inode->i_blocks = stat->st_blocks;
+ 	}
+@@ -928,9 +932,9 @@ v9fs_vfs_get_link_dotl(struct dentry *dentry,
+ 
+ int v9fs_refresh_inode_dotl(struct p9_fid *fid, struct inode *inode)
+ {
+-	loff_t i_size;
+ 	struct p9_stat_dotl *st;
+ 	struct v9fs_session_info *v9ses;
++	unsigned int flags;
+ 
+ 	v9ses = v9fs_inode2v9ses(inode);
+ 	st = p9_client_getattr_dotl(fid, P9_STATS_ALL);
+@@ -942,16 +946,13 @@ int v9fs_refresh_inode_dotl(struct p9_fid *fid, struct inode *inode)
+ 	if ((inode->i_mode & S_IFMT) != (st->st_mode & S_IFMT))
+ 		goto out;
+ 
+-	spin_lock(&inode->i_lock);
+ 	/*
+ 	 * We don't want to refresh inode->i_size,
+ 	 * because we may have cached data
+ 	 */
+-	i_size = inode->i_size;
+-	v9fs_stat2inode_dotl(st, inode);
+-	if (v9ses->cache == CACHE_LOOSE || v9ses->cache == CACHE_FSCACHE)
+-		inode->i_size = i_size;
+-	spin_unlock(&inode->i_lock);
++	flags = (v9ses->cache == CACHE_LOOSE || v9ses->cache == CACHE_FSCACHE) ?
++		V9FS_STAT2INODE_KEEP_ISIZE : 0;
++	v9fs_stat2inode_dotl(st, inode, flags);
+ out:
+ 	kfree(st);
+ 	return 0;
+diff --git a/fs/9p/vfs_super.c b/fs/9p/vfs_super.c
+index 48ce50484e80..eeab9953af89 100644
+--- a/fs/9p/vfs_super.c
++++ b/fs/9p/vfs_super.c
+@@ -172,7 +172,7 @@ static struct dentry *v9fs_mount(struct file_system_type *fs_type, int flags,
+ 			goto release_sb;
+ 		}
+ 		d_inode(root)->i_ino = v9fs_qid2ino(&st->qid);
+-		v9fs_stat2inode_dotl(st, d_inode(root));
++		v9fs_stat2inode_dotl(st, d_inode(root), 0);
+ 		kfree(st);
+ 	} else {
+ 		struct p9_wstat *st = NULL;
+@@ -183,7 +183,7 @@ static struct dentry *v9fs_mount(struct file_system_type *fs_type, int flags,
+ 		}
+ 
+ 		d_inode(root)->i_ino = v9fs_qid2ino(&st->qid);
+-		v9fs_stat2inode(st, d_inode(root), sb);
++		v9fs_stat2inode(st, d_inode(root), sb, 0);
+ 
+ 		p9stat_free(st);
+ 		kfree(st);
+diff --git a/fs/aio.c b/fs/aio.c
+index aaaaf4d12c73..3d9669d011b9 100644
+--- a/fs/aio.c
++++ b/fs/aio.c
+@@ -167,9 +167,13 @@ struct kioctx {
+ 	unsigned		id;
+ };
+ 
++/*
++ * First field must be the file pointer in all the
++ * iocb unions! See also 'struct kiocb' in <linux/fs.h>
++ */
+ struct fsync_iocb {
+-	struct work_struct	work;
+ 	struct file		*file;
++	struct work_struct	work;
+ 	bool			datasync;
+ };
+ 
+@@ -183,8 +187,15 @@ struct poll_iocb {
+ 	struct work_struct	work;
+ };
+ 
++/*
++ * NOTE! Each of the iocb union members has the file pointer
++ * as the first entry in their struct definition. So you can
++ * access the file pointer through any of the sub-structs,
++ * or directly as just 'ki_filp' in this struct.
++ */
+ struct aio_kiocb {
+ 	union {
++		struct file		*ki_filp;
+ 		struct kiocb		rw;
+ 		struct fsync_iocb	fsync;
+ 		struct poll_iocb	poll;
+@@ -1060,6 +1071,8 @@ static inline void iocb_put(struct aio_kiocb *iocb)
+ {
+ 	if (refcount_read(&iocb->ki_refcnt) == 0 ||
+ 	    refcount_dec_and_test(&iocb->ki_refcnt)) {
++		if (iocb->ki_filp)
++			fput(iocb->ki_filp);
+ 		percpu_ref_put(&iocb->ki_ctx->reqs);
+ 		kmem_cache_free(kiocb_cachep, iocb);
+ 	}
+@@ -1424,7 +1437,6 @@ static void aio_complete_rw(struct kiocb *kiocb, long res, long res2)
+ 		file_end_write(kiocb->ki_filp);
+ 	}
+ 
+-	fput(kiocb->ki_filp);
+ 	aio_complete(iocb, res, res2);
+ }
+ 
+@@ -1432,9 +1444,6 @@ static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb)
+ {
+ 	int ret;
+ 
+-	req->ki_filp = fget(iocb->aio_fildes);
+-	if (unlikely(!req->ki_filp))
+-		return -EBADF;
+ 	req->ki_complete = aio_complete_rw;
+ 	req->private = NULL;
+ 	req->ki_pos = iocb->aio_offset;
+@@ -1451,7 +1460,7 @@ static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb)
+ 		ret = ioprio_check_cap(iocb->aio_reqprio);
+ 		if (ret) {
+ 			pr_debug("aio ioprio check cap error: %d\n", ret);
+-			goto out_fput;
++			return ret;
+ 		}
+ 
+ 		req->ki_ioprio = iocb->aio_reqprio;
+@@ -1460,14 +1469,10 @@ static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb)
+ 
+ 	ret = kiocb_set_rw_flags(req, iocb->aio_rw_flags);
+ 	if (unlikely(ret))
+-		goto out_fput;
++		return ret;
+ 
+ 	req->ki_flags &= ~IOCB_HIPRI; /* no one is going to poll for this I/O */
+ 	return 0;
+-
+-out_fput:
+-	fput(req->ki_filp);
+-	return ret;
+ }
+ 
+ static int aio_setup_rw(int rw, const struct iocb *iocb, struct iovec **iovec,
+@@ -1521,24 +1526,19 @@ static ssize_t aio_read(struct kiocb *req, const struct iocb *iocb,
+ 	if (ret)
+ 		return ret;
+ 	file = req->ki_filp;
+-
+-	ret = -EBADF;
+ 	if (unlikely(!(file->f_mode & FMODE_READ)))
+-		goto out_fput;
++		return -EBADF;
+ 	ret = -EINVAL;
+ 	if (unlikely(!file->f_op->read_iter))
+-		goto out_fput;
++		return -EINVAL;
+ 
+ 	ret = aio_setup_rw(READ, iocb, &iovec, vectored, compat, &iter);
+ 	if (ret)
+-		goto out_fput;
++		return ret;
+ 	ret = rw_verify_area(READ, file, &req->ki_pos, iov_iter_count(&iter));
+ 	if (!ret)
+ 		aio_rw_done(req, call_read_iter(file, req, &iter));
+ 	kfree(iovec);
+-out_fput:
+-	if (unlikely(ret))
+-		fput(file);
+ 	return ret;
+ }
+ 
+@@ -1555,16 +1555,14 @@ static ssize_t aio_write(struct kiocb *req, const struct iocb *iocb,
+ 		return ret;
+ 	file = req->ki_filp;
+ 
+-	ret = -EBADF;
+ 	if (unlikely(!(file->f_mode & FMODE_WRITE)))
+-		goto out_fput;
+-	ret = -EINVAL;
++		return -EBADF;
+ 	if (unlikely(!file->f_op->write_iter))
+-		goto out_fput;
++		return -EINVAL;
+ 
+ 	ret = aio_setup_rw(WRITE, iocb, &iovec, vectored, compat, &iter);
+ 	if (ret)
+-		goto out_fput;
++		return ret;
+ 	ret = rw_verify_area(WRITE, file, &req->ki_pos, iov_iter_count(&iter));
+ 	if (!ret) {
+ 		/*
+@@ -1582,9 +1580,6 @@ static ssize_t aio_write(struct kiocb *req, const struct iocb *iocb,
+ 		aio_rw_done(req, call_write_iter(file, req, &iter));
+ 	}
+ 	kfree(iovec);
+-out_fput:
+-	if (unlikely(ret))
+-		fput(file);
+ 	return ret;
+ }
+ 
+@@ -1594,7 +1589,6 @@ static void aio_fsync_work(struct work_struct *work)
+ 	int ret;
+ 
+ 	ret = vfs_fsync(req->file, req->datasync);
+-	fput(req->file);
+ 	aio_complete(container_of(req, struct aio_kiocb, fsync), ret, 0);
+ }
+ 
+@@ -1605,13 +1599,8 @@ static int aio_fsync(struct fsync_iocb *req, const struct iocb *iocb,
+ 			iocb->aio_rw_flags))
+ 		return -EINVAL;
+ 
+-	req->file = fget(iocb->aio_fildes);
+-	if (unlikely(!req->file))
+-		return -EBADF;
+-	if (unlikely(!req->file->f_op->fsync)) {
+-		fput(req->file);
++	if (unlikely(!req->file->f_op->fsync))
+ 		return -EINVAL;
+-	}
+ 
+ 	req->datasync = datasync;
+ 	INIT_WORK(&req->work, aio_fsync_work);
+@@ -1621,10 +1610,7 @@ static int aio_fsync(struct fsync_iocb *req, const struct iocb *iocb,
+ 
+ static inline void aio_poll_complete(struct aio_kiocb *iocb, __poll_t mask)
+ {
+-	struct file *file = iocb->poll.file;
+-
+ 	aio_complete(iocb, mangle_poll(mask), 0);
+-	fput(file);
+ }
+ 
+ static void aio_poll_complete_work(struct work_struct *work)
+@@ -1680,6 +1666,7 @@ static int aio_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
+ 	struct poll_iocb *req = container_of(wait, struct poll_iocb, wait);
+ 	struct aio_kiocb *iocb = container_of(req, struct aio_kiocb, poll);
+ 	__poll_t mask = key_to_poll(key);
++	unsigned long flags;
+ 
+ 	req->woken = true;
+ 
+@@ -1688,10 +1675,15 @@ static int aio_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
+ 		if (!(mask & req->events))
+ 			return 0;
+ 
+-		/* try to complete the iocb inline if we can: */
+-		if (spin_trylock(&iocb->ki_ctx->ctx_lock)) {
++		/*
++		 * Try to complete the iocb inline if we can. Use
++		 * irqsave/irqrestore because not all filesystems (e.g. fuse)
++		 * call this function with IRQs disabled and because IRQs
++		 * have to be disabled before ctx_lock is obtained.
++		 */
++		if (spin_trylock_irqsave(&iocb->ki_ctx->ctx_lock, flags)) {
+ 			list_del(&iocb->ki_list);
+-			spin_unlock(&iocb->ki_ctx->ctx_lock);
++			spin_unlock_irqrestore(&iocb->ki_ctx->ctx_lock, flags);
+ 
+ 			list_del_init(&req->wait.entry);
+ 			aio_poll_complete(iocb, mask);
+@@ -1743,9 +1735,6 @@ static ssize_t aio_poll(struct aio_kiocb *aiocb, const struct iocb *iocb)
+ 
+ 	INIT_WORK(&req->work, aio_poll_complete_work);
+ 	req->events = demangle_poll(iocb->aio_buf) | EPOLLERR | EPOLLHUP;
+-	req->file = fget(iocb->aio_fildes);
+-	if (unlikely(!req->file))
+-		return -EBADF;
+ 
+ 	req->head = NULL;
+ 	req->woken = false;
+@@ -1788,10 +1777,8 @@ static ssize_t aio_poll(struct aio_kiocb *aiocb, const struct iocb *iocb)
+ 	spin_unlock_irq(&ctx->ctx_lock);
+ 
+ out:
+-	if (unlikely(apt.error)) {
+-		fput(req->file);
++	if (unlikely(apt.error))
+ 		return apt.error;
+-	}
+ 
+ 	if (mask)
+ 		aio_poll_complete(aiocb, mask);
+@@ -1829,6 +1816,11 @@ static int __io_submit_one(struct kioctx *ctx, const struct iocb *iocb,
+ 	if (unlikely(!req))
+ 		goto out_put_reqs_available;
+ 
++	req->ki_filp = fget(iocb->aio_fildes);
++	ret = -EBADF;
++	if (unlikely(!req->ki_filp))
++		goto out_put_req;
++
+ 	if (iocb->aio_flags & IOCB_FLAG_RESFD) {
+ 		/*
+ 		 * If the IOCB_FLAG_RESFD flag of aio_flags is set, get an
+diff --git a/fs/block_dev.c b/fs/block_dev.c
+index 58a4c1217fa8..06ef48ad1998 100644
+--- a/fs/block_dev.c
++++ b/fs/block_dev.c
+@@ -298,10 +298,10 @@ static void blkdev_bio_end_io(struct bio *bio)
+ 	struct blkdev_dio *dio = bio->bi_private;
+ 	bool should_dirty = dio->should_dirty;
+ 
+-	if (dio->multi_bio && !atomic_dec_and_test(&dio->ref)) {
+-		if (bio->bi_status && !dio->bio.bi_status)
+-			dio->bio.bi_status = bio->bi_status;
+-	} else {
++	if (bio->bi_status && !dio->bio.bi_status)
++		dio->bio.bi_status = bio->bi_status;
++
++	if (!dio->multi_bio || atomic_dec_and_test(&dio->ref)) {
+ 		if (!dio->is_sync) {
+ 			struct kiocb *iocb = dio->iocb;
+ 			ssize_t ret;
+diff --git a/fs/btrfs/acl.c b/fs/btrfs/acl.c
+index 3b66c957ea6f..5810463dc6d2 100644
+--- a/fs/btrfs/acl.c
++++ b/fs/btrfs/acl.c
+@@ -9,6 +9,7 @@
+ #include <linux/posix_acl_xattr.h>
+ #include <linux/posix_acl.h>
+ #include <linux/sched.h>
++#include <linux/sched/mm.h>
+ #include <linux/slab.h>
+ 
+ #include "ctree.h"
+@@ -72,8 +73,16 @@ static int __btrfs_set_acl(struct btrfs_trans_handle *trans,
+ 	}
+ 
+ 	if (acl) {
++		unsigned int nofs_flag;
++
+ 		size = posix_acl_xattr_size(acl->a_count);
++		/*
++		 * We're holding a transaction handle, so use a NOFS memory
++		 * allocation context to avoid deadlock if reclaim happens.
++		 */
++		nofs_flag = memalloc_nofs_save();
+ 		value = kmalloc(size, GFP_KERNEL);
++		memalloc_nofs_restore(nofs_flag);
+ 		if (!value) {
+ 			ret = -ENOMEM;
+ 			goto out;
+diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c
+index 8750c835f535..c4dea3b7349e 100644
+--- a/fs/btrfs/dev-replace.c
++++ b/fs/btrfs/dev-replace.c
+@@ -862,6 +862,7 @@ int btrfs_dev_replace_cancel(struct btrfs_fs_info *fs_info)
+ 			btrfs_destroy_dev_replace_tgtdev(tgt_device);
+ 		break;
+ 	default:
++		up_write(&dev_replace->rwsem);
+ 		result = -EINVAL;
+ 	}
+ 
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 6a2a2a951705..888d72dda794 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -17,6 +17,7 @@
+ #include <linux/semaphore.h>
+ #include <linux/error-injection.h>
+ #include <linux/crc32c.h>
++#include <linux/sched/mm.h>
+ #include <asm/unaligned.h>
+ #include "ctree.h"
+ #include "disk-io.h"
+@@ -1258,10 +1259,17 @@ struct btrfs_root *btrfs_create_tree(struct btrfs_trans_handle *trans,
+ 	struct btrfs_root *tree_root = fs_info->tree_root;
+ 	struct btrfs_root *root;
+ 	struct btrfs_key key;
++	unsigned int nofs_flag;
+ 	int ret = 0;
+ 	uuid_le uuid = NULL_UUID_LE;
+ 
++	/*
++	 * We're holding a transaction handle, so use a NOFS memory allocation
++	 * context to avoid deadlock if reclaim happens.
++	 */
++	nofs_flag = memalloc_nofs_save();
+ 	root = btrfs_alloc_root(fs_info, GFP_KERNEL);
++	memalloc_nofs_restore(nofs_flag);
+ 	if (!root)
+ 		return ERR_PTR(-ENOMEM);
+ 
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index d81035b7ea7d..1b68700bc1c5 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -4808,6 +4808,7 @@ skip_async:
+ }
+ 
+ struct reserve_ticket {
++	u64 orig_bytes;
+ 	u64 bytes;
+ 	int error;
+ 	struct list_head list;
+@@ -5030,7 +5031,7 @@ static inline int need_do_async_reclaim(struct btrfs_fs_info *fs_info,
+ 		!test_bit(BTRFS_FS_STATE_REMOUNTING, &fs_info->fs_state));
+ }
+ 
+-static void wake_all_tickets(struct list_head *head)
++static bool wake_all_tickets(struct list_head *head)
+ {
+ 	struct reserve_ticket *ticket;
+ 
+@@ -5039,7 +5040,10 @@ static void wake_all_tickets(struct list_head *head)
+ 		list_del_init(&ticket->list);
+ 		ticket->error = -ENOSPC;
+ 		wake_up(&ticket->wait);
++		if (ticket->bytes != ticket->orig_bytes)
++			return true;
+ 	}
++	return false;
+ }
+ 
+ /*
+@@ -5094,8 +5098,12 @@ static void btrfs_async_reclaim_metadata_space(struct work_struct *work)
+ 		if (flush_state > COMMIT_TRANS) {
+ 			commit_cycles++;
+ 			if (commit_cycles > 2) {
+-				wake_all_tickets(&space_info->tickets);
+-				space_info->flush = 0;
++				if (wake_all_tickets(&space_info->tickets)) {
++					flush_state = FLUSH_DELAYED_ITEMS_NR;
++					commit_cycles--;
++				} else {
++					space_info->flush = 0;
++				}
+ 			} else {
+ 				flush_state = FLUSH_DELAYED_ITEMS_NR;
+ 			}
+@@ -5147,10 +5155,11 @@ static void priority_reclaim_metadata_space(struct btrfs_fs_info *fs_info,
+ 
+ static int wait_reserve_ticket(struct btrfs_fs_info *fs_info,
+ 			       struct btrfs_space_info *space_info,
+-			       struct reserve_ticket *ticket, u64 orig_bytes)
++			       struct reserve_ticket *ticket)
+ 
+ {
+ 	DEFINE_WAIT(wait);
++	u64 reclaim_bytes = 0;
+ 	int ret = 0;
+ 
+ 	spin_lock(&space_info->lock);
+@@ -5171,14 +5180,12 @@ static int wait_reserve_ticket(struct btrfs_fs_info *fs_info,
+ 		ret = ticket->error;
+ 	if (!list_empty(&ticket->list))
+ 		list_del_init(&ticket->list);
+-	if (ticket->bytes && ticket->bytes < orig_bytes) {
+-		u64 num_bytes = orig_bytes - ticket->bytes;
+-		update_bytes_may_use(space_info, -num_bytes);
+-		trace_btrfs_space_reservation(fs_info, "space_info",
+-					      space_info->flags, num_bytes, 0);
+-	}
++	if (ticket->bytes && ticket->bytes < ticket->orig_bytes)
++		reclaim_bytes = ticket->orig_bytes - ticket->bytes;
+ 	spin_unlock(&space_info->lock);
+ 
++	if (reclaim_bytes)
++		space_info_add_old_bytes(fs_info, space_info, reclaim_bytes);
+ 	return ret;
+ }
+ 
+@@ -5204,6 +5211,7 @@ static int __reserve_metadata_bytes(struct btrfs_fs_info *fs_info,
+ {
+ 	struct reserve_ticket ticket;
+ 	u64 used;
++	u64 reclaim_bytes = 0;
+ 	int ret = 0;
+ 
+ 	ASSERT(orig_bytes);
+@@ -5239,6 +5247,7 @@ static int __reserve_metadata_bytes(struct btrfs_fs_info *fs_info,
+ 	 * the list and we will do our own flushing further down.
+ 	 */
+ 	if (ret && flush != BTRFS_RESERVE_NO_FLUSH) {
++		ticket.orig_bytes = orig_bytes;
+ 		ticket.bytes = orig_bytes;
+ 		ticket.error = 0;
+ 		init_waitqueue_head(&ticket.wait);
+@@ -5279,25 +5288,21 @@ static int __reserve_metadata_bytes(struct btrfs_fs_info *fs_info,
+ 		return ret;
+ 
+ 	if (flush == BTRFS_RESERVE_FLUSH_ALL)
+-		return wait_reserve_ticket(fs_info, space_info, &ticket,
+-					   orig_bytes);
++		return wait_reserve_ticket(fs_info, space_info, &ticket);
+ 
+ 	ret = 0;
+ 	priority_reclaim_metadata_space(fs_info, space_info, &ticket);
+ 	spin_lock(&space_info->lock);
+ 	if (ticket.bytes) {
+-		if (ticket.bytes < orig_bytes) {
+-			u64 num_bytes = orig_bytes - ticket.bytes;
+-			update_bytes_may_use(space_info, -num_bytes);
+-			trace_btrfs_space_reservation(fs_info, "space_info",
+-						      space_info->flags,
+-						      num_bytes, 0);
+-
+-		}
++		if (ticket.bytes < orig_bytes)
++			reclaim_bytes = orig_bytes - ticket.bytes;
+ 		list_del_init(&ticket.list);
+ 		ret = -ENOSPC;
+ 	}
+ 	spin_unlock(&space_info->lock);
++
++	if (reclaim_bytes)
++		space_info_add_old_bytes(fs_info, space_info, reclaim_bytes);
+ 	ASSERT(list_empty(&ticket.list));
+ 	return ret;
+ }
+@@ -6115,7 +6120,7 @@ static void btrfs_calculate_inode_block_rsv_size(struct btrfs_fs_info *fs_info,
+ 	 *
+ 	 * This is overestimating in most cases.
+ 	 */
+-	qgroup_rsv_size = outstanding_extents * fs_info->nodesize;
++	qgroup_rsv_size = (u64)outstanding_extents * fs_info->nodesize;
+ 
+ 	spin_lock(&block_rsv->lock);
+ 	block_rsv->size = reserve_size;
+@@ -8690,6 +8695,8 @@ struct walk_control {
+ 	u64 refs[BTRFS_MAX_LEVEL];
+ 	u64 flags[BTRFS_MAX_LEVEL];
+ 	struct btrfs_key update_progress;
++	struct btrfs_key drop_progress;
++	int drop_level;
+ 	int stage;
+ 	int level;
+ 	int shared_level;
+@@ -9028,6 +9035,16 @@ skip:
+ 					     ret);
+ 			}
+ 		}
++
++		/*
++		 * We need to update the next key in our walk control so we can
++		 * update the drop_progress key accordingly.  We don't care if
++		 * find_next_key doesn't find a key because that means we're at
++		 * the end and are going to clean up now.
++		 */
++		wc->drop_level = level;
++		find_next_key(path, level, &wc->drop_progress);
++
+ 		ret = btrfs_free_extent(trans, root, bytenr, fs_info->nodesize,
+ 					parent, root->root_key.objectid,
+ 					level - 1, 0);
+@@ -9378,12 +9395,14 @@ int btrfs_drop_snapshot(struct btrfs_root *root,
+ 		}
+ 
+ 		if (wc->stage == DROP_REFERENCE) {
+-			level = wc->level;
+-			btrfs_node_key(path->nodes[level],
+-				       &root_item->drop_progress,
+-				       path->slots[level]);
+-			root_item->drop_level = level;
+-		}
++			wc->drop_level = wc->level;
++			btrfs_node_key_to_cpu(path->nodes[wc->drop_level],
++					      &wc->drop_progress,
++					      path->slots[wc->drop_level]);
++		}
++		btrfs_cpu_key_to_disk(&root_item->drop_progress,
++				      &wc->drop_progress);
++		root_item->drop_level = wc->drop_level;
+ 
+ 		BUG_ON(wc->level == 0);
+ 		if (btrfs_should_end_transaction(trans) ||
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 52abe4082680..1bfb7207bbf0 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -2985,11 +2985,11 @@ static int __do_readpage(struct extent_io_tree *tree,
+ 		 */
+ 		if (test_bit(EXTENT_FLAG_COMPRESSED, &em->flags) &&
+ 		    prev_em_start && *prev_em_start != (u64)-1 &&
+-		    *prev_em_start != em->orig_start)
++		    *prev_em_start != em->start)
+ 			force_bio_submit = true;
+ 
+ 		if (prev_em_start)
+-			*prev_em_start = em->orig_start;
++			*prev_em_start = em->start;
+ 
+ 		free_extent_map(em);
+ 		em = NULL;
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 9c8e1734429c..1d64a6b8e413 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -501,6 +501,16 @@ static noinline int btrfs_ioctl_fitrim(struct file *file, void __user *arg)
+ 	if (!capable(CAP_SYS_ADMIN))
+ 		return -EPERM;
+ 
++	/*
++	 * If the fs is mounted with nologreplay, which requires it to be
++	 * mounted in RO mode as well, we can not allow discard on free space
++	 * inside block groups, because log trees refer to extents that are not
++	 * pinned in a block group's free space cache (pinning the extents is
++	 * precisely the first phase of replaying a log tree).
++	 */
++	if (btrfs_test_opt(fs_info, NOLOGREPLAY))
++		return -EROFS;
++
+ 	rcu_read_lock();
+ 	list_for_each_entry_rcu(device, &fs_info->fs_devices->devices,
+ 				dev_list) {
+@@ -3206,21 +3216,6 @@ out:
+ 	return ret;
+ }
+ 
+-static void btrfs_double_inode_unlock(struct inode *inode1, struct inode *inode2)
+-{
+-	inode_unlock(inode1);
+-	inode_unlock(inode2);
+-}
+-
+-static void btrfs_double_inode_lock(struct inode *inode1, struct inode *inode2)
+-{
+-	if (inode1 < inode2)
+-		swap(inode1, inode2);
+-
+-	inode_lock_nested(inode1, I_MUTEX_PARENT);
+-	inode_lock_nested(inode2, I_MUTEX_CHILD);
+-}
+-
+ static void btrfs_double_extent_unlock(struct inode *inode1, u64 loff1,
+ 				       struct inode *inode2, u64 loff2, u64 len)
+ {
+@@ -3989,7 +3984,7 @@ static int btrfs_remap_file_range_prep(struct file *file_in, loff_t pos_in,
+ 	if (same_inode)
+ 		inode_lock(inode_in);
+ 	else
+-		btrfs_double_inode_lock(inode_in, inode_out);
++		lock_two_nondirectories(inode_in, inode_out);
+ 
+ 	/*
+ 	 * Now that the inodes are locked, we need to start writeback ourselves
+@@ -4039,7 +4034,7 @@ static int btrfs_remap_file_range_prep(struct file *file_in, loff_t pos_in,
+ 	if (same_inode)
+ 		inode_unlock(inode_in);
+ 	else
+-		btrfs_double_inode_unlock(inode_in, inode_out);
++		unlock_two_nondirectories(inode_in, inode_out);
+ 
+ 	return ret;
+ }
+@@ -4069,7 +4064,7 @@ loff_t btrfs_remap_file_range(struct file *src_file, loff_t off,
+ 	if (same_inode)
+ 		inode_unlock(src_inode);
+ 	else
+-		btrfs_double_inode_unlock(src_inode, dst_inode);
++		unlock_two_nondirectories(src_inode, dst_inode);
+ 
+ 	return ret < 0 ? ret : len;
+ }
+diff --git a/fs/btrfs/props.c b/fs/btrfs/props.c
+index dc6140013ae8..61d22a56c0ba 100644
+--- a/fs/btrfs/props.c
++++ b/fs/btrfs/props.c
+@@ -366,11 +366,11 @@ int btrfs_subvol_inherit_props(struct btrfs_trans_handle *trans,
+ 
+ static int prop_compression_validate(const char *value, size_t len)
+ {
+-	if (!strncmp("lzo", value, len))
++	if (!strncmp("lzo", value, 3))
+ 		return 0;
+-	else if (!strncmp("zlib", value, len))
++	else if (!strncmp("zlib", value, 4))
+ 		return 0;
+-	else if (!strncmp("zstd", value, len))
++	else if (!strncmp("zstd", value, 4))
+ 		return 0;
+ 
+ 	return -EINVAL;
+@@ -396,7 +396,7 @@ static int prop_compression_apply(struct inode *inode,
+ 		btrfs_set_fs_incompat(fs_info, COMPRESS_LZO);
+ 	} else if (!strncmp("zlib", value, 4)) {
+ 		type = BTRFS_COMPRESS_ZLIB;
+-	} else if (!strncmp("zstd", value, len)) {
++	} else if (!strncmp("zstd", value, 4)) {
+ 		type = BTRFS_COMPRESS_ZSTD;
+ 		btrfs_set_fs_incompat(fs_info, COMPRESS_ZSTD);
+ 	} else {
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index 4e473a998219..e28fb43e943b 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -1917,8 +1917,8 @@ static int qgroup_trace_new_subtree_blocks(struct btrfs_trans_handle* trans,
+ 	int i;
+ 
+ 	/* Level sanity check */
+-	if (cur_level < 0 || cur_level >= BTRFS_MAX_LEVEL ||
+-	    root_level < 0 || root_level >= BTRFS_MAX_LEVEL ||
++	if (cur_level < 0 || cur_level >= BTRFS_MAX_LEVEL - 1 ||
++	    root_level < 0 || root_level >= BTRFS_MAX_LEVEL - 1 ||
+ 	    root_level < cur_level) {
+ 		btrfs_err_rl(fs_info,
+ 			"%s: bad levels, cur_level=%d root_level=%d",
+@@ -2842,16 +2842,15 @@ out:
+ /*
+  * Two limits to commit transaction in advance.
+  *
+- * For RATIO, it will be 1/RATIO of the remaining limit
+- * (excluding data and prealloc meta) as threshold.
++ * For RATIO, it will be 1/RATIO of the remaining limit as threshold.
+  * For SIZE, it will be in byte unit as threshold.
+  */
+-#define QGROUP_PERTRANS_RATIO		32
+-#define QGROUP_PERTRANS_SIZE		SZ_32M
++#define QGROUP_FREE_RATIO		32
++#define QGROUP_FREE_SIZE		SZ_32M
+ static bool qgroup_check_limits(struct btrfs_fs_info *fs_info,
+ 				const struct btrfs_qgroup *qg, u64 num_bytes)
+ {
+-	u64 limit;
++	u64 free;
+ 	u64 threshold;
+ 
+ 	if ((qg->lim_flags & BTRFS_QGROUP_LIMIT_MAX_RFER) &&
+@@ -2870,20 +2869,21 @@ static bool qgroup_check_limits(struct btrfs_fs_info *fs_info,
+ 	 */
+ 	if ((qg->lim_flags & (BTRFS_QGROUP_LIMIT_MAX_RFER |
+ 			      BTRFS_QGROUP_LIMIT_MAX_EXCL))) {
+-		if (qg->lim_flags & BTRFS_QGROUP_LIMIT_MAX_EXCL)
+-			limit = qg->max_excl;
+-		else
+-			limit = qg->max_rfer;
+-		threshold = (limit - qg->rsv.values[BTRFS_QGROUP_RSV_DATA] -
+-			    qg->rsv.values[BTRFS_QGROUP_RSV_META_PREALLOC]) /
+-			    QGROUP_PERTRANS_RATIO;
+-		threshold = min_t(u64, threshold, QGROUP_PERTRANS_SIZE);
++		if (qg->lim_flags & BTRFS_QGROUP_LIMIT_MAX_EXCL) {
++			free = qg->max_excl - qgroup_rsv_total(qg) - qg->excl;
++			threshold = min_t(u64, qg->max_excl / QGROUP_FREE_RATIO,
++					  QGROUP_FREE_SIZE);
++		} else {
++			free = qg->max_rfer - qgroup_rsv_total(qg) - qg->rfer;
++			threshold = min_t(u64, qg->max_rfer / QGROUP_FREE_RATIO,
++					  QGROUP_FREE_SIZE);
++		}
+ 
+ 		/*
+ 		 * Use transaction_kthread to commit transaction, so we no
+ 		 * longer need to bother nested transaction nor lock context.
+ 		 */
+-		if (qg->rsv.values[BTRFS_QGROUP_RSV_META_PERTRANS] > threshold)
++		if (free < threshold)
+ 			btrfs_commit_transaction_locksafe(fs_info);
+ 	}
+ 
+diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
+index e74455eb42f9..6976e2280771 100644
+--- a/fs/btrfs/raid56.c
++++ b/fs/btrfs/raid56.c
+@@ -2429,8 +2429,9 @@ static noinline void finish_parity_scrub(struct btrfs_raid_bio *rbio,
+ 			bitmap_clear(rbio->dbitmap, pagenr, 1);
+ 		kunmap(p);
+ 
+-		for (stripe = 0; stripe < rbio->real_stripes; stripe++)
++		for (stripe = 0; stripe < nr_data; stripe++)
+ 			kunmap(page_in_rbio(rbio, stripe, pagenr, 0));
++		kunmap(p_page);
+ 	}
+ 
+ 	__free_page(p_page);
+diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
+index 6dcd36d7b849..1aeac70d0531 100644
+--- a/fs/btrfs/scrub.c
++++ b/fs/btrfs/scrub.c
+@@ -584,6 +584,7 @@ static noinline_for_stack struct scrub_ctx *scrub_setup_ctx(
+ 	sctx->pages_per_rd_bio = SCRUB_PAGES_PER_RD_BIO;
+ 	sctx->curr = -1;
+ 	sctx->fs_info = fs_info;
++	INIT_LIST_HEAD(&sctx->csum_list);
+ 	for (i = 0; i < SCRUB_BIOS_PER_SCTX; ++i) {
+ 		struct scrub_bio *sbio;
+ 
+@@ -608,7 +609,6 @@ static noinline_for_stack struct scrub_ctx *scrub_setup_ctx(
+ 	atomic_set(&sctx->workers_pending, 0);
+ 	atomic_set(&sctx->cancel_req, 0);
+ 	sctx->csum_size = btrfs_super_csum_size(fs_info->super_copy);
+-	INIT_LIST_HEAD(&sctx->csum_list);
+ 
+ 	spin_lock_init(&sctx->list_lock);
+ 	spin_lock_init(&sctx->stat_lock);
+@@ -3770,16 +3770,6 @@ fail_scrub_workers:
+ 	return -ENOMEM;
+ }
+ 
+-static noinline_for_stack void scrub_workers_put(struct btrfs_fs_info *fs_info)
+-{
+-	if (--fs_info->scrub_workers_refcnt == 0) {
+-		btrfs_destroy_workqueue(fs_info->scrub_workers);
+-		btrfs_destroy_workqueue(fs_info->scrub_wr_completion_workers);
+-		btrfs_destroy_workqueue(fs_info->scrub_parity_workers);
+-	}
+-	WARN_ON(fs_info->scrub_workers_refcnt < 0);
+-}
+-
+ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
+ 		    u64 end, struct btrfs_scrub_progress *progress,
+ 		    int readonly, int is_dev_replace)
+@@ -3788,6 +3778,9 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
+ 	int ret;
+ 	struct btrfs_device *dev;
+ 	unsigned int nofs_flag;
++	struct btrfs_workqueue *scrub_workers = NULL;
++	struct btrfs_workqueue *scrub_wr_comp = NULL;
++	struct btrfs_workqueue *scrub_parity = NULL;
+ 
+ 	if (btrfs_fs_closing(fs_info))
+ 		return -EINVAL;
+@@ -3927,9 +3920,16 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
+ 
+ 	mutex_lock(&fs_info->scrub_lock);
+ 	dev->scrub_ctx = NULL;
+-	scrub_workers_put(fs_info);
++	if (--fs_info->scrub_workers_refcnt == 0) {
++		scrub_workers = fs_info->scrub_workers;
++		scrub_wr_comp = fs_info->scrub_wr_completion_workers;
++		scrub_parity = fs_info->scrub_parity_workers;
++	}
+ 	mutex_unlock(&fs_info->scrub_lock);
+ 
++	btrfs_destroy_workqueue(scrub_workers);
++	btrfs_destroy_workqueue(scrub_wr_comp);
++	btrfs_destroy_workqueue(scrub_parity);
+ 	scrub_put_ctx(sctx);
+ 
+ 	return ret;
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index ac232b3d6d7e..7f3b74a55073 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -3517,9 +3517,16 @@ static noinline int log_dir_items(struct btrfs_trans_handle *trans,
+ 	}
+ 	btrfs_release_path(path);
+ 
+-	/* find the first key from this transaction again */
++	/*
++	 * Find the first key from this transaction again.  See the note for
++	 * log_new_dir_dentries, if we're logging a directory recursively we
++	 * won't be holding its i_mutex, which means we can modify the directory
++	 * while we're logging it.  If we remove an entry between our first
++	 * search and this search we'll not find the key again and can just
++	 * bail.
++	 */
+ 	ret = btrfs_search_slot(NULL, root, &min_key, path, 0, 0);
+-	if (WARN_ON(ret != 0))
++	if (ret != 0)
+ 		goto done;
+ 
+ 	/*
+@@ -4481,6 +4488,19 @@ static int logged_inode_size(struct btrfs_root *log, struct btrfs_inode *inode,
+ 		item = btrfs_item_ptr(path->nodes[0], path->slots[0],
+ 				      struct btrfs_inode_item);
+ 		*size_ret = btrfs_inode_size(path->nodes[0], item);
++		/*
++		 * If the in-memory inode's i_size is smaller than the inode
++		 * size stored in the btree, return the inode's i_size, so
++		 * that we get a correct inode size after replaying the log
++		 * when before a power failure we had a shrinking truncate
++		 * followed by addition of a new name (rename / new hard link).
++		 * Otherwise return the inode size from the btree, to avoid
++		 * data loss when replaying a log due to previously doing a
++		 * write that expands the inode's size and logging a new name
++		 * immediately after.
++		 */
++		if (*size_ret > inode->vfs_inode.i_size)
++			*size_ret = inode->vfs_inode.i_size;
+ 	}
+ 
+ 	btrfs_release_path(path);
+@@ -4642,15 +4662,8 @@ static int btrfs_log_trailing_hole(struct btrfs_trans_handle *trans,
+ 					struct btrfs_file_extent_item);
+ 
+ 		if (btrfs_file_extent_type(leaf, extent) ==
+-		    BTRFS_FILE_EXTENT_INLINE) {
+-			len = btrfs_file_extent_ram_bytes(leaf, extent);
+-			ASSERT(len == i_size ||
+-			       (len == fs_info->sectorsize &&
+-				btrfs_file_extent_compression(leaf, extent) !=
+-				BTRFS_COMPRESS_NONE) ||
+-			       (len < i_size && i_size < fs_info->sectorsize));
++		    BTRFS_FILE_EXTENT_INLINE)
+ 			return 0;
+-		}
+ 
+ 		len = btrfs_file_extent_num_bytes(leaf, extent);
+ 		/* Last extent goes beyond i_size, no need to log a hole. */
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 15561926ab32..88a323a453d8 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -6413,7 +6413,7 @@ static void btrfs_end_bio(struct bio *bio)
+ 				if (bio_op(bio) == REQ_OP_WRITE)
+ 					btrfs_dev_stat_inc_and_print(dev,
+ 						BTRFS_DEV_STAT_WRITE_ERRS);
+-				else
++				else if (!(bio->bi_opf & REQ_RAHEAD))
+ 					btrfs_dev_stat_inc_and_print(dev,
+ 						BTRFS_DEV_STAT_READ_ERRS);
+ 				if (bio->bi_opf & REQ_PREFLUSH)
+@@ -6782,10 +6782,10 @@ static int btrfs_check_chunk_valid(struct btrfs_fs_info *fs_info,
+ 	}
+ 
+ 	if ((type & BTRFS_BLOCK_GROUP_RAID10 && sub_stripes != 2) ||
+-	    (type & BTRFS_BLOCK_GROUP_RAID1 && num_stripes < 1) ||
++	    (type & BTRFS_BLOCK_GROUP_RAID1 && num_stripes != 2) ||
+ 	    (type & BTRFS_BLOCK_GROUP_RAID5 && num_stripes < 2) ||
+ 	    (type & BTRFS_BLOCK_GROUP_RAID6 && num_stripes < 3) ||
+-	    (type & BTRFS_BLOCK_GROUP_DUP && num_stripes > 2) ||
++	    (type & BTRFS_BLOCK_GROUP_DUP && num_stripes != 2) ||
+ 	    ((type & BTRFS_BLOCK_GROUP_PROFILE_MASK) == 0 &&
+ 	     num_stripes != 1)) {
+ 		btrfs_err(fs_info,
+diff --git a/fs/buffer.c b/fs/buffer.c
+index 48318fb74938..cab7a026876b 100644
+--- a/fs/buffer.c
++++ b/fs/buffer.c
+@@ -3027,6 +3027,13 @@ void guard_bio_eod(int op, struct bio *bio)
+ 	/* Uhhuh. We've got a bio that straddles the device size! */
+ 	truncated_bytes = bio->bi_iter.bi_size - (maxsector << 9);
+ 
++	/*
++	 * The bio contains more than one segment which spans EOD, just return
++	 * and let IO layer turn it into an EIO
++	 */
++	if (truncated_bytes > bvec->bv_len)
++		return;
++
+ 	/* Truncate the bio.. */
+ 	bio->bi_iter.bi_size -= truncated_bytes;
+ 	bvec->bv_len -= truncated_bytes;
+diff --git a/fs/cifs/cifs_dfs_ref.c b/fs/cifs/cifs_dfs_ref.c
+index d9b99abe1243..5d83c924cc47 100644
+--- a/fs/cifs/cifs_dfs_ref.c
++++ b/fs/cifs/cifs_dfs_ref.c
+@@ -285,9 +285,9 @@ static void dump_referral(const struct dfs_info3_param *ref)
+ {
+ 	cifs_dbg(FYI, "DFS: ref path: %s\n", ref->path_name);
+ 	cifs_dbg(FYI, "DFS: node path: %s\n", ref->node_name);
+-	cifs_dbg(FYI, "DFS: fl: %hd, srv_type: %hd\n",
++	cifs_dbg(FYI, "DFS: fl: %d, srv_type: %d\n",
+ 		 ref->flags, ref->server_type);
+-	cifs_dbg(FYI, "DFS: ref_flags: %hd, path_consumed: %hd\n",
++	cifs_dbg(FYI, "DFS: ref_flags: %d, path_consumed: %d\n",
+ 		 ref->ref_flag, ref->path_consumed);
+ }
+ 
+diff --git a/fs/cifs/cifs_fs_sb.h b/fs/cifs/cifs_fs_sb.h
+index 42f0d67f1054..ed49222abecb 100644
+--- a/fs/cifs/cifs_fs_sb.h
++++ b/fs/cifs/cifs_fs_sb.h
+@@ -58,6 +58,7 @@ struct cifs_sb_info {
+ 	spinlock_t tlink_tree_lock;
+ 	struct tcon_link *master_tlink;
+ 	struct nls_table *local_nls;
++	unsigned int bsize;
+ 	unsigned int rsize;
+ 	unsigned int wsize;
+ 	unsigned long actimeo; /* attribute cache timeout (jiffies) */
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index 62d48d486d8f..07cad54b84f1 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -554,10 +554,13 @@ cifs_show_options(struct seq_file *s, struct dentry *root)
+ 
+ 	seq_printf(s, ",rsize=%u", cifs_sb->rsize);
+ 	seq_printf(s, ",wsize=%u", cifs_sb->wsize);
++	seq_printf(s, ",bsize=%u", cifs_sb->bsize);
+ 	seq_printf(s, ",echo_interval=%lu",
+ 			tcon->ses->server->echo_interval / HZ);
+ 	if (tcon->snapshot_time)
+ 		seq_printf(s, ",snapshot=%llu", tcon->snapshot_time);
++	if (tcon->handle_timeout)
++		seq_printf(s, ",handletimeout=%u", tcon->handle_timeout);
+ 	/* convert actimeo and display it in seconds */
+ 	seq_printf(s, ",actimeo=%lu", cifs_sb->actimeo / HZ);
+ 
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index 94dbdbe5be34..6c934ab3722b 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -59,6 +59,12 @@
+  */
+ #define CIFS_MAX_ACTIMEO (1 << 30)
+ 
++/*
++ * Max persistent and resilient handle timeout (milliseconds).
++ * Windows durable max was 960000 (16 minutes)
++ */
++#define SMB3_MAX_HANDLE_TIMEOUT 960000
++
+ /*
+  * MAX_REQ is the maximum number of requests that WE will send
+  * on one socket concurrently.
+@@ -236,6 +242,8 @@ struct smb_version_operations {
+ 	int * (*get_credits_field)(struct TCP_Server_Info *, const int);
+ 	unsigned int (*get_credits)(struct mid_q_entry *);
+ 	__u64 (*get_next_mid)(struct TCP_Server_Info *);
++	void (*revert_current_mid)(struct TCP_Server_Info *server,
++				   const unsigned int val);
+ 	/* data offset from read response message */
+ 	unsigned int (*read_data_offset)(char *);
+ 	/*
+@@ -557,6 +565,7 @@ struct smb_vol {
+ 	bool resilient:1; /* noresilient not required since not fored for CA */
+ 	bool domainauto:1;
+ 	bool rdma:1;
++	unsigned int bsize;
+ 	unsigned int rsize;
+ 	unsigned int wsize;
+ 	bool sockopt_tcp_nodelay:1;
+@@ -569,6 +578,7 @@ struct smb_vol {
+ 	struct nls_table *local_nls;
+ 	unsigned int echo_interval; /* echo interval in secs */
+ 	__u64 snapshot_time; /* needed for timewarp tokens */
++	__u32 handle_timeout; /* persistent and durable handle timeout in ms */
+ 	unsigned int max_credits; /* smb3 max_credits 10 < credits < 60000 */
+ };
+ 
+@@ -770,6 +780,22 @@ get_next_mid(struct TCP_Server_Info *server)
+ 	return cpu_to_le16(mid);
+ }
+ 
++static inline void
++revert_current_mid(struct TCP_Server_Info *server, const unsigned int val)
++{
++	if (server->ops->revert_current_mid)
++		server->ops->revert_current_mid(server, val);
++}
++
++static inline void
++revert_current_mid_from_hdr(struct TCP_Server_Info *server,
++			    const struct smb2_sync_hdr *shdr)
++{
++	unsigned int num = le16_to_cpu(shdr->CreditCharge);
++
++	return revert_current_mid(server, num > 0 ? num : 1);
++}
++
+ static inline __u16
+ get_mid(const struct smb_hdr *smb)
+ {
+@@ -1009,6 +1035,7 @@ struct cifs_tcon {
+ 	__u32 vol_serial_number;
+ 	__le64 vol_create_time;
+ 	__u64 snapshot_time; /* for timewarp tokens - timestamp of snapshot */
++	__u32 handle_timeout; /* persistent and durable handle timeout in ms */
+ 	__u32 ss_flags;		/* sector size flags */
+ 	__u32 perf_sector_size; /* best sector size for perf */
+ 	__u32 max_chunks;
+@@ -1422,6 +1449,7 @@ struct mid_q_entry {
+ 	struct kref refcount;
+ 	struct TCP_Server_Info *server;	/* server corresponding to this mid */
+ 	__u64 mid;		/* multiplex id */
++	__u16 credits;		/* number of credits consumed by this mid */
+ 	__u32 pid;		/* process id */
+ 	__u32 sequence_number;  /* for CIFS signing */
+ 	unsigned long when_alloc;  /* when mid was created */
+diff --git a/fs/cifs/cifssmb.c b/fs/cifs/cifssmb.c
+index bb54ccf8481c..551924beb86f 100644
+--- a/fs/cifs/cifssmb.c
++++ b/fs/cifs/cifssmb.c
+@@ -2125,12 +2125,13 @@ cifs_writev_requeue(struct cifs_writedata *wdata)
+ 
+ 		wdata2->cfile = find_writable_file(CIFS_I(inode), false);
+ 		if (!wdata2->cfile) {
+-			cifs_dbg(VFS, "No writable handles for inode\n");
++			cifs_dbg(VFS, "No writable handle to retry writepages\n");
+ 			rc = -EBADF;
+-			break;
++		} else {
++			wdata2->pid = wdata2->cfile->pid;
++			rc = server->ops->async_writev(wdata2,
++						       cifs_writedata_release);
+ 		}
+-		wdata2->pid = wdata2->cfile->pid;
+-		rc = server->ops->async_writev(wdata2, cifs_writedata_release);
+ 
+ 		for (j = 0; j < nr_pages; j++) {
+ 			unlock_page(wdata2->pages[j]);
+@@ -2145,6 +2146,7 @@ cifs_writev_requeue(struct cifs_writedata *wdata)
+ 			kref_put(&wdata2->refcount, cifs_writedata_release);
+ 			if (is_retryable_error(rc))
+ 				continue;
++			i += nr_pages;
+ 			break;
+ 		}
+ 
+@@ -2152,6 +2154,13 @@ cifs_writev_requeue(struct cifs_writedata *wdata)
+ 		i += nr_pages;
+ 	} while (i < wdata->nr_pages);
+ 
++	/* cleanup remaining pages from the original wdata */
++	for (; i < wdata->nr_pages; i++) {
++		SetPageError(wdata->pages[i]);
++		end_page_writeback(wdata->pages[i]);
++		put_page(wdata->pages[i]);
++	}
++
+ 	if (rc != 0 && !is_retryable_error(rc))
+ 		mapping_set_error(inode->i_mapping, rc);
+ 	kref_put(&wdata->refcount, cifs_writedata_release);
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 8463c940e0e5..44e6ec85f832 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -102,8 +102,8 @@ enum {
+ 	Opt_backupuid, Opt_backupgid, Opt_uid,
+ 	Opt_cruid, Opt_gid, Opt_file_mode,
+ 	Opt_dirmode, Opt_port,
+-	Opt_rsize, Opt_wsize, Opt_actimeo,
+-	Opt_echo_interval, Opt_max_credits,
++	Opt_blocksize, Opt_rsize, Opt_wsize, Opt_actimeo,
++	Opt_echo_interval, Opt_max_credits, Opt_handletimeout,
+ 	Opt_snapshot,
+ 
+ 	/* Mount options which take string value */
+@@ -204,9 +204,11 @@ static const match_table_t cifs_mount_option_tokens = {
+ 	{ Opt_dirmode, "dirmode=%s" },
+ 	{ Opt_dirmode, "dir_mode=%s" },
+ 	{ Opt_port, "port=%s" },
++	{ Opt_blocksize, "bsize=%s" },
+ 	{ Opt_rsize, "rsize=%s" },
+ 	{ Opt_wsize, "wsize=%s" },
+ 	{ Opt_actimeo, "actimeo=%s" },
++	{ Opt_handletimeout, "handletimeout=%s" },
+ 	{ Opt_echo_interval, "echo_interval=%s" },
+ 	{ Opt_max_credits, "max_credits=%s" },
+ 	{ Opt_snapshot, "snapshot=%s" },
+@@ -1486,6 +1488,11 @@ cifs_parse_devname(const char *devname, struct smb_vol *vol)
+ 	const char *delims = "/\\";
+ 	size_t len;
+ 
++	if (unlikely(!devname || !*devname)) {
++		cifs_dbg(VFS, "Device name not specified.\n");
++		return -EINVAL;
++	}
++
+ 	/* make sure we have a valid UNC double delimiter prefix */
+ 	len = strspn(devname, delims);
+ 	if (len != 2)
+@@ -1571,7 +1578,7 @@ cifs_parse_mount_options(const char *mountdata, const char *devname,
+ 	vol->cred_uid = current_uid();
+ 	vol->linux_uid = current_uid();
+ 	vol->linux_gid = current_gid();
+-
++	vol->bsize = 1024 * 1024; /* can improve cp performance significantly */
+ 	/*
+ 	 * default to SFM style remapping of seven reserved characters
+ 	 * unless user overrides it or we negotiate CIFS POSIX where
+@@ -1594,6 +1601,9 @@ cifs_parse_mount_options(const char *mountdata, const char *devname,
+ 
+ 	vol->actimeo = CIFS_DEF_ACTIMEO;
+ 
++	/* Most clients set timeout to 0, allows server to use its default */
++	vol->handle_timeout = 0; /* See MS-SMB2 spec section 2.2.14.2.12 */
++
+ 	/* offer SMB2.1 and later (SMB3 etc). Secure and widely accepted */
+ 	vol->ops = &smb30_operations;
+ 	vol->vals = &smbdefault_values;
+@@ -1944,6 +1954,26 @@ cifs_parse_mount_options(const char *mountdata, const char *devname,
+ 			}
+ 			port = (unsigned short)option;
+ 			break;
++		case Opt_blocksize:
++			if (get_option_ul(args, &option)) {
++				cifs_dbg(VFS, "%s: Invalid blocksize value\n",
++					__func__);
++				goto cifs_parse_mount_err;
++			}
++			/*
++			 * inode blocksize realistically should never need to be
++			 * less than 16K or greater than 16M and default is 1MB.
++			 * Note that small inode block sizes (e.g. 64K) can lead
++			 * to very poor performance of common tools like cp and scp
++			 */
++			if ((option < CIFS_MAX_MSGSIZE) ||
++			   (option > (4 * SMB3_DEFAULT_IOSIZE))) {
++				cifs_dbg(VFS, "%s: Invalid blocksize\n",
++					__func__);
++				goto cifs_parse_mount_err;
++			}
++			vol->bsize = option;
++			break;
+ 		case Opt_rsize:
+ 			if (get_option_ul(args, &option)) {
+ 				cifs_dbg(VFS, "%s: Invalid rsize value\n",
+@@ -1972,6 +2002,18 @@ cifs_parse_mount_options(const char *mountdata, const char *devname,
+ 				goto cifs_parse_mount_err;
+ 			}
+ 			break;
++		case Opt_handletimeout:
++			if (get_option_ul(args, &option)) {
++				cifs_dbg(VFS, "%s: Invalid handletimeout value\n",
++					 __func__);
++				goto cifs_parse_mount_err;
++			}
++			vol->handle_timeout = option;
++			if (vol->handle_timeout > SMB3_MAX_HANDLE_TIMEOUT) {
++				cifs_dbg(VFS, "Invalid handle cache timeout, longer than 16 minutes\n");
++				goto cifs_parse_mount_err;
++			}
++			break;
+ 		case Opt_echo_interval:
+ 			if (get_option_ul(args, &option)) {
+ 				cifs_dbg(VFS, "%s: Invalid echo interval value\n",
+@@ -3138,6 +3180,8 @@ static int match_tcon(struct cifs_tcon *tcon, struct smb_vol *volume_info)
+ 		return 0;
+ 	if (tcon->snapshot_time != volume_info->snapshot_time)
+ 		return 0;
++	if (tcon->handle_timeout != volume_info->handle_timeout)
++		return 0;
+ 	return 1;
+ }
+ 
+@@ -3252,6 +3296,16 @@ cifs_get_tcon(struct cifs_ses *ses, struct smb_vol *volume_info)
+ 			tcon->snapshot_time = volume_info->snapshot_time;
+ 	}
+ 
++	if (volume_info->handle_timeout) {
++		if (ses->server->vals->protocol_id == 0) {
++			cifs_dbg(VFS,
++			     "Use SMB2.1 or later for handle timeout option\n");
++			rc = -EOPNOTSUPP;
++			goto out_fail;
++		} else
++			tcon->handle_timeout = volume_info->handle_timeout;
++	}
++
+ 	tcon->ses = ses;
+ 	if (volume_info->password) {
+ 		tcon->password = kstrdup(volume_info->password, GFP_KERNEL);
+@@ -3839,6 +3893,7 @@ int cifs_setup_cifs_sb(struct smb_vol *pvolume_info,
+ 	spin_lock_init(&cifs_sb->tlink_tree_lock);
+ 	cifs_sb->tlink_tree = RB_ROOT;
+ 
++	cifs_sb->bsize = pvolume_info->bsize;
+ 	/*
+ 	 * Temporarily set r/wsize for matching superblock. If we end up using
+ 	 * new sb then client will later negotiate it downward if needed.
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index 659ce1b92c44..8d107587208f 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -1645,8 +1645,20 @@ cifs_setlk(struct file *file, struct file_lock *flock, __u32 type,
+ 		rc = server->ops->mand_unlock_range(cfile, flock, xid);
+ 
+ out:
+-	if (flock->fl_flags & FL_POSIX && !rc)
++	if (flock->fl_flags & FL_POSIX) {
++		/*
++		 * If this is a request to remove all locks because we
++		 * are closing the file, it doesn't matter if the
++		 * unlocking failed as both cifs.ko and the SMB server
++		 * remove the lock on file close
++		 */
++		if (rc) {
++			cifs_dbg(VFS, "%s failed rc=%d\n", __func__, rc);
++			if (!(flock->fl_flags & FL_CLOSE))
++				return rc;
++		}
+ 		rc = locks_lock_file_wait(file, flock);
++	}
+ 	return rc;
+ }
+ 
+@@ -3028,14 +3040,16 @@ cifs_strict_writev(struct kiocb *iocb, struct iov_iter *from)
+ 	 * these pages but not on the region from pos to ppos+len-1.
+ 	 */
+ 	written = cifs_user_writev(iocb, from);
+-	if (written > 0 && CIFS_CACHE_READ(cinode)) {
++	if (CIFS_CACHE_READ(cinode)) {
+ 		/*
+-		 * Windows 7 server can delay breaking level2 oplock if a write
+-		 * request comes - break it on the client to prevent reading
+-		 * an old data.
++		 * We have read level caching and we have just sent a write
++		 * request to the server thus making data in the cache stale.
++		 * Zap the cache and set oplock/lease level to NONE to avoid
++		 * reading stale data from the cache. All subsequent read
++		 * operations will read new data from the server.
+ 		 */
+ 		cifs_zap_mapping(inode);
+-		cifs_dbg(FYI, "Set no oplock for inode=%p after a write operation\n",
++		cifs_dbg(FYI, "Set Oplock/Lease to NONE for inode=%p after write\n",
+ 			 inode);
+ 		cinode->oplock = 0;
+ 	}
+diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
+index 478003644916..53fdb5df0d2e 100644
+--- a/fs/cifs/inode.c
++++ b/fs/cifs/inode.c
+@@ -2080,7 +2080,7 @@ int cifs_getattr(const struct path *path, struct kstat *stat,
+ 		return rc;
+ 
+ 	generic_fillattr(inode, stat);
+-	stat->blksize = CIFS_MAX_MSGSIZE;
++	stat->blksize = cifs_sb->bsize;
+ 	stat->ino = CIFS_I(inode)->uniqueid;
+ 
+ 	/* old CIFS Unix Extensions doesn't return create time */
+diff --git a/fs/cifs/smb1ops.c b/fs/cifs/smb1ops.c
+index 32a6c020478f..20a88776f04d 100644
+--- a/fs/cifs/smb1ops.c
++++ b/fs/cifs/smb1ops.c
+@@ -308,7 +308,7 @@ coalesce_t2(char *second_buf, struct smb_hdr *target_hdr)
+ 	remaining = tgt_total_cnt - total_in_tgt;
+ 
+ 	if (remaining < 0) {
+-		cifs_dbg(FYI, "Server sent too much data. tgt_total_cnt=%hu total_in_tgt=%hu\n",
++		cifs_dbg(FYI, "Server sent too much data. tgt_total_cnt=%hu total_in_tgt=%u\n",
+ 			 tgt_total_cnt, total_in_tgt);
+ 		return -EPROTO;
+ 	}
+diff --git a/fs/cifs/smb2file.c b/fs/cifs/smb2file.c
+index b204e84b87fb..b0e76d27d752 100644
+--- a/fs/cifs/smb2file.c
++++ b/fs/cifs/smb2file.c
+@@ -68,7 +68,9 @@ smb2_open_file(const unsigned int xid, struct cifs_open_parms *oparms,
+ 
+ 
+ 	 if (oparms->tcon->use_resilient) {
+-		nr_ioctl_req.Timeout = 0; /* use server default (120 seconds) */
++		/* default timeout is 0, servers pick default (120 seconds) */
++		nr_ioctl_req.Timeout =
++			cpu_to_le32(oparms->tcon->handle_timeout);
+ 		nr_ioctl_req.Reserved = 0;
+ 		rc = SMB2_ioctl(xid, oparms->tcon, fid->persistent_fid,
+ 			fid->volatile_fid, FSCTL_LMR_REQUEST_RESILIENCY,
+diff --git a/fs/cifs/smb2misc.c b/fs/cifs/smb2misc.c
+index 7b8b58fb4d3f..58700d2ba8cd 100644
+--- a/fs/cifs/smb2misc.c
++++ b/fs/cifs/smb2misc.c
+@@ -517,7 +517,6 @@ smb2_tcon_has_lease(struct cifs_tcon *tcon, struct smb2_lease_break *rsp,
+ 	__u8 lease_state;
+ 	struct list_head *tmp;
+ 	struct cifsFileInfo *cfile;
+-	struct TCP_Server_Info *server = tcon->ses->server;
+ 	struct cifs_pending_open *open;
+ 	struct cifsInodeInfo *cinode;
+ 	int ack_req = le32_to_cpu(rsp->Flags &
+@@ -537,13 +536,25 @@ smb2_tcon_has_lease(struct cifs_tcon *tcon, struct smb2_lease_break *rsp,
+ 		cifs_dbg(FYI, "lease key match, lease break 0x%x\n",
+ 			 le32_to_cpu(rsp->NewLeaseState));
+ 
+-		server->ops->set_oplock_level(cinode, lease_state, 0, NULL);
+-
+ 		if (ack_req)
+ 			cfile->oplock_break_cancelled = false;
+ 		else
+ 			cfile->oplock_break_cancelled = true;
+ 
++		set_bit(CIFS_INODE_PENDING_OPLOCK_BREAK, &cinode->flags);
++
++		/*
++		 * Set or clear flags depending on the lease state being READ.
++		 * HANDLE caching flag should be added when the client starts
++		 * to defer closing remote file handles with HANDLE leases.
++		 */
++		if (lease_state & SMB2_LEASE_READ_CACHING_HE)
++			set_bit(CIFS_INODE_DOWNGRADE_OPLOCK_TO_L2,
++				&cinode->flags);
++		else
++			clear_bit(CIFS_INODE_DOWNGRADE_OPLOCK_TO_L2,
++				  &cinode->flags);
++
+ 		queue_work(cifsoplockd_wq, &cfile->oplock_break);
+ 		kfree(lw);
+ 		return true;
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 6f96e2292856..b29f711ab965 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -219,6 +219,15 @@ smb2_get_next_mid(struct TCP_Server_Info *server)
+ 	return mid;
+ }
+ 
++static void
++smb2_revert_current_mid(struct TCP_Server_Info *server, const unsigned int val)
++{
++	spin_lock(&GlobalMid_Lock);
++	if (server->CurrentMid >= val)
++		server->CurrentMid -= val;
++	spin_unlock(&GlobalMid_Lock);
++}
++
+ static struct mid_q_entry *
+ smb2_find_mid(struct TCP_Server_Info *server, char *buf)
+ {
+@@ -2594,6 +2603,15 @@ smb2_downgrade_oplock(struct TCP_Server_Info *server,
+ 		server->ops->set_oplock_level(cinode, 0, 0, NULL);
+ }
+ 
++static void
++smb21_downgrade_oplock(struct TCP_Server_Info *server,
++		       struct cifsInodeInfo *cinode, bool set_level2)
++{
++	server->ops->set_oplock_level(cinode,
++				      set_level2 ? SMB2_LEASE_READ_CACHING_HE :
++				      0, 0, NULL);
++}
++
+ static void
+ smb2_set_oplock_level(struct cifsInodeInfo *cinode, __u32 oplock,
+ 		      unsigned int epoch, bool *purge_cache)
+@@ -3541,6 +3559,7 @@ struct smb_version_operations smb20_operations = {
+ 	.get_credits = smb2_get_credits,
+ 	.wait_mtu_credits = cifs_wait_mtu_credits,
+ 	.get_next_mid = smb2_get_next_mid,
++	.revert_current_mid = smb2_revert_current_mid,
+ 	.read_data_offset = smb2_read_data_offset,
+ 	.read_data_length = smb2_read_data_length,
+ 	.map_error = map_smb2_to_linux_error,
+@@ -3636,6 +3655,7 @@ struct smb_version_operations smb21_operations = {
+ 	.get_credits = smb2_get_credits,
+ 	.wait_mtu_credits = smb2_wait_mtu_credits,
+ 	.get_next_mid = smb2_get_next_mid,
++	.revert_current_mid = smb2_revert_current_mid,
+ 	.read_data_offset = smb2_read_data_offset,
+ 	.read_data_length = smb2_read_data_length,
+ 	.map_error = map_smb2_to_linux_error,
+@@ -3646,7 +3666,7 @@ struct smb_version_operations smb21_operations = {
+ 	.print_stats = smb2_print_stats,
+ 	.is_oplock_break = smb2_is_valid_oplock_break,
+ 	.handle_cancelled_mid = smb2_handle_cancelled_mid,
+-	.downgrade_oplock = smb2_downgrade_oplock,
++	.downgrade_oplock = smb21_downgrade_oplock,
+ 	.need_neg = smb2_need_neg,
+ 	.negotiate = smb2_negotiate,
+ 	.negotiate_wsize = smb2_negotiate_wsize,
+@@ -3732,6 +3752,7 @@ struct smb_version_operations smb30_operations = {
+ 	.get_credits = smb2_get_credits,
+ 	.wait_mtu_credits = smb2_wait_mtu_credits,
+ 	.get_next_mid = smb2_get_next_mid,
++	.revert_current_mid = smb2_revert_current_mid,
+ 	.read_data_offset = smb2_read_data_offset,
+ 	.read_data_length = smb2_read_data_length,
+ 	.map_error = map_smb2_to_linux_error,
+@@ -3743,7 +3764,7 @@ struct smb_version_operations smb30_operations = {
+ 	.dump_share_caps = smb2_dump_share_caps,
+ 	.is_oplock_break = smb2_is_valid_oplock_break,
+ 	.handle_cancelled_mid = smb2_handle_cancelled_mid,
+-	.downgrade_oplock = smb2_downgrade_oplock,
++	.downgrade_oplock = smb21_downgrade_oplock,
+ 	.need_neg = smb2_need_neg,
+ 	.negotiate = smb2_negotiate,
+ 	.negotiate_wsize = smb3_negotiate_wsize,
+@@ -3837,6 +3858,7 @@ struct smb_version_operations smb311_operations = {
+ 	.get_credits = smb2_get_credits,
+ 	.wait_mtu_credits = smb2_wait_mtu_credits,
+ 	.get_next_mid = smb2_get_next_mid,
++	.revert_current_mid = smb2_revert_current_mid,
+ 	.read_data_offset = smb2_read_data_offset,
+ 	.read_data_length = smb2_read_data_length,
+ 	.map_error = map_smb2_to_linux_error,
+@@ -3848,7 +3870,7 @@ struct smb_version_operations smb311_operations = {
+ 	.dump_share_caps = smb2_dump_share_caps,
+ 	.is_oplock_break = smb2_is_valid_oplock_break,
+ 	.handle_cancelled_mid = smb2_handle_cancelled_mid,
+-	.downgrade_oplock = smb2_downgrade_oplock,
++	.downgrade_oplock = smb21_downgrade_oplock,
+ 	.need_neg = smb2_need_neg,
+ 	.negotiate = smb2_negotiate,
+ 	.negotiate_wsize = smb3_negotiate_wsize,
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 77b3aaa39b35..068febe37fe4 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -986,8 +986,14 @@ int smb3_validate_negotiate(const unsigned int xid, struct cifs_tcon *tcon)
+ 	rc = SMB2_ioctl(xid, tcon, NO_FILE_ID, NO_FILE_ID,
+ 		FSCTL_VALIDATE_NEGOTIATE_INFO, true /* is_fsctl */,
+ 		(char *)pneg_inbuf, inbuflen, (char **)&pneg_rsp, &rsplen);
+-
+-	if (rc != 0) {
++	if (rc == -EOPNOTSUPP) {
++		/*
++		 * Old Windows versions or Netapp SMB server can return
++		 * not supported error. Client should accept it.
++		 */
++		cifs_dbg(VFS, "Server does not support validate negotiate\n");
++		return 0;
++	} else if (rc != 0) {
+ 		cifs_dbg(VFS, "validate protocol negotiate failed: %d\n", rc);
+ 		rc = -EIO;
+ 		goto out_free_inbuf;
+@@ -1605,9 +1611,16 @@ SMB2_tcon(const unsigned int xid, struct cifs_ses *ses, const char *tree,
+ 	iov[1].iov_base = unc_path;
+ 	iov[1].iov_len = unc_path_len;
+ 
+-	/* 3.11 tcon req must be signed if not encrypted. See MS-SMB2 3.2.4.1.1 */
++	/*
++	 * 3.11 tcon req must be signed if not encrypted. See MS-SMB2 3.2.4.1.1
++	 * unless it is guest or anonymous user. See MS-SMB2 3.2.5.3.1
++	 * (Samba servers don't always set the flag so also check if null user)
++	 */
+ 	if ((ses->server->dialect == SMB311_PROT_ID) &&
+-	    !smb3_encryption_required(tcon))
++	    !smb3_encryption_required(tcon) &&
++	    !(ses->session_flags &
++		    (SMB2_SESSION_FLAG_IS_GUEST|SMB2_SESSION_FLAG_IS_NULL)) &&
++	    ((ses->user_name != NULL) || (ses->sectype == Kerberos)))
+ 		req->sync_hdr.Flags |= SMB2_FLAGS_SIGNED;
+ 
+ 	memset(&rqst, 0, sizeof(struct smb_rqst));
+@@ -1824,8 +1837,9 @@ add_lease_context(struct TCP_Server_Info *server, struct kvec *iov,
+ }
+ 
+ static struct create_durable_v2 *
+-create_durable_v2_buf(struct cifs_fid *pfid)
++create_durable_v2_buf(struct cifs_open_parms *oparms)
+ {
++	struct cifs_fid *pfid = oparms->fid;
+ 	struct create_durable_v2 *buf;
+ 
+ 	buf = kzalloc(sizeof(struct create_durable_v2), GFP_KERNEL);
+@@ -1839,7 +1853,14 @@ create_durable_v2_buf(struct cifs_fid *pfid)
+ 				(struct create_durable_v2, Name));
+ 	buf->ccontext.NameLength = cpu_to_le16(4);
+ 
+-	buf->dcontext.Timeout = 0; /* Should this be configurable by workload */
++	/*
++	 * NB: Handle timeout defaults to 0, which allows server to choose
++	 * (most servers default to 120 seconds) and most clients default to 0.
++	 * This can be overridden at mount ("handletimeout=") if the user wants
++	 * a different persistent (or resilient) handle timeout for all opens
++	 * opens on a particular SMB3 mount.
++	 */
++	buf->dcontext.Timeout = cpu_to_le32(oparms->tcon->handle_timeout);
+ 	buf->dcontext.Flags = cpu_to_le32(SMB2_DHANDLE_FLAG_PERSISTENT);
+ 	generate_random_uuid(buf->dcontext.CreateGuid);
+ 	memcpy(pfid->create_guid, buf->dcontext.CreateGuid, 16);
+@@ -1892,7 +1913,7 @@ add_durable_v2_context(struct kvec *iov, unsigned int *num_iovec,
+ 	struct smb2_create_req *req = iov[0].iov_base;
+ 	unsigned int num = *num_iovec;
+ 
+-	iov[num].iov_base = create_durable_v2_buf(oparms->fid);
++	iov[num].iov_base = create_durable_v2_buf(oparms);
+ 	if (iov[num].iov_base == NULL)
+ 		return -ENOMEM;
+ 	iov[num].iov_len = sizeof(struct create_durable_v2);
+diff --git a/fs/cifs/smb2transport.c b/fs/cifs/smb2transport.c
+index 7b351c65ee46..63264db78b89 100644
+--- a/fs/cifs/smb2transport.c
++++ b/fs/cifs/smb2transport.c
+@@ -576,6 +576,7 @@ smb2_mid_entry_alloc(const struct smb2_sync_hdr *shdr,
+ 		     struct TCP_Server_Info *server)
+ {
+ 	struct mid_q_entry *temp;
++	unsigned int credits = le16_to_cpu(shdr->CreditCharge);
+ 
+ 	if (server == NULL) {
+ 		cifs_dbg(VFS, "Null TCP session in smb2_mid_entry_alloc\n");
+@@ -586,6 +587,7 @@ smb2_mid_entry_alloc(const struct smb2_sync_hdr *shdr,
+ 	memset(temp, 0, sizeof(struct mid_q_entry));
+ 	kref_init(&temp->refcount);
+ 	temp->mid = le64_to_cpu(shdr->MessageId);
++	temp->credits = credits > 0 ? credits : 1;
+ 	temp->pid = current->pid;
+ 	temp->command = shdr->Command; /* Always LE */
+ 	temp->when_alloc = jiffies;
+@@ -674,13 +676,18 @@ smb2_setup_request(struct cifs_ses *ses, struct smb_rqst *rqst)
+ 	smb2_seq_num_into_buf(ses->server, shdr);
+ 
+ 	rc = smb2_get_mid_entry(ses, shdr, &mid);
+-	if (rc)
++	if (rc) {
++		revert_current_mid_from_hdr(ses->server, shdr);
+ 		return ERR_PTR(rc);
++	}
++
+ 	rc = smb2_sign_rqst(rqst, ses->server);
+ 	if (rc) {
++		revert_current_mid_from_hdr(ses->server, shdr);
+ 		cifs_delete_mid(mid);
+ 		return ERR_PTR(rc);
+ 	}
++
+ 	return mid;
+ }
+ 
+@@ -695,11 +702,14 @@ smb2_setup_async_request(struct TCP_Server_Info *server, struct smb_rqst *rqst)
+ 	smb2_seq_num_into_buf(server, shdr);
+ 
+ 	mid = smb2_mid_entry_alloc(shdr, server);
+-	if (mid == NULL)
++	if (mid == NULL) {
++		revert_current_mid_from_hdr(server, shdr);
+ 		return ERR_PTR(-ENOMEM);
++	}
+ 
+ 	rc = smb2_sign_rqst(rqst, server);
+ 	if (rc) {
++		revert_current_mid_from_hdr(server, shdr);
+ 		DeleteMidQEntry(mid);
+ 		return ERR_PTR(rc);
+ 	}
+diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c
+index 53532bd3f50d..9544eb99b5a2 100644
+--- a/fs/cifs/transport.c
++++ b/fs/cifs/transport.c
+@@ -647,6 +647,7 @@ cifs_call_async(struct TCP_Server_Info *server, struct smb_rqst *rqst,
+ 	cifs_in_send_dec(server);
+ 
+ 	if (rc < 0) {
++		revert_current_mid(server, mid->credits);
+ 		server->sequence_number -= 2;
+ 		cifs_delete_mid(mid);
+ 	}
+@@ -868,6 +869,7 @@ compound_send_recv(const unsigned int xid, struct cifs_ses *ses,
+ 	for (i = 0; i < num_rqst; i++) {
+ 		midQ[i] = ses->server->ops->setup_request(ses, &rqst[i]);
+ 		if (IS_ERR(midQ[i])) {
++			revert_current_mid(ses->server, i);
+ 			for (j = 0; j < i; j++)
+ 				cifs_delete_mid(midQ[j]);
+ 			mutex_unlock(&ses->server->srv_mutex);
+@@ -897,8 +899,10 @@ compound_send_recv(const unsigned int xid, struct cifs_ses *ses,
+ 	for (i = 0; i < num_rqst; i++)
+ 		cifs_save_when_sent(midQ[i]);
+ 
+-	if (rc < 0)
++	if (rc < 0) {
++		revert_current_mid(ses->server, num_rqst);
+ 		ses->server->sequence_number -= 2;
++	}
+ 
+ 	mutex_unlock(&ses->server->srv_mutex);
+ 
+diff --git a/fs/dax.c b/fs/dax.c
+index 6959837cc465..05cca2214ae3 100644
+--- a/fs/dax.c
++++ b/fs/dax.c
+@@ -843,9 +843,8 @@ unlock_pte:
+ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
+ 		struct address_space *mapping, void *entry)
+ {
+-	unsigned long pfn;
++	unsigned long pfn, index, count;
+ 	long ret = 0;
+-	size_t size;
+ 
+ 	/*
+ 	 * A page got tagged dirty in DAX mapping? Something is seriously
+@@ -894,17 +893,18 @@ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
+ 	xas_unlock_irq(xas);
+ 
+ 	/*
+-	 * Even if dax_writeback_mapping_range() was given a wbc->range_start
+-	 * in the middle of a PMD, the 'index' we are given will be aligned to
+-	 * the start index of the PMD, as will the pfn we pull from 'entry'.
++	 * If dax_writeback_mapping_range() was given a wbc->range_start
++	 * in the middle of a PMD, the 'index' we use needs to be
++	 * aligned to the start of the PMD.
+ 	 * This allows us to flush for PMD_SIZE and not have to worry about
+ 	 * partial PMD writebacks.
+ 	 */
+ 	pfn = dax_to_pfn(entry);
+-	size = PAGE_SIZE << dax_entry_order(entry);
++	count = 1UL << dax_entry_order(entry);
++	index = xas->xa_index & ~(count - 1);
+ 
+-	dax_entry_mkclean(mapping, xas->xa_index, pfn);
+-	dax_flush(dax_dev, page_address(pfn_to_page(pfn)), size);
++	dax_entry_mkclean(mapping, index, pfn);
++	dax_flush(dax_dev, page_address(pfn_to_page(pfn)), count * PAGE_SIZE);
+ 	/*
+ 	 * After we have flushed the cache, we can clear the dirty tag. There
+ 	 * cannot be new dirty data in the pfn after the flush has completed as
+@@ -917,8 +917,7 @@ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
+ 	xas_clear_mark(xas, PAGECACHE_TAG_DIRTY);
+ 	dax_wake_entry(xas, entry, false);
+ 
+-	trace_dax_writeback_one(mapping->host, xas->xa_index,
+-			size >> PAGE_SHIFT);
++	trace_dax_writeback_one(mapping->host, index, count);
+ 	return ret;
+ 
+  put_unlocked:
+diff --git a/fs/devpts/inode.c b/fs/devpts/inode.c
+index c53814539070..553a3f3300ae 100644
+--- a/fs/devpts/inode.c
++++ b/fs/devpts/inode.c
+@@ -455,6 +455,7 @@ devpts_fill_super(struct super_block *s, void *data, int silent)
+ 	s->s_blocksize_bits = 10;
+ 	s->s_magic = DEVPTS_SUPER_MAGIC;
+ 	s->s_op = &devpts_sops;
++	s->s_d_op = &simple_dentry_operations;
+ 	s->s_time_gran = 1;
+ 
+ 	error = -ENOMEM;
+diff --git a/fs/exec.c b/fs/exec.c
+index fb72d36f7823..bcf383730bea 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -932,7 +932,7 @@ int kernel_read_file(struct file *file, void **buf, loff_t *size,
+ 		bytes = kernel_read(file, *buf + pos, i_size - pos, &pos);
+ 		if (bytes < 0) {
+ 			ret = bytes;
+-			goto out;
++			goto out_free;
+ 		}
+ 
+ 		if (bytes == 0)
+diff --git a/fs/ext2/super.c b/fs/ext2/super.c
+index 73b2d528237f..a9ea38182578 100644
+--- a/fs/ext2/super.c
++++ b/fs/ext2/super.c
+@@ -757,7 +757,8 @@ static loff_t ext2_max_size(int bits)
+ {
+ 	loff_t res = EXT2_NDIR_BLOCKS;
+ 	int meta_blocks;
+-	loff_t upper_limit;
++	unsigned int upper_limit;
++	unsigned int ppb = 1 << (bits-2);
+ 
+ 	/* This is calculated to be the largest file size for a
+ 	 * dense, file such that the total number of
+@@ -771,24 +772,34 @@ static loff_t ext2_max_size(int bits)
+ 	/* total blocks in file system block size */
+ 	upper_limit >>= (bits - 9);
+ 
++	/* Compute how many blocks we can address by block tree */
++	res += 1LL << (bits-2);
++	res += 1LL << (2*(bits-2));
++	res += 1LL << (3*(bits-2));
++	/* Does block tree limit file size? */
++	if (res < upper_limit)
++		goto check_lfs;
+ 
++	res = upper_limit;
++	/* How many metadata blocks are needed for addressing upper_limit? */
++	upper_limit -= EXT2_NDIR_BLOCKS;
+ 	/* indirect blocks */
+ 	meta_blocks = 1;
++	upper_limit -= ppb;
+ 	/* double indirect blocks */
+-	meta_blocks += 1 + (1LL << (bits-2));
+-	/* tripple indirect blocks */
+-	meta_blocks += 1 + (1LL << (bits-2)) + (1LL << (2*(bits-2)));
+-
+-	upper_limit -= meta_blocks;
+-	upper_limit <<= bits;
+-
+-	res += 1LL << (bits-2);
+-	res += 1LL << (2*(bits-2));
+-	res += 1LL << (3*(bits-2));
++	if (upper_limit < ppb * ppb) {
++		meta_blocks += 1 + DIV_ROUND_UP(upper_limit, ppb);
++		res -= meta_blocks;
++		goto check_lfs;
++	}
++	meta_blocks += 1 + ppb;
++	upper_limit -= ppb * ppb;
++	/* tripple indirect blocks for the rest */
++	meta_blocks += 1 + DIV_ROUND_UP(upper_limit, ppb) +
++		DIV_ROUND_UP(upper_limit, ppb*ppb);
++	res -= meta_blocks;
++check_lfs:
+ 	res <<= bits;
+-	if (res > upper_limit)
+-		res = upper_limit;
+-
+ 	if (res > MAX_LFS_FILESIZE)
+ 		res = MAX_LFS_FILESIZE;
+ 
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 185a05d3257e..508a37ec9271 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -426,6 +426,9 @@ struct flex_groups {
+ /* Flags that are appropriate for non-directories/regular files. */
+ #define EXT4_OTHER_FLMASK (EXT4_NODUMP_FL | EXT4_NOATIME_FL)
+ 
++/* The only flags that should be swapped */
++#define EXT4_FL_SHOULD_SWAP (EXT4_HUGE_FILE_FL | EXT4_EXTENTS_FL)
++
+ /* Mask out flags that are inappropriate for the given type of inode. */
+ static inline __u32 ext4_mask_flags(umode_t mode, __u32 flags)
+ {
+diff --git a/fs/ext4/ext4_jbd2.h b/fs/ext4/ext4_jbd2.h
+index 15b6dd733780..df908ef79cce 100644
+--- a/fs/ext4/ext4_jbd2.h
++++ b/fs/ext4/ext4_jbd2.h
+@@ -384,7 +384,7 @@ static inline void ext4_update_inode_fsync_trans(handle_t *handle,
+ {
+ 	struct ext4_inode_info *ei = EXT4_I(inode);
+ 
+-	if (ext4_handle_valid(handle)) {
++	if (ext4_handle_valid(handle) && !is_handle_aborted(handle)) {
+ 		ei->i_sync_tid = handle->h_transaction->t_tid;
+ 		if (datasync)
+ 			ei->i_datasync_tid = handle->h_transaction->t_tid;
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 240b6dea5441..252bbbb5a2f4 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -2956,14 +2956,17 @@ again:
+ 			if (err < 0)
+ 				goto out;
+ 
+-		} else if (sbi->s_cluster_ratio > 1 && end >= ex_end) {
++		} else if (sbi->s_cluster_ratio > 1 && end >= ex_end &&
++			   partial.state == initial) {
+ 			/*
+-			 * If there's an extent to the right its first cluster
+-			 * contains the immediate right boundary of the
+-			 * truncated/punched region.  Set partial_cluster to
+-			 * its negative value so it won't be freed if shared
+-			 * with the current extent.  The end < ee_block case
+-			 * is handled in ext4_ext_rm_leaf().
++			 * If we're punching, there's an extent to the right.
++			 * If the partial cluster hasn't been set, set it to
++			 * that extent's first cluster and its state to nofree
++			 * so it won't be freed should it contain blocks to be
++			 * removed. If it's already set (tofree/nofree), we're
++			 * retrying and keep the original partial cluster info
++			 * so a cluster marked tofree as a result of earlier
++			 * extent removal is not lost.
+ 			 */
+ 			lblk = ex_end + 1;
+ 			err = ext4_ext_search_right(inode, path, &lblk, &pblk,
+diff --git a/fs/ext4/file.c b/fs/ext4/file.c
+index 69d65d49837b..98ec11f69cd4 100644
+--- a/fs/ext4/file.c
++++ b/fs/ext4/file.c
+@@ -125,7 +125,7 @@ ext4_unaligned_aio(struct inode *inode, struct iov_iter *from, loff_t pos)
+ 	struct super_block *sb = inode->i_sb;
+ 	int blockmask = sb->s_blocksize - 1;
+ 
+-	if (pos >= i_size_read(inode))
++	if (pos >= ALIGN(i_size_read(inode), sb->s_blocksize))
+ 		return 0;
+ 
+ 	if ((pos | iov_iter_alignment(from)) & blockmask)
+diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c
+index bf7fa1507e81..e1801b288847 100644
+--- a/fs/ext4/indirect.c
++++ b/fs/ext4/indirect.c
+@@ -1219,6 +1219,7 @@ int ext4_ind_remove_space(handle_t *handle, struct inode *inode,
+ 	ext4_lblk_t offsets[4], offsets2[4];
+ 	Indirect chain[4], chain2[4];
+ 	Indirect *partial, *partial2;
++	Indirect *p = NULL, *p2 = NULL;
+ 	ext4_lblk_t max_block;
+ 	__le32 nr = 0, nr2 = 0;
+ 	int n = 0, n2 = 0;
+@@ -1260,7 +1261,7 @@ int ext4_ind_remove_space(handle_t *handle, struct inode *inode,
+ 		}
+ 
+ 
+-		partial = ext4_find_shared(inode, n, offsets, chain, &nr);
++		partial = p = ext4_find_shared(inode, n, offsets, chain, &nr);
+ 		if (nr) {
+ 			if (partial == chain) {
+ 				/* Shared branch grows from the inode */
+@@ -1285,13 +1286,11 @@ int ext4_ind_remove_space(handle_t *handle, struct inode *inode,
+ 				partial->p + 1,
+ 				(__le32 *)partial->bh->b_data+addr_per_block,
+ 				(chain+n-1) - partial);
+-			BUFFER_TRACE(partial->bh, "call brelse");
+-			brelse(partial->bh);
+ 			partial--;
+ 		}
+ 
+ end_range:
+-		partial2 = ext4_find_shared(inode, n2, offsets2, chain2, &nr2);
++		partial2 = p2 = ext4_find_shared(inode, n2, offsets2, chain2, &nr2);
+ 		if (nr2) {
+ 			if (partial2 == chain2) {
+ 				/*
+@@ -1321,16 +1320,14 @@ end_range:
+ 					   (__le32 *)partial2->bh->b_data,
+ 					   partial2->p,
+ 					   (chain2+n2-1) - partial2);
+-			BUFFER_TRACE(partial2->bh, "call brelse");
+-			brelse(partial2->bh);
+ 			partial2--;
+ 		}
+ 		goto do_indirects;
+ 	}
+ 
+ 	/* Punch happened within the same level (n == n2) */
+-	partial = ext4_find_shared(inode, n, offsets, chain, &nr);
+-	partial2 = ext4_find_shared(inode, n2, offsets2, chain2, &nr2);
++	partial = p = ext4_find_shared(inode, n, offsets, chain, &nr);
++	partial2 = p2 = ext4_find_shared(inode, n2, offsets2, chain2, &nr2);
+ 
+ 	/* Free top, but only if partial2 isn't its subtree. */
+ 	if (nr) {
+@@ -1387,11 +1384,7 @@ end_range:
+ 					   partial->p + 1,
+ 					   partial2->p,
+ 					   (chain+n-1) - partial);
+-			BUFFER_TRACE(partial->bh, "call brelse");
+-			brelse(partial->bh);
+-			BUFFER_TRACE(partial2->bh, "call brelse");
+-			brelse(partial2->bh);
+-			return 0;
++			goto cleanup;
+ 		}
+ 
+ 		/*
+@@ -1406,8 +1399,6 @@ end_range:
+ 					   partial->p + 1,
+ 					   (__le32 *)partial->bh->b_data+addr_per_block,
+ 					   (chain+n-1) - partial);
+-			BUFFER_TRACE(partial->bh, "call brelse");
+-			brelse(partial->bh);
+ 			partial--;
+ 		}
+ 		if (partial2 > chain2 && depth2 <= depth) {
+@@ -1415,11 +1406,21 @@ end_range:
+ 					   (__le32 *)partial2->bh->b_data,
+ 					   partial2->p,
+ 					   (chain2+n2-1) - partial2);
+-			BUFFER_TRACE(partial2->bh, "call brelse");
+-			brelse(partial2->bh);
+ 			partial2--;
+ 		}
+ 	}
++
++cleanup:
++	while (p && p > chain) {
++		BUFFER_TRACE(p->bh, "call brelse");
++		brelse(p->bh);
++		p--;
++	}
++	while (p2 && p2 > chain2) {
++		BUFFER_TRACE(p2->bh, "call brelse");
++		brelse(p2->bh);
++		p2--;
++	}
+ 	return 0;
+ 
+ do_indirects:
+@@ -1427,7 +1428,7 @@ do_indirects:
+ 	switch (offsets[0]) {
+ 	default:
+ 		if (++n >= n2)
+-			return 0;
++			break;
+ 		nr = i_data[EXT4_IND_BLOCK];
+ 		if (nr) {
+ 			ext4_free_branches(handle, inode, NULL, &nr, &nr+1, 1);
+@@ -1435,7 +1436,7 @@ do_indirects:
+ 		}
+ 	case EXT4_IND_BLOCK:
+ 		if (++n >= n2)
+-			return 0;
++			break;
+ 		nr = i_data[EXT4_DIND_BLOCK];
+ 		if (nr) {
+ 			ext4_free_branches(handle, inode, NULL, &nr, &nr+1, 2);
+@@ -1443,7 +1444,7 @@ do_indirects:
+ 		}
+ 	case EXT4_DIND_BLOCK:
+ 		if (++n >= n2)
+-			return 0;
++			break;
+ 		nr = i_data[EXT4_TIND_BLOCK];
+ 		if (nr) {
+ 			ext4_free_branches(handle, inode, NULL, &nr, &nr+1, 3);
+@@ -1452,5 +1453,5 @@ do_indirects:
+ 	case EXT4_TIND_BLOCK:
+ 		;
+ 	}
+-	return 0;
++	goto cleanup;
+ }
+diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
+index d37dafa1d133..2e76fb55d94a 100644
+--- a/fs/ext4/ioctl.c
++++ b/fs/ext4/ioctl.c
+@@ -63,18 +63,20 @@ static void swap_inode_data(struct inode *inode1, struct inode *inode2)
+ 	loff_t isize;
+ 	struct ext4_inode_info *ei1;
+ 	struct ext4_inode_info *ei2;
++	unsigned long tmp;
+ 
+ 	ei1 = EXT4_I(inode1);
+ 	ei2 = EXT4_I(inode2);
+ 
+ 	swap(inode1->i_version, inode2->i_version);
+-	swap(inode1->i_blocks, inode2->i_blocks);
+-	swap(inode1->i_bytes, inode2->i_bytes);
+ 	swap(inode1->i_atime, inode2->i_atime);
+ 	swap(inode1->i_mtime, inode2->i_mtime);
+ 
+ 	memswap(ei1->i_data, ei2->i_data, sizeof(ei1->i_data));
+-	swap(ei1->i_flags, ei2->i_flags);
++	tmp = ei1->i_flags & EXT4_FL_SHOULD_SWAP;
++	ei1->i_flags = (ei2->i_flags & EXT4_FL_SHOULD_SWAP) |
++		(ei1->i_flags & ~EXT4_FL_SHOULD_SWAP);
++	ei2->i_flags = tmp | (ei2->i_flags & ~EXT4_FL_SHOULD_SWAP);
+ 	swap(ei1->i_disksize, ei2->i_disksize);
+ 	ext4_es_remove_extent(inode1, 0, EXT_MAX_BLOCKS);
+ 	ext4_es_remove_extent(inode2, 0, EXT_MAX_BLOCKS);
+@@ -115,28 +117,41 @@ static long swap_inode_boot_loader(struct super_block *sb,
+ 	int err;
+ 	struct inode *inode_bl;
+ 	struct ext4_inode_info *ei_bl;
+-
+-	if (inode->i_nlink != 1 || !S_ISREG(inode->i_mode) ||
+-	    IS_SWAPFILE(inode) || IS_ENCRYPTED(inode) ||
+-	    ext4_has_inline_data(inode))
+-		return -EINVAL;
+-
+-	if (IS_RDONLY(inode) || IS_APPEND(inode) || IS_IMMUTABLE(inode) ||
+-	    !inode_owner_or_capable(inode) || !capable(CAP_SYS_ADMIN))
+-		return -EPERM;
++	qsize_t size, size_bl, diff;
++	blkcnt_t blocks;
++	unsigned short bytes;
+ 
+ 	inode_bl = ext4_iget(sb, EXT4_BOOT_LOADER_INO, EXT4_IGET_SPECIAL);
+ 	if (IS_ERR(inode_bl))
+ 		return PTR_ERR(inode_bl);
+ 	ei_bl = EXT4_I(inode_bl);
+ 
+-	filemap_flush(inode->i_mapping);
+-	filemap_flush(inode_bl->i_mapping);
+-
+ 	/* Protect orig inodes against a truncate and make sure,
+ 	 * that only 1 swap_inode_boot_loader is running. */
+ 	lock_two_nondirectories(inode, inode_bl);
+ 
++	if (inode->i_nlink != 1 || !S_ISREG(inode->i_mode) ||
++	    IS_SWAPFILE(inode) || IS_ENCRYPTED(inode) ||
++	    ext4_has_inline_data(inode)) {
++		err = -EINVAL;
++		goto journal_err_out;
++	}
++
++	if (IS_RDONLY(inode) || IS_APPEND(inode) || IS_IMMUTABLE(inode) ||
++	    !inode_owner_or_capable(inode) || !capable(CAP_SYS_ADMIN)) {
++		err = -EPERM;
++		goto journal_err_out;
++	}
++
++	down_write(&EXT4_I(inode)->i_mmap_sem);
++	err = filemap_write_and_wait(inode->i_mapping);
++	if (err)
++		goto err_out;
++
++	err = filemap_write_and_wait(inode_bl->i_mapping);
++	if (err)
++		goto err_out;
++
+ 	/* Wait for all existing dio workers */
+ 	inode_dio_wait(inode);
+ 	inode_dio_wait(inode_bl);
+@@ -147,7 +162,7 @@ static long swap_inode_boot_loader(struct super_block *sb,
+ 	handle = ext4_journal_start(inode_bl, EXT4_HT_MOVE_EXTENTS, 2);
+ 	if (IS_ERR(handle)) {
+ 		err = -EINVAL;
+-		goto journal_err_out;
++		goto err_out;
+ 	}
+ 
+ 	/* Protect extent tree against block allocations via delalloc */
+@@ -170,6 +185,13 @@ static long swap_inode_boot_loader(struct super_block *sb,
+ 			memset(ei_bl->i_data, 0, sizeof(ei_bl->i_data));
+ 	}
+ 
++	err = dquot_initialize(inode);
++	if (err)
++		goto err_out1;
++
++	size = (qsize_t)(inode->i_blocks) * (1 << 9) + inode->i_bytes;
++	size_bl = (qsize_t)(inode_bl->i_blocks) * (1 << 9) + inode_bl->i_bytes;
++	diff = size - size_bl;
+ 	swap_inode_data(inode, inode_bl);
+ 
+ 	inode->i_ctime = inode_bl->i_ctime = current_time(inode);
+@@ -183,27 +205,51 @@ static long swap_inode_boot_loader(struct super_block *sb,
+ 
+ 	err = ext4_mark_inode_dirty(handle, inode);
+ 	if (err < 0) {
++		/* No need to update quota information. */
+ 		ext4_warning(inode->i_sb,
+ 			"couldn't mark inode #%lu dirty (err %d)",
+ 			inode->i_ino, err);
+ 		/* Revert all changes: */
+ 		swap_inode_data(inode, inode_bl);
+ 		ext4_mark_inode_dirty(handle, inode);
+-	} else {
+-		err = ext4_mark_inode_dirty(handle, inode_bl);
+-		if (err < 0) {
+-			ext4_warning(inode_bl->i_sb,
+-				"couldn't mark inode #%lu dirty (err %d)",
+-				inode_bl->i_ino, err);
+-			/* Revert all changes: */
+-			swap_inode_data(inode, inode_bl);
+-			ext4_mark_inode_dirty(handle, inode);
+-			ext4_mark_inode_dirty(handle, inode_bl);
+-		}
++		goto err_out1;
++	}
++
++	blocks = inode_bl->i_blocks;
++	bytes = inode_bl->i_bytes;
++	inode_bl->i_blocks = inode->i_blocks;
++	inode_bl->i_bytes = inode->i_bytes;
++	err = ext4_mark_inode_dirty(handle, inode_bl);
++	if (err < 0) {
++		/* No need to update quota information. */
++		ext4_warning(inode_bl->i_sb,
++			"couldn't mark inode #%lu dirty (err %d)",
++			inode_bl->i_ino, err);
++		goto revert;
++	}
++
++	/* Bootloader inode should not be counted into quota information. */
++	if (diff > 0)
++		dquot_free_space(inode, diff);
++	else
++		err = dquot_alloc_space(inode, -1 * diff);
++
++	if (err < 0) {
++revert:
++		/* Revert all changes: */
++		inode_bl->i_blocks = blocks;
++		inode_bl->i_bytes = bytes;
++		swap_inode_data(inode, inode_bl);
++		ext4_mark_inode_dirty(handle, inode);
++		ext4_mark_inode_dirty(handle, inode_bl);
+ 	}
++
++err_out1:
+ 	ext4_journal_stop(handle);
+ 	ext4_double_up_write_data_sem(inode, inode_bl);
+ 
++err_out:
++	up_write(&EXT4_I(inode)->i_mmap_sem);
+ journal_err_out:
+ 	unlock_two_nondirectories(inode, inode_bl);
+ 	iput(inode_bl);
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index 48421de803b7..3d9b18505c0c 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -1960,7 +1960,8 @@ retry:
+ 				le16_to_cpu(es->s_reserved_gdt_blocks);
+ 			n_group = n_desc_blocks * EXT4_DESC_PER_BLOCK(sb);
+ 			n_blocks_count = (ext4_fsblk_t)n_group *
+-				EXT4_BLOCKS_PER_GROUP(sb);
++				EXT4_BLOCKS_PER_GROUP(sb) +
++				le32_to_cpu(es->s_first_data_block);
+ 			n_group--; /* set to last group number */
+ 		}
+ 
+diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c
+index 1cb0fcc67d2d..caf77fe8ac07 100644
+--- a/fs/f2fs/extent_cache.c
++++ b/fs/f2fs/extent_cache.c
+@@ -506,7 +506,7 @@ static void f2fs_update_extent_tree_range(struct inode *inode,
+ 	unsigned int end = fofs + len;
+ 	unsigned int pos = (unsigned int)fofs;
+ 	bool updated = false;
+-	bool leftmost;
++	bool leftmost = false;
+ 
+ 	if (!et)
+ 		return;
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 12fabd6735dd..279bc00489cc 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -456,7 +456,6 @@ struct f2fs_flush_device {
+ 
+ /* for inline stuff */
+ #define DEF_INLINE_RESERVED_SIZE	1
+-#define DEF_MIN_INLINE_SIZE		1
+ static inline int get_extra_isize(struct inode *inode);
+ static inline int get_inline_xattr_addrs(struct inode *inode);
+ #define MAX_INLINE_DATA(inode)	(sizeof(__le32) *			\
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index bba56b39dcc5..ae2b45e75847 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -1750,10 +1750,12 @@ static int f2fs_ioc_start_atomic_write(struct file *filp)
+ 
+ 	down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+ 
+-	if (!get_dirty_pages(inode))
+-		goto skip_flush;
+-
+-	f2fs_msg(F2FS_I_SB(inode)->sb, KERN_WARNING,
++	/*
++	 * Should wait end_io to count F2FS_WB_CP_DATA correctly by
++	 * f2fs_is_atomic_file.
++	 */
++	if (get_dirty_pages(inode))
++		f2fs_msg(F2FS_I_SB(inode)->sb, KERN_WARNING,
+ 		"Unexpected flush for atomic writes: ino=%lu, npages=%u",
+ 					inode->i_ino, get_dirty_pages(inode));
+ 	ret = filemap_write_and_wait_range(inode->i_mapping, 0, LLONG_MAX);
+@@ -1761,7 +1763,7 @@ static int f2fs_ioc_start_atomic_write(struct file *filp)
+ 		up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+ 		goto out;
+ 	}
+-skip_flush:
++
+ 	set_inode_flag(inode, FI_ATOMIC_FILE);
+ 	clear_inode_flag(inode, FI_ATOMIC_REVOKE_REQUEST);
+ 	up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
+diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
+index d636cbcf68f2..aacbb864ec1e 100644
+--- a/fs/f2fs/inline.c
++++ b/fs/f2fs/inline.c
+@@ -659,6 +659,12 @@ int f2fs_read_inline_dir(struct file *file, struct dir_context *ctx,
+ 	if (IS_ERR(ipage))
+ 		return PTR_ERR(ipage);
+ 
++	/*
++	 * f2fs_readdir was protected by inode.i_rwsem, it is safe to access
++	 * ipage without page's lock held.
++	 */
++	unlock_page(ipage);
++
+ 	inline_dentry = inline_data_addr(inode, ipage);
+ 
+ 	make_dentry_ptr_inline(inode, &d, inline_dentry);
+@@ -667,7 +673,7 @@ int f2fs_read_inline_dir(struct file *file, struct dir_context *ctx,
+ 	if (!err)
+ 		ctx->pos = d.max;
+ 
+-	f2fs_put_page(ipage, 1);
++	f2fs_put_page(ipage, 0);
+ 	return err < 0 ? err : 0;
+ }
+ 
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 9b79056d705d..e1b1d390b329 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -215,7 +215,8 @@ void f2fs_register_inmem_page(struct inode *inode, struct page *page)
+ }
+ 
+ static int __revoke_inmem_pages(struct inode *inode,
+-				struct list_head *head, bool drop, bool recover)
++				struct list_head *head, bool drop, bool recover,
++				bool trylock)
+ {
+ 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ 	struct inmem_pages *cur, *tmp;
+@@ -227,7 +228,16 @@ static int __revoke_inmem_pages(struct inode *inode,
+ 		if (drop)
+ 			trace_f2fs_commit_inmem_page(page, INMEM_DROP);
+ 
+-		lock_page(page);
++		if (trylock) {
++			/*
++			 * to avoid deadlock in between page lock and
++			 * inmem_lock.
++			 */
++			if (!trylock_page(page))
++				continue;
++		} else {
++			lock_page(page);
++		}
+ 
+ 		f2fs_wait_on_page_writeback(page, DATA, true, true);
+ 
+@@ -318,13 +328,19 @@ void f2fs_drop_inmem_pages(struct inode *inode)
+ 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ 	struct f2fs_inode_info *fi = F2FS_I(inode);
+ 
+-	mutex_lock(&fi->inmem_lock);
+-	__revoke_inmem_pages(inode, &fi->inmem_pages, true, false);
+-	spin_lock(&sbi->inode_lock[ATOMIC_FILE]);
+-	if (!list_empty(&fi->inmem_ilist))
+-		list_del_init(&fi->inmem_ilist);
+-	spin_unlock(&sbi->inode_lock[ATOMIC_FILE]);
+-	mutex_unlock(&fi->inmem_lock);
++	while (!list_empty(&fi->inmem_pages)) {
++		mutex_lock(&fi->inmem_lock);
++		__revoke_inmem_pages(inode, &fi->inmem_pages,
++						true, false, true);
++
++		if (list_empty(&fi->inmem_pages)) {
++			spin_lock(&sbi->inode_lock[ATOMIC_FILE]);
++			if (!list_empty(&fi->inmem_ilist))
++				list_del_init(&fi->inmem_ilist);
++			spin_unlock(&sbi->inode_lock[ATOMIC_FILE]);
++		}
++		mutex_unlock(&fi->inmem_lock);
++	}
+ 
+ 	clear_inode_flag(inode, FI_ATOMIC_FILE);
+ 	fi->i_gc_failures[GC_FAILURE_ATOMIC] = 0;
+@@ -429,12 +445,15 @@ retry:
+ 		 * recovery or rewrite & commit last transaction. For other
+ 		 * error number, revoking was done by filesystem itself.
+ 		 */
+-		err = __revoke_inmem_pages(inode, &revoke_list, false, true);
++		err = __revoke_inmem_pages(inode, &revoke_list,
++						false, true, false);
+ 
+ 		/* drop all uncommitted pages */
+-		__revoke_inmem_pages(inode, &fi->inmem_pages, true, false);
++		__revoke_inmem_pages(inode, &fi->inmem_pages,
++						true, false, false);
+ 	} else {
+-		__revoke_inmem_pages(inode, &revoke_list, false, false);
++		__revoke_inmem_pages(inode, &revoke_list,
++						false, false, false);
+ 	}
+ 
+ 	return err;
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index c46a1d4318d4..5892fa3c885f 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -834,12 +834,13 @@ static int parse_options(struct super_block *sb, char *options)
+ 					"set with inline_xattr option");
+ 			return -EINVAL;
+ 		}
+-		if (!F2FS_OPTION(sbi).inline_xattr_size ||
+-			F2FS_OPTION(sbi).inline_xattr_size >=
+-					DEF_ADDRS_PER_INODE -
+-					F2FS_TOTAL_EXTRA_ATTR_SIZE -
+-					DEF_INLINE_RESERVED_SIZE -
+-					DEF_MIN_INLINE_SIZE) {
++		if (F2FS_OPTION(sbi).inline_xattr_size <
++			sizeof(struct f2fs_xattr_header) / sizeof(__le32) ||
++			F2FS_OPTION(sbi).inline_xattr_size >
++			DEF_ADDRS_PER_INODE -
++			F2FS_TOTAL_EXTRA_ATTR_SIZE / sizeof(__le32) -
++			DEF_INLINE_RESERVED_SIZE -
++			MIN_INLINE_DENTRY_SIZE / sizeof(__le32)) {
+ 			f2fs_msg(sb, KERN_ERR,
+ 					"inline xattr size is out of range");
+ 			return -EINVAL;
+@@ -915,6 +916,10 @@ static int f2fs_drop_inode(struct inode *inode)
+ 			sb_start_intwrite(inode->i_sb);
+ 			f2fs_i_size_write(inode, 0);
+ 
++			f2fs_submit_merged_write_cond(F2FS_I_SB(inode),
++					inode, NULL, 0, DATA);
++			truncate_inode_pages_final(inode->i_mapping);
++
+ 			if (F2FS_HAS_BLOCKS(inode))
+ 				f2fs_truncate(inode);
+ 
+diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
+index 0575edbe3ed6..f1ab9000b294 100644
+--- a/fs/f2fs/sysfs.c
++++ b/fs/f2fs/sysfs.c
+@@ -278,10 +278,16 @@ out:
+ 		return count;
+ 	}
+ 
+-	*ui = t;
+ 
+-	if (!strcmp(a->attr.name, "iostat_enable") && *ui == 0)
+-		f2fs_reset_iostat(sbi);
++	if (!strcmp(a->attr.name, "iostat_enable")) {
++		sbi->iostat_enable = !!t;
++		if (!sbi->iostat_enable)
++			f2fs_reset_iostat(sbi);
++		return count;
++	}
++
++	*ui = (unsigned int)t;
++
+ 	return count;
+ }
+ 
+diff --git a/fs/f2fs/trace.c b/fs/f2fs/trace.c
+index ce2a5eb210b6..d0ab533a9ce8 100644
+--- a/fs/f2fs/trace.c
++++ b/fs/f2fs/trace.c
+@@ -14,7 +14,7 @@
+ #include "trace.h"
+ 
+ static RADIX_TREE(pids, GFP_ATOMIC);
+-static struct mutex pids_lock;
++static spinlock_t pids_lock;
+ static struct last_io_info last_io;
+ 
+ static inline void __print_last_io(void)
+@@ -58,23 +58,29 @@ void f2fs_trace_pid(struct page *page)
+ 
+ 	set_page_private(page, (unsigned long)pid);
+ 
++retry:
+ 	if (radix_tree_preload(GFP_NOFS))
+ 		return;
+ 
+-	mutex_lock(&pids_lock);
++	spin_lock(&pids_lock);
+ 	p = radix_tree_lookup(&pids, pid);
+ 	if (p == current)
+ 		goto out;
+ 	if (p)
+ 		radix_tree_delete(&pids, pid);
+ 
+-	f2fs_radix_tree_insert(&pids, pid, current);
++	if (radix_tree_insert(&pids, pid, current)) {
++		spin_unlock(&pids_lock);
++		radix_tree_preload_end();
++		cond_resched();
++		goto retry;
++	}
+ 
+ 	trace_printk("%3x:%3x %4x %-16s\n",
+ 			MAJOR(inode->i_sb->s_dev), MINOR(inode->i_sb->s_dev),
+ 			pid, current->comm);
+ out:
+-	mutex_unlock(&pids_lock);
++	spin_unlock(&pids_lock);
+ 	radix_tree_preload_end();
+ }
+ 
+@@ -119,7 +125,7 @@ void f2fs_trace_ios(struct f2fs_io_info *fio, int flush)
+ 
+ void f2fs_build_trace_ios(void)
+ {
+-	mutex_init(&pids_lock);
++	spin_lock_init(&pids_lock);
+ }
+ 
+ #define PIDVEC_SIZE	128
+@@ -147,7 +153,7 @@ void f2fs_destroy_trace_ios(void)
+ 	pid_t next_pid = 0;
+ 	unsigned int found;
+ 
+-	mutex_lock(&pids_lock);
++	spin_lock(&pids_lock);
+ 	while ((found = gang_lookup_pids(pid, next_pid, PIDVEC_SIZE))) {
+ 		unsigned idx;
+ 
+@@ -155,5 +161,5 @@ void f2fs_destroy_trace_ios(void)
+ 		for (idx = 0; idx < found; idx++)
+ 			radix_tree_delete(&pids, pid[idx]);
+ 	}
+-	mutex_unlock(&pids_lock);
++	spin_unlock(&pids_lock);
+ }
+diff --git a/fs/f2fs/xattr.c b/fs/f2fs/xattr.c
+index 18d5ffbc5e8c..73b92985198b 100644
+--- a/fs/f2fs/xattr.c
++++ b/fs/f2fs/xattr.c
+@@ -224,11 +224,11 @@ static struct f2fs_xattr_entry *__find_inline_xattr(struct inode *inode,
+ {
+ 	struct f2fs_xattr_entry *entry;
+ 	unsigned int inline_size = inline_xattr_size(inode);
++	void *max_addr = base_addr + inline_size;
+ 
+ 	list_for_each_xattr(entry, base_addr) {
+-		if ((void *)entry + sizeof(__u32) > base_addr + inline_size ||
+-			(void *)XATTR_NEXT_ENTRY(entry) + sizeof(__u32) >
+-			base_addr + inline_size) {
++		if ((void *)entry + sizeof(__u32) > max_addr ||
++			(void *)XATTR_NEXT_ENTRY(entry) > max_addr) {
+ 			*last_addr = entry;
+ 			return NULL;
+ 		}
+@@ -239,6 +239,13 @@ static struct f2fs_xattr_entry *__find_inline_xattr(struct inode *inode,
+ 		if (!memcmp(entry->e_name, name, len))
+ 			break;
+ 	}
++
++	/* inline xattr header or entry across max inline xattr size */
++	if (IS_XATTR_LAST_ENTRY(entry) &&
++		(void *)entry + sizeof(__u32) > max_addr) {
++		*last_addr = entry;
++		return NULL;
++	}
+ 	return entry;
+ }
+ 
+diff --git a/fs/file.c b/fs/file.c
+index 3209ee271c41..a10487aa0a84 100644
+--- a/fs/file.c
++++ b/fs/file.c
+@@ -457,6 +457,7 @@ struct files_struct init_files = {
+ 		.full_fds_bits	= init_files.full_fds_bits_init,
+ 	},
+ 	.file_lock	= __SPIN_LOCK_UNLOCKED(init_files.file_lock),
++	.resize_wait	= __WAIT_QUEUE_HEAD_INITIALIZER(init_files.resize_wait),
+ };
+ 
+ static unsigned int find_next_fd(struct fdtable *fdt, unsigned int start)
+diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
+index b92740edc416..4b038f25f256 100644
+--- a/fs/gfs2/glock.c
++++ b/fs/gfs2/glock.c
+@@ -107,7 +107,7 @@ static int glock_wake_function(wait_queue_entry_t *wait, unsigned int mode,
+ 
+ static wait_queue_head_t *glock_waitqueue(struct lm_lockname *name)
+ {
+-	u32 hash = jhash2((u32 *)name, sizeof(*name) / 4, 0);
++	u32 hash = jhash2((u32 *)name, ht_parms.key_len / 4, 0);
+ 
+ 	return glock_wait_table + hash_32(hash, GLOCK_WAIT_TABLE_BITS);
+ }
+diff --git a/fs/jbd2/commit.c b/fs/jbd2/commit.c
+index 2eb55c3361a8..efd0ce9489ae 100644
+--- a/fs/jbd2/commit.c
++++ b/fs/jbd2/commit.c
+@@ -694,9 +694,11 @@ void jbd2_journal_commit_transaction(journal_t *journal)
+                            the last tag we set up. */
+ 
+ 			tag->t_flags |= cpu_to_be16(JBD2_FLAG_LAST_TAG);
+-
+-			jbd2_descriptor_block_csum_set(journal, descriptor);
+ start_journal_io:
++			if (descriptor)
++				jbd2_descriptor_block_csum_set(journal,
++							descriptor);
++
+ 			for (i = 0; i < bufs; i++) {
+ 				struct buffer_head *bh = wbuf[i];
+ 				/*
+diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
+index 8ef6b6daaa7a..88f2a49338a1 100644
+--- a/fs/jbd2/journal.c
++++ b/fs/jbd2/journal.c
+@@ -1356,6 +1356,10 @@ static int journal_reset(journal_t *journal)
+ 	return jbd2_journal_start_thread(journal);
+ }
+ 
++/*
++ * This function expects that the caller will have locked the journal
++ * buffer head, and will return with it unlocked
++ */
+ static int jbd2_write_superblock(journal_t *journal, int write_flags)
+ {
+ 	struct buffer_head *bh = journal->j_sb_buffer;
+@@ -1365,7 +1369,6 @@ static int jbd2_write_superblock(journal_t *journal, int write_flags)
+ 	trace_jbd2_write_superblock(journal, write_flags);
+ 	if (!(journal->j_flags & JBD2_BARRIER))
+ 		write_flags &= ~(REQ_FUA | REQ_PREFLUSH);
+-	lock_buffer(bh);
+ 	if (buffer_write_io_error(bh)) {
+ 		/*
+ 		 * Oh, dear.  A previous attempt to write the journal
+@@ -1424,6 +1427,7 @@ int jbd2_journal_update_sb_log_tail(journal_t *journal, tid_t tail_tid,
+ 	jbd_debug(1, "JBD2: updating superblock (start %lu, seq %u)\n",
+ 		  tail_block, tail_tid);
+ 
++	lock_buffer(journal->j_sb_buffer);
+ 	sb->s_sequence = cpu_to_be32(tail_tid);
+ 	sb->s_start    = cpu_to_be32(tail_block);
+ 
+@@ -1454,18 +1458,17 @@ static void jbd2_mark_journal_empty(journal_t *journal, int write_op)
+ 	journal_superblock_t *sb = journal->j_superblock;
+ 
+ 	BUG_ON(!mutex_is_locked(&journal->j_checkpoint_mutex));
+-	read_lock(&journal->j_state_lock);
+-	/* Is it already empty? */
+-	if (sb->s_start == 0) {
+-		read_unlock(&journal->j_state_lock);
++	lock_buffer(journal->j_sb_buffer);
++	if (sb->s_start == 0) {		/* Is it already empty? */
++		unlock_buffer(journal->j_sb_buffer);
+ 		return;
+ 	}
++
+ 	jbd_debug(1, "JBD2: Marking journal as empty (seq %d)\n",
+ 		  journal->j_tail_sequence);
+ 
+ 	sb->s_sequence = cpu_to_be32(journal->j_tail_sequence);
+ 	sb->s_start    = cpu_to_be32(0);
+-	read_unlock(&journal->j_state_lock);
+ 
+ 	jbd2_write_superblock(journal, write_op);
+ 
+@@ -1488,9 +1491,8 @@ void jbd2_journal_update_sb_errno(journal_t *journal)
+ 	journal_superblock_t *sb = journal->j_superblock;
+ 	int errcode;
+ 
+-	read_lock(&journal->j_state_lock);
++	lock_buffer(journal->j_sb_buffer);
+ 	errcode = journal->j_errno;
+-	read_unlock(&journal->j_state_lock);
+ 	if (errcode == -ESHUTDOWN)
+ 		errcode = 0;
+ 	jbd_debug(1, "JBD2: updating superblock error (errno %d)\n", errcode);
+@@ -1894,28 +1896,27 @@ int jbd2_journal_set_features (journal_t *journal, unsigned long compat,
+ 
+ 	sb = journal->j_superblock;
+ 
++	/* Load the checksum driver if necessary */
++	if ((journal->j_chksum_driver == NULL) &&
++	    INCOMPAT_FEATURE_ON(JBD2_FEATURE_INCOMPAT_CSUM_V3)) {
++		journal->j_chksum_driver = crypto_alloc_shash("crc32c", 0, 0);
++		if (IS_ERR(journal->j_chksum_driver)) {
++			printk(KERN_ERR "JBD2: Cannot load crc32c driver.\n");
++			journal->j_chksum_driver = NULL;
++			return 0;
++		}
++		/* Precompute checksum seed for all metadata */
++		journal->j_csum_seed = jbd2_chksum(journal, ~0, sb->s_uuid,
++						   sizeof(sb->s_uuid));
++	}
++
++	lock_buffer(journal->j_sb_buffer);
++
+ 	/* If enabling v3 checksums, update superblock */
+ 	if (INCOMPAT_FEATURE_ON(JBD2_FEATURE_INCOMPAT_CSUM_V3)) {
+ 		sb->s_checksum_type = JBD2_CRC32C_CHKSUM;
+ 		sb->s_feature_compat &=
+ 			~cpu_to_be32(JBD2_FEATURE_COMPAT_CHECKSUM);
+-
+-		/* Load the checksum driver */
+-		if (journal->j_chksum_driver == NULL) {
+-			journal->j_chksum_driver = crypto_alloc_shash("crc32c",
+-								      0, 0);
+-			if (IS_ERR(journal->j_chksum_driver)) {
+-				printk(KERN_ERR "JBD2: Cannot load crc32c "
+-				       "driver.\n");
+-				journal->j_chksum_driver = NULL;
+-				return 0;
+-			}
+-
+-			/* Precompute checksum seed for all metadata */
+-			journal->j_csum_seed = jbd2_chksum(journal, ~0,
+-							   sb->s_uuid,
+-							   sizeof(sb->s_uuid));
+-		}
+ 	}
+ 
+ 	/* If enabling v1 checksums, downgrade superblock */
+@@ -1927,6 +1928,7 @@ int jbd2_journal_set_features (journal_t *journal, unsigned long compat,
+ 	sb->s_feature_compat    |= cpu_to_be32(compat);
+ 	sb->s_feature_ro_compat |= cpu_to_be32(ro);
+ 	sb->s_feature_incompat  |= cpu_to_be32(incompat);
++	unlock_buffer(journal->j_sb_buffer);
+ 
+ 	return 1;
+ #undef COMPAT_FEATURE_ON
+diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
+index cc35537232f2..f0d8dabe1ff5 100644
+--- a/fs/jbd2/transaction.c
++++ b/fs/jbd2/transaction.c
+@@ -1252,11 +1252,12 @@ int jbd2_journal_get_undo_access(handle_t *handle, struct buffer_head *bh)
+ 	struct journal_head *jh;
+ 	char *committed_data = NULL;
+ 
+-	JBUFFER_TRACE(jh, "entry");
+ 	if (jbd2_write_access_granted(handle, bh, true))
+ 		return 0;
+ 
+ 	jh = jbd2_journal_add_journal_head(bh);
++	JBUFFER_TRACE(jh, "entry");
++
+ 	/*
+ 	 * Do this first --- it can drop the journal lock, so we want to
+ 	 * make sure that obtaining the committed_data is done
+@@ -1367,15 +1368,17 @@ int jbd2_journal_dirty_metadata(handle_t *handle, struct buffer_head *bh)
+ 
+ 	if (is_handle_aborted(handle))
+ 		return -EROFS;
+-	if (!buffer_jbd(bh)) {
+-		ret = -EUCLEAN;
+-		goto out;
+-	}
++	if (!buffer_jbd(bh))
++		return -EUCLEAN;
++
+ 	/*
+ 	 * We don't grab jh reference here since the buffer must be part
+ 	 * of the running transaction.
+ 	 */
+ 	jh = bh2jh(bh);
++	jbd_debug(5, "journal_head %p\n", jh);
++	JBUFFER_TRACE(jh, "entry");
++
+ 	/*
+ 	 * This and the following assertions are unreliable since we may see jh
+ 	 * in inconsistent state unless we grab bh_state lock. But this is
+@@ -1409,9 +1412,6 @@ int jbd2_journal_dirty_metadata(handle_t *handle, struct buffer_head *bh)
+ 	}
+ 
+ 	journal = transaction->t_journal;
+-	jbd_debug(5, "journal_head %p\n", jh);
+-	JBUFFER_TRACE(jh, "entry");
+-
+ 	jbd_lock_bh_state(bh);
+ 
+ 	if (jh->b_modified == 0) {
+@@ -1609,14 +1609,21 @@ int jbd2_journal_forget (handle_t *handle, struct buffer_head *bh)
+ 		/* However, if the buffer is still owned by a prior
+ 		 * (committing) transaction, we can't drop it yet... */
+ 		JBUFFER_TRACE(jh, "belongs to older transaction");
+-		/* ... but we CAN drop it from the new transaction if we
+-		 * have also modified it since the original commit. */
++		/* ... but we CAN drop it from the new transaction through
++		 * marking the buffer as freed and set j_next_transaction to
++		 * the new transaction, so that not only the commit code
++		 * knows it should clear dirty bits when it is done with the
++		 * buffer, but also the buffer can be checkpointed only
++		 * after the new transaction commits. */
+ 
+-		if (jh->b_next_transaction) {
+-			J_ASSERT(jh->b_next_transaction == transaction);
++		set_buffer_freed(bh);
++
++		if (!jh->b_next_transaction) {
+ 			spin_lock(&journal->j_list_lock);
+-			jh->b_next_transaction = NULL;
++			jh->b_next_transaction = transaction;
+ 			spin_unlock(&journal->j_list_lock);
++		} else {
++			J_ASSERT(jh->b_next_transaction == transaction);
+ 
+ 			/*
+ 			 * only drop a reference if this transaction modified
+diff --git a/fs/kernfs/mount.c b/fs/kernfs/mount.c
+index fdf527b6d79c..d71c9405874a 100644
+--- a/fs/kernfs/mount.c
++++ b/fs/kernfs/mount.c
+@@ -196,8 +196,10 @@ struct dentry *kernfs_node_dentry(struct kernfs_node *kn,
+ 		return dentry;
+ 
+ 	knparent = find_next_ancestor(kn, NULL);
+-	if (WARN_ON(!knparent))
++	if (WARN_ON(!knparent)) {
++		dput(dentry);
+ 		return ERR_PTR(-EINVAL);
++	}
+ 
+ 	do {
+ 		struct dentry *dtmp;
+@@ -206,8 +208,10 @@ struct dentry *kernfs_node_dentry(struct kernfs_node *kn,
+ 		if (kn == knparent)
+ 			return dentry;
+ 		kntmp = find_next_ancestor(kn, knparent);
+-		if (WARN_ON(!kntmp))
++		if (WARN_ON(!kntmp)) {
++			dput(dentry);
+ 			return ERR_PTR(-EINVAL);
++		}
+ 		dtmp = lookup_one_len_unlocked(kntmp->name, dentry,
+ 					       strlen(kntmp->name));
+ 		dput(dentry);
+diff --git a/fs/lockd/host.c b/fs/lockd/host.c
+index 93fb7cf0b92b..f0b5c987d6ae 100644
+--- a/fs/lockd/host.c
++++ b/fs/lockd/host.c
+@@ -290,12 +290,11 @@ void nlmclnt_release_host(struct nlm_host *host)
+ 
+ 	WARN_ON_ONCE(host->h_server);
+ 
+-	if (refcount_dec_and_test(&host->h_count)) {
++	if (refcount_dec_and_mutex_lock(&host->h_count, &nlm_host_mutex)) {
+ 		WARN_ON_ONCE(!list_empty(&host->h_lockowners));
+ 		WARN_ON_ONCE(!list_empty(&host->h_granted));
+ 		WARN_ON_ONCE(!list_empty(&host->h_reclaim));
+ 
+-		mutex_lock(&nlm_host_mutex);
+ 		nlm_destroy_host_locked(host);
+ 		mutex_unlock(&nlm_host_mutex);
+ 	}
+diff --git a/fs/locks.c b/fs/locks.c
+index ff6af2c32601..5f468cd95f68 100644
+--- a/fs/locks.c
++++ b/fs/locks.c
+@@ -1160,6 +1160,11 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
+ 			 */
+ 			error = -EDEADLK;
+ 			spin_lock(&blocked_lock_lock);
++			/*
++			 * Ensure that we don't find any locks blocked on this
++			 * request during deadlock detection.
++			 */
++			__locks_wake_up_blocks(request);
+ 			if (likely(!posix_locks_deadlock(request, fl))) {
+ 				error = FILE_LOCK_DEFERRED;
+ 				__locks_insert_block(fl, request,
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 557a5d636183..44258c516305 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -947,6 +947,13 @@ nfs4_sequence_process_interrupted(struct nfs_client *client,
+ 
+ #endif	/* !CONFIG_NFS_V4_1 */
+ 
++static void nfs41_sequence_res_init(struct nfs4_sequence_res *res)
++{
++	res->sr_timestamp = jiffies;
++	res->sr_status_flags = 0;
++	res->sr_status = 1;
++}
++
+ static
+ void nfs4_sequence_attach_slot(struct nfs4_sequence_args *args,
+ 		struct nfs4_sequence_res *res,
+@@ -958,10 +965,6 @@ void nfs4_sequence_attach_slot(struct nfs4_sequence_args *args,
+ 	args->sa_slot = slot;
+ 
+ 	res->sr_slot = slot;
+-	res->sr_timestamp = jiffies;
+-	res->sr_status_flags = 0;
+-	res->sr_status = 1;
+-
+ }
+ 
+ int nfs4_setup_sequence(struct nfs_client *client,
+@@ -1007,6 +1010,7 @@ int nfs4_setup_sequence(struct nfs_client *client,
+ 
+ 	trace_nfs4_setup_sequence(session, args);
+ out_start:
++	nfs41_sequence_res_init(res);
+ 	rpc_call_start(task);
+ 	return 0;
+ 
+@@ -2934,7 +2938,8 @@ static int _nfs4_open_and_get_state(struct nfs4_opendata *opendata,
+ 	}
+ 
+ out:
+-	nfs4_sequence_free_slot(&opendata->o_res.seq_res);
++	if (!opendata->cancelled)
++		nfs4_sequence_free_slot(&opendata->o_res.seq_res);
+ 	return ret;
+ }
+ 
+@@ -6302,7 +6307,6 @@ static struct nfs4_unlockdata *nfs4_alloc_unlockdata(struct file_lock *fl,
+ 	p->arg.seqid = seqid;
+ 	p->res.seqid = seqid;
+ 	p->lsp = lsp;
+-	refcount_inc(&lsp->ls_count);
+ 	/* Ensure we don't close file until we're done freeing locks! */
+ 	p->ctx = get_nfs_open_context(ctx);
+ 	p->l_ctx = nfs_get_lock_context(ctx);
+@@ -6527,7 +6531,6 @@ static struct nfs4_lockdata *nfs4_alloc_lockdata(struct file_lock *fl,
+ 	p->res.lock_seqid = p->arg.lock_seqid;
+ 	p->lsp = lsp;
+ 	p->server = server;
+-	refcount_inc(&lsp->ls_count);
+ 	p->ctx = get_nfs_open_context(ctx);
+ 	locks_init_lock(&p->fl);
+ 	locks_copy_lock(&p->fl, fl);
+diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c
+index e54d899c1848..a8951f1f7b4e 100644
+--- a/fs/nfs/pagelist.c
++++ b/fs/nfs/pagelist.c
+@@ -988,6 +988,17 @@ static void nfs_pageio_doio(struct nfs_pageio_descriptor *desc)
+ 	}
+ }
+ 
++static void
++nfs_pageio_cleanup_request(struct nfs_pageio_descriptor *desc,
++		struct nfs_page *req)
++{
++	LIST_HEAD(head);
++
++	nfs_list_remove_request(req);
++	nfs_list_add_request(req, &head);
++	desc->pg_completion_ops->error_cleanup(&head);
++}
++
+ /**
+  * nfs_pageio_add_request - Attempt to coalesce a request into a page list.
+  * @desc: destination io descriptor
+@@ -1025,10 +1036,8 @@ static int __nfs_pageio_add_request(struct nfs_pageio_descriptor *desc,
+ 			nfs_page_group_unlock(req);
+ 			desc->pg_moreio = 1;
+ 			nfs_pageio_doio(desc);
+-			if (desc->pg_error < 0)
+-				return 0;
+-			if (mirror->pg_recoalesce)
+-				return 0;
++			if (desc->pg_error < 0 || mirror->pg_recoalesce)
++				goto out_cleanup_subreq;
+ 			/* retry add_request for this subreq */
+ 			nfs_page_group_lock(req);
+ 			continue;
+@@ -1061,6 +1070,10 @@ err_ptr:
+ 	desc->pg_error = PTR_ERR(subreq);
+ 	nfs_page_group_unlock(req);
+ 	return 0;
++out_cleanup_subreq:
++	if (req != subreq)
++		nfs_pageio_cleanup_request(desc, subreq);
++	return 0;
+ }
+ 
+ static int nfs_do_recoalesce(struct nfs_pageio_descriptor *desc)
+@@ -1079,7 +1092,6 @@ static int nfs_do_recoalesce(struct nfs_pageio_descriptor *desc)
+ 			struct nfs_page *req;
+ 
+ 			req = list_first_entry(&head, struct nfs_page, wb_list);
+-			nfs_list_remove_request(req);
+ 			if (__nfs_pageio_add_request(desc, req))
+ 				continue;
+ 			if (desc->pg_error < 0) {
+@@ -1168,11 +1180,14 @@ int nfs_pageio_add_request(struct nfs_pageio_descriptor *desc,
+ 		if (nfs_pgio_has_mirroring(desc))
+ 			desc->pg_mirror_idx = midx;
+ 		if (!nfs_pageio_add_request_mirror(desc, dupreq))
+-			goto out_failed;
++			goto out_cleanup_subreq;
+ 	}
+ 
+ 	return 1;
+ 
++out_cleanup_subreq:
++	if (req != dupreq)
++		nfs_pageio_cleanup_request(desc, dupreq);
+ out_failed:
+ 	nfs_pageio_error_cleanup(desc);
+ 	return 0;
+@@ -1194,7 +1209,7 @@ static void nfs_pageio_complete_mirror(struct nfs_pageio_descriptor *desc,
+ 		desc->pg_mirror_idx = mirror_idx;
+ 	for (;;) {
+ 		nfs_pageio_doio(desc);
+-		if (!mirror->pg_recoalesce)
++		if (desc->pg_error < 0 || !mirror->pg_recoalesce)
+ 			break;
+ 		if (!nfs_do_recoalesce(desc))
+ 			break;
+diff --git a/fs/nfsd/nfs3proc.c b/fs/nfsd/nfs3proc.c
+index 9eb8086ea841..c9cf46e0c040 100644
+--- a/fs/nfsd/nfs3proc.c
++++ b/fs/nfsd/nfs3proc.c
+@@ -463,8 +463,19 @@ nfsd3_proc_readdir(struct svc_rqst *rqstp)
+ 					&resp->common, nfs3svc_encode_entry);
+ 	memcpy(resp->verf, argp->verf, 8);
+ 	resp->count = resp->buffer - argp->buffer;
+-	if (resp->offset)
+-		xdr_encode_hyper(resp->offset, argp->cookie);
++	if (resp->offset) {
++		loff_t offset = argp->cookie;
++
++		if (unlikely(resp->offset1)) {
++			/* we ended up with offset on a page boundary */
++			*resp->offset = htonl(offset >> 32);
++			*resp->offset1 = htonl(offset & 0xffffffff);
++			resp->offset1 = NULL;
++		} else {
++			xdr_encode_hyper(resp->offset, offset);
++		}
++		resp->offset = NULL;
++	}
+ 
+ 	RETURN_STATUS(nfserr);
+ }
+@@ -533,6 +544,7 @@ nfsd3_proc_readdirplus(struct svc_rqst *rqstp)
+ 		} else {
+ 			xdr_encode_hyper(resp->offset, offset);
+ 		}
++		resp->offset = NULL;
+ 	}
+ 
+ 	RETURN_STATUS(nfserr);
+diff --git a/fs/nfsd/nfs3xdr.c b/fs/nfsd/nfs3xdr.c
+index 9b973f4f7d01..83919116d5cb 100644
+--- a/fs/nfsd/nfs3xdr.c
++++ b/fs/nfsd/nfs3xdr.c
+@@ -921,6 +921,7 @@ encode_entry(struct readdir_cd *ccd, const char *name, int namlen,
+ 		} else {
+ 			xdr_encode_hyper(cd->offset, offset64);
+ 		}
++		cd->offset = NULL;
+ 	}
+ 
+ 	/*
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index fb3c9844c82a..6a45fb00c5fc 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -1544,16 +1544,16 @@ static u32 nfsd4_get_drc_mem(struct nfsd4_channel_attrs *ca)
+ {
+ 	u32 slotsize = slot_bytes(ca);
+ 	u32 num = ca->maxreqs;
+-	int avail;
++	unsigned long avail, total_avail;
+ 
+ 	spin_lock(&nfsd_drc_lock);
+-	avail = min((unsigned long)NFSD_MAX_MEM_PER_SESSION,
+-		    nfsd_drc_max_mem - nfsd_drc_mem_used);
++	total_avail = nfsd_drc_max_mem - nfsd_drc_mem_used;
++	avail = min((unsigned long)NFSD_MAX_MEM_PER_SESSION, total_avail);
+ 	/*
+ 	 * Never use more than a third of the remaining memory,
+ 	 * unless it's the only way to give this client a slot:
+ 	 */
+-	avail = clamp_t(int, avail, slotsize, avail/3);
++	avail = clamp_t(int, avail, slotsize, total_avail/3);
+ 	num = min_t(int, num, avail / slotsize);
+ 	nfsd_drc_mem_used += num * slotsize;
+ 	spin_unlock(&nfsd_drc_lock);
+diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
+index 72a7681f4046..f2feb2d11bae 100644
+--- a/fs/nfsd/nfsctl.c
++++ b/fs/nfsd/nfsctl.c
+@@ -1126,7 +1126,7 @@ static ssize_t write_v4_end_grace(struct file *file, char *buf, size_t size)
+ 		case 'Y':
+ 		case 'y':
+ 		case '1':
+-			if (nn->nfsd_serv)
++			if (!nn->nfsd_serv)
+ 				return -EBUSY;
+ 			nfsd4_end_grace(nn);
+ 			break;
+diff --git a/fs/ocfs2/cluster/nodemanager.c b/fs/ocfs2/cluster/nodemanager.c
+index 0e4166cc23a0..4ac775e32240 100644
+--- a/fs/ocfs2/cluster/nodemanager.c
++++ b/fs/ocfs2/cluster/nodemanager.c
+@@ -621,13 +621,15 @@ static void o2nm_node_group_drop_item(struct config_group *group,
+ 	struct o2nm_node *node = to_o2nm_node(item);
+ 	struct o2nm_cluster *cluster = to_o2nm_cluster(group->cg_item.ci_parent);
+ 
+-	o2net_disconnect_node(node);
++	if (cluster->cl_nodes[node->nd_num] == node) {
++		o2net_disconnect_node(node);
+ 
+-	if (cluster->cl_has_local &&
+-	    (cluster->cl_local_node == node->nd_num)) {
+-		cluster->cl_has_local = 0;
+-		cluster->cl_local_node = O2NM_INVALID_NODE_NUM;
+-		o2net_stop_listening(node);
++		if (cluster->cl_has_local &&
++		    (cluster->cl_local_node == node->nd_num)) {
++			cluster->cl_has_local = 0;
++			cluster->cl_local_node = O2NM_INVALID_NODE_NUM;
++			o2net_stop_listening(node);
++		}
+ 	}
+ 
+ 	/* XXX call into net to stop this node from trading messages */
+diff --git a/fs/ocfs2/refcounttree.c b/fs/ocfs2/refcounttree.c
+index a35259eebc56..1dc9a08e8bdc 100644
+--- a/fs/ocfs2/refcounttree.c
++++ b/fs/ocfs2/refcounttree.c
+@@ -4719,22 +4719,23 @@ out:
+ 
+ /* Lock an inode and grab a bh pointing to the inode. */
+ int ocfs2_reflink_inodes_lock(struct inode *s_inode,
+-			      struct buffer_head **bh1,
++			      struct buffer_head **bh_s,
+ 			      struct inode *t_inode,
+-			      struct buffer_head **bh2)
++			      struct buffer_head **bh_t)
+ {
+-	struct inode *inode1;
+-	struct inode *inode2;
++	struct inode *inode1 = s_inode;
++	struct inode *inode2 = t_inode;
+ 	struct ocfs2_inode_info *oi1;
+ 	struct ocfs2_inode_info *oi2;
++	struct buffer_head *bh1 = NULL;
++	struct buffer_head *bh2 = NULL;
+ 	bool same_inode = (s_inode == t_inode);
++	bool need_swap = (inode1->i_ino > inode2->i_ino);
+ 	int status;
+ 
+ 	/* First grab the VFS and rw locks. */
+ 	lock_two_nondirectories(s_inode, t_inode);
+-	inode1 = s_inode;
+-	inode2 = t_inode;
+-	if (inode1->i_ino > inode2->i_ino)
++	if (need_swap)
+ 		swap(inode1, inode2);
+ 
+ 	status = ocfs2_rw_lock(inode1, 1);
+@@ -4757,17 +4758,13 @@ int ocfs2_reflink_inodes_lock(struct inode *s_inode,
+ 	trace_ocfs2_double_lock((unsigned long long)oi1->ip_blkno,
+ 				(unsigned long long)oi2->ip_blkno);
+ 
+-	if (*bh1)
+-		*bh1 = NULL;
+-	if (*bh2)
+-		*bh2 = NULL;
+-
+ 	/* We always want to lock the one with the lower lockid first. */
+ 	if (oi1->ip_blkno > oi2->ip_blkno)
+ 		mlog_errno(-ENOLCK);
+ 
+ 	/* lock id1 */
+-	status = ocfs2_inode_lock_nested(inode1, bh1, 1, OI_LS_REFLINK_TARGET);
++	status = ocfs2_inode_lock_nested(inode1, &bh1, 1,
++					 OI_LS_REFLINK_TARGET);
+ 	if (status < 0) {
+ 		if (status != -ENOENT)
+ 			mlog_errno(status);
+@@ -4776,15 +4773,25 @@ int ocfs2_reflink_inodes_lock(struct inode *s_inode,
+ 
+ 	/* lock id2 */
+ 	if (!same_inode) {
+-		status = ocfs2_inode_lock_nested(inode2, bh2, 1,
++		status = ocfs2_inode_lock_nested(inode2, &bh2, 1,
+ 						 OI_LS_REFLINK_TARGET);
+ 		if (status < 0) {
+ 			if (status != -ENOENT)
+ 				mlog_errno(status);
+ 			goto out_cl1;
+ 		}
+-	} else
+-		*bh2 = *bh1;
++	} else {
++		bh2 = bh1;
++	}
++
++	/*
++	 * If we swapped inode order above, we have to swap the buffer heads
++	 * before passing them back to the caller.
++	 */
++	if (need_swap)
++		swap(bh1, bh2);
++	*bh_s = bh1;
++	*bh_t = bh2;
+ 
+ 	trace_ocfs2_double_lock_end(
+ 			(unsigned long long)oi1->ip_blkno,
+@@ -4794,8 +4801,7 @@ int ocfs2_reflink_inodes_lock(struct inode *s_inode,
+ 
+ out_cl1:
+ 	ocfs2_inode_unlock(inode1, 1);
+-	brelse(*bh1);
+-	*bh1 = NULL;
++	brelse(bh1);
+ out_rw2:
+ 	ocfs2_rw_unlock(inode2, 1);
+ out_i2:
+diff --git a/fs/open.c b/fs/open.c
+index 0285ce7dbd51..f1c2f855fd43 100644
+--- a/fs/open.c
++++ b/fs/open.c
+@@ -733,6 +733,12 @@ static int do_dentry_open(struct file *f,
+ 		return 0;
+ 	}
+ 
++	/* Any file opened for execve()/uselib() has to be a regular file. */
++	if (unlikely(f->f_flags & FMODE_EXEC && !S_ISREG(inode->i_mode))) {
++		error = -EACCES;
++		goto cleanup_file;
++	}
++
+ 	if (f->f_mode & FMODE_WRITE && !special_file(inode->i_mode)) {
+ 		error = get_write_access(inode);
+ 		if (unlikely(error))
+diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c
+index 9e62dcf06fc4..68b3303e4b46 100644
+--- a/fs/overlayfs/copy_up.c
++++ b/fs/overlayfs/copy_up.c
+@@ -443,6 +443,24 @@ static int ovl_copy_up_inode(struct ovl_copy_up_ctx *c, struct dentry *temp)
+ {
+ 	int err;
+ 
++	/*
++	 * Copy up data first and then xattrs. Writing data after
++	 * xattrs will remove security.capability xattr automatically.
++	 */
++	if (S_ISREG(c->stat.mode) && !c->metacopy) {
++		struct path upperpath, datapath;
++
++		ovl_path_upper(c->dentry, &upperpath);
++		if (WARN_ON(upperpath.dentry != NULL))
++			return -EIO;
++		upperpath.dentry = temp;
++
++		ovl_path_lowerdata(c->dentry, &datapath);
++		err = ovl_copy_up_data(&datapath, &upperpath, c->stat.size);
++		if (err)
++			return err;
++	}
++
+ 	err = ovl_copy_xattr(c->lowerpath.dentry, temp);
+ 	if (err)
+ 		return err;
+@@ -460,19 +478,6 @@ static int ovl_copy_up_inode(struct ovl_copy_up_ctx *c, struct dentry *temp)
+ 			return err;
+ 	}
+ 
+-	if (S_ISREG(c->stat.mode) && !c->metacopy) {
+-		struct path upperpath, datapath;
+-
+-		ovl_path_upper(c->dentry, &upperpath);
+-		BUG_ON(upperpath.dentry != NULL);
+-		upperpath.dentry = temp;
+-
+-		ovl_path_lowerdata(c->dentry, &datapath);
+-		err = ovl_copy_up_data(&datapath, &upperpath, c->stat.size);
+-		if (err)
+-			return err;
+-	}
+-
+ 	if (c->metacopy) {
+ 		err = ovl_check_setxattr(c->dentry, temp, OVL_XATTR_METACOPY,
+ 					 NULL, 0, -EOPNOTSUPP);
+@@ -737,6 +742,8 @@ static int ovl_copy_up_meta_inode_data(struct ovl_copy_up_ctx *c)
+ {
+ 	struct path upperpath, datapath;
+ 	int err;
++	char *capability = NULL;
++	ssize_t uninitialized_var(cap_size);
+ 
+ 	ovl_path_upper(c->dentry, &upperpath);
+ 	if (WARN_ON(upperpath.dentry == NULL))
+@@ -746,15 +753,37 @@ static int ovl_copy_up_meta_inode_data(struct ovl_copy_up_ctx *c)
+ 	if (WARN_ON(datapath.dentry == NULL))
+ 		return -EIO;
+ 
++	if (c->stat.size) {
++		err = cap_size = ovl_getxattr(upperpath.dentry, XATTR_NAME_CAPS,
++					      &capability, 0);
++		if (err < 0 && err != -ENODATA)
++			goto out;
++	}
++
+ 	err = ovl_copy_up_data(&datapath, &upperpath, c->stat.size);
+ 	if (err)
+-		return err;
++		goto out_free;
++
++	/*
++	 * Writing to upper file will clear security.capability xattr. We
++	 * don't want that to happen for normal copy-up operation.
++	 */
++	if (capability) {
++		err = ovl_do_setxattr(upperpath.dentry, XATTR_NAME_CAPS,
++				      capability, cap_size, 0);
++		if (err)
++			goto out_free;
++	}
++
+ 
+ 	err = vfs_removexattr(upperpath.dentry, OVL_XATTR_METACOPY);
+ 	if (err)
+-		return err;
++		goto out_free;
+ 
+ 	ovl_set_upperdata(d_inode(c->dentry));
++out_free:
++	kfree(capability);
++out:
+ 	return err;
+ }
+ 
+diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h
+index 5e45cb3630a0..9c6018287d57 100644
+--- a/fs/overlayfs/overlayfs.h
++++ b/fs/overlayfs/overlayfs.h
+@@ -277,6 +277,8 @@ int ovl_lock_rename_workdir(struct dentry *workdir, struct dentry *upperdir);
+ int ovl_check_metacopy_xattr(struct dentry *dentry);
+ bool ovl_is_metacopy_dentry(struct dentry *dentry);
+ char *ovl_get_redirect_xattr(struct dentry *dentry, int padding);
++ssize_t ovl_getxattr(struct dentry *dentry, char *name, char **value,
++		     size_t padding);
+ 
+ static inline bool ovl_is_impuredir(struct dentry *dentry)
+ {
+diff --git a/fs/overlayfs/util.c b/fs/overlayfs/util.c
+index 7c01327b1852..4035e640f402 100644
+--- a/fs/overlayfs/util.c
++++ b/fs/overlayfs/util.c
+@@ -863,28 +863,49 @@ bool ovl_is_metacopy_dentry(struct dentry *dentry)
+ 	return (oe->numlower > 1);
+ }
+ 
+-char *ovl_get_redirect_xattr(struct dentry *dentry, int padding)
++ssize_t ovl_getxattr(struct dentry *dentry, char *name, char **value,
++		     size_t padding)
+ {
+-	int res;
+-	char *s, *next, *buf = NULL;
++	ssize_t res;
++	char *buf = NULL;
+ 
+-	res = vfs_getxattr(dentry, OVL_XATTR_REDIRECT, NULL, 0);
++	res = vfs_getxattr(dentry, name, NULL, 0);
+ 	if (res < 0) {
+ 		if (res == -ENODATA || res == -EOPNOTSUPP)
+-			return NULL;
++			return -ENODATA;
+ 		goto fail;
+ 	}
+ 
+-	buf = kzalloc(res + padding + 1, GFP_KERNEL);
+-	if (!buf)
+-		return ERR_PTR(-ENOMEM);
++	if (res != 0) {
++		buf = kzalloc(res + padding, GFP_KERNEL);
++		if (!buf)
++			return -ENOMEM;
+ 
+-	if (res == 0)
+-		goto invalid;
++		res = vfs_getxattr(dentry, name, buf, res);
++		if (res < 0)
++			goto fail;
++	}
++	*value = buf;
++
++	return res;
++
++fail:
++	pr_warn_ratelimited("overlayfs: failed to get xattr %s: err=%zi)\n",
++			    name, res);
++	kfree(buf);
++	return res;
++}
+ 
+-	res = vfs_getxattr(dentry, OVL_XATTR_REDIRECT, buf, res);
++char *ovl_get_redirect_xattr(struct dentry *dentry, int padding)
++{
++	int res;
++	char *s, *next, *buf = NULL;
++
++	res = ovl_getxattr(dentry, OVL_XATTR_REDIRECT, &buf, padding + 1);
++	if (res == -ENODATA)
++		return NULL;
+ 	if (res < 0)
+-		goto fail;
++		return ERR_PTR(res);
+ 	if (res == 0)
+ 		goto invalid;
+ 
+@@ -900,15 +921,9 @@ char *ovl_get_redirect_xattr(struct dentry *dentry, int padding)
+ 	}
+ 
+ 	return buf;
+-
+-err_free:
+-	kfree(buf);
+-	return ERR_PTR(res);
+-fail:
+-	pr_warn_ratelimited("overlayfs: failed to get redirect (%i)\n", res);
+-	goto err_free;
+ invalid:
+ 	pr_warn_ratelimited("overlayfs: invalid redirect (%s)\n", buf);
+ 	res = -EINVAL;
+-	goto err_free;
++	kfree(buf);
++	return ERR_PTR(res);
+ }
+diff --git a/fs/pipe.c b/fs/pipe.c
+index bdc5d3c0977d..c51750ed4011 100644
+--- a/fs/pipe.c
++++ b/fs/pipe.c
+@@ -234,6 +234,14 @@ static const struct pipe_buf_operations anon_pipe_buf_ops = {
+ 	.get = generic_pipe_buf_get,
+ };
+ 
++static const struct pipe_buf_operations anon_pipe_buf_nomerge_ops = {
++	.can_merge = 0,
++	.confirm = generic_pipe_buf_confirm,
++	.release = anon_pipe_buf_release,
++	.steal = anon_pipe_buf_steal,
++	.get = generic_pipe_buf_get,
++};
++
+ static const struct pipe_buf_operations packet_pipe_buf_ops = {
+ 	.can_merge = 0,
+ 	.confirm = generic_pipe_buf_confirm,
+@@ -242,6 +250,12 @@ static const struct pipe_buf_operations packet_pipe_buf_ops = {
+ 	.get = generic_pipe_buf_get,
+ };
+ 
++void pipe_buf_mark_unmergeable(struct pipe_buffer *buf)
++{
++	if (buf->ops == &anon_pipe_buf_ops)
++		buf->ops = &anon_pipe_buf_nomerge_ops;
++}
++
+ static ssize_t
+ pipe_read(struct kiocb *iocb, struct iov_iter *to)
+ {
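The fs/pipe.c hunk above introduces a second ops table that differs from `anon_pipe_buf_ops` only in `can_merge`, and a helper that retargets a buffer's ops pointer. The pattern can be sketched in isolation as follows; the struct and names here are a simplified stand-in, not the kernel's actual `pipe_buffer` definitions:

```c
#include <assert.h>

/* Simplified model of the kernel pattern: a buffer carries a pointer to
 * a const operations table, and "marking it unmergeable" swaps that
 * pointer for an otherwise-identical table whose can_merge flag is 0. */
struct buf_ops {
	int can_merge;
};

struct buffer {
	const struct buf_ops *ops;
};

static const struct buf_ops anon_ops = { .can_merge = 1 };
static const struct buf_ops anon_nomerge_ops = { .can_merge = 0 };

/* Only anonymous-pipe buffers are retargeted; any other ops table
 * (e.g. packet buffers, which already have can_merge = 0) is left
 * untouched, mirroring the &anon_pipe_buf_ops check in the patch. */
static void mark_unmergeable(struct buffer *buf)
{
	if (buf->ops == &anon_ops)
		buf->ops = &anon_nomerge_ops;
}
```

Swapping the whole ops table, rather than adding a per-buffer flag, keeps `struct pipe_buffer` unchanged and makes the property impossible to set on buffer types that were never mergeable.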
+diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c
+index 4d598a399bbf..d65390727541 100644
+--- a/fs/proc/proc_sysctl.c
++++ b/fs/proc/proc_sysctl.c
+@@ -1626,7 +1626,8 @@ static void drop_sysctl_table(struct ctl_table_header *header)
+ 	if (--header->nreg)
+ 		return;
+ 
+-	put_links(header);
++	if (parent)
++		put_links(header);
+ 	start_unregistering(header);
+ 	if (!--header->count)
+ 		kfree_rcu(header, rcu);
+diff --git a/fs/read_write.c b/fs/read_write.c
+index ff3c5e6f87cf..27b69b85d49f 100644
+--- a/fs/read_write.c
++++ b/fs/read_write.c
+@@ -1238,6 +1238,9 @@ COMPAT_SYSCALL_DEFINE5(preadv64v2, unsigned long, fd,
+ 		const struct compat_iovec __user *,vec,
+ 		unsigned long, vlen, loff_t, pos, rwf_t, flags)
+ {
++	if (pos == -1)
++		return do_compat_readv(fd, vec, vlen, flags);
++
+ 	return do_compat_preadv64(fd, vec, vlen, pos, flags);
+ }
+ #endif
+@@ -1344,6 +1347,9 @@ COMPAT_SYSCALL_DEFINE5(pwritev64v2, unsigned long, fd,
+ 		const struct compat_iovec __user *,vec,
+ 		unsigned long, vlen, loff_t, pos, rwf_t, flags)
+ {
++	if (pos == -1)
++		return do_compat_writev(fd, vec, vlen, flags);
++
+ 	return do_compat_pwritev64(fd, vec, vlen, pos, flags);
+ }
+ #endif
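The fs/read_write.c hunks above make the compat `preadv64v2`/`pwritev64v2` entry points honor the convention that a position of -1 means "use and update the file's current offset", falling back to the plain readv/writev path instead of treating -1 as a literal offset. A minimal sketch of that dispatch, with all `_sim` names being illustrative stand-ins for the real helpers:

```c
#include <assert.h>

typedef long long loff_sim_t;

static int used_current_pos;

/* Stand-in for do_compat_readv(): stateful, uses the file offset. */
static long do_readv_sim(void)
{
	used_current_pos = 1;
	return 0;
}

/* Stand-in for do_compat_preadv64(): positional, ignores file offset. */
static long do_preadv_sim(loff_sim_t pos)
{
	(void)pos;
	used_current_pos = 0;
	return 0;
}

/* The dispatch the patch adds: -1 selects the stateful path. */
static long preadv2_sim(loff_sim_t pos)
{
	if (pos == -1)
		return do_readv_sim();
	return do_preadv_sim(pos);
}
```

Without this check, a userspace call like `preadv2(fd, iov, cnt, -1, 0)` through the compat path would pass -1 down as a real offset and fail, diverging from the native syscall's behavior.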
+diff --git a/fs/splice.c b/fs/splice.c
+index de2ede048473..90c29675d573 100644
+--- a/fs/splice.c
++++ b/fs/splice.c
+@@ -1597,6 +1597,8 @@ retry:
+ 			 */
+ 			obuf->flags &= ~PIPE_BUF_FLAG_GIFT;
+ 
++			pipe_buf_mark_unmergeable(obuf);
++
+ 			obuf->len = len;
+ 			opipe->nrbufs++;
+ 			ibuf->offset += obuf->len;
+@@ -1671,6 +1673,8 @@ static int link_pipe(struct pipe_inode_info *ipipe,
+ 		 */
+ 		obuf->flags &= ~PIPE_BUF_FLAG_GIFT;
+ 
++		pipe_buf_mark_unmergeable(obuf);
++
+ 		if (obuf->len > len)
+ 			obuf->len = len;
+ 
+diff --git a/fs/udf/truncate.c b/fs/udf/truncate.c
+index b647f0bd150c..94220ba85628 100644
+--- a/fs/udf/truncate.c
++++ b/fs/udf/truncate.c
+@@ -260,6 +260,9 @@ void udf_truncate_extents(struct inode *inode)
+ 			epos.block = eloc;
+ 			epos.bh = udf_tread(sb,
+ 					udf_get_lb_pblock(sb, &eloc, 0));
++			/* Error reading indirect block? */
++			if (!epos.bh)
++				return;
+ 			if (elen)
+ 				indirect_ext_len =
+ 					(elen + sb->s_blocksize - 1) >>
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index 3d7a6a9c2370..f8f6f04c4453 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -733,7 +733,7 @@
+ 		KEEP(*(.orc_unwind_ip))					\
+ 		__stop_orc_unwind_ip = .;				\
+ 	}								\
+-	. = ALIGN(6);							\
++	. = ALIGN(2);							\
+ 	.orc_unwind : AT(ADDR(.orc_unwind) - LOAD_OFFSET) {		\
+ 		__start_orc_unwind = .;					\
+ 		KEEP(*(.orc_unwind))					\
+diff --git a/include/drm/drm_cache.h b/include/drm/drm_cache.h
+index bfe1639df02d..97fc498dc767 100644
+--- a/include/drm/drm_cache.h
++++ b/include/drm/drm_cache.h
+@@ -47,6 +47,24 @@ static inline bool drm_arch_can_wc_memory(void)
+ 	return false;
+ #elif defined(CONFIG_MIPS) && defined(CONFIG_CPU_LOONGSON3)
+ 	return false;
++#elif defined(CONFIG_ARM) || defined(CONFIG_ARM64)
++	/*
++	 * The DRM driver stack is designed to work with cache coherent devices
++	 * only, but permits an optimization to be enabled in some cases, where
++	 * for some buffers, both the CPU and the GPU use uncached mappings,
++	 * removing the need for DMA snooping and allocation in the CPU caches.
++	 *
++	 * The use of uncached GPU mappings relies on the correct implementation
++	 * of the PCIe NoSnoop TLP attribute by the platform, otherwise the GPU
++	 * will use cached mappings nonetheless. On x86 platforms, this does not
++	 * seem to matter, as uncached CPU mappings will snoop the caches in any
++	 * case. However, on ARM and arm64, enabling this optimization on a
++	 * platform where NoSnoop is ignored results in loss of coherency, which
++	 * breaks correct operation of the device. Since we have no way of
++	 * detecting whether NoSnoop works or not, just disable this
++	 * optimization entirely for ARM and arm64.
++	 */
++	return false;
+ #else
+ 	return true;
+ #endif
+diff --git a/include/linux/atalk.h b/include/linux/atalk.h
+index 23f805562f4e..840cf92307ba 100644
+--- a/include/linux/atalk.h
++++ b/include/linux/atalk.h
+@@ -161,16 +161,26 @@ extern int sysctl_aarp_resolve_time;
+ extern void atalk_register_sysctl(void);
+ extern void atalk_unregister_sysctl(void);
+ #else
+-#define atalk_register_sysctl()		do { } while(0)
+-#define atalk_unregister_sysctl()	do { } while(0)
++static inline int atalk_register_sysctl(void)
++{
++	return 0;
++}
++static inline void atalk_unregister_sysctl(void)
++{
++}
+ #endif
+ 
+ #ifdef CONFIG_PROC_FS
+ extern int atalk_proc_init(void);
+ extern void atalk_proc_exit(void);
+ #else
+-#define atalk_proc_init()	({ 0; })
+-#define atalk_proc_exit()	do { } while(0)
++static inline int atalk_proc_init(void)
++{
++	return 0;
++}
++static inline void atalk_proc_exit(void)
++{
++}
+ #endif /* CONFIG_PROC_FS */
+ 
+ #endif /* __LINUX_ATALK_H__ */
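The atalk.h hunk above replaces do-nothing macro stubs with static inline functions. The benefit is that an inline function keeps a real prototype even in the compiled-out configuration, so callers that check the return value type-check identically either way. A generic sketch of the idiom (names are illustrative, not from the kernel):

```c
#include <assert.h>

/* "Feature disabled" stubs in the static-inline style: the int-returning
 * init stub reports success, and the void exit stub does nothing, but
 * both keep the same signatures as the real implementations, so call
 * sites compile unchanged in either configuration. */
static inline int feature_init(void)
{
	return 0;
}

static inline void feature_exit(void)
{
}
```

By contrast, the old `#define atalk_register_sysctl() do { } while(0)` style produced a statement with no value, so `if (atalk_register_sysctl())` would only compile in one of the two configurations.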
+diff --git a/include/linux/bitrev.h b/include/linux/bitrev.h
+index 50fb0dee23e8..d35b8ec1c485 100644
+--- a/include/linux/bitrev.h
++++ b/include/linux/bitrev.h
+@@ -34,41 +34,41 @@ static inline u32 __bitrev32(u32 x)
+ 
+ #define __constant_bitrev32(x)	\
+ ({					\
+-	u32 __x = x;			\
+-	__x = (__x >> 16) | (__x << 16);	\
+-	__x = ((__x & (u32)0xFF00FF00UL) >> 8) | ((__x & (u32)0x00FF00FFUL) << 8);	\
+-	__x = ((__x & (u32)0xF0F0F0F0UL) >> 4) | ((__x & (u32)0x0F0F0F0FUL) << 4);	\
+-	__x = ((__x & (u32)0xCCCCCCCCUL) >> 2) | ((__x & (u32)0x33333333UL) << 2);	\
+-	__x = ((__x & (u32)0xAAAAAAAAUL) >> 1) | ((__x & (u32)0x55555555UL) << 1);	\
+-	__x;								\
++	u32 ___x = x;			\
++	___x = (___x >> 16) | (___x << 16);	\
++	___x = ((___x & (u32)0xFF00FF00UL) >> 8) | ((___x & (u32)0x00FF00FFUL) << 8);	\
++	___x = ((___x & (u32)0xF0F0F0F0UL) >> 4) | ((___x & (u32)0x0F0F0F0FUL) << 4);	\
++	___x = ((___x & (u32)0xCCCCCCCCUL) >> 2) | ((___x & (u32)0x33333333UL) << 2);	\
++	___x = ((___x & (u32)0xAAAAAAAAUL) >> 1) | ((___x & (u32)0x55555555UL) << 1);	\
++	___x;								\
+ })
+ 
+ #define __constant_bitrev16(x)	\
+ ({					\
+-	u16 __x = x;			\
+-	__x = (__x >> 8) | (__x << 8);	\
+-	__x = ((__x & (u16)0xF0F0U) >> 4) | ((__x & (u16)0x0F0FU) << 4);	\
+-	__x = ((__x & (u16)0xCCCCU) >> 2) | ((__x & (u16)0x3333U) << 2);	\
+-	__x = ((__x & (u16)0xAAAAU) >> 1) | ((__x & (u16)0x5555U) << 1);	\
+-	__x;								\
++	u16 ___x = x;			\
++	___x = (___x >> 8) | (___x << 8);	\
++	___x = ((___x & (u16)0xF0F0U) >> 4) | ((___x & (u16)0x0F0FU) << 4);	\
++	___x = ((___x & (u16)0xCCCCU) >> 2) | ((___x & (u16)0x3333U) << 2);	\
++	___x = ((___x & (u16)0xAAAAU) >> 1) | ((___x & (u16)0x5555U) << 1);	\
++	___x;								\
+ })
+ 
+ #define __constant_bitrev8x4(x) \
+ ({			\
+-	u32 __x = x;	\
+-	__x = ((__x & (u32)0xF0F0F0F0UL) >> 4) | ((__x & (u32)0x0F0F0F0FUL) << 4);	\
+-	__x = ((__x & (u32)0xCCCCCCCCUL) >> 2) | ((__x & (u32)0x33333333UL) << 2);	\
+-	__x = ((__x & (u32)0xAAAAAAAAUL) >> 1) | ((__x & (u32)0x55555555UL) << 1);	\
+-	__x;								\
++	u32 ___x = x;	\
++	___x = ((___x & (u32)0xF0F0F0F0UL) >> 4) | ((___x & (u32)0x0F0F0F0FUL) << 4);	\
++	___x = ((___x & (u32)0xCCCCCCCCUL) >> 2) | ((___x & (u32)0x33333333UL) << 2);	\
++	___x = ((___x & (u32)0xAAAAAAAAUL) >> 1) | ((___x & (u32)0x55555555UL) << 1);	\
++	___x;								\
+ })
+ 
+ #define __constant_bitrev8(x)	\
+ ({					\
+-	u8 __x = x;			\
+-	__x = (__x >> 4) | (__x << 4);	\
+-	__x = ((__x & (u8)0xCCU) >> 2) | ((__x & (u8)0x33U) << 2);	\
+-	__x = ((__x & (u8)0xAAU) >> 1) | ((__x & (u8)0x55U) << 1);	\
+-	__x;								\
++	u8 ___x = x;			\
++	___x = (___x >> 4) | (___x << 4);	\
++	___x = ((___x & (u8)0xCCU) >> 2) | ((___x & (u8)0x33U) << 2);	\
++	___x = ((___x & (u8)0xAAU) >> 1) | ((___x & (u8)0x55U) << 1);	\
++	___x;								\
+ })
+ 
+ #define bitrev32(x) \
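The bitrev.h hunk above renames the statement-expression local from `__x` to `___x`. That guards against a shadowing pitfall: if the caller's argument expression itself contains a variable named `__x` (as happens when these macros nest, e.g. `__constant_bitrev32` built from `__constant_bitrev16`), `u32 __x = x;` initializes the new local from the shadowing declaration itself rather than the caller's value. A self-contained sketch of one of these macros with the distinct name (GCC/Clang statement expressions, as in the kernel):

```c
#include <assert.h>

/* 8-bit reversal via successive swaps of nibbles, bit pairs, and bits.
 * The local is deliberately named ___x so that nesting this macro inside
 * another macro whose local is __x (or x) cannot self-initialize. */
#define BITREV8(x)						\
({								\
	unsigned char ___x = (x);				\
	___x = (___x >> 4) | (___x << 4);			\
	___x = ((___x & 0xCCU) >> 2) | ((___x & 0x33U) << 2);	\
	___x = ((___x & 0xAAU) >> 1) | ((___x & 0x55U) << 1);	\
	___x;							\
})
```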
+diff --git a/include/linux/ceph/libceph.h b/include/linux/ceph/libceph.h
+index a420c07904bc..337d5049ff93 100644
+--- a/include/linux/ceph/libceph.h
++++ b/include/linux/ceph/libceph.h
+@@ -294,6 +294,8 @@ extern void ceph_destroy_client(struct ceph_client *client);
+ extern int __ceph_open_session(struct ceph_client *client,
+ 			       unsigned long started);
+ extern int ceph_open_session(struct ceph_client *client);
++int ceph_wait_for_latest_osdmap(struct ceph_client *client,
++				unsigned long timeout);
+ 
+ /* pagevec.c */
+ extern void ceph_release_page_vector(struct page **pages, int num_pages);
+diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
+index 8fcbae1b8db0..120d1d40704b 100644
+--- a/include/linux/cgroup-defs.h
++++ b/include/linux/cgroup-defs.h
+@@ -602,7 +602,7 @@ struct cgroup_subsys {
+ 	void (*cancel_fork)(struct task_struct *task);
+ 	void (*fork)(struct task_struct *task);
+ 	void (*exit)(struct task_struct *task);
+-	void (*free)(struct task_struct *task);
++	void (*release)(struct task_struct *task);
+ 	void (*bind)(struct cgroup_subsys_state *root_css);
+ 
+ 	bool early_init:1;
+diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
+index 9968332cceed..81f58b4a5418 100644
+--- a/include/linux/cgroup.h
++++ b/include/linux/cgroup.h
+@@ -121,6 +121,7 @@ extern int cgroup_can_fork(struct task_struct *p);
+ extern void cgroup_cancel_fork(struct task_struct *p);
+ extern void cgroup_post_fork(struct task_struct *p);
+ void cgroup_exit(struct task_struct *p);
++void cgroup_release(struct task_struct *p);
+ void cgroup_free(struct task_struct *p);
+ 
+ int cgroup_init_early(void);
+@@ -697,6 +698,7 @@ static inline int cgroup_can_fork(struct task_struct *p) { return 0; }
+ static inline void cgroup_cancel_fork(struct task_struct *p) {}
+ static inline void cgroup_post_fork(struct task_struct *p) {}
+ static inline void cgroup_exit(struct task_struct *p) {}
++static inline void cgroup_release(struct task_struct *p) {}
+ static inline void cgroup_free(struct task_struct *p) {}
+ 
+ static inline int cgroup_init_early(void) { return 0; }
+diff --git a/include/linux/clk-provider.h b/include/linux/clk-provider.h
+index e443fa9fa859..b7cf80a71293 100644
+--- a/include/linux/clk-provider.h
++++ b/include/linux/clk-provider.h
+@@ -792,6 +792,9 @@ unsigned int __clk_get_enable_count(struct clk *clk);
+ unsigned long clk_hw_get_rate(const struct clk_hw *hw);
+ unsigned long __clk_get_flags(struct clk *clk);
+ unsigned long clk_hw_get_flags(const struct clk_hw *hw);
++#define clk_hw_can_set_rate_parent(hw) \
++	(clk_hw_get_flags((hw)) & CLK_SET_RATE_PARENT)
++
+ bool clk_hw_is_prepared(const struct clk_hw *hw);
+ bool clk_hw_rate_is_protected(const struct clk_hw *hw);
+ bool clk_hw_is_enabled(const struct clk_hw *hw);
+diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h
+index c86d6d8bdfed..0b427d5df0fe 100644
+--- a/include/linux/cpufreq.h
++++ b/include/linux/cpufreq.h
+@@ -254,20 +254,12 @@ __ATTR(_name, 0644, show_##_name, store_##_name)
+ static struct freq_attr _name =			\
+ __ATTR(_name, 0200, NULL, store_##_name)
+ 
+-struct global_attr {
+-	struct attribute attr;
+-	ssize_t (*show)(struct kobject *kobj,
+-			struct attribute *attr, char *buf);
+-	ssize_t (*store)(struct kobject *a, struct attribute *b,
+-			 const char *c, size_t count);
+-};
+-
+ #define define_one_global_ro(_name)		\
+-static struct global_attr _name =		\
++static struct kobj_attribute _name =		\
+ __ATTR(_name, 0444, show_##_name, NULL)
+ 
+ #define define_one_global_rw(_name)		\
+-static struct global_attr _name =		\
++static struct kobj_attribute _name =		\
+ __ATTR(_name, 0644, show_##_name, store_##_name)
+ 
+ 
+diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
+index e528baebad69..bee4bb9f81bc 100644
+--- a/include/linux/device-mapper.h
++++ b/include/linux/device-mapper.h
+@@ -609,7 +609,7 @@ do {									\
+  */
+ #define dm_target_offset(ti, sector) ((sector) - (ti)->begin)
+ 
+-static inline sector_t to_sector(unsigned long n)
++static inline sector_t to_sector(unsigned long long n)
+ {
+ 	return (n >> SECTOR_SHIFT);
+ }
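The device-mapper hunk above widens `to_sector()`'s parameter from `unsigned long` to `unsigned long long`. On 32-bit kernels `unsigned long` is 32 bits, so a byte count of 4 GiB or more would be truncated before the `>> SECTOR_SHIFT`. A sketch of the difference using explicit fixed-width types (the `_narrow`/`_wide` names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define SECTOR_SHIFT 9	/* 512-byte sectors, as in the kernel */

/* Old behavior on a 32-bit build: the byte count is truncated to
 * 32 bits before conversion, so counts >= 4 GiB wrap around. */
static uint64_t to_sector_narrow(uint32_t n)
{
	return n >> SECTOR_SHIFT;
}

/* New behavior: the full 64-bit byte count survives the shift. */
static uint64_t to_sector_wide(uint64_t n)
{
	return n >> SECTOR_SHIFT;
}
```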
+diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
+index f6ded992c183..5b21f14802e1 100644
+--- a/include/linux/dma-mapping.h
++++ b/include/linux/dma-mapping.h
+@@ -130,6 +130,7 @@ struct dma_map_ops {
+ 			enum dma_data_direction direction);
+ 	int (*dma_supported)(struct device *dev, u64 mask);
+ 	u64 (*get_required_mask)(struct device *dev);
++	size_t (*max_mapping_size)(struct device *dev);
+ };
+ 
+ #define DMA_MAPPING_ERROR		(~(dma_addr_t)0)
+@@ -257,6 +258,8 @@ static inline void dma_direct_sync_sg_for_cpu(struct device *dev,
+ }
+ #endif
+ 
++size_t dma_direct_max_mapping_size(struct device *dev);
++
+ #ifdef CONFIG_HAS_DMA
+ #include <asm/dma-mapping.h>
+ 
+@@ -460,6 +463,7 @@ int dma_supported(struct device *dev, u64 mask);
+ int dma_set_mask(struct device *dev, u64 mask);
+ int dma_set_coherent_mask(struct device *dev, u64 mask);
+ u64 dma_get_required_mask(struct device *dev);
++size_t dma_max_mapping_size(struct device *dev);
+ #else /* CONFIG_HAS_DMA */
+ static inline dma_addr_t dma_map_page_attrs(struct device *dev,
+ 		struct page *page, size_t offset, size_t size,
+@@ -561,6 +565,10 @@ static inline u64 dma_get_required_mask(struct device *dev)
+ {
+ 	return 0;
+ }
++static inline size_t dma_max_mapping_size(struct device *dev)
++{
++	return 0;
++}
+ #endif /* CONFIG_HAS_DMA */
+ 
+ static inline dma_addr_t dma_map_single_attrs(struct device *dev, void *ptr,
+diff --git a/include/linux/efi.h b/include/linux/efi.h
+index 28604a8d0aa9..a86485ac7c87 100644
+--- a/include/linux/efi.h
++++ b/include/linux/efi.h
+@@ -1699,19 +1699,19 @@ extern int efi_tpm_eventlog_init(void);
+  * fault happened while executing an efi runtime service.
+  */
+ enum efi_rts_ids {
+-	NONE,
+-	GET_TIME,
+-	SET_TIME,
+-	GET_WAKEUP_TIME,
+-	SET_WAKEUP_TIME,
+-	GET_VARIABLE,
+-	GET_NEXT_VARIABLE,
+-	SET_VARIABLE,
+-	QUERY_VARIABLE_INFO,
+-	GET_NEXT_HIGH_MONO_COUNT,
+-	RESET_SYSTEM,
+-	UPDATE_CAPSULE,
+-	QUERY_CAPSULE_CAPS,
++	EFI_NONE,
++	EFI_GET_TIME,
++	EFI_SET_TIME,
++	EFI_GET_WAKEUP_TIME,
++	EFI_SET_WAKEUP_TIME,
++	EFI_GET_VARIABLE,
++	EFI_GET_NEXT_VARIABLE,
++	EFI_SET_VARIABLE,
++	EFI_QUERY_VARIABLE_INFO,
++	EFI_GET_NEXT_HIGH_MONO_COUNT,
++	EFI_RESET_SYSTEM,
++	EFI_UPDATE_CAPSULE,
++	EFI_QUERY_CAPSULE_CAPS,
+ };
+ 
+ /*
+diff --git a/include/linux/f2fs_fs.h b/include/linux/f2fs_fs.h
+index d7711048ef93..c524ad7d31da 100644
+--- a/include/linux/f2fs_fs.h
++++ b/include/linux/f2fs_fs.h
+@@ -489,12 +489,12 @@ typedef __le32	f2fs_hash_t;
+ 
+ /*
+  * space utilization of regular dentry and inline dentry (w/o extra reservation)
+- *		regular dentry			inline dentry
+- * bitmap	1 * 27 = 27			1 * 23 = 23
+- * reserved	1 * 3 = 3			1 * 7 = 7
+- * dentry	11 * 214 = 2354			11 * 182 = 2002
+- * filename	8 * 214 = 1712			8 * 182 = 1456
+- * total	4096				3488
++ *		regular dentry		inline dentry (def)	inline dentry (min)
++ * bitmap	1 * 27 = 27		1 * 23 = 23		1 * 1 = 1
++ * reserved	1 * 3 = 3		1 * 7 = 7		1 * 1 = 1
++ * dentry	11 * 214 = 2354		11 * 182 = 2002		11 * 2 = 22
++ * filename	8 * 214 = 1712		8 * 182 = 1456		8 * 2 = 16
++ * total	4096			3488			40
+  *
+  * Note: there are more reserved space in inline dentry than in regular
+  * dentry, when converting inline dentry we should handle this carefully.
+@@ -506,6 +506,7 @@ typedef __le32	f2fs_hash_t;
+ #define SIZE_OF_RESERVED	(PAGE_SIZE - ((SIZE_OF_DIR_ENTRY + \
+ 				F2FS_SLOT_LEN) * \
+ 				NR_DENTRY_IN_BLOCK + SIZE_OF_DENTRY_BITMAP))
++#define MIN_INLINE_DENTRY_SIZE		40	/* just include '.' and '..' entries */
+ 
+ /* One directory entry slot representing F2FS_SLOT_LEN-sized file name */
+ struct f2fs_dir_entry {
+diff --git a/include/linux/filter.h b/include/linux/filter.h
+index e532fcc6e4b5..3358646a8e7a 100644
+--- a/include/linux/filter.h
++++ b/include/linux/filter.h
+@@ -874,7 +874,9 @@ bpf_jit_binary_alloc(unsigned int proglen, u8 **image_ptr,
+ 		     unsigned int alignment,
+ 		     bpf_jit_fill_hole_t bpf_fill_ill_insns);
+ void bpf_jit_binary_free(struct bpf_binary_header *hdr);
+-
++u64 bpf_jit_alloc_exec_limit(void);
++void *bpf_jit_alloc_exec(unsigned long size);
++void bpf_jit_free_exec(void *addr);
+ void bpf_jit_free(struct bpf_prog *fp);
+ 
+ int bpf_jit_get_func_addr(const struct bpf_prog *prog,
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 29d8e2cfed0e..fd423fec8d83 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -304,13 +304,19 @@ enum rw_hint {
+ 
+ struct kiocb {
+ 	struct file		*ki_filp;
++
++	/* The 'ki_filp' pointer is shared in a union for aio */
++	randomized_struct_fields_start
++
+ 	loff_t			ki_pos;
+ 	void (*ki_complete)(struct kiocb *iocb, long ret, long ret2);
+ 	void			*private;
+ 	int			ki_flags;
+ 	u16			ki_hint;
+ 	u16			ki_ioprio; /* See linux/ioprio.h */
+-} __randomize_layout;
++
++	randomized_struct_fields_end
++};
+ 
+ static inline bool is_sync_kiocb(struct kiocb *kiocb)
+ {
+diff --git a/include/linux/hardirq.h b/include/linux/hardirq.h
+index 0fbbcdf0c178..da0af631ded5 100644
+--- a/include/linux/hardirq.h
++++ b/include/linux/hardirq.h
+@@ -60,8 +60,14 @@ extern void irq_enter(void);
+  */
+ extern void irq_exit(void);
+ 
++#ifndef arch_nmi_enter
++#define arch_nmi_enter()	do { } while (0)
++#define arch_nmi_exit()		do { } while (0)
++#endif
++
+ #define nmi_enter()						\
+ 	do {							\
++		arch_nmi_enter();				\
+ 		printk_nmi_enter();				\
+ 		lockdep_off();					\
+ 		ftrace_nmi_enter();				\
+@@ -80,6 +86,7 @@ extern void irq_exit(void);
+ 		ftrace_nmi_exit();				\
+ 		lockdep_on();					\
+ 		printk_nmi_exit();				\
++		arch_nmi_exit();				\
+ 	} while (0)
+ 
+ #endif /* LINUX_HARDIRQ_H */
+diff --git a/include/linux/i2c.h b/include/linux/i2c.h
+index 65b4eaed1d96..7e748648c7d3 100644
+--- a/include/linux/i2c.h
++++ b/include/linux/i2c.h
+@@ -333,6 +333,7 @@ struct i2c_client {
+ 	char name[I2C_NAME_SIZE];
+ 	struct i2c_adapter *adapter;	/* the adapter we sit on	*/
+ 	struct device dev;		/* the device structure		*/
++	int init_irq;			/* irq set at initialization	*/
+ 	int irq;			/* irq issued by device		*/
+ 	struct list_head detected;
+ #if IS_ENABLED(CONFIG_I2C_SLAVE)
+diff --git a/include/linux/irqdesc.h b/include/linux/irqdesc.h
+index dd1e40ddac7d..875c41b23f20 100644
+--- a/include/linux/irqdesc.h
++++ b/include/linux/irqdesc.h
+@@ -65,6 +65,7 @@ struct irq_desc {
+ 	unsigned int		core_internal_state__do_not_mess_with_it;
+ 	unsigned int		depth;		/* nested irq disables */
+ 	unsigned int		wake_depth;	/* nested wake enables */
++	unsigned int		tot_count;
+ 	unsigned int		irq_count;	/* For detecting broken IRQs */
+ 	unsigned long		last_unhandled;	/* Aging timer for unhandled count */
+ 	unsigned int		irqs_unhandled;
+diff --git a/include/linux/kasan-checks.h b/include/linux/kasan-checks.h
+index d314150658a4..a61dc075e2ce 100644
+--- a/include/linux/kasan-checks.h
++++ b/include/linux/kasan-checks.h
+@@ -2,7 +2,7 @@
+ #ifndef _LINUX_KASAN_CHECKS_H
+ #define _LINUX_KASAN_CHECKS_H
+ 
+-#ifdef CONFIG_KASAN
++#if defined(__SANITIZE_ADDRESS__) || defined(__KASAN_INTERNAL)
+ void kasan_check_read(const volatile void *p, unsigned int size);
+ void kasan_check_write(const volatile void *p, unsigned int size);
+ #else
+diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
+index c38cc5eb7e73..cf761ff58224 100644
+--- a/include/linux/kvm_host.h
++++ b/include/linux/kvm_host.h
+@@ -634,7 +634,7 @@ void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
+ 			   struct kvm_memory_slot *dont);
+ int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
+ 			    unsigned long npages);
+-void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots);
++void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen);
+ int kvm_arch_prepare_memory_region(struct kvm *kvm,
+ 				struct kvm_memory_slot *memslot,
+ 				const struct kvm_userspace_memory_region *mem,
+diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
+index 83ae11cbd12c..7391f5fe4eda 100644
+--- a/include/linux/memcontrol.h
++++ b/include/linux/memcontrol.h
+@@ -561,7 +561,10 @@ struct mem_cgroup *lock_page_memcg(struct page *page);
+ void __unlock_page_memcg(struct mem_cgroup *memcg);
+ void unlock_page_memcg(struct page *page);
+ 
+-/* idx can be of type enum memcg_stat_item or node_stat_item */
++/*
++ * idx can be of type enum memcg_stat_item or node_stat_item.
++ * Keep in sync with memcg_exact_page_state().
++ */
+ static inline unsigned long memcg_page_state(struct mem_cgroup *memcg,
+ 					     int idx)
+ {
+diff --git a/include/linux/mii.h b/include/linux/mii.h
+index 6fee8b1a4400..5cd824c1c0ca 100644
+--- a/include/linux/mii.h
++++ b/include/linux/mii.h
+@@ -469,7 +469,7 @@ static inline u32 linkmode_adv_to_lcl_adv_t(unsigned long *advertising)
+ 	if (linkmode_test_bit(ETHTOOL_LINK_MODE_Pause_BIT,
+ 			      advertising))
+ 		lcl_adv |= ADVERTISE_PAUSE_CAP;
+-	if (linkmode_test_bit(ETHTOOL_LINK_MODE_Pause_BIT,
++	if (linkmode_test_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
+ 			      advertising))
+ 		lcl_adv |= ADVERTISE_PAUSE_ASYM;
+ 
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index 54299251d40d..4f001619f854 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -591,6 +591,8 @@ enum mlx5_pagefault_type_flags {
+ };
+ 
+ struct mlx5_td {
++	/* protects tirs list changes while tirs refresh */
++	struct mutex     list_lock;
+ 	struct list_head tirs_list;
+ 	u32              tdn;
+ };
+diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
+index 4eb26d278046..280ae96dc4c3 100644
+--- a/include/linux/page-isolation.h
++++ b/include/linux/page-isolation.h
+@@ -41,16 +41,6 @@ int move_freepages_block(struct zone *zone, struct page *page,
+ 
+ /*
+  * Changes migrate type in [start_pfn, end_pfn) to be MIGRATE_ISOLATE.
+- * If specified range includes migrate types other than MOVABLE or CMA,
+- * this will fail with -EBUSY.
+- *
+- * For isolating all pages in the range finally, the caller have to
+- * free all pages in the range. test_page_isolated() can be used for
+- * test it.
+- *
+- * The following flags are allowed (they can be combined in a bit mask)
+- * SKIP_HWPOISON - ignore hwpoison pages
+- * REPORT_FAILURE - report details about the failure to isolate the range
+  */
+ int
+ start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
+diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
+index e1a051724f7e..7cbbd891bfcd 100644
+--- a/include/linux/perf_event.h
++++ b/include/linux/perf_event.h
+@@ -409,7 +409,7 @@ struct pmu {
+ 	/*
+ 	 * Set up pmu-private data structures for an AUX area
+ 	 */
+-	void *(*setup_aux)		(int cpu, void **pages,
++	void *(*setup_aux)		(struct perf_event *event, void **pages,
+ 					 int nr_pages, bool overwrite);
+ 					/* optional */
+ 
+diff --git a/include/linux/pipe_fs_i.h b/include/linux/pipe_fs_i.h
+index 5a3bb3b7c9ad..3ecd7ea212ae 100644
+--- a/include/linux/pipe_fs_i.h
++++ b/include/linux/pipe_fs_i.h
+@@ -182,6 +182,7 @@ void generic_pipe_buf_get(struct pipe_inode_info *, struct pipe_buffer *);
+ int generic_pipe_buf_confirm(struct pipe_inode_info *, struct pipe_buffer *);
+ int generic_pipe_buf_steal(struct pipe_inode_info *, struct pipe_buffer *);
+ void generic_pipe_buf_release(struct pipe_inode_info *, struct pipe_buffer *);
++void pipe_buf_mark_unmergeable(struct pipe_buffer *buf);
+ 
+ extern const struct pipe_buf_operations nosteal_pipe_buf_ops;
+ 
+diff --git a/include/linux/property.h b/include/linux/property.h
+index 3789ec755fb6..65d3420dd5d1 100644
+--- a/include/linux/property.h
++++ b/include/linux/property.h
+@@ -258,7 +258,7 @@ struct property_entry {
+ #define PROPERTY_ENTRY_STRING(_name_, _val_)		\
+ (struct property_entry) {				\
+ 	.name = _name_,					\
+-	.length = sizeof(_val_),			\
++	.length = sizeof(const char *),			\
+ 	.type = DEV_PROP_STRING,			\
+ 	{ .value = { .str = _val_ } },			\
+ }
+diff --git a/include/linux/relay.h b/include/linux/relay.h
+index e1bdf01a86e2..c759f96e39c1 100644
+--- a/include/linux/relay.h
++++ b/include/linux/relay.h
+@@ -66,7 +66,7 @@ struct rchan
+ 	struct kref kref;		/* channel refcount */
+ 	void *private_data;		/* for user-defined data */
+ 	size_t last_toobig;		/* tried to log event > subbuf size */
+-	struct rchan_buf ** __percpu buf; /* per-cpu channel buffers */
++	struct rchan_buf * __percpu *buf; /* per-cpu channel buffers */
+ 	int is_global;			/* One global buffer ? */
+ 	struct list_head list;		/* for channel list */
+ 	struct dentry *parent;		/* parent dentry passed to open */
+diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
+index 5b9ae62272bb..503778920448 100644
+--- a/include/linux/ring_buffer.h
++++ b/include/linux/ring_buffer.h
+@@ -128,7 +128,7 @@ ring_buffer_consume(struct ring_buffer *buffer, int cpu, u64 *ts,
+ 		    unsigned long *lost_events);
+ 
+ struct ring_buffer_iter *
+-ring_buffer_read_prepare(struct ring_buffer *buffer, int cpu);
++ring_buffer_read_prepare(struct ring_buffer *buffer, int cpu, gfp_t flags);
+ void ring_buffer_read_prepare_sync(void);
+ void ring_buffer_read_start(struct ring_buffer_iter *iter);
+ void ring_buffer_read_finish(struct ring_buffer_iter *iter);
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index f9b43c989577..9b35aff09f70 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1748,9 +1748,9 @@ static __always_inline bool need_resched(void)
+ static inline unsigned int task_cpu(const struct task_struct *p)
+ {
+ #ifdef CONFIG_THREAD_INFO_IN_TASK
+-	return p->cpu;
++	return READ_ONCE(p->cpu);
+ #else
+-	return task_thread_info(p)->cpu;
++	return READ_ONCE(task_thread_info(p)->cpu);
+ #endif
+ }
+ 
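The sched.h hunk above wraps the `cpu` field access in `READ_ONCE()` so the compiler emits exactly one non-torn load: without it, the compiler may legally refetch the field, and a concurrent migration could yield two different values within what the caller thinks is a single read. The core of the accessor is a volatile-qualified load, roughly (the kernel's real `READ_ONCE()` layers size and barrier handling on top of this):

```c
#include <assert.h>

/* Minimal model: casting through a volatile pointer forbids the
 * compiler from caching, refetching, or tearing the access. */
#define READ_ONCE_SIM(x) (*(const volatile __typeof__(x) *)&(x))

static int cpu_field = 3;	/* stand-in for p->cpu */

static int task_cpu_sim(void)
{
	return READ_ONCE_SIM(cpu_field);
}
```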
+diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
+index c31d3a47a47c..57c7ed3fe465 100644
+--- a/include/linux/sched/topology.h
++++ b/include/linux/sched/topology.h
+@@ -176,10 +176,10 @@ typedef int (*sched_domain_flags_f)(void);
+ #define SDTL_OVERLAP	0x01
+ 
+ struct sd_data {
+-	struct sched_domain **__percpu sd;
+-	struct sched_domain_shared **__percpu sds;
+-	struct sched_group **__percpu sg;
+-	struct sched_group_capacity **__percpu sgc;
++	struct sched_domain *__percpu *sd;
++	struct sched_domain_shared *__percpu *sds;
++	struct sched_group *__percpu *sg;
++	struct sched_group_capacity *__percpu *sgc;
+ };
+ 
+ struct sched_domain_topology_level {
+diff --git a/include/linux/slab.h b/include/linux/slab.h
+index 11b45f7ae405..9449b19c5f10 100644
+--- a/include/linux/slab.h
++++ b/include/linux/slab.h
+@@ -32,6 +32,8 @@
+ #define SLAB_HWCACHE_ALIGN	((slab_flags_t __force)0x00002000U)
+ /* Use GFP_DMA memory */
+ #define SLAB_CACHE_DMA		((slab_flags_t __force)0x00004000U)
++/* Use GFP_DMA32 memory */
++#define SLAB_CACHE_DMA32	((slab_flags_t __force)0x00008000U)
+ /* DEBUG: Store the last owner for bug hunting */
+ #define SLAB_STORE_USER		((slab_flags_t __force)0x00010000U)
+ /* Panic if kmem_cache_create() fails */
+diff --git a/include/linux/string.h b/include/linux/string.h
+index 7927b875f80c..6ab0a6fa512e 100644
+--- a/include/linux/string.h
++++ b/include/linux/string.h
+@@ -150,6 +150,9 @@ extern void * memscan(void *,int,__kernel_size_t);
+ #ifndef __HAVE_ARCH_MEMCMP
+ extern int memcmp(const void *,const void *,__kernel_size_t);
+ #endif
++#ifndef __HAVE_ARCH_BCMP
++extern int bcmp(const void *,const void *,__kernel_size_t);
++#endif
+ #ifndef __HAVE_ARCH_MEMCHR
+ extern void * memchr(const void *,int,__kernel_size_t);
+ #endif
+diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
+index 7c007ed7505f..29bc3a203283 100644
+--- a/include/linux/swiotlb.h
++++ b/include/linux/swiotlb.h
+@@ -76,6 +76,8 @@ bool swiotlb_map(struct device *dev, phys_addr_t *phys, dma_addr_t *dma_addr,
+ 		size_t size, enum dma_data_direction dir, unsigned long attrs);
+ void __init swiotlb_exit(void);
+ unsigned int swiotlb_max_segment(void);
++size_t swiotlb_max_mapping_size(struct device *dev);
++bool is_swiotlb_active(void);
+ #else
+ #define swiotlb_force SWIOTLB_NO_FORCE
+ static inline bool is_swiotlb_buffer(phys_addr_t paddr)
+@@ -95,6 +97,15 @@ static inline unsigned int swiotlb_max_segment(void)
+ {
+ 	return 0;
+ }
++static inline size_t swiotlb_max_mapping_size(struct device *dev)
++{
++	return SIZE_MAX;
++}
++
++static inline bool is_swiotlb_active(void)
++{
++	return false;
++}
+ #endif /* CONFIG_SWIOTLB */
+ 
+ extern void swiotlb_print_info(void);
+diff --git a/include/linux/virtio_ring.h b/include/linux/virtio_ring.h
+index fab02133a919..3dc70adfe5f5 100644
+--- a/include/linux/virtio_ring.h
++++ b/include/linux/virtio_ring.h
+@@ -63,7 +63,7 @@ struct virtqueue;
+ /*
+  * Creates a virtqueue and allocates the descriptor ring.  If
+  * may_reduce_num is set, then this may allocate a smaller ring than
+- * expected.  The caller should query virtqueue_get_ring_size to learn
++ * expected.  The caller should query virtqueue_get_vring_size to learn
+  * the actual size of the ring.
+  */
+ struct virtqueue *vring_create_virtqueue(unsigned int index,
+diff --git a/include/net/bluetooth/bluetooth.h b/include/net/bluetooth/bluetooth.h
+index ec9d6bc65855..fabee6db0abb 100644
+--- a/include/net/bluetooth/bluetooth.h
++++ b/include/net/bluetooth/bluetooth.h
+@@ -276,7 +276,7 @@ int  bt_sock_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg);
+ int  bt_sock_wait_state(struct sock *sk, int state, unsigned long timeo);
+ int  bt_sock_wait_ready(struct sock *sk, unsigned long flags);
+ 
+-void bt_accept_enqueue(struct sock *parent, struct sock *sk);
++void bt_accept_enqueue(struct sock *parent, struct sock *sk, bool bh);
+ void bt_accept_unlink(struct sock *sk);
+ struct sock *bt_accept_dequeue(struct sock *parent, struct socket *newsock);
+ 
+diff --git a/include/net/ip.h b/include/net/ip.h
+index be3cad9c2e4c..583526aad1d0 100644
+--- a/include/net/ip.h
++++ b/include/net/ip.h
+@@ -677,7 +677,7 @@ int ip_options_get_from_user(struct net *net, struct ip_options_rcu **optp,
+ 			     unsigned char __user *data, int optlen);
+ void ip_options_undo(struct ip_options *opt);
+ void ip_forward_options(struct sk_buff *skb);
+-int ip_options_rcv_srr(struct sk_buff *skb);
++int ip_options_rcv_srr(struct sk_buff *skb, struct net_device *dev);
+ 
+ /*
+  *	Functions provided by ip_sockglue.c
+diff --git a/include/net/net_namespace.h b/include/net/net_namespace.h
+index 99d4148e0f90..1c3126c14930 100644
+--- a/include/net/net_namespace.h
++++ b/include/net/net_namespace.h
+@@ -58,6 +58,7 @@ struct net {
+ 						 */
+ 	spinlock_t		rules_mod_lock;
+ 
++	u32			hash_mix;
+ 	atomic64_t		cookie_gen;
+ 
+ 	struct list_head	list;		/* list of network namespaces */
+diff --git a/include/net/netfilter/br_netfilter.h b/include/net/netfilter/br_netfilter.h
+index 4cd56808ac4e..89808ce293c4 100644
+--- a/include/net/netfilter/br_netfilter.h
++++ b/include/net/netfilter/br_netfilter.h
+@@ -43,7 +43,6 @@ static inline struct rtable *bridge_parent_rtable(const struct net_device *dev)
+ }
+ 
+ struct net_device *setup_pre_routing(struct sk_buff *skb);
+-void br_netfilter_enable(void);
+ 
+ #if IS_ENABLED(CONFIG_IPV6)
+ int br_validate_ipv6(struct net *net, struct sk_buff *skb);
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index b4984bbbe157..0612439909dc 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -416,7 +416,8 @@ struct nft_set {
+ 	unsigned char			*udata;
+ 	/* runtime data below here */
+ 	const struct nft_set_ops	*ops ____cacheline_aligned;
+-	u16				flags:14,
++	u16				flags:13,
++					bound:1,
+ 					genmask:2;
+ 	u8				klen;
+ 	u8				dlen;
+@@ -690,10 +691,12 @@ static inline void nft_set_gc_batch_add(struct nft_set_gc_batch *gcb,
+ 	gcb->elems[gcb->head.cnt++] = elem;
+ }
+ 
++struct nft_expr_ops;
+ /**
+  *	struct nft_expr_type - nf_tables expression type
+  *
+  *	@select_ops: function to select nft_expr_ops
++ *	@release_ops: release nft_expr_ops
+  *	@ops: default ops, used when no select_ops functions is present
+  *	@list: used internally
+  *	@name: Identifier
+@@ -706,6 +709,7 @@ static inline void nft_set_gc_batch_add(struct nft_set_gc_batch *gcb,
+ struct nft_expr_type {
+ 	const struct nft_expr_ops	*(*select_ops)(const struct nft_ctx *,
+ 						       const struct nlattr * const tb[]);
++	void				(*release_ops)(const struct nft_expr_ops *ops);
+ 	const struct nft_expr_ops	*ops;
+ 	struct list_head		list;
+ 	const char			*name;
+@@ -1329,15 +1333,12 @@ struct nft_trans_rule {
+ struct nft_trans_set {
+ 	struct nft_set			*set;
+ 	u32				set_id;
+-	bool				bound;
+ };
+ 
+ #define nft_trans_set(trans)	\
+ 	(((struct nft_trans_set *)trans->data)->set)
+ #define nft_trans_set_id(trans)	\
+ 	(((struct nft_trans_set *)trans->data)->set_id)
+-#define nft_trans_set_bound(trans)	\
+-	(((struct nft_trans_set *)trans->data)->bound)
+ 
+ struct nft_trans_chain {
+ 	bool				update;
+diff --git a/include/net/netns/hash.h b/include/net/netns/hash.h
+index 16a842456189..d9b665151f3d 100644
+--- a/include/net/netns/hash.h
++++ b/include/net/netns/hash.h
+@@ -2,16 +2,10 @@
+ #ifndef __NET_NS_HASH_H__
+ #define __NET_NS_HASH_H__
+ 
+-#include <asm/cache.h>
+-
+-struct net;
++#include <net/net_namespace.h>
+ 
+ static inline u32 net_hash_mix(const struct net *net)
+ {
+-#ifdef CONFIG_NET_NS
+-	return (u32)(((unsigned long)net) >> ilog2(sizeof(*net)));
+-#else
+-	return 0;
+-#endif
++	return net->hash_mix;
+ }
+ #endif
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index 9481f2c142e2..e7eb4aa6ccc9 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -51,7 +51,10 @@ struct qdisc_size_table {
+ struct qdisc_skb_head {
+ 	struct sk_buff	*head;
+ 	struct sk_buff	*tail;
+-	__u32		qlen;
++	union {
++		u32		qlen;
++		atomic_t	atomic_qlen;
++	};
+ 	spinlock_t	lock;
+ };
+ 
+@@ -408,27 +411,19 @@ static inline void qdisc_cb_private_validate(const struct sk_buff *skb, int sz)
+ 	BUILD_BUG_ON(sizeof(qcb->data) < sz);
+ }
+ 
+-static inline int qdisc_qlen_cpu(const struct Qdisc *q)
+-{
+-	return this_cpu_ptr(q->cpu_qstats)->qlen;
+-}
+-
+ static inline int qdisc_qlen(const struct Qdisc *q)
+ {
+ 	return q->q.qlen;
+ }
+ 
+-static inline int qdisc_qlen_sum(const struct Qdisc *q)
++static inline u32 qdisc_qlen_sum(const struct Qdisc *q)
+ {
+-	__u32 qlen = q->qstats.qlen;
+-	int i;
++	u32 qlen = q->qstats.qlen;
+ 
+-	if (q->flags & TCQ_F_NOLOCK) {
+-		for_each_possible_cpu(i)
+-			qlen += per_cpu_ptr(q->cpu_qstats, i)->qlen;
+-	} else {
++	if (q->flags & TCQ_F_NOLOCK)
++		qlen += atomic_read(&q->q.atomic_qlen);
++	else
+ 		qlen += q->q.qlen;
+-	}
+ 
+ 	return qlen;
+ }
+@@ -825,14 +820,14 @@ static inline void qdisc_qstats_cpu_backlog_inc(struct Qdisc *sch,
+ 	this_cpu_add(sch->cpu_qstats->backlog, qdisc_pkt_len(skb));
+ }
+ 
+-static inline void qdisc_qstats_cpu_qlen_inc(struct Qdisc *sch)
++static inline void qdisc_qstats_atomic_qlen_inc(struct Qdisc *sch)
+ {
+-	this_cpu_inc(sch->cpu_qstats->qlen);
++	atomic_inc(&sch->q.atomic_qlen);
+ }
+ 
+-static inline void qdisc_qstats_cpu_qlen_dec(struct Qdisc *sch)
++static inline void qdisc_qstats_atomic_qlen_dec(struct Qdisc *sch)
+ {
+-	this_cpu_dec(sch->cpu_qstats->qlen);
++	atomic_dec(&sch->q.atomic_qlen);
+ }
+ 
+ static inline void qdisc_qstats_cpu_requeues_inc(struct Qdisc *sch)
+diff --git a/include/net/sctp/checksum.h b/include/net/sctp/checksum.h
+index 32ee65a30aff..1c6e6c0766ca 100644
+--- a/include/net/sctp/checksum.h
++++ b/include/net/sctp/checksum.h
+@@ -61,7 +61,7 @@ static inline __wsum sctp_csum_combine(__wsum csum, __wsum csum2,
+ static inline __le32 sctp_compute_cksum(const struct sk_buff *skb,
+ 					unsigned int offset)
+ {
+-	struct sctphdr *sh = sctp_hdr(skb);
++	struct sctphdr *sh = (struct sctphdr *)(skb->data + offset);
+ 	const struct skb_checksum_ops ops = {
+ 		.update  = sctp_csum_update,
+ 		.combine = sctp_csum_combine,
+diff --git a/include/net/sock.h b/include/net/sock.h
+index f43f935cb113..89d0d94d5db2 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -710,6 +710,12 @@ static inline void sk_add_node_rcu(struct sock *sk, struct hlist_head *list)
+ 		hlist_add_head_rcu(&sk->sk_node, list);
+ }
+ 
++static inline void sk_add_node_tail_rcu(struct sock *sk, struct hlist_head *list)
++{
++	sock_hold(sk);
++	hlist_add_tail_rcu(&sk->sk_node, list);
++}
++
+ static inline void __sk_nulls_add_node_rcu(struct sock *sk, struct hlist_nulls_head *list)
+ {
+ 	hlist_nulls_add_head_rcu(&sk->sk_nulls_node, list);
+diff --git a/include/scsi/libfcoe.h b/include/scsi/libfcoe.h
+index cb8a273732cf..bb8092fa1e36 100644
+--- a/include/scsi/libfcoe.h
++++ b/include/scsi/libfcoe.h
+@@ -79,7 +79,7 @@ enum fip_state {
+  * It must not change after fcoe_ctlr_init() sets it.
+  */
+ enum fip_mode {
+-	FIP_MODE_AUTO = FIP_ST_AUTO,
++	FIP_MODE_AUTO,
+ 	FIP_MODE_NON_FIP,
+ 	FIP_MODE_FABRIC,
+ 	FIP_MODE_VN2VN,
+@@ -250,7 +250,7 @@ struct fcoe_rport {
+ };
+ 
+ /* FIP API functions */
+-void fcoe_ctlr_init(struct fcoe_ctlr *, enum fip_state);
++void fcoe_ctlr_init(struct fcoe_ctlr *, enum fip_mode);
+ void fcoe_ctlr_destroy(struct fcoe_ctlr *);
+ void fcoe_ctlr_link_up(struct fcoe_ctlr *);
+ int fcoe_ctlr_link_down(struct fcoe_ctlr *);
+diff --git a/include/uapi/linux/android/binder.h b/include/uapi/linux/android/binder.h
+index b9ba520f7e4b..2832134e5397 100644
+--- a/include/uapi/linux/android/binder.h
++++ b/include/uapi/linux/android/binder.h
+@@ -41,6 +41,14 @@ enum {
+ enum {
+ 	FLAT_BINDER_FLAG_PRIORITY_MASK = 0xff,
+ 	FLAT_BINDER_FLAG_ACCEPTS_FDS = 0x100,
++
++	/**
++	 * @FLAT_BINDER_FLAG_TXN_SECURITY_CTX: request security contexts
++	 *
++	 * Only when set, causes senders to include their security
++	 * context
++	 */
++	FLAT_BINDER_FLAG_TXN_SECURITY_CTX = 0x1000,
+ };
+ 
+ #ifdef BINDER_IPC_32BIT
+@@ -218,6 +226,7 @@ struct binder_node_info_for_ref {
+ #define BINDER_VERSION			_IOWR('b', 9, struct binder_version)
+ #define BINDER_GET_NODE_DEBUG_INFO	_IOWR('b', 11, struct binder_node_debug_info)
+ #define BINDER_GET_NODE_INFO_FOR_REF	_IOWR('b', 12, struct binder_node_info_for_ref)
++#define BINDER_SET_CONTEXT_MGR_EXT	_IOW('b', 13, struct flat_binder_object)
+ 
+ /*
+  * NOTE: Two special error codes you should check for when calling
+@@ -276,6 +285,11 @@ struct binder_transaction_data {
+ 	} data;
+ };
+ 
++struct binder_transaction_data_secctx {
++	struct binder_transaction_data transaction_data;
++	binder_uintptr_t secctx;
++};
++
+ struct binder_transaction_data_sg {
+ 	struct binder_transaction_data transaction_data;
+ 	binder_size_t buffers_size;
+@@ -311,6 +325,11 @@ enum binder_driver_return_protocol {
+ 	BR_OK = _IO('r', 1),
+ 	/* No parameters! */
+ 
++	BR_TRANSACTION_SEC_CTX = _IOR('r', 2,
++				      struct binder_transaction_data_secctx),
++	/*
++	 * binder_transaction_data_secctx: the received command.
++	 */
+ 	BR_TRANSACTION = _IOR('r', 2, struct binder_transaction_data),
+ 	BR_REPLY = _IOR('r', 3, struct binder_transaction_data),
+ 	/*
+diff --git a/kernel/audit.h b/kernel/audit.h
+index 91421679a168..6ffb70575082 100644
+--- a/kernel/audit.h
++++ b/kernel/audit.h
+@@ -314,7 +314,7 @@ extern void audit_trim_trees(void);
+ extern int audit_tag_tree(char *old, char *new);
+ extern const char *audit_tree_path(struct audit_tree *tree);
+ extern void audit_put_tree(struct audit_tree *tree);
+-extern void audit_kill_trees(struct list_head *list);
++extern void audit_kill_trees(struct audit_context *context);
+ #else
+ #define audit_remove_tree_rule(rule) BUG()
+ #define audit_add_tree_rule(rule) -EINVAL
+@@ -323,7 +323,7 @@ extern void audit_kill_trees(struct list_head *list);
+ #define audit_put_tree(tree) (void)0
+ #define audit_tag_tree(old, new) -EINVAL
+ #define audit_tree_path(rule) ""	/* never called */
+-#define audit_kill_trees(list) BUG()
++#define audit_kill_trees(context) BUG()
+ #endif
+ 
+ extern char *audit_unpack_string(void **bufp, size_t *remain, size_t len);
+diff --git a/kernel/audit_tree.c b/kernel/audit_tree.c
+index d4af4d97f847..abfb112f26aa 100644
+--- a/kernel/audit_tree.c
++++ b/kernel/audit_tree.c
+@@ -524,13 +524,14 @@ static int tag_chunk(struct inode *inode, struct audit_tree *tree)
+ 	return 0;
+ }
+ 
+-static void audit_tree_log_remove_rule(struct audit_krule *rule)
++static void audit_tree_log_remove_rule(struct audit_context *context,
++				       struct audit_krule *rule)
+ {
+ 	struct audit_buffer *ab;
+ 
+ 	if (!audit_enabled)
+ 		return;
+-	ab = audit_log_start(NULL, GFP_KERNEL, AUDIT_CONFIG_CHANGE);
++	ab = audit_log_start(context, GFP_KERNEL, AUDIT_CONFIG_CHANGE);
+ 	if (unlikely(!ab))
+ 		return;
+ 	audit_log_format(ab, "op=remove_rule dir=");
+@@ -540,7 +541,7 @@ static void audit_tree_log_remove_rule(struct audit_krule *rule)
+ 	audit_log_end(ab);
+ }
+ 
+-static void kill_rules(struct audit_tree *tree)
++static void kill_rules(struct audit_context *context, struct audit_tree *tree)
+ {
+ 	struct audit_krule *rule, *next;
+ 	struct audit_entry *entry;
+@@ -551,7 +552,7 @@ static void kill_rules(struct audit_tree *tree)
+ 		list_del_init(&rule->rlist);
+ 		if (rule->tree) {
+ 			/* not a half-baked one */
+-			audit_tree_log_remove_rule(rule);
++			audit_tree_log_remove_rule(context, rule);
+ 			if (entry->rule.exe)
+ 				audit_remove_mark(entry->rule.exe);
+ 			rule->tree = NULL;
+@@ -633,7 +634,7 @@ static void trim_marked(struct audit_tree *tree)
+ 		tree->goner = 1;
+ 		spin_unlock(&hash_lock);
+ 		mutex_lock(&audit_filter_mutex);
+-		kill_rules(tree);
++		kill_rules(audit_context(), tree);
+ 		list_del_init(&tree->list);
+ 		mutex_unlock(&audit_filter_mutex);
+ 		prune_one(tree);
+@@ -973,8 +974,10 @@ static void audit_schedule_prune(void)
+  * ... and that one is done if evict_chunk() decides to delay until the end
+  * of syscall.  Runs synchronously.
+  */
+-void audit_kill_trees(struct list_head *list)
++void audit_kill_trees(struct audit_context *context)
+ {
++	struct list_head *list = &context->killed_trees;
++
+ 	audit_ctl_lock();
+ 	mutex_lock(&audit_filter_mutex);
+ 
+@@ -982,7 +985,7 @@ void audit_kill_trees(struct list_head *list)
+ 		struct audit_tree *victim;
+ 
+ 		victim = list_entry(list->next, struct audit_tree, list);
+-		kill_rules(victim);
++		kill_rules(context, victim);
+ 		list_del_init(&victim->list);
+ 
+ 		mutex_unlock(&audit_filter_mutex);
+@@ -1017,7 +1020,7 @@ static void evict_chunk(struct audit_chunk *chunk)
+ 		list_del_init(&owner->same_root);
+ 		spin_unlock(&hash_lock);
+ 		if (!postponed) {
+-			kill_rules(owner);
++			kill_rules(audit_context(), owner);
+ 			list_move(&owner->list, &prune_list);
+ 			need_prune = 1;
+ 		} else {
+diff --git a/kernel/auditsc.c b/kernel/auditsc.c
+index 6593a5207fb0..b585ceb2f7a2 100644
+--- a/kernel/auditsc.c
++++ b/kernel/auditsc.c
+@@ -1444,6 +1444,9 @@ void __audit_free(struct task_struct *tsk)
+ 	if (!context)
+ 		return;
+ 
++	if (!list_empty(&context->killed_trees))
++		audit_kill_trees(context);
++
+ 	/* We are called either by do_exit() or the fork() error handling code;
+ 	 * in the former case tsk == current and in the latter tsk is a
+ 	 * random task_struct that doesn't doesn't have any meaningful data we
+@@ -1460,9 +1463,6 @@ void __audit_free(struct task_struct *tsk)
+ 			audit_log_exit();
+ 	}
+ 
+-	if (!list_empty(&context->killed_trees))
+-		audit_kill_trees(&context->killed_trees);
+-
+ 	audit_set_context(tsk, NULL);
+ 	audit_free_context(context);
+ }
+@@ -1537,6 +1537,9 @@ void __audit_syscall_exit(int success, long return_code)
+ 	if (!context)
+ 		return;
+ 
++	if (!list_empty(&context->killed_trees))
++		audit_kill_trees(context);
++
+ 	if (!context->dummy && context->in_syscall) {
+ 		if (success)
+ 			context->return_valid = AUDITSC_SUCCESS;
+@@ -1571,9 +1574,6 @@ void __audit_syscall_exit(int success, long return_code)
+ 	context->in_syscall = 0;
+ 	context->prio = context->state == AUDIT_RECORD_CONTEXT ? ~0ULL : 0;
+ 
+-	if (!list_empty(&context->killed_trees))
+-		audit_kill_trees(&context->killed_trees);
+-
+ 	audit_free_names(context);
+ 	unroll_tree_refs(context, NULL, 0);
+ 	audit_free_aux(context);
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 5fcce2f4209d..d53825b6fcd9 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -3187,7 +3187,7 @@ do_sim:
+ 		*dst_reg = *ptr_reg;
+ 	}
+ 	ret = push_stack(env, env->insn_idx + 1, env->insn_idx, true);
+-	if (!ptr_is_dst_reg)
++	if (!ptr_is_dst_reg && ret)
+ 		*dst_reg = tmp;
+ 	return !ret ? -EFAULT : 0;
+ }
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index f31bd61c9466..f84bf28f36ba 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -197,7 +197,7 @@ static u64 css_serial_nr_next = 1;
+  */
+ static u16 have_fork_callback __read_mostly;
+ static u16 have_exit_callback __read_mostly;
+-static u16 have_free_callback __read_mostly;
++static u16 have_release_callback __read_mostly;
+ static u16 have_canfork_callback __read_mostly;
+ 
+ /* cgroup namespace for init task */
+@@ -2033,7 +2033,7 @@ struct dentry *cgroup_do_mount(struct file_system_type *fs_type, int flags,
+ 			       struct cgroup_namespace *ns)
+ {
+ 	struct dentry *dentry;
+-	bool new_sb;
++	bool new_sb = false;
+ 
+ 	dentry = kernfs_mount(fs_type, flags, root->kf_root, magic, &new_sb);
+ 
+@@ -2043,6 +2043,7 @@ struct dentry *cgroup_do_mount(struct file_system_type *fs_type, int flags,
+ 	 */
+ 	if (!IS_ERR(dentry) && ns != &init_cgroup_ns) {
+ 		struct dentry *nsdentry;
++		struct super_block *sb = dentry->d_sb;
+ 		struct cgroup *cgrp;
+ 
+ 		mutex_lock(&cgroup_mutex);
+@@ -2053,12 +2054,14 @@ struct dentry *cgroup_do_mount(struct file_system_type *fs_type, int flags,
+ 		spin_unlock_irq(&css_set_lock);
+ 		mutex_unlock(&cgroup_mutex);
+ 
+-		nsdentry = kernfs_node_dentry(cgrp->kn, dentry->d_sb);
++		nsdentry = kernfs_node_dentry(cgrp->kn, sb);
+ 		dput(dentry);
++		if (IS_ERR(nsdentry))
++			deactivate_locked_super(sb);
+ 		dentry = nsdentry;
+ 	}
+ 
+-	if (IS_ERR(dentry) || !new_sb)
++	if (!new_sb)
+ 		cgroup_put(&root->cgrp);
+ 
+ 	return dentry;
+@@ -5313,7 +5316,7 @@ static void __init cgroup_init_subsys(struct cgroup_subsys *ss, bool early)
+ 
+ 	have_fork_callback |= (bool)ss->fork << ss->id;
+ 	have_exit_callback |= (bool)ss->exit << ss->id;
+-	have_free_callback |= (bool)ss->free << ss->id;
++	have_release_callback |= (bool)ss->release << ss->id;
+ 	have_canfork_callback |= (bool)ss->can_fork << ss->id;
+ 
+ 	/* At system boot, before all subsystems have been
+@@ -5749,16 +5752,19 @@ void cgroup_exit(struct task_struct *tsk)
+ 	} while_each_subsys_mask();
+ }
+ 
+-void cgroup_free(struct task_struct *task)
++void cgroup_release(struct task_struct *task)
+ {
+-	struct css_set *cset = task_css_set(task);
+ 	struct cgroup_subsys *ss;
+ 	int ssid;
+ 
+-	do_each_subsys_mask(ss, ssid, have_free_callback) {
+-		ss->free(task);
++	do_each_subsys_mask(ss, ssid, have_release_callback) {
++		ss->release(task);
+ 	} while_each_subsys_mask();
++}
+ 
++void cgroup_free(struct task_struct *task)
++{
++	struct css_set *cset = task_css_set(task);
+ 	put_css_set(cset);
+ }
+ 
+diff --git a/kernel/cgroup/pids.c b/kernel/cgroup/pids.c
+index 9829c67ebc0a..c9960baaa14f 100644
+--- a/kernel/cgroup/pids.c
++++ b/kernel/cgroup/pids.c
+@@ -247,7 +247,7 @@ static void pids_cancel_fork(struct task_struct *task)
+ 	pids_uncharge(pids, 1);
+ }
+ 
+-static void pids_free(struct task_struct *task)
++static void pids_release(struct task_struct *task)
+ {
+ 	struct pids_cgroup *pids = css_pids(task_css(task, pids_cgrp_id));
+ 
+@@ -342,7 +342,7 @@ struct cgroup_subsys pids_cgrp_subsys = {
+ 	.cancel_attach 	= pids_cancel_attach,
+ 	.can_fork	= pids_can_fork,
+ 	.cancel_fork	= pids_cancel_fork,
+-	.free		= pids_free,
++	.release	= pids_release,
+ 	.legacy_cftypes	= pids_files,
+ 	.dfl_cftypes	= pids_files,
+ 	.threaded	= true,
+diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
+index d503d1a9007c..bb95a35e8c2d 100644
+--- a/kernel/cgroup/rstat.c
++++ b/kernel/cgroup/rstat.c
+@@ -87,7 +87,6 @@ static struct cgroup *cgroup_rstat_cpu_pop_updated(struct cgroup *pos,
+ 						   struct cgroup *root, int cpu)
+ {
+ 	struct cgroup_rstat_cpu *rstatc;
+-	struct cgroup *parent;
+ 
+ 	if (pos == root)
+ 		return NULL;
+@@ -115,8 +114,8 @@ static struct cgroup *cgroup_rstat_cpu_pop_updated(struct cgroup *pos,
+ 	 * However, due to the way we traverse, @pos will be the first
+ 	 * child in most cases. The only exception is @root.
+ 	 */
+-	parent = cgroup_parent(pos);
+-	if (parent && rstatc->updated_next) {
++	if (rstatc->updated_next) {
++		struct cgroup *parent = cgroup_parent(pos);
+ 		struct cgroup_rstat_cpu *prstatc = cgroup_rstat_cpu(parent, cpu);
+ 		struct cgroup_rstat_cpu *nrstatc;
+ 		struct cgroup **nextp;
+@@ -140,9 +139,12 @@ static struct cgroup *cgroup_rstat_cpu_pop_updated(struct cgroup *pos,
+ 		 * updated stat.
+ 		 */
+ 		smp_mb();
++
++		return pos;
+ 	}
+ 
+-	return pos;
++	/* only happens for @root */
++	return NULL;
+ }
+ 
+ /* see cgroup_rstat_flush() */
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index d1c6d152da89..6754f3ecfd94 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -313,6 +313,15 @@ void cpus_write_unlock(void)
+ 
+ void lockdep_assert_cpus_held(void)
+ {
++	/*
++	 * We can't have hotplug operations before userspace starts running,
++	 * and some init codepaths will knowingly not take the hotplug lock.
++	 * This is all valid, so mute lockdep until it makes sense to report
++	 * unheld locks.
++	 */
++	if (system_state < SYSTEM_RUNNING)
++		return;
++
+ 	percpu_rwsem_assert_held(&cpu_hotplug_lock);
+ }
+ 
+@@ -555,6 +564,20 @@ static void undo_cpu_up(unsigned int cpu, struct cpuhp_cpu_state *st)
+ 		cpuhp_invoke_callback(cpu, st->state, false, NULL, NULL);
+ }
+ 
++static inline bool can_rollback_cpu(struct cpuhp_cpu_state *st)
++{
++	if (IS_ENABLED(CONFIG_HOTPLUG_CPU))
++		return true;
++	/*
++	 * When CPU hotplug is disabled, then taking the CPU down is not
++	 * possible because takedown_cpu() and the architecture and
++	 * subsystem specific mechanisms are not available. So the CPU
++	 * which would be completely unplugged again needs to stay around
++	 * in the current state.
++	 */
++	return st->state <= CPUHP_BRINGUP_CPU;
++}
++
+ static int cpuhp_up_callbacks(unsigned int cpu, struct cpuhp_cpu_state *st,
+ 			      enum cpuhp_state target)
+ {
+@@ -565,8 +588,10 @@ static int cpuhp_up_callbacks(unsigned int cpu, struct cpuhp_cpu_state *st,
+ 		st->state++;
+ 		ret = cpuhp_invoke_callback(cpu, st->state, true, NULL, NULL);
+ 		if (ret) {
+-			st->target = prev_state;
+-			undo_cpu_up(cpu, st);
++			if (can_rollback_cpu(st)) {
++				st->target = prev_state;
++				undo_cpu_up(cpu, st);
++			}
+ 			break;
+ 		}
+ 	}
+diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
+index 355d16acee6d..6310ad01f915 100644
+--- a/kernel/dma/direct.c
++++ b/kernel/dma/direct.c
+@@ -380,3 +380,14 @@ int dma_direct_supported(struct device *dev, u64 mask)
+ 	 */
+ 	return mask >= __phys_to_dma(dev, min_mask);
+ }
++
++size_t dma_direct_max_mapping_size(struct device *dev)
++{
++	size_t size = SIZE_MAX;
++
++	/* If SWIOTLB is active, use its maximum mapping size */
++	if (is_swiotlb_active())
++		size = swiotlb_max_mapping_size(dev);
++
++	return size;
++}
+diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
+index a11006b6d8e8..5753008ab286 100644
+--- a/kernel/dma/mapping.c
++++ b/kernel/dma/mapping.c
+@@ -357,3 +357,17 @@ void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
+ 		ops->cache_sync(dev, vaddr, size, dir);
+ }
+ EXPORT_SYMBOL(dma_cache_sync);
++
++size_t dma_max_mapping_size(struct device *dev)
++{
++	const struct dma_map_ops *ops = get_dma_ops(dev);
++	size_t size = SIZE_MAX;
++
++	if (dma_is_direct(ops))
++		size = dma_direct_max_mapping_size(dev);
++	else if (ops && ops->max_mapping_size)
++		size = ops->max_mapping_size(dev);
++
++	return size;
++}
++EXPORT_SYMBOL_GPL(dma_max_mapping_size);
+diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
+index 1fb6fd68b9c7..c873f9cc2146 100644
+--- a/kernel/dma/swiotlb.c
++++ b/kernel/dma/swiotlb.c
+@@ -662,3 +662,17 @@ swiotlb_dma_supported(struct device *hwdev, u64 mask)
+ {
+ 	return __phys_to_dma(hwdev, io_tlb_end - 1) <= mask;
+ }
++
++size_t swiotlb_max_mapping_size(struct device *dev)
++{
++	return ((size_t)1 << IO_TLB_SHIFT) * IO_TLB_SEGSIZE;
++}
++
++bool is_swiotlb_active(void)
++{
++	/*
++	 * When SWIOTLB is initialized, even if io_tlb_start points to physical
++	 * address zero, io_tlb_end surely doesn't.
++	 */
++	return io_tlb_end != 0;
++}
+diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
+index 5ab4fe3b1dcc..878c62ec0190 100644
+--- a/kernel/events/ring_buffer.c
++++ b/kernel/events/ring_buffer.c
+@@ -658,7 +658,7 @@ int rb_alloc_aux(struct ring_buffer *rb, struct perf_event *event,
+ 			goto out;
+ 	}
+ 
+-	rb->aux_priv = event->pmu->setup_aux(event->cpu, rb->aux_pages, nr_pages,
++	rb->aux_priv = event->pmu->setup_aux(event, rb->aux_pages, nr_pages,
+ 					     overwrite);
+ 	if (!rb->aux_priv)
+ 		goto out;
+diff --git a/kernel/exit.c b/kernel/exit.c
+index 2639a30a8aa5..2166c2d92ddc 100644
+--- a/kernel/exit.c
++++ b/kernel/exit.c
+@@ -219,6 +219,7 @@ repeat:
+ 	}
+ 
+ 	write_unlock_irq(&tasklist_lock);
++	cgroup_release(p);
+ 	release_thread(p);
+ 	call_rcu(&p->rcu, delayed_put_task_struct);
+ 
+diff --git a/kernel/futex.c b/kernel/futex.c
+index a0514e01c3eb..52668d44e07b 100644
+--- a/kernel/futex.c
++++ b/kernel/futex.c
+@@ -3440,6 +3440,10 @@ static int handle_futex_death(u32 __user *uaddr, struct task_struct *curr, int p
+ {
+ 	u32 uval, uninitialized_var(nval), mval;
+ 
++	/* Futex address must be 32bit aligned */
++	if ((((unsigned long)uaddr) % sizeof(*uaddr)) != 0)
++		return -1;
++
+ retry:
+ 	if (get_user(uval, uaddr))
+ 		return -1;
+diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
+index 34e969069488..b07a2acc4eec 100644
+--- a/kernel/irq/chip.c
++++ b/kernel/irq/chip.c
+@@ -855,7 +855,11 @@ void handle_percpu_irq(struct irq_desc *desc)
+ {
+ 	struct irq_chip *chip = irq_desc_get_chip(desc);
+ 
+-	kstat_incr_irqs_this_cpu(desc);
++	/*
++	 * PER CPU interrupts are not serialized. Do not touch
++	 * desc->tot_count.
++	 */
++	__kstat_incr_irqs_this_cpu(desc);
+ 
+ 	if (chip->irq_ack)
+ 		chip->irq_ack(&desc->irq_data);
+@@ -884,7 +888,11 @@ void handle_percpu_devid_irq(struct irq_desc *desc)
+ 	unsigned int irq = irq_desc_get_irq(desc);
+ 	irqreturn_t res;
+ 
+-	kstat_incr_irqs_this_cpu(desc);
++	/*
++	 * PER CPU interrupts are not serialized. Do not touch
++	 * desc->tot_count.
++	 */
++	__kstat_incr_irqs_this_cpu(desc);
+ 
+ 	if (chip->irq_ack)
+ 		chip->irq_ack(&desc->irq_data);
+@@ -1376,6 +1384,10 @@ int irq_chip_set_vcpu_affinity_parent(struct irq_data *data, void *vcpu_info)
+ int irq_chip_set_wake_parent(struct irq_data *data, unsigned int on)
+ {
+ 	data = data->parent_data;
++
++	if (data->chip->flags & IRQCHIP_SKIP_SET_WAKE)
++		return 0;
++
+ 	if (data->chip->irq_set_wake)
+ 		return data->chip->irq_set_wake(data, on);
+ 
+diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
+index ca6afa267070..e74e7eea76cf 100644
+--- a/kernel/irq/internals.h
++++ b/kernel/irq/internals.h
+@@ -242,12 +242,18 @@ static inline void irq_state_set_masked(struct irq_desc *desc)
+ 
+ #undef __irqd_to_state
+ 
+-static inline void kstat_incr_irqs_this_cpu(struct irq_desc *desc)
++static inline void __kstat_incr_irqs_this_cpu(struct irq_desc *desc)
+ {
+ 	__this_cpu_inc(*desc->kstat_irqs);
+ 	__this_cpu_inc(kstat.irqs_sum);
+ }
+ 
++static inline void kstat_incr_irqs_this_cpu(struct irq_desc *desc)
++{
++	__kstat_incr_irqs_this_cpu(desc);
++	desc->tot_count++;
++}
++
+ static inline int irq_desc_get_node(struct irq_desc *desc)
+ {
+ 	return irq_common_data_get_node(&desc->irq_common_data);
+diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
+index ef8ad36cadcf..e16e022eae09 100644
+--- a/kernel/irq/irqdesc.c
++++ b/kernel/irq/irqdesc.c
+@@ -119,6 +119,7 @@ static void desc_set_defaults(unsigned int irq, struct irq_desc *desc, int node,
+ 	desc->depth = 1;
+ 	desc->irq_count = 0;
+ 	desc->irqs_unhandled = 0;
++	desc->tot_count = 0;
+ 	desc->name = NULL;
+ 	desc->owner = owner;
+ 	for_each_possible_cpu(cpu)
+@@ -557,6 +558,7 @@ int __init early_irq_init(void)
+ 		alloc_masks(&desc[i], node);
+ 		raw_spin_lock_init(&desc[i].lock);
+ 		lockdep_set_class(&desc[i].lock, &irq_desc_lock_class);
++		mutex_init(&desc[i].request_mutex);
+ 		desc_set_defaults(i, &desc[i], node, NULL, NULL);
+ 	}
+ 	return arch_early_irq_init();
+@@ -919,11 +921,15 @@ unsigned int kstat_irqs_cpu(unsigned int irq, int cpu)
+ unsigned int kstat_irqs(unsigned int irq)
+ {
+ 	struct irq_desc *desc = irq_to_desc(irq);
+-	int cpu;
+ 	unsigned int sum = 0;
++	int cpu;
+ 
+ 	if (!desc || !desc->kstat_irqs)
+ 		return 0;
++	if (!irq_settings_is_per_cpu_devid(desc) &&
++	    !irq_settings_is_per_cpu(desc))
++	    return desc->tot_count;
++
+ 	for_each_possible_cpu(cpu)
+ 		sum += *per_cpu_ptr(desc->kstat_irqs, cpu);
+ 	return sum;
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index 95932333a48b..e805fe3bf87f 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -3535,6 +3535,9 @@ static int __lock_downgrade(struct lockdep_map *lock, unsigned long ip)
+ 	unsigned int depth;
+ 	int i;
+ 
++	if (unlikely(!debug_locks))
++		return 0;
++
+ 	depth = curr->lockdep_depth;
+ 	/*
+ 	 * This function is about (re)setting the class of a held lock,
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 9180158756d2..38d44d27e37a 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -1557,14 +1557,23 @@ static bool rcu_future_gp_cleanup(struct rcu_node *rnp)
+ }
+ 
+ /*
+- * Awaken the grace-period kthread.  Don't do a self-awaken, and don't
+- * bother awakening when there is nothing for the grace-period kthread
+- * to do (as in several CPUs raced to awaken, and we lost), and finally
+- * don't try to awaken a kthread that has not yet been created.
++ * Awaken the grace-period kthread.  Don't do a self-awaken (unless in
++ * an interrupt or softirq handler), and don't bother awakening when there
++ * is nothing for the grace-period kthread to do (as in several CPUs raced
++ * to awaken, and we lost), and finally don't try to awaken a kthread that
++ * has not yet been created.  If all those checks are passed, track some
++ * debug information and awaken.
++ *
++ * So why do the self-wakeup when in an interrupt or softirq handler
++ * in the grace-period kthread's context?  Because the kthread might have
++ * been interrupted just as it was going to sleep, and just after the final
++ * pre-sleep check of the awaken condition.  In this case, a wakeup really
++ * is required, and is therefore supplied.
+  */
+ static void rcu_gp_kthread_wake(void)
+ {
+-	if (current == rcu_state.gp_kthread ||
++	if ((current == rcu_state.gp_kthread &&
++	     !in_interrupt() && !in_serving_softirq()) ||
+ 	    !READ_ONCE(rcu_state.gp_flags) ||
+ 	    !rcu_state.gp_kthread)
+ 		return;
+diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
+index 1971869c4072..f4ca36d92138 100644
+--- a/kernel/rcu/update.c
++++ b/kernel/rcu/update.c
+@@ -52,6 +52,7 @@
+ #include <linux/tick.h>
+ #include <linux/rcupdate_wait.h>
+ #include <linux/sched/isolation.h>
++#include <linux/kprobes.h>
+ 
+ #define CREATE_TRACE_POINTS
+ 
+@@ -249,6 +250,7 @@ int notrace debug_lockdep_rcu_enabled(void)
+ 	       current->lockdep_recursion == 0;
+ }
+ EXPORT_SYMBOL_GPL(debug_lockdep_rcu_enabled);
++NOKPROBE_SYMBOL(debug_lockdep_rcu_enabled);
+ 
+ /**
+  * rcu_read_lock_held() - might we be in RCU read-side critical section?
+diff --git a/kernel/resource.c b/kernel/resource.c
+index 915c02e8e5dd..ca7ed5158cff 100644
+--- a/kernel/resource.c
++++ b/kernel/resource.c
+@@ -382,7 +382,7 @@ static int __walk_iomem_res_desc(resource_size_t start, resource_size_t end,
+ 				 int (*func)(struct resource *, void *))
+ {
+ 	struct resource res;
+-	int ret = -1;
++	int ret = -EINVAL;
+ 
+ 	while (start < end &&
+ 	       !find_next_iomem_res(start, end, flags, desc, first_lvl, &res)) {
+@@ -462,7 +462,7 @@ int walk_system_ram_range(unsigned long start_pfn, unsigned long nr_pages,
+ 	unsigned long flags;
+ 	struct resource res;
+ 	unsigned long pfn, end_pfn;
+-	int ret = -1;
++	int ret = -EINVAL;
+ 
+ 	start = (u64) start_pfn << PAGE_SHIFT;
+ 	end = ((u64)(start_pfn + nr_pages) << PAGE_SHIFT) - 1;
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index d8d76a65cfdd..01a2489de94e 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -107,11 +107,12 @@ struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
+ 		 *					[L] ->on_rq
+ 		 *	RELEASE (rq->lock)
+ 		 *
+-		 * If we observe the old CPU in task_rq_lock, the acquire of
++		 * If we observe the old CPU in task_rq_lock(), the acquire of
+ 		 * the old rq->lock will fully serialize against the stores.
+ 		 *
+-		 * If we observe the new CPU in task_rq_lock, the acquire will
+-		 * pair with the WMB to ensure we must then also see migrating.
++		 * If we observe the new CPU in task_rq_lock(), the address
++		 * dependency headed by '[L] rq = task_rq()' and the acquire
++		 * will pair with the WMB to ensure we then also see migrating.
+ 		 */
+ 		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p))) {
+ 			rq_pin_lock(rq, rf);
+@@ -928,7 +929,7 @@ static struct rq *move_queued_task(struct rq *rq, struct rq_flags *rf,
+ {
+ 	lockdep_assert_held(&rq->lock);
+ 
+-	p->on_rq = TASK_ON_RQ_MIGRATING;
++	WRITE_ONCE(p->on_rq, TASK_ON_RQ_MIGRATING);
+ 	dequeue_task(rq, p, DEQUEUE_NOCLOCK);
+ 	set_task_cpu(p, new_cpu);
+ 	rq_unlock(rq, rf);
+diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
+index de3de997e245..8039d62ae36e 100644
+--- a/kernel/sched/debug.c
++++ b/kernel/sched/debug.c
+@@ -315,6 +315,7 @@ void register_sched_domain_sysctl(void)
+ {
+ 	static struct ctl_table *cpu_entries;
+ 	static struct ctl_table **cpu_idx;
++	static bool init_done = false;
+ 	char buf[32];
+ 	int i;
+ 
+@@ -344,7 +345,10 @@ void register_sched_domain_sysctl(void)
+ 	if (!cpumask_available(sd_sysctl_cpus)) {
+ 		if (!alloc_cpumask_var(&sd_sysctl_cpus, GFP_KERNEL))
+ 			return;
++	}
+ 
++	if (!init_done) {
++		init_done = true;
+ 		/* init to possible to not have holes in @cpu_entries */
+ 		cpumask_copy(sd_sysctl_cpus, cpu_possible_mask);
+ 	}
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 310d0637fe4b..5e61a1a99e38 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -7713,10 +7713,10 @@ static void update_cfs_rq_h_load(struct cfs_rq *cfs_rq)
+ 	if (cfs_rq->last_h_load_update == now)
+ 		return;
+ 
+-	cfs_rq->h_load_next = NULL;
++	WRITE_ONCE(cfs_rq->h_load_next, NULL);
+ 	for_each_sched_entity(se) {
+ 		cfs_rq = cfs_rq_of(se);
+-		cfs_rq->h_load_next = se;
++		WRITE_ONCE(cfs_rq->h_load_next, se);
+ 		if (cfs_rq->last_h_load_update == now)
+ 			break;
+ 	}
+@@ -7726,7 +7726,7 @@ static void update_cfs_rq_h_load(struct cfs_rq *cfs_rq)
+ 		cfs_rq->last_h_load_update = now;
+ 	}
+ 
+-	while ((se = cfs_rq->h_load_next) != NULL) {
++	while ((se = READ_ONCE(cfs_rq->h_load_next)) != NULL) {
+ 		load = cfs_rq->h_load;
+ 		load = div64_ul(load * se->avg.load_avg,
+ 			cfs_rq_load_avg(cfs_rq) + 1);
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index d04530bf251f..425a5589e5f6 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -1460,9 +1460,9 @@ static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
+ 	 */
+ 	smp_wmb();
+ #ifdef CONFIG_THREAD_INFO_IN_TASK
+-	p->cpu = cpu;
++	WRITE_ONCE(p->cpu, cpu);
+ #else
+-	task_thread_info(p)->cpu = cpu;
++	WRITE_ONCE(task_thread_info(p)->cpu, cpu);
+ #endif
+ 	p->wake_cpu = cpu;
+ #endif
+@@ -1563,7 +1563,7 @@ static inline int task_on_rq_queued(struct task_struct *p)
+ 
+ static inline int task_on_rq_migrating(struct task_struct *p)
+ {
+-	return p->on_rq == TASK_ON_RQ_MIGRATING;
++	return READ_ONCE(p->on_rq) == TASK_ON_RQ_MIGRATING;
+ }
+ 
+ /*
+diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
+index 3f35ba1d8fde..efca2489d881 100644
+--- a/kernel/sched/topology.c
++++ b/kernel/sched/topology.c
+@@ -676,7 +676,7 @@ cpu_attach_domain(struct sched_domain *sd, struct root_domain *rd, int cpu)
+ }
+ 
+ struct s_data {
+-	struct sched_domain ** __percpu sd;
++	struct sched_domain * __percpu *sd;
+ 	struct root_domain	*rd;
+ };
+ 
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index ba4d9e85feb8..28ec71d914c7 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -127,6 +127,7 @@ static int __maybe_unused one = 1;
+ static int __maybe_unused two = 2;
+ static int __maybe_unused four = 4;
+ static unsigned long one_ul = 1;
++static unsigned long long_max = LONG_MAX;
+ static int one_hundred = 100;
+ static int one_thousand = 1000;
+ #ifdef CONFIG_PRINTK
+@@ -1722,6 +1723,8 @@ static struct ctl_table fs_table[] = {
+ 		.maxlen		= sizeof(files_stat.max_files),
+ 		.mode		= 0644,
+ 		.proc_handler	= proc_doulongvec_minmax,
++		.extra1		= &zero,
++		.extra2		= &long_max,
+ 	},
+ 	{
+ 		.procname	= "nr_open",
+@@ -2579,7 +2582,16 @@ static int do_proc_dointvec_minmax_conv(bool *negp, unsigned long *lvalp,
+ {
+ 	struct do_proc_dointvec_minmax_conv_param *param = data;
+ 	if (write) {
+-		int val = *negp ? -*lvalp : *lvalp;
++		int val;
++		if (*negp) {
++			if (*lvalp > (unsigned long) INT_MAX + 1)
++				return -EINVAL;
++			val = -*lvalp;
++		} else {
++			if (*lvalp > (unsigned long) INT_MAX)
++				return -EINVAL;
++			val = *lvalp;
++		}
+ 		if ((param->min && *param->min > val) ||
+ 		    (param->max && *param->max < val))
+ 			return -EINVAL;
+diff --git a/kernel/time/alarmtimer.c b/kernel/time/alarmtimer.c
+index 2c97e8c2d29f..0519a8805aab 100644
+--- a/kernel/time/alarmtimer.c
++++ b/kernel/time/alarmtimer.c
+@@ -594,7 +594,7 @@ static ktime_t alarm_timer_remaining(struct k_itimer *timr, ktime_t now)
+ {
+ 	struct alarm *alarm = &timr->it.alarm.alarmtimer;
+ 
+-	return ktime_sub(now, alarm->node.expires);
++	return ktime_sub(alarm->node.expires, now);
+ }
+ 
+ /**
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 06e864a334bb..b49affb4666b 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -4205,6 +4205,7 @@ EXPORT_SYMBOL_GPL(ring_buffer_consume);
+  * ring_buffer_read_prepare - Prepare for a non consuming read of the buffer
+  * @buffer: The ring buffer to read from
+  * @cpu: The cpu buffer to iterate over
++ * @flags: gfp flags to use for memory allocation
+  *
+  * This performs the initial preparations necessary to iterate
+  * through the buffer.  Memory is allocated, buffer recording
+@@ -4222,7 +4223,7 @@ EXPORT_SYMBOL_GPL(ring_buffer_consume);
+  * This overall must be paired with ring_buffer_read_finish.
+  */
+ struct ring_buffer_iter *
+-ring_buffer_read_prepare(struct ring_buffer *buffer, int cpu)
++ring_buffer_read_prepare(struct ring_buffer *buffer, int cpu, gfp_t flags)
+ {
+ 	struct ring_buffer_per_cpu *cpu_buffer;
+ 	struct ring_buffer_iter *iter;
+@@ -4230,7 +4231,7 @@ ring_buffer_read_prepare(struct ring_buffer *buffer, int cpu)
+ 	if (!cpumask_test_cpu(cpu, buffer->cpumask))
+ 		return NULL;
+ 
+-	iter = kmalloc(sizeof(*iter), GFP_KERNEL);
++	iter = kmalloc(sizeof(*iter), flags);
+ 	if (!iter)
+ 		return NULL;
+ 
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index c4238b441624..89158aa93fa6 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -3904,7 +3904,8 @@ __tracing_open(struct inode *inode, struct file *file, bool snapshot)
+ 	if (iter->cpu_file == RING_BUFFER_ALL_CPUS) {
+ 		for_each_tracing_cpu(cpu) {
+ 			iter->buffer_iter[cpu] =
+-				ring_buffer_read_prepare(iter->trace_buffer->buffer, cpu);
++				ring_buffer_read_prepare(iter->trace_buffer->buffer,
++							 cpu, GFP_KERNEL);
+ 		}
+ 		ring_buffer_read_prepare_sync();
+ 		for_each_tracing_cpu(cpu) {
+@@ -3914,7 +3915,8 @@ __tracing_open(struct inode *inode, struct file *file, bool snapshot)
+ 	} else {
+ 		cpu = iter->cpu_file;
+ 		iter->buffer_iter[cpu] =
+-			ring_buffer_read_prepare(iter->trace_buffer->buffer, cpu);
++			ring_buffer_read_prepare(iter->trace_buffer->buffer,
++						 cpu, GFP_KERNEL);
+ 		ring_buffer_read_prepare_sync();
+ 		ring_buffer_read_start(iter->buffer_iter[cpu]);
+ 		tracing_iter_reset(iter, cpu);
+@@ -5626,7 +5628,6 @@ out:
+ 	return ret;
+ 
+ fail:
+-	kfree(iter->trace);
+ 	kfree(iter);
+ 	__trace_array_put(tr);
+ 	mutex_unlock(&trace_types_lock);
+diff --git a/kernel/trace/trace_dynevent.c b/kernel/trace/trace_dynevent.c
+index dd1f43588d70..fa100ed3b4de 100644
+--- a/kernel/trace/trace_dynevent.c
++++ b/kernel/trace/trace_dynevent.c
+@@ -74,7 +74,7 @@ int dyn_event_release(int argc, char **argv, struct dyn_event_operations *type)
+ static int create_dyn_event(int argc, char **argv)
+ {
+ 	struct dyn_event_operations *ops;
+-	int ret;
++	int ret = -ENODEV;
+ 
+ 	if (argv[0][0] == '-' || argv[0][0] == '!')
+ 		return dyn_event_release(argc, argv, NULL);
+diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c
+index 76217bbef815..4629a6104474 100644
+--- a/kernel/trace/trace_event_perf.c
++++ b/kernel/trace/trace_event_perf.c
+@@ -299,15 +299,13 @@ int perf_uprobe_init(struct perf_event *p_event,
+ 
+ 	if (!p_event->attr.uprobe_path)
+ 		return -EINVAL;
+-	path = kzalloc(PATH_MAX, GFP_KERNEL);
+-	if (!path)
+-		return -ENOMEM;
+-	ret = strncpy_from_user(
+-		path, u64_to_user_ptr(p_event->attr.uprobe_path), PATH_MAX);
+-	if (ret == PATH_MAX)
+-		return -E2BIG;
+-	if (ret < 0)
+-		goto out;
++
++	path = strndup_user(u64_to_user_ptr(p_event->attr.uprobe_path),
++			    PATH_MAX);
++	if (IS_ERR(path)) {
++		ret = PTR_ERR(path);
++		return (ret == -EINVAL) ? -E2BIG : ret;
++	}
+ 	if (path[0] == '\0') {
+ 		ret = -EINVAL;
+ 		goto out;
+diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c
+index 27821480105e..217ef481fbbb 100644
+--- a/kernel/trace/trace_events_filter.c
++++ b/kernel/trace/trace_events_filter.c
+@@ -1301,7 +1301,7 @@ static int parse_pred(const char *str, void *data,
+ 		/* go past the last quote */
+ 		i++;
+ 
+-	} else if (isdigit(str[i])) {
++	} else if (isdigit(str[i]) || str[i] == '-') {
+ 
+ 		/* Make sure the field is not a string */
+ 		if (is_string_field(field)) {
+@@ -1314,6 +1314,9 @@ static int parse_pred(const char *str, void *data,
+ 			goto err_free;
+ 		}
+ 
++		if (str[i] == '-')
++			i++;
++
+ 		/* We allow 0xDEADBEEF */
+ 		while (isalnum(str[i]))
+ 			i++;
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 449d90cfa151..55b72b1c63a0 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -4695,9 +4695,10 @@ static inline void add_to_key(char *compound_key, void *key,
+ 		/* ensure NULL-termination */
+ 		if (size > key_field->size - 1)
+ 			size = key_field->size - 1;
+-	}
+ 
+-	memcpy(compound_key + key_field->offset, key, size);
++		strncpy(compound_key + key_field->offset, (char *)key, size);
++	} else
++		memcpy(compound_key + key_field->offset, key, size);
+ }
+ 
+ static void
+diff --git a/kernel/trace/trace_kdb.c b/kernel/trace/trace_kdb.c
+index d953c163a079..810d78a8d14c 100644
+--- a/kernel/trace/trace_kdb.c
++++ b/kernel/trace/trace_kdb.c
+@@ -51,14 +51,16 @@ static void ftrace_dump_buf(int skip_lines, long cpu_file)
+ 	if (cpu_file == RING_BUFFER_ALL_CPUS) {
+ 		for_each_tracing_cpu(cpu) {
+ 			iter.buffer_iter[cpu] =
+-			ring_buffer_read_prepare(iter.trace_buffer->buffer, cpu);
++			ring_buffer_read_prepare(iter.trace_buffer->buffer,
++						 cpu, GFP_ATOMIC);
+ 			ring_buffer_read_start(iter.buffer_iter[cpu]);
+ 			tracing_iter_reset(&iter, cpu);
+ 		}
+ 	} else {
+ 		iter.cpu_file = cpu_file;
+ 		iter.buffer_iter[cpu_file] =
+-			ring_buffer_read_prepare(iter.trace_buffer->buffer, cpu_file);
++			ring_buffer_read_prepare(iter.trace_buffer->buffer,
++						 cpu_file, GFP_ATOMIC);
+ 		ring_buffer_read_start(iter.buffer_iter[cpu_file]);
+ 		tracing_iter_reset(&iter, cpu_file);
+ 	}
+diff --git a/kernel/watchdog.c b/kernel/watchdog.c
+index 977918d5d350..bbc4940f21af 100644
+--- a/kernel/watchdog.c
++++ b/kernel/watchdog.c
+@@ -547,13 +547,15 @@ static void softlockup_start_all(void)
+ 
+ int lockup_detector_online_cpu(unsigned int cpu)
+ {
+-	watchdog_enable(cpu);
++	if (cpumask_test_cpu(cpu, &watchdog_allowed_mask))
++		watchdog_enable(cpu);
+ 	return 0;
+ }
+ 
+ int lockup_detector_offline_cpu(unsigned int cpu)
+ {
+-	watchdog_disable(cpu);
++	if (cpumask_test_cpu(cpu, &watchdog_allowed_mask))
++		watchdog_disable(cpu);
+ 	return 0;
+ }
+ 
+diff --git a/lib/bsearch.c b/lib/bsearch.c
+index 18b445b010c3..82512fe7b33c 100644
+--- a/lib/bsearch.c
++++ b/lib/bsearch.c
+@@ -11,6 +11,7 @@
+ 
+ #include <linux/export.h>
+ #include <linux/bsearch.h>
++#include <linux/kprobes.h>
+ 
+ /*
+  * bsearch - binary search an array of elements
+@@ -53,3 +54,4 @@ void *bsearch(const void *key, const void *base, size_t num, size_t size,
+ 	return NULL;
+ }
+ EXPORT_SYMBOL(bsearch);
++NOKPROBE_SYMBOL(bsearch);
+diff --git a/lib/raid6/Makefile b/lib/raid6/Makefile
+index 4e90d443d1b0..e723eacf7868 100644
+--- a/lib/raid6/Makefile
++++ b/lib/raid6/Makefile
+@@ -39,7 +39,7 @@ endif
+ ifeq ($(CONFIG_KERNEL_MODE_NEON),y)
+ NEON_FLAGS := -ffreestanding
+ ifeq ($(ARCH),arm)
+-NEON_FLAGS += -mfloat-abi=softfp -mfpu=neon
++NEON_FLAGS += -march=armv7-a -mfloat-abi=softfp -mfpu=neon
+ endif
+ CFLAGS_recov_neon_inner.o += $(NEON_FLAGS)
+ ifeq ($(ARCH),arm64)
+diff --git a/lib/rhashtable.c b/lib/rhashtable.c
+index 852ffa5160f1..4edcf3310513 100644
+--- a/lib/rhashtable.c
++++ b/lib/rhashtable.c
+@@ -416,8 +416,12 @@ static void rht_deferred_worker(struct work_struct *work)
+ 	else if (tbl->nest)
+ 		err = rhashtable_rehash_alloc(ht, tbl, tbl->size);
+ 
+-	if (!err)
+-		err = rhashtable_rehash_table(ht);
++	if (!err || err == -EEXIST) {
++		int nerr;
++
++		nerr = rhashtable_rehash_table(ht);
++		err = err ?: nerr;
++	}
+ 
+ 	mutex_unlock(&ht->mutex);
+ 
+diff --git a/lib/string.c b/lib/string.c
+index 38e4ca08e757..3ab861c1a857 100644
+--- a/lib/string.c
++++ b/lib/string.c
+@@ -866,6 +866,26 @@ __visible int memcmp(const void *cs, const void *ct, size_t count)
+ EXPORT_SYMBOL(memcmp);
+ #endif
+ 
++#ifndef __HAVE_ARCH_BCMP
++/**
++ * bcmp - returns 0 if and only if the buffers have identical contents.
++ * @a: pointer to first buffer.
++ * @b: pointer to second buffer.
++ * @len: size of buffers.
++ *
++ * The sign or magnitude of a non-zero return value has no particular
++ * meaning, and architectures may implement their own more efficient bcmp(). So
++ * while this particular implementation is a simple (tail) call to memcmp, do
++ * not rely on anything but whether the return value is zero or non-zero.
++ */
++#undef bcmp
++int bcmp(const void *a, const void *b, size_t len)
++{
++	return memcmp(a, b, len);
++}
++EXPORT_SYMBOL(bcmp);
++#endif
++
+ #ifndef __HAVE_ARCH_MEMSCAN
+ /**
+  * memscan - Find a character in an area of memory.
+diff --git a/mm/cma.c b/mm/cma.c
+index c7b39dd3b4f6..f4f3a8a57d86 100644
+--- a/mm/cma.c
++++ b/mm/cma.c
+@@ -353,12 +353,14 @@ int __init cma_declare_contiguous(phys_addr_t base,
+ 
+ 	ret = cma_init_reserved_mem(base, size, order_per_bit, name, res_cma);
+ 	if (ret)
+-		goto err;
++		goto free_mem;
+ 
+ 	pr_info("Reserved %ld MiB at %pa\n", (unsigned long)size / SZ_1M,
+ 		&base);
+ 	return 0;
+ 
++free_mem:
++	memblock_free(base, size);
+ err:
+ 	pr_err("Failed to reserve %ld MiB\n", (unsigned long)size / SZ_1M);
+ 	return ret;
+diff --git a/mm/debug.c b/mm/debug.c
+index 1611cf00a137..854d5f84047d 100644
+--- a/mm/debug.c
++++ b/mm/debug.c
+@@ -79,7 +79,7 @@ void __dump_page(struct page *page, const char *reason)
+ 		pr_warn("ksm ");
+ 	else if (mapping) {
+ 		pr_warn("%ps ", mapping->a_ops);
+-		if (mapping->host->i_dentry.first) {
++		if (mapping->host && mapping->host->i_dentry.first) {
+ 			struct dentry *dentry;
+ 			dentry = container_of(mapping->host->i_dentry.first, struct dentry, d_u.d_alias);
+ 			pr_warn("name:\"%pd\" ", dentry);
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index faf357eaf0ce..8b03c698f86e 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -753,6 +753,21 @@ static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
+ 	spinlock_t *ptl;
+ 
+ 	ptl = pmd_lock(mm, pmd);
++	if (!pmd_none(*pmd)) {
++		if (write) {
++			if (pmd_pfn(*pmd) != pfn_t_to_pfn(pfn)) {
++				WARN_ON_ONCE(!is_huge_zero_pmd(*pmd));
++				goto out_unlock;
++			}
++			entry = pmd_mkyoung(*pmd);
++			entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
++			if (pmdp_set_access_flags(vma, addr, pmd, entry, 1))
++				update_mmu_cache_pmd(vma, addr, pmd);
++		}
++
++		goto out_unlock;
++	}
++
+ 	entry = pmd_mkhuge(pfn_t_pmd(pfn, prot));
+ 	if (pfn_t_devmap(pfn))
+ 		entry = pmd_mkdevmap(entry);
+@@ -764,11 +779,16 @@ static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
+ 	if (pgtable) {
+ 		pgtable_trans_huge_deposit(mm, pmd, pgtable);
+ 		mm_inc_nr_ptes(mm);
++		pgtable = NULL;
+ 	}
+ 
+ 	set_pmd_at(mm, addr, pmd, entry);
+ 	update_mmu_cache_pmd(vma, addr, pmd);
++
++out_unlock:
+ 	spin_unlock(ptl);
++	if (pgtable)
++		pte_free(mm, pgtable);
+ }
+ 
+ vm_fault_t vmf_insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
+@@ -819,6 +839,20 @@ static void insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
+ 	spinlock_t *ptl;
+ 
+ 	ptl = pud_lock(mm, pud);
++	if (!pud_none(*pud)) {
++		if (write) {
++			if (pud_pfn(*pud) != pfn_t_to_pfn(pfn)) {
++				WARN_ON_ONCE(!is_huge_zero_pud(*pud));
++				goto out_unlock;
++			}
++			entry = pud_mkyoung(*pud);
++			entry = maybe_pud_mkwrite(pud_mkdirty(entry), vma);
++			if (pudp_set_access_flags(vma, addr, pud, entry, 1))
++				update_mmu_cache_pud(vma, addr, pud);
++		}
++		goto out_unlock;
++	}
++
+ 	entry = pud_mkhuge(pfn_t_pud(pfn, prot));
+ 	if (pfn_t_devmap(pfn))
+ 		entry = pud_mkdevmap(entry);
+@@ -828,6 +862,8 @@ static void insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
+ 	}
+ 	set_pud_at(mm, addr, pud, entry);
+ 	update_mmu_cache_pud(vma, addr, pud);
++
++out_unlock:
+ 	spin_unlock(ptl);
+ }
+ 
+diff --git a/mm/kasan/common.c b/mm/kasan/common.c
+index 09b534fbba17..80bbe62b16cd 100644
+--- a/mm/kasan/common.c
++++ b/mm/kasan/common.c
+@@ -14,6 +14,8 @@
+  *
+  */
+ 
++#define __KASAN_INTERNAL
++
+ #include <linux/export.h>
+ #include <linux/interrupt.h>
+ #include <linux/init.h>
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index af7f18b32389..5bbf2de02a0f 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -248,6 +248,12 @@ enum res_type {
+ 	     iter != NULL;				\
+ 	     iter = mem_cgroup_iter(NULL, iter, NULL))
+ 
++static inline bool should_force_charge(void)
++{
++	return tsk_is_oom_victim(current) || fatal_signal_pending(current) ||
++		(current->flags & PF_EXITING);
++}
++
+ /* Some nice accessors for the vmpressure. */
+ struct vmpressure *memcg_to_vmpressure(struct mem_cgroup *memcg)
+ {
+@@ -1389,8 +1395,13 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
+ 	};
+ 	bool ret;
+ 
+-	mutex_lock(&oom_lock);
+-	ret = out_of_memory(&oc);
++	if (mutex_lock_killable(&oom_lock))
++		return true;
++	/*
++	 * A few threads which were not waiting at mutex_lock_killable() can
++	 * fail to bail out. Therefore, check again after holding oom_lock.
++	 */
++	ret = should_force_charge() || out_of_memory(&oc);
+ 	mutex_unlock(&oom_lock);
+ 	return ret;
+ }
+@@ -2209,9 +2220,7 @@ retry:
+ 	 * bypass the last charges so that they can exit quickly and
+ 	 * free their memory.
+ 	 */
+-	if (unlikely(tsk_is_oom_victim(current) ||
+-		     fatal_signal_pending(current) ||
+-		     current->flags & PF_EXITING))
++	if (unlikely(should_force_charge()))
+ 		goto force;
+ 
+ 	/*
+@@ -3873,6 +3882,22 @@ struct wb_domain *mem_cgroup_wb_domain(struct bdi_writeback *wb)
+ 	return &memcg->cgwb_domain;
+ }
+ 
++/*
++ * idx can be of type enum memcg_stat_item or node_stat_item.
++ * Keep in sync with memcg_exact_page().
++ */
++static unsigned long memcg_exact_page_state(struct mem_cgroup *memcg, int idx)
++{
++	long x = atomic_long_read(&memcg->stat[idx]);
++	int cpu;
++
++	for_each_online_cpu(cpu)
++		x += per_cpu_ptr(memcg->stat_cpu, cpu)->count[idx];
++	if (x < 0)
++		x = 0;
++	return x;
++}
++
+ /**
+  * mem_cgroup_wb_stats - retrieve writeback related stats from its memcg
+  * @wb: bdi_writeback in question
+@@ -3898,10 +3923,10 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages,
+ 	struct mem_cgroup *memcg = mem_cgroup_from_css(wb->memcg_css);
+ 	struct mem_cgroup *parent;
+ 
+-	*pdirty = memcg_page_state(memcg, NR_FILE_DIRTY);
++	*pdirty = memcg_exact_page_state(memcg, NR_FILE_DIRTY);
+ 
+ 	/* this should eventually include NR_UNSTABLE_NFS */
+-	*pwriteback = memcg_page_state(memcg, NR_WRITEBACK);
++	*pwriteback = memcg_exact_page_state(memcg, NR_WRITEBACK);
+ 	*pfilepages = mem_cgroup_nr_lru_pages(memcg, (1 << LRU_INACTIVE_FILE) |
+ 						     (1 << LRU_ACTIVE_FILE));
+ 	*pheadroom = PAGE_COUNTER_MAX;
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index 831be5ff5f4d..fc8b51744579 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -1825,19 +1825,17 @@ static int soft_offline_in_use_page(struct page *page, int flags)
+ 	struct page *hpage = compound_head(page);
+ 
+ 	if (!PageHuge(page) && PageTransHuge(hpage)) {
+-		lock_page(hpage);
+-		if (!PageAnon(hpage) || unlikely(split_huge_page(hpage))) {
+-			unlock_page(hpage);
+-			if (!PageAnon(hpage))
++		lock_page(page);
++		if (!PageAnon(page) || unlikely(split_huge_page(page))) {
++			unlock_page(page);
++			if (!PageAnon(page))
+ 				pr_info("soft offline: %#lx: non anonymous thp\n", page_to_pfn(page));
+ 			else
+ 				pr_info("soft offline: %#lx: thp split failed\n", page_to_pfn(page));
+-			put_hwpoison_page(hpage);
++			put_hwpoison_page(page);
+ 			return -EBUSY;
+ 		}
+-		unlock_page(hpage);
+-		get_hwpoison_page(page);
+-		put_hwpoison_page(hpage);
++		unlock_page(page);
+ 	}
+ 
+ 	/*
+diff --git a/mm/memory.c b/mm/memory.c
+index e11ca9dd823f..8d3f38fa530d 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -1546,10 +1546,12 @@ static vm_fault_t insert_pfn(struct vm_area_struct *vma, unsigned long addr,
+ 				WARN_ON_ONCE(!is_zero_pfn(pte_pfn(*pte)));
+ 				goto out_unlock;
+ 			}
+-			entry = *pte;
+-			goto out_mkwrite;
+-		} else
+-			goto out_unlock;
++			entry = pte_mkyoung(*pte);
++			entry = maybe_mkwrite(pte_mkdirty(entry), vma);
++			if (ptep_set_access_flags(vma, addr, pte, entry, 1))
++				update_mmu_cache(vma, addr, pte);
++		}
++		goto out_unlock;
+ 	}
+ 
+ 	/* Ok, finally just insert the thing.. */
+@@ -1558,7 +1560,6 @@ static vm_fault_t insert_pfn(struct vm_area_struct *vma, unsigned long addr,
+ 	else
+ 		entry = pte_mkspecial(pfn_t_pte(pfn, prot));
+ 
+-out_mkwrite:
+ 	if (mkwrite) {
+ 		entry = pte_mkyoung(entry);
+ 		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+@@ -3517,10 +3518,13 @@ static vm_fault_t do_shared_fault(struct vm_fault *vmf)
+  * but allow concurrent faults).
+  * The mmap_sem may have been released depending on flags and our
+  * return value.  See filemap_fault() and __lock_page_or_retry().
++ * If mmap_sem is released, vma may become invalid (for example
++ * by other thread calling munmap()).
+  */
+ static vm_fault_t do_fault(struct vm_fault *vmf)
+ {
+ 	struct vm_area_struct *vma = vmf->vma;
++	struct mm_struct *vm_mm = vma->vm_mm;
+ 	vm_fault_t ret;
+ 
+ 	/*
+@@ -3561,7 +3565,7 @@ static vm_fault_t do_fault(struct vm_fault *vmf)
+ 
+ 	/* preallocated pagetable is unused: free it */
+ 	if (vmf->prealloc_pte) {
+-		pte_free(vma->vm_mm, vmf->prealloc_pte);
++		pte_free(vm_mm, vmf->prealloc_pte);
+ 		vmf->prealloc_pte = NULL;
+ 	}
+ 	return ret;
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index 1ad28323fb9f..11593a03c051 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -1560,7 +1560,7 @@ static int __ref __offline_pages(unsigned long start_pfn,
+ {
+ 	unsigned long pfn, nr_pages;
+ 	long offlined_pages;
+-	int ret, node;
++	int ret, node, nr_isolate_pageblock;
+ 	unsigned long flags;
+ 	unsigned long valid_start, valid_end;
+ 	struct zone *zone;
+@@ -1586,10 +1586,11 @@ static int __ref __offline_pages(unsigned long start_pfn,
+ 	ret = start_isolate_page_range(start_pfn, end_pfn,
+ 				       MIGRATE_MOVABLE,
+ 				       SKIP_HWPOISON | REPORT_FAILURE);
+-	if (ret) {
++	if (ret < 0) {
+ 		reason = "failure to isolate range";
+ 		goto failed_removal;
+ 	}
++	nr_isolate_pageblock = ret;
+ 
+ 	arg.start_pfn = start_pfn;
+ 	arg.nr_pages = nr_pages;
+@@ -1642,8 +1643,16 @@ static int __ref __offline_pages(unsigned long start_pfn,
+ 	/* Ok, all of our target is isolated.
+ 	   We cannot do rollback at this point. */
+ 	offline_isolated_pages(start_pfn, end_pfn);
+-	/* reset pagetype flags and makes migrate type to be MOVABLE */
+-	undo_isolate_page_range(start_pfn, end_pfn, MIGRATE_MOVABLE);
++
++	/*
++	 * Onlining will reset pagetype flags and makes migrate type
++	 * MOVABLE, so just need to decrease the number of isolated
++	 * pageblocks zone counter here.
++	 */
++	spin_lock_irqsave(&zone->lock, flags);
++	zone->nr_isolate_pageblock -= nr_isolate_pageblock;
++	spin_unlock_irqrestore(&zone->lock, flags);
++
+ 	/* removal success */
+ 	adjust_managed_page_count(pfn_to_page(start_pfn), -offlined_pages);
+ 	zone->present_pages -= offlined_pages;
+@@ -1675,12 +1684,12 @@ static int __ref __offline_pages(unsigned long start_pfn,
+ 
+ failed_removal_isolated:
+ 	undo_isolate_page_range(start_pfn, end_pfn, MIGRATE_MOVABLE);
++	memory_notify(MEM_CANCEL_OFFLINE, &arg);
+ failed_removal:
+ 	pr_debug("memory offlining [mem %#010llx-%#010llx] failed due to %s\n",
+ 		 (unsigned long long) start_pfn << PAGE_SHIFT,
+ 		 ((unsigned long long) end_pfn << PAGE_SHIFT) - 1,
+ 		 reason);
+-	memory_notify(MEM_CANCEL_OFFLINE, &arg);
+ 	/* pushback to free area */
+ 	mem_hotplug_done();
+ 	return ret;
+diff --git a/mm/mempolicy.c b/mm/mempolicy.c
+index ee2bce59d2bf..c2275c1e6d2a 100644
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -350,7 +350,7 @@ static void mpol_rebind_policy(struct mempolicy *pol, const nodemask_t *newmask)
+ {
+ 	if (!pol)
+ 		return;
+-	if (!mpol_store_user_nodemask(pol) &&
++	if (!mpol_store_user_nodemask(pol) && !(pol->flags & MPOL_F_LOCAL) &&
+ 	    nodes_equal(pol->w.cpuset_mems_allowed, *newmask))
+ 		return;
+ 
+@@ -428,6 +428,13 @@ static inline bool queue_pages_required(struct page *page,
+ 	return node_isset(nid, *qp->nmask) == !(flags & MPOL_MF_INVERT);
+ }
+ 
++/*
++ * queue_pages_pmd() has three possible return values:
++ * 1 - pages are placed on the right node or queued successfully.
++ * 0 - THP was split.
++ * -EIO - is migration entry or MPOL_MF_STRICT was specified and an existing
++ *        page was already on a node that does not follow the policy.
++ */
+ static int queue_pages_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
+ 				unsigned long end, struct mm_walk *walk)
+ {
+@@ -437,7 +444,7 @@ static int queue_pages_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
+ 	unsigned long flags;
+ 
+ 	if (unlikely(is_pmd_migration_entry(*pmd))) {
+-		ret = 1;
++		ret = -EIO;
+ 		goto unlock;
+ 	}
+ 	page = pmd_page(*pmd);
+@@ -454,8 +461,15 @@ static int queue_pages_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
+ 	ret = 1;
+ 	flags = qp->flags;
+ 	/* go to thp migration */
+-	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL))
++	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
++		if (!vma_migratable(walk->vma)) {
++			ret = -EIO;
++			goto unlock;
++		}
++
+ 		migrate_page_add(page, qp->pagelist, flags);
++	} else
++		ret = -EIO;
+ unlock:
+ 	spin_unlock(ptl);
+ out:
+@@ -480,8 +494,10 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
+ 	ptl = pmd_trans_huge_lock(pmd, vma);
+ 	if (ptl) {
+ 		ret = queue_pages_pmd(pmd, ptl, addr, end, walk);
+-		if (ret)
++		if (ret > 0)
+ 			return 0;
++		else if (ret < 0)
++			return ret;
+ 	}
+ 
+ 	if (pmd_trans_unstable(pmd))
+@@ -502,11 +518,16 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
+ 			continue;
+ 		if (!queue_pages_required(page, qp))
+ 			continue;
+-		migrate_page_add(page, qp->pagelist, flags);
++		if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
++			if (!vma_migratable(vma))
++				break;
++			migrate_page_add(page, qp->pagelist, flags);
++		} else
++			break;
+ 	}
+ 	pte_unmap_unlock(pte - 1, ptl);
+ 	cond_resched();
+-	return 0;
++	return addr != end ? -EIO : 0;
+ }
+ 
+ static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
+@@ -576,7 +597,12 @@ static int queue_pages_test_walk(unsigned long start, unsigned long end,
+ 	unsigned long endvma = vma->vm_end;
+ 	unsigned long flags = qp->flags;
+ 
+-	if (!vma_migratable(vma))
++	/*
++	 * Need check MPOL_MF_STRICT to return -EIO if possible
++	 * regardless of vma_migratable
++	 */
++	if (!vma_migratable(vma) &&
++	    !(flags & MPOL_MF_STRICT))
+ 		return 1;
+ 
+ 	if (endvma > end)
+@@ -603,7 +629,7 @@ static int queue_pages_test_walk(unsigned long start, unsigned long end,
+ 	}
+ 
+ 	/* queue pages from current vma */
+-	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL))
++	if (flags & MPOL_MF_VALID)
+ 		return 0;
+ 	return 1;
+ }
+diff --git a/mm/migrate.c b/mm/migrate.c
+index 181f5d2718a9..76e237b4610c 100644
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -248,10 +248,8 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
+ 				pte = swp_entry_to_pte(entry);
+ 			} else if (is_device_public_page(new)) {
+ 				pte = pte_mkdevmap(pte);
+-				flush_dcache_page(new);
+ 			}
+-		} else
+-			flush_dcache_page(new);
++		}
+ 
+ #ifdef CONFIG_HUGETLB_PAGE
+ 		if (PageHuge(new)) {
+@@ -995,6 +993,13 @@ static int move_to_new_page(struct page *newpage, struct page *page,
+ 		 */
+ 		if (!PageMappingFlags(page))
+ 			page->mapping = NULL;
++
++		if (unlikely(is_zone_device_page(newpage))) {
++			if (is_device_public_page(newpage))
++				flush_dcache_page(newpage);
++		} else
++			flush_dcache_page(newpage);
++
+ 	}
+ out:
+ 	return rc;
+diff --git a/mm/oom_kill.c b/mm/oom_kill.c
+index 26ea8636758f..da0e44914085 100644
+--- a/mm/oom_kill.c
++++ b/mm/oom_kill.c
+@@ -928,7 +928,8 @@ static void __oom_kill_process(struct task_struct *victim)
+  */
+ static int oom_kill_memcg_member(struct task_struct *task, void *unused)
+ {
+-	if (task->signal->oom_score_adj != OOM_SCORE_ADJ_MIN) {
++	if (task->signal->oom_score_adj != OOM_SCORE_ADJ_MIN &&
++	    !is_global_init(task)) {
+ 		get_task_struct(task);
+ 		__oom_kill_process(task);
+ 	}
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 0b9f577b1a2a..20dd3283bb1b 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -1945,8 +1945,8 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
+ 
+ 	arch_alloc_page(page, order);
+ 	kernel_map_pages(page, 1 << order, 1);
+-	kernel_poison_pages(page, 1 << order, 1);
+ 	kasan_alloc_pages(page, order);
++	kernel_poison_pages(page, 1 << order, 1);
+ 	set_page_owner(page, order, gfp_flags);
+ }
+ 
+@@ -8160,7 +8160,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
+ 
+ 	ret = start_isolate_page_range(pfn_max_align_down(start),
+ 				       pfn_max_align_up(end), migratetype, 0);
+-	if (ret)
++	if (ret < 0)
+ 		return ret;
+ 
+ 	/*
+diff --git a/mm/page_ext.c b/mm/page_ext.c
+index 8c78b8d45117..f116431c3dee 100644
+--- a/mm/page_ext.c
++++ b/mm/page_ext.c
+@@ -273,6 +273,7 @@ static void free_page_ext(void *addr)
+ 		table_size = get_entry_size() * PAGES_PER_SECTION;
+ 
+ 		BUG_ON(PageReserved(page));
++		kmemleak_free(addr);
+ 		free_pages_exact(addr, table_size);
+ 	}
+ }
+diff --git a/mm/page_isolation.c b/mm/page_isolation.c
+index ce323e56b34d..019280712e1b 100644
+--- a/mm/page_isolation.c
++++ b/mm/page_isolation.c
+@@ -59,7 +59,8 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
+ 	 * FIXME: Now, memory hotplug doesn't call shrink_slab() by itself.
+ 	 * We just check MOVABLE pages.
+ 	 */
+-	if (!has_unmovable_pages(zone, page, arg.pages_found, migratetype, flags))
++	if (!has_unmovable_pages(zone, page, arg.pages_found, migratetype,
++				 isol_flags))
+ 		ret = 0;
+ 
+ 	/*
+@@ -160,27 +161,36 @@ __first_valid_page(unsigned long pfn, unsigned long nr_pages)
+ 	return NULL;
+ }
+ 
+-/*
+- * start_isolate_page_range() -- make page-allocation-type of range of pages
+- * to be MIGRATE_ISOLATE.
+- * @start_pfn: The lower PFN of the range to be isolated.
+- * @end_pfn: The upper PFN of the range to be isolated.
+- * @migratetype: migrate type to set in error recovery.
++/**
++ * start_isolate_page_range() - make page-allocation-type of range of pages to
++ * be MIGRATE_ISOLATE.
++ * @start_pfn:		The lower PFN of the range to be isolated.
++ * @end_pfn:		The upper PFN of the range to be isolated.
++ *			start_pfn/end_pfn must be aligned to pageblock_order.
++ * @migratetype:	Migrate type to set in error recovery.
++ * @flags:		The following flags are allowed (they can be combined in
++ *			a bit mask)
++ *			SKIP_HWPOISON - ignore hwpoison pages
++ *			REPORT_FAILURE - report details about the failure to
++ *			isolate the range
+  *
+  * Making page-allocation-type to be MIGRATE_ISOLATE means free pages in
+  * the range will never be allocated. Any free pages and pages freed in the
+- * future will not be allocated again.
+- *
+- * start_pfn/end_pfn must be aligned to pageblock_order.
+- * Return 0 on success and -EBUSY if any part of range cannot be isolated.
++ * future will not be allocated again. If specified range includes migrate types
++ * other than MOVABLE or CMA, this will fail with -EBUSY. For isolating all
++ * pages in the range finally, the caller have to free all pages in the range.
++ * test_page_isolated() can be used for test it.
+  *
+  * There is no high level synchronization mechanism that prevents two threads
+- * from trying to isolate overlapping ranges.  If this happens, one thread
++ * from trying to isolate overlapping ranges. If this happens, one thread
+  * will notice pageblocks in the overlapping range already set to isolate.
+  * This happens in set_migratetype_isolate, and set_migratetype_isolate
+- * returns an error.  We then clean up by restoring the migration type on
+- * pageblocks we may have modified and return -EBUSY to caller.  This
++ * returns an error. We then clean up by restoring the migration type on
++ * pageblocks we may have modified and return -EBUSY to caller. This
+  * prevents two threads from simultaneously working on overlapping ranges.
++ *
++ * Return: the number of isolated pageblocks on success and -EBUSY if any part
++ * of range cannot be isolated.
+  */
+ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
+ 			     unsigned migratetype, int flags)
+@@ -188,6 +198,7 @@ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
+ 	unsigned long pfn;
+ 	unsigned long undo_pfn;
+ 	struct page *page;
++	int nr_isolate_pageblock = 0;
+ 
+ 	BUG_ON(!IS_ALIGNED(start_pfn, pageblock_nr_pages));
+ 	BUG_ON(!IS_ALIGNED(end_pfn, pageblock_nr_pages));
+@@ -196,13 +207,15 @@ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
+ 	     pfn < end_pfn;
+ 	     pfn += pageblock_nr_pages) {
+ 		page = __first_valid_page(pfn, pageblock_nr_pages);
+-		if (page &&
+-		    set_migratetype_isolate(page, migratetype, flags)) {
+-			undo_pfn = pfn;
+-			goto undo;
++		if (page) {
++			if (set_migratetype_isolate(page, migratetype, flags)) {
++				undo_pfn = pfn;
++				goto undo;
++			}
++			nr_isolate_pageblock++;
+ 		}
+ 	}
+-	return 0;
++	return nr_isolate_pageblock;
+ undo:
+ 	for (pfn = start_pfn;
+ 	     pfn < undo_pfn;
+diff --git a/mm/page_poison.c b/mm/page_poison.c
+index f0c15e9017c0..21d4f97cb49b 100644
+--- a/mm/page_poison.c
++++ b/mm/page_poison.c
+@@ -6,6 +6,7 @@
+ #include <linux/page_ext.h>
+ #include <linux/poison.h>
+ #include <linux/ratelimit.h>
++#include <linux/kasan.h>
+ 
+ static bool want_page_poisoning __read_mostly;
+ 
+@@ -40,7 +41,10 @@ static void poison_page(struct page *page)
+ {
+ 	void *addr = kmap_atomic(page);
+ 
++	/* KASAN still thinks the page is in use, so skip it. */
++	kasan_disable_current();
+ 	memset(addr, PAGE_POISON, PAGE_SIZE);
++	kasan_enable_current();
+ 	kunmap_atomic(addr);
+ }
+ 
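The page_poison.c hunk brackets an intentional write to "freed" memory with kasan_disable_current()/kasan_enable_current(), which nest via a per-task depth counter. A userspace analog of that scoped-disable pattern (all names hypothetical; the real KASAN hooks live in the compiler instrumentation, not in the write path):

```c
#include <assert.h>
#include <string.h>

/* Per-thread instrumentation depth, mimicking current->kasan_depth:
 * nonzero means "checker, ignore what this thread does". */
static _Thread_local int check_depth;

static void check_disable(void) { check_depth++; }
static void check_enable(void)  { check_depth--; }
static int  checks_enabled(void) { return check_depth == 0; }

static int false_positives;

/* A poison write the checker must not report. */
static void poison_buf(char *buf, size_t len)
{
    check_disable();
    memset(buf, 0xAA, len);      /* deliberate write to "in-use" memory */
    if (checks_enabled())
        false_positives++;       /* would have been a spurious report */
    check_enable();
}
```

Because the disable is a counter rather than a flag, nested callers that each disable and re-enable checking compose correctly.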
+diff --git a/mm/slab.c b/mm/slab.c
+index 91c1863df93d..2f2aa8eaf7d9 100644
+--- a/mm/slab.c
++++ b/mm/slab.c
+@@ -550,14 +550,6 @@ static void start_cpu_timer(int cpu)
+ 
+ static void init_arraycache(struct array_cache *ac, int limit, int batch)
+ {
+-	/*
+-	 * The array_cache structures contain pointers to free object.
+-	 * However, when such objects are allocated or transferred to another
+-	 * cache the pointers are not cleared and they could be counted as
+-	 * valid references during a kmemleak scan. Therefore, kmemleak must
+-	 * not scan such objects.
+-	 */
+-	kmemleak_no_scan(ac);
+ 	if (ac) {
+ 		ac->avail = 0;
+ 		ac->limit = limit;
+@@ -573,6 +565,14 @@ static struct array_cache *alloc_arraycache(int node, int entries,
+ 	struct array_cache *ac = NULL;
+ 
+ 	ac = kmalloc_node(memsize, gfp, node);
++	/*
++	 * The array_cache structures contain pointers to free object.
++	 * However, when such objects are allocated or transferred to another
++	 * cache the pointers are not cleared and they could be counted as
++	 * valid references during a kmemleak scan. Therefore, kmemleak must
++	 * not scan such objects.
++	 */
++	kmemleak_no_scan(ac);
+ 	init_arraycache(ac, entries, batchcount);
+ 	return ac;
+ }
+@@ -667,6 +667,7 @@ static struct alien_cache *__alloc_alien_cache(int node, int entries,
+ 
+ 	alc = kmalloc_node(memsize, gfp, node);
+ 	if (alc) {
++		kmemleak_no_scan(alc);
+ 		init_arraycache(&alc->ac, entries, batch);
+ 		spin_lock_init(&alc->lock);
+ 	}
+@@ -2111,6 +2112,8 @@ done:
+ 	cachep->allocflags = __GFP_COMP;
+ 	if (flags & SLAB_CACHE_DMA)
+ 		cachep->allocflags |= GFP_DMA;
++	if (flags & SLAB_CACHE_DMA32)
++		cachep->allocflags |= GFP_DMA32;
+ 	if (flags & SLAB_RECLAIM_ACCOUNT)
+ 		cachep->allocflags |= __GFP_RECLAIMABLE;
+ 	cachep->size = size;
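The slab.c hunks move kmemleak_no_scan() out of init_arraycache() and up to the allocation sites, because __alloc_alien_cache() passes init_arraycache() the embedded &alc->ac rather than the pointer the allocator returned. A sketch of why the base pointer matters to an allocation-keyed tracker (toy tracker, hypothetical names):

```c
#include <assert.h>
#include <stdlib.h>

/* Toy leak tracker: objects are registered by the pointer the
 * allocator returned, which is how kmemleak identifies allocations. */
static void *no_scan_base;
static void mark_no_scan(void *base) { no_scan_base = base; }

struct cache { int avail, limit; };
struct alien { long lock; struct cache ac; };  /* ac is an interior member */

static struct alien *alloc_alien(void)
{
    struct alien *alc = malloc(sizeof(*alc));

    if (alc) {
        /* Correct per the patch: mark the base allocation.  Passing
         * &alc->ac would hand the tracker an interior pointer it does
         * not know about. */
        mark_no_scan(alc);
        alc->ac.avail = 0;
        alc->ac.limit = 4;
    }
    return alc;
}
```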
+diff --git a/mm/slab.h b/mm/slab.h
+index 384105318779..27834ead5f14 100644
+--- a/mm/slab.h
++++ b/mm/slab.h
+@@ -127,7 +127,8 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
+ 
+ 
+ /* Legal flag mask for kmem_cache_create(), for various configurations */
+-#define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | SLAB_PANIC | \
++#define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | \
++			 SLAB_CACHE_DMA32 | SLAB_PANIC | \
+ 			 SLAB_TYPESAFE_BY_RCU | SLAB_DEBUG_OBJECTS )
+ 
+ #if defined(CONFIG_DEBUG_SLAB)
+diff --git a/mm/slab_common.c b/mm/slab_common.c
+index f9d89c1b5977..333618231f8d 100644
+--- a/mm/slab_common.c
++++ b/mm/slab_common.c
+@@ -53,7 +53,7 @@ static DECLARE_WORK(slab_caches_to_rcu_destroy_work,
+ 		SLAB_FAILSLAB | SLAB_KASAN)
+ 
+ #define SLAB_MERGE_SAME (SLAB_RECLAIM_ACCOUNT | SLAB_CACHE_DMA | \
+-			 SLAB_ACCOUNT)
++			 SLAB_CACHE_DMA32 | SLAB_ACCOUNT)
+ 
+ /*
+  * Merge control. If this is set then no merging of slab caches will occur.
+diff --git a/mm/slub.c b/mm/slub.c
+index dc777761b6b7..1673100fd534 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -3591,6 +3591,9 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
+ 	if (s->flags & SLAB_CACHE_DMA)
+ 		s->allocflags |= GFP_DMA;
+ 
++	if (s->flags & SLAB_CACHE_DMA32)
++		s->allocflags |= GFP_DMA32;
++
+ 	if (s->flags & SLAB_RECLAIM_ACCOUNT)
+ 		s->allocflags |= __GFP_RECLAIMABLE;
+ 
+@@ -5681,6 +5684,8 @@ static char *create_unique_id(struct kmem_cache *s)
+ 	 */
+ 	if (s->flags & SLAB_CACHE_DMA)
+ 		*p++ = 'd';
++	if (s->flags & SLAB_CACHE_DMA32)
++		*p++ = 'D';
+ 	if (s->flags & SLAB_RECLAIM_ACCOUNT)
+ 		*p++ = 'a';
+ 	if (s->flags & SLAB_CONSISTENCY_CHECKS)
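The slab/slub hunks all propagate a new SLAB_CACHE_DMA32 flag into the GFP mask used for backing-page allocations. The translation itself is a simple flag mapping, sketched below with illustrative flag values (the kernel's SLAB_* and GFP_* constants differ):

```c
#include <assert.h>

/* Hypothetical flag values for illustration only. */
enum { SLAB_CACHE_DMA = 1 << 0, SLAB_CACHE_DMA32 = 1 << 1,
       SLAB_RECLAIM_ACCOUNT = 1 << 2 };
enum { GFP_DMA = 1 << 0, GFP_DMA32 = 1 << 1, GFP_RECLAIMABLE = 1 << 2 };

/* Mirrors the shape of calculate_sizes(): translate cache-creation
 * flags into the allocation flags used for every backing page, now
 * including the DMA32 zone. */
static unsigned translate_allocflags(unsigned slab_flags)
{
    unsigned allocflags = 0;

    if (slab_flags & SLAB_CACHE_DMA)
        allocflags |= GFP_DMA;
    if (slab_flags & SLAB_CACHE_DMA32)
        allocflags |= GFP_DMA32;
    if (slab_flags & SLAB_RECLAIM_ACCOUNT)
        allocflags |= GFP_RECLAIMABLE;
    return allocflags;
}
```

The companion slab.h/slab_common.c hunks are the bookkeeping side of the same change: the new flag must be legal to pass and must participate in merge decisions, or two caches with different zones could be merged.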
+diff --git a/mm/sparse.c b/mm/sparse.c
+index 7ea5dc6c6b19..b3771f35a0ed 100644
+--- a/mm/sparse.c
++++ b/mm/sparse.c
+@@ -197,7 +197,7 @@ static inline int next_present_section_nr(int section_nr)
+ }
+ #define for_each_present_section_nr(start, section_nr)		\
+ 	for (section_nr = next_present_section_nr(start-1);	\
+-	     ((section_nr >= 0) &&				\
++	     ((section_nr != -1) &&				\
+ 	      (section_nr <= __highest_present_section_nr));	\
+ 	     section_nr = next_present_section_nr(section_nr))
+ 
+@@ -556,7 +556,7 @@ void online_mem_sections(unsigned long start_pfn, unsigned long end_pfn)
+ }
+ 
+ #ifdef CONFIG_MEMORY_HOTREMOVE
+-/* Mark all memory sections within the pfn range as online */
++/* Mark all memory sections within the pfn range as offline */
+ void offline_mem_sections(unsigned long start_pfn, unsigned long end_pfn)
+ {
+ 	unsigned long pfn;
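The sparse.c hunk replaces `section_nr >= 0` with `section_nr != -1`: if the macro is ever used with an unsigned loop variable, `>= 0` is vacuously true, while comparing against -1 still terminates because -1 converts to the all-ones value. A small sketch of the sentinel pattern (toy iterator, not the kernel's section table):

```c
#include <assert.h>

static int present[] = { 2, 5, 7 };

/* Toy "next present" iterator returning -1 when exhausted, like
 * next_present_section_nr(). */
static int next_present(int nr)
{
    for (unsigned i = 0; i < sizeof(present) / sizeof(present[0]); i++)
        if (present[i] > nr)
            return present[i];
    return -1;
}

static int count_sections(void)
{
    int count = 0;

    /* With an unsigned loop variable, "s >= 0" would be always true;
     * "s != (unsigned long)-1" still works because the iterator's -1
     * converts to ULONG_MAX. */
    for (unsigned long s = next_present(-1); s != (unsigned long)-1;
         s = next_present(s))
        count++;
    return count;
}
```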
+diff --git a/mm/swapfile.c b/mm/swapfile.c
+index dbac1d49469d..67f60e051814 100644
+--- a/mm/swapfile.c
++++ b/mm/swapfile.c
+@@ -98,6 +98,15 @@ static atomic_t proc_poll_event = ATOMIC_INIT(0);
+ 
+ atomic_t nr_rotate_swap = ATOMIC_INIT(0);
+ 
++static struct swap_info_struct *swap_type_to_swap_info(int type)
++{
++	if (type >= READ_ONCE(nr_swapfiles))
++		return NULL;
++
++	smp_rmb();	/* Pairs with smp_wmb in alloc_swap_info. */
++	return READ_ONCE(swap_info[type]);
++}
++
+ static inline unsigned char swap_count(unsigned char ent)
+ {
+ 	return ent & ~SWAP_HAS_CACHE;	/* may include COUNT_CONTINUED flag */
+@@ -1044,12 +1053,14 @@ noswap:
+ /* The only caller of this function is now suspend routine */
+ swp_entry_t get_swap_page_of_type(int type)
+ {
+-	struct swap_info_struct *si;
++	struct swap_info_struct *si = swap_type_to_swap_info(type);
+ 	pgoff_t offset;
+ 
+-	si = swap_info[type];
++	if (!si)
++		goto fail;
++
+ 	spin_lock(&si->lock);
+-	if (si && (si->flags & SWP_WRITEOK)) {
++	if (si->flags & SWP_WRITEOK) {
+ 		atomic_long_dec(&nr_swap_pages);
+ 		/* This is called for allocating swap entry, not cache */
+ 		offset = scan_swap_map(si, 1);
+@@ -1060,6 +1071,7 @@ swp_entry_t get_swap_page_of_type(int type)
+ 		atomic_long_inc(&nr_swap_pages);
+ 	}
+ 	spin_unlock(&si->lock);
++fail:
+ 	return (swp_entry_t) {0};
+ }
+ 
+@@ -1071,9 +1083,9 @@ static struct swap_info_struct *__swap_info_get(swp_entry_t entry)
+ 	if (!entry.val)
+ 		goto out;
+ 	type = swp_type(entry);
+-	if (type >= nr_swapfiles)
++	p = swap_type_to_swap_info(type);
++	if (!p)
+ 		goto bad_nofile;
+-	p = swap_info[type];
+ 	if (!(p->flags & SWP_USED))
+ 		goto bad_device;
+ 	offset = swp_offset(entry);
+@@ -1697,10 +1709,9 @@ int swap_type_of(dev_t device, sector_t offset, struct block_device **bdev_p)
+ sector_t swapdev_block(int type, pgoff_t offset)
+ {
+ 	struct block_device *bdev;
++	struct swap_info_struct *si = swap_type_to_swap_info(type);
+ 
+-	if ((unsigned int)type >= nr_swapfiles)
+-		return 0;
+-	if (!(swap_info[type]->flags & SWP_WRITEOK))
++	if (!si || !(si->flags & SWP_WRITEOK))
+ 		return 0;
+ 	return map_swap_entry(swp_entry(type, offset), &bdev);
+ }
+@@ -2258,7 +2269,7 @@ static sector_t map_swap_entry(swp_entry_t entry, struct block_device **bdev)
+ 	struct swap_extent *se;
+ 	pgoff_t offset;
+ 
+-	sis = swap_info[swp_type(entry)];
++	sis = swp_swap_info(entry);
+ 	*bdev = sis->bdev;
+ 
+ 	offset = swp_offset(entry);
+@@ -2700,9 +2711,7 @@ static void *swap_start(struct seq_file *swap, loff_t *pos)
+ 	if (!l)
+ 		return SEQ_START_TOKEN;
+ 
+-	for (type = 0; type < nr_swapfiles; type++) {
+-		smp_rmb();	/* read nr_swapfiles before swap_info[type] */
+-		si = swap_info[type];
++	for (type = 0; (si = swap_type_to_swap_info(type)); type++) {
+ 		if (!(si->flags & SWP_USED) || !si->swap_map)
+ 			continue;
+ 		if (!--l)
+@@ -2722,9 +2731,7 @@ static void *swap_next(struct seq_file *swap, void *v, loff_t *pos)
+ 	else
+ 		type = si->type + 1;
+ 
+-	for (; type < nr_swapfiles; type++) {
+-		smp_rmb();	/* read nr_swapfiles before swap_info[type] */
+-		si = swap_info[type];
++	for (; (si = swap_type_to_swap_info(type)); type++) {
+ 		if (!(si->flags & SWP_USED) || !si->swap_map)
+ 			continue;
+ 		++*pos;
+@@ -2831,14 +2838,14 @@ static struct swap_info_struct *alloc_swap_info(void)
+ 	}
+ 	if (type >= nr_swapfiles) {
+ 		p->type = type;
+-		swap_info[type] = p;
++		WRITE_ONCE(swap_info[type], p);
+ 		/*
+ 		 * Write swap_info[type] before nr_swapfiles, in case a
+ 		 * racing procfs swap_start() or swap_next() is reading them.
+ 		 * (We never shrink nr_swapfiles, we never free this entry.)
+ 		 */
+ 		smp_wmb();
+-		nr_swapfiles++;
++		WRITE_ONCE(nr_swapfiles, nr_swapfiles + 1);
+ 	} else {
+ 		kvfree(p);
+ 		p = swap_info[type];
+@@ -3358,7 +3365,7 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
+ {
+ 	struct swap_info_struct *p;
+ 	struct swap_cluster_info *ci;
+-	unsigned long offset, type;
++	unsigned long offset;
+ 	unsigned char count;
+ 	unsigned char has_cache;
+ 	int err = -EINVAL;
+@@ -3366,10 +3373,10 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
+ 	if (non_swap_entry(entry))
+ 		goto out;
+ 
+-	type = swp_type(entry);
+-	if (type >= nr_swapfiles)
++	p = swp_swap_info(entry);
++	if (!p)
+ 		goto bad_file;
+-	p = swap_info[type];
++
+ 	offset = swp_offset(entry);
+ 	if (unlikely(offset >= p->max))
+ 		goto out;
+@@ -3466,7 +3473,7 @@ int swapcache_prepare(swp_entry_t entry)
+ 
+ struct swap_info_struct *swp_swap_info(swp_entry_t entry)
+ {
+-	return swap_info[swp_type(entry)];
++	return swap_type_to_swap_info(swp_type(entry));
+ }
+ 
+ struct swap_info_struct *page_swap_info(struct page *page)
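The swapfile.c changes centralize a publish/lookup protocol: alloc_swap_info() stores swap_info[type] first and only then bumps nr_swapfiles behind smp_wmb(), so a reader that observes the new count behind smp_rmb() is guaranteed to see the entry. A C11 sketch of the same pairing using release/acquire, which is the portable analog of those barriers (toy table, hypothetical names):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

#define MAX_SLOTS 8

static void *slots[MAX_SLOTS];
static atomic_int nr_slots;

/* Writer: store the entry first, then publish the count with release
 * semantics (the kernel spells this smp_wmb()). */
static void publish(int type, void *entry)
{
    slots[type] = entry;
    atomic_store_explicit(&nr_slots, type + 1, memory_order_release);
}

/* Reader: load the count with acquire semantics (smp_rmb() in the
 * patch), then the entry -- the same shape as swap_type_to_swap_info(). */
static void *lookup(int type)
{
    if (type >= atomic_load_explicit(&nr_slots, memory_order_acquire))
        return NULL;            /* out of range: entry may not exist yet */
    return slots[type];
}
```

This sketch assumes slots are published in order, as swap types are; the point is the ordering of the two stores, which is why the patch also wraps them in WRITE_ONCE()/READ_ONCE().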
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index 871e41c55e23..583630bf247d 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -498,7 +498,11 @@ nocache:
+ 	}
+ 
+ found:
+-	if (addr + size > vend)
++	/*
++	 * Also check the calculated address against vstart,
++	 * because it can wrap to 0 for a big align request.
++	 */
++	if (addr + size > vend || addr < vstart)
+ 		goto overflow;
+ 
+ 	va->va_start = addr;
+@@ -2248,7 +2252,7 @@ int remap_vmalloc_range_partial(struct vm_area_struct *vma, unsigned long uaddr,
+ 	if (!(area->flags & VM_USERMAP))
+ 		return -EINVAL;
+ 
+-	if (kaddr + size > area->addr + area->size)
++	if (kaddr + size > area->addr + get_vm_area_size(area))
+ 		return -EINVAL;
+ 
+ 	do {
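The first vmalloc.c hunk guards against alignment overflow: rounding a high address up to a large alignment can wrap to 0, so `addr + size > vend` alone passes and the lower-bound check `addr < vstart` is needed. A sketch of the wrap and the two-sided range check (hypothetical helpers, power-of-two alignment assumed):

```c
#include <assert.h>
#include <stdint.h>

/* Round addr up to a power-of-two alignment; on overflow this wraps
 * toward 0, which is exactly why the patch re-checks "addr < vstart". */
static uintptr_t align_up(uintptr_t addr, uintptr_t align)
{
    return (addr + align - 1) & ~(align - 1);
}

/* Range check mirroring the patched test in alloc_vmap_area(). */
static int fits(uintptr_t addr, uintptr_t size,
                uintptr_t vstart, uintptr_t vend)
{
    return !(addr + size > vend || addr < vstart);
}
```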
+diff --git a/net/9p/client.c b/net/9p/client.c
+index 357214a51f13..b85d51f4b8eb 100644
+--- a/net/9p/client.c
++++ b/net/9p/client.c
+@@ -1061,7 +1061,7 @@ struct p9_client *p9_client_create(const char *dev_name, char *options)
+ 		p9_debug(P9_DEBUG_ERROR,
+ 			 "Please specify a msize of at least 4k\n");
+ 		err = -EINVAL;
+-		goto free_client;
++		goto close_trans;
+ 	}
+ 
+ 	err = p9_client_version(clnt);
+diff --git a/net/bluetooth/af_bluetooth.c b/net/bluetooth/af_bluetooth.c
+index deacc52d7ff1..8d12198eaa94 100644
+--- a/net/bluetooth/af_bluetooth.c
++++ b/net/bluetooth/af_bluetooth.c
+@@ -154,15 +154,25 @@ void bt_sock_unlink(struct bt_sock_list *l, struct sock *sk)
+ }
+ EXPORT_SYMBOL(bt_sock_unlink);
+ 
+-void bt_accept_enqueue(struct sock *parent, struct sock *sk)
++void bt_accept_enqueue(struct sock *parent, struct sock *sk, bool bh)
+ {
+ 	BT_DBG("parent %p, sk %p", parent, sk);
+ 
+ 	sock_hold(sk);
+-	lock_sock_nested(sk, SINGLE_DEPTH_NESTING);
++
++	if (bh)
++		bh_lock_sock_nested(sk);
++	else
++		lock_sock_nested(sk, SINGLE_DEPTH_NESTING);
++
+ 	list_add_tail(&bt_sk(sk)->accept_q, &bt_sk(parent)->accept_q);
+ 	bt_sk(sk)->parent = parent;
+-	release_sock(sk);
++
++	if (bh)
++		bh_unlock_sock(sk);
++	else
++		release_sock(sk);
++
+ 	parent->sk_ack_backlog++;
+ }
+ EXPORT_SYMBOL(bt_accept_enqueue);
+diff --git a/net/bluetooth/hci_sock.c b/net/bluetooth/hci_sock.c
+index 1506e1632394..d4e2a166ae17 100644
+--- a/net/bluetooth/hci_sock.c
++++ b/net/bluetooth/hci_sock.c
+@@ -831,8 +831,6 @@ static int hci_sock_release(struct socket *sock)
+ 	if (!sk)
+ 		return 0;
+ 
+-	hdev = hci_pi(sk)->hdev;
+-
+ 	switch (hci_pi(sk)->channel) {
+ 	case HCI_CHANNEL_MONITOR:
+ 		atomic_dec(&monitor_promisc);
+@@ -854,6 +852,7 @@ static int hci_sock_release(struct socket *sock)
+ 
+ 	bt_sock_unlink(&hci_sk_list, sk);
+ 
++	hdev = hci_pi(sk)->hdev;
+ 	if (hdev) {
+ 		if (hci_pi(sk)->channel == HCI_CHANNEL_USER) {
+ 			/* When releasing a user channel exclusive access,
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 2a7fb517d460..ccdc5c67d22a 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -3337,16 +3337,22 @@ static int l2cap_parse_conf_req(struct l2cap_chan *chan, void *data, size_t data
+ 
+ 	while (len >= L2CAP_CONF_OPT_SIZE) {
+ 		len -= l2cap_get_conf_opt(&req, &type, &olen, &val);
++		if (len < 0)
++			break;
+ 
+ 		hint  = type & L2CAP_CONF_HINT;
+ 		type &= L2CAP_CONF_MASK;
+ 
+ 		switch (type) {
+ 		case L2CAP_CONF_MTU:
++			if (olen != 2)
++				break;
+ 			mtu = val;
+ 			break;
+ 
+ 		case L2CAP_CONF_FLUSH_TO:
++			if (olen != 2)
++				break;
+ 			chan->flush_to = val;
+ 			break;
+ 
+@@ -3354,26 +3360,30 @@ static int l2cap_parse_conf_req(struct l2cap_chan *chan, void *data, size_t data
+ 			break;
+ 
+ 		case L2CAP_CONF_RFC:
+-			if (olen == sizeof(rfc))
+-				memcpy(&rfc, (void *) val, olen);
++			if (olen != sizeof(rfc))
++				break;
++			memcpy(&rfc, (void *) val, olen);
+ 			break;
+ 
+ 		case L2CAP_CONF_FCS:
++			if (olen != 1)
++				break;
+ 			if (val == L2CAP_FCS_NONE)
+ 				set_bit(CONF_RECV_NO_FCS, &chan->conf_state);
+ 			break;
+ 
+ 		case L2CAP_CONF_EFS:
+-			if (olen == sizeof(efs)) {
+-				remote_efs = 1;
+-				memcpy(&efs, (void *) val, olen);
+-			}
++			if (olen != sizeof(efs))
++				break;
++			remote_efs = 1;
++			memcpy(&efs, (void *) val, olen);
+ 			break;
+ 
+ 		case L2CAP_CONF_EWS:
++			if (olen != 2)
++				break;
+ 			if (!(chan->conn->local_fixed_chan & L2CAP_FC_A2MP))
+ 				return -ECONNREFUSED;
+-
+ 			set_bit(FLAG_EXT_CTRL, &chan->flags);
+ 			set_bit(CONF_EWS_RECV, &chan->conf_state);
+ 			chan->tx_win_max = L2CAP_DEFAULT_EXT_WINDOW;
+@@ -3383,7 +3393,6 @@ static int l2cap_parse_conf_req(struct l2cap_chan *chan, void *data, size_t data
+ 		default:
+ 			if (hint)
+ 				break;
+-
+ 			result = L2CAP_CONF_UNKNOWN;
+ 			*((u8 *) ptr++) = type;
+ 			break;
+@@ -3548,58 +3557,65 @@ static int l2cap_parse_conf_rsp(struct l2cap_chan *chan, void *rsp, int len,
+ 
+ 	while (len >= L2CAP_CONF_OPT_SIZE) {
+ 		len -= l2cap_get_conf_opt(&rsp, &type, &olen, &val);
++		if (len < 0)
++			break;
+ 
+ 		switch (type) {
+ 		case L2CAP_CONF_MTU:
++			if (olen != 2)
++				break;
+ 			if (val < L2CAP_DEFAULT_MIN_MTU) {
+ 				*result = L2CAP_CONF_UNACCEPT;
+ 				chan->imtu = L2CAP_DEFAULT_MIN_MTU;
+ 			} else
+ 				chan->imtu = val;
+-			l2cap_add_conf_opt(&ptr, L2CAP_CONF_MTU, 2, chan->imtu, endptr - ptr);
++			l2cap_add_conf_opt(&ptr, L2CAP_CONF_MTU, 2, chan->imtu,
++					   endptr - ptr);
+ 			break;
+ 
+ 		case L2CAP_CONF_FLUSH_TO:
++			if (olen != 2)
++				break;
+ 			chan->flush_to = val;
+-			l2cap_add_conf_opt(&ptr, L2CAP_CONF_FLUSH_TO,
+-					   2, chan->flush_to, endptr - ptr);
++			l2cap_add_conf_opt(&ptr, L2CAP_CONF_FLUSH_TO, 2,
++					   chan->flush_to, endptr - ptr);
+ 			break;
+ 
+ 		case L2CAP_CONF_RFC:
+-			if (olen == sizeof(rfc))
+-				memcpy(&rfc, (void *)val, olen);
+-
++			if (olen != sizeof(rfc))
++				break;
++			memcpy(&rfc, (void *)val, olen);
+ 			if (test_bit(CONF_STATE2_DEVICE, &chan->conf_state) &&
+ 			    rfc.mode != chan->mode)
+ 				return -ECONNREFUSED;
+-
+ 			chan->fcs = 0;
+-
+-			l2cap_add_conf_opt(&ptr, L2CAP_CONF_RFC,
+-					   sizeof(rfc), (unsigned long) &rfc, endptr - ptr);
++			l2cap_add_conf_opt(&ptr, L2CAP_CONF_RFC, sizeof(rfc),
++					   (unsigned long) &rfc, endptr - ptr);
+ 			break;
+ 
+ 		case L2CAP_CONF_EWS:
++			if (olen != 2)
++				break;
+ 			chan->ack_win = min_t(u16, val, chan->ack_win);
+ 			l2cap_add_conf_opt(&ptr, L2CAP_CONF_EWS, 2,
+ 					   chan->tx_win, endptr - ptr);
+ 			break;
+ 
+ 		case L2CAP_CONF_EFS:
+-			if (olen == sizeof(efs)) {
+-				memcpy(&efs, (void *)val, olen);
+-
+-				if (chan->local_stype != L2CAP_SERV_NOTRAFIC &&
+-				    efs.stype != L2CAP_SERV_NOTRAFIC &&
+-				    efs.stype != chan->local_stype)
+-					return -ECONNREFUSED;
+-
+-				l2cap_add_conf_opt(&ptr, L2CAP_CONF_EFS, sizeof(efs),
+-						   (unsigned long) &efs, endptr - ptr);
+-			}
++			if (olen != sizeof(efs))
++				break;
++			memcpy(&efs, (void *)val, olen);
++			if (chan->local_stype != L2CAP_SERV_NOTRAFIC &&
++			    efs.stype != L2CAP_SERV_NOTRAFIC &&
++			    efs.stype != chan->local_stype)
++				return -ECONNREFUSED;
++			l2cap_add_conf_opt(&ptr, L2CAP_CONF_EFS, sizeof(efs),
++					   (unsigned long) &efs, endptr - ptr);
+ 			break;
+ 
+ 		case L2CAP_CONF_FCS:
++			if (olen != 1)
++				break;
+ 			if (*result == L2CAP_CONF_PENDING)
+ 				if (val == L2CAP_FCS_NONE)
+ 					set_bit(CONF_RECV_NO_FCS,
+@@ -3728,13 +3744,18 @@ static void l2cap_conf_rfc_get(struct l2cap_chan *chan, void *rsp, int len)
+ 
+ 	while (len >= L2CAP_CONF_OPT_SIZE) {
+ 		len -= l2cap_get_conf_opt(&rsp, &type, &olen, &val);
++		if (len < 0)
++			break;
+ 
+ 		switch (type) {
+ 		case L2CAP_CONF_RFC:
+-			if (olen == sizeof(rfc))
+-				memcpy(&rfc, (void *)val, olen);
++			if (olen != sizeof(rfc))
++				break;
++			memcpy(&rfc, (void *)val, olen);
+ 			break;
+ 		case L2CAP_CONF_EWS:
++			if (olen != 2)
++				break;
+ 			txwin_ext = val;
+ 			break;
+ 		}
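The l2cap_core.c hunks add a length check before every option is consumed: each configure-option type has one expected length, and an option with the wrong length is skipped rather than parsed. A toy TLV parser showing that validate-length-first pattern (option types and lengths here are illustrative, not the L2CAP constants):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Toy TLV stream: 1-byte type, 1-byte length, then "length" value
 * bytes -- the same shape l2cap_get_conf_opt() walks. */
enum { OPT_MTU = 1 /* expects 2 bytes */, OPT_FLAG = 2 /* expects 1 byte */ };

struct conf { uint16_t mtu; uint8_t flag; int bad_opts; };

static void parse_opts(const uint8_t *buf, size_t len, struct conf *c)
{
    size_t off = 0;

    while (off + 2 <= len) {
        uint8_t type = buf[off], olen = buf[off + 1];

        if (off + 2 + olen > len)   /* truncated option: stop parsing */
            break;
        switch (type) {
        case OPT_MTU:
            if (olen != 2) { c->bad_opts++; break; }  /* length check first */
            memcpy(&c->mtu, buf + off + 2, 2);
            break;
        case OPT_FLAG:
            if (olen != 1) { c->bad_opts++; break; }
            c->flag = buf[off + 2];
            break;
        }
        off += 2 + olen;            /* advance by declared length */
    }
}
```

Rejecting an ill-sized option up front keeps a malformed peer from smuggling short values into wider fields, which is the bug class these hunks close.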
+diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
+index 686bdc6b35b0..a3a2cd55e23a 100644
+--- a/net/bluetooth/l2cap_sock.c
++++ b/net/bluetooth/l2cap_sock.c
+@@ -1252,7 +1252,7 @@ static struct l2cap_chan *l2cap_sock_new_connection_cb(struct l2cap_chan *chan)
+ 
+ 	l2cap_sock_init(sk, parent);
+ 
+-	bt_accept_enqueue(parent, sk);
++	bt_accept_enqueue(parent, sk, false);
+ 
+ 	release_sock(parent);
+ 
+diff --git a/net/bluetooth/rfcomm/sock.c b/net/bluetooth/rfcomm/sock.c
+index aa0db1d1bd9b..b1f49fcc0478 100644
+--- a/net/bluetooth/rfcomm/sock.c
++++ b/net/bluetooth/rfcomm/sock.c
+@@ -988,7 +988,7 @@ int rfcomm_connect_ind(struct rfcomm_session *s, u8 channel, struct rfcomm_dlc *
+ 	rfcomm_pi(sk)->channel = channel;
+ 
+ 	sk->sk_state = BT_CONFIG;
+-	bt_accept_enqueue(parent, sk);
++	bt_accept_enqueue(parent, sk, true);
+ 
+ 	/* Accept connection and return socket DLC */
+ 	*d = rfcomm_pi(sk)->dlc;
+diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c
+index 529b38996d8b..9a580999ca57 100644
+--- a/net/bluetooth/sco.c
++++ b/net/bluetooth/sco.c
+@@ -193,7 +193,7 @@ static void __sco_chan_add(struct sco_conn *conn, struct sock *sk,
+ 	conn->sk = sk;
+ 
+ 	if (parent)
+-		bt_accept_enqueue(parent, sk);
++		bt_accept_enqueue(parent, sk, true);
+ }
+ 
+ static int sco_chan_add(struct sco_conn *conn, struct sock *sk,
+diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
+index ac92b2eb32b1..e4777614a8a0 100644
+--- a/net/bridge/br_multicast.c
++++ b/net/bridge/br_multicast.c
+@@ -599,6 +599,7 @@ static int br_ip4_multicast_add_group(struct net_bridge *br,
+ 	if (ipv4_is_local_multicast(group))
+ 		return 0;
+ 
++	memset(&br_group, 0, sizeof(br_group));
+ 	br_group.u.ip4 = group;
+ 	br_group.proto = htons(ETH_P_IP);
+ 	br_group.vid = vid;
+@@ -1489,6 +1490,7 @@ static void br_ip4_multicast_leave_group(struct net_bridge *br,
+ 
+ 	own_query = port ? &port->ip4_own_query : &br->ip4_own_query;
+ 
++	memset(&br_group, 0, sizeof(br_group));
+ 	br_group.u.ip4 = group;
+ 	br_group.proto = htons(ETH_P_IP);
+ 	br_group.vid = vid;
+@@ -1512,6 +1514,7 @@ static void br_ip6_multicast_leave_group(struct net_bridge *br,
+ 
+ 	own_query = port ? &port->ip6_own_query : &br->ip6_own_query;
+ 
++	memset(&br_group, 0, sizeof(br_group));
+ 	br_group.u.ip6 = *group;
+ 	br_group.proto = htons(ETH_P_IPV6);
+ 	br_group.vid = vid;
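The br_multicast.c hunks memset() the on-stack br_ip before filling it in, because the struct is later used as a lookup key and its padding bytes are otherwise indeterminate. A sketch of why zeroing padding matters for memcmp-style keys (toy struct, hypothetical layout):

```c
#include <assert.h>
#include <string.h>

/* A struct with internal and trailing padding on typical ABIs, like
 * struct br_ip with its union and vid. */
struct key {
    char proto;      /* padding usually follows */
    int  addr;
    char vid;        /* trailing padding */
};

/* Build a key the way the patched code does: zero the whole object
 * first so padding bytes compare (and hash) deterministically. */
static void make_key(struct key *k, int addr, char vid)
{
    memset(k, 0, sizeof(*k));   /* clear padding before key use */
    k->proto = 4;
    k->addr = addr;
    k->vid = vid;
}
```

Without the memset, two keys with identical fields could hash or compare unequal because of garbage in the padding holes.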
+diff --git a/net/bridge/br_netfilter_hooks.c b/net/bridge/br_netfilter_hooks.c
+index c93c35bb73dd..40d058378b52 100644
+--- a/net/bridge/br_netfilter_hooks.c
++++ b/net/bridge/br_netfilter_hooks.c
+@@ -881,11 +881,6 @@ static const struct nf_br_ops br_ops = {
+ 	.br_dev_xmit_hook =	br_nf_dev_xmit,
+ };
+ 
+-void br_netfilter_enable(void)
+-{
+-}
+-EXPORT_SYMBOL_GPL(br_netfilter_enable);
+-
+ /* For br_nf_post_routing, we need (prio = NF_BR_PRI_LAST), because
+  * br_dev_queue_push_xmit is called afterwards */
+ static const struct nf_hook_ops br_nf_ops[] = {
+diff --git a/net/bridge/netfilter/ebtables.c b/net/bridge/netfilter/ebtables.c
+index 6693e209efe8..f77888ec93f1 100644
+--- a/net/bridge/netfilter/ebtables.c
++++ b/net/bridge/netfilter/ebtables.c
+@@ -31,10 +31,6 @@
+ /* needed for logical [in,out]-dev filtering */
+ #include "../br_private.h"
+ 
+-#define BUGPRINT(format, args...) printk("kernel msg: ebtables bug: please "\
+-					 "report to author: "format, ## args)
+-/* #define BUGPRINT(format, args...) */
+-
+ /* Each cpu has its own set of counters, so there is no need for write_lock in
+  * the softirq
+  * For reading or updating the counters, the user context needs to
+@@ -466,8 +462,6 @@ static int ebt_verify_pointers(const struct ebt_replace *repl,
+ 				/* we make userspace set this right,
+ 				 * so there is no misunderstanding
+ 				 */
+-				BUGPRINT("EBT_ENTRY_OR_ENTRIES shouldn't be set "
+-					 "in distinguisher\n");
+ 				return -EINVAL;
+ 			}
+ 			if (i != NF_BR_NUMHOOKS)
+@@ -485,18 +479,14 @@ static int ebt_verify_pointers(const struct ebt_replace *repl,
+ 			offset += e->next_offset;
+ 		}
+ 	}
+-	if (offset != limit) {
+-		BUGPRINT("entries_size too small\n");
++	if (offset != limit)
+ 		return -EINVAL;
+-	}
+ 
+ 	/* check if all valid hooks have a chain */
+ 	for (i = 0; i < NF_BR_NUMHOOKS; i++) {
+ 		if (!newinfo->hook_entry[i] &&
+-		   (valid_hooks & (1 << i))) {
+-			BUGPRINT("Valid hook without chain\n");
++		   (valid_hooks & (1 << i)))
+ 			return -EINVAL;
+-		}
+ 	}
+ 	return 0;
+ }
+@@ -523,26 +513,20 @@ ebt_check_entry_size_and_hooks(const struct ebt_entry *e,
+ 		/* this checks if the previous chain has as many entries
+ 		 * as it said it has
+ 		 */
+-		if (*n != *cnt) {
+-			BUGPRINT("nentries does not equal the nr of entries "
+-				 "in the chain\n");
++		if (*n != *cnt)
+ 			return -EINVAL;
+-		}
++
+ 		if (((struct ebt_entries *)e)->policy != EBT_DROP &&
+ 		   ((struct ebt_entries *)e)->policy != EBT_ACCEPT) {
+ 			/* only RETURN from udc */
+ 			if (i != NF_BR_NUMHOOKS ||
+-			   ((struct ebt_entries *)e)->policy != EBT_RETURN) {
+-				BUGPRINT("bad policy\n");
++			   ((struct ebt_entries *)e)->policy != EBT_RETURN)
+ 				return -EINVAL;
+-			}
+ 		}
+ 		if (i == NF_BR_NUMHOOKS) /* it's a user defined chain */
+ 			(*udc_cnt)++;
+-		if (((struct ebt_entries *)e)->counter_offset != *totalcnt) {
+-			BUGPRINT("counter_offset != totalcnt");
++		if (((struct ebt_entries *)e)->counter_offset != *totalcnt)
+ 			return -EINVAL;
+-		}
+ 		*n = ((struct ebt_entries *)e)->nentries;
+ 		*cnt = 0;
+ 		return 0;
+@@ -550,15 +534,13 @@ ebt_check_entry_size_and_hooks(const struct ebt_entry *e,
+ 	/* a plain old entry, heh */
+ 	if (sizeof(struct ebt_entry) > e->watchers_offset ||
+ 	   e->watchers_offset > e->target_offset ||
+-	   e->target_offset >= e->next_offset) {
+-		BUGPRINT("entry offsets not in right order\n");
++	   e->target_offset >= e->next_offset)
+ 		return -EINVAL;
+-	}
++
+ 	/* this is not checked anywhere else */
+-	if (e->next_offset - e->target_offset < sizeof(struct ebt_entry_target)) {
+-		BUGPRINT("target size too small\n");
++	if (e->next_offset - e->target_offset < sizeof(struct ebt_entry_target))
+ 		return -EINVAL;
+-	}
++
+ 	(*cnt)++;
+ 	(*totalcnt)++;
+ 	return 0;
+@@ -678,18 +660,15 @@ ebt_check_entry(struct ebt_entry *e, struct net *net,
+ 	if (e->bitmask == 0)
+ 		return 0;
+ 
+-	if (e->bitmask & ~EBT_F_MASK) {
+-		BUGPRINT("Unknown flag for bitmask\n");
++	if (e->bitmask & ~EBT_F_MASK)
+ 		return -EINVAL;
+-	}
+-	if (e->invflags & ~EBT_INV_MASK) {
+-		BUGPRINT("Unknown flag for inv bitmask\n");
++
++	if (e->invflags & ~EBT_INV_MASK)
+ 		return -EINVAL;
+-	}
+-	if ((e->bitmask & EBT_NOPROTO) && (e->bitmask & EBT_802_3)) {
+-		BUGPRINT("NOPROTO & 802_3 not allowed\n");
++
++	if ((e->bitmask & EBT_NOPROTO) && (e->bitmask & EBT_802_3))
+ 		return -EINVAL;
+-	}
++
+ 	/* what hook do we belong to? */
+ 	for (i = 0; i < NF_BR_NUMHOOKS; i++) {
+ 		if (!newinfo->hook_entry[i])
+@@ -748,13 +727,11 @@ ebt_check_entry(struct ebt_entry *e, struct net *net,
+ 	t->u.target = target;
+ 	if (t->u.target == &ebt_standard_target) {
+ 		if (gap < sizeof(struct ebt_standard_target)) {
+-			BUGPRINT("Standard target size too big\n");
+ 			ret = -EFAULT;
+ 			goto cleanup_watchers;
+ 		}
+ 		if (((struct ebt_standard_target *)t)->verdict <
+ 		   -NUM_STANDARD_TARGETS) {
+-			BUGPRINT("Invalid standard target\n");
+ 			ret = -EFAULT;
+ 			goto cleanup_watchers;
+ 		}
+@@ -813,10 +790,9 @@ static int check_chainloops(const struct ebt_entries *chain, struct ebt_cl_stack
+ 		if (strcmp(t->u.name, EBT_STANDARD_TARGET))
+ 			goto letscontinue;
+ 		if (e->target_offset + sizeof(struct ebt_standard_target) >
+-		   e->next_offset) {
+-			BUGPRINT("Standard target size too big\n");
++		   e->next_offset)
+ 			return -1;
+-		}
++
+ 		verdict = ((struct ebt_standard_target *)t)->verdict;
+ 		if (verdict >= 0) { /* jump to another chain */
+ 			struct ebt_entries *hlp2 =
+@@ -825,14 +801,12 @@ static int check_chainloops(const struct ebt_entries *chain, struct ebt_cl_stack
+ 				if (hlp2 == cl_s[i].cs.chaininfo)
+ 					break;
+ 			/* bad destination or loop */
+-			if (i == udc_cnt) {
+-				BUGPRINT("bad destination\n");
++			if (i == udc_cnt)
+ 				return -1;
+-			}
+-			if (cl_s[i].cs.n) {
+-				BUGPRINT("loop\n");
++
++			if (cl_s[i].cs.n)
+ 				return -1;
+-			}
++
+ 			if (cl_s[i].hookmask & (1 << hooknr))
+ 				goto letscontinue;
+ 			/* this can't be 0, so the loop test is correct */
+@@ -865,24 +839,21 @@ static int translate_table(struct net *net, const char *name,
+ 	i = 0;
+ 	while (i < NF_BR_NUMHOOKS && !newinfo->hook_entry[i])
+ 		i++;
+-	if (i == NF_BR_NUMHOOKS) {
+-		BUGPRINT("No valid hooks specified\n");
++	if (i == NF_BR_NUMHOOKS)
+ 		return -EINVAL;
+-	}
+-	if (newinfo->hook_entry[i] != (struct ebt_entries *)newinfo->entries) {
+-		BUGPRINT("Chains don't start at beginning\n");
++
++	if (newinfo->hook_entry[i] != (struct ebt_entries *)newinfo->entries)
+ 		return -EINVAL;
+-	}
++
+ 	/* make sure chains are ordered after each other in same order
+ 	 * as their corresponding hooks
+ 	 */
+ 	for (j = i + 1; j < NF_BR_NUMHOOKS; j++) {
+ 		if (!newinfo->hook_entry[j])
+ 			continue;
+-		if (newinfo->hook_entry[j] <= newinfo->hook_entry[i]) {
+-			BUGPRINT("Hook order must be followed\n");
++		if (newinfo->hook_entry[j] <= newinfo->hook_entry[i])
+ 			return -EINVAL;
+-		}
++
+ 		i = j;
+ 	}
+ 
+@@ -900,15 +871,11 @@ static int translate_table(struct net *net, const char *name,
+ 	if (ret != 0)
+ 		return ret;
+ 
+-	if (i != j) {
+-		BUGPRINT("nentries does not equal the nr of entries in the "
+-			 "(last) chain\n");
++	if (i != j)
+ 		return -EINVAL;
+-	}
+-	if (k != newinfo->nentries) {
+-		BUGPRINT("Total nentries is wrong\n");
++
++	if (k != newinfo->nentries)
+ 		return -EINVAL;
+-	}
+ 
+ 	/* get the location of the udc, put them in an array
+ 	 * while we're at it, allocate the chainstack
+@@ -942,7 +909,6 @@ static int translate_table(struct net *net, const char *name,
+ 		   ebt_get_udc_positions, newinfo, &i, cl_s);
+ 		/* sanity check */
+ 		if (i != udc_cnt) {
+-			BUGPRINT("i != udc_cnt\n");
+ 			vfree(cl_s);
+ 			return -EFAULT;
+ 		}
+@@ -1042,7 +1008,6 @@ static int do_replace_finish(struct net *net, struct ebt_replace *repl,
+ 		goto free_unlock;
+ 
+ 	if (repl->num_counters && repl->num_counters != t->private->nentries) {
+-		BUGPRINT("Wrong nr. of counters requested\n");
+ 		ret = -EINVAL;
+ 		goto free_unlock;
+ 	}
+@@ -1118,15 +1083,12 @@ static int do_replace(struct net *net, const void __user *user,
+ 	if (copy_from_user(&tmp, user, sizeof(tmp)) != 0)
+ 		return -EFAULT;
+ 
+-	if (len != sizeof(tmp) + tmp.entries_size) {
+-		BUGPRINT("Wrong len argument\n");
++	if (len != sizeof(tmp) + tmp.entries_size)
+ 		return -EINVAL;
+-	}
+ 
+-	if (tmp.entries_size == 0) {
+-		BUGPRINT("Entries_size never zero\n");
++	if (tmp.entries_size == 0)
+ 		return -EINVAL;
+-	}
++
+ 	/* overflow check */
+ 	if (tmp.nentries >= ((INT_MAX - sizeof(struct ebt_table_info)) /
+ 			NR_CPUS - SMP_CACHE_BYTES) / sizeof(struct ebt_counter))
+@@ -1153,7 +1115,6 @@ static int do_replace(struct net *net, const void __user *user,
+ 	}
+ 	if (copy_from_user(
+ 	   newinfo->entries, tmp.entries, tmp.entries_size) != 0) {
+-		BUGPRINT("Couldn't copy entries from userspace\n");
+ 		ret = -EFAULT;
+ 		goto free_entries;
+ 	}
+@@ -1194,10 +1155,8 @@ int ebt_register_table(struct net *net, const struct ebt_table *input_table,
+ 
+ 	if (input_table == NULL || (repl = input_table->table) == NULL ||
+ 	    repl->entries == NULL || repl->entries_size == 0 ||
+-	    repl->counters != NULL || input_table->private != NULL) {
+-		BUGPRINT("Bad table data for ebt_register_table!!!\n");
++	    repl->counters != NULL || input_table->private != NULL)
+ 		return -EINVAL;
+-	}
+ 
+ 	/* Don't add one table to multiple lists. */
+ 	table = kmemdup(input_table, sizeof(struct ebt_table), GFP_KERNEL);
+@@ -1235,13 +1194,10 @@ int ebt_register_table(struct net *net, const struct ebt_table *input_table,
+ 				((char *)repl->hook_entry[i] - repl->entries);
+ 	}
+ 	ret = translate_table(net, repl->name, newinfo);
+-	if (ret != 0) {
+-		BUGPRINT("Translate_table failed\n");
++	if (ret != 0)
+ 		goto free_chainstack;
+-	}
+ 
+ 	if (table->check && table->check(newinfo, table->valid_hooks)) {
+-		BUGPRINT("The table doesn't like its own initial data, lol\n");
+ 		ret = -EINVAL;
+ 		goto free_chainstack;
+ 	}
+@@ -1252,7 +1208,6 @@ int ebt_register_table(struct net *net, const struct ebt_table *input_table,
+ 	list_for_each_entry(t, &net->xt.tables[NFPROTO_BRIDGE], list) {
+ 		if (strcmp(t->name, table->name) == 0) {
+ 			ret = -EEXIST;
+-			BUGPRINT("Table name already exists\n");
+ 			goto free_unlock;
+ 		}
+ 	}
+@@ -1320,7 +1275,6 @@ static int do_update_counters(struct net *net, const char *name,
+ 		goto free_tmp;
+ 
+ 	if (num_counters != t->private->nentries) {
+-		BUGPRINT("Wrong nr of counters\n");
+ 		ret = -EINVAL;
+ 		goto unlock_mutex;
+ 	}
+@@ -1447,10 +1401,8 @@ static int copy_counters_to_user(struct ebt_table *t,
+ 	if (num_counters == 0)
+ 		return 0;
+ 
+-	if (num_counters != nentries) {
+-		BUGPRINT("Num_counters wrong\n");
++	if (num_counters != nentries)
+ 		return -EINVAL;
+-	}
+ 
+ 	counterstmp = vmalloc(array_size(nentries, sizeof(*counterstmp)));
+ 	if (!counterstmp)
+@@ -1496,15 +1448,11 @@ static int copy_everything_to_user(struct ebt_table *t, void __user *user,
+ 	   (tmp.num_counters ? nentries * sizeof(struct ebt_counter) : 0))
+ 		return -EINVAL;
+ 
+-	if (tmp.nentries != nentries) {
+-		BUGPRINT("Nentries wrong\n");
++	if (tmp.nentries != nentries)
+ 		return -EINVAL;
+-	}
+ 
+-	if (tmp.entries_size != entries_size) {
+-		BUGPRINT("Wrong size\n");
++	if (tmp.entries_size != entries_size)
+ 		return -EINVAL;
+-	}
+ 
+ 	ret = copy_counters_to_user(t, oldcounters, tmp.counters,
+ 					tmp.num_counters, nentries);
+@@ -1576,7 +1524,6 @@ static int do_ebt_get_ctl(struct sock *sk, int cmd, void __user *user, int *len)
+ 		}
+ 		mutex_unlock(&ebt_mutex);
+ 		if (copy_to_user(user, &tmp, *len) != 0) {
+-			BUGPRINT("c2u Didn't work\n");
+ 			ret = -EFAULT;
+ 			break;
+ 		}
+diff --git a/net/ceph/ceph_common.c b/net/ceph/ceph_common.c
+index 9cab80207ced..79eac465ec65 100644
+--- a/net/ceph/ceph_common.c
++++ b/net/ceph/ceph_common.c
+@@ -738,7 +738,6 @@ int __ceph_open_session(struct ceph_client *client, unsigned long started)
+ }
+ EXPORT_SYMBOL(__ceph_open_session);
+ 
+-
+ int ceph_open_session(struct ceph_client *client)
+ {
+ 	int ret;
+@@ -754,6 +753,23 @@ int ceph_open_session(struct ceph_client *client)
+ }
+ EXPORT_SYMBOL(ceph_open_session);
+ 
++int ceph_wait_for_latest_osdmap(struct ceph_client *client,
++				unsigned long timeout)
++{
++	u64 newest_epoch;
++	int ret;
++
++	ret = ceph_monc_get_version(&client->monc, "osdmap", &newest_epoch);
++	if (ret)
++		return ret;
++
++	if (client->osdc.osdmap->epoch >= newest_epoch)
++		return 0;
++
++	ceph_osdc_maybe_request_map(&client->osdc);
++	return ceph_monc_wait_osdmap(&client->monc, newest_epoch, timeout);
++}
++EXPORT_SYMBOL(ceph_wait_for_latest_osdmap);
+ 
+ static int __init init_ceph_lib(void)
+ {
+diff --git a/net/ceph/mon_client.c b/net/ceph/mon_client.c
+index 18deb3d889c4..a53e4fbb6319 100644
+--- a/net/ceph/mon_client.c
++++ b/net/ceph/mon_client.c
+@@ -922,6 +922,15 @@ int ceph_monc_blacklist_add(struct ceph_mon_client *monc,
+ 	mutex_unlock(&monc->mutex);
+ 
+ 	ret = wait_generic_request(req);
++	if (!ret)
++		/*
++		 * Make sure we have the osdmap that includes the blacklist
++		 * entry.  This is needed to ensure that the OSDs pick up the
++		 * new blacklist before processing any future requests from
++		 * this client.
++		 */
++		ret = ceph_wait_for_latest_osdmap(monc->client, 0);
++
+ out:
+ 	put_generic_request(req);
+ 	return ret;
+diff --git a/net/core/datagram.c b/net/core/datagram.c
+index b2651bb6d2a3..e657289db4ac 100644
+--- a/net/core/datagram.c
++++ b/net/core/datagram.c
+@@ -279,7 +279,7 @@ struct sk_buff *__skb_try_recv_datagram(struct sock *sk, unsigned int flags,
+ 			break;
+ 
+ 		sk_busy_loop(sk, flags & MSG_DONTWAIT);
+-	} while (!skb_queue_empty(&sk->sk_receive_queue));
++	} while (sk->sk_receive_queue.prev != *last);
+ 
+ 	error = -EAGAIN;
+ 
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 5d03889502eb..12824e007e06 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -5014,8 +5014,10 @@ static inline void __netif_receive_skb_list_ptype(struct list_head *head,
+ 	if (pt_prev->list_func != NULL)
+ 		pt_prev->list_func(head, pt_prev, orig_dev);
+ 	else
+-		list_for_each_entry_safe(skb, next, head, list)
++		list_for_each_entry_safe(skb, next, head, list) {
++			skb_list_del_init(skb);
+ 			pt_prev->func(skb, skb->dev, pt_prev, orig_dev);
++		}
+ }
+ 
+ static void __netif_receive_skb_list_core(struct list_head *head, bool pfmemalloc)
+diff --git a/net/core/ethtool.c b/net/core/ethtool.c
+index 158264f7cfaf..3a7f19a61768 100644
+--- a/net/core/ethtool.c
++++ b/net/core/ethtool.c
+@@ -1794,11 +1794,16 @@ static int ethtool_get_strings(struct net_device *dev, void __user *useraddr)
+ 	WARN_ON_ONCE(!ret);
+ 
+ 	gstrings.len = ret;
+-	data = vzalloc(array_size(gstrings.len, ETH_GSTRING_LEN));
+-	if (gstrings.len && !data)
+-		return -ENOMEM;
+ 
+-	__ethtool_get_strings(dev, gstrings.string_set, data);
++	if (gstrings.len) {
++		data = vzalloc(array_size(gstrings.len, ETH_GSTRING_LEN));
++		if (!data)
++			return -ENOMEM;
++
++		__ethtool_get_strings(dev, gstrings.string_set, data);
++	} else {
++		data = NULL;
++	}
+ 
+ 	ret = -EFAULT;
+ 	if (copy_to_user(useraddr, &gstrings, sizeof(gstrings)))
+@@ -1894,11 +1899,15 @@ static int ethtool_get_stats(struct net_device *dev, void __user *useraddr)
+ 		return -EFAULT;
+ 
+ 	stats.n_stats = n_stats;
+-	data = vzalloc(array_size(n_stats, sizeof(u64)));
+-	if (n_stats && !data)
+-		return -ENOMEM;
+ 
+-	ops->get_ethtool_stats(dev, &stats, data);
++	if (n_stats) {
++		data = vzalloc(array_size(n_stats, sizeof(u64)));
++		if (!data)
++			return -ENOMEM;
++		ops->get_ethtool_stats(dev, &stats, data);
++	} else {
++		data = NULL;
++	}
+ 
+ 	ret = -EFAULT;
+ 	if (copy_to_user(useraddr, &stats, sizeof(stats)))
+@@ -1938,16 +1947,21 @@ static int ethtool_get_phy_stats(struct net_device *dev, void __user *useraddr)
+ 		return -EFAULT;
+ 
+ 	stats.n_stats = n_stats;
+-	data = vzalloc(array_size(n_stats, sizeof(u64)));
+-	if (n_stats && !data)
+-		return -ENOMEM;
+ 
+-	if (dev->phydev && !ops->get_ethtool_phy_stats) {
+-		ret = phy_ethtool_get_stats(dev->phydev, &stats, data);
+-		if (ret < 0)
+-			return ret;
++	if (n_stats) {
++		data = vzalloc(array_size(n_stats, sizeof(u64)));
++		if (!data)
++			return -ENOMEM;
++
++		if (dev->phydev && !ops->get_ethtool_phy_stats) {
++			ret = phy_ethtool_get_stats(dev->phydev, &stats, data);
++			if (ret < 0)
++				goto out;
++		} else {
++			ops->get_ethtool_phy_stats(dev, &stats, data);
++		}
+ 	} else {
+-		ops->get_ethtool_phy_stats(dev, &stats, data);
++		data = NULL;
+ 	}
+ 
+ 	ret = -EFAULT;
+diff --git a/net/core/gen_stats.c b/net/core/gen_stats.c
+index 9bf1b9ad1780..ac679f74ba47 100644
+--- a/net/core/gen_stats.c
++++ b/net/core/gen_stats.c
+@@ -291,7 +291,6 @@ __gnet_stats_copy_queue_cpu(struct gnet_stats_queue *qstats,
+ 	for_each_possible_cpu(i) {
+ 		const struct gnet_stats_queue *qcpu = per_cpu_ptr(q, i);
+ 
+-		qstats->qlen = 0;
+ 		qstats->backlog += qcpu->backlog;
+ 		qstats->drops += qcpu->drops;
+ 		qstats->requeues += qcpu->requeues;
+@@ -307,7 +306,6 @@ void __gnet_stats_copy_queue(struct gnet_stats_queue *qstats,
+ 	if (cpu) {
+ 		__gnet_stats_copy_queue_cpu(qstats, cpu);
+ 	} else {
+-		qstats->qlen = q->qlen;
+ 		qstats->backlog = q->backlog;
+ 		qstats->drops = q->drops;
+ 		qstats->requeues = q->requeues;
+diff --git a/net/core/gro_cells.c b/net/core/gro_cells.c
+index acf45ddbe924..e095fb871d91 100644
+--- a/net/core/gro_cells.c
++++ b/net/core/gro_cells.c
+@@ -13,22 +13,36 @@ int gro_cells_receive(struct gro_cells *gcells, struct sk_buff *skb)
+ {
+ 	struct net_device *dev = skb->dev;
+ 	struct gro_cell *cell;
++	int res;
+ 
+-	if (!gcells->cells || skb_cloned(skb) || netif_elide_gro(dev))
+-		return netif_rx(skb);
++	rcu_read_lock();
++	if (unlikely(!(dev->flags & IFF_UP)))
++		goto drop;
++
++	if (!gcells->cells || skb_cloned(skb) || netif_elide_gro(dev)) {
++		res = netif_rx(skb);
++		goto unlock;
++	}
+ 
+ 	cell = this_cpu_ptr(gcells->cells);
+ 
+ 	if (skb_queue_len(&cell->napi_skbs) > netdev_max_backlog) {
++drop:
+ 		atomic_long_inc(&dev->rx_dropped);
+ 		kfree_skb(skb);
+-		return NET_RX_DROP;
++		res = NET_RX_DROP;
++		goto unlock;
+ 	}
+ 
+ 	__skb_queue_tail(&cell->napi_skbs, skb);
+ 	if (skb_queue_len(&cell->napi_skbs) == 1)
+ 		napi_schedule(&cell->napi);
+-	return NET_RX_SUCCESS;
++
++	res = NET_RX_SUCCESS;
++
++unlock:
++	rcu_read_unlock();
++	return res;
+ }
+ EXPORT_SYMBOL(gro_cells_receive);
+ 
+diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
+index ff9fd2bb4ce4..aec26584f0ca 100644
+--- a/net/core/net-sysfs.c
++++ b/net/core/net-sysfs.c
+@@ -934,6 +934,8 @@ static int rx_queue_add_kobject(struct net_device *dev, int index)
+ 	if (error)
+ 		return error;
+ 
++	dev_hold(queue->dev);
++
+ 	if (dev->sysfs_rx_queue_group) {
+ 		error = sysfs_create_group(kobj, dev->sysfs_rx_queue_group);
+ 		if (error) {
+@@ -943,7 +945,6 @@ static int rx_queue_add_kobject(struct net_device *dev, int index)
+ 	}
+ 
+ 	kobject_uevent(kobj, KOBJ_ADD);
+-	dev_hold(queue->dev);
+ 
+ 	return error;
+ }
+@@ -1472,6 +1473,8 @@ static int netdev_queue_add_kobject(struct net_device *dev, int index)
+ 	if (error)
+ 		return error;
+ 
++	dev_hold(queue->dev);
++
+ #ifdef CONFIG_BQL
+ 	error = sysfs_create_group(kobj, &dql_group);
+ 	if (error) {
+@@ -1481,7 +1484,6 @@ static int netdev_queue_add_kobject(struct net_device *dev, int index)
+ #endif
+ 
+ 	kobject_uevent(kobj, KOBJ_ADD);
+-	dev_hold(queue->dev);
+ 
+ 	return 0;
+ }
+@@ -1547,6 +1549,9 @@ static int register_queue_kobjects(struct net_device *dev)
+ error:
+ 	netdev_queue_update_kobjects(dev, txq, 0);
+ 	net_rx_queue_update_kobjects(dev, rxq, 0);
++#ifdef CONFIG_SYSFS
++	kset_unregister(dev->queues_kset);
++#endif
+ 	return error;
+ }
+ 
+diff --git a/net/core/net_namespace.c b/net/core/net_namespace.c
+index b02fb19df2cc..40c249c574c1 100644
+--- a/net/core/net_namespace.c
++++ b/net/core/net_namespace.c
+@@ -304,6 +304,7 @@ static __net_init int setup_net(struct net *net, struct user_namespace *user_ns)
+ 
+ 	refcount_set(&net->count, 1);
+ 	refcount_set(&net->passive, 1);
++	get_random_bytes(&net->hash_mix, sizeof(u32));
+ 	net->dev_base_seq = 1;
+ 	net->user_ns = user_ns;
+ 	idr_init(&net->netns_ids);
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 2415d9cb9b89..ef2cd5712098 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -3801,7 +3801,7 @@ int skb_gro_receive(struct sk_buff *p, struct sk_buff *skb)
+ 	unsigned int delta_truesize;
+ 	struct sk_buff *lp;
+ 
+-	if (unlikely(p->len + len >= 65536))
++	if (unlikely(p->len + len >= 65536 || NAPI_GRO_CB(skb)->flush))
+ 		return -E2BIG;
+ 
+ 	lp = NAPI_GRO_CB(p)->last;
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index 8c826603bf36..8bc0ba1ebabe 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -545,6 +545,7 @@ static void sk_psock_destroy_deferred(struct work_struct *gc)
+ 	struct sk_psock *psock = container_of(gc, struct sk_psock, gc);
+ 
+ 	/* No sk_callback_lock since already detached. */
++	strp_stop(&psock->parser.strp);
+ 	strp_done(&psock->parser.strp);
+ 
+ 	cancel_work_sync(&psock->work);
+diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c
+index d5740bad5b18..57d84e9b7b6f 100644
+--- a/net/dccp/ipv6.c
++++ b/net/dccp/ipv6.c
+@@ -436,8 +436,8 @@ static struct sock *dccp_v6_request_recv_sock(const struct sock *sk,
+ 		newnp->ipv6_mc_list = NULL;
+ 		newnp->ipv6_ac_list = NULL;
+ 		newnp->ipv6_fl_list = NULL;
+-		newnp->mcast_oif   = inet6_iif(skb);
+-		newnp->mcast_hops  = ipv6_hdr(skb)->hop_limit;
++		newnp->mcast_oif   = inet_iif(skb);
++		newnp->mcast_hops  = ip_hdr(skb)->ttl;
+ 
+ 		/*
+ 		 * No need to charge this sock to the relevant IPv6 refcnt debug socks count
+diff --git a/net/hsr/hsr_device.c b/net/hsr/hsr_device.c
+index b8cd43c9ed5b..a97bf326b231 100644
+--- a/net/hsr/hsr_device.c
++++ b/net/hsr/hsr_device.c
+@@ -94,9 +94,8 @@ static void hsr_check_announce(struct net_device *hsr_dev,
+ 			&& (old_operstate != IF_OPER_UP)) {
+ 		/* Went up */
+ 		hsr->announce_count = 0;
+-		hsr->announce_timer.expires = jiffies +
+-				msecs_to_jiffies(HSR_ANNOUNCE_INTERVAL);
+-		add_timer(&hsr->announce_timer);
++		mod_timer(&hsr->announce_timer,
++			  jiffies + msecs_to_jiffies(HSR_ANNOUNCE_INTERVAL));
+ 	}
+ 
+ 	if ((hsr_dev->operstate != IF_OPER_UP) && (old_operstate == IF_OPER_UP))
+@@ -332,6 +331,7 @@ static void hsr_announce(struct timer_list *t)
+ {
+ 	struct hsr_priv *hsr;
+ 	struct hsr_port *master;
++	unsigned long interval;
+ 
+ 	hsr = from_timer(hsr, t, announce_timer);
+ 
+@@ -343,18 +343,16 @@ static void hsr_announce(struct timer_list *t)
+ 				hsr->protVersion);
+ 		hsr->announce_count++;
+ 
+-		hsr->announce_timer.expires = jiffies +
+-				msecs_to_jiffies(HSR_ANNOUNCE_INTERVAL);
++		interval = msecs_to_jiffies(HSR_ANNOUNCE_INTERVAL);
+ 	} else {
+ 		send_hsr_supervision_frame(master, HSR_TLV_LIFE_CHECK,
+ 				hsr->protVersion);
+ 
+-		hsr->announce_timer.expires = jiffies +
+-				msecs_to_jiffies(HSR_LIFE_CHECK_INTERVAL);
++		interval = msecs_to_jiffies(HSR_LIFE_CHECK_INTERVAL);
+ 	}
+ 
+ 	if (is_admin_up(master->dev))
+-		add_timer(&hsr->announce_timer);
++		mod_timer(&hsr->announce_timer, jiffies + interval);
+ 
+ 	rcu_read_unlock();
+ }
+@@ -486,7 +484,7 @@ int hsr_dev_finalize(struct net_device *hsr_dev, struct net_device *slave[2],
+ 
+ 	res = hsr_add_port(hsr, hsr_dev, HSR_PT_MASTER);
+ 	if (res)
+-		return res;
++		goto err_add_port;
+ 
+ 	res = register_netdevice(hsr_dev);
+ 	if (res)
+@@ -506,6 +504,8 @@ int hsr_dev_finalize(struct net_device *hsr_dev, struct net_device *slave[2],
+ fail:
+ 	hsr_for_each_port(hsr, port)
+ 		hsr_del_port(port);
++err_add_port:
++	hsr_del_node(&hsr->self_node_db);
+ 
+ 	return res;
+ }
+diff --git a/net/hsr/hsr_framereg.c b/net/hsr/hsr_framereg.c
+index 286ceb41ac0c..9af16cb68f76 100644
+--- a/net/hsr/hsr_framereg.c
++++ b/net/hsr/hsr_framereg.c
+@@ -124,6 +124,18 @@ int hsr_create_self_node(struct list_head *self_node_db,
+ 	return 0;
+ }
+ 
++void hsr_del_node(struct list_head *self_node_db)
++{
++	struct hsr_node *node;
++
++	rcu_read_lock();
++	node = list_first_or_null_rcu(self_node_db, struct hsr_node, mac_list);
++	rcu_read_unlock();
++	if (node) {
++		list_del_rcu(&node->mac_list);
++		kfree(node);
++	}
++}
+ 
+ /* Allocate an hsr_node and add it to node_db. 'addr' is the node's AddressA;
+  * seq_out is used to initialize filtering of outgoing duplicate frames
+diff --git a/net/hsr/hsr_framereg.h b/net/hsr/hsr_framereg.h
+index 370b45998121..531fd3dfcac1 100644
+--- a/net/hsr/hsr_framereg.h
++++ b/net/hsr/hsr_framereg.h
+@@ -16,6 +16,7 @@
+ 
+ struct hsr_node;
+ 
++void hsr_del_node(struct list_head *self_node_db);
+ struct hsr_node *hsr_add_node(struct list_head *node_db, unsigned char addr[],
+ 			      u16 seq_out);
+ struct hsr_node *hsr_get_node(struct hsr_port *port, struct sk_buff *skb,
+diff --git a/net/ipv4/fou.c b/net/ipv4/fou.c
+index 437070d1ffb1..79e98e21cdd7 100644
+--- a/net/ipv4/fou.c
++++ b/net/ipv4/fou.c
+@@ -1024,7 +1024,7 @@ static int gue_err(struct sk_buff *skb, u32 info)
+ 	int ret;
+ 
+ 	len = sizeof(struct udphdr) + sizeof(struct guehdr);
+-	if (!pskb_may_pull(skb, len))
++	if (!pskb_may_pull(skb, transport_offset + len))
+ 		return -EINVAL;
+ 
+ 	guehdr = (struct guehdr *)&udp_hdr(skb)[1];
+@@ -1059,7 +1059,7 @@ static int gue_err(struct sk_buff *skb, u32 info)
+ 
+ 	optlen = guehdr->hlen << 2;
+ 
+-	if (!pskb_may_pull(skb, len + optlen))
++	if (!pskb_may_pull(skb, transport_offset + len + optlen))
+ 		return -EINVAL;
+ 
+ 	guehdr = (struct guehdr *)&udp_hdr(skb)[1];
+diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
+index 6ae89f2b541b..2d5734079e6b 100644
+--- a/net/ipv4/ip_gre.c
++++ b/net/ipv4/ip_gre.c
+@@ -259,7 +259,6 @@ static int erspan_rcv(struct sk_buff *skb, struct tnl_ptk_info *tpi,
+ 	struct net *net = dev_net(skb->dev);
+ 	struct metadata_dst *tun_dst = NULL;
+ 	struct erspan_base_hdr *ershdr;
+-	struct erspan_metadata *pkt_md;
+ 	struct ip_tunnel_net *itn;
+ 	struct ip_tunnel *tunnel;
+ 	const struct iphdr *iph;
+@@ -282,9 +281,6 @@ static int erspan_rcv(struct sk_buff *skb, struct tnl_ptk_info *tpi,
+ 		if (unlikely(!pskb_may_pull(skb, len)))
+ 			return PACKET_REJECT;
+ 
+-		ershdr = (struct erspan_base_hdr *)(skb->data + gre_hdr_len);
+-		pkt_md = (struct erspan_metadata *)(ershdr + 1);
+-
+ 		if (__iptunnel_pull_header(skb,
+ 					   len,
+ 					   htons(ETH_P_TEB),
+@@ -292,8 +288,9 @@ static int erspan_rcv(struct sk_buff *skb, struct tnl_ptk_info *tpi,
+ 			goto drop;
+ 
+ 		if (tunnel->collect_md) {
++			struct erspan_metadata *pkt_md, *md;
+ 			struct ip_tunnel_info *info;
+-			struct erspan_metadata *md;
++			unsigned char *gh;
+ 			__be64 tun_id;
+ 			__be16 flags;
+ 
+@@ -306,6 +303,14 @@ static int erspan_rcv(struct sk_buff *skb, struct tnl_ptk_info *tpi,
+ 			if (!tun_dst)
+ 				return PACKET_REJECT;
+ 
++			/* skb can be uncloned in __iptunnel_pull_header, so
++			 * old pkt_md is no longer valid and we need to reset
++			 * it
++			 */
++			gh = skb_network_header(skb) +
++			     skb_network_header_len(skb);
++			pkt_md = (struct erspan_metadata *)(gh + gre_hdr_len +
++							    sizeof(*ershdr));
+ 			md = ip_tunnel_info_opts(&tun_dst->u.tun_info);
+ 			md->version = ver;
+ 			md2 = &md->u.md2;
+diff --git a/net/ipv4/ip_input.c b/net/ipv4/ip_input.c
+index 1f4737b77067..ccf0d31b6ce5 100644
+--- a/net/ipv4/ip_input.c
++++ b/net/ipv4/ip_input.c
+@@ -257,11 +257,10 @@ int ip_local_deliver(struct sk_buff *skb)
+ 		       ip_local_deliver_finish);
+ }
+ 
+-static inline bool ip_rcv_options(struct sk_buff *skb)
++static inline bool ip_rcv_options(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	struct ip_options *opt;
+ 	const struct iphdr *iph;
+-	struct net_device *dev = skb->dev;
+ 
+ 	/* It looks as overkill, because not all
+ 	   IP options require packet mangling.
+@@ -297,7 +296,7 @@ static inline bool ip_rcv_options(struct sk_buff *skb)
+ 			}
+ 		}
+ 
+-		if (ip_options_rcv_srr(skb))
++		if (ip_options_rcv_srr(skb, dev))
+ 			goto drop;
+ 	}
+ 
+@@ -353,7 +352,7 @@ static int ip_rcv_finish_core(struct net *net, struct sock *sk,
+ 	}
+ #endif
+ 
+-	if (iph->ihl > 5 && ip_rcv_options(skb))
++	if (iph->ihl > 5 && ip_rcv_options(skb, dev))
+ 		goto drop;
+ 
+ 	rt = skb_rtable(skb);
+diff --git a/net/ipv4/ip_options.c b/net/ipv4/ip_options.c
+index 32a35043c9f5..3db31bb9df50 100644
+--- a/net/ipv4/ip_options.c
++++ b/net/ipv4/ip_options.c
+@@ -612,7 +612,7 @@ void ip_forward_options(struct sk_buff *skb)
+ 	}
+ }
+ 
+-int ip_options_rcv_srr(struct sk_buff *skb)
++int ip_options_rcv_srr(struct sk_buff *skb, struct net_device *dev)
+ {
+ 	struct ip_options *opt = &(IPCB(skb)->opt);
+ 	int srrspace, srrptr;
+@@ -647,7 +647,7 @@ int ip_options_rcv_srr(struct sk_buff *skb)
+ 
+ 		orefdst = skb->_skb_refdst;
+ 		skb_dst_set(skb, NULL);
+-		err = ip_route_input(skb, nexthop, iph->saddr, iph->tos, skb->dev);
++		err = ip_route_input(skb, nexthop, iph->saddr, iph->tos, dev);
+ 		rt2 = skb_rtable(skb);
+ 		if (err || (rt2->rt_type != RTN_UNICAST && rt2->rt_type != RTN_LOCAL)) {
+ 			skb_dst_drop(skb);
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 7bb9128c8363..e04cdb58a602 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -1303,6 +1303,10 @@ static void ip_del_fnhe(struct fib_nh *nh, __be32 daddr)
+ 		if (fnhe->fnhe_daddr == daddr) {
+ 			rcu_assign_pointer(*fnhe_p, rcu_dereference_protected(
+ 				fnhe->fnhe_next, lockdep_is_held(&fnhe_lock)));
++			/* set fnhe_daddr to 0 to ensure it won't bind with
++			 * new dsts in rt_bind_exception().
++			 */
++			fnhe->fnhe_daddr = 0;
+ 			fnhe_flush_routes(fnhe);
+ 			kfree_rcu(fnhe, rcu);
+ 			break;
+@@ -2144,12 +2148,13 @@ int ip_route_input_rcu(struct sk_buff *skb, __be32 daddr, __be32 saddr,
+ 		int our = 0;
+ 		int err = -EINVAL;
+ 
+-		if (in_dev)
+-			our = ip_check_mc_rcu(in_dev, daddr, saddr,
+-					      ip_hdr(skb)->protocol);
++		if (!in_dev)
++			return err;
++		our = ip_check_mc_rcu(in_dev, daddr, saddr,
++				      ip_hdr(skb)->protocol);
+ 
+ 		/* check l3 master if no match yet */
+-		if ((!in_dev || !our) && netif_is_l3_slave(dev)) {
++		if (!our && netif_is_l3_slave(dev)) {
+ 			struct in_device *l3_in_dev;
+ 
+ 			l3_in_dev = __in_dev_get_rcu(skb->dev);
+diff --git a/net/ipv4/syncookies.c b/net/ipv4/syncookies.c
+index 606f868d9f3f..e531344611a0 100644
+--- a/net/ipv4/syncookies.c
++++ b/net/ipv4/syncookies.c
+@@ -216,7 +216,12 @@ struct sock *tcp_get_cookie_sock(struct sock *sk, struct sk_buff *skb,
+ 		refcount_set(&req->rsk_refcnt, 1);
+ 		tcp_sk(child)->tsoffset = tsoff;
+ 		sock_rps_save_rxhash(child, skb);
+-		inet_csk_reqsk_queue_add(sk, req, child);
++		if (!inet_csk_reqsk_queue_add(sk, req, child)) {
++			bh_unlock_sock(child);
++			sock_put(child);
++			child = NULL;
++			reqsk_put(req);
++		}
+ 	} else {
+ 		reqsk_free(req);
+ 	}
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index cf3c5095c10e..ce365cbba1d1 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -1914,6 +1914,11 @@ static int tcp_inq_hint(struct sock *sk)
+ 		inq = tp->rcv_nxt - tp->copied_seq;
+ 		release_sock(sk);
+ 	}
++	/* After receiving a FIN, tell the user-space to continue reading
++	 * by returning a non-zero inq.
++	 */
++	if (inq == 0 && sock_flag(sk, SOCK_DONE))
++		inq = 1;
+ 	return inq;
+ }
+ 
+diff --git a/net/ipv4/tcp_dctcp.c b/net/ipv4/tcp_dctcp.c
+index cd4814f7e962..359da68d7c06 100644
+--- a/net/ipv4/tcp_dctcp.c
++++ b/net/ipv4/tcp_dctcp.c
+@@ -67,11 +67,6 @@ static unsigned int dctcp_alpha_on_init __read_mostly = DCTCP_MAX_ALPHA;
+ module_param(dctcp_alpha_on_init, uint, 0644);
+ MODULE_PARM_DESC(dctcp_alpha_on_init, "parameter for initial alpha value");
+ 
+-static unsigned int dctcp_clamp_alpha_on_loss __read_mostly;
+-module_param(dctcp_clamp_alpha_on_loss, uint, 0644);
+-MODULE_PARM_DESC(dctcp_clamp_alpha_on_loss,
+-		 "parameter for clamping alpha on loss");
+-
+ static struct tcp_congestion_ops dctcp_reno;
+ 
+ static void dctcp_reset(const struct tcp_sock *tp, struct dctcp *ca)
+@@ -164,21 +159,23 @@ static void dctcp_update_alpha(struct sock *sk, u32 flags)
+ 	}
+ }
+ 
+-static void dctcp_state(struct sock *sk, u8 new_state)
++static void dctcp_react_to_loss(struct sock *sk)
+ {
+-	if (dctcp_clamp_alpha_on_loss && new_state == TCP_CA_Loss) {
+-		struct dctcp *ca = inet_csk_ca(sk);
++	struct dctcp *ca = inet_csk_ca(sk);
++	struct tcp_sock *tp = tcp_sk(sk);
+ 
+-		/* If this extension is enabled, we clamp dctcp_alpha to
+-		 * max on packet loss; the motivation is that dctcp_alpha
+-		 * is an indicator to the extend of congestion and packet
+-		 * loss is an indicator of extreme congestion; setting
+-		 * this in practice turned out to be beneficial, and
+-		 * effectively assumes total congestion which reduces the
+-		 * window by half.
+-		 */
+-		ca->dctcp_alpha = DCTCP_MAX_ALPHA;
+-	}
++	ca->loss_cwnd = tp->snd_cwnd;
++	tp->snd_ssthresh = max(tp->snd_cwnd >> 1U, 2U);
++}
++
++static void dctcp_state(struct sock *sk, u8 new_state)
++{
++	if (new_state == TCP_CA_Recovery &&
++	    new_state != inet_csk(sk)->icsk_ca_state)
++		dctcp_react_to_loss(sk);
++	/* We handle RTO in dctcp_cwnd_event to ensure that we perform only
++	 * one loss-adjustment per RTT.
++	 */
+ }
+ 
+ static void dctcp_cwnd_event(struct sock *sk, enum tcp_ca_event ev)
+@@ -190,6 +187,9 @@ static void dctcp_cwnd_event(struct sock *sk, enum tcp_ca_event ev)
+ 	case CA_EVENT_ECN_NO_CE:
+ 		dctcp_ece_ack_update(sk, ev, &ca->prior_rcv_nxt, &ca->ce_state);
+ 		break;
++	case CA_EVENT_LOSS:
++		dctcp_react_to_loss(sk);
++		break;
+ 	default:
+ 		/* Don't care for the rest. */
+ 		break;
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 76858b14ebe9..7b1ef897b398 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -6519,7 +6519,13 @@ int tcp_conn_request(struct request_sock_ops *rsk_ops,
+ 		af_ops->send_synack(fastopen_sk, dst, &fl, req,
+ 				    &foc, TCP_SYNACK_FASTOPEN);
+ 		/* Add the child socket directly into the accept queue */
+-		inet_csk_reqsk_queue_add(sk, req, fastopen_sk);
++		if (!inet_csk_reqsk_queue_add(sk, req, fastopen_sk)) {
++			reqsk_fastopen_remove(fastopen_sk, req, false);
++			bh_unlock_sock(fastopen_sk);
++			sock_put(fastopen_sk);
++			reqsk_put(req);
++			goto drop;
++		}
+ 		sk->sk_data_ready(sk);
+ 		bh_unlock_sock(fastopen_sk);
+ 		sock_put(fastopen_sk);
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index ec3cea9d6828..00852f47a73d 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -1734,15 +1734,8 @@ EXPORT_SYMBOL(tcp_add_backlog);
+ int tcp_filter(struct sock *sk, struct sk_buff *skb)
+ {
+ 	struct tcphdr *th = (struct tcphdr *)skb->data;
+-	unsigned int eaten = skb->len;
+-	int err;
+ 
+-	err = sk_filter_trim_cap(sk, skb, th->doff * 4);
+-	if (!err) {
+-		eaten -= skb->len;
+-		TCP_SKB_CB(skb)->end_seq -= eaten;
+-	}
+-	return err;
++	return sk_filter_trim_cap(sk, skb, th->doff * 4);
+ }
+ EXPORT_SYMBOL(tcp_filter);
+ 
+@@ -2585,7 +2578,8 @@ static void __net_exit tcp_sk_exit(struct net *net)
+ {
+ 	int cpu;
+ 
+-	module_put(net->ipv4.tcp_congestion_control->owner);
++	if (net->ipv4.tcp_congestion_control)
++		module_put(net->ipv4.tcp_congestion_control->owner);
+ 
+ 	for_each_possible_cpu(cpu)
+ 		inet_ctl_sock_destroy(*per_cpu_ptr(net->ipv4.tcp_sk, cpu));
+diff --git a/net/ipv6/fou6.c b/net/ipv6/fou6.c
+index 867474abe269..ec4e2ed95f36 100644
+--- a/net/ipv6/fou6.c
++++ b/net/ipv6/fou6.c
+@@ -94,7 +94,7 @@ static int gue6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ 	int ret;
+ 
+ 	len = sizeof(struct udphdr) + sizeof(struct guehdr);
+-	if (!pskb_may_pull(skb, len))
++	if (!pskb_may_pull(skb, transport_offset + len))
+ 		return -EINVAL;
+ 
+ 	guehdr = (struct guehdr *)&udp_hdr(skb)[1];
+@@ -129,7 +129,7 @@ static int gue6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ 
+ 	optlen = guehdr->hlen << 2;
+ 
+-	if (!pskb_may_pull(skb, len + optlen))
++	if (!pskb_may_pull(skb, transport_offset + len + optlen))
+ 		return -EINVAL;
+ 
+ 	guehdr = (struct guehdr *)&udp_hdr(skb)[1];
+diff --git a/net/ipv6/ila/ila_xlat.c b/net/ipv6/ila/ila_xlat.c
+index 17c455ff69ff..7858fa9ea103 100644
+--- a/net/ipv6/ila/ila_xlat.c
++++ b/net/ipv6/ila/ila_xlat.c
+@@ -420,6 +420,7 @@ int ila_xlat_nl_cmd_flush(struct sk_buff *skb, struct genl_info *info)
+ 
+ done:
+ 	rhashtable_walk_stop(&iter);
++	rhashtable_walk_exit(&iter);
+ 	return ret;
+ }
+ 
+diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
+index 26f25b6e2833..438f1a5fd19a 100644
+--- a/net/ipv6/ip6_gre.c
++++ b/net/ipv6/ip6_gre.c
+@@ -524,11 +524,10 @@ static int ip6gre_rcv(struct sk_buff *skb, const struct tnl_ptk_info *tpi)
+ 	return PACKET_REJECT;
+ }
+ 
+-static int ip6erspan_rcv(struct sk_buff *skb, int gre_hdr_len,
+-			 struct tnl_ptk_info *tpi)
++static int ip6erspan_rcv(struct sk_buff *skb, struct tnl_ptk_info *tpi,
++			 int gre_hdr_len)
+ {
+ 	struct erspan_base_hdr *ershdr;
+-	struct erspan_metadata *pkt_md;
+ 	const struct ipv6hdr *ipv6h;
+ 	struct erspan_md2 *md2;
+ 	struct ip6_tnl *tunnel;
+@@ -547,18 +546,16 @@ static int ip6erspan_rcv(struct sk_buff *skb, int gre_hdr_len,
+ 		if (unlikely(!pskb_may_pull(skb, len)))
+ 			return PACKET_REJECT;
+ 
+-		ershdr = (struct erspan_base_hdr *)skb->data;
+-		pkt_md = (struct erspan_metadata *)(ershdr + 1);
+-
+ 		if (__iptunnel_pull_header(skb, len,
+ 					   htons(ETH_P_TEB),
+ 					   false, false) < 0)
+ 			return PACKET_REJECT;
+ 
+ 		if (tunnel->parms.collect_md) {
++			struct erspan_metadata *pkt_md, *md;
+ 			struct metadata_dst *tun_dst;
+ 			struct ip_tunnel_info *info;
+-			struct erspan_metadata *md;
++			unsigned char *gh;
+ 			__be64 tun_id;
+ 			__be16 flags;
+ 
+@@ -571,6 +568,14 @@ static int ip6erspan_rcv(struct sk_buff *skb, int gre_hdr_len,
+ 			if (!tun_dst)
+ 				return PACKET_REJECT;
+ 
++			/* skb can be uncloned in __iptunnel_pull_header, so
++			 * old pkt_md is no longer valid and we need to reset
++			 * it
++			 */
++			gh = skb_network_header(skb) +
++			     skb_network_header_len(skb);
++			pkt_md = (struct erspan_metadata *)(gh + gre_hdr_len +
++							    sizeof(*ershdr));
+ 			info = &tun_dst->u.tun_info;
+ 			md = ip_tunnel_info_opts(info);
+ 			md->version = ver;
+@@ -607,7 +612,7 @@ static int gre_rcv(struct sk_buff *skb)
+ 
+ 	if (unlikely(tpi.proto == htons(ETH_P_ERSPAN) ||
+ 		     tpi.proto == htons(ETH_P_ERSPAN2))) {
+-		if (ip6erspan_rcv(skb, hdr_len, &tpi) == PACKET_RCVD)
++		if (ip6erspan_rcv(skb, &tpi, hdr_len) == PACKET_RCVD)
+ 			return 0;
+ 		goto out;
+ 	}
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index 5f9fa0302b5a..e71227390bec 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -595,7 +595,7 @@ int ip6_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
+ 				inet6_sk(skb->sk) : NULL;
+ 	struct ipv6hdr *tmp_hdr;
+ 	struct frag_hdr *fh;
+-	unsigned int mtu, hlen, left, len;
++	unsigned int mtu, hlen, left, len, nexthdr_offset;
+ 	int hroom, troom;
+ 	__be32 frag_id;
+ 	int ptr, offset = 0, err = 0;
+@@ -606,6 +606,7 @@ int ip6_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
+ 		goto fail;
+ 	hlen = err;
+ 	nexthdr = *prevhdr;
++	nexthdr_offset = prevhdr - skb_network_header(skb);
+ 
+ 	mtu = ip6_skb_dst_mtu(skb);
+ 
+@@ -640,6 +641,7 @@ int ip6_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
+ 	    (err = skb_checksum_help(skb)))
+ 		goto fail;
+ 
++	prevhdr = skb_network_header(skb) + nexthdr_offset;
+ 	hroom = LL_RESERVED_SPACE(rt->dst.dev);
+ 	if (skb_has_frag_list(skb)) {
+ 		unsigned int first_len = skb_pagelen(skb);
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index 0c6403cf8b52..ade1390c6348 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -627,7 +627,7 @@ ip4ip6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ 		rt = ip_route_output_ports(dev_net(skb->dev), &fl4, NULL,
+ 					   eiph->daddr, eiph->saddr, 0, 0,
+ 					   IPPROTO_IPIP, RT_TOS(eiph->tos), 0);
+-		if (IS_ERR(rt) || rt->dst.dev->type != ARPHRD_TUNNEL) {
++		if (IS_ERR(rt) || rt->dst.dev->type != ARPHRD_TUNNEL6) {
+ 			if (!IS_ERR(rt))
+ 				ip_rt_put(rt);
+ 			goto out;
+@@ -636,7 +636,7 @@ ip4ip6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ 	} else {
+ 		if (ip_route_input(skb2, eiph->daddr, eiph->saddr, eiph->tos,
+ 				   skb2->dev) ||
+-		    skb_dst(skb2)->dev->type != ARPHRD_TUNNEL)
++		    skb_dst(skb2)->dev->type != ARPHRD_TUNNEL6)
+ 			goto out;
+ 	}
+ 
+diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
+index cc01aa3f2b5e..af91a1a402f1 100644
+--- a/net/ipv6/ip6mr.c
++++ b/net/ipv6/ip6mr.c
+@@ -1964,10 +1964,10 @@ int ip6mr_compat_ioctl(struct sock *sk, unsigned int cmd, void __user *arg)
+ 
+ static inline int ip6mr_forward2_finish(struct net *net, struct sock *sk, struct sk_buff *skb)
+ {
+-	__IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)),
+-			IPSTATS_MIB_OUTFORWDATAGRAMS);
+-	__IP6_ADD_STATS(net, ip6_dst_idev(skb_dst(skb)),
+-			IPSTATS_MIB_OUTOCTETS, skb->len);
++	IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)),
++		      IPSTATS_MIB_OUTFORWDATAGRAMS);
++	IP6_ADD_STATS(net, ip6_dst_idev(skb_dst(skb)),
++		      IPSTATS_MIB_OUTOCTETS, skb->len);
+ 	return dst_output(net, sk, skb);
+ }
+ 
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 8dad1d690b78..0086acc16f3c 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -1040,14 +1040,20 @@ static struct rt6_info *ip6_create_rt_rcu(struct fib6_info *rt)
+ 	struct rt6_info *nrt;
+ 
+ 	if (!fib6_info_hold_safe(rt))
+-		return NULL;
++		goto fallback;
+ 
+ 	nrt = ip6_dst_alloc(dev_net(dev), dev, flags);
+-	if (nrt)
+-		ip6_rt_copy_init(nrt, rt);
+-	else
++	if (!nrt) {
+ 		fib6_info_release(rt);
++		goto fallback;
++	}
+ 
++	ip6_rt_copy_init(nrt, rt);
++	return nrt;
++
++fallback:
++	nrt = dev_net(dev)->ipv6.ip6_null_entry;
++	dst_hold(&nrt->dst);
+ 	return nrt;
+ }
+ 
+@@ -1096,10 +1102,6 @@ restart:
+ 		dst_hold(&rt->dst);
+ 	} else {
+ 		rt = ip6_create_rt_rcu(f6i);
+-		if (!rt) {
+-			rt = net->ipv6.ip6_null_entry;
+-			dst_hold(&rt->dst);
+-		}
+ 	}
+ 
+ 	rcu_read_unlock();
+diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c
+index 09e440e8dfae..b2109b74857d 100644
+--- a/net/ipv6/sit.c
++++ b/net/ipv6/sit.c
+@@ -669,6 +669,10 @@ static int ipip6_rcv(struct sk_buff *skb)
+ 		    !net_eq(tunnel->net, dev_net(tunnel->dev))))
+ 			goto out;
+ 
++		/* skb can be uncloned in iptunnel_pull_header, so
++		 * old iph is no longer valid
++		 */
++		iph = (const struct iphdr *)skb_mac_header(skb);
+ 		err = IP_ECN_decapsulate(iph, skb);
+ 		if (unlikely(err)) {
+ 			if (log_ecn_error)
+@@ -778,8 +782,9 @@ static bool check_6rd(struct ip_tunnel *tunnel, const struct in6_addr *v6dst,
+ 		pbw0 = tunnel->ip6rd.prefixlen >> 5;
+ 		pbi0 = tunnel->ip6rd.prefixlen & 0x1f;
+ 
+-		d = (ntohl(v6dst->s6_addr32[pbw0]) << pbi0) >>
+-		    tunnel->ip6rd.relay_prefixlen;
++		d = tunnel->ip6rd.relay_prefixlen < 32 ?
++			(ntohl(v6dst->s6_addr32[pbw0]) << pbi0) >>
++		    tunnel->ip6rd.relay_prefixlen : 0;
+ 
+ 		pbi1 = pbi0 - tunnel->ip6rd.relay_prefixlen;
+ 		if (pbi1 > 0)
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index b81eb7cb815e..8505d96483d5 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -1112,11 +1112,11 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff *
+ 		newnp->ipv6_fl_list = NULL;
+ 		newnp->pktoptions  = NULL;
+ 		newnp->opt	   = NULL;
+-		newnp->mcast_oif   = tcp_v6_iif(skb);
+-		newnp->mcast_hops  = ipv6_hdr(skb)->hop_limit;
+-		newnp->rcv_flowinfo = ip6_flowinfo(ipv6_hdr(skb));
++		newnp->mcast_oif   = inet_iif(skb);
++		newnp->mcast_hops  = ip_hdr(skb)->ttl;
++		newnp->rcv_flowinfo = 0;
+ 		if (np->repflow)
+-			newnp->flow_label = ip6_flowlabel(ipv6_hdr(skb));
++			newnp->flow_label = 0;
+ 
+ 		/*
+ 		 * No need to charge this sock to the relevant IPv6 refcnt debug socks count
+diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
+index 571d824e4e24..b919db02c7f9 100644
+--- a/net/kcm/kcmsock.c
++++ b/net/kcm/kcmsock.c
+@@ -2054,14 +2054,14 @@ static int __init kcm_init(void)
+ 	if (err)
+ 		goto fail;
+ 
+-	err = sock_register(&kcm_family_ops);
+-	if (err)
+-		goto sock_register_fail;
+-
+ 	err = register_pernet_device(&kcm_net_ops);
+ 	if (err)
+ 		goto net_ops_fail;
+ 
++	err = sock_register(&kcm_family_ops);
++	if (err)
++		goto sock_register_fail;
++
+ 	err = kcm_proc_init();
+ 	if (err)
+ 		goto proc_init_fail;
+@@ -2069,12 +2069,12 @@ static int __init kcm_init(void)
+ 	return 0;
+ 
+ proc_init_fail:
+-	unregister_pernet_device(&kcm_net_ops);
+-
+-net_ops_fail:
+ 	sock_unregister(PF_KCM);
+ 
+ sock_register_fail:
++	unregister_pernet_device(&kcm_net_ops);
++
++net_ops_fail:
+ 	proto_unregister(&kcm_proto);
+ 
+ fail:
+@@ -2090,8 +2090,8 @@ fail:
+ static void __exit kcm_exit(void)
+ {
+ 	kcm_proc_exit();
+-	unregister_pernet_device(&kcm_net_ops);
+ 	sock_unregister(PF_KCM);
++	unregister_pernet_device(&kcm_net_ops);
+ 	proto_unregister(&kcm_proto);
+ 	destroy_workqueue(kcm_wq);
+ 
+diff --git a/net/l2tp/l2tp_ip6.c b/net/l2tp/l2tp_ip6.c
+index 0ae6899edac0..37a69df17cab 100644
+--- a/net/l2tp/l2tp_ip6.c
++++ b/net/l2tp/l2tp_ip6.c
+@@ -674,9 +674,6 @@ static int l2tp_ip6_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ 	if (flags & MSG_OOB)
+ 		goto out;
+ 
+-	if (addr_len)
+-		*addr_len = sizeof(*lsa);
+-
+ 	if (flags & MSG_ERRQUEUE)
+ 		return ipv6_recv_error(sk, msg, len, addr_len);
+ 
+@@ -706,6 +703,7 @@ static int l2tp_ip6_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ 		lsa->l2tp_conn_id = 0;
+ 		if (ipv6_addr_type(&lsa->l2tp_addr) & IPV6_ADDR_LINKLOCAL)
+ 			lsa->l2tp_scope_id = inet6_iif(skb);
++		*addr_len = sizeof(*lsa);
+ 	}
+ 
+ 	if (np->rxopt.all)
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index db4d46332e86..9dd4c2048a2b 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -901,10 +901,18 @@ __nf_conntrack_confirm(struct sk_buff *skb)
+ 	 * REJECT will give spurious warnings here.
+ 	 */
+ 
+-	/* No external references means no one else could have
+-	 * confirmed us.
++	/* Another skb with the same unconfirmed conntrack may
++	 * win the race. This may happen for bridge(br_flood)
++	 * or broadcast/multicast packets do skb_clone with
++	 * unconfirmed conntrack.
+ 	 */
+-	WARN_ON(nf_ct_is_confirmed(ct));
++	if (unlikely(nf_ct_is_confirmed(ct))) {
++		WARN_ON_ONCE(1);
++		nf_conntrack_double_unlock(hash, reply_hash);
++		local_bh_enable();
++		return NF_DROP;
++	}
++
+ 	pr_debug("Confirming conntrack %p\n", ct);
+ 	/* We have to check the DYING flag after unlink to prevent
+ 	 * a race against nf_ct_get_next_corpse() possibly called from
+diff --git a/net/netfilter/nf_conntrack_proto_tcp.c b/net/netfilter/nf_conntrack_proto_tcp.c
+index 4dcbd51a8e97..74fb3fa34db4 100644
+--- a/net/netfilter/nf_conntrack_proto_tcp.c
++++ b/net/netfilter/nf_conntrack_proto_tcp.c
+@@ -828,6 +828,12 @@ static noinline bool tcp_new(struct nf_conn *ct, const struct sk_buff *skb,
+ 	return true;
+ }
+ 
++static bool nf_conntrack_tcp_established(const struct nf_conn *ct)
++{
++	return ct->proto.tcp.state == TCP_CONNTRACK_ESTABLISHED &&
++	       test_bit(IPS_ASSURED_BIT, &ct->status);
++}
++
+ /* Returns verdict for packet, or -1 for invalid. */
+ static int tcp_packet(struct nf_conn *ct,
+ 		      struct sk_buff *skb,
+@@ -1030,16 +1036,38 @@ static int tcp_packet(struct nf_conn *ct,
+ 			new_state = TCP_CONNTRACK_ESTABLISHED;
+ 		break;
+ 	case TCP_CONNTRACK_CLOSE:
+-		if (index == TCP_RST_SET
+-		    && (ct->proto.tcp.seen[!dir].flags & IP_CT_TCP_FLAG_MAXACK_SET)
+-		    && before(ntohl(th->seq), ct->proto.tcp.seen[!dir].td_maxack)) {
+-			/* Invalid RST  */
+-			spin_unlock_bh(&ct->lock);
+-			nf_ct_l4proto_log_invalid(skb, ct, "invalid rst");
+-			return -NF_ACCEPT;
++		if (index != TCP_RST_SET)
++			break;
++
++		if (ct->proto.tcp.seen[!dir].flags & IP_CT_TCP_FLAG_MAXACK_SET) {
++			u32 seq = ntohl(th->seq);
++
++			if (before(seq, ct->proto.tcp.seen[!dir].td_maxack)) {
++				/* Invalid RST  */
++				spin_unlock_bh(&ct->lock);
++				nf_ct_l4proto_log_invalid(skb, ct, "invalid rst");
++				return -NF_ACCEPT;
++			}
++
++			if (!nf_conntrack_tcp_established(ct) ||
++			    seq == ct->proto.tcp.seen[!dir].td_maxack)
++				break;
++
++			/* Check if rst is part of train, such as
++			 *   foo:80 > bar:4379: P, 235946583:235946602(19) ack 42
++			 *   foo:80 > bar:4379: R, 235946602:235946602(0)  ack 42
++			 */
++			if (ct->proto.tcp.last_index == TCP_ACK_SET &&
++			    ct->proto.tcp.last_dir == dir &&
++			    seq == ct->proto.tcp.last_end)
++				break;
++
++			/* ... RST sequence number doesn't match exactly, keep
++			 * established state to allow a possible challenge ACK.
++			 */
++			new_state = old_state;
+ 		}
+-		if (index == TCP_RST_SET
+-		    && ((test_bit(IPS_SEEN_REPLY_BIT, &ct->status)
++		if (((test_bit(IPS_SEEN_REPLY_BIT, &ct->status)
+ 			 && ct->proto.tcp.last_index == TCP_SYN_SET)
+ 			|| (!test_bit(IPS_ASSURED_BIT, &ct->status)
+ 			    && ct->proto.tcp.last_index == TCP_ACK_SET))
+@@ -1055,7 +1083,7 @@ static int tcp_packet(struct nf_conn *ct,
+ 			 * segments we ignored. */
+ 			goto in_window;
+ 		}
+-		/* Just fall through */
++		break;
+ 	default:
+ 		/* Keep compilers happy. */
+ 		break;
+@@ -1090,6 +1118,8 @@ static int tcp_packet(struct nf_conn *ct,
+ 	if (ct->proto.tcp.retrans >= tn->tcp_max_retrans &&
+ 	    timeouts[new_state] > timeouts[TCP_CONNTRACK_RETRANS])
+ 		timeout = timeouts[TCP_CONNTRACK_RETRANS];
++	else if (unlikely(index == TCP_RST_SET))
++		timeout = timeouts[TCP_CONNTRACK_CLOSE];
+ 	else if ((ct->proto.tcp.seen[0].flags | ct->proto.tcp.seen[1].flags) &
+ 		 IP_CT_TCP_FLAG_DATA_UNACKNOWLEDGED &&
+ 		 timeouts[new_state] > timeouts[TCP_CONNTRACK_UNACK])
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 4893f248dfdc..acb124ce92ec 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -127,7 +127,7 @@ static void nft_set_trans_bind(const struct nft_ctx *ctx, struct nft_set *set)
+ 	list_for_each_entry_reverse(trans, &net->nft.commit_list, list) {
+ 		if (trans->msg_type == NFT_MSG_NEWSET &&
+ 		    nft_trans_set(trans) == set) {
+-			nft_trans_set_bound(trans) = true;
++			set->bound = true;
+ 			break;
+ 		}
+ 	}
+@@ -2119,9 +2119,11 @@ err1:
+ static void nf_tables_expr_destroy(const struct nft_ctx *ctx,
+ 				   struct nft_expr *expr)
+ {
++	const struct nft_expr_type *type = expr->ops->type;
++
+ 	if (expr->ops->destroy)
+ 		expr->ops->destroy(ctx, expr);
+-	module_put(expr->ops->type->owner);
++	module_put(type->owner);
+ }
+ 
+ struct nft_expr *nft_expr_init(const struct nft_ctx *ctx,
+@@ -2129,6 +2131,7 @@ struct nft_expr *nft_expr_init(const struct nft_ctx *ctx,
+ {
+ 	struct nft_expr_info info;
+ 	struct nft_expr *expr;
++	struct module *owner;
+ 	int err;
+ 
+ 	err = nf_tables_expr_parse(ctx, nla, &info);
+@@ -2148,7 +2151,11 @@ struct nft_expr *nft_expr_init(const struct nft_ctx *ctx,
+ err3:
+ 	kfree(expr);
+ err2:
+-	module_put(info.ops->type->owner);
++	owner = info.ops->type->owner;
++	if (info.ops->type->release_ops)
++		info.ops->type->release_ops(info.ops);
++
++	module_put(owner);
+ err1:
+ 	return ERR_PTR(err);
+ }
+@@ -2746,8 +2753,11 @@ err2:
+ 	nf_tables_rule_release(&ctx, rule);
+ err1:
+ 	for (i = 0; i < n; i++) {
+-		if (info[i].ops != NULL)
++		if (info[i].ops) {
+ 			module_put(info[i].ops->type->owner);
++			if (info[i].ops->type->release_ops)
++				info[i].ops->type->release_ops(info[i].ops);
++		}
+ 	}
+ 	kvfree(info);
+ 	return err;
+@@ -6617,8 +6627,7 @@ static void nf_tables_abort_release(struct nft_trans *trans)
+ 		nf_tables_rule_destroy(&trans->ctx, nft_trans_rule(trans));
+ 		break;
+ 	case NFT_MSG_NEWSET:
+-		if (!nft_trans_set_bound(trans))
+-			nft_set_destroy(nft_trans_set(trans));
++		nft_set_destroy(nft_trans_set(trans));
+ 		break;
+ 	case NFT_MSG_NEWSETELEM:
+ 		nft_set_elem_destroy(nft_trans_elem_set(trans),
+@@ -6691,8 +6700,11 @@ static int __nf_tables_abort(struct net *net)
+ 			break;
+ 		case NFT_MSG_NEWSET:
+ 			trans->ctx.table->use--;
+-			if (!nft_trans_set_bound(trans))
+-				list_del_rcu(&nft_trans_set(trans)->list);
++			if (nft_trans_set(trans)->bound) {
++				nft_trans_destroy(trans);
++				break;
++			}
++			list_del_rcu(&nft_trans_set(trans)->list);
+ 			break;
+ 		case NFT_MSG_DELSET:
+ 			trans->ctx.table->use++;
+@@ -6700,8 +6712,11 @@ static int __nf_tables_abort(struct net *net)
+ 			nft_trans_destroy(trans);
+ 			break;
+ 		case NFT_MSG_NEWSETELEM:
++			if (nft_trans_elem_set(trans)->bound) {
++				nft_trans_destroy(trans);
++				break;
++			}
+ 			te = (struct nft_trans_elem *)trans->data;
+-
+ 			te->set->ops->remove(net, te->set, &te->elem);
+ 			atomic_dec(&te->set->nelems);
+ 			break;
+diff --git a/net/netfilter/nf_tables_core.c b/net/netfilter/nf_tables_core.c
+index a50500232b0a..7e8dae82ca52 100644
+--- a/net/netfilter/nf_tables_core.c
++++ b/net/netfilter/nf_tables_core.c
+@@ -98,21 +98,23 @@ static noinline void nft_update_chain_stats(const struct nft_chain *chain,
+ 					    const struct nft_pktinfo *pkt)
+ {
+ 	struct nft_base_chain *base_chain;
++	struct nft_stats __percpu *pstats;
+ 	struct nft_stats *stats;
+ 
+ 	base_chain = nft_base_chain(chain);
+-	if (!rcu_access_pointer(base_chain->stats))
+-		return;
+ 
+-	local_bh_disable();
+-	stats = this_cpu_ptr(rcu_dereference(base_chain->stats));
+-	if (stats) {
++	rcu_read_lock();
++	pstats = READ_ONCE(base_chain->stats);
++	if (pstats) {
++		local_bh_disable();
++		stats = this_cpu_ptr(pstats);
+ 		u64_stats_update_begin(&stats->syncp);
+ 		stats->pkts++;
+ 		stats->bytes += pkt->skb->len;
+ 		u64_stats_update_end(&stats->syncp);
++		local_bh_enable();
+ 	}
+-	local_bh_enable();
++	rcu_read_unlock();
+ }
+ 
+ struct nft_jumpstack {
+diff --git a/net/netfilter/nft_compat.c b/net/netfilter/nft_compat.c
+index 0a4bad55a8aa..469f9da5073b 100644
+--- a/net/netfilter/nft_compat.c
++++ b/net/netfilter/nft_compat.c
+@@ -22,23 +22,6 @@
+ #include <linux/netfilter_bridge/ebtables.h>
+ #include <linux/netfilter_arp/arp_tables.h>
+ #include <net/netfilter/nf_tables.h>
+-#include <net/netns/generic.h>
+-
+-struct nft_xt {
+-	struct list_head	head;
+-	struct nft_expr_ops	ops;
+-	refcount_t		refcnt;
+-
+-	/* used only when transaction mutex is locked */
+-	unsigned int		listcnt;
+-
+-	/* Unlike other expressions, ops doesn't have static storage duration.
+-	 * nft core assumes they do.  We use kfree_rcu so that nft core can
+-	 * can check expr->ops->size even after nft_compat->destroy() frees
+-	 * the nft_xt struct that holds the ops structure.
+-	 */
+-	struct rcu_head		rcu_head;
+-};
+ 
+ /* Used for matches where *info is larger than X byte */
+ #define NFT_MATCH_LARGE_THRESH	192
+@@ -47,46 +30,6 @@ struct nft_xt_match_priv {
+ 	void *info;
+ };
+ 
+-struct nft_compat_net {
+-	struct list_head nft_target_list;
+-	struct list_head nft_match_list;
+-};
+-
+-static unsigned int nft_compat_net_id __read_mostly;
+-static struct nft_expr_type nft_match_type;
+-static struct nft_expr_type nft_target_type;
+-
+-static struct nft_compat_net *nft_compat_pernet(struct net *net)
+-{
+-	return net_generic(net, nft_compat_net_id);
+-}
+-
+-static void nft_xt_get(struct nft_xt *xt)
+-{
+-	/* refcount_inc() warns on 0 -> 1 transition, but we can't
+-	 * init the reference count to 1 in .select_ops -- we can't
+-	 * undo such an increase when another expression inside the same
+-	 * rule fails afterwards.
+-	 */
+-	if (xt->listcnt == 0)
+-		refcount_set(&xt->refcnt, 1);
+-	else
+-		refcount_inc(&xt->refcnt);
+-
+-	xt->listcnt++;
+-}
+-
+-static bool nft_xt_put(struct nft_xt *xt)
+-{
+-	if (refcount_dec_and_test(&xt->refcnt)) {
+-		WARN_ON_ONCE(!list_empty(&xt->head));
+-		kfree_rcu(xt, rcu_head);
+-		return true;
+-	}
+-
+-	return false;
+-}
+-
+ static int nft_compat_chain_validate_dependency(const struct nft_ctx *ctx,
+ 						const char *tablename)
+ {
+@@ -281,7 +224,6 @@ nft_target_init(const struct nft_ctx *ctx, const struct nft_expr *expr,
+ 	struct xt_target *target = expr->ops->data;
+ 	struct xt_tgchk_param par;
+ 	size_t size = XT_ALIGN(nla_len(tb[NFTA_TARGET_INFO]));
+-	struct nft_xt *nft_xt;
+ 	u16 proto = 0;
+ 	bool inv = false;
+ 	union nft_entry e = {};
+@@ -305,8 +247,6 @@ nft_target_init(const struct nft_ctx *ctx, const struct nft_expr *expr,
+ 	if (!target->target)
+ 		return -EINVAL;
+ 
+-	nft_xt = container_of(expr->ops, struct nft_xt, ops);
+-	nft_xt_get(nft_xt);
+ 	return 0;
+ }
+ 
+@@ -325,8 +265,8 @@ nft_target_destroy(const struct nft_ctx *ctx, const struct nft_expr *expr)
+ 	if (par.target->destroy != NULL)
+ 		par.target->destroy(&par);
+ 
+-	if (nft_xt_put(container_of(expr->ops, struct nft_xt, ops)))
+-		module_put(me);
++	module_put(me);
++	kfree(expr->ops);
+ }
+ 
+ static int nft_extension_dump_info(struct sk_buff *skb, int attr,
+@@ -499,7 +439,6 @@ __nft_match_init(const struct nft_ctx *ctx, const struct nft_expr *expr,
+ 	struct xt_match *match = expr->ops->data;
+ 	struct xt_mtchk_param par;
+ 	size_t size = XT_ALIGN(nla_len(tb[NFTA_MATCH_INFO]));
+-	struct nft_xt *nft_xt;
+ 	u16 proto = 0;
+ 	bool inv = false;
+ 	union nft_entry e = {};
+@@ -515,13 +454,7 @@ __nft_match_init(const struct nft_ctx *ctx, const struct nft_expr *expr,
+ 
+ 	nft_match_set_mtchk_param(&par, ctx, match, info, &e, proto, inv);
+ 
+-	ret = xt_check_match(&par, size, proto, inv);
+-	if (ret < 0)
+-		return ret;
+-
+-	nft_xt = container_of(expr->ops, struct nft_xt, ops);
+-	nft_xt_get(nft_xt);
+-	return 0;
++	return xt_check_match(&par, size, proto, inv);
+ }
+ 
+ static int
+@@ -564,8 +497,8 @@ __nft_match_destroy(const struct nft_ctx *ctx, const struct nft_expr *expr,
+ 	if (par.match->destroy != NULL)
+ 		par.match->destroy(&par);
+ 
+-	if (nft_xt_put(container_of(expr->ops, struct nft_xt, ops)))
+-		module_put(me);
++	module_put(me);
++	kfree(expr->ops);
+ }
+ 
+ static void
+@@ -574,18 +507,6 @@ nft_match_destroy(const struct nft_ctx *ctx, const struct nft_expr *expr)
+ 	__nft_match_destroy(ctx, expr, nft_expr_priv(expr));
+ }
+ 
+-static void nft_compat_deactivate(const struct nft_ctx *ctx,
+-				  const struct nft_expr *expr,
+-				  enum nft_trans_phase phase)
+-{
+-	struct nft_xt *xt = container_of(expr->ops, struct nft_xt, ops);
+-
+-	if (phase == NFT_TRANS_ABORT || phase == NFT_TRANS_COMMIT) {
+-		if (--xt->listcnt == 0)
+-			list_del_init(&xt->head);
+-	}
+-}
+-
+ static void
+ nft_match_large_destroy(const struct nft_ctx *ctx, const struct nft_expr *expr)
+ {
+@@ -780,19 +701,13 @@ static const struct nfnetlink_subsystem nfnl_compat_subsys = {
+ 	.cb		= nfnl_nft_compat_cb,
+ };
+ 
+-static bool nft_match_cmp(const struct xt_match *match,
+-			  const char *name, u32 rev, u32 family)
+-{
+-	return strcmp(match->name, name) == 0 && match->revision == rev &&
+-	       (match->family == NFPROTO_UNSPEC || match->family == family);
+-}
++static struct nft_expr_type nft_match_type;
+ 
+ static const struct nft_expr_ops *
+ nft_match_select_ops(const struct nft_ctx *ctx,
+ 		     const struct nlattr * const tb[])
+ {
+-	struct nft_compat_net *cn;
+-	struct nft_xt *nft_match;
++	struct nft_expr_ops *ops;
+ 	struct xt_match *match;
+ 	unsigned int matchsize;
+ 	char *mt_name;
+@@ -808,16 +723,6 @@ nft_match_select_ops(const struct nft_ctx *ctx,
+ 	rev = ntohl(nla_get_be32(tb[NFTA_MATCH_REV]));
+ 	family = ctx->family;
+ 
+-	cn = nft_compat_pernet(ctx->net);
+-
+-	/* Re-use the existing match if it's already loaded. */
+-	list_for_each_entry(nft_match, &cn->nft_match_list, head) {
+-		struct xt_match *match = nft_match->ops.data;
+-
+-		if (nft_match_cmp(match, mt_name, rev, family))
+-			return &nft_match->ops;
+-	}
+-
+ 	match = xt_request_find_match(family, mt_name, rev);
+ 	if (IS_ERR(match))
+ 		return ERR_PTR(-ENOENT);
+@@ -827,65 +732,62 @@ nft_match_select_ops(const struct nft_ctx *ctx,
+ 		goto err;
+ 	}
+ 
+-	/* This is the first time we use this match, allocate operations */
+-	nft_match = kzalloc(sizeof(struct nft_xt), GFP_KERNEL);
+-	if (nft_match == NULL) {
++	ops = kzalloc(sizeof(struct nft_expr_ops), GFP_KERNEL);
++	if (!ops) {
+ 		err = -ENOMEM;
+ 		goto err;
+ 	}
+ 
+-	refcount_set(&nft_match->refcnt, 0);
+-	nft_match->ops.type = &nft_match_type;
+-	nft_match->ops.eval = nft_match_eval;
+-	nft_match->ops.init = nft_match_init;
+-	nft_match->ops.destroy = nft_match_destroy;
+-	nft_match->ops.deactivate = nft_compat_deactivate;
+-	nft_match->ops.dump = nft_match_dump;
+-	nft_match->ops.validate = nft_match_validate;
+-	nft_match->ops.data = match;
++	ops->type = &nft_match_type;
++	ops->eval = nft_match_eval;
++	ops->init = nft_match_init;
++	ops->destroy = nft_match_destroy;
++	ops->dump = nft_match_dump;
++	ops->validate = nft_match_validate;
++	ops->data = match;
+ 
+ 	matchsize = NFT_EXPR_SIZE(XT_ALIGN(match->matchsize));
+ 	if (matchsize > NFT_MATCH_LARGE_THRESH) {
+ 		matchsize = NFT_EXPR_SIZE(sizeof(struct nft_xt_match_priv));
+ 
+-		nft_match->ops.eval = nft_match_large_eval;
+-		nft_match->ops.init = nft_match_large_init;
+-		nft_match->ops.destroy = nft_match_large_destroy;
+-		nft_match->ops.dump = nft_match_large_dump;
++		ops->eval = nft_match_large_eval;
++		ops->init = nft_match_large_init;
++		ops->destroy = nft_match_large_destroy;
++		ops->dump = nft_match_large_dump;
+ 	}
+ 
+-	nft_match->ops.size = matchsize;
++	ops->size = matchsize;
+ 
+-	nft_match->listcnt = 0;
+-	list_add(&nft_match->head, &cn->nft_match_list);
+-
+-	return &nft_match->ops;
++	return ops;
+ err:
+ 	module_put(match->me);
+ 	return ERR_PTR(err);
+ }
+ 
++static void nft_match_release_ops(const struct nft_expr_ops *ops)
++{
++	struct xt_match *match = ops->data;
++
++	module_put(match->me);
++	kfree(ops);
++}
++
+ static struct nft_expr_type nft_match_type __read_mostly = {
+ 	.name		= "match",
+ 	.select_ops	= nft_match_select_ops,
++	.release_ops	= nft_match_release_ops,
+ 	.policy		= nft_match_policy,
+ 	.maxattr	= NFTA_MATCH_MAX,
+ 	.owner		= THIS_MODULE,
+ };
+ 
+-static bool nft_target_cmp(const struct xt_target *tg,
+-			   const char *name, u32 rev, u32 family)
+-{
+-	return strcmp(tg->name, name) == 0 && tg->revision == rev &&
+-	       (tg->family == NFPROTO_UNSPEC || tg->family == family);
+-}
++static struct nft_expr_type nft_target_type;
+ 
+ static const struct nft_expr_ops *
+ nft_target_select_ops(const struct nft_ctx *ctx,
+ 		      const struct nlattr * const tb[])
+ {
+-	struct nft_compat_net *cn;
+-	struct nft_xt *nft_target;
++	struct nft_expr_ops *ops;
+ 	struct xt_target *target;
+ 	char *tg_name;
+ 	u32 rev, family;
+@@ -905,18 +807,6 @@ nft_target_select_ops(const struct nft_ctx *ctx,
+ 	    strcmp(tg_name, "standard") == 0)
+ 		return ERR_PTR(-EINVAL);
+ 
+-	cn = nft_compat_pernet(ctx->net);
+-	/* Re-use the existing target if it's already loaded. */
+-	list_for_each_entry(nft_target, &cn->nft_target_list, head) {
+-		struct xt_target *target = nft_target->ops.data;
+-
+-		if (!target->target)
+-			continue;
+-
+-		if (nft_target_cmp(target, tg_name, rev, family))
+-			return &nft_target->ops;
+-	}
+-
+ 	target = xt_request_find_target(family, tg_name, rev);
+ 	if (IS_ERR(target))
+ 		return ERR_PTR(-ENOENT);
+@@ -931,113 +821,55 @@ nft_target_select_ops(const struct nft_ctx *ctx,
+ 		goto err;
+ 	}
+ 
+-	/* This is the first time we use this target, allocate operations */
+-	nft_target = kzalloc(sizeof(struct nft_xt), GFP_KERNEL);
+-	if (nft_target == NULL) {
++	ops = kzalloc(sizeof(struct nft_expr_ops), GFP_KERNEL);
++	if (!ops) {
+ 		err = -ENOMEM;
+ 		goto err;
+ 	}
+ 
+-	refcount_set(&nft_target->refcnt, 0);
+-	nft_target->ops.type = &nft_target_type;
+-	nft_target->ops.size = NFT_EXPR_SIZE(XT_ALIGN(target->targetsize));
+-	nft_target->ops.init = nft_target_init;
+-	nft_target->ops.destroy = nft_target_destroy;
+-	nft_target->ops.deactivate = nft_compat_deactivate;
+-	nft_target->ops.dump = nft_target_dump;
+-	nft_target->ops.validate = nft_target_validate;
+-	nft_target->ops.data = target;
++	ops->type = &nft_target_type;
++	ops->size = NFT_EXPR_SIZE(XT_ALIGN(target->targetsize));
++	ops->init = nft_target_init;
++	ops->destroy = nft_target_destroy;
++	ops->dump = nft_target_dump;
++	ops->validate = nft_target_validate;
++	ops->data = target;
+ 
+ 	if (family == NFPROTO_BRIDGE)
+-		nft_target->ops.eval = nft_target_eval_bridge;
++		ops->eval = nft_target_eval_bridge;
+ 	else
+-		nft_target->ops.eval = nft_target_eval_xt;
+-
+-	nft_target->listcnt = 0;
+-	list_add(&nft_target->head, &cn->nft_target_list);
++		ops->eval = nft_target_eval_xt;
+ 
+-	return &nft_target->ops;
++	return ops;
+ err:
+ 	module_put(target->me);
+ 	return ERR_PTR(err);
+ }
+ 
++static void nft_target_release_ops(const struct nft_expr_ops *ops)
++{
++	struct xt_target *target = ops->data;
++
++	module_put(target->me);
++	kfree(ops);
++}
++
+ static struct nft_expr_type nft_target_type __read_mostly = {
+ 	.name		= "target",
+ 	.select_ops	= nft_target_select_ops,
++	.release_ops	= nft_target_release_ops,
+ 	.policy		= nft_target_policy,
+ 	.maxattr	= NFTA_TARGET_MAX,
+ 	.owner		= THIS_MODULE,
+ };
+ 
+-static int __net_init nft_compat_init_net(struct net *net)
+-{
+-	struct nft_compat_net *cn = nft_compat_pernet(net);
+-
+-	INIT_LIST_HEAD(&cn->nft_target_list);
+-	INIT_LIST_HEAD(&cn->nft_match_list);
+-
+-	return 0;
+-}
+-
+-static void __net_exit nft_compat_exit_net(struct net *net)
+-{
+-	struct nft_compat_net *cn = nft_compat_pernet(net);
+-	struct nft_xt *xt, *next;
+-
+-	if (list_empty(&cn->nft_match_list) &&
+-	    list_empty(&cn->nft_target_list))
+-		return;
+-
+-	/* If there was an error that caused nft_xt expr to not be initialized
+-	 * fully and noone else requested the same expression later, the lists
+-	 * contain 0-refcount entries that still hold module reference.
+-	 *
+-	 * Clean them here.
+-	 */
+-	mutex_lock(&net->nft.commit_mutex);
+-	list_for_each_entry_safe(xt, next, &cn->nft_target_list, head) {
+-		struct xt_target *target = xt->ops.data;
+-
+-		list_del_init(&xt->head);
+-
+-		if (refcount_read(&xt->refcnt))
+-			continue;
+-		module_put(target->me);
+-		kfree(xt);
+-	}
+-
+-	list_for_each_entry_safe(xt, next, &cn->nft_match_list, head) {
+-		struct xt_match *match = xt->ops.data;
+-
+-		list_del_init(&xt->head);
+-
+-		if (refcount_read(&xt->refcnt))
+-			continue;
+-		module_put(match->me);
+-		kfree(xt);
+-	}
+-	mutex_unlock(&net->nft.commit_mutex);
+-}
+-
+-static struct pernet_operations nft_compat_net_ops = {
+-	.init	= nft_compat_init_net,
+-	.exit	= nft_compat_exit_net,
+-	.id	= &nft_compat_net_id,
+-	.size	= sizeof(struct nft_compat_net),
+-};
+-
+ static int __init nft_compat_module_init(void)
+ {
+ 	int ret;
+ 
+-	ret = register_pernet_subsys(&nft_compat_net_ops);
+-	if (ret < 0)
+-		goto err_target;
+-
+ 	ret = nft_register_expr(&nft_match_type);
+ 	if (ret < 0)
+-		goto err_pernet;
++		return ret;
+ 
+ 	ret = nft_register_expr(&nft_target_type);
+ 	if (ret < 0)
+@@ -1054,8 +886,6 @@ err_target:
+ 	nft_unregister_expr(&nft_target_type);
+ err_match:
+ 	nft_unregister_expr(&nft_match_type);
+-err_pernet:
+-	unregister_pernet_subsys(&nft_compat_net_ops);
+ 	return ret;
+ }
+ 
+@@ -1064,7 +894,6 @@ static void __exit nft_compat_module_exit(void)
+ 	nfnetlink_subsys_unregister(&nfnl_compat_subsys);
+ 	nft_unregister_expr(&nft_target_type);
+ 	nft_unregister_expr(&nft_match_type);
+-	unregister_pernet_subsys(&nft_compat_net_ops);
+ }
+ 
+ MODULE_ALIAS_NFNL_SUBSYS(NFNL_SUBSYS_NFT_COMPAT);
+diff --git a/net/netfilter/xt_physdev.c b/net/netfilter/xt_physdev.c
+index 4034d70bff39..b2e39cb6a590 100644
+--- a/net/netfilter/xt_physdev.c
++++ b/net/netfilter/xt_physdev.c
+@@ -96,8 +96,7 @@ match_outdev:
+ static int physdev_mt_check(const struct xt_mtchk_param *par)
+ {
+ 	const struct xt_physdev_info *info = par->matchinfo;
+-
+-	br_netfilter_enable();
++	static bool brnf_probed __read_mostly;
+ 
+ 	if (!(info->bitmask & XT_PHYSDEV_OP_MASK) ||
+ 	    info->bitmask & ~XT_PHYSDEV_OP_MASK)
+@@ -111,6 +110,12 @@ static int physdev_mt_check(const struct xt_mtchk_param *par)
+ 		if (par->hook_mask & (1 << NF_INET_LOCAL_OUT))
+ 			return -EINVAL;
+ 	}
++
++	if (!brnf_probed) {
++		brnf_probed = true;
++		request_module("br_netfilter");
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/net/netlink/genetlink.c b/net/netlink/genetlink.c
+index 25eeb6d2a75a..f0ec068e1d02 100644
+--- a/net/netlink/genetlink.c
++++ b/net/netlink/genetlink.c
+@@ -366,7 +366,7 @@ int genl_register_family(struct genl_family *family)
+ 			       start, end + 1, GFP_KERNEL);
+ 	if (family->id < 0) {
+ 		err = family->id;
+-		goto errout_locked;
++		goto errout_free;
+ 	}
+ 
+ 	err = genl_validate_assign_mc_groups(family);
+@@ -385,6 +385,7 @@ int genl_register_family(struct genl_family *family)
+ 
+ errout_remove:
+ 	idr_remove(&genl_fam_idr, family->id);
++errout_free:
+ 	kfree(family->attrbuf);
+ errout_locked:
+ 	genl_unlock_all();
+diff --git a/net/openvswitch/flow_netlink.c b/net/openvswitch/flow_netlink.c
+index 691da853bef5..4bdf5e3ac208 100644
+--- a/net/openvswitch/flow_netlink.c
++++ b/net/openvswitch/flow_netlink.c
+@@ -2306,14 +2306,14 @@ static struct nlattr *reserve_sfa_size(struct sw_flow_actions **sfa,
+ 
+ 	struct sw_flow_actions *acts;
+ 	int new_acts_size;
+-	int req_size = NLA_ALIGN(attr_len);
++	size_t req_size = NLA_ALIGN(attr_len);
+ 	int next_offset = offsetof(struct sw_flow_actions, actions) +
+ 					(*sfa)->actions_len;
+ 
+ 	if (req_size <= (ksize(*sfa) - next_offset))
+ 		goto out;
+ 
+-	new_acts_size = ksize(*sfa) * 2;
++	new_acts_size = max(next_offset + req_size, ksize(*sfa) * 2);
+ 
+ 	if (new_acts_size > MAX_ACTIONS_BUFSIZE) {
+ 		if ((MAX_ACTIONS_BUFSIZE - next_offset) < req_size) {
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 1cd1d83a4be0..8406bf11eef4 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -3245,7 +3245,7 @@ static int packet_create(struct net *net, struct socket *sock, int protocol,
+ 	}
+ 
+ 	mutex_lock(&net->packet.sklist_lock);
+-	sk_add_node_rcu(sk, &net->packet.sklist);
++	sk_add_node_tail_rcu(sk, &net->packet.sklist);
+ 	mutex_unlock(&net->packet.sklist_lock);
+ 
+ 	preempt_disable();
+@@ -4211,7 +4211,7 @@ static struct pgv *alloc_pg_vec(struct tpacket_req *req, int order)
+ 	struct pgv *pg_vec;
+ 	int i;
+ 
+-	pg_vec = kcalloc(block_nr, sizeof(struct pgv), GFP_KERNEL);
++	pg_vec = kcalloc(block_nr, sizeof(struct pgv), GFP_KERNEL | __GFP_NOWARN);
+ 	if (unlikely(!pg_vec))
+ 		goto out;
+ 
+diff --git a/net/rds/tcp.c b/net/rds/tcp.c
+index c16f0a362c32..a729c47db781 100644
+--- a/net/rds/tcp.c
++++ b/net/rds/tcp.c
+@@ -600,7 +600,7 @@ static void rds_tcp_kill_sock(struct net *net)
+ 	list_for_each_entry_safe(tc, _tc, &rds_tcp_conn_list, t_tcp_node) {
+ 		struct net *c_net = read_pnet(&tc->t_cpath->cp_conn->c_net);
+ 
+-		if (net != c_net || !tc->t_sock)
++		if (net != c_net)
+ 			continue;
+ 		if (!list_has_conn(&tmp_list, tc->t_cpath->cp_conn)) {
+ 			list_move_tail(&tc->t_tcp_node, &tmp_list);
+diff --git a/net/rose/rose_subr.c b/net/rose/rose_subr.c
+index 7ca57741b2fb..7849f286bb93 100644
+--- a/net/rose/rose_subr.c
++++ b/net/rose/rose_subr.c
+@@ -105,16 +105,17 @@ void rose_write_internal(struct sock *sk, int frametype)
+ 	struct sk_buff *skb;
+ 	unsigned char  *dptr;
+ 	unsigned char  lci1, lci2;
+-	char buffer[100];
+-	int len, faclen = 0;
++	int maxfaclen = 0;
++	int len, faclen;
++	int reserve;
+ 
+-	len = AX25_BPQ_HEADER_LEN + AX25_MAX_HEADER_LEN + ROSE_MIN_LEN + 1;
++	reserve = AX25_BPQ_HEADER_LEN + AX25_MAX_HEADER_LEN + 1;
++	len = ROSE_MIN_LEN;
+ 
+ 	switch (frametype) {
+ 	case ROSE_CALL_REQUEST:
+ 		len   += 1 + ROSE_ADDR_LEN + ROSE_ADDR_LEN;
+-		faclen = rose_create_facilities(buffer, rose);
+-		len   += faclen;
++		maxfaclen = 256;
+ 		break;
+ 	case ROSE_CALL_ACCEPTED:
+ 	case ROSE_CLEAR_REQUEST:
+@@ -123,15 +124,16 @@ void rose_write_internal(struct sock *sk, int frametype)
+ 		break;
+ 	}
+ 
+-	if ((skb = alloc_skb(len, GFP_ATOMIC)) == NULL)
++	skb = alloc_skb(reserve + len + maxfaclen, GFP_ATOMIC);
++	if (!skb)
+ 		return;
+ 
+ 	/*
+ 	 *	Space for AX.25 header and PID.
+ 	 */
+-	skb_reserve(skb, AX25_BPQ_HEADER_LEN + AX25_MAX_HEADER_LEN + 1);
++	skb_reserve(skb, reserve);
+ 
+-	dptr = skb_put(skb, skb_tailroom(skb));
++	dptr = skb_put(skb, len);
+ 
+ 	lci1 = (rose->lci >> 8) & 0x0F;
+ 	lci2 = (rose->lci >> 0) & 0xFF;
+@@ -146,7 +148,8 @@ void rose_write_internal(struct sock *sk, int frametype)
+ 		dptr   += ROSE_ADDR_LEN;
+ 		memcpy(dptr, &rose->source_addr, ROSE_ADDR_LEN);
+ 		dptr   += ROSE_ADDR_LEN;
+-		memcpy(dptr, buffer, faclen);
++		faclen = rose_create_facilities(dptr, rose);
++		skb_put(skb, faclen);
+ 		dptr   += faclen;
+ 		break;
+ 
+diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c
+index b2adfa825363..5cf6d9f4761d 100644
+--- a/net/rxrpc/conn_client.c
++++ b/net/rxrpc/conn_client.c
+@@ -353,7 +353,7 @@ static int rxrpc_get_client_conn(struct rxrpc_sock *rx,
+ 	 * normally have to take channel_lock but we do this before anyone else
+ 	 * can see the connection.
+ 	 */
+-	list_add_tail(&call->chan_wait_link, &candidate->waiting_calls);
++	list_add(&call->chan_wait_link, &candidate->waiting_calls);
+ 
+ 	if (cp->exclusive) {
+ 		call->conn = candidate;
+@@ -432,7 +432,7 @@ found_extant_conn:
+ 	call->conn = conn;
+ 	call->security_ix = conn->security_ix;
+ 	call->service_id = conn->service_id;
+-	list_add(&call->chan_wait_link, &conn->waiting_calls);
++	list_add_tail(&call->chan_wait_link, &conn->waiting_calls);
+ 	spin_unlock(&conn->channel_lock);
+ 	_leave(" = 0 [extant %d]", conn->debug_id);
+ 	return 0;
+diff --git a/net/sched/act_sample.c b/net/sched/act_sample.c
+index 1a0c682fd734..fd62fe6c8e73 100644
+--- a/net/sched/act_sample.c
++++ b/net/sched/act_sample.c
+@@ -43,8 +43,8 @@ static int tcf_sample_init(struct net *net, struct nlattr *nla,
+ 	struct tc_action_net *tn = net_generic(net, sample_net_id);
+ 	struct nlattr *tb[TCA_SAMPLE_MAX + 1];
+ 	struct psample_group *psample_group;
++	u32 psample_group_num, rate;
+ 	struct tc_sample *parm;
+-	u32 psample_group_num;
+ 	struct tcf_sample *s;
+ 	bool exists = false;
+ 	int ret, err;
+@@ -80,6 +80,12 @@ static int tcf_sample_init(struct net *net, struct nlattr *nla,
+ 		return -EEXIST;
+ 	}
+ 
++	rate = nla_get_u32(tb[TCA_SAMPLE_RATE]);
++	if (!rate) {
++		NL_SET_ERR_MSG(extack, "invalid sample rate");
++		tcf_idr_release(*a, bind);
++		return -EINVAL;
++	}
+ 	psample_group_num = nla_get_u32(tb[TCA_SAMPLE_PSAMPLE_GROUP]);
+ 	psample_group = psample_group_get(net, psample_group_num);
+ 	if (!psample_group) {
+@@ -91,7 +97,7 @@ static int tcf_sample_init(struct net *net, struct nlattr *nla,
+ 
+ 	spin_lock_bh(&s->tcf_lock);
+ 	s->tcf_action = parm->action;
+-	s->rate = nla_get_u32(tb[TCA_SAMPLE_RATE]);
++	s->rate = rate;
+ 	s->psample_group_num = psample_group_num;
+ 	RCU_INIT_POINTER(s->psample_group, psample_group);
+ 
+diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
+index 12ca9d13db83..bf67ae5ac1c3 100644
+--- a/net/sched/cls_flower.c
++++ b/net/sched/cls_flower.c
+@@ -1327,46 +1327,46 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
+ 	if (err < 0)
+ 		goto errout;
+ 
+-	if (!handle) {
+-		handle = 1;
+-		err = idr_alloc_u32(&head->handle_idr, fnew, &handle,
+-				    INT_MAX, GFP_KERNEL);
+-	} else if (!fold) {
+-		/* user specifies a handle and it doesn't exist */
+-		err = idr_alloc_u32(&head->handle_idr, fnew, &handle,
+-				    handle, GFP_KERNEL);
+-	}
+-	if (err)
+-		goto errout;
+-	fnew->handle = handle;
+-
+ 	if (tb[TCA_FLOWER_FLAGS]) {
+ 		fnew->flags = nla_get_u32(tb[TCA_FLOWER_FLAGS]);
+ 
+ 		if (!tc_flags_valid(fnew->flags)) {
+ 			err = -EINVAL;
+-			goto errout_idr;
++			goto errout;
+ 		}
+ 	}
+ 
+ 	err = fl_set_parms(net, tp, fnew, mask, base, tb, tca[TCA_RATE], ovr,
+ 			   tp->chain->tmplt_priv, extack);
+ 	if (err)
+-		goto errout_idr;
++		goto errout;
+ 
+ 	err = fl_check_assign_mask(head, fnew, fold, mask);
+ 	if (err)
+-		goto errout_idr;
++		goto errout;
++
++	if (!handle) {
++		handle = 1;
++		err = idr_alloc_u32(&head->handle_idr, fnew, &handle,
++				    INT_MAX, GFP_KERNEL);
++	} else if (!fold) {
++		/* user specifies a handle and it doesn't exist */
++		err = idr_alloc_u32(&head->handle_idr, fnew, &handle,
++				    handle, GFP_KERNEL);
++	}
++	if (err)
++		goto errout_mask;
++	fnew->handle = handle;
+ 
+ 	if (!fold && __fl_lookup(fnew->mask, &fnew->mkey)) {
+ 		err = -EEXIST;
+-		goto errout_mask;
++		goto errout_idr;
+ 	}
+ 
+ 	err = rhashtable_insert_fast(&fnew->mask->ht, &fnew->ht_node,
+ 				     fnew->mask->filter_ht_params);
+ 	if (err)
+-		goto errout_mask;
++		goto errout_idr;
+ 
+ 	if (!tc_skip_hw(fnew->flags)) {
+ 		err = fl_hw_replace_filter(tp, fnew, extack);
+@@ -1405,12 +1405,13 @@ errout_mask_ht:
+ 	rhashtable_remove_fast(&fnew->mask->ht, &fnew->ht_node,
+ 			       fnew->mask->filter_ht_params);
+ 
+-errout_mask:
+-	fl_mask_put(head, fnew->mask, false);
+-
+ errout_idr:
+ 	if (!fold)
+ 		idr_remove(&head->handle_idr, fnew->handle);
++
++errout_mask:
++	fl_mask_put(head, fnew->mask, false);
++
+ errout:
+ 	tcf_exts_destroy(&fnew->exts);
+ 	kfree(fnew);
+diff --git a/net/sched/cls_matchall.c b/net/sched/cls_matchall.c
+index 0e408ee9dcec..5ba07cd11e31 100644
+--- a/net/sched/cls_matchall.c
++++ b/net/sched/cls_matchall.c
+@@ -125,6 +125,11 @@ static void mall_destroy(struct tcf_proto *tp, struct netlink_ext_ack *extack)
+ 
+ static void *mall_get(struct tcf_proto *tp, u32 handle)
+ {
++	struct cls_mall_head *head = rtnl_dereference(tp->root);
++
++	if (head && head->handle == handle)
++		return head;
++
+ 	return NULL;
+ }
+ 
+diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
+index 968a85fe4d4a..de31f2f3b973 100644
+--- a/net/sched/sch_generic.c
++++ b/net/sched/sch_generic.c
+@@ -68,7 +68,7 @@ static inline struct sk_buff *__skb_dequeue_bad_txq(struct Qdisc *q)
+ 			skb = __skb_dequeue(&q->skb_bad_txq);
+ 			if (qdisc_is_percpu_stats(q)) {
+ 				qdisc_qstats_cpu_backlog_dec(q, skb);
+-				qdisc_qstats_cpu_qlen_dec(q);
++				qdisc_qstats_atomic_qlen_dec(q);
+ 			} else {
+ 				qdisc_qstats_backlog_dec(q, skb);
+ 				q->q.qlen--;
+@@ -108,7 +108,7 @@ static inline void qdisc_enqueue_skb_bad_txq(struct Qdisc *q,
+ 
+ 	if (qdisc_is_percpu_stats(q)) {
+ 		qdisc_qstats_cpu_backlog_inc(q, skb);
+-		qdisc_qstats_cpu_qlen_inc(q);
++		qdisc_qstats_atomic_qlen_inc(q);
+ 	} else {
+ 		qdisc_qstats_backlog_inc(q, skb);
+ 		q->q.qlen++;
+@@ -147,7 +147,7 @@ static inline int dev_requeue_skb_locked(struct sk_buff *skb, struct Qdisc *q)
+ 
+ 		qdisc_qstats_cpu_requeues_inc(q);
+ 		qdisc_qstats_cpu_backlog_inc(q, skb);
+-		qdisc_qstats_cpu_qlen_inc(q);
++		qdisc_qstats_atomic_qlen_inc(q);
+ 
+ 		skb = next;
+ 	}
+@@ -252,7 +252,7 @@ static struct sk_buff *dequeue_skb(struct Qdisc *q, bool *validate,
+ 			skb = __skb_dequeue(&q->gso_skb);
+ 			if (qdisc_is_percpu_stats(q)) {
+ 				qdisc_qstats_cpu_backlog_dec(q, skb);
+-				qdisc_qstats_cpu_qlen_dec(q);
++				qdisc_qstats_atomic_qlen_dec(q);
+ 			} else {
+ 				qdisc_qstats_backlog_dec(q, skb);
+ 				q->q.qlen--;
+@@ -645,7 +645,7 @@ static int pfifo_fast_enqueue(struct sk_buff *skb, struct Qdisc *qdisc,
+ 	if (unlikely(err))
+ 		return qdisc_drop_cpu(skb, qdisc, to_free);
+ 
+-	qdisc_qstats_cpu_qlen_inc(qdisc);
++	qdisc_qstats_atomic_qlen_inc(qdisc);
+ 	/* Note: skb can not be used after skb_array_produce(),
+ 	 * so we better not use qdisc_qstats_cpu_backlog_inc()
+ 	 */
+@@ -670,7 +670,7 @@ static struct sk_buff *pfifo_fast_dequeue(struct Qdisc *qdisc)
+ 	if (likely(skb)) {
+ 		qdisc_qstats_cpu_backlog_dec(qdisc, skb);
+ 		qdisc_bstats_cpu_update(qdisc, skb);
+-		qdisc_qstats_cpu_qlen_dec(qdisc);
++		qdisc_qstats_atomic_qlen_dec(qdisc);
+ 	}
+ 
+ 	return skb;
+@@ -714,7 +714,6 @@ static void pfifo_fast_reset(struct Qdisc *qdisc)
+ 		struct gnet_stats_queue *q = per_cpu_ptr(qdisc->cpu_qstats, i);
+ 
+ 		q->backlog = 0;
+-		q->qlen = 0;
+ 	}
+ }
+ 
+diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c
+index 6abc8b274270..951afdeea5e9 100644
+--- a/net/sctp/protocol.c
++++ b/net/sctp/protocol.c
+@@ -600,6 +600,7 @@ out:
+ static int sctp_v4_addr_to_user(struct sctp_sock *sp, union sctp_addr *addr)
+ {
+ 	/* No address mapping for V4 sockets */
++	memset(addr->v4.sin_zero, 0, sizeof(addr->v4.sin_zero));
+ 	return sizeof(struct sockaddr_in);
+ }
+ 
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 65d6d04546ae..5f68420b4b0d 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -999,7 +999,7 @@ static int sctp_setsockopt_bindx(struct sock *sk,
+ 	if (unlikely(addrs_size <= 0))
+ 		return -EINVAL;
+ 
+-	kaddrs = vmemdup_user(addrs, addrs_size);
++	kaddrs = memdup_user(addrs, addrs_size);
+ 	if (unlikely(IS_ERR(kaddrs)))
+ 		return PTR_ERR(kaddrs);
+ 
+@@ -1007,7 +1007,7 @@ static int sctp_setsockopt_bindx(struct sock *sk,
+ 	addr_buf = kaddrs;
+ 	while (walk_size < addrs_size) {
+ 		if (walk_size + sizeof(sa_family_t) > addrs_size) {
+-			kvfree(kaddrs);
++			kfree(kaddrs);
+ 			return -EINVAL;
+ 		}
+ 
+@@ -1018,7 +1018,7 @@ static int sctp_setsockopt_bindx(struct sock *sk,
+ 		 * causes the address buffer to overflow return EINVAL.
+ 		 */
+ 		if (!af || (walk_size + af->sockaddr_len) > addrs_size) {
+-			kvfree(kaddrs);
++			kfree(kaddrs);
+ 			return -EINVAL;
+ 		}
+ 		addrcnt++;
+@@ -1054,7 +1054,7 @@ static int sctp_setsockopt_bindx(struct sock *sk,
+ 	}
+ 
+ out:
+-	kvfree(kaddrs);
++	kfree(kaddrs);
+ 
+ 	return err;
+ }
+@@ -1329,7 +1329,7 @@ static int __sctp_setsockopt_connectx(struct sock *sk,
+ 	if (unlikely(addrs_size <= 0))
+ 		return -EINVAL;
+ 
+-	kaddrs = vmemdup_user(addrs, addrs_size);
++	kaddrs = memdup_user(addrs, addrs_size);
+ 	if (unlikely(IS_ERR(kaddrs)))
+ 		return PTR_ERR(kaddrs);
+ 
+@@ -1349,7 +1349,7 @@ static int __sctp_setsockopt_connectx(struct sock *sk,
+ 	err = __sctp_connect(sk, kaddrs, addrs_size, flags, assoc_id);
+ 
+ out_free:
+-	kvfree(kaddrs);
++	kfree(kaddrs);
+ 
+ 	return err;
+ }
+@@ -1866,6 +1866,7 @@ static int sctp_sendmsg_check_sflags(struct sctp_association *asoc,
+ 
+ 		pr_debug("%s: aborting association:%p\n", __func__, asoc);
+ 		sctp_primitive_ABORT(net, asoc, chunk);
++		iov_iter_revert(&msg->msg_iter, msg_len);
+ 
+ 		return 0;
+ 	}
+diff --git a/net/sctp/stream.c b/net/sctp/stream.c
+index 2936ed17bf9e..3b47457862cc 100644
+--- a/net/sctp/stream.c
++++ b/net/sctp/stream.c
+@@ -230,8 +230,6 @@ int sctp_stream_init(struct sctp_stream *stream, __u16 outcnt, __u16 incnt,
+ 	for (i = 0; i < stream->outcnt; i++)
+ 		SCTP_SO(stream, i)->state = SCTP_STREAM_OPEN;
+ 
+-	sched->init(stream);
+-
+ in:
+ 	sctp_stream_interleave_init(stream);
+ 	if (!incnt)
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index d7ec6132c046..d455537c8fc6 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -66,9 +66,6 @@ static void	call_decode(struct rpc_task *task);
+ static void	call_bind(struct rpc_task *task);
+ static void	call_bind_status(struct rpc_task *task);
+ static void	call_transmit(struct rpc_task *task);
+-#if defined(CONFIG_SUNRPC_BACKCHANNEL)
+-static void	call_bc_transmit(struct rpc_task *task);
+-#endif /* CONFIG_SUNRPC_BACKCHANNEL */
+ static void	call_status(struct rpc_task *task);
+ static void	call_transmit_status(struct rpc_task *task);
+ static void	call_refresh(struct rpc_task *task);
+@@ -80,6 +77,7 @@ static void	call_connect_status(struct rpc_task *task);
+ static __be32	*rpc_encode_header(struct rpc_task *task);
+ static __be32	*rpc_verify_header(struct rpc_task *task);
+ static int	rpc_ping(struct rpc_clnt *clnt);
++static void	rpc_check_timeout(struct rpc_task *task);
+ 
+ static void rpc_register_client(struct rpc_clnt *clnt)
+ {
+@@ -1131,6 +1129,8 @@ rpc_call_async(struct rpc_clnt *clnt, const struct rpc_message *msg, int flags,
+ EXPORT_SYMBOL_GPL(rpc_call_async);
+ 
+ #if defined(CONFIG_SUNRPC_BACKCHANNEL)
++static void call_bc_encode(struct rpc_task *task);
++
+ /**
+  * rpc_run_bc_task - Allocate a new RPC task for backchannel use, then run
+  * rpc_execute against it
+@@ -1152,7 +1152,7 @@ struct rpc_task *rpc_run_bc_task(struct rpc_rqst *req)
+ 	task = rpc_new_task(&task_setup_data);
+ 	xprt_init_bc_request(req, task);
+ 
+-	task->tk_action = call_bc_transmit;
++	task->tk_action = call_bc_encode;
+ 	atomic_inc(&task->tk_count);
+ 	WARN_ON_ONCE(atomic_read(&task->tk_count) != 2);
+ 	rpc_execute(task);
+@@ -1786,7 +1786,12 @@ call_encode(struct rpc_task *task)
+ 		xprt_request_enqueue_receive(task);
+ 	xprt_request_enqueue_transmit(task);
+ out:
+-	task->tk_action = call_bind;
++	task->tk_action = call_transmit;
++	/* Check that the connection is OK */
++	if (!xprt_bound(task->tk_xprt))
++		task->tk_action = call_bind;
++	else if (!xprt_connected(task->tk_xprt))
++		task->tk_action = call_connect;
+ }
+ 
+ /*
+@@ -1937,8 +1942,7 @@ call_connect_status(struct rpc_task *task)
+ 			break;
+ 		if (clnt->cl_autobind) {
+ 			rpc_force_rebind(clnt);
+-			task->tk_action = call_bind;
+-			return;
++			goto out_retry;
+ 		}
+ 		/* fall through */
+ 	case -ECONNRESET:
+@@ -1958,16 +1962,19 @@ call_connect_status(struct rpc_task *task)
+ 		/* fall through */
+ 	case -ENOTCONN:
+ 	case -EAGAIN:
+-		/* Check for timeouts before looping back to call_bind */
+ 	case -ETIMEDOUT:
+-		task->tk_action = call_timeout;
+-		return;
++		goto out_retry;
+ 	case 0:
+ 		clnt->cl_stats->netreconn++;
+ 		task->tk_action = call_transmit;
+ 		return;
+ 	}
+ 	rpc_exit(task, status);
++	return;
++out_retry:
++	/* Check for timeouts before looping back to call_bind */
++	task->tk_action = call_bind;
++	rpc_check_timeout(task);
+ }
+ 
+ /*
+@@ -1978,13 +1985,19 @@ call_transmit(struct rpc_task *task)
+ {
+ 	dprint_status(task);
+ 
+-	task->tk_status = 0;
++	task->tk_action = call_transmit_status;
+ 	if (test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate)) {
+ 		if (!xprt_prepare_transmit(task))
+ 			return;
+-		xprt_transmit(task);
++		task->tk_status = 0;
++		if (test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate)) {
++			if (!xprt_connected(task->tk_xprt)) {
++				task->tk_status = -ENOTCONN;
++				return;
++			}
++			xprt_transmit(task);
++		}
+ 	}
+-	task->tk_action = call_transmit_status;
+ 	xprt_end_transmit(task);
+ }
+ 
+@@ -2038,7 +2051,7 @@ call_transmit_status(struct rpc_task *task)
+ 				trace_xprt_ping(task->tk_xprt,
+ 						task->tk_status);
+ 			rpc_exit(task, task->tk_status);
+-			break;
++			return;
+ 		}
+ 		/* fall through */
+ 	case -ECONNRESET:
+@@ -2046,11 +2059,24 @@ call_transmit_status(struct rpc_task *task)
+ 	case -EADDRINUSE:
+ 	case -ENOTCONN:
+ 	case -EPIPE:
++		task->tk_action = call_bind;
++		task->tk_status = 0;
+ 		break;
+ 	}
++	rpc_check_timeout(task);
+ }
+ 
+ #if defined(CONFIG_SUNRPC_BACKCHANNEL)
++static void call_bc_transmit(struct rpc_task *task);
++static void call_bc_transmit_status(struct rpc_task *task);
++
++static void
++call_bc_encode(struct rpc_task *task)
++{
++	xprt_request_enqueue_transmit(task);
++	task->tk_action = call_bc_transmit;
++}
++
+ /*
+  * 5b.	Send the backchannel RPC reply.  On error, drop the reply.  In
+  * addition, disconnect on connectivity errors.
+@@ -2058,26 +2084,23 @@ call_transmit_status(struct rpc_task *task)
+ static void
+ call_bc_transmit(struct rpc_task *task)
+ {
+-	struct rpc_rqst *req = task->tk_rqstp;
+-
+-	if (rpc_task_need_encode(task))
+-		xprt_request_enqueue_transmit(task);
+-	if (!test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate))
+-		goto out_wakeup;
+-
+-	if (!xprt_prepare_transmit(task))
+-		goto out_retry;
+-
+-	if (task->tk_status < 0) {
+-		printk(KERN_NOTICE "RPC: Could not send backchannel reply "
+-			"error: %d\n", task->tk_status);
+-		goto out_done;
++	task->tk_action = call_bc_transmit_status;
++	if (test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate)) {
++		if (!xprt_prepare_transmit(task))
++			return;
++		task->tk_status = 0;
++		xprt_transmit(task);
+ 	}
++	xprt_end_transmit(task);
++}
+ 
+-	xprt_transmit(task);
++static void
++call_bc_transmit_status(struct rpc_task *task)
++{
++	struct rpc_rqst *req = task->tk_rqstp;
+ 
+-	xprt_end_transmit(task);
+ 	dprint_status(task);
++
+ 	switch (task->tk_status) {
+ 	case 0:
+ 		/* Success */
+@@ -2091,8 +2114,14 @@ call_bc_transmit(struct rpc_task *task)
+ 	case -ENOTCONN:
+ 	case -EPIPE:
+ 		break;
++	case -ENOBUFS:
++		rpc_delay(task, HZ>>2);
++		/* fall through */
++	case -EBADSLT:
+ 	case -EAGAIN:
+-		goto out_retry;
++		task->tk_status = 0;
++		task->tk_action = call_bc_transmit;
++		return;
+ 	case -ETIMEDOUT:
+ 		/*
+ 		 * Problem reaching the server.  Disconnect and let the
+@@ -2111,18 +2140,11 @@ call_bc_transmit(struct rpc_task *task)
+ 		 * We were unable to reply and will have to drop the
+ 		 * request.  The server should reconnect and retransmit.
+ 		 */
+-		WARN_ON_ONCE(task->tk_status == -EAGAIN);
+ 		printk(KERN_NOTICE "RPC: Could not send backchannel reply "
+ 			"error: %d\n", task->tk_status);
+ 		break;
+ 	}
+-out_wakeup:
+-	rpc_wake_up_queued_task(&req->rq_xprt->pending, task);
+-out_done:
+ 	task->tk_action = rpc_exit_task;
+-	return;
+-out_retry:
+-	task->tk_status = 0;
+ }
+ #endif /* CONFIG_SUNRPC_BACKCHANNEL */
+ 
+@@ -2178,7 +2200,7 @@ call_status(struct rpc_task *task)
+ 	case -EPIPE:
+ 	case -ENOTCONN:
+ 	case -EAGAIN:
+-		task->tk_action = call_encode;
++		task->tk_action = call_timeout;
+ 		break;
+ 	case -EIO:
+ 		/* shutdown or soft timeout */
+@@ -2192,20 +2214,13 @@ call_status(struct rpc_task *task)
+ 	}
+ }
+ 
+-/*
+- * 6a.	Handle RPC timeout
+- * 	We do not release the request slot, so we keep using the
+- *	same XID for all retransmits.
+- */
+ static void
+-call_timeout(struct rpc_task *task)
++rpc_check_timeout(struct rpc_task *task)
+ {
+ 	struct rpc_clnt	*clnt = task->tk_client;
+ 
+-	if (xprt_adjust_timeout(task->tk_rqstp) == 0) {
+-		dprintk("RPC: %5u call_timeout (minor)\n", task->tk_pid);
+-		goto retry;
+-	}
++	if (xprt_adjust_timeout(task->tk_rqstp) == 0)
++		return;
+ 
+ 	dprintk("RPC: %5u call_timeout (major)\n", task->tk_pid);
+ 	task->tk_timeouts++;
+@@ -2241,10 +2256,19 @@ call_timeout(struct rpc_task *task)
+ 	 * event? RFC2203 requires the server to drop all such requests.
+ 	 */
+ 	rpcauth_invalcred(task);
++}
+ 
+-retry:
++/*
++ * 6a.	Handle RPC timeout
++ * 	We do not release the request slot, so we keep using the
++ *	same XID for all retransmits.
++ */
++static void
++call_timeout(struct rpc_task *task)
++{
+ 	task->tk_action = call_encode;
+ 	task->tk_status = 0;
++	rpc_check_timeout(task);
+ }
+ 
+ /*
+diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
+index a6a060925e5d..43590a968b73 100644
+--- a/net/sunrpc/svcsock.c
++++ b/net/sunrpc/svcsock.c
+@@ -349,12 +349,16 @@ static ssize_t svc_recvfrom(struct svc_rqst *rqstp, struct kvec *iov,
+ /*
+  * Set socket snd and rcv buffer lengths
+  */
+-static void svc_sock_setbufsize(struct socket *sock, unsigned int snd,
+-				unsigned int rcv)
++static void svc_sock_setbufsize(struct svc_sock *svsk, unsigned int nreqs)
+ {
++	unsigned int max_mesg = svsk->sk_xprt.xpt_server->sv_max_mesg;
++	struct socket *sock = svsk->sk_sock;
++
++	nreqs = min(nreqs, INT_MAX / 2 / max_mesg);
++
+ 	lock_sock(sock->sk);
+-	sock->sk->sk_sndbuf = snd * 2;
+-	sock->sk->sk_rcvbuf = rcv * 2;
++	sock->sk->sk_sndbuf = nreqs * max_mesg * 2;
++	sock->sk->sk_rcvbuf = nreqs * max_mesg * 2;
+ 	sock->sk->sk_write_space(sock->sk);
+ 	release_sock(sock->sk);
+ }
+@@ -516,9 +520,7 @@ static int svc_udp_recvfrom(struct svc_rqst *rqstp)
+ 	     * provides an upper bound on the number of threads
+ 	     * which will access the socket.
+ 	     */
+-	    svc_sock_setbufsize(svsk->sk_sock,
+-				(serv->sv_nrthreads+3) * serv->sv_max_mesg,
+-				(serv->sv_nrthreads+3) * serv->sv_max_mesg);
++	    svc_sock_setbufsize(svsk, serv->sv_nrthreads + 3);
+ 
+ 	clear_bit(XPT_DATA, &svsk->sk_xprt.xpt_flags);
+ 	skb = NULL;
+@@ -681,9 +683,7 @@ static void svc_udp_init(struct svc_sock *svsk, struct svc_serv *serv)
+ 	 * receive and respond to one request.
+ 	 * svc_udp_recvfrom will re-adjust if necessary
+ 	 */
+-	svc_sock_setbufsize(svsk->sk_sock,
+-			    3 * svsk->sk_xprt.xpt_server->sv_max_mesg,
+-			    3 * svsk->sk_xprt.xpt_server->sv_max_mesg);
++	svc_sock_setbufsize(svsk, 3);
+ 
+ 	/* data might have come in before data_ready set up */
+ 	set_bit(XPT_DATA, &svsk->sk_xprt.xpt_flags);
+diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
+index 21113bfd4eca..a5ae9c036b9c 100644
+--- a/net/sunrpc/xprtrdma/verbs.c
++++ b/net/sunrpc/xprtrdma/verbs.c
+@@ -90,7 +90,7 @@ static void rpcrdma_xprt_drain(struct rpcrdma_xprt *r_xprt)
+ 	/* Flush Receives, then wait for deferred Reply work
+ 	 * to complete.
+ 	 */
+-	ib_drain_qp(ia->ri_id->qp);
++	ib_drain_rq(ia->ri_id->qp);
+ 	drain_workqueue(buf->rb_completion_wq);
+ 
+ 	/* Deferred Reply processing might have scheduled
+diff --git a/net/tipc/net.c b/net/tipc/net.c
+index f076edb74338..7ce1e86b024f 100644
+--- a/net/tipc/net.c
++++ b/net/tipc/net.c
+@@ -163,12 +163,9 @@ void tipc_sched_net_finalize(struct net *net, u32 addr)
+ 
+ void tipc_net_stop(struct net *net)
+ {
+-	u32 self = tipc_own_addr(net);
+-
+-	if (!self)
++	if (!tipc_own_id(net))
+ 		return;
+ 
+-	tipc_nametbl_withdraw(net, TIPC_CFG_SRV, self, self, self);
+ 	rtnl_lock();
+ 	tipc_bearer_stop(net);
+ 	tipc_node_stop(net);
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 70343ac448b1..4dca9161f99b 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -1333,7 +1333,7 @@ static int __tipc_sendmsg(struct socket *sock, struct msghdr *m, size_t dlen)
+ 
+ 	if (unlikely(!dest)) {
+ 		dest = &tsk->peer;
+-		if (!syn || dest->family != AF_TIPC)
++		if (!syn && dest->family != AF_TIPC)
+ 			return -EDESTADDRREQ;
+ 	}
+ 
+@@ -2349,6 +2349,16 @@ static int tipc_wait_for_connect(struct socket *sock, long *timeo_p)
+ 	return 0;
+ }
+ 
++static bool tipc_sockaddr_is_sane(struct sockaddr_tipc *addr)
++{
++	if (addr->family != AF_TIPC)
++		return false;
++	if (addr->addrtype == TIPC_SERVICE_RANGE)
++		return (addr->addr.nameseq.lower <= addr->addr.nameseq.upper);
++	return (addr->addrtype == TIPC_SERVICE_ADDR ||
++		addr->addrtype == TIPC_SOCKET_ADDR);
++}
++
+ /**
+  * tipc_connect - establish a connection to another TIPC port
+  * @sock: socket structure
+@@ -2384,18 +2394,18 @@ static int tipc_connect(struct socket *sock, struct sockaddr *dest,
+ 		if (!tipc_sk_type_connectionless(sk))
+ 			res = -EINVAL;
+ 		goto exit;
+-	} else if (dst->family != AF_TIPC) {
+-		res = -EINVAL;
+ 	}
+-	if (dst->addrtype != TIPC_ADDR_ID && dst->addrtype != TIPC_ADDR_NAME)
++	if (!tipc_sockaddr_is_sane(dst)) {
+ 		res = -EINVAL;
+-	if (res)
+ 		goto exit;
+-
++	}
+ 	/* DGRAM/RDM connect(), just save the destaddr */
+ 	if (tipc_sk_type_connectionless(sk)) {
+ 		memcpy(&tsk->peer, dest, destlen);
+ 		goto exit;
++	} else if (dst->addrtype == TIPC_SERVICE_RANGE) {
++		res = -EINVAL;
++		goto exit;
+ 	}
+ 
+ 	previous = sk->sk_state;
+diff --git a/net/tipc/topsrv.c b/net/tipc/topsrv.c
+index a457c0fbbef1..f5edb213d760 100644
+--- a/net/tipc/topsrv.c
++++ b/net/tipc/topsrv.c
+@@ -365,6 +365,7 @@ static int tipc_conn_rcv_sub(struct tipc_topsrv *srv,
+ 	struct tipc_subscription *sub;
+ 
+ 	if (tipc_sub_read(s, filter) & TIPC_SUB_CANCEL) {
++		s->filter &= __constant_ntohl(~TIPC_SUB_CANCEL);
+ 		tipc_conn_delete_sub(con, s);
+ 		return 0;
+ 	}
+diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
+index 3ae3a33da70b..602715fc9a75 100644
+--- a/net/vmw_vsock/virtio_transport_common.c
++++ b/net/vmw_vsock/virtio_transport_common.c
+@@ -662,6 +662,8 @@ static int virtio_transport_reset(struct vsock_sock *vsk,
+  */
+ static int virtio_transport_reset_no_sock(struct virtio_vsock_pkt *pkt)
+ {
++	const struct virtio_transport *t;
++	struct virtio_vsock_pkt *reply;
+ 	struct virtio_vsock_pkt_info info = {
+ 		.op = VIRTIO_VSOCK_OP_RST,
+ 		.type = le16_to_cpu(pkt->hdr.type),
+@@ -672,15 +674,21 @@ static int virtio_transport_reset_no_sock(struct virtio_vsock_pkt *pkt)
+ 	if (le16_to_cpu(pkt->hdr.op) == VIRTIO_VSOCK_OP_RST)
+ 		return 0;
+ 
+-	pkt = virtio_transport_alloc_pkt(&info, 0,
+-					 le64_to_cpu(pkt->hdr.dst_cid),
+-					 le32_to_cpu(pkt->hdr.dst_port),
+-					 le64_to_cpu(pkt->hdr.src_cid),
+-					 le32_to_cpu(pkt->hdr.src_port));
+-	if (!pkt)
++	reply = virtio_transport_alloc_pkt(&info, 0,
++					   le64_to_cpu(pkt->hdr.dst_cid),
++					   le32_to_cpu(pkt->hdr.dst_port),
++					   le64_to_cpu(pkt->hdr.src_cid),
++					   le32_to_cpu(pkt->hdr.src_port));
++	if (!reply)
+ 		return -ENOMEM;
+ 
+-	return virtio_transport_get_ops()->send_pkt(pkt);
++	t = virtio_transport_get_ops();
++	if (!t) {
++		virtio_transport_free_pkt(reply);
++		return -ENOTCONN;
++	}
++
++	return t->send_pkt(reply);
+ }
+ 
+ static void virtio_transport_wait_close(struct sock *sk, long timeout)
+diff --git a/net/x25/af_x25.c b/net/x25/af_x25.c
+index eff31348e20b..20a511398389 100644
+--- a/net/x25/af_x25.c
++++ b/net/x25/af_x25.c
+@@ -820,8 +820,13 @@ static int x25_connect(struct socket *sock, struct sockaddr *uaddr,
+ 	sock->state = SS_CONNECTED;
+ 	rc = 0;
+ out_put_neigh:
+-	if (rc)
++	if (rc) {
++		read_lock_bh(&x25_list_lock);
+ 		x25_neigh_put(x25->neighbour);
++		x25->neighbour = NULL;
++		read_unlock_bh(&x25_list_lock);
++		x25->state = X25_STATE_0;
++	}
+ out_put_route:
+ 	x25_route_put(rt);
+ out:
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index 85e4fe4f18cc..f3031c8907d9 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -407,6 +407,10 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
+ 	if (sxdp->sxdp_family != AF_XDP)
+ 		return -EINVAL;
+ 
++	flags = sxdp->sxdp_flags;
++	if (flags & ~(XDP_SHARED_UMEM | XDP_COPY | XDP_ZEROCOPY))
++		return -EINVAL;
++
+ 	mutex_lock(&xs->mutex);
+ 	if (xs->dev) {
+ 		err = -EBUSY;
+@@ -425,7 +429,6 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
+ 	}
+ 
+ 	qid = sxdp->sxdp_queue_id;
+-	flags = sxdp->sxdp_flags;
+ 
+ 	if (flags & XDP_SHARED_UMEM) {
+ 		struct xdp_sock *umem_xs;
+diff --git a/scripts/gdb/linux/constants.py.in b/scripts/gdb/linux/constants.py.in
+index 7aad82406422..d3319a80788a 100644
+--- a/scripts/gdb/linux/constants.py.in
++++ b/scripts/gdb/linux/constants.py.in
+@@ -37,12 +37,12 @@
+ import gdb
+ 
+ /* linux/fs.h */
+-LX_VALUE(MS_RDONLY)
+-LX_VALUE(MS_SYNCHRONOUS)
+-LX_VALUE(MS_MANDLOCK)
+-LX_VALUE(MS_DIRSYNC)
+-LX_VALUE(MS_NOATIME)
+-LX_VALUE(MS_NODIRATIME)
++LX_VALUE(SB_RDONLY)
++LX_VALUE(SB_SYNCHRONOUS)
++LX_VALUE(SB_MANDLOCK)
++LX_VALUE(SB_DIRSYNC)
++LX_VALUE(SB_NOATIME)
++LX_VALUE(SB_NODIRATIME)
+ 
+ /* linux/mount.h */
+ LX_VALUE(MNT_NOSUID)
+diff --git a/scripts/gdb/linux/proc.py b/scripts/gdb/linux/proc.py
+index 0aebd7565b03..2f01a958eb22 100644
+--- a/scripts/gdb/linux/proc.py
++++ b/scripts/gdb/linux/proc.py
+@@ -114,11 +114,11 @@ def info_opts(lst, opt):
+     return opts
+ 
+ 
+-FS_INFO = {constants.LX_MS_SYNCHRONOUS: ",sync",
+-           constants.LX_MS_MANDLOCK: ",mand",
+-           constants.LX_MS_DIRSYNC: ",dirsync",
+-           constants.LX_MS_NOATIME: ",noatime",
+-           constants.LX_MS_NODIRATIME: ",nodiratime"}
++FS_INFO = {constants.LX_SB_SYNCHRONOUS: ",sync",
++           constants.LX_SB_MANDLOCK: ",mand",
++           constants.LX_SB_DIRSYNC: ",dirsync",
++           constants.LX_SB_NOATIME: ",noatime",
++           constants.LX_SB_NODIRATIME: ",nodiratime"}
+ 
+ MNT_INFO = {constants.LX_MNT_NOSUID: ",nosuid",
+             constants.LX_MNT_NODEV: ",nodev",
+@@ -184,7 +184,7 @@ values of that process namespace"""
+             fstype = superblock['s_type']['name'].string()
+             s_flags = int(superblock['s_flags'])
+             m_flags = int(vfs['mnt']['mnt_flags'])
+-            rd = "ro" if (s_flags & constants.LX_MS_RDONLY) else "rw"
++            rd = "ro" if (s_flags & constants.LX_SB_RDONLY) else "rw"
+ 
+             gdb.write(
+                 "{} {} {} {}{}{} 0 0\n"
+diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
+index 26bf886bd168..588a3bc29ecc 100644
+--- a/scripts/mod/modpost.c
++++ b/scripts/mod/modpost.c
+@@ -640,7 +640,7 @@ static void handle_modversions(struct module *mod, struct elf_info *info,
+ 			       info->sechdrs[sym->st_shndx].sh_offset -
+ 			       (info->hdr->e_type != ET_REL ?
+ 				info->sechdrs[sym->st_shndx].sh_addr : 0);
+-			crc = *crcp;
++			crc = TO_NATIVE(*crcp);
+ 		}
+ 		sym_update_crc(symname + strlen("__crc_"), mod, crc,
+ 				export);
+diff --git a/scripts/package/Makefile b/scripts/package/Makefile
+index 453fecee62f0..aa39c2b5e46a 100644
+--- a/scripts/package/Makefile
++++ b/scripts/package/Makefile
+@@ -59,7 +59,7 @@ rpm-pkg: FORCE
+ # binrpm-pkg
+ # ---------------------------------------------------------------------------
+ binrpm-pkg: FORCE
+-	$(MAKE) KBUILD_SRC=
++	$(MAKE) -f $(srctree)/Makefile
+ 	$(CONFIG_SHELL) $(MKSPEC) prebuilt > $(objtree)/binkernel.spec
+ 	+rpmbuild $(RPMOPTS) --define "_builddir $(objtree)" --target \
+ 		$(UTS_MACHINE) -bb $(objtree)/binkernel.spec
+@@ -102,7 +102,7 @@ clean-dirs += $(objtree)/snap/
+ # tarball targets
+ # ---------------------------------------------------------------------------
+ tar%pkg: FORCE
+-	$(MAKE) KBUILD_SRC=
++	$(MAKE) -f $(srctree)/Makefile
+ 	$(CONFIG_SHELL) $(srctree)/scripts/package/buildtar $@
+ 
+ clean-dirs += $(objtree)/tar-install/
+diff --git a/scripts/package/builddeb b/scripts/package/builddeb
+index f43a274f4f1d..8ac25d10a6ad 100755
+--- a/scripts/package/builddeb
++++ b/scripts/package/builddeb
+@@ -86,12 +86,12 @@ cp "$($MAKE -s -f $srctree/Makefile image_name)" "$tmpdir/$installed_image_path"
+ if grep -q "^CONFIG_OF_EARLY_FLATTREE=y" $KCONFIG_CONFIG ; then
+ 	# Only some architectures with OF support have this target
+ 	if [ -d "${srctree}/arch/$SRCARCH/boot/dts" ]; then
+-		$MAKE KBUILD_SRC= INSTALL_DTBS_PATH="$tmpdir/usr/lib/$packagename" dtbs_install
++		$MAKE -f $srctree/Makefile INSTALL_DTBS_PATH="$tmpdir/usr/lib/$packagename" dtbs_install
+ 	fi
+ fi
+ 
+ if grep -q '^CONFIG_MODULES=y' $KCONFIG_CONFIG ; then
+-	INSTALL_MOD_PATH="$tmpdir" $MAKE KBUILD_SRC= modules_install
++	INSTALL_MOD_PATH="$tmpdir" $MAKE -f $srctree/Makefile modules_install
+ 	rm -f "$tmpdir/lib/modules/$version/build"
+ 	rm -f "$tmpdir/lib/modules/$version/source"
+ 	if [ "$ARCH" = "um" ] ; then
+@@ -113,14 +113,14 @@ if grep -q '^CONFIG_MODULES=y' $KCONFIG_CONFIG ; then
+ 		# resign stripped modules
+ 		MODULE_SIG_ALL="$(grep -s '^CONFIG_MODULE_SIG_ALL=y' $KCONFIG_CONFIG || true)"
+ 		if [ -n "$MODULE_SIG_ALL" ]; then
+-			INSTALL_MOD_PATH="$tmpdir" $MAKE KBUILD_SRC= modules_sign
++			INSTALL_MOD_PATH="$tmpdir" $MAKE -f $srctree/Makefile modules_sign
+ 		fi
+ 	fi
+ fi
+ 
+ if [ "$ARCH" != "um" ]; then
+-	$MAKE headers_check KBUILD_SRC=
+-	$MAKE headers_install KBUILD_SRC= INSTALL_HDR_PATH="$libc_headers_dir/usr"
++	$MAKE -f $srctree/Makefile headers_check
++	$MAKE -f $srctree/Makefile headers_install INSTALL_HDR_PATH="$libc_headers_dir/usr"
+ fi
+ 
+ # Install the maintainer scripts
+diff --git a/scripts/package/buildtar b/scripts/package/buildtar
+index d624a07a4e77..cfd2a4a3fe42 100755
+--- a/scripts/package/buildtar
++++ b/scripts/package/buildtar
+@@ -57,7 +57,7 @@ dirs=boot
+ # Try to install modules
+ #
+ if grep -q '^CONFIG_MODULES=y' "${KCONFIG_CONFIG}"; then
+-	make ARCH="${ARCH}" O="${objtree}" KBUILD_SRC= INSTALL_MOD_PATH="${tmpdir}" modules_install
++	make ARCH="${ARCH}" -f ${srctree}/Makefile INSTALL_MOD_PATH="${tmpdir}" modules_install
+ 	dirs="$dirs lib"
+ fi
+ 
+diff --git a/scripts/package/mkdebian b/scripts/package/mkdebian
+index edcad61fe3cd..f030961c5165 100755
+--- a/scripts/package/mkdebian
++++ b/scripts/package/mkdebian
+@@ -205,13 +205,15 @@ EOF
+ cat <<EOF > debian/rules
+ #!$(command -v $MAKE) -f
+ 
++srctree ?= .
++
+ build:
+ 	\$(MAKE) KERNELRELEASE=${version} ARCH=${ARCH} \
+-	KBUILD_BUILD_VERSION=${revision} KBUILD_SRC=
++	KBUILD_BUILD_VERSION=${revision} -f \$(srctree)/Makefile
+ 
+ binary-arch:
+ 	\$(MAKE) KERNELRELEASE=${version} ARCH=${ARCH} \
+-	KBUILD_BUILD_VERSION=${revision} KBUILD_SRC= intdeb-pkg
++	KBUILD_BUILD_VERSION=${revision} -f \$(srctree)/Makefile intdeb-pkg
+ 
+ clean:
+ 	rm -rf debian/*tmp debian/files
+diff --git a/security/apparmor/policy_unpack.c b/security/apparmor/policy_unpack.c
+index 379682e2a8d5..f6c2bcb2ab14 100644
+--- a/security/apparmor/policy_unpack.c
++++ b/security/apparmor/policy_unpack.c
+@@ -579,6 +579,7 @@ fail:
+ 			kfree(profile->secmark[i].label);
+ 		kfree(profile->secmark);
+ 		profile->secmark_count = 0;
++		profile->secmark = NULL;
+ 	}
+ 
+ 	e->pos = pos;
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index f0e36c3492ba..07b11b5aaf1f 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -959,8 +959,11 @@ static int selinux_sb_clone_mnt_opts(const struct super_block *oldsb,
+ 	BUG_ON(!(oldsbsec->flags & SE_SBINITIALIZED));
+ 
+ 	/* if fs is reusing a sb, make sure that the contexts match */
+-	if (newsbsec->flags & SE_SBINITIALIZED)
++	if (newsbsec->flags & SE_SBINITIALIZED) {
++		if ((kern_flags & SECURITY_LSM_NATIVE_LABELS) && !set_context)
++			*set_kern_flags |= SECURITY_LSM_NATIVE_LABELS;
+ 		return selinux_cmp_sb_context(oldsb, newsb);
++	}
+ 
+ 	mutex_lock(&newsbsec->lock);
+ 
+@@ -3241,12 +3244,16 @@ static int selinux_inode_setsecurity(struct inode *inode, const char *name,
+ 				     const void *value, size_t size, int flags)
+ {
+ 	struct inode_security_struct *isec = inode_security_novalidate(inode);
++	struct superblock_security_struct *sbsec = inode->i_sb->s_security;
+ 	u32 newsid;
+ 	int rc;
+ 
+ 	if (strcmp(name, XATTR_SELINUX_SUFFIX))
+ 		return -EOPNOTSUPP;
+ 
++	if (!(sbsec->flags & SBLABEL_MNT))
++		return -EOPNOTSUPP;
++
+ 	if (!value || !size)
+ 		return -EACCES;
+ 
+@@ -5120,6 +5127,9 @@ static int selinux_sctp_bind_connect(struct sock *sk, int optname,
+ 			return -EINVAL;
+ 		}
+ 
++		if (walk_size + len > addrlen)
++			return -EINVAL;
++
+ 		err = -EINVAL;
+ 		switch (optname) {
+ 		/* Bind checks */
+@@ -6392,7 +6402,10 @@ static void selinux_inode_invalidate_secctx(struct inode *inode)
+  */
+ static int selinux_inode_notifysecctx(struct inode *inode, void *ctx, u32 ctxlen)
+ {
+-	return selinux_inode_setsecurity(inode, XATTR_SELINUX_SUFFIX, ctx, ctxlen, 0);
++	int rc = selinux_inode_setsecurity(inode, XATTR_SELINUX_SUFFIX,
++					   ctx, ctxlen, 0);
++	/* Do not return error when suppressing label (SBLABEL_MNT not set). */
++	return rc == -EOPNOTSUPP ? 0 : rc;
+ }
+ 
+ /*
+diff --git a/sound/ac97/bus.c b/sound/ac97/bus.c
+index 9f0c480489ef..9cbf6927abe9 100644
+--- a/sound/ac97/bus.c
++++ b/sound/ac97/bus.c
+@@ -84,7 +84,7 @@ ac97_of_get_child_device(struct ac97_controller *ac97_ctrl, int idx,
+ 		if ((idx != of_property_read_u32(node, "reg", &reg)) ||
+ 		    !of_device_is_compatible(node, compat))
+ 			continue;
+-		return of_node_get(node);
++		return node;
+ 	}
+ 
+ 	return NULL;
+diff --git a/sound/core/oss/pcm_oss.c b/sound/core/oss/pcm_oss.c
+index 467039b342b5..41abb8bd466a 100644
+--- a/sound/core/oss/pcm_oss.c
++++ b/sound/core/oss/pcm_oss.c
+@@ -940,6 +940,28 @@ static int snd_pcm_oss_change_params_locked(struct snd_pcm_substream *substream)
+ 	oss_frame_size = snd_pcm_format_physical_width(params_format(params)) *
+ 			 params_channels(params) / 8;
+ 
++	err = snd_pcm_oss_period_size(substream, params, sparams);
++	if (err < 0)
++		goto failure;
++
++	n = snd_pcm_plug_slave_size(substream, runtime->oss.period_bytes / oss_frame_size);
++	err = snd_pcm_hw_param_near(substream, sparams, SNDRV_PCM_HW_PARAM_PERIOD_SIZE, n, NULL);
++	if (err < 0)
++		goto failure;
++
++	err = snd_pcm_hw_param_near(substream, sparams, SNDRV_PCM_HW_PARAM_PERIODS,
++				     runtime->oss.periods, NULL);
++	if (err < 0)
++		goto failure;
++
++	snd_pcm_kernel_ioctl(substream, SNDRV_PCM_IOCTL_DROP, NULL);
++
++	err = snd_pcm_kernel_ioctl(substream, SNDRV_PCM_IOCTL_HW_PARAMS, sparams);
++	if (err < 0) {
++		pcm_dbg(substream->pcm, "HW_PARAMS failed: %i\n", err);
++		goto failure;
++	}
++
+ #ifdef CONFIG_SND_PCM_OSS_PLUGINS
+ 	snd_pcm_oss_plugin_clear(substream);
+ 	if (!direct) {
+@@ -974,27 +996,6 @@ static int snd_pcm_oss_change_params_locked(struct snd_pcm_substream *substream)
+ 	}
+ #endif
+ 
+-	err = snd_pcm_oss_period_size(substream, params, sparams);
+-	if (err < 0)
+-		goto failure;
+-
+-	n = snd_pcm_plug_slave_size(substream, runtime->oss.period_bytes / oss_frame_size);
+-	err = snd_pcm_hw_param_near(substream, sparams, SNDRV_PCM_HW_PARAM_PERIOD_SIZE, n, NULL);
+-	if (err < 0)
+-		goto failure;
+-
+-	err = snd_pcm_hw_param_near(substream, sparams, SNDRV_PCM_HW_PARAM_PERIODS,
+-				     runtime->oss.periods, NULL);
+-	if (err < 0)
+-		goto failure;
+-
+-	snd_pcm_kernel_ioctl(substream, SNDRV_PCM_IOCTL_DROP, NULL);
+-
+-	if ((err = snd_pcm_kernel_ioctl(substream, SNDRV_PCM_IOCTL_HW_PARAMS, sparams)) < 0) {
+-		pcm_dbg(substream->pcm, "HW_PARAMS failed: %i\n", err);
+-		goto failure;
+-	}
+-
+ 	if (runtime->oss.trigger) {
+ 		sw_params->start_threshold = 1;
+ 	} else {
+diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
+index 818dff1de545..e08c6c6ca029 100644
+--- a/sound/core/pcm_native.c
++++ b/sound/core/pcm_native.c
+@@ -1426,8 +1426,15 @@ static int snd_pcm_pause(struct snd_pcm_substream *substream, int push)
+ static int snd_pcm_pre_suspend(struct snd_pcm_substream *substream, int state)
+ {
+ 	struct snd_pcm_runtime *runtime = substream->runtime;
+-	if (runtime->status->state == SNDRV_PCM_STATE_SUSPENDED)
++	switch (runtime->status->state) {
++	case SNDRV_PCM_STATE_SUSPENDED:
++		return -EBUSY;
++	/* unresumable PCM state; return -EBUSY for skipping suspend */
++	case SNDRV_PCM_STATE_OPEN:
++	case SNDRV_PCM_STATE_SETUP:
++	case SNDRV_PCM_STATE_DISCONNECTED:
+ 		return -EBUSY;
++	}
+ 	runtime->trigger_master = substream;
+ 	return 0;
+ }
+@@ -1506,6 +1513,14 @@ int snd_pcm_suspend_all(struct snd_pcm *pcm)
+ 			/* FIXME: the open/close code should lock this as well */
+ 			if (substream->runtime == NULL)
+ 				continue;
++
++			/*
++			 * Skip BE DAI link PCMs that are internal and may
++			 * not have their substream ops set.
++			 */
++			if (!substream->ops)
++				continue;
++
+ 			err = snd_pcm_suspend(substream);
+ 			if (err < 0 && err != -EBUSY)
+ 				return err;
+diff --git a/sound/core/rawmidi.c b/sound/core/rawmidi.c
+index ee601d7f0926..c0690d1ecd55 100644
+--- a/sound/core/rawmidi.c
++++ b/sound/core/rawmidi.c
+@@ -30,6 +30,7 @@
+ #include <linux/module.h>
+ #include <linux/delay.h>
+ #include <linux/mm.h>
++#include <linux/nospec.h>
+ #include <sound/rawmidi.h>
+ #include <sound/info.h>
+ #include <sound/control.h>
+@@ -601,6 +602,7 @@ static int __snd_rawmidi_info_select(struct snd_card *card,
+ 		return -ENXIO;
+ 	if (info->stream < 0 || info->stream > 1)
+ 		return -EINVAL;
++	info->stream = array_index_nospec(info->stream, 2);
+ 	pstr = &rmidi->streams[info->stream];
+ 	if (pstr->substream_count == 0)
+ 		return -ENOENT;
+diff --git a/sound/core/seq/oss/seq_oss_synth.c b/sound/core/seq/oss/seq_oss_synth.c
+index 278ebb993122..c93945917235 100644
+--- a/sound/core/seq/oss/seq_oss_synth.c
++++ b/sound/core/seq/oss/seq_oss_synth.c
+@@ -617,13 +617,14 @@ int
+ snd_seq_oss_synth_make_info(struct seq_oss_devinfo *dp, int dev, struct synth_info *inf)
+ {
+ 	struct seq_oss_synth *rec;
++	struct seq_oss_synthinfo *info = get_synthinfo_nospec(dp, dev);
+ 
+-	if (dev < 0 || dev >= dp->max_synthdev)
++	if (!info)
+ 		return -ENXIO;
+ 
+-	if (dp->synths[dev].is_midi) {
++	if (info->is_midi) {
+ 		struct midi_info minf;
+-		snd_seq_oss_midi_make_info(dp, dp->synths[dev].midi_mapped, &minf);
++		snd_seq_oss_midi_make_info(dp, info->midi_mapped, &minf);
+ 		inf->synth_type = SYNTH_TYPE_MIDI;
+ 		inf->synth_subtype = 0;
+ 		inf->nr_voices = 16;
+diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
+index 7d4640d1fe9f..38e7deab6384 100644
+--- a/sound/core/seq/seq_clientmgr.c
++++ b/sound/core/seq/seq_clientmgr.c
+@@ -1252,7 +1252,7 @@ static int snd_seq_ioctl_set_client_info(struct snd_seq_client *client,
+ 
+ 	/* fill the info fields */
+ 	if (client_info->name[0])
+-		strlcpy(client->name, client_info->name, sizeof(client->name));
++		strscpy(client->name, client_info->name, sizeof(client->name));
+ 
+ 	client->filter = client_info->filter;
+ 	client->event_lost = client_info->event_lost;
+@@ -1530,7 +1530,7 @@ static int snd_seq_ioctl_create_queue(struct snd_seq_client *client, void *arg)
+ 	/* set queue name */
+ 	if (!info->name[0])
+ 		snprintf(info->name, sizeof(info->name), "Queue-%d", q->queue);
+-	strlcpy(q->name, info->name, sizeof(q->name));
++	strscpy(q->name, info->name, sizeof(q->name));
+ 	snd_use_lock_free(&q->use_lock);
+ 
+ 	return 0;
+@@ -1592,7 +1592,7 @@ static int snd_seq_ioctl_set_queue_info(struct snd_seq_client *client,
+ 		queuefree(q);
+ 		return -EPERM;
+ 	}
+-	strlcpy(q->name, info->name, sizeof(q->name));
++	strscpy(q->name, info->name, sizeof(q->name));
+ 	queuefree(q);
+ 
+ 	return 0;
+diff --git a/sound/firewire/bebob/bebob.c b/sound/firewire/bebob/bebob.c
+index d91874275d2c..5b46e8dcc2dd 100644
+--- a/sound/firewire/bebob/bebob.c
++++ b/sound/firewire/bebob/bebob.c
+@@ -448,7 +448,19 @@ static const struct ieee1394_device_id bebob_id_table[] = {
+ 	/* Focusrite, SaffirePro 26 I/O */
+ 	SND_BEBOB_DEV_ENTRY(VEN_FOCUSRITE, 0x00000003, &saffirepro_26_spec),
+ 	/* Focusrite, SaffirePro 10 I/O */
+-	SND_BEBOB_DEV_ENTRY(VEN_FOCUSRITE, 0x00000006, &saffirepro_10_spec),
++	{
++		// The combination of vendor_id and model_id is the same
++		// as the one of Liquid Saffire 56.
++		.match_flags	= IEEE1394_MATCH_VENDOR_ID |
++				  IEEE1394_MATCH_MODEL_ID |
++				  IEEE1394_MATCH_SPECIFIER_ID |
++				  IEEE1394_MATCH_VERSION,
++		.vendor_id	= VEN_FOCUSRITE,
++		.model_id	= 0x000006,
++		.specifier_id	= 0x00a02d,
++		.version	= 0x010001,
++		.driver_data	= (kernel_ulong_t)&saffirepro_10_spec,
++	},
+ 	/* Focusrite, Saffire(no label and LE) */
+ 	SND_BEBOB_DEV_ENTRY(VEN_FOCUSRITE, MODEL_FOCUSRITE_SAFFIRE_BOTH,
+ 			    &saffire_spec),
+diff --git a/sound/firewire/dice/dice.c b/sound/firewire/dice/dice.c
+index ed50b222d36e..eee184b05d93 100644
+--- a/sound/firewire/dice/dice.c
++++ b/sound/firewire/dice/dice.c
+@@ -18,6 +18,7 @@ MODULE_LICENSE("GPL v2");
+ #define OUI_ALESIS		0x000595
+ #define OUI_MAUDIO		0x000d6c
+ #define OUI_MYTEK		0x001ee8
++#define OUI_SSL			0x0050c2	// Actually an ID reserved by IEEE.
+ 
+ #define DICE_CATEGORY_ID	0x04
+ #define WEISS_CATEGORY_ID	0x00
+@@ -196,7 +197,7 @@ static int dice_probe(struct fw_unit *unit,
+ 	struct snd_dice *dice;
+ 	int err;
+ 
+-	if (!entry->driver_data) {
++	if (!entry->driver_data && entry->vendor_id != OUI_SSL) {
+ 		err = check_dice_category(unit);
+ 		if (err < 0)
+ 			return -ENODEV;
+@@ -361,6 +362,15 @@ static const struct ieee1394_device_id dice_id_table[] = {
+ 		.model_id	= 0x000002,
+ 		.driver_data = (kernel_ulong_t)snd_dice_detect_mytek_formats,
+ 	},
++	// Solid State Logic, Duende Classic and Mini.
++	// NOTE: each field of GUID in config ROM is not compliant to standard
++	// DICE scheme.
++	{
++		.match_flags	= IEEE1394_MATCH_VENDOR_ID |
++				  IEEE1394_MATCH_MODEL_ID,
++		.vendor_id	= OUI_SSL,
++		.model_id	= 0x000070,
++	},
+ 	{
+ 		.match_flags = IEEE1394_MATCH_VERSION,
+ 		.version     = DICE_INTERFACE,
+diff --git a/sound/firewire/motu/amdtp-motu.c b/sound/firewire/motu/amdtp-motu.c
+index f0555a24d90e..6c9b743ea74b 100644
+--- a/sound/firewire/motu/amdtp-motu.c
++++ b/sound/firewire/motu/amdtp-motu.c
+@@ -136,7 +136,9 @@ static void read_pcm_s32(struct amdtp_stream *s,
+ 		byte = (u8 *)buffer + p->pcm_byte_offset;
+ 
+ 		for (c = 0; c < channels; ++c) {
+-			*dst = (byte[0] << 24) | (byte[1] << 16) | byte[2];
++			*dst = (byte[0] << 24) |
++			       (byte[1] << 16) |
++			       (byte[2] << 8);
+ 			byte += 3;
+ 			dst++;
+ 		}
+diff --git a/sound/firewire/motu/motu.c b/sound/firewire/motu/motu.c
+index 220e61926ea4..513291ba0ab0 100644
+--- a/sound/firewire/motu/motu.c
++++ b/sound/firewire/motu/motu.c
+@@ -36,7 +36,7 @@ static void name_card(struct snd_motu *motu)
+ 	fw_csr_iterator_init(&it, motu->unit->directory);
+ 	while (fw_csr_iterator_next(&it, &key, &val)) {
+ 		switch (key) {
+-		case CSR_VERSION:
++		case CSR_MODEL:
+ 			version = val;
+ 			break;
+ 		}
+@@ -46,7 +46,7 @@ static void name_card(struct snd_motu *motu)
+ 	strcpy(motu->card->shortname, motu->spec->name);
+ 	strcpy(motu->card->mixername, motu->spec->name);
+ 	snprintf(motu->card->longname, sizeof(motu->card->longname),
+-		 "MOTU %s (version:%d), GUID %08x%08x at %s, S%d",
++		 "MOTU %s (version:%06x), GUID %08x%08x at %s, S%d",
+ 		 motu->spec->name, version,
+ 		 fw_dev->config_rom[3], fw_dev->config_rom[4],
+ 		 dev_name(&motu->unit->device), 100 << fw_dev->max_speed);
+@@ -237,20 +237,20 @@ static const struct snd_motu_spec motu_audio_express = {
+ #define SND_MOTU_DEV_ENTRY(model, data)			\
+ {							\
+ 	.match_flags	= IEEE1394_MATCH_VENDOR_ID |	\
+-			  IEEE1394_MATCH_MODEL_ID |	\
+-			  IEEE1394_MATCH_SPECIFIER_ID,	\
++			  IEEE1394_MATCH_SPECIFIER_ID |	\
++			  IEEE1394_MATCH_VERSION,	\
+ 	.vendor_id	= OUI_MOTU,			\
+-	.model_id	= model,			\
+ 	.specifier_id	= OUI_MOTU,			\
++	.version	= model,			\
+ 	.driver_data	= (kernel_ulong_t)data,		\
+ }
+ 
+ static const struct ieee1394_device_id motu_id_table[] = {
+-	SND_MOTU_DEV_ENTRY(0x101800, &motu_828mk2),
+-	SND_MOTU_DEV_ENTRY(0x107800, &snd_motu_spec_traveler),
+-	SND_MOTU_DEV_ENTRY(0x106800, &motu_828mk3),	/* FireWire only. */
+-	SND_MOTU_DEV_ENTRY(0x100800, &motu_828mk3),	/* Hybrid. */
+-	SND_MOTU_DEV_ENTRY(0x104800, &motu_audio_express),
++	SND_MOTU_DEV_ENTRY(0x000003, &motu_828mk2),
++	SND_MOTU_DEV_ENTRY(0x000009, &snd_motu_spec_traveler),
++	SND_MOTU_DEV_ENTRY(0x000015, &motu_828mk3),	/* FireWire only. */
++	SND_MOTU_DEV_ENTRY(0x000035, &motu_828mk3),	/* Hybrid. */
++	SND_MOTU_DEV_ENTRY(0x000033, &motu_audio_express),
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(ieee1394, motu_id_table);
+diff --git a/sound/hda/hdac_i915.c b/sound/hda/hdac_i915.c
+index 617ff1aa818f..27eb0270a711 100644
+--- a/sound/hda/hdac_i915.c
++++ b/sound/hda/hdac_i915.c
+@@ -144,9 +144,9 @@ int snd_hdac_i915_init(struct hdac_bus *bus)
+ 		return -ENODEV;
+ 	if (!acomp->ops) {
+ 		request_module("i915");
+-		/* 10s timeout */
++		/* 60s timeout */
+ 		wait_for_completion_timeout(&bind_complete,
+-					    msecs_to_jiffies(10 * 1000));
++					    msecs_to_jiffies(60 * 1000));
+ 	}
+ 	if (!acomp->ops) {
+ 		dev_info(bus->dev, "couldn't bind with audio component\n");
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index 9f8d59e7e89f..b238e903b9d7 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -2917,6 +2917,7 @@ static void hda_call_codec_resume(struct hda_codec *codec)
+ 		hda_jackpoll_work(&codec->jackpoll_work.work);
+ 	else
+ 		snd_hda_jack_report_sync(codec);
++	codec->core.dev.power.power_state = PMSG_ON;
+ 	snd_hdac_leave_pm(&codec->core);
+ }
+ 
+@@ -2950,10 +2951,62 @@ static int hda_codec_runtime_resume(struct device *dev)
+ }
+ #endif /* CONFIG_PM */
+ 
++#ifdef CONFIG_PM_SLEEP
++static int hda_codec_force_resume(struct device *dev)
++{
++	int ret;
++
++	/* The get/put pair below enforces the runtime resume even if the
++	 * device hasn't been used at suspend time.  This trick is needed to
++	 * update the jack state change during the sleep.
++	 */
++	pm_runtime_get_noresume(dev);
++	ret = pm_runtime_force_resume(dev);
++	pm_runtime_put(dev);
++	return ret;
++}
++
++static int hda_codec_pm_suspend(struct device *dev)
++{
++	dev->power.power_state = PMSG_SUSPEND;
++	return pm_runtime_force_suspend(dev);
++}
++
++static int hda_codec_pm_resume(struct device *dev)
++{
++	dev->power.power_state = PMSG_RESUME;
++	return hda_codec_force_resume(dev);
++}
++
++static int hda_codec_pm_freeze(struct device *dev)
++{
++	dev->power.power_state = PMSG_FREEZE;
++	return pm_runtime_force_suspend(dev);
++}
++
++static int hda_codec_pm_thaw(struct device *dev)
++{
++	dev->power.power_state = PMSG_THAW;
++	return hda_codec_force_resume(dev);
++}
++
++static int hda_codec_pm_restore(struct device *dev)
++{
++	dev->power.power_state = PMSG_RESTORE;
++	return hda_codec_force_resume(dev);
++}
++#endif /* CONFIG_PM_SLEEP */
++
+ /* referred in hda_bind.c */
+ const struct dev_pm_ops hda_codec_driver_pm = {
+-	SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
+-				pm_runtime_force_resume)
++#ifdef CONFIG_PM_SLEEP
++	.suspend = hda_codec_pm_suspend,
++	.resume = hda_codec_pm_resume,
++	.freeze = hda_codec_pm_freeze,
++	.thaw = hda_codec_pm_thaw,
++	.poweroff = hda_codec_pm_suspend,
++	.restore = hda_codec_pm_restore,
++#endif /* CONFIG_PM_SLEEP */
+ 	SET_RUNTIME_PM_OPS(hda_codec_runtime_suspend, hda_codec_runtime_resume,
+ 			   NULL)
+ };
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index e5c49003e75f..2ec91085fa3e 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -947,7 +947,7 @@ static void __azx_runtime_suspend(struct azx *chip)
+ 	display_power(chip, false);
+ }
+ 
+-static void __azx_runtime_resume(struct azx *chip)
++static void __azx_runtime_resume(struct azx *chip, bool from_rt)
+ {
+ 	struct hda_intel *hda = container_of(chip, struct hda_intel, chip);
+ 	struct hdac_bus *bus = azx_bus(chip);
+@@ -964,7 +964,7 @@ static void __azx_runtime_resume(struct azx *chip)
+ 	azx_init_pci(chip);
+ 	hda_intel_init_chip(chip, true);
+ 
+-	if (status) {
++	if (status && from_rt) {
+ 		list_for_each_codec(codec, &chip->bus)
+ 			if (status & (1 << codec->addr))
+ 				schedule_delayed_work(&codec->jackpoll_work,
+@@ -1016,7 +1016,7 @@ static int azx_resume(struct device *dev)
+ 			chip->msi = 0;
+ 	if (azx_acquire_irq(chip, 1) < 0)
+ 		return -EIO;
+-	__azx_runtime_resume(chip);
++	__azx_runtime_resume(chip, false);
+ 	snd_power_change_state(card, SNDRV_CTL_POWER_D0);
+ 
+ 	trace_azx_resume(chip);
+@@ -1081,7 +1081,7 @@ static int azx_runtime_resume(struct device *dev)
+ 	chip = card->private_data;
+ 	if (!azx_has_pm_runtime(chip))
+ 		return 0;
+-	__azx_runtime_resume(chip);
++	__azx_runtime_resume(chip, true);
+ 
+ 	/* disable controller Wake Up event*/
+ 	azx_writew(chip, WAKEEN, azx_readw(chip, WAKEEN) &
+@@ -2142,12 +2142,18 @@ static struct snd_pci_quirk power_save_blacklist[] = {
+ 	SND_PCI_QUIRK(0x8086, 0x2040, "Intel DZ77BH-55K", 0),
+ 	/* https://bugzilla.kernel.org/show_bug.cgi?id=199607 */
+ 	SND_PCI_QUIRK(0x8086, 0x2057, "Intel NUC5i7RYB", 0),
++	/* https://bugs.launchpad.net/bugs/1821663 */
++	SND_PCI_QUIRK(0x8086, 0x2064, "Intel SDP 8086:2064", 0),
+ 	/* https://bugzilla.redhat.com/show_bug.cgi?id=1520902 */
+ 	SND_PCI_QUIRK(0x8086, 0x2068, "Intel NUC7i3BNB", 0),
+-	/* https://bugzilla.redhat.com/show_bug.cgi?id=1572975 */
+-	SND_PCI_QUIRK(0x17aa, 0x36a7, "Lenovo C50 All in one", 0),
+ 	/* https://bugzilla.kernel.org/show_bug.cgi?id=198611 */
+ 	SND_PCI_QUIRK(0x17aa, 0x2227, "Lenovo X1 Carbon 3rd Gen", 0),
++	/* https://bugzilla.redhat.com/show_bug.cgi?id=1689623 */
++	SND_PCI_QUIRK(0x17aa, 0x367b, "Lenovo IdeaCentre B550", 0),
++	/* https://bugzilla.redhat.com/show_bug.cgi?id=1572975 */
++	SND_PCI_QUIRK(0x17aa, 0x36a7, "Lenovo C50 All in one", 0),
++	/* https://bugs.launchpad.net/bugs/1821663 */
++	SND_PCI_QUIRK(0x1631, 0xe017, "Packard Bell NEC IMEDIA 5204", 0),
+ 	{}
+ };
+ #endif /* CONFIG_PM */
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index a4ee7656d9ee..fb65ad31e86c 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -936,6 +936,9 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8299, "HP 800 G3 SFF", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x829a, "HP 800 G3 DM", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x8455, "HP Z2 G4", CXT_FIXUP_HP_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x103c, 0x8456, "HP Z2 G4 SFF", CXT_FIXUP_HP_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x103c, 0x8457, "HP Z2 G4 mini", CXT_FIXUP_HP_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x103c, 0x8458, "HP Z2 G4 mini premium", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1043, 0x138d, "Asus", CXT_FIXUP_HEADPHONE_MIC_PIN),
+ 	SND_PCI_QUIRK(0x152d, 0x0833, "OLPC XO-1.5", CXT_FIXUP_OLPC_XO),
+ 	SND_PCI_QUIRK(0x17aa, 0x20f2, "Lenovo T400", CXT_PINCFG_LENOVO_TP410),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 1ffa36e987b4..84fae0df59e9 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -118,6 +118,7 @@ struct alc_spec {
+ 	unsigned int has_alc5505_dsp:1;
+ 	unsigned int no_depop_delay:1;
+ 	unsigned int done_hp_init:1;
++	unsigned int no_shutup_pins:1;
+ 
+ 	/* for PLL fix */
+ 	hda_nid_t pll_nid;
+@@ -476,6 +477,14 @@ static void alc_auto_setup_eapd(struct hda_codec *codec, bool on)
+ 		set_eapd(codec, *p, on);
+ }
+ 
++static void alc_shutup_pins(struct hda_codec *codec)
++{
++	struct alc_spec *spec = codec->spec;
++
++	if (!spec->no_shutup_pins)
++		snd_hda_shutup_pins(codec);
++}
++
+ /* generic shutup callback;
+  * just turning off EAPD and a little pause for avoiding pop-noise
+  */
+@@ -486,7 +495,7 @@ static void alc_eapd_shutup(struct hda_codec *codec)
+ 	alc_auto_setup_eapd(codec, false);
+ 	if (!spec->no_depop_delay)
+ 		msleep(200);
+-	snd_hda_shutup_pins(codec);
++	alc_shutup_pins(codec);
+ }
+ 
+ /* generic EAPD initialization */
+@@ -814,7 +823,7 @@ static inline void alc_shutup(struct hda_codec *codec)
+ 	if (spec && spec->shutup)
+ 		spec->shutup(codec);
+ 	else
+-		snd_hda_shutup_pins(codec);
++		alc_shutup_pins(codec);
+ }
+ 
+ static void alc_reboot_notify(struct hda_codec *codec)
+@@ -1855,8 +1864,8 @@ enum {
+ 	ALC887_FIXUP_BASS_CHMAP,
+ 	ALC1220_FIXUP_GB_DUAL_CODECS,
+ 	ALC1220_FIXUP_CLEVO_P950,
+-	ALC1220_FIXUP_SYSTEM76_ORYP5,
+-	ALC1220_FIXUP_SYSTEM76_ORYP5_PINS,
++	ALC1220_FIXUP_CLEVO_PB51ED,
++	ALC1220_FIXUP_CLEVO_PB51ED_PINS,
+ };
+ 
+ static void alc889_fixup_coef(struct hda_codec *codec,
+@@ -2061,7 +2070,7 @@ static void alc1220_fixup_clevo_p950(struct hda_codec *codec,
+ static void alc_fixup_headset_mode_no_hp_mic(struct hda_codec *codec,
+ 				const struct hda_fixup *fix, int action);
+ 
+-static void alc1220_fixup_system76_oryp5(struct hda_codec *codec,
++static void alc1220_fixup_clevo_pb51ed(struct hda_codec *codec,
+ 				     const struct hda_fixup *fix,
+ 				     int action)
+ {
+@@ -2313,18 +2322,18 @@ static const struct hda_fixup alc882_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc1220_fixup_clevo_p950,
+ 	},
+-	[ALC1220_FIXUP_SYSTEM76_ORYP5] = {
++	[ALC1220_FIXUP_CLEVO_PB51ED] = {
+ 		.type = HDA_FIXUP_FUNC,
+-		.v.func = alc1220_fixup_system76_oryp5,
++		.v.func = alc1220_fixup_clevo_pb51ed,
+ 	},
+-	[ALC1220_FIXUP_SYSTEM76_ORYP5_PINS] = {
++	[ALC1220_FIXUP_CLEVO_PB51ED_PINS] = {
+ 		.type = HDA_FIXUP_PINS,
+ 		.v.pins = (const struct hda_pintbl[]) {
+ 			{ 0x19, 0x01a1913c }, /* use as headset mic, without its own jack detect */
+ 			{}
+ 		},
+ 		.chained = true,
+-		.chain_id = ALC1220_FIXUP_SYSTEM76_ORYP5,
++		.chain_id = ALC1220_FIXUP_CLEVO_PB51ED,
+ 	},
+ };
+ 
+@@ -2402,8 +2411,9 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x9501, "Clevo P950HR", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1558, 0x95e1, "Clevo P95xER", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1558, 0x95e2, "Clevo P950ER", ALC1220_FIXUP_CLEVO_P950),
+-	SND_PCI_QUIRK(0x1558, 0x96e1, "System76 Oryx Pro (oryp5)", ALC1220_FIXUP_SYSTEM76_ORYP5_PINS),
+-	SND_PCI_QUIRK(0x1558, 0x97e1, "System76 Oryx Pro (oryp5)", ALC1220_FIXUP_SYSTEM76_ORYP5_PINS),
++	SND_PCI_QUIRK(0x1558, 0x96e1, "System76 Oryx Pro (oryp5)", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x97e1, "System76 Oryx Pro (oryp5)", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x65d1, "Tuxedo Book XC1509", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK_VENDOR(0x1558, "Clevo laptop", ALC882_FIXUP_EAPD),
+ 	SND_PCI_QUIRK(0x161f, 0x2054, "Medion laptop", ALC883_FIXUP_EAPD),
+ 	SND_PCI_QUIRK(0x17aa, 0x3a0d, "Lenovo Y530", ALC882_FIXUP_LENOVO_Y530),
+@@ -2950,7 +2960,7 @@ static void alc269_shutup(struct hda_codec *codec)
+ 			(alc_get_coef0(codec) & 0x00ff) == 0x018) {
+ 		msleep(150);
+ 	}
+-	snd_hda_shutup_pins(codec);
++	alc_shutup_pins(codec);
+ }
+ 
+ static struct coef_fw alc282_coefs[] = {
+@@ -3053,14 +3063,15 @@ static void alc282_shutup(struct hda_codec *codec)
+ 	if (hp_pin_sense)
+ 		msleep(85);
+ 
+-	snd_hda_codec_write(codec, hp_pin, 0,
+-			    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
++	if (!spec->no_shutup_pins)
++		snd_hda_codec_write(codec, hp_pin, 0,
++				    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
+ 
+ 	if (hp_pin_sense)
+ 		msleep(100);
+ 
+ 	alc_auto_setup_eapd(codec, false);
+-	snd_hda_shutup_pins(codec);
++	alc_shutup_pins(codec);
+ 	alc_write_coef_idx(codec, 0x78, coef78);
+ }
+ 
+@@ -3166,15 +3177,16 @@ static void alc283_shutup(struct hda_codec *codec)
+ 	if (hp_pin_sense)
+ 		msleep(100);
+ 
+-	snd_hda_codec_write(codec, hp_pin, 0,
+-			    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
++	if (!spec->no_shutup_pins)
++		snd_hda_codec_write(codec, hp_pin, 0,
++				    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
+ 
+ 	alc_update_coef_idx(codec, 0x46, 0, 3 << 12);
+ 
+ 	if (hp_pin_sense)
+ 		msleep(100);
+ 	alc_auto_setup_eapd(codec, false);
+-	snd_hda_shutup_pins(codec);
++	alc_shutup_pins(codec);
+ 	alc_write_coef_idx(codec, 0x43, 0x9614);
+ }
+ 
+@@ -3240,14 +3252,15 @@ static void alc256_shutup(struct hda_codec *codec)
+ 	/* NOTE: call this before clearing the pin, otherwise codec stalls */
+ 	alc_update_coef_idx(codec, 0x46, 0, 3 << 12);
+ 
+-	snd_hda_codec_write(codec, hp_pin, 0,
+-			    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
++	if (!spec->no_shutup_pins)
++		snd_hda_codec_write(codec, hp_pin, 0,
++				    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
+ 
+ 	if (hp_pin_sense)
+ 		msleep(100);
+ 
+ 	alc_auto_setup_eapd(codec, false);
+-	snd_hda_shutup_pins(codec);
++	alc_shutup_pins(codec);
+ }
+ 
+ static void alc225_init(struct hda_codec *codec)
+@@ -3334,7 +3347,7 @@ static void alc225_shutup(struct hda_codec *codec)
+ 		msleep(100);
+ 
+ 	alc_auto_setup_eapd(codec, false);
+-	snd_hda_shutup_pins(codec);
++	alc_shutup_pins(codec);
+ }
+ 
+ static void alc_default_init(struct hda_codec *codec)
+@@ -3388,14 +3401,15 @@ static void alc_default_shutup(struct hda_codec *codec)
+ 	if (hp_pin_sense)
+ 		msleep(85);
+ 
+-	snd_hda_codec_write(codec, hp_pin, 0,
+-			    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
++	if (!spec->no_shutup_pins)
++		snd_hda_codec_write(codec, hp_pin, 0,
++				    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
+ 
+ 	if (hp_pin_sense)
+ 		msleep(100);
+ 
+ 	alc_auto_setup_eapd(codec, false);
+-	snd_hda_shutup_pins(codec);
++	alc_shutup_pins(codec);
+ }
+ 
+ static void alc294_hp_init(struct hda_codec *codec)
+@@ -3412,8 +3426,9 @@ static void alc294_hp_init(struct hda_codec *codec)
+ 
+ 	msleep(100);
+ 
+-	snd_hda_codec_write(codec, hp_pin, 0,
+-			    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
++	if (!spec->no_shutup_pins)
++		snd_hda_codec_write(codec, hp_pin, 0,
++				    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
+ 
+ 	alc_update_coef_idx(codec, 0x6f, 0x000f, 0);/* Set HP depop to manual mode */
+ 	alc_update_coefex_idx(codec, 0x58, 0x00, 0x8000, 0x8000); /* HP depop procedure start */
+@@ -5007,16 +5022,12 @@ static void alc_fixup_auto_mute_via_amp(struct hda_codec *codec,
+ 	}
+ }
+ 
+-static void alc_no_shutup(struct hda_codec *codec)
+-{
+-}
+-
+ static void alc_fixup_no_shutup(struct hda_codec *codec,
+ 				const struct hda_fixup *fix, int action)
+ {
+ 	if (action == HDA_FIXUP_ACT_PRE_PROBE) {
+ 		struct alc_spec *spec = codec->spec;
+-		spec->shutup = alc_no_shutup;
++		spec->no_shutup_pins = 1;
+ 	}
+ }
+ 
+@@ -5479,7 +5490,7 @@ static void alc_headset_btn_callback(struct hda_codec *codec,
+ 	jack->jack->button_state = report;
+ }
+ 
+-static void alc_fixup_headset_jack(struct hda_codec *codec,
++static void alc295_fixup_chromebook(struct hda_codec *codec,
+ 				    const struct hda_fixup *fix, int action)
+ {
+ 
+@@ -5489,6 +5500,16 @@ static void alc_fixup_headset_jack(struct hda_codec *codec,
+ 						    alc_headset_btn_callback);
+ 		snd_hda_jack_add_kctl(codec, 0x55, "Headset Jack", false,
+ 				      SND_JACK_HEADSET, alc_headset_btn_keymap);
++		switch (codec->core.vendor_id) {
++		case 0x10ec0295:
++			alc_update_coef_idx(codec, 0x4a, 0x8000, 1 << 15); /* Reset HP JD */
++			alc_update_coef_idx(codec, 0x4a, 0x8000, 0 << 15);
++			break;
++		case 0x10ec0236:
++			alc_update_coef_idx(codec, 0x1b, 0x8000, 1 << 15); /* Reset HP JD */
++			alc_update_coef_idx(codec, 0x1b, 0x8000, 0 << 15);
++			break;
++		}
+ 		break;
+ 	case HDA_FIXUP_ACT_INIT:
+ 		switch (codec->core.vendor_id) {
+@@ -5641,6 +5662,7 @@ enum {
+ 	ALC233_FIXUP_ASUS_MIC_NO_PRESENCE,
+ 	ALC233_FIXUP_EAPD_COEF_AND_MIC_NO_PRESENCE,
+ 	ALC233_FIXUP_LENOVO_MULTI_CODECS,
++	ALC233_FIXUP_ACER_HEADSET_MIC,
+ 	ALC294_FIXUP_LENOVO_MIC_LOCATION,
+ 	ALC225_FIXUP_DELL_WYSE_MIC_NO_PRESENCE,
+ 	ALC700_FIXUP_INTEL_REFERENCE,
+@@ -5658,9 +5680,16 @@ enum {
+ 	ALC294_FIXUP_ASUS_MIC,
+ 	ALC294_FIXUP_ASUS_HEADSET_MIC,
+ 	ALC294_FIXUP_ASUS_SPK,
+-	ALC225_FIXUP_HEADSET_JACK,
+ 	ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE,
+ 	ALC285_FIXUP_LENOVO_PC_BEEP_IN_NOISE,
++	ALC255_FIXUP_ACER_HEADSET_MIC,
++	ALC295_FIXUP_CHROME_BOOK,
++	ALC225_FIXUP_DELL_WYSE_AIO_MIC_NO_PRESENCE,
++	ALC225_FIXUP_WYSE_AUTO_MUTE,
++	ALC225_FIXUP_WYSE_DISABLE_MIC_VREF,
++	ALC286_FIXUP_ACER_AIO_HEADSET_MIC,
++	ALC256_FIXUP_ASUS_MIC_NO_PRESENCE,
++	ALC299_FIXUP_PREDATOR_SPK,
+ };
+ 
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -6461,6 +6490,16 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc233_alc662_fixup_lenovo_dual_codecs,
+ 	},
++	[ALC233_FIXUP_ACER_HEADSET_MIC] = {
++		.type = HDA_FIXUP_VERBS,
++		.v.verbs = (const struct hda_verb[]) {
++			{ 0x20, AC_VERB_SET_COEF_INDEX, 0x45 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x5089 },
++			{ }
++		},
++		.chained = true,
++		.chain_id = ALC233_FIXUP_ASUS_MIC_NO_PRESENCE
++	},
+ 	[ALC294_FIXUP_LENOVO_MIC_LOCATION] = {
+ 		.type = HDA_FIXUP_PINS,
+ 		.v.pins = (const struct hda_pintbl[]) {
+@@ -6603,9 +6642,9 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC294_FIXUP_ASUS_HEADSET_MIC
+ 	},
+-	[ALC225_FIXUP_HEADSET_JACK] = {
++	[ALC295_FIXUP_CHROME_BOOK] = {
+ 		.type = HDA_FIXUP_FUNC,
+-		.v.func = alc_fixup_headset_jack,
++		.v.func = alc295_fixup_chromebook,
+ 	},
+ 	[ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE] = {
+ 		.type = HDA_FIXUP_PINS,
+@@ -6627,6 +6666,64 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC285_FIXUP_LENOVO_HEADPHONE_NOISE
+ 	},
++	[ALC255_FIXUP_ACER_HEADSET_MIC] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x19, 0x03a11130 },
++			{ 0x1a, 0x90a60140 }, /* use as internal mic */
++			{ }
++		},
++		.chained = true,
++		.chain_id = ALC255_FIXUP_HEADSET_MODE_NO_HP_MIC
++	},
++	[ALC225_FIXUP_DELL_WYSE_AIO_MIC_NO_PRESENCE] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x16, 0x01011020 }, /* Rear Line out */
++			{ 0x19, 0x01a1913c }, /* use as Front headset mic, without its own jack detect */
++			{ }
++		},
++		.chained = true,
++		.chain_id = ALC225_FIXUP_WYSE_AUTO_MUTE
++	},
++	[ALC225_FIXUP_WYSE_AUTO_MUTE] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc_fixup_auto_mute_via_amp,
++		.chained = true,
++		.chain_id = ALC225_FIXUP_WYSE_DISABLE_MIC_VREF
++	},
++	[ALC225_FIXUP_WYSE_DISABLE_MIC_VREF] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc_fixup_disable_mic_vref,
++		.chained = true,
++		.chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC
++	},
++	[ALC286_FIXUP_ACER_AIO_HEADSET_MIC] = {
++		.type = HDA_FIXUP_VERBS,
++		.v.verbs = (const struct hda_verb[]) {
++			{ 0x20, AC_VERB_SET_COEF_INDEX, 0x4f },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x5029 },
++			{ }
++		},
++		.chained = true,
++		.chain_id = ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE
++	},
++	[ALC256_FIXUP_ASUS_MIC_NO_PRESENCE] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x19, 0x04a11120 }, /* use as headset mic, without its own jack detect */
++			{ }
++		},
++		.chained = true,
++		.chain_id = ALC256_FIXUP_ASUS_HEADSET_MODE
++	},
++	[ALC299_FIXUP_PREDATOR_SPK] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x21, 0x90170150 }, /* use as headset mic, without its own jack detect */
++			{ }
++		}
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -6643,9 +6740,15 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1025, 0x079b, "Acer Aspire V5-573G", ALC282_FIXUP_ASPIRE_V5_PINS),
+ 	SND_PCI_QUIRK(0x1025, 0x102b, "Acer Aspire C24-860", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1025, 0x106d, "Acer Cloudbook 14", ALC283_FIXUP_CHROME_BOOK),
+-	SND_PCI_QUIRK(0x1025, 0x128f, "Acer Veriton Z6860G", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE),
+-	SND_PCI_QUIRK(0x1025, 0x1290, "Acer Veriton Z4860G", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE),
+-	SND_PCI_QUIRK(0x1025, 0x1291, "Acer Veriton Z4660G", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1025, 0x1099, "Acer Aspire E5-523G", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1025, 0x110e, "Acer Aspire ES1-432", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1025, 0x1246, "Acer Predator Helios 500", ALC299_FIXUP_PREDATOR_SPK),
++	SND_PCI_QUIRK(0x1025, 0x128f, "Acer Veriton Z6860G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1025, 0x1290, "Acer Veriton Z4860G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1025, 0x1291, "Acer Veriton Z4660G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1025, 0x1308, "Acer Aspire Z24-890", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1025, 0x132a, "Acer TravelMate B114-21", ALC233_FIXUP_ACER_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1025, 0x1330, "Acer TravelMate X514-51T", ALC255_FIXUP_ACER_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1028, 0x0470, "Dell M101z", ALC269_FIXUP_DELL_M101Z),
+ 	SND_PCI_QUIRK(0x1028, 0x054b, "Dell XPS one 2710", ALC275_FIXUP_DELL_XPS),
+ 	SND_PCI_QUIRK(0x1028, 0x05bd, "Dell Latitude E6440", ALC292_FIXUP_DELL_E7X),
+@@ -6677,6 +6780,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x0704, "Dell XPS 13 9350", ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE),
+ 	SND_PCI_QUIRK(0x1028, 0x0706, "Dell Inspiron 7559", ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER),
+ 	SND_PCI_QUIRK(0x1028, 0x0725, "Dell Inspiron 3162", ALC255_FIXUP_DELL_SPK_NOISE),
++	SND_PCI_QUIRK(0x1028, 0x0738, "Dell Precision 5820", ALC269_FIXUP_NO_SHUTUP),
+ 	SND_PCI_QUIRK(0x1028, 0x075b, "Dell XPS 13 9360", ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE),
+ 	SND_PCI_QUIRK(0x1028, 0x075c, "Dell XPS 27 7760", ALC298_FIXUP_SPK_VOLUME),
+ 	SND_PCI_QUIRK(0x1028, 0x075d, "Dell AIO", ALC298_FIXUP_SPK_VOLUME),
+@@ -6689,6 +6793,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x0871, "Dell Precision 3630", ALC255_FIXUP_DELL_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1028, 0x0872, "Dell Precision 3630", ALC255_FIXUP_DELL_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1028, 0x0873, "Dell Precision 3930", ALC255_FIXUP_DUMMY_LINEOUT_VERB),
++	SND_PCI_QUIRK(0x1028, 0x08ad, "Dell WYSE AIO", ALC225_FIXUP_DELL_WYSE_AIO_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1028, 0x08ae, "Dell WYSE NB", ALC225_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x0935, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB),
+ 	SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+@@ -6751,11 +6857,13 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x2336, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ 	SND_PCI_QUIRK(0x103c, 0x2337, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
+ 	SND_PCI_QUIRK(0x103c, 0x221c, "HP EliteBook 755 G2", ALC280_FIXUP_HP_HEADSET_MIC),
++	SND_PCI_QUIRK(0x103c, 0x802e, "HP Z240 SFF", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x103c, 0x802f, "HP Z240", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x820d, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x103c, 0x8256, "HP", ALC221_FIXUP_HP_FRONT_MIC),
+ 	SND_PCI_QUIRK(0x103c, 0x827e, "HP x360", ALC295_FIXUP_HP_X360),
+-	SND_PCI_QUIRK(0x103c, 0x82bf, "HP", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
+-	SND_PCI_QUIRK(0x103c, 0x82c0, "HP", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x103c, 0x82bf, "HP G3 mini", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x103c, 0x82c0, "HP G3 mini premium", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x103c, 0x83b9, "HP Spectre x360", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
+@@ -6771,7 +6879,6 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x12e0, "ASUS X541SA", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x13b0, "ASUS Z550SA", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_ASUS_ZENBOOK),
+-	SND_PCI_QUIRK(0x1043, 0x14a1, "ASUS UX533FD", ALC294_FIXUP_ASUS_SPK),
+ 	SND_PCI_QUIRK(0x1043, 0x1517, "Asus Zenbook UX31A", ALC269VB_FIXUP_ASUS_ZENBOOK_UX31A),
+ 	SND_PCI_QUIRK(0x1043, 0x16e3, "ASUS UX50", ALC269_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1a13, "Asus G73Jw", ALC269_FIXUP_ASUS_G73JW),
+@@ -7036,7 +7143,8 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ 	{.id = ALC255_FIXUP_DUMMY_LINEOUT_VERB, .name = "alc255-dummy-lineout"},
+ 	{.id = ALC255_FIXUP_DELL_HEADSET_MIC, .name = "alc255-dell-headset"},
+ 	{.id = ALC295_FIXUP_HP_X360, .name = "alc295-hp-x360"},
+-	{.id = ALC225_FIXUP_HEADSET_JACK, .name = "alc-sense-combo"},
++	{.id = ALC295_FIXUP_CHROME_BOOK, .name = "alc-sense-combo"},
++	{.id = ALC299_FIXUP_PREDATOR_SPK, .name = "predator-spk"},
+ 	{}
+ };
+ #define ALC225_STANDARD_PINS \
+@@ -7257,6 +7365,18 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 		{0x14, 0x90170110},
+ 		{0x1b, 0x90a70130},
+ 		{0x21, 0x03211020}),
++	SND_HDA_PIN_QUIRK(0x10ec0256, 0x1043, "ASUS", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE,
++		{0x12, 0x90a60130},
++		{0x14, 0x90170110},
++		{0x21, 0x03211020}),
++	SND_HDA_PIN_QUIRK(0x10ec0256, 0x1043, "ASUS", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE,
++		{0x12, 0x90a60130},
++		{0x14, 0x90170110},
++		{0x21, 0x04211020}),
++	SND_HDA_PIN_QUIRK(0x10ec0256, 0x1043, "ASUS", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE,
++		{0x1a, 0x90a70130},
++		{0x1b, 0x90170110},
++		{0x21, 0x03211020}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0274, 0x1028, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB,
+ 		{0x12, 0xb7a60130},
+ 		{0x13, 0xb8a61140},
+@@ -7388,6 +7508,10 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 		{0x14, 0x90170110},
+ 		{0x1b, 0x90a70130},
+ 		{0x21, 0x04211020}),
++	SND_HDA_PIN_QUIRK(0x10ec0294, 0x1043, "ASUS", ALC294_FIXUP_ASUS_SPK,
++		{0x12, 0x90a60130},
++		{0x17, 0x90170110},
++		{0x21, 0x03211020}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0294, 0x1043, "ASUS", ALC294_FIXUP_ASUS_SPK,
+ 		{0x12, 0x90a60130},
+ 		{0x17, 0x90170110},
+diff --git a/sound/soc/codecs/pcm186x.c b/sound/soc/codecs/pcm186x.c
+index 809b7e9f03ca..c5fcc632f670 100644
+--- a/sound/soc/codecs/pcm186x.c
++++ b/sound/soc/codecs/pcm186x.c
+@@ -42,7 +42,7 @@ struct pcm186x_priv {
+ 	bool is_master_mode;
+ };
+ 
+-static const DECLARE_TLV_DB_SCALE(pcm186x_pga_tlv, -1200, 4000, 50);
++static const DECLARE_TLV_DB_SCALE(pcm186x_pga_tlv, -1200, 50, 0);
+ 
+ static const struct snd_kcontrol_new pcm1863_snd_controls[] = {
+ 	SOC_DOUBLE_R_S_TLV("ADC Capture Volume", PCM186X_PGA_VAL_CH1_L,
+@@ -158,7 +158,7 @@ static const struct snd_soc_dapm_widget pcm1863_dapm_widgets[] = {
+ 	 * Put the codec into SLEEP mode when not in use, allowing the
+ 	 * Energysense mechanism to operate.
+ 	 */
+-	SND_SOC_DAPM_ADC("ADC", "HiFi Capture", PCM186X_POWER_CTRL, 1,  0),
++	SND_SOC_DAPM_ADC("ADC", "HiFi Capture", PCM186X_POWER_CTRL, 1,  1),
+ };
+ 
+ static const struct snd_soc_dapm_widget pcm1865_dapm_widgets[] = {
+@@ -184,8 +184,8 @@ static const struct snd_soc_dapm_widget pcm1865_dapm_widgets[] = {
+ 	 * Put the codec into SLEEP mode when not in use, allowing the
+ 	 * Energysense mechanism to operate.
+ 	 */
+-	SND_SOC_DAPM_ADC("ADC1", "HiFi Capture 1", PCM186X_POWER_CTRL, 1,  0),
+-	SND_SOC_DAPM_ADC("ADC2", "HiFi Capture 2", PCM186X_POWER_CTRL, 1,  0),
++	SND_SOC_DAPM_ADC("ADC1", "HiFi Capture 1", PCM186X_POWER_CTRL, 1,  1),
++	SND_SOC_DAPM_ADC("ADC2", "HiFi Capture 2", PCM186X_POWER_CTRL, 1,  1),
+ };
+ 
+ static const struct snd_soc_dapm_route pcm1863_dapm_routes[] = {
+diff --git a/sound/soc/fsl/fsl-asoc-card.c b/sound/soc/fsl/fsl-asoc-card.c
+index 81f2fe2c6d23..60f87a0d99f4 100644
+--- a/sound/soc/fsl/fsl-asoc-card.c
++++ b/sound/soc/fsl/fsl-asoc-card.c
+@@ -689,6 +689,7 @@ static int fsl_asoc_card_probe(struct platform_device *pdev)
+ asrc_fail:
+ 	of_node_put(asrc_np);
+ 	of_node_put(codec_np);
++	put_device(&cpu_pdev->dev);
+ fail:
+ 	of_node_put(cpu_np);
+ 
+diff --git a/sound/soc/fsl/fsl_esai.c b/sound/soc/fsl/fsl_esai.c
+index 57b484768a58..3623aa9a6f2e 100644
+--- a/sound/soc/fsl/fsl_esai.c
++++ b/sound/soc/fsl/fsl_esai.c
+@@ -54,6 +54,8 @@ struct fsl_esai {
+ 	u32 fifo_depth;
+ 	u32 slot_width;
+ 	u32 slots;
++	u32 tx_mask;
++	u32 rx_mask;
+ 	u32 hck_rate[2];
+ 	u32 sck_rate[2];
+ 	bool hck_dir[2];
+@@ -361,21 +363,13 @@ static int fsl_esai_set_dai_tdm_slot(struct snd_soc_dai *dai, u32 tx_mask,
+ 	regmap_update_bits(esai_priv->regmap, REG_ESAI_TCCR,
+ 			   ESAI_xCCR_xDC_MASK, ESAI_xCCR_xDC(slots));
+ 
+-	regmap_update_bits(esai_priv->regmap, REG_ESAI_TSMA,
+-			   ESAI_xSMA_xS_MASK, ESAI_xSMA_xS(tx_mask));
+-	regmap_update_bits(esai_priv->regmap, REG_ESAI_TSMB,
+-			   ESAI_xSMB_xS_MASK, ESAI_xSMB_xS(tx_mask));
+-
+ 	regmap_update_bits(esai_priv->regmap, REG_ESAI_RCCR,
+ 			   ESAI_xCCR_xDC_MASK, ESAI_xCCR_xDC(slots));
+ 
+-	regmap_update_bits(esai_priv->regmap, REG_ESAI_RSMA,
+-			   ESAI_xSMA_xS_MASK, ESAI_xSMA_xS(rx_mask));
+-	regmap_update_bits(esai_priv->regmap, REG_ESAI_RSMB,
+-			   ESAI_xSMB_xS_MASK, ESAI_xSMB_xS(rx_mask));
+-
+ 	esai_priv->slot_width = slot_width;
+ 	esai_priv->slots = slots;
++	esai_priv->tx_mask = tx_mask;
++	esai_priv->rx_mask = rx_mask;
+ 
+ 	return 0;
+ }
+@@ -398,7 +392,8 @@ static int fsl_esai_set_dai_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ 		break;
+ 	case SND_SOC_DAIFMT_RIGHT_J:
+ 		/* Data on rising edge of bclk, frame high, right aligned */
+-		xccr |= ESAI_xCCR_xCKP | ESAI_xCCR_xHCKP | ESAI_xCR_xWA;
++		xccr |= ESAI_xCCR_xCKP | ESAI_xCCR_xHCKP;
++		xcr  |= ESAI_xCR_xWA;
+ 		break;
+ 	case SND_SOC_DAIFMT_DSP_A:
+ 		/* Data on rising edge of bclk, frame high, 1clk before data */
+@@ -455,12 +450,12 @@ static int fsl_esai_set_dai_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ 		return -EINVAL;
+ 	}
+ 
+-	mask = ESAI_xCR_xFSL | ESAI_xCR_xFSR;
++	mask = ESAI_xCR_xFSL | ESAI_xCR_xFSR | ESAI_xCR_xWA;
+ 	regmap_update_bits(esai_priv->regmap, REG_ESAI_TCR, mask, xcr);
+ 	regmap_update_bits(esai_priv->regmap, REG_ESAI_RCR, mask, xcr);
+ 
+ 	mask = ESAI_xCCR_xCKP | ESAI_xCCR_xHCKP | ESAI_xCCR_xFSP |
+-		ESAI_xCCR_xFSD | ESAI_xCCR_xCKD | ESAI_xCR_xWA;
++		ESAI_xCCR_xFSD | ESAI_xCCR_xCKD;
+ 	regmap_update_bits(esai_priv->regmap, REG_ESAI_TCCR, mask, xccr);
+ 	regmap_update_bits(esai_priv->regmap, REG_ESAI_RCCR, mask, xccr);
+ 
+@@ -595,6 +590,7 @@ static int fsl_esai_trigger(struct snd_pcm_substream *substream, int cmd,
+ 	bool tx = substream->stream == SNDRV_PCM_STREAM_PLAYBACK;
+ 	u8 i, channels = substream->runtime->channels;
+ 	u32 pins = DIV_ROUND_UP(channels, esai_priv->slots);
++	u32 mask;
+ 
+ 	switch (cmd) {
+ 	case SNDRV_PCM_TRIGGER_START:
+@@ -607,15 +603,38 @@ static int fsl_esai_trigger(struct snd_pcm_substream *substream, int cmd,
+ 		for (i = 0; tx && i < channels; i++)
+ 			regmap_write(esai_priv->regmap, REG_ESAI_ETDR, 0x0);
+ 
++		/*
++		 * When set the TE/RE in the end of enablement flow, there
++		 * will be channel swap issue for multi data line case.
++		 * In order to workaround this issue, we switch the bit
++		 * enablement sequence to below sequence
++		 * 1) clear the xSMB & xSMA: which is done in probe and
++		 *                           stop state.
++		 * 2) set TE/RE
++		 * 3) set xSMB
++		 * 4) set xSMA:  xSMA is the last one in this flow, which
++		 *               will trigger esai to start.
++		 */
+ 		regmap_update_bits(esai_priv->regmap, REG_ESAI_xCR(tx),
+ 				   tx ? ESAI_xCR_TE_MASK : ESAI_xCR_RE_MASK,
+ 				   tx ? ESAI_xCR_TE(pins) : ESAI_xCR_RE(pins));
++		mask = tx ? esai_priv->tx_mask : esai_priv->rx_mask;
++
++		regmap_update_bits(esai_priv->regmap, REG_ESAI_xSMB(tx),
++				   ESAI_xSMB_xS_MASK, ESAI_xSMB_xS(mask));
++		regmap_update_bits(esai_priv->regmap, REG_ESAI_xSMA(tx),
++				   ESAI_xSMA_xS_MASK, ESAI_xSMA_xS(mask));
++
+ 		break;
+ 	case SNDRV_PCM_TRIGGER_SUSPEND:
+ 	case SNDRV_PCM_TRIGGER_STOP:
+ 	case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
+ 		regmap_update_bits(esai_priv->regmap, REG_ESAI_xCR(tx),
+ 				   tx ? ESAI_xCR_TE_MASK : ESAI_xCR_RE_MASK, 0);
++		regmap_update_bits(esai_priv->regmap, REG_ESAI_xSMA(tx),
++				   ESAI_xSMA_xS_MASK, 0);
++		regmap_update_bits(esai_priv->regmap, REG_ESAI_xSMB(tx),
++				   ESAI_xSMB_xS_MASK, 0);
+ 
+ 		/* Disable and reset FIFO */
+ 		regmap_update_bits(esai_priv->regmap, REG_ESAI_xFCR(tx),
+@@ -905,6 +924,15 @@ static int fsl_esai_probe(struct platform_device *pdev)
+ 		return ret;
+ 	}
+ 
++	esai_priv->tx_mask = 0xFFFFFFFF;
++	esai_priv->rx_mask = 0xFFFFFFFF;
++
++	/* Clear the TSMA, TSMB, RSMA, RSMB */
++	regmap_write(esai_priv->regmap, REG_ESAI_TSMA, 0);
++	regmap_write(esai_priv->regmap, REG_ESAI_TSMB, 0);
++	regmap_write(esai_priv->regmap, REG_ESAI_RSMA, 0);
++	regmap_write(esai_priv->regmap, REG_ESAI_RSMB, 0);
++
+ 	ret = devm_snd_soc_register_component(&pdev->dev, &fsl_esai_component,
+ 					      &fsl_esai_dai, 1);
+ 	if (ret) {
+diff --git a/sound/soc/fsl/imx-sgtl5000.c b/sound/soc/fsl/imx-sgtl5000.c
+index c29200cf755a..9b9a7ec52905 100644
+--- a/sound/soc/fsl/imx-sgtl5000.c
++++ b/sound/soc/fsl/imx-sgtl5000.c
+@@ -108,6 +108,7 @@ static int imx_sgtl5000_probe(struct platform_device *pdev)
+ 		ret = -EPROBE_DEFER;
+ 		goto fail;
+ 	}
++	put_device(&ssi_pdev->dev);
+ 	codec_dev = of_find_i2c_device_by_node(codec_np);
+ 	if (!codec_dev) {
+ 		dev_err(&pdev->dev, "failed to find codec platform device\n");
+diff --git a/sound/soc/generic/simple-card-utils.c b/sound/soc/generic/simple-card-utils.c
+index b807a47515eb..336895f7fd1e 100644
+--- a/sound/soc/generic/simple-card-utils.c
++++ b/sound/soc/generic/simple-card-utils.c
+@@ -283,12 +283,20 @@ static int asoc_simple_card_get_dai_id(struct device_node *ep)
+ 	/* use endpoint/port reg if exist */
+ 	ret = of_graph_parse_endpoint(ep, &info);
+ 	if (ret == 0) {
+-		if (info.id)
++		/*
++		 * Because it will count port/endpoint if it doesn't have "reg".
++		 * But, we can't judge whether it has "no reg", or "reg = <0>"
++		 * only of_graph_parse_endpoint().
++		 * We need to check "reg" property
++		 */
++		if (of_get_property(ep,   "reg", NULL))
+ 			return info.id;
+-		if (info.port)
++
++		node = of_get_parent(ep);
++		of_node_put(node);
++		if (of_get_property(node, "reg", NULL))
+ 			return info.port;
+ 	}
+-
+ 	node = of_graph_get_port_parent(ep);
+ 
+ 	/*
+diff --git a/sound/soc/intel/atom/sst-mfld-platform-pcm.c b/sound/soc/intel/atom/sst-mfld-platform-pcm.c
+index 91a2436ce952..e9623da911d5 100644
+--- a/sound/soc/intel/atom/sst-mfld-platform-pcm.c
++++ b/sound/soc/intel/atom/sst-mfld-platform-pcm.c
+@@ -711,9 +711,17 @@ static int sst_soc_probe(struct snd_soc_component *component)
+ 	return sst_dsp_init_v2_dpcm(component);
+ }
+ 
++static void sst_soc_remove(struct snd_soc_component *component)
++{
++	struct sst_data *drv = dev_get_drvdata(component->dev);
++
++	drv->soc_card = NULL;
++}
++
+ static const struct snd_soc_component_driver sst_soc_platform_drv  = {
+ 	.name		= DRV_NAME,
+ 	.probe		= sst_soc_probe,
++	.remove		= sst_soc_remove,
+ 	.ops		= &sst_platform_ops,
+ 	.compr_ops	= &sst_platform_compr_ops,
+ 	.pcm_new	= sst_pcm_new,
+diff --git a/sound/soc/qcom/common.c b/sound/soc/qcom/common.c
+index 4715527054e5..5661025e8cec 100644
+--- a/sound/soc/qcom/common.c
++++ b/sound/soc/qcom/common.c
+@@ -42,6 +42,9 @@ int qcom_snd_parse_of(struct snd_soc_card *card)
+ 	link = card->dai_link;
+ 	for_each_child_of_node(dev->of_node, np) {
+ 		cpu = of_get_child_by_name(np, "cpu");
++		platform = of_get_child_by_name(np, "platform");
++		codec = of_get_child_by_name(np, "codec");
++
+ 		if (!cpu) {
+ 			dev_err(dev, "Can't find cpu DT node\n");
+ 			ret = -EINVAL;
+@@ -63,8 +66,6 @@ int qcom_snd_parse_of(struct snd_soc_card *card)
+ 			goto err;
+ 		}
+ 
+-		platform = of_get_child_by_name(np, "platform");
+-		codec = of_get_child_by_name(np, "codec");
+ 		if (codec && platform) {
+ 			link->platform_of_node = of_parse_phandle(platform,
+ 					"sound-dai",
+@@ -100,10 +101,15 @@ int qcom_snd_parse_of(struct snd_soc_card *card)
+ 		link->dpcm_capture = 1;
+ 		link->stream_name = link->name;
+ 		link++;
++
++		of_node_put(cpu);
++		of_node_put(codec);
++		of_node_put(platform);
+ 	}
+ 
+ 	return 0;
+ err:
++	of_node_put(np);
+ 	of_node_put(cpu);
+ 	of_node_put(codec);
+ 	of_node_put(platform);
+diff --git a/sound/xen/xen_snd_front_alsa.c b/sound/xen/xen_snd_front_alsa.c
+index a7f413cb704d..b14ab512c2ce 100644
+--- a/sound/xen/xen_snd_front_alsa.c
++++ b/sound/xen/xen_snd_front_alsa.c
+@@ -441,7 +441,7 @@ static int shbuf_setup_backstore(struct xen_snd_front_pcm_stream_info *stream,
+ {
+ 	int i;
+ 
+-	stream->buffer = alloc_pages_exact(stream->buffer_sz, GFP_KERNEL);
++	stream->buffer = alloc_pages_exact(buffer_sz, GFP_KERNEL);
+ 	if (!stream->buffer)
+ 		return -ENOMEM;
+ 
+diff --git a/tools/build/Makefile.feature b/tools/build/Makefile.feature
+index 5467c6bf9ceb..bb9dca65eb5f 100644
+--- a/tools/build/Makefile.feature
++++ b/tools/build/Makefile.feature
+@@ -70,7 +70,6 @@ FEATURE_TESTS_BASIC :=                  \
+         sched_getcpu			\
+         sdt				\
+         setns				\
+-        libopencsd			\
+         libaio
+ 
+ # FEATURE_TESTS_BASIC + FEATURE_TESTS_EXTRA is the complete list
+@@ -84,6 +83,7 @@ FEATURE_TESTS_EXTRA :=                  \
+          libbabeltrace                  \
+          libbfd-liberty                 \
+          libbfd-liberty-z               \
++         libopencsd                     \
+          libunwind-debug-frame          \
+          libunwind-debug-frame-arm      \
+          libunwind-debug-frame-aarch64  \
+diff --git a/tools/build/feature/test-all.c b/tools/build/feature/test-all.c
+index 20cdaa4fc112..e903b86b742f 100644
+--- a/tools/build/feature/test-all.c
++++ b/tools/build/feature/test-all.c
+@@ -170,14 +170,14 @@
+ # include "test-setns.c"
+ #undef main
+ 
+-#define main main_test_libopencsd
+-# include "test-libopencsd.c"
+-#undef main
+-
+ #define main main_test_libaio
+ # include "test-libaio.c"
+ #undef main
+ 
++#define main main_test_reallocarray
++# include "test-reallocarray.c"
++#undef main
++
+ int main(int argc, char *argv[])
+ {
+ 	main_test_libpython();
+@@ -217,8 +217,8 @@ int main(int argc, char *argv[])
+ 	main_test_sched_getcpu();
+ 	main_test_sdt();
+ 	main_test_setns();
+-	main_test_libopencsd();
+ 	main_test_libaio();
++	main_test_reallocarray();
+ 
+ 	return 0;
+ }
+diff --git a/tools/build/feature/test-reallocarray.c b/tools/build/feature/test-reallocarray.c
+index 8170de35150d..8f6743e31da7 100644
+--- a/tools/build/feature/test-reallocarray.c
++++ b/tools/build/feature/test-reallocarray.c
+@@ -6,3 +6,5 @@ int main(void)
+ {
+ 	return !!reallocarray(NULL, 1, 1);
+ }
++
++#undef _GNU_SOURCE
+diff --git a/tools/lib/bpf/Makefile b/tools/lib/bpf/Makefile
+index 34d9c3619c96..78fd86b85087 100644
+--- a/tools/lib/bpf/Makefile
++++ b/tools/lib/bpf/Makefile
+@@ -162,7 +162,8 @@ endif
+ 
+ TARGETS = $(CMD_TARGETS)
+ 
+-all: fixdep all_cmd
++all: fixdep
++	$(Q)$(MAKE) all_cmd
+ 
+ all_cmd: $(CMD_TARGETS) check
+ 
+diff --git a/tools/lib/lockdep/run_tests.sh b/tools/lib/lockdep/run_tests.sh
+index c8fbd0306960..11f425662b43 100755
+--- a/tools/lib/lockdep/run_tests.sh
++++ b/tools/lib/lockdep/run_tests.sh
+@@ -11,7 +11,7 @@ find tests -name '*.c' | sort | while read -r i; do
+ 	testname=$(basename "$i" .c)
+ 	echo -ne "$testname... "
+ 	if gcc -o "tests/$testname" -pthread "$i" liblockdep.a -Iinclude -D__USE_LIBLOCKDEP &&
+-		timeout 1 "tests/$testname" 2>&1 | "tests/${testname}.sh"; then
++		timeout 1 "tests/$testname" 2>&1 | /bin/bash "tests/${testname}.sh"; then
+ 		echo "PASSED!"
+ 	else
+ 		echo "FAILED!"
+@@ -24,7 +24,7 @@ find tests -name '*.c' | sort | while read -r i; do
+ 	echo -ne "(PRELOAD) $testname... "
+ 	if gcc -o "tests/$testname" -pthread -Iinclude "$i" &&
+ 		timeout 1 ./lockdep "tests/$testname" 2>&1 |
+-		"tests/${testname}.sh"; then
++		/bin/bash "tests/${testname}.sh"; then
+ 		echo "PASSED!"
+ 	else
+ 		echo "FAILED!"
+@@ -37,7 +37,7 @@ find tests -name '*.c' | sort | while read -r i; do
+ 	echo -ne "(PRELOAD + Valgrind) $testname... "
+ 	if gcc -o "tests/$testname" -pthread -Iinclude "$i" &&
+ 		{ timeout 10 valgrind --read-var-info=yes ./lockdep "./tests/$testname" >& "tests/${testname}.vg.out"; true; } &&
+-		"tests/${testname}.sh" < "tests/${testname}.vg.out" &&
++		/bin/bash "tests/${testname}.sh" < "tests/${testname}.vg.out" &&
+ 		! grep -Eq '(^==[0-9]*== (Invalid |Uninitialised ))|Mismatched free|Source and destination overlap| UME ' "tests/${testname}.vg.out"; then
+ 		echo "PASSED!"
+ 	else
+diff --git a/tools/lib/traceevent/event-parse.c b/tools/lib/traceevent/event-parse.c
+index abd4fa5d3088..87494c7c619d 100644
+--- a/tools/lib/traceevent/event-parse.c
++++ b/tools/lib/traceevent/event-parse.c
+@@ -2457,7 +2457,7 @@ static int arg_num_eval(struct tep_print_arg *arg, long long *val)
+ static char *arg_eval (struct tep_print_arg *arg)
+ {
+ 	long long val;
+-	static char buf[20];
++	static char buf[24];
+ 
+ 	switch (arg->type) {
+ 	case TEP_PRINT_ATOM:
+diff --git a/tools/objtool/Makefile b/tools/objtool/Makefile
+index c9d038f91af6..53f8be0f4a1f 100644
+--- a/tools/objtool/Makefile
++++ b/tools/objtool/Makefile
+@@ -25,14 +25,17 @@ LIBSUBCMD		= $(LIBSUBCMD_OUTPUT)libsubcmd.a
+ OBJTOOL    := $(OUTPUT)objtool
+ OBJTOOL_IN := $(OBJTOOL)-in.o
+ 
++LIBELF_FLAGS := $(shell pkg-config libelf --cflags 2>/dev/null)
++LIBELF_LIBS  := $(shell pkg-config libelf --libs 2>/dev/null || echo -lelf)
++
+ all: $(OBJTOOL)
+ 
+ INCLUDES := -I$(srctree)/tools/include \
+ 	    -I$(srctree)/tools/arch/$(HOSTARCH)/include/uapi \
+ 	    -I$(srctree)/tools/objtool/arch/$(ARCH)/include
+ WARNINGS := $(EXTRA_WARNINGS) -Wno-switch-default -Wno-switch-enum -Wno-packed
+-CFLAGS   += -Werror $(WARNINGS) $(KBUILD_HOSTCFLAGS) -g $(INCLUDES)
+-LDFLAGS  += -lelf $(LIBSUBCMD) $(KBUILD_HOSTLDFLAGS)
++CFLAGS   += -Werror $(WARNINGS) $(KBUILD_HOSTCFLAGS) -g $(INCLUDES) $(LIBELF_FLAGS)
++LDFLAGS  += $(LIBELF_LIBS) $(LIBSUBCMD) $(KBUILD_HOSTLDFLAGS)
+ 
+ # Allow old libelf to be used:
+ elfshdr := $(shell echo '$(pound)include <libelf.h>' | $(CC) $(CFLAGS) -x c -E - | grep elf_getshdr)
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 0414a0d52262..5dde107083c6 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -2184,9 +2184,10 @@ static void cleanup(struct objtool_file *file)
+ 	elf_close(file->elf);
+ }
+ 
++static struct objtool_file file;
++
+ int check(const char *_objname, bool orc)
+ {
+-	struct objtool_file file;
+ 	int ret, warnings = 0;
+ 
+ 	objname = _objname;
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index b441c88cafa1..cf4a8329c4c0 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -218,6 +218,8 @@ FEATURE_CHECK_LDFLAGS-libpython := $(PYTHON_EMBED_LDOPTS)
+ FEATURE_CHECK_CFLAGS-libpython-version := $(PYTHON_EMBED_CCOPTS)
+ FEATURE_CHECK_LDFLAGS-libpython-version := $(PYTHON_EMBED_LDOPTS)
+ 
++FEATURE_CHECK_LDFLAGS-libaio = -lrt
++
+ CFLAGS += -fno-omit-frame-pointer
+ CFLAGS += -ggdb3
+ CFLAGS += -funwind-tables
+@@ -386,7 +388,8 @@ ifeq ($(feature-setns), 1)
+   $(call detected,CONFIG_SETNS)
+ endif
+ 
+-ifndef NO_CORESIGHT
++ifdef CORESIGHT
++  $(call feature_check,libopencsd)
+   ifeq ($(feature-libopencsd), 1)
+     CFLAGS += -DHAVE_CSTRACE_SUPPORT $(LIBOPENCSD_CFLAGS)
+     LDFLAGS += $(LIBOPENCSD_LDFLAGS)
+diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf
+index 0ee6795d82cc..77f8f069f1e7 100644
+--- a/tools/perf/Makefile.perf
++++ b/tools/perf/Makefile.perf
+@@ -102,7 +102,7 @@ include ../scripts/utilities.mak
+ # When selected, pass LLVM_CONFIG=/path/to/llvm-config to `make' if
+ # llvm-config is not in $PATH.
+ #
+-# Define NO_CORESIGHT if you do not want support for CoreSight trace decoding.
++# Define CORESIGHT if you DO WANT support for CoreSight trace decoding.
+ #
+ # Define NO_AIO if you do not want support of Posix AIO based trace
+ # streaming for record mode. Currently Posix AIO trace streaming is
+diff --git a/tools/perf/builtin-c2c.c b/tools/perf/builtin-c2c.c
+index d340d2e42776..13758a0b367b 100644
+--- a/tools/perf/builtin-c2c.c
++++ b/tools/perf/builtin-c2c.c
+@@ -2055,6 +2055,12 @@ static int setup_nodes(struct perf_session *session)
+ 		if (!set)
+ 			return -ENOMEM;
+ 
++		nodes[node] = set;
++
++		/* empty node, skip */
++		if (cpu_map__empty(map))
++			continue;
++
+ 		for (cpu = 0; cpu < map->nr; cpu++) {
+ 			set_bit(map->map[cpu], set);
+ 
+@@ -2063,8 +2069,6 @@ static int setup_nodes(struct perf_session *session)
+ 
+ 			cpu2node[map->map[cpu]] = node;
+ 		}
+-
+-		nodes[node] = set;
+ 	}
+ 
+ 	setup_nodes_header();
+diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c
+index ac221f137ed2..cff4d10daf49 100644
+--- a/tools/perf/builtin-script.c
++++ b/tools/perf/builtin-script.c
+@@ -148,6 +148,7 @@ static struct {
+ 	unsigned int print_ip_opts;
+ 	u64 fields;
+ 	u64 invalid_fields;
++	u64 user_set_fields;
+ } output[OUTPUT_TYPE_MAX] = {
+ 
+ 	[PERF_TYPE_HARDWARE] = {
+@@ -344,7 +345,7 @@ static int perf_evsel__do_check_stype(struct perf_evsel *evsel,
+ 	if (attr->sample_type & sample_type)
+ 		return 0;
+ 
+-	if (output[type].user_set) {
++	if (output[type].user_set_fields & field) {
+ 		if (allow_user_set)
+ 			return 0;
+ 		evname = perf_evsel__name(evsel);
+@@ -2627,10 +2628,13 @@ parse:
+ 					pr_warning("\'%s\' not valid for %s events. Ignoring.\n",
+ 						   all_output_options[i].str, event_type(j));
+ 				} else {
+-					if (change == REMOVE)
++					if (change == REMOVE) {
+ 						output[j].fields &= ~all_output_options[i].field;
+-					else
++						output[j].user_set_fields &= ~all_output_options[i].field;
++					} else {
+ 						output[j].fields |= all_output_options[i].field;
++						output[j].user_set_fields |= all_output_options[i].field;
++					}
+ 					output[j].user_set = true;
+ 					output[j].wildcard_set = true;
+ 				}
+diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
+index b36061cd1ab8..91cdbf504535 100644
+--- a/tools/perf/builtin-trace.c
++++ b/tools/perf/builtin-trace.c
+@@ -1039,6 +1039,9 @@ static const size_t trace__entry_str_size = 2048;
+ 
+ static struct file *thread_trace__files_entry(struct thread_trace *ttrace, int fd)
+ {
++	if (fd < 0)
++		return NULL;
++
+ 	if (fd > ttrace->files.max) {
+ 		struct file *nfiles = realloc(ttrace->files.table, (fd + 1) * sizeof(struct file));
+ 
+@@ -3865,7 +3868,8 @@ int cmd_trace(int argc, const char **argv)
+ 				goto init_augmented_syscall_tp;
+ 			}
+ 
+-			if (strcmp(perf_evsel__name(evsel), "raw_syscalls:sys_enter") == 0) {
++			if (trace.syscalls.events.augmented->priv == NULL &&
++			    strstr(perf_evsel__name(evsel), "syscalls:sys_enter")) {
+ 				struct perf_evsel *augmented = trace.syscalls.events.augmented;
+ 				if (perf_evsel__init_augmented_syscall_tp(augmented, evsel) ||
+ 				    perf_evsel__init_augmented_syscall_tp_args(augmented))
+diff --git a/tools/perf/tests/evsel-tp-sched.c b/tools/perf/tests/evsel-tp-sched.c
+index 5cbba70bcdd0..ea7acf403727 100644
+--- a/tools/perf/tests/evsel-tp-sched.c
++++ b/tools/perf/tests/evsel-tp-sched.c
+@@ -43,7 +43,7 @@ int test__perf_evsel__tp_sched_test(struct test *test __maybe_unused, int subtes
+ 		return -1;
+ 	}
+ 
+-	if (perf_evsel__test_field(evsel, "prev_comm", 16, true))
++	if (perf_evsel__test_field(evsel, "prev_comm", 16, false))
+ 		ret = -1;
+ 
+ 	if (perf_evsel__test_field(evsel, "prev_pid", 4, true))
+@@ -55,7 +55,7 @@ int test__perf_evsel__tp_sched_test(struct test *test __maybe_unused, int subtes
+ 	if (perf_evsel__test_field(evsel, "prev_state", sizeof(long), true))
+ 		ret = -1;
+ 
+-	if (perf_evsel__test_field(evsel, "next_comm", 16, true))
++	if (perf_evsel__test_field(evsel, "next_comm", 16, false))
+ 		ret = -1;
+ 
+ 	if (perf_evsel__test_field(evsel, "next_pid", 4, true))
+@@ -73,7 +73,7 @@ int test__perf_evsel__tp_sched_test(struct test *test __maybe_unused, int subtes
+ 		return -1;
+ 	}
+ 
+-	if (perf_evsel__test_field(evsel, "comm", 16, true))
++	if (perf_evsel__test_field(evsel, "comm", 16, false))
+ 		ret = -1;
+ 
+ 	if (perf_evsel__test_field(evsel, "pid", 4, true))
+diff --git a/tools/perf/trace/beauty/msg_flags.c b/tools/perf/trace/beauty/msg_flags.c
+index d66c66315987..ea68db08b8e7 100644
+--- a/tools/perf/trace/beauty/msg_flags.c
++++ b/tools/perf/trace/beauty/msg_flags.c
+@@ -29,7 +29,7 @@ static size_t syscall_arg__scnprintf_msg_flags(char *bf, size_t size,
+ 		return scnprintf(bf, size, "NONE");
+ #define	P_MSG_FLAG(n) \
+ 	if (flags & MSG_##n) { \
+-		printed += scnprintf(bf + printed, size - printed, "%s%s", printed ? "|" : "", show_prefix ? prefix : "", #n); \
++		printed += scnprintf(bf + printed, size - printed, "%s%s%s", printed ? "|" : "", show_prefix ? prefix : "", #n); \
+ 		flags &= ~MSG_##n; \
+ 	}
+ 
+diff --git a/tools/perf/trace/beauty/waitid_options.c b/tools/perf/trace/beauty/waitid_options.c
+index 6897fab40dcc..d4d10b33ba0e 100644
+--- a/tools/perf/trace/beauty/waitid_options.c
++++ b/tools/perf/trace/beauty/waitid_options.c
+@@ -11,7 +11,7 @@ static size_t syscall_arg__scnprintf_waitid_options(char *bf, size_t size,
+ 
+ #define	P_OPTION(n) \
+ 	if (options & W##n) { \
+-		printed += scnprintf(bf + printed, size - printed, "%s%s%s", printed ? "|" : "", show_prefix ? prefix : #n); \
++		printed += scnprintf(bf + printed, size - printed, "%s%s%s", printed ? "|" : "", show_prefix ? prefix : "",  #n); \
+ 		options &= ~W##n; \
+ 	}
+ 
+diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
+index 70de8f6b3aee..9142fd294e76 100644
+--- a/tools/perf/util/annotate.c
++++ b/tools/perf/util/annotate.c
+@@ -1889,6 +1889,7 @@ int symbol__annotate(struct symbol *sym, struct map *map,
+ 		     struct annotation_options *options,
+ 		     struct arch **parch)
+ {
++	struct annotation *notes = symbol__annotation(sym);
+ 	struct annotate_args args = {
+ 		.privsize	= privsize,
+ 		.evsel		= evsel,
+@@ -1919,6 +1920,7 @@ int symbol__annotate(struct symbol *sym, struct map *map,
+ 
+ 	args.ms.map = map;
+ 	args.ms.sym = sym;
++	notes->start = map__rip_2objdump(map, sym->start);
+ 
+ 	return symbol__disassemble(sym, &args);
+ }
+@@ -2794,8 +2796,6 @@ int symbol__annotate2(struct symbol *sym, struct map *map, struct perf_evsel *ev
+ 
+ 	symbol__calc_percent(sym, evsel);
+ 
+-	notes->start = map__rip_2objdump(map, sym->start);
+-
+ 	annotation__set_offsets(notes, size);
+ 	annotation__mark_jump_targets(notes, sym);
+ 	annotation__compute_ipc(notes, size);
+diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c
+index f69961c4a4f3..2921ce08b198 100644
+--- a/tools/perf/util/auxtrace.c
++++ b/tools/perf/util/auxtrace.c
+@@ -1278,9 +1278,9 @@ static int __auxtrace_mmap__read(struct perf_mmap *map,
+ 	}
+ 
+ 	/* padding must be written by fn() e.g. record__process_auxtrace() */
+-	padding = size & 7;
++	padding = size & (PERF_AUXTRACE_RECORD_ALIGNMENT - 1);
+ 	if (padding)
+-		padding = 8 - padding;
++		padding = PERF_AUXTRACE_RECORD_ALIGNMENT - padding;
+ 
+ 	memset(&ev, 0, sizeof(ev));
+ 	ev.auxtrace.header.type = PERF_RECORD_AUXTRACE;
+diff --git a/tools/perf/util/auxtrace.h b/tools/perf/util/auxtrace.h
+index 8e50f96d4b23..fac32482db61 100644
+--- a/tools/perf/util/auxtrace.h
++++ b/tools/perf/util/auxtrace.h
+@@ -40,6 +40,9 @@ struct record_opts;
+ struct auxtrace_info_event;
+ struct events_stats;
+ 
++/* Auxtrace records must have the same alignment as perf event records */
++#define PERF_AUXTRACE_RECORD_ALIGNMENT 8
++
+ enum auxtrace_type {
+ 	PERF_AUXTRACE_UNKNOWN,
+ 	PERF_AUXTRACE_INTEL_PT,
+diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+index 4503f3ca45ab..7c0b975dd2f0 100644
+--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
++++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+@@ -26,6 +26,7 @@
+ 
+ #include "../cache.h"
+ #include "../util.h"
++#include "../auxtrace.h"
+ 
+ #include "intel-pt-insn-decoder.h"
+ #include "intel-pt-pkt-decoder.h"
+@@ -250,19 +251,15 @@ struct intel_pt_decoder *intel_pt_decoder_new(struct intel_pt_params *params)
+ 		if (!(decoder->tsc_ctc_ratio_n % decoder->tsc_ctc_ratio_d))
+ 			decoder->tsc_ctc_mult = decoder->tsc_ctc_ratio_n /
+ 						decoder->tsc_ctc_ratio_d;
+-
+-		/*
+-		 * Allow for timestamps appearing to backwards because a TSC
+-		 * packet has slipped past a MTC packet, so allow 2 MTC ticks
+-		 * or ...
+-		 */
+-		decoder->tsc_slip = multdiv(2 << decoder->mtc_shift,
+-					decoder->tsc_ctc_ratio_n,
+-					decoder->tsc_ctc_ratio_d);
+ 	}
+-	/* ... or 0x100 paranoia */
+-	if (decoder->tsc_slip < 0x100)
+-		decoder->tsc_slip = 0x100;
++
++	/*
++	 * A TSC packet can slip past MTC packets so that the timestamp appears
++	 * to go backwards. One estimate is that can be up to about 40 CPU
++	 * cycles, which is certainly less than 0x1000 TSC ticks, but accept
++	 * slippage an order of magnitude more to be on the safe side.
++	 */
++	decoder->tsc_slip = 0x10000;
+ 
+ 	intel_pt_log("timestamp: mtc_shift %u\n", decoder->mtc_shift);
+ 	intel_pt_log("timestamp: tsc_ctc_ratio_n %u\n", decoder->tsc_ctc_ratio_n);
+@@ -1394,7 +1391,6 @@ static int intel_pt_overflow(struct intel_pt_decoder *decoder)
+ {
+ 	intel_pt_log("ERROR: Buffer overflow\n");
+ 	intel_pt_clear_tx_flags(decoder);
+-	decoder->cbr = 0;
+ 	decoder->timestamp_insn_cnt = 0;
+ 	decoder->pkt_state = INTEL_PT_STATE_ERR_RESYNC;
+ 	decoder->overflow = true;
+@@ -2575,6 +2571,34 @@ static int intel_pt_tsc_cmp(uint64_t tsc1, uint64_t tsc2)
+ 	}
+ }
+ 
++#define MAX_PADDING (PERF_AUXTRACE_RECORD_ALIGNMENT - 1)
++
++/**
++ * adj_for_padding - adjust overlap to account for padding.
++ * @buf_b: second buffer
++ * @buf_a: first buffer
++ * @len_a: size of first buffer
++ *
++ * @buf_a might have up to 7 bytes of padding appended. Adjust the overlap
++ * accordingly.
++ *
++ * Return: A pointer into @buf_b from where non-overlapped data starts
++ */
++static unsigned char *adj_for_padding(unsigned char *buf_b,
++				      unsigned char *buf_a, size_t len_a)
++{
++	unsigned char *p = buf_b - MAX_PADDING;
++	unsigned char *q = buf_a + len_a - MAX_PADDING;
++	int i;
++
++	for (i = MAX_PADDING; i; i--, p++, q++) {
++		if (*p != *q)
++			break;
++	}
++
++	return p;
++}
++
+ /**
+  * intel_pt_find_overlap_tsc - determine start of non-overlapped trace data
+  *                             using TSC.
+@@ -2625,8 +2649,11 @@ static unsigned char *intel_pt_find_overlap_tsc(unsigned char *buf_a,
+ 
+ 			/* Same TSC, so buffers are consecutive */
+ 			if (!cmp && rem_b >= rem_a) {
++				unsigned char *start;
++
+ 				*consecutive = true;
+-				return buf_b + len_b - (rem_b - rem_a);
++				start = buf_b + len_b - (rem_b - rem_a);
++				return adj_for_padding(start, buf_a, len_a);
+ 			}
+ 			if (cmp < 0)
+ 				return buf_b; /* tsc_a < tsc_b => no overlap */
+@@ -2689,7 +2716,7 @@ unsigned char *intel_pt_find_overlap(unsigned char *buf_a, size_t len_a,
+ 		found = memmem(buf_a, len_a, buf_b, len_a);
+ 		if (found) {
+ 			*consecutive = true;
+-			return buf_b + len_a;
++			return adj_for_padding(buf_b + len_a, buf_a, len_a);
+ 		}
+ 
+ 		/* Try again at next PSB in buffer 'a' */
+diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
+index 2e72373ec6df..4493fc13a6fa 100644
+--- a/tools/perf/util/intel-pt.c
++++ b/tools/perf/util/intel-pt.c
+@@ -2522,6 +2522,8 @@ int intel_pt_process_auxtrace_info(union perf_event *event,
+ 	}
+ 
+ 	pt->timeless_decoding = intel_pt_timeless_decoding(pt);
++	if (pt->timeless_decoding && !pt->tc.time_mult)
++		pt->tc.time_mult = 1;
+ 	pt->have_tsc = intel_pt_have_tsc(pt);
+ 	pt->sampling_mode = false;
+ 	pt->est_tsc = !pt->timeless_decoding;
+diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
+index 11a234740632..ccd3275feeaa 100644
+--- a/tools/perf/util/pmu.c
++++ b/tools/perf/util/pmu.c
+@@ -734,10 +734,20 @@ static void pmu_add_cpu_aliases(struct list_head *head, struct perf_pmu *pmu)
+ 
+ 		if (!is_arm_pmu_core(name)) {
+ 			pname = pe->pmu ? pe->pmu : "cpu";
++
++			/*
++			 * uncore alias may be from different PMU
++			 * with common prefix
++			 */
++			if (pmu_is_uncore(name) &&
++			    !strncmp(pname, name, strlen(pname)))
++				goto new_alias;
++
+ 			if (strcmp(pname, name))
+ 				continue;
+ 		}
+ 
++new_alias:
+ 		/* need type casts to override 'const' */
+ 		__perf_pmu__new_alias(head, NULL, (char *)pe->name,
+ 				(char *)pe->desc, (char *)pe->event,
+diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
+index 18a59fba97ff..cc4773157b9b 100644
+--- a/tools/perf/util/probe-event.c
++++ b/tools/perf/util/probe-event.c
+@@ -157,8 +157,10 @@ static struct map *kernel_get_module_map(const char *module)
+ 	if (module && strchr(module, '/'))
+ 		return dso__new_map(module);
+ 
+-	if (!module)
+-		module = "kernel";
++	if (!module) {
++		pos = machine__kernel_map(host_machine);
++		return map__get(pos);
++	}
+ 
+ 	for (pos = maps__first(maps); pos; pos = map__next(pos)) {
+ 		/* short_name is "[module]" */
+diff --git a/tools/perf/util/s390-cpumsf.c b/tools/perf/util/s390-cpumsf.c
+index 68b2570304ec..08073a4d59a4 100644
+--- a/tools/perf/util/s390-cpumsf.c
++++ b/tools/perf/util/s390-cpumsf.c
+@@ -301,6 +301,11 @@ static bool s390_cpumsf_validate(int machine_type,
+ 			*dsdes = 85;
+ 			*bsdes = 32;
+ 			break;
++		case 2964:
++		case 2965:
++			*dsdes = 112;
++			*bsdes = 32;
++			break;
+ 		default:
+ 			/* Illegal trailer entry */
+ 			return false;
+diff --git a/tools/perf/util/scripting-engines/trace-event-python.c b/tools/perf/util/scripting-engines/trace-event-python.c
+index 87ef16a1b17e..7059d1be2d09 100644
+--- a/tools/perf/util/scripting-engines/trace-event-python.c
++++ b/tools/perf/util/scripting-engines/trace-event-python.c
+@@ -733,8 +733,7 @@ static PyObject *get_perf_sample_dict(struct perf_sample *sample,
+ 		Py_FatalError("couldn't create Python dictionary");
+ 
+ 	pydict_set_item_string_decref(dict, "ev_name", _PyUnicode_FromString(perf_evsel__name(evsel)));
+-	pydict_set_item_string_decref(dict, "attr", _PyUnicode_FromStringAndSize(
+-			(const char *)&evsel->attr, sizeof(evsel->attr)));
++	pydict_set_item_string_decref(dict, "attr", _PyBytes_FromStringAndSize((const char *)&evsel->attr, sizeof(evsel->attr)));
+ 
+ 	pydict_set_item_string_decref(dict_sample, "pid",
+ 			_PyLong_FromLong(sample->pid));
+@@ -1494,34 +1493,40 @@ static void _free_command_line(wchar_t **command_line, int num)
+ static int python_start_script(const char *script, int argc, const char **argv)
+ {
+ 	struct tables *tables = &tables_global;
++	PyMODINIT_FUNC (*initfunc)(void);
+ #if PY_MAJOR_VERSION < 3
+ 	const char **command_line;
+ #else
+ 	wchar_t **command_line;
+ #endif
+-	char buf[PATH_MAX];
++	/*
++	 * Use a non-const name variable to cope with python 2.6's
++	 * PyImport_AppendInittab prototype
++	 */
++	char buf[PATH_MAX], name[19] = "perf_trace_context";
+ 	int i, err = 0;
+ 	FILE *fp;
+ 
+ #if PY_MAJOR_VERSION < 3
++	initfunc = initperf_trace_context;
+ 	command_line = malloc((argc + 1) * sizeof(const char *));
+ 	command_line[0] = script;
+ 	for (i = 1; i < argc + 1; i++)
+ 		command_line[i] = argv[i - 1];
+ #else
++	initfunc = PyInit_perf_trace_context;
+ 	command_line = malloc((argc + 1) * sizeof(wchar_t *));
+ 	command_line[0] = Py_DecodeLocale(script, NULL);
+ 	for (i = 1; i < argc + 1; i++)
+ 		command_line[i] = Py_DecodeLocale(argv[i - 1], NULL);
+ #endif
+ 
++	PyImport_AppendInittab(name, initfunc);
+ 	Py_Initialize();
+ 
+ #if PY_MAJOR_VERSION < 3
+-	initperf_trace_context();
+ 	PySys_SetArgv(argc + 1, (char **)command_line);
+ #else
+-	PyInit_perf_trace_context();
+ 	PySys_SetArgv(argc + 1, command_line);
+ #endif
+ 
+diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c
+index 6c1a83768eb0..d0334c33da54 100644
+--- a/tools/perf/util/sort.c
++++ b/tools/perf/util/sort.c
+@@ -230,8 +230,14 @@ static int64_t _sort__sym_cmp(struct symbol *sym_l, struct symbol *sym_r)
+ 	if (sym_l == sym_r)
+ 		return 0;
+ 
+-	if (sym_l->inlined || sym_r->inlined)
+-		return strcmp(sym_l->name, sym_r->name);
++	if (sym_l->inlined || sym_r->inlined) {
++		int ret = strcmp(sym_l->name, sym_r->name);
++
++		if (ret)
++			return ret;
++		if ((sym_l->start <= sym_r->end) && (sym_l->end >= sym_r->start))
++			return 0;
++	}
+ 
+ 	if (sym_l->start != sym_r->start)
+ 		return (int64_t)(sym_r->start - sym_l->start);
+diff --git a/tools/perf/util/srcline.c b/tools/perf/util/srcline.c
+index dc86597d0cc4..ccf42c4e83f0 100644
+--- a/tools/perf/util/srcline.c
++++ b/tools/perf/util/srcline.c
+@@ -104,7 +104,7 @@ static struct symbol *new_inline_sym(struct dso *dso,
+ 	} else {
+ 		/* create a fake symbol for the inline frame */
+ 		inline_sym = symbol__new(base_sym ? base_sym->start : 0,
+-					 base_sym ? base_sym->end : 0,
++					 base_sym ? (base_sym->end - base_sym->start) : 0,
+ 					 base_sym ? base_sym->binding : 0,
+ 					 base_sym ? base_sym->type : 0,
+ 					 funcname);
+diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
+index 48efad6d0f90..ca5f2e4796ea 100644
+--- a/tools/perf/util/symbol.c
++++ b/tools/perf/util/symbol.c
+@@ -710,6 +710,8 @@ static int map_groups__split_kallsyms_for_kcore(struct map_groups *kmaps, struct
+ 		}
+ 
+ 		pos->start -= curr_map->start - curr_map->pgoff;
++		if (pos->end > curr_map->end)
++			pos->end = curr_map->end;
+ 		if (pos->end)
+ 			pos->end -= curr_map->start - curr_map->pgoff;
+ 		symbols__insert(&curr_map->dso->symbols, pos);
+diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
+index 41ab7a3668b3..936f726f7cd9 100644
+--- a/tools/testing/selftests/bpf/Makefile
++++ b/tools/testing/selftests/bpf/Makefile
+@@ -96,6 +96,7 @@ $(BPFOBJ): force
+ CLANG ?= clang
+ LLC   ?= llc
+ LLVM_OBJCOPY ?= llvm-objcopy
++LLVM_READELF ?= llvm-readelf
+ BTF_PAHOLE ?= pahole
+ 
+ PROBE := $(shell $(LLC) -march=bpf -mcpu=probe -filetype=null /dev/null 2>&1)
+@@ -132,7 +133,7 @@ BTF_PAHOLE_PROBE := $(shell $(BTF_PAHOLE) --help 2>&1 | grep BTF)
+ BTF_OBJCOPY_PROBE := $(shell $(LLVM_OBJCOPY) --help 2>&1 | grep -i 'usage.*llvm')
+ BTF_LLVM_PROBE := $(shell echo "int main() { return 0; }" | \
+ 			  $(CLANG) -target bpf -O2 -g -c -x c - -o ./llvm_btf_verify.o; \
+-			  readelf -S ./llvm_btf_verify.o | grep BTF; \
++			  $(LLVM_READELF) -S ./llvm_btf_verify.o | grep BTF; \
+ 			  /bin/rm -f ./llvm_btf_verify.o)
+ 
+ ifneq ($(BTF_LLVM_PROBE),)
+diff --git a/tools/testing/selftests/bpf/test_map_in_map.c b/tools/testing/selftests/bpf/test_map_in_map.c
+index ce923e67e08e..2985f262846e 100644
+--- a/tools/testing/selftests/bpf/test_map_in_map.c
++++ b/tools/testing/selftests/bpf/test_map_in_map.c
+@@ -27,6 +27,7 @@ SEC("xdp_mimtest")
+ int xdp_mimtest0(struct xdp_md *ctx)
+ {
+ 	int value = 123;
++	int *value_p;
+ 	int key = 0;
+ 	void *map;
+ 
+@@ -35,6 +36,9 @@ int xdp_mimtest0(struct xdp_md *ctx)
+ 		return XDP_DROP;
+ 
+ 	bpf_map_update_elem(map, &key, &value, 0);
++	value_p = bpf_map_lookup_elem(map, &key);
++	if (!value_p || *value_p != 123)
++		return XDP_DROP;
+ 
+ 	map = bpf_map_lookup_elem(&mim_hash, &key);
+ 	if (!map)
+diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c
+index e2b9eee37187..6e05a22b346c 100644
+--- a/tools/testing/selftests/bpf/test_maps.c
++++ b/tools/testing/selftests/bpf/test_maps.c
+@@ -43,7 +43,7 @@ static int map_flags;
+ 	}								\
+ })
+ 
+-static void test_hashmap(int task, void *data)
++static void test_hashmap(unsigned int task, void *data)
+ {
+ 	long long key, next_key, first_key, value;
+ 	int fd;
+@@ -133,7 +133,7 @@ static void test_hashmap(int task, void *data)
+ 	close(fd);
+ }
+ 
+-static void test_hashmap_sizes(int task, void *data)
++static void test_hashmap_sizes(unsigned int task, void *data)
+ {
+ 	int fd, i, j;
+ 
+@@ -153,7 +153,7 @@ static void test_hashmap_sizes(int task, void *data)
+ 		}
+ }
+ 
+-static void test_hashmap_percpu(int task, void *data)
++static void test_hashmap_percpu(unsigned int task, void *data)
+ {
+ 	unsigned int nr_cpus = bpf_num_possible_cpus();
+ 	BPF_DECLARE_PERCPU(long, value);
+@@ -280,7 +280,7 @@ static int helper_fill_hashmap(int max_entries)
+ 	return fd;
+ }
+ 
+-static void test_hashmap_walk(int task, void *data)
++static void test_hashmap_walk(unsigned int task, void *data)
+ {
+ 	int fd, i, max_entries = 1000;
+ 	long long key, value, next_key;
+@@ -351,7 +351,7 @@ static void test_hashmap_zero_seed(void)
+ 	close(second);
+ }
+ 
+-static void test_arraymap(int task, void *data)
++static void test_arraymap(unsigned int task, void *data)
+ {
+ 	int key, next_key, fd;
+ 	long long value;
+@@ -406,7 +406,7 @@ static void test_arraymap(int task, void *data)
+ 	close(fd);
+ }
+ 
+-static void test_arraymap_percpu(int task, void *data)
++static void test_arraymap_percpu(unsigned int task, void *data)
+ {
+ 	unsigned int nr_cpus = bpf_num_possible_cpus();
+ 	BPF_DECLARE_PERCPU(long, values);
+@@ -502,7 +502,7 @@ static void test_arraymap_percpu_many_keys(void)
+ 	close(fd);
+ }
+ 
+-static void test_devmap(int task, void *data)
++static void test_devmap(unsigned int task, void *data)
+ {
+ 	int fd;
+ 	__u32 key, value;
+@@ -517,7 +517,7 @@ static void test_devmap(int task, void *data)
+ 	close(fd);
+ }
+ 
+-static void test_queuemap(int task, void *data)
++static void test_queuemap(unsigned int task, void *data)
+ {
+ 	const int MAP_SIZE = 32;
+ 	__u32 vals[MAP_SIZE + MAP_SIZE/2], val;
+@@ -575,7 +575,7 @@ static void test_queuemap(int task, void *data)
+ 	close(fd);
+ }
+ 
+-static void test_stackmap(int task, void *data)
++static void test_stackmap(unsigned int task, void *data)
+ {
+ 	const int MAP_SIZE = 32;
+ 	__u32 vals[MAP_SIZE + MAP_SIZE/2], val;
+@@ -641,7 +641,7 @@ static void test_stackmap(int task, void *data)
+ #define SOCKMAP_PARSE_PROG "./sockmap_parse_prog.o"
+ #define SOCKMAP_VERDICT_PROG "./sockmap_verdict_prog.o"
+ #define SOCKMAP_TCP_MSG_PROG "./sockmap_tcp_msg_prog.o"
+-static void test_sockmap(int tasks, void *data)
++static void test_sockmap(unsigned int tasks, void *data)
+ {
+ 	struct bpf_map *bpf_map_rx, *bpf_map_tx, *bpf_map_msg, *bpf_map_break;
+ 	int map_fd_msg = 0, map_fd_rx = 0, map_fd_tx = 0, map_fd_break;
+@@ -1258,10 +1258,11 @@ static void test_map_large(void)
+ }
+ 
+ #define run_parallel(N, FN, DATA) \
+-	printf("Fork %d tasks to '" #FN "'\n", N); \
++	printf("Fork %u tasks to '" #FN "'\n", N); \
+ 	__run_parallel(N, FN, DATA)
+ 
+-static void __run_parallel(int tasks, void (*fn)(int task, void *data),
++static void __run_parallel(unsigned int tasks,
++			   void (*fn)(unsigned int task, void *data),
+ 			   void *data)
+ {
+ 	pid_t pid[tasks];
+@@ -1302,7 +1303,7 @@ static void test_map_stress(void)
+ #define DO_UPDATE 1
+ #define DO_DELETE 0
+ 
+-static void test_update_delete(int fn, void *data)
++static void test_update_delete(unsigned int fn, void *data)
+ {
+ 	int do_update = ((int *)data)[1];
+ 	int fd = ((int *)data)[0];
+diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
+index 2fd90d456892..9a967983abed 100644
+--- a/tools/testing/selftests/bpf/test_verifier.c
++++ b/tools/testing/selftests/bpf/test_verifier.c
+@@ -34,6 +34,7 @@
+ #include <linux/if_ether.h>
+ 
+ #include <bpf/bpf.h>
++#include <bpf/libbpf.h>
+ 
+ #ifdef HAVE_GENHDR
+ # include "autoconf.h"
+@@ -59,6 +60,7 @@
+ 
+ #define UNPRIV_SYSCTL "kernel/unprivileged_bpf_disabled"
+ static bool unpriv_disabled = false;
++static int skips;
+ 
+ struct bpf_test {
+ 	const char *descr;
+@@ -15946,6 +15948,11 @@ static void do_test_single(struct bpf_test *test, bool unpriv,
+ 		pflags |= BPF_F_ANY_ALIGNMENT;
+ 	fd_prog = bpf_verify_program(prog_type, prog, prog_len, pflags,
+ 				     "GPL", 0, bpf_vlog, sizeof(bpf_vlog), 1);
++	if (fd_prog < 0 && !bpf_probe_prog_type(prog_type, 0)) {
++		printf("SKIP (unsupported program type %d)\n", prog_type);
++		skips++;
++		goto close_fds;
++	}
+ 
+ 	expected_ret = unpriv && test->result_unpriv != UNDEF ?
+ 		       test->result_unpriv : test->result;
+@@ -16099,7 +16106,7 @@ static bool test_as_unpriv(struct bpf_test *test)
+ 
+ static int do_test(bool unpriv, unsigned int from, unsigned int to)
+ {
+-	int i, passes = 0, errors = 0, skips = 0;
++	int i, passes = 0, errors = 0;
+ 
+ 	for (i = from; i < to; i++) {
+ 		struct bpf_test *test = &tests[i];
+diff --git a/tools/testing/selftests/firmware/config b/tools/testing/selftests/firmware/config
+index 913a25a4a32b..bf634dda0720 100644
+--- a/tools/testing/selftests/firmware/config
++++ b/tools/testing/selftests/firmware/config
+@@ -1,6 +1,5 @@
+ CONFIG_TEST_FIRMWARE=y
+ CONFIG_FW_LOADER=y
+ CONFIG_FW_LOADER_USER_HELPER=y
+-CONFIG_FW_LOADER_USER_HELPER_FALLBACK=y
+ CONFIG_IKCONFIG=y
+ CONFIG_IKCONFIG_PROC=y
+diff --git a/tools/testing/selftests/firmware/fw_filesystem.sh b/tools/testing/selftests/firmware/fw_filesystem.sh
+index 466cf2f91ba0..a4320c4b44dc 100755
+--- a/tools/testing/selftests/firmware/fw_filesystem.sh
++++ b/tools/testing/selftests/firmware/fw_filesystem.sh
+@@ -155,8 +155,11 @@ read_firmwares()
+ {
+ 	for i in $(seq 0 3); do
+ 		config_set_read_fw_idx $i
+-		# Verify the contents match
+-		if ! diff -q "$FW" $DIR/read_firmware 2>/dev/null ; then
++		# Verify the contents are what we expect.
++		# -Z required for now -- check for yourself, md5sum
++		# on $FW and DIR/read_firmware will yield the same. Even
++		# cmp agrees, so something is off.
++		if ! diff -q -Z "$FW" $DIR/read_firmware 2>/dev/null ; then
+ 			echo "request #$i: firmware was not loaded" >&2
+ 			exit 1
+ 		fi
+@@ -168,7 +171,7 @@ read_firmwares_expect_nofile()
+ 	for i in $(seq 0 3); do
+ 		config_set_read_fw_idx $i
+ 		# Ensures contents differ
+-		if diff -q "$FW" $DIR/read_firmware 2>/dev/null ; then
++		if diff -q -Z "$FW" $DIR/read_firmware 2>/dev/null ; then
+ 			echo "request $i: file was not expected to match" >&2
+ 			exit 1
+ 		fi
+diff --git a/tools/testing/selftests/firmware/fw_lib.sh b/tools/testing/selftests/firmware/fw_lib.sh
+index 6c5f1b2ffb74..1cbb12e284a6 100755
+--- a/tools/testing/selftests/firmware/fw_lib.sh
++++ b/tools/testing/selftests/firmware/fw_lib.sh
+@@ -91,7 +91,7 @@ verify_reqs()
+ 	if [ "$TEST_REQS_FW_SYSFS_FALLBACK" = "yes" ]; then
+ 		if [ ! "$HAS_FW_LOADER_USER_HELPER" = "yes" ]; then
+ 			echo "usermode helper disabled so ignoring test"
+-			exit $ksft_skip
++			exit 0
+ 		fi
+ 	fi
+ }
+diff --git a/tools/testing/selftests/ir/ir_loopback.c b/tools/testing/selftests/ir/ir_loopback.c
+index 858c19caf224..8cdf1b89ac9c 100644
+--- a/tools/testing/selftests/ir/ir_loopback.c
++++ b/tools/testing/selftests/ir/ir_loopback.c
+@@ -27,6 +27,8 @@
+ 
+ #define TEST_SCANCODES	10
+ #define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
++#define SYSFS_PATH_MAX 256
++#define DNAME_PATH_MAX 256
+ 
+ static const struct {
+ 	enum rc_proto proto;
+@@ -56,7 +58,7 @@ static const struct {
+ int lirc_open(const char *rc)
+ {
+ 	struct dirent *dent;
+-	char buf[100];
++	char buf[SYSFS_PATH_MAX + DNAME_PATH_MAX];
+ 	DIR *d;
+ 	int fd;
+ 
+diff --git a/tools/testing/selftests/seccomp/seccomp_bpf.c b/tools/testing/selftests/seccomp/seccomp_bpf.c
+index 7e632b465ab4..6d7a81306f8a 100644
+--- a/tools/testing/selftests/seccomp/seccomp_bpf.c
++++ b/tools/testing/selftests/seccomp/seccomp_bpf.c
+@@ -2971,6 +2971,12 @@ TEST(get_metadata)
+ 	struct seccomp_metadata md;
+ 	long ret;
+ 
++	/* Only real root can get metadata. */
++	if (geteuid()) {
++		XFAIL(return, "get_metadata requires real root");
++		return;
++	}
++
+ 	ASSERT_EQ(0, pipe(pipefd));
+ 
+ 	pid = fork();
+diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
+index 30251e288629..5cc22cdaa5ba 100644
+--- a/virt/kvm/arm/mmu.c
++++ b/virt/kvm/arm/mmu.c
+@@ -2353,7 +2353,7 @@ int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
+ 	return 0;
+ }
+ 
+-void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots)
++void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen)
+ {
+ }
+ 
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 076bc38963bf..b4f2d892a1d3 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -874,6 +874,7 @@ static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
+ 		int as_id, struct kvm_memslots *slots)
+ {
+ 	struct kvm_memslots *old_memslots = __kvm_memslots(kvm, as_id);
++	u64 gen;
+ 
+ 	/*
+ 	 * Set the low bit in the generation, which disables SPTE caching
+@@ -896,9 +897,11 @@ static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
+ 	 * space 0 will use generations 0, 4, 8, ... while * address space 1 will
+ 	 * use generations 2, 6, 10, 14, ...
+ 	 */
+-	slots->generation += KVM_ADDRESS_SPACE_NUM * 2 - 1;
++	gen = slots->generation + KVM_ADDRESS_SPACE_NUM * 2 - 1;
+ 
+-	kvm_arch_memslots_updated(kvm, slots);
++	kvm_arch_memslots_updated(kvm, gen);
++
++	slots->generation = gen;
+ 
+ 	return old_memslots;
+ }
+@@ -2899,6 +2902,9 @@ static long kvm_device_ioctl(struct file *filp, unsigned int ioctl,
+ {
+ 	struct kvm_device *dev = filp->private_data;
+ 
++	if (dev->kvm->mm != current->mm)
++		return -EIO;
++
+ 	switch (ioctl) {
+ 	case KVM_SET_DEVICE_ATTR:
+ 		return kvm_device_ioctl_attr(dev, dev->ops->set_attr, arg);


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [gentoo-commits] proj/linux-patches:5.0 commit in: /
@ 2019-04-19 19:28 Mike Pagano
  0 siblings, 0 replies; 28+ messages in thread
From: Mike Pagano @ 2019-04-19 19:28 UTC (permalink / raw)
  To: gentoo-commits

commit:     c4f7d776860d5e4ae21a8206edb1b12736b0b5a8
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Apr 19 19:28:25 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Apr 19 19:28:25 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c4f7d776

Add incremental 5.0.8 patch over full one

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 1007_linux-5.0.8.patch | 36401 ++++-------------------------------------------
 1 file changed, 2776 insertions(+), 33625 deletions(-)

diff --git a/1007_linux-5.0.8.patch b/1007_linux-5.0.8.patch
index 2e45798..91bf104 100644
--- a/1007_linux-5.0.8.patch
+++ b/1007_linux-5.0.8.patch
@@ -1,242 +1,17 @@
-diff --git a/Documentation/DMA-API.txt b/Documentation/DMA-API.txt
-index e133ccd60228..acfe3d0f78d1 100644
---- a/Documentation/DMA-API.txt
-+++ b/Documentation/DMA-API.txt
-@@ -195,6 +195,14 @@ Requesting the required mask does not alter the current mask.  If you
- wish to take advantage of it, you should issue a dma_set_mask()
- call to set the mask to the value returned.
- 
-+::
-+
-+	size_t
-+	dma_direct_max_mapping_size(struct device *dev);
-+
-+Returns the maximum size of a mapping for the device. The size parameter
-+of the mapping functions like dma_map_single(), dma_map_page() and
-+others should not be larger than the returned value.
- 
- Part Id - Streaming DMA mappings
- --------------------------------
-diff --git a/Documentation/arm/kernel_mode_neon.txt b/Documentation/arm/kernel_mode_neon.txt
-index 525452726d31..b9e060c5b61e 100644
---- a/Documentation/arm/kernel_mode_neon.txt
-+++ b/Documentation/arm/kernel_mode_neon.txt
-@@ -6,7 +6,7 @@ TL;DR summary
- * Use only NEON instructions, or VFP instructions that don't rely on support
-   code
- * Isolate your NEON code in a separate compilation unit, and compile it with
--  '-mfpu=neon -mfloat-abi=softfp'
-+  '-march=armv7-a -mfpu=neon -mfloat-abi=softfp'
- * Put kernel_neon_begin() and kernel_neon_end() calls around the calls into your
-   NEON code
- * Don't sleep in your NEON code, and be aware that it will be executed with
-@@ -87,7 +87,7 @@ instructions appearing in unexpected places if no special care is taken.
- Therefore, the recommended and only supported way of using NEON/VFP in the
- kernel is by adhering to the following rules:
- * isolate the NEON code in a separate compilation unit and compile it with
--  '-mfpu=neon -mfloat-abi=softfp';
-+  '-march=armv7-a -mfpu=neon -mfloat-abi=softfp';
- * issue the calls to kernel_neon_begin(), kernel_neon_end() as well as the calls
-   into the unit containing the NEON code from a compilation unit which is *not*
-   built with the GCC flag '-mfpu=neon' set.
-diff --git a/Documentation/arm64/silicon-errata.txt b/Documentation/arm64/silicon-errata.txt
-index 1f09d043d086..ddb8ce5333ba 100644
---- a/Documentation/arm64/silicon-errata.txt
-+++ b/Documentation/arm64/silicon-errata.txt
-@@ -44,6 +44,8 @@ stable kernels.
- 
- | Implementor    | Component       | Erratum ID      | Kconfig                     |
- +----------------+-----------------+-----------------+-----------------------------+
-+| Allwinner      | A64/R18         | UNKNOWN1        | SUN50I_ERRATUM_UNKNOWN1     |
-+|                |                 |                 |                             |
- | ARM            | Cortex-A53      | #826319         | ARM64_ERRATUM_826319        |
- | ARM            | Cortex-A53      | #827319         | ARM64_ERRATUM_827319        |
- | ARM            | Cortex-A53      | #824069         | ARM64_ERRATUM_824069        |
-diff --git a/Documentation/devicetree/bindings/iio/adc/samsung,exynos-adc.txt b/Documentation/devicetree/bindings/iio/adc/samsung,exynos-adc.txt
-index a10c1f89037d..e1fe02f3e3e9 100644
---- a/Documentation/devicetree/bindings/iio/adc/samsung,exynos-adc.txt
-+++ b/Documentation/devicetree/bindings/iio/adc/samsung,exynos-adc.txt
-@@ -11,11 +11,13 @@ New driver handles the following
- 
- Required properties:
- - compatible:		Must be "samsung,exynos-adc-v1"
--				for exynos4412/5250 controllers.
-+				for Exynos5250 controllers.
- 			Must be "samsung,exynos-adc-v2" for
- 				future controllers.
- 			Must be "samsung,exynos3250-adc" for
- 				controllers compatible with ADC of Exynos3250.
-+			Must be "samsung,exynos4212-adc" for
-+				controllers compatible with ADC of Exynos4212 and Exynos4412.
- 			Must be "samsung,exynos7-adc" for
- 				the ADC in Exynos7 and compatibles
- 			Must be "samsung,s3c2410-adc" for
-diff --git a/Documentation/process/stable-kernel-rules.rst b/Documentation/process/stable-kernel-rules.rst
-index 0de6f6145cc6..7ba8cd567f84 100644
---- a/Documentation/process/stable-kernel-rules.rst
-+++ b/Documentation/process/stable-kernel-rules.rst
-@@ -38,6 +38,9 @@ Procedure for submitting patches to the -stable tree
-  - If the patch covers files in net/ or drivers/net please follow netdev stable
-    submission guidelines as described in
-    :ref:`Documentation/networking/netdev-FAQ.rst <netdev-FAQ>`
-+   after first checking the stable networking queue at
-+   https://patchwork.ozlabs.org/bundle/davem/stable/?series=&submitter=&state=*&q=&archive=
-+   to ensure the requested patch is not already queued up.
-  - Security patches should not be handled (solely) by the -stable review
-    process but should follow the procedures in
-    :ref:`Documentation/admin-guide/security-bugs.rst <securitybugs>`.
-diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
-index 356156f5c52d..ba8927c0d45c 100644
---- a/Documentation/virtual/kvm/api.txt
-+++ b/Documentation/virtual/kvm/api.txt
-@@ -13,7 +13,7 @@ of a virtual machine.  The ioctls belong to three classes
- 
-  - VM ioctls: These query and set attributes that affect an entire virtual
-    machine, for example memory layout.  In addition a VM ioctl is used to
--   create virtual cpus (vcpus).
-+   create virtual cpus (vcpus) and devices.
- 
-    Only run VM ioctls from the same process (address space) that was used
-    to create the VM.
-@@ -24,6 +24,11 @@ of a virtual machine.  The ioctls belong to three classes
-    Only run vcpu ioctls from the same thread that was used to create the
-    vcpu.
- 
-+ - device ioctls: These query and set attributes that control the operation
-+   of a single device.
-+
-+   device ioctls must be issued from the same process (address space) that
-+   was used to create the VM.
- 
- 2. File descriptors
- -------------------
-@@ -32,10 +37,11 @@ The kvm API is centered around file descriptors.  An initial
- open("/dev/kvm") obtains a handle to the kvm subsystem; this handle
- can be used to issue system ioctls.  A KVM_CREATE_VM ioctl on this
- handle will create a VM file descriptor which can be used to issue VM
--ioctls.  A KVM_CREATE_VCPU ioctl on a VM fd will create a virtual cpu
--and return a file descriptor pointing to it.  Finally, ioctls on a vcpu
--fd can be used to control the vcpu, including the important task of
--actually running guest code.
-+ioctls.  A KVM_CREATE_VCPU or KVM_CREATE_DEVICE ioctl on a VM fd will
-+create a virtual cpu or device and return a file descriptor pointing to
-+the new resource.  Finally, ioctls on a vcpu or device fd can be used
-+to control the vcpu or device.  For vcpus, this includes the important
-+task of actually running guest code.
- 
- In general file descriptors can be migrated among processes by means
- of fork() and the SCM_RIGHTS facility of unix domain socket.  These
 diff --git a/Makefile b/Makefile
-index d5713e7b1e50..f7666051de66 100644
+index af99c77c7066..f7666051de66 100644
 --- a/Makefile
 +++ b/Makefile
 @@ -1,7 +1,7 @@
  # SPDX-License-Identifier: GPL-2.0
  VERSION = 5
  PATCHLEVEL = 0
--SUBLEVEL = 0
+-SUBLEVEL = 7
 +SUBLEVEL = 8
  EXTRAVERSION =
  NAME = Shy Crocodile
  
-@@ -15,19 +15,6 @@ NAME = Shy Crocodile
- PHONY := _all
- _all:
- 
--# Do not use make's built-in rules and variables
--# (this increases performance and avoids hard-to-debug behaviour)
--MAKEFLAGS += -rR
--
--# Avoid funny character set dependencies
--unexport LC_ALL
--LC_COLLATE=C
--LC_NUMERIC=C
--export LC_COLLATE LC_NUMERIC
--
--# Avoid interference with shell env settings
--unexport GREP_OPTIONS
--
- # We are using a recursive build, so we need to do a little thinking
- # to get the ordering right.
- #
-@@ -44,6 +31,21 @@ unexport GREP_OPTIONS
- # descending is started. They are now explicitly listed as the
- # prepare rule.
- 
-+ifneq ($(sub-make-done),1)
-+
-+# Do not use make's built-in rules and variables
-+# (this increases performance and avoids hard-to-debug behaviour)
-+MAKEFLAGS += -rR
-+
-+# Avoid funny character set dependencies
-+unexport LC_ALL
-+LC_COLLATE=C
-+LC_NUMERIC=C
-+export LC_COLLATE LC_NUMERIC
-+
-+# Avoid interference with shell env settings
-+unexport GREP_OPTIONS
-+
- # Beautify output
- # ---------------------------------------------------------------------------
- #
-@@ -112,7 +114,6 @@ export quiet Q KBUILD_VERBOSE
- 
- # KBUILD_SRC is not intended to be used by the regular user (for now),
- # it is set on invocation of make with KBUILD_OUTPUT or O= specified.
--ifeq ($(KBUILD_SRC),)
- 
- # OK, Make called in directory where kernel src resides
- # Do we want to locate output files in a separate directory?
-@@ -142,6 +143,24 @@ $(if $(KBUILD_OUTPUT),, \
- # 'sub-make' below.
- MAKEFLAGS += --include-dir=$(CURDIR)
- 
-+need-sub-make := 1
-+else
-+
-+# Do not print "Entering directory ..." at all for in-tree build.
-+MAKEFLAGS += --no-print-directory
-+
-+endif # ifneq ($(KBUILD_OUTPUT),)
-+
-+ifneq ($(filter 3.%,$(MAKE_VERSION)),)
-+# 'MAKEFLAGS += -rR' does not immediately become effective for GNU Make 3.x
-+# We need to invoke sub-make to avoid implicit rules in the top Makefile.
-+need-sub-make := 1
-+# Cancel implicit rules for this Makefile.
-+$(lastword $(MAKEFILE_LIST)): ;
-+endif
-+
-+ifeq ($(need-sub-make),1)
-+
- PHONY += $(MAKECMDGOALS) sub-make
- 
- $(filter-out _all sub-make $(CURDIR)/Makefile, $(MAKECMDGOALS)) _all: sub-make
-@@ -149,16 +168,15 @@ $(filter-out _all sub-make $(CURDIR)/Makefile, $(MAKECMDGOALS)) _all: sub-make
- 
- # Invoke a second make in the output directory, passing relevant variables
- sub-make:
--	$(Q)$(MAKE) -C $(KBUILD_OUTPUT) KBUILD_SRC=$(CURDIR) \
-+	$(Q)$(MAKE) sub-make-done=1 \
-+	$(if $(KBUILD_OUTPUT),-C $(KBUILD_OUTPUT) KBUILD_SRC=$(CURDIR)) \
- 	-f $(CURDIR)/Makefile $(filter-out _all sub-make,$(MAKECMDGOALS))
- 
--# Leave processing to above invocation of make
--skip-makefile := 1
--endif # ifneq ($(KBUILD_OUTPUT),)
--endif # ifeq ($(KBUILD_SRC),)
-+endif # need-sub-make
-+endif # sub-make-done
- 
- # We process the rest of the Makefile if this is the final invocation of make
--ifeq ($(skip-makefile),)
-+ifeq ($(need-sub-make),)
- 
- # Do not print "Entering directory ...",
- # but we want to display it when entering to the output directory
-@@ -492,7 +510,7 @@ endif
+@@ -510,7 +510,7 @@ endif
  ifneq ($(shell $(CC) --version 2>&1 | head -n 1 | grep clang),)
  ifneq ($(CROSS_COMPILE),)
  CLANG_FLAGS	:= --target=$(notdir $(CROSS_COMPILE:%-=%))
@@ -245,55 +20,6 @@ index d5713e7b1e50..f7666051de66 100644
  CLANG_FLAGS	+= --prefix=$(GCC_TOOLCHAIN_DIR)
  GCC_TOOLCHAIN	:= $(realpath $(GCC_TOOLCHAIN_DIR)/..)
  endif
-@@ -625,12 +643,15 @@ ifeq ($(may-sync-config),1)
- -include include/config/auto.conf.cmd
- 
- # To avoid any implicit rule to kick in, define an empty command
--$(KCONFIG_CONFIG) include/config/auto.conf.cmd: ;
-+$(KCONFIG_CONFIG): ;
- 
- # The actual configuration files used during the build are stored in
- # include/generated/ and include/config/. Update them if .config is newer than
- # include/config/auto.conf (which mirrors .config).
--include/config/%.conf: $(KCONFIG_CONFIG) include/config/auto.conf.cmd
-+#
-+# This exploits the 'multi-target pattern rule' trick.
-+# The syncconfig should be executed only once to make all the targets.
-+%/auto.conf %/auto.conf.cmd %/tristate.conf: $(KCONFIG_CONFIG)
- 	$(Q)$(MAKE) -f $(srctree)/Makefile syncconfig
- else
- # External modules and some install targets need include/generated/autoconf.h
-@@ -944,9 +965,11 @@ mod_sign_cmd = true
- endif
- export mod_sign_cmd
- 
-+HOST_LIBELF_LIBS = $(shell pkg-config libelf --libs 2>/dev/null || echo -lelf)
-+
- ifdef CONFIG_STACK_VALIDATION
-   has_libelf := $(call try-run,\
--		echo "int main() {}" | $(HOSTCC) -xc -o /dev/null -lelf -,1,0)
-+		echo "int main() {}" | $(HOSTCC) -xc -o /dev/null $(HOST_LIBELF_LIBS) -,1,0)
-   ifeq ($(has_libelf),1)
-     objtool_target := tools/objtool FORCE
-   else
-@@ -1754,7 +1777,7 @@ $(cmd_files): ;	# Do not try to update included dependency files
- 
- endif   # ifeq ($(config-targets),1)
- endif   # ifeq ($(mixed-targets),1)
--endif	# skip-makefile
-+endif   # need-sub-make
- 
- PHONY += FORCE
- FORCE:
-diff --git a/arch/alpha/kernel/syscalls/syscall.tbl b/arch/alpha/kernel/syscalls/syscall.tbl
-index 7b56a53be5e3..e09558edae73 100644
---- a/arch/alpha/kernel/syscalls/syscall.tbl
-+++ b/arch/alpha/kernel/syscalls/syscall.tbl
-@@ -451,3 +451,4 @@
- 520	common	preadv2				sys_preadv2
- 521	common	pwritev2			sys_pwritev2
- 522	common	statx				sys_statx
-+523	common	io_pgetevents			sys_io_pgetevents
 diff --git a/arch/arm/boot/dts/am335x-evm.dts b/arch/arm/boot/dts/am335x-evm.dts
 index dce5be5df97b..edcff79879e7 100644
 --- a/arch/arm/boot/dts/am335x-evm.dts
@@ -382,181 +108,6 @@ index b128998097ce..2c2d8b5b8cf5 100644
  	};
  };
  
-diff --git a/arch/arm/boot/dts/exynos3250.dtsi b/arch/arm/boot/dts/exynos3250.dtsi
-index 608d17454179..5892a9f7622f 100644
---- a/arch/arm/boot/dts/exynos3250.dtsi
-+++ b/arch/arm/boot/dts/exynos3250.dtsi
-@@ -168,6 +168,9 @@
- 			interrupt-controller;
- 			#interrupt-cells = <3>;
- 			interrupt-parent = <&gic>;
-+			clock-names = "clkout8";
-+			clocks = <&cmu CLK_FIN_PLL>;
-+			#clock-cells = <1>;
- 		};
- 
- 		mipi_phy: video-phy {
-diff --git a/arch/arm/boot/dts/exynos4412-odroid-common.dtsi b/arch/arm/boot/dts/exynos4412-odroid-common.dtsi
-index 3a9eb1e91c45..8a64c4e8c474 100644
---- a/arch/arm/boot/dts/exynos4412-odroid-common.dtsi
-+++ b/arch/arm/boot/dts/exynos4412-odroid-common.dtsi
-@@ -49,7 +49,7 @@
- 	};
- 
- 	emmc_pwrseq: pwrseq {
--		pinctrl-0 = <&sd1_cd>;
-+		pinctrl-0 = <&emmc_rstn>;
- 		pinctrl-names = "default";
- 		compatible = "mmc-pwrseq-emmc";
- 		reset-gpios = <&gpk1 2 GPIO_ACTIVE_LOW>;
-@@ -165,12 +165,6 @@
- 	cpu0-supply = <&buck2_reg>;
- };
- 
--/* RSTN signal for eMMC */
--&sd1_cd {
--	samsung,pin-pud = <EXYNOS_PIN_PULL_NONE>;
--	samsung,pin-drv = <EXYNOS4_PIN_DRV_LV1>;
--};
--
- &pinctrl_1 {
- 	gpio_power_key: power_key {
- 		samsung,pins = "gpx1-3";
-@@ -188,6 +182,11 @@
- 		samsung,pins = "gpx3-7";
- 		samsung,pin-pud = <EXYNOS_PIN_PULL_DOWN>;
- 	};
-+
-+	emmc_rstn: emmc-rstn {
-+		samsung,pins = "gpk1-2";
-+		samsung,pin-pud = <EXYNOS_PIN_PULL_NONE>;
-+	};
- };
- 
- &ehci {
-diff --git a/arch/arm/boot/dts/exynos5422-odroid-core.dtsi b/arch/arm/boot/dts/exynos5422-odroid-core.dtsi
-index bf09eab90f8a..6bf3661293ee 100644
---- a/arch/arm/boot/dts/exynos5422-odroid-core.dtsi
-+++ b/arch/arm/boot/dts/exynos5422-odroid-core.dtsi
-@@ -468,7 +468,7 @@
- 			buck8_reg: BUCK8 {
- 				regulator-name = "vdd_1.8v_ldo";
- 				regulator-min-microvolt = <800000>;
--				regulator-max-microvolt = <1500000>;
-+				regulator-max-microvolt = <2000000>;
- 				regulator-always-on;
- 				regulator-boot-on;
- 			};
-diff --git a/arch/arm/boot/dts/lpc32xx.dtsi b/arch/arm/boot/dts/lpc32xx.dtsi
-index b7303a4e4236..ed0d6fb20122 100644
---- a/arch/arm/boot/dts/lpc32xx.dtsi
-+++ b/arch/arm/boot/dts/lpc32xx.dtsi
-@@ -230,7 +230,7 @@
- 				status = "disabled";
- 			};
- 
--			i2s1: i2s@2009C000 {
-+			i2s1: i2s@2009c000 {
- 				compatible = "nxp,lpc3220-i2s";
- 				reg = <0x2009C000 0x1000>;
- 			};
-@@ -273,7 +273,7 @@
- 				status = "disabled";
- 			};
- 
--			i2c1: i2c@400A0000 {
-+			i2c1: i2c@400a0000 {
- 				compatible = "nxp,pnx-i2c";
- 				reg = <0x400A0000 0x100>;
- 				interrupt-parent = <&sic1>;
-@@ -284,7 +284,7 @@
- 				clocks = <&clk LPC32XX_CLK_I2C1>;
- 			};
- 
--			i2c2: i2c@400A8000 {
-+			i2c2: i2c@400a8000 {
- 				compatible = "nxp,pnx-i2c";
- 				reg = <0x400A8000 0x100>;
- 				interrupt-parent = <&sic1>;
-@@ -295,7 +295,7 @@
- 				clocks = <&clk LPC32XX_CLK_I2C2>;
- 			};
- 
--			mpwm: mpwm@400E8000 {
-+			mpwm: mpwm@400e8000 {
- 				compatible = "nxp,lpc3220-motor-pwm";
- 				reg = <0x400E8000 0x78>;
- 				status = "disabled";
-@@ -394,7 +394,7 @@
- 				#gpio-cells = <3>; /* bank, pin, flags */
- 			};
- 
--			timer4: timer@4002C000 {
-+			timer4: timer@4002c000 {
- 				compatible = "nxp,lpc3220-timer";
- 				reg = <0x4002C000 0x1000>;
- 				interrupts = <3 IRQ_TYPE_LEVEL_LOW>;
-@@ -412,7 +412,7 @@
- 				status = "disabled";
- 			};
- 
--			watchdog: watchdog@4003C000 {
-+			watchdog: watchdog@4003c000 {
- 				compatible = "nxp,pnx4008-wdt";
- 				reg = <0x4003C000 0x1000>;
- 				clocks = <&clk LPC32XX_CLK_WDOG>;
-@@ -451,7 +451,7 @@
- 				status = "disabled";
- 			};
- 
--			timer1: timer@4004C000 {
-+			timer1: timer@4004c000 {
- 				compatible = "nxp,lpc3220-timer";
- 				reg = <0x4004C000 0x1000>;
- 				interrupts = <17 IRQ_TYPE_LEVEL_LOW>;
-@@ -475,7 +475,7 @@
- 				status = "disabled";
- 			};
- 
--			pwm1: pwm@4005C000 {
-+			pwm1: pwm@4005c000 {
- 				compatible = "nxp,lpc3220-pwm";
- 				reg = <0x4005C000 0x4>;
- 				clocks = <&clk LPC32XX_CLK_PWM1>;
-@@ -484,7 +484,7 @@
- 				status = "disabled";
- 			};
- 
--			pwm2: pwm@4005C004 {
-+			pwm2: pwm@4005c004 {
- 				compatible = "nxp,lpc3220-pwm";
- 				reg = <0x4005C004 0x4>;
- 				clocks = <&clk LPC32XX_CLK_PWM2>;
-diff --git a/arch/arm/boot/dts/meson8b.dtsi b/arch/arm/boot/dts/meson8b.dtsi
-index 22d775460767..dc125769fe85 100644
---- a/arch/arm/boot/dts/meson8b.dtsi
-+++ b/arch/arm/boot/dts/meson8b.dtsi
-@@ -270,9 +270,7 @@
- 				groups = "eth_tx_clk",
- 					 "eth_tx_en",
- 					 "eth_txd1_0",
--					 "eth_txd1_1",
- 					 "eth_txd0_0",
--					 "eth_txd0_1",
- 					 "eth_rx_clk",
- 					 "eth_rx_dv",
- 					 "eth_rxd1",
-@@ -281,7 +279,9 @@
- 					 "eth_mdc",
- 					 "eth_ref_clk",
- 					 "eth_txd2",
--					 "eth_txd3";
-+					 "eth_txd3",
-+					 "eth_rxd3",
-+					 "eth_rxd2";
- 				function = "ethernet";
- 				bias-disable;
- 			};
 diff --git a/arch/arm/boot/dts/rk3288-tinker.dtsi b/arch/arm/boot/dts/rk3288-tinker.dtsi
 index aa107ee41b8b..ef653c3209bc 100644
 --- a/arch/arm/boot/dts/rk3288-tinker.dtsi
@@ -622,317 +173,6 @@ index 1c01a6f843d8..28a2e45752fe 100644
  #define PIN_PC9__TIOA4			PINMUX_PIN(PIN_PC9, 4, 2)
  #define PIN_PC10			74
  #define PIN_PC10__GPIO			PINMUX_PIN(PIN_PC10, 0, 0)
-diff --git a/arch/arm/crypto/crct10dif-ce-core.S b/arch/arm/crypto/crct10dif-ce-core.S
-index ce45ba0c0687..16019b5961e7 100644
---- a/arch/arm/crypto/crct10dif-ce-core.S
-+++ b/arch/arm/crypto/crct10dif-ce-core.S
-@@ -124,10 +124,10 @@ ENTRY(crc_t10dif_pmull)
- 	vext.8		q10, qzr, q0, #4
- 
- 	// receive the initial 64B data, xor the initial crc value
--	vld1.64		{q0-q1}, [arg2, :128]!
--	vld1.64		{q2-q3}, [arg2, :128]!
--	vld1.64		{q4-q5}, [arg2, :128]!
--	vld1.64		{q6-q7}, [arg2, :128]!
-+	vld1.64		{q0-q1}, [arg2]!
-+	vld1.64		{q2-q3}, [arg2]!
-+	vld1.64		{q4-q5}, [arg2]!
-+	vld1.64		{q6-q7}, [arg2]!
- CPU_LE(	vrev64.8	q0, q0			)
- CPU_LE(	vrev64.8	q1, q1			)
- CPU_LE(	vrev64.8	q2, q2			)
-@@ -167,7 +167,7 @@ CPU_LE(	vrev64.8	q7, q7			)
- _fold_64_B_loop:
- 
- 	.macro		fold64, reg1, reg2
--	vld1.64		{q11-q12}, [arg2, :128]!
-+	vld1.64		{q11-q12}, [arg2]!
- 
- 	vmull.p64	q8, \reg1\()h, d21
- 	vmull.p64	\reg1, \reg1\()l, d20
-@@ -238,7 +238,7 @@ _16B_reduction_loop:
- 	vmull.p64	q7, d15, d21
- 	veor.8		q7, q7, q8
- 
--	vld1.64		{q0}, [arg2, :128]!
-+	vld1.64		{q0}, [arg2]!
- CPU_LE(	vrev64.8	q0, q0		)
- 	vswp		d0, d1
- 	veor.8		q7, q7, q0
-@@ -335,7 +335,7 @@ _less_than_128:
- 	vmov.i8		q0, #0
- 	vmov		s3, arg1_low32		// get the initial crc value
- 
--	vld1.64		{q7}, [arg2, :128]!
-+	vld1.64		{q7}, [arg2]!
- CPU_LE(	vrev64.8	q7, q7		)
- 	vswp		d14, d15
- 	veor.8		q7, q7, q0
-diff --git a/arch/arm/crypto/crct10dif-ce-glue.c b/arch/arm/crypto/crct10dif-ce-glue.c
-index d428355cf38d..14c19c70a841 100644
---- a/arch/arm/crypto/crct10dif-ce-glue.c
-+++ b/arch/arm/crypto/crct10dif-ce-glue.c
-@@ -35,26 +35,15 @@ static int crct10dif_update(struct shash_desc *desc, const u8 *data,
- 			    unsigned int length)
- {
- 	u16 *crc = shash_desc_ctx(desc);
--	unsigned int l;
- 
--	if (!may_use_simd()) {
--		*crc = crc_t10dif_generic(*crc, data, length);
-+	if (length >= CRC_T10DIF_PMULL_CHUNK_SIZE && may_use_simd()) {
-+		kernel_neon_begin();
-+		*crc = crc_t10dif_pmull(*crc, data, length);
-+		kernel_neon_end();
- 	} else {
--		if (unlikely((u32)data % CRC_T10DIF_PMULL_CHUNK_SIZE)) {
--			l = min_t(u32, length, CRC_T10DIF_PMULL_CHUNK_SIZE -
--				  ((u32)data % CRC_T10DIF_PMULL_CHUNK_SIZE));
--
--			*crc = crc_t10dif_generic(*crc, data, l);
--
--			length -= l;
--			data += l;
--		}
--		if (length > 0) {
--			kernel_neon_begin();
--			*crc = crc_t10dif_pmull(*crc, data, length);
--			kernel_neon_end();
--		}
-+		*crc = crc_t10dif_generic(*crc, data, length);
- 	}
-+
- 	return 0;
- }
- 
-diff --git a/arch/arm/include/asm/barrier.h b/arch/arm/include/asm/barrier.h
-index 69772e742a0a..83ae97c049d9 100644
---- a/arch/arm/include/asm/barrier.h
-+++ b/arch/arm/include/asm/barrier.h
-@@ -11,6 +11,8 @@
- #define sev()	__asm__ __volatile__ ("sev" : : : "memory")
- #define wfe()	__asm__ __volatile__ ("wfe" : : : "memory")
- #define wfi()	__asm__ __volatile__ ("wfi" : : : "memory")
-+#else
-+#define wfe()	do { } while (0)
- #endif
- 
- #if __LINUX_ARM_ARCH__ >= 7
-diff --git a/arch/arm/include/asm/processor.h b/arch/arm/include/asm/processor.h
-index 120f4c9bbfde..57fe73ea0f72 100644
---- a/arch/arm/include/asm/processor.h
-+++ b/arch/arm/include/asm/processor.h
-@@ -89,7 +89,11 @@ extern void release_thread(struct task_struct *);
- unsigned long get_wchan(struct task_struct *p);
- 
- #if __LINUX_ARM_ARCH__ == 6 || defined(CONFIG_ARM_ERRATA_754327)
--#define cpu_relax()			smp_mb()
-+#define cpu_relax()						\
-+	do {							\
-+		smp_mb();					\
-+		__asm__ __volatile__("nop; nop; nop; nop; nop; nop; nop; nop; nop; nop;");	\
-+	} while (0)
- #else
- #define cpu_relax()			barrier()
- #endif
-diff --git a/arch/arm/include/asm/v7m.h b/arch/arm/include/asm/v7m.h
-index 187ccf6496ad..2cb00d15831b 100644
---- a/arch/arm/include/asm/v7m.h
-+++ b/arch/arm/include/asm/v7m.h
-@@ -49,7 +49,7 @@
-  * (0 -> msp; 1 -> psp). Bits [1:0] are fixed to 0b01.
-  */
- #define EXC_RET_STACK_MASK			0x00000004
--#define EXC_RET_THREADMODE_PROCESSSTACK		0xfffffffd
-+#define EXC_RET_THREADMODE_PROCESSSTACK		(3 << 2)
- 
- /* Cache related definitions */
- 
-diff --git a/arch/arm/kernel/entry-header.S b/arch/arm/kernel/entry-header.S
-index 773424843d6e..62db1c9746cb 100644
---- a/arch/arm/kernel/entry-header.S
-+++ b/arch/arm/kernel/entry-header.S
-@@ -127,7 +127,8 @@
-          */
- 	.macro	v7m_exception_slow_exit ret_r0
- 	cpsid	i
--	ldr	lr, =EXC_RET_THREADMODE_PROCESSSTACK
-+	ldr	lr, =exc_ret
-+	ldr	lr, [lr]
- 
- 	@ read original r12, sp, lr, pc and xPSR
- 	add	r12, sp, #S_IP
-diff --git a/arch/arm/kernel/entry-v7m.S b/arch/arm/kernel/entry-v7m.S
-index abcf47848525..19d2dcd6530d 100644
---- a/arch/arm/kernel/entry-v7m.S
-+++ b/arch/arm/kernel/entry-v7m.S
-@@ -146,3 +146,7 @@ ENTRY(vector_table)
- 	.rept	CONFIG_CPU_V7M_NUM_IRQ
- 	.long	__irq_entry		@ External Interrupts
- 	.endr
-+	.align	2
-+	.globl	exc_ret
-+exc_ret:
-+	.space	4
-diff --git a/arch/arm/kernel/machine_kexec.c b/arch/arm/kernel/machine_kexec.c
-index dd2eb5f76b9f..76300f3813e8 100644
---- a/arch/arm/kernel/machine_kexec.c
-+++ b/arch/arm/kernel/machine_kexec.c
-@@ -91,8 +91,11 @@ void machine_crash_nonpanic_core(void *unused)
- 
- 	set_cpu_online(smp_processor_id(), false);
- 	atomic_dec(&waiting_for_crash_ipi);
--	while (1)
-+
-+	while (1) {
- 		cpu_relax();
-+		wfe();
-+	}
- }
- 
- void crash_smp_send_stop(void)
-diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
-index 1d6f5ea522f4..a3ce7c5365fa 100644
---- a/arch/arm/kernel/smp.c
-+++ b/arch/arm/kernel/smp.c
-@@ -604,8 +604,10 @@ static void ipi_cpu_stop(unsigned int cpu)
- 	local_fiq_disable();
- 	local_irq_disable();
- 
--	while (1)
-+	while (1) {
- 		cpu_relax();
-+		wfe();
-+	}
- }
- 
- static DEFINE_PER_CPU(struct completion *, cpu_completion);
-diff --git a/arch/arm/kernel/unwind.c b/arch/arm/kernel/unwind.c
-index 0bee233fef9a..314cfb232a63 100644
---- a/arch/arm/kernel/unwind.c
-+++ b/arch/arm/kernel/unwind.c
-@@ -93,7 +93,7 @@ extern const struct unwind_idx __start_unwind_idx[];
- static const struct unwind_idx *__origin_unwind_idx;
- extern const struct unwind_idx __stop_unwind_idx[];
- 
--static DEFINE_SPINLOCK(unwind_lock);
-+static DEFINE_RAW_SPINLOCK(unwind_lock);
- static LIST_HEAD(unwind_tables);
- 
- /* Convert a prel31 symbol to an absolute address */
-@@ -201,7 +201,7 @@ static const struct unwind_idx *unwind_find_idx(unsigned long addr)
- 		/* module unwind tables */
- 		struct unwind_table *table;
- 
--		spin_lock_irqsave(&unwind_lock, flags);
-+		raw_spin_lock_irqsave(&unwind_lock, flags);
- 		list_for_each_entry(table, &unwind_tables, list) {
- 			if (addr >= table->begin_addr &&
- 			    addr < table->end_addr) {
-@@ -213,7 +213,7 @@ static const struct unwind_idx *unwind_find_idx(unsigned long addr)
- 				break;
- 			}
- 		}
--		spin_unlock_irqrestore(&unwind_lock, flags);
-+		raw_spin_unlock_irqrestore(&unwind_lock, flags);
- 	}
- 
- 	pr_debug("%s: idx = %p\n", __func__, idx);
-@@ -529,9 +529,9 @@ struct unwind_table *unwind_table_add(unsigned long start, unsigned long size,
- 	tab->begin_addr = text_addr;
- 	tab->end_addr = text_addr + text_size;
- 
--	spin_lock_irqsave(&unwind_lock, flags);
-+	raw_spin_lock_irqsave(&unwind_lock, flags);
- 	list_add_tail(&tab->list, &unwind_tables);
--	spin_unlock_irqrestore(&unwind_lock, flags);
-+	raw_spin_unlock_irqrestore(&unwind_lock, flags);
- 
- 	return tab;
- }
-@@ -543,9 +543,9 @@ void unwind_table_del(struct unwind_table *tab)
- 	if (!tab)
- 		return;
- 
--	spin_lock_irqsave(&unwind_lock, flags);
-+	raw_spin_lock_irqsave(&unwind_lock, flags);
- 	list_del(&tab->list);
--	spin_unlock_irqrestore(&unwind_lock, flags);
-+	raw_spin_unlock_irqrestore(&unwind_lock, flags);
- 
- 	kfree(tab);
- }
-diff --git a/arch/arm/lib/Makefile b/arch/arm/lib/Makefile
-index ad25fd1872c7..0bff0176db2c 100644
---- a/arch/arm/lib/Makefile
-+++ b/arch/arm/lib/Makefile
-@@ -39,7 +39,7 @@ $(obj)/csumpartialcopy.o:	$(obj)/csumpartialcopygeneric.S
- $(obj)/csumpartialcopyuser.o:	$(obj)/csumpartialcopygeneric.S
- 
- ifeq ($(CONFIG_KERNEL_MODE_NEON),y)
--  NEON_FLAGS			:= -mfloat-abi=softfp -mfpu=neon
-+  NEON_FLAGS			:= -march=armv7-a -mfloat-abi=softfp -mfpu=neon
-   CFLAGS_xor-neon.o		+= $(NEON_FLAGS)
-   obj-$(CONFIG_XOR_BLOCKS)	+= xor-neon.o
- endif
-diff --git a/arch/arm/lib/xor-neon.c b/arch/arm/lib/xor-neon.c
-index 2c40aeab3eaa..c691b901092f 100644
---- a/arch/arm/lib/xor-neon.c
-+++ b/arch/arm/lib/xor-neon.c
-@@ -14,7 +14,7 @@
- MODULE_LICENSE("GPL");
- 
- #ifndef __ARM_NEON__
--#error You should compile this file with '-mfloat-abi=softfp -mfpu=neon'
-+#error You should compile this file with '-march=armv7-a -mfloat-abi=softfp -mfpu=neon'
- #endif
- 
- /*
-diff --git a/arch/arm/mach-imx/cpuidle-imx6q.c b/arch/arm/mach-imx/cpuidle-imx6q.c
-index bfeb25aaf9a2..326e870d7123 100644
---- a/arch/arm/mach-imx/cpuidle-imx6q.c
-+++ b/arch/arm/mach-imx/cpuidle-imx6q.c
-@@ -16,30 +16,23 @@
- #include "cpuidle.h"
- #include "hardware.h"
- 
--static atomic_t master = ATOMIC_INIT(0);
--static DEFINE_SPINLOCK(master_lock);
-+static int num_idle_cpus = 0;
-+static DEFINE_SPINLOCK(cpuidle_lock);
- 
- static int imx6q_enter_wait(struct cpuidle_device *dev,
- 			    struct cpuidle_driver *drv, int index)
- {
--	if (atomic_inc_return(&master) == num_online_cpus()) {
--		/*
--		 * With this lock, we prevent other cpu to exit and enter
--		 * this function again and become the master.
--		 */
--		if (!spin_trylock(&master_lock))
--			goto idle;
-+	spin_lock(&cpuidle_lock);
-+	if (++num_idle_cpus == num_online_cpus())
- 		imx6_set_lpm(WAIT_UNCLOCKED);
--		cpu_do_idle();
--		imx6_set_lpm(WAIT_CLOCKED);
--		spin_unlock(&master_lock);
--		goto done;
--	}
-+	spin_unlock(&cpuidle_lock);
- 
--idle:
- 	cpu_do_idle();
--done:
--	atomic_dec(&master);
-+
-+	spin_lock(&cpuidle_lock);
-+	if (num_idle_cpus-- == num_online_cpus())
-+		imx6_set_lpm(WAIT_CLOCKED);
-+	spin_unlock(&cpuidle_lock);
- 
- 	return index;
- }
 diff --git a/arch/arm/mach-omap1/board-ams-delta.c b/arch/arm/mach-omap1/board-ams-delta.c
 index c4c0a8ea11e4..ee410ae7369e 100644
 --- a/arch/arm/mach-omap1/board-ams-delta.c
@@ -953,174 +193,6 @@ index c4c0a8ea11e4..ee410ae7369e 100644
  	.ngpio	= LATCH2_NGPIO,
  };
  
-diff --git a/arch/arm/mach-omap2/prm_common.c b/arch/arm/mach-omap2/prm_common.c
-index 058a37e6d11c..fd6e0671f957 100644
---- a/arch/arm/mach-omap2/prm_common.c
-+++ b/arch/arm/mach-omap2/prm_common.c
-@@ -523,8 +523,10 @@ void omap_prm_reset_system(void)
- 
- 	prm_ll_data->reset_system();
- 
--	while (1)
-+	while (1) {
- 		cpu_relax();
-+		wfe();
-+	}
- }
- 
- /**
-diff --git a/arch/arm/mach-s3c24xx/mach-osiris-dvs.c b/arch/arm/mach-s3c24xx/mach-osiris-dvs.c
-index 058ce73137e8..5d819b6ea428 100644
---- a/arch/arm/mach-s3c24xx/mach-osiris-dvs.c
-+++ b/arch/arm/mach-s3c24xx/mach-osiris-dvs.c
-@@ -65,16 +65,16 @@ static int osiris_dvs_notify(struct notifier_block *nb,
- 
- 	switch (val) {
- 	case CPUFREQ_PRECHANGE:
--		if (old_dvs & !new_dvs ||
--		    cur_dvs & !new_dvs) {
-+		if ((old_dvs && !new_dvs) ||
-+		    (cur_dvs && !new_dvs)) {
- 			pr_debug("%s: exiting dvs\n", __func__);
- 			cur_dvs = false;
- 			gpio_set_value(OSIRIS_GPIO_DVS, 1);
- 		}
- 		break;
- 	case CPUFREQ_POSTCHANGE:
--		if (!old_dvs & new_dvs ||
--		    !cur_dvs & new_dvs) {
-+		if ((!old_dvs && new_dvs) ||
-+		    (!cur_dvs && new_dvs)) {
- 			pr_debug("entering dvs\n");
- 			cur_dvs = true;
- 			gpio_set_value(OSIRIS_GPIO_DVS, 0);
-diff --git a/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c b/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c
-index 8e50daa99151..dc526ef2e9b3 100644
---- a/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c
-+++ b/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c
-@@ -40,6 +40,7 @@
- struct regulator_quirk {
- 	struct list_head		list;
- 	const struct of_device_id	*id;
-+	struct device_node		*np;
- 	struct of_phandle_args		irq_args;
- 	struct i2c_msg			i2c_msg;
- 	bool				shared;	/* IRQ line is shared */
-@@ -101,6 +102,9 @@ static int regulator_quirk_notify(struct notifier_block *nb,
- 		if (!pos->shared)
- 			continue;
- 
-+		if (pos->np->parent != client->dev.parent->of_node)
-+			continue;
-+
- 		dev_info(&client->dev, "clearing %s@0x%02x interrupts\n",
- 			 pos->id->compatible, pos->i2c_msg.addr);
- 
-@@ -165,6 +169,7 @@ static int __init rcar_gen2_regulator_quirk(void)
- 		memcpy(&quirk->i2c_msg, id->data, sizeof(quirk->i2c_msg));
- 
- 		quirk->id = id;
-+		quirk->np = np;
- 		quirk->i2c_msg.addr = addr;
- 
- 		ret = of_irq_parse_one(np, 0, argsa);
-diff --git a/arch/arm/mm/copypage-v4mc.c b/arch/arm/mm/copypage-v4mc.c
-index b03202cddddb..f74cdce6d4da 100644
---- a/arch/arm/mm/copypage-v4mc.c
-+++ b/arch/arm/mm/copypage-v4mc.c
-@@ -45,6 +45,7 @@ static void mc_copy_user_page(void *from, void *to)
- 	int tmp;
- 
- 	asm volatile ("\
-+	.syntax unified\n\
- 	ldmia	%0!, {r2, r3, ip, lr}		@ 4\n\
- 1:	mcr	p15, 0, %1, c7, c6, 1		@ 1   invalidate D line\n\
- 	stmia	%1!, {r2, r3, ip, lr}		@ 4\n\
-@@ -56,7 +57,7 @@ static void mc_copy_user_page(void *from, void *to)
- 	ldmia	%0!, {r2, r3, ip, lr}		@ 4\n\
- 	subs	%2, %2, #1			@ 1\n\
- 	stmia	%1!, {r2, r3, ip, lr}		@ 4\n\
--	ldmneia	%0!, {r2, r3, ip, lr}		@ 4\n\
-+	ldmiane	%0!, {r2, r3, ip, lr}		@ 4\n\
- 	bne	1b				@ "
- 	: "+&r" (from), "+&r" (to), "=&r" (tmp)
- 	: "2" (PAGE_SIZE / 64)
-diff --git a/arch/arm/mm/copypage-v4wb.c b/arch/arm/mm/copypage-v4wb.c
-index cd3e165afeed..6d336740aae4 100644
---- a/arch/arm/mm/copypage-v4wb.c
-+++ b/arch/arm/mm/copypage-v4wb.c
-@@ -27,6 +27,7 @@ static void v4wb_copy_user_page(void *kto, const void *kfrom)
- 	int tmp;
- 
- 	asm volatile ("\
-+	.syntax unified\n\
- 	ldmia	%1!, {r3, r4, ip, lr}		@ 4\n\
- 1:	mcr	p15, 0, %0, c7, c6, 1		@ 1   invalidate D line\n\
- 	stmia	%0!, {r3, r4, ip, lr}		@ 4\n\
-@@ -38,7 +39,7 @@ static void v4wb_copy_user_page(void *kto, const void *kfrom)
- 	ldmia	%1!, {r3, r4, ip, lr}		@ 4\n\
- 	subs	%2, %2, #1			@ 1\n\
- 	stmia	%0!, {r3, r4, ip, lr}		@ 4\n\
--	ldmneia	%1!, {r3, r4, ip, lr}		@ 4\n\
-+	ldmiane	%1!, {r3, r4, ip, lr}		@ 4\n\
- 	bne	1b				@ 1\n\
- 	mcr	p15, 0, %1, c7, c10, 4		@ 1   drain WB"
- 	: "+&r" (kto), "+&r" (kfrom), "=&r" (tmp)
-diff --git a/arch/arm/mm/copypage-v4wt.c b/arch/arm/mm/copypage-v4wt.c
-index 8614572e1296..3851bb396442 100644
---- a/arch/arm/mm/copypage-v4wt.c
-+++ b/arch/arm/mm/copypage-v4wt.c
-@@ -25,6 +25,7 @@ static void v4wt_copy_user_page(void *kto, const void *kfrom)
- 	int tmp;
- 
- 	asm volatile ("\
-+	.syntax unified\n\
- 	ldmia	%1!, {r3, r4, ip, lr}		@ 4\n\
- 1:	stmia	%0!, {r3, r4, ip, lr}		@ 4\n\
- 	ldmia	%1!, {r3, r4, ip, lr}		@ 4+1\n\
-@@ -34,7 +35,7 @@ static void v4wt_copy_user_page(void *kto, const void *kfrom)
- 	ldmia	%1!, {r3, r4, ip, lr}		@ 4\n\
- 	subs	%2, %2, #1			@ 1\n\
- 	stmia	%0!, {r3, r4, ip, lr}		@ 4\n\
--	ldmneia	%1!, {r3, r4, ip, lr}		@ 4\n\
-+	ldmiane	%1!, {r3, r4, ip, lr}		@ 4\n\
- 	bne	1b				@ 1\n\
- 	mcr	p15, 0, %2, c7, c7, 0		@ flush ID cache"
- 	: "+&r" (kto), "+&r" (kfrom), "=&r" (tmp)
-diff --git a/arch/arm/mm/proc-v7m.S b/arch/arm/mm/proc-v7m.S
-index 47a5acc64433..92e84181933a 100644
---- a/arch/arm/mm/proc-v7m.S
-+++ b/arch/arm/mm/proc-v7m.S
-@@ -139,6 +139,9 @@ __v7m_setup_cont:
- 	cpsie	i
- 	svc	#0
- 1:	cpsid	i
-+	ldr	r0, =exc_ret
-+	orr	lr, lr, #EXC_RET_THREADMODE_PROCESSSTACK
-+	str	lr, [r0]
- 	ldmia	sp, {r0-r3, r12}
- 	str	r5, [r12, #11 * 4]	@ restore the original SVC vector entry
- 	mov	lr, r6			@ restore LR
-diff --git a/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts b/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts
-index 610235028cc7..c14205cd6bf5 100644
---- a/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts
-+++ b/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts
-@@ -118,6 +118,7 @@
- 		reset-gpios = <&gpio0 5 GPIO_ACTIVE_LOW>;
- 		clocks = <&pmic>;
- 		clock-names = "ext_clock";
-+		post-power-on-delay-ms = <10>;
- 		power-off-delay-us = <10>;
- 	};
- 
-@@ -300,7 +301,6 @@
- 
- 		dwmmc_0: dwmmc0@f723d000 {
- 			cap-mmc-highspeed;
--			mmc-hs200-1_8v;
- 			non-removable;
- 			bus-width = <0x8>;
- 			vmmc-supply = <&ldo19>;
 diff --git a/arch/arm64/boot/dts/rockchip/rk3328-rock64.dts b/arch/arm64/boot/dts/rockchip/rk3328-rock64.dts
 index 040b36ef0dd2..520ed8e474be 100644
 --- a/arch/arm64/boot/dts/rockchip/rk3328-rock64.dts
@@ -1246,126 +318,6 @@ index ecd7f19c3542..97aa65455b4a 100644
  			};
  
  			rmiim1_pins: rmiim1-pins {
-diff --git a/arch/arm64/boot/dts/xilinx/zynqmp-zcu100-revC.dts b/arch/arm64/boot/dts/xilinx/zynqmp-zcu100-revC.dts
-index 13a0a028df98..e5699d0d91e4 100644
---- a/arch/arm64/boot/dts/xilinx/zynqmp-zcu100-revC.dts
-+++ b/arch/arm64/boot/dts/xilinx/zynqmp-zcu100-revC.dts
-@@ -101,6 +101,7 @@
- 	sdio_pwrseq: sdio-pwrseq {
- 		compatible = "mmc-pwrseq-simple";
- 		reset-gpios = <&gpio 7 GPIO_ACTIVE_LOW>; /* WIFI_EN */
-+		post-power-on-delay-ms = <10>;
- 	};
- };
- 
-diff --git a/arch/arm64/crypto/aes-ce-ccm-core.S b/arch/arm64/crypto/aes-ce-ccm-core.S
-index e3a375c4cb83..1b151442dac1 100644
---- a/arch/arm64/crypto/aes-ce-ccm-core.S
-+++ b/arch/arm64/crypto/aes-ce-ccm-core.S
-@@ -74,12 +74,13 @@ ENTRY(ce_aes_ccm_auth_data)
- 	beq	10f
- 	ext	v0.16b, v0.16b, v0.16b, #1	/* rotate out the mac bytes */
- 	b	7b
--8:	mov	w7, w8
-+8:	cbz	w8, 91f
-+	mov	w7, w8
- 	add	w8, w8, #16
- 9:	ext	v1.16b, v1.16b, v1.16b, #1
- 	adds	w7, w7, #1
- 	bne	9b
--	eor	v0.16b, v0.16b, v1.16b
-+91:	eor	v0.16b, v0.16b, v1.16b
- 	st1	{v0.16b}, [x0]
- 10:	str	w8, [x3]
- 	ret
-diff --git a/arch/arm64/crypto/aes-ce-ccm-glue.c b/arch/arm64/crypto/aes-ce-ccm-glue.c
-index 68b11aa690e4..986191e8c058 100644
---- a/arch/arm64/crypto/aes-ce-ccm-glue.c
-+++ b/arch/arm64/crypto/aes-ce-ccm-glue.c
-@@ -125,7 +125,7 @@ static void ccm_update_mac(struct crypto_aes_ctx *key, u8 mac[], u8 const in[],
- 			abytes -= added;
- 		}
- 
--		while (abytes > AES_BLOCK_SIZE) {
-+		while (abytes >= AES_BLOCK_SIZE) {
- 			__aes_arm64_encrypt(key->key_enc, mac, mac,
- 					    num_rounds(key));
- 			crypto_xor(mac, in, AES_BLOCK_SIZE);
-@@ -139,8 +139,6 @@ static void ccm_update_mac(struct crypto_aes_ctx *key, u8 mac[], u8 const in[],
- 					    num_rounds(key));
- 			crypto_xor(mac, in, abytes);
- 			*macp = abytes;
--		} else {
--			*macp = 0;
- 		}
- 	}
- }
-diff --git a/arch/arm64/crypto/aes-neonbs-core.S b/arch/arm64/crypto/aes-neonbs-core.S
-index e613a87f8b53..8432c8d0dea6 100644
---- a/arch/arm64/crypto/aes-neonbs-core.S
-+++ b/arch/arm64/crypto/aes-neonbs-core.S
-@@ -971,18 +971,22 @@ CPU_LE(	rev		x8, x8		)
- 
- 8:	next_ctr	v0
- 	st1		{v0.16b}, [x24]
--	cbz		x23, 0f
-+	cbz		x23, .Lctr_done
- 
- 	cond_yield_neon	98b
- 	b		99b
- 
--0:	frame_pop
-+.Lctr_done:
-+	frame_pop
- 	ret
- 
- 	/*
- 	 * If we are handling the tail of the input (x6 != NULL), return the
- 	 * final keystream block back to the caller.
- 	 */
-+0:	cbz		x25, 8b
-+	st1		{v0.16b}, [x25]
-+	b		8b
- 1:	cbz		x25, 8b
- 	st1		{v1.16b}, [x25]
- 	b		8b
-diff --git a/arch/arm64/crypto/crct10dif-ce-glue.c b/arch/arm64/crypto/crct10dif-ce-glue.c
-index b461d62023f2..567c24f3d224 100644
---- a/arch/arm64/crypto/crct10dif-ce-glue.c
-+++ b/arch/arm64/crypto/crct10dif-ce-glue.c
-@@ -39,26 +39,13 @@ static int crct10dif_update(struct shash_desc *desc, const u8 *data,
- 			    unsigned int length)
- {
- 	u16 *crc = shash_desc_ctx(desc);
--	unsigned int l;
- 
--	if (unlikely((u64)data % CRC_T10DIF_PMULL_CHUNK_SIZE)) {
--		l = min_t(u32, length, CRC_T10DIF_PMULL_CHUNK_SIZE -
--			  ((u64)data % CRC_T10DIF_PMULL_CHUNK_SIZE));
--
--		*crc = crc_t10dif_generic(*crc, data, l);
--
--		length -= l;
--		data += l;
--	}
--
--	if (length > 0) {
--		if (may_use_simd()) {
--			kernel_neon_begin();
--			*crc = crc_t10dif_pmull(*crc, data, length);
--			kernel_neon_end();
--		} else {
--			*crc = crc_t10dif_generic(*crc, data, length);
--		}
-+	if (length >= CRC_T10DIF_PMULL_CHUNK_SIZE && may_use_simd()) {
-+		kernel_neon_begin();
-+		*crc = crc_t10dif_pmull(*crc, data, length);
-+		kernel_neon_end();
-+	} else {
-+		*crc = crc_t10dif_generic(*crc, data, length);
- 	}
- 
- 	return 0;
 diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
 index cccb83ad7fa8..e1d95f08f8e1 100644
 --- a/arch/arm64/include/asm/futex.h
@@ -1418,57 +370,6 @@ index cccb83ad7fa8..e1d95f08f8e1 100644
  				  ret, oldval, uaddr, tmp, oparg);
  		break;
  	default:
-diff --git a/arch/arm64/include/asm/hardirq.h b/arch/arm64/include/asm/hardirq.h
-index 1473fc2f7ab7..89691c86640a 100644
---- a/arch/arm64/include/asm/hardirq.h
-+++ b/arch/arm64/include/asm/hardirq.h
-@@ -17,8 +17,12 @@
- #define __ASM_HARDIRQ_H
- 
- #include <linux/cache.h>
-+#include <linux/percpu.h>
- #include <linux/threads.h>
-+#include <asm/barrier.h>
- #include <asm/irq.h>
-+#include <asm/kvm_arm.h>
-+#include <asm/sysreg.h>
- 
- #define NR_IPI	7
- 
-@@ -37,6 +41,33 @@ u64 smp_irq_stat_cpu(unsigned int cpu);
- 
- #define __ARCH_IRQ_EXIT_IRQS_DISABLED	1
- 
-+struct nmi_ctx {
-+	u64 hcr;
-+};
-+
-+DECLARE_PER_CPU(struct nmi_ctx, nmi_contexts);
-+
-+#define arch_nmi_enter()							\
-+	do {									\
-+		if (is_kernel_in_hyp_mode()) {					\
-+			struct nmi_ctx *nmi_ctx = this_cpu_ptr(&nmi_contexts);	\
-+			nmi_ctx->hcr = read_sysreg(hcr_el2);			\
-+			if (!(nmi_ctx->hcr & HCR_TGE)) {			\
-+				write_sysreg(nmi_ctx->hcr | HCR_TGE, hcr_el2);	\
-+				isb();						\
-+			}							\
-+		}								\
-+	} while (0)
-+
-+#define arch_nmi_exit()								\
-+	do {									\
-+		if (is_kernel_in_hyp_mode()) {					\
-+			struct nmi_ctx *nmi_ctx = this_cpu_ptr(&nmi_contexts);	\
-+			if (!(nmi_ctx->hcr & HCR_TGE))				\
-+				write_sysreg(nmi_ctx->hcr, hcr_el2);		\
-+		}								\
-+	} while (0)
-+
- static inline void ack_bad_irq(unsigned int irq)
- {
- 	extern unsigned long irq_err_count;
 diff --git a/arch/arm64/include/asm/module.h b/arch/arm64/include/asm/module.h
 index 905e1bb0e7bd..cd9f4e9d04d3 100644
 --- a/arch/arm64/include/asm/module.h
@@ -1497,86 +398,6 @@ index 8e4431a8821f..07b298120182 100644
  				pr_err("ftrace: far branches to multiple entry points unsupported inside a single module\n");
  				return -EINVAL;
  			}
-diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c
-index 780a12f59a8f..92fa81798fb9 100644
---- a/arch/arm64/kernel/irq.c
-+++ b/arch/arm64/kernel/irq.c
-@@ -33,6 +33,9 @@
- 
- unsigned long irq_err_count;
- 
-+/* Only access this in an NMI enter/exit */
-+DEFINE_PER_CPU(struct nmi_ctx, nmi_contexts);
-+
- DEFINE_PER_CPU(unsigned long *, irq_stack_ptr);
- 
- int arch_show_interrupts(struct seq_file *p, int prec)
-diff --git a/arch/arm64/kernel/kgdb.c b/arch/arm64/kernel/kgdb.c
-index ce46c4cdf368..691854b77c7f 100644
---- a/arch/arm64/kernel/kgdb.c
-+++ b/arch/arm64/kernel/kgdb.c
-@@ -244,27 +244,33 @@ int kgdb_arch_handle_exception(int exception_vector, int signo,
- 
- static int kgdb_brk_fn(struct pt_regs *regs, unsigned int esr)
- {
-+	if (user_mode(regs))
-+		return DBG_HOOK_ERROR;
-+
- 	kgdb_handle_exception(1, SIGTRAP, 0, regs);
--	return 0;
-+	return DBG_HOOK_HANDLED;
- }
- NOKPROBE_SYMBOL(kgdb_brk_fn)
- 
- static int kgdb_compiled_brk_fn(struct pt_regs *regs, unsigned int esr)
- {
-+	if (user_mode(regs))
-+		return DBG_HOOK_ERROR;
-+
- 	compiled_break = 1;
- 	kgdb_handle_exception(1, SIGTRAP, 0, regs);
- 
--	return 0;
-+	return DBG_HOOK_HANDLED;
- }
- NOKPROBE_SYMBOL(kgdb_compiled_brk_fn);
- 
- static int kgdb_step_brk_fn(struct pt_regs *regs, unsigned int esr)
- {
--	if (!kgdb_single_step)
-+	if (user_mode(regs) || !kgdb_single_step)
- 		return DBG_HOOK_ERROR;
- 
- 	kgdb_handle_exception(1, SIGTRAP, 0, regs);
--	return 0;
-+	return DBG_HOOK_HANDLED;
- }
- NOKPROBE_SYMBOL(kgdb_step_brk_fn);
- 
-diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
-index f17afb99890c..7fb6f3aa5ceb 100644
---- a/arch/arm64/kernel/probes/kprobes.c
-+++ b/arch/arm64/kernel/probes/kprobes.c
-@@ -450,6 +450,9 @@ kprobe_single_step_handler(struct pt_regs *regs, unsigned int esr)
- 	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
- 	int retval;
- 
-+	if (user_mode(regs))
-+		return DBG_HOOK_ERROR;
-+
- 	/* return error if this is not our step */
- 	retval = kprobe_ss_hit(kcb, instruction_pointer(regs));
- 
-@@ -466,6 +469,9 @@ kprobe_single_step_handler(struct pt_regs *regs, unsigned int esr)
- int __kprobes
- kprobe_breakpoint_handler(struct pt_regs *regs, unsigned int esr)
- {
-+	if (user_mode(regs))
-+		return DBG_HOOK_ERROR;
-+
- 	kprobe_handler(regs);
- 	return DBG_HOOK_HANDLED;
- }
 diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
 index 4e2fb877f8d5..92bfeb3e8d7c 100644
 --- a/arch/arm64/kernel/traps.c
@@ -1625,55 +446,6 @@ index 4e2fb877f8d5..92bfeb3e8d7c 100644
  
  	return ret;
  }
-diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
-index c936aa40c3f4..b6dac3a68508 100644
---- a/arch/arm64/kvm/sys_regs.c
-+++ b/arch/arm64/kvm/sys_regs.c
-@@ -1476,7 +1476,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
- 
- 	{ SYS_DESC(SYS_DACR32_EL2), NULL, reset_unknown, DACR32_EL2 },
- 	{ SYS_DESC(SYS_IFSR32_EL2), NULL, reset_unknown, IFSR32_EL2 },
--	{ SYS_DESC(SYS_FPEXC32_EL2), NULL, reset_val, FPEXC32_EL2, 0x70 },
-+	{ SYS_DESC(SYS_FPEXC32_EL2), NULL, reset_val, FPEXC32_EL2, 0x700 },
- };
- 
- static bool trap_dbgidr(struct kvm_vcpu *vcpu,
-diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
-index efb7b2cbead5..ef46925096f0 100644
---- a/arch/arm64/mm/fault.c
-+++ b/arch/arm64/mm/fault.c
-@@ -824,11 +824,12 @@ void __init hook_debug_fault_code(int nr,
- 	debug_fault_info[nr].name	= name;
- }
- 
--asmlinkage int __exception do_debug_exception(unsigned long addr,
-+asmlinkage int __exception do_debug_exception(unsigned long addr_if_watchpoint,
- 					      unsigned int esr,
- 					      struct pt_regs *regs)
- {
- 	const struct fault_info *inf = esr_to_debug_fault_info(esr);
-+	unsigned long pc = instruction_pointer(regs);
- 	int rv;
- 
- 	/*
-@@ -838,14 +839,14 @@ asmlinkage int __exception do_debug_exception(unsigned long addr,
- 	if (interrupts_enabled(regs))
- 		trace_hardirqs_off();
- 
--	if (user_mode(regs) && !is_ttbr0_addr(instruction_pointer(regs)))
-+	if (user_mode(regs) && !is_ttbr0_addr(pc))
- 		arm64_apply_bp_hardening();
- 
--	if (!inf->fn(addr, esr, regs)) {
-+	if (!inf->fn(addr_if_watchpoint, esr, regs)) {
- 		rv = 1;
- 	} else {
- 		arm64_notify_die(inf->name, regs,
--				 inf->sig, inf->code, (void __user *)addr, esr);
-+				 inf->sig, inf->code, (void __user *)pc, esr);
- 		rv = 0;
- 	}
- 
 diff --git a/arch/csky/include/asm/syscall.h b/arch/csky/include/asm/syscall.h
 index d637445737b7..9a9cd81e66c1 100644
 --- a/arch/csky/include/asm/syscall.h
@@ -1706,137 +478,6 @@ index d637445737b7..9a9cd81e66c1 100644
  }
  
  static inline int
-diff --git a/arch/h8300/Makefile b/arch/h8300/Makefile
-index f801f3708a89..ba0f26cfad61 100644
---- a/arch/h8300/Makefile
-+++ b/arch/h8300/Makefile
-@@ -27,7 +27,7 @@ KBUILD_LDFLAGS += $(ldflags-y)
- CHECKFLAGS += -msize-long
- 
- ifeq ($(CROSS_COMPILE),)
--CROSS_COMPILE := h8300-unknown-linux-
-+CROSS_COMPILE := $(call cc-cross-prefix, h8300-unknown-linux- h8300-linux-)
- endif
- 
- core-y	+= arch/$(ARCH)/kernel/ arch/$(ARCH)/mm/
-diff --git a/arch/m68k/Makefile b/arch/m68k/Makefile
-index f00ca53f8c14..482513b9af2c 100644
---- a/arch/m68k/Makefile
-+++ b/arch/m68k/Makefile
-@@ -58,7 +58,10 @@ cpuflags-$(CONFIG_M5206e)	:= $(call cc-option,-mcpu=5206e,-m5200)
- cpuflags-$(CONFIG_M5206)	:= $(call cc-option,-mcpu=5206,-m5200)
- 
- KBUILD_AFLAGS += $(cpuflags-y)
--KBUILD_CFLAGS += $(cpuflags-y) -pipe
-+KBUILD_CFLAGS += $(cpuflags-y)
-+
-+KBUILD_CFLAGS += -pipe -ffreestanding
-+
- ifdef CONFIG_MMU
- # without -fno-strength-reduce the 53c7xx.c driver fails ;-(
- KBUILD_CFLAGS += -fno-strength-reduce -ffixed-a2
-diff --git a/arch/mips/include/asm/jump_label.h b/arch/mips/include/asm/jump_label.h
-index e77672539e8e..e4456e450f94 100644
---- a/arch/mips/include/asm/jump_label.h
-+++ b/arch/mips/include/asm/jump_label.h
-@@ -21,15 +21,15 @@
- #endif
- 
- #ifdef CONFIG_CPU_MICROMIPS
--#define NOP_INSN "nop32"
-+#define B_INSN "b32"
- #else
--#define NOP_INSN "nop"
-+#define B_INSN "b"
- #endif
- 
- static __always_inline bool arch_static_branch(struct static_key *key, bool branch)
- {
--	asm_volatile_goto("1:\t" NOP_INSN "\n\t"
--		"nop\n\t"
-+	asm_volatile_goto("1:\t" B_INSN " 2f\n\t"
-+		"2:\tnop\n\t"
- 		".pushsection __jump_table,  \"aw\"\n\t"
- 		WORD_INSN " 1b, %l[l_yes], %0\n\t"
- 		".popsection\n\t"
-diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
-index d2abd98471e8..41204a49cf95 100644
---- a/arch/mips/include/asm/kvm_host.h
-+++ b/arch/mips/include/asm/kvm_host.h
-@@ -1134,7 +1134,7 @@ static inline void kvm_arch_hardware_unsetup(void) {}
- static inline void kvm_arch_sync_events(struct kvm *kvm) {}
- static inline void kvm_arch_free_memslot(struct kvm *kvm,
- 		struct kvm_memory_slot *free, struct kvm_memory_slot *dont) {}
--static inline void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots) {}
-+static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
- static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
- static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
- static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
-diff --git a/arch/mips/kernel/irq.c b/arch/mips/kernel/irq.c
-index ba150c755fcc..85b6c60f285d 100644
---- a/arch/mips/kernel/irq.c
-+++ b/arch/mips/kernel/irq.c
-@@ -52,6 +52,7 @@ asmlinkage void spurious_interrupt(void)
- void __init init_IRQ(void)
- {
- 	int i;
-+	unsigned int order = get_order(IRQ_STACK_SIZE);
- 
- 	for (i = 0; i < NR_IRQS; i++)
- 		irq_set_noprobe(i);
-@@ -62,8 +63,7 @@ void __init init_IRQ(void)
- 	arch_init_irq();
- 
- 	for_each_possible_cpu(i) {
--		int irq_pages = IRQ_STACK_SIZE / PAGE_SIZE;
--		void *s = (void *)__get_free_pages(GFP_KERNEL, irq_pages);
-+		void *s = (void *)__get_free_pages(GFP_KERNEL, order);
- 
- 		irq_stack[i] = s;
- 		pr_debug("CPU%d IRQ stack at 0x%p - 0x%p\n", i,
-diff --git a/arch/mips/kernel/vmlinux.lds.S b/arch/mips/kernel/vmlinux.lds.S
-index cb7e9ed7a453..33ee0d18fb0a 100644
---- a/arch/mips/kernel/vmlinux.lds.S
-+++ b/arch/mips/kernel/vmlinux.lds.S
-@@ -140,6 +140,13 @@ SECTIONS
- 	PERCPU_SECTION(1 << CONFIG_MIPS_L1_CACHE_SHIFT)
- #endif
- 
-+#ifdef CONFIG_MIPS_ELF_APPENDED_DTB
-+	.appended_dtb : AT(ADDR(.appended_dtb) - LOAD_OFFSET) {
-+		*(.appended_dtb)
-+		KEEP(*(.appended_dtb))
-+	}
-+#endif
-+
- #ifdef CONFIG_RELOCATABLE
- 	. = ALIGN(4);
- 
-@@ -164,11 +171,6 @@ SECTIONS
- 	__appended_dtb = .;
- 	/* leave space for appended DTB */
- 	. += 0x100000;
--#elif defined(CONFIG_MIPS_ELF_APPENDED_DTB)
--	.appended_dtb : AT(ADDR(.appended_dtb) - LOAD_OFFSET) {
--		*(.appended_dtb)
--		KEEP(*(.appended_dtb))
--	}
- #endif
- 	/*
- 	 * Align to 64K in attempt to eliminate holes before the
-diff --git a/arch/mips/loongson64/lemote-2f/irq.c b/arch/mips/loongson64/lemote-2f/irq.c
-index 9e33e45aa17c..b213cecb8e3a 100644
---- a/arch/mips/loongson64/lemote-2f/irq.c
-+++ b/arch/mips/loongson64/lemote-2f/irq.c
-@@ -103,7 +103,7 @@ static struct irqaction ip6_irqaction = {
- static struct irqaction cascade_irqaction = {
- 	.handler = no_action,
- 	.name = "cascade",
--	.flags = IRQF_NO_THREAD,
-+	.flags = IRQF_NO_THREAD | IRQF_NO_SUSPEND,
- };
- 
- void __init mach_init_irq(void)
 diff --git a/arch/parisc/include/asm/ptrace.h b/arch/parisc/include/asm/ptrace.h
 index 2a27b275ab09..9ff033d261ab 100644
 --- a/arch/parisc/include/asm/ptrace.h
@@ -1889,167 +530,6 @@ index f2cf86ac279b..25946624ce6a 100644
  	cpunum = smp_processor_id();
  
  	init_cpu_topology();
-diff --git a/arch/powerpc/include/asm/book3s/64/hugetlb.h b/arch/powerpc/include/asm/book3s/64/hugetlb.h
-index 5b0177733994..46130ef4941c 100644
---- a/arch/powerpc/include/asm/book3s/64/hugetlb.h
-+++ b/arch/powerpc/include/asm/book3s/64/hugetlb.h
-@@ -35,6 +35,14 @@ static inline int hstate_get_psize(struct hstate *hstate)
- #ifdef CONFIG_ARCH_HAS_GIGANTIC_PAGE
- static inline bool gigantic_page_supported(void)
- {
-+	/*
-+	 * We used gigantic page reservation with hypervisor assist in some case.
-+	 * We cannot use runtime allocation of gigantic pages in those platforms
-+	 * This is hash translation mode LPARs.
-+	 */
-+	if (firmware_has_feature(FW_FEATURE_LPAR) && !radix_enabled())
-+		return false;
-+
- 	return true;
- }
- #endif
-diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
-index 0f98f00da2ea..19693b8add93 100644
---- a/arch/powerpc/include/asm/kvm_host.h
-+++ b/arch/powerpc/include/asm/kvm_host.h
-@@ -837,7 +837,7 @@ struct kvm_vcpu_arch {
- static inline void kvm_arch_hardware_disable(void) {}
- static inline void kvm_arch_hardware_unsetup(void) {}
- static inline void kvm_arch_sync_events(struct kvm *kvm) {}
--static inline void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots) {}
-+static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
- static inline void kvm_arch_flush_shadow_all(struct kvm *kvm) {}
- static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
- static inline void kvm_arch_exit(void) {}
-diff --git a/arch/powerpc/include/asm/powernv.h b/arch/powerpc/include/asm/powernv.h
-index 2f3ff7a27881..d85fcfea32ca 100644
---- a/arch/powerpc/include/asm/powernv.h
-+++ b/arch/powerpc/include/asm/powernv.h
-@@ -23,6 +23,8 @@ extern int pnv_npu2_handle_fault(struct npu_context *context, uintptr_t *ea,
- 				unsigned long *flags, unsigned long *status,
- 				int count);
- 
-+void pnv_program_cpu_hotplug_lpcr(unsigned int cpu, u64 lpcr_val);
-+
- void pnv_tm_init(void);
- #else
- static inline void powernv_set_nmmu_ptcr(unsigned long ptcr) { }
-diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
-index 19a8834e0398..0690a306f6ca 100644
---- a/arch/powerpc/include/asm/ppc-opcode.h
-+++ b/arch/powerpc/include/asm/ppc-opcode.h
-@@ -302,6 +302,7 @@
- /* Misc instructions for BPF compiler */
- #define PPC_INST_LBZ			0x88000000
- #define PPC_INST_LD			0xe8000000
-+#define PPC_INST_LDX			0x7c00002a
- #define PPC_INST_LHZ			0xa0000000
- #define PPC_INST_LWZ			0x80000000
- #define PPC_INST_LHBRX			0x7c00062c
-@@ -309,6 +310,7 @@
- #define PPC_INST_STB			0x98000000
- #define PPC_INST_STH			0xb0000000
- #define PPC_INST_STD			0xf8000000
-+#define PPC_INST_STDX			0x7c00012a
- #define PPC_INST_STDU			0xf8000001
- #define PPC_INST_STW			0x90000000
- #define PPC_INST_STWU			0x94000000
-diff --git a/arch/powerpc/include/asm/topology.h b/arch/powerpc/include/asm/topology.h
-index a4a718dbfec6..f85e2b01c3df 100644
---- a/arch/powerpc/include/asm/topology.h
-+++ b/arch/powerpc/include/asm/topology.h
-@@ -132,6 +132,8 @@ static inline void shared_proc_topology_init(void) {}
- #define topology_sibling_cpumask(cpu)	(per_cpu(cpu_sibling_map, cpu))
- #define topology_core_cpumask(cpu)	(per_cpu(cpu_core_map, cpu))
- #define topology_core_id(cpu)		(cpu_to_core_id(cpu))
-+
-+int dlpar_cpu_readd(int cpu);
- #endif
- #endif
- 
-diff --git a/arch/powerpc/include/asm/vdso_datapage.h b/arch/powerpc/include/asm/vdso_datapage.h
-index 1afe90ade595..bbc06bd72b1f 100644
---- a/arch/powerpc/include/asm/vdso_datapage.h
-+++ b/arch/powerpc/include/asm/vdso_datapage.h
-@@ -82,10 +82,10 @@ struct vdso_data {
- 	__u32 icache_block_size;		/* L1 i-cache block size     */
- 	__u32 dcache_log_block_size;		/* L1 d-cache log block size */
- 	__u32 icache_log_block_size;		/* L1 i-cache log block size */
--	__s32 wtom_clock_sec;			/* Wall to monotonic clock */
--	__s32 wtom_clock_nsec;
--	struct timespec stamp_xtime;	/* xtime as at tb_orig_stamp */
--	__u32 stamp_sec_fraction;	/* fractional seconds of stamp_xtime */
-+	__u32 stamp_sec_fraction;		/* fractional seconds of stamp_xtime */
-+	__s32 wtom_clock_nsec;			/* Wall to monotonic clock nsec */
-+	__s64 wtom_clock_sec;			/* Wall to monotonic clock sec */
-+	struct timespec stamp_xtime;		/* xtime as at tb_orig_stamp */
-    	__u32 syscall_map_64[SYSCALL_MAP_SIZE]; /* map of syscalls  */
-    	__u32 syscall_map_32[SYSCALL_MAP_SIZE]; /* map of syscalls */
- };
-diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
-index 0768dfd8a64e..fdd528cdb2ee 100644
---- a/arch/powerpc/kernel/entry_32.S
-+++ b/arch/powerpc/kernel/entry_32.S
-@@ -745,6 +745,9 @@ fast_exception_return:
- 	mtcr	r10
- 	lwz	r10,_LINK(r11)
- 	mtlr	r10
-+	/* Clear the exception_marker on the stack to avoid confusing stacktrace */
-+	li	r10, 0
-+	stw	r10, 8(r11)
- 	REST_GPR(10, r11)
- #if defined(CONFIG_PPC_8xx) && defined(CONFIG_PERF_EVENTS)
- 	mtspr	SPRN_NRI, r0
-@@ -982,6 +985,9 @@ END_FTR_SECTION_IFSET(CPU_FTR_NEED_PAIRED_STWCX)
- 	mtcrf	0xFF,r10
- 	mtlr	r11
- 
-+	/* Clear the exception_marker on the stack to avoid confusing stacktrace */
-+	li	r10, 0
-+	stw	r10, 8(r1)
- 	/*
- 	 * Once we put values in SRR0 and SRR1, we are in a state
- 	 * where exceptions are not recoverable, since taking an
-@@ -1021,6 +1027,9 @@ exc_exit_restart_end:
- 	mtlr	r11
- 	lwz	r10,_CCR(r1)
- 	mtcrf	0xff,r10
-+	/* Clear the exception_marker on the stack to avoid confusing stacktrace */
-+	li	r10, 0
-+	stw	r10, 8(r1)
- 	REST_2GPRS(9, r1)
- 	.globl exc_exit_restart
- exc_exit_restart:
-diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
-index 435927f549c4..a2c168b395d2 100644
---- a/arch/powerpc/kernel/entry_64.S
-+++ b/arch/powerpc/kernel/entry_64.S
-@@ -1002,6 +1002,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
- 	ld	r2,_NIP(r1)
- 	mtspr	SPRN_SRR0,r2
- 
-+	/*
-+	 * Leaving a stale exception_marker on the stack can confuse
-+	 * the reliable stack unwinder later on. Clear it.
-+	 */
-+	li	r2,0
-+	std	r2,STACK_FRAME_OVERHEAD-16(r1)
-+
- 	ld	r0,GPR0(r1)
- 	ld	r2,GPR2(r1)
- 	ld	r3,GPR3(r1)
-diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
-index afb638778f44..447defdd4503 100644
---- a/arch/powerpc/kernel/exceptions-64e.S
-+++ b/arch/powerpc/kernel/exceptions-64e.S
-@@ -349,6 +349,7 @@ ret_from_mc_except:
- #define GEN_BTB_FLUSH
- #define CRIT_BTB_FLUSH
- #define DBG_BTB_FLUSH
-+#define MC_BTB_FLUSH
- #define GDBELL_BTB_FLUSH
- #endif
- 
 diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
 index 9e253ce27e08..4fee6c9887db 100644
 --- a/arch/powerpc/kernel/exceptions-64s.S
@@ -2090,28784 +570,3132 @@ index 9e253ce27e08..4fee6c9887db 100644
  	std	r3,RESULT(r1)
  	bl	save_nvgprs
  	RECONCILE_IRQ_STATE(r10, r11)
-diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
-index ce393df243aa..71bad4b6f80d 100644
---- a/arch/powerpc/kernel/process.c
-+++ b/arch/powerpc/kernel/process.c
-@@ -176,7 +176,7 @@ static void __giveup_fpu(struct task_struct *tsk)
- 
- 	save_fpu(tsk);
- 	msr = tsk->thread.regs->msr;
--	msr &= ~MSR_FP;
-+	msr &= ~(MSR_FP|MSR_FE0|MSR_FE1);
- #ifdef CONFIG_VSX
- 	if (cpu_has_feature(CPU_FTR_VSX))
- 		msr &= ~MSR_VSX;
-diff --git a/arch/powerpc/kernel/ptrace.c b/arch/powerpc/kernel/ptrace.c
-index cdd5d1d3ae41..d9ac7d94656e 100644
---- a/arch/powerpc/kernel/ptrace.c
-+++ b/arch/powerpc/kernel/ptrace.c
-@@ -33,6 +33,7 @@
- #include <linux/hw_breakpoint.h>
- #include <linux/perf_event.h>
- #include <linux/context_tracking.h>
-+#include <linux/nospec.h>
- 
- #include <linux/uaccess.h>
- #include <linux/pkeys.h>
-@@ -274,6 +275,8 @@ static int set_user_trap(struct task_struct *task, unsigned long trap)
-  */
- int ptrace_get_reg(struct task_struct *task, int regno, unsigned long *data)
- {
-+	unsigned int regs_max;
-+
- 	if ((task->thread.regs == NULL) || !data)
- 		return -EIO;
- 
-@@ -297,7 +300,9 @@ int ptrace_get_reg(struct task_struct *task, int regno, unsigned long *data)
+diff --git a/arch/riscv/include/asm/syscall.h b/arch/riscv/include/asm/syscall.h
+index bba3da6ef157..6ea9e1804233 100644
+--- a/arch/riscv/include/asm/syscall.h
++++ b/arch/riscv/include/asm/syscall.h
+@@ -79,10 +79,11 @@ static inline void syscall_get_arguments(struct task_struct *task,
+ 	if (i == 0) {
+ 		args[0] = regs->orig_a0;
+ 		args++;
+-		i++;
+ 		n--;
++	} else {
++		i--;
  	}
- #endif
+-	memcpy(args, &regs->a1 + i * sizeof(regs->a1), n * sizeof(args[0]));
++	memcpy(args, &regs->a1 + i, n * sizeof(args[0]));
+ }
  
--	if (regno < (sizeof(struct user_pt_regs) / sizeof(unsigned long))) {
-+	regs_max = sizeof(struct user_pt_regs) / sizeof(unsigned long);
-+	if (regno < regs_max) {
-+		regno = array_index_nospec(regno, regs_max);
- 		*data = ((unsigned long *)task->thread.regs)[regno];
- 		return 0;
- 	}
-@@ -321,6 +326,7 @@ int ptrace_put_reg(struct task_struct *task, int regno, unsigned long data)
- 		return set_user_dscr(task, data);
- 
- 	if (regno <= PT_MAX_PUT_REG) {
-+		regno = array_index_nospec(regno, PT_MAX_PUT_REG + 1);
- 		((unsigned long *)task->thread.regs)[regno] = data;
- 		return 0;
- 	}
-@@ -561,6 +567,7 @@ static int vr_get(struct task_struct *target, const struct user_regset *regset,
- 		/*
- 		 * Copy out only the low-order word of vrsave.
- 		 */
-+		int start, end;
- 		union {
- 			elf_vrreg_t reg;
- 			u32 word;
-@@ -569,8 +576,10 @@ static int vr_get(struct task_struct *target, const struct user_regset *regset,
- 
- 		vrsave.word = target->thread.vrsave;
- 
-+		start = 33 * sizeof(vector128);
-+		end = start + sizeof(vrsave);
- 		ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf, &vrsave,
--					  33 * sizeof(vector128), -1);
-+					  start, end);
- 	}
- 
- 	return ret;
-@@ -608,6 +617,7 @@ static int vr_set(struct task_struct *target, const struct user_regset *regset,
- 		/*
- 		 * We use only the first word of vrsave.
- 		 */
-+		int start, end;
- 		union {
- 			elf_vrreg_t reg;
- 			u32 word;
-@@ -616,8 +626,10 @@ static int vr_set(struct task_struct *target, const struct user_regset *regset,
- 
- 		vrsave.word = target->thread.vrsave;
- 
-+		start = 33 * sizeof(vector128);
-+		end = start + sizeof(vrsave);
- 		ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &vrsave,
--					 33 * sizeof(vector128), -1);
-+					 start, end);
- 		if (!ret)
- 			target->thread.vrsave = vrsave.word;
- 	}
-diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
-index 9b8631533e02..b33bafb8fcea 100644
---- a/arch/powerpc/kernel/security.c
-+++ b/arch/powerpc/kernel/security.c
-@@ -190,29 +190,22 @@ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, c
- 	bcs = security_ftr_enabled(SEC_FTR_BCCTRL_SERIALISED);
- 	ccd = security_ftr_enabled(SEC_FTR_COUNT_CACHE_DISABLED);
- 
--	if (bcs || ccd || count_cache_flush_type != COUNT_CACHE_FLUSH_NONE) {
--		bool comma = false;
-+	if (bcs || ccd) {
- 		seq_buf_printf(&s, "Mitigation: ");
- 
--		if (bcs) {
-+		if (bcs)
- 			seq_buf_printf(&s, "Indirect branch serialisation (kernel only)");
--			comma = true;
--		}
- 
--		if (ccd) {
--			if (comma)
--				seq_buf_printf(&s, ", ");
--			seq_buf_printf(&s, "Indirect branch cache disabled");
--			comma = true;
--		}
--
--		if (comma)
-+		if (bcs && ccd)
- 			seq_buf_printf(&s, ", ");
- 
--		seq_buf_printf(&s, "Software count cache flush");
-+		if (ccd)
-+			seq_buf_printf(&s, "Indirect branch cache disabled");
-+	} else if (count_cache_flush_type != COUNT_CACHE_FLUSH_NONE) {
-+		seq_buf_printf(&s, "Mitigation: Software count cache flush");
- 
- 		if (count_cache_flush_type == COUNT_CACHE_FLUSH_HW)
--			seq_buf_printf(&s, "(hardware accelerated)");
-+			seq_buf_printf(&s, " (hardware accelerated)");
- 	} else if (btb_flush_enabled) {
- 		seq_buf_printf(&s, "Mitigation: Branch predictor state flush");
- 	} else {
-diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
-index 3f15edf25a0d..6e521a3f67ca 100644
---- a/arch/powerpc/kernel/smp.c
-+++ b/arch/powerpc/kernel/smp.c
-@@ -358,13 +358,12 @@ void arch_send_call_function_ipi_mask(const struct cpumask *mask)
-  * NMI IPIs may not be recoverable, so should not be used as ongoing part of
-  * a running system. They can be used for crash, debug, halt/reboot, etc.
-  *
-- * NMI IPIs are globally single threaded. No more than one in progress at
-- * any time.
-- *
-  * The IPI call waits with interrupts disabled until all targets enter the
-- * NMI handler, then the call returns.
-+ * NMI handler, then returns. Subsequent IPIs can be issued before targets
-+ * have returned from their handlers, so there is no guarantee about
-+ * concurrency or re-entrancy.
-  *
-- * No new NMI can be initiated until targets exit the handler.
-+ * A new NMI can be issued before all targets exit the handler.
-  *
-  * The IPI call may time out without all targets entering the NMI handler.
-  * In that case, there is some logic to recover (and ignore subsequent
-@@ -375,7 +374,7 @@ void arch_send_call_function_ipi_mask(const struct cpumask *mask)
- 
- static atomic_t __nmi_ipi_lock = ATOMIC_INIT(0);
- static struct cpumask nmi_ipi_pending_mask;
--static int nmi_ipi_busy_count = 0;
-+static bool nmi_ipi_busy = false;
- static void (*nmi_ipi_function)(struct pt_regs *) = NULL;
- 
- static void nmi_ipi_lock_start(unsigned long *flags)
-@@ -414,7 +413,7 @@ static void nmi_ipi_unlock_end(unsigned long *flags)
-  */
- int smp_handle_nmi_ipi(struct pt_regs *regs)
- {
--	void (*fn)(struct pt_regs *);
-+	void (*fn)(struct pt_regs *) = NULL;
- 	unsigned long flags;
- 	int me = raw_smp_processor_id();
- 	int ret = 0;
-@@ -425,29 +424,17 @@ int smp_handle_nmi_ipi(struct pt_regs *regs)
- 	 * because the caller may have timed out.
- 	 */
- 	nmi_ipi_lock_start(&flags);
--	if (!nmi_ipi_busy_count)
--		goto out;
--	if (!cpumask_test_cpu(me, &nmi_ipi_pending_mask))
--		goto out;
--
--	fn = nmi_ipi_function;
--	if (!fn)
--		goto out;
--
--	cpumask_clear_cpu(me, &nmi_ipi_pending_mask);
--	nmi_ipi_busy_count++;
--	nmi_ipi_unlock();
--
--	ret = 1;
--
--	fn(regs);
--
--	nmi_ipi_lock();
--	if (nmi_ipi_busy_count > 1) /* Can race with caller time-out */
--		nmi_ipi_busy_count--;
--out:
-+	if (cpumask_test_cpu(me, &nmi_ipi_pending_mask)) {
-+		cpumask_clear_cpu(me, &nmi_ipi_pending_mask);
-+		fn = READ_ONCE(nmi_ipi_function);
-+		WARN_ON_ONCE(!fn);
-+		ret = 1;
+ static inline void syscall_set_arguments(struct task_struct *task,
+@@ -94,10 +95,11 @@ static inline void syscall_set_arguments(struct task_struct *task,
+         if (i == 0) {
+                 regs->orig_a0 = args[0];
+                 args++;
+-                i++;
+                 n--;
+-        }
+-	memcpy(&regs->a1 + i * sizeof(regs->a1), args, n * sizeof(regs->a0));
++	} else {
++		i--;
 +	}
- 	nmi_ipi_unlock_end(&flags);
- 
-+	if (fn)
-+		fn(regs);
-+
- 	return ret;
++	memcpy(&regs->a1 + i, args, n * sizeof(regs->a1));
  }
  
-@@ -473,7 +460,7 @@ static void do_smp_send_nmi_ipi(int cpu, bool safe)
-  * - cpu is the target CPU (must not be this CPU), or NMI_IPI_ALL_OTHERS.
-  * - fn is the target callback function.
-  * - delay_us > 0 is the delay before giving up waiting for targets to
-- *   complete executing the handler, == 0 specifies indefinite delay.
-+ *   begin executing the handler, == 0 specifies indefinite delay.
-  */
- int __smp_send_nmi_ipi(int cpu, void (*fn)(struct pt_regs *), u64 delay_us, bool safe)
- {
-@@ -487,31 +474,33 @@ int __smp_send_nmi_ipi(int cpu, void (*fn)(struct pt_regs *), u64 delay_us, bool
- 	if (unlikely(!smp_ops))
- 		return 0;
+ static inline int syscall_get_arch(void)
+diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
+index 7d2d7c801dba..0ecfac84ba91 100644
+--- a/arch/x86/events/amd/core.c
++++ b/arch/x86/events/amd/core.c
+@@ -3,10 +3,14 @@
+ #include <linux/types.h>
+ #include <linux/init.h>
+ #include <linux/slab.h>
++#include <linux/delay.h>
+ #include <asm/apicdef.h>
++#include <asm/nmi.h>
  
--	/* Take the nmi_ipi_busy count/lock with interrupts hard disabled */
- 	nmi_ipi_lock_start(&flags);
--	while (nmi_ipi_busy_count) {
-+	while (nmi_ipi_busy) {
- 		nmi_ipi_unlock_end(&flags);
--		spin_until_cond(nmi_ipi_busy_count == 0);
-+		spin_until_cond(!nmi_ipi_busy);
- 		nmi_ipi_lock_start(&flags);
- 	}
--
-+	nmi_ipi_busy = true;
- 	nmi_ipi_function = fn;
+ #include "../perf_event.h"
  
-+	WARN_ON_ONCE(!cpumask_empty(&nmi_ipi_pending_mask));
++static DEFINE_PER_CPU(unsigned int, perf_nmi_counter);
 +
- 	if (cpu < 0) {
- 		/* ALL_OTHERS */
- 		cpumask_copy(&nmi_ipi_pending_mask, cpu_online_mask);
- 		cpumask_clear_cpu(me, &nmi_ipi_pending_mask);
- 	} else {
--		/* cpumask starts clear */
- 		cpumask_set_cpu(cpu, &nmi_ipi_pending_mask);
+ static __initconst const u64 amd_hw_cache_event_ids
+ 				[PERF_COUNT_HW_CACHE_MAX]
+ 				[PERF_COUNT_HW_CACHE_OP_MAX]
+@@ -429,6 +433,132 @@ static void amd_pmu_cpu_dead(int cpu)
  	}
--	nmi_ipi_busy_count++;
-+
- 	nmi_ipi_unlock();
+ }
  
-+	/* Interrupts remain hard disabled */
++/*
++ * When a PMC counter overflows, an NMI is used to process the event and
++ * reset the counter. NMI latency can result in the counter being updated
++ * before the NMI can run, which can result in what appear to be spurious
++ * NMIs. This function is intended to wait for the NMI to run and reset
++ * the counter to avoid possible unhandled NMI messages.
++ */
++#define OVERFLOW_WAIT_COUNT	50
 +
- 	do_smp_send_nmi_ipi(cpu, safe);
- 
- 	nmi_ipi_lock();
--	/* nmi_ipi_busy_count is held here, so unlock/lock is okay */
-+	/* nmi_ipi_busy is set here, so unlock/lock is okay */
- 	while (!cpumask_empty(&nmi_ipi_pending_mask)) {
- 		nmi_ipi_unlock();
- 		udelay(1);
-@@ -523,29 +512,15 @@ int __smp_send_nmi_ipi(int cpu, void (*fn)(struct pt_regs *), u64 delay_us, bool
- 		}
- 	}
- 
--	while (nmi_ipi_busy_count > 1) {
--		nmi_ipi_unlock();
--		udelay(1);
--		nmi_ipi_lock();
--		if (delay_us) {
--			delay_us--;
--			if (!delay_us)
--				break;
--		}
--	}
--
- 	if (!cpumask_empty(&nmi_ipi_pending_mask)) {
- 		/* Timeout waiting for CPUs to call smp_handle_nmi_ipi */
- 		ret = 0;
- 		cpumask_clear(&nmi_ipi_pending_mask);
- 	}
--	if (nmi_ipi_busy_count > 1) {
--		/* Timeout waiting for CPUs to execute fn */
--		ret = 0;
--		nmi_ipi_busy_count = 1;
--	}
- 
--	nmi_ipi_busy_count--;
-+	nmi_ipi_function = NULL;
-+	nmi_ipi_busy = false;
++static void amd_pmu_wait_on_overflow(int idx)
++{
++	unsigned int i;
++	u64 counter;
 +
- 	nmi_ipi_unlock_end(&flags);
- 
- 	return ret;
-@@ -613,17 +588,8 @@ void crash_send_ipi(void (*crash_ipi_callback)(struct pt_regs *))
- static void nmi_stop_this_cpu(struct pt_regs *regs)
- {
- 	/*
--	 * This is a special case because it never returns, so the NMI IPI
--	 * handling would never mark it as done, which makes any later
--	 * smp_send_nmi_ipi() call spin forever. Mark it done now.
--	 *
- 	 * IRQs are already hard disabled by the smp_handle_nmi_ipi.
- 	 */
--	nmi_ipi_lock();
--	if (nmi_ipi_busy_count > 1)
--		nmi_ipi_busy_count--;
--	nmi_ipi_unlock();
--
- 	spin_begin();
- 	while (1)
- 		spin_cpu_relax();
-diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
-index 64936b60d521..7a1de34f38c8 100644
---- a/arch/powerpc/kernel/traps.c
-+++ b/arch/powerpc/kernel/traps.c
-@@ -763,15 +763,15 @@ void machine_check_exception(struct pt_regs *regs)
- 	if (check_io_access(regs))
- 		goto bail;
- 
--	/* Must die if the interrupt is not recoverable */
--	if (!(regs->msr & MSR_RI))
--		nmi_panic(regs, "Unrecoverable Machine check");
--
- 	if (!nested)
- 		nmi_exit();
- 
- 	die("Machine check", regs, SIGBUS);
- 
-+	/* Must die if the interrupt is not recoverable */
-+	if (!(regs->msr & MSR_RI))
-+		nmi_panic(regs, "Unrecoverable Machine check");
++	/*
++	 * Wait for the counter to be reset if it has overflowed. This loop
++	 * should exit very, very quickly, but just in case, don't wait
++	 * forever...
++	 */
++	for (i = 0; i < OVERFLOW_WAIT_COUNT; i++) {
++		rdmsrl(x86_pmu_event_addr(idx), counter);
++		if (counter & (1ULL << (x86_pmu.cntval_bits - 1)))
++			break;
 +
- 	return;
- 
- bail:
-@@ -1542,8 +1542,8 @@ bail:
- 
- void StackOverflow(struct pt_regs *regs)
- {
--	printk(KERN_CRIT "Kernel stack overflow in process %p, r1=%lx\n",
--	       current, regs->gpr[1]);
-+	pr_crit("Kernel stack overflow in process %s[%d], r1=%lx\n",
-+		current->comm, task_pid_nr(current), regs->gpr[1]);
- 	debugger(regs);
- 	show_regs(regs);
- 	panic("kernel stack overflow");
-diff --git a/arch/powerpc/kernel/vdso64/gettimeofday.S b/arch/powerpc/kernel/vdso64/gettimeofday.S
-index a4ed9edfd5f0..1f324c28705b 100644
---- a/arch/powerpc/kernel/vdso64/gettimeofday.S
-+++ b/arch/powerpc/kernel/vdso64/gettimeofday.S
-@@ -92,7 +92,7 @@ V_FUNCTION_BEGIN(__kernel_clock_gettime)
- 	 * At this point, r4,r5 contain our sec/nsec values.
- 	 */
- 
--	lwa	r6,WTOM_CLOCK_SEC(r3)
-+	ld	r6,WTOM_CLOCK_SEC(r3)
- 	lwa	r9,WTOM_CLOCK_NSEC(r3)
- 
- 	/* We now have our result in r6,r9. We create a fake dependency
-@@ -125,7 +125,7 @@ V_FUNCTION_BEGIN(__kernel_clock_gettime)
- 	bne     cr6,75f
- 
- 	/* CLOCK_MONOTONIC_COARSE */
--	lwa     r6,WTOM_CLOCK_SEC(r3)
-+	ld	r6,WTOM_CLOCK_SEC(r3)
- 	lwa     r9,WTOM_CLOCK_NSEC(r3)
- 
- 	/* check if counter has updated */
-diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
-index 9b8d50a7cbaf..45b06e239d1f 100644
---- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
-+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
-@@ -58,6 +58,8 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300)
- #define STACK_SLOT_DAWR		(SFS-56)
- #define STACK_SLOT_DAWRX	(SFS-64)
- #define STACK_SLOT_HFSCR	(SFS-72)
-+#define STACK_SLOT_AMR		(SFS-80)
-+#define STACK_SLOT_UAMOR	(SFS-88)
- /* the following is used by the P9 short path */
- #define STACK_SLOT_NVGPRS	(SFS-152)	/* 18 gprs */
- 
-@@ -726,11 +728,9 @@ BEGIN_FTR_SECTION
- 	mfspr	r5, SPRN_TIDR
- 	mfspr	r6, SPRN_PSSCR
- 	mfspr	r7, SPRN_PID
--	mfspr	r8, SPRN_IAMR
- 	std	r5, STACK_SLOT_TID(r1)
- 	std	r6, STACK_SLOT_PSSCR(r1)
- 	std	r7, STACK_SLOT_PID(r1)
--	std	r8, STACK_SLOT_IAMR(r1)
- 	mfspr	r5, SPRN_HFSCR
- 	std	r5, STACK_SLOT_HFSCR(r1)
- END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
-@@ -738,11 +738,18 @@ BEGIN_FTR_SECTION
- 	mfspr	r5, SPRN_CIABR
- 	mfspr	r6, SPRN_DAWR
- 	mfspr	r7, SPRN_DAWRX
-+	mfspr	r8, SPRN_IAMR
- 	std	r5, STACK_SLOT_CIABR(r1)
- 	std	r6, STACK_SLOT_DAWR(r1)
- 	std	r7, STACK_SLOT_DAWRX(r1)
-+	std	r8, STACK_SLOT_IAMR(r1)
- END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
- 
-+	mfspr	r5, SPRN_AMR
-+	std	r5, STACK_SLOT_AMR(r1)
-+	mfspr	r6, SPRN_UAMOR
-+	std	r6, STACK_SLOT_UAMOR(r1)
++		/* Might be in IRQ context, so can't sleep */
++		udelay(1);
++	}
++}
++
++static void amd_pmu_disable_all(void)
++{
++	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
++	int idx;
 +
- BEGIN_FTR_SECTION
- 	/* Set partition DABR */
- 	/* Do this before re-enabling PMU to avoid P7 DABR corruption bug */
-@@ -1631,22 +1638,25 @@ ALT_FTR_SECTION_END_IFCLR(CPU_FTR_ARCH_300)
- 	mtspr	SPRN_PSPB, r0
- 	mtspr	SPRN_WORT, r0
- BEGIN_FTR_SECTION
--	mtspr	SPRN_IAMR, r0
- 	mtspr	SPRN_TCSCR, r0
- 	/* Set MMCRS to 1<<31 to freeze and disable the SPMC counters */
- 	li	r0, 1
- 	sldi	r0, r0, 31
- 	mtspr	SPRN_MMCRS, r0
- END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300)
--8:
- 
--	/* Save and reset AMR and UAMOR before turning on the MMU */
-+	/* Save and restore AMR, IAMR and UAMOR before turning on the MMU */
-+	ld	r8, STACK_SLOT_IAMR(r1)
-+	mtspr	SPRN_IAMR, r8
++	x86_pmu_disable_all();
 +
-+8:	/* Power7 jumps back in here */
- 	mfspr	r5,SPRN_AMR
- 	mfspr	r6,SPRN_UAMOR
- 	std	r5,VCPU_AMR(r9)
- 	std	r6,VCPU_UAMOR(r9)
--	li	r6,0
--	mtspr	SPRN_AMR,r6
-+	ld	r5,STACK_SLOT_AMR(r1)
-+	ld	r6,STACK_SLOT_UAMOR(r1)
-+	mtspr	SPRN_AMR, r5
- 	mtspr	SPRN_UAMOR, r6
- 
- 	/* Switch DSCR back to host value */
-@@ -1746,11 +1756,9 @@ BEGIN_FTR_SECTION
- 	ld	r5, STACK_SLOT_TID(r1)
- 	ld	r6, STACK_SLOT_PSSCR(r1)
- 	ld	r7, STACK_SLOT_PID(r1)
--	ld	r8, STACK_SLOT_IAMR(r1)
- 	mtspr	SPRN_TIDR, r5
- 	mtspr	SPRN_PSSCR, r6
- 	mtspr	SPRN_PID, r7
--	mtspr	SPRN_IAMR, r8
- END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
- 
- #ifdef CONFIG_PPC_RADIX_MMU
-diff --git a/arch/powerpc/lib/memcmp_64.S b/arch/powerpc/lib/memcmp_64.S
-index 844d8e774492..b7f6f6e0b6e8 100644
---- a/arch/powerpc/lib/memcmp_64.S
-+++ b/arch/powerpc/lib/memcmp_64.S
-@@ -215,11 +215,20 @@ _GLOBAL_TOC(memcmp)
- 	beq	.Lzero
- 
- .Lcmp_rest_lt8bytes:
--	/* Here we have only less than 8 bytes to compare with. at least s1
--	 * Address is aligned with 8 bytes.
--	 * The next double words are load and shift right with appropriate
--	 * bits.
 +	/*
-+	 * Here we have less than 8 bytes to compare. At least s1 is aligned to
-+	 * 8 bytes, but s2 may not be. We must make sure s2 + 7 doesn't cross a
-+	 * page boundary, otherwise we might read past the end of the buffer and
-+	 * trigger a page fault. We use 4K as the conservative minimum page
-+	 * size. If we detect that case we go to the byte-by-byte loop.
-+	 *
-+	 * Otherwise the next double word is loaded from s1 and s2, and shifted
-+	 * right to compare the appropriate bits.
- 	 */
-+	clrldi	r6,r4,(64-12)	// r6 = r4 & 0xfff
-+	cmpdi	r6,0xff8
-+	bgt	.Lshort
++	 * This shouldn't be called from NMI context, but add a safeguard here
++	 * to return, since if we're in NMI context we can't wait for an NMI
++	 * to reset an overflowed counter value.
++	 */
++	if (in_nmi())
++		return;
 +
- 	subfic  r6,r5,8
- 	slwi	r6,r6,3
- 	LD	rA,0,r3
-diff --git a/arch/powerpc/mm/hugetlbpage-radix.c b/arch/powerpc/mm/hugetlbpage-radix.c
-index 2486bee0f93e..97c7a39ebc00 100644
---- a/arch/powerpc/mm/hugetlbpage-radix.c
-+++ b/arch/powerpc/mm/hugetlbpage-radix.c
-@@ -1,6 +1,7 @@
- // SPDX-License-Identifier: GPL-2.0
- #include <linux/mm.h>
- #include <linux/hugetlb.h>
-+#include <linux/security.h>
- #include <asm/pgtable.h>
- #include <asm/pgalloc.h>
- #include <asm/cacheflush.h>
-@@ -73,7 +74,7 @@ radix__hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
- 	if (addr) {
- 		addr = ALIGN(addr, huge_page_size(h));
- 		vma = find_vma(mm, addr);
--		if (high_limit - len >= addr &&
-+		if (high_limit - len >= addr && addr >= mmap_min_addr &&
- 		    (!vma || addr + len <= vm_start_gap(vma)))
- 			return addr;
- 	}
-@@ -83,7 +84,7 @@ radix__hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
- 	 */
- 	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
- 	info.length = len;
--	info.low_limit = PAGE_SIZE;
-+	info.low_limit = max(PAGE_SIZE, mmap_min_addr);
- 	info.high_limit = mm->mmap_base + (high_limit - DEFAULT_MAP_WINDOW);
- 	info.align_mask = PAGE_MASK & ~huge_page_mask(h);
- 	info.align_offset = 0;
-diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
-index 87f0dd004295..b5d1c45c1475 100644
---- a/arch/powerpc/mm/numa.c
-+++ b/arch/powerpc/mm/numa.c
-@@ -1460,13 +1460,6 @@ static void reset_topology_timer(void)
- 
- #ifdef CONFIG_SMP
- 
--static void stage_topology_update(int core_id)
--{
--	cpumask_or(&cpu_associativity_changes_mask,
--		&cpu_associativity_changes_mask, cpu_sibling_mask(core_id));
--	reset_topology_timer();
--}
--
- static int dt_update_callback(struct notifier_block *nb,
- 				unsigned long action, void *data)
- {
-@@ -1479,7 +1472,7 @@ static int dt_update_callback(struct notifier_block *nb,
- 		    !of_prop_cmp(update->prop->name, "ibm,associativity")) {
- 			u32 core_id;
- 			of_property_read_u32(update->dn, "reg", &core_id);
--			stage_topology_update(core_id);
-+			rc = dlpar_cpu_readd(core_id);
- 			rc = NOTIFY_OK;
- 		}
- 		break;
-diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
-index bc3914d54e26..5986df48359b 100644
---- a/arch/powerpc/mm/slb.c
-+++ b/arch/powerpc/mm/slb.c
-@@ -69,6 +69,11 @@ static void assert_slb_presence(bool present, unsigned long ea)
- 	if (!cpu_has_feature(CPU_FTR_ARCH_206))
- 		return;
- 
 +	/*
-+	 * slbfee. requires bit 24 (PPC bit 39) be clear in RB. Hardware
-+	 * ignores all other bits from 0-27, so just clear them all.
++	 * Check each counter for overflow and wait for it to be reset by the
++	 * NMI if it has overflowed. This relies on the fact that all active
++	 * counters are always enabled when this function is called and
++	 * ARCH_PERFMON_EVENTSEL_INT is always set.
 +	 */
-+	ea &= ~((1UL << 28) - 1);
- 	asm volatile(__PPC_SLBFEE_DOT(%0, %1) : "=r"(tmp) : "r"(ea) : "cr0");
- 
- 	WARN_ON(present == (tmp == 0));
-diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
-index c2d5192ed64f..e52e30bf7d86 100644
---- a/arch/powerpc/net/bpf_jit.h
-+++ b/arch/powerpc/net/bpf_jit.h
-@@ -51,6 +51,8 @@
- #define PPC_LIS(r, i)		PPC_ADDIS(r, 0, i)
- #define PPC_STD(r, base, i)	EMIT(PPC_INST_STD | ___PPC_RS(r) |	      \
- 				     ___PPC_RA(base) | ((i) & 0xfffc))
-+#define PPC_STDX(r, base, b)	EMIT(PPC_INST_STDX | ___PPC_RS(r) |	      \
-+				     ___PPC_RA(base) | ___PPC_RB(b))
- #define PPC_STDU(r, base, i)	EMIT(PPC_INST_STDU | ___PPC_RS(r) |	      \
- 				     ___PPC_RA(base) | ((i) & 0xfffc))
- #define PPC_STW(r, base, i)	EMIT(PPC_INST_STW | ___PPC_RS(r) |	      \
-@@ -65,7 +67,9 @@
- #define PPC_LBZ(r, base, i)	EMIT(PPC_INST_LBZ | ___PPC_RT(r) |	      \
- 				     ___PPC_RA(base) | IMM_L(i))
- #define PPC_LD(r, base, i)	EMIT(PPC_INST_LD | ___PPC_RT(r) |	      \
--				     ___PPC_RA(base) | IMM_L(i))
-+				     ___PPC_RA(base) | ((i) & 0xfffc))
-+#define PPC_LDX(r, base, b)	EMIT(PPC_INST_LDX | ___PPC_RT(r) |	      \
-+				     ___PPC_RA(base) | ___PPC_RB(b))
- #define PPC_LWZ(r, base, i)	EMIT(PPC_INST_LWZ | ___PPC_RT(r) |	      \
- 				     ___PPC_RA(base) | IMM_L(i))
- #define PPC_LHZ(r, base, i)	EMIT(PPC_INST_LHZ | ___PPC_RT(r) |	      \
-@@ -85,17 +89,6 @@
- 					___PPC_RA(a) | ___PPC_RB(b))
- #define PPC_BPF_STDCX(s, a, b)	EMIT(PPC_INST_STDCX | ___PPC_RS(s) |	      \
- 					___PPC_RA(a) | ___PPC_RB(b))
--
--#ifdef CONFIG_PPC64
--#define PPC_BPF_LL(r, base, i) do { PPC_LD(r, base, i); } while(0)
--#define PPC_BPF_STL(r, base, i) do { PPC_STD(r, base, i); } while(0)
--#define PPC_BPF_STLU(r, base, i) do { PPC_STDU(r, base, i); } while(0)
--#else
--#define PPC_BPF_LL(r, base, i) do { PPC_LWZ(r, base, i); } while(0)
--#define PPC_BPF_STL(r, base, i) do { PPC_STW(r, base, i); } while(0)
--#define PPC_BPF_STLU(r, base, i) do { PPC_STWU(r, base, i); } while(0)
--#endif
--
- #define PPC_CMPWI(a, i)		EMIT(PPC_INST_CMPWI | ___PPC_RA(a) | IMM_L(i))
- #define PPC_CMPDI(a, i)		EMIT(PPC_INST_CMPDI | ___PPC_RA(a) | IMM_L(i))
- #define PPC_CMPW(a, b)		EMIT(PPC_INST_CMPW | ___PPC_RA(a) |	      \
-diff --git a/arch/powerpc/net/bpf_jit32.h b/arch/powerpc/net/bpf_jit32.h
-index 6f4daacad296..ade04547703f 100644
---- a/arch/powerpc/net/bpf_jit32.h
-+++ b/arch/powerpc/net/bpf_jit32.h
-@@ -123,6 +123,10 @@ DECLARE_LOAD_FUNC(sk_load_byte_msh);
- #define PPC_NTOHS_OFFS(r, base, i)	PPC_LHZ_OFFS(r, base, i)
- #endif
- 
-+#define PPC_BPF_LL(r, base, i) do { PPC_LWZ(r, base, i); } while(0)
-+#define PPC_BPF_STL(r, base, i) do { PPC_STW(r, base, i); } while(0)
-+#define PPC_BPF_STLU(r, base, i) do { PPC_STWU(r, base, i); } while(0)
++	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
++		if (!test_bit(idx, cpuc->active_mask))
++			continue;
++
++		amd_pmu_wait_on_overflow(idx);
++	}
++}
++
++static void amd_pmu_disable_event(struct perf_event *event)
++{
++	x86_pmu_disable_event(event);
++
++	/*
++	 * This can be called from NMI context (via x86_pmu_stop). The counter
++	 * may have overflowed, but either way, we'll never see it get reset
++	 * by the NMI if we're already in the NMI. And the NMI latency support
++	 * below will take care of any pending NMI that might have been
++	 * generated by the overflow.
++	 */
++	if (in_nmi())
++		return;
++
++	amd_pmu_wait_on_overflow(event->hw.idx);
++}
 +
- #define SEEN_DATAREF 0x10000 /* might call external helpers */
- #define SEEN_XREG    0x20000 /* X reg is used */
- #define SEEN_MEM     0x40000 /* SEEN_MEM+(1<<n) = use mem[n] for temporary
-diff --git a/arch/powerpc/net/bpf_jit64.h b/arch/powerpc/net/bpf_jit64.h
-index 3609be4692b3..47f441f351a6 100644
---- a/arch/powerpc/net/bpf_jit64.h
-+++ b/arch/powerpc/net/bpf_jit64.h
-@@ -68,6 +68,26 @@ static const int b2p[] = {
- /* PPC NVR range -- update this if we ever use NVRs below r27 */
- #define BPF_PPC_NVR_MIN		27
- 
 +/*
-+ * WARNING: These can use TMP_REG_2 if the offset is not at word boundary,
-+ * so ensure that it isn't in use already.
++ * Because of NMI latency, if multiple PMC counters are active or other sources
++ * of NMIs are received, the perf NMI handler can handle one or more overflowed
++ * PMC counters outside of the NMI associated with the PMC overflow. If the NMI
++ * doesn't arrive at the LAPIC in time to become a pending NMI, then the kernel
++ * back-to-back NMI support won't be active. This PMC handler needs to take into
++ * account that this can occur, otherwise this could result in unknown NMI
++ * messages being issued. Examples of this are PMC overflow while in the NMI
++ * handler when multiple PMCs are active or PMC overflow while handling some
++ * other source of an NMI.
++ *
++ * Attempt to mitigate this by using the number of active PMCs to determine
++ * whether to return NMI_HANDLED if the perf NMI handler did not handle/reset
++ * any PMCs. The per-CPU perf_nmi_counter variable is set to a minimum of the
++ * number of active PMCs or 2. The value of 2 is used in case an NMI does not
++ * arrive at the LAPIC in time to be collapsed into an already pending NMI.
 + */
-+#define PPC_BPF_LL(r, base, i) do {					      \
-+				if ((i) % 4) {				      \
-+					PPC_LI(b2p[TMP_REG_2], (i));	      \
-+					PPC_LDX(r, base, b2p[TMP_REG_2]);     \
-+				} else					      \
-+					PPC_LD(r, base, i);		      \
-+				} while(0)
-+#define PPC_BPF_STL(r, base, i) do {					      \
-+				if ((i) % 4) {				      \
-+					PPC_LI(b2p[TMP_REG_2], (i));	      \
-+					PPC_STDX(r, base, b2p[TMP_REG_2]);    \
-+				} else					      \
-+					PPC_STD(r, base, i);		      \
-+				} while(0)
-+#define PPC_BPF_STLU(r, base, i) do { PPC_STDU(r, base, i); } while(0)
++static int amd_pmu_handle_irq(struct pt_regs *regs)
++{
++	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
++	int active, handled;
 +
- #define SEEN_FUNC	0x1000 /* might call external helpers */
- #define SEEN_STACK	0x2000 /* uses BPF stack */
- #define SEEN_TAILCALL	0x4000 /* uses tail calls */
-diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
-index 7ce57657d3b8..b1a116eecae2 100644
---- a/arch/powerpc/net/bpf_jit_comp64.c
-+++ b/arch/powerpc/net/bpf_jit_comp64.c
-@@ -252,7 +252,7 @@ static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32
- 	 * if (tail_call_cnt > MAX_TAIL_CALL_CNT)
- 	 *   goto out;
- 	 */
--	PPC_LD(b2p[TMP_REG_1], 1, bpf_jit_stack_tailcallcnt(ctx));
-+	PPC_BPF_LL(b2p[TMP_REG_1], 1, bpf_jit_stack_tailcallcnt(ctx));
- 	PPC_CMPLWI(b2p[TMP_REG_1], MAX_TAIL_CALL_CNT);
- 	PPC_BCC(COND_GT, out);
- 
-@@ -265,7 +265,7 @@ static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32
- 	/* prog = array->ptrs[index]; */
- 	PPC_MULI(b2p[TMP_REG_1], b2p_index, 8);
- 	PPC_ADD(b2p[TMP_REG_1], b2p[TMP_REG_1], b2p_bpf_array);
--	PPC_LD(b2p[TMP_REG_1], b2p[TMP_REG_1], offsetof(struct bpf_array, ptrs));
-+	PPC_BPF_LL(b2p[TMP_REG_1], b2p[TMP_REG_1], offsetof(struct bpf_array, ptrs));
- 
- 	/*
- 	 * if (prog == NULL)
-@@ -275,7 +275,7 @@ static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32
- 	PPC_BCC(COND_EQ, out);
- 
- 	/* goto *(prog->bpf_func + prologue_size); */
--	PPC_LD(b2p[TMP_REG_1], b2p[TMP_REG_1], offsetof(struct bpf_prog, bpf_func));
-+	PPC_BPF_LL(b2p[TMP_REG_1], b2p[TMP_REG_1], offsetof(struct bpf_prog, bpf_func));
- #ifdef PPC64_ELF_ABI_v1
- 	/* skip past the function descriptor */
- 	PPC_ADDI(b2p[TMP_REG_1], b2p[TMP_REG_1],
-@@ -606,7 +606,7 @@ bpf_alu32_trunc:
- 				 * the instructions generated will remain the
- 				 * same across all passes
- 				 */
--				PPC_STD(dst_reg, 1, bpf_jit_stack_local(ctx));
-+				PPC_BPF_STL(dst_reg, 1, bpf_jit_stack_local(ctx));
- 				PPC_ADDI(b2p[TMP_REG_1], 1, bpf_jit_stack_local(ctx));
- 				PPC_LDBRX(dst_reg, 0, b2p[TMP_REG_1]);
- 				break;
-@@ -662,7 +662,7 @@ emit_clear:
- 				PPC_LI32(b2p[TMP_REG_1], imm);
- 				src_reg = b2p[TMP_REG_1];
- 			}
--			PPC_STD(src_reg, dst_reg, off);
-+			PPC_BPF_STL(src_reg, dst_reg, off);
- 			break;
- 
- 		/*
-@@ -709,7 +709,7 @@ emit_clear:
- 			break;
- 		/* dst = *(u64 *)(ul) (src + off) */
- 		case BPF_LDX | BPF_MEM | BPF_DW:
--			PPC_LD(dst_reg, src_reg, off);
-+			PPC_BPF_LL(dst_reg, src_reg, off);
- 			break;
- 
- 		/*
-diff --git a/arch/powerpc/platforms/44x/Kconfig b/arch/powerpc/platforms/44x/Kconfig
-index 4a9a72d01c3c..35be81fd2dc2 100644
---- a/arch/powerpc/platforms/44x/Kconfig
-+++ b/arch/powerpc/platforms/44x/Kconfig
-@@ -180,6 +180,7 @@ config CURRITUCK
- 	depends on PPC_47x
- 	select SWIOTLB
- 	select 476FPE
-+	select FORCE_PCI
- 	select PPC4xx_PCI_EXPRESS
- 	help
- 	  This option enables support for the IBM Currituck (476fpe) evaluation board
-diff --git a/arch/powerpc/platforms/83xx/suspend-asm.S b/arch/powerpc/platforms/83xx/suspend-asm.S
-index 3d1ecd211776..8137f77abad5 100644
---- a/arch/powerpc/platforms/83xx/suspend-asm.S
-+++ b/arch/powerpc/platforms/83xx/suspend-asm.S
-@@ -26,13 +26,13 @@
- #define SS_MSR		0x74
- #define SS_SDR1		0x78
- #define SS_LR		0x7c
--#define SS_SPRG		0x80 /* 4 SPRGs */
--#define SS_DBAT		0x90 /* 8 DBATs */
--#define SS_IBAT		0xd0 /* 8 IBATs */
--#define SS_TB		0x110
--#define SS_CR		0x118
--#define SS_GPREG	0x11c /* r12-r31 */
--#define STATE_SAVE_SIZE 0x16c
-+#define SS_SPRG		0x80 /* 8 SPRGs */
-+#define SS_DBAT		0xa0 /* 8 DBATs */
-+#define SS_IBAT		0xe0 /* 8 IBATs */
-+#define SS_TB		0x120
-+#define SS_CR		0x128
-+#define SS_GPREG	0x12c /* r12-r31 */
-+#define STATE_SAVE_SIZE 0x17c
- 
- 	.section .data
- 	.align	5
-@@ -103,6 +103,16 @@ _GLOBAL(mpc83xx_enter_deep_sleep)
- 	stw	r7, SS_SPRG+12(r3)
- 	stw	r8, SS_SDR1(r3)
- 
-+	mfspr	r4, SPRN_SPRG4
-+	mfspr	r5, SPRN_SPRG5
-+	mfspr	r6, SPRN_SPRG6
-+	mfspr	r7, SPRN_SPRG7
++	/*
++	 * Obtain the active count before calling x86_pmu_handle_irq() since
++	 * it is possible that x86_pmu_handle_irq() may make a counter
++	 * inactive (through x86_pmu_stop).
++	 */
++	active = __bitmap_weight(cpuc->active_mask, X86_PMC_IDX_MAX);
 +
-+	stw	r4, SS_SPRG+16(r3)
-+	stw	r5, SS_SPRG+20(r3)
-+	stw	r6, SS_SPRG+24(r3)
-+	stw	r7, SS_SPRG+28(r3)
++	/* Process any counter overflows */
++	handled = x86_pmu_handle_irq(regs);
 +
- 	mfspr	r4, SPRN_DBAT0U
- 	mfspr	r5, SPRN_DBAT0L
- 	mfspr	r6, SPRN_DBAT1U
-@@ -493,6 +503,16 @@ mpc83xx_deep_resume:
- 	mtspr	SPRN_IBAT7U, r6
- 	mtspr	SPRN_IBAT7L, r7
- 
-+	lwz	r4, SS_SPRG+16(r3)
-+	lwz	r5, SS_SPRG+20(r3)
-+	lwz	r6, SS_SPRG+24(r3)
-+	lwz	r7, SS_SPRG+28(r3)
++	/*
++	 * If a counter was handled, record the number of possible remaining
++	 * NMIs that can occur.
++	 */
++	if (handled) {
++		this_cpu_write(perf_nmi_counter,
++			       min_t(unsigned int, 2, active));
 +
-+	mtspr	SPRN_SPRG4, r4
-+	mtspr	SPRN_SPRG5, r5
-+	mtspr	SPRN_SPRG6, r6
-+	mtspr	SPRN_SPRG7, r7
++		return handled;
++	}
 +
- 	lwz	r4, SS_SPRG+0(r3)
- 	lwz	r5, SS_SPRG+4(r3)
- 	lwz	r6, SS_SPRG+8(r3)
-diff --git a/arch/powerpc/platforms/embedded6xx/wii.c b/arch/powerpc/platforms/embedded6xx/wii.c
-index ecf703ee3a76..ac4ee88efc80 100644
---- a/arch/powerpc/platforms/embedded6xx/wii.c
-+++ b/arch/powerpc/platforms/embedded6xx/wii.c
-@@ -83,6 +83,10 @@ unsigned long __init wii_mmu_mapin_mem2(unsigned long top)
- 	/* MEM2 64MB@0x10000000 */
- 	delta = wii_hole_start + wii_hole_size;
- 	size = top - delta;
++	if (!this_cpu_read(perf_nmi_counter))
++		return NMI_DONE;
 +
-+	if (__map_without_bats)
-+		return delta;
++	this_cpu_dec(perf_nmi_counter);
 +
- 	for (bl = 128<<10; bl < max_size; bl <<= 1) {
- 		if (bl * 2 > size)
- 			break;
-diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c
-index 35f699ebb662..e52f9b06dd9c 100644
---- a/arch/powerpc/platforms/powernv/idle.c
-+++ b/arch/powerpc/platforms/powernv/idle.c
-@@ -458,7 +458,8 @@ EXPORT_SYMBOL_GPL(pnv_power9_force_smt4_release);
- #endif /* CONFIG_KVM_BOOK3S_HV_POSSIBLE */
- 
- #ifdef CONFIG_HOTPLUG_CPU
--static void pnv_program_cpu_hotplug_lpcr(unsigned int cpu, u64 lpcr_val)
++	return NMI_HANDLED;
++}
 +
-+void pnv_program_cpu_hotplug_lpcr(unsigned int cpu, u64 lpcr_val)
- {
- 	u64 pir = get_hard_smp_processor_id(cpu);
+ static struct event_constraint *
+ amd_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
+ 			  struct perf_event *event)
+@@ -621,11 +751,11 @@ static ssize_t amd_event_sysfs_show(char *page, u64 config)
  
-@@ -481,20 +482,6 @@ unsigned long pnv_cpu_offline(unsigned int cpu)
- {
- 	unsigned long srr1;
- 	u32 idle_states = pnv_get_supported_cpuidle_states();
--	u64 lpcr_val;
--
--	/*
--	 * We don't want to take decrementer interrupts while we are
--	 * offline, so clear LPCR:PECE1. We keep PECE2 (and
--	 * LPCR_PECE_HVEE on P9) enabled as to let IPIs in.
--	 *
--	 * If the CPU gets woken up by a special wakeup, ensure that
--	 * the SLW engine sets LPCR with decrementer bit cleared, else
--	 * the CPU will come back to the kernel due to a spurious
--	 * wakeup.
--	 */
--	lpcr_val = mfspr(SPRN_LPCR) & ~(u64)LPCR_PECE1;
--	pnv_program_cpu_hotplug_lpcr(cpu, lpcr_val);
+ static __initconst const struct x86_pmu amd_pmu = {
+ 	.name			= "AMD",
+-	.handle_irq		= x86_pmu_handle_irq,
+-	.disable_all		= x86_pmu_disable_all,
++	.handle_irq		= amd_pmu_handle_irq,
++	.disable_all		= amd_pmu_disable_all,
+ 	.enable_all		= x86_pmu_enable_all,
+ 	.enable			= x86_pmu_enable_event,
+-	.disable		= x86_pmu_disable_event,
++	.disable		= amd_pmu_disable_event,
+ 	.hw_config		= amd_pmu_hw_config,
+ 	.schedule_events	= x86_schedule_events,
+ 	.eventsel		= MSR_K7_EVNTSEL0,
+@@ -732,7 +862,7 @@ void amd_pmu_enable_virt(void)
+ 	cpuc->perf_ctr_virt_mask = 0;
  
- 	__ppc64_runlatch_off();
- 
-@@ -526,16 +513,6 @@ unsigned long pnv_cpu_offline(unsigned int cpu)
- 
- 	__ppc64_runlatch_on();
- 
--	/*
--	 * Re-enable decrementer interrupts in LPCR.
--	 *
--	 * Further, we want stop states to be woken up by decrementer
--	 * for non-hotplug cases. So program the LPCR via stop api as
--	 * well.
--	 */
--	lpcr_val = mfspr(SPRN_LPCR) | (u64)LPCR_PECE1;
--	pnv_program_cpu_hotplug_lpcr(cpu, lpcr_val);
--
- 	return srr1;
+ 	/* Reload all events */
+-	x86_pmu_disable_all();
++	amd_pmu_disable_all();
+ 	x86_pmu_enable_all(0);
  }
- #endif
-diff --git a/arch/powerpc/platforms/powernv/opal-msglog.c b/arch/powerpc/platforms/powernv/opal-msglog.c
-index acd3206dfae3..06628c71cef6 100644
---- a/arch/powerpc/platforms/powernv/opal-msglog.c
-+++ b/arch/powerpc/platforms/powernv/opal-msglog.c
-@@ -98,7 +98,7 @@ static ssize_t opal_msglog_read(struct file *file, struct kobject *kobj,
+ EXPORT_SYMBOL_GPL(amd_pmu_enable_virt);
+@@ -750,7 +880,7 @@ void amd_pmu_disable_virt(void)
+ 	cpuc->perf_ctr_virt_mask = AMD64_EVENTSEL_HOSTONLY;
+ 
+ 	/* Reload all events */
+-	x86_pmu_disable_all();
++	amd_pmu_disable_all();
+ 	x86_pmu_enable_all(0);
  }
+ EXPORT_SYMBOL_GPL(amd_pmu_disable_virt);
+diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
+index e2b1447192a8..81911e11a15d 100644
+--- a/arch/x86/events/core.c
++++ b/arch/x86/events/core.c
+@@ -1349,8 +1349,9 @@ void x86_pmu_stop(struct perf_event *event, int flags)
+ 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+ 	struct hw_perf_event *hwc = &event->hw;
  
- static struct bin_attribute opal_msglog_attr = {
--	.attr = {.name = "msglog", .mode = 0444},
-+	.attr = {.name = "msglog", .mode = 0400},
- 	.read = opal_msglog_read
- };
+-	if (__test_and_clear_bit(hwc->idx, cpuc->active_mask)) {
++	if (test_bit(hwc->idx, cpuc->active_mask)) {
+ 		x86_pmu.disable(event);
++		__clear_bit(hwc->idx, cpuc->active_mask);
+ 		cpuc->events[hwc->idx] = NULL;
+ 		WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED);
+ 		hwc->state |= PERF_HES_STOPPED;
+@@ -1447,16 +1448,8 @@ int x86_pmu_handle_irq(struct pt_regs *regs)
+ 	apic_write(APIC_LVTPC, APIC_DM_NMI);
  
-diff --git a/arch/powerpc/platforms/powernv/pci-ioda-tce.c b/arch/powerpc/platforms/powernv/pci-ioda-tce.c
-index 697449afb3f7..e28f03e1eb5e 100644
---- a/arch/powerpc/platforms/powernv/pci-ioda-tce.c
-+++ b/arch/powerpc/platforms/powernv/pci-ioda-tce.c
-@@ -313,7 +313,6 @@ long pnv_pci_ioda2_table_alloc_pages(int nid, __u64 bus_offset,
- 			page_shift);
- 	tbl->it_level_size = 1ULL << (level_shift - 3);
- 	tbl->it_indirect_levels = levels - 1;
--	tbl->it_allocated_size = total_allocated;
- 	tbl->it_userspace = uas;
- 	tbl->it_nid = nid;
- 
-diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
-index 145373f0e5dc..2d62c58f9a4c 100644
---- a/arch/powerpc/platforms/powernv/pci-ioda.c
-+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
-@@ -2594,8 +2594,13 @@ static long pnv_pci_ioda2_create_table_userspace(
- 		int num, __u32 page_shift, __u64 window_size, __u32 levels,
- 		struct iommu_table **ptbl)
- {
--	return pnv_pci_ioda2_create_table(table_group,
-+	long ret = pnv_pci_ioda2_create_table(table_group,
- 			num, page_shift, window_size, levels, true, ptbl);
-+
-+	if (!ret)
-+		(*ptbl)->it_allocated_size = pnv_pci_ioda2_get_table_size(
-+				page_shift, window_size, levels);
-+	return ret;
- }
+ 	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
+-		if (!test_bit(idx, cpuc->active_mask)) {
+-			/*
+-			 * Though we deactivated the counter some cpus
+-			 * might still deliver spurious interrupts still
+-			 * in flight. Catch them:
+-			 */
+-			if (__test_and_clear_bit(idx, cpuc->running))
+-				handled++;
++		if (!test_bit(idx, cpuc->active_mask))
+ 			continue;
+-		}
  
- static void pnv_ioda2_take_ownership(struct iommu_table_group *table_group)
-diff --git a/arch/powerpc/platforms/powernv/smp.c b/arch/powerpc/platforms/powernv/smp.c
-index 0d354e19ef92..db09c7022635 100644
---- a/arch/powerpc/platforms/powernv/smp.c
-+++ b/arch/powerpc/platforms/powernv/smp.c
-@@ -39,6 +39,7 @@
- #include <asm/cpuidle.h>
- #include <asm/kexec.h>
- #include <asm/reg.h>
-+#include <asm/powernv.h>
+ 		event = cpuc->events[idx];
  
- #include "powernv.h"
+diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
+index ad7b210aa3f6..8e790ec219a5 100644
+--- a/arch/x86/include/asm/bitops.h
++++ b/arch/x86/include/asm/bitops.h
+@@ -36,22 +36,17 @@
+  * bit 0 is the LSB of addr; bit 32 is the LSB of (addr+1).
+  */
  
-@@ -153,6 +154,7 @@ static void pnv_smp_cpu_kill_self(void)
- {
- 	unsigned int cpu;
- 	unsigned long srr1, wmask;
-+	u64 lpcr_val;
+-#if __GNUC__ < 4 || (__GNUC__ == 4 && __GNUC_MINOR__ < 1)
+-/* Technically wrong, but this avoids compilation errors on some gcc
+-   versions. */
+-#define BITOP_ADDR(x) "=m" (*(volatile long *) (x))
+-#else
+-#define BITOP_ADDR(x) "+m" (*(volatile long *) (x))
+-#endif
++#define RLONG_ADDR(x)			 "m" (*(volatile long *) (x))
++#define WBYTE_ADDR(x)			"+m" (*(volatile char *) (x))
  
- 	/* Standard hot unplug procedure */
- 	/*
-@@ -174,6 +176,19 @@ static void pnv_smp_cpu_kill_self(void)
- 	if (cpu_has_feature(CPU_FTR_ARCH_207S))
- 		wmask = SRR1_WAKEMASK_P8;
+-#define ADDR				BITOP_ADDR(addr)
++#define ADDR				RLONG_ADDR(addr)
  
-+	/*
-+	 * We don't want to take decrementer interrupts while we are
-+	 * offline, so clear LPCR:PECE1. We keep PECE2 (and
-+	 * LPCR_PECE_HVEE on P9) enabled so as to let IPIs in.
-+	 *
-+	 * If the CPU gets woken up by a special wakeup, ensure that
-+	 * the SLW engine sets LPCR with decrementer bit cleared, else
-+	 * the CPU will come back to the kernel due to a spurious
-+	 * wakeup.
-+	 */
-+	lpcr_val = mfspr(SPRN_LPCR) & ~(u64)LPCR_PECE1;
-+	pnv_program_cpu_hotplug_lpcr(cpu, lpcr_val);
-+
- 	while (!generic_check_cpu_restart(cpu)) {
- 		/*
- 		 * Clear IPI flag, since we don't handle IPIs while
-@@ -246,6 +261,16 @@ static void pnv_smp_cpu_kill_self(void)
+ /*
+  * We do the locked ops that don't return the old value as
+  * a mask operation on a byte.
+  */
+ #define IS_IMMEDIATE(nr)		(__builtin_constant_p(nr))
+-#define CONST_MASK_ADDR(nr, addr)	BITOP_ADDR((void *)(addr) + ((nr)>>3))
++#define CONST_MASK_ADDR(nr, addr)	WBYTE_ADDR((void *)(addr) + ((nr)>>3))
+ #define CONST_MASK(nr)			(1 << ((nr) & 7))
  
+ /**
+@@ -79,7 +74,7 @@ set_bit(long nr, volatile unsigned long *addr)
+ 			: "memory");
+ 	} else {
+ 		asm volatile(LOCK_PREFIX __ASM_SIZE(bts) " %1,%0"
+-			: BITOP_ADDR(addr) : "Ir" (nr) : "memory");
++			: : RLONG_ADDR(addr), "Ir" (nr) : "memory");
  	}
- 
-+	/*
-+	 * Re-enable decrementer interrupts in LPCR.
-+	 *
-+	 * Further, we want stop states to be woken up by decrementer
-+	 * for non-hotplug cases. So program the LPCR via stop api as
-+	 * well.
-+	 */
-+	lpcr_val = mfspr(SPRN_LPCR) | (u64)LPCR_PECE1;
-+	pnv_program_cpu_hotplug_lpcr(cpu, lpcr_val);
-+
- 	DBG("CPU%d coming online...\n", cpu);
  }
  
-diff --git a/arch/powerpc/platforms/pseries/hotplug-cpu.c b/arch/powerpc/platforms/pseries/hotplug-cpu.c
-index 2f8e62163602..97feb6e79f1a 100644
---- a/arch/powerpc/platforms/pseries/hotplug-cpu.c
-+++ b/arch/powerpc/platforms/pseries/hotplug-cpu.c
-@@ -802,6 +802,25 @@ static int dlpar_cpu_add_by_count(u32 cpus_to_add)
- 	return rc;
+@@ -94,7 +89,7 @@ set_bit(long nr, volatile unsigned long *addr)
+  */
+ static __always_inline void __set_bit(long nr, volatile unsigned long *addr)
+ {
+-	asm volatile(__ASM_SIZE(bts) " %1,%0" : ADDR : "Ir" (nr) : "memory");
++	asm volatile(__ASM_SIZE(bts) " %1,%0" : : ADDR, "Ir" (nr) : "memory");
  }
  
-+int dlpar_cpu_readd(int cpu)
-+{
-+	struct device_node *dn;
-+	struct device *dev;
-+	u32 drc_index;
-+	int rc;
-+
-+	dev = get_cpu_device(cpu);
-+	dn = dev->of_node;
-+
-+	rc = of_property_read_u32(dn, "ibm,my-drc-index", &drc_index);
-+
-+	rc = dlpar_cpu_remove_by_index(drc_index);
-+	if (!rc)
-+		rc = dlpar_cpu_add(drc_index);
-+
-+	return rc;
-+}
-+
- int dlpar_cpu(struct pseries_hp_errorlog *hp_elog)
- {
- 	u32 count, drc_index;
-diff --git a/arch/powerpc/platforms/pseries/pseries_energy.c b/arch/powerpc/platforms/pseries/pseries_energy.c
-index 6ed22127391b..921f12182f3e 100644
---- a/arch/powerpc/platforms/pseries/pseries_energy.c
-+++ b/arch/powerpc/platforms/pseries/pseries_energy.c
-@@ -77,18 +77,27 @@ static u32 cpu_to_drc_index(int cpu)
- 
- 		ret = drc.drc_index_start + (thread_index * drc.sequential_inc);
+ /**
+@@ -116,8 +111,7 @@ clear_bit(long nr, volatile unsigned long *addr)
+ 			: "iq" ((u8)~CONST_MASK(nr)));
  	} else {
--		const __be32 *indexes;
--
--		indexes = of_get_property(dn, "ibm,drc-indexes", NULL);
--		if (indexes == NULL)
--			goto err_of_node_put;
-+		u32 nr_drc_indexes, thread_drc_index;
- 
- 		/*
--		 * The first element indexes[0] is the number of drc_indexes
--		 * returned in the list.  Hence thread_index+1 will get the
--		 * drc_index corresponding to core number thread_index.
-+		 * The first element of ibm,drc-indexes array is the
-+		 * number of drc_indexes returned in the list.  Hence
-+		 * thread_index+1 will get the drc_index corresponding
-+		 * to core number thread_index.
- 		 */
--		ret = indexes[thread_index + 1];
-+		rc = of_property_read_u32_index(dn, "ibm,drc-indexes",
-+						0, &nr_drc_indexes);
-+		if (rc)
-+			goto err_of_node_put;
-+
-+		WARN_ON_ONCE(thread_index > nr_drc_indexes);
-+		rc = of_property_read_u32_index(dn, "ibm,drc-indexes",
-+						thread_index + 1,
-+						&thread_drc_index);
-+		if (rc)
-+			goto err_of_node_put;
-+
-+		ret = thread_drc_index;
- 	}
- 
- 	rc = 0;
-diff --git a/arch/powerpc/platforms/pseries/ras.c b/arch/powerpc/platforms/pseries/ras.c
-index d97d52772789..452dcfd7e5dd 100644
---- a/arch/powerpc/platforms/pseries/ras.c
-+++ b/arch/powerpc/platforms/pseries/ras.c
-@@ -550,6 +550,7 @@ static void pseries_print_mce_info(struct pt_regs *regs,
- 		"UE",
- 		"SLB",
- 		"ERAT",
-+		"Unknown",
- 		"TLB",
- 		"D-Cache",
- 		"Unknown",
-diff --git a/arch/powerpc/xmon/ppc-dis.c b/arch/powerpc/xmon/ppc-dis.c
-index 9deea5ee13f6..27f1e6415036 100644
---- a/arch/powerpc/xmon/ppc-dis.c
-+++ b/arch/powerpc/xmon/ppc-dis.c
-@@ -158,7 +158,7 @@ int print_insn_powerpc (unsigned long insn, unsigned long memaddr)
-     dialect |= (PPC_OPCODE_POWER5 | PPC_OPCODE_POWER6 | PPC_OPCODE_POWER7
- 		| PPC_OPCODE_POWER8 | PPC_OPCODE_POWER9 | PPC_OPCODE_HTM
- 		| PPC_OPCODE_ALTIVEC | PPC_OPCODE_ALTIVEC2
--		| PPC_OPCODE_VSX | PPC_OPCODE_VSX3),
-+		| PPC_OPCODE_VSX | PPC_OPCODE_VSX3);
- 
-   /* Get the major opcode of the insn.  */
-   opcode = NULL;
-diff --git a/arch/riscv/include/asm/syscall.h b/arch/riscv/include/asm/syscall.h
-index bba3da6ef157..6ea9e1804233 100644
---- a/arch/riscv/include/asm/syscall.h
-+++ b/arch/riscv/include/asm/syscall.h
-@@ -79,10 +79,11 @@ static inline void syscall_get_arguments(struct task_struct *task,
- 	if (i == 0) {
- 		args[0] = regs->orig_a0;
- 		args++;
--		i++;
- 		n--;
-+	} else {
-+		i--;
+ 		asm volatile(LOCK_PREFIX __ASM_SIZE(btr) " %1,%0"
+-			: BITOP_ADDR(addr)
+-			: "Ir" (nr));
++			: : RLONG_ADDR(addr), "Ir" (nr) : "memory");
  	}
--	memcpy(args, &regs->a1 + i * sizeof(regs->a1), n * sizeof(args[0]));
-+	memcpy(args, &regs->a1 + i, n * sizeof(args[0]));
- }
- 
- static inline void syscall_set_arguments(struct task_struct *task,
-@@ -94,10 +95,11 @@ static inline void syscall_set_arguments(struct task_struct *task,
-         if (i == 0) {
-                 regs->orig_a0 = args[0];
-                 args++;
--                i++;
-                 n--;
--        }
--	memcpy(&regs->a1 + i * sizeof(regs->a1), args, n * sizeof(regs->a0));
-+	} else {
-+		i--;
-+	}
-+	memcpy(&regs->a1 + i, args, n * sizeof(regs->a1));
  }
  
- static inline int syscall_get_arch(void)
-diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
-index d5d24889c3bc..c2b8c8c6c9be 100644
---- a/arch/s390/include/asm/kvm_host.h
-+++ b/arch/s390/include/asm/kvm_host.h
-@@ -878,7 +878,7 @@ static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
- static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
- static inline void kvm_arch_free_memslot(struct kvm *kvm,
- 		struct kvm_memory_slot *free, struct kvm_memory_slot *dont) {}
--static inline void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots) {}
-+static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {}
- static inline void kvm_arch_flush_shadow_all(struct kvm *kvm) {}
- static inline void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
- 		struct kvm_memory_slot *slot) {}
-diff --git a/arch/s390/kernel/perf_cpum_sf.c b/arch/s390/kernel/perf_cpum_sf.c
-index bfabeb1889cc..1266194afb02 100644
---- a/arch/s390/kernel/perf_cpum_sf.c
-+++ b/arch/s390/kernel/perf_cpum_sf.c
-@@ -1600,7 +1600,7 @@ static void aux_sdb_init(unsigned long sdb)
+@@ -137,7 +131,7 @@ static __always_inline void clear_bit_unlock(long nr, volatile unsigned long *ad
  
- /*
-  * aux_buffer_setup() - Setup AUX buffer for diagnostic mode sampling
-- * @cpu:	On which to allocate, -1 means current
-+ * @event:	Event the buffer is setup for, event->cpu == -1 means current
-  * @pages:	Array of pointers to buffer pages passed from perf core
-  * @nr_pages:	Total pages
-  * @snapshot:	Flag for snapshot mode
-@@ -1612,8 +1612,8 @@ static void aux_sdb_init(unsigned long sdb)
-  *
-  * Return the private AUX buffer structure if success or NULL if fails.
-  */
--static void *aux_buffer_setup(int cpu, void **pages, int nr_pages,
--			      bool snapshot)
-+static void *aux_buffer_setup(struct perf_event *event, void **pages,
-+			      int nr_pages, bool snapshot)
+ static __always_inline void __clear_bit(long nr, volatile unsigned long *addr)
  {
- 	struct sf_buffer *sfb;
- 	struct aux_buffer *aux;
-diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
-index 7ed90a759135..01a3f4964d57 100644
---- a/arch/s390/kernel/setup.c
-+++ b/arch/s390/kernel/setup.c
-@@ -369,7 +369,7 @@ void __init arch_call_rest_init(void)
- 		: : [_frame] "a" (frame));
+-	asm volatile(__ASM_SIZE(btr) " %1,%0" : ADDR : "Ir" (nr));
++	asm volatile(__ASM_SIZE(btr) " %1,%0" : : ADDR, "Ir" (nr) : "memory");
  }
  
--static void __init setup_lowcore(void)
-+static void __init setup_lowcore_dat_off(void)
- {
- 	struct lowcore *lc;
- 
-@@ -380,19 +380,16 @@ static void __init setup_lowcore(void)
- 	lc = memblock_alloc_low(sizeof(*lc), sizeof(*lc));
- 	lc->restart_psw.mask = PSW_KERNEL_BITS;
- 	lc->restart_psw.addr = (unsigned long) restart_int_handler;
--	lc->external_new_psw.mask = PSW_KERNEL_BITS |
--		PSW_MASK_DAT | PSW_MASK_MCHECK;
-+	lc->external_new_psw.mask = PSW_KERNEL_BITS | PSW_MASK_MCHECK;
- 	lc->external_new_psw.addr = (unsigned long) ext_int_handler;
- 	lc->svc_new_psw.mask = PSW_KERNEL_BITS |
--		PSW_MASK_DAT | PSW_MASK_IO | PSW_MASK_EXT | PSW_MASK_MCHECK;
-+		PSW_MASK_IO | PSW_MASK_EXT | PSW_MASK_MCHECK;
- 	lc->svc_new_psw.addr = (unsigned long) system_call;
--	lc->program_new_psw.mask = PSW_KERNEL_BITS |
--		PSW_MASK_DAT | PSW_MASK_MCHECK;
-+	lc->program_new_psw.mask = PSW_KERNEL_BITS | PSW_MASK_MCHECK;
- 	lc->program_new_psw.addr = (unsigned long) pgm_check_handler;
- 	lc->mcck_new_psw.mask = PSW_KERNEL_BITS;
- 	lc->mcck_new_psw.addr = (unsigned long) mcck_int_handler;
--	lc->io_new_psw.mask = PSW_KERNEL_BITS |
--		PSW_MASK_DAT | PSW_MASK_MCHECK;
-+	lc->io_new_psw.mask = PSW_KERNEL_BITS | PSW_MASK_MCHECK;
- 	lc->io_new_psw.addr = (unsigned long) io_int_handler;
- 	lc->clock_comparator = clock_comparator_max;
- 	lc->nodat_stack = ((unsigned long) &init_thread_union)
-@@ -452,6 +449,16 @@ static void __init setup_lowcore(void)
- 	lowcore_ptr[0] = lc;
+ static __always_inline bool clear_bit_unlock_is_negative_byte(long nr, volatile unsigned long *addr)
+@@ -145,7 +139,7 @@ static __always_inline bool clear_bit_unlock_is_negative_byte(long nr, volatile
+ 	bool negative;
+ 	asm volatile(LOCK_PREFIX "andb %2,%1"
+ 		CC_SET(s)
+-		: CC_OUT(s) (negative), ADDR
++		: CC_OUT(s) (negative), WBYTE_ADDR(addr)
+ 		: "ir" ((char) ~(1 << nr)) : "memory");
+ 	return negative;
  }
- 
-+static void __init setup_lowcore_dat_on(void)
-+{
-+	__ctl_clear_bit(0, 28);
-+	S390_lowcore.external_new_psw.mask |= PSW_MASK_DAT;
-+	S390_lowcore.svc_new_psw.mask |= PSW_MASK_DAT;
-+	S390_lowcore.program_new_psw.mask |= PSW_MASK_DAT;
-+	S390_lowcore.io_new_psw.mask |= PSW_MASK_DAT;
-+	__ctl_set_bit(0, 28);
-+}
-+
- static struct resource code_resource = {
- 	.name  = "Kernel code",
- 	.flags = IORESOURCE_BUSY | IORESOURCE_SYSTEM_RAM,
-@@ -1072,7 +1079,7 @@ void __init setup_arch(char **cmdline_p)
- #endif
- 
- 	setup_resources();
--	setup_lowcore();
-+	setup_lowcore_dat_off();
- 	smp_fill_possible_mask();
- 	cpu_detect_mhz_feature();
-         cpu_init();
-@@ -1085,6 +1092,12 @@ void __init setup_arch(char **cmdline_p)
- 	 */
-         paging_init();
- 
-+	/*
-+	 * After paging_init created the kernel page table, the new PSWs
-+	 * in lowcore can now run with DAT enabled.
-+	 */
-+	setup_lowcore_dat_on();
-+
-         /* Setup default console */
- 	conmode_default();
- 	set_preferred_console();
-diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
-index 68261430fe6e..64d5a3327030 100644
---- a/arch/x86/Kconfig
-+++ b/arch/x86/Kconfig
-@@ -2221,14 +2221,8 @@ config RANDOMIZE_MEMORY_PHYSICAL_PADDING
- 	   If unsure, leave at the default value.
- 
- config HOTPLUG_CPU
--	bool "Support for hot-pluggable CPUs"
-+	def_bool y
- 	depends on SMP
--	---help---
--	  Say Y here to allow turning CPUs off and on. CPUs can be
--	  controlled through /sys/devices/system/cpu.
--	  ( Note: power management support will enable this option
--	    automatically on SMP systems. )
--	  Say N if you want to disable CPU hotplug.
- 
- config BOOTPARAM_HOTPLUG_CPU0
- 	bool "Set default setting of cpu0_hotpluggable"
-diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile
-index 9b5adae9cc40..e2839b5c246c 100644
---- a/arch/x86/boot/Makefile
-+++ b/arch/x86/boot/Makefile
-@@ -100,7 +100,7 @@ $(obj)/zoffset.h: $(obj)/compressed/vmlinux FORCE
- AFLAGS_header.o += -I$(objtree)/$(obj)
- $(obj)/header.o: $(obj)/zoffset.h
- 
--LDFLAGS_setup.elf	:= -T
-+LDFLAGS_setup.elf	:= -m elf_i386 -T
- $(obj)/setup.elf: $(src)/setup.ld $(SETUP_OBJS) FORCE
- 	$(call if_changed,ld)
- 
-diff --git a/arch/x86/boot/compressed/pgtable_64.c b/arch/x86/boot/compressed/pgtable_64.c
-index 9e2157371491..f8debf7aeb4c 100644
---- a/arch/x86/boot/compressed/pgtable_64.c
-+++ b/arch/x86/boot/compressed/pgtable_64.c
-@@ -1,5 +1,7 @@
-+#include <linux/efi.h>
- #include <asm/e820/types.h>
- #include <asm/processor.h>
-+#include <asm/efi.h>
- #include "pgtable.h"
- #include "../string.h"
- 
-@@ -37,9 +39,10 @@ int cmdline_find_option_bool(const char *option);
- 
- static unsigned long find_trampoline_placement(void)
+@@ -161,13 +155,9 @@ static __always_inline bool clear_bit_unlock_is_negative_byte(long nr, volatile
+  * __clear_bit() is non-atomic and implies release semantics before the memory
+  * operation. It can be used for an unlock if no other CPUs can concurrently
+  * modify other bits in the word.
+- *
+- * No memory barrier is required here, because x86 cannot reorder stores past
+- * older loads. Same principle as spin_unlock.
+  */
+ static __always_inline void __clear_bit_unlock(long nr, volatile unsigned long *addr)
  {
--	unsigned long bios_start, ebda_start;
-+	unsigned long bios_start = 0, ebda_start = 0;
- 	unsigned long trampoline_start;
- 	struct boot_e820_entry *entry;
-+	char *signature;
- 	int i;
- 
- 	/*
-@@ -47,8 +50,18 @@ static unsigned long find_trampoline_placement(void)
- 	 * This code is based on reserve_bios_regions().
- 	 */
- 
--	ebda_start = *(unsigned short *)0x40e << 4;
--	bios_start = *(unsigned short *)0x413 << 10;
-+	/*
-+	 * EFI systems may not provide legacy ROM. The memory may not be mapped
-+	 * at all.
-+	 *
-+	 * Only look for values in the legacy ROM for non-EFI system.
-+	 */
-+	signature = (char *)&boot_params->efi_info.efi_loader_signature;
-+	if (strncmp(signature, EFI32_LOADER_SIGNATURE, 4) &&
-+	    strncmp(signature, EFI64_LOADER_SIGNATURE, 4)) {
-+		ebda_start = *(unsigned short *)0x40e << 4;
-+		bios_start = *(unsigned short *)0x413 << 10;
-+	}
- 
- 	if (bios_start < BIOS_START_MIN || bios_start > BIOS_START_MAX)
- 		bios_start = BIOS_START_MAX;
-diff --git a/arch/x86/crypto/aegis128-aesni-glue.c b/arch/x86/crypto/aegis128-aesni-glue.c
-index 2a356b948720..3ea71b871813 100644
---- a/arch/x86/crypto/aegis128-aesni-glue.c
-+++ b/arch/x86/crypto/aegis128-aesni-glue.c
-@@ -119,31 +119,20 @@ static void crypto_aegis128_aesni_process_ad(
+-	barrier();
+ 	__clear_bit(nr, addr);
  }
  
- static void crypto_aegis128_aesni_process_crypt(
--		struct aegis_state *state, struct aead_request *req,
-+		struct aegis_state *state, struct skcipher_walk *walk,
- 		const struct aegis_crypt_ops *ops)
+@@ -182,7 +172,7 @@ static __always_inline void __clear_bit_unlock(long nr, volatile unsigned long *
+  */
+ static __always_inline void __change_bit(long nr, volatile unsigned long *addr)
  {
--	struct skcipher_walk walk;
--	u8 *src, *dst;
--	unsigned int chunksize, base;
--
--	ops->skcipher_walk_init(&walk, req, false);
--
--	while (walk.nbytes) {
--		src = walk.src.virt.addr;
--		dst = walk.dst.virt.addr;
--		chunksize = walk.nbytes;
--
--		ops->crypt_blocks(state, chunksize, src, dst);
--
--		base = chunksize & ~(AEGIS128_BLOCK_SIZE - 1);
--		src += base;
--		dst += base;
--		chunksize &= AEGIS128_BLOCK_SIZE - 1;
--
--		if (chunksize > 0)
--			ops->crypt_tail(state, chunksize, src, dst);
-+	while (walk->nbytes >= AEGIS128_BLOCK_SIZE) {
-+		ops->crypt_blocks(state,
-+				  round_down(walk->nbytes, AEGIS128_BLOCK_SIZE),
-+				  walk->src.virt.addr, walk->dst.virt.addr);
-+		skcipher_walk_done(walk, walk->nbytes % AEGIS128_BLOCK_SIZE);
-+	}
+-	asm volatile(__ASM_SIZE(btc) " %1,%0" : ADDR : "Ir" (nr));
++	asm volatile(__ASM_SIZE(btc) " %1,%0" : : ADDR, "Ir" (nr) : "memory");
+ }
  
--		skcipher_walk_done(&walk, 0);
-+	if (walk->nbytes) {
-+		ops->crypt_tail(state, walk->nbytes, walk->src.virt.addr,
-+				walk->dst.virt.addr);
-+		skcipher_walk_done(walk, 0);
+ /**
+@@ -202,8 +192,7 @@ static __always_inline void change_bit(long nr, volatile unsigned long *addr)
+ 			: "iq" ((u8)CONST_MASK(nr)));
+ 	} else {
+ 		asm volatile(LOCK_PREFIX __ASM_SIZE(btc) " %1,%0"
+-			: BITOP_ADDR(addr)
+-			: "Ir" (nr));
++			: : RLONG_ADDR(addr), "Ir" (nr) : "memory");
  	}
  }
  
-@@ -186,13 +175,16 @@ static void crypto_aegis128_aesni_crypt(struct aead_request *req,
- {
- 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
- 	struct aegis_ctx *ctx = crypto_aegis128_aesni_ctx(tfm);
-+	struct skcipher_walk walk;
- 	struct aegis_state state;
+@@ -248,8 +237,8 @@ static __always_inline bool __test_and_set_bit(long nr, volatile unsigned long *
  
-+	ops->skcipher_walk_init(&walk, req, true);
-+
- 	kernel_fpu_begin();
- 
- 	crypto_aegis128_aesni_init(&state, ctx->key.bytes, req->iv);
- 	crypto_aegis128_aesni_process_ad(&state, req->src, req->assoclen);
--	crypto_aegis128_aesni_process_crypt(&state, req, ops);
-+	crypto_aegis128_aesni_process_crypt(&state, &walk, ops);
- 	crypto_aegis128_aesni_final(&state, tag_xor, req->assoclen, cryptlen);
- 
- 	kernel_fpu_end();
-diff --git a/arch/x86/crypto/aegis128l-aesni-glue.c b/arch/x86/crypto/aegis128l-aesni-glue.c
-index dbe8bb980da1..1b1b39c66c5e 100644
---- a/arch/x86/crypto/aegis128l-aesni-glue.c
-+++ b/arch/x86/crypto/aegis128l-aesni-glue.c
-@@ -119,31 +119,20 @@ static void crypto_aegis128l_aesni_process_ad(
+ 	asm(__ASM_SIZE(bts) " %2,%1"
+ 	    CC_SET(c)
+-	    : CC_OUT(c) (oldbit), ADDR
+-	    : "Ir" (nr));
++	    : CC_OUT(c) (oldbit)
++	    : ADDR, "Ir" (nr) : "memory");
+ 	return oldbit;
  }
  
- static void crypto_aegis128l_aesni_process_crypt(
--		struct aegis_state *state, struct aead_request *req,
-+		struct aegis_state *state, struct skcipher_walk *walk,
- 		const struct aegis_crypt_ops *ops)
- {
--	struct skcipher_walk walk;
--	u8 *src, *dst;
--	unsigned int chunksize, base;
--
--	ops->skcipher_walk_init(&walk, req, false);
--
--	while (walk.nbytes) {
--		src = walk.src.virt.addr;
--		dst = walk.dst.virt.addr;
--		chunksize = walk.nbytes;
--
--		ops->crypt_blocks(state, chunksize, src, dst);
--
--		base = chunksize & ~(AEGIS128L_BLOCK_SIZE - 1);
--		src += base;
--		dst += base;
--		chunksize &= AEGIS128L_BLOCK_SIZE - 1;
--
--		if (chunksize > 0)
--			ops->crypt_tail(state, chunksize, src, dst);
-+	while (walk->nbytes >= AEGIS128L_BLOCK_SIZE) {
-+		ops->crypt_blocks(state, round_down(walk->nbytes,
-+						    AEGIS128L_BLOCK_SIZE),
-+				  walk->src.virt.addr, walk->dst.virt.addr);
-+		skcipher_walk_done(walk, walk->nbytes % AEGIS128L_BLOCK_SIZE);
-+	}
+@@ -288,8 +277,8 @@ static __always_inline bool __test_and_clear_bit(long nr, volatile unsigned long
  
--		skcipher_walk_done(&walk, 0);
-+	if (walk->nbytes) {
-+		ops->crypt_tail(state, walk->nbytes, walk->src.virt.addr,
-+				walk->dst.virt.addr);
-+		skcipher_walk_done(walk, 0);
- 	}
+ 	asm volatile(__ASM_SIZE(btr) " %2,%1"
+ 		     CC_SET(c)
+-		     : CC_OUT(c) (oldbit), ADDR
+-		     : "Ir" (nr));
++		     : CC_OUT(c) (oldbit)
++		     : ADDR, "Ir" (nr) : "memory");
+ 	return oldbit;
  }
  
-@@ -186,13 +175,16 @@ static void crypto_aegis128l_aesni_crypt(struct aead_request *req,
- {
- 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
- 	struct aegis_ctx *ctx = crypto_aegis128l_aesni_ctx(tfm);
-+	struct skcipher_walk walk;
- 	struct aegis_state state;
- 
-+	ops->skcipher_walk_init(&walk, req, true);
-+
- 	kernel_fpu_begin();
- 
- 	crypto_aegis128l_aesni_init(&state, ctx->key.bytes, req->iv);
- 	crypto_aegis128l_aesni_process_ad(&state, req->src, req->assoclen);
--	crypto_aegis128l_aesni_process_crypt(&state, req, ops);
-+	crypto_aegis128l_aesni_process_crypt(&state, &walk, ops);
- 	crypto_aegis128l_aesni_final(&state, tag_xor, req->assoclen, cryptlen);
- 
- 	kernel_fpu_end();
-diff --git a/arch/x86/crypto/aegis256-aesni-glue.c b/arch/x86/crypto/aegis256-aesni-glue.c
-index 8bebda2de92f..6227ca3220a0 100644
---- a/arch/x86/crypto/aegis256-aesni-glue.c
-+++ b/arch/x86/crypto/aegis256-aesni-glue.c
-@@ -119,31 +119,20 @@ static void crypto_aegis256_aesni_process_ad(
- }
+@@ -300,8 +289,8 @@ static __always_inline bool __test_and_change_bit(long nr, volatile unsigned lon
  
- static void crypto_aegis256_aesni_process_crypt(
--		struct aegis_state *state, struct aead_request *req,
-+		struct aegis_state *state, struct skcipher_walk *walk,
- 		const struct aegis_crypt_ops *ops)
- {
--	struct skcipher_walk walk;
--	u8 *src, *dst;
--	unsigned int chunksize, base;
--
--	ops->skcipher_walk_init(&walk, req, false);
--
--	while (walk.nbytes) {
--		src = walk.src.virt.addr;
--		dst = walk.dst.virt.addr;
--		chunksize = walk.nbytes;
--
--		ops->crypt_blocks(state, chunksize, src, dst);
--
--		base = chunksize & ~(AEGIS256_BLOCK_SIZE - 1);
--		src += base;
--		dst += base;
--		chunksize &= AEGIS256_BLOCK_SIZE - 1;
--
--		if (chunksize > 0)
--			ops->crypt_tail(state, chunksize, src, dst);
-+	while (walk->nbytes >= AEGIS256_BLOCK_SIZE) {
-+		ops->crypt_blocks(state,
-+				  round_down(walk->nbytes, AEGIS256_BLOCK_SIZE),
-+				  walk->src.virt.addr, walk->dst.virt.addr);
-+		skcipher_walk_done(walk, walk->nbytes % AEGIS256_BLOCK_SIZE);
-+	}
+ 	asm volatile(__ASM_SIZE(btc) " %2,%1"
+ 		     CC_SET(c)
+-		     : CC_OUT(c) (oldbit), ADDR
+-		     : "Ir" (nr) : "memory");
++		     : CC_OUT(c) (oldbit)
++		     : ADDR, "Ir" (nr) : "memory");
  
--		skcipher_walk_done(&walk, 0);
-+	if (walk->nbytes) {
-+		ops->crypt_tail(state, walk->nbytes, walk->src.virt.addr,
-+				walk->dst.virt.addr);
-+		skcipher_walk_done(walk, 0);
- 	}
+ 	return oldbit;
  }
+@@ -332,7 +321,7 @@ static __always_inline bool variable_test_bit(long nr, volatile const unsigned l
+ 	asm volatile(__ASM_SIZE(bt) " %2,%1"
+ 		     CC_SET(c)
+ 		     : CC_OUT(c) (oldbit)
+-		     : "m" (*(unsigned long *)addr), "Ir" (nr));
++		     : "m" (*(unsigned long *)addr), "Ir" (nr) : "memory");
  
-@@ -186,13 +175,16 @@ static void crypto_aegis256_aesni_crypt(struct aead_request *req,
- {
- 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
- 	struct aegis_ctx *ctx = crypto_aegis256_aesni_ctx(tfm);
-+	struct skcipher_walk walk;
- 	struct aegis_state state;
+ 	return oldbit;
+ }
+diff --git a/arch/x86/include/asm/string_32.h b/arch/x86/include/asm/string_32.h
+index 55d392c6bd29..2fd165f1cffa 100644
+--- a/arch/x86/include/asm/string_32.h
++++ b/arch/x86/include/asm/string_32.h
+@@ -179,14 +179,7 @@ static inline void *__memcpy3d(void *to, const void *from, size_t len)
+  *	No 3D Now!
+  */
  
-+	ops->skcipher_walk_init(&walk, req, true);
-+
- 	kernel_fpu_begin();
- 
- 	crypto_aegis256_aesni_init(&state, ctx->key, req->iv);
- 	crypto_aegis256_aesni_process_ad(&state, req->src, req->assoclen);
--	crypto_aegis256_aesni_process_crypt(&state, req, ops);
-+	crypto_aegis256_aesni_process_crypt(&state, &walk, ops);
- 	crypto_aegis256_aesni_final(&state, tag_xor, req->assoclen, cryptlen);
- 
- 	kernel_fpu_end();
-diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
-index 1321700d6647..ae30c8b6ec4d 100644
---- a/arch/x86/crypto/aesni-intel_glue.c
-+++ b/arch/x86/crypto/aesni-intel_glue.c
-@@ -821,11 +821,14 @@ static int gcmaes_crypt_by_sg(bool enc, struct aead_request *req,
- 		scatterwalk_map_and_copy(assoc, req->src, 0, assoclen, 0);
- 	}
+-#if (__GNUC__ >= 4)
+ #define memcpy(t, f, n) __builtin_memcpy(t, f, n)
+-#else
+-#define memcpy(t, f, n)				\
+-	(__builtin_constant_p((n))		\
+-	 ? __constant_memcpy((t), (f), (n))	\
+-	 : __memcpy((t), (f), (n)))
+-#endif
  
--	src_sg = scatterwalk_ffwd(src_start, req->src, req->assoclen);
--	scatterwalk_start(&src_sg_walk, src_sg);
--	if (req->src != req->dst) {
--		dst_sg = scatterwalk_ffwd(dst_start, req->dst, req->assoclen);
--		scatterwalk_start(&dst_sg_walk, dst_sg);
-+	if (left) {
-+		src_sg = scatterwalk_ffwd(src_start, req->src, req->assoclen);
-+		scatterwalk_start(&src_sg_walk, src_sg);
-+		if (req->src != req->dst) {
-+			dst_sg = scatterwalk_ffwd(dst_start, req->dst,
-+						  req->assoclen);
-+			scatterwalk_start(&dst_sg_walk, dst_sg);
-+		}
- 	}
+ #endif
+ #endif /* !CONFIG_FORTIFY_SOURCE */
+@@ -282,12 +275,7 @@ void *__constant_c_and_count_memset(void *s, unsigned long pattern,
  
- 	kernel_fpu_begin();
-diff --git a/arch/x86/crypto/morus1280_glue.c b/arch/x86/crypto/morus1280_glue.c
-index 0dccdda1eb3a..7e600f8bcdad 100644
---- a/arch/x86/crypto/morus1280_glue.c
-+++ b/arch/x86/crypto/morus1280_glue.c
-@@ -85,31 +85,20 @@ static void crypto_morus1280_glue_process_ad(
- 
- static void crypto_morus1280_glue_process_crypt(struct morus1280_state *state,
- 						struct morus1280_ops ops,
--						struct aead_request *req)
-+						struct skcipher_walk *walk)
- {
--	struct skcipher_walk walk;
--	u8 *cursor_src, *cursor_dst;
--	unsigned int chunksize, base;
--
--	ops.skcipher_walk_init(&walk, req, false);
--
--	while (walk.nbytes) {
--		cursor_src = walk.src.virt.addr;
--		cursor_dst = walk.dst.virt.addr;
--		chunksize = walk.nbytes;
--
--		ops.crypt_blocks(state, cursor_src, cursor_dst, chunksize);
--
--		base = chunksize & ~(MORUS1280_BLOCK_SIZE - 1);
--		cursor_src += base;
--		cursor_dst += base;
--		chunksize &= MORUS1280_BLOCK_SIZE - 1;
--
--		if (chunksize > 0)
--			ops.crypt_tail(state, cursor_src, cursor_dst,
--				       chunksize);
-+	while (walk->nbytes >= MORUS1280_BLOCK_SIZE) {
-+		ops.crypt_blocks(state, walk->src.virt.addr,
-+				 walk->dst.virt.addr,
-+				 round_down(walk->nbytes,
-+					    MORUS1280_BLOCK_SIZE));
-+		skcipher_walk_done(walk, walk->nbytes % MORUS1280_BLOCK_SIZE);
-+	}
+ 	{
+ 		int d0, d1;
+-#if __GNUC__ == 4 && __GNUC_MINOR__ == 0
+-		/* Workaround for broken gcc 4.0 */
+-		register unsigned long eax asm("%eax") = pattern;
+-#else
+ 		unsigned long eax = pattern;
+-#endif
  
--		skcipher_walk_done(&walk, 0);
-+	if (walk->nbytes) {
-+		ops.crypt_tail(state, walk->src.virt.addr, walk->dst.virt.addr,
-+			       walk->nbytes);
-+		skcipher_walk_done(walk, 0);
- 	}
- }
+ 		switch (count % 4) {
+ 		case 0:
+@@ -321,15 +309,7 @@ void *__constant_c_and_count_memset(void *s, unsigned long pattern,
+ #define __HAVE_ARCH_MEMSET
+ extern void *memset(void *, int, size_t);
+ #ifndef CONFIG_FORTIFY_SOURCE
+-#if (__GNUC__ >= 4)
+ #define memset(s, c, count) __builtin_memset(s, c, count)
+-#else
+-#define memset(s, c, count)						\
+-	(__builtin_constant_p(c)					\
+-	 ? __constant_c_x_memset((s), (0x01010101UL * (unsigned char)(c)), \
+-				 (count))				\
+-	 : __memset((s), (c), (count)))
+-#endif
+ #endif /* !CONFIG_FORTIFY_SOURCE */
  
-@@ -147,12 +136,15 @@ static void crypto_morus1280_glue_crypt(struct aead_request *req,
- 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
- 	struct morus1280_ctx *ctx = crypto_aead_ctx(tfm);
- 	struct morus1280_state state;
-+	struct skcipher_walk walk;
-+
-+	ops.skcipher_walk_init(&walk, req, true);
- 
- 	kernel_fpu_begin();
- 
- 	ctx->ops->init(&state, &ctx->key, req->iv);
- 	crypto_morus1280_glue_process_ad(&state, ctx->ops, req->src, req->assoclen);
--	crypto_morus1280_glue_process_crypt(&state, ops, req);
-+	crypto_morus1280_glue_process_crypt(&state, ops, &walk);
- 	ctx->ops->final(&state, tag_xor, req->assoclen, cryptlen);
- 
- 	kernel_fpu_end();
-diff --git a/arch/x86/crypto/morus640_glue.c b/arch/x86/crypto/morus640_glue.c
-index 7b58fe4d9bd1..cb3a81732016 100644
---- a/arch/x86/crypto/morus640_glue.c
-+++ b/arch/x86/crypto/morus640_glue.c
-@@ -85,31 +85,19 @@ static void crypto_morus640_glue_process_ad(
- 
- static void crypto_morus640_glue_process_crypt(struct morus640_state *state,
- 					       struct morus640_ops ops,
--					       struct aead_request *req)
-+					       struct skcipher_walk *walk)
- {
--	struct skcipher_walk walk;
--	u8 *cursor_src, *cursor_dst;
--	unsigned int chunksize, base;
--
--	ops.skcipher_walk_init(&walk, req, false);
--
--	while (walk.nbytes) {
--		cursor_src = walk.src.virt.addr;
--		cursor_dst = walk.dst.virt.addr;
--		chunksize = walk.nbytes;
--
--		ops.crypt_blocks(state, cursor_src, cursor_dst, chunksize);
--
--		base = chunksize & ~(MORUS640_BLOCK_SIZE - 1);
--		cursor_src += base;
--		cursor_dst += base;
--		chunksize &= MORUS640_BLOCK_SIZE - 1;
--
--		if (chunksize > 0)
--			ops.crypt_tail(state, cursor_src, cursor_dst,
--				       chunksize);
-+	while (walk->nbytes >= MORUS640_BLOCK_SIZE) {
-+		ops.crypt_blocks(state, walk->src.virt.addr,
-+				 walk->dst.virt.addr,
-+				 round_down(walk->nbytes, MORUS640_BLOCK_SIZE));
-+		skcipher_walk_done(walk, walk->nbytes % MORUS640_BLOCK_SIZE);
-+	}
+ #define __HAVE_ARCH_MEMSET16
+diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h
+index 4e4194e21a09..75314c3dbe47 100644
+--- a/arch/x86/include/asm/string_64.h
++++ b/arch/x86/include/asm/string_64.h
+@@ -14,21 +14,6 @@
+ extern void *memcpy(void *to, const void *from, size_t len);
+ extern void *__memcpy(void *to, const void *from, size_t len);
  
--		skcipher_walk_done(&walk, 0);
-+	if (walk->nbytes) {
-+		ops.crypt_tail(state, walk->src.virt.addr, walk->dst.virt.addr,
-+			       walk->nbytes);
-+		skcipher_walk_done(walk, 0);
- 	}
- }
+-#ifndef CONFIG_FORTIFY_SOURCE
+-#if (__GNUC__ == 4 && __GNUC_MINOR__ < 3) || __GNUC__ < 4
+-#define memcpy(dst, src, len)					\
+-({								\
+-	size_t __len = (len);					\
+-	void *__ret;						\
+-	if (__builtin_constant_p(len) && __len >= 64)		\
+-		__ret = __memcpy((dst), (src), __len);		\
+-	else							\
+-		__ret = __builtin_memcpy((dst), (src), __len);	\
+-	__ret;							\
+-})
+-#endif
+-#endif /* !CONFIG_FORTIFY_SOURCE */
+-
+ #define __HAVE_ARCH_MEMSET
+ void *memset(void *s, int c, size_t n);
+ void *__memset(void *s, int c, size_t n);
+diff --git a/arch/x86/include/asm/xen/hypercall.h b/arch/x86/include/asm/xen/hypercall.h
+index ef05bea7010d..6b5c710846f5 100644
+--- a/arch/x86/include/asm/xen/hypercall.h
++++ b/arch/x86/include/asm/xen/hypercall.h
+@@ -206,6 +206,9 @@ xen_single_call(unsigned int call,
+ 	__HYPERCALL_DECLS;
+ 	__HYPERCALL_5ARG(a1, a2, a3, a4, a5);
  
-@@ -143,12 +131,15 @@ static void crypto_morus640_glue_crypt(struct aead_request *req,
- 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
- 	struct morus640_ctx *ctx = crypto_aead_ctx(tfm);
- 	struct morus640_state state;
-+	struct skcipher_walk walk;
++	if (call >= PAGE_SIZE / sizeof(hypercall_page[0]))
++		return -EINVAL;
 +
-+	ops.skcipher_walk_init(&walk, req, true);
- 
- 	kernel_fpu_begin();
+ 	asm volatile(CALL_NOSPEC
+ 		     : __HYPERCALL_5PARAM
+ 		     : [thunk_target] "a" (&hypercall_page[call])
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index f13a3a24d360..a9b8e38d78ad 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -6422,11 +6422,11 @@ e_free:
+ 	return ret;
+ }
  
- 	ctx->ops->init(&state, &ctx->key, req->iv);
- 	crypto_morus640_glue_process_ad(&state, ctx->ops, req->src, req->assoclen);
--	crypto_morus640_glue_process_crypt(&state, ops, req);
-+	crypto_morus640_glue_process_crypt(&state, ops, &walk);
- 	ctx->ops->final(&state, tag_xor, req->assoclen, cryptlen);
+-static int get_num_contig_pages(int idx, struct page **inpages,
+-				unsigned long npages)
++static unsigned long get_num_contig_pages(unsigned long idx,
++				struct page **inpages, unsigned long npages)
+ {
+ 	unsigned long paddr, next_paddr;
+-	int i = idx + 1, pages = 1;
++	unsigned long i = idx + 1, pages = 1;
  
- 	kernel_fpu_end();
-diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
-index 7d2d7c801dba..0ecfac84ba91 100644
---- a/arch/x86/events/amd/core.c
-+++ b/arch/x86/events/amd/core.c
-@@ -3,10 +3,14 @@
- #include <linux/types.h>
- #include <linux/init.h>
- #include <linux/slab.h>
-+#include <linux/delay.h>
- #include <asm/apicdef.h>
-+#include <asm/nmi.h>
+ 	/* find the number of contiguous pages starting from idx */
+ 	paddr = __sme_page_pa(inpages[idx]);
+@@ -6445,12 +6445,12 @@ static int get_num_contig_pages(int idx, struct page **inpages,
  
- #include "../perf_event.h"
+ static int sev_launch_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
+ {
+-	unsigned long vaddr, vaddr_end, next_vaddr, npages, size;
++	unsigned long vaddr, vaddr_end, next_vaddr, npages, pages, size, i;
+ 	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+ 	struct kvm_sev_launch_update_data params;
+ 	struct sev_data_launch_update_data *data;
+ 	struct page **inpages;
+-	int i, ret, pages;
++	int ret;
  
-+static DEFINE_PER_CPU(unsigned int, perf_nmi_counter);
-+
- static __initconst const u64 amd_hw_cache_event_ids
- 				[PERF_COUNT_HW_CACHE_MAX]
- 				[PERF_COUNT_HW_CACHE_OP_MAX]
-@@ -429,6 +433,132 @@ static void amd_pmu_cpu_dead(int cpu)
+ 	if (!sev_guest(kvm))
+ 		return -ENOTTY;
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index f014e1aeee96..f90b3a948291 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -500,6 +500,17 @@ static void nested_vmx_disable_intercept_for_msr(unsigned long *msr_bitmap_l1,
  	}
  }
  
-+/*
-+ * When a PMC counter overflows, an NMI is used to process the event and
-+ * reset the counter. NMI latency can result in the counter being updated
-+ * before the NMI can run, which can result in what appear to be spurious
-+ * NMIs. This function is intended to wait for the NMI to run and reset
-+ * the counter to avoid possible unhandled NMI messages.
-+ */
-+#define OVERFLOW_WAIT_COUNT	50
-+
-+static void amd_pmu_wait_on_overflow(int idx)
-+{
-+	unsigned int i;
-+	u64 counter;
++static inline void enable_x2apic_msr_intercepts(unsigned long *msr_bitmap) {
++	int msr;
 +
-+	/*
-+	 * Wait for the counter to be reset if it has overflowed. This loop
-+	 * should exit very, very quickly, but just in case, don't wait
-+	 * forever...
-+	 */
-+	for (i = 0; i < OVERFLOW_WAIT_COUNT; i++) {
-+		rdmsrl(x86_pmu_event_addr(idx), counter);
-+		if (counter & (1ULL << (x86_pmu.cntval_bits - 1)))
-+			break;
++	for (msr = 0x800; msr <= 0x8ff; msr += BITS_PER_LONG) {
++		unsigned word = msr / BITS_PER_LONG;
 +
-+		/* Might be in IRQ context, so can't sleep */
-+		udelay(1);
++		msr_bitmap[word] = ~0;
++		msr_bitmap[word + (0x800 / sizeof(long))] = ~0;
 +	}
 +}
 +
-+static void amd_pmu_disable_all(void)
-+{
-+	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
-+	int idx;
-+
-+	x86_pmu_disable_all();
-+
+ /*
+  * Merge L0's and L1's MSR bitmap, return false to indicate that
+  * we do not use the hardware.
+@@ -541,39 +552,44 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
+ 		return false;
+ 
+ 	msr_bitmap_l1 = (unsigned long *)kmap(page);
+-	if (nested_cpu_has_apic_reg_virt(vmcs12)) {
+-		/*
+-		 * L0 need not intercept reads for MSRs between 0x800 and 0x8ff, it
+-		 * just lets the processor take the value from the virtual-APIC page;
+-		 * take those 256 bits directly from the L1 bitmap.
+-		 */
+-		for (msr = 0x800; msr <= 0x8ff; msr += BITS_PER_LONG) {
+-			unsigned word = msr / BITS_PER_LONG;
+-			msr_bitmap_l0[word] = msr_bitmap_l1[word];
+-			msr_bitmap_l0[word + (0x800 / sizeof(long))] = ~0;
+-		}
+-	} else {
+-		for (msr = 0x800; msr <= 0x8ff; msr += BITS_PER_LONG) {
+-			unsigned word = msr / BITS_PER_LONG;
+-			msr_bitmap_l0[word] = ~0;
+-			msr_bitmap_l0[word + (0x800 / sizeof(long))] = ~0;
+-		}
+-	}
+ 
+-	nested_vmx_disable_intercept_for_msr(
+-		msr_bitmap_l1, msr_bitmap_l0,
+-		X2APIC_MSR(APIC_TASKPRI),
+-		MSR_TYPE_W);
 +	/*
-+	 * This shouldn't be called from NMI context, but add a safeguard here
-+	 * to return, since if we're in NMI context we can't wait for an NMI
-+	 * to reset an overflowed counter value.
++	 * To keep the control flow simple, pay eight 8-byte writes (sixteen
++	 * 4-byte writes on 32-bit systems) up front to enable intercepts for
++	 * the x2APIC MSR range and selectively disable them below.
 +	 */
-+	if (in_nmi())
-+		return;
++	enable_x2apic_msr_intercepts(msr_bitmap_l0);
 +
-+	/*
-+	 * Check each counter for overflow and wait for it to be reset by the
-+	 * NMI if it has overflowed. This relies on the fact that all active
-+	 * counters are always enabled when this function is caled and
-+	 * ARCH_PERFMON_EVENTSEL_INT is always set.
-+	 */
-+	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
-+		if (!test_bit(idx, cpuc->active_mask))
-+			continue;
-+
-+		amd_pmu_wait_on_overflow(idx);
-+	}
-+}
-+
-+static void amd_pmu_disable_event(struct perf_event *event)
-+{
-+	x86_pmu_disable_event(event);
-+
-+	/*
-+	 * This can be called from NMI context (via x86_pmu_stop). The counter
-+	 * may have overflowed, but either way, we'll never see it get reset
-+	 * by the NMI if we're already in the NMI. And the NMI latency support
-+	 * below will take care of any pending NMI that might have been
-+	 * generated by the overflow.
-+	 */
-+	if (in_nmi())
-+		return;
-+
-+	amd_pmu_wait_on_overflow(event->hw.idx);
-+}
-+
-+/*
-+ * Because of NMI latency, if multiple PMC counters are active or other sources
-+ * of NMIs are received, the perf NMI handler can handle one or more overflowed
-+ * PMC counters outside of the NMI associated with the PMC overflow. If the NMI
-+ * doesn't arrive at the LAPIC in time to become a pending NMI, then the kernel
-+ * back-to-back NMI support won't be active. This PMC handler needs to take into
-+ * account that this can occur, otherwise this could result in unknown NMI
-+ * messages being issued. Examples of this is PMC overflow while in the NMI
-+ * handler when multiple PMCs are active or PMC overflow while handling some
-+ * other source of an NMI.
-+ *
-+ * Attempt to mitigate this by using the number of active PMCs to determine
-+ * whether to return NMI_HANDLED if the perf NMI handler did not handle/reset
-+ * any PMCs. The per-CPU perf_nmi_counter variable is set to a minimum of the
-+ * number of active PMCs or 2. The value of 2 is used in case an NMI does not
-+ * arrive at the LAPIC in time to be collapsed into an already pending NMI.
-+ */
-+static int amd_pmu_handle_irq(struct pt_regs *regs)
-+{
-+	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
-+	int active, handled;
-+
-+	/*
-+	 * Obtain the active count before calling x86_pmu_handle_irq() since
-+	 * it is possible that x86_pmu_handle_irq() may make a counter
-+	 * inactive (through x86_pmu_stop).
-+	 */
-+	active = __bitmap_weight(cpuc->active_mask, X86_PMC_IDX_MAX);
-+
-+	/* Process any counter overflows */
-+	handled = x86_pmu_handle_irq(regs);
-+
-+	/*
-+	 * If a counter was handled, record the number of possible remaining
-+	 * NMIs that can occur.
-+	 */
-+	if (handled) {
-+		this_cpu_write(perf_nmi_counter,
-+			       min_t(unsigned int, 2, active));
-+
-+		return handled;
-+	}
-+
-+	if (!this_cpu_read(perf_nmi_counter))
-+		return NMI_DONE;
-+
-+	this_cpu_dec(perf_nmi_counter);
-+
-+	return NMI_HANDLED;
-+}
-+
- static struct event_constraint *
- amd_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
- 			  struct perf_event *event)
-@@ -621,11 +751,11 @@ static ssize_t amd_event_sysfs_show(char *page, u64 config)
- 
- static __initconst const struct x86_pmu amd_pmu = {
- 	.name			= "AMD",
--	.handle_irq		= x86_pmu_handle_irq,
--	.disable_all		= x86_pmu_disable_all,
-+	.handle_irq		= amd_pmu_handle_irq,
-+	.disable_all		= amd_pmu_disable_all,
- 	.enable_all		= x86_pmu_enable_all,
- 	.enable			= x86_pmu_enable_event,
--	.disable		= x86_pmu_disable_event,
-+	.disable		= amd_pmu_disable_event,
- 	.hw_config		= amd_pmu_hw_config,
- 	.schedule_events	= x86_schedule_events,
- 	.eventsel		= MSR_K7_EVNTSEL0,
-@@ -732,7 +862,7 @@ void amd_pmu_enable_virt(void)
- 	cpuc->perf_ctr_virt_mask = 0;
- 
- 	/* Reload all events */
--	x86_pmu_disable_all();
-+	amd_pmu_disable_all();
- 	x86_pmu_enable_all(0);
- }
- EXPORT_SYMBOL_GPL(amd_pmu_enable_virt);
-@@ -750,7 +880,7 @@ void amd_pmu_disable_virt(void)
- 	cpuc->perf_ctr_virt_mask = AMD64_EVENTSEL_HOSTONLY;
- 
- 	/* Reload all events */
--	x86_pmu_disable_all();
-+	amd_pmu_disable_all();
- 	x86_pmu_enable_all(0);
- }
- EXPORT_SYMBOL_GPL(amd_pmu_disable_virt);
-diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
-index b684f0294f35..81911e11a15d 100644
---- a/arch/x86/events/core.c
-+++ b/arch/x86/events/core.c
-@@ -1349,8 +1349,9 @@ void x86_pmu_stop(struct perf_event *event, int flags)
- 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
- 	struct hw_perf_event *hwc = &event->hw;
- 
--	if (__test_and_clear_bit(hwc->idx, cpuc->active_mask)) {
-+	if (test_bit(hwc->idx, cpuc->active_mask)) {
- 		x86_pmu.disable(event);
-+		__clear_bit(hwc->idx, cpuc->active_mask);
- 		cpuc->events[hwc->idx] = NULL;
- 		WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED);
- 		hwc->state |= PERF_HES_STOPPED;
-@@ -1447,16 +1448,8 @@ int x86_pmu_handle_irq(struct pt_regs *regs)
- 	apic_write(APIC_LVTPC, APIC_DM_NMI);
- 
- 	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
--		if (!test_bit(idx, cpuc->active_mask)) {
--			/*
--			 * Though we deactivated the counter some cpus
--			 * might still deliver spurious interrupts still
--			 * in flight. Catch them:
--			 */
--			if (__test_and_clear_bit(idx, cpuc->running))
--				handled++;
-+		if (!test_bit(idx, cpuc->active_mask))
- 			continue;
--		}
- 
- 		event = cpuc->events[idx];
- 
-@@ -1995,7 +1988,7 @@ static int x86_pmu_commit_txn(struct pmu *pmu)
-  */
- static void free_fake_cpuc(struct cpu_hw_events *cpuc)
- {
--	kfree(cpuc->shared_regs);
-+	intel_cpuc_finish(cpuc);
- 	kfree(cpuc);
- }
- 
-@@ -2007,14 +2000,11 @@ static struct cpu_hw_events *allocate_fake_cpuc(void)
- 	cpuc = kzalloc(sizeof(*cpuc), GFP_KERNEL);
- 	if (!cpuc)
- 		return ERR_PTR(-ENOMEM);
--
--	/* only needed, if we have extra_regs */
--	if (x86_pmu.extra_regs) {
--		cpuc->shared_regs = allocate_shared_regs(cpu);
--		if (!cpuc->shared_regs)
--			goto error;
--	}
- 	cpuc->is_fake = 1;
-+
-+	if (intel_cpuc_prepare(cpuc, cpu))
-+		goto error;
-+
- 	return cpuc;
- error:
- 	free_fake_cpuc(cpuc);
-diff --git a/arch/x86/events/intel/bts.c b/arch/x86/events/intel/bts.c
-index a01ef1b0f883..7cdd7b13bbda 100644
---- a/arch/x86/events/intel/bts.c
-+++ b/arch/x86/events/intel/bts.c
-@@ -77,10 +77,12 @@ static size_t buf_size(struct page *page)
- }
- 
- static void *
--bts_buffer_setup_aux(int cpu, void **pages, int nr_pages, bool overwrite)
-+bts_buffer_setup_aux(struct perf_event *event, void **pages,
-+		     int nr_pages, bool overwrite)
- {
- 	struct bts_buffer *buf;
- 	struct page *page;
-+	int cpu = event->cpu;
- 	int node = (cpu == -1) ? cpu : cpu_to_node(cpu);
- 	unsigned long offset;
- 	size_t size = nr_pages << PAGE_SHIFT;
-diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
-index 730978dff63f..2480feb07df3 100644
---- a/arch/x86/events/intel/core.c
-+++ b/arch/x86/events/intel/core.c
-@@ -1999,6 +1999,39 @@ static void intel_pmu_nhm_enable_all(int added)
- 	intel_pmu_enable_all(added);
- }
- 
-+static void intel_set_tfa(struct cpu_hw_events *cpuc, bool on)
-+{
-+	u64 val = on ? MSR_TFA_RTM_FORCE_ABORT : 0;
-+
-+	if (cpuc->tfa_shadow != val) {
-+		cpuc->tfa_shadow = val;
-+		wrmsrl(MSR_TSX_FORCE_ABORT, val);
-+	}
-+}
-+
-+static void intel_tfa_commit_scheduling(struct cpu_hw_events *cpuc, int idx, int cntr)
-+{
-+	/*
-+	 * We're going to use PMC3, make sure TFA is set before we touch it.
-+	 */
-+	if (cntr == 3 && !cpuc->is_fake)
-+		intel_set_tfa(cpuc, true);
-+}
-+
-+static void intel_tfa_pmu_enable_all(int added)
-+{
-+	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
-+
-+	/*
-+	 * If we find PMC3 is no longer used when we enable the PMU, we can
-+	 * clear TFA.
-+	 */
-+	if (!test_bit(3, cpuc->active_mask))
-+		intel_set_tfa(cpuc, false);
-+
-+	intel_pmu_enable_all(added);
-+}
-+
- static void enable_counter_freeze(void)
- {
- 	update_debugctlmsr(get_debugctlmsr() |
-@@ -2768,6 +2801,35 @@ intel_stop_scheduling(struct cpu_hw_events *cpuc)
- 	raw_spin_unlock(&excl_cntrs->lock);
- }
- 
-+static struct event_constraint *
-+dyn_constraint(struct cpu_hw_events *cpuc, struct event_constraint *c, int idx)
-+{
-+	WARN_ON_ONCE(!cpuc->constraint_list);
-+
-+	if (!(c->flags & PERF_X86_EVENT_DYNAMIC)) {
-+		struct event_constraint *cx;
-+
-+		/*
-+		 * grab pre-allocated constraint entry
-+		 */
-+		cx = &cpuc->constraint_list[idx];
-+
-+		/*
-+		 * initialize dynamic constraint
-+		 * with static constraint
-+		 */
-+		*cx = *c;
-+
-+		/*
-+		 * mark constraint as dynamic
-+		 */
-+		cx->flags |= PERF_X86_EVENT_DYNAMIC;
-+		c = cx;
-+	}
-+
-+	return c;
-+}
-+
- static struct event_constraint *
- intel_get_excl_constraints(struct cpu_hw_events *cpuc, struct perf_event *event,
- 			   int idx, struct event_constraint *c)
-@@ -2798,27 +2860,7 @@ intel_get_excl_constraints(struct cpu_hw_events *cpuc, struct perf_event *event,
- 	 * only needed when constraint has not yet
- 	 * been cloned (marked dynamic)
- 	 */
--	if (!(c->flags & PERF_X86_EVENT_DYNAMIC)) {
--		struct event_constraint *cx;
--
--		/*
--		 * grab pre-allocated constraint entry
--		 */
--		cx = &cpuc->constraint_list[idx];
--
--		/*
--		 * initialize dynamic constraint
--		 * with static constraint
--		 */
--		*cx = *c;
--
--		/*
--		 * mark constraint as dynamic, so we
--		 * can free it later on
--		 */
--		cx->flags |= PERF_X86_EVENT_DYNAMIC;
--		c = cx;
--	}
-+	c = dyn_constraint(cpuc, c, idx);
- 
- 	/*
- 	 * From here on, the constraint is dynamic.
-@@ -3345,6 +3387,26 @@ glp_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
- 	return c;
- }
- 
-+static bool allow_tsx_force_abort = true;
-+
-+static struct event_constraint *
-+tfa_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
-+			  struct perf_event *event)
-+{
-+	struct event_constraint *c = hsw_get_event_constraints(cpuc, idx, event);
-+
-+	/*
-+	 * Without TFA we must not use PMC3.
-+	 */
-+	if (!allow_tsx_force_abort && test_bit(3, c->idxmsk) && idx >= 0) {
-+		c = dyn_constraint(cpuc, c, idx);
-+		c->idxmsk64 &= ~(1ULL << 3);
-+		c->weight--;
-+	}
-+
-+	return c;
-+}
-+
- /*
-  * Broadwell:
-  *
-@@ -3398,7 +3460,7 @@ ssize_t intel_event_sysfs_show(char *page, u64 config)
- 	return x86_event_sysfs_show(page, config, event);
- }
- 
--struct intel_shared_regs *allocate_shared_regs(int cpu)
-+static struct intel_shared_regs *allocate_shared_regs(int cpu)
- {
- 	struct intel_shared_regs *regs;
- 	int i;
-@@ -3430,23 +3492,24 @@ static struct intel_excl_cntrs *allocate_excl_cntrs(int cpu)
- 	return c;
- }
- 
--static int intel_pmu_cpu_prepare(int cpu)
--{
--	struct cpu_hw_events *cpuc = &per_cpu(cpu_hw_events, cpu);
- 
-+int intel_cpuc_prepare(struct cpu_hw_events *cpuc, int cpu)
-+{
- 	if (x86_pmu.extra_regs || x86_pmu.lbr_sel_map) {
- 		cpuc->shared_regs = allocate_shared_regs(cpu);
- 		if (!cpuc->shared_regs)
- 			goto err;
- 	}
- 
--	if (x86_pmu.flags & PMU_FL_EXCL_CNTRS) {
-+	if (x86_pmu.flags & (PMU_FL_EXCL_CNTRS | PMU_FL_TFA)) {
- 		size_t sz = X86_PMC_IDX_MAX * sizeof(struct event_constraint);
- 
--		cpuc->constraint_list = kzalloc(sz, GFP_KERNEL);
-+		cpuc->constraint_list = kzalloc_node(sz, GFP_KERNEL, cpu_to_node(cpu));
- 		if (!cpuc->constraint_list)
- 			goto err_shared_regs;
-+	}
- 
-+	if (x86_pmu.flags & PMU_FL_EXCL_CNTRS) {
- 		cpuc->excl_cntrs = allocate_excl_cntrs(cpu);
- 		if (!cpuc->excl_cntrs)
- 			goto err_constraint_list;
-@@ -3468,6 +3531,11 @@ err:
- 	return -ENOMEM;
- }
- 
-+static int intel_pmu_cpu_prepare(int cpu)
-+{
-+	return intel_cpuc_prepare(&per_cpu(cpu_hw_events, cpu), cpu);
-+}
-+
- static void flip_smm_bit(void *data)
- {
- 	unsigned long set = *(unsigned long *)data;
-@@ -3542,9 +3610,8 @@ static void intel_pmu_cpu_starting(int cpu)
- 	}
- }
- 
--static void free_excl_cntrs(int cpu)
-+static void free_excl_cntrs(struct cpu_hw_events *cpuc)
- {
--	struct cpu_hw_events *cpuc = &per_cpu(cpu_hw_events, cpu);
- 	struct intel_excl_cntrs *c;
- 
- 	c = cpuc->excl_cntrs;
-@@ -3552,9 +3619,10 @@ static void free_excl_cntrs(int cpu)
- 		if (c->core_id == -1 || --c->refcnt == 0)
- 			kfree(c);
- 		cpuc->excl_cntrs = NULL;
--		kfree(cpuc->constraint_list);
--		cpuc->constraint_list = NULL;
- 	}
-+
-+	kfree(cpuc->constraint_list);
-+	cpuc->constraint_list = NULL;
- }
- 
- static void intel_pmu_cpu_dying(int cpu)
-@@ -3565,9 +3633,8 @@ static void intel_pmu_cpu_dying(int cpu)
- 		disable_counter_freeze();
- }
- 
--static void intel_pmu_cpu_dead(int cpu)
-+void intel_cpuc_finish(struct cpu_hw_events *cpuc)
- {
--	struct cpu_hw_events *cpuc = &per_cpu(cpu_hw_events, cpu);
- 	struct intel_shared_regs *pc;
- 
- 	pc = cpuc->shared_regs;
-@@ -3577,7 +3644,12 @@ static void intel_pmu_cpu_dead(int cpu)
- 		cpuc->shared_regs = NULL;
- 	}
- 
--	free_excl_cntrs(cpu);
-+	free_excl_cntrs(cpuc);
-+}
-+
-+static void intel_pmu_cpu_dead(int cpu)
-+{
-+	intel_cpuc_finish(&per_cpu(cpu_hw_events, cpu));
- }
- 
- static void intel_pmu_sched_task(struct perf_event_context *ctx,
-@@ -4070,8 +4142,11 @@ static struct attribute *intel_pmu_caps_attrs[] = {
-        NULL
- };
- 
-+static DEVICE_BOOL_ATTR(allow_tsx_force_abort, 0644, allow_tsx_force_abort);
-+
- static struct attribute *intel_pmu_attrs[] = {
- 	&dev_attr_freeze_on_smi.attr,
-+	NULL, /* &dev_attr_allow_tsx_force_abort.attr.attr */
- 	NULL,
- };
- 
-@@ -4564,6 +4639,15 @@ __init int intel_pmu_init(void)
- 		tsx_attr = hsw_tsx_events_attrs;
- 		intel_pmu_pebs_data_source_skl(
- 			boot_cpu_data.x86_model == INTEL_FAM6_SKYLAKE_X);
-+
-+		if (boot_cpu_has(X86_FEATURE_TSX_FORCE_ABORT)) {
-+			x86_pmu.flags |= PMU_FL_TFA;
-+			x86_pmu.get_event_constraints = tfa_get_event_constraints;
-+			x86_pmu.enable_all = intel_tfa_pmu_enable_all;
-+			x86_pmu.commit_scheduling = intel_tfa_commit_scheduling;
-+			intel_pmu_attrs[1] = &dev_attr_allow_tsx_force_abort.attr.attr;
-+		}
-+
- 		pr_cont("Skylake events, ");
- 		name = "skylake";
- 		break;
-@@ -4715,7 +4799,7 @@ static __init int fixup_ht_bug(void)
- 	hardlockup_detector_perf_restart();
- 
- 	for_each_online_cpu(c)
--		free_excl_cntrs(c);
-+		free_excl_cntrs(&per_cpu(cpu_hw_events, c));
- 
- 	cpus_read_unlock();
- 	pr_info("PMU erratum BJ122, BV98, HSD29 workaround disabled, HT off\n");
-diff --git a/arch/x86/events/intel/pt.c b/arch/x86/events/intel/pt.c
-index 9494ca68fd9d..c0e86ff21f81 100644
---- a/arch/x86/events/intel/pt.c
-+++ b/arch/x86/events/intel/pt.c
-@@ -1114,10 +1114,11 @@ static int pt_buffer_init_topa(struct pt_buffer *buf, unsigned long nr_pages,
-  * Return:	Our private PT buffer structure.
-  */
- static void *
--pt_buffer_setup_aux(int cpu, void **pages, int nr_pages, bool snapshot)
-+pt_buffer_setup_aux(struct perf_event *event, void **pages,
-+		    int nr_pages, bool snapshot)
- {
- 	struct pt_buffer *buf;
--	int node, ret;
-+	int node, ret, cpu = event->cpu;
- 
- 	if (!nr_pages)
- 		return NULL;
-diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
-index 27a461414b30..2690135bf83f 100644
---- a/arch/x86/events/intel/uncore.c
-+++ b/arch/x86/events/intel/uncore.c
-@@ -740,6 +740,7 @@ static int uncore_pmu_event_init(struct perf_event *event)
- 		/* fixed counters have event field hardcoded to zero */
- 		hwc->config = 0ULL;
- 	} else if (is_freerunning_event(event)) {
-+		hwc->config = event->attr.config;
- 		if (!check_valid_freerunning_event(box, event))
- 			return -EINVAL;
- 		event->hw.idx = UNCORE_PMC_IDX_FREERUNNING;
-diff --git a/arch/x86/events/intel/uncore.h b/arch/x86/events/intel/uncore.h
-index cb46d602a6b8..853a49a8ccf6 100644
---- a/arch/x86/events/intel/uncore.h
-+++ b/arch/x86/events/intel/uncore.h
-@@ -292,8 +292,8 @@ static inline
- unsigned int uncore_freerunning_counter(struct intel_uncore_box *box,
- 					struct perf_event *event)
- {
--	unsigned int type = uncore_freerunning_type(event->attr.config);
--	unsigned int idx = uncore_freerunning_idx(event->attr.config);
-+	unsigned int type = uncore_freerunning_type(event->hw.config);
-+	unsigned int idx = uncore_freerunning_idx(event->hw.config);
- 	struct intel_uncore_pmu *pmu = box->pmu;
- 
- 	return pmu->type->freerunning[type].counter_base +
-@@ -377,7 +377,7 @@ static inline
- unsigned int uncore_freerunning_bits(struct intel_uncore_box *box,
- 				     struct perf_event *event)
- {
--	unsigned int type = uncore_freerunning_type(event->attr.config);
-+	unsigned int type = uncore_freerunning_type(event->hw.config);
- 
- 	return box->pmu->type->freerunning[type].bits;
- }
-@@ -385,7 +385,7 @@ unsigned int uncore_freerunning_bits(struct intel_uncore_box *box,
- static inline int uncore_num_freerunning(struct intel_uncore_box *box,
- 					 struct perf_event *event)
- {
--	unsigned int type = uncore_freerunning_type(event->attr.config);
-+	unsigned int type = uncore_freerunning_type(event->hw.config);
- 
- 	return box->pmu->type->freerunning[type].num_counters;
- }
-@@ -399,8 +399,8 @@ static inline int uncore_num_freerunning_types(struct intel_uncore_box *box,
- static inline bool check_valid_freerunning_event(struct intel_uncore_box *box,
- 						 struct perf_event *event)
- {
--	unsigned int type = uncore_freerunning_type(event->attr.config);
--	unsigned int idx = uncore_freerunning_idx(event->attr.config);
-+	unsigned int type = uncore_freerunning_type(event->hw.config);
-+	unsigned int idx = uncore_freerunning_idx(event->hw.config);
- 
- 	return (type < uncore_num_freerunning_types(box, event)) &&
- 	       (idx < uncore_num_freerunning(box, event));
-diff --git a/arch/x86/events/intel/uncore_snb.c b/arch/x86/events/intel/uncore_snb.c
-index 2593b0d7aeee..ef7faf486a1a 100644
---- a/arch/x86/events/intel/uncore_snb.c
-+++ b/arch/x86/events/intel/uncore_snb.c
-@@ -448,9 +448,11 @@ static int snb_uncore_imc_event_init(struct perf_event *event)
- 
- 	/* must be done before validate_group */
- 	event->hw.event_base = base;
--	event->hw.config = cfg;
- 	event->hw.idx = idx;
- 
-+	/* Convert to standard encoding format for freerunning counters */
-+	event->hw.config = ((cfg - 1) << 8) | 0x10ff;
-+
- 	/* no group validation needed, we have free running counters */
- 
- 	return 0;
-diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
-index d46fd6754d92..acd72e669c04 100644
---- a/arch/x86/events/perf_event.h
-+++ b/arch/x86/events/perf_event.h
-@@ -242,6 +242,11 @@ struct cpu_hw_events {
- 	struct intel_excl_cntrs		*excl_cntrs;
- 	int excl_thread_id; /* 0 or 1 */
- 
-+	/*
-+	 * SKL TSX_FORCE_ABORT shadow
-+	 */
-+	u64				tfa_shadow;
-+
- 	/*
- 	 * AMD specific bits
- 	 */
-@@ -681,6 +686,7 @@ do {									\
- #define PMU_FL_EXCL_CNTRS	0x4 /* has exclusive counter requirements  */
- #define PMU_FL_EXCL_ENABLED	0x8 /* exclusive counter active */
- #define PMU_FL_PEBS_ALL		0x10 /* all events are valid PEBS events */
-+#define PMU_FL_TFA		0x20 /* deal with TSX force abort */
- 
- #define EVENT_VAR(_id)  event_attr_##_id
- #define EVENT_PTR(_id) &event_attr_##_id.attr.attr
-@@ -889,7 +895,8 @@ struct event_constraint *
- x86_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
- 			  struct perf_event *event);
- 
--struct intel_shared_regs *allocate_shared_regs(int cpu);
-+extern int intel_cpuc_prepare(struct cpu_hw_events *cpuc, int cpu);
-+extern void intel_cpuc_finish(struct cpu_hw_events *cpuc);
- 
- int intel_pmu_init(void);
- 
-@@ -1025,9 +1032,13 @@ static inline int intel_pmu_init(void)
- 	return 0;
- }
- 
--static inline struct intel_shared_regs *allocate_shared_regs(int cpu)
-+static inline int intel_cpuc_prepare(struct cpu_hw_events *cpuc, int cpu)
-+{
-+	return 0;
-+}
-+
-+static inline void intel_cpuc_finish(struct cpu_hw_events *cpuc)
- {
--	return NULL;
- }
- 
- static inline int is_ht_workaround_enabled(void)
-diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
-index 7abb09e2eeb8..d3f42b6bbdac 100644
---- a/arch/x86/hyperv/hv_init.c
-+++ b/arch/x86/hyperv/hv_init.c
-@@ -406,6 +406,13 @@ void hyperv_cleanup(void)
- 	/* Reset our OS id */
- 	wrmsrl(HV_X64_MSR_GUEST_OS_ID, 0);
- 
-+	/*
-+	 * Reset hypercall page reference before reset the page,
-+	 * let hypercall operations fail safely rather than
-+	 * panic the kernel for using invalid hypercall page
-+	 */
-+	hv_hypercall_pg = NULL;
-+
- 	/* Reset the hypercall page */
- 	hypercall_msr.as_uint64 = 0;
- 	wrmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
-diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
-index ad7b210aa3f6..8e790ec219a5 100644
---- a/arch/x86/include/asm/bitops.h
-+++ b/arch/x86/include/asm/bitops.h
-@@ -36,22 +36,17 @@
-  * bit 0 is the LSB of addr; bit 32 is the LSB of (addr+1).
-  */
- 
--#if __GNUC__ < 4 || (__GNUC__ == 4 && __GNUC_MINOR__ < 1)
--/* Technically wrong, but this avoids compilation errors on some gcc
--   versions. */
--#define BITOP_ADDR(x) "=m" (*(volatile long *) (x))
--#else
--#define BITOP_ADDR(x) "+m" (*(volatile long *) (x))
--#endif
-+#define RLONG_ADDR(x)			 "m" (*(volatile long *) (x))
-+#define WBYTE_ADDR(x)			"+m" (*(volatile char *) (x))
- 
--#define ADDR				BITOP_ADDR(addr)
-+#define ADDR				RLONG_ADDR(addr)
- 
- /*
-  * We do the locked ops that don't return the old value as
-  * a mask operation on a byte.
-  */
- #define IS_IMMEDIATE(nr)		(__builtin_constant_p(nr))
--#define CONST_MASK_ADDR(nr, addr)	BITOP_ADDR((void *)(addr) + ((nr)>>3))
-+#define CONST_MASK_ADDR(nr, addr)	WBYTE_ADDR((void *)(addr) + ((nr)>>3))
- #define CONST_MASK(nr)			(1 << ((nr) & 7))
- 
- /**
-@@ -79,7 +74,7 @@ set_bit(long nr, volatile unsigned long *addr)
- 			: "memory");
- 	} else {
- 		asm volatile(LOCK_PREFIX __ASM_SIZE(bts) " %1,%0"
--			: BITOP_ADDR(addr) : "Ir" (nr) : "memory");
-+			: : RLONG_ADDR(addr), "Ir" (nr) : "memory");
- 	}
- }
- 
-@@ -94,7 +89,7 @@ set_bit(long nr, volatile unsigned long *addr)
-  */
- static __always_inline void __set_bit(long nr, volatile unsigned long *addr)
- {
--	asm volatile(__ASM_SIZE(bts) " %1,%0" : ADDR : "Ir" (nr) : "memory");
-+	asm volatile(__ASM_SIZE(bts) " %1,%0" : : ADDR, "Ir" (nr) : "memory");
- }
- 
- /**
-@@ -116,8 +111,7 @@ clear_bit(long nr, volatile unsigned long *addr)
- 			: "iq" ((u8)~CONST_MASK(nr)));
- 	} else {
- 		asm volatile(LOCK_PREFIX __ASM_SIZE(btr) " %1,%0"
--			: BITOP_ADDR(addr)
--			: "Ir" (nr));
-+			: : RLONG_ADDR(addr), "Ir" (nr) : "memory");
- 	}
- }
- 
-@@ -137,7 +131,7 @@ static __always_inline void clear_bit_unlock(long nr, volatile unsigned long *ad
- 
- static __always_inline void __clear_bit(long nr, volatile unsigned long *addr)
- {
--	asm volatile(__ASM_SIZE(btr) " %1,%0" : ADDR : "Ir" (nr));
-+	asm volatile(__ASM_SIZE(btr) " %1,%0" : : ADDR, "Ir" (nr) : "memory");
- }
- 
- static __always_inline bool clear_bit_unlock_is_negative_byte(long nr, volatile unsigned long *addr)
-@@ -145,7 +139,7 @@ static __always_inline bool clear_bit_unlock_is_negative_byte(long nr, volatile
- 	bool negative;
- 	asm volatile(LOCK_PREFIX "andb %2,%1"
- 		CC_SET(s)
--		: CC_OUT(s) (negative), ADDR
-+		: CC_OUT(s) (negative), WBYTE_ADDR(addr)
- 		: "ir" ((char) ~(1 << nr)) : "memory");
- 	return negative;
- }
-@@ -161,13 +155,9 @@ static __always_inline bool clear_bit_unlock_is_negative_byte(long nr, volatile
-  * __clear_bit() is non-atomic and implies release semantics before the memory
-  * operation. It can be used for an unlock if no other CPUs can concurrently
-  * modify other bits in the word.
-- *
-- * No memory barrier is required here, because x86 cannot reorder stores past
-- * older loads. Same principle as spin_unlock.
-  */
- static __always_inline void __clear_bit_unlock(long nr, volatile unsigned long *addr)
- {
--	barrier();
- 	__clear_bit(nr, addr);
- }
- 
-@@ -182,7 +172,7 @@ static __always_inline void __clear_bit_unlock(long nr, volatile unsigned long *
-  */
- static __always_inline void __change_bit(long nr, volatile unsigned long *addr)
- {
--	asm volatile(__ASM_SIZE(btc) " %1,%0" : ADDR : "Ir" (nr));
-+	asm volatile(__ASM_SIZE(btc) " %1,%0" : : ADDR, "Ir" (nr) : "memory");
- }
- 
- /**
-@@ -202,8 +192,7 @@ static __always_inline void change_bit(long nr, volatile unsigned long *addr)
- 			: "iq" ((u8)CONST_MASK(nr)));
- 	} else {
- 		asm volatile(LOCK_PREFIX __ASM_SIZE(btc) " %1,%0"
--			: BITOP_ADDR(addr)
--			: "Ir" (nr));
-+			: : RLONG_ADDR(addr), "Ir" (nr) : "memory");
- 	}
- }
- 
-@@ -248,8 +237,8 @@ static __always_inline bool __test_and_set_bit(long nr, volatile unsigned long *
- 
- 	asm(__ASM_SIZE(bts) " %2,%1"
- 	    CC_SET(c)
--	    : CC_OUT(c) (oldbit), ADDR
--	    : "Ir" (nr));
-+	    : CC_OUT(c) (oldbit)
-+	    : ADDR, "Ir" (nr) : "memory");
- 	return oldbit;
- }
- 
-@@ -288,8 +277,8 @@ static __always_inline bool __test_and_clear_bit(long nr, volatile unsigned long
- 
- 	asm volatile(__ASM_SIZE(btr) " %2,%1"
- 		     CC_SET(c)
--		     : CC_OUT(c) (oldbit), ADDR
--		     : "Ir" (nr));
-+		     : CC_OUT(c) (oldbit)
-+		     : ADDR, "Ir" (nr) : "memory");
- 	return oldbit;
- }
- 
-@@ -300,8 +289,8 @@ static __always_inline bool __test_and_change_bit(long nr, volatile unsigned lon
- 
- 	asm volatile(__ASM_SIZE(btc) " %2,%1"
- 		     CC_SET(c)
--		     : CC_OUT(c) (oldbit), ADDR
--		     : "Ir" (nr) : "memory");
-+		     : CC_OUT(c) (oldbit)
-+		     : ADDR, "Ir" (nr) : "memory");
- 
- 	return oldbit;
- }
-@@ -332,7 +321,7 @@ static __always_inline bool variable_test_bit(long nr, volatile const unsigned l
- 	asm volatile(__ASM_SIZE(bt) " %2,%1"
- 		     CC_SET(c)
- 		     : CC_OUT(c) (oldbit)
--		     : "m" (*(unsigned long *)addr), "Ir" (nr));
-+		     : "m" (*(unsigned long *)addr), "Ir" (nr) : "memory");
- 
- 	return oldbit;
- }
-diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
-index 6d6122524711..981ff9479648 100644
---- a/arch/x86/include/asm/cpufeatures.h
-+++ b/arch/x86/include/asm/cpufeatures.h
-@@ -344,6 +344,7 @@
- /* Intel-defined CPU features, CPUID level 0x00000007:0 (EDX), word 18 */
- #define X86_FEATURE_AVX512_4VNNIW	(18*32+ 2) /* AVX-512 Neural Network Instructions */
- #define X86_FEATURE_AVX512_4FMAPS	(18*32+ 3) /* AVX-512 Multiply Accumulation Single precision */
-+#define X86_FEATURE_TSX_FORCE_ABORT	(18*32+13) /* "" TSX_FORCE_ABORT */
- #define X86_FEATURE_PCONFIG		(18*32+18) /* Intel PCONFIG */
- #define X86_FEATURE_SPEC_CTRL		(18*32+26) /* "" Speculation Control (IBRS + IBPB) */
- #define X86_FEATURE_INTEL_STIBP		(18*32+27) /* "" Single Thread Indirect Branch Predictors */
-diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
-index 180373360e34..71d763ad2637 100644
---- a/arch/x86/include/asm/kvm_host.h
-+++ b/arch/x86/include/asm/kvm_host.h
-@@ -352,6 +352,7 @@ struct kvm_mmu_page {
- };
- 
- struct kvm_pio_request {
-+	unsigned long linear_rip;
- 	unsigned long count;
- 	int in;
- 	int port;
-@@ -570,6 +571,7 @@ struct kvm_vcpu_arch {
- 	bool tpr_access_reporting;
- 	u64 ia32_xss;
- 	u64 microcode_version;
-+	u64 arch_capabilities;
- 
- 	/*
- 	 * Paging state of the vcpu
-@@ -1255,7 +1257,7 @@ void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm,
- 				   struct kvm_memory_slot *slot,
- 				   gfn_t gfn_offset, unsigned long mask);
- void kvm_mmu_zap_all(struct kvm *kvm);
--void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, struct kvm_memslots *slots);
-+void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen);
- unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm);
- void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int kvm_nr_mmu_pages);
- 
-diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
-index 8e40c2446fd1..ca5bc0eacb95 100644
---- a/arch/x86/include/asm/msr-index.h
-+++ b/arch/x86/include/asm/msr-index.h
-@@ -666,6 +666,12 @@
- 
- #define MSR_IA32_TSC_DEADLINE		0x000006E0
- 
-+
-+#define MSR_TSX_FORCE_ABORT		0x0000010F
-+
-+#define MSR_TFA_RTM_FORCE_ABORT_BIT	0
-+#define MSR_TFA_RTM_FORCE_ABORT		BIT_ULL(MSR_TFA_RTM_FORCE_ABORT_BIT)
-+
- /* P4/Xeon+ specific */
- #define MSR_IA32_MCG_EAX		0x00000180
- #define MSR_IA32_MCG_EBX		0x00000181
-diff --git a/arch/x86/include/asm/string_32.h b/arch/x86/include/asm/string_32.h
-index 55d392c6bd29..2fd165f1cffa 100644
---- a/arch/x86/include/asm/string_32.h
-+++ b/arch/x86/include/asm/string_32.h
-@@ -179,14 +179,7 @@ static inline void *__memcpy3d(void *to, const void *from, size_t len)
-  *	No 3D Now!
-  */
- 
--#if (__GNUC__ >= 4)
- #define memcpy(t, f, n) __builtin_memcpy(t, f, n)
--#else
--#define memcpy(t, f, n)				\
--	(__builtin_constant_p((n))		\
--	 ? __constant_memcpy((t), (f), (n))	\
--	 : __memcpy((t), (f), (n)))
--#endif
- 
- #endif
- #endif /* !CONFIG_FORTIFY_SOURCE */
-@@ -282,12 +275,7 @@ void *__constant_c_and_count_memset(void *s, unsigned long pattern,
- 
- 	{
- 		int d0, d1;
--#if __GNUC__ == 4 && __GNUC_MINOR__ == 0
--		/* Workaround for broken gcc 4.0 */
--		register unsigned long eax asm("%eax") = pattern;
--#else
- 		unsigned long eax = pattern;
--#endif
- 
- 		switch (count % 4) {
- 		case 0:
-@@ -321,15 +309,7 @@ void *__constant_c_and_count_memset(void *s, unsigned long pattern,
- #define __HAVE_ARCH_MEMSET
- extern void *memset(void *, int, size_t);
- #ifndef CONFIG_FORTIFY_SOURCE
--#if (__GNUC__ >= 4)
- #define memset(s, c, count) __builtin_memset(s, c, count)
--#else
--#define memset(s, c, count)						\
--	(__builtin_constant_p(c)					\
--	 ? __constant_c_x_memset((s), (0x01010101UL * (unsigned char)(c)), \
--				 (count))				\
--	 : __memset((s), (c), (count)))
--#endif
- #endif /* !CONFIG_FORTIFY_SOURCE */
- 
- #define __HAVE_ARCH_MEMSET16
-diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h
-index 4e4194e21a09..75314c3dbe47 100644
---- a/arch/x86/include/asm/string_64.h
-+++ b/arch/x86/include/asm/string_64.h
-@@ -14,21 +14,6 @@
- extern void *memcpy(void *to, const void *from, size_t len);
- extern void *__memcpy(void *to, const void *from, size_t len);
- 
--#ifndef CONFIG_FORTIFY_SOURCE
--#if (__GNUC__ == 4 && __GNUC_MINOR__ < 3) || __GNUC__ < 4
--#define memcpy(dst, src, len)					\
--({								\
--	size_t __len = (len);					\
--	void *__ret;						\
--	if (__builtin_constant_p(len) && __len >= 64)		\
--		__ret = __memcpy((dst), (src), __len);		\
--	else							\
--		__ret = __builtin_memcpy((dst), (src), __len);	\
--	__ret;							\
--})
--#endif
--#endif /* !CONFIG_FORTIFY_SOURCE */
--
- #define __HAVE_ARCH_MEMSET
- void *memset(void *s, int c, size_t n);
- void *__memset(void *s, int c, size_t n);
-diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
-index c1334aaaa78d..f3aed639dccd 100644
---- a/arch/x86/include/asm/uaccess.h
-+++ b/arch/x86/include/asm/uaccess.h
-@@ -76,7 +76,7 @@ static inline bool __chk_range_not_ok(unsigned long addr, unsigned long size, un
- #endif
- 
- /**
-- * access_ok: - Checks if a user space pointer is valid
-+ * access_ok - Checks if a user space pointer is valid
-  * @addr: User space pointer to start of block to check
-  * @size: Size of block to check
-  *
-@@ -85,12 +85,12 @@ static inline bool __chk_range_not_ok(unsigned long addr, unsigned long size, un
-  *
-  * Checks if a pointer to a block of memory in user space is valid.
-  *
-- * Returns true (nonzero) if the memory block may be valid, false (zero)
-- * if it is definitely invalid.
-- *
-  * Note that, depending on architecture, this function probably just
-  * checks that the pointer is in the user space range - after calling
-  * this function, memory access functions may still return -EFAULT.
-+ *
-+ * Return: true (nonzero) if the memory block may be valid, false (zero)
-+ * if it is definitely invalid.
-  */
- #define access_ok(addr, size)					\
- ({									\
-@@ -135,7 +135,7 @@ extern int __get_user_bad(void);
- __typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))
- 
- /**
-- * get_user: - Get a simple variable from user space.
-+ * get_user - Get a simple variable from user space.
-  * @x:   Variable to store result.
-  * @ptr: Source address, in user space.
-  *
-@@ -149,7 +149,7 @@ __typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))
-  * @ptr must have pointer-to-simple-variable type, and the result of
-  * dereferencing @ptr must be assignable to @x without a cast.
-  *
-- * Returns zero on success, or -EFAULT on error.
-+ * Return: zero on success, or -EFAULT on error.
-  * On error, the variable @x is set to zero.
-  */
- /*
-@@ -227,7 +227,7 @@ extern void __put_user_4(void);
- extern void __put_user_8(void);
- 
- /**
-- * put_user: - Write a simple value into user space.
-+ * put_user - Write a simple value into user space.
-  * @x:   Value to copy to user space.
-  * @ptr: Destination address, in user space.
-  *
-@@ -241,7 +241,7 @@ extern void __put_user_8(void);
-  * @ptr must have pointer-to-simple-variable type, and @x must be assignable
-  * to the result of dereferencing @ptr.
-  *
-- * Returns zero on success, or -EFAULT on error.
-+ * Return: zero on success, or -EFAULT on error.
-  */
- #define put_user(x, ptr)					\
- ({								\
-@@ -503,7 +503,7 @@ struct __large_struct { unsigned long buf[100]; };
- } while (0)
- 
- /**
-- * __get_user: - Get a simple variable from user space, with less checking.
-+ * __get_user - Get a simple variable from user space, with less checking.
-  * @x:   Variable to store result.
-  * @ptr: Source address, in user space.
-  *
-@@ -520,7 +520,7 @@ struct __large_struct { unsigned long buf[100]; };
-  * Caller must check the pointer with access_ok() before calling this
-  * function.
-  *
-- * Returns zero on success, or -EFAULT on error.
-+ * Return: zero on success, or -EFAULT on error.
-  * On error, the variable @x is set to zero.
-  */
- 
-@@ -528,7 +528,7 @@ struct __large_struct { unsigned long buf[100]; };
- 	__get_user_nocheck((x), (ptr), sizeof(*(ptr)))
- 
- /**
-- * __put_user: - Write a simple value into user space, with less checking.
-+ * __put_user - Write a simple value into user space, with less checking.
-  * @x:   Value to copy to user space.
-  * @ptr: Destination address, in user space.
-  *
-@@ -545,7 +545,7 @@ struct __large_struct { unsigned long buf[100]; };
-  * Caller must check the pointer with access_ok() before calling this
-  * function.
-  *
-- * Returns zero on success, or -EFAULT on error.
-+ * Return: zero on success, or -EFAULT on error.
-  */
- 
- #define __put_user(x, ptr)						\
-diff --git a/arch/x86/include/asm/unwind.h b/arch/x86/include/asm/unwind.h
-index 1f86e1b0a5cd..499578f7e6d7 100644
---- a/arch/x86/include/asm/unwind.h
-+++ b/arch/x86/include/asm/unwind.h
-@@ -23,6 +23,12 @@ struct unwind_state {
- #elif defined(CONFIG_UNWINDER_FRAME_POINTER)
- 	bool got_irq;
- 	unsigned long *bp, *orig_sp, ip;
-+	/*
-+	 * If non-NULL: The current frame is incomplete and doesn't contain a
-+	 * valid BP. When looking for the next frame, use this instead of the
-+	 * non-existent saved BP.
-+	 */
-+	unsigned long *next_bp;
- 	struct pt_regs *regs;
- #else
- 	unsigned long *sp;
-diff --git a/arch/x86/include/asm/xen/hypercall.h b/arch/x86/include/asm/xen/hypercall.h
-index ef05bea7010d..6b5c710846f5 100644
---- a/arch/x86/include/asm/xen/hypercall.h
-+++ b/arch/x86/include/asm/xen/hypercall.h
-@@ -206,6 +206,9 @@ xen_single_call(unsigned int call,
- 	__HYPERCALL_DECLS;
- 	__HYPERCALL_5ARG(a1, a2, a3, a4, a5);
- 
-+	if (call >= PAGE_SIZE / sizeof(hypercall_page[0]))
-+		return -EINVAL;
-+
- 	asm volatile(CALL_NOSPEC
- 		     : __HYPERCALL_5PARAM
- 		     : [thunk_target] "a" (&hypercall_page[call])
-diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
-index 69f6bbb41be0..01004bfb1a1b 100644
---- a/arch/x86/kernel/cpu/amd.c
-+++ b/arch/x86/kernel/cpu/amd.c
-@@ -819,11 +819,9 @@ static void init_amd_bd(struct cpuinfo_x86 *c)
- static void init_amd_zn(struct cpuinfo_x86 *c)
- {
- 	set_cpu_cap(c, X86_FEATURE_ZEN);
--	/*
--	 * Fix erratum 1076: CPB feature bit not being set in CPUID. It affects
--	 * all up to and including B1.
--	 */
--	if (c->x86_model <= 1 && c->x86_stepping <= 1)
-+
-+	/* Fix erratum 1076: CPB feature bit not being set in CPUID. */
-+	if (!cpu_has(c, X86_FEATURE_CPB))
- 		set_cpu_cap(c, X86_FEATURE_CPB);
- }
- 
-diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
-index 8257a59704ae..763d4264d16a 100644
---- a/arch/x86/kernel/ftrace.c
-+++ b/arch/x86/kernel/ftrace.c
-@@ -49,7 +49,7 @@ int ftrace_arch_code_modify_post_process(void)
- union ftrace_code_union {
- 	char code[MCOUNT_INSN_SIZE];
- 	struct {
--		unsigned char e8;
-+		unsigned char op;
- 		int offset;
- 	} __attribute__((packed));
- };
-@@ -59,20 +59,23 @@ static int ftrace_calc_offset(long ip, long addr)
- 	return (int)(addr - ip);
- }
- 
--static unsigned char *ftrace_call_replace(unsigned long ip, unsigned long addr)
-+static unsigned char *
-+ftrace_text_replace(unsigned char op, unsigned long ip, unsigned long addr)
- {
- 	static union ftrace_code_union calc;
- 
--	calc.e8		= 0xe8;
-+	calc.op		= op;
- 	calc.offset	= ftrace_calc_offset(ip + MCOUNT_INSN_SIZE, addr);
- 
--	/*
--	 * No locking needed, this must be called via kstop_machine
--	 * which in essence is like running on a uniprocessor machine.
--	 */
- 	return calc.code;
- }
- 
-+static unsigned char *
-+ftrace_call_replace(unsigned long ip, unsigned long addr)
-+{
-+	return ftrace_text_replace(0xe8, ip, addr);
-+}
-+
- static inline int
- within(unsigned long addr, unsigned long start, unsigned long end)
- {
-@@ -664,22 +667,6 @@ int __init ftrace_dyn_arch_init(void)
- 	return 0;
- }
- 
--#if defined(CONFIG_X86_64) || defined(CONFIG_FUNCTION_GRAPH_TRACER)
--static unsigned char *ftrace_jmp_replace(unsigned long ip, unsigned long addr)
--{
--	static union ftrace_code_union calc;
--
--	/* Jmp not a call (ignore the .e8) */
--	calc.e8		= 0xe9;
--	calc.offset	= ftrace_calc_offset(ip + MCOUNT_INSN_SIZE, addr);
--
--	/*
--	 * ftrace external locks synchronize the access to the static variable.
--	 */
--	return calc.code;
--}
--#endif
--
- /* Currently only x86_64 supports dynamic trampolines */
- #ifdef CONFIG_X86_64
- 
-@@ -891,8 +878,8 @@ static void *addr_from_call(void *ptr)
- 		return NULL;
- 
- 	/* Make sure this is a call */
--	if (WARN_ON_ONCE(calc.e8 != 0xe8)) {
--		pr_warn("Expected e8, got %x\n", calc.e8);
-+	if (WARN_ON_ONCE(calc.op != 0xe8)) {
-+		pr_warn("Expected e8, got %x\n", calc.op);
- 		return NULL;
- 	}
- 
-@@ -963,6 +950,11 @@ void arch_ftrace_trampoline_free(struct ftrace_ops *ops)
- #ifdef CONFIG_DYNAMIC_FTRACE
- extern void ftrace_graph_call(void);
- 
-+static unsigned char *ftrace_jmp_replace(unsigned long ip, unsigned long addr)
-+{
-+	return ftrace_text_replace(0xe9, ip, addr);
-+}
-+
- static int ftrace_mod_jmp(unsigned long ip, void *func)
- {
- 	unsigned char *new;
-diff --git a/arch/x86/kernel/kexec-bzimage64.c b/arch/x86/kernel/kexec-bzimage64.c
-index 53917a3ebf94..1f3b77367948 100644
---- a/arch/x86/kernel/kexec-bzimage64.c
-+++ b/arch/x86/kernel/kexec-bzimage64.c
-@@ -218,6 +218,9 @@ setup_boot_parameters(struct kimage *image, struct boot_params *params,
- 	params->screen_info.ext_mem_k = 0;
- 	params->alt_mem_k = 0;
- 
-+	/* Always fill in RSDP: it is either 0 or a valid value */
-+	params->acpi_rsdp_addr = boot_params.acpi_rsdp_addr;
-+
- 	/* Default APM info */
- 	memset(&params->apm_bios_info, 0, sizeof(params->apm_bios_info));
- 
-@@ -256,7 +259,6 @@ setup_boot_parameters(struct kimage *image, struct boot_params *params,
- 	setup_efi_state(params, params_load_addr, efi_map_offset, efi_map_sz,
- 			efi_setup_data_offset);
- #endif
--
- 	/* Setup EDD info */
- 	memcpy(params->eddbuf, boot_params.eddbuf,
- 				EDDMAXNR * sizeof(struct edd_info));
-diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
-index 6adf6e6c2933..544bd41a514c 100644
---- a/arch/x86/kernel/kprobes/opt.c
-+++ b/arch/x86/kernel/kprobes/opt.c
-@@ -141,6 +141,11 @@ asm (
- 
- void optprobe_template_func(void);
- STACK_FRAME_NON_STANDARD(optprobe_template_func);
-+NOKPROBE_SYMBOL(optprobe_template_func);
-+NOKPROBE_SYMBOL(optprobe_template_entry);
-+NOKPROBE_SYMBOL(optprobe_template_val);
-+NOKPROBE_SYMBOL(optprobe_template_call);
-+NOKPROBE_SYMBOL(optprobe_template_end);
- 
- #define TMPL_MOVE_IDX \
- 	((long)optprobe_template_val - (long)optprobe_template_entry)
-diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
-index e811d4d1c824..d908a37bf3f3 100644
---- a/arch/x86/kernel/kvmclock.c
-+++ b/arch/x86/kernel/kvmclock.c
-@@ -104,12 +104,8 @@ static u64 kvm_sched_clock_read(void)
- 
- static inline void kvm_sched_clock_init(bool stable)
- {
--	if (!stable) {
--		pv_ops.time.sched_clock = kvm_clock_read;
-+	if (!stable)
- 		clear_sched_clock_stable();
--		return;
--	}
--
- 	kvm_sched_clock_offset = kvm_clock_read();
- 	pv_ops.time.sched_clock = kvm_sched_clock_read;
- 
-diff --git a/arch/x86/kernel/unwind_frame.c b/arch/x86/kernel/unwind_frame.c
-index 3dc26f95d46e..9b9fd4826e7a 100644
---- a/arch/x86/kernel/unwind_frame.c
-+++ b/arch/x86/kernel/unwind_frame.c
-@@ -320,10 +320,14 @@ bool unwind_next_frame(struct unwind_state *state)
- 	}
- 
- 	/* Get the next frame pointer: */
--	if (state->regs)
-+	if (state->next_bp) {
-+		next_bp = state->next_bp;
-+		state->next_bp = NULL;
-+	} else if (state->regs) {
- 		next_bp = (unsigned long *)state->regs->bp;
--	else
-+	} else {
- 		next_bp = (unsigned long *)READ_ONCE_TASK_STACK(state->task, *state->bp);
-+	}
- 
- 	/* Move to the next frame if it's safe: */
- 	if (!update_stack_state(state, next_bp))
-@@ -398,6 +402,21 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task,
- 
- 	bp = get_frame_pointer(task, regs);
- 
-+	/*
-+	 * If we crash with IP==0, the last successfully executed instruction
-+	 * was probably an indirect function call with a NULL function pointer.
-+	 * That means that SP points into the middle of an incomplete frame:
-+	 * *SP is a return pointer, and *(SP-sizeof(unsigned long)) is where we
-+	 * would have written a frame pointer if we hadn't crashed.
-+	 * Pretend that the frame is complete and that BP points to it, but save
-+	 * the real BP so that we can use it when looking for the next frame.
-+	 */
-+	if (regs && regs->ip == 0 &&
-+	    (unsigned long *)kernel_stack_pointer(regs) >= first_frame) {
-+		state->next_bp = bp;
-+		bp = ((unsigned long *)kernel_stack_pointer(regs)) - 1;
-+	}
-+
- 	/* Initialize stack info and make sure the frame data is accessible: */
- 	get_stack_info(bp, state->task, &state->stack_info,
- 		       &state->stack_mask);
-@@ -410,7 +429,7 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task,
- 	 */
- 	while (!unwind_done(state) &&
- 	       (!on_stack(&state->stack_info, first_frame, sizeof(long)) ||
--			state->bp < first_frame))
-+			(state->next_bp == NULL && state->bp < first_frame)))
- 		unwind_next_frame(state);
- }
- EXPORT_SYMBOL_GPL(__unwind_start);
-diff --git a/arch/x86/kernel/unwind_orc.c b/arch/x86/kernel/unwind_orc.c
-index 26038eacf74a..89be1be1790c 100644
---- a/arch/x86/kernel/unwind_orc.c
-+++ b/arch/x86/kernel/unwind_orc.c
-@@ -113,6 +113,20 @@ static struct orc_entry *orc_ftrace_find(unsigned long ip)
- }
- #endif
- 
-+/*
-+ * If we crash with IP==0, the last successfully executed instruction
-+ * was probably an indirect function call with a NULL function pointer,
-+ * and we don't have unwind information for NULL.
-+ * This hardcoded ORC entry for IP==0 allows us to unwind from a NULL function
-+ * pointer into its parent and then continue normally from there.
-+ */
-+static struct orc_entry null_orc_entry = {
-+	.sp_offset = sizeof(long),
-+	.sp_reg = ORC_REG_SP,
-+	.bp_reg = ORC_REG_UNDEFINED,
-+	.type = ORC_TYPE_CALL
-+};
-+
- static struct orc_entry *orc_find(unsigned long ip)
- {
- 	static struct orc_entry *orc;
-@@ -120,6 +134,9 @@ static struct orc_entry *orc_find(unsigned long ip)
- 	if (!orc_init)
- 		return NULL;
- 
-+	if (ip == 0)
-+		return &null_orc_entry;
-+
- 	/* For non-init vmlinux addresses, use the fast lookup table: */
- 	if (ip >= LOOKUP_START_IP && ip < LOOKUP_STOP_IP) {
- 		unsigned int idx, start, stop;
-diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
-index 0d618ee634ac..ee3b5c7d662e 100644
---- a/arch/x86/kernel/vmlinux.lds.S
-+++ b/arch/x86/kernel/vmlinux.lds.S
-@@ -401,7 +401,7 @@ SECTIONS
-  * Per-cpu symbols which need to be offset from __per_cpu_load
-  * for the boot processor.
-  */
--#define INIT_PER_CPU(x) init_per_cpu__##x = x + __per_cpu_load
-+#define INIT_PER_CPU(x) init_per_cpu__##x = ABSOLUTE(x) + __per_cpu_load
- INIT_PER_CPU(gdt_page);
- INIT_PER_CPU(irq_stack_union);
- 
-diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
-index f2d1d230d5b8..9ab33cab9486 100644
---- a/arch/x86/kvm/mmu.c
-+++ b/arch/x86/kvm/mmu.c
-@@ -5635,13 +5635,8 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
- {
- 	struct kvm_memslots *slots;
- 	struct kvm_memory_slot *memslot;
--	bool flush_tlb = true;
--	bool flush = false;
- 	int i;
- 
--	if (kvm_available_flush_tlb_with_range())
--		flush_tlb = false;
--
- 	spin_lock(&kvm->mmu_lock);
- 	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
- 		slots = __kvm_memslots(kvm, i);
-@@ -5653,17 +5648,12 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
- 			if (start >= end)
- 				continue;
- 
--			flush |= slot_handle_level_range(kvm, memslot,
--					kvm_zap_rmapp, PT_PAGE_TABLE_LEVEL,
--					PT_MAX_HUGEPAGE_LEVEL, start,
--					end - 1, flush_tlb);
-+			slot_handle_level_range(kvm, memslot, kvm_zap_rmapp,
-+						PT_PAGE_TABLE_LEVEL, PT_MAX_HUGEPAGE_LEVEL,
-+						start, end - 1, true);
- 		}
- 	}
- 
--	if (flush)
--		kvm_flush_remote_tlbs_with_address(kvm, gfn_start,
--				gfn_end - gfn_start + 1);
--
- 	spin_unlock(&kvm->mmu_lock);
- }
- 
-@@ -5901,13 +5891,30 @@ static bool kvm_has_zapped_obsolete_pages(struct kvm *kvm)
- 	return unlikely(!list_empty_careful(&kvm->arch.zapped_obsolete_pages));
- }
- 
--void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, struct kvm_memslots *slots)
-+void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen)
- {
-+	gen &= MMIO_GEN_MASK;
-+
-+	/*
-+	 * Shift to eliminate the "update in-progress" flag, which isn't
-+	 * included in the spte's generation number.
-+	 */
-+	gen >>= 1;
-+
-+	/*
-+	 * Generation numbers are incremented in multiples of the number of
-+	 * address spaces in order to provide unique generations across all
-+	 * address spaces.  Strip what is effectively the address space
-+	 * modifier prior to checking for a wrap of the MMIO generation so
-+	 * that a wrap in any address space is detected.
-+	 */
-+	gen &= ~((u64)KVM_ADDRESS_SPACE_NUM - 1);
-+
- 	/*
--	 * The very rare case: if the generation-number is round,
-+	 * The very rare case: if the MMIO generation number has wrapped,
- 	 * zap all shadow pages.
- 	 */
--	if (unlikely((slots->generation & MMIO_GEN_MASK) == 0)) {
-+	if (unlikely(gen == 0)) {
- 		kvm_debug_ratelimited("kvm: zapping shadow pages for mmio generation wraparound\n");
- 		kvm_mmu_invalidate_zap_all_pages(kvm);
- 	}
-diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
-index f13a3a24d360..a9b8e38d78ad 100644
---- a/arch/x86/kvm/svm.c
-+++ b/arch/x86/kvm/svm.c
-@@ -6422,11 +6422,11 @@ e_free:
- 	return ret;
- }
- 
--static int get_num_contig_pages(int idx, struct page **inpages,
--				unsigned long npages)
-+static unsigned long get_num_contig_pages(unsigned long idx,
-+				struct page **inpages, unsigned long npages)
- {
- 	unsigned long paddr, next_paddr;
--	int i = idx + 1, pages = 1;
-+	unsigned long i = idx + 1, pages = 1;
- 
- 	/* find the number of contiguous pages starting from idx */
- 	paddr = __sme_page_pa(inpages[idx]);
-@@ -6445,12 +6445,12 @@ static int get_num_contig_pages(int idx, struct page **inpages,
- 
- static int sev_launch_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
- {
--	unsigned long vaddr, vaddr_end, next_vaddr, npages, size;
-+	unsigned long vaddr, vaddr_end, next_vaddr, npages, pages, size, i;
- 	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
- 	struct kvm_sev_launch_update_data params;
- 	struct sev_data_launch_update_data *data;
- 	struct page **inpages;
--	int i, ret, pages;
-+	int ret;
- 
- 	if (!sev_guest(kvm))
- 		return -ENOTTY;
-diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
-index d737a51a53ca..f90b3a948291 100644
---- a/arch/x86/kvm/vmx/nested.c
-+++ b/arch/x86/kvm/vmx/nested.c
-@@ -500,6 +500,17 @@ static void nested_vmx_disable_intercept_for_msr(unsigned long *msr_bitmap_l1,
- 	}
- }
- 
-+static inline void enable_x2apic_msr_intercepts(unsigned long *msr_bitmap) {
-+	int msr;
-+
-+	for (msr = 0x800; msr <= 0x8ff; msr += BITS_PER_LONG) {
-+		unsigned word = msr / BITS_PER_LONG;
-+
-+		msr_bitmap[word] = ~0;
-+		msr_bitmap[word + (0x800 / sizeof(long))] = ~0;
-+	}
-+}
-+
- /*
-  * Merge L0's and L1's MSR bitmap, return false to indicate that
-  * we do not use the hardware.
-@@ -541,39 +552,44 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
- 		return false;
- 
- 	msr_bitmap_l1 = (unsigned long *)kmap(page);
--	if (nested_cpu_has_apic_reg_virt(vmcs12)) {
--		/*
--		 * L0 need not intercept reads for MSRs between 0x800 and 0x8ff, it
--		 * just lets the processor take the value from the virtual-APIC page;
--		 * take those 256 bits directly from the L1 bitmap.
--		 */
--		for (msr = 0x800; msr <= 0x8ff; msr += BITS_PER_LONG) {
--			unsigned word = msr / BITS_PER_LONG;
--			msr_bitmap_l0[word] = msr_bitmap_l1[word];
--			msr_bitmap_l0[word + (0x800 / sizeof(long))] = ~0;
--		}
--	} else {
--		for (msr = 0x800; msr <= 0x8ff; msr += BITS_PER_LONG) {
--			unsigned word = msr / BITS_PER_LONG;
--			msr_bitmap_l0[word] = ~0;
--			msr_bitmap_l0[word + (0x800 / sizeof(long))] = ~0;
--		}
--	}
- 
--	nested_vmx_disable_intercept_for_msr(
--		msr_bitmap_l1, msr_bitmap_l0,
--		X2APIC_MSR(APIC_TASKPRI),
--		MSR_TYPE_W);
-+	/*
-+	 * To keep the control flow simple, pay eight 8-byte writes (sixteen
-+	 * 4-byte writes on 32-bit systems) up front to enable intercepts for
-+	 * the x2APIC MSR range and selectively disable them below.
-+	 */
-+	enable_x2apic_msr_intercepts(msr_bitmap_l0);
-+
-+	if (nested_cpu_has_virt_x2apic_mode(vmcs12)) {
-+		if (nested_cpu_has_apic_reg_virt(vmcs12)) {
-+			/*
-+			 * L0 need not intercept reads for MSRs between 0x800
-+			 * and 0x8ff, it just lets the processor take the value
-+			 * from the virtual-APIC page; take those 256 bits
-+			 * directly from the L1 bitmap.
-+			 */
-+			for (msr = 0x800; msr <= 0x8ff; msr += BITS_PER_LONG) {
-+				unsigned word = msr / BITS_PER_LONG;
-+
-+				msr_bitmap_l0[word] = msr_bitmap_l1[word];
-+			}
-+		}
- 
--	if (nested_cpu_has_vid(vmcs12)) {
--		nested_vmx_disable_intercept_for_msr(
--			msr_bitmap_l1, msr_bitmap_l0,
--			X2APIC_MSR(APIC_EOI),
--			MSR_TYPE_W);
- 		nested_vmx_disable_intercept_for_msr(
- 			msr_bitmap_l1, msr_bitmap_l0,
--			X2APIC_MSR(APIC_SELF_IPI),
--			MSR_TYPE_W);
-+			X2APIC_MSR(APIC_TASKPRI),
-+			MSR_TYPE_R | MSR_TYPE_W);
-+
-+		if (nested_cpu_has_vid(vmcs12)) {
-+			nested_vmx_disable_intercept_for_msr(
-+				msr_bitmap_l1, msr_bitmap_l0,
-+				X2APIC_MSR(APIC_EOI),
-+				MSR_TYPE_W);
-+			nested_vmx_disable_intercept_for_msr(
-+				msr_bitmap_l1, msr_bitmap_l0,
-+				X2APIC_MSR(APIC_SELF_IPI),
-+				MSR_TYPE_W);
-+		}
- 	}
- 
- 	if (spec_ctrl)
-@@ -2765,7 +2781,7 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
- 		"add $%c[wordsize], %%" _ASM_SP "\n\t" /* un-adjust RSP */
- 
- 		/* Check if vmlaunch or vmresume is needed */
--		"cmpl $0, %c[launched](%% " _ASM_CX")\n\t"
-+		"cmpb $0, %c[launched](%% " _ASM_CX")\n\t"
- 
- 		"call vmx_vmenter\n\t"
- 
-@@ -4035,25 +4051,50 @@ int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsigned long exit_qualification,
- 	/* Addr = segment_base + offset */
- 	/* offset = base + [index * scale] + displacement */
- 	off = exit_qualification; /* holds the displacement */
-+	if (addr_size == 1)
-+		off = (gva_t)sign_extend64(off, 31);
-+	else if (addr_size == 0)
-+		off = (gva_t)sign_extend64(off, 15);
- 	if (base_is_valid)
- 		off += kvm_register_read(vcpu, base_reg);
- 	if (index_is_valid)
- 		off += kvm_register_read(vcpu, index_reg)<<scaling;
- 	vmx_get_segment(vcpu, &s, seg_reg);
--	*ret = s.base + off;
- 
-+	/*
-+	 * The effective address, i.e. @off, of a memory operand is truncated
-+	 * based on the address size of the instruction.  Note that this is
-+	 * the *effective address*, i.e. the address prior to accounting for
-+	 * the segment's base.
-+	 */
- 	if (addr_size == 1) /* 32 bit */
--		*ret &= 0xffffffff;
-+		off &= 0xffffffff;
-+	else if (addr_size == 0) /* 16 bit */
-+		off &= 0xffff;
- 
- 	/* Checks for #GP/#SS exceptions. */
- 	exn = false;
- 	if (is_long_mode(vcpu)) {
-+		/*
-+		 * The virtual/linear address is never truncated in 64-bit
-+		 * mode, e.g. a 32-bit address size can yield a 64-bit virtual
-+		 * address when using FS/GS with a non-zero base.
-+		 */
-+		*ret = s.base + off;
-+
- 		/* Long mode: #GP(0)/#SS(0) if the memory address is in a
- 		 * non-canonical form. This is the only check on the memory
- 		 * destination for long mode!
- 		 */
- 		exn = is_noncanonical_address(*ret, vcpu);
- 	} else if (is_protmode(vcpu)) {
-+		/*
-+		 * When not in long mode, the virtual/linear address is
-+		 * unconditionally truncated to 32 bits regardless of the
-+		 * address size.
-+		 */
-+		*ret = (s.base + off) & 0xffffffff;
-+
- 		/* Protected mode: apply checks for segment validity in the
- 		 * following order:
- 		 * - segment type check (#GP(0) may be thrown)
-@@ -4077,10 +4118,16 @@ int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsigned long exit_qualification,
- 		/* Protected mode: #GP(0)/#SS(0) if the segment is unusable.
- 		 */
- 		exn = (s.unusable != 0);
--		/* Protected mode: #GP(0)/#SS(0) if the memory
--		 * operand is outside the segment limit.
-+
-+		/*
-+		 * Protected mode: #GP(0)/#SS(0) if the memory operand is
-+		 * outside the segment limit.  All CPUs that support VMX ignore
-+		 * limit checks for flat segments, i.e. segments with base==0,
-+		 * limit==0xffffffff and of type expand-up data or code.
- 		 */
--		exn = exn || (off + sizeof(u64) > s.limit);
-+		if (!(s.base == 0 && s.limit == 0xffffffff &&
-+		     ((s.type & 8) || !(s.type & 4))))
-+			exn = exn || (off + sizeof(u64) > s.limit);
- 	}
- 	if (exn) {
- 		kvm_queue_exception_e(vcpu,
-diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
-index 30a6bcd735ec..a0a770816429 100644
---- a/arch/x86/kvm/vmx/vmx.c
-+++ b/arch/x86/kvm/vmx/vmx.c
-@@ -1679,12 +1679,6 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
- 
- 		msr_info->data = to_vmx(vcpu)->spec_ctrl;
- 		break;
--	case MSR_IA32_ARCH_CAPABILITIES:
--		if (!msr_info->host_initiated &&
--		    !guest_cpuid_has(vcpu, X86_FEATURE_ARCH_CAPABILITIES))
--			return 1;
--		msr_info->data = to_vmx(vcpu)->arch_capabilities;
--		break;
- 	case MSR_IA32_SYSENTER_CS:
- 		msr_info->data = vmcs_read32(GUEST_SYSENTER_CS);
- 		break;
-@@ -1891,11 +1885,6 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
- 		vmx_disable_intercept_for_msr(vmx->vmcs01.msr_bitmap, MSR_IA32_PRED_CMD,
- 					      MSR_TYPE_W);
- 		break;
--	case MSR_IA32_ARCH_CAPABILITIES:
--		if (!msr_info->host_initiated)
--			return 1;
--		vmx->arch_capabilities = data;
--		break;
- 	case MSR_IA32_CR_PAT:
- 		if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT) {
- 			if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data))
-@@ -4083,8 +4072,6 @@ static void vmx_vcpu_setup(struct vcpu_vmx *vmx)
- 		++vmx->nmsrs;
- 	}
- 
--	vmx->arch_capabilities = kvm_get_arch_capabilities();
--
- 	vm_exit_controls_init(vmx, vmx_vmexit_ctrl());
- 
- 	/* 22.2.1, 20.8.1 */
-@@ -6399,7 +6386,7 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
- 		"mov %%" _ASM_AX", %%cr2 \n\t"
- 		"3: \n\t"
- 		/* Check if vmlaunch or vmresume is needed */
--		"cmpl $0, %c[launched](%%" _ASM_CX ") \n\t"
-+		"cmpb $0, %c[launched](%%" _ASM_CX ") \n\t"
- 		/* Load guest registers.  Don't clobber flags. */
- 		"mov %c[rax](%%" _ASM_CX "), %%" _ASM_AX " \n\t"
- 		"mov %c[rbx](%%" _ASM_CX "), %%" _ASM_BX " \n\t"
-@@ -6449,10 +6436,15 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
- 		"mov %%r13, %c[r13](%%" _ASM_CX ") \n\t"
- 		"mov %%r14, %c[r14](%%" _ASM_CX ") \n\t"
- 		"mov %%r15, %c[r15](%%" _ASM_CX ") \n\t"
-+
- 		/*
--		* Clear host registers marked as clobbered to prevent
--		* speculative use.
--		*/
-+		 * Clear all general purpose registers (except RSP, which is loaded by
-+		 * the CPU during VM-Exit) to prevent speculative use of the guest's
-+		 * values, even those that are saved/loaded via the stack.  In theory,
-+		 * an L1 cache miss when restoring registers could lead to speculative
-+		 * execution with the guest's values.  Zeroing XORs are dirt cheap,
-+		 * i.e. the extra paranoia is essentially free.
-+		 */
- 		"xor %%r8d,  %%r8d \n\t"
- 		"xor %%r9d,  %%r9d \n\t"
- 		"xor %%r10d, %%r10d \n\t"
-@@ -6467,8 +6459,11 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
- 
- 		"xor %%eax, %%eax \n\t"
- 		"xor %%ebx, %%ebx \n\t"
-+		"xor %%ecx, %%ecx \n\t"
-+		"xor %%edx, %%edx \n\t"
- 		"xor %%esi, %%esi \n\t"
- 		"xor %%edi, %%edi \n\t"
-+		"xor %%ebp, %%ebp \n\t"
- 		"pop  %%" _ASM_BP "; pop  %%" _ASM_DX " \n\t"
- 	      : ASM_CALL_CONSTRAINT
- 	      : "c"(vmx), "d"((unsigned long)HOST_RSP), "S"(evmcs_rsp),
-diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
-index 0ac0a64c7790..1abae731c3e4 100644
---- a/arch/x86/kvm/vmx/vmx.h
-+++ b/arch/x86/kvm/vmx/vmx.h
-@@ -191,7 +191,6 @@ struct vcpu_vmx {
- 	u64		      msr_guest_kernel_gs_base;
- #endif
- 
--	u64		      arch_capabilities;
- 	u64		      spec_ctrl;
- 
- 	u32 vm_entry_controls_shadow;
-diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
-index 941f932373d0..7ee802a92bc8 100644
---- a/arch/x86/kvm/x86.c
-+++ b/arch/x86/kvm/x86.c
-@@ -2443,6 +2443,11 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
- 		if (msr_info->host_initiated)
- 			vcpu->arch.microcode_version = data;
- 		break;
-+	case MSR_IA32_ARCH_CAPABILITIES:
-+		if (!msr_info->host_initiated)
-+			return 1;
-+		vcpu->arch.arch_capabilities = data;
-+		break;
- 	case MSR_EFER:
- 		return set_efer(vcpu, data);
- 	case MSR_K7_HWCR:
-@@ -2747,6 +2752,12 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
- 	case MSR_IA32_UCODE_REV:
- 		msr_info->data = vcpu->arch.microcode_version;
- 		break;
-+	case MSR_IA32_ARCH_CAPABILITIES:
-+		if (!msr_info->host_initiated &&
-+		    !guest_cpuid_has(vcpu, X86_FEATURE_ARCH_CAPABILITIES))
-+			return 1;
-+		msr_info->data = vcpu->arch.arch_capabilities;
-+		break;
- 	case MSR_IA32_TSC:
- 		msr_info->data = kvm_scale_tsc(vcpu, rdtsc()) + vcpu->arch.tsc_offset;
- 		break;
-@@ -6522,14 +6533,27 @@ int kvm_emulate_instruction_from_buffer(struct kvm_vcpu *vcpu,
- }
- EXPORT_SYMBOL_GPL(kvm_emulate_instruction_from_buffer);
- 
-+static int complete_fast_pio_out(struct kvm_vcpu *vcpu)
-+{
-+	vcpu->arch.pio.count = 0;
-+
-+	if (unlikely(!kvm_is_linear_rip(vcpu, vcpu->arch.pio.linear_rip)))
-+		return 1;
-+
-+	return kvm_skip_emulated_instruction(vcpu);
-+}
-+
- static int kvm_fast_pio_out(struct kvm_vcpu *vcpu, int size,
- 			    unsigned short port)
- {
- 	unsigned long val = kvm_register_read(vcpu, VCPU_REGS_RAX);
- 	int ret = emulator_pio_out_emulated(&vcpu->arch.emulate_ctxt,
- 					    size, port, &val, 1);
--	/* do not return to emulator after return from userspace */
--	vcpu->arch.pio.count = 0;
-+
-+	if (!ret) {
-+		vcpu->arch.pio.linear_rip = kvm_get_linear_rip(vcpu);
-+		vcpu->arch.complete_userspace_io = complete_fast_pio_out;
-+	}
- 	return ret;
- }
- 
-@@ -6540,6 +6564,11 @@ static int complete_fast_pio_in(struct kvm_vcpu *vcpu)
- 	/* We should only ever be called with arch.pio.count equal to 1 */
- 	BUG_ON(vcpu->arch.pio.count != 1);
- 
-+	if (unlikely(!kvm_is_linear_rip(vcpu, vcpu->arch.pio.linear_rip))) {
-+		vcpu->arch.pio.count = 0;
-+		return 1;
-+	}
-+
- 	/* For size less than 4 we merge, else we zero extend */
- 	val = (vcpu->arch.pio.size < 4) ? kvm_register_read(vcpu, VCPU_REGS_RAX)
- 					: 0;
-@@ -6552,7 +6581,7 @@ static int complete_fast_pio_in(struct kvm_vcpu *vcpu)
- 				 vcpu->arch.pio.port, &val, 1);
- 	kvm_register_write(vcpu, VCPU_REGS_RAX, val);
- 
--	return 1;
-+	return kvm_skip_emulated_instruction(vcpu);
- }
- 
- static int kvm_fast_pio_in(struct kvm_vcpu *vcpu, int size,
-@@ -6571,6 +6600,7 @@ static int kvm_fast_pio_in(struct kvm_vcpu *vcpu, int size,
- 		return ret;
- 	}
- 
-+	vcpu->arch.pio.linear_rip = kvm_get_linear_rip(vcpu);
- 	vcpu->arch.complete_userspace_io = complete_fast_pio_in;
- 
- 	return 0;
-@@ -6578,16 +6608,13 @@ static int kvm_fast_pio_in(struct kvm_vcpu *vcpu, int size,
- 
- int kvm_fast_pio(struct kvm_vcpu *vcpu, int size, unsigned short port, int in)
- {
--	int ret = kvm_skip_emulated_instruction(vcpu);
-+	int ret;
- 
--	/*
--	 * TODO: we might be squashing a KVM_GUESTDBG_SINGLESTEP-triggered
--	 * KVM_EXIT_DEBUG here.
--	 */
- 	if (in)
--		return kvm_fast_pio_in(vcpu, size, port) && ret;
-+		ret = kvm_fast_pio_in(vcpu, size, port);
- 	else
--		return kvm_fast_pio_out(vcpu, size, port) && ret;
-+		ret = kvm_fast_pio_out(vcpu, size, port);
-+	return ret && kvm_skip_emulated_instruction(vcpu);
- }
- EXPORT_SYMBOL_GPL(kvm_fast_pio);
- 
-@@ -8725,6 +8752,7 @@ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm,
- 
- int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
- {
-+	vcpu->arch.arch_capabilities = kvm_get_arch_capabilities();
- 	vcpu->arch.msr_platform_info = MSR_PLATFORM_INFO_CPUID_FAULT;
- 	kvm_vcpu_mtrr_init(vcpu);
- 	vcpu_load(vcpu);
-@@ -9348,13 +9376,13 @@ out_free:
- 	return -ENOMEM;
- }
- 
--void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots)
-+void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen)
- {
- 	/*
- 	 * memslots->generation has been incremented.
- 	 * mmio generation may have reached its maximum value.
- 	 */
--	kvm_mmu_invalidate_mmio_sptes(kvm, slots);
-+	kvm_mmu_invalidate_mmio_sptes(kvm, gen);
- }
- 
- int kvm_arch_prepare_memory_region(struct kvm *kvm,
-diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
-index 224cd0a47568..20ede17202bf 100644
---- a/arch/x86/kvm/x86.h
-+++ b/arch/x86/kvm/x86.h
-@@ -181,6 +181,11 @@ static inline bool emul_is_noncanonical_address(u64 la,
- static inline void vcpu_cache_mmio_info(struct kvm_vcpu *vcpu,
- 					gva_t gva, gfn_t gfn, unsigned access)
- {
-+	u64 gen = kvm_memslots(vcpu->kvm)->generation;
-+
-+	if (unlikely(gen & 1))
-+		return;
-+
- 	/*
- 	 * If this is a shadow nested page table, the "GVA" is
- 	 * actually a nGPA.
-@@ -188,7 +193,7 @@ static inline void vcpu_cache_mmio_info(struct kvm_vcpu *vcpu,
- 	vcpu->arch.mmio_gva = mmu_is_nested(vcpu) ? 0 : gva & PAGE_MASK;
- 	vcpu->arch.access = access;
- 	vcpu->arch.mmio_gfn = gfn;
--	vcpu->arch.mmio_gen = kvm_memslots(vcpu->kvm)->generation;
-+	vcpu->arch.mmio_gen = gen;
- }
- 
- static inline bool vcpu_match_mmio_gen(struct kvm_vcpu *vcpu)
-diff --git a/arch/x86/lib/usercopy_32.c b/arch/x86/lib/usercopy_32.c
-index bfd94e7812fc..7d290777246d 100644
---- a/arch/x86/lib/usercopy_32.c
-+++ b/arch/x86/lib/usercopy_32.c
-@@ -54,13 +54,13 @@ do {									\
- } while (0)
- 
- /**
-- * clear_user: - Zero a block of memory in user space.
-+ * clear_user - Zero a block of memory in user space.
-  * @to:   Destination address, in user space.
-  * @n:    Number of bytes to zero.
-  *
-  * Zero a block of memory in user space.
-  *
-- * Returns number of bytes that could not be cleared.
-+ * Return: number of bytes that could not be cleared.
-  * On success, this will be zero.
-  */
- unsigned long
-@@ -74,14 +74,14 @@ clear_user(void __user *to, unsigned long n)
- EXPORT_SYMBOL(clear_user);
- 
- /**
-- * __clear_user: - Zero a block of memory in user space, with less checking.
-+ * __clear_user - Zero a block of memory in user space, with less checking.
-  * @to:   Destination address, in user space.
-  * @n:    Number of bytes to zero.
-  *
-  * Zero a block of memory in user space.  Caller must check
-  * the specified block with access_ok() before calling this function.
-  *
-- * Returns number of bytes that could not be cleared.
-+ * Return: number of bytes that could not be cleared.
-  * On success, this will be zero.
-  */
- unsigned long
-diff --git a/arch/x86/pci/fixup.c b/arch/x86/pci/fixup.c
-index 30a5111ae5fd..527e69b12002 100644
---- a/arch/x86/pci/fixup.c
-+++ b/arch/x86/pci/fixup.c
-@@ -635,6 +635,22 @@ static void quirk_no_aersid(struct pci_dev *pdev)
- DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_INTEL, PCI_ANY_ID,
- 			      PCI_CLASS_BRIDGE_PCI, 8, quirk_no_aersid);
- 
-+static void quirk_intel_th_dnv(struct pci_dev *dev)
-+{
-+	struct resource *r = &dev->resource[4];
-+
-+	/*
-+	 * Denverton reports 2k of RTIT_BAR (intel_th resource 4), which
-+	 * appears to be 4 MB in reality.
-+	 */
-+	if (r->end == r->start + 0x7ff) {
-+		r->start = 0;
-+		r->end   = 0x3fffff;
-+		r->flags |= IORESOURCE_UNSET;
-+	}
-+}
-+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x19e1, quirk_intel_th_dnv);
-+
- #ifdef CONFIG_PHYS_ADDR_T_64BIT
- 
- #define AMD_141b_MMIO_BASE(x)	(0x80 + (x) * 0x8)
-diff --git a/arch/x86/platform/efi/quirks.c b/arch/x86/platform/efi/quirks.c
-index 17456a1d3f04..6c571ae86947 100644
---- a/arch/x86/platform/efi/quirks.c
-+++ b/arch/x86/platform/efi/quirks.c
-@@ -717,7 +717,7 @@ void efi_recover_from_page_fault(unsigned long phys_addr)
- 	 * "efi_mm" cannot be used to check if the page fault had occurred
- 	 * in the firmware context because efi=old_map doesn't use efi_pgd.
- 	 */
--	if (efi_rts_work.efi_rts_id == NONE)
-+	if (efi_rts_work.efi_rts_id == EFI_NONE)
- 		return;
- 
- 	/*
-@@ -742,7 +742,7 @@ void efi_recover_from_page_fault(unsigned long phys_addr)
- 	 * because this case occurs *very* rarely and hence could be improved
- 	 * on a need by basis.
- 	 */
--	if (efi_rts_work.efi_rts_id == RESET_SYSTEM) {
-+	if (efi_rts_work.efi_rts_id == EFI_RESET_SYSTEM) {
- 		pr_info("efi_reset_system() buggy! Reboot through BIOS\n");
- 		machine_real_restart(MRR_BIOS);
- 		return;
-diff --git a/arch/x86/realmode/rm/Makefile b/arch/x86/realmode/rm/Makefile
-index 4463fa72db94..96cb20de08af 100644
---- a/arch/x86/realmode/rm/Makefile
-+++ b/arch/x86/realmode/rm/Makefile
-@@ -47,7 +47,7 @@ $(obj)/pasyms.h: $(REALMODE_OBJS) FORCE
- targets += realmode.lds
- $(obj)/realmode.lds: $(obj)/pasyms.h
- 
--LDFLAGS_realmode.elf := --emit-relocs -T
-+LDFLAGS_realmode.elf := -m elf_i386 --emit-relocs -T
- CPPFLAGS_realmode.lds += -P -C -I$(objtree)/$(obj)
- 
- targets += realmode.elf
-diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
-index 0f4fe206dcc2..20701977e6c0 100644
---- a/arch/x86/xen/mmu_pv.c
-+++ b/arch/x86/xen/mmu_pv.c
-@@ -2114,10 +2114,10 @@ void __init xen_relocate_p2m(void)
- 				pt = early_memremap(pt_phys, PAGE_SIZE);
- 				clear_page(pt);
- 				for (idx_pte = 0;
--						idx_pte < min(n_pte, PTRS_PER_PTE);
--						idx_pte++) {
--					set_pte(pt + idx_pte,
--							pfn_pte(p2m_pfn, PAGE_KERNEL));
-+				     idx_pte < min(n_pte, PTRS_PER_PTE);
-+				     idx_pte++) {
-+					pt[idx_pte] = pfn_pte(p2m_pfn,
-+							      PAGE_KERNEL);
- 					p2m_pfn++;
- 				}
- 				n_pte -= PTRS_PER_PTE;
-@@ -2125,8 +2125,7 @@ void __init xen_relocate_p2m(void)
- 				make_lowmem_page_readonly(__va(pt_phys));
- 				pin_pagetable_pfn(MMUEXT_PIN_L1_TABLE,
- 						PFN_DOWN(pt_phys));
--				set_pmd(pmd + idx_pt,
--						__pmd(_PAGE_TABLE | pt_phys));
-+				pmd[idx_pt] = __pmd(_PAGE_TABLE | pt_phys);
- 				pt_phys += PAGE_SIZE;
- 			}
- 			n_pt -= PTRS_PER_PMD;
-@@ -2134,7 +2133,7 @@ void __init xen_relocate_p2m(void)
- 			make_lowmem_page_readonly(__va(pmd_phys));
- 			pin_pagetable_pfn(MMUEXT_PIN_L2_TABLE,
- 					PFN_DOWN(pmd_phys));
--			set_pud(pud + idx_pmd, __pud(_PAGE_TABLE | pmd_phys));
-+			pud[idx_pmd] = __pud(_PAGE_TABLE | pmd_phys);
- 			pmd_phys += PAGE_SIZE;
- 		}
- 		n_pmd -= PTRS_PER_PUD;
-diff --git a/arch/xtensa/kernel/process.c b/arch/xtensa/kernel/process.c
-index 74969a437a37..2e73395f0560 100644
---- a/arch/xtensa/kernel/process.c
-+++ b/arch/xtensa/kernel/process.c
-@@ -321,8 +321,8 @@ unsigned long get_wchan(struct task_struct *p)
- 
- 		/* Stack layout: sp-4: ra, sp-3: sp' */
- 
--		pc = MAKE_PC_FROM_RA(*(unsigned long*)sp - 4, sp);
--		sp = *(unsigned long *)sp - 3;
-+		pc = MAKE_PC_FROM_RA(SPILL_SLOT(sp, 0), sp);
-+		sp = SPILL_SLOT(sp, 1);
- 	} while (count++ < 16);
- 	return 0;
- }
-diff --git a/arch/xtensa/kernel/stacktrace.c b/arch/xtensa/kernel/stacktrace.c
-index 174c11f13bba..b9f82510c650 100644
---- a/arch/xtensa/kernel/stacktrace.c
-+++ b/arch/xtensa/kernel/stacktrace.c
-@@ -253,10 +253,14 @@ static int return_address_cb(struct stackframe *frame, void *data)
- 	return 1;
- }
- 
-+/*
-+ * level == 0 is for the return address from the caller of this function,
-+ * not from this function itself.
-+ */
- unsigned long return_address(unsigned level)
- {
- 	struct return_addr_data r = {
--		.skip = level + 1,
-+		.skip = level,
- 	};
- 	walk_stackframe(stack_pointer(NULL), return_address_cb, &r);
- 	return r.addr;
-diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
-index cd307767a134..e5ed28629271 100644
---- a/block/bfq-iosched.c
-+++ b/block/bfq-iosched.c
-@@ -747,6 +747,7 @@ void bfq_weights_tree_add(struct bfq_data *bfqd, struct bfq_queue *bfqq,
- 
- inc_counter:
- 	bfqq->weight_counter->num_active++;
-+	bfqq->ref++;
- }
- 
- /*
-@@ -771,6 +772,7 @@ void __bfq_weights_tree_remove(struct bfq_data *bfqd,
- 
- reset_entity_pointer:
- 	bfqq->weight_counter = NULL;
-+	bfq_put_queue(bfqq);
- }
- 
- /*
-@@ -782,9 +784,6 @@ void bfq_weights_tree_remove(struct bfq_data *bfqd,
- {
- 	struct bfq_entity *entity = bfqq->entity.parent;
- 
--	__bfq_weights_tree_remove(bfqd, bfqq,
--				  &bfqd->queue_weights_tree);
--
- 	for_each_entity(entity) {
- 		struct bfq_sched_data *sd = entity->my_sched_data;
- 
-@@ -818,6 +817,15 @@ void bfq_weights_tree_remove(struct bfq_data *bfqd,
- 			bfqd->num_groups_with_pending_reqs--;
- 		}
- 	}
-+
-+	/*
-+	 * Next function is invoked last, because it causes bfqq to be
-+	 * freed if the following holds: bfqq is not in service and
-+	 * has no dispatched request. DO NOT use bfqq after the next
-+	 * function invocation.
-+	 */
-+	__bfq_weights_tree_remove(bfqd, bfqq,
-+				  &bfqd->queue_weights_tree);
- }
- 
- /*
-@@ -1011,7 +1019,8 @@ bfq_bfqq_resume_state(struct bfq_queue *bfqq, struct bfq_data *bfqd,
- 
- static int bfqq_process_refs(struct bfq_queue *bfqq)
- {
--	return bfqq->ref - bfqq->allocated - bfqq->entity.on_st;
-+	return bfqq->ref - bfqq->allocated - bfqq->entity.on_st -
-+		(bfqq->weight_counter != NULL);
- }
- 
- /* Empty burst list and add just bfqq (see comments on bfq_handle_burst) */
-@@ -2224,7 +2233,8 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
- 
- 	if (in_service_bfqq && in_service_bfqq != bfqq &&
- 	    likely(in_service_bfqq != &bfqd->oom_bfqq) &&
--	    bfq_rq_close_to_sector(io_struct, request, bfqd->last_position) &&
-+	    bfq_rq_close_to_sector(io_struct, request,
-+				   bfqd->in_serv_last_pos) &&
- 	    bfqq->entity.parent == in_service_bfqq->entity.parent &&
- 	    bfq_may_be_close_cooperator(bfqq, in_service_bfqq)) {
- 		new_bfqq = bfq_setup_merge(bfqq, in_service_bfqq);
-@@ -2764,6 +2774,8 @@ update_rate_and_reset:
- 	bfq_update_rate_reset(bfqd, rq);
- update_last_values:
- 	bfqd->last_position = blk_rq_pos(rq) + blk_rq_sectors(rq);
-+	if (RQ_BFQQ(rq) == bfqd->in_service_queue)
-+		bfqd->in_serv_last_pos = bfqd->last_position;
- 	bfqd->last_dispatch = now_ns;
- }
- 
-diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
-index 0b02bf302de0..746bd570b85a 100644
---- a/block/bfq-iosched.h
-+++ b/block/bfq-iosched.h
-@@ -537,6 +537,9 @@ struct bfq_data {
- 	/* on-disk position of the last served request */
- 	sector_t last_position;
- 
-+	/* position of the last served request for the in-service queue */
-+	sector_t in_serv_last_pos;
-+
- 	/* time of last request completion (ns) */
- 	u64 last_completion;
- 
-diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
-index 72adbbe975d5..4aab1a8191f0 100644
---- a/block/bfq-wf2q.c
-+++ b/block/bfq-wf2q.c
-@@ -1667,15 +1667,15 @@ void bfq_del_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq,
- 
- 	bfqd->busy_queues--;
- 
--	if (!bfqq->dispatched)
--		bfq_weights_tree_remove(bfqd, bfqq);
--
- 	if (bfqq->wr_coeff > 1)
- 		bfqd->wr_busy_queues--;
- 
- 	bfqg_stats_update_dequeue(bfqq_group(bfqq));
- 
- 	bfq_deactivate_bfqq(bfqd, bfqq, true, expiration);
-+
-+	if (!bfqq->dispatched)
-+		bfq_weights_tree_remove(bfqd, bfqq);
- }
- 
- /*
-diff --git a/block/bio.c b/block/bio.c
-index 4db1008309ed..a06f58bd4c72 100644
---- a/block/bio.c
-+++ b/block/bio.c
-@@ -1238,8 +1238,11 @@ struct bio *bio_copy_user_iov(struct request_queue *q,
- 			}
- 		}
- 
--		if (bio_add_pc_page(q, bio, page, bytes, offset) < bytes)
-+		if (bio_add_pc_page(q, bio, page, bytes, offset) < bytes) {
-+			if (!map_data)
-+				__free_page(page);
- 			break;
-+		}
- 
- 		len -= bytes;
- 		offset = 0;
-diff --git a/block/blk-core.c b/block/blk-core.c
-index 6b78ec56a4f2..5bde73a49399 100644
---- a/block/blk-core.c
-+++ b/block/blk-core.c
-@@ -1246,8 +1246,6 @@ static int blk_cloned_rq_check_limits(struct request_queue *q,
-  */
- blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *rq)
- {
--	blk_qc_t unused;
--
- 	if (blk_cloned_rq_check_limits(q, rq))
- 		return BLK_STS_IOERR;
- 
-@@ -1263,7 +1261,7 @@ blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *
- 	 * bypass a potential scheduler on the bottom device for
- 	 * insert.
- 	 */
--	return blk_mq_try_issue_directly(rq->mq_hctx, rq, &unused, true, true);
-+	return blk_mq_request_issue_directly(rq, true);
- }
- EXPORT_SYMBOL_GPL(blk_insert_cloned_request);
- 
-diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
-index 140933e4a7d1..0c98b6c1ca49 100644
---- a/block/blk-mq-sched.c
-+++ b/block/blk-mq-sched.c
-@@ -423,10 +423,12 @@ void blk_mq_sched_insert_requests(struct blk_mq_hw_ctx *hctx,
- 		 * busy in case of 'none' scheduler, and this way may save
- 		 * us one extra enqueue & dequeue to sw queue.
- 		 */
--		if (!hctx->dispatch_busy && !e && !run_queue_async)
-+		if (!hctx->dispatch_busy && !e && !run_queue_async) {
- 			blk_mq_try_issue_list_directly(hctx, list);
--		else
--			blk_mq_insert_requests(hctx, ctx, list);
-+			if (list_empty(list))
-+				return;
-+		}
-+		blk_mq_insert_requests(hctx, ctx, list);
- 	}
- 
- 	blk_mq_run_hw_queue(hctx, run_queue_async);
-diff --git a/block/blk-mq.c b/block/blk-mq.c
-index 9437a5eb07cf..16f9675c57e6 100644
---- a/block/blk-mq.c
-+++ b/block/blk-mq.c
-@@ -1076,7 +1076,13 @@ static int blk_mq_dispatch_wake(wait_queue_entry_t *wait, unsigned mode,
- 	hctx = container_of(wait, struct blk_mq_hw_ctx, dispatch_wait);
- 
- 	spin_lock(&hctx->dispatch_wait_lock);
--	list_del_init(&wait->entry);
-+	if (!list_empty(&wait->entry)) {
-+		struct sbitmap_queue *sbq;
-+
-+		list_del_init(&wait->entry);
-+		sbq = &hctx->tags->bitmap_tags;
-+		atomic_dec(&sbq->ws_active);
-+	}
- 	spin_unlock(&hctx->dispatch_wait_lock);
- 
- 	blk_mq_run_hw_queue(hctx, true);
-@@ -1092,6 +1098,7 @@ static int blk_mq_dispatch_wake(wait_queue_entry_t *wait, unsigned mode,
- static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx *hctx,
- 				 struct request *rq)
- {
-+	struct sbitmap_queue *sbq = &hctx->tags->bitmap_tags;
- 	struct wait_queue_head *wq;
- 	wait_queue_entry_t *wait;
- 	bool ret;
-@@ -1115,7 +1122,7 @@ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx *hctx,
- 	if (!list_empty_careful(&wait->entry))
- 		return false;
- 
--	wq = &bt_wait_ptr(&hctx->tags->bitmap_tags, hctx)->wait;
-+	wq = &bt_wait_ptr(sbq, hctx)->wait;
- 
- 	spin_lock_irq(&wq->lock);
- 	spin_lock(&hctx->dispatch_wait_lock);
-@@ -1125,6 +1132,7 @@ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx *hctx,
- 		return false;
- 	}
- 
-+	atomic_inc(&sbq->ws_active);
- 	wait->flags &= ~WQ_FLAG_EXCLUSIVE;
- 	__add_wait_queue(wq, wait);
- 
-@@ -1145,6 +1153,7 @@ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx *hctx,
- 	 * someone else gets the wakeup.
- 	 */
- 	list_del_init(&wait->entry);
-+	atomic_dec(&sbq->ws_active);
- 	spin_unlock(&hctx->dispatch_wait_lock);
- 	spin_unlock_irq(&wq->lock);
- 
-@@ -1796,74 +1805,76 @@ static blk_status_t __blk_mq_issue_directly(struct blk_mq_hw_ctx *hctx,
- 	return ret;
- }
- 
--blk_status_t blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
-+static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
- 						struct request *rq,
- 						blk_qc_t *cookie,
--						bool bypass, bool last)
-+						bool bypass_insert, bool last)
- {
- 	struct request_queue *q = rq->q;
- 	bool run_queue = true;
--	blk_status_t ret = BLK_STS_RESOURCE;
--	int srcu_idx;
--	bool force = false;
- 
--	hctx_lock(hctx, &srcu_idx);
- 	/*
--	 * hctx_lock is needed before checking quiesced flag.
-+	 * RCU or SRCU read lock is needed before checking quiesced flag.
- 	 *
--	 * When queue is stopped or quiesced, ignore 'bypass', insert
--	 * and return BLK_STS_OK to caller, and avoid driver to try to
--	 * dispatch again.
-+	 * When queue is stopped or quiesced, ignore 'bypass_insert' from
-+	 * blk_mq_request_issue_directly(), and return BLK_STS_OK to caller,
-+	 * and avoid driver to try to dispatch again.
- 	 */
--	if (unlikely(blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(q))) {
-+	if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(q)) {
- 		run_queue = false;
--		bypass = false;
--		goto out_unlock;
-+		bypass_insert = false;
-+		goto insert;
- 	}
- 
--	if (unlikely(q->elevator && !bypass))
--		goto out_unlock;
-+	if (q->elevator && !bypass_insert)
-+		goto insert;
- 
- 	if (!blk_mq_get_dispatch_budget(hctx))
--		goto out_unlock;
-+		goto insert;
- 
- 	if (!blk_mq_get_driver_tag(rq)) {
- 		blk_mq_put_dispatch_budget(hctx);
--		goto out_unlock;
-+		goto insert;
- 	}
- 
--	/*
--	 * Always add a request that has been through
--	 *.queue_rq() to the hardware dispatch list.
--	 */
--	force = true;
--	ret = __blk_mq_issue_directly(hctx, rq, cookie, last);
--out_unlock:
-+	return __blk_mq_issue_directly(hctx, rq, cookie, last);
-+insert:
-+	if (bypass_insert)
-+		return BLK_STS_RESOURCE;
-+
-+	blk_mq_request_bypass_insert(rq, run_queue);
-+	return BLK_STS_OK;
-+}
-+
-+static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
-+		struct request *rq, blk_qc_t *cookie)
-+{
-+	blk_status_t ret;
-+	int srcu_idx;
-+
-+	might_sleep_if(hctx->flags & BLK_MQ_F_BLOCKING);
-+
-+	hctx_lock(hctx, &srcu_idx);
-+
-+	ret = __blk_mq_try_issue_directly(hctx, rq, cookie, false, true);
-+	if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE)
-+		blk_mq_request_bypass_insert(rq, true);
-+	else if (ret != BLK_STS_OK)
-+		blk_mq_end_request(rq, ret);
-+
-+	hctx_unlock(hctx, srcu_idx);
-+}
-+
-+blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last)
-+{
-+	blk_status_t ret;
-+	int srcu_idx;
-+	blk_qc_t unused_cookie;
-+	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
-+
-+	hctx_lock(hctx, &srcu_idx);
-+	ret = __blk_mq_try_issue_directly(hctx, rq, &unused_cookie, true, last);
- 	hctx_unlock(hctx, srcu_idx);
--	switch (ret) {
--	case BLK_STS_OK:
--		break;
--	case BLK_STS_DEV_RESOURCE:
--	case BLK_STS_RESOURCE:
--		if (force) {
--			blk_mq_request_bypass_insert(rq, run_queue);
--			/*
--			 * We have to return BLK_STS_OK for the DM
--			 * to avoid livelock. Otherwise, we return
--			 * the real result to indicate whether the
--			 * request is direct-issued successfully.
--			 */
--			ret = bypass ? BLK_STS_OK : ret;
--		} else if (!bypass) {
--			blk_mq_sched_insert_request(rq, false,
--						    run_queue, false);
--		}
--		break;
--	default:
--		if (!bypass)
--			blk_mq_end_request(rq, ret);
--		break;
--	}
- 
- 	return ret;
- }
-@@ -1871,20 +1882,22 @@ out_unlock:
- void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
- 		struct list_head *list)
- {
--	blk_qc_t unused;
--	blk_status_t ret = BLK_STS_OK;
--
- 	while (!list_empty(list)) {
-+		blk_status_t ret;
- 		struct request *rq = list_first_entry(list, struct request,
- 				queuelist);
- 
- 		list_del_init(&rq->queuelist);
--		if (ret == BLK_STS_OK)
--			ret = blk_mq_try_issue_directly(hctx, rq, &unused,
--							false,
-+		ret = blk_mq_request_issue_directly(rq, list_empty(list));
-+		if (ret != BLK_STS_OK) {
-+			if (ret == BLK_STS_RESOURCE ||
-+					ret == BLK_STS_DEV_RESOURCE) {
-+				blk_mq_request_bypass_insert(rq,
- 							list_empty(list));
--		else
--			blk_mq_sched_insert_request(rq, false, true, false);
-+				break;
-+			}
-+			blk_mq_end_request(rq, ret);
-+		}
- 	}
- 
- 	/*
-@@ -1892,7 +1905,7 @@ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
- 	 * the driver there was more coming, but that turned out to
- 	 * be a lie.
- 	 */
--	if (ret != BLK_STS_OK && hctx->queue->mq_ops->commit_rqs)
-+	if (!list_empty(list) && hctx->queue->mq_ops->commit_rqs)
- 		hctx->queue->mq_ops->commit_rqs(hctx);
- }
- 
-@@ -2005,13 +2018,13 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
- 		if (same_queue_rq) {
- 			data.hctx = same_queue_rq->mq_hctx;
- 			blk_mq_try_issue_directly(data.hctx, same_queue_rq,
--					&cookie, false, true);
-+					&cookie);
- 		}
- 	} else if ((q->nr_hw_queues > 1 && is_sync) || (!q->elevator &&
- 			!data.hctx->dispatch_busy)) {
- 		blk_mq_put_ctx(data.ctx);
- 		blk_mq_bio_to_request(rq, bio);
--		blk_mq_try_issue_directly(data.hctx, rq, &cookie, false, true);
-+		blk_mq_try_issue_directly(data.hctx, rq, &cookie);
- 	} else {
- 		blk_mq_put_ctx(data.ctx);
- 		blk_mq_bio_to_request(rq, bio);
-diff --git a/block/blk-mq.h b/block/blk-mq.h
-index d0b3dd54ef8d..a3a684a8c633 100644
---- a/block/blk-mq.h
-+++ b/block/blk-mq.h
-@@ -67,10 +67,8 @@ void blk_mq_request_bypass_insert(struct request *rq, bool run_queue);
- void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
- 				struct list_head *list);
- 
--blk_status_t blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
--						struct request *rq,
--						blk_qc_t *cookie,
--						bool bypass, bool last);
-+/* Used by blk_insert_cloned_request() to issue request directly */
-+blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last);
- void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
- 				    struct list_head *list);
- 
-diff --git a/crypto/aead.c b/crypto/aead.c
-index 189c52d1f63a..4908b5e846f0 100644
---- a/crypto/aead.c
-+++ b/crypto/aead.c
-@@ -61,8 +61,10 @@ int crypto_aead_setkey(struct crypto_aead *tfm,
- 	else
- 		err = crypto_aead_alg(tfm)->setkey(tfm, key, keylen);
- 
--	if (err)
-+	if (unlikely(err)) {
-+		crypto_aead_set_flags(tfm, CRYPTO_TFM_NEED_KEY);
- 		return err;
-+	}
- 
- 	crypto_aead_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
- 	return 0;
-diff --git a/crypto/aegis128.c b/crypto/aegis128.c
-index c22f4414856d..789716f92e4c 100644
---- a/crypto/aegis128.c
-+++ b/crypto/aegis128.c
-@@ -290,19 +290,19 @@ static void crypto_aegis128_process_crypt(struct aegis_state *state,
- 					  const struct aegis128_ops *ops)
- {
- 	struct skcipher_walk walk;
--	u8 *src, *dst;
--	unsigned int chunksize;
- 
- 	ops->skcipher_walk_init(&walk, req, false);
- 
- 	while (walk.nbytes) {
--		src = walk.src.virt.addr;
--		dst = walk.dst.virt.addr;
--		chunksize = walk.nbytes;
-+		unsigned int nbytes = walk.nbytes;
- 
--		ops->crypt_chunk(state, dst, src, chunksize);
-+		if (nbytes < walk.total)
-+			nbytes = round_down(nbytes, walk.stride);
- 
--		skcipher_walk_done(&walk, 0);
-+		ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr,
-+				 nbytes);
-+
-+		skcipher_walk_done(&walk, walk.nbytes - nbytes);
- 	}
- }
- 
-diff --git a/crypto/aegis128l.c b/crypto/aegis128l.c
-index b6fb21ebdc3e..73811448cb6b 100644
---- a/crypto/aegis128l.c
-+++ b/crypto/aegis128l.c
-@@ -353,19 +353,19 @@ static void crypto_aegis128l_process_crypt(struct aegis_state *state,
- 					   const struct aegis128l_ops *ops)
- {
- 	struct skcipher_walk walk;
--	u8 *src, *dst;
--	unsigned int chunksize;
- 
- 	ops->skcipher_walk_init(&walk, req, false);
- 
- 	while (walk.nbytes) {
--		src = walk.src.virt.addr;
--		dst = walk.dst.virt.addr;
--		chunksize = walk.nbytes;
-+		unsigned int nbytes = walk.nbytes;
- 
--		ops->crypt_chunk(state, dst, src, chunksize);
-+		if (nbytes < walk.total)
-+			nbytes = round_down(nbytes, walk.stride);
- 
--		skcipher_walk_done(&walk, 0);
-+		ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr,
-+				 nbytes);
-+
-+		skcipher_walk_done(&walk, walk.nbytes - nbytes);
- 	}
- }
- 
-diff --git a/crypto/aegis256.c b/crypto/aegis256.c
-index 11f0f8ec9c7c..8a71e9c06193 100644
---- a/crypto/aegis256.c
-+++ b/crypto/aegis256.c
-@@ -303,19 +303,19 @@ static void crypto_aegis256_process_crypt(struct aegis_state *state,
- 					  const struct aegis256_ops *ops)
- {
- 	struct skcipher_walk walk;
--	u8 *src, *dst;
--	unsigned int chunksize;
- 
- 	ops->skcipher_walk_init(&walk, req, false);
- 
- 	while (walk.nbytes) {
--		src = walk.src.virt.addr;
--		dst = walk.dst.virt.addr;
--		chunksize = walk.nbytes;
-+		unsigned int nbytes = walk.nbytes;
- 
--		ops->crypt_chunk(state, dst, src, chunksize);
-+		if (nbytes < walk.total)
-+			nbytes = round_down(nbytes, walk.stride);
- 
--		skcipher_walk_done(&walk, 0);
-+		ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr,
-+				 nbytes);
-+
-+		skcipher_walk_done(&walk, walk.nbytes - nbytes);
- 	}
- }
- 
-diff --git a/crypto/ahash.c b/crypto/ahash.c
-index 5d320a811f75..81e2767e2164 100644
---- a/crypto/ahash.c
-+++ b/crypto/ahash.c
-@@ -86,17 +86,17 @@ static int hash_walk_new_entry(struct crypto_hash_walk *walk)
- int crypto_hash_walk_done(struct crypto_hash_walk *walk, int err)
- {
- 	unsigned int alignmask = walk->alignmask;
--	unsigned int nbytes = walk->entrylen;
- 
- 	walk->data -= walk->offset;
- 
--	if (nbytes && walk->offset & alignmask && !err) {
--		walk->offset = ALIGN(walk->offset, alignmask + 1);
--		nbytes = min(nbytes,
--			     ((unsigned int)(PAGE_SIZE)) - walk->offset);
--		walk->entrylen -= nbytes;
-+	if (walk->entrylen && (walk->offset & alignmask) && !err) {
-+		unsigned int nbytes;
- 
-+		walk->offset = ALIGN(walk->offset, alignmask + 1);
-+		nbytes = min(walk->entrylen,
-+			     (unsigned int)(PAGE_SIZE - walk->offset));
- 		if (nbytes) {
-+			walk->entrylen -= nbytes;
- 			walk->data += walk->offset;
- 			return nbytes;
- 		}
-@@ -116,7 +116,7 @@ int crypto_hash_walk_done(struct crypto_hash_walk *walk, int err)
- 	if (err)
- 		return err;
- 
--	if (nbytes) {
-+	if (walk->entrylen) {
- 		walk->offset = 0;
- 		walk->pg++;
- 		return hash_walk_next(walk);
-@@ -190,6 +190,21 @@ static int ahash_setkey_unaligned(struct crypto_ahash *tfm, const u8 *key,
- 	return ret;
- }
- 
-+static int ahash_nosetkey(struct crypto_ahash *tfm, const u8 *key,
-+			  unsigned int keylen)
-+{
-+	return -ENOSYS;
-+}
-+
-+static void ahash_set_needkey(struct crypto_ahash *tfm)
-+{
-+	const struct hash_alg_common *alg = crypto_hash_alg_common(tfm);
-+
-+	if (tfm->setkey != ahash_nosetkey &&
-+	    !(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
-+		crypto_ahash_set_flags(tfm, CRYPTO_TFM_NEED_KEY);
-+}
-+
- int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key,
- 			unsigned int keylen)
- {
-@@ -201,20 +216,16 @@ int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key,
- 	else
- 		err = tfm->setkey(tfm, key, keylen);
- 
--	if (err)
-+	if (unlikely(err)) {
-+		ahash_set_needkey(tfm);
- 		return err;
-+	}
- 
- 	crypto_ahash_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
- 	return 0;
- }
- EXPORT_SYMBOL_GPL(crypto_ahash_setkey);
- 
--static int ahash_nosetkey(struct crypto_ahash *tfm, const u8 *key,
--			  unsigned int keylen)
--{
--	return -ENOSYS;
--}
--
- static inline unsigned int ahash_align_buffer_size(unsigned len,
- 						   unsigned long mask)
- {
-@@ -489,8 +500,7 @@ static int crypto_ahash_init_tfm(struct crypto_tfm *tfm)
- 
- 	if (alg->setkey) {
- 		hash->setkey = alg->setkey;
--		if (!(alg->halg.base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
--			crypto_ahash_set_flags(hash, CRYPTO_TFM_NEED_KEY);
-+		ahash_set_needkey(hash);
- 	}
- 
- 	return 0;
-diff --git a/crypto/cfb.c b/crypto/cfb.c
-index e81e45673498..4abfe32ff845 100644
---- a/crypto/cfb.c
-+++ b/crypto/cfb.c
-@@ -77,12 +77,14 @@ static int crypto_cfb_encrypt_segment(struct skcipher_walk *walk,
- 	do {
- 		crypto_cfb_encrypt_one(tfm, iv, dst);
- 		crypto_xor(dst, src, bsize);
--		memcpy(iv, dst, bsize);
-+		iv = dst;
- 
- 		src += bsize;
- 		dst += bsize;
- 	} while ((nbytes -= bsize) >= bsize);
- 
-+	memcpy(walk->iv, iv, bsize);
-+
- 	return nbytes;
- }
- 
-@@ -162,7 +164,7 @@ static int crypto_cfb_decrypt_inplace(struct skcipher_walk *walk,
- 	const unsigned int bsize = crypto_cfb_bsize(tfm);
- 	unsigned int nbytes = walk->nbytes;
- 	u8 *src = walk->src.virt.addr;
--	u8 *iv = walk->iv;
-+	u8 * const iv = walk->iv;
- 	u8 tmp[MAX_CIPHER_BLOCKSIZE];
- 
- 	do {
-@@ -172,8 +174,6 @@ static int crypto_cfb_decrypt_inplace(struct skcipher_walk *walk,
- 		src += bsize;
- 	} while ((nbytes -= bsize) >= bsize);
- 
--	memcpy(walk->iv, iv, bsize);
--
- 	return nbytes;
- }
- 
-@@ -298,6 +298,12 @@ static int crypto_cfb_create(struct crypto_template *tmpl, struct rtattr **tb)
- 	inst->alg.base.cra_blocksize = 1;
- 	inst->alg.base.cra_alignmask = alg->cra_alignmask;
- 
-+	/*
-+	 * To simplify the implementation, configure the skcipher walk to only
-+	 * give a partial block at the very end, never earlier.
-+	 */
-+	inst->alg.chunksize = alg->cra_blocksize;
-+
- 	inst->alg.ivsize = alg->cra_blocksize;
- 	inst->alg.min_keysize = alg->cra_cipher.cia_min_keysize;
- 	inst->alg.max_keysize = alg->cra_cipher.cia_max_keysize;
-diff --git a/crypto/morus1280.c b/crypto/morus1280.c
-index 3889c188f266..b83576b4eb55 100644
---- a/crypto/morus1280.c
-+++ b/crypto/morus1280.c
-@@ -366,18 +366,19 @@ static void crypto_morus1280_process_crypt(struct morus1280_state *state,
- 					   const struct morus1280_ops *ops)
- {
- 	struct skcipher_walk walk;
--	u8 *dst;
--	const u8 *src;
- 
- 	ops->skcipher_walk_init(&walk, req, false);
- 
- 	while (walk.nbytes) {
--		src = walk.src.virt.addr;
--		dst = walk.dst.virt.addr;
-+		unsigned int nbytes = walk.nbytes;
- 
--		ops->crypt_chunk(state, dst, src, walk.nbytes);
-+		if (nbytes < walk.total)
-+			nbytes = round_down(nbytes, walk.stride);
- 
--		skcipher_walk_done(&walk, 0);
-+		ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr,
-+				 nbytes);
-+
-+		skcipher_walk_done(&walk, walk.nbytes - nbytes);
- 	}
- }
- 
-diff --git a/crypto/morus640.c b/crypto/morus640.c
-index da06ec2f6a80..b6a477444f6d 100644
---- a/crypto/morus640.c
-+++ b/crypto/morus640.c
-@@ -365,18 +365,19 @@ static void crypto_morus640_process_crypt(struct morus640_state *state,
- 					  const struct morus640_ops *ops)
- {
- 	struct skcipher_walk walk;
--	u8 *dst;
--	const u8 *src;
- 
- 	ops->skcipher_walk_init(&walk, req, false);
- 
- 	while (walk.nbytes) {
--		src = walk.src.virt.addr;
--		dst = walk.dst.virt.addr;
-+		unsigned int nbytes = walk.nbytes;
- 
--		ops->crypt_chunk(state, dst, src, walk.nbytes);
-+		if (nbytes < walk.total)
-+			nbytes = round_down(nbytes, walk.stride);
- 
--		skcipher_walk_done(&walk, 0);
-+		ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr,
-+				 nbytes);
-+
-+		skcipher_walk_done(&walk, walk.nbytes - nbytes);
- 	}
- }
- 
-diff --git a/crypto/ofb.c b/crypto/ofb.c
-index 886631708c5e..cab0b80953fe 100644
---- a/crypto/ofb.c
-+++ b/crypto/ofb.c
-@@ -5,9 +5,6 @@
-  *
-  * Copyright (C) 2018 ARM Limited or its affiliates.
-  * All rights reserved.
-- *
-- * Based loosely on public domain code gleaned from libtomcrypt
-- * (https://github.com/libtom/libtomcrypt).
-  */
- 
- #include <crypto/algapi.h>
-@@ -21,7 +18,6 @@
- 
- struct crypto_ofb_ctx {
- 	struct crypto_cipher *child;
--	int cnt;
- };
- 
- 
-@@ -41,58 +37,40 @@ static int crypto_ofb_setkey(struct crypto_skcipher *parent, const u8 *key,
- 	return err;
- }
- 
--static int crypto_ofb_encrypt_segment(struct crypto_ofb_ctx *ctx,
--				      struct skcipher_walk *walk,
--				      struct crypto_cipher *tfm)
-+static int crypto_ofb_crypt(struct skcipher_request *req)
- {
--	int bsize = crypto_cipher_blocksize(tfm);
--	int nbytes = walk->nbytes;
--
--	u8 *src = walk->src.virt.addr;
--	u8 *dst = walk->dst.virt.addr;
--	u8 *iv = walk->iv;
--
--	do {
--		if (ctx->cnt == bsize) {
--			if (nbytes < bsize)
--				break;
--			crypto_cipher_encrypt_one(tfm, iv, iv);
--			ctx->cnt = 0;
--		}
--		*dst = *src ^ iv[ctx->cnt];
--		src++;
--		dst++;
--		ctx->cnt++;
--	} while (--nbytes);
--	return nbytes;
--}
--
--static int crypto_ofb_encrypt(struct skcipher_request *req)
--{
--	struct skcipher_walk walk;
- 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
--	unsigned int bsize;
- 	struct crypto_ofb_ctx *ctx = crypto_skcipher_ctx(tfm);
--	struct crypto_cipher *child = ctx->child;
--	int ret = 0;
-+	struct crypto_cipher *cipher = ctx->child;
-+	const unsigned int bsize = crypto_cipher_blocksize(cipher);
-+	struct skcipher_walk walk;
-+	int err;
- 
--	bsize =  crypto_cipher_blocksize(child);
--	ctx->cnt = bsize;
-+	err = skcipher_walk_virt(&walk, req, false);
- 
--	ret = skcipher_walk_virt(&walk, req, false);
-+	while (walk.nbytes >= bsize) {
-+		const u8 *src = walk.src.virt.addr;
-+		u8 *dst = walk.dst.virt.addr;
-+		u8 * const iv = walk.iv;
-+		unsigned int nbytes = walk.nbytes;
- 
--	while (walk.nbytes) {
--		ret = crypto_ofb_encrypt_segment(ctx, &walk, child);
--		ret = skcipher_walk_done(&walk, ret);
--	}
-+		do {
-+			crypto_cipher_encrypt_one(cipher, iv, iv);
-+			crypto_xor_cpy(dst, src, iv, bsize);
-+			dst += bsize;
-+			src += bsize;
-+		} while ((nbytes -= bsize) >= bsize);
- 
--	return ret;
--}
-+		err = skcipher_walk_done(&walk, nbytes);
-+	}
- 
--/* OFB encrypt and decrypt are identical */
--static int crypto_ofb_decrypt(struct skcipher_request *req)
--{
--	return crypto_ofb_encrypt(req);
-+	if (walk.nbytes) {
-+		crypto_cipher_encrypt_one(cipher, walk.iv, walk.iv);
-+		crypto_xor_cpy(walk.dst.virt.addr, walk.src.virt.addr, walk.iv,
-+			       walk.nbytes);
-+		err = skcipher_walk_done(&walk, 0);
-+	}
-+	return err;
- }
- 
- static int crypto_ofb_init_tfm(struct crypto_skcipher *tfm)
-@@ -165,13 +143,18 @@ static int crypto_ofb_create(struct crypto_template *tmpl, struct rtattr **tb)
- 	if (err)
- 		goto err_drop_spawn;
- 
-+	/* OFB mode is a stream cipher. */
-+	inst->alg.base.cra_blocksize = 1;
-+
-+	/*
-+	 * To simplify the implementation, configure the skcipher walk to only
-+	 * give a partial block at the very end, never earlier.
-+	 */
-+	inst->alg.chunksize = alg->cra_blocksize;
-+
- 	inst->alg.base.cra_priority = alg->cra_priority;
--	inst->alg.base.cra_blocksize = alg->cra_blocksize;
- 	inst->alg.base.cra_alignmask = alg->cra_alignmask;
- 
--	/* We access the data as u32s when xoring. */
--	inst->alg.base.cra_alignmask |= __alignof__(u32) - 1;
--
- 	inst->alg.ivsize = alg->cra_blocksize;
- 	inst->alg.min_keysize = alg->cra_cipher.cia_min_keysize;
- 	inst->alg.max_keysize = alg->cra_cipher.cia_max_keysize;
-@@ -182,8 +165,8 @@ static int crypto_ofb_create(struct crypto_template *tmpl, struct rtattr **tb)
- 	inst->alg.exit = crypto_ofb_exit_tfm;
- 
- 	inst->alg.setkey = crypto_ofb_setkey;
--	inst->alg.encrypt = crypto_ofb_encrypt;
--	inst->alg.decrypt = crypto_ofb_decrypt;
-+	inst->alg.encrypt = crypto_ofb_crypt;
-+	inst->alg.decrypt = crypto_ofb_crypt;
- 
- 	inst->free = crypto_ofb_free;
- 
-diff --git a/crypto/pcbc.c b/crypto/pcbc.c
-index 8aa10144407c..1b182dfedc94 100644
---- a/crypto/pcbc.c
-+++ b/crypto/pcbc.c
-@@ -51,7 +51,7 @@ static int crypto_pcbc_encrypt_segment(struct skcipher_request *req,
- 	unsigned int nbytes = walk->nbytes;
- 	u8 *src = walk->src.virt.addr;
- 	u8 *dst = walk->dst.virt.addr;
--	u8 *iv = walk->iv;
-+	u8 * const iv = walk->iv;
- 
- 	do {
- 		crypto_xor(iv, src, bsize);
-@@ -72,7 +72,7 @@ static int crypto_pcbc_encrypt_inplace(struct skcipher_request *req,
- 	int bsize = crypto_cipher_blocksize(tfm);
- 	unsigned int nbytes = walk->nbytes;
- 	u8 *src = walk->src.virt.addr;
--	u8 *iv = walk->iv;
-+	u8 * const iv = walk->iv;
- 	u8 tmpbuf[MAX_CIPHER_BLOCKSIZE];
- 
- 	do {
-@@ -84,8 +84,6 @@ static int crypto_pcbc_encrypt_inplace(struct skcipher_request *req,
- 		src += bsize;
- 	} while ((nbytes -= bsize) >= bsize);
- 
--	memcpy(walk->iv, iv, bsize);
--
- 	return nbytes;
- }
- 
-@@ -121,7 +119,7 @@ static int crypto_pcbc_decrypt_segment(struct skcipher_request *req,
- 	unsigned int nbytes = walk->nbytes;
- 	u8 *src = walk->src.virt.addr;
- 	u8 *dst = walk->dst.virt.addr;
--	u8 *iv = walk->iv;
-+	u8 * const iv = walk->iv;
- 
- 	do {
- 		crypto_cipher_decrypt_one(tfm, dst, src);
-@@ -132,8 +130,6 @@ static int crypto_pcbc_decrypt_segment(struct skcipher_request *req,
- 		dst += bsize;
- 	} while ((nbytes -= bsize) >= bsize);
- 
--	memcpy(walk->iv, iv, bsize);
--
- 	return nbytes;
- }
- 
-@@ -144,7 +140,7 @@ static int crypto_pcbc_decrypt_inplace(struct skcipher_request *req,
- 	int bsize = crypto_cipher_blocksize(tfm);
- 	unsigned int nbytes = walk->nbytes;
- 	u8 *src = walk->src.virt.addr;
--	u8 *iv = walk->iv;
-+	u8 * const iv = walk->iv;
- 	u8 tmpbuf[MAX_CIPHER_BLOCKSIZE] __aligned(__alignof__(u32));
- 
- 	do {
-@@ -156,8 +152,6 @@ static int crypto_pcbc_decrypt_inplace(struct skcipher_request *req,
- 		src += bsize;
- 	} while ((nbytes -= bsize) >= bsize);
- 
--	memcpy(walk->iv, iv, bsize);
--
- 	return nbytes;
- }
- 
-diff --git a/crypto/shash.c b/crypto/shash.c
-index 44d297b82a8f..40311ccad3fa 100644
---- a/crypto/shash.c
-+++ b/crypto/shash.c
-@@ -53,6 +53,13 @@ static int shash_setkey_unaligned(struct crypto_shash *tfm, const u8 *key,
- 	return err;
- }
- 
-+static void shash_set_needkey(struct crypto_shash *tfm, struct shash_alg *alg)
-+{
-+	if (crypto_shash_alg_has_setkey(alg) &&
-+	    !(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
-+		crypto_shash_set_flags(tfm, CRYPTO_TFM_NEED_KEY);
-+}
-+
- int crypto_shash_setkey(struct crypto_shash *tfm, const u8 *key,
- 			unsigned int keylen)
- {
-@@ -65,8 +72,10 @@ int crypto_shash_setkey(struct crypto_shash *tfm, const u8 *key,
- 	else
- 		err = shash->setkey(tfm, key, keylen);
- 
--	if (err)
-+	if (unlikely(err)) {
-+		shash_set_needkey(tfm, shash);
- 		return err;
-+	}
- 
- 	crypto_shash_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
- 	return 0;
-@@ -373,7 +382,8 @@ int crypto_init_shash_ops_async(struct crypto_tfm *tfm)
- 	crt->final = shash_async_final;
- 	crt->finup = shash_async_finup;
- 	crt->digest = shash_async_digest;
--	crt->setkey = shash_async_setkey;
-+	if (crypto_shash_alg_has_setkey(alg))
-+		crt->setkey = shash_async_setkey;
- 
- 	crypto_ahash_set_flags(crt, crypto_shash_get_flags(shash) &
- 				    CRYPTO_TFM_NEED_KEY);
-@@ -395,9 +405,7 @@ static int crypto_shash_init_tfm(struct crypto_tfm *tfm)
- 
- 	hash->descsize = alg->descsize;
- 
--	if (crypto_shash_alg_has_setkey(alg) &&
--	    !(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
--		crypto_shash_set_flags(hash, CRYPTO_TFM_NEED_KEY);
-+	shash_set_needkey(hash, alg);
- 
- 	return 0;
- }
-diff --git a/crypto/skcipher.c b/crypto/skcipher.c
-index 2a969296bc24..de09ff60991e 100644
---- a/crypto/skcipher.c
-+++ b/crypto/skcipher.c
-@@ -585,6 +585,12 @@ static unsigned int crypto_skcipher_extsize(struct crypto_alg *alg)
- 	return crypto_alg_extsize(alg);
- }
- 
-+static void skcipher_set_needkey(struct crypto_skcipher *tfm)
-+{
-+	if (tfm->keysize)
-+		crypto_skcipher_set_flags(tfm, CRYPTO_TFM_NEED_KEY);
-+}
-+
- static int skcipher_setkey_blkcipher(struct crypto_skcipher *tfm,
- 				     const u8 *key, unsigned int keylen)
- {
-@@ -598,8 +604,10 @@ static int skcipher_setkey_blkcipher(struct crypto_skcipher *tfm,
- 	err = crypto_blkcipher_setkey(blkcipher, key, keylen);
- 	crypto_skcipher_set_flags(tfm, crypto_blkcipher_get_flags(blkcipher) &
- 				       CRYPTO_TFM_RES_MASK);
--	if (err)
-+	if (unlikely(err)) {
-+		skcipher_set_needkey(tfm);
- 		return err;
-+	}
- 
- 	crypto_skcipher_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
- 	return 0;
-@@ -677,8 +685,7 @@ static int crypto_init_skcipher_ops_blkcipher(struct crypto_tfm *tfm)
- 	skcipher->ivsize = crypto_blkcipher_ivsize(blkcipher);
- 	skcipher->keysize = calg->cra_blkcipher.max_keysize;
- 
--	if (skcipher->keysize)
--		crypto_skcipher_set_flags(skcipher, CRYPTO_TFM_NEED_KEY);
-+	skcipher_set_needkey(skcipher);
- 
- 	return 0;
- }
-@@ -698,8 +705,10 @@ static int skcipher_setkey_ablkcipher(struct crypto_skcipher *tfm,
- 	crypto_skcipher_set_flags(tfm,
- 				  crypto_ablkcipher_get_flags(ablkcipher) &
- 				  CRYPTO_TFM_RES_MASK);
--	if (err)
-+	if (unlikely(err)) {
-+		skcipher_set_needkey(tfm);
- 		return err;
-+	}
- 
- 	crypto_skcipher_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
- 	return 0;
-@@ -776,8 +785,7 @@ static int crypto_init_skcipher_ops_ablkcipher(struct crypto_tfm *tfm)
- 			    sizeof(struct ablkcipher_request);
- 	skcipher->keysize = calg->cra_ablkcipher.max_keysize;
- 
--	if (skcipher->keysize)
--		crypto_skcipher_set_flags(skcipher, CRYPTO_TFM_NEED_KEY);
-+	skcipher_set_needkey(skcipher);
- 
- 	return 0;
- }
-@@ -820,8 +828,10 @@ static int skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key,
- 	else
- 		err = cipher->setkey(tfm, key, keylen);
- 
--	if (err)
-+	if (unlikely(err)) {
-+		skcipher_set_needkey(tfm);
- 		return err;
-+	}
- 
- 	crypto_skcipher_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
- 	return 0;
-@@ -852,8 +862,7 @@ static int crypto_skcipher_init_tfm(struct crypto_tfm *tfm)
- 	skcipher->ivsize = alg->ivsize;
- 	skcipher->keysize = alg->max_keysize;
- 
--	if (skcipher->keysize)
--		crypto_skcipher_set_flags(skcipher, CRYPTO_TFM_NEED_KEY);
-+	skcipher_set_needkey(skcipher);
- 
- 	if (alg->exit)
- 		skcipher->base.exit = crypto_skcipher_exit_tfm;
-diff --git a/crypto/testmgr.c b/crypto/testmgr.c
-index 0f684a414acb..b8e4a3ccbfe0 100644
---- a/crypto/testmgr.c
-+++ b/crypto/testmgr.c
-@@ -1894,14 +1894,21 @@ static int alg_test_crc32c(const struct alg_test_desc *desc,
- 
- 	err = alg_test_hash(desc, driver, type, mask);
- 	if (err)
--		goto out;
-+		return err;
- 
- 	tfm = crypto_alloc_shash(driver, type, mask);
- 	if (IS_ERR(tfm)) {
-+		if (PTR_ERR(tfm) == -ENOENT) {
-+			/*
-+			 * This crc32c implementation is only available through
-+			 * ahash API, not the shash API, so the remaining part
-+			 * of the test is not applicable to it.
-+			 */
-+			return 0;
-+		}
- 		printk(KERN_ERR "alg: crc32c: Failed to load transform for %s: "
- 		       "%ld\n", driver, PTR_ERR(tfm));
--		err = PTR_ERR(tfm);
--		goto out;
-+		return PTR_ERR(tfm);
- 	}
- 
- 	do {
-@@ -1928,7 +1935,6 @@ static int alg_test_crc32c(const struct alg_test_desc *desc,
- 
- 	crypto_free_shash(tfm);
- 
--out:
- 	return err;
- }
- 
-diff --git a/crypto/testmgr.h b/crypto/testmgr.h
-index e8f47d7b92cd..ca8e8ebef309 100644
---- a/crypto/testmgr.h
-+++ b/crypto/testmgr.h
-@@ -12870,6 +12870,31 @@ static const struct cipher_testvec aes_cfb_tv_template[] = {
- 			  "\x75\xa3\x85\x74\x1a\xb9\xce\xf8"
- 			  "\x20\x31\x62\x3d\x55\xb1\xe4\x71",
- 		.len	= 64,
-+		.also_non_np = 1,
-+		.np	= 2,
-+		.tap	= { 31, 33 },
-+	}, { /* > 16 bytes, not a multiple of 16 bytes */
-+		.key	= "\x2b\x7e\x15\x16\x28\xae\xd2\xa6"
-+			  "\xab\xf7\x15\x88\x09\xcf\x4f\x3c",
-+		.klen	= 16,
-+		.iv	= "\x00\x01\x02\x03\x04\x05\x06\x07"
-+			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
-+		.ptext	= "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96"
-+			  "\xe9\x3d\x7e\x11\x73\x93\x17\x2a"
-+			  "\xae",
-+		.ctext	= "\x3b\x3f\xd9\x2e\xb7\x2d\xad\x20"
-+			  "\x33\x34\x49\xf8\xe8\x3c\xfb\x4a"
-+			  "\xc8",
-+		.len	= 17,
-+	}, { /* < 16 bytes */
-+		.key	= "\x2b\x7e\x15\x16\x28\xae\xd2\xa6"
-+			  "\xab\xf7\x15\x88\x09\xcf\x4f\x3c",
-+		.klen	= 16,
-+		.iv	= "\x00\x01\x02\x03\x04\x05\x06\x07"
-+			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
-+		.ptext	= "\x6b\xc1\xbe\xe2\x2e\x40\x9f",
-+		.ctext	= "\x3b\x3f\xd9\x2e\xb7\x2d\xad",
-+		.len	= 7,
- 	},
- };
- 
-@@ -16656,8 +16681,7 @@ static const struct cipher_testvec aes_ctr_rfc3686_tv_template[] = {
- };
- 
- static const struct cipher_testvec aes_ofb_tv_template[] = {
--	 /* From NIST Special Publication 800-38A, Appendix F.5 */
--	{
-+	{ /* From NIST Special Publication 800-38A, Appendix F.5 */
- 		.key	= "\x2b\x7e\x15\x16\x28\xae\xd2\xa6"
- 			  "\xab\xf7\x15\x88\x09\xcf\x4f\x3c",
- 		.klen	= 16,
-@@ -16680,6 +16704,31 @@ static const struct cipher_testvec aes_ofb_tv_template[] = {
- 			  "\x30\x4c\x65\x28\xf6\x59\xc7\x78"
- 			  "\x66\xa5\x10\xd9\xc1\xd6\xae\x5e",
- 		.len	= 64,
-+		.also_non_np = 1,
-+		.np	= 2,
-+		.tap	= { 31, 33 },
-+	}, { /* > 16 bytes, not a multiple of 16 bytes */
-+		.key	= "\x2b\x7e\x15\x16\x28\xae\xd2\xa6"
-+			  "\xab\xf7\x15\x88\x09\xcf\x4f\x3c",
-+		.klen	= 16,
-+		.iv	= "\x00\x01\x02\x03\x04\x05\x06\x07"
-+			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
-+		.ptext	= "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96"
-+			  "\xe9\x3d\x7e\x11\x73\x93\x17\x2a"
-+			  "\xae",
-+		.ctext	= "\x3b\x3f\xd9\x2e\xb7\x2d\xad\x20"
-+			  "\x33\x34\x49\xf8\xe8\x3c\xfb\x4a"
-+			  "\x77",
-+		.len	= 17,
-+	}, { /* < 16 bytes */
-+		.key	= "\x2b\x7e\x15\x16\x28\xae\xd2\xa6"
-+			  "\xab\xf7\x15\x88\x09\xcf\x4f\x3c",
-+		.klen	= 16,
-+		.iv	= "\x00\x01\x02\x03\x04\x05\x06\x07"
-+			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
-+		.ptext	= "\x6b\xc1\xbe\xe2\x2e\x40\x9f",
-+		.ctext	= "\x3b\x3f\xd9\x2e\xb7\x2d\xad",
-+		.len	= 7,
- 	}
- };
- 
-diff --git a/drivers/acpi/acpi_video.c b/drivers/acpi/acpi_video.c
-index f0b52266b3ac..d73afb562ad9 100644
---- a/drivers/acpi/acpi_video.c
-+++ b/drivers/acpi/acpi_video.c
-@@ -2124,21 +2124,29 @@ static int __init intel_opregion_present(void)
- 	return opregion;
- }
- 
-+/* Check if the chassis-type indicates there is no builtin LCD panel */
- static bool dmi_is_desktop(void)
- {
- 	const char *chassis_type;
-+	unsigned long type;
- 
- 	chassis_type = dmi_get_system_info(DMI_CHASSIS_TYPE);
- 	if (!chassis_type)
- 		return false;
- 
--	if (!strcmp(chassis_type, "3") || /*  3: Desktop */
--	    !strcmp(chassis_type, "4") || /*  4: Low Profile Desktop */
--	    !strcmp(chassis_type, "5") || /*  5: Pizza Box */
--	    !strcmp(chassis_type, "6") || /*  6: Mini Tower */
--	    !strcmp(chassis_type, "7") || /*  7: Tower */
--	    !strcmp(chassis_type, "11"))  /* 11: Main Server Chassis */
-+	if (kstrtoul(chassis_type, 10, &type) != 0)
-+		return false;
-+
-+	switch (type) {
-+	case 0x03: /* Desktop */
-+	case 0x04: /* Low Profile Desktop */
-+	case 0x05: /* Pizza Box */
-+	case 0x06: /* Mini Tower */
-+	case 0x07: /* Tower */
-+	case 0x10: /* Lunch Box */
-+	case 0x11: /* Main Server Chassis */
- 		return true;
-+	}
- 
- 	return false;
- }
-diff --git a/drivers/acpi/acpica/evgpe.c b/drivers/acpi/acpica/evgpe.c
-index e10fec99a182..4424997ecf30 100644
---- a/drivers/acpi/acpica/evgpe.c
-+++ b/drivers/acpi/acpica/evgpe.c
-@@ -81,8 +81,12 @@ acpi_status acpi_ev_enable_gpe(struct acpi_gpe_event_info *gpe_event_info)
- 
- 	ACPI_FUNCTION_TRACE(ev_enable_gpe);
- 
--	/* Enable the requested GPE */
-+	/* Clear the GPE status */
-+	status = acpi_hw_clear_gpe(gpe_event_info);
-+	if (ACPI_FAILURE(status))
-+		return_ACPI_STATUS(status);
- 
-+	/* Enable the requested GPE */
- 	status = acpi_hw_low_set_gpe(gpe_event_info, ACPI_GPE_ENABLE);
- 	return_ACPI_STATUS(status);
- }
-diff --git a/drivers/acpi/acpica/nsobject.c b/drivers/acpi/acpica/nsobject.c
-index 8638f43cfc3d..79d86da1c892 100644
---- a/drivers/acpi/acpica/nsobject.c
-+++ b/drivers/acpi/acpica/nsobject.c
-@@ -186,6 +186,10 @@ void acpi_ns_detach_object(struct acpi_namespace_node *node)
- 		}
- 	}
- 
-+	if (obj_desc->common.type == ACPI_TYPE_REGION) {
-+		acpi_ut_remove_address_range(obj_desc->region.space_id, node);
-+	}
-+
- 	/* Clear the Node entry in all cases */
- 
- 	node->object = NULL;
-diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
-index 217a782c3e55..7aa08884ed48 100644
---- a/drivers/acpi/cppc_acpi.c
-+++ b/drivers/acpi/cppc_acpi.c
-@@ -1108,8 +1108,13 @@ int cppc_get_perf_caps(int cpunum, struct cppc_perf_caps *perf_caps)
- 	cpc_read(cpunum, nominal_reg, &nom);
- 	perf_caps->nominal_perf = nom;
- 
--	cpc_read(cpunum, guaranteed_reg, &guaranteed);
--	perf_caps->guaranteed_perf = guaranteed;
-+	if (guaranteed_reg->type != ACPI_TYPE_BUFFER  ||
-+	    IS_NULL_REG(&guaranteed_reg->cpc_entry.reg)) {
-+		perf_caps->guaranteed_perf = 0;
-+	} else {
-+		cpc_read(cpunum, guaranteed_reg, &guaranteed);
-+		perf_caps->guaranteed_perf = guaranteed;
-+	}
- 
- 	cpc_read(cpunum, lowest_non_linear_reg, &min_nonlinear);
- 	perf_caps->lowest_nonlinear_perf = min_nonlinear;
-diff --git a/drivers/acpi/device_sysfs.c b/drivers/acpi/device_sysfs.c
-index 545e91420cde..8940054d6250 100644
---- a/drivers/acpi/device_sysfs.c
-+++ b/drivers/acpi/device_sysfs.c
-@@ -202,11 +202,15 @@ static int create_of_modalias(struct acpi_device *acpi_dev, char *modalias,
- {
- 	struct acpi_buffer buf = { ACPI_ALLOCATE_BUFFER };
- 	const union acpi_object *of_compatible, *obj;
-+	acpi_status status;
- 	int len, count;
- 	int i, nval;
- 	char *c;
- 
--	acpi_get_name(acpi_dev->handle, ACPI_SINGLE_NAME, &buf);
-+	status = acpi_get_name(acpi_dev->handle, ACPI_SINGLE_NAME, &buf);
-+	if (ACPI_FAILURE(status))
-+		return -ENODEV;
-+
- 	/* DT strings are all in lower case */
- 	for (c = buf.pointer; *c != '\0'; c++)
- 		*c = tolower(*c);
-diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
-index e18ade5d74e9..f75f8f870ce3 100644
---- a/drivers/acpi/nfit/core.c
-+++ b/drivers/acpi/nfit/core.c
-@@ -415,7 +415,7 @@ static int cmd_to_func(struct nfit_mem *nfit_mem, unsigned int cmd,
- 	if (call_pkg) {
- 		int i;
- 
--		if (nfit_mem->family != call_pkg->nd_family)
-+		if (nfit_mem && nfit_mem->family != call_pkg->nd_family)
- 			return -ENOTTY;
- 
- 		for (i = 0; i < ARRAY_SIZE(call_pkg->nd_reserved2); i++)
-@@ -424,6 +424,10 @@ static int cmd_to_func(struct nfit_mem *nfit_mem, unsigned int cmd,
- 		return call_pkg->nd_command;
- 	}
- 
-+	/* In the !call_pkg case, bus commands == bus functions */
-+	if (!nfit_mem)
-+		return cmd;
-+
- 	/* Linux ND commands == NVDIMM_FAMILY_INTEL function numbers */
- 	if (nfit_mem->family == NVDIMM_FAMILY_INTEL)
- 		return cmd;
-@@ -454,17 +458,18 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,
- 	if (cmd_rc)
- 		*cmd_rc = -EINVAL;
- 
-+	if (cmd == ND_CMD_CALL)
-+		call_pkg = buf;
-+	func = cmd_to_func(nfit_mem, cmd, call_pkg);
-+	if (func < 0)
-+		return func;
-+
- 	if (nvdimm) {
- 		struct acpi_device *adev = nfit_mem->adev;
- 
- 		if (!adev)
- 			return -ENOTTY;
- 
--		if (cmd == ND_CMD_CALL)
--			call_pkg = buf;
--		func = cmd_to_func(nfit_mem, cmd, call_pkg);
--		if (func < 0)
--			return func;
- 		dimm_name = nvdimm_name(nvdimm);
- 		cmd_name = nvdimm_cmd_name(cmd);
- 		cmd_mask = nvdimm_cmd_mask(nvdimm);
-@@ -475,12 +480,9 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,
- 	} else {
- 		struct acpi_device *adev = to_acpi_dev(acpi_desc);
- 
--		func = cmd;
- 		cmd_name = nvdimm_bus_cmd_name(cmd);
- 		cmd_mask = nd_desc->cmd_mask;
--		dsm_mask = cmd_mask;
--		if (cmd == ND_CMD_CALL)
--			dsm_mask = nd_desc->bus_dsm_mask;
-+		dsm_mask = nd_desc->bus_dsm_mask;
- 		desc = nd_cmd_bus_desc(cmd);
- 		guid = to_nfit_uuid(NFIT_DEV_BUS);
- 		handle = adev->handle;
-@@ -554,6 +556,13 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,
- 		return -EINVAL;
- 	}
- 
-+	if (out_obj->type != ACPI_TYPE_BUFFER) {
-+		dev_dbg(dev, "%s unexpected output object type cmd: %s type: %d\n",
-+				dimm_name, cmd_name, out_obj->type);
-+		rc = -EINVAL;
-+		goto out;
-+	}
-+
- 	if (call_pkg) {
- 		call_pkg->nd_fw_size = out_obj->buffer.length;
- 		memcpy(call_pkg->nd_payload + call_pkg->nd_size_in,
-@@ -572,13 +581,6 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,
- 		return 0;
- 	}
- 
--	if (out_obj->package.type != ACPI_TYPE_BUFFER) {
--		dev_dbg(dev, "%s unexpected output object type cmd: %s type: %d\n",
--				dimm_name, cmd_name, out_obj->type);
--		rc = -EINVAL;
--		goto out;
--	}
--
- 	dev_dbg(dev, "%s cmd: %s output length: %d\n", dimm_name,
- 			cmd_name, out_obj->buffer.length);
- 	print_hex_dump_debug(cmd_name, DUMP_PREFIX_OFFSET, 4, 4,
-@@ -1759,14 +1761,14 @@ static bool acpi_nvdimm_has_method(struct acpi_device *adev, char *method)
- 
- __weak void nfit_intel_shutdown_status(struct nfit_mem *nfit_mem)
- {
-+	struct device *dev = &nfit_mem->adev->dev;
- 	struct nd_intel_smart smart = { 0 };
- 	union acpi_object in_buf = {
--		.type = ACPI_TYPE_BUFFER,
--		.buffer.pointer = (char *) &smart,
--		.buffer.length = sizeof(smart),
-+		.buffer.type = ACPI_TYPE_BUFFER,
-+		.buffer.length = 0,
- 	};
- 	union acpi_object in_obj = {
--		.type = ACPI_TYPE_PACKAGE,
-+		.package.type = ACPI_TYPE_PACKAGE,
- 		.package.count = 1,
- 		.package.elements = &in_buf,
- 	};
-@@ -1781,8 +1783,15 @@ __weak void nfit_intel_shutdown_status(struct nfit_mem *nfit_mem)
- 		return;
- 
- 	out_obj = acpi_evaluate_dsm(handle, guid, revid, func, &in_obj);
--	if (!out_obj)
-+	if (!out_obj || out_obj->type != ACPI_TYPE_BUFFER
-+			|| out_obj->buffer.length < sizeof(smart)) {
-+		dev_dbg(dev->parent, "%s: failed to retrieve initial health\n",
-+				dev_name(dev));
-+		ACPI_FREE(out_obj);
- 		return;
-+	}
-+	memcpy(&smart, out_obj->buffer.pointer, sizeof(smart));
-+	ACPI_FREE(out_obj);
- 
- 	if (smart.flags & ND_INTEL_SMART_SHUTDOWN_VALID) {
- 		if (smart.shutdown_state)
-@@ -1793,7 +1802,6 @@ __weak void nfit_intel_shutdown_status(struct nfit_mem *nfit_mem)
- 		set_bit(NFIT_MEM_DIRTY_COUNT, &nfit_mem->flags);
- 		nfit_mem->dirty_shutdown = smart.shutdown_count;
- 	}
--	ACPI_FREE(out_obj);
- }
- 
- static void populate_shutdown_status(struct nfit_mem *nfit_mem)
-@@ -1915,18 +1923,19 @@ static int acpi_nfit_add_dimm(struct acpi_nfit_desc *acpi_desc,
- 		| 1 << ND_CMD_SET_CONFIG_DATA;
- 	if (family == NVDIMM_FAMILY_INTEL
- 			&& (dsm_mask & label_mask) == label_mask)
--		return 0;
--
--	if (acpi_nvdimm_has_method(adev_dimm, "_LSI")
--			&& acpi_nvdimm_has_method(adev_dimm, "_LSR")) {
--		dev_dbg(dev, "%s: has _LSR\n", dev_name(&adev_dimm->dev));
--		set_bit(NFIT_MEM_LSR, &nfit_mem->flags);
--	}
-+		/* skip _LS{I,R,W} enabling */;
-+	else {
-+		if (acpi_nvdimm_has_method(adev_dimm, "_LSI")
-+				&& acpi_nvdimm_has_method(adev_dimm, "_LSR")) {
-+			dev_dbg(dev, "%s: has _LSR\n", dev_name(&adev_dimm->dev));
-+			set_bit(NFIT_MEM_LSR, &nfit_mem->flags);
-+		}
- 
--	if (test_bit(NFIT_MEM_LSR, &nfit_mem->flags)
--			&& acpi_nvdimm_has_method(adev_dimm, "_LSW")) {
--		dev_dbg(dev, "%s: has _LSW\n", dev_name(&adev_dimm->dev));
--		set_bit(NFIT_MEM_LSW, &nfit_mem->flags);
-+		if (test_bit(NFIT_MEM_LSR, &nfit_mem->flags)
-+				&& acpi_nvdimm_has_method(adev_dimm, "_LSW")) {
-+			dev_dbg(dev, "%s: has _LSW\n", dev_name(&adev_dimm->dev));
-+			set_bit(NFIT_MEM_LSW, &nfit_mem->flags);
-+		}
- 	}
- 
- 	populate_shutdown_status(nfit_mem);
-@@ -3004,14 +3013,16 @@ static int ars_register(struct acpi_nfit_desc *acpi_desc,
- {
- 	int rc;
- 
--	if (no_init_ars || test_bit(ARS_FAILED, &nfit_spa->ars_state))
-+	if (test_bit(ARS_FAILED, &nfit_spa->ars_state))
- 		return acpi_nfit_register_region(acpi_desc, nfit_spa);
- 
- 	set_bit(ARS_REQ_SHORT, &nfit_spa->ars_state);
--	set_bit(ARS_REQ_LONG, &nfit_spa->ars_state);
-+	if (!no_init_ars)
-+		set_bit(ARS_REQ_LONG, &nfit_spa->ars_state);
- 
- 	switch (acpi_nfit_query_poison(acpi_desc)) {
- 	case 0:
-+	case -ENOSPC:
- 	case -EAGAIN:
- 		rc = ars_start(acpi_desc, nfit_spa, ARS_REQ_SHORT);
- 		/* shouldn't happen, try again later */
-@@ -3036,7 +3047,6 @@ static int ars_register(struct acpi_nfit_desc *acpi_desc,
- 		break;
- 	case -EBUSY:
- 	case -ENOMEM:
--	case -ENOSPC:
- 		/*
- 		 * BIOS was using ARS, wait for it to complete (or
- 		 * resources to become available) and then perform our
-diff --git a/drivers/android/binder.c b/drivers/android/binder.c
-index 4d2b2ad1ee0e..01f80cbd2741 100644
---- a/drivers/android/binder.c
-+++ b/drivers/android/binder.c
-@@ -329,6 +329,8 @@ struct binder_error {
-  *                        (invariant after initialized)
-  * @min_priority:         minimum scheduling priority
-  *                        (invariant after initialized)
-+ * @txn_security_ctx:     require sender's security context
-+ *                        (invariant after initialized)
-  * @async_todo:           list of async work items
-  *                        (protected by @proc->inner_lock)
-  *
-@@ -365,6 +367,7 @@ struct binder_node {
- 		 * invariant after initialization
- 		 */
- 		u8 accept_fds:1;
-+		u8 txn_security_ctx:1;
- 		u8 min_priority;
- 	};
- 	bool has_async_transaction;
-@@ -615,6 +618,7 @@ struct binder_transaction {
- 	long	saved_priority;
- 	kuid_t	sender_euid;
- 	struct list_head fd_fixups;
-+	binder_uintptr_t security_ctx;
- 	/**
- 	 * @lock:  protects @from, @to_proc, and @to_thread
- 	 *
-@@ -1152,6 +1156,7 @@ static struct binder_node *binder_init_node_ilocked(
- 	node->work.type = BINDER_WORK_NODE;
- 	node->min_priority = flags & FLAT_BINDER_FLAG_PRIORITY_MASK;
- 	node->accept_fds = !!(flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);
-+	node->txn_security_ctx = !!(flags & FLAT_BINDER_FLAG_TXN_SECURITY_CTX);
- 	spin_lock_init(&node->lock);
- 	INIT_LIST_HEAD(&node->work.entry);
- 	INIT_LIST_HEAD(&node->async_todo);
-@@ -2778,6 +2783,8 @@ static void binder_transaction(struct binder_proc *proc,
- 	binder_size_t last_fixup_min_off = 0;
- 	struct binder_context *context = proc->context;
- 	int t_debug_id = atomic_inc_return(&binder_last_id);
-+	char *secctx = NULL;
-+	u32 secctx_sz = 0;
- 
- 	e = binder_transaction_log_add(&binder_transaction_log);
- 	e->debug_id = t_debug_id;
-@@ -3020,6 +3027,20 @@ static void binder_transaction(struct binder_proc *proc,
- 	t->flags = tr->flags;
- 	t->priority = task_nice(current);
- 
-+	if (target_node && target_node->txn_security_ctx) {
-+		u32 secid;
-+
-+		security_task_getsecid(proc->tsk, &secid);
-+		ret = security_secid_to_secctx(secid, &secctx, &secctx_sz);
-+		if (ret) {
-+			return_error = BR_FAILED_REPLY;
-+			return_error_param = ret;
-+			return_error_line = __LINE__;
-+			goto err_get_secctx_failed;
-+		}
-+		extra_buffers_size += ALIGN(secctx_sz, sizeof(u64));
-+	}
-+
- 	trace_binder_transaction(reply, t, target_node);
- 
- 	t->buffer = binder_alloc_new_buf(&target_proc->alloc, tr->data_size,
-@@ -3036,6 +3057,19 @@ static void binder_transaction(struct binder_proc *proc,
- 		t->buffer = NULL;
- 		goto err_binder_alloc_buf_failed;
- 	}
-+	if (secctx) {
-+		size_t buf_offset = ALIGN(tr->data_size, sizeof(void *)) +
-+				    ALIGN(tr->offsets_size, sizeof(void *)) +
-+				    ALIGN(extra_buffers_size, sizeof(void *)) -
-+				    ALIGN(secctx_sz, sizeof(u64));
-+		char *kptr = t->buffer->data + buf_offset;
-+
-+		t->security_ctx = (uintptr_t)kptr +
-+		    binder_alloc_get_user_buffer_offset(&target_proc->alloc);
-+		memcpy(kptr, secctx, secctx_sz);
-+		security_release_secctx(secctx, secctx_sz);
-+		secctx = NULL;
-+	}
- 	t->buffer->debug_id = t->debug_id;
- 	t->buffer->transaction = t;
- 	t->buffer->target_node = target_node;
-@@ -3305,6 +3339,9 @@ err_copy_data_failed:
- 	t->buffer->transaction = NULL;
- 	binder_alloc_free_buf(&target_proc->alloc, t->buffer);
- err_binder_alloc_buf_failed:
-+	if (secctx)
-+		security_release_secctx(secctx, secctx_sz);
-+err_get_secctx_failed:
- 	kfree(tcomplete);
- 	binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
- err_alloc_tcomplete_failed:
-@@ -4036,11 +4073,13 @@ retry:
- 
- 	while (1) {
- 		uint32_t cmd;
--		struct binder_transaction_data tr;
-+		struct binder_transaction_data_secctx tr;
-+		struct binder_transaction_data *trd = &tr.transaction_data;
- 		struct binder_work *w = NULL;
- 		struct list_head *list = NULL;
- 		struct binder_transaction *t = NULL;
- 		struct binder_thread *t_from;
-+		size_t trsize = sizeof(*trd);
- 
- 		binder_inner_proc_lock(proc);
- 		if (!binder_worklist_empty_ilocked(&thread->todo))
-@@ -4240,8 +4279,8 @@ retry:
- 		if (t->buffer->target_node) {
- 			struct binder_node *target_node = t->buffer->target_node;
- 
--			tr.target.ptr = target_node->ptr;
--			tr.cookie =  target_node->cookie;
-+			trd->target.ptr = target_node->ptr;
-+			trd->cookie =  target_node->cookie;
- 			t->saved_priority = task_nice(current);
- 			if (t->priority < target_node->min_priority &&
- 			    !(t->flags & TF_ONE_WAY))
-@@ -4251,22 +4290,23 @@ retry:
- 				binder_set_nice(target_node->min_priority);
- 			cmd = BR_TRANSACTION;
- 		} else {
--			tr.target.ptr = 0;
--			tr.cookie = 0;
-+			trd->target.ptr = 0;
-+			trd->cookie = 0;
- 			cmd = BR_REPLY;
- 		}
--		tr.code = t->code;
--		tr.flags = t->flags;
--		tr.sender_euid = from_kuid(current_user_ns(), t->sender_euid);
-+		trd->code = t->code;
-+		trd->flags = t->flags;
-+		trd->sender_euid = from_kuid(current_user_ns(), t->sender_euid);
- 
- 		t_from = binder_get_txn_from(t);
- 		if (t_from) {
- 			struct task_struct *sender = t_from->proc->tsk;
- 
--			tr.sender_pid = task_tgid_nr_ns(sender,
--							task_active_pid_ns(current));
-+			trd->sender_pid =
-+				task_tgid_nr_ns(sender,
-+						task_active_pid_ns(current));
- 		} else {
--			tr.sender_pid = 0;
-+			trd->sender_pid = 0;
- 		}
- 
- 		ret = binder_apply_fd_fixups(t);
-@@ -4297,15 +4337,20 @@ retry:
- 			}
- 			continue;
- 		}
--		tr.data_size = t->buffer->data_size;
--		tr.offsets_size = t->buffer->offsets_size;
--		tr.data.ptr.buffer = (binder_uintptr_t)
-+		trd->data_size = t->buffer->data_size;
-+		trd->offsets_size = t->buffer->offsets_size;
-+		trd->data.ptr.buffer = (binder_uintptr_t)
- 			((uintptr_t)t->buffer->data +
- 			binder_alloc_get_user_buffer_offset(&proc->alloc));
--		tr.data.ptr.offsets = tr.data.ptr.buffer +
-+		trd->data.ptr.offsets = trd->data.ptr.buffer +
- 					ALIGN(t->buffer->data_size,
- 					    sizeof(void *));
- 
-+		tr.secctx = t->security_ctx;
-+		if (t->security_ctx) {
-+			cmd = BR_TRANSACTION_SEC_CTX;
-+			trsize = sizeof(tr);
-+		}
- 		if (put_user(cmd, (uint32_t __user *)ptr)) {
- 			if (t_from)
- 				binder_thread_dec_tmpref(t_from);
-@@ -4316,7 +4361,7 @@ retry:
- 			return -EFAULT;
- 		}
- 		ptr += sizeof(uint32_t);
--		if (copy_to_user(ptr, &tr, sizeof(tr))) {
-+		if (copy_to_user(ptr, &tr, trsize)) {
- 			if (t_from)
- 				binder_thread_dec_tmpref(t_from);
- 
-@@ -4325,7 +4370,7 @@ retry:
- 
- 			return -EFAULT;
- 		}
--		ptr += sizeof(tr);
-+		ptr += trsize;
- 
- 		trace_binder_transaction_received(t);
- 		binder_stat_br(proc, thread, cmd);
-@@ -4333,16 +4378,18 @@ retry:
- 			     "%d:%d %s %d %d:%d, cmd %d size %zd-%zd ptr %016llx-%016llx\n",
- 			     proc->pid, thread->pid,
- 			     (cmd == BR_TRANSACTION) ? "BR_TRANSACTION" :
--			     "BR_REPLY",
-+				(cmd == BR_TRANSACTION_SEC_CTX) ?
-+				     "BR_TRANSACTION_SEC_CTX" : "BR_REPLY",
- 			     t->debug_id, t_from ? t_from->proc->pid : 0,
- 			     t_from ? t_from->pid : 0, cmd,
- 			     t->buffer->data_size, t->buffer->offsets_size,
--			     (u64)tr.data.ptr.buffer, (u64)tr.data.ptr.offsets);
-+			     (u64)trd->data.ptr.buffer,
-+			     (u64)trd->data.ptr.offsets);
- 
- 		if (t_from)
- 			binder_thread_dec_tmpref(t_from);
- 		t->buffer->allow_user_free = 1;
--		if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {
-+		if (cmd != BR_REPLY && !(t->flags & TF_ONE_WAY)) {
- 			binder_inner_proc_lock(thread->proc);
- 			t->to_parent = thread->transaction_stack;
- 			t->to_thread = thread;
-@@ -4690,7 +4737,8 @@ out:
- 	return ret;
- }
- 
--static int binder_ioctl_set_ctx_mgr(struct file *filp)
-+static int binder_ioctl_set_ctx_mgr(struct file *filp,
-+				    struct flat_binder_object *fbo)
- {
- 	int ret = 0;
- 	struct binder_proc *proc = filp->private_data;
-@@ -4719,7 +4767,7 @@ static int binder_ioctl_set_ctx_mgr(struct file *filp)
- 	} else {
- 		context->binder_context_mgr_uid = curr_euid;
- 	}
--	new_node = binder_new_node(proc, NULL);
-+	new_node = binder_new_node(proc, fbo);
- 	if (!new_node) {
- 		ret = -ENOMEM;
- 		goto out;
-@@ -4842,8 +4890,20 @@ static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
- 		binder_inner_proc_unlock(proc);
- 		break;
- 	}
-+	case BINDER_SET_CONTEXT_MGR_EXT: {
-+		struct flat_binder_object fbo;
-+
-+		if (copy_from_user(&fbo, ubuf, sizeof(fbo))) {
-+			ret = -EINVAL;
-+			goto err;
-+		}
-+		ret = binder_ioctl_set_ctx_mgr(filp, &fbo);
-+		if (ret)
-+			goto err;
-+		break;
-+	}
- 	case BINDER_SET_CONTEXT_MGR:
--		ret = binder_ioctl_set_ctx_mgr(filp);
-+		ret = binder_ioctl_set_ctx_mgr(filp, NULL);
- 		if (ret)
- 			goto err;
- 		break;
-diff --git a/drivers/base/dd.c b/drivers/base/dd.c
-index 8ac10af17c00..d62487d02455 100644
---- a/drivers/base/dd.c
-+++ b/drivers/base/dd.c
-@@ -968,9 +968,9 @@ static void __device_release_driver(struct device *dev, struct device *parent)
- 			drv->remove(dev);
- 
- 		device_links_driver_cleanup(dev);
--		arch_teardown_dma_ops(dev);
- 
- 		devres_release_all(dev);
-+		arch_teardown_dma_ops(dev);
- 		dev->driver = NULL;
- 		dev_set_drvdata(dev, NULL);
- 		if (dev->pm_domain && dev->pm_domain->dismiss)
-diff --git a/drivers/base/power/wakeup.c b/drivers/base/power/wakeup.c
-index 5fa1898755a3..7c84f64c74f7 100644
---- a/drivers/base/power/wakeup.c
-+++ b/drivers/base/power/wakeup.c
-@@ -118,7 +118,6 @@ void wakeup_source_drop(struct wakeup_source *ws)
- 	if (!ws)
- 		return;
- 
--	del_timer_sync(&ws->timer);
- 	__pm_relax(ws);
- }
- EXPORT_SYMBOL_GPL(wakeup_source_drop);
-@@ -205,6 +204,13 @@ void wakeup_source_remove(struct wakeup_source *ws)
- 	list_del_rcu(&ws->entry);
- 	raw_spin_unlock_irqrestore(&events_lock, flags);
- 	synchronize_srcu(&wakeup_srcu);
-+
-+	del_timer_sync(&ws->timer);
-+	/*
-+	 * Clear timer.function to make wakeup_source_not_registered() treat
-+	 * this wakeup source as not registered.
-+	 */
-+	ws->timer.function = NULL;
- }
- EXPORT_SYMBOL_GPL(wakeup_source_remove);
- 
-diff --git a/drivers/block/loop.c b/drivers/block/loop.c
-index cf5538942834..9a8d83bc1e75 100644
---- a/drivers/block/loop.c
-+++ b/drivers/block/loop.c
-@@ -656,7 +656,7 @@ static int loop_validate_file(struct file *file, struct block_device *bdev)
- 			return -EBADF;
- 
- 		l = f->f_mapping->host->i_bdev->bd_disk->private_data;
--		if (l->lo_state == Lo_unbound) {
-+		if (l->lo_state != Lo_bound) {
- 			return -EINVAL;
- 		}
- 		f = l->lo_backing_file;
-@@ -1089,16 +1089,12 @@ static int __loop_clr_fd(struct loop_device *lo, bool release)
- 		kobject_uevent(&disk_to_dev(bdev->bd_disk)->kobj, KOBJ_CHANGE);
- 	}
- 	mapping_set_gfp_mask(filp->f_mapping, gfp);
--	lo->lo_state = Lo_unbound;
- 	/* This is safe: open() is still holding a reference. */
- 	module_put(THIS_MODULE);
- 	blk_mq_unfreeze_queue(lo->lo_queue);
- 
- 	partscan = lo->lo_flags & LO_FLAGS_PARTSCAN && bdev;
- 	lo_number = lo->lo_number;
--	lo->lo_flags = 0;
--	if (!part_shift)
--		lo->lo_disk->flags |= GENHD_FL_NO_PART_SCAN;
- 	loop_unprepare_queue(lo);
- out_unlock:
- 	mutex_unlock(&loop_ctl_mutex);
-@@ -1120,6 +1116,23 @@ out_unlock:
- 		/* Device is gone, no point in returning error */
- 		err = 0;
- 	}
-+
-+	/*
-+	 * lo->lo_state is set to Lo_unbound here after above partscan has
-+	 * finished.
-+	 *
-+	 * There cannot be anybody else entering __loop_clr_fd() as
-+	 * lo->lo_backing_file is already cleared and Lo_rundown state
-+	 * protects us from all the other places trying to change the 'lo'
-+	 * device.
-+	 */
-+	mutex_lock(&loop_ctl_mutex);
-+	lo->lo_flags = 0;
-+	if (!part_shift)
-+		lo->lo_disk->flags |= GENHD_FL_NO_PART_SCAN;
-+	lo->lo_state = Lo_unbound;
-+	mutex_unlock(&loop_ctl_mutex);
-+
- 	/*
- 	 * Need not hold loop_ctl_mutex to fput backing file.
- 	 * Calling fput holding loop_ctl_mutex triggers a circular
-diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
-index 04ca65912638..684854d3b0ad 100644
---- a/drivers/block/zram/zram_drv.c
-+++ b/drivers/block/zram/zram_drv.c
-@@ -290,18 +290,8 @@ static ssize_t idle_store(struct device *dev,
- 	struct zram *zram = dev_to_zram(dev);
- 	unsigned long nr_pages = zram->disksize >> PAGE_SHIFT;
- 	int index;
--	char mode_buf[8];
--	ssize_t sz;
- 
--	sz = strscpy(mode_buf, buf, sizeof(mode_buf));
--	if (sz <= 0)
--		return -EINVAL;
--
--	/* ignore trailing new line */
--	if (mode_buf[sz - 1] == '\n')
--		mode_buf[sz - 1] = 0x00;
--
--	if (strcmp(mode_buf, "all"))
-+	if (!sysfs_streq(buf, "all"))
- 		return -EINVAL;
- 
- 	down_read(&zram->init_lock);
-@@ -635,25 +625,15 @@ static ssize_t writeback_store(struct device *dev,
- 	struct bio bio;
- 	struct bio_vec bio_vec;
- 	struct page *page;
--	ssize_t ret, sz;
--	char mode_buf[8];
--	int mode = -1;
-+	ssize_t ret;
-+	int mode;
- 	unsigned long blk_idx = 0;
- 
--	sz = strscpy(mode_buf, buf, sizeof(mode_buf));
--	if (sz <= 0)
--		return -EINVAL;
--
--	/* ignore trailing newline */
--	if (mode_buf[sz - 1] == '\n')
--		mode_buf[sz - 1] = 0x00;
--
--	if (!strcmp(mode_buf, "idle"))
-+	if (sysfs_streq(buf, "idle"))
- 		mode = IDLE_WRITEBACK;
--	else if (!strcmp(mode_buf, "huge"))
-+	else if (sysfs_streq(buf, "huge"))
- 		mode = HUGE_WRITEBACK;
--
--	if (mode == -1)
-+	else
- 		return -EINVAL;
- 
- 	down_read(&zram->init_lock);
-diff --git a/drivers/bluetooth/btrtl.c b/drivers/bluetooth/btrtl.c
-index 41405de27d66..c91bba00df4e 100644
---- a/drivers/bluetooth/btrtl.c
-+++ b/drivers/bluetooth/btrtl.c
-@@ -552,10 +552,9 @@ struct btrtl_device_info *btrtl_initialize(struct hci_dev *hdev,
- 					    hdev->bus);
- 
- 	if (!btrtl_dev->ic_info) {
--		rtl_dev_err(hdev, "rtl: unknown IC info, lmp subver %04x, hci rev %04x, hci ver %04x",
-+		rtl_dev_info(hdev, "rtl: unknown IC info, lmp subver %04x, hci rev %04x, hci ver %04x",
- 			    lmp_subver, hci_rev, hci_ver);
--		ret = -EINVAL;
--		goto err_free;
-+		return btrtl_dev;
- 	}
- 
- 	if (btrtl_dev->ic_info->has_rom_version) {
-@@ -610,6 +609,11 @@ int btrtl_download_firmware(struct hci_dev *hdev,
- 	 * standard btusb. Once that firmware is uploaded, the subver changes
- 	 * to a different value.
- 	 */
-+	if (!btrtl_dev->ic_info) {
-+		rtl_dev_info(hdev, "rtl: assuming no firmware upload needed\n");
-+		return 0;
-+	}
-+
- 	switch (btrtl_dev->ic_info->lmp_subver) {
- 	case RTL_ROM_LMP_8723A:
- 	case RTL_ROM_LMP_3499:
-diff --git a/drivers/bluetooth/h4_recv.h b/drivers/bluetooth/h4_recv.h
-index b432651f8236..307d82166f48 100644
---- a/drivers/bluetooth/h4_recv.h
-+++ b/drivers/bluetooth/h4_recv.h
-@@ -60,6 +60,10 @@ static inline struct sk_buff *h4_recv_buf(struct hci_dev *hdev,
- 					  const struct h4_recv_pkt *pkts,
- 					  int pkts_count)
- {
-+	/* Check for error from previous call */
-+	if (IS_ERR(skb))
-+		skb = NULL;
-+
- 	while (count) {
- 		int i, len;
- 
-diff --git a/drivers/bluetooth/hci_h4.c b/drivers/bluetooth/hci_h4.c
-index fb97a3bf069b..5d97d77627c1 100644
---- a/drivers/bluetooth/hci_h4.c
-+++ b/drivers/bluetooth/hci_h4.c
-@@ -174,6 +174,10 @@ struct sk_buff *h4_recv_buf(struct hci_dev *hdev, struct sk_buff *skb,
- 	struct hci_uart *hu = hci_get_drvdata(hdev);
- 	u8 alignment = hu->alignment ? hu->alignment : 1;
- 
-+	/* Check for error from previous call */
-+	if (IS_ERR(skb))
-+		skb = NULL;
-+
- 	while (count) {
- 		int i, len;
- 
-diff --git a/drivers/bluetooth/hci_ldisc.c b/drivers/bluetooth/hci_ldisc.c
-index fbf7b4df23ab..9562e72c1ae5 100644
---- a/drivers/bluetooth/hci_ldisc.c
-+++ b/drivers/bluetooth/hci_ldisc.c
-@@ -207,11 +207,11 @@ void hci_uart_init_work(struct work_struct *work)
- 	err = hci_register_dev(hu->hdev);
- 	if (err < 0) {
- 		BT_ERR("Can't register HCI device");
-+		clear_bit(HCI_UART_PROTO_READY, &hu->flags);
-+		hu->proto->close(hu);
- 		hdev = hu->hdev;
- 		hu->hdev = NULL;
- 		hci_free_dev(hdev);
--		clear_bit(HCI_UART_PROTO_READY, &hu->flags);
--		hu->proto->close(hu);
- 		return;
- 	}
- 
-@@ -616,6 +616,7 @@ static void hci_uart_tty_receive(struct tty_struct *tty, const u8 *data,
- static int hci_uart_register_dev(struct hci_uart *hu)
- {
- 	struct hci_dev *hdev;
-+	int err;
- 
- 	BT_DBG("");
- 
-@@ -659,11 +660,22 @@ static int hci_uart_register_dev(struct hci_uart *hu)
- 	else
- 		hdev->dev_type = HCI_PRIMARY;
- 
-+	/* Only call open() for the protocol after hdev is fully initialized as
-+	 * open() (or a timer/workqueue it starts) may attempt to reference it.
-+	 */
-+	err = hu->proto->open(hu);
-+	if (err) {
-+		hu->hdev = NULL;
-+		hci_free_dev(hdev);
-+		return err;
-+	}
-+
- 	if (test_bit(HCI_UART_INIT_PENDING, &hu->hdev_flags))
- 		return 0;
- 
- 	if (hci_register_dev(hdev) < 0) {
- 		BT_ERR("Can't register HCI device");
-+		hu->proto->close(hu);
- 		hu->hdev = NULL;
- 		hci_free_dev(hdev);
- 		return -ENODEV;
-@@ -683,20 +695,14 @@ static int hci_uart_set_proto(struct hci_uart *hu, int id)
- 	if (!p)
- 		return -EPROTONOSUPPORT;
- 
--	err = p->open(hu);
--	if (err)
--		return err;
--
- 	hu->proto = p;
--	set_bit(HCI_UART_PROTO_READY, &hu->flags);
- 
- 	err = hci_uart_register_dev(hu);
- 	if (err) {
--		clear_bit(HCI_UART_PROTO_READY, &hu->flags);
--		p->close(hu);
- 		return err;
- 	}
- 
-+	set_bit(HCI_UART_PROTO_READY, &hu->flags);
- 	return 0;
- }
- 
-diff --git a/drivers/cdrom/cdrom.c b/drivers/cdrom/cdrom.c
-index 614ecdbb4ab7..933268b8d6a5 100644
---- a/drivers/cdrom/cdrom.c
-+++ b/drivers/cdrom/cdrom.c
-@@ -265,6 +265,7 @@
- /* #define ERRLOGMASK (CD_WARNING|CD_OPEN|CD_COUNT_TRACKS|CD_CLOSE) */
- /* #define ERRLOGMASK (CD_WARNING|CD_REG_UNREG|CD_DO_IOCTL|CD_OPEN|CD_CLOSE|CD_COUNT_TRACKS) */
- 
-+#include <linux/atomic.h>
- #include <linux/module.h>
- #include <linux/fs.h>
- #include <linux/major.h>
-@@ -3692,9 +3693,9 @@ static struct ctl_table_header *cdrom_sysctl_header;
- 
- static void cdrom_sysctl_register(void)
- {
--	static int initialized;
-+	static atomic_t initialized = ATOMIC_INIT(0);
- 
--	if (initialized == 1)
-+	if (!atomic_add_unless(&initialized, 1, 1))
- 		return;
- 
- 	cdrom_sysctl_header = register_sysctl_table(cdrom_root_table);
-@@ -3705,8 +3706,6 @@ static void cdrom_sysctl_register(void)
- 	cdrom_sysctl_settings.debug = debug;
- 	cdrom_sysctl_settings.lock = lockdoor;
- 	cdrom_sysctl_settings.check = check_media_type;
--
--	initialized = 1;
- }
- 
- static void cdrom_sysctl_unregister(void)
-diff --git a/drivers/char/Kconfig b/drivers/char/Kconfig
-index 2e2ffe7010aa..51c77f0e47b2 100644
---- a/drivers/char/Kconfig
-+++ b/drivers/char/Kconfig
-@@ -351,7 +351,7 @@ config XILINX_HWICAP
- 
- config R3964
- 	tristate "Siemens R3964 line discipline"
--	depends on TTY
-+	depends on TTY && BROKEN
- 	---help---
- 	  This driver allows synchronous communication with devices using the
- 	  Siemens R3964 packet protocol. Unless you are dealing with special
-diff --git a/drivers/char/applicom.c b/drivers/char/applicom.c
-index c0a5b1f3a986..4ccc39e00ced 100644
---- a/drivers/char/applicom.c
-+++ b/drivers/char/applicom.c
-@@ -32,6 +32,7 @@
- #include <linux/wait.h>
- #include <linux/init.h>
- #include <linux/fs.h>
-+#include <linux/nospec.h>
- 
- #include <asm/io.h>
- #include <linux/uaccess.h>
-@@ -386,7 +387,11 @@ static ssize_t ac_write(struct file *file, const char __user *buf, size_t count,
- 	TicCard = st_loc.tic_des_from_pc;	/* tic number to send            */
- 	IndexCard = NumCard - 1;
- 
--	if((NumCard < 1) || (NumCard > MAX_BOARD) || !apbs[IndexCard].RamIO)
-+	if (IndexCard >= MAX_BOARD)
-+		return -EINVAL;
-+	IndexCard = array_index_nospec(IndexCard, MAX_BOARD);
-+
-+	if (!apbs[IndexCard].RamIO)
- 		return -EINVAL;
- 
- #ifdef DEBUG
-@@ -697,6 +702,7 @@ static long ac_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
- 	unsigned char IndexCard;
- 	void __iomem *pmem;
- 	int ret = 0;
-+	static int warncount = 10;
- 	volatile unsigned char byte_reset_it;
- 	struct st_ram_io *adgl;
- 	void __user *argp = (void __user *)arg;
-@@ -711,16 +717,12 @@ static long ac_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
- 	mutex_lock(&ac_mutex);	
- 	IndexCard = adgl->num_card-1;
- 	 
--	if(cmd != 6 && ((IndexCard >= MAX_BOARD) || !apbs[IndexCard].RamIO)) {
--		static int warncount = 10;
--		if (warncount) {
--			printk( KERN_WARNING "APPLICOM driver IOCTL, bad board number %d\n",(int)IndexCard+1);
--			warncount--;
--		}
--		kfree(adgl);
--		mutex_unlock(&ac_mutex);
--		return -EINVAL;
--	}
-+	if (cmd != 6 && IndexCard >= MAX_BOARD)
-+		goto err;
-+	IndexCard = array_index_nospec(IndexCard, MAX_BOARD);
-+
-+	if (cmd != 6 && !apbs[IndexCard].RamIO)
-+		goto err;
- 
- 	switch (cmd) {
- 		
-@@ -838,5 +840,16 @@ static long ac_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
- 	kfree(adgl);
- 	mutex_unlock(&ac_mutex);
- 	return 0;
-+
-+err:
-+	if (warncount) {
-+		pr_warn("APPLICOM driver IOCTL, bad board number %d\n",
-+			(int)IndexCard + 1);
-+		warncount--;
-+	}
-+	kfree(adgl);
-+	mutex_unlock(&ac_mutex);
-+	return -EINVAL;
-+
- }
- 
-diff --git a/drivers/char/hpet.c b/drivers/char/hpet.c
-index 4a22b4b41aef..9bffcd37cc7b 100644
---- a/drivers/char/hpet.c
-+++ b/drivers/char/hpet.c
-@@ -377,7 +377,7 @@ static __init int hpet_mmap_enable(char *str)
- 	pr_info("HPET mmap %s\n", hpet_mmap_enabled ? "enabled" : "disabled");
- 	return 1;
- }
--__setup("hpet_mmap", hpet_mmap_enable);
-+__setup("hpet_mmap=", hpet_mmap_enable);
- 
- static int hpet_mmap(struct file *file, struct vm_area_struct *vma)
- {
-diff --git a/drivers/char/hw_random/virtio-rng.c b/drivers/char/hw_random/virtio-rng.c
-index b89df66ea1ae..7abd604e938c 100644
---- a/drivers/char/hw_random/virtio-rng.c
-+++ b/drivers/char/hw_random/virtio-rng.c
-@@ -73,7 +73,7 @@ static int virtio_read(struct hwrng *rng, void *buf, size_t size, bool wait)
- 
- 	if (!vi->busy) {
- 		vi->busy = true;
--		init_completion(&vi->have_data);
-+		reinit_completion(&vi->have_data);
- 		register_buffer(vi, buf, size);
- 	}
- 
-diff --git a/drivers/char/ipmi/ipmi_si.h b/drivers/char/ipmi/ipmi_si.h
-index 52f6152d1fcb..7ae52c17618e 100644
---- a/drivers/char/ipmi/ipmi_si.h
-+++ b/drivers/char/ipmi/ipmi_si.h
-@@ -25,7 +25,9 @@ void ipmi_irq_finish_setup(struct si_sm_io *io);
- int ipmi_si_remove_by_dev(struct device *dev);
- void ipmi_si_remove_by_data(int addr_space, enum si_type si_type,
- 			    unsigned long addr);
--int ipmi_si_hardcode_find_bmc(void);
-+void ipmi_hardcode_init(void);
-+void ipmi_si_hardcode_exit(void);
-+int ipmi_si_hardcode_match(int addr_type, unsigned long addr);
- void ipmi_si_platform_init(void);
- void ipmi_si_platform_shutdown(void);
- 
-diff --git a/drivers/char/ipmi/ipmi_si_hardcode.c b/drivers/char/ipmi/ipmi_si_hardcode.c
-index 487642809c58..1e5783961b0d 100644
---- a/drivers/char/ipmi/ipmi_si_hardcode.c
-+++ b/drivers/char/ipmi/ipmi_si_hardcode.c
-@@ -3,6 +3,7 @@
- #define pr_fmt(fmt) "ipmi_hardcode: " fmt
- 
- #include <linux/moduleparam.h>
-+#include <linux/platform_device.h>
- #include "ipmi_si.h"
- 
- /*
-@@ -12,23 +13,22 @@
- 
- #define SI_MAX_PARMS 4
- 
--static char          *si_type[SI_MAX_PARMS];
- #define MAX_SI_TYPE_STR 30
--static char          si_type_str[MAX_SI_TYPE_STR];
-+static char          si_type_str[MAX_SI_TYPE_STR] __initdata;
- static unsigned long addrs[SI_MAX_PARMS];
- static unsigned int num_addrs;
- static unsigned int  ports[SI_MAX_PARMS];
- static unsigned int num_ports;
--static int           irqs[SI_MAX_PARMS];
--static unsigned int num_irqs;
--static int           regspacings[SI_MAX_PARMS];
--static unsigned int num_regspacings;
--static int           regsizes[SI_MAX_PARMS];
--static unsigned int num_regsizes;
--static int           regshifts[SI_MAX_PARMS];
--static unsigned int num_regshifts;
--static int slave_addrs[SI_MAX_PARMS]; /* Leaving 0 chooses the default value */
--static unsigned int num_slave_addrs;
-+static int           irqs[SI_MAX_PARMS] __initdata;
-+static unsigned int num_irqs __initdata;
-+static int           regspacings[SI_MAX_PARMS] __initdata;
-+static unsigned int num_regspacings __initdata;
-+static int           regsizes[SI_MAX_PARMS] __initdata;
-+static unsigned int num_regsizes __initdata;
-+static int           regshifts[SI_MAX_PARMS] __initdata;
-+static unsigned int num_regshifts __initdata;
-+static int slave_addrs[SI_MAX_PARMS] __initdata;
-+static unsigned int num_slave_addrs __initdata;
- 
- module_param_string(type, si_type_str, MAX_SI_TYPE_STR, 0);
- MODULE_PARM_DESC(type, "Defines the type of each interface, each"
-@@ -73,12 +73,133 @@ MODULE_PARM_DESC(slave_addrs, "Set the default IPMB slave address for"
- 		 " overridden by this parm.  This is an array indexed"
- 		 " by interface number.");
- 
--int ipmi_si_hardcode_find_bmc(void)
-+static struct platform_device *ipmi_hc_pdevs[SI_MAX_PARMS];
-+
-+static void __init ipmi_hardcode_init_one(const char *si_type_str,
-+					  unsigned int i,
-+					  unsigned long addr,
-+					  unsigned int flags)
- {
--	int ret = -ENODEV;
--	int             i;
--	struct si_sm_io io;
-+	struct platform_device *pdev;
-+	unsigned int num_r = 1, size;
-+	struct resource r[4];
-+	struct property_entry p[6];
-+	enum si_type si_type;
-+	unsigned int regspacing, regsize;
-+	int rv;
-+
-+	memset(p, 0, sizeof(p));
-+	memset(r, 0, sizeof(r));
-+
-+	if (!si_type_str || !*si_type_str || strcmp(si_type_str, "kcs") == 0) {
-+		size = 2;
-+		si_type = SI_KCS;
-+	} else if (strcmp(si_type_str, "smic") == 0) {
-+		size = 2;
-+		si_type = SI_SMIC;
-+	} else if (strcmp(si_type_str, "bt") == 0) {
-+		size = 3;
-+		si_type = SI_BT;
-+	} else if (strcmp(si_type_str, "invalid") == 0) {
-+		/*
-+		 * Allow a firmware-specified interface to be
-+		 * disabled.
-+		 */
-+		size = 1;
-+		si_type = SI_TYPE_INVALID;
-+	} else {
-+		pr_warn("Interface type specified for interface %d, was invalid: %s\n",
-+			i, si_type_str);
-+		return;
-+	}
-+
-+	regsize = regsizes[i];
-+	if (regsize == 0)
-+		regsize = DEFAULT_REGSIZE;
-+
-+	p[0] = PROPERTY_ENTRY_U8("ipmi-type", si_type);
-+	p[1] = PROPERTY_ENTRY_U8("slave-addr", slave_addrs[i]);
-+	p[2] = PROPERTY_ENTRY_U8("addr-source", SI_HARDCODED);
-+	p[3] = PROPERTY_ENTRY_U8("reg-shift", regshifts[i]);
-+	p[4] = PROPERTY_ENTRY_U8("reg-size", regsize);
-+	/* Last entry must be left NULL to terminate it. */
-+
-+	/*
-+	 * Register spacing is derived from the resources in
-+	 * the IPMI platform code.
-+	 */
-+	regspacing = regspacings[i];
-+	if (regspacing == 0)
-+		regspacing = regsize;
-+
-+	r[0].start = addr;
-+	r[0].end = r[0].start + regsize - 1;
-+	r[0].name = "IPMI Address 1";
-+	r[0].flags = flags;
-+
-+	if (size > 1) {
-+		r[1].start = r[0].start + regspacing;
-+		r[1].end = r[1].start + regsize - 1;
-+		r[1].name = "IPMI Address 2";
-+		r[1].flags = flags;
-+		num_r++;
-+	}
-+
-+	if (size > 2) {
-+		r[2].start = r[1].start + regspacing;
-+		r[2].end = r[2].start + regsize - 1;
-+		r[2].name = "IPMI Address 3";
-+		r[2].flags = flags;
-+		num_r++;
-+	}
-+
-+	if (irqs[i]) {
-+		r[num_r].start = irqs[i];
-+		r[num_r].end = irqs[i];
-+		r[num_r].name = "IPMI IRQ";
-+		r[num_r].flags = IORESOURCE_IRQ;
-+		num_r++;
-+	}
-+
-+	pdev = platform_device_alloc("hardcode-ipmi-si", i);
-+	if (!pdev) {
-+		pr_err("Error allocating IPMI platform device %d\n", i);
-+		return;
-+	}
-+
-+	rv = platform_device_add_resources(pdev, r, num_r);
-+	if (rv) {
-+		dev_err(&pdev->dev,
-+			"Unable to add hard-code resources: %d\n", rv);
-+		goto err;
-+	}
-+
-+	rv = platform_device_add_properties(pdev, p);
-+	if (rv) {
-+		dev_err(&pdev->dev,
-+			"Unable to add hard-code properties: %d\n", rv);
-+		goto err;
-+	}
-+
-+	rv = platform_device_add(pdev);
-+	if (rv) {
-+		dev_err(&pdev->dev,
-+			"Unable to add hard-code device: %d\n", rv);
-+		goto err;
-+	}
-+
-+	ipmi_hc_pdevs[i] = pdev;
-+	return;
-+
-+err:
-+	platform_device_put(pdev);
-+}
-+
-+void __init ipmi_hardcode_init(void)
-+{
-+	unsigned int i;
- 	char *str;
-+	char *si_type[SI_MAX_PARMS];
- 
- 	/* Parse out the si_type string into its components. */
- 	str = si_type_str;
-@@ -95,54 +216,45 @@ int ipmi_si_hardcode_find_bmc(void)
- 		}
- 	}
- 
--	memset(&io, 0, sizeof(io));
- 	for (i = 0; i < SI_MAX_PARMS; i++) {
--		if (!ports[i] && !addrs[i])
--			continue;
--
--		io.addr_source = SI_HARDCODED;
--		pr_info("probing via hardcoded address\n");
--
--		if (!si_type[i] || strcmp(si_type[i], "kcs") == 0) {
--			io.si_type = SI_KCS;
--		} else if (strcmp(si_type[i], "smic") == 0) {
--			io.si_type = SI_SMIC;
--		} else if (strcmp(si_type[i], "bt") == 0) {
--			io.si_type = SI_BT;
--		} else {
--			pr_warn("Interface type specified for interface %d, was invalid: %s\n",
--				i, si_type[i]);
--			continue;
--		}
-+		if (i < num_ports && ports[i])
-+			ipmi_hardcode_init_one(si_type[i], i, ports[i],
-+					       IORESOURCE_IO);
-+		if (i < num_addrs && addrs[i])
-+			ipmi_hardcode_init_one(si_type[i], i, addrs[i],
-+					       IORESOURCE_MEM);
-+	}
-+}
- 
--		if (ports[i]) {
--			/* An I/O port */
--			io.addr_data = ports[i];
--			io.addr_type = IPMI_IO_ADDR_SPACE;
--		} else if (addrs[i]) {
--			/* A memory port */
--			io.addr_data = addrs[i];
--			io.addr_type = IPMI_MEM_ADDR_SPACE;
--		} else {
--			pr_warn("Interface type specified for interface %d, but port and address were not set or set to zero\n",
--				i);
--			continue;
--		}
-+void ipmi_si_hardcode_exit(void)
-+{
-+	unsigned int i;
- 
--		io.addr = NULL;
--		io.regspacing = regspacings[i];
--		if (!io.regspacing)
--			io.regspacing = DEFAULT_REGSPACING;
--		io.regsize = regsizes[i];
--		if (!io.regsize)
--			io.regsize = DEFAULT_REGSIZE;
--		io.regshift = regshifts[i];
--		io.irq = irqs[i];
--		if (io.irq)
--			io.irq_setup = ipmi_std_irq_setup;
--		io.slave_addr = slave_addrs[i];
--
--		ret = ipmi_si_add_smi(&io);
-+	for (i = 0; i < SI_MAX_PARMS; i++) {
-+		if (ipmi_hc_pdevs[i])
-+			platform_device_unregister(ipmi_hc_pdevs[i]);
- 	}
--	return ret;
-+}
-+
-+/*
-+ * Returns true if the given address exists as a hardcoded address,
-+ * false if not.
-+ */
-+int ipmi_si_hardcode_match(int addr_type, unsigned long addr)
-+{
-+	unsigned int i;
-+
-+	if (addr_type == IPMI_IO_ADDR_SPACE) {
-+		for (i = 0; i < num_ports; i++) {
-+			if (ports[i] == addr)
-+				return 1;
-+		}
-+	} else {
-+		for (i = 0; i < num_addrs; i++) {
-+			if (addrs[i] == addr)
-+				return 1;
-+		}
-+	}
-+
-+	return 0;
- }
-diff --git a/drivers/char/ipmi/ipmi_si_intf.c b/drivers/char/ipmi/ipmi_si_intf.c
-index dc8603d34320..5294abc4c96c 100644
---- a/drivers/char/ipmi/ipmi_si_intf.c
-+++ b/drivers/char/ipmi/ipmi_si_intf.c
-@@ -1862,6 +1862,18 @@ int ipmi_si_add_smi(struct si_sm_io *io)
- 	int rv = 0;
- 	struct smi_info *new_smi, *dup;
- 
-+	/*
-+	 * If the user gave us a hard-coded device at the same
-+	 * address, they presumably want us to use it and not what is
-+	 * in the firmware.
-+	 */
-+	if (io->addr_source != SI_HARDCODED &&
-+	    ipmi_si_hardcode_match(io->addr_type, io->addr_data)) {
-+		dev_info(io->dev,
-+			 "Hard-coded device at this address already exists");
-+		return -ENODEV;
-+	}
-+
- 	if (!io->io_setup) {
- 		if (io->addr_type == IPMI_IO_ADDR_SPACE) {
- 			io->io_setup = ipmi_si_port_setup;
-@@ -2085,11 +2097,16 @@ static int try_smi_init(struct smi_info *new_smi)
- 	WARN_ON(new_smi->io.dev->init_name != NULL);
- 
-  out_err:
-+	if (rv && new_smi->io.io_cleanup) {
-+		new_smi->io.io_cleanup(&new_smi->io);
-+		new_smi->io.io_cleanup = NULL;
-+	}
-+
- 	kfree(init_name);
- 	return rv;
- }
- 
--static int init_ipmi_si(void)
-+static int __init init_ipmi_si(void)
- {
- 	struct smi_info *e;
- 	enum ipmi_addr_src type = SI_INVALID;
-@@ -2097,11 +2114,9 @@ static int init_ipmi_si(void)
- 	if (initialized)
- 		return 0;
- 
--	pr_info("IPMI System Interface driver\n");
-+	ipmi_hardcode_init();
- 
--	/* If the user gave us a device, they presumably want us to use it */
--	if (!ipmi_si_hardcode_find_bmc())
--		goto do_scan;
-+	pr_info("IPMI System Interface driver\n");
- 
- 	ipmi_si_platform_init();
- 
-@@ -2113,7 +2128,6 @@ static int init_ipmi_si(void)
- 	   with multiple BMCs we assume that there will be several instances
- 	   of a given type so if we succeed in registering a type then also
- 	   try to register everything else of the same type */
--do_scan:
- 	mutex_lock(&smi_infos_lock);
- 	list_for_each_entry(e, &smi_infos, link) {
- 		/* Try to register a device if it has an IRQ and we either
-@@ -2299,6 +2313,8 @@ static void cleanup_ipmi_si(void)
- 	list_for_each_entry_safe(e, tmp_e, &smi_infos, link)
- 		cleanup_one_si(e);
- 	mutex_unlock(&smi_infos_lock);
-+
-+	ipmi_si_hardcode_exit();
- }
- module_exit(cleanup_ipmi_si);
- 
-diff --git a/drivers/char/ipmi/ipmi_si_mem_io.c b/drivers/char/ipmi/ipmi_si_mem_io.c
-index fd0ec8d6bf0e..75583612ab10 100644
---- a/drivers/char/ipmi/ipmi_si_mem_io.c
-+++ b/drivers/char/ipmi/ipmi_si_mem_io.c
-@@ -81,8 +81,6 @@ int ipmi_si_mem_setup(struct si_sm_io *io)
- 	if (!addr)
- 		return -ENODEV;
- 
--	io->io_cleanup = mem_cleanup;
--
- 	/*
- 	 * Figure out the actual readb/readw/readl/etc routine to use based
- 	 * upon the register size.
-@@ -141,5 +139,8 @@ int ipmi_si_mem_setup(struct si_sm_io *io)
- 		mem_region_cleanup(io, io->io_size);
- 		return -EIO;
- 	}
-+
-+	io->io_cleanup = mem_cleanup;
-+
- 	return 0;
- }
-diff --git a/drivers/char/ipmi/ipmi_si_platform.c b/drivers/char/ipmi/ipmi_si_platform.c
-index 15cf819f884f..8158d03542f4 100644
---- a/drivers/char/ipmi/ipmi_si_platform.c
-+++ b/drivers/char/ipmi/ipmi_si_platform.c
-@@ -128,8 +128,6 @@ ipmi_get_info_from_resources(struct platform_device *pdev,
- 		if (res_second->start > io->addr_data)
- 			io->regspacing = res_second->start - io->addr_data;
- 	}
--	io->regsize = DEFAULT_REGSIZE;
--	io->regshift = 0;
- 
- 	return res;
- }
-@@ -137,7 +135,7 @@ ipmi_get_info_from_resources(struct platform_device *pdev,
- static int platform_ipmi_probe(struct platform_device *pdev)
- {
- 	struct si_sm_io io;
--	u8 type, slave_addr, addr_source;
-+	u8 type, slave_addr, addr_source, regsize, regshift;
- 	int rv;
- 
- 	rv = device_property_read_u8(&pdev->dev, "addr-source", &addr_source);
-@@ -149,7 +147,7 @@ static int platform_ipmi_probe(struct platform_device *pdev)
- 	if (addr_source == SI_SMBIOS) {
- 		if (!si_trydmi)
- 			return -ENODEV;
--	} else {
-+	} else if (addr_source != SI_HARDCODED) {
- 		if (!si_tryplatform)
- 			return -ENODEV;
- 	}
-@@ -169,11 +167,23 @@ static int platform_ipmi_probe(struct platform_device *pdev)
- 	case SI_BT:
- 		io.si_type = type;
- 		break;
-+	case SI_TYPE_INVALID: /* User disabled this in hardcode. */
-+		return -ENODEV;
- 	default:
- 		dev_err(&pdev->dev, "ipmi-type property is invalid\n");
- 		return -EINVAL;
- 	}
- 
-+	io.regsize = DEFAULT_REGSIZE;
-+	rv = device_property_read_u8(&pdev->dev, "reg-size", &regsize);
-+	if (!rv)
-+		io.regsize = regsize;
-+
-+	io.regshift = 0;
-+	rv = device_property_read_u8(&pdev->dev, "reg-shift", &regshift);
-+	if (!rv)
-+		io.regshift = regshift;
-+
- 	if (!ipmi_get_info_from_resources(pdev, &io))
- 		return -EINVAL;
- 
-@@ -193,7 +203,8 @@ static int platform_ipmi_probe(struct platform_device *pdev)
- 
- 	io.dev = &pdev->dev;
- 
--	pr_info("ipmi_si: SMBIOS: %s %#lx regsize %d spacing %d irq %d\n",
-+	pr_info("ipmi_si: %s: %s %#lx regsize %d spacing %d irq %d\n",
-+		ipmi_addr_src_to_str(addr_source),
- 		(io.addr_type == IPMI_IO_ADDR_SPACE) ? "io" : "mem",
- 		io.addr_data, io.regsize, io.regspacing, io.irq);
- 
-@@ -358,6 +369,9 @@ static int acpi_ipmi_probe(struct platform_device *pdev)
- 		goto err_free;
- 	}
- 
-+	io.regsize = DEFAULT_REGSIZE;
-+	io.regshift = 0;
-+
- 	res = ipmi_get_info_from_resources(pdev, &io);
- 	if (!res) {
- 		rv = -EINVAL;
-@@ -420,8 +434,9 @@ static int ipmi_remove(struct platform_device *pdev)
- }
- 
- static const struct platform_device_id si_plat_ids[] = {
--    { "dmi-ipmi-si", 0 },
--    { }
-+	{ "dmi-ipmi-si", 0 },
-+	{ "hardcode-ipmi-si", 0 },
-+	{ }
- };
- 
- struct platform_driver ipmi_platform_driver = {
-diff --git a/drivers/char/ipmi/ipmi_si_port_io.c b/drivers/char/ipmi/ipmi_si_port_io.c
-index ef6dffcea9fa..03924c32b6e9 100644
---- a/drivers/char/ipmi/ipmi_si_port_io.c
-+++ b/drivers/char/ipmi/ipmi_si_port_io.c
-@@ -68,8 +68,6 @@ int ipmi_si_port_setup(struct si_sm_io *io)
- 	if (!addr)
- 		return -ENODEV;
- 
--	io->io_cleanup = port_cleanup;
--
- 	/*
- 	 * Figure out the actual inb/inw/inl/etc routine to use based
- 	 * upon the register size.
-@@ -109,5 +107,8 @@ int ipmi_si_port_setup(struct si_sm_io *io)
- 			return -EIO;
- 		}
- 	}
-+
-+	io->io_cleanup = port_cleanup;
-+
- 	return 0;
- }
-diff --git a/drivers/char/tpm/st33zp24/st33zp24.c b/drivers/char/tpm/st33zp24/st33zp24.c
-index 64dc560859f2..13dc614b7ebc 100644
---- a/drivers/char/tpm/st33zp24/st33zp24.c
-+++ b/drivers/char/tpm/st33zp24/st33zp24.c
-@@ -436,7 +436,7 @@ static int st33zp24_send(struct tpm_chip *chip, unsigned char *buf,
- 			goto out_err;
- 	}
- 
--	return len;
-+	return 0;
- out_err:
- 	st33zp24_cancel(chip);
- 	release_locality(chip);
-diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
-index d9439f9abe78..88d2e01a651d 100644
---- a/drivers/char/tpm/tpm-interface.c
-+++ b/drivers/char/tpm/tpm-interface.c
-@@ -230,10 +230,19 @@ static ssize_t tpm_try_transmit(struct tpm_chip *chip,
- 	if (rc < 0) {
- 		if (rc != -EPIPE)
- 			dev_err(&chip->dev,
--				"%s: tpm_send: error %d\n", __func__, rc);
-+				"%s: send(): error %d\n", __func__, rc);
- 		goto out;
- 	}
- 
-+	/* A sanity check. send() should just return zero on success e.g.
-+	 * not the command length.
-+	 */
-+	if (rc > 0) {
-+		dev_warn(&chip->dev,
-+			 "%s: send(): invalid value %d\n", __func__, rc);
-+		rc = 0;
-+	}
-+
- 	if (chip->flags & TPM_CHIP_FLAG_IRQ)
- 		goto out_recv;
- 
-diff --git a/drivers/char/tpm/tpm_atmel.c b/drivers/char/tpm/tpm_atmel.c
-index 66a14526aaf4..a290b30a0c35 100644
---- a/drivers/char/tpm/tpm_atmel.c
-+++ b/drivers/char/tpm/tpm_atmel.c
-@@ -105,7 +105,7 @@ static int tpm_atml_send(struct tpm_chip *chip, u8 *buf, size_t count)
- 		iowrite8(buf[i], priv->iobase);
- 	}
- 
--	return count;
-+	return 0;
- }
- 
- static void tpm_atml_cancel(struct tpm_chip *chip)
-diff --git a/drivers/char/tpm/tpm_crb.c b/drivers/char/tpm/tpm_crb.c
-index 36952ef98f90..763fc7e6c005 100644
---- a/drivers/char/tpm/tpm_crb.c
-+++ b/drivers/char/tpm/tpm_crb.c
-@@ -287,19 +287,29 @@ static int crb_recv(struct tpm_chip *chip, u8 *buf, size_t count)
- 	struct crb_priv *priv = dev_get_drvdata(&chip->dev);
- 	unsigned int expected;
- 
--	/* sanity check */
--	if (count < 6)
-+	/* A sanity check that the upper layer wants to get at least the header
-+	 * as that is the minimum size for any TPM response.
-+	 */
-+	if (count < TPM_HEADER_SIZE)
- 		return -EIO;
- 
-+	/* If this bit is set, according to the spec, the TPM is in
-+	 * unrecoverable condition.
-+	 */
- 	if (ioread32(&priv->regs_t->ctrl_sts) & CRB_CTRL_STS_ERROR)
- 		return -EIO;
- 
--	memcpy_fromio(buf, priv->rsp, 6);
--	expected = be32_to_cpup((__be32 *) &buf[2]);
--	if (expected > count || expected < 6)
-+	/* Read the first 8 bytes in order to get the length of the response.
-+	 * We read exactly a quad word in order to make sure that the remaining
-+	 * reads will be aligned.
-+	 */
-+	memcpy_fromio(buf, priv->rsp, 8);
-+
-+	expected = be32_to_cpup((__be32 *)&buf[2]);
-+	if (expected > count || expected < TPM_HEADER_SIZE)
- 		return -EIO;
- 
--	memcpy_fromio(&buf[6], &priv->rsp[6], expected - 6);
-+	memcpy_fromio(&buf[8], &priv->rsp[8], expected - 8);
- 
- 	return expected;
- }
-diff --git a/drivers/char/tpm/tpm_i2c_atmel.c b/drivers/char/tpm/tpm_i2c_atmel.c
-index 95ce2e9ccdc6..32a8e27c5382 100644
---- a/drivers/char/tpm/tpm_i2c_atmel.c
-+++ b/drivers/char/tpm/tpm_i2c_atmel.c
-@@ -65,7 +65,11 @@ static int i2c_atmel_send(struct tpm_chip *chip, u8 *buf, size_t len)
- 	dev_dbg(&chip->dev,
- 		"%s(buf=%*ph len=%0zx) -> sts=%d\n", __func__,
- 		(int)min_t(size_t, 64, len), buf, len, status);
--	return status;
-+
-+	if (status < 0)
-+		return status;
-+
-+	return 0;
- }
- 
- static int i2c_atmel_recv(struct tpm_chip *chip, u8 *buf, size_t count)
-diff --git a/drivers/char/tpm/tpm_i2c_infineon.c b/drivers/char/tpm/tpm_i2c_infineon.c
-index 9086edc9066b..977fd42daa1b 100644
---- a/drivers/char/tpm/tpm_i2c_infineon.c
-+++ b/drivers/char/tpm/tpm_i2c_infineon.c
-@@ -587,7 +587,7 @@ static int tpm_tis_i2c_send(struct tpm_chip *chip, u8 *buf, size_t len)
- 	/* go and do it */
- 	iic_tpm_write(TPM_STS(tpm_dev.locality), &sts, 1);
- 
--	return len;
-+	return 0;
- out_err:
- 	tpm_tis_i2c_ready(chip);
- 	/* The TPM needs some time to clean up here,
-diff --git a/drivers/char/tpm/tpm_i2c_nuvoton.c b/drivers/char/tpm/tpm_i2c_nuvoton.c
-index 217f7f1cbde8..058220edb8b3 100644
---- a/drivers/char/tpm/tpm_i2c_nuvoton.c
-+++ b/drivers/char/tpm/tpm_i2c_nuvoton.c
-@@ -467,7 +467,7 @@ static int i2c_nuvoton_send(struct tpm_chip *chip, u8 *buf, size_t len)
- 	}
- 
- 	dev_dbg(dev, "%s() -> %zd\n", __func__, len);
--	return len;
-+	return 0;
- }
- 
- static bool i2c_nuvoton_req_canceled(struct tpm_chip *chip, u8 status)
-diff --git a/drivers/char/tpm/tpm_ibmvtpm.c b/drivers/char/tpm/tpm_ibmvtpm.c
-index 07b5a487d0c8..757ca45b39b8 100644
---- a/drivers/char/tpm/tpm_ibmvtpm.c
-+++ b/drivers/char/tpm/tpm_ibmvtpm.c
-@@ -139,14 +139,14 @@ static int tpm_ibmvtpm_recv(struct tpm_chip *chip, u8 *buf, size_t count)
- }
- 
- /**
-- * tpm_ibmvtpm_send - Send tpm request
-- *
-+ * tpm_ibmvtpm_send() - Send a TPM command
-  * @chip:	tpm chip struct
-  * @buf:	buffer contains data to send
-  * @count:	size of buffer
-  *
-  * Return:
-- *	Number of bytes sent or < 0 on error.
-+ *   0 on success,
-+ *   -errno on error
-  */
- static int tpm_ibmvtpm_send(struct tpm_chip *chip, u8 *buf, size_t count)
- {
-@@ -192,7 +192,7 @@ static int tpm_ibmvtpm_send(struct tpm_chip *chip, u8 *buf, size_t count)
- 		rc = 0;
- 		ibmvtpm->tpm_processing_cmd = false;
- 	} else
--		rc = count;
-+		rc = 0;
- 
- 	spin_unlock(&ibmvtpm->rtce_lock);
- 	return rc;
-diff --git a/drivers/char/tpm/tpm_infineon.c b/drivers/char/tpm/tpm_infineon.c
-index d8f10047fbba..97f6d4fe0aee 100644
---- a/drivers/char/tpm/tpm_infineon.c
-+++ b/drivers/char/tpm/tpm_infineon.c
-@@ -354,7 +354,7 @@ static int tpm_inf_send(struct tpm_chip *chip, u8 * buf, size_t count)
- 	for (i = 0; i < count; i++) {
- 		wait_and_send(chip, buf[i]);
- 	}
--	return count;
-+	return 0;
- }
- 
- static void tpm_inf_cancel(struct tpm_chip *chip)
-diff --git a/drivers/char/tpm/tpm_nsc.c b/drivers/char/tpm/tpm_nsc.c
-index 5d6cce74cd3f..9bee3c5eb4bf 100644
---- a/drivers/char/tpm/tpm_nsc.c
-+++ b/drivers/char/tpm/tpm_nsc.c
-@@ -226,7 +226,7 @@ static int tpm_nsc_send(struct tpm_chip *chip, u8 * buf, size_t count)
- 	}
- 	outb(NSC_COMMAND_EOC, priv->base + NSC_COMMAND);
- 
--	return count;
-+	return 0;
- }
- 
- static void tpm_nsc_cancel(struct tpm_chip *chip)
-diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
-index bf7e49cfa643..bb0c2e160562 100644
---- a/drivers/char/tpm/tpm_tis_core.c
-+++ b/drivers/char/tpm/tpm_tis_core.c
-@@ -481,7 +481,7 @@ static int tpm_tis_send_main(struct tpm_chip *chip, const u8 *buf, size_t len)
- 			goto out_err;
- 		}
- 	}
--	return len;
-+	return 0;
- out_err:
- 	tpm_tis_ready(chip);
- 	return rc;
-diff --git a/drivers/char/tpm/tpm_vtpm_proxy.c b/drivers/char/tpm/tpm_vtpm_proxy.c
-index 87a0ce47f201..ecbb63f8d231 100644
---- a/drivers/char/tpm/tpm_vtpm_proxy.c
-+++ b/drivers/char/tpm/tpm_vtpm_proxy.c
-@@ -335,7 +335,6 @@ static int vtpm_proxy_is_driver_command(struct tpm_chip *chip,
- static int vtpm_proxy_tpm_op_send(struct tpm_chip *chip, u8 *buf, size_t count)
- {
- 	struct proxy_dev *proxy_dev = dev_get_drvdata(&chip->dev);
--	int rc = 0;
- 
- 	if (count > sizeof(proxy_dev->buffer)) {
- 		dev_err(&chip->dev,
-@@ -366,7 +365,7 @@ static int vtpm_proxy_tpm_op_send(struct tpm_chip *chip, u8 *buf, size_t count)
- 
- 	wake_up_interruptible(&proxy_dev->wq);
- 
--	return rc;
-+	return 0;
- }
- 
- static void vtpm_proxy_tpm_op_cancel(struct tpm_chip *chip)
-diff --git a/drivers/char/tpm/xen-tpmfront.c b/drivers/char/tpm/xen-tpmfront.c
-index b150f87f38f5..5a327eb7f63a 100644
---- a/drivers/char/tpm/xen-tpmfront.c
-+++ b/drivers/char/tpm/xen-tpmfront.c
-@@ -173,7 +173,7 @@ static int vtpm_send(struct tpm_chip *chip, u8 *buf, size_t count)
- 		return -ETIME;
- 	}
- 
--	return count;
-+	return 0;
- }
- 
- static int vtpm_recv(struct tpm_chip *chip, u8 *buf, size_t count)
-diff --git a/drivers/clk/clk-fractional-divider.c b/drivers/clk/clk-fractional-divider.c
-index 545dceec0bbf..fdfe2e423d15 100644
---- a/drivers/clk/clk-fractional-divider.c
-+++ b/drivers/clk/clk-fractional-divider.c
-@@ -79,7 +79,7 @@ static long clk_fd_round_rate(struct clk_hw *hw, unsigned long rate,
- 	unsigned long m, n;
- 	u64 ret;
- 
--	if (!rate || rate >= *parent_rate)
-+	if (!rate || (!clk_hw_can_set_rate_parent(hw) && rate >= *parent_rate))
- 		return *parent_rate;
- 
- 	if (fd->approximation)
-diff --git a/drivers/clk/clk-twl6040.c b/drivers/clk/clk-twl6040.c
-index ea846f77750b..0cad5748bf0e 100644
---- a/drivers/clk/clk-twl6040.c
-+++ b/drivers/clk/clk-twl6040.c
-@@ -41,6 +41,43 @@ static int twl6040_pdmclk_is_prepared(struct clk_hw *hw)
- 	return pdmclk->enabled;
- }
- 
-+static int twl6040_pdmclk_reset_one_clock(struct twl6040_pdmclk *pdmclk,
-+					  unsigned int reg)
-+{
-+	const u8 reset_mask = TWL6040_HPLLRST;	/* Same for HPPLL and LPPLL */
-+	int ret;
-+
-+	ret = twl6040_set_bits(pdmclk->twl6040, reg, reset_mask);
-+	if (ret < 0)
-+		return ret;
-+
-+	ret = twl6040_clear_bits(pdmclk->twl6040, reg, reset_mask);
-+	if (ret < 0)
-+		return ret;
-+
-+	return 0;
-+}
-+
-+/*
-+ * TWL6040A2 Phoenix Audio IC erratum #6: "PDM Clock Generation Issue At
-+ * Cold Temperature". This affects cold boot and deeper idle states it
-+ * seems. The workaround consists of resetting HPPLL and LPPLL.
-+ */
-+static int twl6040_pdmclk_quirk_reset_clocks(struct twl6040_pdmclk *pdmclk)
-+{
-+	int ret;
-+
-+	ret = twl6040_pdmclk_reset_one_clock(pdmclk, TWL6040_REG_HPPLLCTL);
-+	if (ret)
-+		return ret;
-+
-+	ret = twl6040_pdmclk_reset_one_clock(pdmclk, TWL6040_REG_LPPLLCTL);
-+	if (ret)
-+		return ret;
-+
-+	return 0;
-+}
-+
- static int twl6040_pdmclk_prepare(struct clk_hw *hw)
- {
- 	struct twl6040_pdmclk *pdmclk = container_of(hw, struct twl6040_pdmclk,
-@@ -48,8 +85,20 @@ static int twl6040_pdmclk_prepare(struct clk_hw *hw)
- 	int ret;
- 
- 	ret = twl6040_power(pdmclk->twl6040, 1);
--	if (!ret)
--		pdmclk->enabled = 1;
-+	if (ret)
-+		return ret;
-+
-+	ret = twl6040_pdmclk_quirk_reset_clocks(pdmclk);
-+	if (ret)
-+		goto out_err;
-+
-+	pdmclk->enabled = 1;
-+
-+	return 0;
-+
-+out_err:
-+	dev_err(pdmclk->dev, "%s: error %i\n", __func__, ret);
-+	twl6040_power(pdmclk->twl6040, 0);
- 
- 	return ret;
- }
-diff --git a/drivers/clk/ingenic/cgu.c b/drivers/clk/ingenic/cgu.c
-index 5ef7d9ba2195..b40160eb3372 100644
---- a/drivers/clk/ingenic/cgu.c
-+++ b/drivers/clk/ingenic/cgu.c
-@@ -426,16 +426,16 @@ ingenic_clk_round_rate(struct clk_hw *hw, unsigned long req_rate,
- 	struct ingenic_clk *ingenic_clk = to_ingenic_clk(hw);
- 	struct ingenic_cgu *cgu = ingenic_clk->cgu;
- 	const struct ingenic_cgu_clk_info *clk_info;
--	long rate = *parent_rate;
-+	unsigned int div = 1;
- 
- 	clk_info = &cgu->clock_info[ingenic_clk->idx];
- 
- 	if (clk_info->type & CGU_CLK_DIV)
--		rate /= ingenic_clk_calc_div(clk_info, *parent_rate, req_rate);
-+		div = ingenic_clk_calc_div(clk_info, *parent_rate, req_rate);
- 	else if (clk_info->type & CGU_CLK_FIXDIV)
--		rate /= clk_info->fixdiv.div;
-+		div = clk_info->fixdiv.div;
- 
--	return rate;
-+	return DIV_ROUND_UP(*parent_rate, div);
- }
- 
- static int
-@@ -455,7 +455,7 @@ ingenic_clk_set_rate(struct clk_hw *hw, unsigned long req_rate,
- 
- 	if (clk_info->type & CGU_CLK_DIV) {
- 		div = ingenic_clk_calc_div(clk_info, parent_rate, req_rate);
--		rate = parent_rate / div;
-+		rate = DIV_ROUND_UP(parent_rate, div);
- 
- 		if (rate != req_rate)
- 			return -EINVAL;
-diff --git a/drivers/clk/ingenic/cgu.h b/drivers/clk/ingenic/cgu.h
-index 502bcbb61b04..e12716d8ce3c 100644
---- a/drivers/clk/ingenic/cgu.h
-+++ b/drivers/clk/ingenic/cgu.h
-@@ -80,7 +80,7 @@ struct ingenic_cgu_mux_info {
-  * @reg: offset of the divider control register within the CGU
-  * @shift: number of bits to left shift the divide value by (ie. the index of
-  *         the lowest bit of the divide value within its control register)
-- * @div: number of bits to divide the divider value by (i.e. if the
-+ * @div: number to divide the divider value by (i.e. if the
-  *	 effective divider value is the value written to the register
-  *	 multiplied by some constant)
-  * @bits: the size of the divide value in bits
-diff --git a/drivers/clk/rockchip/clk-rk3328.c b/drivers/clk/rockchip/clk-rk3328.c
-index faa94adb2a37..65ab5c2f48b0 100644
---- a/drivers/clk/rockchip/clk-rk3328.c
-+++ b/drivers/clk/rockchip/clk-rk3328.c
-@@ -78,17 +78,17 @@ static struct rockchip_pll_rate_table rk3328_pll_rates[] = {
- 
- static struct rockchip_pll_rate_table rk3328_pll_frac_rates[] = {
- 	/* _mhz, _refdiv, _fbdiv, _postdiv1, _postdiv2, _dsmpd, _frac */
--	RK3036_PLL_RATE(1016064000, 3, 127, 1, 1, 0, 134217),
-+	RK3036_PLL_RATE(1016064000, 3, 127, 1, 1, 0, 134218),
- 	/* vco = 1016064000 */
--	RK3036_PLL_RATE(983040000, 24, 983, 1, 1, 0, 671088),
-+	RK3036_PLL_RATE(983040000, 24, 983, 1, 1, 0, 671089),
- 	/* vco = 983040000 */
--	RK3036_PLL_RATE(491520000, 24, 983, 2, 1, 0, 671088),
-+	RK3036_PLL_RATE(491520000, 24, 983, 2, 1, 0, 671089),
- 	/* vco = 983040000 */
--	RK3036_PLL_RATE(61440000, 6, 215, 7, 2, 0, 671088),
-+	RK3036_PLL_RATE(61440000, 6, 215, 7, 2, 0, 671089),
- 	/* vco = 860156000 */
--	RK3036_PLL_RATE(56448000, 12, 451, 4, 4, 0, 9797894),
-+	RK3036_PLL_RATE(56448000, 12, 451, 4, 4, 0, 9797895),
- 	/* vco = 903168000 */
--	RK3036_PLL_RATE(40960000, 12, 409, 4, 5, 0, 10066329),
-+	RK3036_PLL_RATE(40960000, 12, 409, 4, 5, 0, 10066330),
- 	/* vco = 819200000 */
- 	{ /* sentinel */ },
- };
-diff --git a/drivers/clk/samsung/clk-exynos5-subcmu.c b/drivers/clk/samsung/clk-exynos5-subcmu.c
-index 93306283d764..8ae44b5db4c2 100644
---- a/drivers/clk/samsung/clk-exynos5-subcmu.c
-+++ b/drivers/clk/samsung/clk-exynos5-subcmu.c
-@@ -136,15 +136,20 @@ static int __init exynos5_clk_register_subcmu(struct device *parent,
- {
- 	struct of_phandle_args genpdspec = { .np = pd_node };
- 	struct platform_device *pdev;
-+	int ret;
-+
-+	pdev = platform_device_alloc("exynos5-subcmu", PLATFORM_DEVID_AUTO);
-+	if (!pdev)
-+		return -ENOMEM;
- 
--	pdev = platform_device_alloc(info->pd_name, -1);
- 	pdev->dev.parent = parent;
--	pdev->driver_override = "exynos5-subcmu";
- 	platform_set_drvdata(pdev, (void *)info);
- 	of_genpd_add_device(&genpdspec, &pdev->dev);
--	platform_device_add(pdev);
-+	ret = platform_device_add(pdev);
-+	if (ret)
-+		platform_device_put(pdev);
- 
--	return 0;
-+	return ret;
- }
- 
- static int __init exynos5_clk_probe(struct platform_device *pdev)
-diff --git a/drivers/clk/ti/clkctrl.c b/drivers/clk/ti/clkctrl.c
-index 40630eb950fc..85d7f301149b 100644
---- a/drivers/clk/ti/clkctrl.c
-+++ b/drivers/clk/ti/clkctrl.c
-@@ -530,7 +530,7 @@ static void __init _ti_omap4_clkctrl_setup(struct device_node *node)
- 		 * Create default clkdm name, replace _cm from end of parent
- 		 * node name with _clkdm
- 		 */
--		provider->clkdm_name[strlen(provider->clkdm_name) - 5] = 0;
-+		provider->clkdm_name[strlen(provider->clkdm_name) - 2] = 0;
- 	} else {
- 		provider->clkdm_name = kasprintf(GFP_KERNEL, "%pOFn", node);
- 		if (!provider->clkdm_name) {
-diff --git a/drivers/clk/uniphier/clk-uniphier-cpugear.c b/drivers/clk/uniphier/clk-uniphier-cpugear.c
-index ec11f55594ad..5d2d42b7e182 100644
---- a/drivers/clk/uniphier/clk-uniphier-cpugear.c
-+++ b/drivers/clk/uniphier/clk-uniphier-cpugear.c
-@@ -47,7 +47,7 @@ static int uniphier_clk_cpugear_set_parent(struct clk_hw *hw, u8 index)
- 		return ret;
- 
- 	ret = regmap_write_bits(gear->regmap,
--				gear->regbase + UNIPHIER_CLK_CPUGEAR_SET,
-+				gear->regbase + UNIPHIER_CLK_CPUGEAR_UPD,
- 				UNIPHIER_CLK_CPUGEAR_UPD_BIT,
- 				UNIPHIER_CLK_CPUGEAR_UPD_BIT);
- 	if (ret)
-diff --git a/drivers/clocksource/Kconfig b/drivers/clocksource/Kconfig
-index a9e26f6a81a1..8dfd3bc448d0 100644
---- a/drivers/clocksource/Kconfig
-+++ b/drivers/clocksource/Kconfig
-@@ -360,6 +360,16 @@ config ARM64_ERRATUM_858921
- 	  The workaround will be dynamically enabled when an affected
- 	  core is detected.
- 
-+config SUN50I_ERRATUM_UNKNOWN1
-+	bool "Workaround for Allwinner A64 erratum UNKNOWN1"
-+	default y
-+	depends on ARM_ARCH_TIMER && ARM64 && ARCH_SUNXI
-+	select ARM_ARCH_TIMER_OOL_WORKAROUND
-+	help
-+	  This option enables a workaround for instability in the timer on
-+	  the Allwinner A64 SoC. The workaround will only be active if the
-+	  allwinner,erratum-unknown1 property is found in the timer node.
-+
- config ARM_GLOBAL_TIMER
- 	bool "Support for the ARM global timer" if COMPILE_TEST
- 	select TIMER_OF if OF
-diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c
-index 9a7d4dc00b6e..a8b20b65bd4b 100644
---- a/drivers/clocksource/arm_arch_timer.c
-+++ b/drivers/clocksource/arm_arch_timer.c
-@@ -326,6 +326,48 @@ static u64 notrace arm64_1188873_read_cntvct_el0(void)
- }
- #endif
- 
-+#ifdef CONFIG_SUN50I_ERRATUM_UNKNOWN1
-+/*
-+ * The low bits of the counter registers are indeterminate while bit 10 or
-+ * greater is rolling over. Since the counter value can jump both backward
-+ * (7ff -> 000 -> 800) and forward (7ff -> fff -> 800), ignore register values
-+ * with all ones or all zeros in the low bits. Bound the loop by the maximum
-+ * number of CPU cycles in 3 consecutive 24 MHz counter periods.
-+ */
-+#define __sun50i_a64_read_reg(reg) ({					\
-+	u64 _val;							\
-+	int _retries = 150;						\
-+									\
-+	do {								\
-+		_val = read_sysreg(reg);				\
-+		_retries--;						\
-+	} while (((_val + 1) & GENMASK(9, 0)) <= 1 && _retries);	\
-+									\
-+	WARN_ON_ONCE(!_retries);					\
-+	_val;								\
-+})
-+
-+static u64 notrace sun50i_a64_read_cntpct_el0(void)
-+{
-+	return __sun50i_a64_read_reg(cntpct_el0);
-+}
-+
-+static u64 notrace sun50i_a64_read_cntvct_el0(void)
-+{
-+	return __sun50i_a64_read_reg(cntvct_el0);
-+}
-+
-+static u32 notrace sun50i_a64_read_cntp_tval_el0(void)
-+{
-+	return read_sysreg(cntp_cval_el0) - sun50i_a64_read_cntpct_el0();
-+}
-+
-+static u32 notrace sun50i_a64_read_cntv_tval_el0(void)
-+{
-+	return read_sysreg(cntv_cval_el0) - sun50i_a64_read_cntvct_el0();
-+}
-+#endif
-+
- #ifdef CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND
- DEFINE_PER_CPU(const struct arch_timer_erratum_workaround *, timer_unstable_counter_workaround);
- EXPORT_SYMBOL_GPL(timer_unstable_counter_workaround);
-@@ -423,6 +465,19 @@ static const struct arch_timer_erratum_workaround ool_workarounds[] = {
- 		.read_cntvct_el0 = arm64_1188873_read_cntvct_el0,
- 	},
- #endif
-+#ifdef CONFIG_SUN50I_ERRATUM_UNKNOWN1
-+	{
-+		.match_type = ate_match_dt,
-+		.id = "allwinner,erratum-unknown1",
-+		.desc = "Allwinner erratum UNKNOWN1",
-+		.read_cntp_tval_el0 = sun50i_a64_read_cntp_tval_el0,
-+		.read_cntv_tval_el0 = sun50i_a64_read_cntv_tval_el0,
-+		.read_cntpct_el0 = sun50i_a64_read_cntpct_el0,
-+		.read_cntvct_el0 = sun50i_a64_read_cntvct_el0,
-+		.set_next_event_phys = erratum_set_next_event_tval_phys,
-+		.set_next_event_virt = erratum_set_next_event_tval_virt,
-+	},
-+#endif
- };
- 
- typedef bool (*ate_match_fn_t)(const struct arch_timer_erratum_workaround *,
-diff --git a/drivers/clocksource/exynos_mct.c b/drivers/clocksource/exynos_mct.c
-index 7a244b681876..d55c30f6981d 100644
---- a/drivers/clocksource/exynos_mct.c
-+++ b/drivers/clocksource/exynos_mct.c
-@@ -388,6 +388,13 @@ static void exynos4_mct_tick_start(unsigned long cycles,
- 	exynos4_mct_write(tmp, mevt->base + MCT_L_TCON_OFFSET);
- }
- 
-+static void exynos4_mct_tick_clear(struct mct_clock_event_device *mevt)
-+{
-+	/* Clear the MCT tick interrupt */
-+	if (readl_relaxed(reg_base + mevt->base + MCT_L_INT_CSTAT_OFFSET) & 1)
-+		exynos4_mct_write(0x1, mevt->base + MCT_L_INT_CSTAT_OFFSET);
-+}
-+
- static int exynos4_tick_set_next_event(unsigned long cycles,
- 				       struct clock_event_device *evt)
- {
-@@ -404,6 +411,7 @@ static int set_state_shutdown(struct clock_event_device *evt)
- 
- 	mevt = container_of(evt, struct mct_clock_event_device, evt);
- 	exynos4_mct_tick_stop(mevt);
-+	exynos4_mct_tick_clear(mevt);
- 	return 0;
- }
- 
-@@ -420,8 +428,11 @@ static int set_state_periodic(struct clock_event_device *evt)
- 	return 0;
- }
- 
--static void exynos4_mct_tick_clear(struct mct_clock_event_device *mevt)
-+static irqreturn_t exynos4_mct_tick_isr(int irq, void *dev_id)
- {
-+	struct mct_clock_event_device *mevt = dev_id;
-+	struct clock_event_device *evt = &mevt->evt;
-+
- 	/*
- 	 * This is for supporting oneshot mode.
- 	 * Mct would generate interrupt periodically
-@@ -430,16 +441,6 @@ static void exynos4_mct_tick_clear(struct mct_clock_event_device *mevt)
- 	if (!clockevent_state_periodic(&mevt->evt))
- 		exynos4_mct_tick_stop(mevt);
- 
--	/* Clear the MCT tick interrupt */
--	if (readl_relaxed(reg_base + mevt->base + MCT_L_INT_CSTAT_OFFSET) & 1)
--		exynos4_mct_write(0x1, mevt->base + MCT_L_INT_CSTAT_OFFSET);
--}
--
--static irqreturn_t exynos4_mct_tick_isr(int irq, void *dev_id)
--{
--	struct mct_clock_event_device *mevt = dev_id;
--	struct clock_event_device *evt = &mevt->evt;
--
- 	exynos4_mct_tick_clear(mevt);
- 
- 	evt->event_handler(evt);
-diff --git a/drivers/clocksource/timer-riscv.c b/drivers/clocksource/timer-riscv.c
-index 431892200a08..ead71bfac689 100644
---- a/drivers/clocksource/timer-riscv.c
-+++ b/drivers/clocksource/timer-riscv.c
-@@ -58,7 +58,7 @@ static u64 riscv_sched_clock(void)
- static DEFINE_PER_CPU(struct clocksource, riscv_clocksource) = {
- 	.name		= "riscv_clocksource",
- 	.rating		= 300,
--	.mask		= CLOCKSOURCE_MASK(BITS_PER_LONG),
-+	.mask		= CLOCKSOURCE_MASK(64),
- 	.flags		= CLOCK_SOURCE_IS_CONTINUOUS,
- 	.read		= riscv_clocksource_rdtime,
- };
-@@ -103,8 +103,7 @@ static int __init riscv_timer_init_dt(struct device_node *n)
- 	cs = per_cpu_ptr(&riscv_clocksource, cpuid);
- 	clocksource_register_hz(cs, riscv_timebase);
- 
--	sched_clock_register(riscv_sched_clock,
--			BITS_PER_LONG, riscv_timebase);
-+	sched_clock_register(riscv_sched_clock, 64, riscv_timebase);
- 
- 	error = cpuhp_setup_state(CPUHP_AP_RISCV_TIMER_STARTING,
- 			 "clockevents/riscv/timer:starting",
-diff --git a/drivers/connector/cn_proc.c b/drivers/connector/cn_proc.c
-index ed5e42461094..ad48fd52cb53 100644
---- a/drivers/connector/cn_proc.c
-+++ b/drivers/connector/cn_proc.c
-@@ -250,6 +250,7 @@ void proc_coredump_connector(struct task_struct *task)
- {
- 	struct cn_msg *msg;
- 	struct proc_event *ev;
-+	struct task_struct *parent;
- 	__u8 buffer[CN_PROC_MSG_SIZE] __aligned(8);
- 
- 	if (atomic_read(&proc_event_num_listeners) < 1)
-@@ -262,8 +263,14 @@ void proc_coredump_connector(struct task_struct *task)
- 	ev->what = PROC_EVENT_COREDUMP;
- 	ev->event_data.coredump.process_pid = task->pid;
- 	ev->event_data.coredump.process_tgid = task->tgid;
--	ev->event_data.coredump.parent_pid = task->real_parent->pid;
--	ev->event_data.coredump.parent_tgid = task->real_parent->tgid;
-+
-+	rcu_read_lock();
-+	if (pid_alive(task)) {
-+		parent = rcu_dereference(task->real_parent);
-+		ev->event_data.coredump.parent_pid = parent->pid;
-+		ev->event_data.coredump.parent_tgid = parent->tgid;
-+	}
-+	rcu_read_unlock();
- 
- 	memcpy(&msg->id, &cn_proc_event_id, sizeof(msg->id));
- 	msg->ack = 0; /* not used */
-@@ -276,6 +283,7 @@ void proc_exit_connector(struct task_struct *task)
- {
- 	struct cn_msg *msg;
- 	struct proc_event *ev;
-+	struct task_struct *parent;
- 	__u8 buffer[CN_PROC_MSG_SIZE] __aligned(8);
- 
- 	if (atomic_read(&proc_event_num_listeners) < 1)
-@@ -290,8 +298,14 @@ void proc_exit_connector(struct task_struct *task)
- 	ev->event_data.exit.process_tgid = task->tgid;
- 	ev->event_data.exit.exit_code = task->exit_code;
- 	ev->event_data.exit.exit_signal = task->exit_signal;
--	ev->event_data.exit.parent_pid = task->real_parent->pid;
--	ev->event_data.exit.parent_tgid = task->real_parent->tgid;
-+
-+	rcu_read_lock();
-+	if (pid_alive(task)) {
-+		parent = rcu_dereference(task->real_parent);
-+		ev->event_data.exit.parent_pid = parent->pid;
-+		ev->event_data.exit.parent_tgid = parent->tgid;
-+	}
-+	rcu_read_unlock();
- 
- 	memcpy(&msg->id, &cn_proc_event_id, sizeof(msg->id));
- 	msg->ack = 0; /* not used */
-diff --git a/drivers/cpufreq/acpi-cpufreq.c b/drivers/cpufreq/acpi-cpufreq.c
-index d62fd374d5c7..c72258a44ba4 100644
---- a/drivers/cpufreq/acpi-cpufreq.c
-+++ b/drivers/cpufreq/acpi-cpufreq.c
-@@ -916,8 +916,10 @@ static void __init acpi_cpufreq_boost_init(void)
- {
- 	int ret;
- 
--	if (!(boot_cpu_has(X86_FEATURE_CPB) || boot_cpu_has(X86_FEATURE_IDA)))
-+	if (!(boot_cpu_has(X86_FEATURE_CPB) || boot_cpu_has(X86_FEATURE_IDA))) {
-+		pr_debug("Boost capabilities not present in the processor\n");
- 		return;
-+	}
- 
- 	acpi_cpufreq_driver.set_boost = set_boost;
- 	acpi_cpufreq_driver.boost_enabled = boost_state(0);
-diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
-index e35a886e00bc..ef0e33e21b98 100644
---- a/drivers/cpufreq/cpufreq.c
-+++ b/drivers/cpufreq/cpufreq.c
-@@ -545,13 +545,13 @@ EXPORT_SYMBOL_GPL(cpufreq_policy_transition_delay_us);
-  *                          SYSFS INTERFACE                          *
-  *********************************************************************/
- static ssize_t show_boost(struct kobject *kobj,
--				 struct attribute *attr, char *buf)
-+			  struct kobj_attribute *attr, char *buf)
- {
- 	return sprintf(buf, "%d\n", cpufreq_driver->boost_enabled);
- }
- 
--static ssize_t store_boost(struct kobject *kobj, struct attribute *attr,
--				  const char *buf, size_t count)
-+static ssize_t store_boost(struct kobject *kobj, struct kobj_attribute *attr,
-+			   const char *buf, size_t count)
- {
- 	int ret, enable;
- 
-diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
-index dd66decf2087..a579ca4552df 100644
---- a/drivers/cpufreq/intel_pstate.c
-+++ b/drivers/cpufreq/intel_pstate.c
-@@ -383,7 +383,10 @@ static int intel_pstate_get_cppc_guranteed(int cpu)
- 	if (ret)
- 		return ret;
- 
--	return cppc_perf.guaranteed_perf;
-+	if (cppc_perf.guaranteed_perf)
-+		return cppc_perf.guaranteed_perf;
-+
-+	return cppc_perf.nominal_perf;
- }
- 
- #else /* CONFIG_ACPI_CPPC_LIB */
-@@ -895,7 +898,7 @@ static void intel_pstate_update_policies(void)
- /************************** sysfs begin ************************/
- #define show_one(file_name, object)					\
- 	static ssize_t show_##file_name					\
--	(struct kobject *kobj, struct attribute *attr, char *buf)	\
-+	(struct kobject *kobj, struct kobj_attribute *attr, char *buf)	\
- 	{								\
- 		return sprintf(buf, "%u\n", global.object);		\
- 	}
-@@ -904,7 +907,7 @@ static ssize_t intel_pstate_show_status(char *buf);
- static int intel_pstate_update_status(const char *buf, size_t size);
- 
- static ssize_t show_status(struct kobject *kobj,
--			   struct attribute *attr, char *buf)
-+			   struct kobj_attribute *attr, char *buf)
- {
- 	ssize_t ret;
- 
-@@ -915,7 +918,7 @@ static ssize_t show_status(struct kobject *kobj,
- 	return ret;
- }
- 
--static ssize_t store_status(struct kobject *a, struct attribute *b,
-+static ssize_t store_status(struct kobject *a, struct kobj_attribute *b,
- 			    const char *buf, size_t count)
- {
- 	char *p = memchr(buf, '\n', count);
-@@ -929,7 +932,7 @@ static ssize_t store_status(struct kobject *a, struct attribute *b,
- }
- 
- static ssize_t show_turbo_pct(struct kobject *kobj,
--				struct attribute *attr, char *buf)
-+				struct kobj_attribute *attr, char *buf)
- {
- 	struct cpudata *cpu;
- 	int total, no_turbo, turbo_pct;
-@@ -955,7 +958,7 @@ static ssize_t show_turbo_pct(struct kobject *kobj,
- }
- 
- static ssize_t show_num_pstates(struct kobject *kobj,
--				struct attribute *attr, char *buf)
-+				struct kobj_attribute *attr, char *buf)
- {
- 	struct cpudata *cpu;
- 	int total;
-@@ -976,7 +979,7 @@ static ssize_t show_num_pstates(struct kobject *kobj,
- }
- 
- static ssize_t show_no_turbo(struct kobject *kobj,
--			     struct attribute *attr, char *buf)
-+			     struct kobj_attribute *attr, char *buf)
- {
- 	ssize_t ret;
- 
-@@ -998,7 +1001,7 @@ static ssize_t show_no_turbo(struct kobject *kobj,
- 	return ret;
- }
- 
--static ssize_t store_no_turbo(struct kobject *a, struct attribute *b,
-+static ssize_t store_no_turbo(struct kobject *a, struct kobj_attribute *b,
- 			      const char *buf, size_t count)
- {
- 	unsigned int input;
-@@ -1045,7 +1048,7 @@ static ssize_t store_no_turbo(struct kobject *a, struct attribute *b,
- 	return count;
- }
- 
--static ssize_t store_max_perf_pct(struct kobject *a, struct attribute *b,
-+static ssize_t store_max_perf_pct(struct kobject *a, struct kobj_attribute *b,
- 				  const char *buf, size_t count)
- {
- 	unsigned int input;
-@@ -1075,7 +1078,7 @@ static ssize_t store_max_perf_pct(struct kobject *a, struct attribute *b,
- 	return count;
- }
- 
--static ssize_t store_min_perf_pct(struct kobject *a, struct attribute *b,
-+static ssize_t store_min_perf_pct(struct kobject *a, struct kobj_attribute *b,
- 				  const char *buf, size_t count)
- {
- 	unsigned int input;
-@@ -1107,12 +1110,13 @@ static ssize_t store_min_perf_pct(struct kobject *a, struct attribute *b,
- }
- 
- static ssize_t show_hwp_dynamic_boost(struct kobject *kobj,
--				struct attribute *attr, char *buf)
-+				struct kobj_attribute *attr, char *buf)
- {
- 	return sprintf(buf, "%u\n", hwp_boost);
- }
- 
--static ssize_t store_hwp_dynamic_boost(struct kobject *a, struct attribute *b,
-+static ssize_t store_hwp_dynamic_boost(struct kobject *a,
-+				       struct kobj_attribute *b,
- 				       const char *buf, size_t count)
- {
- 	unsigned int input;
-diff --git a/drivers/cpufreq/pxa2xx-cpufreq.c b/drivers/cpufreq/pxa2xx-cpufreq.c
-index 46254e583982..74e0e0c20c46 100644
---- a/drivers/cpufreq/pxa2xx-cpufreq.c
-+++ b/drivers/cpufreq/pxa2xx-cpufreq.c
-@@ -143,7 +143,7 @@ static int pxa_cpufreq_change_voltage(const struct pxa_freqs *pxa_freq)
- 	return ret;
- }
- 
--static void __init pxa_cpufreq_init_voltages(void)
-+static void pxa_cpufreq_init_voltages(void)
- {
- 	vcc_core = regulator_get(NULL, "vcc_core");
- 	if (IS_ERR(vcc_core)) {
-@@ -159,7 +159,7 @@ static int pxa_cpufreq_change_voltage(const struct pxa_freqs *pxa_freq)
- 	return 0;
- }
- 
--static void __init pxa_cpufreq_init_voltages(void) { }
-+static void pxa_cpufreq_init_voltages(void) { }
- #endif
- 
- static void find_freq_tables(struct cpufreq_frequency_table **freq_table,
-diff --git a/drivers/cpufreq/qcom-cpufreq-kryo.c b/drivers/cpufreq/qcom-cpufreq-kryo.c
-index 2a3675c24032..a472b814058f 100644
---- a/drivers/cpufreq/qcom-cpufreq-kryo.c
-+++ b/drivers/cpufreq/qcom-cpufreq-kryo.c
-@@ -75,7 +75,7 @@ static enum _msm8996_version qcom_cpufreq_kryo_get_msm_id(void)
- 
- static int qcom_cpufreq_kryo_probe(struct platform_device *pdev)
- {
--	struct opp_table *opp_tables[NR_CPUS] = {0};
-+	struct opp_table **opp_tables;
- 	enum _msm8996_version msm8996_version;
- 	struct nvmem_cell *speedbin_nvmem;
- 	struct device_node *np;
-@@ -133,6 +133,10 @@ static int qcom_cpufreq_kryo_probe(struct platform_device *pdev)
- 	}
- 	kfree(speedbin);
- 
-+	opp_tables = kcalloc(num_possible_cpus(), sizeof(*opp_tables), GFP_KERNEL);
-+	if (!opp_tables)
-+		return -ENOMEM;
-+
- 	for_each_possible_cpu(cpu) {
- 		cpu_dev = get_cpu_device(cpu);
- 		if (NULL == cpu_dev) {
-@@ -151,8 +155,10 @@ static int qcom_cpufreq_kryo_probe(struct platform_device *pdev)
- 
- 	cpufreq_dt_pdev = platform_device_register_simple("cpufreq-dt", -1,
- 							  NULL, 0);
--	if (!IS_ERR(cpufreq_dt_pdev))
-+	if (!IS_ERR(cpufreq_dt_pdev)) {
-+		platform_set_drvdata(pdev, opp_tables);
- 		return 0;
-+	}
- 
- 	ret = PTR_ERR(cpufreq_dt_pdev);
- 	dev_err(cpu_dev, "Failed to register platform device\n");
-@@ -163,13 +169,23 @@ free_opp:
- 			break;
- 		dev_pm_opp_put_supported_hw(opp_tables[cpu]);
- 	}
-+	kfree(opp_tables);
- 
- 	return ret;
- }
- 
- static int qcom_cpufreq_kryo_remove(struct platform_device *pdev)
- {
-+	struct opp_table **opp_tables = platform_get_drvdata(pdev);
-+	unsigned int cpu;
-+
- 	platform_device_unregister(cpufreq_dt_pdev);
-+
-+	for_each_possible_cpu(cpu)
-+		dev_pm_opp_put_supported_hw(opp_tables[cpu]);
-+
-+	kfree(opp_tables);
-+
- 	return 0;
- }
- 
-diff --git a/drivers/cpufreq/scpi-cpufreq.c b/drivers/cpufreq/scpi-cpufreq.c
-index 99449738faa4..632ccf82c5d3 100644
---- a/drivers/cpufreq/scpi-cpufreq.c
-+++ b/drivers/cpufreq/scpi-cpufreq.c
-@@ -189,8 +189,8 @@ static int scpi_cpufreq_exit(struct cpufreq_policy *policy)
- 	cpufreq_cooling_unregister(priv->cdev);
- 	clk_put(priv->clk);
- 	dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table);
--	kfree(priv);
- 	dev_pm_opp_remove_all_dynamic(priv->cpu_dev);
-+	kfree(priv);
- 
- 	return 0;
- }
-diff --git a/drivers/cpufreq/tegra124-cpufreq.c b/drivers/cpufreq/tegra124-cpufreq.c
-index 43530254201a..4bb154f6c54c 100644
---- a/drivers/cpufreq/tegra124-cpufreq.c
-+++ b/drivers/cpufreq/tegra124-cpufreq.c
-@@ -134,6 +134,8 @@ static int tegra124_cpufreq_probe(struct platform_device *pdev)
- 
- 	platform_set_drvdata(pdev, priv);
- 
-+	of_node_put(np);
-+
- 	return 0;
- 
- out_switch_to_pllx:
-diff --git a/drivers/cpuidle/governor.c b/drivers/cpuidle/governor.c
-index bb93e5cf6a4a..9fddf828a76f 100644
---- a/drivers/cpuidle/governor.c
-+++ b/drivers/cpuidle/governor.c
-@@ -89,6 +89,7 @@ int cpuidle_register_governor(struct cpuidle_governor *gov)
- 	mutex_lock(&cpuidle_lock);
- 	if (__cpuidle_find_governor(gov->name) == NULL) {
- 		ret = 0;
-+		list_add_tail(&gov->governor_list, &cpuidle_governors);
- 		if (!cpuidle_curr_governor ||
- 		    !strncasecmp(param_governor, gov->name, CPUIDLE_NAME_LEN) ||
- 		    (cpuidle_curr_governor->rating < gov->rating &&
-diff --git a/drivers/crypto/amcc/crypto4xx_trng.c b/drivers/crypto/amcc/crypto4xx_trng.c
-index 5e63742b0d22..53ab1f140a26 100644
---- a/drivers/crypto/amcc/crypto4xx_trng.c
-+++ b/drivers/crypto/amcc/crypto4xx_trng.c
-@@ -80,8 +80,10 @@ void ppc4xx_trng_probe(struct crypto4xx_core_device *core_dev)
- 
- 	/* Find the TRNG device node and map it */
- 	trng = of_find_matching_node(NULL, ppc4xx_trng_match);
--	if (!trng || !of_device_is_available(trng))
-+	if (!trng || !of_device_is_available(trng)) {
-+		of_node_put(trng);
- 		return;
-+	}
- 
- 	dev->trng_base = of_iomap(trng, 0);
- 	of_node_put(trng);
-diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
-index 80ae69f906fb..1c4f3a046dc5 100644
---- a/drivers/crypto/caam/caamalg.c
-+++ b/drivers/crypto/caam/caamalg.c
-@@ -1040,6 +1040,7 @@ static void init_aead_job(struct aead_request *req,
- 	if (unlikely(req->src != req->dst)) {
- 		if (edesc->dst_nents == 1) {
- 			dst_dma = sg_dma_address(req->dst);
-+			out_options = 0;
- 		} else {
- 			dst_dma = edesc->sec4_sg_dma +
- 				  sec4_sg_index *
-diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
-index bb1a2cdf1951..0f11811a3585 100644
---- a/drivers/crypto/caam/caamhash.c
-+++ b/drivers/crypto/caam/caamhash.c
-@@ -113,6 +113,7 @@ struct caam_hash_ctx {
- struct caam_hash_state {
- 	dma_addr_t buf_dma;
- 	dma_addr_t ctx_dma;
-+	int ctx_dma_len;
- 	u8 buf_0[CAAM_MAX_HASH_BLOCK_SIZE] ____cacheline_aligned;
- 	int buflen_0;
- 	u8 buf_1[CAAM_MAX_HASH_BLOCK_SIZE] ____cacheline_aligned;
-@@ -165,6 +166,7 @@ static inline int map_seq_out_ptr_ctx(u32 *desc, struct device *jrdev,
- 				      struct caam_hash_state *state,
- 				      int ctx_len)
- {
-+	state->ctx_dma_len = ctx_len;
- 	state->ctx_dma = dma_map_single(jrdev, state->caam_ctx,
- 					ctx_len, DMA_FROM_DEVICE);
- 	if (dma_mapping_error(jrdev, state->ctx_dma)) {
-@@ -178,18 +180,6 @@ static inline int map_seq_out_ptr_ctx(u32 *desc, struct device *jrdev,
- 	return 0;
- }
- 
--/* Map req->result, and append seq_out_ptr command that points to it */
--static inline dma_addr_t map_seq_out_ptr_result(u32 *desc, struct device *jrdev,
--						u8 *result, int digestsize)
--{
--	dma_addr_t dst_dma;
--
--	dst_dma = dma_map_single(jrdev, result, digestsize, DMA_FROM_DEVICE);
--	append_seq_out_ptr(desc, dst_dma, digestsize, 0);
--
--	return dst_dma;
--}
--
- /* Map current buffer in state (if length > 0) and put it in link table */
- static inline int buf_map_to_sec4_sg(struct device *jrdev,
- 				     struct sec4_sg_entry *sec4_sg,
-@@ -218,6 +208,7 @@ static inline int ctx_map_to_sec4_sg(struct device *jrdev,
- 				     struct caam_hash_state *state, int ctx_len,
- 				     struct sec4_sg_entry *sec4_sg, u32 flag)
- {
-+	state->ctx_dma_len = ctx_len;
- 	state->ctx_dma = dma_map_single(jrdev, state->caam_ctx, ctx_len, flag);
- 	if (dma_mapping_error(jrdev, state->ctx_dma)) {
- 		dev_err(jrdev, "unable to map ctx\n");
-@@ -426,7 +417,6 @@ static int ahash_setkey(struct crypto_ahash *ahash,
- 
- /*
-  * ahash_edesc - s/w-extended ahash descriptor
-- * @dst_dma: physical mapped address of req->result
-  * @sec4_sg_dma: physical mapped address of h/w link table
-  * @src_nents: number of segments in input scatterlist
-  * @sec4_sg_bytes: length of dma mapped sec4_sg space
-@@ -434,7 +424,6 @@ static int ahash_setkey(struct crypto_ahash *ahash,
-  * @sec4_sg: h/w link table
-  */
- struct ahash_edesc {
--	dma_addr_t dst_dma;
- 	dma_addr_t sec4_sg_dma;
- 	int src_nents;
- 	int sec4_sg_bytes;
-@@ -450,8 +439,6 @@ static inline void ahash_unmap(struct device *dev,
- 
- 	if (edesc->src_nents)
- 		dma_unmap_sg(dev, req->src, edesc->src_nents, DMA_TO_DEVICE);
--	if (edesc->dst_dma)
--		dma_unmap_single(dev, edesc->dst_dma, dst_len, DMA_FROM_DEVICE);
- 
- 	if (edesc->sec4_sg_bytes)
- 		dma_unmap_single(dev, edesc->sec4_sg_dma,
-@@ -468,12 +455,10 @@ static inline void ahash_unmap_ctx(struct device *dev,
- 			struct ahash_edesc *edesc,
- 			struct ahash_request *req, int dst_len, u32 flag)
- {
--	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
--	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
- 	struct caam_hash_state *state = ahash_request_ctx(req);
- 
- 	if (state->ctx_dma) {
--		dma_unmap_single(dev, state->ctx_dma, ctx->ctx_len, flag);
-+		dma_unmap_single(dev, state->ctx_dma, state->ctx_dma_len, flag);
- 		state->ctx_dma = 0;
- 	}
- 	ahash_unmap(dev, edesc, req, dst_len);
-@@ -486,9 +471,9 @@ static void ahash_done(struct device *jrdev, u32 *desc, u32 err,
- 	struct ahash_edesc *edesc;
- 	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
- 	int digestsize = crypto_ahash_digestsize(ahash);
-+	struct caam_hash_state *state = ahash_request_ctx(req);
- #ifdef DEBUG
- 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
--	struct caam_hash_state *state = ahash_request_ctx(req);
- 
- 	dev_err(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
- #endif
-@@ -497,17 +482,14 @@ static void ahash_done(struct device *jrdev, u32 *desc, u32 err,
- 	if (err)
- 		caam_jr_strstatus(jrdev, err);
- 
--	ahash_unmap(jrdev, edesc, req, digestsize);
-+	ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
-+	memcpy(req->result, state->caam_ctx, digestsize);
- 	kfree(edesc);
- 
- #ifdef DEBUG
- 	print_hex_dump(KERN_ERR, "ctx@"__stringify(__LINE__)": ",
- 		       DUMP_PREFIX_ADDRESS, 16, 4, state->caam_ctx,
- 		       ctx->ctx_len, 1);
--	if (req->result)
--		print_hex_dump(KERN_ERR, "result@"__stringify(__LINE__)": ",
--			       DUMP_PREFIX_ADDRESS, 16, 4, req->result,
--			       digestsize, 1);
- #endif
- 
- 	req->base.complete(&req->base, err);
-@@ -555,9 +537,9 @@ static void ahash_done_ctx_src(struct device *jrdev, u32 *desc, u32 err,
- 	struct ahash_edesc *edesc;
- 	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
- 	int digestsize = crypto_ahash_digestsize(ahash);
-+	struct caam_hash_state *state = ahash_request_ctx(req);
- #ifdef DEBUG
- 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
--	struct caam_hash_state *state = ahash_request_ctx(req);
- 
- 	dev_err(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
- #endif
-@@ -566,17 +548,14 @@ static void ahash_done_ctx_src(struct device *jrdev, u32 *desc, u32 err,
- 	if (err)
- 		caam_jr_strstatus(jrdev, err);
- 
--	ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_TO_DEVICE);
-+	ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL);
-+	memcpy(req->result, state->caam_ctx, digestsize);
- 	kfree(edesc);
- 
- #ifdef DEBUG
- 	print_hex_dump(KERN_ERR, "ctx@"__stringify(__LINE__)": ",
- 		       DUMP_PREFIX_ADDRESS, 16, 4, state->caam_ctx,
- 		       ctx->ctx_len, 1);
--	if (req->result)
--		print_hex_dump(KERN_ERR, "result@"__stringify(__LINE__)": ",
--			       DUMP_PREFIX_ADDRESS, 16, 4, req->result,
--			       digestsize, 1);
- #endif
- 
- 	req->base.complete(&req->base, err);
-@@ -837,7 +816,7 @@ static int ahash_final_ctx(struct ahash_request *req)
- 	edesc->sec4_sg_bytes = sec4_sg_bytes;
- 
- 	ret = ctx_map_to_sec4_sg(jrdev, state, ctx->ctx_len,
--				 edesc->sec4_sg, DMA_TO_DEVICE);
-+				 edesc->sec4_sg, DMA_BIDIRECTIONAL);
- 	if (ret)
- 		goto unmap_ctx;
- 
-@@ -857,14 +836,7 @@ static int ahash_final_ctx(struct ahash_request *req)
- 
- 	append_seq_in_ptr(desc, edesc->sec4_sg_dma, ctx->ctx_len + buflen,
- 			  LDST_SGF);
--
--	edesc->dst_dma = map_seq_out_ptr_result(desc, jrdev, req->result,
--						digestsize);
--	if (dma_mapping_error(jrdev, edesc->dst_dma)) {
--		dev_err(jrdev, "unable to map dst\n");
--		ret = -ENOMEM;
--		goto unmap_ctx;
--	}
-+	append_seq_out_ptr(desc, state->ctx_dma, digestsize, 0);
- 
- #ifdef DEBUG
- 	print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
-@@ -877,7 +849,7 @@ static int ahash_final_ctx(struct ahash_request *req)
- 
- 	return -EINPROGRESS;
-  unmap_ctx:
--	ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
-+	ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL);
- 	kfree(edesc);
- 	return ret;
- }
-@@ -931,7 +903,7 @@ static int ahash_finup_ctx(struct ahash_request *req)
- 	edesc->src_nents = src_nents;
- 
- 	ret = ctx_map_to_sec4_sg(jrdev, state, ctx->ctx_len,
--				 edesc->sec4_sg, DMA_TO_DEVICE);
-+				 edesc->sec4_sg, DMA_BIDIRECTIONAL);
- 	if (ret)
- 		goto unmap_ctx;
- 
-@@ -945,13 +917,7 @@ static int ahash_finup_ctx(struct ahash_request *req)
- 	if (ret)
- 		goto unmap_ctx;
- 
--	edesc->dst_dma = map_seq_out_ptr_result(desc, jrdev, req->result,
--						digestsize);
--	if (dma_mapping_error(jrdev, edesc->dst_dma)) {
--		dev_err(jrdev, "unable to map dst\n");
--		ret = -ENOMEM;
--		goto unmap_ctx;
--	}
-+	append_seq_out_ptr(desc, state->ctx_dma, digestsize, 0);
- 
- #ifdef DEBUG
- 	print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
-@@ -964,7 +930,7 @@ static int ahash_finup_ctx(struct ahash_request *req)
- 
- 	return -EINPROGRESS;
-  unmap_ctx:
--	ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
-+	ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL);
- 	kfree(edesc);
- 	return ret;
- }
-@@ -1023,10 +989,8 @@ static int ahash_digest(struct ahash_request *req)
- 
- 	desc = edesc->hw_desc;
- 
--	edesc->dst_dma = map_seq_out_ptr_result(desc, jrdev, req->result,
--						digestsize);
--	if (dma_mapping_error(jrdev, edesc->dst_dma)) {
--		dev_err(jrdev, "unable to map dst\n");
-+	ret = map_seq_out_ptr_ctx(desc, jrdev, state, digestsize);
-+	if (ret) {
- 		ahash_unmap(jrdev, edesc, req, digestsize);
- 		kfree(edesc);
- 		return -ENOMEM;
-@@ -1041,7 +1005,7 @@ static int ahash_digest(struct ahash_request *req)
- 	if (!ret) {
- 		ret = -EINPROGRESS;
- 	} else {
--		ahash_unmap(jrdev, edesc, req, digestsize);
-+		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
- 		kfree(edesc);
- 	}
- 
-@@ -1083,12 +1047,9 @@ static int ahash_final_no_ctx(struct ahash_request *req)
- 		append_seq_in_ptr(desc, state->buf_dma, buflen, 0);
- 	}
- 
--	edesc->dst_dma = map_seq_out_ptr_result(desc, jrdev, req->result,
--						digestsize);
--	if (dma_mapping_error(jrdev, edesc->dst_dma)) {
--		dev_err(jrdev, "unable to map dst\n");
-+	ret = map_seq_out_ptr_ctx(desc, jrdev, state, digestsize);
-+	if (ret)
- 		goto unmap;
--	}
- 
- #ifdef DEBUG
- 	print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
-@@ -1099,7 +1060,7 @@ static int ahash_final_no_ctx(struct ahash_request *req)
- 	if (!ret) {
- 		ret = -EINPROGRESS;
- 	} else {
--		ahash_unmap(jrdev, edesc, req, digestsize);
-+		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
- 		kfree(edesc);
- 	}
- 
-@@ -1298,12 +1259,9 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
- 		goto unmap;
- 	}
- 
--	edesc->dst_dma = map_seq_out_ptr_result(desc, jrdev, req->result,
--						digestsize);
--	if (dma_mapping_error(jrdev, edesc->dst_dma)) {
--		dev_err(jrdev, "unable to map dst\n");
-+	ret = map_seq_out_ptr_ctx(desc, jrdev, state, digestsize);
-+	if (ret)
- 		goto unmap;
--	}
- 
- #ifdef DEBUG
- 	print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ",
-@@ -1314,7 +1272,7 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
- 	if (!ret) {
- 		ret = -EINPROGRESS;
- 	} else {
--		ahash_unmap(jrdev, edesc, req, digestsize);
-+		ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE);
- 		kfree(edesc);
- 	}
- 
-@@ -1446,6 +1404,7 @@ static int ahash_init(struct ahash_request *req)
- 	state->final = ahash_final_no_ctx;
- 
- 	state->ctx_dma = 0;
-+	state->ctx_dma_len = 0;
- 	state->current_buf = 0;
- 	state->buf_dma = 0;
- 	state->buflen_0 = 0;
-diff --git a/drivers/crypto/cavium/zip/zip_main.c b/drivers/crypto/cavium/zip/zip_main.c
-index be055b9547f6..6183f9128a8a 100644
---- a/drivers/crypto/cavium/zip/zip_main.c
-+++ b/drivers/crypto/cavium/zip/zip_main.c
-@@ -351,6 +351,7 @@ static struct pci_driver zip_driver = {
- 
- static struct crypto_alg zip_comp_deflate = {
- 	.cra_name		= "deflate",
-+	.cra_driver_name	= "deflate-cavium",
- 	.cra_flags		= CRYPTO_ALG_TYPE_COMPRESS,
- 	.cra_ctxsize		= sizeof(struct zip_kernel_ctx),
- 	.cra_priority           = 300,
-@@ -365,6 +366,7 @@ static struct crypto_alg zip_comp_deflate = {
- 
- static struct crypto_alg zip_comp_lzs = {
- 	.cra_name		= "lzs",
-+	.cra_driver_name	= "lzs-cavium",
- 	.cra_flags		= CRYPTO_ALG_TYPE_COMPRESS,
- 	.cra_ctxsize		= sizeof(struct zip_kernel_ctx),
- 	.cra_priority           = 300,
-@@ -384,7 +386,7 @@ static struct scomp_alg zip_scomp_deflate = {
- 	.decompress		= zip_scomp_decompress,
- 	.base			= {
- 		.cra_name		= "deflate",
--		.cra_driver_name	= "deflate-scomp",
-+		.cra_driver_name	= "deflate-scomp-cavium",
- 		.cra_module		= THIS_MODULE,
- 		.cra_priority           = 300,
- 	}
-@@ -397,7 +399,7 @@ static struct scomp_alg zip_scomp_lzs = {
- 	.decompress		= zip_scomp_decompress,
- 	.base			= {
- 		.cra_name		= "lzs",
--		.cra_driver_name	= "lzs-scomp",
-+		.cra_driver_name	= "lzs-scomp-cavium",
- 		.cra_module		= THIS_MODULE,
- 		.cra_priority           = 300,
- 	}
-diff --git a/drivers/crypto/ccree/cc_buffer_mgr.c b/drivers/crypto/ccree/cc_buffer_mgr.c
-index dd948e1df9e5..3bcb6bce666e 100644
---- a/drivers/crypto/ccree/cc_buffer_mgr.c
-+++ b/drivers/crypto/ccree/cc_buffer_mgr.c
-@@ -614,10 +614,10 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req)
- 				 hw_iv_size, DMA_BIDIRECTIONAL);
- 	}
- 
--	/*In case a pool was set, a table was
--	 *allocated and should be released
--	 */
--	if (areq_ctx->mlli_params.curr_pool) {
-+	/* Release pool */
-+	if ((areq_ctx->assoc_buff_type == CC_DMA_BUF_MLLI ||
-+	     areq_ctx->data_buff_type == CC_DMA_BUF_MLLI) &&
-+	    (areq_ctx->mlli_params.mlli_virt_addr)) {
- 		dev_dbg(dev, "free MLLI buffer: dma=%pad virt=%pK\n",
- 			&areq_ctx->mlli_params.mlli_dma_addr,
- 			areq_ctx->mlli_params.mlli_virt_addr);
-diff --git a/drivers/crypto/ccree/cc_cipher.c b/drivers/crypto/ccree/cc_cipher.c
-index cc92b031fad1..4ec93079daaf 100644
---- a/drivers/crypto/ccree/cc_cipher.c
-+++ b/drivers/crypto/ccree/cc_cipher.c
-@@ -80,6 +80,7 @@ static int validate_keys_sizes(struct cc_cipher_ctx *ctx_p, u32 size)
- 		default:
- 			break;
- 		}
-+		break;
- 	case S_DIN_to_DES:
- 		if (size == DES3_EDE_KEY_SIZE || size == DES_KEY_SIZE)
- 			return 0;
-@@ -652,6 +653,8 @@ static void cc_cipher_complete(struct device *dev, void *cc_req, int err)
- 	unsigned int ivsize = crypto_skcipher_ivsize(sk_tfm);
- 	unsigned int len;
- 
-+	cc_unmap_cipher_request(dev, req_ctx, ivsize, src, dst);
-+
- 	switch (ctx_p->cipher_mode) {
- 	case DRV_CIPHER_CBC:
- 		/*
-@@ -681,7 +684,6 @@ static void cc_cipher_complete(struct device *dev, void *cc_req, int err)
- 		break;
- 	}
- 
--	cc_unmap_cipher_request(dev, req_ctx, ivsize, src, dst);
- 	kzfree(req_ctx->iv);
- 
- 	skcipher_request_complete(req, err);
-@@ -799,7 +801,8 @@ static int cc_cipher_decrypt(struct skcipher_request *req)
- 
- 	memset(req_ctx, 0, sizeof(*req_ctx));
- 
--	if (ctx_p->cipher_mode == DRV_CIPHER_CBC) {
-+	if ((ctx_p->cipher_mode == DRV_CIPHER_CBC) &&
-+	    (req->cryptlen >= ivsize)) {
- 
- 		/* Allocate and save the last IV sized bytes of the source,
- 		 * which will be lost in case of in-place decryption.
-diff --git a/drivers/crypto/rockchip/rk3288_crypto.c b/drivers/crypto/rockchip/rk3288_crypto.c
-index c9d622abd90c..0ce4a65b95f5 100644
---- a/drivers/crypto/rockchip/rk3288_crypto.c
-+++ b/drivers/crypto/rockchip/rk3288_crypto.c
-@@ -119,7 +119,7 @@ static int rk_load_data(struct rk_crypto_info *dev,
- 		count = (dev->left_bytes > PAGE_SIZE) ?
- 			PAGE_SIZE : dev->left_bytes;
- 
--		if (!sg_pcopy_to_buffer(dev->first, dev->nents,
-+		if (!sg_pcopy_to_buffer(dev->first, dev->src_nents,
- 					dev->addr_vir, count,
- 					dev->total - dev->left_bytes)) {
- 			dev_err(dev->dev, "[%s:%d] pcopy err\n",
-diff --git a/drivers/crypto/rockchip/rk3288_crypto.h b/drivers/crypto/rockchip/rk3288_crypto.h
-index d5fb4013fb42..54ee5b3ed9db 100644
---- a/drivers/crypto/rockchip/rk3288_crypto.h
-+++ b/drivers/crypto/rockchip/rk3288_crypto.h
-@@ -207,7 +207,8 @@ struct rk_crypto_info {
- 	void				*addr_vir;
- 	int				aligned;
- 	int				align_size;
--	size_t				nents;
-+	size_t				src_nents;
-+	size_t				dst_nents;
- 	unsigned int			total;
- 	unsigned int			count;
- 	dma_addr_t			addr_in;
-@@ -244,6 +245,7 @@ struct rk_cipher_ctx {
- 	struct rk_crypto_info		*dev;
- 	unsigned int			keylen;
- 	u32				mode;
-+	u8				iv[AES_BLOCK_SIZE];
- };
- 
- enum alg_type {
-diff --git a/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c b/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c
-index 639c15c5364b..23305f22072f 100644
---- a/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c
-+++ b/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c
-@@ -242,6 +242,17 @@ static void crypto_dma_start(struct rk_crypto_info *dev)
- static int rk_set_data_start(struct rk_crypto_info *dev)
- {
- 	int err;
-+	struct ablkcipher_request *req =
-+		ablkcipher_request_cast(dev->async_req);
-+	struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req);
-+	struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm);
-+	u32 ivsize = crypto_ablkcipher_ivsize(tfm);
-+	u8 *src_last_blk = page_address(sg_page(dev->sg_src)) +
-+		dev->sg_src->offset + dev->sg_src->length - ivsize;
-+
-+	/* store the iv that need to be updated in chain mode */
-+	if (ctx->mode & RK_CRYPTO_DEC)
-+		memcpy(ctx->iv, src_last_blk, ivsize);
- 
- 	err = dev->load_data(dev, dev->sg_src, dev->sg_dst);
- 	if (!err)
-@@ -260,8 +271,9 @@ static int rk_ablk_start(struct rk_crypto_info *dev)
- 	dev->total = req->nbytes;
- 	dev->sg_src = req->src;
- 	dev->first = req->src;
--	dev->nents = sg_nents(req->src);
-+	dev->src_nents = sg_nents(req->src);
- 	dev->sg_dst = req->dst;
-+	dev->dst_nents = sg_nents(req->dst);
- 	dev->aligned = 1;
- 
- 	spin_lock_irqsave(&dev->lock, flags);
-@@ -285,6 +297,28 @@ static void rk_iv_copyback(struct rk_crypto_info *dev)
- 		memcpy_fromio(req->info, dev->reg + RK_CRYPTO_AES_IV_0, ivsize);
- }
- 
-+static void rk_update_iv(struct rk_crypto_info *dev)
-+{
-+	struct ablkcipher_request *req =
-+		ablkcipher_request_cast(dev->async_req);
-+	struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req);
-+	struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm);
-+	u32 ivsize = crypto_ablkcipher_ivsize(tfm);
-+	u8 *new_iv = NULL;
-+
-+	if (ctx->mode & RK_CRYPTO_DEC) {
-+		new_iv = ctx->iv;
-+	} else {
-+		new_iv = page_address(sg_page(dev->sg_dst)) +
-+			 dev->sg_dst->offset + dev->sg_dst->length - ivsize;
-+	}
-+
-+	if (ivsize == DES_BLOCK_SIZE)
-+		memcpy_toio(dev->reg + RK_CRYPTO_TDES_IV_0, new_iv, ivsize);
-+	else if (ivsize == AES_BLOCK_SIZE)
-+		memcpy_toio(dev->reg + RK_CRYPTO_AES_IV_0, new_iv, ivsize);
-+}
-+
- /* return:
-  *	true	some err was occurred
-  *	fault	no err, continue
-@@ -297,7 +331,7 @@ static int rk_ablk_rx(struct rk_crypto_info *dev)
- 
- 	dev->unload_data(dev);
- 	if (!dev->aligned) {
--		if (!sg_pcopy_from_buffer(req->dst, dev->nents,
-+		if (!sg_pcopy_from_buffer(req->dst, dev->dst_nents,
- 					  dev->addr_vir, dev->count,
- 					  dev->total - dev->left_bytes -
- 					  dev->count)) {
-@@ -306,6 +340,7 @@ static int rk_ablk_rx(struct rk_crypto_info *dev)
- 		}
- 	}
- 	if (dev->left_bytes) {
-+		rk_update_iv(dev);
- 		if (dev->aligned) {
- 			if (sg_is_last(dev->sg_src)) {
- 				dev_err(dev->dev, "[%s:%d] Lack of data\n",
-diff --git a/drivers/crypto/rockchip/rk3288_crypto_ahash.c b/drivers/crypto/rockchip/rk3288_crypto_ahash.c
-index 821a506b9e17..c336ae75e361 100644
---- a/drivers/crypto/rockchip/rk3288_crypto_ahash.c
-+++ b/drivers/crypto/rockchip/rk3288_crypto_ahash.c
-@@ -206,7 +206,7 @@ static int rk_ahash_start(struct rk_crypto_info *dev)
- 	dev->sg_dst = NULL;
- 	dev->sg_src = req->src;
- 	dev->first = req->src;
--	dev->nents = sg_nents(req->src);
-+	dev->src_nents = sg_nents(req->src);
- 	rctx = ahash_request_ctx(req);
- 	rctx->mode = 0;
- 
-diff --git a/drivers/dma/imx-dma.c b/drivers/dma/imx-dma.c
-index 4a09af3cd546..7b9a7fb28bb9 100644
---- a/drivers/dma/imx-dma.c
-+++ b/drivers/dma/imx-dma.c
-@@ -285,7 +285,7 @@ static inline int imxdma_sg_next(struct imxdma_desc *d)
- 	struct scatterlist *sg = d->sg;
- 	unsigned long now;
- 
--	now = min(d->len, sg_dma_len(sg));
-+	now = min_t(size_t, d->len, sg_dma_len(sg));
- 	if (d->len != IMX_DMA_LENGTH_LOOP)
- 		d->len -= now;
- 
-diff --git a/drivers/dma/qcom/hidma.c b/drivers/dma/qcom/hidma.c
-index 43d4b00b8138..411f91fde734 100644
---- a/drivers/dma/qcom/hidma.c
-+++ b/drivers/dma/qcom/hidma.c
-@@ -138,24 +138,25 @@ static void hidma_process_completed(struct hidma_chan *mchan)
- 		desc = &mdesc->desc;
- 		last_cookie = desc->cookie;
- 
-+		llstat = hidma_ll_status(mdma->lldev, mdesc->tre_ch);
-+
- 		spin_lock_irqsave(&mchan->lock, irqflags);
-+		if (llstat == DMA_COMPLETE) {
-+			mchan->last_success = last_cookie;
-+			result.result = DMA_TRANS_NOERROR;
-+		} else {
-+			result.result = DMA_TRANS_ABORTED;
-+		}
-+
- 		dma_cookie_complete(desc);
- 		spin_unlock_irqrestore(&mchan->lock, irqflags);
- 
--		llstat = hidma_ll_status(mdma->lldev, mdesc->tre_ch);
- 		dmaengine_desc_get_callback(desc, &cb);
- 
- 		dma_run_dependencies(desc);
- 
- 		spin_lock_irqsave(&mchan->lock, irqflags);
- 		list_move(&mdesc->node, &mchan->free);
--
--		if (llstat == DMA_COMPLETE) {
--			mchan->last_success = last_cookie;
--			result.result = DMA_TRANS_NOERROR;
--		} else
--			result.result = DMA_TRANS_ABORTED;
--
- 		spin_unlock_irqrestore(&mchan->lock, irqflags);
- 
- 		dmaengine_desc_callback_invoke(&cb, &result);
-@@ -415,6 +416,7 @@ hidma_prep_dma_memcpy(struct dma_chan *dmach, dma_addr_t dest, dma_addr_t src,
- 	if (!mdesc)
- 		return NULL;
- 
-+	mdesc->desc.flags = flags;
- 	hidma_ll_set_transfer_params(mdma->lldev, mdesc->tre_ch,
- 				     src, dest, len, flags,
- 				     HIDMA_TRE_MEMCPY);
-@@ -447,6 +449,7 @@ hidma_prep_dma_memset(struct dma_chan *dmach, dma_addr_t dest, int value,
- 	if (!mdesc)
- 		return NULL;
- 
-+	mdesc->desc.flags = flags;
- 	hidma_ll_set_transfer_params(mdma->lldev, mdesc->tre_ch,
- 				     value, dest, len, flags,
- 				     HIDMA_TRE_MEMSET);
-diff --git a/drivers/dma/sh/usb-dmac.c b/drivers/dma/sh/usb-dmac.c
-index 7f7184c3cf95..59403f6d008a 100644
---- a/drivers/dma/sh/usb-dmac.c
-+++ b/drivers/dma/sh/usb-dmac.c
-@@ -694,6 +694,8 @@ static int usb_dmac_runtime_resume(struct device *dev)
- #endif /* CONFIG_PM */
- 
- static const struct dev_pm_ops usb_dmac_pm = {
-+	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
-+				      pm_runtime_force_resume)
- 	SET_RUNTIME_PM_OPS(usb_dmac_runtime_suspend, usb_dmac_runtime_resume,
- 			   NULL)
- };
-diff --git a/drivers/dma/tegra20-apb-dma.c b/drivers/dma/tegra20-apb-dma.c
-index 9a558e30c461..8219ab88a507 100644
---- a/drivers/dma/tegra20-apb-dma.c
-+++ b/drivers/dma/tegra20-apb-dma.c
-@@ -636,7 +636,10 @@ static void handle_cont_sngl_cycle_dma_done(struct tegra_dma_channel *tdc,
- 
- 	sgreq = list_first_entry(&tdc->pending_sg_req, typeof(*sgreq), node);
- 	dma_desc = sgreq->dma_desc;
--	dma_desc->bytes_transferred += sgreq->req_len;
-+	/* if we dma for long enough the transfer count will wrap */
-+	dma_desc->bytes_transferred =
-+		(dma_desc->bytes_transferred + sgreq->req_len) %
-+		dma_desc->bytes_requested;
- 
- 	/* Callback need to be call */
- 	if (!dma_desc->cb_count)
-diff --git a/drivers/firmware/efi/cper.c b/drivers/firmware/efi/cper.c
-index a7902fccdcfa..6090d25dce85 100644
---- a/drivers/firmware/efi/cper.c
-+++ b/drivers/firmware/efi/cper.c
-@@ -546,19 +546,24 @@ EXPORT_SYMBOL_GPL(cper_estatus_check_header);
- int cper_estatus_check(const struct acpi_hest_generic_status *estatus)
- {
- 	struct acpi_hest_generic_data *gdata;
--	unsigned int data_len, gedata_len;
-+	unsigned int data_len, record_size;
- 	int rc;
- 
- 	rc = cper_estatus_check_header(estatus);
- 	if (rc)
- 		return rc;
-+
- 	data_len = estatus->data_length;
- 
- 	apei_estatus_for_each_section(estatus, gdata) {
--		gedata_len = acpi_hest_get_error_length(gdata);
--		if (gedata_len > data_len - acpi_hest_get_size(gdata))
-+		if (sizeof(struct acpi_hest_generic_data) > data_len)
-+			return -EINVAL;
-+
-+		record_size = acpi_hest_get_record_size(gdata);
-+		if (record_size > data_len)
- 			return -EINVAL;
--		data_len -= acpi_hest_get_record_size(gdata);
-+
-+		data_len -= record_size;
- 	}
- 	if (data_len)
- 		return -EINVAL;
-diff --git a/drivers/firmware/efi/libstub/arm-stub.c b/drivers/firmware/efi/libstub/arm-stub.c
-index c037c6c5d0b7..04e6ecd72cd9 100644
---- a/drivers/firmware/efi/libstub/arm-stub.c
-+++ b/drivers/firmware/efi/libstub/arm-stub.c
-@@ -367,6 +367,11 @@ void efi_get_virtmap(efi_memory_desc_t *memory_map, unsigned long map_size,
- 		paddr = in->phys_addr;
- 		size = in->num_pages * EFI_PAGE_SIZE;
- 
-+		if (novamap()) {
-+			in->virt_addr = in->phys_addr;
-+			continue;
-+		}
-+
- 		/*
- 		 * Make the mapping compatible with 64k pages: this allows
- 		 * a 4k page size kernel to kexec a 64k page size kernel and
-diff --git a/drivers/firmware/efi/libstub/efi-stub-helper.c b/drivers/firmware/efi/libstub/efi-stub-helper.c
-index e94975f4655b..442f51c2a53d 100644
---- a/drivers/firmware/efi/libstub/efi-stub-helper.c
-+++ b/drivers/firmware/efi/libstub/efi-stub-helper.c
-@@ -34,6 +34,7 @@ static unsigned long __chunk_size = EFI_READ_CHUNK_SIZE;
- 
- static int __section(.data) __nokaslr;
- static int __section(.data) __quiet;
-+static int __section(.data) __novamap;
- 
- int __pure nokaslr(void)
- {
-@@ -43,6 +44,10 @@ int __pure is_quiet(void)
- {
- 	return __quiet;
- }
-+int __pure novamap(void)
-+{
-+	return __novamap;
-+}
- 
- #define EFI_MMAP_NR_SLACK_SLOTS	8
- 
-@@ -482,6 +487,11 @@ efi_status_t efi_parse_options(char const *cmdline)
- 			__chunk_size = -1UL;
- 		}
- 
-+		if (!strncmp(str, "novamap", 7)) {
-+			str += strlen("novamap");
-+			__novamap = 1;
-+		}
-+
- 		/* Group words together, delimited by "," */
- 		while (*str && *str != ' ' && *str != ',')
- 			str++;
-diff --git a/drivers/firmware/efi/libstub/efistub.h b/drivers/firmware/efi/libstub/efistub.h
-index 32799cf039ef..337b52c4702c 100644
---- a/drivers/firmware/efi/libstub/efistub.h
-+++ b/drivers/firmware/efi/libstub/efistub.h
-@@ -27,6 +27,7 @@
- 
- extern int __pure nokaslr(void);
- extern int __pure is_quiet(void);
-+extern int __pure novamap(void);
- 
- #define pr_efi(sys_table, msg)		do {				\
- 	if (!is_quiet()) efi_printk(sys_table, "EFI stub: "msg);	\
-diff --git a/drivers/firmware/efi/libstub/fdt.c b/drivers/firmware/efi/libstub/fdt.c
-index 0dc7b4987cc2..f8f89f995e9d 100644
---- a/drivers/firmware/efi/libstub/fdt.c
-+++ b/drivers/firmware/efi/libstub/fdt.c
-@@ -327,6 +327,9 @@ efi_status_t allocate_new_fdt_and_exit_boot(efi_system_table_t *sys_table,
- 	if (status == EFI_SUCCESS) {
- 		efi_set_virtual_address_map_t *svam;
- 
-+		if (novamap())
-+			return EFI_SUCCESS;
-+
- 		/* Install the new virtual address map */
- 		svam = sys_table->runtime->set_virtual_address_map;
- 		status = svam(runtime_entry_count * desc_size, desc_size,
-diff --git a/drivers/firmware/efi/memattr.c b/drivers/firmware/efi/memattr.c
-index 8986757eafaf..aac972b056d9 100644
---- a/drivers/firmware/efi/memattr.c
-+++ b/drivers/firmware/efi/memattr.c
-@@ -94,7 +94,7 @@ static bool entry_is_valid(const efi_memory_desc_t *in, efi_memory_desc_t *out)
- 
- 		if (!(md->attribute & EFI_MEMORY_RUNTIME))
- 			continue;
--		if (md->virt_addr == 0) {
-+		if (md->virt_addr == 0 && md->phys_addr != 0) {
- 			/* no virtual mapping has been installed by the stub */
- 			break;
- 		}
-diff --git a/drivers/firmware/efi/runtime-wrappers.c b/drivers/firmware/efi/runtime-wrappers.c
-index e2abfdb5cee6..698745c249e8 100644
---- a/drivers/firmware/efi/runtime-wrappers.c
-+++ b/drivers/firmware/efi/runtime-wrappers.c
-@@ -85,7 +85,7 @@ struct efi_runtime_work efi_rts_work;
- 		pr_err("Failed to queue work to efi_rts_wq.\n");	\
- 									\
- exit:									\
--	efi_rts_work.efi_rts_id = NONE;					\
-+	efi_rts_work.efi_rts_id = EFI_NONE;				\
- 	efi_rts_work.status;						\
- })
- 
-@@ -175,50 +175,50 @@ static void efi_call_rts(struct work_struct *work)
- 	arg5 = efi_rts_work.arg5;
- 
- 	switch (efi_rts_work.efi_rts_id) {
--	case GET_TIME:
-+	case EFI_GET_TIME:
- 		status = efi_call_virt(get_time, (efi_time_t *)arg1,
- 				       (efi_time_cap_t *)arg2);
- 		break;
--	case SET_TIME:
-+	case EFI_SET_TIME:
- 		status = efi_call_virt(set_time, (efi_time_t *)arg1);
- 		break;
--	case GET_WAKEUP_TIME:
-+	case EFI_GET_WAKEUP_TIME:
- 		status = efi_call_virt(get_wakeup_time, (efi_bool_t *)arg1,
- 				       (efi_bool_t *)arg2, (efi_time_t *)arg3);
- 		break;
--	case SET_WAKEUP_TIME:
-+	case EFI_SET_WAKEUP_TIME:
- 		status = efi_call_virt(set_wakeup_time, *(efi_bool_t *)arg1,
- 				       (efi_time_t *)arg2);
- 		break;
--	case GET_VARIABLE:
-+	case EFI_GET_VARIABLE:
- 		status = efi_call_virt(get_variable, (efi_char16_t *)arg1,
- 				       (efi_guid_t *)arg2, (u32 *)arg3,
- 				       (unsigned long *)arg4, (void *)arg5);
- 		break;
--	case GET_NEXT_VARIABLE:
-+	case EFI_GET_NEXT_VARIABLE:
- 		status = efi_call_virt(get_next_variable, (unsigned long *)arg1,
- 				       (efi_char16_t *)arg2,
- 				       (efi_guid_t *)arg3);
- 		break;
--	case SET_VARIABLE:
-+	case EFI_SET_VARIABLE:
- 		status = efi_call_virt(set_variable, (efi_char16_t *)arg1,
- 				       (efi_guid_t *)arg2, *(u32 *)arg3,
- 				       *(unsigned long *)arg4, (void *)arg5);
- 		break;
--	case QUERY_VARIABLE_INFO:
-+	case EFI_QUERY_VARIABLE_INFO:
- 		status = efi_call_virt(query_variable_info, *(u32 *)arg1,
- 				       (u64 *)arg2, (u64 *)arg3, (u64 *)arg4);
- 		break;
--	case GET_NEXT_HIGH_MONO_COUNT:
-+	case EFI_GET_NEXT_HIGH_MONO_COUNT:
- 		status = efi_call_virt(get_next_high_mono_count, (u32 *)arg1);
- 		break;
--	case UPDATE_CAPSULE:
-+	case EFI_UPDATE_CAPSULE:
- 		status = efi_call_virt(update_capsule,
- 				       (efi_capsule_header_t **)arg1,
- 				       *(unsigned long *)arg2,
- 				       *(unsigned long *)arg3);
- 		break;
--	case QUERY_CAPSULE_CAPS:
-+	case EFI_QUERY_CAPSULE_CAPS:
- 		status = efi_call_virt(query_capsule_caps,
- 				       (efi_capsule_header_t **)arg1,
- 				       *(unsigned long *)arg2, (u64 *)arg3,
-@@ -242,7 +242,7 @@ static efi_status_t virt_efi_get_time(efi_time_t *tm, efi_time_cap_t *tc)
- 
- 	if (down_interruptible(&efi_runtime_lock))
- 		return EFI_ABORTED;
--	status = efi_queue_work(GET_TIME, tm, tc, NULL, NULL, NULL);
-+	status = efi_queue_work(EFI_GET_TIME, tm, tc, NULL, NULL, NULL);
- 	up(&efi_runtime_lock);
- 	return status;
- }
-@@ -253,7 +253,7 @@ static efi_status_t virt_efi_set_time(efi_time_t *tm)
- 
- 	if (down_interruptible(&efi_runtime_lock))
- 		return EFI_ABORTED;
--	status = efi_queue_work(SET_TIME, tm, NULL, NULL, NULL, NULL);
-+	status = efi_queue_work(EFI_SET_TIME, tm, NULL, NULL, NULL, NULL);
- 	up(&efi_runtime_lock);
- 	return status;
- }
-@@ -266,7 +266,7 @@ static efi_status_t virt_efi_get_wakeup_time(efi_bool_t *enabled,
- 
- 	if (down_interruptible(&efi_runtime_lock))
- 		return EFI_ABORTED;
--	status = efi_queue_work(GET_WAKEUP_TIME, enabled, pending, tm, NULL,
-+	status = efi_queue_work(EFI_GET_WAKEUP_TIME, enabled, pending, tm, NULL,
- 				NULL);
- 	up(&efi_runtime_lock);
- 	return status;
-@@ -278,7 +278,7 @@ static efi_status_t virt_efi_set_wakeup_time(efi_bool_t enabled, efi_time_t *tm)
- 
- 	if (down_interruptible(&efi_runtime_lock))
- 		return EFI_ABORTED;
--	status = efi_queue_work(SET_WAKEUP_TIME, &enabled, tm, NULL, NULL,
-+	status = efi_queue_work(EFI_SET_WAKEUP_TIME, &enabled, tm, NULL, NULL,
- 				NULL);
- 	up(&efi_runtime_lock);
- 	return status;
-@@ -294,7 +294,7 @@ static efi_status_t virt_efi_get_variable(efi_char16_t *name,
- 
- 	if (down_interruptible(&efi_runtime_lock))
- 		return EFI_ABORTED;
--	status = efi_queue_work(GET_VARIABLE, name, vendor, attr, data_size,
-+	status = efi_queue_work(EFI_GET_VARIABLE, name, vendor, attr, data_size,
- 				data);
- 	up(&efi_runtime_lock);
- 	return status;
-@@ -308,7 +308,7 @@ static efi_status_t virt_efi_get_next_variable(unsigned long *name_size,
- 
- 	if (down_interruptible(&efi_runtime_lock))
- 		return EFI_ABORTED;
--	status = efi_queue_work(GET_NEXT_VARIABLE, name_size, name, vendor,
-+	status = efi_queue_work(EFI_GET_NEXT_VARIABLE, name_size, name, vendor,
- 				NULL, NULL);
- 	up(&efi_runtime_lock);
- 	return status;
-@@ -324,7 +324,7 @@ static efi_status_t virt_efi_set_variable(efi_char16_t *name,
- 
- 	if (down_interruptible(&efi_runtime_lock))
- 		return EFI_ABORTED;
--	status = efi_queue_work(SET_VARIABLE, name, vendor, &attr, &data_size,
-+	status = efi_queue_work(EFI_SET_VARIABLE, name, vendor, &attr, &data_size,
- 				data);
- 	up(&efi_runtime_lock);
- 	return status;
-@@ -359,7 +359,7 @@ static efi_status_t virt_efi_query_variable_info(u32 attr,
- 
- 	if (down_interruptible(&efi_runtime_lock))
- 		return EFI_ABORTED;
--	status = efi_queue_work(QUERY_VARIABLE_INFO, &attr, storage_space,
-+	status = efi_queue_work(EFI_QUERY_VARIABLE_INFO, &attr, storage_space,
- 				remaining_space, max_variable_size, NULL);
- 	up(&efi_runtime_lock);
- 	return status;
-@@ -391,7 +391,7 @@ static efi_status_t virt_efi_get_next_high_mono_count(u32 *count)
- 
- 	if (down_interruptible(&efi_runtime_lock))
- 		return EFI_ABORTED;
--	status = efi_queue_work(GET_NEXT_HIGH_MONO_COUNT, count, NULL, NULL,
-+	status = efi_queue_work(EFI_GET_NEXT_HIGH_MONO_COUNT, count, NULL, NULL,
- 				NULL, NULL);
- 	up(&efi_runtime_lock);
- 	return status;
-@@ -407,7 +407,7 @@ static void virt_efi_reset_system(int reset_type,
- 			"could not get exclusive access to the firmware\n");
- 		return;
- 	}
--	efi_rts_work.efi_rts_id = RESET_SYSTEM;
-+	efi_rts_work.efi_rts_id = EFI_RESET_SYSTEM;
- 	__efi_call_virt(reset_system, reset_type, status, data_size, data);
- 	up(&efi_runtime_lock);
- }
-@@ -423,7 +423,7 @@ static efi_status_t virt_efi_update_capsule(efi_capsule_header_t **capsules,
- 
- 	if (down_interruptible(&efi_runtime_lock))
- 		return EFI_ABORTED;
--	status = efi_queue_work(UPDATE_CAPSULE, capsules, &count, &sg_list,
-+	status = efi_queue_work(EFI_UPDATE_CAPSULE, capsules, &count, &sg_list,
- 				NULL, NULL);
- 	up(&efi_runtime_lock);
- 	return status;
-@@ -441,7 +441,7 @@ static efi_status_t virt_efi_query_capsule_caps(efi_capsule_header_t **capsules,
- 
- 	if (down_interruptible(&efi_runtime_lock))
- 		return EFI_ABORTED;
--	status = efi_queue_work(QUERY_CAPSULE_CAPS, capsules, &count,
-+	status = efi_queue_work(EFI_QUERY_CAPSULE_CAPS, capsules, &count,
- 				max_size, reset_type, NULL);
- 	up(&efi_runtime_lock);
- 	return status;
-diff --git a/drivers/firmware/iscsi_ibft.c b/drivers/firmware/iscsi_ibft.c
-index 6bc8e6640d71..c51462f5aa1e 100644
---- a/drivers/firmware/iscsi_ibft.c
-+++ b/drivers/firmware/iscsi_ibft.c
-@@ -542,6 +542,7 @@ static umode_t __init ibft_check_tgt_for(void *data, int type)
- 	case ISCSI_BOOT_TGT_NIC_ASSOC:
- 	case ISCSI_BOOT_TGT_CHAP_TYPE:
- 		rc = S_IRUGO;
-+		break;
- 	case ISCSI_BOOT_TGT_NAME:
- 		if (tgt->tgt_name_len)
- 			rc = S_IRUGO;
-diff --git a/drivers/gnss/sirf.c b/drivers/gnss/sirf.c
-index 226f6e6fe01b..8e3f6a776e02 100644
---- a/drivers/gnss/sirf.c
-+++ b/drivers/gnss/sirf.c
-@@ -310,30 +310,26 @@ static int sirf_probe(struct serdev_device *serdev)
- 			ret = -ENODEV;
- 			goto err_put_device;
- 		}
-+
-+		ret = regulator_enable(data->vcc);
-+		if (ret)
-+			goto err_put_device;
-+
-+		/* Wait for chip to boot into hibernate mode. */
-+		msleep(SIRF_BOOT_DELAY);
- 	}
- 
- 	if (data->wakeup) {
- 		ret = gpiod_to_irq(data->wakeup);
- 		if (ret < 0)
--			goto err_put_device;
--
-+			goto err_disable_vcc;
- 		data->irq = ret;
- 
--		ret = devm_request_threaded_irq(dev, data->irq, NULL,
--				sirf_wakeup_handler,
-+		ret = request_threaded_irq(data->irq, NULL, sirf_wakeup_handler,
- 				IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
- 				"wakeup", data);
- 		if (ret)
--			goto err_put_device;
--	}
--
--	if (data->on_off) {
--		ret = regulator_enable(data->vcc);
--		if (ret)
--			goto err_put_device;
--
--		/* Wait for chip to boot into hibernate mode */
--		msleep(SIRF_BOOT_DELAY);
-+			goto err_disable_vcc;
- 	}
- 
- 	if (IS_ENABLED(CONFIG_PM)) {
-@@ -342,7 +338,7 @@ static int sirf_probe(struct serdev_device *serdev)
- 	} else {
- 		ret = sirf_runtime_resume(dev);
- 		if (ret < 0)
--			goto err_disable_vcc;
-+			goto err_free_irq;
- 	}
- 
- 	ret = gnss_register_device(gdev);
-@@ -356,6 +352,9 @@ err_disable_rpm:
- 		pm_runtime_disable(dev);
- 	else
- 		sirf_runtime_suspend(dev);
-+err_free_irq:
-+	if (data->wakeup)
-+		free_irq(data->irq, data);
- err_disable_vcc:
- 	if (data->on_off)
- 		regulator_disable(data->vcc);
-@@ -376,6 +375,9 @@ static void sirf_remove(struct serdev_device *serdev)
- 	else
- 		sirf_runtime_suspend(&serdev->dev);
- 
-+	if (data->wakeup)
-+		free_irq(data->irq, data);
-+
- 	if (data->on_off)
- 		regulator_disable(data->vcc);
- 
-diff --git a/drivers/gpio/gpio-adnp.c b/drivers/gpio/gpio-adnp.c
-index 91b90c0cea73..12acdac85820 100644
---- a/drivers/gpio/gpio-adnp.c
-+++ b/drivers/gpio/gpio-adnp.c
-@@ -132,8 +132,10 @@ static int adnp_gpio_direction_input(struct gpio_chip *chip, unsigned offset)
- 	if (err < 0)
- 		goto out;
- 
--	if (err & BIT(pos))
--		err = -EACCES;
-+	if (value & BIT(pos)) {
-+		err = -EPERM;
-+		goto out;
-+	}
- 
- 	err = 0;
- 
-diff --git a/drivers/gpio/gpio-exar.c b/drivers/gpio/gpio-exar.c
-index 0ecd2369c2ca..a09d2f9ebacc 100644
---- a/drivers/gpio/gpio-exar.c
-+++ b/drivers/gpio/gpio-exar.c
-@@ -148,6 +148,8 @@ static int gpio_exar_probe(struct platform_device *pdev)
- 	mutex_init(&exar_gpio->lock);
- 
- 	index = ida_simple_get(&ida_index, 0, 0, GFP_KERNEL);
-+	if (index < 0)
-+		goto err_destroy;
- 
- 	sprintf(exar_gpio->name, "exar_gpio%d", index);
- 	exar_gpio->gpio_chip.label = exar_gpio->name;
-diff --git a/drivers/gpio/gpio-omap.c b/drivers/gpio/gpio-omap.c
-index f4e9921fa966..7f33024b6d83 100644
---- a/drivers/gpio/gpio-omap.c
-+++ b/drivers/gpio/gpio-omap.c
-@@ -883,14 +883,16 @@ static void omap_gpio_unmask_irq(struct irq_data *d)
- 	if (trigger)
- 		omap_set_gpio_triggering(bank, offset, trigger);
- 
--	/* For level-triggered GPIOs, the clearing must be done after
--	 * the HW source is cleared, thus after the handler has run */
--	if (bank->level_mask & BIT(offset)) {
--		omap_set_gpio_irqenable(bank, offset, 0);
-+	omap_set_gpio_irqenable(bank, offset, 1);
-+
-+	/*
-+	 * For level-triggered GPIOs, clearing must be done after the source
-+	 * is cleared, thus after the handler has run. OMAP4 needs this done
-+	 * after enabing the interrupt to clear the wakeup status.
-+	 */
-+	if (bank->level_mask & BIT(offset))
- 		omap_clear_gpio_irqstatus(bank, offset);
--	}
- 
--	omap_set_gpio_irqenable(bank, offset, 1);
- 	raw_spin_unlock_irqrestore(&bank->lock, flags);
- }
- 
-diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
-index 0dc96419efe3..d8a985fc6a5d 100644
---- a/drivers/gpio/gpio-pca953x.c
-+++ b/drivers/gpio/gpio-pca953x.c
-@@ -587,7 +587,8 @@ static int pca953x_irq_set_type(struct irq_data *d, unsigned int type)
- 
- static void pca953x_irq_shutdown(struct irq_data *d)
- {
--	struct pca953x_chip *chip = irq_data_get_irq_chip_data(d);
-+	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
-+	struct pca953x_chip *chip = gpiochip_get_data(gc);
- 	u8 mask = 1 << (d->hwirq % BANK_SZ);
- 
- 	chip->irq_trig_raise[d->hwirq / BANK_SZ] &= ~mask;
-diff --git a/drivers/gpio/gpiolib-of.c b/drivers/gpio/gpiolib-of.c
-index a6e1891217e2..a1dd2f1c0d02 100644
---- a/drivers/gpio/gpiolib-of.c
-+++ b/drivers/gpio/gpiolib-of.c
-@@ -86,7 +86,8 @@ static void of_gpio_flags_quirks(struct device_node *np,
- 	if (IS_ENABLED(CONFIG_REGULATOR) &&
- 	    (of_device_is_compatible(np, "regulator-fixed") ||
- 	     of_device_is_compatible(np, "reg-fixed-voltage") ||
--	     of_device_is_compatible(np, "regulator-gpio"))) {
-+	     (of_device_is_compatible(np, "regulator-gpio") &&
-+	      strcmp(propname, "enable-gpio") == 0))) {
- 		/*
- 		 * The regulator GPIO handles are specified such that the
- 		 * presence or absence of "enable-active-high" solely controls
-diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
-index bacdaef77b6c..278dd55ff476 100644
---- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
-+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
-@@ -738,7 +738,7 @@ static int gmc_v9_0_allocate_vm_inv_eng(struct amdgpu_device *adev)
- 		}
- 
- 		ring->vm_inv_eng = inv_eng - 1;
--		change_bit(inv_eng - 1, (unsigned long *)(&vm_inv_engs[vmhub]));
-+		vm_inv_engs[vmhub] &= ~(1 << ring->vm_inv_eng);
- 
- 		dev_info(adev->dev, "ring %s uses VM inv eng %u on hub %u\n",
- 			 ring->name, ring->vm_inv_eng, ring->funcs->vmhub);
-diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
-index 636d14a60952..83c8a0407537 100644
---- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
-+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
-@@ -886,6 +886,7 @@ static void emulated_link_detect(struct dc_link *link)
- 		return;
- 	}
- 
-+	/* dc_sink_create returns a new reference */
- 	link->local_sink = sink;
- 
- 	edid_status = dm_helpers_read_local_edid(
-@@ -952,6 +953,8 @@ static int dm_resume(void *handle)
- 		if (aconnector->fake_enable && aconnector->dc_link->local_sink)
- 			aconnector->fake_enable = false;
- 
-+		if (aconnector->dc_sink)
-+			dc_sink_release(aconnector->dc_sink);
- 		aconnector->dc_sink = NULL;
- 		amdgpu_dm_update_connector_after_detect(aconnector);
- 		mutex_unlock(&aconnector->hpd_lock);
-@@ -1061,6 +1064,8 @@ amdgpu_dm_update_connector_after_detect(struct amdgpu_dm_connector *aconnector)
- 
- 
- 	sink = aconnector->dc_link->local_sink;
-+	if (sink)
-+		dc_sink_retain(sink);
- 
- 	/*
- 	 * Edid mgmt connector gets first update only in mode_valid hook and then
-@@ -1085,21 +1090,24 @@ amdgpu_dm_update_connector_after_detect(struct amdgpu_dm_connector *aconnector)
- 				 * to it anymore after disconnect, so on next crtc to connector
- 				 * reshuffle by UMD we will get into unwanted dc_sink release
- 				 */
--				if (aconnector->dc_sink != aconnector->dc_em_sink)
--					dc_sink_release(aconnector->dc_sink);
-+				dc_sink_release(aconnector->dc_sink);
- 			}
- 			aconnector->dc_sink = sink;
-+			dc_sink_retain(aconnector->dc_sink);
- 			amdgpu_dm_update_freesync_caps(connector,
- 					aconnector->edid);
- 		} else {
- 			amdgpu_dm_update_freesync_caps(connector, NULL);
--			if (!aconnector->dc_sink)
-+			if (!aconnector->dc_sink) {
- 				aconnector->dc_sink = aconnector->dc_em_sink;
--			else if (aconnector->dc_sink != aconnector->dc_em_sink)
- 				dc_sink_retain(aconnector->dc_sink);
-+			}
- 		}
- 
- 		mutex_unlock(&dev->mode_config.mutex);
-+
-+		if (sink)
-+			dc_sink_release(sink);
- 		return;
- 	}
- 
-@@ -1107,8 +1115,10 @@ amdgpu_dm_update_connector_after_detect(struct amdgpu_dm_connector *aconnector)
- 	 * TODO: temporary guard to look for proper fix
- 	 * if this sink is MST sink, we should not do anything
- 	 */
--	if (sink && sink->sink_signal == SIGNAL_TYPE_DISPLAY_PORT_MST)
-+	if (sink && sink->sink_signal == SIGNAL_TYPE_DISPLAY_PORT_MST) {
-+		dc_sink_release(sink);
- 		return;
-+	}
- 
- 	if (aconnector->dc_sink == sink) {
- 		/*
-@@ -1117,6 +1127,8 @@ amdgpu_dm_update_connector_after_detect(struct amdgpu_dm_connector *aconnector)
- 		 */
- 		DRM_DEBUG_DRIVER("DCHPD: connector_id=%d: dc_sink didn't change.\n",
- 				aconnector->connector_id);
-+		if (sink)
-+			dc_sink_release(sink);
- 		return;
- 	}
- 
-@@ -1138,6 +1150,7 @@ amdgpu_dm_update_connector_after_detect(struct amdgpu_dm_connector *aconnector)
- 			amdgpu_dm_update_freesync_caps(connector, NULL);
- 
- 		aconnector->dc_sink = sink;
-+		dc_sink_retain(aconnector->dc_sink);
- 		if (sink->dc_edid.length == 0) {
- 			aconnector->edid = NULL;
- 			drm_dp_cec_unset_edid(&aconnector->dm_dp_aux.aux);
-@@ -1158,11 +1171,15 @@ amdgpu_dm_update_connector_after_detect(struct amdgpu_dm_connector *aconnector)
- 		amdgpu_dm_update_freesync_caps(connector, NULL);
- 		drm_connector_update_edid_property(connector, NULL);
- 		aconnector->num_modes = 0;
-+		dc_sink_release(aconnector->dc_sink);
- 		aconnector->dc_sink = NULL;
- 		aconnector->edid = NULL;
- 	}
- 
- 	mutex_unlock(&dev->mode_config.mutex);
-+
-+	if (sink)
-+		dc_sink_release(sink);
- }
- 
- static void handle_hpd_irq(void *param)
-@@ -2908,6 +2925,7 @@ create_stream_for_sink(struct amdgpu_dm_connector *aconnector,
- 		}
- 	} else {
- 		sink = aconnector->dc_sink;
-+		dc_sink_retain(sink);
- 	}
- 
- 	stream = dc_create_stream_for_sink(sink);
-@@ -2974,8 +2992,7 @@ create_stream_for_sink(struct amdgpu_dm_connector *aconnector,
- 		stream->ignore_msa_timing_param = true;
- 
- finish:
--	if (sink && sink->sink_signal == SIGNAL_TYPE_VIRTUAL && aconnector->base.force != DRM_FORCE_ON)
--		dc_sink_release(sink);
-+	dc_sink_release(sink);
- 
- 	return stream;
- }
-@@ -3233,6 +3250,14 @@ static void amdgpu_dm_connector_destroy(struct drm_connector *connector)
- 		dm->backlight_dev = NULL;
- 	}
- #endif
-+
-+	if (aconnector->dc_em_sink)
-+		dc_sink_release(aconnector->dc_em_sink);
-+	aconnector->dc_em_sink = NULL;
-+	if (aconnector->dc_sink)
-+		dc_sink_release(aconnector->dc_sink);
-+	aconnector->dc_sink = NULL;
-+
- 	drm_dp_cec_unregister_connector(&aconnector->dm_dp_aux.aux);
- 	drm_connector_unregister(connector);
- 	drm_connector_cleanup(connector);
-@@ -3330,10 +3355,12 @@ static void create_eml_sink(struct amdgpu_dm_connector *aconnector)
- 		(edid->extensions + 1) * EDID_LENGTH,
- 		&init_params);
- 
--	if (aconnector->base.force == DRM_FORCE_ON)
-+	if (aconnector->base.force == DRM_FORCE_ON) {
- 		aconnector->dc_sink = aconnector->dc_link->local_sink ?
- 		aconnector->dc_link->local_sink :
- 		aconnector->dc_em_sink;
-+		dc_sink_retain(aconnector->dc_sink);
-+	}
- }
- 
- static void handle_edid_mgmt(struct amdgpu_dm_connector *aconnector)
-@@ -4948,7 +4975,8 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
- static void amdgpu_dm_crtc_copy_transient_flags(struct drm_crtc_state *crtc_state,
- 						struct dc_stream_state *stream_state)
- {
--	stream_state->mode_changed = crtc_state->mode_changed;
-+	stream_state->mode_changed =
-+		crtc_state->mode_changed || crtc_state->active_changed;
- }
- 
- static int amdgpu_dm_atomic_commit(struct drm_device *dev,
-@@ -4969,10 +4997,22 @@ static int amdgpu_dm_atomic_commit(struct drm_device *dev,
- 	 */
- 	for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
- 		struct dm_crtc_state *dm_old_crtc_state = to_dm_crtc_state(old_crtc_state);
-+		struct dm_crtc_state *dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
- 		struct amdgpu_crtc *acrtc = to_amdgpu_crtc(crtc);
- 
--		if (drm_atomic_crtc_needs_modeset(new_crtc_state) && dm_old_crtc_state->stream)
-+		if (drm_atomic_crtc_needs_modeset(new_crtc_state)
-+		    && dm_old_crtc_state->stream) {
-+			/*
-+			 * CRC capture was enabled but not disabled.
-+			 * Release the vblank reference.
-+			 */
-+			if (dm_new_crtc_state->crc_enabled) {
-+				drm_crtc_vblank_put(crtc);
-+				dm_new_crtc_state->crc_enabled = false;
-+			}
-+
- 			manage_dm_interrupts(adev, acrtc, false);
-+		}
- 	}
- 	/*
- 	 * Add check here for SoC's that support hardware cursor plane, to
-diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
-index f088ac585978..26b651148c67 100644
---- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
-+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
-@@ -66,6 +66,7 @@ int amdgpu_dm_crtc_set_crc_source(struct drm_crtc *crtc, const char *src_name)
- {
- 	struct dm_crtc_state *crtc_state = to_dm_crtc_state(crtc->state);
- 	struct dc_stream_state *stream_state = crtc_state->stream;
-+	bool enable;
- 
- 	enum amdgpu_dm_pipe_crc_source source = dm_parse_crc_source(src_name);
- 
-@@ -80,28 +81,27 @@ int amdgpu_dm_crtc_set_crc_source(struct drm_crtc *crtc, const char *src_name)
- 		return -EINVAL;
- 	}
- 
-+	enable = (source == AMDGPU_DM_PIPE_CRC_SOURCE_AUTO);
-+
-+	if (!dc_stream_configure_crc(stream_state->ctx->dc, stream_state,
-+				     enable, enable))
-+		return -EINVAL;
-+
- 	/* When enabling CRC, we should also disable dithering. */
--	if (source == AMDGPU_DM_PIPE_CRC_SOURCE_AUTO) {
--		if (dc_stream_configure_crc(stream_state->ctx->dc,
--					    stream_state,
--					    true, true)) {
--			crtc_state->crc_enabled = true;
--			dc_stream_set_dither_option(stream_state,
--						    DITHER_OPTION_TRUN8);
--		}
--		else
--			return -EINVAL;
--	} else {
--		if (dc_stream_configure_crc(stream_state->ctx->dc,
--					    stream_state,
--					    false, false)) {
--			crtc_state->crc_enabled = false;
--			dc_stream_set_dither_option(stream_state,
--						    DITHER_OPTION_DEFAULT);
--		}
--		else
--			return -EINVAL;
--	}
-+	dc_stream_set_dither_option(stream_state,
-+				    enable ? DITHER_OPTION_TRUN8
-+					   : DITHER_OPTION_DEFAULT);
-+
-+	/*
-+	 * Reading the CRC requires the vblank interrupt handler to be
-+	 * enabled. Keep a reference until CRC capture stops.
-+	 */
-+	if (!crtc_state->crc_enabled && enable)
-+		drm_crtc_vblank_get(crtc);
-+	else if (crtc_state->crc_enabled && !enable)
-+		drm_crtc_vblank_put(crtc);
-+
-+	crtc_state->crc_enabled = enable;
- 
- 	/* Reset crc_skipped on dm state */
- 	crtc_state->crc_skip_count = 0;
-diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
-index 1b0d209d8367..3b95a637b508 100644
---- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
-+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
-@@ -239,6 +239,7 @@ static int dm_dp_mst_get_modes(struct drm_connector *connector)
- 			&init_params);
- 
- 		dc_sink->priv = aconnector;
-+		/* dc_link_add_remote_sink returns a new reference */
- 		aconnector->dc_sink = dc_sink;
- 
- 		if (aconnector->dc_sink)
-diff --git a/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c b/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
-index 43e4a2be0fa6..57cc11d0e9a5 100644
---- a/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
-+++ b/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c
-@@ -1355,12 +1355,12 @@ void dcn_bw_update_from_pplib(struct dc *dc)
- 	struct dm_pp_clock_levels_with_voltage fclks = {0}, dcfclks = {0};
- 	bool res;
- 
--	kernel_fpu_begin();
--
- 	/* TODO: This is not the proper way to obtain fabric_and_dram_bandwidth, should be min(fclk, memclk) */
- 	res = dm_pp_get_clock_levels_by_type_with_voltage(
- 			ctx, DM_PP_CLOCK_TYPE_FCLK, &fclks);
- 
-+	kernel_fpu_begin();
-+
- 	if (res)
- 		res = verify_clock_values(&fclks);
- 
-@@ -1379,9 +1379,13 @@ void dcn_bw_update_from_pplib(struct dc *dc)
- 	} else
- 		BREAK_TO_DEBUGGER();
- 
-+	kernel_fpu_end();
-+
- 	res = dm_pp_get_clock_levels_by_type_with_voltage(
- 			ctx, DM_PP_CLOCK_TYPE_DCFCLK, &dcfclks);
- 
-+	kernel_fpu_begin();
-+
- 	if (res)
- 		res = verify_clock_values(&dcfclks);
- 
-diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
-index 5fd52094d459..1f92e7e8e3d3 100644
---- a/drivers/gpu/drm/amd/display/dc/core/dc.c
-+++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
-@@ -1078,6 +1078,9 @@ static enum dc_status dc_commit_state_no_check(struct dc *dc, struct dc_state *c
- 	/* pplib is notified if disp_num changed */
- 	dc->hwss.optimize_bandwidth(dc, context);
- 
-+	for (i = 0; i < context->stream_count; i++)
-+		context->streams[i]->mode_changed = false;
-+
- 	dc_release_state(dc->current_state);
- 
- 	dc->current_state = context;
-diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
-index b0265dbebd4c..583eb367850f 100644
---- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
-+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
-@@ -792,6 +792,7 @@ bool dc_link_detect(struct dc_link *link, enum dc_detect_reason reason)
- 		sink->dongle_max_pix_clk = sink_caps.max_hdmi_pixel_clock;
- 		sink->converter_disable_audio = converter_disable_audio;
- 
-+		/* dc_sink_create returns a new reference */
- 		link->local_sink = sink;
- 
- 		edid_status = dm_helpers_read_local_edid(
-diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
-index 41883c981789..a684b38332ac 100644
---- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
-+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
-@@ -2334,9 +2334,10 @@ static void dcn10_apply_ctx_for_surface(
- 			}
- 		}
- 
--		if (!pipe_ctx->plane_state &&
--			old_pipe_ctx->plane_state &&
--			old_pipe_ctx->stream_res.tg == tg) {
-+		if ((!pipe_ctx->plane_state ||
-+		     pipe_ctx->stream_res.tg != old_pipe_ctx->stream_res.tg) &&
-+		    old_pipe_ctx->plane_state &&
-+		    old_pipe_ctx->stream_res.tg == tg) {
- 
- 			dc->hwss.plane_atomic_disconnect(dc, old_pipe_ctx);
- 			removed_pipe[i] = true;
-diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
-index c8f5c00dd1e7..86e3fb27c125 100644
---- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
-+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
-@@ -3491,14 +3491,14 @@ static int smu7_get_gpu_power(struct pp_hwmgr *hwmgr, u32 *query)
- 
- 	smum_send_msg_to_smc(hwmgr, PPSMC_MSG_PmStatusLogStart);
- 	cgs_write_ind_register(hwmgr->device, CGS_IND_REG__SMC,
--							ixSMU_PM_STATUS_94, 0);
-+							ixSMU_PM_STATUS_95, 0);
- 
- 	for (i = 0; i < 10; i++) {
--		mdelay(1);
-+		mdelay(500);
- 		smum_send_msg_to_smc(hwmgr, PPSMC_MSG_PmStatusLogSample);
- 		tmp = cgs_read_ind_register(hwmgr->device,
- 						CGS_IND_REG__SMC,
--						ixSMU_PM_STATUS_94);
-+						ixSMU_PM_STATUS_95);
- 		if (tmp != 0)
- 			break;
- 	}
-diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
-index f4290f6b0c38..2323ba9310d9 100644
---- a/drivers/gpu/drm/drm_atomic_helper.c
-+++ b/drivers/gpu/drm/drm_atomic_helper.c
-@@ -1611,6 +1611,15 @@ int drm_atomic_helper_async_check(struct drm_device *dev,
- 	if (old_plane_state->fb != new_plane_state->fb)
- 		return -EINVAL;
- 
-+	/*
-+	 * FIXME: Since prepare_fb and cleanup_fb are always called on
-+	 * the new_plane_state for async updates we need to block framebuffer
-+	 * changes. This prevents use of a fb that's been cleaned up and
-+	 * double cleanups from occuring.
-+	 */
-+	if (old_plane_state->fb != new_plane_state->fb)
-+		return -EINVAL;
-+
- 	funcs = plane->helper_private;
- 	if (!funcs->atomic_async_update)
- 		return -EINVAL;
-diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
-index 529414556962..1a244c53252c 100644
---- a/drivers/gpu/drm/drm_dp_mst_topology.c
-+++ b/drivers/gpu/drm/drm_dp_mst_topology.c
-@@ -3286,6 +3286,7 @@ static int drm_dp_mst_i2c_xfer(struct i2c_adapter *adapter, struct i2c_msg *msgs
- 		msg.u.i2c_read.transactions[i].i2c_dev_id = msgs[i].addr;
- 		msg.u.i2c_read.transactions[i].num_bytes = msgs[i].len;
- 		msg.u.i2c_read.transactions[i].bytes = msgs[i].buf;
-+		msg.u.i2c_read.transactions[i].no_stop_bit = !(msgs[i].flags & I2C_M_STOP);
- 	}
- 	msg.u.i2c_read.read_i2c_device_id = msgs[num - 1].addr;
- 	msg.u.i2c_read.num_bytes_read = msgs[num - 1].len;
-diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
-index d73703a695e8..edd8cb497f3b 100644
---- a/drivers/gpu/drm/drm_fb_helper.c
-+++ b/drivers/gpu/drm/drm_fb_helper.c
-@@ -2891,7 +2891,7 @@ int drm_fb_helper_fbdev_setup(struct drm_device *dev,
- 	return 0;
- 
- err_drm_fb_helper_fini:
--	drm_fb_helper_fini(fb_helper);
-+	drm_fb_helper_fbdev_teardown(dev);
- 
- 	return ret;
- }
-@@ -3170,9 +3170,7 @@ static void drm_fbdev_client_unregister(struct drm_client_dev *client)
- 
- static int drm_fbdev_client_restore(struct drm_client_dev *client)
- {
--	struct drm_fb_helper *fb_helper = drm_fb_helper_from_client(client);
--
--	drm_fb_helper_restore_fbdev_mode_unlocked(fb_helper);
-+	drm_fb_helper_lastclose(client->dev);
- 
- 	return 0;
- }
-diff --git a/drivers/gpu/drm/drm_mode_object.c b/drivers/gpu/drm/drm_mode_object.c
-index 004191d01772..15b919f90c5a 100644
---- a/drivers/gpu/drm/drm_mode_object.c
-+++ b/drivers/gpu/drm/drm_mode_object.c
-@@ -465,6 +465,7 @@ static int set_property_atomic(struct drm_mode_object *obj,
- 
- 	drm_modeset_acquire_init(&ctx, 0);
- 	state->acquire_ctx = &ctx;
-+
- retry:
- 	if (prop == state->dev->mode_config.dpms_property) {
- 		if (obj->type != DRM_MODE_OBJECT_CONNECTOR) {
-diff --git a/drivers/gpu/drm/drm_plane.c b/drivers/gpu/drm/drm_plane.c
-index 5f650d8fc66b..4cfb56893b7f 100644
---- a/drivers/gpu/drm/drm_plane.c
-+++ b/drivers/gpu/drm/drm_plane.c
-@@ -220,6 +220,9 @@ int drm_universal_plane_init(struct drm_device *dev, struct drm_plane *plane,
- 			format_modifier_count++;
- 	}
- 
-+	if (format_modifier_count)
-+		config->allow_fb_modifiers = true;
-+
- 	plane->modifier_count = format_modifier_count;
- 	plane->modifiers = kmalloc_array(format_modifier_count,
- 					 sizeof(format_modifiers[0]),
-diff --git a/drivers/gpu/drm/i915/gvt/cmd_parser.c b/drivers/gpu/drm/i915/gvt/cmd_parser.c
-index 77ae634eb11c..bd95fd6b4ac8 100644
---- a/drivers/gpu/drm/i915/gvt/cmd_parser.c
-+++ b/drivers/gpu/drm/i915/gvt/cmd_parser.c
-@@ -1446,7 +1446,7 @@ static inline int cmd_address_audit(struct parser_exec_state *s,
- 	}
- 
- 	if (index_mode)	{
--		if (guest_gma >= I915_GTT_PAGE_SIZE / sizeof(u64)) {
-+		if (guest_gma >= I915_GTT_PAGE_SIZE) {
- 			ret = -EFAULT;
- 			goto err;
- 		}
-diff --git a/drivers/gpu/drm/i915/gvt/gtt.c b/drivers/gpu/drm/i915/gvt/gtt.c
-index c7103dd2d8d5..563ab8590061 100644
---- a/drivers/gpu/drm/i915/gvt/gtt.c
-+++ b/drivers/gpu/drm/i915/gvt/gtt.c
-@@ -1942,7 +1942,7 @@ void _intel_vgpu_mm_release(struct kref *mm_ref)
-  */
- void intel_vgpu_unpin_mm(struct intel_vgpu_mm *mm)
- {
--	atomic_dec(&mm->pincount);
-+	atomic_dec_if_positive(&mm->pincount);
- }
- 
- /**
-diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
-index 55bb7885e228..8fff49affc11 100644
---- a/drivers/gpu/drm/i915/gvt/scheduler.c
-+++ b/drivers/gpu/drm/i915/gvt/scheduler.c
-@@ -1475,8 +1475,9 @@ intel_vgpu_create_workload(struct intel_vgpu *vgpu, int ring_id,
- 		intel_runtime_pm_put(dev_priv);
- 	}
- 
--	if (ret && (vgpu_is_vm_unhealthy(ret))) {
--		enter_failsafe_mode(vgpu, GVT_FAILSAFE_GUEST_ERR);
-+	if (ret) {
-+		if (vgpu_is_vm_unhealthy(ret))
-+			enter_failsafe_mode(vgpu, GVT_FAILSAFE_GUEST_ERR);
- 		intel_vgpu_destroy_workload(workload);
- 		return ERR_PTR(ret);
- 	}
-diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
-index b1c31967194b..489c1e656ff6 100644
---- a/drivers/gpu/drm/i915/i915_drv.h
-+++ b/drivers/gpu/drm/i915/i915_drv.h
-@@ -2293,7 +2293,8 @@ intel_info(const struct drm_i915_private *dev_priv)
- 				 INTEL_DEVID(dev_priv) == 0x5915 || \
- 				 INTEL_DEVID(dev_priv) == 0x591E)
- #define IS_AML_ULX(dev_priv)	(INTEL_DEVID(dev_priv) == 0x591C || \
--				 INTEL_DEVID(dev_priv) == 0x87C0)
-+				 INTEL_DEVID(dev_priv) == 0x87C0 || \
-+				 INTEL_DEVID(dev_priv) == 0x87CA)
- #define IS_SKL_GT2(dev_priv)	(IS_SKYLAKE(dev_priv) && \
- 				 (dev_priv)->info.gt == 2)
- #define IS_SKL_GT3(dev_priv)	(IS_SKYLAKE(dev_priv) && \
-diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
-index 067054cf4a86..60bed3f27775 100644
---- a/drivers/gpu/drm/i915/i915_reg.h
-+++ b/drivers/gpu/drm/i915/i915_reg.h
-@@ -9205,7 +9205,7 @@ enum skl_power_gate {
- #define TRANS_DDI_FUNC_CTL2(tran)	_MMIO_TRANS2(tran, \
- 						     _TRANS_DDI_FUNC_CTL2_A)
- #define  PORT_SYNC_MODE_ENABLE			(1 << 4)
--#define  PORT_SYNC_MODE_MASTER_SELECT(x)	((x) < 0)
-+#define  PORT_SYNC_MODE_MASTER_SELECT(x)	((x) << 0)
- #define  PORT_SYNC_MODE_MASTER_SELECT_MASK	(0x7 << 0)
- #define  PORT_SYNC_MODE_MASTER_SELECT_SHIFT	0
- 
-diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
-index 22a74608c6e4..dcd1df5322e8 100644
---- a/drivers/gpu/drm/i915/intel_dp.c
-+++ b/drivers/gpu/drm/i915/intel_dp.c
-@@ -1845,42 +1845,6 @@ intel_dp_compute_link_config_wide(struct intel_dp *intel_dp,
- 	return false;
- }
- 
--/* Optimize link config in order: max bpp, min lanes, min clock */
--static bool
--intel_dp_compute_link_config_fast(struct intel_dp *intel_dp,
--				  struct intel_crtc_state *pipe_config,
--				  const struct link_config_limits *limits)
--{
--	struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
--	int bpp, clock, lane_count;
--	int mode_rate, link_clock, link_avail;
--
--	for (bpp = limits->max_bpp; bpp >= limits->min_bpp; bpp -= 2 * 3) {
--		mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
--						   bpp);
--
--		for (lane_count = limits->min_lane_count;
--		     lane_count <= limits->max_lane_count;
--		     lane_count <<= 1) {
--			for (clock = limits->min_clock; clock <= limits->max_clock; clock++) {
--				link_clock = intel_dp->common_rates[clock];
--				link_avail = intel_dp_max_data_rate(link_clock,
--								    lane_count);
--
--				if (mode_rate <= link_avail) {
--					pipe_config->lane_count = lane_count;
--					pipe_config->pipe_bpp = bpp;
--					pipe_config->port_clock = link_clock;
--
--					return true;
--				}
--			}
--		}
--	}
--
--	return false;
--}
--
- static int intel_dp_dsc_compute_bpp(struct intel_dp *intel_dp, u8 dsc_max_bpc)
- {
- 	int i, num_bpc;
-@@ -2013,15 +1977,13 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
- 	limits.min_bpp = 6 * 3;
- 	limits.max_bpp = intel_dp_compute_bpp(intel_dp, pipe_config);
- 
--	if (intel_dp_is_edp(intel_dp) && intel_dp->edp_dpcd[0] < DP_EDP_14) {
-+	if (intel_dp_is_edp(intel_dp)) {
- 		/*
- 		 * Use the maximum clock and number of lanes the eDP panel
--		 * advertizes being capable of. The eDP 1.3 and earlier panels
--		 * are generally designed to support only a single clock and
--		 * lane configuration, and typically these values correspond to
--		 * the native resolution of the panel. With eDP 1.4 rate select
--		 * and DSC, this is decreasingly the case, and we need to be
--		 * able to select less than maximum link config.
-+		 * advertizes being capable of. The panels are generally
-+		 * designed to support only a single clock and lane
-+		 * configuration, and typically these values correspond to the
-+		 * native resolution of the panel.
- 		 */
- 		limits.min_lane_count = limits.max_lane_count;
- 		limits.min_clock = limits.max_clock;
-@@ -2035,22 +1997,11 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
- 		      intel_dp->common_rates[limits.max_clock],
- 		      limits.max_bpp, adjusted_mode->crtc_clock);
- 
--	if (intel_dp_is_edp(intel_dp))
--		/*
--		 * Optimize for fast and narrow. eDP 1.3 section 3.3 and eDP 1.4
--		 * section A.1: "It is recommended that the minimum number of
--		 * lanes be used, using the minimum link rate allowed for that
--		 * lane configuration."
--		 *
--		 * Note that we use the max clock and lane count for eDP 1.3 and
--		 * earlier, and fast vs. wide is irrelevant.
--		 */
--		ret = intel_dp_compute_link_config_fast(intel_dp, pipe_config,
--							&limits);
--	else
--		/* Optimize for slow and wide. */
--		ret = intel_dp_compute_link_config_wide(intel_dp, pipe_config,
--							&limits);
-+	/*
-+	 * Optimize for slow and wide. This is the place to add alternative
-+	 * optimization policy.
-+	 */
-+	ret = intel_dp_compute_link_config_wide(intel_dp, pipe_config, &limits);
- 
- 	/* enable compression if the mode doesn't fit available BW */
- 	if (!ret) {
-diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
-index cb307a2abf06..7316b4ab1b85 100644
---- a/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
-+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
-@@ -23,11 +23,14 @@ struct dpu_mdss {
- 	struct dpu_irq_controller irq_controller;
- };
- 
--static irqreturn_t dpu_mdss_irq(int irq, void *arg)
-+static void dpu_mdss_irq(struct irq_desc *desc)
- {
--	struct dpu_mdss *dpu_mdss = arg;
-+	struct dpu_mdss *dpu_mdss = irq_desc_get_handler_data(desc);
-+	struct irq_chip *chip = irq_desc_get_chip(desc);
- 	u32 interrupts;
- 
-+	chained_irq_enter(chip, desc);
-+
- 	interrupts = readl_relaxed(dpu_mdss->mmio + HW_INTR_STATUS);
- 
- 	while (interrupts) {
-@@ -39,20 +42,20 @@ static irqreturn_t dpu_mdss_irq(int irq, void *arg)
- 					   hwirq);
- 		if (mapping == 0) {
- 			DRM_ERROR("couldn't find irq mapping for %lu\n", hwirq);
--			return IRQ_NONE;
-+			break;
- 		}
- 
- 		rc = generic_handle_irq(mapping);
- 		if (rc < 0) {
- 			DRM_ERROR("handle irq fail: irq=%lu mapping=%u rc=%d\n",
- 				  hwirq, mapping, rc);
--			return IRQ_NONE;
-+			break;
- 		}
- 
- 		interrupts &= ~(1 << hwirq);
- 	}
- 
--	return IRQ_HANDLED;
-+	chained_irq_exit(chip, desc);
- }
- 
- static void dpu_mdss_irq_mask(struct irq_data *irqd)
-@@ -83,16 +86,16 @@ static struct irq_chip dpu_mdss_irq_chip = {
- 	.irq_unmask = dpu_mdss_irq_unmask,
- };
- 
-+static struct lock_class_key dpu_mdss_lock_key, dpu_mdss_request_key;
-+
- static int dpu_mdss_irqdomain_map(struct irq_domain *domain,
- 		unsigned int irq, irq_hw_number_t hwirq)
- {
- 	struct dpu_mdss *dpu_mdss = domain->host_data;
--	int ret;
- 
-+	irq_set_lockdep_class(irq, &dpu_mdss_lock_key, &dpu_mdss_request_key);
- 	irq_set_chip_and_handler(irq, &dpu_mdss_irq_chip, handle_level_irq);
--	ret = irq_set_chip_data(irq, dpu_mdss);
--
--	return ret;
-+	return irq_set_chip_data(irq, dpu_mdss);
- }
- 
- static const struct irq_domain_ops dpu_mdss_irqdomain_ops = {
-@@ -159,11 +162,13 @@ static void dpu_mdss_destroy(struct drm_device *dev)
- 	struct msm_drm_private *priv = dev->dev_private;
- 	struct dpu_mdss *dpu_mdss = to_dpu_mdss(priv->mdss);
- 	struct dss_module_power *mp = &dpu_mdss->mp;
-+	int irq;
- 
- 	pm_runtime_suspend(dev->dev);
- 	pm_runtime_disable(dev->dev);
- 	_dpu_mdss_irq_domain_fini(dpu_mdss);
--	free_irq(platform_get_irq(pdev, 0), dpu_mdss);
-+	irq = platform_get_irq(pdev, 0);
-+	irq_set_chained_handler_and_data(irq, NULL, NULL);
- 	msm_dss_put_clk(mp->clk_config, mp->num_clk);
- 	devm_kfree(&pdev->dev, mp->clk_config);
- 
-@@ -187,6 +192,7 @@ int dpu_mdss_init(struct drm_device *dev)
- 	struct dpu_mdss *dpu_mdss;
- 	struct dss_module_power *mp;
- 	int ret = 0;
-+	int irq;
- 
- 	dpu_mdss = devm_kzalloc(dev->dev, sizeof(*dpu_mdss), GFP_KERNEL);
- 	if (!dpu_mdss)
-@@ -219,12 +225,12 @@ int dpu_mdss_init(struct drm_device *dev)
- 	if (ret)
- 		goto irq_domain_error;
- 
--	ret = request_irq(platform_get_irq(pdev, 0),
--			dpu_mdss_irq, 0, "dpu_mdss_isr", dpu_mdss);
--	if (ret) {
--		DPU_ERROR("failed to init irq: %d\n", ret);
-+	irq = platform_get_irq(pdev, 0);
-+	if (irq < 0)
- 		goto irq_error;
--	}
-+
-+	irq_set_chained_handler_and_data(irq, dpu_mdss_irq,
-+					 dpu_mdss);
- 
- 	pm_runtime_enable(dev->dev);
- 
-diff --git a/drivers/gpu/drm/nouveau/dispnv04/tvnv17.c b/drivers/gpu/drm/nouveau/dispnv04/tvnv17.c
-index 6a4ca139cf5d..8fd8124d72ba 100644
---- a/drivers/gpu/drm/nouveau/dispnv04/tvnv17.c
-+++ b/drivers/gpu/drm/nouveau/dispnv04/tvnv17.c
-@@ -750,7 +750,9 @@ static int nv17_tv_set_property(struct drm_encoder *encoder,
- 		/* Disable the crtc to ensure a full modeset is
- 		 * performed whenever it's turned on again. */
- 		if (crtc)
--			drm_crtc_force_disable(crtc);
-+			drm_crtc_helper_set_mode(crtc, &crtc->mode,
-+						 crtc->x, crtc->y,
-+						 crtc->primary->fb);
- 	}
- 
- 	return 0;
-diff --git a/drivers/gpu/drm/radeon/evergreen_cs.c b/drivers/gpu/drm/radeon/evergreen_cs.c
-index f471537c852f..1e14c6921454 100644
---- a/drivers/gpu/drm/radeon/evergreen_cs.c
-+++ b/drivers/gpu/drm/radeon/evergreen_cs.c
-@@ -1299,6 +1299,7 @@ static int evergreen_cs_handle_reg(struct radeon_cs_parser *p, u32 reg, u32 idx)
- 			return -EINVAL;
- 		}
- 		ib[idx] += (u32)((reloc->gpu_offset >> 8) & 0xffffffff);
-+		break;
- 	case CB_TARGET_MASK:
- 		track->cb_target_mask = radeon_get_ib_value(p, idx);
- 		track->cb_dirty = true;
-diff --git a/drivers/gpu/drm/rcar-du/rcar_du_kms.c b/drivers/gpu/drm/rcar-du/rcar_du_kms.c
-index 9c7007d45408..f9a90ff24e6d 100644
---- a/drivers/gpu/drm/rcar-du/rcar_du_kms.c
-+++ b/drivers/gpu/drm/rcar-du/rcar_du_kms.c
-@@ -331,6 +331,7 @@ static int rcar_du_encoders_init_one(struct rcar_du_device *rcdu,
- 		dev_dbg(rcdu->dev,
- 			"connected entity %pOF is disabled, skipping\n",
- 			entity);
-+		of_node_put(entity);
- 		return -ENODEV;
- 	}
- 
-@@ -366,6 +367,7 @@ static int rcar_du_encoders_init_one(struct rcar_du_device *rcdu,
- 		dev_warn(rcdu->dev,
- 			 "no encoder found for endpoint %pOF, skipping\n",
- 			 ep->local_node);
-+		of_node_put(entity);
- 		return -ENODEV;
- 	}
- 
-diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
-index fb70fb486fbf..cdbb47566cac 100644
---- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
-+++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
-@@ -511,6 +511,18 @@ static void vop_core_clks_disable(struct vop *vop)
- 	clk_disable(vop->hclk);
- }
- 
-+static void vop_win_disable(struct vop *vop, const struct vop_win_data *win)
-+{
-+	if (win->phy->scl && win->phy->scl->ext) {
-+		VOP_SCL_SET_EXT(vop, win, yrgb_hor_scl_mode, SCALE_NONE);
-+		VOP_SCL_SET_EXT(vop, win, yrgb_ver_scl_mode, SCALE_NONE);
-+		VOP_SCL_SET_EXT(vop, win, cbcr_hor_scl_mode, SCALE_NONE);
-+		VOP_SCL_SET_EXT(vop, win, cbcr_ver_scl_mode, SCALE_NONE);
-+	}
-+
-+	VOP_WIN_SET(vop, win, enable, 0);
-+}
-+
- static int vop_enable(struct drm_crtc *crtc)
- {
- 	struct vop *vop = to_vop(crtc);
-@@ -556,7 +568,7 @@ static int vop_enable(struct drm_crtc *crtc)
- 		struct vop_win *vop_win = &vop->win[i];
- 		const struct vop_win_data *win = vop_win->data;
- 
--		VOP_WIN_SET(vop, win, enable, 0);
-+		vop_win_disable(vop, win);
- 	}
- 	spin_unlock(&vop->reg_lock);
- 
-@@ -700,7 +712,7 @@ static void vop_plane_atomic_disable(struct drm_plane *plane,
- 
- 	spin_lock(&vop->reg_lock);
- 
--	VOP_WIN_SET(vop, win, enable, 0);
-+	vop_win_disable(vop, win);
- 
- 	spin_unlock(&vop->reg_lock);
- }
-@@ -1476,7 +1488,7 @@ static int vop_initial(struct vop *vop)
- 		int channel = i * 2 + 1;
- 
- 		VOP_WIN_SET(vop, win, channel, (channel + 1) << 4 | channel);
--		VOP_WIN_SET(vop, win, enable, 0);
-+		vop_win_disable(vop, win);
- 		VOP_WIN_SET(vop, win, gate, 1);
- 	}
- 
-diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
-index e2942c9a11a7..35ddbec1375a 100644
---- a/drivers/gpu/drm/scheduler/sched_entity.c
-+++ b/drivers/gpu/drm/scheduler/sched_entity.c
-@@ -52,12 +52,12 @@ int drm_sched_entity_init(struct drm_sched_entity *entity,
- {
- 	int i;
- 
--	if (!(entity && rq_list && num_rq_list > 0 && rq_list[0]))
-+	if (!(entity && rq_list && (num_rq_list == 0 || rq_list[0])))
- 		return -EINVAL;
- 
- 	memset(entity, 0, sizeof(struct drm_sched_entity));
- 	INIT_LIST_HEAD(&entity->list);
--	entity->rq = rq_list[0];
-+	entity->rq = NULL;
- 	entity->guilty = guilty;
- 	entity->num_rq_list = num_rq_list;
- 	entity->rq_list = kcalloc(num_rq_list, sizeof(struct drm_sched_rq *),
-@@ -67,6 +67,10 @@ int drm_sched_entity_init(struct drm_sched_entity *entity,
- 
- 	for (i = 0; i < num_rq_list; ++i)
- 		entity->rq_list[i] = rq_list[i];
-+
-+	if (num_rq_list)
-+		entity->rq = rq_list[0];
-+
- 	entity->last_scheduled = NULL;
- 
- 	spin_lock_init(&entity->rq_lock);
-@@ -165,6 +169,9 @@ long drm_sched_entity_flush(struct drm_sched_entity *entity, long timeout)
- 	struct task_struct *last_user;
- 	long ret = timeout;
- 
-+	if (!entity->rq)
-+		return 0;
-+
- 	sched = entity->rq->sched;
- 	/**
- 	 * The client will not queue more IBs during this fini, consume existing
-@@ -264,20 +271,24 @@ static void drm_sched_entity_kill_jobs(struct drm_sched_entity *entity)
-  */
- void drm_sched_entity_fini(struct drm_sched_entity *entity)
- {
--	struct drm_gpu_scheduler *sched;
-+	struct drm_gpu_scheduler *sched = NULL;
- 
--	sched = entity->rq->sched;
--	drm_sched_rq_remove_entity(entity->rq, entity);
-+	if (entity->rq) {
-+		sched = entity->rq->sched;
-+		drm_sched_rq_remove_entity(entity->rq, entity);
-+	}
- 
- 	/* Consumption of existing IBs wasn't completed. Forcefully
- 	 * remove them here.
- 	 */
- 	if (spsc_queue_peek(&entity->job_queue)) {
--		/* Park the kernel for a moment to make sure it isn't processing
--		 * our enity.
--		 */
--		kthread_park(sched->thread);
--		kthread_unpark(sched->thread);
-+		if (sched) {
-+			/* Park the kernel for a moment to make sure it isn't processing
-+			 * our enity.
-+			 */
-+			kthread_park(sched->thread);
-+			kthread_unpark(sched->thread);
-+		}
- 		if (entity->dependency) {
- 			dma_fence_remove_callback(entity->dependency,
- 						  &entity->cb);
-@@ -362,9 +373,11 @@ void drm_sched_entity_set_priority(struct drm_sched_entity *entity,
- 	for (i = 0; i < entity->num_rq_list; ++i)
- 		drm_sched_entity_set_rq_priority(&entity->rq_list[i], priority);
- 
--	drm_sched_rq_remove_entity(entity->rq, entity);
--	drm_sched_entity_set_rq_priority(&entity->rq, priority);
--	drm_sched_rq_add_entity(entity->rq, entity);
-+	if (entity->rq) {
-+		drm_sched_rq_remove_entity(entity->rq, entity);
-+		drm_sched_entity_set_rq_priority(&entity->rq, priority);
-+		drm_sched_rq_add_entity(entity->rq, entity);
-+	}
- 
- 	spin_unlock(&entity->rq_lock);
- }
-diff --git a/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c b/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c
-index dc47720c99ba..39d8509d96a0 100644
---- a/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c
-+++ b/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c
-@@ -48,8 +48,13 @@ static enum drm_mode_status
- sun8i_dw_hdmi_mode_valid_h6(struct drm_connector *connector,
- 			    const struct drm_display_mode *mode)
- {
--	/* This is max for HDMI 2.0b (4K@60Hz) */
--	if (mode->clock > 594000)
-+	/*
-+	 * Controller support maximum of 594 MHz, which correlates to
-+	 * 4K@60Hz 4:4:4 or RGB. However, for frequencies greater than
-+	 * 340 MHz scrambling has to be enabled. Because scrambling is
-+	 * not yet implemented, just limit to 340 MHz for now.
-+	 */
-+	if (mode->clock > 340000)
- 		return MODE_CLOCK_HIGH;
- 
- 	return MODE_OK;
-diff --git a/drivers/gpu/drm/udl/udl_drv.c b/drivers/gpu/drm/udl/udl_drv.c
-index a63e3011e971..bd4f0b88bbd7 100644
---- a/drivers/gpu/drm/udl/udl_drv.c
-+++ b/drivers/gpu/drm/udl/udl_drv.c
-@@ -51,6 +51,7 @@ static struct drm_driver driver = {
- 	.driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_PRIME,
- 	.load = udl_driver_load,
- 	.unload = udl_driver_unload,
-+	.release = udl_driver_release,
- 
- 	/* gem hooks */
- 	.gem_free_object_unlocked = udl_gem_free_object,
-diff --git a/drivers/gpu/drm/udl/udl_drv.h b/drivers/gpu/drm/udl/udl_drv.h
-index e9e9b1ff678e..4ae67d882eae 100644
---- a/drivers/gpu/drm/udl/udl_drv.h
-+++ b/drivers/gpu/drm/udl/udl_drv.h
-@@ -104,6 +104,7 @@ void udl_urb_completion(struct urb *urb);
- 
- int udl_driver_load(struct drm_device *dev, unsigned long flags);
- void udl_driver_unload(struct drm_device *dev);
-+void udl_driver_release(struct drm_device *dev);
- 
- int udl_fbdev_init(struct drm_device *dev);
- void udl_fbdev_cleanup(struct drm_device *dev);
-diff --git a/drivers/gpu/drm/udl/udl_main.c b/drivers/gpu/drm/udl/udl_main.c
-index 1b014d92855b..19055dda3140 100644
---- a/drivers/gpu/drm/udl/udl_main.c
-+++ b/drivers/gpu/drm/udl/udl_main.c
-@@ -378,6 +378,12 @@ void udl_driver_unload(struct drm_device *dev)
- 		udl_free_urb_list(dev);
- 
- 	udl_fbdev_cleanup(dev);
--	udl_modeset_cleanup(dev);
- 	kfree(udl);
- }
-+
-+void udl_driver_release(struct drm_device *dev)
-+{
-+	udl_modeset_cleanup(dev);
-+	drm_dev_fini(dev);
-+	kfree(dev);
-+}
-diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
-index 5930facd6d2d..11a8f99ba18c 100644
---- a/drivers/gpu/drm/vgem/vgem_drv.c
-+++ b/drivers/gpu/drm/vgem/vgem_drv.c
-@@ -191,13 +191,9 @@ static struct drm_gem_object *vgem_gem_create(struct drm_device *dev,
- 	ret = drm_gem_handle_create(file, &obj->base, handle);
- 	drm_gem_object_put_unlocked(&obj->base);
- 	if (ret)
--		goto err;
-+		return ERR_PTR(ret);
- 
- 	return &obj->base;
--
--err:
--	__vgem_gem_destroy(obj);
--	return ERR_PTR(ret);
- }
- 
- static int vgem_gem_dumb_create(struct drm_file *file, struct drm_device *dev,
-diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
-index f39a183d59c2..e7e946035027 100644
---- a/drivers/gpu/drm/virtio/virtgpu_object.c
-+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
-@@ -28,10 +28,21 @@
- static int virtio_gpu_resource_id_get(struct virtio_gpu_device *vgdev,
- 				       uint32_t *resid)
- {
-+#if 0
- 	int handle = ida_alloc(&vgdev->resource_ida, GFP_KERNEL);
- 
- 	if (handle < 0)
- 		return handle;
-+#else
-+	static int handle;
-+
-+	/*
-+	 * FIXME: dirty hack to avoid re-using IDs, virglrenderer
-+	 * can't deal with that.  Needs fixing in virglrenderer, also
-+	 * should figure a better way to handle that in the guest.
-+	 */
-+	handle++;
-+#endif
- 
- 	*resid = handle + 1;
- 	return 0;
-@@ -39,7 +50,9 @@ static int virtio_gpu_resource_id_get(struct virtio_gpu_device *vgdev,
- 
- static void virtio_gpu_resource_id_put(struct virtio_gpu_device *vgdev, uint32_t id)
- {
-+#if 0
- 	ida_free(&vgdev->resource_ida, id - 1);
-+#endif
- }
- 
- static void virtio_gpu_ttm_bo_destroy(struct ttm_buffer_object *tbo)
-diff --git a/drivers/gpu/drm/vkms/vkms_crtc.c b/drivers/gpu/drm/vkms/vkms_crtc.c
-index eb56ee893761..1054f535178a 100644
---- a/drivers/gpu/drm/vkms/vkms_crtc.c
-+++ b/drivers/gpu/drm/vkms/vkms_crtc.c
-@@ -4,13 +4,17 @@
- #include <drm/drm_atomic_helper.h>
- #include <drm/drm_crtc_helper.h>
- 
--static void _vblank_handle(struct vkms_output *output)
-+static enum hrtimer_restart vkms_vblank_simulate(struct hrtimer *timer)
- {
-+	struct vkms_output *output = container_of(timer, struct vkms_output,
-+						  vblank_hrtimer);
- 	struct drm_crtc *crtc = &output->crtc;
- 	struct vkms_crtc_state *state = to_vkms_crtc_state(crtc->state);
-+	int ret_overrun;
- 	bool ret;
- 
- 	spin_lock(&output->lock);
-+
- 	ret = drm_crtc_handle_vblank(crtc);
- 	if (!ret)
- 		DRM_ERROR("vkms failure on handling vblank");
-@@ -31,19 +35,9 @@ static void _vblank_handle(struct vkms_output *output)
- 			DRM_WARN("failed to queue vkms_crc_work_handle");
- 	}
- 
--	spin_unlock(&output->lock);
--}
--
--static enum hrtimer_restart vkms_vblank_simulate(struct hrtimer *timer)
--{
--	struct vkms_output *output = container_of(timer, struct vkms_output,
--						  vblank_hrtimer);
--	int ret_overrun;
--
--	_vblank_handle(output);
--
- 	ret_overrun = hrtimer_forward_now(&output->vblank_hrtimer,
- 					  output->period_ns);
-+	spin_unlock(&output->lock);
- 
- 	return HRTIMER_RESTART;
- }
-@@ -81,6 +75,9 @@ bool vkms_get_vblank_timestamp(struct drm_device *dev, unsigned int pipe,
- 
- 	*vblank_time = output->vblank_hrtimer.node.expires;
- 
-+	if (!in_vblank_irq)
-+		*vblank_time -= output->period_ns;
-+
- 	return true;
- }
- 
-@@ -98,6 +95,7 @@ static void vkms_atomic_crtc_reset(struct drm_crtc *crtc)
- 	vkms_state = kzalloc(sizeof(*vkms_state), GFP_KERNEL);
- 	if (!vkms_state)
- 		return;
-+	INIT_WORK(&vkms_state->crc_work, vkms_crc_work_handle);
- 
- 	crtc->state = &vkms_state->base;
- 	crtc->state->crtc = crtc;
-diff --git a/drivers/gpu/drm/vkms/vkms_gem.c b/drivers/gpu/drm/vkms/vkms_gem.c
-index 138b0bb325cf..69048e73377d 100644
---- a/drivers/gpu/drm/vkms/vkms_gem.c
-+++ b/drivers/gpu/drm/vkms/vkms_gem.c
-@@ -111,11 +111,8 @@ struct drm_gem_object *vkms_gem_create(struct drm_device *dev,
- 
- 	ret = drm_gem_handle_create(file, &obj->gem, handle);
- 	drm_gem_object_put_unlocked(&obj->gem);
--	if (ret) {
--		drm_gem_object_release(&obj->gem);
--		kfree(obj);
-+	if (ret)
- 		return ERR_PTR(ret);
--	}
- 
- 	return &obj->gem;
- }
-diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c b/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
-index b913a56f3426..2a9112515f46 100644
---- a/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
-+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
-@@ -564,11 +564,9 @@ static int vmw_fb_set_par(struct fb_info *info)
- 		0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
- 		DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_PVSYNC)
- 	};
--	struct drm_display_mode *old_mode;
- 	struct drm_display_mode *mode;
- 	int ret;
- 
--	old_mode = par->set_mode;
- 	mode = drm_mode_duplicate(vmw_priv->dev, &new_mode);
- 	if (!mode) {
- 		DRM_ERROR("Could not create new fb mode.\n");
-@@ -579,11 +577,7 @@ static int vmw_fb_set_par(struct fb_info *info)
- 	mode->vdisplay = var->yres;
- 	vmw_guess_mode_timing(mode);
- 
--	if (old_mode && drm_mode_equal(old_mode, mode)) {
--		drm_mode_destroy(vmw_priv->dev, mode);
--		mode = old_mode;
--		old_mode = NULL;
--	} else if (!vmw_kms_validate_mode_vram(vmw_priv,
-+	if (!vmw_kms_validate_mode_vram(vmw_priv,
- 					mode->hdisplay *
- 					DIV_ROUND_UP(var->bits_per_pixel, 8),
- 					mode->vdisplay)) {
-@@ -620,8 +614,8 @@ static int vmw_fb_set_par(struct fb_info *info)
- 	schedule_delayed_work(&par->local_work, 0);
- 
- out_unlock:
--	if (old_mode)
--		drm_mode_destroy(vmw_priv->dev, old_mode);
-+	if (par->set_mode)
-+		drm_mode_destroy(vmw_priv->dev, par->set_mode);
- 	par->set_mode = mode;
- 
- 	mutex_unlock(&par->bo_mutex);
-diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c b/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
-index b93c558dd86e..7da752ca1c34 100644
---- a/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
-+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_gmrid_manager.c
-@@ -57,7 +57,7 @@ static int vmw_gmrid_man_get_node(struct ttm_mem_type_manager *man,
- 
- 	id = ida_alloc_max(&gman->gmr_ida, gman->max_gmr_ids - 1, GFP_KERNEL);
- 	if (id < 0)
--		return id;
-+		return (id != -ENOMEM ? 0 : id);
- 
- 	spin_lock(&gman->lock);
- 
-diff --git a/drivers/hid/hid-logitech-hidpp.c b/drivers/hid/hid-logitech-hidpp.c
-index 15ed6177a7a3..f040c8a7f9a9 100644
---- a/drivers/hid/hid-logitech-hidpp.c
-+++ b/drivers/hid/hid-logitech-hidpp.c
-@@ -2608,8 +2608,9 @@ static int m560_raw_event(struct hid_device *hdev, u8 *data, int size)
- 		input_report_rel(mydata->input, REL_Y, v);
- 
- 		v = hid_snto32(data[6], 8);
--		hidpp_scroll_counter_handle_scroll(
--				&hidpp->vertical_wheel_counter, v);
-+		if (v != 0)
-+			hidpp_scroll_counter_handle_scroll(
-+					&hidpp->vertical_wheel_counter, v);
- 
- 		input_sync(mydata->input);
- 	}
-diff --git a/drivers/hid/intel-ish-hid/ipc/ipc.c b/drivers/hid/intel-ish-hid/ipc/ipc.c
-index 742191bb24c6..45e33c7ba9a6 100644
---- a/drivers/hid/intel-ish-hid/ipc/ipc.c
-+++ b/drivers/hid/intel-ish-hid/ipc/ipc.c
-@@ -91,7 +91,10 @@ static bool check_generated_interrupt(struct ishtp_device *dev)
- 			IPC_INT_FROM_ISH_TO_HOST_CHV_AB(pisr_val);
- 	} else {
- 		pisr_val = ish_reg_read(dev, IPC_REG_PISR_BXT);
--		interrupt_generated = IPC_INT_FROM_ISH_TO_HOST_BXT(pisr_val);
-+		interrupt_generated = !!pisr_val;
-+		/* only busy-clear bit is RW, others are RO */
-+		if (pisr_val)
-+			ish_reg_write(dev, IPC_REG_PISR_BXT, pisr_val);
- 	}
- 
- 	return interrupt_generated;
-@@ -839,11 +842,11 @@ int ish_hw_start(struct ishtp_device *dev)
- {
- 	ish_set_host_rdy(dev);
- 
-+	set_host_ready(dev);
-+
- 	/* After that we can enable ISH DMA operation and wakeup ISHFW */
- 	ish_wakeup(dev);
- 
--	set_host_ready(dev);
--
- 	/* wait for FW-initiated reset flow */
- 	if (!dev->recvd_hw_ready)
- 		wait_event_interruptible_timeout(dev->wait_hw_ready,
-diff --git a/drivers/hid/intel-ish-hid/ishtp/bus.c b/drivers/hid/intel-ish-hid/ishtp/bus.c
-index 728dc6d4561a..a271d6d169b1 100644
---- a/drivers/hid/intel-ish-hid/ishtp/bus.c
-+++ b/drivers/hid/intel-ish-hid/ishtp/bus.c
-@@ -675,7 +675,8 @@ int ishtp_cl_device_bind(struct ishtp_cl *cl)
- 	spin_lock_irqsave(&cl->dev->device_list_lock, flags);
- 	list_for_each_entry(cl_device, &cl->dev->device_list,
- 			device_link) {
--		if (cl_device->fw_client->client_id == cl->fw_client_id) {
-+		if (cl_device->fw_client &&
-+		    cl_device->fw_client->client_id == cl->fw_client_id) {
- 			cl->device = cl_device;
- 			rv = 0;
- 			break;
-@@ -735,6 +736,7 @@ void ishtp_bus_remove_all_clients(struct ishtp_device *ishtp_dev,
- 	spin_lock_irqsave(&ishtp_dev->device_list_lock, flags);
- 	list_for_each_entry_safe(cl_device, n, &ishtp_dev->device_list,
- 				 device_link) {
-+		cl_device->fw_client = NULL;
- 		if (warm_reset && cl_device->reference_count)
- 			continue;
- 
-diff --git a/drivers/hwmon/Kconfig b/drivers/hwmon/Kconfig
-index 6f929bfa9fcd..d0f1dfe2bcbb 100644
---- a/drivers/hwmon/Kconfig
-+++ b/drivers/hwmon/Kconfig
-@@ -1759,6 +1759,7 @@ config SENSORS_VT8231
- config SENSORS_W83773G
- 	tristate "Nuvoton W83773G"
- 	depends on I2C
-+	select REGMAP_I2C
- 	help
- 	  If you say yes here you get support for the Nuvoton W83773G hardware
- 	  monitoring chip.
-diff --git a/drivers/hwmon/occ/common.c b/drivers/hwmon/occ/common.c
-index 391118c8aae8..c888f4aca45c 100644
---- a/drivers/hwmon/occ/common.c
-+++ b/drivers/hwmon/occ/common.c
-@@ -889,6 +889,8 @@ static int occ_setup_sensor_attrs(struct occ *occ)
- 				s++;
- 			}
- 		}
-+
-+		s = (sensors->power.num_sensors * 4) + 1;
- 	} else {
- 		for (i = 0; i < sensors->power.num_sensors; ++i) {
- 			s = i + 1;
-@@ -917,11 +919,11 @@ static int occ_setup_sensor_attrs(struct occ *occ)
- 						     show_power, NULL, 3, i);
- 			attr++;
- 		}
--	}
- 
--	if (sensors->caps.num_sensors >= 1) {
- 		s = sensors->power.num_sensors + 1;
-+	}
- 
-+	if (sensors->caps.num_sensors >= 1) {
- 		snprintf(attr->name, sizeof(attr->name), "power%d_label", s);
- 		attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_caps, NULL,
- 					     0, 0);
-diff --git a/drivers/hwtracing/coresight/coresight-etm-perf.c b/drivers/hwtracing/coresight/coresight-etm-perf.c
-index abe8249b893b..f21eb28b6782 100644
---- a/drivers/hwtracing/coresight/coresight-etm-perf.c
-+++ b/drivers/hwtracing/coresight/coresight-etm-perf.c
-@@ -177,15 +177,15 @@ static void etm_free_aux(void *data)
- 	schedule_work(&event_data->work);
- }
- 
--static void *etm_setup_aux(int event_cpu, void **pages,
-+static void *etm_setup_aux(struct perf_event *event, void **pages,
- 			   int nr_pages, bool overwrite)
- {
--	int cpu;
-+	int cpu = event->cpu;
- 	cpumask_t *mask;
- 	struct coresight_device *sink;
- 	struct etm_event_data *event_data = NULL;
- 
--	event_data = alloc_event_data(event_cpu);
-+	event_data = alloc_event_data(cpu);
- 	if (!event_data)
- 		return NULL;
- 	INIT_WORK(&event_data->work, free_event_data);
-diff --git a/drivers/hwtracing/coresight/coresight-etm4x.c b/drivers/hwtracing/coresight/coresight-etm4x.c
-index 53e2fb6e86f6..fe76b176974a 100644
---- a/drivers/hwtracing/coresight/coresight-etm4x.c
-+++ b/drivers/hwtracing/coresight/coresight-etm4x.c
-@@ -55,7 +55,8 @@ static void etm4_os_unlock(struct etmv4_drvdata *drvdata)
- 
- static bool etm4_arch_supported(u8 arch)
- {
--	switch (arch) {
-+	/* Mask out the minor version number */
-+	switch (arch & 0xf0) {
- 	case ETM_ARCH_V4:
- 		break;
- 	default:
-diff --git a/drivers/hwtracing/intel_th/gth.c b/drivers/hwtracing/intel_th/gth.c
-index 8426b7970c14..cc287cf6eb29 100644
---- a/drivers/hwtracing/intel_th/gth.c
-+++ b/drivers/hwtracing/intel_th/gth.c
-@@ -607,6 +607,7 @@ static void intel_th_gth_unassign(struct intel_th_device *thdev,
- {
- 	struct gth_device *gth = dev_get_drvdata(&thdev->dev);
- 	int port = othdev->output.port;
-+	int master;
- 
- 	if (thdev->host_mode)
- 		return;
-@@ -615,6 +616,9 @@ static void intel_th_gth_unassign(struct intel_th_device *thdev,
- 	othdev->output.port = -1;
- 	othdev->output.active = false;
- 	gth->output[port].output = NULL;
-+	for (master = 0; master < TH_CONFIGURABLE_MASTERS; master++)
-+		if (gth->master[master] == port)
-+			gth->master[master] = -1;
- 	spin_unlock(&gth->gth_lock);
- }
- 
-diff --git a/drivers/hwtracing/stm/core.c b/drivers/hwtracing/stm/core.c
-index 93ce3aa740a9..c7ba8acfd4d5 100644
---- a/drivers/hwtracing/stm/core.c
-+++ b/drivers/hwtracing/stm/core.c
-@@ -244,6 +244,9 @@ static int find_free_channels(unsigned long *bitmap, unsigned int start,
- 			;
- 		if (i == width)
- 			return pos;
-+
-+		/* step over [pos..pos+i) to continue search */
-+		pos += i;
- 	}
- 
- 	return -1;
-@@ -732,7 +735,7 @@ static int stm_char_policy_set_ioctl(struct stm_file *stmf, void __user *arg)
- 	struct stm_device *stm = stmf->stm;
- 	struct stp_policy_id *id;
- 	char *ids[] = { NULL, NULL };
--	int ret = -EINVAL;
-+	int ret = -EINVAL, wlimit = 1;
- 	u32 size;
- 
- 	if (stmf->output.nr_chans)
-@@ -760,8 +763,10 @@ static int stm_char_policy_set_ioctl(struct stm_file *stmf, void __user *arg)
- 	if (id->__reserved_0 || id->__reserved_1)
- 		goto err_free;
- 
--	if (id->width < 1 ||
--	    id->width > PAGE_SIZE / stm->data->sw_mmiosz)
-+	if (stm->data->sw_mmiosz)
-+		wlimit = PAGE_SIZE / stm->data->sw_mmiosz;
-+
-+	if (id->width < 1 || id->width > wlimit)
- 		goto err_free;
- 
- 	ids[0] = id->id;
-diff --git a/drivers/i2c/busses/i2c-designware-core.h b/drivers/i2c/busses/i2c-designware-core.h
-index b4a0b2b99a78..6b4ef1d38fb2 100644
---- a/drivers/i2c/busses/i2c-designware-core.h
-+++ b/drivers/i2c/busses/i2c-designware-core.h
-@@ -215,6 +215,7 @@
-  * @disable_int: function to disable all interrupts
-  * @init: function to initialize the I2C hardware
-  * @mode: operation mode - DW_IC_MASTER or DW_IC_SLAVE
-+ * @suspended: set to true if the controller is suspended
-  *
-  * HCNT and LCNT parameters can be used if the platform knows more accurate
-  * values than the one computed based only on the input clock frequency.
-@@ -270,6 +271,7 @@ struct dw_i2c_dev {
- 	int			(*set_sda_hold_time)(struct dw_i2c_dev *dev);
- 	int			mode;
- 	struct i2c_bus_recovery_info rinfo;
-+	bool			suspended;
- };
- 
- #define ACCESS_SWAP		0x00000001
-diff --git a/drivers/i2c/busses/i2c-designware-master.c b/drivers/i2c/busses/i2c-designware-master.c
-index 8d1bc44d2530..bb8e3f149979 100644
---- a/drivers/i2c/busses/i2c-designware-master.c
-+++ b/drivers/i2c/busses/i2c-designware-master.c
-@@ -426,6 +426,12 @@ i2c_dw_xfer(struct i2c_adapter *adap, struct i2c_msg msgs[], int num)
- 
- 	pm_runtime_get_sync(dev->dev);
- 
-+	if (dev->suspended) {
-+		dev_err(dev->dev, "Error %s call while suspended\n", __func__);
-+		ret = -ESHUTDOWN;
-+		goto done_nolock;
-+	}
-+
- 	reinit_completion(&dev->cmd_complete);
- 	dev->msgs = msgs;
- 	dev->msgs_num = num;
-diff --git a/drivers/i2c/busses/i2c-designware-pcidrv.c b/drivers/i2c/busses/i2c-designware-pcidrv.c
-index d50f80487214..76810deb2de6 100644
---- a/drivers/i2c/busses/i2c-designware-pcidrv.c
-+++ b/drivers/i2c/busses/i2c-designware-pcidrv.c
-@@ -176,6 +176,7 @@ static int i2c_dw_pci_suspend(struct device *dev)
- 	struct pci_dev *pdev = to_pci_dev(dev);
- 	struct dw_i2c_dev *i_dev = pci_get_drvdata(pdev);
- 
-+	i_dev->suspended = true;
- 	i_dev->disable(i_dev);
- 
- 	return 0;
-@@ -185,8 +186,12 @@ static int i2c_dw_pci_resume(struct device *dev)
- {
- 	struct pci_dev *pdev = to_pci_dev(dev);
- 	struct dw_i2c_dev *i_dev = pci_get_drvdata(pdev);
-+	int ret;
- 
--	return i_dev->init(i_dev);
-+	ret = i_dev->init(i_dev);
-+	i_dev->suspended = false;
-+
-+	return ret;
- }
- #endif
- 
-diff --git a/drivers/i2c/busses/i2c-designware-platdrv.c b/drivers/i2c/busses/i2c-designware-platdrv.c
-index 9eaac3be1f63..ead5e7de3e4d 100644
---- a/drivers/i2c/busses/i2c-designware-platdrv.c
-+++ b/drivers/i2c/busses/i2c-designware-platdrv.c
-@@ -454,6 +454,8 @@ static int dw_i2c_plat_suspend(struct device *dev)
- {
- 	struct dw_i2c_dev *i_dev = dev_get_drvdata(dev);
- 
-+	i_dev->suspended = true;
-+
- 	if (i_dev->shared_with_punit)
- 		return 0;
- 
-@@ -471,6 +473,7 @@ static int dw_i2c_plat_resume(struct device *dev)
- 		i2c_dw_prepare_clk(i_dev, true);
- 
- 	i_dev->init(i_dev);
-+	i_dev->suspended = false;
- 
- 	return 0;
- }
-diff --git a/drivers/i2c/busses/i2c-tegra.c b/drivers/i2c/busses/i2c-tegra.c
-index c77adbbea0c7..e85dc8583896 100644
---- a/drivers/i2c/busses/i2c-tegra.c
-+++ b/drivers/i2c/busses/i2c-tegra.c
-@@ -118,6 +118,9 @@
- #define I2C_MST_FIFO_STATUS_TX_MASK		0xff0000
- #define I2C_MST_FIFO_STATUS_TX_SHIFT		16
- 
-+/* Packet header size in bytes */
-+#define I2C_PACKET_HEADER_SIZE			12
-+
- /*
-  * msg_end_type: The bus control which need to be send at end of transfer.
-  * @MSG_END_STOP: Send stop pulse at end of transfer.
-@@ -836,12 +839,13 @@ static const struct i2c_algorithm tegra_i2c_algo = {
- /* payload size is only 12 bit */
- static const struct i2c_adapter_quirks tegra_i2c_quirks = {
- 	.flags = I2C_AQ_NO_ZERO_LEN,
--	.max_read_len = 4096,
--	.max_write_len = 4096,
-+	.max_read_len = SZ_4K,
-+	.max_write_len = SZ_4K - I2C_PACKET_HEADER_SIZE,
- };
- 
- static const struct i2c_adapter_quirks tegra194_i2c_quirks = {
- 	.flags = I2C_AQ_NO_ZERO_LEN,
-+	.max_write_len = SZ_64K - I2C_PACKET_HEADER_SIZE,
- };
- 
- static const struct tegra_i2c_hw_feature tegra20_i2c_hw = {
-diff --git a/drivers/i2c/i2c-core-base.c b/drivers/i2c/i2c-core-base.c
-index 28460f6a60cc..af87a16ac3a5 100644
---- a/drivers/i2c/i2c-core-base.c
-+++ b/drivers/i2c/i2c-core-base.c
-@@ -430,7 +430,7 @@ static int i2c_device_remove(struct device *dev)
- 	dev_pm_clear_wake_irq(&client->dev);
- 	device_init_wakeup(&client->dev, false);
- 
--	client->irq = 0;
-+	client->irq = client->init_irq;
- 
- 	return status;
- }
-@@ -741,10 +741,11 @@ i2c_new_device(struct i2c_adapter *adap, struct i2c_board_info const *info)
- 	client->flags = info->flags;
- 	client->addr = info->addr;
- 
--	client->irq = info->irq;
--	if (!client->irq)
--		client->irq = i2c_dev_irq_from_resources(info->resources,
-+	client->init_irq = info->irq;
-+	if (!client->init_irq)
-+		client->init_irq = i2c_dev_irq_from_resources(info->resources,
- 							 info->num_resources);
-+	client->irq = client->init_irq;
- 
- 	strlcpy(client->name, info->type, sizeof(client->name));
- 
-diff --git a/drivers/i2c/i2c-core-of.c b/drivers/i2c/i2c-core-of.c
-index 6cb7ad608bcd..0f01cdba9d2c 100644
---- a/drivers/i2c/i2c-core-of.c
-+++ b/drivers/i2c/i2c-core-of.c
-@@ -121,6 +121,17 @@ static int of_dev_node_match(struct device *dev, void *data)
- 	return dev->of_node == data;
- }
- 
-+static int of_dev_or_parent_node_match(struct device *dev, void *data)
-+{
-+	if (dev->of_node == data)
-+		return 1;
-+
-+	if (dev->parent)
-+		return dev->parent->of_node == data;
-+
-+	return 0;
-+}
-+
- /* must call put_device() when done with returned i2c_client device */
- struct i2c_client *of_find_i2c_device_by_node(struct device_node *node)
- {
-@@ -145,7 +156,8 @@ struct i2c_adapter *of_find_i2c_adapter_by_node(struct device_node *node)
- 	struct device *dev;
- 	struct i2c_adapter *adapter;
- 
--	dev = bus_find_device(&i2c_bus_type, NULL, node, of_dev_node_match);
-+	dev = bus_find_device(&i2c_bus_type, NULL, node,
-+			      of_dev_or_parent_node_match);
- 	if (!dev)
- 		return NULL;
- 
-diff --git a/drivers/iio/adc/exynos_adc.c b/drivers/iio/adc/exynos_adc.c
-index fa2d2b5767f3..1ca2c4d39f87 100644
---- a/drivers/iio/adc/exynos_adc.c
-+++ b/drivers/iio/adc/exynos_adc.c
-@@ -115,6 +115,7 @@
- #define MAX_ADC_V2_CHANNELS		10
- #define MAX_ADC_V1_CHANNELS		8
- #define MAX_EXYNOS3250_ADC_CHANNELS	2
-+#define MAX_EXYNOS4212_ADC_CHANNELS	4
- #define MAX_S5PV210_ADC_CHANNELS	10
- 
- /* Bit definitions common for ADC_V1 and ADC_V2 */
-@@ -271,6 +272,19 @@ static void exynos_adc_v1_start_conv(struct exynos_adc *info,
- 	writel(con1 | ADC_CON_EN_START, ADC_V1_CON(info->regs));
- }
- 
-+/* Exynos4212 and 4412 is like ADCv1 but with four channels only */
-+static const struct exynos_adc_data exynos4212_adc_data = {
-+	.num_channels	= MAX_EXYNOS4212_ADC_CHANNELS,
-+	.mask		= ADC_DATX_MASK,	/* 12 bit ADC resolution */
-+	.needs_adc_phy	= true,
-+	.phy_offset	= EXYNOS_ADCV1_PHY_OFFSET,
-+
-+	.init_hw	= exynos_adc_v1_init_hw,
-+	.exit_hw	= exynos_adc_v1_exit_hw,
-+	.clear_irq	= exynos_adc_v1_clear_irq,
-+	.start_conv	= exynos_adc_v1_start_conv,
-+};
-+
- static const struct exynos_adc_data exynos_adc_v1_data = {
- 	.num_channels	= MAX_ADC_V1_CHANNELS,
- 	.mask		= ADC_DATX_MASK,	/* 12 bit ADC resolution */
-@@ -492,6 +506,9 @@ static const struct of_device_id exynos_adc_match[] = {
- 	}, {
- 		.compatible = "samsung,s5pv210-adc",
- 		.data = &exynos_adc_s5pv210_data,
-+	}, {
-+		.compatible = "samsung,exynos4212-adc",
-+		.data = &exynos4212_adc_data,
- 	}, {
- 		.compatible = "samsung,exynos-adc-v1",
- 		.data = &exynos_adc_v1_data,
-@@ -929,7 +946,7 @@ static int exynos_adc_remove(struct platform_device *pdev)
- 	struct iio_dev *indio_dev = platform_get_drvdata(pdev);
- 	struct exynos_adc *info = iio_priv(indio_dev);
- 
--	if (IS_REACHABLE(CONFIG_INPUT)) {
-+	if (IS_REACHABLE(CONFIG_INPUT) && info->input) {
- 		free_irq(info->tsirq, info);
- 		input_unregister_device(info->input);
- 	}
-diff --git a/drivers/iio/adc/qcom-pm8xxx-xoadc.c b/drivers/iio/adc/qcom-pm8xxx-xoadc.c
-index c30c002f1fef..4735f8a1ca9d 100644
---- a/drivers/iio/adc/qcom-pm8xxx-xoadc.c
-+++ b/drivers/iio/adc/qcom-pm8xxx-xoadc.c
-@@ -423,18 +423,14 @@ static irqreturn_t pm8xxx_eoc_irq(int irq, void *d)
- static struct pm8xxx_chan_info *
- pm8xxx_get_channel(struct pm8xxx_xoadc *adc, u8 chan)
- {
--	struct pm8xxx_chan_info *ch;
- 	int i;
- 
- 	for (i = 0; i < adc->nchans; i++) {
--		ch = &adc->chans[i];
-+		struct pm8xxx_chan_info *ch = &adc->chans[i];
- 		if (ch->hwchan->amux_channel == chan)
--			break;
-+			return ch;
- 	}
--	if (i == adc->nchans)
--		return NULL;
--
--	return ch;
-+	return NULL;
- }
- 
- static int pm8xxx_read_channel_rsv(struct pm8xxx_xoadc *adc,
-diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
-index 84f077b2b90a..81bded0d37d1 100644
---- a/drivers/infiniband/core/cma.c
-+++ b/drivers/infiniband/core/cma.c
-@@ -2966,13 +2966,22 @@ static void addr_handler(int status, struct sockaddr *src_addr,
- {
- 	struct rdma_id_private *id_priv = context;
- 	struct rdma_cm_event event = {};
-+	struct sockaddr *addr;
-+	struct sockaddr_storage old_addr;
- 
- 	mutex_lock(&id_priv->handler_mutex);
- 	if (!cma_comp_exch(id_priv, RDMA_CM_ADDR_QUERY,
- 			   RDMA_CM_ADDR_RESOLVED))
- 		goto out;
- 
--	memcpy(cma_src_addr(id_priv), src_addr, rdma_addr_size(src_addr));
-+	/*
-+	 * Store the previous src address, so that if we fail to acquire
-+	 * matching rdma device, old address can be restored back, which helps
-+	 * to cancel the cma listen operation correctly.
-+	 */
-+	addr = cma_src_addr(id_priv);
-+	memcpy(&old_addr, addr, rdma_addr_size(addr));
-+	memcpy(addr, src_addr, rdma_addr_size(src_addr));
- 	if (!status && !id_priv->cma_dev) {
- 		status = cma_acquire_dev_by_src_ip(id_priv);
- 		if (status)
-@@ -2983,6 +2992,8 @@ static void addr_handler(int status, struct sockaddr *src_addr,
- 	}
- 
- 	if (status) {
-+		memcpy(addr, &old_addr,
-+		       rdma_addr_size((struct sockaddr *)&old_addr));
- 		if (!cma_comp_exch(id_priv, RDMA_CM_ADDR_RESOLVED,
- 				   RDMA_CM_ADDR_BOUND))
- 			goto out;
-diff --git a/drivers/infiniband/hw/cxgb4/cm.c b/drivers/infiniband/hw/cxgb4/cm.c
-index 8221813219e5..25a81fbb0d4d 100644
---- a/drivers/infiniband/hw/cxgb4/cm.c
-+++ b/drivers/infiniband/hw/cxgb4/cm.c
-@@ -1903,8 +1903,10 @@ static int abort_rpl(struct c4iw_dev *dev, struct sk_buff *skb)
- 	}
- 	mutex_unlock(&ep->com.mutex);
- 
--	if (release)
-+	if (release) {
-+		close_complete_upcall(ep, -ECONNRESET);
- 		release_ep_resources(ep);
-+	}
- 	c4iw_put_ep(&ep->com);
- 	return 0;
- }
-@@ -3606,7 +3608,6 @@ int c4iw_ep_disconnect(struct c4iw_ep *ep, int abrupt, gfp_t gfp)
- 	if (close) {
- 		if (abrupt) {
- 			set_bit(EP_DISC_ABORT, &ep->com.history);
--			close_complete_upcall(ep, -ECONNRESET);
- 			ret = send_abort(ep);
- 		} else {
- 			set_bit(EP_DISC_CLOSE, &ep->com.history);
-diff --git a/drivers/infiniband/hw/hfi1/hfi.h b/drivers/infiniband/hw/hfi1/hfi.h
-index 6db2276f5c13..15ec3e1feb09 100644
---- a/drivers/infiniband/hw/hfi1/hfi.h
-+++ b/drivers/infiniband/hw/hfi1/hfi.h
-@@ -1435,7 +1435,7 @@ void hfi1_init_pportdata(struct pci_dev *pdev, struct hfi1_pportdata *ppd,
- 			 struct hfi1_devdata *dd, u8 hw_pidx, u8 port);
- void hfi1_free_ctxtdata(struct hfi1_devdata *dd, struct hfi1_ctxtdata *rcd);
- int hfi1_rcd_put(struct hfi1_ctxtdata *rcd);
--void hfi1_rcd_get(struct hfi1_ctxtdata *rcd);
-+int hfi1_rcd_get(struct hfi1_ctxtdata *rcd);
- struct hfi1_ctxtdata *hfi1_rcd_get_by_index_safe(struct hfi1_devdata *dd,
- 						 u16 ctxt);
- struct hfi1_ctxtdata *hfi1_rcd_get_by_index(struct hfi1_devdata *dd, u16 ctxt);
-diff --git a/drivers/infiniband/hw/hfi1/init.c b/drivers/infiniband/hw/hfi1/init.c
-index 7835eb52e7c5..c532ceb0bb9a 100644
---- a/drivers/infiniband/hw/hfi1/init.c
-+++ b/drivers/infiniband/hw/hfi1/init.c
-@@ -215,12 +215,12 @@ static void hfi1_rcd_free(struct kref *kref)
- 	struct hfi1_ctxtdata *rcd =
- 		container_of(kref, struct hfi1_ctxtdata, kref);
- 
--	hfi1_free_ctxtdata(rcd->dd, rcd);
--
- 	spin_lock_irqsave(&rcd->dd->uctxt_lock, flags);
- 	rcd->dd->rcd[rcd->ctxt] = NULL;
- 	spin_unlock_irqrestore(&rcd->dd->uctxt_lock, flags);
- 
-+	hfi1_free_ctxtdata(rcd->dd, rcd);
-+
- 	kfree(rcd);
- }
- 
-@@ -243,10 +243,13 @@ int hfi1_rcd_put(struct hfi1_ctxtdata *rcd)
-  * @rcd: pointer to an initialized rcd data structure
-  *
-  * Use this to get a reference after the init.
-+ *
-+ * Return : reflect kref_get_unless_zero(), which returns non-zero on
-+ * increment, otherwise 0.
-  */
--void hfi1_rcd_get(struct hfi1_ctxtdata *rcd)
-+int hfi1_rcd_get(struct hfi1_ctxtdata *rcd)
- {
--	kref_get(&rcd->kref);
-+	return kref_get_unless_zero(&rcd->kref);
- }
- 
- /**
-@@ -326,7 +329,8 @@ struct hfi1_ctxtdata *hfi1_rcd_get_by_index(struct hfi1_devdata *dd, u16 ctxt)
- 	spin_lock_irqsave(&dd->uctxt_lock, flags);
- 	if (dd->rcd[ctxt]) {
- 		rcd = dd->rcd[ctxt];
--		hfi1_rcd_get(rcd);
-+		if (!hfi1_rcd_get(rcd))
-+			rcd = NULL;
- 	}
- 	spin_unlock_irqrestore(&dd->uctxt_lock, flags);
- 
-diff --git a/drivers/infiniband/hw/mlx4/cm.c b/drivers/infiniband/hw/mlx4/cm.c
-index fedaf8260105..8c79a480f2b7 100644
---- a/drivers/infiniband/hw/mlx4/cm.c
-+++ b/drivers/infiniband/hw/mlx4/cm.c
-@@ -39,7 +39,7 @@
- 
- #include "mlx4_ib.h"
- 
--#define CM_CLEANUP_CACHE_TIMEOUT  (5 * HZ)
-+#define CM_CLEANUP_CACHE_TIMEOUT  (30 * HZ)
- 
- struct id_map_entry {
- 	struct rb_node node;
-diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
-index 4ee32964e1dd..948eb6e25219 100644
---- a/drivers/infiniband/hw/mlx5/odp.c
-+++ b/drivers/infiniband/hw/mlx5/odp.c
-@@ -560,7 +560,7 @@ static int pagefault_mr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr,
- 	struct ib_umem_odp *odp_mr = to_ib_umem_odp(mr->umem);
- 	bool downgrade = flags & MLX5_PF_FLAGS_DOWNGRADE;
- 	bool prefetch = flags & MLX5_PF_FLAGS_PREFETCH;
--	u64 access_mask = ODP_READ_ALLOWED_BIT;
-+	u64 access_mask;
- 	u64 start_idx, page_mask;
- 	struct ib_umem_odp *odp;
- 	size_t size;
-@@ -582,6 +582,7 @@ next_mr:
- 	page_shift = mr->umem->page_shift;
- 	page_mask = ~(BIT(page_shift) - 1);
- 	start_idx = (io_virt - (mr->mmkey.iova & page_mask)) >> page_shift;
-+	access_mask = ODP_READ_ALLOWED_BIT;
- 
- 	if (prefetch && !downgrade && !mr->umem->writable) {
- 		/* prefetch with write-access must
-diff --git a/drivers/infiniband/sw/rdmavt/qp.c b/drivers/infiniband/sw/rdmavt/qp.c
-index c6cc3e4ab71d..c45b8359b389 100644
---- a/drivers/infiniband/sw/rdmavt/qp.c
-+++ b/drivers/infiniband/sw/rdmavt/qp.c
-@@ -2785,6 +2785,18 @@ again:
- }
- EXPORT_SYMBOL(rvt_copy_sge);
- 
-+static enum ib_wc_status loopback_qp_drop(struct rvt_ibport *rvp,
-+					  struct rvt_qp *sqp)
-+{
-+	rvp->n_pkt_drops++;
-+	/*
-+	 * For RC, the requester would timeout and retry so
-+	 * shortcut the timeouts and just signal too many retries.
-+	 */
-+	return sqp->ibqp.qp_type == IB_QPT_RC ?
-+		IB_WC_RETRY_EXC_ERR : IB_WC_SUCCESS;
-+}
-+
- /**
-  * ruc_loopback - handle UC and RC loopback requests
-  * @sqp: the sending QP
-@@ -2857,17 +2869,14 @@ again:
- 	}
- 	spin_unlock_irqrestore(&sqp->s_lock, flags);
- 
--	if (!qp || !(ib_rvt_state_ops[qp->state] & RVT_PROCESS_RECV_OK) ||
-+	if (!qp) {
-+		send_status = loopback_qp_drop(rvp, sqp);
-+		goto serr_no_r_lock;
-+	}
-+	spin_lock_irqsave(&qp->r_lock, flags);
-+	if (!(ib_rvt_state_ops[qp->state] & RVT_PROCESS_RECV_OK) ||
- 	    qp->ibqp.qp_type != sqp->ibqp.qp_type) {
--		rvp->n_pkt_drops++;
--		/*
--		 * For RC, the requester would timeout and retry so
--		 * shortcut the timeouts and just signal too many retries.
--		 */
--		if (sqp->ibqp.qp_type == IB_QPT_RC)
--			send_status = IB_WC_RETRY_EXC_ERR;
--		else
--			send_status = IB_WC_SUCCESS;
-+		send_status = loopback_qp_drop(rvp, sqp);
- 		goto serr;
- 	}
- 
-@@ -2893,18 +2902,8 @@ again:
- 		goto send_comp;
- 
- 	case IB_WR_SEND_WITH_INV:
--		if (!rvt_invalidate_rkey(qp, wqe->wr.ex.invalidate_rkey)) {
--			wc.wc_flags = IB_WC_WITH_INVALIDATE;
--			wc.ex.invalidate_rkey = wqe->wr.ex.invalidate_rkey;
--		}
--		goto send;
--
- 	case IB_WR_SEND_WITH_IMM:
--		wc.wc_flags = IB_WC_WITH_IMM;
--		wc.ex.imm_data = wqe->wr.ex.imm_data;
--		/* FALLTHROUGH */
- 	case IB_WR_SEND:
--send:
- 		ret = rvt_get_rwqe(qp, false);
- 		if (ret < 0)
- 			goto op_err;
-@@ -2912,6 +2911,22 @@ send:
- 			goto rnr_nak;
- 		if (wqe->length > qp->r_len)
- 			goto inv_err;
-+		switch (wqe->wr.opcode) {
-+		case IB_WR_SEND_WITH_INV:
-+			if (!rvt_invalidate_rkey(qp,
-+						 wqe->wr.ex.invalidate_rkey)) {
-+				wc.wc_flags = IB_WC_WITH_INVALIDATE;
-+				wc.ex.invalidate_rkey =
-+					wqe->wr.ex.invalidate_rkey;
-+			}
-+			break;
-+		case IB_WR_SEND_WITH_IMM:
-+			wc.wc_flags = IB_WC_WITH_IMM;
-+			wc.ex.imm_data = wqe->wr.ex.imm_data;
-+			break;
-+		default:
-+			break;
-+		}
- 		break;
- 
- 	case IB_WR_RDMA_WRITE_WITH_IMM:
-@@ -3041,6 +3056,7 @@ do_write:
- 		     wqe->wr.send_flags & IB_SEND_SOLICITED);
- 
- send_comp:
-+	spin_unlock_irqrestore(&qp->r_lock, flags);
- 	spin_lock_irqsave(&sqp->s_lock, flags);
- 	rvp->n_loop_pkts++;
- flush_send:
-@@ -3067,6 +3083,7 @@ rnr_nak:
- 	}
- 	if (sqp->s_rnr_retry_cnt < 7)
- 		sqp->s_rnr_retry--;
-+	spin_unlock_irqrestore(&qp->r_lock, flags);
- 	spin_lock_irqsave(&sqp->s_lock, flags);
- 	if (!(ib_rvt_state_ops[sqp->state] & RVT_PROCESS_RECV_OK))
- 		goto clr_busy;
-@@ -3095,6 +3112,8 @@ err:
- 	rvt_rc_error(qp, wc.status);
- 
- serr:
-+	spin_unlock_irqrestore(&qp->r_lock, flags);
-+serr_no_r_lock:
- 	spin_lock_irqsave(&sqp->s_lock, flags);
- 	rvt_send_complete(sqp, wqe, send_status);
- 	if (sqp->ibqp.qp_type == IB_QPT_RC) {
-diff --git a/drivers/input/misc/soc_button_array.c b/drivers/input/misc/soc_button_array.c
-index 23520df7650f..55cd6e0b409c 100644
---- a/drivers/input/misc/soc_button_array.c
-+++ b/drivers/input/misc/soc_button_array.c
-@@ -373,7 +373,7 @@ static struct soc_button_info soc_button_PNP0C40[] = {
- 	{ "home", 1, EV_KEY, KEY_LEFTMETA, false, true },
- 	{ "volume_up", 2, EV_KEY, KEY_VOLUMEUP, true, false },
- 	{ "volume_down", 3, EV_KEY, KEY_VOLUMEDOWN, true, false },
--	{ "rotation_lock", 4, EV_SW, SW_ROTATE_LOCK, false, false },
-+	{ "rotation_lock", 4, EV_KEY, KEY_ROTATE_LOCK_TOGGLE, false, false },
- 	{ }
- };
- 
-diff --git a/drivers/input/mouse/elan_i2c_core.c b/drivers/input/mouse/elan_i2c_core.c
-index 225ae6980182..628ef617bb2f 100644
---- a/drivers/input/mouse/elan_i2c_core.c
-+++ b/drivers/input/mouse/elan_i2c_core.c
-@@ -1337,6 +1337,7 @@ static const struct acpi_device_id elan_acpi_id[] = {
- 	{ "ELAN0000", 0 },
- 	{ "ELAN0100", 0 },
- 	{ "ELAN0600", 0 },
-+	{ "ELAN0601", 0 },
- 	{ "ELAN0602", 0 },
- 	{ "ELAN0605", 0 },
- 	{ "ELAN0608", 0 },
-diff --git a/drivers/input/tablet/wacom_serial4.c b/drivers/input/tablet/wacom_serial4.c
-index 38bfaca48eab..150f9eecaca7 100644
---- a/drivers/input/tablet/wacom_serial4.c
-+++ b/drivers/input/tablet/wacom_serial4.c
-@@ -187,6 +187,7 @@ enum {
- 	MODEL_DIGITIZER_II	= 0x5544, /* UD */
- 	MODEL_GRAPHIRE		= 0x4554, /* ET */
- 	MODEL_PENPARTNER	= 0x4354, /* CT */
-+	MODEL_ARTPAD_II		= 0x4B54, /* KT */
- };
- 
- static void wacom_handle_model_response(struct wacom *wacom)
-@@ -245,6 +246,7 @@ static void wacom_handle_model_response(struct wacom *wacom)
- 		wacom->flags = F_HAS_STYLUS2 | F_HAS_SCROLLWHEEL;
- 		break;
- 
-+	case MODEL_ARTPAD_II:
- 	case MODEL_DIGITIZER_II:
- 		wacom->dev->name = "Wacom Digitizer II";
- 		wacom->dev->id.version = MODEL_DIGITIZER_II;
-diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
-index 2a7b78bb98b4..e628ef23418f 100644
---- a/drivers/iommu/amd_iommu.c
-+++ b/drivers/iommu/amd_iommu.c
-@@ -2605,7 +2605,12 @@ static int map_sg(struct device *dev, struct scatterlist *sglist,
- 
- 	/* Everything is mapped - write the right values into s->dma_address */
- 	for_each_sg(sglist, s, nelems, i) {
--		s->dma_address += address + s->offset;
-+		/*
-+		 * Add in the remaining piece of the scatter-gather offset that
-+		 * was masked out when we were determining the physical address
-+		 * via (sg_phys(s) & PAGE_MASK) earlier.
-+		 */
-+		s->dma_address += address + (s->offset & ~PAGE_MASK);
- 		s->dma_length   = s->length;
- 	}
- 
-diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
-index 78188bf7e90d..dbd6824dfffa 100644
---- a/drivers/iommu/intel-iommu.c
-+++ b/drivers/iommu/intel-iommu.c
-@@ -2485,7 +2485,8 @@ static struct dmar_domain *dmar_insert_one_dev_info(struct intel_iommu *iommu,
- 	if (dev && dev_is_pci(dev)) {
- 		struct pci_dev *pdev = to_pci_dev(info->dev);
- 
--		if (!pci_ats_disabled() &&
-+		if (!pdev->untrusted &&
-+		    !pci_ats_disabled() &&
- 		    ecap_dev_iotlb_support(iommu->ecap) &&
- 		    pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ATS) &&
- 		    dmar_find_matched_atsr_unit(pdev))
-diff --git a/drivers/iommu/io-pgtable-arm-v7s.c b/drivers/iommu/io-pgtable-arm-v7s.c
-index cec29bf45c9b..18a8330e1882 100644
---- a/drivers/iommu/io-pgtable-arm-v7s.c
-+++ b/drivers/iommu/io-pgtable-arm-v7s.c
-@@ -161,6 +161,14 @@
- 
- #define ARM_V7S_TCR_PD1			BIT(5)
- 
-+#ifdef CONFIG_ZONE_DMA32
-+#define ARM_V7S_TABLE_GFP_DMA GFP_DMA32
-+#define ARM_V7S_TABLE_SLAB_FLAGS SLAB_CACHE_DMA32
-+#else
-+#define ARM_V7S_TABLE_GFP_DMA GFP_DMA
-+#define ARM_V7S_TABLE_SLAB_FLAGS SLAB_CACHE_DMA
-+#endif
-+
- typedef u32 arm_v7s_iopte;
- 
- static bool selftest_running;
-@@ -198,13 +206,16 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
- 	void *table = NULL;
- 
- 	if (lvl == 1)
--		table = (void *)__get_dma_pages(__GFP_ZERO, get_order(size));
-+		table = (void *)__get_free_pages(
-+			__GFP_ZERO | ARM_V7S_TABLE_GFP_DMA, get_order(size));
- 	else if (lvl == 2)
--		table = kmem_cache_zalloc(data->l2_tables, gfp | GFP_DMA);
-+		table = kmem_cache_zalloc(data->l2_tables, gfp);
- 	phys = virt_to_phys(table);
--	if (phys != (arm_v7s_iopte)phys)
-+	if (phys != (arm_v7s_iopte)phys) {
- 		/* Doesn't fit in PTE */
-+		dev_err(dev, "Page table does not fit in PTE: %pa", &phys);
- 		goto out_free;
-+	}
- 	if (table && !(cfg->quirks & IO_PGTABLE_QUIRK_NO_DMA)) {
- 		dma = dma_map_single(dev, table, size, DMA_TO_DEVICE);
- 		if (dma_mapping_error(dev, dma))
-@@ -217,7 +228,8 @@ static void *__arm_v7s_alloc_table(int lvl, gfp_t gfp,
- 		if (dma != phys)
- 			goto out_unmap;
- 	}
--	kmemleak_ignore(table);
-+	if (lvl == 2)
-+		kmemleak_ignore(table);
- 	return table;
- 
- out_unmap:
-@@ -733,7 +745,7 @@ static struct io_pgtable *arm_v7s_alloc_pgtable(struct io_pgtable_cfg *cfg,
- 	data->l2_tables = kmem_cache_create("io-pgtable_armv7s_l2",
- 					    ARM_V7S_TABLE_SIZE(2),
- 					    ARM_V7S_TABLE_SIZE(2),
--					    SLAB_CACHE_DMA, NULL);
-+					    ARM_V7S_TABLE_SLAB_FLAGS, NULL);
- 	if (!data->l2_tables)
- 		goto out_free_data;
- 
-diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
-index f8d3ba247523..2de8122e218f 100644
---- a/drivers/iommu/iova.c
-+++ b/drivers/iommu/iova.c
-@@ -207,8 +207,10 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
- 		curr_iova = rb_entry(curr, struct iova, node);
- 	} while (curr && new_pfn <= curr_iova->pfn_hi);
- 
--	if (limit_pfn < size || new_pfn < iovad->start_pfn)
-+	if (limit_pfn < size || new_pfn < iovad->start_pfn) {
-+		iovad->max32_alloc_size = size;
- 		goto iova32_full;
-+	}
- 
- 	/* pfn_lo will point to size aligned address if size_aligned is set */
- 	new->pfn_lo = new_pfn;
-@@ -222,7 +224,6 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
- 	return 0;
- 
- iova32_full:
--	iovad->max32_alloc_size = size;
- 	spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
- 	return -ENOMEM;
- }
-diff --git a/drivers/irqchip/irq-brcmstb-l2.c b/drivers/irqchip/irq-brcmstb-l2.c
-index 0e65f609352e..83364fedbf0a 100644
---- a/drivers/irqchip/irq-brcmstb-l2.c
-+++ b/drivers/irqchip/irq-brcmstb-l2.c
-@@ -129,8 +129,9 @@ static void brcmstb_l2_intc_suspend(struct irq_data *d)
- 	struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
- 	struct irq_chip_type *ct = irq_data_get_chip_type(d);
- 	struct brcmstb_l2_intc_data *b = gc->private;
-+	unsigned long flags;
- 
--	irq_gc_lock(gc);
-+	irq_gc_lock_irqsave(gc, flags);
- 	/* Save the current mask */
- 	b->saved_mask = irq_reg_readl(gc, ct->regs.mask);
- 
-@@ -139,7 +140,7 @@ static void brcmstb_l2_intc_suspend(struct irq_data *d)
- 		irq_reg_writel(gc, ~gc->wake_active, ct->regs.disable);
- 		irq_reg_writel(gc, gc->wake_active, ct->regs.enable);
- 	}
--	irq_gc_unlock(gc);
-+	irq_gc_unlock_irqrestore(gc, flags);
- }
- 
- static void brcmstb_l2_intc_resume(struct irq_data *d)
-@@ -147,8 +148,9 @@ static void brcmstb_l2_intc_resume(struct irq_data *d)
- 	struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
- 	struct irq_chip_type *ct = irq_data_get_chip_type(d);
- 	struct brcmstb_l2_intc_data *b = gc->private;
-+	unsigned long flags;
- 
--	irq_gc_lock(gc);
-+	irq_gc_lock_irqsave(gc, flags);
- 	if (ct->chip.irq_ack) {
- 		/* Clear unmasked non-wakeup interrupts */
- 		irq_reg_writel(gc, ~b->saved_mask & ~gc->wake_active,
-@@ -158,7 +160,7 @@ static void brcmstb_l2_intc_resume(struct irq_data *d)
- 	/* Restore the saved mask */
- 	irq_reg_writel(gc, b->saved_mask, ct->regs.disable);
- 	irq_reg_writel(gc, ~b->saved_mask, ct->regs.enable);
--	irq_gc_unlock(gc);
-+	irq_gc_unlock_irqrestore(gc, flags);
- }
- 
- static int __init brcmstb_l2_intc_of_init(struct device_node *np,
-diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
-index c3aba3fc818d..93e32a59640c 100644
---- a/drivers/irqchip/irq-gic-v3-its.c
-+++ b/drivers/irqchip/irq-gic-v3-its.c
-@@ -1482,7 +1482,7 @@ static int lpi_range_cmp(void *priv, struct list_head *a, struct list_head *b)
- 	ra = container_of(a, struct lpi_range, entry);
- 	rb = container_of(b, struct lpi_range, entry);
- 
--	return rb->base_id - ra->base_id;
-+	return ra->base_id - rb->base_id;
- }
- 
- static void merge_lpi_ranges(void)
-@@ -1955,6 +1955,8 @@ static int its_alloc_tables(struct its_node *its)
- 			indirect = its_parse_indirect_baser(its, baser,
- 							    psz, &order,
- 							    its->device_ids);
-+			break;
-+
- 		case GITS_BASER_TYPE_VCPU:
- 			indirect = its_parse_indirect_baser(its, baser,
- 							    psz, &order,
-diff --git a/drivers/isdn/hardware/mISDN/hfcmulti.c b/drivers/isdn/hardware/mISDN/hfcmulti.c
-index 4d85645c87f7..0928fd1f0e0c 100644
---- a/drivers/isdn/hardware/mISDN/hfcmulti.c
-+++ b/drivers/isdn/hardware/mISDN/hfcmulti.c
-@@ -4365,7 +4365,8 @@ setup_pci(struct hfc_multi *hc, struct pci_dev *pdev,
- 	if (m->clock2)
- 		test_and_set_bit(HFC_CHIP_CLOCK2, &hc->chip);
- 
--	if (ent->device == 0xB410) {
-+	if (ent->vendor == PCI_VENDOR_ID_DIGIUM &&
-+	    ent->device == PCI_DEVICE_ID_DIGIUM_HFC4S) {
- 		test_and_set_bit(HFC_CHIP_B410P, &hc->chip);
- 		test_and_set_bit(HFC_CHIP_PCM_MASTER, &hc->chip);
- 		test_and_clear_bit(HFC_CHIP_PCM_SLAVE, &hc->chip);
-diff --git a/drivers/leds/leds-lp55xx-common.c b/drivers/leds/leds-lp55xx-common.c
-index 3d79a6380761..723f2f17497a 100644
---- a/drivers/leds/leds-lp55xx-common.c
-+++ b/drivers/leds/leds-lp55xx-common.c
-@@ -201,7 +201,7 @@ static void lp55xx_firmware_loaded(const struct firmware *fw, void *context)
- 
- 	if (!fw) {
- 		dev_err(dev, "firmware request failed\n");
--		goto out;
-+		return;
- 	}
- 
- 	/* handling firmware data is chip dependent */
-@@ -214,9 +214,9 @@ static void lp55xx_firmware_loaded(const struct firmware *fw, void *context)
- 
- 	mutex_unlock(&chip->lock);
- 
--out:
- 	/* firmware should be released for other channel use */
- 	release_firmware(chip->fw);
-+	chip->fw = NULL;
- }
- 
- static int lp55xx_request_firmware(struct lp55xx_chip *chip)
-diff --git a/drivers/md/bcache/extents.c b/drivers/md/bcache/extents.c
-index 956004366699..886710043025 100644
---- a/drivers/md/bcache/extents.c
-+++ b/drivers/md/bcache/extents.c
-@@ -538,6 +538,7 @@ static bool bch_extent_bad(struct btree_keys *bk, const struct bkey *k)
- {
- 	struct btree *b = container_of(bk, struct btree, keys);
- 	unsigned int i, stale;
-+	char buf[80];
- 
- 	if (!KEY_PTRS(k) ||
- 	    bch_extent_invalid(bk, k))
-@@ -547,19 +548,19 @@ static bool bch_extent_bad(struct btree_keys *bk, const struct bkey *k)
- 		if (!ptr_available(b->c, k, i))
- 			return true;
- 
--	if (!expensive_debug_checks(b->c) && KEY_DIRTY(k))
--		return false;
--
- 	for (i = 0; i < KEY_PTRS(k); i++) {
- 		stale = ptr_stale(b->c, k, i);
- 
-+		if (stale && KEY_DIRTY(k)) {
-+			bch_extent_to_text(buf, sizeof(buf), k);
-+			pr_info("stale dirty pointer, stale %u, key: %s",
-+				stale, buf);
-+		}
-+
- 		btree_bug_on(stale > BUCKET_GC_GEN_MAX, b,
- 			     "key too stale: %i, need_gc %u",
- 			     stale, b->c->need_gc);
- 
--		btree_bug_on(stale && KEY_DIRTY(k) && KEY_SIZE(k),
--			     b, "stale dirty pointer");
--
- 		if (stale)
- 			return true;
- 
-diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
-index 15070412a32e..f101bfe8657a 100644
---- a/drivers/md/bcache/request.c
-+++ b/drivers/md/bcache/request.c
-@@ -392,10 +392,11 @@ static bool check_should_bypass(struct cached_dev *dc, struct bio *bio)
- 
- 	/*
- 	 * Flag for bypass if the IO is for read-ahead or background,
--	 * unless the read-ahead request is for metadata (eg, for gfs2).
-+	 * unless the read-ahead request is for metadata
-+	 * (eg, for gfs2 or xfs).
- 	 */
- 	if (bio->bi_opf & (REQ_RAHEAD|REQ_BACKGROUND) &&
--	    !(bio->bi_opf & REQ_PRIO))
-+	    !(bio->bi_opf & (REQ_META|REQ_PRIO)))
- 		goto skip;
- 
- 	if (bio->bi_iter.bi_sector & (c->sb.block_size - 1) ||
-@@ -877,7 +878,7 @@ static int cached_dev_cache_miss(struct btree *b, struct search *s,
- 	}
- 
- 	if (!(bio->bi_opf & REQ_RAHEAD) &&
--	    !(bio->bi_opf & REQ_PRIO) &&
-+	    !(bio->bi_opf & (REQ_META|REQ_PRIO)) &&
- 	    s->iop.c->gc_stats.in_use < CUTOFF_CACHE_READA)
- 		reada = min_t(sector_t, dc->readahead >> 9,
- 			      get_capacity(bio->bi_disk) - bio_end_sector(bio));
-diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
-index 557a8a3270a1..e5daf91310f6 100644
---- a/drivers/md/bcache/sysfs.c
-+++ b/drivers/md/bcache/sysfs.c
-@@ -287,8 +287,12 @@ STORE(__cached_dev)
- 	sysfs_strtoul_clamp(writeback_rate_update_seconds,
- 			    dc->writeback_rate_update_seconds,
- 			    1, WRITEBACK_RATE_UPDATE_SECS_MAX);
--	d_strtoul(writeback_rate_i_term_inverse);
--	d_strtoul_nonzero(writeback_rate_p_term_inverse);
-+	sysfs_strtoul_clamp(writeback_rate_i_term_inverse,
-+			    dc->writeback_rate_i_term_inverse,
-+			    1, UINT_MAX);
-+	sysfs_strtoul_clamp(writeback_rate_p_term_inverse,
-+			    dc->writeback_rate_p_term_inverse,
-+			    1, UINT_MAX);
- 	d_strtoul_nonzero(writeback_rate_minimum);
- 
- 	sysfs_strtoul_clamp(io_error_limit, dc->error_limit, 0, INT_MAX);
-@@ -299,7 +303,9 @@ STORE(__cached_dev)
- 		dc->io_disable = v ? 1 : 0;
- 	}
- 
--	d_strtoi_h(sequential_cutoff);
-+	sysfs_strtoul_clamp(sequential_cutoff,
-+			    dc->sequential_cutoff,
-+			    0, UINT_MAX);
- 	d_strtoi_h(readahead);
- 
- 	if (attr == &sysfs_clear_stats)
-@@ -778,8 +784,17 @@ STORE(__bch_cache_set)
- 		c->error_limit = strtoul_or_return(buf);
- 
- 	/* See count_io_errors() for why 88 */
--	if (attr == &sysfs_io_error_halflife)
--		c->error_decay = strtoul_or_return(buf) / 88;
-+	if (attr == &sysfs_io_error_halflife) {
-+		unsigned long v = 0;
-+		ssize_t ret;
-+
-+		ret = strtoul_safe_clamp(buf, v, 0, UINT_MAX);
-+		if (!ret) {
-+			c->error_decay = v / 88;
-+			return size;
-+		}
-+		return ret;
-+	}
- 
- 	if (attr == &sysfs_io_disable) {
- 		v = strtoul_or_return(buf);
-diff --git a/drivers/md/bcache/sysfs.h b/drivers/md/bcache/sysfs.h
-index 3fe82425859c..0ad2715a884e 100644
---- a/drivers/md/bcache/sysfs.h
-+++ b/drivers/md/bcache/sysfs.h
-@@ -81,9 +81,16 @@ do {									\
- 
- #define sysfs_strtoul_clamp(file, var, min, max)			\
- do {									\
--	if (attr == &sysfs_ ## file)					\
--		return strtoul_safe_clamp(buf, var, min, max)		\
--			?: (ssize_t) size;				\
-+	if (attr == &sysfs_ ## file) {					\
-+		unsigned long v = 0;					\
-+		ssize_t ret;						\
-+		ret = strtoul_safe_clamp(buf, v, min, max);		\
-+		if (!ret) {						\
-+			var = v;					\
-+			return size;					\
-+		}							\
-+		return ret;						\
-+	}								\
- } while (0)
- 
- #define strtoul_or_return(cp)						\
-diff --git a/drivers/md/bcache/writeback.h b/drivers/md/bcache/writeback.h
-index 6a743d3bb338..4e4c6810dc3c 100644
---- a/drivers/md/bcache/writeback.h
-+++ b/drivers/md/bcache/writeback.h
-@@ -71,6 +71,9 @@ static inline bool should_writeback(struct cached_dev *dc, struct bio *bio,
- 	    in_use > bch_cutoff_writeback_sync)
- 		return false;
- 
-+	if (bio_op(bio) == REQ_OP_DISCARD)
-+		return false;
-+
- 	if (dc->partial_stripes_expensive &&
- 	    bcache_dev_stripe_dirty(dc, bio->bi_iter.bi_sector,
- 				    bio_sectors(bio)))
-diff --git a/drivers/md/dm-core.h b/drivers/md/dm-core.h
-index 95c6d86ab5e8..c4ef1fceead6 100644
---- a/drivers/md/dm-core.h
-+++ b/drivers/md/dm-core.h
-@@ -115,6 +115,7 @@ struct mapped_device {
- 	struct srcu_struct io_barrier;
- };
- 
-+void disable_discard(struct mapped_device *md);
- void disable_write_same(struct mapped_device *md);
- void disable_write_zeroes(struct mapped_device *md);
- 
-diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
-index 457200ca6287..f535fd8ac82d 100644
---- a/drivers/md/dm-integrity.c
-+++ b/drivers/md/dm-integrity.c
-@@ -913,7 +913,7 @@ static void copy_from_journal(struct dm_integrity_c *ic, unsigned section, unsig
- static bool ranges_overlap(struct dm_integrity_range *range1, struct dm_integrity_range *range2)
- {
- 	return range1->logical_sector < range2->logical_sector + range2->n_sectors &&
--	       range2->logical_sector + range2->n_sectors > range2->logical_sector;
-+	       range1->logical_sector + range1->n_sectors > range2->logical_sector;
- }
- 
- static bool add_new_range(struct dm_integrity_c *ic, struct dm_integrity_range *new_range, bool check_waiting)
-@@ -959,8 +959,6 @@ static void remove_range_unlocked(struct dm_integrity_c *ic, struct dm_integrity
- 		struct dm_integrity_range *last_range =
- 			list_first_entry(&ic->wait_list, struct dm_integrity_range, wait_entry);
- 		struct task_struct *last_range_task;
--		if (!ranges_overlap(range, last_range))
--			break;
- 		last_range_task = last_range->task;
- 		list_del(&last_range->wait_entry);
- 		if (!add_new_range(ic, last_range, false)) {
-@@ -1368,8 +1366,8 @@ again:
- 						checksums_ptr - checksums, !dio->write ? TAG_CMP : TAG_WRITE);
- 			if (unlikely(r)) {
- 				if (r > 0) {
--					DMERR("Checksum failed at sector 0x%llx",
--					      (unsigned long long)(sector - ((r + ic->tag_size - 1) / ic->tag_size)));
-+					DMERR_LIMIT("Checksum failed at sector 0x%llx",
-+						    (unsigned long long)(sector - ((r + ic->tag_size - 1) / ic->tag_size)));
- 					r = -EILSEQ;
- 					atomic64_inc(&ic->number_of_mismatches);
- 				}
-@@ -1561,8 +1559,8 @@ retry_kmap:
- 
- 					integrity_sector_checksum(ic, logical_sector, mem + bv.bv_offset, checksums_onstack);
- 					if (unlikely(memcmp(checksums_onstack, journal_entry_tag(ic, je), ic->tag_size))) {
--						DMERR("Checksum failed when reading from journal, at sector 0x%llx",
--						      (unsigned long long)logical_sector);
-+						DMERR_LIMIT("Checksum failed when reading from journal, at sector 0x%llx",
-+							    (unsigned long long)logical_sector);
- 					}
- 				}
- #endif
-@@ -3185,7 +3183,7 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned argc, char **argv)
- 			journal_watermark = val;
- 		else if (sscanf(opt_string, "commit_time:%u%c", &val, &dummy) == 1)
- 			sync_msec = val;
--		else if (!memcmp(opt_string, "meta_device:", strlen("meta_device:"))) {
-+		else if (!strncmp(opt_string, "meta_device:", strlen("meta_device:"))) {
- 			if (ic->meta_dev) {
- 				dm_put_device(ti, ic->meta_dev);
- 				ic->meta_dev = NULL;
-@@ -3204,17 +3202,17 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned argc, char **argv)
- 				goto bad;
- 			}
- 			ic->sectors_per_block = val >> SECTOR_SHIFT;
--		} else if (!memcmp(opt_string, "internal_hash:", strlen("internal_hash:"))) {
-+		} else if (!strncmp(opt_string, "internal_hash:", strlen("internal_hash:"))) {
- 			r = get_alg_and_key(opt_string, &ic->internal_hash_alg, &ti->error,
- 					    "Invalid internal_hash argument");
- 			if (r)
- 				goto bad;
--		} else if (!memcmp(opt_string, "journal_crypt:", strlen("journal_crypt:"))) {
-+		} else if (!strncmp(opt_string, "journal_crypt:", strlen("journal_crypt:"))) {
- 			r = get_alg_and_key(opt_string, &ic->journal_crypt_alg, &ti->error,
- 					    "Invalid journal_crypt argument");
- 			if (r)
- 				goto bad;
--		} else if (!memcmp(opt_string, "journal_mac:", strlen("journal_mac:"))) {
-+		} else if (!strncmp(opt_string, "journal_mac:", strlen("journal_mac:"))) {
- 			r = get_alg_and_key(opt_string, &ic->journal_mac_alg,  &ti->error,
- 					    "Invalid journal_mac argument");
- 			if (r)
-diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
-index a20531e5f3b4..582265e043a6 100644
---- a/drivers/md/dm-rq.c
-+++ b/drivers/md/dm-rq.c
-@@ -206,11 +206,14 @@ static void dm_done(struct request *clone, blk_status_t error, bool mapped)
- 	}
- 
- 	if (unlikely(error == BLK_STS_TARGET)) {
--		if (req_op(clone) == REQ_OP_WRITE_SAME &&
--		    !clone->q->limits.max_write_same_sectors)
-+		if (req_op(clone) == REQ_OP_DISCARD &&
-+		    !clone->q->limits.max_discard_sectors)
-+			disable_discard(tio->md);
-+		else if (req_op(clone) == REQ_OP_WRITE_SAME &&
-+			 !clone->q->limits.max_write_same_sectors)
- 			disable_write_same(tio->md);
--		if (req_op(clone) == REQ_OP_WRITE_ZEROES &&
--		    !clone->q->limits.max_write_zeroes_sectors)
-+		else if (req_op(clone) == REQ_OP_WRITE_ZEROES &&
-+			 !clone->q->limits.max_write_zeroes_sectors)
- 			disable_write_zeroes(tio->md);
- 	}
- 
-diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
-index 4b1be754cc41..eb257e4dcb1c 100644
---- a/drivers/md/dm-table.c
-+++ b/drivers/md/dm-table.c
-@@ -1852,6 +1852,36 @@ static bool dm_table_supports_secure_erase(struct dm_table *t)
- 	return true;
- }
- 
-+static int device_requires_stable_pages(struct dm_target *ti,
-+					struct dm_dev *dev, sector_t start,
-+					sector_t len, void *data)
-+{
-+	struct request_queue *q = bdev_get_queue(dev->bdev);
-+
-+	return q && bdi_cap_stable_pages_required(q->backing_dev_info);
-+}
-+
-+/*
-+ * If any underlying device requires stable pages, a table must require
-+ * them as well.  Only targets that support iterate_devices are considered:
-+ * don't want error, zero, etc to require stable pages.
-+ */
-+static bool dm_table_requires_stable_pages(struct dm_table *t)
-+{
-+	struct dm_target *ti;
-+	unsigned i;
-+
-+	for (i = 0; i < dm_table_get_num_targets(t); i++) {
-+		ti = dm_table_get_target(t, i);
-+
-+		if (ti->type->iterate_devices &&
-+		    ti->type->iterate_devices(ti, device_requires_stable_pages, NULL))
-+			return true;
-+	}
-+
-+	return false;
-+}
-+
- void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
- 			       struct queue_limits *limits)
- {
-@@ -1909,6 +1939,15 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
- 
- 	dm_table_verify_integrity(t);
- 
-+	/*
-+	 * Some devices don't use blk_integrity but still want stable pages
-+	 * because they do their own checksumming.
-+	 */
-+	if (dm_table_requires_stable_pages(t))
-+		q->backing_dev_info->capabilities |= BDI_CAP_STABLE_WRITES;
-+	else
-+		q->backing_dev_info->capabilities &= ~BDI_CAP_STABLE_WRITES;
-+
- 	/*
- 	 * Determine whether or not this queue's I/O timings contribute
- 	 * to the entropy pool, Only request-based targets use this.
-diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
-index e83b63608262..254c26eb963a 100644
---- a/drivers/md/dm-thin.c
-+++ b/drivers/md/dm-thin.c
-@@ -3283,6 +3283,13 @@ static int pool_ctr(struct dm_target *ti, unsigned argc, char **argv)
- 	as.argc = argc;
- 	as.argv = argv;
- 
-+	/* make sure metadata and data are different devices */
-+	if (!strcmp(argv[0], argv[1])) {
-+		ti->error = "Error setting metadata or data device";
-+		r = -EINVAL;
-+		goto out_unlock;
-+	}
-+
- 	/*
- 	 * Set default pool features.
- 	 */
-@@ -4167,6 +4174,12 @@ static int thin_ctr(struct dm_target *ti, unsigned argc, char **argv)
- 	tc->sort_bio_list = RB_ROOT;
- 
- 	if (argc == 3) {
-+		if (!strcmp(argv[0], argv[2])) {
-+			ti->error = "Error setting origin device";
-+			r = -EINVAL;
-+			goto bad_origin_dev;
-+		}
-+
- 		r = dm_get_device(ti, argv[2], FMODE_READ, &origin_dev);
- 		if (r) {
- 			ti->error = "Error opening origin device";
-diff --git a/drivers/md/dm.c b/drivers/md/dm.c
-index 515e6af9bed2..4986eea520b6 100644
---- a/drivers/md/dm.c
-+++ b/drivers/md/dm.c
-@@ -963,6 +963,15 @@ static void dec_pending(struct dm_io *io, blk_status_t error)
- 	}
- }
- 
-+void disable_discard(struct mapped_device *md)
-+{
-+	struct queue_limits *limits = dm_get_queue_limits(md);
-+
-+	/* device doesn't really support DISCARD, disable it */
-+	limits->max_discard_sectors = 0;
-+	blk_queue_flag_clear(QUEUE_FLAG_DISCARD, md->queue);
-+}
-+
- void disable_write_same(struct mapped_device *md)
- {
- 	struct queue_limits *limits = dm_get_queue_limits(md);
-@@ -988,11 +997,14 @@ static void clone_endio(struct bio *bio)
- 	dm_endio_fn endio = tio->ti->type->end_io;
- 
- 	if (unlikely(error == BLK_STS_TARGET) && md->type != DM_TYPE_NVME_BIO_BASED) {
--		if (bio_op(bio) == REQ_OP_WRITE_SAME &&
--		    !bio->bi_disk->queue->limits.max_write_same_sectors)
-+		if (bio_op(bio) == REQ_OP_DISCARD &&
-+		    !bio->bi_disk->queue->limits.max_discard_sectors)
-+			disable_discard(md);
-+		else if (bio_op(bio) == REQ_OP_WRITE_SAME &&
-+			 !bio->bi_disk->queue->limits.max_write_same_sectors)
- 			disable_write_same(md);
--		if (bio_op(bio) == REQ_OP_WRITE_ZEROES &&
--		    !bio->bi_disk->queue->limits.max_write_zeroes_sectors)
-+		else if (bio_op(bio) == REQ_OP_WRITE_ZEROES &&
-+			 !bio->bi_disk->queue->limits.max_write_zeroes_sectors)
- 			disable_write_zeroes(md);
- 	}
- 
-@@ -1060,15 +1072,7 @@ int dm_set_target_max_io_len(struct dm_target *ti, sector_t len)
- 		return -EINVAL;
- 	}
- 
--	/*
--	 * BIO based queue uses its own splitting. When multipage bvecs
--	 * is switched on, size of the incoming bio may be too big to
--	 * be handled in some targets, such as crypt.
--	 *
--	 * When these targets are ready for the big bio, we can remove
--	 * the limit.
--	 */
--	ti->max_io_len = min_t(uint32_t, len, BIO_MAX_PAGES * PAGE_SIZE);
-+	ti->max_io_len = (uint32_t) len;
- 
- 	return 0;
- }
-diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
-index abb5d382f64d..3b6880dd648d 100644
---- a/drivers/md/raid10.c
-+++ b/drivers/md/raid10.c
-@@ -3939,6 +3939,8 @@ static int raid10_run(struct mddev *mddev)
- 		set_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
- 		mddev->sync_thread = md_register_thread(md_do_sync, mddev,
- 							"reshape");
-+		if (!mddev->sync_thread)
-+			goto out_free_conf;
- 	}
- 
- 	return 0;
-@@ -4670,7 +4672,6 @@ read_more:
- 	atomic_inc(&r10_bio->remaining);
- 	read_bio->bi_next = NULL;
- 	generic_make_request(read_bio);
--	sector_nr += nr_sectors;
- 	sectors_done += nr_sectors;
- 	if (sector_nr <= last)
- 		goto read_more;
-diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
-index cecea901ab8c..5b68f2d0da60 100644
---- a/drivers/md/raid5.c
-+++ b/drivers/md/raid5.c
-@@ -7402,6 +7402,8 @@ static int raid5_run(struct mddev *mddev)
- 		set_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
- 		mddev->sync_thread = md_register_thread(md_do_sync, mddev,
- 							"reshape");
-+		if (!mddev->sync_thread)
-+			goto abort;
- 	}
- 
- 	/* Ok, everything is just fine now */
-diff --git a/drivers/media/dvb-frontends/lgdt330x.c b/drivers/media/dvb-frontends/lgdt330x.c
-index 96807e134886..8abb1a510a81 100644
---- a/drivers/media/dvb-frontends/lgdt330x.c
-+++ b/drivers/media/dvb-frontends/lgdt330x.c
-@@ -783,7 +783,7 @@ static int lgdt3303_read_status(struct dvb_frontend *fe,
- 
- 		if ((buf[0] & 0x02) == 0x00)
- 			*status |= FE_HAS_SYNC;
--		if ((buf[0] & 0xfd) == 0x01)
-+		if ((buf[0] & 0x01) == 0x01)
- 			*status |= FE_HAS_VITERBI | FE_HAS_LOCK;
- 		break;
- 	default:
-diff --git a/drivers/media/i2c/cx25840/cx25840-core.c b/drivers/media/i2c/cx25840/cx25840-core.c
-index b168bf3635b6..8b0b8b5aa531 100644
---- a/drivers/media/i2c/cx25840/cx25840-core.c
-+++ b/drivers/media/i2c/cx25840/cx25840-core.c
-@@ -5216,8 +5216,9 @@ static int cx25840_probe(struct i2c_client *client,
- 	 * those extra inputs. So, let's add it only when needed.
- 	 */
- 	state->pads[CX25840_PAD_INPUT].flags = MEDIA_PAD_FL_SINK;
-+	state->pads[CX25840_PAD_INPUT].sig_type = PAD_SIGNAL_ANALOG;
- 	state->pads[CX25840_PAD_VID_OUT].flags = MEDIA_PAD_FL_SOURCE;
--	state->pads[CX25840_PAD_VBI_OUT].flags = MEDIA_PAD_FL_SOURCE;
-+	state->pads[CX25840_PAD_VID_OUT].sig_type = PAD_SIGNAL_DV;
- 	sd->entity.function = MEDIA_ENT_F_ATV_DECODER;
- 
- 	ret = media_entity_pads_init(&sd->entity, ARRAY_SIZE(state->pads),
-diff --git a/drivers/media/i2c/cx25840/cx25840-core.h b/drivers/media/i2c/cx25840/cx25840-core.h
-index c323b1af1f83..9efefa15d090 100644
---- a/drivers/media/i2c/cx25840/cx25840-core.h
-+++ b/drivers/media/i2c/cx25840/cx25840-core.h
-@@ -40,7 +40,6 @@ enum cx25840_model {
- enum cx25840_media_pads {
- 	CX25840_PAD_INPUT,
- 	CX25840_PAD_VID_OUT,
--	CX25840_PAD_VBI_OUT,
- 
- 	CX25840_NUM_PADS
- };
-diff --git a/drivers/media/i2c/mt9m111.c b/drivers/media/i2c/mt9m111.c
-index d639b9bcf64a..7a759b4b88cf 100644
---- a/drivers/media/i2c/mt9m111.c
-+++ b/drivers/media/i2c/mt9m111.c
-@@ -1273,6 +1273,8 @@ static int mt9m111_probe(struct i2c_client *client,
- 	mt9m111->rect.top	= MT9M111_MIN_DARK_ROWS;
- 	mt9m111->rect.width	= MT9M111_MAX_WIDTH;
- 	mt9m111->rect.height	= MT9M111_MAX_HEIGHT;
-+	mt9m111->width		= mt9m111->rect.width;
-+	mt9m111->height		= mt9m111->rect.height;
- 	mt9m111->fmt		= &mt9m111_colour_fmts[0];
- 	mt9m111->lastpage	= -1;
- 	mutex_init(&mt9m111->power_lock);
-diff --git a/drivers/media/i2c/ov5640.c b/drivers/media/i2c/ov5640.c
-index bef3f3aae0ed..9f8fc1ad9b1a 100644
---- a/drivers/media/i2c/ov5640.c
-+++ b/drivers/media/i2c/ov5640.c
-@@ -1893,7 +1893,7 @@ static void ov5640_reset(struct ov5640_dev *sensor)
- 	usleep_range(1000, 2000);
- 
- 	gpiod_set_value_cansleep(sensor->reset_gpio, 0);
--	usleep_range(5000, 10000);
-+	usleep_range(20000, 25000);
- }
- 
- static int ov5640_set_power_on(struct ov5640_dev *sensor)
-diff --git a/drivers/media/i2c/ov7740.c b/drivers/media/i2c/ov7740.c
-index 177688afd9a6..8835b831cdc0 100644
---- a/drivers/media/i2c/ov7740.c
-+++ b/drivers/media/i2c/ov7740.c
-@@ -1101,6 +1101,9 @@ static int ov7740_probe(struct i2c_client *client,
- 	if (ret)
- 		return ret;
- 
-+	pm_runtime_set_active(&client->dev);
-+	pm_runtime_enable(&client->dev);
-+
- 	ret = ov7740_detect(ov7740);
- 	if (ret)
- 		goto error_detect;
-@@ -1123,8 +1126,6 @@ static int ov7740_probe(struct i2c_client *client,
- 	if (ret)
- 		goto error_async_register;
- 
--	pm_runtime_set_active(&client->dev);
--	pm_runtime_enable(&client->dev);
- 	pm_runtime_idle(&client->dev);
- 
- 	return 0;
-@@ -1134,6 +1135,8 @@ error_async_register:
- error_init_controls:
- 	ov7740_free_controls(ov7740);
- error_detect:
-+	pm_runtime_disable(&client->dev);
-+	pm_runtime_set_suspended(&client->dev);
- 	ov7740_set_power(ov7740, 0);
- 	media_entity_cleanup(&ov7740->subdev.entity);
- 
-diff --git a/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c b/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c
-index 2a5d5002c27e..f761e4d8bf2a 100644
---- a/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c
-+++ b/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c
-@@ -702,7 +702,7 @@ end:
- 	v4l2_m2m_buf_queue(ctx->fh.m2m_ctx, to_vb2_v4l2_buffer(vb));
- }
- 
--static void *mtk_jpeg_buf_remove(struct mtk_jpeg_ctx *ctx,
-+static struct vb2_v4l2_buffer *mtk_jpeg_buf_remove(struct mtk_jpeg_ctx *ctx,
- 				 enum v4l2_buf_type type)
- {
- 	if (V4L2_TYPE_IS_OUTPUT(type))
-@@ -714,7 +714,7 @@ static void *mtk_jpeg_buf_remove(struct mtk_jpeg_ctx *ctx,
- static int mtk_jpeg_start_streaming(struct vb2_queue *q, unsigned int count)
- {
- 	struct mtk_jpeg_ctx *ctx = vb2_get_drv_priv(q);
--	struct vb2_buffer *vb;
-+	struct vb2_v4l2_buffer *vb;
- 	int ret = 0;
- 
- 	ret = pm_runtime_get_sync(ctx->jpeg->dev);
-@@ -724,14 +724,14 @@ static int mtk_jpeg_start_streaming(struct vb2_queue *q, unsigned int count)
- 	return 0;
- err:
- 	while ((vb = mtk_jpeg_buf_remove(ctx, q->type)))
--		v4l2_m2m_buf_done(to_vb2_v4l2_buffer(vb), VB2_BUF_STATE_QUEUED);
-+		v4l2_m2m_buf_done(vb, VB2_BUF_STATE_QUEUED);
- 	return ret;
- }
- 
- static void mtk_jpeg_stop_streaming(struct vb2_queue *q)
- {
- 	struct mtk_jpeg_ctx *ctx = vb2_get_drv_priv(q);
--	struct vb2_buffer *vb;
-+	struct vb2_v4l2_buffer *vb;
- 
- 	/*
- 	 * STREAMOFF is an acknowledgment for source change event.
-@@ -743,7 +743,7 @@ static void mtk_jpeg_stop_streaming(struct vb2_queue *q)
- 		struct mtk_jpeg_src_buf *src_buf;
- 
- 		vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
--		src_buf = mtk_jpeg_vb2_to_srcbuf(vb);
-+		src_buf = mtk_jpeg_vb2_to_srcbuf(&vb->vb2_buf);
- 		mtk_jpeg_set_queue_data(ctx, &src_buf->dec_param);
- 		ctx->state = MTK_JPEG_RUNNING;
- 	} else if (V4L2_TYPE_IS_OUTPUT(q->type)) {
-@@ -751,7 +751,7 @@ static void mtk_jpeg_stop_streaming(struct vb2_queue *q)
- 	}
- 
- 	while ((vb = mtk_jpeg_buf_remove(ctx, q->type)))
--		v4l2_m2m_buf_done(to_vb2_v4l2_buffer(vb), VB2_BUF_STATE_ERROR);
-+		v4l2_m2m_buf_done(vb, VB2_BUF_STATE_ERROR);
- 
- 	pm_runtime_put_sync(ctx->jpeg->dev);
- }
-@@ -807,7 +807,7 @@ static void mtk_jpeg_device_run(void *priv)
- {
- 	struct mtk_jpeg_ctx *ctx = priv;
- 	struct mtk_jpeg_dev *jpeg = ctx->jpeg;
--	struct vb2_buffer *src_buf, *dst_buf;
-+	struct vb2_v4l2_buffer *src_buf, *dst_buf;
- 	enum vb2_buffer_state buf_state = VB2_BUF_STATE_ERROR;
- 	unsigned long flags;
- 	struct mtk_jpeg_src_buf *jpeg_src_buf;
-@@ -817,11 +817,11 @@ static void mtk_jpeg_device_run(void *priv)
- 
- 	src_buf = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
- 	dst_buf = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
--	jpeg_src_buf = mtk_jpeg_vb2_to_srcbuf(src_buf);
-+	jpeg_src_buf = mtk_jpeg_vb2_to_srcbuf(&src_buf->vb2_buf);
- 
- 	if (jpeg_src_buf->flags & MTK_JPEG_BUF_FLAGS_LAST_FRAME) {
--		for (i = 0; i < dst_buf->num_planes; i++)
--			vb2_set_plane_payload(dst_buf, i, 0);
-+		for (i = 0; i < dst_buf->vb2_buf.num_planes; i++)
-+			vb2_set_plane_payload(&dst_buf->vb2_buf, i, 0);
- 		buf_state = VB2_BUF_STATE_DONE;
- 		goto dec_end;
- 	}
-@@ -833,8 +833,8 @@ static void mtk_jpeg_device_run(void *priv)
- 		return;
- 	}
- 
--	mtk_jpeg_set_dec_src(ctx, src_buf, &bs);
--	if (mtk_jpeg_set_dec_dst(ctx, &jpeg_src_buf->dec_param, dst_buf, &fb))
-+	mtk_jpeg_set_dec_src(ctx, &src_buf->vb2_buf, &bs);
-+	if (mtk_jpeg_set_dec_dst(ctx, &jpeg_src_buf->dec_param, &dst_buf->vb2_buf, &fb))
- 		goto dec_end;
- 
- 	spin_lock_irqsave(&jpeg->hw_lock, flags);
-@@ -849,8 +849,8 @@ static void mtk_jpeg_device_run(void *priv)
- dec_end:
- 	v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
- 	v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
--	v4l2_m2m_buf_done(to_vb2_v4l2_buffer(src_buf), buf_state);
--	v4l2_m2m_buf_done(to_vb2_v4l2_buffer(dst_buf), buf_state);
-+	v4l2_m2m_buf_done(src_buf, buf_state);
-+	v4l2_m2m_buf_done(dst_buf, buf_state);
- 	v4l2_m2m_job_finish(jpeg->m2m_dev, ctx->fh.m2m_ctx);
- }
- 
-@@ -921,7 +921,7 @@ static irqreturn_t mtk_jpeg_dec_irq(int irq, void *priv)
- {
- 	struct mtk_jpeg_dev *jpeg = priv;
- 	struct mtk_jpeg_ctx *ctx;
--	struct vb2_buffer *src_buf, *dst_buf;
-+	struct vb2_v4l2_buffer *src_buf, *dst_buf;
- 	struct mtk_jpeg_src_buf *jpeg_src_buf;
- 	enum vb2_buffer_state buf_state = VB2_BUF_STATE_ERROR;
- 	u32	dec_irq_ret;
-@@ -938,7 +938,7 @@ static irqreturn_t mtk_jpeg_dec_irq(int irq, void *priv)
- 
- 	src_buf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
- 	dst_buf = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
--	jpeg_src_buf = mtk_jpeg_vb2_to_srcbuf(src_buf);
-+	jpeg_src_buf = mtk_jpeg_vb2_to_srcbuf(&src_buf->vb2_buf);
- 
- 	if (dec_irq_ret >= MTK_JPEG_DEC_RESULT_UNDERFLOW)
- 		mtk_jpeg_dec_reset(jpeg->dec_reg_base);
-@@ -948,15 +948,15 @@ static irqreturn_t mtk_jpeg_dec_irq(int irq, void *priv)
- 		goto dec_end;
- 	}
- 
--	for (i = 0; i < dst_buf->num_planes; i++)
--		vb2_set_plane_payload(dst_buf, i,
-+	for (i = 0; i < dst_buf->vb2_buf.num_planes; i++)
-+		vb2_set_plane_payload(&dst_buf->vb2_buf, i,
- 				      jpeg_src_buf->dec_param.comp_size[i]);
- 
- 	buf_state = VB2_BUF_STATE_DONE;
- 
- dec_end:
--	v4l2_m2m_buf_done(to_vb2_v4l2_buffer(src_buf), buf_state);
--	v4l2_m2m_buf_done(to_vb2_v4l2_buffer(dst_buf), buf_state);
-+	v4l2_m2m_buf_done(src_buf, buf_state);
-+	v4l2_m2m_buf_done(dst_buf, buf_state);
- 	v4l2_m2m_job_finish(jpeg->m2m_dev, ctx->fh.m2m_ctx);
- 	return IRQ_HANDLED;
- }
-diff --git a/drivers/media/platform/mx2_emmaprp.c b/drivers/media/platform/mx2_emmaprp.c
-index 27b078cf98e3..f60f499c596b 100644
---- a/drivers/media/platform/mx2_emmaprp.c
-+++ b/drivers/media/platform/mx2_emmaprp.c
-@@ -274,7 +274,7 @@ static void emmaprp_device_run(void *priv)
- {
- 	struct emmaprp_ctx *ctx = priv;
- 	struct emmaprp_q_data *s_q_data, *d_q_data;
--	struct vb2_buffer *src_buf, *dst_buf;
-+	struct vb2_v4l2_buffer *src_buf, *dst_buf;
- 	struct emmaprp_dev *pcdev = ctx->dev;
- 	unsigned int s_width, s_height;
- 	unsigned int d_width, d_height;
-@@ -294,8 +294,8 @@ static void emmaprp_device_run(void *priv)
- 	d_height = d_q_data->height;
- 	d_size = d_width * d_height;
- 
--	p_in = vb2_dma_contig_plane_dma_addr(src_buf, 0);
--	p_out = vb2_dma_contig_plane_dma_addr(dst_buf, 0);
-+	p_in = vb2_dma_contig_plane_dma_addr(&src_buf->vb2_buf, 0);
-+	p_out = vb2_dma_contig_plane_dma_addr(&dst_buf->vb2_buf, 0);
- 	if (!p_in || !p_out) {
- 		v4l2_err(&pcdev->v4l2_dev,
- 			 "Acquiring kernel pointers to buffers failed\n");
-diff --git a/drivers/media/platform/rcar-vin/rcar-core.c b/drivers/media/platform/rcar-vin/rcar-core.c
-index f0719ce24b97..aef8d8dab6ab 100644
---- a/drivers/media/platform/rcar-vin/rcar-core.c
-+++ b/drivers/media/platform/rcar-vin/rcar-core.c
-@@ -131,9 +131,13 @@ static int rvin_group_link_notify(struct media_link *link, u32 flags,
- 	    !is_media_entity_v4l2_video_device(link->sink->entity))
- 		return 0;
- 
--	/* If any entity is in use don't allow link changes. */
-+	/*
-+	 * Don't allow link changes if any entity in the graph is
-+	 * streaming, modifying the CHSEL register fields can disrupt
-+	 * running streams.
-+	 */
- 	media_device_for_each_entity(entity, &group->mdev)
--		if (entity->use_count)
-+		if (entity->stream_count)
- 			return -EBUSY;
- 
- 	mutex_lock(&group->lock);
-diff --git a/drivers/media/platform/rockchip/rga/rga.c b/drivers/media/platform/rockchip/rga/rga.c
-index 5c653287185f..b096227a9722 100644
---- a/drivers/media/platform/rockchip/rga/rga.c
-+++ b/drivers/media/platform/rockchip/rga/rga.c
-@@ -43,7 +43,7 @@ static void device_run(void *prv)
- {
- 	struct rga_ctx *ctx = prv;
- 	struct rockchip_rga *rga = ctx->rga;
--	struct vb2_buffer *src, *dst;
-+	struct vb2_v4l2_buffer *src, *dst;
- 	unsigned long flags;
- 
- 	spin_lock_irqsave(&rga->ctrl_lock, flags);
-@@ -53,8 +53,8 @@ static void device_run(void *prv)
- 	src = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
- 	dst = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
- 
--	rga_buf_map(src);
--	rga_buf_map(dst);
-+	rga_buf_map(&src->vb2_buf);
-+	rga_buf_map(&dst->vb2_buf);
- 
- 	rga_hw_start(rga);
- 
-diff --git a/drivers/media/platform/s5p-g2d/g2d.c b/drivers/media/platform/s5p-g2d/g2d.c
-index 57ab1d1085d1..971c47165010 100644
---- a/drivers/media/platform/s5p-g2d/g2d.c
-+++ b/drivers/media/platform/s5p-g2d/g2d.c
-@@ -513,7 +513,7 @@ static void device_run(void *prv)
- {
- 	struct g2d_ctx *ctx = prv;
- 	struct g2d_dev *dev = ctx->dev;
--	struct vb2_buffer *src, *dst;
-+	struct vb2_v4l2_buffer *src, *dst;
- 	unsigned long flags;
- 	u32 cmd = 0;
- 
-@@ -528,10 +528,10 @@ static void device_run(void *prv)
- 	spin_lock_irqsave(&dev->ctrl_lock, flags);
- 
- 	g2d_set_src_size(dev, &ctx->in);
--	g2d_set_src_addr(dev, vb2_dma_contig_plane_dma_addr(src, 0));
-+	g2d_set_src_addr(dev, vb2_dma_contig_plane_dma_addr(&src->vb2_buf, 0));
- 
- 	g2d_set_dst_size(dev, &ctx->out);
--	g2d_set_dst_addr(dev, vb2_dma_contig_plane_dma_addr(dst, 0));
-+	g2d_set_dst_addr(dev, vb2_dma_contig_plane_dma_addr(&dst->vb2_buf, 0));
- 
- 	g2d_set_rop4(dev, ctx->rop);
- 	g2d_set_flip(dev, ctx->flip);
-diff --git a/drivers/media/platform/s5p-jpeg/jpeg-core.c b/drivers/media/platform/s5p-jpeg/jpeg-core.c
-index 3f9000b70385..370942b67d86 100644
---- a/drivers/media/platform/s5p-jpeg/jpeg-core.c
-+++ b/drivers/media/platform/s5p-jpeg/jpeg-core.c
-@@ -793,14 +793,14 @@ static void skip(struct s5p_jpeg_buffer *buf, long len);
- static void exynos4_jpeg_parse_decode_h_tbl(struct s5p_jpeg_ctx *ctx)
- {
- 	struct s5p_jpeg *jpeg = ctx->jpeg;
--	struct vb2_buffer *vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
-+	struct vb2_v4l2_buffer *vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
- 	struct s5p_jpeg_buffer jpeg_buffer;
- 	unsigned int word;
- 	int c, x, components;
- 
- 	jpeg_buffer.size = 2; /* Ls */
- 	jpeg_buffer.data =
--		(unsigned long)vb2_plane_vaddr(vb, 0) + ctx->out_q.sos + 2;
-+		(unsigned long)vb2_plane_vaddr(&vb->vb2_buf, 0) + ctx->out_q.sos + 2;
- 	jpeg_buffer.curr = 0;
- 
- 	word = 0;
-@@ -830,14 +830,14 @@ static void exynos4_jpeg_parse_decode_h_tbl(struct s5p_jpeg_ctx *ctx)
- static void exynos4_jpeg_parse_huff_tbl(struct s5p_jpeg_ctx *ctx)
- {
- 	struct s5p_jpeg *jpeg = ctx->jpeg;
--	struct vb2_buffer *vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
-+	struct vb2_v4l2_buffer *vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
- 	struct s5p_jpeg_buffer jpeg_buffer;
- 	unsigned int word;
- 	int c, i, n, j;
- 
- 	for (j = 0; j < ctx->out_q.dht.n; ++j) {
- 		jpeg_buffer.size = ctx->out_q.dht.len[j];
--		jpeg_buffer.data = (unsigned long)vb2_plane_vaddr(vb, 0) +
-+		jpeg_buffer.data = (unsigned long)vb2_plane_vaddr(&vb->vb2_buf, 0) +
- 				   ctx->out_q.dht.marker[j];
- 		jpeg_buffer.curr = 0;
- 
-@@ -889,13 +889,13 @@ static void exynos4_jpeg_parse_huff_tbl(struct s5p_jpeg_ctx *ctx)
- static void exynos4_jpeg_parse_decode_q_tbl(struct s5p_jpeg_ctx *ctx)
- {
- 	struct s5p_jpeg *jpeg = ctx->jpeg;
--	struct vb2_buffer *vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
-+	struct vb2_v4l2_buffer *vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
- 	struct s5p_jpeg_buffer jpeg_buffer;
- 	int c, x, components;
- 
- 	jpeg_buffer.size = ctx->out_q.sof_len;
- 	jpeg_buffer.data =
--		(unsigned long)vb2_plane_vaddr(vb, 0) + ctx->out_q.sof;
-+		(unsigned long)vb2_plane_vaddr(&vb->vb2_buf, 0) + ctx->out_q.sof;
- 	jpeg_buffer.curr = 0;
- 
- 	skip(&jpeg_buffer, 5); /* P, Y, X */
-@@ -920,14 +920,14 @@ static void exynos4_jpeg_parse_decode_q_tbl(struct s5p_jpeg_ctx *ctx)
- static void exynos4_jpeg_parse_q_tbl(struct s5p_jpeg_ctx *ctx)
- {
- 	struct s5p_jpeg *jpeg = ctx->jpeg;
--	struct vb2_buffer *vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
-+	struct vb2_v4l2_buffer *vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
- 	struct s5p_jpeg_buffer jpeg_buffer;
- 	unsigned int word;
- 	int c, i, j;
- 
- 	for (j = 0; j < ctx->out_q.dqt.n; ++j) {
- 		jpeg_buffer.size = ctx->out_q.dqt.len[j];
--		jpeg_buffer.data = (unsigned long)vb2_plane_vaddr(vb, 0) +
-+		jpeg_buffer.data = (unsigned long)vb2_plane_vaddr(&vb->vb2_buf, 0) +
- 				   ctx->out_q.dqt.marker[j];
- 		jpeg_buffer.curr = 0;
- 
-@@ -1293,13 +1293,16 @@ static int s5p_jpeg_querycap(struct file *file, void *priv,
- 	return 0;
- }
- 
--static int enum_fmt(struct s5p_jpeg_fmt *sjpeg_formats, int n,
-+static int enum_fmt(struct s5p_jpeg_ctx *ctx,
-+		    struct s5p_jpeg_fmt *sjpeg_formats, int n,
- 		    struct v4l2_fmtdesc *f, u32 type)
- {
- 	int i, num = 0;
-+	unsigned int fmt_ver_flag = ctx->jpeg->variant->fmt_ver_flag;
- 
- 	for (i = 0; i < n; ++i) {
--		if (sjpeg_formats[i].flags & type) {
-+		if (sjpeg_formats[i].flags & type &&
-+		    sjpeg_formats[i].flags & fmt_ver_flag) {
- 			/* index-th format of type type found ? */
- 			if (num == f->index)
- 				break;
-@@ -1326,11 +1329,11 @@ static int s5p_jpeg_enum_fmt_vid_cap(struct file *file, void *priv,
- 	struct s5p_jpeg_ctx *ctx = fh_to_ctx(priv);
- 
- 	if (ctx->mode == S5P_JPEG_ENCODE)
--		return enum_fmt(sjpeg_formats, SJPEG_NUM_FORMATS, f,
-+		return enum_fmt(ctx, sjpeg_formats, SJPEG_NUM_FORMATS, f,
- 				SJPEG_FMT_FLAG_ENC_CAPTURE);
- 
--	return enum_fmt(sjpeg_formats, SJPEG_NUM_FORMATS, f,
--					SJPEG_FMT_FLAG_DEC_CAPTURE);
-+	return enum_fmt(ctx, sjpeg_formats, SJPEG_NUM_FORMATS, f,
-+			SJPEG_FMT_FLAG_DEC_CAPTURE);
- }
- 
- static int s5p_jpeg_enum_fmt_vid_out(struct file *file, void *priv,
-@@ -1339,11 +1342,11 @@ static int s5p_jpeg_enum_fmt_vid_out(struct file *file, void *priv,
- 	struct s5p_jpeg_ctx *ctx = fh_to_ctx(priv);
- 
- 	if (ctx->mode == S5P_JPEG_ENCODE)
--		return enum_fmt(sjpeg_formats, SJPEG_NUM_FORMATS, f,
-+		return enum_fmt(ctx, sjpeg_formats, SJPEG_NUM_FORMATS, f,
- 				SJPEG_FMT_FLAG_ENC_OUTPUT);
- 
--	return enum_fmt(sjpeg_formats, SJPEG_NUM_FORMATS, f,
--					SJPEG_FMT_FLAG_DEC_OUTPUT);
-+	return enum_fmt(ctx, sjpeg_formats, SJPEG_NUM_FORMATS, f,
-+			SJPEG_FMT_FLAG_DEC_OUTPUT);
- }
- 
- static struct s5p_jpeg_q_data *get_q_data(struct s5p_jpeg_ctx *ctx,
-@@ -2072,15 +2075,15 @@ static void s5p_jpeg_device_run(void *priv)
- {
- 	struct s5p_jpeg_ctx *ctx = priv;
- 	struct s5p_jpeg *jpeg = ctx->jpeg;
--	struct vb2_buffer *src_buf, *dst_buf;
-+	struct vb2_v4l2_buffer *src_buf, *dst_buf;
- 	unsigned long src_addr, dst_addr, flags;
- 
- 	spin_lock_irqsave(&ctx->jpeg->slock, flags);
- 
- 	src_buf = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
- 	dst_buf = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
--	src_addr = vb2_dma_contig_plane_dma_addr(src_buf, 0);
--	dst_addr = vb2_dma_contig_plane_dma_addr(dst_buf, 0);
-+	src_addr = vb2_dma_contig_plane_dma_addr(&src_buf->vb2_buf, 0);
-+	dst_addr = vb2_dma_contig_plane_dma_addr(&dst_buf->vb2_buf, 0);
- 
- 	s5p_jpeg_reset(jpeg->regs);
- 	s5p_jpeg_poweron(jpeg->regs);
-@@ -2153,7 +2156,7 @@ static void exynos4_jpeg_set_img_addr(struct s5p_jpeg_ctx *ctx)
- {
- 	struct s5p_jpeg *jpeg = ctx->jpeg;
- 	struct s5p_jpeg_fmt *fmt;
--	struct vb2_buffer *vb;
-+	struct vb2_v4l2_buffer *vb;
- 	struct s5p_jpeg_addr jpeg_addr = {};
- 	u32 pix_size, padding_bytes = 0;
- 
-@@ -2172,7 +2175,7 @@ static void exynos4_jpeg_set_img_addr(struct s5p_jpeg_ctx *ctx)
- 		vb = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
- 	}
- 
--	jpeg_addr.y = vb2_dma_contig_plane_dma_addr(vb, 0);
-+	jpeg_addr.y = vb2_dma_contig_plane_dma_addr(&vb->vb2_buf, 0);
- 
- 	if (fmt->colplanes == 2) {
- 		jpeg_addr.cb = jpeg_addr.y + pix_size - padding_bytes;
-@@ -2190,7 +2193,7 @@ static void exynos4_jpeg_set_img_addr(struct s5p_jpeg_ctx *ctx)
- static void exynos4_jpeg_set_jpeg_addr(struct s5p_jpeg_ctx *ctx)
- {
- 	struct s5p_jpeg *jpeg = ctx->jpeg;
--	struct vb2_buffer *vb;
-+	struct vb2_v4l2_buffer *vb;
- 	unsigned int jpeg_addr = 0;
- 
- 	if (ctx->mode == S5P_JPEG_ENCODE)
-@@ -2198,7 +2201,7 @@ static void exynos4_jpeg_set_jpeg_addr(struct s5p_jpeg_ctx *ctx)
- 	else
- 		vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
- 
--	jpeg_addr = vb2_dma_contig_plane_dma_addr(vb, 0);
-+	jpeg_addr = vb2_dma_contig_plane_dma_addr(&vb->vb2_buf, 0);
- 	if (jpeg->variant->version == SJPEG_EXYNOS5433 &&
- 	    ctx->mode == S5P_JPEG_DECODE)
- 		jpeg_addr += ctx->out_q.sos;
-@@ -2314,7 +2317,7 @@ static void exynos3250_jpeg_set_img_addr(struct s5p_jpeg_ctx *ctx)
- {
- 	struct s5p_jpeg *jpeg = ctx->jpeg;
- 	struct s5p_jpeg_fmt *fmt;
--	struct vb2_buffer *vb;
-+	struct vb2_v4l2_buffer *vb;
- 	struct s5p_jpeg_addr jpeg_addr = {};
- 	u32 pix_size;
- 
-@@ -2328,7 +2331,7 @@ static void exynos3250_jpeg_set_img_addr(struct s5p_jpeg_ctx *ctx)
- 		fmt = ctx->cap_q.fmt;
- 	}
- 
--	jpeg_addr.y = vb2_dma_contig_plane_dma_addr(vb, 0);
-+	jpeg_addr.y = vb2_dma_contig_plane_dma_addr(&vb->vb2_buf, 0);
- 
- 	if (fmt->colplanes == 2) {
- 		jpeg_addr.cb = jpeg_addr.y + pix_size;
-@@ -2346,7 +2349,7 @@ static void exynos3250_jpeg_set_img_addr(struct s5p_jpeg_ctx *ctx)
- static void exynos3250_jpeg_set_jpeg_addr(struct s5p_jpeg_ctx *ctx)
- {
- 	struct s5p_jpeg *jpeg = ctx->jpeg;
--	struct vb2_buffer *vb;
-+	struct vb2_v4l2_buffer *vb;
- 	unsigned int jpeg_addr = 0;
- 
- 	if (ctx->mode == S5P_JPEG_ENCODE)
-@@ -2354,7 +2357,7 @@ static void exynos3250_jpeg_set_jpeg_addr(struct s5p_jpeg_ctx *ctx)
- 	else
- 		vb = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
- 
--	jpeg_addr = vb2_dma_contig_plane_dma_addr(vb, 0);
-+	jpeg_addr = vb2_dma_contig_plane_dma_addr(&vb->vb2_buf, 0);
- 	exynos3250_jpeg_jpgadr(jpeg->regs, jpeg_addr);
- }
- 
-diff --git a/drivers/media/platform/sh_veu.c b/drivers/media/platform/sh_veu.c
-index 09ae64a0004c..d277cc674349 100644
---- a/drivers/media/platform/sh_veu.c
-+++ b/drivers/media/platform/sh_veu.c
-@@ -273,13 +273,13 @@ static void sh_veu_process(struct sh_veu_dev *veu,
- static void sh_veu_device_run(void *priv)
- {
- 	struct sh_veu_dev *veu = priv;
--	struct vb2_buffer *src_buf, *dst_buf;
-+	struct vb2_v4l2_buffer *src_buf, *dst_buf;
- 
- 	src_buf = v4l2_m2m_next_src_buf(veu->m2m_ctx);
- 	dst_buf = v4l2_m2m_next_dst_buf(veu->m2m_ctx);
- 
- 	if (src_buf && dst_buf)
--		sh_veu_process(veu, src_buf, dst_buf);
-+		sh_veu_process(veu, &src_buf->vb2_buf, &dst_buf->vb2_buf);
- }
- 
- 		/* ========== video ioctls ========== */
-diff --git a/drivers/media/platform/sunxi/sun6i-csi/sun6i_csi.c b/drivers/media/platform/sunxi/sun6i-csi/sun6i_csi.c
-index 6950585edb5a..d16f54cdc3b0 100644
---- a/drivers/media/platform/sunxi/sun6i-csi/sun6i_csi.c
-+++ b/drivers/media/platform/sunxi/sun6i-csi/sun6i_csi.c
-@@ -793,7 +793,7 @@ static const struct regmap_config sun6i_csi_regmap_config = {
- 	.reg_bits       = 32,
- 	.reg_stride     = 4,
- 	.val_bits       = 32,
--	.max_register	= 0x1000,
-+	.max_register	= 0x9c,
- };
- 
- static int sun6i_csi_resource_request(struct sun6i_csi_dev *sdev,
-diff --git a/drivers/media/platform/vimc/Makefile b/drivers/media/platform/vimc/Makefile
-index 4b2e3de7856e..c4fc8e7d365a 100644
---- a/drivers/media/platform/vimc/Makefile
-+++ b/drivers/media/platform/vimc/Makefile
-@@ -5,6 +5,7 @@ vimc_common-objs := vimc-common.o
- vimc_debayer-objs := vimc-debayer.o
- vimc_scaler-objs := vimc-scaler.o
- vimc_sensor-objs := vimc-sensor.o
-+vimc_streamer-objs := vimc-streamer.o
- 
- obj-$(CONFIG_VIDEO_VIMC) += vimc.o vimc_capture.o vimc_common.o vimc-debayer.o \
--				vimc_scaler.o vimc_sensor.o
-+			    vimc_scaler.o vimc_sensor.o vimc_streamer.o
-diff --git a/drivers/media/platform/vimc/vimc-capture.c b/drivers/media/platform/vimc/vimc-capture.c
-index 3f7e9ed56633..80d7515ec420 100644
---- a/drivers/media/platform/vimc/vimc-capture.c
-+++ b/drivers/media/platform/vimc/vimc-capture.c
-@@ -24,6 +24,7 @@
- #include <media/videobuf2-vmalloc.h>
- 
- #include "vimc-common.h"
-+#include "vimc-streamer.h"
- 
- #define VIMC_CAP_DRV_NAME "vimc-capture"
- 
-@@ -44,7 +45,7 @@ struct vimc_cap_device {
- 	spinlock_t qlock;
- 	struct mutex lock;
- 	u32 sequence;
--	struct media_pipeline pipe;
-+	struct vimc_stream stream;
- };
- 
- static const struct v4l2_pix_format fmt_default = {
-@@ -248,14 +249,13 @@ static int vimc_cap_start_streaming(struct vb2_queue *vq, unsigned int count)
- 	vcap->sequence = 0;
- 
- 	/* Start the media pipeline */
--	ret = media_pipeline_start(entity, &vcap->pipe);
-+	ret = media_pipeline_start(entity, &vcap->stream.pipe);
- 	if (ret) {
- 		vimc_cap_return_all_buffers(vcap, VB2_BUF_STATE_QUEUED);
- 		return ret;
- 	}
- 
--	/* Enable streaming from the pipe */
--	ret = vimc_pipeline_s_stream(&vcap->vdev.entity, 1);
-+	ret = vimc_streamer_s_stream(&vcap->stream, &vcap->ved, 1);
- 	if (ret) {
- 		media_pipeline_stop(entity);
- 		vimc_cap_return_all_buffers(vcap, VB2_BUF_STATE_QUEUED);
-@@ -273,8 +273,7 @@ static void vimc_cap_stop_streaming(struct vb2_queue *vq)
- {
- 	struct vimc_cap_device *vcap = vb2_get_drv_priv(vq);
- 
--	/* Disable streaming from the pipe */
--	vimc_pipeline_s_stream(&vcap->vdev.entity, 0);
-+	vimc_streamer_s_stream(&vcap->stream, &vcap->ved, 0);
- 
- 	/* Stop the media pipeline */
- 	media_pipeline_stop(&vcap->vdev.entity);
-@@ -355,8 +354,8 @@ static void vimc_cap_comp_unbind(struct device *comp, struct device *master,
- 	kfree(vcap);
- }
- 
--static void vimc_cap_process_frame(struct vimc_ent_device *ved,
--				   struct media_pad *sink, const void *frame)
-+static void *vimc_cap_process_frame(struct vimc_ent_device *ved,
-+				    const void *frame)
- {
- 	struct vimc_cap_device *vcap = container_of(ved, struct vimc_cap_device,
- 						    ved);
-@@ -370,7 +369,7 @@ static void vimc_cap_process_frame(struct vimc_ent_device *ved,
- 					    typeof(*vimc_buf), list);
- 	if (!vimc_buf) {
- 		spin_unlock(&vcap->qlock);
--		return;
-+		return ERR_PTR(-EAGAIN);
- 	}
- 
- 	/* Remove this entry from the list */
-@@ -391,6 +390,7 @@ static void vimc_cap_process_frame(struct vimc_ent_device *ved,
- 	vb2_set_plane_payload(&vimc_buf->vb2.vb2_buf, 0,
- 			      vcap->format.sizeimage);
- 	vb2_buffer_done(&vimc_buf->vb2.vb2_buf, VB2_BUF_STATE_DONE);
-+	return NULL;
- }
- 
- static int vimc_cap_comp_bind(struct device *comp, struct device *master,
-diff --git a/drivers/media/platform/vimc/vimc-common.c b/drivers/media/platform/vimc/vimc-common.c
-index 867e24dbd6b5..c1a74bb2df58 100644
---- a/drivers/media/platform/vimc/vimc-common.c
-+++ b/drivers/media/platform/vimc/vimc-common.c
-@@ -207,41 +207,6 @@ const struct vimc_pix_map *vimc_pix_map_by_pixelformat(u32 pixelformat)
- }
- EXPORT_SYMBOL_GPL(vimc_pix_map_by_pixelformat);
- 
--int vimc_propagate_frame(struct media_pad *src, const void *frame)
--{
--	struct media_link *link;
--
--	if (!(src->flags & MEDIA_PAD_FL_SOURCE))
--		return -EINVAL;
--
--	/* Send this frame to all sink pads that are direct linked */
--	list_for_each_entry(link, &src->entity->links, list) {
--		if (link->source == src &&
--		    (link->flags & MEDIA_LNK_FL_ENABLED)) {
--			struct vimc_ent_device *ved = NULL;
--			struct media_entity *entity = link->sink->entity;
--
--			if (is_media_entity_v4l2_subdev(entity)) {
--				struct v4l2_subdev *sd =
--					container_of(entity, struct v4l2_subdev,
--						     entity);
--				ved = v4l2_get_subdevdata(sd);
--			} else if (is_media_entity_v4l2_video_device(entity)) {
--				struct video_device *vdev =
--					container_of(entity,
--						     struct video_device,
--						     entity);
--				ved = video_get_drvdata(vdev);
--			}
--			if (ved && ved->process_frame)
--				ved->process_frame(ved, link->sink, frame);
--		}
--	}
--
--	return 0;
--}
--EXPORT_SYMBOL_GPL(vimc_propagate_frame);
--
- /* Helper function to allocate and initialize pads */
- struct media_pad *vimc_pads_init(u16 num_pads, const unsigned long *pads_flag)
- {
-diff --git a/drivers/media/platform/vimc/vimc-common.h b/drivers/media/platform/vimc/vimc-common.h
-index 2e9981b18166..6ed969d9efbb 100644
---- a/drivers/media/platform/vimc/vimc-common.h
-+++ b/drivers/media/platform/vimc/vimc-common.h
-@@ -113,23 +113,12 @@ struct vimc_pix_map {
- struct vimc_ent_device {
- 	struct media_entity *ent;
- 	struct media_pad *pads;
--	void (*process_frame)(struct vimc_ent_device *ved,
--			      struct media_pad *sink, const void *frame);
-+	void * (*process_frame)(struct vimc_ent_device *ved,
-+				const void *frame);
- 	void (*vdev_get_format)(struct vimc_ent_device *ved,
- 			      struct v4l2_pix_format *fmt);
- };
- 
--/**
-- * vimc_propagate_frame - propagate a frame through the topology
-- *
-- * @src:	the source pad where the frame is being originated
-- * @frame:	the frame to be propagated
-- *
-- * This function will call the process_frame callback from the vimc_ent_device
-- * struct of the nodes directly connected to the @src pad
-- */
--int vimc_propagate_frame(struct media_pad *src, const void *frame);
--
- /**
-  * vimc_pads_init - initialize pads
-  *
-diff --git a/drivers/media/platform/vimc/vimc-debayer.c b/drivers/media/platform/vimc/vimc-debayer.c
-index 77887f66f323..7d77c63b99d2 100644
---- a/drivers/media/platform/vimc/vimc-debayer.c
-+++ b/drivers/media/platform/vimc/vimc-debayer.c
-@@ -321,7 +321,6 @@ static void vimc_deb_set_rgb_mbus_fmt_rgb888_1x24(struct vimc_deb_device *vdeb,
- static int vimc_deb_s_stream(struct v4l2_subdev *sd, int enable)
- {
- 	struct vimc_deb_device *vdeb = v4l2_get_subdevdata(sd);
--	int ret;
- 
- 	if (enable) {
- 		const struct vimc_pix_map *vpix;
-@@ -351,22 +350,10 @@ static int vimc_deb_s_stream(struct v4l2_subdev *sd, int enable)
- 		if (!vdeb->src_frame)
- 			return -ENOMEM;
- 
--		/* Turn the stream on in the subdevices directly connected */
--		ret = vimc_pipeline_s_stream(&vdeb->sd.entity, 1);
--		if (ret) {
--			vfree(vdeb->src_frame);
--			vdeb->src_frame = NULL;
--			return ret;
--		}
- 	} else {
- 		if (!vdeb->src_frame)
- 			return 0;
- 
--		/* Disable streaming from the pipe */
--		ret = vimc_pipeline_s_stream(&vdeb->sd.entity, 0);
--		if (ret)
--			return ret;
--
- 		vfree(vdeb->src_frame);
- 		vdeb->src_frame = NULL;
- 	}
-@@ -480,9 +467,8 @@ static void vimc_deb_calc_rgb_sink(struct vimc_deb_device *vdeb,
- 	}
- }
- 
--static void vimc_deb_process_frame(struct vimc_ent_device *ved,
--				   struct media_pad *sink,
--				   const void *sink_frame)
-+static void *vimc_deb_process_frame(struct vimc_ent_device *ved,
-+				    const void *sink_frame)
- {
- 	struct vimc_deb_device *vdeb = container_of(ved, struct vimc_deb_device,
- 						    ved);
-@@ -491,7 +477,7 @@ static void vimc_deb_process_frame(struct vimc_ent_device *ved,
- 
- 	/* If the stream in this node is not active, just return */
- 	if (!vdeb->src_frame)
--		return;
-+		return ERR_PTR(-EINVAL);
- 
- 	for (i = 0; i < vdeb->sink_fmt.height; i++)
- 		for (j = 0; j < vdeb->sink_fmt.width; j++) {
-@@ -499,12 +485,8 @@ static void vimc_deb_process_frame(struct vimc_ent_device *ved,
- 			vdeb->set_rgb_src(vdeb, i, j, rgb);
- 		}
- 
--	/* Propagate the frame through all source pads */
--	for (i = 1; i < vdeb->sd.entity.num_pads; i++) {
--		struct media_pad *pad = &vdeb->sd.entity.pads[i];
-+	return vdeb->src_frame;
- 
--		vimc_propagate_frame(pad, vdeb->src_frame);
--	}
- }
- 
- static void vimc_deb_comp_unbind(struct device *comp, struct device *master,
-diff --git a/drivers/media/platform/vimc/vimc-scaler.c b/drivers/media/platform/vimc/vimc-scaler.c
-index b0952ee86296..39b2a73dfcc1 100644
---- a/drivers/media/platform/vimc/vimc-scaler.c
-+++ b/drivers/media/platform/vimc/vimc-scaler.c
-@@ -217,7 +217,6 @@ static const struct v4l2_subdev_pad_ops vimc_sca_pad_ops = {
- static int vimc_sca_s_stream(struct v4l2_subdev *sd, int enable)
- {
- 	struct vimc_sca_device *vsca = v4l2_get_subdevdata(sd);
--	int ret;
- 
- 	if (enable) {
- 		const struct vimc_pix_map *vpix;
-@@ -245,22 +244,10 @@ static int vimc_sca_s_stream(struct v4l2_subdev *sd, int enable)
- 		if (!vsca->src_frame)
- 			return -ENOMEM;
- 
--		/* Turn the stream on in the subdevices directly connected */
--		ret = vimc_pipeline_s_stream(&vsca->sd.entity, 1);
--		if (ret) {
--			vfree(vsca->src_frame);
--			vsca->src_frame = NULL;
--			return ret;
--		}
- 	} else {
- 		if (!vsca->src_frame)
- 			return 0;
- 
--		/* Disable streaming from the pipe */
--		ret = vimc_pipeline_s_stream(&vsca->sd.entity, 0);
--		if (ret)
--			return ret;
--
- 		vfree(vsca->src_frame);
- 		vsca->src_frame = NULL;
- 	}
-@@ -346,26 +333,19 @@ static void vimc_sca_fill_src_frame(const struct vimc_sca_device *const vsca,
- 			vimc_sca_scale_pix(vsca, i, j, sink_frame);
- }
- 
--static void vimc_sca_process_frame(struct vimc_ent_device *ved,
--				   struct media_pad *sink,
--				   const void *sink_frame)
-+static void *vimc_sca_process_frame(struct vimc_ent_device *ved,
-+				    const void *sink_frame)
- {
- 	struct vimc_sca_device *vsca = container_of(ved, struct vimc_sca_device,
- 						    ved);
--	unsigned int i;
- 
- 	/* If the stream in this node is not active, just return */
- 	if (!vsca->src_frame)
--		return;
-+		return ERR_PTR(-EINVAL);
- 
- 	vimc_sca_fill_src_frame(vsca, sink_frame);
- 
--	/* Propagate the frame through all source pads */
--	for (i = 1; i < vsca->sd.entity.num_pads; i++) {
--		struct media_pad *pad = &vsca->sd.entity.pads[i];
--
--		vimc_propagate_frame(pad, vsca->src_frame);
--	}
-+	return vsca->src_frame;
- };
- 
- static void vimc_sca_comp_unbind(struct device *comp, struct device *master,
-diff --git a/drivers/media/platform/vimc/vimc-sensor.c b/drivers/media/platform/vimc/vimc-sensor.c
-index 32ca9c6172b1..93961a1e694f 100644
---- a/drivers/media/platform/vimc/vimc-sensor.c
-+++ b/drivers/media/platform/vimc/vimc-sensor.c
-@@ -16,8 +16,6 @@
-  */
- 
- #include <linux/component.h>
--#include <linux/freezer.h>
--#include <linux/kthread.h>
- #include <linux/module.h>
- #include <linux/mod_devicetable.h>
- #include <linux/platform_device.h>
-@@ -201,38 +199,27 @@ static const struct v4l2_subdev_pad_ops vimc_sen_pad_ops = {
- 	.set_fmt		= vimc_sen_set_fmt,
- };
- 
--static int vimc_sen_tpg_thread(void *data)
-+static void *vimc_sen_process_frame(struct vimc_ent_device *ved,
-+				    const void *sink_frame)
- {
--	struct vimc_sen_device *vsen = data;
--	unsigned int i;
--
--	set_freezable();
--	set_current_state(TASK_UNINTERRUPTIBLE);
--
--	for (;;) {
--		try_to_freeze();
--		if (kthread_should_stop())
--			break;
--
--		tpg_fill_plane_buffer(&vsen->tpg, 0, 0, vsen->frame);
-+	struct vimc_sen_device *vsen = container_of(ved, struct vimc_sen_device,
-+						    ved);
-+	const struct vimc_pix_map *vpix;
-+	unsigned int frame_size;
- 
--		/* Send the frame to all source pads */
--		for (i = 0; i < vsen->sd.entity.num_pads; i++)
--			vimc_propagate_frame(&vsen->sd.entity.pads[i],
--					     vsen->frame);
-+	/* Calculate the frame size */
-+	vpix = vimc_pix_map_by_code(vsen->mbus_format.code);
-+	frame_size = vsen->mbus_format.width * vpix->bpp *
-+		     vsen->mbus_format.height;
- 
--		/* 60 frames per second */
--		schedule_timeout(HZ/60);
--	}
--
--	return 0;
-+	tpg_fill_plane_buffer(&vsen->tpg, 0, 0, vsen->frame);
-+	return vsen->frame;
- }
- 
- static int vimc_sen_s_stream(struct v4l2_subdev *sd, int enable)
- {
- 	struct vimc_sen_device *vsen =
- 				container_of(sd, struct vimc_sen_device, sd);
--	int ret;
- 
- 	if (enable) {
- 		const struct vimc_pix_map *vpix;
-@@ -258,26 +245,8 @@ static int vimc_sen_s_stream(struct v4l2_subdev *sd, int enable)
- 		/* configure the test pattern generator */
- 		vimc_sen_tpg_s_format(vsen);
- 
--		/* Initialize the image generator thread */
--		vsen->kthread_sen = kthread_run(vimc_sen_tpg_thread, vsen,
--					"%s-sen", vsen->sd.v4l2_dev->name);
--		if (IS_ERR(vsen->kthread_sen)) {
--			dev_err(vsen->dev, "%s: kernel_thread() failed\n",
--				vsen->sd.name);
--			vfree(vsen->frame);
--			vsen->frame = NULL;
--			return PTR_ERR(vsen->kthread_sen);
--		}
- 	} else {
--		if (!vsen->kthread_sen)
--			return 0;
--
--		/* Stop image generator */
--		ret = kthread_stop(vsen->kthread_sen);
--		if (ret)
--			return ret;
- 
--		vsen->kthread_sen = NULL;
- 		vfree(vsen->frame);
- 		vsen->frame = NULL;
- 		return 0;
-@@ -413,6 +382,7 @@ static int vimc_sen_comp_bind(struct device *comp, struct device *master,
- 	if (ret)
- 		goto err_free_hdl;
- 
-+	vsen->ved.process_frame = vimc_sen_process_frame;
- 	dev_set_drvdata(comp, &vsen->ved);
- 	vsen->dev = comp;
- 
-diff --git a/drivers/media/platform/vimc/vimc-streamer.c b/drivers/media/platform/vimc/vimc-streamer.c
-new file mode 100644
-index 000000000000..fcc897fb247b
---- /dev/null
-+++ b/drivers/media/platform/vimc/vimc-streamer.c
-@@ -0,0 +1,188 @@
-+// SPDX-License-Identifier: GPL-2.0+
-+/*
-+ * vimc-streamer.c Virtual Media Controller Driver
-+ *
-+ * Copyright (C) 2018 Lucas A. M. Magalhães <lucmaga@gmail.com>
-+ *
-+ */
-+
-+#include <linux/init.h>
-+#include <linux/module.h>
-+#include <linux/freezer.h>
-+#include <linux/kthread.h>
-+
-+#include "vimc-streamer.h"
-+
-+/**
-+ * vimc_get_source_entity - get the entity connected with the first sink pad
-+ *
-+ * @ent:	reference media_entity
-+ *
-+ * Helper function that returns the media entity containing the source pad
-+ * linked with the first sink pad from the given media entity pad list.
-+ */
-+static struct media_entity *vimc_get_source_entity(struct media_entity *ent)
-+{
-+	struct media_pad *pad;
-+	int i;
-+
-+	for (i = 0; i < ent->num_pads; i++) {
-+		if (ent->pads[i].flags & MEDIA_PAD_FL_SOURCE)
-+			continue;
-+		pad = media_entity_remote_pad(&ent->pads[i]);
-+		return pad ? pad->entity : NULL;
-+	}
-+	return NULL;
-+}
-+
-+/*
-+ * vimc_streamer_pipeline_terminate - Disable stream in all ved in stream
-+ *
-+ * @stream: the pointer to the stream structure with the pipeline to be
-+ *	    disabled.
-+ *
-+ * Calls s_stream to disable the stream in each entity of the pipeline
-+ *
-+ */
-+static void vimc_streamer_pipeline_terminate(struct vimc_stream *stream)
-+{
-+	struct media_entity *entity;
-+	struct v4l2_subdev *sd;
-+
-+	while (stream->pipe_size) {
-+		stream->pipe_size--;
-+		entity = stream->ved_pipeline[stream->pipe_size]->ent;
-+		entity = vimc_get_source_entity(entity);
-+		stream->ved_pipeline[stream->pipe_size] = NULL;
-+
-+		if (!is_media_entity_v4l2_subdev(entity))
-+			continue;
-+
-+		sd = media_entity_to_v4l2_subdev(entity);
-+		v4l2_subdev_call(sd, video, s_stream, 0);
-+	}
-+}
-+
-+/*
-+ * vimc_streamer_pipeline_init - initializes the stream structure
-+ *
-+ * @stream: the pointer to the stream structure to be initialized
-+ * @ved:    the pointer to the vimc entity initializing the stream
-+ *
-+ * Initializes the stream structure. Walks through the entity graph to
-+ * construct the pipeline used later on the streamer thread.
-+ * Calls s_stream to enable stream in all entities of the pipeline.
-+ */
-+static int vimc_streamer_pipeline_init(struct vimc_stream *stream,
-+				       struct vimc_ent_device *ved)
-+{
-+	struct media_entity *entity;
-+	struct video_device *vdev;
-+	struct v4l2_subdev *sd;
-+	int ret = 0;
-+
-+	stream->pipe_size = 0;
-+	while (stream->pipe_size < VIMC_STREAMER_PIPELINE_MAX_SIZE) {
-+		if (!ved) {
-+			vimc_streamer_pipeline_terminate(stream);
-+			return -EINVAL;
-+		}
-+		stream->ved_pipeline[stream->pipe_size++] = ved;
-+
-+		entity = vimc_get_source_entity(ved->ent);
-+		/* Check if the end of the pipeline was reached */
-+		if (!entity)
-+			return 0;
-+
-+		if (is_media_entity_v4l2_subdev(entity)) {
-+			sd = media_entity_to_v4l2_subdev(entity);
-+			ret = v4l2_subdev_call(sd, video, s_stream, 1);
-+			if (ret && ret != -ENOIOCTLCMD) {
-+				vimc_streamer_pipeline_terminate(stream);
-+				return ret;
-+			}
-+			ved = v4l2_get_subdevdata(sd);
-+		} else {
-+			vdev = container_of(entity,
-+					    struct video_device,
-+					    entity);
-+			ved = video_get_drvdata(vdev);
-+		}
-+	}
-+
-+	vimc_streamer_pipeline_terminate(stream);
-+	return -EINVAL;
-+}
-+
-+static int vimc_streamer_thread(void *data)
-+{
-+	struct vimc_stream *stream = data;
-+	int i;
-+
-+	set_freezable();
-+	set_current_state(TASK_UNINTERRUPTIBLE);
-+
-+	for (;;) {
-+		try_to_freeze();
-+		if (kthread_should_stop())
-+			break;
-+
-+		for (i = stream->pipe_size - 1; i >= 0; i--) {
-+			stream->frame = stream->ved_pipeline[i]->process_frame(
-+					stream->ved_pipeline[i],
-+					stream->frame);
-+			if (!stream->frame)
-+				break;
-+			if (IS_ERR(stream->frame))
-+				break;
-+		}
-+		/* wait for 60 Hz */
-+		schedule_timeout(HZ / 60);
-+	}
-+
-+	return 0;
-+}
-+
-+int vimc_streamer_s_stream(struct vimc_stream *stream,
-+			   struct vimc_ent_device *ved,
-+			   int enable)
-+{
-+	int ret;
-+
-+	if (!stream || !ved)
-+		return -EINVAL;
-+
-+	if (enable) {
-+		if (stream->kthread)
-+			return 0;
-+
-+		ret = vimc_streamer_pipeline_init(stream, ved);
-+		if (ret)
-+			return ret;
-+
-+		stream->kthread = kthread_run(vimc_streamer_thread, stream,
-+					      "vimc-streamer thread");
-+
-+		if (IS_ERR(stream->kthread))
-+			return PTR_ERR(stream->kthread);
-+
-+	} else {
-+		if (!stream->kthread)
-+			return 0;
-+
-+		ret = kthread_stop(stream->kthread);
-+		if (ret)
-+			return ret;
-+
-+		stream->kthread = NULL;
-+
-+		vimc_streamer_pipeline_terminate(stream);
-+	}
-+
-+	return 0;
-+}
-+EXPORT_SYMBOL_GPL(vimc_streamer_s_stream);
-+
-+MODULE_DESCRIPTION("Virtual Media Controller Driver (VIMC) Streamer");
-+MODULE_AUTHOR("Lucas A. M. Magalhães <lucmaga@gmail.com>");
-+MODULE_LICENSE("GPL");
-diff --git a/drivers/media/platform/vimc/vimc-streamer.h b/drivers/media/platform/vimc/vimc-streamer.h
-new file mode 100644
-index 000000000000..752af2e2d5a2
---- /dev/null
-+++ b/drivers/media/platform/vimc/vimc-streamer.h
-@@ -0,0 +1,38 @@
-+/* SPDX-License-Identifier: GPL-2.0+ */
-+/*
-+ * vimc-streamer.h Virtual Media Controller Driver
-+ *
-+ * Copyright (C) 2018 Lucas A. M. Magalhães <lucmaga@gmail.com>
-+ *
-+ */
-+
-+#ifndef _VIMC_STREAMER_H_
-+#define _VIMC_STREAMER_H_
-+
-+#include <media/media-device.h>
-+
-+#include "vimc-common.h"
-+
-+#define VIMC_STREAMER_PIPELINE_MAX_SIZE 16
-+
-+struct vimc_stream {
-+	struct media_pipeline pipe;
-+	struct vimc_ent_device *ved_pipeline[VIMC_STREAMER_PIPELINE_MAX_SIZE];
-+	unsigned int pipe_size;
-+	u8 *frame;
-+	struct task_struct *kthread;
-+};
-+
-+/**
-+ * vimc_streamer_s_stream - start/stop the stream
-+ *
-+ * @stream:	the pointer to the stream to start or stop
-+ * @ved:	The last entity of the streamer pipeline
-+ * @enable:	any non-zero number starts the stream, zero stops it
-+ *
-+ */
-+int vimc_streamer_s_stream(struct vimc_stream *stream,
-+			   struct vimc_ent_device *ved,
-+			   int enable);
-+
-+#endif  //_VIMC_STREAMER_H_
-diff --git a/drivers/media/rc/rc-main.c b/drivers/media/rc/rc-main.c
-index 66a174979b3c..81745644f720 100644
---- a/drivers/media/rc/rc-main.c
-+++ b/drivers/media/rc/rc-main.c
-@@ -274,6 +274,7 @@ static unsigned int ir_update_mapping(struct rc_dev *dev,
- 				      unsigned int new_keycode)
- {
- 	int old_keycode = rc_map->scan[index].keycode;
-+	int i;
- 
- 	/* Did the user wish to remove the mapping? */
- 	if (new_keycode == KEY_RESERVED || new_keycode == KEY_UNKNOWN) {
-@@ -288,9 +289,20 @@ static unsigned int ir_update_mapping(struct rc_dev *dev,
- 			old_keycode == KEY_RESERVED ? "New" : "Replacing",
- 			rc_map->scan[index].scancode, new_keycode);
- 		rc_map->scan[index].keycode = new_keycode;
-+		__set_bit(new_keycode, dev->input_dev->keybit);
- 	}
- 
- 	if (old_keycode != KEY_RESERVED) {
-+		/* A previous mapping was updated... */
-+		__clear_bit(old_keycode, dev->input_dev->keybit);
-+		/* ... but another scancode might use the same keycode */
-+		for (i = 0; i < rc_map->len; i++) {
-+			if (rc_map->scan[i].keycode == old_keycode) {
-+				__set_bit(old_keycode, dev->input_dev->keybit);
-+				break;
-+			}
-+		}
-+
- 		/* Possibly shrink the keytable, failure is not a problem */
- 		ir_resize_table(dev, rc_map, GFP_ATOMIC);
- 	}
-@@ -1750,7 +1762,6 @@ static int rc_prepare_rx_device(struct rc_dev *dev)
- 	set_bit(EV_REP, dev->input_dev->evbit);
- 	set_bit(EV_MSC, dev->input_dev->evbit);
- 	set_bit(MSC_SCAN, dev->input_dev->mscbit);
--	bitmap_fill(dev->input_dev->keybit, KEY_CNT);
- 
- 	/* Pointer/mouse events */
- 	set_bit(EV_REL, dev->input_dev->evbit);
-diff --git a/drivers/media/usb/uvc/uvc_ctrl.c b/drivers/media/usb/uvc/uvc_ctrl.c
-index d45415cbe6e7..14cff91b7aea 100644
---- a/drivers/media/usb/uvc/uvc_ctrl.c
-+++ b/drivers/media/usb/uvc/uvc_ctrl.c
-@@ -1212,7 +1212,7 @@ static void uvc_ctrl_fill_event(struct uvc_video_chain *chain,
- 
- 	__uvc_query_v4l2_ctrl(chain, ctrl, mapping, &v4l2_ctrl);
- 
--	memset(ev->reserved, 0, sizeof(ev->reserved));
-+	memset(ev, 0, sizeof(*ev));
- 	ev->type = V4L2_EVENT_CTRL;
- 	ev->id = v4l2_ctrl.id;
- 	ev->u.ctrl.value = value;
-diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
-index b62cbd800111..33a22c016456 100644
---- a/drivers/media/usb/uvc/uvc_driver.c
-+++ b/drivers/media/usb/uvc/uvc_driver.c
-@@ -1106,11 +1106,19 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
- 			return -EINVAL;
- 		}
- 
--		/* Make sure the terminal type MSB is not null, otherwise it
--		 * could be confused with a unit.
-+		/*
-+		 * Reject invalid terminal types that would cause issues:
-+		 *
-+		 * - The high byte must be non-zero, otherwise it would be
-+		 *   confused with a unit.
-+		 *
-+		 * - Bit 15 must be 0, as we use it internally as a terminal
-+		 *   direction flag.
-+		 *
-+		 * Other unknown types are accepted.
- 		 */
- 		type = get_unaligned_le16(&buffer[4]);
--		if ((type & 0xff00) == 0) {
-+		if ((type & 0x7f00) == 0 || (type & 0x8000) != 0) {
- 			uvc_trace(UVC_TRACE_DESCR, "device %d videocontrol "
- 				"interface %d INPUT_TERMINAL %d has invalid "
- 				"type 0x%04x, skipping\n", udev->devnum,
-diff --git a/drivers/media/usb/uvc/uvc_video.c b/drivers/media/usb/uvc/uvc_video.c
-index 84525ff04745..e314657a1843 100644
---- a/drivers/media/usb/uvc/uvc_video.c
-+++ b/drivers/media/usb/uvc/uvc_video.c
-@@ -676,6 +676,14 @@ void uvc_video_clock_update(struct uvc_streaming *stream,
- 	if (!uvc_hw_timestamps_param)
- 		return;
- 
-+	/*
-+	 * We will get called from __vb2_queue_cancel() if there are buffers
-+	 * done but not dequeued by the user, but the sample array has already
-+	 * been released at that time. Just bail out in that case.
-+	 */
-+	if (!clock->samples)
-+		return;
-+
- 	spin_lock_irqsave(&clock->lock, flags);
- 
- 	if (clock->count < clock->size)
-diff --git a/drivers/media/v4l2-core/v4l2-ctrls.c b/drivers/media/v4l2-core/v4l2-ctrls.c
-index 5e3806feb5d7..8a82427c4d54 100644
---- a/drivers/media/v4l2-core/v4l2-ctrls.c
-+++ b/drivers/media/v4l2-core/v4l2-ctrls.c
-@@ -1387,7 +1387,7 @@ static u32 user_flags(const struct v4l2_ctrl *ctrl)
- 
- static void fill_event(struct v4l2_event *ev, struct v4l2_ctrl *ctrl, u32 changes)
- {
--	memset(ev->reserved, 0, sizeof(ev->reserved));
-+	memset(ev, 0, sizeof(*ev));
- 	ev->type = V4L2_EVENT_CTRL;
- 	ev->id = ctrl->id;
- 	ev->u.ctrl.changes = changes;
-diff --git a/drivers/mfd/sm501.c b/drivers/mfd/sm501.c
-index a530972c5a7e..e0173bf4b0dc 100644
---- a/drivers/mfd/sm501.c
-+++ b/drivers/mfd/sm501.c
-@@ -1145,6 +1145,9 @@ static int sm501_register_gpio_i2c_instance(struct sm501_devdata *sm,
- 	lookup = devm_kzalloc(&pdev->dev,
- 			      sizeof(*lookup) + 3 * sizeof(struct gpiod_lookup),
- 			      GFP_KERNEL);
-+	if (!lookup)
-+		return -ENOMEM;
-+
- 	lookup->dev_id = "i2c-gpio";
- 	if (iic->pin_sda < 32)
- 		lookup->table[0].chip_label = "SM501-LOW";
-diff --git a/drivers/misc/cxl/guest.c b/drivers/misc/cxl/guest.c
-index 5d28d9e454f5..08f4a512afad 100644
---- a/drivers/misc/cxl/guest.c
-+++ b/drivers/misc/cxl/guest.c
-@@ -267,6 +267,7 @@ static int guest_reset(struct cxl *adapter)
- 	int i, rc;
- 
- 	pr_devel("Adapter reset request\n");
-+	spin_lock(&adapter->afu_list_lock);
- 	for (i = 0; i < adapter->slices; i++) {
- 		if ((afu = adapter->afu[i])) {
- 			pci_error_handlers(afu, CXL_ERROR_DETECTED_EVENT,
-@@ -283,6 +284,7 @@ static int guest_reset(struct cxl *adapter)
- 			pci_error_handlers(afu, CXL_RESUME_EVENT, 0);
- 		}
- 	}
-+	spin_unlock(&adapter->afu_list_lock);
- 	return rc;
- }
- 
-diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
-index c79ba1c699ad..300531d6136f 100644
---- a/drivers/misc/cxl/pci.c
-+++ b/drivers/misc/cxl/pci.c
-@@ -1805,7 +1805,7 @@ static pci_ers_result_t cxl_vphb_error_detected(struct cxl_afu *afu,
- 	/* There should only be one entry, but go through the list
- 	 * anyway
- 	 */
--	if (afu->phb == NULL)
-+	if (afu == NULL || afu->phb == NULL)
- 		return result;
- 
- 	list_for_each_entry(afu_dev, &afu->phb->bus->devices, bus_list) {
-@@ -1832,7 +1832,8 @@ static pci_ers_result_t cxl_pci_error_detected(struct pci_dev *pdev,
- {
- 	struct cxl *adapter = pci_get_drvdata(pdev);
- 	struct cxl_afu *afu;
--	pci_ers_result_t result = PCI_ERS_RESULT_NEED_RESET, afu_result;
-+	pci_ers_result_t result = PCI_ERS_RESULT_NEED_RESET;
-+	pci_ers_result_t afu_result = PCI_ERS_RESULT_NEED_RESET;
- 	int i;
- 
- 	/* At this point, we could still have an interrupt pending.
-@@ -1843,6 +1844,7 @@ static pci_ers_result_t cxl_pci_error_detected(struct pci_dev *pdev,
- 
- 	/* If we're permanently dead, give up. */
- 	if (state == pci_channel_io_perm_failure) {
-+		spin_lock(&adapter->afu_list_lock);
- 		for (i = 0; i < adapter->slices; i++) {
- 			afu = adapter->afu[i];
- 			/*
-@@ -1851,6 +1853,7 @@ static pci_ers_result_t cxl_pci_error_detected(struct pci_dev *pdev,
- 			 */
- 			cxl_vphb_error_detected(afu, state);
- 		}
-+		spin_unlock(&adapter->afu_list_lock);
- 		return PCI_ERS_RESULT_DISCONNECT;
- 	}
- 
-@@ -1932,11 +1935,17 @@ static pci_ers_result_t cxl_pci_error_detected(struct pci_dev *pdev,
- 	 *     * In slot_reset, free the old resources and allocate new ones.
- 	 *     * In resume, clear the flag to allow things to start.
- 	 */
-+
-+	/* Make sure no one else changes the afu list */
-+	spin_lock(&adapter->afu_list_lock);
-+
- 	for (i = 0; i < adapter->slices; i++) {
- 		afu = adapter->afu[i];
- 
--		afu_result = cxl_vphb_error_detected(afu, state);
-+		if (afu == NULL)
-+			continue;
- 
-+		afu_result = cxl_vphb_error_detected(afu, state);
- 		cxl_context_detach_all(afu);
- 		cxl_ops->afu_deactivate_mode(afu, afu->current_mode);
- 		pci_deconfigure_afu(afu);
-@@ -1948,6 +1957,7 @@ static pci_ers_result_t cxl_pci_error_detected(struct pci_dev *pdev,
- 			 (result == PCI_ERS_RESULT_NEED_RESET))
- 			result = PCI_ERS_RESULT_NONE;
- 	}
-+	spin_unlock(&adapter->afu_list_lock);
- 
- 	/* should take the context lock here */
- 	if (cxl_adapter_context_lock(adapter) != 0)
-@@ -1980,14 +1990,18 @@ static pci_ers_result_t cxl_pci_slot_reset(struct pci_dev *pdev)
- 	 */
- 	cxl_adapter_context_unlock(adapter);
- 
-+	spin_lock(&adapter->afu_list_lock);
- 	for (i = 0; i < adapter->slices; i++) {
- 		afu = adapter->afu[i];
- 
-+		if (afu == NULL)
-+			continue;
-+
- 		if (pci_configure_afu(afu, adapter, pdev))
--			goto err;
-+			goto err_unlock;
- 
- 		if (cxl_afu_select_best_mode(afu))
--			goto err;
-+			goto err_unlock;
- 
- 		if (afu->phb == NULL)
- 			continue;
-@@ -1999,16 +2013,16 @@ static pci_ers_result_t cxl_pci_slot_reset(struct pci_dev *pdev)
- 			ctx = cxl_get_context(afu_dev);
- 
- 			if (ctx && cxl_release_context(ctx))
--				goto err;
-+				goto err_unlock;
- 
- 			ctx = cxl_dev_context_init(afu_dev);
- 			if (IS_ERR(ctx))
--				goto err;
-+				goto err_unlock;
- 
- 			afu_dev->dev.archdata.cxl_ctx = ctx;
- 
- 			if (cxl_ops->afu_check_and_enable(afu))
--				goto err;
-+				goto err_unlock;
- 
- 			afu_dev->error_state = pci_channel_io_normal;
- 
-@@ -2029,8 +2043,13 @@ static pci_ers_result_t cxl_pci_slot_reset(struct pci_dev *pdev)
- 				result = PCI_ERS_RESULT_DISCONNECT;
- 		}
- 	}
-+
-+	spin_unlock(&adapter->afu_list_lock);
- 	return result;
- 
-+err_unlock:
-+	spin_unlock(&adapter->afu_list_lock);
-+
- err:
- 	/* All the bits that happen in both error_detected and cxl_remove
- 	 * should be idempotent, so we don't need to worry about leaving a mix
-@@ -2051,10 +2070,11 @@ static void cxl_pci_resume(struct pci_dev *pdev)
- 	 * This is not the place to be checking if everything came back up
- 	 * properly, because there's no return value: do that in slot_reset.
- 	 */
-+	spin_lock(&adapter->afu_list_lock);
- 	for (i = 0; i < adapter->slices; i++) {
- 		afu = adapter->afu[i];
- 
--		if (afu->phb == NULL)
-+		if (afu == NULL || afu->phb == NULL)
- 			continue;
- 
- 		list_for_each_entry(afu_dev, &afu->phb->bus->devices, bus_list) {
-@@ -2063,6 +2083,7 @@ static void cxl_pci_resume(struct pci_dev *pdev)
- 				afu_dev->driver->err_handler->resume(afu_dev);
- 		}
- 	}
-+	spin_unlock(&adapter->afu_list_lock);
- }
- 
- static const struct pci_error_handlers cxl_err_handler = {
-diff --git a/drivers/misc/mei/bus.c b/drivers/misc/mei/bus.c
-index fc3872fe7b25..c383322ec2ba 100644
---- a/drivers/misc/mei/bus.c
-+++ b/drivers/misc/mei/bus.c
-@@ -541,17 +541,9 @@ int mei_cldev_enable(struct mei_cl_device *cldev)
- 		goto out;
- 	}
- 
--	if (!mei_cl_bus_module_get(cldev)) {
--		dev_err(&cldev->dev, "get hw module failed");
--		ret = -ENODEV;
--		goto out;
--	}
--
- 	ret = mei_cl_connect(cl, cldev->me_cl, NULL);
--	if (ret < 0) {
-+	if (ret < 0)
- 		dev_err(&cldev->dev, "cannot connect\n");
--		mei_cl_bus_module_put(cldev);
--	}
- 
- out:
- 	mutex_unlock(&bus->device_lock);
-@@ -614,7 +606,6 @@ int mei_cldev_disable(struct mei_cl_device *cldev)
- 	if (err < 0)
- 		dev_err(bus->dev, "Could not disconnect from the ME client\n");
- 
--	mei_cl_bus_module_put(cldev);
- out:
- 	/* Flush queues and remove any pending read */
- 	mei_cl_flush_queues(cl, NULL);
-@@ -725,9 +716,16 @@ static int mei_cl_device_probe(struct device *dev)
- 	if (!id)
- 		return -ENODEV;
- 
-+	if (!mei_cl_bus_module_get(cldev)) {
-+		dev_err(&cldev->dev, "get hw module failed");
-+		return -ENODEV;
-+	}
-+
- 	ret = cldrv->probe(cldev, id);
--	if (ret)
-+	if (ret) {
-+		mei_cl_bus_module_put(cldev);
- 		return ret;
-+	}
- 
- 	__module_get(THIS_MODULE);
- 	return 0;
-@@ -755,6 +753,7 @@ static int mei_cl_device_remove(struct device *dev)
- 
- 	mei_cldev_unregister_callbacks(cldev);
- 
-+	mei_cl_bus_module_put(cldev);
- 	module_put(THIS_MODULE);
- 	dev->driver = NULL;
- 	return ret;
-diff --git a/drivers/misc/mei/hbm.c b/drivers/misc/mei/hbm.c
-index 8f7616557c97..e6207f614816 100644
---- a/drivers/misc/mei/hbm.c
-+++ b/drivers/misc/mei/hbm.c
-@@ -1029,29 +1029,36 @@ static void mei_hbm_config_features(struct mei_device *dev)
- 	    dev->version.minor_version >= HBM_MINOR_VERSION_PGI)
- 		dev->hbm_f_pg_supported = 1;
- 
-+	dev->hbm_f_dc_supported = 0;
- 	if (dev->version.major_version >= HBM_MAJOR_VERSION_DC)
- 		dev->hbm_f_dc_supported = 1;
- 
-+	dev->hbm_f_ie_supported = 0;
- 	if (dev->version.major_version >= HBM_MAJOR_VERSION_IE)
- 		dev->hbm_f_ie_supported = 1;
- 
- 	/* disconnect on connect timeout instead of link reset */
-+	dev->hbm_f_dot_supported = 0;
- 	if (dev->version.major_version >= HBM_MAJOR_VERSION_DOT)
- 		dev->hbm_f_dot_supported = 1;
- 
- 	/* Notification Event Support */
-+	dev->hbm_f_ev_supported = 0;
- 	if (dev->version.major_version >= HBM_MAJOR_VERSION_EV)
- 		dev->hbm_f_ev_supported = 1;
- 
- 	/* Fixed Address Client Support */
-+	dev->hbm_f_fa_supported = 0;
- 	if (dev->version.major_version >= HBM_MAJOR_VERSION_FA)
- 		dev->hbm_f_fa_supported = 1;
- 
- 	/* OS ver message Support */
-+	dev->hbm_f_os_supported = 0;
- 	if (dev->version.major_version >= HBM_MAJOR_VERSION_OS)
- 		dev->hbm_f_os_supported = 1;
- 
- 	/* DMA Ring Support */
-+	dev->hbm_f_dr_supported = 0;
- 	if (dev->version.major_version > HBM_MAJOR_VERSION_DR ||
- 	    (dev->version.major_version == HBM_MAJOR_VERSION_DR &&
- 	     dev->version.minor_version >= HBM_MINOR_VERSION_DR))
-diff --git a/drivers/misc/vmw_balloon.c b/drivers/misc/vmw_balloon.c
-index f8240b87df22..f69acb5d4a50 100644
---- a/drivers/misc/vmw_balloon.c
-+++ b/drivers/misc/vmw_balloon.c
-@@ -1287,7 +1287,7 @@ static void vmballoon_reset(struct vmballoon *b)
- 	vmballoon_pop(b);
- 
- 	if (vmballoon_send_start(b, VMW_BALLOON_CAPABILITIES))
--		return;
-+		goto unlock;
- 
- 	if ((b->capabilities & VMW_BALLOON_BATCHED_CMDS) != 0) {
- 		if (vmballoon_init_batching(b)) {
-@@ -1298,7 +1298,7 @@ static void vmballoon_reset(struct vmballoon *b)
- 			 * The guest will retry in one second.
- 			 */
- 			vmballoon_send_start(b, 0);
--			return;
-+			goto unlock;
- 		}
- 	} else if ((b->capabilities & VMW_BALLOON_BASIC_CMDS) != 0) {
- 		vmballoon_deinit_batching(b);
-@@ -1314,6 +1314,7 @@ static void vmballoon_reset(struct vmballoon *b)
- 	if (vmballoon_send_guest_id(b))
- 		pr_err("failed to send guest ID to the host\n");
- 
-+unlock:
- 	up_write(&b->conf_sem);
- }
- 
-diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
-index b27a1e620233..1e6b07c176dc 100644
---- a/drivers/mmc/core/core.c
-+++ b/drivers/mmc/core/core.c
-@@ -2381,9 +2381,9 @@ unsigned int mmc_calc_max_discard(struct mmc_card *card)
- 		return card->pref_erase;
- 
- 	max_discard = mmc_do_calc_max_discard(card, MMC_ERASE_ARG);
--	if (max_discard && mmc_can_trim(card)) {
-+	if (mmc_can_trim(card)) {
- 		max_trim = mmc_do_calc_max_discard(card, MMC_TRIM_ARG);
--		if (max_trim < max_discard)
-+		if (max_trim < max_discard || max_discard == 0)
- 			max_discard = max_trim;
- 	} else if (max_discard < card->erase_size) {
- 		max_discard = 0;
-diff --git a/drivers/mmc/host/alcor.c b/drivers/mmc/host/alcor.c
-index c712b7deb3a9..7c8f203f9a24 100644
---- a/drivers/mmc/host/alcor.c
-+++ b/drivers/mmc/host/alcor.c
-@@ -48,7 +48,6 @@ struct alcor_sdmmc_host {
- 	struct mmc_command *cmd;
- 	struct mmc_data *data;
- 	unsigned int dma_on:1;
--	unsigned int early_data:1;
- 
- 	struct mutex cmd_mutex;
- 
-@@ -144,8 +143,7 @@ static void alcor_data_set_dma(struct alcor_sdmmc_host *host)
- 	host->sg_count--;
- }
- 
--static void alcor_trigger_data_transfer(struct alcor_sdmmc_host *host,
--					bool early)
-+static void alcor_trigger_data_transfer(struct alcor_sdmmc_host *host)
- {
- 	struct alcor_pci_priv *priv = host->alcor_pci;
- 	struct mmc_data *data = host->data;
-@@ -155,13 +153,6 @@ static void alcor_trigger_data_transfer(struct alcor_sdmmc_host *host,
- 		ctrl |= AU6601_DATA_WRITE;
- 
- 	if (data->host_cookie == COOKIE_MAPPED) {
--		if (host->early_data) {
--			host->early_data = false;
--			return;
--		}
--
--		host->early_data = early;
--
- 		alcor_data_set_dma(host);
- 		ctrl |= AU6601_DATA_DMA_MODE;
- 		host->dma_on = 1;
-@@ -231,6 +222,7 @@ static void alcor_prepare_sg_miter(struct alcor_sdmmc_host *host)
- static void alcor_prepare_data(struct alcor_sdmmc_host *host,
- 			       struct mmc_command *cmd)
- {
-+	struct alcor_pci_priv *priv = host->alcor_pci;
- 	struct mmc_data *data = cmd->data;
- 
- 	if (!data)
-@@ -248,7 +240,7 @@ static void alcor_prepare_data(struct alcor_sdmmc_host *host,
- 	if (data->host_cookie != COOKIE_MAPPED)
- 		alcor_prepare_sg_miter(host);
- 
--	alcor_trigger_data_transfer(host, true);
-+	alcor_write8(priv, 0, AU6601_DATA_XFER_CTRL);
- }
- 
- static void alcor_send_cmd(struct alcor_sdmmc_host *host,
-@@ -435,7 +427,7 @@ static int alcor_cmd_irq_done(struct alcor_sdmmc_host *host, u32 intmask)
- 	if (!host->data)
- 		return false;
- 
--	alcor_trigger_data_transfer(host, false);
-+	alcor_trigger_data_transfer(host);
- 	host->cmd = NULL;
- 	return true;
- }
-@@ -456,7 +448,7 @@ static void alcor_cmd_irq_thread(struct alcor_sdmmc_host *host, u32 intmask)
- 	if (!host->data)
- 		alcor_request_complete(host, 1);
- 	else
--		alcor_trigger_data_transfer(host, false);
-+		alcor_trigger_data_transfer(host);
- 	host->cmd = NULL;
- }
- 
-@@ -487,15 +479,9 @@ static int alcor_data_irq_done(struct alcor_sdmmc_host *host, u32 intmask)
- 		break;
- 	case AU6601_INT_READ_BUF_RDY:
- 		alcor_trf_block_pio(host, true);
--		if (!host->blocks)
--			break;
--		alcor_trigger_data_transfer(host, false);
- 		return 1;
- 	case AU6601_INT_WRITE_BUF_RDY:
- 		alcor_trf_block_pio(host, false);
--		if (!host->blocks)
--			break;
--		alcor_trigger_data_transfer(host, false);
- 		return 1;
- 	case AU6601_INT_DMA_END:
- 		if (!host->sg_count)
-@@ -508,8 +494,14 @@ static int alcor_data_irq_done(struct alcor_sdmmc_host *host, u32 intmask)
- 		break;
- 	}
- 
--	if (intmask & AU6601_INT_DATA_END)
--		return 0;
-+	if (intmask & AU6601_INT_DATA_END) {
-+		if (!host->dma_on && host->blocks) {
-+			alcor_trigger_data_transfer(host);
-+			return 1;
-+		} else {
-+			return 0;
-+		}
-+	}
- 
- 	return 1;
- }
-@@ -1044,14 +1036,27 @@ static void alcor_init_mmc(struct alcor_sdmmc_host *host)
- 	mmc->caps2 = MMC_CAP2_NO_SDIO;
- 	mmc->ops = &alcor_sdc_ops;
- 
--	/* Hardware cannot do scatter lists */
-+	/* The hardware does DMA data transfer of 4096 bytes to/from a single
-+	 * buffer address. Scatterlists are not supported, but upon DMA
-+	 * completion (signalled via IRQ), the original vendor driver does
-+	 * then immediately set up another DMA transfer of the next 4096
-+	 * bytes.
-+	 *
-+	 * This means that we need to handle the I/O in 4096 byte chunks.
-+	 * Lacking a way to limit the sglist entries to 4096 bytes, we instead
-+	 * impose that only one segment is provided, with maximum size 4096,
-+	 * which also happens to be the minimum size. This means that the
-+	 * single-entry sglist handled by this driver can be handed directly
-+	 * to the hardware, nice and simple.
-+	 *
-+	 * Unfortunately though, that means we only do 4096 bytes I/O per
-+	 * MMC command. A future improvement would be to make the driver
-+	 * accept sg lists and entries of any size, and simply iterate
-+	 * through them 4096 bytes at a time.
-+	 */
- 	mmc->max_segs = AU6601_MAX_DMA_SEGMENTS;
- 	mmc->max_seg_size = AU6601_MAX_DMA_BLOCK_SIZE;
--
--	mmc->max_blk_size = mmc->max_seg_size;
--	mmc->max_blk_count = mmc->max_segs;
--
--	mmc->max_req_size = mmc->max_seg_size * mmc->max_segs;
-+	mmc->max_req_size = mmc->max_seg_size;
- }
- 
- static int alcor_pci_sdmmc_drv_probe(struct platform_device *pdev)
-diff --git a/drivers/mmc/host/mxcmmc.c b/drivers/mmc/host/mxcmmc.c
-index 4d17032d15ee..7b530e5a86da 100644
---- a/drivers/mmc/host/mxcmmc.c
-+++ b/drivers/mmc/host/mxcmmc.c
-@@ -292,11 +292,8 @@ static void mxcmci_swap_buffers(struct mmc_data *data)
- 	struct scatterlist *sg;
- 	int i;
- 
--	for_each_sg(data->sg, sg, data->sg_len, i) {
--		void *buf = kmap_atomic(sg_page(sg) + sg->offset);
--		buffer_swap32(buf, sg->length);
--		kunmap_atomic(buf);
--	}
-+	for_each_sg(data->sg, sg, data->sg_len, i)
-+		buffer_swap32(sg_virt(sg), sg->length);
- }
- #else
- static inline void mxcmci_swap_buffers(struct mmc_data *data) {}
-@@ -613,7 +610,6 @@ static int mxcmci_transfer_data(struct mxcmci_host *host)
- {
- 	struct mmc_data *data = host->req->data;
- 	struct scatterlist *sg;
--	void *buf;
- 	int stat, i;
- 
- 	host->data = data;
-@@ -621,18 +617,14 @@ static int mxcmci_transfer_data(struct mxcmci_host *host)
- 
- 	if (data->flags & MMC_DATA_READ) {
- 		for_each_sg(data->sg, sg, data->sg_len, i) {
--			buf = kmap_atomic(sg_page(sg) + sg->offset);
--			stat = mxcmci_pull(host, buf, sg->length);
--			kunmap(buf);
-+			stat = mxcmci_pull(host, sg_virt(sg), sg->length);
- 			if (stat)
- 				return stat;
- 			host->datasize += sg->length;
- 		}
- 	} else {
- 		for_each_sg(data->sg, sg, data->sg_len, i) {
--			buf = kmap_atomic(sg_page(sg) + sg->offset);
--			stat = mxcmci_push(host, buf, sg->length);
--			kunmap(buf);
-+			stat = mxcmci_push(host, sg_virt(sg), sg->length);
- 			if (stat)
- 				return stat;
- 			host->datasize += sg->length;
-diff --git a/drivers/mmc/host/omap.c b/drivers/mmc/host/omap.c
-index c60a7625b1fa..b2873a2432b6 100644
---- a/drivers/mmc/host/omap.c
-+++ b/drivers/mmc/host/omap.c
-@@ -920,7 +920,7 @@ static inline void set_cmd_timeout(struct mmc_omap_host *host, struct mmc_reques
- 	reg &= ~(1 << 5);
- 	OMAP_MMC_WRITE(host, SDIO, reg);
- 	/* Set maximum timeout */
--	OMAP_MMC_WRITE(host, CTO, 0xff);
-+	OMAP_MMC_WRITE(host, CTO, 0xfd);
- }
- 
- static inline void set_data_timeout(struct mmc_omap_host *host, struct mmc_request *req)
-diff --git a/drivers/mmc/host/pxamci.c b/drivers/mmc/host/pxamci.c
-index 8779bbaa6b69..194a81888792 100644
---- a/drivers/mmc/host/pxamci.c
-+++ b/drivers/mmc/host/pxamci.c
-@@ -162,7 +162,7 @@ static void pxamci_dma_irq(void *param);
- static void pxamci_setup_data(struct pxamci_host *host, struct mmc_data *data)
- {
- 	struct dma_async_tx_descriptor *tx;
--	enum dma_data_direction direction;
-+	enum dma_transfer_direction direction;
- 	struct dma_slave_config	config;
- 	struct dma_chan *chan;
- 	unsigned int nob = data->blocks;
-diff --git a/drivers/mmc/host/renesas_sdhi_core.c b/drivers/mmc/host/renesas_sdhi_core.c
-index 31a351a20dc0..d9be22b310e6 100644
---- a/drivers/mmc/host/renesas_sdhi_core.c
-+++ b/drivers/mmc/host/renesas_sdhi_core.c
-@@ -634,6 +634,7 @@ int renesas_sdhi_probe(struct platform_device *pdev,
- 	struct renesas_sdhi *priv;
- 	struct resource *res;
- 	int irq, ret, i;
-+	u16 ver;
- 
- 	of_data = of_device_get_match_data(&pdev->dev);
- 
-@@ -723,6 +724,13 @@ int renesas_sdhi_probe(struct platform_device *pdev,
- 		host->ops.start_signal_voltage_switch =
- 			renesas_sdhi_start_signal_voltage_switch;
- 		host->sdcard_irq_setbit_mask = TMIO_STAT_ALWAYS_SET_27;
-+
-+		/* SDR and HS200/400 registers require HW reset */
-+		if (of_data && of_data->scc_offset) {
-+			priv->scc_ctl = host->ctl + of_data->scc_offset;
-+			host->mmc->caps |= MMC_CAP_HW_RESET;
-+			host->hw_reset = renesas_sdhi_hw_reset;
-+		}
- 	}
- 
- 	/* Originally registers were 16 bit apart, could be 32 or 64 nowadays */
-@@ -759,12 +767,17 @@ int renesas_sdhi_probe(struct platform_device *pdev,
- 	if (ret)
- 		goto efree;
- 
-+	ver = sd_ctrl_read16(host, CTL_VERSION);
-+	/* GEN2_SDR104 is first known SDHI to use 32bit block count */
-+	if (ver < SDHI_VER_GEN2_SDR104 && mmc_data->max_blk_count > U16_MAX)
-+		mmc_data->max_blk_count = U16_MAX;
-+
- 	ret = tmio_mmc_host_probe(host);
- 	if (ret < 0)
- 		goto edisclk;
- 
- 	/* One Gen2 SDHI incarnation does NOT have a CBSY bit */
--	if (sd_ctrl_read16(host, CTL_VERSION) == SDHI_VER_GEN2_SDR50)
-+	if (ver == SDHI_VER_GEN2_SDR50)
- 		mmc_data->flags &= ~TMIO_MMC_HAVE_CBSY;
- 
- 	/* Enable tuning iff we have an SCC and a supported mode */
-@@ -775,8 +788,6 @@ int renesas_sdhi_probe(struct platform_device *pdev,
- 		const struct renesas_sdhi_scc *taps = of_data->taps;
- 		bool hit = false;
- 
--		host->mmc->caps |= MMC_CAP_HW_RESET;
--
- 		for (i = 0; i < of_data->taps_num; i++) {
- 			if (taps[i].clk_rate == 0 ||
- 			    taps[i].clk_rate == host->mmc->f_max) {
-@@ -789,12 +800,10 @@ int renesas_sdhi_probe(struct platform_device *pdev,
- 		if (!hit)
- 			dev_warn(&host->pdev->dev, "Unknown clock rate for SDR104\n");
- 
--		priv->scc_ctl = host->ctl + of_data->scc_offset;
- 		host->init_tuning = renesas_sdhi_init_tuning;
- 		host->prepare_tuning = renesas_sdhi_prepare_tuning;
- 		host->select_tuning = renesas_sdhi_select_tuning;
- 		host->check_scc_error = renesas_sdhi_check_scc_error;
--		host->hw_reset = renesas_sdhi_hw_reset;
- 		host->prepare_hs400_tuning =
- 			renesas_sdhi_prepare_hs400_tuning;
- 		host->hs400_downgrade = renesas_sdhi_disable_scc;
-diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c
-index 00d41b312c79..a6f25c796aed 100644
---- a/drivers/mmc/host/sdhci-esdhc-imx.c
-+++ b/drivers/mmc/host/sdhci-esdhc-imx.c
-@@ -979,6 +979,7 @@ static void esdhc_set_uhs_signaling(struct sdhci_host *host, unsigned timing)
- 	case MMC_TIMING_UHS_SDR25:
- 	case MMC_TIMING_UHS_SDR50:
- 	case MMC_TIMING_UHS_SDR104:
-+	case MMC_TIMING_MMC_HS:
- 	case MMC_TIMING_MMC_HS200:
- 		writel(m, host->ioaddr + ESDHC_MIX_CTRL);
- 		break;
-diff --git a/drivers/mmc/host/sdhci-omap.c b/drivers/mmc/host/sdhci-omap.c
-index c11c18a9aacb..9ec300ec94ba 100644
---- a/drivers/mmc/host/sdhci-omap.c
-+++ b/drivers/mmc/host/sdhci-omap.c
-@@ -797,6 +797,43 @@ void sdhci_omap_reset(struct sdhci_host *host, u8 mask)
- 	sdhci_reset(host, mask);
- }
- 
-+#define CMD_ERR_MASK (SDHCI_INT_CRC | SDHCI_INT_END_BIT | SDHCI_INT_INDEX |\
-+		      SDHCI_INT_TIMEOUT)
-+#define CMD_MASK (CMD_ERR_MASK | SDHCI_INT_RESPONSE)
-+
-+static u32 sdhci_omap_irq(struct sdhci_host *host, u32 intmask)
-+{
-+	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
-+	struct sdhci_omap_host *omap_host = sdhci_pltfm_priv(pltfm_host);
-+
-+	if (omap_host->is_tuning && host->cmd && !host->data_early &&
-+	    (intmask & CMD_ERR_MASK)) {
-+
-+		/*
-+		 * Since we are not resetting data lines during tuning
-+		 * operation, data error or data complete interrupts
-+		 * might still arrive. Mark this request as a failure
-+		 * but still wait for the data interrupt
-+		 */
-+		if (intmask & SDHCI_INT_TIMEOUT)
-+			host->cmd->error = -ETIMEDOUT;
-+		else
-+			host->cmd->error = -EILSEQ;
-+
-+		host->cmd = NULL;
-+
-+		/*
-+		 * Sometimes command error interrupts and command complete
-+		 * interrupt will arrive together. Clear all command related
-+		 * interrupts here.
-+		 */
-+		sdhci_writel(host, intmask & CMD_MASK, SDHCI_INT_STATUS);
-+		intmask &= ~CMD_MASK;
-+	}
-+
-+	return intmask;
-+}
-+
- static struct sdhci_ops sdhci_omap_ops = {
- 	.set_clock = sdhci_omap_set_clock,
- 	.set_power = sdhci_omap_set_power,
-@@ -807,6 +844,7 @@ static struct sdhci_ops sdhci_omap_ops = {
- 	.platform_send_init_74_clocks = sdhci_omap_init_74_clocks,
- 	.reset = sdhci_omap_reset,
- 	.set_uhs_signaling = sdhci_omap_set_uhs_signaling,
-+	.irq = sdhci_omap_irq,
- };
- 
- static int sdhci_omap_set_capabilities(struct sdhci_omap_host *omap_host)
-diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
-index 21bf8ac78380..390e896dadc7 100644
---- a/drivers/net/Kconfig
-+++ b/drivers/net/Kconfig
-@@ -213,8 +213,8 @@ config GENEVE
- 
- config GTP
- 	tristate "GPRS Tunneling Protocol datapath (GTP-U)"
--	depends on INET && NET_UDP_TUNNEL
--	select NET_IP_TUNNEL
-+	depends on INET
-+	select NET_UDP_TUNNEL
- 	---help---
- 	  This allows one to create gtp virtual interfaces that provide
- 	  the GPRS Tunneling Protocol datapath (GTP-U). This tunneling protocol
-diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c
-index ddc1f9ca8ebc..4543ac97f077 100644
---- a/drivers/net/dsa/lantiq_gswip.c
-+++ b/drivers/net/dsa/lantiq_gswip.c
-@@ -1069,10 +1069,10 @@ static int gswip_probe(struct platform_device *pdev)
- 	version = gswip_switch_r(priv, GSWIP_VERSION);
- 
- 	/* bring up the mdio bus */
--	gphy_fw_np = of_find_compatible_node(pdev->dev.of_node, NULL,
--					     "lantiq,gphy-fw");
-+	gphy_fw_np = of_get_compatible_child(dev->of_node, "lantiq,gphy-fw");
- 	if (gphy_fw_np) {
- 		err = gswip_gphy_fw_list(priv, gphy_fw_np, version);
-+		of_node_put(gphy_fw_np);
- 		if (err) {
- 			dev_err(dev, "gphy fw probe failed\n");
- 			return err;
-@@ -1080,13 +1080,12 @@ static int gswip_probe(struct platform_device *pdev)
- 	}
- 
- 	/* bring up the mdio bus */
--	mdio_np = of_find_compatible_node(pdev->dev.of_node, NULL,
--					  "lantiq,xrx200-mdio");
-+	mdio_np = of_get_compatible_child(dev->of_node, "lantiq,xrx200-mdio");
- 	if (mdio_np) {
- 		err = gswip_mdio(priv, mdio_np);
- 		if (err) {
- 			dev_err(dev, "mdio probe failed\n");
--			goto gphy_fw;
-+			goto put_mdio_node;
- 		}
- 	}
- 
-@@ -1099,7 +1098,7 @@ static int gswip_probe(struct platform_device *pdev)
- 		dev_err(dev, "wrong CPU port defined, HW only supports port: %i",
- 			priv->hw_info->cpu_port);
- 		err = -EINVAL;
--		goto mdio_bus;
-+		goto disable_switch;
- 	}
- 
- 	platform_set_drvdata(pdev, priv);
-@@ -1109,10 +1108,14 @@ static int gswip_probe(struct platform_device *pdev)
- 		 (version & GSWIP_VERSION_MOD_MASK) >> GSWIP_VERSION_MOD_SHIFT);
- 	return 0;
- 
-+disable_switch:
-+	gswip_mdio_mask(priv, GSWIP_MDIO_GLOB_ENABLE, 0, GSWIP_MDIO_GLOB);
-+	dsa_unregister_switch(priv->ds);
- mdio_bus:
- 	if (mdio_np)
- 		mdiobus_unregister(priv->ds->slave_mii_bus);
--gphy_fw:
-+put_mdio_node:
-+	of_node_put(mdio_np);
- 	for (i = 0; i < priv->num_gphy_fw; i++)
- 		gswip_gphy_fw_remove(priv, &priv->gphy_fw[i]);
- 	return err;
-@@ -1131,8 +1134,10 @@ static int gswip_remove(struct platform_device *pdev)
- 
- 	dsa_unregister_switch(priv->ds);
- 
--	if (priv->ds->slave_mii_bus)
-+	if (priv->ds->slave_mii_bus) {
- 		mdiobus_unregister(priv->ds->slave_mii_bus);
-+		of_node_put(priv->ds->slave_mii_bus->dev.of_node);
-+	}
- 
- 	for (i = 0; i < priv->num_gphy_fw; i++)
- 		gswip_gphy_fw_remove(priv, &priv->gphy_fw[i]);
-diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
-index 7e3c00bd9532..6cba05a80892 100644
---- a/drivers/net/dsa/mv88e6xxx/chip.c
-+++ b/drivers/net/dsa/mv88e6xxx/chip.c
-@@ -442,12 +442,20 @@ out_mapping:
- 
- static int mv88e6xxx_g1_irq_setup(struct mv88e6xxx_chip *chip)
- {
-+	static struct lock_class_key lock_key;
-+	static struct lock_class_key request_key;
- 	int err;
- 
- 	err = mv88e6xxx_g1_irq_setup_common(chip);
- 	if (err)
- 		return err;
- 
-+	/* These lock classes tell lockdep that global 1 irqs are in
-+	 * a different category than their parent GPIO, so it won't
-+	 * report false recursion.
-+	 */
-+	irq_set_lockdep_class(chip->irq, &lock_key, &request_key);
-+
- 	err = request_threaded_irq(chip->irq, NULL,
- 				   mv88e6xxx_g1_irq_thread_fn,
- 				   IRQF_ONESHOT | IRQF_SHARED,
-@@ -559,6 +567,9 @@ static int mv88e6xxx_port_setup_mac(struct mv88e6xxx_chip *chip, int port,
- 			goto restore_link;
- 	}
- 
-+	if (speed == SPEED_MAX && chip->info->ops->port_max_speed_mode)
-+		mode = chip->info->ops->port_max_speed_mode(port);
-+
- 	if (chip->info->ops->port_set_pause) {
- 		err = chip->info->ops->port_set_pause(chip, port, pause);
- 		if (err)
-@@ -3042,6 +3053,7 @@ static const struct mv88e6xxx_ops mv88e6141_ops = {
- 	.port_set_duplex = mv88e6xxx_port_set_duplex,
- 	.port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
- 	.port_set_speed = mv88e6341_port_set_speed,
-+	.port_max_speed_mode = mv88e6341_port_max_speed_mode,
- 	.port_tag_remap = mv88e6095_port_tag_remap,
- 	.port_set_frame_mode = mv88e6351_port_set_frame_mode,
- 	.port_set_egress_floods = mv88e6352_port_set_egress_floods,
-@@ -3360,6 +3372,7 @@ static const struct mv88e6xxx_ops mv88e6190_ops = {
- 	.port_set_duplex = mv88e6xxx_port_set_duplex,
- 	.port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
- 	.port_set_speed = mv88e6390_port_set_speed,
-+	.port_max_speed_mode = mv88e6390_port_max_speed_mode,
- 	.port_tag_remap = mv88e6390_port_tag_remap,
- 	.port_set_frame_mode = mv88e6351_port_set_frame_mode,
- 	.port_set_egress_floods = mv88e6352_port_set_egress_floods,
-@@ -3404,6 +3417,7 @@ static const struct mv88e6xxx_ops mv88e6190x_ops = {
- 	.port_set_duplex = mv88e6xxx_port_set_duplex,
- 	.port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
- 	.port_set_speed = mv88e6390x_port_set_speed,
-+	.port_max_speed_mode = mv88e6390x_port_max_speed_mode,
- 	.port_tag_remap = mv88e6390_port_tag_remap,
- 	.port_set_frame_mode = mv88e6351_port_set_frame_mode,
- 	.port_set_egress_floods = mv88e6352_port_set_egress_floods,
-@@ -3448,6 +3462,7 @@ static const struct mv88e6xxx_ops mv88e6191_ops = {
- 	.port_set_duplex = mv88e6xxx_port_set_duplex,
- 	.port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
- 	.port_set_speed = mv88e6390_port_set_speed,
-+	.port_max_speed_mode = mv88e6390_port_max_speed_mode,
- 	.port_tag_remap = mv88e6390_port_tag_remap,
- 	.port_set_frame_mode = mv88e6351_port_set_frame_mode,
- 	.port_set_egress_floods = mv88e6352_port_set_egress_floods,
-@@ -3541,6 +3556,7 @@ static const struct mv88e6xxx_ops mv88e6290_ops = {
- 	.port_set_duplex = mv88e6xxx_port_set_duplex,
- 	.port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
- 	.port_set_speed = mv88e6390_port_set_speed,
-+	.port_max_speed_mode = mv88e6390_port_max_speed_mode,
- 	.port_tag_remap = mv88e6390_port_tag_remap,
- 	.port_set_frame_mode = mv88e6351_port_set_frame_mode,
- 	.port_set_egress_floods = mv88e6352_port_set_egress_floods,
-@@ -3672,6 +3688,7 @@ static const struct mv88e6xxx_ops mv88e6341_ops = {
- 	.port_set_duplex = mv88e6xxx_port_set_duplex,
- 	.port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
- 	.port_set_speed = mv88e6341_port_set_speed,
-+	.port_max_speed_mode = mv88e6341_port_max_speed_mode,
- 	.port_tag_remap = mv88e6095_port_tag_remap,
- 	.port_set_frame_mode = mv88e6351_port_set_frame_mode,
- 	.port_set_egress_floods = mv88e6352_port_set_egress_floods,
-@@ -3847,6 +3864,7 @@ static const struct mv88e6xxx_ops mv88e6390_ops = {
- 	.port_set_duplex = mv88e6xxx_port_set_duplex,
- 	.port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
- 	.port_set_speed = mv88e6390_port_set_speed,
-+	.port_max_speed_mode = mv88e6390_port_max_speed_mode,
- 	.port_tag_remap = mv88e6390_port_tag_remap,
- 	.port_set_frame_mode = mv88e6351_port_set_frame_mode,
- 	.port_set_egress_floods = mv88e6352_port_set_egress_floods,
-@@ -3895,6 +3913,7 @@ static const struct mv88e6xxx_ops mv88e6390x_ops = {
- 	.port_set_duplex = mv88e6xxx_port_set_duplex,
- 	.port_set_rgmii_delay = mv88e6390_port_set_rgmii_delay,
- 	.port_set_speed = mv88e6390x_port_set_speed,
-+	.port_max_speed_mode = mv88e6390x_port_max_speed_mode,
- 	.port_tag_remap = mv88e6390_port_tag_remap,
- 	.port_set_frame_mode = mv88e6351_port_set_frame_mode,
- 	.port_set_egress_floods = mv88e6352_port_set_egress_floods,
-@@ -4222,7 +4241,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
- 		.name = "Marvell 88E6190",
- 		.num_databases = 4096,
- 		.num_ports = 11,	/* 10 + Z80 */
--		.num_internal_phys = 11,
-+		.num_internal_phys = 9,
- 		.num_gpio = 16,
- 		.max_vid = 8191,
- 		.port_base_addr = 0x0,
-@@ -4245,7 +4264,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
- 		.name = "Marvell 88E6190X",
- 		.num_databases = 4096,
- 		.num_ports = 11,	/* 10 + Z80 */
--		.num_internal_phys = 11,
-+		.num_internal_phys = 9,
- 		.num_gpio = 16,
- 		.max_vid = 8191,
- 		.port_base_addr = 0x0,
-@@ -4268,7 +4287,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
- 		.name = "Marvell 88E6191",
- 		.num_databases = 4096,
- 		.num_ports = 11,	/* 10 + Z80 */
--		.num_internal_phys = 11,
-+		.num_internal_phys = 9,
- 		.max_vid = 8191,
- 		.port_base_addr = 0x0,
- 		.phy_base_addr = 0x0,
-@@ -4315,7 +4334,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
- 		.name = "Marvell 88E6290",
- 		.num_databases = 4096,
- 		.num_ports = 11,	/* 10 + Z80 */
--		.num_internal_phys = 11,
-+		.num_internal_phys = 9,
- 		.num_gpio = 16,
- 		.max_vid = 8191,
- 		.port_base_addr = 0x0,
-@@ -4477,7 +4496,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
- 		.name = "Marvell 88E6390",
- 		.num_databases = 4096,
- 		.num_ports = 11,	/* 10 + Z80 */
--		.num_internal_phys = 11,
-+		.num_internal_phys = 9,
- 		.num_gpio = 16,
- 		.max_vid = 8191,
- 		.port_base_addr = 0x0,
-@@ -4500,7 +4519,7 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
- 		.name = "Marvell 88E6390X",
- 		.num_databases = 4096,
- 		.num_ports = 11,	/* 10 + Z80 */
--		.num_internal_phys = 11,
-+		.num_internal_phys = 9,
- 		.num_gpio = 16,
- 		.max_vid = 8191,
- 		.port_base_addr = 0x0,
-@@ -4847,6 +4866,7 @@ static int mv88e6xxx_probe(struct mdio_device *mdiodev)
- 	if (err)
- 		goto out;
- 
-+	mv88e6xxx_ports_cmode_init(chip);
- 	mv88e6xxx_phy_init(chip);
- 
- 	if (chip->info->ops->get_eeprom) {
-diff --git a/drivers/net/dsa/mv88e6xxx/chip.h b/drivers/net/dsa/mv88e6xxx/chip.h
-index 546651d8c3e1..dfb1af65c205 100644
---- a/drivers/net/dsa/mv88e6xxx/chip.h
-+++ b/drivers/net/dsa/mv88e6xxx/chip.h
-@@ -377,6 +377,9 @@ struct mv88e6xxx_ops {
- 	 */
- 	int (*port_set_speed)(struct mv88e6xxx_chip *chip, int port, int speed);
- 
-+	/* What interface mode should be used for maximum speed? */
-+	phy_interface_t (*port_max_speed_mode)(int port);
-+
- 	int (*port_tag_remap)(struct mv88e6xxx_chip *chip, int port);
- 
- 	int (*port_set_frame_mode)(struct mv88e6xxx_chip *chip, int port,
-diff --git a/drivers/net/dsa/mv88e6xxx/port.c b/drivers/net/dsa/mv88e6xxx/port.c
-index 79ab51e69aee..c44b2822e4dd 100644
---- a/drivers/net/dsa/mv88e6xxx/port.c
-+++ b/drivers/net/dsa/mv88e6xxx/port.c
-@@ -190,7 +190,7 @@ int mv88e6xxx_port_set_duplex(struct mv88e6xxx_chip *chip, int port, int dup)
- 		/* normal duplex detection */
- 		break;
- 	default:
--		return -EINVAL;
-+		return -EOPNOTSUPP;
- 	}
- 
- 	err = mv88e6xxx_port_write(chip, port, MV88E6XXX_PORT_MAC_CTL, reg);
-@@ -312,6 +312,14 @@ int mv88e6341_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed)
- 	return mv88e6xxx_port_set_speed(chip, port, speed, !port, true);
- }
- 
-+phy_interface_t mv88e6341_port_max_speed_mode(int port)
-+{
-+	if (port == 5)
-+		return PHY_INTERFACE_MODE_2500BASEX;
-+
-+	return PHY_INTERFACE_MODE_NA;
-+}
-+
- /* Support 10, 100, 200, 1000 Mbps (e.g. 88E6352 family) */
- int mv88e6352_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed)
- {
-@@ -345,6 +353,14 @@ int mv88e6390_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed)
- 	return mv88e6xxx_port_set_speed(chip, port, speed, true, true);
- }
- 
-+phy_interface_t mv88e6390_port_max_speed_mode(int port)
-+{
-+	if (port == 9 || port == 10)
-+		return PHY_INTERFACE_MODE_2500BASEX;
-+
-+	return PHY_INTERFACE_MODE_NA;
-+}
-+
- /* Support 10, 100, 200, 1000, 2500, 10000 Mbps (e.g. 88E6190X) */
- int mv88e6390x_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed)
- {
-@@ -360,6 +376,14 @@ int mv88e6390x_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed)
- 	return mv88e6xxx_port_set_speed(chip, port, speed, true, true);
- }
- 
-+phy_interface_t mv88e6390x_port_max_speed_mode(int port)
-+{
-+	if (port == 9 || port == 10)
-+		return PHY_INTERFACE_MODE_XAUI;
-+
-+	return PHY_INTERFACE_MODE_NA;
-+}
-+
- int mv88e6390x_port_set_cmode(struct mv88e6xxx_chip *chip, int port,
- 			      phy_interface_t mode)
- {
-@@ -403,18 +427,22 @@ int mv88e6390x_port_set_cmode(struct mv88e6xxx_chip *chip, int port,
- 		return 0;
- 
- 	lane = mv88e6390x_serdes_get_lane(chip, port);
--	if (lane < 0)
-+	if (lane < 0 && lane != -ENODEV)
- 		return lane;
- 
--	if (chip->ports[port].serdes_irq) {
--		err = mv88e6390_serdes_irq_disable(chip, port, lane);
-+	if (lane >= 0) {
-+		if (chip->ports[port].serdes_irq) {
-+			err = mv88e6390_serdes_irq_disable(chip, port, lane);
-+			if (err)
-+				return err;
-+		}
-+
-+		err = mv88e6390x_serdes_power(chip, port, false);
- 		if (err)
- 			return err;
- 	}
- 
--	err = mv88e6390x_serdes_power(chip, port, false);
--	if (err)
--		return err;
-+	chip->ports[port].cmode = 0;
- 
- 	if (cmode) {
- 		err = mv88e6xxx_port_read(chip, port, MV88E6XXX_PORT_STS, &reg);
-@@ -428,6 +456,12 @@ int mv88e6390x_port_set_cmode(struct mv88e6xxx_chip *chip, int port,
- 		if (err)
- 			return err;
- 
-+		chip->ports[port].cmode = cmode;
-+
-+		lane = mv88e6390x_serdes_get_lane(chip, port);
-+		if (lane < 0)
-+			return lane;
-+
- 		err = mv88e6390x_serdes_power(chip, port, true);
- 		if (err)
- 			return err;
-@@ -439,8 +473,6 @@ int mv88e6390x_port_set_cmode(struct mv88e6xxx_chip *chip, int port,
- 		}
- 	}
- 
--	chip->ports[port].cmode = cmode;
--
- 	return 0;
- }
- 
-@@ -448,6 +480,8 @@ int mv88e6390_port_set_cmode(struct mv88e6xxx_chip *chip, int port,
- 			     phy_interface_t mode)
- {
- 	switch (mode) {
-+	case PHY_INTERFACE_MODE_NA:
-+		return 0;
- 	case PHY_INTERFACE_MODE_XGMII:
- 	case PHY_INTERFACE_MODE_XAUI:
- 	case PHY_INTERFACE_MODE_RXAUI:
-diff --git a/drivers/net/dsa/mv88e6xxx/port.h b/drivers/net/dsa/mv88e6xxx/port.h
-index 4aadf321edb7..c7bed263a0f4 100644
---- a/drivers/net/dsa/mv88e6xxx/port.h
-+++ b/drivers/net/dsa/mv88e6xxx/port.h
-@@ -285,6 +285,10 @@ int mv88e6352_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed);
- int mv88e6390_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed);
- int mv88e6390x_port_set_speed(struct mv88e6xxx_chip *chip, int port, int speed);
- 
-+phy_interface_t mv88e6341_port_max_speed_mode(int port);
-+phy_interface_t mv88e6390_port_max_speed_mode(int port);
-+phy_interface_t mv88e6390x_port_max_speed_mode(int port);
-+
- int mv88e6xxx_port_set_state(struct mv88e6xxx_chip *chip, int port, u8 state);
- 
- int mv88e6xxx_port_set_vlan_map(struct mv88e6xxx_chip *chip, int port, u16 map);
-diff --git a/drivers/net/dsa/qca8k.c b/drivers/net/dsa/qca8k.c
-index 7e97e620bd44..a26850c888cf 100644
---- a/drivers/net/dsa/qca8k.c
-+++ b/drivers/net/dsa/qca8k.c
-@@ -620,22 +620,6 @@ qca8k_adjust_link(struct dsa_switch *ds, int port, struct phy_device *phy)
- 	qca8k_port_set_status(priv, port, 1);
- }
- 
--static int
--qca8k_phy_read(struct dsa_switch *ds, int phy, int regnum)
--{
--	struct qca8k_priv *priv = (struct qca8k_priv *)ds->priv;
--
--	return mdiobus_read(priv->bus, phy, regnum);
--}
--
--static int
--qca8k_phy_write(struct dsa_switch *ds, int phy, int regnum, u16 val)
--{
--	struct qca8k_priv *priv = (struct qca8k_priv *)ds->priv;
--
--	return mdiobus_write(priv->bus, phy, regnum, val);
--}
--
- static void
- qca8k_get_strings(struct dsa_switch *ds, int port, u32 stringset, uint8_t *data)
- {
-@@ -876,8 +860,6 @@ static const struct dsa_switch_ops qca8k_switch_ops = {
- 	.setup			= qca8k_setup,
- 	.adjust_link            = qca8k_adjust_link,
- 	.get_strings		= qca8k_get_strings,
--	.phy_read		= qca8k_phy_read,
--	.phy_write		= qca8k_phy_write,
- 	.get_ethtool_stats	= qca8k_get_ethtool_stats,
- 	.get_sset_count		= qca8k_get_sset_count,
- 	.get_mac_eee		= qca8k_get_mac_eee,
-diff --git a/drivers/net/ethernet/8390/mac8390.c b/drivers/net/ethernet/8390/mac8390.c
-index 342ae08ec3c2..d60a86aa8aa8 100644
---- a/drivers/net/ethernet/8390/mac8390.c
-+++ b/drivers/net/ethernet/8390/mac8390.c
-@@ -153,8 +153,6 @@ static void dayna_block_input(struct net_device *dev, int count,
- static void dayna_block_output(struct net_device *dev, int count,
- 			       const unsigned char *buf, int start_page);
- 
--#define memcmp_withio(a, b, c)	memcmp((a), (void *)(b), (c))
--
- /* Slow Sane (16-bit chunk memory read/write) Cabletron uses this */
- static void slow_sane_get_8390_hdr(struct net_device *dev,
- 				   struct e8390_pkt_hdr *hdr, int ring_page);
-@@ -233,19 +231,26 @@ static enum mac8390_type mac8390_ident(struct nubus_rsrc *fres)
- 
- static enum mac8390_access mac8390_testio(unsigned long membase)
- {
--	unsigned long outdata = 0xA5A0B5B0;
--	unsigned long indata =  0x00000000;
-+	u32 outdata = 0xA5A0B5B0;
-+	u32 indata = 0;
-+
- 	/* Try writing 32 bits */
--	memcpy_toio((void __iomem *)membase, &outdata, 4);
--	/* Now compare them */
--	if (memcmp_withio(&outdata, membase, 4) == 0)
-+	nubus_writel(outdata, membase);
-+	/* Now read it back */
-+	indata = nubus_readl(membase);
-+	if (outdata == indata)
- 		return ACCESS_32;
-+
-+	outdata = 0xC5C0D5D0;
-+	indata = 0;
-+
- 	/* Write 16 bit output */
- 	word_memcpy_tocard(membase, &outdata, 4);
- 	/* Now read it back */
- 	word_memcpy_fromcard(&indata, membase, 4);
- 	if (outdata == indata)
- 		return ACCESS_16;
-+
- 	return ACCESS_UNKNOWN;
- }
- 
-diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
-index 74550ccc7a20..e2ffb159cbe2 100644
---- a/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
-+++ b/drivers/net/ethernet/aquantia/atlantic/aq_ring.c
-@@ -186,11 +186,12 @@ static void aq_rx_checksum(struct aq_ring_s *self,
- 	}
- 	if (buff->is_ip_cso) {
- 		__skb_incr_checksum_unnecessary(skb);
--		if (buff->is_udp_cso || buff->is_tcp_cso)
--			__skb_incr_checksum_unnecessary(skb);
- 	} else {
- 		skb->ip_summed = CHECKSUM_NONE;
- 	}
-+
-+	if (buff->is_udp_cso || buff->is_tcp_cso)
-+		__skb_incr_checksum_unnecessary(skb);
- }
- 
- #define AQ_SKB_ALIGN SKB_DATA_ALIGN(sizeof(struct skb_shared_info))
-diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
-index 803f7990d32b..40ca339ec3df 100644
---- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
-+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
-@@ -1129,6 +1129,8 @@ static void bnxt_tpa_start(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
- 	tpa_info = &rxr->rx_tpa[agg_id];
- 
- 	if (unlikely(cons != rxr->rx_next_cons)) {
-+		netdev_warn(bp->dev, "TPA cons %x != expected cons %x\n",
-+			    cons, rxr->rx_next_cons);
- 		bnxt_sched_reset(bp, rxr);
- 		return;
- 	}
-@@ -1581,15 +1583,17 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
- 	}
- 
- 	cons = rxcmp->rx_cmp_opaque;
--	rx_buf = &rxr->rx_buf_ring[cons];
--	data = rx_buf->data;
--	data_ptr = rx_buf->data_ptr;
- 	if (unlikely(cons != rxr->rx_next_cons)) {
- 		int rc1 = bnxt_discard_rx(bp, cpr, raw_cons, rxcmp);
- 
-+		netdev_warn(bp->dev, "RX cons %x != expected cons %x\n",
-+			    cons, rxr->rx_next_cons);
- 		bnxt_sched_reset(bp, rxr);
- 		return rc1;
- 	}
-+	rx_buf = &rxr->rx_buf_ring[cons];
-+	data = rx_buf->data;
-+	data_ptr = rx_buf->data_ptr;
- 	prefetch(data_ptr);
- 
- 	misc = le32_to_cpu(rxcmp->rx_cmp_misc_v1);
-@@ -1606,11 +1610,17 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
- 
- 	rx_buf->data = NULL;
- 	if (rxcmp1->rx_cmp_cfa_code_errors_v2 & RX_CMP_L2_ERRORS) {
-+		u32 rx_err = le32_to_cpu(rxcmp1->rx_cmp_cfa_code_errors_v2);
-+
- 		bnxt_reuse_rx_data(rxr, cons, data);
- 		if (agg_bufs)
- 			bnxt_reuse_rx_agg_bufs(cpr, cp_cons, agg_bufs);
- 
- 		rc = -EIO;
-+		if (rx_err & RX_CMPL_ERRORS_BUFFER_ERROR_MASK) {
-+			netdev_warn(bp->dev, "RX buffer error %x\n", rx_err);
-+			bnxt_sched_reset(bp, rxr);
-+		}
- 		goto next_rx;
- 	}
- 
-diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_main.c b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
-index 503cfadff4ac..d4ee9f9c8c34 100644
---- a/drivers/net/ethernet/cavium/thunder/nicvf_main.c
-+++ b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
-@@ -1328,10 +1328,11 @@ int nicvf_stop(struct net_device *netdev)
- 	struct nicvf_cq_poll *cq_poll = NULL;
- 	union nic_mbx mbx = {};
- 
--	cancel_delayed_work_sync(&nic->link_change_work);
--
- 	/* wait till all queued set_rx_mode tasks completes */
--	drain_workqueue(nic->nicvf_rx_mode_wq);
-+	if (nic->nicvf_rx_mode_wq) {
-+		cancel_delayed_work_sync(&nic->link_change_work);
-+		drain_workqueue(nic->nicvf_rx_mode_wq);
-+	}
- 
- 	mbx.msg.msg = NIC_MBOX_MSG_SHUTDOWN;
- 	nicvf_send_msg_to_pf(nic, &mbx);
-@@ -1452,7 +1453,8 @@ int nicvf_open(struct net_device *netdev)
- 	struct nicvf_cq_poll *cq_poll = NULL;
- 
- 	/* wait till all queued set_rx_mode tasks completes if any */
--	drain_workqueue(nic->nicvf_rx_mode_wq);
-+	if (nic->nicvf_rx_mode_wq)
-+		drain_workqueue(nic->nicvf_rx_mode_wq);
- 
- 	netif_carrier_off(netdev);
- 
-@@ -1550,10 +1552,12 @@ int nicvf_open(struct net_device *netdev)
- 	/* Send VF config done msg to PF */
- 	nicvf_send_cfg_done(nic);
- 
--	INIT_DELAYED_WORK(&nic->link_change_work,
--			  nicvf_link_status_check_task);
--	queue_delayed_work(nic->nicvf_rx_mode_wq,
--			   &nic->link_change_work, 0);
-+	if (nic->nicvf_rx_mode_wq) {
-+		INIT_DELAYED_WORK(&nic->link_change_work,
-+				  nicvf_link_status_check_task);
-+		queue_delayed_work(nic->nicvf_rx_mode_wq,
-+				   &nic->link_change_work, 0);
-+	}
- 
- 	return 0;
- cleanup:
-diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_queues.c b/drivers/net/ethernet/cavium/thunder/nicvf_queues.c
-index 5b4d3badcb73..e246f9733bb8 100644
---- a/drivers/net/ethernet/cavium/thunder/nicvf_queues.c
-+++ b/drivers/net/ethernet/cavium/thunder/nicvf_queues.c
-@@ -105,20 +105,19 @@ static inline struct pgcache *nicvf_alloc_page(struct nicvf *nic,
- 	/* Check if page can be recycled */
- 	if (page) {
- 		ref_count = page_ref_count(page);
--		/* Check if this page has been used once i.e 'put_page'
--		 * called after packet transmission i.e internal ref_count
--		 * and page's ref_count are equal i.e page can be recycled.
-+		/* This page can be recycled if internal ref_count and page's
-+		 * ref_count are equal, indicating that the page has been used
-+		 * once for packet transmission. For non-XDP mode, internal
-+		 * ref_count is always '1'.
- 		 */
--		if (rbdr->is_xdp && (ref_count == pgcache->ref_count))
--			pgcache->ref_count--;
--		else
--			page = NULL;
--
--		/* In non-XDP mode, page's ref_count needs to be '1' for it
--		 * to be recycled.
--		 */
--		if (!rbdr->is_xdp && (ref_count != 1))
-+		if (rbdr->is_xdp) {
-+			if (ref_count == pgcache->ref_count)
-+				pgcache->ref_count--;
-+			else
-+				page = NULL;
-+		} else if (ref_count != 1) {
- 			page = NULL;
-+		}
- 	}
- 
- 	if (!page) {
-@@ -365,11 +364,10 @@ static void nicvf_free_rbdr(struct nicvf *nic, struct rbdr *rbdr)
- 	while (head < rbdr->pgcnt) {
- 		pgcache = &rbdr->pgcache[head];
- 		if (pgcache->page && page_ref_count(pgcache->page) != 0) {
--			if (!rbdr->is_xdp) {
--				put_page(pgcache->page);
--				continue;
-+			if (rbdr->is_xdp) {
-+				page_ref_sub(pgcache->page,
-+					     pgcache->ref_count - 1);
- 			}
--			page_ref_sub(pgcache->page, pgcache->ref_count - 1);
- 			put_page(pgcache->page);
- 		}
- 		head++;
-diff --git a/drivers/net/ethernet/cisco/enic/enic_main.c b/drivers/net/ethernet/cisco/enic/enic_main.c
-index 9a7f70db20c7..733d9172425b 100644
---- a/drivers/net/ethernet/cisco/enic/enic_main.c
-+++ b/drivers/net/ethernet/cisco/enic/enic_main.c
-@@ -119,7 +119,7 @@ static void enic_init_affinity_hint(struct enic *enic)
- 
- 	for (i = 0; i < enic->intr_count; i++) {
- 		if (enic_is_err_intr(enic, i) || enic_is_notify_intr(enic, i) ||
--		    (enic->msix[i].affinity_mask &&
-+		    (cpumask_available(enic->msix[i].affinity_mask) &&
- 		     !cpumask_empty(enic->msix[i].affinity_mask)))
- 			continue;
- 		if (zalloc_cpumask_var(&enic->msix[i].affinity_mask,
-@@ -148,7 +148,7 @@ static void enic_set_affinity_hint(struct enic *enic)
- 	for (i = 0; i < enic->intr_count; i++) {
- 		if (enic_is_err_intr(enic, i)		||
- 		    enic_is_notify_intr(enic, i)	||
--		    !enic->msix[i].affinity_mask	||
-+		    !cpumask_available(enic->msix[i].affinity_mask) ||
- 		    cpumask_empty(enic->msix[i].affinity_mask))
- 			continue;
- 		err = irq_set_affinity_hint(enic->msix_entry[i].vector,
-@@ -161,7 +161,7 @@ static void enic_set_affinity_hint(struct enic *enic)
- 	for (i = 0; i < enic->wq_count; i++) {
- 		int wq_intr = enic_msix_wq_intr(enic, i);
- 
--		if (enic->msix[wq_intr].affinity_mask &&
-+		if (cpumask_available(enic->msix[wq_intr].affinity_mask) &&
- 		    !cpumask_empty(enic->msix[wq_intr].affinity_mask))
- 			netif_set_xps_queue(enic->netdev,
- 					    enic->msix[wq_intr].affinity_mask,
-diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
-index 36eab37d8a40..09c774fe8853 100644
---- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h
-+++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
-@@ -192,6 +192,7 @@ struct hnae3_ae_dev {
- 	const struct hnae3_ae_ops *ops;
- 	struct list_head node;
- 	u32 flag;
-+	u8 override_pci_need_reset; /* fix to stop multiple reset happening */
- 	enum hnae3_dev_type dev_type;
- 	enum hnae3_reset_type reset_type;
- 	void *priv;
-diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
-index 1bf7a5f116a0..d84c50068f66 100644
---- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
-+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
-@@ -1852,7 +1852,9 @@ static pci_ers_result_t hns3_slot_reset(struct pci_dev *pdev)
- 
- 	/* request the reset */
- 	if (ae_dev->ops->reset_event) {
--		ae_dev->ops->reset_event(pdev, NULL);
-+		if (!ae_dev->override_pci_need_reset)
-+			ae_dev->ops->reset_event(pdev, NULL);
-+
- 		return PCI_ERS_RESULT_RECOVERED;
- 	}
- 
-@@ -2476,6 +2478,8 @@ static int hns3_add_frag(struct hns3_enet_ring *ring, struct hns3_desc *desc,
- 		desc = &ring->desc[ring->next_to_clean];
- 		desc_cb = &ring->desc_cb[ring->next_to_clean];
- 		bd_base_info = le32_to_cpu(desc->rx.bd_base_info);
-+		/* make sure HW write desc complete */
-+		dma_rmb();
- 		if (!hnae3_get_bit(bd_base_info, HNS3_RXD_VLD_B))
- 			return -ENXIO;
- 
-diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
-index d0f654123b9b..3ea72e4d9dc4 100644
---- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
-+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
-@@ -1094,10 +1094,10 @@ static int hclge_log_rocee_ovf_error(struct hclge_dev *hdev)
- 	return 0;
- }
- 
--static int hclge_log_and_clear_rocee_ras_error(struct hclge_dev *hdev)
-+static enum hnae3_reset_type
-+hclge_log_and_clear_rocee_ras_error(struct hclge_dev *hdev)
- {
--	enum hnae3_reset_type reset_type = HNAE3_FUNC_RESET;
--	struct hnae3_ae_dev *ae_dev = hdev->ae_dev;
-+	enum hnae3_reset_type reset_type = HNAE3_NONE_RESET;
- 	struct device *dev = &hdev->pdev->dev;
- 	struct hclge_desc desc[2];
- 	unsigned int status;
-@@ -1110,17 +1110,20 @@ static int hclge_log_and_clear_rocee_ras_error(struct hclge_dev *hdev)
- 	if (ret) {
- 		dev_err(dev, "failed(%d) to query ROCEE RAS INT SRC\n", ret);
- 		/* reset everything for now */
--		HCLGE_SET_DEFAULT_RESET_REQUEST(HNAE3_GLOBAL_RESET);
--		return ret;
-+		return HNAE3_GLOBAL_RESET;
- 	}
- 
- 	status = le32_to_cpu(desc[0].data[0]);
- 
--	if (status & HCLGE_ROCEE_RERR_INT_MASK)
-+	if (status & HCLGE_ROCEE_RERR_INT_MASK) {
- 		dev_warn(dev, "ROCEE RAS AXI rresp error\n");
-+		reset_type = HNAE3_FUNC_RESET;
-+	}
- 
--	if (status & HCLGE_ROCEE_BERR_INT_MASK)
-+	if (status & HCLGE_ROCEE_BERR_INT_MASK) {
- 		dev_warn(dev, "ROCEE RAS AXI bresp error\n");
-+		reset_type = HNAE3_FUNC_RESET;
-+	}
- 
- 	if (status & HCLGE_ROCEE_ECC_INT_MASK) {
- 		dev_warn(dev, "ROCEE RAS 2bit ECC error\n");
-@@ -1132,9 +1135,9 @@ static int hclge_log_and_clear_rocee_ras_error(struct hclge_dev *hdev)
- 		if (ret) {
- 			dev_err(dev, "failed(%d) to process ovf error\n", ret);
- 			/* reset everything for now */
--			HCLGE_SET_DEFAULT_RESET_REQUEST(HNAE3_GLOBAL_RESET);
--			return ret;
-+			return HNAE3_GLOBAL_RESET;
- 		}
-+		reset_type = HNAE3_FUNC_RESET;
- 	}
- 
- 	/* clear error status */
-@@ -1143,12 +1146,10 @@ static int hclge_log_and_clear_rocee_ras_error(struct hclge_dev *hdev)
- 	if (ret) {
- 		dev_err(dev, "failed(%d) to clear ROCEE RAS error\n", ret);
- 		/* reset everything for now */
--		reset_type = HNAE3_GLOBAL_RESET;
-+		return HNAE3_GLOBAL_RESET;
- 	}
- 
--	HCLGE_SET_DEFAULT_RESET_REQUEST(reset_type);
--
--	return ret;
-+	return reset_type;
- }
- 
- static int hclge_config_rocee_ras_interrupt(struct hclge_dev *hdev, bool en)
-@@ -1178,15 +1179,18 @@ static int hclge_config_rocee_ras_interrupt(struct hclge_dev *hdev, bool en)
- 	return ret;
- }
- 
--static int hclge_handle_rocee_ras_error(struct hnae3_ae_dev *ae_dev)
-+static void hclge_handle_rocee_ras_error(struct hnae3_ae_dev *ae_dev)
- {
-+	enum hnae3_reset_type reset_type = HNAE3_NONE_RESET;
- 	struct hclge_dev *hdev = ae_dev->priv;
- 
- 	if (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state) ||
- 	    hdev->pdev->revision < 0x21)
--		return HNAE3_NONE_RESET;
-+		return;
- 
--	return hclge_log_and_clear_rocee_ras_error(hdev);
-+	reset_type = hclge_log_and_clear_rocee_ras_error(hdev);
-+	if (reset_type != HNAE3_NONE_RESET)
-+		HCLGE_SET_DEFAULT_RESET_REQUEST(reset_type);
- }
- 
- static const struct hclge_hw_blk hw_blk[] = {
-@@ -1259,8 +1263,10 @@ pci_ers_result_t hclge_handle_hw_ras_error(struct hnae3_ae_dev *ae_dev)
- 		hclge_handle_all_ras_errors(hdev);
- 	} else {
- 		if (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state) ||
--		    hdev->pdev->revision < 0x21)
-+		    hdev->pdev->revision < 0x21) {
-+			ae_dev->override_pci_need_reset = 1;
- 			return PCI_ERS_RESULT_RECOVERED;
-+		}
- 	}
- 
- 	if (status & HCLGE_RAS_REG_ROCEE_ERR_MASK) {
-@@ -1269,8 +1275,11 @@ pci_ers_result_t hclge_handle_hw_ras_error(struct hnae3_ae_dev *ae_dev)
- 	}
- 
- 	if (status & HCLGE_RAS_REG_NFE_MASK ||
--	    status & HCLGE_RAS_REG_ROCEE_ERR_MASK)
-+	    status & HCLGE_RAS_REG_ROCEE_ERR_MASK) {
-+		ae_dev->override_pci_need_reset = 0;
- 		return PCI_ERS_RESULT_NEED_RESET;
-+	}
-+	ae_dev->override_pci_need_reset = 1;
- 
- 	return PCI_ERS_RESULT_RECOVERED;
- }
-diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
-index 5ecbb1adcf3b..51cfe95f3e24 100644
---- a/drivers/net/ethernet/ibm/ibmvnic.c
-+++ b/drivers/net/ethernet/ibm/ibmvnic.c
-@@ -1885,6 +1885,7 @@ static int do_hard_reset(struct ibmvnic_adapter *adapter,
- 	 */
- 	adapter->state = VNIC_PROBED;
- 
-+	reinit_completion(&adapter->init_done);
- 	rc = init_crq_queue(adapter);
- 	if (rc) {
- 		netdev_err(adapter->netdev,
-@@ -4625,7 +4626,7 @@ static int ibmvnic_reset_init(struct ibmvnic_adapter *adapter)
- 	old_num_rx_queues = adapter->req_rx_queues;
- 	old_num_tx_queues = adapter->req_tx_queues;
- 
--	init_completion(&adapter->init_done);
-+	reinit_completion(&adapter->init_done);
- 	adapter->init_done_rc = 0;
- 	ibmvnic_send_crq_init(adapter);
- 	if (!wait_for_completion_timeout(&adapter->init_done, timeout)) {
-@@ -4680,7 +4681,6 @@ static int ibmvnic_init(struct ibmvnic_adapter *adapter)
- 
- 	adapter->from_passive_init = false;
- 
--	init_completion(&adapter->init_done);
- 	adapter->init_done_rc = 0;
- 	ibmvnic_send_crq_init(adapter);
- 	if (!wait_for_completion_timeout(&adapter->init_done, timeout)) {
-@@ -4759,6 +4759,7 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
- 	INIT_WORK(&adapter->ibmvnic_reset, __ibmvnic_reset);
- 	INIT_LIST_HEAD(&adapter->rwi_list);
- 	spin_lock_init(&adapter->rwi_lock);
-+	init_completion(&adapter->init_done);
- 	adapter->resetting = false;
- 
- 	adapter->mac_change_pending = false;
-diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
-index 189f231075c2..7acc61e4f645 100644
---- a/drivers/net/ethernet/intel/e1000e/netdev.c
-+++ b/drivers/net/ethernet/intel/e1000e/netdev.c
-@@ -2106,7 +2106,7 @@ static int e1000_request_msix(struct e1000_adapter *adapter)
- 	if (strlen(netdev->name) < (IFNAMSIZ - 5))
- 		snprintf(adapter->rx_ring->name,
- 			 sizeof(adapter->rx_ring->name) - 1,
--			 "%s-rx-0", netdev->name);
-+			 "%.14s-rx-0", netdev->name);
- 	else
- 		memcpy(adapter->rx_ring->name, netdev->name, IFNAMSIZ);
- 	err = request_irq(adapter->msix_entries[vector].vector,
-@@ -2122,7 +2122,7 @@ static int e1000_request_msix(struct e1000_adapter *adapter)
- 	if (strlen(netdev->name) < (IFNAMSIZ - 5))
- 		snprintf(adapter->tx_ring->name,
- 			 sizeof(adapter->tx_ring->name) - 1,
--			 "%s-tx-0", netdev->name);
-+			 "%.14s-tx-0", netdev->name);
- 	else
- 		memcpy(adapter->tx_ring->name, netdev->name, IFNAMSIZ);
- 	err = request_irq(adapter->msix_entries[vector].vector,
-@@ -5309,8 +5309,13 @@ static void e1000_watchdog_task(struct work_struct *work)
- 			/* 8000ES2LAN requires a Rx packet buffer work-around
- 			 * on link down event; reset the controller to flush
- 			 * the Rx packet buffer.
-+			 *
-+			 * If the link is lost the controller stops DMA, but
-+			 * if there is queued Tx work it cannot be done.  So
-+			 * reset the controller to flush the Tx packet buffers.
- 			 */
--			if (adapter->flags & FLAG_RX_NEEDS_RESTART)
-+			if ((adapter->flags & FLAG_RX_NEEDS_RESTART) ||
-+			    e1000_desc_unused(tx_ring) + 1 < tx_ring->count)
- 				adapter->flags |= FLAG_RESTART_NOW;
- 			else
- 				pm_schedule_suspend(netdev->dev.parent,
-@@ -5333,14 +5338,6 @@ link_up:
- 	adapter->gotc_old = adapter->stats.gotc;
- 	spin_unlock(&adapter->stats64_lock);
- 
--	/* If the link is lost the controller stops DMA, but
--	 * if there is queued Tx work it cannot be done.  So
--	 * reset the controller to flush the Tx packet buffers.
--	 */
--	if (!netif_carrier_ok(netdev) &&
--	    (e1000_desc_unused(tx_ring) + 1 < tx_ring->count))
--		adapter->flags |= FLAG_RESTART_NOW;
--
- 	/* If reset is necessary, do it outside of interrupt context. */
- 	if (adapter->flags & FLAG_RESTART_NOW) {
- 		schedule_work(&adapter->reset_task);
-@@ -7351,6 +7348,8 @@ static int e1000_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
- 
- 	e1000_print_device_info(adapter);
- 
-+	dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NEVER_SKIP);
-+
- 	if (pci_dev_run_wake(pdev))
- 		pm_runtime_put_noidle(&pdev->dev);
- 
-diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c
-index 2e5693107fa4..8d602247eb44 100644
---- a/drivers/net/ethernet/intel/ice/ice_switch.c
-+++ b/drivers/net/ethernet/intel/ice/ice_switch.c
-@@ -1538,9 +1538,20 @@ ice_remove_rule_internal(struct ice_hw *hw, u8 recp_id,
- 	} else if (!list_elem->vsi_list_info) {
- 		status = ICE_ERR_DOES_NOT_EXIST;
- 		goto exit;
-+	} else if (list_elem->vsi_list_info->ref_cnt > 1) {
-+		/* a ref_cnt > 1 indicates that the vsi_list is being
-+		 * shared by multiple rules. Decrement the ref_cnt and
-+		 * remove this rule, but do not modify the list, as it
-+		 * is in-use by other rules.
-+		 */
-+		list_elem->vsi_list_info->ref_cnt--;
-+		remove_rule = true;
- 	} else {
--		if (list_elem->vsi_list_info->ref_cnt > 1)
--			list_elem->vsi_list_info->ref_cnt--;
-+		/* a ref_cnt of 1 indicates the vsi_list is only used
-+		 * by one rule. However, the original removal request is only
-+		 * for a single VSI. Update the vsi_list first, and only
-+		 * remove the rule if there are no further VSIs in this list.
-+		 */
- 		vsi_handle = f_entry->fltr_info.vsi_handle;
- 		status = ice_rem_update_vsi_list(hw, vsi_handle, list_elem);
- 		if (status)
-diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
-index 16066c2d5b3a..931beac3359d 100644
---- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
-+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
-@@ -1380,13 +1380,9 @@ static void mvpp2_port_reset(struct mvpp2_port *port)
- 	for (i = 0; i < ARRAY_SIZE(mvpp2_ethtool_regs); i++)
- 		mvpp2_read_count(port, &mvpp2_ethtool_regs[i]);
- 
--	val = readl(port->base + MVPP2_GMAC_CTRL_2_REG) &
--		    ~MVPP2_GMAC_PORT_RESET_MASK;
-+	val = readl(port->base + MVPP2_GMAC_CTRL_2_REG) |
-+	      MVPP2_GMAC_PORT_RESET_MASK;
- 	writel(val, port->base + MVPP2_GMAC_CTRL_2_REG);
--
--	while (readl(port->base + MVPP2_GMAC_CTRL_2_REG) &
--	       MVPP2_GMAC_PORT_RESET_MASK)
--		continue;
- }
- 
- /* Change maximum receive size of the port */
-@@ -4543,12 +4539,15 @@ static void mvpp2_gmac_config(struct mvpp2_port *port, unsigned int mode,
- 			      const struct phylink_link_state *state)
- {
- 	u32 an, ctrl0, ctrl2, ctrl4;
-+	u32 old_ctrl2;
- 
- 	an = readl(port->base + MVPP2_GMAC_AUTONEG_CONFIG);
- 	ctrl0 = readl(port->base + MVPP2_GMAC_CTRL_0_REG);
- 	ctrl2 = readl(port->base + MVPP2_GMAC_CTRL_2_REG);
- 	ctrl4 = readl(port->base + MVPP22_GMAC_CTRL_4_REG);
- 
-+	old_ctrl2 = ctrl2;
-+
- 	/* Force link down */
- 	an &= ~MVPP2_GMAC_FORCE_LINK_PASS;
- 	an |= MVPP2_GMAC_FORCE_LINK_DOWN;
-@@ -4621,6 +4620,12 @@ static void mvpp2_gmac_config(struct mvpp2_port *port, unsigned int mode,
- 	writel(ctrl2, port->base + MVPP2_GMAC_CTRL_2_REG);
- 	writel(ctrl4, port->base + MVPP22_GMAC_CTRL_4_REG);
- 	writel(an, port->base + MVPP2_GMAC_AUTONEG_CONFIG);
-+
-+	if (old_ctrl2 & MVPP2_GMAC_PORT_RESET_MASK) {
-+		while (readl(port->base + MVPP2_GMAC_CTRL_2_REG) &
-+		       MVPP2_GMAC_PORT_RESET_MASK)
-+			continue;
-+	}
- }
- 
- static void mvpp2_mac_config(struct net_device *dev, unsigned int mode,
-diff --git a/drivers/net/ethernet/marvell/sky2.c b/drivers/net/ethernet/marvell/sky2.c
-index 57727fe1501e..8b3495ee2b6e 100644
---- a/drivers/net/ethernet/marvell/sky2.c
-+++ b/drivers/net/ethernet/marvell/sky2.c
-@@ -46,6 +46,7 @@
- #include <linux/mii.h>
- #include <linux/of_device.h>
- #include <linux/of_net.h>
-+#include <linux/dmi.h>
- 
- #include <asm/irq.h>
- 
-@@ -93,7 +94,7 @@ static int copybreak __read_mostly = 128;
- module_param(copybreak, int, 0);
- MODULE_PARM_DESC(copybreak, "Receive copy threshold");
- 
--static int disable_msi = 0;
-+static int disable_msi = -1;
- module_param(disable_msi, int, 0);
- MODULE_PARM_DESC(disable_msi, "Disable Message Signaled Interrupt (MSI)");
- 
-@@ -4917,6 +4918,24 @@ static const char *sky2_name(u8 chipid, char *buf, int sz)
- 	return buf;
- }
- 
-+static const struct dmi_system_id msi_blacklist[] = {
-+	{
-+		.ident = "Dell Inspiron 1545",
-+		.matches = {
-+			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
-+			DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 1545"),
-+		},
-+	},
-+	{
-+		.ident = "Gateway P-79",
-+		.matches = {
-+			DMI_MATCH(DMI_SYS_VENDOR, "Gateway"),
-+			DMI_MATCH(DMI_PRODUCT_NAME, "P-79"),
-+		},
-+	},
-+	{}
-+};
-+
- static int sky2_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
- {
- 	struct net_device *dev, *dev1;
-@@ -5028,6 +5047,9 @@ static int sky2_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
- 		goto err_out_free_pci;
- 	}
- 
-+	if (disable_msi == -1)
-+		disable_msi = !!dmi_check_system(msi_blacklist);
-+
- 	if (!disable_msi && pci_enable_msi(pdev) == 0) {
- 		err = sky2_test_msi(hw);
- 		if (err) {
-diff --git a/drivers/net/ethernet/mellanox/mlx4/cmd.c b/drivers/net/ethernet/mellanox/mlx4/cmd.c
-index e65bc3c95630..857588e2488d 100644
---- a/drivers/net/ethernet/mellanox/mlx4/cmd.c
-+++ b/drivers/net/ethernet/mellanox/mlx4/cmd.c
-@@ -2645,6 +2645,8 @@ int mlx4_cmd_use_events(struct mlx4_dev *dev)
- 	if (!priv->cmd.context)
- 		return -ENOMEM;
- 
-+	if (mlx4_is_mfunc(dev))
-+		mutex_lock(&priv->cmd.slave_cmd_mutex);
- 	down_write(&priv->cmd.switch_sem);
- 	for (i = 0; i < priv->cmd.max_cmds; ++i) {
- 		priv->cmd.context[i].token = i;
-@@ -2670,6 +2672,8 @@ int mlx4_cmd_use_events(struct mlx4_dev *dev)
- 	down(&priv->cmd.poll_sem);
- 	priv->cmd.use_events = 1;
- 	up_write(&priv->cmd.switch_sem);
-+	if (mlx4_is_mfunc(dev))
-+		mutex_unlock(&priv->cmd.slave_cmd_mutex);
- 
- 	return err;
- }
-@@ -2682,6 +2686,8 @@ void mlx4_cmd_use_polling(struct mlx4_dev *dev)
- 	struct mlx4_priv *priv = mlx4_priv(dev);
- 	int i;
- 
-+	if (mlx4_is_mfunc(dev))
-+		mutex_lock(&priv->cmd.slave_cmd_mutex);
- 	down_write(&priv->cmd.switch_sem);
- 	priv->cmd.use_events = 0;
- 
-@@ -2689,9 +2695,12 @@ void mlx4_cmd_use_polling(struct mlx4_dev *dev)
- 		down(&priv->cmd.event_sem);
- 
- 	kfree(priv->cmd.context);
-+	priv->cmd.context = NULL;
- 
- 	up(&priv->cmd.poll_sem);
- 	up_write(&priv->cmd.switch_sem);
-+	if (mlx4_is_mfunc(dev))
-+		mutex_unlock(&priv->cmd.slave_cmd_mutex);
- }
- 
- struct mlx4_cmd_mailbox *mlx4_alloc_cmd_mailbox(struct mlx4_dev *dev)
-diff --git a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
-index eb13d3618162..4356f3a58002 100644
---- a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
-+++ b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
-@@ -2719,13 +2719,13 @@ static int qp_get_mtt_size(struct mlx4_qp_context *qpc)
- 	int total_pages;
- 	int total_mem;
- 	int page_offset = (be32_to_cpu(qpc->params2) >> 6) & 0x3f;
-+	int tot;
- 
- 	sq_size = 1 << (log_sq_size + log_sq_sride + 4);
- 	rq_size = (srq|rss|xrc) ? 0 : (1 << (log_rq_size + log_rq_stride + 4));
- 	total_mem = sq_size + rq_size;
--	total_pages =
--		roundup_pow_of_two((total_mem + (page_offset << 6)) >>
--				   page_shift);
-+	tot = (total_mem + (page_offset << 6)) >> page_shift;
-+	total_pages = !tot ? 1 : roundup_pow_of_two(tot);
- 
- 	return total_pages;
- }
-diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
-index eac245a93f91..4ab0d030b544 100644
---- a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
-+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
-@@ -122,7 +122,9 @@ out:
- 	return err;
- }
- 
--/* xoff = ((301+2.16 * len [m]) * speed [Gbps] + 2.72 MTU [B]) */
-+/* xoff = ((301+2.16 * len [m]) * speed [Gbps] + 2.72 MTU [B])
-+ * minimum speed value is 40Gbps
-+ */
- static u32 calculate_xoff(struct mlx5e_priv *priv, unsigned int mtu)
- {
- 	u32 speed;
-@@ -130,10 +132,9 @@ static u32 calculate_xoff(struct mlx5e_priv *priv, unsigned int mtu)
- 	int err;
- 
- 	err = mlx5e_port_linkspeed(priv->mdev, &speed);
--	if (err) {
--		mlx5_core_warn(priv->mdev, "cannot get port speed\n");
--		return 0;
--	}
-+	if (err)
-+		speed = SPEED_40000;
-+	speed = max_t(u32, speed, SPEED_40000);
- 
- 	xoff = (301 + 216 * priv->dcbx.cable_len / 100) * speed / 1000 + 272 * mtu / 100;
- 
-@@ -142,7 +143,7 @@ static u32 calculate_xoff(struct mlx5e_priv *priv, unsigned int mtu)
- }
- 
- static int update_xoff_threshold(struct mlx5e_port_buffer *port_buffer,
--				 u32 xoff, unsigned int mtu)
-+				 u32 xoff, unsigned int max_mtu)
- {
- 	int i;
- 
-@@ -154,11 +155,12 @@ static int update_xoff_threshold(struct mlx5e_port_buffer *port_buffer,
- 		}
- 
- 		if (port_buffer->buffer[i].size <
--		    (xoff + mtu + (1 << MLX5E_BUFFER_CELL_SHIFT)))
-+		    (xoff + max_mtu + (1 << MLX5E_BUFFER_CELL_SHIFT)))
- 			return -ENOMEM;
- 
- 		port_buffer->buffer[i].xoff = port_buffer->buffer[i].size - xoff;
--		port_buffer->buffer[i].xon  = port_buffer->buffer[i].xoff - mtu;
-+		port_buffer->buffer[i].xon  =
-+			port_buffer->buffer[i].xoff - max_mtu;
- 	}
- 
- 	return 0;
-@@ -166,7 +168,7 @@ static int update_xoff_threshold(struct mlx5e_port_buffer *port_buffer,
- 
- /**
-  * update_buffer_lossy()
-- *   mtu: device's MTU
-+ *   max_mtu: netdev's max_mtu
-  *   pfc_en: <input> current pfc configuration
-  *   buffer: <input> current prio to buffer mapping
-  *   xoff:   <input> xoff value
-@@ -183,7 +185,7 @@ static int update_xoff_threshold(struct mlx5e_port_buffer *port_buffer,
-  *     Return 0 if no error.
-  *     Set change to true if buffer configuration is modified.
-  */
--static int update_buffer_lossy(unsigned int mtu,
-+static int update_buffer_lossy(unsigned int max_mtu,
- 			       u8 pfc_en, u8 *buffer, u32 xoff,
- 			       struct mlx5e_port_buffer *port_buffer,
- 			       bool *change)
-@@ -220,7 +222,7 @@ static int update_buffer_lossy(unsigned int mtu,
- 	}
- 
- 	if (changed) {
--		err = update_xoff_threshold(port_buffer, xoff, mtu);
-+		err = update_xoff_threshold(port_buffer, xoff, max_mtu);
- 		if (err)
- 			return err;
- 
-@@ -230,6 +232,7 @@ static int update_buffer_lossy(unsigned int mtu,
- 	return 0;
- }
- 
-+#define MINIMUM_MAX_MTU 9216
- int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
- 				    u32 change, unsigned int mtu,
- 				    struct ieee_pfc *pfc,
-@@ -241,12 +244,14 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
- 	bool update_prio2buffer = false;
- 	u8 buffer[MLX5E_MAX_PRIORITY];
- 	bool update_buffer = false;
-+	unsigned int max_mtu;
- 	u32 total_used = 0;
- 	u8 curr_pfc_en;
- 	int err;
- 	int i;
- 
- 	mlx5e_dbg(HW, priv, "%s: change=%x\n", __func__, change);
-+	max_mtu = max_t(unsigned int, priv->netdev->max_mtu, MINIMUM_MAX_MTU);
- 
- 	err = mlx5e_port_query_buffer(priv, &port_buffer);
- 	if (err)
-@@ -254,7 +259,7 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
- 
- 	if (change & MLX5E_PORT_BUFFER_CABLE_LEN) {
- 		update_buffer = true;
--		err = update_xoff_threshold(&port_buffer, xoff, mtu);
-+		err = update_xoff_threshold(&port_buffer, xoff, max_mtu);
- 		if (err)
- 			return err;
- 	}
-@@ -264,7 +269,7 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
- 		if (err)
- 			return err;
- 
--		err = update_buffer_lossy(mtu, pfc->pfc_en, buffer, xoff,
-+		err = update_buffer_lossy(max_mtu, pfc->pfc_en, buffer, xoff,
- 					  &port_buffer, &update_buffer);
- 		if (err)
- 			return err;
-@@ -276,8 +281,8 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
- 		if (err)
- 			return err;
- 
--		err = update_buffer_lossy(mtu, curr_pfc_en, prio2buffer, xoff,
--					  &port_buffer, &update_buffer);
-+		err = update_buffer_lossy(max_mtu, curr_pfc_en, prio2buffer,
-+					  xoff, &port_buffer, &update_buffer);
- 		if (err)
- 			return err;
- 	}
-@@ -301,7 +306,7 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
- 			return -EINVAL;
- 
- 		update_buffer = true;
--		err = update_xoff_threshold(&port_buffer, xoff, mtu);
-+		err = update_xoff_threshold(&port_buffer, xoff, max_mtu);
- 		if (err)
- 			return err;
- 	}
-@@ -309,7 +314,7 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
- 	/* Need to update buffer configuration if xoff value is changed */
- 	if (!update_buffer && xoff != priv->dcbx.xoff) {
- 		update_buffer = true;
--		err = update_xoff_threshold(&port_buffer, xoff, mtu);
-+		err = update_xoff_threshold(&port_buffer, xoff, max_mtu);
- 		if (err)
- 			return err;
- 	}
-diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_common.c b/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
-index 3078491cc0d0..1539cf3de5dc 100644
---- a/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
-+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
-@@ -45,7 +45,9 @@ int mlx5e_create_tir(struct mlx5_core_dev *mdev,
- 	if (err)
- 		return err;
- 
-+	mutex_lock(&mdev->mlx5e_res.td.list_lock);
- 	list_add(&tir->list, &mdev->mlx5e_res.td.tirs_list);
-+	mutex_unlock(&mdev->mlx5e_res.td.list_lock);
- 
- 	return 0;
- }
-@@ -53,8 +55,10 @@ int mlx5e_create_tir(struct mlx5_core_dev *mdev,
- void mlx5e_destroy_tir(struct mlx5_core_dev *mdev,
- 		       struct mlx5e_tir *tir)
- {
-+	mutex_lock(&mdev->mlx5e_res.td.list_lock);
- 	mlx5_core_destroy_tir(mdev, tir->tirn);
- 	list_del(&tir->list);
-+	mutex_unlock(&mdev->mlx5e_res.td.list_lock);
- }
- 
- static int mlx5e_create_mkey(struct mlx5_core_dev *mdev, u32 pdn,
-@@ -114,6 +118,7 @@ int mlx5e_create_mdev_resources(struct mlx5_core_dev *mdev)
- 	}
- 
- 	INIT_LIST_HEAD(&mdev->mlx5e_res.td.tirs_list);
-+	mutex_init(&mdev->mlx5e_res.td.list_lock);
- 
- 	return 0;
- 
-@@ -141,15 +146,17 @@ int mlx5e_refresh_tirs(struct mlx5e_priv *priv, bool enable_uc_lb)
- {
- 	struct mlx5_core_dev *mdev = priv->mdev;
- 	struct mlx5e_tir *tir;
--	int err  = -ENOMEM;
-+	int err  = 0;
- 	u32 tirn = 0;
- 	int inlen;
- 	void *in;
- 
- 	inlen = MLX5_ST_SZ_BYTES(modify_tir_in);
- 	in = kvzalloc(inlen, GFP_KERNEL);
--	if (!in)
-+	if (!in) {
-+		err = -ENOMEM;
- 		goto out;
-+	}
- 
- 	if (enable_uc_lb)
- 		MLX5_SET(modify_tir_in, in, ctx.self_lb_block,
-@@ -157,6 +164,7 @@ int mlx5e_refresh_tirs(struct mlx5e_priv *priv, bool enable_uc_lb)
- 
- 	MLX5_SET(modify_tir_in, in, bitmask.self_lb_en, 1);
- 
-+	mutex_lock(&mdev->mlx5e_res.td.list_lock);
- 	list_for_each_entry(tir, &mdev->mlx5e_res.td.tirs_list, list) {
- 		tirn = tir->tirn;
- 		err = mlx5_core_modify_tir(mdev, tirn, in, inlen);
-@@ -168,6 +176,7 @@ out:
- 	kvfree(in);
- 	if (err)
- 		netdev_err(priv->netdev, "refresh tir(0x%x) failed, %d\n", tirn, err);
-+	mutex_unlock(&mdev->mlx5e_res.td.list_lock);
- 
- 	return err;
- }
-diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
-index 47233b9a4f81..e6099f51d25f 100644
---- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
-+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
-@@ -357,6 +357,9 @@ int mlx5e_ethtool_set_channels(struct mlx5e_priv *priv,
- 
- 	if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) {
- 		priv->channels.params = new_channels.params;
-+		if (!netif_is_rxfh_configured(priv->netdev))
-+			mlx5e_build_default_indir_rqt(priv->rss_params.indirection_rqt,
-+						      MLX5E_INDIR_RQT_SIZE, count);
- 		goto out;
- 	}
- 
-diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
-index 5b492b67f4e1..13c48883ed61 100644
---- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
-+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
-@@ -1812,7 +1812,7 @@ int mlx5_eswitch_set_vport_mac(struct mlx5_eswitch *esw,
- 	u64 node_guid;
- 	int err = 0;
- 
--	if (!MLX5_CAP_GEN(esw->dev, vport_group_manager))
-+	if (!esw || !MLX5_CAP_GEN(esw->dev, vport_group_manager))
- 		return -EPERM;
- 	if (!LEGAL_VPORT(esw, vport) || is_multicast_ether_addr(mac))
- 		return -EINVAL;
-@@ -1886,7 +1886,7 @@ int mlx5_eswitch_get_vport_config(struct mlx5_eswitch *esw,
- {
- 	struct mlx5_vport *evport;
- 
--	if (!MLX5_CAP_GEN(esw->dev, vport_group_manager))
-+	if (!esw || !MLX5_CAP_GEN(esw->dev, vport_group_manager))
- 		return -EPERM;
- 	if (!LEGAL_VPORT(esw, vport))
- 		return -EINVAL;
-@@ -2059,19 +2059,24 @@ static int normalize_vports_min_rate(struct mlx5_eswitch *esw, u32 divider)
- int mlx5_eswitch_set_vport_rate(struct mlx5_eswitch *esw, int vport,
- 				u32 max_rate, u32 min_rate)
- {
--	u32 fw_max_bw_share = MLX5_CAP_QOS(esw->dev, max_tsar_bw_share);
--	bool min_rate_supported = MLX5_CAP_QOS(esw->dev, esw_bw_share) &&
--					fw_max_bw_share >= MLX5_MIN_BW_SHARE;
--	bool max_rate_supported = MLX5_CAP_QOS(esw->dev, esw_rate_limit);
- 	struct mlx5_vport *evport;
-+	u32 fw_max_bw_share;
- 	u32 previous_min_rate;
- 	u32 divider;
-+	bool min_rate_supported;
-+	bool max_rate_supported;
- 	int err = 0;
- 
- 	if (!ESW_ALLOWED(esw))
- 		return -EPERM;
- 	if (!LEGAL_VPORT(esw, vport))
- 		return -EINVAL;
-+
-+	fw_max_bw_share = MLX5_CAP_QOS(esw->dev, max_tsar_bw_share);
-+	min_rate_supported = MLX5_CAP_QOS(esw->dev, esw_bw_share) &&
-+				fw_max_bw_share >= MLX5_MIN_BW_SHARE;
-+	max_rate_supported = MLX5_CAP_QOS(esw->dev, esw_rate_limit);
-+
- 	if ((min_rate && !min_rate_supported) || (max_rate && !max_rate_supported))
- 		return -EOPNOTSUPP;
- 
-diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fpga/tls.c b/drivers/net/ethernet/mellanox/mlx5/core/fpga/tls.c
-index 5cf5f2a9d51f..8de64e88c670 100644
---- a/drivers/net/ethernet/mellanox/mlx5/core/fpga/tls.c
-+++ b/drivers/net/ethernet/mellanox/mlx5/core/fpga/tls.c
-@@ -217,15 +217,21 @@ int mlx5_fpga_tls_resync_rx(struct mlx5_core_dev *mdev, u32 handle, u32 seq,
- 	void *cmd;
- 	int ret;
- 
-+	rcu_read_lock();
-+	flow = idr_find(&mdev->fpga->tls->rx_idr, ntohl(handle));
-+	rcu_read_unlock();
-+
-+	if (!flow) {
-+		WARN_ONCE(1, "Received NULL pointer for handle\n");
-+		return -EINVAL;
-+	}
-+
- 	buf = kzalloc(size, GFP_ATOMIC);
- 	if (!buf)
- 		return -ENOMEM;
- 
- 	cmd = (buf + 1);
- 
--	rcu_read_lock();
--	flow = idr_find(&mdev->fpga->tls->rx_idr, ntohl(handle));
--	rcu_read_unlock();
- 	mlx5_fpga_tls_flow_to_cmd(flow, cmd);
- 
- 	MLX5_SET(tls_cmd, cmd, swid, ntohl(handle));
-@@ -238,6 +244,8 @@ int mlx5_fpga_tls_resync_rx(struct mlx5_core_dev *mdev, u32 handle, u32 seq,
- 	buf->complete = mlx_tls_kfree_complete;
- 
- 	ret = mlx5_fpga_sbu_conn_sendmsg(mdev->fpga->tls->conn, buf);
-+	if (ret < 0)
-+		kfree(buf);
- 
- 	return ret;
- }
-diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
-index be81b319b0dc..694edd899322 100644
---- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
-+++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
-@@ -163,26 +163,6 @@ static struct mlx5_profile profile[] = {
- 			.size	= 8,
- 			.limit	= 4
- 		},
--		.mr_cache[16]	= {
--			.size	= 8,
--			.limit	= 4
--		},
--		.mr_cache[17]	= {
--			.size	= 8,
--			.limit	= 4
--		},
--		.mr_cache[18]	= {
--			.size	= 8,
--			.limit	= 4
--		},
--		.mr_cache[19]	= {
--			.size	= 4,
--			.limit	= 2
--		},
--		.mr_cache[20]	= {
--			.size	= 4,
--			.limit	= 2
--		},
- 	},
- };
- 
-diff --git a/drivers/net/ethernet/mellanox/mlx5/core/qp.c b/drivers/net/ethernet/mellanox/mlx5/core/qp.c
-index 370ca94b6775..c7c2920c05c4 100644
---- a/drivers/net/ethernet/mellanox/mlx5/core/qp.c
-+++ b/drivers/net/ethernet/mellanox/mlx5/core/qp.c
-@@ -40,6 +40,9 @@
- #include "mlx5_core.h"
- #include "lib/eq.h"
- 
-+static int mlx5_core_drain_dct(struct mlx5_core_dev *dev,
-+			       struct mlx5_core_dct *dct);
-+
- static struct mlx5_core_rsc_common *
- mlx5_get_rsc(struct mlx5_qp_table *table, u32 rsn)
- {
-@@ -227,13 +230,42 @@ static void destroy_resource_common(struct mlx5_core_dev *dev,
- 	wait_for_completion(&qp->common.free);
- }
- 
-+static int _mlx5_core_destroy_dct(struct mlx5_core_dev *dev,
-+				  struct mlx5_core_dct *dct, bool need_cleanup)
-+{
-+	u32 out[MLX5_ST_SZ_DW(destroy_dct_out)] = {0};
-+	u32 in[MLX5_ST_SZ_DW(destroy_dct_in)]   = {0};
-+	struct mlx5_core_qp *qp = &dct->mqp;
-+	int err;
-+
-+	err = mlx5_core_drain_dct(dev, dct);
-+	if (err) {
-+		if (dev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR) {
-+			goto destroy;
-+		} else {
-+			mlx5_core_warn(
-+				dev, "failed drain DCT 0x%x with error 0x%x\n",
-+				qp->qpn, err);
-+			return err;
-+		}
-+	}
-+	wait_for_completion(&dct->drained);
-+destroy:
-+	if (need_cleanup)
-+		destroy_resource_common(dev, &dct->mqp);
-+	MLX5_SET(destroy_dct_in, in, opcode, MLX5_CMD_OP_DESTROY_DCT);
-+	MLX5_SET(destroy_dct_in, in, dctn, qp->qpn);
-+	MLX5_SET(destroy_dct_in, in, uid, qp->uid);
-+	err = mlx5_cmd_exec(dev, (void *)&in, sizeof(in),
-+			    (void *)&out, sizeof(out));
-+	return err;
-+}
-+
- int mlx5_core_create_dct(struct mlx5_core_dev *dev,
- 			 struct mlx5_core_dct *dct,
- 			 u32 *in, int inlen)
- {
- 	u32 out[MLX5_ST_SZ_DW(create_dct_out)]   = {0};
--	u32 din[MLX5_ST_SZ_DW(destroy_dct_in)]   = {0};
--	u32 dout[MLX5_ST_SZ_DW(destroy_dct_out)] = {0};
- 	struct mlx5_core_qp *qp = &dct->mqp;
- 	int err;
- 
-@@ -254,11 +286,7 @@ int mlx5_core_create_dct(struct mlx5_core_dev *dev,
- 
- 	return 0;
- err_cmd:
--	MLX5_SET(destroy_dct_in, din, opcode, MLX5_CMD_OP_DESTROY_DCT);
--	MLX5_SET(destroy_dct_in, din, dctn, qp->qpn);
--	MLX5_SET(destroy_dct_in, din, uid, qp->uid);
--	mlx5_cmd_exec(dev, (void *)&in, sizeof(din),
--		      (void *)&out, sizeof(dout));
-+	_mlx5_core_destroy_dct(dev, dct, false);
- 	return err;
- }
- EXPORT_SYMBOL_GPL(mlx5_core_create_dct);
-@@ -323,29 +351,7 @@ static int mlx5_core_drain_dct(struct mlx5_core_dev *dev,
- int mlx5_core_destroy_dct(struct mlx5_core_dev *dev,
- 			  struct mlx5_core_dct *dct)
- {
--	u32 out[MLX5_ST_SZ_DW(destroy_dct_out)] = {0};
--	u32 in[MLX5_ST_SZ_DW(destroy_dct_in)]   = {0};
--	struct mlx5_core_qp *qp = &dct->mqp;
--	int err;
--
--	err = mlx5_core_drain_dct(dev, dct);
--	if (err) {
--		if (dev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR) {
--			goto destroy;
--		} else {
--			mlx5_core_warn(dev, "failed drain DCT 0x%x with error 0x%x\n", qp->qpn, err);
--			return err;
--		}
--	}
--	wait_for_completion(&dct->drained);
--destroy:
--	destroy_resource_common(dev, &dct->mqp);
--	MLX5_SET(destroy_dct_in, in, opcode, MLX5_CMD_OP_DESTROY_DCT);
--	MLX5_SET(destroy_dct_in, in, dctn, qp->qpn);
--	MLX5_SET(destroy_dct_in, in, uid, qp->uid);
--	err = mlx5_cmd_exec(dev, (void *)&in, sizeof(in),
--			    (void *)&out, sizeof(out));
--	return err;
-+	return _mlx5_core_destroy_dct(dev, dct, true);
- }
- EXPORT_SYMBOL_GPL(mlx5_core_destroy_dct);
- 
-diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
-index b65e274b02e9..cbdee5164be7 100644
---- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
-+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
-@@ -2105,7 +2105,7 @@ static void mlxsw_sp_port_get_prio_strings(u8 **p, int prio)
- 	int i;
- 
- 	for (i = 0; i < MLXSW_SP_PORT_HW_PRIO_STATS_LEN; i++) {
--		snprintf(*p, ETH_GSTRING_LEN, "%s_%d",
-+		snprintf(*p, ETH_GSTRING_LEN, "%.29s_%.1d",
- 			 mlxsw_sp_port_hw_prio_stats[i].str, prio);
- 		*p += ETH_GSTRING_LEN;
- 	}
-@@ -2116,7 +2116,7 @@ static void mlxsw_sp_port_get_tc_strings(u8 **p, int tc)
- 	int i;
- 
- 	for (i = 0; i < MLXSW_SP_PORT_HW_TC_STATS_LEN; i++) {
--		snprintf(*p, ETH_GSTRING_LEN, "%s_%d",
-+		snprintf(*p, ETH_GSTRING_LEN, "%.29s_%.1d",
- 			 mlxsw_sp_port_hw_tc_stats[i].str, tc);
- 		*p += ETH_GSTRING_LEN;
- 	}
-diff --git a/drivers/net/ethernet/microchip/lan743x_main.c b/drivers/net/ethernet/microchip/lan743x_main.c
-index 4d1b4a24907f..13e6bf13ac4d 100644
---- a/drivers/net/ethernet/microchip/lan743x_main.c
-+++ b/drivers/net/ethernet/microchip/lan743x_main.c
-@@ -585,8 +585,7 @@ static int lan743x_intr_open(struct lan743x_adapter *adapter)
- 
- 		if (adapter->csr.flags &
- 		   LAN743X_CSR_FLAG_SUPPORTS_INTR_AUTO_SET_CLR) {
--			flags = LAN743X_VECTOR_FLAG_VECTOR_ENABLE_AUTO_CLEAR |
--				LAN743X_VECTOR_FLAG_VECTOR_ENABLE_AUTO_SET |
-+			flags = LAN743X_VECTOR_FLAG_VECTOR_ENABLE_AUTO_SET |
- 				LAN743X_VECTOR_FLAG_SOURCE_ENABLE_AUTO_SET |
- 				LAN743X_VECTOR_FLAG_SOURCE_ENABLE_AUTO_CLEAR |
- 				LAN743X_VECTOR_FLAG_SOURCE_STATUS_AUTO_CLEAR;
-@@ -599,12 +598,6 @@ static int lan743x_intr_open(struct lan743x_adapter *adapter)
- 			/* map TX interrupt to vector */
- 			int_vec_map1 |= INT_VEC_MAP1_TX_VEC_(index, vector);
- 			lan743x_csr_write(adapter, INT_VEC_MAP1, int_vec_map1);
--			if (flags &
--			    LAN743X_VECTOR_FLAG_VECTOR_ENABLE_AUTO_CLEAR) {
--				int_vec_en_auto_clr |= INT_VEC_EN_(vector);
--				lan743x_csr_write(adapter, INT_VEC_EN_AUTO_CLR,
--						  int_vec_en_auto_clr);
--			}
- 
- 			/* Remove TX interrupt from shared mask */
- 			intr->vector_list[0].int_mask &= ~int_bit;
-@@ -1902,7 +1895,17 @@ static int lan743x_rx_next_index(struct lan743x_rx *rx, int index)
- 	return ((++index) % rx->ring_size);
- }
- 
--static int lan743x_rx_allocate_ring_element(struct lan743x_rx *rx, int index)
-+static struct sk_buff *lan743x_rx_allocate_skb(struct lan743x_rx *rx)
-+{
-+	int length = 0;
-+
-+	length = (LAN743X_MAX_FRAME_SIZE + ETH_HLEN + 4 + RX_HEAD_PADDING);
-+	return __netdev_alloc_skb(rx->adapter->netdev,
-+				  length, GFP_ATOMIC | GFP_DMA);
-+}
-+
-+static int lan743x_rx_init_ring_element(struct lan743x_rx *rx, int index,
-+					struct sk_buff *skb)
- {
- 	struct lan743x_rx_buffer_info *buffer_info;
- 	struct lan743x_rx_descriptor *descriptor;
-@@ -1911,9 +1914,7 @@ static int lan743x_rx_allocate_ring_element(struct lan743x_rx *rx, int index)
- 	length = (LAN743X_MAX_FRAME_SIZE + ETH_HLEN + 4 + RX_HEAD_PADDING);
- 	descriptor = &rx->ring_cpu_ptr[index];
- 	buffer_info = &rx->buffer_info[index];
--	buffer_info->skb = __netdev_alloc_skb(rx->adapter->netdev,
--					      length,
--					      GFP_ATOMIC | GFP_DMA);
-+	buffer_info->skb = skb;
- 	if (!(buffer_info->skb))
- 		return -ENOMEM;
- 	buffer_info->dma_ptr = dma_map_single(&rx->adapter->pdev->dev,
-@@ -2060,8 +2061,19 @@ static int lan743x_rx_process_packet(struct lan743x_rx *rx)
- 		/* packet is available */
- 		if (first_index == last_index) {
- 			/* single buffer packet */
-+			struct sk_buff *new_skb = NULL;
- 			int packet_length;
- 
-+			new_skb = lan743x_rx_allocate_skb(rx);
-+			if (!new_skb) {
-+				/* failed to allocate next skb.
-+				 * Memory is very low.
-+				 * Drop this packet and reuse buffer.
-+				 */
-+				lan743x_rx_reuse_ring_element(rx, first_index);
-+				goto process_extension;
-+			}
-+
- 			buffer_info = &rx->buffer_info[first_index];
- 			skb = buffer_info->skb;
- 			descriptor = &rx->ring_cpu_ptr[first_index];
-@@ -2081,7 +2093,7 @@ static int lan743x_rx_process_packet(struct lan743x_rx *rx)
- 			skb_put(skb, packet_length - 4);
- 			skb->protocol = eth_type_trans(skb,
- 						       rx->adapter->netdev);
--			lan743x_rx_allocate_ring_element(rx, first_index);
-+			lan743x_rx_init_ring_element(rx, first_index, new_skb);
- 		} else {
- 			int index = first_index;
- 
-@@ -2094,26 +2106,23 @@ static int lan743x_rx_process_packet(struct lan743x_rx *rx)
- 			if (first_index <= last_index) {
- 				while ((index >= first_index) &&
- 				       (index <= last_index)) {
--					lan743x_rx_release_ring_element(rx,
--									index);
--					lan743x_rx_allocate_ring_element(rx,
--									 index);
-+					lan743x_rx_reuse_ring_element(rx,
-+								      index);
- 					index = lan743x_rx_next_index(rx,
- 								      index);
- 				}
- 			} else {
- 				while ((index >= first_index) ||
- 				       (index <= last_index)) {
--					lan743x_rx_release_ring_element(rx,
--									index);
--					lan743x_rx_allocate_ring_element(rx,
--									 index);
-+					lan743x_rx_reuse_ring_element(rx,
-+								      index);
- 					index = lan743x_rx_next_index(rx,
- 								      index);
- 				}
- 			}
- 		}
- 
-+process_extension:
- 		if (extension_index >= 0) {
- 			descriptor = &rx->ring_cpu_ptr[extension_index];
- 			buffer_info = &rx->buffer_info[extension_index];
-@@ -2290,7 +2299,9 @@ static int lan743x_rx_ring_init(struct lan743x_rx *rx)
- 
- 	rx->last_head = 0;
- 	for (index = 0; index < rx->ring_size; index++) {
--		ret = lan743x_rx_allocate_ring_element(rx, index);
-+		struct sk_buff *new_skb = lan743x_rx_allocate_skb(rx);
-+
-+		ret = lan743x_rx_init_ring_element(rx, index, new_skb);
- 		if (ret)
- 			goto cleanup;
- 	}
-diff --git a/drivers/net/ethernet/mscc/ocelot_board.c b/drivers/net/ethernet/mscc/ocelot_board.c
-index ca3ea2fbfcd0..80d87798c62b 100644
---- a/drivers/net/ethernet/mscc/ocelot_board.c
-+++ b/drivers/net/ethernet/mscc/ocelot_board.c
-@@ -267,6 +267,7 @@ static int mscc_ocelot_probe(struct platform_device *pdev)
- 		struct phy *serdes;
- 		void __iomem *regs;
- 		char res_name[8];
-+		int phy_mode;
- 		u32 port;
- 
- 		if (of_property_read_u32(portnp, "reg", &port))
-@@ -292,11 +293,11 @@ static int mscc_ocelot_probe(struct platform_device *pdev)
- 		if (err)
- 			return err;
- 
--		err = of_get_phy_mode(portnp);
--		if (err < 0)
-+		phy_mode = of_get_phy_mode(portnp);
-+		if (phy_mode < 0)
- 			ocelot->ports[port]->phy_mode = PHY_INTERFACE_MODE_NA;
- 		else
--			ocelot->ports[port]->phy_mode = err;
-+			ocelot->ports[port]->phy_mode = phy_mode;
- 
- 		switch (ocelot->ports[port]->phy_mode) {
- 		case PHY_INTERFACE_MODE_NA:
-@@ -304,6 +305,13 @@ static int mscc_ocelot_probe(struct platform_device *pdev)
- 		case PHY_INTERFACE_MODE_SGMII:
- 			break;
- 		case PHY_INTERFACE_MODE_QSGMII:
-+			/* Ensure clock signals and speed is set on all
-+			 * QSGMII links
-+			 */
-+			ocelot_port_writel(ocelot->ports[port],
-+					   DEV_CLOCK_CFG_LINK_SPEED
-+					   (OCELOT_SPEED_1000),
-+					   DEV_CLOCK_CFG);
- 			break;
- 		default:
- 			dev_err(ocelot->dev,
-diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_repr.c b/drivers/net/ethernet/netronome/nfp/nfp_net_repr.c
-index 69d7aebda09b..73db94e55fd0 100644
---- a/drivers/net/ethernet/netronome/nfp/nfp_net_repr.c
-+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_repr.c
-@@ -196,7 +196,7 @@ static netdev_tx_t nfp_repr_xmit(struct sk_buff *skb, struct net_device *netdev)
- 	ret = dev_queue_xmit(skb);
- 	nfp_repr_inc_tx_stats(netdev, len, ret);
- 
--	return ret;
-+	return NETDEV_TX_OK;
- }
- 
- static int nfp_repr_stop(struct net_device *netdev)
-@@ -384,7 +384,7 @@ int nfp_repr_init(struct nfp_app *app, struct net_device *netdev,
- 	netdev->features &= ~(NETIF_F_TSO | NETIF_F_TSO6);
- 	netdev->gso_max_segs = NFP_NET_LSO_MAX_SEGS;
- 
--	netdev->priv_flags |= IFF_NO_QUEUE;
-+	netdev->priv_flags |= IFF_NO_QUEUE | IFF_DISABLE_NETPOLL;
- 	netdev->features |= NETIF_F_LLTX;
- 
- 	if (nfp_app_has_tc(app)) {
-diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
-index 6e36b88ca7c9..365cddbfc684 100644
---- a/drivers/net/ethernet/realtek/r8169.c
-+++ b/drivers/net/ethernet/realtek/r8169.c
-@@ -28,6 +28,7 @@
- #include <linux/pm_runtime.h>
- #include <linux/firmware.h>
- #include <linux/prefetch.h>
-+#include <linux/pci-aspm.h>
- #include <linux/ipv6.h>
- #include <net/ip6_checksum.h>
- 
-@@ -5332,7 +5333,7 @@ static void rtl_hw_start_8168(struct rtl8169_private *tp)
- 	tp->cp_cmd |= PktCntrDisable | INTT_1;
- 	RTL_W16(tp, CPlusCmd, tp->cp_cmd);
- 
--	RTL_W16(tp, IntrMitigate, 0x5151);
-+	RTL_W16(tp, IntrMitigate, 0x5100);
- 
- 	/* Work around for RxFIFO overflow. */
- 	if (tp->mac_version == RTL_GIGA_MAC_VER_11) {
-@@ -6435,7 +6436,7 @@ static irqreturn_t rtl8169_interrupt(int irq, void *dev_instance)
- 		set_bit(RTL_FLAG_TASK_RESET_PENDING, tp->wk.flags);
- 	}
- 
--	if (status & RTL_EVENT_NAPI) {
-+	if (status & (RTL_EVENT_NAPI | LinkChg)) {
- 		rtl_irq_disable(tp);
- 		napi_schedule_irqoff(&tp->napi);
- 	}
-@@ -7224,6 +7225,11 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
- 			return rc;
- 	}
- 
-+	/* Disable ASPM completely as that cause random device stop working
-+	 * problems as well as full system hangs for some PCIe devices users.
-+	 */
-+	pci_disable_link_state(pdev, PCIE_LINK_STATE_L0S | PCIE_LINK_STATE_L1);
-+
- 	/* enable device (incl. PCI PM wakeup and hotplug setup) */
- 	rc = pcim_enable_device(pdev);
- 	if (rc < 0) {
-diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
-index d28c8f9ca55b..8154b38c08f7 100644
---- a/drivers/net/ethernet/renesas/ravb_main.c
-+++ b/drivers/net/ethernet/renesas/ravb_main.c
-@@ -458,7 +458,7 @@ static int ravb_dmac_init(struct net_device *ndev)
- 		   RCR_EFFS | RCR_ENCF | RCR_ETS0 | RCR_ESF | 0x18000000, RCR);
- 
- 	/* Set FIFO size */
--	ravb_write(ndev, TGC_TQP_AVBMODE1 | 0x00222200, TGC);
-+	ravb_write(ndev, TGC_TQP_AVBMODE1 | 0x00112200, TGC);
- 
- 	/* Timestamp enable */
- 	ravb_write(ndev, TCCR_TFEN, TCCR);
-diff --git a/drivers/net/ethernet/stmicro/stmmac/ring_mode.c b/drivers/net/ethernet/stmicro/stmmac/ring_mode.c
-index d8c5bc412219..c0c75c111abb 100644
---- a/drivers/net/ethernet/stmicro/stmmac/ring_mode.c
-+++ b/drivers/net/ethernet/stmicro/stmmac/ring_mode.c
-@@ -111,10 +111,11 @@ static unsigned int is_jumbo_frm(int len, int enh_desc)
- 
- static void refill_desc3(void *priv_ptr, struct dma_desc *p)
- {
--	struct stmmac_priv *priv = (struct stmmac_priv *)priv_ptr;
-+	struct stmmac_rx_queue *rx_q = priv_ptr;
-+	struct stmmac_priv *priv = rx_q->priv_data;
- 
- 	/* Fill DES3 in case of RING mode */
--	if (priv->dma_buf_sz >= BUF_SIZE_8KiB)
-+	if (priv->dma_buf_sz == BUF_SIZE_16KiB)
- 		p->des3 = cpu_to_le32(le32_to_cpu(p->des2) + BUF_SIZE_8KiB);
- }
- 
-diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
-index 685d20472358..019ab99e65bb 100644
---- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
-+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
-@@ -474,7 +474,7 @@ static void stmmac_get_tx_hwtstamp(struct stmmac_priv *priv,
- 				   struct dma_desc *p, struct sk_buff *skb)
- {
- 	struct skb_shared_hwtstamps shhwtstamp;
--	u64 ns;
-+	u64 ns = 0;
- 
- 	if (!priv->hwts_tx_en)
- 		return;
-@@ -513,7 +513,7 @@ static void stmmac_get_rx_hwtstamp(struct stmmac_priv *priv, struct dma_desc *p,
- {
- 	struct skb_shared_hwtstamps *shhwtstamp = NULL;
- 	struct dma_desc *desc = p;
--	u64 ns;
-+	u64 ns = 0;
- 
- 	if (!priv->hwts_rx_en)
- 		return;
-@@ -558,8 +558,8 @@ static int stmmac_hwtstamp_ioctl(struct net_device *dev, struct ifreq *ifr)
- 	u32 snap_type_sel = 0;
- 	u32 ts_master_en = 0;
- 	u32 ts_event_en = 0;
-+	u32 sec_inc = 0;
- 	u32 value = 0;
--	u32 sec_inc;
- 	bool xmac;
- 
- 	xmac = priv->plat->has_gmac4 || priv->plat->has_xgmac;
-diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
-index 2293e21f789f..cc60b3fb0892 100644
---- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
-+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
-@@ -105,7 +105,7 @@ static int stmmac_get_time(struct ptp_clock_info *ptp, struct timespec64 *ts)
- 	struct stmmac_priv *priv =
- 	    container_of(ptp, struct stmmac_priv, ptp_clock_ops);
- 	unsigned long flags;
--	u64 ns;
-+	u64 ns = 0;
- 
- 	spin_lock_irqsave(&priv->ptp_lock, flags);
- 	stmmac_get_systime(priv, priv->ptpaddr, &ns);
-diff --git a/drivers/net/hyperv/hyperv_net.h b/drivers/net/hyperv/hyperv_net.h
-index e859ae2e42d5..49f41b64077b 100644
---- a/drivers/net/hyperv/hyperv_net.h
-+++ b/drivers/net/hyperv/hyperv_net.h
-@@ -987,6 +987,7 @@ struct netvsc_device {
- 
- 	wait_queue_head_t wait_drain;
- 	bool destroy;
-+	bool tx_disable; /* if true, do not wake up queue again */
- 
- 	/* Receive buffer allocated by us but manages by NetVSP */
- 	void *recv_buf;
-diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
-index 813d195bbd57..e0dce373cdd9 100644
---- a/drivers/net/hyperv/netvsc.c
-+++ b/drivers/net/hyperv/netvsc.c
-@@ -110,6 +110,7 @@ static struct netvsc_device *alloc_net_device(void)
- 
- 	init_waitqueue_head(&net_device->wait_drain);
- 	net_device->destroy = false;
-+	net_device->tx_disable = false;
- 
- 	net_device->max_pkt = RNDIS_MAX_PKT_DEFAULT;
- 	net_device->pkt_align = RNDIS_PKT_ALIGN_DEFAULT;
-@@ -719,7 +720,7 @@ static void netvsc_send_tx_complete(struct net_device *ndev,
- 	} else {
- 		struct netdev_queue *txq = netdev_get_tx_queue(ndev, q_idx);
- 
--		if (netif_tx_queue_stopped(txq) &&
-+		if (netif_tx_queue_stopped(txq) && !net_device->tx_disable &&
- 		    (hv_get_avail_to_write_percent(&channel->outbound) >
- 		     RING_AVAIL_PERCENT_HIWATER || queue_sends < 1)) {
- 			netif_tx_wake_queue(txq);
-@@ -874,7 +875,8 @@ static inline int netvsc_send_pkt(
- 	} else if (ret == -EAGAIN) {
- 		netif_tx_stop_queue(txq);
- 		ndev_ctx->eth_stats.stop_queue++;
--		if (atomic_read(&nvchan->queue_sends) < 1) {
-+		if (atomic_read(&nvchan->queue_sends) < 1 &&
-+		    !net_device->tx_disable) {
- 			netif_tx_wake_queue(txq);
- 			ndev_ctx->eth_stats.wake_queue++;
- 			ret = -ENOSPC;
-diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
-index cf4897043e83..b20fb0fb595b 100644
---- a/drivers/net/hyperv/netvsc_drv.c
-+++ b/drivers/net/hyperv/netvsc_drv.c
-@@ -109,6 +109,15 @@ static void netvsc_set_rx_mode(struct net_device *net)
- 	rcu_read_unlock();
- }
- 
-+static void netvsc_tx_enable(struct netvsc_device *nvscdev,
-+			     struct net_device *ndev)
-+{
-+	nvscdev->tx_disable = false;
-+	virt_wmb(); /* ensure queue wake up mechanism is on */
-+
-+	netif_tx_wake_all_queues(ndev);
-+}
-+
- static int netvsc_open(struct net_device *net)
- {
- 	struct net_device_context *ndev_ctx = netdev_priv(net);
-@@ -129,7 +138,7 @@ static int netvsc_open(struct net_device *net)
- 	rdev = nvdev->extension;
- 	if (!rdev->link_state) {
- 		netif_carrier_on(net);
--		netif_tx_wake_all_queues(net);
-+		netvsc_tx_enable(nvdev, net);
- 	}
- 
- 	if (vf_netdev) {
-@@ -184,6 +193,17 @@ static int netvsc_wait_until_empty(struct netvsc_device *nvdev)
- 	}
- }
- 
-+static void netvsc_tx_disable(struct netvsc_device *nvscdev,
-+			      struct net_device *ndev)
-+{
-+	if (nvscdev) {
-+		nvscdev->tx_disable = true;
-+		virt_wmb(); /* ensure txq will not wake up after stop */
-+	}
-+
-+	netif_tx_disable(ndev);
-+}
-+
- static int netvsc_close(struct net_device *net)
- {
- 	struct net_device_context *net_device_ctx = netdev_priv(net);
-@@ -192,7 +212,7 @@ static int netvsc_close(struct net_device *net)
- 	struct netvsc_device *nvdev = rtnl_dereference(net_device_ctx->nvdev);
- 	int ret;
- 
--	netif_tx_disable(net);
-+	netvsc_tx_disable(nvdev, net);
- 
- 	/* No need to close rndis filter if it is removed already */
- 	if (!nvdev)
-@@ -920,7 +940,7 @@ static int netvsc_detach(struct net_device *ndev,
- 
- 	/* If device was up (receiving) then shutdown */
- 	if (netif_running(ndev)) {
--		netif_tx_disable(ndev);
-+		netvsc_tx_disable(nvdev, ndev);
- 
- 		ret = rndis_filter_close(nvdev);
- 		if (ret) {
-@@ -1908,7 +1928,7 @@ static void netvsc_link_change(struct work_struct *w)
- 		if (rdev->link_state) {
- 			rdev->link_state = false;
- 			netif_carrier_on(net);
--			netif_tx_wake_all_queues(net);
-+			netvsc_tx_enable(net_device, net);
- 		} else {
- 			notify = true;
- 		}
-@@ -1918,7 +1938,7 @@ static void netvsc_link_change(struct work_struct *w)
- 		if (!rdev->link_state) {
- 			rdev->link_state = true;
- 			netif_carrier_off(net);
--			netif_tx_stop_all_queues(net);
-+			netvsc_tx_disable(net_device, net);
- 		}
- 		kfree(event);
- 		break;
-@@ -1927,7 +1947,7 @@ static void netvsc_link_change(struct work_struct *w)
- 		if (!rdev->link_state) {
- 			rdev->link_state = true;
- 			netif_carrier_off(net);
--			netif_tx_stop_all_queues(net);
-+			netvsc_tx_disable(net_device, net);
- 			event->event = RNDIS_STATUS_MEDIA_CONNECT;
- 			spin_lock_irqsave(&ndev_ctx->lock, flags);
- 			list_add(&event->list, &ndev_ctx->reconfig_events);
-diff --git a/drivers/net/phy/meson-gxl.c b/drivers/net/phy/meson-gxl.c
-index 3ddaf9595697..68af4c75ffb3 100644
---- a/drivers/net/phy/meson-gxl.c
-+++ b/drivers/net/phy/meson-gxl.c
-@@ -211,6 +211,7 @@ static int meson_gxl_ack_interrupt(struct phy_device *phydev)
- static int meson_gxl_config_intr(struct phy_device *phydev)
- {
- 	u16 val;
-+	int ret;
- 
- 	if (phydev->interrupts == PHY_INTERRUPT_ENABLED) {
- 		val = INTSRC_ANEG_PR
-@@ -223,6 +224,11 @@ static int meson_gxl_config_intr(struct phy_device *phydev)
- 		val = 0;
- 	}
- 
-+	/* Ack any pending IRQ */
-+	ret = meson_gxl_ack_interrupt(phydev);
-+	if (ret)
-+		return ret;
-+
- 	return phy_write(phydev, INTSRC_MASK, val);
- }
- 
-diff --git a/drivers/net/phy/phy-c45.c b/drivers/net/phy/phy-c45.c
-index 03af927fa5ad..e39bf0428dd9 100644
---- a/drivers/net/phy/phy-c45.c
-+++ b/drivers/net/phy/phy-c45.c
-@@ -147,9 +147,15 @@ int genphy_c45_read_link(struct phy_device *phydev, u32 mmd_mask)
- 		mmd_mask &= ~BIT(devad);
- 
- 		/* The link state is latched low so that momentary link
--		 * drops can be detected.  Do not double-read the status
--		 * register if the link is down.
-+		 * drops can be detected. Do not double-read the status
-+		 * in polling mode to detect such short link drops.
- 		 */
-+		if (!phy_polling_mode(phydev)) {
-+			val = phy_read_mmd(phydev, devad, MDIO_STAT1);
-+			if (val < 0)
-+				return val;
-+		}
-+
- 		val = phy_read_mmd(phydev, devad, MDIO_STAT1);
- 		if (val < 0)
- 			return val;
-diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
-index 46c86725a693..adf79614c2db 100644
---- a/drivers/net/phy/phy_device.c
-+++ b/drivers/net/phy/phy_device.c
-@@ -1683,10 +1683,15 @@ int genphy_update_link(struct phy_device *phydev)
- {
- 	int status;
- 
--	/* Do a fake read */
--	status = phy_read(phydev, MII_BMSR);
--	if (status < 0)
--		return status;
-+	/* The link state is latched low so that momentary link
-+	 * drops can be detected. Do not double-read the status
-+	 * in polling mode to detect such short link drops.
-+	 */
-+	if (!phy_polling_mode(phydev)) {
-+		status = phy_read(phydev, MII_BMSR);
-+		if (status < 0)
-+			return status;
-+	}
- 
- 	/* Read link and autonegotiation status */
- 	status = phy_read(phydev, MII_BMSR);
-@@ -1827,7 +1832,7 @@ int genphy_soft_reset(struct phy_device *phydev)
- {
- 	int ret;
- 
--	ret = phy_write(phydev, MII_BMCR, BMCR_RESET);
-+	ret = phy_set_bits(phydev, MII_BMCR, BMCR_RESET);
- 	if (ret < 0)
- 		return ret;
- 
-diff --git a/drivers/net/ppp/pptp.c b/drivers/net/ppp/pptp.c
-index 8f09edd811e9..50c60550f295 100644
---- a/drivers/net/ppp/pptp.c
-+++ b/drivers/net/ppp/pptp.c
-@@ -532,6 +532,7 @@ static void pptp_sock_destruct(struct sock *sk)
- 		pppox_unbind_sock(sk);
- 	}
- 	skb_queue_purge(&sk->sk_receive_queue);
-+	dst_release(rcu_dereference_protected(sk->sk_dst_cache, 1));
- }
- 
- static int pptp_create(struct net *net, struct socket *sock, int kern)
-diff --git a/drivers/net/team/team_mode_loadbalance.c b/drivers/net/team/team_mode_loadbalance.c
-index a5ef97010eb3..5541e1c19936 100644
---- a/drivers/net/team/team_mode_loadbalance.c
-+++ b/drivers/net/team/team_mode_loadbalance.c
-@@ -325,6 +325,20 @@ static int lb_bpf_func_set(struct team *team, struct team_gsetter_ctx *ctx)
- 	return 0;
- }
- 
-+static void lb_bpf_func_free(struct team *team)
-+{
-+	struct lb_priv *lb_priv = get_lb_priv(team);
-+	struct bpf_prog *fp;
-+
-+	if (!lb_priv->ex->orig_fprog)
-+		return;
-+
-+	__fprog_destroy(lb_priv->ex->orig_fprog);
-+	fp = rcu_dereference_protected(lb_priv->fp,
-+				       lockdep_is_held(&team->lock));
-+	bpf_prog_destroy(fp);
-+}
-+
- static int lb_tx_method_get(struct team *team, struct team_gsetter_ctx *ctx)
- {
- 	struct lb_priv *lb_priv = get_lb_priv(team);
-@@ -639,6 +653,7 @@ static void lb_exit(struct team *team)
- 
- 	team_options_unregister(team, lb_options,
- 				ARRAY_SIZE(lb_options));
-+	lb_bpf_func_free(team);
- 	cancel_delayed_work_sync(&lb_priv->ex->stats.refresh_dw);
- 	free_percpu(lb_priv->pcpu_stats);
- 	kfree(lb_priv->ex);
-diff --git a/drivers/net/tun.c b/drivers/net/tun.c
-index 53f4f37b0ffd..448d5439ff6a 100644
---- a/drivers/net/tun.c
-+++ b/drivers/net/tun.c
-@@ -1763,9 +1763,6 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
- 	int skb_xdp = 1;
- 	bool frags = tun_napi_frags_enabled(tfile);
- 
--	if (!(tun->dev->flags & IFF_UP))
--		return -EIO;
--
- 	if (!(tun->flags & IFF_NO_PI)) {
- 		if (len < sizeof(pi))
- 			return -EINVAL;
-@@ -1867,6 +1864,8 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
- 			err = skb_copy_datagram_from_iter(skb, 0, from, len);
- 
- 		if (err) {
-+			err = -EFAULT;
-+drop:
- 			this_cpu_inc(tun->pcpu_stats->rx_dropped);
- 			kfree_skb(skb);
- 			if (frags) {
-@@ -1874,7 +1873,7 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
- 				mutex_unlock(&tfile->napi_mutex);
- 			}
- 
--			return -EFAULT;
-+			return err;
- 		}
- 	}
- 
-@@ -1958,6 +1957,13 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
- 	    !tfile->detached)
- 		rxhash = __skb_get_hash_symmetric(skb);
- 
-+	rcu_read_lock();
-+	if (unlikely(!(tun->dev->flags & IFF_UP))) {
-+		err = -EIO;
-+		rcu_read_unlock();
-+		goto drop;
-+	}
-+
- 	if (frags) {
- 		/* Exercise flow dissector code path. */
- 		u32 headlen = eth_get_headlen(skb->data, skb_headlen(skb));
-@@ -1965,6 +1971,7 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
- 		if (unlikely(headlen > skb_headlen(skb))) {
- 			this_cpu_inc(tun->pcpu_stats->rx_dropped);
- 			napi_free_frags(&tfile->napi);
-+			rcu_read_unlock();
- 			mutex_unlock(&tfile->napi_mutex);
- 			WARN_ON(1);
- 			return -ENOMEM;
-@@ -1992,6 +1999,7 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
- 	} else {
- 		netif_rx_ni(skb);
- 	}
-+	rcu_read_unlock();
- 
- 	stats = get_cpu_ptr(tun->pcpu_stats);
- 	u64_stats_update_begin(&stats->syncp);
-diff --git a/drivers/net/usb/aqc111.c b/drivers/net/usb/aqc111.c
-index 820a2fe7d027..aff995be2a31 100644
---- a/drivers/net/usb/aqc111.c
-+++ b/drivers/net/usb/aqc111.c
-@@ -1301,6 +1301,20 @@ static const struct driver_info trendnet_info = {
- 	.tx_fixup	= aqc111_tx_fixup,
- };
- 
-+static const struct driver_info qnap_info = {
-+	.description	= "QNAP QNA-UC5G1T USB to 5GbE Adapter",
-+	.bind		= aqc111_bind,
-+	.unbind		= aqc111_unbind,
-+	.status		= aqc111_status,
-+	.link_reset	= aqc111_link_reset,
-+	.reset		= aqc111_reset,
-+	.stop		= aqc111_stop,
-+	.flags		= FLAG_ETHER | FLAG_FRAMING_AX |
-+			  FLAG_AVOID_UNLINK_URBS | FLAG_MULTI_PACKET,
-+	.rx_fixup	= aqc111_rx_fixup,
-+	.tx_fixup	= aqc111_tx_fixup,
-+};
-+
- static int aqc111_suspend(struct usb_interface *intf, pm_message_t message)
- {
- 	struct usbnet *dev = usb_get_intfdata(intf);
-@@ -1455,6 +1469,7 @@ static const struct usb_device_id products[] = {
- 	{AQC111_USB_ETH_DEV(0x0b95, 0x2790, asix111_info)},
- 	{AQC111_USB_ETH_DEV(0x0b95, 0x2791, asix112_info)},
- 	{AQC111_USB_ETH_DEV(0x20f4, 0xe05a, trendnet_info)},
-+	{AQC111_USB_ETH_DEV(0x1c04, 0x0015, qnap_info)},
- 	{ },/* END */
- };
- MODULE_DEVICE_TABLE(usb, products);
-diff --git a/drivers/net/usb/cdc_ether.c b/drivers/net/usb/cdc_ether.c
-index 5512a1038721..3e9b2c319e45 100644
---- a/drivers/net/usb/cdc_ether.c
-+++ b/drivers/net/usb/cdc_ether.c
-@@ -851,6 +851,14 @@ static const struct usb_device_id	products[] = {
- 	.driver_info = 0,
- },
- 
-+/* QNAP QNA-UC5G1T USB to 5GbE Adapter (based on AQC111U) */
-+{
-+	USB_DEVICE_AND_INTERFACE_INFO(0x1c04, 0x0015, USB_CLASS_COMM,
-+				      USB_CDC_SUBCLASS_ETHERNET,
-+				      USB_CDC_PROTO_NONE),
-+	.driver_info = 0,
-+},
-+
- /* WHITELIST!!!
-  *
-  * CDC Ether uses two interfaces, not necessarily consecutive.
-diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
-index 18af2f8eee96..9195f3476b1d 100644
---- a/drivers/net/usb/qmi_wwan.c
-+++ b/drivers/net/usb/qmi_wwan.c
-@@ -976,6 +976,13 @@ static const struct usb_device_id products[] = {
- 					      0xff),
- 		.driver_info	    = (unsigned long)&qmi_wwan_info_quirk_dtr,
- 	},
-+	{	/* Quectel EG12/EM12 */
-+		USB_DEVICE_AND_INTERFACE_INFO(0x2c7c, 0x0512,
-+					      USB_CLASS_VENDOR_SPEC,
-+					      USB_SUBCLASS_VENDOR_SPEC,
-+					      0xff),
-+		.driver_info	    = (unsigned long)&qmi_wwan_info_quirk_dtr,
-+	},
- 
- 	/* 3. Combined interface devices matching on interface number */
- 	{QMI_FIXED_INTF(0x0408, 0xea42, 4)},	/* Yota / Megafon M100-1 */
-@@ -1196,6 +1203,7 @@ static const struct usb_device_id products[] = {
- 	{QMI_FIXED_INTF(0x19d2, 0x2002, 4)},	/* ZTE (Vodafone) K3765-Z */
- 	{QMI_FIXED_INTF(0x2001, 0x7e19, 4)},	/* D-Link DWM-221 B1 */
- 	{QMI_FIXED_INTF(0x2001, 0x7e35, 4)},	/* D-Link DWM-222 */
-+	{QMI_FIXED_INTF(0x2020, 0x2031, 4)},	/* Olicard 600 */
- 	{QMI_FIXED_INTF(0x2020, 0x2033, 4)},	/* BroadMobi BM806U */
- 	{QMI_FIXED_INTF(0x0f3d, 0x68a2, 8)},    /* Sierra Wireless MC7700 */
- 	{QMI_FIXED_INTF(0x114f, 0x68a2, 8)},    /* Sierra Wireless MC7750 */
-@@ -1343,17 +1351,20 @@ static bool quectel_ec20_detected(struct usb_interface *intf)
- 	return false;
- }
- 
--static bool quectel_ep06_diag_detected(struct usb_interface *intf)
-+static bool quectel_diag_detected(struct usb_interface *intf)
- {
- 	struct usb_device *dev = interface_to_usbdev(intf);
- 	struct usb_interface_descriptor intf_desc = intf->cur_altsetting->desc;
-+	u16 id_vendor = le16_to_cpu(dev->descriptor.idVendor);
-+	u16 id_product = le16_to_cpu(dev->descriptor.idProduct);
- 
--	if (le16_to_cpu(dev->descriptor.idVendor) == 0x2c7c &&
--	    le16_to_cpu(dev->descriptor.idProduct) == 0x0306 &&
--	    intf_desc.bNumEndpoints == 2)
--		return true;
-+	if (id_vendor != 0x2c7c || intf_desc.bNumEndpoints != 2)
-+		return false;
- 
--	return false;
-+	if (id_product == 0x0306 || id_product == 0x0512)
-+		return true;
-+	else
-+		return false;
- }
- 
- static int qmi_wwan_probe(struct usb_interface *intf,
-@@ -1390,13 +1401,13 @@ static int qmi_wwan_probe(struct usb_interface *intf,
- 		return -ENODEV;
- 	}
- 
--	/* Quectel EP06/EM06/EG06 supports dynamic interface configuration, so
-+	/* Several Quectel modems supports dynamic interface configuration, so
- 	 * we need to match on class/subclass/protocol. These values are
- 	 * identical for the diagnostic- and QMI-interface, but bNumEndpoints is
- 	 * different. Ignore the current interface if the number of endpoints
- 	 * the number for the diag interface (two).
- 	 */
--	if (quectel_ep06_diag_detected(intf))
-+	if (quectel_diag_detected(intf))
- 		return -ENODEV;
- 
- 	return usbnet_probe(intf, id);
-diff --git a/drivers/net/veth.c b/drivers/net/veth.c
-index f412ea1cef18..b203d1867959 100644
---- a/drivers/net/veth.c
-+++ b/drivers/net/veth.c
-@@ -115,7 +115,8 @@ static void veth_get_strings(struct net_device *dev, u32 stringset, u8 *buf)
- 		p += sizeof(ethtool_stats_keys);
- 		for (i = 0; i < dev->real_num_rx_queues; i++) {
- 			for (j = 0; j < VETH_RQ_STATS_LEN; j++) {
--				snprintf(p, ETH_GSTRING_LEN, "rx_queue_%u_%s",
-+				snprintf(p, ETH_GSTRING_LEN,
-+					 "rx_queue_%u_%.11s",
- 					 i, veth_rq_stats_desc[j].desc);
- 				p += ETH_GSTRING_LEN;
- 			}
-diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
-index 7c1430ed0244..cd15c32b2e43 100644
---- a/drivers/net/vrf.c
-+++ b/drivers/net/vrf.c
-@@ -1273,9 +1273,14 @@ static void vrf_setup(struct net_device *dev)
- 
- 	/* default to no qdisc; user can add if desired */
- 	dev->priv_flags |= IFF_NO_QUEUE;
-+	dev->priv_flags |= IFF_NO_RX_HANDLER;
- 
--	dev->min_mtu = 0;
--	dev->max_mtu = 0;
-+	/* VRF devices do not care about MTU, but if the MTU is set
-+	 * too low then the ipv4 and ipv6 protocols are disabled
-+	 * which breaks networking.
-+	 */
-+	dev->min_mtu = IPV6_MIN_MTU;
-+	dev->max_mtu = ETH_MAX_MTU;
- }
- 
- static int vrf_validate(struct nlattr *tb[], struct nlattr *data[],
-diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
-index 2aae11feff0c..5006daed2e96 100644
---- a/drivers/net/vxlan.c
-+++ b/drivers/net/vxlan.c
-@@ -1657,6 +1657,14 @@ static int vxlan_rcv(struct sock *sk, struct sk_buff *skb)
- 		goto drop;
- 	}
- 
-+	rcu_read_lock();
-+
-+	if (unlikely(!(vxlan->dev->flags & IFF_UP))) {
-+		rcu_read_unlock();
-+		atomic_long_inc(&vxlan->dev->rx_dropped);
-+		goto drop;
-+	}
-+
- 	stats = this_cpu_ptr(vxlan->dev->tstats);
- 	u64_stats_update_begin(&stats->syncp);
- 	stats->rx_packets++;
-@@ -1664,6 +1672,9 @@ static int vxlan_rcv(struct sock *sk, struct sk_buff *skb)
- 	u64_stats_update_end(&stats->syncp);
- 
- 	gro_cells_receive(&vxlan->gro_cells, skb);
-+
-+	rcu_read_unlock();
-+
- 	return 0;
- 
- drop:
-@@ -2693,6 +2704,8 @@ static void vxlan_uninit(struct net_device *dev)
- {
- 	struct vxlan_dev *vxlan = netdev_priv(dev);
- 
-+	gro_cells_destroy(&vxlan->gro_cells);
-+
- 	vxlan_fdb_delete_default(vxlan, vxlan->cfg.vni);
- 
- 	free_percpu(dev->tstats);
-@@ -3794,7 +3807,6 @@ static void vxlan_dellink(struct net_device *dev, struct list_head *head)
- 
- 	vxlan_flush(vxlan, true);
- 
--	gro_cells_destroy(&vxlan->gro_cells);
- 	list_del(&vxlan->next);
- 	unregister_netdevice_queue(dev, head);
- }
-@@ -4172,10 +4184,8 @@ static void vxlan_destroy_tunnels(struct net *net, struct list_head *head)
- 		/* If vxlan->dev is in the same netns, it has already been added
- 		 * to the list by the previous loop.
- 		 */
--		if (!net_eq(dev_net(vxlan->dev), net)) {
--			gro_cells_destroy(&vxlan->gro_cells);
-+		if (!net_eq(dev_net(vxlan->dev), net))
- 			unregister_netdevice_queue(vxlan->dev, head);
--		}
- 	}
- 
- 	for (h = 0; h < PORT_HASH_SIZE; ++h)
-diff --git a/drivers/net/wireless/ath/ath10k/ce.c b/drivers/net/wireless/ath/ath10k/ce.c
-index 2a5668b4f6bc..1a1ea4bbf8a0 100644
---- a/drivers/net/wireless/ath/ath10k/ce.c
-+++ b/drivers/net/wireless/ath/ath10k/ce.c
-@@ -500,14 +500,8 @@ static int _ath10k_ce_send_nolock(struct ath10k_ce_pipe *ce_state,
- 	write_index = CE_RING_IDX_INCR(nentries_mask, write_index);
- 
- 	/* WORKAROUND */
--	if (!(flags & CE_SEND_FLAG_GATHER)) {
--		if (ar->hw_params.shadow_reg_support)
--			ath10k_ce_shadow_src_ring_write_index_set(ar, ce_state,
--								  write_index);
--		else
--			ath10k_ce_src_ring_write_index_set(ar, ctrl_addr,
--							   write_index);
--	}
-+	if (!(flags & CE_SEND_FLAG_GATHER))
-+		ath10k_ce_src_ring_write_index_set(ar, ctrl_addr, write_index);
- 
- 	src_ring->write_index = write_index;
- exit:
-@@ -581,8 +575,14 @@ static int _ath10k_ce_send_nolock_64(struct ath10k_ce_pipe *ce_state,
- 	/* Update Source Ring Write Index */
- 	write_index = CE_RING_IDX_INCR(nentries_mask, write_index);
- 
--	if (!(flags & CE_SEND_FLAG_GATHER))
--		ath10k_ce_src_ring_write_index_set(ar, ctrl_addr, write_index);
-+	if (!(flags & CE_SEND_FLAG_GATHER)) {
-+		if (ar->hw_params.shadow_reg_support)
-+			ath10k_ce_shadow_src_ring_write_index_set(ar, ce_state,
-+								  write_index);
-+		else
-+			ath10k_ce_src_ring_write_index_set(ar, ctrl_addr,
-+							   write_index);
-+	}
- 
- 	src_ring->write_index = write_index;
- exit:
-@@ -1404,12 +1404,12 @@ static int ath10k_ce_alloc_shadow_base(struct ath10k *ar,
- 				       u32 nentries)
- {
- 	src_ring->shadow_base_unaligned = kcalloc(nentries,
--						  sizeof(struct ce_desc),
-+						  sizeof(struct ce_desc_64),
- 						  GFP_KERNEL);
- 	if (!src_ring->shadow_base_unaligned)
- 		return -ENOMEM;
- 
--	src_ring->shadow_base = (struct ce_desc *)
-+	src_ring->shadow_base = (struct ce_desc_64 *)
- 			PTR_ALIGN(src_ring->shadow_base_unaligned,
- 				  CE_DESC_RING_ALIGN);
- 	return 0;
-@@ -1461,7 +1461,7 @@ ath10k_ce_alloc_src_ring(struct ath10k *ar, unsigned int ce_id,
- 		ret = ath10k_ce_alloc_shadow_base(ar, src_ring, nentries);
- 		if (ret) {
- 			dma_free_coherent(ar->dev,
--					  (nentries * sizeof(struct ce_desc) +
-+					  (nentries * sizeof(struct ce_desc_64) +
- 					   CE_DESC_RING_ALIGN),
- 					  src_ring->base_addr_owner_space_unaligned,
- 					  base_addr);
-diff --git a/drivers/net/wireless/ath/ath10k/ce.h b/drivers/net/wireless/ath/ath10k/ce.h
-index ead9987c3259..463e2fc8b501 100644
---- a/drivers/net/wireless/ath/ath10k/ce.h
-+++ b/drivers/net/wireless/ath/ath10k/ce.h
-@@ -118,7 +118,7 @@ struct ath10k_ce_ring {
- 	u32 base_addr_ce_space;
- 
- 	char *shadow_base_unaligned;
--	struct ce_desc *shadow_base;
-+	struct ce_desc_64 *shadow_base;
- 
- 	/* keep last */
- 	void *per_transfer_context[0];
-diff --git a/drivers/net/wireless/ath/ath10k/debugfs_sta.c b/drivers/net/wireless/ath/ath10k/debugfs_sta.c
-index 4778a455d81a..068f1a7e07d3 100644
---- a/drivers/net/wireless/ath/ath10k/debugfs_sta.c
-+++ b/drivers/net/wireless/ath/ath10k/debugfs_sta.c
-@@ -696,11 +696,12 @@ static ssize_t ath10k_dbg_sta_dump_tx_stats(struct file *file,
- 						 "  %llu ", stats->ht[j][i]);
- 			len += scnprintf(buf + len, size - len, "\n");
- 			len += scnprintf(buf + len, size - len,
--					" BW %s (20,40,80,160 MHz)\n", str[j]);
-+					" BW %s (20,5,10,40,80,160 MHz)\n", str[j]);
- 			len += scnprintf(buf + len, size - len,
--					 "  %llu %llu %llu %llu\n",
-+					 "  %llu %llu %llu %llu %llu %llu\n",
- 					 stats->bw[j][0], stats->bw[j][1],
--					 stats->bw[j][2], stats->bw[j][3]);
-+					 stats->bw[j][2], stats->bw[j][3],
-+					 stats->bw[j][4], stats->bw[j][5]);
- 			len += scnprintf(buf + len, size - len,
- 					 " NSS %s (1x1,2x2,3x3,4x4)\n", str[j]);
- 			len += scnprintf(buf + len, size - len,
-diff --git a/drivers/net/wireless/ath/ath10k/htt_rx.c b/drivers/net/wireless/ath/ath10k/htt_rx.c
-index f42bac204ef8..ecf34ce7acf0 100644
---- a/drivers/net/wireless/ath/ath10k/htt_rx.c
-+++ b/drivers/net/wireless/ath/ath10k/htt_rx.c
-@@ -2130,9 +2130,15 @@ static bool ath10k_htt_rx_proc_rx_ind_hl(struct ath10k_htt *htt,
- 	hdr = (struct ieee80211_hdr *)skb->data;
- 	rx_status = IEEE80211_SKB_RXCB(skb);
- 	rx_status->chains |= BIT(0);
--	rx_status->signal = ATH10K_DEFAULT_NOISE_FLOOR +
--			    rx->ppdu.combined_rssi;
--	rx_status->flag &= ~RX_FLAG_NO_SIGNAL_VAL;
-+	if (rx->ppdu.combined_rssi == 0) {
-+		/* SDIO firmware does not provide signal */
-+		rx_status->signal = 0;
-+		rx_status->flag |= RX_FLAG_NO_SIGNAL_VAL;
-+	} else {
-+		rx_status->signal = ATH10K_DEFAULT_NOISE_FLOOR +
-+			rx->ppdu.combined_rssi;
-+		rx_status->flag &= ~RX_FLAG_NO_SIGNAL_VAL;
-+	}
- 
- 	spin_lock_bh(&ar->data_lock);
- 	ch = ar->scan_channel;
-diff --git a/drivers/net/wireless/ath/ath10k/wmi.h b/drivers/net/wireless/ath/ath10k/wmi.h
-index 2034ccc7cc72..1d5d0209ebeb 100644
---- a/drivers/net/wireless/ath/ath10k/wmi.h
-+++ b/drivers/net/wireless/ath/ath10k/wmi.h
-@@ -5003,7 +5003,7 @@ enum wmi_rate_preamble {
- #define ATH10K_FW_SKIPPED_RATE_CTRL(flags)	(((flags) >> 6) & 0x1)
- 
- #define ATH10K_VHT_MCS_NUM	10
--#define ATH10K_BW_NUM		4
-+#define ATH10K_BW_NUM		6
- #define ATH10K_NSS_NUM		4
- #define ATH10K_LEGACY_NUM	12
- #define ATH10K_GI_NUM		2
-diff --git a/drivers/net/wireless/ath/ath9k/init.c b/drivers/net/wireless/ath/ath9k/init.c
-index c070a9e51ebf..fae572b38416 100644
---- a/drivers/net/wireless/ath/ath9k/init.c
-+++ b/drivers/net/wireless/ath/ath9k/init.c
-@@ -636,15 +636,15 @@ static int ath9k_of_init(struct ath_softc *sc)
- 		ret = ath9k_eeprom_request(sc, eeprom_name);
- 		if (ret)
- 			return ret;
-+
-+		ah->ah_flags &= ~AH_USE_EEPROM;
-+		ah->ah_flags |= AH_NO_EEP_SWAP;
- 	}
- 
- 	mac = of_get_mac_address(np);
- 	if (mac)
- 		ether_addr_copy(common->macaddr, mac);
- 
--	ah->ah_flags &= ~AH_USE_EEPROM;
--	ah->ah_flags |= AH_NO_EEP_SWAP;
--
- 	return 0;
- }
- 
-diff --git a/drivers/net/wireless/ath/wil6210/cfg80211.c b/drivers/net/wireless/ath/wil6210/cfg80211.c
-index 9b2f9f543952..5a44f9d0ff02 100644
---- a/drivers/net/wireless/ath/wil6210/cfg80211.c
-+++ b/drivers/net/wireless/ath/wil6210/cfg80211.c
-@@ -1580,6 +1580,12 @@ static int _wil_cfg80211_merge_extra_ies(const u8 *ies1, u16 ies1_len,
- 	u8 *buf, *dpos;
- 	const u8 *spos;
- 
-+	if (!ies1)
-+		ies1_len = 0;
-+
-+	if (!ies2)
-+		ies2_len = 0;
-+
- 	if (ies1_len == 0 && ies2_len == 0) {
- 		*merged_ies = NULL;
- 		*merged_len = 0;
-@@ -1589,17 +1595,19 @@ static int _wil_cfg80211_merge_extra_ies(const u8 *ies1, u16 ies1_len,
- 	buf = kmalloc(ies1_len + ies2_len, GFP_KERNEL);
- 	if (!buf)
- 		return -ENOMEM;
--	memcpy(buf, ies1, ies1_len);
-+	if (ies1)
-+		memcpy(buf, ies1, ies1_len);
- 	dpos = buf + ies1_len;
- 	spos = ies2;
--	while (spos + 1 < ies2 + ies2_len) {
-+	while (spos && (spos + 1 < ies2 + ies2_len)) {
- 		/* IE tag at offset 0, length at offset 1 */
- 		u16 ielen = 2 + spos[1];
- 
- 		if (spos + ielen > ies2 + ies2_len)
- 			break;
- 		if (spos[0] == WLAN_EID_VENDOR_SPECIFIC &&
--		    !_wil_cfg80211_find_ie(ies1, ies1_len, spos, ielen)) {
-+		    (!ies1 || !_wil_cfg80211_find_ie(ies1, ies1_len,
-+						     spos, ielen))) {
- 			memcpy(dpos, spos, ielen);
- 			dpos += ielen;
- 		}
-diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
-index 1f1e95a15a17..0ce1d8174e6d 100644
---- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
-+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
-@@ -149,7 +149,7 @@ static int brcmf_c_process_clm_blob(struct brcmf_if *ifp)
- 		return err;
- 	}
- 
--	err = request_firmware(&clm, clm_name, bus->dev);
-+	err = firmware_request_nowarn(&clm, clm_name, bus->dev);
- 	if (err) {
- 		brcmf_info("no clm_blob available (err=%d), device may have limited channels available\n",
- 			   err);
-diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
-index 0d6c313b6669..19ec55cef802 100644
---- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
-+++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
-@@ -127,13 +127,17 @@ static int iwl_send_rss_cfg_cmd(struct iwl_mvm *mvm)
- 
- static int iwl_configure_rxq(struct iwl_mvm *mvm)
- {
--	int i, num_queues, size;
-+	int i, num_queues, size, ret;
- 	struct iwl_rfh_queue_config *cmd;
-+	struct iwl_host_cmd hcmd = {
-+		.id = WIDE_ID(DATA_PATH_GROUP, RFH_QUEUE_CONFIG_CMD),
-+		.dataflags[0] = IWL_HCMD_DFL_NOCOPY,
-+	};
- 
- 	/* Do not configure default queue, it is configured via context info */
- 	num_queues = mvm->trans->num_rx_queues - 1;
- 
--	size = sizeof(*cmd) + num_queues * sizeof(struct iwl_rfh_queue_data);
-+	size = struct_size(cmd, data, num_queues);
- 
- 	cmd = kzalloc(size, GFP_KERNEL);
- 	if (!cmd)
-@@ -154,10 +158,14 @@ static int iwl_configure_rxq(struct iwl_mvm *mvm)
- 		cmd->data[i].fr_bd_wid = cpu_to_le32(data.fr_bd_wid);
- 	}
- 
--	return iwl_mvm_send_cmd_pdu(mvm,
--				    WIDE_ID(DATA_PATH_GROUP,
--					    RFH_QUEUE_CONFIG_CMD),
--				    0, size, cmd);
-+	hcmd.data[0] = cmd;
-+	hcmd.len[0] = size;
-+
-+	ret = iwl_mvm_send_cmd(mvm, &hcmd);
-+
-+	kfree(cmd);
-+
-+	return ret;
- }
- 
- static int iwl_mvm_send_dqa_cmd(struct iwl_mvm *mvm)
-diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
-index 9e850c25877b..c596c7b13504 100644
---- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
-+++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
-@@ -499,7 +499,7 @@ static void iwl_pcie_rx_allocator(struct iwl_trans *trans)
- 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
- 	struct iwl_rb_allocator *rba = &trans_pcie->rba;
- 	struct list_head local_empty;
--	int pending = atomic_xchg(&rba->req_pending, 0);
-+	int pending = atomic_read(&rba->req_pending);
- 
- 	IWL_DEBUG_RX(trans, "Pending allocation requests = %d\n", pending);
- 
-@@ -554,11 +554,13 @@ static void iwl_pcie_rx_allocator(struct iwl_trans *trans)
- 			i++;
- 		}
- 
-+		atomic_dec(&rba->req_pending);
- 		pending--;
-+
- 		if (!pending) {
--			pending = atomic_xchg(&rba->req_pending, 0);
-+			pending = atomic_read(&rba->req_pending);
- 			IWL_DEBUG_RX(trans,
--				     "Pending allocation requests = %d\n",
-+				     "Got more pending allocation requests = %d\n",
- 				     pending);
- 		}
- 
-@@ -570,12 +572,15 @@ static void iwl_pcie_rx_allocator(struct iwl_trans *trans)
- 		spin_unlock(&rba->lock);
- 
- 		atomic_inc(&rba->req_ready);
-+
- 	}
- 
- 	spin_lock(&rba->lock);
- 	/* return unused rbds to the allocator empty list */
- 	list_splice_tail(&local_empty, &rba->rbd_empty);
- 	spin_unlock(&rba->lock);
-+
-+	IWL_DEBUG_RX(trans, "%s, exit.\n", __func__);
- }
- 
- /*
-diff --git a/drivers/net/wireless/marvell/libertas_tf/if_usb.c b/drivers/net/wireless/marvell/libertas_tf/if_usb.c
-index 789337ea676a..6ede6168bd85 100644
---- a/drivers/net/wireless/marvell/libertas_tf/if_usb.c
-+++ b/drivers/net/wireless/marvell/libertas_tf/if_usb.c
-@@ -433,8 +433,6 @@ static int __if_usb_submit_rx_urb(struct if_usb_card *cardp,
- 			  skb_tail_pointer(skb),
- 			  MRVDRV_ETH_RX_PACKET_BUFFER_SIZE, callbackfn, cardp);
- 
--	cardp->rx_urb->transfer_flags |= URB_ZERO_PACKET;
--
- 	lbtf_deb_usb2(&cardp->udev->dev, "Pointer for rx_urb %p\n",
- 		cardp->rx_urb);
- 	ret = usb_submit_urb(cardp->rx_urb, GFP_ATOMIC);
-diff --git a/drivers/net/wireless/marvell/mwifiex/cfg80211.c b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
-index 1467af22e394..883752f640b4 100644
---- a/drivers/net/wireless/marvell/mwifiex/cfg80211.c
-+++ b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
-@@ -4310,11 +4310,13 @@ int mwifiex_register_cfg80211(struct mwifiex_adapter *adapter)
- 	wiphy->mgmt_stypes = mwifiex_mgmt_stypes;
- 	wiphy->max_remain_on_channel_duration = 5000;
- 	wiphy->interface_modes = BIT(NL80211_IFTYPE_STATION) |
--				 BIT(NL80211_IFTYPE_ADHOC) |
- 				 BIT(NL80211_IFTYPE_P2P_CLIENT) |
- 				 BIT(NL80211_IFTYPE_P2P_GO) |
- 				 BIT(NL80211_IFTYPE_AP);
- 
-+	if (ISSUPP_ADHOC_ENABLED(adapter->fw_cap_info))
-+		wiphy->interface_modes |= BIT(NL80211_IFTYPE_ADHOC);
-+
- 	wiphy->bands[NL80211_BAND_2GHZ] = &mwifiex_band_2ghz;
- 	if (adapter->config_bands & BAND_A)
- 		wiphy->bands[NL80211_BAND_5GHZ] = &mwifiex_band_5ghz;
-@@ -4374,11 +4376,13 @@ int mwifiex_register_cfg80211(struct mwifiex_adapter *adapter)
- 	wiphy->available_antennas_tx = BIT(adapter->number_of_antenna) - 1;
- 	wiphy->available_antennas_rx = BIT(adapter->number_of_antenna) - 1;
- 
--	wiphy->features |= NL80211_FEATURE_HT_IBSS |
--			   NL80211_FEATURE_INACTIVITY_TIMER |
-+	wiphy->features |= NL80211_FEATURE_INACTIVITY_TIMER |
- 			   NL80211_FEATURE_LOW_PRIORITY_SCAN |
- 			   NL80211_FEATURE_NEED_OBSS_SCAN;
- 
-+	if (ISSUPP_ADHOC_ENABLED(adapter->fw_cap_info))
-+		wiphy->features |= NL80211_FEATURE_HT_IBSS;
-+
- 	if (ISSUPP_RANDOM_MAC(adapter->fw_cap_info))
- 		wiphy->features |= NL80211_FEATURE_SCAN_RANDOM_MAC_ADDR |
- 				   NL80211_FEATURE_SCHED_SCAN_RANDOM_MAC_ADDR |
-diff --git a/drivers/net/wireless/mediatek/mt76/eeprom.c b/drivers/net/wireless/mediatek/mt76/eeprom.c
-index 530e5593765c..a1529920d877 100644
---- a/drivers/net/wireless/mediatek/mt76/eeprom.c
-+++ b/drivers/net/wireless/mediatek/mt76/eeprom.c
-@@ -54,22 +54,30 @@ mt76_get_of_eeprom(struct mt76_dev *dev, int len)
- 		part = np->name;
- 
- 	mtd = get_mtd_device_nm(part);
--	if (IS_ERR(mtd))
--		return PTR_ERR(mtd);
-+	if (IS_ERR(mtd)) {
-+		ret =  PTR_ERR(mtd);
-+		goto out_put_node;
-+	}
- 
--	if (size <= sizeof(*list))
--		return -EINVAL;
-+	if (size <= sizeof(*list)) {
-+		ret = -EINVAL;
-+		goto out_put_node;
-+	}
- 
- 	offset = be32_to_cpup(list);
- 	ret = mtd_read(mtd, offset, len, &retlen, dev->eeprom.data);
- 	put_mtd_device(mtd);
- 	if (ret)
--		return ret;
-+		goto out_put_node;
- 
--	if (retlen < len)
--		return -EINVAL;
-+	if (retlen < len) {
-+		ret = -EINVAL;
-+		goto out_put_node;
-+	}
- 
--	return 0;
-+out_put_node:
-+	of_node_put(np);
-+	return ret;
- #else
- 	return -ENOENT;
- #endif
-diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wireless/mediatek/mt76/mt76.h
-index 5cd508a68609..6d29ba4046c3 100644
---- a/drivers/net/wireless/mediatek/mt76/mt76.h
-+++ b/drivers/net/wireless/mediatek/mt76/mt76.h
-@@ -713,6 +713,19 @@ static inline bool mt76u_check_sg(struct mt76_dev *dev)
- 		 udev->speed == USB_SPEED_WIRELESS));
- }
- 
-+static inline int
-+mt76u_bulk_msg(struct mt76_dev *dev, void *data, int len, int timeout)
-+{
-+	struct usb_interface *intf = to_usb_interface(dev->dev);
-+	struct usb_device *udev = interface_to_usbdev(intf);
-+	struct mt76_usb *usb = &dev->usb;
-+	unsigned int pipe;
-+	int sent;
-+
-+	pipe = usb_sndbulkpipe(udev, usb->out_ep[MT_EP_OUT_INBAND_CMD]);
-+	return usb_bulk_msg(udev, pipe, data, len, &sent, timeout);
-+}
-+
- int mt76u_vendor_request(struct mt76_dev *dev, u8 req,
- 			 u8 req_type, u16 val, u16 offset,
- 			 void *buf, size_t len);
-diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c b/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
-index c08bf371e527..7c9dfa54fee8 100644
---- a/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
-+++ b/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
-@@ -309,7 +309,7 @@ void mt76x02_mac_write_txwi(struct mt76x02_dev *dev, struct mt76x02_txwi *txwi,
- 		ccmp_pn[6] = pn >> 32;
- 		ccmp_pn[7] = pn >> 40;
- 		txwi->iv = *((__le32 *)&ccmp_pn[0]);
--		txwi->eiv = *((__le32 *)&ccmp_pn[1]);
-+		txwi->eiv = *((__le32 *)&ccmp_pn[4]);
- 	}
- 
- 	spin_lock_bh(&dev->mt76.lock);
-diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_usb_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76x02_usb_mcu.c
-index 6db789f90269..2ca393e267af 100644
---- a/drivers/net/wireless/mediatek/mt76/mt76x02_usb_mcu.c
-+++ b/drivers/net/wireless/mediatek/mt76/mt76x02_usb_mcu.c
-@@ -121,18 +121,14 @@ static int
- __mt76x02u_mcu_send_msg(struct mt76_dev *dev, struct sk_buff *skb,
- 			int cmd, bool wait_resp)
- {
--	struct usb_interface *intf = to_usb_interface(dev->dev);
--	struct usb_device *udev = interface_to_usbdev(intf);
- 	struct mt76_usb *usb = &dev->usb;
--	unsigned int pipe;
--	int ret, sent;
-+	int ret;
- 	u8 seq = 0;
- 	u32 info;
- 
- 	if (test_bit(MT76_REMOVED, &dev->state))
- 		return 0;
- 
--	pipe = usb_sndbulkpipe(udev, usb->out_ep[MT_EP_OUT_INBAND_CMD]);
- 	if (wait_resp) {
- 		seq = ++usb->mcu.msg_seq & 0xf;
- 		if (!seq)
-@@ -146,7 +142,7 @@ __mt76x02u_mcu_send_msg(struct mt76_dev *dev, struct sk_buff *skb,
- 	if (ret)
- 		return ret;
- 
--	ret = usb_bulk_msg(udev, pipe, skb->data, skb->len, &sent, 500);
-+	ret = mt76u_bulk_msg(dev, skb->data, skb->len, 500);
- 	if (ret)
- 		return ret;
- 
-@@ -268,14 +264,12 @@ void mt76x02u_mcu_fw_reset(struct mt76x02_dev *dev)
- EXPORT_SYMBOL_GPL(mt76x02u_mcu_fw_reset);
- 
- static int
--__mt76x02u_mcu_fw_send_data(struct mt76x02_dev *dev, struct mt76u_buf *buf,
-+__mt76x02u_mcu_fw_send_data(struct mt76x02_dev *dev, u8 *data,
- 			    const void *fw_data, int len, u32 dst_addr)
- {
--	u8 *data = sg_virt(&buf->urb->sg[0]);
--	DECLARE_COMPLETION_ONSTACK(cmpl);
- 	__le32 info;
- 	u32 val;
--	int err;
-+	int err, data_len;
- 
- 	info = cpu_to_le32(FIELD_PREP(MT_MCU_MSG_PORT, CPU_TX_PORT) |
- 			   FIELD_PREP(MT_MCU_MSG_LEN, len) |
-@@ -291,25 +285,12 @@ __mt76x02u_mcu_fw_send_data(struct mt76x02_dev *dev, struct mt76u_buf *buf,
- 	mt76u_single_wr(&dev->mt76, MT_VEND_WRITE_FCE,
- 			MT_FCE_DMA_LEN, len << 16);
- 
--	buf->len = MT_CMD_HDR_LEN + len + sizeof(info);
--	err = mt76u_submit_buf(&dev->mt76, USB_DIR_OUT,
--			       MT_EP_OUT_INBAND_CMD,
--			       buf, GFP_KERNEL,
--			       mt76u_mcu_complete_urb, &cmpl);
--	if (err < 0)
--		return err;
--
--	if (!wait_for_completion_timeout(&cmpl,
--					 msecs_to_jiffies(1000))) {
--		dev_err(dev->mt76.dev, "firmware upload timed out\n");
--		usb_kill_urb(buf->urb);
--		return -ETIMEDOUT;
--	}
-+	data_len = MT_CMD_HDR_LEN + len + sizeof(info);
- 
--	if (mt76u_urb_error(buf->urb)) {
--		dev_err(dev->mt76.dev, "firmware upload failed: %d\n",
--			buf->urb->status);
--		return buf->urb->status;
-+	err = mt76u_bulk_msg(&dev->mt76, data, data_len, 1000);
-+	if (err) {
-+		dev_err(dev->mt76.dev, "firmware upload failed: %d\n", err);
-+		return err;
- 	}
- 
- 	val = mt76_rr(dev, MT_TX_CPU_FROM_FCE_CPU_DESC_IDX);
-@@ -322,17 +303,16 @@ __mt76x02u_mcu_fw_send_data(struct mt76x02_dev *dev, struct mt76u_buf *buf,
- int mt76x02u_mcu_fw_send_data(struct mt76x02_dev *dev, const void *data,
- 			      int data_len, u32 max_payload, u32 offset)
- {
--	int err, len, pos = 0, max_len = max_payload - 8;
--	struct mt76u_buf buf;
-+	int len, err = 0, pos = 0, max_len = max_payload - 8;
-+	u8 *buf;
- 
--	err = mt76u_buf_alloc(&dev->mt76, &buf, 1, max_payload, max_payload,
--			      GFP_KERNEL);
--	if (err < 0)
--		return err;
-+	buf = kmalloc(max_payload, GFP_KERNEL);
-+	if (!buf)
-+		return -ENOMEM;
- 
- 	while (data_len > 0) {
- 		len = min_t(int, data_len, max_len);
--		err = __mt76x02u_mcu_fw_send_data(dev, &buf, data + pos,
-+		err = __mt76x02u_mcu_fw_send_data(dev, buf, data + pos,
- 						  len, offset + pos);
- 		if (err < 0)
- 			break;
-@@ -341,7 +321,7 @@ int mt76x02u_mcu_fw_send_data(struct mt76x02_dev *dev, const void *data,
- 		pos += len;
- 		usleep_range(5000, 10000);
- 	}
--	mt76u_buf_free(&buf);
-+	kfree(buf);
- 
- 	return err;
- }
-diff --git a/drivers/net/wireless/mediatek/mt76/usb.c b/drivers/net/wireless/mediatek/mt76/usb.c
-index b061263453d4..61cde0f9f58f 100644
---- a/drivers/net/wireless/mediatek/mt76/usb.c
-+++ b/drivers/net/wireless/mediatek/mt76/usb.c
-@@ -326,7 +326,6 @@ int mt76u_buf_alloc(struct mt76_dev *dev, struct mt76u_buf *buf,
- 
- 	return mt76u_fill_rx_sg(dev, buf, nsgs, len, sglen);
- }
--EXPORT_SYMBOL_GPL(mt76u_buf_alloc);
- 
- void mt76u_buf_free(struct mt76u_buf *buf)
- {
-@@ -838,16 +837,9 @@ int mt76u_alloc_queues(struct mt76_dev *dev)
- 
- 	err = mt76u_alloc_rx(dev);
- 	if (err < 0)
--		goto err;
--
--	err = mt76u_alloc_tx(dev);
--	if (err < 0)
--		goto err;
-+		return err;
- 
--	return 0;
--err:
--	mt76u_queues_deinit(dev);
--	return err;
-+	return mt76u_alloc_tx(dev);
- }
- EXPORT_SYMBOL_GPL(mt76u_alloc_queues);
- 
-diff --git a/drivers/net/wireless/mediatek/mt7601u/eeprom.h b/drivers/net/wireless/mediatek/mt7601u/eeprom.h
-index 662d12703b69..57b503ae63f1 100644
---- a/drivers/net/wireless/mediatek/mt7601u/eeprom.h
-+++ b/drivers/net/wireless/mediatek/mt7601u/eeprom.h
-@@ -17,7 +17,7 @@
- 
- struct mt7601u_dev;
- 
--#define MT7601U_EE_MAX_VER			0x0c
-+#define MT7601U_EE_MAX_VER			0x0d
- #define MT7601U_EEPROM_SIZE			256
- 
- #define MT7601U_DEFAULT_TX_POWER		6
-diff --git a/drivers/net/wireless/ti/wlcore/main.c b/drivers/net/wireless/ti/wlcore/main.c
-index 26b187336875..2e12de813a5b 100644
---- a/drivers/net/wireless/ti/wlcore/main.c
-+++ b/drivers/net/wireless/ti/wlcore/main.c
-@@ -1085,8 +1085,11 @@ static int wl12xx_chip_wakeup(struct wl1271 *wl, bool plt)
- 		goto out;
- 
- 	ret = wl12xx_fetch_firmware(wl, plt);
--	if (ret < 0)
--		goto out;
-+	if (ret < 0) {
-+		kfree(wl->fw_status);
-+		kfree(wl->raw_fw_status);
-+		kfree(wl->tx_res_if);
-+	}
- 
- out:
- 	return ret;
-diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c
-index a11bf4e6b451..6d6e9a12150b 100644
---- a/drivers/nvdimm/label.c
-+++ b/drivers/nvdimm/label.c
-@@ -755,7 +755,7 @@ static const guid_t *to_abstraction_guid(enum nvdimm_claim_class claim_class,
- 
- static int __pmem_label_update(struct nd_region *nd_region,
- 		struct nd_mapping *nd_mapping, struct nd_namespace_pmem *nspm,
--		int pos)
-+		int pos, unsigned long flags)
- {
- 	struct nd_namespace_common *ndns = &nspm->nsio.common;
- 	struct nd_interleave_set *nd_set = nd_region->nd_set;
-@@ -796,7 +796,7 @@ static int __pmem_label_update(struct nd_region *nd_region,
- 	memcpy(nd_label->uuid, nspm->uuid, NSLABEL_UUID_LEN);
- 	if (nspm->alt_name)
- 		memcpy(nd_label->name, nspm->alt_name, NSLABEL_NAME_LEN);
--	nd_label->flags = __cpu_to_le32(NSLABEL_FLAG_UPDATING);
-+	nd_label->flags = __cpu_to_le32(flags);
- 	nd_label->nlabel = __cpu_to_le16(nd_region->ndr_mappings);
- 	nd_label->position = __cpu_to_le16(pos);
- 	nd_label->isetcookie = __cpu_to_le64(cookie);
-@@ -1249,13 +1249,13 @@ static int del_labels(struct nd_mapping *nd_mapping, u8 *uuid)
- int nd_pmem_namespace_label_update(struct nd_region *nd_region,
- 		struct nd_namespace_pmem *nspm, resource_size_t size)
- {
--	int i;
-+	int i, rc;
- 
- 	for (i = 0; i < nd_region->ndr_mappings; i++) {
- 		struct nd_mapping *nd_mapping = &nd_region->mapping[i];
- 		struct nvdimm_drvdata *ndd = to_ndd(nd_mapping);
- 		struct resource *res;
--		int rc, count = 0;
-+		int count = 0;
- 
- 		if (size == 0) {
- 			rc = del_labels(nd_mapping, nspm->uuid);
-@@ -1273,7 +1273,20 @@ int nd_pmem_namespace_label_update(struct nd_region *nd_region,
- 		if (rc < 0)
- 			return rc;
- 
--		rc = __pmem_label_update(nd_region, nd_mapping, nspm, i);
-+		rc = __pmem_label_update(nd_region, nd_mapping, nspm, i,
-+				NSLABEL_FLAG_UPDATING);
-+		if (rc)
-+			return rc;
-+	}
-+
-+	if (size == 0)
-+		return 0;
-+
-+	/* Clear the UPDATING flag per UEFI 2.7 expectations */
-+	for (i = 0; i < nd_region->ndr_mappings; i++) {
-+		struct nd_mapping *nd_mapping = &nd_region->mapping[i];
-+
-+		rc = __pmem_label_update(nd_region, nd_mapping, nspm, i, 0);
- 		if (rc)
- 			return rc;
- 	}
-diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
-index 4b077555ac70..33a3b23b3db7 100644
---- a/drivers/nvdimm/namespace_devs.c
-+++ b/drivers/nvdimm/namespace_devs.c
-@@ -138,6 +138,7 @@ bool nd_is_uuid_unique(struct device *dev, u8 *uuid)
- bool pmem_should_map_pages(struct device *dev)
- {
- 	struct nd_region *nd_region = to_nd_region(dev->parent);
-+	struct nd_namespace_common *ndns = to_ndns(dev);
- 	struct nd_namespace_io *nsio;
- 
- 	if (!IS_ENABLED(CONFIG_ZONE_DEVICE))
-@@ -149,6 +150,9 @@ bool pmem_should_map_pages(struct device *dev)
- 	if (is_nd_pfn(dev) || is_nd_btt(dev))
- 		return false;
- 
-+	if (ndns->force_raw)
-+		return false;
-+
- 	nsio = to_nd_namespace_io(dev);
- 	if (region_intersects(nsio->res.start, resource_size(&nsio->res),
- 				IORESOURCE_SYSTEM_RAM,
-diff --git a/drivers/nvdimm/pfn_devs.c b/drivers/nvdimm/pfn_devs.c
-index 6f22272e8d80..7760c1b91853 100644
---- a/drivers/nvdimm/pfn_devs.c
-+++ b/drivers/nvdimm/pfn_devs.c
-@@ -593,7 +593,7 @@ static unsigned long init_altmap_base(resource_size_t base)
- 
- static unsigned long init_altmap_reserve(resource_size_t base)
- {
--	unsigned long reserve = PHYS_PFN(SZ_8K);
-+	unsigned long reserve = PFN_UP(SZ_8K);
- 	unsigned long base_pfn = PHYS_PFN(base);
- 
- 	reserve += base_pfn - PFN_SECTION_ALIGN_DOWN(base_pfn);
-@@ -678,7 +678,7 @@ static void trim_pfn_device(struct nd_pfn *nd_pfn, u32 *start_pad, u32 *end_trun
- 	if (region_intersects(start, size, IORESOURCE_SYSTEM_RAM,
- 				IORES_DESC_NONE) == REGION_MIXED
- 			|| !IS_ALIGNED(end, nd_pfn->align)
--			|| nd_region_conflict(nd_region, start, size + adjust))
-+			|| nd_region_conflict(nd_region, start, size))
- 		*end_trunc = end - phys_pmem_align_down(nd_pfn, end);
- }
- 
-diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
-index 89accc76d71c..c37d5bbd72ab 100644
---- a/drivers/nvme/host/fc.c
-+++ b/drivers/nvme/host/fc.c
-@@ -3018,7 +3018,10 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts,
- 
- 	ctrl->ctrl.opts = opts;
- 	ctrl->ctrl.nr_reconnects = 0;
--	ctrl->ctrl.numa_node = dev_to_node(lport->dev);
-+	if (lport->dev)
-+		ctrl->ctrl.numa_node = dev_to_node(lport->dev);
-+	else
-+		ctrl->ctrl.numa_node = NUMA_NO_NODE;
- 	INIT_LIST_HEAD(&ctrl->ctrl_list);
- 	ctrl->lport = lport;
- 	ctrl->rport = rport;
-diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
-index 88d260f31835..02c63c463222 100644
---- a/drivers/nvme/target/core.c
-+++ b/drivers/nvme/target/core.c
-@@ -1171,6 +1171,15 @@ static void nvmet_release_p2p_ns_map(struct nvmet_ctrl *ctrl)
- 	put_device(ctrl->p2p_client);
- }
- 
-+static void nvmet_fatal_error_handler(struct work_struct *work)
-+{
-+	struct nvmet_ctrl *ctrl =
-+			container_of(work, struct nvmet_ctrl, fatal_err_work);
-+
-+	pr_err("ctrl %d fatal error occurred!\n", ctrl->cntlid);
-+	ctrl->ops->delete_ctrl(ctrl);
-+}
-+
- u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
- 		struct nvmet_req *req, u32 kato, struct nvmet_ctrl **ctrlp)
- {
-@@ -1213,6 +1222,7 @@ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
- 	INIT_WORK(&ctrl->async_event_work, nvmet_async_event_work);
- 	INIT_LIST_HEAD(&ctrl->async_events);
- 	INIT_RADIX_TREE(&ctrl->p2p_ns_map, GFP_KERNEL);
-+	INIT_WORK(&ctrl->fatal_err_work, nvmet_fatal_error_handler);
- 
- 	memcpy(ctrl->subsysnqn, subsysnqn, NVMF_NQN_SIZE);
- 	memcpy(ctrl->hostnqn, hostnqn, NVMF_NQN_SIZE);
-@@ -1316,21 +1326,11 @@ void nvmet_ctrl_put(struct nvmet_ctrl *ctrl)
- 	kref_put(&ctrl->ref, nvmet_ctrl_free);
- }
- 
--static void nvmet_fatal_error_handler(struct work_struct *work)
--{
--	struct nvmet_ctrl *ctrl =
--			container_of(work, struct nvmet_ctrl, fatal_err_work);
--
--	pr_err("ctrl %d fatal error occurred!\n", ctrl->cntlid);
--	ctrl->ops->delete_ctrl(ctrl);
--}
--
- void nvmet_ctrl_fatal_error(struct nvmet_ctrl *ctrl)
- {
- 	mutex_lock(&ctrl->lock);
- 	if (!(ctrl->csts & NVME_CSTS_CFS)) {
- 		ctrl->csts |= NVME_CSTS_CFS;
--		INIT_WORK(&ctrl->fatal_err_work, nvmet_fatal_error_handler);
- 		schedule_work(&ctrl->fatal_err_work);
- 	}
- 	mutex_unlock(&ctrl->lock);
-diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
-index f7301bb4ef3b..3ce65927e11c 100644
---- a/drivers/nvmem/core.c
-+++ b/drivers/nvmem/core.c
-@@ -686,9 +686,7 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
- 	if (rval)
- 		goto err_remove_cells;
- 
--	rval = blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem);
--	if (rval)
--		goto err_remove_cells;
-+	blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem);
- 
- 	return nvmem;
- 
-diff --git a/drivers/opp/core.c b/drivers/opp/core.c
-index 18f1639dbc4a..f5d2fa195f5f 100644
---- a/drivers/opp/core.c
-+++ b/drivers/opp/core.c
-@@ -743,7 +743,7 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
- 		old_freq, freq);
- 
- 	/* Scaling up? Configure required OPPs before frequency */
--	if (freq > old_freq) {
-+	if (freq >= old_freq) {
- 		ret = _set_required_opps(dev, opp_table, opp);
- 		if (ret)
- 			goto put_opp;
-diff --git a/drivers/parport/parport_pc.c b/drivers/parport/parport_pc.c
-index 9c8249f74479..6296dbb83d47 100644
---- a/drivers/parport/parport_pc.c
-+++ b/drivers/parport/parport_pc.c
-@@ -1377,7 +1377,7 @@ static struct superio_struct *find_superio(struct parport *p)
- {
- 	int i;
- 	for (i = 0; i < NR_SUPERIOS; i++)
--		if (superios[i].io != p->base)
-+		if (superios[i].io == p->base)
- 			return &superios[i];
- 	return NULL;
- }
-diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
-index 721d60a5d9e4..9c5614f21b8e 100644
---- a/drivers/pci/controller/dwc/pcie-designware-host.c
-+++ b/drivers/pci/controller/dwc/pcie-designware-host.c
-@@ -439,7 +439,7 @@ int dw_pcie_host_init(struct pcie_port *pp)
- 	if (ret)
- 		pci->num_viewport = 2;
- 
--	if (IS_ENABLED(CONFIG_PCI_MSI)) {
-+	if (IS_ENABLED(CONFIG_PCI_MSI) && pci_msi_enabled()) {
- 		/*
- 		 * If a specific SoC driver needs to change the
- 		 * default number of vectors, it needs to implement
-diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
-index d185ea5fe996..a7f703556790 100644
---- a/drivers/pci/controller/dwc/pcie-qcom.c
-+++ b/drivers/pci/controller/dwc/pcie-qcom.c
-@@ -1228,7 +1228,7 @@ static int qcom_pcie_probe(struct platform_device *pdev)
- 
- 	pcie->ops = of_device_get_match_data(dev);
- 
--	pcie->reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_LOW);
-+	pcie->reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_HIGH);
- 	if (IS_ERR(pcie->reset)) {
- 		ret = PTR_ERR(pcie->reset);
- 		goto err_pm_runtime_put;
-diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
-index 750081c1cb48..6eecae447af3 100644
---- a/drivers/pci/controller/pci-aardvark.c
-+++ b/drivers/pci/controller/pci-aardvark.c
-@@ -499,7 +499,7 @@ static void advk_sw_pci_bridge_init(struct advk_pcie *pcie)
- 	bridge->data = pcie;
- 	bridge->ops = &advk_pci_bridge_emul_ops;
- 
--	pci_bridge_emul_init(bridge);
-+	pci_bridge_emul_init(bridge, 0);
- 
- }
- 
-diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c
-index fa0fc46edb0c..d3a0419e42f2 100644
---- a/drivers/pci/controller/pci-mvebu.c
-+++ b/drivers/pci/controller/pci-mvebu.c
-@@ -583,7 +583,7 @@ static void mvebu_pci_bridge_emul_init(struct mvebu_pcie_port *port)
- 	bridge->data = port;
- 	bridge->ops = &mvebu_pci_bridge_emul_ops;
- 
--	pci_bridge_emul_init(bridge);
-+	pci_bridge_emul_init(bridge, PCI_BRIDGE_EMUL_NO_PREFETCHABLE_BAR);
- }
- 
- static inline struct mvebu_pcie *sys_to_pcie(struct pci_sys_data *sys)
-diff --git a/drivers/pci/controller/pcie-mediatek.c b/drivers/pci/controller/pcie-mediatek.c
-index 55e471c18e8d..c42fe5c4319f 100644
---- a/drivers/pci/controller/pcie-mediatek.c
-+++ b/drivers/pci/controller/pcie-mediatek.c
-@@ -654,7 +654,6 @@ static int mtk_pcie_startup_port_v2(struct mtk_pcie_port *port)
- 	struct resource *mem = &pcie->mem;
- 	const struct mtk_pcie_soc *soc = port->pcie->soc;
- 	u32 val;
--	size_t size;
- 	int err;
- 
- 	/* MT7622 platforms need to enable LTSSM and ASPM from PCIe subsys */
-@@ -706,8 +705,8 @@ static int mtk_pcie_startup_port_v2(struct mtk_pcie_port *port)
- 		mtk_pcie_enable_msi(port);
- 
- 	/* Set AHB to PCIe translation windows */
--	size = mem->end - mem->start;
--	val = lower_32_bits(mem->start) | AHB2PCIE_SIZE(fls(size));
-+	val = lower_32_bits(mem->start) |
-+	      AHB2PCIE_SIZE(fls(resource_size(mem)));
- 	writel(val, port->base + PCIE_AHB_TRANS_BASE0_L);
- 
- 	val = upper_32_bits(mem->start);
-diff --git a/drivers/pci/hotplug/pciehp_ctrl.c b/drivers/pci/hotplug/pciehp_ctrl.c
-index 3f3df4c29f6e..905282a8ddaa 100644
---- a/drivers/pci/hotplug/pciehp_ctrl.c
-+++ b/drivers/pci/hotplug/pciehp_ctrl.c
-@@ -115,6 +115,10 @@ static void remove_board(struct controller *ctrl, bool safe_removal)
- 		 * removed from the slot/adapter.
- 		 */
- 		msleep(1000);
-+
-+		/* Ignore link or presence changes caused by power off */
-+		atomic_and(~(PCI_EXP_SLTSTA_DLLSC | PCI_EXP_SLTSTA_PDC),
-+			   &ctrl->pending_events);
- 	}
- 
- 	/* turn off Green LED */
-diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
-index 7dd443aea5a5..8bfcb8cd0900 100644
---- a/drivers/pci/hotplug/pciehp_hpc.c
-+++ b/drivers/pci/hotplug/pciehp_hpc.c
-@@ -156,9 +156,9 @@ static void pcie_do_write_cmd(struct controller *ctrl, u16 cmd,
- 	slot_ctrl |= (cmd & mask);
- 	ctrl->cmd_busy = 1;
- 	smp_mb();
-+	ctrl->slot_ctrl = slot_ctrl;
- 	pcie_capability_write_word(pdev, PCI_EXP_SLTCTL, slot_ctrl);
- 	ctrl->cmd_started = jiffies;
--	ctrl->slot_ctrl = slot_ctrl;
- 
- 	/*
- 	 * Controllers with the Intel CF118 and similar errata advertise
-@@ -736,12 +736,25 @@ void pcie_clear_hotplug_events(struct controller *ctrl)
- 
- void pcie_enable_interrupt(struct controller *ctrl)
- {
--	pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_HPIE, PCI_EXP_SLTCTL_HPIE);
-+	u16 mask;
-+
-+	mask = PCI_EXP_SLTCTL_HPIE | PCI_EXP_SLTCTL_DLLSCE;
-+	pcie_write_cmd(ctrl, mask, mask);
- }
- 
- void pcie_disable_interrupt(struct controller *ctrl)
- {
--	pcie_write_cmd(ctrl, 0, PCI_EXP_SLTCTL_HPIE);
-+	u16 mask;
-+
-+	/*
-+	 * Mask hot-plug interrupt to prevent it triggering immediately
-+	 * when the link goes inactive (we still get PME when any of the
-+	 * enabled events is detected). Same goes with Link Layer State
-+	 * changed event which generates PME immediately when the link goes
-+	 * inactive so mask it as well.
-+	 */
-+	mask = PCI_EXP_SLTCTL_HPIE | PCI_EXP_SLTCTL_DLLSCE;
-+	pcie_write_cmd(ctrl, 0, mask);
- }
- 
- /*
-diff --git a/drivers/pci/pci-bridge-emul.c b/drivers/pci/pci-bridge-emul.c
-index 129738362d90..83fb077d0b41 100644
---- a/drivers/pci/pci-bridge-emul.c
-+++ b/drivers/pci/pci-bridge-emul.c
-@@ -24,29 +24,6 @@
- #define PCI_CAP_PCIE_START	PCI_BRIDGE_CONF_END
- #define PCI_CAP_PCIE_END	(PCI_CAP_PCIE_START + PCI_EXP_SLTSTA2 + 2)
- 
--/*
-- * Initialize a pci_bridge_emul structure to represent a fake PCI
-- * bridge configuration space. The caller needs to have initialized
-- * the PCI configuration space with whatever values make sense
-- * (typically at least vendor, device, revision), the ->ops pointer,
-- * and optionally ->data and ->has_pcie.
-- */
--void pci_bridge_emul_init(struct pci_bridge_emul *bridge)
--{
--	bridge->conf.class_revision |= PCI_CLASS_BRIDGE_PCI << 16;
--	bridge->conf.header_type = PCI_HEADER_TYPE_BRIDGE;
--	bridge->conf.cache_line_size = 0x10;
--	bridge->conf.status = PCI_STATUS_CAP_LIST;
--
--	if (bridge->has_pcie) {
--		bridge->conf.capabilities_pointer = PCI_CAP_PCIE_START;
--		bridge->pcie_conf.cap_id = PCI_CAP_ID_EXP;
--		/* Set PCIe v2, root port, slot support */
--		bridge->pcie_conf.cap = PCI_EXP_TYPE_ROOT_PORT << 4 | 2 |
--			PCI_EXP_FLAGS_SLOT;
--	}
--}
--
- struct pci_bridge_reg_behavior {
- 	/* Read-only bits */
- 	u32 ro;
-@@ -283,6 +260,61 @@ const static struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = {
- 	},
- };
- 
-+/*
-+ * Initialize a pci_bridge_emul structure to represent a fake PCI
-+ * bridge configuration space. The caller needs to have initialized
-+ * the PCI configuration space with whatever values make sense
-+ * (typically at least vendor, device, revision), the ->ops pointer,
-+ * and optionally ->data and ->has_pcie.
-+ */
-+int pci_bridge_emul_init(struct pci_bridge_emul *bridge,
-+			 unsigned int flags)
-+{
-+	bridge->conf.class_revision |= PCI_CLASS_BRIDGE_PCI << 16;
-+	bridge->conf.header_type = PCI_HEADER_TYPE_BRIDGE;
-+	bridge->conf.cache_line_size = 0x10;
-+	bridge->conf.status = PCI_STATUS_CAP_LIST;
-+	bridge->pci_regs_behavior = kmemdup(pci_regs_behavior,
-+					    sizeof(pci_regs_behavior),
-+					    GFP_KERNEL);
-+	if (!bridge->pci_regs_behavior)
-+		return -ENOMEM;
-+
-+	if (bridge->has_pcie) {
-+		bridge->conf.capabilities_pointer = PCI_CAP_PCIE_START;
-+		bridge->pcie_conf.cap_id = PCI_CAP_ID_EXP;
-+		/* Set PCIe v2, root port, slot support */
-+		bridge->pcie_conf.cap = PCI_EXP_TYPE_ROOT_PORT << 4 | 2 |
-+			PCI_EXP_FLAGS_SLOT;
-+		bridge->pcie_cap_regs_behavior =
-+			kmemdup(pcie_cap_regs_behavior,
-+				sizeof(pcie_cap_regs_behavior),
-+				GFP_KERNEL);
-+		if (!bridge->pcie_cap_regs_behavior) {
-+			kfree(bridge->pci_regs_behavior);
-+			return -ENOMEM;
-+		}
-+	}
-+
-+	if (flags & PCI_BRIDGE_EMUL_NO_PREFETCHABLE_BAR) {
-+		bridge->pci_regs_behavior[PCI_PREF_MEMORY_BASE / 4].ro = ~0;
-+		bridge->pci_regs_behavior[PCI_PREF_MEMORY_BASE / 4].rw = 0;
-+	}
-+
-+	return 0;
-+}
-+
-+/*
-+ * Cleanup a pci_bridge_emul structure that was previously initilized
-+ * using pci_bridge_emul_init().
-+ */
-+void pci_bridge_emul_cleanup(struct pci_bridge_emul *bridge)
-+{
-+	if (bridge->has_pcie)
-+		kfree(bridge->pcie_cap_regs_behavior);
-+	kfree(bridge->pci_regs_behavior);
-+}
-+
- /*
-  * Should be called by the PCI controller driver when reading the PCI
-  * configuration space of the fake bridge. It will call back the
-@@ -312,11 +344,11 @@ int pci_bridge_emul_conf_read(struct pci_bridge_emul *bridge, int where,
- 		reg -= PCI_CAP_PCIE_START;
- 		read_op = bridge->ops->read_pcie;
- 		cfgspace = (u32 *) &bridge->pcie_conf;
--		behavior = pcie_cap_regs_behavior;
-+		behavior = bridge->pcie_cap_regs_behavior;
- 	} else {
- 		read_op = bridge->ops->read_base;
- 		cfgspace = (u32 *) &bridge->conf;
--		behavior = pci_regs_behavior;
-+		behavior = bridge->pci_regs_behavior;
- 	}
- 
- 	if (read_op)
-@@ -383,11 +415,11 @@ int pci_bridge_emul_conf_write(struct pci_bridge_emul *bridge, int where,
- 		reg -= PCI_CAP_PCIE_START;
- 		write_op = bridge->ops->write_pcie;
- 		cfgspace = (u32 *) &bridge->pcie_conf;
--		behavior = pcie_cap_regs_behavior;
-+		behavior = bridge->pcie_cap_regs_behavior;
- 	} else {
- 		write_op = bridge->ops->write_base;
- 		cfgspace = (u32 *) &bridge->conf;
--		behavior = pci_regs_behavior;
-+		behavior = bridge->pci_regs_behavior;
- 	}
- 
- 	/* Keep all bits, except the RW bits */
-diff --git a/drivers/pci/pci-bridge-emul.h b/drivers/pci/pci-bridge-emul.h
-index 9d510ccf738b..e65b1b79899d 100644
---- a/drivers/pci/pci-bridge-emul.h
-+++ b/drivers/pci/pci-bridge-emul.h
-@@ -107,15 +107,26 @@ struct pci_bridge_emul_ops {
- 			   u32 old, u32 new, u32 mask);
- };
- 
-+struct pci_bridge_reg_behavior;
-+
- struct pci_bridge_emul {
- 	struct pci_bridge_emul_conf conf;
- 	struct pci_bridge_emul_pcie_conf pcie_conf;
- 	struct pci_bridge_emul_ops *ops;
-+	struct pci_bridge_reg_behavior *pci_regs_behavior;
-+	struct pci_bridge_reg_behavior *pcie_cap_regs_behavior;
- 	void *data;
- 	bool has_pcie;
- };
- 
--void pci_bridge_emul_init(struct pci_bridge_emul *bridge);
-+enum {
-+	PCI_BRIDGE_EMUL_NO_PREFETCHABLE_BAR = BIT(0),
-+};
-+
-+int pci_bridge_emul_init(struct pci_bridge_emul *bridge,
-+			 unsigned int flags);
-+void pci_bridge_emul_cleanup(struct pci_bridge_emul *bridge);
-+
- int pci_bridge_emul_conf_read(struct pci_bridge_emul *bridge, int where,
- 			      int size, u32 *value);
- int pci_bridge_emul_conf_write(struct pci_bridge_emul *bridge, int where,
-diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c
-index e435d12e61a0..7b77754a82de 100644
---- a/drivers/pci/pcie/dpc.c
-+++ b/drivers/pci/pcie/dpc.c
-@@ -202,6 +202,28 @@ static void dpc_process_rp_pio_error(struct dpc_dev *dpc)
- 	pci_write_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_STATUS, status);
- }
- 
-+static int dpc_get_aer_uncorrect_severity(struct pci_dev *dev,
-+					  struct aer_err_info *info)
-+{
-+	int pos = dev->aer_cap;
-+	u32 status, mask, sev;
-+
-+	pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, &status);
-+	pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_MASK, &mask);
-+	status &= ~mask;
-+	if (!status)
-+		return 0;
-+
-+	pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_SEVER, &sev);
-+	status &= sev;
-+	if (status)
-+		info->severity = AER_FATAL;
-+	else
-+		info->severity = AER_NONFATAL;
-+
-+	return 1;
-+}
-+
- static irqreturn_t dpc_handler(int irq, void *context)
- {
- 	struct aer_err_info info;
-@@ -229,9 +251,12 @@ static irqreturn_t dpc_handler(int irq, void *context)
- 	/* show RP PIO error detail information */
- 	if (dpc->rp_extensions && reason == 3 && ext_reason == 0)
- 		dpc_process_rp_pio_error(dpc);
--	else if (reason == 0 && aer_get_device_error_info(pdev, &info)) {
-+	else if (reason == 0 &&
-+		 dpc_get_aer_uncorrect_severity(pdev, &info) &&
-+		 aer_get_device_error_info(pdev, &info)) {
- 		aer_print_error(pdev, &info);
- 		pci_cleanup_aer_uncorrect_error_status(pdev);
-+		pci_aer_clear_fatal_status(pdev);
- 	}
- 
- 	/* We configure DPC so it only triggers on ERR_FATAL */
-diff --git a/drivers/pci/pcie/pme.c b/drivers/pci/pcie/pme.c
-index 0dbcf429089f..efa5b552914b 100644
---- a/drivers/pci/pcie/pme.c
-+++ b/drivers/pci/pcie/pme.c
-@@ -363,6 +363,16 @@ static bool pcie_pme_check_wakeup(struct pci_bus *bus)
- 	return false;
- }
- 
-+static void pcie_pme_disable_interrupt(struct pci_dev *port,
-+				       struct pcie_pme_service_data *data)
-+{
-+	spin_lock_irq(&data->lock);
-+	pcie_pme_interrupt_enable(port, false);
-+	pcie_clear_root_pme_status(port);
-+	data->noirq = true;
-+	spin_unlock_irq(&data->lock);
-+}
-+
- /**
-  * pcie_pme_suspend - Suspend PCIe PME service device.
-  * @srv: PCIe service device to suspend.
-@@ -387,11 +397,7 @@ static int pcie_pme_suspend(struct pcie_device *srv)
- 			return 0;
- 	}
- 
--	spin_lock_irq(&data->lock);
--	pcie_pme_interrupt_enable(port, false);
--	pcie_clear_root_pme_status(port);
--	data->noirq = true;
--	spin_unlock_irq(&data->lock);
-+	pcie_pme_disable_interrupt(port, data);
- 
- 	synchronize_irq(srv->irq);
- 
-@@ -426,35 +432,12 @@ static int pcie_pme_resume(struct pcie_device *srv)
-  * @srv - PCIe service device to remove.
-  */
- static void pcie_pme_remove(struct pcie_device *srv)
--{
--	pcie_pme_suspend(srv);
--	free_irq(srv->irq, srv);
--	kfree(get_service_data(srv));
--}
--
--static int pcie_pme_runtime_suspend(struct pcie_device *srv)
--{
--	struct pcie_pme_service_data *data = get_service_data(srv);
--
--	spin_lock_irq(&data->lock);
--	pcie_pme_interrupt_enable(srv->port, false);
--	pcie_clear_root_pme_status(srv->port);
--	data->noirq = true;
--	spin_unlock_irq(&data->lock);
--
--	return 0;
--}
--
--static int pcie_pme_runtime_resume(struct pcie_device *srv)
- {
- 	struct pcie_pme_service_data *data = get_service_data(srv);
- 
--	spin_lock_irq(&data->lock);
--	pcie_pme_interrupt_enable(srv->port, true);
--	data->noirq = false;
--	spin_unlock_irq(&data->lock);
--
--	return 0;
-+	pcie_pme_disable_interrupt(srv->port, data);
-+	free_irq(srv->irq, srv);
-+	kfree(data);
- }
- 
- static struct pcie_port_service_driver pcie_pme_driver = {
-@@ -464,8 +447,6 @@ static struct pcie_port_service_driver pcie_pme_driver = {
- 
- 	.probe		= pcie_pme_probe,
- 	.suspend	= pcie_pme_suspend,
--	.runtime_suspend = pcie_pme_runtime_suspend,
--	.runtime_resume	= pcie_pme_runtime_resume,
- 	.resume		= pcie_pme_resume,
- 	.remove		= pcie_pme_remove,
- };
-diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
-index 257b9f6f2ebb..c46a3fcb341e 100644
---- a/drivers/pci/probe.c
-+++ b/drivers/pci/probe.c
-@@ -2071,11 +2071,8 @@ static void pci_configure_ltr(struct pci_dev *dev)
- {
- #ifdef CONFIG_PCIEASPM
- 	struct pci_host_bridge *host = pci_find_host_bridge(dev->bus);
--	u32 cap;
- 	struct pci_dev *bridge;
--
--	if (!host->native_ltr)
--		return;
-+	u32 cap, ctl;
- 
- 	if (!pci_is_pcie(dev))
- 		return;
-@@ -2084,22 +2081,35 @@ static void pci_configure_ltr(struct pci_dev *dev)
- 	if (!(cap & PCI_EXP_DEVCAP2_LTR))
- 		return;
- 
--	/*
--	 * Software must not enable LTR in an Endpoint unless the Root
--	 * Complex and all intermediate Switches indicate support for LTR.
--	 * PCIe r3.1, sec 6.18.
--	 */
--	if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT)
--		dev->ltr_path = 1;
--	else {
-+	pcie_capability_read_dword(dev, PCI_EXP_DEVCTL2, &ctl);
-+	if (ctl & PCI_EXP_DEVCTL2_LTR_EN) {
-+		if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT) {
-+			dev->ltr_path = 1;
-+			return;
-+		}
-+
- 		bridge = pci_upstream_bridge(dev);
- 		if (bridge && bridge->ltr_path)
- 			dev->ltr_path = 1;
-+
-+		return;
- 	}
- 
--	if (dev->ltr_path)
-+	if (!host->native_ltr)
-+		return;
-+
-+	/*
-+	 * Software must not enable LTR in an Endpoint unless the Root
-+	 * Complex and all intermediate Switches indicate support for LTR.
-+	 * PCIe r4.0, sec 6.18.
-+	 */
-+	if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT ||
-+	    ((bridge = pci_upstream_bridge(dev)) &&
-+	      bridge->ltr_path)) {
- 		pcie_capability_set_word(dev, PCI_EXP_DEVCTL2,
- 					 PCI_EXP_DEVCTL2_LTR_EN);
-+		dev->ltr_path = 1;
-+	}
- #endif
- }
- 
-diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
-index e2a879e93d86..fba03a7d5c7f 100644
---- a/drivers/pci/quirks.c
-+++ b/drivers/pci/quirks.c
-@@ -3877,6 +3877,8 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9128,
- /* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c14 */
- DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9130,
- 			 quirk_dma_func1_alias);
-+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9170,
-+			 quirk_dma_func1_alias);
- /* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c47 + c57 */
- DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9172,
- 			 quirk_dma_func1_alias);
-diff --git a/drivers/perf/arm_spe_pmu.c b/drivers/perf/arm_spe_pmu.c
-index 8e46a9dad2fa..7cb766dafe85 100644
---- a/drivers/perf/arm_spe_pmu.c
-+++ b/drivers/perf/arm_spe_pmu.c
-@@ -824,10 +824,10 @@ static void arm_spe_pmu_read(struct perf_event *event)
- {
- }
- 
--static void *arm_spe_pmu_setup_aux(int cpu, void **pages, int nr_pages,
--				   bool snapshot)
-+static void *arm_spe_pmu_setup_aux(struct perf_event *event, void **pages,
-+				   int nr_pages, bool snapshot)
- {
--	int i;
-+	int i, cpu = event->cpu;
- 	struct page **pglist;
- 	struct arm_spe_pmu_buf *buf;
- 
-diff --git a/drivers/phy/allwinner/phy-sun4i-usb.c b/drivers/phy/allwinner/phy-sun4i-usb.c
-index 5163097b43df..4bbd9ede38c8 100644
---- a/drivers/phy/allwinner/phy-sun4i-usb.c
-+++ b/drivers/phy/allwinner/phy-sun4i-usb.c
-@@ -485,8 +485,11 @@ static int sun4i_usb_phy_set_mode(struct phy *_phy,
- 	struct sun4i_usb_phy_data *data = to_sun4i_usb_phy_data(phy);
- 	int new_mode;
- 
--	if (phy->index != 0)
-+	if (phy->index != 0) {
-+		if (mode == PHY_MODE_USB_HOST)
-+			return 0;
- 		return -EINVAL;
-+	}
- 
- 	switch (mode) {
- 	case PHY_MODE_USB_HOST:
-diff --git a/drivers/pinctrl/meson/pinctrl-meson.c b/drivers/pinctrl/meson/pinctrl-meson.c
-index ea87d739f534..a4ae1ac5369e 100644
---- a/drivers/pinctrl/meson/pinctrl-meson.c
-+++ b/drivers/pinctrl/meson/pinctrl-meson.c
-@@ -31,6 +31,9 @@
-  * In some cases the register ranges for pull enable and pull
-  * direction are the same and thus there are only 3 register ranges.
-  *
-+ * Since Meson G12A SoC, the ao register ranges for gpio, pull enable
-+ * and pull direction are the same, so there are only 2 register ranges.
-+ *
-  * For the pull and GPIO configuration every bank uses a contiguous
-  * set of bits in the register sets described above; the same register
-  * can be shared by more banks with different offsets.
-@@ -488,23 +491,22 @@ static int meson_pinctrl_parse_dt(struct meson_pinctrl *pc,
- 		return PTR_ERR(pc->reg_mux);
- 	}
- 
--	pc->reg_pull = meson_map_resource(pc, gpio_np, "pull");
--	if (IS_ERR(pc->reg_pull)) {
--		dev_err(pc->dev, "pull registers not found\n");
--		return PTR_ERR(pc->reg_pull);
-+	pc->reg_gpio = meson_map_resource(pc, gpio_np, "gpio");
-+	if (IS_ERR(pc->reg_gpio)) {
-+		dev_err(pc->dev, "gpio registers not found\n");
-+		return PTR_ERR(pc->reg_gpio);
- 	}
- 
-+	pc->reg_pull = meson_map_resource(pc, gpio_np, "pull");
-+	/* Use gpio region if pull one is not present */
-+	if (IS_ERR(pc->reg_pull))
-+		pc->reg_pull = pc->reg_gpio;
-+
- 	pc->reg_pullen = meson_map_resource(pc, gpio_np, "pull-enable");
- 	/* Use pull region if pull-enable one is not present */
- 	if (IS_ERR(pc->reg_pullen))
- 		pc->reg_pullen = pc->reg_pull;
- 
--	pc->reg_gpio = meson_map_resource(pc, gpio_np, "gpio");
--	if (IS_ERR(pc->reg_gpio)) {
--		dev_err(pc->dev, "gpio registers not found\n");
--		return PTR_ERR(pc->reg_gpio);
--	}
--
- 	return 0;
- }
- 
-diff --git a/drivers/pinctrl/meson/pinctrl-meson8b.c b/drivers/pinctrl/meson/pinctrl-meson8b.c
-index 0f140a802137..7f76000cc12e 100644
---- a/drivers/pinctrl/meson/pinctrl-meson8b.c
-+++ b/drivers/pinctrl/meson/pinctrl-meson8b.c
-@@ -346,6 +346,8 @@ static const unsigned int eth_rx_dv_pins[]	= { DIF_1_P };
- static const unsigned int eth_rx_clk_pins[]	= { DIF_1_N };
- static const unsigned int eth_txd0_1_pins[]	= { DIF_2_P };
- static const unsigned int eth_txd1_1_pins[]	= { DIF_2_N };
-+static const unsigned int eth_rxd3_pins[]	= { DIF_2_P };
-+static const unsigned int eth_rxd2_pins[]	= { DIF_2_N };
- static const unsigned int eth_tx_en_pins[]	= { DIF_3_P };
- static const unsigned int eth_ref_clk_pins[]	= { DIF_3_N };
- static const unsigned int eth_mdc_pins[]	= { DIF_4_P };
-@@ -599,6 +601,8 @@ static struct meson_pmx_group meson8b_cbus_groups[] = {
- 	GROUP(eth_ref_clk,	6,	8),
- 	GROUP(eth_mdc,		6,	9),
- 	GROUP(eth_mdio_en,	6,	10),
-+	GROUP(eth_rxd3,		7,	22),
-+	GROUP(eth_rxd2,		7,	23),
- };
- 
- static struct meson_pmx_group meson8b_aobus_groups[] = {
-@@ -748,7 +752,7 @@ static const char * const ethernet_groups[] = {
- 	"eth_tx_clk", "eth_tx_en", "eth_txd1_0", "eth_txd1_1",
- 	"eth_txd0_0", "eth_txd0_1", "eth_rx_clk", "eth_rx_dv",
- 	"eth_rxd1", "eth_rxd0", "eth_mdio_en", "eth_mdc", "eth_ref_clk",
--	"eth_txd2", "eth_txd3"
-+	"eth_txd2", "eth_txd3", "eth_rxd3", "eth_rxd2"
- };
- 
- static const char * const i2c_a_groups[] = {
-diff --git a/drivers/pinctrl/sh-pfc/pfc-r8a77990.c b/drivers/pinctrl/sh-pfc/pfc-r8a77990.c
-index e40908dc37e0..1ce286f7b286 100644
---- a/drivers/pinctrl/sh-pfc/pfc-r8a77990.c
-+++ b/drivers/pinctrl/sh-pfc/pfc-r8a77990.c
-@@ -391,29 +391,33 @@ FM(IP12_23_20)	IP12_23_20	FM(IP13_23_20)	IP13_23_20	FM(IP14_23_20)	IP14_23_20	FM
- FM(IP12_27_24)	IP12_27_24	FM(IP13_27_24)	IP13_27_24	FM(IP14_27_24)	IP14_27_24	FM(IP15_27_24)	IP15_27_24 \
- FM(IP12_31_28)	IP12_31_28	FM(IP13_31_28)	IP13_31_28	FM(IP14_31_28)	IP14_31_28	FM(IP15_31_28)	IP15_31_28
- 
-+/* The bit numbering in MOD_SEL fields is reversed */
-+#define REV4(f0, f1, f2, f3)			f0 f2 f1 f3
-+#define REV8(f0, f1, f2, f3, f4, f5, f6, f7)	f0 f4 f2 f6 f1 f5 f3 f7
-+
- /* MOD_SEL0 */			/* 0 */				/* 1 */				/* 2 */				/* 3 */			/* 4 */			/* 5 */		/* 6 */		/* 7 */
--#define MOD_SEL0_30_29		FM(SEL_ADGB_0)			FM(SEL_ADGB_1)			FM(SEL_ADGB_2)			F_(0, 0)
-+#define MOD_SEL0_30_29	   REV4(FM(SEL_ADGB_0),			FM(SEL_ADGB_1),			FM(SEL_ADGB_2),			F_(0, 0))
- #define MOD_SEL0_28		FM(SEL_DRIF0_0)			FM(SEL_DRIF0_1)
--#define MOD_SEL0_27_26		FM(SEL_FM_0)			FM(SEL_FM_1)			FM(SEL_FM_2)			F_(0, 0)
-+#define MOD_SEL0_27_26	   REV4(FM(SEL_FM_0),			FM(SEL_FM_1),			FM(SEL_FM_2),			F_(0, 0))
- #define MOD_SEL0_25		FM(SEL_FSO_0)			FM(SEL_FSO_1)
- #define MOD_SEL0_24		FM(SEL_HSCIF0_0)		FM(SEL_HSCIF0_1)
- #define MOD_SEL0_23		FM(SEL_HSCIF1_0)		FM(SEL_HSCIF1_1)
- #define MOD_SEL0_22		FM(SEL_HSCIF2_0)		FM(SEL_HSCIF2_1)
--#define MOD_SEL0_21_20		FM(SEL_I2C1_0)			FM(SEL_I2C1_1)			FM(SEL_I2C1_2)			FM(SEL_I2C1_3)
--#define MOD_SEL0_19_18_17	FM(SEL_I2C2_0)			FM(SEL_I2C2_1)			FM(SEL_I2C2_2)			FM(SEL_I2C2_3)		FM(SEL_I2C2_4)		F_(0, 0)	F_(0, 0)	F_(0, 0)
-+#define MOD_SEL0_21_20	   REV4(FM(SEL_I2C1_0),			FM(SEL_I2C1_1),			FM(SEL_I2C1_2),			FM(SEL_I2C1_3))
-+#define MOD_SEL0_19_18_17  REV8(FM(SEL_I2C2_0),			FM(SEL_I2C2_1),			FM(SEL_I2C2_2),			FM(SEL_I2C2_3),		FM(SEL_I2C2_4),		F_(0, 0),	F_(0, 0),	F_(0, 0))
- #define MOD_SEL0_16		FM(SEL_NDFC_0)			FM(SEL_NDFC_1)
- #define MOD_SEL0_15		FM(SEL_PWM0_0)			FM(SEL_PWM0_1)
- #define MOD_SEL0_14		FM(SEL_PWM1_0)			FM(SEL_PWM1_1)
--#define MOD_SEL0_13_12		FM(SEL_PWM2_0)			FM(SEL_PWM2_1)			FM(SEL_PWM2_2)			F_(0, 0)
--#define MOD_SEL0_11_10		FM(SEL_PWM3_0)			FM(SEL_PWM3_1)			FM(SEL_PWM3_2)			F_(0, 0)
-+#define MOD_SEL0_13_12	   REV4(FM(SEL_PWM2_0),			FM(SEL_PWM2_1),			FM(SEL_PWM2_2),			F_(0, 0))
-+#define MOD_SEL0_11_10	   REV4(FM(SEL_PWM3_0),			FM(SEL_PWM3_1),			FM(SEL_PWM3_2),			F_(0, 0))
- #define MOD_SEL0_9		FM(SEL_PWM4_0)			FM(SEL_PWM4_1)
- #define MOD_SEL0_8		FM(SEL_PWM5_0)			FM(SEL_PWM5_1)
- #define MOD_SEL0_7		FM(SEL_PWM6_0)			FM(SEL_PWM6_1)
--#define MOD_SEL0_6_5		FM(SEL_REMOCON_0)		FM(SEL_REMOCON_1)		FM(SEL_REMOCON_2)		F_(0, 0)
-+#define MOD_SEL0_6_5	   REV4(FM(SEL_REMOCON_0),		FM(SEL_REMOCON_1),		FM(SEL_REMOCON_2),		F_(0, 0))
- #define MOD_SEL0_4		FM(SEL_SCIF_0)			FM(SEL_SCIF_1)
- #define MOD_SEL0_3		FM(SEL_SCIF0_0)			FM(SEL_SCIF0_1)
- #define MOD_SEL0_2		FM(SEL_SCIF2_0)			FM(SEL_SCIF2_1)
--#define MOD_SEL0_1_0		FM(SEL_SPEED_PULSE_IF_0)	FM(SEL_SPEED_PULSE_IF_1)	FM(SEL_SPEED_PULSE_IF_2)	F_(0, 0)
-+#define MOD_SEL0_1_0	   REV4(FM(SEL_SPEED_PULSE_IF_0),	FM(SEL_SPEED_PULSE_IF_1),	FM(SEL_SPEED_PULSE_IF_2),	F_(0, 0))
- 
- /* MOD_SEL1 */			/* 0 */				/* 1 */				/* 2 */				/* 3 */			/* 4 */			/* 5 */		/* 6 */		/* 7 */
- #define MOD_SEL1_31		FM(SEL_SIMCARD_0)		FM(SEL_SIMCARD_1)
-@@ -422,18 +426,18 @@ FM(IP12_31_28)	IP12_31_28	FM(IP13_31_28)	IP13_31_28	FM(IP14_31_28)	IP14_31_28	FM
- #define MOD_SEL1_28		FM(SEL_USB_20_CH0_0)		FM(SEL_USB_20_CH0_1)
- #define MOD_SEL1_26		FM(SEL_DRIF2_0)			FM(SEL_DRIF2_1)
- #define MOD_SEL1_25		FM(SEL_DRIF3_0)			FM(SEL_DRIF3_1)
--#define MOD_SEL1_24_23_22	FM(SEL_HSCIF3_0)		FM(SEL_HSCIF3_1)		FM(SEL_HSCIF3_2)		FM(SEL_HSCIF3_3)	FM(SEL_HSCIF3_4)	F_(0, 0)	F_(0, 0)	F_(0, 0)
--#define MOD_SEL1_21_20_19	FM(SEL_HSCIF4_0)		FM(SEL_HSCIF4_1)		FM(SEL_HSCIF4_2)		FM(SEL_HSCIF4_3)	FM(SEL_HSCIF4_4)	F_(0, 0)	F_(0, 0)	F_(0, 0)
-+#define MOD_SEL1_24_23_22  REV8(FM(SEL_HSCIF3_0),		FM(SEL_HSCIF3_1),		FM(SEL_HSCIF3_2),		FM(SEL_HSCIF3_3),	FM(SEL_HSCIF3_4),	F_(0, 0),	F_(0, 0),	F_(0, 0))
-+#define MOD_SEL1_21_20_19  REV8(FM(SEL_HSCIF4_0),		FM(SEL_HSCIF4_1),		FM(SEL_HSCIF4_2),		FM(SEL_HSCIF4_3),	FM(SEL_HSCIF4_4),	F_(0, 0),	F_(0, 0),	F_(0, 0))
- #define MOD_SEL1_18		FM(SEL_I2C6_0)			FM(SEL_I2C6_1)
- #define MOD_SEL1_17		FM(SEL_I2C7_0)			FM(SEL_I2C7_1)
- #define MOD_SEL1_16		FM(SEL_MSIOF2_0)		FM(SEL_MSIOF2_1)
- #define MOD_SEL1_15		FM(SEL_MSIOF3_0)		FM(SEL_MSIOF3_1)
--#define MOD_SEL1_14_13		FM(SEL_SCIF3_0)			FM(SEL_SCIF3_1)			FM(SEL_SCIF3_2)			F_(0, 0)
--#define MOD_SEL1_12_11		FM(SEL_SCIF4_0)			FM(SEL_SCIF4_1)			FM(SEL_SCIF4_2)			F_(0, 0)
--#define MOD_SEL1_10_9		FM(SEL_SCIF5_0)			FM(SEL_SCIF5_1)			FM(SEL_SCIF5_2)			F_(0, 0)
-+#define MOD_SEL1_14_13	   REV4(FM(SEL_SCIF3_0),		FM(SEL_SCIF3_1),		FM(SEL_SCIF3_2),		F_(0, 0))
-+#define MOD_SEL1_12_11	   REV4(FM(SEL_SCIF4_0),		FM(SEL_SCIF4_1),		FM(SEL_SCIF4_2),		F_(0, 0))
-+#define MOD_SEL1_10_9	   REV4(FM(SEL_SCIF5_0),		FM(SEL_SCIF5_1),		FM(SEL_SCIF5_2),		F_(0, 0))
- #define MOD_SEL1_8		FM(SEL_VIN4_0)			FM(SEL_VIN4_1)
- #define MOD_SEL1_7		FM(SEL_VIN5_0)			FM(SEL_VIN5_1)
--#define MOD_SEL1_6_5		FM(SEL_ADGC_0)			FM(SEL_ADGC_1)			FM(SEL_ADGC_2)			F_(0, 0)
-+#define MOD_SEL1_6_5	   REV4(FM(SEL_ADGC_0),			FM(SEL_ADGC_1),			FM(SEL_ADGC_2),			F_(0, 0))
- #define MOD_SEL1_4		FM(SEL_SSI9_0)			FM(SEL_SSI9_1)
- 
- #define PINMUX_MOD_SELS	\
-diff --git a/drivers/pinctrl/sh-pfc/pfc-r8a77995.c b/drivers/pinctrl/sh-pfc/pfc-r8a77995.c
-index 84d78db381e3..9e377e3b9cb3 100644
---- a/drivers/pinctrl/sh-pfc/pfc-r8a77995.c
-+++ b/drivers/pinctrl/sh-pfc/pfc-r8a77995.c
-@@ -381,6 +381,9 @@ FM(IP12_23_20)	IP12_23_20 \
- FM(IP12_27_24)	IP12_27_24 \
- FM(IP12_31_28)	IP12_31_28 \
- 
-+/* The bit numbering in MOD_SEL fields is reversed */
-+#define REV4(f0, f1, f2, f3)			f0 f2 f1 f3
-+
- /* MOD_SEL0 */			/* 0 */			/* 1 */			/* 2 */			/* 3 */
- #define MOD_SEL0_30		FM(SEL_MSIOF2_0)	FM(SEL_MSIOF2_1)
- #define MOD_SEL0_29		FM(SEL_I2C3_0)		FM(SEL_I2C3_1)
-@@ -388,10 +391,10 @@ FM(IP12_31_28)	IP12_31_28 \
- #define MOD_SEL0_27		FM(SEL_MSIOF3_0)	FM(SEL_MSIOF3_1)
- #define MOD_SEL0_26		FM(SEL_HSCIF3_0)	FM(SEL_HSCIF3_1)
- #define MOD_SEL0_25		FM(SEL_SCIF4_0)		FM(SEL_SCIF4_1)
--#define MOD_SEL0_24_23		FM(SEL_PWM0_0)		FM(SEL_PWM0_1)		FM(SEL_PWM0_2)		F_(0, 0)
--#define MOD_SEL0_22_21		FM(SEL_PWM1_0)		FM(SEL_PWM1_1)		FM(SEL_PWM1_2)		F_(0, 0)
--#define MOD_SEL0_20_19		FM(SEL_PWM2_0)		FM(SEL_PWM2_1)		FM(SEL_PWM2_2)		F_(0, 0)
--#define MOD_SEL0_18_17		FM(SEL_PWM3_0)		FM(SEL_PWM3_1)		FM(SEL_PWM3_2)		F_(0, 0)
-+#define MOD_SEL0_24_23	   REV4(FM(SEL_PWM0_0),		FM(SEL_PWM0_1),		FM(SEL_PWM0_2),		F_(0, 0))
-+#define MOD_SEL0_22_21	   REV4(FM(SEL_PWM1_0),		FM(SEL_PWM1_1),		FM(SEL_PWM1_2),		F_(0, 0))
-+#define MOD_SEL0_20_19	   REV4(FM(SEL_PWM2_0),		FM(SEL_PWM2_1),		FM(SEL_PWM2_2),		F_(0, 0))
-+#define MOD_SEL0_18_17	   REV4(FM(SEL_PWM3_0),		FM(SEL_PWM3_1),		FM(SEL_PWM3_2),		F_(0, 0))
- #define MOD_SEL0_15		FM(SEL_IRQ_0_0)		FM(SEL_IRQ_0_1)
- #define MOD_SEL0_14		FM(SEL_IRQ_1_0)		FM(SEL_IRQ_1_1)
- #define MOD_SEL0_13		FM(SEL_IRQ_2_0)		FM(SEL_IRQ_2_1)
-diff --git a/drivers/platform/mellanox/mlxreg-hotplug.c b/drivers/platform/mellanox/mlxreg-hotplug.c
-index b6d44550d98c..eca16d00e310 100644
---- a/drivers/platform/mellanox/mlxreg-hotplug.c
-+++ b/drivers/platform/mellanox/mlxreg-hotplug.c
-@@ -248,7 +248,8 @@ mlxreg_hotplug_work_helper(struct mlxreg_hotplug_priv_data *priv,
- 			   struct mlxreg_core_item *item)
- {
- 	struct mlxreg_core_data *data;
--	u32 asserted, regval, bit;
-+	unsigned long asserted;
-+	u32 regval, bit;
- 	int ret;
- 
- 	/*
-@@ -281,7 +282,7 @@ mlxreg_hotplug_work_helper(struct mlxreg_hotplug_priv_data *priv,
- 	asserted = item->cache ^ regval;
- 	item->cache = regval;
- 
--	for_each_set_bit(bit, (unsigned long *)&asserted, 8) {
-+	for_each_set_bit(bit, &asserted, 8) {
- 		data = item->data + bit;
- 		if (regval & BIT(bit)) {
- 			if (item->inversed)
-diff --git a/drivers/platform/x86/ideapad-laptop.c b/drivers/platform/x86/ideapad-laptop.c
-index 1589dffab9fa..8b53a9ceb897 100644
---- a/drivers/platform/x86/ideapad-laptop.c
-+++ b/drivers/platform/x86/ideapad-laptop.c
-@@ -989,7 +989,7 @@ static const struct dmi_system_id no_hw_rfkill_list[] = {
- 		.ident = "Lenovo RESCUER R720-15IKBN",
- 		.matches = {
- 			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
--			DMI_MATCH(DMI_BOARD_NAME, "80WW"),
-+			DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo R720-15IKBN"),
- 		},
- 	},
- 	{
-diff --git a/drivers/platform/x86/intel-hid.c b/drivers/platform/x86/intel-hid.c
-index e28bcf61b126..bc0d55a59015 100644
---- a/drivers/platform/x86/intel-hid.c
-+++ b/drivers/platform/x86/intel-hid.c
-@@ -363,7 +363,7 @@ wakeup:
- 	 * the 5-button array, but still send notifies with power button
- 	 * event code to this device object on power button actions.
- 	 *
--	 * Report the power button press; catch and ignore the button release.
-+	 * Report the power button press and release.
- 	 */
- 	if (!priv->array) {
- 		if (event == 0xce) {
-@@ -372,8 +372,11 @@ wakeup:
- 			return;
- 		}
- 
--		if (event == 0xcf)
-+		if (event == 0xcf) {
-+			input_report_key(priv->input_dev, KEY_POWER, 0);
-+			input_sync(priv->input_dev);
- 			return;
-+		}
- 	}
- 
- 	/* 0xC0 is for HID events, other values are for 5 button array */
-diff --git a/drivers/platform/x86/intel_pmc_core.c b/drivers/platform/x86/intel_pmc_core.c
-index 22dbf115782e..c37e74ee609d 100644
---- a/drivers/platform/x86/intel_pmc_core.c
-+++ b/drivers/platform/x86/intel_pmc_core.c
-@@ -380,7 +380,8 @@ static int pmc_core_ppfear_show(struct seq_file *s, void *unused)
- 	     index < PPFEAR_MAX_NUM_ENTRIES; index++, iter++)
- 		pf_regs[index] = pmc_core_reg_read_byte(pmcdev, iter);
- 
--	for (index = 0; map[index].name; index++)
-+	for (index = 0; map[index].name &&
-+	     index < pmcdev->map->ppfear_buckets * 8; index++)
- 		pmc_core_display_map(s, index, pf_regs[index / 8], map);
- 
- 	return 0;
-diff --git a/drivers/platform/x86/intel_pmc_core.h b/drivers/platform/x86/intel_pmc_core.h
-index 89554cba5758..1a0104d2cbf0 100644
---- a/drivers/platform/x86/intel_pmc_core.h
-+++ b/drivers/platform/x86/intel_pmc_core.h
-@@ -32,7 +32,7 @@
- #define SPT_PMC_SLP_S0_RES_COUNTER_STEP		0x64
- #define PMC_BASE_ADDR_MASK			~(SPT_PMC_MMIO_REG_LEN - 1)
- #define MTPMC_MASK				0xffff0000
--#define PPFEAR_MAX_NUM_ENTRIES			5
-+#define PPFEAR_MAX_NUM_ENTRIES			12
- #define SPT_PPFEAR_NUM_ENTRIES			5
- #define SPT_PMC_READ_DISABLE_BIT		0x16
- #define SPT_PMC_MSG_FULL_STS_BIT		0x18
-diff --git a/drivers/power/supply/cpcap-charger.c b/drivers/power/supply/cpcap-charger.c
-index c843eaff8ad0..c3ed7b476676 100644
---- a/drivers/power/supply/cpcap-charger.c
-+++ b/drivers/power/supply/cpcap-charger.c
-@@ -458,6 +458,7 @@ static void cpcap_usb_detect(struct work_struct *work)
- 			goto out_err;
- 	}
- 
-+	power_supply_changed(ddata->usb);
- 	return;
- 
- out_err:
-diff --git a/drivers/regulator/act8865-regulator.c b/drivers/regulator/act8865-regulator.c
-index 21e20483bd91..e0239cf3f56d 100644
---- a/drivers/regulator/act8865-regulator.c
-+++ b/drivers/regulator/act8865-regulator.c
-@@ -131,7 +131,7 @@
-  * ACT8865 voltage number
-  */
- #define	ACT8865_VOLTAGE_NUM	64
--#define ACT8600_SUDCDC_VOLTAGE_NUM	255
-+#define ACT8600_SUDCDC_VOLTAGE_NUM	256
- 
- struct act8865 {
- 	struct regmap *regmap;
-@@ -222,7 +222,8 @@ static const struct regulator_linear_range act8600_sudcdc_voltage_ranges[] = {
- 	REGULATOR_LINEAR_RANGE(3000000, 0, 63, 0),
- 	REGULATOR_LINEAR_RANGE(3000000, 64, 159, 100000),
- 	REGULATOR_LINEAR_RANGE(12600000, 160, 191, 200000),
--	REGULATOR_LINEAR_RANGE(19000000, 191, 255, 400000),
-+	REGULATOR_LINEAR_RANGE(19000000, 192, 247, 400000),
-+	REGULATOR_LINEAR_RANGE(41400000, 248, 255, 0),
- };
- 
- static struct regulator_ops act8865_ops = {
-diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
-index b9d7b45c7295..e2caf11598c7 100644
---- a/drivers/regulator/core.c
-+++ b/drivers/regulator/core.c
-@@ -1349,7 +1349,9 @@ static int set_machine_constraints(struct regulator_dev *rdev,
- 		 * We'll only apply the initial system load if an
- 		 * initial mode wasn't specified.
- 		 */
-+		regulator_lock(rdev);
- 		drms_uA_update(rdev);
-+		regulator_unlock(rdev);
- 	}
- 
- 	if ((rdev->constraints->ramp_delay || rdev->constraints->ramp_disable)
-diff --git a/drivers/regulator/max77620-regulator.c b/drivers/regulator/max77620-regulator.c
-index b94e3a721721..cd93cf53e23c 100644
---- a/drivers/regulator/max77620-regulator.c
-+++ b/drivers/regulator/max77620-regulator.c
-@@ -1,7 +1,7 @@
- /*
-  * Maxim MAX77620 Regulator driver
-  *
-- * Copyright (c) 2016, NVIDIA CORPORATION.  All rights reserved.
-+ * Copyright (c) 2016-2018, NVIDIA CORPORATION.  All rights reserved.
-  *
-  * Author: Mallikarjun Kasoju <mkasoju@nvidia.com>
-  *	Laxman Dewangan <ldewangan@nvidia.com>
-@@ -803,6 +803,14 @@ static int max77620_regulator_probe(struct platform_device *pdev)
- 		rdesc = &rinfo[id].desc;
- 		pmic->rinfo[id] = &max77620_regs_info[id];
- 		pmic->enable_power_mode[id] = MAX77620_POWER_MODE_NORMAL;
-+		pmic->reg_pdata[id].active_fps_src = -1;
-+		pmic->reg_pdata[id].active_fps_pd_slot = -1;
-+		pmic->reg_pdata[id].active_fps_pu_slot = -1;
-+		pmic->reg_pdata[id].suspend_fps_src = -1;
-+		pmic->reg_pdata[id].suspend_fps_pd_slot = -1;
-+		pmic->reg_pdata[id].suspend_fps_pu_slot = -1;
-+		pmic->reg_pdata[id].power_ok = -1;
-+		pmic->reg_pdata[id].ramp_rate_setting = -1;
- 
- 		ret = max77620_read_slew_rate(pmic, id);
- 		if (ret < 0)
-diff --git a/drivers/regulator/mcp16502.c b/drivers/regulator/mcp16502.c
-index 3479ae009b0b..0fc4963bd5b0 100644
---- a/drivers/regulator/mcp16502.c
-+++ b/drivers/regulator/mcp16502.c
-@@ -17,6 +17,7 @@
- #include <linux/regmap.h>
- #include <linux/regulator/driver.h>
- #include <linux/suspend.h>
-+#include <linux/gpio/consumer.h>
- 
- #define VDD_LOW_SEL 0x0D
- #define VDD_HIGH_SEL 0x3F
-diff --git a/drivers/regulator/s2mpa01.c b/drivers/regulator/s2mpa01.c
-index 095d25f3d2ea..58a1fe583a6c 100644
---- a/drivers/regulator/s2mpa01.c
-+++ b/drivers/regulator/s2mpa01.c
-@@ -298,13 +298,13 @@ static const struct regulator_desc regulators[] = {
- 	regulator_desc_ldo(2, STEP_50_MV),
- 	regulator_desc_ldo(3, STEP_50_MV),
- 	regulator_desc_ldo(4, STEP_50_MV),
--	regulator_desc_ldo(5, STEP_50_MV),
-+	regulator_desc_ldo(5, STEP_25_MV),
- 	regulator_desc_ldo(6, STEP_25_MV),
- 	regulator_desc_ldo(7, STEP_50_MV),
- 	regulator_desc_ldo(8, STEP_50_MV),
- 	regulator_desc_ldo(9, STEP_50_MV),
- 	regulator_desc_ldo(10, STEP_50_MV),
--	regulator_desc_ldo(11, STEP_25_MV),
-+	regulator_desc_ldo(11, STEP_50_MV),
- 	regulator_desc_ldo(12, STEP_50_MV),
- 	regulator_desc_ldo(13, STEP_50_MV),
- 	regulator_desc_ldo(14, STEP_50_MV),
-@@ -315,11 +315,11 @@ static const struct regulator_desc regulators[] = {
- 	regulator_desc_ldo(19, STEP_50_MV),
- 	regulator_desc_ldo(20, STEP_50_MV),
- 	regulator_desc_ldo(21, STEP_50_MV),
--	regulator_desc_ldo(22, STEP_25_MV),
--	regulator_desc_ldo(23, STEP_25_MV),
-+	regulator_desc_ldo(22, STEP_50_MV),
-+	regulator_desc_ldo(23, STEP_50_MV),
- 	regulator_desc_ldo(24, STEP_50_MV),
- 	regulator_desc_ldo(25, STEP_50_MV),
--	regulator_desc_ldo(26, STEP_50_MV),
-+	regulator_desc_ldo(26, STEP_25_MV),
- 	regulator_desc_buck1_4(1),
- 	regulator_desc_buck1_4(2),
- 	regulator_desc_buck1_4(3),
-diff --git a/drivers/regulator/s2mps11.c b/drivers/regulator/s2mps11.c
-index ee4a23ab0663..134c62db36c5 100644
---- a/drivers/regulator/s2mps11.c
-+++ b/drivers/regulator/s2mps11.c
-@@ -362,7 +362,7 @@ static const struct regulator_desc s2mps11_regulators[] = {
- 	regulator_desc_s2mps11_ldo(32, STEP_50_MV),
- 	regulator_desc_s2mps11_ldo(33, STEP_50_MV),
- 	regulator_desc_s2mps11_ldo(34, STEP_50_MV),
--	regulator_desc_s2mps11_ldo(35, STEP_50_MV),
-+	regulator_desc_s2mps11_ldo(35, STEP_25_MV),
- 	regulator_desc_s2mps11_ldo(36, STEP_50_MV),
- 	regulator_desc_s2mps11_ldo(37, STEP_50_MV),
- 	regulator_desc_s2mps11_ldo(38, STEP_50_MV),
-@@ -372,8 +372,8 @@ static const struct regulator_desc s2mps11_regulators[] = {
- 	regulator_desc_s2mps11_buck1_4(4),
- 	regulator_desc_s2mps11_buck5,
- 	regulator_desc_s2mps11_buck67810(6, MIN_600_MV, STEP_6_25_MV),
--	regulator_desc_s2mps11_buck67810(7, MIN_600_MV, STEP_6_25_MV),
--	regulator_desc_s2mps11_buck67810(8, MIN_600_MV, STEP_6_25_MV),
-+	regulator_desc_s2mps11_buck67810(7, MIN_600_MV, STEP_12_5_MV),
-+	regulator_desc_s2mps11_buck67810(8, MIN_600_MV, STEP_12_5_MV),
- 	regulator_desc_s2mps11_buck9,
- 	regulator_desc_s2mps11_buck67810(10, MIN_750_MV, STEP_12_5_MV),
- };
-diff --git a/drivers/s390/cio/vfio_ccw_drv.c b/drivers/s390/cio/vfio_ccw_drv.c
-index a10cec0e86eb..0b3b9de45c60 100644
---- a/drivers/s390/cio/vfio_ccw_drv.c
-+++ b/drivers/s390/cio/vfio_ccw_drv.c
-@@ -72,20 +72,24 @@ static void vfio_ccw_sch_io_todo(struct work_struct *work)
- {
- 	struct vfio_ccw_private *private;
- 	struct irb *irb;
-+	bool is_final;
- 
- 	private = container_of(work, struct vfio_ccw_private, io_work);
- 	irb = &private->irb;
- 
-+	is_final = !(scsw_actl(&irb->scsw) &
-+		     (SCSW_ACTL_DEVACT | SCSW_ACTL_SCHACT));
- 	if (scsw_is_solicited(&irb->scsw)) {
- 		cp_update_scsw(&private->cp, &irb->scsw);
--		cp_free(&private->cp);
-+		if (is_final)
-+			cp_free(&private->cp);
- 	}
- 	memcpy(private->io_region->irb_area, irb, sizeof(*irb));
- 
- 	if (private->io_trigger)
- 		eventfd_signal(private->io_trigger, 1);
- 
--	if (private->mdev)
-+	if (private->mdev && is_final)
- 		private->state = VFIO_CCW_STATE_IDLE;
- }
- 
-diff --git a/drivers/s390/crypto/vfio_ap_drv.c b/drivers/s390/crypto/vfio_ap_drv.c
-index 31c6c847eaca..e9824c35c34f 100644
---- a/drivers/s390/crypto/vfio_ap_drv.c
-+++ b/drivers/s390/crypto/vfio_ap_drv.c
-@@ -15,7 +15,6 @@
- #include "vfio_ap_private.h"
- 
- #define VFIO_AP_ROOT_NAME "vfio_ap"
--#define VFIO_AP_DEV_TYPE_NAME "ap_matrix"
- #define VFIO_AP_DEV_NAME "matrix"
- 
- MODULE_AUTHOR("IBM Corporation");
-@@ -24,10 +23,6 @@ MODULE_LICENSE("GPL v2");
- 
- static struct ap_driver vfio_ap_drv;
- 
--static struct device_type vfio_ap_dev_type = {
--	.name = VFIO_AP_DEV_TYPE_NAME,
--};
--
- struct ap_matrix_dev *matrix_dev;
- 
- /* Only type 10 adapters (CEX4 and later) are supported
-@@ -62,6 +57,22 @@ static void vfio_ap_matrix_dev_release(struct device *dev)
- 	kfree(matrix_dev);
- }
- 
-+static int matrix_bus_match(struct device *dev, struct device_driver *drv)
-+{
-+	return 1;
-+}
-+
-+static struct bus_type matrix_bus = {
-+	.name = "matrix",
-+	.match = &matrix_bus_match,
-+};
-+
-+static struct device_driver matrix_driver = {
-+	.name = "vfio_ap",
-+	.bus = &matrix_bus,
-+	.suppress_bind_attrs = true,
-+};
-+
- static int vfio_ap_matrix_dev_create(void)
- {
- 	int ret;
-@@ -71,6 +82,10 @@ static int vfio_ap_matrix_dev_create(void)
- 	if (IS_ERR(root_device))
- 		return PTR_ERR(root_device);
- 
-+	ret = bus_register(&matrix_bus);
-+	if (ret)
-+		goto bus_register_err;
-+
- 	matrix_dev = kzalloc(sizeof(*matrix_dev), GFP_KERNEL);
- 	if (!matrix_dev) {
- 		ret = -ENOMEM;
-@@ -87,30 +102,41 @@ static int vfio_ap_matrix_dev_create(void)
- 	mutex_init(&matrix_dev->lock);
- 	INIT_LIST_HEAD(&matrix_dev->mdev_list);
- 
--	matrix_dev->device.type = &vfio_ap_dev_type;
- 	dev_set_name(&matrix_dev->device, "%s", VFIO_AP_DEV_NAME);
- 	matrix_dev->device.parent = root_device;
-+	matrix_dev->device.bus = &matrix_bus;
- 	matrix_dev->device.release = vfio_ap_matrix_dev_release;
--	matrix_dev->device.driver = &vfio_ap_drv.driver;
-+	matrix_dev->vfio_ap_drv = &vfio_ap_drv;
- 
- 	ret = device_register(&matrix_dev->device);
- 	if (ret)
- 		goto matrix_reg_err;
- 
-+	ret = driver_register(&matrix_driver);
-+	if (ret)
-+		goto matrix_drv_err;
-+
- 	return 0;
- 
-+matrix_drv_err:
-+	device_unregister(&matrix_dev->device);
- matrix_reg_err:
- 	put_device(&matrix_dev->device);
- matrix_alloc_err:
-+	bus_unregister(&matrix_bus);
-+bus_register_err:
- 	root_device_unregister(root_device);
--
- 	return ret;
- }
- 
- static void vfio_ap_matrix_dev_destroy(void)
- {
-+	struct device *root_device = matrix_dev->device.parent;
-+
-+	driver_unregister(&matrix_driver);
- 	device_unregister(&matrix_dev->device);
--	root_device_unregister(matrix_dev->device.parent);
-+	bus_unregister(&matrix_bus);
-+	root_device_unregister(root_device);
- }
- 
- static int __init vfio_ap_init(void)
-diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_ap_ops.c
-index 272ef427dcc0..900b9cf20ca5 100644
---- a/drivers/s390/crypto/vfio_ap_ops.c
-+++ b/drivers/s390/crypto/vfio_ap_ops.c
-@@ -198,8 +198,8 @@ static int vfio_ap_verify_queue_reserved(unsigned long *apid,
- 	qres.apqi = apqi;
- 	qres.reserved = false;
- 
--	ret = driver_for_each_device(matrix_dev->device.driver, NULL, &qres,
--				     vfio_ap_has_queue);
-+	ret = driver_for_each_device(&matrix_dev->vfio_ap_drv->driver, NULL,
-+				     &qres, vfio_ap_has_queue);
- 	if (ret)
- 		return ret;
- 
-diff --git a/drivers/s390/crypto/vfio_ap_private.h b/drivers/s390/crypto/vfio_ap_private.h
-index 5675492233c7..76b7f98e47e9 100644
---- a/drivers/s390/crypto/vfio_ap_private.h
-+++ b/drivers/s390/crypto/vfio_ap_private.h
-@@ -40,6 +40,7 @@ struct ap_matrix_dev {
- 	struct ap_config_info info;
- 	struct list_head mdev_list;
- 	struct mutex lock;
-+	struct ap_driver  *vfio_ap_drv;
- };
- 
- extern struct ap_matrix_dev *matrix_dev;
-diff --git a/drivers/s390/net/ism_drv.c b/drivers/s390/net/ism_drv.c
-index ed8e58f09054..3e132592c1fe 100644
---- a/drivers/s390/net/ism_drv.c
-+++ b/drivers/s390/net/ism_drv.c
-@@ -141,10 +141,13 @@ static int register_ieq(struct ism_dev *ism)
- 
- static int unregister_sba(struct ism_dev *ism)
- {
-+	int ret;
-+
- 	if (!ism->sba)
- 		return 0;
- 
--	if (ism_cmd_simple(ism, ISM_UNREG_SBA))
-+	ret = ism_cmd_simple(ism, ISM_UNREG_SBA);
-+	if (ret && ret != ISM_ERROR)
- 		return -EIO;
- 
- 	dma_free_coherent(&ism->pdev->dev, PAGE_SIZE,
-@@ -158,10 +161,13 @@ static int unregister_sba(struct ism_dev *ism)
- 
- static int unregister_ieq(struct ism_dev *ism)
- {
-+	int ret;
-+
- 	if (!ism->ieq)
- 		return 0;
- 
--	if (ism_cmd_simple(ism, ISM_UNREG_IEQ))
-+	ret = ism_cmd_simple(ism, ISM_UNREG_IEQ);
-+	if (ret && ret != ISM_ERROR)
- 		return -EIO;
- 
- 	dma_free_coherent(&ism->pdev->dev, PAGE_SIZE,
-@@ -287,7 +293,7 @@ static int ism_unregister_dmb(struct smcd_dev *smcd, struct smcd_dmb *dmb)
- 	cmd.request.dmb_tok = dmb->dmb_tok;
- 
- 	ret = ism_cmd(ism, &cmd);
--	if (ret)
-+	if (ret && ret != ISM_ERROR)
- 		goto out;
- 
- 	ism_free_dmb(ism, dmb);
-diff --git a/drivers/s390/scsi/zfcp_erp.c b/drivers/s390/scsi/zfcp_erp.c
-index 744a64680d5b..e8fc28dba8df 100644
---- a/drivers/s390/scsi/zfcp_erp.c
-+++ b/drivers/s390/scsi/zfcp_erp.c
-@@ -624,6 +624,20 @@ static void zfcp_erp_strategy_memwait(struct zfcp_erp_action *erp_action)
- 	add_timer(&erp_action->timer);
- }
- 
-+void zfcp_erp_port_forced_reopen_all(struct zfcp_adapter *adapter,
-+				     int clear, char *dbftag)
-+{
-+	unsigned long flags;
-+	struct zfcp_port *port;
-+
-+	write_lock_irqsave(&adapter->erp_lock, flags);
-+	read_lock(&adapter->port_list_lock);
-+	list_for_each_entry(port, &adapter->port_list, list)
-+		_zfcp_erp_port_forced_reopen(port, clear, dbftag);
-+	read_unlock(&adapter->port_list_lock);
-+	write_unlock_irqrestore(&adapter->erp_lock, flags);
-+}
-+
- static void _zfcp_erp_port_reopen_all(struct zfcp_adapter *adapter,
- 				      int clear, char *dbftag)
- {
-@@ -1341,6 +1355,9 @@ static void zfcp_erp_try_rport_unblock(struct zfcp_port *port)
- 		struct zfcp_scsi_dev *zsdev = sdev_to_zfcp(sdev);
- 		int lun_status;
- 
-+		if (sdev->sdev_state == SDEV_DEL ||
-+		    sdev->sdev_state == SDEV_CANCEL)
-+			continue;
- 		if (zsdev->port != port)
- 			continue;
- 		/* LUN under port of interest */
-diff --git a/drivers/s390/scsi/zfcp_ext.h b/drivers/s390/scsi/zfcp_ext.h
-index 3fce47b0b21b..c6acca521ffe 100644
---- a/drivers/s390/scsi/zfcp_ext.h
-+++ b/drivers/s390/scsi/zfcp_ext.h
-@@ -70,6 +70,8 @@ extern void zfcp_erp_port_reopen(struct zfcp_port *port, int clear,
- 				 char *dbftag);
- extern void zfcp_erp_port_shutdown(struct zfcp_port *, int, char *);
- extern void zfcp_erp_port_forced_reopen(struct zfcp_port *, int, char *);
-+extern void zfcp_erp_port_forced_reopen_all(struct zfcp_adapter *adapter,
-+					    int clear, char *dbftag);
- extern void zfcp_erp_set_lun_status(struct scsi_device *, u32);
- extern void zfcp_erp_clear_lun_status(struct scsi_device *, u32);
- extern void zfcp_erp_lun_reopen(struct scsi_device *, int, char *);
-diff --git a/drivers/s390/scsi/zfcp_scsi.c b/drivers/s390/scsi/zfcp_scsi.c
-index f4f6a07c5222..221d0dfb8493 100644
---- a/drivers/s390/scsi/zfcp_scsi.c
-+++ b/drivers/s390/scsi/zfcp_scsi.c
-@@ -368,6 +368,10 @@ static int zfcp_scsi_eh_host_reset_handler(struct scsi_cmnd *scpnt)
- 	struct zfcp_adapter *adapter = zfcp_sdev->port->adapter;
- 	int ret = SUCCESS, fc_ret;
- 
-+	if (!(adapter->connection_features & FSF_FEATURE_NPIV_MODE)) {
-+		zfcp_erp_port_forced_reopen_all(adapter, 0, "schrh_p");
-+		zfcp_erp_wait(adapter);
-+	}
- 	zfcp_erp_adapter_reopen(adapter, 0, "schrh_1");
- 	zfcp_erp_wait(adapter);
- 	fc_ret = fc_block_scsi_eh(scpnt);
-diff --git a/drivers/s390/virtio/virtio_ccw.c b/drivers/s390/virtio/virtio_ccw.c
-index ae1d56da671d..1a738fe9f26b 100644
---- a/drivers/s390/virtio/virtio_ccw.c
-+++ b/drivers/s390/virtio/virtio_ccw.c
-@@ -272,6 +272,8 @@ static void virtio_ccw_drop_indicators(struct virtio_ccw_device *vcdev)
- {
- 	struct virtio_ccw_vq_info *info;
- 
-+	if (!vcdev->airq_info)
-+		return;
- 	list_for_each_entry(info, &vcdev->virtqueues, node)
- 		drop_airq_indicator(info->vq, vcdev->airq_info);
- }
-@@ -413,7 +415,7 @@ static int virtio_ccw_read_vq_conf(struct virtio_ccw_device *vcdev,
- 	ret = ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_READ_VQ_CONF);
- 	if (ret)
- 		return ret;
--	return vcdev->config_block->num;
-+	return vcdev->config_block->num ?: -ENOENT;
- }
- 
- static void virtio_ccw_del_vq(struct virtqueue *vq, struct ccw1 *ccw)
-diff --git a/drivers/scsi/aacraid/commsup.c b/drivers/scsi/aacraid/commsup.c
-index d5a6aa9676c8..a3adc954f40f 100644
---- a/drivers/scsi/aacraid/commsup.c
-+++ b/drivers/scsi/aacraid/commsup.c
-@@ -1303,8 +1303,9 @@ static void aac_handle_aif(struct aac_dev * dev, struct fib * fibptr)
- 				  ADD : DELETE;
- 				break;
- 			}
--			case AifBuManagerEvent:
--				aac_handle_aif_bu(dev, aifcmd);
-+			break;
-+		case AifBuManagerEvent:
-+			aac_handle_aif_bu(dev, aifcmd);
- 			break;
- 		}
- 
-diff --git a/drivers/scsi/aacraid/linit.c b/drivers/scsi/aacraid/linit.c
-index 7e56a11836c1..ccefface7e31 100644
---- a/drivers/scsi/aacraid/linit.c
-+++ b/drivers/scsi/aacraid/linit.c
-@@ -413,13 +413,16 @@ static int aac_slave_configure(struct scsi_device *sdev)
- 	if (chn < AAC_MAX_BUSES && tid < AAC_MAX_TARGETS && aac->sa_firmware) {
- 		devtype = aac->hba_map[chn][tid].devtype;
- 
--		if (devtype == AAC_DEVTYPE_NATIVE_RAW)
-+		if (devtype == AAC_DEVTYPE_NATIVE_RAW) {
- 			depth = aac->hba_map[chn][tid].qd_limit;
--		else if (devtype == AAC_DEVTYPE_ARC_RAW)
-+			set_timeout = 1;
-+			goto common_config;
-+		}
-+		if (devtype == AAC_DEVTYPE_ARC_RAW) {
- 			set_qd_dev_type = true;
--
--		set_timeout = 1;
--		goto common_config;
-+			set_timeout = 1;
-+			goto common_config;
-+		}
- 	}
- 
- 	if (aac->jbod && (sdev->type == TYPE_DISK))
-diff --git a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
-index 2e4e7159ebf9..a75e74ad1698 100644
---- a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
-+++ b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c
-@@ -1438,7 +1438,7 @@ bind_err:
- static struct bnx2fc_interface *
- bnx2fc_interface_create(struct bnx2fc_hba *hba,
- 			struct net_device *netdev,
--			enum fip_state fip_mode)
-+			enum fip_mode fip_mode)
- {
- 	struct fcoe_ctlr_device *ctlr_dev;
- 	struct bnx2fc_interface *interface;
-diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c
-index cd19be3f3405..8ba8862d3292 100644
---- a/drivers/scsi/fcoe/fcoe.c
-+++ b/drivers/scsi/fcoe/fcoe.c
-@@ -389,7 +389,7 @@ static int fcoe_interface_setup(struct fcoe_interface *fcoe,
-  * Returns: pointer to a struct fcoe_interface or NULL on error
-  */
- static struct fcoe_interface *fcoe_interface_create(struct net_device *netdev,
--						    enum fip_state fip_mode)
-+						    enum fip_mode fip_mode)
- {
- 	struct fcoe_ctlr_device *ctlr_dev;
- 	struct fcoe_ctlr *ctlr;
-diff --git a/drivers/scsi/fcoe/fcoe_ctlr.c b/drivers/scsi/fcoe/fcoe_ctlr.c
-index 54da3166da8d..7dc4ffa24430 100644
---- a/drivers/scsi/fcoe/fcoe_ctlr.c
-+++ b/drivers/scsi/fcoe/fcoe_ctlr.c
-@@ -147,7 +147,7 @@ static void fcoe_ctlr_map_dest(struct fcoe_ctlr *fip)
-  * fcoe_ctlr_init() - Initialize the FCoE Controller instance
-  * @fip: The FCoE controller to initialize
-  */
--void fcoe_ctlr_init(struct fcoe_ctlr *fip, enum fip_state mode)
-+void fcoe_ctlr_init(struct fcoe_ctlr *fip, enum fip_mode mode)
- {
- 	fcoe_ctlr_set_state(fip, FIP_ST_LINK_WAIT);
- 	fip->mode = mode;
-@@ -454,7 +454,10 @@ void fcoe_ctlr_link_up(struct fcoe_ctlr *fip)
- 		mutex_unlock(&fip->ctlr_mutex);
- 		fc_linkup(fip->lp);
- 	} else if (fip->state == FIP_ST_LINK_WAIT) {
--		fcoe_ctlr_set_state(fip, fip->mode);
-+		if (fip->mode == FIP_MODE_NON_FIP)
-+			fcoe_ctlr_set_state(fip, FIP_ST_NON_FIP);
-+		else
-+			fcoe_ctlr_set_state(fip, FIP_ST_AUTO);
- 		switch (fip->mode) {
- 		default:
- 			LIBFCOE_FIP_DBG(fip, "invalid mode %d\n", fip->mode);
-diff --git a/drivers/scsi/fcoe/fcoe_transport.c b/drivers/scsi/fcoe/fcoe_transport.c
-index f4909cd206d3..f15d5e1d56b1 100644
---- a/drivers/scsi/fcoe/fcoe_transport.c
-+++ b/drivers/scsi/fcoe/fcoe_transport.c
-@@ -873,7 +873,7 @@ static int fcoe_transport_create(const char *buffer,
- 	int rc = -ENODEV;
- 	struct net_device *netdev = NULL;
- 	struct fcoe_transport *ft = NULL;
--	enum fip_state fip_mode = (enum fip_state)(long)kp->arg;
-+	enum fip_mode fip_mode = (enum fip_mode)kp->arg;
- 
- 	mutex_lock(&ft_mutex);
- 
-diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c
-index bc17fa0d8375..62d158574281 100644
---- a/drivers/scsi/hisi_sas/hisi_sas_main.c
-+++ b/drivers/scsi/hisi_sas/hisi_sas_main.c
-@@ -10,6 +10,7 @@
-  */
- 
- #include "hisi_sas.h"
-+#include "../libsas/sas_internal.h"
- #define DRV_NAME "hisi_sas"
- 
- #define DEV_IS_GONE(dev) \
-@@ -872,7 +873,8 @@ static void hisi_sas_do_release_task(struct hisi_hba *hisi_hba, struct sas_task
- 		spin_lock_irqsave(&task->task_state_lock, flags);
- 		task->task_state_flags &=
- 			~(SAS_TASK_STATE_PENDING | SAS_TASK_AT_INITIATOR);
--		task->task_state_flags |= SAS_TASK_STATE_DONE;
-+		if (!slot->is_internal && task->task_proto != SAS_PROTOCOL_SMP)
-+			task->task_state_flags |= SAS_TASK_STATE_DONE;
- 		spin_unlock_irqrestore(&task->task_state_lock, flags);
- 	}
- 
-@@ -1972,9 +1974,18 @@ static int hisi_sas_write_gpio(struct sas_ha_struct *sha, u8 reg_type,
- 
- static void hisi_sas_phy_disconnected(struct hisi_sas_phy *phy)
- {
-+	struct asd_sas_phy *sas_phy = &phy->sas_phy;
-+	struct sas_phy *sphy = sas_phy->phy;
-+	struct sas_phy_data *d = sphy->hostdata;
-+
- 	phy->phy_attached = 0;
- 	phy->phy_type = 0;
- 	phy->port = NULL;
-+
-+	if (d->enable)
-+		sphy->negotiated_linkrate = SAS_LINK_RATE_UNKNOWN;
-+	else
-+		sphy->negotiated_linkrate = SAS_PHY_DISABLED;
- }
- 
- void hisi_sas_phy_down(struct hisi_hba *hisi_hba, int phy_no, int rdy)
-diff --git a/drivers/scsi/ibmvscsi/ibmvscsi.c b/drivers/scsi/ibmvscsi/ibmvscsi.c
-index 1135e74646e2..8cec5230fe31 100644
---- a/drivers/scsi/ibmvscsi/ibmvscsi.c
-+++ b/drivers/scsi/ibmvscsi/ibmvscsi.c
-@@ -96,6 +96,7 @@ static int client_reserve = 1;
- static char partition_name[96] = "UNKNOWN";
- static unsigned int partition_number = -1;
- static LIST_HEAD(ibmvscsi_head);
-+static DEFINE_SPINLOCK(ibmvscsi_driver_lock);
- 
- static struct scsi_transport_template *ibmvscsi_transport_template;
- 
-@@ -2270,7 +2271,9 @@ static int ibmvscsi_probe(struct vio_dev *vdev, const struct vio_device_id *id)
- 	}
- 
- 	dev_set_drvdata(&vdev->dev, hostdata);
-+	spin_lock(&ibmvscsi_driver_lock);
- 	list_add_tail(&hostdata->host_list, &ibmvscsi_head);
-+	spin_unlock(&ibmvscsi_driver_lock);
- 	return 0;
- 
-       add_srp_port_failed:
-@@ -2292,15 +2295,27 @@ static int ibmvscsi_probe(struct vio_dev *vdev, const struct vio_device_id *id)
- static int ibmvscsi_remove(struct vio_dev *vdev)
- {
- 	struct ibmvscsi_host_data *hostdata = dev_get_drvdata(&vdev->dev);
--	list_del(&hostdata->host_list);
--	unmap_persist_bufs(hostdata);
-+	unsigned long flags;
-+
-+	srp_remove_host(hostdata->host);
-+	scsi_remove_host(hostdata->host);
-+
-+	purge_requests(hostdata, DID_ERROR);
-+
-+	spin_lock_irqsave(hostdata->host->host_lock, flags);
- 	release_event_pool(&hostdata->pool, hostdata);
-+	spin_unlock_irqrestore(hostdata->host->host_lock, flags);
-+
- 	ibmvscsi_release_crq_queue(&hostdata->queue, hostdata,
- 					max_events);
- 
- 	kthread_stop(hostdata->work_thread);
--	srp_remove_host(hostdata->host);
--	scsi_remove_host(hostdata->host);
-+	unmap_persist_bufs(hostdata);
-+
-+	spin_lock(&ibmvscsi_driver_lock);
-+	list_del(&hostdata->host_list);
-+	spin_unlock(&ibmvscsi_driver_lock);
-+
- 	scsi_host_put(hostdata->host);
- 
- 	return 0;
-diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
-index fcbff83c0097..c9811d1aa007 100644
---- a/drivers/scsi/megaraid/megaraid_sas_base.c
-+++ b/drivers/scsi/megaraid/megaraid_sas_base.c
-@@ -4188,6 +4188,7 @@ int megasas_alloc_cmds(struct megasas_instance *instance)
- 	if (megasas_create_frame_pool(instance)) {
- 		dev_printk(KERN_DEBUG, &instance->pdev->dev, "Error creating frame DMA pool\n");
- 		megasas_free_cmds(instance);
-+		return -ENOMEM;
- 	}
- 
- 	return 0;
-diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
-index 9bbc19fc190b..9f9431a4cc0e 100644
---- a/drivers/scsi/qedf/qedf_main.c
-+++ b/drivers/scsi/qedf/qedf_main.c
-@@ -1418,7 +1418,7 @@ static struct libfc_function_template qedf_lport_template = {
- 
- static void qedf_fcoe_ctlr_setup(struct qedf_ctx *qedf)
- {
--	fcoe_ctlr_init(&qedf->ctlr, FIP_ST_AUTO);
-+	fcoe_ctlr_init(&qedf->ctlr, FIP_MODE_AUTO);
- 
- 	qedf->ctlr.send = qedf_fip_send;
- 	qedf->ctlr.get_src_addr = qedf_get_src_mac;
-diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
-index 8d1acc802a67..7f8946844a5e 100644
---- a/drivers/scsi/qla2xxx/qla_init.c
-+++ b/drivers/scsi/qla2xxx/qla_init.c
-@@ -644,11 +644,14 @@ static void qla24xx_handle_gnl_done_event(scsi_qla_host_t *vha,
- 				break;
- 			case DSC_LS_PORT_UNAVAIL:
- 			default:
--				if (fcport->loop_id != FC_NO_LOOP_ID)
--					qla2x00_clear_loop_id(fcport);
--
--				fcport->loop_id = loop_id;
--				fcport->fw_login_state = DSC_LS_PORT_UNAVAIL;
-+				if (fcport->loop_id == FC_NO_LOOP_ID) {
-+					qla2x00_find_new_loop_id(vha, fcport);
-+					fcport->fw_login_state =
-+					    DSC_LS_PORT_UNAVAIL;
-+				}
-+				ql_dbg(ql_dbg_disc, vha, 0x20e5,
-+				    "%s %d %8phC\n", __func__, __LINE__,
-+				    fcport->port_name);
- 				qla24xx_fcport_handle_login(vha, fcport);
- 				break;
- 			}
-@@ -1471,29 +1474,6 @@ int qla24xx_fcport_handle_login(struct scsi_qla_host *vha, fc_port_t *fcport)
- 	return 0;
- }
- 
--static
--void qla24xx_handle_rscn_event(fc_port_t *fcport, struct event_arg *ea)
--{
--	fcport->rscn_gen++;
--
--	ql_dbg(ql_dbg_disc, fcport->vha, 0x210c,
--	    "%s %8phC DS %d LS %d\n",
--	    __func__, fcport->port_name, fcport->disc_state,
--	    fcport->fw_login_state);
--
--	if (fcport->flags & FCF_ASYNC_SENT)
--		return;
--
--	switch (fcport->disc_state) {
--	case DSC_DELETED:
--	case DSC_LOGIN_COMPLETE:
--		qla24xx_post_gpnid_work(fcport->vha, &ea->id);
--		break;
--	default:
--		break;
--	}
--}
--
- int qla24xx_post_newsess_work(struct scsi_qla_host *vha, port_id_t *id,
-     u8 *port_name, u8 *node_name, void *pla, u8 fc4_type)
- {
-@@ -1560,8 +1540,6 @@ static void qla_handle_els_plogi_done(scsi_qla_host_t *vha,
- 
- void qla2x00_fcport_event_handler(scsi_qla_host_t *vha, struct event_arg *ea)
- {
--	fc_port_t *f, *tf;
--	uint32_t id = 0, mask, rid;
- 	fc_port_t *fcport;
- 
- 	switch (ea->event) {
-@@ -1574,10 +1552,6 @@ void qla2x00_fcport_event_handler(scsi_qla_host_t *vha, struct event_arg *ea)
- 	case FCME_RSCN:
- 		if (test_bit(UNLOADING, &vha->dpc_flags))
- 			return;
--		switch (ea->id.b.rsvd_1) {
--		case RSCN_PORT_ADDR:
--#define BIGSCAN 1
--#if defined BIGSCAN & BIGSCAN > 0
- 		{
- 			unsigned long flags;
- 			fcport = qla2x00_find_fcport_by_nportid
-@@ -1596,59 +1570,6 @@ void qla2x00_fcport_event_handler(scsi_qla_host_t *vha, struct event_arg *ea)
- 			}
- 			spin_unlock_irqrestore(&vha->work_lock, flags);
- 		}
--#else
--		{
--			int rc;
--			fcport = qla2x00_find_fcport_by_nportid(vha, &ea->id, 1);
--			if (!fcport) {
--				/* cable moved */
--				 rc = qla24xx_post_gpnid_work(vha, &ea->id);
--				 if (rc) {
--					 ql_log(ql_log_warn, vha, 0xd044,
--					     "RSCN GPNID work failed %06x\n",
--					     ea->id.b24);
--				 }
--			} else {
--				ea->fcport = fcport;
--				fcport->scan_needed = 1;
--				qla24xx_handle_rscn_event(fcport, ea);
--			}
--		}
--#endif
--			break;
--		case RSCN_AREA_ADDR:
--		case RSCN_DOM_ADDR:
--			if (ea->id.b.rsvd_1 == RSCN_AREA_ADDR) {
--				mask = 0xffff00;
--				ql_dbg(ql_dbg_async, vha, 0x5044,
--				    "RSCN: Area 0x%06x was affected\n",
--				    ea->id.b24);
--			} else {
--				mask = 0xff0000;
--				ql_dbg(ql_dbg_async, vha, 0x507a,
--				    "RSCN: Domain 0x%06x was affected\n",
--				    ea->id.b24);
--			}
--
--			rid = ea->id.b24 & mask;
--			list_for_each_entry_safe(f, tf, &vha->vp_fcports,
--			    list) {
--				id = f->d_id.b24 & mask;
--				if (rid == id) {
--					ea->fcport = f;
--					qla24xx_handle_rscn_event(f, ea);
--				}
--			}
--			break;
--		case RSCN_FAB_ADDR:
--		default:
--			ql_log(ql_log_warn, vha, 0xd045,
--			    "RSCN: Fabric was affected. Addr format %d\n",
--			    ea->id.b.rsvd_1);
--			qla2x00_mark_all_devices_lost(vha, 1);
--			set_bit(LOOP_RESYNC_NEEDED, &vha->dpc_flags);
--			set_bit(LOCAL_LOOP_UPDATE, &vha->dpc_flags);
--		}
- 		break;
- 	case FCME_GNL_DONE:
- 		qla24xx_handle_gnl_done_event(vha, ea);
-@@ -1709,11 +1630,7 @@ void qla_rscn_replay(fc_port_t *fcport)
-                ea.event = FCME_RSCN;
-                ea.id = fcport->d_id;
-                ea.id.b.rsvd_1 = RSCN_PORT_ADDR;
--#if defined BIGSCAN & BIGSCAN > 0
-                qla2x00_fcport_event_handler(fcport->vha, &ea);
--#else
--               qla24xx_post_gpnid_work(fcport->vha, &ea.id);
--#endif
- 	}
- }
- 
-@@ -5051,6 +4968,13 @@ qla2x00_configure_local_loop(scsi_qla_host_t *vha)
- 		    (area != vha->d_id.b.area || domain != vha->d_id.b.domain))
- 			continue;
- 
-+		/* Bypass if not same domain and area of adapter. */
-+		if (area && domain && ((area != vha->d_id.b.area) ||
-+		    (domain != vha->d_id.b.domain)) &&
-+		    (ha->current_topology == ISP_CFG_NL))
-+			continue;
-+
-+
- 		/* Bypass invalid local loop ID. */
- 		if (loop_id > LAST_LOCAL_LOOP_ID)
- 			continue;
-diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
-index 8507c43b918c..1a20e5d8f057 100644
---- a/drivers/scsi/qla2xxx/qla_isr.c
-+++ b/drivers/scsi/qla2xxx/qla_isr.c
-@@ -3410,7 +3410,7 @@ qla24xx_enable_msix(struct qla_hw_data *ha, struct rsp_que *rsp)
- 		min_vecs++;
- 	}
- 
--	if (USER_CTRL_IRQ(ha)) {
-+	if (USER_CTRL_IRQ(ha) || !ha->mqiobase) {
- 		/* user wants to control IRQ setting for target mode */
- 		ret = pci_alloc_irq_vectors(ha->pdev, min_vecs,
- 		    ha->msix_count, PCI_IRQ_MSIX);
-diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
-index c6ef83d0d99b..7e35ce2162d0 100644
---- a/drivers/scsi/qla2xxx/qla_os.c
-+++ b/drivers/scsi/qla2xxx/qla_os.c
-@@ -6936,7 +6936,7 @@ static int qla2xxx_map_queues(struct Scsi_Host *shost)
- 	scsi_qla_host_t *vha = (scsi_qla_host_t *)shost->hostdata;
- 	struct blk_mq_queue_map *qmap = &shost->tag_set.map[0];
- 
--	if (USER_CTRL_IRQ(vha->hw))
-+	if (USER_CTRL_IRQ(vha->hw) || !vha->hw->mqiobase)
- 		rc = blk_mq_map_queues(qmap);
- 	else
- 		rc = blk_mq_pci_map_queues(qmap, vha->hw->pdev, vha->irq_offset);
-diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
-index a6828391d6b3..5a6e8e12701a 100644
---- a/drivers/scsi/scsi_lib.c
-+++ b/drivers/scsi/scsi_lib.c
-@@ -2598,8 +2598,10 @@ void scsi_device_resume(struct scsi_device *sdev)
- 	 * device deleted during suspend)
- 	 */
- 	mutex_lock(&sdev->state_mutex);
--	sdev->quiesced_by = NULL;
--	blk_clear_pm_only(sdev->request_queue);
-+	if (sdev->quiesced_by) {
-+		sdev->quiesced_by = NULL;
-+		blk_clear_pm_only(sdev->request_queue);
-+	}
- 	if (sdev->sdev_state == SDEV_QUIESCE)
- 		scsi_device_set_state(sdev, SDEV_RUNNING);
- 	mutex_unlock(&sdev->state_mutex);
-diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
-index dd0d516f65e2..53380e07b40e 100644
---- a/drivers/scsi/scsi_scan.c
-+++ b/drivers/scsi/scsi_scan.c
-@@ -220,7 +220,7 @@ static struct scsi_device *scsi_alloc_sdev(struct scsi_target *starget,
- 	struct Scsi_Host *shost = dev_to_shost(starget->dev.parent);
- 
- 	sdev = kzalloc(sizeof(*sdev) + shost->transportt->device_size,
--		       GFP_ATOMIC);
-+		       GFP_KERNEL);
- 	if (!sdev)
- 		goto out;
- 
-@@ -788,7 +788,7 @@ static int scsi_add_lun(struct scsi_device *sdev, unsigned char *inq_result,
- 	 */
- 	sdev->inquiry = kmemdup(inq_result,
- 				max_t(size_t, sdev->inquiry_len, 36),
--				GFP_ATOMIC);
-+				GFP_KERNEL);
- 	if (sdev->inquiry == NULL)
- 		return SCSI_SCAN_NO_RESPONSE;
- 
-@@ -1079,7 +1079,7 @@ static int scsi_probe_and_add_lun(struct scsi_target *starget,
- 	if (!sdev)
- 		goto out;
- 
--	result = kmalloc(result_len, GFP_ATOMIC |
-+	result = kmalloc(result_len, GFP_KERNEL |
- 			((shost->unchecked_isa_dma) ? __GFP_DMA : 0));
- 	if (!result)
- 		goto out_free_sdev;
-diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
-index 5464d467e23e..d64553c0a051 100644
---- a/drivers/scsi/sd.c
-+++ b/drivers/scsi/sd.c
-@@ -1398,11 +1398,6 @@ static void sd_release(struct gendisk *disk, fmode_t mode)
- 			scsi_set_medium_removal(sdev, SCSI_REMOVAL_ALLOW);
- 	}
- 
--	/*
--	 * XXX and what if there are packets in flight and this close()
--	 * XXX is followed by a "rmmod sd_mod"?
--	 */
--
- 	scsi_disk_put(sdkp);
- }
- 
-@@ -3047,6 +3042,58 @@ static void sd_read_security(struct scsi_disk *sdkp, unsigned char *buffer)
- 		sdkp->security = 1;
- }
- 
-+/*
-+ * Determine the device's preferred I/O size for reads and writes
-+ * unless the reported value is unreasonably small, large, not a
-+ * multiple of the physical block size, or simply garbage.
-+ */
-+static bool sd_validate_opt_xfer_size(struct scsi_disk *sdkp,
-+				      unsigned int dev_max)
-+{
-+	struct scsi_device *sdp = sdkp->device;
-+	unsigned int opt_xfer_bytes =
-+		logical_to_bytes(sdp, sdkp->opt_xfer_blocks);
-+
-+	if (sdkp->opt_xfer_blocks == 0)
-+		return false;
-+
-+	if (sdkp->opt_xfer_blocks > dev_max) {
-+		sd_first_printk(KERN_WARNING, sdkp,
-+				"Optimal transfer size %u logical blocks " \
-+				"> dev_max (%u logical blocks)\n",
-+				sdkp->opt_xfer_blocks, dev_max);
-+		return false;
-+	}
-+
-+	if (sdkp->opt_xfer_blocks > SD_DEF_XFER_BLOCKS) {
-+		sd_first_printk(KERN_WARNING, sdkp,
-+				"Optimal transfer size %u logical blocks " \
-+				"> sd driver limit (%u logical blocks)\n",
-+				sdkp->opt_xfer_blocks, SD_DEF_XFER_BLOCKS);
-+		return false;
-+	}
-+
-+	if (opt_xfer_bytes < PAGE_SIZE) {
-+		sd_first_printk(KERN_WARNING, sdkp,
-+				"Optimal transfer size %u bytes < " \
-+				"PAGE_SIZE (%u bytes)\n",
-+				opt_xfer_bytes, (unsigned int)PAGE_SIZE);
-+		return false;
-+	}
-+
-+	if (opt_xfer_bytes & (sdkp->physical_block_size - 1)) {
-+		sd_first_printk(KERN_WARNING, sdkp,
-+				"Optimal transfer size %u bytes not a " \
-+				"multiple of physical block size (%u bytes)\n",
-+				opt_xfer_bytes, sdkp->physical_block_size);
-+		return false;
-+	}
-+
-+	sd_first_printk(KERN_INFO, sdkp, "Optimal transfer size %u bytes\n",
-+			opt_xfer_bytes);
-+	return true;
-+}
-+
- /**
-  *	sd_revalidate_disk - called the first time a new disk is seen,
-  *	performs disk spin up, read_capacity, etc.
-@@ -3125,15 +3172,7 @@ static int sd_revalidate_disk(struct gendisk *disk)
- 	dev_max = min_not_zero(dev_max, sdkp->max_xfer_blocks);
- 	q->limits.max_dev_sectors = logical_to_sectors(sdp, dev_max);
- 
--	/*
--	 * Determine the device's preferred I/O size for reads and writes
--	 * unless the reported value is unreasonably small, large, or
--	 * garbage.
--	 */
--	if (sdkp->opt_xfer_blocks &&
--	    sdkp->opt_xfer_blocks <= dev_max &&
--	    sdkp->opt_xfer_blocks <= SD_DEF_XFER_BLOCKS &&
--	    logical_to_bytes(sdp, sdkp->opt_xfer_blocks) >= PAGE_SIZE) {
-+	if (sd_validate_opt_xfer_size(sdkp, dev_max)) {
- 		q->limits.io_opt = logical_to_bytes(sdp, sdkp->opt_xfer_blocks);
- 		rw_max = logical_to_sectors(sdp, sdkp->opt_xfer_blocks);
- 	} else
-@@ -3447,9 +3486,21 @@ static void scsi_disk_release(struct device *dev)
- {
- 	struct scsi_disk *sdkp = to_scsi_disk(dev);
- 	struct gendisk *disk = sdkp->disk;
--	
-+	struct request_queue *q = disk->queue;
-+
- 	ida_free(&sd_index_ida, sdkp->index);
- 
-+	/*
-+	 * Wait until all requests that are in progress have completed.
-+	 * This is necessary to avoid that e.g. scsi_end_request() crashes
-+	 * due to clearing the disk->private_data pointer. Wait from inside
-+	 * scsi_disk_release() instead of from sd_release() to avoid that
-+	 * freezing and unfreezing the request queue affects user space I/O
-+	 * in case multiple processes open a /dev/sd... node concurrently.
-+	 */
-+	blk_mq_freeze_queue(q);
-+	blk_mq_unfreeze_queue(q);
-+
- 	disk->private_data = NULL;
- 	put_disk(disk);
- 	put_device(&sdkp->device->sdev_gendev);
-diff --git a/drivers/scsi/virtio_scsi.c b/drivers/scsi/virtio_scsi.c
-index 772b976e4ee4..464cba521fb6 100644
---- a/drivers/scsi/virtio_scsi.c
-+++ b/drivers/scsi/virtio_scsi.c
-@@ -594,7 +594,6 @@ static int virtscsi_device_reset(struct scsi_cmnd *sc)
- 		return FAILED;
- 
- 	memset(cmd, 0, sizeof(*cmd));
--	cmd->sc = sc;
- 	cmd->req.tmf = (struct virtio_scsi_ctrl_tmf_req){
- 		.type = VIRTIO_SCSI_T_TMF,
- 		.subtype = cpu_to_virtio32(vscsi->vdev,
-@@ -653,7 +652,6 @@ static int virtscsi_abort(struct scsi_cmnd *sc)
- 		return FAILED;
- 
- 	memset(cmd, 0, sizeof(*cmd));
--	cmd->sc = sc;
- 	cmd->req.tmf = (struct virtio_scsi_ctrl_tmf_req){
- 		.type = VIRTIO_SCSI_T_TMF,
- 		.subtype = VIRTIO_SCSI_T_TMF_ABORT_TASK,
-diff --git a/drivers/soc/qcom/qcom_gsbi.c b/drivers/soc/qcom/qcom_gsbi.c
-index 09c669e70d63..038abc377fdb 100644
---- a/drivers/soc/qcom/qcom_gsbi.c
-+++ b/drivers/soc/qcom/qcom_gsbi.c
-@@ -138,7 +138,7 @@ static int gsbi_probe(struct platform_device *pdev)
- 	struct resource *res;
- 	void __iomem *base;
- 	struct gsbi_info *gsbi;
--	int i;
-+	int i, ret;
- 	u32 mask, gsbi_num;
- 	const struct crci_config *config = NULL;
- 
-@@ -221,7 +221,10 @@ static int gsbi_probe(struct platform_device *pdev)
- 
- 	platform_set_drvdata(pdev, gsbi);
- 
--	return of_platform_populate(node, NULL, NULL, &pdev->dev);
-+	ret = of_platform_populate(node, NULL, NULL, &pdev->dev);
-+	if (ret)
-+		clk_disable_unprepare(gsbi->hclk);
-+	return ret;
- }
- 
- static int gsbi_remove(struct platform_device *pdev)
-diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
-index c7beb6841289..ab8f731a3426 100644
---- a/drivers/soc/qcom/rpmh.c
-+++ b/drivers/soc/qcom/rpmh.c
-@@ -80,6 +80,7 @@ void rpmh_tx_done(const struct tcs_request *msg, int r)
- 	struct rpmh_request *rpm_msg = container_of(msg, struct rpmh_request,
- 						    msg);
- 	struct completion *compl = rpm_msg->completion;
-+	bool free = rpm_msg->needs_free;
- 
- 	rpm_msg->err = r;
- 
-@@ -94,7 +95,7 @@ void rpmh_tx_done(const struct tcs_request *msg, int r)
- 	complete(compl);
- 
- exit:
--	if (rpm_msg->needs_free)
-+	if (free)
- 		kfree(rpm_msg);
- }
- 
-@@ -348,11 +349,12 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
- {
- 	struct batch_cache_req *req;
- 	struct rpmh_request *rpm_msgs;
--	DECLARE_COMPLETION_ONSTACK(compl);
-+	struct completion *compls;
- 	struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
- 	unsigned long time_left;
- 	int count = 0;
--	int ret, i, j;
-+	int ret, i;
-+	void *ptr;
- 
- 	if (!cmd || !n)
- 		return -EINVAL;
-@@ -362,10 +364,15 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
- 	if (!count)
- 		return -EINVAL;
- 
--	req = kzalloc(sizeof(*req) + count * sizeof(req->rpm_msgs[0]),
-+	ptr = kzalloc(sizeof(*req) +
-+		      count * (sizeof(req->rpm_msgs[0]) + sizeof(*compls)),
- 		      GFP_ATOMIC);
--	if (!req)
-+	if (!ptr)
- 		return -ENOMEM;
-+
-+	req = ptr;
-+	compls = ptr + sizeof(*req) + count * sizeof(*rpm_msgs);
-+
- 	req->count = count;
- 	rpm_msgs = req->rpm_msgs;
- 
-@@ -380,25 +387,26 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
- 	}
- 
- 	for (i = 0; i < count; i++) {
--		rpm_msgs[i].completion = &compl;
-+		struct completion *compl = &compls[i];
-+
-+		init_completion(compl);
-+		rpm_msgs[i].completion = compl;
- 		ret = rpmh_rsc_send_data(ctrlr_to_drv(ctrlr), &rpm_msgs[i].msg);
- 		if (ret) {
- 			pr_err("Error(%d) sending RPMH message addr=%#x\n",
- 			       ret, rpm_msgs[i].msg.cmds[0].addr);
--			for (j = i; j < count; j++)
--				rpmh_tx_done(&rpm_msgs[j].msg, ret);
- 			break;
- 		}
- 	}
- 
- 	time_left = RPMH_TIMEOUT_MS;
--	for (i = 0; i < count; i++) {
--		time_left = wait_for_completion_timeout(&compl, time_left);
-+	while (i--) {
-+		time_left = wait_for_completion_timeout(&compls[i], time_left);
- 		if (!time_left) {
- 			/*
- 			 * Better hope they never finish because they'll signal
--			 * the completion on our stack and that's bad once
--			 * we've returned from the function.
-+			 * the completion that we're going to free once
-+			 * we've returned from this function.
- 			 */
- 			WARN_ON(1);
- 			ret = -ETIMEDOUT;
-@@ -407,7 +415,7 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
- 	}
- 
- exit:
--	kfree(req);
-+	kfree(ptr);
- 
- 	return ret;
- }
-diff --git a/drivers/soc/tegra/fuse/fuse-tegra.c b/drivers/soc/tegra/fuse/fuse-tegra.c
-index a33ee8ef8b6b..51625703399e 100644
---- a/drivers/soc/tegra/fuse/fuse-tegra.c
-+++ b/drivers/soc/tegra/fuse/fuse-tegra.c
-@@ -137,13 +137,17 @@ static int tegra_fuse_probe(struct platform_device *pdev)
- 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- 	fuse->phys = res->start;
- 	fuse->base = devm_ioremap_resource(&pdev->dev, res);
--	if (IS_ERR(fuse->base))
--		return PTR_ERR(fuse->base);
-+	if (IS_ERR(fuse->base)) {
-+		err = PTR_ERR(fuse->base);
-+		fuse->base = base;
-+		return err;
-+	}
- 
- 	fuse->clk = devm_clk_get(&pdev->dev, "fuse");
- 	if (IS_ERR(fuse->clk)) {
- 		dev_err(&pdev->dev, "failed to get FUSE clock: %ld",
- 			PTR_ERR(fuse->clk));
-+		fuse->base = base;
- 		return PTR_ERR(fuse->clk);
- 	}
- 
-@@ -152,8 +156,10 @@ static int tegra_fuse_probe(struct platform_device *pdev)
- 
- 	if (fuse->soc->probe) {
- 		err = fuse->soc->probe(fuse);
--		if (err < 0)
-+		if (err < 0) {
-+			fuse->base = base;
- 			return err;
-+		}
- 	}
- 
- 	if (tegra_fuse_create_sysfs(&pdev->dev, fuse->soc->info->size,
-diff --git a/drivers/spi/spi-gpio.c b/drivers/spi/spi-gpio.c
-index a4aee26028cd..53b35c56a557 100644
---- a/drivers/spi/spi-gpio.c
-+++ b/drivers/spi/spi-gpio.c
-@@ -428,7 +428,8 @@ static int spi_gpio_probe(struct platform_device *pdev)
- 		return status;
- 
- 	master->bits_per_word_mask = SPI_BPW_RANGE_MASK(1, 32);
--	master->mode_bits = SPI_3WIRE | SPI_3WIRE_HIZ | SPI_CPHA | SPI_CPOL;
-+	master->mode_bits = SPI_3WIRE | SPI_3WIRE_HIZ | SPI_CPHA | SPI_CPOL |
-+			    SPI_CS_HIGH;
- 	master->flags = master_flags;
- 	master->bus_num = pdev->id;
- 	/* The master needs to think there is a chipselect even if not connected */
-@@ -455,7 +456,6 @@ static int spi_gpio_probe(struct platform_device *pdev)
- 		spi_gpio->bitbang.txrx_word[SPI_MODE_3] = spi_gpio_spec_txrx_word_mode3;
- 	}
- 	spi_gpio->bitbang.setup_transfer = spi_bitbang_setup_transfer;
--	spi_gpio->bitbang.flags = SPI_CS_HIGH;
- 
- 	status = spi_bitbang_start(&spi_gpio->bitbang);
- 	if (status)
-diff --git a/drivers/spi/spi-omap2-mcspi.c b/drivers/spi/spi-omap2-mcspi.c
-index 2fd8881fcd65..8be304379628 100644
---- a/drivers/spi/spi-omap2-mcspi.c
-+++ b/drivers/spi/spi-omap2-mcspi.c
-@@ -623,8 +623,8 @@ omap2_mcspi_txrx_dma(struct spi_device *spi, struct spi_transfer *xfer)
- 	cfg.dst_addr = cs->phys + OMAP2_MCSPI_TX0;
- 	cfg.src_addr_width = width;
- 	cfg.dst_addr_width = width;
--	cfg.src_maxburst = es;
--	cfg.dst_maxburst = es;
-+	cfg.src_maxburst = 1;
-+	cfg.dst_maxburst = 1;
- 
- 	rx = xfer->rx_buf;
- 	tx = xfer->tx_buf;
-diff --git a/drivers/spi/spi-pxa2xx.c b/drivers/spi/spi-pxa2xx.c
-index d84b893a64d7..3e82eaad0f2d 100644
---- a/drivers/spi/spi-pxa2xx.c
-+++ b/drivers/spi/spi-pxa2xx.c
-@@ -1696,6 +1696,7 @@ static int pxa2xx_spi_probe(struct platform_device *pdev)
- 			platform_info->enable_dma = false;
- 		} else {
- 			master->can_dma = pxa2xx_spi_can_dma;
-+			master->max_dma_len = MAX_DMA_LEN;
- 		}
- 	}
- 
-diff --git a/drivers/spi/spi-ti-qspi.c b/drivers/spi/spi-ti-qspi.c
-index 5f19016bbf10..b9fb6493cd6b 100644
---- a/drivers/spi/spi-ti-qspi.c
-+++ b/drivers/spi/spi-ti-qspi.c
-@@ -490,8 +490,8 @@ static void ti_qspi_enable_memory_map(struct spi_device *spi)
- 	ti_qspi_write(qspi, MM_SWITCH, QSPI_SPI_SWITCH_REG);
- 	if (qspi->ctrl_base) {
- 		regmap_update_bits(qspi->ctrl_base, qspi->ctrl_reg,
--				   MEM_CS_EN(spi->chip_select),
--				   MEM_CS_MASK);
-+				   MEM_CS_MASK,
-+				   MEM_CS_EN(spi->chip_select));
- 	}
- 	qspi->mmap_enabled = true;
- }
-@@ -503,7 +503,7 @@ static void ti_qspi_disable_memory_map(struct spi_device *spi)
- 	ti_qspi_write(qspi, 0, QSPI_SPI_SWITCH_REG);
- 	if (qspi->ctrl_base)
- 		regmap_update_bits(qspi->ctrl_base, qspi->ctrl_reg,
--				   0, MEM_CS_MASK);
-+				   MEM_CS_MASK, 0);
- 	qspi->mmap_enabled = false;
- }
- 
-diff --git a/drivers/staging/android/ashmem.c b/drivers/staging/android/ashmem.c
-index 90a8a9f1ac7d..910826df4a31 100644
---- a/drivers/staging/android/ashmem.c
-+++ b/drivers/staging/android/ashmem.c
-@@ -75,6 +75,9 @@ struct ashmem_range {
- /* LRU list of unpinned pages, protected by ashmem_mutex */
- static LIST_HEAD(ashmem_lru_list);
- 
-+static atomic_t ashmem_shrink_inflight = ATOMIC_INIT(0);
-+static DECLARE_WAIT_QUEUE_HEAD(ashmem_shrink_wait);
-+
- /*
-  * long lru_count - The count of pages on our LRU list.
-  *
-@@ -168,19 +171,15 @@ static inline void lru_del(struct ashmem_range *range)
-  * @end:	   The ending page (inclusive)
-  *
-  * This function is protected by ashmem_mutex.
-- *
-- * Return: 0 if successful, or -ENOMEM if there is an error
-  */
--static int range_alloc(struct ashmem_area *asma,
--		       struct ashmem_range *prev_range, unsigned int purged,
--		       size_t start, size_t end)
-+static void range_alloc(struct ashmem_area *asma,
-+			struct ashmem_range *prev_range, unsigned int purged,
-+			size_t start, size_t end,
-+			struct ashmem_range **new_range)
- {
--	struct ashmem_range *range;
--
--	range = kmem_cache_zalloc(ashmem_range_cachep, GFP_KERNEL);
--	if (!range)
--		return -ENOMEM;
-+	struct ashmem_range *range = *new_range;
- 
-+	*new_range = NULL;
- 	range->asma = asma;
- 	range->pgstart = start;
- 	range->pgend = end;
-@@ -190,8 +189,6 @@ static int range_alloc(struct ashmem_area *asma,
- 
- 	if (range_on_lru(range))
- 		lru_add(range);
--
--	return 0;
- }
- 
- /**
-@@ -438,7 +435,6 @@ out:
- static unsigned long
- ashmem_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
- {
--	struct ashmem_range *range, *next;
- 	unsigned long freed = 0;
- 
- 	/* We might recurse into filesystem code, so bail out if necessary */
-@@ -448,21 +444,33 @@ ashmem_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
- 	if (!mutex_trylock(&ashmem_mutex))
- 		return -1;
- 
--	list_for_each_entry_safe(range, next, &ashmem_lru_list, lru) {
-+	while (!list_empty(&ashmem_lru_list)) {
-+		struct ashmem_range *range =
-+			list_first_entry(&ashmem_lru_list, typeof(*range), lru);
- 		loff_t start = range->pgstart * PAGE_SIZE;
- 		loff_t end = (range->pgend + 1) * PAGE_SIZE;
-+		struct file *f = range->asma->file;
- 
--		range->asma->file->f_op->fallocate(range->asma->file,
--				FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
--				start, end - start);
-+		get_file(f);
-+		atomic_inc(&ashmem_shrink_inflight);
- 		range->purged = ASHMEM_WAS_PURGED;
- 		lru_del(range);
- 
- 		freed += range_size(range);
-+		mutex_unlock(&ashmem_mutex);
-+		f->f_op->fallocate(f,
-+				   FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
-+				   start, end - start);
-+		fput(f);
-+		if (atomic_dec_and_test(&ashmem_shrink_inflight))
-+			wake_up_all(&ashmem_shrink_wait);
-+		if (!mutex_trylock(&ashmem_mutex))
-+			goto out;
- 		if (--sc->nr_to_scan <= 0)
- 			break;
- 	}
- 	mutex_unlock(&ashmem_mutex);
-+out:
- 	return freed;
- }
- 
-@@ -582,7 +590,8 @@ static int get_name(struct ashmem_area *asma, void __user *name)
-  *
-  * Caller must hold ashmem_mutex.
-  */
--static int ashmem_pin(struct ashmem_area *asma, size_t pgstart, size_t pgend)
-+static int ashmem_pin(struct ashmem_area *asma, size_t pgstart, size_t pgend,
-+		      struct ashmem_range **new_range)
- {
- 	struct ashmem_range *range, *next;
- 	int ret = ASHMEM_NOT_PURGED;
-@@ -635,7 +644,7 @@ static int ashmem_pin(struct ashmem_area *asma, size_t pgstart, size_t pgend)
- 			 * second half and adjust the first chunk's endpoint.
- 			 */
- 			range_alloc(asma, range, range->purged,
--				    pgend + 1, range->pgend);
-+				    pgend + 1, range->pgend, new_range);
- 			range_shrink(range, range->pgstart, pgstart - 1);
- 			break;
- 		}
-@@ -649,7 +658,8 @@ static int ashmem_pin(struct ashmem_area *asma, size_t pgstart, size_t pgend)
-  *
-  * Caller must hold ashmem_mutex.
-  */
--static int ashmem_unpin(struct ashmem_area *asma, size_t pgstart, size_t pgend)
-+static int ashmem_unpin(struct ashmem_area *asma, size_t pgstart, size_t pgend,
-+			struct ashmem_range **new_range)
- {
- 	struct ashmem_range *range, *next;
- 	unsigned int purged = ASHMEM_NOT_PURGED;
-@@ -675,7 +685,8 @@ restart:
- 		}
- 	}
- 
--	return range_alloc(asma, range, purged, pgstart, pgend);
-+	range_alloc(asma, range, purged, pgstart, pgend, new_range);
-+	return 0;
- }
- 
- /*
-@@ -708,11 +719,19 @@ static int ashmem_pin_unpin(struct ashmem_area *asma, unsigned long cmd,
- 	struct ashmem_pin pin;
- 	size_t pgstart, pgend;
- 	int ret = -EINVAL;
-+	struct ashmem_range *range = NULL;
- 
- 	if (copy_from_user(&pin, p, sizeof(pin)))
- 		return -EFAULT;
- 
-+	if (cmd == ASHMEM_PIN || cmd == ASHMEM_UNPIN) {
-+		range = kmem_cache_zalloc(ashmem_range_cachep, GFP_KERNEL);
-+		if (!range)
-+			return -ENOMEM;
-+	}
-+
- 	mutex_lock(&ashmem_mutex);
-+	wait_event(ashmem_shrink_wait, !atomic_read(&ashmem_shrink_inflight));
- 
- 	if (!asma->file)
- 		goto out_unlock;
-@@ -735,10 +754,10 @@ static int ashmem_pin_unpin(struct ashmem_area *asma, unsigned long cmd,
- 
- 	switch (cmd) {
- 	case ASHMEM_PIN:
--		ret = ashmem_pin(asma, pgstart, pgend);
-+		ret = ashmem_pin(asma, pgstart, pgend, &range);
- 		break;
- 	case ASHMEM_UNPIN:
--		ret = ashmem_unpin(asma, pgstart, pgend);
-+		ret = ashmem_unpin(asma, pgstart, pgend, &range);
- 		break;
- 	case ASHMEM_GET_PIN_STATUS:
- 		ret = ashmem_get_pin_status(asma, pgstart, pgend);
-@@ -747,6 +766,8 @@ static int ashmem_pin_unpin(struct ashmem_area *asma, unsigned long cmd,
- 
- out_unlock:
- 	mutex_unlock(&ashmem_mutex);
-+	if (range)
-+		kmem_cache_free(ashmem_range_cachep, range);
- 
- 	return ret;
- }
-diff --git a/drivers/staging/android/ion/ion_system_heap.c b/drivers/staging/android/ion/ion_system_heap.c
-index 0383f7548d48..20f2103a4ebf 100644
---- a/drivers/staging/android/ion/ion_system_heap.c
-+++ b/drivers/staging/android/ion/ion_system_heap.c
-@@ -223,10 +223,10 @@ static void ion_system_heap_destroy_pools(struct ion_page_pool **pools)
- static int ion_system_heap_create_pools(struct ion_page_pool **pools)
- {
- 	int i;
--	gfp_t gfp_flags = low_order_gfp_flags;
- 
- 	for (i = 0; i < NUM_ORDERS; i++) {
- 		struct ion_page_pool *pool;
-+		gfp_t gfp_flags = low_order_gfp_flags;
- 
- 		if (orders[i] > 4)
- 			gfp_flags = high_order_gfp_flags;
-diff --git a/drivers/staging/comedi/comedidev.h b/drivers/staging/comedi/comedidev.h
-index a7d569cfca5d..0dff1ac057cd 100644
---- a/drivers/staging/comedi/comedidev.h
-+++ b/drivers/staging/comedi/comedidev.h
-@@ -1001,6 +1001,8 @@ int comedi_dio_insn_config(struct comedi_device *dev,
- 			   unsigned int mask);
- unsigned int comedi_dio_update_state(struct comedi_subdevice *s,
- 				     unsigned int *data);
-+unsigned int comedi_bytes_per_scan_cmd(struct comedi_subdevice *s,
-+				       struct comedi_cmd *cmd);
- unsigned int comedi_bytes_per_scan(struct comedi_subdevice *s);
- unsigned int comedi_nscans_left(struct comedi_subdevice *s,
- 				unsigned int nscans);
-diff --git a/drivers/staging/comedi/drivers.c b/drivers/staging/comedi/drivers.c
-index eefa62f42c0f..5a32b8fc000e 100644
---- a/drivers/staging/comedi/drivers.c
-+++ b/drivers/staging/comedi/drivers.c
-@@ -394,11 +394,13 @@ unsigned int comedi_dio_update_state(struct comedi_subdevice *s,
- EXPORT_SYMBOL_GPL(comedi_dio_update_state);
- 
- /**
-- * comedi_bytes_per_scan() - Get length of asynchronous command "scan" in bytes
-+ * comedi_bytes_per_scan_cmd() - Get length of asynchronous command "scan" in
-+ * bytes
-  * @s: COMEDI subdevice.
-+ * @cmd: COMEDI command.
-  *
-  * Determines the overall scan length according to the subdevice type and the
-- * number of channels in the scan.
-+ * number of channels in the scan for the specified command.
-  *
-  * For digital input, output or input/output subdevices, samples for
-  * multiple channels are assumed to be packed into one or more unsigned
-@@ -408,9 +410,9 @@ EXPORT_SYMBOL_GPL(comedi_dio_update_state);
-  *
-  * Returns the overall scan length in bytes.
-  */
--unsigned int comedi_bytes_per_scan(struct comedi_subdevice *s)
-+unsigned int comedi_bytes_per_scan_cmd(struct comedi_subdevice *s,
-+				       struct comedi_cmd *cmd)
- {
--	struct comedi_cmd *cmd = &s->async->cmd;
- 	unsigned int num_samples;
- 	unsigned int bits_per_sample;
- 
-@@ -427,6 +429,29 @@ unsigned int comedi_bytes_per_scan(struct comedi_subdevice *s)
- 	}
- 	return comedi_samples_to_bytes(s, num_samples);
- }
-+EXPORT_SYMBOL_GPL(comedi_bytes_per_scan_cmd);
-+
-+/**
-+ * comedi_bytes_per_scan() - Get length of asynchronous command "scan" in bytes
-+ * @s: COMEDI subdevice.
-+ *
-+ * Determines the overall scan length according to the subdevice type and the
-+ * number of channels in the scan for the current command.
-+ *
-+ * For digital input, output or input/output subdevices, samples for
-+ * multiple channels are assumed to be packed into one or more unsigned
-+ * short or unsigned int values according to the subdevice's %SDF_LSAMPL
-+ * flag.  For other types of subdevice, samples are assumed to occupy a
-+ * whole unsigned short or unsigned int according to the %SDF_LSAMPL flag.
-+ *
-+ * Returns the overall scan length in bytes.
-+ */
-+unsigned int comedi_bytes_per_scan(struct comedi_subdevice *s)
-+{
-+	struct comedi_cmd *cmd = &s->async->cmd;
-+
-+	return comedi_bytes_per_scan_cmd(s, cmd);
-+}
- EXPORT_SYMBOL_GPL(comedi_bytes_per_scan);
- 
- static unsigned int __comedi_nscans_left(struct comedi_subdevice *s,
-diff --git a/drivers/staging/comedi/drivers/ni_660x.c b/drivers/staging/comedi/drivers/ni_660x.c
-index e70a461e723f..405573e927cf 100644
---- a/drivers/staging/comedi/drivers/ni_660x.c
-+++ b/drivers/staging/comedi/drivers/ni_660x.c
-@@ -656,6 +656,7 @@ static int ni_660x_set_pfi_routing(struct comedi_device *dev,
- 	case NI_660X_PFI_OUTPUT_DIO:
- 		if (chan > 31)
- 			return -EINVAL;
-+		break;
- 	default:
- 		return -EINVAL;
- 	}
-diff --git a/drivers/staging/comedi/drivers/ni_mio_common.c b/drivers/staging/comedi/drivers/ni_mio_common.c
-index 5edf59ac6706..b04dad8c7092 100644
---- a/drivers/staging/comedi/drivers/ni_mio_common.c
-+++ b/drivers/staging/comedi/drivers/ni_mio_common.c
-@@ -3545,6 +3545,7 @@ static int ni_cdio_cmdtest(struct comedi_device *dev,
- 			   struct comedi_subdevice *s, struct comedi_cmd *cmd)
- {
- 	struct ni_private *devpriv = dev->private;
-+	unsigned int bytes_per_scan;
- 	int err = 0;
- 
- 	/* Step 1 : check if triggers are trivially valid */
-@@ -3579,9 +3580,12 @@ static int ni_cdio_cmdtest(struct comedi_device *dev,
- 	err |= comedi_check_trigger_arg_is(&cmd->convert_arg, 0);
- 	err |= comedi_check_trigger_arg_is(&cmd->scan_end_arg,
- 					   cmd->chanlist_len);
--	err |= comedi_check_trigger_arg_max(&cmd->stop_arg,
--					    s->async->prealloc_bufsz /
--					    comedi_bytes_per_scan(s));
-+	bytes_per_scan = comedi_bytes_per_scan_cmd(s, cmd);
-+	if (bytes_per_scan) {
-+		err |= comedi_check_trigger_arg_max(&cmd->stop_arg,
-+						    s->async->prealloc_bufsz /
-+						    bytes_per_scan);
-+	}
- 
- 	if (err)
- 		return 3;
-diff --git a/drivers/staging/erofs/dir.c b/drivers/staging/erofs/dir.c
-index 833f052f79d0..b21ed5b4c711 100644
---- a/drivers/staging/erofs/dir.c
-+++ b/drivers/staging/erofs/dir.c
-@@ -23,6 +23,21 @@ static const unsigned char erofs_filetype_table[EROFS_FT_MAX] = {
- 	[EROFS_FT_SYMLINK]	= DT_LNK,
- };
- 
-+static void debug_one_dentry(unsigned char d_type, const char *de_name,
-+			     unsigned int de_namelen)
-+{
-+#ifdef CONFIG_EROFS_FS_DEBUG
-+	/* since the on-disk name could not have the trailing '\0' */
-+	unsigned char dbg_namebuf[EROFS_NAME_LEN + 1];
-+
-+	memcpy(dbg_namebuf, de_name, de_namelen);
-+	dbg_namebuf[de_namelen] = '\0';
-+
-+	debugln("found dirent %s de_len %u d_type %d", dbg_namebuf,
-+		de_namelen, d_type);
-+#endif
-+}
-+
- static int erofs_fill_dentries(struct dir_context *ctx,
- 	void *dentry_blk, unsigned int *ofs,
- 	unsigned int nameoff, unsigned int maxsize)
-@@ -33,14 +48,10 @@ static int erofs_fill_dentries(struct dir_context *ctx,
- 	de = dentry_blk + *ofs;
- 	while (de < end) {
- 		const char *de_name;
--		int de_namelen;
-+		unsigned int de_namelen;
- 		unsigned char d_type;
--#ifdef CONFIG_EROFS_FS_DEBUG
--		unsigned int dbg_namelen;
--		unsigned char dbg_namebuf[EROFS_NAME_LEN];
--#endif
- 
--		if (unlikely(de->file_type < EROFS_FT_MAX))
-+		if (de->file_type < EROFS_FT_MAX)
- 			d_type = erofs_filetype_table[de->file_type];
- 		else
- 			d_type = DT_UNKNOWN;
-@@ -48,26 +59,20 @@ static int erofs_fill_dentries(struct dir_context *ctx,
- 		nameoff = le16_to_cpu(de->nameoff);
- 		de_name = (char *)dentry_blk + nameoff;
- 
--		de_namelen = unlikely(de + 1 >= end) ?
--			/* last directory entry */
--			strnlen(de_name, maxsize - nameoff) :
--			le16_to_cpu(de[1].nameoff) - nameoff;
-+		/* the last dirent in the block? */
-+		if (de + 1 >= end)
-+			de_namelen = strnlen(de_name, maxsize - nameoff);
-+		else
-+			de_namelen = le16_to_cpu(de[1].nameoff) - nameoff;
- 
- 		/* a corrupted entry is found */
--		if (unlikely(de_namelen < 0)) {
-+		if (unlikely(nameoff + de_namelen > maxsize ||
-+			     de_namelen > EROFS_NAME_LEN)) {
- 			DBG_BUGON(1);
- 			return -EIO;
- 		}
- 
--#ifdef CONFIG_EROFS_FS_DEBUG
--		dbg_namelen = min(EROFS_NAME_LEN - 1, de_namelen);
--		memcpy(dbg_namebuf, de_name, dbg_namelen);
--		dbg_namebuf[dbg_namelen] = '\0';
--
--		debugln("%s, found de_name %s de_len %d d_type %d", __func__,
--			dbg_namebuf, de_namelen, d_type);
--#endif
--
-+		debug_one_dentry(d_type, de_name, de_namelen);
- 		if (!dir_emit(ctx, de_name, de_namelen,
- 			      le64_to_cpu(de->nid), d_type))
- 			/* stopped by some reason */
-diff --git a/drivers/staging/erofs/inode.c b/drivers/staging/erofs/inode.c
-index d7fbf5f4600f..f99954dbfdb5 100644
---- a/drivers/staging/erofs/inode.c
-+++ b/drivers/staging/erofs/inode.c
-@@ -185,16 +185,16 @@ static int fill_inode(struct inode *inode, int isdir)
- 		/* setup the new inode */
- 		if (S_ISREG(inode->i_mode)) {
- #ifdef CONFIG_EROFS_FS_XATTR
--			if (vi->xattr_isize)
--				inode->i_op = &erofs_generic_xattr_iops;
-+			inode->i_op = &erofs_generic_xattr_iops;
- #endif
- 			inode->i_fop = &generic_ro_fops;
- 		} else if (S_ISDIR(inode->i_mode)) {
- 			inode->i_op =
- #ifdef CONFIG_EROFS_FS_XATTR
--				vi->xattr_isize ? &erofs_dir_xattr_iops :
--#endif
-+				&erofs_dir_xattr_iops;
-+#else
- 				&erofs_dir_iops;
-+#endif
- 			inode->i_fop = &erofs_dir_fops;
- 		} else if (S_ISLNK(inode->i_mode)) {
- 			/* by default, page_get_link is used for symlink */
-diff --git a/drivers/staging/erofs/internal.h b/drivers/staging/erofs/internal.h
-index e049d00c087a..16249d7f0895 100644
---- a/drivers/staging/erofs/internal.h
-+++ b/drivers/staging/erofs/internal.h
-@@ -354,12 +354,17 @@ static inline erofs_off_t iloc(struct erofs_sb_info *sbi, erofs_nid_t nid)
- 	return blknr_to_addr(sbi->meta_blkaddr) + (nid << sbi->islotbits);
- }
- 
--#define inode_set_inited_xattr(inode)   (EROFS_V(inode)->flags |= 1)
--#define inode_has_inited_xattr(inode)   (EROFS_V(inode)->flags & 1)
-+/* atomic flag definitions */
-+#define EROFS_V_EA_INITED_BIT	0
-+
-+/* bitlock definitions (arranged in reverse order) */
-+#define EROFS_V_BL_XATTR_BIT	(BITS_PER_LONG - 1)
- 
- struct erofs_vnode {
- 	erofs_nid_t nid;
--	unsigned int flags;
-+
-+	/* atomic flags (including bitlocks) */
-+	unsigned long flags;
- 
- 	unsigned char data_mapping_mode;
- 	/* inline size in bytes */
-diff --git a/drivers/staging/erofs/namei.c b/drivers/staging/erofs/namei.c
-index 5596c52e246d..ecc51ef0753f 100644
---- a/drivers/staging/erofs/namei.c
-+++ b/drivers/staging/erofs/namei.c
-@@ -15,74 +15,77 @@
- 
- #include <trace/events/erofs.h>
- 
--/* based on the value of qn->len is accurate */
--static inline int dirnamecmp(struct qstr *qn,
--	struct qstr *qd, unsigned int *matched)
-+struct erofs_qstr {
-+	const unsigned char *name;
-+	const unsigned char *end;
-+};
-+
-+/* based on the end of qn is accurate and it must have the trailing '\0' */
-+static inline int dirnamecmp(const struct erofs_qstr *qn,
-+			     const struct erofs_qstr *qd,
-+			     unsigned int *matched)
- {
--	unsigned int i = *matched, len = min(qn->len, qd->len);
--loop:
--	if (unlikely(i >= len)) {
--		*matched = i;
--		if (qn->len < qd->len) {
--			/*
--			 * actually (qn->len == qd->len)
--			 * when qd->name[i] == '\0'
--			 */
--			return qd->name[i] == '\0' ? 0 : -1;
-+	unsigned int i = *matched;
-+
-+	/*
-+	 * on-disk error, let's only BUG_ON in the debugging mode.
-+	 * otherwise, it will return 1 to just skip the invalid name
-+	 * and go on (in consideration of the lookup performance).
-+	 */
-+	DBG_BUGON(qd->name > qd->end);
-+
-+	/* qd could not have trailing '\0' */
-+	/* However it is absolutely safe if < qd->end */
-+	while (qd->name + i < qd->end && qd->name[i] != '\0') {
-+		if (qn->name[i] != qd->name[i]) {
-+			*matched = i;
-+			return qn->name[i] > qd->name[i] ? 1 : -1;
- 		}
--		return (qn->len > qd->len);
-+		++i;
- 	}
--
--	if (qn->name[i] != qd->name[i]) {
--		*matched = i;
--		return qn->name[i] > qd->name[i] ? 1 : -1;
--	}
--
--	++i;
--	goto loop;
-+	*matched = i;
-+	/* See comments in __d_alloc on the terminating NUL character */
-+	return qn->name[i] == '\0' ? 0 : 1;
- }
- 
--static struct erofs_dirent *find_target_dirent(
--	struct qstr *name,
--	u8 *data, int maxsize)
-+#define nameoff_from_disk(off, sz)	(le16_to_cpu(off) & ((sz) - 1))
-+
-+static struct erofs_dirent *find_target_dirent(struct erofs_qstr *name,
-+					       u8 *data,
-+					       unsigned int dirblksize,
-+					       const int ndirents)
- {
--	unsigned int ndirents, head, back;
-+	int head, back;
- 	unsigned int startprfx, endprfx;
- 	struct erofs_dirent *const de = (struct erofs_dirent *)data;
- 
--	/* make sure that maxsize is valid */
--	BUG_ON(maxsize < sizeof(struct erofs_dirent));
--
--	ndirents = le16_to_cpu(de->nameoff) / sizeof(*de);
--
--	/* corrupted dir (may be unnecessary...) */
--	BUG_ON(!ndirents);
--
--	head = 0;
-+	/* since the 1st dirent has been evaluated previously */
-+	head = 1;
- 	back = ndirents - 1;
- 	startprfx = endprfx = 0;
- 
- 	while (head <= back) {
--		unsigned int mid = head + (back - head) / 2;
--		unsigned int nameoff = le16_to_cpu(de[mid].nameoff);
-+		const int mid = head + (back - head) / 2;
-+		const int nameoff = nameoff_from_disk(de[mid].nameoff,
-+						      dirblksize);
- 		unsigned int matched = min(startprfx, endprfx);
--
--		struct qstr dname = QSTR_INIT(data + nameoff,
--			unlikely(mid >= ndirents - 1) ?
--				maxsize - nameoff :
--				le16_to_cpu(de[mid + 1].nameoff) - nameoff);
-+		struct erofs_qstr dname = {
-+			.name = data + nameoff,
-+			.end = unlikely(mid >= ndirents - 1) ?
-+				data + dirblksize :
-+				data + nameoff_from_disk(de[mid + 1].nameoff,
-+							 dirblksize)
-+		};
- 
- 		/* string comparison without already matched prefix */
- 		int ret = dirnamecmp(name, &dname, &matched);
- 
--		if (unlikely(!ret))
-+		if (unlikely(!ret)) {
- 			return de + mid;
--		else if (ret > 0) {
-+		} else if (ret > 0) {
- 			head = mid + 1;
- 			startprfx = matched;
--		} else if (unlikely(mid < 1))	/* fix "mid" overflow */
--			break;
--		else {
-+		} else {
- 			back = mid - 1;
- 			endprfx = matched;
- 		}
-@@ -91,12 +94,12 @@ static struct erofs_dirent *find_target_dirent(
- 	return ERR_PTR(-ENOENT);
- }
- 
--static struct page *find_target_block_classic(
--	struct inode *dir,
--	struct qstr *name, int *_diff)
-+static struct page *find_target_block_classic(struct inode *dir,
-+					      struct erofs_qstr *name,
-+					      int *_ndirents)
- {
- 	unsigned int startprfx, endprfx;
--	unsigned int head, back;
-+	int head, back;
- 	struct address_space *const mapping = dir->i_mapping;
- 	struct page *candidate = ERR_PTR(-ENOENT);
- 
-@@ -105,41 +108,43 @@ static struct page *find_target_block_classic(
- 	back = inode_datablocks(dir) - 1;
- 
- 	while (head <= back) {
--		unsigned int mid = head + (back - head) / 2;
-+		const int mid = head + (back - head) / 2;
- 		struct page *page = read_mapping_page(mapping, mid, NULL);
- 
--		if (IS_ERR(page)) {
--exact_out:
--			if (!IS_ERR(candidate)) /* valid candidate */
--				put_page(candidate);
--			return page;
--		} else {
--			int diff;
--			unsigned int ndirents, matched;
--			struct qstr dname;
-+		if (!IS_ERR(page)) {
- 			struct erofs_dirent *de = kmap_atomic(page);
--			unsigned int nameoff = le16_to_cpu(de->nameoff);
--
--			ndirents = nameoff / sizeof(*de);
-+			const int nameoff = nameoff_from_disk(de->nameoff,
-+							      EROFS_BLKSIZ);
-+			const int ndirents = nameoff / sizeof(*de);
-+			int diff;
-+			unsigned int matched;
-+			struct erofs_qstr dname;
- 
--			/* corrupted dir (should have one entry at least) */
--			BUG_ON(!ndirents || nameoff > PAGE_SIZE);
-+			if (unlikely(!ndirents)) {
-+				DBG_BUGON(1);
-+				kunmap_atomic(de);
-+				put_page(page);
-+				page = ERR_PTR(-EIO);
-+				goto out;
-+			}
- 
- 			matched = min(startprfx, endprfx);
- 
- 			dname.name = (u8 *)de + nameoff;
--			dname.len = ndirents == 1 ?
--				/* since the rest of the last page is 0 */
--				EROFS_BLKSIZ - nameoff
--				: le16_to_cpu(de[1].nameoff) - nameoff;
-+			if (ndirents == 1)
-+				dname.end = (u8 *)de + EROFS_BLKSIZ;
-+			else
-+				dname.end = (u8 *)de +
-+					nameoff_from_disk(de[1].nameoff,
-+							  EROFS_BLKSIZ);
- 
- 			/* string comparison without already matched prefix */
- 			diff = dirnamecmp(name, &dname, &matched);
- 			kunmap_atomic(de);
- 
- 			if (unlikely(!diff)) {
--				*_diff = 0;
--				goto exact_out;
-+				*_ndirents = 0;
-+				goto out;
- 			} else if (diff > 0) {
- 				head = mid + 1;
- 				startprfx = matched;
-@@ -147,45 +152,51 @@ exact_out:
- 				if (likely(!IS_ERR(candidate)))
- 					put_page(candidate);
- 				candidate = page;
-+				*_ndirents = ndirents;
- 			} else {
- 				put_page(page);
- 
--				if (unlikely(mid < 1))	/* fix "mid" overflow */
--					break;
--
- 				back = mid - 1;
- 				endprfx = matched;
- 			}
-+			continue;
- 		}
-+out:		/* free if the candidate is valid */
-+		if (!IS_ERR(candidate))
-+			put_page(candidate);
-+		return page;
- 	}
--	*_diff = 1;
- 	return candidate;
- }
- 
- int erofs_namei(struct inode *dir,
--	struct qstr *name,
--	erofs_nid_t *nid, unsigned int *d_type)
-+		struct qstr *name,
-+		erofs_nid_t *nid, unsigned int *d_type)
- {
--	int diff;
-+	int ndirents;
- 	struct page *page;
--	u8 *data;
-+	void *data;
- 	struct erofs_dirent *de;
-+	struct erofs_qstr qn;
- 
- 	if (unlikely(!dir->i_size))
- 		return -ENOENT;
- 
--	diff = 1;
--	page = find_target_block_classic(dir, name, &diff);
-+	qn.name = name->name;
-+	qn.end = name->name + name->len;
-+
-+	ndirents = 0;
-+	page = find_target_block_classic(dir, &qn, &ndirents);
- 
- 	if (unlikely(IS_ERR(page)))
- 		return PTR_ERR(page);
- 
- 	data = kmap_atomic(page);
- 	/* the target page has been mapped */
--	de = likely(diff) ?
--		/* since the rest of the last page is 0 */
--		find_target_dirent(name, data, EROFS_BLKSIZ) :
--		(struct erofs_dirent *)data;
-+	if (ndirents)
-+		de = find_target_dirent(&qn, data, EROFS_BLKSIZ, ndirents);
-+	else
-+		de = (struct erofs_dirent *)data;
- 
- 	if (likely(!IS_ERR(de))) {
- 		*nid = le64_to_cpu(de->nid);
-diff --git a/drivers/staging/erofs/unzip_vle.c b/drivers/staging/erofs/unzip_vle.c
-index 4ac1099a39c6..d850be1abc84 100644
---- a/drivers/staging/erofs/unzip_vle.c
-+++ b/drivers/staging/erofs/unzip_vle.c
-@@ -107,15 +107,30 @@ enum z_erofs_vle_work_role {
- 	Z_EROFS_VLE_WORK_SECONDARY,
- 	Z_EROFS_VLE_WORK_PRIMARY,
- 	/*
--	 * The current work has at least been linked with the following
--	 * processed chained works, which means if the processing page
--	 * is the tail partial page of the work, the current work can
--	 * safely use the whole page, as illustrated below:
--	 * +--------------+-------------------------------------------+
--	 * |  tail page   |      head page (of the previous work)     |
--	 * +--------------+-------------------------------------------+
--	 *   /\  which belongs to the current work
--	 * [  (*) this page can be used for the current work itself.  ]
-+	 * The current work was the tail of an exist chain, and the previous
-+	 * processed chained works are all decided to be hooked up to it.
-+	 * A new chain should be created for the remaining unprocessed works,
-+	 * therefore different from Z_EROFS_VLE_WORK_PRIMARY_FOLLOWED,
-+	 * the next work cannot reuse the whole page in the following scenario:
-+	 *  ________________________________________________________________
-+	 * |      tail (partial) page     |       head (partial) page       |
-+	 * |  (belongs to the next work)  |  (belongs to the current work)  |
-+	 * |_______PRIMARY_FOLLOWED_______|________PRIMARY_HOOKED___________|
-+	 */
-+	Z_EROFS_VLE_WORK_PRIMARY_HOOKED,
-+	/*
-+	 * The current work has been linked with the processed chained works,
-+	 * and could be also linked with the potential remaining works, which
-+	 * means if the processing page is the tail partial page of the work,
-+	 * the current work can safely use the whole page (since the next work
-+	 * is under control) for in-place decompression, as illustrated below:
-+	 *  ________________________________________________________________
-+	 * |  tail (partial) page  |          head (partial) page           |
-+	 * | (of the current work) |         (of the previous work)         |
-+	 * |  PRIMARY_FOLLOWED or  |                                        |
-+	 * |_____PRIMARY_HOOKED____|____________PRIMARY_FOLLOWED____________|
-+	 *
-+	 * [  (*) the above page can be used for the current work itself.  ]
- 	 */
- 	Z_EROFS_VLE_WORK_PRIMARY_FOLLOWED,
- 	Z_EROFS_VLE_WORK_MAX
-@@ -315,10 +330,10 @@ static int z_erofs_vle_work_add_page(
- 	return ret ? 0 : -EAGAIN;
- }
- 
--static inline bool try_to_claim_workgroup(
--	struct z_erofs_vle_workgroup *grp,
--	z_erofs_vle_owned_workgrp_t *owned_head,
--	bool *hosted)
-+static enum z_erofs_vle_work_role
-+try_to_claim_workgroup(struct z_erofs_vle_workgroup *grp,
-+		       z_erofs_vle_owned_workgrp_t *owned_head,
-+		       bool *hosted)
- {
- 	DBG_BUGON(*hosted == true);
- 
-@@ -332,6 +347,9 @@ retry:
- 
- 		*owned_head = &grp->next;
- 		*hosted = true;
-+		/* lucky, I am the followee :) */
-+		return Z_EROFS_VLE_WORK_PRIMARY_FOLLOWED;
-+
- 	} else if (grp->next == Z_EROFS_VLE_WORKGRP_TAIL) {
- 		/*
- 		 * type 2, link to the end of a existing open chain,
-@@ -341,12 +359,11 @@ retry:
- 		if (cmpxchg(&grp->next, Z_EROFS_VLE_WORKGRP_TAIL,
- 			    *owned_head) != Z_EROFS_VLE_WORKGRP_TAIL)
- 			goto retry;
--
- 		*owned_head = Z_EROFS_VLE_WORKGRP_TAIL;
--	} else
--		return false;	/* :( better luck next time */
-+		return Z_EROFS_VLE_WORK_PRIMARY_HOOKED;
-+	}
- 
--	return true;	/* lucky, I am the followee :) */
-+	return Z_EROFS_VLE_WORK_PRIMARY; /* :( better luck next time */
- }
- 
- struct z_erofs_vle_work_finder {
-@@ -424,12 +441,9 @@ z_erofs_vle_work_lookup(const struct z_erofs_vle_work_finder *f)
- 	*f->hosted = false;
- 	if (!primary)
- 		*f->role = Z_EROFS_VLE_WORK_SECONDARY;
--	/* claim the workgroup if possible */
--	else if (try_to_claim_workgroup(grp, f->owned_head, f->hosted))
--		*f->role = Z_EROFS_VLE_WORK_PRIMARY_FOLLOWED;
--	else
--		*f->role = Z_EROFS_VLE_WORK_PRIMARY;
--
-+	else	/* claim the workgroup if possible */
-+		*f->role = try_to_claim_workgroup(grp, f->owned_head,
-+						  f->hosted);
- 	return work;
- }
- 
-@@ -493,6 +507,9 @@ z_erofs_vle_work_register(const struct z_erofs_vle_work_finder *f,
- 	return work;
- }
- 
-+#define builder_is_hooked(builder) \
-+	((builder)->role >= Z_EROFS_VLE_WORK_PRIMARY_HOOKED)
-+
- #define builder_is_followed(builder) \
- 	((builder)->role >= Z_EROFS_VLE_WORK_PRIMARY_FOLLOWED)
- 
-@@ -686,7 +703,7 @@ static int z_erofs_do_read_page(struct z_erofs_vle_frontend *fe,
- 	struct z_erofs_vle_work_builder *const builder = &fe->builder;
- 	const loff_t offset = page_offset(page);
- 
--	bool tight = builder_is_followed(builder);
-+	bool tight = builder_is_hooked(builder);
- 	struct z_erofs_vle_work *work = builder->work;
- 
- 	enum z_erofs_cache_alloctype cache_strategy;
-@@ -704,8 +721,12 @@ repeat:
- 
- 	/* lucky, within the range of the current map_blocks */
- 	if (offset + cur >= map->m_la &&
--		offset + cur < map->m_la + map->m_llen)
-+		offset + cur < map->m_la + map->m_llen) {
-+		/* didn't get a valid unzip work previously (very rare) */
-+		if (!builder->work)
-+			goto restart_now;
- 		goto hitted;
-+	}
- 
- 	/* go ahead the next map_blocks */
- 	debugln("%s: [out-of-range] pos %llu", __func__, offset + cur);
-@@ -719,6 +740,7 @@ repeat:
- 	if (unlikely(err))
- 		goto err_out;
- 
-+restart_now:
- 	if (unlikely(!(map->m_flags & EROFS_MAP_MAPPED)))
- 		goto hitted;
- 
-@@ -740,7 +762,7 @@ repeat:
- 				 map->m_plen / PAGE_SIZE,
- 				 cache_strategy, page_pool, GFP_KERNEL);
- 
--	tight &= builder_is_followed(builder);
-+	tight &= builder_is_hooked(builder);
- 	work = builder->work;
- hitted:
- 	cur = end - min_t(unsigned int, offset + end - map->m_la, end);
-@@ -755,6 +777,9 @@ hitted:
- 			(tight ? Z_EROFS_PAGE_TYPE_EXCLUSIVE :
- 				Z_EROFS_VLE_PAGE_TYPE_TAIL_SHARED));
- 
-+	if (cur)
-+		tight &= builder_is_followed(builder);
-+
- retry:
- 	err = z_erofs_vle_work_add_page(builder, page, page_type);
- 	/* should allocate an additional staging page for pagevec */
-@@ -952,6 +977,7 @@ repeat:
- 	overlapped = false;
- 	compressed_pages = grp->compressed_pages;
- 
-+	err = 0;
- 	for (i = 0; i < clusterpages; ++i) {
- 		unsigned int pagenr;
- 
-@@ -961,26 +987,39 @@ repeat:
- 		DBG_BUGON(!page);
- 		DBG_BUGON(!page->mapping);
- 
--		if (z_erofs_is_stagingpage(page))
--			continue;
-+		if (!z_erofs_is_stagingpage(page)) {
- #ifdef EROFS_FS_HAS_MANAGED_CACHE
--		if (page->mapping == MNGD_MAPPING(sbi)) {
--			DBG_BUGON(!PageUptodate(page));
--			continue;
--		}
-+			if (page->mapping == MNGD_MAPPING(sbi)) {
-+				if (unlikely(!PageUptodate(page)))
-+					err = -EIO;
-+				continue;
-+			}
- #endif
- 
--		/* only non-head page could be reused as a compressed page */
--		pagenr = z_erofs_onlinepage_index(page);
-+			/*
-+			 * only if non-head page can be selected
-+			 * for inplace decompression
-+			 */
-+			pagenr = z_erofs_onlinepage_index(page);
- 
--		DBG_BUGON(pagenr >= nr_pages);
--		DBG_BUGON(pages[pagenr]);
--		++sparsemem_pages;
--		pages[pagenr] = page;
-+			DBG_BUGON(pagenr >= nr_pages);
-+			DBG_BUGON(pages[pagenr]);
-+			++sparsemem_pages;
-+			pages[pagenr] = page;
- 
--		overlapped = true;
-+			overlapped = true;
-+		}
-+
-+		/* PG_error needs checking for inplaced and staging pages */
-+		if (unlikely(PageError(page))) {
-+			DBG_BUGON(PageUptodate(page));
-+			err = -EIO;
-+		}
- 	}
- 
-+	if (unlikely(err))
-+		goto out;
-+
- 	llen = (nr_pages << PAGE_SHIFT) - work->pageofs;
- 
- 	if (z_erofs_vle_workgrp_fmt(grp) == Z_EROFS_VLE_WORKGRP_FMT_PLAIN) {
-@@ -992,11 +1031,10 @@ repeat:
- 	if (llen > grp->llen)
- 		llen = grp->llen;
- 
--	err = z_erofs_vle_unzip_fast_percpu(compressed_pages,
--		clusterpages, pages, llen, work->pageofs,
--		z_erofs_onlinepage_endio);
-+	err = z_erofs_vle_unzip_fast_percpu(compressed_pages, clusterpages,
-+					    pages, llen, work->pageofs);
- 	if (err != -ENOTSUPP)
--		goto out_percpu;
-+		goto out;
- 
- 	if (sparsemem_pages >= nr_pages)
- 		goto skip_allocpage;
-@@ -1010,6 +1048,10 @@ repeat:
- 
- skip_allocpage:
- 	vout = erofs_vmap(pages, nr_pages);
-+	if (!vout) {
-+		err = -ENOMEM;
-+		goto out;
-+	}
- 
- 	err = z_erofs_vle_unzip_vmap(compressed_pages,
- 		clusterpages, vout, llen, work->pageofs, overlapped);
-@@ -1017,8 +1059,25 @@ skip_allocpage:
- 	erofs_vunmap(vout, nr_pages);
- 
- out:
-+	/* must handle all compressed pages before endding pages */
-+	for (i = 0; i < clusterpages; ++i) {
-+		page = compressed_pages[i];
-+
-+#ifdef EROFS_FS_HAS_MANAGED_CACHE
-+		if (page->mapping == MNGD_MAPPING(sbi))
-+			continue;
-+#endif
-+		/* recycle all individual staging pages */
-+		(void)z_erofs_gather_if_stagingpage(page_pool, page);
-+
-+		WRITE_ONCE(compressed_pages[i], NULL);
-+	}
-+
- 	for (i = 0; i < nr_pages; ++i) {
- 		page = pages[i];
-+		if (!page)
-+			continue;
-+
- 		DBG_BUGON(!page->mapping);
- 
- 		/* recycle all individual staging pages */
-@@ -1031,20 +1090,6 @@ out:
- 		z_erofs_onlinepage_endio(page);
- 	}
- 
--out_percpu:
--	for (i = 0; i < clusterpages; ++i) {
--		page = compressed_pages[i];
--
--#ifdef EROFS_FS_HAS_MANAGED_CACHE
--		if (page->mapping == MNGD_MAPPING(sbi))
--			continue;
--#endif
--		/* recycle all individual staging pages */
--		(void)z_erofs_gather_if_stagingpage(page_pool, page);
--
--		WRITE_ONCE(compressed_pages[i], NULL);
--	}
--
- 	if (pages == z_pagemap_global)
- 		mutex_unlock(&z_pagemap_global_lock);
- 	else if (unlikely(pages != pages_onstack))
-@@ -1172,6 +1217,7 @@ repeat:
- 	if (page->mapping == mc) {
- 		WRITE_ONCE(grp->compressed_pages[nr], page);
- 
-+		ClearPageError(page);
- 		if (!PagePrivate(page)) {
- 			/*
- 			 * impossible to be !PagePrivate(page) for
-diff --git a/drivers/staging/erofs/unzip_vle.h b/drivers/staging/erofs/unzip_vle.h
-index 5a4e1b62c0d1..c0dfd6906aa8 100644
---- a/drivers/staging/erofs/unzip_vle.h
-+++ b/drivers/staging/erofs/unzip_vle.h
-@@ -218,8 +218,7 @@ extern int z_erofs_vle_plain_copy(struct page **compressed_pages,
- 
- extern int z_erofs_vle_unzip_fast_percpu(struct page **compressed_pages,
- 	unsigned clusterpages, struct page **pages,
--	unsigned outlen, unsigned short pageofs,
--	void (*endio)(struct page *));
-+	unsigned int outlen, unsigned short pageofs);
- 
- extern int z_erofs_vle_unzip_vmap(struct page **compressed_pages,
- 	unsigned clusterpages, void *vaddr, unsigned llen,
-diff --git a/drivers/staging/erofs/unzip_vle_lz4.c b/drivers/staging/erofs/unzip_vle_lz4.c
-index 52797bd89da1..3e8b0ff2efeb 100644
---- a/drivers/staging/erofs/unzip_vle_lz4.c
-+++ b/drivers/staging/erofs/unzip_vle_lz4.c
-@@ -125,8 +125,7 @@ int z_erofs_vle_unzip_fast_percpu(struct page **compressed_pages,
- 				  unsigned int clusterpages,
- 				  struct page **pages,
- 				  unsigned int outlen,
--				  unsigned short pageofs,
--				  void (*endio)(struct page *))
-+				  unsigned short pageofs)
- {
- 	void *vin, *vout;
- 	unsigned int nr_pages, i, j;
-@@ -137,10 +136,13 @@ int z_erofs_vle_unzip_fast_percpu(struct page **compressed_pages,
- 
- 	nr_pages = DIV_ROUND_UP(outlen + pageofs, PAGE_SIZE);
- 
--	if (clusterpages == 1)
-+	if (clusterpages == 1) {
- 		vin = kmap_atomic(compressed_pages[0]);
--	else
-+	} else {
- 		vin = erofs_vmap(compressed_pages, clusterpages);
-+		if (!vin)
-+			return -ENOMEM;
-+	}
- 
- 	preempt_disable();
- 	vout = erofs_pcpubuf[smp_processor_id()].data;
-@@ -148,19 +150,16 @@ int z_erofs_vle_unzip_fast_percpu(struct page **compressed_pages,
- 	ret = z_erofs_unzip_lz4(vin, vout + pageofs,
- 				clusterpages * PAGE_SIZE, outlen);
- 
--	if (ret >= 0) {
--		outlen = ret;
--		ret = 0;
--	}
-+	if (ret < 0)
-+		goto out;
-+	ret = 0;
- 
- 	for (i = 0; i < nr_pages; ++i) {
- 		j = min((unsigned int)PAGE_SIZE - pageofs, outlen);
- 
- 		if (pages[i]) {
--			if (ret < 0) {
--				SetPageError(pages[i]);
--			} else if (clusterpages == 1 &&
--				   pages[i] == compressed_pages[0]) {
-+			if (clusterpages == 1 &&
-+			    pages[i] == compressed_pages[0]) {
- 				memcpy(vin + pageofs, vout + pageofs, j);
- 			} else {
- 				void *dst = kmap_atomic(pages[i]);
-@@ -168,12 +167,13 @@ int z_erofs_vle_unzip_fast_percpu(struct page **compressed_pages,
- 				memcpy(dst + pageofs, vout + pageofs, j);
- 				kunmap_atomic(dst);
- 			}
--			endio(pages[i]);
- 		}
- 		vout += PAGE_SIZE;
- 		outlen -= j;
- 		pageofs = 0;
- 	}
-+
-+out:
- 	preempt_enable();
- 
- 	if (clusterpages == 1)
-diff --git a/drivers/staging/erofs/xattr.c b/drivers/staging/erofs/xattr.c
-index 80dca6a4adbe..6cb05ae31233 100644
---- a/drivers/staging/erofs/xattr.c
-+++ b/drivers/staging/erofs/xattr.c
-@@ -44,19 +44,48 @@ static inline void xattr_iter_end_final(struct xattr_iter *it)
- 
- static int init_inode_xattrs(struct inode *inode)
- {
-+	struct erofs_vnode *const vi = EROFS_V(inode);
- 	struct xattr_iter it;
- 	unsigned int i;
- 	struct erofs_xattr_ibody_header *ih;
- 	struct super_block *sb;
- 	struct erofs_sb_info *sbi;
--	struct erofs_vnode *vi;
- 	bool atomic_map;
-+	int ret = 0;
- 
--	if (likely(inode_has_inited_xattr(inode)))
-+	/* the most case is that xattrs of this inode are initialized. */
-+	if (test_bit(EROFS_V_EA_INITED_BIT, &vi->flags))
- 		return 0;
- 
--	vi = EROFS_V(inode);
--	BUG_ON(!vi->xattr_isize);
-+	if (wait_on_bit_lock(&vi->flags, EROFS_V_BL_XATTR_BIT, TASK_KILLABLE))
-+		return -ERESTARTSYS;
-+
-+	/* someone has initialized xattrs for us? */
-+	if (test_bit(EROFS_V_EA_INITED_BIT, &vi->flags))
-+		goto out_unlock;
-+
-+	/*
-+	 * bypass all xattr operations if ->xattr_isize is not greater than
-+	 * sizeof(struct erofs_xattr_ibody_header), in detail:
-+	 * 1) it is not enough to contain erofs_xattr_ibody_header then
-+	 *    ->xattr_isize should be 0 (it means no xattr);
-+	 * 2) it is just to contain erofs_xattr_ibody_header, which is on-disk
-+	 *    undefined right now (maybe use later with some new sb feature).
-+	 */
-+	if (vi->xattr_isize == sizeof(struct erofs_xattr_ibody_header)) {
-+		errln("xattr_isize %d of nid %llu is not supported yet",
-+		      vi->xattr_isize, vi->nid);
-+		ret = -ENOTSUPP;
-+		goto out_unlock;
-+	} else if (vi->xattr_isize < sizeof(struct erofs_xattr_ibody_header)) {
-+		if (unlikely(vi->xattr_isize)) {
-+			DBG_BUGON(1);
-+			ret = -EIO;
-+			goto out_unlock;	/* xattr ondisk layout error */
-+		}
-+		ret = -ENOATTR;
-+		goto out_unlock;
-+	}
- 
- 	sb = inode->i_sb;
- 	sbi = EROFS_SB(sb);
-@@ -64,8 +93,10 @@ static int init_inode_xattrs(struct inode *inode)
- 	it.ofs = erofs_blkoff(iloc(sbi, vi->nid) + vi->inode_isize);
- 
- 	it.page = erofs_get_inline_page(inode, it.blkaddr);
--	if (IS_ERR(it.page))
--		return PTR_ERR(it.page);
-+	if (IS_ERR(it.page)) {
-+		ret = PTR_ERR(it.page);
-+		goto out_unlock;
-+	}
- 
- 	/* read in shared xattr array (non-atomic, see kmalloc below) */
- 	it.kaddr = kmap(it.page);
-@@ -78,7 +109,8 @@ static int init_inode_xattrs(struct inode *inode)
- 						sizeof(uint), GFP_KERNEL);
- 	if (vi->xattr_shared_xattrs == NULL) {
- 		xattr_iter_end(&it, atomic_map);
--		return -ENOMEM;
-+		ret = -ENOMEM;
-+		goto out_unlock;
- 	}
- 
- 	/* let's skip ibody header */
-@@ -92,8 +124,12 @@ static int init_inode_xattrs(struct inode *inode)
- 
- 			it.page = erofs_get_meta_page(sb,
- 				++it.blkaddr, S_ISDIR(inode->i_mode));
--			if (IS_ERR(it.page))
--				return PTR_ERR(it.page);
-+			if (IS_ERR(it.page)) {
-+				kfree(vi->xattr_shared_xattrs);
-+				vi->xattr_shared_xattrs = NULL;
-+				ret = PTR_ERR(it.page);
-+				goto out_unlock;
-+			}
- 
- 			it.kaddr = kmap_atomic(it.page);
- 			atomic_map = true;
-@@ -105,8 +141,11 @@ static int init_inode_xattrs(struct inode *inode)
- 	}
- 	xattr_iter_end(&it, atomic_map);
- 
--	inode_set_inited_xattr(inode);
--	return 0;
-+	set_bit(EROFS_V_EA_INITED_BIT, &vi->flags);
-+
-+out_unlock:
-+	clear_and_wake_up_bit(EROFS_V_BL_XATTR_BIT, &vi->flags);
-+	return ret;
- }
- 
- /*
-@@ -422,7 +461,6 @@ static int erofs_xattr_generic_get(const struct xattr_handler *handler,
- 		struct dentry *unused, struct inode *inode,
- 		const char *name, void *buffer, size_t size)
- {
--	struct erofs_vnode *const vi = EROFS_V(inode);
- 	struct erofs_sb_info *const sbi = EROFS_I_SB(inode);
- 
- 	switch (handler->flags) {
-@@ -440,9 +478,6 @@ static int erofs_xattr_generic_get(const struct xattr_handler *handler,
- 		return -EINVAL;
- 	}
- 
--	if (!vi->xattr_isize)
--		return -ENOATTR;
--
- 	return erofs_getxattr(inode, handler->flags, name, buffer, size);
- }
- 
-diff --git a/drivers/staging/iio/addac/adt7316.c b/drivers/staging/iio/addac/adt7316.c
-index dc93e85808e0..7839d869d25d 100644
---- a/drivers/staging/iio/addac/adt7316.c
-+++ b/drivers/staging/iio/addac/adt7316.c
-@@ -651,17 +651,10 @@ static ssize_t adt7316_store_da_high_resolution(struct device *dev,
- 	u8 config3;
- 	int ret;
- 
--	chip->dac_bits = 8;
--
--	if (buf[0] == '1') {
-+	if (buf[0] == '1')
- 		config3 = chip->config3 | ADT7316_DA_HIGH_RESOLUTION;
--		if (chip->id == ID_ADT7316 || chip->id == ID_ADT7516)
--			chip->dac_bits = 12;
--		else if (chip->id == ID_ADT7317 || chip->id == ID_ADT7517)
--			chip->dac_bits = 10;
--	} else {
-+	else
- 		config3 = chip->config3 & (~ADT7316_DA_HIGH_RESOLUTION);
--	}
- 
- 	ret = chip->bus.write(chip->bus.client, ADT7316_CONFIG3, config3);
- 	if (ret)
-@@ -2123,6 +2116,13 @@ int adt7316_probe(struct device *dev, struct adt7316_bus *bus,
- 	else
- 		return -ENODEV;
- 
-+	if (chip->id == ID_ADT7316 || chip->id == ID_ADT7516)
-+		chip->dac_bits = 12;
-+	else if (chip->id == ID_ADT7317 || chip->id == ID_ADT7517)
-+		chip->dac_bits = 10;
-+	else
-+		chip->dac_bits = 8;
-+
- 	chip->ldac_pin = devm_gpiod_get_optional(dev, "adi,ldac", GPIOD_OUT_LOW);
- 	if (IS_ERR(chip->ldac_pin)) {
- 		ret = PTR_ERR(chip->ldac_pin);
-diff --git a/drivers/staging/media/imx/imx-ic-prpencvf.c b/drivers/staging/media/imx/imx-ic-prpencvf.c
-index 28f41caba05d..fb442499f806 100644
---- a/drivers/staging/media/imx/imx-ic-prpencvf.c
-+++ b/drivers/staging/media/imx/imx-ic-prpencvf.c
-@@ -680,12 +680,23 @@ static int prp_start(struct prp_priv *priv)
- 		goto out_free_nfb4eof_irq;
- 	}
- 
-+	/* start upstream */
-+	ret = v4l2_subdev_call(priv->src_sd, video, s_stream, 1);
-+	ret = (ret && ret != -ENOIOCTLCMD) ? ret : 0;
-+	if (ret) {
-+		v4l2_err(&ic_priv->sd,
-+			 "upstream stream on failed: %d\n", ret);
-+		goto out_free_eof_irq;
-+	}
-+
- 	/* start the EOF timeout timer */
- 	mod_timer(&priv->eof_timeout_timer,
- 		  jiffies + msecs_to_jiffies(IMX_MEDIA_EOF_TIMEOUT));
- 
- 	return 0;
- 
-+out_free_eof_irq:
-+	devm_free_irq(ic_priv->dev, priv->eof_irq, priv);
- out_free_nfb4eof_irq:
- 	devm_free_irq(ic_priv->dev, priv->nfb4eof_irq, priv);
- out_unsetup:
-@@ -717,6 +728,12 @@ static void prp_stop(struct prp_priv *priv)
- 	if (ret == 0)
- 		v4l2_warn(&ic_priv->sd, "wait last EOF timeout\n");
- 
-+	/* stop upstream */
-+	ret = v4l2_subdev_call(priv->src_sd, video, s_stream, 0);
-+	if (ret && ret != -ENOIOCTLCMD)
-+		v4l2_warn(&ic_priv->sd,
-+			  "upstream stream off failed: %d\n", ret);
-+
- 	devm_free_irq(ic_priv->dev, priv->eof_irq, priv);
- 	devm_free_irq(ic_priv->dev, priv->nfb4eof_irq, priv);
- 
-@@ -1148,15 +1165,6 @@ static int prp_s_stream(struct v4l2_subdev *sd, int enable)
- 	if (ret)
- 		goto out;
- 
--	/* start/stop upstream */
--	ret = v4l2_subdev_call(priv->src_sd, video, s_stream, enable);
--	ret = (ret && ret != -ENOIOCTLCMD) ? ret : 0;
--	if (ret) {
--		if (enable)
--			prp_stop(priv);
--		goto out;
--	}
--
- update_count:
- 	priv->stream_count += enable ? 1 : -1;
- 	if (priv->stream_count < 0)
-diff --git a/drivers/staging/media/imx/imx-media-csi.c b/drivers/staging/media/imx/imx-media-csi.c
-index 4223f8d418ae..be1e9e52b2a0 100644
---- a/drivers/staging/media/imx/imx-media-csi.c
-+++ b/drivers/staging/media/imx/imx-media-csi.c
-@@ -629,7 +629,7 @@ out_put_ipu:
- 	return ret;
- }
- 
--static void csi_idmac_stop(struct csi_priv *priv)
-+static void csi_idmac_wait_last_eof(struct csi_priv *priv)
- {
- 	unsigned long flags;
- 	int ret;
-@@ -646,7 +646,10 @@ static void csi_idmac_stop(struct csi_priv *priv)
- 		&priv->last_eof_comp, msecs_to_jiffies(IMX_MEDIA_EOF_TIMEOUT));
- 	if (ret == 0)
- 		v4l2_warn(&priv->sd, "wait last EOF timeout\n");
-+}
- 
-+static void csi_idmac_stop(struct csi_priv *priv)
-+{
- 	devm_free_irq(priv->dev, priv->eof_irq, priv);
- 	devm_free_irq(priv->dev, priv->nfb4eof_irq, priv);
- 
-@@ -722,10 +725,16 @@ static int csi_start(struct csi_priv *priv)
- 
- 	output_fi = &priv->frame_interval[priv->active_output_pad];
- 
-+	/* start upstream */
-+	ret = v4l2_subdev_call(priv->src_sd, video, s_stream, 1);
-+	ret = (ret && ret != -ENOIOCTLCMD) ? ret : 0;
-+	if (ret)
-+		return ret;
-+
- 	if (priv->dest == IPU_CSI_DEST_IDMAC) {
- 		ret = csi_idmac_start(priv);
- 		if (ret)
--			return ret;
-+			goto stop_upstream;
- 	}
- 
- 	ret = csi_setup(priv);
-@@ -753,11 +762,26 @@ fim_off:
- idmac_stop:
- 	if (priv->dest == IPU_CSI_DEST_IDMAC)
- 		csi_idmac_stop(priv);
-+stop_upstream:
-+	v4l2_subdev_call(priv->src_sd, video, s_stream, 0);
- 	return ret;
- }
- 
- static void csi_stop(struct csi_priv *priv)
- {
-+	if (priv->dest == IPU_CSI_DEST_IDMAC)
-+		csi_idmac_wait_last_eof(priv);
-+
-+	/*
-+	 * Disable the CSI asap, after syncing with the last EOF.
-+	 * Doing so after the IDMA channel is disabled has shown to
-+	 * create hard system-wide hangs.
-+	 */
-+	ipu_csi_disable(priv->csi);
-+
-+	/* stop upstream */
-+	v4l2_subdev_call(priv->src_sd, video, s_stream, 0);
-+
- 	if (priv->dest == IPU_CSI_DEST_IDMAC) {
- 		csi_idmac_stop(priv);
- 
-@@ -765,8 +789,6 @@ static void csi_stop(struct csi_priv *priv)
- 		if (priv->fim)
- 			imx_media_fim_set_stream(priv->fim, NULL, false);
- 	}
--
--	ipu_csi_disable(priv->csi);
- }
- 
- static const struct csi_skip_desc csi_skip[12] = {
-@@ -927,23 +949,13 @@ static int csi_s_stream(struct v4l2_subdev *sd, int enable)
- 		goto update_count;
- 
- 	if (enable) {
--		/* upstream must be started first, before starting CSI */
--		ret = v4l2_subdev_call(priv->src_sd, video, s_stream, 1);
--		ret = (ret && ret != -ENOIOCTLCMD) ? ret : 0;
--		if (ret)
--			goto out;
--
- 		dev_dbg(priv->dev, "stream ON\n");
- 		ret = csi_start(priv);
--		if (ret) {
--			v4l2_subdev_call(priv->src_sd, video, s_stream, 0);
-+		if (ret)
- 			goto out;
--		}
- 	} else {
- 		dev_dbg(priv->dev, "stream OFF\n");
--		/* CSI must be stopped first, then stop upstream */
- 		csi_stop(priv);
--		v4l2_subdev_call(priv->src_sd, video, s_stream, 0);
- 	}
- 
- update_count:
-@@ -1787,7 +1799,7 @@ static int imx_csi_parse_endpoint(struct device *dev,
- 				  struct v4l2_fwnode_endpoint *vep,
- 				  struct v4l2_async_subdev *asd)
- {
--	return fwnode_device_is_available(asd->match.fwnode) ? 0 : -EINVAL;
-+	return fwnode_device_is_available(asd->match.fwnode) ? 0 : -ENOTCONN;
- }
- 
- static int imx_csi_async_register(struct csi_priv *priv)
-diff --git a/drivers/staging/media/rockchip/vpu/rk3288_vpu_hw_jpeg_enc.c b/drivers/staging/media/rockchip/vpu/rk3288_vpu_hw_jpeg_enc.c
-index 5282236d1bb1..06daea66fb49 100644
---- a/drivers/staging/media/rockchip/vpu/rk3288_vpu_hw_jpeg_enc.c
-+++ b/drivers/staging/media/rockchip/vpu/rk3288_vpu_hw_jpeg_enc.c
-@@ -80,7 +80,7 @@ rk3288_vpu_jpeg_enc_set_qtable(struct rockchip_vpu_dev *vpu,
- void rk3288_vpu_jpeg_enc_run(struct rockchip_vpu_ctx *ctx)
- {
- 	struct rockchip_vpu_dev *vpu = ctx->dev;
--	struct vb2_buffer *src_buf, *dst_buf;
-+	struct vb2_v4l2_buffer *src_buf, *dst_buf;
- 	struct rockchip_vpu_jpeg_ctx jpeg_ctx;
- 	u32 reg;
- 
-@@ -88,7 +88,7 @@ void rk3288_vpu_jpeg_enc_run(struct rockchip_vpu_ctx *ctx)
- 	dst_buf = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
- 
- 	memset(&jpeg_ctx, 0, sizeof(jpeg_ctx));
--	jpeg_ctx.buffer = vb2_plane_vaddr(dst_buf, 0);
-+	jpeg_ctx.buffer = vb2_plane_vaddr(&dst_buf->vb2_buf, 0);
- 	jpeg_ctx.width = ctx->dst_fmt.width;
- 	jpeg_ctx.height = ctx->dst_fmt.height;
- 	jpeg_ctx.quality = ctx->jpeg_quality;
-@@ -99,7 +99,7 @@ void rk3288_vpu_jpeg_enc_run(struct rockchip_vpu_ctx *ctx)
- 			   VEPU_REG_ENC_CTRL);
- 
- 	rk3288_vpu_set_src_img_ctrl(vpu, ctx);
--	rk3288_vpu_jpeg_enc_set_buffers(vpu, ctx, src_buf);
-+	rk3288_vpu_jpeg_enc_set_buffers(vpu, ctx, &src_buf->vb2_buf);
- 	rk3288_vpu_jpeg_enc_set_qtable(vpu,
- 				       rockchip_vpu_jpeg_get_qtable(&jpeg_ctx, 0),
- 				       rockchip_vpu_jpeg_get_qtable(&jpeg_ctx, 1));
-diff --git a/drivers/staging/media/rockchip/vpu/rk3399_vpu_hw_jpeg_enc.c b/drivers/staging/media/rockchip/vpu/rk3399_vpu_hw_jpeg_enc.c
-index dbc86d95fe3b..3d438797692e 100644
---- a/drivers/staging/media/rockchip/vpu/rk3399_vpu_hw_jpeg_enc.c
-+++ b/drivers/staging/media/rockchip/vpu/rk3399_vpu_hw_jpeg_enc.c
-@@ -111,7 +111,7 @@ rk3399_vpu_jpeg_enc_set_qtable(struct rockchip_vpu_dev *vpu,
- void rk3399_vpu_jpeg_enc_run(struct rockchip_vpu_ctx *ctx)
- {
- 	struct rockchip_vpu_dev *vpu = ctx->dev;
--	struct vb2_buffer *src_buf, *dst_buf;
-+	struct vb2_v4l2_buffer *src_buf, *dst_buf;
- 	struct rockchip_vpu_jpeg_ctx jpeg_ctx;
- 	u32 reg;
- 
-@@ -119,7 +119,7 @@ void rk3399_vpu_jpeg_enc_run(struct rockchip_vpu_ctx *ctx)
- 	dst_buf = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
- 
- 	memset(&jpeg_ctx, 0, sizeof(jpeg_ctx));
--	jpeg_ctx.buffer = vb2_plane_vaddr(dst_buf, 0);
-+	jpeg_ctx.buffer = vb2_plane_vaddr(&dst_buf->vb2_buf, 0);
- 	jpeg_ctx.width = ctx->dst_fmt.width;
- 	jpeg_ctx.height = ctx->dst_fmt.height;
- 	jpeg_ctx.quality = ctx->jpeg_quality;
-@@ -130,7 +130,7 @@ void rk3399_vpu_jpeg_enc_run(struct rockchip_vpu_ctx *ctx)
- 			   VEPU_REG_ENCODE_START);
- 
- 	rk3399_vpu_set_src_img_ctrl(vpu, ctx);
--	rk3399_vpu_jpeg_enc_set_buffers(vpu, ctx, src_buf);
-+	rk3399_vpu_jpeg_enc_set_buffers(vpu, ctx, &src_buf->vb2_buf);
- 	rk3399_vpu_jpeg_enc_set_qtable(vpu,
- 				       rockchip_vpu_jpeg_get_qtable(&jpeg_ctx, 0),
- 				       rockchip_vpu_jpeg_get_qtable(&jpeg_ctx, 1));
-diff --git a/drivers/staging/mt7621-spi/spi-mt7621.c b/drivers/staging/mt7621-spi/spi-mt7621.c
-index 513b6e79b985..e1f50efd0922 100644
---- a/drivers/staging/mt7621-spi/spi-mt7621.c
-+++ b/drivers/staging/mt7621-spi/spi-mt7621.c
-@@ -330,6 +330,7 @@ static int mt7621_spi_probe(struct platform_device *pdev)
- 	int status = 0;
- 	struct clk *clk;
- 	struct mt7621_spi_ops *ops;
-+	int ret;
- 
- 	match = of_match_device(mt7621_spi_match, &pdev->dev);
- 	if (!match)
-@@ -377,7 +378,11 @@ static int mt7621_spi_probe(struct platform_device *pdev)
- 	rs->pending_write = 0;
- 	dev_info(&pdev->dev, "sys_freq: %u\n", rs->sys_freq);
- 
--	device_reset(&pdev->dev);
-+	ret = device_reset(&pdev->dev);
-+	if (ret) {
-+		dev_err(&pdev->dev, "SPI reset failed!\n");
-+		return ret;
-+	}
- 
- 	mt7621_spi_reset(rs);
- 
-diff --git a/drivers/staging/olpc_dcon/olpc_dcon_xo_1.c b/drivers/staging/olpc_dcon/olpc_dcon_xo_1.c
-index 80b8d4153414..a54286498a47 100644
---- a/drivers/staging/olpc_dcon/olpc_dcon_xo_1.c
-+++ b/drivers/staging/olpc_dcon/olpc_dcon_xo_1.c
-@@ -45,7 +45,7 @@ static int dcon_init_xo_1(struct dcon_priv *dcon)
- {
- 	unsigned char lob;
- 	int ret, i;
--	struct dcon_gpio *pin = &gpios_asis[0];
-+	const struct dcon_gpio *pin = &gpios_asis[0];
- 
- 	for (i = 0; i < ARRAY_SIZE(gpios_asis); i++) {
- 		gpios[i] = devm_gpiod_get(&dcon->client->dev, pin[i].name,
-diff --git a/drivers/staging/speakup/speakup_soft.c b/drivers/staging/speakup/speakup_soft.c
-index 947c79532e10..d5383974d40e 100644
---- a/drivers/staging/speakup/speakup_soft.c
-+++ b/drivers/staging/speakup/speakup_soft.c
-@@ -208,12 +208,15 @@ static ssize_t softsynthx_read(struct file *fp, char __user *buf, size_t count,
- 		return -EINVAL;
- 
- 	spin_lock_irqsave(&speakup_info.spinlock, flags);
-+	synth_soft.alive = 1;
- 	while (1) {
- 		prepare_to_wait(&speakup_event, &wait, TASK_INTERRUPTIBLE);
--		if (!unicode)
--			synth_buffer_skip_nonlatin1();
--		if (!synth_buffer_empty() || speakup_info.flushing)
--			break;
-+		if (synth_current() == &synth_soft) {
-+			if (!unicode)
-+				synth_buffer_skip_nonlatin1();
-+			if (!synth_buffer_empty() || speakup_info.flushing)
-+				break;
-+		}
- 		spin_unlock_irqrestore(&speakup_info.spinlock, flags);
- 		if (fp->f_flags & O_NONBLOCK) {
- 			finish_wait(&speakup_event, &wait);
-@@ -233,6 +236,8 @@ static ssize_t softsynthx_read(struct file *fp, char __user *buf, size_t count,
- 
- 	/* Keep 3 bytes available for a 16bit UTF-8-encoded character */
- 	while (chars_sent <= count - bytes_per_ch) {
-+		if (synth_current() != &synth_soft)
-+			break;
- 		if (speakup_info.flushing) {
- 			speakup_info.flushing = 0;
- 			ch = '\x18';
-@@ -329,7 +334,8 @@ static __poll_t softsynth_poll(struct file *fp, struct poll_table_struct *wait)
- 	poll_wait(fp, &speakup_event, wait);
- 
- 	spin_lock_irqsave(&speakup_info.spinlock, flags);
--	if (!synth_buffer_empty() || speakup_info.flushing)
-+	if (synth_current() == &synth_soft &&
-+	    (!synth_buffer_empty() || speakup_info.flushing))
- 		ret = EPOLLIN | EPOLLRDNORM;
- 	spin_unlock_irqrestore(&speakup_info.spinlock, flags);
- 	return ret;
-diff --git a/drivers/staging/speakup/spk_priv.h b/drivers/staging/speakup/spk_priv.h
-index c8e688878fc7..ac6a74883af4 100644
---- a/drivers/staging/speakup/spk_priv.h
-+++ b/drivers/staging/speakup/spk_priv.h
-@@ -74,6 +74,7 @@ int synth_request_region(unsigned long start, unsigned long n);
- int synth_release_region(unsigned long start, unsigned long n);
- int synth_add(struct spk_synth *in_synth);
- void synth_remove(struct spk_synth *in_synth);
-+struct spk_synth *synth_current(void);
- 
- extern struct speakup_info_t speakup_info;
- 
-diff --git a/drivers/staging/speakup/synth.c b/drivers/staging/speakup/synth.c
-index 25f259ee4ffc..3568bfb89912 100644
---- a/drivers/staging/speakup/synth.c
-+++ b/drivers/staging/speakup/synth.c
-@@ -481,4 +481,10 @@ void synth_remove(struct spk_synth *in_synth)
- }
- EXPORT_SYMBOL_GPL(synth_remove);
- 
-+struct spk_synth *synth_current(void)
-+{
-+	return synth;
-+}
-+EXPORT_SYMBOL_GPL(synth_current);
-+
- short spk_punc_masks[] = { 0, SOME, MOST, PUNC, PUNC | B_SYM };
-diff --git a/drivers/staging/vt6655/device_main.c b/drivers/staging/vt6655/device_main.c
-index c9097e7367d8..2e28fbcdfe8e 100644
---- a/drivers/staging/vt6655/device_main.c
-+++ b/drivers/staging/vt6655/device_main.c
-@@ -1033,8 +1033,6 @@ static void vnt_interrupt_process(struct vnt_private *priv)
- 		return;
- 	}
- 
--	MACvIntDisable(priv->PortOffset);
--
- 	spin_lock_irqsave(&priv->lock, flags);
- 
- 	/* Read low level stats */
-@@ -1122,8 +1120,6 @@ static void vnt_interrupt_process(struct vnt_private *priv)
- 	}
- 
- 	spin_unlock_irqrestore(&priv->lock, flags);
--
--	MACvIntEnable(priv->PortOffset, IMR_MASK_VALUE);
- }
- 
- static void vnt_interrupt_work(struct work_struct *work)
-@@ -1133,14 +1129,17 @@ static void vnt_interrupt_work(struct work_struct *work)
- 
- 	if (priv->vif)
- 		vnt_interrupt_process(priv);
-+
-+	MACvIntEnable(priv->PortOffset, IMR_MASK_VALUE);
- }
- 
- static irqreturn_t vnt_interrupt(int irq,  void *arg)
- {
- 	struct vnt_private *priv = arg;
- 
--	if (priv->vif)
--		schedule_work(&priv->interrupt_work);
-+	schedule_work(&priv->interrupt_work);
-+
-+	MACvIntDisable(priv->PortOffset);
- 
- 	return IRQ_HANDLED;
- }
-diff --git a/drivers/staging/wilc1000/linux_wlan.c b/drivers/staging/wilc1000/linux_wlan.c
-index 721689048648..5e5149c9a92d 100644
---- a/drivers/staging/wilc1000/linux_wlan.c
-+++ b/drivers/staging/wilc1000/linux_wlan.c
-@@ -1086,8 +1086,8 @@ int wilc_netdev_init(struct wilc **wilc, struct device *dev, int io_type,
- 		vif->wilc = *wilc;
- 		vif->ndev = ndev;
- 		wl->vif[i] = vif;
--		wl->vif_num = i;
--		vif->idx = wl->vif_num;
-+		wl->vif_num = i + 1;
-+		vif->idx = i;
- 
- 		ndev->netdev_ops = &wilc_netdev_ops;
- 
-diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
-index bd15a564fe24..3ad2659630e8 100644
---- a/drivers/target/iscsi/iscsi_target.c
-+++ b/drivers/target/iscsi/iscsi_target.c
-@@ -4040,9 +4040,9 @@ static void iscsit_release_commands_from_conn(struct iscsi_conn *conn)
- 		struct se_cmd *se_cmd = &cmd->se_cmd;
- 
- 		if (se_cmd->se_tfo != NULL) {
--			spin_lock(&se_cmd->t_state_lock);
-+			spin_lock_irq(&se_cmd->t_state_lock);
- 			se_cmd->transport_state |= CMD_T_FABRIC_STOP;
--			spin_unlock(&se_cmd->t_state_lock);
-+			spin_unlock_irq(&se_cmd->t_state_lock);
- 		}
- 	}
- 	spin_unlock_bh(&conn->cmd_lock);
-diff --git a/drivers/tty/Kconfig b/drivers/tty/Kconfig
-index 0840d27381ea..e0a04bfc873e 100644
---- a/drivers/tty/Kconfig
-+++ b/drivers/tty/Kconfig
-@@ -441,4 +441,28 @@ config VCC
- 	depends on SUN_LDOMS
- 	help
- 	  Support for Sun logical domain consoles.
-+
-+config LDISC_AUTOLOAD
-+	bool "Automatically load TTY Line Disciplines"
-+	default y
-+	help
-+	  Historically the kernel has always automatically loaded any
-+	  line discipline that is in a kernel module when a user asks
-+	  for it to be loaded with the TIOCSETD ioctl, or through other
-+	  means.  This is not always the best thing to do on systems
-+	  where you know you will not be using some of the more
-+	  "ancient" line disciplines, so prevent the kernel from doing
-+	  this unless the request is coming from a process with the
-+	  CAP_SYS_MODULE permissions.
-+
-+	  Say 'Y' here if you trust your userspace users to do the right
-+	  thing, or if you have only provided the line disciplines that
-+	  you know you will be using, or if you wish to continue to use
-+	  the traditional method of on-demand loading of these modules
-+	  by any user.
-+
-+	  This functionality can be changed at runtime with the
-+	  dev.tty.ldisc_autoload sysctl, this configuration option will
-+	  only set the default value of this functionality.
-+
- endif # TTY
-diff --git a/drivers/tty/serial/8250/8250_of.c b/drivers/tty/serial/8250/8250_of.c
-index a1a85805d010..2488de1c4bc4 100644
---- a/drivers/tty/serial/8250/8250_of.c
-+++ b/drivers/tty/serial/8250/8250_of.c
-@@ -130,6 +130,10 @@ static int of_platform_serial_setup(struct platform_device *ofdev,
- 		port->flags |= UPF_IOREMAP;
- 	}
- 
-+	/* Compatibility with the deprecated pxa driver and 8250_pxa drivers. */
-+	if (of_device_is_compatible(np, "mrvl,mmp-uart"))
-+		port->regshift = 2;
-+
- 	/* Check for registers offset within the devices address range */
- 	if (of_property_read_u32(np, "reg-shift", &prop) == 0)
- 		port->regshift = prop;
-diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
-index 48bd694a5fa1..bbe5cba21522 100644
---- a/drivers/tty/serial/8250/8250_pci.c
-+++ b/drivers/tty/serial/8250/8250_pci.c
-@@ -2027,6 +2027,111 @@ static struct pci_serial_quirk pci_serial_quirks[] __refdata = {
- 		.setup		= pci_default_setup,
- 		.exit		= pci_plx9050_exit,
- 	},
-+	{
-+		.vendor     = PCI_VENDOR_ID_ACCESIO,
-+		.device     = PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SDB,
-+		.subvendor  = PCI_ANY_ID,
-+		.subdevice  = PCI_ANY_ID,
-+		.setup      = pci_pericom_setup,
-+	},
-+	{
-+		.vendor     = PCI_VENDOR_ID_ACCESIO,
-+		.device     = PCI_DEVICE_ID_ACCESIO_MPCIE_COM_4S,
-+		.subvendor  = PCI_ANY_ID,
-+		.subdevice  = PCI_ANY_ID,
-+		.setup      = pci_pericom_setup,
-+	},
-+	{
-+		.vendor     = PCI_VENDOR_ID_ACCESIO,
-+		.device     = PCI_DEVICE_ID_ACCESIO_PCIE_COM232_4DB,
-+		.subvendor  = PCI_ANY_ID,
-+		.subdevice  = PCI_ANY_ID,
-+		.setup      = pci_pericom_setup,
-+	},
-+	{
-+		.vendor     = PCI_VENDOR_ID_ACCESIO,
-+		.device     = PCI_DEVICE_ID_ACCESIO_MPCIE_COM232_4,
-+		.subvendor  = PCI_ANY_ID,
-+		.subdevice  = PCI_ANY_ID,
-+		.setup      = pci_pericom_setup,
-+	},
-+	{
-+		.vendor     = PCI_VENDOR_ID_ACCESIO,
-+		.device     = PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SMDB,
-+		.subvendor  = PCI_ANY_ID,
-+		.subdevice  = PCI_ANY_ID,
-+		.setup      = pci_pericom_setup,
-+	},
-+	{
-+		.vendor     = PCI_VENDOR_ID_ACCESIO,
-+		.device     = PCI_DEVICE_ID_ACCESIO_MPCIE_COM_4SM,
-+		.subvendor  = PCI_ANY_ID,
-+		.subdevice  = PCI_ANY_ID,
-+		.setup      = pci_pericom_setup,
-+	},
-+	{
-+		.vendor     = PCI_VENDOR_ID_ACCESIO,
-+		.device     = PCI_DEVICE_ID_ACCESIO_MPCIE_ICM422_4,
-+		.subvendor  = PCI_ANY_ID,
-+		.subdevice  = PCI_ANY_ID,
-+		.setup      = pci_pericom_setup,
-+	},
-+	{
-+		.vendor     = PCI_VENDOR_ID_ACCESIO,
-+		.device     = PCI_DEVICE_ID_ACCESIO_MPCIE_ICM485_4,
-+		.subvendor  = PCI_ANY_ID,
-+		.subdevice  = PCI_ANY_ID,
-+		.setup      = pci_pericom_setup,
-+	},
-+	{
-+		.vendor     = PCI_DEVICE_ID_ACCESIO_PCIE_ICM_4S,
-+		.device     = PCI_DEVICE_ID_ACCESIO_PCIE_ICM232_4,
-+		.subvendor  = PCI_ANY_ID,
-+		.subdevice  = PCI_ANY_ID,
-+		.setup      = pci_pericom_setup,
-+	},
-+	{
-+		.vendor     = PCI_VENDOR_ID_ACCESIO,
-+		.device     = PCI_DEVICE_ID_ACCESIO_MPCIE_ICM232_4,
-+		.subvendor  = PCI_ANY_ID,
-+		.subdevice  = PCI_ANY_ID,
-+		.setup      = pci_pericom_setup,
-+	},
-+	{
-+		.vendor     = PCI_VENDOR_ID_ACCESIO,
-+		.device     = PCI_DEVICE_ID_ACCESIO_PCIE_COM422_4,
-+		.subvendor  = PCI_ANY_ID,
-+		.subdevice  = PCI_ANY_ID,
-+		.setup      = pci_pericom_setup,
-+	},
-+	{
-+		.vendor     = PCI_VENDOR_ID_ACCESIO,
-+		.device     = PCI_DEVICE_ID_ACCESIO_PCIE_COM485_4,
-+		.subvendor  = PCI_ANY_ID,
-+		.subdevice  = PCI_ANY_ID,
-+		.setup      = pci_pericom_setup,
-+	},
-+	{
-+		.vendor     = PCI_VENDOR_ID_ACCESIO,
-+		.device     = PCI_DEVICE_ID_ACCESIO_PCIE_COM232_4,
-+		.subvendor  = PCI_ANY_ID,
-+		.subdevice  = PCI_ANY_ID,
-+		.setup      = pci_pericom_setup,
-+	},
-+	{
-+		.vendor     = PCI_VENDOR_ID_ACCESIO,
-+		.device     = PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SM,
-+		.subvendor  = PCI_ANY_ID,
-+		.subdevice  = PCI_ANY_ID,
-+		.setup      = pci_pericom_setup,
-+	},
-+	{
-+		.vendor     = PCI_VENDOR_ID_ACCESIO,
-+		.device     = PCI_DEVICE_ID_ACCESIO_PCIE_ICM_4SM,
-+		.subvendor  = PCI_ANY_ID,
-+		.subdevice  = PCI_ANY_ID,
-+		.setup      = pci_pericom_setup,
-+	},
- 	/*
- 	 * SBS Technologies, Inc., PMC-OCTALPRO 232
- 	 */
-@@ -4575,10 +4680,10 @@ static const struct pci_device_id serial_pci_tbl[] = {
- 	 */
- 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_2SDB,
- 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
--		pbn_pericom_PI7C9X7954 },
-+		pbn_pericom_PI7C9X7952 },
- 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_COM_2S,
- 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
--		pbn_pericom_PI7C9X7954 },
-+		pbn_pericom_PI7C9X7952 },
- 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SDB,
- 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
- 		pbn_pericom_PI7C9X7954 },
-@@ -4587,10 +4692,10 @@ static const struct pci_device_id serial_pci_tbl[] = {
- 		pbn_pericom_PI7C9X7954 },
- 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM232_2DB,
- 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
--		pbn_pericom_PI7C9X7954 },
-+		pbn_pericom_PI7C9X7952 },
- 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_COM232_2,
- 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
--		pbn_pericom_PI7C9X7954 },
-+		pbn_pericom_PI7C9X7952 },
- 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM232_4DB,
- 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
- 		pbn_pericom_PI7C9X7954 },
-@@ -4599,10 +4704,10 @@ static const struct pci_device_id serial_pci_tbl[] = {
- 		pbn_pericom_PI7C9X7954 },
- 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_2SMDB,
- 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
--		pbn_pericom_PI7C9X7954 },
-+		pbn_pericom_PI7C9X7952 },
- 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_COM_2SM,
- 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
--		pbn_pericom_PI7C9X7954 },
-+		pbn_pericom_PI7C9X7952 },
- 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SMDB,
- 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
- 		pbn_pericom_PI7C9X7954 },
-@@ -4611,13 +4716,13 @@ static const struct pci_device_id serial_pci_tbl[] = {
- 		pbn_pericom_PI7C9X7954 },
- 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_ICM485_1,
- 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
--		pbn_pericom_PI7C9X7954 },
-+		pbn_pericom_PI7C9X7951 },
- 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_ICM422_2,
- 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
--		pbn_pericom_PI7C9X7954 },
-+		pbn_pericom_PI7C9X7952 },
- 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_ICM485_2,
- 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
--		pbn_pericom_PI7C9X7954 },
-+		pbn_pericom_PI7C9X7952 },
- 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_ICM422_4,
- 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
- 		pbn_pericom_PI7C9X7954 },
-@@ -4626,16 +4731,16 @@ static const struct pci_device_id serial_pci_tbl[] = {
- 		pbn_pericom_PI7C9X7954 },
- 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM_2S,
- 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
--		pbn_pericom_PI7C9X7954 },
-+		pbn_pericom_PI7C9X7952 },
- 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM_4S,
- 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
- 		pbn_pericom_PI7C9X7954 },
- 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM232_2,
- 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
--		pbn_pericom_PI7C9X7954 },
-+		pbn_pericom_PI7C9X7952 },
- 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_ICM232_2,
- 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
--		pbn_pericom_PI7C9X7954 },
-+		pbn_pericom_PI7C9X7952 },
- 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM232_4,
- 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
- 		pbn_pericom_PI7C9X7954 },
-@@ -4644,13 +4749,13 @@ static const struct pci_device_id serial_pci_tbl[] = {
- 		pbn_pericom_PI7C9X7954 },
- 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM_2SM,
- 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
--		pbn_pericom_PI7C9X7954 },
-+		pbn_pericom_PI7C9X7952 },
- 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM422_4,
- 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
--		pbn_pericom_PI7C9X7958 },
-+		pbn_pericom_PI7C9X7954 },
- 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM485_4,
- 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
--		pbn_pericom_PI7C9X7958 },
-+		pbn_pericom_PI7C9X7954 },
- 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM422_8,
- 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
- 		pbn_pericom_PI7C9X7958 },
-@@ -4659,19 +4764,19 @@ static const struct pci_device_id serial_pci_tbl[] = {
- 		pbn_pericom_PI7C9X7958 },
- 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM232_4,
- 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
--		pbn_pericom_PI7C9X7958 },
-+		pbn_pericom_PI7C9X7954 },
- 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM232_8,
- 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
- 		pbn_pericom_PI7C9X7958 },
- 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SM,
- 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
--		pbn_pericom_PI7C9X7958 },
-+		pbn_pericom_PI7C9X7954 },
- 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_8SM,
- 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
- 		pbn_pericom_PI7C9X7958 },
- 	{	PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM_4SM,
- 		PCI_ANY_ID, PCI_ANY_ID, 0, 0,
--		pbn_pericom_PI7C9X7958 },
-+		pbn_pericom_PI7C9X7954 },
- 	/*
- 	 * Topic TP560 Data/Fax/Voice 56k modem (reported by Evan Clarke)
- 	 */
-diff --git a/drivers/tty/serial/8250/8250_pxa.c b/drivers/tty/serial/8250/8250_pxa.c
-index b9bcbe20a2be..c47188860e32 100644
---- a/drivers/tty/serial/8250/8250_pxa.c
-+++ b/drivers/tty/serial/8250/8250_pxa.c
-@@ -113,6 +113,10 @@ static int serial_pxa_probe(struct platform_device *pdev)
- 	if (ret)
- 		return ret;
- 
-+	ret = of_alias_get_id(pdev->dev.of_node, "serial");
-+	if (ret >= 0)
-+		uart.port.line = ret;
-+
- 	uart.port.type = PORT_XSCALE;
- 	uart.port.iotype = UPIO_MEM32;
- 	uart.port.mapbase = mmres->start;
-diff --git a/drivers/tty/serial/atmel_serial.c b/drivers/tty/serial/atmel_serial.c
-index 05147fe24343..0b4f36905321 100644
---- a/drivers/tty/serial/atmel_serial.c
-+++ b/drivers/tty/serial/atmel_serial.c
-@@ -166,6 +166,8 @@ struct atmel_uart_port {
- 	unsigned int		pending_status;
- 	spinlock_t		lock_suspended;
- 
-+	bool			hd_start_rx;	/* can start RX during half-duplex operation */
-+
- 	/* ISO7816 */
- 	unsigned int		fidi_min;
- 	unsigned int		fidi_max;
-@@ -231,6 +233,13 @@ static inline void atmel_uart_write_char(struct uart_port *port, u8 value)
- 	__raw_writeb(value, port->membase + ATMEL_US_THR);
- }
- 
-+static inline int atmel_uart_is_half_duplex(struct uart_port *port)
-+{
-+	return ((port->rs485.flags & SER_RS485_ENABLED) &&
-+		!(port->rs485.flags & SER_RS485_RX_DURING_TX)) ||
-+		(port->iso7816.flags & SER_ISO7816_ENABLED);
-+}
-+
- #ifdef CONFIG_SERIAL_ATMEL_PDC
- static bool atmel_use_pdc_rx(struct uart_port *port)
- {
-@@ -608,10 +617,9 @@ static void atmel_stop_tx(struct uart_port *port)
- 	/* Disable interrupts */
- 	atmel_uart_writel(port, ATMEL_US_IDR, atmel_port->tx_done_mask);
- 
--	if (((port->rs485.flags & SER_RS485_ENABLED) &&
--	     !(port->rs485.flags & SER_RS485_RX_DURING_TX)) ||
--	    port->iso7816.flags & SER_ISO7816_ENABLED)
-+	if (atmel_uart_is_half_duplex(port))
- 		atmel_start_rx(port);
-+
- }
- 
- /*
-@@ -628,9 +636,7 @@ static void atmel_start_tx(struct uart_port *port)
- 		return;
- 
- 	if (atmel_use_pdc_tx(port) || atmel_use_dma_tx(port))
--		if (((port->rs485.flags & SER_RS485_ENABLED) &&
--		     !(port->rs485.flags & SER_RS485_RX_DURING_TX)) ||
--		    port->iso7816.flags & SER_ISO7816_ENABLED)
-+		if (atmel_uart_is_half_duplex(port))
- 			atmel_stop_rx(port);
- 
- 	if (atmel_use_pdc_tx(port))
-@@ -928,11 +934,14 @@ static void atmel_complete_tx_dma(void *arg)
- 	 */
- 	if (!uart_circ_empty(xmit))
- 		atmel_tasklet_schedule(atmel_port, &atmel_port->tasklet_tx);
--	else if (((port->rs485.flags & SER_RS485_ENABLED) &&
--		  !(port->rs485.flags & SER_RS485_RX_DURING_TX)) ||
--		 port->iso7816.flags & SER_ISO7816_ENABLED) {
--		/* DMA done, stop TX, start RX for RS485 */
--		atmel_start_rx(port);
-+	else if (atmel_uart_is_half_duplex(port)) {
-+		/*
-+		 * DMA done, re-enable TXEMPTY and signal that we can stop
-+		 * TX and start RX for RS485
-+		 */
-+		atmel_port->hd_start_rx = true;
-+		atmel_uart_writel(port, ATMEL_US_IER,
-+				  atmel_port->tx_done_mask);
- 	}
- 
- 	spin_unlock_irqrestore(&port->lock, flags);
-@@ -1288,6 +1297,10 @@ static int atmel_prepare_rx_dma(struct uart_port *port)
- 					 sg_dma_len(&atmel_port->sg_rx)/2,
- 					 DMA_DEV_TO_MEM,
- 					 DMA_PREP_INTERRUPT);
-+	if (!desc) {
-+		dev_err(port->dev, "Preparing DMA cyclic failed\n");
-+		goto chan_err;
-+	}
- 	desc->callback = atmel_complete_rx_dma;
- 	desc->callback_param = port;
- 	atmel_port->desc_rx = desc;
-@@ -1376,9 +1389,20 @@ atmel_handle_transmit(struct uart_port *port, unsigned int pending)
- 	struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
- 
- 	if (pending & atmel_port->tx_done_mask) {
--		/* Either PDC or interrupt transmission */
- 		atmel_uart_writel(port, ATMEL_US_IDR,
- 				  atmel_port->tx_done_mask);
-+
-+		/* Start RX if flag was set and FIFO is empty */
-+		if (atmel_port->hd_start_rx) {
-+			if (!(atmel_uart_readl(port, ATMEL_US_CSR)
-+					& ATMEL_US_TXEMPTY))
-+				dev_warn(port->dev, "Should start RX, but TX fifo is not empty\n");
-+
-+			atmel_port->hd_start_rx = false;
-+			atmel_start_rx(port);
-+			return;
-+		}
-+
- 		atmel_tasklet_schedule(atmel_port, &atmel_port->tasklet_tx);
- 	}
- }
-@@ -1508,9 +1532,7 @@ static void atmel_tx_pdc(struct uart_port *port)
- 		atmel_uart_writel(port, ATMEL_US_IER,
- 				  atmel_port->tx_done_mask);
- 	} else {
--		if (((port->rs485.flags & SER_RS485_ENABLED) &&
--		     !(port->rs485.flags & SER_RS485_RX_DURING_TX)) ||
--		    port->iso7816.flags & SER_ISO7816_ENABLED) {
-+		if (atmel_uart_is_half_duplex(port)) {
- 			/* DMA done, stop TX, start RX for RS485 */
- 			atmel_start_rx(port);
- 		}
-diff --git a/drivers/tty/serial/kgdboc.c b/drivers/tty/serial/kgdboc.c
-index 6fb312e7af71..bfe5e9e034ec 100644
---- a/drivers/tty/serial/kgdboc.c
-+++ b/drivers/tty/serial/kgdboc.c
-@@ -148,8 +148,10 @@ static int configure_kgdboc(void)
- 	char *cptr = config;
- 	struct console *cons;
- 
--	if (!strlen(config) || isspace(config[0]))
-+	if (!strlen(config) || isspace(config[0])) {
-+		err = 0;
- 		goto noconfig;
-+	}
- 
- 	kgdboc_io_ops.is_console = 0;
- 	kgdb_tty_driver = NULL;
-diff --git a/drivers/tty/serial/max310x.c b/drivers/tty/serial/max310x.c
-index 4f479841769a..0fdf3a760aa0 100644
---- a/drivers/tty/serial/max310x.c
-+++ b/drivers/tty/serial/max310x.c
-@@ -1416,6 +1416,8 @@ static int max310x_spi_probe(struct spi_device *spi)
- 	if (spi->dev.of_node) {
- 		const struct of_device_id *of_id =
- 			of_match_device(max310x_dt_ids, &spi->dev);
-+		if (!of_id)
-+			return -ENODEV;
- 
- 		devtype = (struct max310x_devtype *)of_id->data;
- 	} else {
-diff --git a/drivers/tty/serial/mvebu-uart.c b/drivers/tty/serial/mvebu-uart.c
-index 231f751d1ef4..7e7b1559fa36 100644
---- a/drivers/tty/serial/mvebu-uart.c
-+++ b/drivers/tty/serial/mvebu-uart.c
-@@ -810,6 +810,9 @@ static int mvebu_uart_probe(struct platform_device *pdev)
- 		return -EINVAL;
- 	}
- 
-+	if (!match)
-+		return -ENODEV;
-+
- 	/* Assume that all UART ports have a DT alias or none has */
- 	id = of_alias_get_id(pdev->dev.of_node, "serial");
- 	if (!pdev->dev.of_node || id < 0)
-diff --git a/drivers/tty/serial/mxs-auart.c b/drivers/tty/serial/mxs-auart.c
-index 27235a526cce..4c188f4079b3 100644
---- a/drivers/tty/serial/mxs-auart.c
-+++ b/drivers/tty/serial/mxs-auart.c
-@@ -1686,6 +1686,10 @@ static int mxs_auart_probe(struct platform_device *pdev)
- 
- 	s->port.mapbase = r->start;
- 	s->port.membase = ioremap(r->start, resource_size(r));
-+	if (!s->port.membase) {
-+		ret = -ENOMEM;
-+		goto out_disable_clks;
-+	}
- 	s->port.ops = &mxs_auart_ops;
- 	s->port.iotype = UPIO_MEM;
- 	s->port.fifosize = MXS_AUART_FIFO_SIZE;
-diff --git a/drivers/tty/serial/qcom_geni_serial.c b/drivers/tty/serial/qcom_geni_serial.c
-index 38016609c7fa..d30502c58106 100644
---- a/drivers/tty/serial/qcom_geni_serial.c
-+++ b/drivers/tty/serial/qcom_geni_serial.c
-@@ -1117,7 +1117,7 @@ static int __init qcom_geni_console_setup(struct console *co, char *options)
- {
- 	struct uart_port *uport;
- 	struct qcom_geni_serial_port *port;
--	int baud;
-+	int baud = 9600;
- 	int bits = 8;
- 	int parity = 'n';
- 	int flow = 'n';
-diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
-index 64bbeb7d7e0c..93bd90f1ff14 100644
---- a/drivers/tty/serial/sh-sci.c
-+++ b/drivers/tty/serial/sh-sci.c
-@@ -838,19 +838,9 @@ static void sci_transmit_chars(struct uart_port *port)
- 
- 	if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
- 		uart_write_wakeup(port);
--	if (uart_circ_empty(xmit)) {
-+	if (uart_circ_empty(xmit))
- 		sci_stop_tx(port);
--	} else {
--		ctrl = serial_port_in(port, SCSCR);
--
--		if (port->type != PORT_SCI) {
--			serial_port_in(port, SCxSR); /* Dummy read */
--			sci_clear_SCxSR(port, SCxSR_TDxE_CLEAR(port));
--		}
- 
--		ctrl |= SCSCR_TIE;
--		serial_port_out(port, SCSCR, ctrl);
--	}
- }
- 
- /* On SH3, SCIF may read end-of-break as a space->mark char */
-diff --git a/drivers/tty/serial/xilinx_uartps.c b/drivers/tty/serial/xilinx_uartps.c
-index 094f2958cb2b..ee9f18c52d29 100644
---- a/drivers/tty/serial/xilinx_uartps.c
-+++ b/drivers/tty/serial/xilinx_uartps.c
-@@ -364,7 +364,13 @@ static irqreturn_t cdns_uart_isr(int irq, void *dev_id)
- 		cdns_uart_handle_tx(dev_id);
- 		isrstatus &= ~CDNS_UART_IXR_TXEMPTY;
- 	}
--	if (isrstatus & CDNS_UART_IXR_RXMASK)
-+
-+	/*
-+	 * Skip RX processing if RX is disabled as RXEMPTY will never be set
-+	 * as read bytes will not be removed from the FIFO.
-+	 */
-+	if (isrstatus & CDNS_UART_IXR_RXMASK &&
-+	    !(readl(port->membase + CDNS_UART_CR) & CDNS_UART_CR_RX_DIS))
- 		cdns_uart_handle_rx(dev_id, isrstatus);
- 
- 	spin_unlock(&port->lock);
-diff --git a/drivers/tty/tty_buffer.c b/drivers/tty/tty_buffer.c
-index 77070c2d1240..ec145a59f199 100644
---- a/drivers/tty/tty_buffer.c
-+++ b/drivers/tty/tty_buffer.c
-@@ -26,7 +26,7 @@
-  * Byte threshold to limit memory consumption for flip buffers.
-  * The actual memory limit is > 2x this amount.
-  */
--#define TTYB_DEFAULT_MEM_LIMIT	65536
-+#define TTYB_DEFAULT_MEM_LIMIT	(640 * 1024UL)
- 
- /*
-  * We default to dicing tty buffer allocations to this many characters
-diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
-index 21ffcce16927..5fa250157025 100644
---- a/drivers/tty/tty_io.c
-+++ b/drivers/tty/tty_io.c
-@@ -513,6 +513,8 @@ static const struct file_operations hung_up_tty_fops = {
- static DEFINE_SPINLOCK(redirect_lock);
- static struct file *redirect;
- 
-+extern void tty_sysctl_init(void);
-+
- /**
-  *	tty_wakeup	-	request more data
-  *	@tty: terminal
-@@ -3483,6 +3485,7 @@ void console_sysfs_notify(void)
-  */
- int __init tty_init(void)
- {
-+	tty_sysctl_init();
- 	cdev_init(&tty_cdev, &tty_fops);
- 	if (cdev_add(&tty_cdev, MKDEV(TTYAUX_MAJOR, 0), 1) ||
- 	    register_chrdev_region(MKDEV(TTYAUX_MAJOR, 0), 1, "/dev/tty") < 0)
-diff --git a/drivers/tty/tty_ldisc.c b/drivers/tty/tty_ldisc.c
-index 45eda69b150c..e38f104db174 100644
---- a/drivers/tty/tty_ldisc.c
-+++ b/drivers/tty/tty_ldisc.c
-@@ -156,6 +156,13 @@ static void put_ldops(struct tty_ldisc_ops *ldops)
-  *		takes tty_ldiscs_lock to guard against ldisc races
-  */
- 
-+#if defined(CONFIG_LDISC_AUTOLOAD)
-+	#define INITIAL_AUTOLOAD_STATE	1
-+#else
-+	#define INITIAL_AUTOLOAD_STATE	0
-+#endif
-+static int tty_ldisc_autoload = INITIAL_AUTOLOAD_STATE;
-+
- static struct tty_ldisc *tty_ldisc_get(struct tty_struct *tty, int disc)
- {
- 	struct tty_ldisc *ld;
-@@ -170,6 +177,8 @@ static struct tty_ldisc *tty_ldisc_get(struct tty_struct *tty, int disc)
- 	 */
- 	ldops = get_ldops(disc);
- 	if (IS_ERR(ldops)) {
-+		if (!capable(CAP_SYS_MODULE) && !tty_ldisc_autoload)
-+			return ERR_PTR(-EPERM);
- 		request_module("tty-ldisc-%d", disc);
- 		ldops = get_ldops(disc);
- 		if (IS_ERR(ldops))
-@@ -845,3 +854,41 @@ void tty_ldisc_deinit(struct tty_struct *tty)
- 		tty_ldisc_put(tty->ldisc);
- 	tty->ldisc = NULL;
- }
-+
-+static int zero;
-+static int one = 1;
-+static struct ctl_table tty_table[] = {
-+	{
-+		.procname	= "ldisc_autoload",
-+		.data		= &tty_ldisc_autoload,
-+		.maxlen		= sizeof(tty_ldisc_autoload),
-+		.mode		= 0644,
-+		.proc_handler	= proc_dointvec,
-+		.extra1		= &zero,
-+		.extra2		= &one,
-+	},
-+	{ }
-+};
-+
-+static struct ctl_table tty_dir_table[] = {
-+	{
-+		.procname	= "tty",
-+		.mode		= 0555,
-+		.child		= tty_table,
-+	},
-+	{ }
-+};
-+
-+static struct ctl_table tty_root_table[] = {
-+	{
-+		.procname	= "dev",
-+		.mode		= 0555,
-+		.child		= tty_dir_table,
-+	},
-+	{ }
-+};
-+
-+void tty_sysctl_init(void)
-+{
-+	register_sysctl_table(tty_root_table);
-+}
-diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
-index bba75560d11e..9646ff63e77a 100644
---- a/drivers/tty/vt/vt.c
-+++ b/drivers/tty/vt/vt.c
-@@ -935,8 +935,11 @@ static void flush_scrollback(struct vc_data *vc)
- {
- 	WARN_CONSOLE_UNLOCKED();
- 
-+	set_origin(vc);
- 	if (vc->vc_sw->con_flush_scrollback)
- 		vc->vc_sw->con_flush_scrollback(vc);
-+	else
-+		vc->vc_sw->con_switch(vc);
- }
- 
- /*
-@@ -1503,8 +1506,10 @@ static void csi_J(struct vc_data *vc, int vpar)
- 			count = ((vc->vc_pos - vc->vc_origin) >> 1) + 1;
- 			start = (unsigned short *)vc->vc_origin;
- 			break;
-+		case 3: /* include scrollback */
-+			flush_scrollback(vc);
-+			/* fallthrough */
- 		case 2: /* erase whole display */
--		case 3: /* (and scrollback buffer later) */
- 			vc_uniscr_clear_lines(vc, 0, vc->vc_rows);
- 			count = vc->vc_cols * vc->vc_rows;
- 			start = (unsigned short *)vc->vc_origin;
-@@ -1513,13 +1518,7 @@ static void csi_J(struct vc_data *vc, int vpar)
- 			return;
- 	}
- 	scr_memsetw(start, vc->vc_video_erase_char, 2 * count);
--	if (vpar == 3) {
--		set_origin(vc);
--		flush_scrollback(vc);
--		if (con_is_visible(vc))
--			update_screen(vc);
--	} else if (con_should_update(vc))
--		do_update_region(vc, (unsigned long) start, count);
-+	update_region(vc, (unsigned long) start, count);
- 	vc->vc_need_wrap = 0;
- }
- 
-diff --git a/drivers/usb/chipidea/ci_hdrc_tegra.c b/drivers/usb/chipidea/ci_hdrc_tegra.c
-index 772851bee99b..12025358bb3c 100644
---- a/drivers/usb/chipidea/ci_hdrc_tegra.c
-+++ b/drivers/usb/chipidea/ci_hdrc_tegra.c
-@@ -130,6 +130,7 @@ static int tegra_udc_remove(struct platform_device *pdev)
- {
- 	struct tegra_udc *udc = platform_get_drvdata(pdev);
- 
-+	ci_hdrc_remove_device(udc->dev);
- 	usb_phy_set_suspend(udc->phy, 1);
- 	clk_disable_unprepare(udc->clk);
- 
-diff --git a/drivers/usb/chipidea/core.c b/drivers/usb/chipidea/core.c
-index 7bfcbb23c2a4..016e4004fe9d 100644
---- a/drivers/usb/chipidea/core.c
-+++ b/drivers/usb/chipidea/core.c
-@@ -954,8 +954,15 @@ static int ci_hdrc_probe(struct platform_device *pdev)
- 	} else if (ci->platdata->usb_phy) {
- 		ci->usb_phy = ci->platdata->usb_phy;
- 	} else {
-+		ci->usb_phy = devm_usb_get_phy_by_phandle(dev->parent, "phys",
-+							  0);
- 		ci->phy = devm_phy_get(dev->parent, "usb-phy");
--		ci->usb_phy = devm_usb_get_phy(dev->parent, USB_PHY_TYPE_USB2);
-+
-+		/* Fallback to grabbing any registered USB2 PHY */
-+		if (IS_ERR(ci->usb_phy) &&
-+		    PTR_ERR(ci->usb_phy) != -EPROBE_DEFER)
-+			ci->usb_phy = devm_usb_get_phy(dev->parent,
-+						       USB_PHY_TYPE_USB2);
- 
- 		/* if both generic PHY and USB PHY layers aren't enabled */
- 		if (PTR_ERR(ci->phy) == -ENOSYS &&
-diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
-index 739f8960811a..ec666eb4b7b4 100644
---- a/drivers/usb/class/cdc-acm.c
-+++ b/drivers/usb/class/cdc-acm.c
-@@ -558,10 +558,8 @@ static void acm_softint(struct work_struct *work)
- 		clear_bit(EVENT_RX_STALL, &acm->flags);
- 	}
- 
--	if (test_bit(EVENT_TTY_WAKEUP, &acm->flags)) {
-+	if (test_and_clear_bit(EVENT_TTY_WAKEUP, &acm->flags))
- 		tty_port_tty_wakeup(&acm->port);
--		clear_bit(EVENT_TTY_WAKEUP, &acm->flags);
--	}
- }
- 
- /*
-diff --git a/drivers/usb/common/common.c b/drivers/usb/common/common.c
-index 48277bbc15e4..73c8e6591746 100644
---- a/drivers/usb/common/common.c
-+++ b/drivers/usb/common/common.c
-@@ -145,6 +145,8 @@ enum usb_dr_mode of_usb_get_dr_mode_by_phy(struct device_node *np, int arg0)
- 
- 	do {
- 		controller = of_find_node_with_property(controller, "phys");
-+		if (!of_device_is_available(controller))
-+			continue;
- 		index = 0;
- 		do {
- 			if (arg0 == -1) {
-diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
-index 6c9b76bcc2e1..8d1dbe36db92 100644
---- a/drivers/usb/dwc3/gadget.c
-+++ b/drivers/usb/dwc3/gadget.c
-@@ -3339,6 +3339,8 @@ int dwc3_gadget_init(struct dwc3 *dwc)
- 		goto err4;
- 	}
- 
-+	dwc3_gadget_set_speed(&dwc->gadget, dwc->maximum_speed);
-+
- 	return 0;
- 
- err4:
-diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
-index 1e5430438703..0f8d16de7a37 100644
---- a/drivers/usb/gadget/function/f_fs.c
-+++ b/drivers/usb/gadget/function/f_fs.c
-@@ -1082,6 +1082,7 @@ static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data)
- 			 * condition with req->complete callback.
- 			 */
- 			usb_ep_dequeue(ep->ep, req);
-+			wait_for_completion(&done);
- 			interrupted = ep->status < 0;
- 		}
- 
-diff --git a/drivers/usb/gadget/function/f_hid.c b/drivers/usb/gadget/function/f_hid.c
-index 75b113a5b25c..f3816a5c861e 100644
---- a/drivers/usb/gadget/function/f_hid.c
-+++ b/drivers/usb/gadget/function/f_hid.c
-@@ -391,20 +391,20 @@ try_again:
- 	req->complete = f_hidg_req_complete;
- 	req->context  = hidg;
- 
-+	spin_unlock_irqrestore(&hidg->write_spinlock, flags);
-+
- 	status = usb_ep_queue(hidg->in_ep, req, GFP_ATOMIC);
- 	if (status < 0) {
- 		ERROR(hidg->func.config->cdev,
- 			"usb_ep_queue error on int endpoint %zd\n", status);
--		goto release_write_pending_unlocked;
-+		goto release_write_pending;
- 	} else {
- 		status = count;
- 	}
--	spin_unlock_irqrestore(&hidg->write_spinlock, flags);
- 
- 	return status;
- release_write_pending:
- 	spin_lock_irqsave(&hidg->write_spinlock, flags);
--release_write_pending_unlocked:
- 	hidg->write_pending = 0;
- 	spin_unlock_irqrestore(&hidg->write_spinlock, flags);
- 
-diff --git a/drivers/usb/host/xhci-dbgcap.c b/drivers/usb/host/xhci-dbgcap.c
-index 86cff5c28eff..ba841c569c48 100644
---- a/drivers/usb/host/xhci-dbgcap.c
-+++ b/drivers/usb/host/xhci-dbgcap.c
-@@ -516,7 +516,6 @@ static int xhci_do_dbc_stop(struct xhci_hcd *xhci)
- 		return -1;
- 
- 	writel(0, &dbc->regs->control);
--	xhci_dbc_mem_cleanup(xhci);
- 	dbc->state = DS_DISABLED;
- 
- 	return 0;
-@@ -562,8 +561,10 @@ static void xhci_dbc_stop(struct xhci_hcd *xhci)
- 	ret = xhci_do_dbc_stop(xhci);
- 	spin_unlock_irqrestore(&dbc->lock, flags);
- 
--	if (!ret)
-+	if (!ret) {
-+		xhci_dbc_mem_cleanup(xhci);
- 		pm_runtime_put_sync(xhci_to_hcd(xhci)->self.controller);
-+	}
- }
- 
- static void
-diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
-index e2eece693655..96a740543183 100644
---- a/drivers/usb/host/xhci-hub.c
-+++ b/drivers/usb/host/xhci-hub.c
-@@ -1545,20 +1545,25 @@ int xhci_bus_suspend(struct usb_hcd *hcd)
- 	port_index = max_ports;
- 	while (port_index--) {
- 		u32 t1, t2;
--
-+		int retries = 10;
-+retry:
- 		t1 = readl(ports[port_index]->addr);
- 		t2 = xhci_port_state_to_neutral(t1);
- 		portsc_buf[port_index] = 0;
- 
--		/* Bail out if a USB3 port has a new device in link training */
--		if ((hcd->speed >= HCD_USB3) &&
-+		/*
-+		 * Give a USB3 port in link training time to finish, but don't
-+		 * prevent suspend as port might be stuck
-+		 */
-+		if ((hcd->speed >= HCD_USB3) && retries-- &&
- 		    (t1 & PORT_PLS_MASK) == XDEV_POLLING) {
--			bus_state->bus_suspended = 0;
- 			spin_unlock_irqrestore(&xhci->lock, flags);
--			xhci_dbg(xhci, "Bus suspend bailout, port in polling\n");
--			return -EBUSY;
-+			msleep(XHCI_PORT_POLLING_LFPS_TIME);
-+			spin_lock_irqsave(&xhci->lock, flags);
-+			xhci_dbg(xhci, "port %d polling in bus suspend, waiting\n",
-+				 port_index);
-+			goto retry;
- 		}
--
- 		/* suspend ports in U0, or bail out for new connect changes */
- 		if ((t1 & PORT_PE) && (t1 & PORT_PLS_MASK) == XDEV_U0) {
- 			if ((t1 & PORT_CSC) && wake_enabled) {
-diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
-index a9ec7051f286..c2fe218e051f 100644
---- a/drivers/usb/host/xhci-pci.c
-+++ b/drivers/usb/host/xhci-pci.c
-@@ -194,6 +194,7 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
- 		xhci->quirks |= XHCI_SSIC_PORT_UNUSED;
- 	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
- 	    (pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI ||
-+	     pdev->device == PCI_DEVICE_ID_INTEL_SUNRISEPOINT_LP_XHCI ||
- 	     pdev->device == PCI_DEVICE_ID_INTEL_APL_XHCI))
- 		xhci->quirks |= XHCI_INTEL_USB_ROLE_SW;
- 	if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
-diff --git a/drivers/usb/host/xhci-rcar.c b/drivers/usb/host/xhci-rcar.c
-index a6e463715779..671bce18782c 100644
---- a/drivers/usb/host/xhci-rcar.c
-+++ b/drivers/usb/host/xhci-rcar.c
-@@ -246,6 +246,7 @@ int xhci_rcar_init_quirk(struct usb_hcd *hcd)
- 	if (!xhci_rcar_wait_for_pll_active(hcd))
- 		return -ETIMEDOUT;
- 
-+	xhci->quirks |= XHCI_TRUST_TX_LENGTH;
- 	return xhci_rcar_download_firmware(hcd);
- }
- 
-diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
-index 40fa25c4d041..9215a28dad40 100644
---- a/drivers/usb/host/xhci-ring.c
-+++ b/drivers/usb/host/xhci-ring.c
-@@ -1647,10 +1647,13 @@ static void handle_port_status(struct xhci_hcd *xhci,
- 		}
- 	}
- 
--	if ((portsc & PORT_PLC) && (portsc & PORT_PLS_MASK) == XDEV_U0 &&
--			DEV_SUPERSPEED_ANY(portsc)) {
-+	if ((portsc & PORT_PLC) &&
-+	    DEV_SUPERSPEED_ANY(portsc) &&
-+	    ((portsc & PORT_PLS_MASK) == XDEV_U0 ||
-+	     (portsc & PORT_PLS_MASK) == XDEV_U1 ||
-+	     (portsc & PORT_PLS_MASK) == XDEV_U2)) {
- 		xhci_dbg(xhci, "resume SS port %d finished\n", port_id);
--		/* We've just brought the device into U0 through either the
-+		/* We've just brought the device into U0/1/2 through either the
- 		 * Resume state after a device remote wakeup, or through the
- 		 * U3Exit state after a host-initiated resume.  If it's a device
- 		 * initiated remote wake, don't pass up the link state change,
-diff --git a/drivers/usb/host/xhci-tegra.c b/drivers/usb/host/xhci-tegra.c
-index 938ff06c0349..efb0cad8710e 100644
---- a/drivers/usb/host/xhci-tegra.c
-+++ b/drivers/usb/host/xhci-tegra.c
-@@ -941,9 +941,9 @@ static void tegra_xusb_powerdomain_remove(struct device *dev,
- 		device_link_del(tegra->genpd_dl_ss);
- 	if (tegra->genpd_dl_host)
- 		device_link_del(tegra->genpd_dl_host);
--	if (tegra->genpd_dev_ss)
-+	if (!IS_ERR_OR_NULL(tegra->genpd_dev_ss))
- 		dev_pm_domain_detach(tegra->genpd_dev_ss, true);
--	if (tegra->genpd_dev_host)
-+	if (!IS_ERR_OR_NULL(tegra->genpd_dev_host))
- 		dev_pm_domain_detach(tegra->genpd_dev_host, true);
- }
- 
-diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
-index 652dc36e3012..9334cdee382a 100644
---- a/drivers/usb/host/xhci.h
-+++ b/drivers/usb/host/xhci.h
-@@ -452,6 +452,14 @@ struct xhci_op_regs {
-  */
- #define XHCI_DEFAULT_BESL	4
- 
-+/*
-+ * The USB3 specification defines a 360 ms tPollingLFPSTimeout for USB3 ports
-+ * to complete link training. Usually link training completes much faster,
-+ * so check status 10 times with a 36 ms sleep in places where we need to
-+ * wait for polling to complete.
-+ */
-+#define XHCI_PORT_POLLING_LFPS_TIME  36
-+
- /**
-  * struct xhci_intr_reg - Interrupt Register Set
-  * @irq_pending:	IMAN - Interrupt Management Register.  Used to enable
-diff --git a/drivers/usb/mtu3/Kconfig b/drivers/usb/mtu3/Kconfig
-index 40bbf1f53337..fe58904f350b 100644
---- a/drivers/usb/mtu3/Kconfig
-+++ b/drivers/usb/mtu3/Kconfig
-@@ -4,6 +4,7 @@ config USB_MTU3
- 	tristate "MediaTek USB3 Dual Role controller"
- 	depends on USB || USB_GADGET
- 	depends on ARCH_MEDIATEK || COMPILE_TEST
-+	depends on EXTCON || !EXTCON
- 	select USB_XHCI_MTK if USB_SUPPORT && USB_XHCI_HCD
- 	help
- 	  Say Y or M here if your system runs on MediaTek SoCs with
-diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
-index c0777a374a88..e732949f6567 100644
---- a/drivers/usb/serial/cp210x.c
-+++ b/drivers/usb/serial/cp210x.c
-@@ -61,6 +61,7 @@ static const struct usb_device_id id_table[] = {
- 	{ USB_DEVICE(0x08e6, 0x5501) }, /* Gemalto Prox-PU/CU contactless smartcard reader */
- 	{ USB_DEVICE(0x08FD, 0x000A) }, /* Digianswer A/S , ZigBee/802.15.4 MAC Device */
- 	{ USB_DEVICE(0x0908, 0x01FF) }, /* Siemens RUGGEDCOM USB Serial Console */
-+	{ USB_DEVICE(0x0B00, 0x3070) }, /* Ingenico 3070 */
- 	{ USB_DEVICE(0x0BED, 0x1100) }, /* MEI (TM) Cashflow-SC Bill/Voucher Acceptor */
- 	{ USB_DEVICE(0x0BED, 0x1101) }, /* MEI series 2000 Combo Acceptor */
- 	{ USB_DEVICE(0x0FCF, 0x1003) }, /* Dynastream ANT development board */
-@@ -79,6 +80,7 @@ static const struct usb_device_id id_table[] = {
- 	{ USB_DEVICE(0x10C4, 0x804E) }, /* Software Bisque Paramount ME build-in converter */
- 	{ USB_DEVICE(0x10C4, 0x8053) }, /* Enfora EDG1228 */
- 	{ USB_DEVICE(0x10C4, 0x8054) }, /* Enfora GSM2228 */
-+	{ USB_DEVICE(0x10C4, 0x8056) }, /* Lorenz Messtechnik devices */
- 	{ USB_DEVICE(0x10C4, 0x8066) }, /* Argussoft In-System Programmer */
- 	{ USB_DEVICE(0x10C4, 0x806F) }, /* IMS USB to RS422 Converter Cable */
- 	{ USB_DEVICE(0x10C4, 0x807A) }, /* Crumb128 board */
-@@ -1353,8 +1355,13 @@ static int cp210x_gpio_get(struct gpio_chip *gc, unsigned int gpio)
- 	if (priv->partnum == CP210X_PARTNUM_CP2105)
- 		req_type = REQTYPE_INTERFACE_TO_HOST;
- 
-+	result = usb_autopm_get_interface(serial->interface);
-+	if (result)
-+		return result;
-+
- 	result = cp210x_read_vendor_block(serial, req_type,
- 					  CP210X_READ_LATCH, &buf, sizeof(buf));
-+	usb_autopm_put_interface(serial->interface);
- 	if (result < 0)
- 		return result;
- 
-@@ -1375,6 +1382,10 @@ static void cp210x_gpio_set(struct gpio_chip *gc, unsigned int gpio, int value)
- 
- 	buf.mask = BIT(gpio);
- 
-+	result = usb_autopm_get_interface(serial->interface);
-+	if (result)
-+		goto out;
-+
- 	if (priv->partnum == CP210X_PARTNUM_CP2105) {
- 		result = cp210x_write_vendor_block(serial,
- 						   REQTYPE_HOST_TO_INTERFACE,
-@@ -1392,6 +1403,8 @@ static void cp210x_gpio_set(struct gpio_chip *gc, unsigned int gpio, int value)
- 					 NULL, 0, USB_CTRL_SET_TIMEOUT);
- 	}
- 
-+	usb_autopm_put_interface(serial->interface);
-+out:
- 	if (result < 0) {
- 		dev_err(&serial->interface->dev, "failed to set GPIO value: %d\n",
- 				result);
-diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
-index 77ef4c481f3c..1d8461ae2c34 100644
---- a/drivers/usb/serial/ftdi_sio.c
-+++ b/drivers/usb/serial/ftdi_sio.c
-@@ -609,6 +609,8 @@ static const struct usb_device_id id_table_combined[] = {
- 		.driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
- 	{ USB_DEVICE(FTDI_VID, FTDI_NT_ORIONLXM_PID),
- 		.driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
-+	{ USB_DEVICE(FTDI_VID, FTDI_NT_ORIONLX_PLUS_PID) },
-+	{ USB_DEVICE(FTDI_VID, FTDI_NT_ORION_IO_PID) },
- 	{ USB_DEVICE(FTDI_VID, FTDI_SYNAPSE_SS200_PID) },
- 	{ USB_DEVICE(FTDI_VID, FTDI_CUSTOMWARE_MINIPLEX_PID) },
- 	{ USB_DEVICE(FTDI_VID, FTDI_CUSTOMWARE_MINIPLEX2_PID) },
-@@ -1025,6 +1027,8 @@ static const struct usb_device_id id_table_combined[] = {
- 	{ USB_DEVICE(CYPRESS_VID, CYPRESS_WICED_BT_USB_PID) },
- 	{ USB_DEVICE(CYPRESS_VID, CYPRESS_WICED_WL_USB_PID) },
- 	{ USB_DEVICE(AIRBUS_DS_VID, AIRBUS_DS_P8GR) },
-+	/* EZPrototypes devices */
-+	{ USB_DEVICE(EZPROTOTYPES_VID, HJELMSLUND_USB485_ISO_PID) },
- 	{ }					/* Terminating entry */
- };
- 
-diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
-index 975d02666c5a..5755f0df0025 100644
---- a/drivers/usb/serial/ftdi_sio_ids.h
-+++ b/drivers/usb/serial/ftdi_sio_ids.h
-@@ -567,7 +567,9 @@
- /*
-  * NovaTech product ids (FTDI_VID)
-  */
--#define FTDI_NT_ORIONLXM_PID	0x7c90	/* OrionLXm Substation Automation Platform */
-+#define FTDI_NT_ORIONLXM_PID		0x7c90	/* OrionLXm Substation Automation Platform */
-+#define FTDI_NT_ORIONLX_PLUS_PID	0x7c91	/* OrionLX+ Substation Automation Platform */
-+#define FTDI_NT_ORION_IO_PID		0x7c92	/* Orion I/O */
- 
- /*
-  * Synapse Wireless product ids (FTDI_VID)
-@@ -1308,6 +1310,12 @@
- #define IONICS_VID			0x1c0c
- #define IONICS_PLUGCOMPUTER_PID		0x0102
- 
-+/*
-+ * EZPrototypes (PID reseller)
-+ */
-+#define EZPROTOTYPES_VID		0x1c40
-+#define HJELMSLUND_USB485_ISO_PID	0x0477
-+
- /*
-  * Dresden Elektronik Sensor Terminal Board
-  */
-diff --git a/drivers/usb/serial/mos7720.c b/drivers/usb/serial/mos7720.c
-index fc52ac75fbf6..18110225d506 100644
---- a/drivers/usb/serial/mos7720.c
-+++ b/drivers/usb/serial/mos7720.c
-@@ -366,8 +366,6 @@ static int write_parport_reg_nonblock(struct mos7715_parport *mos_parport,
- 	if (!urbtrack)
- 		return -ENOMEM;
- 
--	kref_get(&mos_parport->ref_count);
--	urbtrack->mos_parport = mos_parport;
- 	urbtrack->urb = usb_alloc_urb(0, GFP_ATOMIC);
- 	if (!urbtrack->urb) {
- 		kfree(urbtrack);
-@@ -388,6 +386,8 @@ static int write_parport_reg_nonblock(struct mos7715_parport *mos_parport,
- 			     usb_sndctrlpipe(usbdev, 0),
- 			     (unsigned char *)urbtrack->setup,
- 			     NULL, 0, async_complete, urbtrack);
-+	kref_get(&mos_parport->ref_count);
-+	urbtrack->mos_parport = mos_parport;
- 	kref_init(&urbtrack->ref_count);
- 	INIT_LIST_HEAD(&urbtrack->urblist_entry);
- 
-diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
-index aef15497ff31..83869065b802 100644
---- a/drivers/usb/serial/option.c
-+++ b/drivers/usb/serial/option.c
-@@ -246,6 +246,7 @@ static void option_instat_callback(struct urb *urb);
- #define QUECTEL_PRODUCT_EC25			0x0125
- #define QUECTEL_PRODUCT_BG96			0x0296
- #define QUECTEL_PRODUCT_EP06			0x0306
-+#define QUECTEL_PRODUCT_EM12			0x0512
- 
- #define CMOTECH_VENDOR_ID			0x16d8
- #define CMOTECH_PRODUCT_6001			0x6001
-@@ -1066,7 +1067,8 @@ static const struct usb_device_id option_ids[] = {
- 	  .driver_info = RSVD(3) },
- 	{ USB_DEVICE(QUALCOMM_VENDOR_ID, 0x6613)}, /* Onda H600/ZTE MF330 */
- 	{ USB_DEVICE(QUALCOMM_VENDOR_ID, 0x0023)}, /* ONYX 3G device */
--	{ USB_DEVICE(QUALCOMM_VENDOR_ID, 0x9000)}, /* SIMCom SIM5218 */
-+	{ USB_DEVICE(QUALCOMM_VENDOR_ID, 0x9000), /* SIMCom SIM5218 */
-+	  .driver_info = NCTRL(0) | NCTRL(1) | NCTRL(2) | NCTRL(3) | RSVD(4) },
- 	/* Quectel products using Qualcomm vendor ID */
- 	{ USB_DEVICE(QUALCOMM_VENDOR_ID, QUECTEL_PRODUCT_UC15)},
- 	{ USB_DEVICE(QUALCOMM_VENDOR_ID, QUECTEL_PRODUCT_UC20),
-@@ -1087,6 +1089,9 @@ static const struct usb_device_id option_ids[] = {
- 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0xff, 0xff),
- 	  .driver_info = RSVD(1) | RSVD(2) | RSVD(3) | RSVD(4) | NUMEP2 },
- 	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0, 0) },
-+	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0xff, 0xff),
-+	  .driver_info = RSVD(1) | RSVD(2) | RSVD(3) | RSVD(4) | NUMEP2 },
-+	{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0, 0) },
- 	{ USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6001) },
- 	{ USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CMU_300) },
- 	{ USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6003),
-@@ -1148,6 +1153,8 @@ static const struct usb_device_id option_ids[] = {
- 	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) },
- 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
- 	  .driver_info = NCTRL(0) | RSVD(3) },
-+	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1102, 0xff),	/* Telit ME910 (ECM) */
-+	  .driver_info = NCTRL(0) },
- 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE910),
- 	  .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
- 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE910_USBCFG4),
-@@ -1938,10 +1945,12 @@ static const struct usb_device_id option_ids[] = {
- 	  .driver_info = RSVD(4) },
- 	{ USB_DEVICE_INTERFACE_CLASS(0x2001, 0x7e35, 0xff),			/* D-Link DWM-222 */
- 	  .driver_info = RSVD(4) },
--	{ USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e01, 0xff, 0xff, 0xff) }, /* D-Link DWM-152/C1 */
--	{ USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e02, 0xff, 0xff, 0xff) }, /* D-Link DWM-156/C1 */
--	{ USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x7e11, 0xff, 0xff, 0xff) }, /* D-Link DWM-156/A3 */
--	{ USB_DEVICE_INTERFACE_CLASS(0x2020, 0x4000, 0xff) },                /* OLICARD300 - MT6225 */
-+	{ USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e01, 0xff, 0xff, 0xff) },	/* D-Link DWM-152/C1 */
-+	{ USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e02, 0xff, 0xff, 0xff) },	/* D-Link DWM-156/C1 */
-+	{ USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x7e11, 0xff, 0xff, 0xff) },	/* D-Link DWM-156/A3 */
-+	{ USB_DEVICE_INTERFACE_CLASS(0x2020, 0x2031, 0xff),			/* Olicard 600 */
-+	  .driver_info = RSVD(4) },
-+	{ USB_DEVICE_INTERFACE_CLASS(0x2020, 0x4000, 0xff) },			/* OLICARD300 - MT6225 */
- 	{ USB_DEVICE(INOVIA_VENDOR_ID, INOVIA_SEW858) },
- 	{ USB_DEVICE(VIATELECOM_VENDOR_ID, VIATELECOM_PRODUCT_CDS7) },
- 	{ USB_DEVICE_AND_INTERFACE_INFO(WETELECOM_VENDOR_ID, WETELECOM_PRODUCT_WMD200, 0xff, 0xff, 0xff) },
-diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
-index f1c39a3c7534..d34e945e5d09 100644
---- a/drivers/usb/typec/tcpm/tcpm.c
-+++ b/drivers/usb/typec/tcpm/tcpm.c
-@@ -37,6 +37,7 @@
- 	S(SRC_ATTACHED),			\
- 	S(SRC_STARTUP),				\
- 	S(SRC_SEND_CAPABILITIES),		\
-+	S(SRC_SEND_CAPABILITIES_TIMEOUT),	\
- 	S(SRC_NEGOTIATE_CAPABILITIES),		\
- 	S(SRC_TRANSITION_SUPPLY),		\
- 	S(SRC_READY),				\
-@@ -2966,10 +2967,34 @@ static void run_state_machine(struct tcpm_port *port)
- 			/* port->hard_reset_count = 0; */
- 			port->caps_count = 0;
- 			port->pd_capable = true;
--			tcpm_set_state_cond(port, hard_reset_state(port),
-+			tcpm_set_state_cond(port, SRC_SEND_CAPABILITIES_TIMEOUT,
- 					    PD_T_SEND_SOURCE_CAP);
- 		}
- 		break;
-+	case SRC_SEND_CAPABILITIES_TIMEOUT:
-+		/*
-+		 * Error recovery for a PD_DATA_SOURCE_CAP reply timeout.
-+		 *
-+		 * PD 2.0 sinks are supposed to accept src-capabilities with a
-+		 * 3.0 header and simply ignore any src PDOs which the sink does
-+		 * not understand such as PPS but some 2.0 sinks instead ignore
-+		 * the entire PD_DATA_SOURCE_CAP message, causing contract
-+		 * negotiation to fail.
-+		 *
-+		 * After PD_N_HARD_RESET_COUNT hard-reset attempts, we try
-+		 * sending src-capabilities with a lower PD revision to
-+		 * make these broken sinks work.
-+		 */
-+		if (port->hard_reset_count < PD_N_HARD_RESET_COUNT) {
-+			tcpm_set_state(port, HARD_RESET_SEND, 0);
-+		} else if (port->negotiated_rev > PD_REV20) {
-+			port->negotiated_rev--;
-+			port->hard_reset_count = 0;
-+			tcpm_set_state(port, SRC_SEND_CAPABILITIES, 0);
-+		} else {
-+			tcpm_set_state(port, hard_reset_state(port), 0);
-+		}
-+		break;
- 	case SRC_NEGOTIATE_CAPABILITIES:
- 		ret = tcpm_pd_check_request(port);
- 		if (ret < 0) {
-diff --git a/drivers/usb/typec/tcpm/wcove.c b/drivers/usb/typec/tcpm/wcove.c
-index 423208e19383..6770afd40765 100644
---- a/drivers/usb/typec/tcpm/wcove.c
-+++ b/drivers/usb/typec/tcpm/wcove.c
-@@ -615,8 +615,13 @@ static int wcove_typec_probe(struct platform_device *pdev)
- 	wcove->dev = &pdev->dev;
- 	wcove->regmap = pmic->regmap;
- 
--	irq = regmap_irq_get_virq(pmic->irq_chip_data_chgr,
--				  platform_get_irq(pdev, 0));
-+	irq = platform_get_irq(pdev, 0);
-+	if (irq < 0) {
-+		dev_err(&pdev->dev, "Failed to get IRQ: %d\n", irq);
-+		return irq;
-+	}
-+
-+	irq = regmap_irq_get_virq(pmic->irq_chip_data_chgr, irq);
- 	if (irq < 0)
- 		return irq;
- 
-diff --git a/drivers/usb/typec/tps6598x.c b/drivers/usb/typec/tps6598x.c
-index 1c0033ad8738..e1109b15636d 100644
---- a/drivers/usb/typec/tps6598x.c
-+++ b/drivers/usb/typec/tps6598x.c
-@@ -110,6 +110,20 @@ tps6598x_block_read(struct tps6598x *tps, u8 reg, void *val, size_t len)
- 	return 0;
- }
- 
-+static int tps6598x_block_write(struct tps6598x *tps, u8 reg,
-+				void *val, size_t len)
-+{
-+	u8 data[TPS_MAX_LEN + 1];
-+
-+	if (!tps->i2c_protocol)
-+		return regmap_raw_write(tps->regmap, reg, val, len);
-+
-+	data[0] = len;
-+	memcpy(&data[1], val, len);
-+
-+	return regmap_raw_write(tps->regmap, reg, data, sizeof(data));
-+}
-+
- static inline int tps6598x_read16(struct tps6598x *tps, u8 reg, u16 *val)
- {
- 	return tps6598x_block_read(tps, reg, val, sizeof(u16));
-@@ -127,23 +141,23 @@ static inline int tps6598x_read64(struct tps6598x *tps, u8 reg, u64 *val)
- 
- static inline int tps6598x_write16(struct tps6598x *tps, u8 reg, u16 val)
- {
--	return regmap_raw_write(tps->regmap, reg, &val, sizeof(u16));
-+	return tps6598x_block_write(tps, reg, &val, sizeof(u16));
- }
- 
- static inline int tps6598x_write32(struct tps6598x *tps, u8 reg, u32 val)
- {
--	return regmap_raw_write(tps->regmap, reg, &val, sizeof(u32));
-+	return tps6598x_block_write(tps, reg, &val, sizeof(u32));
- }
- 
- static inline int tps6598x_write64(struct tps6598x *tps, u8 reg, u64 val)
- {
--	return regmap_raw_write(tps->regmap, reg, &val, sizeof(u64));
-+	return tps6598x_block_write(tps, reg, &val, sizeof(u64));
- }
- 
- static inline int
- tps6598x_write_4cc(struct tps6598x *tps, u8 reg, const char *val)
- {
--	return regmap_raw_write(tps->regmap, reg, &val, sizeof(u32));
-+	return tps6598x_block_write(tps, reg, &val, sizeof(u32));
- }
- 
- static int tps6598x_read_partner_identity(struct tps6598x *tps)
-@@ -229,8 +243,8 @@ static int tps6598x_exec_cmd(struct tps6598x *tps, const char *cmd,
- 		return -EBUSY;
- 
- 	if (in_len) {
--		ret = regmap_raw_write(tps->regmap, TPS_REG_DATA1,
--				       in_data, in_len);
-+		ret = tps6598x_block_write(tps, TPS_REG_DATA1,
-+					   in_data, in_len);
- 		if (ret)
- 			return ret;
- 	}
-diff --git a/drivers/video/backlight/pwm_bl.c b/drivers/video/backlight/pwm_bl.c
-index feb90764a811..53b8ceea9bde 100644
---- a/drivers/video/backlight/pwm_bl.c
-+++ b/drivers/video/backlight/pwm_bl.c
-@@ -435,7 +435,7 @@ static int pwm_backlight_initial_power_state(const struct pwm_bl_data *pb)
- 	 */
- 
- 	/* if the enable GPIO is disabled, do not enable the backlight */
--	if (pb->enable_gpio && gpiod_get_value(pb->enable_gpio) == 0)
-+	if (pb->enable_gpio && gpiod_get_value_cansleep(pb->enable_gpio) == 0)
- 		return FB_BLANK_POWERDOWN;
- 
- 	/* The regulator is disabled, do not enable the backlight */
-diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
-index cb43a2258c51..4721491e6c8c 100644
---- a/drivers/video/fbdev/core/fbmem.c
-+++ b/drivers/video/fbdev/core/fbmem.c
-@@ -431,6 +431,9 @@ static void fb_do_show_logo(struct fb_info *info, struct fb_image *image,
- {
- 	unsigned int x;
- 
-+	if (image->width > info->var.xres || image->height > info->var.yres)
-+		return;
-+
- 	if (rotate == FB_ROTATE_UR) {
- 		for (x = 0;
- 		     x < num && image->dx + image->width <= info->var.xres;
-diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
-index a0b07c331255..a38b65b97be0 100644
---- a/drivers/virtio/virtio_ring.c
-+++ b/drivers/virtio/virtio_ring.c
-@@ -871,6 +871,8 @@ static struct virtqueue *vring_create_virtqueue_split(
- 					  GFP_KERNEL|__GFP_NOWARN|__GFP_ZERO);
- 		if (queue)
- 			break;
-+		if (!may_reduce_num)
-+			return NULL;
- 	}
- 
- 	if (!num)
-diff --git a/drivers/xen/gntdev-dmabuf.c b/drivers/xen/gntdev-dmabuf.c
-index cba6b586bfbd..d97fcfc5e558 100644
---- a/drivers/xen/gntdev-dmabuf.c
-+++ b/drivers/xen/gntdev-dmabuf.c
-@@ -80,6 +80,12 @@ struct gntdev_dmabuf_priv {
- 	struct list_head imp_list;
- 	/* This is the lock which protects dma_buf_xxx lists. */
- 	struct mutex lock;
-+	/*
-+	 * We reference this file while exporting dma-bufs, so
-+	 * the grant device context is not destroyed while there are
-+	 * external users alive.
-+	 */
-+	struct file *filp;
- };
- 
- /* DMA buffer export support. */
-@@ -311,6 +317,7 @@ static void dmabuf_exp_release(struct kref *kref)
- 
- 	dmabuf_exp_wait_obj_signal(gntdev_dmabuf->priv, gntdev_dmabuf);
- 	list_del(&gntdev_dmabuf->next);
-+	fput(gntdev_dmabuf->priv->filp);
- 	kfree(gntdev_dmabuf);
- }
- 
-@@ -423,6 +430,7 @@ static int dmabuf_exp_from_pages(struct gntdev_dmabuf_export_args *args)
- 	mutex_lock(&args->dmabuf_priv->lock);
- 	list_add(&gntdev_dmabuf->next, &args->dmabuf_priv->exp_list);
- 	mutex_unlock(&args->dmabuf_priv->lock);
-+	get_file(gntdev_dmabuf->priv->filp);
- 	return 0;
- 
- fail:
-@@ -834,7 +842,7 @@ long gntdev_ioctl_dmabuf_imp_release(struct gntdev_priv *priv,
- 	return dmabuf_imp_release(priv->dmabuf_priv, op.fd);
- }
- 
--struct gntdev_dmabuf_priv *gntdev_dmabuf_init(void)
-+struct gntdev_dmabuf_priv *gntdev_dmabuf_init(struct file *filp)
- {
- 	struct gntdev_dmabuf_priv *priv;
- 
-@@ -847,6 +855,8 @@ struct gntdev_dmabuf_priv *gntdev_dmabuf_init(void)
- 	INIT_LIST_HEAD(&priv->exp_wait_list);
- 	INIT_LIST_HEAD(&priv->imp_list);
- 
-+	priv->filp = filp;
-+
- 	return priv;
- }
- 
-diff --git a/drivers/xen/gntdev-dmabuf.h b/drivers/xen/gntdev-dmabuf.h
-index 7220a53d0fc5..3d9b9cf9d5a1 100644
---- a/drivers/xen/gntdev-dmabuf.h
-+++ b/drivers/xen/gntdev-dmabuf.h
-@@ -14,7 +14,7 @@
- struct gntdev_dmabuf_priv;
- struct gntdev_priv;
- 
--struct gntdev_dmabuf_priv *gntdev_dmabuf_init(void);
-+struct gntdev_dmabuf_priv *gntdev_dmabuf_init(struct file *filp);
- 
- void gntdev_dmabuf_fini(struct gntdev_dmabuf_priv *priv);
- 
-diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
-index 5efc5eee9544..7cf9c51318aa 100644
---- a/drivers/xen/gntdev.c
-+++ b/drivers/xen/gntdev.c
-@@ -600,7 +600,7 @@ static int gntdev_open(struct inode *inode, struct file *flip)
- 	mutex_init(&priv->lock);
- 
- #ifdef CONFIG_XEN_GNTDEV_DMABUF
--	priv->dmabuf_priv = gntdev_dmabuf_init();
-+	priv->dmabuf_priv = gntdev_dmabuf_init(flip);
- 	if (IS_ERR(priv->dmabuf_priv)) {
- 		ret = PTR_ERR(priv->dmabuf_priv);
- 		kfree(priv);
-diff --git a/fs/9p/v9fs_vfs.h b/fs/9p/v9fs_vfs.h
-index 5a0db6dec8d1..aaee1e6584e6 100644
---- a/fs/9p/v9fs_vfs.h
-+++ b/fs/9p/v9fs_vfs.h
-@@ -40,6 +40,9 @@
-  */
- #define P9_LOCK_TIMEOUT (30*HZ)
- 
-+/* flags for v9fs_stat2inode() & v9fs_stat2inode_dotl() */
-+#define V9FS_STAT2INODE_KEEP_ISIZE 1
-+
- extern struct file_system_type v9fs_fs_type;
- extern const struct address_space_operations v9fs_addr_operations;
- extern const struct file_operations v9fs_file_operations;
-@@ -61,8 +64,10 @@ int v9fs_init_inode(struct v9fs_session_info *v9ses,
- 		    struct inode *inode, umode_t mode, dev_t);
- void v9fs_evict_inode(struct inode *inode);
- ino_t v9fs_qid2ino(struct p9_qid *qid);
--void v9fs_stat2inode(struct p9_wstat *, struct inode *, struct super_block *);
--void v9fs_stat2inode_dotl(struct p9_stat_dotl *, struct inode *);
-+void v9fs_stat2inode(struct p9_wstat *stat, struct inode *inode,
-+		      struct super_block *sb, unsigned int flags);
-+void v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode,
-+			   unsigned int flags);
- int v9fs_dir_release(struct inode *inode, struct file *filp);
- int v9fs_file_open(struct inode *inode, struct file *file);
- void v9fs_inode2stat(struct inode *inode, struct p9_wstat *stat);
-@@ -83,4 +88,18 @@ static inline void v9fs_invalidate_inode_attr(struct inode *inode)
- }
- 
- int v9fs_open_to_dotl_flags(int flags);
-+
-+static inline void v9fs_i_size_write(struct inode *inode, loff_t i_size)
-+{
-+	/*
-+	 * 32-bit need the lock, concurrent updates could break the
-+	 * sequences and make i_size_read() loop forever.
-+	 * 64-bit updates are atomic and can skip the locking.
-+	 */
-+	if (sizeof(i_size) > sizeof(long))
-+		spin_lock(&inode->i_lock);
-+	i_size_write(inode, i_size);
-+	if (sizeof(i_size) > sizeof(long))
-+		spin_unlock(&inode->i_lock);
-+}
- #endif
-diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c
-index a25efa782fcc..9a1125305d84 100644
---- a/fs/9p/vfs_file.c
-+++ b/fs/9p/vfs_file.c
-@@ -446,7 +446,11 @@ v9fs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
- 		i_size = i_size_read(inode);
- 		if (iocb->ki_pos > i_size) {
- 			inode_add_bytes(inode, iocb->ki_pos - i_size);
--			i_size_write(inode, iocb->ki_pos);
-+			/*
-+			 * Need to serialize against i_size_write() in
-+			 * v9fs_stat2inode()
-+			 */
-+			v9fs_i_size_write(inode, iocb->ki_pos);
- 		}
- 		return retval;
- 	}
-diff --git a/fs/9p/vfs_inode.c b/fs/9p/vfs_inode.c
-index 85ff859d3af5..72b779bc0942 100644
---- a/fs/9p/vfs_inode.c
-+++ b/fs/9p/vfs_inode.c
-@@ -538,7 +538,7 @@ static struct inode *v9fs_qid_iget(struct super_block *sb,
- 	if (retval)
- 		goto error;
- 
--	v9fs_stat2inode(st, inode, sb);
-+	v9fs_stat2inode(st, inode, sb, 0);
- 	v9fs_cache_inode_get_cookie(inode);
- 	unlock_new_inode(inode);
- 	return inode;
-@@ -1092,7 +1092,7 @@ v9fs_vfs_getattr(const struct path *path, struct kstat *stat,
- 	if (IS_ERR(st))
- 		return PTR_ERR(st);
- 
--	v9fs_stat2inode(st, d_inode(dentry), dentry->d_sb);
-+	v9fs_stat2inode(st, d_inode(dentry), dentry->d_sb, 0);
- 	generic_fillattr(d_inode(dentry), stat);
- 
- 	p9stat_free(st);
-@@ -1170,12 +1170,13 @@ static int v9fs_vfs_setattr(struct dentry *dentry, struct iattr *iattr)
-  * @stat: Plan 9 metadata (mistat) structure
-  * @inode: inode to populate
-  * @sb: superblock of filesystem
-+ * @flags: control flags (e.g. V9FS_STAT2INODE_KEEP_ISIZE)
-  *
-  */
- 
- void
- v9fs_stat2inode(struct p9_wstat *stat, struct inode *inode,
--	struct super_block *sb)
-+		 struct super_block *sb, unsigned int flags)
- {
- 	umode_t mode;
- 	char ext[32];
-@@ -1216,10 +1217,11 @@ v9fs_stat2inode(struct p9_wstat *stat, struct inode *inode,
- 	mode = p9mode2perm(v9ses, stat);
- 	mode |= inode->i_mode & ~S_IALLUGO;
- 	inode->i_mode = mode;
--	i_size_write(inode, stat->length);
- 
-+	if (!(flags & V9FS_STAT2INODE_KEEP_ISIZE))
-+		v9fs_i_size_write(inode, stat->length);
- 	/* not real number of blocks, but 512 byte ones ... */
--	inode->i_blocks = (i_size_read(inode) + 512 - 1) >> 9;
-+	inode->i_blocks = (stat->length + 512 - 1) >> 9;
- 	v9inode->cache_validity &= ~V9FS_INO_INVALID_ATTR;
- }
- 
-@@ -1416,9 +1418,9 @@ int v9fs_refresh_inode(struct p9_fid *fid, struct inode *inode)
- {
- 	int umode;
- 	dev_t rdev;
--	loff_t i_size;
- 	struct p9_wstat *st;
- 	struct v9fs_session_info *v9ses;
-+	unsigned int flags;
- 
- 	v9ses = v9fs_inode2v9ses(inode);
- 	st = p9_client_stat(fid);
-@@ -1431,16 +1433,13 @@ int v9fs_refresh_inode(struct p9_fid *fid, struct inode *inode)
- 	if ((inode->i_mode & S_IFMT) != (umode & S_IFMT))
- 		goto out;
- 
--	spin_lock(&inode->i_lock);
- 	/*
- 	 * We don't want to refresh inode->i_size,
- 	 * because we may have cached data
- 	 */
--	i_size = inode->i_size;
--	v9fs_stat2inode(st, inode, inode->i_sb);
--	if (v9ses->cache == CACHE_LOOSE || v9ses->cache == CACHE_FSCACHE)
--		inode->i_size = i_size;
--	spin_unlock(&inode->i_lock);
-+	flags = (v9ses->cache == CACHE_LOOSE || v9ses->cache == CACHE_FSCACHE) ?
-+		V9FS_STAT2INODE_KEEP_ISIZE : 0;
-+	v9fs_stat2inode(st, inode, inode->i_sb, flags);
- out:
- 	p9stat_free(st);
- 	kfree(st);
-diff --git a/fs/9p/vfs_inode_dotl.c b/fs/9p/vfs_inode_dotl.c
-index 4823e1c46999..a950a927a626 100644
---- a/fs/9p/vfs_inode_dotl.c
-+++ b/fs/9p/vfs_inode_dotl.c
-@@ -143,7 +143,7 @@ static struct inode *v9fs_qid_iget_dotl(struct super_block *sb,
- 	if (retval)
- 		goto error;
- 
--	v9fs_stat2inode_dotl(st, inode);
-+	v9fs_stat2inode_dotl(st, inode, 0);
- 	v9fs_cache_inode_get_cookie(inode);
- 	retval = v9fs_get_acl(inode, fid);
- 	if (retval)
-@@ -496,7 +496,7 @@ v9fs_vfs_getattr_dotl(const struct path *path, struct kstat *stat,
- 	if (IS_ERR(st))
- 		return PTR_ERR(st);
- 
--	v9fs_stat2inode_dotl(st, d_inode(dentry));
-+	v9fs_stat2inode_dotl(st, d_inode(dentry), 0);
- 	generic_fillattr(d_inode(dentry), stat);
- 	/* Change block size to what the server returned */
- 	stat->blksize = st->st_blksize;
-@@ -607,11 +607,13 @@ int v9fs_vfs_setattr_dotl(struct dentry *dentry, struct iattr *iattr)
-  * v9fs_stat2inode_dotl - populate an inode structure with stat info
-  * @stat: stat structure
-  * @inode: inode to populate
-+ * @flags: ctrl flags (e.g. V9FS_STAT2INODE_KEEP_ISIZE)
-  *
-  */
- 
- void
--v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode)
-+v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode,
-+		      unsigned int flags)
- {
- 	umode_t mode;
- 	struct v9fs_inode *v9inode = V9FS_I(inode);
-@@ -631,7 +633,8 @@ v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode)
- 		mode |= inode->i_mode & ~S_IALLUGO;
- 		inode->i_mode = mode;
- 
--		i_size_write(inode, stat->st_size);
-+		if (!(flags & V9FS_STAT2INODE_KEEP_ISIZE))
-+			v9fs_i_size_write(inode, stat->st_size);
- 		inode->i_blocks = stat->st_blocks;
- 	} else {
- 		if (stat->st_result_mask & P9_STATS_ATIME) {
-@@ -661,8 +664,9 @@ v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode)
- 		}
- 		if (stat->st_result_mask & P9_STATS_RDEV)
- 			inode->i_rdev = new_decode_dev(stat->st_rdev);
--		if (stat->st_result_mask & P9_STATS_SIZE)
--			i_size_write(inode, stat->st_size);
-+		if (!(flags & V9FS_STAT2INODE_KEEP_ISIZE) &&
-+		    stat->st_result_mask & P9_STATS_SIZE)
-+			v9fs_i_size_write(inode, stat->st_size);
- 		if (stat->st_result_mask & P9_STATS_BLOCKS)
- 			inode->i_blocks = stat->st_blocks;
- 	}
-@@ -928,9 +932,9 @@ v9fs_vfs_get_link_dotl(struct dentry *dentry,
- 
- int v9fs_refresh_inode_dotl(struct p9_fid *fid, struct inode *inode)
- {
--	loff_t i_size;
- 	struct p9_stat_dotl *st;
- 	struct v9fs_session_info *v9ses;
-+	unsigned int flags;
- 
- 	v9ses = v9fs_inode2v9ses(inode);
- 	st = p9_client_getattr_dotl(fid, P9_STATS_ALL);
-@@ -942,16 +946,13 @@ int v9fs_refresh_inode_dotl(struct p9_fid *fid, struct inode *inode)
- 	if ((inode->i_mode & S_IFMT) != (st->st_mode & S_IFMT))
- 		goto out;
- 
--	spin_lock(&inode->i_lock);
- 	/*
- 	 * We don't want to refresh inode->i_size,
- 	 * because we may have cached data
- 	 */
--	i_size = inode->i_size;
--	v9fs_stat2inode_dotl(st, inode);
--	if (v9ses->cache == CACHE_LOOSE || v9ses->cache == CACHE_FSCACHE)
--		inode->i_size = i_size;
--	spin_unlock(&inode->i_lock);
-+	flags = (v9ses->cache == CACHE_LOOSE || v9ses->cache == CACHE_FSCACHE) ?
-+		V9FS_STAT2INODE_KEEP_ISIZE : 0;
-+	v9fs_stat2inode_dotl(st, inode, flags);
- out:
- 	kfree(st);
- 	return 0;
-diff --git a/fs/9p/vfs_super.c b/fs/9p/vfs_super.c
-index 48ce50484e80..eeab9953af89 100644
---- a/fs/9p/vfs_super.c
-+++ b/fs/9p/vfs_super.c
-@@ -172,7 +172,7 @@ static struct dentry *v9fs_mount(struct file_system_type *fs_type, int flags,
- 			goto release_sb;
- 		}
- 		d_inode(root)->i_ino = v9fs_qid2ino(&st->qid);
--		v9fs_stat2inode_dotl(st, d_inode(root));
-+		v9fs_stat2inode_dotl(st, d_inode(root), 0);
- 		kfree(st);
- 	} else {
- 		struct p9_wstat *st = NULL;
-@@ -183,7 +183,7 @@ static struct dentry *v9fs_mount(struct file_system_type *fs_type, int flags,
- 		}
- 
- 		d_inode(root)->i_ino = v9fs_qid2ino(&st->qid);
--		v9fs_stat2inode(st, d_inode(root), sb);
-+		v9fs_stat2inode(st, d_inode(root), sb, 0);
- 
- 		p9stat_free(st);
- 		kfree(st);
-diff --git a/fs/aio.c b/fs/aio.c
-index aaaaf4d12c73..3d9669d011b9 100644
---- a/fs/aio.c
-+++ b/fs/aio.c
-@@ -167,9 +167,13 @@ struct kioctx {
- 	unsigned		id;
- };
- 
-+/*
-+ * First field must be the file pointer in all the
-+ * iocb unions! See also 'struct kiocb' in <linux/fs.h>
-+ */
- struct fsync_iocb {
--	struct work_struct	work;
- 	struct file		*file;
-+	struct work_struct	work;
- 	bool			datasync;
- };
- 
-@@ -183,8 +187,15 @@ struct poll_iocb {
- 	struct work_struct	work;
- };
- 
-+/*
-+ * NOTE! Each of the iocb union members has the file pointer
-+ * as the first entry in their struct definition. So you can
-+ * access the file pointer through any of the sub-structs,
-+ * or directly as just 'ki_filp' in this struct.
-+ */
- struct aio_kiocb {
- 	union {
-+		struct file		*ki_filp;
- 		struct kiocb		rw;
- 		struct fsync_iocb	fsync;
- 		struct poll_iocb	poll;
-@@ -1060,6 +1071,8 @@ static inline void iocb_put(struct aio_kiocb *iocb)
- {
- 	if (refcount_read(&iocb->ki_refcnt) == 0 ||
- 	    refcount_dec_and_test(&iocb->ki_refcnt)) {
-+		if (iocb->ki_filp)
-+			fput(iocb->ki_filp);
- 		percpu_ref_put(&iocb->ki_ctx->reqs);
- 		kmem_cache_free(kiocb_cachep, iocb);
- 	}
-@@ -1424,7 +1437,6 @@ static void aio_complete_rw(struct kiocb *kiocb, long res, long res2)
- 		file_end_write(kiocb->ki_filp);
- 	}
- 
--	fput(kiocb->ki_filp);
- 	aio_complete(iocb, res, res2);
- }
- 
-@@ -1432,9 +1444,6 @@ static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb)
- {
- 	int ret;
- 
--	req->ki_filp = fget(iocb->aio_fildes);
--	if (unlikely(!req->ki_filp))
--		return -EBADF;
- 	req->ki_complete = aio_complete_rw;
- 	req->private = NULL;
- 	req->ki_pos = iocb->aio_offset;
-@@ -1451,7 +1460,7 @@ static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb)
- 		ret = ioprio_check_cap(iocb->aio_reqprio);
- 		if (ret) {
- 			pr_debug("aio ioprio check cap error: %d\n", ret);
--			goto out_fput;
-+			return ret;
- 		}
- 
- 		req->ki_ioprio = iocb->aio_reqprio;
-@@ -1460,14 +1469,10 @@ static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb)
- 
- 	ret = kiocb_set_rw_flags(req, iocb->aio_rw_flags);
- 	if (unlikely(ret))
--		goto out_fput;
-+		return ret;
- 
- 	req->ki_flags &= ~IOCB_HIPRI; /* no one is going to poll for this I/O */
- 	return 0;
--
--out_fput:
--	fput(req->ki_filp);
--	return ret;
- }
- 
- static int aio_setup_rw(int rw, const struct iocb *iocb, struct iovec **iovec,
-@@ -1521,24 +1526,19 @@ static ssize_t aio_read(struct kiocb *req, const struct iocb *iocb,
- 	if (ret)
- 		return ret;
- 	file = req->ki_filp;
--
--	ret = -EBADF;
- 	if (unlikely(!(file->f_mode & FMODE_READ)))
--		goto out_fput;
-+		return -EBADF;
- 	ret = -EINVAL;
- 	if (unlikely(!file->f_op->read_iter))
--		goto out_fput;
-+		return -EINVAL;
- 
- 	ret = aio_setup_rw(READ, iocb, &iovec, vectored, compat, &iter);
- 	if (ret)
--		goto out_fput;
-+		return ret;
- 	ret = rw_verify_area(READ, file, &req->ki_pos, iov_iter_count(&iter));
- 	if (!ret)
- 		aio_rw_done(req, call_read_iter(file, req, &iter));
- 	kfree(iovec);
--out_fput:
--	if (unlikely(ret))
--		fput(file);
- 	return ret;
- }
- 
-@@ -1555,16 +1555,14 @@ static ssize_t aio_write(struct kiocb *req, const struct iocb *iocb,
- 		return ret;
- 	file = req->ki_filp;
- 
--	ret = -EBADF;
- 	if (unlikely(!(file->f_mode & FMODE_WRITE)))
--		goto out_fput;
--	ret = -EINVAL;
-+		return -EBADF;
- 	if (unlikely(!file->f_op->write_iter))
--		goto out_fput;
-+		return -EINVAL;
- 
- 	ret = aio_setup_rw(WRITE, iocb, &iovec, vectored, compat, &iter);
- 	if (ret)
--		goto out_fput;
-+		return ret;
- 	ret = rw_verify_area(WRITE, file, &req->ki_pos, iov_iter_count(&iter));
- 	if (!ret) {
- 		/*
-@@ -1582,9 +1580,6 @@ static ssize_t aio_write(struct kiocb *req, const struct iocb *iocb,
- 		aio_rw_done(req, call_write_iter(file, req, &iter));
- 	}
- 	kfree(iovec);
--out_fput:
--	if (unlikely(ret))
--		fput(file);
- 	return ret;
- }
- 
-@@ -1594,7 +1589,6 @@ static void aio_fsync_work(struct work_struct *work)
- 	int ret;
- 
- 	ret = vfs_fsync(req->file, req->datasync);
--	fput(req->file);
- 	aio_complete(container_of(req, struct aio_kiocb, fsync), ret, 0);
- }
- 
-@@ -1605,13 +1599,8 @@ static int aio_fsync(struct fsync_iocb *req, const struct iocb *iocb,
- 			iocb->aio_rw_flags))
- 		return -EINVAL;
- 
--	req->file = fget(iocb->aio_fildes);
--	if (unlikely(!req->file))
--		return -EBADF;
--	if (unlikely(!req->file->f_op->fsync)) {
--		fput(req->file);
-+	if (unlikely(!req->file->f_op->fsync))
- 		return -EINVAL;
--	}
- 
- 	req->datasync = datasync;
- 	INIT_WORK(&req->work, aio_fsync_work);
-@@ -1621,10 +1610,7 @@ static int aio_fsync(struct fsync_iocb *req, const struct iocb *iocb,
- 
- static inline void aio_poll_complete(struct aio_kiocb *iocb, __poll_t mask)
- {
--	struct file *file = iocb->poll.file;
--
- 	aio_complete(iocb, mangle_poll(mask), 0);
--	fput(file);
- }
- 
- static void aio_poll_complete_work(struct work_struct *work)
-@@ -1680,6 +1666,7 @@ static int aio_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
- 	struct poll_iocb *req = container_of(wait, struct poll_iocb, wait);
- 	struct aio_kiocb *iocb = container_of(req, struct aio_kiocb, poll);
- 	__poll_t mask = key_to_poll(key);
-+	unsigned long flags;
- 
- 	req->woken = true;
- 
-@@ -1688,10 +1675,15 @@ static int aio_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
- 		if (!(mask & req->events))
- 			return 0;
- 
--		/* try to complete the iocb inline if we can: */
--		if (spin_trylock(&iocb->ki_ctx->ctx_lock)) {
-+		/*
-+		 * Try to complete the iocb inline if we can. Use
-+		 * irqsave/irqrestore because not all filesystems (e.g. fuse)
-+		 * call this function with IRQs disabled and because IRQs
-+		 * have to be disabled before ctx_lock is obtained.
-+		 */
-+		if (spin_trylock_irqsave(&iocb->ki_ctx->ctx_lock, flags)) {
- 			list_del(&iocb->ki_list);
--			spin_unlock(&iocb->ki_ctx->ctx_lock);
-+			spin_unlock_irqrestore(&iocb->ki_ctx->ctx_lock, flags);
- 
- 			list_del_init(&req->wait.entry);
- 			aio_poll_complete(iocb, mask);
-@@ -1743,9 +1735,6 @@ static ssize_t aio_poll(struct aio_kiocb *aiocb, const struct iocb *iocb)
- 
- 	INIT_WORK(&req->work, aio_poll_complete_work);
- 	req->events = demangle_poll(iocb->aio_buf) | EPOLLERR | EPOLLHUP;
--	req->file = fget(iocb->aio_fildes);
--	if (unlikely(!req->file))
--		return -EBADF;
- 
- 	req->head = NULL;
- 	req->woken = false;
-@@ -1788,10 +1777,8 @@ static ssize_t aio_poll(struct aio_kiocb *aiocb, const struct iocb *iocb)
- 	spin_unlock_irq(&ctx->ctx_lock);
- 
- out:
--	if (unlikely(apt.error)) {
--		fput(req->file);
-+	if (unlikely(apt.error))
- 		return apt.error;
--	}
- 
- 	if (mask)
- 		aio_poll_complete(aiocb, mask);
-@@ -1829,6 +1816,11 @@ static int __io_submit_one(struct kioctx *ctx, const struct iocb *iocb,
- 	if (unlikely(!req))
- 		goto out_put_reqs_available;
- 
-+	req->ki_filp = fget(iocb->aio_fildes);
-+	ret = -EBADF;
-+	if (unlikely(!req->ki_filp))
-+		goto out_put_req;
-+
- 	if (iocb->aio_flags & IOCB_FLAG_RESFD) {
- 		/*
- 		 * If the IOCB_FLAG_RESFD flag of aio_flags is set, get an
-diff --git a/fs/block_dev.c b/fs/block_dev.c
-index 58a4c1217fa8..06ef48ad1998 100644
---- a/fs/block_dev.c
-+++ b/fs/block_dev.c
-@@ -298,10 +298,10 @@ static void blkdev_bio_end_io(struct bio *bio)
- 	struct blkdev_dio *dio = bio->bi_private;
- 	bool should_dirty = dio->should_dirty;
- 
--	if (dio->multi_bio && !atomic_dec_and_test(&dio->ref)) {
--		if (bio->bi_status && !dio->bio.bi_status)
--			dio->bio.bi_status = bio->bi_status;
--	} else {
-+	if (bio->bi_status && !dio->bio.bi_status)
-+		dio->bio.bi_status = bio->bi_status;
-+
-+	if (!dio->multi_bio || atomic_dec_and_test(&dio->ref)) {
- 		if (!dio->is_sync) {
- 			struct kiocb *iocb = dio->iocb;
- 			ssize_t ret;
-diff --git a/fs/btrfs/acl.c b/fs/btrfs/acl.c
-index 3b66c957ea6f..5810463dc6d2 100644
---- a/fs/btrfs/acl.c
-+++ b/fs/btrfs/acl.c
-@@ -9,6 +9,7 @@
- #include <linux/posix_acl_xattr.h>
- #include <linux/posix_acl.h>
- #include <linux/sched.h>
-+#include <linux/sched/mm.h>
- #include <linux/slab.h>
- 
- #include "ctree.h"
-@@ -72,8 +73,16 @@ static int __btrfs_set_acl(struct btrfs_trans_handle *trans,
- 	}
- 
- 	if (acl) {
-+		unsigned int nofs_flag;
-+
- 		size = posix_acl_xattr_size(acl->a_count);
-+		/*
-+		 * We're holding a transaction handle, so use a NOFS memory
-+		 * allocation context to avoid deadlock if reclaim happens.
-+		 */
-+		nofs_flag = memalloc_nofs_save();
- 		value = kmalloc(size, GFP_KERNEL);
-+		memalloc_nofs_restore(nofs_flag);
- 		if (!value) {
- 			ret = -ENOMEM;
- 			goto out;
-diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c
-index 8750c835f535..c4dea3b7349e 100644
---- a/fs/btrfs/dev-replace.c
-+++ b/fs/btrfs/dev-replace.c
-@@ -862,6 +862,7 @@ int btrfs_dev_replace_cancel(struct btrfs_fs_info *fs_info)
- 			btrfs_destroy_dev_replace_tgtdev(tgt_device);
- 		break;
- 	default:
-+		up_write(&dev_replace->rwsem);
- 		result = -EINVAL;
- 	}
- 
-diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
-index 6a2a2a951705..888d72dda794 100644
---- a/fs/btrfs/disk-io.c
-+++ b/fs/btrfs/disk-io.c
-@@ -17,6 +17,7 @@
- #include <linux/semaphore.h>
- #include <linux/error-injection.h>
- #include <linux/crc32c.h>
-+#include <linux/sched/mm.h>
- #include <asm/unaligned.h>
- #include "ctree.h"
- #include "disk-io.h"
-@@ -1258,10 +1259,17 @@ struct btrfs_root *btrfs_create_tree(struct btrfs_trans_handle *trans,
- 	struct btrfs_root *tree_root = fs_info->tree_root;
- 	struct btrfs_root *root;
- 	struct btrfs_key key;
-+	unsigned int nofs_flag;
- 	int ret = 0;
- 	uuid_le uuid = NULL_UUID_LE;
- 
-+	/*
-+	 * We're holding a transaction handle, so use a NOFS memory allocation
-+	 * context to avoid deadlock if reclaim happens.
-+	 */
-+	nofs_flag = memalloc_nofs_save();
- 	root = btrfs_alloc_root(fs_info, GFP_KERNEL);
-+	memalloc_nofs_restore(nofs_flag);
- 	if (!root)
- 		return ERR_PTR(-ENOMEM);
- 
-diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
-index d81035b7ea7d..1b68700bc1c5 100644
---- a/fs/btrfs/extent-tree.c
-+++ b/fs/btrfs/extent-tree.c
-@@ -4808,6 +4808,7 @@ skip_async:
- }
- 
- struct reserve_ticket {
-+	u64 orig_bytes;
- 	u64 bytes;
- 	int error;
- 	struct list_head list;
-@@ -5030,7 +5031,7 @@ static inline int need_do_async_reclaim(struct btrfs_fs_info *fs_info,
- 		!test_bit(BTRFS_FS_STATE_REMOUNTING, &fs_info->fs_state));
- }
- 
--static void wake_all_tickets(struct list_head *head)
-+static bool wake_all_tickets(struct list_head *head)
- {
- 	struct reserve_ticket *ticket;
- 
-@@ -5039,7 +5040,10 @@ static void wake_all_tickets(struct list_head *head)
- 		list_del_init(&ticket->list);
- 		ticket->error = -ENOSPC;
- 		wake_up(&ticket->wait);
-+		if (ticket->bytes != ticket->orig_bytes)
-+			return true;
- 	}
-+	return false;
- }
- 
- /*
-@@ -5094,8 +5098,12 @@ static void btrfs_async_reclaim_metadata_space(struct work_struct *work)
- 		if (flush_state > COMMIT_TRANS) {
- 			commit_cycles++;
- 			if (commit_cycles > 2) {
--				wake_all_tickets(&space_info->tickets);
--				space_info->flush = 0;
-+				if (wake_all_tickets(&space_info->tickets)) {
-+					flush_state = FLUSH_DELAYED_ITEMS_NR;
-+					commit_cycles--;
-+				} else {
-+					space_info->flush = 0;
-+				}
- 			} else {
- 				flush_state = FLUSH_DELAYED_ITEMS_NR;
- 			}
-@@ -5147,10 +5155,11 @@ static void priority_reclaim_metadata_space(struct btrfs_fs_info *fs_info,
- 
- static int wait_reserve_ticket(struct btrfs_fs_info *fs_info,
- 			       struct btrfs_space_info *space_info,
--			       struct reserve_ticket *ticket, u64 orig_bytes)
-+			       struct reserve_ticket *ticket)
- 
- {
- 	DEFINE_WAIT(wait);
-+	u64 reclaim_bytes = 0;
- 	int ret = 0;
- 
- 	spin_lock(&space_info->lock);
-@@ -5171,14 +5180,12 @@ static int wait_reserve_ticket(struct btrfs_fs_info *fs_info,
- 		ret = ticket->error;
- 	if (!list_empty(&ticket->list))
- 		list_del_init(&ticket->list);
--	if (ticket->bytes && ticket->bytes < orig_bytes) {
--		u64 num_bytes = orig_bytes - ticket->bytes;
--		update_bytes_may_use(space_info, -num_bytes);
--		trace_btrfs_space_reservation(fs_info, "space_info",
--					      space_info->flags, num_bytes, 0);
--	}
-+	if (ticket->bytes && ticket->bytes < ticket->orig_bytes)
-+		reclaim_bytes = ticket->orig_bytes - ticket->bytes;
- 	spin_unlock(&space_info->lock);
- 
-+	if (reclaim_bytes)
-+		space_info_add_old_bytes(fs_info, space_info, reclaim_bytes);
- 	return ret;
- }
- 
-@@ -5204,6 +5211,7 @@ static int __reserve_metadata_bytes(struct btrfs_fs_info *fs_info,
- {
- 	struct reserve_ticket ticket;
- 	u64 used;
-+	u64 reclaim_bytes = 0;
- 	int ret = 0;
- 
- 	ASSERT(orig_bytes);
-@@ -5239,6 +5247,7 @@ static int __reserve_metadata_bytes(struct btrfs_fs_info *fs_info,
- 	 * the list and we will do our own flushing further down.
- 	 */
- 	if (ret && flush != BTRFS_RESERVE_NO_FLUSH) {
-+		ticket.orig_bytes = orig_bytes;
- 		ticket.bytes = orig_bytes;
- 		ticket.error = 0;
- 		init_waitqueue_head(&ticket.wait);
-@@ -5279,25 +5288,21 @@ static int __reserve_metadata_bytes(struct btrfs_fs_info *fs_info,
- 		return ret;
- 
- 	if (flush == BTRFS_RESERVE_FLUSH_ALL)
--		return wait_reserve_ticket(fs_info, space_info, &ticket,
--					   orig_bytes);
-+		return wait_reserve_ticket(fs_info, space_info, &ticket);
- 
- 	ret = 0;
- 	priority_reclaim_metadata_space(fs_info, space_info, &ticket);
- 	spin_lock(&space_info->lock);
- 	if (ticket.bytes) {
--		if (ticket.bytes < orig_bytes) {
--			u64 num_bytes = orig_bytes - ticket.bytes;
--			update_bytes_may_use(space_info, -num_bytes);
--			trace_btrfs_space_reservation(fs_info, "space_info",
--						      space_info->flags,
--						      num_bytes, 0);
--
--		}
-+		if (ticket.bytes < orig_bytes)
-+			reclaim_bytes = orig_bytes - ticket.bytes;
- 		list_del_init(&ticket.list);
- 		ret = -ENOSPC;
- 	}
- 	spin_unlock(&space_info->lock);
-+
-+	if (reclaim_bytes)
-+		space_info_add_old_bytes(fs_info, space_info, reclaim_bytes);
- 	ASSERT(list_empty(&ticket.list));
- 	return ret;
- }
-@@ -6115,7 +6120,7 @@ static void btrfs_calculate_inode_block_rsv_size(struct btrfs_fs_info *fs_info,
- 	 *
- 	 * This is overestimating in most cases.
- 	 */
--	qgroup_rsv_size = outstanding_extents * fs_info->nodesize;
-+	qgroup_rsv_size = (u64)outstanding_extents * fs_info->nodesize;
- 
- 	spin_lock(&block_rsv->lock);
- 	block_rsv->size = reserve_size;
-@@ -8690,6 +8695,8 @@ struct walk_control {
- 	u64 refs[BTRFS_MAX_LEVEL];
- 	u64 flags[BTRFS_MAX_LEVEL];
- 	struct btrfs_key update_progress;
-+	struct btrfs_key drop_progress;
-+	int drop_level;
- 	int stage;
- 	int level;
- 	int shared_level;
-@@ -9028,6 +9035,16 @@ skip:
- 					     ret);
- 			}
- 		}
-+
-+		/*
-+		 * We need to update the next key in our walk control so we can
-+		 * update the drop_progress key accordingly.  We don't care if
-+		 * find_next_key doesn't find a key because that means we're at
-+		 * the end and are going to clean up now.
-+		 */
-+		wc->drop_level = level;
-+		find_next_key(path, level, &wc->drop_progress);
-+
- 		ret = btrfs_free_extent(trans, root, bytenr, fs_info->nodesize,
- 					parent, root->root_key.objectid,
- 					level - 1, 0);
-@@ -9378,12 +9395,14 @@ int btrfs_drop_snapshot(struct btrfs_root *root,
- 		}
- 
- 		if (wc->stage == DROP_REFERENCE) {
--			level = wc->level;
--			btrfs_node_key(path->nodes[level],
--				       &root_item->drop_progress,
--				       path->slots[level]);
--			root_item->drop_level = level;
--		}
-+			wc->drop_level = wc->level;
-+			btrfs_node_key_to_cpu(path->nodes[wc->drop_level],
-+					      &wc->drop_progress,
-+					      path->slots[wc->drop_level]);
-+		}
-+		btrfs_cpu_key_to_disk(&root_item->drop_progress,
-+				      &wc->drop_progress);
-+		root_item->drop_level = wc->drop_level;
- 
- 		BUG_ON(wc->level == 0);
- 		if (btrfs_should_end_transaction(trans) ||
-diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
-index 52abe4082680..1bfb7207bbf0 100644
---- a/fs/btrfs/extent_io.c
-+++ b/fs/btrfs/extent_io.c
-@@ -2985,11 +2985,11 @@ static int __do_readpage(struct extent_io_tree *tree,
- 		 */
- 		if (test_bit(EXTENT_FLAG_COMPRESSED, &em->flags) &&
- 		    prev_em_start && *prev_em_start != (u64)-1 &&
--		    *prev_em_start != em->orig_start)
-+		    *prev_em_start != em->start)
- 			force_bio_submit = true;
- 
- 		if (prev_em_start)
--			*prev_em_start = em->orig_start;
-+			*prev_em_start = em->start;
- 
- 		free_extent_map(em);
- 		em = NULL;
-diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
-index 9c8e1734429c..1d64a6b8e413 100644
---- a/fs/btrfs/ioctl.c
-+++ b/fs/btrfs/ioctl.c
-@@ -501,6 +501,16 @@ static noinline int btrfs_ioctl_fitrim(struct file *file, void __user *arg)
- 	if (!capable(CAP_SYS_ADMIN))
- 		return -EPERM;
- 
-+	/*
-+	 * If the fs is mounted with nologreplay, which requires it to be
-+	 * mounted in RO mode as well, we can not allow discard on free space
-+	 * inside block groups, because log trees refer to extents that are not
-+	 * pinned in a block group's free space cache (pinning the extents is
-+	 * precisely the first phase of replaying a log tree).
-+	 */
-+	if (btrfs_test_opt(fs_info, NOLOGREPLAY))
-+		return -EROFS;
-+
- 	rcu_read_lock();
- 	list_for_each_entry_rcu(device, &fs_info->fs_devices->devices,
- 				dev_list) {
-@@ -3206,21 +3216,6 @@ out:
- 	return ret;
- }
- 
--static void btrfs_double_inode_unlock(struct inode *inode1, struct inode *inode2)
--{
--	inode_unlock(inode1);
--	inode_unlock(inode2);
--}
--
--static void btrfs_double_inode_lock(struct inode *inode1, struct inode *inode2)
--{
--	if (inode1 < inode2)
--		swap(inode1, inode2);
--
--	inode_lock_nested(inode1, I_MUTEX_PARENT);
--	inode_lock_nested(inode2, I_MUTEX_CHILD);
--}
--
- static void btrfs_double_extent_unlock(struct inode *inode1, u64 loff1,
- 				       struct inode *inode2, u64 loff2, u64 len)
- {
-@@ -3989,7 +3984,7 @@ static int btrfs_remap_file_range_prep(struct file *file_in, loff_t pos_in,
- 	if (same_inode)
- 		inode_lock(inode_in);
- 	else
--		btrfs_double_inode_lock(inode_in, inode_out);
-+		lock_two_nondirectories(inode_in, inode_out);
- 
- 	/*
- 	 * Now that the inodes are locked, we need to start writeback ourselves
-@@ -4039,7 +4034,7 @@ static int btrfs_remap_file_range_prep(struct file *file_in, loff_t pos_in,
- 	if (same_inode)
- 		inode_unlock(inode_in);
- 	else
--		btrfs_double_inode_unlock(inode_in, inode_out);
-+		unlock_two_nondirectories(inode_in, inode_out);
- 
- 	return ret;
- }
-@@ -4069,7 +4064,7 @@ loff_t btrfs_remap_file_range(struct file *src_file, loff_t off,
- 	if (same_inode)
- 		inode_unlock(src_inode);
- 	else
--		btrfs_double_inode_unlock(src_inode, dst_inode);
-+		unlock_two_nondirectories(src_inode, dst_inode);
- 
- 	return ret < 0 ? ret : len;
- }
-diff --git a/fs/btrfs/props.c b/fs/btrfs/props.c
-index dc6140013ae8..61d22a56c0ba 100644
---- a/fs/btrfs/props.c
-+++ b/fs/btrfs/props.c
-@@ -366,11 +366,11 @@ int btrfs_subvol_inherit_props(struct btrfs_trans_handle *trans,
- 
- static int prop_compression_validate(const char *value, size_t len)
- {
--	if (!strncmp("lzo", value, len))
-+	if (!strncmp("lzo", value, 3))
- 		return 0;
--	else if (!strncmp("zlib", value, len))
-+	else if (!strncmp("zlib", value, 4))
- 		return 0;
--	else if (!strncmp("zstd", value, len))
-+	else if (!strncmp("zstd", value, 4))
- 		return 0;
- 
- 	return -EINVAL;
-@@ -396,7 +396,7 @@ static int prop_compression_apply(struct inode *inode,
- 		btrfs_set_fs_incompat(fs_info, COMPRESS_LZO);
- 	} else if (!strncmp("zlib", value, 4)) {
- 		type = BTRFS_COMPRESS_ZLIB;
--	} else if (!strncmp("zstd", value, len)) {
-+	} else if (!strncmp("zstd", value, 4)) {
- 		type = BTRFS_COMPRESS_ZSTD;
- 		btrfs_set_fs_incompat(fs_info, COMPRESS_ZSTD);
- 	} else {
-diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
-index 4e473a998219..e28fb43e943b 100644
---- a/fs/btrfs/qgroup.c
-+++ b/fs/btrfs/qgroup.c
-@@ -1917,8 +1917,8 @@ static int qgroup_trace_new_subtree_blocks(struct btrfs_trans_handle* trans,
- 	int i;
- 
- 	/* Level sanity check */
--	if (cur_level < 0 || cur_level >= BTRFS_MAX_LEVEL ||
--	    root_level < 0 || root_level >= BTRFS_MAX_LEVEL ||
-+	if (cur_level < 0 || cur_level >= BTRFS_MAX_LEVEL - 1 ||
-+	    root_level < 0 || root_level >= BTRFS_MAX_LEVEL - 1 ||
- 	    root_level < cur_level) {
- 		btrfs_err_rl(fs_info,
- 			"%s: bad levels, cur_level=%d root_level=%d",
-@@ -2842,16 +2842,15 @@ out:
- /*
-  * Two limits to commit transaction in advance.
-  *
-- * For RATIO, it will be 1/RATIO of the remaining limit
-- * (excluding data and prealloc meta) as threshold.
-+ * For RATIO, it will be 1/RATIO of the remaining limit as threshold.
-  * For SIZE, it will be in byte unit as threshold.
-  */
--#define QGROUP_PERTRANS_RATIO		32
--#define QGROUP_PERTRANS_SIZE		SZ_32M
-+#define QGROUP_FREE_RATIO		32
-+#define QGROUP_FREE_SIZE		SZ_32M
- static bool qgroup_check_limits(struct btrfs_fs_info *fs_info,
- 				const struct btrfs_qgroup *qg, u64 num_bytes)
- {
--	u64 limit;
-+	u64 free;
- 	u64 threshold;
- 
- 	if ((qg->lim_flags & BTRFS_QGROUP_LIMIT_MAX_RFER) &&
-@@ -2870,20 +2869,21 @@ static bool qgroup_check_limits(struct btrfs_fs_info *fs_info,
- 	 */
- 	if ((qg->lim_flags & (BTRFS_QGROUP_LIMIT_MAX_RFER |
- 			      BTRFS_QGROUP_LIMIT_MAX_EXCL))) {
--		if (qg->lim_flags & BTRFS_QGROUP_LIMIT_MAX_EXCL)
--			limit = qg->max_excl;
--		else
--			limit = qg->max_rfer;
--		threshold = (limit - qg->rsv.values[BTRFS_QGROUP_RSV_DATA] -
--			    qg->rsv.values[BTRFS_QGROUP_RSV_META_PREALLOC]) /
--			    QGROUP_PERTRANS_RATIO;
--		threshold = min_t(u64, threshold, QGROUP_PERTRANS_SIZE);
-+		if (qg->lim_flags & BTRFS_QGROUP_LIMIT_MAX_EXCL) {
-+			free = qg->max_excl - qgroup_rsv_total(qg) - qg->excl;
-+			threshold = min_t(u64, qg->max_excl / QGROUP_FREE_RATIO,
-+					  QGROUP_FREE_SIZE);
-+		} else {
-+			free = qg->max_rfer - qgroup_rsv_total(qg) - qg->rfer;
-+			threshold = min_t(u64, qg->max_rfer / QGROUP_FREE_RATIO,
-+					  QGROUP_FREE_SIZE);
-+		}
- 
- 		/*
- 		 * Use transaction_kthread to commit transaction, so we no
- 		 * longer need to bother nested transaction nor lock context.
- 		 */
--		if (qg->rsv.values[BTRFS_QGROUP_RSV_META_PERTRANS] > threshold)
-+		if (free < threshold)
- 			btrfs_commit_transaction_locksafe(fs_info);
- 	}
- 
-diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
-index e74455eb42f9..6976e2280771 100644
---- a/fs/btrfs/raid56.c
-+++ b/fs/btrfs/raid56.c
-@@ -2429,8 +2429,9 @@ static noinline void finish_parity_scrub(struct btrfs_raid_bio *rbio,
- 			bitmap_clear(rbio->dbitmap, pagenr, 1);
- 		kunmap(p);
- 
--		for (stripe = 0; stripe < rbio->real_stripes; stripe++)
-+		for (stripe = 0; stripe < nr_data; stripe++)
- 			kunmap(page_in_rbio(rbio, stripe, pagenr, 0));
-+		kunmap(p_page);
- 	}
- 
- 	__free_page(p_page);
-diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
-index 6dcd36d7b849..1aeac70d0531 100644
---- a/fs/btrfs/scrub.c
-+++ b/fs/btrfs/scrub.c
-@@ -584,6 +584,7 @@ static noinline_for_stack struct scrub_ctx *scrub_setup_ctx(
- 	sctx->pages_per_rd_bio = SCRUB_PAGES_PER_RD_BIO;
- 	sctx->curr = -1;
- 	sctx->fs_info = fs_info;
-+	INIT_LIST_HEAD(&sctx->csum_list);
- 	for (i = 0; i < SCRUB_BIOS_PER_SCTX; ++i) {
- 		struct scrub_bio *sbio;
- 
-@@ -608,7 +609,6 @@ static noinline_for_stack struct scrub_ctx *scrub_setup_ctx(
- 	atomic_set(&sctx->workers_pending, 0);
- 	atomic_set(&sctx->cancel_req, 0);
- 	sctx->csum_size = btrfs_super_csum_size(fs_info->super_copy);
--	INIT_LIST_HEAD(&sctx->csum_list);
- 
- 	spin_lock_init(&sctx->list_lock);
- 	spin_lock_init(&sctx->stat_lock);
-@@ -3770,16 +3770,6 @@ fail_scrub_workers:
- 	return -ENOMEM;
- }
- 
--static noinline_for_stack void scrub_workers_put(struct btrfs_fs_info *fs_info)
--{
--	if (--fs_info->scrub_workers_refcnt == 0) {
--		btrfs_destroy_workqueue(fs_info->scrub_workers);
--		btrfs_destroy_workqueue(fs_info->scrub_wr_completion_workers);
--		btrfs_destroy_workqueue(fs_info->scrub_parity_workers);
--	}
--	WARN_ON(fs_info->scrub_workers_refcnt < 0);
--}
--
- int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
- 		    u64 end, struct btrfs_scrub_progress *progress,
- 		    int readonly, int is_dev_replace)
-@@ -3788,6 +3778,9 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
- 	int ret;
- 	struct btrfs_device *dev;
- 	unsigned int nofs_flag;
-+	struct btrfs_workqueue *scrub_workers = NULL;
-+	struct btrfs_workqueue *scrub_wr_comp = NULL;
-+	struct btrfs_workqueue *scrub_parity = NULL;
- 
- 	if (btrfs_fs_closing(fs_info))
- 		return -EINVAL;
-@@ -3927,9 +3920,16 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
- 
- 	mutex_lock(&fs_info->scrub_lock);
- 	dev->scrub_ctx = NULL;
--	scrub_workers_put(fs_info);
-+	if (--fs_info->scrub_workers_refcnt == 0) {
-+		scrub_workers = fs_info->scrub_workers;
-+		scrub_wr_comp = fs_info->scrub_wr_completion_workers;
-+		scrub_parity = fs_info->scrub_parity_workers;
-+	}
- 	mutex_unlock(&fs_info->scrub_lock);
- 
-+	btrfs_destroy_workqueue(scrub_workers);
-+	btrfs_destroy_workqueue(scrub_wr_comp);
-+	btrfs_destroy_workqueue(scrub_parity);
- 	scrub_put_ctx(sctx);
- 
- 	return ret;
-diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
-index ac232b3d6d7e..7f3b74a55073 100644
---- a/fs/btrfs/tree-log.c
-+++ b/fs/btrfs/tree-log.c
-@@ -3517,9 +3517,16 @@ static noinline int log_dir_items(struct btrfs_trans_handle *trans,
- 	}
- 	btrfs_release_path(path);
- 
--	/* find the first key from this transaction again */
-+	/*
-+	 * Find the first key from this transaction again.  See the note for
-+	 * log_new_dir_dentries, if we're logging a directory recursively we
-+	 * won't be holding its i_mutex, which means we can modify the directory
-+	 * while we're logging it.  If we remove an entry between our first
-+	 * search and this search we'll not find the key again and can just
-+	 * bail.
-+	 */
- 	ret = btrfs_search_slot(NULL, root, &min_key, path, 0, 0);
--	if (WARN_ON(ret != 0))
-+	if (ret != 0)
- 		goto done;
- 
- 	/*
-@@ -4481,6 +4488,19 @@ static int logged_inode_size(struct btrfs_root *log, struct btrfs_inode *inode,
- 		item = btrfs_item_ptr(path->nodes[0], path->slots[0],
- 				      struct btrfs_inode_item);
- 		*size_ret = btrfs_inode_size(path->nodes[0], item);
-+		/*
-+		 * If the in-memory inode's i_size is smaller then the inode
-+		 * size stored in the btree, return the inode's i_size, so
-+		 * that we get a correct inode size after replaying the log
-+		 * when before a power failure we had a shrinking truncate
-+		 * followed by addition of a new name (rename / new hard link).
-+		 * Otherwise return the inode size from the btree, to avoid
-+		 * data loss when replaying a log due to previously doing a
-+		 * write that expands the inode's size and logging a new name
-+		 * immediately after.
-+		 */
-+		if (*size_ret > inode->vfs_inode.i_size)
-+			*size_ret = inode->vfs_inode.i_size;
- 	}
- 
- 	btrfs_release_path(path);
-@@ -4642,15 +4662,8 @@ static int btrfs_log_trailing_hole(struct btrfs_trans_handle *trans,
- 					struct btrfs_file_extent_item);
- 
- 		if (btrfs_file_extent_type(leaf, extent) ==
--		    BTRFS_FILE_EXTENT_INLINE) {
--			len = btrfs_file_extent_ram_bytes(leaf, extent);
--			ASSERT(len == i_size ||
--			       (len == fs_info->sectorsize &&
--				btrfs_file_extent_compression(leaf, extent) !=
--				BTRFS_COMPRESS_NONE) ||
--			       (len < i_size && i_size < fs_info->sectorsize));
-+		    BTRFS_FILE_EXTENT_INLINE)
- 			return 0;
--		}
- 
- 		len = btrfs_file_extent_num_bytes(leaf, extent);
- 		/* Last extent goes beyond i_size, no need to log a hole. */
-diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
-index 15561926ab32..88a323a453d8 100644
---- a/fs/btrfs/volumes.c
-+++ b/fs/btrfs/volumes.c
-@@ -6413,7 +6413,7 @@ static void btrfs_end_bio(struct bio *bio)
- 				if (bio_op(bio) == REQ_OP_WRITE)
- 					btrfs_dev_stat_inc_and_print(dev,
- 						BTRFS_DEV_STAT_WRITE_ERRS);
--				else
-+				else if (!(bio->bi_opf & REQ_RAHEAD))
- 					btrfs_dev_stat_inc_and_print(dev,
- 						BTRFS_DEV_STAT_READ_ERRS);
- 				if (bio->bi_opf & REQ_PREFLUSH)
-@@ -6782,10 +6782,10 @@ static int btrfs_check_chunk_valid(struct btrfs_fs_info *fs_info,
- 	}
- 
- 	if ((type & BTRFS_BLOCK_GROUP_RAID10 && sub_stripes != 2) ||
--	    (type & BTRFS_BLOCK_GROUP_RAID1 && num_stripes < 1) ||
-+	    (type & BTRFS_BLOCK_GROUP_RAID1 && num_stripes != 2) ||
- 	    (type & BTRFS_BLOCK_GROUP_RAID5 && num_stripes < 2) ||
- 	    (type & BTRFS_BLOCK_GROUP_RAID6 && num_stripes < 3) ||
--	    (type & BTRFS_BLOCK_GROUP_DUP && num_stripes > 2) ||
-+	    (type & BTRFS_BLOCK_GROUP_DUP && num_stripes != 2) ||
- 	    ((type & BTRFS_BLOCK_GROUP_PROFILE_MASK) == 0 &&
- 	     num_stripes != 1)) {
- 		btrfs_err(fs_info,
-diff --git a/fs/buffer.c b/fs/buffer.c
-index 48318fb74938..cab7a026876b 100644
---- a/fs/buffer.c
-+++ b/fs/buffer.c
-@@ -3027,6 +3027,13 @@ void guard_bio_eod(int op, struct bio *bio)
- 	/* Uhhuh. We've got a bio that straddles the device size! */
- 	truncated_bytes = bio->bi_iter.bi_size - (maxsector << 9);
- 
-+	/*
-+	 * The bio contains more than one segment which spans EOD, just return
-+	 * and let IO layer turn it into an EIO
-+	 */
-+	if (truncated_bytes > bvec->bv_len)
-+		return;
-+
- 	/* Truncate the bio.. */
- 	bio->bi_iter.bi_size -= truncated_bytes;
- 	bvec->bv_len -= truncated_bytes;
-diff --git a/fs/cifs/cifs_dfs_ref.c b/fs/cifs/cifs_dfs_ref.c
-index d9b99abe1243..5d83c924cc47 100644
---- a/fs/cifs/cifs_dfs_ref.c
-+++ b/fs/cifs/cifs_dfs_ref.c
-@@ -285,9 +285,9 @@ static void dump_referral(const struct dfs_info3_param *ref)
- {
- 	cifs_dbg(FYI, "DFS: ref path: %s\n", ref->path_name);
- 	cifs_dbg(FYI, "DFS: node path: %s\n", ref->node_name);
--	cifs_dbg(FYI, "DFS: fl: %hd, srv_type: %hd\n",
-+	cifs_dbg(FYI, "DFS: fl: %d, srv_type: %d\n",
- 		 ref->flags, ref->server_type);
--	cifs_dbg(FYI, "DFS: ref_flags: %hd, path_consumed: %hd\n",
-+	cifs_dbg(FYI, "DFS: ref_flags: %d, path_consumed: %d\n",
- 		 ref->ref_flag, ref->path_consumed);
- }
- 
-diff --git a/fs/cifs/cifs_fs_sb.h b/fs/cifs/cifs_fs_sb.h
-index 42f0d67f1054..ed49222abecb 100644
---- a/fs/cifs/cifs_fs_sb.h
-+++ b/fs/cifs/cifs_fs_sb.h
-@@ -58,6 +58,7 @@ struct cifs_sb_info {
- 	spinlock_t tlink_tree_lock;
- 	struct tcon_link *master_tlink;
- 	struct nls_table *local_nls;
-+	unsigned int bsize;
- 	unsigned int rsize;
- 	unsigned int wsize;
- 	unsigned long actimeo; /* attribute cache timeout (jiffies) */
-diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
-index 62d48d486d8f..07cad54b84f1 100644
---- a/fs/cifs/cifsfs.c
-+++ b/fs/cifs/cifsfs.c
-@@ -554,10 +554,13 @@ cifs_show_options(struct seq_file *s, struct dentry *root)
- 
- 	seq_printf(s, ",rsize=%u", cifs_sb->rsize);
- 	seq_printf(s, ",wsize=%u", cifs_sb->wsize);
-+	seq_printf(s, ",bsize=%u", cifs_sb->bsize);
- 	seq_printf(s, ",echo_interval=%lu",
- 			tcon->ses->server->echo_interval / HZ);
- 	if (tcon->snapshot_time)
- 		seq_printf(s, ",snapshot=%llu", tcon->snapshot_time);
-+	if (tcon->handle_timeout)
-+		seq_printf(s, ",handletimeout=%u", tcon->handle_timeout);
- 	/* convert actimeo and display it in seconds */
- 	seq_printf(s, ",actimeo=%lu", cifs_sb->actimeo / HZ);
- 
-diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
-index 94dbdbe5be34..6c934ab3722b 100644
---- a/fs/cifs/cifsglob.h
-+++ b/fs/cifs/cifsglob.h
-@@ -59,6 +59,12 @@
-  */
- #define CIFS_MAX_ACTIMEO (1 << 30)
- 
-+/*
-+ * Max persistent and resilient handle timeout (milliseconds).
-+ * Windows durable max was 960000 (16 minutes)
-+ */
-+#define SMB3_MAX_HANDLE_TIMEOUT 960000
-+
- /*
-  * MAX_REQ is the maximum number of requests that WE will send
-  * on one socket concurrently.
-@@ -236,6 +242,8 @@ struct smb_version_operations {
- 	int * (*get_credits_field)(struct TCP_Server_Info *, const int);
- 	unsigned int (*get_credits)(struct mid_q_entry *);
- 	__u64 (*get_next_mid)(struct TCP_Server_Info *);
-+	void (*revert_current_mid)(struct TCP_Server_Info *server,
-+				   const unsigned int val);
- 	/* data offset from read response message */
- 	unsigned int (*read_data_offset)(char *);
- 	/*
-@@ -557,6 +565,7 @@ struct smb_vol {
- 	bool resilient:1; /* noresilient not required since not fored for CA */
- 	bool domainauto:1;
- 	bool rdma:1;
-+	unsigned int bsize;
- 	unsigned int rsize;
- 	unsigned int wsize;
- 	bool sockopt_tcp_nodelay:1;
-@@ -569,6 +578,7 @@ struct smb_vol {
- 	struct nls_table *local_nls;
- 	unsigned int echo_interval; /* echo interval in secs */
- 	__u64 snapshot_time; /* needed for timewarp tokens */
-+	__u32 handle_timeout; /* persistent and durable handle timeout in ms */
- 	unsigned int max_credits; /* smb3 max_credits 10 < credits < 60000 */
- };
- 
-@@ -770,6 +780,22 @@ get_next_mid(struct TCP_Server_Info *server)
- 	return cpu_to_le16(mid);
- }
- 
-+static inline void
-+revert_current_mid(struct TCP_Server_Info *server, const unsigned int val)
-+{
-+	if (server->ops->revert_current_mid)
-+		server->ops->revert_current_mid(server, val);
-+}
-+
-+static inline void
-+revert_current_mid_from_hdr(struct TCP_Server_Info *server,
-+			    const struct smb2_sync_hdr *shdr)
-+{
-+	unsigned int num = le16_to_cpu(shdr->CreditCharge);
-+
-+	return revert_current_mid(server, num > 0 ? num : 1);
-+}
-+
- static inline __u16
- get_mid(const struct smb_hdr *smb)
- {
-@@ -1009,6 +1035,7 @@ struct cifs_tcon {
- 	__u32 vol_serial_number;
- 	__le64 vol_create_time;
- 	__u64 snapshot_time; /* for timewarp tokens - timestamp of snapshot */
-+	__u32 handle_timeout; /* persistent and durable handle timeout in ms */
- 	__u32 ss_flags;		/* sector size flags */
- 	__u32 perf_sector_size; /* best sector size for perf */
- 	__u32 max_chunks;
-@@ -1422,6 +1449,7 @@ struct mid_q_entry {
- 	struct kref refcount;
- 	struct TCP_Server_Info *server;	/* server corresponding to this mid */
- 	__u64 mid;		/* multiplex id */
-+	__u16 credits;		/* number of credits consumed by this mid */
- 	__u32 pid;		/* process id */
- 	__u32 sequence_number;  /* for CIFS signing */
- 	unsigned long when_alloc;  /* when mid was created */
-diff --git a/fs/cifs/cifssmb.c b/fs/cifs/cifssmb.c
-index bb54ccf8481c..551924beb86f 100644
---- a/fs/cifs/cifssmb.c
-+++ b/fs/cifs/cifssmb.c
-@@ -2125,12 +2125,13 @@ cifs_writev_requeue(struct cifs_writedata *wdata)
- 
- 		wdata2->cfile = find_writable_file(CIFS_I(inode), false);
- 		if (!wdata2->cfile) {
--			cifs_dbg(VFS, "No writable handles for inode\n");
-+			cifs_dbg(VFS, "No writable handle to retry writepages\n");
- 			rc = -EBADF;
--			break;
-+		} else {
-+			wdata2->pid = wdata2->cfile->pid;
-+			rc = server->ops->async_writev(wdata2,
-+						       cifs_writedata_release);
- 		}
--		wdata2->pid = wdata2->cfile->pid;
--		rc = server->ops->async_writev(wdata2, cifs_writedata_release);
- 
- 		for (j = 0; j < nr_pages; j++) {
- 			unlock_page(wdata2->pages[j]);
-@@ -2145,6 +2146,7 @@ cifs_writev_requeue(struct cifs_writedata *wdata)
- 			kref_put(&wdata2->refcount, cifs_writedata_release);
- 			if (is_retryable_error(rc))
- 				continue;
-+			i += nr_pages;
- 			break;
- 		}
- 
-@@ -2152,6 +2154,13 @@ cifs_writev_requeue(struct cifs_writedata *wdata)
- 		i += nr_pages;
- 	} while (i < wdata->nr_pages);
- 
-+	/* cleanup remaining pages from the original wdata */
-+	for (; i < wdata->nr_pages; i++) {
-+		SetPageError(wdata->pages[i]);
-+		end_page_writeback(wdata->pages[i]);
-+		put_page(wdata->pages[i]);
-+	}
-+
- 	if (rc != 0 && !is_retryable_error(rc))
- 		mapping_set_error(inode->i_mapping, rc);
- 	kref_put(&wdata->refcount, cifs_writedata_release);
-diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
-index 8463c940e0e5..44e6ec85f832 100644
---- a/fs/cifs/connect.c
-+++ b/fs/cifs/connect.c
-@@ -102,8 +102,8 @@ enum {
- 	Opt_backupuid, Opt_backupgid, Opt_uid,
- 	Opt_cruid, Opt_gid, Opt_file_mode,
- 	Opt_dirmode, Opt_port,
--	Opt_rsize, Opt_wsize, Opt_actimeo,
--	Opt_echo_interval, Opt_max_credits,
-+	Opt_blocksize, Opt_rsize, Opt_wsize, Opt_actimeo,
-+	Opt_echo_interval, Opt_max_credits, Opt_handletimeout,
- 	Opt_snapshot,
- 
- 	/* Mount options which take string value */
-@@ -204,9 +204,11 @@ static const match_table_t cifs_mount_option_tokens = {
- 	{ Opt_dirmode, "dirmode=%s" },
- 	{ Opt_dirmode, "dir_mode=%s" },
- 	{ Opt_port, "port=%s" },
-+	{ Opt_blocksize, "bsize=%s" },
- 	{ Opt_rsize, "rsize=%s" },
- 	{ Opt_wsize, "wsize=%s" },
- 	{ Opt_actimeo, "actimeo=%s" },
-+	{ Opt_handletimeout, "handletimeout=%s" },
- 	{ Opt_echo_interval, "echo_interval=%s" },
- 	{ Opt_max_credits, "max_credits=%s" },
- 	{ Opt_snapshot, "snapshot=%s" },
-@@ -1486,6 +1488,11 @@ cifs_parse_devname(const char *devname, struct smb_vol *vol)
- 	const char *delims = "/\\";
- 	size_t len;
- 
-+	if (unlikely(!devname || !*devname)) {
-+		cifs_dbg(VFS, "Device name not specified.\n");
-+		return -EINVAL;
-+	}
-+
- 	/* make sure we have a valid UNC double delimiter prefix */
- 	len = strspn(devname, delims);
- 	if (len != 2)
-@@ -1571,7 +1578,7 @@ cifs_parse_mount_options(const char *mountdata, const char *devname,
- 	vol->cred_uid = current_uid();
- 	vol->linux_uid = current_uid();
- 	vol->linux_gid = current_gid();
--
-+	vol->bsize = 1024 * 1024; /* can improve cp performance significantly */
- 	/*
- 	 * default to SFM style remapping of seven reserved characters
- 	 * unless user overrides it or we negotiate CIFS POSIX where
-@@ -1594,6 +1601,9 @@ cifs_parse_mount_options(const char *mountdata, const char *devname,
- 
- 	vol->actimeo = CIFS_DEF_ACTIMEO;
- 
-+	/* Most clients set timeout to 0, allows server to use its default */
-+	vol->handle_timeout = 0; /* See MS-SMB2 spec section 2.2.14.2.12 */
-+
- 	/* offer SMB2.1 and later (SMB3 etc). Secure and widely accepted */
- 	vol->ops = &smb30_operations;
- 	vol->vals = &smbdefault_values;
-@@ -1944,6 +1954,26 @@ cifs_parse_mount_options(const char *mountdata, const char *devname,
- 			}
- 			port = (unsigned short)option;
- 			break;
-+		case Opt_blocksize:
-+			if (get_option_ul(args, &option)) {
-+				cifs_dbg(VFS, "%s: Invalid blocksize value\n",
-+					__func__);
-+				goto cifs_parse_mount_err;
-+			}
-+			/*
-+			 * inode blocksize realistically should never need to be
-+			 * less than 16K or greater than 16M and default is 1MB.
-+			 * Note that small inode block sizes (e.g. 64K) can lead
-+			 * to very poor performance of common tools like cp and scp
-+			 */
-+			if ((option < CIFS_MAX_MSGSIZE) ||
-+			   (option > (4 * SMB3_DEFAULT_IOSIZE))) {
-+				cifs_dbg(VFS, "%s: Invalid blocksize\n",
-+					__func__);
-+				goto cifs_parse_mount_err;
-+			}
-+			vol->bsize = option;
-+			break;
- 		case Opt_rsize:
- 			if (get_option_ul(args, &option)) {
- 				cifs_dbg(VFS, "%s: Invalid rsize value\n",
-@@ -1972,6 +2002,18 @@ cifs_parse_mount_options(const char *mountdata, const char *devname,
- 				goto cifs_parse_mount_err;
- 			}
- 			break;
-+		case Opt_handletimeout:
-+			if (get_option_ul(args, &option)) {
-+				cifs_dbg(VFS, "%s: Invalid handletimeout value\n",
-+					 __func__);
-+				goto cifs_parse_mount_err;
-+			}
-+			vol->handle_timeout = option;
-+			if (vol->handle_timeout > SMB3_MAX_HANDLE_TIMEOUT) {
-+				cifs_dbg(VFS, "Invalid handle cache timeout, longer than 16 minutes\n");
-+				goto cifs_parse_mount_err;
-+			}
-+			break;
- 		case Opt_echo_interval:
- 			if (get_option_ul(args, &option)) {
- 				cifs_dbg(VFS, "%s: Invalid echo interval value\n",
-@@ -3138,6 +3180,8 @@ static int match_tcon(struct cifs_tcon *tcon, struct smb_vol *volume_info)
- 		return 0;
- 	if (tcon->snapshot_time != volume_info->snapshot_time)
- 		return 0;
-+	if (tcon->handle_timeout != volume_info->handle_timeout)
-+		return 0;
- 	return 1;
- }
- 
-@@ -3252,6 +3296,16 @@ cifs_get_tcon(struct cifs_ses *ses, struct smb_vol *volume_info)
- 			tcon->snapshot_time = volume_info->snapshot_time;
- 	}
- 
-+	if (volume_info->handle_timeout) {
-+		if (ses->server->vals->protocol_id == 0) {
-+			cifs_dbg(VFS,
-+			     "Use SMB2.1 or later for handle timeout option\n");
-+			rc = -EOPNOTSUPP;
-+			goto out_fail;
-+		} else
-+			tcon->handle_timeout = volume_info->handle_timeout;
-+	}
-+
- 	tcon->ses = ses;
- 	if (volume_info->password) {
- 		tcon->password = kstrdup(volume_info->password, GFP_KERNEL);
-@@ -3839,6 +3893,7 @@ int cifs_setup_cifs_sb(struct smb_vol *pvolume_info,
- 	spin_lock_init(&cifs_sb->tlink_tree_lock);
- 	cifs_sb->tlink_tree = RB_ROOT;
- 
-+	cifs_sb->bsize = pvolume_info->bsize;
- 	/*
- 	 * Temporarily set r/wsize for matching superblock. If we end up using
- 	 * new sb then client will later negotiate it downward if needed.
-diff --git a/fs/cifs/file.c b/fs/cifs/file.c
-index 659ce1b92c44..8d107587208f 100644
---- a/fs/cifs/file.c
-+++ b/fs/cifs/file.c
-@@ -1645,8 +1645,20 @@ cifs_setlk(struct file *file, struct file_lock *flock, __u32 type,
- 		rc = server->ops->mand_unlock_range(cfile, flock, xid);
- 
- out:
--	if (flock->fl_flags & FL_POSIX && !rc)
-+	if (flock->fl_flags & FL_POSIX) {
-+		/*
-+		 * If this is a request to remove all locks because we
-+		 * are closing the file, it doesn't matter if the
-+		 * unlocking failed as both cifs.ko and the SMB server
-+		 * remove the lock on file close
-+		 */
-+		if (rc) {
-+			cifs_dbg(VFS, "%s failed rc=%d\n", __func__, rc);
-+			if (!(flock->fl_flags & FL_CLOSE))
-+				return rc;
-+		}
- 		rc = locks_lock_file_wait(file, flock);
-+	}
- 	return rc;
- }
- 
-@@ -3028,14 +3040,16 @@ cifs_strict_writev(struct kiocb *iocb, struct iov_iter *from)
- 	 * these pages but not on the region from pos to ppos+len-1.
- 	 */
- 	written = cifs_user_writev(iocb, from);
--	if (written > 0 && CIFS_CACHE_READ(cinode)) {
-+	if (CIFS_CACHE_READ(cinode)) {
- 		/*
--		 * Windows 7 server can delay breaking level2 oplock if a write
--		 * request comes - break it on the client to prevent reading
--		 * an old data.
-+		 * We have read level caching and we have just sent a write
-+		 * request to the server thus making data in the cache stale.
-+		 * Zap the cache and set oplock/lease level to NONE to avoid
-+		 * reading stale data from the cache. All subsequent read
-+		 * operations will read new data from the server.
- 		 */
- 		cifs_zap_mapping(inode);
--		cifs_dbg(FYI, "Set no oplock for inode=%p after a write operation\n",
-+		cifs_dbg(FYI, "Set Oplock/Lease to NONE for inode=%p after write\n",
- 			 inode);
- 		cinode->oplock = 0;
- 	}
-diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
-index 478003644916..53fdb5df0d2e 100644
---- a/fs/cifs/inode.c
-+++ b/fs/cifs/inode.c
-@@ -2080,7 +2080,7 @@ int cifs_getattr(const struct path *path, struct kstat *stat,
- 		return rc;
- 
- 	generic_fillattr(inode, stat);
--	stat->blksize = CIFS_MAX_MSGSIZE;
-+	stat->blksize = cifs_sb->bsize;
- 	stat->ino = CIFS_I(inode)->uniqueid;
- 
- 	/* old CIFS Unix Extensions doesn't return create time */
-diff --git a/fs/cifs/smb1ops.c b/fs/cifs/smb1ops.c
-index 32a6c020478f..20a88776f04d 100644
---- a/fs/cifs/smb1ops.c
-+++ b/fs/cifs/smb1ops.c
-@@ -308,7 +308,7 @@ coalesce_t2(char *second_buf, struct smb_hdr *target_hdr)
- 	remaining = tgt_total_cnt - total_in_tgt;
- 
- 	if (remaining < 0) {
--		cifs_dbg(FYI, "Server sent too much data. tgt_total_cnt=%hu total_in_tgt=%hu\n",
-+		cifs_dbg(FYI, "Server sent too much data. tgt_total_cnt=%hu total_in_tgt=%u\n",
- 			 tgt_total_cnt, total_in_tgt);
- 		return -EPROTO;
- 	}
-diff --git a/fs/cifs/smb2file.c b/fs/cifs/smb2file.c
-index b204e84b87fb..b0e76d27d752 100644
---- a/fs/cifs/smb2file.c
-+++ b/fs/cifs/smb2file.c
-@@ -68,7 +68,9 @@ smb2_open_file(const unsigned int xid, struct cifs_open_parms *oparms,
- 
- 
- 	 if (oparms->tcon->use_resilient) {
--		nr_ioctl_req.Timeout = 0; /* use server default (120 seconds) */
-+		/* default timeout is 0, servers pick default (120 seconds) */
-+		nr_ioctl_req.Timeout =
-+			cpu_to_le32(oparms->tcon->handle_timeout);
- 		nr_ioctl_req.Reserved = 0;
- 		rc = SMB2_ioctl(xid, oparms->tcon, fid->persistent_fid,
- 			fid->volatile_fid, FSCTL_LMR_REQUEST_RESILIENCY,
-diff --git a/fs/cifs/smb2misc.c b/fs/cifs/smb2misc.c
-index 7b8b58fb4d3f..58700d2ba8cd 100644
---- a/fs/cifs/smb2misc.c
-+++ b/fs/cifs/smb2misc.c
-@@ -517,7 +517,6 @@ smb2_tcon_has_lease(struct cifs_tcon *tcon, struct smb2_lease_break *rsp,
- 	__u8 lease_state;
- 	struct list_head *tmp;
- 	struct cifsFileInfo *cfile;
--	struct TCP_Server_Info *server = tcon->ses->server;
- 	struct cifs_pending_open *open;
- 	struct cifsInodeInfo *cinode;
- 	int ack_req = le32_to_cpu(rsp->Flags &
-@@ -537,13 +536,25 @@ smb2_tcon_has_lease(struct cifs_tcon *tcon, struct smb2_lease_break *rsp,
- 		cifs_dbg(FYI, "lease key match, lease break 0x%x\n",
- 			 le32_to_cpu(rsp->NewLeaseState));
- 
--		server->ops->set_oplock_level(cinode, lease_state, 0, NULL);
--
- 		if (ack_req)
- 			cfile->oplock_break_cancelled = false;
- 		else
- 			cfile->oplock_break_cancelled = true;
- 
-+		set_bit(CIFS_INODE_PENDING_OPLOCK_BREAK, &cinode->flags);
-+
-+		/*
-+		 * Set or clear flags depending on the lease state being READ.
-+		 * HANDLE caching flag should be added when the client starts
-+		 * to defer closing remote file handles with HANDLE leases.
-+		 */
-+		if (lease_state & SMB2_LEASE_READ_CACHING_HE)
-+			set_bit(CIFS_INODE_DOWNGRADE_OPLOCK_TO_L2,
-+				&cinode->flags);
-+		else
-+			clear_bit(CIFS_INODE_DOWNGRADE_OPLOCK_TO_L2,
-+				  &cinode->flags);
-+
- 		queue_work(cifsoplockd_wq, &cfile->oplock_break);
- 		kfree(lw);
- 		return true;
-diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
-index 6f96e2292856..b29f711ab965 100644
---- a/fs/cifs/smb2ops.c
-+++ b/fs/cifs/smb2ops.c
-@@ -219,6 +219,15 @@ smb2_get_next_mid(struct TCP_Server_Info *server)
- 	return mid;
- }
- 
-+static void
-+smb2_revert_current_mid(struct TCP_Server_Info *server, const unsigned int val)
-+{
-+	spin_lock(&GlobalMid_Lock);
-+	if (server->CurrentMid >= val)
-+		server->CurrentMid -= val;
-+	spin_unlock(&GlobalMid_Lock);
-+}
-+
- static struct mid_q_entry *
- smb2_find_mid(struct TCP_Server_Info *server, char *buf)
- {
-@@ -2594,6 +2603,15 @@ smb2_downgrade_oplock(struct TCP_Server_Info *server,
- 		server->ops->set_oplock_level(cinode, 0, 0, NULL);
- }
- 
-+static void
-+smb21_downgrade_oplock(struct TCP_Server_Info *server,
-+		       struct cifsInodeInfo *cinode, bool set_level2)
-+{
-+	server->ops->set_oplock_level(cinode,
-+				      set_level2 ? SMB2_LEASE_READ_CACHING_HE :
-+				      0, 0, NULL);
-+}
-+
- static void
- smb2_set_oplock_level(struct cifsInodeInfo *cinode, __u32 oplock,
- 		      unsigned int epoch, bool *purge_cache)
-@@ -3541,6 +3559,7 @@ struct smb_version_operations smb20_operations = {
- 	.get_credits = smb2_get_credits,
- 	.wait_mtu_credits = cifs_wait_mtu_credits,
- 	.get_next_mid = smb2_get_next_mid,
-+	.revert_current_mid = smb2_revert_current_mid,
- 	.read_data_offset = smb2_read_data_offset,
- 	.read_data_length = smb2_read_data_length,
- 	.map_error = map_smb2_to_linux_error,
-@@ -3636,6 +3655,7 @@ struct smb_version_operations smb21_operations = {
- 	.get_credits = smb2_get_credits,
- 	.wait_mtu_credits = smb2_wait_mtu_credits,
- 	.get_next_mid = smb2_get_next_mid,
-+	.revert_current_mid = smb2_revert_current_mid,
- 	.read_data_offset = smb2_read_data_offset,
- 	.read_data_length = smb2_read_data_length,
- 	.map_error = map_smb2_to_linux_error,
-@@ -3646,7 +3666,7 @@ struct smb_version_operations smb21_operations = {
- 	.print_stats = smb2_print_stats,
- 	.is_oplock_break = smb2_is_valid_oplock_break,
- 	.handle_cancelled_mid = smb2_handle_cancelled_mid,
--	.downgrade_oplock = smb2_downgrade_oplock,
-+	.downgrade_oplock = smb21_downgrade_oplock,
- 	.need_neg = smb2_need_neg,
- 	.negotiate = smb2_negotiate,
- 	.negotiate_wsize = smb2_negotiate_wsize,
-@@ -3732,6 +3752,7 @@ struct smb_version_operations smb30_operations = {
- 	.get_credits = smb2_get_credits,
- 	.wait_mtu_credits = smb2_wait_mtu_credits,
- 	.get_next_mid = smb2_get_next_mid,
-+	.revert_current_mid = smb2_revert_current_mid,
- 	.read_data_offset = smb2_read_data_offset,
- 	.read_data_length = smb2_read_data_length,
- 	.map_error = map_smb2_to_linux_error,
-@@ -3743,7 +3764,7 @@ struct smb_version_operations smb30_operations = {
- 	.dump_share_caps = smb2_dump_share_caps,
- 	.is_oplock_break = smb2_is_valid_oplock_break,
- 	.handle_cancelled_mid = smb2_handle_cancelled_mid,
--	.downgrade_oplock = smb2_downgrade_oplock,
-+	.downgrade_oplock = smb21_downgrade_oplock,
- 	.need_neg = smb2_need_neg,
- 	.negotiate = smb2_negotiate,
- 	.negotiate_wsize = smb3_negotiate_wsize,
-@@ -3837,6 +3858,7 @@ struct smb_version_operations smb311_operations = {
- 	.get_credits = smb2_get_credits,
- 	.wait_mtu_credits = smb2_wait_mtu_credits,
- 	.get_next_mid = smb2_get_next_mid,
-+	.revert_current_mid = smb2_revert_current_mid,
- 	.read_data_offset = smb2_read_data_offset,
- 	.read_data_length = smb2_read_data_length,
- 	.map_error = map_smb2_to_linux_error,
-@@ -3848,7 +3870,7 @@ struct smb_version_operations smb311_operations = {
- 	.dump_share_caps = smb2_dump_share_caps,
- 	.is_oplock_break = smb2_is_valid_oplock_break,
- 	.handle_cancelled_mid = smb2_handle_cancelled_mid,
--	.downgrade_oplock = smb2_downgrade_oplock,
-+	.downgrade_oplock = smb21_downgrade_oplock,
- 	.need_neg = smb2_need_neg,
- 	.negotiate = smb2_negotiate,
- 	.negotiate_wsize = smb3_negotiate_wsize,
-diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
-index 77b3aaa39b35..068febe37fe4 100644
---- a/fs/cifs/smb2pdu.c
-+++ b/fs/cifs/smb2pdu.c
-@@ -986,8 +986,14 @@ int smb3_validate_negotiate(const unsigned int xid, struct cifs_tcon *tcon)
- 	rc = SMB2_ioctl(xid, tcon, NO_FILE_ID, NO_FILE_ID,
- 		FSCTL_VALIDATE_NEGOTIATE_INFO, true /* is_fsctl */,
- 		(char *)pneg_inbuf, inbuflen, (char **)&pneg_rsp, &rsplen);
--
--	if (rc != 0) {
-+	if (rc == -EOPNOTSUPP) {
-+		/*
-+		 * Old Windows versions or Netapp SMB server can return
-+		 * not supported error. Client should accept it.
-+		 */
-+		cifs_dbg(VFS, "Server does not support validate negotiate\n");
-+		return 0;
-+	} else if (rc != 0) {
- 		cifs_dbg(VFS, "validate protocol negotiate failed: %d\n", rc);
- 		rc = -EIO;
- 		goto out_free_inbuf;
-@@ -1605,9 +1611,16 @@ SMB2_tcon(const unsigned int xid, struct cifs_ses *ses, const char *tree,
- 	iov[1].iov_base = unc_path;
- 	iov[1].iov_len = unc_path_len;
- 
--	/* 3.11 tcon req must be signed if not encrypted. See MS-SMB2 3.2.4.1.1 */
-+	/*
-+	 * 3.11 tcon req must be signed if not encrypted. See MS-SMB2 3.2.4.1.1
-+	 * unless it is guest or anonymous user. See MS-SMB2 3.2.5.3.1
-+	 * (Samba servers don't always set the flag so also check if null user)
-+	 */
- 	if ((ses->server->dialect == SMB311_PROT_ID) &&
--	    !smb3_encryption_required(tcon))
-+	    !smb3_encryption_required(tcon) &&
-+	    !(ses->session_flags &
-+		    (SMB2_SESSION_FLAG_IS_GUEST|SMB2_SESSION_FLAG_IS_NULL)) &&
-+	    ((ses->user_name != NULL) || (ses->sectype == Kerberos)))
- 		req->sync_hdr.Flags |= SMB2_FLAGS_SIGNED;
- 
- 	memset(&rqst, 0, sizeof(struct smb_rqst));
-@@ -1824,8 +1837,9 @@ add_lease_context(struct TCP_Server_Info *server, struct kvec *iov,
- }
- 
- static struct create_durable_v2 *
--create_durable_v2_buf(struct cifs_fid *pfid)
-+create_durable_v2_buf(struct cifs_open_parms *oparms)
- {
-+	struct cifs_fid *pfid = oparms->fid;
- 	struct create_durable_v2 *buf;
- 
- 	buf = kzalloc(sizeof(struct create_durable_v2), GFP_KERNEL);
-@@ -1839,7 +1853,14 @@ create_durable_v2_buf(struct cifs_fid *pfid)
- 				(struct create_durable_v2, Name));
- 	buf->ccontext.NameLength = cpu_to_le16(4);
- 
--	buf->dcontext.Timeout = 0; /* Should this be configurable by workload */
-+	/*
-+	 * NB: Handle timeout defaults to 0, which allows server to choose
-+	 * (most servers default to 120 seconds) and most clients default to 0.
-+	 * This can be overridden at mount ("handletimeout=") if the user wants
-+	 * a different persistent (or resilient) handle timeout for all opens
-+	 * opens on a particular SMB3 mount.
-+	 */
-+	buf->dcontext.Timeout = cpu_to_le32(oparms->tcon->handle_timeout);
- 	buf->dcontext.Flags = cpu_to_le32(SMB2_DHANDLE_FLAG_PERSISTENT);
- 	generate_random_uuid(buf->dcontext.CreateGuid);
- 	memcpy(pfid->create_guid, buf->dcontext.CreateGuid, 16);
-@@ -1892,7 +1913,7 @@ add_durable_v2_context(struct kvec *iov, unsigned int *num_iovec,
- 	struct smb2_create_req *req = iov[0].iov_base;
- 	unsigned int num = *num_iovec;
- 
--	iov[num].iov_base = create_durable_v2_buf(oparms->fid);
-+	iov[num].iov_base = create_durable_v2_buf(oparms);
- 	if (iov[num].iov_base == NULL)
- 		return -ENOMEM;
- 	iov[num].iov_len = sizeof(struct create_durable_v2);
-diff --git a/fs/cifs/smb2transport.c b/fs/cifs/smb2transport.c
-index 7b351c65ee46..63264db78b89 100644
---- a/fs/cifs/smb2transport.c
-+++ b/fs/cifs/smb2transport.c
-@@ -576,6 +576,7 @@ smb2_mid_entry_alloc(const struct smb2_sync_hdr *shdr,
- 		     struct TCP_Server_Info *server)
- {
- 	struct mid_q_entry *temp;
-+	unsigned int credits = le16_to_cpu(shdr->CreditCharge);
- 
- 	if (server == NULL) {
- 		cifs_dbg(VFS, "Null TCP session in smb2_mid_entry_alloc\n");
-@@ -586,6 +587,7 @@ smb2_mid_entry_alloc(const struct smb2_sync_hdr *shdr,
- 	memset(temp, 0, sizeof(struct mid_q_entry));
- 	kref_init(&temp->refcount);
- 	temp->mid = le64_to_cpu(shdr->MessageId);
-+	temp->credits = credits > 0 ? credits : 1;
- 	temp->pid = current->pid;
- 	temp->command = shdr->Command; /* Always LE */
- 	temp->when_alloc = jiffies;
-@@ -674,13 +676,18 @@ smb2_setup_request(struct cifs_ses *ses, struct smb_rqst *rqst)
- 	smb2_seq_num_into_buf(ses->server, shdr);
- 
- 	rc = smb2_get_mid_entry(ses, shdr, &mid);
--	if (rc)
-+	if (rc) {
-+		revert_current_mid_from_hdr(ses->server, shdr);
- 		return ERR_PTR(rc);
-+	}
-+
- 	rc = smb2_sign_rqst(rqst, ses->server);
- 	if (rc) {
-+		revert_current_mid_from_hdr(ses->server, shdr);
- 		cifs_delete_mid(mid);
- 		return ERR_PTR(rc);
- 	}
-+
- 	return mid;
- }
- 
-@@ -695,11 +702,14 @@ smb2_setup_async_request(struct TCP_Server_Info *server, struct smb_rqst *rqst)
- 	smb2_seq_num_into_buf(server, shdr);
- 
- 	mid = smb2_mid_entry_alloc(shdr, server);
--	if (mid == NULL)
-+	if (mid == NULL) {
-+		revert_current_mid_from_hdr(server, shdr);
- 		return ERR_PTR(-ENOMEM);
-+	}
- 
- 	rc = smb2_sign_rqst(rqst, server);
- 	if (rc) {
-+		revert_current_mid_from_hdr(server, shdr);
- 		DeleteMidQEntry(mid);
- 		return ERR_PTR(rc);
- 	}
-diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c
-index 53532bd3f50d..9544eb99b5a2 100644
---- a/fs/cifs/transport.c
-+++ b/fs/cifs/transport.c
-@@ -647,6 +647,7 @@ cifs_call_async(struct TCP_Server_Info *server, struct smb_rqst *rqst,
- 	cifs_in_send_dec(server);
- 
- 	if (rc < 0) {
-+		revert_current_mid(server, mid->credits);
- 		server->sequence_number -= 2;
- 		cifs_delete_mid(mid);
- 	}
-@@ -868,6 +869,7 @@ compound_send_recv(const unsigned int xid, struct cifs_ses *ses,
- 	for (i = 0; i < num_rqst; i++) {
- 		midQ[i] = ses->server->ops->setup_request(ses, &rqst[i]);
- 		if (IS_ERR(midQ[i])) {
-+			revert_current_mid(ses->server, i);
- 			for (j = 0; j < i; j++)
- 				cifs_delete_mid(midQ[j]);
- 			mutex_unlock(&ses->server->srv_mutex);
-@@ -897,8 +899,10 @@ compound_send_recv(const unsigned int xid, struct cifs_ses *ses,
- 	for (i = 0; i < num_rqst; i++)
- 		cifs_save_when_sent(midQ[i]);
- 
--	if (rc < 0)
-+	if (rc < 0) {
-+		revert_current_mid(ses->server, num_rqst);
- 		ses->server->sequence_number -= 2;
-+	}
- 
- 	mutex_unlock(&ses->server->srv_mutex);
- 
-diff --git a/fs/dax.c b/fs/dax.c
-index 6959837cc465..05cca2214ae3 100644
---- a/fs/dax.c
-+++ b/fs/dax.c
-@@ -843,9 +843,8 @@ unlock_pte:
- static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
- 		struct address_space *mapping, void *entry)
- {
--	unsigned long pfn;
-+	unsigned long pfn, index, count;
- 	long ret = 0;
--	size_t size;
- 
- 	/*
- 	 * A page got tagged dirty in DAX mapping? Something is seriously
-@@ -894,17 +893,18 @@ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
- 	xas_unlock_irq(xas);
- 
- 	/*
--	 * Even if dax_writeback_mapping_range() was given a wbc->range_start
--	 * in the middle of a PMD, the 'index' we are given will be aligned to
--	 * the start index of the PMD, as will the pfn we pull from 'entry'.
-+	 * If dax_writeback_mapping_range() was given a wbc->range_start
-+	 * in the middle of a PMD, the 'index' we use needs to be
-+	 * aligned to the start of the PMD.
- 	 * This allows us to flush for PMD_SIZE and not have to worry about
- 	 * partial PMD writebacks.
- 	 */
- 	pfn = dax_to_pfn(entry);
--	size = PAGE_SIZE << dax_entry_order(entry);
-+	count = 1UL << dax_entry_order(entry);
-+	index = xas->xa_index & ~(count - 1);
- 
--	dax_entry_mkclean(mapping, xas->xa_index, pfn);
--	dax_flush(dax_dev, page_address(pfn_to_page(pfn)), size);
-+	dax_entry_mkclean(mapping, index, pfn);
-+	dax_flush(dax_dev, page_address(pfn_to_page(pfn)), count * PAGE_SIZE);
- 	/*
- 	 * After we have flushed the cache, we can clear the dirty tag. There
- 	 * cannot be new dirty data in the pfn after the flush has completed as
-@@ -917,8 +917,7 @@ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
- 	xas_clear_mark(xas, PAGECACHE_TAG_DIRTY);
- 	dax_wake_entry(xas, entry, false);
- 
--	trace_dax_writeback_one(mapping->host, xas->xa_index,
--			size >> PAGE_SHIFT);
-+	trace_dax_writeback_one(mapping->host, index, count);
- 	return ret;
- 
-  put_unlocked:
-diff --git a/fs/devpts/inode.c b/fs/devpts/inode.c
-index c53814539070..553a3f3300ae 100644
---- a/fs/devpts/inode.c
-+++ b/fs/devpts/inode.c
-@@ -455,6 +455,7 @@ devpts_fill_super(struct super_block *s, void *data, int silent)
- 	s->s_blocksize_bits = 10;
- 	s->s_magic = DEVPTS_SUPER_MAGIC;
- 	s->s_op = &devpts_sops;
-+	s->s_d_op = &simple_dentry_operations;
- 	s->s_time_gran = 1;
- 
- 	error = -ENOMEM;
-diff --git a/fs/exec.c b/fs/exec.c
-index fb72d36f7823..bcf383730bea 100644
---- a/fs/exec.c
-+++ b/fs/exec.c
-@@ -932,7 +932,7 @@ int kernel_read_file(struct file *file, void **buf, loff_t *size,
- 		bytes = kernel_read(file, *buf + pos, i_size - pos, &pos);
- 		if (bytes < 0) {
- 			ret = bytes;
--			goto out;
-+			goto out_free;
- 		}
- 
- 		if (bytes == 0)
-diff --git a/fs/ext2/super.c b/fs/ext2/super.c
-index 73b2d528237f..a9ea38182578 100644
---- a/fs/ext2/super.c
-+++ b/fs/ext2/super.c
-@@ -757,7 +757,8 @@ static loff_t ext2_max_size(int bits)
- {
- 	loff_t res = EXT2_NDIR_BLOCKS;
- 	int meta_blocks;
--	loff_t upper_limit;
-+	unsigned int upper_limit;
-+	unsigned int ppb = 1 << (bits-2);
- 
- 	/* This is calculated to be the largest file size for a
- 	 * dense, file such that the total number of
-@@ -771,24 +772,34 @@ static loff_t ext2_max_size(int bits)
- 	/* total blocks in file system block size */
- 	upper_limit >>= (bits - 9);
- 
-+	/* Compute how many blocks we can address by block tree */
-+	res += 1LL << (bits-2);
-+	res += 1LL << (2*(bits-2));
-+	res += 1LL << (3*(bits-2));
-+	/* Does block tree limit file size? */
-+	if (res < upper_limit)
-+		goto check_lfs;
- 
-+	res = upper_limit;
-+	/* How many metadata blocks are needed for addressing upper_limit? */
-+	upper_limit -= EXT2_NDIR_BLOCKS;
- 	/* indirect blocks */
- 	meta_blocks = 1;
-+	upper_limit -= ppb;
- 	/* double indirect blocks */
--	meta_blocks += 1 + (1LL << (bits-2));
--	/* tripple indirect blocks */
--	meta_blocks += 1 + (1LL << (bits-2)) + (1LL << (2*(bits-2)));
--
--	upper_limit -= meta_blocks;
--	upper_limit <<= bits;
--
--	res += 1LL << (bits-2);
--	res += 1LL << (2*(bits-2));
--	res += 1LL << (3*(bits-2));
-+	if (upper_limit < ppb * ppb) {
-+		meta_blocks += 1 + DIV_ROUND_UP(upper_limit, ppb);
-+		res -= meta_blocks;
-+		goto check_lfs;
-+	}
-+	meta_blocks += 1 + ppb;
-+	upper_limit -= ppb * ppb;
-+	/* tripple indirect blocks for the rest */
-+	meta_blocks += 1 + DIV_ROUND_UP(upper_limit, ppb) +
-+		DIV_ROUND_UP(upper_limit, ppb*ppb);
-+	res -= meta_blocks;
-+check_lfs:
- 	res <<= bits;
--	if (res > upper_limit)
--		res = upper_limit;
--
- 	if (res > MAX_LFS_FILESIZE)
- 		res = MAX_LFS_FILESIZE;
- 
-diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
-index 185a05d3257e..508a37ec9271 100644
---- a/fs/ext4/ext4.h
-+++ b/fs/ext4/ext4.h
-@@ -426,6 +426,9 @@ struct flex_groups {
- /* Flags that are appropriate for non-directories/regular files. */
- #define EXT4_OTHER_FLMASK (EXT4_NODUMP_FL | EXT4_NOATIME_FL)
- 
-+/* The only flags that should be swapped */
-+#define EXT4_FL_SHOULD_SWAP (EXT4_HUGE_FILE_FL | EXT4_EXTENTS_FL)
-+
- /* Mask out flags that are inappropriate for the given type of inode. */
- static inline __u32 ext4_mask_flags(umode_t mode, __u32 flags)
- {
-diff --git a/fs/ext4/ext4_jbd2.h b/fs/ext4/ext4_jbd2.h
-index 15b6dd733780..df908ef79cce 100644
---- a/fs/ext4/ext4_jbd2.h
-+++ b/fs/ext4/ext4_jbd2.h
-@@ -384,7 +384,7 @@ static inline void ext4_update_inode_fsync_trans(handle_t *handle,
- {
- 	struct ext4_inode_info *ei = EXT4_I(inode);
- 
--	if (ext4_handle_valid(handle)) {
-+	if (ext4_handle_valid(handle) && !is_handle_aborted(handle)) {
- 		ei->i_sync_tid = handle->h_transaction->t_tid;
- 		if (datasync)
- 			ei->i_datasync_tid = handle->h_transaction->t_tid;
-diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
-index 240b6dea5441..252bbbb5a2f4 100644
---- a/fs/ext4/extents.c
-+++ b/fs/ext4/extents.c
-@@ -2956,14 +2956,17 @@ again:
- 			if (err < 0)
- 				goto out;
- 
--		} else if (sbi->s_cluster_ratio > 1 && end >= ex_end) {
-+		} else if (sbi->s_cluster_ratio > 1 && end >= ex_end &&
-+			   partial.state == initial) {
- 			/*
--			 * If there's an extent to the right its first cluster
--			 * contains the immediate right boundary of the
--			 * truncated/punched region.  Set partial_cluster to
--			 * its negative value so it won't be freed if shared
--			 * with the current extent.  The end < ee_block case
--			 * is handled in ext4_ext_rm_leaf().
-+			 * If we're punching, there's an extent to the right.
-+			 * If the partial cluster hasn't been set, set it to
-+			 * that extent's first cluster and its state to nofree
-+			 * so it won't be freed should it contain blocks to be
-+			 * removed. If it's already set (tofree/nofree), we're
-+			 * retrying and keep the original partial cluster info
-+			 * so a cluster marked tofree as a result of earlier
-+			 * extent removal is not lost.
- 			 */
- 			lblk = ex_end + 1;
- 			err = ext4_ext_search_right(inode, path, &lblk, &pblk,
-diff --git a/fs/ext4/file.c b/fs/ext4/file.c
-index 69d65d49837b..98ec11f69cd4 100644
---- a/fs/ext4/file.c
-+++ b/fs/ext4/file.c
-@@ -125,7 +125,7 @@ ext4_unaligned_aio(struct inode *inode, struct iov_iter *from, loff_t pos)
- 	struct super_block *sb = inode->i_sb;
- 	int blockmask = sb->s_blocksize - 1;
- 
--	if (pos >= i_size_read(inode))
-+	if (pos >= ALIGN(i_size_read(inode), sb->s_blocksize))
- 		return 0;
- 
- 	if ((pos | iov_iter_alignment(from)) & blockmask)
-diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c
-index bf7fa1507e81..e1801b288847 100644
---- a/fs/ext4/indirect.c
-+++ b/fs/ext4/indirect.c
-@@ -1219,6 +1219,7 @@ int ext4_ind_remove_space(handle_t *handle, struct inode *inode,
- 	ext4_lblk_t offsets[4], offsets2[4];
- 	Indirect chain[4], chain2[4];
- 	Indirect *partial, *partial2;
-+	Indirect *p = NULL, *p2 = NULL;
- 	ext4_lblk_t max_block;
- 	__le32 nr = 0, nr2 = 0;
- 	int n = 0, n2 = 0;
-@@ -1260,7 +1261,7 @@ int ext4_ind_remove_space(handle_t *handle, struct inode *inode,
- 		}
- 
- 
--		partial = ext4_find_shared(inode, n, offsets, chain, &nr);
-+		partial = p = ext4_find_shared(inode, n, offsets, chain, &nr);
- 		if (nr) {
- 			if (partial == chain) {
- 				/* Shared branch grows from the inode */
-@@ -1285,13 +1286,11 @@ int ext4_ind_remove_space(handle_t *handle, struct inode *inode,
- 				partial->p + 1,
- 				(__le32 *)partial->bh->b_data+addr_per_block,
- 				(chain+n-1) - partial);
--			BUFFER_TRACE(partial->bh, "call brelse");
--			brelse(partial->bh);
- 			partial--;
- 		}
- 
- end_range:
--		partial2 = ext4_find_shared(inode, n2, offsets2, chain2, &nr2);
-+		partial2 = p2 = ext4_find_shared(inode, n2, offsets2, chain2, &nr2);
- 		if (nr2) {
- 			if (partial2 == chain2) {
- 				/*
-@@ -1321,16 +1320,14 @@ end_range:
- 					   (__le32 *)partial2->bh->b_data,
- 					   partial2->p,
- 					   (chain2+n2-1) - partial2);
--			BUFFER_TRACE(partial2->bh, "call brelse");
--			brelse(partial2->bh);
- 			partial2--;
- 		}
- 		goto do_indirects;
- 	}
- 
- 	/* Punch happened within the same level (n == n2) */
--	partial = ext4_find_shared(inode, n, offsets, chain, &nr);
--	partial2 = ext4_find_shared(inode, n2, offsets2, chain2, &nr2);
-+	partial = p = ext4_find_shared(inode, n, offsets, chain, &nr);
-+	partial2 = p2 = ext4_find_shared(inode, n2, offsets2, chain2, &nr2);
- 
- 	/* Free top, but only if partial2 isn't its subtree. */
- 	if (nr) {
-@@ -1387,11 +1384,7 @@ end_range:
- 					   partial->p + 1,
- 					   partial2->p,
- 					   (chain+n-1) - partial);
--			BUFFER_TRACE(partial->bh, "call brelse");
--			brelse(partial->bh);
--			BUFFER_TRACE(partial2->bh, "call brelse");
--			brelse(partial2->bh);
--			return 0;
-+			goto cleanup;
- 		}
- 
- 		/*
-@@ -1406,8 +1399,6 @@ end_range:
- 					   partial->p + 1,
- 					   (__le32 *)partial->bh->b_data+addr_per_block,
- 					   (chain+n-1) - partial);
--			BUFFER_TRACE(partial->bh, "call brelse");
--			brelse(partial->bh);
- 			partial--;
- 		}
- 		if (partial2 > chain2 && depth2 <= depth) {
-@@ -1415,11 +1406,21 @@ end_range:
- 					   (__le32 *)partial2->bh->b_data,
- 					   partial2->p,
- 					   (chain2+n2-1) - partial2);
--			BUFFER_TRACE(partial2->bh, "call brelse");
--			brelse(partial2->bh);
- 			partial2--;
- 		}
- 	}
-+
-+cleanup:
-+	while (p && p > chain) {
-+		BUFFER_TRACE(p->bh, "call brelse");
-+		brelse(p->bh);
-+		p--;
-+	}
-+	while (p2 && p2 > chain2) {
-+		BUFFER_TRACE(p2->bh, "call brelse");
-+		brelse(p2->bh);
-+		p2--;
-+	}
- 	return 0;
- 
- do_indirects:
-@@ -1427,7 +1428,7 @@ do_indirects:
- 	switch (offsets[0]) {
- 	default:
- 		if (++n >= n2)
--			return 0;
-+			break;
- 		nr = i_data[EXT4_IND_BLOCK];
- 		if (nr) {
- 			ext4_free_branches(handle, inode, NULL, &nr, &nr+1, 1);
-@@ -1435,7 +1436,7 @@ do_indirects:
- 		}
- 	case EXT4_IND_BLOCK:
- 		if (++n >= n2)
--			return 0;
-+			break;
- 		nr = i_data[EXT4_DIND_BLOCK];
- 		if (nr) {
- 			ext4_free_branches(handle, inode, NULL, &nr, &nr+1, 2);
-@@ -1443,7 +1444,7 @@ do_indirects:
- 		}
- 	case EXT4_DIND_BLOCK:
- 		if (++n >= n2)
--			return 0;
-+			break;
- 		nr = i_data[EXT4_TIND_BLOCK];
- 		if (nr) {
- 			ext4_free_branches(handle, inode, NULL, &nr, &nr+1, 3);
-@@ -1452,5 +1453,5 @@ do_indirects:
- 	case EXT4_TIND_BLOCK:
- 		;
- 	}
--	return 0;
-+	goto cleanup;
- }
-diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
-index d37dafa1d133..2e76fb55d94a 100644
---- a/fs/ext4/ioctl.c
-+++ b/fs/ext4/ioctl.c
-@@ -63,18 +63,20 @@ static void swap_inode_data(struct inode *inode1, struct inode *inode2)
- 	loff_t isize;
- 	struct ext4_inode_info *ei1;
- 	struct ext4_inode_info *ei2;
-+	unsigned long tmp;
- 
- 	ei1 = EXT4_I(inode1);
- 	ei2 = EXT4_I(inode2);
- 
- 	swap(inode1->i_version, inode2->i_version);
--	swap(inode1->i_blocks, inode2->i_blocks);
--	swap(inode1->i_bytes, inode2->i_bytes);
- 	swap(inode1->i_atime, inode2->i_atime);
- 	swap(inode1->i_mtime, inode2->i_mtime);
- 
- 	memswap(ei1->i_data, ei2->i_data, sizeof(ei1->i_data));
--	swap(ei1->i_flags, ei2->i_flags);
-+	tmp = ei1->i_flags & EXT4_FL_SHOULD_SWAP;
-+	ei1->i_flags = (ei2->i_flags & EXT4_FL_SHOULD_SWAP) |
-+		(ei1->i_flags & ~EXT4_FL_SHOULD_SWAP);
-+	ei2->i_flags = tmp | (ei2->i_flags & ~EXT4_FL_SHOULD_SWAP);
- 	swap(ei1->i_disksize, ei2->i_disksize);
- 	ext4_es_remove_extent(inode1, 0, EXT_MAX_BLOCKS);
- 	ext4_es_remove_extent(inode2, 0, EXT_MAX_BLOCKS);
-@@ -115,28 +117,41 @@ static long swap_inode_boot_loader(struct super_block *sb,
- 	int err;
- 	struct inode *inode_bl;
- 	struct ext4_inode_info *ei_bl;
--
--	if (inode->i_nlink != 1 || !S_ISREG(inode->i_mode) ||
--	    IS_SWAPFILE(inode) || IS_ENCRYPTED(inode) ||
--	    ext4_has_inline_data(inode))
--		return -EINVAL;
--
--	if (IS_RDONLY(inode) || IS_APPEND(inode) || IS_IMMUTABLE(inode) ||
--	    !inode_owner_or_capable(inode) || !capable(CAP_SYS_ADMIN))
--		return -EPERM;
-+	qsize_t size, size_bl, diff;
-+	blkcnt_t blocks;
-+	unsigned short bytes;
- 
- 	inode_bl = ext4_iget(sb, EXT4_BOOT_LOADER_INO, EXT4_IGET_SPECIAL);
- 	if (IS_ERR(inode_bl))
- 		return PTR_ERR(inode_bl);
- 	ei_bl = EXT4_I(inode_bl);
- 
--	filemap_flush(inode->i_mapping);
--	filemap_flush(inode_bl->i_mapping);
--
- 	/* Protect orig inodes against a truncate and make sure,
- 	 * that only 1 swap_inode_boot_loader is running. */
- 	lock_two_nondirectories(inode, inode_bl);
- 
-+	if (inode->i_nlink != 1 || !S_ISREG(inode->i_mode) ||
-+	    IS_SWAPFILE(inode) || IS_ENCRYPTED(inode) ||
-+	    ext4_has_inline_data(inode)) {
-+		err = -EINVAL;
-+		goto journal_err_out;
-+	}
-+
-+	if (IS_RDONLY(inode) || IS_APPEND(inode) || IS_IMMUTABLE(inode) ||
-+	    !inode_owner_or_capable(inode) || !capable(CAP_SYS_ADMIN)) {
-+		err = -EPERM;
-+		goto journal_err_out;
-+	}
-+
-+	down_write(&EXT4_I(inode)->i_mmap_sem);
-+	err = filemap_write_and_wait(inode->i_mapping);
-+	if (err)
-+		goto err_out;
-+
-+	err = filemap_write_and_wait(inode_bl->i_mapping);
-+	if (err)
-+		goto err_out;
-+
- 	/* Wait for all existing dio workers */
- 	inode_dio_wait(inode);
- 	inode_dio_wait(inode_bl);
-@@ -147,7 +162,7 @@ static long swap_inode_boot_loader(struct super_block *sb,
- 	handle = ext4_journal_start(inode_bl, EXT4_HT_MOVE_EXTENTS, 2);
- 	if (IS_ERR(handle)) {
- 		err = -EINVAL;
--		goto journal_err_out;
-+		goto err_out;
- 	}
- 
- 	/* Protect extent tree against block allocations via delalloc */
-@@ -170,6 +185,13 @@ static long swap_inode_boot_loader(struct super_block *sb,
- 			memset(ei_bl->i_data, 0, sizeof(ei_bl->i_data));
- 	}
- 
-+	err = dquot_initialize(inode);
-+	if (err)
-+		goto err_out1;
-+
-+	size = (qsize_t)(inode->i_blocks) * (1 << 9) + inode->i_bytes;
-+	size_bl = (qsize_t)(inode_bl->i_blocks) * (1 << 9) + inode_bl->i_bytes;
-+	diff = size - size_bl;
- 	swap_inode_data(inode, inode_bl);
- 
- 	inode->i_ctime = inode_bl->i_ctime = current_time(inode);
-@@ -183,27 +205,51 @@ static long swap_inode_boot_loader(struct super_block *sb,
- 
- 	err = ext4_mark_inode_dirty(handle, inode);
- 	if (err < 0) {
-+		/* No need to update quota information. */
- 		ext4_warning(inode->i_sb,
- 			"couldn't mark inode #%lu dirty (err %d)",
- 			inode->i_ino, err);
- 		/* Revert all changes: */
- 		swap_inode_data(inode, inode_bl);
- 		ext4_mark_inode_dirty(handle, inode);
--	} else {
--		err = ext4_mark_inode_dirty(handle, inode_bl);
--		if (err < 0) {
--			ext4_warning(inode_bl->i_sb,
--				"couldn't mark inode #%lu dirty (err %d)",
--				inode_bl->i_ino, err);
--			/* Revert all changes: */
--			swap_inode_data(inode, inode_bl);
--			ext4_mark_inode_dirty(handle, inode);
--			ext4_mark_inode_dirty(handle, inode_bl);
--		}
-+		goto err_out1;
-+	}
-+
-+	blocks = inode_bl->i_blocks;
-+	bytes = inode_bl->i_bytes;
-+	inode_bl->i_blocks = inode->i_blocks;
-+	inode_bl->i_bytes = inode->i_bytes;
-+	err = ext4_mark_inode_dirty(handle, inode_bl);
-+	if (err < 0) {
-+		/* No need to update quota information. */
-+		ext4_warning(inode_bl->i_sb,
-+			"couldn't mark inode #%lu dirty (err %d)",
-+			inode_bl->i_ino, err);
-+		goto revert;
-+	}
-+
-+	/* Bootloader inode should not be counted into quota information. */
-+	if (diff > 0)
-+		dquot_free_space(inode, diff);
-+	else
-+		err = dquot_alloc_space(inode, -1 * diff);
-+
-+	if (err < 0) {
-+revert:
-+		/* Revert all changes: */
-+		inode_bl->i_blocks = blocks;
-+		inode_bl->i_bytes = bytes;
-+		swap_inode_data(inode, inode_bl);
-+		ext4_mark_inode_dirty(handle, inode);
-+		ext4_mark_inode_dirty(handle, inode_bl);
- 	}
-+
-+err_out1:
- 	ext4_journal_stop(handle);
- 	ext4_double_up_write_data_sem(inode, inode_bl);
- 
-+err_out:
-+	up_write(&EXT4_I(inode)->i_mmap_sem);
- journal_err_out:
- 	unlock_two_nondirectories(inode, inode_bl);
- 	iput(inode_bl);
-diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
-index 48421de803b7..3d9b18505c0c 100644
---- a/fs/ext4/resize.c
-+++ b/fs/ext4/resize.c
-@@ -1960,7 +1960,8 @@ retry:
- 				le16_to_cpu(es->s_reserved_gdt_blocks);
- 			n_group = n_desc_blocks * EXT4_DESC_PER_BLOCK(sb);
- 			n_blocks_count = (ext4_fsblk_t)n_group *
--				EXT4_BLOCKS_PER_GROUP(sb);
-+				EXT4_BLOCKS_PER_GROUP(sb) +
-+				le32_to_cpu(es->s_first_data_block);
- 			n_group--; /* set to last group number */
- 		}
- 
-diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c
-index 1cb0fcc67d2d..caf77fe8ac07 100644
---- a/fs/f2fs/extent_cache.c
-+++ b/fs/f2fs/extent_cache.c
-@@ -506,7 +506,7 @@ static void f2fs_update_extent_tree_range(struct inode *inode,
- 	unsigned int end = fofs + len;
- 	unsigned int pos = (unsigned int)fofs;
- 	bool updated = false;
--	bool leftmost;
-+	bool leftmost = false;
- 
- 	if (!et)
- 		return;
-diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
-index 12fabd6735dd..279bc00489cc 100644
---- a/fs/f2fs/f2fs.h
-+++ b/fs/f2fs/f2fs.h
-@@ -456,7 +456,6 @@ struct f2fs_flush_device {
- 
- /* for inline stuff */
- #define DEF_INLINE_RESERVED_SIZE	1
--#define DEF_MIN_INLINE_SIZE		1
- static inline int get_extra_isize(struct inode *inode);
- static inline int get_inline_xattr_addrs(struct inode *inode);
- #define MAX_INLINE_DATA(inode)	(sizeof(__le32) *			\
-diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
-index bba56b39dcc5..ae2b45e75847 100644
---- a/fs/f2fs/file.c
-+++ b/fs/f2fs/file.c
-@@ -1750,10 +1750,12 @@ static int f2fs_ioc_start_atomic_write(struct file *filp)
- 
- 	down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
- 
--	if (!get_dirty_pages(inode))
--		goto skip_flush;
--
--	f2fs_msg(F2FS_I_SB(inode)->sb, KERN_WARNING,
-+	/*
-+	 * Should wait end_io to count F2FS_WB_CP_DATA correctly by
-+	 * f2fs_is_atomic_file.
-+	 */
-+	if (get_dirty_pages(inode))
-+		f2fs_msg(F2FS_I_SB(inode)->sb, KERN_WARNING,
- 		"Unexpected flush for atomic writes: ino=%lu, npages=%u",
- 					inode->i_ino, get_dirty_pages(inode));
- 	ret = filemap_write_and_wait_range(inode->i_mapping, 0, LLONG_MAX);
-@@ -1761,7 +1763,7 @@ static int f2fs_ioc_start_atomic_write(struct file *filp)
- 		up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
- 		goto out;
- 	}
--skip_flush:
-+
- 	set_inode_flag(inode, FI_ATOMIC_FILE);
- 	clear_inode_flag(inode, FI_ATOMIC_REVOKE_REQUEST);
- 	up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
-diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
-index d636cbcf68f2..aacbb864ec1e 100644
---- a/fs/f2fs/inline.c
-+++ b/fs/f2fs/inline.c
-@@ -659,6 +659,12 @@ int f2fs_read_inline_dir(struct file *file, struct dir_context *ctx,
- 	if (IS_ERR(ipage))
- 		return PTR_ERR(ipage);
- 
-+	/*
-+	 * f2fs_readdir was protected by inode.i_rwsem, it is safe to access
-+	 * ipage without page's lock held.
-+	 */
-+	unlock_page(ipage);
-+
- 	inline_dentry = inline_data_addr(inode, ipage);
- 
- 	make_dentry_ptr_inline(inode, &d, inline_dentry);
-@@ -667,7 +673,7 @@ int f2fs_read_inline_dir(struct file *file, struct dir_context *ctx,
- 	if (!err)
- 		ctx->pos = d.max;
- 
--	f2fs_put_page(ipage, 1);
-+	f2fs_put_page(ipage, 0);
- 	return err < 0 ? err : 0;
- }
- 
-diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
-index 9b79056d705d..e1b1d390b329 100644
---- a/fs/f2fs/segment.c
-+++ b/fs/f2fs/segment.c
-@@ -215,7 +215,8 @@ void f2fs_register_inmem_page(struct inode *inode, struct page *page)
- }
- 
- static int __revoke_inmem_pages(struct inode *inode,
--				struct list_head *head, bool drop, bool recover)
-+				struct list_head *head, bool drop, bool recover,
-+				bool trylock)
- {
- 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
- 	struct inmem_pages *cur, *tmp;
-@@ -227,7 +228,16 @@ static int __revoke_inmem_pages(struct inode *inode,
- 		if (drop)
- 			trace_f2fs_commit_inmem_page(page, INMEM_DROP);
- 
--		lock_page(page);
-+		if (trylock) {
++	if (nested_cpu_has_virt_x2apic_mode(vmcs12)) {
++		if (nested_cpu_has_apic_reg_virt(vmcs12)) {
 +			/*
-+			 * to avoid deadlock in between page lock and
-+			 * inmem_lock.
++			 * L0 need not intercept reads for MSRs between 0x800
++			 * and 0x8ff, it just lets the processor take the value
++			 * from the virtual-APIC page; take those 256 bits
++			 * directly from the L1 bitmap.
 +			 */
-+			if (!trylock_page(page))
-+				continue;
-+		} else {
-+			lock_page(page);
-+		}
- 
- 		f2fs_wait_on_page_writeback(page, DATA, true, true);
- 
-@@ -318,13 +328,19 @@ void f2fs_drop_inmem_pages(struct inode *inode)
- 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
- 	struct f2fs_inode_info *fi = F2FS_I(inode);
- 
--	mutex_lock(&fi->inmem_lock);
--	__revoke_inmem_pages(inode, &fi->inmem_pages, true, false);
--	spin_lock(&sbi->inode_lock[ATOMIC_FILE]);
--	if (!list_empty(&fi->inmem_ilist))
--		list_del_init(&fi->inmem_ilist);
--	spin_unlock(&sbi->inode_lock[ATOMIC_FILE]);
--	mutex_unlock(&fi->inmem_lock);
-+	while (!list_empty(&fi->inmem_pages)) {
-+		mutex_lock(&fi->inmem_lock);
-+		__revoke_inmem_pages(inode, &fi->inmem_pages,
-+						true, false, true);
++			for (msr = 0x800; msr <= 0x8ff; msr += BITS_PER_LONG) {
++				unsigned word = msr / BITS_PER_LONG;
 +
-+		if (list_empty(&fi->inmem_pages)) {
-+			spin_lock(&sbi->inode_lock[ATOMIC_FILE]);
-+			if (!list_empty(&fi->inmem_ilist))
-+				list_del_init(&fi->inmem_ilist);
-+			spin_unlock(&sbi->inode_lock[ATOMIC_FILE]);
++				msr_bitmap_l0[word] = msr_bitmap_l1[word];
++			}
 +		}
-+		mutex_unlock(&fi->inmem_lock);
-+	}
- 
- 	clear_inode_flag(inode, FI_ATOMIC_FILE);
- 	fi->i_gc_failures[GC_FAILURE_ATOMIC] = 0;
-@@ -429,12 +445,15 @@ retry:
- 		 * recovery or rewrite & commit last transaction. For other
- 		 * error number, revoking was done by filesystem itself.
- 		 */
--		err = __revoke_inmem_pages(inode, &revoke_list, false, true);
-+		err = __revoke_inmem_pages(inode, &revoke_list,
-+						false, true, false);
- 
- 		/* drop all uncommitted pages */
--		__revoke_inmem_pages(inode, &fi->inmem_pages, true, false);
-+		__revoke_inmem_pages(inode, &fi->inmem_pages,
-+						true, false, false);
- 	} else {
--		__revoke_inmem_pages(inode, &revoke_list, false, false);
-+		__revoke_inmem_pages(inode, &revoke_list,
-+						false, false, false);
- 	}
- 
- 	return err;
-diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
-index c46a1d4318d4..5892fa3c885f 100644
---- a/fs/f2fs/super.c
-+++ b/fs/f2fs/super.c
-@@ -834,12 +834,13 @@ static int parse_options(struct super_block *sb, char *options)
- 					"set with inline_xattr option");
- 			return -EINVAL;
- 		}
--		if (!F2FS_OPTION(sbi).inline_xattr_size ||
--			F2FS_OPTION(sbi).inline_xattr_size >=
--					DEF_ADDRS_PER_INODE -
--					F2FS_TOTAL_EXTRA_ATTR_SIZE -
--					DEF_INLINE_RESERVED_SIZE -
--					DEF_MIN_INLINE_SIZE) {
-+		if (F2FS_OPTION(sbi).inline_xattr_size <
-+			sizeof(struct f2fs_xattr_header) / sizeof(__le32) ||
-+			F2FS_OPTION(sbi).inline_xattr_size >
-+			DEF_ADDRS_PER_INODE -
-+			F2FS_TOTAL_EXTRA_ATTR_SIZE / sizeof(__le32) -
-+			DEF_INLINE_RESERVED_SIZE -
-+			MIN_INLINE_DENTRY_SIZE / sizeof(__le32)) {
- 			f2fs_msg(sb, KERN_ERR,
- 					"inline xattr size is out of range");
- 			return -EINVAL;
-@@ -915,6 +916,10 @@ static int f2fs_drop_inode(struct inode *inode)
- 			sb_start_intwrite(inode->i_sb);
- 			f2fs_i_size_write(inode, 0);
  
-+			f2fs_submit_merged_write_cond(F2FS_I_SB(inode),
-+					inode, NULL, 0, DATA);
-+			truncate_inode_pages_final(inode->i_mapping);
+-	if (nested_cpu_has_vid(vmcs12)) {
+-		nested_vmx_disable_intercept_for_msr(
+-			msr_bitmap_l1, msr_bitmap_l0,
+-			X2APIC_MSR(APIC_EOI),
+-			MSR_TYPE_W);
+ 		nested_vmx_disable_intercept_for_msr(
+ 			msr_bitmap_l1, msr_bitmap_l0,
+-			X2APIC_MSR(APIC_SELF_IPI),
+-			MSR_TYPE_W);
++			X2APIC_MSR(APIC_TASKPRI),
++			MSR_TYPE_R | MSR_TYPE_W);
 +
- 			if (F2FS_HAS_BLOCKS(inode))
- 				f2fs_truncate(inode);
- 
-diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
-index 0575edbe3ed6..f1ab9000b294 100644
---- a/fs/f2fs/sysfs.c
-+++ b/fs/f2fs/sysfs.c
-@@ -278,10 +278,16 @@ out:
- 		return count;
++		if (nested_cpu_has_vid(vmcs12)) {
++			nested_vmx_disable_intercept_for_msr(
++				msr_bitmap_l1, msr_bitmap_l0,
++				X2APIC_MSR(APIC_EOI),
++				MSR_TYPE_W);
++			nested_vmx_disable_intercept_for_msr(
++				msr_bitmap_l1, msr_bitmap_l0,
++				X2APIC_MSR(APIC_SELF_IPI),
++				MSR_TYPE_W);
++		}
  	}
  
--	*ui = t;
- 
--	if (!strcmp(a->attr.name, "iostat_enable") && *ui == 0)
--		f2fs_reset_iostat(sbi);
-+	if (!strcmp(a->attr.name, "iostat_enable")) {
-+		sbi->iostat_enable = !!t;
-+		if (!sbi->iostat_enable)
-+			f2fs_reset_iostat(sbi);
-+		return count;
-+	}
-+
-+	*ui = (unsigned int)t;
-+
- 	return count;
+ 	if (spec_ctrl)
+diff --git a/arch/xtensa/kernel/stacktrace.c b/arch/xtensa/kernel/stacktrace.c
+index 174c11f13bba..b9f82510c650 100644
+--- a/arch/xtensa/kernel/stacktrace.c
++++ b/arch/xtensa/kernel/stacktrace.c
+@@ -253,10 +253,14 @@ static int return_address_cb(struct stackframe *frame, void *data)
+ 	return 1;
  }
  
-diff --git a/fs/f2fs/trace.c b/fs/f2fs/trace.c
-index ce2a5eb210b6..d0ab533a9ce8 100644
---- a/fs/f2fs/trace.c
-+++ b/fs/f2fs/trace.c
-@@ -14,7 +14,7 @@
- #include "trace.h"
- 
- static RADIX_TREE(pids, GFP_ATOMIC);
--static struct mutex pids_lock;
-+static spinlock_t pids_lock;
- static struct last_io_info last_io;
- 
- static inline void __print_last_io(void)
-@@ -58,23 +58,29 @@ void f2fs_trace_pid(struct page *page)
- 
- 	set_page_private(page, (unsigned long)pid);
++/*
++ * level == 0 is for the return address from the caller of this function,
++ * not from this function itself.
++ */
+ unsigned long return_address(unsigned level)
+ {
+ 	struct return_addr_data r = {
+-		.skip = level + 1,
++		.skip = level,
+ 	};
+ 	walk_stackframe(stack_pointer(NULL), return_address_cb, &r);
+ 	return r.addr;
+diff --git a/block/bio.c b/block/bio.c
+index 4db1008309ed..a06f58bd4c72 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -1238,8 +1238,11 @@ struct bio *bio_copy_user_iov(struct request_queue *q,
+ 			}
+ 		}
  
-+retry:
- 	if (radix_tree_preload(GFP_NOFS))
- 		return;
+-		if (bio_add_pc_page(q, bio, page, bytes, offset) < bytes)
++		if (bio_add_pc_page(q, bio, page, bytes, offset) < bytes) {
++			if (!map_data)
++				__free_page(page);
+ 			break;
++		}
  
--	mutex_lock(&pids_lock);
-+	spin_lock(&pids_lock);
- 	p = radix_tree_lookup(&pids, pid);
- 	if (p == current)
- 		goto out;
- 	if (p)
- 		radix_tree_delete(&pids, pid);
- 
--	f2fs_radix_tree_insert(&pids, pid, current);
-+	if (radix_tree_insert(&pids, pid, current)) {
-+		spin_unlock(&pids_lock);
-+		radix_tree_preload_end();
-+		cond_resched();
-+		goto retry;
-+	}
+ 		len -= bytes;
+ 		offset = 0;
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 6b78ec56a4f2..5bde73a49399 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -1246,8 +1246,6 @@ static int blk_cloned_rq_check_limits(struct request_queue *q,
+  */
+ blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *rq)
+ {
+-	blk_qc_t unused;
+-
+ 	if (blk_cloned_rq_check_limits(q, rq))
+ 		return BLK_STS_IOERR;
  
- 	trace_printk("%3x:%3x %4x %-16s\n",
- 			MAJOR(inode->i_sb->s_dev), MINOR(inode->i_sb->s_dev),
- 			pid, current->comm);
- out:
--	mutex_unlock(&pids_lock);
-+	spin_unlock(&pids_lock);
- 	radix_tree_preload_end();
+@@ -1263,7 +1261,7 @@ blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *
+ 	 * bypass a potential scheduler on the bottom device for
+ 	 * insert.
+ 	 */
+-	return blk_mq_try_issue_directly(rq->mq_hctx, rq, &unused, true, true);
++	return blk_mq_request_issue_directly(rq, true);
  }
+ EXPORT_SYMBOL_GPL(blk_insert_cloned_request);
  
-@@ -119,7 +125,7 @@ void f2fs_trace_ios(struct f2fs_io_info *fio, int flush)
+diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
+index 140933e4a7d1..0c98b6c1ca49 100644
+--- a/block/blk-mq-sched.c
++++ b/block/blk-mq-sched.c
+@@ -423,10 +423,12 @@ void blk_mq_sched_insert_requests(struct blk_mq_hw_ctx *hctx,
+ 		 * busy in case of 'none' scheduler, and this way may save
+ 		 * us one extra enqueue & dequeue to sw queue.
+ 		 */
+-		if (!hctx->dispatch_busy && !e && !run_queue_async)
++		if (!hctx->dispatch_busy && !e && !run_queue_async) {
+ 			blk_mq_try_issue_list_directly(hctx, list);
+-		else
+-			blk_mq_insert_requests(hctx, ctx, list);
++			if (list_empty(list))
++				return;
++		}
++		blk_mq_insert_requests(hctx, ctx, list);
+ 	}
  
- void f2fs_build_trace_ios(void)
- {
--	mutex_init(&pids_lock);
-+	spin_lock_init(&pids_lock);
+ 	blk_mq_run_hw_queue(hctx, run_queue_async);
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index b9283b63d116..16f9675c57e6 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -1805,74 +1805,76 @@ static blk_status_t __blk_mq_issue_directly(struct blk_mq_hw_ctx *hctx,
+ 	return ret;
  }
  
- #define PIDVEC_SIZE	128
-@@ -147,7 +153,7 @@ void f2fs_destroy_trace_ios(void)
- 	pid_t next_pid = 0;
- 	unsigned int found;
- 
--	mutex_lock(&pids_lock);
-+	spin_lock(&pids_lock);
- 	while ((found = gang_lookup_pids(pid, next_pid, PIDVEC_SIZE))) {
- 		unsigned idx;
- 
-@@ -155,5 +161,5 @@ void f2fs_destroy_trace_ios(void)
- 		for (idx = 0; idx < found; idx++)
- 			radix_tree_delete(&pids, pid[idx]);
- 	}
--	mutex_unlock(&pids_lock);
-+	spin_unlock(&pids_lock);
- }
-diff --git a/fs/f2fs/xattr.c b/fs/f2fs/xattr.c
-index 18d5ffbc5e8c..73b92985198b 100644
---- a/fs/f2fs/xattr.c
-+++ b/fs/f2fs/xattr.c
-@@ -224,11 +224,11 @@ static struct f2fs_xattr_entry *__find_inline_xattr(struct inode *inode,
+-blk_status_t blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
++static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
+ 						struct request *rq,
+ 						blk_qc_t *cookie,
+-						bool bypass, bool last)
++						bool bypass_insert, bool last)
  {
- 	struct f2fs_xattr_entry *entry;
- 	unsigned int inline_size = inline_xattr_size(inode);
-+	void *max_addr = base_addr + inline_size;
- 
- 	list_for_each_xattr(entry, base_addr) {
--		if ((void *)entry + sizeof(__u32) > base_addr + inline_size ||
--			(void *)XATTR_NEXT_ENTRY(entry) + sizeof(__u32) >
--			base_addr + inline_size) {
-+		if ((void *)entry + sizeof(__u32) > max_addr ||
-+			(void *)XATTR_NEXT_ENTRY(entry) > max_addr) {
- 			*last_addr = entry;
- 			return NULL;
- 		}
-@@ -239,6 +239,13 @@ static struct f2fs_xattr_entry *__find_inline_xattr(struct inode *inode,
- 		if (!memcmp(entry->e_name, name, len))
- 			break;
+ 	struct request_queue *q = rq->q;
+ 	bool run_queue = true;
+-	blk_status_t ret = BLK_STS_RESOURCE;
+-	int srcu_idx;
+-	bool force = false;
+ 
+-	hctx_lock(hctx, &srcu_idx);
+ 	/*
+-	 * hctx_lock is needed before checking quiesced flag.
++	 * RCU or SRCU read lock is needed before checking quiesced flag.
+ 	 *
+-	 * When queue is stopped or quiesced, ignore 'bypass', insert
+-	 * and return BLK_STS_OK to caller, and avoid driver to try to
+-	 * dispatch again.
++	 * When queue is stopped or quiesced, ignore 'bypass_insert' from
++	 * blk_mq_request_issue_directly(), and return BLK_STS_OK to caller,
++	 * and avoid driver to try to dispatch again.
+ 	 */
+-	if (unlikely(blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(q))) {
++	if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(q)) {
+ 		run_queue = false;
+-		bypass = false;
+-		goto out_unlock;
++		bypass_insert = false;
++		goto insert;
  	}
-+
-+	/* inline xattr header or entry across max inline xattr size */
-+	if (IS_XATTR_LAST_ENTRY(entry) &&
-+		(void *)entry + sizeof(__u32) > max_addr) {
-+		*last_addr = entry;
-+		return NULL;
-+	}
- 	return entry;
- }
  
-diff --git a/fs/file.c b/fs/file.c
-index 3209ee271c41..a10487aa0a84 100644
---- a/fs/file.c
-+++ b/fs/file.c
-@@ -457,6 +457,7 @@ struct files_struct init_files = {
- 		.full_fds_bits	= init_files.full_fds_bits_init,
- 	},
- 	.file_lock	= __SPIN_LOCK_UNLOCKED(init_files.file_lock),
-+	.resize_wait	= __WAIT_QUEUE_HEAD_INITIALIZER(init_files.resize_wait),
- };
+-	if (unlikely(q->elevator && !bypass))
+-		goto out_unlock;
++	if (q->elevator && !bypass_insert)
++		goto insert;
  
- static unsigned int find_next_fd(struct fdtable *fdt, unsigned int start)
-diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
-index b92740edc416..4b038f25f256 100644
---- a/fs/gfs2/glock.c
-+++ b/fs/gfs2/glock.c
-@@ -107,7 +107,7 @@ static int glock_wake_function(wait_queue_entry_t *wait, unsigned int mode,
+ 	if (!blk_mq_get_dispatch_budget(hctx))
+-		goto out_unlock;
++		goto insert;
  
- static wait_queue_head_t *glock_waitqueue(struct lm_lockname *name)
- {
--	u32 hash = jhash2((u32 *)name, sizeof(*name) / 4, 0);
-+	u32 hash = jhash2((u32 *)name, ht_parms.key_len / 4, 0);
+ 	if (!blk_mq_get_driver_tag(rq)) {
+ 		blk_mq_put_dispatch_budget(hctx);
+-		goto out_unlock;
++		goto insert;
+ 	}
  
- 	return glock_wait_table + hash_32(hash, GLOCK_WAIT_TABLE_BITS);
- }
-diff --git a/fs/jbd2/commit.c b/fs/jbd2/commit.c
-index 2eb55c3361a8..efd0ce9489ae 100644
---- a/fs/jbd2/commit.c
-+++ b/fs/jbd2/commit.c
-@@ -694,9 +694,11 @@ void jbd2_journal_commit_transaction(journal_t *journal)
-                            the last tag we set up. */
- 
- 			tag->t_flags |= cpu_to_be16(JBD2_FLAG_LAST_TAG);
--
--			jbd2_descriptor_block_csum_set(journal, descriptor);
- start_journal_io:
-+			if (descriptor)
-+				jbd2_descriptor_block_csum_set(journal,
-+							descriptor);
+-	/*
+-	 * Always add a request that has been through
+-	 *.queue_rq() to the hardware dispatch list.
+-	 */
+-	force = true;
+-	ret = __blk_mq_issue_directly(hctx, rq, cookie, last);
+-out_unlock:
++	return __blk_mq_issue_directly(hctx, rq, cookie, last);
++insert:
++	if (bypass_insert)
++		return BLK_STS_RESOURCE;
 +
- 			for (i = 0; i < bufs; i++) {
- 				struct buffer_head *bh = wbuf[i];
- 				/*
-diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
-index 8ef6b6daaa7a..88f2a49338a1 100644
---- a/fs/jbd2/journal.c
-+++ b/fs/jbd2/journal.c
-@@ -1356,6 +1356,10 @@ static int journal_reset(journal_t *journal)
- 	return jbd2_journal_start_thread(journal);
- }
- 
-+/*
-+ * This function expects that the caller will have locked the journal
-+ * buffer head, and will return with it unlocked
-+ */
- static int jbd2_write_superblock(journal_t *journal, int write_flags)
- {
- 	struct buffer_head *bh = journal->j_sb_buffer;
-@@ -1365,7 +1369,6 @@ static int jbd2_write_superblock(journal_t *journal, int write_flags)
- 	trace_jbd2_write_superblock(journal, write_flags);
- 	if (!(journal->j_flags & JBD2_BARRIER))
- 		write_flags &= ~(REQ_FUA | REQ_PREFLUSH);
--	lock_buffer(bh);
- 	if (buffer_write_io_error(bh)) {
- 		/*
- 		 * Oh, dear.  A previous attempt to write the journal
-@@ -1424,6 +1427,7 @@ int jbd2_journal_update_sb_log_tail(journal_t *journal, tid_t tail_tid,
- 	jbd_debug(1, "JBD2: updating superblock (start %lu, seq %u)\n",
- 		  tail_block, tail_tid);
- 
-+	lock_buffer(journal->j_sb_buffer);
- 	sb->s_sequence = cpu_to_be32(tail_tid);
- 	sb->s_start    = cpu_to_be32(tail_block);
- 
-@@ -1454,18 +1458,17 @@ static void jbd2_mark_journal_empty(journal_t *journal, int write_op)
- 	journal_superblock_t *sb = journal->j_superblock;
- 
- 	BUG_ON(!mutex_is_locked(&journal->j_checkpoint_mutex));
--	read_lock(&journal->j_state_lock);
--	/* Is it already empty? */
--	if (sb->s_start == 0) {
--		read_unlock(&journal->j_state_lock);
-+	lock_buffer(journal->j_sb_buffer);
-+	if (sb->s_start == 0) {		/* Is it already empty? */
-+		unlock_buffer(journal->j_sb_buffer);
- 		return;
- 	}
++	blk_mq_request_bypass_insert(rq, run_queue);
++	return BLK_STS_OK;
++}
 +
- 	jbd_debug(1, "JBD2: Marking journal as empty (seq %d)\n",
- 		  journal->j_tail_sequence);
- 
- 	sb->s_sequence = cpu_to_be32(journal->j_tail_sequence);
- 	sb->s_start    = cpu_to_be32(0);
--	read_unlock(&journal->j_state_lock);
- 
- 	jbd2_write_superblock(journal, write_op);
- 
-@@ -1488,9 +1491,8 @@ void jbd2_journal_update_sb_errno(journal_t *journal)
- 	journal_superblock_t *sb = journal->j_superblock;
- 	int errcode;
- 
--	read_lock(&journal->j_state_lock);
-+	lock_buffer(journal->j_sb_buffer);
- 	errcode = journal->j_errno;
--	read_unlock(&journal->j_state_lock);
- 	if (errcode == -ESHUTDOWN)
- 		errcode = 0;
- 	jbd_debug(1, "JBD2: updating superblock error (errno %d)\n", errcode);
-@@ -1894,28 +1896,27 @@ int jbd2_journal_set_features (journal_t *journal, unsigned long compat,
- 
- 	sb = journal->j_superblock;
- 
-+	/* Load the checksum driver if necessary */
-+	if ((journal->j_chksum_driver == NULL) &&
-+	    INCOMPAT_FEATURE_ON(JBD2_FEATURE_INCOMPAT_CSUM_V3)) {
-+		journal->j_chksum_driver = crypto_alloc_shash("crc32c", 0, 0);
-+		if (IS_ERR(journal->j_chksum_driver)) {
-+			printk(KERN_ERR "JBD2: Cannot load crc32c driver.\n");
-+			journal->j_chksum_driver = NULL;
-+			return 0;
-+		}
-+		/* Precompute checksum seed for all metadata */
-+		journal->j_csum_seed = jbd2_chksum(journal, ~0, sb->s_uuid,
-+						   sizeof(sb->s_uuid));
-+	}
++static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
++		struct request *rq, blk_qc_t *cookie)
++{
++	blk_status_t ret;
++	int srcu_idx;
 +
-+	lock_buffer(journal->j_sb_buffer);
++	might_sleep_if(hctx->flags & BLK_MQ_F_BLOCKING);
 +
- 	/* If enabling v3 checksums, update superblock */
- 	if (INCOMPAT_FEATURE_ON(JBD2_FEATURE_INCOMPAT_CSUM_V3)) {
- 		sb->s_checksum_type = JBD2_CRC32C_CHKSUM;
- 		sb->s_feature_compat &=
- 			~cpu_to_be32(JBD2_FEATURE_COMPAT_CHECKSUM);
--
--		/* Load the checksum driver */
--		if (journal->j_chksum_driver == NULL) {
--			journal->j_chksum_driver = crypto_alloc_shash("crc32c",
--								      0, 0);
--			if (IS_ERR(journal->j_chksum_driver)) {
--				printk(KERN_ERR "JBD2: Cannot load crc32c "
--				       "driver.\n");
--				journal->j_chksum_driver = NULL;
--				return 0;
--			}
--
--			/* Precompute checksum seed for all metadata */
--			journal->j_csum_seed = jbd2_chksum(journal, ~0,
--							   sb->s_uuid,
--							   sizeof(sb->s_uuid));
--		}
- 	}
- 
- 	/* If enabling v1 checksums, downgrade superblock */
-@@ -1927,6 +1928,7 @@ int jbd2_journal_set_features (journal_t *journal, unsigned long compat,
- 	sb->s_feature_compat    |= cpu_to_be32(compat);
- 	sb->s_feature_ro_compat |= cpu_to_be32(ro);
- 	sb->s_feature_incompat  |= cpu_to_be32(incompat);
-+	unlock_buffer(journal->j_sb_buffer);
- 
- 	return 1;
- #undef COMPAT_FEATURE_ON
-diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
-index cc35537232f2..f0d8dabe1ff5 100644
---- a/fs/jbd2/transaction.c
-+++ b/fs/jbd2/transaction.c
-@@ -1252,11 +1252,12 @@ int jbd2_journal_get_undo_access(handle_t *handle, struct buffer_head *bh)
- 	struct journal_head *jh;
- 	char *committed_data = NULL;
- 
--	JBUFFER_TRACE(jh, "entry");
- 	if (jbd2_write_access_granted(handle, bh, true))
- 		return 0;
- 
- 	jh = jbd2_journal_add_journal_head(bh);
-+	JBUFFER_TRACE(jh, "entry");
++	hctx_lock(hctx, &srcu_idx);
 +
- 	/*
- 	 * Do this first --- it can drop the journal lock, so we want to
- 	 * make sure that obtaining the committed_data is done
-@@ -1367,15 +1368,17 @@ int jbd2_journal_dirty_metadata(handle_t *handle, struct buffer_head *bh)
- 
- 	if (is_handle_aborted(handle))
- 		return -EROFS;
--	if (!buffer_jbd(bh)) {
--		ret = -EUCLEAN;
--		goto out;
--	}
-+	if (!buffer_jbd(bh))
-+		return -EUCLEAN;
++	ret = __blk_mq_try_issue_directly(hctx, rq, cookie, false, true);
++	if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE)
++		blk_mq_request_bypass_insert(rq, true);
++	else if (ret != BLK_STS_OK)
++		blk_mq_end_request(rq, ret);
 +
- 	/*
- 	 * We don't grab jh reference here since the buffer must be part
- 	 * of the running transaction.
- 	 */
- 	jh = bh2jh(bh);
-+	jbd_debug(5, "journal_head %p\n", jh);
-+	JBUFFER_TRACE(jh, "entry");
++	hctx_unlock(hctx, srcu_idx);
++}
 +
- 	/*
- 	 * This and the following assertions are unreliable since we may see jh
- 	 * in inconsistent state unless we grab bh_state lock. But this is
-@@ -1409,9 +1412,6 @@ int jbd2_journal_dirty_metadata(handle_t *handle, struct buffer_head *bh)
- 	}
++blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last)
++{
++	blk_status_t ret;
++	int srcu_idx;
++	blk_qc_t unused_cookie;
++	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
++
++	hctx_lock(hctx, &srcu_idx);
++	ret = __blk_mq_try_issue_directly(hctx, rq, &unused_cookie, true, last);
+ 	hctx_unlock(hctx, srcu_idx);
+-	switch (ret) {
+-	case BLK_STS_OK:
+-		break;
+-	case BLK_STS_DEV_RESOURCE:
+-	case BLK_STS_RESOURCE:
+-		if (force) {
+-			blk_mq_request_bypass_insert(rq, run_queue);
+-			/*
+-			 * We have to return BLK_STS_OK for the DM
+-			 * to avoid livelock. Otherwise, we return
+-			 * the real result to indicate whether the
+-			 * request is direct-issued successfully.
+-			 */
+-			ret = bypass ? BLK_STS_OK : ret;
+-		} else if (!bypass) {
+-			blk_mq_sched_insert_request(rq, false,
+-						    run_queue, false);
+-		}
+-		break;
+-	default:
+-		if (!bypass)
+-			blk_mq_end_request(rq, ret);
+-		break;
+-	}
  
- 	journal = transaction->t_journal;
--	jbd_debug(5, "journal_head %p\n", jh);
--	JBUFFER_TRACE(jh, "entry");
+ 	return ret;
+ }
+@@ -1880,20 +1882,22 @@ out_unlock:
+ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
+ 		struct list_head *list)
+ {
+-	blk_qc_t unused;
+-	blk_status_t ret = BLK_STS_OK;
 -
- 	jbd_lock_bh_state(bh);
- 
- 	if (jh->b_modified == 0) {
-@@ -1609,14 +1609,21 @@ int jbd2_journal_forget (handle_t *handle, struct buffer_head *bh)
- 		/* However, if the buffer is still owned by a prior
- 		 * (committing) transaction, we can't drop it yet... */
- 		JBUFFER_TRACE(jh, "belongs to older transaction");
--		/* ... but we CAN drop it from the new transaction if we
--		 * have also modified it since the original commit. */
-+		/* ... but we CAN drop it from the new transaction through
-+		 * marking the buffer as freed and set j_next_transaction to
-+		 * the new transaction, so that not only the commit code
-+		 * knows it should clear dirty bits when it is done with the
-+		 * buffer, but also the buffer can be checkpointed only
-+		 * after the new transaction commits. */
- 
--		if (jh->b_next_transaction) {
--			J_ASSERT(jh->b_next_transaction == transaction);
-+		set_buffer_freed(bh);
-+
-+		if (!jh->b_next_transaction) {
- 			spin_lock(&journal->j_list_lock);
--			jh->b_next_transaction = NULL;
-+			jh->b_next_transaction = transaction;
- 			spin_unlock(&journal->j_list_lock);
-+		} else {
-+			J_ASSERT(jh->b_next_transaction == transaction);
- 
- 			/*
- 			 * only drop a reference if this transaction modified
-diff --git a/fs/kernfs/mount.c b/fs/kernfs/mount.c
-index fdf527b6d79c..d71c9405874a 100644
---- a/fs/kernfs/mount.c
-+++ b/fs/kernfs/mount.c
-@@ -196,8 +196,10 @@ struct dentry *kernfs_node_dentry(struct kernfs_node *kn,
- 		return dentry;
- 
- 	knparent = find_next_ancestor(kn, NULL);
--	if (WARN_ON(!knparent))
-+	if (WARN_ON(!knparent)) {
-+		dput(dentry);
- 		return ERR_PTR(-EINVAL);
-+	}
+ 	while (!list_empty(list)) {
++		blk_status_t ret;
+ 		struct request *rq = list_first_entry(list, struct request,
+ 				queuelist);
  
- 	do {
- 		struct dentry *dtmp;
-@@ -206,8 +208,10 @@ struct dentry *kernfs_node_dentry(struct kernfs_node *kn,
- 		if (kn == knparent)
- 			return dentry;
- 		kntmp = find_next_ancestor(kn, knparent);
--		if (WARN_ON(!kntmp))
-+		if (WARN_ON(!kntmp)) {
-+			dput(dentry);
- 			return ERR_PTR(-EINVAL);
+ 		list_del_init(&rq->queuelist);
+-		if (ret == BLK_STS_OK)
+-			ret = blk_mq_try_issue_directly(hctx, rq, &unused,
+-							false,
++		ret = blk_mq_request_issue_directly(rq, list_empty(list));
++		if (ret != BLK_STS_OK) {
++			if (ret == BLK_STS_RESOURCE ||
++					ret == BLK_STS_DEV_RESOURCE) {
++				blk_mq_request_bypass_insert(rq,
+ 							list_empty(list));
+-		else
+-			blk_mq_sched_insert_request(rq, false, true, false);
++				break;
++			}
++			blk_mq_end_request(rq, ret);
 +		}
- 		dtmp = lookup_one_len_unlocked(kntmp->name, dentry,
- 					       strlen(kntmp->name));
- 		dput(dentry);
-diff --git a/fs/lockd/host.c b/fs/lockd/host.c
-index 93fb7cf0b92b..f0b5c987d6ae 100644
---- a/fs/lockd/host.c
-+++ b/fs/lockd/host.c
-@@ -290,12 +290,11 @@ void nlmclnt_release_host(struct nlm_host *host)
- 
- 	WARN_ON_ONCE(host->h_server);
- 
--	if (refcount_dec_and_test(&host->h_count)) {
-+	if (refcount_dec_and_mutex_lock(&host->h_count, &nlm_host_mutex)) {
- 		WARN_ON_ONCE(!list_empty(&host->h_lockowners));
- 		WARN_ON_ONCE(!list_empty(&host->h_granted));
- 		WARN_ON_ONCE(!list_empty(&host->h_reclaim));
- 
--		mutex_lock(&nlm_host_mutex);
- 		nlm_destroy_host_locked(host);
- 		mutex_unlock(&nlm_host_mutex);
  	}
-diff --git a/fs/locks.c b/fs/locks.c
-index ff6af2c32601..5f468cd95f68 100644
---- a/fs/locks.c
-+++ b/fs/locks.c
-@@ -1160,6 +1160,11 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
- 			 */
- 			error = -EDEADLK;
- 			spin_lock(&blocked_lock_lock);
-+			/*
-+			 * Ensure that we don't find any locks blocked on this
-+			 * request during deadlock detection.
-+			 */
-+			__locks_wake_up_blocks(request);
- 			if (likely(!posix_locks_deadlock(request, fl))) {
- 				error = FILE_LOCK_DEFERRED;
- 				__locks_insert_block(fl, request,
-diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
-index 557a5d636183..44258c516305 100644
---- a/fs/nfs/nfs4proc.c
-+++ b/fs/nfs/nfs4proc.c
-@@ -947,6 +947,13 @@ nfs4_sequence_process_interrupted(struct nfs_client *client,
- 
- #endif	/* !CONFIG_NFS_V4_1 */
- 
-+static void nfs41_sequence_res_init(struct nfs4_sequence_res *res)
-+{
-+	res->sr_timestamp = jiffies;
-+	res->sr_status_flags = 0;
-+	res->sr_status = 1;
-+}
-+
- static
- void nfs4_sequence_attach_slot(struct nfs4_sequence_args *args,
- 		struct nfs4_sequence_res *res,
-@@ -958,10 +965,6 @@ void nfs4_sequence_attach_slot(struct nfs4_sequence_args *args,
- 	args->sa_slot = slot;
- 
- 	res->sr_slot = slot;
--	res->sr_timestamp = jiffies;
--	res->sr_status_flags = 0;
--	res->sr_status = 1;
--
+ 
+ 	/*
+@@ -1901,7 +1905,7 @@ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
+ 	 * the driver there was more coming, but that turned out to
+ 	 * be a lie.
+ 	 */
+-	if (ret != BLK_STS_OK && hctx->queue->mq_ops->commit_rqs)
++	if (!list_empty(list) && hctx->queue->mq_ops->commit_rqs)
+ 		hctx->queue->mq_ops->commit_rqs(hctx);
  }
  
- int nfs4_setup_sequence(struct nfs_client *client,
-@@ -1007,6 +1010,7 @@ int nfs4_setup_sequence(struct nfs_client *client,
+@@ -2014,13 +2018,13 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
+ 		if (same_queue_rq) {
+ 			data.hctx = same_queue_rq->mq_hctx;
+ 			blk_mq_try_issue_directly(data.hctx, same_queue_rq,
+-					&cookie, false, true);
++					&cookie);
+ 		}
+ 	} else if ((q->nr_hw_queues > 1 && is_sync) || (!q->elevator &&
+ 			!data.hctx->dispatch_busy)) {
+ 		blk_mq_put_ctx(data.ctx);
+ 		blk_mq_bio_to_request(rq, bio);
+-		blk_mq_try_issue_directly(data.hctx, rq, &cookie, false, true);
++		blk_mq_try_issue_directly(data.hctx, rq, &cookie);
+ 	} else {
+ 		blk_mq_put_ctx(data.ctx);
+ 		blk_mq_bio_to_request(rq, bio);
+diff --git a/block/blk-mq.h b/block/blk-mq.h
+index d0b3dd54ef8d..a3a684a8c633 100644
+--- a/block/blk-mq.h
++++ b/block/blk-mq.h
+@@ -67,10 +67,8 @@ void blk_mq_request_bypass_insert(struct request *rq, bool run_queue);
+ void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
+ 				struct list_head *list);
  
- 	trace_nfs4_setup_sequence(session, args);
- out_start:
-+	nfs41_sequence_res_init(res);
- 	rpc_call_start(task);
- 	return 0;
+-blk_status_t blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
+-						struct request *rq,
+-						blk_qc_t *cookie,
+-						bool bypass, bool last);
++/* Used by blk_insert_cloned_request() to issue request directly */
++blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last);
+ void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
+ 				    struct list_head *list);
  
-@@ -2934,7 +2938,8 @@ static int _nfs4_open_and_get_state(struct nfs4_opendata *opendata,
- 	}
+diff --git a/drivers/acpi/acpica/evgpe.c b/drivers/acpi/acpica/evgpe.c
+index e10fec99a182..4424997ecf30 100644
+--- a/drivers/acpi/acpica/evgpe.c
++++ b/drivers/acpi/acpica/evgpe.c
+@@ -81,8 +81,12 @@ acpi_status acpi_ev_enable_gpe(struct acpi_gpe_event_info *gpe_event_info)
  
- out:
--	nfs4_sequence_free_slot(&opendata->o_res.seq_res);
-+	if (!opendata->cancelled)
-+		nfs4_sequence_free_slot(&opendata->o_res.seq_res);
- 	return ret;
- }
+ 	ACPI_FUNCTION_TRACE(ev_enable_gpe);
  
-@@ -6302,7 +6307,6 @@ static struct nfs4_unlockdata *nfs4_alloc_unlockdata(struct file_lock *fl,
- 	p->arg.seqid = seqid;
- 	p->res.seqid = seqid;
- 	p->lsp = lsp;
--	refcount_inc(&lsp->ls_count);
- 	/* Ensure we don't close file until we're done freeing locks! */
- 	p->ctx = get_nfs_open_context(ctx);
- 	p->l_ctx = nfs_get_lock_context(ctx);
-@@ -6527,7 +6531,6 @@ static struct nfs4_lockdata *nfs4_alloc_lockdata(struct file_lock *fl,
- 	p->res.lock_seqid = p->arg.lock_seqid;
- 	p->lsp = lsp;
- 	p->server = server;
--	refcount_inc(&lsp->ls_count);
- 	p->ctx = get_nfs_open_context(ctx);
- 	locks_init_lock(&p->fl);
- 	locks_copy_lock(&p->fl, fl);
-diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c
-index e54d899c1848..a8951f1f7b4e 100644
---- a/fs/nfs/pagelist.c
-+++ b/fs/nfs/pagelist.c
-@@ -988,6 +988,17 @@ static void nfs_pageio_doio(struct nfs_pageio_descriptor *desc)
- 	}
- }
+-	/* Enable the requested GPE */
++	/* Clear the GPE status */
++	status = acpi_hw_clear_gpe(gpe_event_info);
++	if (ACPI_FAILURE(status))
++		return_ACPI_STATUS(status);
  
-+static void
-+nfs_pageio_cleanup_request(struct nfs_pageio_descriptor *desc,
-+		struct nfs_page *req)
-+{
-+	LIST_HEAD(head);
-+
-+	nfs_list_remove_request(req);
-+	nfs_list_add_request(req, &head);
-+	desc->pg_completion_ops->error_cleanup(&head);
-+}
-+
- /**
-  * nfs_pageio_add_request - Attempt to coalesce a request into a page list.
-  * @desc: destination io descriptor
-@@ -1025,10 +1036,8 @@ static int __nfs_pageio_add_request(struct nfs_pageio_descriptor *desc,
- 			nfs_page_group_unlock(req);
- 			desc->pg_moreio = 1;
- 			nfs_pageio_doio(desc);
--			if (desc->pg_error < 0)
--				return 0;
--			if (mirror->pg_recoalesce)
--				return 0;
-+			if (desc->pg_error < 0 || mirror->pg_recoalesce)
-+				goto out_cleanup_subreq;
- 			/* retry add_request for this subreq */
- 			nfs_page_group_lock(req);
- 			continue;
-@@ -1061,6 +1070,10 @@ err_ptr:
- 	desc->pg_error = PTR_ERR(subreq);
- 	nfs_page_group_unlock(req);
- 	return 0;
-+out_cleanup_subreq:
-+	if (req != subreq)
-+		nfs_pageio_cleanup_request(desc, subreq);
-+	return 0;
++	/* Enable the requested GPE */
+ 	status = acpi_hw_low_set_gpe(gpe_event_info, ACPI_GPE_ENABLE);
+ 	return_ACPI_STATUS(status);
  }
- 
- static int nfs_do_recoalesce(struct nfs_pageio_descriptor *desc)
-@@ -1079,7 +1092,6 @@ static int nfs_do_recoalesce(struct nfs_pageio_descriptor *desc)
- 			struct nfs_page *req;
- 
- 			req = list_first_entry(&head, struct nfs_page, wb_list);
--			nfs_list_remove_request(req);
- 			if (__nfs_pageio_add_request(desc, req))
- 				continue;
- 			if (desc->pg_error < 0) {
-@@ -1168,11 +1180,14 @@ int nfs_pageio_add_request(struct nfs_pageio_descriptor *desc,
- 		if (nfs_pgio_has_mirroring(desc))
- 			desc->pg_mirror_idx = midx;
- 		if (!nfs_pageio_add_request_mirror(desc, dupreq))
--			goto out_failed;
-+			goto out_cleanup_subreq;
+diff --git a/drivers/acpi/acpica/nsobject.c b/drivers/acpi/acpica/nsobject.c
+index 8638f43cfc3d..79d86da1c892 100644
+--- a/drivers/acpi/acpica/nsobject.c
++++ b/drivers/acpi/acpica/nsobject.c
+@@ -186,6 +186,10 @@ void acpi_ns_detach_object(struct acpi_namespace_node *node)
+ 		}
  	}
  
- 	return 1;
- 
-+out_cleanup_subreq:
-+	if (req != dupreq)
-+		nfs_pageio_cleanup_request(desc, dupreq);
- out_failed:
- 	nfs_pageio_error_cleanup(desc);
- 	return 0;
-@@ -1194,7 +1209,7 @@ static void nfs_pageio_complete_mirror(struct nfs_pageio_descriptor *desc,
- 		desc->pg_mirror_idx = mirror_idx;
- 	for (;;) {
- 		nfs_pageio_doio(desc);
--		if (!mirror->pg_recoalesce)
-+		if (desc->pg_error < 0 || !mirror->pg_recoalesce)
- 			break;
- 		if (!nfs_do_recoalesce(desc))
- 			break;
-diff --git a/fs/nfsd/nfs3proc.c b/fs/nfsd/nfs3proc.c
-index 9eb8086ea841..c9cf46e0c040 100644
---- a/fs/nfsd/nfs3proc.c
-+++ b/fs/nfsd/nfs3proc.c
-@@ -463,8 +463,19 @@ nfsd3_proc_readdir(struct svc_rqst *rqstp)
- 					&resp->common, nfs3svc_encode_entry);
- 	memcpy(resp->verf, argp->verf, 8);
- 	resp->count = resp->buffer - argp->buffer;
--	if (resp->offset)
--		xdr_encode_hyper(resp->offset, argp->cookie);
-+	if (resp->offset) {
-+		loff_t offset = argp->cookie;
-+
-+		if (unlikely(resp->offset1)) {
-+			/* we ended up with offset on a page boundary */
-+			*resp->offset = htonl(offset >> 32);
-+			*resp->offset1 = htonl(offset & 0xffffffff);
-+			resp->offset1 = NULL;
-+		} else {
-+			xdr_encode_hyper(resp->offset, offset);
-+		}
-+		resp->offset = NULL;
++	if (obj_desc->common.type == ACPI_TYPE_REGION) {
++		acpi_ut_remove_address_range(obj_desc->region.space_id, node);
 +	}
++
+ 	/* Clear the Node entry in all cases */
  
- 	RETURN_STATUS(nfserr);
- }
-@@ -533,6 +544,7 @@ nfsd3_proc_readdirplus(struct svc_rqst *rqstp)
- 		} else {
- 			xdr_encode_hyper(resp->offset, offset);
- 		}
-+		resp->offset = NULL;
- 	}
- 
- 	RETURN_STATUS(nfserr);
-diff --git a/fs/nfsd/nfs3xdr.c b/fs/nfsd/nfs3xdr.c
-index 9b973f4f7d01..83919116d5cb 100644
---- a/fs/nfsd/nfs3xdr.c
-+++ b/fs/nfsd/nfs3xdr.c
-@@ -921,6 +921,7 @@ encode_entry(struct readdir_cd *ccd, const char *name, int namlen,
- 		} else {
- 			xdr_encode_hyper(cd->offset, offset64);
- 		}
-+		cd->offset = NULL;
- 	}
+ 	node->object = NULL;
+diff --git a/drivers/char/Kconfig b/drivers/char/Kconfig
+index 2e2ffe7010aa..51c77f0e47b2 100644
+--- a/drivers/char/Kconfig
++++ b/drivers/char/Kconfig
+@@ -351,7 +351,7 @@ config XILINX_HWICAP
  
- 	/*
-diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
-index fb3c9844c82a..6a45fb00c5fc 100644
---- a/fs/nfsd/nfs4state.c
-+++ b/fs/nfsd/nfs4state.c
-@@ -1544,16 +1544,16 @@ static u32 nfsd4_get_drc_mem(struct nfsd4_channel_attrs *ca)
- {
- 	u32 slotsize = slot_bytes(ca);
- 	u32 num = ca->maxreqs;
--	int avail;
-+	unsigned long avail, total_avail;
- 
- 	spin_lock(&nfsd_drc_lock);
--	avail = min((unsigned long)NFSD_MAX_MEM_PER_SESSION,
--		    nfsd_drc_max_mem - nfsd_drc_mem_used);
-+	total_avail = nfsd_drc_max_mem - nfsd_drc_mem_used;
-+	avail = min((unsigned long)NFSD_MAX_MEM_PER_SESSION, total_avail);
- 	/*
- 	 * Never use more than a third of the remaining memory,
- 	 * unless it's the only way to give this client a slot:
- 	 */
--	avail = clamp_t(int, avail, slotsize, avail/3);
-+	avail = clamp_t(int, avail, slotsize, total_avail/3);
- 	num = min_t(int, num, avail / slotsize);
- 	nfsd_drc_mem_used += num * slotsize;
- 	spin_unlock(&nfsd_drc_lock);
-diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
-index 72a7681f4046..f2feb2d11bae 100644
---- a/fs/nfsd/nfsctl.c
-+++ b/fs/nfsd/nfsctl.c
-@@ -1126,7 +1126,7 @@ static ssize_t write_v4_end_grace(struct file *file, char *buf, size_t size)
- 		case 'Y':
- 		case 'y':
- 		case '1':
--			if (nn->nfsd_serv)
-+			if (!nn->nfsd_serv)
- 				return -EBUSY;
- 			nfsd4_end_grace(nn);
- 			break;
-diff --git a/fs/ocfs2/cluster/nodemanager.c b/fs/ocfs2/cluster/nodemanager.c
-index 0e4166cc23a0..4ac775e32240 100644
---- a/fs/ocfs2/cluster/nodemanager.c
-+++ b/fs/ocfs2/cluster/nodemanager.c
-@@ -621,13 +621,15 @@ static void o2nm_node_group_drop_item(struct config_group *group,
- 	struct o2nm_node *node = to_o2nm_node(item);
- 	struct o2nm_cluster *cluster = to_o2nm_cluster(group->cg_item.ci_parent);
- 
--	o2net_disconnect_node(node);
-+	if (cluster->cl_nodes[node->nd_num] == node) {
-+		o2net_disconnect_node(node);
- 
--	if (cluster->cl_has_local &&
--	    (cluster->cl_local_node == node->nd_num)) {
--		cluster->cl_has_local = 0;
--		cluster->cl_local_node = O2NM_INVALID_NODE_NUM;
--		o2net_stop_listening(node);
-+		if (cluster->cl_has_local &&
-+		    (cluster->cl_local_node == node->nd_num)) {
-+			cluster->cl_has_local = 0;
-+			cluster->cl_local_node = O2NM_INVALID_NODE_NUM;
-+			o2net_stop_listening(node);
-+		}
+ config R3964
+ 	tristate "Siemens R3964 line discipline"
+-	depends on TTY
++	depends on TTY && BROKEN
+ 	---help---
+ 	  This driver allows synchronous communication with devices using the
+ 	  Siemens R3964 packet protocol. Unless you are dealing with special
+diff --git a/drivers/clk/meson/meson-aoclk.c b/drivers/clk/meson/meson-aoclk.c
+index 258c8d259ea1..f965845917e3 100644
+--- a/drivers/clk/meson/meson-aoclk.c
++++ b/drivers/clk/meson/meson-aoclk.c
+@@ -65,20 +65,15 @@ int meson_aoclkc_probe(struct platform_device *pdev)
+ 		return ret;
  	}
  
- 	/* XXX call into net to stop this node from trading messages */
-diff --git a/fs/ocfs2/refcounttree.c b/fs/ocfs2/refcounttree.c
-index a35259eebc56..1dc9a08e8bdc 100644
---- a/fs/ocfs2/refcounttree.c
-+++ b/fs/ocfs2/refcounttree.c
-@@ -4719,22 +4719,23 @@ out:
- 
- /* Lock an inode and grab a bh pointing to the inode. */
- int ocfs2_reflink_inodes_lock(struct inode *s_inode,
--			      struct buffer_head **bh1,
-+			      struct buffer_head **bh_s,
- 			      struct inode *t_inode,
--			      struct buffer_head **bh2)
-+			      struct buffer_head **bh_t)
- {
--	struct inode *inode1;
--	struct inode *inode2;
-+	struct inode *inode1 = s_inode;
-+	struct inode *inode2 = t_inode;
- 	struct ocfs2_inode_info *oi1;
- 	struct ocfs2_inode_info *oi2;
-+	struct buffer_head *bh1 = NULL;
-+	struct buffer_head *bh2 = NULL;
- 	bool same_inode = (s_inode == t_inode);
-+	bool need_swap = (inode1->i_ino > inode2->i_ino);
- 	int status;
- 
- 	/* First grab the VFS and rw locks. */
- 	lock_two_nondirectories(s_inode, t_inode);
--	inode1 = s_inode;
--	inode2 = t_inode;
--	if (inode1->i_ino > inode2->i_ino)
-+	if (need_swap)
- 		swap(inode1, inode2);
- 
- 	status = ocfs2_rw_lock(inode1, 1);
-@@ -4757,17 +4758,13 @@ int ocfs2_reflink_inodes_lock(struct inode *s_inode,
- 	trace_ocfs2_double_lock((unsigned long long)oi1->ip_blkno,
- 				(unsigned long long)oi2->ip_blkno);
- 
--	if (*bh1)
--		*bh1 = NULL;
--	if (*bh2)
--		*bh2 = NULL;
--
- 	/* We always want to lock the one with the lower lockid first. */
- 	if (oi1->ip_blkno > oi2->ip_blkno)
- 		mlog_errno(-ENOLCK);
- 
- 	/* lock id1 */
--	status = ocfs2_inode_lock_nested(inode1, bh1, 1, OI_LS_REFLINK_TARGET);
-+	status = ocfs2_inode_lock_nested(inode1, &bh1, 1,
-+					 OI_LS_REFLINK_TARGET);
- 	if (status < 0) {
- 		if (status != -ENOENT)
- 			mlog_errno(status);
-@@ -4776,15 +4773,25 @@ int ocfs2_reflink_inodes_lock(struct inode *s_inode,
- 
- 	/* lock id2 */
- 	if (!same_inode) {
--		status = ocfs2_inode_lock_nested(inode2, bh2, 1,
-+		status = ocfs2_inode_lock_nested(inode2, &bh2, 1,
- 						 OI_LS_REFLINK_TARGET);
- 		if (status < 0) {
- 			if (status != -ENOENT)
- 				mlog_errno(status);
- 			goto out_cl1;
- 		}
--	} else
--		*bh2 = *bh1;
-+	} else {
-+		bh2 = bh1;
-+	}
-+
+-	/* Populate regmap */
+-	for (clkid = 0; clkid < data->num_clks; clkid++)
 +	/*
-+	 * If we swapped inode order above, we have to swap the buffer heads
-+	 * before passing them back to the caller.
++	 * Populate regmap and register all clks
 +	 */
-+	if (need_swap)
-+		swap(bh1, bh2);
-+	*bh_s = bh1;
-+	*bh_t = bh2;
- 
- 	trace_ocfs2_double_lock_end(
- 			(unsigned long long)oi1->ip_blkno,
-@@ -4794,8 +4801,7 @@ int ocfs2_reflink_inodes_lock(struct inode *s_inode,
- 
- out_cl1:
- 	ocfs2_inode_unlock(inode1, 1);
--	brelse(*bh1);
--	*bh1 = NULL;
-+	brelse(bh1);
- out_rw2:
- 	ocfs2_rw_unlock(inode2, 1);
- out_i2:
-diff --git a/fs/open.c b/fs/open.c
-index 0285ce7dbd51..f1c2f855fd43 100644
---- a/fs/open.c
-+++ b/fs/open.c
-@@ -733,6 +733,12 @@ static int do_dentry_open(struct file *f,
- 		return 0;
++	for (clkid = 0; clkid < data->num_clks; clkid++) {
+ 		data->clks[clkid]->map = regmap;
+ 
+-	/* Register all clks */
+-	for (clkid = 0; clkid < data->hw_data->num; clkid++) {
+-		if (!data->hw_data->hws[clkid])
+-			continue;
+-
+ 		ret = devm_clk_hw_register(dev, data->hw_data->hws[clkid]);
+-		if (ret) {
+-			dev_err(dev, "Clock registration failed\n");
++		if (ret)
+ 			return ret;
+-		}
  	}
  
-+	/* Any file opened for execve()/uselib() has to be a regular file. */
-+	if (unlikely(f->f_flags & FMODE_EXEC && !S_ISREG(inode->i_mode))) {
-+		error = -EACCES;
-+		goto cleanup_file;
-+	}
-+
- 	if (f->f_mode & FMODE_WRITE && !special_file(inode->i_mode)) {
- 		error = get_write_access(inode);
- 		if (unlikely(error))
-diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c
-index 9e62dcf06fc4..68b3303e4b46 100644
---- a/fs/overlayfs/copy_up.c
-+++ b/fs/overlayfs/copy_up.c
-@@ -443,6 +443,24 @@ static int ovl_copy_up_inode(struct ovl_copy_up_ctx *c, struct dentry *temp)
+ 	return devm_of_clk_add_hw_provider(dev, of_clk_hw_onecell_get,
+diff --git a/drivers/gpu/drm/i915/gvt/gtt.c b/drivers/gpu/drm/i915/gvt/gtt.c
+index c7103dd2d8d5..563ab8590061 100644
+--- a/drivers/gpu/drm/i915/gvt/gtt.c
++++ b/drivers/gpu/drm/i915/gvt/gtt.c
+@@ -1942,7 +1942,7 @@ void _intel_vgpu_mm_release(struct kref *mm_ref)
+  */
+ void intel_vgpu_unpin_mm(struct intel_vgpu_mm *mm)
  {
- 	int err;
+-	atomic_dec(&mm->pincount);
++	atomic_dec_if_positive(&mm->pincount);
+ }
  
-+	/*
-+	 * Copy up data first and then xattrs. Writing data after
-+	 * xattrs will remove security.capability xattr automatically.
-+	 */
-+	if (S_ISREG(c->stat.mode) && !c->metacopy) {
-+		struct path upperpath, datapath;
-+
-+		ovl_path_upper(c->dentry, &upperpath);
-+		if (WARN_ON(upperpath.dentry != NULL))
-+			return -EIO;
-+		upperpath.dentry = temp;
-+
-+		ovl_path_lowerdata(c->dentry, &datapath);
-+		err = ovl_copy_up_data(&datapath, &upperpath, c->stat.size);
-+		if (err)
-+			return err;
-+	}
-+
- 	err = ovl_copy_xattr(c->lowerpath.dentry, temp);
- 	if (err)
- 		return err;
-@@ -460,19 +478,6 @@ static int ovl_copy_up_inode(struct ovl_copy_up_ctx *c, struct dentry *temp)
- 			return err;
+ /**
+diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
+index 55bb7885e228..8fff49affc11 100644
+--- a/drivers/gpu/drm/i915/gvt/scheduler.c
++++ b/drivers/gpu/drm/i915/gvt/scheduler.c
+@@ -1475,8 +1475,9 @@ intel_vgpu_create_workload(struct intel_vgpu *vgpu, int ring_id,
+ 		intel_runtime_pm_put(dev_priv);
+ 	}
+ 
+-	if (ret && (vgpu_is_vm_unhealthy(ret))) {
+-		enter_failsafe_mode(vgpu, GVT_FAILSAFE_GUEST_ERR);
++	if (ret) {
++		if (vgpu_is_vm_unhealthy(ret))
++			enter_failsafe_mode(vgpu, GVT_FAILSAFE_GUEST_ERR);
+ 		intel_vgpu_destroy_workload(workload);
+ 		return ERR_PTR(ret);
  	}
+diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
+index 22a74608c6e4..dcd1df5322e8 100644
+--- a/drivers/gpu/drm/i915/intel_dp.c
++++ b/drivers/gpu/drm/i915/intel_dp.c
+@@ -1845,42 +1845,6 @@ intel_dp_compute_link_config_wide(struct intel_dp *intel_dp,
+ 	return false;
+ }
  
--	if (S_ISREG(c->stat.mode) && !c->metacopy) {
--		struct path upperpath, datapath;
+-/* Optimize link config in order: max bpp, min lanes, min clock */
+-static bool
+-intel_dp_compute_link_config_fast(struct intel_dp *intel_dp,
+-				  struct intel_crtc_state *pipe_config,
+-				  const struct link_config_limits *limits)
+-{
+-	struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
+-	int bpp, clock, lane_count;
+-	int mode_rate, link_clock, link_avail;
+-
+-	for (bpp = limits->max_bpp; bpp >= limits->min_bpp; bpp -= 2 * 3) {
+-		mode_rate = intel_dp_link_required(adjusted_mode->crtc_clock,
+-						   bpp);
+-
+-		for (lane_count = limits->min_lane_count;
+-		     lane_count <= limits->max_lane_count;
+-		     lane_count <<= 1) {
+-			for (clock = limits->min_clock; clock <= limits->max_clock; clock++) {
+-				link_clock = intel_dp->common_rates[clock];
+-				link_avail = intel_dp_max_data_rate(link_clock,
+-								    lane_count);
 -
--		ovl_path_upper(c->dentry, &upperpath);
--		BUG_ON(upperpath.dentry != NULL);
--		upperpath.dentry = temp;
+-				if (mode_rate <= link_avail) {
+-					pipe_config->lane_count = lane_count;
+-					pipe_config->pipe_bpp = bpp;
+-					pipe_config->port_clock = link_clock;
 -
--		ovl_path_lowerdata(c->dentry, &datapath);
--		err = ovl_copy_up_data(&datapath, &upperpath, c->stat.size);
--		if (err)
--			return err;
+-					return true;
+-				}
+-			}
+-		}
 -	}
 -
- 	if (c->metacopy) {
- 		err = ovl_check_setxattr(c->dentry, temp, OVL_XATTR_METACOPY,
- 					 NULL, 0, -EOPNOTSUPP);
-@@ -737,6 +742,8 @@ static int ovl_copy_up_meta_inode_data(struct ovl_copy_up_ctx *c)
+-	return false;
+-}
+-
+ static int intel_dp_dsc_compute_bpp(struct intel_dp *intel_dp, u8 dsc_max_bpc)
  {
- 	struct path upperpath, datapath;
- 	int err;
-+	char *capability = NULL;
-+	ssize_t uninitialized_var(cap_size);
- 
- 	ovl_path_upper(c->dentry, &upperpath);
- 	if (WARN_ON(upperpath.dentry == NULL))
-@@ -746,15 +753,37 @@ static int ovl_copy_up_meta_inode_data(struct ovl_copy_up_ctx *c)
- 	if (WARN_ON(datapath.dentry == NULL))
- 		return -EIO;
- 
-+	if (c->stat.size) {
-+		err = cap_size = ovl_getxattr(upperpath.dentry, XATTR_NAME_CAPS,
-+					      &capability, 0);
-+		if (err < 0 && err != -ENODATA)
-+			goto out;
-+	}
-+
- 	err = ovl_copy_up_data(&datapath, &upperpath, c->stat.size);
- 	if (err)
--		return err;
-+		goto out_free;
-+
-+	/*
-+	 * Writing to upper file will clear security.capability xattr. We
-+	 * don't want that to happen for normal copy-up operation.
-+	 */
-+	if (capability) {
-+		err = ovl_do_setxattr(upperpath.dentry, XATTR_NAME_CAPS,
-+				      capability, cap_size, 0);
-+		if (err)
-+			goto out_free;
-+	}
-+
+ 	int i, num_bpc;
+@@ -2013,15 +1977,13 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
+ 	limits.min_bpp = 6 * 3;
+ 	limits.max_bpp = intel_dp_compute_bpp(intel_dp, pipe_config);
  
- 	err = vfs_removexattr(upperpath.dentry, OVL_XATTR_METACOPY);
- 	if (err)
--		return err;
-+		goto out_free;
+-	if (intel_dp_is_edp(intel_dp) && intel_dp->edp_dpcd[0] < DP_EDP_14) {
++	if (intel_dp_is_edp(intel_dp)) {
+ 		/*
+ 		 * Use the maximum clock and number of lanes the eDP panel
+-		 * advertizes being capable of. The eDP 1.3 and earlier panels
+-		 * are generally designed to support only a single clock and
+-		 * lane configuration, and typically these values correspond to
+-		 * the native resolution of the panel. With eDP 1.4 rate select
+-		 * and DSC, this is decreasingly the case, and we need to be
+-		 * able to select less than maximum link config.
++		 * advertizes being capable of. The panels are generally
++		 * designed to support only a single clock and lane
++		 * configuration, and typically these values correspond to the
++		 * native resolution of the panel.
+ 		 */
+ 		limits.min_lane_count = limits.max_lane_count;
+ 		limits.min_clock = limits.max_clock;
+@@ -2035,22 +1997,11 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
+ 		      intel_dp->common_rates[limits.max_clock],
+ 		      limits.max_bpp, adjusted_mode->crtc_clock);
  
- 	ovl_set_upperdata(d_inode(c->dentry));
-+out_free:
-+	kfree(capability);
-+out:
- 	return err;
- }
+-	if (intel_dp_is_edp(intel_dp))
+-		/*
+-		 * Optimize for fast and narrow. eDP 1.3 section 3.3 and eDP 1.4
+-		 * section A.1: "It is recommended that the minimum number of
+-		 * lanes be used, using the minimum link rate allowed for that
+-		 * lane configuration."
+-		 *
+-		 * Note that we use the max clock and lane count for eDP 1.3 and
+-		 * earlier, and fast vs. wide is irrelevant.
+-		 */
+-		ret = intel_dp_compute_link_config_fast(intel_dp, pipe_config,
+-							&limits);
+-	else
+-		/* Optimize for slow and wide. */
+-		ret = intel_dp_compute_link_config_wide(intel_dp, pipe_config,
+-							&limits);
++	/*
++	 * Optimize for slow and wide. This is the place to add alternative
++	 * optimization policy.
++	 */
++	ret = intel_dp_compute_link_config_wide(intel_dp, pipe_config, &limits);
  
-diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h
-index 5e45cb3630a0..9c6018287d57 100644
---- a/fs/overlayfs/overlayfs.h
-+++ b/fs/overlayfs/overlayfs.h
-@@ -277,6 +277,8 @@ int ovl_lock_rename_workdir(struct dentry *workdir, struct dentry *upperdir);
- int ovl_check_metacopy_xattr(struct dentry *dentry);
- bool ovl_is_metacopy_dentry(struct dentry *dentry);
- char *ovl_get_redirect_xattr(struct dentry *dentry, int padding);
-+ssize_t ovl_getxattr(struct dentry *dentry, char *name, char **value,
-+		     size_t padding);
- 
- static inline bool ovl_is_impuredir(struct dentry *dentry)
+ 	/* enable compression if the mode doesn't fit available BW */
+ 	if (!ret) {
+diff --git a/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c b/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c
+index dc47720c99ba..39d8509d96a0 100644
+--- a/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c
++++ b/drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c
+@@ -48,8 +48,13 @@ static enum drm_mode_status
+ sun8i_dw_hdmi_mode_valid_h6(struct drm_connector *connector,
+ 			    const struct drm_display_mode *mode)
  {
-diff --git a/fs/overlayfs/util.c b/fs/overlayfs/util.c
-index 7c01327b1852..4035e640f402 100644
---- a/fs/overlayfs/util.c
-+++ b/fs/overlayfs/util.c
-@@ -863,28 +863,49 @@ bool ovl_is_metacopy_dentry(struct dentry *dentry)
- 	return (oe->numlower > 1);
- }
+-	/* This is max for HDMI 2.0b (4K@60Hz) */
+-	if (mode->clock > 594000)
++	/*
++	 * Controller support maximum of 594 MHz, which correlates to
++	 * 4K@60Hz 4:4:4 or RGB. However, for frequencies greater than
++	 * 340 MHz scrambling has to be enabled. Because scrambling is
++	 * not yet implemented, just limit to 340 MHz for now.
++	 */
++	if (mode->clock > 340000)
+ 		return MODE_CLOCK_HIGH;
  
--char *ovl_get_redirect_xattr(struct dentry *dentry, int padding)
-+ssize_t ovl_getxattr(struct dentry *dentry, char *name, char **value,
-+		     size_t padding)
- {
--	int res;
--	char *s, *next, *buf = NULL;
-+	ssize_t res;
-+	char *buf = NULL;
- 
--	res = vfs_getxattr(dentry, OVL_XATTR_REDIRECT, NULL, 0);
-+	res = vfs_getxattr(dentry, name, NULL, 0);
- 	if (res < 0) {
- 		if (res == -ENODATA || res == -EOPNOTSUPP)
--			return NULL;
-+			return -ENODATA;
- 		goto fail;
- 	}
+ 	return MODE_OK;
+diff --git a/drivers/gpu/drm/udl/udl_drv.c b/drivers/gpu/drm/udl/udl_drv.c
+index a63e3011e971..bd4f0b88bbd7 100644
+--- a/drivers/gpu/drm/udl/udl_drv.c
++++ b/drivers/gpu/drm/udl/udl_drv.c
+@@ -51,6 +51,7 @@ static struct drm_driver driver = {
+ 	.driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_PRIME,
+ 	.load = udl_driver_load,
+ 	.unload = udl_driver_unload,
++	.release = udl_driver_release,
  
--	buf = kzalloc(res + padding + 1, GFP_KERNEL);
--	if (!buf)
--		return ERR_PTR(-ENOMEM);
-+	if (res != 0) {
-+		buf = kzalloc(res + padding, GFP_KERNEL);
-+		if (!buf)
-+			return -ENOMEM;
+ 	/* gem hooks */
+ 	.gem_free_object_unlocked = udl_gem_free_object,
+diff --git a/drivers/gpu/drm/udl/udl_drv.h b/drivers/gpu/drm/udl/udl_drv.h
+index e9e9b1ff678e..4ae67d882eae 100644
+--- a/drivers/gpu/drm/udl/udl_drv.h
++++ b/drivers/gpu/drm/udl/udl_drv.h
+@@ -104,6 +104,7 @@ void udl_urb_completion(struct urb *urb);
  
--	if (res == 0)
--		goto invalid;
-+		res = vfs_getxattr(dentry, name, buf, res);
-+		if (res < 0)
-+			goto fail;
-+	}
-+	*value = buf;
-+
-+	return res;
-+
-+fail:
-+	pr_warn_ratelimited("overlayfs: failed to get xattr %s: err=%zi)\n",
-+			    name, res);
-+	kfree(buf);
-+	return res;
-+}
+ int udl_driver_load(struct drm_device *dev, unsigned long flags);
+ void udl_driver_unload(struct drm_device *dev);
++void udl_driver_release(struct drm_device *dev);
  
--	res = vfs_getxattr(dentry, OVL_XATTR_REDIRECT, buf, res);
-+char *ovl_get_redirect_xattr(struct dentry *dentry, int padding)
-+{
-+	int res;
-+	char *s, *next, *buf = NULL;
-+
-+	res = ovl_getxattr(dentry, OVL_XATTR_REDIRECT, &buf, padding + 1);
-+	if (res == -ENODATA)
-+		return NULL;
- 	if (res < 0)
--		goto fail;
-+		return ERR_PTR(res);
- 	if (res == 0)
- 		goto invalid;
- 
-@@ -900,15 +921,9 @@ char *ovl_get_redirect_xattr(struct dentry *dentry, int padding)
- 	}
+ int udl_fbdev_init(struct drm_device *dev);
+ void udl_fbdev_cleanup(struct drm_device *dev);
+diff --git a/drivers/gpu/drm/udl/udl_main.c b/drivers/gpu/drm/udl/udl_main.c
+index 1b014d92855b..19055dda3140 100644
+--- a/drivers/gpu/drm/udl/udl_main.c
++++ b/drivers/gpu/drm/udl/udl_main.c
+@@ -378,6 +378,12 @@ void udl_driver_unload(struct drm_device *dev)
+ 		udl_free_urb_list(dev);
  
- 	return buf;
--
--err_free:
--	kfree(buf);
--	return ERR_PTR(res);
--fail:
--	pr_warn_ratelimited("overlayfs: failed to get redirect (%i)\n", res);
--	goto err_free;
- invalid:
- 	pr_warn_ratelimited("overlayfs: invalid redirect (%s)\n", buf);
- 	res = -EINVAL;
--	goto err_free;
-+	kfree(buf);
-+	return ERR_PTR(res);
+ 	udl_fbdev_cleanup(dev);
+-	udl_modeset_cleanup(dev);
+ 	kfree(udl);
  }
-diff --git a/fs/pipe.c b/fs/pipe.c
-index bdc5d3c0977d..c51750ed4011 100644
---- a/fs/pipe.c
-+++ b/fs/pipe.c
-@@ -234,6 +234,14 @@ static const struct pipe_buf_operations anon_pipe_buf_ops = {
- 	.get = generic_pipe_buf_get,
- };
- 
-+static const struct pipe_buf_operations anon_pipe_buf_nomerge_ops = {
-+	.can_merge = 0,
-+	.confirm = generic_pipe_buf_confirm,
-+	.release = anon_pipe_buf_release,
-+	.steal = anon_pipe_buf_steal,
-+	.get = generic_pipe_buf_get,
-+};
 +
- static const struct pipe_buf_operations packet_pipe_buf_ops = {
- 	.can_merge = 0,
- 	.confirm = generic_pipe_buf_confirm,
-@@ -242,6 +250,12 @@ static const struct pipe_buf_operations packet_pipe_buf_ops = {
- 	.get = generic_pipe_buf_get,
- };
- 
-+void pipe_buf_mark_unmergeable(struct pipe_buffer *buf)
++void udl_driver_release(struct drm_device *dev)
 +{
-+	if (buf->ops == &anon_pipe_buf_ops)
-+		buf->ops = &anon_pipe_buf_nomerge_ops;
++	udl_modeset_cleanup(dev);
++	drm_dev_fini(dev);
++	kfree(dev);
 +}
-+
- static ssize_t
- pipe_read(struct kiocb *iocb, struct iov_iter *to)
- {
-diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c
-index 4d598a399bbf..d65390727541 100644
---- a/fs/proc/proc_sysctl.c
-+++ b/fs/proc/proc_sysctl.c
-@@ -1626,7 +1626,8 @@ static void drop_sysctl_table(struct ctl_table_header *header)
- 	if (--header->nreg)
- 		return;
- 
--	put_links(header);
-+	if (parent)
-+		put_links(header);
- 	start_unregistering(header);
- 	if (!--header->count)
- 		kfree_rcu(header, rcu);
-diff --git a/fs/read_write.c b/fs/read_write.c
-index ff3c5e6f87cf..27b69b85d49f 100644
---- a/fs/read_write.c
-+++ b/fs/read_write.c
-@@ -1238,6 +1238,9 @@ COMPAT_SYSCALL_DEFINE5(preadv64v2, unsigned long, fd,
- 		const struct compat_iovec __user *,vec,
- 		unsigned long, vlen, loff_t, pos, rwf_t, flags)
- {
-+	if (pos == -1)
-+		return do_compat_readv(fd, vec, vlen, flags);
-+
- 	return do_compat_preadv64(fd, vec, vlen, pos, flags);
- }
- #endif
-@@ -1344,6 +1347,9 @@ COMPAT_SYSCALL_DEFINE5(pwritev64v2, unsigned long, fd,
- 		const struct compat_iovec __user *,vec,
- 		unsigned long, vlen, loff_t, pos, rwf_t, flags)
+diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
+index f39a183d59c2..e7e946035027 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_object.c
++++ b/drivers/gpu/drm/virtio/virtgpu_object.c
+@@ -28,10 +28,21 @@
+ static int virtio_gpu_resource_id_get(struct virtio_gpu_device *vgdev,
+ 				       uint32_t *resid)
  {
-+	if (pos == -1)
-+		return do_compat_writev(fd, vec, vlen, flags);
-+
- 	return do_compat_pwritev64(fd, vec, vlen, pos, flags);
- }
- #endif
-diff --git a/fs/splice.c b/fs/splice.c
-index de2ede048473..90c29675d573 100644
---- a/fs/splice.c
-+++ b/fs/splice.c
-@@ -1597,6 +1597,8 @@ retry:
- 			 */
- 			obuf->flags &= ~PIPE_BUF_FLAG_GIFT;
- 
-+			pipe_buf_mark_unmergeable(obuf);
-+
- 			obuf->len = len;
- 			opipe->nrbufs++;
- 			ibuf->offset += obuf->len;
-@@ -1671,6 +1673,8 @@ static int link_pipe(struct pipe_inode_info *ipipe,
- 		 */
- 		obuf->flags &= ~PIPE_BUF_FLAG_GIFT;
++#if 0
+ 	int handle = ida_alloc(&vgdev->resource_ida, GFP_KERNEL);
  
-+		pipe_buf_mark_unmergeable(obuf);
+ 	if (handle < 0)
+ 		return handle;
++#else
++	static int handle;
 +
- 		if (obuf->len > len)
- 			obuf->len = len;
- 
-diff --git a/fs/udf/truncate.c b/fs/udf/truncate.c
-index b647f0bd150c..94220ba85628 100644
---- a/fs/udf/truncate.c
-+++ b/fs/udf/truncate.c
-@@ -260,6 +260,9 @@ void udf_truncate_extents(struct inode *inode)
- 			epos.block = eloc;
- 			epos.bh = udf_tread(sb,
- 					udf_get_lb_pblock(sb, &eloc, 0));
-+			/* Error reading indirect block? */
-+			if (!epos.bh)
-+				return;
- 			if (elen)
- 				indirect_ext_len =
- 					(elen + sb->s_blocksize - 1) >>
-diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
-index 3d7a6a9c2370..f8f6f04c4453 100644
---- a/include/asm-generic/vmlinux.lds.h
-+++ b/include/asm-generic/vmlinux.lds.h
-@@ -733,7 +733,7 @@
- 		KEEP(*(.orc_unwind_ip))					\
- 		__stop_orc_unwind_ip = .;				\
- 	}								\
--	. = ALIGN(6);							\
-+	. = ALIGN(2);							\
- 	.orc_unwind : AT(ADDR(.orc_unwind) - LOAD_OFFSET) {		\
- 		__start_orc_unwind = .;					\
- 		KEEP(*(.orc_unwind))					\
-diff --git a/include/drm/drm_cache.h b/include/drm/drm_cache.h
-index bfe1639df02d..97fc498dc767 100644
---- a/include/drm/drm_cache.h
-+++ b/include/drm/drm_cache.h
-@@ -47,6 +47,24 @@ static inline bool drm_arch_can_wc_memory(void)
- 	return false;
- #elif defined(CONFIG_MIPS) && defined(CONFIG_CPU_LOONGSON3)
- 	return false;
-+#elif defined(CONFIG_ARM) || defined(CONFIG_ARM64)
 +	/*
-+	 * The DRM driver stack is designed to work with cache coherent devices
-+	 * only, but permits an optimization to be enabled in some cases, where
-+	 * for some buffers, both the CPU and the GPU use uncached mappings,
-+	 * removing the need for DMA snooping and allocation in the CPU caches.
-+	 *
-+	 * The use of uncached GPU mappings relies on the correct implementation
-+	 * of the PCIe NoSnoop TLP attribute by the platform, otherwise the GPU
-+	 * will use cached mappings nonetheless. On x86 platforms, this does not
-+	 * seem to matter, as uncached CPU mappings will snoop the caches in any
-+	 * case. However, on ARM and arm64, enabling this optimization on a
-+	 * platform where NoSnoop is ignored results in loss of coherency, which
-+	 * breaks correct operation of the device. Since we have no way of
-+	 * detecting whether NoSnoop works or not, just disable this
-+	 * optimization entirely for ARM and arm64.
++	 * FIXME: dirty hack to avoid re-using IDs, virglrenderer
++	 * can't deal with that.  Needs fixing in virglrenderer, also
++	 * should figure a better way to handle that in the guest.
 +	 */
-+	return false;
- #else
- 	return true;
- #endif
-diff --git a/include/linux/atalk.h b/include/linux/atalk.h
-index 23f805562f4e..840cf92307ba 100644
---- a/include/linux/atalk.h
-+++ b/include/linux/atalk.h
-@@ -161,16 +161,26 @@ extern int sysctl_aarp_resolve_time;
- extern void atalk_register_sysctl(void);
- extern void atalk_unregister_sysctl(void);
- #else
--#define atalk_register_sysctl()		do { } while(0)
--#define atalk_unregister_sysctl()	do { } while(0)
-+static inline int atalk_register_sysctl(void)
-+{
-+	return 0;
-+}
-+static inline void atalk_unregister_sysctl(void)
-+{
-+}
- #endif
++	handle++;
++#endif
  
- #ifdef CONFIG_PROC_FS
- extern int atalk_proc_init(void);
- extern void atalk_proc_exit(void);
- #else
--#define atalk_proc_init()	({ 0; })
--#define atalk_proc_exit()	do { } while(0)
-+static inline int atalk_proc_init(void)
-+{
-+	return 0;
-+}
-+static inline void atalk_proc_exit(void)
-+{
-+}
- #endif /* CONFIG_PROC_FS */
+ 	*resid = handle + 1;
+ 	return 0;
+@@ -39,7 +50,9 @@ static int virtio_gpu_resource_id_get(struct virtio_gpu_device *vgdev,
  
- #endif /* __LINUX_ATALK_H__ */
-diff --git a/include/linux/bitrev.h b/include/linux/bitrev.h
-index 50fb0dee23e8..d35b8ec1c485 100644
---- a/include/linux/bitrev.h
-+++ b/include/linux/bitrev.h
-@@ -34,41 +34,41 @@ static inline u32 __bitrev32(u32 x)
+ static void virtio_gpu_resource_id_put(struct virtio_gpu_device *vgdev, uint32_t id)
+ {
++#if 0
+ 	ida_free(&vgdev->resource_ida, id - 1);
++#endif
+ }
  
- #define __constant_bitrev32(x)	\
- ({					\
--	u32 __x = x;			\
--	__x = (__x >> 16) | (__x << 16);	\
--	__x = ((__x & (u32)0xFF00FF00UL) >> 8) | ((__x & (u32)0x00FF00FFUL) << 8);	\
--	__x = ((__x & (u32)0xF0F0F0F0UL) >> 4) | ((__x & (u32)0x0F0F0F0FUL) << 4);	\
--	__x = ((__x & (u32)0xCCCCCCCCUL) >> 2) | ((__x & (u32)0x33333333UL) << 2);	\
--	__x = ((__x & (u32)0xAAAAAAAAUL) >> 1) | ((__x & (u32)0x55555555UL) << 1);	\
--	__x;								\
-+	u32 ___x = x;			\
-+	___x = (___x >> 16) | (___x << 16);	\
-+	___x = ((___x & (u32)0xFF00FF00UL) >> 8) | ((___x & (u32)0x00FF00FFUL) << 8);	\
-+	___x = ((___x & (u32)0xF0F0F0F0UL) >> 4) | ((___x & (u32)0x0F0F0F0FUL) << 4);	\
-+	___x = ((___x & (u32)0xCCCCCCCCUL) >> 2) | ((___x & (u32)0x33333333UL) << 2);	\
-+	___x = ((___x & (u32)0xAAAAAAAAUL) >> 1) | ((___x & (u32)0x55555555UL) << 1);	\
-+	___x;								\
- })
+ static void virtio_gpu_ttm_bo_destroy(struct ttm_buffer_object *tbo)
+diff --git a/drivers/hid/hid-logitech-hidpp.c b/drivers/hid/hid-logitech-hidpp.c
+index 15ed6177a7a3..f040c8a7f9a9 100644
+--- a/drivers/hid/hid-logitech-hidpp.c
++++ b/drivers/hid/hid-logitech-hidpp.c
+@@ -2608,8 +2608,9 @@ static int m560_raw_event(struct hid_device *hdev, u8 *data, int size)
+ 		input_report_rel(mydata->input, REL_Y, v);
  
- #define __constant_bitrev16(x)	\
- ({					\
--	u16 __x = x;			\
--	__x = (__x >> 8) | (__x << 8);	\
--	__x = ((__x & (u16)0xF0F0U) >> 4) | ((__x & (u16)0x0F0FU) << 4);	\
--	__x = ((__x & (u16)0xCCCCU) >> 2) | ((__x & (u16)0x3333U) << 2);	\
--	__x = ((__x & (u16)0xAAAAU) >> 1) | ((__x & (u16)0x5555U) << 1);	\
--	__x;								\
-+	u16 ___x = x;			\
-+	___x = (___x >> 8) | (___x << 8);	\
-+	___x = ((___x & (u16)0xF0F0U) >> 4) | ((___x & (u16)0x0F0FU) << 4);	\
-+	___x = ((___x & (u16)0xCCCCU) >> 2) | ((___x & (u16)0x3333U) << 2);	\
-+	___x = ((___x & (u16)0xAAAAU) >> 1) | ((___x & (u16)0x5555U) << 1);	\
-+	___x;								\
- })
+ 		v = hid_snto32(data[6], 8);
+-		hidpp_scroll_counter_handle_scroll(
+-				&hidpp->vertical_wheel_counter, v);
++		if (v != 0)
++			hidpp_scroll_counter_handle_scroll(
++					&hidpp->vertical_wheel_counter, v);
  
- #define __constant_bitrev8x4(x) \
- ({			\
--	u32 __x = x;	\
--	__x = ((__x & (u32)0xF0F0F0F0UL) >> 4) | ((__x & (u32)0x0F0F0F0FUL) << 4);	\
--	__x = ((__x & (u32)0xCCCCCCCCUL) >> 2) | ((__x & (u32)0x33333333UL) << 2);	\
--	__x = ((__x & (u32)0xAAAAAAAAUL) >> 1) | ((__x & (u32)0x55555555UL) << 1);	\
--	__x;								\
-+	u32 ___x = x;	\
-+	___x = ((___x & (u32)0xF0F0F0F0UL) >> 4) | ((___x & (u32)0x0F0F0F0FUL) << 4);	\
-+	___x = ((___x & (u32)0xCCCCCCCCUL) >> 2) | ((___x & (u32)0x33333333UL) << 2);	\
-+	___x = ((___x & (u32)0xAAAAAAAAUL) >> 1) | ((___x & (u32)0x55555555UL) << 1);	\
-+	___x;								\
- })
+ 		input_sync(mydata->input);
+ 	}
+diff --git a/drivers/hwmon/Kconfig b/drivers/hwmon/Kconfig
+index 6f929bfa9fcd..d0f1dfe2bcbb 100644
+--- a/drivers/hwmon/Kconfig
++++ b/drivers/hwmon/Kconfig
+@@ -1759,6 +1759,7 @@ config SENSORS_VT8231
+ config SENSORS_W83773G
+ 	tristate "Nuvoton W83773G"
+ 	depends on I2C
++	select REGMAP_I2C
+ 	help
+ 	  If you say yes here you get support for the Nuvoton W83773G hardware
+ 	  monitoring chip.
+diff --git a/drivers/hwmon/occ/common.c b/drivers/hwmon/occ/common.c
+index 391118c8aae8..c888f4aca45c 100644
+--- a/drivers/hwmon/occ/common.c
++++ b/drivers/hwmon/occ/common.c
+@@ -889,6 +889,8 @@ static int occ_setup_sensor_attrs(struct occ *occ)
+ 				s++;
+ 			}
+ 		}
++
++		s = (sensors->power.num_sensors * 4) + 1;
+ 	} else {
+ 		for (i = 0; i < sensors->power.num_sensors; ++i) {
+ 			s = i + 1;
+@@ -917,11 +919,11 @@ static int occ_setup_sensor_attrs(struct occ *occ)
+ 						     show_power, NULL, 3, i);
+ 			attr++;
+ 		}
+-	}
  
- #define __constant_bitrev8(x)	\
- ({					\
--	u8 __x = x;			\
--	__x = (__x >> 4) | (__x << 4);	\
--	__x = ((__x & (u8)0xCCU) >> 2) | ((__x & (u8)0x33U) << 2);	\
--	__x = ((__x & (u8)0xAAU) >> 1) | ((__x & (u8)0x55U) << 1);	\
--	__x;								\
-+	u8 ___x = x;			\
-+	___x = (___x >> 4) | (___x << 4);	\
-+	___x = ((___x & (u8)0xCCU) >> 2) | ((___x & (u8)0x33U) << 2);	\
-+	___x = ((___x & (u8)0xAAU) >> 1) | ((___x & (u8)0x55U) << 1);	\
-+	___x;								\
- })
+-	if (sensors->caps.num_sensors >= 1) {
+ 		s = sensors->power.num_sensors + 1;
++	}
+ 
++	if (sensors->caps.num_sensors >= 1) {
+ 		snprintf(attr->name, sizeof(attr->name), "power%d_label", s);
+ 		attr->sensor = OCC_INIT_ATTR(attr->name, 0444, show_caps, NULL,
+ 					     0, 0);
+diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
+index 4ee32964e1dd..948eb6e25219 100644
+--- a/drivers/infiniband/hw/mlx5/odp.c
++++ b/drivers/infiniband/hw/mlx5/odp.c
+@@ -560,7 +560,7 @@ static int pagefault_mr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr,
+ 	struct ib_umem_odp *odp_mr = to_ib_umem_odp(mr->umem);
+ 	bool downgrade = flags & MLX5_PF_FLAGS_DOWNGRADE;
+ 	bool prefetch = flags & MLX5_PF_FLAGS_PREFETCH;
+-	u64 access_mask = ODP_READ_ALLOWED_BIT;
++	u64 access_mask;
+ 	u64 start_idx, page_mask;
+ 	struct ib_umem_odp *odp;
+ 	size_t size;
+@@ -582,6 +582,7 @@ next_mr:
+ 	page_shift = mr->umem->page_shift;
+ 	page_mask = ~(BIT(page_shift) - 1);
+ 	start_idx = (io_virt - (mr->mmkey.iova & page_mask)) >> page_shift;
++	access_mask = ODP_READ_ALLOWED_BIT;
+ 
+ 	if (prefetch && !downgrade && !mr->umem->writable) {
+ 		/* prefetch with write-access must
+diff --git a/drivers/md/dm-core.h b/drivers/md/dm-core.h
+index 95c6d86ab5e8..c4ef1fceead6 100644
+--- a/drivers/md/dm-core.h
++++ b/drivers/md/dm-core.h
+@@ -115,6 +115,7 @@ struct mapped_device {
+ 	struct srcu_struct io_barrier;
+ };
  
- #define bitrev32(x) \
-diff --git a/include/linux/ceph/libceph.h b/include/linux/ceph/libceph.h
-index a420c07904bc..337d5049ff93 100644
---- a/include/linux/ceph/libceph.h
-+++ b/include/linux/ceph/libceph.h
-@@ -294,6 +294,8 @@ extern void ceph_destroy_client(struct ceph_client *client);
- extern int __ceph_open_session(struct ceph_client *client,
- 			       unsigned long started);
- extern int ceph_open_session(struct ceph_client *client);
-+int ceph_wait_for_latest_osdmap(struct ceph_client *client,
-+				unsigned long timeout);
- 
- /* pagevec.c */
- extern void ceph_release_page_vector(struct page **pages, int num_pages);
-diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
-index 8fcbae1b8db0..120d1d40704b 100644
---- a/include/linux/cgroup-defs.h
-+++ b/include/linux/cgroup-defs.h
-@@ -602,7 +602,7 @@ struct cgroup_subsys {
- 	void (*cancel_fork)(struct task_struct *task);
- 	void (*fork)(struct task_struct *task);
- 	void (*exit)(struct task_struct *task);
--	void (*free)(struct task_struct *task);
-+	void (*release)(struct task_struct *task);
- 	void (*bind)(struct cgroup_subsys_state *root_css);
- 
- 	bool early_init:1;
-diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
-index 9968332cceed..81f58b4a5418 100644
---- a/include/linux/cgroup.h
-+++ b/include/linux/cgroup.h
-@@ -121,6 +121,7 @@ extern int cgroup_can_fork(struct task_struct *p);
- extern void cgroup_cancel_fork(struct task_struct *p);
- extern void cgroup_post_fork(struct task_struct *p);
- void cgroup_exit(struct task_struct *p);
-+void cgroup_release(struct task_struct *p);
- void cgroup_free(struct task_struct *p);
- 
- int cgroup_init_early(void);
-@@ -697,6 +698,7 @@ static inline int cgroup_can_fork(struct task_struct *p) { return 0; }
- static inline void cgroup_cancel_fork(struct task_struct *p) {}
- static inline void cgroup_post_fork(struct task_struct *p) {}
- static inline void cgroup_exit(struct task_struct *p) {}
-+static inline void cgroup_release(struct task_struct *p) {}
- static inline void cgroup_free(struct task_struct *p) {}
- 
- static inline int cgroup_init_early(void) { return 0; }
-diff --git a/include/linux/clk-provider.h b/include/linux/clk-provider.h
-index e443fa9fa859..b7cf80a71293 100644
---- a/include/linux/clk-provider.h
-+++ b/include/linux/clk-provider.h
-@@ -792,6 +792,9 @@ unsigned int __clk_get_enable_count(struct clk *clk);
- unsigned long clk_hw_get_rate(const struct clk_hw *hw);
- unsigned long __clk_get_flags(struct clk *clk);
- unsigned long clk_hw_get_flags(const struct clk_hw *hw);
-+#define clk_hw_can_set_rate_parent(hw) \
-+	(clk_hw_get_flags((hw)) & CLK_SET_RATE_PARENT)
-+
- bool clk_hw_is_prepared(const struct clk_hw *hw);
- bool clk_hw_rate_is_protected(const struct clk_hw *hw);
- bool clk_hw_is_enabled(const struct clk_hw *hw);
-diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h
-index c86d6d8bdfed..0b427d5df0fe 100644
---- a/include/linux/cpufreq.h
-+++ b/include/linux/cpufreq.h
-@@ -254,20 +254,12 @@ __ATTR(_name, 0644, show_##_name, store_##_name)
- static struct freq_attr _name =			\
- __ATTR(_name, 0200, NULL, store_##_name)
- 
--struct global_attr {
--	struct attribute attr;
--	ssize_t (*show)(struct kobject *kobj,
--			struct attribute *attr, char *buf);
--	ssize_t (*store)(struct kobject *a, struct attribute *b,
--			 const char *c, size_t count);
--};
--
- #define define_one_global_ro(_name)		\
--static struct global_attr _name =		\
-+static struct kobj_attribute _name =		\
- __ATTR(_name, 0444, show_##_name, NULL)
- 
- #define define_one_global_rw(_name)		\
--static struct global_attr _name =		\
-+static struct kobj_attribute _name =		\
- __ATTR(_name, 0644, show_##_name, store_##_name)
- 
- 
-diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
-index e528baebad69..bee4bb9f81bc 100644
---- a/include/linux/device-mapper.h
-+++ b/include/linux/device-mapper.h
-@@ -609,7 +609,7 @@ do {									\
-  */
- #define dm_target_offset(ti, sector) ((sector) - (ti)->begin)
++void disable_discard(struct mapped_device *md);
+ void disable_write_same(struct mapped_device *md);
+ void disable_write_zeroes(struct mapped_device *md);
  
--static inline sector_t to_sector(unsigned long n)
-+static inline sector_t to_sector(unsigned long long n)
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index 2e823252d797..f535fd8ac82d 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -913,7 +913,7 @@ static void copy_from_journal(struct dm_integrity_c *ic, unsigned section, unsig
+ static bool ranges_overlap(struct dm_integrity_range *range1, struct dm_integrity_range *range2)
  {
- 	return (n >> SECTOR_SHIFT);
+ 	return range1->logical_sector < range2->logical_sector + range2->n_sectors &&
+-	       range2->logical_sector + range2->n_sectors > range2->logical_sector;
++	       range1->logical_sector + range1->n_sectors > range2->logical_sector;
  }
-diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
-index f6ded992c183..5b21f14802e1 100644
---- a/include/linux/dma-mapping.h
-+++ b/include/linux/dma-mapping.h
-@@ -130,6 +130,7 @@ struct dma_map_ops {
- 			enum dma_data_direction direction);
- 	int (*dma_supported)(struct device *dev, u64 mask);
- 	u64 (*get_required_mask)(struct device *dev);
-+	size_t (*max_mapping_size)(struct device *dev);
- };
  
- #define DMA_MAPPING_ERROR		(~(dma_addr_t)0)
-@@ -257,6 +258,8 @@ static inline void dma_direct_sync_sg_for_cpu(struct device *dev,
- }
- #endif
+ static bool add_new_range(struct dm_integrity_c *ic, struct dm_integrity_range *new_range, bool check_waiting)
+@@ -959,8 +959,6 @@ static void remove_range_unlocked(struct dm_integrity_c *ic, struct dm_integrity
+ 		struct dm_integrity_range *last_range =
+ 			list_first_entry(&ic->wait_list, struct dm_integrity_range, wait_entry);
+ 		struct task_struct *last_range_task;
+-		if (!ranges_overlap(range, last_range))
+-			break;
+ 		last_range_task = last_range->task;
+ 		list_del(&last_range->wait_entry);
+ 		if (!add_new_range(ic, last_range, false)) {
+@@ -3185,7 +3183,7 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned argc, char **argv)
+ 			journal_watermark = val;
+ 		else if (sscanf(opt_string, "commit_time:%u%c", &val, &dummy) == 1)
+ 			sync_msec = val;
+-		else if (!memcmp(opt_string, "meta_device:", strlen("meta_device:"))) {
++		else if (!strncmp(opt_string, "meta_device:", strlen("meta_device:"))) {
+ 			if (ic->meta_dev) {
+ 				dm_put_device(ti, ic->meta_dev);
+ 				ic->meta_dev = NULL;
+@@ -3204,17 +3202,17 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned argc, char **argv)
+ 				goto bad;
+ 			}
+ 			ic->sectors_per_block = val >> SECTOR_SHIFT;
+-		} else if (!memcmp(opt_string, "internal_hash:", strlen("internal_hash:"))) {
++		} else if (!strncmp(opt_string, "internal_hash:", strlen("internal_hash:"))) {
+ 			r = get_alg_and_key(opt_string, &ic->internal_hash_alg, &ti->error,
+ 					    "Invalid internal_hash argument");
+ 			if (r)
+ 				goto bad;
+-		} else if (!memcmp(opt_string, "journal_crypt:", strlen("journal_crypt:"))) {
++		} else if (!strncmp(opt_string, "journal_crypt:", strlen("journal_crypt:"))) {
+ 			r = get_alg_and_key(opt_string, &ic->journal_crypt_alg, &ti->error,
+ 					    "Invalid journal_crypt argument");
+ 			if (r)
+ 				goto bad;
+-		} else if (!memcmp(opt_string, "journal_mac:", strlen("journal_mac:"))) {
++		} else if (!strncmp(opt_string, "journal_mac:", strlen("journal_mac:"))) {
+ 			r = get_alg_and_key(opt_string, &ic->journal_mac_alg,  &ti->error,
+ 					    "Invalid journal_mac argument");
+ 			if (r)
+diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
+index a20531e5f3b4..582265e043a6 100644
+--- a/drivers/md/dm-rq.c
++++ b/drivers/md/dm-rq.c
+@@ -206,11 +206,14 @@ static void dm_done(struct request *clone, blk_status_t error, bool mapped)
+ 	}
  
-+size_t dma_direct_max_mapping_size(struct device *dev);
-+
- #ifdef CONFIG_HAS_DMA
- #include <asm/dma-mapping.h>
- 
-@@ -460,6 +463,7 @@ int dma_supported(struct device *dev, u64 mask);
- int dma_set_mask(struct device *dev, u64 mask);
- int dma_set_coherent_mask(struct device *dev, u64 mask);
- u64 dma_get_required_mask(struct device *dev);
-+size_t dma_max_mapping_size(struct device *dev);
- #else /* CONFIG_HAS_DMA */
- static inline dma_addr_t dma_map_page_attrs(struct device *dev,
- 		struct page *page, size_t offset, size_t size,
-@@ -561,6 +565,10 @@ static inline u64 dma_get_required_mask(struct device *dev)
- {
- 	return 0;
+ 	if (unlikely(error == BLK_STS_TARGET)) {
+-		if (req_op(clone) == REQ_OP_WRITE_SAME &&
+-		    !clone->q->limits.max_write_same_sectors)
++		if (req_op(clone) == REQ_OP_DISCARD &&
++		    !clone->q->limits.max_discard_sectors)
++			disable_discard(tio->md);
++		else if (req_op(clone) == REQ_OP_WRITE_SAME &&
++			 !clone->q->limits.max_write_same_sectors)
+ 			disable_write_same(tio->md);
+-		if (req_op(clone) == REQ_OP_WRITE_ZEROES &&
+-		    !clone->q->limits.max_write_zeroes_sectors)
++		else if (req_op(clone) == REQ_OP_WRITE_ZEROES &&
++			 !clone->q->limits.max_write_zeroes_sectors)
+ 			disable_write_zeroes(tio->md);
+ 	}
+ 
+diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
+index 4b1be754cc41..eb257e4dcb1c 100644
+--- a/drivers/md/dm-table.c
++++ b/drivers/md/dm-table.c
+@@ -1852,6 +1852,36 @@ static bool dm_table_supports_secure_erase(struct dm_table *t)
+ 	return true;
  }
-+static inline size_t dma_max_mapping_size(struct device *dev)
+ 
++static int device_requires_stable_pages(struct dm_target *ti,
++					struct dm_dev *dev, sector_t start,
++					sector_t len, void *data)
 +{
-+	return 0;
++	struct request_queue *q = bdev_get_queue(dev->bdev);
++
++	return q && bdi_cap_stable_pages_required(q->backing_dev_info);
 +}
- #endif /* CONFIG_HAS_DMA */
- 
- static inline dma_addr_t dma_map_single_attrs(struct device *dev, void *ptr,
-diff --git a/include/linux/efi.h b/include/linux/efi.h
-index 28604a8d0aa9..a86485ac7c87 100644
---- a/include/linux/efi.h
-+++ b/include/linux/efi.h
-@@ -1699,19 +1699,19 @@ extern int efi_tpm_eventlog_init(void);
-  * fault happened while executing an efi runtime service.
-  */
- enum efi_rts_ids {
--	NONE,
--	GET_TIME,
--	SET_TIME,
--	GET_WAKEUP_TIME,
--	SET_WAKEUP_TIME,
--	GET_VARIABLE,
--	GET_NEXT_VARIABLE,
--	SET_VARIABLE,
--	QUERY_VARIABLE_INFO,
--	GET_NEXT_HIGH_MONO_COUNT,
--	RESET_SYSTEM,
--	UPDATE_CAPSULE,
--	QUERY_CAPSULE_CAPS,
-+	EFI_NONE,
-+	EFI_GET_TIME,
-+	EFI_SET_TIME,
-+	EFI_GET_WAKEUP_TIME,
-+	EFI_SET_WAKEUP_TIME,
-+	EFI_GET_VARIABLE,
-+	EFI_GET_NEXT_VARIABLE,
-+	EFI_SET_VARIABLE,
-+	EFI_QUERY_VARIABLE_INFO,
-+	EFI_GET_NEXT_HIGH_MONO_COUNT,
-+	EFI_RESET_SYSTEM,
-+	EFI_UPDATE_CAPSULE,
-+	EFI_QUERY_CAPSULE_CAPS,
- };
- 
- /*
-diff --git a/include/linux/f2fs_fs.h b/include/linux/f2fs_fs.h
-index d7711048ef93..c524ad7d31da 100644
---- a/include/linux/f2fs_fs.h
-+++ b/include/linux/f2fs_fs.h
-@@ -489,12 +489,12 @@ typedef __le32	f2fs_hash_t;
- 
- /*
-  * space utilization of regular dentry and inline dentry (w/o extra reservation)
-- *		regular dentry			inline dentry
-- * bitmap	1 * 27 = 27			1 * 23 = 23
-- * reserved	1 * 3 = 3			1 * 7 = 7
-- * dentry	11 * 214 = 2354			11 * 182 = 2002
-- * filename	8 * 214 = 1712			8 * 182 = 1456
-- * total	4096				3488
-+ *		regular dentry		inline dentry (def)	inline dentry (min)
-+ * bitmap	1 * 27 = 27		1 * 23 = 23		1 * 1 = 1
-+ * reserved	1 * 3 = 3		1 * 7 = 7		1 * 1 = 1
-+ * dentry	11 * 214 = 2354		11 * 182 = 2002		11 * 2 = 22
-+ * filename	8 * 214 = 1712		8 * 182 = 1456		8 * 2 = 16
-+ * total	4096			3488			40
-  *
-  * Note: there are more reserved space in inline dentry than in regular
-  * dentry, when converting inline dentry we should handle this carefully.
-@@ -506,6 +506,7 @@ typedef __le32	f2fs_hash_t;
- #define SIZE_OF_RESERVED	(PAGE_SIZE - ((SIZE_OF_DIR_ENTRY + \
- 				F2FS_SLOT_LEN) * \
- 				NR_DENTRY_IN_BLOCK + SIZE_OF_DENTRY_BITMAP))
-+#define MIN_INLINE_DENTRY_SIZE		40	/* just include '.' and '..' entries */
- 
- /* One directory entry slot representing F2FS_SLOT_LEN-sized file name */
- struct f2fs_dir_entry {
-diff --git a/include/linux/filter.h b/include/linux/filter.h
-index e532fcc6e4b5..3358646a8e7a 100644
---- a/include/linux/filter.h
-+++ b/include/linux/filter.h
-@@ -874,7 +874,9 @@ bpf_jit_binary_alloc(unsigned int proglen, u8 **image_ptr,
- 		     unsigned int alignment,
- 		     bpf_jit_fill_hole_t bpf_fill_ill_insns);
- void bpf_jit_binary_free(struct bpf_binary_header *hdr);
--
-+u64 bpf_jit_alloc_exec_limit(void);
-+void *bpf_jit_alloc_exec(unsigned long size);
-+void bpf_jit_free_exec(void *addr);
- void bpf_jit_free(struct bpf_prog *fp);
- 
- int bpf_jit_get_func_addr(const struct bpf_prog *prog,
-diff --git a/include/linux/fs.h b/include/linux/fs.h
-index 29d8e2cfed0e..fd423fec8d83 100644
---- a/include/linux/fs.h
-+++ b/include/linux/fs.h
-@@ -304,13 +304,19 @@ enum rw_hint {
- 
- struct kiocb {
- 	struct file		*ki_filp;
 +
-+	/* The 'ki_filp' pointer is shared in a union for aio */
-+	randomized_struct_fields_start
++/*
++ * If any underlying device requires stable pages, a table must require
++ * them as well.  Only targets that support iterate_devices are considered:
++ * don't want error, zero, etc to require stable pages.
++ */
++static bool dm_table_requires_stable_pages(struct dm_table *t)
++{
++	struct dm_target *ti;
++	unsigned i;
 +
- 	loff_t			ki_pos;
- 	void (*ki_complete)(struct kiocb *iocb, long ret, long ret2);
- 	void			*private;
- 	int			ki_flags;
- 	u16			ki_hint;
- 	u16			ki_ioprio; /* See linux/ioprio.h */
--} __randomize_layout;
++	for (i = 0; i < dm_table_get_num_targets(t); i++) {
++		ti = dm_table_get_target(t, i);
 +
-+	randomized_struct_fields_end
-+};
- 
- static inline bool is_sync_kiocb(struct kiocb *kiocb)
- {
-diff --git a/include/linux/hardirq.h b/include/linux/hardirq.h
-index 0fbbcdf0c178..da0af631ded5 100644
---- a/include/linux/hardirq.h
-+++ b/include/linux/hardirq.h
-@@ -60,8 +60,14 @@ extern void irq_enter(void);
-  */
- extern void irq_exit(void);
- 
-+#ifndef arch_nmi_enter
-+#define arch_nmi_enter()	do { } while (0)
-+#define arch_nmi_exit()		do { } while (0)
-+#endif
++		if (ti->type->iterate_devices &&
++		    ti->type->iterate_devices(ti, device_requires_stable_pages, NULL))
++			return true;
++	}
 +
- #define nmi_enter()						\
- 	do {							\
-+		arch_nmi_enter();				\
- 		printk_nmi_enter();				\
- 		lockdep_off();					\
- 		ftrace_nmi_enter();				\
-@@ -80,6 +86,7 @@ extern void irq_exit(void);
- 		ftrace_nmi_exit();				\
- 		lockdep_on();					\
- 		printk_nmi_exit();				\
-+		arch_nmi_exit();				\
- 	} while (0)
- 
- #endif /* LINUX_HARDIRQ_H */
-diff --git a/include/linux/i2c.h b/include/linux/i2c.h
-index 65b4eaed1d96..7e748648c7d3 100644
---- a/include/linux/i2c.h
-+++ b/include/linux/i2c.h
-@@ -333,6 +333,7 @@ struct i2c_client {
- 	char name[I2C_NAME_SIZE];
- 	struct i2c_adapter *adapter;	/* the adapter we sit on	*/
- 	struct device dev;		/* the device structure		*/
-+	int init_irq;			/* irq set at initialization	*/
- 	int irq;			/* irq issued by device		*/
- 	struct list_head detected;
- #if IS_ENABLED(CONFIG_I2C_SLAVE)
-diff --git a/include/linux/irqdesc.h b/include/linux/irqdesc.h
-index dd1e40ddac7d..875c41b23f20 100644
---- a/include/linux/irqdesc.h
-+++ b/include/linux/irqdesc.h
-@@ -65,6 +65,7 @@ struct irq_desc {
- 	unsigned int		core_internal_state__do_not_mess_with_it;
- 	unsigned int		depth;		/* nested irq disables */
- 	unsigned int		wake_depth;	/* nested wake enables */
-+	unsigned int		tot_count;
- 	unsigned int		irq_count;	/* For detecting broken IRQs */
- 	unsigned long		last_unhandled;	/* Aging timer for unhandled count */
- 	unsigned int		irqs_unhandled;
-diff --git a/include/linux/kasan-checks.h b/include/linux/kasan-checks.h
-index d314150658a4..a61dc075e2ce 100644
---- a/include/linux/kasan-checks.h
-+++ b/include/linux/kasan-checks.h
-@@ -2,7 +2,7 @@
- #ifndef _LINUX_KASAN_CHECKS_H
- #define _LINUX_KASAN_CHECKS_H
- 
--#ifdef CONFIG_KASAN
-+#if defined(__SANITIZE_ADDRESS__) || defined(__KASAN_INTERNAL)
- void kasan_check_read(const volatile void *p, unsigned int size);
- void kasan_check_write(const volatile void *p, unsigned int size);
- #else
-diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
-index c38cc5eb7e73..cf761ff58224 100644
---- a/include/linux/kvm_host.h
-+++ b/include/linux/kvm_host.h
-@@ -634,7 +634,7 @@ void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
- 			   struct kvm_memory_slot *dont);
- int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
- 			    unsigned long npages);
--void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots);
-+void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen);
- int kvm_arch_prepare_memory_region(struct kvm *kvm,
- 				struct kvm_memory_slot *memslot,
- 				const struct kvm_userspace_memory_region *mem,
-diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
-index 83ae11cbd12c..7391f5fe4eda 100644
---- a/include/linux/memcontrol.h
-+++ b/include/linux/memcontrol.h
-@@ -561,7 +561,10 @@ struct mem_cgroup *lock_page_memcg(struct page *page);
- void __unlock_page_memcg(struct mem_cgroup *memcg);
- void unlock_page_memcg(struct page *page);
- 
--/* idx can be of type enum memcg_stat_item or node_stat_item */
-+/*
-+ * idx can be of type enum memcg_stat_item or node_stat_item.
-+ * Keep in sync with memcg_exact_page_state().
-+ */
- static inline unsigned long memcg_page_state(struct mem_cgroup *memcg,
- 					     int idx)
++	return false;
++}
++
+ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
+ 			       struct queue_limits *limits)
  {
-diff --git a/include/linux/mii.h b/include/linux/mii.h
-index 6fee8b1a4400..5cd824c1c0ca 100644
---- a/include/linux/mii.h
-+++ b/include/linux/mii.h
-@@ -469,7 +469,7 @@ static inline u32 linkmode_adv_to_lcl_adv_t(unsigned long *advertising)
- 	if (linkmode_test_bit(ETHTOOL_LINK_MODE_Pause_BIT,
- 			      advertising))
- 		lcl_adv |= ADVERTISE_PAUSE_CAP;
--	if (linkmode_test_bit(ETHTOOL_LINK_MODE_Pause_BIT,
-+	if (linkmode_test_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
- 			      advertising))
- 		lcl_adv |= ADVERTISE_PAUSE_ASYM;
- 
-diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
-index 54299251d40d..4f001619f854 100644
---- a/include/linux/mlx5/driver.h
-+++ b/include/linux/mlx5/driver.h
-@@ -591,6 +591,8 @@ enum mlx5_pagefault_type_flags {
- };
+@@ -1909,6 +1939,15 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
  
- struct mlx5_td {
-+	/* protects tirs list changes while tirs refresh */
-+	struct mutex     list_lock;
- 	struct list_head tirs_list;
- 	u32              tdn;
- };
-diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
-index 4eb26d278046..280ae96dc4c3 100644
---- a/include/linux/page-isolation.h
-+++ b/include/linux/page-isolation.h
-@@ -41,16 +41,6 @@ int move_freepages_block(struct zone *zone, struct page *page,
+ 	dm_table_verify_integrity(t);
  
- /*
-  * Changes migrate type in [start_pfn, end_pfn) to be MIGRATE_ISOLATE.
-- * If specified range includes migrate types other than MOVABLE or CMA,
-- * this will fail with -EBUSY.
-- *
-- * For isolating all pages in the range finally, the caller have to
-- * free all pages in the range. test_page_isolated() can be used for
-- * test it.
-- *
-- * The following flags are allowed (they can be combined in a bit mask)
-- * SKIP_HWPOISON - ignore hwpoison pages
-- * REPORT_FAILURE - report details about the failure to isolate the range
-  */
- int
- start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
-diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
-index e1a051724f7e..7cbbd891bfcd 100644
---- a/include/linux/perf_event.h
-+++ b/include/linux/perf_event.h
-@@ -409,7 +409,7 @@ struct pmu {
++	/*
++	 * Some devices don't use blk_integrity but still want stable pages
++	 * because they do their own checksumming.
++	 */
++	if (dm_table_requires_stable_pages(t))
++		q->backing_dev_info->capabilities |= BDI_CAP_STABLE_WRITES;
++	else
++		q->backing_dev_info->capabilities &= ~BDI_CAP_STABLE_WRITES;
++
  	/*
- 	 * Set up pmu-private data structures for an AUX area
- 	 */
--	void *(*setup_aux)		(int cpu, void **pages,
-+	void *(*setup_aux)		(struct perf_event *event, void **pages,
- 					 int nr_pages, bool overwrite);
- 					/* optional */
- 
-diff --git a/include/linux/pipe_fs_i.h b/include/linux/pipe_fs_i.h
-index 5a3bb3b7c9ad..3ecd7ea212ae 100644
---- a/include/linux/pipe_fs_i.h
-+++ b/include/linux/pipe_fs_i.h
-@@ -182,6 +182,7 @@ void generic_pipe_buf_get(struct pipe_inode_info *, struct pipe_buffer *);
- int generic_pipe_buf_confirm(struct pipe_inode_info *, struct pipe_buffer *);
- int generic_pipe_buf_steal(struct pipe_inode_info *, struct pipe_buffer *);
- void generic_pipe_buf_release(struct pipe_inode_info *, struct pipe_buffer *);
-+void pipe_buf_mark_unmergeable(struct pipe_buffer *buf);
- 
- extern const struct pipe_buf_operations nosteal_pipe_buf_ops;
- 
-diff --git a/include/linux/property.h b/include/linux/property.h
-index 3789ec755fb6..65d3420dd5d1 100644
---- a/include/linux/property.h
-+++ b/include/linux/property.h
-@@ -258,7 +258,7 @@ struct property_entry {
- #define PROPERTY_ENTRY_STRING(_name_, _val_)		\
- (struct property_entry) {				\
- 	.name = _name_,					\
--	.length = sizeof(_val_),			\
-+	.length = sizeof(const char *),			\
- 	.type = DEV_PROP_STRING,			\
- 	{ .value = { .str = _val_ } },			\
- }
-diff --git a/include/linux/relay.h b/include/linux/relay.h
-index e1bdf01a86e2..c759f96e39c1 100644
---- a/include/linux/relay.h
-+++ b/include/linux/relay.h
-@@ -66,7 +66,7 @@ struct rchan
- 	struct kref kref;		/* channel refcount */
- 	void *private_data;		/* for user-defined data */
- 	size_t last_toobig;		/* tried to log event > subbuf size */
--	struct rchan_buf ** __percpu buf; /* per-cpu channel buffers */
-+	struct rchan_buf * __percpu *buf; /* per-cpu channel buffers */
- 	int is_global;			/* One global buffer ? */
- 	struct list_head list;		/* for channel list */
- 	struct dentry *parent;		/* parent dentry passed to open */
-diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
-index 5b9ae62272bb..503778920448 100644
---- a/include/linux/ring_buffer.h
-+++ b/include/linux/ring_buffer.h
-@@ -128,7 +128,7 @@ ring_buffer_consume(struct ring_buffer *buffer, int cpu, u64 *ts,
- 		    unsigned long *lost_events);
- 
- struct ring_buffer_iter *
--ring_buffer_read_prepare(struct ring_buffer *buffer, int cpu);
-+ring_buffer_read_prepare(struct ring_buffer *buffer, int cpu, gfp_t flags);
- void ring_buffer_read_prepare_sync(void);
- void ring_buffer_read_start(struct ring_buffer_iter *iter);
- void ring_buffer_read_finish(struct ring_buffer_iter *iter);
-diff --git a/include/linux/sched.h b/include/linux/sched.h
-index f9b43c989577..9b35aff09f70 100644
---- a/include/linux/sched.h
-+++ b/include/linux/sched.h
-@@ -1748,9 +1748,9 @@ static __always_inline bool need_resched(void)
- static inline unsigned int task_cpu(const struct task_struct *p)
- {
- #ifdef CONFIG_THREAD_INFO_IN_TASK
--	return p->cpu;
-+	return READ_ONCE(p->cpu);
- #else
--	return task_thread_info(p)->cpu;
-+	return READ_ONCE(task_thread_info(p)->cpu);
- #endif
+ 	 * Determine whether or not this queue's I/O timings contribute
+ 	 * to the entropy pool, Only request-based targets use this.
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 515e6af9bed2..4986eea520b6 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -963,6 +963,15 @@ static void dec_pending(struct dm_io *io, blk_status_t error)
+ 	}
  }
  
-diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
-index c31d3a47a47c..57c7ed3fe465 100644
---- a/include/linux/sched/topology.h
-+++ b/include/linux/sched/topology.h
-@@ -176,10 +176,10 @@ typedef int (*sched_domain_flags_f)(void);
- #define SDTL_OVERLAP	0x01
- 
- struct sd_data {
--	struct sched_domain **__percpu sd;
--	struct sched_domain_shared **__percpu sds;
--	struct sched_group **__percpu sg;
--	struct sched_group_capacity **__percpu sgc;
-+	struct sched_domain *__percpu *sd;
-+	struct sched_domain_shared *__percpu *sds;
-+	struct sched_group *__percpu *sg;
-+	struct sched_group_capacity *__percpu *sgc;
- };
- 
- struct sched_domain_topology_level {
-diff --git a/include/linux/slab.h b/include/linux/slab.h
-index 11b45f7ae405..9449b19c5f10 100644
---- a/include/linux/slab.h
-+++ b/include/linux/slab.h
-@@ -32,6 +32,8 @@
- #define SLAB_HWCACHE_ALIGN	((slab_flags_t __force)0x00002000U)
- /* Use GFP_DMA memory */
- #define SLAB_CACHE_DMA		((slab_flags_t __force)0x00004000U)
-+/* Use GFP_DMA32 memory */
-+#define SLAB_CACHE_DMA32	((slab_flags_t __force)0x00008000U)
- /* DEBUG: Store the last owner for bug hunting */
- #define SLAB_STORE_USER		((slab_flags_t __force)0x00010000U)
- /* Panic if kmem_cache_create() fails */
-diff --git a/include/linux/string.h b/include/linux/string.h
-index 7927b875f80c..6ab0a6fa512e 100644
---- a/include/linux/string.h
-+++ b/include/linux/string.h
-@@ -150,6 +150,9 @@ extern void * memscan(void *,int,__kernel_size_t);
- #ifndef __HAVE_ARCH_MEMCMP
- extern int memcmp(const void *,const void *,__kernel_size_t);
- #endif
-+#ifndef __HAVE_ARCH_BCMP
-+extern int bcmp(const void *,const void *,__kernel_size_t);
-+#endif
- #ifndef __HAVE_ARCH_MEMCHR
- extern void * memchr(const void *,int,__kernel_size_t);
- #endif
-diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
-index 7c007ed7505f..29bc3a203283 100644
---- a/include/linux/swiotlb.h
-+++ b/include/linux/swiotlb.h
-@@ -76,6 +76,8 @@ bool swiotlb_map(struct device *dev, phys_addr_t *phys, dma_addr_t *dma_addr,
- 		size_t size, enum dma_data_direction dir, unsigned long attrs);
- void __init swiotlb_exit(void);
- unsigned int swiotlb_max_segment(void);
-+size_t swiotlb_max_mapping_size(struct device *dev);
-+bool is_swiotlb_active(void);
- #else
- #define swiotlb_force SWIOTLB_NO_FORCE
- static inline bool is_swiotlb_buffer(phys_addr_t paddr)
-@@ -95,6 +97,15 @@ static inline unsigned int swiotlb_max_segment(void)
- {
- 	return 0;
- }
-+static inline size_t swiotlb_max_mapping_size(struct device *dev)
++void disable_discard(struct mapped_device *md)
 +{
-+	return SIZE_MAX;
-+}
++	struct queue_limits *limits = dm_get_queue_limits(md);
 +
-+static inline bool is_swiotlb_active(void)
-+{
-+	return false;
++	/* device doesn't really support DISCARD, disable it */
++	limits->max_discard_sectors = 0;
++	blk_queue_flag_clear(QUEUE_FLAG_DISCARD, md->queue);
 +}
- #endif /* CONFIG_SWIOTLB */
- 
- extern void swiotlb_print_info(void);
-diff --git a/include/linux/virtio_ring.h b/include/linux/virtio_ring.h
-index fab02133a919..3dc70adfe5f5 100644
---- a/include/linux/virtio_ring.h
-+++ b/include/linux/virtio_ring.h
-@@ -63,7 +63,7 @@ struct virtqueue;
- /*
-  * Creates a virtqueue and allocates the descriptor ring.  If
-  * may_reduce_num is set, then this may allocate a smaller ring than
-- * expected.  The caller should query virtqueue_get_ring_size to learn
-+ * expected.  The caller should query virtqueue_get_vring_size to learn
-  * the actual size of the ring.
-  */
- struct virtqueue *vring_create_virtqueue(unsigned int index,
-diff --git a/include/net/bluetooth/bluetooth.h b/include/net/bluetooth/bluetooth.h
-index ec9d6bc65855..fabee6db0abb 100644
---- a/include/net/bluetooth/bluetooth.h
-+++ b/include/net/bluetooth/bluetooth.h
-@@ -276,7 +276,7 @@ int  bt_sock_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg);
- int  bt_sock_wait_state(struct sock *sk, int state, unsigned long timeo);
- int  bt_sock_wait_ready(struct sock *sk, unsigned long flags);
- 
--void bt_accept_enqueue(struct sock *parent, struct sock *sk);
-+void bt_accept_enqueue(struct sock *parent, struct sock *sk, bool bh);
- void bt_accept_unlink(struct sock *sk);
- struct sock *bt_accept_dequeue(struct sock *parent, struct socket *newsock);
++
+ void disable_write_same(struct mapped_device *md)
+ {
+ 	struct queue_limits *limits = dm_get_queue_limits(md);
+@@ -988,11 +997,14 @@ static void clone_endio(struct bio *bio)
+ 	dm_endio_fn endio = tio->ti->type->end_io;
  
-diff --git a/include/net/ip.h b/include/net/ip.h
-index be3cad9c2e4c..583526aad1d0 100644
---- a/include/net/ip.h
-+++ b/include/net/ip.h
-@@ -677,7 +677,7 @@ int ip_options_get_from_user(struct net *net, struct ip_options_rcu **optp,
- 			     unsigned char __user *data, int optlen);
- void ip_options_undo(struct ip_options *opt);
- void ip_forward_options(struct sk_buff *skb);
--int ip_options_rcv_srr(struct sk_buff *skb);
-+int ip_options_rcv_srr(struct sk_buff *skb, struct net_device *dev);
+ 	if (unlikely(error == BLK_STS_TARGET) && md->type != DM_TYPE_NVME_BIO_BASED) {
+-		if (bio_op(bio) == REQ_OP_WRITE_SAME &&
+-		    !bio->bi_disk->queue->limits.max_write_same_sectors)
++		if (bio_op(bio) == REQ_OP_DISCARD &&
++		    !bio->bi_disk->queue->limits.max_discard_sectors)
++			disable_discard(md);
++		else if (bio_op(bio) == REQ_OP_WRITE_SAME &&
++			 !bio->bi_disk->queue->limits.max_write_same_sectors)
+ 			disable_write_same(md);
+-		if (bio_op(bio) == REQ_OP_WRITE_ZEROES &&
+-		    !bio->bi_disk->queue->limits.max_write_zeroes_sectors)
++		else if (bio_op(bio) == REQ_OP_WRITE_ZEROES &&
++			 !bio->bi_disk->queue->limits.max_write_zeroes_sectors)
+ 			disable_write_zeroes(md);
+ 	}
  
- /*
-  *	Functions provided by ip_sockglue.c
-diff --git a/include/net/net_namespace.h b/include/net/net_namespace.h
-index 99d4148e0f90..1c3126c14930 100644
---- a/include/net/net_namespace.h
-+++ b/include/net/net_namespace.h
-@@ -58,6 +58,7 @@ struct net {
- 						 */
- 	spinlock_t		rules_mod_lock;
+@@ -1060,15 +1072,7 @@ int dm_set_target_max_io_len(struct dm_target *ti, sector_t len)
+ 		return -EINVAL;
+ 	}
  
-+	u32			hash_mix;
- 	atomic64_t		cookie_gen;
+-	/*
+-	 * BIO based queue uses its own splitting. When multipage bvecs
+-	 * is switched on, size of the incoming bio may be too big to
+-	 * be handled in some targets, such as crypt.
+-	 *
+-	 * When these targets are ready for the big bio, we can remove
+-	 * the limit.
+-	 */
+-	ti->max_io_len = min_t(uint32_t, len, BIO_MAX_PAGES * PAGE_SIZE);
++	ti->max_io_len = (uint32_t) len;
  
- 	struct list_head	list;		/* list of network namespaces */
-diff --git a/include/net/netfilter/br_netfilter.h b/include/net/netfilter/br_netfilter.h
-index 4cd56808ac4e..89808ce293c4 100644
---- a/include/net/netfilter/br_netfilter.h
-+++ b/include/net/netfilter/br_netfilter.h
-@@ -43,7 +43,6 @@ static inline struct rtable *bridge_parent_rtable(const struct net_device *dev)
+ 	return 0;
  }
+diff --git a/drivers/mmc/host/alcor.c b/drivers/mmc/host/alcor.c
+index 82a97866e0cf..7c8f203f9a24 100644
+--- a/drivers/mmc/host/alcor.c
++++ b/drivers/mmc/host/alcor.c
+@@ -48,7 +48,6 @@ struct alcor_sdmmc_host {
+ 	struct mmc_command *cmd;
+ 	struct mmc_data *data;
+ 	unsigned int dma_on:1;
+-	unsigned int early_data:1;
  
- struct net_device *setup_pre_routing(struct sk_buff *skb);
--void br_netfilter_enable(void);
+ 	struct mutex cmd_mutex;
  
- #if IS_ENABLED(CONFIG_IPV6)
- int br_validate_ipv6(struct net *net, struct sk_buff *skb);
-diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
-index b4984bbbe157..0612439909dc 100644
---- a/include/net/netfilter/nf_tables.h
-+++ b/include/net/netfilter/nf_tables.h
-@@ -416,7 +416,8 @@ struct nft_set {
- 	unsigned char			*udata;
- 	/* runtime data below here */
- 	const struct nft_set_ops	*ops ____cacheline_aligned;
--	u16				flags:14,
-+	u16				flags:13,
-+					bound:1,
- 					genmask:2;
- 	u8				klen;
- 	u8				dlen;
-@@ -690,10 +691,12 @@ static inline void nft_set_gc_batch_add(struct nft_set_gc_batch *gcb,
- 	gcb->elems[gcb->head.cnt++] = elem;
+@@ -144,8 +143,7 @@ static void alcor_data_set_dma(struct alcor_sdmmc_host *host)
+ 	host->sg_count--;
  }
  
-+struct nft_expr_ops;
- /**
-  *	struct nft_expr_type - nf_tables expression type
-  *
-  *	@select_ops: function to select nft_expr_ops
-+ *	@release_ops: release nft_expr_ops
-  *	@ops: default ops, used when no select_ops functions is present
-  *	@list: used internally
-  *	@name: Identifier
-@@ -706,6 +709,7 @@ static inline void nft_set_gc_batch_add(struct nft_set_gc_batch *gcb,
- struct nft_expr_type {
- 	const struct nft_expr_ops	*(*select_ops)(const struct nft_ctx *,
- 						       const struct nlattr * const tb[]);
-+	void				(*release_ops)(const struct nft_expr_ops *ops);
- 	const struct nft_expr_ops	*ops;
- 	struct list_head		list;
- 	const char			*name;
-@@ -1329,15 +1333,12 @@ struct nft_trans_rule {
- struct nft_trans_set {
- 	struct nft_set			*set;
- 	u32				set_id;
--	bool				bound;
- };
- 
- #define nft_trans_set(trans)	\
- 	(((struct nft_trans_set *)trans->data)->set)
- #define nft_trans_set_id(trans)	\
- 	(((struct nft_trans_set *)trans->data)->set_id)
--#define nft_trans_set_bound(trans)	\
--	(((struct nft_trans_set *)trans->data)->bound)
- 
- struct nft_trans_chain {
- 	bool				update;
-diff --git a/include/net/netns/hash.h b/include/net/netns/hash.h
-index 16a842456189..d9b665151f3d 100644
---- a/include/net/netns/hash.h
-+++ b/include/net/netns/hash.h
-@@ -2,16 +2,10 @@
- #ifndef __NET_NS_HASH_H__
- #define __NET_NS_HASH_H__
+-static void alcor_trigger_data_transfer(struct alcor_sdmmc_host *host,
+-					bool early)
++static void alcor_trigger_data_transfer(struct alcor_sdmmc_host *host)
+ {
+ 	struct alcor_pci_priv *priv = host->alcor_pci;
+ 	struct mmc_data *data = host->data;
+@@ -155,13 +153,6 @@ static void alcor_trigger_data_transfer(struct alcor_sdmmc_host *host,
+ 		ctrl |= AU6601_DATA_WRITE;
  
--#include <asm/cache.h>
+ 	if (data->host_cookie == COOKIE_MAPPED) {
+-		if (host->early_data) {
+-			host->early_data = false;
+-			return;
+-		}
 -
--struct net;
-+#include <net/net_namespace.h>
- 
- static inline u32 net_hash_mix(const struct net *net)
+-		host->early_data = early;
+-
+ 		alcor_data_set_dma(host);
+ 		ctrl |= AU6601_DATA_DMA_MODE;
+ 		host->dma_on = 1;
+@@ -231,6 +222,7 @@ static void alcor_prepare_sg_miter(struct alcor_sdmmc_host *host)
+ static void alcor_prepare_data(struct alcor_sdmmc_host *host,
+ 			       struct mmc_command *cmd)
  {
--#ifdef CONFIG_NET_NS
--	return (u32)(((unsigned long)net) >> ilog2(sizeof(*net)));
--#else
--	return 0;
--#endif
-+	return net->hash_mix;
- }
- #endif
-diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
-index 9481f2c142e2..e7eb4aa6ccc9 100644
---- a/include/net/sch_generic.h
-+++ b/include/net/sch_generic.h
-@@ -51,7 +51,10 @@ struct qdisc_size_table {
- struct qdisc_skb_head {
- 	struct sk_buff	*head;
- 	struct sk_buff	*tail;
--	__u32		qlen;
-+	union {
-+		u32		qlen;
-+		atomic_t	atomic_qlen;
-+	};
- 	spinlock_t	lock;
- };
++	struct alcor_pci_priv *priv = host->alcor_pci;
+ 	struct mmc_data *data = cmd->data;
  
-@@ -408,27 +411,19 @@ static inline void qdisc_cb_private_validate(const struct sk_buff *skb, int sz)
- 	BUILD_BUG_ON(sizeof(qcb->data) < sz);
- }
+ 	if (!data)
+@@ -248,7 +240,7 @@ static void alcor_prepare_data(struct alcor_sdmmc_host *host,
+ 	if (data->host_cookie != COOKIE_MAPPED)
+ 		alcor_prepare_sg_miter(host);
  
--static inline int qdisc_qlen_cpu(const struct Qdisc *q)
--{
--	return this_cpu_ptr(q->cpu_qstats)->qlen;
--}
--
- static inline int qdisc_qlen(const struct Qdisc *q)
- {
- 	return q->q.qlen;
+-	alcor_trigger_data_transfer(host, true);
++	alcor_write8(priv, 0, AU6601_DATA_XFER_CTRL);
  }
  
--static inline int qdisc_qlen_sum(const struct Qdisc *q)
-+static inline u32 qdisc_qlen_sum(const struct Qdisc *q)
- {
--	__u32 qlen = q->qstats.qlen;
--	int i;
-+	u32 qlen = q->qstats.qlen;
- 
--	if (q->flags & TCQ_F_NOLOCK) {
--		for_each_possible_cpu(i)
--			qlen += per_cpu_ptr(q->cpu_qstats, i)->qlen;
--	} else {
-+	if (q->flags & TCQ_F_NOLOCK)
-+		qlen += atomic_read(&q->q.atomic_qlen);
-+	else
- 		qlen += q->q.qlen;
--	}
+ static void alcor_send_cmd(struct alcor_sdmmc_host *host,
+@@ -435,7 +427,7 @@ static int alcor_cmd_irq_done(struct alcor_sdmmc_host *host, u32 intmask)
+ 	if (!host->data)
+ 		return false;
  
- 	return qlen;
+-	alcor_trigger_data_transfer(host, false);
++	alcor_trigger_data_transfer(host);
+ 	host->cmd = NULL;
+ 	return true;
  }
-@@ -825,14 +820,14 @@ static inline void qdisc_qstats_cpu_backlog_inc(struct Qdisc *sch,
- 	this_cpu_add(sch->cpu_qstats->backlog, qdisc_pkt_len(skb));
+@@ -456,7 +448,7 @@ static void alcor_cmd_irq_thread(struct alcor_sdmmc_host *host, u32 intmask)
+ 	if (!host->data)
+ 		alcor_request_complete(host, 1);
+ 	else
+-		alcor_trigger_data_transfer(host, false);
++		alcor_trigger_data_transfer(host);
+ 	host->cmd = NULL;
  }
  
--static inline void qdisc_qstats_cpu_qlen_inc(struct Qdisc *sch)
-+static inline void qdisc_qstats_atomic_qlen_inc(struct Qdisc *sch)
- {
--	this_cpu_inc(sch->cpu_qstats->qlen);
-+	atomic_inc(&sch->q.atomic_qlen);
- }
+@@ -487,15 +479,9 @@ static int alcor_data_irq_done(struct alcor_sdmmc_host *host, u32 intmask)
+ 		break;
+ 	case AU6601_INT_READ_BUF_RDY:
+ 		alcor_trf_block_pio(host, true);
+-		if (!host->blocks)
+-			break;
+-		alcor_trigger_data_transfer(host, false);
+ 		return 1;
+ 	case AU6601_INT_WRITE_BUF_RDY:
+ 		alcor_trf_block_pio(host, false);
+-		if (!host->blocks)
+-			break;
+-		alcor_trigger_data_transfer(host, false);
+ 		return 1;
+ 	case AU6601_INT_DMA_END:
+ 		if (!host->sg_count)
+@@ -508,8 +494,14 @@ static int alcor_data_irq_done(struct alcor_sdmmc_host *host, u32 intmask)
+ 		break;
+ 	}
  
--static inline void qdisc_qstats_cpu_qlen_dec(struct Qdisc *sch)
-+static inline void qdisc_qstats_atomic_qlen_dec(struct Qdisc *sch)
- {
--	this_cpu_dec(sch->cpu_qstats->qlen);
-+	atomic_dec(&sch->q.atomic_qlen);
- }
+-	if (intmask & AU6601_INT_DATA_END)
+-		return 0;
++	if (intmask & AU6601_INT_DATA_END) {
++		if (!host->dma_on && host->blocks) {
++			alcor_trigger_data_transfer(host);
++			return 1;
++		} else {
++			return 0;
++		}
++	}
  
- static inline void qdisc_qstats_cpu_requeues_inc(struct Qdisc *sch)
-diff --git a/include/net/sctp/checksum.h b/include/net/sctp/checksum.h
-index 32ee65a30aff..1c6e6c0766ca 100644
---- a/include/net/sctp/checksum.h
-+++ b/include/net/sctp/checksum.h
-@@ -61,7 +61,7 @@ static inline __wsum sctp_csum_combine(__wsum csum, __wsum csum2,
- static inline __le32 sctp_compute_cksum(const struct sk_buff *skb,
- 					unsigned int offset)
- {
--	struct sctphdr *sh = sctp_hdr(skb);
-+	struct sctphdr *sh = (struct sctphdr *)(skb->data + offset);
- 	const struct skb_checksum_ops ops = {
- 		.update  = sctp_csum_update,
- 		.combine = sctp_csum_combine,
-diff --git a/include/net/sock.h b/include/net/sock.h
-index f43f935cb113..89d0d94d5db2 100644
---- a/include/net/sock.h
-+++ b/include/net/sock.h
-@@ -710,6 +710,12 @@ static inline void sk_add_node_rcu(struct sock *sk, struct hlist_head *list)
- 		hlist_add_head_rcu(&sk->sk_node, list);
+ 	return 1;
+ }
+diff --git a/drivers/mmc/host/sdhci-omap.c b/drivers/mmc/host/sdhci-omap.c
+index c11c18a9aacb..9ec300ec94ba 100644
+--- a/drivers/mmc/host/sdhci-omap.c
++++ b/drivers/mmc/host/sdhci-omap.c
+@@ -797,6 +797,43 @@ void sdhci_omap_reset(struct sdhci_host *host, u8 mask)
+ 	sdhci_reset(host, mask);
  }
  
-+static inline void sk_add_node_tail_rcu(struct sock *sk, struct hlist_head *list)
++#define CMD_ERR_MASK (SDHCI_INT_CRC | SDHCI_INT_END_BIT | SDHCI_INT_INDEX |\
++		      SDHCI_INT_TIMEOUT)
++#define CMD_MASK (CMD_ERR_MASK | SDHCI_INT_RESPONSE)
++
++static u32 sdhci_omap_irq(struct sdhci_host *host, u32 intmask)
 +{
-+	sock_hold(sk);
-+	hlist_add_tail_rcu(&sk->sk_node, list);
-+}
++	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
++	struct sdhci_omap_host *omap_host = sdhci_pltfm_priv(pltfm_host);
 +
- static inline void __sk_nulls_add_node_rcu(struct sock *sk, struct hlist_nulls_head *list)
- {
- 	hlist_nulls_add_head_rcu(&sk->sk_nulls_node, list);
-diff --git a/include/scsi/libfcoe.h b/include/scsi/libfcoe.h
-index cb8a273732cf..bb8092fa1e36 100644
---- a/include/scsi/libfcoe.h
-+++ b/include/scsi/libfcoe.h
-@@ -79,7 +79,7 @@ enum fip_state {
-  * It must not change after fcoe_ctlr_init() sets it.
-  */
- enum fip_mode {
--	FIP_MODE_AUTO = FIP_ST_AUTO,
-+	FIP_MODE_AUTO,
- 	FIP_MODE_NON_FIP,
- 	FIP_MODE_FABRIC,
- 	FIP_MODE_VN2VN,
-@@ -250,7 +250,7 @@ struct fcoe_rport {
- };
- 
- /* FIP API functions */
--void fcoe_ctlr_init(struct fcoe_ctlr *, enum fip_state);
-+void fcoe_ctlr_init(struct fcoe_ctlr *, enum fip_mode);
- void fcoe_ctlr_destroy(struct fcoe_ctlr *);
- void fcoe_ctlr_link_up(struct fcoe_ctlr *);
- int fcoe_ctlr_link_down(struct fcoe_ctlr *);
-diff --git a/include/uapi/linux/android/binder.h b/include/uapi/linux/android/binder.h
-index b9ba520f7e4b..2832134e5397 100644
---- a/include/uapi/linux/android/binder.h
-+++ b/include/uapi/linux/android/binder.h
-@@ -41,6 +41,14 @@ enum {
- enum {
- 	FLAT_BINDER_FLAG_PRIORITY_MASK = 0xff,
- 	FLAT_BINDER_FLAG_ACCEPTS_FDS = 0x100,
++	if (omap_host->is_tuning && host->cmd && !host->data_early &&
++	    (intmask & CMD_ERR_MASK)) {
 +
-+	/**
-+	 * @FLAT_BINDER_FLAG_TXN_SECURITY_CTX: request security contexts
-+	 *
-+	 * Only when set, causes senders to include their security
-+	 * context
-+	 */
-+	FLAT_BINDER_FLAG_TXN_SECURITY_CTX = 0x1000,
- };
- 
- #ifdef BINDER_IPC_32BIT
-@@ -218,6 +226,7 @@ struct binder_node_info_for_ref {
- #define BINDER_VERSION			_IOWR('b', 9, struct binder_version)
- #define BINDER_GET_NODE_DEBUG_INFO	_IOWR('b', 11, struct binder_node_debug_info)
- #define BINDER_GET_NODE_INFO_FOR_REF	_IOWR('b', 12, struct binder_node_info_for_ref)
-+#define BINDER_SET_CONTEXT_MGR_EXT	_IOW('b', 13, struct flat_binder_object)
- 
- /*
-  * NOTE: Two special error codes you should check for when calling
-@@ -276,6 +285,11 @@ struct binder_transaction_data {
- 	} data;
- };
- 
-+struct binder_transaction_data_secctx {
-+	struct binder_transaction_data transaction_data;
-+	binder_uintptr_t secctx;
-+};
++		/*
++		 * Since we are not resetting data lines during tuning
++		 * operation, data error or data complete interrupts
++		 * might still arrive. Mark this request as a failure
++		 * but still wait for the data interrupt
++		 */
++		if (intmask & SDHCI_INT_TIMEOUT)
++			host->cmd->error = -ETIMEDOUT;
++		else
++			host->cmd->error = -EILSEQ;
 +
- struct binder_transaction_data_sg {
- 	struct binder_transaction_data transaction_data;
- 	binder_size_t buffers_size;
-@@ -311,6 +325,11 @@ enum binder_driver_return_protocol {
- 	BR_OK = _IO('r', 1),
- 	/* No parameters! */
- 
-+	BR_TRANSACTION_SEC_CTX = _IOR('r', 2,
-+				      struct binder_transaction_data_secctx),
-+	/*
-+	 * binder_transaction_data_secctx: the received command.
-+	 */
- 	BR_TRANSACTION = _IOR('r', 2, struct binder_transaction_data),
- 	BR_REPLY = _IOR('r', 3, struct binder_transaction_data),
- 	/*
-diff --git a/kernel/audit.h b/kernel/audit.h
-index 91421679a168..6ffb70575082 100644
---- a/kernel/audit.h
-+++ b/kernel/audit.h
-@@ -314,7 +314,7 @@ extern void audit_trim_trees(void);
- extern int audit_tag_tree(char *old, char *new);
- extern const char *audit_tree_path(struct audit_tree *tree);
- extern void audit_put_tree(struct audit_tree *tree);
--extern void audit_kill_trees(struct list_head *list);
-+extern void audit_kill_trees(struct audit_context *context);
- #else
- #define audit_remove_tree_rule(rule) BUG()
- #define audit_add_tree_rule(rule) -EINVAL
-@@ -323,7 +323,7 @@ extern void audit_kill_trees(struct list_head *list);
- #define audit_put_tree(tree) (void)0
- #define audit_tag_tree(old, new) -EINVAL
- #define audit_tree_path(rule) ""	/* never called */
--#define audit_kill_trees(list) BUG()
-+#define audit_kill_trees(context) BUG()
- #endif
- 
- extern char *audit_unpack_string(void **bufp, size_t *remain, size_t len);
-diff --git a/kernel/audit_tree.c b/kernel/audit_tree.c
-index d4af4d97f847..abfb112f26aa 100644
---- a/kernel/audit_tree.c
-+++ b/kernel/audit_tree.c
-@@ -524,13 +524,14 @@ static int tag_chunk(struct inode *inode, struct audit_tree *tree)
- 	return 0;
- }
++		host->cmd = NULL;
++
++		/*
++		 * Sometimes command error interrupts and command complete
++		 * interrupt will arrive together. Clear all command related
++		 * interrupts here.
++		 */
++		sdhci_writel(host, intmask & CMD_MASK, SDHCI_INT_STATUS);
++		intmask &= ~CMD_MASK;
++	}
++
++	return intmask;
++}
++
+ static struct sdhci_ops sdhci_omap_ops = {
+ 	.set_clock = sdhci_omap_set_clock,
+ 	.set_power = sdhci_omap_set_power,
+@@ -807,6 +844,7 @@ static struct sdhci_ops sdhci_omap_ops = {
+ 	.platform_send_init_74_clocks = sdhci_omap_init_74_clocks,
+ 	.reset = sdhci_omap_reset,
+ 	.set_uhs_signaling = sdhci_omap_set_uhs_signaling,
++	.irq = sdhci_omap_irq,
+ };
  
--static void audit_tree_log_remove_rule(struct audit_krule *rule)
-+static void audit_tree_log_remove_rule(struct audit_context *context,
-+				       struct audit_krule *rule)
- {
- 	struct audit_buffer *ab;
+ static int sdhci_omap_set_capabilities(struct sdhci_omap_host *omap_host)
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 803f7990d32b..40ca339ec3df 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -1129,6 +1129,8 @@ static void bnxt_tpa_start(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
+ 	tpa_info = &rxr->rx_tpa[agg_id];
  
- 	if (!audit_enabled)
- 		return;
--	ab = audit_log_start(NULL, GFP_KERNEL, AUDIT_CONFIG_CHANGE);
-+	ab = audit_log_start(context, GFP_KERNEL, AUDIT_CONFIG_CHANGE);
- 	if (unlikely(!ab))
+ 	if (unlikely(cons != rxr->rx_next_cons)) {
++		netdev_warn(bp->dev, "TPA cons %x != expected cons %x\n",
++			    cons, rxr->rx_next_cons);
+ 		bnxt_sched_reset(bp, rxr);
  		return;
- 	audit_log_format(ab, "op=remove_rule dir=");
-@@ -540,7 +541,7 @@ static void audit_tree_log_remove_rule(struct audit_krule *rule)
- 	audit_log_end(ab);
- }
+ 	}
+@@ -1581,15 +1583,17 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
+ 	}
  
--static void kill_rules(struct audit_tree *tree)
-+static void kill_rules(struct audit_context *context, struct audit_tree *tree)
- {
- 	struct audit_krule *rule, *next;
- 	struct audit_entry *entry;
-@@ -551,7 +552,7 @@ static void kill_rules(struct audit_tree *tree)
- 		list_del_init(&rule->rlist);
- 		if (rule->tree) {
- 			/* not a half-baked one */
--			audit_tree_log_remove_rule(rule);
-+			audit_tree_log_remove_rule(context, rule);
- 			if (entry->rule.exe)
- 				audit_remove_mark(entry->rule.exe);
- 			rule->tree = NULL;
-@@ -633,7 +634,7 @@ static void trim_marked(struct audit_tree *tree)
- 		tree->goner = 1;
- 		spin_unlock(&hash_lock);
- 		mutex_lock(&audit_filter_mutex);
--		kill_rules(tree);
-+		kill_rules(audit_context(), tree);
- 		list_del_init(&tree->list);
- 		mutex_unlock(&audit_filter_mutex);
- 		prune_one(tree);
-@@ -973,8 +974,10 @@ static void audit_schedule_prune(void)
-  * ... and that one is done if evict_chunk() decides to delay until the end
-  * of syscall.  Runs synchronously.
-  */
--void audit_kill_trees(struct list_head *list)
-+void audit_kill_trees(struct audit_context *context)
- {
-+	struct list_head *list = &context->killed_trees;
-+
- 	audit_ctl_lock();
- 	mutex_lock(&audit_filter_mutex);
- 
-@@ -982,7 +985,7 @@ void audit_kill_trees(struct list_head *list)
- 		struct audit_tree *victim;
- 
- 		victim = list_entry(list->next, struct audit_tree, list);
--		kill_rules(victim);
-+		kill_rules(context, victim);
- 		list_del_init(&victim->list);
- 
- 		mutex_unlock(&audit_filter_mutex);
-@@ -1017,7 +1020,7 @@ static void evict_chunk(struct audit_chunk *chunk)
- 		list_del_init(&owner->same_root);
- 		spin_unlock(&hash_lock);
- 		if (!postponed) {
--			kill_rules(owner);
-+			kill_rules(audit_context(), owner);
- 			list_move(&owner->list, &prune_list);
- 			need_prune = 1;
- 		} else {
-diff --git a/kernel/auditsc.c b/kernel/auditsc.c
-index 6593a5207fb0..b585ceb2f7a2 100644
---- a/kernel/auditsc.c
-+++ b/kernel/auditsc.c
-@@ -1444,6 +1444,9 @@ void __audit_free(struct task_struct *tsk)
- 	if (!context)
- 		return;
+ 	cons = rxcmp->rx_cmp_opaque;
+-	rx_buf = &rxr->rx_buf_ring[cons];
+-	data = rx_buf->data;
+-	data_ptr = rx_buf->data_ptr;
+ 	if (unlikely(cons != rxr->rx_next_cons)) {
+ 		int rc1 = bnxt_discard_rx(bp, cpr, raw_cons, rxcmp);
  
-+	if (!list_empty(&context->killed_trees))
-+		audit_kill_trees(context);
-+
- 	/* We are called either by do_exit() or the fork() error handling code;
- 	 * in the former case tsk == current and in the latter tsk is a
- 	 * random task_struct that doesn't have any meaningful data we
-@@ -1460,9 +1463,6 @@ void __audit_free(struct task_struct *tsk)
- 			audit_log_exit();
++		netdev_warn(bp->dev, "RX cons %x != expected cons %x\n",
++			    cons, rxr->rx_next_cons);
+ 		bnxt_sched_reset(bp, rxr);
+ 		return rc1;
  	}
++	rx_buf = &rxr->rx_buf_ring[cons];
++	data = rx_buf->data;
++	data_ptr = rx_buf->data_ptr;
+ 	prefetch(data_ptr);
  
--	if (!list_empty(&context->killed_trees))
--		audit_kill_trees(&context->killed_trees);
--
- 	audit_set_context(tsk, NULL);
- 	audit_free_context(context);
- }
-@@ -1537,6 +1537,9 @@ void __audit_syscall_exit(int success, long return_code)
- 	if (!context)
- 		return;
+ 	misc = le32_to_cpu(rxcmp->rx_cmp_misc_v1);
+@@ -1606,11 +1610,17 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
  
-+	if (!list_empty(&context->killed_trees))
-+		audit_kill_trees(context);
+ 	rx_buf->data = NULL;
+ 	if (rxcmp1->rx_cmp_cfa_code_errors_v2 & RX_CMP_L2_ERRORS) {
++		u32 rx_err = le32_to_cpu(rxcmp1->rx_cmp_cfa_code_errors_v2);
 +
- 	if (!context->dummy && context->in_syscall) {
- 		if (success)
- 			context->return_valid = AUDITSC_SUCCESS;
-@@ -1571,9 +1574,6 @@ void __audit_syscall_exit(int success, long return_code)
- 	context->in_syscall = 0;
- 	context->prio = context->state == AUDIT_RECORD_CONTEXT ? ~0ULL : 0;
- 
--	if (!list_empty(&context->killed_trees))
--		audit_kill_trees(&context->killed_trees);
--
- 	audit_free_names(context);
- 	unroll_tree_refs(context, NULL, 0);
- 	audit_free_aux(context);
-diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
-index 5fcce2f4209d..d53825b6fcd9 100644
---- a/kernel/bpf/verifier.c
-+++ b/kernel/bpf/verifier.c
-@@ -3187,7 +3187,7 @@ do_sim:
- 		*dst_reg = *ptr_reg;
- 	}
- 	ret = push_stack(env, env->insn_idx + 1, env->insn_idx, true);
--	if (!ptr_is_dst_reg)
-+	if (!ptr_is_dst_reg && ret)
- 		*dst_reg = tmp;
- 	return !ret ? -EFAULT : 0;
- }
-diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
-index f31bd61c9466..f84bf28f36ba 100644
---- a/kernel/cgroup/cgroup.c
-+++ b/kernel/cgroup/cgroup.c
-@@ -197,7 +197,7 @@ static u64 css_serial_nr_next = 1;
-  */
- static u16 have_fork_callback __read_mostly;
- static u16 have_exit_callback __read_mostly;
--static u16 have_free_callback __read_mostly;
-+static u16 have_release_callback __read_mostly;
- static u16 have_canfork_callback __read_mostly;
- 
- /* cgroup namespace for init task */
-@@ -2033,7 +2033,7 @@ struct dentry *cgroup_do_mount(struct file_system_type *fs_type, int flags,
- 			       struct cgroup_namespace *ns)
- {
- 	struct dentry *dentry;
--	bool new_sb;
-+	bool new_sb = false;
- 
- 	dentry = kernfs_mount(fs_type, flags, root->kf_root, magic, &new_sb);
+ 		bnxt_reuse_rx_data(rxr, cons, data);
+ 		if (agg_bufs)
+ 			bnxt_reuse_rx_agg_bufs(cpr, cp_cons, agg_bufs);
  
-@@ -2043,6 +2043,7 @@ struct dentry *cgroup_do_mount(struct file_system_type *fs_type, int flags,
- 	 */
- 	if (!IS_ERR(dentry) && ns != &init_cgroup_ns) {
- 		struct dentry *nsdentry;
-+		struct super_block *sb = dentry->d_sb;
- 		struct cgroup *cgrp;
- 
- 		mutex_lock(&cgroup_mutex);
-@@ -2053,12 +2054,14 @@ struct dentry *cgroup_do_mount(struct file_system_type *fs_type, int flags,
- 		spin_unlock_irq(&css_set_lock);
- 		mutex_unlock(&cgroup_mutex);
- 
--		nsdentry = kernfs_node_dentry(cgrp->kn, dentry->d_sb);
-+		nsdentry = kernfs_node_dentry(cgrp->kn, sb);
- 		dput(dentry);
-+		if (IS_ERR(nsdentry))
-+			deactivate_locked_super(sb);
- 		dentry = nsdentry;
+ 		rc = -EIO;
++		if (rx_err & RX_CMPL_ERRORS_BUFFER_ERROR_MASK) {
++			netdev_warn(bp->dev, "RX buffer error %x\n", rx_err);
++			bnxt_sched_reset(bp, rxr);
++		}
+ 		goto next_rx;
  	}
  
--	if (IS_ERR(dentry) || !new_sb)
-+	if (!new_sb)
- 		cgroup_put(&root->cgrp);
+diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_main.c b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
+index 503cfadff4ac..d4ee9f9c8c34 100644
+--- a/drivers/net/ethernet/cavium/thunder/nicvf_main.c
++++ b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
+@@ -1328,10 +1328,11 @@ int nicvf_stop(struct net_device *netdev)
+ 	struct nicvf_cq_poll *cq_poll = NULL;
+ 	union nic_mbx mbx = {};
  
- 	return dentry;
-@@ -5313,7 +5316,7 @@ static void __init cgroup_init_subsys(struct cgroup_subsys *ss, bool early)
+-	cancel_delayed_work_sync(&nic->link_change_work);
+-
+ 	/* wait till all queued set_rx_mode tasks completes */
+-	drain_workqueue(nic->nicvf_rx_mode_wq);
++	if (nic->nicvf_rx_mode_wq) {
++		cancel_delayed_work_sync(&nic->link_change_work);
++		drain_workqueue(nic->nicvf_rx_mode_wq);
++	}
  
- 	have_fork_callback |= (bool)ss->fork << ss->id;
- 	have_exit_callback |= (bool)ss->exit << ss->id;
--	have_free_callback |= (bool)ss->free << ss->id;
-+	have_release_callback |= (bool)ss->release << ss->id;
- 	have_canfork_callback |= (bool)ss->can_fork << ss->id;
+ 	mbx.msg.msg = NIC_MBOX_MSG_SHUTDOWN;
+ 	nicvf_send_msg_to_pf(nic, &mbx);
+@@ -1452,7 +1453,8 @@ int nicvf_open(struct net_device *netdev)
+ 	struct nicvf_cq_poll *cq_poll = NULL;
  
- 	/* At system boot, before all subsystems have been
-@@ -5749,16 +5752,19 @@ void cgroup_exit(struct task_struct *tsk)
- 	} while_each_subsys_mask();
- }
+ 	/* wait till all queued set_rx_mode tasks completes if any */
+-	drain_workqueue(nic->nicvf_rx_mode_wq);
++	if (nic->nicvf_rx_mode_wq)
++		drain_workqueue(nic->nicvf_rx_mode_wq);
  
--void cgroup_free(struct task_struct *task)
-+void cgroup_release(struct task_struct *task)
- {
--	struct css_set *cset = task_css_set(task);
- 	struct cgroup_subsys *ss;
- 	int ssid;
- 
--	do_each_subsys_mask(ss, ssid, have_free_callback) {
--		ss->free(task);
-+	do_each_subsys_mask(ss, ssid, have_release_callback) {
-+		ss->release(task);
- 	} while_each_subsys_mask();
-+}
+ 	netif_carrier_off(netdev);
  
-+void cgroup_free(struct task_struct *task)
-+{
-+	struct css_set *cset = task_css_set(task);
- 	put_css_set(cset);
- }
+@@ -1550,10 +1552,12 @@ int nicvf_open(struct net_device *netdev)
+ 	/* Send VF config done msg to PF */
+ 	nicvf_send_cfg_done(nic);
  
-diff --git a/kernel/cgroup/pids.c b/kernel/cgroup/pids.c
-index 9829c67ebc0a..c9960baaa14f 100644
---- a/kernel/cgroup/pids.c
-+++ b/kernel/cgroup/pids.c
-@@ -247,7 +247,7 @@ static void pids_cancel_fork(struct task_struct *task)
- 	pids_uncharge(pids, 1);
- }
+-	INIT_DELAYED_WORK(&nic->link_change_work,
+-			  nicvf_link_status_check_task);
+-	queue_delayed_work(nic->nicvf_rx_mode_wq,
+-			   &nic->link_change_work, 0);
++	if (nic->nicvf_rx_mode_wq) {
++		INIT_DELAYED_WORK(&nic->link_change_work,
++				  nicvf_link_status_check_task);
++		queue_delayed_work(nic->nicvf_rx_mode_wq,
++				   &nic->link_change_work, 0);
++	}
  
--static void pids_free(struct task_struct *task)
-+static void pids_release(struct task_struct *task)
- {
- 	struct pids_cgroup *pids = css_pids(task_css(task, pids_cgrp_id));
- 
-@@ -342,7 +342,7 @@ struct cgroup_subsys pids_cgrp_subsys = {
- 	.cancel_attach 	= pids_cancel_attach,
- 	.can_fork	= pids_can_fork,
- 	.cancel_fork	= pids_cancel_fork,
--	.free		= pids_free,
-+	.release	= pids_release,
- 	.legacy_cftypes	= pids_files,
- 	.dfl_cftypes	= pids_files,
- 	.threaded	= true,
-diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
-index d503d1a9007c..bb95a35e8c2d 100644
---- a/kernel/cgroup/rstat.c
-+++ b/kernel/cgroup/rstat.c
-@@ -87,7 +87,6 @@ static struct cgroup *cgroup_rstat_cpu_pop_updated(struct cgroup *pos,
- 						   struct cgroup *root, int cpu)
- {
- 	struct cgroup_rstat_cpu *rstatc;
--	struct cgroup *parent;
- 
- 	if (pos == root)
- 		return NULL;
-@@ -115,8 +114,8 @@ static struct cgroup *cgroup_rstat_cpu_pop_updated(struct cgroup *pos,
- 	 * However, due to the way we traverse, @pos will be the first
- 	 * child in most cases. The only exception is @root.
+ 	return 0;
+ cleanup:
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 5ecbb1adcf3b..51cfe95f3e24 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -1885,6 +1885,7 @@ static int do_hard_reset(struct ibmvnic_adapter *adapter,
  	 */
--	parent = cgroup_parent(pos);
--	if (parent && rstatc->updated_next) {
-+	if (rstatc->updated_next) {
-+		struct cgroup *parent = cgroup_parent(pos);
- 		struct cgroup_rstat_cpu *prstatc = cgroup_rstat_cpu(parent, cpu);
- 		struct cgroup_rstat_cpu *nrstatc;
- 		struct cgroup **nextp;
-@@ -140,9 +139,12 @@ static struct cgroup *cgroup_rstat_cpu_pop_updated(struct cgroup *pos,
- 		 * updated stat.
- 		 */
- 		smp_mb();
-+
-+		return pos;
- 	}
+ 	adapter->state = VNIC_PROBED;
  
--	return pos;
-+	/* only happens for @root */
-+	return NULL;
- }
++	reinit_completion(&adapter->init_done);
+ 	rc = init_crq_queue(adapter);
+ 	if (rc) {
+ 		netdev_err(adapter->netdev,
+@@ -4625,7 +4626,7 @@ static int ibmvnic_reset_init(struct ibmvnic_adapter *adapter)
+ 	old_num_rx_queues = adapter->req_rx_queues;
+ 	old_num_tx_queues = adapter->req_tx_queues;
  
- /* see cgroup_rstat_flush() */
-diff --git a/kernel/cpu.c b/kernel/cpu.c
-index d1c6d152da89..6754f3ecfd94 100644
---- a/kernel/cpu.c
-+++ b/kernel/cpu.c
-@@ -313,6 +313,15 @@ void cpus_write_unlock(void)
+-	init_completion(&adapter->init_done);
++	reinit_completion(&adapter->init_done);
+ 	adapter->init_done_rc = 0;
+ 	ibmvnic_send_crq_init(adapter);
+ 	if (!wait_for_completion_timeout(&adapter->init_done, timeout)) {
+@@ -4680,7 +4681,6 @@ static int ibmvnic_init(struct ibmvnic_adapter *adapter)
  
- void lockdep_assert_cpus_held(void)
- {
-+	/*
-+	 * We can't have hotplug operations before userspace starts running,
-+	 * and some init codepaths will knowingly not take the hotplug lock.
-+	 * This is all valid, so mute lockdep until it makes sense to report
-+	 * unheld locks.
-+	 */
-+	if (system_state < SYSTEM_RUNNING)
-+		return;
-+
- 	percpu_rwsem_assert_held(&cpu_hotplug_lock);
- }
+ 	adapter->from_passive_init = false;
  
-@@ -555,6 +564,20 @@ static void undo_cpu_up(unsigned int cpu, struct cpuhp_cpu_state *st)
- 		cpuhp_invoke_callback(cpu, st->state, false, NULL, NULL);
- }
+-	init_completion(&adapter->init_done);
+ 	adapter->init_done_rc = 0;
+ 	ibmvnic_send_crq_init(adapter);
+ 	if (!wait_for_completion_timeout(&adapter->init_done, timeout)) {
+@@ -4759,6 +4759,7 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
+ 	INIT_WORK(&adapter->ibmvnic_reset, __ibmvnic_reset);
+ 	INIT_LIST_HEAD(&adapter->rwi_list);
+ 	spin_lock_init(&adapter->rwi_lock);
++	init_completion(&adapter->init_done);
+ 	adapter->resetting = false;
  
-+static inline bool can_rollback_cpu(struct cpuhp_cpu_state *st)
-+{
-+	if (IS_ENABLED(CONFIG_HOTPLUG_CPU))
-+		return true;
-+	/*
-+	 * When CPU hotplug is disabled, then taking the CPU down is not
-+	 * possible because takedown_cpu() and the architecture and
-+	 * subsystem specific mechanisms are not available. So the CPU
-+	 * which would be completely unplugged again needs to stay around
-+	 * in the current state.
-+	 */
-+	return st->state <= CPUHP_BRINGUP_CPU;
-+}
-+
- static int cpuhp_up_callbacks(unsigned int cpu, struct cpuhp_cpu_state *st,
- 			      enum cpuhp_state target)
- {
-@@ -565,8 +588,10 @@ static int cpuhp_up_callbacks(unsigned int cpu, struct cpuhp_cpu_state *st,
- 		st->state++;
- 		ret = cpuhp_invoke_callback(cpu, st->state, true, NULL, NULL);
- 		if (ret) {
--			st->target = prev_state;
--			undo_cpu_up(cpu, st);
-+			if (can_rollback_cpu(st)) {
-+				st->target = prev_state;
-+				undo_cpu_up(cpu, st);
-+			}
- 			break;
- 		}
- 	}
-diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
-index 355d16acee6d..6310ad01f915 100644
---- a/kernel/dma/direct.c
-+++ b/kernel/dma/direct.c
-@@ -380,3 +380,14 @@ int dma_direct_supported(struct device *dev, u64 mask)
- 	 */
- 	return mask >= __phys_to_dma(dev, min_mask);
- }
-+
-+size_t dma_direct_max_mapping_size(struct device *dev)
-+{
-+	size_t size = SIZE_MAX;
-+
-+	/* If SWIOTLB is active, use its maximum mapping size */
-+	if (is_swiotlb_active())
-+		size = swiotlb_max_mapping_size(dev);
-+
-+	return size;
-+}
-diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
-index a11006b6d8e8..5753008ab286 100644
---- a/kernel/dma/mapping.c
-+++ b/kernel/dma/mapping.c
-@@ -357,3 +357,17 @@ void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
- 		ops->cache_sync(dev, vaddr, size, dir);
+ 	adapter->mac_change_pending = false;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
+index eac245a93f91..4ab0d030b544 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/port_buffer.c
+@@ -122,7 +122,9 @@ out:
+ 	return err;
  }
- EXPORT_SYMBOL(dma_cache_sync);
-+
-+size_t dma_max_mapping_size(struct device *dev)
-+{
-+	const struct dma_map_ops *ops = get_dma_ops(dev);
-+	size_t size = SIZE_MAX;
-+
-+	if (dma_is_direct(ops))
-+		size = dma_direct_max_mapping_size(dev);
-+	else if (ops && ops->max_mapping_size)
-+		size = ops->max_mapping_size(dev);
-+
-+	return size;
-+}
-+EXPORT_SYMBOL_GPL(dma_max_mapping_size);
-diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
-index 1fb6fd68b9c7..c873f9cc2146 100644
---- a/kernel/dma/swiotlb.c
-+++ b/kernel/dma/swiotlb.c
-@@ -662,3 +662,17 @@ swiotlb_dma_supported(struct device *hwdev, u64 mask)
+ 
+-/* xoff = ((301+2.16 * len [m]) * speed [Gbps] + 2.72 MTU [B]) */
++/* xoff = ((301+2.16 * len [m]) * speed [Gbps] + 2.72 MTU [B])
++ * minimum speed value is 40Gbps
++ */
+ static u32 calculate_xoff(struct mlx5e_priv *priv, unsigned int mtu)
  {
- 	return __phys_to_dma(hwdev, io_tlb_end - 1) <= mask;
- }
-+
-+size_t swiotlb_max_mapping_size(struct device *dev)
-+{
-+	return ((size_t)1 << IO_TLB_SHIFT) * IO_TLB_SEGSIZE;
-+}
-+
-+bool is_swiotlb_active(void)
-+{
-+	/*
-+	 * When SWIOTLB is initialized, even if io_tlb_start points to physical
-+	 * address zero, io_tlb_end surely doesn't.
-+	 */
-+	return io_tlb_end != 0;
-+}
-diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
-index 5ab4fe3b1dcc..878c62ec0190 100644
---- a/kernel/events/ring_buffer.c
-+++ b/kernel/events/ring_buffer.c
-@@ -658,7 +658,7 @@ int rb_alloc_aux(struct ring_buffer *rb, struct perf_event *event,
- 			goto out;
- 	}
+ 	u32 speed;
+@@ -130,10 +132,9 @@ static u32 calculate_xoff(struct mlx5e_priv *priv, unsigned int mtu)
+ 	int err;
  
--	rb->aux_priv = event->pmu->setup_aux(event->cpu, rb->aux_pages, nr_pages,
-+	rb->aux_priv = event->pmu->setup_aux(event, rb->aux_pages, nr_pages,
- 					     overwrite);
- 	if (!rb->aux_priv)
- 		goto out;
-diff --git a/kernel/exit.c b/kernel/exit.c
-index 2639a30a8aa5..2166c2d92ddc 100644
---- a/kernel/exit.c
-+++ b/kernel/exit.c
-@@ -219,6 +219,7 @@ repeat:
- 	}
+ 	err = mlx5e_port_linkspeed(priv->mdev, &speed);
+-	if (err) {
+-		mlx5_core_warn(priv->mdev, "cannot get port speed\n");
+-		return 0;
+-	}
++	if (err)
++		speed = SPEED_40000;
++	speed = max_t(u32, speed, SPEED_40000);
  
- 	write_unlock_irq(&tasklist_lock);
-+	cgroup_release(p);
- 	release_thread(p);
- 	call_rcu(&p->rcu, delayed_put_task_struct);
+ 	xoff = (301 + 216 * priv->dcbx.cable_len / 100) * speed / 1000 + 272 * mtu / 100;
  
-diff --git a/kernel/futex.c b/kernel/futex.c
-index a0514e01c3eb..52668d44e07b 100644
---- a/kernel/futex.c
-+++ b/kernel/futex.c
-@@ -3440,6 +3440,10 @@ static int handle_futex_death(u32 __user *uaddr, struct task_struct *curr, int p
- {
- 	u32 uval, uninitialized_var(nval), mval;
+@@ -142,7 +143,7 @@ static u32 calculate_xoff(struct mlx5e_priv *priv, unsigned int mtu)
+ }
  
-+	/* Futex address must be 32bit aligned */
-+	if ((((unsigned long)uaddr) % sizeof(*uaddr)) != 0)
-+		return -1;
-+
- retry:
- 	if (get_user(uval, uaddr))
- 		return -1;
-diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
-index 34e969069488..b07a2acc4eec 100644
---- a/kernel/irq/chip.c
-+++ b/kernel/irq/chip.c
-@@ -855,7 +855,11 @@ void handle_percpu_irq(struct irq_desc *desc)
+ static int update_xoff_threshold(struct mlx5e_port_buffer *port_buffer,
+-				 u32 xoff, unsigned int mtu)
++				 u32 xoff, unsigned int max_mtu)
  {
- 	struct irq_chip *chip = irq_desc_get_chip(desc);
+ 	int i;
  
--	kstat_incr_irqs_this_cpu(desc);
-+	/*
-+	 * PER CPU interrupts are not serialized. Do not touch
-+	 * desc->tot_count.
-+	 */
-+	__kstat_incr_irqs_this_cpu(desc);
+@@ -154,11 +155,12 @@ static int update_xoff_threshold(struct mlx5e_port_buffer *port_buffer,
+ 		}
  
- 	if (chip->irq_ack)
- 		chip->irq_ack(&desc->irq_data);
-@@ -884,7 +888,11 @@ void handle_percpu_devid_irq(struct irq_desc *desc)
- 	unsigned int irq = irq_desc_get_irq(desc);
- 	irqreturn_t res;
+ 		if (port_buffer->buffer[i].size <
+-		    (xoff + mtu + (1 << MLX5E_BUFFER_CELL_SHIFT)))
++		    (xoff + max_mtu + (1 << MLX5E_BUFFER_CELL_SHIFT)))
+ 			return -ENOMEM;
  
--	kstat_incr_irqs_this_cpu(desc);
-+	/*
-+	 * PER CPU interrupts are not serialized. Do not touch
-+	 * desc->tot_count.
-+	 */
-+	__kstat_incr_irqs_this_cpu(desc);
+ 		port_buffer->buffer[i].xoff = port_buffer->buffer[i].size - xoff;
+-		port_buffer->buffer[i].xon  = port_buffer->buffer[i].xoff - mtu;
++		port_buffer->buffer[i].xon  =
++			port_buffer->buffer[i].xoff - max_mtu;
+ 	}
  
- 	if (chip->irq_ack)
- 		chip->irq_ack(&desc->irq_data);
-@@ -1376,6 +1384,10 @@ int irq_chip_set_vcpu_affinity_parent(struct irq_data *data, void *vcpu_info)
- int irq_chip_set_wake_parent(struct irq_data *data, unsigned int on)
- {
- 	data = data->parent_data;
-+
-+	if (data->chip->flags & IRQCHIP_SKIP_SET_WAKE)
-+		return 0;
-+
- 	if (data->chip->irq_set_wake)
- 		return data->chip->irq_set_wake(data, on);
+ 	return 0;
+@@ -166,7 +168,7 @@ static int update_xoff_threshold(struct mlx5e_port_buffer *port_buffer,
  
-diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
-index ca6afa267070..e74e7eea76cf 100644
---- a/kernel/irq/internals.h
-+++ b/kernel/irq/internals.h
-@@ -242,12 +242,18 @@ static inline void irq_state_set_masked(struct irq_desc *desc)
+ /**
+  * update_buffer_lossy()
+- *   mtu: device's MTU
++ *   max_mtu: netdev's max_mtu
+  *   pfc_en: <input> current pfc configuration
+  *   buffer: <input> current prio to buffer mapping
+  *   xoff:   <input> xoff value
+@@ -183,7 +185,7 @@ static int update_xoff_threshold(struct mlx5e_port_buffer *port_buffer,
+  *     Return 0 if no error.
+  *     Set change to true if buffer configuration is modified.
+  */
+-static int update_buffer_lossy(unsigned int mtu,
++static int update_buffer_lossy(unsigned int max_mtu,
+ 			       u8 pfc_en, u8 *buffer, u32 xoff,
+ 			       struct mlx5e_port_buffer *port_buffer,
+ 			       bool *change)
+@@ -220,7 +222,7 @@ static int update_buffer_lossy(unsigned int mtu,
+ 	}
  
- #undef __irqd_to_state
+ 	if (changed) {
+-		err = update_xoff_threshold(port_buffer, xoff, mtu);
++		err = update_xoff_threshold(port_buffer, xoff, max_mtu);
+ 		if (err)
+ 			return err;
  
--static inline void kstat_incr_irqs_this_cpu(struct irq_desc *desc)
-+static inline void __kstat_incr_irqs_this_cpu(struct irq_desc *desc)
- {
- 	__this_cpu_inc(*desc->kstat_irqs);
- 	__this_cpu_inc(kstat.irqs_sum);
+@@ -230,6 +232,7 @@ static int update_buffer_lossy(unsigned int mtu,
+ 	return 0;
  }
  
-+static inline void kstat_incr_irqs_this_cpu(struct irq_desc *desc)
-+{
-+	__kstat_incr_irqs_this_cpu(desc);
-+	desc->tot_count++;
-+}
-+
- static inline int irq_desc_get_node(struct irq_desc *desc)
- {
- 	return irq_common_data_get_node(&desc->irq_common_data);
-diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
-index ef8ad36cadcf..e16e022eae09 100644
---- a/kernel/irq/irqdesc.c
-+++ b/kernel/irq/irqdesc.c
-@@ -119,6 +119,7 @@ static void desc_set_defaults(unsigned int irq, struct irq_desc *desc, int node,
- 	desc->depth = 1;
- 	desc->irq_count = 0;
- 	desc->irqs_unhandled = 0;
-+	desc->tot_count = 0;
- 	desc->name = NULL;
- 	desc->owner = owner;
- 	for_each_possible_cpu(cpu)
-@@ -557,6 +558,7 @@ int __init early_irq_init(void)
- 		alloc_masks(&desc[i], node);
- 		raw_spin_lock_init(&desc[i].lock);
- 		lockdep_set_class(&desc[i].lock, &irq_desc_lock_class);
-+		mutex_init(&desc[i].request_mutex);
- 		desc_set_defaults(i, &desc[i], node, NULL, NULL);
- 	}
- 	return arch_early_irq_init();
-@@ -919,11 +921,15 @@ unsigned int kstat_irqs_cpu(unsigned int irq, int cpu)
- unsigned int kstat_irqs(unsigned int irq)
- {
- 	struct irq_desc *desc = irq_to_desc(irq);
--	int cpu;
- 	unsigned int sum = 0;
-+	int cpu;
- 
- 	if (!desc || !desc->kstat_irqs)
- 		return 0;
-+	if (!irq_settings_is_per_cpu_devid(desc) &&
-+	    !irq_settings_is_per_cpu(desc))
-+	    return desc->tot_count;
-+
- 	for_each_possible_cpu(cpu)
- 		sum += *per_cpu_ptr(desc->kstat_irqs, cpu);
- 	return sum;
-diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
-index 95932333a48b..e805fe3bf87f 100644
---- a/kernel/locking/lockdep.c
-+++ b/kernel/locking/lockdep.c
-@@ -3535,6 +3535,9 @@ static int __lock_downgrade(struct lockdep_map *lock, unsigned long ip)
- 	unsigned int depth;
++#define MINIMUM_MAX_MTU 9216
+ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
+ 				    u32 change, unsigned int mtu,
+ 				    struct ieee_pfc *pfc,
+@@ -241,12 +244,14 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
+ 	bool update_prio2buffer = false;
+ 	u8 buffer[MLX5E_MAX_PRIORITY];
+ 	bool update_buffer = false;
++	unsigned int max_mtu;
+ 	u32 total_used = 0;
+ 	u8 curr_pfc_en;
+ 	int err;
  	int i;
  
-+	if (unlikely(!debug_locks))
-+		return 0;
-+
- 	depth = curr->lockdep_depth;
- 	/*
- 	 * This function is about (re)setting the class of a held lock,
-diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
-index 9180158756d2..38d44d27e37a 100644
---- a/kernel/rcu/tree.c
-+++ b/kernel/rcu/tree.c
-@@ -1557,14 +1557,23 @@ static bool rcu_future_gp_cleanup(struct rcu_node *rnp)
- }
+ 	mlx5e_dbg(HW, priv, "%s: change=%x\n", __func__, change);
++	max_mtu = max_t(unsigned int, priv->netdev->max_mtu, MINIMUM_MAX_MTU);
  
- /*
-- * Awaken the grace-period kthread.  Don't do a self-awaken, and don't
-- * bother awakening when there is nothing for the grace-period kthread
-- * to do (as in several CPUs raced to awaken, and we lost), and finally
-- * don't try to awaken a kthread that has not yet been created.
-+ * Awaken the grace-period kthread.  Don't do a self-awaken (unless in
-+ * an interrupt or softirq handler), and don't bother awakening when there
-+ * is nothing for the grace-period kthread to do (as in several CPUs raced
-+ * to awaken, and we lost), and finally don't try to awaken a kthread that
-+ * has not yet been created.  If all those checks are passed, track some
-+ * debug information and awaken.
-+ *
-+ * So why do the self-wakeup when in an interrupt or softirq handler
-+ * in the grace-period kthread's context?  Because the kthread might have
-+ * been interrupted just as it was going to sleep, and just after the final
-+ * pre-sleep check of the awaken condition.  In this case, a wakeup really
-+ * is required, and is therefore supplied.
-  */
- static void rcu_gp_kthread_wake(void)
- {
--	if (current == rcu_state.gp_kthread ||
-+	if ((current == rcu_state.gp_kthread &&
-+	     !in_interrupt() && !in_serving_softirq()) ||
- 	    !READ_ONCE(rcu_state.gp_flags) ||
- 	    !rcu_state.gp_kthread)
- 		return;
-diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
-index 1971869c4072..f4ca36d92138 100644
---- a/kernel/rcu/update.c
-+++ b/kernel/rcu/update.c
-@@ -52,6 +52,7 @@
- #include <linux/tick.h>
- #include <linux/rcupdate_wait.h>
- #include <linux/sched/isolation.h>
-+#include <linux/kprobes.h>
- 
- #define CREATE_TRACE_POINTS
- 
-@@ -249,6 +250,7 @@ int notrace debug_lockdep_rcu_enabled(void)
- 	       current->lockdep_recursion == 0;
- }
- EXPORT_SYMBOL_GPL(debug_lockdep_rcu_enabled);
-+NOKPROBE_SYMBOL(debug_lockdep_rcu_enabled);
+ 	err = mlx5e_port_query_buffer(priv, &port_buffer);
+ 	if (err)
+@@ -254,7 +259,7 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
  
- /**
-  * rcu_read_lock_held() - might we be in RCU read-side critical section?
-diff --git a/kernel/resource.c b/kernel/resource.c
-index 915c02e8e5dd..ca7ed5158cff 100644
---- a/kernel/resource.c
-+++ b/kernel/resource.c
-@@ -382,7 +382,7 @@ static int __walk_iomem_res_desc(resource_size_t start, resource_size_t end,
- 				 int (*func)(struct resource *, void *))
- {
- 	struct resource res;
--	int ret = -1;
-+	int ret = -EINVAL;
- 
- 	while (start < end &&
- 	       !find_next_iomem_res(start, end, flags, desc, first_lvl, &res)) {
-@@ -462,7 +462,7 @@ int walk_system_ram_range(unsigned long start_pfn, unsigned long nr_pages,
- 	unsigned long flags;
- 	struct resource res;
- 	unsigned long pfn, end_pfn;
--	int ret = -1;
-+	int ret = -EINVAL;
- 
- 	start = (u64) start_pfn << PAGE_SHIFT;
- 	end = ((u64)(start_pfn + nr_pages) << PAGE_SHIFT) - 1;
-diff --git a/kernel/sched/core.c b/kernel/sched/core.c
-index d8d76a65cfdd..01a2489de94e 100644
---- a/kernel/sched/core.c
-+++ b/kernel/sched/core.c
-@@ -107,11 +107,12 @@ struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
- 		 *					[L] ->on_rq
- 		 *	RELEASE (rq->lock)
- 		 *
--		 * If we observe the old CPU in task_rq_lock, the acquire of
-+		 * If we observe the old CPU in task_rq_lock(), the acquire of
- 		 * the old rq->lock will fully serialize against the stores.
- 		 *
--		 * If we observe the new CPU in task_rq_lock, the acquire will
--		 * pair with the WMB to ensure we must then also see migrating.
-+		 * If we observe the new CPU in task_rq_lock(), the address
-+		 * dependency headed by '[L] rq = task_rq()' and the acquire
-+		 * will pair with the WMB to ensure we then also see migrating.
- 		 */
- 		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p))) {
- 			rq_pin_lock(rq, rf);
-@@ -928,7 +929,7 @@ static struct rq *move_queued_task(struct rq *rq, struct rq_flags *rf,
- {
- 	lockdep_assert_held(&rq->lock);
- 
--	p->on_rq = TASK_ON_RQ_MIGRATING;
-+	WRITE_ONCE(p->on_rq, TASK_ON_RQ_MIGRATING);
- 	dequeue_task(rq, p, DEQUEUE_NOCLOCK);
- 	set_task_cpu(p, new_cpu);
- 	rq_unlock(rq, rf);
-diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
-index de3de997e245..8039d62ae36e 100644
---- a/kernel/sched/debug.c
-+++ b/kernel/sched/debug.c
-@@ -315,6 +315,7 @@ void register_sched_domain_sysctl(void)
- {
- 	static struct ctl_table *cpu_entries;
- 	static struct ctl_table **cpu_idx;
-+	static bool init_done = false;
- 	char buf[32];
- 	int i;
+ 	if (change & MLX5E_PORT_BUFFER_CABLE_LEN) {
+ 		update_buffer = true;
+-		err = update_xoff_threshold(&port_buffer, xoff, mtu);
++		err = update_xoff_threshold(&port_buffer, xoff, max_mtu);
+ 		if (err)
+ 			return err;
+ 	}
+@@ -264,7 +269,7 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
+ 		if (err)
+ 			return err;
  
-@@ -344,7 +345,10 @@ void register_sched_domain_sysctl(void)
- 	if (!cpumask_available(sd_sysctl_cpus)) {
- 		if (!alloc_cpumask_var(&sd_sysctl_cpus, GFP_KERNEL))
- 			return;
-+	}
+-		err = update_buffer_lossy(mtu, pfc->pfc_en, buffer, xoff,
++		err = update_buffer_lossy(max_mtu, pfc->pfc_en, buffer, xoff,
+ 					  &port_buffer, &update_buffer);
+ 		if (err)
+ 			return err;
+@@ -276,8 +281,8 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
+ 		if (err)
+ 			return err;
  
-+	if (!init_done) {
-+		init_done = true;
- 		/* init to possible to not have holes in @cpu_entries */
- 		cpumask_copy(sd_sysctl_cpus, cpu_possible_mask);
+-		err = update_buffer_lossy(mtu, curr_pfc_en, prio2buffer, xoff,
+-					  &port_buffer, &update_buffer);
++		err = update_buffer_lossy(max_mtu, curr_pfc_en, prio2buffer,
++					  xoff, &port_buffer, &update_buffer);
+ 		if (err)
+ 			return err;
  	}
-diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
-index 310d0637fe4b..5e61a1a99e38 100644
---- a/kernel/sched/fair.c
-+++ b/kernel/sched/fair.c
-@@ -7713,10 +7713,10 @@ static void update_cfs_rq_h_load(struct cfs_rq *cfs_rq)
- 	if (cfs_rq->last_h_load_update == now)
- 		return;
+@@ -301,7 +306,7 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
+ 			return -EINVAL;
  
--	cfs_rq->h_load_next = NULL;
-+	WRITE_ONCE(cfs_rq->h_load_next, NULL);
- 	for_each_sched_entity(se) {
- 		cfs_rq = cfs_rq_of(se);
--		cfs_rq->h_load_next = se;
-+		WRITE_ONCE(cfs_rq->h_load_next, se);
- 		if (cfs_rq->last_h_load_update == now)
- 			break;
+ 		update_buffer = true;
+-		err = update_xoff_threshold(&port_buffer, xoff, mtu);
++		err = update_xoff_threshold(&port_buffer, xoff, max_mtu);
+ 		if (err)
+ 			return err;
  	}
-@@ -7726,7 +7726,7 @@ static void update_cfs_rq_h_load(struct cfs_rq *cfs_rq)
- 		cfs_rq->last_h_load_update = now;
+@@ -309,7 +314,7 @@ int mlx5e_port_manual_buffer_config(struct mlx5e_priv *priv,
+ 	/* Need to update buffer configuration if xoff value is changed */
+ 	if (!update_buffer && xoff != priv->dcbx.xoff) {
+ 		update_buffer = true;
+-		err = update_xoff_threshold(&port_buffer, xoff, mtu);
++		err = update_xoff_threshold(&port_buffer, xoff, max_mtu);
+ 		if (err)
+ 			return err;
  	}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_common.c b/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
+index 3078491cc0d0..1539cf3de5dc 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_common.c
+@@ -45,7 +45,9 @@ int mlx5e_create_tir(struct mlx5_core_dev *mdev,
+ 	if (err)
+ 		return err;
  
--	while ((se = cfs_rq->h_load_next) != NULL) {
-+	while ((se = READ_ONCE(cfs_rq->h_load_next)) != NULL) {
- 		load = cfs_rq->h_load;
- 		load = div64_ul(load * se->avg.load_avg,
- 			cfs_rq_load_avg(cfs_rq) + 1);
-diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
-index d04530bf251f..425a5589e5f6 100644
---- a/kernel/sched/sched.h
-+++ b/kernel/sched/sched.h
-@@ -1460,9 +1460,9 @@ static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
- 	 */
- 	smp_wmb();
- #ifdef CONFIG_THREAD_INFO_IN_TASK
--	p->cpu = cpu;
-+	WRITE_ONCE(p->cpu, cpu);
- #else
--	task_thread_info(p)->cpu = cpu;
-+	WRITE_ONCE(task_thread_info(p)->cpu, cpu);
- #endif
- 	p->wake_cpu = cpu;
- #endif
-@@ -1563,7 +1563,7 @@ static inline int task_on_rq_queued(struct task_struct *p)
++	mutex_lock(&mdev->mlx5e_res.td.list_lock);
+ 	list_add(&tir->list, &mdev->mlx5e_res.td.tirs_list);
++	mutex_unlock(&mdev->mlx5e_res.td.list_lock);
  
- static inline int task_on_rq_migrating(struct task_struct *p)
- {
--	return p->on_rq == TASK_ON_RQ_MIGRATING;
-+	return READ_ONCE(p->on_rq) == TASK_ON_RQ_MIGRATING;
+ 	return 0;
  }
- 
- /*
-diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
-index 3f35ba1d8fde..efca2489d881 100644
---- a/kernel/sched/topology.c
-+++ b/kernel/sched/topology.c
-@@ -676,7 +676,7 @@ cpu_attach_domain(struct sched_domain *sd, struct root_domain *rd, int cpu)
+@@ -53,8 +55,10 @@ int mlx5e_create_tir(struct mlx5_core_dev *mdev,
+ void mlx5e_destroy_tir(struct mlx5_core_dev *mdev,
+ 		       struct mlx5e_tir *tir)
+ {
++	mutex_lock(&mdev->mlx5e_res.td.list_lock);
+ 	mlx5_core_destroy_tir(mdev, tir->tirn);
+ 	list_del(&tir->list);
++	mutex_unlock(&mdev->mlx5e_res.td.list_lock);
  }
  
- struct s_data {
--	struct sched_domain ** __percpu sd;
-+	struct sched_domain * __percpu *sd;
- 	struct root_domain	*rd;
- };
+ static int mlx5e_create_mkey(struct mlx5_core_dev *mdev, u32 pdn,
+@@ -114,6 +118,7 @@ int mlx5e_create_mdev_resources(struct mlx5_core_dev *mdev)
+ 	}
  
-diff --git a/kernel/sysctl.c b/kernel/sysctl.c
-index ba4d9e85feb8..28ec71d914c7 100644
---- a/kernel/sysctl.c
-+++ b/kernel/sysctl.c
-@@ -127,6 +127,7 @@ static int __maybe_unused one = 1;
- static int __maybe_unused two = 2;
- static int __maybe_unused four = 4;
- static unsigned long one_ul = 1;
-+static unsigned long long_max = LONG_MAX;
- static int one_hundred = 100;
- static int one_thousand = 1000;
- #ifdef CONFIG_PRINTK
-@@ -1722,6 +1723,8 @@ static struct ctl_table fs_table[] = {
- 		.maxlen		= sizeof(files_stat.max_files),
- 		.mode		= 0644,
- 		.proc_handler	= proc_doulongvec_minmax,
-+		.extra1		= &zero,
-+		.extra2		= &long_max,
- 	},
- 	{
- 		.procname	= "nr_open",
-@@ -2579,7 +2582,16 @@ static int do_proc_dointvec_minmax_conv(bool *negp, unsigned long *lvalp,
- {
- 	struct do_proc_dointvec_minmax_conv_param *param = data;
- 	if (write) {
--		int val = *negp ? -*lvalp : *lvalp;
-+		int val;
-+		if (*negp) {
-+			if (*lvalp > (unsigned long) INT_MAX + 1)
-+				return -EINVAL;
-+			val = -*lvalp;
-+		} else {
-+			if (*lvalp > (unsigned long) INT_MAX)
-+				return -EINVAL;
-+			val = *lvalp;
-+		}
- 		if ((param->min && *param->min > val) ||
- 		    (param->max && *param->max < val))
- 			return -EINVAL;
-diff --git a/kernel/time/alarmtimer.c b/kernel/time/alarmtimer.c
-index 2c97e8c2d29f..0519a8805aab 100644
---- a/kernel/time/alarmtimer.c
-+++ b/kernel/time/alarmtimer.c
-@@ -594,7 +594,7 @@ static ktime_t alarm_timer_remaining(struct k_itimer *timr, ktime_t now)
- {
- 	struct alarm *alarm = &timr->it.alarm.alarmtimer;
+ 	INIT_LIST_HEAD(&mdev->mlx5e_res.td.tirs_list);
++	mutex_init(&mdev->mlx5e_res.td.list_lock);
  
--	return ktime_sub(now, alarm->node.expires);
-+	return ktime_sub(alarm->node.expires, now);
- }
+ 	return 0;
  
- /**
-diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
-index 06e864a334bb..b49affb4666b 100644
---- a/kernel/trace/ring_buffer.c
-+++ b/kernel/trace/ring_buffer.c
-@@ -4205,6 +4205,7 @@ EXPORT_SYMBOL_GPL(ring_buffer_consume);
-  * ring_buffer_read_prepare - Prepare for a non consuming read of the buffer
-  * @buffer: The ring buffer to read from
-  * @cpu: The cpu buffer to iterate over
-+ * @flags: gfp flags to use for memory allocation
-  *
-  * This performs the initial preparations necessary to iterate
-  * through the buffer.  Memory is allocated, buffer recording
-@@ -4222,7 +4223,7 @@ EXPORT_SYMBOL_GPL(ring_buffer_consume);
-  * This overall must be paired with ring_buffer_read_finish.
-  */
- struct ring_buffer_iter *
--ring_buffer_read_prepare(struct ring_buffer *buffer, int cpu)
-+ring_buffer_read_prepare(struct ring_buffer *buffer, int cpu, gfp_t flags)
+@@ -141,15 +146,17 @@ int mlx5e_refresh_tirs(struct mlx5e_priv *priv, bool enable_uc_lb)
  {
- 	struct ring_buffer_per_cpu *cpu_buffer;
- 	struct ring_buffer_iter *iter;
-@@ -4230,7 +4231,7 @@ ring_buffer_read_prepare(struct ring_buffer *buffer, int cpu)
- 	if (!cpumask_test_cpu(cpu, buffer->cpumask))
- 		return NULL;
- 
--	iter = kmalloc(sizeof(*iter), GFP_KERNEL);
-+	iter = kmalloc(sizeof(*iter), flags);
- 	if (!iter)
- 		return NULL;
- 
-diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
-index c4238b441624..89158aa93fa6 100644
---- a/kernel/trace/trace.c
-+++ b/kernel/trace/trace.c
-@@ -3904,7 +3904,8 @@ __tracing_open(struct inode *inode, struct file *file, bool snapshot)
- 	if (iter->cpu_file == RING_BUFFER_ALL_CPUS) {
- 		for_each_tracing_cpu(cpu) {
- 			iter->buffer_iter[cpu] =
--				ring_buffer_read_prepare(iter->trace_buffer->buffer, cpu);
-+				ring_buffer_read_prepare(iter->trace_buffer->buffer,
-+							 cpu, GFP_KERNEL);
- 		}
- 		ring_buffer_read_prepare_sync();
- 		for_each_tracing_cpu(cpu) {
-@@ -3914,7 +3915,8 @@ __tracing_open(struct inode *inode, struct file *file, bool snapshot)
- 	} else {
- 		cpu = iter->cpu_file;
- 		iter->buffer_iter[cpu] =
--			ring_buffer_read_prepare(iter->trace_buffer->buffer, cpu);
-+			ring_buffer_read_prepare(iter->trace_buffer->buffer,
-+						 cpu, GFP_KERNEL);
- 		ring_buffer_read_prepare_sync();
- 		ring_buffer_read_start(iter->buffer_iter[cpu]);
- 		tracing_iter_reset(iter, cpu);
-@@ -5626,7 +5628,6 @@ out:
- 	return ret;
+ 	struct mlx5_core_dev *mdev = priv->mdev;
+ 	struct mlx5e_tir *tir;
+-	int err  = -ENOMEM;
++	int err  = 0;
+ 	u32 tirn = 0;
+ 	int inlen;
+ 	void *in;
  
- fail:
--	kfree(iter->trace);
- 	kfree(iter);
- 	__trace_array_put(tr);
- 	mutex_unlock(&trace_types_lock);
-diff --git a/kernel/trace/trace_dynevent.c b/kernel/trace/trace_dynevent.c
-index dd1f43588d70..fa100ed3b4de 100644
---- a/kernel/trace/trace_dynevent.c
-+++ b/kernel/trace/trace_dynevent.c
-@@ -74,7 +74,7 @@ int dyn_event_release(int argc, char **argv, struct dyn_event_operations *type)
- static int create_dyn_event(int argc, char **argv)
- {
- 	struct dyn_event_operations *ops;
--	int ret;
-+	int ret = -ENODEV;
- 
- 	if (argv[0][0] == '-' || argv[0][0] == '!')
- 		return dyn_event_release(argc, argv, NULL);
-diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c
-index 76217bbef815..4629a6104474 100644
---- a/kernel/trace/trace_event_perf.c
-+++ b/kernel/trace/trace_event_perf.c
-@@ -299,15 +299,13 @@ int perf_uprobe_init(struct perf_event *p_event,
- 
- 	if (!p_event->attr.uprobe_path)
- 		return -EINVAL;
--	path = kzalloc(PATH_MAX, GFP_KERNEL);
--	if (!path)
--		return -ENOMEM;
--	ret = strncpy_from_user(
--		path, u64_to_user_ptr(p_event->attr.uprobe_path), PATH_MAX);
--	if (ret == PATH_MAX)
--		return -E2BIG;
--	if (ret < 0)
--		goto out;
-+
-+	path = strndup_user(u64_to_user_ptr(p_event->attr.uprobe_path),
-+			    PATH_MAX);
-+	if (IS_ERR(path)) {
-+		ret = PTR_ERR(path);
-+		return (ret == -EINVAL) ? -E2BIG : ret;
-+	}
- 	if (path[0] == '\0') {
- 		ret = -EINVAL;
+ 	inlen = MLX5_ST_SZ_BYTES(modify_tir_in);
+ 	in = kvzalloc(inlen, GFP_KERNEL);
+-	if (!in)
++	if (!in) {
++		err = -ENOMEM;
  		goto out;
-diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c
-index 27821480105e..217ef481fbbb 100644
---- a/kernel/trace/trace_events_filter.c
-+++ b/kernel/trace/trace_events_filter.c
-@@ -1301,7 +1301,7 @@ static int parse_pred(const char *str, void *data,
- 		/* go past the last quote */
- 		i++;
- 
--	} else if (isdigit(str[i])) {
-+	} else if (isdigit(str[i]) || str[i] == '-') {
- 
- 		/* Make sure the field is not a string */
- 		if (is_string_field(field)) {
-@@ -1314,6 +1314,9 @@ static int parse_pred(const char *str, void *data,
- 			goto err_free;
- 		}
++	}
  
-+		if (str[i] == '-')
-+			i++;
-+
- 		/* We allow 0xDEADBEEF */
- 		while (isalnum(str[i]))
- 			i++;
-diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
-index 449d90cfa151..55b72b1c63a0 100644
---- a/kernel/trace/trace_events_hist.c
-+++ b/kernel/trace/trace_events_hist.c
-@@ -4695,9 +4695,10 @@ static inline void add_to_key(char *compound_key, void *key,
- 		/* ensure NULL-termination */
- 		if (size > key_field->size - 1)
- 			size = key_field->size - 1;
--	}
+ 	if (enable_uc_lb)
+ 		MLX5_SET(modify_tir_in, in, ctx.self_lb_block,
+@@ -157,6 +164,7 @@ int mlx5e_refresh_tirs(struct mlx5e_priv *priv, bool enable_uc_lb)
  
--	memcpy(compound_key + key_field->offset, key, size);
-+		strncpy(compound_key + key_field->offset, (char *)key, size);
-+	} else
-+		memcpy(compound_key + key_field->offset, key, size);
- }
+ 	MLX5_SET(modify_tir_in, in, bitmask.self_lb_en, 1);
  
- static void
-diff --git a/kernel/trace/trace_kdb.c b/kernel/trace/trace_kdb.c
-index d953c163a079..810d78a8d14c 100644
---- a/kernel/trace/trace_kdb.c
-+++ b/kernel/trace/trace_kdb.c
-@@ -51,14 +51,16 @@ static void ftrace_dump_buf(int skip_lines, long cpu_file)
- 	if (cpu_file == RING_BUFFER_ALL_CPUS) {
- 		for_each_tracing_cpu(cpu) {
- 			iter.buffer_iter[cpu] =
--			ring_buffer_read_prepare(iter.trace_buffer->buffer, cpu);
-+			ring_buffer_read_prepare(iter.trace_buffer->buffer,
-+						 cpu, GFP_ATOMIC);
- 			ring_buffer_read_start(iter.buffer_iter[cpu]);
- 			tracing_iter_reset(&iter, cpu);
- 		}
- 	} else {
- 		iter.cpu_file = cpu_file;
- 		iter.buffer_iter[cpu_file] =
--			ring_buffer_read_prepare(iter.trace_buffer->buffer, cpu_file);
-+			ring_buffer_read_prepare(iter.trace_buffer->buffer,
-+						 cpu_file, GFP_ATOMIC);
- 		ring_buffer_read_start(iter.buffer_iter[cpu_file]);
- 		tracing_iter_reset(&iter, cpu_file);
- 	}
-diff --git a/kernel/watchdog.c b/kernel/watchdog.c
-index 977918d5d350..bbc4940f21af 100644
---- a/kernel/watchdog.c
-+++ b/kernel/watchdog.c
-@@ -547,13 +547,15 @@ static void softlockup_start_all(void)
++	mutex_lock(&mdev->mlx5e_res.td.list_lock);
+ 	list_for_each_entry(tir, &mdev->mlx5e_res.td.tirs_list, list) {
+ 		tirn = tir->tirn;
+ 		err = mlx5_core_modify_tir(mdev, tirn, in, inlen);
+@@ -168,6 +176,7 @@ out:
+ 	kvfree(in);
+ 	if (err)
+ 		netdev_err(priv->netdev, "refresh tir(0x%x) failed, %d\n", tirn, err);
++	mutex_unlock(&mdev->mlx5e_res.td.list_lock);
  
- int lockup_detector_online_cpu(unsigned int cpu)
- {
--	watchdog_enable(cpu);
-+	if (cpumask_test_cpu(cpu, &watchdog_allowed_mask))
-+		watchdog_enable(cpu);
- 	return 0;
+ 	return err;
  }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fpga/tls.c b/drivers/net/ethernet/mellanox/mlx5/core/fpga/tls.c
+index 5cf5f2a9d51f..8de64e88c670 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fpga/tls.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fpga/tls.c
+@@ -217,15 +217,21 @@ int mlx5_fpga_tls_resync_rx(struct mlx5_core_dev *mdev, u32 handle, u32 seq,
+ 	void *cmd;
+ 	int ret;
  
- int lockup_detector_offline_cpu(unsigned int cpu)
- {
--	watchdog_disable(cpu);
-+	if (cpumask_test_cpu(cpu, &watchdog_allowed_mask))
-+		watchdog_disable(cpu);
- 	return 0;
- }
++	rcu_read_lock();
++	flow = idr_find(&mdev->fpga->tls->rx_idr, ntohl(handle));
++	rcu_read_unlock();
++
++	if (!flow) {
++		WARN_ONCE(1, "Received NULL pointer for handle\n");
++		return -EINVAL;
++	}
++
+ 	buf = kzalloc(size, GFP_ATOMIC);
+ 	if (!buf)
+ 		return -ENOMEM;
+ 
+ 	cmd = (buf + 1);
+ 
+-	rcu_read_lock();
+-	flow = idr_find(&mdev->fpga->tls->rx_idr, ntohl(handle));
+-	rcu_read_unlock();
+ 	mlx5_fpga_tls_flow_to_cmd(flow, cmd);
  
-diff --git a/lib/bsearch.c b/lib/bsearch.c
-index 18b445b010c3..82512fe7b33c 100644
---- a/lib/bsearch.c
-+++ b/lib/bsearch.c
-@@ -11,6 +11,7 @@
+ 	MLX5_SET(tls_cmd, cmd, swid, ntohl(handle));
+@@ -238,6 +244,8 @@ int mlx5_fpga_tls_resync_rx(struct mlx5_core_dev *mdev, u32 handle, u32 seq,
+ 	buf->complete = mlx_tls_kfree_complete;
  
- #include <linux/export.h>
- #include <linux/bsearch.h>
-+#include <linux/kprobes.h>
+ 	ret = mlx5_fpga_sbu_conn_sendmsg(mdev->fpga->tls->conn, buf);
++	if (ret < 0)
++		kfree(buf);
  
- /*
-  * bsearch - binary search an array of elements
-@@ -53,3 +54,4 @@ void *bsearch(const void *key, const void *base, size_t num, size_t size,
- 	return NULL;
+ 	return ret;
  }
- EXPORT_SYMBOL(bsearch);
-+NOKPROBE_SYMBOL(bsearch);
-diff --git a/lib/raid6/Makefile b/lib/raid6/Makefile
-index 4e90d443d1b0..e723eacf7868 100644
---- a/lib/raid6/Makefile
-+++ b/lib/raid6/Makefile
-@@ -39,7 +39,7 @@ endif
- ifeq ($(CONFIG_KERNEL_MODE_NEON),y)
- NEON_FLAGS := -ffreestanding
- ifeq ($(ARCH),arm)
--NEON_FLAGS += -mfloat-abi=softfp -mfpu=neon
-+NEON_FLAGS += -march=armv7-a -mfloat-abi=softfp -mfpu=neon
- endif
- CFLAGS_recov_neon_inner.o += $(NEON_FLAGS)
- ifeq ($(ARCH),arm64)
-diff --git a/lib/rhashtable.c b/lib/rhashtable.c
-index 852ffa5160f1..4edcf3310513 100644
---- a/lib/rhashtable.c
-+++ b/lib/rhashtable.c
-@@ -416,8 +416,12 @@ static void rht_deferred_worker(struct work_struct *work)
- 	else if (tbl->nest)
- 		err = rhashtable_rehash_alloc(ht, tbl, tbl->size);
- 
--	if (!err)
--		err = rhashtable_rehash_table(ht);
-+	if (!err || err == -EEXIST) {
-+		int nerr;
-+
-+		nerr = rhashtable_rehash_table(ht);
-+		err = err ?: nerr;
-+	}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index be81b319b0dc..694edd899322 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -163,26 +163,6 @@ static struct mlx5_profile profile[] = {
+ 			.size	= 8,
+ 			.limit	= 4
+ 		},
+-		.mr_cache[16]	= {
+-			.size	= 8,
+-			.limit	= 4
+-		},
+-		.mr_cache[17]	= {
+-			.size	= 8,
+-			.limit	= 4
+-		},
+-		.mr_cache[18]	= {
+-			.size	= 8,
+-			.limit	= 4
+-		},
+-		.mr_cache[19]	= {
+-			.size	= 4,
+-			.limit	= 2
+-		},
+-		.mr_cache[20]	= {
+-			.size	= 4,
+-			.limit	= 2
+-		},
+ 	},
+ };
  
- 	mutex_unlock(&ht->mutex);
+diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_repr.c b/drivers/net/ethernet/netronome/nfp/nfp_net_repr.c
+index 69d7aebda09b..73db94e55fd0 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfp_net_repr.c
++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_repr.c
+@@ -196,7 +196,7 @@ static netdev_tx_t nfp_repr_xmit(struct sk_buff *skb, struct net_device *netdev)
+ 	ret = dev_queue_xmit(skb);
+ 	nfp_repr_inc_tx_stats(netdev, len, ret);
  
-diff --git a/lib/string.c b/lib/string.c
-index 38e4ca08e757..3ab861c1a857 100644
---- a/lib/string.c
-+++ b/lib/string.c
-@@ -866,6 +866,26 @@ __visible int memcmp(const void *cs, const void *ct, size_t count)
- EXPORT_SYMBOL(memcmp);
- #endif
+-	return ret;
++	return NETDEV_TX_OK;
+ }
  
-+#ifndef __HAVE_ARCH_BCMP
-+/**
-+ * bcmp - returns 0 if and only if the buffers have identical contents.
-+ * @a: pointer to first buffer.
-+ * @b: pointer to second buffer.
-+ * @len: size of buffers.
-+ *
-+ * The sign or magnitude of a non-zero return value has no particular
-+ * meaning, and architectures may implement their own more efficient bcmp(). So
-+ * while this particular implementation is a simple (tail) call to memcmp, do
-+ * not rely on anything but whether the return value is zero or non-zero.
-+ */
-+#undef bcmp
-+int bcmp(const void *a, const void *b, size_t len)
-+{
-+	return memcmp(a, b, len);
-+}
-+EXPORT_SYMBOL(bcmp);
-+#endif
-+
- #ifndef __HAVE_ARCH_MEMSCAN
- /**
-  * memscan - Find a character in an area of memory.
-diff --git a/mm/cma.c b/mm/cma.c
-index c7b39dd3b4f6..f4f3a8a57d86 100644
---- a/mm/cma.c
-+++ b/mm/cma.c
-@@ -353,12 +353,14 @@ int __init cma_declare_contiguous(phys_addr_t base,
- 
- 	ret = cma_init_reserved_mem(base, size, order_per_bit, name, res_cma);
- 	if (ret)
--		goto err;
-+		goto free_mem;
- 
- 	pr_info("Reserved %ld MiB at %pa\n", (unsigned long)size / SZ_1M,
- 		&base);
- 	return 0;
+ static int nfp_repr_stop(struct net_device *netdev)
+@@ -384,7 +384,7 @@ int nfp_repr_init(struct nfp_app *app, struct net_device *netdev,
+ 	netdev->features &= ~(NETIF_F_TSO | NETIF_F_TSO6);
+ 	netdev->gso_max_segs = NFP_NET_LSO_MAX_SEGS;
  
-+free_mem:
-+	memblock_free(base, size);
- err:
- 	pr_err("Failed to reserve %ld MiB\n", (unsigned long)size / SZ_1M);
- 	return ret;
-diff --git a/mm/debug.c b/mm/debug.c
-index 1611cf00a137..854d5f84047d 100644
---- a/mm/debug.c
-+++ b/mm/debug.c
-@@ -79,7 +79,7 @@ void __dump_page(struct page *page, const char *reason)
- 		pr_warn("ksm ");
- 	else if (mapping) {
- 		pr_warn("%ps ", mapping->a_ops);
--		if (mapping->host->i_dentry.first) {
-+		if (mapping->host && mapping->host->i_dentry.first) {
- 			struct dentry *dentry;
- 			dentry = container_of(mapping->host->i_dentry.first, struct dentry, d_u.d_alias);
- 			pr_warn("name:\"%pd\" ", dentry);
-diff --git a/mm/huge_memory.c b/mm/huge_memory.c
-index faf357eaf0ce..8b03c698f86e 100644
---- a/mm/huge_memory.c
-+++ b/mm/huge_memory.c
-@@ -753,6 +753,21 @@ static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
- 	spinlock_t *ptl;
+-	netdev->priv_flags |= IFF_NO_QUEUE;
++	netdev->priv_flags |= IFF_NO_QUEUE | IFF_DISABLE_NETPOLL;
+ 	netdev->features |= NETIF_F_LLTX;
  
- 	ptl = pmd_lock(mm, pmd);
-+	if (!pmd_none(*pmd)) {
-+		if (write) {
-+			if (pmd_pfn(*pmd) != pfn_t_to_pfn(pfn)) {
-+				WARN_ON_ONCE(!is_huge_zero_pmd(*pmd));
-+				goto out_unlock;
-+			}
-+			entry = pmd_mkyoung(*pmd);
-+			entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
-+			if (pmdp_set_access_flags(vma, addr, pmd, entry, 1))
-+				update_mmu_cache_pmd(vma, addr, pmd);
-+		}
-+
-+		goto out_unlock;
-+	}
-+
- 	entry = pmd_mkhuge(pfn_t_pmd(pfn, prot));
- 	if (pfn_t_devmap(pfn))
- 		entry = pmd_mkdevmap(entry);
-@@ -764,11 +779,16 @@ static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
- 	if (pgtable) {
- 		pgtable_trans_huge_deposit(mm, pmd, pgtable);
- 		mm_inc_nr_ptes(mm);
-+		pgtable = NULL;
- 	}
+ 	if (nfp_app_has_tc(app)) {
+diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
+index f55d177ae894..365cddbfc684 100644
+--- a/drivers/net/ethernet/realtek/r8169.c
++++ b/drivers/net/ethernet/realtek/r8169.c
+@@ -28,6 +28,7 @@
+ #include <linux/pm_runtime.h>
+ #include <linux/firmware.h>
+ #include <linux/prefetch.h>
++#include <linux/pci-aspm.h>
+ #include <linux/ipv6.h>
+ #include <net/ip6_checksum.h>
  
- 	set_pmd_at(mm, addr, pmd, entry);
- 	update_mmu_cache_pmd(vma, addr, pmd);
-+
-+out_unlock:
- 	spin_unlock(ptl);
-+	if (pgtable)
-+		pte_free(mm, pgtable);
- }
+@@ -5332,7 +5333,7 @@ static void rtl_hw_start_8168(struct rtl8169_private *tp)
+ 	tp->cp_cmd |= PktCntrDisable | INTT_1;
+ 	RTL_W16(tp, CPlusCmd, tp->cp_cmd);
  
- vm_fault_t vmf_insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
-@@ -819,6 +839,20 @@ static void insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
- 	spinlock_t *ptl;
+-	RTL_W16(tp, IntrMitigate, 0x5151);
++	RTL_W16(tp, IntrMitigate, 0x5100);
  
- 	ptl = pud_lock(mm, pud);
-+	if (!pud_none(*pud)) {
-+		if (write) {
-+			if (pud_pfn(*pud) != pfn_t_to_pfn(pfn)) {
-+				WARN_ON_ONCE(!is_huge_zero_pud(*pud));
-+				goto out_unlock;
-+			}
-+			entry = pud_mkyoung(*pud);
-+			entry = maybe_pud_mkwrite(pud_mkdirty(entry), vma);
-+			if (pudp_set_access_flags(vma, addr, pud, entry, 1))
-+				update_mmu_cache_pud(vma, addr, pud);
-+		}
-+		goto out_unlock;
-+	}
-+
- 	entry = pud_mkhuge(pfn_t_pud(pfn, prot));
- 	if (pfn_t_devmap(pfn))
- 		entry = pud_mkdevmap(entry);
-@@ -828,6 +862,8 @@ static void insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
+ 	/* Work around for RxFIFO overflow. */
+ 	if (tp->mac_version == RTL_GIGA_MAC_VER_11) {
+@@ -7224,6 +7225,11 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 			return rc;
  	}
- 	set_pud_at(mm, addr, pud, entry);
- 	update_mmu_cache_pud(vma, addr, pud);
+ 
++	/* Disable ASPM completely as that cause random device stop working
++	 * problems as well as full system hangs for some PCIe devices users.
++	 */
++	pci_disable_link_state(pdev, PCIE_LINK_STATE_L0S | PCIE_LINK_STATE_L1);
 +
-+out_unlock:
- 	spin_unlock(ptl);
- }
+ 	/* enable device (incl. PCI PM wakeup and hotplug setup) */
+ 	rc = pcim_enable_device(pdev);
+ 	if (rc < 0) {
+diff --git a/drivers/net/hyperv/hyperv_net.h b/drivers/net/hyperv/hyperv_net.h
+index e859ae2e42d5..49f41b64077b 100644
+--- a/drivers/net/hyperv/hyperv_net.h
++++ b/drivers/net/hyperv/hyperv_net.h
+@@ -987,6 +987,7 @@ struct netvsc_device {
  
-diff --git a/mm/kasan/common.c b/mm/kasan/common.c
-index 09b534fbba17..80bbe62b16cd 100644
---- a/mm/kasan/common.c
-+++ b/mm/kasan/common.c
-@@ -14,6 +14,8 @@
-  *
-  */
+ 	wait_queue_head_t wait_drain;
+ 	bool destroy;
++	bool tx_disable; /* if true, do not wake up queue again */
  
-+#define __KASAN_INTERNAL
-+
- #include <linux/export.h>
- #include <linux/interrupt.h>
- #include <linux/init.h>
-diff --git a/mm/memcontrol.c b/mm/memcontrol.c
-index af7f18b32389..5bbf2de02a0f 100644
---- a/mm/memcontrol.c
-+++ b/mm/memcontrol.c
-@@ -248,6 +248,12 @@ enum res_type {
- 	     iter != NULL;				\
- 	     iter = mem_cgroup_iter(NULL, iter, NULL))
+ 	/* Receive buffer allocated by us but manages by NetVSP */
+ 	void *recv_buf;
+diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
+index 813d195bbd57..e0dce373cdd9 100644
+--- a/drivers/net/hyperv/netvsc.c
++++ b/drivers/net/hyperv/netvsc.c
+@@ -110,6 +110,7 @@ static struct netvsc_device *alloc_net_device(void)
  
-+static inline bool should_force_charge(void)
-+{
-+	return tsk_is_oom_victim(current) || fatal_signal_pending(current) ||
-+		(current->flags & PF_EXITING);
-+}
-+
- /* Some nice accessors for the vmpressure. */
- struct vmpressure *memcg_to_vmpressure(struct mem_cgroup *memcg)
- {
-@@ -1389,8 +1395,13 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
- 	};
- 	bool ret;
+ 	init_waitqueue_head(&net_device->wait_drain);
+ 	net_device->destroy = false;
++	net_device->tx_disable = false;
  
--	mutex_lock(&oom_lock);
--	ret = out_of_memory(&oc);
-+	if (mutex_lock_killable(&oom_lock))
-+		return true;
-+	/*
-+	 * A few threads which were not waiting at mutex_lock_killable() can
-+	 * fail to bail out. Therefore, check again after holding oom_lock.
-+	 */
-+	ret = should_force_charge() || out_of_memory(&oc);
- 	mutex_unlock(&oom_lock);
- 	return ret;
- }
-@@ -2209,9 +2220,7 @@ retry:
- 	 * bypass the last charges so that they can exit quickly and
- 	 * free their memory.
- 	 */
--	if (unlikely(tsk_is_oom_victim(current) ||
--		     fatal_signal_pending(current) ||
--		     current->flags & PF_EXITING))
-+	if (unlikely(should_force_charge()))
- 		goto force;
+ 	net_device->max_pkt = RNDIS_MAX_PKT_DEFAULT;
+ 	net_device->pkt_align = RNDIS_PKT_ALIGN_DEFAULT;
+@@ -719,7 +720,7 @@ static void netvsc_send_tx_complete(struct net_device *ndev,
+ 	} else {
+ 		struct netdev_queue *txq = netdev_get_tx_queue(ndev, q_idx);
  
- 	/*
-@@ -3873,6 +3882,22 @@ struct wb_domain *mem_cgroup_wb_domain(struct bdi_writeback *wb)
- 	return &memcg->cgwb_domain;
+-		if (netif_tx_queue_stopped(txq) &&
++		if (netif_tx_queue_stopped(txq) && !net_device->tx_disable &&
+ 		    (hv_get_avail_to_write_percent(&channel->outbound) >
+ 		     RING_AVAIL_PERCENT_HIWATER || queue_sends < 1)) {
+ 			netif_tx_wake_queue(txq);
+@@ -874,7 +875,8 @@ static inline int netvsc_send_pkt(
+ 	} else if (ret == -EAGAIN) {
+ 		netif_tx_stop_queue(txq);
+ 		ndev_ctx->eth_stats.stop_queue++;
+-		if (atomic_read(&nvchan->queue_sends) < 1) {
++		if (atomic_read(&nvchan->queue_sends) < 1 &&
++		    !net_device->tx_disable) {
+ 			netif_tx_wake_queue(txq);
+ 			ndev_ctx->eth_stats.wake_queue++;
+ 			ret = -ENOSPC;
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index cf4897043e83..b20fb0fb595b 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -109,6 +109,15 @@ static void netvsc_set_rx_mode(struct net_device *net)
+ 	rcu_read_unlock();
  }
  
-+/*
-+ * idx can be of type enum memcg_stat_item or node_stat_item.
-+ * Keep in sync with memcg_exact_page().
-+ */
-+static unsigned long memcg_exact_page_state(struct mem_cgroup *memcg, int idx)
++static void netvsc_tx_enable(struct netvsc_device *nvscdev,
++			     struct net_device *ndev)
 +{
-+	long x = atomic_long_read(&memcg->stat[idx]);
-+	int cpu;
++	nvscdev->tx_disable = false;
++	virt_wmb(); /* ensure queue wake up mechanism is on */
 +
-+	for_each_online_cpu(cpu)
-+		x += per_cpu_ptr(memcg->stat_cpu, cpu)->count[idx];
-+	if (x < 0)
-+		x = 0;
-+	return x;
++	netif_tx_wake_all_queues(ndev);
 +}
 +
- /**
-  * mem_cgroup_wb_stats - retrieve writeback related stats from its memcg
-  * @wb: bdi_writeback in question
-@@ -3898,10 +3923,10 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages,
- 	struct mem_cgroup *memcg = mem_cgroup_from_css(wb->memcg_css);
- 	struct mem_cgroup *parent;
- 
--	*pdirty = memcg_page_state(memcg, NR_FILE_DIRTY);
-+	*pdirty = memcg_exact_page_state(memcg, NR_FILE_DIRTY);
- 
- 	/* this should eventually include NR_UNSTABLE_NFS */
--	*pwriteback = memcg_page_state(memcg, NR_WRITEBACK);
-+	*pwriteback = memcg_exact_page_state(memcg, NR_WRITEBACK);
- 	*pfilepages = mem_cgroup_nr_lru_pages(memcg, (1 << LRU_INACTIVE_FILE) |
- 						     (1 << LRU_ACTIVE_FILE));
- 	*pheadroom = PAGE_COUNTER_MAX;
-diff --git a/mm/memory-failure.c b/mm/memory-failure.c
-index 831be5ff5f4d..fc8b51744579 100644
---- a/mm/memory-failure.c
-+++ b/mm/memory-failure.c
-@@ -1825,19 +1825,17 @@ static int soft_offline_in_use_page(struct page *page, int flags)
- 	struct page *hpage = compound_head(page);
- 
- 	if (!PageHuge(page) && PageTransHuge(hpage)) {
--		lock_page(hpage);
--		if (!PageAnon(hpage) || unlikely(split_huge_page(hpage))) {
--			unlock_page(hpage);
--			if (!PageAnon(hpage))
-+		lock_page(page);
-+		if (!PageAnon(page) || unlikely(split_huge_page(page))) {
-+			unlock_page(page);
-+			if (!PageAnon(page))
- 				pr_info("soft offline: %#lx: non anonymous thp\n", page_to_pfn(page));
- 			else
- 				pr_info("soft offline: %#lx: thp split failed\n", page_to_pfn(page));
--			put_hwpoison_page(hpage);
-+			put_hwpoison_page(page);
- 			return -EBUSY;
- 		}
--		unlock_page(hpage);
--		get_hwpoison_page(page);
--		put_hwpoison_page(hpage);
-+		unlock_page(page);
+ static int netvsc_open(struct net_device *net)
+ {
+ 	struct net_device_context *ndev_ctx = netdev_priv(net);
+@@ -129,7 +138,7 @@ static int netvsc_open(struct net_device *net)
+ 	rdev = nvdev->extension;
+ 	if (!rdev->link_state) {
+ 		netif_carrier_on(net);
+-		netif_tx_wake_all_queues(net);
++		netvsc_tx_enable(nvdev, net);
  	}
  
- 	/*
-diff --git a/mm/memory.c b/mm/memory.c
-index e11ca9dd823f..8d3f38fa530d 100644
---- a/mm/memory.c
-+++ b/mm/memory.c
-@@ -1546,10 +1546,12 @@ static vm_fault_t insert_pfn(struct vm_area_struct *vma, unsigned long addr,
- 				WARN_ON_ONCE(!is_zero_pfn(pte_pfn(*pte)));
- 				goto out_unlock;
- 			}
--			entry = *pte;
--			goto out_mkwrite;
--		} else
--			goto out_unlock;
-+			entry = pte_mkyoung(*pte);
-+			entry = maybe_mkwrite(pte_mkdirty(entry), vma);
-+			if (ptep_set_access_flags(vma, addr, pte, entry, 1))
-+				update_mmu_cache(vma, addr, pte);
-+		}
-+		goto out_unlock;
+ 	if (vf_netdev) {
+@@ -184,6 +193,17 @@ static int netvsc_wait_until_empty(struct netvsc_device *nvdev)
  	}
+ }
  
- 	/* Ok, finally just insert the thing.. */
-@@ -1558,7 +1560,6 @@ static vm_fault_t insert_pfn(struct vm_area_struct *vma, unsigned long addr,
- 	else
- 		entry = pte_mkspecial(pfn_t_pte(pfn, prot));
- 
--out_mkwrite:
- 	if (mkwrite) {
- 		entry = pte_mkyoung(entry);
- 		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
-@@ -3517,10 +3518,13 @@ static vm_fault_t do_shared_fault(struct vm_fault *vmf)
-  * but allow concurrent faults).
-  * The mmap_sem may have been released depending on flags and our
-  * return value.  See filemap_fault() and __lock_page_or_retry().
-+ * If mmap_sem is released, vma may become invalid (for example
-+ * by other thread calling munmap()).
-  */
- static vm_fault_t do_fault(struct vm_fault *vmf)
- {
- 	struct vm_area_struct *vma = vmf->vma;
-+	struct mm_struct *vm_mm = vma->vm_mm;
- 	vm_fault_t ret;
- 
- 	/*
-@@ -3561,7 +3565,7 @@ static vm_fault_t do_fault(struct vm_fault *vmf)
- 
- 	/* preallocated pagetable is unused: free it */
- 	if (vmf->prealloc_pte) {
--		pte_free(vma->vm_mm, vmf->prealloc_pte);
-+		pte_free(vm_mm, vmf->prealloc_pte);
- 		vmf->prealloc_pte = NULL;
- 	}
- 	return ret;
-diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
-index 1ad28323fb9f..11593a03c051 100644
---- a/mm/memory_hotplug.c
-+++ b/mm/memory_hotplug.c
-@@ -1560,7 +1560,7 @@ static int __ref __offline_pages(unsigned long start_pfn,
- {
- 	unsigned long pfn, nr_pages;
- 	long offlined_pages;
--	int ret, node;
-+	int ret, node, nr_isolate_pageblock;
- 	unsigned long flags;
- 	unsigned long valid_start, valid_end;
- 	struct zone *zone;
-@@ -1586,10 +1586,11 @@ static int __ref __offline_pages(unsigned long start_pfn,
- 	ret = start_isolate_page_range(start_pfn, end_pfn,
- 				       MIGRATE_MOVABLE,
- 				       SKIP_HWPOISON | REPORT_FAILURE);
--	if (ret) {
-+	if (ret < 0) {
- 		reason = "failure to isolate range";
- 		goto failed_removal;
- 	}
-+	nr_isolate_pageblock = ret;
- 
- 	arg.start_pfn = start_pfn;
- 	arg.nr_pages = nr_pages;
-@@ -1642,8 +1643,16 @@ static int __ref __offline_pages(unsigned long start_pfn,
- 	/* Ok, all of our target is isolated.
- 	   We cannot do rollback at this point. */
- 	offline_isolated_pages(start_pfn, end_pfn);
--	/* reset pagetype flags and makes migrate type to be MOVABLE */
--	undo_isolate_page_range(start_pfn, end_pfn, MIGRATE_MOVABLE);
++static void netvsc_tx_disable(struct netvsc_device *nvscdev,
++			      struct net_device *ndev)
++{
++	if (nvscdev) {
++		nvscdev->tx_disable = true;
++		virt_wmb(); /* ensure txq will not wake up after stop */
++	}
 +
-+	/*
-+	 * Onlining will reset pagetype flags and makes migrate type
-+	 * MOVABLE, so just need to decrease the number of isolated
-+	 * pageblocks zone counter here.
-+	 */
-+	spin_lock_irqsave(&zone->lock, flags);
-+	zone->nr_isolate_pageblock -= nr_isolate_pageblock;
-+	spin_unlock_irqrestore(&zone->lock, flags);
++	netif_tx_disable(ndev);
++}
 +
- 	/* removal success */
- 	adjust_managed_page_count(pfn_to_page(start_pfn), -offlined_pages);
- 	zone->present_pages -= offlined_pages;
-@@ -1675,12 +1684,12 @@ static int __ref __offline_pages(unsigned long start_pfn,
- 
- failed_removal_isolated:
- 	undo_isolate_page_range(start_pfn, end_pfn, MIGRATE_MOVABLE);
-+	memory_notify(MEM_CANCEL_OFFLINE, &arg);
- failed_removal:
- 	pr_debug("memory offlining [mem %#010llx-%#010llx] failed due to %s\n",
- 		 (unsigned long long) start_pfn << PAGE_SHIFT,
- 		 ((unsigned long long) end_pfn << PAGE_SHIFT) - 1,
- 		 reason);
--	memory_notify(MEM_CANCEL_OFFLINE, &arg);
- 	/* pushback to free area */
- 	mem_hotplug_done();
- 	return ret;
-diff --git a/mm/mempolicy.c b/mm/mempolicy.c
-index ee2bce59d2bf..c2275c1e6d2a 100644
---- a/mm/mempolicy.c
-+++ b/mm/mempolicy.c
-@@ -350,7 +350,7 @@ static void mpol_rebind_policy(struct mempolicy *pol, const nodemask_t *newmask)
+ static int netvsc_close(struct net_device *net)
  {
- 	if (!pol)
- 		return;
--	if (!mpol_store_user_nodemask(pol) &&
-+	if (!mpol_store_user_nodemask(pol) && !(pol->flags & MPOL_F_LOCAL) &&
- 	    nodes_equal(pol->w.cpuset_mems_allowed, *newmask))
- 		return;
- 
-@@ -428,6 +428,13 @@ static inline bool queue_pages_required(struct page *page,
- 	return node_isset(nid, *qp->nmask) == !(flags & MPOL_MF_INVERT);
- }
+ 	struct net_device_context *net_device_ctx = netdev_priv(net);
+@@ -192,7 +212,7 @@ static int netvsc_close(struct net_device *net)
+ 	struct netvsc_device *nvdev = rtnl_dereference(net_device_ctx->nvdev);
+ 	int ret;
  
-+/*
-+ * queue_pages_pmd() has three possible return values:
-+ * 1 - pages are placed on the right node or queued successfully.
-+ * 0 - THP was split.
-+ * -EIO - is migration entry or MPOL_MF_STRICT was specified and an existing
-+ *        page was already on a node that does not follow the policy.
-+ */
- static int queue_pages_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
- 				unsigned long end, struct mm_walk *walk)
- {
-@@ -437,7 +444,7 @@ static int queue_pages_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
- 	unsigned long flags;
+-	netif_tx_disable(net);
++	netvsc_tx_disable(nvdev, net);
  
- 	if (unlikely(is_pmd_migration_entry(*pmd))) {
--		ret = 1;
-+		ret = -EIO;
- 		goto unlock;
- 	}
- 	page = pmd_page(*pmd);
-@@ -454,8 +461,15 @@ static int queue_pages_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
- 	ret = 1;
- 	flags = qp->flags;
- 	/* go to thp migration */
--	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL))
-+	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
-+		if (!vma_migratable(walk->vma)) {
-+			ret = -EIO;
-+			goto unlock;
-+		}
-+
- 		migrate_page_add(page, qp->pagelist, flags);
-+	} else
-+		ret = -EIO;
- unlock:
- 	spin_unlock(ptl);
- out:
-@@ -480,8 +494,10 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
- 	ptl = pmd_trans_huge_lock(pmd, vma);
- 	if (ptl) {
- 		ret = queue_pages_pmd(pmd, ptl, addr, end, walk);
--		if (ret)
-+		if (ret > 0)
- 			return 0;
-+		else if (ret < 0)
-+			return ret;
- 	}
+ 	/* No need to close rndis filter if it is removed already */
+ 	if (!nvdev)
+@@ -920,7 +940,7 @@ static int netvsc_detach(struct net_device *ndev,
  
- 	if (pmd_trans_unstable(pmd))
-@@ -502,11 +518,16 @@ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
- 			continue;
- 		if (!queue_pages_required(page, qp))
- 			continue;
--		migrate_page_add(page, qp->pagelist, flags);
-+		if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
-+			if (!vma_migratable(vma))
-+				break;
-+			migrate_page_add(page, qp->pagelist, flags);
-+		} else
-+			break;
- 	}
- 	pte_unmap_unlock(pte - 1, ptl);
- 	cond_resched();
--	return 0;
-+	return addr != end ? -EIO : 0;
- }
+ 	/* If device was up (receiving) then shutdown */
+ 	if (netif_running(ndev)) {
+-		netif_tx_disable(ndev);
++		netvsc_tx_disable(nvdev, ndev);
  
- static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
-@@ -576,7 +597,12 @@ static int queue_pages_test_walk(unsigned long start, unsigned long end,
- 	unsigned long endvma = vma->vm_end;
- 	unsigned long flags = qp->flags;
+ 		ret = rndis_filter_close(nvdev);
+ 		if (ret) {
+@@ -1908,7 +1928,7 @@ static void netvsc_link_change(struct work_struct *w)
+ 		if (rdev->link_state) {
+ 			rdev->link_state = false;
+ 			netif_carrier_on(net);
+-			netif_tx_wake_all_queues(net);
++			netvsc_tx_enable(net_device, net);
+ 		} else {
+ 			notify = true;
+ 		}
+@@ -1918,7 +1938,7 @@ static void netvsc_link_change(struct work_struct *w)
+ 		if (!rdev->link_state) {
+ 			rdev->link_state = true;
+ 			netif_carrier_off(net);
+-			netif_tx_stop_all_queues(net);
++			netvsc_tx_disable(net_device, net);
+ 		}
+ 		kfree(event);
+ 		break;
+@@ -1927,7 +1947,7 @@ static void netvsc_link_change(struct work_struct *w)
+ 		if (!rdev->link_state) {
+ 			rdev->link_state = true;
+ 			netif_carrier_off(net);
+-			netif_tx_stop_all_queues(net);
++			netvsc_tx_disable(net_device, net);
+ 			event->event = RNDIS_STATUS_MEDIA_CONNECT;
+ 			spin_lock_irqsave(&ndev_ctx->lock, flags);
+ 			list_add(&event->list, &ndev_ctx->reconfig_events);
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 74bebbdb4b15..9195f3476b1d 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1203,6 +1203,7 @@ static const struct usb_device_id products[] = {
+ 	{QMI_FIXED_INTF(0x19d2, 0x2002, 4)},	/* ZTE (Vodafone) K3765-Z */
+ 	{QMI_FIXED_INTF(0x2001, 0x7e19, 4)},	/* D-Link DWM-221 B1 */
+ 	{QMI_FIXED_INTF(0x2001, 0x7e35, 4)},	/* D-Link DWM-222 */
++	{QMI_FIXED_INTF(0x2020, 0x2031, 4)},	/* Olicard 600 */
+ 	{QMI_FIXED_INTF(0x2020, 0x2033, 4)},	/* BroadMobi BM806U */
+ 	{QMI_FIXED_INTF(0x0f3d, 0x68a2, 8)},    /* Sierra Wireless MC7700 */
+ 	{QMI_FIXED_INTF(0x114f, 0x68a2, 8)},    /* Sierra Wireless MC7750 */
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index 6d1a1abbed27..cd15c32b2e43 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -1275,8 +1275,12 @@ static void vrf_setup(struct net_device *dev)
+ 	dev->priv_flags |= IFF_NO_QUEUE;
+ 	dev->priv_flags |= IFF_NO_RX_HANDLER;
  
--	if (!vma_migratable(vma))
-+	/*
-+	 * Need check MPOL_MF_STRICT to return -EIO if possible
-+	 * regardless of vma_migratable
+-	dev->min_mtu = 0;
+-	dev->max_mtu = 0;
++	/* VRF devices do not care about MTU, but if the MTU is set
++	 * too low then the ipv4 and ipv6 protocols are disabled
++	 * which breaks networking.
 +	 */
-+	if (!vma_migratable(vma) &&
-+	    !(flags & MPOL_MF_STRICT))
- 		return 1;
- 
- 	if (endvma > end)
-@@ -603,7 +629,7 @@ static int queue_pages_test_walk(unsigned long start, unsigned long end,
- 	}
- 
- 	/* queue pages from current vma */
--	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL))
-+	if (flags & MPOL_MF_VALID)
- 		return 0;
- 	return 1;
++	dev->min_mtu = IPV6_MIN_MTU;
++	dev->max_mtu = ETH_MAX_MTU;
  }
-diff --git a/mm/migrate.c b/mm/migrate.c
-index 181f5d2718a9..76e237b4610c 100644
---- a/mm/migrate.c
-+++ b/mm/migrate.c
-@@ -248,10 +248,8 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
- 				pte = swp_entry_to_pte(entry);
- 			} else if (is_device_public_page(new)) {
- 				pte = pte_mkdevmap(pte);
--				flush_dcache_page(new);
- 			}
--		} else
--			flush_dcache_page(new);
-+		}
  
- #ifdef CONFIG_HUGETLB_PAGE
- 		if (PageHuge(new)) {
-@@ -995,6 +993,13 @@ static int move_to_new_page(struct page *newpage, struct page *page,
+ static int vrf_validate(struct nlattr *tb[], struct nlattr *data[],
+diff --git a/drivers/pci/hotplug/pciehp_ctrl.c b/drivers/pci/hotplug/pciehp_ctrl.c
+index 3f3df4c29f6e..905282a8ddaa 100644
+--- a/drivers/pci/hotplug/pciehp_ctrl.c
++++ b/drivers/pci/hotplug/pciehp_ctrl.c
+@@ -115,6 +115,10 @@ static void remove_board(struct controller *ctrl, bool safe_removal)
+ 		 * removed from the slot/adapter.
  		 */
- 		if (!PageMappingFlags(page))
- 			page->mapping = NULL;
-+
-+		if (unlikely(is_zone_device_page(newpage))) {
-+			if (is_device_public_page(newpage))
-+				flush_dcache_page(newpage);
-+		} else
-+			flush_dcache_page(newpage);
+ 		msleep(1000);
 +
++		/* Ignore link or presence changes caused by power off */
++		atomic_and(~(PCI_EXP_SLTSTA_DLLSC | PCI_EXP_SLTSTA_PDC),
++			   &ctrl->pending_events);
  	}
- out:
- 	return rc;
-diff --git a/mm/oom_kill.c b/mm/oom_kill.c
-index 26ea8636758f..da0e44914085 100644
---- a/mm/oom_kill.c
-+++ b/mm/oom_kill.c
-@@ -928,7 +928,8 @@ static void __oom_kill_process(struct task_struct *victim)
-  */
- static int oom_kill_memcg_member(struct task_struct *task, void *unused)
- {
--	if (task->signal->oom_score_adj != OOM_SCORE_ADJ_MIN) {
-+	if (task->signal->oom_score_adj != OOM_SCORE_ADJ_MIN &&
-+	    !is_global_init(task)) {
- 		get_task_struct(task);
- 		__oom_kill_process(task);
- 	}
-diff --git a/mm/page_alloc.c b/mm/page_alloc.c
-index 0b9f577b1a2a..20dd3283bb1b 100644
---- a/mm/page_alloc.c
-+++ b/mm/page_alloc.c
-@@ -1945,8 +1945,8 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
- 
- 	arch_alloc_page(page, order);
- 	kernel_map_pages(page, 1 << order, 1);
--	kernel_poison_pages(page, 1 << order, 1);
- 	kasan_alloc_pages(page, order);
-+	kernel_poison_pages(page, 1 << order, 1);
- 	set_page_owner(page, order, gfp_flags);
- }
  
-@@ -8160,7 +8160,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
+ 	/* turn off Green LED */
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index e2a879e93d86..fba03a7d5c7f 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -3877,6 +3877,8 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9128,
+ /* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c14 */
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9130,
+ 			 quirk_dma_func1_alias);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9170,
++			 quirk_dma_func1_alias);
+ /* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c47 + c57 */
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9172,
+ 			 quirk_dma_func1_alias);
+diff --git a/drivers/tty/Kconfig b/drivers/tty/Kconfig
+index 0840d27381ea..e0a04bfc873e 100644
+--- a/drivers/tty/Kconfig
++++ b/drivers/tty/Kconfig
+@@ -441,4 +441,28 @@ config VCC
+ 	depends on SUN_LDOMS
+ 	help
+ 	  Support for Sun logical domain consoles.
++
++config LDISC_AUTOLOAD
++	bool "Automatically load TTY Line Disciplines"
++	default y
++	help
++	  Historically the kernel has always automatically loaded any
++	  line discipline that is in a kernel module when a user asks
++	  for it to be loaded with the TIOCSETD ioctl, or through other
++	  means.  This is not always the best thing to do on systems
++	  where you know you will not be using some of the more
++	  "ancient" line disciplines, so prevent the kernel from doing
++	  this unless the request is coming from a process with the
++	  CAP_SYS_MODULE permissions.
++
++	  Say 'Y' here if you trust your userspace users to do the right
++	  thing, or if you have only provided the line disciplines that
++	  you know you will be using, or if you wish to continue to use
++	  the traditional method of on-demand loading of these modules
++	  by any user.
++
++	  This functionality can be changed at runtime with the
++	  dev.tty.ldisc_autoload sysctl, this configuration option will
++	  only set the default value of this functionality.
++
+ endif # TTY
+diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
+index 21ffcce16927..5fa250157025 100644
+--- a/drivers/tty/tty_io.c
++++ b/drivers/tty/tty_io.c
+@@ -513,6 +513,8 @@ static const struct file_operations hung_up_tty_fops = {
+ static DEFINE_SPINLOCK(redirect_lock);
+ static struct file *redirect;
  
- 	ret = start_isolate_page_range(pfn_max_align_down(start),
- 				       pfn_max_align_up(end), migratetype, 0);
--	if (ret)
-+	if (ret < 0)
- 		return ret;
++extern void tty_sysctl_init(void);
++
+ /**
+  *	tty_wakeup	-	request more data
+  *	@tty: terminal
+@@ -3483,6 +3485,7 @@ void console_sysfs_notify(void)
+  */
+ int __init tty_init(void)
+ {
++	tty_sysctl_init();
+ 	cdev_init(&tty_cdev, &tty_fops);
+ 	if (cdev_add(&tty_cdev, MKDEV(TTYAUX_MAJOR, 0), 1) ||
+ 	    register_chrdev_region(MKDEV(TTYAUX_MAJOR, 0), 1, "/dev/tty") < 0)
+diff --git a/drivers/tty/tty_ldisc.c b/drivers/tty/tty_ldisc.c
+index 45eda69b150c..e38f104db174 100644
+--- a/drivers/tty/tty_ldisc.c
++++ b/drivers/tty/tty_ldisc.c
+@@ -156,6 +156,13 @@ static void put_ldops(struct tty_ldisc_ops *ldops)
+  *		takes tty_ldiscs_lock to guard against ldisc races
+  */
  
- 	/*
-diff --git a/mm/page_ext.c b/mm/page_ext.c
-index 8c78b8d45117..f116431c3dee 100644
---- a/mm/page_ext.c
-+++ b/mm/page_ext.c
-@@ -273,6 +273,7 @@ static void free_page_ext(void *addr)
- 		table_size = get_entry_size() * PAGES_PER_SECTION;
- 
- 		BUG_ON(PageReserved(page));
-+		kmemleak_free(addr);
- 		free_pages_exact(addr, table_size);
- 	}
- }
-diff --git a/mm/page_isolation.c b/mm/page_isolation.c
-index ce323e56b34d..019280712e1b 100644
---- a/mm/page_isolation.c
-+++ b/mm/page_isolation.c
-@@ -59,7 +59,8 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
- 	 * FIXME: Now, memory hotplug doesn't call shrink_slab() by itself.
- 	 * We just check MOVABLE pages.
++#if defined(CONFIG_LDISC_AUTOLOAD)
++	#define INITIAL_AUTOLOAD_STATE	1
++#else
++	#define INITIAL_AUTOLOAD_STATE	0
++#endif
++static int tty_ldisc_autoload = INITIAL_AUTOLOAD_STATE;
++
+ static struct tty_ldisc *tty_ldisc_get(struct tty_struct *tty, int disc)
+ {
+ 	struct tty_ldisc *ld;
+@@ -170,6 +177,8 @@ static struct tty_ldisc *tty_ldisc_get(struct tty_struct *tty, int disc)
  	 */
--	if (!has_unmovable_pages(zone, page, arg.pages_found, migratetype, flags))
-+	if (!has_unmovable_pages(zone, page, arg.pages_found, migratetype,
-+				 isol_flags))
- 		ret = 0;
- 
- 	/*
-@@ -160,27 +161,36 @@ __first_valid_page(unsigned long pfn, unsigned long nr_pages)
- 	return NULL;
+ 	ldops = get_ldops(disc);
+ 	if (IS_ERR(ldops)) {
++		if (!capable(CAP_SYS_MODULE) && !tty_ldisc_autoload)
++			return ERR_PTR(-EPERM);
+ 		request_module("tty-ldisc-%d", disc);
+ 		ldops = get_ldops(disc);
+ 		if (IS_ERR(ldops))
+@@ -845,3 +854,41 @@ void tty_ldisc_deinit(struct tty_struct *tty)
+ 		tty_ldisc_put(tty->ldisc);
+ 	tty->ldisc = NULL;
  }
- 
--/*
-- * start_isolate_page_range() -- make page-allocation-type of range of pages
-- * to be MIGRATE_ISOLATE.
-- * @start_pfn: The lower PFN of the range to be isolated.
-- * @end_pfn: The upper PFN of the range to be isolated.
-- * @migratetype: migrate type to set in error recovery.
-+/**
-+ * start_isolate_page_range() - make page-allocation-type of range of pages to
-+ * be MIGRATE_ISOLATE.
-+ * @start_pfn:		The lower PFN of the range to be isolated.
-+ * @end_pfn:		The upper PFN of the range to be isolated.
-+ *			start_pfn/end_pfn must be aligned to pageblock_order.
-+ * @migratetype:	Migrate type to set in error recovery.
-+ * @flags:		The following flags are allowed (they can be combined in
-+ *			a bit mask)
-+ *			SKIP_HWPOISON - ignore hwpoison pages
-+ *			REPORT_FAILURE - report details about the failure to
-+ *			isolate the range
-  *
-  * Making page-allocation-type to be MIGRATE_ISOLATE means free pages in
-  * the range will never be allocated. Any free pages and pages freed in the
-- * future will not be allocated again.
-- *
-- * start_pfn/end_pfn must be aligned to pageblock_order.
-- * Return 0 on success and -EBUSY if any part of range cannot be isolated.
-+ * future will not be allocated again. If specified range includes migrate types
-+ * other than MOVABLE or CMA, this will fail with -EBUSY. For isolating all
-+ * pages in the range finally, the caller have to free all pages in the range.
-+ * test_page_isolated() can be used for test it.
-  *
-  * There is no high level synchronization mechanism that prevents two threads
-- * from trying to isolate overlapping ranges.  If this happens, one thread
-+ * from trying to isolate overlapping ranges. If this happens, one thread
-  * will notice pageblocks in the overlapping range already set to isolate.
-  * This happens in set_migratetype_isolate, and set_migratetype_isolate
-- * returns an error.  We then clean up by restoring the migration type on
-- * pageblocks we may have modified and return -EBUSY to caller.  This
-+ * returns an error. We then clean up by restoring the migration type on
-+ * pageblocks we may have modified and return -EBUSY to caller. This
-  * prevents two threads from simultaneously working on overlapping ranges.
-+ *
-+ * Return: the number of isolated pageblocks on success and -EBUSY if any part
-+ * of range cannot be isolated.
-  */
- int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
- 			     unsigned migratetype, int flags)
-@@ -188,6 +198,7 @@ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
- 	unsigned long pfn;
- 	unsigned long undo_pfn;
- 	struct page *page;
-+	int nr_isolate_pageblock = 0;
- 
- 	BUG_ON(!IS_ALIGNED(start_pfn, pageblock_nr_pages));
- 	BUG_ON(!IS_ALIGNED(end_pfn, pageblock_nr_pages));
-@@ -196,13 +207,15 @@ int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
- 	     pfn < end_pfn;
- 	     pfn += pageblock_nr_pages) {
- 		page = __first_valid_page(pfn, pageblock_nr_pages);
--		if (page &&
--		    set_migratetype_isolate(page, migratetype, flags)) {
--			undo_pfn = pfn;
--			goto undo;
-+		if (page) {
-+			if (set_migratetype_isolate(page, migratetype, flags)) {
-+				undo_pfn = pfn;
-+				goto undo;
-+			}
-+			nr_isolate_pageblock++;
- 		}
++
++static int zero;
++static int one = 1;
++static struct ctl_table tty_table[] = {
++	{
++		.procname	= "ldisc_autoload",
++		.data		= &tty_ldisc_autoload,
++		.maxlen		= sizeof(tty_ldisc_autoload),
++		.mode		= 0644,
++		.proc_handler	= proc_dointvec,
++		.extra1		= &zero,
++		.extra2		= &one,
++	},
++	{ }
++};
++
++static struct ctl_table tty_dir_table[] = {
++	{
++		.procname	= "tty",
++		.mode		= 0555,
++		.child		= tty_table,
++	},
++	{ }
++};
++
++static struct ctl_table tty_root_table[] = {
++	{
++		.procname	= "dev",
++		.mode		= 0555,
++		.child		= tty_dir_table,
++	},
++	{ }
++};
++
++void tty_sysctl_init(void)
++{
++	register_sysctl_table(tty_root_table);
++}
+diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
+index a0b07c331255..a38b65b97be0 100644
+--- a/drivers/virtio/virtio_ring.c
++++ b/drivers/virtio/virtio_ring.c
+@@ -871,6 +871,8 @@ static struct virtqueue *vring_create_virtqueue_split(
+ 					  GFP_KERNEL|__GFP_NOWARN|__GFP_ZERO);
+ 		if (queue)
+ 			break;
++		if (!may_reduce_num)
++			return NULL;
  	}
--	return 0;
-+	return nr_isolate_pageblock;
- undo:
- 	for (pfn = start_pfn;
- 	     pfn < undo_pfn;
-diff --git a/mm/page_poison.c b/mm/page_poison.c
-index f0c15e9017c0..21d4f97cb49b 100644
---- a/mm/page_poison.c
-+++ b/mm/page_poison.c
-@@ -6,6 +6,7 @@
- #include <linux/page_ext.h>
- #include <linux/poison.h>
- #include <linux/ratelimit.h>
-+#include <linux/kasan.h>
- 
- static bool want_page_poisoning __read_mostly;
- 
-@@ -40,7 +41,10 @@ static void poison_page(struct page *page)
- {
- 	void *addr = kmap_atomic(page);
  
-+	/* KASAN still think the page is in-use, so skip it. */
-+	kasan_disable_current();
- 	memset(addr, PAGE_POISON, PAGE_SIZE);
-+	kasan_enable_current();
- 	kunmap_atomic(addr);
- }
+ 	if (!num)
+diff --git a/fs/block_dev.c b/fs/block_dev.c
+index 58a4c1217fa8..06ef48ad1998 100644
+--- a/fs/block_dev.c
++++ b/fs/block_dev.c
+@@ -298,10 +298,10 @@ static void blkdev_bio_end_io(struct bio *bio)
+ 	struct blkdev_dio *dio = bio->bi_private;
+ 	bool should_dirty = dio->should_dirty;
  
-diff --git a/mm/slab.c b/mm/slab.c
-index 91c1863df93d..2f2aa8eaf7d9 100644
---- a/mm/slab.c
-+++ b/mm/slab.c
-@@ -550,14 +550,6 @@ static void start_cpu_timer(int cpu)
+-	if (dio->multi_bio && !atomic_dec_and_test(&dio->ref)) {
+-		if (bio->bi_status && !dio->bio.bi_status)
+-			dio->bio.bi_status = bio->bi_status;
+-	} else {
++	if (bio->bi_status && !dio->bio.bi_status)
++		dio->bio.bi_status = bio->bi_status;
++
++	if (!dio->multi_bio || atomic_dec_and_test(&dio->ref)) {
+ 		if (!dio->is_sync) {
+ 			struct kiocb *iocb = dio->iocb;
+ 			ssize_t ret;
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 6e1119496721..1d64a6b8e413 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -501,6 +501,16 @@ static noinline int btrfs_ioctl_fitrim(struct file *file, void __user *arg)
+ 	if (!capable(CAP_SYS_ADMIN))
+ 		return -EPERM;
  
- static void init_arraycache(struct array_cache *ac, int limit, int batch)
- {
--	/*
--	 * The array_cache structures contain pointers to free object.
--	 * However, when such objects are allocated or transferred to another
--	 * cache the pointers are not cleared and they could be counted as
--	 * valid references during a kmemleak scan. Therefore, kmemleak must
--	 * not scan such objects.
--	 */
--	kmemleak_no_scan(ac);
- 	if (ac) {
- 		ac->avail = 0;
- 		ac->limit = limit;
-@@ -573,6 +565,14 @@ static struct array_cache *alloc_arraycache(int node, int entries,
- 	struct array_cache *ac = NULL;
- 
- 	ac = kmalloc_node(memsize, gfp, node);
 +	/*
-+	 * The array_cache structures contain pointers to free object.
-+	 * However, when such objects are allocated or transferred to another
-+	 * cache the pointers are not cleared and they could be counted as
-+	 * valid references during a kmemleak scan. Therefore, kmemleak must
-+	 * not scan such objects.
++	 * If the fs is mounted with nologreplay, which requires it to be
++	 * mounted in RO mode as well, we can not allow discard on free space
++	 * inside block groups, because log trees refer to extents that are not
++	 * pinned in a block group's free space cache (pinning the extents is
++	 * precisely the first phase of replaying a log tree).
 +	 */
-+	kmemleak_no_scan(ac);
- 	init_arraycache(ac, entries, batchcount);
- 	return ac;
- }
-@@ -667,6 +667,7 @@ static struct alien_cache *__alloc_alien_cache(int node, int entries,
- 
- 	alc = kmalloc_node(memsize, gfp, node);
- 	if (alc) {
-+		kmemleak_no_scan(alc);
- 		init_arraycache(&alc->ac, entries, batch);
- 		spin_lock_init(&alc->lock);
- 	}
-@@ -2111,6 +2112,8 @@ done:
- 	cachep->allocflags = __GFP_COMP;
- 	if (flags & SLAB_CACHE_DMA)
- 		cachep->allocflags |= GFP_DMA;
-+	if (flags & SLAB_CACHE_DMA32)
-+		cachep->allocflags |= GFP_DMA32;
- 	if (flags & SLAB_RECLAIM_ACCOUNT)
- 		cachep->allocflags |= __GFP_RECLAIMABLE;
- 	cachep->size = size;
-diff --git a/mm/slab.h b/mm/slab.h
-index 384105318779..27834ead5f14 100644
---- a/mm/slab.h
-+++ b/mm/slab.h
-@@ -127,7 +127,8 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
- 
- 
- /* Legal flag mask for kmem_cache_create(), for various configurations */
--#define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | SLAB_PANIC | \
-+#define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | \
-+			 SLAB_CACHE_DMA32 | SLAB_PANIC | \
- 			 SLAB_TYPESAFE_BY_RCU | SLAB_DEBUG_OBJECTS )
- 
- #if defined(CONFIG_DEBUG_SLAB)
-diff --git a/mm/slab_common.c b/mm/slab_common.c
-index f9d89c1b5977..333618231f8d 100644
---- a/mm/slab_common.c
-+++ b/mm/slab_common.c
-@@ -53,7 +53,7 @@ static DECLARE_WORK(slab_caches_to_rcu_destroy_work,
- 		SLAB_FAILSLAB | SLAB_KASAN)
- 
- #define SLAB_MERGE_SAME (SLAB_RECLAIM_ACCOUNT | SLAB_CACHE_DMA | \
--			 SLAB_ACCOUNT)
-+			 SLAB_CACHE_DMA32 | SLAB_ACCOUNT)
- 
- /*
-  * Merge control. If this is set then no merging of slab caches will occur.
-diff --git a/mm/slub.c b/mm/slub.c
-index dc777761b6b7..1673100fd534 100644
---- a/mm/slub.c
-+++ b/mm/slub.c
-@@ -3591,6 +3591,9 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
- 	if (s->flags & SLAB_CACHE_DMA)
- 		s->allocflags |= GFP_DMA;
- 
-+	if (s->flags & SLAB_CACHE_DMA32)
-+		s->allocflags |= GFP_DMA32;
++	if (btrfs_test_opt(fs_info, NOLOGREPLAY))
++		return -EROFS;
 +
- 	if (s->flags & SLAB_RECLAIM_ACCOUNT)
- 		s->allocflags |= __GFP_RECLAIMABLE;
- 
-@@ -5681,6 +5684,8 @@ static char *create_unique_id(struct kmem_cache *s)
- 	 */
- 	if (s->flags & SLAB_CACHE_DMA)
- 		*p++ = 'd';
-+	if (s->flags & SLAB_CACHE_DMA32)
-+		*p++ = 'D';
- 	if (s->flags & SLAB_RECLAIM_ACCOUNT)
- 		*p++ = 'a';
- 	if (s->flags & SLAB_CONSISTENCY_CHECKS)
-diff --git a/mm/sparse.c b/mm/sparse.c
-index 7ea5dc6c6b19..b3771f35a0ed 100644
---- a/mm/sparse.c
-+++ b/mm/sparse.c
-@@ -197,7 +197,7 @@ static inline int next_present_section_nr(int section_nr)
- }
- #define for_each_present_section_nr(start, section_nr)		\
- 	for (section_nr = next_present_section_nr(start-1);	\
--	     ((section_nr >= 0) &&				\
-+	     ((section_nr != -1) &&				\
- 	      (section_nr <= __highest_present_section_nr));	\
- 	     section_nr = next_present_section_nr(section_nr))
- 
-@@ -556,7 +556,7 @@ void online_mem_sections(unsigned long start_pfn, unsigned long end_pfn)
- }
+ 	rcu_read_lock();
+ 	list_for_each_entry_rcu(device, &fs_info->fs_devices->devices,
+ 				dev_list) {
+diff --git a/fs/btrfs/props.c b/fs/btrfs/props.c
+index dc6140013ae8..61d22a56c0ba 100644
+--- a/fs/btrfs/props.c
++++ b/fs/btrfs/props.c
+@@ -366,11 +366,11 @@ int btrfs_subvol_inherit_props(struct btrfs_trans_handle *trans,
  
- #ifdef CONFIG_MEMORY_HOTREMOVE
--/* Mark all memory sections within the pfn range as online */
-+/* Mark all memory sections within the pfn range as offline */
- void offline_mem_sections(unsigned long start_pfn, unsigned long end_pfn)
+ static int prop_compression_validate(const char *value, size_t len)
  {
- 	unsigned long pfn;
-diff --git a/mm/swapfile.c b/mm/swapfile.c
-index dbac1d49469d..67f60e051814 100644
---- a/mm/swapfile.c
-+++ b/mm/swapfile.c
-@@ -98,6 +98,15 @@ static atomic_t proc_poll_event = ATOMIC_INIT(0);
+-	if (!strncmp("lzo", value, len))
++	if (!strncmp("lzo", value, 3))
+ 		return 0;
+-	else if (!strncmp("zlib", value, len))
++	else if (!strncmp("zlib", value, 4))
+ 		return 0;
+-	else if (!strncmp("zstd", value, len))
++	else if (!strncmp("zstd", value, 4))
+ 		return 0;
  
- atomic_t nr_rotate_swap = ATOMIC_INIT(0);
+ 	return -EINVAL;
+@@ -396,7 +396,7 @@ static int prop_compression_apply(struct inode *inode,
+ 		btrfs_set_fs_incompat(fs_info, COMPRESS_LZO);
+ 	} else if (!strncmp("zlib", value, 4)) {
+ 		type = BTRFS_COMPRESS_ZLIB;
+-	} else if (!strncmp("zstd", value, len)) {
++	} else if (!strncmp("zstd", value, 4)) {
+ 		type = BTRFS_COMPRESS_ZSTD;
+ 		btrfs_set_fs_incompat(fs_info, COMPRESS_ZSTD);
+ 	} else {
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index f2c0d863fb52..07cad54b84f1 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -559,6 +559,8 @@ cifs_show_options(struct seq_file *s, struct dentry *root)
+ 			tcon->ses->server->echo_interval / HZ);
+ 	if (tcon->snapshot_time)
+ 		seq_printf(s, ",snapshot=%llu", tcon->snapshot_time);
++	if (tcon->handle_timeout)
++		seq_printf(s, ",handletimeout=%u", tcon->handle_timeout);
+ 	/* convert actimeo and display it in seconds */
+ 	seq_printf(s, ",actimeo=%lu", cifs_sb->actimeo / HZ);
  
-+static struct swap_info_struct *swap_type_to_swap_info(int type)
-+{
-+	if (type >= READ_ONCE(nr_swapfiles))
-+		return NULL;
-+
-+	smp_rmb();	/* Pairs with smp_wmb in alloc_swap_info. */
-+	return READ_ONCE(swap_info[type]);
-+}
-+
- static inline unsigned char swap_count(unsigned char ent)
- {
- 	return ent & ~SWAP_HAS_CACHE;	/* may include COUNT_CONTINUED flag */
-@@ -1044,12 +1053,14 @@ noswap:
- /* The only caller of this function is now suspend routine */
- swp_entry_t get_swap_page_of_type(int type)
- {
--	struct swap_info_struct *si;
-+	struct swap_info_struct *si = swap_type_to_swap_info(type);
- 	pgoff_t offset;
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index 1b25e6e95d45..6c934ab3722b 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -59,6 +59,12 @@
+  */
+ #define CIFS_MAX_ACTIMEO (1 << 30)
  
--	si = swap_info[type];
-+	if (!si)
-+		goto fail;
++/*
++ * Max persistent and resilient handle timeout (milliseconds).
++ * Windows durable max was 960000 (16 minutes)
++ */
++#define SMB3_MAX_HANDLE_TIMEOUT 960000
 +
- 	spin_lock(&si->lock);
--	if (si && (si->flags & SWP_WRITEOK)) {
-+	if (si->flags & SWP_WRITEOK) {
- 		atomic_long_dec(&nr_swap_pages);
- 		/* This is called for allocating swap entry, not cache */
- 		offset = scan_swap_map(si, 1);
-@@ -1060,6 +1071,7 @@ swp_entry_t get_swap_page_of_type(int type)
- 		atomic_long_inc(&nr_swap_pages);
- 	}
- 	spin_unlock(&si->lock);
-+fail:
- 	return (swp_entry_t) {0};
- }
+ /*
+  * MAX_REQ is the maximum number of requests that WE will send
+  * on one socket concurrently.
+@@ -572,6 +578,7 @@ struct smb_vol {
+ 	struct nls_table *local_nls;
+ 	unsigned int echo_interval; /* echo interval in secs */
+ 	__u64 snapshot_time; /* needed for timewarp tokens */
++	__u32 handle_timeout; /* persistent and durable handle timeout in ms */
+ 	unsigned int max_credits; /* smb3 max_credits 10 < credits < 60000 */
+ };
  
-@@ -1071,9 +1083,9 @@ static struct swap_info_struct *__swap_info_get(swp_entry_t entry)
- 	if (!entry.val)
- 		goto out;
- 	type = swp_type(entry);
--	if (type >= nr_swapfiles)
-+	p = swap_type_to_swap_info(type);
-+	if (!p)
- 		goto bad_nofile;
--	p = swap_info[type];
- 	if (!(p->flags & SWP_USED))
- 		goto bad_device;
- 	offset = swp_offset(entry);
-@@ -1697,10 +1709,9 @@ int swap_type_of(dev_t device, sector_t offset, struct block_device **bdev_p)
- sector_t swapdev_block(int type, pgoff_t offset)
- {
- 	struct block_device *bdev;
-+	struct swap_info_struct *si = swap_type_to_swap_info(type);
+@@ -1028,6 +1035,7 @@ struct cifs_tcon {
+ 	__u32 vol_serial_number;
+ 	__le64 vol_create_time;
+ 	__u64 snapshot_time; /* for timewarp tokens - timestamp of snapshot */
++	__u32 handle_timeout; /* persistent and durable handle timeout in ms */
+ 	__u32 ss_flags;		/* sector size flags */
+ 	__u32 perf_sector_size; /* best sector size for perf */
+ 	__u32 max_chunks;
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 9d4e60123db4..44e6ec85f832 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -103,7 +103,7 @@ enum {
+ 	Opt_cruid, Opt_gid, Opt_file_mode,
+ 	Opt_dirmode, Opt_port,
+ 	Opt_blocksize, Opt_rsize, Opt_wsize, Opt_actimeo,
+-	Opt_echo_interval, Opt_max_credits,
++	Opt_echo_interval, Opt_max_credits, Opt_handletimeout,
+ 	Opt_snapshot,
  
--	if ((unsigned int)type >= nr_swapfiles)
--		return 0;
--	if (!(swap_info[type]->flags & SWP_WRITEOK))
-+	if (!si || !(si->flags & SWP_WRITEOK))
- 		return 0;
- 	return map_swap_entry(swp_entry(type, offset), &bdev);
- }
-@@ -2258,7 +2269,7 @@ static sector_t map_swap_entry(swp_entry_t entry, struct block_device **bdev)
- 	struct swap_extent *se;
- 	pgoff_t offset;
- 
--	sis = swap_info[swp_type(entry)];
-+	sis = swp_swap_info(entry);
- 	*bdev = sis->bdev;
- 
- 	offset = swp_offset(entry);
-@@ -2700,9 +2711,7 @@ static void *swap_start(struct seq_file *swap, loff_t *pos)
- 	if (!l)
- 		return SEQ_START_TOKEN;
- 
--	for (type = 0; type < nr_swapfiles; type++) {
--		smp_rmb();	/* read nr_swapfiles before swap_info[type] */
--		si = swap_info[type];
-+	for (type = 0; (si = swap_type_to_swap_info(type)); type++) {
- 		if (!(si->flags & SWP_USED) || !si->swap_map)
- 			continue;
- 		if (!--l)
-@@ -2722,9 +2731,7 @@ static void *swap_next(struct seq_file *swap, void *v, loff_t *pos)
- 	else
- 		type = si->type + 1;
+ 	/* Mount options which take string value */
+@@ -208,6 +208,7 @@ static const match_table_t cifs_mount_option_tokens = {
+ 	{ Opt_rsize, "rsize=%s" },
+ 	{ Opt_wsize, "wsize=%s" },
+ 	{ Opt_actimeo, "actimeo=%s" },
++	{ Opt_handletimeout, "handletimeout=%s" },
+ 	{ Opt_echo_interval, "echo_interval=%s" },
+ 	{ Opt_max_credits, "max_credits=%s" },
+ 	{ Opt_snapshot, "snapshot=%s" },
+@@ -1600,6 +1601,9 @@ cifs_parse_mount_options(const char *mountdata, const char *devname,
  
--	for (; type < nr_swapfiles; type++) {
--		smp_rmb();	/* read nr_swapfiles before swap_info[type] */
--		si = swap_info[type];
-+	for (; (si = swap_type_to_swap_info(type)); type++) {
- 		if (!(si->flags & SWP_USED) || !si->swap_map)
- 			continue;
- 		++*pos;
-@@ -2831,14 +2838,14 @@ static struct swap_info_struct *alloc_swap_info(void)
- 	}
- 	if (type >= nr_swapfiles) {
- 		p->type = type;
--		swap_info[type] = p;
-+		WRITE_ONCE(swap_info[type], p);
- 		/*
- 		 * Write swap_info[type] before nr_swapfiles, in case a
- 		 * racing procfs swap_start() or swap_next() is reading them.
- 		 * (We never shrink nr_swapfiles, we never free this entry.)
- 		 */
- 		smp_wmb();
--		nr_swapfiles++;
-+		WRITE_ONCE(nr_swapfiles, nr_swapfiles + 1);
- 	} else {
- 		kvfree(p);
- 		p = swap_info[type];
-@@ -3358,7 +3365,7 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
- {
- 	struct swap_info_struct *p;
- 	struct swap_cluster_info *ci;
--	unsigned long offset, type;
-+	unsigned long offset;
- 	unsigned char count;
- 	unsigned char has_cache;
- 	int err = -EINVAL;
-@@ -3366,10 +3373,10 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
- 	if (non_swap_entry(entry))
- 		goto out;
+ 	vol->actimeo = CIFS_DEF_ACTIMEO;
  
--	type = swp_type(entry);
--	if (type >= nr_swapfiles)
-+	p = swp_swap_info(entry);
-+	if (!p)
- 		goto bad_file;
--	p = swap_info[type];
++	/* Most clients set timeout to 0, allows server to use its default */
++	vol->handle_timeout = 0; /* See MS-SMB2 spec section 2.2.14.2.12 */
 +
- 	offset = swp_offset(entry);
- 	if (unlikely(offset >= p->max))
- 		goto out;
-@@ -3466,7 +3473,7 @@ int swapcache_prepare(swp_entry_t entry)
- 
- struct swap_info_struct *swp_swap_info(swp_entry_t entry)
- {
--	return swap_info[swp_type(entry)];
-+	return swap_type_to_swap_info(swp_type(entry));
+ 	/* offer SMB2.1 and later (SMB3 etc). Secure and widely accepted */
+ 	vol->ops = &smb30_operations;
+ 	vol->vals = &smbdefault_values;
+@@ -1998,6 +2002,18 @@ cifs_parse_mount_options(const char *mountdata, const char *devname,
+ 				goto cifs_parse_mount_err;
+ 			}
+ 			break;
++		case Opt_handletimeout:
++			if (get_option_ul(args, &option)) {
++				cifs_dbg(VFS, "%s: Invalid handletimeout value\n",
++					 __func__);
++				goto cifs_parse_mount_err;
++			}
++			vol->handle_timeout = option;
++			if (vol->handle_timeout > SMB3_MAX_HANDLE_TIMEOUT) {
++				cifs_dbg(VFS, "Invalid handle cache timeout, longer than 16 minutes\n");
++				goto cifs_parse_mount_err;
++			}
++			break;
+ 		case Opt_echo_interval:
+ 			if (get_option_ul(args, &option)) {
+ 				cifs_dbg(VFS, "%s: Invalid echo interval value\n",
+@@ -3164,6 +3180,8 @@ static int match_tcon(struct cifs_tcon *tcon, struct smb_vol *volume_info)
+ 		return 0;
+ 	if (tcon->snapshot_time != volume_info->snapshot_time)
+ 		return 0;
++	if (tcon->handle_timeout != volume_info->handle_timeout)
++		return 0;
+ 	return 1;
  }
  
- struct swap_info_struct *page_swap_info(struct page *page)
-diff --git a/mm/vmalloc.c b/mm/vmalloc.c
-index 871e41c55e23..583630bf247d 100644
---- a/mm/vmalloc.c
-+++ b/mm/vmalloc.c
-@@ -498,7 +498,11 @@ nocache:
+@@ -3278,6 +3296,16 @@ cifs_get_tcon(struct cifs_ses *ses, struct smb_vol *volume_info)
+ 			tcon->snapshot_time = volume_info->snapshot_time;
  	}
  
- found:
--	if (addr + size > vend)
-+	/*
-+	 * Check also calculated address against the vstart,
-+	 * because it can be 0 because of big align request.
-+	 */
-+	if (addr + size > vend || addr < vstart)
- 		goto overflow;
- 
- 	va->va_start = addr;
-@@ -2248,7 +2252,7 @@ int remap_vmalloc_range_partial(struct vm_area_struct *vma, unsigned long uaddr,
- 	if (!(area->flags & VM_USERMAP))
- 		return -EINVAL;
- 
--	if (kaddr + size > area->addr + area->size)
-+	if (kaddr + size > area->addr + get_vm_area_size(area))
- 		return -EINVAL;
++	if (volume_info->handle_timeout) {
++		if (ses->server->vals->protocol_id == 0) {
++			cifs_dbg(VFS,
++			     "Use SMB2.1 or later for handle timeout option\n");
++			rc = -EOPNOTSUPP;
++			goto out_fail;
++		} else
++			tcon->handle_timeout = volume_info->handle_timeout;
++	}
++
+ 	tcon->ses = ses;
+ 	if (volume_info->password) {
+ 		tcon->password = kstrdup(volume_info->password, GFP_KERNEL);
+diff --git a/fs/cifs/smb2file.c b/fs/cifs/smb2file.c
+index b204e84b87fb..b0e76d27d752 100644
+--- a/fs/cifs/smb2file.c
++++ b/fs/cifs/smb2file.c
+@@ -68,7 +68,9 @@ smb2_open_file(const unsigned int xid, struct cifs_open_parms *oparms,
  
- 	do {
-diff --git a/net/9p/client.c b/net/9p/client.c
-index 357214a51f13..b85d51f4b8eb 100644
---- a/net/9p/client.c
-+++ b/net/9p/client.c
-@@ -1061,7 +1061,7 @@ struct p9_client *p9_client_create(const char *dev_name, char *options)
- 		p9_debug(P9_DEBUG_ERROR,
- 			 "Please specify a msize of at least 4k\n");
- 		err = -EINVAL;
--		goto free_client;
-+		goto close_trans;
- 	}
  
- 	err = p9_client_version(clnt);
-diff --git a/net/bluetooth/af_bluetooth.c b/net/bluetooth/af_bluetooth.c
-index deacc52d7ff1..8d12198eaa94 100644
---- a/net/bluetooth/af_bluetooth.c
-+++ b/net/bluetooth/af_bluetooth.c
-@@ -154,15 +154,25 @@ void bt_sock_unlink(struct bt_sock_list *l, struct sock *sk)
+ 	 if (oparms->tcon->use_resilient) {
+-		nr_ioctl_req.Timeout = 0; /* use server default (120 seconds) */
++		/* default timeout is 0, servers pick default (120 seconds) */
++		nr_ioctl_req.Timeout =
++			cpu_to_le32(oparms->tcon->handle_timeout);
+ 		nr_ioctl_req.Reserved = 0;
+ 		rc = SMB2_ioctl(xid, oparms->tcon, fid->persistent_fid,
+ 			fid->volatile_fid, FSCTL_LMR_REQUEST_RESILIENCY,
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 53642a237bf9..068febe37fe4 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -1837,8 +1837,9 @@ add_lease_context(struct TCP_Server_Info *server, struct kvec *iov,
  }
- EXPORT_SYMBOL(bt_sock_unlink);
  
--void bt_accept_enqueue(struct sock *parent, struct sock *sk)
-+void bt_accept_enqueue(struct sock *parent, struct sock *sk, bool bh)
+ static struct create_durable_v2 *
+-create_durable_v2_buf(struct cifs_fid *pfid)
++create_durable_v2_buf(struct cifs_open_parms *oparms)
  {
- 	BT_DBG("parent %p, sk %p", parent, sk);
- 
- 	sock_hold(sk);
--	lock_sock_nested(sk, SINGLE_DEPTH_NESTING);
-+
-+	if (bh)
-+		bh_lock_sock_nested(sk);
-+	else
-+		lock_sock_nested(sk, SINGLE_DEPTH_NESTING);
-+
- 	list_add_tail(&bt_sk(sk)->accept_q, &bt_sk(parent)->accept_q);
- 	bt_sk(sk)->parent = parent;
--	release_sock(sk);
-+
-+	if (bh)
-+		bh_unlock_sock(sk);
-+	else
-+		release_sock(sk);
-+
- 	parent->sk_ack_backlog++;
- }
- EXPORT_SYMBOL(bt_accept_enqueue);
-diff --git a/net/bluetooth/hci_sock.c b/net/bluetooth/hci_sock.c
-index 1506e1632394..d4e2a166ae17 100644
---- a/net/bluetooth/hci_sock.c
-+++ b/net/bluetooth/hci_sock.c
-@@ -831,8 +831,6 @@ static int hci_sock_release(struct socket *sock)
- 	if (!sk)
- 		return 0;
- 
--	hdev = hci_pi(sk)->hdev;
--
- 	switch (hci_pi(sk)->channel) {
- 	case HCI_CHANNEL_MONITOR:
- 		atomic_dec(&monitor_promisc);
-@@ -854,6 +852,7 @@ static int hci_sock_release(struct socket *sock)
- 
- 	bt_sock_unlink(&hci_sk_list, sk);
- 
-+	hdev = hci_pi(sk)->hdev;
- 	if (hdev) {
- 		if (hci_pi(sk)->channel == HCI_CHANNEL_USER) {
- 			/* When releasing a user channel exclusive access,
-diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
-index 2a7fb517d460..ccdc5c67d22a 100644
---- a/net/bluetooth/l2cap_core.c
-+++ b/net/bluetooth/l2cap_core.c
-@@ -3337,16 +3337,22 @@ static int l2cap_parse_conf_req(struct l2cap_chan *chan, void *data, size_t data
- 
- 	while (len >= L2CAP_CONF_OPT_SIZE) {
- 		len -= l2cap_get_conf_opt(&req, &type, &olen, &val);
-+		if (len < 0)
-+			break;
- 
- 		hint  = type & L2CAP_CONF_HINT;
- 		type &= L2CAP_CONF_MASK;
- 
- 		switch (type) {
- 		case L2CAP_CONF_MTU:
-+			if (olen != 2)
-+				break;
- 			mtu = val;
- 			break;
- 
- 		case L2CAP_CONF_FLUSH_TO:
-+			if (olen != 2)
-+				break;
- 			chan->flush_to = val;
- 			break;
- 
-@@ -3354,26 +3360,30 @@ static int l2cap_parse_conf_req(struct l2cap_chan *chan, void *data, size_t data
- 			break;
- 
- 		case L2CAP_CONF_RFC:
--			if (olen == sizeof(rfc))
--				memcpy(&rfc, (void *) val, olen);
-+			if (olen != sizeof(rfc))
-+				break;
-+			memcpy(&rfc, (void *) val, olen);
- 			break;
- 
- 		case L2CAP_CONF_FCS:
-+			if (olen != 1)
-+				break;
- 			if (val == L2CAP_FCS_NONE)
- 				set_bit(CONF_RECV_NO_FCS, &chan->conf_state);
- 			break;
- 
- 		case L2CAP_CONF_EFS:
--			if (olen == sizeof(efs)) {
--				remote_efs = 1;
--				memcpy(&efs, (void *) val, olen);
--			}
-+			if (olen != sizeof(efs))
-+				break;
-+			remote_efs = 1;
-+			memcpy(&efs, (void *) val, olen);
- 			break;
- 
- 		case L2CAP_CONF_EWS:
-+			if (olen != 2)
-+				break;
- 			if (!(chan->conn->local_fixed_chan & L2CAP_FC_A2MP))
- 				return -ECONNREFUSED;
--
- 			set_bit(FLAG_EXT_CTRL, &chan->flags);
- 			set_bit(CONF_EWS_RECV, &chan->conf_state);
- 			chan->tx_win_max = L2CAP_DEFAULT_EXT_WINDOW;
-@@ -3383,7 +3393,6 @@ static int l2cap_parse_conf_req(struct l2cap_chan *chan, void *data, size_t data
- 		default:
- 			if (hint)
- 				break;
--
- 			result = L2CAP_CONF_UNKNOWN;
- 			*((u8 *) ptr++) = type;
- 			break;
-@@ -3548,58 +3557,65 @@ static int l2cap_parse_conf_rsp(struct l2cap_chan *chan, void *rsp, int len,
++	struct cifs_fid *pfid = oparms->fid;
+ 	struct create_durable_v2 *buf;
  
- 	while (len >= L2CAP_CONF_OPT_SIZE) {
- 		len -= l2cap_get_conf_opt(&rsp, &type, &olen, &val);
-+		if (len < 0)
-+			break;
+ 	buf = kzalloc(sizeof(struct create_durable_v2), GFP_KERNEL);
+@@ -1852,7 +1853,14 @@ create_durable_v2_buf(struct cifs_fid *pfid)
+ 				(struct create_durable_v2, Name));
+ 	buf->ccontext.NameLength = cpu_to_le16(4);
  
- 		switch (type) {
- 		case L2CAP_CONF_MTU:
-+			if (olen != 2)
-+				break;
- 			if (val < L2CAP_DEFAULT_MIN_MTU) {
- 				*result = L2CAP_CONF_UNACCEPT;
- 				chan->imtu = L2CAP_DEFAULT_MIN_MTU;
- 			} else
- 				chan->imtu = val;
--			l2cap_add_conf_opt(&ptr, L2CAP_CONF_MTU, 2, chan->imtu, endptr - ptr);
-+			l2cap_add_conf_opt(&ptr, L2CAP_CONF_MTU, 2, chan->imtu,
-+					   endptr - ptr);
- 			break;
+-	buf->dcontext.Timeout = 0; /* Should this be configurable by workload */
++	/*
++	 * NB: Handle timeout defaults to 0, which allows server to choose
++	 * (most servers default to 120 seconds) and most clients default to 0.
++	 * This can be overridden at mount ("handletimeout=") if the user wants
++	 * a different persistent (or resilient) handle timeout for all
++	 * opens on a particular SMB3 mount.
++	 */
++	buf->dcontext.Timeout = cpu_to_le32(oparms->tcon->handle_timeout);
+ 	buf->dcontext.Flags = cpu_to_le32(SMB2_DHANDLE_FLAG_PERSISTENT);
+ 	generate_random_uuid(buf->dcontext.CreateGuid);
+ 	memcpy(pfid->create_guid, buf->dcontext.CreateGuid, 16);
+@@ -1905,7 +1913,7 @@ add_durable_v2_context(struct kvec *iov, unsigned int *num_iovec,
+ 	struct smb2_create_req *req = iov[0].iov_base;
+ 	unsigned int num = *num_iovec;
  
- 		case L2CAP_CONF_FLUSH_TO:
-+			if (olen != 2)
-+				break;
- 			chan->flush_to = val;
--			l2cap_add_conf_opt(&ptr, L2CAP_CONF_FLUSH_TO,
--					   2, chan->flush_to, endptr - ptr);
-+			l2cap_add_conf_opt(&ptr, L2CAP_CONF_FLUSH_TO, 2,
-+					   chan->flush_to, endptr - ptr);
- 			break;
+-	iov[num].iov_base = create_durable_v2_buf(oparms->fid);
++	iov[num].iov_base = create_durable_v2_buf(oparms);
+ 	if (iov[num].iov_base == NULL)
+ 		return -ENOMEM;
+ 	iov[num].iov_len = sizeof(struct create_durable_v2);
+diff --git a/include/linux/bitrev.h b/include/linux/bitrev.h
+index 50fb0dee23e8..d35b8ec1c485 100644
+--- a/include/linux/bitrev.h
++++ b/include/linux/bitrev.h
+@@ -34,41 +34,41 @@ static inline u32 __bitrev32(u32 x)
  
- 		case L2CAP_CONF_RFC:
--			if (olen == sizeof(rfc))
--				memcpy(&rfc, (void *)val, olen);
--
-+			if (olen != sizeof(rfc))
-+				break;
-+			memcpy(&rfc, (void *)val, olen);
- 			if (test_bit(CONF_STATE2_DEVICE, &chan->conf_state) &&
- 			    rfc.mode != chan->mode)
- 				return -ECONNREFUSED;
--
- 			chan->fcs = 0;
--
--			l2cap_add_conf_opt(&ptr, L2CAP_CONF_RFC,
--					   sizeof(rfc), (unsigned long) &rfc, endptr - ptr);
-+			l2cap_add_conf_opt(&ptr, L2CAP_CONF_RFC, sizeof(rfc),
-+					   (unsigned long) &rfc, endptr - ptr);
- 			break;
+ #define __constant_bitrev32(x)	\
+ ({					\
+-	u32 __x = x;			\
+-	__x = (__x >> 16) | (__x << 16);	\
+-	__x = ((__x & (u32)0xFF00FF00UL) >> 8) | ((__x & (u32)0x00FF00FFUL) << 8);	\
+-	__x = ((__x & (u32)0xF0F0F0F0UL) >> 4) | ((__x & (u32)0x0F0F0F0FUL) << 4);	\
+-	__x = ((__x & (u32)0xCCCCCCCCUL) >> 2) | ((__x & (u32)0x33333333UL) << 2);	\
+-	__x = ((__x & (u32)0xAAAAAAAAUL) >> 1) | ((__x & (u32)0x55555555UL) << 1);	\
+-	__x;								\
++	u32 ___x = x;			\
++	___x = (___x >> 16) | (___x << 16);	\
++	___x = ((___x & (u32)0xFF00FF00UL) >> 8) | ((___x & (u32)0x00FF00FFUL) << 8);	\
++	___x = ((___x & (u32)0xF0F0F0F0UL) >> 4) | ((___x & (u32)0x0F0F0F0FUL) << 4);	\
++	___x = ((___x & (u32)0xCCCCCCCCUL) >> 2) | ((___x & (u32)0x33333333UL) << 2);	\
++	___x = ((___x & (u32)0xAAAAAAAAUL) >> 1) | ((___x & (u32)0x55555555UL) << 1);	\
++	___x;								\
+ })
  
- 		case L2CAP_CONF_EWS:
-+			if (olen != 2)
-+				break;
- 			chan->ack_win = min_t(u16, val, chan->ack_win);
- 			l2cap_add_conf_opt(&ptr, L2CAP_CONF_EWS, 2,
- 					   chan->tx_win, endptr - ptr);
- 			break;
+ #define __constant_bitrev16(x)	\
+ ({					\
+-	u16 __x = x;			\
+-	__x = (__x >> 8) | (__x << 8);	\
+-	__x = ((__x & (u16)0xF0F0U) >> 4) | ((__x & (u16)0x0F0FU) << 4);	\
+-	__x = ((__x & (u16)0xCCCCU) >> 2) | ((__x & (u16)0x3333U) << 2);	\
+-	__x = ((__x & (u16)0xAAAAU) >> 1) | ((__x & (u16)0x5555U) << 1);	\
+-	__x;								\
++	u16 ___x = x;			\
++	___x = (___x >> 8) | (___x << 8);	\
++	___x = ((___x & (u16)0xF0F0U) >> 4) | ((___x & (u16)0x0F0FU) << 4);	\
++	___x = ((___x & (u16)0xCCCCU) >> 2) | ((___x & (u16)0x3333U) << 2);	\
++	___x = ((___x & (u16)0xAAAAU) >> 1) | ((___x & (u16)0x5555U) << 1);	\
++	___x;								\
+ })
  
- 		case L2CAP_CONF_EFS:
--			if (olen == sizeof(efs)) {
--				memcpy(&efs, (void *)val, olen);
--
--				if (chan->local_stype != L2CAP_SERV_NOTRAFIC &&
--				    efs.stype != L2CAP_SERV_NOTRAFIC &&
--				    efs.stype != chan->local_stype)
--					return -ECONNREFUSED;
--
--				l2cap_add_conf_opt(&ptr, L2CAP_CONF_EFS, sizeof(efs),
--						   (unsigned long) &efs, endptr - ptr);
--			}
-+			if (olen != sizeof(efs))
-+				break;
-+			memcpy(&efs, (void *)val, olen);
-+			if (chan->local_stype != L2CAP_SERV_NOTRAFIC &&
-+			    efs.stype != L2CAP_SERV_NOTRAFIC &&
-+			    efs.stype != chan->local_stype)
-+				return -ECONNREFUSED;
-+			l2cap_add_conf_opt(&ptr, L2CAP_CONF_EFS, sizeof(efs),
-+					   (unsigned long) &efs, endptr - ptr);
- 			break;
+ #define __constant_bitrev8x4(x) \
+ ({			\
+-	u32 __x = x;	\
+-	__x = ((__x & (u32)0xF0F0F0F0UL) >> 4) | ((__x & (u32)0x0F0F0F0FUL) << 4);	\
+-	__x = ((__x & (u32)0xCCCCCCCCUL) >> 2) | ((__x & (u32)0x33333333UL) << 2);	\
+-	__x = ((__x & (u32)0xAAAAAAAAUL) >> 1) | ((__x & (u32)0x55555555UL) << 1);	\
+-	__x;								\
++	u32 ___x = x;	\
++	___x = ((___x & (u32)0xF0F0F0F0UL) >> 4) | ((___x & (u32)0x0F0F0F0FUL) << 4);	\
++	___x = ((___x & (u32)0xCCCCCCCCUL) >> 2) | ((___x & (u32)0x33333333UL) << 2);	\
++	___x = ((___x & (u32)0xAAAAAAAAUL) >> 1) | ((___x & (u32)0x55555555UL) << 1);	\
++	___x;								\
+ })
  
- 		case L2CAP_CONF_FCS:
-+			if (olen != 1)
-+				break;
- 			if (*result == L2CAP_CONF_PENDING)
- 				if (val == L2CAP_FCS_NONE)
- 					set_bit(CONF_RECV_NO_FCS,
-@@ -3728,13 +3744,18 @@ static void l2cap_conf_rfc_get(struct l2cap_chan *chan, void *rsp, int len)
- 
- 	while (len >= L2CAP_CONF_OPT_SIZE) {
- 		len -= l2cap_get_conf_opt(&rsp, &type, &olen, &val);
-+		if (len < 0)
-+			break;
+ #define __constant_bitrev8(x)	\
+ ({					\
+-	u8 __x = x;			\
+-	__x = (__x >> 4) | (__x << 4);	\
+-	__x = ((__x & (u8)0xCCU) >> 2) | ((__x & (u8)0x33U) << 2);	\
+-	__x = ((__x & (u8)0xAAU) >> 1) | ((__x & (u8)0x55U) << 1);	\
+-	__x;								\
++	u8 ___x = x;			\
++	___x = (___x >> 4) | (___x << 4);	\
++	___x = ((___x & (u8)0xCCU) >> 2) | ((___x & (u8)0x33U) << 2);	\
++	___x = ((___x & (u8)0xAAU) >> 1) | ((___x & (u8)0x55U) << 1);	\
++	___x;								\
+ })
  
- 		switch (type) {
- 		case L2CAP_CONF_RFC:
--			if (olen == sizeof(rfc))
--				memcpy(&rfc, (void *)val, olen);
-+			if (olen != sizeof(rfc))
-+				break;
-+			memcpy(&rfc, (void *)val, olen);
- 			break;
- 		case L2CAP_CONF_EWS:
-+			if (olen != 2)
-+				break;
- 			txwin_ext = val;
- 			break;
- 		}
-diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
-index 686bdc6b35b0..a3a2cd55e23a 100644
---- a/net/bluetooth/l2cap_sock.c
-+++ b/net/bluetooth/l2cap_sock.c
-@@ -1252,7 +1252,7 @@ static struct l2cap_chan *l2cap_sock_new_connection_cb(struct l2cap_chan *chan)
- 
- 	l2cap_sock_init(sk, parent);
- 
--	bt_accept_enqueue(parent, sk);
-+	bt_accept_enqueue(parent, sk, false);
- 
- 	release_sock(parent);
- 
-diff --git a/net/bluetooth/rfcomm/sock.c b/net/bluetooth/rfcomm/sock.c
-index aa0db1d1bd9b..b1f49fcc0478 100644
---- a/net/bluetooth/rfcomm/sock.c
-+++ b/net/bluetooth/rfcomm/sock.c
-@@ -988,7 +988,7 @@ int rfcomm_connect_ind(struct rfcomm_session *s, u8 channel, struct rfcomm_dlc *
- 	rfcomm_pi(sk)->channel = channel;
- 
- 	sk->sk_state = BT_CONFIG;
--	bt_accept_enqueue(parent, sk);
-+	bt_accept_enqueue(parent, sk, true);
- 
- 	/* Accept connection and return socket DLC */
- 	*d = rfcomm_pi(sk)->dlc;
-diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c
-index 529b38996d8b..9a580999ca57 100644
---- a/net/bluetooth/sco.c
-+++ b/net/bluetooth/sco.c
-@@ -193,7 +193,7 @@ static void __sco_chan_add(struct sco_conn *conn, struct sock *sk,
- 	conn->sk = sk;
- 
- 	if (parent)
--		bt_accept_enqueue(parent, sk);
-+		bt_accept_enqueue(parent, sk, true);
- }
+ #define bitrev32(x) \
+diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
+index 83ae11cbd12c..7391f5fe4eda 100644
+--- a/include/linux/memcontrol.h
++++ b/include/linux/memcontrol.h
+@@ -561,7 +561,10 @@ struct mem_cgroup *lock_page_memcg(struct page *page);
+ void __unlock_page_memcg(struct mem_cgroup *memcg);
+ void unlock_page_memcg(struct page *page);
  
- static int sco_chan_add(struct sco_conn *conn, struct sock *sk,
-diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
-index ac92b2eb32b1..e4777614a8a0 100644
---- a/net/bridge/br_multicast.c
-+++ b/net/bridge/br_multicast.c
-@@ -599,6 +599,7 @@ static int br_ip4_multicast_add_group(struct net_bridge *br,
- 	if (ipv4_is_local_multicast(group))
- 		return 0;
+-/* idx can be of type enum memcg_stat_item or node_stat_item */
++/*
++ * idx can be of type enum memcg_stat_item or node_stat_item.
++ * Keep in sync with memcg_exact_page_state().
++ */
+ static inline unsigned long memcg_page_state(struct mem_cgroup *memcg,
+ 					     int idx)
+ {
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index 54299251d40d..4f001619f854 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -591,6 +591,8 @@ enum mlx5_pagefault_type_flags {
+ };
  
-+	memset(&br_group, 0, sizeof(br_group));
- 	br_group.u.ip4 = group;
- 	br_group.proto = htons(ETH_P_IP);
- 	br_group.vid = vid;
-@@ -1489,6 +1490,7 @@ static void br_ip4_multicast_leave_group(struct net_bridge *br,
+ struct mlx5_td {
++	/* protects tirs list changes while tirs refresh */
++	struct mutex     list_lock;
+ 	struct list_head tirs_list;
+ 	u32              tdn;
+ };
+diff --git a/include/linux/string.h b/include/linux/string.h
+index 7927b875f80c..6ab0a6fa512e 100644
+--- a/include/linux/string.h
++++ b/include/linux/string.h
+@@ -150,6 +150,9 @@ extern void * memscan(void *,int,__kernel_size_t);
+ #ifndef __HAVE_ARCH_MEMCMP
+ extern int memcmp(const void *,const void *,__kernel_size_t);
+ #endif
++#ifndef __HAVE_ARCH_BCMP
++extern int bcmp(const void *,const void *,__kernel_size_t);
++#endif
+ #ifndef __HAVE_ARCH_MEMCHR
+ extern void * memchr(const void *,int,__kernel_size_t);
+ #endif
+diff --git a/include/linux/virtio_ring.h b/include/linux/virtio_ring.h
+index fab02133a919..3dc70adfe5f5 100644
+--- a/include/linux/virtio_ring.h
++++ b/include/linux/virtio_ring.h
+@@ -63,7 +63,7 @@ struct virtqueue;
+ /*
+  * Creates a virtqueue and allocates the descriptor ring.  If
+  * may_reduce_num is set, then this may allocate a smaller ring than
+- * expected.  The caller should query virtqueue_get_ring_size to learn
++ * expected.  The caller should query virtqueue_get_vring_size to learn
+  * the actual size of the ring.
+  */
+ struct virtqueue *vring_create_virtqueue(unsigned int index,
+diff --git a/include/net/ip.h b/include/net/ip.h
+index be3cad9c2e4c..583526aad1d0 100644
+--- a/include/net/ip.h
++++ b/include/net/ip.h
+@@ -677,7 +677,7 @@ int ip_options_get_from_user(struct net *net, struct ip_options_rcu **optp,
+ 			     unsigned char __user *data, int optlen);
+ void ip_options_undo(struct ip_options *opt);
+ void ip_forward_options(struct sk_buff *skb);
+-int ip_options_rcv_srr(struct sk_buff *skb);
++int ip_options_rcv_srr(struct sk_buff *skb, struct net_device *dev);
  
- 	own_query = port ? &port->ip4_own_query : &br->ip4_own_query;
+ /*
+  *	Functions provided by ip_sockglue.c
+diff --git a/include/net/net_namespace.h b/include/net/net_namespace.h
+index 99d4148e0f90..1c3126c14930 100644
+--- a/include/net/net_namespace.h
++++ b/include/net/net_namespace.h
+@@ -58,6 +58,7 @@ struct net {
+ 						 */
+ 	spinlock_t		rules_mod_lock;
  
-+	memset(&br_group, 0, sizeof(br_group));
- 	br_group.u.ip4 = group;
- 	br_group.proto = htons(ETH_P_IP);
- 	br_group.vid = vid;
-@@ -1512,6 +1514,7 @@ static void br_ip6_multicast_leave_group(struct net_bridge *br,
++	u32			hash_mix;
+ 	atomic64_t		cookie_gen;
  
- 	own_query = port ? &port->ip6_own_query : &br->ip6_own_query;
+ 	struct list_head	list;		/* list of network namespaces */
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index 3d58acf94dd2..0612439909dc 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -691,10 +691,12 @@ static inline void nft_set_gc_batch_add(struct nft_set_gc_batch *gcb,
+ 	gcb->elems[gcb->head.cnt++] = elem;
+ }
  
-+	memset(&br_group, 0, sizeof(br_group));
- 	br_group.u.ip6 = *group;
- 	br_group.proto = htons(ETH_P_IPV6);
- 	br_group.vid = vid;
-diff --git a/net/bridge/br_netfilter_hooks.c b/net/bridge/br_netfilter_hooks.c
-index c93c35bb73dd..40d058378b52 100644
---- a/net/bridge/br_netfilter_hooks.c
-+++ b/net/bridge/br_netfilter_hooks.c
-@@ -881,11 +881,6 @@ static const struct nf_br_ops br_ops = {
- 	.br_dev_xmit_hook =	br_nf_dev_xmit,
- };
++struct nft_expr_ops;
+ /**
+  *	struct nft_expr_type - nf_tables expression type
+  *
+  *	@select_ops: function to select nft_expr_ops
++ *	@release_ops: release nft_expr_ops
+  *	@ops: default ops, used when no select_ops functions is present
+  *	@list: used internally
+  *	@name: Identifier
+@@ -707,6 +709,7 @@ static inline void nft_set_gc_batch_add(struct nft_set_gc_batch *gcb,
+ struct nft_expr_type {
+ 	const struct nft_expr_ops	*(*select_ops)(const struct nft_ctx *,
+ 						       const struct nlattr * const tb[]);
++	void				(*release_ops)(const struct nft_expr_ops *ops);
+ 	const struct nft_expr_ops	*ops;
+ 	struct list_head		list;
+ 	const char			*name;
+diff --git a/include/net/netns/hash.h b/include/net/netns/hash.h
+index 16a842456189..d9b665151f3d 100644
+--- a/include/net/netns/hash.h
++++ b/include/net/netns/hash.h
+@@ -2,16 +2,10 @@
+ #ifndef __NET_NS_HASH_H__
+ #define __NET_NS_HASH_H__
  
--void br_netfilter_enable(void)
--{
--}
--EXPORT_SYMBOL_GPL(br_netfilter_enable);
--
- /* For br_nf_post_routing, we need (prio = NF_BR_PRI_LAST), because
-  * br_dev_queue_push_xmit is called afterwards */
- static const struct nf_hook_ops br_nf_ops[] = {
-diff --git a/net/bridge/netfilter/ebtables.c b/net/bridge/netfilter/ebtables.c
-index 6693e209efe8..f77888ec93f1 100644
---- a/net/bridge/netfilter/ebtables.c
-+++ b/net/bridge/netfilter/ebtables.c
-@@ -31,10 +31,6 @@
- /* needed for logical [in,out]-dev filtering */
- #include "../br_private.h"
- 
--#define BUGPRINT(format, args...) printk("kernel msg: ebtables bug: please "\
--					 "report to author: "format, ## args)
--/* #define BUGPRINT(format, args...) */
+-#include <asm/cache.h>
 -
- /* Each cpu has its own set of counters, so there is no need for write_lock in
-  * the softirq
-  * For reading or updating the counters, the user context needs to
-@@ -466,8 +462,6 @@ static int ebt_verify_pointers(const struct ebt_replace *repl,
- 				/* we make userspace set this right,
- 				 * so there is no misunderstanding
- 				 */
--				BUGPRINT("EBT_ENTRY_OR_ENTRIES shouldn't be set "
--					 "in distinguisher\n");
- 				return -EINVAL;
- 			}
- 			if (i != NF_BR_NUMHOOKS)
-@@ -485,18 +479,14 @@ static int ebt_verify_pointers(const struct ebt_replace *repl,
- 			offset += e->next_offset;
- 		}
- 	}
--	if (offset != limit) {
--		BUGPRINT("entries_size too small\n");
-+	if (offset != limit)
- 		return -EINVAL;
--	}
+-struct net;
++#include <net/net_namespace.h>
  
- 	/* check if all valid hooks have a chain */
- 	for (i = 0; i < NF_BR_NUMHOOKS; i++) {
- 		if (!newinfo->hook_entry[i] &&
--		   (valid_hooks & (1 << i))) {
--			BUGPRINT("Valid hook without chain\n");
-+		   (valid_hooks & (1 << i)))
- 			return -EINVAL;
--		}
- 	}
- 	return 0;
+ static inline u32 net_hash_mix(const struct net *net)
+ {
+-#ifdef CONFIG_NET_NS
+-	return (u32)(((unsigned long)net) >> ilog2(sizeof(*net)));
+-#else
+-	return 0;
+-#endif
++	return net->hash_mix;
  }
-@@ -523,26 +513,20 @@ ebt_check_entry_size_and_hooks(const struct ebt_entry *e,
- 		/* this checks if the previous chain has as many entries
- 		 * as it said it has
- 		 */
--		if (*n != *cnt) {
--			BUGPRINT("nentries does not equal the nr of entries "
--				 "in the chain\n");
-+		if (*n != *cnt)
- 			return -EINVAL;
--		}
-+
- 		if (((struct ebt_entries *)e)->policy != EBT_DROP &&
- 		   ((struct ebt_entries *)e)->policy != EBT_ACCEPT) {
- 			/* only RETURN from udc */
- 			if (i != NF_BR_NUMHOOKS ||
--			   ((struct ebt_entries *)e)->policy != EBT_RETURN) {
--				BUGPRINT("bad policy\n");
-+			   ((struct ebt_entries *)e)->policy != EBT_RETURN)
- 				return -EINVAL;
--			}
- 		}
- 		if (i == NF_BR_NUMHOOKS) /* it's a user defined chain */
- 			(*udc_cnt)++;
--		if (((struct ebt_entries *)e)->counter_offset != *totalcnt) {
--			BUGPRINT("counter_offset != totalcnt");
-+		if (((struct ebt_entries *)e)->counter_offset != *totalcnt)
- 			return -EINVAL;
--		}
- 		*n = ((struct ebt_entries *)e)->nentries;
- 		*cnt = 0;
- 		return 0;
-@@ -550,15 +534,13 @@ ebt_check_entry_size_and_hooks(const struct ebt_entry *e,
- 	/* a plain old entry, heh */
- 	if (sizeof(struct ebt_entry) > e->watchers_offset ||
- 	   e->watchers_offset > e->target_offset ||
--	   e->target_offset >= e->next_offset) {
--		BUGPRINT("entry offsets not in right order\n");
-+	   e->target_offset >= e->next_offset)
- 		return -EINVAL;
--	}
+ #endif
+diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
+index e960c4f46ee0..b07a2acc4eec 100644
+--- a/kernel/irq/chip.c
++++ b/kernel/irq/chip.c
+@@ -1384,6 +1384,10 @@ int irq_chip_set_vcpu_affinity_parent(struct irq_data *data, void *vcpu_info)
+ int irq_chip_set_wake_parent(struct irq_data *data, unsigned int on)
+ {
+ 	data = data->parent_data;
 +
- 	/* this is not checked anywhere else */
--	if (e->next_offset - e->target_offset < sizeof(struct ebt_entry_target)) {
--		BUGPRINT("target size too small\n");
-+	if (e->next_offset - e->target_offset < sizeof(struct ebt_entry_target))
- 		return -EINVAL;
--	}
++	if (data->chip->flags & IRQCHIP_SKIP_SET_WAKE)
++		return 0;
 +
- 	(*cnt)++;
- 	(*totalcnt)++;
- 	return 0;
-@@ -678,18 +660,15 @@ ebt_check_entry(struct ebt_entry *e, struct net *net,
- 	if (e->bitmask == 0)
- 		return 0;
+ 	if (data->chip->irq_set_wake)
+ 		return data->chip->irq_set_wake(data, on);
  
--	if (e->bitmask & ~EBT_F_MASK) {
--		BUGPRINT("Unknown flag for bitmask\n");
-+	if (e->bitmask & ~EBT_F_MASK)
- 		return -EINVAL;
--	}
--	if (e->invflags & ~EBT_INV_MASK) {
--		BUGPRINT("Unknown flag for inv bitmask\n");
-+
-+	if (e->invflags & ~EBT_INV_MASK)
- 		return -EINVAL;
--	}
--	if ((e->bitmask & EBT_NOPROTO) && (e->bitmask & EBT_802_3)) {
--		BUGPRINT("NOPROTO & 802_3 not allowed\n");
-+
-+	if ((e->bitmask & EBT_NOPROTO) && (e->bitmask & EBT_802_3))
- 		return -EINVAL;
--	}
-+
- 	/* what hook do we belong to? */
- 	for (i = 0; i < NF_BR_NUMHOOKS; i++) {
- 		if (!newinfo->hook_entry[i])
-@@ -748,13 +727,11 @@ ebt_check_entry(struct ebt_entry *e, struct net *net,
- 	t->u.target = target;
- 	if (t->u.target == &ebt_standard_target) {
- 		if (gap < sizeof(struct ebt_standard_target)) {
--			BUGPRINT("Standard target size too big\n");
- 			ret = -EFAULT;
- 			goto cleanup_watchers;
- 		}
- 		if (((struct ebt_standard_target *)t)->verdict <
- 		   -NUM_STANDARD_TARGETS) {
--			BUGPRINT("Invalid standard target\n");
- 			ret = -EFAULT;
- 			goto cleanup_watchers;
- 		}
-@@ -813,10 +790,9 @@ static int check_chainloops(const struct ebt_entries *chain, struct ebt_cl_stack
- 		if (strcmp(t->u.name, EBT_STANDARD_TARGET))
- 			goto letscontinue;
- 		if (e->target_offset + sizeof(struct ebt_standard_target) >
--		   e->next_offset) {
--			BUGPRINT("Standard target size too big\n");
-+		   e->next_offset)
- 			return -1;
--		}
-+
- 		verdict = ((struct ebt_standard_target *)t)->verdict;
- 		if (verdict >= 0) { /* jump to another chain */
- 			struct ebt_entries *hlp2 =
-@@ -825,14 +801,12 @@ static int check_chainloops(const struct ebt_entries *chain, struct ebt_cl_stack
- 				if (hlp2 == cl_s[i].cs.chaininfo)
- 					break;
- 			/* bad destination or loop */
--			if (i == udc_cnt) {
--				BUGPRINT("bad destination\n");
-+			if (i == udc_cnt)
- 				return -1;
--			}
--			if (cl_s[i].cs.n) {
--				BUGPRINT("loop\n");
-+
-+			if (cl_s[i].cs.n)
- 				return -1;
--			}
-+
- 			if (cl_s[i].hookmask & (1 << hooknr))
- 				goto letscontinue;
- 			/* this can't be 0, so the loop test is correct */
-@@ -865,24 +839,21 @@ static int translate_table(struct net *net, const char *name,
- 	i = 0;
- 	while (i < NF_BR_NUMHOOKS && !newinfo->hook_entry[i])
- 		i++;
--	if (i == NF_BR_NUMHOOKS) {
--		BUGPRINT("No valid hooks specified\n");
-+	if (i == NF_BR_NUMHOOKS)
- 		return -EINVAL;
--	}
--	if (newinfo->hook_entry[i] != (struct ebt_entries *)newinfo->entries) {
--		BUGPRINT("Chains don't start at beginning\n");
-+
-+	if (newinfo->hook_entry[i] != (struct ebt_entries *)newinfo->entries)
- 		return -EINVAL;
--	}
-+
- 	/* make sure chains are ordered after each other in same order
- 	 * as their corresponding hooks
- 	 */
- 	for (j = i + 1; j < NF_BR_NUMHOOKS; j++) {
- 		if (!newinfo->hook_entry[j])
- 			continue;
--		if (newinfo->hook_entry[j] <= newinfo->hook_entry[i]) {
--			BUGPRINT("Hook order must be followed\n");
-+		if (newinfo->hook_entry[j] <= newinfo->hook_entry[i])
- 			return -EINVAL;
--		}
-+
- 		i = j;
+diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
+index 84fa255d0329..e16e022eae09 100644
+--- a/kernel/irq/irqdesc.c
++++ b/kernel/irq/irqdesc.c
+@@ -558,6 +558,7 @@ int __init early_irq_init(void)
+ 		alloc_masks(&desc[i], node);
+ 		raw_spin_lock_init(&desc[i].lock);
+ 		lockdep_set_class(&desc[i].lock, &irq_desc_lock_class);
++		mutex_init(&desc[i].request_mutex);
+ 		desc_set_defaults(i, &desc[i], node, NULL, NULL);
  	}
+ 	return arch_early_irq_init();
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 310d0637fe4b..5e61a1a99e38 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -7713,10 +7713,10 @@ static void update_cfs_rq_h_load(struct cfs_rq *cfs_rq)
+ 	if (cfs_rq->last_h_load_update == now)
+ 		return;
  
-@@ -900,15 +871,11 @@ static int translate_table(struct net *net, const char *name,
- 	if (ret != 0)
- 		return ret;
+-	cfs_rq->h_load_next = NULL;
++	WRITE_ONCE(cfs_rq->h_load_next, NULL);
+ 	for_each_sched_entity(se) {
+ 		cfs_rq = cfs_rq_of(se);
+-		cfs_rq->h_load_next = se;
++		WRITE_ONCE(cfs_rq->h_load_next, se);
+ 		if (cfs_rq->last_h_load_update == now)
+ 			break;
+ 	}
+@@ -7726,7 +7726,7 @@ static void update_cfs_rq_h_load(struct cfs_rq *cfs_rq)
+ 		cfs_rq->last_h_load_update = now;
+ 	}
  
--	if (i != j) {
--		BUGPRINT("nentries does not equal the nr of entries in the "
--			 "(last) chain\n");
-+	if (i != j)
- 		return -EINVAL;
--	}
--	if (k != newinfo->nentries) {
--		BUGPRINT("Total nentries is wrong\n");
-+
-+	if (k != newinfo->nentries)
- 		return -EINVAL;
--	}
+-	while ((se = cfs_rq->h_load_next) != NULL) {
++	while ((se = READ_ONCE(cfs_rq->h_load_next)) != NULL) {
+ 		load = cfs_rq->h_load;
+ 		load = div64_ul(load * se->avg.load_avg,
+ 			cfs_rq_load_avg(cfs_rq) + 1);
+diff --git a/kernel/time/alarmtimer.c b/kernel/time/alarmtimer.c
+index 2c97e8c2d29f..0519a8805aab 100644
+--- a/kernel/time/alarmtimer.c
++++ b/kernel/time/alarmtimer.c
+@@ -594,7 +594,7 @@ static ktime_t alarm_timer_remaining(struct k_itimer *timr, ktime_t now)
+ {
+ 	struct alarm *alarm = &timr->it.alarm.alarmtimer;
  
- 	/* get the location of the udc, put them in an array
- 	 * while we're at it, allocate the chainstack
-@@ -942,7 +909,6 @@ static int translate_table(struct net *net, const char *name,
- 		   ebt_get_udc_positions, newinfo, &i, cl_s);
- 		/* sanity check */
- 		if (i != udc_cnt) {
--			BUGPRINT("i != udc_cnt\n");
- 			vfree(cl_s);
- 			return -EFAULT;
- 		}
-@@ -1042,7 +1008,6 @@ static int do_replace_finish(struct net *net, struct ebt_replace *repl,
- 		goto free_unlock;
+-	return ktime_sub(now, alarm->node.expires);
++	return ktime_sub(alarm->node.expires, now);
+ }
+ 
+ /**
+diff --git a/lib/string.c b/lib/string.c
+index 38e4ca08e757..3ab861c1a857 100644
+--- a/lib/string.c
++++ b/lib/string.c
+@@ -866,6 +866,26 @@ __visible int memcmp(const void *cs, const void *ct, size_t count)
+ EXPORT_SYMBOL(memcmp);
+ #endif
+ 
++#ifndef __HAVE_ARCH_BCMP
++/**
++ * bcmp - returns 0 if and only if the buffers have identical contents.
++ * @a: pointer to first buffer.
++ * @b: pointer to second buffer.
++ * @len: size of buffers.
++ *
++ * The sign or magnitude of a non-zero return value has no particular
++ * meaning, and architectures may implement their own more efficient bcmp(). So
++ * while this particular implementation is a simple (tail) call to memcmp, do
++ * not rely on anything but whether the return value is zero or non-zero.
++ */
++#undef bcmp
++int bcmp(const void *a, const void *b, size_t len)
++{
++	return memcmp(a, b, len);
++}
++EXPORT_SYMBOL(bcmp);
++#endif
++
+ #ifndef __HAVE_ARCH_MEMSCAN
+ /**
+  * memscan - Find a character in an area of memory.
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index faf357eaf0ce..8b03c698f86e 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -753,6 +753,21 @@ static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
+ 	spinlock_t *ptl;
  
- 	if (repl->num_counters && repl->num_counters != t->private->nentries) {
--		BUGPRINT("Wrong nr. of counters requested\n");
- 		ret = -EINVAL;
- 		goto free_unlock;
+ 	ptl = pmd_lock(mm, pmd);
++	if (!pmd_none(*pmd)) {
++		if (write) {
++			if (pmd_pfn(*pmd) != pfn_t_to_pfn(pfn)) {
++				WARN_ON_ONCE(!is_huge_zero_pmd(*pmd));
++				goto out_unlock;
++			}
++			entry = pmd_mkyoung(*pmd);
++			entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
++			if (pmdp_set_access_flags(vma, addr, pmd, entry, 1))
++				update_mmu_cache_pmd(vma, addr, pmd);
++		}
++
++		goto out_unlock;
++	}
++
+ 	entry = pmd_mkhuge(pfn_t_pmd(pfn, prot));
+ 	if (pfn_t_devmap(pfn))
+ 		entry = pmd_mkdevmap(entry);
+@@ -764,11 +779,16 @@ static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
+ 	if (pgtable) {
+ 		pgtable_trans_huge_deposit(mm, pmd, pgtable);
+ 		mm_inc_nr_ptes(mm);
++		pgtable = NULL;
  	}
-@@ -1118,15 +1083,12 @@ static int do_replace(struct net *net, const void __user *user,
- 	if (copy_from_user(&tmp, user, sizeof(tmp)) != 0)
- 		return -EFAULT;
- 
--	if (len != sizeof(tmp) + tmp.entries_size) {
--		BUGPRINT("Wrong len argument\n");
-+	if (len != sizeof(tmp) + tmp.entries_size)
- 		return -EINVAL;
--	}
  
--	if (tmp.entries_size == 0) {
--		BUGPRINT("Entries_size never zero\n");
-+	if (tmp.entries_size == 0)
- 		return -EINVAL;
--	}
+ 	set_pmd_at(mm, addr, pmd, entry);
+ 	update_mmu_cache_pmd(vma, addr, pmd);
 +
- 	/* overflow check */
- 	if (tmp.nentries >= ((INT_MAX - sizeof(struct ebt_table_info)) /
- 			NR_CPUS - SMP_CACHE_BYTES) / sizeof(struct ebt_counter))
-@@ -1153,7 +1115,6 @@ static int do_replace(struct net *net, const void __user *user,
- 	}
- 	if (copy_from_user(
- 	   newinfo->entries, tmp.entries, tmp.entries_size) != 0) {
--		BUGPRINT("Couldn't copy entries from userspace\n");
- 		ret = -EFAULT;
- 		goto free_entries;
- 	}
-@@ -1194,10 +1155,8 @@ int ebt_register_table(struct net *net, const struct ebt_table *input_table,
- 
- 	if (input_table == NULL || (repl = input_table->table) == NULL ||
- 	    repl->entries == NULL || repl->entries_size == 0 ||
--	    repl->counters != NULL || input_table->private != NULL) {
--		BUGPRINT("Bad table data for ebt_register_table!!!\n");
-+	    repl->counters != NULL || input_table->private != NULL)
- 		return -EINVAL;
--	}
- 
- 	/* Don't add one table to multiple lists. */
- 	table = kmemdup(input_table, sizeof(struct ebt_table), GFP_KERNEL);
-@@ -1235,13 +1194,10 @@ int ebt_register_table(struct net *net, const struct ebt_table *input_table,
- 				((char *)repl->hook_entry[i] - repl->entries);
- 	}
- 	ret = translate_table(net, repl->name, newinfo);
--	if (ret != 0) {
--		BUGPRINT("Translate_table failed\n");
-+	if (ret != 0)
- 		goto free_chainstack;
--	}
++out_unlock:
+ 	spin_unlock(ptl);
++	if (pgtable)
++		pte_free(mm, pgtable);
+ }
  
- 	if (table->check && table->check(newinfo, table->valid_hooks)) {
--		BUGPRINT("The table doesn't like its own initial data, lol\n");
- 		ret = -EINVAL;
- 		goto free_chainstack;
- 	}
-@@ -1252,7 +1208,6 @@ int ebt_register_table(struct net *net, const struct ebt_table *input_table,
- 	list_for_each_entry(t, &net->xt.tables[NFPROTO_BRIDGE], list) {
- 		if (strcmp(t->name, table->name) == 0) {
- 			ret = -EEXIST;
--			BUGPRINT("Table name already exists\n");
- 			goto free_unlock;
- 		}
- 	}
-@@ -1320,7 +1275,6 @@ static int do_update_counters(struct net *net, const char *name,
- 		goto free_tmp;
+ vm_fault_t vmf_insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
+@@ -819,6 +839,20 @@ static void insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
+ 	spinlock_t *ptl;
  
- 	if (num_counters != t->private->nentries) {
--		BUGPRINT("Wrong nr of counters\n");
- 		ret = -EINVAL;
- 		goto unlock_mutex;
+ 	ptl = pud_lock(mm, pud);
++	if (!pud_none(*pud)) {
++		if (write) {
++			if (pud_pfn(*pud) != pfn_t_to_pfn(pfn)) {
++				WARN_ON_ONCE(!is_huge_zero_pud(*pud));
++				goto out_unlock;
++			}
++			entry = pud_mkyoung(*pud);
++			entry = maybe_pud_mkwrite(pud_mkdirty(entry), vma);
++			if (pudp_set_access_flags(vma, addr, pud, entry, 1))
++				update_mmu_cache_pud(vma, addr, pud);
++		}
++		goto out_unlock;
++	}
++
+ 	entry = pud_mkhuge(pfn_t_pud(pfn, prot));
+ 	if (pfn_t_devmap(pfn))
+ 		entry = pud_mkdevmap(entry);
+@@ -828,6 +862,8 @@ static void insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
  	}
-@@ -1447,10 +1401,8 @@ static int copy_counters_to_user(struct ebt_table *t,
- 	if (num_counters == 0)
- 		return 0;
- 
--	if (num_counters != nentries) {
--		BUGPRINT("Num_counters wrong\n");
-+	if (num_counters != nentries)
- 		return -EINVAL;
--	}
- 
- 	counterstmp = vmalloc(array_size(nentries, sizeof(*counterstmp)));
- 	if (!counterstmp)
-@@ -1496,15 +1448,11 @@ static int copy_everything_to_user(struct ebt_table *t, void __user *user,
- 	   (tmp.num_counters ? nentries * sizeof(struct ebt_counter) : 0))
- 		return -EINVAL;
- 
--	if (tmp.nentries != nentries) {
--		BUGPRINT("Nentries wrong\n");
-+	if (tmp.nentries != nentries)
- 		return -EINVAL;
--	}
- 
--	if (tmp.entries_size != entries_size) {
--		BUGPRINT("Wrong size\n");
-+	if (tmp.entries_size != entries_size)
- 		return -EINVAL;
--	}
- 
- 	ret = copy_counters_to_user(t, oldcounters, tmp.counters,
- 					tmp.num_counters, nentries);
-@@ -1576,7 +1524,6 @@ static int do_ebt_get_ctl(struct sock *sk, int cmd, void __user *user, int *len)
- 		}
- 		mutex_unlock(&ebt_mutex);
- 		if (copy_to_user(user, &tmp, *len) != 0) {
--			BUGPRINT("c2u Didn't work\n");
- 			ret = -EFAULT;
- 			break;
- 		}
-diff --git a/net/ceph/ceph_common.c b/net/ceph/ceph_common.c
-index 9cab80207ced..79eac465ec65 100644
---- a/net/ceph/ceph_common.c
-+++ b/net/ceph/ceph_common.c
-@@ -738,7 +738,6 @@ int __ceph_open_session(struct ceph_client *client, unsigned long started)
+ 	set_pud_at(mm, addr, pud, entry);
+ 	update_mmu_cache_pud(vma, addr, pud);
++
++out_unlock:
+ 	spin_unlock(ptl);
  }
- EXPORT_SYMBOL(__ceph_open_session);
  
--
- int ceph_open_session(struct ceph_client *client)
- {
- 	int ret;
-@@ -754,6 +753,23 @@ int ceph_open_session(struct ceph_client *client)
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index 79a7d2a06bba..5bbf2de02a0f 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -3882,6 +3882,22 @@ struct wb_domain *mem_cgroup_wb_domain(struct bdi_writeback *wb)
+ 	return &memcg->cgwb_domain;
  }
- EXPORT_SYMBOL(ceph_open_session);
  
-+int ceph_wait_for_latest_osdmap(struct ceph_client *client,
-+				unsigned long timeout)
++/*
++ * idx can be of type enum memcg_stat_item or node_stat_item.
++ * Keep in sync with memcg_exact_page().
++ */
++static unsigned long memcg_exact_page_state(struct mem_cgroup *memcg, int idx)
 +{
-+	u64 newest_epoch;
-+	int ret;
-+
-+	ret = ceph_monc_get_version(&client->monc, "osdmap", &newest_epoch);
-+	if (ret)
-+		return ret;
-+
-+	if (client->osdc.osdmap->epoch >= newest_epoch)
-+		return 0;
++	long x = atomic_long_read(&memcg->stat[idx]);
++	int cpu;
 +
-+	ceph_osdc_maybe_request_map(&client->osdc);
-+	return ceph_monc_wait_osdmap(&client->monc, newest_epoch, timeout);
++	for_each_online_cpu(cpu)
++		x += per_cpu_ptr(memcg->stat_cpu, cpu)->count[idx];
++	if (x < 0)
++		x = 0;
++	return x;
 +}
-+EXPORT_SYMBOL(ceph_wait_for_latest_osdmap);
- 
- static int __init init_ceph_lib(void)
- {
-diff --git a/net/ceph/mon_client.c b/net/ceph/mon_client.c
-index 18deb3d889c4..a53e4fbb6319 100644
---- a/net/ceph/mon_client.c
-+++ b/net/ceph/mon_client.c
-@@ -922,6 +922,15 @@ int ceph_monc_blacklist_add(struct ceph_mon_client *monc,
- 	mutex_unlock(&monc->mutex);
- 
- 	ret = wait_generic_request(req);
-+	if (!ret)
-+		/*
-+		 * Make sure we have the osdmap that includes the blacklist
-+		 * entry.  This is needed to ensure that the OSDs pick up the
-+		 * new blacklist before processing any future requests from
-+		 * this client.
-+		 */
-+		ret = ceph_wait_for_latest_osdmap(monc->client, 0);
 +
- out:
- 	put_generic_request(req);
- 	return ret;
-diff --git a/net/core/datagram.c b/net/core/datagram.c
-index b2651bb6d2a3..e657289db4ac 100644
---- a/net/core/datagram.c
-+++ b/net/core/datagram.c
-@@ -279,7 +279,7 @@ struct sk_buff *__skb_try_recv_datagram(struct sock *sk, unsigned int flags,
- 			break;
+ /**
+  * mem_cgroup_wb_stats - retrieve writeback related stats from its memcg
+  * @wb: bdi_writeback in question
+@@ -3907,10 +3923,10 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages,
+ 	struct mem_cgroup *memcg = mem_cgroup_from_css(wb->memcg_css);
+ 	struct mem_cgroup *parent;
+ 
+-	*pdirty = memcg_page_state(memcg, NR_FILE_DIRTY);
++	*pdirty = memcg_exact_page_state(memcg, NR_FILE_DIRTY);
+ 
+ 	/* this should eventually include NR_UNSTABLE_NFS */
+-	*pwriteback = memcg_page_state(memcg, NR_WRITEBACK);
++	*pwriteback = memcg_exact_page_state(memcg, NR_WRITEBACK);
+ 	*pfilepages = mem_cgroup_nr_lru_pages(memcg, (1 << LRU_INACTIVE_FILE) |
+ 						     (1 << LRU_ACTIVE_FILE));
+ 	*pheadroom = PAGE_COUNTER_MAX;
+diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
+index ac92b2eb32b1..e4777614a8a0 100644
+--- a/net/bridge/br_multicast.c
++++ b/net/bridge/br_multicast.c
+@@ -599,6 +599,7 @@ static int br_ip4_multicast_add_group(struct net_bridge *br,
+ 	if (ipv4_is_local_multicast(group))
+ 		return 0;
+ 
++	memset(&br_group, 0, sizeof(br_group));
+ 	br_group.u.ip4 = group;
+ 	br_group.proto = htons(ETH_P_IP);
+ 	br_group.vid = vid;
+@@ -1489,6 +1490,7 @@ static void br_ip4_multicast_leave_group(struct net_bridge *br,
+ 
+ 	own_query = port ? &port->ip4_own_query : &br->ip4_own_query;
  
- 		sk_busy_loop(sk, flags & MSG_DONTWAIT);
--	} while (!skb_queue_empty(&sk->sk_receive_queue));
-+	} while (sk->sk_receive_queue.prev != *last);
++	memset(&br_group, 0, sizeof(br_group));
+ 	br_group.u.ip4 = group;
+ 	br_group.proto = htons(ETH_P_IP);
+ 	br_group.vid = vid;
+@@ -1512,6 +1514,7 @@ static void br_ip6_multicast_leave_group(struct net_bridge *br,
  
- 	error = -EAGAIN;
+ 	own_query = port ? &port->ip6_own_query : &br->ip6_own_query;
  
++	memset(&br_group, 0, sizeof(br_group));
+ 	br_group.u.ip6 = *group;
+ 	br_group.proto = htons(ETH_P_IPV6);
+ 	br_group.vid = vid;
 diff --git a/net/core/dev.c b/net/core/dev.c
 index 5d03889502eb..12824e007e06 100644
 --- a/net/core/dev.c
@@ -30959,119 +3787,6 @@ index 158264f7cfaf..3a7f19a61768 100644
  	}
  
  	ret = -EFAULT;
-diff --git a/net/core/gen_stats.c b/net/core/gen_stats.c
-index 9bf1b9ad1780..ac679f74ba47 100644
---- a/net/core/gen_stats.c
-+++ b/net/core/gen_stats.c
-@@ -291,7 +291,6 @@ __gnet_stats_copy_queue_cpu(struct gnet_stats_queue *qstats,
- 	for_each_possible_cpu(i) {
- 		const struct gnet_stats_queue *qcpu = per_cpu_ptr(q, i);
- 
--		qstats->qlen = 0;
- 		qstats->backlog += qcpu->backlog;
- 		qstats->drops += qcpu->drops;
- 		qstats->requeues += qcpu->requeues;
-@@ -307,7 +306,6 @@ void __gnet_stats_copy_queue(struct gnet_stats_queue *qstats,
- 	if (cpu) {
- 		__gnet_stats_copy_queue_cpu(qstats, cpu);
- 	} else {
--		qstats->qlen = q->qlen;
- 		qstats->backlog = q->backlog;
- 		qstats->drops = q->drops;
- 		qstats->requeues = q->requeues;
-diff --git a/net/core/gro_cells.c b/net/core/gro_cells.c
-index acf45ddbe924..e095fb871d91 100644
---- a/net/core/gro_cells.c
-+++ b/net/core/gro_cells.c
-@@ -13,22 +13,36 @@ int gro_cells_receive(struct gro_cells *gcells, struct sk_buff *skb)
- {
- 	struct net_device *dev = skb->dev;
- 	struct gro_cell *cell;
-+	int res;
- 
--	if (!gcells->cells || skb_cloned(skb) || netif_elide_gro(dev))
--		return netif_rx(skb);
-+	rcu_read_lock();
-+	if (unlikely(!(dev->flags & IFF_UP)))
-+		goto drop;
-+
-+	if (!gcells->cells || skb_cloned(skb) || netif_elide_gro(dev)) {
-+		res = netif_rx(skb);
-+		goto unlock;
-+	}
- 
- 	cell = this_cpu_ptr(gcells->cells);
- 
- 	if (skb_queue_len(&cell->napi_skbs) > netdev_max_backlog) {
-+drop:
- 		atomic_long_inc(&dev->rx_dropped);
- 		kfree_skb(skb);
--		return NET_RX_DROP;
-+		res = NET_RX_DROP;
-+		goto unlock;
- 	}
- 
- 	__skb_queue_tail(&cell->napi_skbs, skb);
- 	if (skb_queue_len(&cell->napi_skbs) == 1)
- 		napi_schedule(&cell->napi);
--	return NET_RX_SUCCESS;
-+
-+	res = NET_RX_SUCCESS;
-+
-+unlock:
-+	rcu_read_unlock();
-+	return res;
- }
- EXPORT_SYMBOL(gro_cells_receive);
- 
-diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
-index ff9fd2bb4ce4..aec26584f0ca 100644
---- a/net/core/net-sysfs.c
-+++ b/net/core/net-sysfs.c
-@@ -934,6 +934,8 @@ static int rx_queue_add_kobject(struct net_device *dev, int index)
- 	if (error)
- 		return error;
- 
-+	dev_hold(queue->dev);
-+
- 	if (dev->sysfs_rx_queue_group) {
- 		error = sysfs_create_group(kobj, dev->sysfs_rx_queue_group);
- 		if (error) {
-@@ -943,7 +945,6 @@ static int rx_queue_add_kobject(struct net_device *dev, int index)
- 	}
- 
- 	kobject_uevent(kobj, KOBJ_ADD);
--	dev_hold(queue->dev);
- 
- 	return error;
- }
-@@ -1472,6 +1473,8 @@ static int netdev_queue_add_kobject(struct net_device *dev, int index)
- 	if (error)
- 		return error;
- 
-+	dev_hold(queue->dev);
-+
- #ifdef CONFIG_BQL
- 	error = sysfs_create_group(kobj, &dql_group);
- 	if (error) {
-@@ -1481,7 +1484,6 @@ static int netdev_queue_add_kobject(struct net_device *dev, int index)
- #endif
- 
- 	kobject_uevent(kobj, KOBJ_ADD);
--	dev_hold(queue->dev);
- 
- 	return 0;
- }
-@@ -1547,6 +1549,9 @@ static int register_queue_kobjects(struct net_device *dev)
- error:
- 	netdev_queue_update_kobjects(dev, txq, 0);
- 	net_rx_queue_update_kobjects(dev, rxq, 0);
-+#ifdef CONFIG_SYSFS
-+	kset_unregister(dev->queues_kset);
-+#endif
- 	return error;
- }
- 
 diff --git a/net/core/net_namespace.c b/net/core/net_namespace.c
 index b02fb19df2cc..40c249c574c1 100644
 --- a/net/core/net_namespace.c
@@ -31097,154 +3812,6 @@ index 2415d9cb9b89..ef2cd5712098 100644
  		return -E2BIG;
  
  	lp = NAPI_GRO_CB(p)->last;
-diff --git a/net/core/skmsg.c b/net/core/skmsg.c
-index 8c826603bf36..8bc0ba1ebabe 100644
---- a/net/core/skmsg.c
-+++ b/net/core/skmsg.c
-@@ -545,6 +545,7 @@ static void sk_psock_destroy_deferred(struct work_struct *gc)
- 	struct sk_psock *psock = container_of(gc, struct sk_psock, gc);
- 
- 	/* No sk_callback_lock since already detached. */
-+	strp_stop(&psock->parser.strp);
- 	strp_done(&psock->parser.strp);
- 
- 	cancel_work_sync(&psock->work);
-diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c
-index d5740bad5b18..57d84e9b7b6f 100644
---- a/net/dccp/ipv6.c
-+++ b/net/dccp/ipv6.c
-@@ -436,8 +436,8 @@ static struct sock *dccp_v6_request_recv_sock(const struct sock *sk,
- 		newnp->ipv6_mc_list = NULL;
- 		newnp->ipv6_ac_list = NULL;
- 		newnp->ipv6_fl_list = NULL;
--		newnp->mcast_oif   = inet6_iif(skb);
--		newnp->mcast_hops  = ipv6_hdr(skb)->hop_limit;
-+		newnp->mcast_oif   = inet_iif(skb);
-+		newnp->mcast_hops  = ip_hdr(skb)->ttl;
- 
- 		/*
- 		 * No need to charge this sock to the relevant IPv6 refcnt debug socks count
-diff --git a/net/hsr/hsr_device.c b/net/hsr/hsr_device.c
-index b8cd43c9ed5b..a97bf326b231 100644
---- a/net/hsr/hsr_device.c
-+++ b/net/hsr/hsr_device.c
-@@ -94,9 +94,8 @@ static void hsr_check_announce(struct net_device *hsr_dev,
- 			&& (old_operstate != IF_OPER_UP)) {
- 		/* Went up */
- 		hsr->announce_count = 0;
--		hsr->announce_timer.expires = jiffies +
--				msecs_to_jiffies(HSR_ANNOUNCE_INTERVAL);
--		add_timer(&hsr->announce_timer);
-+		mod_timer(&hsr->announce_timer,
-+			  jiffies + msecs_to_jiffies(HSR_ANNOUNCE_INTERVAL));
- 	}
- 
- 	if ((hsr_dev->operstate != IF_OPER_UP) && (old_operstate == IF_OPER_UP))
-@@ -332,6 +331,7 @@ static void hsr_announce(struct timer_list *t)
- {
- 	struct hsr_priv *hsr;
- 	struct hsr_port *master;
-+	unsigned long interval;
- 
- 	hsr = from_timer(hsr, t, announce_timer);
- 
-@@ -343,18 +343,16 @@ static void hsr_announce(struct timer_list *t)
- 				hsr->protVersion);
- 		hsr->announce_count++;
- 
--		hsr->announce_timer.expires = jiffies +
--				msecs_to_jiffies(HSR_ANNOUNCE_INTERVAL);
-+		interval = msecs_to_jiffies(HSR_ANNOUNCE_INTERVAL);
- 	} else {
- 		send_hsr_supervision_frame(master, HSR_TLV_LIFE_CHECK,
- 				hsr->protVersion);
- 
--		hsr->announce_timer.expires = jiffies +
--				msecs_to_jiffies(HSR_LIFE_CHECK_INTERVAL);
-+		interval = msecs_to_jiffies(HSR_LIFE_CHECK_INTERVAL);
- 	}
- 
- 	if (is_admin_up(master->dev))
--		add_timer(&hsr->announce_timer);
-+		mod_timer(&hsr->announce_timer, jiffies + interval);
- 
- 	rcu_read_unlock();
- }
-@@ -486,7 +484,7 @@ int hsr_dev_finalize(struct net_device *hsr_dev, struct net_device *slave[2],
- 
- 	res = hsr_add_port(hsr, hsr_dev, HSR_PT_MASTER);
- 	if (res)
--		return res;
-+		goto err_add_port;
- 
- 	res = register_netdevice(hsr_dev);
- 	if (res)
-@@ -506,6 +504,8 @@ int hsr_dev_finalize(struct net_device *hsr_dev, struct net_device *slave[2],
- fail:
- 	hsr_for_each_port(hsr, port)
- 		hsr_del_port(port);
-+err_add_port:
-+	hsr_del_node(&hsr->self_node_db);
- 
- 	return res;
- }
-diff --git a/net/hsr/hsr_framereg.c b/net/hsr/hsr_framereg.c
-index 286ceb41ac0c..9af16cb68f76 100644
---- a/net/hsr/hsr_framereg.c
-+++ b/net/hsr/hsr_framereg.c
-@@ -124,6 +124,18 @@ int hsr_create_self_node(struct list_head *self_node_db,
- 	return 0;
- }
- 
-+void hsr_del_node(struct list_head *self_node_db)
-+{
-+	struct hsr_node *node;
-+
-+	rcu_read_lock();
-+	node = list_first_or_null_rcu(self_node_db, struct hsr_node, mac_list);
-+	rcu_read_unlock();
-+	if (node) {
-+		list_del_rcu(&node->mac_list);
-+		kfree(node);
-+	}
-+}
- 
- /* Allocate an hsr_node and add it to node_db. 'addr' is the node's AddressA;
-  * seq_out is used to initialize filtering of outgoing duplicate frames
-diff --git a/net/hsr/hsr_framereg.h b/net/hsr/hsr_framereg.h
-index 370b45998121..531fd3dfcac1 100644
---- a/net/hsr/hsr_framereg.h
-+++ b/net/hsr/hsr_framereg.h
-@@ -16,6 +16,7 @@
- 
- struct hsr_node;
- 
-+void hsr_del_node(struct list_head *self_node_db);
- struct hsr_node *hsr_add_node(struct list_head *node_db, unsigned char addr[],
- 			      u16 seq_out);
- struct hsr_node *hsr_get_node(struct hsr_port *port, struct sk_buff *skb,
-diff --git a/net/ipv4/fou.c b/net/ipv4/fou.c
-index 437070d1ffb1..79e98e21cdd7 100644
---- a/net/ipv4/fou.c
-+++ b/net/ipv4/fou.c
-@@ -1024,7 +1024,7 @@ static int gue_err(struct sk_buff *skb, u32 info)
- 	int ret;
- 
- 	len = sizeof(struct udphdr) + sizeof(struct guehdr);
--	if (!pskb_may_pull(skb, len))
-+	if (!pskb_may_pull(skb, transport_offset + len))
- 		return -EINVAL;
- 
- 	guehdr = (struct guehdr *)&udp_hdr(skb)[1];
-@@ -1059,7 +1059,7 @@ static int gue_err(struct sk_buff *skb, u32 info)
- 
- 	optlen = guehdr->hlen << 2;
- 
--	if (!pskb_may_pull(skb, len + optlen))
-+	if (!pskb_may_pull(skb, transport_offset + len + optlen))
- 		return -EINVAL;
- 
- 	guehdr = (struct guehdr *)&udp_hdr(skb)[1];
 diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
 index 6ae89f2b541b..2d5734079e6b 100644
 --- a/net/ipv4/ip_gre.c
@@ -31350,73 +3917,6 @@ index 32a35043c9f5..3db31bb9df50 100644
  		rt2 = skb_rtable(skb);
  		if (err || (rt2->rt_type != RTN_UNICAST && rt2->rt_type != RTN_LOCAL)) {
  			skb_dst_drop(skb);
-diff --git a/net/ipv4/route.c b/net/ipv4/route.c
-index 7bb9128c8363..e04cdb58a602 100644
---- a/net/ipv4/route.c
-+++ b/net/ipv4/route.c
-@@ -1303,6 +1303,10 @@ static void ip_del_fnhe(struct fib_nh *nh, __be32 daddr)
- 		if (fnhe->fnhe_daddr == daddr) {
- 			rcu_assign_pointer(*fnhe_p, rcu_dereference_protected(
- 				fnhe->fnhe_next, lockdep_is_held(&fnhe_lock)));
-+			/* set fnhe_daddr to 0 to ensure it won't bind with
-+			 * new dsts in rt_bind_exception().
-+			 */
-+			fnhe->fnhe_daddr = 0;
- 			fnhe_flush_routes(fnhe);
- 			kfree_rcu(fnhe, rcu);
- 			break;
-@@ -2144,12 +2148,13 @@ int ip_route_input_rcu(struct sk_buff *skb, __be32 daddr, __be32 saddr,
- 		int our = 0;
- 		int err = -EINVAL;
- 
--		if (in_dev)
--			our = ip_check_mc_rcu(in_dev, daddr, saddr,
--					      ip_hdr(skb)->protocol);
-+		if (!in_dev)
-+			return err;
-+		our = ip_check_mc_rcu(in_dev, daddr, saddr,
-+				      ip_hdr(skb)->protocol);
- 
- 		/* check l3 master if no match yet */
--		if ((!in_dev || !our) && netif_is_l3_slave(dev)) {
-+		if (!our && netif_is_l3_slave(dev)) {
- 			struct in_device *l3_in_dev;
- 
- 			l3_in_dev = __in_dev_get_rcu(skb->dev);
-diff --git a/net/ipv4/syncookies.c b/net/ipv4/syncookies.c
-index 606f868d9f3f..e531344611a0 100644
---- a/net/ipv4/syncookies.c
-+++ b/net/ipv4/syncookies.c
-@@ -216,7 +216,12 @@ struct sock *tcp_get_cookie_sock(struct sock *sk, struct sk_buff *skb,
- 		refcount_set(&req->rsk_refcnt, 1);
- 		tcp_sk(child)->tsoffset = tsoff;
- 		sock_rps_save_rxhash(child, skb);
--		inet_csk_reqsk_queue_add(sk, req, child);
-+		if (!inet_csk_reqsk_queue_add(sk, req, child)) {
-+			bh_unlock_sock(child);
-+			sock_put(child);
-+			child = NULL;
-+			reqsk_put(req);
-+		}
- 	} else {
- 		reqsk_free(req);
- 	}
-diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
-index cf3c5095c10e..ce365cbba1d1 100644
---- a/net/ipv4/tcp.c
-+++ b/net/ipv4/tcp.c
-@@ -1914,6 +1914,11 @@ static int tcp_inq_hint(struct sock *sk)
- 		inq = tp->rcv_nxt - tp->copied_seq;
- 		release_sock(sk);
- 	}
-+	/* After receiving a FIN, tell the user-space to continue reading
-+	 * by returning a non-zero inq.
-+	 */
-+	if (inq == 0 && sock_flag(sk, SOCK_DONE))
-+		inq = 1;
- 	return inq;
- }
- 
 diff --git a/net/ipv4/tcp_dctcp.c b/net/ipv4/tcp_dctcp.c
 index cd4814f7e962..359da68d7c06 100644
 --- a/net/ipv4/tcp_dctcp.c
@@ -31480,47 +3980,11 @@ index cd4814f7e962..359da68d7c06 100644
  	default:
  		/* Don't care for the rest. */
  		break;
-diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
-index 76858b14ebe9..7b1ef897b398 100644
---- a/net/ipv4/tcp_input.c
-+++ b/net/ipv4/tcp_input.c
-@@ -6519,7 +6519,13 @@ int tcp_conn_request(struct request_sock_ops *rsk_ops,
- 		af_ops->send_synack(fastopen_sk, dst, &fl, req,
- 				    &foc, TCP_SYNACK_FASTOPEN);
- 		/* Add the child socket directly into the accept queue */
--		inet_csk_reqsk_queue_add(sk, req, fastopen_sk);
-+		if (!inet_csk_reqsk_queue_add(sk, req, fastopen_sk)) {
-+			reqsk_fastopen_remove(fastopen_sk, req, false);
-+			bh_unlock_sock(fastopen_sk);
-+			sock_put(fastopen_sk);
-+			reqsk_put(req);
-+			goto drop;
-+		}
- 		sk->sk_data_ready(sk);
- 		bh_unlock_sock(fastopen_sk);
- 		sock_put(fastopen_sk);
 diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
-index ec3cea9d6828..00852f47a73d 100644
+index 1aae9ab57fe9..00852f47a73d 100644
 --- a/net/ipv4/tcp_ipv4.c
 +++ b/net/ipv4/tcp_ipv4.c
-@@ -1734,15 +1734,8 @@ EXPORT_SYMBOL(tcp_add_backlog);
- int tcp_filter(struct sock *sk, struct sk_buff *skb)
- {
- 	struct tcphdr *th = (struct tcphdr *)skb->data;
--	unsigned int eaten = skb->len;
--	int err;
- 
--	err = sk_filter_trim_cap(sk, skb, th->doff * 4);
--	if (!err) {
--		eaten -= skb->len;
--		TCP_SKB_CB(skb)->end_seq -= eaten;
--	}
--	return err;
-+	return sk_filter_trim_cap(sk, skb, th->doff * 4);
- }
- EXPORT_SYMBOL(tcp_filter);
- 
-@@ -2585,7 +2578,8 @@ static void __net_exit tcp_sk_exit(struct net *net)
+@@ -2578,7 +2578,8 @@ static void __net_exit tcp_sk_exit(struct net *net)
  {
  	int cpu;
  
@@ -31530,40 +3994,6 @@ index ec3cea9d6828..00852f47a73d 100644
  
  	for_each_possible_cpu(cpu)
  		inet_ctl_sock_destroy(*per_cpu_ptr(net->ipv4.tcp_sk, cpu));
-diff --git a/net/ipv6/fou6.c b/net/ipv6/fou6.c
-index 867474abe269..ec4e2ed95f36 100644
---- a/net/ipv6/fou6.c
-+++ b/net/ipv6/fou6.c
-@@ -94,7 +94,7 @@ static int gue6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
- 	int ret;
- 
- 	len = sizeof(struct udphdr) + sizeof(struct guehdr);
--	if (!pskb_may_pull(skb, len))
-+	if (!pskb_may_pull(skb, transport_offset + len))
- 		return -EINVAL;
- 
- 	guehdr = (struct guehdr *)&udp_hdr(skb)[1];
-@@ -129,7 +129,7 @@ static int gue6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
- 
- 	optlen = guehdr->hlen << 2;
- 
--	if (!pskb_may_pull(skb, len + optlen))
-+	if (!pskb_may_pull(skb, transport_offset + len + optlen))
- 		return -EINVAL;
- 
- 	guehdr = (struct guehdr *)&udp_hdr(skb)[1];
-diff --git a/net/ipv6/ila/ila_xlat.c b/net/ipv6/ila/ila_xlat.c
-index 17c455ff69ff..7858fa9ea103 100644
---- a/net/ipv6/ila/ila_xlat.c
-+++ b/net/ipv6/ila/ila_xlat.c
-@@ -420,6 +420,7 @@ int ila_xlat_nl_cmd_flush(struct sk_buff *skb, struct genl_info *info)
- 
- done:
- 	rhashtable_walk_stop(&iter);
-+	rhashtable_walk_exit(&iter);
- 	return ret;
- }
- 
 diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
 index 26f25b6e2833..438f1a5fd19a 100644
 --- a/net/ipv6/ip6_gre.c
@@ -31666,79 +4096,20 @@ index 0c6403cf8b52..ade1390c6348 100644
  					   IPPROTO_IPIP, RT_TOS(eiph->tos), 0);
 -		if (IS_ERR(rt) || rt->dst.dev->type != ARPHRD_TUNNEL) {
 +		if (IS_ERR(rt) || rt->dst.dev->type != ARPHRD_TUNNEL6) {
- 			if (!IS_ERR(rt))
- 				ip_rt_put(rt);
- 			goto out;
-@@ -636,7 +636,7 @@ ip4ip6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
- 	} else {
- 		if (ip_route_input(skb2, eiph->daddr, eiph->saddr, eiph->tos,
- 				   skb2->dev) ||
--		    skb_dst(skb2)->dev->type != ARPHRD_TUNNEL)
-+		    skb_dst(skb2)->dev->type != ARPHRD_TUNNEL6)
- 			goto out;
- 	}
- 
-diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
-index cc01aa3f2b5e..af91a1a402f1 100644
---- a/net/ipv6/ip6mr.c
-+++ b/net/ipv6/ip6mr.c
-@@ -1964,10 +1964,10 @@ int ip6mr_compat_ioctl(struct sock *sk, unsigned int cmd, void __user *arg)
- 
- static inline int ip6mr_forward2_finish(struct net *net, struct sock *sk, struct sk_buff *skb)
- {
--	__IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)),
--			IPSTATS_MIB_OUTFORWDATAGRAMS);
--	__IP6_ADD_STATS(net, ip6_dst_idev(skb_dst(skb)),
--			IPSTATS_MIB_OUTOCTETS, skb->len);
-+	IP6_INC_STATS(net, ip6_dst_idev(skb_dst(skb)),
-+		      IPSTATS_MIB_OUTFORWDATAGRAMS);
-+	IP6_ADD_STATS(net, ip6_dst_idev(skb_dst(skb)),
-+		      IPSTATS_MIB_OUTOCTETS, skb->len);
- 	return dst_output(net, sk, skb);
- }
- 
-diff --git a/net/ipv6/route.c b/net/ipv6/route.c
-index 8dad1d690b78..0086acc16f3c 100644
---- a/net/ipv6/route.c
-+++ b/net/ipv6/route.c
-@@ -1040,14 +1040,20 @@ static struct rt6_info *ip6_create_rt_rcu(struct fib6_info *rt)
- 	struct rt6_info *nrt;
- 
- 	if (!fib6_info_hold_safe(rt))
--		return NULL;
-+		goto fallback;
- 
- 	nrt = ip6_dst_alloc(dev_net(dev), dev, flags);
--	if (nrt)
--		ip6_rt_copy_init(nrt, rt);
--	else
-+	if (!nrt) {
- 		fib6_info_release(rt);
-+		goto fallback;
-+	}
- 
-+	ip6_rt_copy_init(nrt, rt);
-+	return nrt;
-+
-+fallback:
-+	nrt = dev_net(dev)->ipv6.ip6_null_entry;
-+	dst_hold(&nrt->dst);
- 	return nrt;
- }
- 
-@@ -1096,10 +1102,6 @@ restart:
- 		dst_hold(&rt->dst);
+ 			if (!IS_ERR(rt))
+ 				ip_rt_put(rt);
+ 			goto out;
+@@ -636,7 +636,7 @@ ip4ip6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
  	} else {
- 		rt = ip6_create_rt_rcu(f6i);
--		if (!rt) {
--			rt = net->ipv6.ip6_null_entry;
--			dst_hold(&rt->dst);
--		}
+ 		if (ip_route_input(skb2, eiph->daddr, eiph->saddr, eiph->tos,
+ 				   skb2->dev) ||
+-		    skb_dst(skb2)->dev->type != ARPHRD_TUNNEL)
++		    skb_dst(skb2)->dev->type != ARPHRD_TUNNEL6)
+ 			goto out;
  	}
  
- 	rcu_read_unlock();
 diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c
-index 09e440e8dfae..b2109b74857d 100644
+index 07e21a82ce4c..b2109b74857d 100644
 --- a/net/ipv6/sit.c
 +++ b/net/ipv6/sit.c
 @@ -669,6 +669,10 @@ static int ipip6_rcv(struct sk_buff *skb)
@@ -31752,38 +4123,6 @@ index 09e440e8dfae..b2109b74857d 100644
  		err = IP_ECN_decapsulate(iph, skb);
  		if (unlikely(err)) {
  			if (log_ecn_error)
-@@ -778,8 +782,9 @@ static bool check_6rd(struct ip_tunnel *tunnel, const struct in6_addr *v6dst,
- 		pbw0 = tunnel->ip6rd.prefixlen >> 5;
- 		pbi0 = tunnel->ip6rd.prefixlen & 0x1f;
- 
--		d = (ntohl(v6dst->s6_addr32[pbw0]) << pbi0) >>
--		    tunnel->ip6rd.relay_prefixlen;
-+		d = tunnel->ip6rd.relay_prefixlen < 32 ?
-+			(ntohl(v6dst->s6_addr32[pbw0]) << pbi0) >>
-+		    tunnel->ip6rd.relay_prefixlen : 0;
- 
- 		pbi1 = pbi0 - tunnel->ip6rd.relay_prefixlen;
- 		if (pbi1 > 0)
-diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
-index b81eb7cb815e..8505d96483d5 100644
---- a/net/ipv6/tcp_ipv6.c
-+++ b/net/ipv6/tcp_ipv6.c
-@@ -1112,11 +1112,11 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff *
- 		newnp->ipv6_fl_list = NULL;
- 		newnp->pktoptions  = NULL;
- 		newnp->opt	   = NULL;
--		newnp->mcast_oif   = tcp_v6_iif(skb);
--		newnp->mcast_hops  = ipv6_hdr(skb)->hop_limit;
--		newnp->rcv_flowinfo = ip6_flowinfo(ipv6_hdr(skb));
-+		newnp->mcast_oif   = inet_iif(skb);
-+		newnp->mcast_hops  = ip_hdr(skb)->ttl;
-+		newnp->rcv_flowinfo = 0;
- 		if (np->repflow)
--			newnp->flow_label = ip6_flowlabel(ipv6_hdr(skb));
-+			newnp->flow_label = 0;
- 
- 		/*
- 		 * No need to charge this sock to the relevant IPv6 refcnt debug socks count
 diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
 index 571d824e4e24..b919db02c7f9 100644
 --- a/net/kcm/kcmsock.c
@@ -31833,150 +4172,10 @@ index 571d824e4e24..b919db02c7f9 100644
  	proto_unregister(&kcm_proto);
  	destroy_workqueue(kcm_wq);
  
-diff --git a/net/l2tp/l2tp_ip6.c b/net/l2tp/l2tp_ip6.c
-index 0ae6899edac0..37a69df17cab 100644
---- a/net/l2tp/l2tp_ip6.c
-+++ b/net/l2tp/l2tp_ip6.c
-@@ -674,9 +674,6 @@ static int l2tp_ip6_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
- 	if (flags & MSG_OOB)
- 		goto out;
- 
--	if (addr_len)
--		*addr_len = sizeof(*lsa);
--
- 	if (flags & MSG_ERRQUEUE)
- 		return ipv6_recv_error(sk, msg, len, addr_len);
- 
-@@ -706,6 +703,7 @@ static int l2tp_ip6_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
- 		lsa->l2tp_conn_id = 0;
- 		if (ipv6_addr_type(&lsa->l2tp_addr) & IPV6_ADDR_LINKLOCAL)
- 			lsa->l2tp_scope_id = inet6_iif(skb);
-+		*addr_len = sizeof(*lsa);
- 	}
- 
- 	if (np->rxopt.all)
-diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
-index db4d46332e86..9dd4c2048a2b 100644
---- a/net/netfilter/nf_conntrack_core.c
-+++ b/net/netfilter/nf_conntrack_core.c
-@@ -901,10 +901,18 @@ __nf_conntrack_confirm(struct sk_buff *skb)
- 	 * REJECT will give spurious warnings here.
- 	 */
- 
--	/* No external references means no one else could have
--	 * confirmed us.
-+	/* Another skb with the same unconfirmed conntrack may
-+	 * win the race. This may happen for bridge(br_flood)
-+	 * or broadcast/multicast packets do skb_clone with
-+	 * unconfirmed conntrack.
- 	 */
--	WARN_ON(nf_ct_is_confirmed(ct));
-+	if (unlikely(nf_ct_is_confirmed(ct))) {
-+		WARN_ON_ONCE(1);
-+		nf_conntrack_double_unlock(hash, reply_hash);
-+		local_bh_enable();
-+		return NF_DROP;
-+	}
-+
- 	pr_debug("Confirming conntrack %p\n", ct);
- 	/* We have to check the DYING flag after unlink to prevent
- 	 * a race against nf_ct_get_next_corpse() possibly called from
-diff --git a/net/netfilter/nf_conntrack_proto_tcp.c b/net/netfilter/nf_conntrack_proto_tcp.c
-index 4dcbd51a8e97..74fb3fa34db4 100644
---- a/net/netfilter/nf_conntrack_proto_tcp.c
-+++ b/net/netfilter/nf_conntrack_proto_tcp.c
-@@ -828,6 +828,12 @@ static noinline bool tcp_new(struct nf_conn *ct, const struct sk_buff *skb,
- 	return true;
- }
- 
-+static bool nf_conntrack_tcp_established(const struct nf_conn *ct)
-+{
-+	return ct->proto.tcp.state == TCP_CONNTRACK_ESTABLISHED &&
-+	       test_bit(IPS_ASSURED_BIT, &ct->status);
-+}
-+
- /* Returns verdict for packet, or -1 for invalid. */
- static int tcp_packet(struct nf_conn *ct,
- 		      struct sk_buff *skb,
-@@ -1030,16 +1036,38 @@ static int tcp_packet(struct nf_conn *ct,
- 			new_state = TCP_CONNTRACK_ESTABLISHED;
- 		break;
- 	case TCP_CONNTRACK_CLOSE:
--		if (index == TCP_RST_SET
--		    && (ct->proto.tcp.seen[!dir].flags & IP_CT_TCP_FLAG_MAXACK_SET)
--		    && before(ntohl(th->seq), ct->proto.tcp.seen[!dir].td_maxack)) {
--			/* Invalid RST  */
--			spin_unlock_bh(&ct->lock);
--			nf_ct_l4proto_log_invalid(skb, ct, "invalid rst");
--			return -NF_ACCEPT;
-+		if (index != TCP_RST_SET)
-+			break;
-+
-+		if (ct->proto.tcp.seen[!dir].flags & IP_CT_TCP_FLAG_MAXACK_SET) {
-+			u32 seq = ntohl(th->seq);
-+
-+			if (before(seq, ct->proto.tcp.seen[!dir].td_maxack)) {
-+				/* Invalid RST  */
-+				spin_unlock_bh(&ct->lock);
-+				nf_ct_l4proto_log_invalid(skb, ct, "invalid rst");
-+				return -NF_ACCEPT;
-+			}
-+
-+			if (!nf_conntrack_tcp_established(ct) ||
-+			    seq == ct->proto.tcp.seen[!dir].td_maxack)
-+				break;
-+
-+			/* Check if rst is part of train, such as
-+			 *   foo:80 > bar:4379: P, 235946583:235946602(19) ack 42
-+			 *   foo:80 > bar:4379: R, 235946602:235946602(0)  ack 42
-+			 */
-+			if (ct->proto.tcp.last_index == TCP_ACK_SET &&
-+			    ct->proto.tcp.last_dir == dir &&
-+			    seq == ct->proto.tcp.last_end)
-+				break;
-+
-+			/* ... RST sequence number doesn't match exactly, keep
-+			 * established state to allow a possible challenge ACK.
-+			 */
-+			new_state = old_state;
- 		}
--		if (index == TCP_RST_SET
--		    && ((test_bit(IPS_SEEN_REPLY_BIT, &ct->status)
-+		if (((test_bit(IPS_SEEN_REPLY_BIT, &ct->status)
- 			 && ct->proto.tcp.last_index == TCP_SYN_SET)
- 			|| (!test_bit(IPS_ASSURED_BIT, &ct->status)
- 			    && ct->proto.tcp.last_index == TCP_ACK_SET))
-@@ -1055,7 +1083,7 @@ static int tcp_packet(struct nf_conn *ct,
- 			 * segments we ignored. */
- 			goto in_window;
- 		}
--		/* Just fall through */
-+		break;
- 	default:
- 		/* Keep compilers happy. */
- 		break;
-@@ -1090,6 +1118,8 @@ static int tcp_packet(struct nf_conn *ct,
- 	if (ct->proto.tcp.retrans >= tn->tcp_max_retrans &&
- 	    timeouts[new_state] > timeouts[TCP_CONNTRACK_RETRANS])
- 		timeout = timeouts[TCP_CONNTRACK_RETRANS];
-+	else if (unlikely(index == TCP_RST_SET))
-+		timeout = timeouts[TCP_CONNTRACK_CLOSE];
- 	else if ((ct->proto.tcp.seen[0].flags | ct->proto.tcp.seen[1].flags) &
- 		 IP_CT_TCP_FLAG_DATA_UNACKNOWLEDGED &&
- 		 timeouts[new_state] > timeouts[TCP_CONNTRACK_UNACK])
 diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
-index 4893f248dfdc..acb124ce92ec 100644
+index e1724f9d8b9d..acb124ce92ec 100644
 --- a/net/netfilter/nf_tables_api.c
 +++ b/net/netfilter/nf_tables_api.c
-@@ -127,7 +127,7 @@ static void nft_set_trans_bind(const struct nft_ctx *ctx, struct nft_set *set)
- 	list_for_each_entry_reverse(trans, &net->nft.commit_list, list) {
- 		if (trans->msg_type == NFT_MSG_NEWSET &&
- 		    nft_trans_set(trans) == set) {
--			nft_trans_set_bound(trans) = true;
-+			set->bound = true;
- 			break;
- 		}
- 	}
 @@ -2119,9 +2119,11 @@ err1:
  static void nf_tables_expr_destroy(const struct nft_ctx *ctx,
  				   struct nft_expr *expr)
@@ -32024,77 +4223,6 @@ index 4893f248dfdc..acb124ce92ec 100644
  	}
  	kvfree(info);
  	return err;
-@@ -6617,8 +6627,7 @@ static void nf_tables_abort_release(struct nft_trans *trans)
- 		nf_tables_rule_destroy(&trans->ctx, nft_trans_rule(trans));
- 		break;
- 	case NFT_MSG_NEWSET:
--		if (!nft_trans_set_bound(trans))
--			nft_set_destroy(nft_trans_set(trans));
-+		nft_set_destroy(nft_trans_set(trans));
- 		break;
- 	case NFT_MSG_NEWSETELEM:
- 		nft_set_elem_destroy(nft_trans_elem_set(trans),
-@@ -6691,8 +6700,11 @@ static int __nf_tables_abort(struct net *net)
- 			break;
- 		case NFT_MSG_NEWSET:
- 			trans->ctx.table->use--;
--			if (!nft_trans_set_bound(trans))
--				list_del_rcu(&nft_trans_set(trans)->list);
-+			if (nft_trans_set(trans)->bound) {
-+				nft_trans_destroy(trans);
-+				break;
-+			}
-+			list_del_rcu(&nft_trans_set(trans)->list);
- 			break;
- 		case NFT_MSG_DELSET:
- 			trans->ctx.table->use++;
-@@ -6700,8 +6712,11 @@ static int __nf_tables_abort(struct net *net)
- 			nft_trans_destroy(trans);
- 			break;
- 		case NFT_MSG_NEWSETELEM:
-+			if (nft_trans_elem_set(trans)->bound) {
-+				nft_trans_destroy(trans);
-+				break;
-+			}
- 			te = (struct nft_trans_elem *)trans->data;
--
- 			te->set->ops->remove(net, te->set, &te->elem);
- 			atomic_dec(&te->set->nelems);
- 			break;
-diff --git a/net/netfilter/nf_tables_core.c b/net/netfilter/nf_tables_core.c
-index a50500232b0a..7e8dae82ca52 100644
---- a/net/netfilter/nf_tables_core.c
-+++ b/net/netfilter/nf_tables_core.c
-@@ -98,21 +98,23 @@ static noinline void nft_update_chain_stats(const struct nft_chain *chain,
- 					    const struct nft_pktinfo *pkt)
- {
- 	struct nft_base_chain *base_chain;
-+	struct nft_stats __percpu *pstats;
- 	struct nft_stats *stats;
- 
- 	base_chain = nft_base_chain(chain);
--	if (!rcu_access_pointer(base_chain->stats))
--		return;
- 
--	local_bh_disable();
--	stats = this_cpu_ptr(rcu_dereference(base_chain->stats));
--	if (stats) {
-+	rcu_read_lock();
-+	pstats = READ_ONCE(base_chain->stats);
-+	if (pstats) {
-+		local_bh_disable();
-+		stats = this_cpu_ptr(pstats);
- 		u64_stats_update_begin(&stats->syncp);
- 		stats->pkts++;
- 		stats->bytes += pkt->skb->len;
- 		u64_stats_update_end(&stats->syncp);
-+		local_bh_enable();
- 	}
--	local_bh_enable();
-+	rcu_read_unlock();
- }
- 
- struct nft_jumpstack {
 diff --git a/net/netfilter/nft_compat.c b/net/netfilter/nft_compat.c
 index 0a4bad55a8aa..469f9da5073b 100644
 --- a/net/netfilter/nft_compat.c
@@ -32554,54 +4682,6 @@ index 0a4bad55a8aa..469f9da5073b 100644
  }
  
  MODULE_ALIAS_NFNL_SUBSYS(NFNL_SUBSYS_NFT_COMPAT);
-diff --git a/net/netfilter/xt_physdev.c b/net/netfilter/xt_physdev.c
-index 4034d70bff39..b2e39cb6a590 100644
---- a/net/netfilter/xt_physdev.c
-+++ b/net/netfilter/xt_physdev.c
-@@ -96,8 +96,7 @@ match_outdev:
- static int physdev_mt_check(const struct xt_mtchk_param *par)
- {
- 	const struct xt_physdev_info *info = par->matchinfo;
--
--	br_netfilter_enable();
-+	static bool brnf_probed __read_mostly;
- 
- 	if (!(info->bitmask & XT_PHYSDEV_OP_MASK) ||
- 	    info->bitmask & ~XT_PHYSDEV_OP_MASK)
-@@ -111,6 +110,12 @@ static int physdev_mt_check(const struct xt_mtchk_param *par)
- 		if (par->hook_mask & (1 << NF_INET_LOCAL_OUT))
- 			return -EINVAL;
- 	}
-+
-+	if (!brnf_probed) {
-+		brnf_probed = true;
-+		request_module("br_netfilter");
-+	}
-+
- 	return 0;
- }
- 
-diff --git a/net/netlink/genetlink.c b/net/netlink/genetlink.c
-index 25eeb6d2a75a..f0ec068e1d02 100644
---- a/net/netlink/genetlink.c
-+++ b/net/netlink/genetlink.c
-@@ -366,7 +366,7 @@ int genl_register_family(struct genl_family *family)
- 			       start, end + 1, GFP_KERNEL);
- 	if (family->id < 0) {
- 		err = family->id;
--		goto errout_locked;
-+		goto errout_free;
- 	}
- 
- 	err = genl_validate_assign_mc_groups(family);
-@@ -385,6 +385,7 @@ int genl_register_family(struct genl_family *family)
- 
- errout_remove:
- 	idr_remove(&genl_fam_idr, family->id);
-+errout_free:
- 	kfree(family->attrbuf);
- errout_locked:
- 	genl_unlock_all();
 diff --git a/net/openvswitch/flow_netlink.c b/net/openvswitch/flow_netlink.c
 index 691da853bef5..4bdf5e3ac208 100644
 --- a/net/openvswitch/flow_netlink.c
@@ -32623,28 +4703,6 @@ index 691da853bef5..4bdf5e3ac208 100644
  
  	if (new_acts_size > MAX_ACTIONS_BUFSIZE) {
  		if ((MAX_ACTIONS_BUFSIZE - next_offset) < req_size) {
-diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
-index 1cd1d83a4be0..8406bf11eef4 100644
---- a/net/packet/af_packet.c
-+++ b/net/packet/af_packet.c
-@@ -3245,7 +3245,7 @@ static int packet_create(struct net *net, struct socket *sock, int protocol,
- 	}
- 
- 	mutex_lock(&net->packet.sklist_lock);
--	sk_add_node_rcu(sk, &net->packet.sklist);
-+	sk_add_node_tail_rcu(sk, &net->packet.sklist);
- 	mutex_unlock(&net->packet.sklist_lock);
- 
- 	preempt_disable();
-@@ -4211,7 +4211,7 @@ static struct pgv *alloc_pg_vec(struct tpacket_req *req, int order)
- 	struct pgv *pg_vec;
- 	int i;
- 
--	pg_vec = kcalloc(block_nr, sizeof(struct pgv), GFP_KERNEL);
-+	pg_vec = kcalloc(block_nr, sizeof(struct pgv), GFP_KERNEL | __GFP_NOWARN);
- 	if (unlikely(!pg_vec))
- 		goto out;
- 
 diff --git a/net/rds/tcp.c b/net/rds/tcp.c
 index c16f0a362c32..a729c47db781 100644
 --- a/net/rds/tcp.c
@@ -32658,85 +4716,6 @@ index c16f0a362c32..a729c47db781 100644
  			continue;
  		if (!list_has_conn(&tmp_list, tc->t_cpath->cp_conn)) {
  			list_move_tail(&tc->t_tcp_node, &tmp_list);
-diff --git a/net/rose/rose_subr.c b/net/rose/rose_subr.c
-index 7ca57741b2fb..7849f286bb93 100644
---- a/net/rose/rose_subr.c
-+++ b/net/rose/rose_subr.c
-@@ -105,16 +105,17 @@ void rose_write_internal(struct sock *sk, int frametype)
- 	struct sk_buff *skb;
- 	unsigned char  *dptr;
- 	unsigned char  lci1, lci2;
--	char buffer[100];
--	int len, faclen = 0;
-+	int maxfaclen = 0;
-+	int len, faclen;
-+	int reserve;
- 
--	len = AX25_BPQ_HEADER_LEN + AX25_MAX_HEADER_LEN + ROSE_MIN_LEN + 1;
-+	reserve = AX25_BPQ_HEADER_LEN + AX25_MAX_HEADER_LEN + 1;
-+	len = ROSE_MIN_LEN;
- 
- 	switch (frametype) {
- 	case ROSE_CALL_REQUEST:
- 		len   += 1 + ROSE_ADDR_LEN + ROSE_ADDR_LEN;
--		faclen = rose_create_facilities(buffer, rose);
--		len   += faclen;
-+		maxfaclen = 256;
- 		break;
- 	case ROSE_CALL_ACCEPTED:
- 	case ROSE_CLEAR_REQUEST:
-@@ -123,15 +124,16 @@ void rose_write_internal(struct sock *sk, int frametype)
- 		break;
- 	}
- 
--	if ((skb = alloc_skb(len, GFP_ATOMIC)) == NULL)
-+	skb = alloc_skb(reserve + len + maxfaclen, GFP_ATOMIC);
-+	if (!skb)
- 		return;
- 
- 	/*
- 	 *	Space for AX.25 header and PID.
- 	 */
--	skb_reserve(skb, AX25_BPQ_HEADER_LEN + AX25_MAX_HEADER_LEN + 1);
-+	skb_reserve(skb, reserve);
- 
--	dptr = skb_put(skb, skb_tailroom(skb));
-+	dptr = skb_put(skb, len);
- 
- 	lci1 = (rose->lci >> 8) & 0x0F;
- 	lci2 = (rose->lci >> 0) & 0xFF;
-@@ -146,7 +148,8 @@ void rose_write_internal(struct sock *sk, int frametype)
- 		dptr   += ROSE_ADDR_LEN;
- 		memcpy(dptr, &rose->source_addr, ROSE_ADDR_LEN);
- 		dptr   += ROSE_ADDR_LEN;
--		memcpy(dptr, buffer, faclen);
-+		faclen = rose_create_facilities(dptr, rose);
-+		skb_put(skb, faclen);
- 		dptr   += faclen;
- 		break;
- 
-diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c
-index b2adfa825363..5cf6d9f4761d 100644
---- a/net/rxrpc/conn_client.c
-+++ b/net/rxrpc/conn_client.c
-@@ -353,7 +353,7 @@ static int rxrpc_get_client_conn(struct rxrpc_sock *rx,
- 	 * normally have to take channel_lock but we do this before anyone else
- 	 * can see the connection.
- 	 */
--	list_add_tail(&call->chan_wait_link, &candidate->waiting_calls);
-+	list_add(&call->chan_wait_link, &candidate->waiting_calls);
- 
- 	if (cp->exclusive) {
- 		call->conn = candidate;
-@@ -432,7 +432,7 @@ found_extant_conn:
- 	call->conn = conn;
- 	call->security_ix = conn->security_ix;
- 	call->service_id = conn->service_id;
--	list_add(&call->chan_wait_link, &conn->waiting_calls);
-+	list_add_tail(&call->chan_wait_link, &conn->waiting_calls);
- 	spin_unlock(&conn->channel_lock);
- 	_leave(" = 0 [extant %d]", conn->debug_id);
- 	return 0;
 diff --git a/net/sched/act_sample.c b/net/sched/act_sample.c
 index 1a0c682fd734..fd62fe6c8e73 100644
 --- a/net/sched/act_sample.c
@@ -32773,92 +4752,6 @@ index 1a0c682fd734..fd62fe6c8e73 100644
  	s->psample_group_num = psample_group_num;
  	RCU_INIT_POINTER(s->psample_group, psample_group);
  
-diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
-index 12ca9d13db83..bf67ae5ac1c3 100644
---- a/net/sched/cls_flower.c
-+++ b/net/sched/cls_flower.c
-@@ -1327,46 +1327,46 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
- 	if (err < 0)
- 		goto errout;
- 
--	if (!handle) {
--		handle = 1;
--		err = idr_alloc_u32(&head->handle_idr, fnew, &handle,
--				    INT_MAX, GFP_KERNEL);
--	} else if (!fold) {
--		/* user specifies a handle and it doesn't exist */
--		err = idr_alloc_u32(&head->handle_idr, fnew, &handle,
--				    handle, GFP_KERNEL);
--	}
--	if (err)
--		goto errout;
--	fnew->handle = handle;
--
- 	if (tb[TCA_FLOWER_FLAGS]) {
- 		fnew->flags = nla_get_u32(tb[TCA_FLOWER_FLAGS]);
- 
- 		if (!tc_flags_valid(fnew->flags)) {
- 			err = -EINVAL;
--			goto errout_idr;
-+			goto errout;
- 		}
- 	}
- 
- 	err = fl_set_parms(net, tp, fnew, mask, base, tb, tca[TCA_RATE], ovr,
- 			   tp->chain->tmplt_priv, extack);
- 	if (err)
--		goto errout_idr;
-+		goto errout;
- 
- 	err = fl_check_assign_mask(head, fnew, fold, mask);
- 	if (err)
--		goto errout_idr;
-+		goto errout;
-+
-+	if (!handle) {
-+		handle = 1;
-+		err = idr_alloc_u32(&head->handle_idr, fnew, &handle,
-+				    INT_MAX, GFP_KERNEL);
-+	} else if (!fold) {
-+		/* user specifies a handle and it doesn't exist */
-+		err = idr_alloc_u32(&head->handle_idr, fnew, &handle,
-+				    handle, GFP_KERNEL);
-+	}
-+	if (err)
-+		goto errout_mask;
-+	fnew->handle = handle;
- 
- 	if (!fold && __fl_lookup(fnew->mask, &fnew->mkey)) {
- 		err = -EEXIST;
--		goto errout_mask;
-+		goto errout_idr;
- 	}
- 
- 	err = rhashtable_insert_fast(&fnew->mask->ht, &fnew->ht_node,
- 				     fnew->mask->filter_ht_params);
- 	if (err)
--		goto errout_mask;
-+		goto errout_idr;
- 
- 	if (!tc_skip_hw(fnew->flags)) {
- 		err = fl_hw_replace_filter(tp, fnew, extack);
-@@ -1405,12 +1405,13 @@ errout_mask_ht:
- 	rhashtable_remove_fast(&fnew->mask->ht, &fnew->ht_node,
- 			       fnew->mask->filter_ht_params);
- 
--errout_mask:
--	fl_mask_put(head, fnew->mask, false);
--
- errout_idr:
- 	if (!fold)
- 		idr_remove(&head->handle_idr, fnew->handle);
-+
-+errout_mask:
-+	fl_mask_put(head, fnew->mask, false);
-+
- errout:
- 	tcf_exts_destroy(&fnew->exts);
- 	kfree(fnew);
 diff --git a/net/sched/cls_matchall.c b/net/sched/cls_matchall.c
 index 0e408ee9dcec..5ba07cd11e31 100644
 --- a/net/sched/cls_matchall.c
@@ -32867,491 +4760,26 @@ index 0e408ee9dcec..5ba07cd11e31 100644
  
  static void *mall_get(struct tcf_proto *tp, u32 handle)
  {
-+	struct cls_mall_head *head = rtnl_dereference(tp->root);
-+
-+	if (head && head->handle == handle)
-+		return head;
-+
- 	return NULL;
- }
- 
-diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
-index 968a85fe4d4a..de31f2f3b973 100644
---- a/net/sched/sch_generic.c
-+++ b/net/sched/sch_generic.c
-@@ -68,7 +68,7 @@ static inline struct sk_buff *__skb_dequeue_bad_txq(struct Qdisc *q)
- 			skb = __skb_dequeue(&q->skb_bad_txq);
- 			if (qdisc_is_percpu_stats(q)) {
- 				qdisc_qstats_cpu_backlog_dec(q, skb);
--				qdisc_qstats_cpu_qlen_dec(q);
-+				qdisc_qstats_atomic_qlen_dec(q);
- 			} else {
- 				qdisc_qstats_backlog_dec(q, skb);
- 				q->q.qlen--;
-@@ -108,7 +108,7 @@ static inline void qdisc_enqueue_skb_bad_txq(struct Qdisc *q,
- 
- 	if (qdisc_is_percpu_stats(q)) {
- 		qdisc_qstats_cpu_backlog_inc(q, skb);
--		qdisc_qstats_cpu_qlen_inc(q);
-+		qdisc_qstats_atomic_qlen_inc(q);
- 	} else {
- 		qdisc_qstats_backlog_inc(q, skb);
- 		q->q.qlen++;
-@@ -147,7 +147,7 @@ static inline int dev_requeue_skb_locked(struct sk_buff *skb, struct Qdisc *q)
- 
- 		qdisc_qstats_cpu_requeues_inc(q);
- 		qdisc_qstats_cpu_backlog_inc(q, skb);
--		qdisc_qstats_cpu_qlen_inc(q);
-+		qdisc_qstats_atomic_qlen_inc(q);
- 
- 		skb = next;
- 	}
-@@ -252,7 +252,7 @@ static struct sk_buff *dequeue_skb(struct Qdisc *q, bool *validate,
- 			skb = __skb_dequeue(&q->gso_skb);
- 			if (qdisc_is_percpu_stats(q)) {
- 				qdisc_qstats_cpu_backlog_dec(q, skb);
--				qdisc_qstats_cpu_qlen_dec(q);
-+				qdisc_qstats_atomic_qlen_dec(q);
- 			} else {
- 				qdisc_qstats_backlog_dec(q, skb);
- 				q->q.qlen--;
-@@ -645,7 +645,7 @@ static int pfifo_fast_enqueue(struct sk_buff *skb, struct Qdisc *qdisc,
- 	if (unlikely(err))
- 		return qdisc_drop_cpu(skb, qdisc, to_free);
- 
--	qdisc_qstats_cpu_qlen_inc(qdisc);
-+	qdisc_qstats_atomic_qlen_inc(qdisc);
- 	/* Note: skb can not be used after skb_array_produce(),
- 	 * so we better not use qdisc_qstats_cpu_backlog_inc()
- 	 */
-@@ -670,7 +670,7 @@ static struct sk_buff *pfifo_fast_dequeue(struct Qdisc *qdisc)
- 	if (likely(skb)) {
- 		qdisc_qstats_cpu_backlog_dec(qdisc, skb);
- 		qdisc_bstats_cpu_update(qdisc, skb);
--		qdisc_qstats_cpu_qlen_dec(qdisc);
-+		qdisc_qstats_atomic_qlen_dec(qdisc);
- 	}
- 
- 	return skb;
-@@ -714,7 +714,6 @@ static void pfifo_fast_reset(struct Qdisc *qdisc)
- 		struct gnet_stats_queue *q = per_cpu_ptr(qdisc->cpu_qstats, i);
- 
- 		q->backlog = 0;
--		q->qlen = 0;
- 	}
- }
- 
-diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c
-index 6abc8b274270..951afdeea5e9 100644
---- a/net/sctp/protocol.c
-+++ b/net/sctp/protocol.c
-@@ -600,6 +600,7 @@ out:
- static int sctp_v4_addr_to_user(struct sctp_sock *sp, union sctp_addr *addr)
- {
- 	/* No address mapping for V4 sockets */
-+	memset(addr->v4.sin_zero, 0, sizeof(addr->v4.sin_zero));
- 	return sizeof(struct sockaddr_in);
- }
- 
-diff --git a/net/sctp/socket.c b/net/sctp/socket.c
-index 65d6d04546ae..5f68420b4b0d 100644
---- a/net/sctp/socket.c
-+++ b/net/sctp/socket.c
-@@ -999,7 +999,7 @@ static int sctp_setsockopt_bindx(struct sock *sk,
- 	if (unlikely(addrs_size <= 0))
- 		return -EINVAL;
- 
--	kaddrs = vmemdup_user(addrs, addrs_size);
-+	kaddrs = memdup_user(addrs, addrs_size);
- 	if (unlikely(IS_ERR(kaddrs)))
- 		return PTR_ERR(kaddrs);
- 
-@@ -1007,7 +1007,7 @@ static int sctp_setsockopt_bindx(struct sock *sk,
- 	addr_buf = kaddrs;
- 	while (walk_size < addrs_size) {
- 		if (walk_size + sizeof(sa_family_t) > addrs_size) {
--			kvfree(kaddrs);
-+			kfree(kaddrs);
- 			return -EINVAL;
- 		}
- 
-@@ -1018,7 +1018,7 @@ static int sctp_setsockopt_bindx(struct sock *sk,
- 		 * causes the address buffer to overflow return EINVAL.
- 		 */
- 		if (!af || (walk_size + af->sockaddr_len) > addrs_size) {
--			kvfree(kaddrs);
-+			kfree(kaddrs);
- 			return -EINVAL;
- 		}
- 		addrcnt++;
-@@ -1054,7 +1054,7 @@ static int sctp_setsockopt_bindx(struct sock *sk,
- 	}
- 
- out:
--	kvfree(kaddrs);
-+	kfree(kaddrs);
- 
- 	return err;
- }
-@@ -1329,7 +1329,7 @@ static int __sctp_setsockopt_connectx(struct sock *sk,
- 	if (unlikely(addrs_size <= 0))
- 		return -EINVAL;
- 
--	kaddrs = vmemdup_user(addrs, addrs_size);
-+	kaddrs = memdup_user(addrs, addrs_size);
- 	if (unlikely(IS_ERR(kaddrs)))
- 		return PTR_ERR(kaddrs);
- 
-@@ -1349,7 +1349,7 @@ static int __sctp_setsockopt_connectx(struct sock *sk,
- 	err = __sctp_connect(sk, kaddrs, addrs_size, flags, assoc_id);
- 
- out_free:
--	kvfree(kaddrs);
-+	kfree(kaddrs);
- 
- 	return err;
- }
-@@ -1866,6 +1866,7 @@ static int sctp_sendmsg_check_sflags(struct sctp_association *asoc,
- 
- 		pr_debug("%s: aborting association:%p\n", __func__, asoc);
- 		sctp_primitive_ABORT(net, asoc, chunk);
-+		iov_iter_revert(&msg->msg_iter, msg_len);
- 
- 		return 0;
- 	}
-diff --git a/net/sctp/stream.c b/net/sctp/stream.c
-index 2936ed17bf9e..3b47457862cc 100644
---- a/net/sctp/stream.c
-+++ b/net/sctp/stream.c
-@@ -230,8 +230,6 @@ int sctp_stream_init(struct sctp_stream *stream, __u16 outcnt, __u16 incnt,
- 	for (i = 0; i < stream->outcnt; i++)
- 		SCTP_SO(stream, i)->state = SCTP_STREAM_OPEN;
- 
--	sched->init(stream);
--
- in:
- 	sctp_stream_interleave_init(stream);
- 	if (!incnt)
-diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
-index d7ec6132c046..d455537c8fc6 100644
---- a/net/sunrpc/clnt.c
-+++ b/net/sunrpc/clnt.c
-@@ -66,9 +66,6 @@ static void	call_decode(struct rpc_task *task);
- static void	call_bind(struct rpc_task *task);
- static void	call_bind_status(struct rpc_task *task);
- static void	call_transmit(struct rpc_task *task);
--#if defined(CONFIG_SUNRPC_BACKCHANNEL)
--static void	call_bc_transmit(struct rpc_task *task);
--#endif /* CONFIG_SUNRPC_BACKCHANNEL */
- static void	call_status(struct rpc_task *task);
- static void	call_transmit_status(struct rpc_task *task);
- static void	call_refresh(struct rpc_task *task);
-@@ -80,6 +77,7 @@ static void	call_connect_status(struct rpc_task *task);
- static __be32	*rpc_encode_header(struct rpc_task *task);
- static __be32	*rpc_verify_header(struct rpc_task *task);
- static int	rpc_ping(struct rpc_clnt *clnt);
-+static void	rpc_check_timeout(struct rpc_task *task);
- 
- static void rpc_register_client(struct rpc_clnt *clnt)
- {
-@@ -1131,6 +1129,8 @@ rpc_call_async(struct rpc_clnt *clnt, const struct rpc_message *msg, int flags,
- EXPORT_SYMBOL_GPL(rpc_call_async);
- 
- #if defined(CONFIG_SUNRPC_BACKCHANNEL)
-+static void call_bc_encode(struct rpc_task *task);
-+
- /**
-  * rpc_run_bc_task - Allocate a new RPC task for backchannel use, then run
-  * rpc_execute against it
-@@ -1152,7 +1152,7 @@ struct rpc_task *rpc_run_bc_task(struct rpc_rqst *req)
- 	task = rpc_new_task(&task_setup_data);
- 	xprt_init_bc_request(req, task);
- 
--	task->tk_action = call_bc_transmit;
-+	task->tk_action = call_bc_encode;
- 	atomic_inc(&task->tk_count);
- 	WARN_ON_ONCE(atomic_read(&task->tk_count) != 2);
- 	rpc_execute(task);
-@@ -1786,7 +1786,12 @@ call_encode(struct rpc_task *task)
- 		xprt_request_enqueue_receive(task);
- 	xprt_request_enqueue_transmit(task);
- out:
--	task->tk_action = call_bind;
-+	task->tk_action = call_transmit;
-+	/* Check that the connection is OK */
-+	if (!xprt_bound(task->tk_xprt))
-+		task->tk_action = call_bind;
-+	else if (!xprt_connected(task->tk_xprt))
-+		task->tk_action = call_connect;
- }
- 
- /*
-@@ -1937,8 +1942,7 @@ call_connect_status(struct rpc_task *task)
- 			break;
- 		if (clnt->cl_autobind) {
- 			rpc_force_rebind(clnt);
--			task->tk_action = call_bind;
--			return;
-+			goto out_retry;
- 		}
- 		/* fall through */
- 	case -ECONNRESET:
-@@ -1958,16 +1962,19 @@ call_connect_status(struct rpc_task *task)
- 		/* fall through */
- 	case -ENOTCONN:
- 	case -EAGAIN:
--		/* Check for timeouts before looping back to call_bind */
- 	case -ETIMEDOUT:
--		task->tk_action = call_timeout;
--		return;
-+		goto out_retry;
- 	case 0:
- 		clnt->cl_stats->netreconn++;
- 		task->tk_action = call_transmit;
- 		return;
- 	}
- 	rpc_exit(task, status);
-+	return;
-+out_retry:
-+	/* Check for timeouts before looping back to call_bind */
-+	task->tk_action = call_bind;
-+	rpc_check_timeout(task);
- }
- 
- /*
-@@ -1978,13 +1985,19 @@ call_transmit(struct rpc_task *task)
- {
- 	dprint_status(task);
- 
--	task->tk_status = 0;
-+	task->tk_action = call_transmit_status;
- 	if (test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate)) {
- 		if (!xprt_prepare_transmit(task))
- 			return;
--		xprt_transmit(task);
-+		task->tk_status = 0;
-+		if (test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate)) {
-+			if (!xprt_connected(task->tk_xprt)) {
-+				task->tk_status = -ENOTCONN;
-+				return;
-+			}
-+			xprt_transmit(task);
-+		}
- 	}
--	task->tk_action = call_transmit_status;
- 	xprt_end_transmit(task);
- }
- 
-@@ -2038,7 +2051,7 @@ call_transmit_status(struct rpc_task *task)
- 				trace_xprt_ping(task->tk_xprt,
- 						task->tk_status);
- 			rpc_exit(task, task->tk_status);
--			break;
-+			return;
- 		}
- 		/* fall through */
- 	case -ECONNRESET:
-@@ -2046,11 +2059,24 @@ call_transmit_status(struct rpc_task *task)
- 	case -EADDRINUSE:
- 	case -ENOTCONN:
- 	case -EPIPE:
-+		task->tk_action = call_bind;
-+		task->tk_status = 0;
- 		break;
- 	}
-+	rpc_check_timeout(task);
- }
- 
- #if defined(CONFIG_SUNRPC_BACKCHANNEL)
-+static void call_bc_transmit(struct rpc_task *task);
-+static void call_bc_transmit_status(struct rpc_task *task);
-+
-+static void
-+call_bc_encode(struct rpc_task *task)
-+{
-+	xprt_request_enqueue_transmit(task);
-+	task->tk_action = call_bc_transmit;
-+}
-+
- /*
-  * 5b.	Send the backchannel RPC reply.  On error, drop the reply.  In
-  * addition, disconnect on connectivity errors.
-@@ -2058,26 +2084,23 @@ call_transmit_status(struct rpc_task *task)
- static void
- call_bc_transmit(struct rpc_task *task)
- {
--	struct rpc_rqst *req = task->tk_rqstp;
--
--	if (rpc_task_need_encode(task))
--		xprt_request_enqueue_transmit(task);
--	if (!test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate))
--		goto out_wakeup;
--
--	if (!xprt_prepare_transmit(task))
--		goto out_retry;
--
--	if (task->tk_status < 0) {
--		printk(KERN_NOTICE "RPC: Could not send backchannel reply "
--			"error: %d\n", task->tk_status);
--		goto out_done;
-+	task->tk_action = call_bc_transmit_status;
-+	if (test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate)) {
-+		if (!xprt_prepare_transmit(task))
-+			return;
-+		task->tk_status = 0;
-+		xprt_transmit(task);
- 	}
-+	xprt_end_transmit(task);
-+}
- 
--	xprt_transmit(task);
-+static void
-+call_bc_transmit_status(struct rpc_task *task)
-+{
-+	struct rpc_rqst *req = task->tk_rqstp;
- 
--	xprt_end_transmit(task);
- 	dprint_status(task);
-+
- 	switch (task->tk_status) {
- 	case 0:
- 		/* Success */
-@@ -2091,8 +2114,14 @@ call_bc_transmit(struct rpc_task *task)
- 	case -ENOTCONN:
- 	case -EPIPE:
- 		break;
-+	case -ENOBUFS:
-+		rpc_delay(task, HZ>>2);
-+		/* fall through */
-+	case -EBADSLT:
- 	case -EAGAIN:
--		goto out_retry;
-+		task->tk_status = 0;
-+		task->tk_action = call_bc_transmit;
-+		return;
- 	case -ETIMEDOUT:
- 		/*
- 		 * Problem reaching the server.  Disconnect and let the
-@@ -2111,18 +2140,11 @@ call_bc_transmit(struct rpc_task *task)
- 		 * We were unable to reply and will have to drop the
- 		 * request.  The server should reconnect and retransmit.
- 		 */
--		WARN_ON_ONCE(task->tk_status == -EAGAIN);
- 		printk(KERN_NOTICE "RPC: Could not send backchannel reply "
- 			"error: %d\n", task->tk_status);
- 		break;
- 	}
--out_wakeup:
--	rpc_wake_up_queued_task(&req->rq_xprt->pending, task);
--out_done:
- 	task->tk_action = rpc_exit_task;
--	return;
--out_retry:
--	task->tk_status = 0;
- }
- #endif /* CONFIG_SUNRPC_BACKCHANNEL */
- 
-@@ -2178,7 +2200,7 @@ call_status(struct rpc_task *task)
- 	case -EPIPE:
- 	case -ENOTCONN:
- 	case -EAGAIN:
--		task->tk_action = call_encode;
-+		task->tk_action = call_timeout;
- 		break;
- 	case -EIO:
- 		/* shutdown or soft timeout */
-@@ -2192,20 +2214,13 @@ call_status(struct rpc_task *task)
- 	}
- }
- 
--/*
-- * 6a.	Handle RPC timeout
-- * 	We do not release the request slot, so we keep using the
-- *	same XID for all retransmits.
-- */
- static void
--call_timeout(struct rpc_task *task)
-+rpc_check_timeout(struct rpc_task *task)
- {
- 	struct rpc_clnt	*clnt = task->tk_client;
- 
--	if (xprt_adjust_timeout(task->tk_rqstp) == 0) {
--		dprintk("RPC: %5u call_timeout (minor)\n", task->tk_pid);
--		goto retry;
--	}
-+	if (xprt_adjust_timeout(task->tk_rqstp) == 0)
-+		return;
- 
- 	dprintk("RPC: %5u call_timeout (major)\n", task->tk_pid);
- 	task->tk_timeouts++;
-@@ -2241,10 +2256,19 @@ call_timeout(struct rpc_task *task)
- 	 * event? RFC2203 requires the server to drop all such requests.
- 	 */
- 	rpcauth_invalcred(task);
-+}
- 
--retry:
-+/*
-+ * 6a.	Handle RPC timeout
-+ * 	We do not release the request slot, so we keep using the
-+ *	same XID for all retransmits.
-+ */
-+static void
-+call_timeout(struct rpc_task *task)
-+{
- 	task->tk_action = call_encode;
- 	task->tk_status = 0;
-+	rpc_check_timeout(task);
- }
- 
- /*
-diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
-index a6a060925e5d..43590a968b73 100644
---- a/net/sunrpc/svcsock.c
-+++ b/net/sunrpc/svcsock.c
-@@ -349,12 +349,16 @@ static ssize_t svc_recvfrom(struct svc_rqst *rqstp, struct kvec *iov,
- /*
-  * Set socket snd and rcv buffer lengths
-  */
--static void svc_sock_setbufsize(struct socket *sock, unsigned int snd,
--				unsigned int rcv)
-+static void svc_sock_setbufsize(struct svc_sock *svsk, unsigned int nreqs)
- {
-+	unsigned int max_mesg = svsk->sk_xprt.xpt_server->sv_max_mesg;
-+	struct socket *sock = svsk->sk_sock;
++	struct cls_mall_head *head = rtnl_dereference(tp->root);
 +
-+	nreqs = min(nreqs, INT_MAX / 2 / max_mesg);
++	if (head && head->handle == handle)
++		return head;
 +
- 	lock_sock(sock->sk);
--	sock->sk->sk_sndbuf = snd * 2;
--	sock->sk->sk_rcvbuf = rcv * 2;
-+	sock->sk->sk_sndbuf = nreqs * max_mesg * 2;
-+	sock->sk->sk_rcvbuf = nreqs * max_mesg * 2;
- 	sock->sk->sk_write_space(sock->sk);
- 	release_sock(sock->sk);
+ 	return NULL;
+ }
+ 
+diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c
+index 6abc8b274270..951afdeea5e9 100644
+--- a/net/sctp/protocol.c
++++ b/net/sctp/protocol.c
+@@ -600,6 +600,7 @@ out:
+ static int sctp_v4_addr_to_user(struct sctp_sock *sp, union sctp_addr *addr)
+ {
+ 	/* No address mapping for V4 sockets */
++	memset(addr->v4.sin_zero, 0, sizeof(addr->v4.sin_zero));
+ 	return sizeof(struct sockaddr_in);
  }
-@@ -516,9 +520,7 @@ static int svc_udp_recvfrom(struct svc_rqst *rqstp)
- 	     * provides an upper bound on the number of threads
- 	     * which will access the socket.
- 	     */
--	    svc_sock_setbufsize(svsk->sk_sock,
--				(serv->sv_nrthreads+3) * serv->sv_max_mesg,
--				(serv->sv_nrthreads+3) * serv->sv_max_mesg);
-+	    svc_sock_setbufsize(svsk, serv->sv_nrthreads + 3);
- 
- 	clear_bit(XPT_DATA, &svsk->sk_xprt.xpt_flags);
- 	skb = NULL;
-@@ -681,9 +683,7 @@ static void svc_udp_init(struct svc_sock *svsk, struct svc_serv *serv)
- 	 * receive and respond to one request.
- 	 * svc_udp_recvfrom will re-adjust if necessary
- 	 */
--	svc_sock_setbufsize(svsk->sk_sock,
--			    3 * svsk->sk_xprt.xpt_server->sv_max_mesg,
--			    3 * svsk->sk_xprt.xpt_server->sv_max_mesg);
-+	svc_sock_setbufsize(svsk, 3);
  
- 	/* data might have come in before data_ready set up */
- 	set_bit(XPT_DATA, &svsk->sk_xprt.xpt_flags);
 diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
 index 21113bfd4eca..a5ae9c036b9c 100644
 --- a/net/sunrpc/xprtrdma/verbs.c
@@ -33365,240 +4793,6 @@ index 21113bfd4eca..a5ae9c036b9c 100644
  	drain_workqueue(buf->rb_completion_wq);
  
  	/* Deferred Reply processing might have scheduled
-diff --git a/net/tipc/net.c b/net/tipc/net.c
-index f076edb74338..7ce1e86b024f 100644
---- a/net/tipc/net.c
-+++ b/net/tipc/net.c
-@@ -163,12 +163,9 @@ void tipc_sched_net_finalize(struct net *net, u32 addr)
- 
- void tipc_net_stop(struct net *net)
- {
--	u32 self = tipc_own_addr(net);
--
--	if (!self)
-+	if (!tipc_own_id(net))
- 		return;
- 
--	tipc_nametbl_withdraw(net, TIPC_CFG_SRV, self, self, self);
- 	rtnl_lock();
- 	tipc_bearer_stop(net);
- 	tipc_node_stop(net);
-diff --git a/net/tipc/socket.c b/net/tipc/socket.c
-index 70343ac448b1..4dca9161f99b 100644
---- a/net/tipc/socket.c
-+++ b/net/tipc/socket.c
-@@ -1333,7 +1333,7 @@ static int __tipc_sendmsg(struct socket *sock, struct msghdr *m, size_t dlen)
- 
- 	if (unlikely(!dest)) {
- 		dest = &tsk->peer;
--		if (!syn || dest->family != AF_TIPC)
-+		if (!syn && dest->family != AF_TIPC)
- 			return -EDESTADDRREQ;
- 	}
- 
-@@ -2349,6 +2349,16 @@ static int tipc_wait_for_connect(struct socket *sock, long *timeo_p)
- 	return 0;
- }
- 
-+static bool tipc_sockaddr_is_sane(struct sockaddr_tipc *addr)
-+{
-+	if (addr->family != AF_TIPC)
-+		return false;
-+	if (addr->addrtype == TIPC_SERVICE_RANGE)
-+		return (addr->addr.nameseq.lower <= addr->addr.nameseq.upper);
-+	return (addr->addrtype == TIPC_SERVICE_ADDR ||
-+		addr->addrtype == TIPC_SOCKET_ADDR);
-+}
-+
- /**
-  * tipc_connect - establish a connection to another TIPC port
-  * @sock: socket structure
-@@ -2384,18 +2394,18 @@ static int tipc_connect(struct socket *sock, struct sockaddr *dest,
- 		if (!tipc_sk_type_connectionless(sk))
- 			res = -EINVAL;
- 		goto exit;
--	} else if (dst->family != AF_TIPC) {
--		res = -EINVAL;
- 	}
--	if (dst->addrtype != TIPC_ADDR_ID && dst->addrtype != TIPC_ADDR_NAME)
-+	if (!tipc_sockaddr_is_sane(dst)) {
- 		res = -EINVAL;
--	if (res)
- 		goto exit;
--
-+	}
- 	/* DGRAM/RDM connect(), just save the destaddr */
- 	if (tipc_sk_type_connectionless(sk)) {
- 		memcpy(&tsk->peer, dest, destlen);
- 		goto exit;
-+	} else if (dst->addrtype == TIPC_SERVICE_RANGE) {
-+		res = -EINVAL;
-+		goto exit;
- 	}
- 
- 	previous = sk->sk_state;
-diff --git a/net/tipc/topsrv.c b/net/tipc/topsrv.c
-index a457c0fbbef1..f5edb213d760 100644
---- a/net/tipc/topsrv.c
-+++ b/net/tipc/topsrv.c
-@@ -365,6 +365,7 @@ static int tipc_conn_rcv_sub(struct tipc_topsrv *srv,
- 	struct tipc_subscription *sub;
- 
- 	if (tipc_sub_read(s, filter) & TIPC_SUB_CANCEL) {
-+		s->filter &= __constant_ntohl(~TIPC_SUB_CANCEL);
- 		tipc_conn_delete_sub(con, s);
- 		return 0;
- 	}
-diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
-index 3ae3a33da70b..602715fc9a75 100644
---- a/net/vmw_vsock/virtio_transport_common.c
-+++ b/net/vmw_vsock/virtio_transport_common.c
-@@ -662,6 +662,8 @@ static int virtio_transport_reset(struct vsock_sock *vsk,
-  */
- static int virtio_transport_reset_no_sock(struct virtio_vsock_pkt *pkt)
- {
-+	const struct virtio_transport *t;
-+	struct virtio_vsock_pkt *reply;
- 	struct virtio_vsock_pkt_info info = {
- 		.op = VIRTIO_VSOCK_OP_RST,
- 		.type = le16_to_cpu(pkt->hdr.type),
-@@ -672,15 +674,21 @@ static int virtio_transport_reset_no_sock(struct virtio_vsock_pkt *pkt)
- 	if (le16_to_cpu(pkt->hdr.op) == VIRTIO_VSOCK_OP_RST)
- 		return 0;
- 
--	pkt = virtio_transport_alloc_pkt(&info, 0,
--					 le64_to_cpu(pkt->hdr.dst_cid),
--					 le32_to_cpu(pkt->hdr.dst_port),
--					 le64_to_cpu(pkt->hdr.src_cid),
--					 le32_to_cpu(pkt->hdr.src_port));
--	if (!pkt)
-+	reply = virtio_transport_alloc_pkt(&info, 0,
-+					   le64_to_cpu(pkt->hdr.dst_cid),
-+					   le32_to_cpu(pkt->hdr.dst_port),
-+					   le64_to_cpu(pkt->hdr.src_cid),
-+					   le32_to_cpu(pkt->hdr.src_port));
-+	if (!reply)
- 		return -ENOMEM;
- 
--	return virtio_transport_get_ops()->send_pkt(pkt);
-+	t = virtio_transport_get_ops();
-+	if (!t) {
-+		virtio_transport_free_pkt(reply);
-+		return -ENOTCONN;
-+	}
-+
-+	return t->send_pkt(reply);
- }
- 
- static void virtio_transport_wait_close(struct sock *sk, long timeout)
-diff --git a/net/x25/af_x25.c b/net/x25/af_x25.c
-index eff31348e20b..20a511398389 100644
---- a/net/x25/af_x25.c
-+++ b/net/x25/af_x25.c
-@@ -820,8 +820,13 @@ static int x25_connect(struct socket *sock, struct sockaddr *uaddr,
- 	sock->state = SS_CONNECTED;
- 	rc = 0;
- out_put_neigh:
--	if (rc)
-+	if (rc) {
-+		read_lock_bh(&x25_list_lock);
- 		x25_neigh_put(x25->neighbour);
-+		x25->neighbour = NULL;
-+		read_unlock_bh(&x25_list_lock);
-+		x25->state = X25_STATE_0;
-+	}
- out_put_route:
- 	x25_route_put(rt);
- out:
-diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
-index 85e4fe4f18cc..f3031c8907d9 100644
---- a/net/xdp/xsk.c
-+++ b/net/xdp/xsk.c
-@@ -407,6 +407,10 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
- 	if (sxdp->sxdp_family != AF_XDP)
- 		return -EINVAL;
- 
-+	flags = sxdp->sxdp_flags;
-+	if (flags & ~(XDP_SHARED_UMEM | XDP_COPY | XDP_ZEROCOPY))
-+		return -EINVAL;
-+
- 	mutex_lock(&xs->mutex);
- 	if (xs->dev) {
- 		err = -EBUSY;
-@@ -425,7 +429,6 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
- 	}
- 
- 	qid = sxdp->sxdp_queue_id;
--	flags = sxdp->sxdp_flags;
- 
- 	if (flags & XDP_SHARED_UMEM) {
- 		struct xdp_sock *umem_xs;
-diff --git a/scripts/gdb/linux/constants.py.in b/scripts/gdb/linux/constants.py.in
-index 7aad82406422..d3319a80788a 100644
---- a/scripts/gdb/linux/constants.py.in
-+++ b/scripts/gdb/linux/constants.py.in
-@@ -37,12 +37,12 @@
- import gdb
- 
- /* linux/fs.h */
--LX_VALUE(MS_RDONLY)
--LX_VALUE(MS_SYNCHRONOUS)
--LX_VALUE(MS_MANDLOCK)
--LX_VALUE(MS_DIRSYNC)
--LX_VALUE(MS_NOATIME)
--LX_VALUE(MS_NODIRATIME)
-+LX_VALUE(SB_RDONLY)
-+LX_VALUE(SB_SYNCHRONOUS)
-+LX_VALUE(SB_MANDLOCK)
-+LX_VALUE(SB_DIRSYNC)
-+LX_VALUE(SB_NOATIME)
-+LX_VALUE(SB_NODIRATIME)
- 
- /* linux/mount.h */
- LX_VALUE(MNT_NOSUID)
-diff --git a/scripts/gdb/linux/proc.py b/scripts/gdb/linux/proc.py
-index 0aebd7565b03..2f01a958eb22 100644
---- a/scripts/gdb/linux/proc.py
-+++ b/scripts/gdb/linux/proc.py
-@@ -114,11 +114,11 @@ def info_opts(lst, opt):
-     return opts
- 
- 
--FS_INFO = {constants.LX_MS_SYNCHRONOUS: ",sync",
--           constants.LX_MS_MANDLOCK: ",mand",
--           constants.LX_MS_DIRSYNC: ",dirsync",
--           constants.LX_MS_NOATIME: ",noatime",
--           constants.LX_MS_NODIRATIME: ",nodiratime"}
-+FS_INFO = {constants.LX_SB_SYNCHRONOUS: ",sync",
-+           constants.LX_SB_MANDLOCK: ",mand",
-+           constants.LX_SB_DIRSYNC: ",dirsync",
-+           constants.LX_SB_NOATIME: ",noatime",
-+           constants.LX_SB_NODIRATIME: ",nodiratime"}
- 
- MNT_INFO = {constants.LX_MNT_NOSUID: ",nosuid",
-             constants.LX_MNT_NODEV: ",nodev",
-@@ -184,7 +184,7 @@ values of that process namespace"""
-             fstype = superblock['s_type']['name'].string()
-             s_flags = int(superblock['s_flags'])
-             m_flags = int(vfs['mnt']['mnt_flags'])
--            rd = "ro" if (s_flags & constants.LX_MS_RDONLY) else "rw"
-+            rd = "ro" if (s_flags & constants.LX_SB_RDONLY) else "rw"
- 
-             gdb.write(
-                 "{} {} {} {}{}{} 0 0\n"
-diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
-index 26bf886bd168..588a3bc29ecc 100644
---- a/scripts/mod/modpost.c
-+++ b/scripts/mod/modpost.c
-@@ -640,7 +640,7 @@ static void handle_modversions(struct module *mod, struct elf_info *info,
- 			       info->sechdrs[sym->st_shndx].sh_offset -
- 			       (info->hdr->e_type != ET_REL ?
- 				info->sechdrs[sym->st_shndx].sh_addr : 0);
--			crc = *crcp;
-+			crc = TO_NATIVE(*crcp);
- 		}
- 		sym_update_crc(symname + strlen("__crc_"), mod, crc,
- 				export);
 diff --git a/scripts/package/Makefile b/scripts/package/Makefile
 index 453fecee62f0..aa39c2b5e46a 100644
 --- a/scripts/package/Makefile
@@ -33693,226 +4887,6 @@ index edcad61fe3cd..f030961c5165 100755
  
  clean:
  	rm -rf debian/*tmp debian/files
-diff --git a/security/apparmor/policy_unpack.c b/security/apparmor/policy_unpack.c
-index 379682e2a8d5..f6c2bcb2ab14 100644
---- a/security/apparmor/policy_unpack.c
-+++ b/security/apparmor/policy_unpack.c
-@@ -579,6 +579,7 @@ fail:
- 			kfree(profile->secmark[i].label);
- 		kfree(profile->secmark);
- 		profile->secmark_count = 0;
-+		profile->secmark = NULL;
- 	}
- 
- 	e->pos = pos;
-diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
-index f0e36c3492ba..07b11b5aaf1f 100644
---- a/security/selinux/hooks.c
-+++ b/security/selinux/hooks.c
-@@ -959,8 +959,11 @@ static int selinux_sb_clone_mnt_opts(const struct super_block *oldsb,
- 	BUG_ON(!(oldsbsec->flags & SE_SBINITIALIZED));
- 
- 	/* if fs is reusing a sb, make sure that the contexts match */
--	if (newsbsec->flags & SE_SBINITIALIZED)
-+	if (newsbsec->flags & SE_SBINITIALIZED) {
-+		if ((kern_flags & SECURITY_LSM_NATIVE_LABELS) && !set_context)
-+			*set_kern_flags |= SECURITY_LSM_NATIVE_LABELS;
- 		return selinux_cmp_sb_context(oldsb, newsb);
-+	}
- 
- 	mutex_lock(&newsbsec->lock);
- 
-@@ -3241,12 +3244,16 @@ static int selinux_inode_setsecurity(struct inode *inode, const char *name,
- 				     const void *value, size_t size, int flags)
- {
- 	struct inode_security_struct *isec = inode_security_novalidate(inode);
-+	struct superblock_security_struct *sbsec = inode->i_sb->s_security;
- 	u32 newsid;
- 	int rc;
- 
- 	if (strcmp(name, XATTR_SELINUX_SUFFIX))
- 		return -EOPNOTSUPP;
- 
-+	if (!(sbsec->flags & SBLABEL_MNT))
-+		return -EOPNOTSUPP;
-+
- 	if (!value || !size)
- 		return -EACCES;
- 
-@@ -5120,6 +5127,9 @@ static int selinux_sctp_bind_connect(struct sock *sk, int optname,
- 			return -EINVAL;
- 		}
- 
-+		if (walk_size + len > addrlen)
-+			return -EINVAL;
-+
- 		err = -EINVAL;
- 		switch (optname) {
- 		/* Bind checks */
-@@ -6392,7 +6402,10 @@ static void selinux_inode_invalidate_secctx(struct inode *inode)
-  */
- static int selinux_inode_notifysecctx(struct inode *inode, void *ctx, u32 ctxlen)
- {
--	return selinux_inode_setsecurity(inode, XATTR_SELINUX_SUFFIX, ctx, ctxlen, 0);
-+	int rc = selinux_inode_setsecurity(inode, XATTR_SELINUX_SUFFIX,
-+					   ctx, ctxlen, 0);
-+	/* Do not return error when suppressing label (SBLABEL_MNT not set). */
-+	return rc == -EOPNOTSUPP ? 0 : rc;
- }
- 
- /*
-diff --git a/sound/ac97/bus.c b/sound/ac97/bus.c
-index 9f0c480489ef..9cbf6927abe9 100644
---- a/sound/ac97/bus.c
-+++ b/sound/ac97/bus.c
-@@ -84,7 +84,7 @@ ac97_of_get_child_device(struct ac97_controller *ac97_ctrl, int idx,
- 		if ((idx != of_property_read_u32(node, "reg", &reg)) ||
- 		    !of_device_is_compatible(node, compat))
- 			continue;
--		return of_node_get(node);
-+		return node;
- 	}
- 
- 	return NULL;
-diff --git a/sound/core/oss/pcm_oss.c b/sound/core/oss/pcm_oss.c
-index 467039b342b5..41abb8bd466a 100644
---- a/sound/core/oss/pcm_oss.c
-+++ b/sound/core/oss/pcm_oss.c
-@@ -940,6 +940,28 @@ static int snd_pcm_oss_change_params_locked(struct snd_pcm_substream *substream)
- 	oss_frame_size = snd_pcm_format_physical_width(params_format(params)) *
- 			 params_channels(params) / 8;
- 
-+	err = snd_pcm_oss_period_size(substream, params, sparams);
-+	if (err < 0)
-+		goto failure;
-+
-+	n = snd_pcm_plug_slave_size(substream, runtime->oss.period_bytes / oss_frame_size);
-+	err = snd_pcm_hw_param_near(substream, sparams, SNDRV_PCM_HW_PARAM_PERIOD_SIZE, n, NULL);
-+	if (err < 0)
-+		goto failure;
-+
-+	err = snd_pcm_hw_param_near(substream, sparams, SNDRV_PCM_HW_PARAM_PERIODS,
-+				     runtime->oss.periods, NULL);
-+	if (err < 0)
-+		goto failure;
-+
-+	snd_pcm_kernel_ioctl(substream, SNDRV_PCM_IOCTL_DROP, NULL);
-+
-+	err = snd_pcm_kernel_ioctl(substream, SNDRV_PCM_IOCTL_HW_PARAMS, sparams);
-+	if (err < 0) {
-+		pcm_dbg(substream->pcm, "HW_PARAMS failed: %i\n", err);
-+		goto failure;
-+	}
-+
- #ifdef CONFIG_SND_PCM_OSS_PLUGINS
- 	snd_pcm_oss_plugin_clear(substream);
- 	if (!direct) {
-@@ -974,27 +996,6 @@ static int snd_pcm_oss_change_params_locked(struct snd_pcm_substream *substream)
- 	}
- #endif
- 
--	err = snd_pcm_oss_period_size(substream, params, sparams);
--	if (err < 0)
--		goto failure;
--
--	n = snd_pcm_plug_slave_size(substream, runtime->oss.period_bytes / oss_frame_size);
--	err = snd_pcm_hw_param_near(substream, sparams, SNDRV_PCM_HW_PARAM_PERIOD_SIZE, n, NULL);
--	if (err < 0)
--		goto failure;
--
--	err = snd_pcm_hw_param_near(substream, sparams, SNDRV_PCM_HW_PARAM_PERIODS,
--				     runtime->oss.periods, NULL);
--	if (err < 0)
--		goto failure;
--
--	snd_pcm_kernel_ioctl(substream, SNDRV_PCM_IOCTL_DROP, NULL);
--
--	if ((err = snd_pcm_kernel_ioctl(substream, SNDRV_PCM_IOCTL_HW_PARAMS, sparams)) < 0) {
--		pcm_dbg(substream->pcm, "HW_PARAMS failed: %i\n", err);
--		goto failure;
--	}
--
- 	if (runtime->oss.trigger) {
- 		sw_params->start_threshold = 1;
- 	} else {
-diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
-index 818dff1de545..e08c6c6ca029 100644
---- a/sound/core/pcm_native.c
-+++ b/sound/core/pcm_native.c
-@@ -1426,8 +1426,15 @@ static int snd_pcm_pause(struct snd_pcm_substream *substream, int push)
- static int snd_pcm_pre_suspend(struct snd_pcm_substream *substream, int state)
- {
- 	struct snd_pcm_runtime *runtime = substream->runtime;
--	if (runtime->status->state == SNDRV_PCM_STATE_SUSPENDED)
-+	switch (runtime->status->state) {
-+	case SNDRV_PCM_STATE_SUSPENDED:
-+		return -EBUSY;
-+	/* unresumable PCM state; return -EBUSY for skipping suspend */
-+	case SNDRV_PCM_STATE_OPEN:
-+	case SNDRV_PCM_STATE_SETUP:
-+	case SNDRV_PCM_STATE_DISCONNECTED:
- 		return -EBUSY;
-+	}
- 	runtime->trigger_master = substream;
- 	return 0;
- }
-@@ -1506,6 +1513,14 @@ int snd_pcm_suspend_all(struct snd_pcm *pcm)
- 			/* FIXME: the open/close code should lock this as well */
- 			if (substream->runtime == NULL)
- 				continue;
-+
-+			/*
-+			 * Skip BE dai link PCM's that are internal and may
-+			 * not have their substream ops set.
-+			 */
-+			if (!substream->ops)
-+				continue;
-+
- 			err = snd_pcm_suspend(substream);
- 			if (err < 0 && err != -EBUSY)
- 				return err;
-diff --git a/sound/core/rawmidi.c b/sound/core/rawmidi.c
-index ee601d7f0926..c0690d1ecd55 100644
---- a/sound/core/rawmidi.c
-+++ b/sound/core/rawmidi.c
-@@ -30,6 +30,7 @@
- #include <linux/module.h>
- #include <linux/delay.h>
- #include <linux/mm.h>
-+#include <linux/nospec.h>
- #include <sound/rawmidi.h>
- #include <sound/info.h>
- #include <sound/control.h>
-@@ -601,6 +602,7 @@ static int __snd_rawmidi_info_select(struct snd_card *card,
- 		return -ENXIO;
- 	if (info->stream < 0 || info->stream > 1)
- 		return -EINVAL;
-+	info->stream = array_index_nospec(info->stream, 2);
- 	pstr = &rmidi->streams[info->stream];
- 	if (pstr->substream_count == 0)
- 		return -ENOENT;
-diff --git a/sound/core/seq/oss/seq_oss_synth.c b/sound/core/seq/oss/seq_oss_synth.c
-index 278ebb993122..c93945917235 100644
---- a/sound/core/seq/oss/seq_oss_synth.c
-+++ b/sound/core/seq/oss/seq_oss_synth.c
-@@ -617,13 +617,14 @@ int
- snd_seq_oss_synth_make_info(struct seq_oss_devinfo *dp, int dev, struct synth_info *inf)
- {
- 	struct seq_oss_synth *rec;
-+	struct seq_oss_synthinfo *info = get_synthinfo_nospec(dp, dev);
- 
--	if (dev < 0 || dev >= dp->max_synthdev)
-+	if (!info)
- 		return -ENXIO;
- 
--	if (dp->synths[dev].is_midi) {
-+	if (info->is_midi) {
- 		struct midi_info minf;
--		snd_seq_oss_midi_make_info(dp, dp->synths[dev].midi_mapped, &minf);
-+		snd_seq_oss_midi_make_info(dp, info->midi_mapped, &minf);
- 		inf->synth_type = SYNTH_TYPE_MIDI;
- 		inf->synth_subtype = 0;
- 		inf->nr_voices = 16;
 diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
 index 7d4640d1fe9f..38e7deab6384 100644
 --- a/sound/core/seq/seq_clientmgr.c
@@ -33944,268 +4918,11 @@ index 7d4640d1fe9f..38e7deab6384 100644
  	queuefree(q);
  
  	return 0;
-diff --git a/sound/firewire/bebob/bebob.c b/sound/firewire/bebob/bebob.c
-index d91874275d2c..5b46e8dcc2dd 100644
---- a/sound/firewire/bebob/bebob.c
-+++ b/sound/firewire/bebob/bebob.c
-@@ -448,7 +448,19 @@ static const struct ieee1394_device_id bebob_id_table[] = {
- 	/* Focusrite, SaffirePro 26 I/O */
- 	SND_BEBOB_DEV_ENTRY(VEN_FOCUSRITE, 0x00000003, &saffirepro_26_spec),
- 	/* Focusrite, SaffirePro 10 I/O */
--	SND_BEBOB_DEV_ENTRY(VEN_FOCUSRITE, 0x00000006, &saffirepro_10_spec),
-+	{
-+		// The combination of vendor_id and model_id is the same as the
-+		// same as the one of Liquid Saffire 56.
-+		.match_flags	= IEEE1394_MATCH_VENDOR_ID |
-+				  IEEE1394_MATCH_MODEL_ID |
-+				  IEEE1394_MATCH_SPECIFIER_ID |
-+				  IEEE1394_MATCH_VERSION,
-+		.vendor_id	= VEN_FOCUSRITE,
-+		.model_id	= 0x000006,
-+		.specifier_id	= 0x00a02d,
-+		.version	= 0x010001,
-+		.driver_data	= (kernel_ulong_t)&saffirepro_10_spec,
-+	},
- 	/* Focusrite, Saffire(no label and LE) */
- 	SND_BEBOB_DEV_ENTRY(VEN_FOCUSRITE, MODEL_FOCUSRITE_SAFFIRE_BOTH,
- 			    &saffire_spec),
-diff --git a/sound/firewire/dice/dice.c b/sound/firewire/dice/dice.c
-index ed50b222d36e..eee184b05d93 100644
---- a/sound/firewire/dice/dice.c
-+++ b/sound/firewire/dice/dice.c
-@@ -18,6 +18,7 @@ MODULE_LICENSE("GPL v2");
- #define OUI_ALESIS		0x000595
- #define OUI_MAUDIO		0x000d6c
- #define OUI_MYTEK		0x001ee8
-+#define OUI_SSL			0x0050c2	// Actually ID reserved by IEEE.
- 
- #define DICE_CATEGORY_ID	0x04
- #define WEISS_CATEGORY_ID	0x00
-@@ -196,7 +197,7 @@ static int dice_probe(struct fw_unit *unit,
- 	struct snd_dice *dice;
- 	int err;
- 
--	if (!entry->driver_data) {
-+	if (!entry->driver_data && entry->vendor_id != OUI_SSL) {
- 		err = check_dice_category(unit);
- 		if (err < 0)
- 			return -ENODEV;
-@@ -361,6 +362,15 @@ static const struct ieee1394_device_id dice_id_table[] = {
- 		.model_id	= 0x000002,
- 		.driver_data = (kernel_ulong_t)snd_dice_detect_mytek_formats,
- 	},
-+	// Solid State Logic, Duende Classic and Mini.
-+	// NOTE: each field of GUID in config ROM is not compliant to standard
-+	// DICE scheme.
-+	{
-+		.match_flags	= IEEE1394_MATCH_VENDOR_ID |
-+				  IEEE1394_MATCH_MODEL_ID,
-+		.vendor_id	= OUI_SSL,
-+		.model_id	= 0x000070,
-+	},
- 	{
- 		.match_flags = IEEE1394_MATCH_VERSION,
- 		.version     = DICE_INTERFACE,
-diff --git a/sound/firewire/motu/amdtp-motu.c b/sound/firewire/motu/amdtp-motu.c
-index f0555a24d90e..6c9b743ea74b 100644
---- a/sound/firewire/motu/amdtp-motu.c
-+++ b/sound/firewire/motu/amdtp-motu.c
-@@ -136,7 +136,9 @@ static void read_pcm_s32(struct amdtp_stream *s,
- 		byte = (u8 *)buffer + p->pcm_byte_offset;
- 
- 		for (c = 0; c < channels; ++c) {
--			*dst = (byte[0] << 24) | (byte[1] << 16) | byte[2];
-+			*dst = (byte[0] << 24) |
-+			       (byte[1] << 16) |
-+			       (byte[2] << 8);
- 			byte += 3;
- 			dst++;
- 		}
-diff --git a/sound/firewire/motu/motu.c b/sound/firewire/motu/motu.c
-index 220e61926ea4..513291ba0ab0 100644
---- a/sound/firewire/motu/motu.c
-+++ b/sound/firewire/motu/motu.c
-@@ -36,7 +36,7 @@ static void name_card(struct snd_motu *motu)
- 	fw_csr_iterator_init(&it, motu->unit->directory);
- 	while (fw_csr_iterator_next(&it, &key, &val)) {
- 		switch (key) {
--		case CSR_VERSION:
-+		case CSR_MODEL:
- 			version = val;
- 			break;
- 		}
-@@ -46,7 +46,7 @@ static void name_card(struct snd_motu *motu)
- 	strcpy(motu->card->shortname, motu->spec->name);
- 	strcpy(motu->card->mixername, motu->spec->name);
- 	snprintf(motu->card->longname, sizeof(motu->card->longname),
--		 "MOTU %s (version:%d), GUID %08x%08x at %s, S%d",
-+		 "MOTU %s (version:%06x), GUID %08x%08x at %s, S%d",
- 		 motu->spec->name, version,
- 		 fw_dev->config_rom[3], fw_dev->config_rom[4],
- 		 dev_name(&motu->unit->device), 100 << fw_dev->max_speed);
-@@ -237,20 +237,20 @@ static const struct snd_motu_spec motu_audio_express = {
- #define SND_MOTU_DEV_ENTRY(model, data)			\
- {							\
- 	.match_flags	= IEEE1394_MATCH_VENDOR_ID |	\
--			  IEEE1394_MATCH_MODEL_ID |	\
--			  IEEE1394_MATCH_SPECIFIER_ID,	\
-+			  IEEE1394_MATCH_SPECIFIER_ID |	\
-+			  IEEE1394_MATCH_VERSION,	\
- 	.vendor_id	= OUI_MOTU,			\
--	.model_id	= model,			\
- 	.specifier_id	= OUI_MOTU,			\
-+	.version	= model,			\
- 	.driver_data	= (kernel_ulong_t)data,		\
- }
- 
- static const struct ieee1394_device_id motu_id_table[] = {
--	SND_MOTU_DEV_ENTRY(0x101800, &motu_828mk2),
--	SND_MOTU_DEV_ENTRY(0x107800, &snd_motu_spec_traveler),
--	SND_MOTU_DEV_ENTRY(0x106800, &motu_828mk3),	/* FireWire only. */
--	SND_MOTU_DEV_ENTRY(0x100800, &motu_828mk3),	/* Hybrid. */
--	SND_MOTU_DEV_ENTRY(0x104800, &motu_audio_express),
-+	SND_MOTU_DEV_ENTRY(0x000003, &motu_828mk2),
-+	SND_MOTU_DEV_ENTRY(0x000009, &snd_motu_spec_traveler),
-+	SND_MOTU_DEV_ENTRY(0x000015, &motu_828mk3),	/* FireWire only. */
-+	SND_MOTU_DEV_ENTRY(0x000035, &motu_828mk3),	/* Hybrid. */
-+	SND_MOTU_DEV_ENTRY(0x000033, &motu_audio_express),
- 	{ }
- };
- MODULE_DEVICE_TABLE(ieee1394, motu_id_table);
-diff --git a/sound/hda/hdac_i915.c b/sound/hda/hdac_i915.c
-index 617ff1aa818f..27eb0270a711 100644
---- a/sound/hda/hdac_i915.c
-+++ b/sound/hda/hdac_i915.c
-@@ -144,9 +144,9 @@ int snd_hdac_i915_init(struct hdac_bus *bus)
- 		return -ENODEV;
- 	if (!acomp->ops) {
- 		request_module("i915");
--		/* 10s timeout */
-+		/* 60s timeout */
- 		wait_for_completion_timeout(&bind_complete,
--					    msecs_to_jiffies(10 * 1000));
-+					    msecs_to_jiffies(60 * 1000));
- 	}
- 	if (!acomp->ops) {
- 		dev_info(bus->dev, "couldn't bind with audio component\n");
-diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
-index 9f8d59e7e89f..b238e903b9d7 100644
---- a/sound/pci/hda/hda_codec.c
-+++ b/sound/pci/hda/hda_codec.c
-@@ -2917,6 +2917,7 @@ static void hda_call_codec_resume(struct hda_codec *codec)
- 		hda_jackpoll_work(&codec->jackpoll_work.work);
- 	else
- 		snd_hda_jack_report_sync(codec);
-+	codec->core.dev.power.power_state = PMSG_ON;
- 	snd_hdac_leave_pm(&codec->core);
- }
- 
-@@ -2950,10 +2951,62 @@ static int hda_codec_runtime_resume(struct device *dev)
- }
- #endif /* CONFIG_PM */
- 
-+#ifdef CONFIG_PM_SLEEP
-+static int hda_codec_force_resume(struct device *dev)
-+{
-+	int ret;
-+
-+	/* The get/put pair below enforces the runtime resume even if the
-+	 * device hasn't been used at suspend time.  This trick is needed to
-+	 * update the jack state change during the sleep.
-+	 */
-+	pm_runtime_get_noresume(dev);
-+	ret = pm_runtime_force_resume(dev);
-+	pm_runtime_put(dev);
-+	return ret;
-+}
-+
-+static int hda_codec_pm_suspend(struct device *dev)
-+{
-+	dev->power.power_state = PMSG_SUSPEND;
-+	return pm_runtime_force_suspend(dev);
-+}
-+
-+static int hda_codec_pm_resume(struct device *dev)
-+{
-+	dev->power.power_state = PMSG_RESUME;
-+	return hda_codec_force_resume(dev);
-+}
-+
-+static int hda_codec_pm_freeze(struct device *dev)
-+{
-+	dev->power.power_state = PMSG_FREEZE;
-+	return pm_runtime_force_suspend(dev);
-+}
-+
-+static int hda_codec_pm_thaw(struct device *dev)
-+{
-+	dev->power.power_state = PMSG_THAW;
-+	return hda_codec_force_resume(dev);
-+}
-+
-+static int hda_codec_pm_restore(struct device *dev)
-+{
-+	dev->power.power_state = PMSG_RESTORE;
-+	return hda_codec_force_resume(dev);
-+}
-+#endif /* CONFIG_PM_SLEEP */
-+
- /* referred in hda_bind.c */
- const struct dev_pm_ops hda_codec_driver_pm = {
--	SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
--				pm_runtime_force_resume)
-+#ifdef CONFIG_PM_SLEEP
-+	.suspend = hda_codec_pm_suspend,
-+	.resume = hda_codec_pm_resume,
-+	.freeze = hda_codec_pm_freeze,
-+	.thaw = hda_codec_pm_thaw,
-+	.poweroff = hda_codec_pm_suspend,
-+	.restore = hda_codec_pm_restore,
-+#endif /* CONFIG_PM_SLEEP */
- 	SET_RUNTIME_PM_OPS(hda_codec_runtime_suspend, hda_codec_runtime_resume,
- 			   NULL)
- };
 diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
-index e5c49003e75f..2ec91085fa3e 100644
+index ece256a3b48f..2ec91085fa3e 100644
 --- a/sound/pci/hda/hda_intel.c
 +++ b/sound/pci/hda/hda_intel.c
-@@ -947,7 +947,7 @@ static void __azx_runtime_suspend(struct azx *chip)
- 	display_power(chip, false);
- }
- 
--static void __azx_runtime_resume(struct azx *chip)
-+static void __azx_runtime_resume(struct azx *chip, bool from_rt)
- {
- 	struct hda_intel *hda = container_of(chip, struct hda_intel, chip);
- 	struct hdac_bus *bus = azx_bus(chip);
-@@ -964,7 +964,7 @@ static void __azx_runtime_resume(struct azx *chip)
- 	azx_init_pci(chip);
- 	hda_intel_init_chip(chip, true);
- 
--	if (status) {
-+	if (status && from_rt) {
- 		list_for_each_codec(codec, &chip->bus)
- 			if (status & (1 << codec->addr))
- 				schedule_delayed_work(&codec->jackpoll_work,
-@@ -1016,7 +1016,7 @@ static int azx_resume(struct device *dev)
- 			chip->msi = 0;
- 	if (azx_acquire_irq(chip, 1) < 0)
- 		return -EIO;
--	__azx_runtime_resume(chip);
-+	__azx_runtime_resume(chip, false);
- 	snd_power_change_state(card, SNDRV_CTL_POWER_D0);
- 
- 	trace_azx_resume(chip);
-@@ -1081,7 +1081,7 @@ static int azx_runtime_resume(struct device *dev)
- 	chip = card->private_data;
- 	if (!azx_has_pm_runtime(chip))
- 		return 0;
--	__azx_runtime_resume(chip);
-+	__azx_runtime_resume(chip, true);
- 
- 	/* disable controller Wake Up event*/
- 	azx_writew(chip, WAKEEN, azx_readw(chip, WAKEEN) &
-@@ -2142,12 +2142,18 @@ static struct snd_pci_quirk power_save_blacklist[] = {
+@@ -2142,6 +2142,8 @@ static struct snd_pci_quirk power_save_blacklist[] = {
  	SND_PCI_QUIRK(0x8086, 0x2040, "Intel DZ77BH-55K", 0),
  	/* https://bugzilla.kernel.org/show_bug.cgi?id=199607 */
  	SND_PCI_QUIRK(0x8086, 0x2057, "Intel NUC5i7RYB", 0),
@@ -34213,79 +4930,21 @@ index e5c49003e75f..2ec91085fa3e 100644
 +	SND_PCI_QUIRK(0x8086, 0x2064, "Intel SDP 8086:2064", 0),
  	/* https://bugzilla.redhat.com/show_bug.cgi?id=1520902 */
  	SND_PCI_QUIRK(0x8086, 0x2068, "Intel NUC7i3BNB", 0),
--	/* https://bugzilla.redhat.com/show_bug.cgi?id=1572975 */
--	SND_PCI_QUIRK(0x17aa, 0x36a7, "Lenovo C50 All in one", 0),
  	/* https://bugzilla.kernel.org/show_bug.cgi?id=198611 */
- 	SND_PCI_QUIRK(0x17aa, 0x2227, "Lenovo X1 Carbon 3rd Gen", 0),
-+	/* https://bugzilla.redhat.com/show_bug.cgi?id=1689623 */
-+	SND_PCI_QUIRK(0x17aa, 0x367b, "Lenovo IdeaCentre B550", 0),
-+	/* https://bugzilla.redhat.com/show_bug.cgi?id=1572975 */
-+	SND_PCI_QUIRK(0x17aa, 0x36a7, "Lenovo C50 All in one", 0),
+@@ -2150,6 +2152,8 @@ static struct snd_pci_quirk power_save_blacklist[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x367b, "Lenovo IdeaCentre B550", 0),
+ 	/* https://bugzilla.redhat.com/show_bug.cgi?id=1572975 */
+ 	SND_PCI_QUIRK(0x17aa, 0x36a7, "Lenovo C50 All in one", 0),
 +	/* https://bugs.launchpad.net/bugs/1821663 */
 +	SND_PCI_QUIRK(0x1631, 0xe017, "Packard Bell NEC IMEDIA 5204", 0),
  	{}
  };
  #endif /* CONFIG_PM */
-diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
-index a4ee7656d9ee..fb65ad31e86c 100644
---- a/sound/pci/hda/patch_conexant.c
-+++ b/sound/pci/hda/patch_conexant.c
-@@ -936,6 +936,9 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
- 	SND_PCI_QUIRK(0x103c, 0x8299, "HP 800 G3 SFF", CXT_FIXUP_HP_MIC_NO_PRESENCE),
- 	SND_PCI_QUIRK(0x103c, 0x829a, "HP 800 G3 DM", CXT_FIXUP_HP_MIC_NO_PRESENCE),
- 	SND_PCI_QUIRK(0x103c, 0x8455, "HP Z2 G4", CXT_FIXUP_HP_MIC_NO_PRESENCE),
-+	SND_PCI_QUIRK(0x103c, 0x8456, "HP Z2 G4 SFF", CXT_FIXUP_HP_MIC_NO_PRESENCE),
-+	SND_PCI_QUIRK(0x103c, 0x8457, "HP Z2 G4 mini", CXT_FIXUP_HP_MIC_NO_PRESENCE),
-+	SND_PCI_QUIRK(0x103c, 0x8458, "HP Z2 G4 mini premium", CXT_FIXUP_HP_MIC_NO_PRESENCE),
- 	SND_PCI_QUIRK(0x1043, 0x138d, "Asus", CXT_FIXUP_HEADPHONE_MIC_PIN),
- 	SND_PCI_QUIRK(0x152d, 0x0833, "OLPC XO-1.5", CXT_FIXUP_OLPC_XO),
- 	SND_PCI_QUIRK(0x17aa, 0x20f2, "Lenovo T400", CXT_PINCFG_LENOVO_TP410),
 diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
-index 1ffa36e987b4..84fae0df59e9 100644
+index 00c27b3b8c14..84fae0df59e9 100644
 --- a/sound/pci/hda/patch_realtek.c
 +++ b/sound/pci/hda/patch_realtek.c
-@@ -118,6 +118,7 @@ struct alc_spec {
- 	unsigned int has_alc5505_dsp:1;
- 	unsigned int no_depop_delay:1;
- 	unsigned int done_hp_init:1;
-+	unsigned int no_shutup_pins:1;
- 
- 	/* for PLL fix */
- 	hda_nid_t pll_nid;
-@@ -476,6 +477,14 @@ static void alc_auto_setup_eapd(struct hda_codec *codec, bool on)
- 		set_eapd(codec, *p, on);
- }
- 
-+static void alc_shutup_pins(struct hda_codec *codec)
-+{
-+	struct alc_spec *spec = codec->spec;
-+
-+	if (!spec->no_shutup_pins)
-+		snd_hda_shutup_pins(codec);
-+}
-+
- /* generic shutup callback;
-  * just turning off EAPD and a little pause for avoiding pop-noise
-  */
-@@ -486,7 +495,7 @@ static void alc_eapd_shutup(struct hda_codec *codec)
- 	alc_auto_setup_eapd(codec, false);
- 	if (!spec->no_depop_delay)
- 		msleep(200);
--	snd_hda_shutup_pins(codec);
-+	alc_shutup_pins(codec);
- }
- 
- /* generic EAPD initialization */
-@@ -814,7 +823,7 @@ static inline void alc_shutup(struct hda_codec *codec)
- 	if (spec && spec->shutup)
- 		spec->shutup(codec);
- 	else
--		snd_hda_shutup_pins(codec);
-+		alc_shutup_pins(codec);
- }
- 
- static void alc_reboot_notify(struct hda_codec *codec)
-@@ -1855,8 +1864,8 @@ enum {
+@@ -1864,8 +1864,8 @@ enum {
  	ALC887_FIXUP_BASS_CHMAP,
  	ALC1220_FIXUP_GB_DUAL_CODECS,
  	ALC1220_FIXUP_CLEVO_P950,
@@ -34296,7 +4955,7 @@ index 1ffa36e987b4..84fae0df59e9 100644
  };
  
  static void alc889_fixup_coef(struct hda_codec *codec,
-@@ -2061,7 +2070,7 @@ static void alc1220_fixup_clevo_p950(struct hda_codec *codec,
+@@ -2070,7 +2070,7 @@ static void alc1220_fixup_clevo_p950(struct hda_codec *codec,
  static void alc_fixup_headset_mode_no_hp_mic(struct hda_codec *codec,
  				const struct hda_fixup *fix, int action);
  
@@ -34305,7 +4964,7 @@ index 1ffa36e987b4..84fae0df59e9 100644
  				     const struct hda_fixup *fix,
  				     int action)
  {
-@@ -2313,18 +2322,18 @@ static const struct hda_fixup alc882_fixups[] = {
+@@ -2322,18 +2322,18 @@ static const struct hda_fixup alc882_fixups[] = {
  		.type = HDA_FIXUP_FUNC,
  		.v.func = alc1220_fixup_clevo_p950,
  	},
@@ -34328,7 +4987,7 @@ index 1ffa36e987b4..84fae0df59e9 100644
  	},
  };
  
-@@ -2402,8 +2411,9 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+@@ -2411,8 +2411,9 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
  	SND_PCI_QUIRK(0x1558, 0x9501, "Clevo P950HR", ALC1220_FIXUP_CLEVO_P950),
  	SND_PCI_QUIRK(0x1558, 0x95e1, "Clevo P95xER", ALC1220_FIXUP_CLEVO_P950),
  	SND_PCI_QUIRK(0x1558, 0x95e2, "Clevo P950ER", ALC1220_FIXUP_CLEVO_P950),
@@ -34340,424 +4999,41 @@ index 1ffa36e987b4..84fae0df59e9 100644
  	SND_PCI_QUIRK_VENDOR(0x1558, "Clevo laptop", ALC882_FIXUP_EAPD),
  	SND_PCI_QUIRK(0x161f, 0x2054, "Medion laptop", ALC883_FIXUP_EAPD),
  	SND_PCI_QUIRK(0x17aa, 0x3a0d, "Lenovo Y530", ALC882_FIXUP_LENOVO_Y530),
-@@ -2950,7 +2960,7 @@ static void alc269_shutup(struct hda_codec *codec)
- 			(alc_get_coef0(codec) & 0x00ff) == 0x018) {
- 		msleep(150);
- 	}
--	snd_hda_shutup_pins(codec);
-+	alc_shutup_pins(codec);
- }
- 
- static struct coef_fw alc282_coefs[] = {
-@@ -3053,14 +3063,15 @@ static void alc282_shutup(struct hda_codec *codec)
- 	if (hp_pin_sense)
- 		msleep(85);
- 
--	snd_hda_codec_write(codec, hp_pin, 0,
--			    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
-+	if (!spec->no_shutup_pins)
-+		snd_hda_codec_write(codec, hp_pin, 0,
-+				    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
- 
- 	if (hp_pin_sense)
- 		msleep(100);
- 
- 	alc_auto_setup_eapd(codec, false);
--	snd_hda_shutup_pins(codec);
-+	alc_shutup_pins(codec);
- 	alc_write_coef_idx(codec, 0x78, coef78);
- }
- 
-@@ -3166,15 +3177,16 @@ static void alc283_shutup(struct hda_codec *codec)
- 	if (hp_pin_sense)
- 		msleep(100);
- 
--	snd_hda_codec_write(codec, hp_pin, 0,
--			    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
-+	if (!spec->no_shutup_pins)
-+		snd_hda_codec_write(codec, hp_pin, 0,
-+				    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
- 
- 	alc_update_coef_idx(codec, 0x46, 0, 3 << 12);
- 
- 	if (hp_pin_sense)
- 		msleep(100);
- 	alc_auto_setup_eapd(codec, false);
--	snd_hda_shutup_pins(codec);
-+	alc_shutup_pins(codec);
- 	alc_write_coef_idx(codec, 0x43, 0x9614);
- }
- 
-@@ -3240,14 +3252,15 @@ static void alc256_shutup(struct hda_codec *codec)
- 	/* NOTE: call this before clearing the pin, otherwise codec stalls */
- 	alc_update_coef_idx(codec, 0x46, 0, 3 << 12);
- 
--	snd_hda_codec_write(codec, hp_pin, 0,
--			    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
-+	if (!spec->no_shutup_pins)
-+		snd_hda_codec_write(codec, hp_pin, 0,
-+				    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
- 
- 	if (hp_pin_sense)
- 		msleep(100);
- 
- 	alc_auto_setup_eapd(codec, false);
--	snd_hda_shutup_pins(codec);
-+	alc_shutup_pins(codec);
- }
- 
- static void alc225_init(struct hda_codec *codec)
-@@ -3334,7 +3347,7 @@ static void alc225_shutup(struct hda_codec *codec)
- 		msleep(100);
- 
- 	alc_auto_setup_eapd(codec, false);
--	snd_hda_shutup_pins(codec);
-+	alc_shutup_pins(codec);
- }
- 
- static void alc_default_init(struct hda_codec *codec)
-@@ -3388,14 +3401,15 @@ static void alc_default_shutup(struct hda_codec *codec)
- 	if (hp_pin_sense)
- 		msleep(85);
- 
--	snd_hda_codec_write(codec, hp_pin, 0,
--			    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
-+	if (!spec->no_shutup_pins)
-+		snd_hda_codec_write(codec, hp_pin, 0,
-+				    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
- 
- 	if (hp_pin_sense)
- 		msleep(100);
- 
- 	alc_auto_setup_eapd(codec, false);
--	snd_hda_shutup_pins(codec);
-+	alc_shutup_pins(codec);
- }
- 
- static void alc294_hp_init(struct hda_codec *codec)
-@@ -3412,8 +3426,9 @@ static void alc294_hp_init(struct hda_codec *codec)
- 
- 	msleep(100);
- 
--	snd_hda_codec_write(codec, hp_pin, 0,
--			    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
-+	if (!spec->no_shutup_pins)
-+		snd_hda_codec_write(codec, hp_pin, 0,
-+				    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
- 
- 	alc_update_coef_idx(codec, 0x6f, 0x000f, 0);/* Set HP depop to manual mode */
- 	alc_update_coefex_idx(codec, 0x58, 0x00, 0x8000, 0x8000); /* HP depop procedure start */
-@@ -5007,16 +5022,12 @@ static void alc_fixup_auto_mute_via_amp(struct hda_codec *codec,
- 	}
- }
- 
--static void alc_no_shutup(struct hda_codec *codec)
--{
--}
--
- static void alc_fixup_no_shutup(struct hda_codec *codec,
- 				const struct hda_fixup *fix, int action)
- {
- 	if (action == HDA_FIXUP_ACT_PRE_PROBE) {
- 		struct alc_spec *spec = codec->spec;
--		spec->shutup = alc_no_shutup;
-+		spec->no_shutup_pins = 1;
- 	}
- }
- 
-@@ -5479,7 +5490,7 @@ static void alc_headset_btn_callback(struct hda_codec *codec,
- 	jack->jack->button_state = report;
- }
- 
--static void alc_fixup_headset_jack(struct hda_codec *codec,
-+static void alc295_fixup_chromebook(struct hda_codec *codec,
- 				    const struct hda_fixup *fix, int action)
- {
- 
-@@ -5489,6 +5500,16 @@ static void alc_fixup_headset_jack(struct hda_codec *codec,
- 						    alc_headset_btn_callback);
- 		snd_hda_jack_add_kctl(codec, 0x55, "Headset Jack", false,
- 				      SND_JACK_HEADSET, alc_headset_btn_keymap);
-+		switch (codec->core.vendor_id) {
-+		case 0x10ec0295:
-+			alc_update_coef_idx(codec, 0x4a, 0x8000, 1 << 15); /* Reset HP JD */
-+			alc_update_coef_idx(codec, 0x4a, 0x8000, 0 << 15);
-+			break;
-+		case 0x10ec0236:
-+			alc_update_coef_idx(codec, 0x1b, 0x8000, 1 << 15); /* Reset HP JD */
-+			alc_update_coef_idx(codec, 0x1b, 0x8000, 0 << 15);
-+			break;
-+		}
- 		break;
- 	case HDA_FIXUP_ACT_INIT:
- 		switch (codec->core.vendor_id) {
-@@ -5641,6 +5662,7 @@ enum {
- 	ALC233_FIXUP_ASUS_MIC_NO_PRESENCE,
- 	ALC233_FIXUP_EAPD_COEF_AND_MIC_NO_PRESENCE,
- 	ALC233_FIXUP_LENOVO_MULTI_CODECS,
-+	ALC233_FIXUP_ACER_HEADSET_MIC,
- 	ALC294_FIXUP_LENOVO_MIC_LOCATION,
- 	ALC225_FIXUP_DELL_WYSE_MIC_NO_PRESENCE,
- 	ALC700_FIXUP_INTEL_REFERENCE,
-@@ -5658,9 +5680,16 @@ enum {
- 	ALC294_FIXUP_ASUS_MIC,
- 	ALC294_FIXUP_ASUS_HEADSET_MIC,
- 	ALC294_FIXUP_ASUS_SPK,
--	ALC225_FIXUP_HEADSET_JACK,
- 	ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE,
- 	ALC285_FIXUP_LENOVO_PC_BEEP_IN_NOISE,
-+	ALC255_FIXUP_ACER_HEADSET_MIC,
-+	ALC295_FIXUP_CHROME_BOOK,
-+	ALC225_FIXUP_DELL_WYSE_AIO_MIC_NO_PRESENCE,
-+	ALC225_FIXUP_WYSE_AUTO_MUTE,
-+	ALC225_FIXUP_WYSE_DISABLE_MIC_VREF,
-+	ALC286_FIXUP_ACER_AIO_HEADSET_MIC,
-+	ALC256_FIXUP_ASUS_MIC_NO_PRESENCE,
-+	ALC299_FIXUP_PREDATOR_SPK,
- };
- 
- static const struct hda_fixup alc269_fixups[] = {
-@@ -6461,6 +6490,16 @@ static const struct hda_fixup alc269_fixups[] = {
- 		.type = HDA_FIXUP_FUNC,
- 		.v.func = alc233_alc662_fixup_lenovo_dual_codecs,
- 	},
-+	[ALC233_FIXUP_ACER_HEADSET_MIC] = {
-+		.type = HDA_FIXUP_VERBS,
-+		.v.verbs = (const struct hda_verb[]) {
-+			{ 0x20, AC_VERB_SET_COEF_INDEX, 0x45 },
-+			{ 0x20, AC_VERB_SET_PROC_COEF, 0x5089 },
-+			{ }
-+		},
-+		.chained = true,
-+		.chain_id = ALC233_FIXUP_ASUS_MIC_NO_PRESENCE
-+	},
- 	[ALC294_FIXUP_LENOVO_MIC_LOCATION] = {
- 		.type = HDA_FIXUP_PINS,
- 		.v.pins = (const struct hda_pintbl[]) {
-@@ -6603,9 +6642,9 @@ static const struct hda_fixup alc269_fixups[] = {
- 		.chained = true,
- 		.chain_id = ALC294_FIXUP_ASUS_HEADSET_MIC
- 	},
--	[ALC225_FIXUP_HEADSET_JACK] = {
-+	[ALC295_FIXUP_CHROME_BOOK] = {
- 		.type = HDA_FIXUP_FUNC,
--		.v.func = alc_fixup_headset_jack,
-+		.v.func = alc295_fixup_chromebook,
- 	},
- 	[ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE] = {
- 		.type = HDA_FIXUP_PINS,
-@@ -6627,6 +6666,64 @@ static const struct hda_fixup alc269_fixups[] = {
- 		.chained = true,
- 		.chain_id = ALC285_FIXUP_LENOVO_HEADPHONE_NOISE
- 	},
-+	[ALC255_FIXUP_ACER_HEADSET_MIC] = {
-+		.type = HDA_FIXUP_PINS,
-+		.v.pins = (const struct hda_pintbl[]) {
-+			{ 0x19, 0x03a11130 },
-+			{ 0x1a, 0x90a60140 }, /* use as internal mic */
-+			{ }
-+		},
-+		.chained = true,
-+		.chain_id = ALC255_FIXUP_HEADSET_MODE_NO_HP_MIC
-+	},
-+	[ALC225_FIXUP_DELL_WYSE_AIO_MIC_NO_PRESENCE] = {
-+		.type = HDA_FIXUP_PINS,
-+		.v.pins = (const struct hda_pintbl[]) {
-+			{ 0x16, 0x01011020 }, /* Rear Line out */
-+			{ 0x19, 0x01a1913c }, /* use as Front headset mic, without its own jack detect */
-+			{ }
-+		},
-+		.chained = true,
-+		.chain_id = ALC225_FIXUP_WYSE_AUTO_MUTE
-+	},
-+	[ALC225_FIXUP_WYSE_AUTO_MUTE] = {
-+		.type = HDA_FIXUP_FUNC,
-+		.v.func = alc_fixup_auto_mute_via_amp,
-+		.chained = true,
-+		.chain_id = ALC225_FIXUP_WYSE_DISABLE_MIC_VREF
-+	},
-+	[ALC225_FIXUP_WYSE_DISABLE_MIC_VREF] = {
-+		.type = HDA_FIXUP_FUNC,
-+		.v.func = alc_fixup_disable_mic_vref,
-+		.chained = true,
-+		.chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC
-+	},
-+	[ALC286_FIXUP_ACER_AIO_HEADSET_MIC] = {
+@@ -5661,6 +5662,7 @@ enum {
+ 	ALC233_FIXUP_ASUS_MIC_NO_PRESENCE,
+ 	ALC233_FIXUP_EAPD_COEF_AND_MIC_NO_PRESENCE,
+ 	ALC233_FIXUP_LENOVO_MULTI_CODECS,
++	ALC233_FIXUP_ACER_HEADSET_MIC,
+ 	ALC294_FIXUP_LENOVO_MIC_LOCATION,
+ 	ALC225_FIXUP_DELL_WYSE_MIC_NO_PRESENCE,
+ 	ALC700_FIXUP_INTEL_REFERENCE,
+@@ -6488,6 +6490,16 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc233_alc662_fixup_lenovo_dual_codecs,
+ 	},
++	[ALC233_FIXUP_ACER_HEADSET_MIC] = {
 +		.type = HDA_FIXUP_VERBS,
 +		.v.verbs = (const struct hda_verb[]) {
-+			{ 0x20, AC_VERB_SET_COEF_INDEX, 0x4f },
-+			{ 0x20, AC_VERB_SET_PROC_COEF, 0x5029 },
-+			{ }
-+		},
-+		.chained = true,
-+		.chain_id = ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE
-+	},
-+	[ALC256_FIXUP_ASUS_MIC_NO_PRESENCE] = {
-+		.type = HDA_FIXUP_PINS,
-+		.v.pins = (const struct hda_pintbl[]) {
-+			{ 0x19, 0x04a11120 }, /* use as headset mic, without its own jack detect */
++			{ 0x20, AC_VERB_SET_COEF_INDEX, 0x45 },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x5089 },
 +			{ }
 +		},
 +		.chained = true,
-+		.chain_id = ALC256_FIXUP_ASUS_HEADSET_MODE
-+	},
-+	[ALC299_FIXUP_PREDATOR_SPK] = {
-+		.type = HDA_FIXUP_PINS,
-+		.v.pins = (const struct hda_pintbl[]) {
-+			{ 0x21, 0x90170150 }, /* use as headset mic, without its own jack detect */
-+			{ }
-+		}
++		.chain_id = ALC233_FIXUP_ASUS_MIC_NO_PRESENCE
 +	},
- };
- 
- static const struct snd_pci_quirk alc269_fixup_tbl[] = {
-@@ -6643,9 +6740,15 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
- 	SND_PCI_QUIRK(0x1025, 0x079b, "Acer Aspire V5-573G", ALC282_FIXUP_ASPIRE_V5_PINS),
- 	SND_PCI_QUIRK(0x1025, 0x102b, "Acer Aspire C24-860", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE),
- 	SND_PCI_QUIRK(0x1025, 0x106d, "Acer Cloudbook 14", ALC283_FIXUP_CHROME_BOOK),
--	SND_PCI_QUIRK(0x1025, 0x128f, "Acer Veriton Z6860G", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE),
--	SND_PCI_QUIRK(0x1025, 0x1290, "Acer Veriton Z4860G", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE),
--	SND_PCI_QUIRK(0x1025, 0x1291, "Acer Veriton Z4660G", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE),
-+	SND_PCI_QUIRK(0x1025, 0x1099, "Acer Aspire E5-523G", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
-+	SND_PCI_QUIRK(0x1025, 0x110e, "Acer Aspire ES1-432", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
-+	SND_PCI_QUIRK(0x1025, 0x1246, "Acer Predator Helios 500", ALC299_FIXUP_PREDATOR_SPK),
-+	SND_PCI_QUIRK(0x1025, 0x128f, "Acer Veriton Z6860G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
-+	SND_PCI_QUIRK(0x1025, 0x1290, "Acer Veriton Z4860G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
-+	SND_PCI_QUIRK(0x1025, 0x1291, "Acer Veriton Z4660G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
-+	SND_PCI_QUIRK(0x1025, 0x1308, "Acer Aspire Z24-890", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
+ 	[ALC294_FIXUP_LENOVO_MIC_LOCATION] = {
+ 		.type = HDA_FIXUP_PINS,
+ 		.v.pins = (const struct hda_pintbl[]) {
+@@ -6735,6 +6747,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1025, 0x1290, "Acer Veriton Z4860G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x1291, "Acer Veriton Z4660G", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x1308, "Acer Aspire Z24-890", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
 +	SND_PCI_QUIRK(0x1025, 0x132a, "Acer TravelMate B114-21", ALC233_FIXUP_ACER_HEADSET_MIC),
-+	SND_PCI_QUIRK(0x1025, 0x1330, "Acer TravelMate X514-51T", ALC255_FIXUP_ACER_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x1330, "Acer TravelMate X514-51T", ALC255_FIXUP_ACER_HEADSET_MIC),
  	SND_PCI_QUIRK(0x1028, 0x0470, "Dell M101z", ALC269_FIXUP_DELL_M101Z),
  	SND_PCI_QUIRK(0x1028, 0x054b, "Dell XPS one 2710", ALC275_FIXUP_DELL_XPS),
- 	SND_PCI_QUIRK(0x1028, 0x05bd, "Dell Latitude E6440", ALC292_FIXUP_DELL_E7X),
-@@ -6677,6 +6780,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
- 	SND_PCI_QUIRK(0x1028, 0x0704, "Dell XPS 13 9350", ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE),
- 	SND_PCI_QUIRK(0x1028, 0x0706, "Dell Inspiron 7559", ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER),
- 	SND_PCI_QUIRK(0x1028, 0x0725, "Dell Inspiron 3162", ALC255_FIXUP_DELL_SPK_NOISE),
-+	SND_PCI_QUIRK(0x1028, 0x0738, "Dell Precision 5820", ALC269_FIXUP_NO_SHUTUP),
- 	SND_PCI_QUIRK(0x1028, 0x075b, "Dell XPS 13 9360", ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE),
- 	SND_PCI_QUIRK(0x1028, 0x075c, "Dell XPS 27 7760", ALC298_FIXUP_SPK_VOLUME),
- 	SND_PCI_QUIRK(0x1028, 0x075d, "Dell AIO", ALC298_FIXUP_SPK_VOLUME),
-@@ -6689,6 +6793,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
- 	SND_PCI_QUIRK(0x1028, 0x0871, "Dell Precision 3630", ALC255_FIXUP_DELL_HEADSET_MIC),
- 	SND_PCI_QUIRK(0x1028, 0x0872, "Dell Precision 3630", ALC255_FIXUP_DELL_HEADSET_MIC),
- 	SND_PCI_QUIRK(0x1028, 0x0873, "Dell Precision 3930", ALC255_FIXUP_DUMMY_LINEOUT_VERB),
-+	SND_PCI_QUIRK(0x1028, 0x08ad, "Dell WYSE AIO", ALC225_FIXUP_DELL_WYSE_AIO_MIC_NO_PRESENCE),
-+	SND_PCI_QUIRK(0x1028, 0x08ae, "Dell WYSE NB", ALC225_FIXUP_DELL1_MIC_NO_PRESENCE),
- 	SND_PCI_QUIRK(0x1028, 0x0935, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB),
- 	SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
- 	SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
-@@ -6751,11 +6857,13 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
- 	SND_PCI_QUIRK(0x103c, 0x2336, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
- 	SND_PCI_QUIRK(0x103c, 0x2337, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1),
- 	SND_PCI_QUIRK(0x103c, 0x221c, "HP EliteBook 755 G2", ALC280_FIXUP_HP_HEADSET_MIC),
-+	SND_PCI_QUIRK(0x103c, 0x802e, "HP Z240 SFF", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
-+	SND_PCI_QUIRK(0x103c, 0x802f, "HP Z240", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
- 	SND_PCI_QUIRK(0x103c, 0x820d, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
- 	SND_PCI_QUIRK(0x103c, 0x8256, "HP", ALC221_FIXUP_HP_FRONT_MIC),
- 	SND_PCI_QUIRK(0x103c, 0x827e, "HP x360", ALC295_FIXUP_HP_X360),
--	SND_PCI_QUIRK(0x103c, 0x82bf, "HP", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
--	SND_PCI_QUIRK(0x103c, 0x82c0, "HP", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
-+	SND_PCI_QUIRK(0x103c, 0x82bf, "HP G3 mini", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
-+	SND_PCI_QUIRK(0x103c, 0x82c0, "HP G3 mini premium", ALC221_FIXUP_HP_MIC_NO_PRESENCE),
- 	SND_PCI_QUIRK(0x103c, 0x83b9, "HP Spectre x360", ALC269_FIXUP_HP_MUTE_LED_MIC3),
- 	SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
- 	SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
-@@ -6771,7 +6879,6 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
- 	SND_PCI_QUIRK(0x1043, 0x12e0, "ASUS X541SA", ALC256_FIXUP_ASUS_MIC),
- 	SND_PCI_QUIRK(0x1043, 0x13b0, "ASUS Z550SA", ALC256_FIXUP_ASUS_MIC),
- 	SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_ASUS_ZENBOOK),
--	SND_PCI_QUIRK(0x1043, 0x14a1, "ASUS UX533FD", ALC294_FIXUP_ASUS_SPK),
- 	SND_PCI_QUIRK(0x1043, 0x1517, "Asus Zenbook UX31A", ALC269VB_FIXUP_ASUS_ZENBOOK_UX31A),
- 	SND_PCI_QUIRK(0x1043, 0x16e3, "ASUS UX50", ALC269_FIXUP_STEREO_DMIC),
- 	SND_PCI_QUIRK(0x1043, 0x1a13, "Asus G73Jw", ALC269_FIXUP_ASUS_G73JW),
-@@ -7036,7 +7143,8 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
- 	{.id = ALC255_FIXUP_DUMMY_LINEOUT_VERB, .name = "alc255-dummy-lineout"},
- 	{.id = ALC255_FIXUP_DELL_HEADSET_MIC, .name = "alc255-dell-headset"},
- 	{.id = ALC295_FIXUP_HP_X360, .name = "alc295-hp-x360"},
--	{.id = ALC225_FIXUP_HEADSET_JACK, .name = "alc-sense-combo"},
-+	{.id = ALC295_FIXUP_CHROME_BOOK, .name = "alc-sense-combo"},
-+	{.id = ALC299_FIXUP_PREDATOR_SPK, .name = "predator-spk"},
- 	{}
- };
- #define ALC225_STANDARD_PINS \
-@@ -7257,6 +7365,18 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
- 		{0x14, 0x90170110},
- 		{0x1b, 0x90a70130},
- 		{0x21, 0x03211020}),
-+	SND_HDA_PIN_QUIRK(0x10ec0256, 0x1043, "ASUS", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE,
-+		{0x12, 0x90a60130},
-+		{0x14, 0x90170110},
-+		{0x21, 0x03211020}),
-+	SND_HDA_PIN_QUIRK(0x10ec0256, 0x1043, "ASUS", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE,
-+		{0x12, 0x90a60130},
-+		{0x14, 0x90170110},
-+		{0x21, 0x04211020}),
-+	SND_HDA_PIN_QUIRK(0x10ec0256, 0x1043, "ASUS", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE,
-+		{0x1a, 0x90a70130},
-+		{0x1b, 0x90170110},
-+		{0x21, 0x03211020}),
- 	SND_HDA_PIN_QUIRK(0x10ec0274, 0x1028, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB,
- 		{0x12, 0xb7a60130},
- 		{0x13, 0xb8a61140},
-@@ -7388,6 +7508,10 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
- 		{0x14, 0x90170110},
- 		{0x1b, 0x90a70130},
- 		{0x21, 0x04211020}),
-+	SND_HDA_PIN_QUIRK(0x10ec0294, 0x1043, "ASUS", ALC294_FIXUP_ASUS_SPK,
-+		{0x12, 0x90a60130},
-+		{0x17, 0x90170110},
-+		{0x21, 0x03211020}),
- 	SND_HDA_PIN_QUIRK(0x10ec0294, 0x1043, "ASUS", ALC294_FIXUP_ASUS_SPK,
- 		{0x12, 0x90a60130},
- 		{0x17, 0x90170110},
-diff --git a/sound/soc/codecs/pcm186x.c b/sound/soc/codecs/pcm186x.c
-index 809b7e9f03ca..c5fcc632f670 100644
---- a/sound/soc/codecs/pcm186x.c
-+++ b/sound/soc/codecs/pcm186x.c
-@@ -42,7 +42,7 @@ struct pcm186x_priv {
- 	bool is_master_mode;
- };
- 
--static const DECLARE_TLV_DB_SCALE(pcm186x_pga_tlv, -1200, 4000, 50);
-+static const DECLARE_TLV_DB_SCALE(pcm186x_pga_tlv, -1200, 50, 0);
- 
- static const struct snd_kcontrol_new pcm1863_snd_controls[] = {
- 	SOC_DOUBLE_R_S_TLV("ADC Capture Volume", PCM186X_PGA_VAL_CH1_L,
-@@ -158,7 +158,7 @@ static const struct snd_soc_dapm_widget pcm1863_dapm_widgets[] = {
- 	 * Put the codec into SLEEP mode when not in use, allowing the
- 	 * Energysense mechanism to operate.
- 	 */
--	SND_SOC_DAPM_ADC("ADC", "HiFi Capture", PCM186X_POWER_CTRL, 1,  0),
-+	SND_SOC_DAPM_ADC("ADC", "HiFi Capture", PCM186X_POWER_CTRL, 1,  1),
- };
- 
- static const struct snd_soc_dapm_widget pcm1865_dapm_widgets[] = {
-@@ -184,8 +184,8 @@ static const struct snd_soc_dapm_widget pcm1865_dapm_widgets[] = {
- 	 * Put the codec into SLEEP mode when not in use, allowing the
- 	 * Energysense mechanism to operate.
- 	 */
--	SND_SOC_DAPM_ADC("ADC1", "HiFi Capture 1", PCM186X_POWER_CTRL, 1,  0),
--	SND_SOC_DAPM_ADC("ADC2", "HiFi Capture 2", PCM186X_POWER_CTRL, 1,  0),
-+	SND_SOC_DAPM_ADC("ADC1", "HiFi Capture 1", PCM186X_POWER_CTRL, 1,  1),
-+	SND_SOC_DAPM_ADC("ADC2", "HiFi Capture 2", PCM186X_POWER_CTRL, 1,  1),
- };
- 
- static const struct snd_soc_dapm_route pcm1863_dapm_routes[] = {
-diff --git a/sound/soc/fsl/fsl-asoc-card.c b/sound/soc/fsl/fsl-asoc-card.c
-index 81f2fe2c6d23..60f87a0d99f4 100644
---- a/sound/soc/fsl/fsl-asoc-card.c
-+++ b/sound/soc/fsl/fsl-asoc-card.c
-@@ -689,6 +689,7 @@ static int fsl_asoc_card_probe(struct platform_device *pdev)
- asrc_fail:
- 	of_node_put(asrc_np);
- 	of_node_put(codec_np);
-+	put_device(&cpu_pdev->dev);
- fail:
- 	of_node_put(cpu_np);
- 
 diff --git a/sound/soc/fsl/fsl_esai.c b/sound/soc/fsl/fsl_esai.c
-index 57b484768a58..3623aa9a6f2e 100644
+index afe67c865330..3623aa9a6f2e 100644
 --- a/sound/soc/fsl/fsl_esai.c
 +++ b/sound/soc/fsl/fsl_esai.c
 @@ -54,6 +54,8 @@ struct fsl_esai {
@@ -34793,32 +5069,7 @@ index 57b484768a58..3623aa9a6f2e 100644
  
  	return 0;
  }
-@@ -398,7 +392,8 @@ static int fsl_esai_set_dai_fmt(struct snd_soc_dai *dai, unsigned int fmt)
- 		break;
- 	case SND_SOC_DAIFMT_RIGHT_J:
- 		/* Data on rising edge of bclk, frame high, right aligned */
--		xccr |= ESAI_xCCR_xCKP | ESAI_xCCR_xHCKP | ESAI_xCR_xWA;
-+		xccr |= ESAI_xCCR_xCKP | ESAI_xCCR_xHCKP;
-+		xcr  |= ESAI_xCR_xWA;
- 		break;
- 	case SND_SOC_DAIFMT_DSP_A:
- 		/* Data on rising edge of bclk, frame high, 1clk before data */
-@@ -455,12 +450,12 @@ static int fsl_esai_set_dai_fmt(struct snd_soc_dai *dai, unsigned int fmt)
- 		return -EINVAL;
- 	}
- 
--	mask = ESAI_xCR_xFSL | ESAI_xCR_xFSR;
-+	mask = ESAI_xCR_xFSL | ESAI_xCR_xFSR | ESAI_xCR_xWA;
- 	regmap_update_bits(esai_priv->regmap, REG_ESAI_TCR, mask, xcr);
- 	regmap_update_bits(esai_priv->regmap, REG_ESAI_RCR, mask, xcr);
- 
- 	mask = ESAI_xCCR_xCKP | ESAI_xCCR_xHCKP | ESAI_xCCR_xFSP |
--		ESAI_xCCR_xFSD | ESAI_xCCR_xCKD | ESAI_xCR_xWA;
-+		ESAI_xCCR_xFSD | ESAI_xCCR_xCKD;
- 	regmap_update_bits(esai_priv->regmap, REG_ESAI_TCCR, mask, xccr);
- 	regmap_update_bits(esai_priv->regmap, REG_ESAI_RCCR, mask, xccr);
- 
-@@ -595,6 +590,7 @@ static int fsl_esai_trigger(struct snd_pcm_substream *substream, int cmd,
+@@ -596,6 +590,7 @@ static int fsl_esai_trigger(struct snd_pcm_substream *substream, int cmd,
  	bool tx = substream->stream == SNDRV_PCM_STREAM_PLAYBACK;
  	u8 i, channels = substream->runtime->channels;
  	u32 pins = DIV_ROUND_UP(channels, esai_priv->slots);
@@ -34826,7 +5077,7 @@ index 57b484768a58..3623aa9a6f2e 100644
  
  	switch (cmd) {
  	case SNDRV_PCM_TRIGGER_START:
-@@ -607,15 +603,38 @@ static int fsl_esai_trigger(struct snd_pcm_substream *substream, int cmd,
+@@ -608,15 +603,38 @@ static int fsl_esai_trigger(struct snd_pcm_substream *substream, int cmd,
  		for (i = 0; tx && i < channels; i++)
  			regmap_write(esai_priv->regmap, REG_ESAI_ETDR, 0x0);
  
@@ -34865,7 +5116,7 @@ index 57b484768a58..3623aa9a6f2e 100644
  
  		/* Disable and reset FIFO */
  		regmap_update_bits(esai_priv->regmap, REG_ESAI_xFCR(tx),
-@@ -905,6 +924,15 @@ static int fsl_esai_probe(struct platform_device *pdev)
+@@ -906,6 +924,15 @@ static int fsl_esai_probe(struct platform_device *pdev)
  		return ret;
  	}
  
@@ -34881,46 +5132,6 @@ index 57b484768a58..3623aa9a6f2e 100644
  	ret = devm_snd_soc_register_component(&pdev->dev, &fsl_esai_component,
  					      &fsl_esai_dai, 1);
  	if (ret) {
-diff --git a/sound/soc/fsl/imx-sgtl5000.c b/sound/soc/fsl/imx-sgtl5000.c
-index c29200cf755a..9b9a7ec52905 100644
---- a/sound/soc/fsl/imx-sgtl5000.c
-+++ b/sound/soc/fsl/imx-sgtl5000.c
-@@ -108,6 +108,7 @@ static int imx_sgtl5000_probe(struct platform_device *pdev)
- 		ret = -EPROBE_DEFER;
- 		goto fail;
- 	}
-+	put_device(&ssi_pdev->dev);
- 	codec_dev = of_find_i2c_device_by_node(codec_np);
- 	if (!codec_dev) {
- 		dev_err(&pdev->dev, "failed to find codec platform device\n");
-diff --git a/sound/soc/generic/simple-card-utils.c b/sound/soc/generic/simple-card-utils.c
-index b807a47515eb..336895f7fd1e 100644
---- a/sound/soc/generic/simple-card-utils.c
-+++ b/sound/soc/generic/simple-card-utils.c
-@@ -283,12 +283,20 @@ static int asoc_simple_card_get_dai_id(struct device_node *ep)
- 	/* use endpoint/port reg if exist */
- 	ret = of_graph_parse_endpoint(ep, &info);
- 	if (ret == 0) {
--		if (info.id)
-+		/*
-+		 * Because it will count port/endpoint if it doesn't have "reg".
-+		 * But, we can't judge whether it has "no reg", or "reg = <0>"
-+		 * only of_graph_parse_endpoint().
-+		 * We need to check "reg" property
-+		 */
-+		if (of_get_property(ep,   "reg", NULL))
- 			return info.id;
--		if (info.port)
-+
-+		node = of_get_parent(ep);
-+		of_node_put(node);
-+		if (of_get_property(node, "reg", NULL))
- 			return info.port;
- 	}
--
- 	node = of_graph_get_port_parent(ep);
- 
- 	/*
 diff --git a/sound/soc/intel/atom/sst-mfld-platform-pcm.c b/sound/soc/intel/atom/sst-mfld-platform-pcm.c
 index 91a2436ce952..e9623da911d5 100644
 --- a/sound/soc/intel/atom/sst-mfld-platform-pcm.c
@@ -34943,45 +5154,6 @@ index 91a2436ce952..e9623da911d5 100644
  	.ops		= &sst_platform_ops,
  	.compr_ops	= &sst_platform_compr_ops,
  	.pcm_new	= sst_pcm_new,
-diff --git a/sound/soc/qcom/common.c b/sound/soc/qcom/common.c
-index 4715527054e5..5661025e8cec 100644
---- a/sound/soc/qcom/common.c
-+++ b/sound/soc/qcom/common.c
-@@ -42,6 +42,9 @@ int qcom_snd_parse_of(struct snd_soc_card *card)
- 	link = card->dai_link;
- 	for_each_child_of_node(dev->of_node, np) {
- 		cpu = of_get_child_by_name(np, "cpu");
-+		platform = of_get_child_by_name(np, "platform");
-+		codec = of_get_child_by_name(np, "codec");
-+
- 		if (!cpu) {
- 			dev_err(dev, "Can't find cpu DT node\n");
- 			ret = -EINVAL;
-@@ -63,8 +66,6 @@ int qcom_snd_parse_of(struct snd_soc_card *card)
- 			goto err;
- 		}
- 
--		platform = of_get_child_by_name(np, "platform");
--		codec = of_get_child_by_name(np, "codec");
- 		if (codec && platform) {
- 			link->platform_of_node = of_parse_phandle(platform,
- 					"sound-dai",
-@@ -100,10 +101,15 @@ int qcom_snd_parse_of(struct snd_soc_card *card)
- 		link->dpcm_capture = 1;
- 		link->stream_name = link->name;
- 		link++;
-+
-+		of_node_put(cpu);
-+		of_node_put(codec);
-+		of_node_put(platform);
- 	}
- 
- 	return 0;
- err:
-+	of_node_put(np);
- 	of_node_put(cpu);
- 	of_node_put(codec);
- 	of_node_put(platform);
 diff --git a/sound/xen/xen_snd_front_alsa.c b/sound/xen/xen_snd_front_alsa.c
 index a7f413cb704d..b14ab512c2ce 100644
 --- a/sound/xen/xen_snd_front_alsa.c
@@ -34995,1024 +5167,3 @@ index a7f413cb704d..b14ab512c2ce 100644
  	if (!stream->buffer)
  		return -ENOMEM;
  
-diff --git a/tools/build/Makefile.feature b/tools/build/Makefile.feature
-index 5467c6bf9ceb..bb9dca65eb5f 100644
---- a/tools/build/Makefile.feature
-+++ b/tools/build/Makefile.feature
-@@ -70,7 +70,6 @@ FEATURE_TESTS_BASIC :=                  \
-         sched_getcpu			\
-         sdt				\
-         setns				\
--        libopencsd			\
-         libaio
- 
- # FEATURE_TESTS_BASIC + FEATURE_TESTS_EXTRA is the complete list
-@@ -84,6 +83,7 @@ FEATURE_TESTS_EXTRA :=                  \
-          libbabeltrace                  \
-          libbfd-liberty                 \
-          libbfd-liberty-z               \
-+         libopencsd                     \
-          libunwind-debug-frame          \
-          libunwind-debug-frame-arm      \
-          libunwind-debug-frame-aarch64  \
-diff --git a/tools/build/feature/test-all.c b/tools/build/feature/test-all.c
-index 20cdaa4fc112..e903b86b742f 100644
---- a/tools/build/feature/test-all.c
-+++ b/tools/build/feature/test-all.c
-@@ -170,14 +170,14 @@
- # include "test-setns.c"
- #undef main
- 
--#define main main_test_libopencsd
--# include "test-libopencsd.c"
--#undef main
--
- #define main main_test_libaio
- # include "test-libaio.c"
- #undef main
- 
-+#define main main_test_reallocarray
-+# include "test-reallocarray.c"
-+#undef main
-+
- int main(int argc, char *argv[])
- {
- 	main_test_libpython();
-@@ -217,8 +217,8 @@ int main(int argc, char *argv[])
- 	main_test_sched_getcpu();
- 	main_test_sdt();
- 	main_test_setns();
--	main_test_libopencsd();
- 	main_test_libaio();
-+	main_test_reallocarray();
- 
- 	return 0;
- }
-diff --git a/tools/build/feature/test-reallocarray.c b/tools/build/feature/test-reallocarray.c
-index 8170de35150d..8f6743e31da7 100644
---- a/tools/build/feature/test-reallocarray.c
-+++ b/tools/build/feature/test-reallocarray.c
-@@ -6,3 +6,5 @@ int main(void)
- {
- 	return !!reallocarray(NULL, 1, 1);
- }
-+
-+#undef _GNU_SOURCE
-diff --git a/tools/lib/bpf/Makefile b/tools/lib/bpf/Makefile
-index 34d9c3619c96..78fd86b85087 100644
---- a/tools/lib/bpf/Makefile
-+++ b/tools/lib/bpf/Makefile
-@@ -162,7 +162,8 @@ endif
- 
- TARGETS = $(CMD_TARGETS)
- 
--all: fixdep all_cmd
-+all: fixdep
-+	$(Q)$(MAKE) all_cmd
- 
- all_cmd: $(CMD_TARGETS) check
- 
-diff --git a/tools/lib/lockdep/run_tests.sh b/tools/lib/lockdep/run_tests.sh
-index c8fbd0306960..11f425662b43 100755
---- a/tools/lib/lockdep/run_tests.sh
-+++ b/tools/lib/lockdep/run_tests.sh
-@@ -11,7 +11,7 @@ find tests -name '*.c' | sort | while read -r i; do
- 	testname=$(basename "$i" .c)
- 	echo -ne "$testname... "
- 	if gcc -o "tests/$testname" -pthread "$i" liblockdep.a -Iinclude -D__USE_LIBLOCKDEP &&
--		timeout 1 "tests/$testname" 2>&1 | "tests/${testname}.sh"; then
-+		timeout 1 "tests/$testname" 2>&1 | /bin/bash "tests/${testname}.sh"; then
- 		echo "PASSED!"
- 	else
- 		echo "FAILED!"
-@@ -24,7 +24,7 @@ find tests -name '*.c' | sort | while read -r i; do
- 	echo -ne "(PRELOAD) $testname... "
- 	if gcc -o "tests/$testname" -pthread -Iinclude "$i" &&
- 		timeout 1 ./lockdep "tests/$testname" 2>&1 |
--		"tests/${testname}.sh"; then
-+		/bin/bash "tests/${testname}.sh"; then
- 		echo "PASSED!"
- 	else
- 		echo "FAILED!"
-@@ -37,7 +37,7 @@ find tests -name '*.c' | sort | while read -r i; do
- 	echo -ne "(PRELOAD + Valgrind) $testname... "
- 	if gcc -o "tests/$testname" -pthread -Iinclude "$i" &&
- 		{ timeout 10 valgrind --read-var-info=yes ./lockdep "./tests/$testname" >& "tests/${testname}.vg.out"; true; } &&
--		"tests/${testname}.sh" < "tests/${testname}.vg.out" &&
-+		/bin/bash "tests/${testname}.sh" < "tests/${testname}.vg.out" &&
- 		! grep -Eq '(^==[0-9]*== (Invalid |Uninitialised ))|Mismatched free|Source and destination overlap| UME ' "tests/${testname}.vg.out"; then
- 		echo "PASSED!"
- 	else
-diff --git a/tools/lib/traceevent/event-parse.c b/tools/lib/traceevent/event-parse.c
-index abd4fa5d3088..87494c7c619d 100644
---- a/tools/lib/traceevent/event-parse.c
-+++ b/tools/lib/traceevent/event-parse.c
-@@ -2457,7 +2457,7 @@ static int arg_num_eval(struct tep_print_arg *arg, long long *val)
- static char *arg_eval (struct tep_print_arg *arg)
- {
- 	long long val;
--	static char buf[20];
-+	static char buf[24];
- 
- 	switch (arg->type) {
- 	case TEP_PRINT_ATOM:
-diff --git a/tools/objtool/Makefile b/tools/objtool/Makefile
-index c9d038f91af6..53f8be0f4a1f 100644
---- a/tools/objtool/Makefile
-+++ b/tools/objtool/Makefile
-@@ -25,14 +25,17 @@ LIBSUBCMD		= $(LIBSUBCMD_OUTPUT)libsubcmd.a
- OBJTOOL    := $(OUTPUT)objtool
- OBJTOOL_IN := $(OBJTOOL)-in.o
- 
-+LIBELF_FLAGS := $(shell pkg-config libelf --cflags 2>/dev/null)
-+LIBELF_LIBS  := $(shell pkg-config libelf --libs 2>/dev/null || echo -lelf)
-+
- all: $(OBJTOOL)
- 
- INCLUDES := -I$(srctree)/tools/include \
- 	    -I$(srctree)/tools/arch/$(HOSTARCH)/include/uapi \
- 	    -I$(srctree)/tools/objtool/arch/$(ARCH)/include
- WARNINGS := $(EXTRA_WARNINGS) -Wno-switch-default -Wno-switch-enum -Wno-packed
--CFLAGS   += -Werror $(WARNINGS) $(KBUILD_HOSTCFLAGS) -g $(INCLUDES)
--LDFLAGS  += -lelf $(LIBSUBCMD) $(KBUILD_HOSTLDFLAGS)
-+CFLAGS   += -Werror $(WARNINGS) $(KBUILD_HOSTCFLAGS) -g $(INCLUDES) $(LIBELF_FLAGS)
-+LDFLAGS  += $(LIBELF_LIBS) $(LIBSUBCMD) $(KBUILD_HOSTLDFLAGS)
- 
- # Allow old libelf to be used:
- elfshdr := $(shell echo '$(pound)include <libelf.h>' | $(CC) $(CFLAGS) -x c -E - | grep elf_getshdr)
-diff --git a/tools/objtool/check.c b/tools/objtool/check.c
-index 0414a0d52262..5dde107083c6 100644
---- a/tools/objtool/check.c
-+++ b/tools/objtool/check.c
-@@ -2184,9 +2184,10 @@ static void cleanup(struct objtool_file *file)
- 	elf_close(file->elf);
- }
- 
-+static struct objtool_file file;
-+
- int check(const char *_objname, bool orc)
- {
--	struct objtool_file file;
- 	int ret, warnings = 0;
- 
- 	objname = _objname;
-diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
-index b441c88cafa1..cf4a8329c4c0 100644
---- a/tools/perf/Makefile.config
-+++ b/tools/perf/Makefile.config
-@@ -218,6 +218,8 @@ FEATURE_CHECK_LDFLAGS-libpython := $(PYTHON_EMBED_LDOPTS)
- FEATURE_CHECK_CFLAGS-libpython-version := $(PYTHON_EMBED_CCOPTS)
- FEATURE_CHECK_LDFLAGS-libpython-version := $(PYTHON_EMBED_LDOPTS)
- 
-+FEATURE_CHECK_LDFLAGS-libaio = -lrt
-+
- CFLAGS += -fno-omit-frame-pointer
- CFLAGS += -ggdb3
- CFLAGS += -funwind-tables
-@@ -386,7 +388,8 @@ ifeq ($(feature-setns), 1)
-   $(call detected,CONFIG_SETNS)
- endif
- 
--ifndef NO_CORESIGHT
-+ifdef CORESIGHT
-+  $(call feature_check,libopencsd)
-   ifeq ($(feature-libopencsd), 1)
-     CFLAGS += -DHAVE_CSTRACE_SUPPORT $(LIBOPENCSD_CFLAGS)
-     LDFLAGS += $(LIBOPENCSD_LDFLAGS)
-diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf
-index 0ee6795d82cc..77f8f069f1e7 100644
---- a/tools/perf/Makefile.perf
-+++ b/tools/perf/Makefile.perf
-@@ -102,7 +102,7 @@ include ../scripts/utilities.mak
- # When selected, pass LLVM_CONFIG=/path/to/llvm-config to `make' if
- # llvm-config is not in $PATH.
- #
--# Define NO_CORESIGHT if you do not want support for CoreSight trace decoding.
-+# Define CORESIGHT if you DO WANT support for CoreSight trace decoding.
- #
- # Define NO_AIO if you do not want support of Posix AIO based trace
- # streaming for record mode. Currently Posix AIO trace streaming is
-diff --git a/tools/perf/builtin-c2c.c b/tools/perf/builtin-c2c.c
-index d340d2e42776..13758a0b367b 100644
---- a/tools/perf/builtin-c2c.c
-+++ b/tools/perf/builtin-c2c.c
-@@ -2055,6 +2055,12 @@ static int setup_nodes(struct perf_session *session)
- 		if (!set)
- 			return -ENOMEM;
- 
-+		nodes[node] = set;
-+
-+		/* empty node, skip */
-+		if (cpu_map__empty(map))
-+			continue;
-+
- 		for (cpu = 0; cpu < map->nr; cpu++) {
- 			set_bit(map->map[cpu], set);
- 
-@@ -2063,8 +2069,6 @@ static int setup_nodes(struct perf_session *session)
- 
- 			cpu2node[map->map[cpu]] = node;
- 		}
--
--		nodes[node] = set;
- 	}
- 
- 	setup_nodes_header();
-diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c
-index ac221f137ed2..cff4d10daf49 100644
---- a/tools/perf/builtin-script.c
-+++ b/tools/perf/builtin-script.c
-@@ -148,6 +148,7 @@ static struct {
- 	unsigned int print_ip_opts;
- 	u64 fields;
- 	u64 invalid_fields;
-+	u64 user_set_fields;
- } output[OUTPUT_TYPE_MAX] = {
- 
- 	[PERF_TYPE_HARDWARE] = {
-@@ -344,7 +345,7 @@ static int perf_evsel__do_check_stype(struct perf_evsel *evsel,
- 	if (attr->sample_type & sample_type)
- 		return 0;
- 
--	if (output[type].user_set) {
-+	if (output[type].user_set_fields & field) {
- 		if (allow_user_set)
- 			return 0;
- 		evname = perf_evsel__name(evsel);
-@@ -2627,10 +2628,13 @@ parse:
- 					pr_warning("\'%s\' not valid for %s events. Ignoring.\n",
- 						   all_output_options[i].str, event_type(j));
- 				} else {
--					if (change == REMOVE)
-+					if (change == REMOVE) {
- 						output[j].fields &= ~all_output_options[i].field;
--					else
-+						output[j].user_set_fields &= ~all_output_options[i].field;
-+					} else {
- 						output[j].fields |= all_output_options[i].field;
-+						output[j].user_set_fields |= all_output_options[i].field;
-+					}
- 					output[j].user_set = true;
- 					output[j].wildcard_set = true;
- 				}
-diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
-index b36061cd1ab8..91cdbf504535 100644
---- a/tools/perf/builtin-trace.c
-+++ b/tools/perf/builtin-trace.c
-@@ -1039,6 +1039,9 @@ static const size_t trace__entry_str_size = 2048;
- 
- static struct file *thread_trace__files_entry(struct thread_trace *ttrace, int fd)
- {
-+	if (fd < 0)
-+		return NULL;
-+
- 	if (fd > ttrace->files.max) {
- 		struct file *nfiles = realloc(ttrace->files.table, (fd + 1) * sizeof(struct file));
- 
-@@ -3865,7 +3868,8 @@ int cmd_trace(int argc, const char **argv)
- 				goto init_augmented_syscall_tp;
- 			}
- 
--			if (strcmp(perf_evsel__name(evsel), "raw_syscalls:sys_enter") == 0) {
-+			if (trace.syscalls.events.augmented->priv == NULL &&
-+			    strstr(perf_evsel__name(evsel), "syscalls:sys_enter")) {
- 				struct perf_evsel *augmented = trace.syscalls.events.augmented;
- 				if (perf_evsel__init_augmented_syscall_tp(augmented, evsel) ||
- 				    perf_evsel__init_augmented_syscall_tp_args(augmented))
-diff --git a/tools/perf/tests/evsel-tp-sched.c b/tools/perf/tests/evsel-tp-sched.c
-index 5cbba70bcdd0..ea7acf403727 100644
---- a/tools/perf/tests/evsel-tp-sched.c
-+++ b/tools/perf/tests/evsel-tp-sched.c
-@@ -43,7 +43,7 @@ int test__perf_evsel__tp_sched_test(struct test *test __maybe_unused, int subtes
- 		return -1;
- 	}
- 
--	if (perf_evsel__test_field(evsel, "prev_comm", 16, true))
-+	if (perf_evsel__test_field(evsel, "prev_comm", 16, false))
- 		ret = -1;
- 
- 	if (perf_evsel__test_field(evsel, "prev_pid", 4, true))
-@@ -55,7 +55,7 @@ int test__perf_evsel__tp_sched_test(struct test *test __maybe_unused, int subtes
- 	if (perf_evsel__test_field(evsel, "prev_state", sizeof(long), true))
- 		ret = -1;
- 
--	if (perf_evsel__test_field(evsel, "next_comm", 16, true))
-+	if (perf_evsel__test_field(evsel, "next_comm", 16, false))
- 		ret = -1;
- 
- 	if (perf_evsel__test_field(evsel, "next_pid", 4, true))
-@@ -73,7 +73,7 @@ int test__perf_evsel__tp_sched_test(struct test *test __maybe_unused, int subtes
- 		return -1;
- 	}
- 
--	if (perf_evsel__test_field(evsel, "comm", 16, true))
-+	if (perf_evsel__test_field(evsel, "comm", 16, false))
- 		ret = -1;
- 
- 	if (perf_evsel__test_field(evsel, "pid", 4, true))
-diff --git a/tools/perf/trace/beauty/msg_flags.c b/tools/perf/trace/beauty/msg_flags.c
-index d66c66315987..ea68db08b8e7 100644
---- a/tools/perf/trace/beauty/msg_flags.c
-+++ b/tools/perf/trace/beauty/msg_flags.c
-@@ -29,7 +29,7 @@ static size_t syscall_arg__scnprintf_msg_flags(char *bf, size_t size,
- 		return scnprintf(bf, size, "NONE");
- #define	P_MSG_FLAG(n) \
- 	if (flags & MSG_##n) { \
--		printed += scnprintf(bf + printed, size - printed, "%s%s", printed ? "|" : "", show_prefix ? prefix : "", #n); \
-+		printed += scnprintf(bf + printed, size - printed, "%s%s%s", printed ? "|" : "", show_prefix ? prefix : "", #n); \
- 		flags &= ~MSG_##n; \
- 	}
- 
-diff --git a/tools/perf/trace/beauty/waitid_options.c b/tools/perf/trace/beauty/waitid_options.c
-index 6897fab40dcc..d4d10b33ba0e 100644
---- a/tools/perf/trace/beauty/waitid_options.c
-+++ b/tools/perf/trace/beauty/waitid_options.c
-@@ -11,7 +11,7 @@ static size_t syscall_arg__scnprintf_waitid_options(char *bf, size_t size,
- 
- #define	P_OPTION(n) \
- 	if (options & W##n) { \
--		printed += scnprintf(bf + printed, size - printed, "%s%s%s", printed ? "|" : "", show_prefix ? prefix : #n); \
-+		printed += scnprintf(bf + printed, size - printed, "%s%s%s", printed ? "|" : "", show_prefix ? prefix : "",  #n); \
- 		options &= ~W##n; \
- 	}
- 
-diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
-index 70de8f6b3aee..9142fd294e76 100644
---- a/tools/perf/util/annotate.c
-+++ b/tools/perf/util/annotate.c
-@@ -1889,6 +1889,7 @@ int symbol__annotate(struct symbol *sym, struct map *map,
- 		     struct annotation_options *options,
- 		     struct arch **parch)
- {
-+	struct annotation *notes = symbol__annotation(sym);
- 	struct annotate_args args = {
- 		.privsize	= privsize,
- 		.evsel		= evsel,
-@@ -1919,6 +1920,7 @@ int symbol__annotate(struct symbol *sym, struct map *map,
- 
- 	args.ms.map = map;
- 	args.ms.sym = sym;
-+	notes->start = map__rip_2objdump(map, sym->start);
- 
- 	return symbol__disassemble(sym, &args);
- }
-@@ -2794,8 +2796,6 @@ int symbol__annotate2(struct symbol *sym, struct map *map, struct perf_evsel *ev
- 
- 	symbol__calc_percent(sym, evsel);
- 
--	notes->start = map__rip_2objdump(map, sym->start);
--
- 	annotation__set_offsets(notes, size);
- 	annotation__mark_jump_targets(notes, sym);
- 	annotation__compute_ipc(notes, size);
-diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c
-index f69961c4a4f3..2921ce08b198 100644
---- a/tools/perf/util/auxtrace.c
-+++ b/tools/perf/util/auxtrace.c
-@@ -1278,9 +1278,9 @@ static int __auxtrace_mmap__read(struct perf_mmap *map,
- 	}
- 
- 	/* padding must be written by fn() e.g. record__process_auxtrace() */
--	padding = size & 7;
-+	padding = size & (PERF_AUXTRACE_RECORD_ALIGNMENT - 1);
- 	if (padding)
--		padding = 8 - padding;
-+		padding = PERF_AUXTRACE_RECORD_ALIGNMENT - padding;
- 
- 	memset(&ev, 0, sizeof(ev));
- 	ev.auxtrace.header.type = PERF_RECORD_AUXTRACE;
-diff --git a/tools/perf/util/auxtrace.h b/tools/perf/util/auxtrace.h
-index 8e50f96d4b23..fac32482db61 100644
---- a/tools/perf/util/auxtrace.h
-+++ b/tools/perf/util/auxtrace.h
-@@ -40,6 +40,9 @@ struct record_opts;
- struct auxtrace_info_event;
- struct events_stats;
- 
-+/* Auxtrace records must have the same alignment as perf event records */
-+#define PERF_AUXTRACE_RECORD_ALIGNMENT 8
-+
- enum auxtrace_type {
- 	PERF_AUXTRACE_UNKNOWN,
- 	PERF_AUXTRACE_INTEL_PT,
-diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
-index 4503f3ca45ab..7c0b975dd2f0 100644
---- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
-+++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
-@@ -26,6 +26,7 @@
- 
- #include "../cache.h"
- #include "../util.h"
-+#include "../auxtrace.h"
- 
- #include "intel-pt-insn-decoder.h"
- #include "intel-pt-pkt-decoder.h"
-@@ -250,19 +251,15 @@ struct intel_pt_decoder *intel_pt_decoder_new(struct intel_pt_params *params)
- 		if (!(decoder->tsc_ctc_ratio_n % decoder->tsc_ctc_ratio_d))
- 			decoder->tsc_ctc_mult = decoder->tsc_ctc_ratio_n /
- 						decoder->tsc_ctc_ratio_d;
--
--		/*
--		 * Allow for timestamps appearing to backwards because a TSC
--		 * packet has slipped past a MTC packet, so allow 2 MTC ticks
--		 * or ...
--		 */
--		decoder->tsc_slip = multdiv(2 << decoder->mtc_shift,
--					decoder->tsc_ctc_ratio_n,
--					decoder->tsc_ctc_ratio_d);
- 	}
--	/* ... or 0x100 paranoia */
--	if (decoder->tsc_slip < 0x100)
--		decoder->tsc_slip = 0x100;
-+
-+	/*
-+	 * A TSC packet can slip past MTC packets so that the timestamp appears
-+	 * to go backwards. One estimate is that can be up to about 40 CPU
-+	 * cycles, which is certainly less than 0x1000 TSC ticks, but accept
-+	 * slippage an order of magnitude more to be on the safe side.
-+	 */
-+	decoder->tsc_slip = 0x10000;
- 
- 	intel_pt_log("timestamp: mtc_shift %u\n", decoder->mtc_shift);
- 	intel_pt_log("timestamp: tsc_ctc_ratio_n %u\n", decoder->tsc_ctc_ratio_n);
-@@ -1394,7 +1391,6 @@ static int intel_pt_overflow(struct intel_pt_decoder *decoder)
- {
- 	intel_pt_log("ERROR: Buffer overflow\n");
- 	intel_pt_clear_tx_flags(decoder);
--	decoder->cbr = 0;
- 	decoder->timestamp_insn_cnt = 0;
- 	decoder->pkt_state = INTEL_PT_STATE_ERR_RESYNC;
- 	decoder->overflow = true;
-@@ -2575,6 +2571,34 @@ static int intel_pt_tsc_cmp(uint64_t tsc1, uint64_t tsc2)
- 	}
- }
- 
-+#define MAX_PADDING (PERF_AUXTRACE_RECORD_ALIGNMENT - 1)
-+
-+/**
-+ * adj_for_padding - adjust overlap to account for padding.
-+ * @buf_b: second buffer
-+ * @buf_a: first buffer
-+ * @len_a: size of first buffer
-+ *
-+ * @buf_a might have up to 7 bytes of padding appended. Adjust the overlap
-+ * accordingly.
-+ *
-+ * Return: A pointer into @buf_b from where non-overlapped data starts
-+ */
-+static unsigned char *adj_for_padding(unsigned char *buf_b,
-+				      unsigned char *buf_a, size_t len_a)
-+{
-+	unsigned char *p = buf_b - MAX_PADDING;
-+	unsigned char *q = buf_a + len_a - MAX_PADDING;
-+	int i;
-+
-+	for (i = MAX_PADDING; i; i--, p++, q++) {
-+		if (*p != *q)
-+			break;
-+	}
-+
-+	return p;
-+}
-+
- /**
-  * intel_pt_find_overlap_tsc - determine start of non-overlapped trace data
-  *                             using TSC.
-@@ -2625,8 +2649,11 @@ static unsigned char *intel_pt_find_overlap_tsc(unsigned char *buf_a,
- 
- 			/* Same TSC, so buffers are consecutive */
- 			if (!cmp && rem_b >= rem_a) {
-+				unsigned char *start;
-+
- 				*consecutive = true;
--				return buf_b + len_b - (rem_b - rem_a);
-+				start = buf_b + len_b - (rem_b - rem_a);
-+				return adj_for_padding(start, buf_a, len_a);
- 			}
- 			if (cmp < 0)
- 				return buf_b; /* tsc_a < tsc_b => no overlap */
-@@ -2689,7 +2716,7 @@ unsigned char *intel_pt_find_overlap(unsigned char *buf_a, size_t len_a,
- 		found = memmem(buf_a, len_a, buf_b, len_a);
- 		if (found) {
- 			*consecutive = true;
--			return buf_b + len_a;
-+			return adj_for_padding(buf_b + len_a, buf_a, len_a);
- 		}
- 
- 		/* Try again at next PSB in buffer 'a' */
-diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
-index 2e72373ec6df..4493fc13a6fa 100644
---- a/tools/perf/util/intel-pt.c
-+++ b/tools/perf/util/intel-pt.c
-@@ -2522,6 +2522,8 @@ int intel_pt_process_auxtrace_info(union perf_event *event,
- 	}
- 
- 	pt->timeless_decoding = intel_pt_timeless_decoding(pt);
-+	if (pt->timeless_decoding && !pt->tc.time_mult)
-+		pt->tc.time_mult = 1;
- 	pt->have_tsc = intel_pt_have_tsc(pt);
- 	pt->sampling_mode = false;
- 	pt->est_tsc = !pt->timeless_decoding;
-diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
-index 11a234740632..ccd3275feeaa 100644
---- a/tools/perf/util/pmu.c
-+++ b/tools/perf/util/pmu.c
-@@ -734,10 +734,20 @@ static void pmu_add_cpu_aliases(struct list_head *head, struct perf_pmu *pmu)
- 
- 		if (!is_arm_pmu_core(name)) {
- 			pname = pe->pmu ? pe->pmu : "cpu";
-+
-+			/*
-+			 * uncore alias may be from different PMU
-+			 * with common prefix
-+			 */
-+			if (pmu_is_uncore(name) &&
-+			    !strncmp(pname, name, strlen(pname)))
-+				goto new_alias;
-+
- 			if (strcmp(pname, name))
- 				continue;
- 		}
- 
-+new_alias:
- 		/* need type casts to override 'const' */
- 		__perf_pmu__new_alias(head, NULL, (char *)pe->name,
- 				(char *)pe->desc, (char *)pe->event,
-diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
-index 18a59fba97ff..cc4773157b9b 100644
---- a/tools/perf/util/probe-event.c
-+++ b/tools/perf/util/probe-event.c
-@@ -157,8 +157,10 @@ static struct map *kernel_get_module_map(const char *module)
- 	if (module && strchr(module, '/'))
- 		return dso__new_map(module);
- 
--	if (!module)
--		module = "kernel";
-+	if (!module) {
-+		pos = machine__kernel_map(host_machine);
-+		return map__get(pos);
-+	}
- 
- 	for (pos = maps__first(maps); pos; pos = map__next(pos)) {
- 		/* short_name is "[module]" */
-diff --git a/tools/perf/util/s390-cpumsf.c b/tools/perf/util/s390-cpumsf.c
-index 68b2570304ec..08073a4d59a4 100644
---- a/tools/perf/util/s390-cpumsf.c
-+++ b/tools/perf/util/s390-cpumsf.c
-@@ -301,6 +301,11 @@ static bool s390_cpumsf_validate(int machine_type,
- 			*dsdes = 85;
- 			*bsdes = 32;
- 			break;
-+		case 2964:
-+		case 2965:
-+			*dsdes = 112;
-+			*bsdes = 32;
-+			break;
- 		default:
- 			/* Illegal trailer entry */
- 			return false;
-diff --git a/tools/perf/util/scripting-engines/trace-event-python.c b/tools/perf/util/scripting-engines/trace-event-python.c
-index 87ef16a1b17e..7059d1be2d09 100644
---- a/tools/perf/util/scripting-engines/trace-event-python.c
-+++ b/tools/perf/util/scripting-engines/trace-event-python.c
-@@ -733,8 +733,7 @@ static PyObject *get_perf_sample_dict(struct perf_sample *sample,
- 		Py_FatalError("couldn't create Python dictionary");
- 
- 	pydict_set_item_string_decref(dict, "ev_name", _PyUnicode_FromString(perf_evsel__name(evsel)));
--	pydict_set_item_string_decref(dict, "attr", _PyUnicode_FromStringAndSize(
--			(const char *)&evsel->attr, sizeof(evsel->attr)));
-+	pydict_set_item_string_decref(dict, "attr", _PyBytes_FromStringAndSize((const char *)&evsel->attr, sizeof(evsel->attr)));
- 
- 	pydict_set_item_string_decref(dict_sample, "pid",
- 			_PyLong_FromLong(sample->pid));
-@@ -1494,34 +1493,40 @@ static void _free_command_line(wchar_t **command_line, int num)
- static int python_start_script(const char *script, int argc, const char **argv)
- {
- 	struct tables *tables = &tables_global;
-+	PyMODINIT_FUNC (*initfunc)(void);
- #if PY_MAJOR_VERSION < 3
- 	const char **command_line;
- #else
- 	wchar_t **command_line;
- #endif
--	char buf[PATH_MAX];
-+	/*
-+	 * Use a non-const name variable to cope with python 2.6's
-+	 * PyImport_AppendInittab prototype
-+	 */
-+	char buf[PATH_MAX], name[19] = "perf_trace_context";
- 	int i, err = 0;
- 	FILE *fp;
- 
- #if PY_MAJOR_VERSION < 3
-+	initfunc = initperf_trace_context;
- 	command_line = malloc((argc + 1) * sizeof(const char *));
- 	command_line[0] = script;
- 	for (i = 1; i < argc + 1; i++)
- 		command_line[i] = argv[i - 1];
- #else
-+	initfunc = PyInit_perf_trace_context;
- 	command_line = malloc((argc + 1) * sizeof(wchar_t *));
- 	command_line[0] = Py_DecodeLocale(script, NULL);
- 	for (i = 1; i < argc + 1; i++)
- 		command_line[i] = Py_DecodeLocale(argv[i - 1], NULL);
- #endif
- 
-+	PyImport_AppendInittab(name, initfunc);
- 	Py_Initialize();
- 
- #if PY_MAJOR_VERSION < 3
--	initperf_trace_context();
- 	PySys_SetArgv(argc + 1, (char **)command_line);
- #else
--	PyInit_perf_trace_context();
- 	PySys_SetArgv(argc + 1, command_line);
- #endif
- 
-diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c
-index 6c1a83768eb0..d0334c33da54 100644
---- a/tools/perf/util/sort.c
-+++ b/tools/perf/util/sort.c
-@@ -230,8 +230,14 @@ static int64_t _sort__sym_cmp(struct symbol *sym_l, struct symbol *sym_r)
- 	if (sym_l == sym_r)
- 		return 0;
- 
--	if (sym_l->inlined || sym_r->inlined)
--		return strcmp(sym_l->name, sym_r->name);
-+	if (sym_l->inlined || sym_r->inlined) {
-+		int ret = strcmp(sym_l->name, sym_r->name);
-+
-+		if (ret)
-+			return ret;
-+		if ((sym_l->start <= sym_r->end) && (sym_l->end >= sym_r->start))
-+			return 0;
-+	}
- 
- 	if (sym_l->start != sym_r->start)
- 		return (int64_t)(sym_r->start - sym_l->start);
-diff --git a/tools/perf/util/srcline.c b/tools/perf/util/srcline.c
-index dc86597d0cc4..ccf42c4e83f0 100644
---- a/tools/perf/util/srcline.c
-+++ b/tools/perf/util/srcline.c
-@@ -104,7 +104,7 @@ static struct symbol *new_inline_sym(struct dso *dso,
- 	} else {
- 		/* create a fake symbol for the inline frame */
- 		inline_sym = symbol__new(base_sym ? base_sym->start : 0,
--					 base_sym ? base_sym->end : 0,
-+					 base_sym ? (base_sym->end - base_sym->start) : 0,
- 					 base_sym ? base_sym->binding : 0,
- 					 base_sym ? base_sym->type : 0,
- 					 funcname);
-diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
-index 48efad6d0f90..ca5f2e4796ea 100644
---- a/tools/perf/util/symbol.c
-+++ b/tools/perf/util/symbol.c
-@@ -710,6 +710,8 @@ static int map_groups__split_kallsyms_for_kcore(struct map_groups *kmaps, struct
- 		}
- 
- 		pos->start -= curr_map->start - curr_map->pgoff;
-+		if (pos->end > curr_map->end)
-+			pos->end = curr_map->end;
- 		if (pos->end)
- 			pos->end -= curr_map->start - curr_map->pgoff;
- 		symbols__insert(&curr_map->dso->symbols, pos);
-diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
-index 41ab7a3668b3..936f726f7cd9 100644
---- a/tools/testing/selftests/bpf/Makefile
-+++ b/tools/testing/selftests/bpf/Makefile
-@@ -96,6 +96,7 @@ $(BPFOBJ): force
- CLANG ?= clang
- LLC   ?= llc
- LLVM_OBJCOPY ?= llvm-objcopy
-+LLVM_READELF ?= llvm-readelf
- BTF_PAHOLE ?= pahole
- 
- PROBE := $(shell $(LLC) -march=bpf -mcpu=probe -filetype=null /dev/null 2>&1)
-@@ -132,7 +133,7 @@ BTF_PAHOLE_PROBE := $(shell $(BTF_PAHOLE) --help 2>&1 | grep BTF)
- BTF_OBJCOPY_PROBE := $(shell $(LLVM_OBJCOPY) --help 2>&1 | grep -i 'usage.*llvm')
- BTF_LLVM_PROBE := $(shell echo "int main() { return 0; }" | \
- 			  $(CLANG) -target bpf -O2 -g -c -x c - -o ./llvm_btf_verify.o; \
--			  readelf -S ./llvm_btf_verify.o | grep BTF; \
-+			  $(LLVM_READELF) -S ./llvm_btf_verify.o | grep BTF; \
- 			  /bin/rm -f ./llvm_btf_verify.o)
- 
- ifneq ($(BTF_LLVM_PROBE),)
-diff --git a/tools/testing/selftests/bpf/test_map_in_map.c b/tools/testing/selftests/bpf/test_map_in_map.c
-index ce923e67e08e..2985f262846e 100644
---- a/tools/testing/selftests/bpf/test_map_in_map.c
-+++ b/tools/testing/selftests/bpf/test_map_in_map.c
-@@ -27,6 +27,7 @@ SEC("xdp_mimtest")
- int xdp_mimtest0(struct xdp_md *ctx)
- {
- 	int value = 123;
-+	int *value_p;
- 	int key = 0;
- 	void *map;
- 
-@@ -35,6 +36,9 @@ int xdp_mimtest0(struct xdp_md *ctx)
- 		return XDP_DROP;
- 
- 	bpf_map_update_elem(map, &key, &value, 0);
-+	value_p = bpf_map_lookup_elem(map, &key);
-+	if (!value_p || *value_p != 123)
-+		return XDP_DROP;
- 
- 	map = bpf_map_lookup_elem(&mim_hash, &key);
- 	if (!map)
-diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c
-index e2b9eee37187..6e05a22b346c 100644
---- a/tools/testing/selftests/bpf/test_maps.c
-+++ b/tools/testing/selftests/bpf/test_maps.c
-@@ -43,7 +43,7 @@ static int map_flags;
- 	}								\
- })
- 
--static void test_hashmap(int task, void *data)
-+static void test_hashmap(unsigned int task, void *data)
- {
- 	long long key, next_key, first_key, value;
- 	int fd;
-@@ -133,7 +133,7 @@ static void test_hashmap(int task, void *data)
- 	close(fd);
- }
- 
--static void test_hashmap_sizes(int task, void *data)
-+static void test_hashmap_sizes(unsigned int task, void *data)
- {
- 	int fd, i, j;
- 
-@@ -153,7 +153,7 @@ static void test_hashmap_sizes(int task, void *data)
- 		}
- }
- 
--static void test_hashmap_percpu(int task, void *data)
-+static void test_hashmap_percpu(unsigned int task, void *data)
- {
- 	unsigned int nr_cpus = bpf_num_possible_cpus();
- 	BPF_DECLARE_PERCPU(long, value);
-@@ -280,7 +280,7 @@ static int helper_fill_hashmap(int max_entries)
- 	return fd;
- }
- 
--static void test_hashmap_walk(int task, void *data)
-+static void test_hashmap_walk(unsigned int task, void *data)
- {
- 	int fd, i, max_entries = 1000;
- 	long long key, value, next_key;
-@@ -351,7 +351,7 @@ static void test_hashmap_zero_seed(void)
- 	close(second);
- }
- 
--static void test_arraymap(int task, void *data)
-+static void test_arraymap(unsigned int task, void *data)
- {
- 	int key, next_key, fd;
- 	long long value;
-@@ -406,7 +406,7 @@ static void test_arraymap(int task, void *data)
- 	close(fd);
- }
- 
--static void test_arraymap_percpu(int task, void *data)
-+static void test_arraymap_percpu(unsigned int task, void *data)
- {
- 	unsigned int nr_cpus = bpf_num_possible_cpus();
- 	BPF_DECLARE_PERCPU(long, values);
-@@ -502,7 +502,7 @@ static void test_arraymap_percpu_many_keys(void)
- 	close(fd);
- }
- 
--static void test_devmap(int task, void *data)
-+static void test_devmap(unsigned int task, void *data)
- {
- 	int fd;
- 	__u32 key, value;
-@@ -517,7 +517,7 @@ static void test_devmap(int task, void *data)
- 	close(fd);
- }
- 
--static void test_queuemap(int task, void *data)
-+static void test_queuemap(unsigned int task, void *data)
- {
- 	const int MAP_SIZE = 32;
- 	__u32 vals[MAP_SIZE + MAP_SIZE/2], val;
-@@ -575,7 +575,7 @@ static void test_queuemap(int task, void *data)
- 	close(fd);
- }
- 
--static void test_stackmap(int task, void *data)
-+static void test_stackmap(unsigned int task, void *data)
- {
- 	const int MAP_SIZE = 32;
- 	__u32 vals[MAP_SIZE + MAP_SIZE/2], val;
-@@ -641,7 +641,7 @@ static void test_stackmap(int task, void *data)
- #define SOCKMAP_PARSE_PROG "./sockmap_parse_prog.o"
- #define SOCKMAP_VERDICT_PROG "./sockmap_verdict_prog.o"
- #define SOCKMAP_TCP_MSG_PROG "./sockmap_tcp_msg_prog.o"
--static void test_sockmap(int tasks, void *data)
-+static void test_sockmap(unsigned int tasks, void *data)
- {
- 	struct bpf_map *bpf_map_rx, *bpf_map_tx, *bpf_map_msg, *bpf_map_break;
- 	int map_fd_msg = 0, map_fd_rx = 0, map_fd_tx = 0, map_fd_break;
-@@ -1258,10 +1258,11 @@ static void test_map_large(void)
- }
- 
- #define run_parallel(N, FN, DATA) \
--	printf("Fork %d tasks to '" #FN "'\n", N); \
-+	printf("Fork %u tasks to '" #FN "'\n", N); \
- 	__run_parallel(N, FN, DATA)
- 
--static void __run_parallel(int tasks, void (*fn)(int task, void *data),
-+static void __run_parallel(unsigned int tasks,
-+			   void (*fn)(unsigned int task, void *data),
- 			   void *data)
- {
- 	pid_t pid[tasks];
-@@ -1302,7 +1303,7 @@ static void test_map_stress(void)
- #define DO_UPDATE 1
- #define DO_DELETE 0
- 
--static void test_update_delete(int fn, void *data)
-+static void test_update_delete(unsigned int fn, void *data)
- {
- 	int do_update = ((int *)data)[1];
- 	int fd = ((int *)data)[0];
-diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
-index 2fd90d456892..9a967983abed 100644
---- a/tools/testing/selftests/bpf/test_verifier.c
-+++ b/tools/testing/selftests/bpf/test_verifier.c
-@@ -34,6 +34,7 @@
- #include <linux/if_ether.h>
- 
- #include <bpf/bpf.h>
-+#include <bpf/libbpf.h>
- 
- #ifdef HAVE_GENHDR
- # include "autoconf.h"
-@@ -59,6 +60,7 @@
- 
- #define UNPRIV_SYSCTL "kernel/unprivileged_bpf_disabled"
- static bool unpriv_disabled = false;
-+static int skips;
- 
- struct bpf_test {
- 	const char *descr;
-@@ -15946,6 +15948,11 @@ static void do_test_single(struct bpf_test *test, bool unpriv,
- 		pflags |= BPF_F_ANY_ALIGNMENT;
- 	fd_prog = bpf_verify_program(prog_type, prog, prog_len, pflags,
- 				     "GPL", 0, bpf_vlog, sizeof(bpf_vlog), 1);
-+	if (fd_prog < 0 && !bpf_probe_prog_type(prog_type, 0)) {
-+		printf("SKIP (unsupported program type %d)\n", prog_type);
-+		skips++;
-+		goto close_fds;
-+	}
- 
- 	expected_ret = unpriv && test->result_unpriv != UNDEF ?
- 		       test->result_unpriv : test->result;
-@@ -16099,7 +16106,7 @@ static bool test_as_unpriv(struct bpf_test *test)
- 
- static int do_test(bool unpriv, unsigned int from, unsigned int to)
- {
--	int i, passes = 0, errors = 0, skips = 0;
-+	int i, passes = 0, errors = 0;
- 
- 	for (i = from; i < to; i++) {
- 		struct bpf_test *test = &tests[i];
-diff --git a/tools/testing/selftests/firmware/config b/tools/testing/selftests/firmware/config
-index 913a25a4a32b..bf634dda0720 100644
---- a/tools/testing/selftests/firmware/config
-+++ b/tools/testing/selftests/firmware/config
-@@ -1,6 +1,5 @@
- CONFIG_TEST_FIRMWARE=y
- CONFIG_FW_LOADER=y
- CONFIG_FW_LOADER_USER_HELPER=y
--CONFIG_FW_LOADER_USER_HELPER_FALLBACK=y
- CONFIG_IKCONFIG=y
- CONFIG_IKCONFIG_PROC=y
-diff --git a/tools/testing/selftests/firmware/fw_filesystem.sh b/tools/testing/selftests/firmware/fw_filesystem.sh
-index 466cf2f91ba0..a4320c4b44dc 100755
---- a/tools/testing/selftests/firmware/fw_filesystem.sh
-+++ b/tools/testing/selftests/firmware/fw_filesystem.sh
-@@ -155,8 +155,11 @@ read_firmwares()
- {
- 	for i in $(seq 0 3); do
- 		config_set_read_fw_idx $i
--		# Verify the contents match
--		if ! diff -q "$FW" $DIR/read_firmware 2>/dev/null ; then
-+		# Verify the contents are what we expect.
-+		# -Z required for now -- check for yourself, md5sum
-+		# on $FW and DIR/read_firmware will yield the same. Even
-+		# cmp agrees, so something is off.
-+		if ! diff -q -Z "$FW" $DIR/read_firmware 2>/dev/null ; then
- 			echo "request #$i: firmware was not loaded" >&2
- 			exit 1
- 		fi
-@@ -168,7 +171,7 @@ read_firmwares_expect_nofile()
- 	for i in $(seq 0 3); do
- 		config_set_read_fw_idx $i
- 		# Ensures contents differ
--		if diff -q "$FW" $DIR/read_firmware 2>/dev/null ; then
-+		if diff -q -Z "$FW" $DIR/read_firmware 2>/dev/null ; then
- 			echo "request $i: file was not expected to match" >&2
- 			exit 1
- 		fi
-diff --git a/tools/testing/selftests/firmware/fw_lib.sh b/tools/testing/selftests/firmware/fw_lib.sh
-index 6c5f1b2ffb74..1cbb12e284a6 100755
---- a/tools/testing/selftests/firmware/fw_lib.sh
-+++ b/tools/testing/selftests/firmware/fw_lib.sh
-@@ -91,7 +91,7 @@ verify_reqs()
- 	if [ "$TEST_REQS_FW_SYSFS_FALLBACK" = "yes" ]; then
- 		if [ ! "$HAS_FW_LOADER_USER_HELPER" = "yes" ]; then
- 			echo "usermode helper disabled so ignoring test"
--			exit $ksft_skip
-+			exit 0
- 		fi
- 	fi
- }
-diff --git a/tools/testing/selftests/ir/ir_loopback.c b/tools/testing/selftests/ir/ir_loopback.c
-index 858c19caf224..8cdf1b89ac9c 100644
---- a/tools/testing/selftests/ir/ir_loopback.c
-+++ b/tools/testing/selftests/ir/ir_loopback.c
-@@ -27,6 +27,8 @@
- 
- #define TEST_SCANCODES	10
- #define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
-+#define SYSFS_PATH_MAX 256
-+#define DNAME_PATH_MAX 256
- 
- static const struct {
- 	enum rc_proto proto;
-@@ -56,7 +58,7 @@ static const struct {
- int lirc_open(const char *rc)
- {
- 	struct dirent *dent;
--	char buf[100];
-+	char buf[SYSFS_PATH_MAX + DNAME_PATH_MAX];
- 	DIR *d;
- 	int fd;
- 
-diff --git a/tools/testing/selftests/seccomp/seccomp_bpf.c b/tools/testing/selftests/seccomp/seccomp_bpf.c
-index 7e632b465ab4..6d7a81306f8a 100644
---- a/tools/testing/selftests/seccomp/seccomp_bpf.c
-+++ b/tools/testing/selftests/seccomp/seccomp_bpf.c
-@@ -2971,6 +2971,12 @@ TEST(get_metadata)
- 	struct seccomp_metadata md;
- 	long ret;
- 
-+	/* Only real root can get metadata. */
-+	if (geteuid()) {
-+		XFAIL(return, "get_metadata requires real root");
-+		return;
-+	}
-+
- 	ASSERT_EQ(0, pipe(pipefd));
- 
- 	pid = fork();
-diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
-index 30251e288629..5cc22cdaa5ba 100644
---- a/virt/kvm/arm/mmu.c
-+++ b/virt/kvm/arm/mmu.c
-@@ -2353,7 +2353,7 @@ int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
- 	return 0;
- }
- 
--void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots)
-+void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen)
- {
- }
- 
-diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
-index 076bc38963bf..b4f2d892a1d3 100644
---- a/virt/kvm/kvm_main.c
-+++ b/virt/kvm/kvm_main.c
-@@ -874,6 +874,7 @@ static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
- 		int as_id, struct kvm_memslots *slots)
- {
- 	struct kvm_memslots *old_memslots = __kvm_memslots(kvm, as_id);
-+	u64 gen;
- 
- 	/*
- 	 * Set the low bit in the generation, which disables SPTE caching
-@@ -896,9 +897,11 @@ static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
- 	 * space 0 will use generations 0, 4, 8, ... while * address space 1 will
- 	 * use generations 2, 6, 10, 14, ...
- 	 */
--	slots->generation += KVM_ADDRESS_SPACE_NUM * 2 - 1;
-+	gen = slots->generation + KVM_ADDRESS_SPACE_NUM * 2 - 1;
- 
--	kvm_arch_memslots_updated(kvm, slots);
-+	kvm_arch_memslots_updated(kvm, gen);
-+
-+	slots->generation = gen;
- 
- 	return old_memslots;
- }
-@@ -2899,6 +2902,9 @@ static long kvm_device_ioctl(struct file *filp, unsigned int ioctl,
- {
- 	struct kvm_device *dev = filp->private_data;
- 
-+	if (dev->kvm->mm != current->mm)
-+		return -EIO;
-+
- 	switch (ioctl) {
- 	case KVM_SET_DEVICE_ATTR:
- 		return kvm_device_ioctl_attr(dev, dev->ops->set_attr, arg);



* [gentoo-commits] proj/linux-patches:5.0 commit in: /
@ 2019-04-20 11:12 Mike Pagano
  0 siblings, 0 replies; 28+ messages in thread
From: Mike Pagano @ 2019-04-20 11:12 UTC (permalink / raw)
  To: gentoo-commits

commit:     3fc69f4634e06b7a81d27e097aeaf5bd6c79fdf5
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Apr 20 11:12:01 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Apr 20 11:12:01 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3fc69f46

Linux patch 5.0.9

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |    4 +
 1008_linux-5.0.9.patch | 3652 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3656 insertions(+)

diff --git a/0000_README b/0000_README
index 2dd07a5..dda69ae 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch:  1007_linux-5.0.8.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.0.8
 
+Patch:  1008_linux-5.0.9.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.0.9
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1008_linux-5.0.9.patch b/1008_linux-5.0.9.patch
new file mode 100644
index 0000000..ca29395
--- /dev/null
+++ b/1008_linux-5.0.9.patch
@@ -0,0 +1,3652 @@
+diff --git a/Makefile b/Makefile
+index f7666051de66..ef192ca04330 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 0
+-SUBLEVEL = 8
++SUBLEVEL = 9
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/arch/arc/configs/hsdk_defconfig b/arch/arc/configs/hsdk_defconfig
+index 87b23b7fb781..aefcf7a4e17a 100644
+--- a/arch/arc/configs/hsdk_defconfig
++++ b/arch/arc/configs/hsdk_defconfig
+@@ -8,6 +8,7 @@ CONFIG_NAMESPACES=y
+ # CONFIG_UTS_NS is not set
+ # CONFIG_PID_NS is not set
+ CONFIG_BLK_DEV_INITRD=y
++CONFIG_BLK_DEV_RAM=y
+ CONFIG_EMBEDDED=y
+ CONFIG_PERF_EVENTS=y
+ # CONFIG_VM_EVENT_COUNTERS is not set
+diff --git a/arch/arc/kernel/head.S b/arch/arc/kernel/head.S
+index 30e090625916..a72bbda2f7aa 100644
+--- a/arch/arc/kernel/head.S
++++ b/arch/arc/kernel/head.S
+@@ -106,6 +106,7 @@ ENTRY(stext)
+ 	;    r2 = pointer to uboot provided cmdline or external DTB in mem
+ 	; These are handled later in handle_uboot_args()
+ 	st	r0, [@uboot_tag]
++	st      r1, [@uboot_magic]
+ 	st	r2, [@uboot_arg]
+ 
+ 	; setup "current" tsk and optionally cache it in dedicated r25
+diff --git a/arch/arc/kernel/setup.c b/arch/arc/kernel/setup.c
+index 7b2340996cf8..7b3a7b3b380c 100644
+--- a/arch/arc/kernel/setup.c
++++ b/arch/arc/kernel/setup.c
+@@ -36,6 +36,7 @@ unsigned int intr_to_DE_cnt;
+ 
+ /* Part of U-boot ABI: see head.S */
+ int __initdata uboot_tag;
++int __initdata uboot_magic;
+ char __initdata *uboot_arg;
+ 
+ const struct machine_desc *machine_desc;
+@@ -497,6 +498,8 @@ static inline bool uboot_arg_invalid(unsigned long addr)
+ #define UBOOT_TAG_NONE		0
+ #define UBOOT_TAG_CMDLINE	1
+ #define UBOOT_TAG_DTB		2
++/* We always pass 0 as magic from U-boot */
++#define UBOOT_MAGIC_VALUE	0
+ 
+ void __init handle_uboot_args(void)
+ {
+@@ -511,6 +514,11 @@ void __init handle_uboot_args(void)
+ 		goto ignore_uboot_args;
+ 	}
+ 
++	if (uboot_magic != UBOOT_MAGIC_VALUE) {
++		pr_warn(IGNORE_ARGS "non zero uboot magic\n");
++		goto ignore_uboot_args;
++	}
++
+ 	if (uboot_tag != UBOOT_TAG_NONE &&
+             uboot_arg_invalid((unsigned long)uboot_arg)) {
+ 		pr_warn(IGNORE_ARGS "invalid uboot arg: '%px'\n", uboot_arg);
+diff --git a/arch/arm/kernel/patch.c b/arch/arm/kernel/patch.c
+index a50dc00d79a2..d0a05a3bdb96 100644
+--- a/arch/arm/kernel/patch.c
++++ b/arch/arm/kernel/patch.c
+@@ -16,7 +16,7 @@ struct patch {
+ 	unsigned int insn;
+ };
+ 
+-static DEFINE_SPINLOCK(patch_lock);
++static DEFINE_RAW_SPINLOCK(patch_lock);
+ 
+ static void __kprobes *patch_map(void *addr, int fixmap, unsigned long *flags)
+ 	__acquires(&patch_lock)
+@@ -33,7 +33,7 @@ static void __kprobes *patch_map(void *addr, int fixmap, unsigned long *flags)
+ 		return addr;
+ 
+ 	if (flags)
+-		spin_lock_irqsave(&patch_lock, *flags);
++		raw_spin_lock_irqsave(&patch_lock, *flags);
+ 	else
+ 		__acquire(&patch_lock);
+ 
+@@ -48,7 +48,7 @@ static void __kprobes patch_unmap(int fixmap, unsigned long *flags)
+ 	clear_fixmap(fixmap);
+ 
+ 	if (flags)
+-		spin_unlock_irqrestore(&patch_lock, *flags);
++		raw_spin_unlock_irqrestore(&patch_lock, *flags);
+ 	else
+ 		__release(&patch_lock);
+ }
+diff --git a/arch/mips/bcm47xx/workarounds.c b/arch/mips/bcm47xx/workarounds.c
+index 46eddbec8d9f..0ab95dd431b3 100644
+--- a/arch/mips/bcm47xx/workarounds.c
++++ b/arch/mips/bcm47xx/workarounds.c
+@@ -24,6 +24,7 @@ void __init bcm47xx_workarounds(void)
+ 	case BCM47XX_BOARD_NETGEAR_WNR3500L:
+ 		bcm47xx_workarounds_enable_usb_power(12);
+ 		break;
++	case BCM47XX_BOARD_NETGEAR_WNDR3400V2:
+ 	case BCM47XX_BOARD_NETGEAR_WNDR3400_V3:
+ 		bcm47xx_workarounds_enable_usb_power(21);
+ 		break;
+diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
+index d3f42b6bbdac..8a9cff1f129d 100644
+--- a/arch/x86/hyperv/hv_init.c
++++ b/arch/x86/hyperv/hv_init.c
+@@ -102,9 +102,13 @@ static int hv_cpu_init(unsigned int cpu)
+ 	u64 msr_vp_index;
+ 	struct hv_vp_assist_page **hvp = &hv_vp_assist_page[smp_processor_id()];
+ 	void **input_arg;
++	struct page *pg;
+ 
+ 	input_arg = (void **)this_cpu_ptr(hyperv_pcpu_input_arg);
+-	*input_arg = page_address(alloc_page(GFP_KERNEL));
++	pg = alloc_page(GFP_KERNEL);
++	if (unlikely(!pg))
++		return -ENOMEM;
++	*input_arg = page_address(pg);
+ 
+ 	hv_get_vp_index(msr_vp_index);
+ 
+diff --git a/arch/x86/kernel/aperture_64.c b/arch/x86/kernel/aperture_64.c
+index 58176b56354e..294ed4392a0e 100644
+--- a/arch/x86/kernel/aperture_64.c
++++ b/arch/x86/kernel/aperture_64.c
+@@ -14,6 +14,7 @@
+ #define pr_fmt(fmt) "AGP: " fmt
+ 
+ #include <linux/kernel.h>
++#include <linux/kcore.h>
+ #include <linux/types.h>
+ #include <linux/init.h>
+ #include <linux/memblock.h>
+@@ -57,7 +58,7 @@ int fallback_aper_force __initdata;
+ 
+ int fix_aperture __initdata = 1;
+ 
+-#ifdef CONFIG_PROC_VMCORE
++#if defined(CONFIG_PROC_VMCORE) || defined(CONFIG_PROC_KCORE)
+ /*
+  * If the first kernel maps the aperture over e820 RAM, the kdump kernel will
+  * use the same range because it will remain configured in the northbridge.
+@@ -66,20 +67,25 @@ int fix_aperture __initdata = 1;
+  */
+ static unsigned long aperture_pfn_start, aperture_page_count;
+ 
+-static int gart_oldmem_pfn_is_ram(unsigned long pfn)
++static int gart_mem_pfn_is_ram(unsigned long pfn)
+ {
+ 	return likely((pfn < aperture_pfn_start) ||
+ 		      (pfn >= aperture_pfn_start + aperture_page_count));
+ }
+ 
+-static void exclude_from_vmcore(u64 aper_base, u32 aper_order)
++static void __init exclude_from_core(u64 aper_base, u32 aper_order)
+ {
+ 	aperture_pfn_start = aper_base >> PAGE_SHIFT;
+ 	aperture_page_count = (32 * 1024 * 1024) << aper_order >> PAGE_SHIFT;
+-	WARN_ON(register_oldmem_pfn_is_ram(&gart_oldmem_pfn_is_ram));
++#ifdef CONFIG_PROC_VMCORE
++	WARN_ON(register_oldmem_pfn_is_ram(&gart_mem_pfn_is_ram));
++#endif
++#ifdef CONFIG_PROC_KCORE
++	WARN_ON(register_mem_pfn_is_ram(&gart_mem_pfn_is_ram));
++#endif
+ }
+ #else
+-static void exclude_from_vmcore(u64 aper_base, u32 aper_order)
++static void exclude_from_core(u64 aper_base, u32 aper_order)
+ {
+ }
+ #endif
+@@ -474,7 +480,7 @@ out:
+ 			 * may have allocated the range over its e820 RAM
+ 			 * and fixed up the northbridge
+ 			 */
+-			exclude_from_vmcore(last_aper_base, last_aper_order);
++			exclude_from_core(last_aper_base, last_aper_order);
+ 
+ 			return 1;
+ 		}
+@@ -520,7 +526,7 @@ out:
+ 	 * overlap with the first kernel's memory. We can't access the
+ 	 * range through vmcore even though it should be part of the dump.
+ 	 */
+-	exclude_from_vmcore(aper_alloc, aper_order);
++	exclude_from_core(aper_alloc, aper_order);
+ 
+ 	/* Fix up the north bridges */
+ 	for (i = 0; i < amd_nb_bus_dev_ranges[i].dev_limit; i++) {
+diff --git a/arch/x86/kernel/cpu/cyrix.c b/arch/x86/kernel/cpu/cyrix.c
+index d12226f60168..1d9b8aaea06c 100644
+--- a/arch/x86/kernel/cpu/cyrix.c
++++ b/arch/x86/kernel/cpu/cyrix.c
+@@ -124,7 +124,7 @@ static void set_cx86_reorder(void)
+ 	setCx86(CX86_CCR3, (ccr3 & 0x0f) | 0x10); /* enable MAPEN */
+ 
+ 	/* Load/Store Serialize to mem access disable (=reorder it) */
+-	setCx86_old(CX86_PCR0, getCx86_old(CX86_PCR0) & ~0x80);
++	setCx86(CX86_PCR0, getCx86(CX86_PCR0) & ~0x80);
+ 	/* set load/store serialize from 1GB to 4GB */
+ 	ccr3 |= 0xe0;
+ 	setCx86(CX86_CCR3, ccr3);
+@@ -135,11 +135,11 @@ static void set_cx86_memwb(void)
+ 	pr_info("Enable Memory-Write-back mode on Cyrix/NSC processor.\n");
+ 
+ 	/* CCR2 bit 2: unlock NW bit */
+-	setCx86_old(CX86_CCR2, getCx86_old(CX86_CCR2) & ~0x04);
++	setCx86(CX86_CCR2, getCx86(CX86_CCR2) & ~0x04);
+ 	/* set 'Not Write-through' */
+ 	write_cr0(read_cr0() | X86_CR0_NW);
+ 	/* CCR2 bit 2: lock NW bit and set WT1 */
+-	setCx86_old(CX86_CCR2, getCx86_old(CX86_CCR2) | 0x14);
++	setCx86(CX86_CCR2, getCx86(CX86_CCR2) | 0x14);
+ }
+ 
+ /*
+@@ -153,14 +153,14 @@ static void geode_configure(void)
+ 	local_irq_save(flags);
+ 
+ 	/* Suspend on halt power saving and enable #SUSP pin */
+-	setCx86_old(CX86_CCR2, getCx86_old(CX86_CCR2) | 0x88);
++	setCx86(CX86_CCR2, getCx86(CX86_CCR2) | 0x88);
+ 
+ 	ccr3 = getCx86(CX86_CCR3);
+ 	setCx86(CX86_CCR3, (ccr3 & 0x0f) | 0x10);	/* enable MAPEN */
+ 
+ 
+ 	/* FPU fast, DTE cache, Mem bypass */
+-	setCx86_old(CX86_CCR4, getCx86_old(CX86_CCR4) | 0x38);
++	setCx86(CX86_CCR4, getCx86(CX86_CCR4) | 0x38);
+ 	setCx86(CX86_CCR3, ccr3);			/* disable MAPEN */
+ 
+ 	set_cx86_memwb();
+@@ -296,7 +296,7 @@ static void init_cyrix(struct cpuinfo_x86 *c)
+ 		/* GXm supports extended cpuid levels 'ala' AMD */
+ 		if (c->cpuid_level == 2) {
+ 			/* Enable cxMMX extensions (GX1 Datasheet 54) */
+-			setCx86_old(CX86_CCR7, getCx86_old(CX86_CCR7) | 1);
++			setCx86(CX86_CCR7, getCx86(CX86_CCR7) | 1);
+ 
+ 			/*
+ 			 * GXm : 0x30 ... 0x5f GXm  datasheet 51
+@@ -319,7 +319,7 @@ static void init_cyrix(struct cpuinfo_x86 *c)
+ 		if (dir1 > 7) {
+ 			dir0_msn++;  /* M II */
+ 			/* Enable MMX extensions (App note 108) */
+-			setCx86_old(CX86_CCR7, getCx86_old(CX86_CCR7)|1);
++			setCx86(CX86_CCR7, getCx86(CX86_CCR7)|1);
+ 		} else {
+ 			/* A 6x86MX - it has the bug. */
+ 			set_cpu_bug(c, X86_BUG_COMA);
+diff --git a/arch/x86/kernel/hpet.c b/arch/x86/kernel/hpet.c
+index dfd3aca82c61..fb32925a2e62 100644
+--- a/arch/x86/kernel/hpet.c
++++ b/arch/x86/kernel/hpet.c
+@@ -905,6 +905,8 @@ int __init hpet_enable(void)
+ 		return 0;
+ 
+ 	hpet_set_mapping();
++	if (!hpet_virt_address)
++		return 0;
+ 
+ 	/*
+ 	 * Read the period and check for a sane value:
+diff --git a/arch/x86/kernel/hw_breakpoint.c b/arch/x86/kernel/hw_breakpoint.c
+index 34a5c1715148..2882fe1d2a78 100644
+--- a/arch/x86/kernel/hw_breakpoint.c
++++ b/arch/x86/kernel/hw_breakpoint.c
+@@ -357,6 +357,7 @@ int hw_breakpoint_arch_parse(struct perf_event *bp,
+ #endif
+ 	default:
+ 		WARN_ON_ONCE(1);
++		return -EINVAL;
+ 	}
+ 
+ 	/*
+diff --git a/arch/x86/kernel/mpparse.c b/arch/x86/kernel/mpparse.c
+index 3482460d984d..1bfe5c6e6cfe 100644
+--- a/arch/x86/kernel/mpparse.c
++++ b/arch/x86/kernel/mpparse.c
+@@ -598,8 +598,8 @@ static int __init smp_scan_config(unsigned long base, unsigned long length)
+ 			mpf_base = base;
+ 			mpf_found = true;
+ 
+-			pr_info("found SMP MP-table at [mem %#010lx-%#010lx] mapped at [%p]\n",
+-				base, base + sizeof(*mpf) - 1, mpf);
++			pr_info("found SMP MP-table at [mem %#010lx-%#010lx]\n",
++				base, base + sizeof(*mpf) - 1);
+ 
+ 			memblock_reserve(base, sizeof(*mpf));
+ 			if (mpf->physptr)
+diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
+index 2620baa1f699..507212d75ee2 100644
+--- a/block/blk-iolatency.c
++++ b/block/blk-iolatency.c
+@@ -75,6 +75,7 @@
+ #include <linux/blk-mq.h>
+ #include "blk-rq-qos.h"
+ #include "blk-stat.h"
++#include "blk.h"
+ 
+ #define DEFAULT_SCALE_COOKIE 1000000U
+ 
+diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
+index 9d66a47d32fb..49e16f009095 100644
+--- a/drivers/acpi/ec.c
++++ b/drivers/acpi/ec.c
+@@ -194,6 +194,7 @@ static struct workqueue_struct *ec_query_wq;
+ static int EC_FLAGS_QUERY_HANDSHAKE; /* Needs QR_EC issued when SCI_EVT set */
+ static int EC_FLAGS_CORRECT_ECDT; /* Needs ECDT port address correction */
+ static int EC_FLAGS_IGNORE_DSDT_GPE; /* Needs ECDT GPE as correction setting */
++static int EC_FLAGS_CLEAR_ON_RESUME; /* Needs acpi_ec_clear() on boot/resume */
+ 
+ /* --------------------------------------------------------------------------
+  *                           Logging/Debugging
+@@ -499,6 +500,26 @@ static inline void __acpi_ec_disable_event(struct acpi_ec *ec)
+ 		ec_log_drv("event blocked");
+ }
+ 
++/*
++ * Process _Q events that might have accumulated in the EC.
++ * Run with locked ec mutex.
++ */
++static void acpi_ec_clear(struct acpi_ec *ec)
++{
++	int i, status;
++	u8 value = 0;
++
++	for (i = 0; i < ACPI_EC_CLEAR_MAX; i++) {
++		status = acpi_ec_query(ec, &value);
++		if (status || !value)
++			break;
++	}
++	if (unlikely(i == ACPI_EC_CLEAR_MAX))
++		pr_warn("Warning: Maximum of %d stale EC events cleared\n", i);
++	else
++		pr_info("%d stale EC events cleared\n", i);
++}
++
+ static void acpi_ec_enable_event(struct acpi_ec *ec)
+ {
+ 	unsigned long flags;
+@@ -507,6 +528,10 @@ static void acpi_ec_enable_event(struct acpi_ec *ec)
+ 	if (acpi_ec_started(ec))
+ 		__acpi_ec_enable_event(ec);
+ 	spin_unlock_irqrestore(&ec->lock, flags);
++
++	/* Drain additional events if hardware requires that */
++	if (EC_FLAGS_CLEAR_ON_RESUME)
++		acpi_ec_clear(ec);
+ }
+ 
+ #ifdef CONFIG_PM_SLEEP
+@@ -1820,6 +1845,31 @@ static int ec_flag_query_handshake(const struct dmi_system_id *id)
+ }
+ #endif
+ 
++/*
++ * On some hardware it is necessary to clear events accumulated by the EC during
++ * sleep. These ECs stop reporting GPEs until they are manually polled, if too
++ * many events are accumulated. (e.g. Samsung Series 5/9 notebooks)
++ *
++ * https://bugzilla.kernel.org/show_bug.cgi?id=44161
++ *
++ * Ideally, the EC should also be instructed NOT to accumulate events during
++ * sleep (which Windows seems to do somehow), but the interface to control this
++ * behaviour is not known at this time.
++ *
++ * Models known to be affected are Samsung 530Uxx/535Uxx/540Uxx/550Pxx/900Xxx,
++ * however it is very likely that other Samsung models are affected.
++ *
++ * On systems which don't accumulate _Q events during sleep, this extra check
++ * should be harmless.
++ */
++static int ec_clear_on_resume(const struct dmi_system_id *id)
++{
++	pr_debug("Detected system needing EC poll on resume.\n");
++	EC_FLAGS_CLEAR_ON_RESUME = 1;
++	ec_event_clearing = ACPI_EC_EVT_TIMING_STATUS;
++	return 0;
++}
++
+ /*
+  * Some ECDTs contain wrong register addresses.
+  * MSI MS-171F
+@@ -1869,6 +1919,9 @@ static const struct dmi_system_id ec_dmi_table[] __initconst = {
+ 	ec_honor_ecdt_gpe, "ASUS X580VD", {
+ 	DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+ 	DMI_MATCH(DMI_PRODUCT_NAME, "X580VD"),}, NULL},
++	{
++	ec_clear_on_resume, "Samsung hardware", {
++	DMI_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD.")}, NULL},
+ 	{},
+ };
+ 
+diff --git a/drivers/acpi/utils.c b/drivers/acpi/utils.c
+index 78db97687f26..c4b06cc075f9 100644
+--- a/drivers/acpi/utils.c
++++ b/drivers/acpi/utils.c
+@@ -800,6 +800,7 @@ bool acpi_dev_present(const char *hid, const char *uid, s64 hrv)
+ 	match.hrv = hrv;
+ 
+ 	dev = bus_find_device(&acpi_bus_type, NULL, &match, acpi_dev_match_cb);
++	put_device(dev);
+ 	return !!dev;
+ }
+ EXPORT_SYMBOL(acpi_dev_present);
+diff --git a/drivers/auxdisplay/hd44780.c b/drivers/auxdisplay/hd44780.c
+index 9ad93ea42fdc..3cde351fb5c9 100644
+--- a/drivers/auxdisplay/hd44780.c
++++ b/drivers/auxdisplay/hd44780.c
+@@ -280,6 +280,8 @@ static int hd44780_remove(struct platform_device *pdev)
+ 	struct charlcd *lcd = platform_get_drvdata(pdev);
+ 
+ 	charlcd_unregister(lcd);
++
++	kfree(lcd);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
+index 500de1dee967..a00ca6b8117b 100644
+--- a/drivers/base/power/domain.c
++++ b/drivers/base/power/domain.c
+@@ -1467,12 +1467,12 @@ static int genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
+ 	if (IS_ERR(gpd_data))
+ 		return PTR_ERR(gpd_data);
+ 
+-	genpd_lock(genpd);
+-
+ 	ret = genpd->attach_dev ? genpd->attach_dev(genpd, dev) : 0;
+ 	if (ret)
+ 		goto out;
+ 
++	genpd_lock(genpd);
++
+ 	dev_pm_domain_set(dev, &genpd->domain);
+ 
+ 	genpd->device_count++;
+@@ -1480,9 +1480,8 @@ static int genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
+ 
+ 	list_add_tail(&gpd_data->base.list_node, &genpd->dev_list);
+ 
+- out:
+ 	genpd_unlock(genpd);
+-
++ out:
+ 	if (ret)
+ 		genpd_free_dev_data(dev, gpd_data);
+ 	else
+@@ -1531,15 +1530,15 @@ static int genpd_remove_device(struct generic_pm_domain *genpd,
+ 	genpd->device_count--;
+ 	genpd->max_off_time_changed = true;
+ 
+-	if (genpd->detach_dev)
+-		genpd->detach_dev(genpd, dev);
+-
+ 	dev_pm_domain_set(dev, NULL);
+ 
+ 	list_del_init(&pdd->list_node);
+ 
+ 	genpd_unlock(genpd);
+ 
++	if (genpd->detach_dev)
++		genpd->detach_dev(genpd, dev);
++
+ 	genpd_free_dev_data(dev, gpd_data);
+ 
+ 	return 0;
+diff --git a/drivers/block/paride/pcd.c b/drivers/block/paride/pcd.c
+index 96670eefaeb2..6d415b20fb70 100644
+--- a/drivers/block/paride/pcd.c
++++ b/drivers/block/paride/pcd.c
+@@ -314,6 +314,7 @@ static void pcd_init_units(void)
+ 		disk->queue = blk_mq_init_sq_queue(&cd->tag_set, &pcd_mq_ops,
+ 						   1, BLK_MQ_F_SHOULD_MERGE);
+ 		if (IS_ERR(disk->queue)) {
++			put_disk(disk);
+ 			disk->queue = NULL;
+ 			continue;
+ 		}
+@@ -749,8 +750,14 @@ static int pcd_detect(void)
+ 		return 0;
+ 
+ 	printk("%s: No CD-ROM drive found\n", name);
+-	for (unit = 0, cd = pcd; unit < PCD_UNITS; unit++, cd++)
++	for (unit = 0, cd = pcd; unit < PCD_UNITS; unit++, cd++) {
++		if (!cd->disk)
++			continue;
++		blk_cleanup_queue(cd->disk->queue);
++		cd->disk->queue = NULL;
++		blk_mq_free_tag_set(&cd->tag_set);
+ 		put_disk(cd->disk);
++	}
+ 	pi_unregister_driver(par_drv);
+ 	return -1;
+ }
+@@ -1006,8 +1013,14 @@ static int __init pcd_init(void)
+ 	pcd_probe_capabilities();
+ 
+ 	if (register_blkdev(major, name)) {
+-		for (unit = 0, cd = pcd; unit < PCD_UNITS; unit++, cd++)
++		for (unit = 0, cd = pcd; unit < PCD_UNITS; unit++, cd++) {
++			if (!cd->disk)
++				continue;
++
++			blk_cleanup_queue(cd->disk->queue);
++			blk_mq_free_tag_set(&cd->tag_set);
+ 			put_disk(cd->disk);
++		}
+ 		return -EBUSY;
+ 	}
+ 
+@@ -1028,6 +1041,9 @@ static void __exit pcd_exit(void)
+ 	int unit;
+ 
+ 	for (unit = 0, cd = pcd; unit < PCD_UNITS; unit++, cd++) {
++		if (!cd->disk)
++			continue;
++
+ 		if (cd->present) {
+ 			del_gendisk(cd->disk);
+ 			pi_release(cd->pi);
+diff --git a/drivers/block/paride/pf.c b/drivers/block/paride/pf.c
+index e92e7a8eeeb2..35e6e271b219 100644
+--- a/drivers/block/paride/pf.c
++++ b/drivers/block/paride/pf.c
+@@ -761,8 +761,14 @@ static int pf_detect(void)
+ 		return 0;
+ 
+ 	printk("%s: No ATAPI disk detected\n", name);
+-	for (pf = units, unit = 0; unit < PF_UNITS; pf++, unit++)
++	for (pf = units, unit = 0; unit < PF_UNITS; pf++, unit++) {
++		if (!pf->disk)
++			continue;
++		blk_cleanup_queue(pf->disk->queue);
++		pf->disk->queue = NULL;
++		blk_mq_free_tag_set(&pf->tag_set);
+ 		put_disk(pf->disk);
++	}
+ 	pi_unregister_driver(par_drv);
+ 	return -1;
+ }
+@@ -1025,8 +1031,13 @@ static int __init pf_init(void)
+ 	pf_busy = 0;
+ 
+ 	if (register_blkdev(major, name)) {
+-		for (pf = units, unit = 0; unit < PF_UNITS; pf++, unit++)
++		for (pf = units, unit = 0; unit < PF_UNITS; pf++, unit++) {
++			if (!pf->disk)
++				continue;
++			blk_cleanup_queue(pf->disk->queue);
++			blk_mq_free_tag_set(&pf->tag_set);
+ 			put_disk(pf->disk);
++		}
+ 		return -EBUSY;
+ 	}
+ 
+@@ -1047,13 +1058,18 @@ static void __exit pf_exit(void)
+ 	int unit;
+ 	unregister_blkdev(major, name);
+ 	for (pf = units, unit = 0; unit < PF_UNITS; pf++, unit++) {
+-		if (!pf->present)
++		if (!pf->disk)
+ 			continue;
+-		del_gendisk(pf->disk);
++
++		if (pf->present)
++			del_gendisk(pf->disk);
++
+ 		blk_cleanup_queue(pf->disk->queue);
+ 		blk_mq_free_tag_set(&pf->tag_set);
+ 		put_disk(pf->disk);
+-		pi_release(pf->pi);
++
++		if (pf->present)
++			pi_release(pf->pi);
+ 	}
+ }
+ 
+diff --git a/drivers/crypto/axis/artpec6_crypto.c b/drivers/crypto/axis/artpec6_crypto.c
+index f3442c2bdbdc..3c70004240d6 100644
+--- a/drivers/crypto/axis/artpec6_crypto.c
++++ b/drivers/crypto/axis/artpec6_crypto.c
+@@ -284,6 +284,7 @@ enum artpec6_crypto_hash_flags {
+ 
+ struct artpec6_crypto_req_common {
+ 	struct list_head list;
++	struct list_head complete_in_progress;
+ 	struct artpec6_crypto_dma_descriptors *dma;
+ 	struct crypto_async_request *req;
+ 	void (*complete)(struct crypto_async_request *req);
+@@ -2045,7 +2046,8 @@ static int artpec6_crypto_prepare_aead(struct aead_request *areq)
+ 	return artpec6_crypto_dma_map_descs(common);
+ }
+ 
+-static void artpec6_crypto_process_queue(struct artpec6_crypto *ac)
++static void artpec6_crypto_process_queue(struct artpec6_crypto *ac,
++	    struct list_head *completions)
+ {
+ 	struct artpec6_crypto_req_common *req;
+ 
+@@ -2056,7 +2058,7 @@ static void artpec6_crypto_process_queue(struct artpec6_crypto *ac)
+ 		list_move_tail(&req->list, &ac->pending);
+ 		artpec6_crypto_start_dma(req);
+ 
+-		req->req->complete(req->req, -EINPROGRESS);
++		list_add_tail(&req->complete_in_progress, completions);
+ 	}
+ 
+ 	/*
+@@ -2086,6 +2088,11 @@ static void artpec6_crypto_task(unsigned long data)
+ 	struct artpec6_crypto *ac = (struct artpec6_crypto *)data;
+ 	struct artpec6_crypto_req_common *req;
+ 	struct artpec6_crypto_req_common *n;
++	struct list_head complete_done;
++	struct list_head complete_in_progress;
++
++	INIT_LIST_HEAD(&complete_done);
++	INIT_LIST_HEAD(&complete_in_progress);
+ 
+ 	if (list_empty(&ac->pending)) {
+ 		pr_debug("Spurious IRQ\n");
+@@ -2119,19 +2126,30 @@ static void artpec6_crypto_task(unsigned long data)
+ 
+ 		pr_debug("Completing request %p\n", req);
+ 
+-		list_del(&req->list);
++		list_move_tail(&req->list, &complete_done);
+ 
+ 		artpec6_crypto_dma_unmap_all(req);
+ 		artpec6_crypto_copy_bounce_buffers(req);
+ 
+ 		ac->pending_count--;
+ 		artpec6_crypto_common_destroy(req);
+-		req->complete(req->req);
+ 	}
+ 
+-	artpec6_crypto_process_queue(ac);
++	artpec6_crypto_process_queue(ac, &complete_in_progress);
+ 
+ 	spin_unlock_bh(&ac->queue_lock);
++
++	/* Perform the completion callbacks without holding the queue lock
++	 * to allow new request submissions from the callbacks.
++	 */
++	list_for_each_entry_safe(req, n, &complete_done, list) {
++		req->complete(req->req);
++	}
++
++	list_for_each_entry_safe(req, n, &complete_in_progress,
++				 complete_in_progress) {
++		req->req->complete(req->req, -EINPROGRESS);
++	}
+ }
+ 
+ static void artpec6_crypto_complete_crypto(struct crypto_async_request *req)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+index 3a9b48b227ac..a7208ca0bfe3 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+@@ -546,7 +546,7 @@ static int psp_load_fw(struct amdgpu_device *adev)
+ 	struct psp_context *psp = &adev->psp;
+ 
+ 	if (amdgpu_sriov_vf(adev) && adev->in_gpu_reset) {
+-		psp_ring_destroy(psp, PSP_RING_TYPE__KM);
++		psp_ring_stop(psp, PSP_RING_TYPE__KM); /* should not destroy ring, only stop */
+ 		goto skip_memalloc;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_cik.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_cik.c
+index 47243165a082..ae90a99909ef 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_cik.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_cik.c
+@@ -323,57 +323,7 @@ static int init_mqd_hiq(struct mqd_manager *mm, void **mqd,
+ 		struct kfd_mem_obj **mqd_mem_obj, uint64_t *gart_addr,
+ 		struct queue_properties *q)
+ {
+-	uint64_t addr;
+-	struct cik_mqd *m;
+-	int retval;
+-
+-	retval = kfd_gtt_sa_allocate(mm->dev, sizeof(struct cik_mqd),
+-					mqd_mem_obj);
+-
+-	if (retval != 0)
+-		return -ENOMEM;
+-
+-	m = (struct cik_mqd *) (*mqd_mem_obj)->cpu_ptr;
+-	addr = (*mqd_mem_obj)->gpu_addr;
+-
+-	memset(m, 0, ALIGN(sizeof(struct cik_mqd), 256));
+-
+-	m->header = 0xC0310800;
+-	m->compute_pipelinestat_enable = 1;
+-	m->compute_static_thread_mgmt_se0 = 0xFFFFFFFF;
+-	m->compute_static_thread_mgmt_se1 = 0xFFFFFFFF;
+-	m->compute_static_thread_mgmt_se2 = 0xFFFFFFFF;
+-	m->compute_static_thread_mgmt_se3 = 0xFFFFFFFF;
+-
+-	m->cp_hqd_persistent_state = DEFAULT_CP_HQD_PERSISTENT_STATE |
+-					PRELOAD_REQ;
+-	m->cp_hqd_quantum = QUANTUM_EN | QUANTUM_SCALE_1MS |
+-				QUANTUM_DURATION(10);
+-
+-	m->cp_mqd_control             = MQD_CONTROL_PRIV_STATE_EN;
+-	m->cp_mqd_base_addr_lo        = lower_32_bits(addr);
+-	m->cp_mqd_base_addr_hi        = upper_32_bits(addr);
+-
+-	m->cp_hqd_ib_control = DEFAULT_MIN_IB_AVAIL_SIZE;
+-
+-	/*
+-	 * Pipe Priority
+-	 * Identifies the pipe relative priority when this queue is connected
+-	 * to the pipeline. The pipe priority is against the GFX pipe and HP3D.
+-	 * In KFD we are using a fixed pipe priority set to CS_MEDIUM.
+-	 * 0 = CS_LOW (typically below GFX)
+-	 * 1 = CS_MEDIUM (typically between HP3D and GFX
+-	 * 2 = CS_HIGH (typically above HP3D)
+-	 */
+-	m->cp_hqd_pipe_priority = 1;
+-	m->cp_hqd_queue_priority = 15;
+-
+-	*mqd = m;
+-	if (gart_addr)
+-		*gart_addr = addr;
+-	retval = mm->update_mqd(mm, m, q);
+-
+-	return retval;
++	return init_mqd(mm, mqd, mqd_mem_obj, gart_addr, q);
+ }
+ 
+ static int update_mqd_hiq(struct mqd_manager *mm, void *mqd,
+diff --git a/drivers/gpu/drm/exynos/exynos_mixer.c b/drivers/gpu/drm/exynos/exynos_mixer.c
+index 0573eab0e190..f35e4ab55b27 100644
+--- a/drivers/gpu/drm/exynos/exynos_mixer.c
++++ b/drivers/gpu/drm/exynos/exynos_mixer.c
+@@ -20,6 +20,7 @@
+ #include "regs-vp.h"
+ 
+ #include <linux/kernel.h>
++#include <linux/ktime.h>
+ #include <linux/spinlock.h>
+ #include <linux/wait.h>
+ #include <linux/i2c.h>
+@@ -352,15 +353,62 @@ static void mixer_cfg_vp_blend(struct mixer_context *ctx, unsigned int alpha)
+ 	mixer_reg_write(ctx, MXR_VIDEO_CFG, val);
+ }
+ 
+-static void mixer_vsync_set_update(struct mixer_context *ctx, bool enable)
++static bool mixer_is_synced(struct mixer_context *ctx)
+ {
+-	/* block update on vsync */
+-	mixer_reg_writemask(ctx, MXR_STATUS, enable ?
+-			MXR_STATUS_SYNC_ENABLE : 0, MXR_STATUS_SYNC_ENABLE);
++	u32 base, shadow;
+ 
++	if (ctx->mxr_ver == MXR_VER_16_0_33_0 ||
++	    ctx->mxr_ver == MXR_VER_128_0_0_184)
++		return !(mixer_reg_read(ctx, MXR_CFG) &
++			 MXR_CFG_LAYER_UPDATE_COUNT_MASK);
++
++	if (test_bit(MXR_BIT_VP_ENABLED, &ctx->flags) &&
++	    vp_reg_read(ctx, VP_SHADOW_UPDATE))
++		return false;
++
++	base = mixer_reg_read(ctx, MXR_CFG);
++	shadow = mixer_reg_read(ctx, MXR_CFG_S);
++	if (base != shadow)
++		return false;
++
++	base = mixer_reg_read(ctx, MXR_GRAPHIC_BASE(0));
++	shadow = mixer_reg_read(ctx, MXR_GRAPHIC_BASE_S(0));
++	if (base != shadow)
++		return false;
++
++	base = mixer_reg_read(ctx, MXR_GRAPHIC_BASE(1));
++	shadow = mixer_reg_read(ctx, MXR_GRAPHIC_BASE_S(1));
++	if (base != shadow)
++		return false;
++
++	return true;
++}
++
++static int mixer_wait_for_sync(struct mixer_context *ctx)
++{
++	ktime_t timeout = ktime_add_us(ktime_get(), 100000);
++
++	while (!mixer_is_synced(ctx)) {
++		usleep_range(1000, 2000);
++		if (ktime_compare(ktime_get(), timeout) > 0)
++			return -ETIMEDOUT;
++	}
++	return 0;
++}
++
++static void mixer_disable_sync(struct mixer_context *ctx)
++{
++	mixer_reg_writemask(ctx, MXR_STATUS, 0, MXR_STATUS_SYNC_ENABLE);
++}
++
++static void mixer_enable_sync(struct mixer_context *ctx)
++{
++	if (ctx->mxr_ver == MXR_VER_16_0_33_0 ||
++	    ctx->mxr_ver == MXR_VER_128_0_0_184)
++		mixer_reg_writemask(ctx, MXR_CFG, ~0, MXR_CFG_LAYER_UPDATE);
++	mixer_reg_writemask(ctx, MXR_STATUS, ~0, MXR_STATUS_SYNC_ENABLE);
+ 	if (test_bit(MXR_BIT_VP_ENABLED, &ctx->flags))
+-		vp_reg_write(ctx, VP_SHADOW_UPDATE, enable ?
+-			VP_SHADOW_UPDATE_ENABLE : 0);
++		vp_reg_write(ctx, VP_SHADOW_UPDATE, VP_SHADOW_UPDATE_ENABLE);
+ }
+ 
+ static void mixer_cfg_scan(struct mixer_context *ctx, int width, int height)
+@@ -498,7 +546,6 @@ static void vp_video_buffer(struct mixer_context *ctx,
+ 
+ 	spin_lock_irqsave(&ctx->reg_slock, flags);
+ 
+-	vp_reg_write(ctx, VP_SHADOW_UPDATE, 1);
+ 	/* interlace or progressive scan mode */
+ 	val = (test_bit(MXR_BIT_INTERLACE, &ctx->flags) ? ~0 : 0);
+ 	vp_reg_writemask(ctx, VP_MODE, val, VP_MODE_LINE_SKIP);
+@@ -553,11 +600,6 @@ static void vp_video_buffer(struct mixer_context *ctx,
+ 	vp_regs_dump(ctx);
+ }
+ 
+-static void mixer_layer_update(struct mixer_context *ctx)
+-{
+-	mixer_reg_writemask(ctx, MXR_CFG, ~0, MXR_CFG_LAYER_UPDATE);
+-}
+-
+ static void mixer_graph_buffer(struct mixer_context *ctx,
+ 			       struct exynos_drm_plane *plane)
+ {
+@@ -640,11 +682,6 @@ static void mixer_graph_buffer(struct mixer_context *ctx,
+ 	mixer_cfg_layer(ctx, win, priority, true);
+ 	mixer_cfg_gfx_blend(ctx, win, pixel_alpha, state->base.alpha);
+ 
+-	/* layer update mandatory for mixer 16.0.33.0 */
+-	if (ctx->mxr_ver == MXR_VER_16_0_33_0 ||
+-		ctx->mxr_ver == MXR_VER_128_0_0_184)
+-		mixer_layer_update(ctx);
+-
+ 	spin_unlock_irqrestore(&ctx->reg_slock, flags);
+ 
+ 	mixer_regs_dump(ctx);
+@@ -709,7 +746,7 @@ static void mixer_win_reset(struct mixer_context *ctx)
+ static irqreturn_t mixer_irq_handler(int irq, void *arg)
+ {
+ 	struct mixer_context *ctx = arg;
+-	u32 val, base, shadow;
++	u32 val;
+ 
+ 	spin_lock(&ctx->reg_slock);
+ 
+@@ -723,26 +760,9 @@ static irqreturn_t mixer_irq_handler(int irq, void *arg)
+ 		val &= ~MXR_INT_STATUS_VSYNC;
+ 
+ 		/* interlace scan need to check shadow register */
+-		if (test_bit(MXR_BIT_INTERLACE, &ctx->flags)) {
+-			if (test_bit(MXR_BIT_VP_ENABLED, &ctx->flags) &&
+-			    vp_reg_read(ctx, VP_SHADOW_UPDATE))
+-				goto out;
+-
+-			base = mixer_reg_read(ctx, MXR_CFG);
+-			shadow = mixer_reg_read(ctx, MXR_CFG_S);
+-			if (base != shadow)
+-				goto out;
+-
+-			base = mixer_reg_read(ctx, MXR_GRAPHIC_BASE(0));
+-			shadow = mixer_reg_read(ctx, MXR_GRAPHIC_BASE_S(0));
+-			if (base != shadow)
+-				goto out;
+-
+-			base = mixer_reg_read(ctx, MXR_GRAPHIC_BASE(1));
+-			shadow = mixer_reg_read(ctx, MXR_GRAPHIC_BASE_S(1));
+-			if (base != shadow)
+-				goto out;
+-		}
++		if (test_bit(MXR_BIT_INTERLACE, &ctx->flags)
++		    && !mixer_is_synced(ctx))
++			goto out;
+ 
+ 		drm_crtc_handle_vblank(&ctx->crtc->base);
+ 	}
+@@ -917,12 +937,14 @@ static void mixer_disable_vblank(struct exynos_drm_crtc *crtc)
+ 
+ static void mixer_atomic_begin(struct exynos_drm_crtc *crtc)
+ {
+-	struct mixer_context *mixer_ctx = crtc->ctx;
++	struct mixer_context *ctx = crtc->ctx;
+ 
+-	if (!test_bit(MXR_BIT_POWERED, &mixer_ctx->flags))
++	if (!test_bit(MXR_BIT_POWERED, &ctx->flags))
+ 		return;
+ 
+-	mixer_vsync_set_update(mixer_ctx, false);
++	if (mixer_wait_for_sync(ctx))
++		dev_err(ctx->dev, "timeout waiting for VSYNC\n");
++	mixer_disable_sync(ctx);
+ }
+ 
+ static void mixer_update_plane(struct exynos_drm_crtc *crtc,
+@@ -964,7 +986,7 @@ static void mixer_atomic_flush(struct exynos_drm_crtc *crtc)
+ 	if (!test_bit(MXR_BIT_POWERED, &mixer_ctx->flags))
+ 		return;
+ 
+-	mixer_vsync_set_update(mixer_ctx, true);
++	mixer_enable_sync(mixer_ctx);
+ 	exynos_crtc_handle_event(crtc);
+ }
+ 
+@@ -979,7 +1001,7 @@ static void mixer_enable(struct exynos_drm_crtc *crtc)
+ 
+ 	exynos_drm_pipe_clk_enable(crtc, true);
+ 
+-	mixer_vsync_set_update(ctx, false);
++	mixer_disable_sync(ctx);
+ 
+ 	mixer_reg_writemask(ctx, MXR_STATUS, ~0, MXR_STATUS_SOFT_RESET);
+ 
+@@ -992,7 +1014,7 @@ static void mixer_enable(struct exynos_drm_crtc *crtc)
+ 
+ 	mixer_commit(ctx);
+ 
+-	mixer_vsync_set_update(ctx, true);
++	mixer_enable_sync(ctx);
+ 
+ 	set_bit(MXR_BIT_POWERED, &ctx->flags);
+ }
+diff --git a/drivers/gpu/drm/nouveau/include/nvkm/subdev/volt.h b/drivers/gpu/drm/nouveau/include/nvkm/subdev/volt.h
+index 8a0f85f5fc1a..6a765682fbfa 100644
+--- a/drivers/gpu/drm/nouveau/include/nvkm/subdev/volt.h
++++ b/drivers/gpu/drm/nouveau/include/nvkm/subdev/volt.h
+@@ -38,6 +38,7 @@ int nvkm_volt_set_id(struct nvkm_volt *, u8 id, u8 min_id, u8 temp,
+ 
+ int nv40_volt_new(struct nvkm_device *, int, struct nvkm_volt **);
+ int gf100_volt_new(struct nvkm_device *, int, struct nvkm_volt **);
++int gf117_volt_new(struct nvkm_device *, int, struct nvkm_volt **);
+ int gk104_volt_new(struct nvkm_device *, int, struct nvkm_volt **);
+ int gk20a_volt_new(struct nvkm_device *, int, struct nvkm_volt **);
+ int gm20b_volt_new(struct nvkm_device *, int, struct nvkm_volt **);
+diff --git a/drivers/gpu/drm/nouveau/nouveau_debugfs.c b/drivers/gpu/drm/nouveau/nouveau_debugfs.c
+index 88a52f6b39fe..7dfbbbc1beea 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_debugfs.c
++++ b/drivers/gpu/drm/nouveau/nouveau_debugfs.c
+@@ -181,7 +181,7 @@ nouveau_debugfs_pstate_set(struct file *file, const char __user *ubuf,
+ 	}
+ 
+ 	ret = pm_runtime_get_sync(drm->dev);
+-	if (IS_ERR_VALUE(ret) && ret != -EACCES)
++	if (ret < 0 && ret != -EACCES)
+ 		return ret;
+ 	ret = nvif_mthd(ctrl, NVIF_CONTROL_PSTATE_USER, &args, sizeof(args));
+ 	pm_runtime_put_autosuspend(drm->dev);
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c b/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
+index d9edb5785813..d75fa7678483 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
+@@ -1613,7 +1613,7 @@ nvd7_chipset = {
+ 	.pci = gf106_pci_new,
+ 	.therm = gf119_therm_new,
+ 	.timer = nv41_timer_new,
+-	.volt = gf100_volt_new,
++	.volt = gf117_volt_new,
+ 	.ce[0] = gf100_ce_new,
+ 	.disp = gf119_disp_new,
+ 	.dma = gf119_dma_new,
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/volt/Kbuild b/drivers/gpu/drm/nouveau/nvkm/subdev/volt/Kbuild
+index bcd179ba11d0..146adcdd316a 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/volt/Kbuild
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/volt/Kbuild
+@@ -2,6 +2,7 @@ nvkm-y += nvkm/subdev/volt/base.o
+ nvkm-y += nvkm/subdev/volt/gpio.o
+ nvkm-y += nvkm/subdev/volt/nv40.o
+ nvkm-y += nvkm/subdev/volt/gf100.o
++nvkm-y += nvkm/subdev/volt/gf117.o
+ nvkm-y += nvkm/subdev/volt/gk104.o
+ nvkm-y += nvkm/subdev/volt/gk20a.o
+ nvkm-y += nvkm/subdev/volt/gm20b.o
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/volt/gf117.c b/drivers/gpu/drm/nouveau/nvkm/subdev/volt/gf117.c
+new file mode 100644
+index 000000000000..547a58f0aeac
+--- /dev/null
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/volt/gf117.c
+@@ -0,0 +1,60 @@
++/*
++ * Copyright 2019 Ilia Mirkin
++ *
++ * Permission is hereby granted, free of charge, to any person obtaining a
++ * copy of this software and associated documentation files (the "Software"),
++ * to deal in the Software without restriction, including without limitation
++ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
++ * and/or sell copies of the Software, and to permit persons to whom the
++ * Software is furnished to do so, subject to the following conditions:
++ *
++ * The above copyright notice and this permission notice shall be included in
++ * all copies or substantial portions of the Software.
++ *
++ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
++ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
++ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
++ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
++ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
++ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
++ * OTHER DEALINGS IN THE SOFTWARE.
++ *
++ * Authors: Ilia Mirkin
++ */
++#include "priv.h"
++
++#include <subdev/fuse.h>
++
++static int
++gf117_volt_speedo_read(struct nvkm_volt *volt)
++{
++	struct nvkm_device *device = volt->subdev.device;
++	struct nvkm_fuse *fuse = device->fuse;
++
++	if (!fuse)
++		return -EINVAL;
++
++	return nvkm_fuse_read(fuse, 0x3a8);
++}
++
++static const struct nvkm_volt_func
++gf117_volt = {
++	.oneinit = gf100_volt_oneinit,
++	.vid_get = nvkm_voltgpio_get,
++	.vid_set = nvkm_voltgpio_set,
++	.speedo_read = gf117_volt_speedo_read,
++};
++
++int
++gf117_volt_new(struct nvkm_device *device, int index, struct nvkm_volt **pvolt)
++{
++	struct nvkm_volt *volt;
++	int ret;
++
++	ret = nvkm_volt_new_(&gf117_volt, device, index, &volt);
++	*pvolt = volt;
++	if (ret)
++		return ret;
++
++	return nvkm_voltgpio_init(volt);
++}
+diff --git a/drivers/gpu/drm/panel/panel-innolux-p079zca.c b/drivers/gpu/drm/panel/panel-innolux-p079zca.c
+index ca4ae45dd307..8e5724b63f1f 100644
+--- a/drivers/gpu/drm/panel/panel-innolux-p079zca.c
++++ b/drivers/gpu/drm/panel/panel-innolux-p079zca.c
+@@ -70,18 +70,12 @@ static inline struct innolux_panel *to_innolux_panel(struct drm_panel *panel)
+ static int innolux_panel_disable(struct drm_panel *panel)
+ {
+ 	struct innolux_panel *innolux = to_innolux_panel(panel);
+-	int err;
+ 
+ 	if (!innolux->enabled)
+ 		return 0;
+ 
+ 	backlight_disable(innolux->backlight);
+ 
+-	err = mipi_dsi_dcs_set_display_off(innolux->link);
+-	if (err < 0)
+-		DRM_DEV_ERROR(panel->dev, "failed to set display off: %d\n",
+-			      err);
+-
+ 	innolux->enabled = false;
+ 
+ 	return 0;
+@@ -95,6 +89,11 @@ static int innolux_panel_unprepare(struct drm_panel *panel)
+ 	if (!innolux->prepared)
+ 		return 0;
+ 
++	err = mipi_dsi_dcs_set_display_off(innolux->link);
++	if (err < 0)
++		DRM_DEV_ERROR(panel->dev, "failed to set display off: %d\n",
++			      err);
++
+ 	err = mipi_dsi_dcs_enter_sleep_mode(innolux->link);
+ 	if (err < 0) {
+ 		DRM_DEV_ERROR(panel->dev, "failed to enter sleep mode: %d\n",
+diff --git a/drivers/gpu/drm/udl/udl_gem.c b/drivers/gpu/drm/udl/udl_gem.c
+index d5a23295dd80..bb7b58407039 100644
+--- a/drivers/gpu/drm/udl/udl_gem.c
++++ b/drivers/gpu/drm/udl/udl_gem.c
+@@ -224,7 +224,7 @@ int udl_gem_mmap(struct drm_file *file, struct drm_device *dev,
+ 	*offset = drm_vma_node_offset_addr(&gobj->base.vma_node);
+ 
+ out:
+-	drm_gem_object_put(&gobj->base);
++	drm_gem_object_put_unlocked(&gobj->base);
+ unlock:
+ 	mutex_unlock(&udl->gem_lock);
+ 	return ret;
+diff --git a/drivers/hwtracing/coresight/coresight-cpu-debug.c b/drivers/hwtracing/coresight/coresight-cpu-debug.c
+index 45b2460f3166..e8819d750938 100644
+--- a/drivers/hwtracing/coresight/coresight-cpu-debug.c
++++ b/drivers/hwtracing/coresight/coresight-cpu-debug.c
+@@ -668,6 +668,10 @@ static const struct amba_id debug_ids[] = {
+ 		.id	= 0x000bbd08,
+ 		.mask	= 0x000fffff,
+ 	},
++	{       /* Debug for Cortex-A73 */
++		.id	= 0x000bbd09,
++		.mask	= 0x000fffff,
++	},
+ 	{ 0, 0 },
+ };
+ 
+diff --git a/drivers/infiniband/hw/hfi1/qp.c b/drivers/infiniband/hw/hfi1/qp.c
+index 5344e8993b28..5866f358ea04 100644
+--- a/drivers/infiniband/hw/hfi1/qp.c
++++ b/drivers/infiniband/hw/hfi1/qp.c
+@@ -833,7 +833,7 @@ void notify_error_qp(struct rvt_qp *qp)
+ 		write_seqlock(lock);
+ 		if (!list_empty(&priv->s_iowait.list) &&
+ 		    !(qp->s_flags & RVT_S_BUSY)) {
+-			qp->s_flags &= ~RVT_S_ANY_WAIT_IO;
++			qp->s_flags &= ~HFI1_S_ANY_WAIT_IO;
+ 			list_del_init(&priv->s_iowait.list);
+ 			priv->s_iowait.lock = NULL;
+ 			rvt_put_qp(qp);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
+index 509e467843f6..f4cac63194d9 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_device.h
++++ b/drivers/infiniband/hw/hns/hns_roce_device.h
+@@ -216,6 +216,26 @@ enum {
+ 	HNS_ROCE_DB_PER_PAGE = PAGE_SIZE / 4
+ };
+ 
++enum hns_roce_reset_stage {
++	HNS_ROCE_STATE_NON_RST,
++	HNS_ROCE_STATE_RST_BEF_DOWN,
++	HNS_ROCE_STATE_RST_DOWN,
++	HNS_ROCE_STATE_RST_UNINIT,
++	HNS_ROCE_STATE_RST_INIT,
++	HNS_ROCE_STATE_RST_INITED,
++};
++
++enum hns_roce_instance_state {
++	HNS_ROCE_STATE_NON_INIT,
++	HNS_ROCE_STATE_INIT,
++	HNS_ROCE_STATE_INITED,
++	HNS_ROCE_STATE_UNINIT,
++};
++
++enum {
++	HNS_ROCE_RST_DIRECT_RETURN		= 0,
++};
++
+ #define HNS_ROCE_CMD_SUCCESS			1
+ 
+ #define HNS_ROCE_PORT_DOWN			0
+@@ -898,6 +918,7 @@ struct hns_roce_dev {
+ 	spinlock_t		bt_cmd_lock;
+ 	bool			active;
+ 	bool			is_reset;
++	unsigned long		reset_cnt;
+ 	struct hns_roce_ib_iboe iboe;
+ 
+ 	struct list_head        pgdir_list;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index 543fa1504cd3..7ac06576d791 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -5800,6 +5800,7 @@ MODULE_DEVICE_TABLE(pci, hns_roce_hw_v2_pci_tbl);
+ static int hns_roce_hw_v2_get_cfg(struct hns_roce_dev *hr_dev,
+ 				  struct hnae3_handle *handle)
+ {
++	struct hns_roce_v2_priv *priv = hr_dev->priv;
+ 	const struct pci_device_id *id;
+ 	int i;
+ 
+@@ -5830,10 +5831,13 @@ static int hns_roce_hw_v2_get_cfg(struct hns_roce_dev *hr_dev,
+ 	hr_dev->cmd_mod = 1;
+ 	hr_dev->loop_idc = 0;
+ 
++	hr_dev->reset_cnt = handle->ae_algo->ops->ae_dev_reset_cnt(handle);
++	priv->handle = handle;
++
+ 	return 0;
+ }
+ 
+-static int hns_roce_hw_v2_init_instance(struct hnae3_handle *handle)
++static int __hns_roce_hw_v2_init_instance(struct hnae3_handle *handle)
+ {
+ 	struct hns_roce_dev *hr_dev;
+ 	int ret;
+@@ -5850,7 +5854,6 @@ static int hns_roce_hw_v2_init_instance(struct hnae3_handle *handle)
+ 
+ 	hr_dev->pci_dev = handle->pdev;
+ 	hr_dev->dev = &handle->pdev->dev;
+-	handle->priv = hr_dev;
+ 
+ 	ret = hns_roce_hw_v2_get_cfg(hr_dev, handle);
+ 	if (ret) {
+@@ -5864,6 +5867,8 @@ static int hns_roce_hw_v2_init_instance(struct hnae3_handle *handle)
+ 		goto error_failed_get_cfg;
+ 	}
+ 
++	handle->priv = hr_dev;
++
+ 	return 0;
+ 
+ error_failed_get_cfg:
+@@ -5875,7 +5880,7 @@ error_failed_kzalloc:
+ 	return ret;
+ }
+ 
+-static void hns_roce_hw_v2_uninit_instance(struct hnae3_handle *handle,
++static void __hns_roce_hw_v2_uninit_instance(struct hnae3_handle *handle,
+ 					   bool reset)
+ {
+ 	struct hns_roce_dev *hr_dev = (struct hns_roce_dev *)handle->priv;
+@@ -5883,24 +5888,78 @@ static void hns_roce_hw_v2_uninit_instance(struct hnae3_handle *handle,
+ 	if (!hr_dev)
+ 		return;
+ 
++	handle->priv = NULL;
+ 	hns_roce_exit(hr_dev);
+ 	kfree(hr_dev->priv);
+ 	ib_dealloc_device(&hr_dev->ib_dev);
+ }
+ 
++static int hns_roce_hw_v2_init_instance(struct hnae3_handle *handle)
++{
++	const struct hnae3_ae_ops *ops = handle->ae_algo->ops;
++	struct device *dev = &handle->pdev->dev;
++	int ret;
++
++	handle->rinfo.instance_state = HNS_ROCE_STATE_INIT;
++
++	if (ops->ae_dev_resetting(handle) || ops->get_hw_reset_stat(handle)) {
++		handle->rinfo.instance_state = HNS_ROCE_STATE_NON_INIT;
++		goto reset_chk_err;
++	}
++
++	ret = __hns_roce_hw_v2_init_instance(handle);
++	if (ret) {
++		handle->rinfo.instance_state = HNS_ROCE_STATE_NON_INIT;
++		dev_err(dev, "RoCE instance init failed! ret = %d\n", ret);
++		if (ops->ae_dev_resetting(handle) ||
++		    ops->get_hw_reset_stat(handle))
++			goto reset_chk_err;
++		else
++			return ret;
++	}
++
++	handle->rinfo.instance_state = HNS_ROCE_STATE_INITED;
++
++
++	return 0;
++
++reset_chk_err:
++	dev_err(dev, "Device is busy in resetting state.\n"
++		     "please retry later.\n");
++
++	return -EBUSY;
++}
++
++static void hns_roce_hw_v2_uninit_instance(struct hnae3_handle *handle,
++					   bool reset)
++{
++	if (handle->rinfo.instance_state != HNS_ROCE_STATE_INITED)
++		return;
++
++	handle->rinfo.instance_state = HNS_ROCE_STATE_UNINIT;
++
++	__hns_roce_hw_v2_uninit_instance(handle, reset);
++
++	handle->rinfo.instance_state = HNS_ROCE_STATE_NON_INIT;
++}
+ static int hns_roce_hw_v2_reset_notify_down(struct hnae3_handle *handle)
+ {
+-	struct hns_roce_dev *hr_dev = (struct hns_roce_dev *)handle->priv;
++	struct hns_roce_dev *hr_dev;
+ 	struct ib_event event;
+ 
+-	if (!hr_dev) {
+-		dev_err(&handle->pdev->dev,
+-			"Input parameter handle->priv is NULL!\n");
+-		return -EINVAL;
++	if (handle->rinfo.instance_state != HNS_ROCE_STATE_INITED) {
++		set_bit(HNS_ROCE_RST_DIRECT_RETURN, &handle->rinfo.state);
++		return 0;
+ 	}
+ 
++	handle->rinfo.reset_state = HNS_ROCE_STATE_RST_DOWN;
++	clear_bit(HNS_ROCE_RST_DIRECT_RETURN, &handle->rinfo.state);
++
++	hr_dev = (struct hns_roce_dev *)handle->priv;
++	if (!hr_dev)
++		return 0;
++
+ 	hr_dev->active = false;
+-	hr_dev->is_reset = true;
+ 
+ 	event.event = IB_EVENT_DEVICE_FATAL;
+ 	event.device = &hr_dev->ib_dev;
+@@ -5912,17 +5971,29 @@ static int hns_roce_hw_v2_reset_notify_down(struct hnae3_handle *handle)
+ 
+ static int hns_roce_hw_v2_reset_notify_init(struct hnae3_handle *handle)
+ {
++	struct device *dev = &handle->pdev->dev;
+ 	int ret;
+ 
+-	ret = hns_roce_hw_v2_init_instance(handle);
++	if (test_and_clear_bit(HNS_ROCE_RST_DIRECT_RETURN,
++			       &handle->rinfo.state)) {
++		handle->rinfo.reset_state = HNS_ROCE_STATE_RST_INITED;
++		return 0;
++	}
++
++	handle->rinfo.reset_state = HNS_ROCE_STATE_RST_INIT;
++
++	dev_info(&handle->pdev->dev, "In reset process RoCE client reinit.\n");
++	ret = __hns_roce_hw_v2_init_instance(handle);
+ 	if (ret) {
+ 		/* when reset notify type is HNAE3_INIT_CLIENT In reset notify
+ 		 * callback function, RoCE Engine reinitialize. If RoCE reinit
+ 		 * failed, we should inform NIC driver.
+ 		 */
+ 		handle->priv = NULL;
+-		dev_err(&handle->pdev->dev,
+-			"In reset process RoCE reinit failed %d.\n", ret);
++		dev_err(dev, "In reset process RoCE reinit failed %d.\n", ret);
++	} else {
++		handle->rinfo.reset_state = HNS_ROCE_STATE_RST_INITED;
++		dev_info(dev, "Reset done, RoCE client reinit finished.\n");
+ 	}
+ 
+ 	return ret;
+@@ -5930,8 +6001,14 @@ static int hns_roce_hw_v2_reset_notify_init(struct hnae3_handle *handle)
+ 
+ static int hns_roce_hw_v2_reset_notify_uninit(struct hnae3_handle *handle)
+ {
++	if (test_bit(HNS_ROCE_RST_DIRECT_RETURN, &handle->rinfo.state))
++		return 0;
++
++	handle->rinfo.reset_state = HNS_ROCE_STATE_RST_UNINIT;
++	dev_info(&handle->pdev->dev, "In reset process RoCE client uninit.\n");
+ 	msleep(100);
+-	hns_roce_hw_v2_uninit_instance(handle, false);
++	__hns_roce_hw_v2_uninit_instance(handle, false);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+index b72d0443c835..5398aa718cfc 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+@@ -1546,6 +1546,7 @@ struct hns_roce_link_table_entry {
+ #define HNS_ROCE_LINK_TABLE_NXT_PTR_M GENMASK(31, 20)
+ 
+ struct hns_roce_v2_priv {
++	struct hnae3_handle *handle;
+ 	struct hns_roce_v2_cmq cmq;
+ 	struct hns_roce_link_table tsq;
+ 	struct hns_roce_link_table tpq;
+diff --git a/drivers/infiniband/hw/i40iw/i40iw_utils.c b/drivers/infiniband/hw/i40iw/i40iw_utils.c
+index 59e978141ad4..e99177533930 100644
+--- a/drivers/infiniband/hw/i40iw/i40iw_utils.c
++++ b/drivers/infiniband/hw/i40iw/i40iw_utils.c
+@@ -173,7 +173,12 @@ int i40iw_inetaddr_event(struct notifier_block *notifier,
+ 
+ 		rcu_read_lock();
+ 		in = __in_dev_get_rcu(upper_dev);
+-		local_ipaddr = ntohl(in->ifa_list->ifa_address);
++
++		if (!in->ifa_list)
++			local_ipaddr = 0;
++		else
++			local_ipaddr = ntohl(in->ifa_list->ifa_address);
++
+ 		rcu_read_unlock();
+ 	} else {
+ 		local_ipaddr = ntohl(ifa->ifa_address);
+@@ -185,6 +190,11 @@ int i40iw_inetaddr_event(struct notifier_block *notifier,
+ 	case NETDEV_UP:
+ 		/* Fall through */
+ 	case NETDEV_CHANGEADDR:
++
++		/* Just skip if no need to handle ARP cache */
++		if (!local_ipaddr)
++			break;
++
+ 		i40iw_manage_arp_cache(iwdev,
+ 				       netdev->dev_addr,
+ 				       &local_ipaddr,
+diff --git a/drivers/infiniband/hw/mlx4/alias_GUID.c b/drivers/infiniband/hw/mlx4/alias_GUID.c
+index 782499abcd98..2a0b59a4b6eb 100644
+--- a/drivers/infiniband/hw/mlx4/alias_GUID.c
++++ b/drivers/infiniband/hw/mlx4/alias_GUID.c
+@@ -804,8 +804,8 @@ void mlx4_ib_destroy_alias_guid_service(struct mlx4_ib_dev *dev)
+ 	unsigned long flags;
+ 
+ 	for (i = 0 ; i < dev->num_ports; i++) {
+-		cancel_delayed_work(&dev->sriov.alias_guid.ports_guid[i].alias_guid_work);
+ 		det = &sriov->alias_guid.ports_guid[i];
++		cancel_delayed_work_sync(&det->alias_guid_work);
+ 		spin_lock_irqsave(&sriov->alias_guid.ag_work_lock, flags);
+ 		while (!list_empty(&det->cb_list)) {
+ 			cb_ctx = list_entry(det->cb_list.next,
+diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
+index dbd6824dfffa..53b1fbadc496 100644
+--- a/drivers/iommu/intel-iommu.c
++++ b/drivers/iommu/intel-iommu.c
+@@ -1534,6 +1534,9 @@ static void iommu_disable_protect_mem_regions(struct intel_iommu *iommu)
+ 	u32 pmen;
+ 	unsigned long flags;
+ 
++	if (!cap_plmr(iommu->cap) && !cap_phmr(iommu->cap))
++		return;
++
+ 	raw_spin_lock_irqsave(&iommu->register_lock, flags);
+ 	pmen = readl(iommu->reg + DMAR_PMEN_REG);
+ 	pmen &= ~DMA_PMEN_EPM;
+@@ -5328,7 +5331,7 @@ int intel_iommu_enable_pasid(struct intel_iommu *iommu, struct intel_svm_dev *sd
+ 
+ 	ctx_lo = context[0].lo;
+ 
+-	sdev->did = domain->iommu_did[iommu->seq_id];
++	sdev->did = FLPT_DEFAULT_DID;
+ 	sdev->sid = PCI_DEVID(info->bus, info->devfn);
+ 
+ 	if (!(ctx_lo & CONTEXT_PASIDE)) {
+diff --git a/drivers/irqchip/irq-mbigen.c b/drivers/irqchip/irq-mbigen.c
+index 567b29c47608..98b6e1d4b1a6 100644
+--- a/drivers/irqchip/irq-mbigen.c
++++ b/drivers/irqchip/irq-mbigen.c
+@@ -161,6 +161,9 @@ static void mbigen_write_msg(struct msi_desc *desc, struct msi_msg *msg)
+ 	void __iomem *base = d->chip_data;
+ 	u32 val;
+ 
++	if (!msg->address_lo && !msg->address_hi)
++		return;
++
+ 	base += get_mbigen_vec_reg(d->hwirq);
+ 	val = readl_relaxed(base);
+ 
+diff --git a/drivers/irqchip/irq-stm32-exti.c b/drivers/irqchip/irq-stm32-exti.c
+index a93296b9b45d..7bd1d4cb2e19 100644
+--- a/drivers/irqchip/irq-stm32-exti.c
++++ b/drivers/irqchip/irq-stm32-exti.c
+@@ -716,7 +716,6 @@ stm32_exti_chip_data *stm32_exti_chip_init(struct stm32_exti_host_data *h_data,
+ 	const struct stm32_exti_bank *stm32_bank;
+ 	struct stm32_exti_chip_data *chip_data;
+ 	void __iomem *base = h_data->base;
+-	u32 irqs_mask;
+ 
+ 	stm32_bank = h_data->drv_data->exti_banks[bank_idx];
+ 	chip_data = &h_data->chips_data[bank_idx];
+@@ -725,21 +724,12 @@ stm32_exti_chip_data *stm32_exti_chip_init(struct stm32_exti_host_data *h_data,
+ 
+ 	raw_spin_lock_init(&chip_data->rlock);
+ 
+-	/* Determine number of irqs supported */
+-	writel_relaxed(~0UL, base + stm32_bank->rtsr_ofst);
+-	irqs_mask = readl_relaxed(base + stm32_bank->rtsr_ofst);
+-
+ 	/*
+ 	 * This IP has no reset, so after hot reboot we should
+ 	 * clear registers to avoid residue
+ 	 */
+ 	writel_relaxed(0, base + stm32_bank->imr_ofst);
+ 	writel_relaxed(0, base + stm32_bank->emr_ofst);
+-	writel_relaxed(0, base + stm32_bank->rtsr_ofst);
+-	writel_relaxed(0, base + stm32_bank->ftsr_ofst);
+-	writel_relaxed(~0UL, base + stm32_bank->rpr_ofst);
+-	if (stm32_bank->fpr_ofst != UNDEF_REG)
+-		writel_relaxed(~0UL, base + stm32_bank->fpr_ofst);
+ 
+ 	pr_info("%pOF: bank%d\n", h_data->node, bank_idx);
+ 
+diff --git a/drivers/misc/lkdtm/core.c b/drivers/misc/lkdtm/core.c
+index 2837dc77478e..f0f9eb30bd2b 100644
+--- a/drivers/misc/lkdtm/core.c
++++ b/drivers/misc/lkdtm/core.c
+@@ -152,7 +152,9 @@ static const struct crashtype crashtypes[] = {
+ 	CRASHTYPE(EXEC_VMALLOC),
+ 	CRASHTYPE(EXEC_RODATA),
+ 	CRASHTYPE(EXEC_USERSPACE),
++	CRASHTYPE(EXEC_NULL),
+ 	CRASHTYPE(ACCESS_USERSPACE),
++	CRASHTYPE(ACCESS_NULL),
+ 	CRASHTYPE(WRITE_RO),
+ 	CRASHTYPE(WRITE_RO_AFTER_INIT),
+ 	CRASHTYPE(WRITE_KERN),
+diff --git a/drivers/misc/lkdtm/lkdtm.h b/drivers/misc/lkdtm/lkdtm.h
+index 3c6fd327e166..b69ee004a3f7 100644
+--- a/drivers/misc/lkdtm/lkdtm.h
++++ b/drivers/misc/lkdtm/lkdtm.h
+@@ -45,7 +45,9 @@ void lkdtm_EXEC_KMALLOC(void);
+ void lkdtm_EXEC_VMALLOC(void);
+ void lkdtm_EXEC_RODATA(void);
+ void lkdtm_EXEC_USERSPACE(void);
++void lkdtm_EXEC_NULL(void);
+ void lkdtm_ACCESS_USERSPACE(void);
++void lkdtm_ACCESS_NULL(void);
+ 
+ /* lkdtm_refcount.c */
+ void lkdtm_REFCOUNT_INC_OVERFLOW(void);
+diff --git a/drivers/misc/lkdtm/perms.c b/drivers/misc/lkdtm/perms.c
+index 53b85c9d16b8..62f76d506f04 100644
+--- a/drivers/misc/lkdtm/perms.c
++++ b/drivers/misc/lkdtm/perms.c
+@@ -47,7 +47,7 @@ static noinline void execute_location(void *dst, bool write)
+ {
+ 	void (*func)(void) = dst;
+ 
+-	pr_info("attempting ok execution at %p\n", do_nothing);
++	pr_info("attempting ok execution at %px\n", do_nothing);
+ 	do_nothing();
+ 
+ 	if (write == CODE_WRITE) {
+@@ -55,7 +55,7 @@ static noinline void execute_location(void *dst, bool write)
+ 		flush_icache_range((unsigned long)dst,
+ 				   (unsigned long)dst + EXEC_SIZE);
+ 	}
+-	pr_info("attempting bad execution at %p\n", func);
++	pr_info("attempting bad execution at %px\n", func);
+ 	func();
+ }
+ 
+@@ -66,14 +66,14 @@ static void execute_user_location(void *dst)
+ 	/* Intentionally crossing kernel/user memory boundary. */
+ 	void (*func)(void) = dst;
+ 
+-	pr_info("attempting ok execution at %p\n", do_nothing);
++	pr_info("attempting ok execution at %px\n", do_nothing);
+ 	do_nothing();
+ 
+ 	copied = access_process_vm(current, (unsigned long)dst, do_nothing,
+ 				   EXEC_SIZE, FOLL_WRITE);
+ 	if (copied < EXEC_SIZE)
+ 		return;
+-	pr_info("attempting bad execution at %p\n", func);
++	pr_info("attempting bad execution at %px\n", func);
+ 	func();
+ }
+ 
+@@ -82,7 +82,7 @@ void lkdtm_WRITE_RO(void)
+ 	/* Explicitly cast away "const" for the test. */
+ 	unsigned long *ptr = (unsigned long *)&rodata;
+ 
+-	pr_info("attempting bad rodata write at %p\n", ptr);
++	pr_info("attempting bad rodata write at %px\n", ptr);
+ 	*ptr ^= 0xabcd1234;
+ }
+ 
+@@ -100,7 +100,7 @@ void lkdtm_WRITE_RO_AFTER_INIT(void)
+ 		return;
+ 	}
+ 
+-	pr_info("attempting bad ro_after_init write at %p\n", ptr);
++	pr_info("attempting bad ro_after_init write at %px\n", ptr);
+ 	*ptr ^= 0xabcd1234;
+ }
+ 
+@@ -112,7 +112,7 @@ void lkdtm_WRITE_KERN(void)
+ 	size = (unsigned long)do_overwritten - (unsigned long)do_nothing;
+ 	ptr = (unsigned char *)do_overwritten;
+ 
+-	pr_info("attempting bad %zu byte write at %p\n", size, ptr);
++	pr_info("attempting bad %zu byte write at %px\n", size, ptr);
+ 	memcpy(ptr, (unsigned char *)do_nothing, size);
+ 	flush_icache_range((unsigned long)ptr, (unsigned long)(ptr + size));
+ 
+@@ -164,6 +164,11 @@ void lkdtm_EXEC_USERSPACE(void)
+ 	vm_munmap(user_addr, PAGE_SIZE);
+ }
+ 
++void lkdtm_EXEC_NULL(void)
++{
++	execute_location(NULL, CODE_AS_IS);
++}
++
+ void lkdtm_ACCESS_USERSPACE(void)
+ {
+ 	unsigned long user_addr, tmp = 0;
+@@ -185,16 +190,29 @@ void lkdtm_ACCESS_USERSPACE(void)
+ 
+ 	ptr = (unsigned long *)user_addr;
+ 
+-	pr_info("attempting bad read at %p\n", ptr);
++	pr_info("attempting bad read at %px\n", ptr);
+ 	tmp = *ptr;
+ 	tmp += 0xc0dec0de;
+ 
+-	pr_info("attempting bad write at %p\n", ptr);
++	pr_info("attempting bad write at %px\n", ptr);
+ 	*ptr = tmp;
+ 
+ 	vm_munmap(user_addr, PAGE_SIZE);
+ }
+ 
++void lkdtm_ACCESS_NULL(void)
++{
++	unsigned long tmp;
++	unsigned long *ptr = (unsigned long *)NULL;
++
++	pr_info("attempting bad read at %px\n", ptr);
++	tmp = *ptr;
++	tmp += 0xc0dec0de;
++
++	pr_info("attempting bad write at %px\n", ptr);
++	*ptr = tmp;
++}
++
+ void __init lkdtm_perms_init(void)
+ {
+ 	/* Make sure we can write to __ro_after_init values during __init */
+diff --git a/drivers/mmc/host/davinci_mmc.c b/drivers/mmc/host/davinci_mmc.c
+index 9e68c3645e22..e6f14257a7d0 100644
+--- a/drivers/mmc/host/davinci_mmc.c
++++ b/drivers/mmc/host/davinci_mmc.c
+@@ -1117,7 +1117,7 @@ static inline void mmc_davinci_cpufreq_deregister(struct mmc_davinci_host *host)
+ {
+ }
+ #endif
+-static void __init init_mmcsd_host(struct mmc_davinci_host *host)
++static void init_mmcsd_host(struct mmc_davinci_host *host)
+ {
+ 
+ 	mmc_davinci_reset_ctrl(host, 1);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+index 09c774fe8853..854a55d4332a 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+@@ -463,6 +463,8 @@ struct hnae3_ae_ops {
+ 	int (*set_gro_en)(struct hnae3_handle *handle, int enable);
+ 	u16 (*get_global_queue_id)(struct hnae3_handle *handle, u16 queue_id);
+ 	void (*set_timer_task)(struct hnae3_handle *handle, bool enable);
++	int (*mac_connect_phy)(struct hnae3_handle *handle);
++	void (*mac_disconnect_phy)(struct hnae3_handle *handle);
+ };
+ 
+ struct hnae3_dcb_ops {
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index d84c50068f66..40b69eaf2cb3 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -3519,6 +3519,25 @@ static int hns3_init_mac_addr(struct net_device *netdev, bool init)
+ 	return ret;
+ }
+ 
++static int hns3_init_phy(struct net_device *netdev)
++{
++	struct hnae3_handle *h = hns3_get_handle(netdev);
++	int ret = 0;
++
++	if (h->ae_algo->ops->mac_connect_phy)
++		ret = h->ae_algo->ops->mac_connect_phy(h);
++
++	return ret;
++}
++
++static void hns3_uninit_phy(struct net_device *netdev)
++{
++	struct hnae3_handle *h = hns3_get_handle(netdev);
++
++	if (h->ae_algo->ops->mac_disconnect_phy)
++		h->ae_algo->ops->mac_disconnect_phy(h);
++}
++
+ static int hns3_restore_fd_rules(struct net_device *netdev)
+ {
+ 	struct hnae3_handle *h = hns3_get_handle(netdev);
+@@ -3627,6 +3646,10 @@ static int hns3_client_init(struct hnae3_handle *handle)
+ 		goto out_init_ring_data;
+ 	}
+ 
++	ret = hns3_init_phy(netdev);
++	if (ret)
++		goto out_init_phy;
++
+ 	ret = register_netdev(netdev);
+ 	if (ret) {
+ 		dev_err(priv->dev, "probe register netdev fail!\n");
+@@ -3651,6 +3674,9 @@ static int hns3_client_init(struct hnae3_handle *handle)
+ 	return ret;
+ 
+ out_reg_netdev_fail:
++	hns3_uninit_phy(netdev);
++out_init_phy:
++	hns3_uninit_all_ring(priv);
+ out_init_ring_data:
+ 	(void)hns3_nic_uninit_vector_data(priv);
+ out_init_vector_data:
+@@ -3685,6 +3711,8 @@ static void hns3_client_uninit(struct hnae3_handle *handle, bool reset)
+ 
+ 	hns3_force_clear_all_rx_ring(handle);
+ 
++	hns3_uninit_phy(netdev);
++
+ 	ret = hns3_nic_uninit_vector_data(priv);
+ 	if (ret)
+ 		netdev_err(netdev, "uninit vector error\n");
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index f7637c08bb3a..cb7571747af7 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -6959,16 +6959,6 @@ static void hclge_get_mdix_mode(struct hnae3_handle *handle,
+ 		*tp_mdix = ETH_TP_MDI;
+ }
+ 
+-static int hclge_init_instance_hw(struct hclge_dev *hdev)
+-{
+-	return hclge_mac_connect_phy(hdev);
+-}
+-
+-static void hclge_uninit_instance_hw(struct hclge_dev *hdev)
+-{
+-	hclge_mac_disconnect_phy(hdev);
+-}
+-
+ static int hclge_init_client_instance(struct hnae3_client *client,
+ 				      struct hnae3_ae_dev *ae_dev)
+ {
+@@ -6988,13 +6978,6 @@ static int hclge_init_client_instance(struct hnae3_client *client,
+ 			if (ret)
+ 				goto clear_nic;
+ 
+-			ret = hclge_init_instance_hw(hdev);
+-			if (ret) {
+-			        client->ops->uninit_instance(&vport->nic,
+-			                                     0);
+-				goto clear_nic;
+-			}
+-
+ 			hnae3_set_client_init_flag(client, ae_dev, 1);
+ 
+ 			if (hdev->roce_client &&
+@@ -7079,7 +7062,6 @@ static void hclge_uninit_client_instance(struct hnae3_client *client,
+ 		if (client->type == HNAE3_CLIENT_ROCE)
+ 			return;
+ 		if (hdev->nic_client && client->ops->uninit_instance) {
+-			hclge_uninit_instance_hw(hdev);
+ 			client->ops->uninit_instance(&vport->nic, 0);
+ 			hdev->nic_client = NULL;
+ 			vport->nic.client = NULL;
+@@ -8012,6 +7994,8 @@ static const struct hnae3_ae_ops hclge_ops = {
+ 	.set_gro_en = hclge_gro_en,
+ 	.get_global_queue_id = hclge_covert_handle_qid_global,
+ 	.set_timer_task = hclge_set_timer_task,
++	.mac_connect_phy = hclge_mac_connect_phy,
++	.mac_disconnect_phy = hclge_mac_disconnect_phy,
+ };
+ 
+ static struct hnae3_ae_algo ae_algo = {
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
+index dabb8437f8dc..84f28785ba28 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
+@@ -195,8 +195,10 @@ static void hclge_mac_adjust_link(struct net_device *netdev)
+ 		netdev_err(netdev, "failed to configure flow control.\n");
+ }
+ 
+-int hclge_mac_connect_phy(struct hclge_dev *hdev)
++int hclge_mac_connect_phy(struct hnae3_handle *handle)
+ {
++	struct hclge_vport *vport = hclge_get_vport(handle);
++	struct hclge_dev *hdev = vport->back;
+ 	struct net_device *netdev = hdev->vport[0].nic.netdev;
+ 	struct phy_device *phydev = hdev->hw.mac.phydev;
+ 	__ETHTOOL_DECLARE_LINK_MODE_MASK(mask) = { 0, };
+@@ -229,8 +231,10 @@ int hclge_mac_connect_phy(struct hclge_dev *hdev)
+ 	return 0;
+ }
+ 
+-void hclge_mac_disconnect_phy(struct hclge_dev *hdev)
++void hclge_mac_disconnect_phy(struct hnae3_handle *handle)
+ {
++	struct hclge_vport *vport = hclge_get_vport(handle);
++	struct hclge_dev *hdev = vport->back;
+ 	struct phy_device *phydev = hdev->hw.mac.phydev;
+ 
+ 	if (!phydev)
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.h
+index 5fbf7dddb5d9..ef095d9c566f 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.h
+@@ -5,8 +5,8 @@
+ #define __HCLGE_MDIO_H
+ 
+ int hclge_mac_mdio_config(struct hclge_dev *hdev);
+-int hclge_mac_connect_phy(struct hclge_dev *hdev);
+-void hclge_mac_disconnect_phy(struct hclge_dev *hdev);
++int hclge_mac_connect_phy(struct hnae3_handle *handle);
++void hclge_mac_disconnect_phy(struct hnae3_handle *handle);
+ void hclge_mac_start_phy(struct hclge_dev *hdev);
+ void hclge_mac_stop_phy(struct hclge_dev *hdev);
+ 
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index c25acace7d91..e91005d0f20c 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -1233,7 +1233,6 @@ static void pci_restore_pcie_state(struct pci_dev *dev)
+ 	pcie_capability_write_word(dev, PCI_EXP_SLTCTL2, cap[i++]);
+ }
+ 
+-
+ static int pci_save_pcix_state(struct pci_dev *dev)
+ {
+ 	int pos;
+@@ -1270,6 +1269,45 @@ static void pci_restore_pcix_state(struct pci_dev *dev)
+ 	pci_write_config_word(dev, pos + PCI_X_CMD, cap[i++]);
+ }
+ 
++static void pci_save_ltr_state(struct pci_dev *dev)
++{
++	int ltr;
++	struct pci_cap_saved_state *save_state;
++	u16 *cap;
++
++	if (!pci_is_pcie(dev))
++		return;
++
++	ltr = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_LTR);
++	if (!ltr)
++		return;
++
++	save_state = pci_find_saved_ext_cap(dev, PCI_EXT_CAP_ID_LTR);
++	if (!save_state) {
++		pci_err(dev, "no suspend buffer for LTR; ASPM issues possible after resume\n");
++		return;
++	}
++
++	cap = (u16 *)&save_state->cap.data[0];
++	pci_read_config_word(dev, ltr + PCI_LTR_MAX_SNOOP_LAT, cap++);
++	pci_read_config_word(dev, ltr + PCI_LTR_MAX_NOSNOOP_LAT, cap++);
++}
++
++static void pci_restore_ltr_state(struct pci_dev *dev)
++{
++	struct pci_cap_saved_state *save_state;
++	int ltr;
++	u16 *cap;
++
++	save_state = pci_find_saved_ext_cap(dev, PCI_EXT_CAP_ID_LTR);
++	ltr = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_LTR);
++	if (!save_state || !ltr)
++		return;
++
++	cap = (u16 *)&save_state->cap.data[0];
++	pci_write_config_word(dev, ltr + PCI_LTR_MAX_SNOOP_LAT, *cap++);
++	pci_write_config_word(dev, ltr + PCI_LTR_MAX_NOSNOOP_LAT, *cap++);
++}
+ 
+ /**
+  * pci_save_state - save the PCI configuration space of a device before suspending
+@@ -1291,6 +1329,7 @@ int pci_save_state(struct pci_dev *dev)
+ 	if (i != 0)
+ 		return i;
+ 
++	pci_save_ltr_state(dev);
+ 	pci_save_dpc_state(dev);
+ 	return pci_save_vc_state(dev);
+ }
+@@ -1390,7 +1429,12 @@ void pci_restore_state(struct pci_dev *dev)
+ 	if (!dev->state_saved)
+ 		return;
+ 
+-	/* PCI Express register must be restored first */
++	/*
++	 * Restore max latencies (in the LTR capability) before enabling
++	 * LTR itself (in the PCIe capability).
++	 */
++	pci_restore_ltr_state(dev);
++
+ 	pci_restore_pcie_state(dev);
+ 	pci_restore_pasid_state(dev);
+ 	pci_restore_pri_state(dev);
+@@ -2501,6 +2545,25 @@ void pci_config_pm_runtime_put(struct pci_dev *pdev)
+ 		pm_runtime_put_sync(parent);
+ }
+ 
++static const struct dmi_system_id bridge_d3_blacklist[] = {
++#ifdef CONFIG_X86
++	{
++		/*
++		 * Gigabyte X299 root port is not marked as hotplug capable
++		 * which allows Linux to power manage it.  However, this
++		 * confuses the BIOS SMI handler so don't power manage root
++		 * ports on that system.
++		 */
++		.ident = "X299 DESIGNARE EX-CF",
++		.matches = {
++			DMI_MATCH(DMI_BOARD_VENDOR, "Gigabyte Technology Co., Ltd."),
++			DMI_MATCH(DMI_BOARD_NAME, "X299 DESIGNARE EX-CF"),
++		},
++	},
++#endif
++	{ }
++};
++
+ /**
+  * pci_bridge_d3_possible - Is it possible to put the bridge into D3
+  * @bridge: Bridge to check
+@@ -2546,6 +2609,9 @@ bool pci_bridge_d3_possible(struct pci_dev *bridge)
+ 		if (bridge->is_hotplug_bridge)
+ 			return false;
+ 
++		if (dmi_check_system(bridge_d3_blacklist))
++			return false;
++
+ 		/*
+ 		 * It should be safe to put PCIe ports from 2015 or newer
+ 		 * to D3.
+@@ -2998,6 +3064,11 @@ void pci_allocate_cap_save_buffers(struct pci_dev *dev)
+ 	if (error)
+ 		pci_err(dev, "unable to preallocate PCI-X save buffer\n");
+ 
++	error = pci_add_ext_cap_save_buffer(dev, PCI_EXT_CAP_ID_LTR,
++					    2 * sizeof(u16));
++	if (error)
++		pci_err(dev, "unable to allocate suspend buffer for LTR\n");
++
+ 	pci_allocate_vc_save_buffers(dev);
+ }
+ 
+diff --git a/drivers/platform/x86/intel_pmc_core.c b/drivers/platform/x86/intel_pmc_core.c
+index c37e74ee609d..a9cbe5be277b 100644
+--- a/drivers/platform/x86/intel_pmc_core.c
++++ b/drivers/platform/x86/intel_pmc_core.c
+@@ -15,6 +15,7 @@
+ #include <linux/bitfield.h>
+ #include <linux/debugfs.h>
+ #include <linux/delay.h>
++#include <linux/dmi.h>
+ #include <linux/io.h>
+ #include <linux/module.h>
+ #include <linux/pci.h>
+@@ -139,6 +140,7 @@ static const struct pmc_reg_map spt_reg_map = {
+ 	.pm_cfg_offset = SPT_PMC_PM_CFG_OFFSET,
+ 	.pm_read_disable_bit = SPT_PMC_READ_DISABLE_BIT,
+ 	.ltr_ignore_max = SPT_NUM_IP_IGN_ALLOWED,
++	.pm_vric1_offset = SPT_PMC_VRIC1_OFFSET,
+ };
+ 
+ /* Cannonlake: PGD PFET Enable Ack Status Register(s) bitmap */
+@@ -751,6 +753,37 @@ static const struct pci_device_id pmc_pci_ids[] = {
+ 	{ 0, },
+ };
+ 
++/*
++ * This quirk can be used on those platforms where
++ * the platform BIOS enforces 24MHz Crystal to shut down
++ * before PMC can assert SLP_S0#.
++ */
++int quirk_xtal_ignore(const struct dmi_system_id *id)
++{
++	struct pmc_dev *pmcdev = &pmc;
++	u32 value;
++
++	value = pmc_core_reg_read(pmcdev, pmcdev->map->pm_vric1_offset);
++	/* 24MHz Crystal Shutdown Qualification Disable */
++	value |= SPT_PMC_VRIC1_XTALSDQDIS;
++	/* Low Voltage Mode Enable */
++	value &= ~SPT_PMC_VRIC1_SLPS0LVEN;
++	pmc_core_reg_write(pmcdev, pmcdev->map->pm_vric1_offset, value);
++	return 0;
++}
++
++static const struct dmi_system_id pmc_core_dmi_table[]  = {
++	{
++	.callback = quirk_xtal_ignore,
++	.ident = "HP Elite x2 1013 G3",
++	.matches = {
++		DMI_MATCH(DMI_SYS_VENDOR, "HP"),
++		DMI_MATCH(DMI_PRODUCT_NAME, "HP Elite x2 1013 G3"),
++		},
++	},
++	{}
++};
++
+ static int __init pmc_core_probe(void)
+ {
+ 	struct pmc_dev *pmcdev = &pmc;
+@@ -792,6 +825,7 @@ static int __init pmc_core_probe(void)
+ 		return err;
+ 	}
+ 
++	dmi_check_system(pmc_core_dmi_table);
+ 	pr_info(" initialized\n");
+ 	return 0;
+ }
+diff --git a/drivers/platform/x86/intel_pmc_core.h b/drivers/platform/x86/intel_pmc_core.h
+index 1a0104d2cbf0..9bc16d7d2917 100644
+--- a/drivers/platform/x86/intel_pmc_core.h
++++ b/drivers/platform/x86/intel_pmc_core.h
+@@ -25,6 +25,7 @@
+ #define SPT_PMC_MTPMC_OFFSET			0x20
+ #define SPT_PMC_MFPMC_OFFSET			0x38
+ #define SPT_PMC_LTR_IGNORE_OFFSET		0x30C
++#define SPT_PMC_VRIC1_OFFSET			0x31c
+ #define SPT_PMC_MPHY_CORE_STS_0			0x1143
+ #define SPT_PMC_MPHY_CORE_STS_1			0x1142
+ #define SPT_PMC_MPHY_COM_STS_0			0x1155
+@@ -135,6 +136,9 @@ enum ppfear_regs {
+ #define SPT_PMC_BIT_MPHY_CMN_LANE2		BIT(2)
+ #define SPT_PMC_BIT_MPHY_CMN_LANE3		BIT(3)
+ 
++#define SPT_PMC_VRIC1_SLPS0LVEN			BIT(13)
++#define SPT_PMC_VRIC1_XTALSDQDIS		BIT(22)
++
+ /* Cannonlake Power Management Controller register offsets */
+ #define CNP_PMC_SLPS0_DBG_OFFSET		0x10B4
+ #define CNP_PMC_PM_CFG_OFFSET			0x1818
+@@ -217,6 +221,7 @@ struct pmc_reg_map {
+ 	const int pm_read_disable_bit;
+ 	const u32 slps0_dbg_offset;
+ 	const u32 ltr_ignore_max;
++	const u32 pm_vric1_offset;
+ };
+ 
+ /**
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 7e35ce2162d0..503fda4e7e8e 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -1459,7 +1459,7 @@ __qla2xxx_eh_generic_reset(char *name, enum nexus_wait_type type,
+ 		goto eh_reset_failed;
+ 	}
+ 	err = 2;
+-	if (do_reset(fcport, cmd->device->lun, blk_mq_rq_cpu(cmd->request) + 1)
++	if (do_reset(fcport, cmd->device->lun, 1)
+ 		!= QLA_SUCCESS) {
+ 		ql_log(ql_log_warn, vha, 0x800c,
+ 		    "do_reset failed for cmd=%p.\n", cmd);
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index 5a6e8e12701a..655ad26106e4 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -598,9 +598,16 @@ static bool scsi_end_request(struct request *req, blk_status_t error,
+ 	if (!blk_rq_is_scsi(req)) {
+ 		WARN_ON_ONCE(!(cmd->flags & SCMD_INITIALIZED));
+ 		cmd->flags &= ~SCMD_INITIALIZED;
+-		destroy_rcu_head(&cmd->rcu);
+ 	}
+ 
++	/*
++	 * Calling rcu_barrier() is not necessary here because the
++	 * SCSI error handler guarantees that the function called by
++	 * call_rcu() has been called before scsi_end_request() is
++	 * called.
++	 */
++	destroy_rcu_head(&cmd->rcu);
++
+ 	/*
+ 	 * In the MQ case the command gets freed by __blk_mq_end_request,
+ 	 * so we have to do all cleanup that depends on it earlier.
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index 0508831d6fb9..0a82e93566dc 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -2200,6 +2200,8 @@ void iscsi_remove_session(struct iscsi_cls_session *session)
+ 	scsi_target_unblock(&session->dev, SDEV_TRANSPORT_OFFLINE);
+ 	/* flush running scans then delete devices */
+ 	flush_work(&session->scan_work);
++	/* flush running unbind operations */
++	flush_work(&session->unbind_work);
+ 	__iscsi_unbind_session(&session->unbind_work);
+ 
+ 	/* hw iscsi may not have removed all connections from session */
+diff --git a/drivers/thermal/broadcom/bcm2835_thermal.c b/drivers/thermal/broadcom/bcm2835_thermal.c
+index 720760cd493f..ba39647a690c 100644
+--- a/drivers/thermal/broadcom/bcm2835_thermal.c
++++ b/drivers/thermal/broadcom/bcm2835_thermal.c
+@@ -119,8 +119,7 @@ static const struct debugfs_reg32 bcm2835_thermal_regs[] = {
+ 
+ static void bcm2835_thermal_debugfs(struct platform_device *pdev)
+ {
+-	struct thermal_zone_device *tz = platform_get_drvdata(pdev);
+-	struct bcm2835_thermal_data *data = tz->devdata;
++	struct bcm2835_thermal_data *data = platform_get_drvdata(pdev);
+ 	struct debugfs_regset32 *regset;
+ 
+ 	data->debugfsdir = debugfs_create_dir("bcm2835_thermal", NULL);
+@@ -266,7 +265,7 @@ static int bcm2835_thermal_probe(struct platform_device *pdev)
+ 
+ 	data->tz = tz;
+ 
+-	platform_set_drvdata(pdev, tz);
++	platform_set_drvdata(pdev, data);
+ 
+ 	/*
+ 	 * Thermal_zone doesn't enable hwmon as default,
+@@ -290,8 +289,8 @@ err_clk:
+ 
+ static int bcm2835_thermal_remove(struct platform_device *pdev)
+ {
+-	struct thermal_zone_device *tz = platform_get_drvdata(pdev);
+-	struct bcm2835_thermal_data *data = tz->devdata;
++	struct bcm2835_thermal_data *data = platform_get_drvdata(pdev);
++	struct thermal_zone_device *tz = data->tz;
+ 
+ 	debugfs_remove_recursive(data->debugfsdir);
+ 	thermal_zone_of_sensor_unregister(&pdev->dev, tz);
+diff --git a/drivers/thermal/intel/int340x_thermal/int3400_thermal.c b/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
+index 61ca7ce3624e..5f3ed24e26ec 100644
+--- a/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
++++ b/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
+@@ -22,6 +22,13 @@ enum int3400_thermal_uuid {
+ 	INT3400_THERMAL_PASSIVE_1,
+ 	INT3400_THERMAL_ACTIVE,
+ 	INT3400_THERMAL_CRITICAL,
++	INT3400_THERMAL_ADAPTIVE_PERFORMANCE,
++	INT3400_THERMAL_EMERGENCY_CALL_MODE,
++	INT3400_THERMAL_PASSIVE_2,
++	INT3400_THERMAL_POWER_BOSS,
++	INT3400_THERMAL_VIRTUAL_SENSOR,
++	INT3400_THERMAL_COOLING_MODE,
++	INT3400_THERMAL_HARDWARE_DUTY_CYCLING,
+ 	INT3400_THERMAL_MAXIMUM_UUID,
+ };
+ 
+@@ -29,6 +36,13 @@ static char *int3400_thermal_uuids[INT3400_THERMAL_MAXIMUM_UUID] = {
+ 	"42A441D6-AE6A-462b-A84B-4A8CE79027D3",
+ 	"3A95C389-E4B8-4629-A526-C52C88626BAE",
+ 	"97C68AE7-15FA-499c-B8C9-5DA81D606E0A",
++	"63BE270F-1C11-48FD-A6F7-3AF253FF3E2D",
++	"5349962F-71E6-431D-9AE8-0A635B710AEE",
++	"9E04115A-AE87-4D1C-9500-0F3E340BFE75",
++	"F5A35014-C209-46A4-993A-EB56DE7530A1",
++	"6ED722A7-9240-48A5-B479-31EEF723D7CF",
++	"16CAF1B7-DD38-40ED-B1C1-1B8A1913D531",
++	"BE84BABF-C4D4-403D-B495-3128FD44dAC1",
+ };
+ 
+ struct int3400_thermal_priv {
+@@ -299,10 +313,9 @@ static int int3400_thermal_probe(struct platform_device *pdev)
+ 
+ 	platform_set_drvdata(pdev, priv);
+ 
+-	if (priv->uuid_bitmap & 1 << INT3400_THERMAL_PASSIVE_1) {
+-		int3400_thermal_ops.get_mode = int3400_thermal_get_mode;
+-		int3400_thermal_ops.set_mode = int3400_thermal_set_mode;
+-	}
++	int3400_thermal_ops.get_mode = int3400_thermal_get_mode;
++	int3400_thermal_ops.set_mode = int3400_thermal_set_mode;
++
+ 	priv->thermal = thermal_zone_device_register("INT3400 Thermal", 0, 0,
+ 						priv, &int3400_thermal_ops,
+ 						&int3400_thermal_params, 0, 0);
+diff --git a/drivers/thermal/intel/intel_powerclamp.c b/drivers/thermal/intel/intel_powerclamp.c
+index 7571f7c2e7c9..ac7256b5f020 100644
+--- a/drivers/thermal/intel/intel_powerclamp.c
++++ b/drivers/thermal/intel/intel_powerclamp.c
+@@ -101,7 +101,7 @@ struct powerclamp_worker_data {
+ 	bool clamping;
+ };
+ 
+-static struct powerclamp_worker_data * __percpu worker_data;
++static struct powerclamp_worker_data __percpu *worker_data;
+ static struct thermal_cooling_device *cooling_dev;
+ static unsigned long *cpu_clamping_mask;  /* bit map for tracking per cpu
+ 					   * clamping kthread worker
+@@ -494,7 +494,7 @@ static void start_power_clamp_worker(unsigned long cpu)
+ 	struct powerclamp_worker_data *w_data = per_cpu_ptr(worker_data, cpu);
+ 	struct kthread_worker *worker;
+ 
+-	worker = kthread_create_worker_on_cpu(cpu, 0, "kidle_inject/%ld", cpu);
++	worker = kthread_create_worker_on_cpu(cpu, 0, "kidle_inj/%ld", cpu);
+ 	if (IS_ERR(worker))
+ 		return;
+ 
+diff --git a/drivers/thermal/samsung/exynos_tmu.c b/drivers/thermal/samsung/exynos_tmu.c
+index 48eef552cba4..fc9399d9c082 100644
+--- a/drivers/thermal/samsung/exynos_tmu.c
++++ b/drivers/thermal/samsung/exynos_tmu.c
+@@ -666,7 +666,7 @@ static int exynos_get_temp(void *p, int *temp)
+ 	struct exynos_tmu_data *data = p;
+ 	int value, ret = 0;
+ 
+-	if (!data || !data->tmu_read || !data->enabled)
++	if (!data || !data->tmu_read)
+ 		return -EINVAL;
+ 	else if (!data->enabled)
+ 		/*
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index 07cad54b84f1..e8e125acd712 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -1010,7 +1010,7 @@ static loff_t cifs_remap_file_range(struct file *src_file, loff_t off,
+ 	unsigned int xid;
+ 	int rc;
+ 
+-	if (remap_flags & ~REMAP_FILE_ADVISORY)
++	if (remap_flags & ~(REMAP_FILE_DEDUP | REMAP_FILE_ADVISORY))
+ 		return -EINVAL;
+ 
+ 	cifs_dbg(FYI, "clone range\n");
+diff --git a/fs/cifs/smb2maperror.c b/fs/cifs/smb2maperror.c
+index 924269cec135..e32c264e3adb 100644
+--- a/fs/cifs/smb2maperror.c
++++ b/fs/cifs/smb2maperror.c
+@@ -1036,7 +1036,8 @@ static const struct status_to_posix_error smb2_error_map_table[] = {
+ 	{STATUS_UNFINISHED_CONTEXT_DELETED, -EIO,
+ 	"STATUS_UNFINISHED_CONTEXT_DELETED"},
+ 	{STATUS_NO_TGT_REPLY, -EIO, "STATUS_NO_TGT_REPLY"},
+-	{STATUS_OBJECTID_NOT_FOUND, -EIO, "STATUS_OBJECTID_NOT_FOUND"},
++	/* Note that ENOATTR and ENODATA are the same errno */
++	{STATUS_OBJECTID_NOT_FOUND, -ENODATA, "STATUS_OBJECTID_NOT_FOUND"},
+ 	{STATUS_NO_IP_ADDRESSES, -EIO, "STATUS_NO_IP_ADDRESSES"},
+ 	{STATUS_WRONG_CREDENTIAL_HANDLE, -EIO,
+ 	"STATUS_WRONG_CREDENTIAL_HANDLE"},
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index b29f711ab965..ea56b1cdbdde 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -949,6 +949,16 @@ smb2_set_ea(const unsigned int xid, struct cifs_tcon *tcon,
+ 	resp_buftype[0] = resp_buftype[1] = resp_buftype[2] = CIFS_NO_BUFFER;
+ 	memset(rsp_iov, 0, sizeof(rsp_iov));
+ 
++	if (ses->server->ops->query_all_EAs) {
++		if (!ea_value) {
++			rc = ses->server->ops->query_all_EAs(xid, tcon, path,
++							     ea_name, NULL, 0,
++							     cifs_sb);
++			if (rc == -ENODATA)
++				goto sea_exit;
++		}
++	}
++
+ 	/* Open */
+ 	memset(&open_iov, 0, sizeof(open_iov));
+ 	rqst[0].rq_iov = open_iov;
+diff --git a/fs/cifs/trace.h b/fs/cifs/trace.h
+index 59be48206932..b49bc925fb4f 100644
+--- a/fs/cifs/trace.h
++++ b/fs/cifs/trace.h
+@@ -378,19 +378,19 @@ DECLARE_EVENT_CLASS(smb3_tcon_class,
+ 		__field(unsigned int, xid)
+ 		__field(__u32, tid)
+ 		__field(__u64, sesid)
+-		__field(const char *,  unc_name)
++		__string(name, unc_name)
+ 		__field(int, rc)
+ 	),
+ 	TP_fast_assign(
+ 		__entry->xid = xid;
+ 		__entry->tid = tid;
+ 		__entry->sesid = sesid;
+-		__entry->unc_name = unc_name;
++		__assign_str(name, unc_name);
+ 		__entry->rc = rc;
+ 	),
+ 	TP_printk("xid=%u sid=0x%llx tid=0x%x unc_name=%s rc=%d",
+ 		__entry->xid, __entry->sesid, __entry->tid,
+-		__entry->unc_name, __entry->rc)
++		__get_str(name), __entry->rc)
+ )
+ 
+ #define DEFINE_SMB3_TCON_EVENT(name)          \
+diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
+index 2e76fb55d94a..5f24fdc140ad 100644
+--- a/fs/ext4/ioctl.c
++++ b/fs/ext4/ioctl.c
+@@ -999,6 +999,13 @@ resizefs_out:
+ 		if (!blk_queue_discard(q))
+ 			return -EOPNOTSUPP;
+ 
++		/*
++		 * We haven't replayed the journal, so we cannot use our
++		 * block-bitmap-guided storage zapping commands.
++		 */
++		if (test_opt(sb, NOLOAD) && ext4_has_feature_journal(sb))
++			return -EROFS;
++
+ 		if (copy_from_user(&range, (struct fstrim_range __user *)arg,
+ 		    sizeof(range)))
+ 			return -EFAULT;
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index 3d9b18505c0c..e7ae26e36c9c 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -932,11 +932,18 @@ static int add_new_gdb_meta_bg(struct super_block *sb,
+ 	memcpy(n_group_desc, o_group_desc,
+ 	       EXT4_SB(sb)->s_gdb_count * sizeof(struct buffer_head *));
+ 	n_group_desc[gdb_num] = gdb_bh;
++
++	BUFFER_TRACE(gdb_bh, "get_write_access");
++	err = ext4_journal_get_write_access(handle, gdb_bh);
++	if (err) {
++		kvfree(n_group_desc);
++		brelse(gdb_bh);
++		return err;
++	}
++
+ 	EXT4_SB(sb)->s_group_desc = n_group_desc;
+ 	EXT4_SB(sb)->s_gdb_count++;
+ 	kvfree(o_group_desc);
+-	BUFFER_TRACE(gdb_bh, "get_write_access");
+-	err = ext4_journal_get_write_access(handle, gdb_bh);
+ 	return err;
+ }
+ 
+@@ -2073,6 +2080,10 @@ out:
+ 		free_flex_gd(flex_gd);
+ 	if (resize_inode != NULL)
+ 		iput(resize_inode);
+-	ext4_msg(sb, KERN_INFO, "resized filesystem to %llu", n_blocks_count);
++	if (err)
++		ext4_warning(sb, "error (%d) occurred during "
++			     "file system resize", err);
++	ext4_msg(sb, KERN_INFO, "resized filesystem to %llu",
++		 ext4_blocks_count(es));
+ 	return err;
+ }
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index fb12d3c17c1b..b9bca7298f96 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -430,6 +430,12 @@ static void ext4_journal_commit_callback(journal_t *journal, transaction_t *txn)
+ 	spin_unlock(&sbi->s_md_lock);
+ }
+ 
++static bool system_going_down(void)
++{
++	return system_state == SYSTEM_HALT || system_state == SYSTEM_POWER_OFF
++		|| system_state == SYSTEM_RESTART;
++}
++
+ /* Deal with the reporting of failure conditions on a filesystem such as
+  * inconsistencies detected or read IO failures.
+  *
+@@ -460,7 +466,12 @@ static void ext4_handle_error(struct super_block *sb)
+ 		if (journal)
+ 			jbd2_journal_abort(journal, -EIO);
+ 	}
+-	if (test_opt(sb, ERRORS_RO)) {
++	/*
++	 * We force ERRORS_RO behavior when system is rebooting. Otherwise we
++	 * could panic during 'reboot -f' as the underlying device got already
++	 * disabled.
++	 */
++	if (test_opt(sb, ERRORS_RO) || system_going_down()) {
+ 		ext4_msg(sb, KERN_CRIT, "Remounting filesystem read-only");
+ 		/*
+ 		 * Make sure updated value of ->s_mount_flags will be visible
+@@ -468,8 +479,7 @@ static void ext4_handle_error(struct super_block *sb)
+ 		 */
+ 		smp_wmb();
+ 		sb->s_flags |= SB_RDONLY;
+-	}
+-	if (test_opt(sb, ERRORS_PANIC)) {
++	} else if (test_opt(sb, ERRORS_PANIC)) {
+ 		if (EXT4_SB(sb)->s_journal &&
+ 		  !(EXT4_SB(sb)->s_journal->j_flags & JBD2_REC_ERR))
+ 			return;
+diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
+index f955cd3e0677..7743fa83b895 100644
+--- a/fs/f2fs/checkpoint.c
++++ b/fs/f2fs/checkpoint.c
+@@ -306,8 +306,9 @@ static int f2fs_write_meta_pages(struct address_space *mapping,
+ 		goto skip_write;
+ 
+ 	/* collect a number of dirty meta pages and write together */
+-	if (wbc->for_kupdate ||
+-		get_pages(sbi, F2FS_DIRTY_META) < nr_pages_to_skip(sbi, META))
++	if (wbc->sync_mode != WB_SYNC_ALL &&
++			get_pages(sbi, F2FS_DIRTY_META) <
++					nr_pages_to_skip(sbi, META))
+ 		goto skip_write;
+ 
+ 	/* if locked failed, cp will flush dirty pages instead */
+@@ -405,7 +406,7 @@ static int f2fs_set_meta_page_dirty(struct page *page)
+ 	if (!PageDirty(page)) {
+ 		__set_page_dirty_nobuffers(page);
+ 		inc_page_count(F2FS_P_SB(page), F2FS_DIRTY_META);
+-		SetPagePrivate(page);
++		f2fs_set_page_private(page, 0);
+ 		f2fs_trace_pid(page);
+ 		return 1;
+ 	}
+@@ -956,7 +957,7 @@ void f2fs_update_dirty_page(struct inode *inode, struct page *page)
+ 	inode_inc_dirty_pages(inode);
+ 	spin_unlock(&sbi->inode_lock[type]);
+ 
+-	SetPagePrivate(page);
++	f2fs_set_page_private(page, 0);
+ 	f2fs_trace_pid(page);
+ }
+ 
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index f91d8630c9a2..c99aab23efea 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -2711,8 +2711,7 @@ void f2fs_invalidate_page(struct page *page, unsigned int offset,
+ 	if (IS_ATOMIC_WRITTEN_PAGE(page))
+ 		return f2fs_drop_inmem_page(inode, page);
+ 
+-	set_page_private(page, 0);
+-	ClearPagePrivate(page);
++	f2fs_clear_page_private(page);
+ }
+ 
+ int f2fs_release_page(struct page *page, gfp_t wait)
+@@ -2726,8 +2725,7 @@ int f2fs_release_page(struct page *page, gfp_t wait)
+ 		return 0;
+ 
+ 	clear_cold_data(page);
+-	set_page_private(page, 0);
+-	ClearPagePrivate(page);
++	f2fs_clear_page_private(page);
+ 	return 1;
+ }
+ 
+@@ -2795,12 +2793,8 @@ int f2fs_migrate_page(struct address_space *mapping,
+ 			return -EAGAIN;
+ 	}
+ 
+-	/*
+-	 * A reference is expected if PagePrivate set when move mapping,
+-	 * however F2FS breaks this for maintaining dirty page counts when
+-	 * truncating pages. So here adjusting the 'extra_count' make it work.
+-	 */
+-	extra_count = (atomic_written ? 1 : 0) - page_has_private(page);
++	/* one extra reference was held for atomic_write page */
++	extra_count = atomic_written ? 1 : 0;
+ 	rc = migrate_page_move_mapping(mapping, newpage,
+ 				page, mode, extra_count);
+ 	if (rc != MIGRATEPAGE_SUCCESS) {
+@@ -2821,9 +2815,10 @@ int f2fs_migrate_page(struct address_space *mapping,
+ 		get_page(newpage);
+ 	}
+ 
+-	if (PagePrivate(page))
+-		SetPagePrivate(newpage);
+-	set_page_private(newpage, page_private(page));
++	if (PagePrivate(page)) {
++		f2fs_set_page_private(newpage, page_private(page));
++		f2fs_clear_page_private(page);
++	}
+ 
+ 	if (mode != MIGRATE_SYNC_NO_COPY)
+ 		migrate_page_copy(newpage, page);
+diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c
+index 50d0d36280fa..99a6063c2327 100644
+--- a/fs/f2fs/dir.c
++++ b/fs/f2fs/dir.c
+@@ -728,7 +728,7 @@ void f2fs_delete_entry(struct f2fs_dir_entry *dentry, struct page *page,
+ 		!f2fs_truncate_hole(dir, page->index, page->index + 1)) {
+ 		f2fs_clear_page_cache_dirty_tag(page);
+ 		clear_page_dirty_for_io(page);
+-		ClearPagePrivate(page);
++		f2fs_clear_page_private(page);
+ 		ClearPageUptodate(page);
+ 		clear_cold_data(page);
+ 		inode_dec_dirty_pages(dir);
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 279bc00489cc..6d9186a6528c 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -2825,6 +2825,27 @@ static inline bool is_valid_data_blkaddr(struct f2fs_sb_info *sbi,
+ 	return true;
+ }
+ 
++static inline void f2fs_set_page_private(struct page *page,
++						unsigned long data)
++{
++	if (PagePrivate(page))
++		return;
++
++	get_page(page);
++	SetPagePrivate(page);
++	set_page_private(page, data);
++}
++
++static inline void f2fs_clear_page_private(struct page *page)
++{
++	if (!PagePrivate(page))
++		return;
++
++	set_page_private(page, 0);
++	ClearPagePrivate(page);
++	f2fs_put_page(page, 0);
++}
++
+ /*
+  * file.c
+  */
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index ae2b45e75847..30ed43bce110 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -768,7 +768,6 @@ int f2fs_setattr(struct dentry *dentry, struct iattr *attr)
+ {
+ 	struct inode *inode = d_inode(dentry);
+ 	int err;
+-	bool size_changed = false;
+ 
+ 	if (unlikely(f2fs_cp_error(F2FS_I_SB(inode))))
+ 		return -EIO;
+@@ -843,8 +842,6 @@ int f2fs_setattr(struct dentry *dentry, struct iattr *attr)
+ 		down_write(&F2FS_I(inode)->i_sem);
+ 		F2FS_I(inode)->last_disk_size = i_size_read(inode);
+ 		up_write(&F2FS_I(inode)->i_sem);
+-
+-		size_changed = true;
+ 	}
+ 
+ 	__setattr_copy(inode, attr);
+@@ -858,7 +855,7 @@ int f2fs_setattr(struct dentry *dentry, struct iattr *attr)
+ 	}
+ 
+ 	/* file size may changed here */
+-	f2fs_mark_inode_dirty_sync(inode, size_changed);
++	f2fs_mark_inode_dirty_sync(inode, true);
+ 
+ 	/* inode change will produce dirty node pages flushed by checkpoint */
+ 	f2fs_balance_fs(F2FS_I_SB(inode), true);
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index 4f450e573312..3f99ab288695 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -1920,7 +1920,9 @@ static int f2fs_write_node_pages(struct address_space *mapping,
+ 	f2fs_balance_fs_bg(sbi);
+ 
+ 	/* collect a number of dirty node pages and write together */
+-	if (get_pages(sbi, F2FS_DIRTY_NODES) < nr_pages_to_skip(sbi, NODE))
++	if (wbc->sync_mode != WB_SYNC_ALL &&
++			get_pages(sbi, F2FS_DIRTY_NODES) <
++					nr_pages_to_skip(sbi, NODE))
+ 		goto skip_write;
+ 
+ 	if (wbc->sync_mode == WB_SYNC_ALL)
+@@ -1959,7 +1961,7 @@ static int f2fs_set_node_page_dirty(struct page *page)
+ 	if (!PageDirty(page)) {
+ 		__set_page_dirty_nobuffers(page);
+ 		inc_page_count(F2FS_P_SB(page), F2FS_DIRTY_NODES);
+-		SetPagePrivate(page);
++		f2fs_set_page_private(page, 0);
+ 		f2fs_trace_pid(page);
+ 		return 1;
+ 	}
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index e1b1d390b329..b6c8b0696ef6 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -191,8 +191,7 @@ void f2fs_register_inmem_page(struct inode *inode, struct page *page)
+ 
+ 	f2fs_trace_pid(page);
+ 
+-	set_page_private(page, (unsigned long)ATOMIC_WRITTEN_PAGE);
+-	SetPagePrivate(page);
++	f2fs_set_page_private(page, (unsigned long)ATOMIC_WRITTEN_PAGE);
+ 
+ 	new = f2fs_kmem_cache_alloc(inmem_entry_slab, GFP_NOFS);
+ 
+@@ -280,8 +279,7 @@ next:
+ 			ClearPageUptodate(page);
+ 			clear_cold_data(page);
+ 		}
+-		set_page_private(page, 0);
+-		ClearPagePrivate(page);
++		f2fs_clear_page_private(page);
+ 		f2fs_put_page(page, 1);
+ 
+ 		list_del(&cur->list);
+@@ -370,8 +368,7 @@ void f2fs_drop_inmem_page(struct inode *inode, struct page *page)
+ 	kmem_cache_free(inmem_entry_slab, cur);
+ 
+ 	ClearPageUptodate(page);
+-	set_page_private(page, 0);
+-	ClearPagePrivate(page);
++	f2fs_clear_page_private(page);
+ 	f2fs_put_page(page, 0);
+ 
+ 	trace_f2fs_commit_inmem_page(page, INMEM_INVALIDATE);
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 5892fa3c885f..144ffba3ec5a 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -1460,9 +1460,16 @@ static int f2fs_enable_quotas(struct super_block *sb);
+ 
+ static int f2fs_disable_checkpoint(struct f2fs_sb_info *sbi)
+ {
++	unsigned int s_flags = sbi->sb->s_flags;
+ 	struct cp_control cpc;
+-	int err;
++	int err = 0;
++	int ret;
+ 
++	if (s_flags & SB_RDONLY) {
++		f2fs_msg(sbi->sb, KERN_ERR,
++				"checkpoint=disable on readonly fs");
++		return -EINVAL;
++	}
+ 	sbi->sb->s_flags |= SB_ACTIVE;
+ 
+ 	f2fs_update_time(sbi, DISABLE_TIME);
+@@ -1470,18 +1477,24 @@ static int f2fs_disable_checkpoint(struct f2fs_sb_info *sbi)
+ 	while (!f2fs_time_over(sbi, DISABLE_TIME)) {
+ 		mutex_lock(&sbi->gc_mutex);
+ 		err = f2fs_gc(sbi, true, false, NULL_SEGNO);
+-		if (err == -ENODATA)
++		if (err == -ENODATA) {
++			err = 0;
+ 			break;
++		}
+ 		if (err && err != -EAGAIN)
+-			return err;
++			break;
+ 	}
+ 
+-	err = sync_filesystem(sbi->sb);
+-	if (err)
+-		return err;
++	ret = sync_filesystem(sbi->sb);
++	if (ret || err) {
++		err = ret ? ret : err;
++		goto restore_flag;
++	}
+ 
+-	if (f2fs_disable_cp_again(sbi))
+-		return -EAGAIN;
++	if (f2fs_disable_cp_again(sbi)) {
++		err = -EAGAIN;
++		goto restore_flag;
++	}
+ 
+ 	mutex_lock(&sbi->gc_mutex);
+ 	cpc.reason = CP_PAUSE;
+@@ -1490,7 +1503,9 @@ static int f2fs_disable_checkpoint(struct f2fs_sb_info *sbi)
+ 
+ 	sbi->unusable_block_count = 0;
+ 	mutex_unlock(&sbi->gc_mutex);
+-	return 0;
++restore_flag:
++	sbi->sb->s_flags = s_flags;	/* Restore MS_RDONLY status */
++	return err;
+ }
+ 
+ static void f2fs_enable_checkpoint(struct f2fs_sb_info *sbi)
+@@ -3359,7 +3374,7 @@ skip_recovery:
+ 	if (test_opt(sbi, DISABLE_CHECKPOINT)) {
+ 		err = f2fs_disable_checkpoint(sbi);
+ 		if (err)
+-			goto free_meta;
++			goto sync_free_meta;
+ 	} else if (is_set_ckpt_flags(sbi, CP_DISABLED_FLAG)) {
+ 		f2fs_enable_checkpoint(sbi);
+ 	}
+@@ -3372,7 +3387,7 @@ skip_recovery:
+ 		/* After POR, we can run background GC thread.*/
+ 		err = f2fs_start_gc_thread(sbi);
+ 		if (err)
+-			goto free_meta;
++			goto sync_free_meta;
+ 	}
+ 	kvfree(options);
+ 
+@@ -3394,6 +3409,11 @@ skip_recovery:
+ 	f2fs_update_time(sbi, REQ_TIME);
+ 	return 0;
+ 
++sync_free_meta:
++	/* safe to flush all the data */
++	sync_filesystem(sbi->sb);
++	retry = false;
++
+ free_meta:
+ #ifdef CONFIG_QUOTA
+ 	f2fs_truncate_quota_inode_pages(sb);
+@@ -3407,6 +3427,8 @@ free_meta:
+ 	 * falls into an infinite loop in f2fs_sync_meta_pages().
+ 	 */
+ 	truncate_inode_pages_final(META_MAPPING(sbi));
++	/* evict some inodes being cached by GC */
++	evict_inodes(sb);
+ 	f2fs_unregister_sysfs(sbi);
+ free_root_inode:
+ 	dput(sb->s_root);
+diff --git a/fs/f2fs/xattr.c b/fs/f2fs/xattr.c
+index 73b92985198b..6b6fe6431a64 100644
+--- a/fs/f2fs/xattr.c
++++ b/fs/f2fs/xattr.c
+@@ -347,7 +347,7 @@ check:
+ 	*base_addr = txattr_addr;
+ 	return 0;
+ out:
+-	kzfree(txattr_addr);
++	kvfree(txattr_addr);
+ 	return err;
+ }
+ 
+@@ -390,7 +390,7 @@ static int read_all_xattrs(struct inode *inode, struct page *ipage,
+ 	*base_addr = txattr_addr;
+ 	return 0;
+ fail:
+-	kzfree(txattr_addr);
++	kvfree(txattr_addr);
+ 	return err;
+ }
+ 
+@@ -517,7 +517,7 @@ int f2fs_getxattr(struct inode *inode, int index, const char *name,
+ 	}
+ 	error = size;
+ out:
+-	kzfree(base_addr);
++	kvfree(base_addr);
+ 	return error;
+ }
+ 
+@@ -563,7 +563,7 @@ ssize_t f2fs_listxattr(struct dentry *dentry, char *buffer, size_t buffer_size)
+ 	}
+ 	error = buffer_size - rest;
+ cleanup:
+-	kzfree(base_addr);
++	kvfree(base_addr);
+ 	return error;
+ }
+ 
+@@ -694,7 +694,7 @@ static int __f2fs_setxattr(struct inode *inode, int index,
+ 	if (!error && S_ISDIR(inode->i_mode))
+ 		set_sbi_flag(F2FS_I_SB(inode), SBI_NEED_CP);
+ exit:
+-	kzfree(base_addr);
++	kvfree(base_addr);
+ 	return error;
+ }
+ 
+diff --git a/fs/notify/inotify/inotify_user.c b/fs/notify/inotify/inotify_user.c
+index 798f1253141a..3b7b8e95c98a 100644
+--- a/fs/notify/inotify/inotify_user.c
++++ b/fs/notify/inotify/inotify_user.c
+@@ -519,8 +519,10 @@ static int inotify_update_existing_watch(struct fsnotify_group *group,
+ 	fsn_mark = fsnotify_find_mark(&inode->i_fsnotify_marks, group);
+ 	if (!fsn_mark)
+ 		return -ENOENT;
+-	else if (create)
+-		return -EEXIST;
++	else if (create) {
++		ret = -EEXIST;
++		goto out;
++	}
+ 
+ 	i_mark = container_of(fsn_mark, struct inotify_inode_mark, fsn_mark);
+ 
+@@ -548,6 +550,7 @@ static int inotify_update_existing_watch(struct fsnotify_group *group,
+ 	/* return the wd */
+ 	ret = i_mark->wd;
+ 
++out:
+ 	/* match the get from fsnotify_find_mark() */
+ 	fsnotify_put_mark(fsn_mark);
+ 
+diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
+index bbcc185062bb..d29d869abec1 100644
+--- a/fs/proc/kcore.c
++++ b/fs/proc/kcore.c
+@@ -54,6 +54,28 @@ static LIST_HEAD(kclist_head);
+ static DECLARE_RWSEM(kclist_lock);
+ static int kcore_need_update = 1;
+ 
++/*
++ * Returns > 0 for RAM pages, 0 for non-RAM pages, < 0 on error
++ * Same as oldmem_pfn_is_ram in vmcore
++ */
++static int (*mem_pfn_is_ram)(unsigned long pfn);
++
++int __init register_mem_pfn_is_ram(int (*fn)(unsigned long pfn))
++{
++	if (mem_pfn_is_ram)
++		return -EBUSY;
++	mem_pfn_is_ram = fn;
++	return 0;
++}
++
++static int pfn_is_ram(unsigned long pfn)
++{
++	if (mem_pfn_is_ram)
++		return mem_pfn_is_ram(pfn);
++	else
++		return 1;
++}
++
+ /* This doesn't grab kclist_lock, so it should only be used at init time. */
+ void __init kclist_add(struct kcore_list *new, void *addr, size_t size,
+ 		       int type)
+@@ -465,6 +487,11 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
+ 				goto out;
+ 			}
+ 			m = NULL;	/* skip the list anchor */
++		} else if (!pfn_is_ram(__pa(start) >> PAGE_SHIFT)) {
++			if (clear_user(buffer, tsz)) {
++				ret = -EFAULT;
++				goto out;
++			}
+ 		} else if (m->type == KCORE_VMALLOC) {
+ 			vread(buf, (char *)start, tsz);
+ 			/* we have to zero-fill user buffer even if no read */
+diff --git a/include/linux/atalk.h b/include/linux/atalk.h
+index 840cf92307ba..d5cfc0b15b76 100644
+--- a/include/linux/atalk.h
++++ b/include/linux/atalk.h
+@@ -158,7 +158,7 @@ extern int sysctl_aarp_retransmit_limit;
+ extern int sysctl_aarp_resolve_time;
+ 
+ #ifdef CONFIG_SYSCTL
+-extern void atalk_register_sysctl(void);
++extern int atalk_register_sysctl(void);
+ extern void atalk_unregister_sysctl(void);
+ #else
+ static inline int atalk_register_sysctl(void)
+diff --git a/include/linux/kcore.h b/include/linux/kcore.h
+index 8c3f8c14eeaa..c843f4a9c512 100644
+--- a/include/linux/kcore.h
++++ b/include/linux/kcore.h
+@@ -44,6 +44,8 @@ void kclist_add_remap(struct kcore_list *m, void *addr, void *vaddr, size_t sz)
+ 	m->vaddr = (unsigned long)vaddr;
+ 	kclist_add(m, addr, sz, KCORE_REMAP);
+ }
++
++extern int __init register_mem_pfn_is_ram(int (*fn)(unsigned long pfn));
+ #else
+ static inline
+ void kclist_add(struct kcore_list *new, void *addr, size_t size, int type)
+diff --git a/include/linux/swap.h b/include/linux/swap.h
+index 622025ac1461..f1146ed21062 100644
+--- a/include/linux/swap.h
++++ b/include/linux/swap.h
+@@ -157,9 +157,9 @@ struct swap_extent {
+ /*
+  * Max bad pages in the new format..
+  */
+-#define __swapoffset(x) ((unsigned long)&((union swap_header *)0)->x)
+ #define MAX_SWAP_BADPAGES \
+-	((__swapoffset(magic.magic) - __swapoffset(info.badpages)) / sizeof(int))
++	((offsetof(union swap_header, magic.magic) - \
++	  offsetof(union swap_header, info.badpages)) / sizeof(int))
+ 
+ enum {
+ 	SWP_USED	= (1 << 0),	/* is slot in swap_info[] used? */
+diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
+index 5b50fe4906d2..7b60fd186cfe 100644
+--- a/include/trace/events/rxrpc.h
++++ b/include/trace/events/rxrpc.h
+@@ -76,6 +76,7 @@ enum rxrpc_client_trace {
+ 	rxrpc_client_chan_disconnect,
+ 	rxrpc_client_chan_pass,
+ 	rxrpc_client_chan_unstarted,
++	rxrpc_client_chan_wait_failed,
+ 	rxrpc_client_cleanup,
+ 	rxrpc_client_count,
+ 	rxrpc_client_discard,
+@@ -276,6 +277,7 @@ enum rxrpc_tx_point {
+ 	EM(rxrpc_client_chan_disconnect,	"ChDisc") \
+ 	EM(rxrpc_client_chan_pass,		"ChPass") \
+ 	EM(rxrpc_client_chan_unstarted,		"ChUnst") \
++	EM(rxrpc_client_chan_wait_failed,	"ChWtFl") \
+ 	EM(rxrpc_client_cleanup,		"Clean ") \
+ 	EM(rxrpc_client_count,			"Count ") \
+ 	EM(rxrpc_client_discard,		"Discar") \
+diff --git a/kernel/bpf/inode.c b/kernel/bpf/inode.c
+index 2ada5e21dfa6..4a8f390a2b82 100644
+--- a/kernel/bpf/inode.c
++++ b/kernel/bpf/inode.c
+@@ -554,19 +554,6 @@ struct bpf_prog *bpf_prog_get_type_path(const char *name, enum bpf_prog_type typ
+ }
+ EXPORT_SYMBOL(bpf_prog_get_type_path);
+ 
+-static void bpf_evict_inode(struct inode *inode)
+-{
+-	enum bpf_type type;
+-
+-	truncate_inode_pages_final(&inode->i_data);
+-	clear_inode(inode);
+-
+-	if (S_ISLNK(inode->i_mode))
+-		kfree(inode->i_link);
+-	if (!bpf_inode_type(inode, &type))
+-		bpf_any_put(inode->i_private, type);
+-}
+-
+ /*
+  * Display the mount options in /proc/mounts.
+  */
+@@ -579,11 +566,28 @@ static int bpf_show_options(struct seq_file *m, struct dentry *root)
+ 	return 0;
+ }
+ 
++static void bpf_destroy_inode_deferred(struct rcu_head *head)
++{
++	struct inode *inode = container_of(head, struct inode, i_rcu);
++	enum bpf_type type;
++
++	if (S_ISLNK(inode->i_mode))
++		kfree(inode->i_link);
++	if (!bpf_inode_type(inode, &type))
++		bpf_any_put(inode->i_private, type);
++	free_inode_nonrcu(inode);
++}
++
++static void bpf_destroy_inode(struct inode *inode)
++{
++	call_rcu(&inode->i_rcu, bpf_destroy_inode_deferred);
++}
++
+ static const struct super_operations bpf_super_ops = {
+ 	.statfs		= simple_statfs,
+ 	.drop_inode	= generic_delete_inode,
+ 	.show_options	= bpf_show_options,
+-	.evict_inode	= bpf_evict_inode,
++	.destroy_inode	= bpf_destroy_inode,
+ };
+ 
+ enum {
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 26d6edab051a..2e2305a81047 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -7178,6 +7178,7 @@ static void perf_event_mmap_output(struct perf_event *event,
+ 	struct perf_output_handle handle;
+ 	struct perf_sample_data sample;
+ 	int size = mmap_event->event_id.header.size;
++	u32 type = mmap_event->event_id.header.type;
+ 	int ret;
+ 
+ 	if (!perf_event_mmap_match(event, data))
+@@ -7221,6 +7222,7 @@ static void perf_event_mmap_output(struct perf_event *event,
+ 	perf_output_end(&handle);
+ out:
+ 	mmap_event->event_id.header.size = size;
++	mmap_event->event_id.header.type = type;
+ }
+ 
+ static void perf_event_mmap_event(struct perf_mmap_event *mmap_event)
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 01a2489de94e..62cc29364fba 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -6942,7 +6942,7 @@ static int __maybe_unused cpu_period_quota_parse(char *buf,
+ {
+ 	char tok[21];	/* U64_MAX */
+ 
+-	if (!sscanf(buf, "%s %llu", tok, periodp))
++	if (sscanf(buf, "%20s %llu", tok, periodp) < 1)
+ 		return -EINVAL;
+ 
+ 	*periodp *= NSEC_PER_USEC;
+diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
+index 033ec7c45f13..1ccf77f6d346 100644
+--- a/kernel/sched/cpufreq_schedutil.c
++++ b/kernel/sched/cpufreq_schedutil.c
+@@ -48,10 +48,10 @@ struct sugov_cpu {
+ 
+ 	bool			iowait_boost_pending;
+ 	unsigned int		iowait_boost;
+-	unsigned int		iowait_boost_max;
+ 	u64			last_update;
+ 
+ 	unsigned long		bw_dl;
++	unsigned long		min;
+ 	unsigned long		max;
+ 
+ 	/* The field below is for single-CPU policies only: */
+@@ -303,8 +303,7 @@ static bool sugov_iowait_reset(struct sugov_cpu *sg_cpu, u64 time,
+ 	if (delta_ns <= TICK_NSEC)
+ 		return false;
+ 
+-	sg_cpu->iowait_boost = set_iowait_boost
+-		? sg_cpu->sg_policy->policy->min : 0;
++	sg_cpu->iowait_boost = set_iowait_boost ? sg_cpu->min : 0;
+ 	sg_cpu->iowait_boost_pending = set_iowait_boost;
+ 
+ 	return true;
+@@ -344,14 +343,13 @@ static void sugov_iowait_boost(struct sugov_cpu *sg_cpu, u64 time,
+ 
+ 	/* Double the boost at each request */
+ 	if (sg_cpu->iowait_boost) {
+-		sg_cpu->iowait_boost <<= 1;
+-		if (sg_cpu->iowait_boost > sg_cpu->iowait_boost_max)
+-			sg_cpu->iowait_boost = sg_cpu->iowait_boost_max;
++		sg_cpu->iowait_boost =
++			min_t(unsigned int, sg_cpu->iowait_boost << 1, SCHED_CAPACITY_SCALE);
+ 		return;
+ 	}
+ 
+ 	/* First wakeup after IO: start with minimum boost */
+-	sg_cpu->iowait_boost = sg_cpu->sg_policy->policy->min;
++	sg_cpu->iowait_boost = sg_cpu->min;
+ }
+ 
+ /**
+@@ -373,47 +371,38 @@ static void sugov_iowait_boost(struct sugov_cpu *sg_cpu, u64 time,
+  * This mechanism is designed to boost high frequently IO waiting tasks, while
+  * being more conservative on tasks which does sporadic IO operations.
+  */
+-static void sugov_iowait_apply(struct sugov_cpu *sg_cpu, u64 time,
+-			       unsigned long *util, unsigned long *max)
++static unsigned long sugov_iowait_apply(struct sugov_cpu *sg_cpu, u64 time,
++					unsigned long util, unsigned long max)
+ {
+-	unsigned int boost_util, boost_max;
++	unsigned long boost;
+ 
+ 	/* No boost currently required */
+ 	if (!sg_cpu->iowait_boost)
+-		return;
++		return util;
+ 
+ 	/* Reset boost if the CPU appears to have been idle enough */
+ 	if (sugov_iowait_reset(sg_cpu, time, false))
+-		return;
++		return util;
+ 
+-	/*
+-	 * An IO waiting task has just woken up:
+-	 * allow to further double the boost value
+-	 */
+-	if (sg_cpu->iowait_boost_pending) {
+-		sg_cpu->iowait_boost_pending = false;
+-	} else {
++	if (!sg_cpu->iowait_boost_pending) {
+ 		/*
+-		 * Otherwise: reduce the boost value and disable it when we
+-		 * reach the minimum.
++		 * No boost pending; reduce the boost value.
+ 		 */
+ 		sg_cpu->iowait_boost >>= 1;
+-		if (sg_cpu->iowait_boost < sg_cpu->sg_policy->policy->min) {
++		if (sg_cpu->iowait_boost < sg_cpu->min) {
+ 			sg_cpu->iowait_boost = 0;
+-			return;
++			return util;
+ 		}
+ 	}
+ 
++	sg_cpu->iowait_boost_pending = false;
++
+ 	/*
+-	 * Apply the current boost value: a CPU is boosted only if its current
+-	 * utilization is smaller then the current IO boost level.
++	 * @util is already in capacity scale; convert iowait_boost
++	 * into the same scale so we can compare.
+ 	 */
+-	boost_util = sg_cpu->iowait_boost;
+-	boost_max = sg_cpu->iowait_boost_max;
+-	if (*util * boost_max < *max * boost_util) {
+-		*util = boost_util;
+-		*max = boost_max;
+-	}
++	boost = (sg_cpu->iowait_boost * max) >> SCHED_CAPACITY_SHIFT;
++	return max(boost, util);
+ }
+ 
+ #ifdef CONFIG_NO_HZ_COMMON
+@@ -460,7 +449,7 @@ static void sugov_update_single(struct update_util_data *hook, u64 time,
+ 
+ 	util = sugov_get_util(sg_cpu);
+ 	max = sg_cpu->max;
+-	sugov_iowait_apply(sg_cpu, time, &util, &max);
++	util = sugov_iowait_apply(sg_cpu, time, util, max);
+ 	next_f = get_next_freq(sg_policy, util, max);
+ 	/*
+ 	 * Do not reduce the frequency if the CPU has not been idle
+@@ -500,7 +489,7 @@ static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu, u64 time)
+ 
+ 		j_util = sugov_get_util(j_sg_cpu);
+ 		j_max = j_sg_cpu->max;
+-		sugov_iowait_apply(j_sg_cpu, time, &j_util, &j_max);
++		j_util = sugov_iowait_apply(j_sg_cpu, time, j_util, j_max);
+ 
+ 		if (j_util * max > j_max * util) {
+ 			util = j_util;
+@@ -837,7 +826,9 @@ static int sugov_start(struct cpufreq_policy *policy)
+ 		memset(sg_cpu, 0, sizeof(*sg_cpu));
+ 		sg_cpu->cpu			= cpu;
+ 		sg_cpu->sg_policy		= sg_policy;
+-		sg_cpu->iowait_boost_max	= policy->cpuinfo.max_freq;
++		sg_cpu->min			=
++			(SCHED_CAPACITY_SCALE * policy->cpuinfo.min_freq) /
++			policy->cpuinfo.max_freq;
+ 	}
+ 
+ 	for_each_cpu(cpu, policy->cpus) {
+diff --git a/lib/div64.c b/lib/div64.c
+index 01c8602bb6ff..ee146bb4c558 100644
+--- a/lib/div64.c
++++ b/lib/div64.c
+@@ -109,7 +109,7 @@ u64 div64_u64_rem(u64 dividend, u64 divisor, u64 *remainder)
+ 		quot = div_u64_rem(dividend, divisor, &rem32);
+ 		*remainder = rem32;
+ 	} else {
+-		int n = 1 + fls(high);
++		int n = fls(high);
+ 		quot = div_u64(dividend >> n, divisor >> n);
+ 
+ 		if (quot != 0)
+@@ -147,7 +147,7 @@ u64 div64_u64(u64 dividend, u64 divisor)
+ 	if (high == 0) {
+ 		quot = div_u64(dividend, divisor);
+ 	} else {
+-		int n = 1 + fls(high);
++		int n = fls(high);
+ 		quot = div_u64(dividend >> n, divisor >> n);
+ 
+ 		if (quot != 0)
+diff --git a/net/appletalk/atalk_proc.c b/net/appletalk/atalk_proc.c
+index 8006295f8bd7..dda73991bb54 100644
+--- a/net/appletalk/atalk_proc.c
++++ b/net/appletalk/atalk_proc.c
+@@ -255,7 +255,7 @@ out_interface:
+ 	goto out;
+ }
+ 
+-void __exit atalk_proc_exit(void)
++void atalk_proc_exit(void)
+ {
+ 	remove_proc_entry("interface", atalk_proc_dir);
+ 	remove_proc_entry("route", atalk_proc_dir);
+diff --git a/net/appletalk/ddp.c b/net/appletalk/ddp.c
+index 9b6bc5abe946..795fbc6c06aa 100644
+--- a/net/appletalk/ddp.c
++++ b/net/appletalk/ddp.c
+@@ -1910,12 +1910,16 @@ static const char atalk_err_snap[] __initconst =
+ /* Called by proto.c on kernel start up */
+ static int __init atalk_init(void)
+ {
+-	int rc = proto_register(&ddp_proto, 0);
++	int rc;
+ 
+-	if (rc != 0)
++	rc = proto_register(&ddp_proto, 0);
++	if (rc)
+ 		goto out;
+ 
+-	(void)sock_register(&atalk_family_ops);
++	rc = sock_register(&atalk_family_ops);
++	if (rc)
++		goto out_proto;
++
+ 	ddp_dl = register_snap_client(ddp_snap_id, atalk_rcv);
+ 	if (!ddp_dl)
+ 		printk(atalk_err_snap);
+@@ -1923,12 +1927,33 @@ static int __init atalk_init(void)
+ 	dev_add_pack(&ltalk_packet_type);
+ 	dev_add_pack(&ppptalk_packet_type);
+ 
+-	register_netdevice_notifier(&ddp_notifier);
++	rc = register_netdevice_notifier(&ddp_notifier);
++	if (rc)
++		goto out_sock;
++
+ 	aarp_proto_init();
+-	atalk_proc_init();
+-	atalk_register_sysctl();
++	rc = atalk_proc_init();
++	if (rc)
++		goto out_aarp;
++
++	rc = atalk_register_sysctl();
++	if (rc)
++		goto out_proc;
+ out:
+ 	return rc;
++out_proc:
++	atalk_proc_exit();
++out_aarp:
++	aarp_cleanup_module();
++	unregister_netdevice_notifier(&ddp_notifier);
++out_sock:
++	dev_remove_pack(&ppptalk_packet_type);
++	dev_remove_pack(&ltalk_packet_type);
++	unregister_snap_client(ddp_dl);
++	sock_unregister(PF_APPLETALK);
++out_proto:
++	proto_unregister(&ddp_proto);
++	goto out;
+ }
+ module_init(atalk_init);
+ 
+diff --git a/net/appletalk/sysctl_net_atalk.c b/net/appletalk/sysctl_net_atalk.c
+index c744a853fa5f..d945b7c0176d 100644
+--- a/net/appletalk/sysctl_net_atalk.c
++++ b/net/appletalk/sysctl_net_atalk.c
+@@ -45,9 +45,12 @@ static struct ctl_table atalk_table[] = {
+ 
+ static struct ctl_table_header *atalk_table_header;
+ 
+-void atalk_register_sysctl(void)
++int __init atalk_register_sysctl(void)
+ {
+ 	atalk_table_header = register_net_sysctl(&init_net, "net/appletalk", atalk_table);
++	if (!atalk_table_header)
++		return -ENOMEM;
++	return 0;
+ }
+ 
+ void atalk_unregister_sysctl(void)
+diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c
+index 5cf6d9f4761d..83797b3949e2 100644
+--- a/net/rxrpc/conn_client.c
++++ b/net/rxrpc/conn_client.c
+@@ -704,6 +704,7 @@ int rxrpc_connect_call(struct rxrpc_sock *rx,
+ 
+ 	ret = rxrpc_wait_for_channel(call, gfp);
+ 	if (ret < 0) {
++		trace_rxrpc_client(call->conn, ret, rxrpc_client_chan_wait_failed);
+ 		rxrpc_disconnect_client_call(call);
+ 		goto out;
+ 	}
+@@ -774,16 +775,22 @@ static void rxrpc_set_client_reap_timer(struct rxrpc_net *rxnet)
+  */
+ void rxrpc_disconnect_client_call(struct rxrpc_call *call)
+ {
+-	unsigned int channel = call->cid & RXRPC_CHANNELMASK;
+ 	struct rxrpc_connection *conn = call->conn;
+-	struct rxrpc_channel *chan = &conn->channels[channel];
++	struct rxrpc_channel *chan = NULL;
+ 	struct rxrpc_net *rxnet = conn->params.local->rxnet;
++	unsigned int channel = -1;
++	u32 cid;
+ 
++	spin_lock(&conn->channel_lock);
++
++	cid = call->cid;
++	if (cid) {
++		channel = cid & RXRPC_CHANNELMASK;
++		chan = &conn->channels[channel];
++	}
+ 	trace_rxrpc_client(conn, channel, rxrpc_client_chan_disconnect);
+ 	call->conn = NULL;
+ 
+-	spin_lock(&conn->channel_lock);
+-
+ 	/* Calls that have never actually been assigned a channel can simply be
+ 	 * discarded.  If the conn didn't get used either, it will follow
+ 	 * immediately unless someone else grabs it in the meantime.
+@@ -807,7 +814,10 @@ void rxrpc_disconnect_client_call(struct rxrpc_call *call)
+ 		goto out;
+ 	}
+ 
+-	ASSERTCMP(rcu_access_pointer(chan->call), ==, call);
++	if (rcu_access_pointer(chan->call) != call) {
++		spin_unlock(&conn->channel_lock);
++		BUG();
++	}
+ 
+ 	/* If a client call was exposed to the world, we save the result for
+ 	 * retransmission.
+diff --git a/sound/drivers/opl3/opl3_voice.h b/sound/drivers/opl3/opl3_voice.h
+index 5b02bd49fde4..4e4ecc21760b 100644
+--- a/sound/drivers/opl3/opl3_voice.h
++++ b/sound/drivers/opl3/opl3_voice.h
+@@ -41,7 +41,7 @@ void snd_opl3_timer_func(struct timer_list *t);
+ 
+ /* Prototypes for opl3_drums.c */
+ void snd_opl3_load_drums(struct snd_opl3 *opl3);
+-void snd_opl3_drum_switch(struct snd_opl3 *opl3, int note, int on_off, int vel, struct snd_midi_channel *chan);
++void snd_opl3_drum_switch(struct snd_opl3 *opl3, int note, int vel, int on_off, struct snd_midi_channel *chan);
+ 
+ /* Prototypes for opl3_oss.c */
+ #if IS_ENABLED(CONFIG_SND_SEQUENCER_OSS)
+diff --git a/sound/isa/sb/sb8.c b/sound/isa/sb/sb8.c
+index d77dcba276b5..1eb8b61a185b 100644
+--- a/sound/isa/sb/sb8.c
++++ b/sound/isa/sb/sb8.c
+@@ -111,6 +111,10 @@ static int snd_sb8_probe(struct device *pdev, unsigned int dev)
+ 
+ 	/* block the 0x388 port to avoid PnP conflicts */
+ 	acard->fm_res = request_region(0x388, 4, "SoundBlaster FM");
++	if (!acard->fm_res) {
++		err = -EBUSY;
++		goto _err;
++	}
+ 
+ 	if (port[dev] != SNDRV_AUTO_PORT) {
+ 		if ((err = snd_sbdsp_create(card, port[dev], irq[dev],
+diff --git a/sound/pci/echoaudio/echoaudio.c b/sound/pci/echoaudio/echoaudio.c
+index 907cf1a46712..3ef2b27ebbe8 100644
+--- a/sound/pci/echoaudio/echoaudio.c
++++ b/sound/pci/echoaudio/echoaudio.c
+@@ -1954,6 +1954,11 @@ static int snd_echo_create(struct snd_card *card,
+ 	}
+ 	chip->dsp_registers = (volatile u32 __iomem *)
+ 		ioremap_nocache(chip->dsp_registers_phys, sz);
++	if (!chip->dsp_registers) {
++		dev_err(chip->card->dev, "ioremap failed\n");
++		snd_echo_free(chip);
++		return -ENOMEM;
++	}
+ 
+ 	if (request_irq(pci->irq, snd_echo_interrupt, IRQF_SHARED,
+ 			KBUILD_MODNAME, chip)) {
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index 169e347c76f6..9ba1a2e1ed7a 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -627,7 +627,7 @@ bpf_object__init_maps(struct bpf_object *obj, int flags)
+ 	bool strict = !(flags & MAPS_RELAX_COMPAT);
+ 	int i, map_idx, map_def_sz, nr_maps = 0;
+ 	Elf_Scn *scn;
+-	Elf_Data *data;
++	Elf_Data *data = NULL;
+ 	Elf_Data *symbols = obj->efile.symbols;
+ 
+ 	if (obj->efile.maps_shndx < 0)
+diff --git a/tools/perf/Documentation/perf-config.txt b/tools/perf/Documentation/perf-config.txt
+index 4ac7775fbc11..4851285ba00c 100644
+--- a/tools/perf/Documentation/perf-config.txt
++++ b/tools/perf/Documentation/perf-config.txt
+@@ -114,7 +114,7 @@ Given a $HOME/.perfconfig like this:
+ 
+ 	[report]
+ 		# Defaults
+-		sort-order = comm,dso,symbol
++		sort_order = comm,dso,symbol
+ 		percent-limit = 0
+ 		queue-size = 0
+ 		children = true
+diff --git a/tools/perf/Documentation/perf-stat.txt b/tools/perf/Documentation/perf-stat.txt
+index 4bc2085e5197..39c05f89104e 100644
+--- a/tools/perf/Documentation/perf-stat.txt
++++ b/tools/perf/Documentation/perf-stat.txt
+@@ -72,9 +72,8 @@ report::
+ --all-cpus::
+         system-wide collection from all CPUs (default if no target is specified)
+ 
+--c::
+---scale::
+-	scale/normalize counter values
++--no-scale::
++	Don't scale/normalize counter values
+ 
+ -d::
+ --detailed::
+diff --git a/tools/perf/bench/epoll-ctl.c b/tools/perf/bench/epoll-ctl.c
+index 0c0a6e824934..2af067859966 100644
+--- a/tools/perf/bench/epoll-ctl.c
++++ b/tools/perf/bench/epoll-ctl.c
+@@ -224,7 +224,7 @@ static int do_threads(struct worker *worker, struct cpu_map *cpu)
+ 	pthread_attr_t thread_attr, *attrp = NULL;
+ 	cpu_set_t cpuset;
+ 	unsigned int i, j;
+-	int ret;
++	int ret = 0;
+ 
+ 	if (!noaffinity)
+ 		pthread_attr_init(&thread_attr);
+diff --git a/tools/perf/bench/epoll-wait.c b/tools/perf/bench/epoll-wait.c
+index 5a11534e96a0..fe85448abd45 100644
+--- a/tools/perf/bench/epoll-wait.c
++++ b/tools/perf/bench/epoll-wait.c
+@@ -293,7 +293,7 @@ static int do_threads(struct worker *worker, struct cpu_map *cpu)
+ 	pthread_attr_t thread_attr, *attrp = NULL;
+ 	cpu_set_t cpuset;
+ 	unsigned int i, j;
+-	int ret, events = EPOLLIN;
++	int ret = 0, events = EPOLLIN;
+ 
+ 	if (oneshot)
+ 		events |= EPOLLONESHOT;
+diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
+index 63a3afc7f32b..a52295dbad2b 100644
+--- a/tools/perf/builtin-stat.c
++++ b/tools/perf/builtin-stat.c
+@@ -728,7 +728,8 @@ static struct option stat_options[] = {
+ 		    "system-wide collection from all CPUs"),
+ 	OPT_BOOLEAN('g', "group", &group,
+ 		    "put the counters into a counter group"),
+-	OPT_BOOLEAN('c', "scale", &stat_config.scale, "scale/normalize counters"),
++	OPT_BOOLEAN(0, "scale", &stat_config.scale,
++		    "Use --no-scale to disable counter scaling for multiplexing"),
+ 	OPT_INCR('v', "verbose", &verbose,
+ 		    "be more verbose (show counter open errors, etc)"),
+ 	OPT_INTEGER('r', "repeat", &stat_config.run_count,
+diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c
+index f64e312db787..616408251e25 100644
+--- a/tools/perf/builtin-top.c
++++ b/tools/perf/builtin-top.c
+@@ -1633,8 +1633,9 @@ int cmd_top(int argc, const char **argv)
+ 	annotation_config__init();
+ 
+ 	symbol_conf.try_vmlinux_path = (symbol_conf.vmlinux_name == NULL);
+-	if (symbol__init(NULL) < 0)
+-		return -1;
++	status = symbol__init(NULL);
++	if (status < 0)
++		goto out_delete_evlist;
+ 
+ 	sort__setup_elide(stdout);
+ 
+diff --git a/tools/perf/tests/backward-ring-buffer.c b/tools/perf/tests/backward-ring-buffer.c
+index 6d598cc071ae..1a9c3becf5ff 100644
+--- a/tools/perf/tests/backward-ring-buffer.c
++++ b/tools/perf/tests/backward-ring-buffer.c
+@@ -18,7 +18,7 @@ static void testcase(void)
+ 	int i;
+ 
+ 	for (i = 0; i < NR_ITERS; i++) {
+-		char proc_name[10];
++		char proc_name[15];
+ 
+ 		snprintf(proc_name, sizeof(proc_name), "p:%d\n", i);
+ 		prctl(PR_SET_NAME, proc_name);
+diff --git a/tools/perf/tests/evsel-tp-sched.c b/tools/perf/tests/evsel-tp-sched.c
+index ea7acf403727..71f60c0f9faa 100644
+--- a/tools/perf/tests/evsel-tp-sched.c
++++ b/tools/perf/tests/evsel-tp-sched.c
+@@ -85,5 +85,6 @@ int test__perf_evsel__tp_sched_test(struct test *test __maybe_unused, int subtes
+ 	if (perf_evsel__test_field(evsel, "target_cpu", 4, true))
+ 		ret = -1;
+ 
++	perf_evsel__delete(evsel);
+ 	return ret;
+ }
+diff --git a/tools/perf/tests/expr.c b/tools/perf/tests/expr.c
+index 01f0706995a9..9acc1e80b936 100644
+--- a/tools/perf/tests/expr.c
++++ b/tools/perf/tests/expr.c
+@@ -19,7 +19,7 @@ int test__expr(struct test *t __maybe_unused, int subtest __maybe_unused)
+ 	const char *p;
+ 	const char **other;
+ 	double val;
+-	int ret;
++	int i, ret;
+ 	struct parse_ctx ctx;
+ 	int num_other;
+ 
+@@ -56,6 +56,9 @@ int test__expr(struct test *t __maybe_unused, int subtest __maybe_unused)
+ 	TEST_ASSERT_VAL("find other", !strcmp(other[1], "BAZ"));
+ 	TEST_ASSERT_VAL("find other", !strcmp(other[2], "BOZO"));
+ 	TEST_ASSERT_VAL("find other", other[3] == NULL);
++
++	for (i = 0; i < num_other; i++)
++		free((void *)other[i]);
+ 	free((void *)other);
+ 
+ 	return 0;
+diff --git a/tools/perf/tests/openat-syscall-all-cpus.c b/tools/perf/tests/openat-syscall-all-cpus.c
+index c531e6deb104..493ecb611540 100644
+--- a/tools/perf/tests/openat-syscall-all-cpus.c
++++ b/tools/perf/tests/openat-syscall-all-cpus.c
+@@ -45,7 +45,7 @@ int test__openat_syscall_event_on_all_cpus(struct test *test __maybe_unused, int
+ 	if (IS_ERR(evsel)) {
+ 		tracing_path__strerror_open_tp(errno, errbuf, sizeof(errbuf), "syscalls", "sys_enter_openat");
+ 		pr_debug("%s\n", errbuf);
+-		goto out_thread_map_delete;
++		goto out_cpu_map_delete;
+ 	}
+ 
+ 	if (perf_evsel__open(evsel, cpus, threads) < 0) {
+@@ -119,6 +119,8 @@ out_close_fd:
+ 	perf_evsel__close_fd(evsel);
+ out_evsel_delete:
+ 	perf_evsel__delete(evsel);
++out_cpu_map_delete:
++	cpu_map__put(cpus);
+ out_thread_map_delete:
+ 	thread_map__put(threads);
+ 	return err;
+diff --git a/tools/perf/util/build-id.c b/tools/perf/util/build-id.c
+index 04b1d53e4bf9..1d352621bd48 100644
+--- a/tools/perf/util/build-id.c
++++ b/tools/perf/util/build-id.c
+@@ -183,6 +183,7 @@ char *build_id_cache__linkname(const char *sbuild_id, char *bf, size_t size)
+ 	return bf;
+ }
+ 
++/* The caller is responsible to free the returned buffer. */
+ char *build_id_cache__origname(const char *sbuild_id)
+ {
+ 	char *linkname;
+diff --git a/tools/perf/util/config.c b/tools/perf/util/config.c
+index 1ea8f898f1a1..9ecdbd5986b3 100644
+--- a/tools/perf/util/config.c
++++ b/tools/perf/util/config.c
+@@ -632,11 +632,10 @@ static int collect_config(const char *var, const char *value,
+ 	}
+ 
+ 	ret = set_value(item, value);
+-	return ret;
+ 
+ out_free:
+ 	free(key);
+-	return -1;
++	return ret;
+ }
+ 
+ int perf_config_set__collect(struct perf_config_set *set, const char *file_name,
+diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
+index dbc0466db368..50c933044f88 100644
+--- a/tools/perf/util/evsel.c
++++ b/tools/perf/util/evsel.c
+@@ -1289,6 +1289,7 @@ void perf_evsel__exit(struct perf_evsel *evsel)
+ {
+ 	assert(list_empty(&evsel->node));
+ 	assert(evsel->evlist == NULL);
++	perf_evsel__free_counts(evsel);
+ 	perf_evsel__free_fd(evsel);
+ 	perf_evsel__free_id(evsel);
+ 	perf_evsel__free_config_terms(evsel);
+@@ -1341,8 +1342,7 @@ void perf_counts_values__scale(struct perf_counts_values *count,
+ 			scaled = 1;
+ 			count->val = (u64)((double) count->val * count->ena / count->run + 0.5);
+ 		}
+-	} else
+-		count->ena = count->run = 0;
++	}
+ 
+ 	if (pscaled)
+ 		*pscaled = scaled;
+diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
+index 8aad8330e392..e416e76f5600 100644
+--- a/tools/perf/util/hist.c
++++ b/tools/perf/util/hist.c
+@@ -1048,8 +1048,10 @@ int hist_entry_iter__add(struct hist_entry_iter *iter, struct addr_location *al,
+ 
+ 	err = sample__resolve_callchain(iter->sample, &callchain_cursor, &iter->parent,
+ 					iter->evsel, al, max_stack_depth);
+-	if (err)
++	if (err) {
++		map__put(alm);
+ 		return err;
++	}
+ 
+ 	err = iter->ops->prepare_entry(iter, al);
+ 	if (err)
+diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
+index 6751301a755c..2b37f56f0549 100644
+--- a/tools/perf/util/map.c
++++ b/tools/perf/util/map.c
+@@ -571,10 +571,25 @@ static void __maps__purge(struct maps *maps)
+ 	}
+ }
+ 
++static void __maps__purge_names(struct maps *maps)
++{
++	struct rb_root *root = &maps->names;
++	struct rb_node *next = rb_first(root);
++
++	while (next) {
++		struct map *pos = rb_entry(next, struct map, rb_node_name);
++
++		next = rb_next(&pos->rb_node_name);
++		rb_erase_init(&pos->rb_node_name, root);
++		map__put(pos);
++	}
++}
++
+ static void maps__exit(struct maps *maps)
+ {
+ 	down_write(&maps->lock);
+ 	__maps__purge(maps);
++	__maps__purge_names(maps);
+ 	up_write(&maps->lock);
+ }
+ 
+@@ -911,6 +926,9 @@ static void __maps__remove(struct maps *maps, struct map *map)
+ {
+ 	rb_erase_init(&map->rb_node, &maps->entries);
+ 	map__put(map);
++
++	rb_erase_init(&map->rb_node_name, &maps->names);
++	map__put(map);
+ }
+ 
+ void maps__remove(struct maps *maps, struct map *map)
+diff --git a/tools/perf/util/ordered-events.c b/tools/perf/util/ordered-events.c
+index ea523d3b248f..989fed6f43b5 100644
+--- a/tools/perf/util/ordered-events.c
++++ b/tools/perf/util/ordered-events.c
+@@ -270,6 +270,8 @@ static int __ordered_events__flush(struct ordered_events *oe, enum oe_flush how,
+ 		"FINAL",
+ 		"ROUND",
+ 		"HALF ",
++		"TOP  ",
++		"TIME ",
+ 	};
+ 	int err;
+ 	bool show_progress = false;
+diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
+index 920e1e6551dd..03860313313c 100644
+--- a/tools/perf/util/parse-events.c
++++ b/tools/perf/util/parse-events.c
+@@ -2271,6 +2271,7 @@ static bool is_event_supported(u8 type, unsigned config)
+ 		perf_evsel__delete(evsel);
+ 	}
+ 
++	thread_map__put(tmap);
+ 	return ret;
+ }
+ 
+@@ -2341,6 +2342,7 @@ void print_sdt_events(const char *subsys_glob, const char *event_glob,
+ 				printf("  %-50s [%s]\n", buf, "SDT event");
+ 				free(buf);
+ 			}
++			free(path);
+ 		} else
+ 			printf("  %-50s [%s]\n", nd->s, "SDT event");
+ 		if (nd2) {
+diff --git a/tools/perf/util/stat.c b/tools/perf/util/stat.c
+index 4d40515307b8..2856cc9d5a31 100644
+--- a/tools/perf/util/stat.c
++++ b/tools/perf/util/stat.c
+@@ -291,10 +291,8 @@ process_counter_values(struct perf_stat_config *config, struct perf_evsel *evsel
+ 		break;
+ 	case AGGR_GLOBAL:
+ 		aggr->val += count->val;
+-		if (config->scale) {
+-			aggr->ena += count->ena;
+-			aggr->run += count->run;
+-		}
++		aggr->ena += count->ena;
++		aggr->run += count->run;
+ 	case AGGR_UNSET:
+ 	default:
+ 		break;
+@@ -442,10 +440,8 @@ int create_perf_stat_counter(struct perf_evsel *evsel,
+ 	struct perf_event_attr *attr = &evsel->attr;
+ 	struct perf_evsel *leader = evsel->leader;
+ 
+-	if (config->scale) {
+-		attr->read_format = PERF_FORMAT_TOTAL_TIME_ENABLED |
+-				    PERF_FORMAT_TOTAL_TIME_RUNNING;
+-	}
++	attr->read_format = PERF_FORMAT_TOTAL_TIME_ENABLED |
++			    PERF_FORMAT_TOTAL_TIME_RUNNING;
+ 
+ 	/*
+ 	 * The event is part of non trivial group, let's enable
+diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
+index 9327c0ddc3a5..c3fad065c89c 100644
+--- a/tools/power/x86/turbostat/turbostat.c
++++ b/tools/power/x86/turbostat/turbostat.c
+@@ -5077,6 +5077,9 @@ int fork_it(char **argv)
+ 		signal(SIGQUIT, SIG_IGN);
+ 		if (waitpid(child_pid, &status, 0) == -1)
+ 			err(status, "waitpid");
++
++		if (WIFEXITED(status))
++			status = WEXITSTATUS(status);
+ 	}
+ 	/*
+ 	 * n.b. fork_it() does not check for errors from for_all_cpus()



* [gentoo-commits] proj/linux-patches:5.0 commit in: /
@ 2019-04-27 17:38 Mike Pagano
  0 siblings, 0 replies; 28+ messages in thread
From: Mike Pagano @ 2019-04-27 17:38 UTC (permalink / raw
  To: gentoo-commits

commit:     34d9261639ae90116c1b17c082767e44530b9116
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Apr 27 17:38:27 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Apr 27 17:38:27 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=34d92616

Linux patch 5.0.10

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1009_linux-5.0.10.patch | 4117 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4121 insertions(+)

diff --git a/0000_README b/0000_README
index dda69ae..49a76eb 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch:  1008_linux-5.0.9.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.0.9
 
+Patch:  1009_linux-5.0.10.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.0.10
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1009_linux-5.0.10.patch b/1009_linux-5.0.10.patch
new file mode 100644
index 0000000..0659014
--- /dev/null
+++ b/1009_linux-5.0.10.patch
@@ -0,0 +1,4117 @@
+diff --git a/Makefile b/Makefile
+index ef192ca04330..b282c4143b21 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 0
+-SUBLEVEL = 9
++SUBLEVEL = 10
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+@@ -678,8 +678,7 @@ KBUILD_CFLAGS	+= $(call cc-disable-warning, format-overflow)
+ KBUILD_CFLAGS	+= $(call cc-disable-warning, int-in-bool-context)
+ 
+ ifdef CONFIG_CC_OPTIMIZE_FOR_SIZE
+-KBUILD_CFLAGS	+= $(call cc-option,-Oz,-Os)
+-KBUILD_CFLAGS	+= $(call cc-disable-warning,maybe-uninitialized,)
++KBUILD_CFLAGS	+= -Os $(call cc-disable-warning,maybe-uninitialized,)
+ else
+ ifdef CONFIG_PROFILE_ALL_BRANCHES
+ KBUILD_CFLAGS	+= -O2 $(call cc-disable-warning,maybe-uninitialized,)
+diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
+index e1d95f08f8e1..c7e1a7837706 100644
+--- a/arch/arm64/include/asm/futex.h
++++ b/arch/arm64/include/asm/futex.h
+@@ -50,7 +50,7 @@ do {									\
+ static inline int
+ arch_futex_atomic_op_inuser(int op, int oparg, int *oval, u32 __user *_uaddr)
+ {
+-	int oldval, ret, tmp;
++	int oldval = 0, ret, tmp;
+ 	u32 __user *uaddr = __uaccess_mask_ptr(_uaddr);
+ 
+ 	pagefault_disable();
+diff --git a/arch/s390/boot/mem_detect.c b/arch/s390/boot/mem_detect.c
+index 4cb771ba13fa..5d316fe40480 100644
+--- a/arch/s390/boot/mem_detect.c
++++ b/arch/s390/boot/mem_detect.c
+@@ -25,7 +25,7 @@ static void *mem_detect_alloc_extended(void)
+ {
+ 	unsigned long offset = ALIGN(mem_safe_offset(), sizeof(u64));
+ 
+-	if (IS_ENABLED(BLK_DEV_INITRD) && INITRD_START && INITRD_SIZE &&
++	if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && INITRD_START && INITRD_SIZE &&
+ 	    INITRD_START < offset + ENTRIES_EXTENDED_MAX)
+ 		offset = ALIGN(INITRD_START + INITRD_SIZE, sizeof(u64));
+ 
+diff --git a/arch/x86/crypto/poly1305-avx2-x86_64.S b/arch/x86/crypto/poly1305-avx2-x86_64.S
+index 3b6e70d085da..8457cdd47f75 100644
+--- a/arch/x86/crypto/poly1305-avx2-x86_64.S
++++ b/arch/x86/crypto/poly1305-avx2-x86_64.S
+@@ -323,6 +323,12 @@ ENTRY(poly1305_4block_avx2)
+ 	vpaddq		t2,t1,t1
+ 	vmovq		t1x,d4
+ 
++	# Now do a partial reduction mod (2^130)-5, carrying h0 -> h1 -> h2 ->
++	# h3 -> h4 -> h0 -> h1 to get h0,h2,h3,h4 < 2^26 and h1 < 2^26 + a small
++	# amount.  Careful: we must not assume the carry bits 'd0 >> 26',
++	# 'd1 >> 26', 'd2 >> 26', 'd3 >> 26', and '(d4 >> 26) * 5' fit in 32-bit
++	# integers.  It's true in a single-block implementation, but not here.
++
+ 	# d1 += d0 >> 26
+ 	mov		d0,%rax
+ 	shr		$26,%rax
+@@ -361,16 +367,16 @@ ENTRY(poly1305_4block_avx2)
+ 	# h0 += (d4 >> 26) * 5
+ 	mov		d4,%rax
+ 	shr		$26,%rax
+-	lea		(%eax,%eax,4),%eax
+-	add		%eax,%ebx
++	lea		(%rax,%rax,4),%rax
++	add		%rax,%rbx
+ 	# h4 = d4 & 0x3ffffff
+ 	mov		d4,%rax
+ 	and		$0x3ffffff,%eax
+ 	mov		%eax,h4
+ 
+ 	# h1 += h0 >> 26
+-	mov		%ebx,%eax
+-	shr		$26,%eax
++	mov		%rbx,%rax
++	shr		$26,%rax
+ 	add		%eax,h1
+ 	# h0 = h0 & 0x3ffffff
+ 	andl		$0x3ffffff,%ebx
+diff --git a/arch/x86/crypto/poly1305-sse2-x86_64.S b/arch/x86/crypto/poly1305-sse2-x86_64.S
+index c88c670cb5fc..5851c7418fb7 100644
+--- a/arch/x86/crypto/poly1305-sse2-x86_64.S
++++ b/arch/x86/crypto/poly1305-sse2-x86_64.S
+@@ -253,16 +253,16 @@ ENTRY(poly1305_block_sse2)
+ 	# h0 += (d4 >> 26) * 5
+ 	mov		d4,%rax
+ 	shr		$26,%rax
+-	lea		(%eax,%eax,4),%eax
+-	add		%eax,%ebx
++	lea		(%rax,%rax,4),%rax
++	add		%rax,%rbx
+ 	# h4 = d4 & 0x3ffffff
+ 	mov		d4,%rax
+ 	and		$0x3ffffff,%eax
+ 	mov		%eax,h4
+ 
+ 	# h1 += h0 >> 26
+-	mov		%ebx,%eax
+-	shr		$26,%eax
++	mov		%rbx,%rax
++	shr		$26,%rax
+ 	add		%eax,h1
+ 	# h0 = h0 & 0x3ffffff
+ 	andl		$0x3ffffff,%ebx
+@@ -520,6 +520,12 @@ ENTRY(poly1305_2block_sse2)
+ 	paddq		t2,t1
+ 	movq		t1,d4
+ 
++	# Now do a partial reduction mod (2^130)-5, carrying h0 -> h1 -> h2 ->
++	# h3 -> h4 -> h0 -> h1 to get h0,h2,h3,h4 < 2^26 and h1 < 2^26 + a small
++	# amount.  Careful: we must not assume the carry bits 'd0 >> 26',
++	# 'd1 >> 26', 'd2 >> 26', 'd3 >> 26', and '(d4 >> 26) * 5' fit in 32-bit
++	# integers.  It's true in a single-block implementation, but not here.
++
+ 	# d1 += d0 >> 26
+ 	mov		d0,%rax
+ 	shr		$26,%rax
+@@ -558,16 +564,16 @@ ENTRY(poly1305_2block_sse2)
+ 	# h0 += (d4 >> 26) * 5
+ 	mov		d4,%rax
+ 	shr		$26,%rax
+-	lea		(%eax,%eax,4),%eax
+-	add		%eax,%ebx
++	lea		(%rax,%rax,4),%rax
++	add		%rax,%rbx
+ 	# h4 = d4 & 0x3ffffff
+ 	mov		d4,%rax
+ 	and		$0x3ffffff,%eax
+ 	mov		%eax,h4
+ 
+ 	# h1 += h0 >> 26
+-	mov		%ebx,%eax
+-	shr		$26,%eax
++	mov		%rbx,%rax
++	shr		$26,%rax
+ 	add		%eax,h1
+ 	# h0 = h0 & 0x3ffffff
+ 	andl		$0x3ffffff,%ebx
+diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
+index 0ecfac84ba91..d45f3fbd232e 100644
+--- a/arch/x86/events/amd/core.c
++++ b/arch/x86/events/amd/core.c
+@@ -117,22 +117,39 @@ static __initconst const u64 amd_hw_cache_event_ids
+ };
+ 
+ /*
+- * AMD Performance Monitor K7 and later.
++ * AMD Performance Monitor K7 and later, up to and including Family 16h:
+  */
+ static const u64 amd_perfmon_event_map[PERF_COUNT_HW_MAX] =
+ {
+-  [PERF_COUNT_HW_CPU_CYCLES]			= 0x0076,
+-  [PERF_COUNT_HW_INSTRUCTIONS]			= 0x00c0,
+-  [PERF_COUNT_HW_CACHE_REFERENCES]		= 0x077d,
+-  [PERF_COUNT_HW_CACHE_MISSES]			= 0x077e,
+-  [PERF_COUNT_HW_BRANCH_INSTRUCTIONS]		= 0x00c2,
+-  [PERF_COUNT_HW_BRANCH_MISSES]			= 0x00c3,
+-  [PERF_COUNT_HW_STALLED_CYCLES_FRONTEND]	= 0x00d0, /* "Decoder empty" event */
+-  [PERF_COUNT_HW_STALLED_CYCLES_BACKEND]	= 0x00d1, /* "Dispatch stalls" event */
++	[PERF_COUNT_HW_CPU_CYCLES]		= 0x0076,
++	[PERF_COUNT_HW_INSTRUCTIONS]		= 0x00c0,
++	[PERF_COUNT_HW_CACHE_REFERENCES]	= 0x077d,
++	[PERF_COUNT_HW_CACHE_MISSES]		= 0x077e,
++	[PERF_COUNT_HW_BRANCH_INSTRUCTIONS]	= 0x00c2,
++	[PERF_COUNT_HW_BRANCH_MISSES]		= 0x00c3,
++	[PERF_COUNT_HW_STALLED_CYCLES_FRONTEND]	= 0x00d0, /* "Decoder empty" event */
++	[PERF_COUNT_HW_STALLED_CYCLES_BACKEND]	= 0x00d1, /* "Dispatch stalls" event */
++};
++
++/*
++ * AMD Performance Monitor Family 17h and later:
++ */
++static const u64 amd_f17h_perfmon_event_map[PERF_COUNT_HW_MAX] =
++{
++	[PERF_COUNT_HW_CPU_CYCLES]		= 0x0076,
++	[PERF_COUNT_HW_INSTRUCTIONS]		= 0x00c0,
++	[PERF_COUNT_HW_CACHE_REFERENCES]	= 0xff60,
++	[PERF_COUNT_HW_BRANCH_INSTRUCTIONS]	= 0x00c2,
++	[PERF_COUNT_HW_BRANCH_MISSES]		= 0x00c3,
++	[PERF_COUNT_HW_STALLED_CYCLES_FRONTEND]	= 0x0287,
++	[PERF_COUNT_HW_STALLED_CYCLES_BACKEND]	= 0x0187,
+ };
+ 
+ static u64 amd_pmu_event_map(int hw_event)
+ {
++	if (boot_cpu_data.x86 >= 0x17)
++		return amd_f17h_perfmon_event_map[hw_event];
++
+ 	return amd_perfmon_event_map[hw_event];
+ }
+ 
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index 2480feb07df3..470d7daa915d 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -3130,7 +3130,7 @@ static unsigned long intel_pmu_large_pebs_flags(struct perf_event *event)
+ 		flags &= ~PERF_SAMPLE_TIME;
+ 	if (!event->attr.exclude_kernel)
+ 		flags &= ~PERF_SAMPLE_REGS_USER;
+-	if (event->attr.sample_regs_user & ~PEBS_REGS)
++	if (event->attr.sample_regs_user & ~PEBS_GP_REGS)
+ 		flags &= ~(PERF_SAMPLE_REGS_USER | PERF_SAMPLE_REGS_INTR);
+ 	return flags;
+ }
+diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
+index acd72e669c04..b68ab65454ff 100644
+--- a/arch/x86/events/perf_event.h
++++ b/arch/x86/events/perf_event.h
+@@ -96,25 +96,25 @@ struct amd_nb {
+ 	PERF_SAMPLE_REGS_INTR | PERF_SAMPLE_REGS_USER | \
+ 	PERF_SAMPLE_PERIOD)
+ 
+-#define PEBS_REGS \
+-	(PERF_REG_X86_AX | \
+-	 PERF_REG_X86_BX | \
+-	 PERF_REG_X86_CX | \
+-	 PERF_REG_X86_DX | \
+-	 PERF_REG_X86_DI | \
+-	 PERF_REG_X86_SI | \
+-	 PERF_REG_X86_SP | \
+-	 PERF_REG_X86_BP | \
+-	 PERF_REG_X86_IP | \
+-	 PERF_REG_X86_FLAGS | \
+-	 PERF_REG_X86_R8 | \
+-	 PERF_REG_X86_R9 | \
+-	 PERF_REG_X86_R10 | \
+-	 PERF_REG_X86_R11 | \
+-	 PERF_REG_X86_R12 | \
+-	 PERF_REG_X86_R13 | \
+-	 PERF_REG_X86_R14 | \
+-	 PERF_REG_X86_R15)
++#define PEBS_GP_REGS			\
++	((1ULL << PERF_REG_X86_AX)    | \
++	 (1ULL << PERF_REG_X86_BX)    | \
++	 (1ULL << PERF_REG_X86_CX)    | \
++	 (1ULL << PERF_REG_X86_DX)    | \
++	 (1ULL << PERF_REG_X86_DI)    | \
++	 (1ULL << PERF_REG_X86_SI)    | \
++	 (1ULL << PERF_REG_X86_SP)    | \
++	 (1ULL << PERF_REG_X86_BP)    | \
++	 (1ULL << PERF_REG_X86_IP)    | \
++	 (1ULL << PERF_REG_X86_FLAGS) | \
++	 (1ULL << PERF_REG_X86_R8)    | \
++	 (1ULL << PERF_REG_X86_R9)    | \
++	 (1ULL << PERF_REG_X86_R10)   | \
++	 (1ULL << PERF_REG_X86_R11)   | \
++	 (1ULL << PERF_REG_X86_R12)   | \
++	 (1ULL << PERF_REG_X86_R13)   | \
++	 (1ULL << PERF_REG_X86_R14)   | \
++	 (1ULL << PERF_REG_X86_R15))
+ 
+ /*
+  * Per register state.
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 01874d54f4fd..482383c2b184 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -275,7 +275,7 @@ static const struct {
+ 	const char			*option;
+ 	enum spectre_v2_user_cmd	cmd;
+ 	bool				secure;
+-} v2_user_options[] __initdata = {
++} v2_user_options[] __initconst = {
+ 	{ "auto",		SPECTRE_V2_USER_CMD_AUTO,		false },
+ 	{ "off",		SPECTRE_V2_USER_CMD_NONE,		false },
+ 	{ "on",			SPECTRE_V2_USER_CMD_FORCE,		true  },
+@@ -419,7 +419,7 @@ static const struct {
+ 	const char *option;
+ 	enum spectre_v2_mitigation_cmd cmd;
+ 	bool secure;
+-} mitigation_options[] __initdata = {
++} mitigation_options[] __initconst = {
+ 	{ "off",		SPECTRE_V2_CMD_NONE,		  false },
+ 	{ "on",			SPECTRE_V2_CMD_FORCE,		  true  },
+ 	{ "retpoline",		SPECTRE_V2_CMD_RETPOLINE,	  false },
+@@ -658,7 +658,7 @@ static const char * const ssb_strings[] = {
+ static const struct {
+ 	const char *option;
+ 	enum ssb_mitigation_cmd cmd;
+-} ssb_mitigation_options[]  __initdata = {
++} ssb_mitigation_options[]  __initconst = {
+ 	{ "auto",	SPEC_STORE_BYPASS_CMD_AUTO },    /* Platform decides */
+ 	{ "on",		SPEC_STORE_BYPASS_CMD_ON },      /* Disable Speculative Store Bypass */
+ 	{ "off",	SPEC_STORE_BYPASS_CMD_NONE },    /* Don't touch Speculative Store Bypass */
+diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
+index 4ba75afba527..f4b954ff5b89 100644
+--- a/arch/x86/kernel/kprobes/core.c
++++ b/arch/x86/kernel/kprobes/core.c
+@@ -569,6 +569,7 @@ void arch_prepare_kretprobe(struct kretprobe_instance *ri, struct pt_regs *regs)
+ 	unsigned long *sara = stack_addr(regs);
+ 
+ 	ri->ret_addr = (kprobe_opcode_t *) *sara;
++	ri->fp = sara;
+ 
+ 	/* Replace the return addr with trampoline addr */
+ 	*sara = (unsigned long) &kretprobe_trampoline;
+@@ -748,26 +749,48 @@ asm(
+ NOKPROBE_SYMBOL(kretprobe_trampoline);
+ STACK_FRAME_NON_STANDARD(kretprobe_trampoline);
+ 
++static struct kprobe kretprobe_kprobe = {
++	.addr = (void *)kretprobe_trampoline,
++};
++
+ /*
+  * Called from kretprobe_trampoline
+  */
+ static __used void *trampoline_handler(struct pt_regs *regs)
+ {
++	struct kprobe_ctlblk *kcb;
+ 	struct kretprobe_instance *ri = NULL;
+ 	struct hlist_head *head, empty_rp;
+ 	struct hlist_node *tmp;
+ 	unsigned long flags, orig_ret_address = 0;
+ 	unsigned long trampoline_address = (unsigned long)&kretprobe_trampoline;
+ 	kprobe_opcode_t *correct_ret_addr = NULL;
++	void *frame_pointer;
++	bool skipped = false;
++
++	preempt_disable();
++
++	/*
++	 * Set a dummy kprobe for avoiding kretprobe recursion.
++	 * Since kretprobe never run in kprobe handler, kprobe must not
++	 * be running at this point.
++	 */
++	kcb = get_kprobe_ctlblk();
++	__this_cpu_write(current_kprobe, &kretprobe_kprobe);
++	kcb->kprobe_status = KPROBE_HIT_ACTIVE;
+ 
+ 	INIT_HLIST_HEAD(&empty_rp);
+ 	kretprobe_hash_lock(current, &head, &flags);
+ 	/* fixup registers */
+ #ifdef CONFIG_X86_64
+ 	regs->cs = __KERNEL_CS;
++	/* On x86-64, we use pt_regs->sp for return address holder. */
++	frame_pointer = &regs->sp;
+ #else
+ 	regs->cs = __KERNEL_CS | get_kernel_rpl();
+ 	regs->gs = 0;
++	/* On x86-32, we use pt_regs->flags for return address holder. */
++	frame_pointer = &regs->flags;
+ #endif
+ 	regs->ip = trampoline_address;
+ 	regs->orig_ax = ~0UL;
+@@ -789,8 +812,25 @@ static __used void *trampoline_handler(struct pt_regs *regs)
+ 		if (ri->task != current)
+ 			/* another task is sharing our hash bucket */
+ 			continue;
++		/*
++		 * Return probes must be pushed on this hash list correct
++		 * order (same as return order) so that it can be poped
++		 * correctly. However, if we find it is pushed it incorrect
++		 * order, this means we find a function which should not be
++		 * probed, because the wrong order entry is pushed on the
++		 * path of processing other kretprobe itself.
++		 */
++		if (ri->fp != frame_pointer) {
++			if (!skipped)
++				pr_warn("kretprobe is stacked incorrectly. Trying to fixup.\n");
++			skipped = true;
++			continue;
++		}
+ 
+ 		orig_ret_address = (unsigned long)ri->ret_addr;
++		if (skipped)
++			pr_warn("%ps must be blacklisted because of incorrect kretprobe order\n",
++				ri->rp->kp.addr);
+ 
+ 		if (orig_ret_address != trampoline_address)
+ 			/*
+@@ -808,14 +848,15 @@ static __used void *trampoline_handler(struct pt_regs *regs)
+ 		if (ri->task != current)
+ 			/* another task is sharing our hash bucket */
+ 			continue;
++		if (ri->fp != frame_pointer)
++			continue;
+ 
+ 		orig_ret_address = (unsigned long)ri->ret_addr;
+ 		if (ri->rp && ri->rp->handler) {
+ 			__this_cpu_write(current_kprobe, &ri->rp->kp);
+-			get_kprobe_ctlblk()->kprobe_status = KPROBE_HIT_ACTIVE;
+ 			ri->ret_addr = correct_ret_addr;
+ 			ri->rp->handler(ri, regs);
+-			__this_cpu_write(current_kprobe, NULL);
++			__this_cpu_write(current_kprobe, &kretprobe_kprobe);
+ 		}
+ 
+ 		recycle_rp_inst(ri, &empty_rp);
+@@ -831,6 +872,9 @@ static __used void *trampoline_handler(struct pt_regs *regs)
+ 
+ 	kretprobe_hash_unlock(current, &flags);
+ 
++	__this_cpu_write(current_kprobe, NULL);
++	preempt_enable();
++
+ 	hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) {
+ 		hlist_del(&ri->hlist);
+ 		kfree(ri);
+diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
+index 90ae0ca51083..9db049f06f2f 100644
+--- a/arch/x86/kernel/process.c
++++ b/arch/x86/kernel/process.c
+@@ -414,6 +414,8 @@ static __always_inline void __speculation_ctrl_update(unsigned long tifp,
+ 	u64 msr = x86_spec_ctrl_base;
+ 	bool updmsr = false;
+ 
++	lockdep_assert_irqs_disabled();
++
+ 	/*
+ 	 * If TIF_SSBD is different, select the proper mitigation
+ 	 * method. Note that if SSBD mitigation is disabled or permanentely
+@@ -465,10 +467,12 @@ static unsigned long speculation_ctrl_update_tif(struct task_struct *tsk)
+ 
+ void speculation_ctrl_update(unsigned long tif)
+ {
++	unsigned long flags;
++
+ 	/* Forced update. Make sure all relevant TIF flags are different */
+-	preempt_disable();
++	local_irq_save(flags);
+ 	__speculation_ctrl_update(~tif, tif);
+-	preempt_enable();
++	local_irq_restore(flags);
+ }
+ 
+ /* Called from seccomp/prctl update */
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index c338984c850d..81be2165821f 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -2575,15 +2575,13 @@ static int em_rsm(struct x86_emulate_ctxt *ctxt)
+ 	 * CR0/CR3/CR4/EFER.  It's all a bit more complicated if the vCPU
+ 	 * supports long mode.
+ 	 */
+-	cr4 = ctxt->ops->get_cr(ctxt, 4);
+ 	if (emulator_has_longmode(ctxt)) {
+ 		struct desc_struct cs_desc;
+ 
+ 		/* Zero CR4.PCIDE before CR0.PG.  */
+-		if (cr4 & X86_CR4_PCIDE) {
++		cr4 = ctxt->ops->get_cr(ctxt, 4);
++		if (cr4 & X86_CR4_PCIDE)
+ 			ctxt->ops->set_cr(ctxt, 4, cr4 & ~X86_CR4_PCIDE);
+-			cr4 &= ~X86_CR4_PCIDE;
+-		}
+ 
+ 		/* A 32-bit code segment is required to clear EFER.LMA.  */
+ 		memset(&cs_desc, 0, sizeof(cs_desc));
+@@ -2597,13 +2595,16 @@ static int em_rsm(struct x86_emulate_ctxt *ctxt)
+ 	if (cr0 & X86_CR0_PE)
+ 		ctxt->ops->set_cr(ctxt, 0, cr0 & ~(X86_CR0_PG | X86_CR0_PE));
+ 
+-	/* Now clear CR4.PAE (which must be done before clearing EFER.LME).  */
+-	if (cr4 & X86_CR4_PAE)
+-		ctxt->ops->set_cr(ctxt, 4, cr4 & ~X86_CR4_PAE);
++	if (emulator_has_longmode(ctxt)) {
++		/* Clear CR4.PAE before clearing EFER.LME. */
++		cr4 = ctxt->ops->get_cr(ctxt, 4);
++		if (cr4 & X86_CR4_PAE)
++			ctxt->ops->set_cr(ctxt, 4, cr4 & ~X86_CR4_PAE);
+ 
+-	/* And finally go back to 32-bit mode.  */
+-	efer = 0;
+-	ctxt->ops->set_msr(ctxt, MSR_EFER, efer);
++		/* And finally go back to 32-bit mode.  */
++		efer = 0;
++		ctxt->ops->set_msr(ctxt, MSR_EFER, efer);
++	}
+ 
+ 	smbase = ctxt->ops->get_smbase(ctxt);
+ 
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index a9b8e38d78ad..516c1de03d47 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -2687,6 +2687,7 @@ static int npf_interception(struct vcpu_svm *svm)
+ static int db_interception(struct vcpu_svm *svm)
+ {
+ 	struct kvm_run *kvm_run = svm->vcpu.run;
++	struct kvm_vcpu *vcpu = &svm->vcpu;
+ 
+ 	if (!(svm->vcpu.guest_debug &
+ 	      (KVM_GUESTDBG_SINGLESTEP | KVM_GUESTDBG_USE_HW_BP)) &&
+@@ -2697,6 +2698,8 @@ static int db_interception(struct vcpu_svm *svm)
+ 
+ 	if (svm->nmi_singlestep) {
+ 		disable_nmi_singlestep(svm);
++		/* Make sure we check for pending NMIs upon entry */
++		kvm_make_request(KVM_REQ_EVENT, vcpu);
+ 	}
+ 
+ 	if (svm->vcpu.guest_debug &
+@@ -4512,14 +4515,25 @@ static int avic_incomplete_ipi_interception(struct vcpu_svm *svm)
+ 		kvm_lapic_reg_write(apic, APIC_ICR, icrl);
+ 		break;
+ 	case AVIC_IPI_FAILURE_TARGET_NOT_RUNNING: {
++		int i;
++		struct kvm_vcpu *vcpu;
++		struct kvm *kvm = svm->vcpu.kvm;
+ 		struct kvm_lapic *apic = svm->vcpu.arch.apic;
+ 
+ 		/*
+-		 * Update ICR high and low, then emulate sending IPI,
+-		 * which is handled when writing APIC_ICR.
++		 * At this point, we expect that the AVIC HW has already
++		 * set the appropriate IRR bits on the valid target
++		 * vcpus. So, we just need to kick the appropriate vcpu.
+ 		 */
+-		kvm_lapic_reg_write(apic, APIC_ICR2, icrh);
+-		kvm_lapic_reg_write(apic, APIC_ICR, icrl);
++		kvm_for_each_vcpu(i, vcpu, kvm) {
++			bool m = kvm_apic_match_dest(vcpu, apic,
++						     icrl & KVM_APIC_SHORT_MASK,
++						     GET_APIC_DEST_FIELD(icrh),
++						     icrl & KVM_APIC_DEST_MASK);
++
++			if (m && !avic_vcpu_is_running(vcpu))
++				kvm_vcpu_wake_up(vcpu);
++		}
+ 		break;
+ 	}
+ 	case AVIC_IPI_FAILURE_INVALID_TARGET:
+@@ -5620,6 +5634,7 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
+ 	svm->vmcb->save.cr2 = vcpu->arch.cr2;
+ 
+ 	clgi();
++	kvm_load_guest_xcr0(vcpu);
+ 
+ 	/*
+ 	 * If this vCPU has touched SPEC_CTRL, restore the guest's value if
+@@ -5765,6 +5780,7 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
+ 	if (unlikely(svm->vmcb->control.exit_code == SVM_EXIT_NMI))
+ 		kvm_before_interrupt(&svm->vcpu);
+ 
++	kvm_put_guest_xcr0(vcpu);
+ 	stgi();
+ 
+ 	/* Any pending NMI will happen here */
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index a0a770816429..34499081022c 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -6548,6 +6548,8 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
+ 	if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP)
+ 		vmx_set_interrupt_shadow(vcpu, 0);
+ 
++	kvm_load_guest_xcr0(vcpu);
++
+ 	if (static_cpu_has(X86_FEATURE_PKU) &&
+ 	    kvm_read_cr4_bits(vcpu, X86_CR4_PKE) &&
+ 	    vcpu->arch.pkru != vmx->host_pkru)
+@@ -6635,6 +6637,8 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
+ 			__write_pkru(vmx->host_pkru);
+ 	}
+ 
++	kvm_put_guest_xcr0(vcpu);
++
+ 	vmx->nested.nested_run_pending = 0;
+ 	vmx->idt_vectoring_info = 0;
+ 
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 7ee802a92bc8..2db58067bb59 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -800,7 +800,7 @@ void kvm_lmsw(struct kvm_vcpu *vcpu, unsigned long msw)
+ }
+ EXPORT_SYMBOL_GPL(kvm_lmsw);
+ 
+-static void kvm_load_guest_xcr0(struct kvm_vcpu *vcpu)
++void kvm_load_guest_xcr0(struct kvm_vcpu *vcpu)
+ {
+ 	if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE) &&
+ 			!vcpu->guest_xcr0_loaded) {
+@@ -810,8 +810,9 @@ static void kvm_load_guest_xcr0(struct kvm_vcpu *vcpu)
+ 		vcpu->guest_xcr0_loaded = 1;
+ 	}
+ }
++EXPORT_SYMBOL_GPL(kvm_load_guest_xcr0);
+ 
+-static void kvm_put_guest_xcr0(struct kvm_vcpu *vcpu)
++void kvm_put_guest_xcr0(struct kvm_vcpu *vcpu)
+ {
+ 	if (vcpu->guest_xcr0_loaded) {
+ 		if (vcpu->arch.xcr0 != host_xcr0)
+@@ -819,6 +820,7 @@ static void kvm_put_guest_xcr0(struct kvm_vcpu *vcpu)
+ 		vcpu->guest_xcr0_loaded = 0;
+ 	}
+ }
++EXPORT_SYMBOL_GPL(kvm_put_guest_xcr0);
+ 
+ static int __kvm_set_xcr(struct kvm_vcpu *vcpu, u32 index, u64 xcr)
+ {
+@@ -7856,8 +7858,6 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
+ 		goto cancel_injection;
+ 	}
+ 
+-	kvm_load_guest_xcr0(vcpu);
+-
+ 	if (req_immediate_exit) {
+ 		kvm_make_request(KVM_REQ_EVENT, vcpu);
+ 		kvm_x86_ops->request_immediate_exit(vcpu);
+@@ -7910,8 +7910,6 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
+ 	vcpu->mode = OUTSIDE_GUEST_MODE;
+ 	smp_wmb();
+ 
+-	kvm_put_guest_xcr0(vcpu);
+-
+ 	kvm_before_interrupt(vcpu);
+ 	kvm_x86_ops->handle_external_intr(vcpu);
+ 	kvm_after_interrupt(vcpu);
+diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
+index 20ede17202bf..de3d46769ee3 100644
+--- a/arch/x86/kvm/x86.h
++++ b/arch/x86/kvm/x86.h
+@@ -347,4 +347,6 @@ static inline void kvm_after_interrupt(struct kvm_vcpu *vcpu)
+ 	__this_cpu_write(current_vcpu, NULL);
+ }
+ 
++void kvm_load_guest_xcr0(struct kvm_vcpu *vcpu);
++void kvm_put_guest_xcr0(struct kvm_vcpu *vcpu);
+ #endif
+diff --git a/crypto/testmgr.h b/crypto/testmgr.h
+index ca8e8ebef309..db496aa360a3 100644
+--- a/crypto/testmgr.h
++++ b/crypto/testmgr.h
+@@ -5706,7 +5706,49 @@ static const struct hash_testvec poly1305_tv_template[] = {
+ 		.psize		= 80,
+ 		.digest		= "\x13\x00\x00\x00\x00\x00\x00\x00"
+ 				  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-	},
++	}, { /* Regression test for overflow in AVX2 implementation */
++		.plaintext	= "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff\xff\xff\xff\xff"
++				  "\xff\xff\xff\xff",
++		.psize		= 300,
++		.digest		= "\xfb\x5e\x96\xd8\x61\xd5\xc7\xc8"
++				  "\x78\xe5\x87\xcc\x2d\x5a\x22\xe1",
++	}
+ };
+ 
+ /* NHPoly1305 test vectors from https://github.com/google/adiantum */
+diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
+index f75f8f870ce3..4be4dc3e8aa6 100644
+--- a/drivers/acpi/nfit/core.c
++++ b/drivers/acpi/nfit/core.c
+@@ -1319,19 +1319,30 @@ static ssize_t scrub_show(struct device *dev,
+ 		struct device_attribute *attr, char *buf)
+ {
+ 	struct nvdimm_bus_descriptor *nd_desc;
++	struct acpi_nfit_desc *acpi_desc;
+ 	ssize_t rc = -ENXIO;
++	bool busy;
+ 
+ 	device_lock(dev);
+ 	nd_desc = dev_get_drvdata(dev);
+-	if (nd_desc) {
+-		struct acpi_nfit_desc *acpi_desc = to_acpi_desc(nd_desc);
++	if (!nd_desc) {
++		device_unlock(dev);
++		return rc;
++	}
++	acpi_desc = to_acpi_desc(nd_desc);
+ 
+-		mutex_lock(&acpi_desc->init_mutex);
+-		rc = sprintf(buf, "%d%s", acpi_desc->scrub_count,
+-				acpi_desc->scrub_busy
+-				&& !acpi_desc->cancel ? "+\n" : "\n");
+-		mutex_unlock(&acpi_desc->init_mutex);
++	mutex_lock(&acpi_desc->init_mutex);
++	busy = test_bit(ARS_BUSY, &acpi_desc->scrub_flags)
++		&& !test_bit(ARS_CANCEL, &acpi_desc->scrub_flags);
++	rc = sprintf(buf, "%d%s", acpi_desc->scrub_count, busy ? "+\n" : "\n");
++	/* Allow an admin to poll the busy state at a higher rate */
++	if (busy && capable(CAP_SYS_RAWIO) && !test_and_set_bit(ARS_POLL,
++				&acpi_desc->scrub_flags)) {
++		acpi_desc->scrub_tmo = 1;
++		mod_delayed_work(nfit_wq, &acpi_desc->dwork, HZ);
+ 	}
++
++	mutex_unlock(&acpi_desc->init_mutex);
+ 	device_unlock(dev);
+ 	return rc;
+ }
+@@ -2650,7 +2661,10 @@ static int ars_start(struct acpi_nfit_desc *acpi_desc,
+ 
+ 	if (rc < 0)
+ 		return rc;
+-	return cmd_rc;
++	if (cmd_rc < 0)
++		return cmd_rc;
++	set_bit(ARS_VALID, &acpi_desc->scrub_flags);
++	return 0;
+ }
+ 
+ static int ars_continue(struct acpi_nfit_desc *acpi_desc)
+@@ -2660,11 +2674,11 @@ static int ars_continue(struct acpi_nfit_desc *acpi_desc)
+ 	struct nvdimm_bus_descriptor *nd_desc = &acpi_desc->nd_desc;
+ 	struct nd_cmd_ars_status *ars_status = acpi_desc->ars_status;
+ 
+-	memset(&ars_start, 0, sizeof(ars_start));
+-	ars_start.address = ars_status->restart_address;
+-	ars_start.length = ars_status->restart_length;
+-	ars_start.type = ars_status->type;
+-	ars_start.flags = acpi_desc->ars_start_flags;
++	ars_start = (struct nd_cmd_ars_start) {
++		.address = ars_status->restart_address,
++		.length = ars_status->restart_length,
++		.type = ars_status->type,
++	};
+ 	rc = nd_desc->ndctl(nd_desc, NULL, ND_CMD_ARS_START, &ars_start,
+ 			sizeof(ars_start), &cmd_rc);
+ 	if (rc < 0)
+@@ -2743,6 +2757,17 @@ static int ars_status_process_records(struct acpi_nfit_desc *acpi_desc)
+ 	 */
+ 	if (ars_status->out_length < 44)
+ 		return 0;
++
++	/*
++	 * Ignore potentially stale results that are only refreshed
++	 * after a start-ARS event.
++	 */
++	if (!test_and_clear_bit(ARS_VALID, &acpi_desc->scrub_flags)) {
++		dev_dbg(acpi_desc->dev, "skip %d stale records\n",
++				ars_status->num_records);
++		return 0;
++	}
++
+ 	for (i = 0; i < ars_status->num_records; i++) {
+ 		/* only process full records */
+ 		if (ars_status->out_length
+@@ -3081,7 +3106,7 @@ static unsigned int __acpi_nfit_scrub(struct acpi_nfit_desc *acpi_desc,
+ 
+ 	lockdep_assert_held(&acpi_desc->init_mutex);
+ 
+-	if (acpi_desc->cancel)
++	if (test_bit(ARS_CANCEL, &acpi_desc->scrub_flags))
+ 		return 0;
+ 
+ 	if (query_rc == -EBUSY) {
+@@ -3155,7 +3180,7 @@ static void __sched_ars(struct acpi_nfit_desc *acpi_desc, unsigned int tmo)
+ {
+ 	lockdep_assert_held(&acpi_desc->init_mutex);
+ 
+-	acpi_desc->scrub_busy = 1;
++	set_bit(ARS_BUSY, &acpi_desc->scrub_flags);
+ 	/* note this should only be set from within the workqueue */
+ 	if (tmo)
+ 		acpi_desc->scrub_tmo = tmo;
+@@ -3171,7 +3196,7 @@ static void notify_ars_done(struct acpi_nfit_desc *acpi_desc)
+ {
+ 	lockdep_assert_held(&acpi_desc->init_mutex);
+ 
+-	acpi_desc->scrub_busy = 0;
++	clear_bit(ARS_BUSY, &acpi_desc->scrub_flags);
+ 	acpi_desc->scrub_count++;
+ 	if (acpi_desc->scrub_count_state)
+ 		sysfs_notify_dirent(acpi_desc->scrub_count_state);
+@@ -3192,6 +3217,7 @@ static void acpi_nfit_scrub(struct work_struct *work)
+ 	else
+ 		notify_ars_done(acpi_desc);
+ 	memset(acpi_desc->ars_status, 0, acpi_desc->max_ars);
++	clear_bit(ARS_POLL, &acpi_desc->scrub_flags);
+ 	mutex_unlock(&acpi_desc->init_mutex);
+ }
+ 
+@@ -3226,6 +3252,7 @@ static int acpi_nfit_register_regions(struct acpi_nfit_desc *acpi_desc)
+ 	struct nfit_spa *nfit_spa;
+ 	int rc;
+ 
++	set_bit(ARS_VALID, &acpi_desc->scrub_flags);
+ 	list_for_each_entry(nfit_spa, &acpi_desc->spas, list) {
+ 		switch (nfit_spa_type(nfit_spa->spa)) {
+ 		case NFIT_SPA_VOLATILE:
+@@ -3460,7 +3487,7 @@ int acpi_nfit_ars_rescan(struct acpi_nfit_desc *acpi_desc,
+ 	struct nfit_spa *nfit_spa;
+ 
+ 	mutex_lock(&acpi_desc->init_mutex);
+-	if (acpi_desc->cancel) {
++	if (test_bit(ARS_CANCEL, &acpi_desc->scrub_flags)) {
+ 		mutex_unlock(&acpi_desc->init_mutex);
+ 		return 0;
+ 	}
+@@ -3539,7 +3566,7 @@ void acpi_nfit_shutdown(void *data)
+ 	mutex_unlock(&acpi_desc_lock);
+ 
+ 	mutex_lock(&acpi_desc->init_mutex);
+-	acpi_desc->cancel = 1;
++	set_bit(ARS_CANCEL, &acpi_desc->scrub_flags);
+ 	cancel_delayed_work_sync(&acpi_desc->dwork);
+ 	mutex_unlock(&acpi_desc->init_mutex);
+ 
+diff --git a/drivers/acpi/nfit/nfit.h b/drivers/acpi/nfit/nfit.h
+index 33691aecfcee..0cbe5009eb2c 100644
+--- a/drivers/acpi/nfit/nfit.h
++++ b/drivers/acpi/nfit/nfit.h
+@@ -210,6 +210,13 @@ struct nfit_mem {
+ 	int family;
+ };
+ 
++enum scrub_flags {
++	ARS_BUSY,
++	ARS_CANCEL,
++	ARS_VALID,
++	ARS_POLL,
++};
++
+ struct acpi_nfit_desc {
+ 	struct nvdimm_bus_descriptor nd_desc;
+ 	struct acpi_table_header acpi_header;
+@@ -223,7 +230,6 @@ struct acpi_nfit_desc {
+ 	struct list_head idts;
+ 	struct nvdimm_bus *nvdimm_bus;
+ 	struct device *dev;
+-	u8 ars_start_flags;
+ 	struct nd_cmd_ars_status *ars_status;
+ 	struct nfit_spa *scrub_spa;
+ 	struct delayed_work dwork;
+@@ -232,8 +238,7 @@ struct acpi_nfit_desc {
+ 	unsigned int max_ars;
+ 	unsigned int scrub_count;
+ 	unsigned int scrub_mode;
+-	unsigned int scrub_busy:1;
+-	unsigned int cancel:1;
++	unsigned long scrub_flags;
+ 	unsigned long dimm_cmd_force_en;
+ 	unsigned long bus_cmd_force_en;
+ 	unsigned long bus_nfit_cmd_force_en;
+diff --git a/drivers/base/memory.c b/drivers/base/memory.c
+index 048cbf7d5233..23125f276ff1 100644
+--- a/drivers/base/memory.c
++++ b/drivers/base/memory.c
+@@ -505,7 +505,7 @@ static ssize_t probe_store(struct device *dev, struct device_attribute *attr,
+ 
+ 	ret = lock_device_hotplug_sysfs();
+ 	if (ret)
+-		goto out;
++		return ret;
+ 
+ 	nid = memory_add_physaddr_to_nid(phys_addr);
+ 	ret = __add_memory(nid, phys_addr,
+diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
+index c518659b4d9f..ff9dd9adf803 100644
+--- a/drivers/char/ipmi/ipmi_msghandler.c
++++ b/drivers/char/ipmi/ipmi_msghandler.c
+@@ -214,6 +214,9 @@ struct ipmi_user {
+ 
+ 	/* Does this interface receive IPMI events? */
+ 	bool gets_events;
++
++	/* Free must run in process context for RCU cleanup. */
++	struct work_struct remove_work;
+ };
+ 
+ static struct ipmi_user *acquire_ipmi_user(struct ipmi_user *user, int *index)
+@@ -1079,6 +1082,15 @@ static int intf_err_seq(struct ipmi_smi *intf,
+ }
+ 
+ 
++static void free_user_work(struct work_struct *work)
++{
++	struct ipmi_user *user = container_of(work, struct ipmi_user,
++					      remove_work);
++
++	cleanup_srcu_struct(&user->release_barrier);
++	kfree(user);
++}
++
+ int ipmi_create_user(unsigned int          if_num,
+ 		     const struct ipmi_user_hndl *handler,
+ 		     void                  *handler_data,
+@@ -1122,6 +1134,8 @@ int ipmi_create_user(unsigned int          if_num,
+ 	goto out_kfree;
+ 
+  found:
++	INIT_WORK(&new_user->remove_work, free_user_work);
++
+ 	rv = init_srcu_struct(&new_user->release_barrier);
+ 	if (rv)
+ 		goto out_kfree;
+@@ -1184,8 +1198,9 @@ EXPORT_SYMBOL(ipmi_get_smi_info);
+ static void free_user(struct kref *ref)
+ {
+ 	struct ipmi_user *user = container_of(ref, struct ipmi_user, refcount);
+-	cleanup_srcu_struct(&user->release_barrier);
+-	kfree(user);
++
++	/* SRCU cleanup must happen in task context. */
++	schedule_work(&user->remove_work);
+ }
+ 
+ static void _ipmi_destroy_user(struct ipmi_user *user)
+diff --git a/drivers/char/tpm/eventlog/tpm2.c b/drivers/char/tpm/eventlog/tpm2.c
+index 1b8fa9de2cac..41b9f6c92da7 100644
+--- a/drivers/char/tpm/eventlog/tpm2.c
++++ b/drivers/char/tpm/eventlog/tpm2.c
+@@ -37,8 +37,8 @@
+  *
+  * Returns size of the event. If it is an invalid event, returns 0.
+  */
+-static int calc_tpm2_event_size(struct tcg_pcr_event2 *event,
+-				struct tcg_pcr_event *event_header)
++static size_t calc_tpm2_event_size(struct tcg_pcr_event2 *event,
++				   struct tcg_pcr_event *event_header)
+ {
+ 	struct tcg_efi_specid_event *efispecid;
+ 	struct tcg_event_field *event_field;
+diff --git a/drivers/char/tpm/tpm-dev-common.c b/drivers/char/tpm/tpm-dev-common.c
+index 5eecad233ea1..744b0237300a 100644
+--- a/drivers/char/tpm/tpm-dev-common.c
++++ b/drivers/char/tpm/tpm-dev-common.c
+@@ -203,12 +203,19 @@ __poll_t tpm_common_poll(struct file *file, poll_table *wait)
+ 	__poll_t mask = 0;
+ 
+ 	poll_wait(file, &priv->async_wait, wait);
++	mutex_lock(&priv->buffer_mutex);
+ 
+-	if (!priv->response_read || priv->response_length)
++	/*
++	 * The response_length indicates if there is still response
++	 * (or part of it) to be consumed. Partial reads decrease it
++	 * by the number of bytes read, and write resets it the zero.
++	 */
++	if (priv->response_length)
+ 		mask = EPOLLIN | EPOLLRDNORM;
+ 	else
+ 		mask = EPOLLOUT | EPOLLWRNORM;
+ 
++	mutex_unlock(&priv->buffer_mutex);
+ 	return mask;
+ }
+ 
+diff --git a/drivers/char/tpm/tpm_i2c_atmel.c b/drivers/char/tpm/tpm_i2c_atmel.c
+index 32a8e27c5382..cc4e642d3180 100644
+--- a/drivers/char/tpm/tpm_i2c_atmel.c
++++ b/drivers/char/tpm/tpm_i2c_atmel.c
+@@ -69,6 +69,10 @@ static int i2c_atmel_send(struct tpm_chip *chip, u8 *buf, size_t len)
+ 	if (status < 0)
+ 		return status;
+ 
++	/* The upper layer does not support incomplete sends. */
++	if (status != len)
++		return -E2BIG;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
+index d0d966d6080a..1696644ec022 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
+@@ -182,6 +182,7 @@ static void mmhub_v1_0_init_cache_regs(struct amdgpu_device *adev)
+ 		tmp = REG_SET_FIELD(tmp, VM_L2_CNTL3,
+ 				    L2_CACHE_BIGK_FRAGMENT_SIZE, 6);
+ 	}
++	WREG32_SOC15(MMHUB, 0, mmVM_L2_CNTL3, tmp);
+ 
+ 	tmp = mmVM_L2_CNTL4_DEFAULT;
+ 	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL4, VMC_TAP_PDE_REQUEST_PHYSICAL, 0);
+diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc.c b/drivers/gpu/drm/ttm/ttm_page_alloc.c
+index f841accc2c00..f77c81db161b 100644
+--- a/drivers/gpu/drm/ttm/ttm_page_alloc.c
++++ b/drivers/gpu/drm/ttm/ttm_page_alloc.c
+@@ -730,7 +730,8 @@ static void ttm_put_pages(struct page **pages, unsigned npages, int flags,
+ 			}
+ 
+ #ifdef CONFIG_TRANSPARENT_HUGEPAGE
+-			if (!(flags & TTM_PAGE_FLAG_DMA32)) {
++			if (!(flags & TTM_PAGE_FLAG_DMA32) &&
++			    (npages - i) >= HPAGE_PMD_NR) {
+ 				for (j = 0; j < HPAGE_PMD_NR; ++j)
+ 					if (p++ != pages[i + j])
+ 					    break;
+@@ -759,7 +760,7 @@ static void ttm_put_pages(struct page **pages, unsigned npages, int flags,
+ 		unsigned max_size, n2free;
+ 
+ 		spin_lock_irqsave(&huge->lock, irq_flags);
+-		while (i < npages) {
++		while ((npages - i) >= HPAGE_PMD_NR) {
+ 			struct page *p = pages[i];
+ 			unsigned j;
+ 
+diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c
+index 2dc628d4f1ae..1412abcff010 100644
+--- a/drivers/i3c/master.c
++++ b/drivers/i3c/master.c
+@@ -1980,7 +1980,6 @@ of_i3c_master_add_i3c_boardinfo(struct i3c_master_controller *master,
+ {
+ 	struct i3c_dev_boardinfo *boardinfo;
+ 	struct device *dev = &master->dev;
+-	struct i3c_device_info info = { };
+ 	enum i3c_addr_slot_status addrstatus;
+ 	u32 init_dyn_addr = 0;
+ 
+@@ -2012,8 +2011,8 @@ of_i3c_master_add_i3c_boardinfo(struct i3c_master_controller *master,
+ 
+ 	boardinfo->pid = ((u64)reg[1] << 32) | reg[2];
+ 
+-	if ((info.pid & GENMASK_ULL(63, 48)) ||
+-	    I3C_PID_RND_LOWER_32BITS(info.pid))
++	if ((boardinfo->pid & GENMASK_ULL(63, 48)) ||
++	    I3C_PID_RND_LOWER_32BITS(boardinfo->pid))
+ 		return -EINVAL;
+ 
+ 	boardinfo->init_dyn_addr = init_dyn_addr;
+diff --git a/drivers/i3c/master/dw-i3c-master.c b/drivers/i3c/master/dw-i3c-master.c
+index bb03079fbade..ec385fbfef4c 100644
+--- a/drivers/i3c/master/dw-i3c-master.c
++++ b/drivers/i3c/master/dw-i3c-master.c
+@@ -300,7 +300,7 @@ to_dw_i3c_master(struct i3c_master_controller *master)
+ 
+ static void dw_i3c_master_disable(struct dw_i3c_master *master)
+ {
+-	writel(readl(master->regs + DEVICE_CTRL) & DEV_CTRL_ENABLE,
++	writel(readl(master->regs + DEVICE_CTRL) & ~DEV_CTRL_ENABLE,
+ 	       master->regs + DEVICE_CTRL);
+ }
+ 
+diff --git a/drivers/iio/accel/kxcjk-1013.c b/drivers/iio/accel/kxcjk-1013.c
+index 7096e577b23f..50f3ff386bea 100644
+--- a/drivers/iio/accel/kxcjk-1013.c
++++ b/drivers/iio/accel/kxcjk-1013.c
+@@ -1437,6 +1437,8 @@ static int kxcjk1013_resume(struct device *dev)
+ 
+ 	mutex_lock(&data->mutex);
+ 	ret = kxcjk1013_set_mode(data, OPERATION);
++	if (ret == 0)
++		ret = kxcjk1013_set_range(data, data->range);
+ 	mutex_unlock(&data->mutex);
+ 
+ 	return ret;
+diff --git a/drivers/iio/adc/ad_sigma_delta.c b/drivers/iio/adc/ad_sigma_delta.c
+index ff5f2da2e1b1..54d9978b2740 100644
+--- a/drivers/iio/adc/ad_sigma_delta.c
++++ b/drivers/iio/adc/ad_sigma_delta.c
+@@ -121,6 +121,7 @@ static int ad_sd_read_reg_raw(struct ad_sigma_delta *sigma_delta,
+ 	if (sigma_delta->info->has_registers) {
+ 		data[0] = reg << sigma_delta->info->addr_shift;
+ 		data[0] |= sigma_delta->info->read_mask;
++		data[0] |= sigma_delta->comm;
+ 		spi_message_add_tail(&t[0], &m);
+ 	}
+ 	spi_message_add_tail(&t[1], &m);
+diff --git a/drivers/iio/adc/at91_adc.c b/drivers/iio/adc/at91_adc.c
+index 75d2f73582a3..596841a3c4db 100644
+--- a/drivers/iio/adc/at91_adc.c
++++ b/drivers/iio/adc/at91_adc.c
+@@ -704,23 +704,29 @@ static int at91_adc_read_raw(struct iio_dev *idev,
+ 		ret = wait_event_interruptible_timeout(st->wq_data_avail,
+ 						       st->done,
+ 						       msecs_to_jiffies(1000));
+-		if (ret == 0)
+-			ret = -ETIMEDOUT;
+-		if (ret < 0) {
+-			mutex_unlock(&st->lock);
+-			return ret;
+-		}
+-
+-		*val = st->last_value;
+ 
++		/* Disable interrupts, regardless if adc conversion was
++		 * successful or not
++		 */
+ 		at91_adc_writel(st, AT91_ADC_CHDR,
+ 				AT91_ADC_CH(chan->channel));
+ 		at91_adc_writel(st, AT91_ADC_IDR, BIT(chan->channel));
+ 
+-		st->last_value = 0;
+-		st->done = false;
++		if (ret > 0) {
++			/* a valid conversion took place */
++			*val = st->last_value;
++			st->last_value = 0;
++			st->done = false;
++			ret = IIO_VAL_INT;
++		} else if (ret == 0) {
++			/* conversion timeout */
++			dev_err(&idev->dev, "ADC Channel %d timeout.\n",
++				chan->channel);
++			ret = -ETIMEDOUT;
++		}
++
+ 		mutex_unlock(&st->lock);
+-		return IIO_VAL_INT;
++		return ret;
+ 
+ 	case IIO_CHAN_INFO_SCALE:
+ 		*val = st->vref_mv;
+diff --git a/drivers/iio/chemical/bme680.h b/drivers/iio/chemical/bme680.h
+index 0ae89b87e2d6..4edc5d21cb9f 100644
+--- a/drivers/iio/chemical/bme680.h
++++ b/drivers/iio/chemical/bme680.h
+@@ -2,11 +2,9 @@
+ #ifndef BME680_H_
+ #define BME680_H_
+ 
+-#define BME680_REG_CHIP_I2C_ID			0xD0
+-#define BME680_REG_CHIP_SPI_ID			0x50
++#define BME680_REG_CHIP_ID			0xD0
+ #define   BME680_CHIP_ID_VAL			0x61
+-#define BME680_REG_SOFT_RESET_I2C		0xE0
+-#define BME680_REG_SOFT_RESET_SPI		0x60
++#define BME680_REG_SOFT_RESET			0xE0
+ #define   BME680_CMD_SOFTRESET			0xB6
+ #define BME680_REG_STATUS			0x73
+ #define   BME680_SPI_MEM_PAGE_BIT		BIT(4)
+diff --git a/drivers/iio/chemical/bme680_core.c b/drivers/iio/chemical/bme680_core.c
+index 70c1fe4366f4..ccde4c65ff93 100644
+--- a/drivers/iio/chemical/bme680_core.c
++++ b/drivers/iio/chemical/bme680_core.c
+@@ -63,9 +63,23 @@ struct bme680_data {
+ 	s32 t_fine;
+ };
+ 
++static const struct regmap_range bme680_volatile_ranges[] = {
++	regmap_reg_range(BME680_REG_MEAS_STAT_0, BME680_REG_GAS_R_LSB),
++	regmap_reg_range(BME680_REG_STATUS, BME680_REG_STATUS),
++	regmap_reg_range(BME680_T2_LSB_REG, BME680_GH3_REG),
++};
++
++static const struct regmap_access_table bme680_volatile_table = {
++	.yes_ranges	= bme680_volatile_ranges,
++	.n_yes_ranges	= ARRAY_SIZE(bme680_volatile_ranges),
++};
++
+ const struct regmap_config bme680_regmap_config = {
+ 	.reg_bits = 8,
+ 	.val_bits = 8,
++	.max_register = 0xef,
++	.volatile_table = &bme680_volatile_table,
++	.cache_type = REGCACHE_RBTREE,
+ };
+ EXPORT_SYMBOL(bme680_regmap_config);
+ 
+@@ -316,6 +330,10 @@ static s16 bme680_compensate_temp(struct bme680_data *data,
+ 	s64 var1, var2, var3;
+ 	s16 calc_temp;
+ 
++	/* If the calibration is invalid, attempt to reload it */
++	if (!calib->par_t2)
++		bme680_read_calib(data, calib);
++
+ 	var1 = (adc_temp >> 3) - (calib->par_t1 << 1);
+ 	var2 = (var1 * calib->par_t2) >> 11;
+ 	var3 = ((var1 >> 1) * (var1 >> 1)) >> 12;
+@@ -583,8 +601,7 @@ static int bme680_gas_config(struct bme680_data *data)
+ 	return ret;
+ }
+ 
+-static int bme680_read_temp(struct bme680_data *data,
+-			    int *val, int *val2)
++static int bme680_read_temp(struct bme680_data *data, int *val)
+ {
+ 	struct device *dev = regmap_get_device(data->regmap);
+ 	int ret;
+@@ -617,10 +634,9 @@ static int bme680_read_temp(struct bme680_data *data,
+ 	 * compensate_press/compensate_humid to get compensated
+ 	 * pressure/humidity readings.
+ 	 */
+-	if (val && val2) {
+-		*val = comp_temp;
+-		*val2 = 100;
+-		return IIO_VAL_FRACTIONAL;
++	if (val) {
++		*val = comp_temp * 10; /* Centidegrees to millidegrees */
++		return IIO_VAL_INT;
+ 	}
+ 
+ 	return ret;
+@@ -635,7 +651,7 @@ static int bme680_read_press(struct bme680_data *data,
+ 	s32 adc_press;
+ 
+ 	/* Read and compensate temperature to get a reading of t_fine */
+-	ret = bme680_read_temp(data, NULL, NULL);
++	ret = bme680_read_temp(data, NULL);
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -668,7 +684,7 @@ static int bme680_read_humid(struct bme680_data *data,
+ 	u32 comp_humidity;
+ 
+ 	/* Read and compensate temperature to get a reading of t_fine */
+-	ret = bme680_read_temp(data, NULL, NULL);
++	ret = bme680_read_temp(data, NULL);
+ 	if (ret < 0)
+ 		return ret;
+ 
+@@ -761,7 +777,7 @@ static int bme680_read_raw(struct iio_dev *indio_dev,
+ 	case IIO_CHAN_INFO_PROCESSED:
+ 		switch (chan->type) {
+ 		case IIO_TEMP:
+-			return bme680_read_temp(data, val, val2);
++			return bme680_read_temp(data, val);
+ 		case IIO_PRESSURE:
+ 			return bme680_read_press(data, val, val2);
+ 		case IIO_HUMIDITYRELATIVE:
+@@ -867,8 +883,28 @@ int bme680_core_probe(struct device *dev, struct regmap *regmap,
+ {
+ 	struct iio_dev *indio_dev;
+ 	struct bme680_data *data;
++	unsigned int val;
+ 	int ret;
+ 
++	ret = regmap_write(regmap, BME680_REG_SOFT_RESET,
++			   BME680_CMD_SOFTRESET);
++	if (ret < 0) {
++		dev_err(dev, "Failed to reset chip\n");
++		return ret;
++	}
++
++	ret = regmap_read(regmap, BME680_REG_CHIP_ID, &val);
++	if (ret < 0) {
++		dev_err(dev, "Error reading chip ID\n");
++		return ret;
++	}
++
++	if (val != BME680_CHIP_ID_VAL) {
++		dev_err(dev, "Wrong chip ID, got %x expected %x\n",
++				val, BME680_CHIP_ID_VAL);
++		return -ENODEV;
++	}
++
+ 	indio_dev = devm_iio_device_alloc(dev, sizeof(*data));
+ 	if (!indio_dev)
+ 		return -ENOMEM;
+diff --git a/drivers/iio/chemical/bme680_i2c.c b/drivers/iio/chemical/bme680_i2c.c
+index 06d4be539d2e..cfc4449edf1b 100644
+--- a/drivers/iio/chemical/bme680_i2c.c
++++ b/drivers/iio/chemical/bme680_i2c.c
+@@ -23,8 +23,6 @@ static int bme680_i2c_probe(struct i2c_client *client,
+ {
+ 	struct regmap *regmap;
+ 	const char *name = NULL;
+-	unsigned int val;
+-	int ret;
+ 
+ 	regmap = devm_regmap_init_i2c(client, &bme680_regmap_config);
+ 	if (IS_ERR(regmap)) {
+@@ -33,25 +31,6 @@ static int bme680_i2c_probe(struct i2c_client *client,
+ 		return PTR_ERR(regmap);
+ 	}
+ 
+-	ret = regmap_write(regmap, BME680_REG_SOFT_RESET_I2C,
+-			   BME680_CMD_SOFTRESET);
+-	if (ret < 0) {
+-		dev_err(&client->dev, "Failed to reset chip\n");
+-		return ret;
+-	}
+-
+-	ret = regmap_read(regmap, BME680_REG_CHIP_I2C_ID, &val);
+-	if (ret < 0) {
+-		dev_err(&client->dev, "Error reading I2C chip ID\n");
+-		return ret;
+-	}
+-
+-	if (val != BME680_CHIP_ID_VAL) {
+-		dev_err(&client->dev, "Wrong chip ID, got %x expected %x\n",
+-				val, BME680_CHIP_ID_VAL);
+-		return -ENODEV;
+-	}
+-
+ 	if (id)
+ 		name = id->name;
+ 
+diff --git a/drivers/iio/chemical/bme680_spi.c b/drivers/iio/chemical/bme680_spi.c
+index c9fb05e8d0b9..881778e55d38 100644
+--- a/drivers/iio/chemical/bme680_spi.c
++++ b/drivers/iio/chemical/bme680_spi.c
+@@ -11,28 +11,93 @@
+ 
+ #include "bme680.h"
+ 
++struct bme680_spi_bus_context {
++	struct spi_device *spi;
++	u8 current_page;
++};
++
++/*
++ * In SPI mode there are only 7 address bits, a "page" register determines
++ * which part of the 8-bit range is active. This function looks at the address
++ * and writes the page selection bit if needed
++ */
++static int bme680_regmap_spi_select_page(
++	struct bme680_spi_bus_context *ctx, u8 reg)
++{
++	struct spi_device *spi = ctx->spi;
++	int ret;
++	u8 buf[2];
++	u8 page = (reg & 0x80) ? 0 : 1; /* Page "1" is low range */
++
++	if (page == ctx->current_page)
++		return 0;
++
++	/*
++	 * Data sheet claims we're only allowed to change bit 4, so we must do
++	 * a read-modify-write on each and every page select
++	 */
++	buf[0] = BME680_REG_STATUS;
++	ret = spi_write_then_read(spi, buf, 1, buf + 1, 1);
++	if (ret < 0) {
++		dev_err(&spi->dev, "failed to set page %u\n", page);
++		return ret;
++	}
++
++	buf[0] = BME680_REG_STATUS;
++	if (page)
++		buf[1] |= BME680_SPI_MEM_PAGE_BIT;
++	else
++		buf[1] &= ~BME680_SPI_MEM_PAGE_BIT;
++
++	ret = spi_write(spi, buf, 2);
++	if (ret < 0) {
++		dev_err(&spi->dev, "failed to set page %u\n", page);
++		return ret;
++	}
++
++	ctx->current_page = page;
++
++	return 0;
++}
++
+ static int bme680_regmap_spi_write(void *context, const void *data,
+ 				   size_t count)
+ {
+-	struct spi_device *spi = context;
++	struct bme680_spi_bus_context *ctx = context;
++	struct spi_device *spi = ctx->spi;
++	int ret;
+ 	u8 buf[2];
+ 
+ 	memcpy(buf, data, 2);
++
++	ret = bme680_regmap_spi_select_page(ctx, buf[0]);
++	if (ret)
++		return ret;
++
+ 	/*
+ 	 * The SPI register address (= full register address without bit 7)
+ 	 * and the write command (bit7 = RW = '0')
+ 	 */
+ 	buf[0] &= ~0x80;
+ 
+-	return spi_write_then_read(spi, buf, 2, NULL, 0);
++	return spi_write(spi, buf, 2);
+ }
+ 
+ static int bme680_regmap_spi_read(void *context, const void *reg,
+ 				  size_t reg_size, void *val, size_t val_size)
+ {
+-	struct spi_device *spi = context;
++	struct bme680_spi_bus_context *ctx = context;
++	struct spi_device *spi = ctx->spi;
++	int ret;
++	u8 addr = *(const u8 *)reg;
++
++	ret = bme680_regmap_spi_select_page(ctx, addr);
++	if (ret)
++		return ret;
+ 
+-	return spi_write_then_read(spi, reg, reg_size, val, val_size);
++	addr |= 0x80; /* bit7 = RW = '1' */
++
++	return spi_write_then_read(spi, &addr, 1, val, val_size);
+ }
+ 
+ static struct regmap_bus bme680_regmap_bus = {
+@@ -45,8 +110,8 @@ static struct regmap_bus bme680_regmap_bus = {
+ static int bme680_spi_probe(struct spi_device *spi)
+ {
+ 	const struct spi_device_id *id = spi_get_device_id(spi);
++	struct bme680_spi_bus_context *bus_context;
+ 	struct regmap *regmap;
+-	unsigned int val;
+ 	int ret;
+ 
+ 	spi->bits_per_word = 8;
+@@ -56,45 +121,21 @@ static int bme680_spi_probe(struct spi_device *spi)
+ 		return ret;
+ 	}
+ 
++	bus_context = devm_kzalloc(&spi->dev, sizeof(*bus_context), GFP_KERNEL);
++	if (!bus_context)
++		return -ENOMEM;
++
++	bus_context->spi = spi;
++	bus_context->current_page = 0xff; /* Undefined on warm boot */
++
+ 	regmap = devm_regmap_init(&spi->dev, &bme680_regmap_bus,
+-				  &spi->dev, &bme680_regmap_config);
++				  bus_context, &bme680_regmap_config);
+ 	if (IS_ERR(regmap)) {
+ 		dev_err(&spi->dev, "Failed to register spi regmap %d\n",
+ 				(int)PTR_ERR(regmap));
+ 		return PTR_ERR(regmap);
+ 	}
+ 
+-	ret = regmap_write(regmap, BME680_REG_SOFT_RESET_SPI,
+-			   BME680_CMD_SOFTRESET);
+-	if (ret < 0) {
+-		dev_err(&spi->dev, "Failed to reset chip\n");
+-		return ret;
+-	}
+-
+-	/* after power-on reset, Page 0(0x80-0xFF) of spi_mem_page is active */
+-	ret = regmap_read(regmap, BME680_REG_CHIP_SPI_ID, &val);
+-	if (ret < 0) {
+-		dev_err(&spi->dev, "Error reading SPI chip ID\n");
+-		return ret;
+-	}
+-
+-	if (val != BME680_CHIP_ID_VAL) {
+-		dev_err(&spi->dev, "Wrong chip ID, got %x expected %x\n",
+-				val, BME680_CHIP_ID_VAL);
+-		return -ENODEV;
+-	}
+-	/*
+-	 * select Page 1 of spi_mem_page to enable access to
+-	 * to registers from address 0x00 to 0x7F.
+-	 */
+-	ret = regmap_write_bits(regmap, BME680_REG_STATUS,
+-				BME680_SPI_MEM_PAGE_BIT,
+-				BME680_SPI_MEM_PAGE_1_VAL);
+-	if (ret < 0) {
+-		dev_err(&spi->dev, "failed to set page 1 of spi_mem_page\n");
+-		return ret;
+-	}
+-
+ 	return bme680_core_probe(&spi->dev, regmap, id->name);
+ }
+ 
+diff --git a/drivers/iio/common/cros_ec_sensors/cros_ec_sensors.c b/drivers/iio/common/cros_ec_sensors/cros_ec_sensors.c
+index 89cb0066a6e0..8d76afb87d87 100644
+--- a/drivers/iio/common/cros_ec_sensors/cros_ec_sensors.c
++++ b/drivers/iio/common/cros_ec_sensors/cros_ec_sensors.c
+@@ -103,9 +103,10 @@ static int cros_ec_sensors_read(struct iio_dev *indio_dev,
+ 			 * Do not use IIO_DEGREE_TO_RAD to avoid precision
+ 			 * loss. Round to the nearest integer.
+ 			 */
+-			*val = div_s64(val64 * 314159 + 9000000ULL, 1000);
+-			*val2 = 18000 << (CROS_EC_SENSOR_BITS - 1);
+-			ret = IIO_VAL_FRACTIONAL;
++			*val = 0;
++			*val2 = div_s64(val64 * 3141592653ULL,
++					180 << (CROS_EC_SENSOR_BITS - 1));
++			ret = IIO_VAL_INT_PLUS_NANO;
+ 			break;
+ 		case MOTIONSENSE_TYPE_MAG:
+ 			/*
+diff --git a/drivers/iio/dac/mcp4725.c b/drivers/iio/dac/mcp4725.c
+index 6d71fd905e29..c701a45469f6 100644
+--- a/drivers/iio/dac/mcp4725.c
++++ b/drivers/iio/dac/mcp4725.c
+@@ -92,6 +92,7 @@ static ssize_t mcp4725_store_eeprom(struct device *dev,
+ 
+ 	inoutbuf[0] = 0x60; /* write EEPROM */
+ 	inoutbuf[0] |= data->ref_mode << 3;
++	inoutbuf[0] |= data->powerdown ? ((data->powerdown_mode + 1) << 1) : 0;
+ 	inoutbuf[1] = data->dac_value >> 4;
+ 	inoutbuf[2] = (data->dac_value & 0xf) << 4;
+ 
+diff --git a/drivers/iio/gyro/bmg160_core.c b/drivers/iio/gyro/bmg160_core.c
+index 63ca31628a93..92c07ab826eb 100644
+--- a/drivers/iio/gyro/bmg160_core.c
++++ b/drivers/iio/gyro/bmg160_core.c
+@@ -582,11 +582,10 @@ static int bmg160_read_raw(struct iio_dev *indio_dev,
+ 	case IIO_CHAN_INFO_LOW_PASS_FILTER_3DB_FREQUENCY:
+ 		return bmg160_get_filter(data, val);
+ 	case IIO_CHAN_INFO_SCALE:
+-		*val = 0;
+ 		switch (chan->type) {
+ 		case IIO_TEMP:
+-			*val2 = 500000;
+-			return IIO_VAL_INT_PLUS_MICRO;
++			*val = 500;
++			return IIO_VAL_INT;
+ 		case IIO_ANGL_VEL:
+ 		{
+ 			int i;
+@@ -594,6 +593,7 @@ static int bmg160_read_raw(struct iio_dev *indio_dev,
+ 			for (i = 0; i < ARRAY_SIZE(bmg160_scale_table); ++i) {
+ 				if (bmg160_scale_table[i].dps_range ==
+ 							data->dps_range) {
++					*val = 0;
+ 					*val2 = bmg160_scale_table[i].scale;
+ 					return IIO_VAL_INT_PLUS_MICRO;
+ 				}
+diff --git a/drivers/iio/gyro/mpu3050-core.c b/drivers/iio/gyro/mpu3050-core.c
+index 77fac81a3adc..5ddebede31a6 100644
+--- a/drivers/iio/gyro/mpu3050-core.c
++++ b/drivers/iio/gyro/mpu3050-core.c
+@@ -29,7 +29,8 @@
+ 
+ #include "mpu3050.h"
+ 
+-#define MPU3050_CHIP_ID		0x69
++#define MPU3050_CHIP_ID		0x68
++#define MPU3050_CHIP_ID_MASK	0x7E
+ 
+ /*
+  * Register map: anything suffixed *_H is a big-endian high byte and always
+@@ -1176,8 +1177,9 @@ int mpu3050_common_probe(struct device *dev,
+ 		goto err_power_down;
+ 	}
+ 
+-	if (val != MPU3050_CHIP_ID) {
+-		dev_err(dev, "unsupported chip id %02x\n", (u8)val);
++	if ((val & MPU3050_CHIP_ID_MASK) != MPU3050_CHIP_ID) {
++		dev_err(dev, "unsupported chip id %02x\n",
++				(u8)(val & MPU3050_CHIP_ID_MASK));
+ 		ret = -ENODEV;
+ 		goto err_power_down;
+ 	}
+diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
+index cd5bfe39591b..dadd921a4a30 100644
+--- a/drivers/iio/industrialio-buffer.c
++++ b/drivers/iio/industrialio-buffer.c
+@@ -320,9 +320,8 @@ static int iio_scan_mask_set(struct iio_dev *indio_dev,
+ 	const unsigned long *mask;
+ 	unsigned long *trialmask;
+ 
+-	trialmask = kmalloc_array(BITS_TO_LONGS(indio_dev->masklength),
+-				  sizeof(*trialmask),
+-				  GFP_KERNEL);
++	trialmask = kcalloc(BITS_TO_LONGS(indio_dev->masklength),
++			    sizeof(*trialmask), GFP_KERNEL);
+ 	if (trialmask == NULL)
+ 		return -ENOMEM;
+ 	if (!indio_dev->masklength) {
+diff --git a/drivers/iio/industrialio-core.c b/drivers/iio/industrialio-core.c
+index 4f5cd9f60870..5b65750ce775 100644
+--- a/drivers/iio/industrialio-core.c
++++ b/drivers/iio/industrialio-core.c
+@@ -1738,10 +1738,10 @@ EXPORT_SYMBOL(__iio_device_register);
+  **/
+ void iio_device_unregister(struct iio_dev *indio_dev)
+ {
+-	mutex_lock(&indio_dev->info_exist_lock);
+-
+ 	cdev_device_del(&indio_dev->chrdev, &indio_dev->dev);
+ 
++	mutex_lock(&indio_dev->info_exist_lock);
++
+ 	iio_device_unregister_debugfs(indio_dev);
+ 
+ 	iio_disable_all_buffers(indio_dev);
+diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c
+index 5f366838b7ff..e2a4570a47e8 100644
+--- a/drivers/infiniband/core/uverbs_main.c
++++ b/drivers/infiniband/core/uverbs_main.c
+@@ -992,6 +992,8 @@ void uverbs_user_mmap_disassociate(struct ib_uverbs_file *ufile)
+ 		 * will only be one mm, so no big deal.
+ 		 */
+ 		down_write(&mm->mmap_sem);
++		if (!mmget_still_valid(mm))
++			goto skip_mm;
+ 		mutex_lock(&ufile->umap_lock);
+ 		list_for_each_entry_safe (priv, next_priv, &ufile->umaps,
+ 					  list) {
+@@ -1006,6 +1008,7 @@ void uverbs_user_mmap_disassociate(struct ib_uverbs_file *ufile)
+ 			vma->vm_flags &= ~(VM_SHARED | VM_MAYSHARE);
+ 		}
+ 		mutex_unlock(&ufile->umap_lock);
++	skip_mm:
+ 		up_write(&mm->mmap_sem);
+ 		mmput(mm);
+ 	}
+diff --git a/drivers/input/mouse/elan_i2c_core.c b/drivers/input/mouse/elan_i2c_core.c
+index 628ef617bb2f..f9525d6f0bfe 100644
+--- a/drivers/input/mouse/elan_i2c_core.c
++++ b/drivers/input/mouse/elan_i2c_core.c
+@@ -1339,21 +1339,46 @@ static const struct acpi_device_id elan_acpi_id[] = {
+ 	{ "ELAN0600", 0 },
+ 	{ "ELAN0601", 0 },
+ 	{ "ELAN0602", 0 },
++	{ "ELAN0603", 0 },
++	{ "ELAN0604", 0 },
+ 	{ "ELAN0605", 0 },
++	{ "ELAN0606", 0 },
++	{ "ELAN0607", 0 },
+ 	{ "ELAN0608", 0 },
+ 	{ "ELAN0609", 0 },
+ 	{ "ELAN060B", 0 },
+ 	{ "ELAN060C", 0 },
++	{ "ELAN060F", 0 },
++	{ "ELAN0610", 0 },
+ 	{ "ELAN0611", 0 },
+ 	{ "ELAN0612", 0 },
++	{ "ELAN0615", 0 },
++	{ "ELAN0616", 0 },
+ 	{ "ELAN0617", 0 },
+ 	{ "ELAN0618", 0 },
++	{ "ELAN0619", 0 },
++	{ "ELAN061A", 0 },
++	{ "ELAN061B", 0 },
+ 	{ "ELAN061C", 0 },
+ 	{ "ELAN061D", 0 },
+ 	{ "ELAN061E", 0 },
++	{ "ELAN061F", 0 },
+ 	{ "ELAN0620", 0 },
+ 	{ "ELAN0621", 0 },
+ 	{ "ELAN0622", 0 },
++	{ "ELAN0623", 0 },
++	{ "ELAN0624", 0 },
++	{ "ELAN0625", 0 },
++	{ "ELAN0626", 0 },
++	{ "ELAN0627", 0 },
++	{ "ELAN0628", 0 },
++	{ "ELAN0629", 0 },
++	{ "ELAN062A", 0 },
++	{ "ELAN062B", 0 },
++	{ "ELAN062C", 0 },
++	{ "ELAN062D", 0 },
++	{ "ELAN0631", 0 },
++	{ "ELAN0632", 0 },
+ 	{ "ELAN1000", 0 },
+ 	{ }
+ };
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 537c90c8eb0a..f89fc6ea6078 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -3214,8 +3214,12 @@ static int bond_netdev_event(struct notifier_block *this,
+ 		return NOTIFY_DONE;
+ 
+ 	if (event_dev->flags & IFF_MASTER) {
++		int ret;
++
+ 		netdev_dbg(event_dev, "IFF_MASTER\n");
+-		return bond_master_netdev_event(event, event_dev);
++		ret = bond_master_netdev_event(event, event_dev);
++		if (ret != NOTIFY_DONE)
++			return ret;
+ 	}
+ 
+ 	if (event_dev->flags & IFF_SLAVE) {
+diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_main.c b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
+index d4ee9f9c8c34..36263c77df46 100644
+--- a/drivers/net/ethernet/cavium/thunder/nicvf_main.c
++++ b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
+@@ -32,6 +32,13 @@
+ #define DRV_NAME	"nicvf"
+ #define DRV_VERSION	"1.0"
+ 
++/* NOTE: Packets bigger than 1530 are split across multiple pages and XDP needs
++ * the buffer to be contiguous. Allow XDP to be set up only if we don't exceed
++ * this value, keeping headroom for the 14 byte Ethernet header and two
++ * VLAN tags (for QinQ)
++ */
++#define MAX_XDP_MTU	(1530 - ETH_HLEN - VLAN_HLEN * 2)
++
+ /* Supported devices */
+ static const struct pci_device_id nicvf_id_table[] = {
+ 	{ PCI_DEVICE_SUB(PCI_VENDOR_ID_CAVIUM,
+@@ -1582,6 +1589,15 @@ static int nicvf_change_mtu(struct net_device *netdev, int new_mtu)
+ 	struct nicvf *nic = netdev_priv(netdev);
+ 	int orig_mtu = netdev->mtu;
+ 
++	/* For now just support only the usual MTU sized frames,
++	 * plus some headroom for VLAN, QinQ.
++	 */
++	if (nic->xdp_prog && new_mtu > MAX_XDP_MTU) {
++		netdev_warn(netdev, "Jumbo frames not yet supported with XDP, current MTU %d.\n",
++			    netdev->mtu);
++		return -EINVAL;
++	}
++
+ 	netdev->mtu = new_mtu;
+ 
+ 	if (!netif_running(netdev))
+@@ -1830,8 +1846,10 @@ static int nicvf_xdp_setup(struct nicvf *nic, struct bpf_prog *prog)
+ 	bool bpf_attached = false;
+ 	int ret = 0;
+ 
+-	/* For now just support only the usual MTU sized frames */
+-	if (prog && (dev->mtu > 1500)) {
++	/* For now just support only the usual MTU sized frames,
++	 * plus some headroom for VLAN, QinQ.
++	 */
++	if (prog && dev->mtu > MAX_XDP_MTU) {
+ 		netdev_warn(dev, "Jumbo frames not yet supported with XDP, current MTU %d.\n",
+ 			    dev->mtu);
+ 		return -EOPNOTSUPP;
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index 697c2427f2b7..a96ad20ee484 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -1840,13 +1840,9 @@ static int fec_enet_clk_enable(struct net_device *ndev, bool enable)
+ 	int ret;
+ 
+ 	if (enable) {
+-		ret = clk_prepare_enable(fep->clk_ahb);
+-		if (ret)
+-			return ret;
+-
+ 		ret = clk_prepare_enable(fep->clk_enet_out);
+ 		if (ret)
+-			goto failed_clk_enet_out;
++			return ret;
+ 
+ 		if (fep->clk_ptp) {
+ 			mutex_lock(&fep->ptp_clk_mutex);
+@@ -1866,7 +1862,6 @@ static int fec_enet_clk_enable(struct net_device *ndev, bool enable)
+ 
+ 		phy_reset_after_clk_enable(ndev->phydev);
+ 	} else {
+-		clk_disable_unprepare(fep->clk_ahb);
+ 		clk_disable_unprepare(fep->clk_enet_out);
+ 		if (fep->clk_ptp) {
+ 			mutex_lock(&fep->ptp_clk_mutex);
+@@ -1885,8 +1880,6 @@ failed_clk_ref:
+ failed_clk_ptp:
+ 	if (fep->clk_enet_out)
+ 		clk_disable_unprepare(fep->clk_enet_out);
+-failed_clk_enet_out:
+-		clk_disable_unprepare(fep->clk_ahb);
+ 
+ 	return ret;
+ }
+@@ -3470,6 +3463,9 @@ fec_probe(struct platform_device *pdev)
+ 	ret = clk_prepare_enable(fep->clk_ipg);
+ 	if (ret)
+ 		goto failed_clk_ipg;
++	ret = clk_prepare_enable(fep->clk_ahb);
++	if (ret)
++		goto failed_clk_ahb;
+ 
+ 	fep->reg_phy = devm_regulator_get_optional(&pdev->dev, "phy");
+ 	if (!IS_ERR(fep->reg_phy)) {
+@@ -3563,6 +3559,9 @@ failed_reset:
+ 	pm_runtime_put(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
+ failed_regulator:
++	clk_disable_unprepare(fep->clk_ahb);
++failed_clk_ahb:
++	clk_disable_unprepare(fep->clk_ipg);
+ failed_clk_ipg:
+ 	fec_enet_clk_enable(ndev, false);
+ failed_clk:
+@@ -3686,6 +3685,7 @@ static int __maybe_unused fec_runtime_suspend(struct device *dev)
+ 	struct net_device *ndev = dev_get_drvdata(dev);
+ 	struct fec_enet_private *fep = netdev_priv(ndev);
+ 
++	clk_disable_unprepare(fep->clk_ahb);
+ 	clk_disable_unprepare(fep->clk_ipg);
+ 
+ 	return 0;
+@@ -3695,8 +3695,20 @@ static int __maybe_unused fec_runtime_resume(struct device *dev)
+ {
+ 	struct net_device *ndev = dev_get_drvdata(dev);
+ 	struct fec_enet_private *fep = netdev_priv(ndev);
++	int ret;
+ 
+-	return clk_prepare_enable(fep->clk_ipg);
++	ret = clk_prepare_enable(fep->clk_ahb);
++	if (ret)
++		return ret;
++	ret = clk_prepare_enable(fep->clk_ipg);
++	if (ret)
++		goto failed_clk_ipg;
++
++	return 0;
++
++failed_clk_ipg:
++	clk_disable_unprepare(fep->clk_ahb);
++	return ret;
+ }
+ 
+ static const struct dev_pm_ops fec_pm_ops = {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
+index f3c7ab6faea5..b8521e2f64ac 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun.c
+@@ -39,6 +39,10 @@ static int get_route_and_out_devs(struct mlx5e_priv *priv,
+ 			return -EOPNOTSUPP;
+ 	}
+ 
++	if (!(mlx5e_eswitch_rep(*out_dev) &&
++	      mlx5e_is_uplink_rep(netdev_priv(*out_dev))))
++		return -EOPNOTSUPP;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+index e6099f51d25f..3b9e5f0d0212 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+@@ -1665,7 +1665,8 @@ static int set_pflag_rx_no_csum_complete(struct net_device *netdev, bool enable)
+ 	struct mlx5e_channel *c;
+ 	int i;
+ 
+-	if (!test_bit(MLX5E_STATE_OPENED, &priv->state))
++	if (!test_bit(MLX5E_STATE_OPENED, &priv->state) ||
++	    priv->channels.params.xdp_prog)
+ 		return 0;
+ 
+ 	for (i = 0; i < channels->num; i++) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 93e50ccd44c3..0cb19e4dd439 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -950,7 +950,11 @@ static int mlx5e_open_rq(struct mlx5e_channel *c,
+ 	if (params->rx_dim_enabled)
+ 		__set_bit(MLX5E_RQ_STATE_AM, &c->rq.state);
+ 
+-	if (MLX5E_GET_PFLAG(params, MLX5E_PFLAG_RX_NO_CSUM_COMPLETE))
++	/* We disable csum_complete when XDP is enabled since
++	 * XDP programs might manipulate packets which will render
++	 * skb->checksum incorrect.
++	 */
++	if (MLX5E_GET_PFLAG(params, MLX5E_PFLAG_RX_NO_CSUM_COMPLETE) || c->xdp)
+ 		__set_bit(MLX5E_RQ_STATE_NO_CSUM_COMPLETE, &c->rq.state);
+ 
+ 	return 0;
+@@ -4570,7 +4574,7 @@ void mlx5e_build_rss_params(struct mlx5e_rss_params *rss_params,
+ {
+ 	enum mlx5e_traffic_types tt;
+ 
+-	rss_params->hfunc = ETH_RSS_HASH_XOR;
++	rss_params->hfunc = ETH_RSS_HASH_TOP;
+ 	netdev_rss_key_fill(rss_params->toeplitz_hash_key,
+ 			    sizeof(rss_params->toeplitz_hash_key));
+ 	mlx5e_build_default_indir_rqt(rss_params->indirection_rqt,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+index f86e4804e83e..2cbda8abd8b9 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+@@ -693,7 +693,14 @@ static inline bool is_last_ethertype_ip(struct sk_buff *skb, int *network_depth,
+ {
+ 	*proto = ((struct ethhdr *)skb->data)->h_proto;
+ 	*proto = __vlan_get_protocol(skb, *proto, network_depth);
+-	return (*proto == htons(ETH_P_IP) || *proto == htons(ETH_P_IPV6));
++
++	if (*proto == htons(ETH_P_IP))
++		return pskb_may_pull(skb, *network_depth + sizeof(struct iphdr));
++
++	if (*proto == htons(ETH_P_IPV6))
++		return pskb_may_pull(skb, *network_depth + sizeof(struct ipv6hdr));
++
++	return false;
+ }
+ 
+ static inline void mlx5e_enable_ecn(struct mlx5e_rq *rq, struct sk_buff *skb)
+@@ -713,17 +720,6 @@ static inline void mlx5e_enable_ecn(struct mlx5e_rq *rq, struct sk_buff *skb)
+ 	rq->stats->ecn_mark += !!rc;
+ }
+ 
+-static u32 mlx5e_get_fcs(const struct sk_buff *skb)
+-{
+-	const void *fcs_bytes;
+-	u32 _fcs_bytes;
+-
+-	fcs_bytes = skb_header_pointer(skb, skb->len - ETH_FCS_LEN,
+-				       ETH_FCS_LEN, &_fcs_bytes);
+-
+-	return __get_unaligned_cpu32(fcs_bytes);
+-}
+-
+ static u8 get_ip_proto(struct sk_buff *skb, int network_depth, __be16 proto)
+ {
+ 	void *ip_p = skb->data + network_depth;
+@@ -734,6 +730,68 @@ static u8 get_ip_proto(struct sk_buff *skb, int network_depth, __be16 proto)
+ 
+ #define short_frame(size) ((size) <= ETH_ZLEN + ETH_FCS_LEN)
+ 
++#define MAX_PADDING 8
++
++static void
++tail_padding_csum_slow(struct sk_buff *skb, int offset, int len,
++		       struct mlx5e_rq_stats *stats)
++{
++	stats->csum_complete_tail_slow++;
++	skb->csum = csum_block_add(skb->csum,
++				   skb_checksum(skb, offset, len, 0),
++				   offset);
++}
++
++static void
++tail_padding_csum(struct sk_buff *skb, int offset,
++		  struct mlx5e_rq_stats *stats)
++{
++	u8 tail_padding[MAX_PADDING];
++	int len = skb->len - offset;
++	void *tail;
++
++	if (unlikely(len > MAX_PADDING)) {
++		tail_padding_csum_slow(skb, offset, len, stats);
++		return;
++	}
++
++	tail = skb_header_pointer(skb, offset, len, tail_padding);
++	if (unlikely(!tail)) {
++		tail_padding_csum_slow(skb, offset, len, stats);
++		return;
++	}
++
++	stats->csum_complete_tail++;
++	skb->csum = csum_block_add(skb->csum, csum_partial(tail, len, 0), offset);
++}
++
++static void
++mlx5e_skb_padding_csum(struct sk_buff *skb, int network_depth, __be16 proto,
++		       struct mlx5e_rq_stats *stats)
++{
++	struct ipv6hdr *ip6;
++	struct iphdr   *ip4;
++	int pkt_len;
++
++	switch (proto) {
++	case htons(ETH_P_IP):
++		ip4 = (struct iphdr *)(skb->data + network_depth);
++		pkt_len = network_depth + ntohs(ip4->tot_len);
++		break;
++	case htons(ETH_P_IPV6):
++		ip6 = (struct ipv6hdr *)(skb->data + network_depth);
++		pkt_len = network_depth + sizeof(*ip6) + ntohs(ip6->payload_len);
++		break;
++	default:
++		return;
++	}
++
++	if (likely(pkt_len >= skb->len))
++		return;
++
++	tail_padding_csum(skb, pkt_len, stats);
++}
++
+ static inline void mlx5e_handle_csum(struct net_device *netdev,
+ 				     struct mlx5_cqe64 *cqe,
+ 				     struct mlx5e_rq *rq,
+@@ -753,7 +811,8 @@ static inline void mlx5e_handle_csum(struct net_device *netdev,
+ 		return;
+ 	}
+ 
+-	if (unlikely(test_bit(MLX5E_RQ_STATE_NO_CSUM_COMPLETE, &rq->state)))
++	/* True when explicitly set via priv flag, or XDP prog is loaded */
++	if (test_bit(MLX5E_RQ_STATE_NO_CSUM_COMPLETE, &rq->state))
+ 		goto csum_unnecessary;
+ 
+ 	/* CQE csum doesn't cover padding octets in short ethernet
+@@ -781,18 +840,15 @@ static inline void mlx5e_handle_csum(struct net_device *netdev,
+ 			skb->csum = csum_partial(skb->data + ETH_HLEN,
+ 						 network_depth - ETH_HLEN,
+ 						 skb->csum);
+-		if (unlikely(netdev->features & NETIF_F_RXFCS))
+-			skb->csum = csum_block_add(skb->csum,
+-						   (__force __wsum)mlx5e_get_fcs(skb),
+-						   skb->len - ETH_FCS_LEN);
++
++		mlx5e_skb_padding_csum(skb, network_depth, proto, stats);
+ 		stats->csum_complete++;
+ 		return;
+ 	}
+ 
+ csum_unnecessary:
+ 	if (likely((cqe->hds_ip_ext & CQE_L3_OK) &&
+-		   ((cqe->hds_ip_ext & CQE_L4_OK) ||
+-		    (get_cqe_l4_hdr_type(cqe) == CQE_L4_HDR_TYPE_NONE)))) {
++		   (cqe->hds_ip_ext & CQE_L4_OK))) {
+ 		skb->ip_summed = CHECKSUM_UNNECESSARY;
+ 		if (cqe_is_tunneled(cqe)) {
+ 			skb->csum_level = 1;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
+index d3fe48ff9da9..4461b44acafc 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.c
+@@ -59,6 +59,8 @@ static const struct counter_desc sw_stats_desc[] = {
+ 	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_csum_unnecessary) },
+ 	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_csum_none) },
+ 	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_csum_complete) },
++	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_csum_complete_tail) },
++	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_csum_complete_tail_slow) },
+ 	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_csum_unnecessary_inner) },
+ 	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xdp_drop) },
+ 	{ MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_xdp_redirect) },
+@@ -151,6 +153,8 @@ void mlx5e_grp_sw_update_stats(struct mlx5e_priv *priv)
+ 		s->rx_removed_vlan_packets += rq_stats->removed_vlan_packets;
+ 		s->rx_csum_none	+= rq_stats->csum_none;
+ 		s->rx_csum_complete += rq_stats->csum_complete;
++		s->rx_csum_complete_tail += rq_stats->csum_complete_tail;
++		s->rx_csum_complete_tail_slow += rq_stats->csum_complete_tail_slow;
+ 		s->rx_csum_unnecessary += rq_stats->csum_unnecessary;
+ 		s->rx_csum_unnecessary_inner += rq_stats->csum_unnecessary_inner;
+ 		s->rx_xdp_drop     += rq_stats->xdp_drop;
+@@ -1192,6 +1196,8 @@ static const struct counter_desc rq_stats_desc[] = {
+ 	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, packets) },
+ 	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, bytes) },
+ 	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, csum_complete) },
++	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, csum_complete_tail) },
++	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, csum_complete_tail_slow) },
+ 	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, csum_unnecessary) },
+ 	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, csum_unnecessary_inner) },
+ 	{ MLX5E_DECLARE_RX_STAT(struct mlx5e_rq_stats, csum_none) },
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
+index fe91ec06e3c7..714303bf0797 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
+@@ -71,6 +71,8 @@ struct mlx5e_sw_stats {
+ 	u64 rx_csum_unnecessary;
+ 	u64 rx_csum_none;
+ 	u64 rx_csum_complete;
++	u64 rx_csum_complete_tail;
++	u64 rx_csum_complete_tail_slow;
+ 	u64 rx_csum_unnecessary_inner;
+ 	u64 rx_xdp_drop;
+ 	u64 rx_xdp_redirect;
+@@ -181,6 +183,8 @@ struct mlx5e_rq_stats {
+ 	u64 packets;
+ 	u64 bytes;
+ 	u64 csum_complete;
++	u64 csum_complete_tail;
++	u64 csum_complete_tail_slow;
+ 	u64 csum_unnecessary;
+ 	u64 csum_unnecessary_inner;
+ 	u64 csum_none;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fpga/tls.c b/drivers/net/ethernet/mellanox/mlx5/core/fpga/tls.c
+index 8de64e88c670..22a2ef111514 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fpga/tls.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fpga/tls.c
+@@ -148,14 +148,16 @@ static int mlx5_fpga_tls_alloc_swid(struct idr *idr, spinlock_t *idr_spinlock,
+ 	return ret;
+ }
+ 
+-static void mlx5_fpga_tls_release_swid(struct idr *idr,
+-				       spinlock_t *idr_spinlock, u32 swid)
++static void *mlx5_fpga_tls_release_swid(struct idr *idr,
++					spinlock_t *idr_spinlock, u32 swid)
+ {
+ 	unsigned long flags;
++	void *ptr;
+ 
+ 	spin_lock_irqsave(idr_spinlock, flags);
+-	idr_remove(idr, swid);
++	ptr = idr_remove(idr, swid);
+ 	spin_unlock_irqrestore(idr_spinlock, flags);
++	return ptr;
+ }
+ 
+ static void mlx_tls_kfree_complete(struct mlx5_fpga_conn *conn,
+@@ -165,20 +167,12 @@ static void mlx_tls_kfree_complete(struct mlx5_fpga_conn *conn,
+ 	kfree(buf);
+ }
+ 
+-struct mlx5_teardown_stream_context {
+-	struct mlx5_fpga_tls_command_context cmd;
+-	u32 swid;
+-};
+-
+ static void
+ mlx5_fpga_tls_teardown_completion(struct mlx5_fpga_conn *conn,
+ 				  struct mlx5_fpga_device *fdev,
+ 				  struct mlx5_fpga_tls_command_context *cmd,
+ 				  struct mlx5_fpga_dma_buf *resp)
+ {
+-	struct mlx5_teardown_stream_context *ctx =
+-		    container_of(cmd, struct mlx5_teardown_stream_context, cmd);
+-
+ 	if (resp) {
+ 		u32 syndrome = MLX5_GET(tls_resp, resp->sg[0].data, syndrome);
+ 
+@@ -186,14 +180,6 @@ mlx5_fpga_tls_teardown_completion(struct mlx5_fpga_conn *conn,
+ 			mlx5_fpga_err(fdev,
+ 				      "Teardown stream failed with syndrome = %d",
+ 				      syndrome);
+-		else if (MLX5_GET(tls_cmd, cmd->buf.sg[0].data, direction_sx))
+-			mlx5_fpga_tls_release_swid(&fdev->tls->tx_idr,
+-						   &fdev->tls->tx_idr_spinlock,
+-						   ctx->swid);
+-		else
+-			mlx5_fpga_tls_release_swid(&fdev->tls->rx_idr,
+-						   &fdev->tls->rx_idr_spinlock,
+-						   ctx->swid);
+ 	}
+ 	mlx5_fpga_tls_put_command_ctx(cmd);
+ }
+@@ -217,22 +203,22 @@ int mlx5_fpga_tls_resync_rx(struct mlx5_core_dev *mdev, u32 handle, u32 seq,
+ 	void *cmd;
+ 	int ret;
+ 
+-	rcu_read_lock();
+-	flow = idr_find(&mdev->fpga->tls->rx_idr, ntohl(handle));
+-	rcu_read_unlock();
+-
+-	if (!flow) {
+-		WARN_ONCE(1, "Received NULL pointer for handle\n");
+-		return -EINVAL;
+-	}
+-
+ 	buf = kzalloc(size, GFP_ATOMIC);
+ 	if (!buf)
+ 		return -ENOMEM;
+ 
+ 	cmd = (buf + 1);
+ 
++	rcu_read_lock();
++	flow = idr_find(&mdev->fpga->tls->rx_idr, ntohl(handle));
++	if (unlikely(!flow)) {
++		rcu_read_unlock();
++		WARN_ONCE(1, "Received NULL pointer for handle\n");
++		kfree(buf);
++		return -EINVAL;
++	}
+ 	mlx5_fpga_tls_flow_to_cmd(flow, cmd);
++	rcu_read_unlock();
+ 
+ 	MLX5_SET(tls_cmd, cmd, swid, ntohl(handle));
+ 	MLX5_SET64(tls_cmd, cmd, tls_rcd_sn, be64_to_cpu(rcd_sn));
+@@ -253,7 +239,7 @@ int mlx5_fpga_tls_resync_rx(struct mlx5_core_dev *mdev, u32 handle, u32 seq,
+ static void mlx5_fpga_tls_send_teardown_cmd(struct mlx5_core_dev *mdev,
+ 					    void *flow, u32 swid, gfp_t flags)
+ {
+-	struct mlx5_teardown_stream_context *ctx;
++	struct mlx5_fpga_tls_command_context *ctx;
+ 	struct mlx5_fpga_dma_buf *buf;
+ 	void *cmd;
+ 
+@@ -261,7 +247,7 @@ static void mlx5_fpga_tls_send_teardown_cmd(struct mlx5_core_dev *mdev,
+ 	if (!ctx)
+ 		return;
+ 
+-	buf = &ctx->cmd.buf;
++	buf = &ctx->buf;
+ 	cmd = (ctx + 1);
+ 	MLX5_SET(tls_cmd, cmd, command_type, CMD_TEARDOWN_STREAM);
+ 	MLX5_SET(tls_cmd, cmd, swid, swid);
+@@ -272,8 +258,7 @@ static void mlx5_fpga_tls_send_teardown_cmd(struct mlx5_core_dev *mdev,
+ 	buf->sg[0].data = cmd;
+ 	buf->sg[0].size = MLX5_TLS_COMMAND_SIZE;
+ 
+-	ctx->swid = swid;
+-	mlx5_fpga_tls_cmd_send(mdev->fpga, &ctx->cmd,
++	mlx5_fpga_tls_cmd_send(mdev->fpga, ctx,
+ 			       mlx5_fpga_tls_teardown_completion);
+ }
+ 
+@@ -283,13 +268,14 @@ void mlx5_fpga_tls_del_flow(struct mlx5_core_dev *mdev, u32 swid,
+ 	struct mlx5_fpga_tls *tls = mdev->fpga->tls;
+ 	void *flow;
+ 
+-	rcu_read_lock();
+ 	if (direction_sx)
+-		flow = idr_find(&tls->tx_idr, swid);
++		flow = mlx5_fpga_tls_release_swid(&tls->tx_idr,
++						  &tls->tx_idr_spinlock,
++						  swid);
+ 	else
+-		flow = idr_find(&tls->rx_idr, swid);
+-
+-	rcu_read_unlock();
++		flow = mlx5_fpga_tls_release_swid(&tls->rx_idr,
++						  &tls->rx_idr_spinlock,
++						  swid);
+ 
+ 	if (!flow) {
+ 		mlx5_fpga_err(mdev->fpga, "No flow information for swid %u\n",
+@@ -297,6 +283,7 @@ void mlx5_fpga_tls_del_flow(struct mlx5_core_dev *mdev, u32 swid,
+ 		return;
+ 	}
+ 
++	synchronize_rcu(); /* before kfree(flow) */
+ 	mlx5_fpga_tls_send_teardown_cmd(mdev, flow, swid, flags);
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.c b/drivers/net/ethernet/mellanox/mlxsw/core.c
+index ddedf8ab5b64..fc643fde5a4a 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/core.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/core.c
+@@ -568,7 +568,7 @@ static int mlxsw_emad_init(struct mlxsw_core *mlxsw_core)
+ 	if (!(mlxsw_core->bus->features & MLXSW_BUS_F_TXRX))
+ 		return 0;
+ 
+-	emad_wq = alloc_workqueue("mlxsw_core_emad", WQ_MEM_RECLAIM, 0);
++	emad_wq = alloc_workqueue("mlxsw_core_emad", 0, 0);
+ 	if (!emad_wq)
+ 		return -ENOMEM;
+ 	mlxsw_core->emad_wq = emad_wq;
+@@ -1912,10 +1912,10 @@ static int __init mlxsw_core_module_init(void)
+ {
+ 	int err;
+ 
+-	mlxsw_wq = alloc_workqueue(mlxsw_core_driver_name, WQ_MEM_RECLAIM, 0);
++	mlxsw_wq = alloc_workqueue(mlxsw_core_driver_name, 0, 0);
+ 	if (!mlxsw_wq)
+ 		return -ENOMEM;
+-	mlxsw_owq = alloc_ordered_workqueue("%s_ordered", WQ_MEM_RECLAIM,
++	mlxsw_owq = alloc_ordered_workqueue("%s_ordered", 0,
+ 					    mlxsw_core_driver_name);
+ 	if (!mlxsw_owq) {
+ 		err = -ENOMEM;
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+index 98e5ffd71b91..2f6afbfd689f 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+@@ -6745,7 +6745,7 @@ static int mlxsw_sp_router_port_check_rif_addr(struct mlxsw_sp *mlxsw_sp,
+ 	/* A RIF is not created for macvlan netdevs. Their MAC is used to
+ 	 * populate the FDB
+ 	 */
+-	if (netif_is_macvlan(dev))
++	if (netif_is_macvlan(dev) || netif_is_l3_master(dev))
+ 		return 0;
+ 
+ 	for (i = 0; i < MLXSW_CORE_RES_GET(mlxsw_sp->core, MAX_RIFS); i++) {
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
+index c772109b638d..f5a10e286400 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
+@@ -1654,7 +1654,7 @@ static int mlxsw_sp_port_mdb_add(struct mlxsw_sp_port *mlxsw_sp_port,
+ 	u16 fid_index;
+ 	int err = 0;
+ 
+-	if (switchdev_trans_ph_prepare(trans))
++	if (switchdev_trans_ph_commit(trans))
+ 		return 0;
+ 
+ 	bridge_port = mlxsw_sp_bridge_port_find(mlxsw_sp->bridge, orig_dev);
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/action.c b/drivers/net/ethernet/netronome/nfp/flower/action.c
+index 8d54b36afee8..2bbc5b8f92c2 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/action.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/action.c
+@@ -49,8 +49,7 @@ nfp_fl_push_vlan(struct nfp_fl_push_vlan *push_vlan,
+ 
+ 	tmp_push_vlan_tci =
+ 		FIELD_PREP(NFP_FL_PUSH_VLAN_PRIO, tcf_vlan_push_prio(action)) |
+-		FIELD_PREP(NFP_FL_PUSH_VLAN_VID, tcf_vlan_push_vid(action)) |
+-		NFP_FL_PUSH_VLAN_CFI;
++		FIELD_PREP(NFP_FL_PUSH_VLAN_VID, tcf_vlan_push_vid(action));
+ 	push_vlan->vlan_tci = cpu_to_be16(tmp_push_vlan_tci);
+ }
+ 
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/cmsg.h b/drivers/net/ethernet/netronome/nfp/flower/cmsg.h
+index 15f41cfef9f1..ab07d76b4186 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/cmsg.h
++++ b/drivers/net/ethernet/netronome/nfp/flower/cmsg.h
+@@ -26,7 +26,7 @@
+ #define NFP_FLOWER_LAYER2_GENEVE_OP	BIT(6)
+ 
+ #define NFP_FLOWER_MASK_VLAN_PRIO	GENMASK(15, 13)
+-#define NFP_FLOWER_MASK_VLAN_CFI	BIT(12)
++#define NFP_FLOWER_MASK_VLAN_PRESENT	BIT(12)
+ #define NFP_FLOWER_MASK_VLAN_VID	GENMASK(11, 0)
+ 
+ #define NFP_FLOWER_MASK_MPLS_LB		GENMASK(31, 12)
+@@ -82,7 +82,6 @@
+ #define NFP_FL_OUT_FLAGS_TYPE_IDX	GENMASK(2, 0)
+ 
+ #define NFP_FL_PUSH_VLAN_PRIO		GENMASK(15, 13)
+-#define NFP_FL_PUSH_VLAN_CFI		BIT(12)
+ #define NFP_FL_PUSH_VLAN_VID		GENMASK(11, 0)
+ 
+ #define IPV6_FLOW_LABEL_MASK		cpu_to_be32(0x000fffff)
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/match.c b/drivers/net/ethernet/netronome/nfp/flower/match.c
+index cdf75595f627..571cc8ced33e 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/match.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/match.c
+@@ -26,14 +26,12 @@ nfp_flower_compile_meta_tci(struct nfp_flower_meta_tci *frame,
+ 						      FLOW_DISSECTOR_KEY_VLAN,
+ 						      target);
+ 		/* Populate the tci field. */
+-		if (flow_vlan->vlan_id || flow_vlan->vlan_priority) {
+-			tmp_tci = FIELD_PREP(NFP_FLOWER_MASK_VLAN_PRIO,
+-					     flow_vlan->vlan_priority) |
+-				  FIELD_PREP(NFP_FLOWER_MASK_VLAN_VID,
+-					     flow_vlan->vlan_id) |
+-				  NFP_FLOWER_MASK_VLAN_CFI;
+-			frame->tci = cpu_to_be16(tmp_tci);
+-		}
++		tmp_tci = NFP_FLOWER_MASK_VLAN_PRESENT;
++		tmp_tci |= FIELD_PREP(NFP_FLOWER_MASK_VLAN_PRIO,
++				      flow_vlan->vlan_priority) |
++			   FIELD_PREP(NFP_FLOWER_MASK_VLAN_VID,
++				      flow_vlan->vlan_id);
++		frame->tci = cpu_to_be16(tmp_tci);
+ 	}
+ }
+ 
+diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c
+index 6ce3f666d142..1283632091d5 100644
+--- a/drivers/net/team/team.c
++++ b/drivers/net/team/team.c
+@@ -1247,6 +1247,23 @@ static int team_port_add(struct team *team, struct net_device *port_dev,
+ 		goto err_option_port_add;
+ 	}
+ 
++	/* set promiscuity level to new slave */
++	if (dev->flags & IFF_PROMISC) {
++		err = dev_set_promiscuity(port_dev, 1);
++		if (err)
++			goto err_set_slave_promisc;
++	}
++
++	/* set allmulti level to new slave */
++	if (dev->flags & IFF_ALLMULTI) {
++		err = dev_set_allmulti(port_dev, 1);
++		if (err) {
++			if (dev->flags & IFF_PROMISC)
++				dev_set_promiscuity(port_dev, -1);
++			goto err_set_slave_promisc;
++		}
++	}
++
+ 	netif_addr_lock_bh(dev);
+ 	dev_uc_sync_multiple(port_dev, dev);
+ 	dev_mc_sync_multiple(port_dev, dev);
+@@ -1263,6 +1280,9 @@ static int team_port_add(struct team *team, struct net_device *port_dev,
+ 
+ 	return 0;
+ 
++err_set_slave_promisc:
++	__team_option_inst_del_port(team, port);
++
+ err_option_port_add:
+ 	team_upper_dev_unlink(team, port);
+ 
+@@ -1308,6 +1328,12 @@ static int team_port_del(struct team *team, struct net_device *port_dev)
+ 
+ 	team_port_disable(team, port);
+ 	list_del_rcu(&port->list);
++
++	if (dev->flags & IFF_PROMISC)
++		dev_set_promiscuity(port_dev, -1);
++	if (dev->flags & IFF_ALLMULTI)
++		dev_set_allmulti(port_dev, -1);
++
+ 	team_upper_dev_unlink(team, port);
+ 	netdev_rx_handler_unregister(port_dev);
+ 	team_port_disable_netpoll(port);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c b/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
+index 7c9dfa54fee8..9678322aca60 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
+@@ -421,7 +421,6 @@ void mt76x02_send_tx_status(struct mt76x02_dev *dev,
+ 		return;
+ 
+ 	rcu_read_lock();
+-	mt76_tx_status_lock(mdev, &list);
+ 
+ 	if (stat->wcid < ARRAY_SIZE(dev->mt76.wcid))
+ 		wcid = rcu_dereference(dev->mt76.wcid[stat->wcid]);
+@@ -434,6 +433,8 @@ void mt76x02_send_tx_status(struct mt76x02_dev *dev,
+ 					  drv_priv);
+ 	}
+ 
++	mt76_tx_status_lock(mdev, &list);
++
+ 	if (wcid) {
+ 		if (stat->pktid)
+ 			status.skb = mt76_tx_status_skb_get(mdev, wcid,
+@@ -453,7 +454,9 @@ void mt76x02_send_tx_status(struct mt76x02_dev *dev,
+ 		if (*update == 0 && stat_val == stat_cache &&
+ 		    stat->wcid == msta->status.wcid && msta->n_frames < 32) {
+ 			msta->n_frames++;
+-			goto out;
++			mt76_tx_status_unlock(mdev, &list);
++			rcu_read_unlock();
++			return;
+ 		}
+ 
+ 		mt76x02_mac_fill_tx_status(dev, status.info, &msta->status,
+@@ -469,11 +472,10 @@ void mt76x02_send_tx_status(struct mt76x02_dev *dev,
+ 
+ 	if (status.skb)
+ 		mt76_tx_status_skb_done(mdev, status.skb, &list);
+-	else
+-		ieee80211_tx_status_ext(mt76_hw(dev), &status);
+-
+-out:
+ 	mt76_tx_status_unlock(mdev, &list);
++
++	if (!status.skb)
++		ieee80211_tx_status_ext(mt76_hw(dev), &status);
+ 	rcu_read_unlock();
+ }
+ 
+diff --git a/drivers/net/wireless/ralink/rt2x00/rt2x00.h b/drivers/net/wireless/ralink/rt2x00/rt2x00.h
+index 4b1744e9fb78..50b92ca92bd7 100644
+--- a/drivers/net/wireless/ralink/rt2x00/rt2x00.h
++++ b/drivers/net/wireless/ralink/rt2x00/rt2x00.h
+@@ -673,7 +673,6 @@ enum rt2x00_state_flags {
+ 	CONFIG_CHANNEL_HT40,
+ 	CONFIG_POWERSAVING,
+ 	CONFIG_HT_DISABLED,
+-	CONFIG_QOS_DISABLED,
+ 	CONFIG_MONITORING,
+ 
+ 	/*
+diff --git a/drivers/net/wireless/ralink/rt2x00/rt2x00mac.c b/drivers/net/wireless/ralink/rt2x00/rt2x00mac.c
+index 2825560e2424..e8462f25d252 100644
+--- a/drivers/net/wireless/ralink/rt2x00/rt2x00mac.c
++++ b/drivers/net/wireless/ralink/rt2x00/rt2x00mac.c
+@@ -642,18 +642,8 @@ void rt2x00mac_bss_info_changed(struct ieee80211_hw *hw,
+ 			rt2x00dev->intf_associated--;
+ 
+ 		rt2x00leds_led_assoc(rt2x00dev, !!rt2x00dev->intf_associated);
+-
+-		clear_bit(CONFIG_QOS_DISABLED, &rt2x00dev->flags);
+ 	}
+ 
+-	/*
+-	 * Check for access point which do not support 802.11e . We have to
+-	 * generate data frames sequence number in S/W for such AP, because
+-	 * of H/W bug.
+-	 */
+-	if (changes & BSS_CHANGED_QOS && !bss_conf->qos)
+-		set_bit(CONFIG_QOS_DISABLED, &rt2x00dev->flags);
+-
+ 	/*
+ 	 * When the erp information has changed, we should perform
+ 	 * additional configuration steps. For all other changes we are done.
+diff --git a/drivers/net/wireless/ralink/rt2x00/rt2x00queue.c b/drivers/net/wireless/ralink/rt2x00/rt2x00queue.c
+index 92ddc19e7bf7..4834b4eb0206 100644
+--- a/drivers/net/wireless/ralink/rt2x00/rt2x00queue.c
++++ b/drivers/net/wireless/ralink/rt2x00/rt2x00queue.c
+@@ -201,15 +201,18 @@ static void rt2x00queue_create_tx_descriptor_seq(struct rt2x00_dev *rt2x00dev,
+ 	if (!rt2x00_has_cap_flag(rt2x00dev, REQUIRE_SW_SEQNO)) {
+ 		/*
+ 		 * rt2800 has a H/W (or F/W) bug, device incorrectly increase
+-		 * seqno on retransmited data (non-QOS) frames. To workaround
+-		 * the problem let's generate seqno in software if QOS is
+-		 * disabled.
++		 * seqno on retransmitted data (non-QOS) and management frames.
++		 * To workaround the problem let's generate seqno in software.
++		 * Except for beacons which are transmitted periodically by H/W
++		 * hence hardware has to assign seqno for them.
+ 		 */
+-		if (test_bit(CONFIG_QOS_DISABLED, &rt2x00dev->flags))
+-			__clear_bit(ENTRY_TXD_GENERATE_SEQ, &txdesc->flags);
+-		else
++	    	if (ieee80211_is_beacon(hdr->frame_control)) {
++			__set_bit(ENTRY_TXD_GENERATE_SEQ, &txdesc->flags);
+ 			/* H/W will generate sequence number */
+ 			return;
++		}
++
++		__clear_bit(ENTRY_TXD_GENERATE_SEQ, &txdesc->flags);
+ 	}
+ 
+ 	/*
+diff --git a/drivers/scsi/libfc/fc_rport.c b/drivers/scsi/libfc/fc_rport.c
+index dfba4921b265..5bf61431434b 100644
+--- a/drivers/scsi/libfc/fc_rport.c
++++ b/drivers/scsi/libfc/fc_rport.c
+@@ -2162,7 +2162,6 @@ static void fc_rport_recv_logo_req(struct fc_lport *lport, struct fc_frame *fp)
+ 		FC_RPORT_DBG(rdata, "Received LOGO request while in state %s\n",
+ 			     fc_rport_state(rdata));
+ 
+-		rdata->flags &= ~FC_RP_STARTED;
+ 		fc_rport_enter_delete(rdata, RPORT_EV_STOP);
+ 		mutex_unlock(&rdata->rp_mutex);
+ 		kref_put(&rdata->kref, fc_rport_destroy);
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index 655ad26106e4..5c78710b713f 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -1763,8 +1763,12 @@ out_put_budget:
+ 			ret = BLK_STS_DEV_RESOURCE;
+ 		break;
+ 	default:
++		if (unlikely(!scsi_device_online(sdev)))
++			scsi_req(req)->result = DID_NO_CONNECT << 16;
++		else
++			scsi_req(req)->result = DID_ERROR << 16;
+ 		/*
+-		 * Make sure to release all allocated ressources when
++		 * Make sure to release all allocated resources when
+ 		 * we hit an error, as we will never see this command
+ 		 * again.
+ 		 */
+diff --git a/drivers/staging/comedi/drivers/ni_usb6501.c b/drivers/staging/comedi/drivers/ni_usb6501.c
+index 808ed92ed66f..1bb1cb651349 100644
+--- a/drivers/staging/comedi/drivers/ni_usb6501.c
++++ b/drivers/staging/comedi/drivers/ni_usb6501.c
+@@ -463,10 +463,8 @@ static int ni6501_alloc_usb_buffers(struct comedi_device *dev)
+ 
+ 	size = usb_endpoint_maxp(devpriv->ep_tx);
+ 	devpriv->usb_tx_buf = kzalloc(size, GFP_KERNEL);
+-	if (!devpriv->usb_tx_buf) {
+-		kfree(devpriv->usb_rx_buf);
++	if (!devpriv->usb_tx_buf)
+ 		return -ENOMEM;
+-	}
+ 
+ 	return 0;
+ }
+@@ -518,6 +516,9 @@ static int ni6501_auto_attach(struct comedi_device *dev,
+ 	if (!devpriv)
+ 		return -ENOMEM;
+ 
++	mutex_init(&devpriv->mut);
++	usb_set_intfdata(intf, devpriv);
++
+ 	ret = ni6501_find_endpoints(dev);
+ 	if (ret)
+ 		return ret;
+@@ -526,9 +527,6 @@ static int ni6501_auto_attach(struct comedi_device *dev,
+ 	if (ret)
+ 		return ret;
+ 
+-	mutex_init(&devpriv->mut);
+-	usb_set_intfdata(intf, devpriv);
+-
+ 	ret = comedi_alloc_subdevices(dev, 2);
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/staging/comedi/drivers/vmk80xx.c b/drivers/staging/comedi/drivers/vmk80xx.c
+index 6234b649d887..65dc6c51037e 100644
+--- a/drivers/staging/comedi/drivers/vmk80xx.c
++++ b/drivers/staging/comedi/drivers/vmk80xx.c
+@@ -682,10 +682,8 @@ static int vmk80xx_alloc_usb_buffers(struct comedi_device *dev)
+ 
+ 	size = usb_endpoint_maxp(devpriv->ep_tx);
+ 	devpriv->usb_tx_buf = kzalloc(size, GFP_KERNEL);
+-	if (!devpriv->usb_tx_buf) {
+-		kfree(devpriv->usb_rx_buf);
++	if (!devpriv->usb_tx_buf)
+ 		return -ENOMEM;
+-	}
+ 
+ 	return 0;
+ }
+@@ -800,6 +798,8 @@ static int vmk80xx_auto_attach(struct comedi_device *dev,
+ 
+ 	devpriv->model = board->model;
+ 
++	sema_init(&devpriv->limit_sem, 8);
++
+ 	ret = vmk80xx_find_usb_endpoints(dev);
+ 	if (ret)
+ 		return ret;
+@@ -808,8 +808,6 @@ static int vmk80xx_auto_attach(struct comedi_device *dev,
+ 	if (ret)
+ 		return ret;
+ 
+-	sema_init(&devpriv->limit_sem, 8);
+-
+ 	usb_set_intfdata(intf, devpriv);
+ 
+ 	if (devpriv->model == VMK8055_MODEL)
+diff --git a/drivers/staging/iio/adc/ad7192.c b/drivers/staging/iio/adc/ad7192.c
+index acdbc07fd259..2fc8bc22b57b 100644
+--- a/drivers/staging/iio/adc/ad7192.c
++++ b/drivers/staging/iio/adc/ad7192.c
+@@ -109,10 +109,10 @@
+ #define AD7192_CH_AIN3		BIT(6) /* AIN3 - AINCOM */
+ #define AD7192_CH_AIN4		BIT(7) /* AIN4 - AINCOM */
+ 
+-#define AD7193_CH_AIN1P_AIN2M	0x000  /* AIN1(+) - AIN2(-) */
+-#define AD7193_CH_AIN3P_AIN4M	0x001  /* AIN3(+) - AIN4(-) */
+-#define AD7193_CH_AIN5P_AIN6M	0x002  /* AIN5(+) - AIN6(-) */
+-#define AD7193_CH_AIN7P_AIN8M	0x004  /* AIN7(+) - AIN8(-) */
++#define AD7193_CH_AIN1P_AIN2M	0x001  /* AIN1(+) - AIN2(-) */
++#define AD7193_CH_AIN3P_AIN4M	0x002  /* AIN3(+) - AIN4(-) */
++#define AD7193_CH_AIN5P_AIN6M	0x004  /* AIN5(+) - AIN6(-) */
++#define AD7193_CH_AIN7P_AIN8M	0x008  /* AIN7(+) - AIN8(-) */
+ #define AD7193_CH_TEMP		0x100 /* Temp senseor */
+ #define AD7193_CH_AIN2P_AIN2M	0x200 /* AIN2(+) - AIN2(-) */
+ #define AD7193_CH_AIN1		0x401 /* AIN1 - AINCOM */
+diff --git a/drivers/staging/iio/meter/ade7854.c b/drivers/staging/iio/meter/ade7854.c
+index 029c3bf42d4d..07774c000c5a 100644
+--- a/drivers/staging/iio/meter/ade7854.c
++++ b/drivers/staging/iio/meter/ade7854.c
+@@ -269,7 +269,7 @@ static IIO_DEV_ATTR_VPEAK(0644,
+ static IIO_DEV_ATTR_IPEAK(0644,
+ 		ade7854_read_32bit,
+ 		ade7854_write_32bit,
+-		ADE7854_VPEAK);
++		ADE7854_IPEAK);
+ static IIO_DEV_ATTR_APHCAL(0644,
+ 		ade7854_read_16bit,
+ 		ade7854_write_16bit,
+diff --git a/drivers/staging/most/core.c b/drivers/staging/most/core.c
+index 18936cdb1083..956daf8c3bd2 100644
+--- a/drivers/staging/most/core.c
++++ b/drivers/staging/most/core.c
+@@ -1431,7 +1431,7 @@ int most_register_interface(struct most_interface *iface)
+ 
+ 	INIT_LIST_HEAD(&iface->p->channel_list);
+ 	iface->p->dev_id = id;
+-	snprintf(iface->p->name, STRING_SIZE, "mdev%d", id);
++	strcpy(iface->p->name, iface->description);
+ 	iface->dev.init_name = iface->p->name;
+ 	iface->dev.bus = &mc.bus;
+ 	iface->dev.parent = &mc.dev;
+diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
+index 93bd90f1ff14..e9a8b79ba77e 100644
+--- a/drivers/tty/serial/sh-sci.c
++++ b/drivers/tty/serial/sh-sci.c
+@@ -2497,14 +2497,16 @@ done:
+ 			 * center of the last stop bit in sampling clocks.
+ 			 */
+ 			int last_stop = bits * 2 - 1;
+-			int deviation = min_err * srr * last_stop / 2 / baud;
++			int deviation = DIV_ROUND_CLOSEST(min_err * last_stop *
++							  (int)(srr + 1),
++							  2 * (int)baud);
+ 
+ 			if (abs(deviation) >= 2) {
+ 				/* At least two sampling clocks off at the
+ 				 * last stop bit; we can increase the error
+ 				 * margin by shifting the sampling point.
+ 				 */
+-				int shift = min(-8, max(7, deviation / 2));
++				int shift = clamp(deviation / 2, -8, 7);
+ 
+ 				hssrr |= (shift << HSCIF_SRHP_SHIFT) &
+ 					 HSCIF_SRHP_MASK;
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index 9646ff63e77a..b6621a2e916d 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -1518,7 +1518,8 @@ static void csi_J(struct vc_data *vc, int vpar)
+ 			return;
+ 	}
+ 	scr_memsetw(start, vc->vc_video_erase_char, 2 * count);
+-	update_region(vc, (unsigned long) start, count);
++	if (con_should_update(vc))
++		do_update_region(vc, (unsigned long) start, count);
+ 	vc->vc_need_wrap = 0;
+ }
+ 
+diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
+index a2e5dc7716e2..674cfc5a4084 100644
+--- a/drivers/vhost/vhost.c
++++ b/drivers/vhost/vhost.c
+@@ -911,8 +911,12 @@ static int vhost_new_umem_range(struct vhost_umem *umem,
+ 				u64 start, u64 size, u64 end,
+ 				u64 userspace_addr, int perm)
+ {
+-	struct vhost_umem_node *tmp, *node = kmalloc(sizeof(*node), GFP_ATOMIC);
++	struct vhost_umem_node *tmp, *node;
+ 
++	if (!size)
++		return -EFAULT;
++
++	node = kmalloc(sizeof(*node), GFP_ATOMIC);
+ 	if (!node)
+ 		return -ENOMEM;
+ 
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index 6c934ab3722b..10ead04346ee 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -1303,6 +1303,7 @@ cifsFileInfo_get_locked(struct cifsFileInfo *cifs_file)
+ }
+ 
+ struct cifsFileInfo *cifsFileInfo_get(struct cifsFileInfo *cifs_file);
++void _cifsFileInfo_put(struct cifsFileInfo *cifs_file, bool wait_oplock_hdlr);
+ void cifsFileInfo_put(struct cifsFileInfo *cifs_file);
+ 
+ #define CIFS_CACHE_READ_FLG	1
+@@ -1824,6 +1825,7 @@ GLOBAL_EXTERN spinlock_t gidsidlock;
+ #endif /* CONFIG_CIFS_ACL */
+ 
+ void cifs_oplock_break(struct work_struct *work);
++void cifs_queue_oplock_break(struct cifsFileInfo *cfile);
+ 
+ extern const struct slow_work_ops cifs_oplock_break_ops;
+ extern struct workqueue_struct *cifsiod_wq;
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index 8d107587208f..7c05353b766c 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -360,12 +360,30 @@ cifsFileInfo_get(struct cifsFileInfo *cifs_file)
+ 	return cifs_file;
+ }
+ 
+-/*
+- * Release a reference on the file private data. This may involve closing
+- * the filehandle out on the server. Must be called without holding
+- * tcon->open_file_lock and cifs_file->file_info_lock.
++/**
++ * cifsFileInfo_put - release a reference of file priv data
++ *
++ * Always potentially wait for oplock handler. See _cifsFileInfo_put().
+  */
+ void cifsFileInfo_put(struct cifsFileInfo *cifs_file)
++{
++	_cifsFileInfo_put(cifs_file, true);
++}
++
++/**
++ * _cifsFileInfo_put - release a reference of file priv data
++ *
++ * This may involve closing the filehandle @cifs_file out on the
++ * server. Must be called without holding tcon->open_file_lock and
++ * cifs_file->file_info_lock.
++ *
++ * If @wait_for_oplock_handler is true and we are releasing the last
++ * reference, wait for any running oplock break handler of the file
++ * and cancel any pending one. If calling this function from the
++ * oplock break handler, you need to pass false.
++ *
++ */
++void _cifsFileInfo_put(struct cifsFileInfo *cifs_file, bool wait_oplock_handler)
+ {
+ 	struct inode *inode = d_inode(cifs_file->dentry);
+ 	struct cifs_tcon *tcon = tlink_tcon(cifs_file->tlink);
+@@ -414,7 +432,8 @@ void cifsFileInfo_put(struct cifsFileInfo *cifs_file)
+ 
+ 	spin_unlock(&tcon->open_file_lock);
+ 
+-	oplock_break_cancelled = cancel_work_sync(&cifs_file->oplock_break);
++	oplock_break_cancelled = wait_oplock_handler ?
++		cancel_work_sync(&cifs_file->oplock_break) : false;
+ 
+ 	if (!tcon->need_reconnect && !cifs_file->invalidHandle) {
+ 		struct TCP_Server_Info *server = tcon->ses->server;
+@@ -4480,6 +4499,7 @@ void cifs_oplock_break(struct work_struct *work)
+ 							     cinode);
+ 		cifs_dbg(FYI, "Oplock release rc = %d\n", rc);
+ 	}
++	_cifsFileInfo_put(cfile, false /* do not wait for ourself */);
+ 	cifs_done_oplock_break(cinode);
+ }
+ 
+diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c
+index bee203055b30..1e1626a2cfc3 100644
+--- a/fs/cifs/misc.c
++++ b/fs/cifs/misc.c
+@@ -501,8 +501,7 @@ is_valid_oplock_break(char *buffer, struct TCP_Server_Info *srv)
+ 					   CIFS_INODE_DOWNGRADE_OPLOCK_TO_L2,
+ 					   &pCifsInode->flags);
+ 
+-				queue_work(cifsoplockd_wq,
+-					   &netfile->oplock_break);
++				cifs_queue_oplock_break(netfile);
+ 				netfile->oplock_break_cancelled = false;
+ 
+ 				spin_unlock(&tcon->open_file_lock);
+@@ -607,6 +606,28 @@ void cifs_put_writer(struct cifsInodeInfo *cinode)
+ 	spin_unlock(&cinode->writers_lock);
+ }
+ 
++/**
++ * cifs_queue_oplock_break - queue the oplock break handler for cfile
++ *
++ * This function is called from the demultiplex thread when it
++ * receives an oplock break for @cfile.
++ *
++ * Assumes the tcon->open_file_lock is held.
++ * Assumes cfile->file_info_lock is NOT held.
++ */
++void cifs_queue_oplock_break(struct cifsFileInfo *cfile)
++{
++	/*
++	 * Bump the handle refcount now while we hold the
++	 * open_file_lock to enforce the validity of it for the oplock
++	 * break handler. The matching put is done at the end of the
++	 * handler.
++	 */
++	cifsFileInfo_get(cfile);
++
++	queue_work(cifsoplockd_wq, &cfile->oplock_break);
++}
++
+ void cifs_done_oplock_break(struct cifsInodeInfo *cinode)
+ {
+ 	clear_bit(CIFS_INODE_PENDING_OPLOCK_BREAK, &cinode->flags);
+diff --git a/fs/cifs/smb2misc.c b/fs/cifs/smb2misc.c
+index 58700d2ba8cd..0a7ed2e3ad4f 100644
+--- a/fs/cifs/smb2misc.c
++++ b/fs/cifs/smb2misc.c
+@@ -555,7 +555,7 @@ smb2_tcon_has_lease(struct cifs_tcon *tcon, struct smb2_lease_break *rsp,
+ 			clear_bit(CIFS_INODE_DOWNGRADE_OPLOCK_TO_L2,
+ 				  &cinode->flags);
+ 
+-		queue_work(cifsoplockd_wq, &cfile->oplock_break);
++		cifs_queue_oplock_break(cfile);
+ 		kfree(lw);
+ 		return true;
+ 	}
+@@ -719,8 +719,8 @@ smb2_is_valid_oplock_break(char *buffer, struct TCP_Server_Info *server)
+ 					   CIFS_INODE_DOWNGRADE_OPLOCK_TO_L2,
+ 					   &cinode->flags);
+ 				spin_unlock(&cfile->file_info_lock);
+-				queue_work(cifsoplockd_wq,
+-					   &cfile->oplock_break);
++
++				cifs_queue_oplock_break(cfile);
+ 
+ 				spin_unlock(&tcon->open_file_lock);
+ 				spin_unlock(&cifs_tcp_ses_lock);
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index ea56b1cdbdde..d5434ac0571b 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -2210,6 +2210,8 @@ smb2_query_symlink(const unsigned int xid, struct cifs_tcon *tcon,
+ 
+ 	rc = SMB2_open(xid, &oparms, utf16_path, &oplock, NULL, &err_iov,
+ 		       &resp_buftype);
++	if (!rc)
++		SMB2_close(xid, tcon, fid.persistent_fid, fid.volatile_fid);
+ 	if (!rc || !err_iov.iov_base) {
+ 		rc = -ENOENT;
+ 		goto free_path;
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 068febe37fe4..938e75cc3b66 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -815,8 +815,11 @@ SMB2_negotiate(const unsigned int xid, struct cifs_ses *ses)
+ 		} else if (rsp->DialectRevision == cpu_to_le16(SMB21_PROT_ID)) {
+ 			/* ops set to 3.0 by default for default so update */
+ 			ses->server->ops = &smb21_operations;
+-		} else if (rsp->DialectRevision == cpu_to_le16(SMB311_PROT_ID))
++			ses->server->vals = &smb21_values;
++		} else if (rsp->DialectRevision == cpu_to_le16(SMB311_PROT_ID)) {
+ 			ses->server->ops = &smb311_operations;
++			ses->server->vals = &smb311_values;
++		}
+ 	} else if (le16_to_cpu(rsp->DialectRevision) !=
+ 				ses->server->vals->protocol_id) {
+ 		/* if requested single dialect ensure returned dialect matched */
+@@ -3387,8 +3390,6 @@ SMB2_read(const unsigned int xid, struct cifs_io_parms *io_parms,
+ 	rqst.rq_nvec = 1;
+ 
+ 	rc = cifs_send_recv(xid, ses, &rqst, &resp_buftype, flags, &rsp_iov);
+-	cifs_small_buf_release(req);
+-
+ 	rsp = (struct smb2_read_rsp *)rsp_iov.iov_base;
+ 
+ 	if (rc) {
+@@ -3407,6 +3408,8 @@ SMB2_read(const unsigned int xid, struct cifs_io_parms *io_parms,
+ 				    io_parms->tcon->tid, ses->Suid,
+ 				    io_parms->offset, io_parms->length);
+ 
++	cifs_small_buf_release(req);
++
+ 	*nbytes = le32_to_cpu(rsp->DataLength);
+ 	if ((*nbytes > CIFS_MAX_MSGSIZE) ||
+ 	    (*nbytes > io_parms->length)) {
+@@ -3705,7 +3708,6 @@ SMB2_write(const unsigned int xid, struct cifs_io_parms *io_parms,
+ 
+ 	rc = cifs_send_recv(xid, io_parms->tcon->ses, &rqst,
+ 			    &resp_buftype, flags, &rsp_iov);
+-	cifs_small_buf_release(req);
+ 	rsp = (struct smb2_write_rsp *)rsp_iov.iov_base;
+ 
+ 	if (rc) {
+@@ -3723,6 +3725,7 @@ SMB2_write(const unsigned int xid, struct cifs_io_parms *io_parms,
+ 				     io_parms->offset, *nbytes);
+ 	}
+ 
++	cifs_small_buf_release(req);
+ 	free_rsp_buf(resp_buftype, rsp);
+ 	return rc;
+ }
+diff --git a/fs/dax.c b/fs/dax.c
+index 05cca2214ae3..827ee143413e 100644
+--- a/fs/dax.c
++++ b/fs/dax.c
+@@ -33,6 +33,7 @@
+ #include <linux/sizes.h>
+ #include <linux/mmu_notifier.h>
+ #include <linux/iomap.h>
++#include <asm/pgalloc.h>
+ #include "internal.h"
+ 
+ #define CREATE_TRACE_POINTS
+@@ -1409,7 +1410,9 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf,
+ {
+ 	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
+ 	unsigned long pmd_addr = vmf->address & PMD_MASK;
++	struct vm_area_struct *vma = vmf->vma;
+ 	struct inode *inode = mapping->host;
++	pgtable_t pgtable = NULL;
+ 	struct page *zero_page;
+ 	spinlock_t *ptl;
+ 	pmd_t pmd_entry;
+@@ -1424,12 +1427,22 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf,
+ 	*entry = dax_insert_entry(xas, mapping, vmf, *entry, pfn,
+ 			DAX_PMD | DAX_ZERO_PAGE, false);
+ 
++	if (arch_needs_pgtable_deposit()) {
++		pgtable = pte_alloc_one(vma->vm_mm);
++		if (!pgtable)
++			return VM_FAULT_OOM;
++	}
++
+ 	ptl = pmd_lock(vmf->vma->vm_mm, vmf->pmd);
+ 	if (!pmd_none(*(vmf->pmd))) {
+ 		spin_unlock(ptl);
+ 		goto fallback;
+ 	}
+ 
++	if (pgtable) {
++		pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable);
++		mm_inc_nr_ptes(vma->vm_mm);
++	}
+ 	pmd_entry = mk_pmd(zero_page, vmf->vma->vm_page_prot);
+ 	pmd_entry = pmd_mkhuge(pmd_entry);
+ 	set_pmd_at(vmf->vma->vm_mm, pmd_addr, vmf->pmd, pmd_entry);
+@@ -1438,6 +1451,8 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf,
+ 	return VM_FAULT_NOPAGE;
+ 
+ fallback:
++	if (pgtable)
++		pte_free(vma->vm_mm, pgtable);
+ 	trace_dax_pmd_load_hole_fallback(inode, vmf, zero_page, *entry);
+ 	return VM_FAULT_FALLBACK;
+ }
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index 85b0ef890b28..91bd2ff0c62c 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -1141,6 +1141,24 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
+ 					count = -EINTR;
+ 					goto out_mm;
+ 				}
++				/*
++				 * Avoid to modify vma->vm_flags
++				 * without locked ops while the
++				 * coredump reads the vm_flags.
++				 */
++				if (!mmget_still_valid(mm)) {
++					/*
++					 * Silently return "count"
++					 * like if get_task_mm()
++					 * failed. FIXME: should this
++					 * function have returned
++					 * -ESRCH if get_task_mm()
++					 * failed like if
++					 * get_proc_task() fails?
++					 */
++					up_write(&mm->mmap_sem);
++					goto out_mm;
++				}
+ 				for (vma = mm->mmap; vma; vma = vma->vm_next) {
+ 					vma->vm_flags &= ~VM_SOFTDIRTY;
+ 					vma_set_page_prot(vma);
+diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
+index 89800fc7dc9d..f5de1e726356 100644
+--- a/fs/userfaultfd.c
++++ b/fs/userfaultfd.c
+@@ -629,6 +629,8 @@ static void userfaultfd_event_wait_completion(struct userfaultfd_ctx *ctx,
+ 
+ 		/* the various vma->vm_userfaultfd_ctx still points to it */
+ 		down_write(&mm->mmap_sem);
++		/* no task can run (and in turn coredump) yet */
++		VM_WARN_ON(!mmget_still_valid(mm));
+ 		for (vma = mm->mmap; vma; vma = vma->vm_next)
+ 			if (vma->vm_userfaultfd_ctx.ctx == release_new_ctx) {
+ 				vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
+@@ -883,6 +885,8 @@ static int userfaultfd_release(struct inode *inode, struct file *file)
+ 	 * taking the mmap_sem for writing.
+ 	 */
+ 	down_write(&mm->mmap_sem);
++	if (!mmget_still_valid(mm))
++		goto skip_mm;
+ 	prev = NULL;
+ 	for (vma = mm->mmap; vma; vma = vma->vm_next) {
+ 		cond_resched();
+@@ -905,6 +909,7 @@ static int userfaultfd_release(struct inode *inode, struct file *file)
+ 		vma->vm_flags = new_flags;
+ 		vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
+ 	}
++skip_mm:
+ 	up_write(&mm->mmap_sem);
+ 	mmput(mm);
+ wakeup:
+@@ -1333,6 +1338,8 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
+ 		goto out;
+ 
+ 	down_write(&mm->mmap_sem);
++	if (!mmget_still_valid(mm))
++		goto out_unlock;
+ 	vma = find_vma_prev(mm, start, &prev);
+ 	if (!vma)
+ 		goto out_unlock;
+@@ -1520,6 +1527,8 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
+ 		goto out;
+ 
+ 	down_write(&mm->mmap_sem);
++	if (!mmget_still_valid(mm))
++		goto out_unlock;
+ 	vma = find_vma_prev(mm, start, &prev);
+ 	if (!vma)
+ 		goto out_unlock;
+diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
+index e07e91daaacc..72ff78c33033 100644
+--- a/include/linux/kprobes.h
++++ b/include/linux/kprobes.h
+@@ -173,6 +173,7 @@ struct kretprobe_instance {
+ 	struct kretprobe *rp;
+ 	kprobe_opcode_t *ret_addr;
+ 	struct task_struct *task;
++	void *fp;
+ 	char data[0];
+ };
+ 
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index 848b54b7ec91..7df56decae37 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -1484,6 +1484,7 @@ struct net_device_ops {
+  * @IFF_FAILOVER: device is a failover master device
+  * @IFF_FAILOVER_SLAVE: device is lower dev of a failover master device
+  * @IFF_L3MDEV_RX_HANDLER: only invoke the rx handler of L3 master device
++ * @IFF_LIVE_RENAME_OK: rename is allowed while device is up and running
+  */
+ enum netdev_priv_flags {
+ 	IFF_802_1Q_VLAN			= 1<<0,
+@@ -1516,6 +1517,7 @@ enum netdev_priv_flags {
+ 	IFF_FAILOVER			= 1<<27,
+ 	IFF_FAILOVER_SLAVE		= 1<<28,
+ 	IFF_L3MDEV_RX_HANDLER		= 1<<29,
++	IFF_LIVE_RENAME_OK		= 1<<30,
+ };
+ 
+ #define IFF_802_1Q_VLAN			IFF_802_1Q_VLAN
+@@ -1547,6 +1549,7 @@ enum netdev_priv_flags {
+ #define IFF_FAILOVER			IFF_FAILOVER
+ #define IFF_FAILOVER_SLAVE		IFF_FAILOVER_SLAVE
+ #define IFF_L3MDEV_RX_HANDLER		IFF_L3MDEV_RX_HANDLER
++#define IFF_LIVE_RENAME_OK		IFF_LIVE_RENAME_OK
+ 
+ /**
+  *	struct net_device - The DEVICE structure.
+diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
+index 3bfa6a0cbba4..c1dbb737a36c 100644
+--- a/include/linux/sched/mm.h
++++ b/include/linux/sched/mm.h
+@@ -49,6 +49,27 @@ static inline void mmdrop(struct mm_struct *mm)
+ 		__mmdrop(mm);
+ }
+ 
++/*
++ * This has to be called after a get_task_mm()/mmget_not_zero()
++ * followed by taking the mmap_sem for writing before modifying the
++ * vmas or anything the coredump pretends not to change from under it.
++ *
++ * NOTE: find_extend_vma() called from GUP context is the only place
++ * that can modify the "mm" (notably the vm_start/end) under mmap_sem
++ * for reading and outside the context of the process, so it is also
++ * the only case that holds the mmap_sem for reading that must call
++ * this function. Generally if the mmap_sem is hold for reading
++ * there's no need of this check after get_task_mm()/mmget_not_zero().
++ *
++ * This function can be obsoleted and the check can be removed, after
++ * the coredump code will hold the mmap_sem for writing before
++ * invoking the ->core_dump methods.
++ */
++static inline bool mmget_still_valid(struct mm_struct *mm)
++{
++	return likely(!mm->core_state);
++}
++
+ /**
+  * mmget() - Pin the address space associated with a &struct mm_struct.
+  * @mm: The address space to pin.
+diff --git a/include/net/nfc/nci_core.h b/include/net/nfc/nci_core.h
+index 87499b6b35d6..df5c69db68af 100644
+--- a/include/net/nfc/nci_core.h
++++ b/include/net/nfc/nci_core.h
+@@ -166,7 +166,7 @@ struct nci_conn_info {
+  * According to specification 102 622 chapter 4.4 Pipes,
+  * the pipe identifier is 7 bits long.
+  */
+-#define NCI_HCI_MAX_PIPES          127
++#define NCI_HCI_MAX_PIPES          128
+ 
+ struct nci_hci_gate {
+ 	u8 gate;
+diff --git a/include/net/tls.h b/include/net/tls.h
+index 1486b60c4de8..8b3d10917d99 100644
+--- a/include/net/tls.h
++++ b/include/net/tls.h
+@@ -289,6 +289,7 @@ int tls_device_sendmsg(struct sock *sk, struct msghdr *msg, size_t size);
+ int tls_device_sendpage(struct sock *sk, struct page *page,
+ 			int offset, size_t size, int flags);
+ void tls_device_sk_destruct(struct sock *sk);
++void tls_device_free_resources_tx(struct sock *sk);
+ void tls_device_init(void);
+ void tls_device_cleanup(void);
+ int tls_tx_records(struct sock *sk, int flags);
+@@ -312,6 +313,7 @@ int tls_push_sg(struct sock *sk, struct tls_context *ctx,
+ 		int flags);
+ int tls_push_partial_record(struct sock *sk, struct tls_context *ctx,
+ 			    int flags);
++bool tls_free_partial_record(struct sock *sk, struct tls_context *ctx);
+ 
+ int tls_push_pending_closed_record(struct sock *sk, struct tls_context *ctx,
+ 				   int flags, long *timeo);
+@@ -364,7 +366,7 @@ tls_validate_xmit_skb(struct sock *sk, struct net_device *dev,
+ static inline bool tls_is_sk_tx_device_offloaded(struct sock *sk)
+ {
+ #ifdef CONFIG_SOCK_VALIDATE_XMIT
+-	return sk_fullsock(sk) &
++	return sk_fullsock(sk) &&
+ 	       (smp_load_acquire(&sk->sk_validate_xmit_skb) ==
+ 	       &tls_validate_xmit_skb);
+ #else
+diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
+index 878c62ec0190..dbd7656b4f73 100644
+--- a/kernel/events/ring_buffer.c
++++ b/kernel/events/ring_buffer.c
+@@ -456,24 +456,21 @@ void perf_aux_output_end(struct perf_output_handle *handle, unsigned long size)
+ 		rb->aux_head += size;
+ 	}
+ 
+-	if (size || handle->aux_flags) {
+-		/*
+-		 * Only send RECORD_AUX if we have something useful to communicate
+-		 *
+-		 * Note: the OVERWRITE records by themselves are not considered
+-		 * useful, as they don't communicate any *new* information,
+-		 * aside from the short-lived offset, that becomes history at
+-		 * the next event sched-in and therefore isn't useful.
+-		 * The userspace that needs to copy out AUX data in overwrite
+-		 * mode should know to use user_page::aux_head for the actual
+-		 * offset. So, from now on we don't output AUX records that
+-		 * have *only* OVERWRITE flag set.
+-		 */
+-
+-		if (handle->aux_flags & ~(u64)PERF_AUX_FLAG_OVERWRITE)
+-			perf_event_aux_event(handle->event, aux_head, size,
+-			                     handle->aux_flags);
+-	}
++	/*
++	 * Only send RECORD_AUX if we have something useful to communicate
++	 *
++	 * Note: the OVERWRITE records by themselves are not considered
++	 * useful, as they don't communicate any *new* information,
++	 * aside from the short-lived offset, that becomes history at
++	 * the next event sched-in and therefore isn't useful.
++	 * The userspace that needs to copy out AUX data in overwrite
++	 * mode should know to use user_page::aux_head for the actual
++	 * offset. So, from now on we don't output AUX records that
++	 * have *only* OVERWRITE flag set.
++	 */
++	if (size || (handle->aux_flags & ~(u64)PERF_AUX_FLAG_OVERWRITE))
++		perf_event_aux_event(handle->event, aux_head, size,
++				     handle->aux_flags);
+ 
+ 	rb->user_page->aux_head = rb->aux_head;
+ 	if (rb_need_aux_wakeup(rb))
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index f4ddfdd2d07e..de78d1b998f8 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -709,7 +709,6 @@ static void unoptimize_kprobe(struct kprobe *p, bool force)
+ static int reuse_unused_kprobe(struct kprobe *ap)
+ {
+ 	struct optimized_kprobe *op;
+-	int ret;
+ 
+ 	/*
+ 	 * Unused kprobe MUST be on the way of delayed unoptimizing (means
+@@ -720,9 +719,8 @@ static int reuse_unused_kprobe(struct kprobe *ap)
+ 	/* Enable the probe again */
+ 	ap->flags &= ~KPROBE_FLAG_DISABLED;
+ 	/* Optimize it again (remove from op->list) */
+-	ret = kprobe_optready(ap);
+-	if (ret)
+-		return ret;
++	if (!kprobe_optready(ap))
++		return -EINVAL;
+ 
+ 	optimize_kprobe(ap);
+ 	return 0;
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 5e61a1a99e38..eeb605656d59 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -4859,12 +4859,15 @@ static enum hrtimer_restart sched_cfs_slack_timer(struct hrtimer *timer)
+ 	return HRTIMER_NORESTART;
+ }
+ 
++extern const u64 max_cfs_quota_period;
++
+ static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
+ {
+ 	struct cfs_bandwidth *cfs_b =
+ 		container_of(timer, struct cfs_bandwidth, period_timer);
+ 	int overrun;
+ 	int idle = 0;
++	int count = 0;
+ 
+ 	raw_spin_lock(&cfs_b->lock);
+ 	for (;;) {
+@@ -4872,6 +4875,28 @@ static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
+ 		if (!overrun)
+ 			break;
+ 
++		if (++count > 3) {
++			u64 new, old = ktime_to_ns(cfs_b->period);
++
++			new = (old * 147) / 128; /* ~115% */
++			new = min(new, max_cfs_quota_period);
++
++			cfs_b->period = ns_to_ktime(new);
++
++			/* since max is 1s, this is limited to 1e9^2, which fits in u64 */
++			cfs_b->quota *= new;
++			cfs_b->quota = div64_u64(cfs_b->quota, old);
++
++			pr_warn_ratelimited(
++        "cfs_period_timer[cpu%d]: period too short, scaling up (new cfs_period_us %lld, cfs_quota_us = %lld)\n",
++	                        smp_processor_id(),
++	                        div_u64(new, NSEC_PER_USEC),
++                                div_u64(cfs_b->quota, NSEC_PER_USEC));
++
++			/* reset count so we don't come right back in here */
++			count = 0;
++		}
++
+ 		idle = do_sched_cfs_period_timer(cfs_b, overrun);
+ 	}
+ 	if (idle)
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index 28ec71d914c7..f50f1471c119 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -126,6 +126,7 @@ static int zero;
+ static int __maybe_unused one = 1;
+ static int __maybe_unused two = 2;
+ static int __maybe_unused four = 4;
++static unsigned long zero_ul;
+ static unsigned long one_ul = 1;
+ static unsigned long long_max = LONG_MAX;
+ static int one_hundred = 100;
+@@ -1723,7 +1724,7 @@ static struct ctl_table fs_table[] = {
+ 		.maxlen		= sizeof(files_stat.max_files),
+ 		.mode		= 0644,
+ 		.proc_handler	= proc_doulongvec_minmax,
+-		.extra1		= &zero,
++		.extra1		= &zero_ul,
+ 		.extra2		= &long_max,
+ 	},
+ 	{
+diff --git a/kernel/time/sched_clock.c b/kernel/time/sched_clock.c
+index 094b82ca95e5..930113b9799a 100644
+--- a/kernel/time/sched_clock.c
++++ b/kernel/time/sched_clock.c
+@@ -272,7 +272,7 @@ static u64 notrace suspended_sched_clock_read(void)
+ 	return cd.read_data[seq & 1].epoch_cyc;
+ }
+ 
+-static int sched_clock_suspend(void)
++int sched_clock_suspend(void)
+ {
+ 	struct clock_read_data *rd = &cd.read_data[0];
+ 
+@@ -283,7 +283,7 @@ static int sched_clock_suspend(void)
+ 	return 0;
+ }
+ 
+-static void sched_clock_resume(void)
++void sched_clock_resume(void)
+ {
+ 	struct clock_read_data *rd = &cd.read_data[0];
+ 
+diff --git a/kernel/time/tick-common.c b/kernel/time/tick-common.c
+index 529143b4c8d2..df401463a191 100644
+--- a/kernel/time/tick-common.c
++++ b/kernel/time/tick-common.c
+@@ -487,6 +487,7 @@ void tick_freeze(void)
+ 		trace_suspend_resume(TPS("timekeeping_freeze"),
+ 				     smp_processor_id(), true);
+ 		system_state = SYSTEM_SUSPEND;
++		sched_clock_suspend();
+ 		timekeeping_suspend();
+ 	} else {
+ 		tick_suspend_local();
+@@ -510,6 +511,7 @@ void tick_unfreeze(void)
+ 
+ 	if (tick_freeze_depth == num_online_cpus()) {
+ 		timekeeping_resume();
++		sched_clock_resume();
+ 		system_state = SYSTEM_RUNNING;
+ 		trace_suspend_resume(TPS("timekeeping_freeze"),
+ 				     smp_processor_id(), false);
+diff --git a/kernel/time/timekeeping.h b/kernel/time/timekeeping.h
+index 7a9b4eb7a1d5..141ab3ab0354 100644
+--- a/kernel/time/timekeeping.h
++++ b/kernel/time/timekeeping.h
+@@ -14,6 +14,13 @@ extern u64 timekeeping_max_deferment(void);
+ extern void timekeeping_warp_clock(void);
+ extern int timekeeping_suspend(void);
+ extern void timekeeping_resume(void);
++#ifdef CONFIG_GENERIC_SCHED_CLOCK
++extern int sched_clock_suspend(void);
++extern void sched_clock_resume(void);
++#else
++static inline int sched_clock_suspend(void) { return 0; }
++static inline void sched_clock_resume(void) { }
++#endif
+ 
+ extern void do_timer(unsigned long ticks);
+ extern void update_wall_time(void);
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index aac7847c0214..f546ae5102e0 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -33,6 +33,7 @@
+ #include <linux/list.h>
+ #include <linux/hash.h>
+ #include <linux/rcupdate.h>
++#include <linux/kprobes.h>
+ 
+ #include <trace/events/sched.h>
+ 
+@@ -6216,7 +6217,7 @@ void ftrace_reset_array_ops(struct trace_array *tr)
+ 	tr->ops->func = ftrace_stub;
+ }
+ 
+-static inline void
++static nokprobe_inline void
+ __ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip,
+ 		       struct ftrace_ops *ignored, struct pt_regs *regs)
+ {
+@@ -6276,11 +6277,13 @@ static void ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip,
+ {
+ 	__ftrace_ops_list_func(ip, parent_ip, NULL, regs);
+ }
++NOKPROBE_SYMBOL(ftrace_ops_list_func);
+ #else
+ static void ftrace_ops_no_ops(unsigned long ip, unsigned long parent_ip)
+ {
+ 	__ftrace_ops_list_func(ip, parent_ip, NULL, NULL);
+ }
++NOKPROBE_SYMBOL(ftrace_ops_no_ops);
+ #endif
+ 
+ /*
+@@ -6307,6 +6310,7 @@ static void ftrace_ops_assist_func(unsigned long ip, unsigned long parent_ip,
+ 	preempt_enable_notrace();
+ 	trace_clear_recursion(bit);
+ }
++NOKPROBE_SYMBOL(ftrace_ops_assist_func);
+ 
+ /**
+  * ftrace_ops_get_func - get the function a trampoline should call
+diff --git a/mm/mmap.c b/mm/mmap.c
+index fc1809b1bed6..da9236a5022e 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -45,6 +45,7 @@
+ #include <linux/moduleparam.h>
+ #include <linux/pkeys.h>
+ #include <linux/oom.h>
++#include <linux/sched/mm.h>
+ 
+ #include <linux/uaccess.h>
+ #include <asm/cacheflush.h>
+@@ -2526,7 +2527,8 @@ find_extend_vma(struct mm_struct *mm, unsigned long addr)
+ 	vma = find_vma_prev(mm, addr, &prev);
+ 	if (vma && (vma->vm_start <= addr))
+ 		return vma;
+-	if (!prev || expand_stack(prev, addr))
++	/* don't alter vm_end if the coredump is running */
++	if (!prev || !mmget_still_valid(mm) || expand_stack(prev, addr))
+ 		return NULL;
+ 	if (prev->vm_flags & VM_LOCKED)
+ 		populate_vma_page_range(prev, addr, prev->vm_end, NULL);
+@@ -2552,6 +2554,9 @@ find_extend_vma(struct mm_struct *mm, unsigned long addr)
+ 		return vma;
+ 	if (!(vma->vm_flags & VM_GROWSDOWN))
+ 		return NULL;
++	/* don't alter vm_start if the coredump is running */
++	if (!mmget_still_valid(mm))
++		return NULL;
+ 	start = vma->vm_start;
+ 	if (expand_stack(vma, addr))
+ 		return NULL;
+diff --git a/mm/percpu.c b/mm/percpu.c
+index db86282fd024..59bd6a51954c 100644
+--- a/mm/percpu.c
++++ b/mm/percpu.c
+@@ -2531,8 +2531,8 @@ int __init pcpu_embed_first_chunk(size_t reserved_size, size_t dyn_size,
+ 		ai->groups[group].base_offset = areas[group] - base;
+ 	}
+ 
+-	pr_info("Embedded %zu pages/cpu @%p s%zu r%zu d%zu u%zu\n",
+-		PFN_DOWN(size_sum), base, ai->static_size, ai->reserved_size,
++	pr_info("Embedded %zu pages/cpu s%zu r%zu d%zu u%zu\n",
++		PFN_DOWN(size_sum), ai->static_size, ai->reserved_size,
+ 		ai->dyn_size, ai->unit_size);
+ 
+ 	rc = pcpu_setup_first_chunk(ai, base);
+@@ -2653,8 +2653,8 @@ int __init pcpu_page_first_chunk(size_t reserved_size,
+ 	}
+ 
+ 	/* we're ready, commit */
+-	pr_info("%d %s pages/cpu @%p s%zu r%zu d%zu\n",
+-		unit_pages, psize_str, vm.addr, ai->static_size,
++	pr_info("%d %s pages/cpu s%zu r%zu d%zu\n",
++		unit_pages, psize_str, ai->static_size,
+ 		ai->reserved_size, ai->dyn_size);
+ 
+ 	rc = pcpu_setup_first_chunk(ai, vm.addr);
+diff --git a/mm/vmstat.c b/mm/vmstat.c
+index 83b30edc2f7f..f807f2e3b4cb 100644
+--- a/mm/vmstat.c
++++ b/mm/vmstat.c
+@@ -1274,13 +1274,8 @@ const char * const vmstat_text[] = {
+ #endif
+ #endif /* CONFIG_MEMORY_BALLOON */
+ #ifdef CONFIG_DEBUG_TLBFLUSH
+-#ifdef CONFIG_SMP
+ 	"nr_tlb_remote_flush",
+ 	"nr_tlb_remote_flush_received",
+-#else
+-	"", /* nr_tlb_remote_flush */
+-	"", /* nr_tlb_remote_flush_received */
+-#endif /* CONFIG_SMP */
+ 	"nr_tlb_local_flush_all",
+ 	"nr_tlb_local_flush_one",
+ #endif /* CONFIG_DEBUG_TLBFLUSH */
+diff --git a/net/atm/lec.c b/net/atm/lec.c
+index d7f5cf5b7594..ad4f829193f0 100644
+--- a/net/atm/lec.c
++++ b/net/atm/lec.c
+@@ -710,7 +710,10 @@ static int lec_vcc_attach(struct atm_vcc *vcc, void __user *arg)
+ 
+ static int lec_mcast_attach(struct atm_vcc *vcc, int arg)
+ {
+-	if (arg < 0 || arg >= MAX_LEC_ITF || !dev_lec[arg])
++	if (arg < 0 || arg >= MAX_LEC_ITF)
++		return -EINVAL;
++	arg = array_index_nospec(arg, MAX_LEC_ITF);
++	if (!dev_lec[arg])
+ 		return -EINVAL;
+ 	vcc->proto_data = dev_lec[arg];
+ 	return lec_mcast_make(netdev_priv(dev_lec[arg]), vcc);
+@@ -728,6 +731,7 @@ static int lecd_attach(struct atm_vcc *vcc, int arg)
+ 		i = arg;
+ 	if (arg >= MAX_LEC_ITF)
+ 		return -EINVAL;
++	i = array_index_nospec(arg, MAX_LEC_ITF);
+ 	if (!dev_lec[i]) {
+ 		int size;
+ 
+diff --git a/net/bridge/br_input.c b/net/bridge/br_input.c
+index 5ea7e56119c1..ba303ee99b9b 100644
+--- a/net/bridge/br_input.c
++++ b/net/bridge/br_input.c
+@@ -197,13 +197,10 @@ static void __br_handle_local_finish(struct sk_buff *skb)
+ /* note: already called with rcu_read_lock */
+ static int br_handle_local_finish(struct net *net, struct sock *sk, struct sk_buff *skb)
+ {
+-	struct net_bridge_port *p = br_port_get_rcu(skb->dev);
+-
+ 	__br_handle_local_finish(skb);
+ 
+-	BR_INPUT_SKB_CB(skb)->brdev = p->br->dev;
+-	br_pass_frame_up(skb);
+-	return 0;
++	/* return 1 to signal the okfn() was called so it's ok to use the skb */
++	return 1;
+ }
+ 
+ /*
+@@ -280,10 +277,18 @@ rx_handler_result_t br_handle_frame(struct sk_buff **pskb)
+ 				goto forward;
+ 		}
+ 
+-		/* Deliver packet to local host only */
+-		NF_HOOK(NFPROTO_BRIDGE, NF_BR_LOCAL_IN, dev_net(skb->dev),
+-			NULL, skb, skb->dev, NULL, br_handle_local_finish);
+-		return RX_HANDLER_CONSUMED;
++		/* The else clause should be hit when nf_hook():
++		 *   - returns < 0 (drop/error)
++		 *   - returns = 0 (stolen/nf_queue)
++		 * Thus return 1 from the okfn() to signal the skb is ok to pass
++		 */
++		if (NF_HOOK(NFPROTO_BRIDGE, NF_BR_LOCAL_IN,
++			    dev_net(skb->dev), NULL, skb, skb->dev, NULL,
++			    br_handle_local_finish) == 1) {
++			return RX_HANDLER_PASS;
++		} else {
++			return RX_HANDLER_CONSUMED;
++		}
+ 	}
+ 
+ forward:
+diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
+index e4777614a8a0..61ff0d497da6 100644
+--- a/net/bridge/br_multicast.c
++++ b/net/bridge/br_multicast.c
+@@ -1916,7 +1916,8 @@ static void br_multicast_start_querier(struct net_bridge *br,
+ 
+ 	__br_multicast_open(br, query);
+ 
+-	list_for_each_entry(port, &br->port_list, list) {
++	rcu_read_lock();
++	list_for_each_entry_rcu(port, &br->port_list, list) {
+ 		if (port->state == BR_STATE_DISABLED ||
+ 		    port->state == BR_STATE_BLOCKING)
+ 			continue;
+@@ -1928,6 +1929,7 @@ static void br_multicast_start_querier(struct net_bridge *br,
+ 			br_multicast_enable(&port->ip6_own_query);
+ #endif
+ 	}
++	rcu_read_unlock();
+ }
+ 
+ int br_multicast_toggle(struct net_bridge *br, unsigned long val)
+diff --git a/net/bridge/br_netlink.c b/net/bridge/br_netlink.c
+index 9c07591b0232..7104cf13da84 100644
+--- a/net/bridge/br_netlink.c
++++ b/net/bridge/br_netlink.c
+@@ -1441,7 +1441,7 @@ static int br_fill_info(struct sk_buff *skb, const struct net_device *brdev)
+ 	    nla_put_u8(skb, IFLA_BR_VLAN_STATS_ENABLED,
+ 		       br_opt_get(br, BROPT_VLAN_STATS_ENABLED)) ||
+ 	    nla_put_u8(skb, IFLA_BR_VLAN_STATS_PER_PORT,
+-		       br_opt_get(br, IFLA_BR_VLAN_STATS_PER_PORT)))
++		       br_opt_get(br, BROPT_VLAN_STATS_PER_PORT)))
+ 		return -EMSGSIZE;
+ #endif
+ #ifdef CONFIG_BRIDGE_IGMP_SNOOPING
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 12824e007e06..7277dd393c00 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -1184,7 +1184,21 @@ int dev_change_name(struct net_device *dev, const char *newname)
+ 	BUG_ON(!dev_net(dev));
+ 
+ 	net = dev_net(dev);
+-	if (dev->flags & IFF_UP)
++
++	/* Some auto-enslaved devices e.g. failover slaves are
++	 * special, as userspace might rename the device after
++	 * the interface had been brought up and running since
++	 * the point kernel initiated auto-enslavement. Allow
++	 * live name change even when these slave devices are
++	 * up and running.
++	 *
++	 * Typically, users of these auto-enslaving devices
++	 * don't actually care about slave name change, as
++	 * they are supposed to operate on master interface
++	 * directly.
++	 */
++	if (dev->flags & IFF_UP &&
++	    likely(!(dev->priv_flags & IFF_LIVE_RENAME_OK)))
+ 		return -EBUSY;
+ 
+ 	write_seqcount_begin(&devnet_rename_seq);
+diff --git a/net/core/failover.c b/net/core/failover.c
+index 4a92a98ccce9..b5cd3c727285 100644
+--- a/net/core/failover.c
++++ b/net/core/failover.c
+@@ -80,14 +80,14 @@ static int failover_slave_register(struct net_device *slave_dev)
+ 		goto err_upper_link;
+ 	}
+ 
+-	slave_dev->priv_flags |= IFF_FAILOVER_SLAVE;
++	slave_dev->priv_flags |= (IFF_FAILOVER_SLAVE | IFF_LIVE_RENAME_OK);
+ 
+ 	if (fops && fops->slave_register &&
+ 	    !fops->slave_register(slave_dev, failover_dev))
+ 		return NOTIFY_OK;
+ 
+ 	netdev_upper_dev_unlink(slave_dev, failover_dev);
+-	slave_dev->priv_flags &= ~IFF_FAILOVER_SLAVE;
++	slave_dev->priv_flags &= ~(IFF_FAILOVER_SLAVE | IFF_LIVE_RENAME_OK);
+ err_upper_link:
+ 	netdev_rx_handler_unregister(slave_dev);
+ done:
+@@ -121,7 +121,7 @@ int failover_slave_unregister(struct net_device *slave_dev)
+ 
+ 	netdev_rx_handler_unregister(slave_dev);
+ 	netdev_upper_dev_unlink(slave_dev, failover_dev);
+-	slave_dev->priv_flags &= ~IFF_FAILOVER_SLAVE;
++	slave_dev->priv_flags &= ~(IFF_FAILOVER_SLAVE | IFF_LIVE_RENAME_OK);
+ 
+ 	if (fops && fops->slave_unregister &&
+ 	    !fops->slave_unregister(slave_dev, failover_dev))
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index ef2cd5712098..40796b8bf820 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -5083,7 +5083,8 @@ EXPORT_SYMBOL_GPL(skb_gso_validate_mac_len);
+ 
+ static struct sk_buff *skb_reorder_vlan_header(struct sk_buff *skb)
+ {
+-	int mac_len;
++	int mac_len, meta_len;
++	void *meta;
+ 
+ 	if (skb_cow(skb, skb_headroom(skb)) < 0) {
+ 		kfree_skb(skb);
+@@ -5095,6 +5096,13 @@ static struct sk_buff *skb_reorder_vlan_header(struct sk_buff *skb)
+ 		memmove(skb_mac_header(skb) + VLAN_HLEN, skb_mac_header(skb),
+ 			mac_len - VLAN_HLEN - ETH_TLEN);
+ 	}
++
++	meta_len = skb_metadata_len(skb);
++	if (meta_len) {
++		meta = skb_metadata_end(skb) - meta_len;
++		memmove(meta + VLAN_HLEN, meta, meta_len);
++	}
++
+ 	skb->mac_header += VLAN_HLEN;
+ 	return skb;
+ }
+diff --git a/net/ipv4/fou.c b/net/ipv4/fou.c
+index 79e98e21cdd7..12ce6c526d72 100644
+--- a/net/ipv4/fou.c
++++ b/net/ipv4/fou.c
+@@ -121,6 +121,7 @@ static int gue_udp_recv(struct sock *sk, struct sk_buff *skb)
+ 	struct guehdr *guehdr;
+ 	void *data;
+ 	u16 doffset = 0;
++	u8 proto_ctype;
+ 
+ 	if (!fou)
+ 		return 1;
+@@ -212,13 +213,14 @@ static int gue_udp_recv(struct sock *sk, struct sk_buff *skb)
+ 	if (unlikely(guehdr->control))
+ 		return gue_control_message(skb, guehdr);
+ 
++	proto_ctype = guehdr->proto_ctype;
+ 	__skb_pull(skb, sizeof(struct udphdr) + hdrlen);
+ 	skb_reset_transport_header(skb);
+ 
+ 	if (iptunnel_pull_offloads(skb))
+ 		goto drop;
+ 
+-	return -guehdr->proto_ctype;
++	return -proto_ctype;
+ 
+ drop:
+ 	kfree_skb(skb);
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index e04cdb58a602..25d9bef27d03 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -1185,9 +1185,23 @@ static struct dst_entry *ipv4_dst_check(struct dst_entry *dst, u32 cookie)
+ 
+ static void ipv4_link_failure(struct sk_buff *skb)
+ {
++	struct ip_options opt;
+ 	struct rtable *rt;
++	int res;
+ 
+-	icmp_send(skb, ICMP_DEST_UNREACH, ICMP_HOST_UNREACH, 0);
++	/* Recompile ip options since IPCB may not be valid anymore.
++	 */
++	memset(&opt, 0, sizeof(opt));
++	opt.optlen = ip_hdr(skb)->ihl*4 - sizeof(struct iphdr);
++
++	rcu_read_lock();
++	res = __ip_options_compile(dev_net(skb->dev), &opt, skb, NULL);
++	rcu_read_unlock();
++
++	if (res)
++		return;
++
++	__icmp_send(skb, ICMP_DEST_UNREACH, ICMP_HOST_UNREACH, 0, &opt);
+ 
+ 	rt = skb_rtable(skb);
+ 	if (rt)
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 7b1ef897b398..95b2e31fff08 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -402,11 +402,12 @@ static int __tcp_grow_window(const struct sock *sk, const struct sk_buff *skb)
+ static void tcp_grow_window(struct sock *sk, const struct sk_buff *skb)
+ {
+ 	struct tcp_sock *tp = tcp_sk(sk);
++	int room;
++
++	room = min_t(int, tp->window_clamp, tcp_space(sk)) - tp->rcv_ssthresh;
+ 
+ 	/* Check #1 */
+-	if (tp->rcv_ssthresh < tp->window_clamp &&
+-	    (int)tp->rcv_ssthresh < tcp_space(sk) &&
+-	    !tcp_under_memory_pressure(sk)) {
++	if (room > 0 && !tcp_under_memory_pressure(sk)) {
+ 		int incr;
+ 
+ 		/* Check #2. Increase window, if skb with such overhead
+@@ -419,8 +420,7 @@ static void tcp_grow_window(struct sock *sk, const struct sk_buff *skb)
+ 
+ 		if (incr) {
+ 			incr = max_t(int, incr, 2 * skb->len);
+-			tp->rcv_ssthresh = min(tp->rcv_ssthresh + incr,
+-					       tp->window_clamp);
++			tp->rcv_ssthresh += min(room, incr);
+ 			inet_csk(sk)->icsk_ack.quick |= 1;
+ 		}
+ 	}
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 0086acc16f3c..b6a97115a906 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -2336,6 +2336,10 @@ static void __ip6_rt_update_pmtu(struct dst_entry *dst, const struct sock *sk,
+ 
+ 		rcu_read_lock();
+ 		from = rcu_dereference(rt6->from);
++		if (!from) {
++			rcu_read_unlock();
++			return;
++		}
+ 		nrt6 = ip6_rt_cache_alloc(from, daddr, saddr);
+ 		if (nrt6) {
+ 			rt6_do_update_pmtu(nrt6, mtu);
+diff --git a/net/mac80211/driver-ops.h b/net/mac80211/driver-ops.h
+index 3e0d5922a440..a9c1d6e3cdae 100644
+--- a/net/mac80211/driver-ops.h
++++ b/net/mac80211/driver-ops.h
+@@ -1166,6 +1166,9 @@ static inline void drv_wake_tx_queue(struct ieee80211_local *local,
+ {
+ 	struct ieee80211_sub_if_data *sdata = vif_to_sdata(txq->txq.vif);
+ 
++	if (local->in_reconfig)
++		return;
++
+ 	if (!check_sdata_in_driver(sdata))
+ 		return;
+ 
+diff --git a/net/nfc/nci/hci.c b/net/nfc/nci/hci.c
+index ddfc52ac1f9b..c0d323b58e73 100644
+--- a/net/nfc/nci/hci.c
++++ b/net/nfc/nci/hci.c
+@@ -312,6 +312,10 @@ static void nci_hci_cmd_received(struct nci_dev *ndev, u8 pipe,
+ 		create_info = (struct nci_hci_create_pipe_resp *)skb->data;
+ 		dest_gate = create_info->dest_gate;
+ 		new_pipe = create_info->pipe;
++		if (new_pipe >= NCI_HCI_MAX_PIPES) {
++			status = NCI_HCI_ANY_E_NOK;
++			goto exit;
++		}
+ 
+ 		/* Save the new created pipe and bind with local gate,
+ 		 * the description for skb->data[3] is destination gate id
+@@ -336,6 +340,10 @@ static void nci_hci_cmd_received(struct nci_dev *ndev, u8 pipe,
+ 			goto exit;
+ 		}
+ 		delete_info = (struct nci_hci_delete_pipe_noti *)skb->data;
++		if (delete_info->pipe >= NCI_HCI_MAX_PIPES) {
++			status = NCI_HCI_ANY_E_NOK;
++			goto exit;
++		}
+ 
+ 		ndev->hci_dev->pipes[delete_info->pipe].gate =
+ 						NCI_HCI_INVALID_GATE;
+diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
+index 73940293700d..7b5ce1343474 100644
+--- a/net/sched/sch_cake.c
++++ b/net/sched/sch_cake.c
+@@ -1508,32 +1508,29 @@ static unsigned int cake_drop(struct Qdisc *sch, struct sk_buff **to_free)
+ 	return idx + (tin << 16);
+ }
+ 
+-static void cake_wash_diffserv(struct sk_buff *skb)
+-{
+-	switch (skb->protocol) {
+-	case htons(ETH_P_IP):
+-		ipv4_change_dsfield(ip_hdr(skb), INET_ECN_MASK, 0);
+-		break;
+-	case htons(ETH_P_IPV6):
+-		ipv6_change_dsfield(ipv6_hdr(skb), INET_ECN_MASK, 0);
+-		break;
+-	default:
+-		break;
+-	}
+-}
+-
+ static u8 cake_handle_diffserv(struct sk_buff *skb, u16 wash)
+ {
++	int wlen = skb_network_offset(skb);
+ 	u8 dscp;
+ 
+-	switch (skb->protocol) {
++	switch (tc_skb_protocol(skb)) {
+ 	case htons(ETH_P_IP):
++		wlen += sizeof(struct iphdr);
++		if (!pskb_may_pull(skb, wlen) ||
++		    skb_try_make_writable(skb, wlen))
++			return 0;
++
+ 		dscp = ipv4_get_dsfield(ip_hdr(skb)) >> 2;
+ 		if (wash && dscp)
+ 			ipv4_change_dsfield(ip_hdr(skb), INET_ECN_MASK, 0);
+ 		return dscp;
+ 
+ 	case htons(ETH_P_IPV6):
++		wlen += sizeof(struct ipv6hdr);
++		if (!pskb_may_pull(skb, wlen) ||
++		    skb_try_make_writable(skb, wlen))
++			return 0;
++
+ 		dscp = ipv6_get_dsfield(ipv6_hdr(skb)) >> 2;
+ 		if (wash && dscp)
+ 			ipv6_change_dsfield(ipv6_hdr(skb), INET_ECN_MASK, 0);
+@@ -1553,25 +1550,27 @@ static struct cake_tin_data *cake_select_tin(struct Qdisc *sch,
+ {
+ 	struct cake_sched_data *q = qdisc_priv(sch);
+ 	u32 tin;
++	u8 dscp;
++
++	/* Tin selection: Default to diffserv-based selection, allow overriding
++	 * using firewall marks or skb->priority.
++	 */
++	dscp = cake_handle_diffserv(skb,
++				    q->rate_flags & CAKE_FLAG_WASH);
+ 
+-	if (TC_H_MAJ(skb->priority) == sch->handle &&
+-	    TC_H_MIN(skb->priority) > 0 &&
+-	    TC_H_MIN(skb->priority) <= q->tin_cnt) {
++	if (q->tin_mode == CAKE_DIFFSERV_BESTEFFORT)
++		tin = 0;
++
++	else if (TC_H_MAJ(skb->priority) == sch->handle &&
++		 TC_H_MIN(skb->priority) > 0 &&
++		 TC_H_MIN(skb->priority) <= q->tin_cnt)
+ 		tin = q->tin_order[TC_H_MIN(skb->priority) - 1];
+ 
+-		if (q->rate_flags & CAKE_FLAG_WASH)
+-			cake_wash_diffserv(skb);
+-	} else if (q->tin_mode != CAKE_DIFFSERV_BESTEFFORT) {
+-		/* extract the Diffserv Precedence field, if it exists */
+-		/* and clear DSCP bits if washing */
+-		tin = q->tin_index[cake_handle_diffserv(skb,
+-				q->rate_flags & CAKE_FLAG_WASH)];
++	else {
++		tin = q->tin_index[dscp];
++
+ 		if (unlikely(tin >= q->tin_cnt))
+ 			tin = 0;
+-	} else {
+-		tin = 0;
+-		if (q->rate_flags & CAKE_FLAG_WASH)
+-			cake_wash_diffserv(skb);
+ 	}
+ 
+ 	return &q->tins[tin];
+diff --git a/net/strparser/strparser.c b/net/strparser/strparser.c
+index da1a676860ca..0f4e42792878 100644
+--- a/net/strparser/strparser.c
++++ b/net/strparser/strparser.c
+@@ -140,13 +140,11 @@ static int __strp_recv(read_descriptor_t *desc, struct sk_buff *orig_skb,
+ 			/* We are going to append to the frags_list of head.
+ 			 * Need to unshare the frag_list.
+ 			 */
+-			if (skb_has_frag_list(head)) {
+-				err = skb_unclone(head, GFP_ATOMIC);
+-				if (err) {
+-					STRP_STATS_INCR(strp->stats.mem_fail);
+-					desc->error = err;
+-					return 0;
+-				}
++			err = skb_unclone(head, GFP_ATOMIC);
++			if (err) {
++				STRP_STATS_INCR(strp->stats.mem_fail);
++				desc->error = err;
++				return 0;
+ 			}
+ 
+ 			if (unlikely(skb_shinfo(head)->frag_list)) {
+diff --git a/net/tipc/name_table.c b/net/tipc/name_table.c
+index bff241f03525..89993afe0fbd 100644
+--- a/net/tipc/name_table.c
++++ b/net/tipc/name_table.c
+@@ -909,7 +909,8 @@ static int tipc_nl_service_list(struct net *net, struct tipc_nl_msg *msg,
+ 	for (; i < TIPC_NAMETBL_SIZE; i++) {
+ 		head = &tn->nametbl->services[i];
+ 
+-		if (*last_type) {
++		if (*last_type ||
++		    (!i && *last_key && (*last_lower == *last_key))) {
+ 			service = tipc_service_find(net, *last_type);
+ 			if (!service)
+ 				return -EPIPE;
+diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
+index d753e362d2d9..4b5ff3d44912 100644
+--- a/net/tls/tls_device.c
++++ b/net/tls/tls_device.c
+@@ -52,8 +52,11 @@ static DEFINE_SPINLOCK(tls_device_lock);
+ 
+ static void tls_device_free_ctx(struct tls_context *ctx)
+ {
+-	if (ctx->tx_conf == TLS_HW)
++	if (ctx->tx_conf == TLS_HW) {
+ 		kfree(tls_offload_ctx_tx(ctx));
++		kfree(ctx->tx.rec_seq);
++		kfree(ctx->tx.iv);
++	}
+ 
+ 	if (ctx->rx_conf == TLS_HW)
+ 		kfree(tls_offload_ctx_rx(ctx));
+@@ -216,6 +219,13 @@ void tls_device_sk_destruct(struct sock *sk)
+ }
+ EXPORT_SYMBOL(tls_device_sk_destruct);
+ 
++void tls_device_free_resources_tx(struct sock *sk)
++{
++	struct tls_context *tls_ctx = tls_get_ctx(sk);
++
++	tls_free_partial_record(sk, tls_ctx);
++}
++
+ static void tls_append_frag(struct tls_record_info *record,
+ 			    struct page_frag *pfrag,
+ 			    int size)
+diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
+index 78cb4a584080..96dbac91ac6e 100644
+--- a/net/tls/tls_main.c
++++ b/net/tls/tls_main.c
+@@ -220,6 +220,26 @@ int tls_push_pending_closed_record(struct sock *sk,
+ 		return tls_ctx->push_pending_record(sk, flags);
+ }
+ 
++bool tls_free_partial_record(struct sock *sk, struct tls_context *ctx)
++{
++	struct scatterlist *sg;
++
++	sg = ctx->partially_sent_record;
++	if (!sg)
++		return false;
++
++	while (1) {
++		put_page(sg_page(sg));
++		sk_mem_uncharge(sk, sg->length);
++
++		if (sg_is_last(sg))
++			break;
++		sg++;
++	}
++	ctx->partially_sent_record = NULL;
++	return true;
++}
++
+ static void tls_write_space(struct sock *sk)
+ {
+ 	struct tls_context *ctx = tls_get_ctx(sk);
+@@ -278,6 +298,10 @@ static void tls_sk_proto_close(struct sock *sk, long timeout)
+ 		kfree(ctx->tx.rec_seq);
+ 		kfree(ctx->tx.iv);
+ 		tls_sw_free_resources_tx(sk);
++#ifdef CONFIG_TLS_DEVICE
++	} else if (ctx->tx_conf == TLS_HW) {
++		tls_device_free_resources_tx(sk);
++#endif
+ 	}
+ 
+ 	if (ctx->rx_conf == TLS_SW) {
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index bf5b54b513bc..d2d4f7c0d4be 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -1804,20 +1804,7 @@ void tls_sw_free_resources_tx(struct sock *sk)
+ 	/* Free up un-sent records in tx_list. First, free
+ 	 * the partially sent record if any at head of tx_list.
+ 	 */
+-	if (tls_ctx->partially_sent_record) {
+-		struct scatterlist *sg = tls_ctx->partially_sent_record;
+-
+-		while (1) {
+-			put_page(sg_page(sg));
+-			sk_mem_uncharge(sk, sg->length);
+-
+-			if (sg_is_last(sg))
+-				break;
+-			sg++;
+-		}
+-
+-		tls_ctx->partially_sent_record = NULL;
+-
++	if (tls_free_partial_record(sk, tls_ctx)) {
+ 		rec = list_first_entry(&ctx->tx_list,
+ 				       struct tls_rec, list);
+ 		list_del(&rec->list);
+diff --git a/security/device_cgroup.c b/security/device_cgroup.c
+index cd97929fac66..dc28914fa72e 100644
+--- a/security/device_cgroup.c
++++ b/security/device_cgroup.c
+@@ -560,7 +560,7 @@ static int propagate_exception(struct dev_cgroup *devcg_root,
+ 		    devcg->behavior == DEVCG_DEFAULT_ALLOW) {
+ 			rc = dev_exception_add(devcg, ex);
+ 			if (rc)
+-				break;
++				return rc;
+ 		} else {
+ 			/*
+ 			 * in the other possible cases:
+diff --git a/sound/core/info.c b/sound/core/info.c
+index fe502bc5e6d2..679136fba730 100644
+--- a/sound/core/info.c
++++ b/sound/core/info.c
+@@ -722,8 +722,11 @@ snd_info_create_entry(const char *name, struct snd_info_entry *parent)
+ 	INIT_LIST_HEAD(&entry->children);
+ 	INIT_LIST_HEAD(&entry->list);
+ 	entry->parent = parent;
+-	if (parent)
++	if (parent) {
++		mutex_lock(&parent->access);
+ 		list_add_tail(&entry->list, &parent->children);
++		mutex_unlock(&parent->access);
++	}
+ 	return entry;
+ }
+ 
+@@ -805,7 +808,12 @@ void snd_info_free_entry(struct snd_info_entry * entry)
+ 	list_for_each_entry_safe(p, n, &entry->children, list)
+ 		snd_info_free_entry(p);
+ 
+-	list_del(&entry->list);
++	p = entry->parent;
++	if (p) {
++		mutex_lock(&p->access);
++		list_del(&entry->list);
++		mutex_unlock(&p->access);
++	}
+ 	kfree(entry->name);
+ 	if (entry->private_free)
+ 		entry->private_free(entry);
+diff --git a/sound/core/init.c b/sound/core/init.c
+index 4849c611c0fe..16b7cc7aa66b 100644
+--- a/sound/core/init.c
++++ b/sound/core/init.c
+@@ -407,14 +407,7 @@ int snd_card_disconnect(struct snd_card *card)
+ 	card->shutdown = 1;
+ 	spin_unlock(&card->files_lock);
+ 
+-	/* phase 1: disable fops (user space) operations for ALSA API */
+-	mutex_lock(&snd_card_mutex);
+-	snd_cards[card->number] = NULL;
+-	clear_bit(card->number, snd_cards_lock);
+-	mutex_unlock(&snd_card_mutex);
+-	
+-	/* phase 2: replace file->f_op with special dummy operations */
+-	
++	/* replace file->f_op with special dummy operations */
+ 	spin_lock(&card->files_lock);
+ 	list_for_each_entry(mfile, &card->files_list, list) {
+ 		/* it's critical part, use endless loop */
+@@ -430,7 +423,7 @@ int snd_card_disconnect(struct snd_card *card)
+ 	}
+ 	spin_unlock(&card->files_lock);	
+ 
+-	/* phase 3: notify all connected devices about disconnection */
++	/* notify all connected devices about disconnection */
+ 	/* at this point, they cannot respond to any calls except release() */
+ 
+ #if IS_ENABLED(CONFIG_SND_MIXER_OSS)
+@@ -446,6 +439,13 @@ int snd_card_disconnect(struct snd_card *card)
+ 		device_del(&card->card_dev);
+ 		card->registered = false;
+ 	}
++
++	/* disable fops (user space) operations for ALSA API */
++	mutex_lock(&snd_card_mutex);
++	snd_cards[card->number] = NULL;
++	clear_bit(card->number, snd_cards_lock);
++	mutex_unlock(&snd_card_mutex);
++
+ #ifdef CONFIG_PM
+ 	wake_up(&card->power_sleep);
+ #endif
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 84fae0df59e9..f061167062bc 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -7247,6 +7247,8 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 		{0x12, 0x90a60140},
+ 		{0x14, 0x90170150},
+ 		{0x21, 0x02211020}),
++	SND_HDA_PIN_QUIRK(0x10ec0236, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
++		{0x21, 0x02211020}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell", ALC255_FIXUP_DELL2_MIC_NO_PRESENCE,
+ 		{0x14, 0x90170110},
+ 		{0x21, 0x02211020}),
+@@ -7357,6 +7359,10 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 		{0x21, 0x0221101f}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0256, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
+ 		ALC256_STANDARD_PINS),
++	SND_HDA_PIN_QUIRK(0x10ec0256, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
++		{0x14, 0x90170110},
++		{0x1b, 0x01011020},
++		{0x21, 0x0221101f}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0256, 0x1043, "ASUS", ALC256_FIXUP_ASUS_MIC,
+ 		{0x14, 0x90170110},
+ 		{0x1b, 0x90a70130},



* [gentoo-commits] proj/linux-patches:5.0 commit in: /
@ 2019-05-02 10:12 Mike Pagano
  0 siblings, 0 replies; 28+ messages in thread
From: Mike Pagano @ 2019-05-02 10:12 UTC (permalink / raw
  To: gentoo-commits

commit:     665ebba14c8b3d369b4d6e59828e8e33697c4879
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu May  2 10:12:30 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu May  2 10:12:30 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=665ebba1

Linux patch 5.0.11

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1010_linux-5.0.11.patch | 3504 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3508 insertions(+)

diff --git a/0000_README b/0000_README
index 49a76eb..4dfa486 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,10 @@ Patch:  1009_linux-5.0.10.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.0.10
 
+Patch:  1010_linux-5.0.11.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.0.11
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1010_linux-5.0.11.patch b/1010_linux-5.0.11.patch
new file mode 100644
index 0000000..a5f9df8
--- /dev/null
+++ b/1010_linux-5.0.11.patch
@@ -0,0 +1,3504 @@
+diff --git a/Documentation/networking/ip-sysctl.txt b/Documentation/networking/ip-sysctl.txt
+index acdfb5d2bcaa..e2142fe40cda 100644
+--- a/Documentation/networking/ip-sysctl.txt
++++ b/Documentation/networking/ip-sysctl.txt
+@@ -422,6 +422,7 @@ tcp_min_rtt_wlen - INTEGER
+ 	minimum RTT when it is moved to a longer path (e.g., due to traffic
+ 	engineering). A longer window makes the filter more resistant to RTT
+ 	inflations such as transient congestion. The unit is seconds.
++	Possible values: 0 - 86400 (1 day)
+ 	Default: 300
+ 
+ tcp_moderate_rcvbuf - BOOLEAN
+diff --git a/Documentation/sysctl/vm.txt b/Documentation/sysctl/vm.txt
+index 187ce4f599a2..e4dfaf0d6e87 100644
+--- a/Documentation/sysctl/vm.txt
++++ b/Documentation/sysctl/vm.txt
+@@ -866,14 +866,14 @@ The intent is that compaction has less work to do in the future and to
+ increase the success rate of future high-order allocations such as SLUB
+ allocations, THP and hugetlbfs pages.
+ 
+-To make it sensible with respect to the watermark_scale_factor parameter,
+-the unit is in fractions of 10,000. The default value of 15,000 means
+-that up to 150% of the high watermark will be reclaimed in the event of
+-a pageblock being mixed due to fragmentation. The level of reclaim is
+-determined by the number of fragmentation events that occurred in the
+-recent past. If this value is smaller than a pageblock then a pageblocks
+-worth of pages will be reclaimed (e.g.  2MB on 64-bit x86). A boost factor
+-of 0 will disable the feature.
++To make it sensible with respect to the watermark_scale_factor
++parameter, the unit is in fractions of 10,000. The default value of
++15,000 on !DISCONTIGMEM configurations means that up to 150% of the high
++watermark will be reclaimed in the event of a pageblock being mixed due
++to fragmentation. The level of reclaim is determined by the number of
++fragmentation events that occurred in the recent past. If this value is
++smaller than a pageblock then a pageblocks worth of pages will be reclaimed
++(e.g.  2MB on 64-bit x86). A boost factor of 0 will disable the feature.
+ 
+ =============================================================
+ 
+diff --git a/Makefile b/Makefile
+index b282c4143b21..c3daaefa979c 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 0
+-SUBLEVEL = 10
++SUBLEVEL = 11
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/arch/arm/boot/compressed/head.S b/arch/arm/boot/compressed/head.S
+index 6c7ccb428c07..7135820f76d4 100644
+--- a/arch/arm/boot/compressed/head.S
++++ b/arch/arm/boot/compressed/head.S
+@@ -1438,7 +1438,21 @@ ENTRY(efi_stub_entry)
+ 
+ 		@ Preserve return value of efi_entry() in r4
+ 		mov	r4, r0
+-		bl	cache_clean_flush
++
++		@ our cache maintenance code relies on CP15 barrier instructions
++		@ but since we arrived here with the MMU and caches configured
++		@ by UEFI, we must check that the CP15BEN bit is set in SCTLR.
++		@ Note that this bit is RAO/WI on v6 and earlier, so the ISB in
++		@ the enable path will be executed on v7+ only.
++		mrc	p15, 0, r1, c1, c0, 0	@ read SCTLR
++		tst	r1, #(1 << 5)		@ CP15BEN bit set?
++		bne	0f
++		orr	r1, r1, #(1 << 5)	@ CP15 barrier instructions
++		mcr	p15, 0, r1, c1, c0, 0	@ write SCTLR
++ ARM(		.inst	0xf57ff06f		@ v7+ isb	)
++ THUMB(		isb						)
++
++0:		bl	cache_clean_flush
+ 		bl	cache_off
+ 
+ 		@ Set parameters for booting zImage according to boot protocol
+diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
+index 7205a9085b4d..c9411774555d 100644
+--- a/arch/arm64/mm/init.c
++++ b/arch/arm64/mm/init.c
+@@ -406,7 +406,7 @@ void __init arm64_memblock_init(void)
+ 		 * Otherwise, this is a no-op
+ 		 */
+ 		u64 base = phys_initrd_start & PAGE_MASK;
+-		u64 size = PAGE_ALIGN(phys_initrd_size);
++		u64 size = PAGE_ALIGN(phys_initrd_start + phys_initrd_size) - base;
+ 
+ 		/*
+ 		 * We can only add back the initrd memory if we don't end up
+diff --git a/arch/mips/kernel/scall64-o32.S b/arch/mips/kernel/scall64-o32.S
+index f158c5894a9a..feb2653490df 100644
+--- a/arch/mips/kernel/scall64-o32.S
++++ b/arch/mips/kernel/scall64-o32.S
+@@ -125,7 +125,7 @@ trace_a_syscall:
+ 	subu	t1, v0,  __NR_O32_Linux
+ 	move	a1, v0
+ 	bnez	t1, 1f /* __NR_syscall at offset 0 */
+-	lw	a1, PT_R4(sp) /* Arg1 for __NR_syscall case */
++	ld	a1, PT_R4(sp) /* Arg1 for __NR_syscall case */
+ 	.set	pop
+ 
+ 1:	jal	syscall_trace_enter
+diff --git a/arch/powerpc/configs/skiroot_defconfig b/arch/powerpc/configs/skiroot_defconfig
+index cfdd08897a06..e2b0c5f15c7b 100644
+--- a/arch/powerpc/configs/skiroot_defconfig
++++ b/arch/powerpc/configs/skiroot_defconfig
+@@ -260,6 +260,7 @@ CONFIG_UDF_FS=m
+ CONFIG_MSDOS_FS=m
+ CONFIG_VFAT_FS=m
+ CONFIG_PROC_KCORE=y
++CONFIG_HUGETLBFS=y
+ # CONFIG_MISC_FILESYSTEMS is not set
+ # CONFIG_NETWORK_FILESYSTEMS is not set
+ CONFIG_NLS=y
+diff --git a/arch/powerpc/kernel/vdso32/gettimeofday.S b/arch/powerpc/kernel/vdso32/gettimeofday.S
+index 1e0bc5955a40..afd516b572f8 100644
+--- a/arch/powerpc/kernel/vdso32/gettimeofday.S
++++ b/arch/powerpc/kernel/vdso32/gettimeofday.S
+@@ -98,7 +98,7 @@ V_FUNCTION_BEGIN(__kernel_clock_gettime)
+ 	 * can be used, r7 contains NSEC_PER_SEC.
+ 	 */
+ 
+-	lwz	r5,WTOM_CLOCK_SEC(r9)
++	lwz	r5,(WTOM_CLOCK_SEC+LOPART)(r9)
+ 	lwz	r6,WTOM_CLOCK_NSEC(r9)
+ 
+ 	/* We now have our offset in r5,r6. We create a fake dependency
+diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
+index 8c7464c3f27f..2782188a5ba1 100644
+--- a/arch/powerpc/platforms/Kconfig.cputype
++++ b/arch/powerpc/platforms/Kconfig.cputype
+@@ -318,7 +318,7 @@ config ARCH_ENABLE_SPLIT_PMD_PTLOCK
+ 
+ config PPC_RADIX_MMU
+ 	bool "Radix MMU Support"
+-	depends on PPC_BOOK3S_64
++	depends on PPC_BOOK3S_64 && HUGETLB_PAGE
+ 	select ARCH_HAS_GIGANTIC_PAGE if (MEMORY_ISOLATION && COMPACTION) || CMA
+ 	default y
+ 	help
+diff --git a/arch/x86/Makefile b/arch/x86/Makefile
+index 9c5a67d1b9c1..c0c7291d4ccf 100644
+--- a/arch/x86/Makefile
++++ b/arch/x86/Makefile
+@@ -217,6 +217,15 @@ KBUILD_CFLAGS += -fno-asynchronous-unwind-tables
+ # Avoid indirect branches in kernel to deal with Spectre
+ ifdef CONFIG_RETPOLINE
+   KBUILD_CFLAGS += $(RETPOLINE_CFLAGS)
++  # Additionally, avoid generating expensive indirect jumps which
++  # are subject to retpolines for small number of switch cases.
++  # clang turns off jump table generation by default when under
++  # retpoline builds, however, gcc does not for x86. This has
++  # only been fixed starting from gcc stable version 8.4.0 and
++  # onwards, but not for older ones. See gcc bug #86952.
++  ifndef CONFIG_CC_IS_CLANG
++    KBUILD_CFLAGS += $(call cc-option,-fno-jump-tables)
++  endif
+ endif
+ 
+ archscripts: scripts_basic
+diff --git a/arch/x86/events/intel/cstate.c b/arch/x86/events/intel/cstate.c
+index d2e780705c5a..56194c571299 100644
+--- a/arch/x86/events/intel/cstate.c
++++ b/arch/x86/events/intel/cstate.c
+@@ -76,15 +76,15 @@
+  *			       Scope: Package (physical package)
+  *	MSR_PKG_C8_RESIDENCY:  Package C8 Residency Counter.
+  *			       perf code: 0x04
+- *			       Available model: HSW ULT,CNL
++ *			       Available model: HSW ULT,KBL,CNL
+  *			       Scope: Package (physical package)
+  *	MSR_PKG_C9_RESIDENCY:  Package C9 Residency Counter.
+  *			       perf code: 0x05
+- *			       Available model: HSW ULT,CNL
++ *			       Available model: HSW ULT,KBL,CNL
+  *			       Scope: Package (physical package)
+  *	MSR_PKG_C10_RESIDENCY: Package C10 Residency Counter.
+  *			       perf code: 0x06
+- *			       Available model: HSW ULT,GLM,CNL
++ *			       Available model: HSW ULT,KBL,GLM,CNL
+  *			       Scope: Package (physical package)
+  *
+  */
+@@ -572,8 +572,8 @@ static const struct x86_cpu_id intel_cstates_match[] __initconst = {
+ 	X86_CSTATES_MODEL(INTEL_FAM6_SKYLAKE_DESKTOP, snb_cstates),
+ 	X86_CSTATES_MODEL(INTEL_FAM6_SKYLAKE_X, snb_cstates),
+ 
+-	X86_CSTATES_MODEL(INTEL_FAM6_KABYLAKE_MOBILE,  snb_cstates),
+-	X86_CSTATES_MODEL(INTEL_FAM6_KABYLAKE_DESKTOP, snb_cstates),
++	X86_CSTATES_MODEL(INTEL_FAM6_KABYLAKE_MOBILE,  hswult_cstates),
++	X86_CSTATES_MODEL(INTEL_FAM6_KABYLAKE_DESKTOP, hswult_cstates),
+ 
+ 	X86_CSTATES_MODEL(INTEL_FAM6_CANNONLAKE_MOBILE, cnl_cstates),
+ 
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index e5ed28629271..72510c470001 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -2804,7 +2804,7 @@ static void bfq_dispatch_remove(struct request_queue *q, struct request *rq)
+ 	bfq_remove_request(q, rq);
+ }
+ 
+-static void __bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++static bool __bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ {
+ 	/*
+ 	 * If this bfqq is shared between multiple processes, check
+@@ -2837,9 +2837,11 @@ static void __bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ 	/*
+ 	 * All in-service entities must have been properly deactivated
+ 	 * or requeued before executing the next function, which
+-	 * resets all in-service entites as no more in service.
++	 * resets all in-service entities as no more in service. This
++	 * may cause bfqq to be freed. If this happens, the next
++	 * function returns true.
+ 	 */
+-	__bfq_bfqd_reset_in_service(bfqd);
++	return __bfq_bfqd_reset_in_service(bfqd);
+ }
+ 
+ /**
+@@ -3244,7 +3246,6 @@ void bfq_bfqq_expire(struct bfq_data *bfqd,
+ 	bool slow;
+ 	unsigned long delta = 0;
+ 	struct bfq_entity *entity = &bfqq->entity;
+-	int ref;
+ 
+ 	/*
+ 	 * Check whether the process is slow (see bfq_bfqq_is_slow).
+@@ -3313,10 +3314,8 @@ void bfq_bfqq_expire(struct bfq_data *bfqd,
+ 	 * reason.
+ 	 */
+ 	__bfq_bfqq_recalc_budget(bfqd, bfqq, reason);
+-	ref = bfqq->ref;
+-	__bfq_bfqq_expire(bfqd, bfqq);
+-
+-	if (ref == 1) /* bfqq is gone, no more actions on it */
++	if (__bfq_bfqq_expire(bfqd, bfqq))
++		/* bfqq is gone, no more actions on it */
+ 		return;
+ 
+ 	bfqq->injected_service = 0;
+diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
+index 746bd570b85a..ca98c98a8179 100644
+--- a/block/bfq-iosched.h
++++ b/block/bfq-iosched.h
+@@ -993,7 +993,7 @@ bool __bfq_deactivate_entity(struct bfq_entity *entity,
+ 			     bool ins_into_idle_tree);
+ bool next_queue_may_preempt(struct bfq_data *bfqd);
+ struct bfq_queue *bfq_get_next_queue(struct bfq_data *bfqd);
+-void __bfq_bfqd_reset_in_service(struct bfq_data *bfqd);
++bool __bfq_bfqd_reset_in_service(struct bfq_data *bfqd);
+ void bfq_deactivate_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 			 bool ins_into_idle_tree, bool expiration);
+ void bfq_activate_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq);
+diff --git a/block/bfq-wf2q.c b/block/bfq-wf2q.c
+index 4aab1a8191f0..8077bf71d2ac 100644
+--- a/block/bfq-wf2q.c
++++ b/block/bfq-wf2q.c
+@@ -1599,7 +1599,8 @@ struct bfq_queue *bfq_get_next_queue(struct bfq_data *bfqd)
+ 	return bfqq;
+ }
+ 
+-void __bfq_bfqd_reset_in_service(struct bfq_data *bfqd)
++/* returns true if the in-service queue gets freed */
++bool __bfq_bfqd_reset_in_service(struct bfq_data *bfqd)
+ {
+ 	struct bfq_queue *in_serv_bfqq = bfqd->in_service_queue;
+ 	struct bfq_entity *in_serv_entity = &in_serv_bfqq->entity;
+@@ -1623,8 +1624,20 @@ void __bfq_bfqd_reset_in_service(struct bfq_data *bfqd)
+ 	 * service tree either, then release the service reference to
+ 	 * the queue it represents (taken with bfq_get_entity).
+ 	 */
+-	if (!in_serv_entity->on_st)
++	if (!in_serv_entity->on_st) {
++		/*
++		 * If no process is referencing in_serv_bfqq any
++		 * longer, then the service reference may be the only
++		 * reference to the queue. If this is the case, then
++		 * bfqq gets freed here.
++		 */
++		int ref = in_serv_bfqq->ref;
+ 		bfq_put_queue(in_serv_bfqq);
++		if (ref == 1)
++			return true;
++	}
++
++	return false;
+ }
+ 
+ void bfq_deactivate_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+diff --git a/crypto/lrw.c b/crypto/lrw.c
+index 0430ccd08728..08a0e458bc3e 100644
+--- a/crypto/lrw.c
++++ b/crypto/lrw.c
+@@ -212,8 +212,12 @@ static void crypt_done(struct crypto_async_request *areq, int err)
+ {
+ 	struct skcipher_request *req = areq->data;
+ 
+-	if (!err)
++	if (!err) {
++		struct rctx *rctx = skcipher_request_ctx(req);
++
++		rctx->subreq.base.flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
+ 		err = xor_tweak_post(req);
++	}
+ 
+ 	skcipher_request_complete(req, err);
+ }
+diff --git a/crypto/xts.c b/crypto/xts.c
+index 847f54f76789..2f948328cabb 100644
+--- a/crypto/xts.c
++++ b/crypto/xts.c
+@@ -137,8 +137,12 @@ static void crypt_done(struct crypto_async_request *areq, int err)
+ {
+ 	struct skcipher_request *req = areq->data;
+ 
+-	if (!err)
++	if (!err) {
++		struct rctx *rctx = skcipher_request_ctx(req);
++
++		rctx->subreq.base.flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
+ 		err = xor_tweak_post(req);
++	}
+ 
+ 	skcipher_request_complete(req, err);
+ }
+diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
+index 022cd80e80cc..a6e556bf62df 100644
+--- a/drivers/android/binder_alloc.c
++++ b/drivers/android/binder_alloc.c
+@@ -959,14 +959,13 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
+ 
+ 	index = page - alloc->pages;
+ 	page_addr = (uintptr_t)alloc->buffer + index * PAGE_SIZE;
++
++	mm = alloc->vma_vm_mm;
++	if (!mmget_not_zero(mm))
++		goto err_mmget;
++	if (!down_write_trylock(&mm->mmap_sem))
++		goto err_down_write_mmap_sem_failed;
+ 	vma = binder_alloc_get_vma(alloc);
+-	if (vma) {
+-		if (!mmget_not_zero(alloc->vma_vm_mm))
+-			goto err_mmget;
+-		mm = alloc->vma_vm_mm;
+-		if (!down_write_trylock(&mm->mmap_sem))
+-			goto err_down_write_mmap_sem_failed;
+-	}
+ 
+ 	list_lru_isolate(lru, item);
+ 	spin_unlock(lock);
+@@ -979,10 +978,9 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
+ 			       PAGE_SIZE);
+ 
+ 		trace_binder_unmap_user_end(alloc, index);
+-
+-		up_write(&mm->mmap_sem);
+-		mmput(mm);
+ 	}
++	up_write(&mm->mmap_sem);
++	mmput(mm);
+ 
+ 	trace_binder_unmap_kernel_start(alloc, index);
+ 
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index 9a8d83bc1e75..fc7aefd42ae0 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -1111,8 +1111,9 @@ out_unlock:
+ 			err = __blkdev_reread_part(bdev);
+ 		else
+ 			err = blkdev_reread_part(bdev);
+-		pr_warn("%s: partition scan of loop%d failed (rc=%d)\n",
+-			__func__, lo_number, err);
++		if (err)
++			pr_warn("%s: partition scan of loop%d failed (rc=%d)\n",
++				__func__, lo_number, err);
+ 		/* Device is gone, no point in returning error */
+ 		err = 0;
+ 	}
+diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
+index 684854d3b0ad..7e57f8f012c3 100644
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -774,18 +774,18 @@ struct zram_work {
+ 	struct zram *zram;
+ 	unsigned long entry;
+ 	struct bio *bio;
++	struct bio_vec bvec;
+ };
+ 
+ #if PAGE_SIZE != 4096
+ static void zram_sync_read(struct work_struct *work)
+ {
+-	struct bio_vec bvec;
+ 	struct zram_work *zw = container_of(work, struct zram_work, work);
+ 	struct zram *zram = zw->zram;
+ 	unsigned long entry = zw->entry;
+ 	struct bio *bio = zw->bio;
+ 
+-	read_from_bdev_async(zram, &bvec, entry, bio);
++	read_from_bdev_async(zram, &zw->bvec, entry, bio);
+ }
+ 
+ /*
+@@ -798,6 +798,7 @@ static int read_from_bdev_sync(struct zram *zram, struct bio_vec *bvec,
+ {
+ 	struct zram_work work;
+ 
++	work.bvec = *bvec;
+ 	work.zram = zram;
+ 	work.entry = entry;
+ 	work.bio = bio;
+diff --git a/drivers/dma/mediatek/mtk-cqdma.c b/drivers/dma/mediatek/mtk-cqdma.c
+index 131f3974740d..814853842e29 100644
+--- a/drivers/dma/mediatek/mtk-cqdma.c
++++ b/drivers/dma/mediatek/mtk-cqdma.c
+@@ -253,7 +253,7 @@ static void mtk_cqdma_start(struct mtk_cqdma_pchan *pc,
+ #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
+ 	mtk_dma_set(pc, MTK_CQDMA_DST2, cvd->dest >> MTK_CQDMA_ADDR2_SHFIT);
+ #else
+-	mtk_dma_set(pc, MTK_CQDMA_SRC2, 0);
++	mtk_dma_set(pc, MTK_CQDMA_DST2, 0);
+ #endif
+ 
+ 	/* setup the length */
+diff --git a/drivers/dma/sh/rcar-dmac.c b/drivers/dma/sh/rcar-dmac.c
+index 2b4f25698169..e2a5398f89b5 100644
+--- a/drivers/dma/sh/rcar-dmac.c
++++ b/drivers/dma/sh/rcar-dmac.c
+@@ -1282,6 +1282,9 @@ static unsigned int rcar_dmac_chan_get_residue(struct rcar_dmac_chan *chan,
+ 	enum dma_status status;
+ 	unsigned int residue = 0;
+ 	unsigned int dptr = 0;
++	unsigned int chcrb;
++	unsigned int tcrb;
++	unsigned int i;
+ 
+ 	if (!desc)
+ 		return 0;
+@@ -1329,6 +1332,24 @@ static unsigned int rcar_dmac_chan_get_residue(struct rcar_dmac_chan *chan,
+ 		return 0;
+ 	}
+ 
++	/*
++	 * We need to read two registers.
++	 * Make sure the control register does not skip to next chunk
++	 * while reading the counter.
++	 * Trying it 3 times should be enough: Initial read, retry, retry
++	 * for the paranoid.
++	 */
++	for (i = 0; i < 3; i++) {
++		chcrb = rcar_dmac_chan_read(chan, RCAR_DMACHCRB) &
++					    RCAR_DMACHCRB_DPTR_MASK;
++		tcrb = rcar_dmac_chan_read(chan, RCAR_DMATCRB);
++		/* Still the same? */
++		if (chcrb == (rcar_dmac_chan_read(chan, RCAR_DMACHCRB) &
++			      RCAR_DMACHCRB_DPTR_MASK))
++			break;
++	}
++	WARN_ONCE(i >= 3, "residue might be not continuous!");
++
+ 	/*
+ 	 * In descriptor mode the descriptor running pointer is not maintained
+ 	 * by the interrupt handler, find the running descriptor from the
+@@ -1336,8 +1357,7 @@ static unsigned int rcar_dmac_chan_get_residue(struct rcar_dmac_chan *chan,
+ 	 * mode just use the running descriptor pointer.
+ 	 */
+ 	if (desc->hwdescs.use) {
+-		dptr = (rcar_dmac_chan_read(chan, RCAR_DMACHCRB) &
+-			RCAR_DMACHCRB_DPTR_MASK) >> RCAR_DMACHCRB_DPTR_SHIFT;
++		dptr = chcrb >> RCAR_DMACHCRB_DPTR_SHIFT;
+ 		if (dptr == 0)
+ 			dptr = desc->nchunks;
+ 		dptr--;
+@@ -1355,7 +1375,7 @@ static unsigned int rcar_dmac_chan_get_residue(struct rcar_dmac_chan *chan,
+ 	}
+ 
+ 	/* Add the residue for the current chunk. */
+-	residue += rcar_dmac_chan_read(chan, RCAR_DMATCRB) << desc->xfer_shift;
++	residue += tcrb << desc->xfer_shift;
+ 
+ 	return residue;
+ }
+@@ -1368,6 +1388,7 @@ static enum dma_status rcar_dmac_tx_status(struct dma_chan *chan,
+ 	enum dma_status status;
+ 	unsigned long flags;
+ 	unsigned int residue;
++	bool cyclic;
+ 
+ 	status = dma_cookie_status(chan, cookie, txstate);
+ 	if (status == DMA_COMPLETE || !txstate)
+@@ -1375,10 +1396,11 @@ static enum dma_status rcar_dmac_tx_status(struct dma_chan *chan,
+ 
+ 	spin_lock_irqsave(&rchan->lock, flags);
+ 	residue = rcar_dmac_chan_get_residue(rchan, cookie);
++	cyclic = rchan->desc.running ? rchan->desc.running->cyclic : false;
+ 	spin_unlock_irqrestore(&rchan->lock, flags);
+ 
+ 	/* if there's no residue, the cookie is complete */
+-	if (!residue)
++	if (!residue && !cyclic)
+ 		return DMA_COMPLETE;
+ 
+ 	dma_set_residue(txstate, residue);
+diff --git a/drivers/gpio/gpio-eic-sprd.c b/drivers/gpio/gpio-eic-sprd.c
+index e41223c05f6e..6cf2e2ce4093 100644
+--- a/drivers/gpio/gpio-eic-sprd.c
++++ b/drivers/gpio/gpio-eic-sprd.c
+@@ -414,6 +414,7 @@ static int sprd_eic_irq_set_type(struct irq_data *data, unsigned int flow_type)
+ 			irq_set_handler_locked(data, handle_edge_irq);
+ 			break;
+ 		case IRQ_TYPE_EDGE_BOTH:
++			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTMODE, 0);
+ 			sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTBOTH, 1);
+ 			irq_set_handler_locked(data, handle_edge_irq);
+ 			break;
+diff --git a/drivers/gpu/drm/i915/intel_fbdev.c b/drivers/gpu/drm/i915/intel_fbdev.c
+index 4ee16b264dbe..7f365ac0b549 100644
+--- a/drivers/gpu/drm/i915/intel_fbdev.c
++++ b/drivers/gpu/drm/i915/intel_fbdev.c
+@@ -336,8 +336,8 @@ static bool intel_fb_initial_config(struct drm_fb_helper *fb_helper,
+ 				    bool *enabled, int width, int height)
+ {
+ 	struct drm_i915_private *dev_priv = to_i915(fb_helper->dev);
++	unsigned long conn_configured, conn_seq, mask;
+ 	unsigned int count = min(fb_helper->connector_count, BITS_PER_LONG);
+-	unsigned long conn_configured, conn_seq;
+ 	int i, j;
+ 	bool *save_enabled;
+ 	bool fallback = true, ret = true;
+@@ -355,9 +355,10 @@ static bool intel_fb_initial_config(struct drm_fb_helper *fb_helper,
+ 		drm_modeset_backoff(&ctx);
+ 
+ 	memcpy(save_enabled, enabled, count);
+-	conn_seq = GENMASK(count - 1, 0);
++	mask = GENMASK(count - 1, 0);
+ 	conn_configured = 0;
+ retry:
++	conn_seq = conn_configured;
+ 	for (i = 0; i < count; i++) {
+ 		struct drm_fb_helper_connector *fb_conn;
+ 		struct drm_connector *connector;
+@@ -370,8 +371,7 @@ retry:
+ 		if (conn_configured & BIT(i))
+ 			continue;
+ 
+-		/* First pass, only consider tiled connectors */
+-		if (conn_seq == GENMASK(count - 1, 0) && !connector->has_tile)
++		if (conn_seq == 0 && !connector->has_tile)
+ 			continue;
+ 
+ 		if (connector->status == connector_status_connected)
+@@ -475,10 +475,8 @@ retry:
+ 		conn_configured |= BIT(i);
+ 	}
+ 
+-	if (conn_configured != conn_seq) { /* repeat until no more are found */
+-		conn_seq = conn_configured;
++	if ((conn_configured & mask) != mask && conn_configured != conn_seq)
+ 		goto retry;
+-	}
+ 
+ 	/*
+ 	 * If the BIOS didn't enable everything it could, fall back to have the
+diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
+index 0ec08394e17a..996cadd83f24 100644
+--- a/drivers/gpu/drm/ttm/ttm_bo.c
++++ b/drivers/gpu/drm/ttm/ttm_bo.c
+@@ -49,9 +49,8 @@ static void ttm_bo_global_kobj_release(struct kobject *kobj);
+  * ttm_global_mutex - protecting the global BO state
+  */
+ DEFINE_MUTEX(ttm_global_mutex);
+-struct ttm_bo_global ttm_bo_glob = {
+-	.use_count = 0
+-};
++unsigned ttm_bo_glob_use_count;
++struct ttm_bo_global ttm_bo_glob;
+ 
+ static struct attribute ttm_bo_count = {
+ 	.name = "bo_count",
+@@ -1535,12 +1534,13 @@ static void ttm_bo_global_release(void)
+ 	struct ttm_bo_global *glob = &ttm_bo_glob;
+ 
+ 	mutex_lock(&ttm_global_mutex);
+-	if (--glob->use_count > 0)
++	if (--ttm_bo_glob_use_count > 0)
+ 		goto out;
+ 
+ 	kobject_del(&glob->kobj);
+ 	kobject_put(&glob->kobj);
+ 	ttm_mem_global_release(&ttm_mem_glob);
++	memset(glob, 0, sizeof(*glob));
+ out:
+ 	mutex_unlock(&ttm_global_mutex);
+ }
+@@ -1552,7 +1552,7 @@ static int ttm_bo_global_init(void)
+ 	unsigned i;
+ 
+ 	mutex_lock(&ttm_global_mutex);
+-	if (++glob->use_count > 1)
++	if (++ttm_bo_glob_use_count > 1)
+ 		goto out;
+ 
+ 	ret = ttm_mem_global_init(&ttm_mem_glob);
+diff --git a/drivers/gpu/drm/ttm/ttm_memory.c b/drivers/gpu/drm/ttm/ttm_memory.c
+index f1567c353b54..9a0909decb36 100644
+--- a/drivers/gpu/drm/ttm/ttm_memory.c
++++ b/drivers/gpu/drm/ttm/ttm_memory.c
+@@ -461,8 +461,8 @@ out_no_zone:
+ 
+ void ttm_mem_global_release(struct ttm_mem_global *glob)
+ {
+-	unsigned int i;
+ 	struct ttm_mem_zone *zone;
++	unsigned int i;
+ 
+ 	/* let the page allocator first stop the shrink work. */
+ 	ttm_page_alloc_fini();
+@@ -475,9 +475,10 @@ void ttm_mem_global_release(struct ttm_mem_global *glob)
+ 		zone = glob->zones[i];
+ 		kobject_del(&zone->kobj);
+ 		kobject_put(&zone->kobj);
+-			}
++	}
+ 	kobject_del(&glob->kobj);
+ 	kobject_put(&glob->kobj);
++	memset(glob, 0, sizeof(*glob));
+ }
+ 
+ static void ttm_check_swapping(struct ttm_mem_global *glob)
+diff --git a/drivers/gpu/drm/vc4/vc4_crtc.c b/drivers/gpu/drm/vc4/vc4_crtc.c
+index 3ce136ba8791..2ae4ece0dcea 100644
+--- a/drivers/gpu/drm/vc4/vc4_crtc.c
++++ b/drivers/gpu/drm/vc4/vc4_crtc.c
+@@ -999,7 +999,7 @@ static void
+ vc4_crtc_reset(struct drm_crtc *crtc)
+ {
+ 	if (crtc->state)
+-		__drm_atomic_helper_crtc_destroy_state(crtc->state);
++		vc4_crtc_destroy_state(crtc, crtc->state);
+ 
+ 	crtc->state = kzalloc(sizeof(struct vc4_crtc_state), GFP_KERNEL);
+ 	if (crtc->state)
+diff --git a/drivers/hwtracing/intel_th/gth.c b/drivers/hwtracing/intel_th/gth.c
+index cc287cf6eb29..edc52d75e6bd 100644
+--- a/drivers/hwtracing/intel_th/gth.c
++++ b/drivers/hwtracing/intel_th/gth.c
+@@ -616,7 +616,7 @@ static void intel_th_gth_unassign(struct intel_th_device *thdev,
+ 	othdev->output.port = -1;
+ 	othdev->output.active = false;
+ 	gth->output[port].output = NULL;
+-	for (master = 0; master < TH_CONFIGURABLE_MASTERS; master++)
++	for (master = 0; master <= TH_CONFIGURABLE_MASTERS; master++)
+ 		if (gth->master[master] == port)
+ 			gth->master[master] = -1;
+ 	spin_unlock(&gth->gth_lock);
+diff --git a/drivers/infiniband/core/uverbs.h b/drivers/infiniband/core/uverbs.h
+index ea0bc6885517..32cc8fe7902f 100644
+--- a/drivers/infiniband/core/uverbs.h
++++ b/drivers/infiniband/core/uverbs.h
+@@ -160,6 +160,7 @@ struct ib_uverbs_file {
+ 
+ 	struct mutex umap_lock;
+ 	struct list_head umaps;
++	struct page *disassociate_page;
+ 
+ 	struct idr		idr;
+ 	/* spinlock protects write access to idr */
+diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c
+index e2a4570a47e8..27ca4022ca70 100644
+--- a/drivers/infiniband/core/uverbs_main.c
++++ b/drivers/infiniband/core/uverbs_main.c
+@@ -208,6 +208,9 @@ void ib_uverbs_release_file(struct kref *ref)
+ 		kref_put(&file->async_file->ref,
+ 			 ib_uverbs_release_async_event_file);
+ 	put_device(&file->device->dev);
++
++	if (file->disassociate_page)
++		__free_pages(file->disassociate_page, 0);
+ 	kfree(file);
+ }
+ 
+@@ -876,9 +879,50 @@ static void rdma_umap_close(struct vm_area_struct *vma)
+ 	kfree(priv);
+ }
+ 
++/*
++ * Once the zap_vma_ptes has been called touches to the VMA will come here and
++ * we return a dummy writable zero page for all the pfns.
++ */
++static vm_fault_t rdma_umap_fault(struct vm_fault *vmf)
++{
++	struct ib_uverbs_file *ufile = vmf->vma->vm_file->private_data;
++	struct rdma_umap_priv *priv = vmf->vma->vm_private_data;
++	vm_fault_t ret = 0;
++
++	if (!priv)
++		return VM_FAULT_SIGBUS;
++
++	/* Read only pages can just use the system zero page. */
++	if (!(vmf->vma->vm_flags & (VM_WRITE | VM_MAYWRITE))) {
++		vmf->page = ZERO_PAGE(vmf->address);
++		get_page(vmf->page);
++		return 0;
++	}
++
++	mutex_lock(&ufile->umap_lock);
++	if (!ufile->disassociate_page)
++		ufile->disassociate_page =
++			alloc_pages(vmf->gfp_mask | __GFP_ZERO, 0);
++
++	if (ufile->disassociate_page) {
++		/*
++		 * This VMA is forced to always be shared so this doesn't have
++		 * to worry about COW.
++		 */
++		vmf->page = ufile->disassociate_page;
++		get_page(vmf->page);
++	} else {
++		ret = VM_FAULT_SIGBUS;
++	}
++	mutex_unlock(&ufile->umap_lock);
++
++	return ret;
++}
++
+ static const struct vm_operations_struct rdma_umap_ops = {
+ 	.open = rdma_umap_open,
+ 	.close = rdma_umap_close,
++	.fault = rdma_umap_fault,
+ };
+ 
+ static struct rdma_umap_priv *rdma_user_mmap_pre(struct ib_ucontext *ucontext,
+@@ -888,6 +932,9 @@ static struct rdma_umap_priv *rdma_user_mmap_pre(struct ib_ucontext *ucontext,
+ 	struct ib_uverbs_file *ufile = ucontext->ufile;
+ 	struct rdma_umap_priv *priv;
+ 
++	if (!(vma->vm_flags & VM_SHARED))
++		return ERR_PTR(-EINVAL);
++
+ 	if (vma->vm_end - vma->vm_start != size)
+ 		return ERR_PTR(-EINVAL);
+ 
+@@ -991,7 +1038,7 @@ void uverbs_user_mmap_disassociate(struct ib_uverbs_file *ufile)
+ 		 * at a time to get the lock ordering right. Typically there
+ 		 * will only be one mm, so no big deal.
+ 		 */
+-		down_write(&mm->mmap_sem);
++		down_read(&mm->mmap_sem);
+ 		if (!mmget_still_valid(mm))
+ 			goto skip_mm;
+ 		mutex_lock(&ufile->umap_lock);
+@@ -1005,11 +1052,10 @@ void uverbs_user_mmap_disassociate(struct ib_uverbs_file *ufile)
+ 
+ 			zap_vma_ptes(vma, vma->vm_start,
+ 				     vma->vm_end - vma->vm_start);
+-			vma->vm_flags &= ~(VM_SHARED | VM_MAYSHARE);
+ 		}
+ 		mutex_unlock(&ufile->umap_lock);
+ 	skip_mm:
+-		up_write(&mm->mmap_sem);
++		up_read(&mm->mmap_sem);
+ 		mmput(mm);
+ 	}
+ }
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index 94fe253d4956..497181f5ba09 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -1982,6 +1982,7 @@ static int mlx5_ib_mmap_clock_info_page(struct mlx5_ib_dev *dev,
+ 
+ 	if (vma->vm_flags & VM_WRITE)
+ 		return -EPERM;
++	vma->vm_flags &= ~VM_MAYWRITE;
+ 
+ 	if (!dev->mdev->clock_info_page)
+ 		return -EOPNOTSUPP;
+@@ -2147,19 +2148,18 @@ static int mlx5_ib_mmap(struct ib_ucontext *ibcontext, struct vm_area_struct *vm
+ 
+ 		if (vma->vm_flags & VM_WRITE)
+ 			return -EPERM;
++		vma->vm_flags &= ~VM_MAYWRITE;
+ 
+ 		/* Don't expose to user-space information it shouldn't have */
+ 		if (PAGE_SIZE > 4096)
+ 			return -EOPNOTSUPP;
+ 
+-		vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+ 		pfn = (dev->mdev->iseg_base +
+ 		       offsetof(struct mlx5_init_seg, internal_timer_h)) >>
+ 			PAGE_SHIFT;
+-		if (io_remap_pfn_range(vma, vma->vm_start, pfn,
+-				       PAGE_SIZE, vma->vm_page_prot))
+-			return -EAGAIN;
+-		break;
++		return rdma_user_mmap_io(&context->ibucontext, vma, pfn,
++					 PAGE_SIZE,
++					 pgprot_noncached(vma->vm_page_prot));
+ 	case MLX5_IB_MMAP_CLOCK_INFO:
+ 		return mlx5_ib_mmap_clock_info_page(dev, vma, context);
+ 
+diff --git a/drivers/infiniband/sw/rdmavt/mr.c b/drivers/infiniband/sw/rdmavt/mr.c
+index 49c9541050d4..5819c9d6ffdc 100644
+--- a/drivers/infiniband/sw/rdmavt/mr.c
++++ b/drivers/infiniband/sw/rdmavt/mr.c
+@@ -611,11 +611,6 @@ static int rvt_set_page(struct ib_mr *ibmr, u64 addr)
+ 	if (unlikely(mapped_segs == mr->mr.max_segs))
+ 		return -ENOMEM;
+ 
+-	if (mr->mr.length == 0) {
+-		mr->mr.user_base = addr;
+-		mr->mr.iova = addr;
+-	}
+-
+ 	m = mapped_segs / RVT_SEGSZ;
+ 	n = mapped_segs % RVT_SEGSZ;
+ 	mr->mr.map[m]->segs[n].vaddr = (void *)addr;
+@@ -633,17 +628,24 @@ static int rvt_set_page(struct ib_mr *ibmr, u64 addr)
+  * @sg_nents: number of entries in sg
+  * @sg_offset: offset in bytes into sg
+  *
++ * Overwrite rvt_mr length with mr length calculated by ib_sg_to_pages.
++ *
+  * Return: number of sg elements mapped to the memory region
+  */
+ int rvt_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
+ 		  int sg_nents, unsigned int *sg_offset)
+ {
+ 	struct rvt_mr *mr = to_imr(ibmr);
++	int ret;
+ 
+ 	mr->mr.length = 0;
+ 	mr->mr.page_shift = PAGE_SHIFT;
+-	return ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset,
+-			      rvt_set_page);
++	ret = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, rvt_set_page);
++	mr->mr.user_base = ibmr->iova;
++	mr->mr.iova = ibmr->iova;
++	mr->mr.offset = ibmr->iova - (u64)mr->mr.map[0]->segs[0].vaddr;
++	mr->mr.length = (size_t)ibmr->length;
++	return ret;
+ }
+ 
+ /**
+@@ -674,6 +676,7 @@ int rvt_fast_reg_mr(struct rvt_qp *qp, struct ib_mr *ibmr, u32 key,
+ 	ibmr->rkey = key;
+ 	mr->mr.lkey = key;
+ 	mr->mr.access_flags = access;
++	mr->mr.iova = ibmr->iova;
+ 	atomic_set(&mr->mr.lkey_invalid, 0);
+ 
+ 	return 0;
+diff --git a/drivers/input/rmi4/rmi_f11.c b/drivers/input/rmi4/rmi_f11.c
+index df64d6aed4f7..93901ebd122a 100644
+--- a/drivers/input/rmi4/rmi_f11.c
++++ b/drivers/input/rmi4/rmi_f11.c
+@@ -1230,7 +1230,7 @@ static int rmi_f11_initialize(struct rmi_function *fn)
+ 	}
+ 
+ 	rc = f11_write_control_regs(fn, &f11->sens_query,
+-			   &f11->dev_controls, fn->fd.query_base_addr);
++			   &f11->dev_controls, fn->fd.control_base_addr);
+ 	if (rc)
+ 		dev_warn(&fn->dev, "Failed to write control registers\n");
+ 
+diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_main.c b/drivers/net/ethernet/intel/fm10k/fm10k_main.c
+index 6fd15a734324..58f02c85f2fe 100644
+--- a/drivers/net/ethernet/intel/fm10k/fm10k_main.c
++++ b/drivers/net/ethernet/intel/fm10k/fm10k_main.c
+@@ -41,6 +41,8 @@ static int __init fm10k_init_module(void)
+ 	/* create driver workqueue */
+ 	fm10k_workqueue = alloc_workqueue("%s", WQ_MEM_RECLAIM, 0,
+ 					  fm10k_driver_name);
++	if (!fm10k_workqueue)
++		return -ENOMEM;
+ 
+ 	fm10k_dbg_init();
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+index 03b2a9f9c589..cad34d6f5f45 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+@@ -33,6 +33,26 @@
+ #include <linux/bpf_trace.h>
+ #include "en/xdp.h"
+ 
++int mlx5e_xdp_max_mtu(struct mlx5e_params *params)
++{
++	int hr = NET_IP_ALIGN + XDP_PACKET_HEADROOM;
++
++	/* Let S := SKB_DATA_ALIGN(sizeof(struct skb_shared_info)).
++	 * The condition checked in mlx5e_rx_is_linear_skb is:
++	 *   SKB_DATA_ALIGN(sw_mtu + hard_mtu + hr) + S <= PAGE_SIZE         (1)
++	 *   (Note that hw_mtu == sw_mtu + hard_mtu.)
++	 * What is returned from this function is:
++	 *   max_mtu = PAGE_SIZE - S - hr - hard_mtu                         (2)
++	 * After assigning sw_mtu := max_mtu, the left side of (1) turns to
++	 * SKB_DATA_ALIGN(PAGE_SIZE - S) + S, which is equal to PAGE_SIZE,
++	 * because both PAGE_SIZE and S are already aligned. Any number greater
++	 * than max_mtu would make the left side of (1) greater than PAGE_SIZE,
++	 * so max_mtu is the maximum MTU allowed.
++	 */
++
++	return MLX5E_HW2SW_MTU(params, SKB_MAX_HEAD(hr));
++}
++
+ static inline bool
+ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_dma_info *di,
+ 		    struct xdp_buff *xdp)
+@@ -304,9 +324,9 @@ bool mlx5e_poll_xdpsq_cq(struct mlx5e_cq *cq, struct mlx5e_rq *rq)
+ 					mlx5e_xdpi_fifo_pop(xdpi_fifo);
+ 
+ 				if (is_redirect) {
+-					xdp_return_frame(xdpi.xdpf);
+ 					dma_unmap_single(sq->pdev, xdpi.dma_addr,
+ 							 xdpi.xdpf->len, DMA_TO_DEVICE);
++					xdp_return_frame(xdpi.xdpf);
+ 				} else {
+ 					/* Recycle RX page */
+ 					mlx5e_page_release(rq, &xdpi.di, true);
+@@ -345,9 +365,9 @@ void mlx5e_free_xdpsq_descs(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq)
+ 				mlx5e_xdpi_fifo_pop(xdpi_fifo);
+ 
+ 			if (is_redirect) {
+-				xdp_return_frame(xdpi.xdpf);
+ 				dma_unmap_single(sq->pdev, xdpi.dma_addr,
+ 						 xdpi.xdpf->len, DMA_TO_DEVICE);
++				xdp_return_frame(xdpi.xdpf);
+ 			} else {
+ 				/* Recycle RX page */
+ 				mlx5e_page_release(rq, &xdpi.di, false);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
+index ee27a7c8cd87..553956cadc8a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
+@@ -34,13 +34,12 @@
+ 
+ #include "en.h"
+ 
+-#define MLX5E_XDP_MAX_MTU ((int)(PAGE_SIZE - \
+-				 MLX5_SKB_FRAG_SZ(XDP_PACKET_HEADROOM)))
+ #define MLX5E_XDP_MIN_INLINE (ETH_HLEN + VLAN_HLEN)
+ #define MLX5E_XDP_TX_EMPTY_DS_COUNT \
+ 	(sizeof(struct mlx5e_tx_wqe) / MLX5_SEND_WQE_DS)
+ #define MLX5E_XDP_TX_DS_COUNT (MLX5E_XDP_TX_EMPTY_DS_COUNT + 1 /* SG DS */)
+ 
++int mlx5e_xdp_max_mtu(struct mlx5e_params *params);
+ bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct mlx5e_dma_info *di,
+ 		      void *va, u16 *rx_headroom, u32 *len);
+ bool mlx5e_poll_xdpsq_cq(struct mlx5e_cq *cq, struct mlx5e_rq *rq);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+index 3b9e5f0d0212..253496c4a3db 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+@@ -1470,7 +1470,7 @@ static int mlx5e_get_module_info(struct net_device *netdev,
+ 		break;
+ 	case MLX5_MODULE_ID_SFP:
+ 		modinfo->type       = ETH_MODULE_SFF_8472;
+-		modinfo->eeprom_len = ETH_MODULE_SFF_8472_LEN;
++		modinfo->eeprom_len = MLX5_EEPROM_PAGE_LENGTH;
+ 		break;
+ 	default:
+ 		netdev_err(priv->netdev, "%s: cable type not recognized:0x%x\n",
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 0cb19e4dd439..2d269acdbc8e 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -3816,7 +3816,7 @@ int mlx5e_change_mtu(struct net_device *netdev, int new_mtu,
+ 	if (params->xdp_prog &&
+ 	    !mlx5e_rx_is_linear_skb(priv->mdev, &new_channels.params)) {
+ 		netdev_err(netdev, "MTU(%d) > %d is not allowed while XDP enabled\n",
+-			   new_mtu, MLX5E_XDP_MAX_MTU);
++			   new_mtu, mlx5e_xdp_max_mtu(params));
+ 		err = -EINVAL;
+ 		goto out;
+ 	}
+@@ -4280,7 +4280,8 @@ static int mlx5e_xdp_allowed(struct mlx5e_priv *priv, struct bpf_prog *prog)
+ 
+ 	if (!mlx5e_rx_is_linear_skb(priv->mdev, &new_channels.params)) {
+ 		netdev_warn(netdev, "XDP is not allowed with MTU(%d) > %d\n",
+-			    new_channels.params.sw_mtu, MLX5E_XDP_MAX_MTU);
++			    new_channels.params.sw_mtu,
++			    mlx5e_xdp_max_mtu(&new_channels.params));
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/port.c b/drivers/net/ethernet/mellanox/mlx5/core/port.c
+index 2b82f35f4c35..efce1fa37f6f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/port.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/port.c
+@@ -404,10 +404,6 @@ int mlx5_query_module_eeprom(struct mlx5_core_dev *dev,
+ 		size -= offset + size - MLX5_EEPROM_PAGE_LENGTH;
+ 
+ 	i2c_addr = MLX5_I2C_ADDR_LOW;
+-	if (offset >= MLX5_EEPROM_PAGE_LENGTH) {
+-		i2c_addr = MLX5_I2C_ADDR_HIGH;
+-		offset -= MLX5_EEPROM_PAGE_LENGTH;
+-	}
+ 
+ 	MLX5_SET(mcia_reg, in, l, 0);
+ 	MLX5_SET(mcia_reg, in, module, module_num);
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/pci_hw.h b/drivers/net/ethernet/mellanox/mlxsw/pci_hw.h
+index ffee38e36ce8..8648ca171254 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/pci_hw.h
++++ b/drivers/net/ethernet/mellanox/mlxsw/pci_hw.h
+@@ -27,7 +27,7 @@
+ 
+ #define MLXSW_PCI_SW_RESET			0xF0010
+ #define MLXSW_PCI_SW_RESET_RST_BIT		BIT(0)
+-#define MLXSW_PCI_SW_RESET_TIMEOUT_MSECS	13000
++#define MLXSW_PCI_SW_RESET_TIMEOUT_MSECS	20000
+ #define MLXSW_PCI_SW_RESET_WAIT_MSECS		100
+ #define MLXSW_PCI_FW_READY			0xA1844
+ #define MLXSW_PCI_FW_READY_MASK			0xFFFF
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+index cbdee5164be7..ce49504e1f9c 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+@@ -2667,11 +2667,11 @@ mlxsw_sp_port_set_link_ksettings(struct net_device *dev,
+ 	if (err)
+ 		return err;
+ 
++	mlxsw_sp_port->link.autoneg = autoneg;
++
+ 	if (!netif_running(dev))
+ 		return 0;
+ 
+-	mlxsw_sp_port->link.autoneg = autoneg;
+-
+ 	mlxsw_sp_port_admin_status_set(mlxsw_sp_port, false);
+ 	mlxsw_sp_port_admin_status_set(mlxsw_sp_port, true);
+ 
+@@ -2961,7 +2961,7 @@ static int mlxsw_sp_port_ets_init(struct mlxsw_sp_port *mlxsw_sp_port)
+ 		err = mlxsw_sp_port_ets_set(mlxsw_sp_port,
+ 					    MLXSW_REG_QEEC_HIERARCY_TC,
+ 					    i + 8, i,
+-					    false, 0);
++					    true, 100);
+ 		if (err)
+ 			return err;
+ 	}
+diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c
+index a18149720aa2..cba5881b2746 100644
+--- a/drivers/net/ethernet/socionext/netsec.c
++++ b/drivers/net/ethernet/socionext/netsec.c
+@@ -673,7 +673,8 @@ static void netsec_process_tx(struct netsec_priv *priv)
+ }
+ 
+ static void *netsec_alloc_rx_data(struct netsec_priv *priv,
+-				  dma_addr_t *dma_handle, u16 *desc_len)
++				  dma_addr_t *dma_handle, u16 *desc_len,
++				  bool napi)
+ {
+ 	size_t total_len = SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+ 	size_t payload_len = NETSEC_RX_BUF_SZ;
+@@ -682,7 +683,7 @@ static void *netsec_alloc_rx_data(struct netsec_priv *priv,
+ 
+ 	total_len += SKB_DATA_ALIGN(payload_len + NETSEC_SKB_PAD);
+ 
+-	buf = napi_alloc_frag(total_len);
++	buf = napi ? napi_alloc_frag(total_len) : netdev_alloc_frag(total_len);
+ 	if (!buf)
+ 		return NULL;
+ 
+@@ -765,7 +766,8 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget)
+ 		/* allocate a fresh buffer and map it to the hardware.
+ 		 * This will eventually replace the old buffer in the hardware
+ 		 */
+-		buf_addr = netsec_alloc_rx_data(priv, &dma_handle, &desc_len);
++		buf_addr = netsec_alloc_rx_data(priv, &dma_handle, &desc_len,
++						true);
+ 		if (unlikely(!buf_addr))
+ 			break;
+ 
+@@ -1069,7 +1071,8 @@ static int netsec_setup_rx_dring(struct netsec_priv *priv)
+ 		void *buf;
+ 		u16 len;
+ 
+-		buf = netsec_alloc_rx_data(priv, &dma_handle, &len);
++		buf = netsec_alloc_rx_data(priv, &dma_handle, &len,
++					   false);
+ 		if (!buf) {
+ 			netsec_uninit_pkt_dring(priv, NETSEC_RING_RX);
+ 			goto err_out;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 019ab99e65bb..1d8d6f2ddfd6 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -2590,8 +2590,6 @@ static int stmmac_open(struct net_device *dev)
+ 	u32 chan;
+ 	int ret;
+ 
+-	stmmac_check_ether_addr(priv);
+-
+ 	if (priv->hw->pcs != STMMAC_PCS_RGMII &&
+ 	    priv->hw->pcs != STMMAC_PCS_TBI &&
+ 	    priv->hw->pcs != STMMAC_PCS_RTBI) {
+@@ -4265,6 +4263,8 @@ int stmmac_dvr_probe(struct device *device,
+ 	if (ret)
+ 		goto error_hw_init;
+ 
++	stmmac_check_ether_addr(priv);
++
+ 	/* Configure real RX and TX queues */
+ 	netif_set_real_num_rx_queues(ndev, priv->plat->rx_queues_to_use);
+ 	netif_set_real_num_tx_queues(ndev, priv->plat->tx_queues_to_use);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c
+index d819e8eaba12..cc1e887e47b5 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c
+@@ -159,6 +159,12 @@ static const struct dmi_system_id quark_pci_dmi[] = {
+ 		},
+ 		.driver_data = (void *)&galileo_stmmac_dmi_data,
+ 	},
++	/*
++	 * There are 2 types of SIMATIC IOT2000: IOT20202 and IOT2040.
++	 * The asset tag "6ES7647-0AA00-0YA2" is only for IOT2020 which
++	 * has only one pci network device while other asset tags are
++	 * for IOT2040 which has two.
++	 */
+ 	{
+ 		.matches = {
+ 			DMI_EXACT_MATCH(DMI_BOARD_NAME, "SIMATIC IOT2000"),
+@@ -170,8 +176,6 @@ static const struct dmi_system_id quark_pci_dmi[] = {
+ 	{
+ 		.matches = {
+ 			DMI_EXACT_MATCH(DMI_BOARD_NAME, "SIMATIC IOT2000"),
+-			DMI_EXACT_MATCH(DMI_BOARD_ASSET_TAG,
+-					"6ES7647-0AA00-1YA2"),
+ 		},
+ 		.driver_data = (void *)&iot2040_stmmac_dmi_data,
+ 	},
+diff --git a/drivers/net/slip/slhc.c b/drivers/net/slip/slhc.c
+index f4e93f5fc204..ea90db3c7705 100644
+--- a/drivers/net/slip/slhc.c
++++ b/drivers/net/slip/slhc.c
+@@ -153,7 +153,7 @@ out_fail:
+ void
+ slhc_free(struct slcompress *comp)
+ {
+-	if ( comp == NULLSLCOMPR )
++	if ( IS_ERR_OR_NULL(comp) )
+ 		return;
+ 
+ 	if ( comp->tstate != NULLSLSTATE )
+diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c
+index 1283632091d5..7dcda9364009 100644
+--- a/drivers/net/team/team.c
++++ b/drivers/net/team/team.c
+@@ -1157,6 +1157,13 @@ static int team_port_add(struct team *team, struct net_device *port_dev,
+ 		return -EINVAL;
+ 	}
+ 
++	if (netdev_has_upper_dev(dev, port_dev)) {
++		NL_SET_ERR_MSG(extack, "Device is already an upper device of the team interface");
++		netdev_err(dev, "Device %s is already an upper device of the team interface\n",
++			   portname);
++		return -EBUSY;
++	}
++
+ 	if (port_dev->features & NETIF_F_VLAN_CHALLENGED &&
+ 	    vlan_uses_dev(dev)) {
+ 		NL_SET_ERR_MSG(extack, "Device is VLAN challenged and team device has VLAN set up");
+diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c
+index 6359053bd0c7..862fd2b92d12 100644
+--- a/drivers/net/wireless/mac80211_hwsim.c
++++ b/drivers/net/wireless/mac80211_hwsim.c
+@@ -2642,7 +2642,7 @@ static int mac80211_hwsim_new_radio(struct genl_info *info,
+ 	enum nl80211_band band;
+ 	const struct ieee80211_ops *ops = &mac80211_hwsim_ops;
+ 	struct net *net;
+-	int idx;
++	int idx, i;
+ 	int n_limits = 0;
+ 
+ 	if (WARN_ON(param->channels > 1 && !param->use_chanctx))
+@@ -2766,12 +2766,23 @@ static int mac80211_hwsim_new_radio(struct genl_info *info,
+ 		goto failed_hw;
+ 	}
+ 
++	data->if_combination.max_interfaces = 0;
++	for (i = 0; i < n_limits; i++)
++		data->if_combination.max_interfaces +=
++			data->if_limits[i].max;
++
+ 	data->if_combination.n_limits = n_limits;
+-	data->if_combination.max_interfaces = 2048;
+ 	data->if_combination.limits = data->if_limits;
+ 
+-	hw->wiphy->iface_combinations = &data->if_combination;
+-	hw->wiphy->n_iface_combinations = 1;
++	/*
++	 * If we actually were asked to support combinations,
++	 * advertise them - if there's only a single thing like
++	 * only IBSS then don't advertise it as combinations.
++	 */
++	if (data->if_combination.max_interfaces > 1) {
++		hw->wiphy->iface_combinations = &data->if_combination;
++		hw->wiphy->n_iface_combinations = 1;
++	}
+ 
+ 	if (param->ciphers) {
+ 		memcpy(data->ciphers, param->ciphers,
+diff --git a/drivers/usb/core/driver.c b/drivers/usb/core/driver.c
+index 53564386ed57..8987cec9549d 100644
+--- a/drivers/usb/core/driver.c
++++ b/drivers/usb/core/driver.c
+@@ -1896,14 +1896,11 @@ int usb_runtime_idle(struct device *dev)
+ 	return -EBUSY;
+ }
+ 
+-int usb_set_usb2_hardware_lpm(struct usb_device *udev, int enable)
++static int usb_set_usb2_hardware_lpm(struct usb_device *udev, int enable)
+ {
+ 	struct usb_hcd *hcd = bus_to_hcd(udev->bus);
+ 	int ret = -EPERM;
+ 
+-	if (enable && !udev->usb2_hw_lpm_allowed)
+-		return 0;
+-
+ 	if (hcd->driver->set_usb2_hw_lpm) {
+ 		ret = hcd->driver->set_usb2_hw_lpm(hcd, udev, enable);
+ 		if (!ret)
+@@ -1913,6 +1910,24 @@ int usb_set_usb2_hardware_lpm(struct usb_device *udev, int enable)
+ 	return ret;
+ }
+ 
++int usb_enable_usb2_hardware_lpm(struct usb_device *udev)
++{
++	if (!udev->usb2_hw_lpm_capable ||
++	    !udev->usb2_hw_lpm_allowed ||
++	    udev->usb2_hw_lpm_enabled)
++		return 0;
++
++	return usb_set_usb2_hardware_lpm(udev, 1);
++}
++
++int usb_disable_usb2_hardware_lpm(struct usb_device *udev)
++{
++	if (!udev->usb2_hw_lpm_enabled)
++		return 0;
++
++	return usb_set_usb2_hardware_lpm(udev, 0);
++}
++
+ #endif /* CONFIG_PM */
+ 
+ struct bus_type usb_bus_type = {
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 1d1e61e980f3..55c87be5764c 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -3220,8 +3220,7 @@ int usb_port_suspend(struct usb_device *udev, pm_message_t msg)
+ 	}
+ 
+ 	/* disable USB2 hardware LPM */
+-	if (udev->usb2_hw_lpm_enabled == 1)
+-		usb_set_usb2_hardware_lpm(udev, 0);
++	usb_disable_usb2_hardware_lpm(udev);
+ 
+ 	if (usb_disable_ltm(udev)) {
+ 		dev_err(&udev->dev, "Failed to disable LTM before suspend\n");
+@@ -3259,8 +3258,7 @@ int usb_port_suspend(struct usb_device *udev, pm_message_t msg)
+ 		usb_enable_ltm(udev);
+  err_ltm:
+ 		/* Try to enable USB2 hardware LPM again */
+-		if (udev->usb2_hw_lpm_capable == 1)
+-			usb_set_usb2_hardware_lpm(udev, 1);
++		usb_enable_usb2_hardware_lpm(udev);
+ 
+ 		if (udev->do_remote_wakeup)
+ 			(void) usb_disable_remote_wakeup(udev);
+@@ -3543,8 +3541,7 @@ int usb_port_resume(struct usb_device *udev, pm_message_t msg)
+ 		hub_port_logical_disconnect(hub, port1);
+ 	} else  {
+ 		/* Try to enable USB2 hardware LPM */
+-		if (udev->usb2_hw_lpm_capable == 1)
+-			usb_set_usb2_hardware_lpm(udev, 1);
++		usb_enable_usb2_hardware_lpm(udev);
+ 
+ 		/* Try to enable USB3 LTM */
+ 		usb_enable_ltm(udev);
+@@ -4435,7 +4432,7 @@ static void hub_set_initial_usb2_lpm_policy(struct usb_device *udev)
+ 	if ((udev->bos->ext_cap->bmAttributes & cpu_to_le32(USB_BESL_SUPPORT)) ||
+ 			connect_type == USB_PORT_CONNECT_TYPE_HARD_WIRED) {
+ 		udev->usb2_hw_lpm_allowed = 1;
+-		usb_set_usb2_hardware_lpm(udev, 1);
++		usb_enable_usb2_hardware_lpm(udev);
+ 	}
+ }
+ 
+@@ -5649,8 +5646,7 @@ static int usb_reset_and_verify_device(struct usb_device *udev)
+ 	/* Disable USB2 hardware LPM.
+ 	 * It will be re-enabled by the enumeration process.
+ 	 */
+-	if (udev->usb2_hw_lpm_enabled == 1)
+-		usb_set_usb2_hardware_lpm(udev, 0);
++	usb_disable_usb2_hardware_lpm(udev);
+ 
+ 	/* Disable LPM while we reset the device and reinstall the alt settings.
+ 	 * Device-initiated LPM, and system exit latency settings are cleared
+@@ -5753,7 +5749,7 @@ static int usb_reset_and_verify_device(struct usb_device *udev)
+ 
+ done:
+ 	/* Now that the alt settings are re-installed, enable LTM and LPM. */
+-	usb_set_usb2_hardware_lpm(udev, 1);
++	usb_enable_usb2_hardware_lpm(udev);
+ 	usb_unlocked_enable_lpm(udev);
+ 	usb_enable_ltm(udev);
+ 	usb_release_bos_descriptor(udev);
+diff --git a/drivers/usb/core/message.c b/drivers/usb/core/message.c
+index bfa5eda0cc26..4f33eb632a88 100644
+--- a/drivers/usb/core/message.c
++++ b/drivers/usb/core/message.c
+@@ -1243,8 +1243,7 @@ void usb_disable_device(struct usb_device *dev, int skip_ep0)
+ 			dev->actconfig->interface[i] = NULL;
+ 		}
+ 
+-		if (dev->usb2_hw_lpm_enabled == 1)
+-			usb_set_usb2_hardware_lpm(dev, 0);
++		usb_disable_usb2_hardware_lpm(dev);
+ 		usb_unlocked_disable_lpm(dev);
+ 		usb_disable_ltm(dev);
+ 
+diff --git a/drivers/usb/core/sysfs.c b/drivers/usb/core/sysfs.c
+index ea18284dfa9a..7e88fdfe3cf5 100644
+--- a/drivers/usb/core/sysfs.c
++++ b/drivers/usb/core/sysfs.c
+@@ -528,7 +528,10 @@ static ssize_t usb2_hardware_lpm_store(struct device *dev,
+ 
+ 	if (!ret) {
+ 		udev->usb2_hw_lpm_allowed = value;
+-		ret = usb_set_usb2_hardware_lpm(udev, value);
++		if (value)
++			ret = usb_enable_usb2_hardware_lpm(udev);
++		else
++			ret = usb_disable_usb2_hardware_lpm(udev);
+ 	}
+ 
+ 	usb_unlock_device(udev);
+diff --git a/drivers/usb/core/usb.h b/drivers/usb/core/usb.h
+index 546a2219454b..d95a5358f73d 100644
+--- a/drivers/usb/core/usb.h
++++ b/drivers/usb/core/usb.h
+@@ -92,7 +92,8 @@ extern int usb_remote_wakeup(struct usb_device *dev);
+ extern int usb_runtime_suspend(struct device *dev);
+ extern int usb_runtime_resume(struct device *dev);
+ extern int usb_runtime_idle(struct device *dev);
+-extern int usb_set_usb2_hardware_lpm(struct usb_device *udev, int enable);
++extern int usb_enable_usb2_hardware_lpm(struct usb_device *udev);
++extern int usb_disable_usb2_hardware_lpm(struct usb_device *udev);
+ 
+ #else
+ 
+@@ -112,7 +113,12 @@ static inline int usb_autoresume_device(struct usb_device *udev)
+ 	return 0;
+ }
+ 
+-static inline int usb_set_usb2_hardware_lpm(struct usb_device *udev, int enable)
++static inline int usb_enable_usb2_hardware_lpm(struct usb_device *udev)
++{
++	return 0;
++}
++
++static inline int usb_disable_usb2_hardware_lpm(struct usb_device *udev)
+ {
+ 	return 0;
+ }
+diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
+index 73652e21efec..d0f731c9920a 100644
+--- a/drivers/vfio/vfio_iommu_type1.c
++++ b/drivers/vfio/vfio_iommu_type1.c
+@@ -58,12 +58,18 @@ module_param_named(disable_hugepages,
+ MODULE_PARM_DESC(disable_hugepages,
+ 		 "Disable VFIO IOMMU support for IOMMU hugepages.");
+ 
++static unsigned int dma_entry_limit __read_mostly = U16_MAX;
++module_param_named(dma_entry_limit, dma_entry_limit, uint, 0644);
++MODULE_PARM_DESC(dma_entry_limit,
++		 "Maximum number of user DMA mappings per container (65535).");
++
+ struct vfio_iommu {
+ 	struct list_head	domain_list;
+ 	struct vfio_domain	*external_domain; /* domain for external user */
+ 	struct mutex		lock;
+ 	struct rb_root		dma_list;
+ 	struct blocking_notifier_head notifier;
++	unsigned int		dma_avail;
+ 	bool			v2;
+ 	bool			nesting;
+ };
+@@ -836,6 +842,7 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
+ 	vfio_unlink_dma(iommu, dma);
+ 	put_task_struct(dma->task);
+ 	kfree(dma);
++	iommu->dma_avail++;
+ }
+ 
+ static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
+@@ -1081,12 +1088,18 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
+ 		goto out_unlock;
+ 	}
+ 
++	if (!iommu->dma_avail) {
++		ret = -ENOSPC;
++		goto out_unlock;
++	}
++
+ 	dma = kzalloc(sizeof(*dma), GFP_KERNEL);
+ 	if (!dma) {
+ 		ret = -ENOMEM;
+ 		goto out_unlock;
+ 	}
+ 
++	iommu->dma_avail--;
+ 	dma->iova = iova;
+ 	dma->vaddr = vaddr;
+ 	dma->prot = prot;
+@@ -1583,6 +1596,7 @@ static void *vfio_iommu_type1_open(unsigned long arg)
+ 
+ 	INIT_LIST_HEAD(&iommu->domain_list);
+ 	iommu->dma_list = RB_ROOT;
++	iommu->dma_avail = dma_entry_limit;
+ 	mutex_init(&iommu->lock);
+ 	BLOCKING_INIT_NOTIFIER_HEAD(&iommu->notifier);
+ 
+diff --git a/fs/aio.c b/fs/aio.c
+index 3d9669d011b9..efa13410e04e 100644
+--- a/fs/aio.c
++++ b/fs/aio.c
+@@ -181,7 +181,7 @@ struct poll_iocb {
+ 	struct file		*file;
+ 	struct wait_queue_head	*head;
+ 	__poll_t		events;
+-	bool			woken;
++	bool			done;
+ 	bool			cancelled;
+ 	struct wait_queue_entry	wait;
+ 	struct work_struct	work;
+@@ -204,8 +204,7 @@ struct aio_kiocb {
+ 	struct kioctx		*ki_ctx;
+ 	kiocb_cancel_fn		*ki_cancel;
+ 
+-	struct iocb __user	*ki_user_iocb;	/* user's aiocb */
+-	__u64			ki_user_data;	/* user's data for completion */
++	struct io_event		ki_res;
+ 
+ 	struct list_head	ki_list;	/* the aio core uses this
+ 						 * for cancellation */
+@@ -1022,6 +1021,9 @@ static bool get_reqs_available(struct kioctx *ctx)
+ /* aio_get_req
+  *	Allocate a slot for an aio request.
+  * Returns NULL if no requests are free.
++ *
++ * The refcount is initialized to 2 - one for the async op completion,
++ * one for the synchronous code that does this.
+  */
+ static inline struct aio_kiocb *aio_get_req(struct kioctx *ctx)
+ {
+@@ -1034,7 +1036,7 @@ static inline struct aio_kiocb *aio_get_req(struct kioctx *ctx)
+ 	percpu_ref_get(&ctx->reqs);
+ 	req->ki_ctx = ctx;
+ 	INIT_LIST_HEAD(&req->ki_list);
+-	refcount_set(&req->ki_refcnt, 0);
++	refcount_set(&req->ki_refcnt, 2);
+ 	req->ki_eventfd = NULL;
+ 	return req;
+ }
+@@ -1067,30 +1069,18 @@ out:
+ 	return ret;
+ }
+ 
+-static inline void iocb_put(struct aio_kiocb *iocb)
++static inline void iocb_destroy(struct aio_kiocb *iocb)
+ {
+-	if (refcount_read(&iocb->ki_refcnt) == 0 ||
+-	    refcount_dec_and_test(&iocb->ki_refcnt)) {
+-		if (iocb->ki_filp)
+-			fput(iocb->ki_filp);
+-		percpu_ref_put(&iocb->ki_ctx->reqs);
+-		kmem_cache_free(kiocb_cachep, iocb);
+-	}
+-}
+-
+-static void aio_fill_event(struct io_event *ev, struct aio_kiocb *iocb,
+-			   long res, long res2)
+-{
+-	ev->obj = (u64)(unsigned long)iocb->ki_user_iocb;
+-	ev->data = iocb->ki_user_data;
+-	ev->res = res;
+-	ev->res2 = res2;
++	if (iocb->ki_filp)
++		fput(iocb->ki_filp);
++	percpu_ref_put(&iocb->ki_ctx->reqs);
++	kmem_cache_free(kiocb_cachep, iocb);
+ }
+ 
+ /* aio_complete
+  *	Called when the io request on the given iocb is complete.
+  */
+-static void aio_complete(struct aio_kiocb *iocb, long res, long res2)
++static void aio_complete(struct aio_kiocb *iocb)
+ {
+ 	struct kioctx	*ctx = iocb->ki_ctx;
+ 	struct aio_ring	*ring;
+@@ -1114,14 +1104,14 @@ static void aio_complete(struct aio_kiocb *iocb, long res, long res2)
+ 	ev_page = kmap_atomic(ctx->ring_pages[pos / AIO_EVENTS_PER_PAGE]);
+ 	event = ev_page + pos % AIO_EVENTS_PER_PAGE;
+ 
+-	aio_fill_event(event, iocb, res, res2);
++	*event = iocb->ki_res;
+ 
+ 	kunmap_atomic(ev_page);
+ 	flush_dcache_page(ctx->ring_pages[pos / AIO_EVENTS_PER_PAGE]);
+ 
+-	pr_debug("%p[%u]: %p: %p %Lx %lx %lx\n",
+-		 ctx, tail, iocb, iocb->ki_user_iocb, iocb->ki_user_data,
+-		 res, res2);
++	pr_debug("%p[%u]: %p: %p %Lx %Lx %Lx\n", ctx, tail, iocb,
++		 (void __user *)(unsigned long)iocb->ki_res.obj,
++		 iocb->ki_res.data, iocb->ki_res.res, iocb->ki_res.res2);
+ 
+ 	/* after flagging the request as done, we
+ 	 * must never even look at it again
+@@ -1163,7 +1153,14 @@ static void aio_complete(struct aio_kiocb *iocb, long res, long res2)
+ 
+ 	if (waitqueue_active(&ctx->wait))
+ 		wake_up(&ctx->wait);
+-	iocb_put(iocb);
++}
++
++static inline void iocb_put(struct aio_kiocb *iocb)
++{
++	if (refcount_dec_and_test(&iocb->ki_refcnt)) {
++		aio_complete(iocb);
++		iocb_destroy(iocb);
++	}
+ }
+ 
+ /* aio_read_events_ring
+@@ -1437,7 +1434,9 @@ static void aio_complete_rw(struct kiocb *kiocb, long res, long res2)
+ 		file_end_write(kiocb->ki_filp);
+ 	}
+ 
+-	aio_complete(iocb, res, res2);
++	iocb->ki_res.res = res;
++	iocb->ki_res.res2 = res2;
++	iocb_put(iocb);
+ }
+ 
+ static int aio_prep_rw(struct kiocb *req, const struct iocb *iocb)
+@@ -1585,11 +1584,10 @@ static ssize_t aio_write(struct kiocb *req, const struct iocb *iocb,
+ 
+ static void aio_fsync_work(struct work_struct *work)
+ {
+-	struct fsync_iocb *req = container_of(work, struct fsync_iocb, work);
+-	int ret;
++	struct aio_kiocb *iocb = container_of(work, struct aio_kiocb, fsync.work);
+ 
+-	ret = vfs_fsync(req->file, req->datasync);
+-	aio_complete(container_of(req, struct aio_kiocb, fsync), ret, 0);
++	iocb->ki_res.res = vfs_fsync(iocb->fsync.file, iocb->fsync.datasync);
++	iocb_put(iocb);
+ }
+ 
+ static int aio_fsync(struct fsync_iocb *req, const struct iocb *iocb,
+@@ -1608,11 +1606,6 @@ static int aio_fsync(struct fsync_iocb *req, const struct iocb *iocb,
+ 	return 0;
+ }
+ 
+-static inline void aio_poll_complete(struct aio_kiocb *iocb, __poll_t mask)
+-{
+-	aio_complete(iocb, mangle_poll(mask), 0);
+-}
+-
+ static void aio_poll_complete_work(struct work_struct *work)
+ {
+ 	struct poll_iocb *req = container_of(work, struct poll_iocb, work);
+@@ -1638,9 +1631,11 @@ static void aio_poll_complete_work(struct work_struct *work)
+ 		return;
+ 	}
+ 	list_del_init(&iocb->ki_list);
++	iocb->ki_res.res = mangle_poll(mask);
++	req->done = true;
+ 	spin_unlock_irq(&ctx->ctx_lock);
+ 
+-	aio_poll_complete(iocb, mask);
++	iocb_put(iocb);
+ }
+ 
+ /* assumes we are called with irqs disabled */
+@@ -1668,31 +1663,27 @@ static int aio_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
+ 	__poll_t mask = key_to_poll(key);
+ 	unsigned long flags;
+ 
+-	req->woken = true;
+-
+ 	/* for instances that support it check for an event match first: */
+-	if (mask) {
+-		if (!(mask & req->events))
+-			return 0;
++	if (mask && !(mask & req->events))
++		return 0;
++
++	list_del_init(&req->wait.entry);
+ 
++	if (mask && spin_trylock_irqsave(&iocb->ki_ctx->ctx_lock, flags)) {
+ 		/*
+ 		 * Try to complete the iocb inline if we can. Use
+ 		 * irqsave/irqrestore because not all filesystems (e.g. fuse)
+ 		 * call this function with IRQs disabled and because IRQs
+ 		 * have to be disabled before ctx_lock is obtained.
+ 		 */
+-		if (spin_trylock_irqsave(&iocb->ki_ctx->ctx_lock, flags)) {
+-			list_del(&iocb->ki_list);
+-			spin_unlock_irqrestore(&iocb->ki_ctx->ctx_lock, flags);
+-
+-			list_del_init(&req->wait.entry);
+-			aio_poll_complete(iocb, mask);
+-			return 1;
+-		}
++		list_del(&iocb->ki_list);
++		iocb->ki_res.res = mangle_poll(mask);
++		req->done = true;
++		spin_unlock_irqrestore(&iocb->ki_ctx->ctx_lock, flags);
++		iocb_put(iocb);
++	} else {
++		schedule_work(&req->work);
+ 	}
+-
+-	list_del_init(&req->wait.entry);
+-	schedule_work(&req->work);
+ 	return 1;
+ }
+ 
+@@ -1724,6 +1715,7 @@ static ssize_t aio_poll(struct aio_kiocb *aiocb, const struct iocb *iocb)
+ 	struct kioctx *ctx = aiocb->ki_ctx;
+ 	struct poll_iocb *req = &aiocb->poll;
+ 	struct aio_poll_table apt;
++	bool cancel = false;
+ 	__poll_t mask;
+ 
+ 	/* reject any unknown events outside the normal event mask. */
+@@ -1737,7 +1729,7 @@ static ssize_t aio_poll(struct aio_kiocb *aiocb, const struct iocb *iocb)
+ 	req->events = demangle_poll(iocb->aio_buf) | EPOLLERR | EPOLLHUP;
+ 
+ 	req->head = NULL;
+-	req->woken = false;
++	req->done = false;
+ 	req->cancelled = false;
+ 
+ 	apt.pt._qproc = aio_poll_queue_proc;
+@@ -1749,41 +1741,34 @@ static ssize_t aio_poll(struct aio_kiocb *aiocb, const struct iocb *iocb)
+ 	INIT_LIST_HEAD(&req->wait.entry);
+ 	init_waitqueue_func_entry(&req->wait, aio_poll_wake);
+ 
+-	/* one for removal from waitqueue, one for this function */
+-	refcount_set(&aiocb->ki_refcnt, 2);
+-
+ 	mask = vfs_poll(req->file, &apt.pt) & req->events;
+-	if (unlikely(!req->head)) {
+-		/* we did not manage to set up a waitqueue, done */
+-		goto out;
+-	}
+-
+ 	spin_lock_irq(&ctx->ctx_lock);
+-	spin_lock(&req->head->lock);
+-	if (req->woken) {
+-		/* wake_up context handles the rest */
+-		mask = 0;
++	if (likely(req->head)) {
++		spin_lock(&req->head->lock);
++		if (unlikely(list_empty(&req->wait.entry))) {
++			if (apt.error)
++				cancel = true;
++			apt.error = 0;
++			mask = 0;
++		}
++		if (mask || apt.error) {
++			list_del_init(&req->wait.entry);
++		} else if (cancel) {
++			WRITE_ONCE(req->cancelled, true);
++		} else if (!req->done) { /* actually waiting for an event */
++			list_add_tail(&aiocb->ki_list, &ctx->active_reqs);
++			aiocb->ki_cancel = aio_poll_cancel;
++		}
++		spin_unlock(&req->head->lock);
++	}
++	if (mask) { /* no async, we'd stolen it */
++		aiocb->ki_res.res = mangle_poll(mask);
+ 		apt.error = 0;
+-	} else if (mask || apt.error) {
+-		/* if we get an error or a mask we are done */
+-		WARN_ON_ONCE(list_empty(&req->wait.entry));
+-		list_del_init(&req->wait.entry);
+-	} else {
+-		/* actually waiting for an event */
+-		list_add_tail(&aiocb->ki_list, &ctx->active_reqs);
+-		aiocb->ki_cancel = aio_poll_cancel;
+ 	}
+-	spin_unlock(&req->head->lock);
+ 	spin_unlock_irq(&ctx->ctx_lock);
+-
+-out:
+-	if (unlikely(apt.error))
+-		return apt.error;
+-
+ 	if (mask)
+-		aio_poll_complete(aiocb, mask);
+-	iocb_put(aiocb);
+-	return 0;
++		iocb_put(aiocb);
++	return apt.error;
+ }
+ 
+ static int __io_submit_one(struct kioctx *ctx, const struct iocb *iocb,
+@@ -1842,8 +1827,10 @@ static int __io_submit_one(struct kioctx *ctx, const struct iocb *iocb,
+ 		goto out_put_req;
+ 	}
+ 
+-	req->ki_user_iocb = user_iocb;
+-	req->ki_user_data = iocb->aio_data;
++	req->ki_res.obj = (u64)(unsigned long)user_iocb;
++	req->ki_res.data = iocb->aio_data;
++	req->ki_res.res = 0;
++	req->ki_res.res2 = 0;
+ 
+ 	switch (iocb->aio_lio_opcode) {
+ 	case IOCB_CMD_PREAD:
+@@ -1873,18 +1860,21 @@ static int __io_submit_one(struct kioctx *ctx, const struct iocb *iocb,
+ 		break;
+ 	}
+ 
++	/* Done with the synchronous reference */
++	iocb_put(req);
++
+ 	/*
+ 	 * If ret is 0, we'd either done aio_complete() ourselves or have
+ 	 * arranged for that to be done asynchronously.  Anything non-zero
+ 	 * means that we need to destroy req ourselves.
+ 	 */
+-	if (ret)
+-		goto out_put_req;
+-	return 0;
++	if (!ret)
++		return 0;
++
+ out_put_req:
+ 	if (req->ki_eventfd)
+ 		eventfd_ctx_put(req->ki_eventfd);
+-	iocb_put(req);
++	iocb_destroy(req);
+ out_put_reqs_available:
+ 	put_reqs_available(ctx, 1);
+ 	return ret;
+@@ -1997,24 +1987,6 @@ COMPAT_SYSCALL_DEFINE3(io_submit, compat_aio_context_t, ctx_id,
+ }
+ #endif
+ 
+-/* lookup_kiocb
+- *	Finds a given iocb for cancellation.
+- */
+-static struct aio_kiocb *
+-lookup_kiocb(struct kioctx *ctx, struct iocb __user *iocb)
+-{
+-	struct aio_kiocb *kiocb;
+-
+-	assert_spin_locked(&ctx->ctx_lock);
+-
+-	/* TODO: use a hash or array, this sucks. */
+-	list_for_each_entry(kiocb, &ctx->active_reqs, ki_list) {
+-		if (kiocb->ki_user_iocb == iocb)
+-			return kiocb;
+-	}
+-	return NULL;
+-}
+-
+ /* sys_io_cancel:
+  *	Attempts to cancel an iocb previously passed to io_submit.  If
+  *	the operation is successfully cancelled, the resulting event is
+@@ -2032,6 +2004,7 @@ SYSCALL_DEFINE3(io_cancel, aio_context_t, ctx_id, struct iocb __user *, iocb,
+ 	struct aio_kiocb *kiocb;
+ 	int ret = -EINVAL;
+ 	u32 key;
++	u64 obj = (u64)(unsigned long)iocb;
+ 
+ 	if (unlikely(get_user(key, &iocb->aio_key)))
+ 		return -EFAULT;
+@@ -2043,10 +2016,13 @@ SYSCALL_DEFINE3(io_cancel, aio_context_t, ctx_id, struct iocb __user *, iocb,
+ 		return -EINVAL;
+ 
+ 	spin_lock_irq(&ctx->ctx_lock);
+-	kiocb = lookup_kiocb(ctx, iocb);
+-	if (kiocb) {
+-		ret = kiocb->ki_cancel(&kiocb->rw);
+-		list_del_init(&kiocb->ki_list);
++	/* TODO: use a hash or array, this sucks. */
++	list_for_each_entry(kiocb, &ctx->active_reqs, ki_list) {
++		if (kiocb->ki_res.obj == obj) {
++			ret = kiocb->ki_cancel(&kiocb->rw);
++			list_del_init(&kiocb->ki_list);
++			break;
++		}
+ 	}
+ 	spin_unlock_irq(&ctx->ctx_lock);
+ 
+diff --git a/fs/ceph/dir.c b/fs/ceph/dir.c
+index 82928cea0209..7f3f64ba464f 100644
+--- a/fs/ceph/dir.c
++++ b/fs/ceph/dir.c
+@@ -1470,6 +1470,7 @@ void ceph_dentry_lru_del(struct dentry *dn)
+ unsigned ceph_dentry_hash(struct inode *dir, struct dentry *dn)
+ {
+ 	struct ceph_inode_info *dci = ceph_inode(dir);
++	unsigned hash;
+ 
+ 	switch (dci->i_dir_layout.dl_dir_hash) {
+ 	case 0:	/* for backward compat */
+@@ -1477,8 +1478,11 @@ unsigned ceph_dentry_hash(struct inode *dir, struct dentry *dn)
+ 		return dn->d_name.hash;
+ 
+ 	default:
+-		return ceph_str_hash(dci->i_dir_layout.dl_dir_hash,
++		spin_lock(&dn->d_lock);
++		hash = ceph_str_hash(dci->i_dir_layout.dl_dir_hash,
+ 				     dn->d_name.name, dn->d_name.len);
++		spin_unlock(&dn->d_lock);
++		return hash;
+ 	}
+ }
+ 
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index 163fc74bf221..5cec784e30f6 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -1286,6 +1286,15 @@ static int remove_session_caps_cb(struct inode *inode, struct ceph_cap *cap,
+ 			list_add(&ci->i_prealloc_cap_flush->i_list, &to_remove);
+ 			ci->i_prealloc_cap_flush = NULL;
+ 		}
++
++		if (drop &&
++		    ci->i_wrbuffer_ref_head == 0 &&
++		    ci->i_wr_ref == 0 &&
++		    ci->i_dirty_caps == 0 &&
++		    ci->i_flushing_caps == 0) {
++			ceph_put_snap_context(ci->i_head_snapc);
++			ci->i_head_snapc = NULL;
++		}
+ 	}
+ 	spin_unlock(&ci->i_ceph_lock);
+ 	while (!list_empty(&to_remove)) {
+@@ -1958,10 +1967,39 @@ retry:
+ 	return path;
+ }
+ 
++/* Duplicate the dentry->d_name.name safely */
++static int clone_dentry_name(struct dentry *dentry, const char **ppath,
++			     int *ppathlen)
++{
++	u32 len;
++	char *name;
++
++retry:
++	len = READ_ONCE(dentry->d_name.len);
++	name = kmalloc(len + 1, GFP_NOFS);
++	if (!name)
++		return -ENOMEM;
++
++	spin_lock(&dentry->d_lock);
++	if (dentry->d_name.len != len) {
++		spin_unlock(&dentry->d_lock);
++		kfree(name);
++		goto retry;
++	}
++	memcpy(name, dentry->d_name.name, len);
++	spin_unlock(&dentry->d_lock);
++
++	name[len] = '\0';
++	*ppath = name;
++	*ppathlen = len;
++	return 0;
++}
++
+ static int build_dentry_path(struct dentry *dentry, struct inode *dir,
+ 			     const char **ppath, int *ppathlen, u64 *pino,
+-			     int *pfreepath)
++			     bool *pfreepath, bool parent_locked)
+ {
++	int ret;
+ 	char *path;
+ 
+ 	rcu_read_lock();
+@@ -1970,8 +2008,15 @@ static int build_dentry_path(struct dentry *dentry, struct inode *dir,
+ 	if (dir && ceph_snap(dir) == CEPH_NOSNAP) {
+ 		*pino = ceph_ino(dir);
+ 		rcu_read_unlock();
+-		*ppath = dentry->d_name.name;
+-		*ppathlen = dentry->d_name.len;
++		if (parent_locked) {
++			*ppath = dentry->d_name.name;
++			*ppathlen = dentry->d_name.len;
++		} else {
++			ret = clone_dentry_name(dentry, ppath, ppathlen);
++			if (ret)
++				return ret;
++			*pfreepath = true;
++		}
+ 		return 0;
+ 	}
+ 	rcu_read_unlock();
+@@ -1979,13 +2024,13 @@ static int build_dentry_path(struct dentry *dentry, struct inode *dir,
+ 	if (IS_ERR(path))
+ 		return PTR_ERR(path);
+ 	*ppath = path;
+-	*pfreepath = 1;
++	*pfreepath = true;
+ 	return 0;
+ }
+ 
+ static int build_inode_path(struct inode *inode,
+ 			    const char **ppath, int *ppathlen, u64 *pino,
+-			    int *pfreepath)
++			    bool *pfreepath)
+ {
+ 	struct dentry *dentry;
+ 	char *path;
+@@ -2001,7 +2046,7 @@ static int build_inode_path(struct inode *inode,
+ 	if (IS_ERR(path))
+ 		return PTR_ERR(path);
+ 	*ppath = path;
+-	*pfreepath = 1;
++	*pfreepath = true;
+ 	return 0;
+ }
+ 
+@@ -2012,7 +2057,7 @@ static int build_inode_path(struct inode *inode,
+ static int set_request_path_attr(struct inode *rinode, struct dentry *rdentry,
+ 				  struct inode *rdiri, const char *rpath,
+ 				  u64 rino, const char **ppath, int *pathlen,
+-				  u64 *ino, int *freepath)
++				  u64 *ino, bool *freepath, bool parent_locked)
+ {
+ 	int r = 0;
+ 
+@@ -2022,7 +2067,7 @@ static int set_request_path_attr(struct inode *rinode, struct dentry *rdentry,
+ 		     ceph_snap(rinode));
+ 	} else if (rdentry) {
+ 		r = build_dentry_path(rdentry, rdiri, ppath, pathlen, ino,
+-					freepath);
++					freepath, parent_locked);
+ 		dout(" dentry %p %llx/%.*s\n", rdentry, *ino, *pathlen,
+ 		     *ppath);
+ 	} else if (rpath || rino) {
+@@ -2048,7 +2093,7 @@ static struct ceph_msg *create_request_message(struct ceph_mds_client *mdsc,
+ 	const char *path2 = NULL;
+ 	u64 ino1 = 0, ino2 = 0;
+ 	int pathlen1 = 0, pathlen2 = 0;
+-	int freepath1 = 0, freepath2 = 0;
++	bool freepath1 = false, freepath2 = false;
+ 	int len;
+ 	u16 releases;
+ 	void *p, *end;
+@@ -2056,16 +2101,19 @@ static struct ceph_msg *create_request_message(struct ceph_mds_client *mdsc,
+ 
+ 	ret = set_request_path_attr(req->r_inode, req->r_dentry,
+ 			      req->r_parent, req->r_path1, req->r_ino1.ino,
+-			      &path1, &pathlen1, &ino1, &freepath1);
++			      &path1, &pathlen1, &ino1, &freepath1,
++			      test_bit(CEPH_MDS_R_PARENT_LOCKED,
++					&req->r_req_flags));
+ 	if (ret < 0) {
+ 		msg = ERR_PTR(ret);
+ 		goto out;
+ 	}
+ 
++	/* If r_old_dentry is set, then assume that its parent is locked */
+ 	ret = set_request_path_attr(NULL, req->r_old_dentry,
+ 			      req->r_old_dentry_dir,
+ 			      req->r_path2, req->r_ino2.ino,
+-			      &path2, &pathlen2, &ino2, &freepath2);
++			      &path2, &pathlen2, &ino2, &freepath2, true);
+ 	if (ret < 0) {
+ 		msg = ERR_PTR(ret);
+ 		goto out_free1;
+diff --git a/fs/ceph/snap.c b/fs/ceph/snap.c
+index f74193da0e09..1f46b02f7314 100644
+--- a/fs/ceph/snap.c
++++ b/fs/ceph/snap.c
+@@ -568,7 +568,12 @@ void ceph_queue_cap_snap(struct ceph_inode_info *ci)
+ 	old_snapc = NULL;
+ 
+ update_snapc:
+-	if (ci->i_head_snapc) {
++	if (ci->i_wrbuffer_ref_head == 0 &&
++	    ci->i_wr_ref == 0 &&
++	    ci->i_dirty_caps == 0 &&
++	    ci->i_flushing_caps == 0) {
++		ci->i_head_snapc = NULL;
++	} else {
+ 		ci->i_head_snapc = ceph_get_snap_context(new_snapc);
+ 		dout(" new snapc is %p\n", new_snapc);
+ 	}
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index 7c05353b766c..7c3f9d00586e 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -2796,7 +2796,6 @@ static void collect_uncached_write_data(struct cifs_aio_ctx *ctx)
+ 	struct cifs_tcon *tcon;
+ 	struct cifs_sb_info *cifs_sb;
+ 	struct dentry *dentry = ctx->cfile->dentry;
+-	unsigned int i;
+ 	int rc;
+ 
+ 	tcon = tlink_tcon(ctx->cfile->tlink);
+@@ -2860,10 +2859,6 @@ restart_loop:
+ 		kref_put(&wdata->refcount, cifs_uncached_writedata_release);
+ 	}
+ 
+-	if (!ctx->direct_io)
+-		for (i = 0; i < ctx->npages; i++)
+-			put_page(ctx->bv[i].bv_page);
+-
+ 	cifs_stats_bytes_written(tcon, ctx->total_len);
+ 	set_bit(CIFS_INO_INVALID_MAPPING, &CIFS_I(dentry->d_inode)->flags);
+ 
+@@ -3472,7 +3467,6 @@ collect_uncached_read_data(struct cifs_aio_ctx *ctx)
+ 	struct iov_iter *to = &ctx->iter;
+ 	struct cifs_sb_info *cifs_sb;
+ 	struct cifs_tcon *tcon;
+-	unsigned int i;
+ 	int rc;
+ 
+ 	tcon = tlink_tcon(ctx->cfile->tlink);
+@@ -3556,15 +3550,8 @@ again:
+ 		kref_put(&rdata->refcount, cifs_uncached_readdata_release);
+ 	}
+ 
+-	if (!ctx->direct_io) {
+-		for (i = 0; i < ctx->npages; i++) {
+-			if (ctx->should_dirty)
+-				set_page_dirty(ctx->bv[i].bv_page);
+-			put_page(ctx->bv[i].bv_page);
+-		}
+-
++	if (!ctx->direct_io)
+ 		ctx->total_len = ctx->len - iov_iter_count(to);
+-	}
+ 
+ 	cifs_stats_bytes_read(tcon, ctx->total_len);
+ 
+diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
+index 53fdb5df0d2e..538fd7d807e4 100644
+--- a/fs/cifs/inode.c
++++ b/fs/cifs/inode.c
+@@ -1735,6 +1735,10 @@ cifs_do_rename(const unsigned int xid, struct dentry *from_dentry,
+ 	if (rc == 0 || rc != -EBUSY)
+ 		goto do_rename_exit;
+ 
++	/* Don't fall back to using SMB on SMB 2+ mount */
++	if (server->vals->protocol_id != 0)
++		goto do_rename_exit;
++
+ 	/* open-file renames don't work across directories */
+ 	if (to_dentry->d_parent != from_dentry->d_parent)
+ 		goto do_rename_exit;
+diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c
+index 1e1626a2cfc3..0dc6f08020ac 100644
+--- a/fs/cifs/misc.c
++++ b/fs/cifs/misc.c
+@@ -789,6 +789,11 @@ cifs_aio_ctx_alloc(void)
+ {
+ 	struct cifs_aio_ctx *ctx;
+ 
++	/*
++	 * Must use kzalloc to initialize ctx->bv to NULL and ctx->direct_io
++	 * to false so that we know when we have to unreference pages within
++	 * cifs_aio_ctx_release()
++	 */
+ 	ctx = kzalloc(sizeof(struct cifs_aio_ctx), GFP_KERNEL);
+ 	if (!ctx)
+ 		return NULL;
+@@ -807,7 +812,23 @@ cifs_aio_ctx_release(struct kref *refcount)
+ 					struct cifs_aio_ctx, refcount);
+ 
+ 	cifsFileInfo_put(ctx->cfile);
+-	kvfree(ctx->bv);
++
++	/*
++	 * ctx->bv is only set if setup_aio_ctx_iter() was called successfully,
++	 * which means that iov_iter_get_pages() succeeded and thus that we
++	 * have taken references on the pages.
++	 */
++	if (ctx->bv) {
++		unsigned i;
++
++		for (i = 0; i < ctx->npages; i++) {
++			if (ctx->should_dirty)
++				set_page_dirty(ctx->bv[i].bv_page);
++			put_page(ctx->bv[i].bv_page);
++		}
++		kvfree(ctx->bv);
++	}
++
+ 	kfree(ctx);
+ }
+ 
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 938e75cc3b66..85a3c051e622 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -3402,6 +3402,7 @@ SMB2_read(const unsigned int xid, struct cifs_io_parms *io_parms,
+ 					    rc);
+ 		}
+ 		free_rsp_buf(resp_buftype, rsp_iov.iov_base);
++		cifs_small_buf_release(req);
+ 		return rc == -ENODATA ? 0 : rc;
+ 	} else
+ 		trace_smb3_read_done(xid, req->PersistentFileId,
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index 86ed9c686249..dc82e7757f67 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -829,6 +829,7 @@ int ext4_get_inode_usage(struct inode *inode, qsize_t *usage)
+ 		bh = ext4_sb_bread(inode->i_sb, EXT4_I(inode)->i_file_acl, REQ_PRIO);
+ 		if (IS_ERR(bh)) {
+ 			ret = PTR_ERR(bh);
++			bh = NULL;
+ 			goto out;
+ 		}
+ 
+@@ -2903,6 +2904,7 @@ int ext4_xattr_delete_inode(handle_t *handle, struct inode *inode,
+ 			if (error == -EIO)
+ 				EXT4_ERROR_INODE(inode, "block %llu read error",
+ 						 EXT4_I(inode)->i_file_acl);
++			bh = NULL;
+ 			goto cleanup;
+ 		}
+ 		error = ext4_xattr_check_block(inode, bh);
+@@ -3059,6 +3061,7 @@ ext4_xattr_block_cache_find(struct inode *inode,
+ 		if (IS_ERR(bh)) {
+ 			if (PTR_ERR(bh) == -ENOMEM)
+ 				return NULL;
++			bh = NULL;
+ 			EXT4_ERROR_INODE(inode, "block %lu read error",
+ 					 (unsigned long)ce->e_value);
+ 		} else if (ext4_xattr_cmp(header, BHDR(bh)) == 0) {
+diff --git a/fs/nfs/super.c b/fs/nfs/super.c
+index 0570391eaa16..15c025c1a305 100644
+--- a/fs/nfs/super.c
++++ b/fs/nfs/super.c
+@@ -2041,7 +2041,8 @@ static int nfs23_validate_mount_data(void *options,
+ 		memcpy(sap, &data->addr, sizeof(data->addr));
+ 		args->nfs_server.addrlen = sizeof(data->addr);
+ 		args->nfs_server.port = ntohs(data->addr.sin_port);
+-		if (!nfs_verify_server_address(sap))
++		if (sap->sa_family != AF_INET ||
++		    !nfs_verify_server_address(sap))
+ 			goto out_no_address;
+ 
+ 		if (!(data->flags & NFS_MOUNT_TCP))
+diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
+index c74e4538d0eb..258f741d6a21 100644
+--- a/fs/nfsd/nfs4callback.c
++++ b/fs/nfsd/nfs4callback.c
+@@ -1023,8 +1023,9 @@ static void nfsd4_cb_prepare(struct rpc_task *task, void *calldata)
+ 	cb->cb_seq_status = 1;
+ 	cb->cb_status = 0;
+ 	if (minorversion) {
+-		if (!nfsd41_cb_get_slot(clp, task))
++		if (!cb->cb_holds_slot && !nfsd41_cb_get_slot(clp, task))
+ 			return;
++		cb->cb_holds_slot = true;
+ 	}
+ 	rpc_call_start(task);
+ }
+@@ -1051,6 +1052,9 @@ static bool nfsd4_cb_sequence_done(struct rpc_task *task, struct nfsd4_callback
+ 		return true;
+ 	}
+ 
++	if (!cb->cb_holds_slot)
++		goto need_restart;
++
+ 	switch (cb->cb_seq_status) {
+ 	case 0:
+ 		/*
+@@ -1089,6 +1093,7 @@ static bool nfsd4_cb_sequence_done(struct rpc_task *task, struct nfsd4_callback
+ 			cb->cb_seq_status);
+ 	}
+ 
++	cb->cb_holds_slot = false;
+ 	clear_bit(0, &clp->cl_cb_slot_busy);
+ 	rpc_wake_up_next(&clp->cl_cb_waitq);
+ 	dprintk("%s: freed slot, new seqid=%d\n", __func__,
+@@ -1296,6 +1301,7 @@ void nfsd4_init_cb(struct nfsd4_callback *cb, struct nfs4_client *clp,
+ 	cb->cb_seq_status = 1;
+ 	cb->cb_status = 0;
+ 	cb->cb_need_restart = false;
++	cb->cb_holds_slot = false;
+ }
+ 
+ void nfsd4_run_cb(struct nfsd4_callback *cb)
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 6a45fb00c5fc..f056b1d3fecd 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -265,6 +265,7 @@ find_or_allocate_block(struct nfs4_lockowner *lo, struct knfsd_fh *fh,
+ static void
+ free_blocked_lock(struct nfsd4_blocked_lock *nbl)
+ {
++	locks_delete_block(&nbl->nbl_lock);
+ 	locks_release_private(&nbl->nbl_lock);
+ 	kfree(nbl);
+ }
+@@ -293,11 +294,18 @@ remove_blocked_locks(struct nfs4_lockowner *lo)
+ 		nbl = list_first_entry(&reaplist, struct nfsd4_blocked_lock,
+ 					nbl_lru);
+ 		list_del_init(&nbl->nbl_lru);
+-		locks_delete_block(&nbl->nbl_lock);
+ 		free_blocked_lock(nbl);
+ 	}
+ }
+ 
++static void
++nfsd4_cb_notify_lock_prepare(struct nfsd4_callback *cb)
++{
++	struct nfsd4_blocked_lock	*nbl = container_of(cb,
++						struct nfsd4_blocked_lock, nbl_cb);
++	locks_delete_block(&nbl->nbl_lock);
++}
++
+ static int
+ nfsd4_cb_notify_lock_done(struct nfsd4_callback *cb, struct rpc_task *task)
+ {
+@@ -325,6 +333,7 @@ nfsd4_cb_notify_lock_release(struct nfsd4_callback *cb)
+ }
+ 
+ static const struct nfsd4_callback_ops nfsd4_cb_notify_lock_ops = {
++	.prepare	= nfsd4_cb_notify_lock_prepare,
+ 	.done		= nfsd4_cb_notify_lock_done,
+ 	.release	= nfsd4_cb_notify_lock_release,
+ };
+@@ -4863,7 +4872,6 @@ nfs4_laundromat(struct nfsd_net *nn)
+ 		nbl = list_first_entry(&reaplist,
+ 					struct nfsd4_blocked_lock, nbl_lru);
+ 		list_del_init(&nbl->nbl_lru);
+-		locks_delete_block(&nbl->nbl_lock);
+ 		free_blocked_lock(nbl);
+ 	}
+ out:
+diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
+index 396c76755b03..9d6cb246c6c5 100644
+--- a/fs/nfsd/state.h
++++ b/fs/nfsd/state.h
+@@ -70,6 +70,7 @@ struct nfsd4_callback {
+ 	int cb_seq_status;
+ 	int cb_status;
+ 	bool cb_need_restart;
++	bool cb_holds_slot;
+ };
+ 
+ struct nfsd4_callback_ops {
+diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c
+index d65390727541..7325baa8f9d4 100644
+--- a/fs/proc/proc_sysctl.c
++++ b/fs/proc/proc_sysctl.c
+@@ -1626,9 +1626,11 @@ static void drop_sysctl_table(struct ctl_table_header *header)
+ 	if (--header->nreg)
+ 		return;
+ 
+-	if (parent)
++	if (parent) {
+ 		put_links(header);
+-	start_unregistering(header);
++		start_unregistering(header);
++	}
++
+ 	if (!--header->count)
+ 		kfree_rcu(header, rcu);
+ 
+diff --git a/fs/splice.c b/fs/splice.c
+index 90c29675d573..7da7d5437472 100644
+--- a/fs/splice.c
++++ b/fs/splice.c
+@@ -333,8 +333,8 @@ const struct pipe_buf_operations default_pipe_buf_ops = {
+ 	.get = generic_pipe_buf_get,
+ };
+ 
+-static int generic_pipe_buf_nosteal(struct pipe_inode_info *pipe,
+-				    struct pipe_buffer *buf)
++int generic_pipe_buf_nosteal(struct pipe_inode_info *pipe,
++			     struct pipe_buffer *buf)
+ {
+ 	return 1;
+ }
+diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h
+index 1021106438b2..c80e5833b1d6 100644
+--- a/include/drm/ttm/ttm_bo_driver.h
++++ b/include/drm/ttm/ttm_bo_driver.h
+@@ -411,7 +411,6 @@ extern struct ttm_bo_global {
+ 	/**
+ 	 * Protected by ttm_global_mutex.
+ 	 */
+-	unsigned int use_count;
+ 	struct list_head device_list;
+ 
+ 	/**
+diff --git a/include/linux/etherdevice.h b/include/linux/etherdevice.h
+index 2c0af7b00715..c94ab8b53a23 100644
+--- a/include/linux/etherdevice.h
++++ b/include/linux/etherdevice.h
+@@ -447,6 +447,18 @@ static inline void eth_addr_dec(u8 *addr)
+ 	u64_to_ether_addr(u, addr);
+ }
+ 
++/**
++ * eth_addr_inc() - Increment the given MAC address.
++ * @addr: Pointer to a six-byte array containing Ethernet address to increment.
++ */
++static inline void eth_addr_inc(u8 *addr)
++{
++	u64 u = ether_addr_to_u64(addr);
++
++	u++;
++	u64_to_ether_addr(u, addr);
++}
++
+ /**
+  * is_etherdev_addr - Tell if given Ethernet address belongs to the device.
+  * @dev: Pointer to a device structure
+diff --git a/include/linux/pipe_fs_i.h b/include/linux/pipe_fs_i.h
+index 3ecd7ea212ae..66ee63cd5968 100644
+--- a/include/linux/pipe_fs_i.h
++++ b/include/linux/pipe_fs_i.h
+@@ -181,6 +181,7 @@ void free_pipe_info(struct pipe_inode_info *);
+ void generic_pipe_buf_get(struct pipe_inode_info *, struct pipe_buffer *);
+ int generic_pipe_buf_confirm(struct pipe_inode_info *, struct pipe_buffer *);
+ int generic_pipe_buf_steal(struct pipe_inode_info *, struct pipe_buffer *);
++int generic_pipe_buf_nosteal(struct pipe_inode_info *, struct pipe_buffer *);
+ void generic_pipe_buf_release(struct pipe_inode_info *, struct pipe_buffer *);
+ void pipe_buf_mark_unmergeable(struct pipe_buffer *buf);
+ 
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index 0612439909dc..9e0b9ecb43db 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -382,6 +382,7 @@ void nft_unregister_set(struct nft_set_type *type);
+  * 	@dtype: data type (verdict or numeric type defined by userspace)
+  * 	@objtype: object type (see NFT_OBJECT_* definitions)
+  * 	@size: maximum set size
++ *	@use: number of rules references to this set
+  * 	@nelems: number of elements
+  * 	@ndeact: number of deactivated elements queued for removal
+  *	@timeout: default timeout value in jiffies
+@@ -407,6 +408,7 @@ struct nft_set {
+ 	u32				dtype;
+ 	u32				objtype;
+ 	u32				size;
++	u32				use;
+ 	atomic_t			nelems;
+ 	u32				ndeact;
+ 	u64				timeout;
+@@ -467,6 +469,10 @@ struct nft_set_binding {
+ 	u32				flags;
+ };
+ 
++enum nft_trans_phase;
++void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
++			      struct nft_set_binding *binding,
++			      enum nft_trans_phase phase);
+ int nf_tables_bind_set(const struct nft_ctx *ctx, struct nft_set *set,
+ 		       struct nft_set_binding *binding);
+ void nf_tables_unbind_set(const struct nft_ctx *ctx, struct nft_set *set,
+diff --git a/include/net/netrom.h b/include/net/netrom.h
+index 5a0714ff500f..80f15b1c1a48 100644
+--- a/include/net/netrom.h
++++ b/include/net/netrom.h
+@@ -266,7 +266,7 @@ void nr_stop_idletimer(struct sock *);
+ int nr_t1timer_running(struct sock *);
+ 
+ /* sysctl_net_netrom.c */
+-void nr_register_sysctl(void);
++int nr_register_sysctl(void);
+ void nr_unregister_sysctl(void);
+ 
+ #endif
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index fb8b7b5d745d..451b1f9e80a6 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -252,7 +252,6 @@ static void task_non_contending(struct task_struct *p)
+ 	if (dl_entity_is_special(dl_se))
+ 		return;
+ 
+-	WARN_ON(hrtimer_active(&dl_se->inactive_timer));
+ 	WARN_ON(dl_se->dl_non_contending);
+ 
+ 	zerolag_time = dl_se->deadline -
+@@ -269,7 +268,7 @@ static void task_non_contending(struct task_struct *p)
+ 	 * If the "0-lag time" already passed, decrease the active
+ 	 * utilization now, instead of starting a timer
+ 	 */
+-	if (zerolag_time < 0) {
++	if ((zerolag_time < 0) || hrtimer_active(&dl_se->inactive_timer)) {
+ 		if (dl_task(p))
+ 			sub_running_bw(dl_se, dl_rq);
+ 		if (!dl_task(p) || p->state == TASK_DEAD) {
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index eeb605656d59..be55a64748ba 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -1994,6 +1994,10 @@ static u64 numa_get_avg_runtime(struct task_struct *p, u64 *period)
+ 	if (p->last_task_numa_placement) {
+ 		delta = runtime - p->last_sum_exec_runtime;
+ 		*period = now - p->last_task_numa_placement;
++
++		/* Avoid time going backwards, prevent potential divide error: */
++		if (unlikely((s64)*period < 0))
++			*period = 0;
+ 	} else {
+ 		delta = p->se.avg.load_sum;
+ 		*period = LOAD_AVG_MAX;
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index b49affb4666b..4463ae28bf1a 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -776,7 +776,7 @@ u64 ring_buffer_time_stamp(struct ring_buffer *buffer, int cpu)
+ 
+ 	preempt_disable_notrace();
+ 	time = rb_time_stamp(buffer);
+-	preempt_enable_no_resched_notrace();
++	preempt_enable_notrace();
+ 
+ 	return time;
+ }
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 89158aa93fa6..d07fc2836786 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -496,8 +496,10 @@ int trace_pid_write(struct trace_pid_list *filtered_pids,
+ 	 * not modified.
+ 	 */
+ 	pid_list = kmalloc(sizeof(*pid_list), GFP_KERNEL);
+-	if (!pid_list)
++	if (!pid_list) {
++		trace_parser_put(&parser);
+ 		return -ENOMEM;
++	}
+ 
+ 	pid_list->pid_max = READ_ONCE(pid_max);
+ 
+@@ -507,6 +509,7 @@ int trace_pid_write(struct trace_pid_list *filtered_pids,
+ 
+ 	pid_list->pids = vzalloc((pid_list->pid_max + 7) >> 3);
+ 	if (!pid_list->pids) {
++		trace_parser_put(&parser);
+ 		kfree(pid_list);
+ 		return -ENOMEM;
+ 	}
+@@ -6820,19 +6823,23 @@ struct buffer_ref {
+ 	struct ring_buffer	*buffer;
+ 	void			*page;
+ 	int			cpu;
+-	int			ref;
++	refcount_t		refcount;
+ };
+ 
++static void buffer_ref_release(struct buffer_ref *ref)
++{
++	if (!refcount_dec_and_test(&ref->refcount))
++		return;
++	ring_buffer_free_read_page(ref->buffer, ref->cpu, ref->page);
++	kfree(ref);
++}
++
+ static void buffer_pipe_buf_release(struct pipe_inode_info *pipe,
+ 				    struct pipe_buffer *buf)
+ {
+ 	struct buffer_ref *ref = (struct buffer_ref *)buf->private;
+ 
+-	if (--ref->ref)
+-		return;
+-
+-	ring_buffer_free_read_page(ref->buffer, ref->cpu, ref->page);
+-	kfree(ref);
++	buffer_ref_release(ref);
+ 	buf->private = 0;
+ }
+ 
+@@ -6841,7 +6848,7 @@ static void buffer_pipe_buf_get(struct pipe_inode_info *pipe,
+ {
+ 	struct buffer_ref *ref = (struct buffer_ref *)buf->private;
+ 
+-	ref->ref++;
++	refcount_inc(&ref->refcount);
+ }
+ 
+ /* Pipe buffer operations for a buffer. */
+@@ -6849,7 +6856,7 @@ static const struct pipe_buf_operations buffer_pipe_buf_ops = {
+ 	.can_merge		= 0,
+ 	.confirm		= generic_pipe_buf_confirm,
+ 	.release		= buffer_pipe_buf_release,
+-	.steal			= generic_pipe_buf_steal,
++	.steal			= generic_pipe_buf_nosteal,
+ 	.get			= buffer_pipe_buf_get,
+ };
+ 
+@@ -6862,11 +6869,7 @@ static void buffer_spd_release(struct splice_pipe_desc *spd, unsigned int i)
+ 	struct buffer_ref *ref =
+ 		(struct buffer_ref *)spd->partial[i].private;
+ 
+-	if (--ref->ref)
+-		return;
+-
+-	ring_buffer_free_read_page(ref->buffer, ref->cpu, ref->page);
+-	kfree(ref);
++	buffer_ref_release(ref);
+ 	spd->partial[i].private = 0;
+ }
+ 
+@@ -6921,7 +6924,7 @@ tracing_buffers_splice_read(struct file *file, loff_t *ppos,
+ 			break;
+ 		}
+ 
+-		ref->ref = 1;
++		refcount_set(&ref->refcount, 1);
+ 		ref->buffer = iter->trace_buffer->buffer;
+ 		ref->page = ring_buffer_alloc_read_page(ref->buffer, iter->cpu_file);
+ 		if (IS_ERR(ref->page)) {
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index fc5d23d752a5..e94d2b6bee7f 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -2931,6 +2931,9 @@ static bool __flush_work(struct work_struct *work, bool from_cancel)
+ 	if (WARN_ON(!wq_online))
+ 		return false;
+ 
++	if (WARN_ON(!work->func))
++		return false;
++
+ 	if (!from_cancel) {
+ 		lock_map_acquire(&work->lockdep_map);
+ 		lock_map_release(&work->lockdep_map);
+diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
+index d4df5b24d75e..350d5328014f 100644
+--- a/lib/Kconfig.debug
++++ b/lib/Kconfig.debug
+@@ -1952,6 +1952,7 @@ config TEST_KMOD
+ 	depends on m
+ 	depends on BLOCK && (64BIT || LBDAF)	  # for XFS, BTRFS
+ 	depends on NETDEVICES && NET_CORE && INET # for TUN
++	depends on BLOCK
+ 	select TEST_LKM
+ 	select XFS_FS
+ 	select TUN
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 20dd3283bb1b..318ef6ccdb3b 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -266,7 +266,20 @@ compound_page_dtor * const compound_page_dtors[] = {
+ 
+ int min_free_kbytes = 1024;
+ int user_min_free_kbytes = -1;
++#ifdef CONFIG_DISCONTIGMEM
++/*
++ * DiscontigMem defines memory ranges as separate pg_data_t even if the ranges
++ * are not on separate NUMA nodes. Functionally this works but with
++ * watermark_boost_factor, it can reclaim prematurely as the ranges can be
++ * quite small. By default, do not boost watermarks on discontigmem as in
++ * many cases very high-order allocations like THP are likely to be
++ * unsupported and the premature reclaim offsets the advantage of long-term
++ * fragmentation avoidance.
++ */
++int watermark_boost_factor __read_mostly;
++#else
+ int watermark_boost_factor __read_mostly = 15000;
++#endif
+ int watermark_scale_factor = 10;
+ 
+ static unsigned long nr_kernel_pages __initdata;
+diff --git a/net/bridge/netfilter/ebtables.c b/net/bridge/netfilter/ebtables.c
+index f77888ec93f1..0bb4d712b80c 100644
+--- a/net/bridge/netfilter/ebtables.c
++++ b/net/bridge/netfilter/ebtables.c
+@@ -2032,7 +2032,8 @@ static int ebt_size_mwt(struct compat_ebt_entry_mwt *match32,
+ 		if (match_kern)
+ 			match_kern->match_size = ret;
+ 
+-		if (WARN_ON(type == EBT_COMPAT_TARGET && size_left))
++		/* rule should have no remaining data after target */
++		if (type == EBT_COMPAT_TARGET && size_left)
+ 			return -EINVAL;
+ 
+ 		match32 = (struct compat_ebt_entry_mwt *) buf;
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 25d9bef27d03..3c89ca325947 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -1183,25 +1183,39 @@ static struct dst_entry *ipv4_dst_check(struct dst_entry *dst, u32 cookie)
+ 	return dst;
+ }
+ 
+-static void ipv4_link_failure(struct sk_buff *skb)
++static void ipv4_send_dest_unreach(struct sk_buff *skb)
+ {
+ 	struct ip_options opt;
+-	struct rtable *rt;
+ 	int res;
+ 
+ 	/* Recompile ip options since IPCB may not be valid anymore.
++	 * Also check we have a reasonable ipv4 header.
+ 	 */
+-	memset(&opt, 0, sizeof(opt));
+-	opt.optlen = ip_hdr(skb)->ihl*4 - sizeof(struct iphdr);
++	if (!pskb_network_may_pull(skb, sizeof(struct iphdr)) ||
++	    ip_hdr(skb)->version != 4 || ip_hdr(skb)->ihl < 5)
++		return;
+ 
+-	rcu_read_lock();
+-	res = __ip_options_compile(dev_net(skb->dev), &opt, skb, NULL);
+-	rcu_read_unlock();
++	memset(&opt, 0, sizeof(opt));
++	if (ip_hdr(skb)->ihl > 5) {
++		if (!pskb_network_may_pull(skb, ip_hdr(skb)->ihl * 4))
++			return;
++		opt.optlen = ip_hdr(skb)->ihl * 4 - sizeof(struct iphdr);
+ 
+-	if (res)
+-		return;
++		rcu_read_lock();
++		res = __ip_options_compile(dev_net(skb->dev), &opt, skb, NULL);
++		rcu_read_unlock();
+ 
++		if (res)
++			return;
++	}
+ 	__icmp_send(skb, ICMP_DEST_UNREACH, ICMP_HOST_UNREACH, 0, &opt);
++}
++
++static void ipv4_link_failure(struct sk_buff *skb)
++{
++	struct rtable *rt;
++
++	ipv4_send_dest_unreach(skb);
+ 
+ 	rt = skb_rtable(skb);
+ 	if (rt)
+diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c
+index ba0fc4b18465..eeb4041fa5f9 100644
+--- a/net/ipv4/sysctl_net_ipv4.c
++++ b/net/ipv4/sysctl_net_ipv4.c
+@@ -49,6 +49,7 @@ static int ip_ping_group_range_min[] = { 0, 0 };
+ static int ip_ping_group_range_max[] = { GID_T_MAX, GID_T_MAX };
+ static int comp_sack_nr_max = 255;
+ static u32 u32_max_div_HZ = UINT_MAX / HZ;
++static int one_day_secs = 24 * 3600;
+ 
+ /* obsolete */
+ static int sysctl_tcp_low_latency __read_mostly;
+@@ -1151,7 +1152,9 @@ static struct ctl_table ipv4_net_table[] = {
+ 		.data		= &init_net.ipv4.sysctl_tcp_min_rtt_wlen,
+ 		.maxlen		= sizeof(int),
+ 		.mode		= 0644,
+-		.proc_handler	= proc_dointvec
++		.proc_handler	= proc_dointvec_minmax,
++		.extra1		= &zero,
++		.extra2		= &one_day_secs
+ 	},
+ 	{
+ 		.procname	= "tcp_autocorking",
+diff --git a/net/ncsi/ncsi-rsp.c b/net/ncsi/ncsi-rsp.c
+index dc07fcc7938e..802db01e3075 100644
+--- a/net/ncsi/ncsi-rsp.c
++++ b/net/ncsi/ncsi-rsp.c
+@@ -11,6 +11,7 @@
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+ #include <linux/netdevice.h>
++#include <linux/etherdevice.h>
+ #include <linux/skbuff.h>
+ 
+ #include <net/ncsi.h>
+@@ -667,7 +668,10 @@ static int ncsi_rsp_handler_oem_bcm_gma(struct ncsi_request *nr)
+ 	ndev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
+ 	memcpy(saddr.sa_data, &rsp->data[BCM_MAC_ADDR_OFFSET], ETH_ALEN);
+ 	/* Increase mac address by 1 for BMC's address */
+-	saddr.sa_data[ETH_ALEN - 1]++;
++	eth_addr_inc((u8 *)saddr.sa_data);
++	if (!is_valid_ether_addr((const u8 *)saddr.sa_data))
++		return -ENXIO;
++
+ 	ret = ops->ndo_set_mac_address(ndev, &saddr);
+ 	if (ret < 0)
+ 		netdev_warn(ndev, "NCSI: 'Writing mac address to device failed\n");
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index acb124ce92ec..e2aac80f9b7b 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -3624,6 +3624,9 @@ err1:
+ 
+ static void nft_set_destroy(struct nft_set *set)
+ {
++	if (WARN_ON(set->use > 0))
++		return;
++
+ 	set->ops->destroy(set);
+ 	module_put(to_set_type(set->ops)->owner);
+ 	kfree(set->name);
+@@ -3664,7 +3667,7 @@ static int nf_tables_delset(struct net *net, struct sock *nlsk,
+ 		NL_SET_BAD_ATTR(extack, attr);
+ 		return PTR_ERR(set);
+ 	}
+-	if (!list_empty(&set->bindings) ||
++	if (set->use ||
+ 	    (nlh->nlmsg_flags & NLM_F_NONREC && atomic_read(&set->nelems) > 0)) {
+ 		NL_SET_BAD_ATTR(extack, attr);
+ 		return -EBUSY;
+@@ -3694,6 +3697,9 @@ int nf_tables_bind_set(const struct nft_ctx *ctx, struct nft_set *set,
+ 	struct nft_set_binding *i;
+ 	struct nft_set_iter iter;
+ 
++	if (set->use == UINT_MAX)
++		return -EOVERFLOW;
++
+ 	if (!list_empty(&set->bindings) && nft_set_is_anonymous(set))
+ 		return -EBUSY;
+ 
+@@ -3721,6 +3727,7 @@ bind:
+ 	binding->chain = ctx->chain;
+ 	list_add_tail_rcu(&binding->list, &set->bindings);
+ 	nft_set_trans_bind(ctx, set);
++	set->use++;
+ 
+ 	return 0;
+ }
+@@ -3740,6 +3747,25 @@ void nf_tables_unbind_set(const struct nft_ctx *ctx, struct nft_set *set,
+ }
+ EXPORT_SYMBOL_GPL(nf_tables_unbind_set);
+ 
++void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
++			      struct nft_set_binding *binding,
++			      enum nft_trans_phase phase)
++{
++	switch (phase) {
++	case NFT_TRANS_PREPARE:
++		set->use--;
++		return;
++	case NFT_TRANS_ABORT:
++	case NFT_TRANS_RELEASE:
++		set->use--;
++		/* fall through */
++	default:
++		nf_tables_unbind_set(ctx, set, binding,
++				     phase == NFT_TRANS_COMMIT);
++	}
++}
++EXPORT_SYMBOL_GPL(nf_tables_deactivate_set);
++
+ void nf_tables_destroy_set(const struct nft_ctx *ctx, struct nft_set *set)
+ {
+ 	if (list_empty(&set->bindings) && nft_set_is_anonymous(set))
+diff --git a/net/netfilter/nft_dynset.c b/net/netfilter/nft_dynset.c
+index f1172f99752b..eb7f9a5f2aeb 100644
+--- a/net/netfilter/nft_dynset.c
++++ b/net/netfilter/nft_dynset.c
+@@ -241,11 +241,15 @@ static void nft_dynset_deactivate(const struct nft_ctx *ctx,
+ {
+ 	struct nft_dynset *priv = nft_expr_priv(expr);
+ 
+-	if (phase == NFT_TRANS_PREPARE)
+-		return;
++	nf_tables_deactivate_set(ctx, priv->set, &priv->binding, phase);
++}
++
++static void nft_dynset_activate(const struct nft_ctx *ctx,
++				const struct nft_expr *expr)
++{
++	struct nft_dynset *priv = nft_expr_priv(expr);
+ 
+-	nf_tables_unbind_set(ctx, priv->set, &priv->binding,
+-			     phase == NFT_TRANS_COMMIT);
++	priv->set->use++;
+ }
+ 
+ static void nft_dynset_destroy(const struct nft_ctx *ctx,
+@@ -293,6 +297,7 @@ static const struct nft_expr_ops nft_dynset_ops = {
+ 	.eval		= nft_dynset_eval,
+ 	.init		= nft_dynset_init,
+ 	.destroy	= nft_dynset_destroy,
++	.activate	= nft_dynset_activate,
+ 	.deactivate	= nft_dynset_deactivate,
+ 	.dump		= nft_dynset_dump,
+ };
+diff --git a/net/netfilter/nft_lookup.c b/net/netfilter/nft_lookup.c
+index 14496da5141d..161c3451a747 100644
+--- a/net/netfilter/nft_lookup.c
++++ b/net/netfilter/nft_lookup.c
+@@ -127,11 +127,15 @@ static void nft_lookup_deactivate(const struct nft_ctx *ctx,
+ {
+ 	struct nft_lookup *priv = nft_expr_priv(expr);
+ 
+-	if (phase == NFT_TRANS_PREPARE)
+-		return;
++	nf_tables_deactivate_set(ctx, priv->set, &priv->binding, phase);
++}
++
++static void nft_lookup_activate(const struct nft_ctx *ctx,
++				const struct nft_expr *expr)
++{
++	struct nft_lookup *priv = nft_expr_priv(expr);
+ 
+-	nf_tables_unbind_set(ctx, priv->set, &priv->binding,
+-			     phase == NFT_TRANS_COMMIT);
++	priv->set->use++;
+ }
+ 
+ static void nft_lookup_destroy(const struct nft_ctx *ctx,
+@@ -222,6 +226,7 @@ static const struct nft_expr_ops nft_lookup_ops = {
+ 	.size		= NFT_EXPR_SIZE(sizeof(struct nft_lookup)),
+ 	.eval		= nft_lookup_eval,
+ 	.init		= nft_lookup_init,
++	.activate	= nft_lookup_activate,
+ 	.deactivate	= nft_lookup_deactivate,
+ 	.destroy	= nft_lookup_destroy,
+ 	.dump		= nft_lookup_dump,
+diff --git a/net/netfilter/nft_objref.c b/net/netfilter/nft_objref.c
+index ae178e914486..bf92a40dd1b2 100644
+--- a/net/netfilter/nft_objref.c
++++ b/net/netfilter/nft_objref.c
+@@ -64,21 +64,34 @@ nla_put_failure:
+ 	return -1;
+ }
+ 
+-static void nft_objref_destroy(const struct nft_ctx *ctx,
+-			       const struct nft_expr *expr)
++static void nft_objref_deactivate(const struct nft_ctx *ctx,
++				  const struct nft_expr *expr,
++				  enum nft_trans_phase phase)
+ {
+ 	struct nft_object *obj = nft_objref_priv(expr);
+ 
++	if (phase == NFT_TRANS_COMMIT)
++		return;
++
+ 	obj->use--;
+ }
+ 
++static void nft_objref_activate(const struct nft_ctx *ctx,
++				const struct nft_expr *expr)
++{
++	struct nft_object *obj = nft_objref_priv(expr);
++
++	obj->use++;
++}
++
+ static struct nft_expr_type nft_objref_type;
+ static const struct nft_expr_ops nft_objref_ops = {
+ 	.type		= &nft_objref_type,
+ 	.size		= NFT_EXPR_SIZE(sizeof(struct nft_object *)),
+ 	.eval		= nft_objref_eval,
+ 	.init		= nft_objref_init,
+-	.destroy	= nft_objref_destroy,
++	.activate	= nft_objref_activate,
++	.deactivate	= nft_objref_deactivate,
+ 	.dump		= nft_objref_dump,
+ };
+ 
+@@ -161,11 +174,15 @@ static void nft_objref_map_deactivate(const struct nft_ctx *ctx,
+ {
+ 	struct nft_objref_map *priv = nft_expr_priv(expr);
+ 
+-	if (phase == NFT_TRANS_PREPARE)
+-		return;
++	nf_tables_deactivate_set(ctx, priv->set, &priv->binding, phase);
++}
++
++static void nft_objref_map_activate(const struct nft_ctx *ctx,
++				    const struct nft_expr *expr)
++{
++	struct nft_objref_map *priv = nft_expr_priv(expr);
+ 
+-	nf_tables_unbind_set(ctx, priv->set, &priv->binding,
+-			     phase == NFT_TRANS_COMMIT);
++	priv->set->use++;
+ }
+ 
+ static void nft_objref_map_destroy(const struct nft_ctx *ctx,
+@@ -182,6 +199,7 @@ static const struct nft_expr_ops nft_objref_map_ops = {
+ 	.size		= NFT_EXPR_SIZE(sizeof(struct nft_objref_map)),
+ 	.eval		= nft_objref_map_eval,
+ 	.init		= nft_objref_map_init,
++	.activate	= nft_objref_map_activate,
+ 	.deactivate	= nft_objref_map_deactivate,
+ 	.destroy	= nft_objref_map_destroy,
+ 	.dump		= nft_objref_map_dump,
+diff --git a/net/netrom/af_netrom.c b/net/netrom/af_netrom.c
+index 1d3144d19903..71ffd1a6dc7c 100644
+--- a/net/netrom/af_netrom.c
++++ b/net/netrom/af_netrom.c
+@@ -1392,18 +1392,22 @@ static int __init nr_proto_init(void)
+ 	int i;
+ 	int rc = proto_register(&nr_proto, 0);
+ 
+-	if (rc != 0)
+-		goto out;
++	if (rc)
++		return rc;
+ 
+ 	if (nr_ndevs > 0x7fffffff/sizeof(struct net_device *)) {
+-		printk(KERN_ERR "NET/ROM: nr_proto_init - nr_ndevs parameter to large\n");
+-		return -1;
++		pr_err("NET/ROM: %s - nr_ndevs parameter too large\n",
++		       __func__);
++		rc = -EINVAL;
++		goto unregister_proto;
+ 	}
+ 
+ 	dev_nr = kcalloc(nr_ndevs, sizeof(struct net_device *), GFP_KERNEL);
+-	if (dev_nr == NULL) {
+-		printk(KERN_ERR "NET/ROM: nr_proto_init - unable to allocate device array\n");
+-		return -1;
++	if (!dev_nr) {
++		pr_err("NET/ROM: %s - unable to allocate device array\n",
++		       __func__);
++		rc = -ENOMEM;
++		goto unregister_proto;
+ 	}
+ 
+ 	for (i = 0; i < nr_ndevs; i++) {
+@@ -1413,13 +1417,13 @@ static int __init nr_proto_init(void)
+ 		sprintf(name, "nr%d", i);
+ 		dev = alloc_netdev(0, name, NET_NAME_UNKNOWN, nr_setup);
+ 		if (!dev) {
+-			printk(KERN_ERR "NET/ROM: nr_proto_init - unable to allocate device structure\n");
++			rc = -ENOMEM;
+ 			goto fail;
+ 		}
+ 
+ 		dev->base_addr = i;
+-		if (register_netdev(dev)) {
+-			printk(KERN_ERR "NET/ROM: nr_proto_init - unable to register network device\n");
++		rc = register_netdev(dev);
++		if (rc) {
+ 			free_netdev(dev);
+ 			goto fail;
+ 		}
+@@ -1427,36 +1431,64 @@ static int __init nr_proto_init(void)
+ 		dev_nr[i] = dev;
+ 	}
+ 
+-	if (sock_register(&nr_family_ops)) {
+-		printk(KERN_ERR "NET/ROM: nr_proto_init - unable to register socket family\n");
++	rc = sock_register(&nr_family_ops);
++	if (rc)
+ 		goto fail;
+-	}
+ 
+-	register_netdevice_notifier(&nr_dev_notifier);
++	rc = register_netdevice_notifier(&nr_dev_notifier);
++	if (rc)
++		goto out_sock;
+ 
+ 	ax25_register_pid(&nr_pid);
+ 	ax25_linkfail_register(&nr_linkfail_notifier);
+ 
+ #ifdef CONFIG_SYSCTL
+-	nr_register_sysctl();
++	rc = nr_register_sysctl();
++	if (rc)
++		goto out_sysctl;
+ #endif
+ 
+ 	nr_loopback_init();
+ 
+-	proc_create_seq("nr", 0444, init_net.proc_net, &nr_info_seqops);
+-	proc_create_seq("nr_neigh", 0444, init_net.proc_net, &nr_neigh_seqops);
+-	proc_create_seq("nr_nodes", 0444, init_net.proc_net, &nr_node_seqops);
+-out:
+-	return rc;
++	rc = -ENOMEM;
++	if (!proc_create_seq("nr", 0444, init_net.proc_net, &nr_info_seqops))
++		goto proc_remove1;
++	if (!proc_create_seq("nr_neigh", 0444, init_net.proc_net,
++			     &nr_neigh_seqops))
++		goto proc_remove2;
++	if (!proc_create_seq("nr_nodes", 0444, init_net.proc_net,
++			     &nr_node_seqops))
++		goto proc_remove3;
++
++	return 0;
++
++proc_remove3:
++	remove_proc_entry("nr_neigh", init_net.proc_net);
++proc_remove2:
++	remove_proc_entry("nr", init_net.proc_net);
++proc_remove1:
++
++	nr_loopback_clear();
++	nr_rt_free();
++
++#ifdef CONFIG_SYSCTL
++	nr_unregister_sysctl();
++out_sysctl:
++#endif
++	ax25_linkfail_release(&nr_linkfail_notifier);
++	ax25_protocol_release(AX25_P_NETROM);
++	unregister_netdevice_notifier(&nr_dev_notifier);
++out_sock:
++	sock_unregister(PF_NETROM);
+ fail:
+ 	while (--i >= 0) {
+ 		unregister_netdev(dev_nr[i]);
+ 		free_netdev(dev_nr[i]);
+ 	}
+ 	kfree(dev_nr);
++unregister_proto:
+ 	proto_unregister(&nr_proto);
+-	rc = -1;
+-	goto out;
++	return rc;
+ }
+ 
+ module_init(nr_proto_init);
+diff --git a/net/netrom/nr_loopback.c b/net/netrom/nr_loopback.c
+index 215ad22a9647..93d13f019981 100644
+--- a/net/netrom/nr_loopback.c
++++ b/net/netrom/nr_loopback.c
+@@ -70,7 +70,7 @@ static void nr_loopback_timer(struct timer_list *unused)
+ 	}
+ }
+ 
+-void __exit nr_loopback_clear(void)
++void nr_loopback_clear(void)
+ {
+ 	del_timer_sync(&loopback_timer);
+ 	skb_queue_purge(&loopback_queue);
+diff --git a/net/netrom/nr_route.c b/net/netrom/nr_route.c
+index 6485f593e2f0..b76aa668a94b 100644
+--- a/net/netrom/nr_route.c
++++ b/net/netrom/nr_route.c
+@@ -953,7 +953,7 @@ const struct seq_operations nr_neigh_seqops = {
+ /*
+  *	Free all memory associated with the nodes and routes lists.
+  */
+-void __exit nr_rt_free(void)
++void nr_rt_free(void)
+ {
+ 	struct nr_neigh *s = NULL;
+ 	struct nr_node  *t = NULL;
+diff --git a/net/netrom/sysctl_net_netrom.c b/net/netrom/sysctl_net_netrom.c
+index ba1c368b3f18..771011b84270 100644
+--- a/net/netrom/sysctl_net_netrom.c
++++ b/net/netrom/sysctl_net_netrom.c
+@@ -146,9 +146,12 @@ static struct ctl_table nr_table[] = {
+ 	{ }
+ };
+ 
+-void __init nr_register_sysctl(void)
++int __init nr_register_sysctl(void)
+ {
+ 	nr_table_header = register_net_sysctl(&init_net, "net/netrom", nr_table);
++	if (!nr_table_header)
++		return -ENOMEM;
++	return 0;
+ }
+ 
+ void nr_unregister_sysctl(void)
+diff --git a/net/rds/af_rds.c b/net/rds/af_rds.c
+index 65387e1e6964..cd7e01ea8144 100644
+--- a/net/rds/af_rds.c
++++ b/net/rds/af_rds.c
+@@ -506,6 +506,9 @@ static int rds_connect(struct socket *sock, struct sockaddr *uaddr,
+ 	struct rds_sock *rs = rds_sk_to_rs(sk);
+ 	int ret = 0;
+ 
++	if (addr_len < offsetofend(struct sockaddr, sa_family))
++		return -EINVAL;
++
+ 	lock_sock(sk);
+ 
+ 	switch (uaddr->sa_family) {
+diff --git a/net/rds/bind.c b/net/rds/bind.c
+index 17c9d9f0c848..0f4398e7f2a7 100644
+--- a/net/rds/bind.c
++++ b/net/rds/bind.c
+@@ -173,6 +173,8 @@ int rds_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+ 	/* We allow an RDS socket to be bound to either IPv4 or IPv6
+ 	 * address.
+ 	 */
++	if (addr_len < offsetofend(struct sockaddr, sa_family))
++		return -EINVAL;
+ 	if (uaddr->sa_family == AF_INET) {
+ 		struct sockaddr_in *sin = (struct sockaddr_in *)uaddr;
+ 
+diff --git a/net/rds/ib_fmr.c b/net/rds/ib_fmr.c
+index e0f70c4051b6..01e764f8f224 100644
+--- a/net/rds/ib_fmr.c
++++ b/net/rds/ib_fmr.c
+@@ -44,6 +44,17 @@ struct rds_ib_mr *rds_ib_alloc_fmr(struct rds_ib_device *rds_ibdev, int npages)
+ 	else
+ 		pool = rds_ibdev->mr_1m_pool;
+ 
++	if (atomic_read(&pool->dirty_count) >= pool->max_items / 10)
++		queue_delayed_work(rds_ib_mr_wq, &pool->flush_worker, 10);
++
++	/* Switch pools if one of the pool is reaching upper limit */
++	if (atomic_read(&pool->dirty_count) >=  pool->max_items * 9 / 10) {
++		if (pool->pool_type == RDS_IB_MR_8K_POOL)
++			pool = rds_ibdev->mr_1m_pool;
++		else
++			pool = rds_ibdev->mr_8k_pool;
++	}
++
+ 	ibmr = rds_ib_try_reuse_ibmr(pool);
+ 	if (ibmr)
+ 		return ibmr;
+diff --git a/net/rds/ib_rdma.c b/net/rds/ib_rdma.c
+index 63c8d107adcf..d664e9ade74d 100644
+--- a/net/rds/ib_rdma.c
++++ b/net/rds/ib_rdma.c
+@@ -454,9 +454,6 @@ struct rds_ib_mr *rds_ib_try_reuse_ibmr(struct rds_ib_mr_pool *pool)
+ 	struct rds_ib_mr *ibmr = NULL;
+ 	int iter = 0;
+ 
+-	if (atomic_read(&pool->dirty_count) >= pool->max_items_soft / 10)
+-		queue_delayed_work(rds_ib_mr_wq, &pool->flush_worker, 10);
+-
+ 	while (1) {
+ 		ibmr = rds_ib_reuse_mr(pool);
+ 		if (ibmr)
+diff --git a/net/rose/rose_loopback.c b/net/rose/rose_loopback.c
+index 7af4f99c4a93..094a6621f8e8 100644
+--- a/net/rose/rose_loopback.c
++++ b/net/rose/rose_loopback.c
+@@ -16,6 +16,7 @@
+ #include <linux/init.h>
+ 
+ static struct sk_buff_head loopback_queue;
++#define ROSE_LOOPBACK_LIMIT 1000
+ static struct timer_list loopback_timer;
+ 
+ static void rose_set_loopback_timer(void);
+@@ -35,29 +36,27 @@ static int rose_loopback_running(void)
+ 
+ int rose_loopback_queue(struct sk_buff *skb, struct rose_neigh *neigh)
+ {
+-	struct sk_buff *skbn;
++	struct sk_buff *skbn = NULL;
+ 
+-	skbn = skb_clone(skb, GFP_ATOMIC);
++	if (skb_queue_len(&loopback_queue) < ROSE_LOOPBACK_LIMIT)
++		skbn = skb_clone(skb, GFP_ATOMIC);
+ 
+-	kfree_skb(skb);
+-
+-	if (skbn != NULL) {
++	if (skbn) {
++		consume_skb(skb);
+ 		skb_queue_tail(&loopback_queue, skbn);
+ 
+ 		if (!rose_loopback_running())
+ 			rose_set_loopback_timer();
++	} else {
++		kfree_skb(skb);
+ 	}
+ 
+ 	return 1;
+ }
+ 
+-
+ static void rose_set_loopback_timer(void)
+ {
+-	del_timer(&loopback_timer);
+-
+-	loopback_timer.expires  = jiffies + 10;
+-	add_timer(&loopback_timer);
++	mod_timer(&loopback_timer, jiffies + 10);
+ }
+ 
+ static void rose_loopback_timer(struct timer_list *unused)
+@@ -68,8 +67,12 @@ static void rose_loopback_timer(struct timer_list *unused)
+ 	struct sock *sk;
+ 	unsigned short frametype;
+ 	unsigned int lci_i, lci_o;
++	int count;
+ 
+-	while ((skb = skb_dequeue(&loopback_queue)) != NULL) {
++	for (count = 0; count < ROSE_LOOPBACK_LIMIT; count++) {
++		skb = skb_dequeue(&loopback_queue);
++		if (!skb)
++			return;
+ 		if (skb->len < ROSE_MIN_LEN) {
+ 			kfree_skb(skb);
+ 			continue;
+@@ -106,6 +109,8 @@ static void rose_loopback_timer(struct timer_list *unused)
+ 			kfree_skb(skb);
+ 		}
+ 	}
++	if (!skb_queue_empty(&loopback_queue))
++		mod_timer(&loopback_timer, jiffies + 1);
+ }
+ 
+ void __exit rose_loopback_clear(void)
+diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c
+index 9128aa0e40aa..b4ffb81223ad 100644
+--- a/net/rxrpc/input.c
++++ b/net/rxrpc/input.c
+@@ -1155,19 +1155,19 @@ int rxrpc_extract_header(struct rxrpc_skb_priv *sp, struct sk_buff *skb)
+  * handle data received on the local endpoint
+  * - may be called in interrupt context
+  *
+- * The socket is locked by the caller and this prevents the socket from being
+- * shut down and the local endpoint from going away, thus sk_user_data will not
+- * be cleared until this function returns.
++ * [!] Note that as this is called from the encap_rcv hook, the socket is not
++ * held locked by the caller and nothing prevents sk_user_data on the UDP from
++ * being cleared in the middle of processing this function.
+  *
+  * Called with the RCU read lock held from the IP layer via UDP.
+  */
+ int rxrpc_input_packet(struct sock *udp_sk, struct sk_buff *skb)
+ {
++	struct rxrpc_local *local = rcu_dereference_sk_user_data(udp_sk);
+ 	struct rxrpc_connection *conn;
+ 	struct rxrpc_channel *chan;
+ 	struct rxrpc_call *call = NULL;
+ 	struct rxrpc_skb_priv *sp;
+-	struct rxrpc_local *local = udp_sk->sk_user_data;
+ 	struct rxrpc_peer *peer = NULL;
+ 	struct rxrpc_sock *rx = NULL;
+ 	unsigned int channel;
+@@ -1175,6 +1175,10 @@ int rxrpc_input_packet(struct sock *udp_sk, struct sk_buff *skb)
+ 
+ 	_enter("%p", udp_sk);
+ 
++	if (unlikely(!local)) {
++		kfree_skb(skb);
++		return 0;
++	}
+ 	if (skb->tstamp == 0)
+ 		skb->tstamp = ktime_get_real();
+ 
+diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c
+index 0906e51d3cfb..10317dbdab5f 100644
+--- a/net/rxrpc/local_object.c
++++ b/net/rxrpc/local_object.c
+@@ -304,7 +304,8 @@ nomem:
+ 	ret = -ENOMEM;
+ sock_error:
+ 	mutex_unlock(&rxnet->local_mutex);
+-	kfree(local);
++	if (local)
++		call_rcu(&local->rcu, rxrpc_local_rcu);
+ 	_leave(" = %d", ret);
+ 	return ERR_PTR(ret);
+ 
+diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
+index 12bb23b8e0c5..261131dfa1f1 100644
+--- a/net/sunrpc/cache.c
++++ b/net/sunrpc/cache.c
+@@ -54,6 +54,7 @@ static void cache_init(struct cache_head *h, struct cache_detail *detail)
+ 	h->last_refresh = now;
+ }
+ 
++static inline int cache_is_valid(struct cache_head *h);
+ static void cache_fresh_locked(struct cache_head *head, time_t expiry,
+ 				struct cache_detail *detail);
+ static void cache_fresh_unlocked(struct cache_head *head,
+@@ -105,6 +106,8 @@ static struct cache_head *sunrpc_cache_add_entry(struct cache_detail *detail,
+ 			if (cache_is_expired(detail, tmp)) {
+ 				hlist_del_init_rcu(&tmp->cache_list);
+ 				detail->entries --;
++				if (cache_is_valid(tmp) == -EAGAIN)
++					set_bit(CACHE_NEGATIVE, &tmp->flags);
+ 				cache_fresh_locked(tmp, 0, detail);
+ 				freeme = tmp;
+ 				break;
+diff --git a/net/tipc/netlink_compat.c b/net/tipc/netlink_compat.c
+index 4ad3586da8f0..340a6e7c43a7 100644
+--- a/net/tipc/netlink_compat.c
++++ b/net/tipc/netlink_compat.c
+@@ -267,8 +267,14 @@ static int tipc_nl_compat_dumpit(struct tipc_nl_compat_cmd_dump *cmd,
+ 	if (msg->rep_type)
+ 		tipc_tlv_init(msg->rep, msg->rep_type);
+ 
+-	if (cmd->header)
+-		(*cmd->header)(msg);
++	if (cmd->header) {
++		err = (*cmd->header)(msg);
++		if (err) {
++			kfree_skb(msg->rep);
++			msg->rep = NULL;
++			return err;
++		}
++	}
+ 
+ 	arg = nlmsg_new(0, GFP_KERNEL);
+ 	if (!arg) {
+@@ -397,7 +403,12 @@ static int tipc_nl_compat_bearer_enable(struct tipc_nl_compat_cmd_doit *cmd,
+ 	if (!bearer)
+ 		return -EMSGSIZE;
+ 
+-	len = min_t(int, TLV_GET_DATA_LEN(msg->req), TIPC_MAX_BEARER_NAME);
++	len = TLV_GET_DATA_LEN(msg->req);
++	len -= offsetof(struct tipc_bearer_config, name);
++	if (len <= 0)
++		return -EINVAL;
++
++	len = min_t(int, len, TIPC_MAX_BEARER_NAME);
+ 	if (!string_is_valid(b->name, len))
+ 		return -EINVAL;
+ 
+@@ -766,7 +777,12 @@ static int tipc_nl_compat_link_set(struct tipc_nl_compat_cmd_doit *cmd,
+ 
+ 	lc = (struct tipc_link_config *)TLV_DATA(msg->req);
+ 
+-	len = min_t(int, TLV_GET_DATA_LEN(msg->req), TIPC_MAX_LINK_NAME);
++	len = TLV_GET_DATA_LEN(msg->req);
++	len -= offsetof(struct tipc_link_config, name);
++	if (len <= 0)
++		return -EINVAL;
++
++	len = min_t(int, len, TIPC_MAX_LINK_NAME);
+ 	if (!string_is_valid(lc->name, len))
+ 		return -EINVAL;
+ 
+diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
+index 4b5ff3d44912..5f1d937c4be9 100644
+--- a/net/tls/tls_device.c
++++ b/net/tls/tls_device.c
+@@ -884,7 +884,9 @@ int tls_set_device_offload_rx(struct sock *sk, struct tls_context *ctx)
+ 	goto release_netdev;
+ 
+ free_sw_resources:
++	up_read(&device_offload_lock);
+ 	tls_sw_free_resources_rx(sk);
++	down_read(&device_offload_lock);
+ release_ctx:
+ 	ctx->priv_ctx_rx = NULL;
+ release_netdev:
+@@ -919,8 +921,6 @@ void tls_device_offload_cleanup_rx(struct sock *sk)
+ 	}
+ out:
+ 	up_read(&device_offload_lock);
+-	kfree(tls_ctx->rx.rec_seq);
+-	kfree(tls_ctx->rx.iv);
+ 	tls_sw_release_resources_rx(sk);
+ }
+ 
+diff --git a/net/tls/tls_device_fallback.c b/net/tls/tls_device_fallback.c
+index 450a6dbc5a88..ef8934fd8698 100644
+--- a/net/tls/tls_device_fallback.c
++++ b/net/tls/tls_device_fallback.c
+@@ -193,6 +193,9 @@ static void update_chksum(struct sk_buff *skb, int headln)
+ 
+ static void complete_skb(struct sk_buff *nskb, struct sk_buff *skb, int headln)
+ {
++	struct sock *sk = skb->sk;
++	int delta;
++
+ 	skb_copy_header(nskb, skb);
+ 
+ 	skb_put(nskb, skb->len);
+@@ -200,11 +203,15 @@ static void complete_skb(struct sk_buff *nskb, struct sk_buff *skb, int headln)
+ 	update_chksum(nskb, headln);
+ 
+ 	nskb->destructor = skb->destructor;
+-	nskb->sk = skb->sk;
++	nskb->sk = sk;
+ 	skb->destructor = NULL;
+ 	skb->sk = NULL;
+-	refcount_add(nskb->truesize - skb->truesize,
+-		     &nskb->sk->sk_wmem_alloc);
++
++	delta = nskb->truesize - skb->truesize;
++	if (likely(delta < 0))
++		WARN_ON_ONCE(refcount_sub_and_test(-delta, &sk->sk_wmem_alloc));
++	else if (delta)
++		refcount_add(delta, &sk->sk_wmem_alloc);
+ }
+ 
+ /* This function may be called after the user socket is already
+diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
+index 96dbac91ac6e..ce5dd79365a7 100644
+--- a/net/tls/tls_main.c
++++ b/net/tls/tls_main.c
+@@ -304,11 +304,8 @@ static void tls_sk_proto_close(struct sock *sk, long timeout)
+ #endif
+ 	}
+ 
+-	if (ctx->rx_conf == TLS_SW) {
+-		kfree(ctx->rx.rec_seq);
+-		kfree(ctx->rx.iv);
++	if (ctx->rx_conf == TLS_SW)
+ 		tls_sw_free_resources_rx(sk);
+-	}
+ 
+ #ifdef CONFIG_TLS_DEVICE
+ 	if (ctx->rx_conf == TLS_HW)
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index d2d4f7c0d4be..839a0a0b5dfa 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -1830,6 +1830,9 @@ void tls_sw_release_resources_rx(struct sock *sk)
+ 	struct tls_context *tls_ctx = tls_get_ctx(sk);
+ 	struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx);
+ 
++	kfree(tls_ctx->rx.rec_seq);
++	kfree(tls_ctx->rx.iv);
++
+ 	if (ctx->aead_recv) {
+ 		kfree_skb(ctx->recv_pkt);
+ 		ctx->recv_pkt = NULL;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index f061167062bc..a9f69c3a3e0b 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -5490,7 +5490,7 @@ static void alc_headset_btn_callback(struct hda_codec *codec,
+ 	jack->jack->button_state = report;
+ }
+ 
+-static void alc295_fixup_chromebook(struct hda_codec *codec,
++static void alc_fixup_headset_jack(struct hda_codec *codec,
+ 				    const struct hda_fixup *fix, int action)
+ {
+ 
+@@ -5500,16 +5500,6 @@ static void alc295_fixup_chromebook(struct hda_codec *codec,
+ 						    alc_headset_btn_callback);
+ 		snd_hda_jack_add_kctl(codec, 0x55, "Headset Jack", false,
+ 				      SND_JACK_HEADSET, alc_headset_btn_keymap);
+-		switch (codec->core.vendor_id) {
+-		case 0x10ec0295:
+-			alc_update_coef_idx(codec, 0x4a, 0x8000, 1 << 15); /* Reset HP JD */
+-			alc_update_coef_idx(codec, 0x4a, 0x8000, 0 << 15);
+-			break;
+-		case 0x10ec0236:
+-			alc_update_coef_idx(codec, 0x1b, 0x8000, 1 << 15); /* Reset HP JD */
+-			alc_update_coef_idx(codec, 0x1b, 0x8000, 0 << 15);
+-			break;
+-		}
+ 		break;
+ 	case HDA_FIXUP_ACT_INIT:
+ 		switch (codec->core.vendor_id) {
+@@ -5530,6 +5520,25 @@ static void alc295_fixup_chromebook(struct hda_codec *codec,
+ 	}
+ }
+ 
++static void alc295_fixup_chromebook(struct hda_codec *codec,
++				    const struct hda_fixup *fix, int action)
++{
++	switch (action) {
++	case HDA_FIXUP_ACT_INIT:
++		switch (codec->core.vendor_id) {
++		case 0x10ec0295:
++			alc_update_coef_idx(codec, 0x4a, 0x8000, 1 << 15); /* Reset HP JD */
++			alc_update_coef_idx(codec, 0x4a, 0x8000, 0 << 15);
++			break;
++		case 0x10ec0236:
++			alc_update_coef_idx(codec, 0x1b, 0x8000, 1 << 15); /* Reset HP JD */
++			alc_update_coef_idx(codec, 0x1b, 0x8000, 0 << 15);
++			break;
++		}
++		break;
++	}
++}
++
+ static void alc_fixup_disable_mic_vref(struct hda_codec *codec,
+ 				  const struct hda_fixup *fix, int action)
+ {
+@@ -5684,6 +5693,7 @@ enum {
+ 	ALC285_FIXUP_LENOVO_PC_BEEP_IN_NOISE,
+ 	ALC255_FIXUP_ACER_HEADSET_MIC,
+ 	ALC295_FIXUP_CHROME_BOOK,
++	ALC225_FIXUP_HEADSET_JACK,
+ 	ALC225_FIXUP_DELL_WYSE_AIO_MIC_NO_PRESENCE,
+ 	ALC225_FIXUP_WYSE_AUTO_MUTE,
+ 	ALC225_FIXUP_WYSE_DISABLE_MIC_VREF,
+@@ -6645,6 +6655,12 @@ static const struct hda_fixup alc269_fixups[] = {
+ 	[ALC295_FIXUP_CHROME_BOOK] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc295_fixup_chromebook,
++		.chained = true,
++		.chain_id = ALC225_FIXUP_HEADSET_JACK
++	},
++	[ALC225_FIXUP_HEADSET_JACK] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc_fixup_headset_jack,
+ 	},
+ 	[ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE] = {
+ 		.type = HDA_FIXUP_PINS,
+@@ -7143,7 +7159,8 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ 	{.id = ALC255_FIXUP_DUMMY_LINEOUT_VERB, .name = "alc255-dummy-lineout"},
+ 	{.id = ALC255_FIXUP_DELL_HEADSET_MIC, .name = "alc255-dell-headset"},
+ 	{.id = ALC295_FIXUP_HP_X360, .name = "alc295-hp-x360"},
+-	{.id = ALC295_FIXUP_CHROME_BOOK, .name = "alc-sense-combo"},
++	{.id = ALC225_FIXUP_HEADSET_JACK, .name = "alc-headset-jack"},
++	{.id = ALC295_FIXUP_CHROME_BOOK, .name = "alc-chrome-book"},
+ 	{.id = ALC299_FIXUP_PREDATOR_SPK, .name = "predator-spk"},
+ 	{}
+ };


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [gentoo-commits] proj/linux-patches:5.0 commit in: /
@ 2019-05-04 18:29 Mike Pagano
  0 siblings, 0 replies; 28+ messages in thread
From: Mike Pagano @ 2019-05-04 18:29 UTC (permalink / raw
  To: gentoo-commits

commit:     dc81aa26ea1bd832413eabc76bcff4c1421e0b2c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat May  4 18:29:38 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat May  4 18:29:38 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=dc81aa26

Linux patch 5.0.12

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1011_linux-5.0.12.patch | 3398 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3402 insertions(+)

diff --git a/0000_README b/0000_README
index 4dfa486..3b63726 100644
--- a/0000_README
+++ b/0000_README
@@ -87,6 +87,10 @@ Patch:  1010_linux-5.0.11.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.0.11
 
+Patch:  1011_linux-5.0.12.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.0.12
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1011_linux-5.0.12.patch b/1011_linux-5.0.12.patch
new file mode 100644
index 0000000..f1fc8ab
--- /dev/null
+++ b/1011_linux-5.0.12.patch
@@ -0,0 +1,3398 @@
+diff --git a/Documentation/i2c/busses/i2c-i801 b/Documentation/i2c/busses/i2c-i801
+index d1ee484a787d..ee9984f35868 100644
+--- a/Documentation/i2c/busses/i2c-i801
++++ b/Documentation/i2c/busses/i2c-i801
+@@ -36,6 +36,7 @@ Supported adapters:
+   * Intel Cannon Lake (PCH)
+   * Intel Cedar Fork (PCH)
+   * Intel Ice Lake (PCH)
++  * Intel Comet Lake (PCH)
+    Datasheets: Publicly available at the Intel website
+ 
+ On Intel Patsburg and later chipsets, both the normal host SMBus controller
+diff --git a/Makefile b/Makefile
+index c3daaefa979c..fd044f594bbf 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 0
+-SUBLEVEL = 11
++SUBLEVEL = 12
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+@@ -31,7 +31,7 @@ _all:
+ # descending is started. They are now explicitly listed as the
+ # prepare rule.
+ 
+-ifneq ($(sub-make-done),1)
++ifneq ($(sub_make_done),1)
+ 
+ # Do not use make's built-in rules and variables
+ # (this increases performance and avoids hard-to-debug behaviour)
+@@ -159,6 +159,8 @@ need-sub-make := 1
+ $(lastword $(MAKEFILE_LIST)): ;
+ endif
+ 
++export sub_make_done := 1
++
+ ifeq ($(need-sub-make),1)
+ 
+ PHONY += $(MAKECMDGOALS) sub-make
+@@ -168,12 +170,12 @@ $(filter-out _all sub-make $(CURDIR)/Makefile, $(MAKECMDGOALS)) _all: sub-make
+ 
+ # Invoke a second make in the output directory, passing relevant variables
+ sub-make:
+-	$(Q)$(MAKE) sub-make-done=1 \
++	$(Q)$(MAKE) \
+ 	$(if $(KBUILD_OUTPUT),-C $(KBUILD_OUTPUT) KBUILD_SRC=$(CURDIR)) \
+ 	-f $(CURDIR)/Makefile $(filter-out _all sub-make,$(MAKECMDGOALS))
+ 
+ endif # need-sub-make
+-endif # sub-make-done
++endif # sub_make_done
+ 
+ # We process the rest of the Makefile if this is the final invocation of make
+ ifeq ($(need-sub-make),)
+diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
+index 26524b75970a..e5d56d9b712c 100644
+--- a/arch/arm/Kconfig
++++ b/arch/arm/Kconfig
+@@ -593,6 +593,7 @@ config ARCH_DAVINCI
+ 	select HAVE_IDE
+ 	select PM_GENERIC_DOMAINS if PM
+ 	select PM_GENERIC_DOMAINS_OF if PM && OF
++	select REGMAP_MMIO
+ 	select RESET_CONTROLLER
+ 	select USE_OF
+ 	select ZONE_DMA
+diff --git a/arch/arm/boot/dts/bcm2835-rpi-b-rev2.dts b/arch/arm/boot/dts/bcm2835-rpi-b-rev2.dts
+index 5641d162dfdb..28e7513ce617 100644
+--- a/arch/arm/boot/dts/bcm2835-rpi-b-rev2.dts
++++ b/arch/arm/boot/dts/bcm2835-rpi-b-rev2.dts
+@@ -93,7 +93,7 @@
+ };
+ 
+ &hdmi {
+-	hpd-gpios = <&gpio 46 GPIO_ACTIVE_LOW>;
++	hpd-gpios = <&gpio 46 GPIO_ACTIVE_HIGH>;
+ };
+ 
+ &pwm {
+diff --git a/arch/arm/boot/dts/imx6qdl-icore-rqs.dtsi b/arch/arm/boot/dts/imx6qdl-icore-rqs.dtsi
+index 1d1b4bd0670f..a4217f564a53 100644
+--- a/arch/arm/boot/dts/imx6qdl-icore-rqs.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-icore-rqs.dtsi
+@@ -264,7 +264,7 @@
+ 	pinctrl-2 = <&pinctrl_usdhc3_200mhz>;
+ 	vmcc-supply = <&reg_sd3_vmmc>;
+ 	cd-gpios = <&gpio1 1 GPIO_ACTIVE_LOW>;
+-	bus-witdh = <4>;
++	bus-width = <4>;
+ 	no-1-8-v;
+ 	status = "okay";
+ };
+@@ -275,7 +275,7 @@
+ 	pinctrl-1 = <&pinctrl_usdhc4_100mhz>;
+ 	pinctrl-2 = <&pinctrl_usdhc4_200mhz>;
+ 	vmcc-supply = <&reg_sd4_vmmc>;
+-	bus-witdh = <8>;
++	bus-width = <8>;
+ 	no-1-8-v;
+ 	non-removable;
+ 	status = "okay";
+diff --git a/arch/arm/boot/dts/imx6qdl-phytec-pfla02.dtsi b/arch/arm/boot/dts/imx6qdl-phytec-pfla02.dtsi
+index 1b50b01e9bac..65d03c5d409b 100644
+--- a/arch/arm/boot/dts/imx6qdl-phytec-pfla02.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-phytec-pfla02.dtsi
+@@ -90,6 +90,7 @@
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_enet>;
+ 	phy-mode = "rgmii";
++	phy-reset-duration = <10>; /* in msecs */
+ 	phy-reset-gpios = <&gpio3 23 GPIO_ACTIVE_LOW>;
+ 	phy-supply = <&vdd_eth_io_reg>;
+ 	status = "disabled";
+diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
+index 3a875fc1b63c..cee06509f00a 100644
+--- a/arch/arm/include/asm/kvm_mmu.h
++++ b/arch/arm/include/asm/kvm_mmu.h
+@@ -381,6 +381,17 @@ static inline int kvm_read_guest_lock(struct kvm *kvm,
+ 	return ret;
+ }
+ 
++static inline int kvm_write_guest_lock(struct kvm *kvm, gpa_t gpa,
++				       const void *data, unsigned long len)
++{
++	int srcu_idx = srcu_read_lock(&kvm->srcu);
++	int ret = kvm_write_guest(kvm, gpa, data, len);
++
++	srcu_read_unlock(&kvm->srcu, srcu_idx);
++
++	return ret;
++}
++
+ static inline void *kvm_get_hyp_vector(void)
+ {
+ 	switch(read_cpuid_part()) {
+diff --git a/arch/arm/include/asm/stage2_pgtable.h b/arch/arm/include/asm/stage2_pgtable.h
+index de2089501b8b..9e11dce55e06 100644
+--- a/arch/arm/include/asm/stage2_pgtable.h
++++ b/arch/arm/include/asm/stage2_pgtable.h
+@@ -75,6 +75,8 @@ static inline bool kvm_stage2_has_pud(struct kvm *kvm)
+ 
+ #define S2_PMD_MASK				PMD_MASK
+ #define S2_PMD_SIZE				PMD_SIZE
++#define S2_PUD_MASK				PUD_MASK
++#define S2_PUD_SIZE				PUD_SIZE
+ 
+ static inline bool kvm_stage2_has_pmd(struct kvm *kvm)
+ {
+diff --git a/arch/arm/mach-imx/mach-imx51.c b/arch/arm/mach-imx/mach-imx51.c
+index c7169c2f94c4..08c7892866c2 100644
+--- a/arch/arm/mach-imx/mach-imx51.c
++++ b/arch/arm/mach-imx/mach-imx51.c
+@@ -59,6 +59,7 @@ static void __init imx51_m4if_setup(void)
+ 		return;
+ 
+ 	m4if_base = of_iomap(np, 0);
++	of_node_put(np);
+ 	if (!m4if_base) {
+ 		pr_err("Unable to map M4IF registers\n");
+ 		return;
+diff --git a/arch/arm64/boot/dts/renesas/r8a77990.dtsi b/arch/arm64/boot/dts/renesas/r8a77990.dtsi
+index b2f606e286ce..327d12097643 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77990.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77990.dtsi
+@@ -2,7 +2,7 @@
+ /*
+  * Device Tree Source for the R-Car E3 (R8A77990) SoC
+  *
+- * Copyright (C) 2018 Renesas Electronics Corp.
++ * Copyright (C) 2018-2019 Renesas Electronics Corp.
+  */
+ 
+ #include <dt-bindings/clock/r8a77990-cpg-mssr.h>
+@@ -1040,9 +1040,8 @@
+ 				 <&cpg CPG_CORE R8A77990_CLK_S3D1C>,
+ 				 <&scif_clk>;
+ 			clock-names = "fck", "brg_int", "scif_clk";
+-			dmas = <&dmac1 0x5b>, <&dmac1 0x5a>,
+-			       <&dmac2 0x5b>, <&dmac2 0x5a>;
+-			dma-names = "tx", "rx", "tx", "rx";
++			dmas = <&dmac0 0x5b>, <&dmac0 0x5a>;
++			dma-names = "tx", "rx";
+ 			power-domains = <&sysc R8A77990_PD_ALWAYS_ON>;
+ 			resets = <&cpg 202>;
+ 			status = "disabled";
+diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
+index 8af4b1befa42..c246effd1b67 100644
+--- a/arch/arm64/include/asm/kvm_mmu.h
++++ b/arch/arm64/include/asm/kvm_mmu.h
+@@ -444,6 +444,17 @@ static inline int kvm_read_guest_lock(struct kvm *kvm,
+ 	return ret;
+ }
+ 
++static inline int kvm_write_guest_lock(struct kvm *kvm, gpa_t gpa,
++				       const void *data, unsigned long len)
++{
++	int srcu_idx = srcu_read_lock(&kvm->srcu);
++	int ret = kvm_write_guest(kvm, gpa, data, len);
++
++	srcu_read_unlock(&kvm->srcu, srcu_idx);
++
++	return ret;
++}
++
+ #ifdef CONFIG_KVM_INDIRECT_VECTORS
+ /*
+  * EL2 vectors can be mapped and rerouted in a number of ways,
+diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
+index f16a5f8ff2b4..e2a0500cd7a2 100644
+--- a/arch/arm64/kvm/reset.c
++++ b/arch/arm64/kvm/reset.c
+@@ -123,6 +123,9 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
+ 	int ret = -EINVAL;
+ 	bool loaded;
+ 
++	/* Reset PMU outside of the non-preemptible section */
++	kvm_pmu_vcpu_reset(vcpu);
++
+ 	preempt_disable();
+ 	loaded = (vcpu->cpu != -1);
+ 	if (loaded)
+@@ -170,9 +173,6 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
+ 		vcpu->arch.reset_state.reset = false;
+ 	}
+ 
+-	/* Reset PMU */
+-	kvm_pmu_vcpu_reset(vcpu);
+-
+ 	/* Default workaround setup is enabled (if supported) */
+ 	if (kvm_arm_have_ssbd() == KVM_SSBD_KERNEL)
+ 		vcpu->arch.workaround_flags |= VCPU_WORKAROUND_2_FLAG;
+diff --git a/arch/s390/include/asm/elf.h b/arch/s390/include/asm/elf.h
+index 7d22a474a040..f74639a05f0f 100644
+--- a/arch/s390/include/asm/elf.h
++++ b/arch/s390/include/asm/elf.h
+@@ -252,11 +252,14 @@ do {								\
+ 
+ /*
+  * Cache aliasing on the latest machines calls for a mapping granularity
+- * of 512KB. For 64-bit processes use a 512KB alignment and a randomization
+- * of up to 1GB. For 31-bit processes the virtual address space is limited,
+- * use no alignment and limit the randomization to 8MB.
++ * of 512KB for the anonymous mapping base. For 64-bit processes use a
++ * 512KB alignment and a randomization of up to 1GB. For 31-bit processes
++ * the virtual address space is limited, use no alignment and limit the
++ * randomization to 8MB.
++ * For the additional randomization of the program break use 32MB for
++ * 64-bit and 8MB for 31-bit.
+  */
+-#define BRK_RND_MASK	(is_compat_task() ? 0x7ffUL : 0x3ffffUL)
++#define BRK_RND_MASK	(is_compat_task() ? 0x7ffUL : 0x1fffUL)
+ #define MMAP_RND_MASK	(is_compat_task() ? 0x7ffUL : 0x3ff80UL)
+ #define MMAP_ALIGN_MASK	(is_compat_task() ? 0 : 0x7fUL)
+ #define STACK_RND_MASK	MMAP_RND_MASK
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 71d763ad2637..9f2d890733a9 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -1198,6 +1198,8 @@ struct kvm_x86_ops {
+ 	int (*nested_enable_evmcs)(struct kvm_vcpu *vcpu,
+ 				   uint16_t *vmcs_version);
+ 	uint16_t (*nested_get_evmcs_version)(struct kvm_vcpu *vcpu);
++
++	bool (*need_emulation_on_page_fault)(struct kvm_vcpu *vcpu);
+ };
+ 
+ struct kvm_arch_async_pf {
+diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
+index 89d20ed1d2e8..371c669696d7 100644
+--- a/arch/x86/kvm/hyperv.c
++++ b/arch/x86/kvm/hyperv.c
+@@ -526,7 +526,9 @@ static int stimer_set_config(struct kvm_vcpu_hv_stimer *stimer, u64 config,
+ 		new_config.enable = 0;
+ 	stimer->config.as_uint64 = new_config.as_uint64;
+ 
+-	stimer_mark_pending(stimer, false);
++	if (stimer->config.enable)
++		stimer_mark_pending(stimer, false);
++
+ 	return 0;
+ }
+ 
+@@ -542,7 +544,10 @@ static int stimer_set_count(struct kvm_vcpu_hv_stimer *stimer, u64 count,
+ 		stimer->config.enable = 0;
+ 	else if (stimer->config.auto_enable)
+ 		stimer->config.enable = 1;
+-	stimer_mark_pending(stimer, false);
++
++	if (stimer->config.enable)
++		stimer_mark_pending(stimer, false);
++
+ 	return 0;
+ }
+ 
+diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
+index 9ab33cab9486..77dbb57412cc 100644
+--- a/arch/x86/kvm/mmu.c
++++ b/arch/x86/kvm/mmu.c
+@@ -4915,11 +4915,15 @@ static union kvm_mmu_role
+ kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty,
+ 				   bool execonly)
+ {
+-	union kvm_mmu_role role;
++	union kvm_mmu_role role = {0};
++	union kvm_mmu_page_role root_base = vcpu->arch.root_mmu.mmu_role.base;
+ 
+-	/* Base role is inherited from root_mmu */
+-	role.base.word = vcpu->arch.root_mmu.mmu_role.base.word;
+-	role.ext = kvm_calc_mmu_role_ext(vcpu);
++	/* Legacy paging and SMM flags are inherited from root_mmu */
++	role.base.smm = root_base.smm;
++	role.base.nxe = root_base.nxe;
++	role.base.cr0_wp = root_base.cr0_wp;
++	role.base.smep_andnot_wp = root_base.smep_andnot_wp;
++	role.base.smap_andnot_wp = root_base.smap_andnot_wp;
+ 
+ 	role.base.level = PT64_ROOT_4LEVEL;
+ 	role.base.direct = false;
+@@ -4927,6 +4931,7 @@ kvm_calc_shadow_ept_root_page_role(struct kvm_vcpu *vcpu, bool accessed_dirty,
+ 	role.base.guest_mode = true;
+ 	role.base.access = ACC_ALL;
+ 
++	role.ext = kvm_calc_mmu_role_ext(vcpu);
+ 	role.ext.execonly = execonly;
+ 
+ 	return role;
+@@ -5390,10 +5395,12 @@ emulate:
+ 	 * This can happen if a guest gets a page-fault on data access but the HW
+ 	 * table walker is not able to read the instruction page (e.g instruction
+ 	 * page is not present in memory). In those cases we simply restart the
+-	 * guest.
++	 * guest, with the exception of AMD Erratum 1096 which is unrecoverable.
+ 	 */
+-	if (unlikely(insn && !insn_len))
+-		return 1;
++	if (unlikely(insn && !insn_len)) {
++		if (!kvm_x86_ops->need_emulation_on_page_fault(vcpu))
++			return 1;
++	}
+ 
+ 	er = x86_emulate_instruction(vcpu, cr2, emulation_type, insn, insn_len);
+ 
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index 516c1de03d47..e544cec812f9 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -7114,6 +7114,36 @@ static int nested_enable_evmcs(struct kvm_vcpu *vcpu,
+ 	return -ENODEV;
+ }
+ 
++static bool svm_need_emulation_on_page_fault(struct kvm_vcpu *vcpu)
++{
++	bool is_user, smap;
++
++	is_user = svm_get_cpl(vcpu) == 3;
++	smap = !kvm_read_cr4_bits(vcpu, X86_CR4_SMAP);
++
++	/*
++	 * Detect and workaround Errata 1096 Fam_17h_00_0Fh
++	 *
++	 * In non SEV guest, hypervisor will be able to read the guest
++	 * memory to decode the instruction pointer when insn_len is zero
++	 * so we return true to indicate that decoding is possible.
++	 *
++	 * But in the SEV guest, the guest memory is encrypted with the
++	 * guest specific key and hypervisor will not be able to decode the
++	 * instruction pointer so we will not able to workaround it. Lets
++	 * print the error and request to kill the guest.
++	 */
++	if (is_user && smap) {
++		if (!sev_guest(vcpu->kvm))
++			return true;
++
++		pr_err_ratelimited("KVM: Guest triggered AMD Erratum 1096\n");
++		kvm_make_request(KVM_REQ_TRIPLE_FAULT, vcpu);
++	}
++
++	return false;
++}
++
+ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
+ 	.cpu_has_kvm_support = has_svm,
+ 	.disabled_by_bios = is_disabled,
+@@ -7247,6 +7277,8 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
+ 
+ 	.nested_enable_evmcs = nested_enable_evmcs,
+ 	.nested_get_evmcs_version = nested_get_evmcs_version,
++
++	.need_emulation_on_page_fault = svm_need_emulation_on_page_fault,
+ };
+ 
+ static int __init svm_init(void)
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 34499081022c..e7fe8c692362 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -7526,6 +7526,11 @@ static int enable_smi_window(struct kvm_vcpu *vcpu)
+ 	return 0;
+ }
+ 
++static bool vmx_need_emulation_on_page_fault(struct kvm_vcpu *vcpu)
++{
++	return 0;
++}
++
+ static __init int hardware_setup(void)
+ {
+ 	unsigned long host_bndcfgs;
+@@ -7828,6 +7833,7 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
+ 	.set_nested_state = NULL,
+ 	.get_vmcs12_pages = NULL,
+ 	.nested_enable_evmcs = NULL,
++	.need_emulation_on_page_fault = vmx_need_emulation_on_page_fault,
+ };
+ 
+ static void vmx_cleanup_l1d_flush(void)
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 2db58067bb59..8c9fb6453b2f 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -1127,7 +1127,7 @@ static u32 msrs_to_save[] = {
+ #endif
+ 	MSR_IA32_TSC, MSR_IA32_CR_PAT, MSR_VM_HSAVE_PA,
+ 	MSR_IA32_FEATURE_CONTROL, MSR_IA32_BNDCFGS, MSR_TSC_AUX,
+-	MSR_IA32_SPEC_CTRL, MSR_IA32_ARCH_CAPABILITIES,
++	MSR_IA32_SPEC_CTRL,
+ 	MSR_IA32_RTIT_CTL, MSR_IA32_RTIT_STATUS, MSR_IA32_RTIT_CR3_MATCH,
+ 	MSR_IA32_RTIT_OUTPUT_BASE, MSR_IA32_RTIT_OUTPUT_MASK,
+ 	MSR_IA32_RTIT_ADDR0_A, MSR_IA32_RTIT_ADDR0_B,
+@@ -1160,6 +1160,7 @@ static u32 emulated_msrs[] = {
+ 
+ 	MSR_IA32_TSC_ADJUST,
+ 	MSR_IA32_TSCDEADLINE,
++	MSR_IA32_ARCH_CAPABILITIES,
+ 	MSR_IA32_MISC_ENABLE,
+ 	MSR_IA32_MCG_STATUS,
+ 	MSR_IA32_MCG_CTL,
+diff --git a/arch/x86/mm/mmap.c b/arch/x86/mm/mmap.c
+index db3165714521..dc726e07d8ba 100644
+--- a/arch/x86/mm/mmap.c
++++ b/arch/x86/mm/mmap.c
+@@ -230,7 +230,7 @@ bool mmap_address_hint_valid(unsigned long addr, unsigned long len)
+ /* Can we access it for direct reading/writing? Must be RAM: */
+ int valid_phys_addr_range(phys_addr_t addr, size_t count)
+ {
+-	return addr + count <= __pa(high_memory);
++	return addr + count - 1 <= __pa(high_memory - 1);
+ }
+ 
+ /* Can we access it through mmap? Must be a valid physical address: */
+diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c
+index d10105825d57..47d097946872 100644
+--- a/arch/x86/realmode/init.c
++++ b/arch/x86/realmode/init.c
+@@ -20,8 +20,6 @@ void __init set_real_mode_mem(phys_addr_t mem, size_t size)
+ 	void *base = __va(mem);
+ 
+ 	real_mode_header = (struct real_mode_header *) base;
+-	printk(KERN_DEBUG "Base memory trampoline at [%p] %llx size %zu\n",
+-	       base, (unsigned long long)mem, size);
+ }
+ 
+ void __init reserve_real_mode(void)
+diff --git a/drivers/acpi/acpica/evgpe.c b/drivers/acpi/acpica/evgpe.c
+index 4424997ecf30..e10fec99a182 100644
+--- a/drivers/acpi/acpica/evgpe.c
++++ b/drivers/acpi/acpica/evgpe.c
+@@ -81,12 +81,8 @@ acpi_status acpi_ev_enable_gpe(struct acpi_gpe_event_info *gpe_event_info)
+ 
+ 	ACPI_FUNCTION_TRACE(ev_enable_gpe);
+ 
+-	/* Clear the GPE status */
+-	status = acpi_hw_clear_gpe(gpe_event_info);
+-	if (ACPI_FAILURE(status))
+-		return_ACPI_STATUS(status);
+-
+ 	/* Enable the requested GPE */
++
+ 	status = acpi_hw_low_set_gpe(gpe_event_info, ACPI_GPE_ENABLE);
+ 	return_ACPI_STATUS(status);
+ }
+diff --git a/drivers/ata/libata-zpodd.c b/drivers/ata/libata-zpodd.c
+index b3ed8f9953a8..173e6f2dd9af 100644
+--- a/drivers/ata/libata-zpodd.c
++++ b/drivers/ata/libata-zpodd.c
+@@ -52,38 +52,52 @@ static int eject_tray(struct ata_device *dev)
+ /* Per the spec, only slot type and drawer type ODD can be supported */
+ static enum odd_mech_type zpodd_get_mech_type(struct ata_device *dev)
+ {
+-	char buf[16];
++	char *buf;
+ 	unsigned int ret;
+-	struct rm_feature_desc *desc = (void *)(buf + 8);
++	struct rm_feature_desc *desc;
+ 	struct ata_taskfile tf;
+ 	static const char cdb[] = {  GPCMD_GET_CONFIGURATION,
+ 			2,      /* only 1 feature descriptor requested */
+ 			0, 3,   /* 3, removable medium feature */
+ 			0, 0, 0,/* reserved */
+-			0, sizeof(buf),
++			0, 16,
+ 			0, 0, 0,
+ 	};
+ 
++	buf = kzalloc(16, GFP_KERNEL);
++	if (!buf)
++		return ODD_MECH_TYPE_UNSUPPORTED;
++	desc = (void *)(buf + 8);
++
+ 	ata_tf_init(dev, &tf);
+ 	tf.flags = ATA_TFLAG_ISADDR | ATA_TFLAG_DEVICE;
+ 	tf.command = ATA_CMD_PACKET;
+ 	tf.protocol = ATAPI_PROT_PIO;
+-	tf.lbam = sizeof(buf);
++	tf.lbam = 16;
+ 
+ 	ret = ata_exec_internal(dev, &tf, cdb, DMA_FROM_DEVICE,
+-				buf, sizeof(buf), 0);
+-	if (ret)
++				buf, 16, 0);
++	if (ret) {
++		kfree(buf);
+ 		return ODD_MECH_TYPE_UNSUPPORTED;
++	}
+ 
+-	if (be16_to_cpu(desc->feature_code) != 3)
++	if (be16_to_cpu(desc->feature_code) != 3) {
++		kfree(buf);
+ 		return ODD_MECH_TYPE_UNSUPPORTED;
++	}
+ 
+-	if (desc->mech_type == 0 && desc->load == 0 && desc->eject == 1)
++	if (desc->mech_type == 0 && desc->load == 0 && desc->eject == 1) {
++		kfree(buf);
+ 		return ODD_MECH_TYPE_SLOT;
+-	else if (desc->mech_type == 1 && desc->load == 0 && desc->eject == 1)
++	} else if (desc->mech_type == 1 && desc->load == 0 &&
++		   desc->eject == 1) {
++		kfree(buf);
+ 		return ODD_MECH_TYPE_DRAWER;
+-	else
++	} else {
++		kfree(buf);
+ 		return ODD_MECH_TYPE_UNSUPPORTED;
++	}
+ }
+ 
+ /* Test if ODD is zero power ready by sense code */
+diff --git a/drivers/gpio/gpio-aspeed.c b/drivers/gpio/gpio-aspeed.c
+index 854bce4fb9e7..217507002dbc 100644
+--- a/drivers/gpio/gpio-aspeed.c
++++ b/drivers/gpio/gpio-aspeed.c
+@@ -1224,6 +1224,8 @@ static int __init aspeed_gpio_probe(struct platform_device *pdev)
+ 
+ 	gpio->offset_timer =
+ 		devm_kzalloc(&pdev->dev, gpio->chip.ngpio, GFP_KERNEL);
++	if (!gpio->offset_timer)
++		return -ENOMEM;
+ 
+ 	return aspeed_gpio_setup_irqs(gpio, pdev);
+ }
+diff --git a/drivers/gpio/gpiolib-of.c b/drivers/gpio/gpiolib-of.c
+index a1dd2f1c0d02..13a402ede07a 100644
+--- a/drivers/gpio/gpiolib-of.c
++++ b/drivers/gpio/gpiolib-of.c
+@@ -119,7 +119,8 @@ static void of_gpio_flags_quirks(struct device_node *np,
+ 	 * to determine if the flags should have inverted semantics.
+ 	 */
+ 	if (IS_ENABLED(CONFIG_SPI_MASTER) &&
+-	    of_property_read_bool(np, "cs-gpios")) {
++	    of_property_read_bool(np, "cs-gpios") &&
++	    !strcmp(propname, "cs-gpios")) {
+ 		struct device_node *child;
+ 		u32 cs;
+ 		int ret;
+@@ -141,16 +142,16 @@ static void of_gpio_flags_quirks(struct device_node *np,
+ 				 * conflict and the "spi-cs-high" flag will
+ 				 * take precedence.
+ 				 */
+-				if (of_property_read_bool(np, "spi-cs-high")) {
++				if (of_property_read_bool(child, "spi-cs-high")) {
+ 					if (*flags & OF_GPIO_ACTIVE_LOW) {
+ 						pr_warn("%s GPIO handle specifies active low - ignored\n",
+-							of_node_full_name(np));
++							of_node_full_name(child));
+ 						*flags &= ~OF_GPIO_ACTIVE_LOW;
+ 					}
+ 				} else {
+ 					if (!(*flags & OF_GPIO_ACTIVE_LOW))
+ 						pr_info("%s enforce active low on chipselect handle\n",
+-							of_node_full_name(np));
++							of_node_full_name(child));
+ 					*flags |= OF_GPIO_ACTIVE_LOW;
+ 				}
+ 				break;
+@@ -711,7 +712,13 @@ int of_gpiochip_add(struct gpio_chip *chip)
+ 
+ 	of_node_get(chip->of_node);
+ 
+-	return of_gpiochip_scan_gpios(chip);
++	status = of_gpiochip_scan_gpios(chip);
++	if (status) {
++		of_node_put(chip->of_node);
++		gpiochip_remove_pin_ranges(chip);
++	}
++
++	return status;
+ }
+ 
+ void of_gpiochip_remove(struct gpio_chip *chip)
+diff --git a/drivers/gpu/drm/drm_drv.c b/drivers/gpu/drm/drm_drv.c
+index 12e5e2be7890..7a59b8b3ed5a 100644
+--- a/drivers/gpu/drm/drm_drv.c
++++ b/drivers/gpu/drm/drm_drv.c
+@@ -381,11 +381,7 @@ void drm_dev_unplug(struct drm_device *dev)
+ 	synchronize_srcu(&drm_unplug_srcu);
+ 
+ 	drm_dev_unregister(dev);
+-
+-	mutex_lock(&drm_global_mutex);
+-	if (dev->open_count == 0)
+-		drm_dev_put(dev);
+-	mutex_unlock(&drm_global_mutex);
++	drm_dev_put(dev);
+ }
+ EXPORT_SYMBOL(drm_dev_unplug);
+ 
+diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
+index 46f48f245eb5..3f20f598cd7c 100644
+--- a/drivers/gpu/drm/drm_file.c
++++ b/drivers/gpu/drm/drm_file.c
+@@ -479,11 +479,9 @@ int drm_release(struct inode *inode, struct file *filp)
+ 
+ 	drm_file_free(file_priv);
+ 
+-	if (!--dev->open_count) {
++	if (!--dev->open_count)
+ 		drm_lastclose(dev);
+-		if (drm_dev_is_unplugged(dev))
+-			drm_put_dev(dev);
+-	}
++
+ 	mutex_unlock(&drm_global_mutex);
+ 
+ 	drm_minor_release(minor);
+diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
+index dcd1df5322e8..21c6016ccba5 100644
+--- a/drivers/gpu/drm/i915/intel_dp.c
++++ b/drivers/gpu/drm/i915/intel_dp.c
+@@ -1871,6 +1871,9 @@ static bool intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
+ 	u8 dsc_max_bpc;
+ 	int pipe_bpp;
+ 
++	pipe_config->fec_enable = !intel_dp_is_edp(intel_dp) &&
++		intel_dp_supports_fec(intel_dp, pipe_config);
++
+ 	if (!intel_dp_supports_dsc(intel_dp, pipe_config))
+ 		return false;
+ 
+@@ -2097,9 +2100,6 @@ intel_dp_compute_config(struct intel_encoder *encoder,
+ 	if (adjusted_mode->flags & DRM_MODE_FLAG_DBLCLK)
+ 		return false;
+ 
+-	pipe_config->fec_enable = !intel_dp_is_edp(intel_dp) &&
+-				  intel_dp_supports_fec(intel_dp, pipe_config);
+-
+ 	if (!intel_dp_compute_link_config(encoder, pipe_config, conn_state))
+ 		return false;
+ 
+diff --git a/drivers/gpu/drm/meson/meson_drv.c b/drivers/gpu/drm/meson/meson_drv.c
+index 12ff47b13668..a13704ab5d11 100644
+--- a/drivers/gpu/drm/meson/meson_drv.c
++++ b/drivers/gpu/drm/meson/meson_drv.c
+@@ -317,12 +317,14 @@ static int meson_drv_bind_master(struct device *dev, bool has_components)
+ 
+ 	ret = drm_dev_register(drm, 0);
+ 	if (ret)
+-		goto free_drm;
++		goto uninstall_irq;
+ 
+ 	drm_fbdev_generic_setup(drm, 32);
+ 
+ 	return 0;
+ 
++uninstall_irq:
++	drm_irq_uninstall(drm);
+ free_drm:
+ 	drm_dev_put(drm);
+ 
+@@ -336,8 +338,8 @@ static int meson_drv_bind(struct device *dev)
+ 
+ static void meson_drv_unbind(struct device *dev)
+ {
+-	struct drm_device *drm = dev_get_drvdata(dev);
+-	struct meson_drm *priv = drm->dev_private;
++	struct meson_drm *priv = dev_get_drvdata(dev);
++	struct drm_device *drm = priv->drm;
+ 
+ 	if (priv->canvas) {
+ 		meson_canvas_free(priv->canvas, priv->canvas_id_osd1);
+@@ -347,6 +349,7 @@ static void meson_drv_unbind(struct device *dev)
+ 	}
+ 
+ 	drm_dev_unregister(drm);
++	drm_irq_uninstall(drm);
+ 	drm_kms_helper_poll_fini(drm);
+ 	drm_mode_config_cleanup(drm);
+ 	drm_dev_put(drm);
+diff --git a/drivers/gpu/drm/tegra/hub.c b/drivers/gpu/drm/tegra/hub.c
+index 922a48d5a483..c7c612579270 100644
+--- a/drivers/gpu/drm/tegra/hub.c
++++ b/drivers/gpu/drm/tegra/hub.c
+@@ -378,14 +378,16 @@ static int tegra_shared_plane_atomic_check(struct drm_plane *plane,
+ static void tegra_shared_plane_atomic_disable(struct drm_plane *plane,
+ 					      struct drm_plane_state *old_state)
+ {
+-	struct tegra_dc *dc = to_tegra_dc(old_state->crtc);
+ 	struct tegra_plane *p = to_tegra_plane(plane);
++	struct tegra_dc *dc;
+ 	u32 value;
+ 
+ 	/* rien ne va plus */
+ 	if (!old_state || !old_state->crtc)
+ 		return;
+ 
++	dc = to_tegra_dc(old_state->crtc);
++
+ 	/*
+ 	 * XXX Legacy helpers seem to sometimes call ->atomic_disable() even
+ 	 * on planes that are already disabled. Make sure we fallback to the
+diff --git a/drivers/i2c/busses/Kconfig b/drivers/i2c/busses/Kconfig
+index f2c681971201..f8979abb9a19 100644
+--- a/drivers/i2c/busses/Kconfig
++++ b/drivers/i2c/busses/Kconfig
+@@ -131,6 +131,7 @@ config I2C_I801
+ 	    Cannon Lake (PCH)
+ 	    Cedar Fork (PCH)
+ 	    Ice Lake (PCH)
++	    Comet Lake (PCH)
+ 
+ 	  This driver can also be built as a module.  If so, the module
+ 	  will be called i2c-i801.
+diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c
+index c91e145ef5a5..679c6c41f64b 100644
+--- a/drivers/i2c/busses/i2c-i801.c
++++ b/drivers/i2c/busses/i2c-i801.c
+@@ -71,6 +71,7 @@
+  * Cannon Lake-LP (PCH)		0x9da3	32	hard	yes	yes	yes
+  * Cedar Fork (PCH)		0x18df	32	hard	yes	yes	yes
+  * Ice Lake-LP (PCH)		0x34a3	32	hard	yes	yes	yes
++ * Comet Lake (PCH)		0x02a3	32	hard	yes	yes	yes
+  *
+  * Features supported by this driver:
+  * Software PEC				no
+@@ -240,6 +241,7 @@
+ #define PCI_DEVICE_ID_INTEL_LEWISBURG_SSKU_SMBUS	0xa223
+ #define PCI_DEVICE_ID_INTEL_KABYLAKE_PCH_H_SMBUS	0xa2a3
+ #define PCI_DEVICE_ID_INTEL_CANNONLAKE_H_SMBUS		0xa323
++#define PCI_DEVICE_ID_INTEL_COMETLAKE_SMBUS		0x02a3
+ 
+ struct i801_mux_config {
+ 	char *gpio_chip;
+@@ -1038,6 +1040,7 @@ static const struct pci_device_id i801_ids[] = {
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CANNONLAKE_H_SMBUS) },
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_CANNONLAKE_LP_SMBUS) },
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICELAKE_LP_SMBUS) },
++	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_COMETLAKE_SMBUS) },
+ 	{ 0, }
+ };
+ 
+@@ -1534,6 +1537,7 @@ static int i801_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ 	case PCI_DEVICE_ID_INTEL_DNV_SMBUS:
+ 	case PCI_DEVICE_ID_INTEL_KABYLAKE_PCH_H_SMBUS:
+ 	case PCI_DEVICE_ID_INTEL_ICELAKE_LP_SMBUS:
++	case PCI_DEVICE_ID_INTEL_COMETLAKE_SMBUS:
+ 		priv->features |= FEATURE_I2C_BLOCK_READ;
+ 		priv->features |= FEATURE_IRQ;
+ 		priv->features |= FEATURE_SMBUS_PEC;
+diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
+index e628ef23418f..55b3e4b9d5dc 100644
+--- a/drivers/iommu/amd_iommu.c
++++ b/drivers/iommu/amd_iommu.c
+@@ -3166,21 +3166,24 @@ static void amd_iommu_get_resv_regions(struct device *dev,
+ 		return;
+ 
+ 	list_for_each_entry(entry, &amd_iommu_unity_map, list) {
++		int type, prot = 0;
+ 		size_t length;
+-		int prot = 0;
+ 
+ 		if (devid < entry->devid_start || devid > entry->devid_end)
+ 			continue;
+ 
++		type   = IOMMU_RESV_DIRECT;
+ 		length = entry->address_end - entry->address_start;
+ 		if (entry->prot & IOMMU_PROT_IR)
+ 			prot |= IOMMU_READ;
+ 		if (entry->prot & IOMMU_PROT_IW)
+ 			prot |= IOMMU_WRITE;
++		if (entry->prot & IOMMU_UNITY_MAP_FLAG_EXCL_RANGE)
++			/* Exclusion range */
++			type = IOMMU_RESV_RESERVED;
+ 
+ 		region = iommu_alloc_resv_region(entry->address_start,
+-						 length, prot,
+-						 IOMMU_RESV_DIRECT);
++						 length, prot, type);
+ 		if (!region) {
+ 			pr_err("Out of memory allocating dm-regions for %s\n",
+ 				dev_name(dev));
+diff --git a/drivers/iommu/amd_iommu_init.c b/drivers/iommu/amd_iommu_init.c
+index 66123b911ec8..84fa5b22371e 100644
+--- a/drivers/iommu/amd_iommu_init.c
++++ b/drivers/iommu/amd_iommu_init.c
+@@ -2013,6 +2013,9 @@ static int __init init_unity_map_range(struct ivmd_header *m)
+ 	if (e == NULL)
+ 		return -ENOMEM;
+ 
++	if (m->flags & IVMD_FLAG_EXCL_RANGE)
++		init_exclusion_range(m);
++
+ 	switch (m->type) {
+ 	default:
+ 		kfree(e);
+@@ -2059,9 +2062,7 @@ static int __init init_memory_definitions(struct acpi_table_header *table)
+ 
+ 	while (p < end) {
+ 		m = (struct ivmd_header *)p;
+-		if (m->flags & IVMD_FLAG_EXCL_RANGE)
+-			init_exclusion_range(m);
+-		else if (m->flags & IVMD_FLAG_UNITY_MAP)
++		if (m->flags & (IVMD_FLAG_UNITY_MAP | IVMD_FLAG_EXCL_RANGE))
+ 			init_unity_map_range(m);
+ 
+ 		p += m->length;
+diff --git a/drivers/iommu/amd_iommu_types.h b/drivers/iommu/amd_iommu_types.h
+index eae0741f72dc..87965e4d9647 100644
+--- a/drivers/iommu/amd_iommu_types.h
++++ b/drivers/iommu/amd_iommu_types.h
+@@ -374,6 +374,8 @@
+ #define IOMMU_PROT_IR 0x01
+ #define IOMMU_PROT_IW 0x02
+ 
++#define IOMMU_UNITY_MAP_FLAG_EXCL_RANGE	(1 << 2)
++
+ /* IOMMU capabilities */
+ #define IOMMU_CAP_IOTLB   24
+ #define IOMMU_CAP_NPCACHE 26
+diff --git a/drivers/leds/leds-pca9532.c b/drivers/leds/leds-pca9532.c
+index 7fea18b0c15d..7cb4d685a1f1 100644
+--- a/drivers/leds/leds-pca9532.c
++++ b/drivers/leds/leds-pca9532.c
+@@ -513,6 +513,7 @@ static int pca9532_probe(struct i2c_client *client,
+ 	const struct i2c_device_id *id)
+ {
+ 	int devid;
++	const struct of_device_id *of_id;
+ 	struct pca9532_data *data = i2c_get_clientdata(client);
+ 	struct pca9532_platform_data *pca9532_pdata =
+ 			dev_get_platdata(&client->dev);
+@@ -528,8 +529,11 @@ static int pca9532_probe(struct i2c_client *client,
+ 			dev_err(&client->dev, "no platform data\n");
+ 			return -EINVAL;
+ 		}
+-		devid = (int)(uintptr_t)of_match_device(
+-			of_pca9532_leds_match, &client->dev)->data;
++		of_id = of_match_device(of_pca9532_leds_match,
++				&client->dev);
++		if (unlikely(!of_id))
++			return -EINVAL;
++		devid = (int)(uintptr_t) of_id->data;
+ 	} else {
+ 		devid = id->driver_data;
+ 	}
+diff --git a/drivers/leds/trigger/ledtrig-netdev.c b/drivers/leds/trigger/ledtrig-netdev.c
+index 3dd3ed46d473..136f86a1627d 100644
+--- a/drivers/leds/trigger/ledtrig-netdev.c
++++ b/drivers/leds/trigger/ledtrig-netdev.c
+@@ -122,7 +122,8 @@ static ssize_t device_name_store(struct device *dev,
+ 		trigger_data->net_dev = NULL;
+ 	}
+ 
+-	strncpy(trigger_data->device_name, buf, size);
++	memcpy(trigger_data->device_name, buf, size);
++	trigger_data->device_name[size] = 0;
+ 	if (size > 0 && trigger_data->device_name[size - 1] == '\n')
+ 		trigger_data->device_name[size - 1] = 0;
+ 
+@@ -301,11 +302,11 @@ static int netdev_trig_notify(struct notifier_block *nb,
+ 		container_of(nb, struct led_netdev_data, notifier);
+ 
+ 	if (evt != NETDEV_UP && evt != NETDEV_DOWN && evt != NETDEV_CHANGE
+-	    && evt != NETDEV_REGISTER && evt != NETDEV_UNREGISTER
+-	    && evt != NETDEV_CHANGENAME)
++	    && evt != NETDEV_REGISTER && evt != NETDEV_UNREGISTER)
+ 		return NOTIFY_DONE;
+ 
+-	if (strcmp(dev->name, trigger_data->device_name))
++	if (!(dev == trigger_data->net_dev ||
++	      (evt == NETDEV_REGISTER && !strcmp(dev->name, trigger_data->device_name))))
+ 		return NOTIFY_DONE;
+ 
+ 	cancel_delayed_work_sync(&trigger_data->work);
+@@ -320,12 +321,9 @@ static int netdev_trig_notify(struct notifier_block *nb,
+ 		dev_hold(dev);
+ 		trigger_data->net_dev = dev;
+ 		break;
+-	case NETDEV_CHANGENAME:
+ 	case NETDEV_UNREGISTER:
+-		if (trigger_data->net_dev) {
+-			dev_put(trigger_data->net_dev);
+-			trigger_data->net_dev = NULL;
+-		}
++		dev_put(trigger_data->net_dev);
++		trigger_data->net_dev = NULL;
+ 		break;
+ 	case NETDEV_UP:
+ 	case NETDEV_CHANGE:
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index 2b2882615e8b..6cbe515bfdeb 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -3318,14 +3318,20 @@ static int macb_clk_init(struct platform_device *pdev, struct clk **pclk,
+ 		*hclk = devm_clk_get(&pdev->dev, "hclk");
+ 	}
+ 
+-	if (IS_ERR(*pclk)) {
++	if (IS_ERR_OR_NULL(*pclk)) {
+ 		err = PTR_ERR(*pclk);
++		if (!err)
++			err = -ENODEV;
++
+ 		dev_err(&pdev->dev, "failed to get macb_clk (%u)\n", err);
+ 		return err;
+ 	}
+ 
+-	if (IS_ERR(*hclk)) {
++	if (IS_ERR_OR_NULL(*hclk)) {
+ 		err = PTR_ERR(*hclk);
++		if (!err)
++			err = -ENODEV;
++
+ 		dev_err(&pdev->dev, "failed to get hclk (%u)\n", err);
+ 		return err;
+ 	}
+diff --git a/drivers/net/ethernet/ibm/ehea/ehea_main.c b/drivers/net/ethernet/ibm/ehea/ehea_main.c
+index 3baabdc89726..90b62c1412c8 100644
+--- a/drivers/net/ethernet/ibm/ehea/ehea_main.c
++++ b/drivers/net/ethernet/ibm/ehea/ehea_main.c
+@@ -3160,6 +3160,7 @@ static ssize_t ehea_probe_port(struct device *dev,
+ 
+ 	if (ehea_add_adapter_mr(adapter)) {
+ 		pr_err("creating MR failed\n");
++		of_node_put(eth_dn);
+ 		return -EIO;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/micrel/ks8851.c b/drivers/net/ethernet/micrel/ks8851.c
+index bd6e9014bc74..b83b070a9eec 100644
+--- a/drivers/net/ethernet/micrel/ks8851.c
++++ b/drivers/net/ethernet/micrel/ks8851.c
+@@ -535,9 +535,8 @@ static void ks8851_rx_pkts(struct ks8851_net *ks)
+ 		/* set dma read address */
+ 		ks8851_wrreg16(ks, KS_RXFDPR, RXFDPR_RXFPAI | 0x00);
+ 
+-		/* start the packet dma process, and set auto-dequeue rx */
+-		ks8851_wrreg16(ks, KS_RXQCR,
+-			       ks->rc_rxqcr | RXQCR_SDA | RXQCR_ADRFE);
++		/* start DMA access */
++		ks8851_wrreg16(ks, KS_RXQCR, ks->rc_rxqcr | RXQCR_SDA);
+ 
+ 		if (rxlen > 4) {
+ 			unsigned int rxalign;
+@@ -568,7 +567,8 @@ static void ks8851_rx_pkts(struct ks8851_net *ks)
+ 			}
+ 		}
+ 
+-		ks8851_wrreg16(ks, KS_RXQCR, ks->rc_rxqcr);
++		/* end DMA access and dequeue packet */
++		ks8851_wrreg16(ks, KS_RXQCR, ks->rc_rxqcr | RXQCR_RRXEF);
+ 	}
+ }
+ 
+@@ -785,6 +785,15 @@ static void ks8851_tx_work(struct work_struct *work)
+ static int ks8851_net_open(struct net_device *dev)
+ {
+ 	struct ks8851_net *ks = netdev_priv(dev);
++	int ret;
++
++	ret = request_threaded_irq(dev->irq, NULL, ks8851_irq,
++				   IRQF_TRIGGER_LOW | IRQF_ONESHOT,
++				   dev->name, ks);
++	if (ret < 0) {
++		netdev_err(dev, "failed to get irq\n");
++		return ret;
++	}
+ 
+ 	/* lock the card, even if we may not actually be doing anything
+ 	 * else at the moment */
+@@ -849,6 +858,7 @@ static int ks8851_net_open(struct net_device *dev)
+ 	netif_dbg(ks, ifup, ks->netdev, "network device up\n");
+ 
+ 	mutex_unlock(&ks->lock);
++	mii_check_link(&ks->mii);
+ 	return 0;
+ }
+ 
+@@ -899,6 +909,8 @@ static int ks8851_net_stop(struct net_device *dev)
+ 		dev_kfree_skb(txb);
+ 	}
+ 
++	free_irq(dev->irq, ks);
++
+ 	return 0;
+ }
+ 
+@@ -1508,6 +1520,7 @@ static int ks8851_probe(struct spi_device *spi)
+ 
+ 	spi_set_drvdata(spi, ks);
+ 
++	netif_carrier_off(ks->netdev);
+ 	ndev->if_port = IF_PORT_100BASET;
+ 	ndev->netdev_ops = &ks8851_netdev_ops;
+ 	ndev->irq = spi->irq;
+@@ -1529,14 +1542,6 @@ static int ks8851_probe(struct spi_device *spi)
+ 	ks8851_read_selftest(ks);
+ 	ks8851_init_mac(ks);
+ 
+-	ret = request_threaded_irq(spi->irq, NULL, ks8851_irq,
+-				   IRQF_TRIGGER_LOW | IRQF_ONESHOT,
+-				   ndev->name, ks);
+-	if (ret < 0) {
+-		dev_err(&spi->dev, "failed to get irq\n");
+-		goto err_irq;
+-	}
+-
+ 	ret = register_netdev(ndev);
+ 	if (ret) {
+ 		dev_err(&spi->dev, "failed to register network device\n");
+@@ -1549,14 +1554,10 @@ static int ks8851_probe(struct spi_device *spi)
+ 
+ 	return 0;
+ 
+-
+ err_netdev:
+-	free_irq(ndev->irq, ks);
+-
+-err_irq:
++err_id:
+ 	if (gpio_is_valid(gpio))
+ 		gpio_set_value(gpio, 0);
+-err_id:
+ 	regulator_disable(ks->vdd_reg);
+ err_reg:
+ 	regulator_disable(ks->vdd_io);
+@@ -1574,7 +1575,6 @@ static int ks8851_remove(struct spi_device *spi)
+ 		dev_info(&spi->dev, "remove\n");
+ 
+ 	unregister_netdev(priv->netdev);
+-	free_irq(spi->irq, priv);
+ 	if (gpio_is_valid(priv->gpio))
+ 		gpio_set_value(priv->gpio, 0);
+ 	regulator_disable(priv->vdd_reg);
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_ethtool.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_ethtool.c
+index 3b0adda7cc9c..a4cd6f2cfb86 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_ethtool.c
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_ethtool.c
+@@ -1048,6 +1048,8 @@ int qlcnic_do_lb_test(struct qlcnic_adapter *adapter, u8 mode)
+ 
+ 	for (i = 0; i < QLCNIC_NUM_ILB_PKT; i++) {
+ 		skb = netdev_alloc_skb(adapter->netdev, QLCNIC_ILB_PKT_SIZE);
++		if (!skb)
++			break;
+ 		qlcnic_create_loopback_buff(skb->data, adapter->mac_addr);
+ 		skb_put(skb, QLCNIC_ILB_PKT_SIZE);
+ 		adapter->ahw->diag_cnt = 0;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/ring_mode.c b/drivers/net/ethernet/stmicro/stmmac/ring_mode.c
+index c0c75c111abb..4d9bcb4d0378 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/ring_mode.c
++++ b/drivers/net/ethernet/stmicro/stmmac/ring_mode.c
+@@ -59,7 +59,7 @@ static int jumbo_frm(void *p, struct sk_buff *skb, int csum)
+ 
+ 		desc->des3 = cpu_to_le32(des2 + BUF_SIZE_4KiB);
+ 		stmmac_prepare_tx_desc(priv, desc, 1, bmax, csum,
+-				STMMAC_RING_MODE, 1, false, skb->len);
++				STMMAC_RING_MODE, 0, false, skb->len);
+ 		tx_q->tx_skbuff[entry] = NULL;
+ 		entry = STMMAC_GET_ENTRY(entry, DMA_TX_SIZE);
+ 
+@@ -79,7 +79,8 @@ static int jumbo_frm(void *p, struct sk_buff *skb, int csum)
+ 
+ 		desc->des3 = cpu_to_le32(des2 + BUF_SIZE_4KiB);
+ 		stmmac_prepare_tx_desc(priv, desc, 0, len, csum,
+-				STMMAC_RING_MODE, 1, true, skb->len);
++				STMMAC_RING_MODE, 1, !skb_is_nonlinear(skb),
++				skb->len);
+ 	} else {
+ 		des2 = dma_map_single(priv->device, skb->data,
+ 				      nopaged_len, DMA_TO_DEVICE);
+@@ -91,7 +92,8 @@ static int jumbo_frm(void *p, struct sk_buff *skb, int csum)
+ 		tx_q->tx_skbuff_dma[entry].is_jumbo = true;
+ 		desc->des3 = cpu_to_le32(des2 + BUF_SIZE_4KiB);
+ 		stmmac_prepare_tx_desc(priv, desc, 1, nopaged_len, csum,
+-				STMMAC_RING_MODE, 1, true, skb->len);
++				STMMAC_RING_MODE, 0, !skb_is_nonlinear(skb),
++				skb->len);
+ 	}
+ 
+ 	tx_q->cur_tx = entry;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 1d8d6f2ddfd6..0bc3632880b5 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -3190,14 +3190,16 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
+ 		stmmac_prepare_tx_desc(priv, first, 1, nopaged_len,
+ 				csum_insertion, priv->mode, 1, last_segment,
+ 				skb->len);
+-
+-		/* The own bit must be the latest setting done when prepare the
+-		 * descriptor and then barrier is needed to make sure that
+-		 * all is coherent before granting the DMA engine.
+-		 */
+-		wmb();
++	} else {
++		stmmac_set_tx_owner(priv, first);
+ 	}
+ 
++	/* The own bit must be the latest setting done when prepare the
++	 * descriptor and then barrier is needed to make sure that
++	 * all is coherent before granting the DMA engine.
++	 */
++	wmb();
++
+ 	netdev_tx_sent_queue(netdev_get_tx_queue(dev, queue), skb->len);
+ 
+ 	stmmac_enable_dma_transmission(priv, priv->ioaddr);
+diff --git a/drivers/net/ethernet/ti/netcp_ethss.c b/drivers/net/ethernet/ti/netcp_ethss.c
+index 5174d318901e..0a920c5936b2 100644
+--- a/drivers/net/ethernet/ti/netcp_ethss.c
++++ b/drivers/net/ethernet/ti/netcp_ethss.c
+@@ -3657,12 +3657,16 @@ static int gbe_probe(struct netcp_device *netcp_device, struct device *dev,
+ 
+ 	ret = netcp_txpipe_init(&gbe_dev->tx_pipe, netcp_device,
+ 				gbe_dev->dma_chan_name, gbe_dev->tx_queue_id);
+-	if (ret)
++	if (ret) {
++		of_node_put(interfaces);
+ 		return ret;
++	}
+ 
+ 	ret = netcp_txpipe_open(&gbe_dev->tx_pipe);
+-	if (ret)
++	if (ret) {
++		of_node_put(interfaces);
+ 		return ret;
++	}
+ 
+ 	/* Create network interfaces */
+ 	INIT_LIST_HEAD(&gbe_dev->gbe_intf_head);
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+index 0789d8af7d72..1ef56edb3918 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+@@ -1575,12 +1575,14 @@ static int axienet_probe(struct platform_device *pdev)
+ 	ret = of_address_to_resource(np, 0, &dmares);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "unable to get DMA resource\n");
++		of_node_put(np);
+ 		goto free_netdev;
+ 	}
+ 	lp->dma_regs = devm_ioremap_resource(&pdev->dev, &dmares);
+ 	if (IS_ERR(lp->dma_regs)) {
+ 		dev_err(&pdev->dev, "could not map DMA regs\n");
+ 		ret = PTR_ERR(lp->dma_regs);
++		of_node_put(np);
+ 		goto free_netdev;
+ 	}
+ 	lp->rx_irq = irq_of_parse_and_map(np, 1);
+diff --git a/drivers/net/ieee802154/adf7242.c b/drivers/net/ieee802154/adf7242.c
+index cd1d8faccca5..cd6b95e673a5 100644
+--- a/drivers/net/ieee802154/adf7242.c
++++ b/drivers/net/ieee802154/adf7242.c
+@@ -1268,6 +1268,10 @@ static int adf7242_probe(struct spi_device *spi)
+ 	INIT_DELAYED_WORK(&lp->work, adf7242_rx_cal_work);
+ 	lp->wqueue = alloc_ordered_workqueue(dev_name(&spi->dev),
+ 					     WQ_MEM_RECLAIM);
++	if (unlikely(!lp->wqueue)) {
++		ret = -ENOMEM;
++		goto err_hw_init;
++	}
+ 
+ 	ret = adf7242_hw_init(lp);
+ 	if (ret)
+diff --git a/drivers/net/ieee802154/mac802154_hwsim.c b/drivers/net/ieee802154/mac802154_hwsim.c
+index b6743f03dce0..3b88846de31b 100644
+--- a/drivers/net/ieee802154/mac802154_hwsim.c
++++ b/drivers/net/ieee802154/mac802154_hwsim.c
+@@ -324,7 +324,7 @@ static int hwsim_get_radio_nl(struct sk_buff *msg, struct genl_info *info)
+ 			goto out_err;
+ 		}
+ 
+-		genlmsg_reply(skb, info);
++		res = genlmsg_reply(skb, info);
+ 		break;
+ 	}
+ 
+diff --git a/drivers/net/phy/dp83822.c b/drivers/net/phy/dp83822.c
+index 24c7f149f3e6..e11057892f07 100644
+--- a/drivers/net/phy/dp83822.c
++++ b/drivers/net/phy/dp83822.c
+@@ -23,6 +23,8 @@
+ #include <linux/netdevice.h>
+ 
+ #define DP83822_PHY_ID	        0x2000a240
++#define DP83825I_PHY_ID		0x2000a150
++
+ #define DP83822_DEVADDR		0x1f
+ 
+ #define MII_DP83822_PHYSCR	0x11
+@@ -312,26 +314,30 @@ static int dp83822_resume(struct phy_device *phydev)
+ 	return 0;
+ }
+ 
++#define DP83822_PHY_DRIVER(_id, _name)				\
++	{							\
++		PHY_ID_MATCH_MODEL(_id),			\
++		.name		= (_name),			\
++		.features	= PHY_BASIC_FEATURES,		\
++		.soft_reset	= dp83822_phy_reset,		\
++		.config_init	= dp83822_config_init,		\
++		.get_wol = dp83822_get_wol,			\
++		.set_wol = dp83822_set_wol,			\
++		.ack_interrupt = dp83822_ack_interrupt,		\
++		.config_intr = dp83822_config_intr,		\
++		.suspend = dp83822_suspend,			\
++		.resume = dp83822_resume,			\
++	}
++
+ static struct phy_driver dp83822_driver[] = {
+-	{
+-		.phy_id = DP83822_PHY_ID,
+-		.phy_id_mask = 0xfffffff0,
+-		.name = "TI DP83822",
+-		.features = PHY_BASIC_FEATURES,
+-		.config_init = dp83822_config_init,
+-		.soft_reset = dp83822_phy_reset,
+-		.get_wol = dp83822_get_wol,
+-		.set_wol = dp83822_set_wol,
+-		.ack_interrupt = dp83822_ack_interrupt,
+-		.config_intr = dp83822_config_intr,
+-		.suspend = dp83822_suspend,
+-		.resume = dp83822_resume,
+-	 },
++	DP83822_PHY_DRIVER(DP83822_PHY_ID, "TI DP83822"),
++	DP83822_PHY_DRIVER(DP83825I_PHY_ID, "TI DP83825I"),
+ };
+ module_phy_driver(dp83822_driver);
+ 
+ static struct mdio_device_id __maybe_unused dp83822_tbl[] = {
+ 	{ DP83822_PHY_ID, 0xfffffff0 },
++	{ DP83825I_PHY_ID, 0xfffffff0 },
+ 	{ },
+ };
+ MODULE_DEVICE_TABLE(mdio, dp83822_tbl);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_usb_core.c b/drivers/net/wireless/mediatek/mt76/mt76x02_usb_core.c
+index 81970cf777c0..8cafa5a749ca 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x02_usb_core.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x02_usb_core.c
+@@ -81,8 +81,9 @@ int mt76x02u_tx_prepare_skb(struct mt76_dev *mdev, void *data,
+ 
+ 	mt76x02_insert_hdr_pad(skb);
+ 
+-	txwi = skb_push(skb, sizeof(struct mt76x02_txwi));
++	txwi = (struct mt76x02_txwi *)(skb->data - sizeof(struct mt76x02_txwi));
+ 	mt76x02_mac_write_txwi(dev, txwi, skb, wcid, sta, len);
++	skb_push(skb, sizeof(struct mt76x02_txwi));
+ 
+ 	pid = mt76_tx_status_skb_add(mdev, wcid, skb);
+ 	txwi->pktid = pid;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/phy.c b/drivers/net/wireless/mediatek/mt76/mt76x2/phy.c
+index c9634a774705..2f618536ef2a 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x2/phy.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x2/phy.c
+@@ -260,10 +260,15 @@ mt76x2_phy_set_gain_val(struct mt76x02_dev *dev)
+ 	gain_val[0] = dev->cal.agc_gain_cur[0] - dev->cal.agc_gain_adjust;
+ 	gain_val[1] = dev->cal.agc_gain_cur[1] - dev->cal.agc_gain_adjust;
+ 
+-	if (dev->mt76.chandef.width >= NL80211_CHAN_WIDTH_40)
++	val = 0x1836 << 16;
++	if (!mt76x2_has_ext_lna(dev) &&
++	    dev->mt76.chandef.width >= NL80211_CHAN_WIDTH_40)
+ 		val = 0x1e42 << 16;
+-	else
+-		val = 0x1836 << 16;
++
++	if (mt76x2_has_ext_lna(dev) &&
++	    dev->mt76.chandef.chan->band == NL80211_BAND_2GHZ &&
++	    dev->mt76.chandef.width < NL80211_CHAN_WIDTH_40)
++		val = 0x0f36 << 16;
+ 
+ 	val |= 0xf8;
+ 
+@@ -280,6 +285,7 @@ void mt76x2_phy_update_channel_gain(struct mt76x02_dev *dev)
+ {
+ 	u8 *gain = dev->cal.agc_gain_init;
+ 	u8 low_gain_delta, gain_delta;
++	u32 agc_35, agc_37;
+ 	bool gain_change;
+ 	int low_gain;
+ 	u32 val;
+@@ -316,6 +322,16 @@ void mt76x2_phy_update_channel_gain(struct mt76x02_dev *dev)
+ 	else
+ 		low_gain_delta = 14;
+ 
++	agc_37 = 0x2121262c;
++	if (dev->mt76.chandef.chan->band == NL80211_BAND_2GHZ)
++		agc_35 = 0x11111516;
++	else if (low_gain == 2)
++		agc_35 = agc_37 = 0x08080808;
++	else if (dev->mt76.chandef.width == NL80211_CHAN_WIDTH_80)
++		agc_35 = 0x10101014;
++	else
++		agc_35 = 0x11111116;
++
+ 	if (low_gain == 2) {
+ 		mt76_wr(dev, MT_BBP(RXO, 18), 0xf000a990);
+ 		mt76_wr(dev, MT_BBP(AGC, 35), 0x08080808);
+@@ -324,15 +340,13 @@ void mt76x2_phy_update_channel_gain(struct mt76x02_dev *dev)
+ 		dev->cal.agc_gain_adjust = 0;
+ 	} else {
+ 		mt76_wr(dev, MT_BBP(RXO, 18), 0xf000a991);
+-		if (dev->mt76.chandef.width == NL80211_CHAN_WIDTH_80)
+-			mt76_wr(dev, MT_BBP(AGC, 35), 0x10101014);
+-		else
+-			mt76_wr(dev, MT_BBP(AGC, 35), 0x11111116);
+-		mt76_wr(dev, MT_BBP(AGC, 37), 0x2121262C);
+ 		gain_delta = 0;
+ 		dev->cal.agc_gain_adjust = low_gain_delta;
+ 	}
+ 
++	mt76_wr(dev, MT_BBP(AGC, 35), agc_35);
++	mt76_wr(dev, MT_BBP(AGC, 37), agc_37);
++
+ 	dev->cal.agc_gain_cur[0] = gain[0] - gain_delta;
+ 	dev->cal.agc_gain_cur[1] = gain[1] - gain_delta;
+ 	mt76x2_phy_set_gain_val(dev);
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index b9fff3b8ed1b..23da7beadd62 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -366,15 +366,12 @@ static inline bool nvme_state_is_live(enum nvme_ana_state state)
+ static void nvme_update_ns_ana_state(struct nvme_ana_group_desc *desc,
+ 		struct nvme_ns *ns)
+ {
+-	enum nvme_ana_state old;
+-
+ 	mutex_lock(&ns->head->lock);
+-	old = ns->ana_state;
+ 	ns->ana_grpid = le32_to_cpu(desc->grpid);
+ 	ns->ana_state = desc->state;
+ 	clear_bit(NVME_NS_ANA_PENDING, &ns->flags);
+ 
+-	if (nvme_state_is_live(ns->ana_state) && !nvme_state_is_live(old))
++	if (nvme_state_is_live(ns->ana_state))
+ 		nvme_mpath_set_live(ns);
+ 	mutex_unlock(&ns->head->lock);
+ }
+diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
+index 02c63c463222..7bad21a2283f 100644
+--- a/drivers/nvme/target/core.c
++++ b/drivers/nvme/target/core.c
+@@ -517,7 +517,7 @@ int nvmet_ns_enable(struct nvmet_ns *ns)
+ 
+ 	ret = nvmet_p2pmem_ns_enable(ns);
+ 	if (ret)
+-		goto out_unlock;
++		goto out_dev_disable;
+ 
+ 	list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry)
+ 		nvmet_p2pmem_ns_add_p2p(ctrl, ns);
+@@ -558,7 +558,7 @@ out_unlock:
+ out_dev_put:
+ 	list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry)
+ 		pci_dev_put(radix_tree_delete(&ctrl->p2p_ns_map, ns->nsid));
+-
++out_dev_disable:
+ 	nvmet_ns_dev_disable(ns);
+ 	goto out_unlock;
+ }
+diff --git a/drivers/nvme/target/io-cmd-file.c b/drivers/nvme/target/io-cmd-file.c
+index 517522305e5c..9a0fa3943ca7 100644
+--- a/drivers/nvme/target/io-cmd-file.c
++++ b/drivers/nvme/target/io-cmd-file.c
+@@ -75,11 +75,11 @@ err:
+ 	return ret;
+ }
+ 
+-static void nvmet_file_init_bvec(struct bio_vec *bv, struct sg_page_iter *iter)
++static void nvmet_file_init_bvec(struct bio_vec *bv, struct scatterlist *sg)
+ {
+-	bv->bv_page = sg_page_iter_page(iter);
+-	bv->bv_offset = iter->sg->offset;
+-	bv->bv_len = PAGE_SIZE - iter->sg->offset;
++	bv->bv_page = sg_page(sg);
++	bv->bv_offset = sg->offset;
++	bv->bv_len = sg->length;
+ }
+ 
+ static ssize_t nvmet_file_submit_bvec(struct nvmet_req *req, loff_t pos,
+@@ -128,14 +128,14 @@ static void nvmet_file_io_done(struct kiocb *iocb, long ret, long ret2)
+ 
+ static bool nvmet_file_execute_io(struct nvmet_req *req, int ki_flags)
+ {
+-	ssize_t nr_bvec = DIV_ROUND_UP(req->data_len, PAGE_SIZE);
+-	struct sg_page_iter sg_pg_iter;
++	ssize_t nr_bvec = req->sg_cnt;
+ 	unsigned long bv_cnt = 0;
+ 	bool is_sync = false;
+ 	size_t len = 0, total_len = 0;
+ 	ssize_t ret = 0;
+ 	loff_t pos;
+-
++	int i;
++	struct scatterlist *sg;
+ 
+ 	if (req->f.mpool_alloc && nr_bvec > NVMET_MAX_MPOOL_BVEC)
+ 		is_sync = true;
+@@ -147,8 +147,8 @@ static bool nvmet_file_execute_io(struct nvmet_req *req, int ki_flags)
+ 	}
+ 
+ 	memset(&req->f.iocb, 0, sizeof(struct kiocb));
+-	for_each_sg_page(req->sg, &sg_pg_iter, req->sg_cnt, 0) {
+-		nvmet_file_init_bvec(&req->f.bvec[bv_cnt], &sg_pg_iter);
++	for_each_sg(req->sg, sg, req->sg_cnt, i) {
++		nvmet_file_init_bvec(&req->f.bvec[bv_cnt], sg);
+ 		len += req->f.bvec[bv_cnt].bv_len;
+ 		total_len += req->f.bvec[bv_cnt].bv_len;
+ 		bv_cnt++;
+@@ -225,7 +225,7 @@ static void nvmet_file_submit_buffered_io(struct nvmet_req *req)
+ 
+ static void nvmet_file_execute_rw(struct nvmet_req *req)
+ {
+-	ssize_t nr_bvec = DIV_ROUND_UP(req->data_len, PAGE_SIZE);
++	ssize_t nr_bvec = req->sg_cnt;
+ 
+ 	if (!req->sg_cnt || !nr_bvec) {
+ 		nvmet_req_complete(req, 0);
+diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c
+index df34bff4ac31..f73ce96e9603 100644
+--- a/drivers/s390/net/qeth_l3_main.c
++++ b/drivers/s390/net/qeth_l3_main.c
+@@ -2316,12 +2316,14 @@ static int qeth_l3_probe_device(struct ccwgroup_device *gdev)
+ 	struct qeth_card *card = dev_get_drvdata(&gdev->dev);
+ 	int rc;
+ 
++	hash_init(card->ip_htable);
++
+ 	if (gdev->dev.type == &qeth_generic_devtype) {
+ 		rc = qeth_l3_create_device_attributes(&gdev->dev);
+ 		if (rc)
+ 			return rc;
+ 	}
+-	hash_init(card->ip_htable);
++
+ 	hash_init(card->ip_mc_htable);
+ 	card->info.hwtrap = 0;
+ 	return 0;
+diff --git a/drivers/s390/scsi/zfcp_fc.c b/drivers/s390/scsi/zfcp_fc.c
+index db00b5e3abbe..33eddb02ee30 100644
+--- a/drivers/s390/scsi/zfcp_fc.c
++++ b/drivers/s390/scsi/zfcp_fc.c
+@@ -239,10 +239,6 @@ static void _zfcp_fc_incoming_rscn(struct zfcp_fsf_req *fsf_req, u32 range,
+ 	list_for_each_entry(port, &adapter->port_list, list) {
+ 		if ((port->d_id & range) == (ntoh24(page->rscn_fid) & range))
+ 			zfcp_fc_test_link(port);
+-		if (!port->d_id)
+-			zfcp_erp_port_reopen(port,
+-					     ZFCP_STATUS_COMMON_ERP_FAILED,
+-					     "fcrscn1");
+ 	}
+ 	read_unlock_irqrestore(&adapter->port_list_lock, flags);
+ }
+@@ -250,6 +246,7 @@ static void _zfcp_fc_incoming_rscn(struct zfcp_fsf_req *fsf_req, u32 range,
+ static void zfcp_fc_incoming_rscn(struct zfcp_fsf_req *fsf_req)
+ {
+ 	struct fsf_status_read_buffer *status_buffer = (void *)fsf_req->data;
++	struct zfcp_adapter *adapter = fsf_req->adapter;
+ 	struct fc_els_rscn *head;
+ 	struct fc_els_rscn_page *page;
+ 	u16 i;
+@@ -263,6 +260,22 @@ static void zfcp_fc_incoming_rscn(struct zfcp_fsf_req *fsf_req)
+ 	no_entries = be16_to_cpu(head->rscn_plen) /
+ 		sizeof(struct fc_els_rscn_page);
+ 
++	if (no_entries > 1) {
++		/* handle failed ports */
++		unsigned long flags;
++		struct zfcp_port *port;
++
++		read_lock_irqsave(&adapter->port_list_lock, flags);
++		list_for_each_entry(port, &adapter->port_list, list) {
++			if (port->d_id)
++				continue;
++			zfcp_erp_port_reopen(port,
++					     ZFCP_STATUS_COMMON_ERP_FAILED,
++					     "fcrscn1");
++		}
++		read_unlock_irqrestore(&adapter->port_list_lock, flags);
++	}
++
+ 	for (i = 1; i < no_entries; i++) {
+ 		/* skip head and start with 1st element */
+ 		page++;
+diff --git a/drivers/scsi/aacraid/aacraid.h b/drivers/scsi/aacraid/aacraid.h
+index 3291d1c16864..8bd09b96ea18 100644
+--- a/drivers/scsi/aacraid/aacraid.h
++++ b/drivers/scsi/aacraid/aacraid.h
+@@ -2640,9 +2640,14 @@ static inline unsigned int cap_to_cyls(sector_t capacity, unsigned divisor)
+ 	return capacity;
+ }
+ 
++static inline int aac_pci_offline(struct aac_dev *dev)
++{
++	return pci_channel_offline(dev->pdev) || dev->handle_pci_error;
++}
++
+ static inline int aac_adapter_check_health(struct aac_dev *dev)
+ {
+-	if (unlikely(pci_channel_offline(dev->pdev)))
++	if (unlikely(aac_pci_offline(dev)))
+ 		return -1;
+ 
+ 	return (dev)->a_ops.adapter_check_health(dev);
+diff --git a/drivers/scsi/aacraid/commsup.c b/drivers/scsi/aacraid/commsup.c
+index a3adc954f40f..09367b8a3885 100644
+--- a/drivers/scsi/aacraid/commsup.c
++++ b/drivers/scsi/aacraid/commsup.c
+@@ -672,7 +672,7 @@ int aac_fib_send(u16 command, struct fib *fibptr, unsigned long size,
+ 					return -ETIMEDOUT;
+ 				}
+ 
+-				if (unlikely(pci_channel_offline(dev->pdev)))
++				if (unlikely(aac_pci_offline(dev)))
+ 					return -EFAULT;
+ 
+ 				if ((blink = aac_adapter_check_health(dev)) > 0) {
+@@ -772,7 +772,7 @@ int aac_hba_send(u8 command, struct fib *fibptr, fib_callback callback,
+ 
+ 		spin_unlock_irqrestore(&fibptr->event_lock, flags);
+ 
+-		if (unlikely(pci_channel_offline(dev->pdev)))
++		if (unlikely(aac_pci_offline(dev)))
+ 			return -EFAULT;
+ 
+ 		fibptr->flags |= FIB_CONTEXT_FLAG_WAIT;
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
+index 0a6cb8f0680c..c39f88100f31 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
+@@ -3281,12 +3281,18 @@ mpt3sas_base_free_smid(struct MPT3SAS_ADAPTER *ioc, u16 smid)
+ 
+ 	if (smid < ioc->hi_priority_smid) {
+ 		struct scsiio_tracker *st;
++		void *request;
+ 
+ 		st = _get_st_from_smid(ioc, smid);
+ 		if (!st) {
+ 			_base_recovery_check(ioc);
+ 			return;
+ 		}
++
++		/* Clear MPI request frame */
++		request = mpt3sas_base_get_msg_frame(ioc, smid);
++		memset(request, 0, ioc->request_sz);
++
+ 		mpt3sas_base_clear_st(ioc, st);
+ 		_base_recovery_check(ioc);
+ 		return;
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+index 6be39dc27103..6173c211a5e5 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+@@ -1462,11 +1462,23 @@ mpt3sas_scsih_scsi_lookup_get(struct MPT3SAS_ADAPTER *ioc, u16 smid)
+ {
+ 	struct scsi_cmnd *scmd = NULL;
+ 	struct scsiio_tracker *st;
++	Mpi25SCSIIORequest_t *mpi_request;
+ 
+ 	if (smid > 0  &&
+ 	    smid <= ioc->scsiio_depth - INTERNAL_SCSIIO_CMDS_COUNT) {
+ 		u32 unique_tag = smid - 1;
+ 
++		mpi_request = mpt3sas_base_get_msg_frame(ioc, smid);
++
++		/*
++		 * If SCSI IO request is outstanding at driver level then
++		 * DevHandle filed must be non-zero. If DevHandle is zero
++		 * then it means that this smid is free at driver level,
++		 * so return NULL.
++		 */
++		if (!mpi_request->DevHandle)
++			return scmd;
++
+ 		scmd = scsi_host_find_tag(ioc->shost, unique_tag);
+ 		if (scmd) {
+ 			st = scsi_cmd_priv(scmd);
+diff --git a/drivers/scsi/qla4xxx/ql4_os.c b/drivers/scsi/qla4xxx/ql4_os.c
+index a77bfb224248..80289c885c07 100644
+--- a/drivers/scsi/qla4xxx/ql4_os.c
++++ b/drivers/scsi/qla4xxx/ql4_os.c
+@@ -3203,6 +3203,8 @@ static int qla4xxx_conn_bind(struct iscsi_cls_session *cls_session,
+ 	if (iscsi_conn_bind(cls_session, cls_conn, is_leading))
+ 		return -EINVAL;
+ 	ep = iscsi_lookup_endpoint(transport_fd);
++	if (!ep)
++		return -EINVAL;
+ 	conn = cls_conn->dd_data;
+ 	qla_conn = conn->dd_data;
+ 	qla_conn->qla_ep = ep->dd_data;
+diff --git a/drivers/staging/axis-fifo/Kconfig b/drivers/staging/axis-fifo/Kconfig
+index 687537203d9c..d9725888af6f 100644
+--- a/drivers/staging/axis-fifo/Kconfig
++++ b/drivers/staging/axis-fifo/Kconfig
+@@ -3,6 +3,7 @@
+ #
+ config XIL_AXIS_FIFO
+ 	tristate "Xilinx AXI-Stream FIFO IP core driver"
++	depends on OF
+ 	default n
+ 	help
+ 	  This adds support for the Xilinx AXI-Stream
+diff --git a/drivers/staging/mt7621-pci/Kconfig b/drivers/staging/mt7621-pci/Kconfig
+index d33533872a16..c8fa17cfa807 100644
+--- a/drivers/staging/mt7621-pci/Kconfig
++++ b/drivers/staging/mt7621-pci/Kconfig
+@@ -1,6 +1,7 @@
+ config PCI_MT7621
+ 	tristate "MediaTek MT7621 PCI Controller"
+ 	depends on RALINK
++	depends on PCI
+ 	select PCI_DRIVERS_GENERIC
+ 	help
+ 	  This selects a driver for the MediaTek MT7621 PCI Controller.
+diff --git a/drivers/staging/rtl8188eu/core/rtw_xmit.c b/drivers/staging/rtl8188eu/core/rtw_xmit.c
+index 3b1ccd138c3f..6fb6ea29a8b6 100644
+--- a/drivers/staging/rtl8188eu/core/rtw_xmit.c
++++ b/drivers/staging/rtl8188eu/core/rtw_xmit.c
+@@ -174,7 +174,9 @@ s32 _rtw_init_xmit_priv(struct xmit_priv *pxmitpriv, struct adapter *padapter)
+ 
+ 	pxmitpriv->free_xmit_extbuf_cnt = num_xmit_extbuf;
+ 
+-	rtw_alloc_hwxmits(padapter);
++	res = rtw_alloc_hwxmits(padapter);
++	if (res == _FAIL)
++		goto exit;
+ 	rtw_init_hwxmits(pxmitpriv->hwxmits, pxmitpriv->hwxmit_entry);
+ 
+ 	for (i = 0; i < 4; i++)
+@@ -1503,7 +1505,7 @@ exit:
+ 	return res;
+ }
+ 
+-void rtw_alloc_hwxmits(struct adapter *padapter)
++s32 rtw_alloc_hwxmits(struct adapter *padapter)
+ {
+ 	struct hw_xmit *hwxmits;
+ 	struct xmit_priv *pxmitpriv = &padapter->xmitpriv;
+@@ -1512,6 +1514,8 @@ void rtw_alloc_hwxmits(struct adapter *padapter)
+ 
+ 	pxmitpriv->hwxmits = kcalloc(pxmitpriv->hwxmit_entry,
+ 				     sizeof(struct hw_xmit), GFP_KERNEL);
++	if (!pxmitpriv->hwxmits)
++		return _FAIL;
+ 
+ 	hwxmits = pxmitpriv->hwxmits;
+ 
+@@ -1519,6 +1523,7 @@ void rtw_alloc_hwxmits(struct adapter *padapter)
+ 	hwxmits[1] .sta_queue = &pxmitpriv->vi_pending;
+ 	hwxmits[2] .sta_queue = &pxmitpriv->be_pending;
+ 	hwxmits[3] .sta_queue = &pxmitpriv->bk_pending;
++	return _SUCCESS;
+ }
+ 
+ void rtw_free_hwxmits(struct adapter *padapter)
+diff --git a/drivers/staging/rtl8188eu/include/rtw_xmit.h b/drivers/staging/rtl8188eu/include/rtw_xmit.h
+index 788f59c74ea1..ba7e15fbde72 100644
+--- a/drivers/staging/rtl8188eu/include/rtw_xmit.h
++++ b/drivers/staging/rtl8188eu/include/rtw_xmit.h
+@@ -336,7 +336,7 @@ s32 rtw_txframes_sta_ac_pending(struct adapter *padapter,
+ void rtw_init_hwxmits(struct hw_xmit *phwxmit, int entry);
+ s32 _rtw_init_xmit_priv(struct xmit_priv *pxmitpriv, struct adapter *padapter);
+ void _rtw_free_xmit_priv(struct xmit_priv *pxmitpriv);
+-void rtw_alloc_hwxmits(struct adapter *padapter);
++s32 rtw_alloc_hwxmits(struct adapter *padapter);
+ void rtw_free_hwxmits(struct adapter *padapter);
+ s32 rtw_xmit(struct adapter *padapter, struct sk_buff **pkt);
+ 
+diff --git a/drivers/staging/rtl8712/rtl8712_cmd.c b/drivers/staging/rtl8712/rtl8712_cmd.c
+index 1920d02f7c9f..8c36acedf507 100644
+--- a/drivers/staging/rtl8712/rtl8712_cmd.c
++++ b/drivers/staging/rtl8712/rtl8712_cmd.c
+@@ -147,17 +147,9 @@ static u8 write_macreg_hdl(struct _adapter *padapter, u8 *pbuf)
+ 
+ static u8 read_bbreg_hdl(struct _adapter *padapter, u8 *pbuf)
+ {
+-	u32 val;
+-	void (*pcmd_callback)(struct _adapter *dev, struct cmd_obj	*pcmd);
+ 	struct cmd_obj *pcmd  = (struct cmd_obj *)pbuf;
+ 
+-	if (pcmd->rsp && pcmd->rspsz > 0)
+-		memcpy(pcmd->rsp, (u8 *)&val, pcmd->rspsz);
+-	pcmd_callback = cmd_callback[pcmd->cmdcode].callback;
+-	if (!pcmd_callback)
+-		r8712_free_cmd_obj(pcmd);
+-	else
+-		pcmd_callback(padapter, pcmd);
++	r8712_free_cmd_obj(pcmd);
+ 	return H2C_SUCCESS;
+ }
+ 
+diff --git a/drivers/staging/rtl8712/rtl8712_cmd.h b/drivers/staging/rtl8712/rtl8712_cmd.h
+index 92fb77666d44..1ef86b8c592f 100644
+--- a/drivers/staging/rtl8712/rtl8712_cmd.h
++++ b/drivers/staging/rtl8712/rtl8712_cmd.h
+@@ -140,7 +140,7 @@ enum rtl8712_h2c_cmd {
+ static struct _cmd_callback	cmd_callback[] = {
+ 	{GEN_CMD_CODE(_Read_MACREG), NULL}, /*0*/
+ 	{GEN_CMD_CODE(_Write_MACREG), NULL},
+-	{GEN_CMD_CODE(_Read_BBREG), &r8712_getbbrfreg_cmdrsp_callback},
++	{GEN_CMD_CODE(_Read_BBREG), NULL},
+ 	{GEN_CMD_CODE(_Write_BBREG), NULL},
+ 	{GEN_CMD_CODE(_Read_RFREG), &r8712_getbbrfreg_cmdrsp_callback},
+ 	{GEN_CMD_CODE(_Write_RFREG), NULL}, /*5*/
+diff --git a/drivers/staging/rtl8723bs/core/rtw_xmit.c b/drivers/staging/rtl8723bs/core/rtw_xmit.c
+index 625e67f39889..a36b2213d8ee 100644
+--- a/drivers/staging/rtl8723bs/core/rtw_xmit.c
++++ b/drivers/staging/rtl8723bs/core/rtw_xmit.c
+@@ -260,7 +260,9 @@ s32 _rtw_init_xmit_priv(struct xmit_priv *pxmitpriv, struct adapter *padapter)
+ 		}
+ 	}
+ 
+-	rtw_alloc_hwxmits(padapter);
++	res = rtw_alloc_hwxmits(padapter);
++	if (res == _FAIL)
++		goto exit;
+ 	rtw_init_hwxmits(pxmitpriv->hwxmits, pxmitpriv->hwxmit_entry);
+ 
+ 	for (i = 0; i < 4; i++) {
+@@ -2144,7 +2146,7 @@ exit:
+ 	return res;
+ }
+ 
+-void rtw_alloc_hwxmits(struct adapter *padapter)
++s32 rtw_alloc_hwxmits(struct adapter *padapter)
+ {
+ 	struct hw_xmit *hwxmits;
+ 	struct xmit_priv *pxmitpriv = &padapter->xmitpriv;
+@@ -2155,10 +2157,8 @@ void rtw_alloc_hwxmits(struct adapter *padapter)
+ 
+ 	pxmitpriv->hwxmits = rtw_zmalloc(sizeof(struct hw_xmit) * pxmitpriv->hwxmit_entry);
+ 
+-	if (pxmitpriv->hwxmits == NULL) {
+-		DBG_871X("alloc hwxmits fail!...\n");
+-		return;
+-	}
++	if (!pxmitpriv->hwxmits)
++		return _FAIL;
+ 
+ 	hwxmits = pxmitpriv->hwxmits;
+ 
+@@ -2204,7 +2204,7 @@ void rtw_alloc_hwxmits(struct adapter *padapter)
+ 
+ 	}
+ 
+-
++	return _SUCCESS;
+ }
+ 
+ void rtw_free_hwxmits(struct adapter *padapter)
+diff --git a/drivers/staging/rtl8723bs/include/rtw_xmit.h b/drivers/staging/rtl8723bs/include/rtw_xmit.h
+index 1b38b9182b31..37f42b2f22f1 100644
+--- a/drivers/staging/rtl8723bs/include/rtw_xmit.h
++++ b/drivers/staging/rtl8723bs/include/rtw_xmit.h
+@@ -487,7 +487,7 @@ s32 _rtw_init_xmit_priv(struct xmit_priv *pxmitpriv, struct adapter *padapter);
+ void _rtw_free_xmit_priv (struct xmit_priv *pxmitpriv);
+ 
+ 
+-void rtw_alloc_hwxmits(struct adapter *padapter);
++s32 rtw_alloc_hwxmits(struct adapter *padapter);
+ void rtw_free_hwxmits(struct adapter *padapter);
+ 
+ 
+diff --git a/drivers/staging/rtlwifi/phydm/rtl_phydm.c b/drivers/staging/rtlwifi/phydm/rtl_phydm.c
+index 9930ed954abb..4cc77b2016e1 100644
+--- a/drivers/staging/rtlwifi/phydm/rtl_phydm.c
++++ b/drivers/staging/rtlwifi/phydm/rtl_phydm.c
+@@ -180,6 +180,8 @@ static int rtl_phydm_init_priv(struct rtl_priv *rtlpriv,
+ 
+ 	rtlpriv->phydm.internal =
+ 		kzalloc(sizeof(struct phy_dm_struct), GFP_KERNEL);
++	if (!rtlpriv->phydm.internal)
++		return 0;
+ 
+ 	_rtl_phydm_init_com_info(rtlpriv, ic, params);
+ 
+diff --git a/drivers/staging/rtlwifi/rtl8822be/fw.c b/drivers/staging/rtlwifi/rtl8822be/fw.c
+index a40396614814..c1ed52df05f0 100644
+--- a/drivers/staging/rtlwifi/rtl8822be/fw.c
++++ b/drivers/staging/rtlwifi/rtl8822be/fw.c
+@@ -741,6 +741,8 @@ void rtl8822be_set_fw_rsvdpagepkt(struct ieee80211_hw *hw, bool b_dl_finished)
+ 		      u1_rsvd_page_loc, 3);
+ 
+ 	skb = dev_alloc_skb(totalpacketlen);
++	if (!skb)
++		return;
+ 	memcpy((u8 *)skb_put(skb, totalpacketlen), &reserved_page_packet,
+ 	       totalpacketlen);
+ 
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+index 804daf83be35..064d0db4c51e 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+@@ -3513,6 +3513,7 @@ static int vchiq_probe(struct platform_device *pdev)
+ 	struct device_node *fw_node;
+ 	const struct of_device_id *of_id;
+ 	struct vchiq_drvdata *drvdata;
++	struct device *vchiq_dev;
+ 	int err;
+ 
+ 	of_id = of_match_node(vchiq_of_match, pdev->dev.of_node);
+@@ -3547,9 +3548,12 @@ static int vchiq_probe(struct platform_device *pdev)
+ 		goto failed_platform_init;
+ 	}
+ 
+-	if (IS_ERR(device_create(vchiq_class, &pdev->dev, vchiq_devid,
+-				 NULL, "vchiq")))
++	vchiq_dev = device_create(vchiq_class, &pdev->dev, vchiq_devid, NULL,
++				  "vchiq");
++	if (IS_ERR(vchiq_dev)) {
++		err = PTR_ERR(vchiq_dev);
+ 		goto failed_device_create;
++	}
+ 
+ 	vchiq_debugfs_init();
+ 
+diff --git a/drivers/tty/serial/ar933x_uart.c b/drivers/tty/serial/ar933x_uart.c
+index db5df3d54818..3bdd56a1021b 100644
+--- a/drivers/tty/serial/ar933x_uart.c
++++ b/drivers/tty/serial/ar933x_uart.c
+@@ -49,11 +49,6 @@ struct ar933x_uart_port {
+ 	struct clk		*clk;
+ };
+ 
+-static inline bool ar933x_uart_console_enabled(void)
+-{
+-	return IS_ENABLED(CONFIG_SERIAL_AR933X_CONSOLE);
+-}
+-
+ static inline unsigned int ar933x_uart_read(struct ar933x_uart_port *up,
+ 					    int offset)
+ {
+@@ -508,6 +503,7 @@ static const struct uart_ops ar933x_uart_ops = {
+ 	.verify_port	= ar933x_uart_verify_port,
+ };
+ 
++#ifdef CONFIG_SERIAL_AR933X_CONSOLE
+ static struct ar933x_uart_port *
+ ar933x_console_ports[CONFIG_SERIAL_AR933X_NR_UARTS];
+ 
+@@ -604,14 +600,7 @@ static struct console ar933x_uart_console = {
+ 	.index		= -1,
+ 	.data		= &ar933x_uart_driver,
+ };
+-
+-static void ar933x_uart_add_console_port(struct ar933x_uart_port *up)
+-{
+-	if (!ar933x_uart_console_enabled())
+-		return;
+-
+-	ar933x_console_ports[up->port.line] = up;
+-}
++#endif /* CONFIG_SERIAL_AR933X_CONSOLE */
+ 
+ static struct uart_driver ar933x_uart_driver = {
+ 	.owner		= THIS_MODULE,
+@@ -700,7 +689,9 @@ static int ar933x_uart_probe(struct platform_device *pdev)
+ 	baud = ar933x_uart_get_baud(port->uartclk, 0, AR933X_UART_MAX_STEP);
+ 	up->max_baud = min_t(unsigned int, baud, AR933X_UART_MAX_BAUD);
+ 
+-	ar933x_uart_add_console_port(up);
++#ifdef CONFIG_SERIAL_AR933X_CONSOLE
++	ar933x_console_ports[up->port.line] = up;
++#endif
+ 
+ 	ret = uart_add_one_port(&ar933x_uart_driver, &up->port);
+ 	if (ret)
+@@ -749,8 +740,9 @@ static int __init ar933x_uart_init(void)
+ {
+ 	int ret;
+ 
+-	if (ar933x_uart_console_enabled())
+-		ar933x_uart_driver.cons = &ar933x_uart_console;
++#ifdef CONFIG_SERIAL_AR933X_CONSOLE
++	ar933x_uart_driver.cons = &ar933x_uart_console;
++#endif
+ 
+ 	ret = uart_register_driver(&ar933x_uart_driver);
+ 	if (ret)
+diff --git a/drivers/tty/serial/sc16is7xx.c b/drivers/tty/serial/sc16is7xx.c
+index 268098681856..114e94f476c6 100644
+--- a/drivers/tty/serial/sc16is7xx.c
++++ b/drivers/tty/serial/sc16is7xx.c
+@@ -1509,7 +1509,7 @@ static int __init sc16is7xx_init(void)
+ 	ret = i2c_add_driver(&sc16is7xx_i2c_uart_driver);
+ 	if (ret < 0) {
+ 		pr_err("failed to init sc16is7xx i2c --> %d\n", ret);
+-		return ret;
++		goto err_i2c;
+ 	}
+ #endif
+ 
+@@ -1517,10 +1517,18 @@ static int __init sc16is7xx_init(void)
+ 	ret = spi_register_driver(&sc16is7xx_spi_uart_driver);
+ 	if (ret < 0) {
+ 		pr_err("failed to init sc16is7xx spi --> %d\n", ret);
+-		return ret;
++		goto err_spi;
+ 	}
+ #endif
+ 	return ret;
++
++err_spi:
++#ifdef CONFIG_SERIAL_SC16IS7XX_I2C
++	i2c_del_driver(&sc16is7xx_i2c_uart_driver);
++#endif
++err_i2c:
++	uart_unregister_driver(&sc16is7xx_uart);
++	return ret;
+ }
+ module_init(sc16is7xx_init);
+ 
+diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c
+index fdc6e4e403e8..8cced3609e24 100644
+--- a/drivers/usb/dwc3/dwc3-pci.c
++++ b/drivers/usb/dwc3/dwc3-pci.c
+@@ -29,6 +29,7 @@
+ #define PCI_DEVICE_ID_INTEL_BXT_M		0x1aaa
+ #define PCI_DEVICE_ID_INTEL_APL			0x5aaa
+ #define PCI_DEVICE_ID_INTEL_KBP			0xa2b0
++#define PCI_DEVICE_ID_INTEL_CMLH		0x02ee
+ #define PCI_DEVICE_ID_INTEL_GLK			0x31aa
+ #define PCI_DEVICE_ID_INTEL_CNPLP		0x9dee
+ #define PCI_DEVICE_ID_INTEL_CNPH		0xa36e
+@@ -305,6 +306,9 @@ static const struct pci_device_id dwc3_pci_id_table[] = {
+ 	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MRFLD),
+ 	  (kernel_ulong_t) &dwc3_pci_mrfld_properties, },
+ 
++	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_CMLH),
++	  (kernel_ulong_t) &dwc3_pci_intel_properties, },
++
+ 	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_SPTLP),
+ 	  (kernel_ulong_t) &dwc3_pci_intel_properties, },
+ 
+diff --git a/drivers/usb/gadget/udc/net2272.c b/drivers/usb/gadget/udc/net2272.c
+index b77f3126580e..c2011cd7df8c 100644
+--- a/drivers/usb/gadget/udc/net2272.c
++++ b/drivers/usb/gadget/udc/net2272.c
+@@ -945,6 +945,7 @@ net2272_dequeue(struct usb_ep *_ep, struct usb_request *_req)
+ 			break;
+ 	}
+ 	if (&req->req != _req) {
++		ep->stopped = stopped;
+ 		spin_unlock_irqrestore(&ep->dev->lock, flags);
+ 		return -EINVAL;
+ 	}
+diff --git a/drivers/usb/gadget/udc/net2280.c b/drivers/usb/gadget/udc/net2280.c
+index e7dae5379e04..d93cf4171953 100644
+--- a/drivers/usb/gadget/udc/net2280.c
++++ b/drivers/usb/gadget/udc/net2280.c
+@@ -866,9 +866,6 @@ static void start_queue(struct net2280_ep *ep, u32 dmactl, u32 td_dma)
+ 	(void) readl(&ep->dev->pci->pcimstctl);
+ 
+ 	writel(BIT(DMA_START), &dma->dmastat);
+-
+-	if (!ep->is_in)
+-		stop_out_naking(ep);
+ }
+ 
+ static void start_dma(struct net2280_ep *ep, struct net2280_request *req)
+@@ -907,6 +904,7 @@ static void start_dma(struct net2280_ep *ep, struct net2280_request *req)
+ 			writel(BIT(DMA_START), &dma->dmastat);
+ 			return;
+ 		}
++		stop_out_naking(ep);
+ 	}
+ 
+ 	tmp = dmactl_default;
+@@ -1275,9 +1273,9 @@ static int net2280_dequeue(struct usb_ep *_ep, struct usb_request *_req)
+ 			break;
+ 	}
+ 	if (&req->req != _req) {
++		ep->stopped = stopped;
+ 		spin_unlock_irqrestore(&ep->dev->lock, flags);
+-		dev_err(&ep->dev->pdev->dev, "%s: Request mismatch\n",
+-								__func__);
++		ep_dbg(ep->dev, "%s: Request mismatch\n", __func__);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/drivers/usb/host/u132-hcd.c b/drivers/usb/host/u132-hcd.c
+index 5b8a3d9530c4..5cac83aaeac3 100644
+--- a/drivers/usb/host/u132-hcd.c
++++ b/drivers/usb/host/u132-hcd.c
+@@ -3202,6 +3202,9 @@ static int __init u132_hcd_init(void)
+ 	printk(KERN_INFO "driver %s\n", hcd_name);
+ 	workqueue = create_singlethread_workqueue("u132");
+ 	retval = platform_driver_register(&u132_platform_driver);
++	if (retval)
++		destroy_workqueue(workqueue);
++
+ 	return retval;
+ }
+ 
+diff --git a/drivers/usb/misc/usb251xb.c b/drivers/usb/misc/usb251xb.c
+index a6efb9a72939..5f7734c729b1 100644
+--- a/drivers/usb/misc/usb251xb.c
++++ b/drivers/usb/misc/usb251xb.c
+@@ -601,7 +601,7 @@ static int usb251xb_probe(struct usb251xb *hub)
+ 							   dev);
+ 	int err;
+ 
+-	if (np) {
++	if (np && of_id) {
+ 		err = usb251xb_get_ofdata(hub,
+ 					  (struct usb251xb_data *)of_id->data);
+ 		if (err) {
+diff --git a/fs/afs/fsclient.c b/fs/afs/fsclient.c
+index ca08c83168f5..0b37867b5c20 100644
+--- a/fs/afs/fsclient.c
++++ b/fs/afs/fsclient.c
+@@ -1515,8 +1515,8 @@ static int afs_fs_setattr_size64(struct afs_fs_cursor *fc, struct iattr *attr)
+ 
+ 	xdr_encode_AFS_StoreStatus(&bp, attr);
+ 
+-	*bp++ = 0;				/* position of start of write */
+-	*bp++ = 0;
++	*bp++ = htonl(attr->ia_size >> 32);	/* position of start of write */
++	*bp++ = htonl((u32) attr->ia_size);
+ 	*bp++ = 0;				/* size of write */
+ 	*bp++ = 0;
+ 	*bp++ = htonl(attr->ia_size >> 32);	/* new file length */
+@@ -1564,7 +1564,7 @@ static int afs_fs_setattr_size(struct afs_fs_cursor *fc, struct iattr *attr)
+ 
+ 	xdr_encode_AFS_StoreStatus(&bp, attr);
+ 
+-	*bp++ = 0;				/* position of start of write */
++	*bp++ = htonl(attr->ia_size);		/* position of start of write */
+ 	*bp++ = 0;				/* size of write */
+ 	*bp++ = htonl(attr->ia_size);		/* new file length */
+ 
+diff --git a/fs/afs/yfsclient.c b/fs/afs/yfsclient.c
+index 5aa57929e8c2..6e97a42d24d1 100644
+--- a/fs/afs/yfsclient.c
++++ b/fs/afs/yfsclient.c
+@@ -1514,7 +1514,7 @@ static int yfs_fs_setattr_size(struct afs_fs_cursor *fc, struct iattr *attr)
+ 	bp = xdr_encode_u32(bp, 0); /* RPC flags */
+ 	bp = xdr_encode_YFSFid(bp, &vnode->fid);
+ 	bp = xdr_encode_YFS_StoreStatus(bp, attr);
+-	bp = xdr_encode_u64(bp, 0);		/* position of start of write */
++	bp = xdr_encode_u64(bp, attr->ia_size);	/* position of start of write */
+ 	bp = xdr_encode_u64(bp, 0);		/* size of write */
+ 	bp = xdr_encode_u64(bp, attr->ia_size);	/* new file length */
+ 	yfs_check_req(call, bp);
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index 4ec2b660d014..7f3ece91a4d0 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -1886,8 +1886,10 @@ static void btrfs_cleanup_pending_block_groups(struct btrfs_trans_handle *trans)
+        }
+ }
+ 
+-static inline int btrfs_start_delalloc_flush(struct btrfs_fs_info *fs_info)
++static inline int btrfs_start_delalloc_flush(struct btrfs_trans_handle *trans)
+ {
++	struct btrfs_fs_info *fs_info = trans->fs_info;
++
+ 	/*
+ 	 * We use writeback_inodes_sb here because if we used
+ 	 * btrfs_start_delalloc_roots we would deadlock with fs freeze.
+@@ -1897,15 +1899,50 @@ static inline int btrfs_start_delalloc_flush(struct btrfs_fs_info *fs_info)
+ 	 * from already being in a transaction and our join_transaction doesn't
+ 	 * have to re-take the fs freeze lock.
+ 	 */
+-	if (btrfs_test_opt(fs_info, FLUSHONCOMMIT))
++	if (btrfs_test_opt(fs_info, FLUSHONCOMMIT)) {
+ 		writeback_inodes_sb(fs_info->sb, WB_REASON_SYNC);
++	} else {
++		struct btrfs_pending_snapshot *pending;
++		struct list_head *head = &trans->transaction->pending_snapshots;
++
++		/*
++		 * Flush delalloc for any root that is going to be snapshotted.
++		 * This is done to avoid a corrupted version of files, in the
++		 * snapshots, that had both buffered and direct IO writes (even
++		 * if they were done sequentially) due to an unordered update of
++		 * the inode's size on disk.
++		 */
++		list_for_each_entry(pending, head, list) {
++			int ret;
++
++			ret = btrfs_start_delalloc_snapshot(pending->root);
++			if (ret)
++				return ret;
++		}
++	}
+ 	return 0;
+ }
+ 
+-static inline void btrfs_wait_delalloc_flush(struct btrfs_fs_info *fs_info)
++static inline void btrfs_wait_delalloc_flush(struct btrfs_trans_handle *trans)
+ {
+-	if (btrfs_test_opt(fs_info, FLUSHONCOMMIT))
++	struct btrfs_fs_info *fs_info = trans->fs_info;
++
++	if (btrfs_test_opt(fs_info, FLUSHONCOMMIT)) {
+ 		btrfs_wait_ordered_roots(fs_info, U64_MAX, 0, (u64)-1);
++	} else {
++		struct btrfs_pending_snapshot *pending;
++		struct list_head *head = &trans->transaction->pending_snapshots;
++
++		/*
++		 * Wait for any delalloc that we started previously for the roots
++		 * that are going to be snapshotted. This is to avoid a corrupted
++		 * version of files in the snapshots that had both buffered and
++		 * direct IO writes (even if they were done sequentially).
++		 */
++		list_for_each_entry(pending, head, list)
++			btrfs_wait_ordered_extents(pending->root,
++						   U64_MAX, 0, U64_MAX);
++	}
+ }
+ 
+ int btrfs_commit_transaction(struct btrfs_trans_handle *trans)
+@@ -2024,7 +2061,7 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans)
+ 
+ 	extwriter_counter_dec(cur_trans, trans->type);
+ 
+-	ret = btrfs_start_delalloc_flush(fs_info);
++	ret = btrfs_start_delalloc_flush(trans);
+ 	if (ret)
+ 		goto cleanup_transaction;
+ 
+@@ -2040,7 +2077,7 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans)
+ 	if (ret)
+ 		goto cleanup_transaction;
+ 
+-	btrfs_wait_delalloc_flush(fs_info);
++	btrfs_wait_delalloc_flush(trans);
+ 
+ 	btrfs_scrub_pause(fs_info);
+ 	/*
+diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
+index 9d1f34d46627..f7f9e305aaf8 100644
+--- a/fs/ceph/inode.c
++++ b/fs/ceph/inode.c
+@@ -524,6 +524,7 @@ static void ceph_i_callback(struct rcu_head *head)
+ 	struct inode *inode = container_of(head, struct inode, i_rcu);
+ 	struct ceph_inode_info *ci = ceph_inode(inode);
+ 
++	kfree(ci->i_symlink);
+ 	kmem_cache_free(ceph_inode_cachep, ci);
+ }
+ 
+@@ -561,7 +562,6 @@ void ceph_destroy_inode(struct inode *inode)
+ 		ceph_put_snap_realm(mdsc, realm);
+ 	}
+ 
+-	kfree(ci->i_symlink);
+ 	while ((n = rb_first(&ci->i_fragtree)) != NULL) {
+ 		frag = rb_entry(n, struct ceph_inode_frag, node);
+ 		rb_erase(n, &ci->i_fragtree);
+diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
+index 809c0f2f9942..64f4de983468 100644
+--- a/fs/fuse/dev.c
++++ b/fs/fuse/dev.c
+@@ -2034,10 +2034,8 @@ static ssize_t fuse_dev_splice_write(struct pipe_inode_info *pipe,
+ 		rem += pipe->bufs[(pipe->curbuf + idx) & (pipe->buffers - 1)].len;
+ 
+ 	ret = -EINVAL;
+-	if (rem < len) {
+-		pipe_unlock(pipe);
+-		goto out;
+-	}
++	if (rem < len)
++		goto out_free;
+ 
+ 	rem = len;
+ 	while (rem) {
+@@ -2055,7 +2053,9 @@ static ssize_t fuse_dev_splice_write(struct pipe_inode_info *pipe,
+ 			pipe->curbuf = (pipe->curbuf + 1) & (pipe->buffers - 1);
+ 			pipe->nrbufs--;
+ 		} else {
+-			pipe_buf_get(pipe, ibuf);
++			if (!pipe_buf_get(pipe, ibuf))
++				goto out_free;
++
+ 			*obuf = *ibuf;
+ 			obuf->flags &= ~PIPE_BUF_FLAG_GIFT;
+ 			obuf->len = rem;
+@@ -2078,11 +2078,11 @@ static ssize_t fuse_dev_splice_write(struct pipe_inode_info *pipe,
+ 	ret = fuse_dev_do_write(fud, &cs, len);
+ 
+ 	pipe_lock(pipe);
++out_free:
+ 	for (idx = 0; idx < nbuf; idx++)
+ 		pipe_buf_release(pipe, &bufs[idx]);
+ 	pipe_unlock(pipe);
+ 
+-out:
+ 	kvfree(bufs);
+ 	return ret;
+ }
+diff --git a/fs/nfs/client.c b/fs/nfs/client.c
+index fb1cf1a4bda2..90d71fda65ce 100644
+--- a/fs/nfs/client.c
++++ b/fs/nfs/client.c
+@@ -453,7 +453,7 @@ void nfs_init_timeout_values(struct rpc_timeout *to, int proto,
+ 	case XPRT_TRANSPORT_RDMA:
+ 		if (retrans == NFS_UNSPEC_RETRANS)
+ 			to->to_retries = NFS_DEF_TCP_RETRANS;
+-		if (timeo == NFS_UNSPEC_TIMEO || to->to_retries == 0)
++		if (timeo == NFS_UNSPEC_TIMEO || to->to_initval == 0)
+ 			to->to_initval = NFS_DEF_TCP_TIMEO * HZ / 10;
+ 		if (to->to_initval > NFS_MAX_TCP_TIMEOUT)
+ 			to->to_initval = NFS_MAX_TCP_TIMEOUT;
+diff --git a/fs/pipe.c b/fs/pipe.c
+index c51750ed4011..2a297bce381f 100644
+--- a/fs/pipe.c
++++ b/fs/pipe.c
+@@ -189,9 +189,9 @@ EXPORT_SYMBOL(generic_pipe_buf_steal);
+  *	in the tee() system call, when we duplicate the buffers in one
+  *	pipe into another.
+  */
+-void generic_pipe_buf_get(struct pipe_inode_info *pipe, struct pipe_buffer *buf)
++bool generic_pipe_buf_get(struct pipe_inode_info *pipe, struct pipe_buffer *buf)
+ {
+-	get_page(buf->page);
++	return try_get_page(buf->page);
+ }
+ EXPORT_SYMBOL(generic_pipe_buf_get);
+ 
+diff --git a/fs/splice.c b/fs/splice.c
+index 7da7d5437472..c38c7e7a49c9 100644
+--- a/fs/splice.c
++++ b/fs/splice.c
+@@ -1588,7 +1588,11 @@ retry:
+ 			 * Get a reference to this pipe buffer,
+ 			 * so we can copy the contents over.
+ 			 */
+-			pipe_buf_get(ipipe, ibuf);
++			if (!pipe_buf_get(ipipe, ibuf)) {
++				if (ret == 0)
++					ret = -EFAULT;
++				break;
++			}
+ 			*obuf = *ibuf;
+ 
+ 			/*
+@@ -1662,7 +1666,11 @@ static int link_pipe(struct pipe_inode_info *ipipe,
+ 		 * Get a reference to this pipe buffer,
+ 		 * so we can copy the contents over.
+ 		 */
+-		pipe_buf_get(ipipe, ibuf);
++		if (!pipe_buf_get(ipipe, ibuf)) {
++			if (ret == 0)
++				ret = -EFAULT;
++			break;
++		}
+ 
+ 		obuf = opipe->bufs + nbuf;
+ 		*obuf = *ibuf;
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index 80bb6408fe73..7000ddd807e0 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -965,6 +965,10 @@ static inline bool is_pci_p2pdma_page(const struct page *page)
+ }
+ #endif /* CONFIG_DEV_PAGEMAP_OPS */
+ 
++/* 127: arbitrary random number, small enough to assemble well */
++#define page_ref_zero_or_close_to_overflow(page) \
++	((unsigned int) page_ref_count(page) + 127u <= 127u)
++
+ static inline void get_page(struct page *page)
+ {
+ 	page = compound_head(page);
+@@ -972,8 +976,17 @@ static inline void get_page(struct page *page)
+ 	 * Getting a normal page or the head of a compound page
+ 	 * requires to already have an elevated page->_refcount.
+ 	 */
+-	VM_BUG_ON_PAGE(page_ref_count(page) <= 0, page);
++	VM_BUG_ON_PAGE(page_ref_zero_or_close_to_overflow(page), page);
++	page_ref_inc(page);
++}
++
++static inline __must_check bool try_get_page(struct page *page)
++{
++	page = compound_head(page);
++	if (WARN_ON_ONCE(page_ref_count(page) <= 0))
++		return false;
+ 	page_ref_inc(page);
++	return true;
+ }
+ 
+ static inline void put_page(struct page *page)
+diff --git a/include/linux/pipe_fs_i.h b/include/linux/pipe_fs_i.h
+index 66ee63cd5968..7897a3cc05b9 100644
+--- a/include/linux/pipe_fs_i.h
++++ b/include/linux/pipe_fs_i.h
+@@ -108,18 +108,20 @@ struct pipe_buf_operations {
+ 	/*
+ 	 * Get a reference to the pipe buffer.
+ 	 */
+-	void (*get)(struct pipe_inode_info *, struct pipe_buffer *);
++	bool (*get)(struct pipe_inode_info *, struct pipe_buffer *);
+ };
+ 
+ /**
+  * pipe_buf_get - get a reference to a pipe_buffer
+  * @pipe:	the pipe that the buffer belongs to
+  * @buf:	the buffer to get a reference to
++ *
++ * Return: %true if the reference was successfully obtained.
+  */
+-static inline void pipe_buf_get(struct pipe_inode_info *pipe,
++static inline __must_check bool pipe_buf_get(struct pipe_inode_info *pipe,
+ 				struct pipe_buffer *buf)
+ {
+-	buf->ops->get(pipe, buf);
++	return buf->ops->get(pipe, buf);
+ }
+ 
+ /**
+@@ -178,7 +180,7 @@ struct pipe_inode_info *alloc_pipe_info(void);
+ void free_pipe_info(struct pipe_inode_info *);
+ 
+ /* Generic pipe buffer ops functions */
+-void generic_pipe_buf_get(struct pipe_inode_info *, struct pipe_buffer *);
++bool generic_pipe_buf_get(struct pipe_inode_info *, struct pipe_buffer *);
+ int generic_pipe_buf_confirm(struct pipe_inode_info *, struct pipe_buffer *);
+ int generic_pipe_buf_steal(struct pipe_inode_info *, struct pipe_buffer *);
+ int generic_pipe_buf_nosteal(struct pipe_inode_info *, struct pipe_buffer *);
+diff --git a/include/linux/sched/signal.h b/include/linux/sched/signal.h
+index 13789d10a50e..76b8399b17f6 100644
+--- a/include/linux/sched/signal.h
++++ b/include/linux/sched/signal.h
+@@ -417,10 +417,20 @@ static inline void set_restore_sigmask(void)
+ 	set_thread_flag(TIF_RESTORE_SIGMASK);
+ 	WARN_ON(!test_thread_flag(TIF_SIGPENDING));
+ }
++
++static inline void clear_tsk_restore_sigmask(struct task_struct *tsk)
++{
++	clear_tsk_thread_flag(tsk, TIF_RESTORE_SIGMASK);
++}
++
+ static inline void clear_restore_sigmask(void)
+ {
+ 	clear_thread_flag(TIF_RESTORE_SIGMASK);
+ }
++static inline bool test_tsk_restore_sigmask(struct task_struct *tsk)
++{
++	return test_tsk_thread_flag(tsk, TIF_RESTORE_SIGMASK);
++}
+ static inline bool test_restore_sigmask(void)
+ {
+ 	return test_thread_flag(TIF_RESTORE_SIGMASK);
+@@ -438,6 +448,10 @@ static inline void set_restore_sigmask(void)
+ 	current->restore_sigmask = true;
+ 	WARN_ON(!test_thread_flag(TIF_SIGPENDING));
+ }
++static inline void clear_tsk_restore_sigmask(struct task_struct *tsk)
++{
++	tsk->restore_sigmask = false;
++}
+ static inline void clear_restore_sigmask(void)
+ {
+ 	current->restore_sigmask = false;
+@@ -446,6 +460,10 @@ static inline bool test_restore_sigmask(void)
+ {
+ 	return current->restore_sigmask;
+ }
++static inline bool test_tsk_restore_sigmask(struct task_struct *tsk)
++{
++	return tsk->restore_sigmask;
++}
+ static inline bool test_and_clear_restore_sigmask(void)
+ {
+ 	if (!current->restore_sigmask)
+diff --git a/include/net/tc_act/tc_gact.h b/include/net/tc_act/tc_gact.h
+index ef8dd0db70ce..56935bf027a7 100644
+--- a/include/net/tc_act/tc_gact.h
++++ b/include/net/tc_act/tc_gact.h
+@@ -56,7 +56,7 @@ static inline bool is_tcf_gact_goto_chain(const struct tc_action *a)
+ 
+ static inline u32 tcf_gact_goto_chain_index(const struct tc_action *a)
+ {
+-	return a->goto_chain->index;
++	return READ_ONCE(a->tcfa_action) & TC_ACT_EXT_VAL_MASK;
+ }
+ 
+ #endif /* __NET_TC_GACT_H */
+diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h
+index 13acb9803a6d..05d39e579953 100644
+--- a/include/net/xdp_sock.h
++++ b/include/net/xdp_sock.h
+@@ -36,7 +36,6 @@ struct xdp_umem {
+ 	u32 headroom;
+ 	u32 chunk_size_nohr;
+ 	struct user_struct *user;
+-	struct pid *pid;
+ 	unsigned long address;
+ 	refcount_t users;
+ 	struct work_struct work;
+diff --git a/kernel/ptrace.c b/kernel/ptrace.c
+index 771e93f9c43f..6f357f4fc859 100644
+--- a/kernel/ptrace.c
++++ b/kernel/ptrace.c
+@@ -29,6 +29,7 @@
+ #include <linux/hw_breakpoint.h>
+ #include <linux/cn_proc.h>
+ #include <linux/compat.h>
++#include <linux/sched/signal.h>
+ 
+ /*
+  * Access another process' address space via ptrace.
+@@ -924,18 +925,26 @@ int ptrace_request(struct task_struct *child, long request,
+ 			ret = ptrace_setsiginfo(child, &siginfo);
+ 		break;
+ 
+-	case PTRACE_GETSIGMASK:
++	case PTRACE_GETSIGMASK: {
++		sigset_t *mask;
++
+ 		if (addr != sizeof(sigset_t)) {
+ 			ret = -EINVAL;
+ 			break;
+ 		}
+ 
+-		if (copy_to_user(datavp, &child->blocked, sizeof(sigset_t)))
++		if (test_tsk_restore_sigmask(child))
++			mask = &child->saved_sigmask;
++		else
++			mask = &child->blocked;
++
++		if (copy_to_user(datavp, mask, sizeof(sigset_t)))
+ 			ret = -EFAULT;
+ 		else
+ 			ret = 0;
+ 
+ 		break;
++	}
+ 
+ 	case PTRACE_SETSIGMASK: {
+ 		sigset_t new_set;
+@@ -961,6 +970,8 @@ int ptrace_request(struct task_struct *child, long request,
+ 		child->blocked = new_set;
+ 		spin_unlock_irq(&child->sighand->siglock);
+ 
++		clear_tsk_restore_sigmask(child);
++
+ 		ret = 0;
+ 		break;
+ 	}
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index d07fc2836786..3842773b8aee 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -6843,12 +6843,16 @@ static void buffer_pipe_buf_release(struct pipe_inode_info *pipe,
+ 	buf->private = 0;
+ }
+ 
+-static void buffer_pipe_buf_get(struct pipe_inode_info *pipe,
++static bool buffer_pipe_buf_get(struct pipe_inode_info *pipe,
+ 				struct pipe_buffer *buf)
+ {
+ 	struct buffer_ref *ref = (struct buffer_ref *)buf->private;
+ 
++	if (refcount_read(&ref->refcount) > INT_MAX/2)
++		return false;
++
+ 	refcount_inc(&ref->refcount);
++	return true;
+ }
+ 
+ /* Pipe buffer operations for a buffer. */
+diff --git a/lib/sbitmap.c b/lib/sbitmap.c
+index 5b382c1244ed..155fe38756ec 100644
+--- a/lib/sbitmap.c
++++ b/lib/sbitmap.c
+@@ -591,6 +591,17 @@ EXPORT_SYMBOL_GPL(sbitmap_queue_wake_up);
+ void sbitmap_queue_clear(struct sbitmap_queue *sbq, unsigned int nr,
+ 			 unsigned int cpu)
+ {
++	/*
++	 * Once the clear bit is set, the bit may be allocated out.
++	 *
++	 * Orders READ/WRITE on the associated instance (such as a request
++	 * of blk_mq) by this bit to avoid racing with re-allocation,
++	 * and its pair is the memory barrier implied in __sbitmap_get_word.
++	 *
++	 * One invariant is that the clear bit has to be zero when the bit
++	 * is in use.
++	 */
++	smp_mb__before_atomic();
+ 	sbitmap_deferred_clear_bit(&sbq->sb, nr);
+ 
+ 	/*
+diff --git a/mm/gup.c b/mm/gup.c
+index 75029649baca..81e0bdefa2cc 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -157,8 +157,12 @@ retry:
+ 		goto retry;
+ 	}
+ 
+-	if (flags & FOLL_GET)
+-		get_page(page);
++	if (flags & FOLL_GET) {
++		if (unlikely(!try_get_page(page))) {
++			page = ERR_PTR(-ENOMEM);
++			goto out;
++		}
++	}
+ 	if (flags & FOLL_TOUCH) {
+ 		if ((flags & FOLL_WRITE) &&
+ 		    !pte_dirty(pte) && !PageDirty(page))
+@@ -295,7 +299,10 @@ retry_locked:
+ 			if (pmd_trans_unstable(pmd))
+ 				ret = -EBUSY;
+ 		} else {
+-			get_page(page);
++			if (unlikely(!try_get_page(page))) {
++				spin_unlock(ptl);
++				return ERR_PTR(-ENOMEM);
++			}
+ 			spin_unlock(ptl);
+ 			lock_page(page);
+ 			ret = split_huge_page(page);
+@@ -497,7 +504,10 @@ static int get_gate_page(struct mm_struct *mm, unsigned long address,
+ 		if (is_device_public_page(*page))
+ 			goto unmap;
+ 	}
+-	get_page(*page);
++	if (unlikely(!try_get_page(*page))) {
++		ret = -ENOMEM;
++		goto unmap;
++	}
+ out:
+ 	ret = 0;
+ unmap:
+@@ -1393,6 +1403,20 @@ static void undo_dev_pagemap(int *nr, int nr_start, struct page **pages)
+ 	}
+ }
+ 
++/*
++ * Return the compound head page with ref appropriately incremented,
++ * or NULL if that failed.
++ */
++static inline struct page *try_get_compound_head(struct page *page, int refs)
++{
++	struct page *head = compound_head(page);
++	if (WARN_ON_ONCE(page_ref_count(head) < 0))
++		return NULL;
++	if (unlikely(!page_cache_add_speculative(head, refs)))
++		return NULL;
++	return head;
++}
++
+ #ifdef CONFIG_ARCH_HAS_PTE_SPECIAL
+ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
+ 			 int write, struct page **pages, int *nr)
+@@ -1427,9 +1451,9 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
+ 
+ 		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
+ 		page = pte_page(pte);
+-		head = compound_head(page);
+ 
+-		if (!page_cache_get_speculative(head))
++		head = try_get_compound_head(page, 1);
++		if (!head)
+ 			goto pte_unmap;
+ 
+ 		if (unlikely(pte_val(pte) != pte_val(*ptep))) {
+@@ -1568,8 +1592,8 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
+ 		refs++;
+ 	} while (addr += PAGE_SIZE, addr != end);
+ 
+-	head = compound_head(pmd_page(orig));
+-	if (!page_cache_add_speculative(head, refs)) {
++	head = try_get_compound_head(pmd_page(orig), refs);
++	if (!head) {
+ 		*nr -= refs;
+ 		return 0;
+ 	}
+@@ -1606,8 +1630,8 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
+ 		refs++;
+ 	} while (addr += PAGE_SIZE, addr != end);
+ 
+-	head = compound_head(pud_page(orig));
+-	if (!page_cache_add_speculative(head, refs)) {
++	head = try_get_compound_head(pud_page(orig), refs);
++	if (!head) {
+ 		*nr -= refs;
+ 		return 0;
+ 	}
+@@ -1643,8 +1667,8 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,
+ 		refs++;
+ 	} while (addr += PAGE_SIZE, addr != end);
+ 
+-	head = compound_head(pgd_page(orig));
+-	if (!page_cache_add_speculative(head, refs)) {
++	head = try_get_compound_head(pgd_page(orig), refs);
++	if (!head) {
+ 		*nr -= refs;
+ 		return 0;
+ 	}
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 8dfdffc34a99..c220315dc533 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -4298,6 +4298,19 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
+ 
+ 		pfn_offset = (vaddr & ~huge_page_mask(h)) >> PAGE_SHIFT;
+ 		page = pte_page(huge_ptep_get(pte));
++
++		/*
++		 * Instead of doing 'try_get_page()' below in the same_page
++		 * loop, just check the count once here.
++		 */
++		if (unlikely(page_count(page) <= 0)) {
++			if (pages) {
++				spin_unlock(ptl);
++				remainder = 0;
++				err = -ENOMEM;
++				break;
++			}
++		}
+ same_page:
+ 		if (pages) {
+ 			pages[i] = mem_map_offset(page, pfn_offset);
+diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
+index ea51b2d898ec..c980ce43e3ba 100644
+--- a/mm/kasan/kasan.h
++++ b/mm/kasan/kasan.h
+@@ -164,7 +164,10 @@ static inline u8 random_tag(void)
+ #endif
+ 
+ #ifndef arch_kasan_set_tag
+-#define arch_kasan_set_tag(addr, tag)	((void *)(addr))
++static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
++{
++	return addr;
++}
+ #endif
+ #ifndef arch_kasan_reset_tag
+ #define arch_kasan_reset_tag(addr)	((void *)(addr))
+diff --git a/net/bridge/br_netfilter_hooks.c b/net/bridge/br_netfilter_hooks.c
+index 40d058378b52..fc605758323b 100644
+--- a/net/bridge/br_netfilter_hooks.c
++++ b/net/bridge/br_netfilter_hooks.c
+@@ -502,6 +502,7 @@ static unsigned int br_nf_pre_routing(void *priv,
+ 	nf_bridge->ipv4_daddr = ip_hdr(skb)->daddr;
+ 
+ 	skb->protocol = htons(ETH_P_IP);
++	skb->transport_header = skb->network_header + ip_hdr(skb)->ihl * 4;
+ 
+ 	NF_HOOK(NFPROTO_IPV4, NF_INET_PRE_ROUTING, state->net, state->sk, skb,
+ 		skb->dev, NULL,
+diff --git a/net/bridge/br_netfilter_ipv6.c b/net/bridge/br_netfilter_ipv6.c
+index 564710f88f93..e88d6641647b 100644
+--- a/net/bridge/br_netfilter_ipv6.c
++++ b/net/bridge/br_netfilter_ipv6.c
+@@ -235,6 +235,8 @@ unsigned int br_nf_pre_routing_ipv6(void *priv,
+ 	nf_bridge->ipv6_daddr = ipv6_hdr(skb)->daddr;
+ 
+ 	skb->protocol = htons(ETH_P_IPV6);
++	skb->transport_header = skb->network_header + sizeof(struct ipv6hdr);
++
+ 	NF_HOOK(NFPROTO_IPV6, NF_INET_PRE_ROUTING, state->net, state->sk, skb,
+ 		skb->dev, NULL,
+ 		br_nf_pre_routing_finish_ipv6);
+diff --git a/net/ipv6/netfilter/ip6t_srh.c b/net/ipv6/netfilter/ip6t_srh.c
+index 1059894a6f4c..4cb83fb69844 100644
+--- a/net/ipv6/netfilter/ip6t_srh.c
++++ b/net/ipv6/netfilter/ip6t_srh.c
+@@ -210,6 +210,8 @@ static bool srh1_mt6(const struct sk_buff *skb, struct xt_action_param *par)
+ 		psidoff = srhoff + sizeof(struct ipv6_sr_hdr) +
+ 			  ((srh->segments_left + 1) * sizeof(struct in6_addr));
+ 		psid = skb_header_pointer(skb, psidoff, sizeof(_psid), &_psid);
++		if (!psid)
++			return false;
+ 		if (NF_SRH_INVF(srhinfo, IP6T_SRH_INV_PSID,
+ 				ipv6_masked_addr_cmp(psid, &srhinfo->psid_msk,
+ 						     &srhinfo->psid_addr)))
+@@ -223,6 +225,8 @@ static bool srh1_mt6(const struct sk_buff *skb, struct xt_action_param *par)
+ 		nsidoff = srhoff + sizeof(struct ipv6_sr_hdr) +
+ 			  ((srh->segments_left - 1) * sizeof(struct in6_addr));
+ 		nsid = skb_header_pointer(skb, nsidoff, sizeof(_nsid), &_nsid);
++		if (!nsid)
++			return false;
+ 		if (NF_SRH_INVF(srhinfo, IP6T_SRH_INV_NSID,
+ 				ipv6_masked_addr_cmp(nsid, &srhinfo->nsid_msk,
+ 						     &srhinfo->nsid_addr)))
+@@ -233,6 +237,8 @@ static bool srh1_mt6(const struct sk_buff *skb, struct xt_action_param *par)
+ 	if (srhinfo->mt_flags & IP6T_SRH_LSID) {
+ 		lsidoff = srhoff + sizeof(struct ipv6_sr_hdr);
+ 		lsid = skb_header_pointer(skb, lsidoff, sizeof(_lsid), &_lsid);
++		if (!lsid)
++			return false;
+ 		if (NF_SRH_INVF(srhinfo, IP6T_SRH_INV_LSID,
+ 				ipv6_masked_addr_cmp(lsid, &srhinfo->lsid_msk,
+ 						     &srhinfo->lsid_addr)))
+diff --git a/net/netfilter/Kconfig b/net/netfilter/Kconfig
+index beb3a69ce1d4..0f0e5806bf77 100644
+--- a/net/netfilter/Kconfig
++++ b/net/netfilter/Kconfig
+@@ -995,6 +995,7 @@ config NETFILTER_XT_TARGET_TEE
+ 	depends on NETFILTER_ADVANCED
+ 	depends on IPV6 || IPV6=n
+ 	depends on !NF_CONNTRACK || NF_CONNTRACK
++	depends on IP6_NF_IPTABLES || !IP6_NF_IPTABLES
+ 	select NF_DUP_IPV4
+ 	select NF_DUP_IPV6 if IP6_NF_IPTABLES
+ 	---help---
+diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
+index fa61208371f8..321a0036fdf5 100644
+--- a/net/netfilter/nft_set_rbtree.c
++++ b/net/netfilter/nft_set_rbtree.c
+@@ -308,10 +308,6 @@ static void *nft_rbtree_deactivate(const struct net *net,
+ 		else if (d > 0)
+ 			parent = parent->rb_right;
+ 		else {
+-			if (!nft_set_elem_active(&rbe->ext, genmask)) {
+-				parent = parent->rb_left;
+-				continue;
+-			}
+ 			if (nft_rbtree_interval_end(rbe) &&
+ 			    !nft_rbtree_interval_end(this)) {
+ 				parent = parent->rb_left;
+@@ -320,6 +316,9 @@ static void *nft_rbtree_deactivate(const struct net *net,
+ 				   nft_rbtree_interval_end(this)) {
+ 				parent = parent->rb_right;
+ 				continue;
++			} else if (!nft_set_elem_active(&rbe->ext, genmask)) {
++				parent = parent->rb_left;
++				continue;
+ 			}
+ 			nft_rbtree_flush(net, set, rbe);
+ 			return rbe;
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index 7754aa3e434f..f88c2bd1335a 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -486,8 +486,8 @@ xs_read_stream_request(struct sock_xprt *transport, struct msghdr *msg,
+ 		int flags, struct rpc_rqst *req)
+ {
+ 	struct xdr_buf *buf = &req->rq_private_buf;
+-	size_t want, read;
+-	ssize_t ret;
++	size_t want, uninitialized_var(read);
++	ssize_t uninitialized_var(ret);
+ 
+ 	xs_read_header(transport, buf);
+ 
+diff --git a/net/xdp/xdp_umem.c b/net/xdp/xdp_umem.c
+index 37e1fe180769..9c767c68ed3a 100644
+--- a/net/xdp/xdp_umem.c
++++ b/net/xdp/xdp_umem.c
+@@ -189,9 +189,6 @@ static void xdp_umem_unaccount_pages(struct xdp_umem *umem)
+ 
+ static void xdp_umem_release(struct xdp_umem *umem)
+ {
+-	struct task_struct *task;
+-	struct mm_struct *mm;
+-
+ 	xdp_umem_clear_dev(umem);
+ 
+ 	if (umem->fq) {
+@@ -208,21 +205,10 @@ static void xdp_umem_release(struct xdp_umem *umem)
+ 
+ 	xdp_umem_unpin_pages(umem);
+ 
+-	task = get_pid_task(umem->pid, PIDTYPE_PID);
+-	put_pid(umem->pid);
+-	if (!task)
+-		goto out;
+-	mm = get_task_mm(task);
+-	put_task_struct(task);
+-	if (!mm)
+-		goto out;
+-
+-	mmput(mm);
+ 	kfree(umem->pages);
+ 	umem->pages = NULL;
+ 
+ 	xdp_umem_unaccount_pages(umem);
+-out:
+ 	kfree(umem);
+ }
+ 
+@@ -351,7 +337,6 @@ static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
+ 	if (size_chk < 0)
+ 		return -EINVAL;
+ 
+-	umem->pid = get_task_pid(current, PIDTYPE_PID);
+ 	umem->address = (unsigned long)addr;
+ 	umem->chunk_mask = ~((u64)chunk_size - 1);
+ 	umem->size = size;
+@@ -367,7 +352,7 @@ static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
+ 
+ 	err = xdp_umem_account_pages(umem);
+ 	if (err)
+-		goto out;
++		return err;
+ 
+ 	err = xdp_umem_pin_pages(umem);
+ 	if (err)
+@@ -386,8 +371,6 @@ static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
+ 
+ out_account:
+ 	xdp_umem_unaccount_pages(umem);
+-out:
+-	put_pid(umem->pid);
+ 	return err;
+ }
+ 
+diff --git a/scripts/kconfig/lxdialog/inputbox.c b/scripts/kconfig/lxdialog/inputbox.c
+index 611945611bf8..1dcfb288ee63 100644
+--- a/scripts/kconfig/lxdialog/inputbox.c
++++ b/scripts/kconfig/lxdialog/inputbox.c
+@@ -113,7 +113,8 @@ do_resize:
+ 			case KEY_DOWN:
+ 				break;
+ 			case KEY_BACKSPACE:
+-			case 127:
++			case 8:   /* ^H */
++			case 127: /* ^? */
+ 				if (pos) {
+ 					wattrset(dialog, dlg.inputbox.atr);
+ 					if (input_x == 0) {
+diff --git a/scripts/kconfig/nconf.c b/scripts/kconfig/nconf.c
+index a4670f4e825a..ac92c0ded6c5 100644
+--- a/scripts/kconfig/nconf.c
++++ b/scripts/kconfig/nconf.c
+@@ -1048,7 +1048,7 @@ static int do_match(int key, struct match_state *state, int *ans)
+ 		state->match_direction = FIND_NEXT_MATCH_UP;
+ 		*ans = get_mext_match(state->pattern,
+ 				state->match_direction);
+-	} else if (key == KEY_BACKSPACE || key == 127) {
++	} else if (key == KEY_BACKSPACE || key == 8 || key == 127) {
+ 		state->pattern[strlen(state->pattern)-1] = '\0';
+ 		adj_match_dir(&state->match_direction);
+ 	} else
+diff --git a/scripts/kconfig/nconf.gui.c b/scripts/kconfig/nconf.gui.c
+index 7be620a1fcdb..77f525a8617c 100644
+--- a/scripts/kconfig/nconf.gui.c
++++ b/scripts/kconfig/nconf.gui.c
+@@ -439,7 +439,8 @@ int dialog_inputbox(WINDOW *main_window,
+ 		case KEY_F(F_EXIT):
+ 		case KEY_F(F_BACK):
+ 			break;
+-		case 127:
++		case 8:   /* ^H */
++		case 127: /* ^? */
+ 		case KEY_BACKSPACE:
+ 			if (cursor_position > 0) {
+ 				memmove(&result[cursor_position-1],
+diff --git a/scripts/selinux/genheaders/genheaders.c b/scripts/selinux/genheaders/genheaders.c
+index 1ceedea847dd..544ca126a8a8 100644
+--- a/scripts/selinux/genheaders/genheaders.c
++++ b/scripts/selinux/genheaders/genheaders.c
+@@ -9,7 +9,6 @@
+ #include <string.h>
+ #include <errno.h>
+ #include <ctype.h>
+-#include <sys/socket.h>
+ 
+ struct security_class_mapping {
+ 	const char *name;
+diff --git a/scripts/selinux/mdp/mdp.c b/scripts/selinux/mdp/mdp.c
+index 073fe7537f6c..6d51b74bc679 100644
+--- a/scripts/selinux/mdp/mdp.c
++++ b/scripts/selinux/mdp/mdp.c
+@@ -32,7 +32,6 @@
+ #include <stdlib.h>
+ #include <unistd.h>
+ #include <string.h>
+-#include <sys/socket.h>
+ 
+ static void usage(char *name)
+ {
+diff --git a/security/selinux/include/classmap.h b/security/selinux/include/classmap.h
+index bd5fe0d3204a..201f7e588a29 100644
+--- a/security/selinux/include/classmap.h
++++ b/security/selinux/include/classmap.h
+@@ -1,5 +1,6 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
+ #include <linux/capability.h>
++#include <linux/socket.h>
+ 
+ #define COMMON_FILE_SOCK_PERMS "ioctl", "read", "write", "create", \
+     "getattr", "setattr", "lock", "relabelfrom", "relabelto", "append", "map"
+diff --git a/tools/build/feature/test-libopencsd.c b/tools/build/feature/test-libopencsd.c
+index d68eb4fb40cc..2b0e02c38870 100644
+--- a/tools/build/feature/test-libopencsd.c
++++ b/tools/build/feature/test-libopencsd.c
+@@ -4,9 +4,9 @@
+ /*
+  * Check OpenCSD library version is sufficient to provide required features
+  */
+-#define OCSD_MIN_VER ((0 << 16) | (10 << 8) | (0))
++#define OCSD_MIN_VER ((0 << 16) | (11 << 8) | (0))
+ #if !defined(OCSD_VER_NUM) || (OCSD_VER_NUM < OCSD_MIN_VER)
+-#error "OpenCSD >= 0.10.0 is required"
++#error "OpenCSD >= 0.11.0 is required"
+ #endif
+ 
+ int main(void)
+diff --git a/tools/perf/util/cs-etm-decoder/cs-etm-decoder.c b/tools/perf/util/cs-etm-decoder/cs-etm-decoder.c
+index 8c155575c6c5..2a8bf6b45a30 100644
+--- a/tools/perf/util/cs-etm-decoder/cs-etm-decoder.c
++++ b/tools/perf/util/cs-etm-decoder/cs-etm-decoder.c
+@@ -374,6 +374,7 @@ cs_etm_decoder__buffer_range(struct cs_etm_decoder *decoder,
+ 		break;
+ 	case OCSD_INSTR_ISB:
+ 	case OCSD_INSTR_DSB_DMB:
++	case OCSD_INSTR_WFI_WFE:
+ 	case OCSD_INSTR_OTHER:
+ 	default:
+ 		packet->last_instr_taken_branch = false;
+diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
+index 143f7057d581..596db1daee35 100644
+--- a/tools/perf/util/machine.c
++++ b/tools/perf/util/machine.c
+@@ -1358,6 +1358,20 @@ static void machine__set_kernel_mmap(struct machine *machine,
+ 		machine->vmlinux_map->end = ~0ULL;
+ }
+ 
++static void machine__update_kernel_mmap(struct machine *machine,
++				     u64 start, u64 end)
++{
++	struct map *map = machine__kernel_map(machine);
++
++	map__get(map);
++	map_groups__remove(&machine->kmaps, map);
++
++	machine__set_kernel_mmap(machine, start, end);
++
++	map_groups__insert(&machine->kmaps, map);
++	map__put(map);
++}
++
+ int machine__create_kernel_maps(struct machine *machine)
+ {
+ 	struct dso *kernel = machine__get_kernel(machine);
+@@ -1390,17 +1404,11 @@ int machine__create_kernel_maps(struct machine *machine)
+ 			goto out_put;
+ 		}
+ 
+-		/* we have a real start address now, so re-order the kmaps */
+-		map = machine__kernel_map(machine);
+-
+-		map__get(map);
+-		map_groups__remove(&machine->kmaps, map);
+-
+-		/* assume it's the last in the kmaps */
+-		machine__set_kernel_mmap(machine, addr, ~0ULL);
+-
+-		map_groups__insert(&machine->kmaps, map);
+-		map__put(map);
++		/*
++		 * we have a real start address now, so re-order the kmaps
++		 * assume it's the last in the kmaps
++		 */
++		machine__update_kernel_mmap(machine, addr, ~0ULL);
+ 	}
+ 
+ 	if (machine__create_extra_kernel_maps(machine, kernel))
+@@ -1536,7 +1544,7 @@ static int machine__process_kernel_mmap_event(struct machine *machine,
+ 		if (strstr(kernel->long_name, "vmlinux"))
+ 			dso__set_short_name(kernel, "[kernel.vmlinux]", false);
+ 
+-		machine__set_kernel_mmap(machine, event->mmap.start,
++		machine__update_kernel_mmap(machine, event->mmap.start,
+ 					 event->mmap.start + event->mmap.len);
+ 
+ 		/*
+diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
+index f9a0e9938480..cb4a992d6dd3 100644
+--- a/tools/testing/selftests/kvm/Makefile
++++ b/tools/testing/selftests/kvm/Makefile
+@@ -28,8 +28,8 @@ LIBKVM += $(LIBKVM_$(UNAME_M))
+ INSTALL_HDR_PATH = $(top_srcdir)/usr
+ LINUX_HDR_PATH = $(INSTALL_HDR_PATH)/include/
+ LINUX_TOOL_INCLUDE = $(top_srcdir)/tools/include
+-CFLAGS += -O2 -g -std=gnu99 -I$(LINUX_TOOL_INCLUDE) -I$(LINUX_HDR_PATH) -Iinclude -I$(<D) -Iinclude/$(UNAME_M) -I..
+-LDFLAGS += -pthread
++CFLAGS += -O2 -g -std=gnu99 -fno-stack-protector -fno-PIE -I$(LINUX_TOOL_INCLUDE) -I$(LINUX_HDR_PATH) -Iinclude -I$(<D) -Iinclude/$(UNAME_M) -I..
++LDFLAGS += -pthread -no-pie
+ 
+ # After inclusion, $(OUTPUT) is defined and
+ # $(TEST_GEN_PROGS) starts with $(OUTPUT)/
+diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
+index a84785b02557..07b71ad9734a 100644
+--- a/tools/testing/selftests/kvm/include/kvm_util.h
++++ b/tools/testing/selftests/kvm/include/kvm_util.h
+@@ -102,6 +102,7 @@ vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva);
+ struct kvm_run *vcpu_state(struct kvm_vm *vm, uint32_t vcpuid);
+ void vcpu_run(struct kvm_vm *vm, uint32_t vcpuid);
+ int _vcpu_run(struct kvm_vm *vm, uint32_t vcpuid);
++void vcpu_run_complete_io(struct kvm_vm *vm, uint32_t vcpuid);
+ void vcpu_set_mp_state(struct kvm_vm *vm, uint32_t vcpuid,
+ 		       struct kvm_mp_state *mp_state);
+ void vcpu_regs_get(struct kvm_vm *vm, uint32_t vcpuid, struct kvm_regs *regs);
+diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
+index b52cfdefecbf..efa0aad8b3c6 100644
+--- a/tools/testing/selftests/kvm/lib/kvm_util.c
++++ b/tools/testing/selftests/kvm/lib/kvm_util.c
+@@ -1121,6 +1121,22 @@ int _vcpu_run(struct kvm_vm *vm, uint32_t vcpuid)
+ 	return rc;
+ }
+ 
++void vcpu_run_complete_io(struct kvm_vm *vm, uint32_t vcpuid)
++{
++	struct vcpu *vcpu = vcpu_find(vm, vcpuid);
++	int ret;
++
++	TEST_ASSERT(vcpu != NULL, "vcpu not found, vcpuid: %u", vcpuid);
++
++	vcpu->state->immediate_exit = 1;
++	ret = ioctl(vcpu->fd, KVM_RUN, NULL);
++	vcpu->state->immediate_exit = 0;
++
++	TEST_ASSERT(ret == -1 && errno == EINTR,
++		    "KVM_RUN IOCTL didn't exit immediately, rc: %i, errno: %i",
++		    ret, errno);
++}
++
+ /*
+  * VM VCPU Set MP State
+  *
+diff --git a/tools/testing/selftests/kvm/x86_64/cr4_cpuid_sync_test.c b/tools/testing/selftests/kvm/x86_64/cr4_cpuid_sync_test.c
+index d503a51fad30..7c2c4d4055a8 100644
+--- a/tools/testing/selftests/kvm/x86_64/cr4_cpuid_sync_test.c
++++ b/tools/testing/selftests/kvm/x86_64/cr4_cpuid_sync_test.c
+@@ -87,22 +87,25 @@ int main(int argc, char *argv[])
+ 	while (1) {
+ 		rc = _vcpu_run(vm, VCPU_ID);
+ 
+-		if (run->exit_reason == KVM_EXIT_IO) {
+-			switch (get_ucall(vm, VCPU_ID, &uc)) {
+-			case UCALL_SYNC:
+-				/* emulate hypervisor clearing CR4.OSXSAVE */
+-				vcpu_sregs_get(vm, VCPU_ID, &sregs);
+-				sregs.cr4 &= ~X86_CR4_OSXSAVE;
+-				vcpu_sregs_set(vm, VCPU_ID, &sregs);
+-				break;
+-			case UCALL_ABORT:
+-				TEST_ASSERT(false, "Guest CR4 bit (OSXSAVE) unsynchronized with CPUID bit.");
+-				break;
+-			case UCALL_DONE:
+-				goto done;
+-			default:
+-				TEST_ASSERT(false, "Unknown ucall 0x%x.", uc.cmd);
+-			}
++		TEST_ASSERT(run->exit_reason == KVM_EXIT_IO,
++			    "Unexpected exit reason: %u (%s),\n",
++			    run->exit_reason,
++			    exit_reason_str(run->exit_reason));
++
++		switch (get_ucall(vm, VCPU_ID, &uc)) {
++		case UCALL_SYNC:
++			/* emulate hypervisor clearing CR4.OSXSAVE */
++			vcpu_sregs_get(vm, VCPU_ID, &sregs);
++			sregs.cr4 &= ~X86_CR4_OSXSAVE;
++			vcpu_sregs_set(vm, VCPU_ID, &sregs);
++			break;
++		case UCALL_ABORT:
++			TEST_ASSERT(false, "Guest CR4 bit (OSXSAVE) unsynchronized with CPUID bit.");
++			break;
++		case UCALL_DONE:
++			goto done;
++		default:
++			TEST_ASSERT(false, "Unknown ucall 0x%x.", uc.cmd);
+ 		}
+ 	}
+ 
+diff --git a/tools/testing/selftests/kvm/x86_64/state_test.c b/tools/testing/selftests/kvm/x86_64/state_test.c
+index 4b3f556265f1..30f75856cf39 100644
+--- a/tools/testing/selftests/kvm/x86_64/state_test.c
++++ b/tools/testing/selftests/kvm/x86_64/state_test.c
+@@ -134,6 +134,11 @@ int main(int argc, char *argv[])
+ 
+ 	struct kvm_cpuid_entry2 *entry = kvm_get_supported_cpuid_entry(1);
+ 
++	if (!kvm_check_cap(KVM_CAP_IMMEDIATE_EXIT)) {
++		fprintf(stderr, "immediate_exit not available, skipping test\n");
++		exit(KSFT_SKIP);
++	}
++
+ 	/* Create VM */
+ 	vm = vm_create_default(VCPU_ID, 0, guest_code);
+ 	vcpu_set_cpuid(vm, VCPU_ID, kvm_get_supported_cpuid());
+@@ -156,8 +161,6 @@ int main(int argc, char *argv[])
+ 			    stage, run->exit_reason,
+ 			    exit_reason_str(run->exit_reason));
+ 
+-		memset(&regs1, 0, sizeof(regs1));
+-		vcpu_regs_get(vm, VCPU_ID, &regs1);
+ 		switch (get_ucall(vm, VCPU_ID, &uc)) {
+ 		case UCALL_ABORT:
+ 			TEST_ASSERT(false, "%s at %s:%d", (const char *)uc.args[0],
+@@ -176,6 +179,17 @@ int main(int argc, char *argv[])
+ 			    uc.args[1] == stage, "Unexpected register values vmexit #%lx, got %lx",
+ 			    stage, (ulong)uc.args[1]);
+ 
++		/*
++		 * When KVM exits to userspace with KVM_EXIT_IO, KVM guarantees
++		 * guest state is consistent only after userspace re-enters the
++		 * kernel with KVM_RUN.  Complete IO prior to migrating state
++		 * to a new VM.
++		 */
++		vcpu_run_complete_io(vm, VCPU_ID);
++
++		memset(&regs1, 0, sizeof(regs1));
++		vcpu_regs_get(vm, VCPU_ID, &regs1);
++
+ 		state = vcpu_save_state(vm, VCPU_ID);
+ 		kvm_vm_release(vm);
+ 
+diff --git a/virt/kvm/arm/hyp/vgic-v3-sr.c b/virt/kvm/arm/hyp/vgic-v3-sr.c
+index 9652c453480f..3c3f7cda95c7 100644
+--- a/virt/kvm/arm/hyp/vgic-v3-sr.c
++++ b/virt/kvm/arm/hyp/vgic-v3-sr.c
+@@ -222,7 +222,7 @@ void __hyp_text __vgic_v3_save_state(struct kvm_vcpu *vcpu)
+ 		}
+ 	}
+ 
+-	if (used_lrs) {
++	if (used_lrs || cpu_if->its_vpe.its_vm) {
+ 		int i;
+ 		u32 elrsr;
+ 
+@@ -247,7 +247,7 @@ void __hyp_text __vgic_v3_restore_state(struct kvm_vcpu *vcpu)
+ 	u64 used_lrs = vcpu->arch.vgic_cpu.used_lrs;
+ 	int i;
+ 
+-	if (used_lrs) {
++	if (used_lrs || cpu_if->its_vpe.its_vm) {
+ 		write_gicreg(cpu_if->vgic_hcr, ICH_HCR_EL2);
+ 
+ 		for (i = 0; i < used_lrs; i++)
+diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
+index 5cc22cdaa5ba..31e22b615d99 100644
+--- a/virt/kvm/arm/mmu.c
++++ b/virt/kvm/arm/mmu.c
+@@ -1060,25 +1060,43 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
+ {
+ 	pmd_t *pmd, old_pmd;
+ 
++retry:
+ 	pmd = stage2_get_pmd(kvm, cache, addr);
+ 	VM_BUG_ON(!pmd);
+ 
+ 	old_pmd = *pmd;
++	/*
++	 * Multiple vcpus faulting on the same PMD entry, can
++	 * lead to them sequentially updating the PMD with the
++	 * same value. Following the break-before-make
++	 * (pmd_clear() followed by tlb_flush()) process can
++	 * hinder forward progress due to refaults generated
++	 * on missing translations.
++	 *
++	 * Skip updating the page table if the entry is
++	 * unchanged.
++	 */
++	if (pmd_val(old_pmd) == pmd_val(*new_pmd))
++		return 0;
++
+ 	if (pmd_present(old_pmd)) {
+ 		/*
+-		 * Multiple vcpus faulting on the same PMD entry, can
+-		 * lead to them sequentially updating the PMD with the
+-		 * same value. Following the break-before-make
+-		 * (pmd_clear() followed by tlb_flush()) process can
+-		 * hinder forward progress due to refaults generated
+-		 * on missing translations.
++		 * If we already have PTE level mapping for this block,
++		 * we must unmap it to avoid inconsistent TLB state and
++		 * leaking the table page. We could end up in this situation
++		 * if the memory slot was marked for dirty logging and was
++		 * reverted, leaving PTE level mappings for the pages accessed
++		 * during the period. So, unmap the PTE level mapping for this
++		 * block and retry, as we could have released the upper level
++		 * table in the process.
+ 		 *
+-		 * Skip updating the page table if the entry is
+-		 * unchanged.
++		 * Normal THP split/merge follows mmu_notifier callbacks and do
++		 * get handled accordingly.
+ 		 */
+-		if (pmd_val(old_pmd) == pmd_val(*new_pmd))
+-			return 0;
+-
++		if (!pmd_thp_or_huge(old_pmd)) {
++			unmap_stage2_range(kvm, addr & S2_PMD_MASK, S2_PMD_SIZE);
++			goto retry;
++		}
+ 		/*
+ 		 * Mapping in huge pages should only happen through a
+ 		 * fault.  If a page is merged into a transparent huge
+@@ -1090,8 +1108,7 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
+ 		 * should become splitting first, unmapped, merged,
+ 		 * and mapped back in on-demand.
+ 		 */
+-		VM_BUG_ON(pmd_pfn(old_pmd) != pmd_pfn(*new_pmd));
+-
++		WARN_ON_ONCE(pmd_pfn(old_pmd) != pmd_pfn(*new_pmd));
+ 		pmd_clear(pmd);
+ 		kvm_tlb_flush_vmid_ipa(kvm, addr);
+ 	} else {
+@@ -1107,6 +1124,7 @@ static int stage2_set_pud_huge(struct kvm *kvm, struct kvm_mmu_memory_cache *cac
+ {
+ 	pud_t *pudp, old_pud;
+ 
++retry:
+ 	pudp = stage2_get_pud(kvm, cache, addr);
+ 	VM_BUG_ON(!pudp);
+ 
+@@ -1114,14 +1132,23 @@ static int stage2_set_pud_huge(struct kvm *kvm, struct kvm_mmu_memory_cache *cac
+ 
+ 	/*
+ 	 * A large number of vcpus faulting on the same stage 2 entry,
+-	 * can lead to a refault due to the
+-	 * stage2_pud_clear()/tlb_flush(). Skip updating the page
+-	 * tables if there is no change.
++	 * can lead to a refault due to the stage2_pud_clear()/tlb_flush().
++	 * Skip updating the page tables if there is no change.
+ 	 */
+ 	if (pud_val(old_pud) == pud_val(*new_pudp))
+ 		return 0;
+ 
+ 	if (stage2_pud_present(kvm, old_pud)) {
++		/*
++		 * If we already have table level mapping for this block, unmap
++		 * the range for this block and retry.
++		 */
++		if (!stage2_pud_huge(kvm, old_pud)) {
++			unmap_stage2_range(kvm, addr & S2_PUD_MASK, S2_PUD_SIZE);
++			goto retry;
++		}
++
++		WARN_ON_ONCE(kvm_pud_pfn(old_pud) != kvm_pud_pfn(*new_pudp));
+ 		stage2_pud_clear(kvm, pudp);
+ 		kvm_tlb_flush_vmid_ipa(kvm, addr);
+ 	} else {
+diff --git a/virt/kvm/arm/vgic/vgic-its.c b/virt/kvm/arm/vgic/vgic-its.c
+index ab3f47745d9c..fcb2fceaa4a5 100644
+--- a/virt/kvm/arm/vgic/vgic-its.c
++++ b/virt/kvm/arm/vgic/vgic-its.c
+@@ -754,8 +754,9 @@ static bool vgic_its_check_id(struct vgic_its *its, u64 baser, u32 id,
+ 	u64 indirect_ptr, type = GITS_BASER_TYPE(baser);
+ 	phys_addr_t base = GITS_BASER_ADDR_48_to_52(baser);
+ 	int esz = GITS_BASER_ENTRY_SIZE(baser);
+-	int index;
++	int index, idx;
+ 	gfn_t gfn;
++	bool ret;
+ 
+ 	switch (type) {
+ 	case GITS_BASER_TYPE_DEVICE:
+@@ -782,7 +783,8 @@ static bool vgic_its_check_id(struct vgic_its *its, u64 baser, u32 id,
+ 
+ 		if (eaddr)
+ 			*eaddr = addr;
+-		return kvm_is_visible_gfn(its->dev->kvm, gfn);
++
++		goto out;
+ 	}
+ 
+ 	/* calculate and check the index into the 1st level */
+@@ -812,7 +814,12 @@ static bool vgic_its_check_id(struct vgic_its *its, u64 baser, u32 id,
+ 
+ 	if (eaddr)
+ 		*eaddr = indirect_ptr;
+-	return kvm_is_visible_gfn(its->dev->kvm, gfn);
++
++out:
++	idx = srcu_read_lock(&its->dev->kvm->srcu);
++	ret = kvm_is_visible_gfn(its->dev->kvm, gfn);
++	srcu_read_unlock(&its->dev->kvm->srcu, idx);
++	return ret;
+ }
+ 
+ static int vgic_its_alloc_collection(struct vgic_its *its,
+@@ -1919,7 +1926,7 @@ static int vgic_its_save_ite(struct vgic_its *its, struct its_device *dev,
+ 	       ((u64)ite->irq->intid << KVM_ITS_ITE_PINTID_SHIFT) |
+ 		ite->collection->collection_id;
+ 	val = cpu_to_le64(val);
+-	return kvm_write_guest(kvm, gpa, &val, ite_esz);
++	return kvm_write_guest_lock(kvm, gpa, &val, ite_esz);
+ }
+ 
+ /**
+@@ -2066,7 +2073,7 @@ static int vgic_its_save_dte(struct vgic_its *its, struct its_device *dev,
+ 	       (itt_addr_field << KVM_ITS_DTE_ITTADDR_SHIFT) |
+ 		(dev->num_eventid_bits - 1));
+ 	val = cpu_to_le64(val);
+-	return kvm_write_guest(kvm, ptr, &val, dte_esz);
++	return kvm_write_guest_lock(kvm, ptr, &val, dte_esz);
+ }
+ 
+ /**
+@@ -2246,7 +2253,7 @@ static int vgic_its_save_cte(struct vgic_its *its,
+ 	       ((u64)collection->target_addr << KVM_ITS_CTE_RDBASE_SHIFT) |
+ 	       collection->collection_id);
+ 	val = cpu_to_le64(val);
+-	return kvm_write_guest(its->dev->kvm, gpa, &val, esz);
++	return kvm_write_guest_lock(its->dev->kvm, gpa, &val, esz);
+ }
+ 
+ static int vgic_its_restore_cte(struct vgic_its *its, gpa_t gpa, int esz)
+@@ -2317,7 +2324,7 @@ static int vgic_its_save_collection_table(struct vgic_its *its)
+ 	 */
+ 	val = 0;
+ 	BUG_ON(cte_esz > sizeof(val));
+-	ret = kvm_write_guest(its->dev->kvm, gpa, &val, cte_esz);
++	ret = kvm_write_guest_lock(its->dev->kvm, gpa, &val, cte_esz);
+ 	return ret;
+ }
+ 
+diff --git a/virt/kvm/arm/vgic/vgic-v3.c b/virt/kvm/arm/vgic/vgic-v3.c
+index 4ee0aeb9a905..89260964be73 100644
+--- a/virt/kvm/arm/vgic/vgic-v3.c
++++ b/virt/kvm/arm/vgic/vgic-v3.c
+@@ -358,7 +358,7 @@ retry:
+ 	if (status) {
+ 		/* clear consumed data */
+ 		val &= ~(1 << bit_nr);
+-		ret = kvm_write_guest(kvm, ptr, &val, 1);
++		ret = kvm_write_guest_lock(kvm, ptr, &val, 1);
+ 		if (ret)
+ 			return ret;
+ 	}
+@@ -409,7 +409,7 @@ int vgic_v3_save_pending_tables(struct kvm *kvm)
+ 		else
+ 			val &= ~(1 << bit_nr);
+ 
+-		ret = kvm_write_guest(kvm, ptr, &val, 1);
++		ret = kvm_write_guest_lock(kvm, ptr, &val, 1);
+ 		if (ret)
+ 			return ret;
+ 	}
+diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
+index abd9c7352677..3af69f2a3866 100644
+--- a/virt/kvm/arm/vgic/vgic.c
++++ b/virt/kvm/arm/vgic/vgic.c
+@@ -867,15 +867,21 @@ void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu)
+ 	 * either observe the new interrupt before or after doing this check,
+ 	 * and introducing additional synchronization mechanism doesn't change
+ 	 * this.
++	 *
++	 * Note that we still need to go through the whole thing if anything
++	 * can be directly injected (GICv4).
+ 	 */
+-	if (list_empty(&vcpu->arch.vgic_cpu.ap_list_head))
++	if (list_empty(&vcpu->arch.vgic_cpu.ap_list_head) &&
++	    !vgic_supports_direct_msis(vcpu->kvm))
+ 		return;
+ 
+ 	DEBUG_SPINLOCK_BUG_ON(!irqs_disabled());
+ 
+-	raw_spin_lock(&vcpu->arch.vgic_cpu.ap_list_lock);
+-	vgic_flush_lr_state(vcpu);
+-	raw_spin_unlock(&vcpu->arch.vgic_cpu.ap_list_lock);
++	if (!list_empty(&vcpu->arch.vgic_cpu.ap_list_head)) {
++		raw_spin_lock(&vcpu->arch.vgic_cpu.ap_list_lock);
++		vgic_flush_lr_state(vcpu);
++		raw_spin_unlock(&vcpu->arch.vgic_cpu.ap_list_lock);
++	}
+ 
+ 	if (can_access_vgic_from_kernel())
+ 		vgic_restore_state(vcpu);


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [gentoo-commits] proj/linux-patches:5.0 commit in: /
@ 2019-05-05 13:39 Mike Pagano
  0 siblings, 0 replies; 28+ messages in thread
From: Mike Pagano @ 2019-05-05 13:39 UTC (permalink / raw
  To: gentoo-commits

commit:     b721073a475fe58039da5c0daf37b3ec3cdbd942
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun May  5 13:38:57 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun May  5 13:38:57 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b721073a

Linux patch 5.0.13

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 1012_linux-5.0.13.patch | 1280 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 1280 insertions(+)

diff --git a/1012_linux-5.0.13.patch b/1012_linux-5.0.13.patch
new file mode 100644
index 0000000..b3581f4
--- /dev/null
+++ b/1012_linux-5.0.13.patch
@@ -0,0 +1,1280 @@
+diff --git a/Makefile b/Makefile
+index fd044f594bbf..51a819544505 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 0
+-SUBLEVEL = 12
++SUBLEVEL = 13
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
+index dabfcf7c3941..7a0e64ccd6ff 100644
+--- a/arch/x86/include/uapi/asm/kvm.h
++++ b/arch/x86/include/uapi/asm/kvm.h
+@@ -381,6 +381,7 @@ struct kvm_sync_regs {
+ #define KVM_X86_QUIRK_LINT0_REENABLED	(1 << 0)
+ #define KVM_X86_QUIRK_CD_NW_CLEARED	(1 << 1)
+ #define KVM_X86_QUIRK_LAPIC_MMIO_HOLE	(1 << 2)
++#define KVM_X86_QUIRK_OUT_7E_INC_RIP	(1 << 3)
+ 
+ #define KVM_STATE_NESTED_GUEST_MODE	0x00000001
+ #define KVM_STATE_NESTED_RUN_PENDING	0x00000002
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index f90b3a948291..a4bcac94392c 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -5407,7 +5407,7 @@ static int vmx_set_nested_state(struct kvm_vcpu *vcpu,
+ 		return ret;
+ 
+ 	/* Empty 'VMXON' state is permitted */
+-	if (kvm_state->size < sizeof(kvm_state) + sizeof(*vmcs12))
++	if (kvm_state->size < sizeof(*kvm_state) + sizeof(*vmcs12))
+ 		return 0;
+ 
+ 	if (kvm_state->vmx.vmcs_pa != -1ull) {
+@@ -5451,7 +5451,7 @@ static int vmx_set_nested_state(struct kvm_vcpu *vcpu,
+ 	    vmcs12->vmcs_link_pointer != -1ull) {
+ 		struct vmcs12 *shadow_vmcs12 = get_shadow_vmcs12(vcpu);
+ 
+-		if (kvm_state->size < sizeof(kvm_state) + 2 * sizeof(*vmcs12))
++		if (kvm_state->size < sizeof(*kvm_state) + 2 * sizeof(*vmcs12))
+ 			return -EINVAL;
+ 
+ 		if (copy_from_user(shadow_vmcs12,
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 8c9fb6453b2f..7e413ea19a9a 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -6536,6 +6536,12 @@ int kvm_emulate_instruction_from_buffer(struct kvm_vcpu *vcpu,
+ }
+ EXPORT_SYMBOL_GPL(kvm_emulate_instruction_from_buffer);
+ 
++static int complete_fast_pio_out_port_0x7e(struct kvm_vcpu *vcpu)
++{
++	vcpu->arch.pio.count = 0;
++	return 1;
++}
++
+ static int complete_fast_pio_out(struct kvm_vcpu *vcpu)
+ {
+ 	vcpu->arch.pio.count = 0;
+@@ -6552,12 +6558,23 @@ static int kvm_fast_pio_out(struct kvm_vcpu *vcpu, int size,
+ 	unsigned long val = kvm_register_read(vcpu, VCPU_REGS_RAX);
+ 	int ret = emulator_pio_out_emulated(&vcpu->arch.emulate_ctxt,
+ 					    size, port, &val, 1);
++	if (ret)
++		return ret;
+ 
+-	if (!ret) {
++	/*
++	 * Workaround userspace that relies on old KVM behavior of %rip being
++	 * incremented prior to exiting to userspace to handle "OUT 0x7e".
++	 */
++	if (port == 0x7e &&
++	    kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_OUT_7E_INC_RIP)) {
++		vcpu->arch.complete_userspace_io =
++			complete_fast_pio_out_port_0x7e;
++		kvm_skip_emulated_instruction(vcpu);
++	} else {
+ 		vcpu->arch.pio.linear_rip = kvm_get_linear_rip(vcpu);
+ 		vcpu->arch.complete_userspace_io = complete_fast_pio_out;
+ 	}
+-	return ret;
++	return 0;
+ }
+ 
+ static int complete_fast_pio_in(struct kvm_vcpu *vcpu)
+diff --git a/drivers/net/dsa/bcm_sf2_cfp.c b/drivers/net/dsa/bcm_sf2_cfp.c
+index e14663ab6dbc..8dd74700a2ef 100644
+--- a/drivers/net/dsa/bcm_sf2_cfp.c
++++ b/drivers/net/dsa/bcm_sf2_cfp.c
+@@ -854,6 +854,9 @@ static int bcm_sf2_cfp_rule_set(struct dsa_switch *ds, int port,
+ 	     fs->m_ext.data[1]))
+ 		return -EINVAL;
+ 
++	if (fs->location != RX_CLS_LOC_ANY && fs->location >= CFP_NUM_RULES)
++		return -EINVAL;
++
+ 	if (fs->location != RX_CLS_LOC_ANY &&
+ 	    test_bit(fs->location, priv->cfp.used))
+ 		return -EBUSY;
+@@ -942,6 +945,9 @@ static int bcm_sf2_cfp_rule_del(struct bcm_sf2_priv *priv, int port, u32 loc)
+ 	struct cfp_rule *rule;
+ 	int ret;
+ 
++	if (loc >= CFP_NUM_RULES)
++		return -EINVAL;
++
+ 	/* Refuse deleting unused rules, and those that are not unique since
+ 	 * that could leave IPv6 rules with one of the chained rule in the
+ 	 * table.
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 40ca339ec3df..c6ddbc0e084e 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -1621,7 +1621,7 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
+ 			netdev_warn(bp->dev, "RX buffer error %x\n", rx_err);
+ 			bnxt_sched_reset(bp, rxr);
+ 		}
+-		goto next_rx;
++		goto next_rx_no_len;
+ 	}
+ 
+ 	len = le32_to_cpu(rxcmp->rx_cmp_len_flags_type) >> RX_CMP_LEN_SHIFT;
+@@ -1702,12 +1702,13 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
+ 	rc = 1;
+ 
+ next_rx:
+-	rxr->rx_prod = NEXT_RX(prod);
+-	rxr->rx_next_cons = NEXT_RX(cons);
+-
+ 	cpr->rx_packets += 1;
+ 	cpr->rx_bytes += len;
+ 
++next_rx_no_len:
++	rxr->rx_prod = NEXT_RX(prod);
++	rxr->rx_next_cons = NEXT_RX(cons);
++
+ next_rx_no_prod_no_len:
+ 	*raw_cons = tmp_raw_cons;
+ 
+@@ -5131,10 +5132,10 @@ static void bnxt_hwrm_ring_free(struct bnxt *bp, bool close_path)
+ 	for (i = 0; i < bp->tx_nr_rings; i++) {
+ 		struct bnxt_tx_ring_info *txr = &bp->tx_ring[i];
+ 		struct bnxt_ring_struct *ring = &txr->tx_ring_struct;
+-		u32 cmpl_ring_id;
+ 
+-		cmpl_ring_id = bnxt_cp_ring_for_tx(bp, txr);
+ 		if (ring->fw_ring_id != INVALID_HW_RING_ID) {
++			u32 cmpl_ring_id = bnxt_cp_ring_for_tx(bp, txr);
++
+ 			hwrm_ring_free_send_msg(bp, ring,
+ 						RING_FREE_REQ_RING_TYPE_TX,
+ 						close_path ? cmpl_ring_id :
+@@ -5147,10 +5148,10 @@ static void bnxt_hwrm_ring_free(struct bnxt *bp, bool close_path)
+ 		struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i];
+ 		struct bnxt_ring_struct *ring = &rxr->rx_ring_struct;
+ 		u32 grp_idx = rxr->bnapi->index;
+-		u32 cmpl_ring_id;
+ 
+-		cmpl_ring_id = bnxt_cp_ring_for_rx(bp, rxr);
+ 		if (ring->fw_ring_id != INVALID_HW_RING_ID) {
++			u32 cmpl_ring_id = bnxt_cp_ring_for_rx(bp, rxr);
++
+ 			hwrm_ring_free_send_msg(bp, ring,
+ 						RING_FREE_REQ_RING_TYPE_RX,
+ 						close_path ? cmpl_ring_id :
+@@ -5169,10 +5170,10 @@ static void bnxt_hwrm_ring_free(struct bnxt *bp, bool close_path)
+ 		struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i];
+ 		struct bnxt_ring_struct *ring = &rxr->rx_agg_ring_struct;
+ 		u32 grp_idx = rxr->bnapi->index;
+-		u32 cmpl_ring_id;
+ 
+-		cmpl_ring_id = bnxt_cp_ring_for_rx(bp, rxr);
+ 		if (ring->fw_ring_id != INVALID_HW_RING_ID) {
++			u32 cmpl_ring_id = bnxt_cp_ring_for_rx(bp, rxr);
++
+ 			hwrm_ring_free_send_msg(bp, ring, type,
+ 						close_path ? cmpl_ring_id :
+ 						INVALID_HW_RING_ID);
+@@ -5311,17 +5312,16 @@ __bnxt_hwrm_reserve_pf_rings(struct bnxt *bp, struct hwrm_func_cfg_input *req,
+ 	req->num_tx_rings = cpu_to_le16(tx_rings);
+ 	if (BNXT_NEW_RM(bp)) {
+ 		enables |= rx_rings ? FUNC_CFG_REQ_ENABLES_NUM_RX_RINGS : 0;
++		enables |= stats ? FUNC_CFG_REQ_ENABLES_NUM_STAT_CTXS : 0;
+ 		if (bp->flags & BNXT_FLAG_CHIP_P5) {
+ 			enables |= cp_rings ? FUNC_CFG_REQ_ENABLES_NUM_MSIX : 0;
+ 			enables |= tx_rings + ring_grps ?
+-				   FUNC_CFG_REQ_ENABLES_NUM_CMPL_RINGS |
+-				   FUNC_CFG_REQ_ENABLES_NUM_STAT_CTXS : 0;
++				   FUNC_CFG_REQ_ENABLES_NUM_CMPL_RINGS : 0;
+ 			enables |= rx_rings ?
+ 				FUNC_CFG_REQ_ENABLES_NUM_RSSCOS_CTXS : 0;
+ 		} else {
+ 			enables |= cp_rings ?
+-				   FUNC_CFG_REQ_ENABLES_NUM_CMPL_RINGS |
+-				   FUNC_CFG_REQ_ENABLES_NUM_STAT_CTXS : 0;
++				   FUNC_CFG_REQ_ENABLES_NUM_CMPL_RINGS : 0;
+ 			enables |= ring_grps ?
+ 				   FUNC_CFG_REQ_ENABLES_NUM_HW_RING_GRPS |
+ 				   FUNC_CFG_REQ_ENABLES_NUM_RSSCOS_CTXS : 0;
+@@ -5361,14 +5361,13 @@ __bnxt_hwrm_reserve_vf_rings(struct bnxt *bp,
+ 	enables |= tx_rings ? FUNC_VF_CFG_REQ_ENABLES_NUM_TX_RINGS : 0;
+ 	enables |= rx_rings ? FUNC_VF_CFG_REQ_ENABLES_NUM_RX_RINGS |
+ 			      FUNC_VF_CFG_REQ_ENABLES_NUM_RSSCOS_CTXS : 0;
++	enables |= stats ? FUNC_VF_CFG_REQ_ENABLES_NUM_STAT_CTXS : 0;
+ 	if (bp->flags & BNXT_FLAG_CHIP_P5) {
+ 		enables |= tx_rings + ring_grps ?
+-			   FUNC_VF_CFG_REQ_ENABLES_NUM_CMPL_RINGS |
+-			   FUNC_VF_CFG_REQ_ENABLES_NUM_STAT_CTXS : 0;
++			   FUNC_VF_CFG_REQ_ENABLES_NUM_CMPL_RINGS : 0;
+ 	} else {
+ 		enables |= cp_rings ?
+-			   FUNC_VF_CFG_REQ_ENABLES_NUM_CMPL_RINGS |
+-			   FUNC_VF_CFG_REQ_ENABLES_NUM_STAT_CTXS : 0;
++			   FUNC_VF_CFG_REQ_ENABLES_NUM_CMPL_RINGS : 0;
+ 		enables |= ring_grps ?
+ 			   FUNC_VF_CFG_REQ_ENABLES_NUM_HW_RING_GRPS : 0;
+ 	}
+@@ -6745,6 +6744,7 @@ static int bnxt_hwrm_port_qstats_ext(struct bnxt *bp)
+ 	struct hwrm_queue_pri2cos_qcfg_input req2 = {0};
+ 	struct hwrm_port_qstats_ext_input req = {0};
+ 	struct bnxt_pf_info *pf = &bp->pf;
++	u32 tx_stat_size;
+ 	int rc;
+ 
+ 	if (!(bp->flags & BNXT_FLAG_PORT_STATS_EXT))
+@@ -6754,13 +6754,16 @@ static int bnxt_hwrm_port_qstats_ext(struct bnxt *bp)
+ 	req.port_id = cpu_to_le16(pf->port_id);
+ 	req.rx_stat_size = cpu_to_le16(sizeof(struct rx_port_stats_ext));
+ 	req.rx_stat_host_addr = cpu_to_le64(bp->hw_rx_port_stats_ext_map);
+-	req.tx_stat_size = cpu_to_le16(sizeof(struct tx_port_stats_ext));
++	tx_stat_size = bp->hw_tx_port_stats_ext ?
++		       sizeof(*bp->hw_tx_port_stats_ext) : 0;
++	req.tx_stat_size = cpu_to_le16(tx_stat_size);
+ 	req.tx_stat_host_addr = cpu_to_le64(bp->hw_tx_port_stats_ext_map);
+ 	mutex_lock(&bp->hwrm_cmd_lock);
+ 	rc = _hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
+ 	if (!rc) {
+ 		bp->fw_rx_stats_ext_size = le16_to_cpu(resp->rx_stat_size) / 8;
+-		bp->fw_tx_stats_ext_size = le16_to_cpu(resp->tx_stat_size) / 8;
++		bp->fw_tx_stats_ext_size = tx_stat_size ?
++			le16_to_cpu(resp->tx_stat_size) / 8 : 0;
+ 	} else {
+ 		bp->fw_rx_stats_ext_size = 0;
+ 		bp->fw_tx_stats_ext_size = 0;
+@@ -8889,8 +8892,15 @@ static int bnxt_cfg_rx_mode(struct bnxt *bp)
+ 
+ skip_uc:
+ 	rc = bnxt_hwrm_cfa_l2_set_rx_mask(bp, 0);
++	if (rc && vnic->mc_list_count) {
++		netdev_info(bp->dev, "Failed setting MC filters rc: %d, turning on ALL_MCAST mode\n",
++			    rc);
++		vnic->rx_mask |= CFA_L2_SET_RX_MASK_REQ_MASK_ALL_MCAST;
++		vnic->mc_list_count = 0;
++		rc = bnxt_hwrm_cfa_l2_set_rx_mask(bp, 0);
++	}
+ 	if (rc)
+-		netdev_err(bp->dev, "HWRM cfa l2 rx mask failure rc: %x\n",
++		netdev_err(bp->dev, "HWRM cfa l2 rx mask failure rc: %d\n",
+ 			   rc);
+ 
+ 	return rc;
+@@ -10625,6 +10635,7 @@ init_err_cleanup_tc:
+ 	bnxt_clear_int_mode(bp);
+ 
+ init_err_pci_clean:
++	bnxt_free_hwrm_short_cmd_req(bp);
+ 	bnxt_free_hwrm_resources(bp);
+ 	bnxt_free_ctx_mem(bp);
+ 	kfree(bp->ctx);
+diff --git a/drivers/net/phy/marvell.c b/drivers/net/phy/marvell.c
+index abb7876a8776..66573a218df5 100644
+--- a/drivers/net/phy/marvell.c
++++ b/drivers/net/phy/marvell.c
+@@ -1494,9 +1494,10 @@ static int marvell_get_sset_count(struct phy_device *phydev)
+ 
+ static void marvell_get_strings(struct phy_device *phydev, u8 *data)
+ {
++	int count = marvell_get_sset_count(phydev);
+ 	int i;
+ 
+-	for (i = 0; i < ARRAY_SIZE(marvell_hw_stats); i++) {
++	for (i = 0; i < count; i++) {
+ 		strlcpy(data + i * ETH_GSTRING_LEN,
+ 			marvell_hw_stats[i].string, ETH_GSTRING_LEN);
+ 	}
+@@ -1524,9 +1525,10 @@ static u64 marvell_get_stat(struct phy_device *phydev, int i)
+ static void marvell_get_stats(struct phy_device *phydev,
+ 			      struct ethtool_stats *stats, u64 *data)
+ {
++	int count = marvell_get_sset_count(phydev);
+ 	int i;
+ 
+-	for (i = 0; i < ARRAY_SIZE(marvell_hw_stats); i++)
++	for (i = 0; i < count; i++)
+ 		data[i] = marvell_get_stat(phydev, i);
+ }
+ 
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index 49758490eaba..9560acc5f7da 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -5705,7 +5705,7 @@ static void ath10k_bss_info_changed(struct ieee80211_hw *hw,
+ 	}
+ 
+ 	if (changed & BSS_CHANGED_MCAST_RATE &&
+-	    !WARN_ON(ath10k_mac_vif_chan(arvif->vif, &def))) {
++	    !ath10k_mac_vif_chan(arvif->vif, &def)) {
+ 		band = def.chan->band;
+ 		rateidx = vif->bss_conf.mcast_rate[band] - 1;
+ 
+@@ -5743,7 +5743,7 @@ static void ath10k_bss_info_changed(struct ieee80211_hw *hw,
+ 	}
+ 
+ 	if (changed & BSS_CHANGED_BASIC_RATES) {
+-		if (WARN_ON(ath10k_mac_vif_chan(vif, &def))) {
++		if (ath10k_mac_vif_chan(vif, &def)) {
+ 			mutex_unlock(&ar->conf_mutex);
+ 			return;
+ 		}
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c
+index 33b0af24a537..d61ab3e80759 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c
+@@ -1482,6 +1482,11 @@ void iwl_mvm_vif_dbgfs_register(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
+ 		return;
+ 
+ 	mvmvif->dbgfs_dir = debugfs_create_dir("iwlmvm", dbgfs_dir);
++	if (IS_ERR_OR_NULL(mvmvif->dbgfs_dir)) {
++		IWL_ERR(mvm, "Failed to create debugfs directory under %pd\n",
++			dbgfs_dir);
++		return;
++	}
+ 
+ 	if (!mvmvif->dbgfs_dir) {
+ 		IWL_ERR(mvm, "Failed to create debugfs directory under %pd\n",
+diff --git a/include/net/sctp/command.h b/include/net/sctp/command.h
+index 6640f84fe536..6d5beac29bc1 100644
+--- a/include/net/sctp/command.h
++++ b/include/net/sctp/command.h
+@@ -105,7 +105,6 @@ enum sctp_verb {
+ 	SCTP_CMD_T1_RETRAN,	 /* Mark for retransmission after T1 timeout  */
+ 	SCTP_CMD_UPDATE_INITTAG, /* Update peer inittag */
+ 	SCTP_CMD_SEND_MSG,	 /* Send the whole use message */
+-	SCTP_CMD_SEND_NEXT_ASCONF, /* Send the next ASCONF after ACK */
+ 	SCTP_CMD_PURGE_ASCONF_QUEUE, /* Purge all asconf queues.*/
+ 	SCTP_CMD_SET_ASOC,	 /* Restore association context */
+ 	SCTP_CMD_LAST
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index c80188875f39..e8bb2e85c5a4 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -519,6 +519,7 @@ static void ip_copy_metadata(struct sk_buff *to, struct sk_buff *from)
+ 	to->pkt_type = from->pkt_type;
+ 	to->priority = from->priority;
+ 	to->protocol = from->protocol;
++	to->skb_iif = from->skb_iif;
+ 	skb_dst_drop(to);
+ 	skb_dst_copy(to, from);
+ 	to->dev = from->dev;
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 00852f47a73d..9a2ff79a93ad 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -1673,7 +1673,9 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
+ 	if (TCP_SKB_CB(tail)->end_seq != TCP_SKB_CB(skb)->seq ||
+ 	    TCP_SKB_CB(tail)->ip_dsfield != TCP_SKB_CB(skb)->ip_dsfield ||
+ 	    ((TCP_SKB_CB(tail)->tcp_flags |
+-	      TCP_SKB_CB(skb)->tcp_flags) & TCPHDR_URG) ||
++	      TCP_SKB_CB(skb)->tcp_flags) & (TCPHDR_SYN | TCPHDR_RST | TCPHDR_URG)) ||
++	    !((TCP_SKB_CB(tail)->tcp_flags &
++	      TCP_SKB_CB(skb)->tcp_flags) & TCPHDR_ACK) ||
+ 	    ((TCP_SKB_CB(tail)->tcp_flags ^
+ 	      TCP_SKB_CB(skb)->tcp_flags) & (TCPHDR_ECE | TCPHDR_CWR)) ||
+ #ifdef CONFIG_TLS_DEVICE
+@@ -1692,6 +1694,15 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
+ 		if (after(TCP_SKB_CB(skb)->ack_seq, TCP_SKB_CB(tail)->ack_seq))
+ 			TCP_SKB_CB(tail)->ack_seq = TCP_SKB_CB(skb)->ack_seq;
+ 
++		/* We have to update both TCP_SKB_CB(tail)->tcp_flags and
++		 * thtail->fin, so that the fast path in tcp_rcv_established()
++		 * is not entered if we append a packet with a FIN.
++		 * SYN, RST, URG are not present.
++		 * ACK is set on both packets.
++		 * PSH : we do not really care in TCP stack,
++		 *       at least for 'GRO' packets.
++		 */
++		thtail->fin |= th->fin;
+ 		TCP_SKB_CB(tail)->tcp_flags |= TCP_SKB_CB(skb)->tcp_flags;
+ 
+ 		if (TCP_SKB_CB(skb)->has_rxtstamp) {
+diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
+index 64f9715173ac..065334b41d57 100644
+--- a/net/ipv4/udp_offload.c
++++ b/net/ipv4/udp_offload.c
+@@ -352,6 +352,7 @@ static struct sk_buff *udp_gro_receive_segment(struct list_head *head,
+ 	struct sk_buff *pp = NULL;
+ 	struct udphdr *uh2;
+ 	struct sk_buff *p;
++	unsigned int ulen;
+ 
+ 	/* requires non zero csum, for symmetry with GSO */
+ 	if (!uh->check) {
+@@ -359,6 +360,12 @@ static struct sk_buff *udp_gro_receive_segment(struct list_head *head,
+ 		return NULL;
+ 	}
+ 
++	/* Do not deal with padded or malicious packets, sorry ! */
++	ulen = ntohs(uh->len);
++	if (ulen <= sizeof(*uh) || ulen != skb_gro_len(skb)) {
++		NAPI_GRO_CB(skb)->flush = 1;
++		return NULL;
++	}
+ 	/* pull encapsulating udp header */
+ 	skb_gro_pull(skb, sizeof(struct udphdr));
+ 	skb_gro_postpull_rcsum(skb, uh, sizeof(struct udphdr));
+@@ -377,13 +384,14 @@ static struct sk_buff *udp_gro_receive_segment(struct list_head *head,
+ 
+ 		/* Terminate the flow on len mismatch or if it grow "too much".
+ 		 * Under small packet flood GRO count could elsewhere grow a lot
+-		 * leading to execessive truesize values
++		 * leading to excessive truesize values.
++		 * On len mismatch merge the first packet shorter than gso_size,
++		 * otherwise complete the GRO packet.
+ 		 */
+-		if (!skb_gro_receive(p, skb) &&
++		if (ulen > ntohs(uh2->len) || skb_gro_receive(p, skb) ||
++		    ulen != ntohs(uh2->len) ||
+ 		    NAPI_GRO_CB(p)->count >= UDP_GRO_CNT_MAX)
+ 			pp = p;
+-		else if (uh->len != uh2->len)
+-			pp = p;
+ 
+ 		return pp;
+ 	}
+diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
+index 6613d8dbb0e5..91247a6fc67f 100644
+--- a/net/ipv6/ip6_fib.c
++++ b/net/ipv6/ip6_fib.c
+@@ -921,9 +921,7 @@ static void fib6_drop_pcpu_from(struct fib6_info *f6i,
+ 		if (pcpu_rt) {
+ 			struct fib6_info *from;
+ 
+-			from = rcu_dereference_protected(pcpu_rt->from,
+-					     lockdep_is_held(&table->tb6_lock));
+-			rcu_assign_pointer(pcpu_rt->from, NULL);
++			from = xchg((__force struct fib6_info **)&pcpu_rt->from, NULL);
+ 			fib6_info_release(from);
+ 		}
+ 	}
+diff --git a/net/ipv6/ip6_flowlabel.c b/net/ipv6/ip6_flowlabel.c
+index cb54a8a3c273..be5f3d7ceb96 100644
+--- a/net/ipv6/ip6_flowlabel.c
++++ b/net/ipv6/ip6_flowlabel.c
+@@ -94,15 +94,21 @@ static struct ip6_flowlabel *fl_lookup(struct net *net, __be32 label)
+ 	return fl;
+ }
+ 
++static void fl_free_rcu(struct rcu_head *head)
++{
++	struct ip6_flowlabel *fl = container_of(head, struct ip6_flowlabel, rcu);
++
++	if (fl->share == IPV6_FL_S_PROCESS)
++		put_pid(fl->owner.pid);
++	kfree(fl->opt);
++	kfree(fl);
++}
++
+ 
+ static void fl_free(struct ip6_flowlabel *fl)
+ {
+-	if (fl) {
+-		if (fl->share == IPV6_FL_S_PROCESS)
+-			put_pid(fl->owner.pid);
+-		kfree(fl->opt);
+-		kfree_rcu(fl, rcu);
+-	}
++	if (fl)
++		call_rcu(&fl->rcu, fl_free_rcu);
+ }
+ 
+ static void fl_release(struct ip6_flowlabel *fl)
+@@ -633,9 +639,9 @@ recheck:
+ 				if (fl1->share == IPV6_FL_S_EXCL ||
+ 				    fl1->share != fl->share ||
+ 				    ((fl1->share == IPV6_FL_S_PROCESS) &&
+-				     (fl1->owner.pid == fl->owner.pid)) ||
++				     (fl1->owner.pid != fl->owner.pid)) ||
+ 				    ((fl1->share == IPV6_FL_S_USER) &&
+-				     uid_eq(fl1->owner.uid, fl->owner.uid)))
++				     !uid_eq(fl1->owner.uid, fl->owner.uid)))
+ 					goto release;
+ 
+ 				err = -ENOMEM;
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index b6a97115a906..59c90bba048c 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -379,11 +379,8 @@ static void ip6_dst_destroy(struct dst_entry *dst)
+ 		in6_dev_put(idev);
+ 	}
+ 
+-	rcu_read_lock();
+-	from = rcu_dereference(rt->from);
+-	rcu_assign_pointer(rt->from, NULL);
++	from = xchg((__force struct fib6_info **)&rt->from, NULL);
+ 	fib6_info_release(from);
+-	rcu_read_unlock();
+ }
+ 
+ static void ip6_dst_ifdown(struct dst_entry *dst, struct net_device *dev,
+@@ -1288,9 +1285,7 @@ static void rt6_remove_exception(struct rt6_exception_bucket *bucket,
+ 	/* purge completely the exception to allow releasing the held resources:
+ 	 * some [sk] cache may keep the dst around for unlimited time
+ 	 */
+-	from = rcu_dereference_protected(rt6_ex->rt6i->from,
+-					 lockdep_is_held(&rt6_exception_lock));
+-	rcu_assign_pointer(rt6_ex->rt6i->from, NULL);
++	from = xchg((__force struct fib6_info **)&rt6_ex->rt6i->from, NULL);
+ 	fib6_info_release(from);
+ 	dst_dev_put(&rt6_ex->rt6i->dst);
+ 
+@@ -3403,11 +3398,8 @@ static void rt6_do_redirect(struct dst_entry *dst, struct sock *sk, struct sk_bu
+ 
+ 	rcu_read_lock();
+ 	from = rcu_dereference(rt->from);
+-	/* This fib6_info_hold() is safe here because we hold reference to rt
+-	 * and rt already holds reference to fib6_info.
+-	 */
+-	fib6_info_hold(from);
+-	rcu_read_unlock();
++	if (!from)
++		goto out;
+ 
+ 	nrt = ip6_rt_cache_alloc(from, &msg->dest, NULL);
+ 	if (!nrt)
+@@ -3419,10 +3411,7 @@ static void rt6_do_redirect(struct dst_entry *dst, struct sock *sk, struct sk_bu
+ 
+ 	nrt->rt6i_gateway = *(struct in6_addr *)neigh->primary_key;
+ 
+-	/* No need to remove rt from the exception table if rt is
+-	 * a cached route because rt6_insert_exception() will
+-	 * takes care of it
+-	 */
++	/* rt6_insert_exception() will take care of duplicated exceptions */
+ 	if (rt6_insert_exception(nrt, from)) {
+ 		dst_release_immediate(&nrt->dst);
+ 		goto out;
+@@ -3435,7 +3424,7 @@ static void rt6_do_redirect(struct dst_entry *dst, struct sock *sk, struct sk_bu
+ 	call_netevent_notifiers(NETEVENT_REDIRECT, &netevent);
+ 
+ out:
+-	fib6_info_release(from);
++	rcu_read_unlock();
+ 	neigh_release(neigh);
+ }
+ 
+@@ -4957,16 +4946,20 @@ static int inet6_rtm_getroute(struct sk_buff *in_skb, struct nlmsghdr *nlh,
+ 
+ 	rcu_read_lock();
+ 	from = rcu_dereference(rt->from);
+-
+-	if (fibmatch)
+-		err = rt6_fill_node(net, skb, from, NULL, NULL, NULL, iif,
+-				    RTM_NEWROUTE, NETLINK_CB(in_skb).portid,
+-				    nlh->nlmsg_seq, 0);
+-	else
+-		err = rt6_fill_node(net, skb, from, dst, &fl6.daddr,
+-				    &fl6.saddr, iif, RTM_NEWROUTE,
+-				    NETLINK_CB(in_skb).portid, nlh->nlmsg_seq,
+-				    0);
++	if (from) {
++		if (fibmatch)
++			err = rt6_fill_node(net, skb, from, NULL, NULL, NULL,
++					    iif, RTM_NEWROUTE,
++					    NETLINK_CB(in_skb).portid,
++					    nlh->nlmsg_seq, 0);
++		else
++			err = rt6_fill_node(net, skb, from, dst, &fl6.daddr,
++					    &fl6.saddr, iif, RTM_NEWROUTE,
++					    NETLINK_CB(in_skb).portid,
++					    nlh->nlmsg_seq, 0);
++	} else {
++		err = -ENETUNREACH;
++	}
+ 	rcu_read_unlock();
+ 
+ 	if (err < 0) {
+diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c
+index fed6becc5daf..52b5a2797c0c 100644
+--- a/net/l2tp/l2tp_core.c
++++ b/net/l2tp/l2tp_core.c
+@@ -169,8 +169,8 @@ struct l2tp_tunnel *l2tp_tunnel_get(const struct net *net, u32 tunnel_id)
+ 
+ 	rcu_read_lock_bh();
+ 	list_for_each_entry_rcu(tunnel, &pn->l2tp_tunnel_list, list) {
+-		if (tunnel->tunnel_id == tunnel_id) {
+-			l2tp_tunnel_inc_refcount(tunnel);
++		if (tunnel->tunnel_id == tunnel_id &&
++		    refcount_inc_not_zero(&tunnel->ref_count)) {
+ 			rcu_read_unlock_bh();
+ 
+ 			return tunnel;
+@@ -190,8 +190,8 @@ struct l2tp_tunnel *l2tp_tunnel_get_nth(const struct net *net, int nth)
+ 
+ 	rcu_read_lock_bh();
+ 	list_for_each_entry_rcu(tunnel, &pn->l2tp_tunnel_list, list) {
+-		if (++count > nth) {
+-			l2tp_tunnel_inc_refcount(tunnel);
++		if (++count > nth &&
++		    refcount_inc_not_zero(&tunnel->ref_count)) {
+ 			rcu_read_unlock_bh();
+ 			return tunnel;
+ 		}
+@@ -909,7 +909,7 @@ int l2tp_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
+ {
+ 	struct l2tp_tunnel *tunnel;
+ 
+-	tunnel = l2tp_tunnel(sk);
++	tunnel = rcu_dereference_sk_user_data(sk);
+ 	if (tunnel == NULL)
+ 		goto pass_up;
+ 
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 8406bf11eef4..faa2bc50cfa0 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -2603,8 +2603,8 @@ static int tpacket_snd(struct packet_sock *po, struct msghdr *msg)
+ 	void *ph;
+ 	DECLARE_SOCKADDR(struct sockaddr_ll *, saddr, msg->msg_name);
+ 	bool need_wait = !(msg->msg_flags & MSG_DONTWAIT);
++	unsigned char *addr = NULL;
+ 	int tp_len, size_max;
+-	unsigned char *addr;
+ 	void *data;
+ 	int len_sum = 0;
+ 	int status = TP_STATUS_AVAILABLE;
+@@ -2615,7 +2615,6 @@ static int tpacket_snd(struct packet_sock *po, struct msghdr *msg)
+ 	if (likely(saddr == NULL)) {
+ 		dev	= packet_cached_dev_get(po);
+ 		proto	= po->num;
+-		addr	= NULL;
+ 	} else {
+ 		err = -EINVAL;
+ 		if (msg->msg_namelen < sizeof(struct sockaddr_ll))
+@@ -2625,10 +2624,13 @@ static int tpacket_snd(struct packet_sock *po, struct msghdr *msg)
+ 						sll_addr)))
+ 			goto out;
+ 		proto	= saddr->sll_protocol;
+-		addr	= saddr->sll_halen ? saddr->sll_addr : NULL;
+ 		dev = dev_get_by_index(sock_net(&po->sk), saddr->sll_ifindex);
+-		if (addr && dev && saddr->sll_halen < dev->addr_len)
+-			goto out_put;
++		if (po->sk.sk_socket->type == SOCK_DGRAM) {
++			if (dev && msg->msg_namelen < dev->addr_len +
++				   offsetof(struct sockaddr_ll, sll_addr))
++				goto out_put;
++			addr = saddr->sll_addr;
++		}
+ 	}
+ 
+ 	err = -ENXIO;
+@@ -2800,7 +2802,7 @@ static int packet_snd(struct socket *sock, struct msghdr *msg, size_t len)
+ 	struct sk_buff *skb;
+ 	struct net_device *dev;
+ 	__be16 proto;
+-	unsigned char *addr;
++	unsigned char *addr = NULL;
+ 	int err, reserve = 0;
+ 	struct sockcm_cookie sockc;
+ 	struct virtio_net_hdr vnet_hdr = { 0 };
+@@ -2817,7 +2819,6 @@ static int packet_snd(struct socket *sock, struct msghdr *msg, size_t len)
+ 	if (likely(saddr == NULL)) {
+ 		dev	= packet_cached_dev_get(po);
+ 		proto	= po->num;
+-		addr	= NULL;
+ 	} else {
+ 		err = -EINVAL;
+ 		if (msg->msg_namelen < sizeof(struct sockaddr_ll))
+@@ -2825,10 +2826,13 @@ static int packet_snd(struct socket *sock, struct msghdr *msg, size_t len)
+ 		if (msg->msg_namelen < (saddr->sll_halen + offsetof(struct sockaddr_ll, sll_addr)))
+ 			goto out;
+ 		proto	= saddr->sll_protocol;
+-		addr	= saddr->sll_halen ? saddr->sll_addr : NULL;
+ 		dev = dev_get_by_index(sock_net(sk), saddr->sll_ifindex);
+-		if (addr && dev && saddr->sll_halen < dev->addr_len)
+-			goto out_unlock;
++		if (sock->type == SOCK_DGRAM) {
++			if (dev && msg->msg_namelen < dev->addr_len +
++				   offsetof(struct sockaddr_ll, sll_addr))
++				goto out_unlock;
++			addr = saddr->sll_addr;
++		}
+ 	}
+ 
+ 	err = -ENXIO;
+@@ -3345,20 +3349,29 @@ static int packet_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
+ 	sock_recv_ts_and_drops(msg, sk, skb);
+ 
+ 	if (msg->msg_name) {
++		int copy_len;
++
+ 		/* If the address length field is there to be filled
+ 		 * in, we fill it in now.
+ 		 */
+ 		if (sock->type == SOCK_PACKET) {
+ 			__sockaddr_check_size(sizeof(struct sockaddr_pkt));
+ 			msg->msg_namelen = sizeof(struct sockaddr_pkt);
++			copy_len = msg->msg_namelen;
+ 		} else {
+ 			struct sockaddr_ll *sll = &PACKET_SKB_CB(skb)->sa.ll;
+ 
+ 			msg->msg_namelen = sll->sll_halen +
+ 				offsetof(struct sockaddr_ll, sll_addr);
++			copy_len = msg->msg_namelen;
++			if (msg->msg_namelen < sizeof(struct sockaddr_ll)) {
++				memset(msg->msg_name +
++				       offsetof(struct sockaddr_ll, sll_addr),
++				       0, sizeof(sll->sll_addr));
++				msg->msg_namelen = sizeof(struct sockaddr_ll);
++			}
+ 		}
+-		memcpy(msg->msg_name, &PACKET_SKB_CB(skb)->sa,
+-		       msg->msg_namelen);
++		memcpy(msg->msg_name, &PACKET_SKB_CB(skb)->sa, copy_len);
+ 	}
+ 
+ 	if (pkt_sk(sk)->auxdata) {
+diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
+index 8aa2937b069f..fe96881a334d 100644
+--- a/net/rxrpc/call_object.c
++++ b/net/rxrpc/call_object.c
+@@ -604,30 +604,30 @@ void rxrpc_destroy_all_calls(struct rxrpc_net *rxnet)
+ 
+ 	_enter("");
+ 
+-	if (list_empty(&rxnet->calls))
+-		return;
++	if (!list_empty(&rxnet->calls)) {
++		write_lock(&rxnet->call_lock);
+ 
+-	write_lock(&rxnet->call_lock);
++		while (!list_empty(&rxnet->calls)) {
++			call = list_entry(rxnet->calls.next,
++					  struct rxrpc_call, link);
++			_debug("Zapping call %p", call);
+ 
+-	while (!list_empty(&rxnet->calls)) {
+-		call = list_entry(rxnet->calls.next, struct rxrpc_call, link);
+-		_debug("Zapping call %p", call);
++			rxrpc_see_call(call);
++			list_del_init(&call->link);
+ 
+-		rxrpc_see_call(call);
+-		list_del_init(&call->link);
++			pr_err("Call %p still in use (%d,%s,%lx,%lx)!\n",
++			       call, atomic_read(&call->usage),
++			       rxrpc_call_states[call->state],
++			       call->flags, call->events);
+ 
+-		pr_err("Call %p still in use (%d,%s,%lx,%lx)!\n",
+-		       call, atomic_read(&call->usage),
+-		       rxrpc_call_states[call->state],
+-		       call->flags, call->events);
++			write_unlock(&rxnet->call_lock);
++			cond_resched();
++			write_lock(&rxnet->call_lock);
++		}
+ 
+ 		write_unlock(&rxnet->call_lock);
+-		cond_resched();
+-		write_lock(&rxnet->call_lock);
+ 	}
+ 
+-	write_unlock(&rxnet->call_lock);
+-
+ 	atomic_dec(&rxnet->nr_calls);
+ 	wait_var_event(&rxnet->nr_calls, !atomic_read(&rxnet->nr_calls));
+ }
+diff --git a/net/sctp/sm_sideeffect.c b/net/sctp/sm_sideeffect.c
+index 1d143bc3f73d..4aa03588f87b 100644
+--- a/net/sctp/sm_sideeffect.c
++++ b/net/sctp/sm_sideeffect.c
+@@ -1112,32 +1112,6 @@ static void sctp_cmd_send_msg(struct sctp_association *asoc,
+ }
+ 
+ 
+-/* Sent the next ASCONF packet currently stored in the association.
+- * This happens after the ASCONF_ACK was succeffully processed.
+- */
+-static void sctp_cmd_send_asconf(struct sctp_association *asoc)
+-{
+-	struct net *net = sock_net(asoc->base.sk);
+-
+-	/* Send the next asconf chunk from the addip chunk
+-	 * queue.
+-	 */
+-	if (!list_empty(&asoc->addip_chunk_list)) {
+-		struct list_head *entry = asoc->addip_chunk_list.next;
+-		struct sctp_chunk *asconf = list_entry(entry,
+-						struct sctp_chunk, list);
+-		list_del_init(entry);
+-
+-		/* Hold the chunk until an ASCONF_ACK is received. */
+-		sctp_chunk_hold(asconf);
+-		if (sctp_primitive_ASCONF(net, asoc, asconf))
+-			sctp_chunk_free(asconf);
+-		else
+-			asoc->addip_last_asconf = asconf;
+-	}
+-}
+-
+-
+ /* These three macros allow us to pull the debugging code out of the
+  * main flow of sctp_do_sm() to keep attention focused on the real
+  * functionality there.
+@@ -1783,9 +1757,6 @@ static int sctp_cmd_interpreter(enum sctp_event_type event_type,
+ 			}
+ 			sctp_cmd_send_msg(asoc, cmd->obj.msg, gfp);
+ 			break;
+-		case SCTP_CMD_SEND_NEXT_ASCONF:
+-			sctp_cmd_send_asconf(asoc);
+-			break;
+ 		case SCTP_CMD_PURGE_ASCONF_QUEUE:
+ 			sctp_asconf_queue_teardown(asoc);
+ 			break;
+diff --git a/net/sctp/sm_statefuns.c b/net/sctp/sm_statefuns.c
+index c9ae3404b1bb..713a669d2058 100644
+--- a/net/sctp/sm_statefuns.c
++++ b/net/sctp/sm_statefuns.c
+@@ -3824,6 +3824,29 @@ enum sctp_disposition sctp_sf_do_asconf(struct net *net,
+ 	return SCTP_DISPOSITION_CONSUME;
+ }
+ 
++static enum sctp_disposition sctp_send_next_asconf(
++					struct net *net,
++					const struct sctp_endpoint *ep,
++					struct sctp_association *asoc,
++					const union sctp_subtype type,
++					struct sctp_cmd_seq *commands)
++{
++	struct sctp_chunk *asconf;
++	struct list_head *entry;
++
++	if (list_empty(&asoc->addip_chunk_list))
++		return SCTP_DISPOSITION_CONSUME;
++
++	entry = asoc->addip_chunk_list.next;
++	asconf = list_entry(entry, struct sctp_chunk, list);
++
++	list_del_init(entry);
++	sctp_chunk_hold(asconf);
++	asoc->addip_last_asconf = asconf;
++
++	return sctp_sf_do_prm_asconf(net, ep, asoc, type, asconf, commands);
++}
++
+ /*
+  * ADDIP Section 4.3 General rules for address manipulation
+  * When building TLV parameters for the ASCONF Chunk that will add or
+@@ -3915,14 +3938,10 @@ enum sctp_disposition sctp_sf_do_asconf_ack(struct net *net,
+ 				SCTP_TO(SCTP_EVENT_TIMEOUT_T4_RTO));
+ 
+ 		if (!sctp_process_asconf_ack((struct sctp_association *)asoc,
+-					     asconf_ack)) {
+-			/* Successfully processed ASCONF_ACK.  We can
+-			 * release the next asconf if we have one.
+-			 */
+-			sctp_add_cmd_sf(commands, SCTP_CMD_SEND_NEXT_ASCONF,
+-					SCTP_NULL());
+-			return SCTP_DISPOSITION_CONSUME;
+-		}
++					     asconf_ack))
++			return sctp_send_next_asconf(net, ep,
++					(struct sctp_association *)asoc,
++							type, commands);
+ 
+ 		abort = sctp_make_abort(asoc, asconf_ack,
+ 					sizeof(struct sctp_errhdr));
+diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
+index 5f1d937c4be9..7d5136ecee78 100644
+--- a/net/tls/tls_device.c
++++ b/net/tls/tls_device.c
+@@ -579,7 +579,7 @@ void handle_device_resync(struct sock *sk, u32 seq, u64 rcd_sn)
+ static int tls_device_reencrypt(struct sock *sk, struct sk_buff *skb)
+ {
+ 	struct strp_msg *rxm = strp_msg(skb);
+-	int err = 0, offset = rxm->offset, copy, nsg;
++	int err = 0, offset = rxm->offset, copy, nsg, data_len, pos;
+ 	struct sk_buff *skb_iter, *unused;
+ 	struct scatterlist sg[1];
+ 	char *orig_buf, *buf;
+@@ -610,25 +610,42 @@ static int tls_device_reencrypt(struct sock *sk, struct sk_buff *skb)
+ 	else
+ 		err = 0;
+ 
+-	copy = min_t(int, skb_pagelen(skb) - offset,
+-		     rxm->full_len - TLS_CIPHER_AES_GCM_128_TAG_SIZE);
++	data_len = rxm->full_len - TLS_CIPHER_AES_GCM_128_TAG_SIZE;
+ 
+-	if (skb->decrypted)
+-		skb_store_bits(skb, offset, buf, copy);
++	if (skb_pagelen(skb) > offset) {
++		copy = min_t(int, skb_pagelen(skb) - offset, data_len);
+ 
+-	offset += copy;
+-	buf += copy;
++		if (skb->decrypted)
++			skb_store_bits(skb, offset, buf, copy);
+ 
++		offset += copy;
++		buf += copy;
++	}
++
++	pos = skb_pagelen(skb);
+ 	skb_walk_frags(skb, skb_iter) {
+-		copy = min_t(int, skb_iter->len,
+-			     rxm->full_len - offset + rxm->offset -
+-			     TLS_CIPHER_AES_GCM_128_TAG_SIZE);
++		int frag_pos;
++
++		/* Practically all frags must belong to msg if reencrypt
++		 * is needed with current strparser and coalescing logic,
++		 * but strparser may "get optimized", so let's be safe.
++		 */
++		if (pos + skb_iter->len <= offset)
++			goto done_with_frag;
++		if (pos >= data_len + rxm->offset)
++			break;
++
++		frag_pos = offset - pos;
++		copy = min_t(int, skb_iter->len - frag_pos,
++			     data_len + rxm->offset - offset);
+ 
+ 		if (skb_iter->decrypted)
+-			skb_store_bits(skb_iter, offset, buf, copy);
++			skb_store_bits(skb_iter, frag_pos, buf, copy);
+ 
+ 		offset += copy;
+ 		buf += copy;
++done_with_frag:
++		pos += skb_iter->len;
+ 	}
+ 
+ free_buf:
+diff --git a/net/tls/tls_device_fallback.c b/net/tls/tls_device_fallback.c
+index ef8934fd8698..426dd97725e4 100644
+--- a/net/tls/tls_device_fallback.c
++++ b/net/tls/tls_device_fallback.c
+@@ -200,13 +200,14 @@ static void complete_skb(struct sk_buff *nskb, struct sk_buff *skb, int headln)
+ 
+ 	skb_put(nskb, skb->len);
+ 	memcpy(nskb->data, skb->data, headln);
+-	update_chksum(nskb, headln);
+ 
+ 	nskb->destructor = skb->destructor;
+ 	nskb->sk = sk;
+ 	skb->destructor = NULL;
+ 	skb->sk = NULL;
+ 
++	update_chksum(nskb, headln);
++
+ 	delta = nskb->truesize - skb->truesize;
+ 	if (likely(delta < 0))
+ 		WARN_ON_ONCE(refcount_sub_and_test(-delta, &sk->sk_wmem_alloc));
+diff --git a/sound/usb/line6/driver.c b/sound/usb/line6/driver.c
+index c1376bfdc90b..aa28510d23ad 100644
+--- a/sound/usb/line6/driver.c
++++ b/sound/usb/line6/driver.c
+@@ -351,12 +351,16 @@ int line6_read_data(struct usb_line6 *line6, unsigned address, void *data,
+ {
+ 	struct usb_device *usbdev = line6->usbdev;
+ 	int ret;
+-	unsigned char len;
++	unsigned char *len;
+ 	unsigned count;
+ 
+ 	if (address > 0xffff || datalen > 0xff)
+ 		return -EINVAL;
+ 
++	len = kmalloc(sizeof(*len), GFP_KERNEL);
++	if (!len)
++		return -ENOMEM;
++
+ 	/* query the serial number: */
+ 	ret = usb_control_msg(usbdev, usb_sndctrlpipe(usbdev, 0), 0x67,
+ 			      USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_OUT,
+@@ -365,7 +369,7 @@ int line6_read_data(struct usb_line6 *line6, unsigned address, void *data,
+ 
+ 	if (ret < 0) {
+ 		dev_err(line6->ifcdev, "read request failed (error %d)\n", ret);
+-		return ret;
++		goto exit;
+ 	}
+ 
+ 	/* Wait for data length. We'll get 0xff until length arrives. */
+@@ -375,28 +379,29 @@ int line6_read_data(struct usb_line6 *line6, unsigned address, void *data,
+ 		ret = usb_control_msg(usbdev, usb_rcvctrlpipe(usbdev, 0), 0x67,
+ 				      USB_TYPE_VENDOR | USB_RECIP_DEVICE |
+ 				      USB_DIR_IN,
+-				      0x0012, 0x0000, &len, 1,
++				      0x0012, 0x0000, len, 1,
+ 				      LINE6_TIMEOUT * HZ);
+ 		if (ret < 0) {
+ 			dev_err(line6->ifcdev,
+ 				"receive length failed (error %d)\n", ret);
+-			return ret;
++			goto exit;
+ 		}
+ 
+-		if (len != 0xff)
++		if (*len != 0xff)
+ 			break;
+ 	}
+ 
+-	if (len == 0xff) {
++	ret = -EIO;
++	if (*len == 0xff) {
+ 		dev_err(line6->ifcdev, "read failed after %d retries\n",
+ 			count);
+-		return -EIO;
+-	} else if (len != datalen) {
++		goto exit;
++	} else if (*len != datalen) {
+ 		/* should be equal or something went wrong */
+ 		dev_err(line6->ifcdev,
+ 			"length mismatch (expected %d, got %d)\n",
+-			(int)datalen, (int)len);
+-		return -EIO;
++			(int)datalen, (int)*len);
++		goto exit;
+ 	}
+ 
+ 	/* receive the result: */
+@@ -405,12 +410,12 @@ int line6_read_data(struct usb_line6 *line6, unsigned address, void *data,
+ 			      0x0013, 0x0000, data, datalen,
+ 			      LINE6_TIMEOUT * HZ);
+ 
+-	if (ret < 0) {
++	if (ret < 0)
+ 		dev_err(line6->ifcdev, "read failed (error %d)\n", ret);
+-		return ret;
+-	}
+ 
+-	return 0;
++exit:
++	kfree(len);
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(line6_read_data);
+ 
+@@ -422,12 +427,16 @@ int line6_write_data(struct usb_line6 *line6, unsigned address, void *data,
+ {
+ 	struct usb_device *usbdev = line6->usbdev;
+ 	int ret;
+-	unsigned char status;
++	unsigned char *status;
+ 	int count;
+ 
+ 	if (address > 0xffff || datalen > 0xffff)
+ 		return -EINVAL;
+ 
++	status = kmalloc(sizeof(*status), GFP_KERNEL);
++	if (!status)
++		return -ENOMEM;
++
+ 	ret = usb_control_msg(usbdev, usb_sndctrlpipe(usbdev, 0), 0x67,
+ 			      USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_OUT,
+ 			      0x0022, address, data, datalen,
+@@ -436,7 +445,7 @@ int line6_write_data(struct usb_line6 *line6, unsigned address, void *data,
+ 	if (ret < 0) {
+ 		dev_err(line6->ifcdev,
+ 			"write request failed (error %d)\n", ret);
+-		return ret;
++		goto exit;
+ 	}
+ 
+ 	for (count = 0; count < LINE6_READ_WRITE_MAX_RETRIES; count++) {
+@@ -447,28 +456,29 @@ int line6_write_data(struct usb_line6 *line6, unsigned address, void *data,
+ 				      USB_TYPE_VENDOR | USB_RECIP_DEVICE |
+ 				      USB_DIR_IN,
+ 				      0x0012, 0x0000,
+-				      &status, 1, LINE6_TIMEOUT * HZ);
++				      status, 1, LINE6_TIMEOUT * HZ);
+ 
+ 		if (ret < 0) {
+ 			dev_err(line6->ifcdev,
+ 				"receiving status failed (error %d)\n", ret);
+-			return ret;
++			goto exit;
+ 		}
+ 
+-		if (status != 0xff)
++		if (*status != 0xff)
+ 			break;
+ 	}
+ 
+-	if (status == 0xff) {
++	if (*status == 0xff) {
+ 		dev_err(line6->ifcdev, "write failed after %d retries\n",
+ 			count);
+-		return -EIO;
+-	} else if (status != 0) {
++		ret = -EIO;
++	} else if (*status != 0) {
+ 		dev_err(line6->ifcdev, "write failed (error %d)\n", ret);
+-		return -EIO;
++		ret = -EIO;
+ 	}
+-
+-	return 0;
++exit:
++	kfree(status);
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(line6_write_data);
+ 
+diff --git a/sound/usb/line6/podhd.c b/sound/usb/line6/podhd.c
+index 36ed9c85c0eb..5f3c87264e66 100644
+--- a/sound/usb/line6/podhd.c
++++ b/sound/usb/line6/podhd.c
+@@ -225,28 +225,32 @@ static void podhd_startup_start_workqueue(struct timer_list *t)
+ static int podhd_dev_start(struct usb_line6_podhd *pod)
+ {
+ 	int ret;
+-	u8 init_bytes[8];
++	u8 *init_bytes;
+ 	int i;
+ 	struct usb_device *usbdev = pod->line6.usbdev;
+ 
++	init_bytes = kmalloc(8, GFP_KERNEL);
++	if (!init_bytes)
++		return -ENOMEM;
++
+ 	ret = usb_control_msg(usbdev, usb_sndctrlpipe(usbdev, 0),
+ 					0x67, USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_OUT,
+ 					0x11, 0,
+ 					NULL, 0, LINE6_TIMEOUT * HZ);
+ 	if (ret < 0) {
+ 		dev_err(pod->line6.ifcdev, "read request failed (error %d)\n", ret);
+-		return ret;
++		goto exit;
+ 	}
+ 
+ 	/* NOTE: looks like some kind of ping message */
+ 	ret = usb_control_msg(usbdev, usb_rcvctrlpipe(usbdev, 0), 0x67,
+ 					USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN,
+ 					0x11, 0x0,
+-					&init_bytes, 3, LINE6_TIMEOUT * HZ);
++					init_bytes, 3, LINE6_TIMEOUT * HZ);
+ 	if (ret < 0) {
+ 		dev_err(pod->line6.ifcdev,
+ 			"receive length failed (error %d)\n", ret);
+-		return ret;
++		goto exit;
+ 	}
+ 
+ 	pod->firmware_version =
+@@ -255,7 +259,7 @@ static int podhd_dev_start(struct usb_line6_podhd *pod)
+ 	for (i = 0; i <= 16; i++) {
+ 		ret = line6_read_data(&pod->line6, 0xf000 + 0x08 * i, init_bytes, 8);
+ 		if (ret < 0)
+-			return ret;
++			goto exit;
+ 	}
+ 
+ 	ret = usb_control_msg(usbdev, usb_sndctrlpipe(usbdev, 0),
+@@ -263,10 +267,9 @@ static int podhd_dev_start(struct usb_line6_podhd *pod)
+ 					USB_TYPE_STANDARD | USB_RECIP_DEVICE | USB_DIR_OUT,
+ 					1, 0,
+ 					NULL, 0, LINE6_TIMEOUT * HZ);
+-	if (ret < 0)
+-		return ret;
+-
+-	return 0;
++exit:
++	kfree(init_bytes);
++	return ret;
+ }
+ 
+ static void podhd_startup_workqueue(struct work_struct *work)
+diff --git a/sound/usb/line6/toneport.c b/sound/usb/line6/toneport.c
+index f47ba94e6f4a..19bee725de00 100644
+--- a/sound/usb/line6/toneport.c
++++ b/sound/usb/line6/toneport.c
+@@ -365,16 +365,21 @@ static bool toneport_has_source_select(struct usb_line6_toneport *toneport)
+ /*
+ 	Setup Toneport device.
+ */
+-static void toneport_setup(struct usb_line6_toneport *toneport)
++static int toneport_setup(struct usb_line6_toneport *toneport)
+ {
+-	u32 ticks;
++	u32 *ticks;
+ 	struct usb_line6 *line6 = &toneport->line6;
+ 	struct usb_device *usbdev = line6->usbdev;
+ 
++	ticks = kmalloc(sizeof(*ticks), GFP_KERNEL);
++	if (!ticks)
++		return -ENOMEM;
++
+ 	/* sync time on device with host: */
+ 	/* note: 32-bit timestamps overflow in year 2106 */
+-	ticks = (u32)ktime_get_real_seconds();
+-	line6_write_data(line6, 0x80c6, &ticks, 4);
++	*ticks = (u32)ktime_get_real_seconds();
++	line6_write_data(line6, 0x80c6, ticks, 4);
++	kfree(ticks);
+ 
+ 	/* enable device: */
+ 	toneport_send_cmd(usbdev, 0x0301, 0x0000);
+@@ -389,6 +394,7 @@ static void toneport_setup(struct usb_line6_toneport *toneport)
+ 		toneport_update_led(toneport);
+ 
+ 	mod_timer(&toneport->timer, jiffies + TONEPORT_PCM_DELAY * HZ);
++	return 0;
+ }
+ 
+ /*
+@@ -451,7 +457,9 @@ static int toneport_init(struct usb_line6 *line6,
+ 			return err;
+ 	}
+ 
+-	toneport_setup(toneport);
++	err = toneport_setup(toneport);
++	if (err)
++		return err;
+ 
+ 	/* register audio system: */
+ 	return snd_card_register(line6->card);
+@@ -463,7 +471,11 @@ static int toneport_init(struct usb_line6 *line6,
+ */
+ static int toneport_reset_resume(struct usb_interface *interface)
+ {
+-	toneport_setup(usb_get_intfdata(interface));
++	int err;
++
++	err = toneport_setup(usb_get_intfdata(interface));
++	if (err)
++		return err;
+ 	return line6_resume(interface);
+ }
+ #endif
+diff --git a/tools/testing/selftests/net/fib_rule_tests.sh b/tools/testing/selftests/net/fib_rule_tests.sh
+index d4cfb6a7a086..4b7e107865bf 100755
+--- a/tools/testing/selftests/net/fib_rule_tests.sh
++++ b/tools/testing/selftests/net/fib_rule_tests.sh
+@@ -27,6 +27,7 @@ log_test()
+ 		nsuccess=$((nsuccess+1))
+ 		printf "\n    TEST: %-50s  [ OK ]\n" "${msg}"
+ 	else
++		ret=1
+ 		nfail=$((nfail+1))
+ 		printf "\n    TEST: %-50s  [FAIL]\n" "${msg}"
+ 		if [ "${PAUSE_ON_FAIL}" = "yes" ]; then
+@@ -147,8 +148,8 @@ fib_rule6_test()
+ 
+ 	fib_check_iproute_support "ipproto" "ipproto"
+ 	if [ $? -eq 0 ]; then
+-		match="ipproto icmp"
+-		fib_rule6_test_match_n_redirect "$match" "$match" "ipproto icmp match"
++		match="ipproto ipv6-icmp"
++		fib_rule6_test_match_n_redirect "$match" "$match" "ipproto ipv6-icmp match"
+ 	fi
+ }
+ 
+@@ -245,4 +246,9 @@ setup
+ run_fibrule_tests
+ cleanup
+ 
++if [ "$TESTS" != "none" ]; then
++	printf "\nTests passed: %3d\n" ${nsuccess}
++	printf "Tests failed: %3d\n"   ${nfail}
++fi
++
+ exit $ret
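An aside on the sound/usb/line6 hunks above: they move small USB transfer buffers off the stack onto the heap (usb_control_msg() requires DMA-capable, kmalloc'd memory) and reroute every error path through a single exit label that frees the buffer before returning. A minimal user-space sketch of that retry-and-cleanup idiom follows; the function name, callback, and numeric error codes are illustrative stand-ins, not the driver's real API.

```c
#include <stdlib.h>

/* Sketch of the error-handling idiom from the line6 changes: one heap
 * buffer, one exit label that frees it, and `ret` carrying either an
 * error code or success.  In the kernel the buffer would come from
 * kmalloc(GFP_KERNEL) and the codes would be -ENOMEM / -EIO. */
static int read_with_retries(int (*xfer)(unsigned char *buf),
                             int max_retries, unsigned char *out)
{
    unsigned char *len = malloc(1);   /* heap, as a DMA-able buffer would be */
    int ret;
    int count;

    if (!len)
        return -1;                    /* -ENOMEM in the kernel */

    for (count = 0; count < max_retries; count++) {
        ret = xfer(len);
        if (ret < 0)
            goto exit;                /* error path still frees the buffer */
        if (*len != 0xff)
            break;                    /* 0xff means "data not ready yet" */
    }

    ret = -2;                         /* -EIO in the kernel */
    if (*len == 0xff)
        goto exit;                    /* retries exhausted */

    *out = *len;
    ret = 0;
exit:
    free(len);                        /* every path releases the buffer */
    return ret;
}
```

The single exit label is what lets the patch replace the old early `return ret;` statements without leaking the newly heap-allocated buffer on any path.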



* [gentoo-commits] proj/linux-patches:5.0 commit in: /
@ 2019-05-05 13:40 Mike Pagano
  0 siblings, 0 replies; 28+ messages in thread
From: Mike Pagano @ 2019-05-05 13:40 UTC (permalink / raw
  To: gentoo-commits

commit:     814f9c6ca03101663202cf37153c745054de331e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun May  5 13:40:40 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun May  5 13:40:40 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=814f9c6c

update readme

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/0000_README b/0000_README
index 3b63726..dcd9694 100644
--- a/0000_README
+++ b/0000_README
@@ -91,6 +91,10 @@ Patch:  1011_linux-5.0.12.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.0.12
 
+Patch:  1012_linux-5.0.13.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.0.13
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.



* [gentoo-commits] proj/linux-patches:5.0 commit in: /
@ 2019-05-08 10:07 Mike Pagano
  0 siblings, 0 replies; 28+ messages in thread
From: Mike Pagano @ 2019-05-08 10:07 UTC (permalink / raw
  To: gentoo-commits

commit:     25848a16762409a137897779ef10e7684c59c4b5
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed May  8 10:07:37 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed May  8 10:07:37 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=25848a16

Linux patch 5.0.14

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1013_linux-5.0.14.patch | 4322 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4326 insertions(+)

diff --git a/0000_README b/0000_README
index dcd9694..b2a5389 100644
--- a/0000_README
+++ b/0000_README
@@ -95,6 +95,10 @@ Patch:  1012_linux-5.0.13.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.0.13
 
+Patch:  1013_linux-5.0.14.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.0.14
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1013_linux-5.0.14.patch b/1013_linux-5.0.14.patch
new file mode 100644
index 0000000..133615f
--- /dev/null
+++ b/1013_linux-5.0.14.patch
@@ -0,0 +1,4322 @@
+diff --git a/Documentation/driver-api/usb/power-management.rst b/Documentation/driver-api/usb/power-management.rst
+index 79beb807996b..4a74cf6f2797 100644
+--- a/Documentation/driver-api/usb/power-management.rst
++++ b/Documentation/driver-api/usb/power-management.rst
+@@ -370,11 +370,15 @@ autosuspend the interface's device.  When the usage counter is = 0
+ then the interface is considered to be idle, and the kernel may
+ autosuspend the device.
+ 
+-Drivers need not be concerned about balancing changes to the usage
+-counter; the USB core will undo any remaining "get"s when a driver
+-is unbound from its interface.  As a corollary, drivers must not call
+-any of the ``usb_autopm_*`` functions after their ``disconnect``
+-routine has returned.
++Drivers must be careful to balance their overall changes to the usage
++counter.  Unbalanced "get"s will remain in effect when a driver is
++unbound from its interface, preventing the device from going into
++runtime suspend should the interface be bound to a driver again.  On
++the other hand, drivers are allowed to achieve this balance by calling
++the ``usb_autopm_*`` functions even after their ``disconnect`` routine
++has returned -- say from within a work-queue routine -- provided they
++retain an active reference to the interface (via ``usb_get_intf`` and
++``usb_put_intf``).
+ 
+ Drivers using the async routines are responsible for their own
+ synchronization and mutual exclusion.
+diff --git a/Makefile b/Makefile
+index 51a819544505..5ce29665eeed 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 0
+-SUBLEVEL = 13
++SUBLEVEL = 14
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/arch/arc/lib/memset-archs.S b/arch/arc/lib/memset-archs.S
+index f230bb7092fd..b3373f5c88e0 100644
+--- a/arch/arc/lib/memset-archs.S
++++ b/arch/arc/lib/memset-archs.S
+@@ -30,10 +30,10 @@
+ 
+ #else
+ 
+-.macro PREALLOC_INSTR
++.macro PREALLOC_INSTR	reg, off
+ .endm
+ 
+-.macro PREFETCHW_INSTR
++.macro PREFETCHW_INSTR	reg, off
+ .endm
+ 
+ #endif
+diff --git a/arch/arm/boot/dts/am33xx-l4.dtsi b/arch/arm/boot/dts/am33xx-l4.dtsi
+index 7b818d9d2eab..8396faa9ac28 100644
+--- a/arch/arm/boot/dts/am33xx-l4.dtsi
++++ b/arch/arm/boot/dts/am33xx-l4.dtsi
+@@ -1763,7 +1763,7 @@
+ 			reg = <0xcc000 0x4>;
+ 			reg-names = "rev";
+ 			/* Domains (P, C): per_pwrdm, l4ls_clkdm */
+-			clocks = <&l4ls_clkctrl AM3_D_CAN0_CLKCTRL 0>;
++			clocks = <&l4ls_clkctrl AM3_L4LS_D_CAN0_CLKCTRL 0>;
+ 			clock-names = "fck";
+ 			#address-cells = <1>;
+ 			#size-cells = <1>;
+@@ -1786,7 +1786,7 @@
+ 			reg = <0xd0000 0x4>;
+ 			reg-names = "rev";
+ 			/* Domains (P, C): per_pwrdm, l4ls_clkdm */
+-			clocks = <&l4ls_clkctrl AM3_D_CAN1_CLKCTRL 0>;
++			clocks = <&l4ls_clkctrl AM3_L4LS_D_CAN1_CLKCTRL 0>;
+ 			clock-names = "fck";
+ 			#address-cells = <1>;
+ 			#size-cells = <1>;
+diff --git a/arch/arm/boot/dts/rk3288.dtsi b/arch/arm/boot/dts/rk3288.dtsi
+index 09868dcee34b..df0c5456c94f 100644
+--- a/arch/arm/boot/dts/rk3288.dtsi
++++ b/arch/arm/boot/dts/rk3288.dtsi
+@@ -1282,27 +1282,27 @@
+ 	gpu_opp_table: gpu-opp-table {
+ 		compatible = "operating-points-v2";
+ 
+-		opp@100000000 {
++		opp-100000000 {
+ 			opp-hz = /bits/ 64 <100000000>;
+ 			opp-microvolt = <950000>;
+ 		};
+-		opp@200000000 {
++		opp-200000000 {
+ 			opp-hz = /bits/ 64 <200000000>;
+ 			opp-microvolt = <950000>;
+ 		};
+-		opp@300000000 {
++		opp-300000000 {
+ 			opp-hz = /bits/ 64 <300000000>;
+ 			opp-microvolt = <1000000>;
+ 		};
+-		opp@400000000 {
++		opp-400000000 {
+ 			opp-hz = /bits/ 64 <400000000>;
+ 			opp-microvolt = <1100000>;
+ 		};
+-		opp@500000000 {
++		opp-500000000 {
+ 			opp-hz = /bits/ 64 <500000000>;
+ 			opp-microvolt = <1200000>;
+ 		};
+-		opp@600000000 {
++		opp-600000000 {
+ 			opp-hz = /bits/ 64 <600000000>;
+ 			opp-microvolt = <1250000>;
+ 		};
+diff --git a/arch/arm/mach-at91/pm.c b/arch/arm/mach-at91/pm.c
+index 51e808adb00c..2a757dcaa1a5 100644
+--- a/arch/arm/mach-at91/pm.c
++++ b/arch/arm/mach-at91/pm.c
+@@ -591,13 +591,13 @@ static int __init at91_pm_backup_init(void)
+ 
+ 	np = of_find_compatible_node(NULL, NULL, "atmel,sama5d2-securam");
+ 	if (!np)
+-		goto securam_fail;
++		goto securam_fail_no_ref_dev;
+ 
+ 	pdev = of_find_device_by_node(np);
+ 	of_node_put(np);
+ 	if (!pdev) {
+ 		pr_warn("%s: failed to find securam device!\n", __func__);
+-		goto securam_fail;
++		goto securam_fail_no_ref_dev;
+ 	}
+ 
+ 	sram_pool = gen_pool_get(&pdev->dev, NULL);
+@@ -620,6 +620,8 @@ static int __init at91_pm_backup_init(void)
+ 	return 0;
+ 
+ securam_fail:
++	put_device(&pdev->dev);
++securam_fail_no_ref_dev:
+ 	iounmap(pm_data.sfrbu);
+ 	pm_data.sfrbu = NULL;
+ 	return ret;
+diff --git a/arch/arm/mach-iop13xx/setup.c b/arch/arm/mach-iop13xx/setup.c
+index 53c316f7301e..fe4932fda01d 100644
+--- a/arch/arm/mach-iop13xx/setup.c
++++ b/arch/arm/mach-iop13xx/setup.c
+@@ -300,7 +300,7 @@ static struct resource iop13xx_adma_2_resources[] = {
+ 	}
+ };
+ 
+-static u64 iop13xx_adma_dmamask = DMA_BIT_MASK(64);
++static u64 iop13xx_adma_dmamask = DMA_BIT_MASK(32);
+ static struct iop_adma_platform_data iop13xx_adma_0_data = {
+ 	.hw_id = 0,
+ 	.pool_size = PAGE_SIZE,
+@@ -324,7 +324,7 @@ static struct platform_device iop13xx_adma_0_channel = {
+ 	.resource = iop13xx_adma_0_resources,
+ 	.dev = {
+ 		.dma_mask = &iop13xx_adma_dmamask,
+-		.coherent_dma_mask = DMA_BIT_MASK(64),
++		.coherent_dma_mask = DMA_BIT_MASK(32),
+ 		.platform_data = (void *) &iop13xx_adma_0_data,
+ 	},
+ };
+@@ -336,7 +336,7 @@ static struct platform_device iop13xx_adma_1_channel = {
+ 	.resource = iop13xx_adma_1_resources,
+ 	.dev = {
+ 		.dma_mask = &iop13xx_adma_dmamask,
+-		.coherent_dma_mask = DMA_BIT_MASK(64),
++		.coherent_dma_mask = DMA_BIT_MASK(32),
+ 		.platform_data = (void *) &iop13xx_adma_1_data,
+ 	},
+ };
+@@ -348,7 +348,7 @@ static struct platform_device iop13xx_adma_2_channel = {
+ 	.resource = iop13xx_adma_2_resources,
+ 	.dev = {
+ 		.dma_mask = &iop13xx_adma_dmamask,
+-		.coherent_dma_mask = DMA_BIT_MASK(64),
++		.coherent_dma_mask = DMA_BIT_MASK(32),
+ 		.platform_data = (void *) &iop13xx_adma_2_data,
+ 	},
+ };
+diff --git a/arch/arm/mach-iop13xx/tpmi.c b/arch/arm/mach-iop13xx/tpmi.c
+index db511ec2b1df..116feb6b261e 100644
+--- a/arch/arm/mach-iop13xx/tpmi.c
++++ b/arch/arm/mach-iop13xx/tpmi.c
+@@ -152,7 +152,7 @@ static struct resource iop13xx_tpmi_3_resources[] = {
+ 	}
+ };
+ 
+-u64 iop13xx_tpmi_mask = DMA_BIT_MASK(64);
++u64 iop13xx_tpmi_mask = DMA_BIT_MASK(32);
+ static struct platform_device iop13xx_tpmi_0_device = {
+ 	.name = "iop-tpmi",
+ 	.id = 0,
+@@ -160,7 +160,7 @@ static struct platform_device iop13xx_tpmi_0_device = {
+ 	.resource = iop13xx_tpmi_0_resources,
+ 	.dev = {
+ 		.dma_mask          = &iop13xx_tpmi_mask,
+-		.coherent_dma_mask = DMA_BIT_MASK(64),
++		.coherent_dma_mask = DMA_BIT_MASK(32),
+ 	},
+ };
+ 
+@@ -171,7 +171,7 @@ static struct platform_device iop13xx_tpmi_1_device = {
+ 	.resource = iop13xx_tpmi_1_resources,
+ 	.dev = {
+ 		.dma_mask          = &iop13xx_tpmi_mask,
+-		.coherent_dma_mask = DMA_BIT_MASK(64),
++		.coherent_dma_mask = DMA_BIT_MASK(32),
+ 	},
+ };
+ 
+@@ -182,7 +182,7 @@ static struct platform_device iop13xx_tpmi_2_device = {
+ 	.resource = iop13xx_tpmi_2_resources,
+ 	.dev = {
+ 		.dma_mask          = &iop13xx_tpmi_mask,
+-		.coherent_dma_mask = DMA_BIT_MASK(64),
++		.coherent_dma_mask = DMA_BIT_MASK(32),
+ 	},
+ };
+ 
+@@ -193,7 +193,7 @@ static struct platform_device iop13xx_tpmi_3_device = {
+ 	.resource = iop13xx_tpmi_3_resources,
+ 	.dev = {
+ 		.dma_mask          = &iop13xx_tpmi_mask,
+-		.coherent_dma_mask = DMA_BIT_MASK(64),
++		.coherent_dma_mask = DMA_BIT_MASK(32),
+ 	},
+ };
+ 
+diff --git a/arch/arm/mach-omap2/display.c b/arch/arm/mach-omap2/display.c
+index 1444b4b4bd9f..439e143cad7b 100644
+--- a/arch/arm/mach-omap2/display.c
++++ b/arch/arm/mach-omap2/display.c
+@@ -250,8 +250,10 @@ static int __init omapdss_init_of(void)
+ 	if (!node)
+ 		return 0;
+ 
+-	if (!of_device_is_available(node))
++	if (!of_device_is_available(node)) {
++		of_node_put(node);
+ 		return 0;
++	}
+ 
+ 	pdev = of_find_device_by_node(node);
+ 
+diff --git a/arch/arm/plat-iop/adma.c b/arch/arm/plat-iop/adma.c
+index a4d1f8de3b5b..d9612221e484 100644
+--- a/arch/arm/plat-iop/adma.c
++++ b/arch/arm/plat-iop/adma.c
+@@ -143,7 +143,7 @@ struct platform_device iop3xx_dma_0_channel = {
+ 	.resource = iop3xx_dma_0_resources,
+ 	.dev = {
+ 		.dma_mask = &iop3xx_adma_dmamask,
+-		.coherent_dma_mask = DMA_BIT_MASK(64),
++		.coherent_dma_mask = DMA_BIT_MASK(32),
+ 		.platform_data = (void *) &iop3xx_dma_0_data,
+ 	},
+ };
+@@ -155,7 +155,7 @@ struct platform_device iop3xx_dma_1_channel = {
+ 	.resource = iop3xx_dma_1_resources,
+ 	.dev = {
+ 		.dma_mask = &iop3xx_adma_dmamask,
+-		.coherent_dma_mask = DMA_BIT_MASK(64),
++		.coherent_dma_mask = DMA_BIT_MASK(32),
+ 		.platform_data = (void *) &iop3xx_dma_1_data,
+ 	},
+ };
+@@ -167,7 +167,7 @@ struct platform_device iop3xx_aau_channel = {
+ 	.resource = iop3xx_aau_resources,
+ 	.dev = {
+ 		.dma_mask = &iop3xx_adma_dmamask,
+-		.coherent_dma_mask = DMA_BIT_MASK(64),
++		.coherent_dma_mask = DMA_BIT_MASK(32),
+ 		.platform_data = (void *) &iop3xx_aau_data,
+ 	},
+ };
+diff --git a/arch/arm/plat-orion/common.c b/arch/arm/plat-orion/common.c
+index a2399fd66e97..1e970873439c 100644
+--- a/arch/arm/plat-orion/common.c
++++ b/arch/arm/plat-orion/common.c
+@@ -622,7 +622,7 @@ static struct platform_device orion_xor0_shared = {
+ 	.resource	= orion_xor0_shared_resources,
+ 	.dev            = {
+ 		.dma_mask               = &orion_xor_dmamask,
+-		.coherent_dma_mask      = DMA_BIT_MASK(64),
++		.coherent_dma_mask      = DMA_BIT_MASK(32),
+ 		.platform_data          = &orion_xor0_pdata,
+ 	},
+ };
+@@ -683,7 +683,7 @@ static struct platform_device orion_xor1_shared = {
+ 	.resource	= orion_xor1_shared_resources,
+ 	.dev            = {
+ 		.dma_mask               = &orion_xor_dmamask,
+-		.coherent_dma_mask      = DMA_BIT_MASK(64),
++		.coherent_dma_mask      = DMA_BIT_MASK(32),
+ 		.platform_data          = &orion_xor1_pdata,
+ 	},
+ };
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328-roc-cc.dts b/arch/arm64/boot/dts/rockchip/rk3328-roc-cc.dts
+index 99d0d9912950..a91f87df662e 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328-roc-cc.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3328-roc-cc.dts
+@@ -107,8 +107,8 @@
+ 	snps,reset-gpio = <&gpio1 RK_PC2 GPIO_ACTIVE_LOW>;
+ 	snps,reset-active-low;
+ 	snps,reset-delays-us = <0 10000 50000>;
+-	tx_delay = <0x25>;
+-	rx_delay = <0x11>;
++	tx_delay = <0x24>;
++	rx_delay = <0x18>;
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/kernel/sdei.c b/arch/arm64/kernel/sdei.c
+index 5ba4465e44f0..ea94cf8f9dc6 100644
+--- a/arch/arm64/kernel/sdei.c
++++ b/arch/arm64/kernel/sdei.c
+@@ -94,6 +94,9 @@ static bool on_sdei_normal_stack(unsigned long sp, struct stack_info *info)
+ 	unsigned long low = (unsigned long)raw_cpu_read(sdei_stack_normal_ptr);
+ 	unsigned long high = low + SDEI_STACK_SIZE;
+ 
++	if (!low)
++		return false;
++
+ 	if (sp < low || sp >= high)
+ 		return false;
+ 
+@@ -111,6 +114,9 @@ static bool on_sdei_critical_stack(unsigned long sp, struct stack_info *info)
+ 	unsigned long low = (unsigned long)raw_cpu_read(sdei_stack_critical_ptr);
+ 	unsigned long high = low + SDEI_STACK_SIZE;
+ 
++	if (!low)
++		return false;
++
+ 	if (sp < low || sp >= high)
+ 		return false;
+ 
+diff --git a/arch/powerpc/kernel/kvm.c b/arch/powerpc/kernel/kvm.c
+index 683b5b3805bd..cd381e2291df 100644
+--- a/arch/powerpc/kernel/kvm.c
++++ b/arch/powerpc/kernel/kvm.c
+@@ -22,6 +22,7 @@
+ #include <linux/kvm_host.h>
+ #include <linux/init.h>
+ #include <linux/export.h>
++#include <linux/kmemleak.h>
+ #include <linux/kvm_para.h>
+ #include <linux/slab.h>
+ #include <linux/of.h>
+@@ -712,6 +713,12 @@ static void kvm_use_magic_page(void)
+ 
+ static __init void kvm_free_tmp(void)
+ {
++	/*
++	 * Inform kmemleak about the hole in the .bss section since the
++	 * corresponding pages will be unmapped with DEBUG_PAGEALLOC=y.
++	 */
++	kmemleak_free_part(&kvm_tmp[kvm_tmp_index],
++			   ARRAY_SIZE(kvm_tmp) - kvm_tmp_index);
+ 	free_reserved_area(&kvm_tmp[kvm_tmp_index],
+ 			   &kvm_tmp[ARRAY_SIZE(kvm_tmp)], -1, NULL);
+ }
+diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
+index 06898c13901d..aec91dbcdc0b 100644
+--- a/arch/powerpc/mm/slice.c
++++ b/arch/powerpc/mm/slice.c
+@@ -32,6 +32,7 @@
+ #include <linux/export.h>
+ #include <linux/hugetlb.h>
+ #include <linux/sched/mm.h>
++#include <linux/security.h>
+ #include <asm/mman.h>
+ #include <asm/mmu.h>
+ #include <asm/copro.h>
+@@ -377,6 +378,7 @@ static unsigned long slice_find_area_topdown(struct mm_struct *mm,
+ 	int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
+ 	unsigned long addr, found, prev;
+ 	struct vm_unmapped_area_info info;
++	unsigned long min_addr = max(PAGE_SIZE, mmap_min_addr);
+ 
+ 	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
+ 	info.length = len;
+@@ -393,7 +395,7 @@ static unsigned long slice_find_area_topdown(struct mm_struct *mm,
+ 	if (high_limit > DEFAULT_MAP_WINDOW)
+ 		addr += mm->context.slb_addr_limit - DEFAULT_MAP_WINDOW;
+ 
+-	while (addr > PAGE_SIZE) {
++	while (addr > min_addr) {
+ 		info.high_limit = addr;
+ 		if (!slice_scan_available(addr - 1, available, 0, &addr))
+ 			continue;
+@@ -405,8 +407,8 @@ static unsigned long slice_find_area_topdown(struct mm_struct *mm,
+ 		 * Check if we need to reduce the range, or if we can
+ 		 * extend it to cover the previous available slice.
+ 		 */
+-		if (addr < PAGE_SIZE)
+-			addr = PAGE_SIZE;
++		if (addr < min_addr)
++			addr = min_addr;
+ 		else if (slice_scan_available(addr - 1, available, 0, &prev)) {
+ 			addr = prev;
+ 			goto prev_slice;
+@@ -528,7 +530,7 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
+ 		addr = _ALIGN_UP(addr, page_size);
+ 		slice_dbg(" aligned addr=%lx\n", addr);
+ 		/* Ignore hint if it's too large or overlaps a VMA */
+-		if (addr > high_limit - len ||
++		if (addr > high_limit - len || addr < mmap_min_addr ||
+ 		    !slice_area_is_free(mm, addr, len))
+ 			addr = 0;
+ 	}
+diff --git a/arch/riscv/include/asm/uaccess.h b/arch/riscv/include/asm/uaccess.h
+index 637b896894fc..aa82df30e38a 100644
+--- a/arch/riscv/include/asm/uaccess.h
++++ b/arch/riscv/include/asm/uaccess.h
+@@ -301,7 +301,7 @@ do {								\
+ 		"	.balign 4\n"				\
+ 		"4:\n"						\
+ 		"	li %0, %6\n"				\
+-		"	jump 2b, %1\n"				\
++		"	jump 3b, %1\n"				\
+ 		"	.previous\n"				\
+ 		"	.section __ex_table,\"a\"\n"		\
+ 		"	.balign " RISCV_SZPTR "\n"			\
+diff --git a/arch/sh/boards/of-generic.c b/arch/sh/boards/of-generic.c
+index 958f46da3a79..d91065e81a4e 100644
+--- a/arch/sh/boards/of-generic.c
++++ b/arch/sh/boards/of-generic.c
+@@ -164,10 +164,10 @@ static struct sh_machine_vector __initmv sh_of_generic_mv = {
+ 
+ struct sh_clk_ops;
+ 
+-void __init arch_init_clk_ops(struct sh_clk_ops **ops, int idx)
++void __init __weak arch_init_clk_ops(struct sh_clk_ops **ops, int idx)
+ {
+ }
+ 
+-void __init plat_irq_setup(void)
++void __init __weak plat_irq_setup(void)
+ {
+ }
+diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
+index d45f3fbd232e..f15441b07dad 100644
+--- a/arch/x86/events/amd/core.c
++++ b/arch/x86/events/amd/core.c
+@@ -116,6 +116,110 @@ static __initconst const u64 amd_hw_cache_event_ids
+  },
+ };
+ 
++static __initconst const u64 amd_hw_cache_event_ids_f17h
++				[PERF_COUNT_HW_CACHE_MAX]
++				[PERF_COUNT_HW_CACHE_OP_MAX]
++				[PERF_COUNT_HW_CACHE_RESULT_MAX] = {
++[C(L1D)] = {
++	[C(OP_READ)] = {
++		[C(RESULT_ACCESS)] = 0x0040, /* Data Cache Accesses */
++		[C(RESULT_MISS)]   = 0xc860, /* L2$ access from DC Miss */
++	},
++	[C(OP_WRITE)] = {
++		[C(RESULT_ACCESS)] = 0,
++		[C(RESULT_MISS)]   = 0,
++	},
++	[C(OP_PREFETCH)] = {
++		[C(RESULT_ACCESS)] = 0xff5a, /* h/w prefetch DC Fills */
++		[C(RESULT_MISS)]   = 0,
++	},
++},
++[C(L1I)] = {
++	[C(OP_READ)] = {
++		[C(RESULT_ACCESS)] = 0x0080, /* Instruction cache fetches  */
++		[C(RESULT_MISS)]   = 0x0081, /* Instruction cache misses   */
++	},
++	[C(OP_WRITE)] = {
++		[C(RESULT_ACCESS)] = -1,
++		[C(RESULT_MISS)]   = -1,
++	},
++	[C(OP_PREFETCH)] = {
++		[C(RESULT_ACCESS)] = 0,
++		[C(RESULT_MISS)]   = 0,
++	},
++},
++[C(LL)] = {
++	[C(OP_READ)] = {
++		[C(RESULT_ACCESS)] = 0,
++		[C(RESULT_MISS)]   = 0,
++	},
++	[C(OP_WRITE)] = {
++		[C(RESULT_ACCESS)] = 0,
++		[C(RESULT_MISS)]   = 0,
++	},
++	[C(OP_PREFETCH)] = {
++		[C(RESULT_ACCESS)] = 0,
++		[C(RESULT_MISS)]   = 0,
++	},
++},
++[C(DTLB)] = {
++	[C(OP_READ)] = {
++		[C(RESULT_ACCESS)] = 0xff45, /* All L2 DTLB accesses */
++		[C(RESULT_MISS)]   = 0xf045, /* L2 DTLB misses (PT walks) */
++	},
++	[C(OP_WRITE)] = {
++		[C(RESULT_ACCESS)] = 0,
++		[C(RESULT_MISS)]   = 0,
++	},
++	[C(OP_PREFETCH)] = {
++		[C(RESULT_ACCESS)] = 0,
++		[C(RESULT_MISS)]   = 0,
++	},
++},
++[C(ITLB)] = {
++	[C(OP_READ)] = {
++		[C(RESULT_ACCESS)] = 0x0084, /* L1 ITLB misses, L2 ITLB hits */
++		[C(RESULT_MISS)]   = 0xff85, /* L1 ITLB misses, L2 misses */
++	},
++	[C(OP_WRITE)] = {
++		[C(RESULT_ACCESS)] = -1,
++		[C(RESULT_MISS)]   = -1,
++	},
++	[C(OP_PREFETCH)] = {
++		[C(RESULT_ACCESS)] = -1,
++		[C(RESULT_MISS)]   = -1,
++	},
++},
++[C(BPU)] = {
++	[C(OP_READ)] = {
++		[C(RESULT_ACCESS)] = 0x00c2, /* Retired Branch Instr.      */
++		[C(RESULT_MISS)]   = 0x00c3, /* Retired Mispredicted BI    */
++	},
++	[C(OP_WRITE)] = {
++		[C(RESULT_ACCESS)] = -1,
++		[C(RESULT_MISS)]   = -1,
++	},
++	[C(OP_PREFETCH)] = {
++		[C(RESULT_ACCESS)] = -1,
++		[C(RESULT_MISS)]   = -1,
++	},
++},
++[C(NODE)] = {
++	[C(OP_READ)] = {
++		[C(RESULT_ACCESS)] = 0,
++		[C(RESULT_MISS)]   = 0,
++	},
++	[C(OP_WRITE)] = {
++		[C(RESULT_ACCESS)] = -1,
++		[C(RESULT_MISS)]   = -1,
++	},
++	[C(OP_PREFETCH)] = {
++		[C(RESULT_ACCESS)] = -1,
++		[C(RESULT_MISS)]   = -1,
++	},
++},
++};
++
+ /*
+  * AMD Performance Monitor K7 and later, up to and including Family 16h:
+  */
+@@ -865,9 +969,10 @@ __init int amd_pmu_init(void)
+ 		x86_pmu.amd_nb_constraints = 0;
+ 	}
+ 
+-	/* Events are common for all AMDs */
+-	memcpy(hw_cache_event_ids, amd_hw_cache_event_ids,
+-	       sizeof(hw_cache_event_ids));
++	if (boot_cpu_data.x86 >= 0x17)
++		memcpy(hw_cache_event_ids, amd_hw_cache_event_ids_f17h, sizeof(hw_cache_event_ids));
++	else
++		memcpy(hw_cache_event_ids, amd_hw_cache_event_ids, sizeof(hw_cache_event_ids));
+ 
+ 	return 0;
+ }
+diff --git a/arch/x86/kernel/cpu/mce/severity.c b/arch/x86/kernel/cpu/mce/severity.c
+index dc3e26e905a3..65201e180fe0 100644
+--- a/arch/x86/kernel/cpu/mce/severity.c
++++ b/arch/x86/kernel/cpu/mce/severity.c
+@@ -165,6 +165,11 @@ static struct severity {
+ 		SER, MASK(MCI_STATUS_OVER|MCI_UC_SAR|MCI_ADDR|MCACOD, MCI_UC_SAR|MCI_ADDR|MCACOD_DATA),
+ 		KERNEL
+ 		),
++	MCESEV(
++		PANIC, "Instruction fetch error in kernel",
++		SER, MASK(MCI_STATUS_OVER|MCI_UC_SAR|MCI_ADDR|MCACOD, MCI_UC_SAR|MCI_ADDR|MCACOD_INSTR),
++		KERNEL
++		),
+ #endif
+ 	MCESEV(
+ 		PANIC, "Action required: unknown MCACOD",
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index 4b6c2da7265c..3339697de6e5 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -70,7 +70,6 @@
+ #define APIC_BROADCAST			0xFF
+ #define X2APIC_BROADCAST		0xFFFFFFFFul
+ 
+-static bool lapic_timer_advance_adjust_done = false;
+ #define LAPIC_TIMER_ADVANCE_ADJUST_DONE 100
+ /* step-by-step approximation to mitigate fluctuation */
+ #define LAPIC_TIMER_ADVANCE_ADJUST_STEP 8
+@@ -1479,14 +1478,32 @@ static bool lapic_timer_int_injected(struct kvm_vcpu *vcpu)
+ 	return false;
+ }
+ 
++static inline void __wait_lapic_expire(struct kvm_vcpu *vcpu, u64 guest_cycles)
++{
++	u64 timer_advance_ns = vcpu->arch.apic->lapic_timer.timer_advance_ns;
++
++	/*
++	 * If the guest TSC is running at a different ratio than the host, then
++	 * convert the delay to nanoseconds to achieve an accurate delay.  Note
++	 * that __delay() uses delay_tsc whenever the hardware has TSC, thus
++	 * always for VMX enabled hardware.
++	 */
++	if (vcpu->arch.tsc_scaling_ratio == kvm_default_tsc_scaling_ratio) {
++		__delay(min(guest_cycles,
++			nsec_to_cycles(vcpu, timer_advance_ns)));
++	} else {
++		u64 delay_ns = guest_cycles * 1000000ULL;
++		do_div(delay_ns, vcpu->arch.virtual_tsc_khz);
++		ndelay(min_t(u32, delay_ns, timer_advance_ns));
++	}
++}
++
+ void wait_lapic_expire(struct kvm_vcpu *vcpu)
+ {
+ 	struct kvm_lapic *apic = vcpu->arch.apic;
++	u32 timer_advance_ns = apic->lapic_timer.timer_advance_ns;
+ 	u64 guest_tsc, tsc_deadline, ns;
+ 
+-	if (!lapic_in_kernel(vcpu))
+-		return;
+-
+ 	if (apic->lapic_timer.expired_tscdeadline == 0)
+ 		return;
+ 
+@@ -1498,33 +1515,37 @@ void wait_lapic_expire(struct kvm_vcpu *vcpu)
+ 	guest_tsc = kvm_read_l1_tsc(vcpu, rdtsc());
+ 	trace_kvm_wait_lapic_expire(vcpu->vcpu_id, guest_tsc - tsc_deadline);
+ 
+-	/* __delay is delay_tsc whenever the hardware has TSC, thus always.  */
+ 	if (guest_tsc < tsc_deadline)
+-		__delay(min(tsc_deadline - guest_tsc,
+-			nsec_to_cycles(vcpu, lapic_timer_advance_ns)));
++		__wait_lapic_expire(vcpu, tsc_deadline - guest_tsc);
+ 
+-	if (!lapic_timer_advance_adjust_done) {
++	if (!apic->lapic_timer.timer_advance_adjust_done) {
+ 		/* too early */
+ 		if (guest_tsc < tsc_deadline) {
+ 			ns = (tsc_deadline - guest_tsc) * 1000000ULL;
+ 			do_div(ns, vcpu->arch.virtual_tsc_khz);
+-			lapic_timer_advance_ns -= min((unsigned int)ns,
+-				lapic_timer_advance_ns / LAPIC_TIMER_ADVANCE_ADJUST_STEP);
++			timer_advance_ns -= min((u32)ns,
++				timer_advance_ns / LAPIC_TIMER_ADVANCE_ADJUST_STEP);
+ 		} else {
+ 		/* too late */
+ 			ns = (guest_tsc - tsc_deadline) * 1000000ULL;
+ 			do_div(ns, vcpu->arch.virtual_tsc_khz);
+-			lapic_timer_advance_ns += min((unsigned int)ns,
+-				lapic_timer_advance_ns / LAPIC_TIMER_ADVANCE_ADJUST_STEP);
++			timer_advance_ns += min((u32)ns,
++				timer_advance_ns / LAPIC_TIMER_ADVANCE_ADJUST_STEP);
+ 		}
+ 		if (abs(guest_tsc - tsc_deadline) < LAPIC_TIMER_ADVANCE_ADJUST_DONE)
+-			lapic_timer_advance_adjust_done = true;
++			apic->lapic_timer.timer_advance_adjust_done = true;
++		if (unlikely(timer_advance_ns > 5000)) {
++			timer_advance_ns = 0;
++			apic->lapic_timer.timer_advance_adjust_done = true;
++		}
++		apic->lapic_timer.timer_advance_ns = timer_advance_ns;
+ 	}
+ }
+ 
+ static void start_sw_tscdeadline(struct kvm_lapic *apic)
+ {
+-	u64 guest_tsc, tscdeadline = apic->lapic_timer.tscdeadline;
++	struct kvm_timer *ktimer = &apic->lapic_timer;
++	u64 guest_tsc, tscdeadline = ktimer->tscdeadline;
+ 	u64 ns = 0;
+ 	ktime_t expire;
+ 	struct kvm_vcpu *vcpu = apic->vcpu;
+@@ -1539,13 +1560,15 @@ static void start_sw_tscdeadline(struct kvm_lapic *apic)
+ 
+ 	now = ktime_get();
+ 	guest_tsc = kvm_read_l1_tsc(vcpu, rdtsc());
+-	if (likely(tscdeadline > guest_tsc)) {
+-		ns = (tscdeadline - guest_tsc) * 1000000ULL;
+-		do_div(ns, this_tsc_khz);
++
++	ns = (tscdeadline - guest_tsc) * 1000000ULL;
++	do_div(ns, this_tsc_khz);
++
++	if (likely(tscdeadline > guest_tsc) &&
++	    likely(ns > apic->lapic_timer.timer_advance_ns)) {
+ 		expire = ktime_add_ns(now, ns);
+-		expire = ktime_sub_ns(expire, lapic_timer_advance_ns);
+-		hrtimer_start(&apic->lapic_timer.timer,
+-				expire, HRTIMER_MODE_ABS_PINNED);
++		expire = ktime_sub_ns(expire, ktimer->timer_advance_ns);
++		hrtimer_start(&ktimer->timer, expire, HRTIMER_MODE_ABS_PINNED);
+ 	} else
+ 		apic_timer_expired(apic);
+ 
+@@ -2252,7 +2275,7 @@ static enum hrtimer_restart apic_timer_fn(struct hrtimer *data)
+ 		return HRTIMER_NORESTART;
+ }
+ 
+-int kvm_create_lapic(struct kvm_vcpu *vcpu)
++int kvm_create_lapic(struct kvm_vcpu *vcpu, int timer_advance_ns)
+ {
+ 	struct kvm_lapic *apic;
+ 
+@@ -2276,6 +2299,14 @@ int kvm_create_lapic(struct kvm_vcpu *vcpu)
+ 	hrtimer_init(&apic->lapic_timer.timer, CLOCK_MONOTONIC,
+ 		     HRTIMER_MODE_ABS_PINNED);
+ 	apic->lapic_timer.timer.function = apic_timer_fn;
++	if (timer_advance_ns == -1) {
++		apic->lapic_timer.timer_advance_ns = 1000;
++		apic->lapic_timer.timer_advance_adjust_done = false;
++	} else {
++		apic->lapic_timer.timer_advance_ns = timer_advance_ns;
++		apic->lapic_timer.timer_advance_adjust_done = true;
++	}
++
+ 
+ 	/*
+ 	 * APIC is created enabled. This will prevent kvm_lapic_set_base from
+diff --git a/arch/x86/kvm/lapic.h b/arch/x86/kvm/lapic.h
+index ff6ef9c3d760..d6d049ba3045 100644
+--- a/arch/x86/kvm/lapic.h
++++ b/arch/x86/kvm/lapic.h
+@@ -31,8 +31,10 @@ struct kvm_timer {
+ 	u32 timer_mode_mask;
+ 	u64 tscdeadline;
+ 	u64 expired_tscdeadline;
++	u32 timer_advance_ns;
+ 	atomic_t pending;			/* accumulated triggered timers */
+ 	bool hv_timer_in_use;
++	bool timer_advance_adjust_done;
+ };
+ 
+ struct kvm_lapic {
+@@ -62,7 +64,7 @@ struct kvm_lapic {
+ 
+ struct dest_map;
+ 
+-int kvm_create_lapic(struct kvm_vcpu *vcpu);
++int kvm_create_lapic(struct kvm_vcpu *vcpu, int timer_advance_ns);
+ void kvm_free_lapic(struct kvm_vcpu *vcpu);
+ 
+ int kvm_apic_has_interrupt(struct kvm_vcpu *vcpu);
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index e544cec812f9..2a07e43ee666 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -6815,7 +6815,8 @@ static int sev_dbg_crypt(struct kvm *kvm, struct kvm_sev_cmd *argp, bool dec)
+ 	struct page **src_p, **dst_p;
+ 	struct kvm_sev_dbg debug;
+ 	unsigned long n;
+-	int ret, size;
++	unsigned int size;
++	int ret;
+ 
+ 	if (!sev_guest(kvm))
+ 		return -ENOTTY;
+@@ -6823,6 +6824,11 @@ static int sev_dbg_crypt(struct kvm *kvm, struct kvm_sev_cmd *argp, bool dec)
+ 	if (copy_from_user(&debug, (void __user *)(uintptr_t)argp->data, sizeof(debug)))
+ 		return -EFAULT;
+ 
++	if (!debug.len || debug.src_uaddr + debug.len < debug.src_uaddr)
++		return -EINVAL;
++	if (!debug.dst_uaddr)
++		return -EINVAL;
++
+ 	vaddr = debug.src_uaddr;
+ 	size = debug.len;
+ 	vaddr_end = vaddr + size;
+@@ -6873,8 +6879,8 @@ static int sev_dbg_crypt(struct kvm *kvm, struct kvm_sev_cmd *argp, bool dec)
+ 						     dst_vaddr,
+ 						     len, &argp->error);
+ 
+-		sev_unpin_memory(kvm, src_p, 1);
+-		sev_unpin_memory(kvm, dst_p, 1);
++		sev_unpin_memory(kvm, src_p, n);
++		sev_unpin_memory(kvm, dst_p, n);
+ 
+ 		if (ret)
+ 			goto err;
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index a4bcac94392c..8f8c42b04875 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -2793,7 +2793,7 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
+ 		[fail]"i"(offsetof(struct vcpu_vmx, fail)),
+ 		[host_rsp]"i"(offsetof(struct vcpu_vmx, host_rsp)),
+ 		[wordsize]"i"(sizeof(ulong))
+-	      : "rax", "cc", "memory"
++	      : "cc", "memory"
+ 	);
+ 
+ 	preempt_enable();
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index e7fe8c692362..da6fdd5434a1 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -6465,7 +6465,7 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
+ 		"xor %%edi, %%edi \n\t"
+ 		"xor %%ebp, %%ebp \n\t"
+ 		"pop  %%" _ASM_BP "; pop  %%" _ASM_DX " \n\t"
+-	      : ASM_CALL_CONSTRAINT
++	      : ASM_CALL_CONSTRAINT, "=S"((int){0})
+ 	      : "c"(vmx), "d"((unsigned long)HOST_RSP), "S"(evmcs_rsp),
+ 		[launched]"i"(offsetof(struct vcpu_vmx, __launched)),
+ 		[fail]"i"(offsetof(struct vcpu_vmx, fail)),
+@@ -7133,6 +7133,7 @@ static int vmx_set_hv_timer(struct kvm_vcpu *vcpu, u64 guest_deadline_tsc)
+ {
+ 	struct vcpu_vmx *vmx;
+ 	u64 tscl, guest_tscl, delta_tsc, lapic_timer_advance_cycles;
++	struct kvm_timer *ktimer = &vcpu->arch.apic->lapic_timer;
+ 
+ 	if (kvm_mwait_in_guest(vcpu->kvm))
+ 		return -EOPNOTSUPP;
+@@ -7141,7 +7142,8 @@ static int vmx_set_hv_timer(struct kvm_vcpu *vcpu, u64 guest_deadline_tsc)
+ 	tscl = rdtsc();
+ 	guest_tscl = kvm_read_l1_tsc(vcpu, tscl);
+ 	delta_tsc = max(guest_deadline_tsc, guest_tscl) - guest_tscl;
+-	lapic_timer_advance_cycles = nsec_to_cycles(vcpu, lapic_timer_advance_ns);
++	lapic_timer_advance_cycles = nsec_to_cycles(vcpu,
++						    ktimer->timer_advance_ns);
+ 
+ 	if (delta_tsc > lapic_timer_advance_cycles)
+ 		delta_tsc -= lapic_timer_advance_cycles;
+diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
+index 1abae731c3e4..b26cb680ba38 100644
+--- a/arch/x86/kvm/vmx/vmx.h
++++ b/arch/x86/kvm/vmx/vmx.h
+@@ -444,7 +444,8 @@ static inline u32 vmx_vmentry_ctrl(void)
+ {
+ 	u32 vmentry_ctrl = vmcs_config.vmentry_ctrl;
+ 	if (pt_mode == PT_MODE_SYSTEM)
+-		vmentry_ctrl &= ~(VM_EXIT_PT_CONCEAL_PIP | VM_EXIT_CLEAR_IA32_RTIT_CTL);
++		vmentry_ctrl &= ~(VM_ENTRY_PT_CONCEAL_PIP |
++				  VM_ENTRY_LOAD_IA32_RTIT_CTL);
+ 	/* Loading of EFER and PERF_GLOBAL_CTRL are toggled dynamically */
+ 	return vmentry_ctrl &
+ 		~(VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL | VM_ENTRY_LOAD_IA32_EFER);
+@@ -454,9 +455,10 @@ static inline u32 vmx_vmexit_ctrl(void)
+ {
+ 	u32 vmexit_ctrl = vmcs_config.vmexit_ctrl;
+ 	if (pt_mode == PT_MODE_SYSTEM)
+-		vmexit_ctrl &= ~(VM_ENTRY_PT_CONCEAL_PIP | VM_ENTRY_LOAD_IA32_RTIT_CTL);
++		vmexit_ctrl &= ~(VM_EXIT_PT_CONCEAL_PIP |
++				 VM_EXIT_CLEAR_IA32_RTIT_CTL);
+ 	/* Loading of EFER and PERF_GLOBAL_CTRL are toggled dynamically */
+-	return vmcs_config.vmexit_ctrl &
++	return vmexit_ctrl &
+ 		~(VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL | VM_EXIT_LOAD_IA32_EFER);
+ }
+ 
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 7e413ea19a9a..3eeb7183fc09 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -136,10 +136,14 @@ EXPORT_SYMBOL_GPL(kvm_default_tsc_scaling_ratio);
+ static u32 __read_mostly tsc_tolerance_ppm = 250;
+ module_param(tsc_tolerance_ppm, uint, S_IRUGO | S_IWUSR);
+ 
+-/* lapic timer advance (tscdeadline mode only) in nanoseconds */
+-unsigned int __read_mostly lapic_timer_advance_ns = 1000;
++/*
++ * lapic timer advance (tscdeadline mode only) in nanoseconds.  '-1' enables
++ * adaptive tuning starting from default advancement of 1000ns.  '0' disables
++ * advancement entirely.  Any other value is used as-is and disables adaptive
++ * tuning, i.e. allows privileged userspace to set an exact advancement time.
++ */
++static int __read_mostly lapic_timer_advance_ns = -1;
+ module_param(lapic_timer_advance_ns, uint, S_IRUGO | S_IWUSR);
+-EXPORT_SYMBOL_GPL(lapic_timer_advance_ns);
+ 
+ static bool __read_mostly vector_hashing = true;
+ module_param(vector_hashing, bool, S_IRUGO);
+@@ -7882,7 +7886,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
+ 	}
+ 
+ 	trace_kvm_entry(vcpu->vcpu_id);
+-	if (lapic_timer_advance_ns)
++	if (lapic_in_kernel(vcpu) &&
++	    vcpu->arch.apic->lapic_timer.timer_advance_ns)
+ 		wait_lapic_expire(vcpu);
+ 	guest_enter_irqoff();
+ 
+@@ -9070,7 +9075,7 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
+ 		goto fail_free_pio_data;
+ 
+ 	if (irqchip_in_kernel(vcpu->kvm)) {
+-		r = kvm_create_lapic(vcpu);
++		r = kvm_create_lapic(vcpu, lapic_timer_advance_ns);
+ 		if (r < 0)
+ 			goto fail_mmu_destroy;
+ 	} else
+diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
+index de3d46769ee3..b457160dc7ba 100644
+--- a/arch/x86/kvm/x86.h
++++ b/arch/x86/kvm/x86.h
+@@ -294,8 +294,6 @@ extern u64 kvm_supported_xcr0(void);
+ 
+ extern unsigned int min_timer_period_us;
+ 
+-extern unsigned int lapic_timer_advance_ns;
+-
+ extern bool enable_vmware_backdoor;
+ 
+ extern struct static_key kvm_no_apic_vcpu;
+diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
+index f905a2371080..8dacdb96899e 100644
+--- a/arch/x86/mm/init.c
++++ b/arch/x86/mm/init.c
+@@ -5,6 +5,7 @@
+ #include <linux/memblock.h>
+ #include <linux/swapfile.h>
+ #include <linux/swapops.h>
++#include <linux/kmemleak.h>
+ 
+ #include <asm/set_memory.h>
+ #include <asm/e820/api.h>
+@@ -766,6 +767,11 @@ void free_init_pages(const char *what, unsigned long begin, unsigned long end)
+ 	if (debug_pagealloc_enabled()) {
+ 		pr_info("debug: unmapping init [mem %#010lx-%#010lx]\n",
+ 			begin, end - 1);
++		/*
++		 * Inform kmemleak about the hole in the memory since the
++		 * corresponding pages will be unmapped.
++		 */
++		kmemleak_free_part((void *)begin, end - begin);
+ 		set_memory_np(begin, (end - begin) >> PAGE_SHIFT);
+ 	} else {
+ 		/*
+diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
+index 3f452ffed7e9..d669c5e797e0 100644
+--- a/arch/x86/mm/kaslr.c
++++ b/arch/x86/mm/kaslr.c
+@@ -94,7 +94,7 @@ void __init kernel_randomize_memory(void)
+ 	if (!kaslr_memory_enabled())
+ 		return;
+ 
+-	kaslr_regions[0].size_tb = 1 << (__PHYSICAL_MASK_SHIFT - TB_SHIFT);
++	kaslr_regions[0].size_tb = 1 << (MAX_PHYSMEM_BITS - TB_SHIFT);
+ 	kaslr_regions[1].size_tb = VMALLOC_SIZE_TB;
+ 
+ 	/*
+diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
+index 999d6d8f0bef..9a49335e717a 100644
+--- a/arch/x86/mm/tlb.c
++++ b/arch/x86/mm/tlb.c
+@@ -731,7 +731,7 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
+ {
+ 	int cpu;
+ 
+-	struct flush_tlb_info info __aligned(SMP_CACHE_BYTES) = {
++	struct flush_tlb_info info = {
+ 		.mm = mm,
+ 		.stride_shift = stride_shift,
+ 		.freed_tables = freed_tables,
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 16f9675c57e6..5a2585d69c81 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -1716,11 +1716,12 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
+ 	unsigned int depth;
+ 
+ 	list_splice_init(&plug->mq_list, &list);
+-	plug->rq_count = 0;
+ 
+ 	if (plug->rq_count > 2 && plug->multiple_queues)
+ 		list_sort(NULL, &list, plug_rq_cmp);
+ 
++	plug->rq_count = 0;
++
+ 	this_q = NULL;
+ 	this_hctx = NULL;
+ 	this_ctx = NULL;
+@@ -2341,7 +2342,7 @@ static int blk_mq_init_hctx(struct request_queue *q,
+ 	return 0;
+ 
+  free_fq:
+-	kfree(hctx->fq);
++	blk_free_flush_queue(hctx->fq);
+  exit_hctx:
+ 	if (set->ops->exit_hctx)
+ 		set->ops->exit_hctx(hctx, hctx_idx);
+diff --git a/drivers/block/null_blk_main.c b/drivers/block/null_blk_main.c
+index 62c9654b9ce8..fd7a9be54595 100644
+--- a/drivers/block/null_blk_main.c
++++ b/drivers/block/null_blk_main.c
+@@ -1749,6 +1749,11 @@ static int __init null_init(void)
+ 		return -EINVAL;
+ 	}
+ 
++	if (g_home_node != NUMA_NO_NODE && g_home_node >= nr_online_nodes) {
++		pr_err("null_blk: invalid home_node value\n");
++		g_home_node = NUMA_NO_NODE;
++	}
++
+ 	if (g_queue_mode == NULL_Q_RQ) {
+ 		pr_err("null_blk: legacy IO path no longer available\n");
+ 		return -EINVAL;
+diff --git a/drivers/block/xsysace.c b/drivers/block/xsysace.c
+index 87ccef4bd69e..32a21b8d1d85 100644
+--- a/drivers/block/xsysace.c
++++ b/drivers/block/xsysace.c
+@@ -1090,6 +1090,8 @@ static int ace_setup(struct ace_device *ace)
+ 	return 0;
+ 
+ err_read:
++	/* prevent double queue cleanup */
++	ace->gd->queue = NULL;
+ 	put_disk(ace->gd);
+ err_alloc_disk:
+ 	blk_cleanup_queue(ace->queue);
+diff --git a/drivers/bluetooth/btmtkuart.c b/drivers/bluetooth/btmtkuart.c
+index 4593baff2bc9..19eecf198321 100644
+--- a/drivers/bluetooth/btmtkuart.c
++++ b/drivers/bluetooth/btmtkuart.c
+@@ -115,11 +115,13 @@ static int mtk_hci_wmt_sync(struct hci_dev *hdev, u8 op, u8 flag, u16 plen,
+ 				  TASK_INTERRUPTIBLE, HCI_INIT_TIMEOUT);
+ 	if (err == -EINTR) {
+ 		bt_dev_err(hdev, "Execution of wmt command interrupted");
++		clear_bit(BTMTKUART_TX_WAIT_VND_EVT, &bdev->tx_state);
+ 		return err;
+ 	}
+ 
+ 	if (err) {
+ 		bt_dev_err(hdev, "Execution of wmt command timed out");
++		clear_bit(BTMTKUART_TX_WAIT_VND_EVT, &bdev->tx_state);
+ 		return -ETIMEDOUT;
+ 	}
+ 
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 4761499db9ee..470ee68555d9 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -2885,6 +2885,7 @@ static int btusb_config_oob_wake(struct hci_dev *hdev)
+ 		return 0;
+ 	}
+ 
++	irq_set_status_flags(irq, IRQ_NOAUTOEN);
+ 	ret = devm_request_irq(&hdev->dev, irq, btusb_oob_wake_handler,
+ 			       0, "OOB Wake-on-BT", data);
+ 	if (ret) {
+@@ -2899,7 +2900,6 @@ static int btusb_config_oob_wake(struct hci_dev *hdev)
+ 	}
+ 
+ 	data->oob_wake_irq = irq;
+-	disable_irq(irq);
+ 	bt_dev_info(hdev, "OOB Wake-on-BT configured at IRQ %u", irq);
+ 	return 0;
+ }
+diff --git a/drivers/clk/qcom/gcc-msm8998.c b/drivers/clk/qcom/gcc-msm8998.c
+index 1b779396e04f..42de947173f8 100644
+--- a/drivers/clk/qcom/gcc-msm8998.c
++++ b/drivers/clk/qcom/gcc-msm8998.c
+@@ -1112,6 +1112,7 @@ static struct clk_rcg2 ufs_axi_clk_src = {
+ 
+ static const struct freq_tbl ftbl_usb30_master_clk_src[] = {
+ 	F(19200000, P_XO, 1, 0, 0),
++	F(60000000, P_GPLL0_OUT_MAIN, 10, 0, 0),
+ 	F(120000000, P_GPLL0_OUT_MAIN, 5, 0, 0),
+ 	F(150000000, P_GPLL0_OUT_MAIN, 4, 0, 0),
+ 	{ }
+diff --git a/drivers/clk/x86/clk-pmc-atom.c b/drivers/clk/x86/clk-pmc-atom.c
+index d977193842df..19174835693b 100644
+--- a/drivers/clk/x86/clk-pmc-atom.c
++++ b/drivers/clk/x86/clk-pmc-atom.c
+@@ -165,7 +165,7 @@ static const struct clk_ops plt_clk_ops = {
+ };
+ 
+ static struct clk_plt *plt_clk_register(struct platform_device *pdev, int id,
+-					void __iomem *base,
++					const struct pmc_clk_data *pmc_data,
+ 					const char **parent_names,
+ 					int num_parents)
+ {
+@@ -184,9 +184,17 @@ static struct clk_plt *plt_clk_register(struct platform_device *pdev, int id,
+ 	init.num_parents = num_parents;
+ 
+ 	pclk->hw.init = &init;
+-	pclk->reg = base + PMC_CLK_CTL_OFFSET + id * PMC_CLK_CTL_SIZE;
++	pclk->reg = pmc_data->base + PMC_CLK_CTL_OFFSET + id * PMC_CLK_CTL_SIZE;
+ 	spin_lock_init(&pclk->lock);
+ 
++	/*
++	 * On some systems, the pmc_plt_clocks already enabled by the
++	 * firmware are being marked as critical to avoid them being
++	 * gated by the clock framework.
++	 */
++	if (pmc_data->critical && plt_clk_is_enabled(&pclk->hw))
++		init.flags |= CLK_IS_CRITICAL;
++
+ 	ret = devm_clk_hw_register(&pdev->dev, &pclk->hw);
+ 	if (ret) {
+ 		pclk = ERR_PTR(ret);
+@@ -332,7 +340,7 @@ static int plt_clk_probe(struct platform_device *pdev)
+ 		return PTR_ERR(parent_names);
+ 
+ 	for (i = 0; i < PMC_CLK_NUM; i++) {
+-		data->clks[i] = plt_clk_register(pdev, i, pmc_data->base,
++		data->clks[i] = plt_clk_register(pdev, i, pmc_data,
+ 						 parent_names, data->nparents);
+ 		if (IS_ERR(data->clks[i])) {
+ 			err = PTR_ERR(data->clks[i]);
+diff --git a/drivers/gpio/gpio-mxc.c b/drivers/gpio/gpio-mxc.c
+index 2d1dfa1e0745..e86e61dda4b7 100644
+--- a/drivers/gpio/gpio-mxc.c
++++ b/drivers/gpio/gpio-mxc.c
+@@ -438,8 +438,11 @@ static int mxc_gpio_probe(struct platform_device *pdev)
+ 
+ 	/* the controller clock is optional */
+ 	port->clk = devm_clk_get(&pdev->dev, NULL);
+-	if (IS_ERR(port->clk))
++	if (IS_ERR(port->clk)) {
++		if (PTR_ERR(port->clk) == -EPROBE_DEFER)
++			return -EPROBE_DEFER;
+ 		port->clk = NULL;
++	}
+ 
+ 	err = clk_prepare_enable(port->clk);
+ 	if (err) {
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index 9993b692598f..860e21ec6a49 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -1301,10 +1301,10 @@ static u32 __extract(u8 *report, unsigned offset, int n)
+ u32 hid_field_extract(const struct hid_device *hid, u8 *report,
+ 			unsigned offset, unsigned n)
+ {
+-	if (n > 32) {
+-		hid_warn(hid, "hid_field_extract() called with n (%d) > 32! (%s)\n",
++	if (n > 256) {
++		hid_warn(hid, "hid_field_extract() called with n (%d) > 256! (%s)\n",
+ 			 n, current->comm);
+-		n = 32;
++		n = 256;
+ 	}
+ 
+ 	return __extract(report, offset, n);
+diff --git a/drivers/hid/hid-debug.c b/drivers/hid/hid-debug.c
+index ac9fda1b5a72..1384e57182af 100644
+--- a/drivers/hid/hid-debug.c
++++ b/drivers/hid/hid-debug.c
+@@ -1060,10 +1060,15 @@ static int hid_debug_rdesc_show(struct seq_file *f, void *p)
+ 	seq_printf(f, "\n\n");
+ 
+ 	/* dump parsed data and input mappings */
++	if (down_interruptible(&hdev->driver_input_lock))
++		return 0;
++
+ 	hid_dump_device(hdev, f);
+ 	seq_printf(f, "\n");
+ 	hid_dump_input_mapping(hdev, f);
+ 
++	up(&hdev->driver_input_lock);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
+index 59a5608b8dc0..ff92a7b2fc89 100644
+--- a/drivers/hid/hid-input.c
++++ b/drivers/hid/hid-input.c
+@@ -995,6 +995,7 @@ static void hidinput_configure_usage(struct hid_input *hidinput, struct hid_fiel
+ 		case 0x1b8: map_key_clear(KEY_VIDEO);		break;
+ 		case 0x1bc: map_key_clear(KEY_MESSENGER);	break;
+ 		case 0x1bd: map_key_clear(KEY_INFO);		break;
++		case 0x1cb: map_key_clear(KEY_ASSISTANT);	break;
+ 		case 0x201: map_key_clear(KEY_NEW);		break;
+ 		case 0x202: map_key_clear(KEY_OPEN);		break;
+ 		case 0x203: map_key_clear(KEY_CLOSE);		break;
+diff --git a/drivers/hid/hid-logitech-hidpp.c b/drivers/hid/hid-logitech-hidpp.c
+index f040c8a7f9a9..199cc256e9d9 100644
+--- a/drivers/hid/hid-logitech-hidpp.c
++++ b/drivers/hid/hid-logitech-hidpp.c
+@@ -2111,6 +2111,13 @@ static int hidpp_ff_init(struct hidpp_device *hidpp, u8 feature_index)
+ 		kfree(data);
+ 		return -ENOMEM;
+ 	}
++	data->wq = create_singlethread_workqueue("hidpp-ff-sendqueue");
++	if (!data->wq) {
++		kfree(data->effect_ids);
++		kfree(data);
++		return -ENOMEM;
++	}
++
+ 	data->hidpp = hidpp;
+ 	data->feature_index = feature_index;
+ 	data->version = version;
+@@ -2155,7 +2162,6 @@ static int hidpp_ff_init(struct hidpp_device *hidpp, u8 feature_index)
+ 	/* ignore boost value at response.fap.params[2] */
+ 
+ 	/* init the hardware command queue */
+-	data->wq = create_singlethread_workqueue("hidpp-ff-sendqueue");
+ 	atomic_set(&data->workqueue_size, 0);
+ 
+ 	/* initialize with zero autocenter to get wheel in usable state */
+diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
+index 94088c0ed68a..e24790c988c0 100644
+--- a/drivers/hid/hid-quirks.c
++++ b/drivers/hid/hid-quirks.c
+@@ -744,7 +744,6 @@ static const struct hid_device_id hid_ignore_list[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_DEALEXTREAME, USB_DEVICE_ID_DEALEXTREAME_RADIO_SI4701) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_DELORME, USB_DEVICE_ID_DELORME_EARTHMATE) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_DELORME, USB_DEVICE_ID_DELORME_EM_LT20) },
+-	{ HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, 0x0400) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_ESSENTIAL_REALITY, USB_DEVICE_ID_ESSENTIAL_REALITY_P5) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_ETT, USB_DEVICE_ID_TC5UH) },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_ETT, USB_DEVICE_ID_TC4UM) },
+@@ -1025,6 +1024,10 @@ bool hid_ignore(struct hid_device *hdev)
+ 		if (hdev->product == 0x0401 &&
+ 		    strncmp(hdev->name, "ELAN0800", 8) != 0)
+ 			return true;
++		/* Same with product id 0x0400 */
++		if (hdev->product == 0x0400 &&
++		    strncmp(hdev->name, "QTEC0001", 8) != 0)
++			return true;
+ 		break;
+ 	}
+ 
+diff --git a/drivers/i2c/busses/i2c-imx.c b/drivers/i2c/busses/i2c-imx.c
+index fa9ad53845d9..d4b72e4ffd71 100644
+--- a/drivers/i2c/busses/i2c-imx.c
++++ b/drivers/i2c/busses/i2c-imx.c
+@@ -510,9 +510,9 @@ static int i2c_imx_clk_notifier_call(struct notifier_block *nb,
+ 				     unsigned long action, void *data)
+ {
+ 	struct clk_notifier_data *ndata = data;
+-	struct imx_i2c_struct *i2c_imx = container_of(&ndata->clk,
++	struct imx_i2c_struct *i2c_imx = container_of(nb,
+ 						      struct imx_i2c_struct,
+-						      clk);
++						      clk_change_nb);
+ 
+ 	if (action & POST_RATE_CHANGE)
+ 		i2c_imx_set_clk(i2c_imx, ndata->new_rate);
+diff --git a/drivers/i2c/busses/i2c-stm32f7.c b/drivers/i2c/busses/i2c-stm32f7.c
+index 13e1213561d4..4284fc991cfd 100644
+--- a/drivers/i2c/busses/i2c-stm32f7.c
++++ b/drivers/i2c/busses/i2c-stm32f7.c
+@@ -432,7 +432,7 @@ static int stm32f7_i2c_compute_timing(struct stm32f7_i2c_dev *i2c_dev,
+ 		 STM32F7_I2C_ANALOG_FILTER_DELAY_MAX : 0);
+ 	dnf_delay = setup->dnf * i2cclk;
+ 
+-	sdadel_min = setup->fall_time - i2c_specs[setup->speed].hddat_min -
++	sdadel_min = i2c_specs[setup->speed].hddat_min + setup->fall_time -
+ 		af_delay_min - (setup->dnf + 3) * i2cclk;
+ 
+ 	sdadel_max = i2c_specs[setup->speed].vddat_max - setup->rise_time -
+diff --git a/drivers/i2c/busses/i2c-synquacer.c b/drivers/i2c/busses/i2c-synquacer.c
+index 2184b7c3580e..6b8d803bd30e 100644
+--- a/drivers/i2c/busses/i2c-synquacer.c
++++ b/drivers/i2c/busses/i2c-synquacer.c
+@@ -602,6 +602,8 @@ static int synquacer_i2c_probe(struct platform_device *pdev)
+ 	i2c->adapter = synquacer_i2c_ops;
+ 	i2c_set_adapdata(&i2c->adapter, i2c);
+ 	i2c->adapter.dev.parent = &pdev->dev;
++	i2c->adapter.dev.of_node = pdev->dev.of_node;
++	ACPI_COMPANION_SET(&i2c->adapter.dev, ACPI_COMPANION(&pdev->dev));
+ 	i2c->adapter.nr = pdev->id;
+ 	init_completion(&i2c->completion);
+ 
+diff --git a/drivers/i2c/i2c-core-base.c b/drivers/i2c/i2c-core-base.c
+index af87a16ac3a5..60fb2afc0e50 100644
+--- a/drivers/i2c/i2c-core-base.c
++++ b/drivers/i2c/i2c-core-base.c
+@@ -327,6 +327,8 @@ static int i2c_device_probe(struct device *dev)
+ 
+ 		if (client->flags & I2C_CLIENT_HOST_NOTIFY) {
+ 			dev_dbg(dev, "Using Host Notify IRQ\n");
++			/* Keep adapter active when Host Notify is required */
++			pm_runtime_get_sync(&client->adapter->dev);
+ 			irq = i2c_smbus_host_notify_to_irq(client);
+ 		} else if (dev->of_node) {
+ 			irq = of_irq_get_byname(dev->of_node, "irq");
+@@ -431,6 +433,8 @@ static int i2c_device_remove(struct device *dev)
+ 	device_init_wakeup(&client->dev, false);
+ 
+ 	client->irq = client->init_irq;
++	if (client->flags & I2C_CLIENT_HOST_NOTIFY)
++		pm_runtime_put(&client->adapter->dev);
+ 
+ 	return status;
+ }
+diff --git a/drivers/infiniband/core/security.c b/drivers/infiniband/core/security.c
+index 1efadbccf394..7662e9347238 100644
+--- a/drivers/infiniband/core/security.c
++++ b/drivers/infiniband/core/security.c
+@@ -710,16 +710,20 @@ int ib_mad_agent_security_setup(struct ib_mad_agent *agent,
+ 						dev_name(&agent->device->dev),
+ 						agent->port_num);
+ 	if (ret)
+-		return ret;
++		goto free_security;
+ 
+ 	agent->lsm_nb.notifier_call = ib_mad_agent_security_change;
+ 	ret = register_lsm_notifier(&agent->lsm_nb);
+ 	if (ret)
+-		return ret;
++		goto free_security;
+ 
+ 	agent->smp_allowed = true;
+ 	agent->lsm_nb_reg = true;
+ 	return 0;
++
++free_security:
++	security_ib_free_security(agent->security);
++	return ret;
+ }
+ 
+ void ib_mad_agent_security_cleanup(struct ib_mad_agent *agent)
+@@ -727,9 +731,10 @@ void ib_mad_agent_security_cleanup(struct ib_mad_agent *agent)
+ 	if (!rdma_protocol_ib(agent->device, agent->port_num))
+ 		return;
+ 
+-	security_ib_free_security(agent->security);
+ 	if (agent->lsm_nb_reg)
+ 		unregister_lsm_notifier(&agent->lsm_nb);
++
++	security_ib_free_security(agent->security);
+ }
+ 
+ int ib_mad_enforce_security(struct ib_mad_agent_private *map, u16 pkey_index)
+diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
+index ac011836bb54..3220fb42ecce 100644
+--- a/drivers/infiniband/core/verbs.c
++++ b/drivers/infiniband/core/verbs.c
+@@ -1106,8 +1106,8 @@ struct ib_qp *ib_open_qp(struct ib_xrcd *xrcd,
+ }
+ EXPORT_SYMBOL(ib_open_qp);
+ 
+-static struct ib_qp *ib_create_xrc_qp(struct ib_qp *qp,
+-		struct ib_qp_init_attr *qp_init_attr)
++static struct ib_qp *create_xrc_qp(struct ib_qp *qp,
++				   struct ib_qp_init_attr *qp_init_attr)
+ {
+ 	struct ib_qp *real_qp = qp;
+ 
+@@ -1122,10 +1122,10 @@ static struct ib_qp *ib_create_xrc_qp(struct ib_qp *qp,
+ 
+ 	qp = __ib_open_qp(real_qp, qp_init_attr->event_handler,
+ 			  qp_init_attr->qp_context);
+-	if (!IS_ERR(qp))
+-		__ib_insert_xrcd_qp(qp_init_attr->xrcd, real_qp);
+-	else
+-		real_qp->device->ops.destroy_qp(real_qp);
++	if (IS_ERR(qp))
++		return qp;
++
++	__ib_insert_xrcd_qp(qp_init_attr->xrcd, real_qp);
+ 	return qp;
+ }
+ 
+@@ -1156,10 +1156,8 @@ struct ib_qp *ib_create_qp(struct ib_pd *pd,
+ 		return qp;
+ 
+ 	ret = ib_create_qp_security(qp, device);
+-	if (ret) {
+-		ib_destroy_qp(qp);
+-		return ERR_PTR(ret);
+-	}
++	if (ret)
++		goto err;
+ 
+ 	qp->real_qp    = qp;
+ 	qp->qp_type    = qp_init_attr->qp_type;
+@@ -1172,8 +1170,15 @@ struct ib_qp *ib_create_qp(struct ib_pd *pd,
+ 	INIT_LIST_HEAD(&qp->sig_mrs);
+ 	qp->port = 0;
+ 
+-	if (qp_init_attr->qp_type == IB_QPT_XRC_TGT)
+-		return ib_create_xrc_qp(qp, qp_init_attr);
++	if (qp_init_attr->qp_type == IB_QPT_XRC_TGT) {
++		struct ib_qp *xrc_qp = create_xrc_qp(qp, qp_init_attr);
++
++		if (IS_ERR(xrc_qp)) {
++			ret = PTR_ERR(xrc_qp);
++			goto err;
++		}
++		return xrc_qp;
++	}
+ 
+ 	qp->event_handler = qp_init_attr->event_handler;
+ 	qp->qp_context = qp_init_attr->qp_context;
+@@ -1200,11 +1205,8 @@ struct ib_qp *ib_create_qp(struct ib_pd *pd,
+ 
+ 	if (qp_init_attr->cap.max_rdma_ctxs) {
+ 		ret = rdma_rw_init_mrs(qp, qp_init_attr);
+-		if (ret) {
+-			pr_err("failed to init MR pool ret= %d\n", ret);
+-			ib_destroy_qp(qp);
+-			return ERR_PTR(ret);
+-		}
++		if (ret)
++			goto err;
+ 	}
+ 
+ 	/*
+@@ -1217,6 +1219,11 @@ struct ib_qp *ib_create_qp(struct ib_pd *pd,
+ 				 device->attrs.max_sge_rd);
+ 
+ 	return qp;
++
++err:
++	ib_destroy_qp(qp);
++	return ERR_PTR(ret);
++
+ }
+ EXPORT_SYMBOL(ib_create_qp);
+ 
+diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
+index e9c336cff8f5..f367f3db7ff8 100644
+--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
++++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
+@@ -2887,8 +2887,19 @@ static void srpt_queue_tm_rsp(struct se_cmd *cmd)
+ 	srpt_queue_response(cmd);
+ }
+ 
++/*
++ * This function is called for aborted commands if no response is sent to the
++ * initiator. Make sure that the credits freed by aborting a command are
++ * returned to the initiator the next time a response is sent by incrementing
++ * ch->req_lim_delta.
++ */
+ static void srpt_aborted_task(struct se_cmd *cmd)
+ {
++	struct srpt_send_ioctx *ioctx = container_of(cmd,
++				struct srpt_send_ioctx, cmd);
++	struct srpt_rdma_ch *ch = ioctx->ch;
++
++	atomic_inc(&ch->req_lim_delta);
+ }
+ 
+ static int srpt_queue_status(struct se_cmd *cmd)
+diff --git a/drivers/input/keyboard/snvs_pwrkey.c b/drivers/input/keyboard/snvs_pwrkey.c
+index effb63205d3d..4c67cf30a5d9 100644
+--- a/drivers/input/keyboard/snvs_pwrkey.c
++++ b/drivers/input/keyboard/snvs_pwrkey.c
+@@ -148,6 +148,9 @@ static int imx_snvs_pwrkey_probe(struct platform_device *pdev)
+ 		return error;
+ 	}
+ 
++	pdata->input = input;
++	platform_set_drvdata(pdev, pdata);
++
+ 	error = devm_request_irq(&pdev->dev, pdata->irq,
+ 			       imx_snvs_pwrkey_interrupt,
+ 			       0, pdev->name, pdev);
+@@ -163,9 +166,6 @@ static int imx_snvs_pwrkey_probe(struct platform_device *pdev)
+ 		return error;
+ 	}
+ 
+-	pdata->input = input;
+-	platform_set_drvdata(pdev, pdata);
+-
+ 	device_init_wakeup(&pdev->dev, pdata->wakeup);
+ 
+ 	return 0;
+diff --git a/drivers/input/touchscreen/stmfts.c b/drivers/input/touchscreen/stmfts.c
+index 704e99046916..b6f95f20f924 100644
+--- a/drivers/input/touchscreen/stmfts.c
++++ b/drivers/input/touchscreen/stmfts.c
+@@ -106,27 +106,29 @@ struct stmfts_data {
+ 	bool running;
+ };
+ 
+-static void stmfts_brightness_set(struct led_classdev *led_cdev,
++static int stmfts_brightness_set(struct led_classdev *led_cdev,
+ 					enum led_brightness value)
+ {
+ 	struct stmfts_data *sdata = container_of(led_cdev,
+ 					struct stmfts_data, led_cdev);
+ 	int err;
+ 
+-	if (value == sdata->led_status || !sdata->ledvdd)
+-		return;
+-
+-	if (!value) {
+-		regulator_disable(sdata->ledvdd);
+-	} else {
+-		err = regulator_enable(sdata->ledvdd);
+-		if (err)
+-			dev_warn(&sdata->client->dev,
+-				 "failed to disable ledvdd regulator: %d\n",
+-				 err);
++	if (value != sdata->led_status && sdata->ledvdd) {
++		if (!value) {
++			regulator_disable(sdata->ledvdd);
++		} else {
++			err = regulator_enable(sdata->ledvdd);
++			if (err) {
++				dev_warn(&sdata->client->dev,
++					 "failed to disable ledvdd regulator: %d\n",
++					 err);
++				return err;
++			}
++		}
++		sdata->led_status = value;
+ 	}
+ 
+-	sdata->led_status = value;
++	return 0;
+ }
+ 
+ static enum led_brightness stmfts_brightness_get(struct led_classdev *led_cdev)
+@@ -608,7 +610,7 @@ static int stmfts_enable_led(struct stmfts_data *sdata)
+ 	sdata->led_cdev.name = STMFTS_DEV_NAME;
+ 	sdata->led_cdev.max_brightness = LED_ON;
+ 	sdata->led_cdev.brightness = LED_OFF;
+-	sdata->led_cdev.brightness_set = stmfts_brightness_set;
++	sdata->led_cdev.brightness_set_blocking = stmfts_brightness_set;
+ 	sdata->led_cdev.brightness_get = stmfts_brightness_get;
+ 
+ 	err = devm_led_classdev_register(&sdata->client->dev, &sdata->led_cdev);
+diff --git a/drivers/media/i2c/ov7670.c b/drivers/media/i2c/ov7670.c
+index a70a6ff7b36e..4939a83b50e4 100644
+--- a/drivers/media/i2c/ov7670.c
++++ b/drivers/media/i2c/ov7670.c
+@@ -160,10 +160,10 @@ MODULE_PARM_DESC(debug, "Debug level (0-1)");
+ #define REG_GFIX	0x69	/* Fix gain control */
+ 
+ #define REG_DBLV	0x6b	/* PLL control an debugging */
+-#define   DBLV_BYPASS	  0x00	  /* Bypass PLL */
+-#define   DBLV_X4	  0x01	  /* clock x4 */
+-#define   DBLV_X6	  0x10	  /* clock x6 */
+-#define   DBLV_X8	  0x11	  /* clock x8 */
++#define   DBLV_BYPASS	  0x0a	  /* Bypass PLL */
++#define   DBLV_X4	  0x4a	  /* clock x4 */
++#define   DBLV_X6	  0x8a	  /* clock x6 */
++#define   DBLV_X8	  0xca	  /* clock x8 */
+ 
+ #define REG_SCALING_XSC	0x70	/* Test pattern and horizontal scale factor */
+ #define   TEST_PATTTERN_0 0x80
+@@ -863,7 +863,7 @@ static int ov7675_set_framerate(struct v4l2_subdev *sd,
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	return ov7670_write(sd, REG_DBLV, DBLV_X4);
++	return 0;
+ }
+ 
+ static void ov7670_get_framerate_legacy(struct v4l2_subdev *sd,
+@@ -1801,11 +1801,7 @@ static int ov7670_probe(struct i2c_client *client,
+ 		if (config->clock_speed)
+ 			info->clock_speed = config->clock_speed;
+ 
+-		/*
+-		 * It should be allowed for ov7670 too when it is migrated to
+-		 * the new frame rate formula.
+-		 */
+-		if (config->pll_bypass && id->driver_data != MODEL_OV7670)
++		if (config->pll_bypass)
+ 			info->pll_bypass = true;
+ 
+ 		if (config->pclk_hb_disable)
+diff --git a/drivers/mfd/twl-core.c b/drivers/mfd/twl-core.c
+index 299016bc46d9..104477b512a2 100644
+--- a/drivers/mfd/twl-core.c
++++ b/drivers/mfd/twl-core.c
+@@ -1245,6 +1245,28 @@ free:
+ 	return status;
+ }
+ 
++static int __maybe_unused twl_suspend(struct device *dev)
++{
++	struct i2c_client *client = to_i2c_client(dev);
++
++	if (client->irq)
++		disable_irq(client->irq);
++
++	return 0;
++}
++
++static int __maybe_unused twl_resume(struct device *dev)
++{
++	struct i2c_client *client = to_i2c_client(dev);
++
++	if (client->irq)
++		enable_irq(client->irq);
++
++	return 0;
++}
++
++static SIMPLE_DEV_PM_OPS(twl_dev_pm_ops, twl_suspend, twl_resume);
++
+ static const struct i2c_device_id twl_ids[] = {
+ 	{ "twl4030", TWL4030_VAUX2 },	/* "Triton 2" */
+ 	{ "twl5030", 0 },		/* T2 updated */
+@@ -1262,6 +1284,7 @@ static const struct i2c_device_id twl_ids[] = {
+ /* One Client Driver , 4 Clients */
+ static struct i2c_driver twl_driver = {
+ 	.driver.name	= DRIVER_NAME,
++	.driver.pm	= &twl_dev_pm_ops,
+ 	.id_table	= twl_ids,
+ 	.probe		= twl_probe,
+ 	.remove		= twl_remove,
+diff --git a/drivers/mtd/nand/raw/marvell_nand.c b/drivers/mtd/nand/raw/marvell_nand.c
+index 84283c6bb0ff..66a161b8f745 100644
+--- a/drivers/mtd/nand/raw/marvell_nand.c
++++ b/drivers/mtd/nand/raw/marvell_nand.c
+@@ -722,12 +722,6 @@ static void marvell_nfc_select_target(struct nand_chip *chip,
+ 	struct marvell_nfc *nfc = to_marvell_nfc(chip->controller);
+ 	u32 ndcr_generic;
+ 
+-	if (chip == nfc->selected_chip && die_nr == marvell_nand->selected_die)
+-		return;
+-
+-	writel_relaxed(marvell_nand->ndtr0, nfc->regs + NDTR0);
+-	writel_relaxed(marvell_nand->ndtr1, nfc->regs + NDTR1);
+-
+ 	/*
+ 	 * Reset the NDCR register to a clean state for this particular chip,
+ 	 * also clear ND_RUN bit.
+@@ -739,6 +733,12 @@ static void marvell_nfc_select_target(struct nand_chip *chip,
+ 	/* Also reset the interrupt status register */
+ 	marvell_nfc_clear_int(nfc, NDCR_ALL_INT);
+ 
++	if (chip == nfc->selected_chip && die_nr == marvell_nand->selected_die)
++		return;
++
++	writel_relaxed(marvell_nand->ndtr0, nfc->regs + NDTR0);
++	writel_relaxed(marvell_nand->ndtr1, nfc->regs + NDTR1);
++
+ 	nfc->selected_chip = chip;
+ 	marvell_nand->selected_die = die_nr;
+ }
+diff --git a/drivers/net/bonding/bond_sysfs_slave.c b/drivers/net/bonding/bond_sysfs_slave.c
+index 2f120b2ffef0..4985268e2273 100644
+--- a/drivers/net/bonding/bond_sysfs_slave.c
++++ b/drivers/net/bonding/bond_sysfs_slave.c
+@@ -55,7 +55,9 @@ static SLAVE_ATTR_RO(link_failure_count);
+ 
+ static ssize_t perm_hwaddr_show(struct slave *slave, char *buf)
+ {
+-	return sprintf(buf, "%pM\n", slave->perm_hwaddr);
++	return sprintf(buf, "%*phC\n",
++		       slave->dev->addr_len,
++		       slave->perm_hwaddr);
+ }
+ static SLAVE_ATTR_RO(perm_hwaddr);
+ 
+diff --git a/drivers/net/ethernet/chelsio/libcxgb/libcxgb_ppm.c b/drivers/net/ethernet/chelsio/libcxgb/libcxgb_ppm.c
+index 74849be5f004..e2919005ead3 100644
+--- a/drivers/net/ethernet/chelsio/libcxgb/libcxgb_ppm.c
++++ b/drivers/net/ethernet/chelsio/libcxgb/libcxgb_ppm.c
+@@ -354,7 +354,10 @@ static struct cxgbi_ppm_pool *ppm_alloc_cpu_pool(unsigned int *total,
+ 		ppmax = max;
+ 
+ 	/* pool size must be multiple of unsigned long */
+-	bmap = BITS_TO_LONGS(ppmax);
++	bmap = ppmax / BITS_PER_TYPE(unsigned long);
++	if (!bmap)
++		return NULL;
++
+ 	ppmax = (bmap * sizeof(unsigned long)) << 3;
+ 
+ 	alloc_sz = sizeof(*pools) + sizeof(unsigned long) * bmap;
+@@ -402,6 +405,10 @@ int cxgbi_ppm_init(void **ppm_pp, struct net_device *ndev,
+ 	if (reserve_factor) {
+ 		ppmax_pool = ppmax / reserve_factor;
+ 		pool = ppm_alloc_cpu_pool(&ppmax_pool, &pool_index_max);
++		if (!pool) {
++			ppmax_pool = 0;
++			reserve_factor = 0;
++		}
+ 
+ 		pr_debug("%s: ppmax %u, cpu total %u, per cpu %u.\n",
+ 			 ndev->name, ppmax, ppmax_pool, pool_index_max);
+diff --git a/drivers/net/ethernet/hisilicon/hns/hnae.c b/drivers/net/ethernet/hisilicon/hns/hnae.c
+index 79d03f8ee7b1..c7fa97a7e1f4 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hnae.c
++++ b/drivers/net/ethernet/hisilicon/hns/hnae.c
+@@ -150,7 +150,6 @@ out_buffer_fail:
+ /* free desc along with its attached buffer */
+ static void hnae_free_desc(struct hnae_ring *ring)
+ {
+-	hnae_free_buffers(ring);
+ 	dma_unmap_single(ring_to_dev(ring), ring->desc_dma_addr,
+ 			 ring->desc_num * sizeof(ring->desc[0]),
+ 			 ring_to_dma_dir(ring));
+@@ -183,6 +182,9 @@ static int hnae_alloc_desc(struct hnae_ring *ring)
+ /* fini ring, also free the buffer for the ring */
+ static void hnae_fini_ring(struct hnae_ring *ring)
+ {
++	if (is_rx_ring(ring))
++		hnae_free_buffers(ring);
++
+ 	hnae_free_desc(ring);
+ 	kfree(ring->desc_cb);
+ 	ring->desc_cb = NULL;
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c
+index ac55db065f16..f5ff07cb2b72 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c
+@@ -2750,6 +2750,17 @@ int hns_dsaf_get_regs_count(void)
+ 	return DSAF_DUMP_REGS_NUM;
+ }
+ 
++static int hns_dsaf_get_port_id(u8 port)
++{
++	if (port < DSAF_SERVICE_NW_NUM)
++		return port;
++
++	if (port >= DSAF_BASE_INNER_PORT_NUM)
++		return port - DSAF_BASE_INNER_PORT_NUM + DSAF_SERVICE_NW_NUM;
++
++	return -EINVAL;
++}
++
+ static void set_promisc_tcam_enable(struct dsaf_device *dsaf_dev, u32 port)
+ {
+ 	struct dsaf_tbl_tcam_ucast_cfg tbl_tcam_ucast = {0, 1, 0, 0, 0x80};
+@@ -2815,23 +2826,33 @@ static void set_promisc_tcam_enable(struct dsaf_device *dsaf_dev, u32 port)
+ 	memset(&temp_key, 0x0, sizeof(temp_key));
+ 	mask_entry.addr[0] = 0x01;
+ 	hns_dsaf_set_mac_key(dsaf_dev, &mask_key, mask_entry.in_vlan_id,
+-			     port, mask_entry.addr);
++			     0xf, mask_entry.addr);
+ 	tbl_tcam_mcast.tbl_mcast_item_vld = 1;
+ 	tbl_tcam_mcast.tbl_mcast_old_en = 0;
+ 
+-	if (port < DSAF_SERVICE_NW_NUM) {
+-		mskid = port;
+-	} else if (port >= DSAF_BASE_INNER_PORT_NUM) {
+-		mskid = port - DSAF_BASE_INNER_PORT_NUM + DSAF_SERVICE_NW_NUM;
+-	} else {
++	/* set MAC port to handle multicast */
++	mskid = hns_dsaf_get_port_id(port);
++	if (mskid == -EINVAL) {
+ 		dev_err(dsaf_dev->dev, "%s,pnum(%d)error,key(%#x:%#x)\n",
+ 			dsaf_dev->ae_dev.name, port,
+ 			mask_key.high.val, mask_key.low.val);
+ 		return;
+ 	}
++	dsaf_set_bit(tbl_tcam_mcast.tbl_mcast_port_msk[mskid / 32],
++		     mskid % 32, 1);
+ 
++	/* set pool bit map to handle multicast */
++	mskid = hns_dsaf_get_port_id(port_num);
++	if (mskid == -EINVAL) {
++		dev_err(dsaf_dev->dev,
++			"%s, pool bit map pnum(%d)error,key(%#x:%#x)\n",
++			dsaf_dev->ae_dev.name, port_num,
++			mask_key.high.val, mask_key.low.val);
++		return;
++	}
+ 	dsaf_set_bit(tbl_tcam_mcast.tbl_mcast_port_msk[mskid / 32],
+ 		     mskid % 32, 1);
++
+ 	memcpy(&temp_key, &mask_key, sizeof(mask_key));
+ 	hns_dsaf_tcam_mc_cfg_vague(dsaf_dev, entry_index, &tbl_tcam_data_mc,
+ 				   (struct dsaf_tbl_tcam_data *)(&mask_key),
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_xgmac.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_xgmac.c
+index ba4316910dea..a60f207768fc 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_xgmac.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_xgmac.c
+@@ -129,7 +129,7 @@ static void hns_xgmac_lf_rf_control_init(struct mac_driver *mac_drv)
+ 	dsaf_set_bit(val, XGMAC_UNIDIR_EN_B, 0);
+ 	dsaf_set_bit(val, XGMAC_RF_TX_EN_B, 1);
+ 	dsaf_set_field(val, XGMAC_LF_RF_INSERT_M, XGMAC_LF_RF_INSERT_S, 0);
+-	dsaf_write_reg(mac_drv, XGMAC_MAC_TX_LF_RF_CONTROL_REG, val);
++	dsaf_write_dev(mac_drv, XGMAC_MAC_TX_LF_RF_CONTROL_REG, val);
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_enet.c b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+index 60e7d7ae3787..4cd86ba1f050 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+@@ -29,9 +29,6 @@
+ 
+ #define SERVICE_TIMER_HZ (1 * HZ)
+ 
+-#define NIC_TX_CLEAN_MAX_NUM 256
+-#define NIC_RX_CLEAN_MAX_NUM 64
+-
+ #define RCB_IRQ_NOT_INITED 0
+ #define RCB_IRQ_INITED 1
+ #define HNS_BUFFER_SIZE_2048 2048
+@@ -376,8 +373,6 @@ netdev_tx_t hns_nic_net_xmit_hw(struct net_device *ndev,
+ 	wmb(); /* commit all data before submit */
+ 	assert(skb->queue_mapping < priv->ae_handle->q_num);
+ 	hnae_queue_xmit(priv->ae_handle->qs[skb->queue_mapping], buf_num);
+-	ring->stats.tx_pkts++;
+-	ring->stats.tx_bytes += skb->len;
+ 
+ 	return NETDEV_TX_OK;
+ 
+@@ -999,6 +994,9 @@ static int hns_nic_tx_poll_one(struct hns_nic_ring_data *ring_data,
+ 		/* issue prefetch for next Tx descriptor */
+ 		prefetch(&ring->desc_cb[ring->next_to_clean]);
+ 	}
++	/* update tx ring statistics. */
++	ring->stats.tx_pkts += pkts;
++	ring->stats.tx_bytes += bytes;
+ 
+ 	NETIF_TX_UNLOCK(ring);
+ 
+@@ -2152,7 +2150,7 @@ static int hns_nic_init_ring_data(struct hns_nic_priv *priv)
+ 			hns_nic_tx_fini_pro_v2;
+ 
+ 		netif_napi_add(priv->netdev, &rd->napi,
+-			       hns_nic_common_poll, NIC_TX_CLEAN_MAX_NUM);
++			       hns_nic_common_poll, NAPI_POLL_WEIGHT);
+ 		rd->ring->irq_init_flag = RCB_IRQ_NOT_INITED;
+ 	}
+ 	for (i = h->q_num; i < h->q_num * 2; i++) {
+@@ -2165,7 +2163,7 @@ static int hns_nic_init_ring_data(struct hns_nic_priv *priv)
+ 			hns_nic_rx_fini_pro_v2;
+ 
+ 		netif_napi_add(priv->netdev, &rd->napi,
+-			       hns_nic_common_poll, NIC_RX_CLEAN_MAX_NUM);
++			       hns_nic_common_poll, NAPI_POLL_WEIGHT);
+ 		rd->ring->irq_init_flag = RCB_IRQ_NOT_INITED;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/Makefile b/drivers/net/ethernet/hisilicon/hns3/hns3pf/Makefile
+index fffe8c1c45d3..0fb61d440d3b 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/Makefile
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/Makefile
+@@ -3,7 +3,7 @@
+ # Makefile for the HISILICON network device drivers.
+ #
+ 
+-ccflags-y := -Idrivers/net/ethernet/hisilicon/hns3
++ccflags-y := -I $(srctree)/drivers/net/ethernet/hisilicon/hns3
+ 
+ obj-$(CONFIG_HNS3_HCLGE) += hclge.o
+ hclge-objs = hclge_main.o hclge_cmd.o hclge_mdio.o hclge_tm.o hclge_mbx.o hclge_err.o  hclge_debugfs.o
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/Makefile b/drivers/net/ethernet/hisilicon/hns3/hns3vf/Makefile
+index fb93bbd35845..6193f8fa7cf3 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/Makefile
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/Makefile
+@@ -3,7 +3,7 @@
+ # Makefile for the HISILICON network device drivers.
+ #
+ 
+-ccflags-y := -Idrivers/net/ethernet/hisilicon/hns3
++ccflags-y := -I $(srctree)/drivers/net/ethernet/hisilicon/hns3
+ 
+ obj-$(CONFIG_HNS3_HCLGEVF) += hclgevf.o
+ hclgevf-objs = hclgevf_main.o hclgevf_cmd.o hclgevf_mbx.o
+\ No newline at end of file
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+index a6bc7847346b..5d544e661445 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+@@ -2378,8 +2378,7 @@ static int i40e_set_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
+ 		return -EOPNOTSUPP;
+ 
+ 	/* only magic packet is supported */
+-	if (wol->wolopts && (wol->wolopts != WAKE_MAGIC)
+-			  | (wol->wolopts != WAKE_FILTER))
++	if (wol->wolopts & ~WAKE_MAGIC)
+ 		return -EOPNOTSUPP;
+ 
+ 	/* is this a new value? */
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_ptp.c b/drivers/net/ethernet/intel/i40e/i40e_ptp.c
+index 5fb4353c742b..31575c0bb884 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_ptp.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_ptp.c
+@@ -146,12 +146,13 @@ static int i40e_ptp_adjfreq(struct ptp_clock_info *ptp, s32 ppb)
+ static int i40e_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta)
+ {
+ 	struct i40e_pf *pf = container_of(ptp, struct i40e_pf, ptp_caps);
+-	struct timespec64 now;
++	struct timespec64 now, then;
+ 
++	then = ns_to_timespec64(delta);
+ 	mutex_lock(&pf->tmreg_lock);
+ 
+ 	i40e_ptp_read(pf, &now, NULL);
+-	timespec64_add_ns(&now, delta);
++	now = timespec64_add(now, then);
+ 	i40e_ptp_write(pf, (const struct timespec64 *)&now);
+ 
+ 	mutex_unlock(&pf->tmreg_lock);
+diff --git a/drivers/net/ethernet/intel/igb/e1000_defines.h b/drivers/net/ethernet/intel/igb/e1000_defines.h
+index 01fcfc6f3415..d2e2c50ce257 100644
+--- a/drivers/net/ethernet/intel/igb/e1000_defines.h
++++ b/drivers/net/ethernet/intel/igb/e1000_defines.h
+@@ -194,6 +194,8 @@
+ /* enable link status from external LINK_0 and LINK_1 pins */
+ #define E1000_CTRL_SWDPIN0  0x00040000  /* SWDPIN 0 value */
+ #define E1000_CTRL_SWDPIN1  0x00080000  /* SWDPIN 1 value */
++#define E1000_CTRL_ADVD3WUC 0x00100000  /* D3 WUC */
++#define E1000_CTRL_EN_PHY_PWR_MGMT 0x00200000 /* PHY PM enable */
+ #define E1000_CTRL_SDP0_DIR 0x00400000  /* SDP0 Data direction */
+ #define E1000_CTRL_SDP1_DIR 0x00800000  /* SDP1 Data direction */
+ #define E1000_CTRL_RST      0x04000000  /* Global reset */
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index 7137e7f9c7f3..21ccadb720d1 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -8755,9 +8755,7 @@ static int __igb_shutdown(struct pci_dev *pdev, bool *enable_wake,
+ 	struct e1000_hw *hw = &adapter->hw;
+ 	u32 ctrl, rctl, status;
+ 	u32 wufc = runtime ? E1000_WUFC_LNKC : adapter->wol;
+-#ifdef CONFIG_PM
+-	int retval = 0;
+-#endif
++	bool wake;
+ 
+ 	rtnl_lock();
+ 	netif_device_detach(netdev);
+@@ -8770,14 +8768,6 @@ static int __igb_shutdown(struct pci_dev *pdev, bool *enable_wake,
+ 	igb_clear_interrupt_scheme(adapter);
+ 	rtnl_unlock();
+ 
+-#ifdef CONFIG_PM
+-	if (!runtime) {
+-		retval = pci_save_state(pdev);
+-		if (retval)
+-			return retval;
+-	}
+-#endif
+-
+ 	status = rd32(E1000_STATUS);
+ 	if (status & E1000_STATUS_LU)
+ 		wufc &= ~E1000_WUFC_LNKC;
+@@ -8794,10 +8784,6 @@ static int __igb_shutdown(struct pci_dev *pdev, bool *enable_wake,
+ 		}
+ 
+ 		ctrl = rd32(E1000_CTRL);
+-		/* advertise wake from D3Cold */
+-		#define E1000_CTRL_ADVD3WUC 0x00100000
+-		/* phy power management enable */
+-		#define E1000_CTRL_EN_PHY_PWR_MGMT 0x00200000
+ 		ctrl |= E1000_CTRL_ADVD3WUC;
+ 		wr32(E1000_CTRL, ctrl);
+ 
+@@ -8811,12 +8797,15 @@ static int __igb_shutdown(struct pci_dev *pdev, bool *enable_wake,
+ 		wr32(E1000_WUFC, 0);
+ 	}
+ 
+-	*enable_wake = wufc || adapter->en_mng_pt;
+-	if (!*enable_wake)
++	wake = wufc || adapter->en_mng_pt;
++	if (!wake)
+ 		igb_power_down_link(adapter);
+ 	else
+ 		igb_power_up_link(adapter);
+ 
++	if (enable_wake)
++		*enable_wake = wake;
++
+ 	/* Release control of h/w to f/w.  If f/w is AMT enabled, this
+ 	 * would have already happened in close and is redundant.
+ 	 */
+@@ -8859,22 +8848,7 @@ static void igb_deliver_wake_packet(struct net_device *netdev)
+ 
+ static int __maybe_unused igb_suspend(struct device *dev)
+ {
+-	int retval;
+-	bool wake;
+-	struct pci_dev *pdev = to_pci_dev(dev);
+-
+-	retval = __igb_shutdown(pdev, &wake, 0);
+-	if (retval)
+-		return retval;
+-
+-	if (wake) {
+-		pci_prepare_to_sleep(pdev);
+-	} else {
+-		pci_wake_from_d3(pdev, false);
+-		pci_set_power_state(pdev, PCI_D3hot);
+-	}
+-
+-	return 0;
++	return __igb_shutdown(to_pci_dev(dev), NULL, 0);
+ }
+ 
+ static int __maybe_unused igb_resume(struct device *dev)
+@@ -8945,22 +8919,7 @@ static int __maybe_unused igb_runtime_idle(struct device *dev)
+ 
+ static int __maybe_unused igb_runtime_suspend(struct device *dev)
+ {
+-	struct pci_dev *pdev = to_pci_dev(dev);
+-	int retval;
+-	bool wake;
+-
+-	retval = __igb_shutdown(pdev, &wake, 1);
+-	if (retval)
+-		return retval;
+-
+-	if (wake) {
+-		pci_prepare_to_sleep(pdev);
+-	} else {
+-		pci_wake_from_d3(pdev, false);
+-		pci_set_power_state(pdev, PCI_D3hot);
+-	}
+-
+-	return 0;
++	return __igb_shutdown(to_pci_dev(dev), NULL, 1);
+ }
+ 
+ static int __maybe_unused igb_runtime_resume(struct device *dev)
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
+index cc4907f9ff02..2fb97967961c 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
+@@ -905,13 +905,12 @@ s32 ixgbe_mii_bus_init(struct ixgbe_hw *hw)
+ 	struct pci_dev *pdev = adapter->pdev;
+ 	struct device *dev = &adapter->netdev->dev;
+ 	struct mii_bus *bus;
++	int err = -ENODEV;
+ 
+-	adapter->mii_bus = devm_mdiobus_alloc(dev);
+-	if (!adapter->mii_bus)
++	bus = devm_mdiobus_alloc(dev);
++	if (!bus)
+ 		return -ENOMEM;
+ 
+-	bus = adapter->mii_bus;
+-
+ 	switch (hw->device_id) {
+ 	/* C3000 SoCs */
+ 	case IXGBE_DEV_ID_X550EM_A_KR:
+@@ -949,12 +948,15 @@ s32 ixgbe_mii_bus_init(struct ixgbe_hw *hw)
+ 	 */
+ 	hw->phy.mdio.mode_support = MDIO_SUPPORTS_C45 | MDIO_SUPPORTS_C22;
+ 
+-	return mdiobus_register(bus);
++	err = mdiobus_register(bus);
++	if (!err) {
++		adapter->mii_bus = bus;
++		return 0;
++	}
+ 
+ ixgbe_no_mii_bus:
+ 	devm_mdiobus_free(dev, bus);
+-	adapter->mii_bus = NULL;
+-	return -ENODEV;
++	return err;
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+index 13c48883ed61..619f96940b65 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+@@ -81,8 +81,7 @@ static int arm_vport_context_events_cmd(struct mlx5_core_dev *dev, u16 vport,
+ 		 opcode, MLX5_CMD_OP_MODIFY_NIC_VPORT_CONTEXT);
+ 	MLX5_SET(modify_nic_vport_context_in, in, field_select.change_event, 1);
+ 	MLX5_SET(modify_nic_vport_context_in, in, vport_number, vport);
+-	if (vport)
+-		MLX5_SET(modify_nic_vport_context_in, in, other_vport, 1);
++	MLX5_SET(modify_nic_vport_context_in, in, other_vport, 1);
+ 	nic_vport_ctx = MLX5_ADDR_OF(modify_nic_vport_context_in,
+ 				     in, nic_vport_context);
+ 
+@@ -110,8 +109,7 @@ static int modify_esw_vport_context_cmd(struct mlx5_core_dev *dev, u16 vport,
+ 	MLX5_SET(modify_esw_vport_context_in, in, opcode,
+ 		 MLX5_CMD_OP_MODIFY_ESW_VPORT_CONTEXT);
+ 	MLX5_SET(modify_esw_vport_context_in, in, vport_number, vport);
+-	if (vport)
+-		MLX5_SET(modify_esw_vport_context_in, in, other_vport, 1);
++	MLX5_SET(modify_esw_vport_context_in, in, other_vport, 1);
+ 	return mlx5_cmd_exec(dev, in, inlen, out, sizeof(out));
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index d4e6fe5b9300..ce5766a26baa 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -1402,6 +1402,7 @@ int esw_offloads_init(struct mlx5_eswitch *esw, int nvports)
+ {
+ 	int err;
+ 
++	memset(&esw->fdb_table.offloads, 0, sizeof(struct offloads_fdb));
+ 	mutex_init(&esw->fdb_table.offloads.fdb_prio_lock);
+ 
+ 	err = esw_create_offloads_fdb_tables(esw, nvports);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/descs_com.h b/drivers/net/ethernet/stmicro/stmmac/descs_com.h
+index 40d6356a7e73..3dfb07a78952 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/descs_com.h
++++ b/drivers/net/ethernet/stmicro/stmmac/descs_com.h
+@@ -29,11 +29,13 @@
+ /* Specific functions used for Ring mode */
+ 
+ /* Enhanced descriptors */
+-static inline void ehn_desc_rx_set_on_ring(struct dma_desc *p, int end)
++static inline void ehn_desc_rx_set_on_ring(struct dma_desc *p, int end,
++					   int bfsize)
+ {
+-	p->des1 |= cpu_to_le32((BUF_SIZE_8KiB
+-			<< ERDES1_BUFFER2_SIZE_SHIFT)
+-		   & ERDES1_BUFFER2_SIZE_MASK);
++	if (bfsize == BUF_SIZE_16KiB)
++		p->des1 |= cpu_to_le32((BUF_SIZE_8KiB
++				<< ERDES1_BUFFER2_SIZE_SHIFT)
++			   & ERDES1_BUFFER2_SIZE_MASK);
+ 
+ 	if (end)
+ 		p->des1 |= cpu_to_le32(ERDES1_END_RING);
+@@ -59,11 +61,15 @@ static inline void enh_set_tx_desc_len_on_ring(struct dma_desc *p, int len)
+ }
+ 
+ /* Normal descriptors */
+-static inline void ndesc_rx_set_on_ring(struct dma_desc *p, int end)
++static inline void ndesc_rx_set_on_ring(struct dma_desc *p, int end, int bfsize)
+ {
+-	p->des1 |= cpu_to_le32(((BUF_SIZE_2KiB - 1)
+-				<< RDES1_BUFFER2_SIZE_SHIFT)
+-		    & RDES1_BUFFER2_SIZE_MASK);
++	if (bfsize >= BUF_SIZE_2KiB) {
++		int bfsize2;
++
++		bfsize2 = min(bfsize - BUF_SIZE_2KiB + 1, BUF_SIZE_2KiB - 1);
++		p->des1 |= cpu_to_le32((bfsize2 << RDES1_BUFFER2_SIZE_SHIFT)
++			    & RDES1_BUFFER2_SIZE_MASK);
++	}
+ 
+ 	if (end)
+ 		p->des1 |= cpu_to_le32(RDES1_END_RING);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.c
+index 736e29635b77..313a58b68fee 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.c
+@@ -296,7 +296,7 @@ exit:
+ }
+ 
+ static void dwmac4_rd_init_rx_desc(struct dma_desc *p, int disable_rx_ic,
+-				   int mode, int end)
++				   int mode, int end, int bfsize)
+ {
+ 	dwmac4_set_rx_owner(p, disable_rx_ic);
+ }
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_descs.c b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_descs.c
+index 1d858fdec997..98fa471da7c0 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_descs.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_descs.c
+@@ -123,7 +123,7 @@ static int dwxgmac2_get_rx_timestamp_status(void *desc, void *next_desc,
+ }
+ 
+ static void dwxgmac2_init_rx_desc(struct dma_desc *p, int disable_rx_ic,
+-				  int mode, int end)
++				  int mode, int end, int bfsize)
+ {
+ 	dwxgmac2_set_rx_owner(p, disable_rx_ic);
+ }
+diff --git a/drivers/net/ethernet/stmicro/stmmac/enh_desc.c b/drivers/net/ethernet/stmicro/stmmac/enh_desc.c
+index 5ef91a790f9d..5202d6ad7919 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/enh_desc.c
++++ b/drivers/net/ethernet/stmicro/stmmac/enh_desc.c
+@@ -201,6 +201,11 @@ static int enh_desc_get_rx_status(void *data, struct stmmac_extra_stats *x,
+ 	if (unlikely(rdes0 & RDES0_OWN))
+ 		return dma_own;
+ 
++	if (unlikely(!(rdes0 & RDES0_LAST_DESCRIPTOR))) {
++		stats->rx_length_errors++;
++		return discard_frame;
++	}
++
+ 	if (unlikely(rdes0 & RDES0_ERROR_SUMMARY)) {
+ 		if (unlikely(rdes0 & RDES0_DESCRIPTOR_ERROR)) {
+ 			x->rx_desc++;
+@@ -231,9 +236,10 @@ static int enh_desc_get_rx_status(void *data, struct stmmac_extra_stats *x,
+ 	 * It doesn't match with the information reported into the databook.
+ 	 * At any rate, we need to understand if the CSUM hw computation is ok
+ 	 * and report this info to the upper layers. */
+-	ret = enh_desc_coe_rdes0(!!(rdes0 & RDES0_IPC_CSUM_ERROR),
+-				 !!(rdes0 & RDES0_FRAME_TYPE),
+-				 !!(rdes0 & ERDES0_RX_MAC_ADDR));
++	if (likely(ret == good_frame))
++		ret = enh_desc_coe_rdes0(!!(rdes0 & RDES0_IPC_CSUM_ERROR),
++					 !!(rdes0 & RDES0_FRAME_TYPE),
++					 !!(rdes0 & ERDES0_RX_MAC_ADDR));
+ 
+ 	if (unlikely(rdes0 & RDES0_DRIBBLING))
+ 		x->dribbling_bit++;
+@@ -259,15 +265,19 @@ static int enh_desc_get_rx_status(void *data, struct stmmac_extra_stats *x,
+ }
+ 
+ static void enh_desc_init_rx_desc(struct dma_desc *p, int disable_rx_ic,
+-				  int mode, int end)
++				  int mode, int end, int bfsize)
+ {
++	int bfsize1;
++
+ 	p->des0 |= cpu_to_le32(RDES0_OWN);
+-	p->des1 |= cpu_to_le32(BUF_SIZE_8KiB & ERDES1_BUFFER1_SIZE_MASK);
++
++	bfsize1 = min(bfsize, BUF_SIZE_8KiB);
++	p->des1 |= cpu_to_le32(bfsize1 & ERDES1_BUFFER1_SIZE_MASK);
+ 
+ 	if (mode == STMMAC_CHAIN_MODE)
+ 		ehn_desc_rx_set_on_chain(p);
+ 	else
+-		ehn_desc_rx_set_on_ring(p, end);
++		ehn_desc_rx_set_on_ring(p, end, bfsize);
+ 
+ 	if (disable_rx_ic)
+ 		p->des1 |= cpu_to_le32(ERDES1_DISABLE_IC);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/hwif.h b/drivers/net/ethernet/stmicro/stmmac/hwif.h
+index 92b8944f26e3..5bb00234d961 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/hwif.h
++++ b/drivers/net/ethernet/stmicro/stmmac/hwif.h
+@@ -33,7 +33,7 @@ struct dma_extended_desc;
+ struct stmmac_desc_ops {
+ 	/* DMA RX descriptor ring initialization */
+ 	void (*init_rx_desc)(struct dma_desc *p, int disable_rx_ic, int mode,
+-			int end);
++			int end, int bfsize);
+ 	/* DMA TX descriptor ring initialization */
+ 	void (*init_tx_desc)(struct dma_desc *p, int mode, int end);
+ 	/* Invoked by the xmit function to prepare the tx descriptor */
+diff --git a/drivers/net/ethernet/stmicro/stmmac/norm_desc.c b/drivers/net/ethernet/stmicro/stmmac/norm_desc.c
+index de65bb29feba..b7dd4e3c760d 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/norm_desc.c
++++ b/drivers/net/ethernet/stmicro/stmmac/norm_desc.c
+@@ -91,8 +91,6 @@ static int ndesc_get_rx_status(void *data, struct stmmac_extra_stats *x,
+ 		return dma_own;
+ 
+ 	if (unlikely(!(rdes0 & RDES0_LAST_DESCRIPTOR))) {
+-		pr_warn("%s: Oversized frame spanned multiple buffers\n",
+-			__func__);
+ 		stats->rx_length_errors++;
+ 		return discard_frame;
+ 	}
+@@ -135,15 +133,19 @@ static int ndesc_get_rx_status(void *data, struct stmmac_extra_stats *x,
+ }
+ 
+ static void ndesc_init_rx_desc(struct dma_desc *p, int disable_rx_ic, int mode,
+-			       int end)
++			       int end, int bfsize)
+ {
++	int bfsize1;
++
+ 	p->des0 |= cpu_to_le32(RDES0_OWN);
+-	p->des1 |= cpu_to_le32((BUF_SIZE_2KiB - 1) & RDES1_BUFFER1_SIZE_MASK);
++
++	bfsize1 = min(bfsize, BUF_SIZE_2KiB - 1);
++	p->des1 |= cpu_to_le32(bfsize & RDES1_BUFFER1_SIZE_MASK);
+ 
+ 	if (mode == STMMAC_CHAIN_MODE)
+ 		ndesc_rx_set_on_chain(p, end);
+ 	else
+-		ndesc_rx_set_on_ring(p, end);
++		ndesc_rx_set_on_ring(p, end, bfsize);
+ 
+ 	if (disable_rx_ic)
+ 		p->des1 |= cpu_to_le32(RDES1_DISABLE_IC);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 0bc3632880b5..f0e0593e54f3 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -1114,11 +1114,13 @@ static void stmmac_clear_rx_descriptors(struct stmmac_priv *priv, u32 queue)
+ 		if (priv->extend_desc)
+ 			stmmac_init_rx_desc(priv, &rx_q->dma_erx[i].basic,
+ 					priv->use_riwt, priv->mode,
+-					(i == DMA_RX_SIZE - 1));
++					(i == DMA_RX_SIZE - 1),
++					priv->dma_buf_sz);
+ 		else
+ 			stmmac_init_rx_desc(priv, &rx_q->dma_rx[i],
+ 					priv->use_riwt, priv->mode,
+-					(i == DMA_RX_SIZE - 1));
++					(i == DMA_RX_SIZE - 1),
++					priv->dma_buf_sz);
+ }
+ 
+ /**
+@@ -3326,9 +3328,8 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
+ {
+ 	struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
+ 	struct stmmac_channel *ch = &priv->channel[queue];
+-	unsigned int entry = rx_q->cur_rx;
++	unsigned int next_entry = rx_q->cur_rx;
+ 	int coe = priv->hw->rx_csum;
+-	unsigned int next_entry;
+ 	unsigned int count = 0;
+ 	bool xmac;
+ 
+@@ -3346,10 +3347,12 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
+ 		stmmac_display_ring(priv, rx_head, DMA_RX_SIZE, true);
+ 	}
+ 	while (count < limit) {
+-		int status;
++		int entry, status;
+ 		struct dma_desc *p;
+ 		struct dma_desc *np;
+ 
++		entry = next_entry;
++
+ 		if (priv->extend_desc)
+ 			p = (struct dma_desc *)(rx_q->dma_erx + entry);
+ 		else
+@@ -3405,11 +3408,12 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
+ 			 *  ignored
+ 			 */
+ 			if (frame_len > priv->dma_buf_sz) {
+-				netdev_err(priv->dev,
+-					   "len %d larger than size (%d)\n",
+-					   frame_len, priv->dma_buf_sz);
++				if (net_ratelimit())
++					netdev_err(priv->dev,
++						   "len %d larger than size (%d)\n",
++						   frame_len, priv->dma_buf_sz);
+ 				priv->dev->stats.rx_length_errors++;
+-				break;
++				continue;
+ 			}
+ 
+ 			/* ACS is set; GMAC core strips PAD/FCS for IEEE 802.3
+@@ -3444,7 +3448,7 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
+ 						dev_warn(priv->device,
+ 							 "packet dropped\n");
+ 					priv->dev->stats.rx_dropped++;
+-					break;
++					continue;
+ 				}
+ 
+ 				dma_sync_single_for_cpu(priv->device,
+@@ -3464,11 +3468,12 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
+ 			} else {
+ 				skb = rx_q->rx_skbuff[entry];
+ 				if (unlikely(!skb)) {
+-					netdev_err(priv->dev,
+-						   "%s: Inconsistent Rx chain\n",
+-						   priv->dev->name);
++					if (net_ratelimit())
++						netdev_err(priv->dev,
++							   "%s: Inconsistent Rx chain\n",
++							   priv->dev->name);
+ 					priv->dev->stats.rx_dropped++;
+-					break;
++					continue;
+ 				}
+ 				prefetch(skb->data - NET_IP_ALIGN);
+ 				rx_q->rx_skbuff[entry] = NULL;
+@@ -3503,7 +3508,6 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
+ 			priv->dev->stats.rx_packets++;
+ 			priv->dev->stats.rx_bytes += frame_len;
+ 		}
+-		entry = next_entry;
+ 	}
+ 
+ 	stmmac_rx_refill(priv, queue);
+diff --git a/drivers/net/wireless/intel/iwlwifi/cfg/5000.c b/drivers/net/wireless/intel/iwlwifi/cfg/5000.c
+index 575a7022d045..3846064d51a5 100644
+--- a/drivers/net/wireless/intel/iwlwifi/cfg/5000.c
++++ b/drivers/net/wireless/intel/iwlwifi/cfg/5000.c
+@@ -1,7 +1,7 @@
+ /******************************************************************************
+  *
+  * Copyright(c) 2007 - 2014 Intel Corporation. All rights reserved.
+- * Copyright(c) 2018 Intel Corporation
++ * Copyright(c) 2018 - 2019 Intel Corporation
+  *
+  * This program is free software; you can redistribute it and/or modify it
+  * under the terms of version 2 of the GNU General Public License as
+@@ -136,6 +136,7 @@ const struct iwl_cfg iwl5350_agn_cfg = {
+ 	.ht_params = &iwl5000_ht_params,
+ 	.led_mode = IWL_LED_BLINK,
+ 	.internal_wimax_coex = true,
++	.csr = &iwl_csr_v1,
+ };
+ 
+ #define IWL_DEVICE_5150						\
+diff --git a/drivers/net/wireless/marvell/mwifiex/sdio.c b/drivers/net/wireless/marvell/mwifiex/sdio.c
+index d49fbd58afa7..bfbe3aa058d9 100644
+--- a/drivers/net/wireless/marvell/mwifiex/sdio.c
++++ b/drivers/net/wireless/marvell/mwifiex/sdio.c
+@@ -181,7 +181,7 @@ static int mwifiex_sdio_resume(struct device *dev)
+ 
+ 	adapter = card->adapter;
+ 
+-	if (test_bit(MWIFIEX_IS_SUSPENDED, &adapter->work_flags)) {
++	if (!test_bit(MWIFIEX_IS_SUSPENDED, &adapter->work_flags)) {
+ 		mwifiex_dbg(adapter, WARN,
+ 			    "device already resumed\n");
+ 		return 0;
+diff --git a/drivers/platform/x86/intel_pmc_core.c b/drivers/platform/x86/intel_pmc_core.c
+index a9cbe5be277b..e88e183914af 100644
+--- a/drivers/platform/x86/intel_pmc_core.c
++++ b/drivers/platform/x86/intel_pmc_core.c
+@@ -205,7 +205,7 @@ static const struct pmc_bit_map cnp_pfear_map[] = {
+ 	{"CNVI",                BIT(3)},
+ 	{"UFS0",                BIT(4)},
+ 	{"EMMC",                BIT(5)},
+-	{"Res_6",               BIT(6)},
++	{"SPF",			BIT(6)},
+ 	{"SBR6",                BIT(7)},
+ 
+ 	{"SBR7",                BIT(0)},
+@@ -802,7 +802,7 @@ static int __init pmc_core_probe(void)
+ 	 * Sunrisepoint PCH regmap can't be used. Use Cannonlake PCH regmap
+ 	 * in this case.
+ 	 */
+-	if (!pci_dev_present(pmc_pci_ids))
++	if (pmcdev->map == &spt_reg_map && !pci_dev_present(pmc_pci_ids))
+ 		pmcdev->map = &cnp_reg_map;
+ 
+ 	if (lpit_read_residency_count_address(&slp_s0_addr))
+diff --git a/drivers/platform/x86/pmc_atom.c b/drivers/platform/x86/pmc_atom.c
+index 8f018b3f3cd4..eaec2d306481 100644
+--- a/drivers/platform/x86/pmc_atom.c
++++ b/drivers/platform/x86/pmc_atom.c
+@@ -17,6 +17,7 @@
+ 
+ #include <linux/debugfs.h>
+ #include <linux/device.h>
++#include <linux/dmi.h>
+ #include <linux/init.h>
+ #include <linux/io.h>
+ #include <linux/platform_data/x86/clk-pmc-atom.h>
+@@ -391,11 +392,27 @@ static int pmc_dbgfs_register(struct pmc_dev *pmc)
+ }
+ #endif /* CONFIG_DEBUG_FS */
+ 
++/*
++ * Some systems need one or more of their pmc_plt_clks to be
++ * marked as critical.
++ */
++static const struct dmi_system_id critclk_systems[] __initconst = {
++	{
++		.ident = "MPL CEC1x",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "MPL AG"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "CEC10 Family"),
++		},
++	},
++	{ /*sentinel*/ }
++};
++
+ static int pmc_setup_clks(struct pci_dev *pdev, void __iomem *pmc_regmap,
+ 			  const struct pmc_data *pmc_data)
+ {
+ 	struct platform_device *clkdev;
+ 	struct pmc_clk_data *clk_data;
++	const struct dmi_system_id *d = dmi_first_match(critclk_systems);
+ 
+ 	clk_data = kzalloc(sizeof(*clk_data), GFP_KERNEL);
+ 	if (!clk_data)
+@@ -403,6 +420,10 @@ static int pmc_setup_clks(struct pci_dev *pdev, void __iomem *pmc_regmap,
+ 
+ 	clk_data->base = pmc_regmap; /* offset is added by client */
+ 	clk_data->clks = pmc_data->clks;
++	if (d) {
++		clk_data->critical = true;
++		pr_info("%s critclks quirk enabled\n", d->ident);
++	}
+ 
+ 	clkdev = platform_device_register_data(&pdev->dev, "clk-pmc-atom",
+ 					       PLATFORM_DEVID_NONE,
+diff --git a/drivers/reset/reset-meson-audio-arb.c b/drivers/reset/reset-meson-audio-arb.c
+index 91751617b37a..c53a2185a039 100644
+--- a/drivers/reset/reset-meson-audio-arb.c
++++ b/drivers/reset/reset-meson-audio-arb.c
+@@ -130,6 +130,7 @@ static int meson_audio_arb_probe(struct platform_device *pdev)
+ 	arb->rstc.nr_resets = ARRAY_SIZE(axg_audio_arb_reset_bits);
+ 	arb->rstc.ops = &meson_audio_arb_rstc_ops;
+ 	arb->rstc.of_node = dev->of_node;
++	arb->rstc.owner = THIS_MODULE;
+ 
+ 	/*
+ 	 * Enable general :
+diff --git a/drivers/rtc/rtc-cros-ec.c b/drivers/rtc/rtc-cros-ec.c
+index e5444296075e..4d6bf9304ceb 100644
+--- a/drivers/rtc/rtc-cros-ec.c
++++ b/drivers/rtc/rtc-cros-ec.c
+@@ -298,7 +298,7 @@ static int cros_ec_rtc_suspend(struct device *dev)
+ 	struct cros_ec_rtc *cros_ec_rtc = dev_get_drvdata(&pdev->dev);
+ 
+ 	if (device_may_wakeup(dev))
+-		enable_irq_wake(cros_ec_rtc->cros_ec->irq);
++		return enable_irq_wake(cros_ec_rtc->cros_ec->irq);
+ 
+ 	return 0;
+ }
+@@ -309,7 +309,7 @@ static int cros_ec_rtc_resume(struct device *dev)
+ 	struct cros_ec_rtc *cros_ec_rtc = dev_get_drvdata(&pdev->dev);
+ 
+ 	if (device_may_wakeup(dev))
+-		disable_irq_wake(cros_ec_rtc->cros_ec->irq);
++		return disable_irq_wake(cros_ec_rtc->cros_ec->irq);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/rtc/rtc-da9063.c b/drivers/rtc/rtc-da9063.c
+index b4e054c64bad..69b54e5556c0 100644
+--- a/drivers/rtc/rtc-da9063.c
++++ b/drivers/rtc/rtc-da9063.c
+@@ -480,6 +480,13 @@ static int da9063_rtc_probe(struct platform_device *pdev)
+ 	da9063_data_to_tm(data, &rtc->alarm_time, rtc);
+ 	rtc->rtc_sync = false;
+ 
++	/*
++	 * TODO: some models have alarms on a minute boundary but still support
++	 * real hardware interrupts. Add this once the core supports it.
++	 */
++	if (config->rtc_data_start != RTC_SEC)
++		rtc->rtc_dev->uie_unsupported = 1;
++
+ 	irq_alarm = platform_get_irq_byname(pdev, "ALARM");
+ 	ret = devm_request_threaded_irq(&pdev->dev, irq_alarm, NULL,
+ 					da9063_alarm_event,
+diff --git a/drivers/rtc/rtc-sh.c b/drivers/rtc/rtc-sh.c
+index d417b203cbc5..1d3de2a3d1a4 100644
+--- a/drivers/rtc/rtc-sh.c
++++ b/drivers/rtc/rtc-sh.c
+@@ -374,7 +374,7 @@ static int sh_rtc_set_time(struct device *dev, struct rtc_time *tm)
+ static inline int sh_rtc_read_alarm_value(struct sh_rtc *rtc, int reg_off)
+ {
+ 	unsigned int byte;
+-	int value = 0xff;	/* return 0xff for ignored values */
++	int value = -1;			/* return -1 for ignored values */
+ 
+ 	byte = readb(rtc->regbase + reg_off);
+ 	if (byte & AR_ENB) {
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+index e0570fd8466e..45e52dd870c8 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+@@ -1033,8 +1033,8 @@ static void prep_ssp_v3_hw(struct hisi_hba *hisi_hba,
+ 	struct sas_ssp_task *ssp_task = &task->ssp_task;
+ 	struct scsi_cmnd *scsi_cmnd = ssp_task->cmd;
+ 	struct hisi_sas_tmf_task *tmf = slot->tmf;
+-	unsigned char prot_op = scsi_get_prot_op(scsi_cmnd);
+ 	int has_data = 0, priority = !!tmf;
++	unsigned char prot_op;
+ 	u8 *buf_cmd;
+ 	u32 dw1 = 0, dw2 = 0, len = 0;
+ 
+@@ -1049,6 +1049,7 @@ static void prep_ssp_v3_hw(struct hisi_hba *hisi_hba,
+ 		dw1 |= 2 << CMD_HDR_FRAME_TYPE_OFF;
+ 		dw1 |= DIR_NO_DATA << CMD_HDR_DIR_OFF;
+ 	} else {
++		prot_op = scsi_get_prot_op(scsi_cmnd);
+ 		dw1 |= 1 << CMD_HDR_FRAME_TYPE_OFF;
+ 		switch (scsi_cmnd->sc_data_direction) {
+ 		case DMA_TO_DEVICE:
+diff --git a/drivers/scsi/scsi_devinfo.c b/drivers/scsi/scsi_devinfo.c
+index c4cbfd07b916..a08ff3bd6310 100644
+--- a/drivers/scsi/scsi_devinfo.c
++++ b/drivers/scsi/scsi_devinfo.c
+@@ -238,6 +238,7 @@ static struct {
+ 	{"NETAPP", "Universal Xport", "*", BLIST_NO_ULD_ATTACH},
+ 	{"LSI", "Universal Xport", "*", BLIST_NO_ULD_ATTACH},
+ 	{"ENGENIO", "Universal Xport", "*", BLIST_NO_ULD_ATTACH},
++	{"LENOVO", "Universal Xport", "*", BLIST_NO_ULD_ATTACH},
+ 	{"SMSC", "USB 2 HS-CF", NULL, BLIST_SPARSELUN | BLIST_INQUIRY_36},
+ 	{"SONY", "CD-ROM CDU-8001", NULL, BLIST_BORKEN},
+ 	{"SONY", "TSL", NULL, BLIST_FORCELUN},		/* DDS3 & DDS4 autoloaders */
+diff --git a/drivers/scsi/scsi_dh.c b/drivers/scsi/scsi_dh.c
+index 5a58cbf3a75d..c14006ac98f9 100644
+--- a/drivers/scsi/scsi_dh.c
++++ b/drivers/scsi/scsi_dh.c
+@@ -75,6 +75,7 @@ static const struct scsi_dh_blist scsi_dh_blist[] = {
+ 	{"NETAPP", "INF-01-00",		"rdac", },
+ 	{"LSI", "INF-01-00",		"rdac", },
+ 	{"ENGENIO", "INF-01-00",	"rdac", },
++	{"LENOVO", "DE_Series",		"rdac", },
+ 	{NULL, NULL,			NULL },
+ };
+ 
+diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
+index 84380bae20f1..e186743033f4 100644
+--- a/drivers/scsi/storvsc_drv.c
++++ b/drivers/scsi/storvsc_drv.c
+@@ -668,13 +668,22 @@ static void  handle_multichannel_storage(struct hv_device *device, int max_chns)
+ {
+ 	struct device *dev = &device->device;
+ 	struct storvsc_device *stor_device;
+-	int num_cpus = num_online_cpus();
+ 	int num_sc;
+ 	struct storvsc_cmd_request *request;
+ 	struct vstor_packet *vstor_packet;
+ 	int ret, t;
+ 
+-	num_sc = ((max_chns > num_cpus) ? num_cpus : max_chns);
++	/*
++	 * If the number of CPUs is artificially restricted, such as
++	 * with maxcpus=1 on the kernel boot line, Hyper-V could offer
++	 * sub-channels >= the number of CPUs. These sub-channels
++	 * should not be created. The primary channel is already created
++	 * and assigned to one CPU, so check against # CPUs - 1.
++	 */
++	num_sc = min((int)(num_online_cpus() - 1), max_chns);
++	if (!num_sc)
++		return;
++
+ 	stor_device = get_out_stor_device(device);
+ 	if (!stor_device)
+ 		return;
+diff --git a/drivers/staging/iio/addac/adt7316.c b/drivers/staging/iio/addac/adt7316.c
+index 7839d869d25d..ecbb29e2153e 100644
+--- a/drivers/staging/iio/addac/adt7316.c
++++ b/drivers/staging/iio/addac/adt7316.c
+@@ -47,6 +47,8 @@
+ #define ADT7516_MSB_AIN3		0xA
+ #define ADT7516_MSB_AIN4		0xB
+ #define ADT7316_DA_DATA_BASE		0x10
++#define ADT7316_DA_10_BIT_LSB_SHIFT	6
++#define ADT7316_DA_12_BIT_LSB_SHIFT	4
+ #define ADT7316_DA_MSB_DATA_REGS	4
+ #define ADT7316_LSB_DAC_A		0x10
+ #define ADT7316_MSB_DAC_A		0x11
+@@ -632,9 +634,7 @@ static ssize_t adt7316_show_da_high_resolution(struct device *dev,
+ 	struct adt7316_chip_info *chip = iio_priv(dev_info);
+ 
+ 	if (chip->config3 & ADT7316_DA_HIGH_RESOLUTION) {
+-		if (chip->id == ID_ADT7316 || chip->id == ID_ADT7516)
+-			return sprintf(buf, "1 (12 bits)\n");
+-		if (chip->id == ID_ADT7317 || chip->id == ID_ADT7517)
++		if (chip->id != ID_ADT7318 && chip->id != ID_ADT7519)
+ 			return sprintf(buf, "1 (10 bits)\n");
+ 	}
+ 
+@@ -651,10 +651,12 @@ static ssize_t adt7316_store_da_high_resolution(struct device *dev,
+ 	u8 config3;
+ 	int ret;
+ 
++	if (chip->id == ID_ADT7318 || chip->id == ID_ADT7519)
++		return -EPERM;
++
++	config3 = chip->config3 & (~ADT7316_DA_HIGH_RESOLUTION);
+ 	if (buf[0] == '1')
+-		config3 = chip->config3 | ADT7316_DA_HIGH_RESOLUTION;
+-	else
+-		config3 = chip->config3 & (~ADT7316_DA_HIGH_RESOLUTION);
++		config3 |= ADT7316_DA_HIGH_RESOLUTION;
+ 
+ 	ret = chip->bus.write(chip->bus.client, ADT7316_CONFIG3, config3);
+ 	if (ret)
+@@ -1079,7 +1081,7 @@ static ssize_t adt7316_store_DAC_internal_Vref(struct device *dev,
+ 		ldac_config = chip->ldac_config & (~ADT7516_DAC_IN_VREF_MASK);
+ 		if (data & 0x1)
+ 			ldac_config |= ADT7516_DAC_AB_IN_VREF;
+-		else if (data & 0x2)
++		if (data & 0x2)
+ 			ldac_config |= ADT7516_DAC_CD_IN_VREF;
+ 	} else {
+ 		ret = kstrtou8(buf, 16, &data);
+@@ -1403,7 +1405,7 @@ static IIO_DEVICE_ATTR(ex_analog_temp_offset, 0644,
+ static ssize_t adt7316_show_DAC(struct adt7316_chip_info *chip,
+ 				int channel, char *buf)
+ {
+-	u16 data;
++	u16 data = 0;
+ 	u8 msb, lsb, offset;
+ 	int ret;
+ 
+@@ -1428,7 +1430,11 @@ static ssize_t adt7316_show_DAC(struct adt7316_chip_info *chip,
+ 	if (ret)
+ 		return -EIO;
+ 
+-	data = (msb << offset) + (lsb & ((1 << offset) - 1));
++	if (chip->dac_bits == 12)
++		data = lsb >> ADT7316_DA_12_BIT_LSB_SHIFT;
++	else if (chip->dac_bits == 10)
++		data = lsb >> ADT7316_DA_10_BIT_LSB_SHIFT;
++	data |= msb << offset;
+ 
+ 	return sprintf(buf, "%d\n", data);
+ }
+@@ -1436,7 +1442,7 @@ static ssize_t adt7316_show_DAC(struct adt7316_chip_info *chip,
+ static ssize_t adt7316_store_DAC(struct adt7316_chip_info *chip,
+ 				 int channel, const char *buf, size_t len)
+ {
+-	u8 msb, lsb, offset;
++	u8 msb, lsb, lsb_reg, offset;
+ 	u16 data;
+ 	int ret;
+ 
+@@ -1454,9 +1460,13 @@ static ssize_t adt7316_store_DAC(struct adt7316_chip_info *chip,
+ 		return -EINVAL;
+ 
+ 	if (chip->dac_bits > 8) {
+-		lsb = data & (1 << offset);
++		lsb = data & ((1 << offset) - 1);
++		if (chip->dac_bits == 12)
++			lsb_reg = lsb << ADT7316_DA_12_BIT_LSB_SHIFT;
++		else
++			lsb_reg = lsb << ADT7316_DA_10_BIT_LSB_SHIFT;
+ 		ret = chip->bus.write(chip->bus.client,
+-			ADT7316_DA_DATA_BASE + channel * 2, lsb);
++			ADT7316_DA_DATA_BASE + channel * 2, lsb_reg);
+ 		if (ret)
+ 			return -EIO;
+ 	}
+diff --git a/drivers/usb/core/driver.c b/drivers/usb/core/driver.c
+index 8987cec9549d..ebcadaad89d1 100644
+--- a/drivers/usb/core/driver.c
++++ b/drivers/usb/core/driver.c
+@@ -473,11 +473,6 @@ static int usb_unbind_interface(struct device *dev)
+ 		pm_runtime_disable(dev);
+ 	pm_runtime_set_suspended(dev);
+ 
+-	/* Undo any residual pm_autopm_get_interface_* calls */
+-	for (r = atomic_read(&intf->pm_usage_cnt); r > 0; --r)
+-		usb_autopm_put_interface_no_suspend(intf);
+-	atomic_set(&intf->pm_usage_cnt, 0);
+-
+ 	if (!error)
+ 		usb_autosuspend_device(udev);
+ 
+@@ -1633,7 +1628,6 @@ void usb_autopm_put_interface(struct usb_interface *intf)
+ 	int			status;
+ 
+ 	usb_mark_last_busy(udev);
+-	atomic_dec(&intf->pm_usage_cnt);
+ 	status = pm_runtime_put_sync(&intf->dev);
+ 	dev_vdbg(&intf->dev, "%s: cnt %d -> %d\n",
+ 			__func__, atomic_read(&intf->dev.power.usage_count),
+@@ -1662,7 +1656,6 @@ void usb_autopm_put_interface_async(struct usb_interface *intf)
+ 	int			status;
+ 
+ 	usb_mark_last_busy(udev);
+-	atomic_dec(&intf->pm_usage_cnt);
+ 	status = pm_runtime_put(&intf->dev);
+ 	dev_vdbg(&intf->dev, "%s: cnt %d -> %d\n",
+ 			__func__, atomic_read(&intf->dev.power.usage_count),
+@@ -1684,7 +1677,6 @@ void usb_autopm_put_interface_no_suspend(struct usb_interface *intf)
+ 	struct usb_device	*udev = interface_to_usbdev(intf);
+ 
+ 	usb_mark_last_busy(udev);
+-	atomic_dec(&intf->pm_usage_cnt);
+ 	pm_runtime_put_noidle(&intf->dev);
+ }
+ EXPORT_SYMBOL_GPL(usb_autopm_put_interface_no_suspend);
+@@ -1715,8 +1707,6 @@ int usb_autopm_get_interface(struct usb_interface *intf)
+ 	status = pm_runtime_get_sync(&intf->dev);
+ 	if (status < 0)
+ 		pm_runtime_put_sync(&intf->dev);
+-	else
+-		atomic_inc(&intf->pm_usage_cnt);
+ 	dev_vdbg(&intf->dev, "%s: cnt %d -> %d\n",
+ 			__func__, atomic_read(&intf->dev.power.usage_count),
+ 			status);
+@@ -1750,8 +1740,6 @@ int usb_autopm_get_interface_async(struct usb_interface *intf)
+ 	status = pm_runtime_get(&intf->dev);
+ 	if (status < 0 && status != -EINPROGRESS)
+ 		pm_runtime_put_noidle(&intf->dev);
+-	else
+-		atomic_inc(&intf->pm_usage_cnt);
+ 	dev_vdbg(&intf->dev, "%s: cnt %d -> %d\n",
+ 			__func__, atomic_read(&intf->dev.power.usage_count),
+ 			status);
+@@ -1775,7 +1763,6 @@ void usb_autopm_get_interface_no_resume(struct usb_interface *intf)
+ 	struct usb_device	*udev = interface_to_usbdev(intf);
+ 
+ 	usb_mark_last_busy(udev);
+-	atomic_inc(&intf->pm_usage_cnt);
+ 	pm_runtime_get_noresume(&intf->dev);
+ }
+ EXPORT_SYMBOL_GPL(usb_autopm_get_interface_no_resume);
+diff --git a/drivers/usb/core/message.c b/drivers/usb/core/message.c
+index 4f33eb632a88..4020ce8db6ce 100644
+--- a/drivers/usb/core/message.c
++++ b/drivers/usb/core/message.c
+@@ -820,9 +820,11 @@ int usb_string(struct usb_device *dev, int index, char *buf, size_t size)
+ 
+ 	if (dev->state == USB_STATE_SUSPENDED)
+ 		return -EHOSTUNREACH;
+-	if (size <= 0 || !buf || !index)
++	if (size <= 0 || !buf)
+ 		return -EINVAL;
+ 	buf[0] = 0;
++	if (index <= 0 || index >= 256)
++		return -EINVAL;
+ 	tbuf = kmalloc(256, GFP_NOIO);
+ 	if (!tbuf)
+ 		return -ENOMEM;
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 8d1dbe36db92..9f941cdb0691 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -1506,6 +1506,8 @@ static void dwc3_gadget_ep_skip_trbs(struct dwc3_ep *dep, struct dwc3_request *r
+ 		trb->ctrl &= ~DWC3_TRB_CTRL_HWO;
+ 		dwc3_ep_inc_deq(dep);
+ 	}
++
++	req->num_trbs = 0;
+ }
+ 
+ static void dwc3_gadget_ep_cleanup_cancelled_requests(struct dwc3_ep *dep)
+diff --git a/drivers/usb/gadget/udc/dummy_hcd.c b/drivers/usb/gadget/udc/dummy_hcd.c
+index baf72f95f0f1..213b52508621 100644
+--- a/drivers/usb/gadget/udc/dummy_hcd.c
++++ b/drivers/usb/gadget/udc/dummy_hcd.c
+@@ -979,8 +979,18 @@ static int dummy_udc_start(struct usb_gadget *g,
+ 	struct dummy_hcd	*dum_hcd = gadget_to_dummy_hcd(g);
+ 	struct dummy		*dum = dum_hcd->dum;
+ 
+-	if (driver->max_speed == USB_SPEED_UNKNOWN)
++	switch (g->speed) {
++	/* All the speeds we support */
++	case USB_SPEED_LOW:
++	case USB_SPEED_FULL:
++	case USB_SPEED_HIGH:
++	case USB_SPEED_SUPER:
++		break;
++	default:
++		dev_err(dummy_dev(dum_hcd), "Unsupported driver max speed %d\n",
++				driver->max_speed);
+ 		return -EINVAL;
++	}
+ 
+ 	/*
+ 	 * SLAVE side init ... the layer above hardware, which
+@@ -1784,9 +1794,10 @@ static void dummy_timer(struct timer_list *t)
+ 		/* Bus speed is 500000 bytes/ms, so use a little less */
+ 		total = 490000;
+ 		break;
+-	default:
++	default:	/* Can't happen */
+ 		dev_err(dummy_dev(dum_hcd), "bogus device speed\n");
+-		return;
++		total = 0;
++		break;
+ 	}
+ 
+ 	/* FIXME if HZ != 1000 this will probably misbehave ... */
+@@ -1828,7 +1839,7 @@ restart:
+ 
+ 		/* Used up this frame's bandwidth? */
+ 		if (total <= 0)
+-			break;
++			continue;
+ 
+ 		/* find the gadget's ep for this request (if configured) */
+ 		address = usb_pipeendpoint (urb->pipe);
+diff --git a/drivers/usb/misc/yurex.c b/drivers/usb/misc/yurex.c
+index 6d9fd5f64903..7b306aa22d25 100644
+--- a/drivers/usb/misc/yurex.c
++++ b/drivers/usb/misc/yurex.c
+@@ -314,6 +314,7 @@ static void yurex_disconnect(struct usb_interface *interface)
+ 	usb_deregister_dev(interface, &yurex_class);
+ 
+ 	/* prevent more I/O from starting */
++	usb_poison_urb(dev->urb);
+ 	mutex_lock(&dev->io_mutex);
+ 	dev->interface = NULL;
+ 	mutex_unlock(&dev->io_mutex);
+diff --git a/drivers/usb/storage/realtek_cr.c b/drivers/usb/storage/realtek_cr.c
+index 31b024441938..cc794e25a0b6 100644
+--- a/drivers/usb/storage/realtek_cr.c
++++ b/drivers/usb/storage/realtek_cr.c
+@@ -763,18 +763,16 @@ static void rts51x_suspend_timer_fn(struct timer_list *t)
+ 		break;
+ 	case RTS51X_STAT_IDLE:
+ 	case RTS51X_STAT_SS:
+-		usb_stor_dbg(us, "RTS51X_STAT_SS, intf->pm_usage_cnt:%d, power.usage:%d\n",
+-			     atomic_read(&us->pusb_intf->pm_usage_cnt),
++		usb_stor_dbg(us, "RTS51X_STAT_SS, power.usage:%d\n",
+ 			     atomic_read(&us->pusb_intf->dev.power.usage_count));
+ 
+-		if (atomic_read(&us->pusb_intf->pm_usage_cnt) > 0) {
++		if (atomic_read(&us->pusb_intf->dev.power.usage_count) > 0) {
+ 			usb_stor_dbg(us, "Ready to enter SS state\n");
+ 			rts51x_set_stat(chip, RTS51X_STAT_SS);
+ 			/* ignore mass storage interface's children */
+ 			pm_suspend_ignore_children(&us->pusb_intf->dev, true);
+ 			usb_autopm_put_interface_async(us->pusb_intf);
+-			usb_stor_dbg(us, "RTS51X_STAT_SS 01, intf->pm_usage_cnt:%d, power.usage:%d\n",
+-				     atomic_read(&us->pusb_intf->pm_usage_cnt),
++			usb_stor_dbg(us, "RTS51X_STAT_SS 01, power.usage:%d\n",
+ 				     atomic_read(&us->pusb_intf->dev.power.usage_count));
+ 		}
+ 		break;
+@@ -807,11 +805,10 @@ static void rts51x_invoke_transport(struct scsi_cmnd *srb, struct us_data *us)
+ 	int ret;
+ 
+ 	if (working_scsi(srb)) {
+-		usb_stor_dbg(us, "working scsi, intf->pm_usage_cnt:%d, power.usage:%d\n",
+-			     atomic_read(&us->pusb_intf->pm_usage_cnt),
++		usb_stor_dbg(us, "working scsi, power.usage:%d\n",
+ 			     atomic_read(&us->pusb_intf->dev.power.usage_count));
+ 
+-		if (atomic_read(&us->pusb_intf->pm_usage_cnt) <= 0) {
++		if (atomic_read(&us->pusb_intf->dev.power.usage_count) <= 0) {
+ 			ret = usb_autopm_get_interface(us->pusb_intf);
+ 			usb_stor_dbg(us, "working scsi, ret=%d\n", ret);
+ 		}
+diff --git a/drivers/usb/usbip/stub_rx.c b/drivers/usb/usbip/stub_rx.c
+index 97b09a42a10c..dbfb2f24d71e 100644
+--- a/drivers/usb/usbip/stub_rx.c
++++ b/drivers/usb/usbip/stub_rx.c
+@@ -361,16 +361,10 @@ static int get_pipe(struct stub_device *sdev, struct usbip_header *pdu)
+ 	}
+ 
+ 	if (usb_endpoint_xfer_isoc(epd)) {
+-		/* validate packet size and number of packets */
+-		unsigned int maxp, packets, bytes;
+-
+-		maxp = usb_endpoint_maxp(epd);
+-		maxp *= usb_endpoint_maxp_mult(epd);
+-		bytes = pdu->u.cmd_submit.transfer_buffer_length;
+-		packets = DIV_ROUND_UP(bytes, maxp);
+-
++		/* validate number of packets */
+ 		if (pdu->u.cmd_submit.number_of_packets < 0 ||
+-		    pdu->u.cmd_submit.number_of_packets > packets) {
++		    pdu->u.cmd_submit.number_of_packets >
++		    USBIP_MAX_ISO_PACKETS) {
+ 			dev_err(&sdev->udev->dev,
+ 				"CMD_SUBMIT: isoc invalid num packets %d\n",
+ 				pdu->u.cmd_submit.number_of_packets);
+diff --git a/drivers/usb/usbip/usbip_common.h b/drivers/usb/usbip/usbip_common.h
+index bf8afe9b5883..8be857a4fa13 100644
+--- a/drivers/usb/usbip/usbip_common.h
++++ b/drivers/usb/usbip/usbip_common.h
+@@ -121,6 +121,13 @@ extern struct device_attribute dev_attr_usbip_debug;
+ #define USBIP_DIR_OUT	0x00
+ #define USBIP_DIR_IN	0x01
+ 
++/*
++ * Arbitrary limit for the maximum number of isochronous packets in an URB,
++ * compare for example the uhci_submit_isochronous function in
++ * drivers/usb/host/uhci-q.c
++ */
++#define USBIP_MAX_ISO_PACKETS 1024
++
+ /**
+  * struct usbip_header_basic - data pertinent to every request
+  * @command: the usbip request type
+diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
+index ff60bd1ea587..eb8fc8ccffc6 100644
+--- a/drivers/vfio/pci/vfio_pci.c
++++ b/drivers/vfio/pci/vfio_pci.c
+@@ -1597,11 +1597,11 @@ static void __init vfio_pci_fill_ids(void)
+ 		rc = pci_add_dynid(&vfio_pci_driver, vendor, device,
+ 				   subvendor, subdevice, class, class_mask, 0);
+ 		if (rc)
+-			pr_warn("failed to add dynamic id [%04hx:%04hx[%04hx:%04hx]] class %#08x/%08x (%d)\n",
++			pr_warn("failed to add dynamic id [%04x:%04x[%04x:%04x]] class %#08x/%08x (%d)\n",
+ 				vendor, device, subvendor, subdevice,
+ 				class, class_mask, rc);
+ 		else
+-			pr_info("add [%04hx:%04hx[%04hx:%04hx]] class %#08x/%08x\n",
++			pr_info("add [%04x:%04x[%04x:%04x]] class %#08x/%08x\n",
+ 				vendor, device, subvendor, subdevice,
+ 				class, class_mask);
+ 	}
+diff --git a/drivers/w1/masters/ds2490.c b/drivers/w1/masters/ds2490.c
+index 0f4ecfcdb549..a9fb77585272 100644
+--- a/drivers/w1/masters/ds2490.c
++++ b/drivers/w1/masters/ds2490.c
+@@ -1016,15 +1016,15 @@ static int ds_probe(struct usb_interface *intf,
+ 	/* alternative 3, 1ms interrupt (greatly speeds search), 64 byte bulk */
+ 	alt = 3;
+ 	err = usb_set_interface(dev->udev,
+-		intf->altsetting[alt].desc.bInterfaceNumber, alt);
++		intf->cur_altsetting->desc.bInterfaceNumber, alt);
+ 	if (err) {
+ 		dev_err(&dev->udev->dev, "Failed to set alternative setting %d "
+ 			"for %d interface: err=%d.\n", alt,
+-			intf->altsetting[alt].desc.bInterfaceNumber, err);
++			intf->cur_altsetting->desc.bInterfaceNumber, err);
+ 		goto err_out_clear;
+ 	}
+ 
+-	iface_desc = &intf->altsetting[alt];
++	iface_desc = intf->cur_altsetting;
+ 	if (iface_desc->desc.bNumEndpoints != NUM_EP-1) {
+ 		pr_info("Num endpoints=%d. It is not DS9490R.\n",
+ 			iface_desc->desc.bNumEndpoints);
+diff --git a/drivers/xen/xenbus/xenbus_dev_frontend.c b/drivers/xen/xenbus/xenbus_dev_frontend.c
+index c3e201025ef0..0782ff3c2273 100644
+--- a/drivers/xen/xenbus/xenbus_dev_frontend.c
++++ b/drivers/xen/xenbus/xenbus_dev_frontend.c
+@@ -622,9 +622,7 @@ static int xenbus_file_open(struct inode *inode, struct file *filp)
+ 	if (xen_store_evtchn == 0)
+ 		return -ENOENT;
+ 
+-	nonseekable_open(inode, filp);
+-
+-	filp->f_mode &= ~FMODE_ATOMIC_POS; /* cdev-style semantics */
++	stream_open(inode, filp);
+ 
+ 	u = kzalloc(sizeof(*u), GFP_KERNEL);
+ 	if (u == NULL)
+diff --git a/fs/debugfs/inode.c b/fs/debugfs/inode.c
+index 29c68c5d44d5..c4a4fc6f1a95 100644
+--- a/fs/debugfs/inode.c
++++ b/fs/debugfs/inode.c
+@@ -163,19 +163,24 @@ static int debugfs_show_options(struct seq_file *m, struct dentry *root)
+ 	return 0;
+ }
+ 
+-static void debugfs_evict_inode(struct inode *inode)
++static void debugfs_i_callback(struct rcu_head *head)
+ {
+-	truncate_inode_pages_final(&inode->i_data);
+-	clear_inode(inode);
++	struct inode *inode = container_of(head, struct inode, i_rcu);
+ 	if (S_ISLNK(inode->i_mode))
+ 		kfree(inode->i_link);
++	free_inode_nonrcu(inode);
++}
++
++static void debugfs_destroy_inode(struct inode *inode)
++{
++	call_rcu(&inode->i_rcu, debugfs_i_callback);
+ }
+ 
+ static const struct super_operations debugfs_super_operations = {
+ 	.statfs		= simple_statfs,
+ 	.remount_fs	= debugfs_remount,
+ 	.show_options	= debugfs_show_options,
+-	.evict_inode	= debugfs_evict_inode,
++	.destroy_inode	= debugfs_destroy_inode,
+ };
+ 
+ static void debugfs_release_dentry(struct dentry *dentry)
+diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
+index a7fa037b876b..a3a3d256fb0e 100644
+--- a/fs/hugetlbfs/inode.c
++++ b/fs/hugetlbfs/inode.c
+@@ -741,11 +741,17 @@ static struct inode *hugetlbfs_get_inode(struct super_block *sb,
+ 					umode_t mode, dev_t dev)
+ {
+ 	struct inode *inode;
+-	struct resv_map *resv_map;
++	struct resv_map *resv_map = NULL;
+ 
+-	resv_map = resv_map_alloc();
+-	if (!resv_map)
+-		return NULL;
++	/*
++	 * Reserve maps are only needed for inodes that can have associated
++	 * page allocations.
++	 */
++	if (S_ISREG(mode) || S_ISLNK(mode)) {
++		resv_map = resv_map_alloc();
++		if (!resv_map)
++			return NULL;
++	}
+ 
+ 	inode = new_inode(sb);
+ 	if (inode) {
+@@ -780,8 +786,10 @@ static struct inode *hugetlbfs_get_inode(struct super_block *sb,
+ 			break;
+ 		}
+ 		lockdep_annotate_inode_mutex_key(inode);
+-	} else
+-		kref_put(&resv_map->refs, resv_map_release);
++	} else {
++		if (resv_map)
++			kref_put(&resv_map->refs, resv_map_release);
++	}
+ 
+ 	return inode;
+ }
+diff --git a/fs/jffs2/readinode.c b/fs/jffs2/readinode.c
+index 389ea53ea487..bccfc40b3a74 100644
+--- a/fs/jffs2/readinode.c
++++ b/fs/jffs2/readinode.c
+@@ -1414,11 +1414,6 @@ void jffs2_do_clear_inode(struct jffs2_sb_info *c, struct jffs2_inode_info *f)
+ 
+ 	jffs2_kill_fragtree(&f->fragtree, deleted?c:NULL);
+ 
+-	if (f->target) {
+-		kfree(f->target);
+-		f->target = NULL;
+-	}
+-
+ 	fds = f->dents;
+ 	while(fds) {
+ 		fd = fds;
+diff --git a/fs/jffs2/super.c b/fs/jffs2/super.c
+index bb6ae387469f..05d892c79339 100644
+--- a/fs/jffs2/super.c
++++ b/fs/jffs2/super.c
+@@ -47,7 +47,10 @@ static struct inode *jffs2_alloc_inode(struct super_block *sb)
+ static void jffs2_i_callback(struct rcu_head *head)
+ {
+ 	struct inode *inode = container_of(head, struct inode, i_rcu);
+-	kmem_cache_free(jffs2_inode_cachep, JFFS2_INODE_INFO(inode));
++	struct jffs2_inode_info *f = JFFS2_INODE_INFO(inode);
++
++	kfree(f->target);
++	kmem_cache_free(jffs2_inode_cachep, f);
+ }
+ 
+ static void jffs2_destroy_inode(struct inode *inode)
+diff --git a/fs/open.c b/fs/open.c
+index f1c2f855fd43..a00350018a47 100644
+--- a/fs/open.c
++++ b/fs/open.c
+@@ -1215,3 +1215,21 @@ int nonseekable_open(struct inode *inode, struct file *filp)
+ }
+ 
+ EXPORT_SYMBOL(nonseekable_open);
++
++/*
++ * stream_open is used by subsystems that want stream-like file descriptors.
++ * Such file descriptors are not seekable and don't have notion of position
++ * (file.f_pos is always 0). Contrary to file descriptors of other regular
++ * files, .read() and .write() can run simultaneously.
++ *
++ * stream_open never fails and is marked to return int so that it could be
++ * directly used as file_operations.open .
++ */
++int stream_open(struct inode *inode, struct file *filp)
++{
++	filp->f_mode &= ~(FMODE_LSEEK | FMODE_PREAD | FMODE_PWRITE | FMODE_ATOMIC_POS);
++	filp->f_mode |= FMODE_STREAM;
++	return 0;
++}
++
++EXPORT_SYMBOL(stream_open);
+diff --git a/fs/read_write.c b/fs/read_write.c
+index 27b69b85d49f..3d3194e32201 100644
+--- a/fs/read_write.c
++++ b/fs/read_write.c
+@@ -560,12 +560,13 @@ ssize_t vfs_write(struct file *file, const char __user *buf, size_t count, loff_
+ 
+ static inline loff_t file_pos_read(struct file *file)
+ {
+-	return file->f_pos;
++	return file->f_mode & FMODE_STREAM ? 0 : file->f_pos;
+ }
+ 
+ static inline void file_pos_write(struct file *file, loff_t pos)
+ {
+-	file->f_pos = pos;
++	if ((file->f_mode & FMODE_STREAM) == 0)
++		file->f_pos = pos;
+ }
+ 
+ ssize_t ksys_read(unsigned int fd, char __user *buf, size_t count)
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index fd423fec8d83..09ce2646c78a 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -153,6 +153,9 @@ typedef int (dio_iodone_t)(struct kiocb *iocb, loff_t offset,
+ #define FMODE_OPENED		((__force fmode_t)0x80000)
+ #define FMODE_CREATED		((__force fmode_t)0x100000)
+ 
++/* File is stream-like */
++#define FMODE_STREAM		((__force fmode_t)0x200000)
++
+ /* File was opened by fanotify and shouldn't generate fanotify events */
+ #define FMODE_NONOTIFY		((__force fmode_t)0x4000000)
+ 
+@@ -3074,6 +3077,7 @@ extern loff_t no_seek_end_llseek_size(struct file *, loff_t, int, loff_t);
+ extern loff_t no_seek_end_llseek(struct file *, loff_t, int);
+ extern int generic_file_open(struct inode * inode, struct file * filp);
+ extern int nonseekable_open(struct inode * inode, struct file * filp);
++extern int stream_open(struct inode * inode, struct file * filp);
+ 
+ #ifdef CONFIG_BLOCK
+ typedef void (dio_submit_t)(struct bio *bio, struct inode *inode,
+diff --git a/include/linux/platform_data/x86/clk-pmc-atom.h b/include/linux/platform_data/x86/clk-pmc-atom.h
+index 3ab892208343..7a37ac27d0fb 100644
+--- a/include/linux/platform_data/x86/clk-pmc-atom.h
++++ b/include/linux/platform_data/x86/clk-pmc-atom.h
+@@ -35,10 +35,13 @@ struct pmc_clk {
+  *
+  * @base:	PMC clock register base offset
+  * @clks:	pointer to set of registered clocks, typically 0..5
++ * @critical:	flag to indicate if firmware enabled pmc_plt_clks
++ *		should be marked as critial or not
+  */
+ struct pmc_clk_data {
+ 	void __iomem *base;
+ 	const struct pmc_clk *clks;
++	bool critical;
+ };
+ 
+ #endif /* __PLATFORM_DATA_X86_CLK_PMC_ATOM_H */
+diff --git a/include/linux/usb.h b/include/linux/usb.h
+index 5e49e82c4368..ff010d1fd1c7 100644
+--- a/include/linux/usb.h
++++ b/include/linux/usb.h
+@@ -200,7 +200,6 @@ usb_find_last_int_out_endpoint(struct usb_host_interface *alt,
+  * @dev: driver model's view of this device
+  * @usb_dev: if an interface is bound to the USB major, this will point
+  *	to the sysfs representation for that device.
+- * @pm_usage_cnt: PM usage counter for this interface
+  * @reset_ws: Used for scheduling resets from atomic context.
+  * @resetting_device: USB core reset the device, so use alt setting 0 as
+  *	current; needs bandwidth alloc after reset.
+@@ -257,7 +256,6 @@ struct usb_interface {
+ 
+ 	struct device dev;		/* interface specific device info */
+ 	struct device *usb_dev;
+-	atomic_t pm_usage_cnt;		/* usage counter for autosuspend */
+ 	struct work_struct reset_ws;	/* for resets in atomic context */
+ };
+ #define	to_usb_interface(d) container_of(d, struct usb_interface, dev)
+diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
+index 8974b3755670..3c18260403dd 100644
+--- a/kernel/bpf/cpumap.c
++++ b/kernel/bpf/cpumap.c
+@@ -162,10 +162,14 @@ static void cpu_map_kthread_stop(struct work_struct *work)
+ static struct sk_buff *cpu_map_build_skb(struct bpf_cpu_map_entry *rcpu,
+ 					 struct xdp_frame *xdpf)
+ {
++	unsigned int hard_start_headroom;
+ 	unsigned int frame_size;
+ 	void *pkt_data_start;
+ 	struct sk_buff *skb;
+ 
++	/* Part of headroom was reserved to xdpf */
++	hard_start_headroom = sizeof(struct xdp_frame) +  xdpf->headroom;
++
+ 	/* build_skb need to place skb_shared_info after SKB end, and
+ 	 * also want to know the memory "truesize".  Thus, need to
+ 	 * know the memory frame size backing xdp_buff.
+@@ -183,15 +187,15 @@ static struct sk_buff *cpu_map_build_skb(struct bpf_cpu_map_entry *rcpu,
+ 	 * is not at a fixed memory location, with mixed length
+ 	 * packets, which is bad for cache-line hotness.
+ 	 */
+-	frame_size = SKB_DATA_ALIGN(xdpf->len + xdpf->headroom) +
++	frame_size = SKB_DATA_ALIGN(xdpf->len + hard_start_headroom) +
+ 		SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+ 
+-	pkt_data_start = xdpf->data - xdpf->headroom;
++	pkt_data_start = xdpf->data - hard_start_headroom;
+ 	skb = build_skb(pkt_data_start, frame_size);
+ 	if (!skb)
+ 		return NULL;
+ 
+-	skb_reserve(skb, xdpf->headroom);
++	skb_reserve(skb, hard_start_headroom);
+ 	__skb_put(skb, xdpf->len);
+ 	if (xdpf->metasize)
+ 		skb_metadata_set(skb, xdpf->metasize);
+@@ -205,6 +209,9 @@ static struct sk_buff *cpu_map_build_skb(struct bpf_cpu_map_entry *rcpu,
+ 	 * - RX ring dev queue index	(skb_record_rx_queue)
+ 	 */
+ 
++	/* Allow SKB to reuse area used by xdp_frame */
++	xdp_scrub_frame(xdpf);
++
+ 	return skb;
+ }
+ 
+diff --git a/kernel/seccomp.c b/kernel/seccomp.c
+index e815781ed751..181e72718434 100644
+--- a/kernel/seccomp.c
++++ b/kernel/seccomp.c
+@@ -500,7 +500,10 @@ out:
+  *
+  * Caller must be holding current->sighand->siglock lock.
+  *
+- * Returns 0 on success, -ve on error.
++ * Returns 0 on success, -ve on error, or
++ *   - in TSYNC mode: the pid of a thread which was either not in the correct
++ *     seccomp mode or did not have an ancestral seccomp filter
++ *   - in NEW_LISTENER mode: the fd of the new listener
+  */
+ static long seccomp_attach_filter(unsigned int flags,
+ 				  struct seccomp_filter *filter)
+@@ -1256,6 +1259,16 @@ static long seccomp_set_mode_filter(unsigned int flags,
+ 	if (flags & ~SECCOMP_FILTER_FLAG_MASK)
+ 		return -EINVAL;
+ 
++	/*
++	 * In the successful case, NEW_LISTENER returns the new listener fd.
++	 * But in the failure case, TSYNC returns the thread that died. If you
++	 * combine these two flags, there's no way to tell whether something
++	 * succeeded or failed. So, let's disallow this combination.
++	 */
++	if ((flags & SECCOMP_FILTER_FLAG_TSYNC) &&
++	    (flags & SECCOMP_FILTER_FLAG_NEW_LISTENER))
++		return -EINVAL;
++
+ 	/* Prepare the new filter before holding any locks. */
+ 	prepared = seccomp_prepare_user_filter(filter);
+ 	if (IS_ERR(prepared))
+@@ -1302,7 +1315,7 @@ out:
+ 		mutex_unlock(&current->signal->cred_guard_mutex);
+ out_put_fd:
+ 	if (flags & SECCOMP_FILTER_FLAG_NEW_LISTENER) {
+-		if (ret < 0) {
++		if (ret) {
+ 			listener_f->private_data = NULL;
+ 			fput(listener_f);
+ 			put_unused_fd(listener);
+diff --git a/mm/kmemleak.c b/mm/kmemleak.c
+index 707fa5579f66..2e435b8142e5 100644
+--- a/mm/kmemleak.c
++++ b/mm/kmemleak.c
+@@ -1401,6 +1401,7 @@ static void scan_block(void *_start, void *_end,
+ /*
+  * Scan a large memory block in MAX_SCAN_SIZE chunks to reduce the latency.
+  */
++#ifdef CONFIG_SMP
+ static void scan_large_block(void *start, void *end)
+ {
+ 	void *next;
+@@ -1412,6 +1413,7 @@ static void scan_large_block(void *start, void *end)
+ 		cond_resched();
+ 	}
+ }
++#endif
+ 
+ /*
+  * Scan a memory block corresponding to a kmemleak_object. A condition is
+@@ -1529,11 +1531,6 @@ static void kmemleak_scan(void)
+ 	}
+ 	rcu_read_unlock();
+ 
+-	/* data/bss scanning */
+-	scan_large_block(_sdata, _edata);
+-	scan_large_block(__bss_start, __bss_stop);
+-	scan_large_block(__start_ro_after_init, __end_ro_after_init);
+-
+ #ifdef CONFIG_SMP
+ 	/* per-cpu sections scanning */
+ 	for_each_possible_cpu(i)
+@@ -2071,6 +2068,17 @@ void __init kmemleak_init(void)
+ 	}
+ 	local_irq_restore(flags);
+ 
++	/* register the data/bss sections */
++	create_object((unsigned long)_sdata, _edata - _sdata,
++		      KMEMLEAK_GREY, GFP_ATOMIC);
++	create_object((unsigned long)__bss_start, __bss_stop - __bss_start,
++		      KMEMLEAK_GREY, GFP_ATOMIC);
++	/* only register .data..ro_after_init if not within .data */
++	if (__start_ro_after_init < _sdata || __end_ro_after_init > _edata)
++		create_object((unsigned long)__start_ro_after_init,
++			      __end_ro_after_init - __start_ro_after_init,
++			      KMEMLEAK_GREY, GFP_ATOMIC);
++
+ 	/*
+ 	 * This is the point where tracking allocations is safe. Automatic
+ 	 * scanning is started during the late initcall. Add the early logged
+diff --git a/net/batman-adv/bat_v_elp.c b/net/batman-adv/bat_v_elp.c
+index ef0dec20c7d8..5da183b2f4c9 100644
+--- a/net/batman-adv/bat_v_elp.c
++++ b/net/batman-adv/bat_v_elp.c
+@@ -104,8 +104,10 @@ static u32 batadv_v_elp_get_throughput(struct batadv_hardif_neigh_node *neigh)
+ 
+ 		ret = cfg80211_get_station(real_netdev, neigh->addr, &sinfo);
+ 
+-		/* free the TID stats immediately */
+-		cfg80211_sinfo_release_content(&sinfo);
++		if (!ret) {
++			/* free the TID stats immediately */
++			cfg80211_sinfo_release_content(&sinfo);
++		}
+ 
+ 		dev_put(real_netdev);
+ 		if (ret == -ENOENT) {
+diff --git a/net/batman-adv/bridge_loop_avoidance.c b/net/batman-adv/bridge_loop_avoidance.c
+index 5fdde2947802..cf2bcea7df82 100644
+--- a/net/batman-adv/bridge_loop_avoidance.c
++++ b/net/batman-adv/bridge_loop_avoidance.c
+@@ -803,6 +803,8 @@ static void batadv_bla_del_claim(struct batadv_priv *bat_priv,
+ 				 const u8 *mac, const unsigned short vid)
+ {
+ 	struct batadv_bla_claim search_claim, *claim;
++	struct batadv_bla_claim *claim_removed_entry;
++	struct hlist_node *claim_removed_node;
+ 
+ 	ether_addr_copy(search_claim.addr, mac);
+ 	search_claim.vid = vid;
+@@ -813,10 +815,18 @@ static void batadv_bla_del_claim(struct batadv_priv *bat_priv,
+ 	batadv_dbg(BATADV_DBG_BLA, bat_priv, "%s(): %pM, vid %d\n", __func__,
+ 		   mac, batadv_print_vid(vid));
+ 
+-	batadv_hash_remove(bat_priv->bla.claim_hash, batadv_compare_claim,
+-			   batadv_choose_claim, claim);
+-	batadv_claim_put(claim); /* reference from the hash is gone */
++	claim_removed_node = batadv_hash_remove(bat_priv->bla.claim_hash,
++						batadv_compare_claim,
++						batadv_choose_claim, claim);
++	if (!claim_removed_node)
++		goto free_claim;
+ 
++	/* reference from the hash is gone */
++	claim_removed_entry = hlist_entry(claim_removed_node,
++					  struct batadv_bla_claim, hash_entry);
++	batadv_claim_put(claim_removed_entry);
++
++free_claim:
+ 	/* don't need the reference from hash_find() anymore */
+ 	batadv_claim_put(claim);
+ }
+diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c
+index 8dcd4968cde7..6ec0e67be560 100644
+--- a/net/batman-adv/translation-table.c
++++ b/net/batman-adv/translation-table.c
+@@ -616,14 +616,26 @@ static void batadv_tt_global_free(struct batadv_priv *bat_priv,
+ 				  struct batadv_tt_global_entry *tt_global,
+ 				  const char *message)
+ {
++	struct batadv_tt_global_entry *tt_removed_entry;
++	struct hlist_node *tt_removed_node;
++
+ 	batadv_dbg(BATADV_DBG_TT, bat_priv,
+ 		   "Deleting global tt entry %pM (vid: %d): %s\n",
+ 		   tt_global->common.addr,
+ 		   batadv_print_vid(tt_global->common.vid), message);
+ 
+-	batadv_hash_remove(bat_priv->tt.global_hash, batadv_compare_tt,
+-			   batadv_choose_tt, &tt_global->common);
+-	batadv_tt_global_entry_put(tt_global);
++	tt_removed_node = batadv_hash_remove(bat_priv->tt.global_hash,
++					     batadv_compare_tt,
++					     batadv_choose_tt,
++					     &tt_global->common);
++	if (!tt_removed_node)
++		return;
++
++	/* drop reference of remove hash entry */
++	tt_removed_entry = hlist_entry(tt_removed_node,
++				       struct batadv_tt_global_entry,
++				       common.hash_entry);
++	batadv_tt_global_entry_put(tt_removed_entry);
+ }
+ 
+ /**
+@@ -1337,9 +1349,10 @@ u16 batadv_tt_local_remove(struct batadv_priv *bat_priv, const u8 *addr,
+ 			   unsigned short vid, const char *message,
+ 			   bool roaming)
+ {
++	struct batadv_tt_local_entry *tt_removed_entry;
+ 	struct batadv_tt_local_entry *tt_local_entry;
+ 	u16 flags, curr_flags = BATADV_NO_FLAGS;
+-	void *tt_entry_exists;
++	struct hlist_node *tt_removed_node;
+ 
+ 	tt_local_entry = batadv_tt_local_hash_find(bat_priv, addr, vid);
+ 	if (!tt_local_entry)
+@@ -1368,15 +1381,18 @@ u16 batadv_tt_local_remove(struct batadv_priv *bat_priv, const u8 *addr,
+ 	 */
+ 	batadv_tt_local_event(bat_priv, tt_local_entry, BATADV_TT_CLIENT_DEL);
+ 
+-	tt_entry_exists = batadv_hash_remove(bat_priv->tt.local_hash,
++	tt_removed_node = batadv_hash_remove(bat_priv->tt.local_hash,
+ 					     batadv_compare_tt,
+ 					     batadv_choose_tt,
+ 					     &tt_local_entry->common);
+-	if (!tt_entry_exists)
++	if (!tt_removed_node)
+ 		goto out;
+ 
+-	/* extra call to free the local tt entry */
+-	batadv_tt_local_entry_put(tt_local_entry);
++	/* drop reference of remove hash entry */
++	tt_removed_entry = hlist_entry(tt_removed_node,
++				       struct batadv_tt_local_entry,
++				       common.hash_entry);
++	batadv_tt_local_entry_put(tt_removed_entry);
+ 
+ out:
+ 	if (tt_local_entry)
+diff --git a/net/mac80211/debugfs_netdev.c b/net/mac80211/debugfs_netdev.c
+index cff0fb3578c9..deb3faf08337 100644
+--- a/net/mac80211/debugfs_netdev.c
++++ b/net/mac80211/debugfs_netdev.c
+@@ -841,7 +841,7 @@ void ieee80211_debugfs_rename_netdev(struct ieee80211_sub_if_data *sdata)
+ 
+ 	dir = sdata->vif.debugfs_dir;
+ 
+-	if (!dir)
++	if (IS_ERR_OR_NULL(dir))
+ 		return;
+ 
+ 	sprintf(buf, "netdev:%s", sdata->name);
+diff --git a/net/mac80211/key.c b/net/mac80211/key.c
+index 4700718e010f..37e372896230 100644
+--- a/net/mac80211/key.c
++++ b/net/mac80211/key.c
+@@ -167,8 +167,10 @@ static int ieee80211_key_enable_hw_accel(struct ieee80211_key *key)
+ 		 * The driver doesn't know anything about VLAN interfaces.
+ 		 * Hence, don't send GTKs for VLAN interfaces to the driver.
+ 		 */
+-		if (!(key->conf.flags & IEEE80211_KEY_FLAG_PAIRWISE))
++		if (!(key->conf.flags & IEEE80211_KEY_FLAG_PAIRWISE)) {
++			ret = 1;
+ 			goto out_unsupported;
++		}
+ 	}
+ 
+ 	ret = drv_set_key(key->local, SET_KEY, sdata,
+@@ -213,11 +215,8 @@ static int ieee80211_key_enable_hw_accel(struct ieee80211_key *key)
+ 		/* all of these we can do in software - if driver can */
+ 		if (ret == 1)
+ 			return 0;
+-		if (ieee80211_hw_check(&key->local->hw, SW_CRYPTO_CONTROL)) {
+-			if (sdata->vif.type == NL80211_IFTYPE_AP_VLAN)
+-				return 0;
++		if (ieee80211_hw_check(&key->local->hw, SW_CRYPTO_CONTROL))
+ 			return -EINVAL;
+-		}
+ 		return 0;
+ 	default:
+ 		return -EINVAL;
+diff --git a/scripts/coccinelle/api/stream_open.cocci b/scripts/coccinelle/api/stream_open.cocci
+new file mode 100644
+index 000000000000..350145da7669
+--- /dev/null
++++ b/scripts/coccinelle/api/stream_open.cocci
+@@ -0,0 +1,363 @@
++// SPDX-License-Identifier: GPL-2.0
++// Author: Kirill Smelkov (kirr@nexedi.com)
++//
++// Search for stream-like files that are using nonseekable_open and convert
++// them to stream_open. A stream-like file is a file that does not use ppos in
++// its read and write. Rationale for the conversion is to avoid deadlock in
++// between read and write.
++
++virtual report
++virtual patch
++virtual explain  // explain decisions in the patch (SPFLAGS="-D explain")
++
++// stream-like reader & writer - ones that do not depend on f_pos.
++@ stream_reader @
++identifier readstream, ppos;
++identifier f, buf, len;
++type loff_t;
++@@
++  ssize_t readstream(struct file *f, char *buf, size_t len, loff_t *ppos)
++  {
++    ... when != ppos
++  }
++
++@ stream_writer @
++identifier writestream, ppos;
++identifier f, buf, len;
++type loff_t;
++@@
++  ssize_t writestream(struct file *f, const char *buf, size_t len, loff_t *ppos)
++  {
++    ... when != ppos
++  }
++
++
++// a function that blocks
++@ blocks @
++identifier block_f;
++identifier wait_event =~ "^wait_event_.*";
++@@
++  block_f(...) {
++    ... when exists
++    wait_event(...)
++    ... when exists
++  }
++
++// stream_reader that can block inside.
++//
++// XXX wait_* can be called not directly from current function (e.g. func -> f -> g -> wait())
++// XXX currently reader_blocks supports only direct and 1-level indirect cases.
++@ reader_blocks_direct @
++identifier stream_reader.readstream;
++identifier wait_event =~ "^wait_event_.*";
++@@
++  readstream(...)
++  {
++    ... when exists
++    wait_event(...)
++    ... when exists
++  }
++
++@ reader_blocks_1 @
++identifier stream_reader.readstream;
++identifier blocks.block_f;
++@@
++  readstream(...)
++  {
++    ... when exists
++    block_f(...)
++    ... when exists
++  }
++
++@ reader_blocks depends on reader_blocks_direct || reader_blocks_1 @
++identifier stream_reader.readstream;
++@@
++  readstream(...) {
++    ...
++  }
++
++
++// file_operations + whether they have _any_ .read, .write, .llseek ... at all.
++//
++// XXX add support for file_operations xxx[N] = ...	(sound/core/pcm_native.c)
++@ fops0 @
++identifier fops;
++@@
++  struct file_operations fops = {
++    ...
++  };
++
++@ has_read @
++identifier fops0.fops;
++identifier read_f;
++@@
++  struct file_operations fops = {
++    .read = read_f,
++  };
++
++@ has_read_iter @
++identifier fops0.fops;
++identifier read_iter_f;
++@@
++  struct file_operations fops = {
++    .read_iter = read_iter_f,
++  };
++
++@ has_write @
++identifier fops0.fops;
++identifier write_f;
++@@
++  struct file_operations fops = {
++    .write = write_f,
++  };
++
++@ has_write_iter @
++identifier fops0.fops;
++identifier write_iter_f;
++@@
++  struct file_operations fops = {
++    .write_iter = write_iter_f,
++  };
++
++@ has_llseek @
++identifier fops0.fops;
++identifier llseek_f;
++@@
++  struct file_operations fops = {
++    .llseek = llseek_f,
++  };
++
++@ has_no_llseek @
++identifier fops0.fops;
++@@
++  struct file_operations fops = {
++    .llseek = no_llseek,
++  };
++
++@ has_mmap @
++identifier fops0.fops;
++identifier mmap_f;
++@@
++  struct file_operations fops = {
++    .mmap = mmap_f,
++  };
++
++@ has_copy_file_range @
++identifier fops0.fops;
++identifier copy_file_range_f;
++@@
++  struct file_operations fops = {
++    .copy_file_range = copy_file_range_f,
++  };
++
++@ has_remap_file_range @
++identifier fops0.fops;
++identifier remap_file_range_f;
++@@
++  struct file_operations fops = {
++    .remap_file_range = remap_file_range_f,
++  };
++
++@ has_splice_read @
++identifier fops0.fops;
++identifier splice_read_f;
++@@
++  struct file_operations fops = {
++    .splice_read = splice_read_f,
++  };
++
++@ has_splice_write @
++identifier fops0.fops;
++identifier splice_write_f;
++@@
++  struct file_operations fops = {
++    .splice_write = splice_write_f,
++  };
++
++
++// file_operations that is candidate for stream_open conversion - it does not
++// use mmap and other methods that assume @offset access to file.
++//
++// XXX for simplicity require no .{read/write}_iter and no .splice_{read/write} for now.
++// XXX maybe_steam.fops cannot be used in other rules - it gives "bad rule maybe_stream or bad variable fops".
++@ maybe_stream depends on (!has_llseek || has_no_llseek) && !has_mmap && !has_copy_file_range && !has_remap_file_range && !has_read_iter && !has_write_iter && !has_splice_read && !has_splice_write @
++identifier fops0.fops;
++@@
++  struct file_operations fops = {
++  };
++
++
++// ---- conversions ----
++
++// XXX .open = nonseekable_open -> .open = stream_open
++// XXX .open = func -> openfunc -> nonseekable_open
++
++// read & write
++//
++// if both are used in the same file_operations together with an opener -
++// under that conditions we can use stream_open instead of nonseekable_open.
++@ fops_rw depends on maybe_stream @
++identifier fops0.fops, openfunc;
++identifier stream_reader.readstream;
++identifier stream_writer.writestream;
++@@
++  struct file_operations fops = {
++      .open  = openfunc,
++      .read  = readstream,
++      .write = writestream,
++  };
++
++@ report_rw depends on report @
++identifier fops_rw.openfunc;
++position p1;
++@@
++  openfunc(...) {
++    <...
++     nonseekable_open@p1
++    ...>
++  }
++
++@ script:python depends on report && reader_blocks @
++fops << fops0.fops;
++p << report_rw.p1;
++@@
++coccilib.report.print_report(p[0],
++  "ERROR: %s: .read() can deadlock .write(); change nonseekable_open -> stream_open to fix." % (fops,))
++
++@ script:python depends on report && !reader_blocks @
++fops << fops0.fops;
++p << report_rw.p1;
++@@
++coccilib.report.print_report(p[0],
++  "WARNING: %s: .read() and .write() have stream semantic; safe to change nonseekable_open -> stream_open." % (fops,))
++
++
++@ explain_rw_deadlocked depends on explain && reader_blocks @
++identifier fops_rw.openfunc;
++@@
++  openfunc(...) {
++    <...
++-    nonseekable_open
+++    nonseekable_open /* read & write (was deadlock) */
++    ...>
++  }
++
++
++@ explain_rw_nodeadlock depends on explain && !reader_blocks @
++identifier fops_rw.openfunc;
++@@
++  openfunc(...) {
++    <...
++-    nonseekable_open
+++    nonseekable_open /* read & write (no direct deadlock) */
++    ...>
++  }
++
++@ patch_rw depends on patch @
++identifier fops_rw.openfunc;
++@@
++  openfunc(...) {
++    <...
++-   nonseekable_open
+++   stream_open
++    ...>
++  }
++
++
++// read, but not write
++@ fops_r depends on maybe_stream && !has_write @
++identifier fops0.fops, openfunc;
++identifier stream_reader.readstream;
++@@
++  struct file_operations fops = {
++      .open  = openfunc,
++      .read  = readstream,
++  };
++
++@ report_r depends on report @
++identifier fops_r.openfunc;
++position p1;
++@@
++  openfunc(...) {
++    <...
++    nonseekable_open@p1
++    ...>
++  }
++
++@ script:python depends on report @
++fops << fops0.fops;
++p << report_r.p1;
++@@
++coccilib.report.print_report(p[0],
++  "WARNING: %s: .read() has stream semantic; safe to change nonseekable_open -> stream_open." % (fops,))
++
++@ explain_r depends on explain @
++identifier fops_r.openfunc;
++@@
++  openfunc(...) {
++    <...
++-   nonseekable_open
+++   nonseekable_open /* read only */
++    ...>
++  }
++
++@ patch_r depends on patch @
++identifier fops_r.openfunc;
++@@
++  openfunc(...) {
++    <...
++-   nonseekable_open
+++   stream_open
++    ...>
++  }
++
++
++// write, but not read
++@ fops_w depends on maybe_stream && !has_read @
++identifier fops0.fops, openfunc;
++identifier stream_writer.writestream;
++@@
++  struct file_operations fops = {
++      .open  = openfunc,
++      .write = writestream,
++  };
++
++@ report_w depends on report @
++identifier fops_w.openfunc;
++position p1;
++@@
++  openfunc(...) {
++    <...
++    nonseekable_open@p1
++    ...>
++  }
++
++@ script:python depends on report @
++fops << fops0.fops;
++p << report_w.p1;
++@@
++coccilib.report.print_report(p[0],
++  "WARNING: %s: .write() has stream semantic; safe to change nonseekable_open -> stream_open." % (fops,))
++
++@ explain_w depends on explain @
++identifier fops_w.openfunc;
++@@
++  openfunc(...) {
++    <...
++-   nonseekable_open
+++   nonseekable_open /* write only */
++    ...>
++  }
++
++@ patch_w depends on patch @
++identifier fops_w.openfunc;
++@@
++  openfunc(...) {
++    <...
++-   nonseekable_open
+++   stream_open
++    ...>
++  }
++
++
++// no read, no write - don't change anything
+diff --git a/security/selinux/avc.c b/security/selinux/avc.c
+index 635e5c1e3e48..5de18a6d5c3f 100644
+--- a/security/selinux/avc.c
++++ b/security/selinux/avc.c
+@@ -838,6 +838,7 @@ out:
+  * @ssid,@tsid,@tclass : identifier of an AVC entry
+  * @seqno : sequence number when decision was made
+  * @xpd: extended_perms_decision to be added to the node
++ * @flags: the AVC_* flags, e.g. AVC_NONBLOCKING, AVC_EXTENDED_PERMS, or 0.
+  *
+  * if a valid AVC entry doesn't exist,this function returns -ENOENT.
+  * if kmalloc() called internal returns NULL, this function returns -ENOMEM.
+@@ -856,6 +857,23 @@ static int avc_update_node(struct selinux_avc *avc,
+ 	struct hlist_head *head;
+ 	spinlock_t *lock;
+ 
++	/*
++	 * If we are in a non-blocking code path, e.g. VFS RCU walk,
++	 * then we must not add permissions to a cache entry
++	 * because we cannot safely audit the denial.  Otherwise,
++	 * during the subsequent blocking retry (e.g. VFS ref walk), we
++	 * will find the permissions already granted in the cache entry
++	 * and won't audit anything at all, leading to silent denials in
++	 * permissive mode that only appear when in enforcing mode.
++	 *
++	 * See the corresponding handling in slow_avc_audit(), and the
++	 * logic in selinux_inode_follow_link and selinux_inode_permission
++	 * for the VFS MAY_NOT_BLOCK flag, which is transliterated into
++	 * AVC_NONBLOCKING for avc_has_perm_noaudit().
++	 */
++	if (flags & AVC_NONBLOCKING)
++		return 0;
++
+ 	node = avc_alloc_node(avc);
+ 	if (!node) {
+ 		rc = -ENOMEM;
+@@ -1115,7 +1133,7 @@ decision:
+  * @tsid: target security identifier
+  * @tclass: target security class
+  * @requested: requested permissions, interpreted based on @tclass
+- * @flags:  AVC_STRICT or 0
++ * @flags:  AVC_STRICT, AVC_NONBLOCKING, or 0
+  * @avd: access vector decisions
+  *
+  * Check the AVC to determine whether the @requested permissions are granted
+@@ -1199,7 +1217,8 @@ int avc_has_perm_flags(struct selinux_state *state,
+ 	struct av_decision avd;
+ 	int rc, rc2;
+ 
+-	rc = avc_has_perm_noaudit(state, ssid, tsid, tclass, requested, 0,
++	rc = avc_has_perm_noaudit(state, ssid, tsid, tclass, requested,
++				  (flags & MAY_NOT_BLOCK) ? AVC_NONBLOCKING : 0,
+ 				  &avd);
+ 
+ 	rc2 = avc_audit(state, ssid, tsid, tclass, requested, &avd, rc,
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index 07b11b5aaf1f..b005283f0090 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -534,16 +534,10 @@ static int may_context_mount_inode_relabel(u32 sid,
+ 	return rc;
+ }
+ 
+-static int selinux_is_sblabel_mnt(struct super_block *sb)
++static int selinux_is_genfs_special_handling(struct super_block *sb)
+ {
+-	struct superblock_security_struct *sbsec = sb->s_security;
+-
+-	return sbsec->behavior == SECURITY_FS_USE_XATTR ||
+-		sbsec->behavior == SECURITY_FS_USE_TRANS ||
+-		sbsec->behavior == SECURITY_FS_USE_TASK ||
+-		sbsec->behavior == SECURITY_FS_USE_NATIVE ||
+-		/* Special handling. Genfs but also in-core setxattr handler */
+-		!strcmp(sb->s_type->name, "sysfs") ||
++	/* Special handling. Genfs but also in-core setxattr handler */
++	return	!strcmp(sb->s_type->name, "sysfs") ||
+ 		!strcmp(sb->s_type->name, "pstore") ||
+ 		!strcmp(sb->s_type->name, "debugfs") ||
+ 		!strcmp(sb->s_type->name, "tracefs") ||
+@@ -553,6 +547,34 @@ static int selinux_is_sblabel_mnt(struct super_block *sb)
+ 		  !strcmp(sb->s_type->name, "cgroup2")));
+ }
+ 
++static int selinux_is_sblabel_mnt(struct super_block *sb)
++{
++	struct superblock_security_struct *sbsec = sb->s_security;
++
++	/*
++	 * IMPORTANT: Double-check logic in this function when adding a new
++	 * SECURITY_FS_USE_* definition!
++	 */
++	BUILD_BUG_ON(SECURITY_FS_USE_MAX != 7);
++
++	switch (sbsec->behavior) {
++	case SECURITY_FS_USE_XATTR:
++	case SECURITY_FS_USE_TRANS:
++	case SECURITY_FS_USE_TASK:
++	case SECURITY_FS_USE_NATIVE:
++		return 1;
++
++	case SECURITY_FS_USE_GENFS:
++		return selinux_is_genfs_special_handling(sb);
++
++	/* Never allow relabeling on context mounts */
++	case SECURITY_FS_USE_MNTPOINT:
++	case SECURITY_FS_USE_NONE:
++	default:
++		return 0;
++	}
++}
++
+ static int sb_finish_set_opts(struct super_block *sb)
+ {
+ 	struct superblock_security_struct *sbsec = sb->s_security;
+@@ -2985,7 +3007,9 @@ static int selinux_inode_permission(struct inode *inode, int mask)
+ 		return PTR_ERR(isec);
+ 
+ 	rc = avc_has_perm_noaudit(&selinux_state,
+-				  sid, isec->sid, isec->sclass, perms, 0, &avd);
++				  sid, isec->sid, isec->sclass, perms,
++				  (flags & MAY_NOT_BLOCK) ? AVC_NONBLOCKING : 0,
++				  &avd);
+ 	audited = avc_audit_required(perms, &avd, rc,
+ 				     from_access ? FILE__AUDIT_ACCESS : 0,
+ 				     &denied);
+diff --git a/security/selinux/include/avc.h b/security/selinux/include/avc.h
+index ef899bcfd2cb..74ea50977c20 100644
+--- a/security/selinux/include/avc.h
++++ b/security/selinux/include/avc.h
+@@ -142,6 +142,7 @@ static inline int avc_audit(struct selinux_state *state,
+ 
+ #define AVC_STRICT 1 /* Ignore permissive mode. */
+ #define AVC_EXTENDED_PERMS 2	/* update extended permissions */
++#define AVC_NONBLOCKING    4	/* non blocking */
+ int avc_has_perm_noaudit(struct selinux_state *state,
+ 			 u32 ssid, u32 tsid,
+ 			 u16 tclass, u32 requested,
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index a9f69c3a3e0b..5ce28b4f0218 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -5448,6 +5448,8 @@ static void alc274_fixup_bind_dacs(struct hda_codec *codec,
+ 		return;
+ 
+ 	spec->gen.preferred_dacs = preferred_pairs;
++	spec->gen.auto_mute_via_amp = 1;
++	codec->power_save_node = 0;
+ }
+ 
+ /* The DAC of NID 0x3 will introduce click/pop noise on headphones, so invalidate it */
+@@ -7266,6 +7268,10 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 		{0x21, 0x02211020}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0236, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
+ 		{0x21, 0x02211020}),
++	SND_HDA_PIN_QUIRK(0x10ec0236, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
++		{0x12, 0x40000000},
++		{0x14, 0x90170110},
++		{0x21, 0x02211020}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0255, 0x1028, "Dell", ALC255_FIXUP_DELL2_MIC_NO_PRESENCE,
+ 		{0x14, 0x90170110},
+ 		{0x21, 0x02211020}),
+@@ -7539,6 +7545,13 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 		{0x12, 0x90a60130},
+ 		{0x17, 0x90170110},
+ 		{0x21, 0x04211020}),
++	SND_HDA_PIN_QUIRK(0x10ec0295, 0x1043, "ASUS", ALC294_FIXUP_ASUS_SPK,
++		{0x12, 0x90a60130},
++		{0x17, 0x90170110},
++		{0x21, 0x03211020}),
++	SND_HDA_PIN_QUIRK(0x10ec0295, 0x1028, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE,
++		{0x14, 0x90170110},
++		{0x21, 0x04211020}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0295, 0x1028, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE,
+ 		ALC295_STANDARD_PINS,
+ 		{0x17, 0x21014020},
+diff --git a/sound/soc/codecs/wm_adsp.c b/sound/soc/codecs/wm_adsp.c
+index 1dd291cebe67..0600e4404f90 100644
+--- a/sound/soc/codecs/wm_adsp.c
++++ b/sound/soc/codecs/wm_adsp.c
+@@ -3443,8 +3443,6 @@ int wm_adsp_compr_trigger(struct snd_compr_stream *stream, int cmd)
+ 			}
+ 		}
+ 
+-		wm_adsp_buffer_clear(compr->buf);
+-
+ 		/* Trigger the IRQ at one fragment of data */
+ 		ret = wm_adsp_buffer_write(compr->buf,
+ 					   HOST_BUFFER_FIELD(high_water_mark),
+@@ -3456,6 +3454,8 @@ int wm_adsp_compr_trigger(struct snd_compr_stream *stream, int cmd)
+ 		}
+ 		break;
+ 	case SNDRV_PCM_TRIGGER_STOP:
++		if (wm_adsp_compr_attached(compr))
++			wm_adsp_buffer_clear(compr->buf);
+ 		break;
+ 	default:
+ 		ret = -EINVAL;
+diff --git a/sound/soc/intel/boards/bytcr_rt5651.c b/sound/soc/intel/boards/bytcr_rt5651.c
+index e528995668b7..0ed844f2ad01 100644
+--- a/sound/soc/intel/boards/bytcr_rt5651.c
++++ b/sound/soc/intel/boards/bytcr_rt5651.c
+@@ -266,7 +266,7 @@ static const struct snd_soc_dapm_route byt_rt5651_audio_map[] = {
+ static const struct snd_soc_dapm_route byt_rt5651_intmic_dmic_map[] = {
+ 	{"DMIC L1", NULL, "Internal Mic"},
+ 	{"DMIC R1", NULL, "Internal Mic"},
+-	{"IN3P", NULL, "Headset Mic"},
++	{"IN2P", NULL, "Headset Mic"},
+ };
+ 
+ static const struct snd_soc_dapm_route byt_rt5651_intmic_in1_map[] = {
+diff --git a/sound/soc/sh/rcar/gen.c b/sound/soc/sh/rcar/gen.c
+index 7cda60188f41..af19010b9d88 100644
+--- a/sound/soc/sh/rcar/gen.c
++++ b/sound/soc/sh/rcar/gen.c
+@@ -255,6 +255,30 @@ static int rsnd_gen2_probe(struct rsnd_priv *priv)
+ 		RSND_GEN_M_REG(SSI_MODE,		0xc,	0x80),
+ 		RSND_GEN_M_REG(SSI_CTRL,		0x10,	0x80),
+ 		RSND_GEN_M_REG(SSI_INT_ENABLE,		0x18,	0x80),
++		RSND_GEN_S_REG(SSI9_BUSIF0_MODE,	0x48c),
++		RSND_GEN_S_REG(SSI9_BUSIF0_ADINR,	0x484),
++		RSND_GEN_S_REG(SSI9_BUSIF0_DALIGN,	0x488),
++		RSND_GEN_S_REG(SSI9_BUSIF1_MODE,	0x4a0),
++		RSND_GEN_S_REG(SSI9_BUSIF1_ADINR,	0x4a4),
++		RSND_GEN_S_REG(SSI9_BUSIF1_DALIGN,	0x4a8),
++		RSND_GEN_S_REG(SSI9_BUSIF2_MODE,	0x4c0),
++		RSND_GEN_S_REG(SSI9_BUSIF2_ADINR,	0x4c4),
++		RSND_GEN_S_REG(SSI9_BUSIF2_DALIGN,	0x4c8),
++		RSND_GEN_S_REG(SSI9_BUSIF3_MODE,	0x4e0),
++		RSND_GEN_S_REG(SSI9_BUSIF3_ADINR,	0x4e4),
++		RSND_GEN_S_REG(SSI9_BUSIF3_DALIGN,	0x4e8),
++		RSND_GEN_S_REG(SSI9_BUSIF4_MODE,	0xd80),
++		RSND_GEN_S_REG(SSI9_BUSIF4_ADINR,	0xd84),
++		RSND_GEN_S_REG(SSI9_BUSIF4_DALIGN,	0xd88),
++		RSND_GEN_S_REG(SSI9_BUSIF5_MODE,	0xda0),
++		RSND_GEN_S_REG(SSI9_BUSIF5_ADINR,	0xda4),
++		RSND_GEN_S_REG(SSI9_BUSIF5_DALIGN,	0xda8),
++		RSND_GEN_S_REG(SSI9_BUSIF6_MODE,	0xdc0),
++		RSND_GEN_S_REG(SSI9_BUSIF6_ADINR,	0xdc4),
++		RSND_GEN_S_REG(SSI9_BUSIF6_DALIGN,	0xdc8),
++		RSND_GEN_S_REG(SSI9_BUSIF7_MODE,	0xde0),
++		RSND_GEN_S_REG(SSI9_BUSIF7_ADINR,	0xde4),
++		RSND_GEN_S_REG(SSI9_BUSIF7_DALIGN,	0xde8),
+ 	};
+ 
+ 	static const struct rsnd_regmap_field_conf conf_scu[] = {
+diff --git a/sound/soc/sh/rcar/rsnd.h b/sound/soc/sh/rcar/rsnd.h
+index 605e4b934982..90625c57847b 100644
+--- a/sound/soc/sh/rcar/rsnd.h
++++ b/sound/soc/sh/rcar/rsnd.h
+@@ -191,6 +191,30 @@ enum rsnd_reg {
+ 	SSI_SYS_STATUS7,
+ 	HDMI0_SEL,
+ 	HDMI1_SEL,
++	SSI9_BUSIF0_MODE,
++	SSI9_BUSIF1_MODE,
++	SSI9_BUSIF2_MODE,
++	SSI9_BUSIF3_MODE,
++	SSI9_BUSIF4_MODE,
++	SSI9_BUSIF5_MODE,
++	SSI9_BUSIF6_MODE,
++	SSI9_BUSIF7_MODE,
++	SSI9_BUSIF0_ADINR,
++	SSI9_BUSIF1_ADINR,
++	SSI9_BUSIF2_ADINR,
++	SSI9_BUSIF3_ADINR,
++	SSI9_BUSIF4_ADINR,
++	SSI9_BUSIF5_ADINR,
++	SSI9_BUSIF6_ADINR,
++	SSI9_BUSIF7_ADINR,
++	SSI9_BUSIF0_DALIGN,
++	SSI9_BUSIF1_DALIGN,
++	SSI9_BUSIF2_DALIGN,
++	SSI9_BUSIF3_DALIGN,
++	SSI9_BUSIF4_DALIGN,
++	SSI9_BUSIF5_DALIGN,
++	SSI9_BUSIF6_DALIGN,
++	SSI9_BUSIF7_DALIGN,
+ 
+ 	/* SSI */
+ 	SSICR,
+@@ -209,6 +233,9 @@ enum rsnd_reg {
+ #define SSI_BUSIF_MODE(i)	(SSI_BUSIF0_MODE + (i))
+ #define SSI_BUSIF_ADINR(i)	(SSI_BUSIF0_ADINR + (i))
+ #define SSI_BUSIF_DALIGN(i)	(SSI_BUSIF0_DALIGN + (i))
++#define SSI9_BUSIF_MODE(i)	(SSI9_BUSIF0_MODE + (i))
++#define SSI9_BUSIF_ADINR(i)	(SSI9_BUSIF0_ADINR + (i))
++#define SSI9_BUSIF_DALIGN(i)	(SSI9_BUSIF0_DALIGN + (i))
+ #define SSI_SYS_STATUS(i)	(SSI_SYS_STATUS0 + (i))
+ 
+ 
+diff --git a/sound/soc/sh/rcar/ssiu.c b/sound/soc/sh/rcar/ssiu.c
+index c74991dd18ab..2347f3404c06 100644
+--- a/sound/soc/sh/rcar/ssiu.c
++++ b/sound/soc/sh/rcar/ssiu.c
+@@ -181,28 +181,26 @@ static int rsnd_ssiu_init_gen2(struct rsnd_mod *mod,
+ 	if (rsnd_ssi_use_busif(io)) {
+ 		int id = rsnd_mod_id(mod);
+ 		int busif = rsnd_mod_id_sub(mod);
++		enum rsnd_reg adinr_reg, mode_reg, dalign_reg;
+ 
+-		/*
+-		 * FIXME
+-		 *
+-		 * We can't support SSI9-4/5/6/7, because its address is
+-		 * out of calculation rule
+-		 */
+ 		if ((id == 9) && (busif >= 4)) {
+-			struct device *dev = rsnd_priv_to_dev(priv);
+-
+-			dev_err(dev, "This driver doesn't support SSI%d-%d, so far",
+-				id, busif);
++			adinr_reg = SSI9_BUSIF_ADINR(busif);
++			mode_reg = SSI9_BUSIF_MODE(busif);
++			dalign_reg = SSI9_BUSIF_DALIGN(busif);
++		} else {
++			adinr_reg = SSI_BUSIF_ADINR(busif);
++			mode_reg = SSI_BUSIF_MODE(busif);
++			dalign_reg = SSI_BUSIF_DALIGN(busif);
+ 		}
+ 
+-		rsnd_mod_write(mod, SSI_BUSIF_ADINR(busif),
++		rsnd_mod_write(mod, adinr_reg,
+ 			       rsnd_get_adinr_bit(mod, io) |
+ 			       (rsnd_io_is_play(io) ?
+ 				rsnd_runtime_channel_after_ctu(io) :
+ 				rsnd_runtime_channel_original(io)));
+-		rsnd_mod_write(mod, SSI_BUSIF_MODE(busif),
++		rsnd_mod_write(mod, mode_reg,
+ 			       rsnd_get_busif_shift(io, mod) | 1);
+-		rsnd_mod_write(mod, SSI_BUSIF_DALIGN(busif),
++		rsnd_mod_write(mod, dalign_reg,
+ 			       rsnd_get_dalign(mod, io));
+ 	}
+ 
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index 03f36e534050..0c1dd6bd67ab 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -1895,10 +1895,15 @@ static int dpcm_apply_symmetry(struct snd_pcm_substream *fe_substream,
+ 		struct snd_soc_pcm_runtime *be = dpcm->be;
+ 		struct snd_pcm_substream *be_substream =
+ 			snd_soc_dpcm_get_substream(be, stream);
+-		struct snd_soc_pcm_runtime *rtd = be_substream->private_data;
++		struct snd_soc_pcm_runtime *rtd;
+ 		struct snd_soc_dai *codec_dai;
+ 		int i;
+ 
++		/* A backend may not have the requested substream */
++		if (!be_substream)
++			continue;
++
++		rtd = be_substream->private_data;
+ 		if (rtd->dai_link->be_hw_params_fixup)
+ 			continue;
+ 
+diff --git a/sound/soc/stm/stm32_sai_sub.c b/sound/soc/stm/stm32_sai_sub.c
+index d4825700b63f..29a131e0569e 100644
+--- a/sound/soc/stm/stm32_sai_sub.c
++++ b/sound/soc/stm/stm32_sai_sub.c
+@@ -1394,7 +1394,6 @@ static int stm32_sai_sub_dais_init(struct platform_device *pdev,
+ 	if (!sai->cpu_dai_drv)
+ 		return -ENOMEM;
+ 
+-	sai->cpu_dai_drv->name = dev_name(&pdev->dev);
+ 	if (STM_SAI_IS_PLAYBACK(sai)) {
+ 		memcpy(sai->cpu_dai_drv, &stm32_sai_playback_dai,
+ 		       sizeof(stm32_sai_playback_dai));
+@@ -1404,6 +1403,7 @@ static int stm32_sai_sub_dais_init(struct platform_device *pdev,
+ 		       sizeof(stm32_sai_capture_dai));
+ 		sai->cpu_dai_drv->capture.stream_name = sai->cpu_dai_drv->name;
+ 	}
++	sai->cpu_dai_drv->name = dev_name(&pdev->dev);
+ 
+ 	return 0;
+ }
+diff --git a/sound/soc/sunxi/sun50i-codec-analog.c b/sound/soc/sunxi/sun50i-codec-analog.c
+index df1fed0aa001..d105c90c3706 100644
+--- a/sound/soc/sunxi/sun50i-codec-analog.c
++++ b/sound/soc/sunxi/sun50i-codec-analog.c
+@@ -274,7 +274,7 @@ static const struct snd_soc_dapm_widget sun50i_a64_codec_widgets[] = {
+ 	 * stream widgets at the card level.
+ 	 */
+ 
+-	SND_SOC_DAPM_REGULATOR_SUPPLY("hpvcc", 0, 0),
++	SND_SOC_DAPM_REGULATOR_SUPPLY("cpvdd", 0, 0),
+ 	SND_SOC_DAPM_MUX("Headphone Source Playback Route",
+ 			 SND_SOC_NOPM, 0, 0, sun50i_codec_hp_src),
+ 	SND_SOC_DAPM_OUT_DRV("Headphone Amp", SUN50I_ADDA_HP_CTRL,
+@@ -362,7 +362,7 @@ static const struct snd_soc_dapm_route sun50i_a64_codec_routes[] = {
+ 	{ "Headphone Source Playback Route", "Mixer", "Left Mixer" },
+ 	{ "Headphone Source Playback Route", "Mixer", "Right Mixer" },
+ 	{ "Headphone Amp", NULL, "Headphone Source Playback Route" },
+-	{ "Headphone Amp", NULL, "hpvcc" },
++	{ "Headphone Amp", NULL, "cpvdd" },
+ 	{ "HP", NULL, "Headphone Amp" },
+ 
+ 	/* Microphone Routes */
+diff --git a/tools/testing/selftests/seccomp/seccomp_bpf.c b/tools/testing/selftests/seccomp/seccomp_bpf.c
+index 6d7a81306f8a..1c2509104924 100644
+--- a/tools/testing/selftests/seccomp/seccomp_bpf.c
++++ b/tools/testing/selftests/seccomp/seccomp_bpf.c
+@@ -2166,11 +2166,14 @@ TEST(detect_seccomp_filter_flags)
+ 				 SECCOMP_FILTER_FLAG_LOG,
+ 				 SECCOMP_FILTER_FLAG_SPEC_ALLOW,
+ 				 SECCOMP_FILTER_FLAG_NEW_LISTENER };
+-	unsigned int flag, all_flags;
++	unsigned int exclusive[] = {
++				SECCOMP_FILTER_FLAG_TSYNC,
++				SECCOMP_FILTER_FLAG_NEW_LISTENER };
++	unsigned int flag, all_flags, exclusive_mask;
+ 	int i;
+ 	long ret;
+ 
+-	/* Test detection of known-good filter flags */
++	/* Test detection of individual known-good filter flags */
+ 	for (i = 0, all_flags = 0; i < ARRAY_SIZE(flags); i++) {
+ 		int bits = 0;
+ 
+@@ -2197,16 +2200,29 @@ TEST(detect_seccomp_filter_flags)
+ 		all_flags |= flag;
+ 	}
+ 
+-	/* Test detection of all known-good filter flags */
+-	ret = seccomp(SECCOMP_SET_MODE_FILTER, all_flags, NULL);
+-	EXPECT_EQ(-1, ret);
+-	EXPECT_EQ(EFAULT, errno) {
+-		TH_LOG("Failed to detect that all known-good filter flags (0x%X) are supported!",
+-		       all_flags);
++	/*
++	 * Test detection of all known-good filter flags combined. But
++	 * for the exclusive flags we need to mask them out and try them
++	 * individually for the "all flags" testing.
++	 */
++	exclusive_mask = 0;
++	for (i = 0; i < ARRAY_SIZE(exclusive); i++)
++		exclusive_mask |= exclusive[i];
++	for (i = 0; i < ARRAY_SIZE(exclusive); i++) {
++		flag = all_flags & ~exclusive_mask;
++		flag |= exclusive[i];
++
++		ret = seccomp(SECCOMP_SET_MODE_FILTER, flag, NULL);
++		EXPECT_EQ(-1, ret);
++		EXPECT_EQ(EFAULT, errno) {
++			TH_LOG("Failed to detect that all known-good filter flags (0x%X) are supported!",
++			       flag);
++		}
+ 	}
+ 
+-	/* Test detection of an unknown filter flag */
++	/* Test detection of an unknown filter flags, without exclusives. */
+ 	flag = -1;
++	flag &= ~exclusive_mask;
+ 	ret = seccomp(SECCOMP_SET_MODE_FILTER, flag, NULL);
+ 	EXPECT_EQ(-1, ret);
+ 	EXPECT_EQ(EINVAL, errno) {



* [gentoo-commits] proj/linux-patches:5.0 commit in: /
@ 2019-05-10 19:43 Mike Pagano
  0 siblings, 0 replies; 28+ messages in thread
From: Mike Pagano @ 2019-05-10 19:43 UTC (permalink / raw
  To: gentoo-commits

commit:     b29b5859b935dc369fe2e040993cb6d290d04744
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri May 10 19:43:09 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri May 10 19:43:09 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b29b5859

Linux patch 5.0.15

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1014_linux-5.0.15.patch | 6151 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 6155 insertions(+)

diff --git a/0000_README b/0000_README
index b2a5389..0d6cdbe 100644
--- a/0000_README
+++ b/0000_README
@@ -99,6 +99,10 @@ Patch:  1013_linux-5.0.14.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.0.14
 
+Patch:  1014_linux-5.0.15.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.0.15
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1014_linux-5.0.15.patch b/1014_linux-5.0.15.patch
new file mode 100644
index 0000000..9c65e91
--- /dev/null
+++ b/1014_linux-5.0.15.patch
@@ -0,0 +1,6151 @@
+diff --git a/Makefile b/Makefile
+index 5ce29665eeed..11c7f7844507 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 0
+-SUBLEVEL = 14
++SUBLEVEL = 15
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
+index c7e1a7837706..6fb2214333a2 100644
+--- a/arch/arm64/include/asm/futex.h
++++ b/arch/arm64/include/asm/futex.h
+@@ -23,26 +23,34 @@
+ 
+ #include <asm/errno.h>
+ 
++#define FUTEX_MAX_LOOPS	128 /* What's the largest number you can think of? */
++
+ #define __futex_atomic_op(insn, ret, oldval, uaddr, tmp, oparg)		\
+ do {									\
++	unsigned int loops = FUTEX_MAX_LOOPS;				\
++									\
+ 	uaccess_enable();						\
+ 	asm volatile(							\
+ "	prfm	pstl1strm, %2\n"					\
+ "1:	ldxr	%w1, %2\n"						\
+ 	insn "\n"							\
+ "2:	stlxr	%w0, %w3, %2\n"						\
+-"	cbnz	%w0, 1b\n"						\
+-"	dmb	ish\n"							\
++"	cbz	%w0, 3f\n"						\
++"	sub	%w4, %w4, %w0\n"					\
++"	cbnz	%w4, 1b\n"						\
++"	mov	%w0, %w7\n"						\
+ "3:\n"									\
++"	dmb	ish\n"							\
+ "	.pushsection .fixup,\"ax\"\n"					\
+ "	.align	2\n"							\
+-"4:	mov	%w0, %w5\n"						\
++"4:	mov	%w0, %w6\n"						\
+ "	b	3b\n"							\
+ "	.popsection\n"							\
+ 	_ASM_EXTABLE(1b, 4b)						\
+ 	_ASM_EXTABLE(2b, 4b)						\
+-	: "=&r" (ret), "=&r" (oldval), "+Q" (*uaddr), "=&r" (tmp)	\
+-	: "r" (oparg), "Ir" (-EFAULT)					\
++	: "=&r" (ret), "=&r" (oldval), "+Q" (*uaddr), "=&r" (tmp),	\
++	  "+r" (loops)							\
++	: "r" (oparg), "Ir" (-EFAULT), "Ir" (-EAGAIN)			\
+ 	: "memory");							\
+ 	uaccess_disable();						\
+ } while (0)
+@@ -57,23 +65,23 @@ arch_futex_atomic_op_inuser(int op, int oparg, int *oval, u32 __user *_uaddr)
+ 
+ 	switch (op) {
+ 	case FUTEX_OP_SET:
+-		__futex_atomic_op("mov	%w3, %w4",
++		__futex_atomic_op("mov	%w3, %w5",
+ 				  ret, oldval, uaddr, tmp, oparg);
+ 		break;
+ 	case FUTEX_OP_ADD:
+-		__futex_atomic_op("add	%w3, %w1, %w4",
++		__futex_atomic_op("add	%w3, %w1, %w5",
+ 				  ret, oldval, uaddr, tmp, oparg);
+ 		break;
+ 	case FUTEX_OP_OR:
+-		__futex_atomic_op("orr	%w3, %w1, %w4",
++		__futex_atomic_op("orr	%w3, %w1, %w5",
+ 				  ret, oldval, uaddr, tmp, oparg);
+ 		break;
+ 	case FUTEX_OP_ANDN:
+-		__futex_atomic_op("and	%w3, %w1, %w4",
++		__futex_atomic_op("and	%w3, %w1, %w5",
+ 				  ret, oldval, uaddr, tmp, ~oparg);
+ 		break;
+ 	case FUTEX_OP_XOR:
+-		__futex_atomic_op("eor	%w3, %w1, %w4",
++		__futex_atomic_op("eor	%w3, %w1, %w5",
+ 				  ret, oldval, uaddr, tmp, oparg);
+ 		break;
+ 	default:
+@@ -93,6 +101,7 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *_uaddr,
+ 			      u32 oldval, u32 newval)
+ {
+ 	int ret = 0;
++	unsigned int loops = FUTEX_MAX_LOOPS;
+ 	u32 val, tmp;
+ 	u32 __user *uaddr;
+ 
+@@ -104,20 +113,24 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *_uaddr,
+ 	asm volatile("// futex_atomic_cmpxchg_inatomic\n"
+ "	prfm	pstl1strm, %2\n"
+ "1:	ldxr	%w1, %2\n"
+-"	sub	%w3, %w1, %w4\n"
+-"	cbnz	%w3, 3f\n"
+-"2:	stlxr	%w3, %w5, %2\n"
+-"	cbnz	%w3, 1b\n"
+-"	dmb	ish\n"
++"	sub	%w3, %w1, %w5\n"
++"	cbnz	%w3, 4f\n"
++"2:	stlxr	%w3, %w6, %2\n"
++"	cbz	%w3, 3f\n"
++"	sub	%w4, %w4, %w3\n"
++"	cbnz	%w4, 1b\n"
++"	mov	%w0, %w8\n"
+ "3:\n"
++"	dmb	ish\n"
++"4:\n"
+ "	.pushsection .fixup,\"ax\"\n"
+-"4:	mov	%w0, %w6\n"
+-"	b	3b\n"
++"5:	mov	%w0, %w7\n"
++"	b	4b\n"
+ "	.popsection\n"
+-	_ASM_EXTABLE(1b, 4b)
+-	_ASM_EXTABLE(2b, 4b)
+-	: "+r" (ret), "=&r" (val), "+Q" (*uaddr), "=&r" (tmp)
+-	: "r" (oldval), "r" (newval), "Ir" (-EFAULT)
++	_ASM_EXTABLE(1b, 5b)
++	_ASM_EXTABLE(2b, 5b)
++	: "+r" (ret), "=&r" (val), "+Q" (*uaddr), "=&r" (tmp), "+r" (loops)
++	: "r" (oldval), "r" (newval), "Ir" (-EFAULT), "Ir" (-EAGAIN)
+ 	: "memory");
+ 	uaccess_disable();
+ 
+diff --git a/arch/mips/kernel/kgdb.c b/arch/mips/kernel/kgdb.c
+index 149100e1bc7c..90f37626100f 100644
+--- a/arch/mips/kernel/kgdb.c
++++ b/arch/mips/kernel/kgdb.c
+@@ -33,6 +33,7 @@
+ #include <asm/processor.h>
+ #include <asm/sigcontext.h>
+ #include <linux/uaccess.h>
++#include <asm/irq_regs.h>
+ 
+ static struct hard_trap_info {
+ 	unsigned char tt;	/* Trap type code for MIPS R3xxx and R4xxx */
+@@ -214,7 +215,7 @@ void kgdb_call_nmi_hook(void *ignored)
+ 	old_fs = get_fs();
+ 	set_fs(get_ds());
+ 
+-	kgdb_nmicallback(raw_smp_processor_id(), NULL);
++	kgdb_nmicallback(raw_smp_processor_id(), get_irq_regs());
+ 
+ 	set_fs(old_fs);
+ }
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index 470d7daa915d..71fb8b7b2954 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -3184,7 +3184,7 @@ static int intel_pmu_hw_config(struct perf_event *event)
+ 		return ret;
+ 
+ 	if (event->attr.precise_ip) {
+-		if (!event->attr.freq) {
++		if (!(event->attr.freq || event->attr.wakeup_events)) {
+ 			event->hw.flags |= PERF_X86_EVENT_AUTO_RELOAD;
+ 			if (!(event->attr.sample_type &
+ 			      ~intel_pmu_large_pebs_flags(event)))
+@@ -3563,6 +3563,12 @@ static void intel_pmu_cpu_starting(int cpu)
+ 
+ 	cpuc->lbr_sel = NULL;
+ 
++	if (x86_pmu.flags & PMU_FL_TFA) {
++		WARN_ON_ONCE(cpuc->tfa_shadow);
++		cpuc->tfa_shadow = ~0ULL;
++		intel_set_tfa(cpuc, false);
++	}
++
+ 	if (x86_pmu.version > 1)
+ 		flip_smm_bit(&x86_pmu.attr_freeze_on_smi);
+ 
+diff --git a/arch/xtensa/include/asm/processor.h b/arch/xtensa/include/asm/processor.h
+index f7dd895b2353..0c14018d1c26 100644
+--- a/arch/xtensa/include/asm/processor.h
++++ b/arch/xtensa/include/asm/processor.h
+@@ -187,15 +187,18 @@ struct thread_struct {
+ 
+ /* Clearing a0 terminates the backtrace. */
+ #define start_thread(regs, new_pc, new_sp) \
+-	memset(regs, 0, sizeof(*regs)); \
+-	regs->pc = new_pc; \
+-	regs->ps = USER_PS_VALUE; \
+-	regs->areg[1] = new_sp; \
+-	regs->areg[0] = 0; \
+-	regs->wmask = 1; \
+-	regs->depc = 0; \
+-	regs->windowbase = 0; \
+-	regs->windowstart = 1;
++	do { \
++		memset((regs), 0, sizeof(*(regs))); \
++		(regs)->pc = (new_pc); \
++		(regs)->ps = USER_PS_VALUE; \
++		(regs)->areg[1] = (new_sp); \
++		(regs)->areg[0] = 0; \
++		(regs)->wmask = 1; \
++		(regs)->depc = 0; \
++		(regs)->windowbase = 0; \
++		(regs)->windowstart = 1; \
++		(regs)->syscall = NO_SYSCALL; \
++	} while (0)
+ 
+ /* Forward declaration */
+ struct task_struct;
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 5a2585d69c81..6930c82ab75f 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -657,6 +657,13 @@ bool blk_mq_complete_request(struct request *rq)
+ }
+ EXPORT_SYMBOL(blk_mq_complete_request);
+ 
++void blk_mq_complete_request_sync(struct request *rq)
++{
++	WRITE_ONCE(rq->state, MQ_RQ_COMPLETE);
++	rq->q->mq_ops->complete(rq);
++}
++EXPORT_SYMBOL_GPL(blk_mq_complete_request_sync);
++
+ int blk_mq_request_started(struct request *rq)
+ {
+ 	return blk_mq_rq_state(rq) != MQ_RQ_IDLE;
+diff --git a/drivers/acpi/acpi_lpss.c b/drivers/acpi/acpi_lpss.c
+index 5f94c35d165f..a2d8c03b1e24 100644
+--- a/drivers/acpi/acpi_lpss.c
++++ b/drivers/acpi/acpi_lpss.c
+@@ -1142,8 +1142,8 @@ static struct dev_pm_domain acpi_lpss_pm_domain = {
+ 		.thaw_noirq = acpi_subsys_thaw_noirq,
+ 		.poweroff = acpi_subsys_suspend,
+ 		.poweroff_late = acpi_lpss_suspend_late,
+-		.poweroff_noirq = acpi_subsys_suspend_noirq,
+-		.restore_noirq = acpi_subsys_resume_noirq,
++		.poweroff_noirq = acpi_lpss_suspend_noirq,
++		.restore_noirq = acpi_lpss_resume_noirq,
+ 		.restore_early = acpi_lpss_resume_early,
+ #endif
+ 		.runtime_suspend = acpi_lpss_runtime_suspend,
+diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
+index b16a887bbd02..29bede887237 100644
+--- a/drivers/block/virtio_blk.c
++++ b/drivers/block/virtio_blk.c
+@@ -513,6 +513,8 @@ static int init_vq(struct virtio_blk *vblk)
+ 	if (err)
+ 		num_vqs = 1;
+ 
++	num_vqs = min_t(unsigned int, nr_cpu_ids, num_vqs);
++
+ 	vblk->vqs = kmalloc_array(num_vqs, sizeof(*vblk->vqs), GFP_KERNEL);
+ 	if (!vblk->vqs)
+ 		return -ENOMEM;
+diff --git a/drivers/bluetooth/hci_bcm.c b/drivers/bluetooth/hci_bcm.c
+index ddbe518c3e5b..b5d31d583d60 100644
+--- a/drivers/bluetooth/hci_bcm.c
++++ b/drivers/bluetooth/hci_bcm.c
+@@ -228,9 +228,15 @@ static int bcm_gpio_set_power(struct bcm_device *dev, bool powered)
+ 	int err;
+ 
+ 	if (powered && !dev->res_enabled) {
+-		err = regulator_bulk_enable(BCM_NUM_SUPPLIES, dev->supplies);
+-		if (err)
+-			return err;
++		/* Intel Macs use bcm_apple_get_resources() and don't
++		 * have regulator supplies configured.
++		 */
++		if (dev->supplies[0].supply) {
++			err = regulator_bulk_enable(BCM_NUM_SUPPLIES,
++						    dev->supplies);
++			if (err)
++				return err;
++		}
+ 
+ 		/* LPO clock needs to be 32.768 kHz */
+ 		err = clk_set_rate(dev->lpo_clk, 32768);
+@@ -259,7 +265,13 @@ static int bcm_gpio_set_power(struct bcm_device *dev, bool powered)
+ 	if (!powered && dev->res_enabled) {
+ 		clk_disable_unprepare(dev->txco_clk);
+ 		clk_disable_unprepare(dev->lpo_clk);
+-		regulator_bulk_disable(BCM_NUM_SUPPLIES, dev->supplies);
++
++		/* Intel Macs use bcm_apple_get_resources() and don't
++		 * have regulator supplies configured.
++		 */
++		if (dev->supplies[0].supply)
++			regulator_bulk_disable(BCM_NUM_SUPPLIES,
++					       dev->supplies);
+ 	}
+ 
+ 	/* wait for device to power on and come out of reset */
+diff --git a/drivers/clk/meson/gxbb.c b/drivers/clk/meson/gxbb.c
+index 65f2599e5243..08824b2cd142 100644
+--- a/drivers/clk/meson/gxbb.c
++++ b/drivers/clk/meson/gxbb.c
+@@ -2213,6 +2213,7 @@ static struct clk_regmap gxbb_vdec_1_div = {
+ 		.offset = HHI_VDEC_CLK_CNTL,
+ 		.shift = 0,
+ 		.width = 7,
++		.flags = CLK_DIVIDER_ROUND_CLOSEST,
+ 	},
+ 	.hw.init = &(struct clk_init_data){
+ 		.name = "vdec_1_div",
+@@ -2258,6 +2259,7 @@ static struct clk_regmap gxbb_vdec_hevc_div = {
+ 		.offset = HHI_VDEC2_CLK_CNTL,
+ 		.shift = 16,
+ 		.width = 7,
++		.flags = CLK_DIVIDER_ROUND_CLOSEST,
+ 	},
+ 	.hw.init = &(struct clk_init_data){
+ 		.name = "vdec_hevc_div",
+diff --git a/drivers/cpufreq/armada-37xx-cpufreq.c b/drivers/cpufreq/armada-37xx-cpufreq.c
+index 75491fc841a6..0df16eb1eb3c 100644
+--- a/drivers/cpufreq/armada-37xx-cpufreq.c
++++ b/drivers/cpufreq/armada-37xx-cpufreq.c
+@@ -359,11 +359,11 @@ static int __init armada37xx_cpufreq_driver_init(void)
+ 	struct armada_37xx_dvfs *dvfs;
+ 	struct platform_device *pdev;
+ 	unsigned long freq;
+-	unsigned int cur_frequency;
++	unsigned int cur_frequency, base_frequency;
+ 	struct regmap *nb_pm_base, *avs_base;
+ 	struct device *cpu_dev;
+ 	int load_lvl, ret;
+-	struct clk *clk;
++	struct clk *clk, *parent;
+ 
+ 	nb_pm_base =
+ 		syscon_regmap_lookup_by_compatible("marvell,armada-3700-nb-pm");
+@@ -399,6 +399,22 @@ static int __init armada37xx_cpufreq_driver_init(void)
+ 		return PTR_ERR(clk);
+ 	}
+ 
++	parent = clk_get_parent(clk);
++	if (IS_ERR(parent)) {
++		dev_err(cpu_dev, "Cannot get parent clock for CPU0\n");
++		clk_put(clk);
++		return PTR_ERR(parent);
++	}
++
++	/* Get parent CPU frequency */
++	base_frequency =  clk_get_rate(parent);
++
++	if (!base_frequency) {
++		dev_err(cpu_dev, "Failed to get parent clock rate for CPU\n");
++		clk_put(clk);
++		return -EINVAL;
++	}
++
+ 	/* Get nominal (current) CPU frequency */
+ 	cur_frequency = clk_get_rate(clk);
+ 	if (!cur_frequency) {
+@@ -431,7 +447,7 @@ static int __init armada37xx_cpufreq_driver_init(void)
+ 	for (load_lvl = ARMADA_37XX_DVFS_LOAD_0; load_lvl < LOAD_LEVEL_NR;
+ 	     load_lvl++) {
+ 		unsigned long u_volt = avs_map[dvfs->avs[load_lvl]] * 1000;
+-		freq = cur_frequency / dvfs->divider[load_lvl];
++		freq = base_frequency / dvfs->divider[load_lvl];
+ 		ret = dev_pm_opp_add(cpu_dev, freq, u_volt);
+ 		if (ret)
+ 			goto remove_opp;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 7ff3a28fc903..d55dd570a702 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -3158,11 +3158,16 @@ static int amdgpu_device_recover_vram(struct amdgpu_device *adev)
+ 			break;
+ 
+ 		if (fence) {
+-			r = dma_fence_wait_timeout(fence, false, tmo);
++			tmo = dma_fence_wait_timeout(fence, false, tmo);
+ 			dma_fence_put(fence);
+ 			fence = next;
+-			if (r <= 0)
++			if (tmo == 0) {
++				r = -ETIMEDOUT;
+ 				break;
++			} else if (tmo < 0) {
++				r = tmo;
++				break;
++			}
+ 		} else {
+ 			fence = next;
+ 		}
+@@ -3173,8 +3178,8 @@ static int amdgpu_device_recover_vram(struct amdgpu_device *adev)
+ 		tmo = dma_fence_wait_timeout(fence, false, tmo);
+ 	dma_fence_put(fence);
+ 
+-	if (r <= 0 || tmo <= 0) {
+-		DRM_ERROR("recover vram bo from shadow failed\n");
++	if (r < 0 || tmo <= 0) {
++		DRM_ERROR("recover vram bo from shadow failed, r is %ld, tmo is %ld\n", r, tmo);
+ 		return -EIO;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+index c48207b377bc..b82c5fca217b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+@@ -35,6 +35,7 @@
+ #include "amdgpu_trace.h"
+ 
+ #define AMDGPU_IB_TEST_TIMEOUT	msecs_to_jiffies(1000)
++#define AMDGPU_IB_TEST_GFX_XGMI_TIMEOUT	msecs_to_jiffies(2000)
+ 
+ /*
+  * IB
+@@ -344,6 +345,8 @@ int amdgpu_ib_ring_tests(struct amdgpu_device *adev)
+ 		 * cost waiting for it coming back under RUNTIME only
+ 		*/
+ 		tmo_gfx = 8 * AMDGPU_IB_TEST_TIMEOUT;
++	} else if (adev->gmc.xgmi.hive_id) {
++		tmo_gfx = AMDGPU_IB_TEST_GFX_XGMI_TIMEOUT;
+ 	}
+ 
+ 	for (i = 0; i < adev->num_rings; ++i) {
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+index 8be9677c0c07..cf9a49f49d3a 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+@@ -320,6 +320,7 @@ static const struct kfd_deviceid supported_devices[] = {
+ 	{ 0x9876, &carrizo_device_info },	/* Carrizo */
+ 	{ 0x9877, &carrizo_device_info },	/* Carrizo */
+ 	{ 0x15DD, &raven_device_info },		/* Raven */
++	{ 0x15D8, &raven_device_info },		/* Raven */
+ #endif
+ 	{ 0x67A0, &hawaii_device_info },	/* Hawaii */
+ 	{ 0x67A1, &hawaii_device_info },	/* Hawaii */
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 83c8a0407537..84ee77786944 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -4455,6 +4455,7 @@ static void handle_cursor_update(struct drm_plane *plane,
+ 	amdgpu_crtc->cursor_width = plane->state->crtc_w;
+ 	amdgpu_crtc->cursor_height = plane->state->crtc_h;
+ 
++	memset(&attributes, 0, sizeof(attributes));
+ 	attributes.address.high_part = upper_32_bits(address);
+ 	attributes.address.low_part  = lower_32_bits(address);
+ 	attributes.width             = plane->state->crtc_w;
+diff --git a/drivers/gpu/drm/mediatek/mtk_dpi.c b/drivers/gpu/drm/mediatek/mtk_dpi.c
+index 62a9d47df948..9160c55769f8 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dpi.c
++++ b/drivers/gpu/drm/mediatek/mtk_dpi.c
+@@ -662,13 +662,11 @@ static unsigned int mt8173_calculate_factor(int clock)
+ static unsigned int mt2701_calculate_factor(int clock)
+ {
+ 	if (clock <= 64000)
+-		return 16;
+-	else if (clock <= 128000)
+-		return 8;
+-	else if (clock <= 256000)
+ 		return 4;
+-	else
++	else if (clock <= 128000)
+ 		return 2;
++	else
++		return 1;
+ }
+ 
+ static const struct mtk_dpi_conf mt8173_conf = {
+diff --git a/drivers/gpu/drm/mediatek/mtk_hdmi.c b/drivers/gpu/drm/mediatek/mtk_hdmi.c
+index 862f3ec22131..a687fe3e1d6c 100644
+--- a/drivers/gpu/drm/mediatek/mtk_hdmi.c
++++ b/drivers/gpu/drm/mediatek/mtk_hdmi.c
+@@ -1479,7 +1479,6 @@ static int mtk_hdmi_dt_parse_pdata(struct mtk_hdmi *hdmi,
+ 	if (IS_ERR(regmap))
+ 		ret = PTR_ERR(regmap);
+ 	if (ret) {
+-		ret = PTR_ERR(regmap);
+ 		dev_err(dev,
+ 			"Failed to get system configuration registers: %d\n",
+ 			ret);
+@@ -1515,6 +1514,7 @@ static int mtk_hdmi_dt_parse_pdata(struct mtk_hdmi *hdmi,
+ 	of_node_put(remote);
+ 
+ 	hdmi->ddc_adpt = of_find_i2c_adapter_by_node(i2c_np);
++	of_node_put(i2c_np);
+ 	if (!hdmi->ddc_adpt) {
+ 		dev_err(dev, "Failed to get ddc i2c adapter by node\n");
+ 		return -EINVAL;
+diff --git a/drivers/gpu/drm/mediatek/mtk_hdmi_phy.c b/drivers/gpu/drm/mediatek/mtk_hdmi_phy.c
+index 4ef9c57ffd44..5223498502c4 100644
+--- a/drivers/gpu/drm/mediatek/mtk_hdmi_phy.c
++++ b/drivers/gpu/drm/mediatek/mtk_hdmi_phy.c
+@@ -15,28 +15,6 @@ static const struct phy_ops mtk_hdmi_phy_dev_ops = {
+ 	.owner = THIS_MODULE,
+ };
+ 
+-long mtk_hdmi_pll_round_rate(struct clk_hw *hw, unsigned long rate,
+-			     unsigned long *parent_rate)
+-{
+-	struct mtk_hdmi_phy *hdmi_phy = to_mtk_hdmi_phy(hw);
+-
+-	hdmi_phy->pll_rate = rate;
+-	if (rate <= 74250000)
+-		*parent_rate = rate;
+-	else
+-		*parent_rate = rate / 2;
+-
+-	return rate;
+-}
+-
+-unsigned long mtk_hdmi_pll_recalc_rate(struct clk_hw *hw,
+-				       unsigned long parent_rate)
+-{
+-	struct mtk_hdmi_phy *hdmi_phy = to_mtk_hdmi_phy(hw);
+-
+-	return hdmi_phy->pll_rate;
+-}
+-
+ void mtk_hdmi_phy_clear_bits(struct mtk_hdmi_phy *hdmi_phy, u32 offset,
+ 			     u32 bits)
+ {
+@@ -110,13 +88,11 @@ mtk_hdmi_phy_dev_get_ops(const struct mtk_hdmi_phy *hdmi_phy)
+ 		return NULL;
+ }
+ 
+-static void mtk_hdmi_phy_clk_get_ops(struct mtk_hdmi_phy *hdmi_phy,
+-				     const struct clk_ops **ops)
++static void mtk_hdmi_phy_clk_get_data(struct mtk_hdmi_phy *hdmi_phy,
++				      struct clk_init_data *clk_init)
+ {
+-	if (hdmi_phy && hdmi_phy->conf && hdmi_phy->conf->hdmi_phy_clk_ops)
+-		*ops = hdmi_phy->conf->hdmi_phy_clk_ops;
+-	else
+-		dev_err(hdmi_phy->dev, "Failed to get clk ops of phy\n");
++	clk_init->flags = hdmi_phy->conf->flags;
++	clk_init->ops = hdmi_phy->conf->hdmi_phy_clk_ops;
+ }
+ 
+ static int mtk_hdmi_phy_probe(struct platform_device *pdev)
+@@ -129,7 +105,6 @@ static int mtk_hdmi_phy_probe(struct platform_device *pdev)
+ 	struct clk_init_data clk_init = {
+ 		.num_parents = 1,
+ 		.parent_names = (const char * const *)&ref_clk_name,
+-		.flags = CLK_SET_RATE_PARENT | CLK_SET_RATE_GATE,
+ 	};
+ 
+ 	struct phy *phy;
+@@ -167,7 +142,7 @@ static int mtk_hdmi_phy_probe(struct platform_device *pdev)
+ 	hdmi_phy->dev = dev;
+ 	hdmi_phy->conf =
+ 		(struct mtk_hdmi_phy_conf *)of_device_get_match_data(dev);
+-	mtk_hdmi_phy_clk_get_ops(hdmi_phy, &clk_init.ops);
++	mtk_hdmi_phy_clk_get_data(hdmi_phy, &clk_init);
+ 	hdmi_phy->pll_hw.init = &clk_init;
+ 	hdmi_phy->pll = devm_clk_register(dev, &hdmi_phy->pll_hw);
+ 	if (IS_ERR(hdmi_phy->pll)) {
+diff --git a/drivers/gpu/drm/mediatek/mtk_hdmi_phy.h b/drivers/gpu/drm/mediatek/mtk_hdmi_phy.h
+index f39b1fc66612..2d8b3182470d 100644
+--- a/drivers/gpu/drm/mediatek/mtk_hdmi_phy.h
++++ b/drivers/gpu/drm/mediatek/mtk_hdmi_phy.h
+@@ -21,6 +21,7 @@ struct mtk_hdmi_phy;
+ 
+ struct mtk_hdmi_phy_conf {
+ 	bool tz_disabled;
++	unsigned long flags;
+ 	const struct clk_ops *hdmi_phy_clk_ops;
+ 	void (*hdmi_phy_enable_tmds)(struct mtk_hdmi_phy *hdmi_phy);
+ 	void (*hdmi_phy_disable_tmds)(struct mtk_hdmi_phy *hdmi_phy);
+@@ -48,10 +49,6 @@ void mtk_hdmi_phy_set_bits(struct mtk_hdmi_phy *hdmi_phy, u32 offset,
+ void mtk_hdmi_phy_mask(struct mtk_hdmi_phy *hdmi_phy, u32 offset,
+ 		       u32 val, u32 mask);
+ struct mtk_hdmi_phy *to_mtk_hdmi_phy(struct clk_hw *hw);
+-long mtk_hdmi_pll_round_rate(struct clk_hw *hw, unsigned long rate,
+-			     unsigned long *parent_rate);
+-unsigned long mtk_hdmi_pll_recalc_rate(struct clk_hw *hw,
+-				       unsigned long parent_rate);
+ 
+ extern struct platform_driver mtk_hdmi_phy_driver;
+ extern struct mtk_hdmi_phy_conf mtk_hdmi_phy_8173_conf;
+diff --git a/drivers/gpu/drm/mediatek/mtk_mt2701_hdmi_phy.c b/drivers/gpu/drm/mediatek/mtk_mt2701_hdmi_phy.c
+index fcc42dc6ea7f..d3cc4022e988 100644
+--- a/drivers/gpu/drm/mediatek/mtk_mt2701_hdmi_phy.c
++++ b/drivers/gpu/drm/mediatek/mtk_mt2701_hdmi_phy.c
+@@ -79,7 +79,6 @@ static int mtk_hdmi_pll_prepare(struct clk_hw *hw)
+ 	mtk_hdmi_phy_set_bits(hdmi_phy, HDMI_CON0, RG_HDMITX_EN_SLDO_MASK);
+ 	usleep_range(80, 100);
+ 	mtk_hdmi_phy_set_bits(hdmi_phy, HDMI_CON2, RG_HDMITX_MBIAS_LPF_EN);
+-	mtk_hdmi_phy_set_bits(hdmi_phy, HDMI_CON2, RG_HDMITX_EN_TX_POSDIV);
+ 	mtk_hdmi_phy_set_bits(hdmi_phy, HDMI_CON0, RG_HDMITX_EN_SER_MASK);
+ 	mtk_hdmi_phy_set_bits(hdmi_phy, HDMI_CON0, RG_HDMITX_EN_PRED_MASK);
+ 	mtk_hdmi_phy_set_bits(hdmi_phy, HDMI_CON0, RG_HDMITX_EN_DRV_MASK);
+@@ -94,7 +93,6 @@ static void mtk_hdmi_pll_unprepare(struct clk_hw *hw)
+ 	mtk_hdmi_phy_clear_bits(hdmi_phy, HDMI_CON0, RG_HDMITX_EN_DRV_MASK);
+ 	mtk_hdmi_phy_clear_bits(hdmi_phy, HDMI_CON0, RG_HDMITX_EN_PRED_MASK);
+ 	mtk_hdmi_phy_clear_bits(hdmi_phy, HDMI_CON0, RG_HDMITX_EN_SER_MASK);
+-	mtk_hdmi_phy_clear_bits(hdmi_phy, HDMI_CON2, RG_HDMITX_EN_TX_POSDIV);
+ 	mtk_hdmi_phy_clear_bits(hdmi_phy, HDMI_CON2, RG_HDMITX_MBIAS_LPF_EN);
+ 	usleep_range(80, 100);
+ 	mtk_hdmi_phy_clear_bits(hdmi_phy, HDMI_CON0, RG_HDMITX_EN_SLDO_MASK);
+@@ -108,6 +106,12 @@ static void mtk_hdmi_pll_unprepare(struct clk_hw *hw)
+ 	usleep_range(80, 100);
+ }
+ 
++static long mtk_hdmi_pll_round_rate(struct clk_hw *hw, unsigned long rate,
++				    unsigned long *parent_rate)
++{
++	return rate;
++}
++
+ static int mtk_hdmi_pll_set_rate(struct clk_hw *hw, unsigned long rate,
+ 				 unsigned long parent_rate)
+ {
+@@ -116,13 +120,14 @@ static int mtk_hdmi_pll_set_rate(struct clk_hw *hw, unsigned long rate,
+ 
+ 	if (rate <= 64000000)
+ 		pos_div = 3;
+-	else if (rate <= 12800000)
+-		pos_div = 1;
++	else if (rate <= 128000000)
++		pos_div = 2;
+ 	else
+ 		pos_div = 1;
+ 
+ 	mtk_hdmi_phy_set_bits(hdmi_phy, HDMI_CON6, RG_HTPLL_PREDIV_MASK);
+ 	mtk_hdmi_phy_set_bits(hdmi_phy, HDMI_CON6, RG_HTPLL_POSDIV_MASK);
++	mtk_hdmi_phy_set_bits(hdmi_phy, HDMI_CON2, RG_HDMITX_EN_TX_POSDIV);
+ 	mtk_hdmi_phy_mask(hdmi_phy, HDMI_CON6, (0x1 << RG_HTPLL_IC),
+ 			  RG_HTPLL_IC_MASK);
+ 	mtk_hdmi_phy_mask(hdmi_phy, HDMI_CON6, (0x1 << RG_HTPLL_IR),
+@@ -154,6 +159,39 @@ static int mtk_hdmi_pll_set_rate(struct clk_hw *hw, unsigned long rate,
+ 	return 0;
+ }
+ 
++static unsigned long mtk_hdmi_pll_recalc_rate(struct clk_hw *hw,
++					      unsigned long parent_rate)
++{
++	struct mtk_hdmi_phy *hdmi_phy = to_mtk_hdmi_phy(hw);
++	unsigned long out_rate, val;
++
++	val = (readl(hdmi_phy->regs + HDMI_CON6)
++	       & RG_HTPLL_PREDIV_MASK) >> RG_HTPLL_PREDIV;
++	switch (val) {
++	case 0x00:
++		out_rate = parent_rate;
++		break;
++	case 0x01:
++		out_rate = parent_rate / 2;
++		break;
++	default:
++		out_rate = parent_rate / 4;
++		break;
++	}
++
++	val = (readl(hdmi_phy->regs + HDMI_CON6)
++	       & RG_HTPLL_FBKDIV_MASK) >> RG_HTPLL_FBKDIV;
++	out_rate *= (val + 1) * 2;
++	val = (readl(hdmi_phy->regs + HDMI_CON2)
++	       & RG_HDMITX_TX_POSDIV_MASK);
++	out_rate >>= (val >> RG_HDMITX_TX_POSDIV);
++
++	if (readl(hdmi_phy->regs + HDMI_CON2) & RG_HDMITX_EN_TX_POSDIV)
++		out_rate /= 5;
++
++	return out_rate;
++}
++
+ static const struct clk_ops mtk_hdmi_phy_pll_ops = {
+ 	.prepare = mtk_hdmi_pll_prepare,
+ 	.unprepare = mtk_hdmi_pll_unprepare,
+@@ -174,7 +212,6 @@ static void mtk_hdmi_phy_enable_tmds(struct mtk_hdmi_phy *hdmi_phy)
+ 	mtk_hdmi_phy_set_bits(hdmi_phy, HDMI_CON0, RG_HDMITX_EN_SLDO_MASK);
+ 	usleep_range(80, 100);
+ 	mtk_hdmi_phy_set_bits(hdmi_phy, HDMI_CON2, RG_HDMITX_MBIAS_LPF_EN);
+-	mtk_hdmi_phy_set_bits(hdmi_phy, HDMI_CON2, RG_HDMITX_EN_TX_POSDIV);
+ 	mtk_hdmi_phy_set_bits(hdmi_phy, HDMI_CON0, RG_HDMITX_EN_SER_MASK);
+ 	mtk_hdmi_phy_set_bits(hdmi_phy, HDMI_CON0, RG_HDMITX_EN_PRED_MASK);
+ 	mtk_hdmi_phy_set_bits(hdmi_phy, HDMI_CON0, RG_HDMITX_EN_DRV_MASK);
+@@ -186,7 +223,6 @@ static void mtk_hdmi_phy_disable_tmds(struct mtk_hdmi_phy *hdmi_phy)
+ 	mtk_hdmi_phy_clear_bits(hdmi_phy, HDMI_CON0, RG_HDMITX_EN_DRV_MASK);
+ 	mtk_hdmi_phy_clear_bits(hdmi_phy, HDMI_CON0, RG_HDMITX_EN_PRED_MASK);
+ 	mtk_hdmi_phy_clear_bits(hdmi_phy, HDMI_CON0, RG_HDMITX_EN_SER_MASK);
+-	mtk_hdmi_phy_clear_bits(hdmi_phy, HDMI_CON2, RG_HDMITX_EN_TX_POSDIV);
+ 	mtk_hdmi_phy_clear_bits(hdmi_phy, HDMI_CON2, RG_HDMITX_MBIAS_LPF_EN);
+ 	usleep_range(80, 100);
+ 	mtk_hdmi_phy_clear_bits(hdmi_phy, HDMI_CON0, RG_HDMITX_EN_SLDO_MASK);
+@@ -202,6 +238,7 @@ static void mtk_hdmi_phy_disable_tmds(struct mtk_hdmi_phy *hdmi_phy)
+ 
+ struct mtk_hdmi_phy_conf mtk_hdmi_phy_2701_conf = {
+ 	.tz_disabled = true,
++	.flags = CLK_SET_RATE_GATE,
+ 	.hdmi_phy_clk_ops = &mtk_hdmi_phy_pll_ops,
+ 	.hdmi_phy_enable_tmds = mtk_hdmi_phy_enable_tmds,
+ 	.hdmi_phy_disable_tmds = mtk_hdmi_phy_disable_tmds,
+diff --git a/drivers/gpu/drm/mediatek/mtk_mt8173_hdmi_phy.c b/drivers/gpu/drm/mediatek/mtk_mt8173_hdmi_phy.c
+index ed5916b27658..47f8a2951682 100644
+--- a/drivers/gpu/drm/mediatek/mtk_mt8173_hdmi_phy.c
++++ b/drivers/gpu/drm/mediatek/mtk_mt8173_hdmi_phy.c
+@@ -199,6 +199,20 @@ static void mtk_hdmi_pll_unprepare(struct clk_hw *hw)
+ 	usleep_range(100, 150);
+ }
+ 
++static long mtk_hdmi_pll_round_rate(struct clk_hw *hw, unsigned long rate,
++				    unsigned long *parent_rate)
++{
++	struct mtk_hdmi_phy *hdmi_phy = to_mtk_hdmi_phy(hw);
++
++	hdmi_phy->pll_rate = rate;
++	if (rate <= 74250000)
++		*parent_rate = rate;
++	else
++		*parent_rate = rate / 2;
++
++	return rate;
++}
++
+ static int mtk_hdmi_pll_set_rate(struct clk_hw *hw, unsigned long rate,
+ 				 unsigned long parent_rate)
+ {
+@@ -285,6 +299,14 @@ static int mtk_hdmi_pll_set_rate(struct clk_hw *hw, unsigned long rate,
+ 	return 0;
+ }
+ 
++static unsigned long mtk_hdmi_pll_recalc_rate(struct clk_hw *hw,
++					      unsigned long parent_rate)
++{
++	struct mtk_hdmi_phy *hdmi_phy = to_mtk_hdmi_phy(hw);
++
++	return hdmi_phy->pll_rate;
++}
++
+ static const struct clk_ops mtk_hdmi_phy_pll_ops = {
+ 	.prepare = mtk_hdmi_pll_prepare,
+ 	.unprepare = mtk_hdmi_pll_unprepare,
+@@ -309,6 +331,7 @@ static void mtk_hdmi_phy_disable_tmds(struct mtk_hdmi_phy *hdmi_phy)
+ }
+ 
+ struct mtk_hdmi_phy_conf mtk_hdmi_phy_8173_conf = {
++	.flags = CLK_SET_RATE_PARENT | CLK_SET_RATE_GATE,
+ 	.hdmi_phy_clk_ops = &mtk_hdmi_phy_pll_ops,
+ 	.hdmi_phy_enable_tmds = mtk_hdmi_phy_enable_tmds,
+ 	.hdmi_phy_disable_tmds = mtk_hdmi_phy_disable_tmds,
+diff --git a/drivers/gpu/drm/omapdrm/dss/hdmi4_cec.c b/drivers/gpu/drm/omapdrm/dss/hdmi4_cec.c
+index 340383150fb9..ebf9c96d43ee 100644
+--- a/drivers/gpu/drm/omapdrm/dss/hdmi4_cec.c
++++ b/drivers/gpu/drm/omapdrm/dss/hdmi4_cec.c
+@@ -175,6 +175,7 @@ static int hdmi_cec_adap_enable(struct cec_adapter *adap, bool enable)
+ 		REG_FLD_MOD(core->base, HDMI_CORE_SYS_INTR_UNMASK4, 0, 3, 3);
+ 		hdmi_wp_clear_irqenable(core->wp, HDMI_IRQ_CORE);
+ 		hdmi_wp_set_irqstatus(core->wp, HDMI_IRQ_CORE);
++		REG_FLD_MOD(core->wp->base, HDMI_WP_CLK, 0, 5, 0);
+ 		hdmi4_core_disable(core);
+ 		return 0;
+ 	}
+@@ -182,16 +183,24 @@ static int hdmi_cec_adap_enable(struct cec_adapter *adap, bool enable)
+ 	if (err)
+ 		return err;
+ 
++	/*
++	 * Initialize CEC clock divider: CEC needs 2MHz clock hence
++	 * set the divider to 24 to get 48/24=2MHz clock
++	 */
++	REG_FLD_MOD(core->wp->base, HDMI_WP_CLK, 0x18, 5, 0);
++
+ 	/* Clear TX FIFO */
+ 	if (!hdmi_cec_clear_tx_fifo(adap)) {
+ 		pr_err("cec-%s: could not clear TX FIFO\n", adap->name);
+-		return -EIO;
++		err = -EIO;
++		goto err_disable_clk;
+ 	}
+ 
+ 	/* Clear RX FIFO */
+ 	if (!hdmi_cec_clear_rx_fifo(adap)) {
+ 		pr_err("cec-%s: could not clear RX FIFO\n", adap->name);
+-		return -EIO;
++		err = -EIO;
++		goto err_disable_clk;
+ 	}
+ 
+ 	/* Clear CEC interrupts */
+@@ -236,6 +245,12 @@ static int hdmi_cec_adap_enable(struct cec_adapter *adap, bool enable)
+ 		hdmi_write_reg(core->base, HDMI_CEC_INT_STATUS_1, temp);
+ 	}
+ 	return 0;
++
++err_disable_clk:
++	REG_FLD_MOD(core->wp->base, HDMI_WP_CLK, 0, 5, 0);
++	hdmi4_core_disable(core);
++
++	return err;
+ }
+ 
+ static int hdmi_cec_adap_log_addr(struct cec_adapter *adap, u8 log_addr)
+@@ -333,11 +348,8 @@ int hdmi4_cec_init(struct platform_device *pdev, struct hdmi_core_data *core,
+ 		return ret;
+ 	core->wp = wp;
+ 
+-	/*
+-	 * Initialize CEC clock divider: CEC needs 2MHz clock hence
+-	 * set the devider to 24 to get 48/24=2MHz clock
+-	 */
+-	REG_FLD_MOD(core->wp->base, HDMI_WP_CLK, 0x18, 5, 0);
++	/* Disable clock initially, hdmi_cec_adap_enable() manages it */
++	REG_FLD_MOD(core->wp->base, HDMI_WP_CLK, 0, 5, 0);
+ 
+ 	ret = cec_register_adapter(core->adap, &pdev->dev);
+ 	if (ret < 0) {
+diff --git a/drivers/gpu/drm/sun4i/sun8i_tcon_top.c b/drivers/gpu/drm/sun4i/sun8i_tcon_top.c
+index fc36e0c10a37..b1e7c76e9c17 100644
+--- a/drivers/gpu/drm/sun4i/sun8i_tcon_top.c
++++ b/drivers/gpu/drm/sun4i/sun8i_tcon_top.c
+@@ -227,7 +227,7 @@ static int sun8i_tcon_top_bind(struct device *dev, struct device *master,
+ 
+ err_unregister_gates:
+ 	for (i = 0; i < CLK_NUM; i++)
+-		if (clk_data->hws[i])
++		if (!IS_ERR_OR_NULL(clk_data->hws[i]))
+ 			clk_hw_unregister_gate(clk_data->hws[i]);
+ 	clk_disable_unprepare(tcon_top->bus);
+ err_assert_reset:
+@@ -245,7 +245,8 @@ static void sun8i_tcon_top_unbind(struct device *dev, struct device *master,
+ 
+ 	of_clk_del_provider(dev->of_node);
+ 	for (i = 0; i < CLK_NUM; i++)
+-		clk_hw_unregister_gate(clk_data->hws[i]);
++		if (clk_data->hws[i])
++			clk_hw_unregister_gate(clk_data->hws[i]);
+ 
+ 	clk_disable_unprepare(tcon_top->bus);
+ 	reset_control_assert(tcon_top->rst);
+diff --git a/drivers/hv/hv.c b/drivers/hv/hv.c
+index 632d25674e7f..45653029ee18 100644
+--- a/drivers/hv/hv.c
++++ b/drivers/hv/hv.c
+@@ -408,7 +408,6 @@ int hv_synic_cleanup(unsigned int cpu)
+ 
+ 		clockevents_unbind_device(hv_cpu->clk_evt, cpu);
+ 		hv_ce_shutdown(hv_cpu->clk_evt);
+-		put_cpu_ptr(hv_cpu);
+ 	}
+ 
+ 	hv_get_synint_state(VMBUS_MESSAGE_SINT, shared_sint.as_uint64);
+diff --git a/drivers/hwtracing/intel_th/pci.c b/drivers/hwtracing/intel_th/pci.c
+index 1cf6290d6435..70f2cb90adc5 100644
+--- a/drivers/hwtracing/intel_th/pci.c
++++ b/drivers/hwtracing/intel_th/pci.c
+@@ -165,6 +165,11 @@ static const struct pci_device_id intel_th_pci_id_table[] = {
+ 		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x34a6),
+ 		.driver_data = (kernel_ulong_t)&intel_th_2x,
+ 	},
++	{
++		/* Comet Lake */
++		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x02a6),
++		.driver_data = (kernel_ulong_t)&intel_th_2x,
++	},
+ 	{ 0 },
+ };
+ 
+diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c
+index 1412abcff010..5f4bd52121fe 100644
+--- a/drivers/i3c/master.c
++++ b/drivers/i3c/master.c
+@@ -385,8 +385,9 @@ static void i3c_bus_set_addr_slot_status(struct i3c_bus *bus, u16 addr,
+ 		return;
+ 
+ 	ptr = bus->addrslots + (bitpos / BITS_PER_LONG);
+-	*ptr &= ~(I3C_ADDR_SLOT_STATUS_MASK << (bitpos % BITS_PER_LONG));
+-	*ptr |= status << (bitpos % BITS_PER_LONG);
++	*ptr &= ~((unsigned long)I3C_ADDR_SLOT_STATUS_MASK <<
++						(bitpos % BITS_PER_LONG));
++	*ptr |= (unsigned long)status << (bitpos % BITS_PER_LONG);
+ }
+ 
+ static bool i3c_bus_dev_addr_is_avail(struct i3c_bus *bus, u8 addr)
+diff --git a/drivers/iio/adc/qcom-spmi-adc5.c b/drivers/iio/adc/qcom-spmi-adc5.c
+index 6a866cc187f7..21fdcde77883 100644
+--- a/drivers/iio/adc/qcom-spmi-adc5.c
++++ b/drivers/iio/adc/qcom-spmi-adc5.c
+@@ -664,6 +664,7 @@ static const struct of_device_id adc5_match_table[] = {
+ 	},
+ 	{ }
+ };
++MODULE_DEVICE_TABLE(of, adc5_match_table);
+ 
+ static int adc5_get_dt_data(struct adc5_chip *adc, struct device_node *node)
+ {
+diff --git a/drivers/infiniband/hw/hfi1/chip.c b/drivers/infiniband/hw/hfi1/chip.c
+index b443642eac02..0ae05e9249b3 100644
+--- a/drivers/infiniband/hw/hfi1/chip.c
++++ b/drivers/infiniband/hw/hfi1/chip.c
+@@ -13219,7 +13219,7 @@ static int set_up_context_variables(struct hfi1_devdata *dd)
+ 	int total_contexts;
+ 	int ret;
+ 	unsigned ngroups;
+-	int qos_rmt_count;
++	int rmt_count;
+ 	int user_rmt_reduced;
+ 	u32 n_usr_ctxts;
+ 	u32 send_contexts = chip_send_contexts(dd);
+@@ -13281,10 +13281,20 @@ static int set_up_context_variables(struct hfi1_devdata *dd)
+ 		n_usr_ctxts = rcv_contexts - total_contexts;
+ 	}
+ 
+-	/* each user context requires an entry in the RMT */
+-	qos_rmt_count = qos_rmt_entries(dd, NULL, NULL);
+-	if (qos_rmt_count + n_usr_ctxts > NUM_MAP_ENTRIES) {
+-		user_rmt_reduced = NUM_MAP_ENTRIES - qos_rmt_count;
++	/*
++	 * The RMT entries are currently allocated as shown below:
++	 * 1. QOS (0 to 128 entries);
++	 * 2. FECN for PSM (num_user_contexts + num_vnic_contexts);
++	 * 3. VNIC (num_vnic_contexts).
++	 * It should be noted that PSM FECN oversubscribe num_vnic_contexts
++	 * entries of RMT because both VNIC and PSM could allocate any receive
++	 * context between dd->first_dyn_alloc_text and dd->num_rcv_contexts,
++	 * and PSM FECN must reserve an RMT entry for each possible PSM receive
++	 * context.
++	 */
++	rmt_count = qos_rmt_entries(dd, NULL, NULL) + (num_vnic_contexts * 2);
++	if (rmt_count + n_usr_ctxts > NUM_MAP_ENTRIES) {
++		user_rmt_reduced = NUM_MAP_ENTRIES - rmt_count;
+ 		dd_dev_err(dd,
+ 			   "RMT size is reducing the number of user receive contexts from %u to %d\n",
+ 			   n_usr_ctxts,
+@@ -14272,9 +14282,11 @@ static void init_user_fecn_handling(struct hfi1_devdata *dd,
+ 	u64 reg;
+ 	int i, idx, regoff, regidx;
+ 	u8 offset;
++	u32 total_cnt;
+ 
+ 	/* there needs to be enough room in the map table */
+-	if (rmt->used + dd->num_user_contexts >= NUM_MAP_ENTRIES) {
++	total_cnt = dd->num_rcv_contexts - dd->first_dyn_alloc_ctxt;
++	if (rmt->used + total_cnt >= NUM_MAP_ENTRIES) {
+ 		dd_dev_err(dd, "User FECN handling disabled - too many user contexts allocated\n");
+ 		return;
+ 	}
+@@ -14328,7 +14340,7 @@ static void init_user_fecn_handling(struct hfi1_devdata *dd,
+ 	/* add rule 1 */
+ 	add_rsm_rule(dd, RSM_INS_FECN, &rrd);
+ 
+-	rmt->used += dd->num_user_contexts;
++	rmt->used += total_cnt;
+ }
+ 
+ /* Initialize RSM for VNIC */
+diff --git a/drivers/infiniband/hw/hfi1/qp.c b/drivers/infiniband/hw/hfi1/qp.c
+index 5866f358ea04..df8e812804b3 100644
+--- a/drivers/infiniband/hw/hfi1/qp.c
++++ b/drivers/infiniband/hw/hfi1/qp.c
+@@ -834,6 +834,8 @@ void notify_error_qp(struct rvt_qp *qp)
+ 		if (!list_empty(&priv->s_iowait.list) &&
+ 		    !(qp->s_flags & RVT_S_BUSY)) {
+ 			qp->s_flags &= ~HFI1_S_ANY_WAIT_IO;
++			iowait_clear_flag(&priv->s_iowait, IOWAIT_PENDING_IB);
++			iowait_clear_flag(&priv->s_iowait, IOWAIT_PENDING_TID);
+ 			list_del_init(&priv->s_iowait.list);
+ 			priv->s_iowait.lock = NULL;
+ 			rvt_put_qp(qp);
+diff --git a/drivers/infiniband/hw/hfi1/rc.c b/drivers/infiniband/hw/hfi1/rc.c
+index be603f35d7e4..cfde43b1df96 100644
+--- a/drivers/infiniband/hw/hfi1/rc.c
++++ b/drivers/infiniband/hw/hfi1/rc.c
+@@ -2302,7 +2302,7 @@ send_last:
+ 			update_ack_queue(qp, next);
+ 		}
+ 		e = &qp->s_ack_queue[qp->r_head_ack_queue];
+-		if (e->opcode == OP(RDMA_READ_REQUEST) && e->rdma_sge.mr) {
++		if (e->rdma_sge.mr) {
+ 			rvt_put_mr(e->rdma_sge.mr);
+ 			e->rdma_sge.mr = NULL;
+ 		}
+@@ -2376,7 +2376,7 @@ send_last:
+ 			update_ack_queue(qp, next);
+ 		}
+ 		e = &qp->s_ack_queue[qp->r_head_ack_queue];
+-		if (e->opcode == OP(RDMA_READ_REQUEST) && e->rdma_sge.mr) {
++		if (e->rdma_sge.mr) {
+ 			rvt_put_mr(e->rdma_sge.mr);
+ 			e->rdma_sge.mr = NULL;
+ 		}
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.c b/drivers/infiniband/hw/hns/hns_roce_hem.c
+index 4cdbcafa5915..cae23364cfea 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hem.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hem.c
+@@ -763,6 +763,8 @@ void *hns_roce_table_find(struct hns_roce_dev *hr_dev,
+ 		idx_offset = (obj & (table->num_obj - 1)) % obj_per_chunk;
+ 		dma_offset = offset = idx_offset * table->obj_size;
+ 	} else {
++		u32 seg_size = 64; /* 8 bytes per BA and 8 BA per segment */
++
+ 		hns_roce_calc_hem_mhop(hr_dev, table, &mhop_obj, &mhop);
+ 		/* mtt mhop */
+ 		i = mhop.l0_idx;
+@@ -774,8 +776,8 @@ void *hns_roce_table_find(struct hns_roce_dev *hr_dev,
+ 			hem_idx = i;
+ 
+ 		hem = table->hem[hem_idx];
+-		dma_offset = offset = (obj & (table->num_obj - 1)) *
+-				       table->obj_size % mhop.bt_chunk_size;
++		dma_offset = offset = (obj & (table->num_obj - 1)) * seg_size %
++				       mhop.bt_chunk_size;
+ 		if (mhop.hop_num == 2)
+ 			dma_offset = offset = 0;
+ 	}
+diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c
+index ee5991bd4171..dd4bb0ec6113 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_mr.c
++++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
+@@ -746,7 +746,6 @@ static int hns_roce_write_mtt_chunk(struct hns_roce_dev *hr_dev,
+ 	struct hns_roce_hem_table *table;
+ 	dma_addr_t dma_handle;
+ 	__le64 *mtts;
+-	u32 s = start_index * sizeof(u64);
+ 	u32 bt_page_size;
+ 	u32 i;
+ 
+@@ -780,7 +779,8 @@ static int hns_roce_write_mtt_chunk(struct hns_roce_dev *hr_dev,
+ 		return -EINVAL;
+ 
+ 	mtts = hns_roce_table_find(hr_dev, table,
+-				mtt->first_seg + s / hr_dev->caps.mtt_entry_sz,
++				mtt->first_seg +
++				start_index / HNS_ROCE_MTT_ENTRY_PER_SEG,
+ 				&dma_handle);
+ 	if (!mtts)
+ 		return -ENOMEM;
+diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c
+index 39c37b6fd715..76b8dda40edd 100644
+--- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c
++++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c
+@@ -1125,6 +1125,8 @@ static void pvrdma_pci_remove(struct pci_dev *pdev)
+ 	pvrdma_page_dir_cleanup(dev, &dev->cq_pdir);
+ 	pvrdma_page_dir_cleanup(dev, &dev->async_pdir);
+ 	pvrdma_free_slots(dev);
++	dma_free_coherent(&pdev->dev, sizeof(*dev->dsr), dev->dsr,
++			  dev->dsrbase);
+ 
+ 	iounmap(dev->regs);
+ 	kfree(dev->sgid_tbl);
+diff --git a/drivers/iommu/amd_iommu_init.c b/drivers/iommu/amd_iommu_init.c
+index 84fa5b22371e..e8a2efe0afce 100644
+--- a/drivers/iommu/amd_iommu_init.c
++++ b/drivers/iommu/amd_iommu_init.c
+@@ -358,7 +358,7 @@ static void iommu_write_l2(struct amd_iommu *iommu, u8 address, u32 val)
+ static void iommu_set_exclusion_range(struct amd_iommu *iommu)
+ {
+ 	u64 start = iommu->exclusion_start & PAGE_MASK;
+-	u64 limit = (start + iommu->exclusion_length) & PAGE_MASK;
++	u64 limit = (start + iommu->exclusion_length - 1) & PAGE_MASK;
+ 	u64 entry;
+ 
+ 	if (!iommu->exclusion_start)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/norm_desc.c b/drivers/net/ethernet/stmicro/stmmac/norm_desc.c
+index b7dd4e3c760d..6d690678c20e 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/norm_desc.c
++++ b/drivers/net/ethernet/stmicro/stmmac/norm_desc.c
+@@ -140,7 +140,7 @@ static void ndesc_init_rx_desc(struct dma_desc *p, int disable_rx_ic, int mode,
+ 	p->des0 |= cpu_to_le32(RDES0_OWN);
+ 
+ 	bfsize1 = min(bfsize, BUF_SIZE_2KiB - 1);
+-	p->des1 |= cpu_to_le32(bfsize & RDES1_BUFFER1_SIZE_MASK);
++	p->des1 |= cpu_to_le32(bfsize1 & RDES1_BUFFER1_SIZE_MASK);
+ 
+ 	if (mode == STMMAC_CHAIN_MODE)
+ 		ndesc_rx_set_on_chain(p, end);
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 6a9dd68c0f4f..4c4413ad3ceb 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -291,7 +291,7 @@ bool nvme_cancel_request(struct request *req, void *data, bool reserved)
+ 				"Cancelling I/O %d", req->tag);
+ 
+ 	nvme_req(req)->status = NVME_SC_ABORT_REQ;
+-	blk_mq_complete_request(req);
++	blk_mq_complete_request_sync(req);
+ 	return true;
+ }
+ EXPORT_SYMBOL_GPL(nvme_cancel_request);
+diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
+index c37d5bbd72ab..8625b73d94bf 100644
+--- a/drivers/nvme/host/fc.c
++++ b/drivers/nvme/host/fc.c
+@@ -1857,7 +1857,7 @@ nvme_fc_init_queue(struct nvme_fc_ctrl *ctrl, int idx)
+ 	memset(queue, 0, sizeof(*queue));
+ 	queue->ctrl = ctrl;
+ 	queue->qnum = idx;
+-	atomic_set(&queue->csn, 1);
++	atomic_set(&queue->csn, 0);
+ 	queue->dev = ctrl->dev;
+ 
+ 	if (idx > 0)
+@@ -1899,7 +1899,7 @@ nvme_fc_free_queue(struct nvme_fc_queue *queue)
+ 	 */
+ 
+ 	queue->connection_id = 0;
+-	atomic_set(&queue->csn, 1);
++	atomic_set(&queue->csn, 0);
+ }
+ 
+ static void
+@@ -2195,7 +2195,6 @@ nvme_fc_start_fcp_op(struct nvme_fc_ctrl *ctrl, struct nvme_fc_queue *queue,
+ {
+ 	struct nvme_fc_cmd_iu *cmdiu = &op->cmd_iu;
+ 	struct nvme_command *sqe = &cmdiu->sqe;
+-	u32 csn;
+ 	int ret, opstate;
+ 
+ 	/*
+@@ -2210,8 +2209,6 @@ nvme_fc_start_fcp_op(struct nvme_fc_ctrl *ctrl, struct nvme_fc_queue *queue,
+ 
+ 	/* format the FC-NVME CMD IU and fcp_req */
+ 	cmdiu->connection_id = cpu_to_be64(queue->connection_id);
+-	csn = atomic_inc_return(&queue->csn);
+-	cmdiu->csn = cpu_to_be32(csn);
+ 	cmdiu->data_len = cpu_to_be32(data_len);
+ 	switch (io_dir) {
+ 	case NVMEFC_FCP_WRITE:
+@@ -2269,11 +2266,24 @@ nvme_fc_start_fcp_op(struct nvme_fc_ctrl *ctrl, struct nvme_fc_queue *queue,
+ 	if (!(op->flags & FCOP_FLAGS_AEN))
+ 		blk_mq_start_request(op->rq);
+ 
++	cmdiu->csn = cpu_to_be32(atomic_inc_return(&queue->csn));
+ 	ret = ctrl->lport->ops->fcp_io(&ctrl->lport->localport,
+ 					&ctrl->rport->remoteport,
+ 					queue->lldd_handle, &op->fcp_req);
+ 
+ 	if (ret) {
++		/*
++		 * If the lld fails to send the command is there an issue with
++		 * the csn value?  If the command that fails is the Connect,
++		 * no - as the connection won't be live.  If it is a command
++		 * post-connect, it's possible a gap in csn may be created.
++		 * Does this matter?  As Linux initiators don't send fused
++		 * commands, no.  The gap would exist, but as there's nothing
++		 * that depends on csn order to be delivered on the target
++		 * side, it shouldn't hurt.  It would be difficult for a
++		 * target to even detect the csn gap as it has no idea when the
++		 * cmd with the csn was supposed to arrive.
++		 */
+ 		opstate = atomic_xchg(&op->state, FCPOP_STATE_COMPLETE);
+ 		__nvme_fc_fcpop_chk_teardowns(ctrl, op, opstate);
+ 
+diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
+index 11baeb14c388..8fdae510c5ac 100644
+--- a/drivers/nvme/target/admin-cmd.c
++++ b/drivers/nvme/target/admin-cmd.c
+@@ -32,6 +32,11 @@ u32 nvmet_get_log_page_len(struct nvme_command *cmd)
+ 	return len;
+ }
+ 
++u64 nvmet_get_log_page_offset(struct nvme_command *cmd)
++{
++	return le64_to_cpu(cmd->get_log_page.lpo);
++}
++
+ static void nvmet_execute_get_log_page_noop(struct nvmet_req *req)
+ {
+ 	nvmet_req_complete(req, nvmet_zero_sgl(req, 0, req->data_len));
+diff --git a/drivers/nvme/target/discovery.c b/drivers/nvme/target/discovery.c
+index d2cb71a0b419..389c1a90197d 100644
+--- a/drivers/nvme/target/discovery.c
++++ b/drivers/nvme/target/discovery.c
+@@ -139,54 +139,76 @@ static void nvmet_set_disc_traddr(struct nvmet_req *req, struct nvmet_port *port
+ 		memcpy(traddr, port->disc_addr.traddr, NVMF_TRADDR_SIZE);
+ }
+ 
++static size_t discovery_log_entries(struct nvmet_req *req)
++{
++	struct nvmet_ctrl *ctrl = req->sq->ctrl;
++	struct nvmet_subsys_link *p;
++	struct nvmet_port *r;
++	size_t entries = 0;
++
++	list_for_each_entry(p, &req->port->subsystems, entry) {
++		if (!nvmet_host_allowed(p->subsys, ctrl->hostnqn))
++			continue;
++		entries++;
++	}
++	list_for_each_entry(r, &req->port->referrals, entry)
++		entries++;
++	return entries;
++}
++
+ static void nvmet_execute_get_disc_log_page(struct nvmet_req *req)
+ {
+ 	const int entry_size = sizeof(struct nvmf_disc_rsp_page_entry);
+ 	struct nvmet_ctrl *ctrl = req->sq->ctrl;
+ 	struct nvmf_disc_rsp_page_hdr *hdr;
++	u64 offset = nvmet_get_log_page_offset(req->cmd);
+ 	size_t data_len = nvmet_get_log_page_len(req->cmd);
+-	size_t alloc_len = max(data_len, sizeof(*hdr));
+-	int residual_len = data_len - sizeof(*hdr);
++	size_t alloc_len;
+ 	struct nvmet_subsys_link *p;
+ 	struct nvmet_port *r;
+ 	u32 numrec = 0;
+ 	u16 status = 0;
++	void *buffer;
++
++	/* Spec requires dword aligned offsets */
++	if (offset & 0x3) {
++		status = NVME_SC_INVALID_FIELD | NVME_SC_DNR;
++		goto out;
++	}
+ 
+ 	/*
+ 	 * Make sure we're passing at least a buffer of response header size.
+ 	 * If host provided data len is less than the header size, only the
+ 	 * number of bytes requested by host will be sent to host.
+ 	 */
+-	hdr = kzalloc(alloc_len, GFP_KERNEL);
+-	if (!hdr) {
++	down_read(&nvmet_config_sem);
++	alloc_len = sizeof(*hdr) + entry_size * discovery_log_entries(req);
++	buffer = kzalloc(alloc_len, GFP_KERNEL);
++	if (!buffer) {
++		up_read(&nvmet_config_sem);
+ 		status = NVME_SC_INTERNAL;
+ 		goto out;
+ 	}
+ 
+-	down_read(&nvmet_config_sem);
++	hdr = buffer;
+ 	list_for_each_entry(p, &req->port->subsystems, entry) {
++		char traddr[NVMF_TRADDR_SIZE];
++
+ 		if (!nvmet_host_allowed(p->subsys, ctrl->hostnqn))
+ 			continue;
+-		if (residual_len >= entry_size) {
+-			char traddr[NVMF_TRADDR_SIZE];
+-
+-			nvmet_set_disc_traddr(req, req->port, traddr);
+-			nvmet_format_discovery_entry(hdr, req->port,
+-					p->subsys->subsysnqn, traddr,
+-					NVME_NQN_NVME, numrec);
+-			residual_len -= entry_size;
+-		}
++
++		nvmet_set_disc_traddr(req, req->port, traddr);
++		nvmet_format_discovery_entry(hdr, req->port,
++				p->subsys->subsysnqn, traddr,
++				NVME_NQN_NVME, numrec);
+ 		numrec++;
+ 	}
+ 
+ 	list_for_each_entry(r, &req->port->referrals, entry) {
+-		if (residual_len >= entry_size) {
+-			nvmet_format_discovery_entry(hdr, r,
+-					NVME_DISC_SUBSYS_NAME,
+-					r->disc_addr.traddr,
+-					NVME_NQN_DISC, numrec);
+-			residual_len -= entry_size;
+-		}
++		nvmet_format_discovery_entry(hdr, r,
++				NVME_DISC_SUBSYS_NAME,
++				r->disc_addr.traddr,
++				NVME_NQN_DISC, numrec);
+ 		numrec++;
+ 	}
+ 
+@@ -198,8 +220,8 @@ static void nvmet_execute_get_disc_log_page(struct nvmet_req *req)
+ 
+ 	up_read(&nvmet_config_sem);
+ 
+-	status = nvmet_copy_to_sgl(req, 0, hdr, data_len);
+-	kfree(hdr);
++	status = nvmet_copy_to_sgl(req, 0, buffer + offset, data_len);
++	kfree(buffer);
+ out:
+ 	nvmet_req_complete(req, status);
+ }
+diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
+index 3e4719fdba85..d253c45c1aa6 100644
+--- a/drivers/nvme/target/nvmet.h
++++ b/drivers/nvme/target/nvmet.h
+@@ -436,6 +436,7 @@ u16 nvmet_copy_from_sgl(struct nvmet_req *req, off_t off, void *buf,
+ u16 nvmet_zero_sgl(struct nvmet_req *req, off_t off, size_t len);
+ 
+ u32 nvmet_get_log_page_len(struct nvme_command *cmd);
++u64 nvmet_get_log_page_offset(struct nvme_command *cmd);
+ 
+ extern struct list_head *nvmet_ports;
+ void nvmet_port_disc_changed(struct nvmet_port *port,
+diff --git a/drivers/platform/x86/pmc_atom.c b/drivers/platform/x86/pmc_atom.c
+index eaec2d306481..c7039f52ad51 100644
+--- a/drivers/platform/x86/pmc_atom.c
++++ b/drivers/platform/x86/pmc_atom.c
+@@ -396,7 +396,7 @@ static int pmc_dbgfs_register(struct pmc_dev *pmc)
+  * Some systems need one or more of their pmc_plt_clks to be
+  * marked as critical.
+  */
+-static const struct dmi_system_id critclk_systems[] __initconst = {
++static const struct dmi_system_id critclk_systems[] = {
+ 	{
+ 		.ident = "MPL CEC1x",
+ 		.matches = {
+diff --git a/drivers/scsi/csiostor/csio_scsi.c b/drivers/scsi/csiostor/csio_scsi.c
+index bc5547a62c00..c54c6cd504c4 100644
+--- a/drivers/scsi/csiostor/csio_scsi.c
++++ b/drivers/scsi/csiostor/csio_scsi.c
+@@ -1713,8 +1713,11 @@ csio_scsi_err_handler(struct csio_hw *hw, struct csio_ioreq *req)
+ 	}
+ 
+ out:
+-	if (req->nsge > 0)
++	if (req->nsge > 0) {
+ 		scsi_dma_unmap(cmnd);
++		if (req->dcopy && (host_status == DID_OK))
++			host_status = csio_scsi_copy_to_sgl(hw, req);
++	}
+ 
+ 	cmnd->result = (((host_status) << 16) | scsi_status);
+ 	cmnd->scsi_done(cmnd);
+diff --git a/drivers/scsi/lpfc/lpfc_attr.c b/drivers/scsi/lpfc/lpfc_attr.c
+index 4bae72cbf3f6..45b26a8eb61e 100644
+--- a/drivers/scsi/lpfc/lpfc_attr.c
++++ b/drivers/scsi/lpfc/lpfc_attr.c
+@@ -117,7 +117,7 @@ static ssize_t
+ lpfc_drvr_version_show(struct device *dev, struct device_attribute *attr,
+ 		       char *buf)
+ {
+-	return snprintf(buf, PAGE_SIZE, LPFC_MODULE_DESC "\n");
++	return scnprintf(buf, PAGE_SIZE, LPFC_MODULE_DESC "\n");
+ }
+ 
+ /**
+@@ -137,9 +137,9 @@ lpfc_enable_fip_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+ 	if (phba->hba_flag & HBA_FIP_SUPPORT)
+-		return snprintf(buf, PAGE_SIZE, "1\n");
++		return scnprintf(buf, PAGE_SIZE, "1\n");
+ 	else
+-		return snprintf(buf, PAGE_SIZE, "0\n");
++		return scnprintf(buf, PAGE_SIZE, "0\n");
+ }
+ 
+ static ssize_t
+@@ -517,14 +517,15 @@ lpfc_bg_info_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+-	if (phba->cfg_enable_bg)
++	if (phba->cfg_enable_bg) {
+ 		if (phba->sli3_options & LPFC_SLI3_BG_ENABLED)
+-			return snprintf(buf, PAGE_SIZE, "BlockGuard Enabled\n");
++			return scnprintf(buf, PAGE_SIZE,
++					"BlockGuard Enabled\n");
+ 		else
+-			return snprintf(buf, PAGE_SIZE,
++			return scnprintf(buf, PAGE_SIZE,
+ 					"BlockGuard Not Supported\n");
+-	else
+-			return snprintf(buf, PAGE_SIZE,
++	} else
++		return scnprintf(buf, PAGE_SIZE,
+ 					"BlockGuard Disabled\n");
+ }
+ 
+@@ -536,7 +537,7 @@ lpfc_bg_guard_err_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%llu\n",
++	return scnprintf(buf, PAGE_SIZE, "%llu\n",
+ 			(unsigned long long)phba->bg_guard_err_cnt);
+ }
+ 
+@@ -548,7 +549,7 @@ lpfc_bg_apptag_err_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%llu\n",
++	return scnprintf(buf, PAGE_SIZE, "%llu\n",
+ 			(unsigned long long)phba->bg_apptag_err_cnt);
+ }
+ 
+@@ -560,7 +561,7 @@ lpfc_bg_reftag_err_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%llu\n",
++	return scnprintf(buf, PAGE_SIZE, "%llu\n",
+ 			(unsigned long long)phba->bg_reftag_err_cnt);
+ }
+ 
+@@ -578,7 +579,7 @@ lpfc_info_show(struct device *dev, struct device_attribute *attr,
+ {
+ 	struct Scsi_Host *host = class_to_shost(dev);
+ 
+-	return snprintf(buf, PAGE_SIZE, "%s\n",lpfc_info(host));
++	return scnprintf(buf, PAGE_SIZE, "%s\n", lpfc_info(host));
+ }
+ 
+ /**
+@@ -597,7 +598,7 @@ lpfc_serialnum_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%s\n",phba->SerialNumber);
++	return scnprintf(buf, PAGE_SIZE, "%s\n", phba->SerialNumber);
+ }
+ 
+ /**
+@@ -619,7 +620,7 @@ lpfc_temp_sensor_show(struct device *dev, struct device_attribute *attr,
+ 	struct Scsi_Host *shost = class_to_shost(dev);
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 	struct lpfc_hba   *phba = vport->phba;
+-	return snprintf(buf, PAGE_SIZE, "%d\n",phba->temp_sensor_support);
++	return scnprintf(buf, PAGE_SIZE, "%d\n", phba->temp_sensor_support);
+ }
+ 
+ /**
+@@ -638,7 +639,7 @@ lpfc_modeldesc_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%s\n",phba->ModelDesc);
++	return scnprintf(buf, PAGE_SIZE, "%s\n", phba->ModelDesc);
+ }
+ 
+ /**
+@@ -657,7 +658,7 @@ lpfc_modelname_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%s\n",phba->ModelName);
++	return scnprintf(buf, PAGE_SIZE, "%s\n", phba->ModelName);
+ }
+ 
+ /**
+@@ -676,7 +677,7 @@ lpfc_programtype_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%s\n",phba->ProgramType);
++	return scnprintf(buf, PAGE_SIZE, "%s\n", phba->ProgramType);
+ }
+ 
+ /**
+@@ -694,7 +695,7 @@ lpfc_mlomgmt_show(struct device *dev, struct device_attribute *attr, char *buf)
+ 	struct lpfc_vport *vport = (struct lpfc_vport *)shost->hostdata;
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n",
++	return scnprintf(buf, PAGE_SIZE, "%d\n",
+ 		(phba->sli.sli_flag & LPFC_MENLO_MAINT));
+ }
+ 
+@@ -714,7 +715,7 @@ lpfc_vportnum_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%s\n",phba->Port);
++	return scnprintf(buf, PAGE_SIZE, "%s\n", phba->Port);
+ }
+ 
+ /**
+@@ -742,10 +743,10 @@ lpfc_fwrev_show(struct device *dev, struct device_attribute *attr,
+ 	sli_family = phba->sli4_hba.pc_sli4_params.sli_family;
+ 
+ 	if (phba->sli_rev < LPFC_SLI_REV4)
+-		len = snprintf(buf, PAGE_SIZE, "%s, sli-%d\n",
++		len = scnprintf(buf, PAGE_SIZE, "%s, sli-%d\n",
+ 			       fwrev, phba->sli_rev);
+ 	else
+-		len = snprintf(buf, PAGE_SIZE, "%s, sli-%d:%d:%x\n",
++		len = scnprintf(buf, PAGE_SIZE, "%s, sli-%d:%d:%x\n",
+ 			       fwrev, phba->sli_rev, if_type, sli_family);
+ 
+ 	return len;
+@@ -769,7 +770,7 @@ lpfc_hdw_show(struct device *dev, struct device_attribute *attr, char *buf)
+ 	lpfc_vpd_t *vp = &phba->vpd;
+ 
+ 	lpfc_jedec_to_ascii(vp->rev.biuRev, hdw);
+-	return snprintf(buf, PAGE_SIZE, "%s\n", hdw);
++	return scnprintf(buf, PAGE_SIZE, "%s\n", hdw);
+ }
+ 
+ /**
+@@ -790,10 +791,11 @@ lpfc_option_rom_version_show(struct device *dev, struct device_attribute *attr,
+ 	char fwrev[FW_REV_STR_SIZE];
+ 
+ 	if (phba->sli_rev < LPFC_SLI_REV4)
+-		return snprintf(buf, PAGE_SIZE, "%s\n", phba->OptionROMVersion);
++		return scnprintf(buf, PAGE_SIZE, "%s\n",
++				phba->OptionROMVersion);
+ 
+ 	lpfc_decode_firmware_rev(phba, fwrev, 1);
+-	return snprintf(buf, PAGE_SIZE, "%s\n", fwrev);
++	return scnprintf(buf, PAGE_SIZE, "%s\n", fwrev);
+ }
+ 
+ /**
+@@ -824,20 +826,20 @@ lpfc_link_state_show(struct device *dev, struct device_attribute *attr,
+ 	case LPFC_LINK_DOWN:
+ 	case LPFC_HBA_ERROR:
+ 		if (phba->hba_flag & LINK_DISABLED)
+-			len += snprintf(buf + len, PAGE_SIZE-len,
++			len += scnprintf(buf + len, PAGE_SIZE-len,
+ 				"Link Down - User disabled\n");
+ 		else
+-			len += snprintf(buf + len, PAGE_SIZE-len,
++			len += scnprintf(buf + len, PAGE_SIZE-len,
+ 				"Link Down\n");
+ 		break;
+ 	case LPFC_LINK_UP:
+ 	case LPFC_CLEAR_LA:
+ 	case LPFC_HBA_READY:
+-		len += snprintf(buf + len, PAGE_SIZE-len, "Link Up - ");
++		len += scnprintf(buf + len, PAGE_SIZE-len, "Link Up - ");
+ 
+ 		switch (vport->port_state) {
+ 		case LPFC_LOCAL_CFG_LINK:
+-			len += snprintf(buf + len, PAGE_SIZE-len,
++			len += scnprintf(buf + len, PAGE_SIZE-len,
+ 					"Configuring Link\n");
+ 			break;
+ 		case LPFC_FDISC:
+@@ -847,38 +849,40 @@ lpfc_link_state_show(struct device *dev, struct device_attribute *attr,
+ 		case LPFC_NS_QRY:
+ 		case LPFC_BUILD_DISC_LIST:
+ 		case LPFC_DISC_AUTH:
+-			len += snprintf(buf + len, PAGE_SIZE - len,
++			len += scnprintf(buf + len, PAGE_SIZE - len,
+ 					"Discovery\n");
+ 			break;
+ 		case LPFC_VPORT_READY:
+-			len += snprintf(buf + len, PAGE_SIZE - len, "Ready\n");
++			len += scnprintf(buf + len, PAGE_SIZE - len,
++					"Ready\n");
+ 			break;
+ 
+ 		case LPFC_VPORT_FAILED:
+-			len += snprintf(buf + len, PAGE_SIZE - len, "Failed\n");
++			len += scnprintf(buf + len, PAGE_SIZE - len,
++					"Failed\n");
+ 			break;
+ 
+ 		case LPFC_VPORT_UNKNOWN:
+-			len += snprintf(buf + len, PAGE_SIZE - len,
++			len += scnprintf(buf + len, PAGE_SIZE - len,
+ 					"Unknown\n");
+ 			break;
+ 		}
+ 		if (phba->sli.sli_flag & LPFC_MENLO_MAINT)
+-			len += snprintf(buf + len, PAGE_SIZE-len,
++			len += scnprintf(buf + len, PAGE_SIZE-len,
+ 					"   Menlo Maint Mode\n");
+ 		else if (phba->fc_topology == LPFC_TOPOLOGY_LOOP) {
+ 			if (vport->fc_flag & FC_PUBLIC_LOOP)
+-				len += snprintf(buf + len, PAGE_SIZE-len,
++				len += scnprintf(buf + len, PAGE_SIZE-len,
+ 						"   Public Loop\n");
+ 			else
+-				len += snprintf(buf + len, PAGE_SIZE-len,
++				len += scnprintf(buf + len, PAGE_SIZE-len,
+ 						"   Private Loop\n");
+ 		} else {
+ 			if (vport->fc_flag & FC_FABRIC)
+-				len += snprintf(buf + len, PAGE_SIZE-len,
++				len += scnprintf(buf + len, PAGE_SIZE-len,
+ 						"   Fabric\n");
+ 			else
+-				len += snprintf(buf + len, PAGE_SIZE-len,
++				len += scnprintf(buf + len, PAGE_SIZE-len,
+ 						"   Point-2-Point\n");
+ 		}
+ 	}
+@@ -890,28 +894,28 @@ lpfc_link_state_show(struct device *dev, struct device_attribute *attr,
+ 		struct lpfc_trunk_link link = phba->trunk_link;
+ 
+ 		if (bf_get(lpfc_conf_trunk_port0, &phba->sli4_hba))
+-			len += snprintf(buf + len, PAGE_SIZE - len,
++			len += scnprintf(buf + len, PAGE_SIZE - len,
+ 				"Trunk port 0: Link %s %s\n",
+ 				(link.link0.state == LPFC_LINK_UP) ?
+ 				 "Up" : "Down. ",
+ 				trunk_errmsg[link.link0.fault]);
+ 
+ 		if (bf_get(lpfc_conf_trunk_port1, &phba->sli4_hba))
+-			len += snprintf(buf + len, PAGE_SIZE - len,
++			len += scnprintf(buf + len, PAGE_SIZE - len,
+ 				"Trunk port 1: Link %s %s\n",
+ 				(link.link1.state == LPFC_LINK_UP) ?
+ 				 "Up" : "Down. ",
+ 				trunk_errmsg[link.link1.fault]);
+ 
+ 		if (bf_get(lpfc_conf_trunk_port2, &phba->sli4_hba))
+-			len += snprintf(buf + len, PAGE_SIZE - len,
++			len += scnprintf(buf + len, PAGE_SIZE - len,
+ 				"Trunk port 2: Link %s %s\n",
+ 				(link.link2.state == LPFC_LINK_UP) ?
+ 				 "Up" : "Down. ",
+ 				trunk_errmsg[link.link2.fault]);
+ 
+ 		if (bf_get(lpfc_conf_trunk_port3, &phba->sli4_hba))
+-			len += snprintf(buf + len, PAGE_SIZE - len,
++			len += scnprintf(buf + len, PAGE_SIZE - len,
+ 				"Trunk port 3: Link %s %s\n",
+ 				(link.link3.state == LPFC_LINK_UP) ?
+ 				 "Up" : "Down. ",
+@@ -939,15 +943,15 @@ lpfc_sli4_protocol_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_hba *phba = vport->phba;
+ 
+ 	if (phba->sli_rev < LPFC_SLI_REV4)
+-		return snprintf(buf, PAGE_SIZE, "fc\n");
++		return scnprintf(buf, PAGE_SIZE, "fc\n");
+ 
+ 	if (phba->sli4_hba.lnk_info.lnk_dv == LPFC_LNK_DAT_VAL) {
+ 		if (phba->sli4_hba.lnk_info.lnk_tp == LPFC_LNK_TYPE_GE)
+-			return snprintf(buf, PAGE_SIZE, "fcoe\n");
++			return scnprintf(buf, PAGE_SIZE, "fcoe\n");
+ 		if (phba->sli4_hba.lnk_info.lnk_tp == LPFC_LNK_TYPE_FC)
+-			return snprintf(buf, PAGE_SIZE, "fc\n");
++			return scnprintf(buf, PAGE_SIZE, "fc\n");
+ 	}
+-	return snprintf(buf, PAGE_SIZE, "unknown\n");
++	return scnprintf(buf, PAGE_SIZE, "unknown\n");
+ }
+ 
+ /**
+@@ -967,7 +971,7 @@ lpfc_oas_supported_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_vport *vport = (struct lpfc_vport *)shost->hostdata;
+ 	struct lpfc_hba *phba = vport->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n",
++	return scnprintf(buf, PAGE_SIZE, "%d\n",
+ 			phba->sli4_hba.pc_sli4_params.oas_supported);
+ }
+ 
+@@ -1025,7 +1029,7 @@ lpfc_num_discovered_ports_show(struct device *dev,
+ 	struct Scsi_Host  *shost = class_to_shost(dev);
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n",
++	return scnprintf(buf, PAGE_SIZE, "%d\n",
+ 			vport->fc_map_cnt + vport->fc_unmap_cnt);
+ }
+ 
+@@ -1539,7 +1543,7 @@ lpfc_nport_evt_cnt_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n", phba->nport_event_cnt);
++	return scnprintf(buf, PAGE_SIZE, "%d\n", phba->nport_event_cnt);
+ }
+ 
+ int
+@@ -1628,7 +1632,7 @@ lpfc_board_mode_show(struct device *dev, struct device_attribute *attr,
+ 	else
+ 		state = "online";
+ 
+-	return snprintf(buf, PAGE_SIZE, "%s\n", state);
++	return scnprintf(buf, PAGE_SIZE, "%s\n", state);
+ }
+ 
+ /**
+@@ -1854,8 +1858,8 @@ lpfc_max_rpi_show(struct device *dev, struct device_attribute *attr,
+ 	uint32_t cnt;
+ 
+ 	if (lpfc_get_hba_info(phba, NULL, NULL, &cnt, NULL, NULL, NULL))
+-		return snprintf(buf, PAGE_SIZE, "%d\n", cnt);
+-	return snprintf(buf, PAGE_SIZE, "Unknown\n");
++		return scnprintf(buf, PAGE_SIZE, "%d\n", cnt);
++	return scnprintf(buf, PAGE_SIZE, "Unknown\n");
+ }
+ 
+ /**
+@@ -1882,8 +1886,8 @@ lpfc_used_rpi_show(struct device *dev, struct device_attribute *attr,
+ 	uint32_t cnt, acnt;
+ 
+ 	if (lpfc_get_hba_info(phba, NULL, NULL, &cnt, &acnt, NULL, NULL))
+-		return snprintf(buf, PAGE_SIZE, "%d\n", (cnt - acnt));
+-	return snprintf(buf, PAGE_SIZE, "Unknown\n");
++		return scnprintf(buf, PAGE_SIZE, "%d\n", (cnt - acnt));
++	return scnprintf(buf, PAGE_SIZE, "Unknown\n");
+ }
+ 
+ /**
+@@ -1910,8 +1914,8 @@ lpfc_max_xri_show(struct device *dev, struct device_attribute *attr,
+ 	uint32_t cnt;
+ 
+ 	if (lpfc_get_hba_info(phba, &cnt, NULL, NULL, NULL, NULL, NULL))
+-		return snprintf(buf, PAGE_SIZE, "%d\n", cnt);
+-	return snprintf(buf, PAGE_SIZE, "Unknown\n");
++		return scnprintf(buf, PAGE_SIZE, "%d\n", cnt);
++	return scnprintf(buf, PAGE_SIZE, "Unknown\n");
+ }
+ 
+ /**
+@@ -1938,8 +1942,8 @@ lpfc_used_xri_show(struct device *dev, struct device_attribute *attr,
+ 	uint32_t cnt, acnt;
+ 
+ 	if (lpfc_get_hba_info(phba, &cnt, &acnt, NULL, NULL, NULL, NULL))
+-		return snprintf(buf, PAGE_SIZE, "%d\n", (cnt - acnt));
+-	return snprintf(buf, PAGE_SIZE, "Unknown\n");
++		return scnprintf(buf, PAGE_SIZE, "%d\n", (cnt - acnt));
++	return scnprintf(buf, PAGE_SIZE, "Unknown\n");
+ }
+ 
+ /**
+@@ -1966,8 +1970,8 @@ lpfc_max_vpi_show(struct device *dev, struct device_attribute *attr,
+ 	uint32_t cnt;
+ 
+ 	if (lpfc_get_hba_info(phba, NULL, NULL, NULL, NULL, &cnt, NULL))
+-		return snprintf(buf, PAGE_SIZE, "%d\n", cnt);
+-	return snprintf(buf, PAGE_SIZE, "Unknown\n");
++		return scnprintf(buf, PAGE_SIZE, "%d\n", cnt);
++	return scnprintf(buf, PAGE_SIZE, "Unknown\n");
+ }
+ 
+ /**
+@@ -1994,8 +1998,8 @@ lpfc_used_vpi_show(struct device *dev, struct device_attribute *attr,
+ 	uint32_t cnt, acnt;
+ 
+ 	if (lpfc_get_hba_info(phba, NULL, NULL, NULL, NULL, &cnt, &acnt))
+-		return snprintf(buf, PAGE_SIZE, "%d\n", (cnt - acnt));
+-	return snprintf(buf, PAGE_SIZE, "Unknown\n");
++		return scnprintf(buf, PAGE_SIZE, "%d\n", (cnt - acnt));
++	return scnprintf(buf, PAGE_SIZE, "Unknown\n");
+ }
+ 
+ /**
+@@ -2020,10 +2024,10 @@ lpfc_npiv_info_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+ 	if (!(phba->max_vpi))
+-		return snprintf(buf, PAGE_SIZE, "NPIV Not Supported\n");
++		return scnprintf(buf, PAGE_SIZE, "NPIV Not Supported\n");
+ 	if (vport->port_type == LPFC_PHYSICAL_PORT)
+-		return snprintf(buf, PAGE_SIZE, "NPIV Physical\n");
+-	return snprintf(buf, PAGE_SIZE, "NPIV Virtual (VPI %d)\n", vport->vpi);
++		return scnprintf(buf, PAGE_SIZE, "NPIV Physical\n");
++	return scnprintf(buf, PAGE_SIZE, "NPIV Virtual (VPI %d)\n", vport->vpi);
+ }
+ 
+ /**
+@@ -2045,7 +2049,7 @@ lpfc_poll_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%#x\n", phba->cfg_poll);
++	return scnprintf(buf, PAGE_SIZE, "%#x\n", phba->cfg_poll);
+ }
+ 
+ /**
+@@ -2149,7 +2153,7 @@ lpfc_fips_level_show(struct device *dev,  struct device_attribute *attr,
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n", phba->fips_level);
++	return scnprintf(buf, PAGE_SIZE, "%d\n", phba->fips_level);
+ }
+ 
+ /**
+@@ -2168,7 +2172,7 @@ lpfc_fips_rev_show(struct device *dev,  struct device_attribute *attr,
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n", phba->fips_spec_rev);
++	return scnprintf(buf, PAGE_SIZE, "%d\n", phba->fips_spec_rev);
+ }
+ 
+ /**
+@@ -2187,7 +2191,7 @@ lpfc_dss_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%s - %sOperational\n",
++	return scnprintf(buf, PAGE_SIZE, "%s - %sOperational\n",
+ 			(phba->cfg_enable_dss) ? "Enabled" : "Disabled",
+ 			(phba->sli3_options & LPFC_SLI3_DSS_ENABLED) ?
+ 				"" : "Not ");
+@@ -2216,7 +2220,7 @@ lpfc_sriov_hw_max_virtfn_show(struct device *dev,
+ 	uint16_t max_nr_virtfn;
+ 
+ 	max_nr_virtfn = lpfc_sli_sriov_nr_virtfn_get(phba);
+-	return snprintf(buf, PAGE_SIZE, "%d\n", max_nr_virtfn);
++	return scnprintf(buf, PAGE_SIZE, "%d\n", max_nr_virtfn);
+ }
+ 
+ static inline bool lpfc_rangecheck(uint val, uint min, uint max)
+@@ -2276,7 +2280,7 @@ lpfc_##attr##_show(struct device *dev, struct device_attribute *attr, \
+ 	struct Scsi_Host  *shost = class_to_shost(dev);\
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;\
+ 	struct lpfc_hba   *phba = vport->phba;\
+-	return snprintf(buf, PAGE_SIZE, "%d\n",\
++	return scnprintf(buf, PAGE_SIZE, "%d\n",\
+ 			phba->cfg_##attr);\
+ }
+ 
+@@ -2304,7 +2308,7 @@ lpfc_##attr##_show(struct device *dev, struct device_attribute *attr, \
+ 	struct lpfc_hba   *phba = vport->phba;\
+ 	uint val = 0;\
+ 	val = phba->cfg_##attr;\
+-	return snprintf(buf, PAGE_SIZE, "%#x\n",\
++	return scnprintf(buf, PAGE_SIZE, "%#x\n",\
+ 			phba->cfg_##attr);\
+ }
+ 
+@@ -2440,7 +2444,7 @@ lpfc_##attr##_show(struct device *dev, struct device_attribute *attr, \
+ { \
+ 	struct Scsi_Host  *shost = class_to_shost(dev);\
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;\
+-	return snprintf(buf, PAGE_SIZE, "%d\n", vport->cfg_##attr);\
++	return scnprintf(buf, PAGE_SIZE, "%d\n", vport->cfg_##attr);\
+ }
+ 
+ /**
+@@ -2465,7 +2469,7 @@ lpfc_##attr##_show(struct device *dev, struct device_attribute *attr, \
+ { \
+ 	struct Scsi_Host  *shost = class_to_shost(dev);\
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;\
+-	return snprintf(buf, PAGE_SIZE, "%#x\n", vport->cfg_##attr);\
++	return scnprintf(buf, PAGE_SIZE, "%#x\n", vport->cfg_##attr);\
+ }
+ 
+ /**
+@@ -2736,7 +2740,7 @@ lpfc_soft_wwpn_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 	struct lpfc_hba   *phba = vport->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "0x%llx\n",
++	return scnprintf(buf, PAGE_SIZE, "0x%llx\n",
+ 			(unsigned long long)phba->cfg_soft_wwpn);
+ }
+ 
+@@ -2833,7 +2837,7 @@ lpfc_soft_wwnn_show(struct device *dev, struct device_attribute *attr,
+ {
+ 	struct Scsi_Host *shost = class_to_shost(dev);
+ 	struct lpfc_hba *phba = ((struct lpfc_vport *)shost->hostdata)->phba;
+-	return snprintf(buf, PAGE_SIZE, "0x%llx\n",
++	return scnprintf(buf, PAGE_SIZE, "0x%llx\n",
+ 			(unsigned long long)phba->cfg_soft_wwnn);
+ }
+ 
+@@ -2899,7 +2903,7 @@ lpfc_oas_tgt_show(struct device *dev, struct device_attribute *attr,
+ 	struct Scsi_Host *shost = class_to_shost(dev);
+ 	struct lpfc_hba *phba = ((struct lpfc_vport *)shost->hostdata)->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "0x%llx\n",
++	return scnprintf(buf, PAGE_SIZE, "0x%llx\n",
+ 			wwn_to_u64(phba->cfg_oas_tgt_wwpn));
+ }
+ 
+@@ -2967,7 +2971,7 @@ lpfc_oas_priority_show(struct device *dev, struct device_attribute *attr,
+ 	struct Scsi_Host *shost = class_to_shost(dev);
+ 	struct lpfc_hba *phba = ((struct lpfc_vport *)shost->hostdata)->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n", phba->cfg_oas_priority);
++	return scnprintf(buf, PAGE_SIZE, "%d\n", phba->cfg_oas_priority);
+ }
+ 
+ /**
+@@ -3030,7 +3034,7 @@ lpfc_oas_vpt_show(struct device *dev, struct device_attribute *attr,
+ 	struct Scsi_Host *shost = class_to_shost(dev);
+ 	struct lpfc_hba *phba = ((struct lpfc_vport *)shost->hostdata)->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "0x%llx\n",
++	return scnprintf(buf, PAGE_SIZE, "0x%llx\n",
+ 			wwn_to_u64(phba->cfg_oas_vpt_wwpn));
+ }
+ 
+@@ -3101,7 +3105,7 @@ lpfc_oas_lun_state_show(struct device *dev, struct device_attribute *attr,
+ 	struct Scsi_Host *shost = class_to_shost(dev);
+ 	struct lpfc_hba *phba = ((struct lpfc_vport *)shost->hostdata)->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n", phba->cfg_oas_lun_state);
++	return scnprintf(buf, PAGE_SIZE, "%d\n", phba->cfg_oas_lun_state);
+ }
+ 
+ /**
+@@ -3165,7 +3169,7 @@ lpfc_oas_lun_status_show(struct device *dev, struct device_attribute *attr,
+ 	if (!(phba->cfg_oas_flags & OAS_LUN_VALID))
+ 		return -EFAULT;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n", phba->cfg_oas_lun_status);
++	return scnprintf(buf, PAGE_SIZE, "%d\n", phba->cfg_oas_lun_status);
+ }
+ static DEVICE_ATTR(lpfc_xlane_lun_status, S_IRUGO,
+ 		   lpfc_oas_lun_status_show, NULL);
+@@ -3317,7 +3321,7 @@ lpfc_oas_lun_show(struct device *dev, struct device_attribute *attr,
+ 	if (oas_lun != NOT_OAS_ENABLED_LUN)
+ 		phba->cfg_oas_flags |= OAS_LUN_VALID;
+ 
+-	len += snprintf(buf + len, PAGE_SIZE-len, "0x%llx", oas_lun);
++	len += scnprintf(buf + len, PAGE_SIZE-len, "0x%llx", oas_lun);
+ 
+ 	return len;
+ }
+@@ -3451,7 +3455,7 @@ lpfc_iocb_hw_show(struct device *dev, struct device_attribute *attr, char *buf)
+ 	struct Scsi_Host  *shost = class_to_shost(dev);
+ 	struct lpfc_hba   *phba = ((struct lpfc_vport *) shost->hostdata)->phba;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n", phba->iocb_max);
++	return scnprintf(buf, PAGE_SIZE, "%d\n", phba->iocb_max);
+ }
+ 
+ static DEVICE_ATTR(iocb_hw, S_IRUGO,
+@@ -3463,7 +3467,7 @@ lpfc_txq_hw_show(struct device *dev, struct device_attribute *attr, char *buf)
+ 	struct lpfc_hba   *phba = ((struct lpfc_vport *) shost->hostdata)->phba;
+ 	struct lpfc_sli_ring *pring = lpfc_phba_elsring(phba);
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n",
++	return scnprintf(buf, PAGE_SIZE, "%d\n",
+ 			pring ? pring->txq_max : 0);
+ }
+ 
+@@ -3477,7 +3481,7 @@ lpfc_txcmplq_hw_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_hba   *phba = ((struct lpfc_vport *) shost->hostdata)->phba;
+ 	struct lpfc_sli_ring *pring = lpfc_phba_elsring(phba);
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n",
++	return scnprintf(buf, PAGE_SIZE, "%d\n",
+ 			pring ? pring->txcmplq_max : 0);
+ }
+ 
+@@ -3513,7 +3517,7 @@ lpfc_nodev_tmo_show(struct device *dev, struct device_attribute *attr,
+ 	struct Scsi_Host  *shost = class_to_shost(dev);
+ 	struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
+ 
+-	return snprintf(buf, PAGE_SIZE, "%d\n",	vport->cfg_devloss_tmo);
++	return scnprintf(buf, PAGE_SIZE, "%d\n",	vport->cfg_devloss_tmo);
+ }
+ 
+ /**
+@@ -5016,19 +5020,19 @@ lpfc_fcp_cpu_map_show(struct device *dev, struct device_attribute *attr,
+ 
+ 	switch (phba->cfg_fcp_cpu_map) {
+ 	case 0:
+-		len += snprintf(buf + len, PAGE_SIZE-len,
++		len += scnprintf(buf + len, PAGE_SIZE-len,
+ 				"fcp_cpu_map: No mapping (%d)\n",
+ 				phba->cfg_fcp_cpu_map);
+ 		return len;
+ 	case 1:
+-		len += snprintf(buf + len, PAGE_SIZE-len,
++		len += scnprintf(buf + len, PAGE_SIZE-len,
+ 				"fcp_cpu_map: HBA centric mapping (%d): "
+ 				"%d online CPUs\n",
+ 				phba->cfg_fcp_cpu_map,
+ 				phba->sli4_hba.num_online_cpu);
+ 		break;
+ 	case 2:
+-		len += snprintf(buf + len, PAGE_SIZE-len,
++		len += scnprintf(buf + len, PAGE_SIZE-len,
+ 				"fcp_cpu_map: Driver centric mapping (%d): "
+ 				"%d online CPUs\n",
+ 				phba->cfg_fcp_cpu_map,
+@@ -5041,14 +5045,14 @@ lpfc_fcp_cpu_map_show(struct device *dev, struct device_attribute *attr,
+ 
+ 		/* margin should fit in this and the truncated message */
+ 		if (cpup->irq == LPFC_VECTOR_MAP_EMPTY)
+-			len += snprintf(buf + len, PAGE_SIZE-len,
++			len += scnprintf(buf + len, PAGE_SIZE-len,
+ 					"CPU %02d io_chan %02d "
+ 					"physid %d coreid %d\n",
+ 					phba->sli4_hba.curr_disp_cpu,
+ 					cpup->channel_id, cpup->phys_id,
+ 					cpup->core_id);
+ 		else
+-			len += snprintf(buf + len, PAGE_SIZE-len,
++			len += scnprintf(buf + len, PAGE_SIZE-len,
+ 					"CPU %02d io_chan %02d "
+ 					"physid %d coreid %d IRQ %d\n",
+ 					phba->sli4_hba.curr_disp_cpu,
+@@ -5061,7 +5065,7 @@ lpfc_fcp_cpu_map_show(struct device *dev, struct device_attribute *attr,
+ 		if (phba->sli4_hba.curr_disp_cpu <
+ 				phba->sli4_hba.num_present_cpu &&
+ 				(len >= (PAGE_SIZE - 64))) {
+-			len += snprintf(buf + len, PAGE_SIZE-len, "more...\n");
++			len += scnprintf(buf + len, PAGE_SIZE-len, "more...\n");
+ 			break;
+ 		}
+ 	}
+@@ -5586,10 +5590,10 @@ lpfc_sg_seg_cnt_show(struct device *dev, struct device_attribute *attr,
+ 	struct lpfc_hba   *phba = vport->phba;
+ 	int len;
+ 
+-	len = snprintf(buf, PAGE_SIZE, "SGL sz: %d  total SGEs: %d\n",
++	len = scnprintf(buf, PAGE_SIZE, "SGL sz: %d  total SGEs: %d\n",
+ 		       phba->cfg_sg_dma_buf_size, phba->cfg_total_seg_cnt);
+ 
+-	len += snprintf(buf + len, PAGE_SIZE, "Cfg: %d  SCSI: %d  NVME: %d\n",
++	len += scnprintf(buf + len, PAGE_SIZE, "Cfg: %d  SCSI: %d  NVME: %d\n",
+ 			phba->cfg_sg_seg_cnt, phba->cfg_scsi_seg_cnt,
+ 			phba->cfg_nvme_seg_cnt);
+ 	return len;
+@@ -6586,7 +6590,7 @@ lpfc_show_rport_##field (struct device *dev,				\
+ {									\
+ 	struct fc_rport *rport = transport_class_to_rport(dev);		\
+ 	struct lpfc_rport_data *rdata = rport->hostdata;		\
+-	return snprintf(buf, sz, format_string,				\
++	return scnprintf(buf, sz, format_string,			\
+ 		(rdata->target) ? cast rdata->target->field : 0);	\
+ }
+ 
+diff --git a/drivers/scsi/lpfc/lpfc_ct.c b/drivers/scsi/lpfc/lpfc_ct.c
+index 552da8bf43e4..221f8fd87d24 100644
+--- a/drivers/scsi/lpfc/lpfc_ct.c
++++ b/drivers/scsi/lpfc/lpfc_ct.c
+@@ -1430,7 +1430,7 @@ lpfc_vport_symbolic_port_name(struct lpfc_vport *vport, char *symbol,
+ 	 * Name object.  NPIV is not in play so this integer
+ 	 * value is sufficient and unique per FC-ID.
+ 	 */
+-	n = snprintf(symbol, size, "%d", vport->phba->brd_no);
++	n = scnprintf(symbol, size, "%d", vport->phba->brd_no);
+ 	return n;
+ }
+ 
+@@ -1444,26 +1444,26 @@ lpfc_vport_symbolic_node_name(struct lpfc_vport *vport, char *symbol,
+ 
+ 	lpfc_decode_firmware_rev(vport->phba, fwrev, 0);
+ 
+-	n = snprintf(symbol, size, "Emulex %s", vport->phba->ModelName);
++	n = scnprintf(symbol, size, "Emulex %s", vport->phba->ModelName);
+ 	if (size < n)
+ 		return n;
+ 
+-	n += snprintf(symbol + n, size - n, " FV%s", fwrev);
++	n += scnprintf(symbol + n, size - n, " FV%s", fwrev);
+ 	if (size < n)
+ 		return n;
+ 
+-	n += snprintf(symbol + n, size - n, " DV%s.",
++	n += scnprintf(symbol + n, size - n, " DV%s.",
+ 		      lpfc_release_version);
+ 	if (size < n)
+ 		return n;
+ 
+-	n += snprintf(symbol + n, size - n, " HN:%s.",
++	n += scnprintf(symbol + n, size - n, " HN:%s.",
+ 		      init_utsname()->nodename);
+ 	if (size < n)
+ 		return n;
+ 
+ 	/* Note :- OS name is "Linux" */
+-	n += snprintf(symbol + n, size - n, " OS:%s\n",
++	n += scnprintf(symbol + n, size - n, " OS:%s\n",
+ 		      init_utsname()->sysname);
+ 	return n;
+ }
+diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
+index a58f0b3f03a9..361521dd5bd8 100644
+--- a/drivers/scsi/lpfc/lpfc_debugfs.c
++++ b/drivers/scsi/lpfc/lpfc_debugfs.c
+@@ -170,7 +170,7 @@ lpfc_debugfs_disc_trc_data(struct lpfc_vport *vport, char *buf, int size)
+ 		snprintf(buffer,
+ 			LPFC_DEBUG_TRC_ENTRY_SIZE, "%010d:%010d ms:%s\n",
+ 			dtp->seq_cnt, ms, dtp->fmt);
+-		len +=  snprintf(buf+len, size-len, buffer,
++		len +=  scnprintf(buf+len, size-len, buffer,
+ 			dtp->data1, dtp->data2, dtp->data3);
+ 	}
+ 	for (i = 0; i < index; i++) {
+@@ -181,7 +181,7 @@ lpfc_debugfs_disc_trc_data(struct lpfc_vport *vport, char *buf, int size)
+ 		snprintf(buffer,
+ 			LPFC_DEBUG_TRC_ENTRY_SIZE, "%010d:%010d ms:%s\n",
+ 			dtp->seq_cnt, ms, dtp->fmt);
+-		len +=  snprintf(buf+len, size-len, buffer,
++		len +=  scnprintf(buf+len, size-len, buffer,
+ 			dtp->data1, dtp->data2, dtp->data3);
+ 	}
+ 
+@@ -236,7 +236,7 @@ lpfc_debugfs_slow_ring_trc_data(struct lpfc_hba *phba, char *buf, int size)
+ 		snprintf(buffer,
+ 			LPFC_DEBUG_TRC_ENTRY_SIZE, "%010d:%010d ms:%s\n",
+ 			dtp->seq_cnt, ms, dtp->fmt);
+-		len +=  snprintf(buf+len, size-len, buffer,
++		len +=  scnprintf(buf+len, size-len, buffer,
+ 			dtp->data1, dtp->data2, dtp->data3);
+ 	}
+ 	for (i = 0; i < index; i++) {
+@@ -247,7 +247,7 @@ lpfc_debugfs_slow_ring_trc_data(struct lpfc_hba *phba, char *buf, int size)
+ 		snprintf(buffer,
+ 			LPFC_DEBUG_TRC_ENTRY_SIZE, "%010d:%010d ms:%s\n",
+ 			dtp->seq_cnt, ms, dtp->fmt);
+-		len +=  snprintf(buf+len, size-len, buffer,
++		len +=  scnprintf(buf+len, size-len, buffer,
+ 			dtp->data1, dtp->data2, dtp->data3);
+ 	}
+ 
+@@ -307,7 +307,7 @@ lpfc_debugfs_hbqinfo_data(struct lpfc_hba *phba, char *buf, int size)
+ 
+ 	i = lpfc_debugfs_last_hbq;
+ 
+-	len +=  snprintf(buf+len, size-len, "HBQ %d Info\n", i);
++	len +=  scnprintf(buf+len, size-len, "HBQ %d Info\n", i);
+ 
+ 	hbqs =  &phba->hbqs[i];
+ 	posted = 0;
+@@ -315,21 +315,21 @@ lpfc_debugfs_hbqinfo_data(struct lpfc_hba *phba, char *buf, int size)
+ 		posted++;
+ 
+ 	hip =  lpfc_hbq_defs[i];
+-	len +=  snprintf(buf+len, size-len,
++	len +=  scnprintf(buf+len, size-len,
+ 		"idx:%d prof:%d rn:%d bufcnt:%d icnt:%d acnt:%d posted %d\n",
+ 		hip->hbq_index, hip->profile, hip->rn,
+ 		hip->buffer_count, hip->init_count, hip->add_count, posted);
+ 
+ 	raw_index = phba->hbq_get[i];
+ 	getidx = le32_to_cpu(raw_index);
+-	len +=  snprintf(buf+len, size-len,
++	len +=  scnprintf(buf+len, size-len,
+ 		"entries:%d bufcnt:%d Put:%d nPut:%d localGet:%d hbaGet:%d\n",
+ 		hbqs->entry_count, hbqs->buffer_count, hbqs->hbqPutIdx,
+ 		hbqs->next_hbqPutIdx, hbqs->local_hbqGetIdx, getidx);
+ 
+ 	hbqe = (struct lpfc_hbq_entry *) phba->hbqs[i].hbq_virt;
+ 	for (j=0; j<hbqs->entry_count; j++) {
+-		len +=  snprintf(buf+len, size-len,
++		len +=  scnprintf(buf+len, size-len,
+ 			"%03d: %08x %04x %05x ", j,
+ 			le32_to_cpu(hbqe->bde.addrLow),
+ 			le32_to_cpu(hbqe->bde.tus.w),
+@@ -341,14 +341,16 @@ lpfc_debugfs_hbqinfo_data(struct lpfc_hba *phba, char *buf, int size)
+ 		low = hbqs->hbqPutIdx - posted;
+ 		if (low >= 0) {
+ 			if ((j >= hbqs->hbqPutIdx) || (j < low)) {
+-				len +=  snprintf(buf+len, size-len, "Unused\n");
++				len +=  scnprintf(buf + len, size - len,
++						"Unused\n");
+ 				goto skipit;
+ 			}
+ 		}
+ 		else {
+ 			if ((j >= hbqs->hbqPutIdx) &&
+ 				(j < (hbqs->entry_count+low))) {
+-				len +=  snprintf(buf+len, size-len, "Unused\n");
++				len +=  scnprintf(buf + len, size - len,
++						"Unused\n");
+ 				goto skipit;
+ 			}
+ 		}
+@@ -358,7 +360,7 @@ lpfc_debugfs_hbqinfo_data(struct lpfc_hba *phba, char *buf, int size)
+ 			hbq_buf = container_of(d_buf, struct hbq_dmabuf, dbuf);
+ 			phys = ((uint64_t)hbq_buf->dbuf.phys & 0xffffffff);
+ 			if (phys == le32_to_cpu(hbqe->bde.addrLow)) {
+-				len +=  snprintf(buf+len, size-len,
++				len +=  scnprintf(buf+len, size-len,
+ 					"Buf%d: %p %06x\n", i,
+ 					hbq_buf->dbuf.virt, hbq_buf->tag);
+ 				found = 1;
+@@ -367,7 +369,7 @@ lpfc_debugfs_hbqinfo_data(struct lpfc_hba *phba, char *buf, int size)
+ 			i++;
+ 		}
+ 		if (!found) {
+-			len +=  snprintf(buf+len, size-len, "No DMAinfo?\n");
++			len +=  scnprintf(buf+len, size-len, "No DMAinfo?\n");
+ 		}
+ skipit:
+ 		hbqe++;
+@@ -413,7 +415,7 @@ lpfc_debugfs_dumpHBASlim_data(struct lpfc_hba *phba, char *buf, int size)
+ 	off = 0;
+ 	spin_lock_irq(&phba->hbalock);
+ 
+-	len +=  snprintf(buf+len, size-len, "HBA SLIM\n");
++	len +=  scnprintf(buf+len, size-len, "HBA SLIM\n");
+ 	lpfc_memcpy_from_slim(buffer,
+ 		phba->MBslimaddr + lpfc_debugfs_last_hba_slim_off, 1024);
+ 
+@@ -427,7 +429,7 @@ lpfc_debugfs_dumpHBASlim_data(struct lpfc_hba *phba, char *buf, int size)
+ 
+ 	i = 1024;
+ 	while (i > 0) {
+-		len +=  snprintf(buf+len, size-len,
++		len +=  scnprintf(buf+len, size-len,
+ 		"%08x: %08x %08x %08x %08x %08x %08x %08x %08x\n",
+ 		off, *ptr, *(ptr+1), *(ptr+2), *(ptr+3), *(ptr+4),
+ 		*(ptr+5), *(ptr+6), *(ptr+7));
+@@ -471,11 +473,11 @@ lpfc_debugfs_dumpHostSlim_data(struct lpfc_hba *phba, char *buf, int size)
+ 	off = 0;
+ 	spin_lock_irq(&phba->hbalock);
+ 
+-	len +=  snprintf(buf+len, size-len, "SLIM Mailbox\n");
++	len +=  scnprintf(buf+len, size-len, "SLIM Mailbox\n");
+ 	ptr = (uint32_t *)phba->slim2p.virt;
+ 	i = sizeof(MAILBOX_t);
+ 	while (i > 0) {
+-		len +=  snprintf(buf+len, size-len,
++		len +=  scnprintf(buf+len, size-len,
+ 		"%08x: %08x %08x %08x %08x %08x %08x %08x %08x\n",
+ 		off, *ptr, *(ptr+1), *(ptr+2), *(ptr+3), *(ptr+4),
+ 		*(ptr+5), *(ptr+6), *(ptr+7));
+@@ -484,11 +486,11 @@ lpfc_debugfs_dumpHostSlim_data(struct lpfc_hba *phba, char *buf, int size)
+ 		off += (8 * sizeof(uint32_t));
+ 	}
+ 
+-	len +=  snprintf(buf+len, size-len, "SLIM PCB\n");
++	len +=  scnprintf(buf+len, size-len, "SLIM PCB\n");
+ 	ptr = (uint32_t *)phba->pcb;
+ 	i = sizeof(PCB_t);
+ 	while (i > 0) {
+-		len +=  snprintf(buf+len, size-len,
++		len +=  scnprintf(buf+len, size-len,
+ 		"%08x: %08x %08x %08x %08x %08x %08x %08x %08x\n",
+ 		off, *ptr, *(ptr+1), *(ptr+2), *(ptr+3), *(ptr+4),
+ 		*(ptr+5), *(ptr+6), *(ptr+7));
+@@ -501,7 +503,7 @@ lpfc_debugfs_dumpHostSlim_data(struct lpfc_hba *phba, char *buf, int size)
+ 		for (i = 0; i < 4; i++) {
+ 			pgpp = &phba->port_gp[i];
+ 			pring = &psli->sli3_ring[i];
+-			len +=  snprintf(buf+len, size-len,
++			len +=  scnprintf(buf+len, size-len,
+ 					 "Ring %d: CMD GetInx:%d "
+ 					 "(Max:%d Next:%d "
+ 					 "Local:%d flg:x%x)  "
+@@ -518,7 +520,7 @@ lpfc_debugfs_dumpHostSlim_data(struct lpfc_hba *phba, char *buf, int size)
+ 		word1 = readl(phba->CAregaddr);
+ 		word2 = readl(phba->HSregaddr);
+ 		word3 = readl(phba->HCregaddr);
+-		len +=  snprintf(buf+len, size-len, "HA:%08x CA:%08x HS:%08x "
++		len +=  scnprintf(buf+len, size-len, "HA:%08x CA:%08x HS:%08x "
+ 				 "HC:%08x\n", word0, word1, word2, word3);
+ 	}
+ 	spin_unlock_irq(&phba->hbalock);
+@@ -556,12 +558,12 @@ lpfc_debugfs_nodelist_data(struct lpfc_vport *vport, char *buf, int size)
+ 	cnt = (LPFC_NODELIST_SIZE / LPFC_NODELIST_ENTRY_SIZE);
+ 	outio = 0;
+ 
+-	len += snprintf(buf+len, size-len, "\nFCP Nodelist Entries ...\n");
++	len += scnprintf(buf+len, size-len, "\nFCP Nodelist Entries ...\n");
+ 	spin_lock_irq(shost->host_lock);
+ 	list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) {
+ 		iocnt = 0;
+ 		if (!cnt) {
+-			len +=  snprintf(buf+len, size-len,
++			len +=  scnprintf(buf+len, size-len,
+ 				"Missing Nodelist Entries\n");
+ 			break;
+ 		}
+@@ -599,63 +601,63 @@ lpfc_debugfs_nodelist_data(struct lpfc_vport *vport, char *buf, int size)
+ 		default:
+ 			statep = "UNKNOWN";
+ 		}
+-		len += snprintf(buf+len, size-len, "%s DID:x%06x ",
++		len += scnprintf(buf+len, size-len, "%s DID:x%06x ",
+ 				statep, ndlp->nlp_DID);
+-		len += snprintf(buf+len, size-len,
++		len += scnprintf(buf+len, size-len,
+ 				"WWPN x%llx ",
+ 				wwn_to_u64(ndlp->nlp_portname.u.wwn));
+-		len += snprintf(buf+len, size-len,
++		len += scnprintf(buf+len, size-len,
+ 				"WWNN x%llx ",
+ 				wwn_to_u64(ndlp->nlp_nodename.u.wwn));
+ 		if (ndlp->nlp_flag & NLP_RPI_REGISTERED)
+-			len += snprintf(buf+len, size-len, "RPI:%03d ",
++			len += scnprintf(buf+len, size-len, "RPI:%03d ",
+ 					ndlp->nlp_rpi);
+ 		else
+-			len += snprintf(buf+len, size-len, "RPI:none ");
+-		len +=  snprintf(buf+len, size-len, "flag:x%08x ",
++			len += scnprintf(buf+len, size-len, "RPI:none ");
++		len +=  scnprintf(buf+len, size-len, "flag:x%08x ",
+ 			ndlp->nlp_flag);
+ 		if (!ndlp->nlp_type)
+-			len += snprintf(buf+len, size-len, "UNKNOWN_TYPE ");
++			len += scnprintf(buf+len, size-len, "UNKNOWN_TYPE ");
+ 		if (ndlp->nlp_type & NLP_FC_NODE)
+-			len += snprintf(buf+len, size-len, "FC_NODE ");
++			len += scnprintf(buf+len, size-len, "FC_NODE ");
+ 		if (ndlp->nlp_type & NLP_FABRIC) {
+-			len += snprintf(buf+len, size-len, "FABRIC ");
++			len += scnprintf(buf+len, size-len, "FABRIC ");
+ 			iocnt = 0;
+ 		}
+ 		if (ndlp->nlp_type & NLP_FCP_TARGET)
+-			len += snprintf(buf+len, size-len, "FCP_TGT sid:%d ",
++			len += scnprintf(buf+len, size-len, "FCP_TGT sid:%d ",
+ 				ndlp->nlp_sid);
+ 		if (ndlp->nlp_type & NLP_FCP_INITIATOR)
+-			len += snprintf(buf+len, size-len, "FCP_INITIATOR ");
++			len += scnprintf(buf+len, size-len, "FCP_INITIATOR ");
+ 		if (ndlp->nlp_type & NLP_NVME_TARGET)
+-			len += snprintf(buf + len,
++			len += scnprintf(buf + len,
+ 					size - len, "NVME_TGT sid:%d ",
+ 					NLP_NO_SID);
+ 		if (ndlp->nlp_type & NLP_NVME_INITIATOR)
+-			len += snprintf(buf + len,
++			len += scnprintf(buf + len,
+ 					size - len, "NVME_INITIATOR ");
+-		len += snprintf(buf+len, size-len, "usgmap:%x ",
++		len += scnprintf(buf+len, size-len, "usgmap:%x ",
+ 			ndlp->nlp_usg_map);
+-		len += snprintf(buf+len, size-len, "refcnt:%x",
++		len += scnprintf(buf+len, size-len, "refcnt:%x",
+ 			kref_read(&ndlp->kref));
+ 		if (iocnt) {
+ 			i = atomic_read(&ndlp->cmd_pending);
+-			len += snprintf(buf + len, size - len,
++			len += scnprintf(buf + len, size - len,
+ 					" OutIO:x%x Qdepth x%x",
+ 					i, ndlp->cmd_qdepth);
+ 			outio += i;
+ 		}
+-		len += snprintf(buf + len, size - len, "defer:%x ",
++		len += scnprintf(buf + len, size - len, "defer:%x ",
+ 			ndlp->nlp_defer_did);
+-		len +=  snprintf(buf+len, size-len, "\n");
++		len +=  scnprintf(buf+len, size-len, "\n");
+ 	}
+ 	spin_unlock_irq(shost->host_lock);
+ 
+-	len += snprintf(buf + len, size - len,
++	len += scnprintf(buf + len, size - len,
+ 			"\nOutstanding IO x%x\n",  outio);
+ 
+ 	if (phba->nvmet_support && phba->targetport && (vport == phba->pport)) {
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"\nNVME Targetport Entry ...\n");
+ 
+ 		/* Port state is only one of two values for now. */
+@@ -663,18 +665,18 @@ lpfc_debugfs_nodelist_data(struct lpfc_vport *vport, char *buf, int size)
+ 			statep = "REGISTERED";
+ 		else
+ 			statep = "INIT";
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"TGT WWNN x%llx WWPN x%llx State %s\n",
+ 				wwn_to_u64(vport->fc_nodename.u.wwn),
+ 				wwn_to_u64(vport->fc_portname.u.wwn),
+ 				statep);
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"    Targetport DID x%06x\n",
+ 				phba->targetport->port_id);
+ 		goto out_exit;
+ 	}
+ 
+-	len += snprintf(buf + len, size - len,
++	len += scnprintf(buf + len, size - len,
+ 				"\nNVME Lport/Rport Entries ...\n");
+ 
+ 	localport = vport->localport;
+@@ -689,11 +691,11 @@ lpfc_debugfs_nodelist_data(struct lpfc_vport *vport, char *buf, int size)
+ 	else
+ 		statep = "UNKNOWN ";
+ 
+-	len += snprintf(buf + len, size - len,
++	len += scnprintf(buf + len, size - len,
+ 			"Lport DID x%06x PortState %s\n",
+ 			localport->port_id, statep);
+ 
+-	len += snprintf(buf + len, size - len, "\tRport List:\n");
++	len += scnprintf(buf + len, size - len, "\tRport List:\n");
+ 	list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) {
+ 		/* local short-hand pointer. */
+ 		spin_lock(&phba->hbalock);
+@@ -720,32 +722,32 @@ lpfc_debugfs_nodelist_data(struct lpfc_vport *vport, char *buf, int size)
+ 		}
+ 
+ 		/* Tab in to show lport ownership. */
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"\t%s Port ID:x%06x ",
+ 				statep, nrport->port_id);
+-		len += snprintf(buf + len, size - len, "WWPN x%llx ",
++		len += scnprintf(buf + len, size - len, "WWPN x%llx ",
+ 				nrport->port_name);
+-		len += snprintf(buf + len, size - len, "WWNN x%llx ",
++		len += scnprintf(buf + len, size - len, "WWNN x%llx ",
+ 				nrport->node_name);
+ 
+ 		/* An NVME rport can have multiple roles. */
+ 		if (nrport->port_role & FC_PORT_ROLE_NVME_INITIATOR)
+-			len +=  snprintf(buf + len, size - len,
++			len +=  scnprintf(buf + len, size - len,
+ 					 "INITIATOR ");
+ 		if (nrport->port_role & FC_PORT_ROLE_NVME_TARGET)
+-			len +=  snprintf(buf + len, size - len,
++			len +=  scnprintf(buf + len, size - len,
+ 					 "TARGET ");
+ 		if (nrport->port_role & FC_PORT_ROLE_NVME_DISCOVERY)
+-			len +=  snprintf(buf + len, size - len,
++			len +=  scnprintf(buf + len, size - len,
+ 					 "DISCSRVC ");
+ 		if (nrport->port_role & ~(FC_PORT_ROLE_NVME_INITIATOR |
+ 					  FC_PORT_ROLE_NVME_TARGET |
+ 					  FC_PORT_ROLE_NVME_DISCOVERY))
+-			len +=  snprintf(buf + len, size - len,
++			len +=  scnprintf(buf + len, size - len,
+ 					 "UNKNOWN ROLE x%x",
+ 					 nrport->port_role);
+ 		/* Terminate the string. */
+-		len +=  snprintf(buf + len, size - len, "\n");
++		len +=  scnprintf(buf + len, size - len, "\n");
+ 	}
+ 
+ 	spin_unlock_irq(shost->host_lock);
+@@ -784,35 +786,35 @@ lpfc_debugfs_nvmestat_data(struct lpfc_vport *vport, char *buf, int size)
+ 		if (!phba->targetport)
+ 			return len;
+ 		tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private;
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"\nNVME Targetport Statistics\n");
+ 
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"LS: Rcv %08x Drop %08x Abort %08x\n",
+ 				atomic_read(&tgtp->rcv_ls_req_in),
+ 				atomic_read(&tgtp->rcv_ls_req_drop),
+ 				atomic_read(&tgtp->xmt_ls_abort));
+ 		if (atomic_read(&tgtp->rcv_ls_req_in) !=
+ 		    atomic_read(&tgtp->rcv_ls_req_out)) {
+-			len += snprintf(buf + len, size - len,
++			len += scnprintf(buf + len, size - len,
+ 					"Rcv LS: in %08x != out %08x\n",
+ 					atomic_read(&tgtp->rcv_ls_req_in),
+ 					atomic_read(&tgtp->rcv_ls_req_out));
+ 		}
+ 
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"LS: Xmt %08x Drop %08x Cmpl %08x\n",
+ 				atomic_read(&tgtp->xmt_ls_rsp),
+ 				atomic_read(&tgtp->xmt_ls_drop),
+ 				atomic_read(&tgtp->xmt_ls_rsp_cmpl));
+ 
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"LS: RSP Abort %08x xb %08x Err %08x\n",
+ 				atomic_read(&tgtp->xmt_ls_rsp_aborted),
+ 				atomic_read(&tgtp->xmt_ls_rsp_xb_set),
+ 				atomic_read(&tgtp->xmt_ls_rsp_error));
+ 
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"FCP: Rcv %08x Defer %08x Release %08x "
+ 				"Drop %08x\n",
+ 				atomic_read(&tgtp->rcv_fcp_cmd_in),
+@@ -822,13 +824,13 @@ lpfc_debugfs_nvmestat_data(struct lpfc_vport *vport, char *buf, int size)
+ 
+ 		if (atomic_read(&tgtp->rcv_fcp_cmd_in) !=
+ 		    atomic_read(&tgtp->rcv_fcp_cmd_out)) {
+-			len += snprintf(buf + len, size - len,
++			len += scnprintf(buf + len, size - len,
+ 					"Rcv FCP: in %08x != out %08x\n",
+ 					atomic_read(&tgtp->rcv_fcp_cmd_in),
+ 					atomic_read(&tgtp->rcv_fcp_cmd_out));
+ 		}
+ 
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"FCP Rsp: read %08x readrsp %08x "
+ 				"write %08x rsp %08x\n",
+ 				atomic_read(&tgtp->xmt_fcp_read),
+@@ -836,31 +838,31 @@ lpfc_debugfs_nvmestat_data(struct lpfc_vport *vport, char *buf, int size)
+ 				atomic_read(&tgtp->xmt_fcp_write),
+ 				atomic_read(&tgtp->xmt_fcp_rsp));
+ 
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"FCP Rsp Cmpl: %08x err %08x drop %08x\n",
+ 				atomic_read(&tgtp->xmt_fcp_rsp_cmpl),
+ 				atomic_read(&tgtp->xmt_fcp_rsp_error),
+ 				atomic_read(&tgtp->xmt_fcp_rsp_drop));
+ 
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"FCP Rsp Abort: %08x xb %08x xricqe  %08x\n",
+ 				atomic_read(&tgtp->xmt_fcp_rsp_aborted),
+ 				atomic_read(&tgtp->xmt_fcp_rsp_xb_set),
+ 				atomic_read(&tgtp->xmt_fcp_xri_abort_cqe));
+ 
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"ABORT: Xmt %08x Cmpl %08x\n",
+ 				atomic_read(&tgtp->xmt_fcp_abort),
+ 				atomic_read(&tgtp->xmt_fcp_abort_cmpl));
+ 
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"ABORT: Sol %08x  Usol %08x Err %08x Cmpl %08x",
+ 				atomic_read(&tgtp->xmt_abort_sol),
+ 				atomic_read(&tgtp->xmt_abort_unsol),
+ 				atomic_read(&tgtp->xmt_abort_rsp),
+ 				atomic_read(&tgtp->xmt_abort_rsp_error));
+ 
+-		len +=  snprintf(buf + len, size - len, "\n");
++		len +=  scnprintf(buf + len, size - len, "\n");
+ 
+ 		cnt = 0;
+ 		spin_lock(&phba->sli4_hba.abts_nvme_buf_list_lock);
+@@ -871,7 +873,7 @@ lpfc_debugfs_nvmestat_data(struct lpfc_vport *vport, char *buf, int size)
+ 		}
+ 		spin_unlock(&phba->sli4_hba.abts_nvme_buf_list_lock);
+ 		if (cnt) {
+-			len += snprintf(buf + len, size - len,
++			len += scnprintf(buf + len, size - len,
+ 					"ABORT: %d ctx entries\n", cnt);
+ 			spin_lock(&phba->sli4_hba.abts_nvme_buf_list_lock);
+ 			list_for_each_entry_safe(ctxp, next_ctxp,
+@@ -879,7 +881,7 @@ lpfc_debugfs_nvmestat_data(struct lpfc_vport *vport, char *buf, int size)
+ 				    list) {
+ 				if (len >= (size - LPFC_DEBUG_OUT_LINE_SZ))
+ 					break;
+-				len += snprintf(buf + len, size - len,
++				len += scnprintf(buf + len, size - len,
+ 						"Entry: oxid %x state %x "
+ 						"flag %x\n",
+ 						ctxp->oxid, ctxp->state,
+@@ -893,7 +895,7 @@ lpfc_debugfs_nvmestat_data(struct lpfc_vport *vport, char *buf, int size)
+ 		tot += atomic_read(&tgtp->xmt_fcp_release);
+ 		tot = atomic_read(&tgtp->rcv_fcp_cmd_in) - tot;
+ 
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"IO_CTX: %08x  WAIT: cur %08x tot %08x\n"
+ 				"CTX Outstanding %08llx\n",
+ 				phba->sli4_hba.nvmet_xri_cnt,
+@@ -911,10 +913,10 @@ lpfc_debugfs_nvmestat_data(struct lpfc_vport *vport, char *buf, int size)
+ 		if (!lport)
+ 			return len;
+ 
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"\nNVME Lport Statistics\n");
+ 
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"LS: Xmt %016x Cmpl %016x\n",
+ 				atomic_read(&lport->fc4NvmeLsRequests),
+ 				atomic_read(&lport->fc4NvmeLsCmpls));
+@@ -938,20 +940,20 @@ lpfc_debugfs_nvmestat_data(struct lpfc_vport *vport, char *buf, int size)
+ 			if (i >= 32)
+ 				continue;
+ 
+-			len += snprintf(buf + len, PAGE_SIZE - len,
++			len += scnprintf(buf + len, PAGE_SIZE - len,
+ 					"FCP (%d): Rd %016llx Wr %016llx "
+ 					"IO %016llx ",
+ 					i, data1, data2, data3);
+-			len += snprintf(buf + len, PAGE_SIZE - len,
++			len += scnprintf(buf + len, PAGE_SIZE - len,
+ 					"Cmpl %016llx OutIO %016llx\n",
+ 					tot, ((data1 + data2 + data3) - tot));
+ 		}
+-		len += snprintf(buf + len, PAGE_SIZE - len,
++		len += scnprintf(buf + len, PAGE_SIZE - len,
+ 				"Total FCP Cmpl %016llx Issue %016llx "
+ 				"OutIO %016llx\n",
+ 				totin, totout, totout - totin);
+ 
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"LS Xmt Err: Abrt %08x Err %08x  "
+ 				"Cmpl Err: xb %08x Err %08x\n",
+ 				atomic_read(&lport->xmt_ls_abort),
+@@ -959,7 +961,7 @@ lpfc_debugfs_nvmestat_data(struct lpfc_vport *vport, char *buf, int size)
+ 				atomic_read(&lport->cmpl_ls_xb),
+ 				atomic_read(&lport->cmpl_ls_err));
+ 
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"FCP Xmt Err: noxri %06x nondlp %06x "
+ 				"qdepth %06x wqerr %06x err %06x Abrt %06x\n",
+ 				atomic_read(&lport->xmt_fcp_noxri),
+@@ -969,7 +971,7 @@ lpfc_debugfs_nvmestat_data(struct lpfc_vport *vport, char *buf, int size)
+ 				atomic_read(&lport->xmt_fcp_err),
+ 				atomic_read(&lport->xmt_fcp_abort));
+ 
+-		len += snprintf(buf + len, size - len,
++		len += scnprintf(buf + len, size - len,
+ 				"FCP Cmpl Err: xb %08x Err %08x\n",
+ 				atomic_read(&lport->cmpl_fcp_xb),
+ 				atomic_read(&lport->cmpl_fcp_err));
+@@ -1001,58 +1003,58 @@ lpfc_debugfs_nvmektime_data(struct lpfc_vport *vport, char *buf, int size)
+ 
+ 	if (phba->nvmet_support == 0) {
+ 		/* NVME Initiator */
+-		len += snprintf(buf + len, PAGE_SIZE - len,
++		len += scnprintf(buf + len, PAGE_SIZE - len,
+ 				"ktime %s: Total Samples: %lld\n",
+ 				(phba->ktime_on ?  "Enabled" : "Disabled"),
+ 				phba->ktime_data_samples);
+ 		if (phba->ktime_data_samples == 0)
+ 			return len;
+ 
+-		len += snprintf(
++		len += scnprintf(
+ 			buf + len, PAGE_SIZE - len,
+ 			"Segment 1: Last NVME Cmd cmpl "
+ 			"done -to- Start of next NVME cnd (in driver)\n");
+-		len += snprintf(
++		len += scnprintf(
+ 			buf + len, PAGE_SIZE - len,
+ 			"avg:%08lld min:%08lld max %08lld\n",
+ 			div_u64(phba->ktime_seg1_total,
+ 				phba->ktime_data_samples),
+ 			phba->ktime_seg1_min,
+ 			phba->ktime_seg1_max);
+-		len += snprintf(
++		len += scnprintf(
+ 			buf + len, PAGE_SIZE - len,
+ 			"Segment 2: Driver start of NVME cmd "
+ 			"-to- Firmware WQ doorbell\n");
+-		len += snprintf(
++		len += scnprintf(
+ 			buf + len, PAGE_SIZE - len,
+ 			"avg:%08lld min:%08lld max %08lld\n",
+ 			div_u64(phba->ktime_seg2_total,
+ 				phba->ktime_data_samples),
+ 			phba->ktime_seg2_min,
+ 			phba->ktime_seg2_max);
+-		len += snprintf(
++		len += scnprintf(
+ 			buf + len, PAGE_SIZE - len,
+ 			"Segment 3: Firmware WQ doorbell -to- "
+ 			"MSI-X ISR cmpl\n");
+-		len += snprintf(
++		len += scnprintf(
+ 			buf + len, PAGE_SIZE - len,
+ 			"avg:%08lld min:%08lld max %08lld\n",
+ 			div_u64(phba->ktime_seg3_total,
+ 				phba->ktime_data_samples),
+ 			phba->ktime_seg3_min,
+ 			phba->ktime_seg3_max);
+-		len += snprintf(
++		len += scnprintf(
+ 			buf + len, PAGE_SIZE - len,
+ 			"Segment 4: MSI-X ISR cmpl -to- "
+ 			"NVME cmpl done\n");
+-		len += snprintf(
++		len += scnprintf(
+ 			buf + len, PAGE_SIZE - len,
+ 			"avg:%08lld min:%08lld max %08lld\n",
+ 			div_u64(phba->ktime_seg4_total,
+ 				phba->ktime_data_samples),
+ 			phba->ktime_seg4_min,
+ 			phba->ktime_seg4_max);
+-		len += snprintf(
++		len += scnprintf(
+ 			buf + len, PAGE_SIZE - len,
+ 			"Total IO avg time: %08lld\n",
+ 			div_u64(phba->ktime_seg1_total +
+@@ -1064,7 +1066,7 @@ lpfc_debugfs_nvmektime_data(struct lpfc_vport *vport, char *buf, int size)
+ 	}
+ 
+ 	/* NVME Target */
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"ktime %s: Total Samples: %lld %lld\n",
+ 			(phba->ktime_on ? "Enabled" : "Disabled"),
+ 			phba->ktime_data_samples,
+@@ -1072,46 +1074,46 @@ lpfc_debugfs_nvmektime_data(struct lpfc_vport *vport, char *buf, int size)
+ 	if (phba->ktime_data_samples == 0)
+ 		return len;
+ 
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"Segment 1: MSI-X ISR Rcv cmd -to- "
+ 			"cmd pass to NVME Layer\n");
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"avg:%08lld min:%08lld max %08lld\n",
+ 			div_u64(phba->ktime_seg1_total,
+ 				phba->ktime_data_samples),
+ 			phba->ktime_seg1_min,
+ 			phba->ktime_seg1_max);
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"Segment 2: cmd pass to NVME Layer- "
+ 			"-to- Driver rcv cmd OP (action)\n");
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"avg:%08lld min:%08lld max %08lld\n",
+ 			div_u64(phba->ktime_seg2_total,
+ 				phba->ktime_data_samples),
+ 			phba->ktime_seg2_min,
+ 			phba->ktime_seg2_max);
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"Segment 3: Driver rcv cmd OP -to- "
+ 			"Firmware WQ doorbell: cmd\n");
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"avg:%08lld min:%08lld max %08lld\n",
+ 			div_u64(phba->ktime_seg3_total,
+ 				phba->ktime_data_samples),
+ 			phba->ktime_seg3_min,
+ 			phba->ktime_seg3_max);
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"Segment 4: Firmware WQ doorbell: cmd "
+ 			"-to- MSI-X ISR for cmd cmpl\n");
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"avg:%08lld min:%08lld max %08lld\n",
+ 			div_u64(phba->ktime_seg4_total,
+ 				phba->ktime_data_samples),
+ 			phba->ktime_seg4_min,
+ 			phba->ktime_seg4_max);
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"Segment 5: MSI-X ISR for cmd cmpl "
+ 			"-to- NVME layer passed cmd done\n");
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"avg:%08lld min:%08lld max %08lld\n",
+ 			div_u64(phba->ktime_seg5_total,
+ 				phba->ktime_data_samples),
+@@ -1119,10 +1121,10 @@ lpfc_debugfs_nvmektime_data(struct lpfc_vport *vport, char *buf, int size)
+ 			phba->ktime_seg5_max);
+ 
+ 	if (phba->ktime_status_samples == 0) {
+-		len += snprintf(buf + len, PAGE_SIZE-len,
++		len += scnprintf(buf + len, PAGE_SIZE-len,
+ 				"Total: cmd received by MSI-X ISR "
+ 				"-to- cmd completed on wire\n");
+-		len += snprintf(buf + len, PAGE_SIZE-len,
++		len += scnprintf(buf + len, PAGE_SIZE-len,
+ 				"avg:%08lld min:%08lld "
+ 				"max %08lld\n",
+ 				div_u64(phba->ktime_seg10_total,
+@@ -1132,46 +1134,46 @@ lpfc_debugfs_nvmektime_data(struct lpfc_vport *vport, char *buf, int size)
+ 		return len;
+ 	}
+ 
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"Segment 6: NVME layer passed cmd done "
+ 			"-to- Driver rcv rsp status OP\n");
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"avg:%08lld min:%08lld max %08lld\n",
+ 			div_u64(phba->ktime_seg6_total,
+ 				phba->ktime_status_samples),
+ 			phba->ktime_seg6_min,
+ 			phba->ktime_seg6_max);
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"Segment 7: Driver rcv rsp status OP "
+ 			"-to- Firmware WQ doorbell: status\n");
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"avg:%08lld min:%08lld max %08lld\n",
+ 			div_u64(phba->ktime_seg7_total,
+ 				phba->ktime_status_samples),
+ 			phba->ktime_seg7_min,
+ 			phba->ktime_seg7_max);
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"Segment 8: Firmware WQ doorbell: status"
+ 			" -to- MSI-X ISR for status cmpl\n");
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"avg:%08lld min:%08lld max %08lld\n",
+ 			div_u64(phba->ktime_seg8_total,
+ 				phba->ktime_status_samples),
+ 			phba->ktime_seg8_min,
+ 			phba->ktime_seg8_max);
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"Segment 9: MSI-X ISR for status cmpl  "
+ 			"-to- NVME layer passed status done\n");
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"avg:%08lld min:%08lld max %08lld\n",
+ 			div_u64(phba->ktime_seg9_total,
+ 				phba->ktime_status_samples),
+ 			phba->ktime_seg9_min,
+ 			phba->ktime_seg9_max);
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"Total: cmd received by MSI-X ISR -to- "
+ 			"cmd completed on wire\n");
+-	len += snprintf(buf + len, PAGE_SIZE-len,
++	len += scnprintf(buf + len, PAGE_SIZE-len,
+ 			"avg:%08lld min:%08lld max %08lld\n",
+ 			div_u64(phba->ktime_seg10_total,
+ 				phba->ktime_status_samples),
+@@ -1206,7 +1208,7 @@ lpfc_debugfs_nvmeio_trc_data(struct lpfc_hba *phba, char *buf, int size)
+ 		(phba->nvmeio_trc_size - 1);
+ 	skip = phba->nvmeio_trc_output_idx;
+ 
+-	len += snprintf(buf + len, size - len,
++	len += scnprintf(buf + len, size - len,
+ 			"%s IO Trace %s: next_idx %d skip %d size %d\n",
+ 			(phba->nvmet_support ? "NVME" : "NVMET"),
+ 			(state ? "Enabled" : "Disabled"),
+@@ -1228,18 +1230,18 @@ lpfc_debugfs_nvmeio_trc_data(struct lpfc_hba *phba, char *buf, int size)
+ 		if (!dtp->fmt)
+ 			continue;
+ 
+-		len +=  snprintf(buf + len, size - len, dtp->fmt,
++		len +=  scnprintf(buf + len, size - len, dtp->fmt,
+ 			dtp->data1, dtp->data2, dtp->data3);
+ 
+ 		if (phba->nvmeio_trc_output_idx >= phba->nvmeio_trc_size) {
+ 			phba->nvmeio_trc_output_idx = 0;
+-			len += snprintf(buf + len, size - len,
++			len += scnprintf(buf + len, size - len,
+ 					"Trace Complete\n");
+ 			goto out;
+ 		}
+ 
+ 		if (len >= (size - LPFC_DEBUG_OUT_LINE_SZ)) {
+-			len += snprintf(buf + len, size - len,
++			len += scnprintf(buf + len, size - len,
+ 					"Trace Continue (%d of %d)\n",
+ 					phba->nvmeio_trc_output_idx,
+ 					phba->nvmeio_trc_size);
+@@ -1257,18 +1259,18 @@ lpfc_debugfs_nvmeio_trc_data(struct lpfc_hba *phba, char *buf, int size)
+ 		if (!dtp->fmt)
+ 			continue;
+ 
+-		len +=  snprintf(buf + len, size - len, dtp->fmt,
++		len +=  scnprintf(buf + len, size - len, dtp->fmt,
+ 			dtp->data1, dtp->data2, dtp->data3);
+ 
+ 		if (phba->nvmeio_trc_output_idx >= phba->nvmeio_trc_size) {
+ 			phba->nvmeio_trc_output_idx = 0;
+-			len += snprintf(buf + len, size - len,
++			len += scnprintf(buf + len, size - len,
+ 					"Trace Complete\n");
+ 			goto out;
+ 		}
+ 
+ 		if (len >= (size - LPFC_DEBUG_OUT_LINE_SZ)) {
+-			len += snprintf(buf + len, size - len,
++			len += scnprintf(buf + len, size - len,
+ 					"Trace Continue (%d of %d)\n",
+ 					phba->nvmeio_trc_output_idx,
+ 					phba->nvmeio_trc_size);
+@@ -1276,7 +1278,7 @@ lpfc_debugfs_nvmeio_trc_data(struct lpfc_hba *phba, char *buf, int size)
+ 		}
+ 	}
+ 
+-	len += snprintf(buf + len, size - len,
++	len += scnprintf(buf + len, size - len,
+ 			"Trace Done\n");
+ out:
+ 	return len;
+@@ -1308,39 +1310,39 @@ lpfc_debugfs_cpucheck_data(struct lpfc_vport *vport, char *buf, int size)
+ 
+ 	if (phba->nvmet_support == 0) {
+ 		/* NVME Initiator */
+-		len += snprintf(buf + len, PAGE_SIZE - len,
++		len += scnprintf(buf + len, PAGE_SIZE - len,
+ 				"CPUcheck %s\n",
+ 				(phba->cpucheck_on & LPFC_CHECK_NVME_IO ?
+ 					"Enabled" : "Disabled"));
+ 		for (i = 0; i < phba->sli4_hba.num_present_cpu; i++) {
+ 			if (i >= LPFC_CHECK_CPU_CNT)
+ 				break;
+-			len += snprintf(buf + len, PAGE_SIZE - len,
++			len += scnprintf(buf + len, PAGE_SIZE - len,
+ 					"%02d: xmit x%08x cmpl x%08x\n",
+ 					i, phba->cpucheck_xmt_io[i],
+ 					phba->cpucheck_cmpl_io[i]);
+ 			tot_xmt += phba->cpucheck_xmt_io[i];
+ 			tot_cmpl += phba->cpucheck_cmpl_io[i];
+ 		}
+-		len += snprintf(buf + len, PAGE_SIZE - len,
++		len += scnprintf(buf + len, PAGE_SIZE - len,
+ 				"tot:xmit x%08x cmpl x%08x\n",
+ 				tot_xmt, tot_cmpl);
+ 		return len;
+ 	}
+ 
+ 	/* NVME Target */
+-	len += snprintf(buf + len, PAGE_SIZE - len,
++	len += scnprintf(buf + len, PAGE_SIZE - len,
+ 			"CPUcheck %s ",
+ 			(phba->cpucheck_on & LPFC_CHECK_NVMET_IO ?
+ 				"IO Enabled - " : "IO Disabled - "));
+-	len += snprintf(buf + len, PAGE_SIZE - len,
++	len += scnprintf(buf + len, PAGE_SIZE - len,
+ 			"%s\n",
+ 			(phba->cpucheck_on & LPFC_CHECK_NVMET_RCV ?
+ 				"Rcv Enabled\n" : "Rcv Disabled\n"));
+ 	for (i = 0; i < phba->sli4_hba.num_present_cpu; i++) {
+ 		if (i >= LPFC_CHECK_CPU_CNT)
+ 			break;
+-		len += snprintf(buf + len, PAGE_SIZE - len,
++		len += scnprintf(buf + len, PAGE_SIZE - len,
+ 				"%02d: xmit x%08x ccmpl x%08x "
+ 				"cmpl x%08x rcv x%08x\n",
+ 				i, phba->cpucheck_xmt_io[i],
+@@ -1352,7 +1354,7 @@ lpfc_debugfs_cpucheck_data(struct lpfc_vport *vport, char *buf, int size)
+ 		tot_cmpl += phba->cpucheck_cmpl_io[i];
+ 		tot_ccmpl += phba->cpucheck_ccmpl_io[i];
+ 	}
+-	len += snprintf(buf + len, PAGE_SIZE - len,
++	len += scnprintf(buf + len, PAGE_SIZE - len,
+ 			"tot:xmit x%08x ccmpl x%08x cmpl x%08x rcv x%08x\n",
+ 			tot_xmt, tot_ccmpl, tot_cmpl, tot_rcv);
+ 	return len;
+@@ -1797,28 +1799,29 @@ lpfc_debugfs_dif_err_read(struct file *file, char __user *buf,
+ 	int cnt = 0;
+ 
+ 	if (dent == phba->debug_writeGuard)
+-		cnt = snprintf(cbuf, 32, "%u\n", phba->lpfc_injerr_wgrd_cnt);
++		cnt = scnprintf(cbuf, 32, "%u\n", phba->lpfc_injerr_wgrd_cnt);
+ 	else if (dent == phba->debug_writeApp)
+-		cnt = snprintf(cbuf, 32, "%u\n", phba->lpfc_injerr_wapp_cnt);
++		cnt = scnprintf(cbuf, 32, "%u\n", phba->lpfc_injerr_wapp_cnt);
+ 	else if (dent == phba->debug_writeRef)
+-		cnt = snprintf(cbuf, 32, "%u\n", phba->lpfc_injerr_wref_cnt);
++		cnt = scnprintf(cbuf, 32, "%u\n", phba->lpfc_injerr_wref_cnt);
+ 	else if (dent == phba->debug_readGuard)
+-		cnt = snprintf(cbuf, 32, "%u\n", phba->lpfc_injerr_rgrd_cnt);
++		cnt = scnprintf(cbuf, 32, "%u\n", phba->lpfc_injerr_rgrd_cnt);
+ 	else if (dent == phba->debug_readApp)
+-		cnt = snprintf(cbuf, 32, "%u\n", phba->lpfc_injerr_rapp_cnt);
++		cnt = scnprintf(cbuf, 32, "%u\n", phba->lpfc_injerr_rapp_cnt);
+ 	else if (dent == phba->debug_readRef)
+-		cnt = snprintf(cbuf, 32, "%u\n", phba->lpfc_injerr_rref_cnt);
++		cnt = scnprintf(cbuf, 32, "%u\n", phba->lpfc_injerr_rref_cnt);
+ 	else if (dent == phba->debug_InjErrNPortID)
+-		cnt = snprintf(cbuf, 32, "0x%06x\n", phba->lpfc_injerr_nportid);
++		cnt = scnprintf(cbuf, 32, "0x%06x\n",
++				phba->lpfc_injerr_nportid);
+ 	else if (dent == phba->debug_InjErrWWPN) {
+ 		memcpy(&tmp, &phba->lpfc_injerr_wwpn, sizeof(struct lpfc_name));
+ 		tmp = cpu_to_be64(tmp);
+-		cnt = snprintf(cbuf, 32, "0x%016llx\n", tmp);
++		cnt = scnprintf(cbuf, 32, "0x%016llx\n", tmp);
+ 	} else if (dent == phba->debug_InjErrLBA) {
+ 		if (phba->lpfc_injerr_lba == (sector_t)(-1))
+-			cnt = snprintf(cbuf, 32, "off\n");
++			cnt = scnprintf(cbuf, 32, "off\n");
+ 		else
+-			cnt = snprintf(cbuf, 32, "0x%llx\n",
++			cnt = scnprintf(cbuf, 32, "0x%llx\n",
+ 				 (uint64_t) phba->lpfc_injerr_lba);
+ 	} else
+ 		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
+@@ -2624,17 +2627,17 @@ lpfc_idiag_pcicfg_read(struct file *file, char __user *buf, size_t nbytes,
+ 	switch (count) {
+ 	case SIZE_U8: /* byte (8 bits) */
+ 		pci_read_config_byte(pdev, where, &u8val);
+-		len += snprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
+ 				"%03x: %02x\n", where, u8val);
+ 		break;
+ 	case SIZE_U16: /* word (16 bits) */
+ 		pci_read_config_word(pdev, where, &u16val);
+-		len += snprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
+ 				"%03x: %04x\n", where, u16val);
+ 		break;
+ 	case SIZE_U32: /* double word (32 bits) */
+ 		pci_read_config_dword(pdev, where, &u32val);
+-		len += snprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
+ 				"%03x: %08x\n", where, u32val);
+ 		break;
+ 	case LPFC_PCI_CFG_BROWSE: /* browse all */
+@@ -2654,25 +2657,25 @@ pcicfg_browse:
+ 	offset = offset_label;
+ 
+ 	/* Read PCI config space */
+-	len += snprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
+ 			"%03x: ", offset_label);
+ 	while (index > 0) {
+ 		pci_read_config_dword(pdev, offset, &u32val);
+-		len += snprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
+ 				"%08x ", u32val);
+ 		offset += sizeof(uint32_t);
+ 		if (offset >= LPFC_PCI_CFG_SIZE) {
+-			len += snprintf(pbuffer+len,
++			len += scnprintf(pbuffer+len,
+ 					LPFC_PCI_CFG_SIZE-len, "\n");
+ 			break;
+ 		}
+ 		index -= sizeof(uint32_t);
+ 		if (!index)
+-			len += snprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
++			len += scnprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
+ 					"\n");
+ 		else if (!(index % (8 * sizeof(uint32_t)))) {
+ 			offset_label += (8 * sizeof(uint32_t));
+-			len += snprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
++			len += scnprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
+ 					"\n%03x: ", offset_label);
+ 		}
+ 	}
+@@ -2943,7 +2946,7 @@ lpfc_idiag_baracc_read(struct file *file, char __user *buf, size_t nbytes,
+ 	if (acc_range == SINGLE_WORD) {
+ 		offset_run = offset;
+ 		u32val = readl(mem_mapped_bar + offset_run);
+-		len += snprintf(pbuffer+len, LPFC_PCI_BAR_RD_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_PCI_BAR_RD_BUF_SIZE-len,
+ 				"%05x: %08x\n", offset_run, u32val);
+ 	} else
+ 		goto baracc_browse;
+@@ -2957,35 +2960,35 @@ baracc_browse:
+ 	offset_run = offset_label;
+ 
+ 	/* Read PCI bar memory mapped space */
+-	len += snprintf(pbuffer+len, LPFC_PCI_BAR_RD_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_PCI_BAR_RD_BUF_SIZE-len,
+ 			"%05x: ", offset_label);
+ 	index = LPFC_PCI_BAR_RD_SIZE;
+ 	while (index > 0) {
+ 		u32val = readl(mem_mapped_bar + offset_run);
+-		len += snprintf(pbuffer+len, LPFC_PCI_BAR_RD_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_PCI_BAR_RD_BUF_SIZE-len,
+ 				"%08x ", u32val);
+ 		offset_run += sizeof(uint32_t);
+ 		if (acc_range == LPFC_PCI_BAR_BROWSE) {
+ 			if (offset_run >= bar_size) {
+-				len += snprintf(pbuffer+len,
++				len += scnprintf(pbuffer+len,
+ 					LPFC_PCI_BAR_RD_BUF_SIZE-len, "\n");
+ 				break;
+ 			}
+ 		} else {
+ 			if (offset_run >= offset +
+ 			    (acc_range * sizeof(uint32_t))) {
+-				len += snprintf(pbuffer+len,
++				len += scnprintf(pbuffer+len,
+ 					LPFC_PCI_BAR_RD_BUF_SIZE-len, "\n");
+ 				break;
+ 			}
+ 		}
+ 		index -= sizeof(uint32_t);
+ 		if (!index)
+-			len += snprintf(pbuffer+len,
++			len += scnprintf(pbuffer+len,
+ 					LPFC_PCI_BAR_RD_BUF_SIZE-len, "\n");
+ 		else if (!(index % (8 * sizeof(uint32_t)))) {
+ 			offset_label += (8 * sizeof(uint32_t));
+-			len += snprintf(pbuffer+len,
++			len += scnprintf(pbuffer+len,
+ 					LPFC_PCI_BAR_RD_BUF_SIZE-len,
+ 					"\n%05x: ", offset_label);
+ 		}
+@@ -3158,19 +3161,19 @@ __lpfc_idiag_print_wq(struct lpfc_queue *qp, char *wqtype,
+ 	if (!qp)
+ 		return len;
+ 
+-	len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
++	len += scnprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
+ 			"\t\t%s WQ info: ", wqtype);
+-	len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
++	len += scnprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
+ 			"AssocCQID[%04d]: WQ-STAT[oflow:x%x posted:x%llx]\n",
+ 			qp->assoc_qid, qp->q_cnt_1,
+ 			(unsigned long long)qp->q_cnt_4);
+-	len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
++	len += scnprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
+ 			"\t\tWQID[%02d], QE-CNT[%04d], QE-SZ[%04d], "
+ 			"HST-IDX[%04d], PRT-IDX[%04d], PST[%03d]",
+ 			qp->queue_id, qp->entry_count,
+ 			qp->entry_size, qp->host_index,
+ 			qp->hba_index, qp->entry_repost);
+-	len +=  snprintf(pbuffer + len,
++	len +=  scnprintf(pbuffer + len,
+ 			LPFC_QUE_INFO_GET_BUF_SIZE - len, "\n");
+ 	return len;
+ }
+@@ -3208,21 +3211,21 @@ __lpfc_idiag_print_cq(struct lpfc_queue *qp, char *cqtype,
+ 	if (!qp)
+ 		return len;
+ 
+-	len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
++	len += scnprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
+ 			"\t%s CQ info: ", cqtype);
+-	len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
++	len += scnprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
+ 			"AssocEQID[%02d]: CQ STAT[max:x%x relw:x%x "
+ 			"xabt:x%x wq:x%llx]\n",
+ 			qp->assoc_qid, qp->q_cnt_1, qp->q_cnt_2,
+ 			qp->q_cnt_3, (unsigned long long)qp->q_cnt_4);
+-	len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
++	len += scnprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
+ 			"\tCQID[%02d], QE-CNT[%04d], QE-SZ[%04d], "
+ 			"HST-IDX[%04d], PRT-IDX[%04d], PST[%03d]",
+ 			qp->queue_id, qp->entry_count,
+ 			qp->entry_size, qp->host_index,
+ 			qp->hba_index, qp->entry_repost);
+ 
+-	len +=  snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len, "\n");
++	len +=  scnprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len, "\n");
+ 
+ 	return len;
+ }
+@@ -3234,19 +3237,19 @@ __lpfc_idiag_print_rqpair(struct lpfc_queue *qp, struct lpfc_queue *datqp,
+ 	if (!qp || !datqp)
+ 		return len;
+ 
+-	len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
++	len += scnprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
+ 			"\t\t%s RQ info: ", rqtype);
+-	len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
++	len += scnprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
+ 			"AssocCQID[%02d]: RQ-STAT[nopost:x%x nobuf:x%x "
+ 			"posted:x%x rcv:x%llx]\n",
+ 			qp->assoc_qid, qp->q_cnt_1, qp->q_cnt_2,
+ 			qp->q_cnt_3, (unsigned long long)qp->q_cnt_4);
+-	len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
++	len += scnprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
+ 			"\t\tHQID[%02d], QE-CNT[%04d], QE-SZ[%04d], "
+ 			"HST-IDX[%04d], PRT-IDX[%04d], PST[%03d]\n",
+ 			qp->queue_id, qp->entry_count, qp->entry_size,
+ 			qp->host_index, qp->hba_index, qp->entry_repost);
+-	len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
++	len += scnprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
+ 			"\t\tDQID[%02d], QE-CNT[%04d], QE-SZ[%04d], "
+ 			"HST-IDX[%04d], PRT-IDX[%04d], PST[%03d]\n",
+ 			datqp->queue_id, datqp->entry_count,
+@@ -3331,17 +3334,17 @@ __lpfc_idiag_print_eq(struct lpfc_queue *qp, char *eqtype,
+ 	if (!qp)
+ 		return len;
+ 
+-	len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
++	len += scnprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
+ 			"\n%s EQ info: EQ-STAT[max:x%x noE:x%x "
+ 			"cqe_proc:x%x eqe_proc:x%llx eqd %d]\n",
+ 			eqtype, qp->q_cnt_1, qp->q_cnt_2, qp->q_cnt_3,
+ 			(unsigned long long)qp->q_cnt_4, qp->q_mode);
+-	len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
++	len += scnprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
+ 			"EQID[%02d], QE-CNT[%04d], QE-SZ[%04d], "
+ 			"HST-IDX[%04d], PRT-IDX[%04d], PST[%03d]",
+ 			qp->queue_id, qp->entry_count, qp->entry_size,
+ 			qp->host_index, qp->hba_index, qp->entry_repost);
+-	len +=  snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len, "\n");
++	len +=  scnprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len, "\n");
+ 
+ 	return len;
+ }
+@@ -3399,7 +3402,7 @@ lpfc_idiag_queinfo_read(struct file *file, char __user *buf, size_t nbytes,
+ 			if (phba->cfg_fof == 0)
+ 				phba->lpfc_idiag_last_eq = 0;
+ 
+-		len += snprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
++		len += scnprintf(pbuffer + len, LPFC_QUE_INFO_GET_BUF_SIZE - len,
+ 					"EQ %d out of %d HBA EQs\n",
+ 					x, phba->io_channel_irqs);
+ 
+@@ -3512,7 +3515,7 @@ fof:
+ 	return simple_read_from_buffer(buf, nbytes, ppos, pbuffer, len);
+ 
+ too_big:
+-	len +=  snprintf(pbuffer + len,
++	len +=  scnprintf(pbuffer + len,
+ 		LPFC_QUE_INFO_GET_BUF_SIZE - len, "Truncated ...\n");
+ out:
+ 	spin_unlock_irq(&phba->hbalock);
+@@ -3568,22 +3571,22 @@ lpfc_idiag_queacc_read_qe(char *pbuffer, int len, struct lpfc_queue *pque,
+ 		return 0;
+ 
+ 	esize = pque->entry_size;
+-	len += snprintf(pbuffer+len, LPFC_QUE_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_QUE_ACC_BUF_SIZE-len,
+ 			"QE-INDEX[%04d]:\n", index);
+ 
+ 	offset = 0;
+ 	pentry = pque->qe[index].address;
+ 	while (esize > 0) {
+-		len += snprintf(pbuffer+len, LPFC_QUE_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_QUE_ACC_BUF_SIZE-len,
+ 				"%08x ", *pentry);
+ 		pentry++;
+ 		offset += sizeof(uint32_t);
+ 		esize -= sizeof(uint32_t);
+ 		if (esize > 0 && !(offset % (4 * sizeof(uint32_t))))
+-			len += snprintf(pbuffer+len,
++			len += scnprintf(pbuffer+len,
+ 					LPFC_QUE_ACC_BUF_SIZE-len, "\n");
+ 	}
+-	len += snprintf(pbuffer+len, LPFC_QUE_ACC_BUF_SIZE-len, "\n");
++	len += scnprintf(pbuffer+len, LPFC_QUE_ACC_BUF_SIZE-len, "\n");
+ 
+ 	return len;
+ }
+@@ -3989,27 +3992,27 @@ lpfc_idiag_drbacc_read_reg(struct lpfc_hba *phba, char *pbuffer,
+ 
+ 	switch (drbregid) {
+ 	case LPFC_DRB_EQ:
+-		len += snprintf(pbuffer + len, LPFC_DRB_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer + len, LPFC_DRB_ACC_BUF_SIZE-len,
+ 				"EQ-DRB-REG: 0x%08x\n",
+ 				readl(phba->sli4_hba.EQDBregaddr));
+ 		break;
+ 	case LPFC_DRB_CQ:
+-		len += snprintf(pbuffer + len, LPFC_DRB_ACC_BUF_SIZE - len,
++		len += scnprintf(pbuffer + len, LPFC_DRB_ACC_BUF_SIZE - len,
+ 				"CQ-DRB-REG: 0x%08x\n",
+ 				readl(phba->sli4_hba.CQDBregaddr));
+ 		break;
+ 	case LPFC_DRB_MQ:
+-		len += snprintf(pbuffer+len, LPFC_DRB_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_DRB_ACC_BUF_SIZE-len,
+ 				"MQ-DRB-REG:   0x%08x\n",
+ 				readl(phba->sli4_hba.MQDBregaddr));
+ 		break;
+ 	case LPFC_DRB_WQ:
+-		len += snprintf(pbuffer+len, LPFC_DRB_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_DRB_ACC_BUF_SIZE-len,
+ 				"WQ-DRB-REG:   0x%08x\n",
+ 				readl(phba->sli4_hba.WQDBregaddr));
+ 		break;
+ 	case LPFC_DRB_RQ:
+-		len += snprintf(pbuffer+len, LPFC_DRB_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_DRB_ACC_BUF_SIZE-len,
+ 				"RQ-DRB-REG:   0x%08x\n",
+ 				readl(phba->sli4_hba.RQDBregaddr));
+ 		break;
+@@ -4199,37 +4202,37 @@ lpfc_idiag_ctlacc_read_reg(struct lpfc_hba *phba, char *pbuffer,
+ 
+ 	switch (ctlregid) {
+ 	case LPFC_CTL_PORT_SEM:
+-		len += snprintf(pbuffer+len, LPFC_CTL_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_CTL_ACC_BUF_SIZE-len,
+ 				"Port SemReg:   0x%08x\n",
+ 				readl(phba->sli4_hba.conf_regs_memmap_p +
+ 				      LPFC_CTL_PORT_SEM_OFFSET));
+ 		break;
+ 	case LPFC_CTL_PORT_STA:
+-		len += snprintf(pbuffer+len, LPFC_CTL_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_CTL_ACC_BUF_SIZE-len,
+ 				"Port StaReg:   0x%08x\n",
+ 				readl(phba->sli4_hba.conf_regs_memmap_p +
+ 				      LPFC_CTL_PORT_STA_OFFSET));
+ 		break;
+ 	case LPFC_CTL_PORT_CTL:
+-		len += snprintf(pbuffer+len, LPFC_CTL_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_CTL_ACC_BUF_SIZE-len,
+ 				"Port CtlReg:   0x%08x\n",
+ 				readl(phba->sli4_hba.conf_regs_memmap_p +
+ 				      LPFC_CTL_PORT_CTL_OFFSET));
+ 		break;
+ 	case LPFC_CTL_PORT_ER1:
+-		len += snprintf(pbuffer+len, LPFC_CTL_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_CTL_ACC_BUF_SIZE-len,
+ 				"Port Er1Reg:   0x%08x\n",
+ 				readl(phba->sli4_hba.conf_regs_memmap_p +
+ 				      LPFC_CTL_PORT_ER1_OFFSET));
+ 		break;
+ 	case LPFC_CTL_PORT_ER2:
+-		len += snprintf(pbuffer+len, LPFC_CTL_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_CTL_ACC_BUF_SIZE-len,
+ 				"Port Er2Reg:   0x%08x\n",
+ 				readl(phba->sli4_hba.conf_regs_memmap_p +
+ 				      LPFC_CTL_PORT_ER2_OFFSET));
+ 		break;
+ 	case LPFC_CTL_PDEV_CTL:
+-		len += snprintf(pbuffer+len, LPFC_CTL_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_CTL_ACC_BUF_SIZE-len,
+ 				"PDev CtlReg:   0x%08x\n",
+ 				readl(phba->sli4_hba.conf_regs_memmap_p +
+ 				      LPFC_CTL_PDEV_CTL_OFFSET));
+@@ -4422,13 +4425,13 @@ lpfc_idiag_mbxacc_get_setup(struct lpfc_hba *phba, char *pbuffer)
+ 	mbx_dump_cnt = idiag.cmd.data[IDIAG_MBXACC_DPCNT_INDX];
+ 	mbx_word_cnt = idiag.cmd.data[IDIAG_MBXACC_WDCNT_INDX];
+ 
+-	len += snprintf(pbuffer+len, LPFC_MBX_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_MBX_ACC_BUF_SIZE-len,
+ 			"mbx_dump_map: 0x%08x\n", mbx_dump_map);
+-	len += snprintf(pbuffer+len, LPFC_MBX_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_MBX_ACC_BUF_SIZE-len,
+ 			"mbx_dump_cnt: %04d\n", mbx_dump_cnt);
+-	len += snprintf(pbuffer+len, LPFC_MBX_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_MBX_ACC_BUF_SIZE-len,
+ 			"mbx_word_cnt: %04d\n", mbx_word_cnt);
+-	len += snprintf(pbuffer+len, LPFC_MBX_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_MBX_ACC_BUF_SIZE-len,
+ 			"mbx_mbox_cmd: 0x%02x\n", mbx_mbox_cmd);
+ 
+ 	return len;
+@@ -4577,35 +4580,35 @@ lpfc_idiag_extacc_avail_get(struct lpfc_hba *phba, char *pbuffer, int len)
+ {
+ 	uint16_t ext_cnt, ext_size;
+ 
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"\nAvailable Extents Information:\n");
+ 
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"\tPort Available VPI extents: ");
+ 	lpfc_sli4_get_avail_extnt_rsrc(phba, LPFC_RSC_TYPE_FCOE_VPI,
+ 				       &ext_cnt, &ext_size);
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"Count %3d, Size %3d\n", ext_cnt, ext_size);
+ 
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"\tPort Available VFI extents: ");
+ 	lpfc_sli4_get_avail_extnt_rsrc(phba, LPFC_RSC_TYPE_FCOE_VFI,
+ 				       &ext_cnt, &ext_size);
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"Count %3d, Size %3d\n", ext_cnt, ext_size);
+ 
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"\tPort Available RPI extents: ");
+ 	lpfc_sli4_get_avail_extnt_rsrc(phba, LPFC_RSC_TYPE_FCOE_RPI,
+ 				       &ext_cnt, &ext_size);
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"Count %3d, Size %3d\n", ext_cnt, ext_size);
+ 
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"\tPort Available XRI extents: ");
+ 	lpfc_sli4_get_avail_extnt_rsrc(phba, LPFC_RSC_TYPE_FCOE_XRI,
+ 				       &ext_cnt, &ext_size);
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"Count %3d, Size %3d\n", ext_cnt, ext_size);
+ 
+ 	return len;
+@@ -4629,55 +4632,55 @@ lpfc_idiag_extacc_alloc_get(struct lpfc_hba *phba, char *pbuffer, int len)
+ 	uint16_t ext_cnt, ext_size;
+ 	int rc;
+ 
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"\nAllocated Extents Information:\n");
+ 
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"\tHost Allocated VPI extents: ");
+ 	rc = lpfc_sli4_get_allocated_extnts(phba, LPFC_RSC_TYPE_FCOE_VPI,
+ 					    &ext_cnt, &ext_size);
+ 	if (!rc)
+-		len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 				"Port %d Extent %3d, Size %3d\n",
+ 				phba->brd_no, ext_cnt, ext_size);
+ 	else
+-		len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 				"N/A\n");
+ 
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"\tHost Allocated VFI extents: ");
+ 	rc = lpfc_sli4_get_allocated_extnts(phba, LPFC_RSC_TYPE_FCOE_VFI,
+ 					    &ext_cnt, &ext_size);
+ 	if (!rc)
+-		len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 				"Port %d Extent %3d, Size %3d\n",
+ 				phba->brd_no, ext_cnt, ext_size);
+ 	else
+-		len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 				"N/A\n");
+ 
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"\tHost Allocated RPI extents: ");
+ 	rc = lpfc_sli4_get_allocated_extnts(phba, LPFC_RSC_TYPE_FCOE_RPI,
+ 					    &ext_cnt, &ext_size);
+ 	if (!rc)
+-		len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 				"Port %d Extent %3d, Size %3d\n",
+ 				phba->brd_no, ext_cnt, ext_size);
+ 	else
+-		len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 				"N/A\n");
+ 
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"\tHost Allocated XRI extents: ");
+ 	rc = lpfc_sli4_get_allocated_extnts(phba, LPFC_RSC_TYPE_FCOE_XRI,
+ 					    &ext_cnt, &ext_size);
+ 	if (!rc)
+-		len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 				"Port %d Extent %3d, Size %3d\n",
+ 				phba->brd_no, ext_cnt, ext_size);
+ 	else
+-		len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 				"N/A\n");
+ 
+ 	return len;
+@@ -4701,49 +4704,49 @@ lpfc_idiag_extacc_drivr_get(struct lpfc_hba *phba, char *pbuffer, int len)
+ 	struct lpfc_rsrc_blks *rsrc_blks;
+ 	int index;
+ 
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"\nDriver Extents Information:\n");
+ 
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"\tVPI extents:\n");
+ 	index = 0;
+ 	list_for_each_entry(rsrc_blks, &phba->lpfc_vpi_blk_list, list) {
+-		len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 				"\t\tBlock %3d: Start %4d, Count %4d\n",
+ 				index, rsrc_blks->rsrc_start,
+ 				rsrc_blks->rsrc_size);
+ 		index++;
+ 	}
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"\tVFI extents:\n");
+ 	index = 0;
+ 	list_for_each_entry(rsrc_blks, &phba->sli4_hba.lpfc_vfi_blk_list,
+ 			    list) {
+-		len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 				"\t\tBlock %3d: Start %4d, Count %4d\n",
+ 				index, rsrc_blks->rsrc_start,
+ 				rsrc_blks->rsrc_size);
+ 		index++;
+ 	}
+ 
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"\tRPI extents:\n");
+ 	index = 0;
+ 	list_for_each_entry(rsrc_blks, &phba->sli4_hba.lpfc_rpi_blk_list,
+ 			    list) {
+-		len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 				"\t\tBlock %3d: Start %4d, Count %4d\n",
+ 				index, rsrc_blks->rsrc_start,
+ 				rsrc_blks->rsrc_size);
+ 		index++;
+ 	}
+ 
+-	len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++	len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 			"\tXRI extents:\n");
+ 	index = 0;
+ 	list_for_each_entry(rsrc_blks, &phba->sli4_hba.lpfc_xri_blk_list,
+ 			    list) {
+-		len += snprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
++		len += scnprintf(pbuffer+len, LPFC_EXT_ACC_BUF_SIZE-len,
+ 				"\t\tBlock %3d: Start %4d, Count %4d\n",
+ 				index, rsrc_blks->rsrc_start,
+ 				rsrc_blks->rsrc_size);
+@@ -5137,11 +5140,11 @@ lpfc_idiag_mbxacc_dump_bsg_mbox(struct lpfc_hba *phba, enum nemb_type nemb_tp,
+ 				if (i != 0)
+ 					pr_err("%s\n", line_buf);
+ 				len = 0;
+-				len += snprintf(line_buf+len,
++				len += scnprintf(line_buf+len,
+ 						LPFC_MBX_ACC_LBUF_SZ-len,
+ 						"%03d: ", i);
+ 			}
+-			len += snprintf(line_buf+len, LPFC_MBX_ACC_LBUF_SZ-len,
++			len += scnprintf(line_buf+len, LPFC_MBX_ACC_LBUF_SZ-len,
+ 					"%08x ", (uint32_t)*pword);
+ 			pword++;
+ 		}
+@@ -5204,11 +5207,11 @@ lpfc_idiag_mbxacc_dump_issue_mbox(struct lpfc_hba *phba, MAILBOX_t *pmbox)
+ 					pr_err("%s\n", line_buf);
+ 				len = 0;
+ 				memset(line_buf, 0, LPFC_MBX_ACC_LBUF_SZ);
+-				len += snprintf(line_buf+len,
++				len += scnprintf(line_buf+len,
+ 						LPFC_MBX_ACC_LBUF_SZ-len,
+ 						"%03d: ", i);
+ 			}
+-			len += snprintf(line_buf+len, LPFC_MBX_ACC_LBUF_SZ-len,
++			len += scnprintf(line_buf+len, LPFC_MBX_ACC_LBUF_SZ-len,
+ 					"%08x ",
+ 					((uint32_t)*pword) & 0xffffffff);
+ 			pword++;
+@@ -5227,18 +5230,18 @@ lpfc_idiag_mbxacc_dump_issue_mbox(struct lpfc_hba *phba, MAILBOX_t *pmbox)
+ 					pr_err("%s\n", line_buf);
+ 				len = 0;
+ 				memset(line_buf, 0, LPFC_MBX_ACC_LBUF_SZ);
+-				len += snprintf(line_buf+len,
++				len += scnprintf(line_buf+len,
+ 						LPFC_MBX_ACC_LBUF_SZ-len,
+ 						"%03d: ", i);
+ 			}
+ 			for (j = 0; j < 4; j++) {
+-				len += snprintf(line_buf+len,
++				len += scnprintf(line_buf+len,
+ 						LPFC_MBX_ACC_LBUF_SZ-len,
+ 						"%02x",
+ 						((uint8_t)*pbyte) & 0xff);
+ 				pbyte++;
+ 			}
+-			len += snprintf(line_buf+len,
++			len += scnprintf(line_buf+len,
+ 					LPFC_MBX_ACC_LBUF_SZ-len, " ");
+ 		}
+ 		if ((i - 1) % 8)
+diff --git a/drivers/scsi/lpfc/lpfc_debugfs.h b/drivers/scsi/lpfc/lpfc_debugfs.h
+index 30efc7bf91bd..824de3e410ca 100644
+--- a/drivers/scsi/lpfc/lpfc_debugfs.h
++++ b/drivers/scsi/lpfc/lpfc_debugfs.h
+@@ -342,7 +342,7 @@ lpfc_debug_dump_qe(struct lpfc_queue *q, uint32_t idx)
+ 	pword = q->qe[idx].address;
+ 
+ 	len = 0;
+-	len += snprintf(line_buf+len, LPFC_LBUF_SZ-len, "QE[%04d]: ", idx);
++	len += scnprintf(line_buf+len, LPFC_LBUF_SZ-len, "QE[%04d]: ", idx);
+ 	if (qe_word_cnt > 8)
+ 		printk(KERN_ERR "%s\n", line_buf);
+ 
+@@ -353,11 +353,11 @@ lpfc_debug_dump_qe(struct lpfc_queue *q, uint32_t idx)
+ 			if (qe_word_cnt > 8) {
+ 				len = 0;
+ 				memset(line_buf, 0, LPFC_LBUF_SZ);
+-				len += snprintf(line_buf+len, LPFC_LBUF_SZ-len,
++				len += scnprintf(line_buf+len, LPFC_LBUF_SZ-len,
+ 						"%03d: ", i);
+ 			}
+ 		}
+-		len += snprintf(line_buf+len, LPFC_LBUF_SZ-len, "%08x ",
++		len += scnprintf(line_buf+len, LPFC_LBUF_SZ-len, "%08x ",
+ 				((uint32_t)*pword) & 0xffffffff);
+ 		pword++;
+ 	}
+diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c
+index ac504a1ff0ff..1a396f843de1 100644
+--- a/drivers/scsi/qla2xxx/qla_attr.c
++++ b/drivers/scsi/qla2xxx/qla_attr.c
+@@ -364,7 +364,7 @@ qla2x00_sysfs_write_optrom_ctl(struct file *filp, struct kobject *kobj,
+ 		}
+ 
+ 		ha->optrom_region_start = start;
+-		ha->optrom_region_size = start + size;
++		ha->optrom_region_size = size;
+ 
+ 		ha->optrom_state = QLA_SREADING;
+ 		ha->optrom_buffer = vmalloc(ha->optrom_region_size);
+@@ -437,7 +437,7 @@ qla2x00_sysfs_write_optrom_ctl(struct file *filp, struct kobject *kobj,
+ 		}
+ 
+ 		ha->optrom_region_start = start;
+-		ha->optrom_region_size = start + size;
++		ha->optrom_region_size = size;
+ 
+ 		ha->optrom_state = QLA_SWRITING;
+ 		ha->optrom_buffer = vmalloc(ha->optrom_region_size);
+diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
+index 510337eac106..d4ac18573d81 100644
+--- a/drivers/scsi/qla2xxx/qla_target.c
++++ b/drivers/scsi/qla2xxx/qla_target.c
+@@ -977,6 +977,8 @@ void qlt_free_session_done(struct work_struct *work)
+ 		sess->send_els_logo);
+ 
+ 	if (!IS_SW_RESV_ADDR(sess->d_id)) {
++		qla2x00_mark_device_lost(vha, sess, 0, 0);
++
+ 		if (sess->send_els_logo) {
+ 			qlt_port_logo_t logo;
+ 
+@@ -1157,8 +1159,6 @@ void qlt_unreg_sess(struct fc_port *sess)
+ 	if (sess->se_sess)
+ 		vha->hw->tgt.tgt_ops->clear_nacl_from_fcport_map(sess);
+ 
+-	qla2x00_mark_device_lost(vha, sess, 0, 0);
+-
+ 	sess->deleted = QLA_SESS_DELETION_IN_PROGRESS;
+ 	sess->disc_state = DSC_DELETE_PEND;
+ 	sess->last_rscn_gen = sess->rscn_gen;
+diff --git a/drivers/soc/sunxi/Kconfig b/drivers/soc/sunxi/Kconfig
+index 353b07e40176..e84eb4e59f58 100644
+--- a/drivers/soc/sunxi/Kconfig
++++ b/drivers/soc/sunxi/Kconfig
+@@ -4,6 +4,7 @@
+ config SUNXI_SRAM
+ 	bool
+ 	default ARCH_SUNXI
++	select REGMAP_MMIO
+ 	help
+ 	  Say y here to enable the SRAM controller support. This
+ 	  device is responsible on mapping the SRAM in the sunXi SoCs
+diff --git a/drivers/staging/greybus/power_supply.c b/drivers/staging/greybus/power_supply.c
+index 0529e5628c24..ae5c0285a942 100644
+--- a/drivers/staging/greybus/power_supply.c
++++ b/drivers/staging/greybus/power_supply.c
+@@ -520,7 +520,7 @@ static int gb_power_supply_prop_descriptors_get(struct gb_power_supply *gbpsy)
+ 
+ 	op = gb_operation_create(connection,
+ 				 GB_POWER_SUPPLY_TYPE_GET_PROP_DESCRIPTORS,
+-				 sizeof(req), sizeof(*resp) + props_count *
++				 sizeof(*req), sizeof(*resp) + props_count *
+ 				 sizeof(struct gb_power_supply_props_desc),
+ 				 GFP_KERNEL);
+ 	if (!op)
+diff --git a/drivers/staging/most/cdev/cdev.c b/drivers/staging/most/cdev/cdev.c
+index ea64aabda94e..67ac51c8fc5b 100644
+--- a/drivers/staging/most/cdev/cdev.c
++++ b/drivers/staging/most/cdev/cdev.c
+@@ -546,7 +546,7 @@ static void __exit mod_exit(void)
+ 		destroy_cdev(c);
+ 		destroy_channel(c);
+ 	}
+-	unregister_chrdev_region(comp.devno, 1);
++	unregister_chrdev_region(comp.devno, CHRDEV_REGION_SIZE);
+ 	ida_destroy(&comp.minor_id);
+ 	class_destroy(comp.class);
+ }
+diff --git a/drivers/staging/most/sound/sound.c b/drivers/staging/most/sound/sound.c
+index 79ab3a78c5ec..1e6f47cfe42c 100644
+--- a/drivers/staging/most/sound/sound.c
++++ b/drivers/staging/most/sound/sound.c
+@@ -622,7 +622,7 @@ static int audio_probe_channel(struct most_interface *iface, int channel_id,
+ 	INIT_LIST_HEAD(&adpt->dev_list);
+ 	iface->priv = adpt;
+ 	list_add_tail(&adpt->list, &adpt_list);
+-	ret = snd_card_new(&iface->dev, -1, "INIC", THIS_MODULE,
++	ret = snd_card_new(iface->driver_dev, -1, "INIC", THIS_MODULE,
+ 			   sizeof(*channel), &adpt->card);
+ 	if (ret < 0)
+ 		goto err_free_adpt;
+diff --git a/drivers/staging/wilc1000/linux_wlan.c b/drivers/staging/wilc1000/linux_wlan.c
+index 5e5149c9a92d..2448805315c5 100644
+--- a/drivers/staging/wilc1000/linux_wlan.c
++++ b/drivers/staging/wilc1000/linux_wlan.c
+@@ -816,7 +816,7 @@ static void wilc_set_multicast_list(struct net_device *dev)
+ 		return;
+ 	}
+ 
+-	mc_list = kmalloc_array(dev->mc.count, ETH_ALEN, GFP_KERNEL);
++	mc_list = kmalloc_array(dev->mc.count, ETH_ALEN, GFP_ATOMIC);
+ 	if (!mc_list)
+ 		return;
+ 
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index ec666eb4b7b4..c03aa8550980 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -470,12 +470,12 @@ static void acm_read_bulk_callback(struct urb *urb)
+ 	struct acm *acm = rb->instance;
+ 	unsigned long flags;
+ 	int status = urb->status;
++	bool stopped = false;
++	bool stalled = false;
+ 
+ 	dev_vdbg(&acm->data->dev, "got urb %d, len %d, status %d\n",
+ 		rb->index, urb->actual_length, status);
+ 
+-	set_bit(rb->index, &acm->read_urbs_free);
+-
+ 	if (!acm->dev) {
+ 		dev_dbg(&acm->data->dev, "%s - disconnected\n", __func__);
+ 		return;
+@@ -488,15 +488,16 @@ static void acm_read_bulk_callback(struct urb *urb)
+ 		break;
+ 	case -EPIPE:
+ 		set_bit(EVENT_RX_STALL, &acm->flags);
+-		schedule_work(&acm->work);
+-		return;
++		stalled = true;
++		break;
+ 	case -ENOENT:
+ 	case -ECONNRESET:
+ 	case -ESHUTDOWN:
+ 		dev_dbg(&acm->data->dev,
+ 			"%s - urb shutting down with status: %d\n",
+ 			__func__, status);
+-		return;
++		stopped = true;
++		break;
+ 	default:
+ 		dev_dbg(&acm->data->dev,
+ 			"%s - nonzero urb status received: %d\n",
+@@ -505,10 +506,24 @@ static void acm_read_bulk_callback(struct urb *urb)
+ 	}
+ 
+ 	/*
+-	 * Unthrottle may run on another CPU which needs to see events
+-	 * in the same order. Submission has an implict barrier
++	 * Make sure URB processing is done before marking as free to avoid
++	 * racing with unthrottle() on another CPU. Matches the barriers
++	 * implied by the test_and_clear_bit() in acm_submit_read_urb().
+ 	 */
+ 	smp_mb__before_atomic();
++	set_bit(rb->index, &acm->read_urbs_free);
++	/*
++	 * Make sure URB is marked as free before checking the throttled flag
++	 * to avoid racing with unthrottle() on another CPU. Matches the
++	 * smp_mb() in unthrottle().
++	 */
++	smp_mb__after_atomic();
++
++	if (stopped || stalled) {
++		if (stalled)
++			schedule_work(&acm->work);
++		return;
++	}
+ 
+ 	/* throttle device if requested by tty */
+ 	spin_lock_irqsave(&acm->read_lock, flags);
+@@ -842,6 +857,9 @@ static void acm_tty_unthrottle(struct tty_struct *tty)
+ 	acm->throttle_req = 0;
+ 	spin_unlock_irq(&acm->read_lock);
+ 
++	/* Matches the smp_mb__after_atomic() in acm_read_bulk_callback(). */
++	smp_mb();
++
+ 	if (was_throttled)
+ 		acm_submit_read_urbs(acm, GFP_KERNEL);
+ }
+diff --git a/drivers/usb/dwc3/Kconfig b/drivers/usb/dwc3/Kconfig
+index 1a0404fda596..5d22f4bf2a9f 100644
+--- a/drivers/usb/dwc3/Kconfig
++++ b/drivers/usb/dwc3/Kconfig
+@@ -52,7 +52,8 @@ comment "Platform Glue Driver Support"
+ 
+ config USB_DWC3_OMAP
+ 	tristate "Texas Instruments OMAP5 and similar Platforms"
+-	depends on EXTCON && (ARCH_OMAP2PLUS || COMPILE_TEST)
++	depends on ARCH_OMAP2PLUS || COMPILE_TEST
++	depends on EXTCON || !EXTCON
+ 	depends on OF
+ 	default USB_DWC3
+ 	help
+@@ -113,7 +114,8 @@ config USB_DWC3_ST
+ 
+ config USB_DWC3_QCOM
+ 	tristate "Qualcomm Platform"
+-	depends on EXTCON && (ARCH_QCOM || COMPILE_TEST)
++	depends on ARCH_QCOM || COMPILE_TEST
++	depends on EXTCON || !EXTCON
+ 	depends on OF
+ 	default USB_DWC3
+ 	help
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index a1b126f90261..f944cea4056b 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -1218,7 +1218,7 @@ static void dwc3_get_properties(struct dwc3 *dwc)
+ 	u8			tx_max_burst_prd;
+ 
+ 	/* default to highest possible threshold */
+-	lpm_nyet_threshold = 0xff;
++	lpm_nyet_threshold = 0xf;
+ 
+ 	/* default to -3.5dB de-emphasis */
+ 	tx_de_emphasis = 1;
+diff --git a/drivers/usb/musb/Kconfig b/drivers/usb/musb/Kconfig
+index ad08895e78f9..c3dae7d5cb6e 100644
+--- a/drivers/usb/musb/Kconfig
++++ b/drivers/usb/musb/Kconfig
+@@ -66,7 +66,7 @@ config USB_MUSB_SUNXI
+ 	depends on NOP_USB_XCEIV
+ 	depends on PHY_SUN4I_USB
+ 	depends on EXTCON
+-	depends on GENERIC_PHY
++	select GENERIC_PHY
+ 	select SUNXI_SRAM
+ 
+ config USB_MUSB_DAVINCI
+diff --git a/drivers/usb/serial/f81232.c b/drivers/usb/serial/f81232.c
+index 0dcdcb4b2cde..dee6f2caf9b5 100644
+--- a/drivers/usb/serial/f81232.c
++++ b/drivers/usb/serial/f81232.c
+@@ -556,9 +556,12 @@ static int f81232_open(struct tty_struct *tty, struct usb_serial_port *port)
+ 
+ static void f81232_close(struct usb_serial_port *port)
+ {
++	struct f81232_private *port_priv = usb_get_serial_port_data(port);
++
+ 	f81232_port_disable(port);
+ 	usb_serial_generic_close(port);
+ 	usb_kill_urb(port->interrupt_in_urb);
++	flush_work(&port_priv->interrupt_work);
+ }
+ 
+ static void f81232_dtr_rts(struct usb_serial_port *port, int on)
+@@ -632,6 +635,40 @@ static int f81232_port_remove(struct usb_serial_port *port)
+ 	return 0;
+ }
+ 
++static int f81232_suspend(struct usb_serial *serial, pm_message_t message)
++{
++	struct usb_serial_port *port = serial->port[0];
++	struct f81232_private *port_priv = usb_get_serial_port_data(port);
++	int i;
++
++	for (i = 0; i < ARRAY_SIZE(port->read_urbs); ++i)
++		usb_kill_urb(port->read_urbs[i]);
++
++	usb_kill_urb(port->interrupt_in_urb);
++
++	if (port_priv)
++		flush_work(&port_priv->interrupt_work);
++
++	return 0;
++}
++
++static int f81232_resume(struct usb_serial *serial)
++{
++	struct usb_serial_port *port = serial->port[0];
++	int result;
++
++	if (tty_port_initialized(&port->port)) {
++		result = usb_submit_urb(port->interrupt_in_urb, GFP_NOIO);
++		if (result) {
++			dev_err(&port->dev, "submit interrupt urb failed: %d\n",
++					result);
++			return result;
++		}
++	}
++
++	return usb_serial_generic_resume(serial);
++}
++
+ static struct usb_serial_driver f81232_device = {
+ 	.driver = {
+ 		.owner =	THIS_MODULE,
+@@ -655,6 +692,8 @@ static struct usb_serial_driver f81232_device = {
+ 	.read_int_callback =	f81232_read_int_callback,
+ 	.port_probe =		f81232_port_probe,
+ 	.port_remove =		f81232_port_remove,
++	.suspend =		f81232_suspend,
++	.resume =		f81232_resume,
+ };
+ 
+ static struct usb_serial_driver * const serial_drivers[] = {
+diff --git a/drivers/usb/storage/scsiglue.c b/drivers/usb/storage/scsiglue.c
+index a73ea495d5a7..59190d88fa9f 100644
+--- a/drivers/usb/storage/scsiglue.c
++++ b/drivers/usb/storage/scsiglue.c
+@@ -65,6 +65,7 @@ static const char* host_info(struct Scsi_Host *host)
+ static int slave_alloc (struct scsi_device *sdev)
+ {
+ 	struct us_data *us = host_to_us(sdev->host);
++	int maxp;
+ 
+ 	/*
+ 	 * Set the INQUIRY transfer length to 36.  We don't use any of
+@@ -74,20 +75,17 @@ static int slave_alloc (struct scsi_device *sdev)
+ 	sdev->inquiry_len = 36;
+ 
+ 	/*
+-	 * USB has unusual DMA-alignment requirements: Although the
+-	 * starting address of each scatter-gather element doesn't matter,
+-	 * the length of each element except the last must be divisible
+-	 * by the Bulk maxpacket value.  There's currently no way to
+-	 * express this by block-layer constraints, so we'll cop out
+-	 * and simply require addresses to be aligned at 512-byte
+-	 * boundaries.  This is okay since most block I/O involves
+-	 * hardware sectors that are multiples of 512 bytes in length,
+-	 * and since host controllers up through USB 2.0 have maxpacket
+-	 * values no larger than 512.
+-	 *
+-	 * But it doesn't suffice for Wireless USB, where Bulk maxpacket
+-	 * values can be as large as 2048.  To make that work properly
+-	 * will require changes to the block layer.
++	 * USB has unusual scatter-gather requirements: the length of each
++	 * scatterlist element except the last must be divisible by the
++	 * Bulk maxpacket value.  Fortunately this value is always a
++	 * power of 2.  Inform the block layer about this requirement.
++	 */
++	maxp = usb_maxpacket(us->pusb_dev, us->recv_bulk_pipe, 0);
++	blk_queue_virt_boundary(sdev->request_queue, maxp - 1);
++
++	/*
++	 * Some host controllers may have alignment requirements.
++	 * We'll play it safe by requiring 512-byte alignment always.
+ 	 */
+ 	blk_queue_update_dma_alignment(sdev->request_queue, (512 - 1));
+ 
+diff --git a/drivers/usb/storage/uas.c b/drivers/usb/storage/uas.c
+index 36742e8e7edc..d2ed3049e7de 100644
+--- a/drivers/usb/storage/uas.c
++++ b/drivers/usb/storage/uas.c
+@@ -796,24 +796,33 @@ static int uas_slave_alloc(struct scsi_device *sdev)
+ {
+ 	struct uas_dev_info *devinfo =
+ 		(struct uas_dev_info *)sdev->host->hostdata;
++	int maxp;
+ 
+ 	sdev->hostdata = devinfo;
+ 
+ 	/*
+-	 * USB has unusual DMA-alignment requirements: Although the
+-	 * starting address of each scatter-gather element doesn't matter,
+-	 * the length of each element except the last must be divisible
+-	 * by the Bulk maxpacket value.  There's currently no way to
+-	 * express this by block-layer constraints, so we'll cop out
+-	 * and simply require addresses to be aligned at 512-byte
+-	 * boundaries.  This is okay since most block I/O involves
+-	 * hardware sectors that are multiples of 512 bytes in length,
+-	 * and since host controllers up through USB 2.0 have maxpacket
+-	 * values no larger than 512.
++	 * We have two requirements here. We must satisfy the requirements
++	 * of the physical HC and the demands of the protocol, as we
++	 * definitely want no additional memory allocation in this path
++	 * ruling out using bounce buffers.
+ 	 *
+-	 * But it doesn't suffice for Wireless USB, where Bulk maxpacket
+-	 * values can be as large as 2048.  To make that work properly
+-	 * will require changes to the block layer.
++	 * For a transmission on USB to continue we must never send
++	 * a package that is smaller than maxpacket. Hence the length of each
++         * scatterlist element except the last must be divisible by the
++         * Bulk maxpacket value.
++	 * If the HC does not ensure that through SG,
++	 * the upper layer must do that. We must assume nothing
++	 * about the capabilities off the HC, so we use the most
++	 * pessimistic requirement.
++	 */
++
++	maxp = usb_maxpacket(devinfo->udev, devinfo->data_in_pipe, 0);
++	blk_queue_virt_boundary(sdev->request_queue, maxp - 1);
++
++	/*
++	 * The protocol has no requirements on alignment in the strict sense.
++	 * Controllers may or may not have alignment restrictions.
++	 * As this is not exported, we use an extremely conservative guess.
+ 	 */
+ 	blk_queue_update_dma_alignment(sdev->request_queue, (512 - 1));
+ 
+diff --git a/drivers/virtio/virtio_pci_common.c b/drivers/virtio/virtio_pci_common.c
+index d0584c040c60..7a0398bb84f7 100644
+--- a/drivers/virtio/virtio_pci_common.c
++++ b/drivers/virtio/virtio_pci_common.c
+@@ -255,9 +255,11 @@ void vp_del_vqs(struct virtio_device *vdev)
+ 	for (i = 0; i < vp_dev->msix_used_vectors; ++i)
+ 		free_irq(pci_irq_vector(vp_dev->pci_dev, i), vp_dev);
+ 
+-	for (i = 0; i < vp_dev->msix_vectors; i++)
+-		if (vp_dev->msix_affinity_masks[i])
+-			free_cpumask_var(vp_dev->msix_affinity_masks[i]);
++	if (vp_dev->msix_affinity_masks) {
++		for (i = 0; i < vp_dev->msix_vectors; i++)
++			if (vp_dev->msix_affinity_masks[i])
++				free_cpumask_var(vp_dev->msix_affinity_masks[i]);
++	}
+ 
+ 	if (vp_dev->msix_enabled) {
+ 		/* Disable the vector used for configuration */
+diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c
+index fed06fd9998d..94f98e190e63 100644
+--- a/fs/nfs/nfs42proc.c
++++ b/fs/nfs/nfs42proc.c
+@@ -329,9 +329,6 @@ ssize_t nfs42_proc_copy(struct file *src, loff_t pos_src,
+ 	};
+ 	ssize_t err, err2;
+ 
+-	if (!nfs_server_capable(file_inode(dst), NFS_CAP_COPY))
+-		return -EOPNOTSUPP;
+-
+ 	src_lock = nfs_get_lock_context(nfs_file_open_context(src));
+ 	if (IS_ERR(src_lock))
+ 		return PTR_ERR(src_lock);
+diff --git a/fs/nfs/nfs4file.c b/fs/nfs/nfs4file.c
+index 45b2322e092d..00d17198ee12 100644
+--- a/fs/nfs/nfs4file.c
++++ b/fs/nfs/nfs4file.c
+@@ -133,8 +133,10 @@ static ssize_t nfs4_copy_file_range(struct file *file_in, loff_t pos_in,
+ 				    struct file *file_out, loff_t pos_out,
+ 				    size_t count, unsigned int flags)
+ {
++	if (!nfs_server_capable(file_inode(file_out), NFS_CAP_COPY))
++		return -EOPNOTSUPP;
+ 	if (file_inode(file_in) == file_inode(file_out))
+-		return -EINVAL;
++		return -EOPNOTSUPP;
+ 	return nfs42_proc_copy(file_in, pos_in, file_out, pos_out, count);
+ }
+ 
+diff --git a/include/keys/trusted.h b/include/keys/trusted.h
+index adbcb6817826..0071298b9b28 100644
+--- a/include/keys/trusted.h
++++ b/include/keys/trusted.h
+@@ -38,7 +38,7 @@ enum {
+ 
+ int TSS_authhmac(unsigned char *digest, const unsigned char *key,
+ 			unsigned int keylen, unsigned char *h1,
+-			unsigned char *h2, unsigned char h3, ...);
++			unsigned char *h2, unsigned int h3, ...);
+ int TSS_checkhmac1(unsigned char *buffer,
+ 			  const uint32_t command,
+ 			  const unsigned char *ononce,
+diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
+index 0e030f5f76b6..7e092bdac27f 100644
+--- a/include/linux/blk-mq.h
++++ b/include/linux/blk-mq.h
+@@ -306,6 +306,7 @@ void blk_mq_add_to_requeue_list(struct request *rq, bool at_head,
+ void blk_mq_kick_requeue_list(struct request_queue *q);
+ void blk_mq_delay_kick_requeue_list(struct request_queue *q, unsigned long msecs);
+ bool blk_mq_complete_request(struct request *rq);
++void blk_mq_complete_request_sync(struct request *rq);
+ bool blk_mq_bio_list_merge(struct request_queue *q, struct list_head *list,
+ 			   struct bio *bio);
+ bool blk_mq_queue_stopped(struct request_queue *q);
+diff --git a/include/linux/kernel.h b/include/linux/kernel.h
+index 8f0e68e250a7..fd827b240059 100644
+--- a/include/linux/kernel.h
++++ b/include/linux/kernel.h
+@@ -73,8 +73,8 @@
+ 
+ #define u64_to_user_ptr(x) (		\
+ {					\
+-	typecheck(u64, x);		\
+-	(void __user *)(uintptr_t)x;	\
++	typecheck(u64, (x));		\
++	(void __user *)(uintptr_t)(x);	\
+ }					\
+ )
+ 
+diff --git a/include/linux/nvme.h b/include/linux/nvme.h
+index bbcc83886899..7ba0368f16e6 100644
+--- a/include/linux/nvme.h
++++ b/include/linux/nvme.h
+@@ -975,8 +975,13 @@ struct nvme_get_log_page_command {
+ 	__le16			numdl;
+ 	__le16			numdu;
+ 	__u16			rsvd11;
+-	__le32			lpol;
+-	__le32			lpou;
++	union {
++		struct {
++			__le32 lpol;
++			__le32 lpou;
++		};
++		__le64 lpo;
++	};
+ 	__u32			rsvd14[2];
+ };
+ 
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index e5ea633ea368..fa61e70788c5 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -190,6 +190,9 @@ struct adv_info {
+ 
+ #define HCI_MAX_SHORT_NAME_LENGTH	10
+ 
++/* Min encryption key size to match with SMP */
++#define HCI_MIN_ENC_KEY_SIZE		7
++
+ /* Default LE RPA expiry time, 15 minutes */
+ #define HCI_DEFAULT_RPA_TIMEOUT		(15 * 60)
+ 
+diff --git a/include/sound/soc.h b/include/sound/soc.h
+index e665f111b0d2..fa82d6215328 100644
+--- a/include/sound/soc.h
++++ b/include/sound/soc.h
+@@ -1043,6 +1043,8 @@ struct snd_soc_card {
+ 	struct mutex mutex;
+ 	struct mutex dapm_mutex;
+ 
++	spinlock_t dpcm_lock;
++
+ 	bool instantiated;
+ 	bool topology_shortname_created;
+ 
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 2e2305a81047..124e1e3d06b9 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -2007,8 +2007,8 @@ event_sched_out(struct perf_event *event,
+ 	event->pmu->del(event, 0);
+ 	event->oncpu = -1;
+ 
+-	if (event->pending_disable) {
+-		event->pending_disable = 0;
++	if (READ_ONCE(event->pending_disable) >= 0) {
++		WRITE_ONCE(event->pending_disable, -1);
+ 		state = PERF_EVENT_STATE_OFF;
+ 	}
+ 	perf_event_set_state(event, state);
+@@ -2196,7 +2196,8 @@ EXPORT_SYMBOL_GPL(perf_event_disable);
+ 
+ void perf_event_disable_inatomic(struct perf_event *event)
+ {
+-	event->pending_disable = 1;
++	WRITE_ONCE(event->pending_disable, smp_processor_id());
++	/* can fail, see perf_pending_event_disable() */
+ 	irq_work_queue(&event->pending);
+ }
+ 
+@@ -5803,10 +5804,45 @@ void perf_event_wakeup(struct perf_event *event)
+ 	}
+ }
+ 
++static void perf_pending_event_disable(struct perf_event *event)
++{
++	int cpu = READ_ONCE(event->pending_disable);
++
++	if (cpu < 0)
++		return;
++
++	if (cpu == smp_processor_id()) {
++		WRITE_ONCE(event->pending_disable, -1);
++		perf_event_disable_local(event);
++		return;
++	}
++
++	/*
++	 *  CPU-A			CPU-B
++	 *
++	 *  perf_event_disable_inatomic()
++	 *    @pending_disable = CPU-A;
++	 *    irq_work_queue();
++	 *
++	 *  sched-out
++	 *    @pending_disable = -1;
++	 *
++	 *				sched-in
++	 *				perf_event_disable_inatomic()
++	 *				  @pending_disable = CPU-B;
++	 *				  irq_work_queue(); // FAILS
++	 *
++	 *  irq_work_run()
++	 *    perf_pending_event()
++	 *
++	 * But the event runs on CPU-B and wants disabling there.
++	 */
++	irq_work_queue_on(&event->pending, cpu);
++}
++
+ static void perf_pending_event(struct irq_work *entry)
+ {
+-	struct perf_event *event = container_of(entry,
+-			struct perf_event, pending);
++	struct perf_event *event = container_of(entry, struct perf_event, pending);
+ 	int rctx;
+ 
+ 	rctx = perf_swevent_get_recursion_context();
+@@ -5815,10 +5851,7 @@ static void perf_pending_event(struct irq_work *entry)
+ 	 * and we won't recurse 'further'.
+ 	 */
+ 
+-	if (event->pending_disable) {
+-		event->pending_disable = 0;
+-		perf_event_disable_local(event);
+-	}
++	perf_pending_event_disable(event);
+ 
+ 	if (event->pending_wakeup) {
+ 		event->pending_wakeup = 0;
+@@ -9998,6 +10031,7 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
+ 
+ 
+ 	init_waitqueue_head(&event->waitq);
++	event->pending_disable = -1;
+ 	init_irq_work(&event->pending, perf_pending_event);
+ 
+ 	mutex_init(&event->mmap_mutex);
+diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
+index dbd7656b4f73..a5fc56a654fd 100644
+--- a/kernel/events/ring_buffer.c
++++ b/kernel/events/ring_buffer.c
+@@ -393,7 +393,7 @@ void *perf_aux_output_begin(struct perf_output_handle *handle,
+ 		 * store that will be enabled on successful return
+ 		 */
+ 		if (!handle->size) { /* A, matches D */
+-			event->pending_disable = 1;
++			event->pending_disable = smp_processor_id();
+ 			perf_output_wakeup(handle);
+ 			local_set(&rb->aux_nest, 0);
+ 			goto err_put;
+@@ -478,7 +478,7 @@ void perf_aux_output_end(struct perf_output_handle *handle, unsigned long size)
+ 
+ 	if (wakeup) {
+ 		if (handle->aux_flags & PERF_AUX_FLAG_TRUNCATED)
+-			handle->event->pending_disable = 1;
++			handle->event->pending_disable = smp_processor_id();
+ 		perf_output_wakeup(handle);
+ 	}
+ 
+diff --git a/kernel/futex.c b/kernel/futex.c
+index 52668d44e07b..4eafa8ec76a4 100644
+--- a/kernel/futex.c
++++ b/kernel/futex.c
+@@ -1314,13 +1314,15 @@ static int lookup_pi_state(u32 __user *uaddr, u32 uval,
+ 
+ static int lock_pi_update_atomic(u32 __user *uaddr, u32 uval, u32 newval)
+ {
++	int err;
+ 	u32 uninitialized_var(curval);
+ 
+ 	if (unlikely(should_fail_futex(true)))
+ 		return -EFAULT;
+ 
+-	if (unlikely(cmpxchg_futex_value_locked(&curval, uaddr, uval, newval)))
+-		return -EFAULT;
++	err = cmpxchg_futex_value_locked(&curval, uaddr, uval, newval);
++	if (unlikely(err))
++		return err;
+ 
+ 	/* If user space value changed, let the caller retry */
+ 	return curval != uval ? -EAGAIN : 0;
+@@ -1506,10 +1508,8 @@ static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_pi_state *pi_
+ 	if (unlikely(should_fail_futex(true)))
+ 		ret = -EFAULT;
+ 
+-	if (cmpxchg_futex_value_locked(&curval, uaddr, uval, newval)) {
+-		ret = -EFAULT;
+-
+-	} else if (curval != uval) {
++	ret = cmpxchg_futex_value_locked(&curval, uaddr, uval, newval);
++	if (!ret && (curval != uval)) {
+ 		/*
+ 		 * If a unconditional UNLOCK_PI operation (user space did not
+ 		 * try the TID->0 transition) raced with a waiter setting the
+@@ -1704,32 +1704,32 @@ retry_private:
+ 	double_lock_hb(hb1, hb2);
+ 	op_ret = futex_atomic_op_inuser(op, uaddr2);
+ 	if (unlikely(op_ret < 0)) {
+-
+ 		double_unlock_hb(hb1, hb2);
+ 
+-#ifndef CONFIG_MMU
+-		/*
+-		 * we don't get EFAULT from MMU faults if we don't have an MMU,
+-		 * but we might get them from range checking
+-		 */
+-		ret = op_ret;
+-		goto out_put_keys;
+-#endif
+-
+-		if (unlikely(op_ret != -EFAULT)) {
++		if (!IS_ENABLED(CONFIG_MMU) ||
++		    unlikely(op_ret != -EFAULT && op_ret != -EAGAIN)) {
++			/*
++			 * we don't get EFAULT from MMU faults if we don't have
++			 * an MMU, but we might get them from range checking
++			 */
+ 			ret = op_ret;
+ 			goto out_put_keys;
+ 		}
+ 
+-		ret = fault_in_user_writeable(uaddr2);
+-		if (ret)
+-			goto out_put_keys;
++		if (op_ret == -EFAULT) {
++			ret = fault_in_user_writeable(uaddr2);
++			if (ret)
++				goto out_put_keys;
++		}
+ 
+-		if (!(flags & FLAGS_SHARED))
++		if (!(flags & FLAGS_SHARED)) {
++			cond_resched();
+ 			goto retry_private;
++		}
+ 
+ 		put_futex_key(&key2);
+ 		put_futex_key(&key1);
++		cond_resched();
+ 		goto retry;
+ 	}
+ 
+@@ -2354,7 +2354,7 @@ static int fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q,
+ 	u32 uval, uninitialized_var(curval), newval;
+ 	struct task_struct *oldowner, *newowner;
+ 	u32 newtid;
+-	int ret;
++	int ret, err = 0;
+ 
+ 	lockdep_assert_held(q->lock_ptr);
+ 
+@@ -2425,14 +2425,17 @@ retry:
+ 	if (!pi_state->owner)
+ 		newtid |= FUTEX_OWNER_DIED;
+ 
+-	if (get_futex_value_locked(&uval, uaddr))
+-		goto handle_fault;
++	err = get_futex_value_locked(&uval, uaddr);
++	if (err)
++		goto handle_err;
+ 
+ 	for (;;) {
+ 		newval = (uval & FUTEX_OWNER_DIED) | newtid;
+ 
+-		if (cmpxchg_futex_value_locked(&curval, uaddr, uval, newval))
+-			goto handle_fault;
++		err = cmpxchg_futex_value_locked(&curval, uaddr, uval, newval);
++		if (err)
++			goto handle_err;
++
+ 		if (curval == uval)
+ 			break;
+ 		uval = curval;
+@@ -2460,23 +2463,37 @@ retry:
+ 	return 0;
+ 
+ 	/*
+-	 * To handle the page fault we need to drop the locks here. That gives
+-	 * the other task (either the highest priority waiter itself or the
+-	 * task which stole the rtmutex) the chance to try the fixup of the
+-	 * pi_state. So once we are back from handling the fault we need to
+-	 * check the pi_state after reacquiring the locks and before trying to
+-	 * do another fixup. When the fixup has been done already we simply
+-	 * return.
++	 * In order to reschedule or handle a page fault, we need to drop the
++	 * locks here. In the case of a fault, this gives the other task
++	 * (either the highest priority waiter itself or the task which stole
++	 * the rtmutex) the chance to try the fixup of the pi_state. So once we
++	 * are back from handling the fault we need to check the pi_state after
++	 * reacquiring the locks and before trying to do another fixup. When
++	 * the fixup has been done already we simply return.
+ 	 *
+ 	 * Note: we hold both hb->lock and pi_mutex->wait_lock. We can safely
+ 	 * drop hb->lock since the caller owns the hb -> futex_q relation.
+ 	 * Dropping the pi_mutex->wait_lock requires the state revalidate.
+ 	 */
+-handle_fault:
++handle_err:
+ 	raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
+ 	spin_unlock(q->lock_ptr);
+ 
+-	ret = fault_in_user_writeable(uaddr);
++	switch (err) {
++	case -EFAULT:
++		ret = fault_in_user_writeable(uaddr);
++		break;
++
++	case -EAGAIN:
++		cond_resched();
++		ret = 0;
++		break;
++
++	default:
++		WARN_ON_ONCE(1);
++		ret = err;
++		break;
++	}
+ 
+ 	spin_lock(q->lock_ptr);
+ 	raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
+@@ -3045,10 +3062,8 @@ retry:
+ 		 * A unconditional UNLOCK_PI op raced against a waiter
+ 		 * setting the FUTEX_WAITERS bit. Try again.
+ 		 */
+-		if (ret == -EAGAIN) {
+-			put_futex_key(&key);
+-			goto retry;
+-		}
++		if (ret == -EAGAIN)
++			goto pi_retry;
+ 		/*
+ 		 * wake_futex_pi has detected invalid state. Tell user
+ 		 * space.
+@@ -3063,9 +3078,19 @@ retry:
+ 	 * preserve the WAITERS bit not the OWNER_DIED one. We are the
+ 	 * owner.
+ 	 */
+-	if (cmpxchg_futex_value_locked(&curval, uaddr, uval, 0)) {
++	if ((ret = cmpxchg_futex_value_locked(&curval, uaddr, uval, 0))) {
+ 		spin_unlock(&hb->lock);
+-		goto pi_faulted;
++		switch (ret) {
++		case -EFAULT:
++			goto pi_faulted;
++
++		case -EAGAIN:
++			goto pi_retry;
++
++		default:
++			WARN_ON_ONCE(1);
++			goto out_putkey;
++		}
+ 	}
+ 
+ 	/*
+@@ -3079,6 +3104,11 @@ out_putkey:
+ 	put_futex_key(&key);
+ 	return ret;
+ 
++pi_retry:
++	put_futex_key(&key);
++	cond_resched();
++	goto retry;
++
+ pi_faulted:
+ 	put_futex_key(&key);
+ 
+@@ -3439,6 +3469,7 @@ err_unlock:
+ static int handle_futex_death(u32 __user *uaddr, struct task_struct *curr, int pi)
+ {
+ 	u32 uval, uninitialized_var(nval), mval;
++	int err;
+ 
+ 	/* Futex address must be 32bit aligned */
+ 	if ((((unsigned long)uaddr) % sizeof(*uaddr)) != 0)
+@@ -3448,42 +3479,57 @@ retry:
+ 	if (get_user(uval, uaddr))
+ 		return -1;
+ 
+-	if ((uval & FUTEX_TID_MASK) == task_pid_vnr(curr)) {
+-		/*
+-		 * Ok, this dying thread is truly holding a futex
+-		 * of interest. Set the OWNER_DIED bit atomically
+-		 * via cmpxchg, and if the value had FUTEX_WAITERS
+-		 * set, wake up a waiter (if any). (We have to do a
+-		 * futex_wake() even if OWNER_DIED is already set -
+-		 * to handle the rare but possible case of recursive
+-		 * thread-death.) The rest of the cleanup is done in
+-		 * userspace.
+-		 */
+-		mval = (uval & FUTEX_WAITERS) | FUTEX_OWNER_DIED;
+-		/*
+-		 * We are not holding a lock here, but we want to have
+-		 * the pagefault_disable/enable() protection because
+-		 * we want to handle the fault gracefully. If the
+-		 * access fails we try to fault in the futex with R/W
+-		 * verification via get_user_pages. get_user() above
+-		 * does not guarantee R/W access. If that fails we
+-		 * give up and leave the futex locked.
+-		 */
+-		if (cmpxchg_futex_value_locked(&nval, uaddr, uval, mval)) {
++	if ((uval & FUTEX_TID_MASK) != task_pid_vnr(curr))
++		return 0;
++
++	/*
++	 * Ok, this dying thread is truly holding a futex
++	 * of interest. Set the OWNER_DIED bit atomically
++	 * via cmpxchg, and if the value had FUTEX_WAITERS
++	 * set, wake up a waiter (if any). (We have to do a
++	 * futex_wake() even if OWNER_DIED is already set -
++	 * to handle the rare but possible case of recursive
++	 * thread-death.) The rest of the cleanup is done in
++	 * userspace.
++	 */
++	mval = (uval & FUTEX_WAITERS) | FUTEX_OWNER_DIED;
++
++	/*
++	 * We are not holding a lock here, but we want to have
++	 * the pagefault_disable/enable() protection because
++	 * we want to handle the fault gracefully. If the
++	 * access fails we try to fault in the futex with R/W
++	 * verification via get_user_pages. get_user() above
++	 * does not guarantee R/W access. If that fails we
++	 * give up and leave the futex locked.
++	 */
++	if ((err = cmpxchg_futex_value_locked(&nval, uaddr, uval, mval))) {
++		switch (err) {
++		case -EFAULT:
+ 			if (fault_in_user_writeable(uaddr))
+ 				return -1;
+ 			goto retry;
+-		}
+-		if (nval != uval)
++
++		case -EAGAIN:
++			cond_resched();
+ 			goto retry;
+ 
+-		/*
+-		 * Wake robust non-PI futexes here. The wakeup of
+-		 * PI futexes happens in exit_pi_state():
+-		 */
+-		if (!pi && (uval & FUTEX_WAITERS))
+-			futex_wake(uaddr, 1, 1, FUTEX_BITSET_MATCH_ANY);
++		default:
++			WARN_ON_ONCE(1);
++			return err;
++		}
+ 	}
++
++	if (nval != uval)
++		goto retry;
++
++	/*
++	 * Wake robust non-PI futexes here. The wakeup of
++	 * PI futexes happens in exit_pi_state():
++	 */
++	if (!pi && (uval & FUTEX_WAITERS))
++		futex_wake(uaddr, 1, 1, FUTEX_BITSET_MATCH_ANY);
++
+ 	return 0;
+ }
+ 
+diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
+index 84b54a17b95d..df557ec20a6f 100644
+--- a/kernel/irq/manage.c
++++ b/kernel/irq/manage.c
+@@ -356,8 +356,10 @@ irq_set_affinity_notifier(unsigned int irq, struct irq_affinity_notify *notify)
+ 	desc->affinity_notify = notify;
+ 	raw_spin_unlock_irqrestore(&desc->lock, flags);
+ 
+-	if (old_notify)
++	if (old_notify) {
++		cancel_work_sync(&old_notify->work);
+ 		kref_put(&old_notify->kref, old_notify->release);
++	}
+ 
+ 	return 0;
+ }
+diff --git a/lib/iov_iter.c b/lib/iov_iter.c
+index be4bd627caf0..a0d1cd88f903 100644
+--- a/lib/iov_iter.c
++++ b/lib/iov_iter.c
+@@ -1515,6 +1515,7 @@ EXPORT_SYMBOL(csum_and_copy_to_iter);
+ size_t hash_and_copy_to_iter(const void *addr, size_t bytes, void *hashp,
+ 		struct iov_iter *i)
+ {
++#ifdef CONFIG_CRYPTO
+ 	struct ahash_request *hash = hashp;
+ 	struct scatterlist sg;
+ 	size_t copied;
+@@ -1524,6 +1525,9 @@ size_t hash_and_copy_to_iter(const void *addr, size_t bytes, void *hashp,
+ 	ahash_request_set_crypt(hash, &sg, NULL, copied);
+ 	crypto_ahash_update(hash);
+ 	return copied;
++#else
++	return 0;
++#endif
+ }
+ EXPORT_SYMBOL(hash_and_copy_to_iter);
+ 
+diff --git a/lib/ubsan.c b/lib/ubsan.c
+index e4162f59a81c..1e9e2ab25539 100644
+--- a/lib/ubsan.c
++++ b/lib/ubsan.c
+@@ -86,11 +86,13 @@ static bool is_inline_int(struct type_descriptor *type)
+ 	return bits <= inline_bits;
+ }
+ 
+-static s_max get_signed_val(struct type_descriptor *type, unsigned long val)
++static s_max get_signed_val(struct type_descriptor *type, void *val)
+ {
+ 	if (is_inline_int(type)) {
+ 		unsigned extra_bits = sizeof(s_max)*8 - type_bit_width(type);
+-		return ((s_max)val) << extra_bits >> extra_bits;
++		unsigned long ulong_val = (unsigned long)val;
++
++		return ((s_max)ulong_val) << extra_bits >> extra_bits;
+ 	}
+ 
+ 	if (type_bit_width(type) == 64)
+@@ -99,15 +101,15 @@ static s_max get_signed_val(struct type_descriptor *type, unsigned long val)
+ 	return *(s_max *)val;
+ }
+ 
+-static bool val_is_negative(struct type_descriptor *type, unsigned long val)
++static bool val_is_negative(struct type_descriptor *type, void *val)
+ {
+ 	return type_is_signed(type) && get_signed_val(type, val) < 0;
+ }
+ 
+-static u_max get_unsigned_val(struct type_descriptor *type, unsigned long val)
++static u_max get_unsigned_val(struct type_descriptor *type, void *val)
+ {
+ 	if (is_inline_int(type))
+-		return val;
++		return (unsigned long)val;
+ 
+ 	if (type_bit_width(type) == 64)
+ 		return *(u64 *)val;
+@@ -116,7 +118,7 @@ static u_max get_unsigned_val(struct type_descriptor *type, unsigned long val)
+ }
+ 
+ static void val_to_string(char *str, size_t size, struct type_descriptor *type,
+-	unsigned long value)
++			void *value)
+ {
+ 	if (type_is_int(type)) {
+ 		if (type_bit_width(type) == 128) {
+@@ -163,8 +165,8 @@ static void ubsan_epilogue(unsigned long *flags)
+ 	current->in_ubsan--;
+ }
+ 
+-static void handle_overflow(struct overflow_data *data, unsigned long lhs,
+-			unsigned long rhs, char op)
++static void handle_overflow(struct overflow_data *data, void *lhs,
++			void *rhs, char op)
+ {
+ 
+ 	struct type_descriptor *type = data->type;
+@@ -191,8 +193,7 @@ static void handle_overflow(struct overflow_data *data, unsigned long lhs,
+ }
+ 
+ void __ubsan_handle_add_overflow(struct overflow_data *data,
+-				unsigned long lhs,
+-				unsigned long rhs)
++				void *lhs, void *rhs)
+ {
+ 
+ 	handle_overflow(data, lhs, rhs, '+');
+@@ -200,23 +201,21 @@ void __ubsan_handle_add_overflow(struct overflow_data *data,
+ EXPORT_SYMBOL(__ubsan_handle_add_overflow);
+ 
+ void __ubsan_handle_sub_overflow(struct overflow_data *data,
+-				unsigned long lhs,
+-				unsigned long rhs)
++				void *lhs, void *rhs)
+ {
+ 	handle_overflow(data, lhs, rhs, '-');
+ }
+ EXPORT_SYMBOL(__ubsan_handle_sub_overflow);
+ 
+ void __ubsan_handle_mul_overflow(struct overflow_data *data,
+-				unsigned long lhs,
+-				unsigned long rhs)
++				void *lhs, void *rhs)
+ {
+ 	handle_overflow(data, lhs, rhs, '*');
+ }
+ EXPORT_SYMBOL(__ubsan_handle_mul_overflow);
+ 
+ void __ubsan_handle_negate_overflow(struct overflow_data *data,
+-				unsigned long old_val)
++				void *old_val)
+ {
+ 	unsigned long flags;
+ 	char old_val_str[VALUE_LENGTH];
+@@ -237,8 +236,7 @@ EXPORT_SYMBOL(__ubsan_handle_negate_overflow);
+ 
+ 
+ void __ubsan_handle_divrem_overflow(struct overflow_data *data,
+-				unsigned long lhs,
+-				unsigned long rhs)
++				void *lhs, void *rhs)
+ {
+ 	unsigned long flags;
+ 	char rhs_val_str[VALUE_LENGTH];
+@@ -323,7 +321,7 @@ static void ubsan_type_mismatch_common(struct type_mismatch_data_common *data,
+ }
+ 
+ void __ubsan_handle_type_mismatch(struct type_mismatch_data *data,
+-				unsigned long ptr)
++				void *ptr)
+ {
+ 	struct type_mismatch_data_common common_data = {
+ 		.location = &data->location,
+@@ -332,12 +330,12 @@ void __ubsan_handle_type_mismatch(struct type_mismatch_data *data,
+ 		.type_check_kind = data->type_check_kind
+ 	};
+ 
+-	ubsan_type_mismatch_common(&common_data, ptr);
++	ubsan_type_mismatch_common(&common_data, (unsigned long)ptr);
+ }
+ EXPORT_SYMBOL(__ubsan_handle_type_mismatch);
+ 
+ void __ubsan_handle_type_mismatch_v1(struct type_mismatch_data_v1 *data,
+-				unsigned long ptr)
++				void *ptr)
+ {
+ 
+ 	struct type_mismatch_data_common common_data = {
+@@ -347,12 +345,12 @@ void __ubsan_handle_type_mismatch_v1(struct type_mismatch_data_v1 *data,
+ 		.type_check_kind = data->type_check_kind
+ 	};
+ 
+-	ubsan_type_mismatch_common(&common_data, ptr);
++	ubsan_type_mismatch_common(&common_data, (unsigned long)ptr);
+ }
+ EXPORT_SYMBOL(__ubsan_handle_type_mismatch_v1);
+ 
+ void __ubsan_handle_vla_bound_not_positive(struct vla_bound_data *data,
+-					unsigned long bound)
++					void *bound)
+ {
+ 	unsigned long flags;
+ 	char bound_str[VALUE_LENGTH];
+@@ -369,8 +367,7 @@ void __ubsan_handle_vla_bound_not_positive(struct vla_bound_data *data,
+ }
+ EXPORT_SYMBOL(__ubsan_handle_vla_bound_not_positive);
+ 
+-void __ubsan_handle_out_of_bounds(struct out_of_bounds_data *data,
+-				unsigned long index)
++void __ubsan_handle_out_of_bounds(struct out_of_bounds_data *data, void *index)
+ {
+ 	unsigned long flags;
+ 	char index_str[VALUE_LENGTH];
+@@ -388,7 +385,7 @@ void __ubsan_handle_out_of_bounds(struct out_of_bounds_data *data,
+ EXPORT_SYMBOL(__ubsan_handle_out_of_bounds);
+ 
+ void __ubsan_handle_shift_out_of_bounds(struct shift_out_of_bounds_data *data,
+-					unsigned long lhs, unsigned long rhs)
++					void *lhs, void *rhs)
+ {
+ 	unsigned long flags;
+ 	struct type_descriptor *rhs_type = data->rhs_type;
+@@ -439,7 +436,7 @@ void __ubsan_handle_builtin_unreachable(struct unreachable_data *data)
+ EXPORT_SYMBOL(__ubsan_handle_builtin_unreachable);
+ 
+ void __ubsan_handle_load_invalid_value(struct invalid_value_data *data,
+-				unsigned long val)
++				void *val)
+ {
+ 	unsigned long flags;
+ 	char val_str[VALUE_LENGTH];
+diff --git a/mm/slab.c b/mm/slab.c
+index 2f2aa8eaf7d9..188c4b65255d 100644
+--- a/mm/slab.c
++++ b/mm/slab.c
+@@ -4297,7 +4297,8 @@ static void show_symbol(struct seq_file *m, unsigned long address)
+ 
+ static int leaks_show(struct seq_file *m, void *p)
+ {
+-	struct kmem_cache *cachep = list_entry(p, struct kmem_cache, list);
++	struct kmem_cache *cachep = list_entry(p, struct kmem_cache,
++					       root_caches_node);
+ 	struct page *page;
+ 	struct kmem_cache_node *n;
+ 	const char *name;
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index bd4978ce8c45..3cf0764d5793 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -1276,6 +1276,14 @@ int hci_conn_check_link_mode(struct hci_conn *conn)
+ 	    !test_bit(HCI_CONN_ENCRYPT, &conn->flags))
+ 		return 0;
+ 
++	/* The minimum encryption key size needs to be enforced by the
++	 * host stack before establishing any L2CAP connections. The
++	 * specification in theory allows a minimum of 1, but to align
++	 * BR/EDR and LE transports, a minimum of 7 is chosen.
++	 */
++	if (conn->enc_key_size < HCI_MIN_ENC_KEY_SIZE)
++		return 0;
++
+ 	return 1;
+ }
+ 
+diff --git a/net/bluetooth/hidp/sock.c b/net/bluetooth/hidp/sock.c
+index 9f85a1943be9..2151913892ce 100644
+--- a/net/bluetooth/hidp/sock.c
++++ b/net/bluetooth/hidp/sock.c
+@@ -75,6 +75,7 @@ static int do_hidp_sock_ioctl(struct socket *sock, unsigned int cmd, void __user
+ 			sockfd_put(csock);
+ 			return err;
+ 		}
++		ca.name[sizeof(ca.name)-1] = 0;
+ 
+ 		err = hidp_connection_add(&ca, csock, isock);
+ 		if (!err && copy_to_user(argp, &ca, sizeof(ca)))
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index ccdc5c67d22a..a3b2e3b5f04b 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -510,12 +510,12 @@ void l2cap_chan_set_defaults(struct l2cap_chan *chan)
+ }
+ EXPORT_SYMBOL_GPL(l2cap_chan_set_defaults);
+ 
+-static void l2cap_le_flowctl_init(struct l2cap_chan *chan)
++static void l2cap_le_flowctl_init(struct l2cap_chan *chan, u16 tx_credits)
+ {
+ 	chan->sdu = NULL;
+ 	chan->sdu_last_frag = NULL;
+ 	chan->sdu_len = 0;
+-	chan->tx_credits = 0;
++	chan->tx_credits = tx_credits;
+ 	/* Derive MPS from connection MTU to stop HCI fragmentation */
+ 	chan->mps = min_t(u16, chan->imtu, chan->conn->mtu - L2CAP_HDR_SIZE);
+ 	/* Give enough credits for a full packet */
+@@ -1281,7 +1281,7 @@ static void l2cap_le_connect(struct l2cap_chan *chan)
+ 	if (test_and_set_bit(FLAG_LE_CONN_REQ_SENT, &chan->flags))
+ 		return;
+ 
+-	l2cap_le_flowctl_init(chan);
++	l2cap_le_flowctl_init(chan, 0);
+ 
+ 	req.psm     = chan->psm;
+ 	req.scid    = cpu_to_le16(chan->scid);
+@@ -5531,11 +5531,10 @@ static int l2cap_le_connect_req(struct l2cap_conn *conn,
+ 	chan->dcid = scid;
+ 	chan->omtu = mtu;
+ 	chan->remote_mps = mps;
+-	chan->tx_credits = __le16_to_cpu(req->credits);
+ 
+ 	__l2cap_chan_add(conn, chan);
+ 
+-	l2cap_le_flowctl_init(chan);
++	l2cap_le_flowctl_init(chan, __le16_to_cpu(req->credits));
+ 
+ 	dcid = chan->scid;
+ 	credits = chan->rx_credits;
+diff --git a/security/keys/trusted.c b/security/keys/trusted.c
+index 4d98f4f87236..94d2b28c7c22 100644
+--- a/security/keys/trusted.c
++++ b/security/keys/trusted.c
+@@ -123,7 +123,7 @@ out:
+  */
+ int TSS_authhmac(unsigned char *digest, const unsigned char *key,
+ 			unsigned int keylen, unsigned char *h1,
+-			unsigned char *h2, unsigned char h3, ...)
++			unsigned char *h2, unsigned int h3, ...)
+ {
+ 	unsigned char paramdigest[SHA1_DIGEST_SIZE];
+ 	struct sdesc *sdesc;
+@@ -139,7 +139,7 @@ int TSS_authhmac(unsigned char *digest, const unsigned char *key,
+ 		return PTR_ERR(sdesc);
+ 	}
+ 
+-	c = h3;
++	c = !!h3;
+ 	ret = crypto_shash_init(&sdesc->shash);
+ 	if (ret < 0)
+ 		goto out;
+diff --git a/sound/hda/ext/hdac_ext_bus.c b/sound/hda/ext/hdac_ext_bus.c
+index 9c37d9af3023..ec7715c6b0c0 100644
+--- a/sound/hda/ext/hdac_ext_bus.c
++++ b/sound/hda/ext/hdac_ext_bus.c
+@@ -107,7 +107,6 @@ int snd_hdac_ext_bus_init(struct hdac_bus *bus, struct device *dev,
+ 	INIT_LIST_HEAD(&bus->hlink_list);
+ 	bus->idx = idx++;
+ 
+-	mutex_init(&bus->lock);
+ 	bus->cmd_dma_state = true;
+ 
+ 	return 0;
+diff --git a/sound/hda/hdac_bus.c b/sound/hda/hdac_bus.c
+index 012305177f68..ad8eee08013f 100644
+--- a/sound/hda/hdac_bus.c
++++ b/sound/hda/hdac_bus.c
+@@ -38,6 +38,7 @@ int snd_hdac_bus_init(struct hdac_bus *bus, struct device *dev,
+ 	INIT_WORK(&bus->unsol_work, snd_hdac_bus_process_unsol_events);
+ 	spin_lock_init(&bus->reg_lock);
+ 	mutex_init(&bus->cmd_mutex);
++	mutex_init(&bus->lock);
+ 	bus->irq = -1;
+ 	return 0;
+ }
+diff --git a/sound/hda/hdac_component.c b/sound/hda/hdac_component.c
+index a6d37b9d6413..6b5caee61c6e 100644
+--- a/sound/hda/hdac_component.c
++++ b/sound/hda/hdac_component.c
+@@ -69,13 +69,15 @@ void snd_hdac_display_power(struct hdac_bus *bus, unsigned int idx, bool enable)
+ 
+ 	dev_dbg(bus->dev, "display power %s\n",
+ 		enable ? "enable" : "disable");
++
++	mutex_lock(&bus->lock);
+ 	if (enable)
+ 		set_bit(idx, &bus->display_power_status);
+ 	else
+ 		clear_bit(idx, &bus->display_power_status);
+ 
+ 	if (!acomp || !acomp->ops)
+-		return;
++		goto unlock;
+ 
+ 	if (bus->display_power_status) {
+ 		if (!bus->display_power_active) {
+@@ -92,6 +94,8 @@ void snd_hdac_display_power(struct hdac_bus *bus, unsigned int idx, bool enable)
+ 			bus->display_power_active = false;
+ 		}
+ 	}
++ unlock:
++	mutex_unlock(&bus->lock);
+ }
+ EXPORT_SYMBOL_GPL(snd_hdac_display_power);
+ 
+diff --git a/sound/soc/codecs/cs35l35.c b/sound/soc/codecs/cs35l35.c
+index 9f4a59871cee..c71696146c5e 100644
+--- a/sound/soc/codecs/cs35l35.c
++++ b/sound/soc/codecs/cs35l35.c
+@@ -1635,6 +1635,16 @@ err:
+ 	return ret;
+ }
+ 
++static int cs35l35_i2c_remove(struct i2c_client *i2c_client)
++{
++	struct cs35l35_private *cs35l35 = i2c_get_clientdata(i2c_client);
++
++	regulator_bulk_disable(cs35l35->num_supplies, cs35l35->supplies);
++	gpiod_set_value_cansleep(cs35l35->reset_gpio, 0);
++
++	return 0;
++}
++
+ static const struct of_device_id cs35l35_of_match[] = {
+ 	{.compatible = "cirrus,cs35l35"},
+ 	{},
+@@ -1655,6 +1665,7 @@ static struct i2c_driver cs35l35_i2c_driver = {
+ 	},
+ 	.id_table = cs35l35_id,
+ 	.probe = cs35l35_i2c_probe,
++	.remove = cs35l35_i2c_remove,
+ };
+ 
+ module_i2c_driver(cs35l35_i2c_driver);
+diff --git a/sound/soc/codecs/cs4270.c b/sound/soc/codecs/cs4270.c
+index 33d74f163bd7..793a14d58667 100644
+--- a/sound/soc/codecs/cs4270.c
++++ b/sound/soc/codecs/cs4270.c
+@@ -642,6 +642,7 @@ static const struct regmap_config cs4270_regmap = {
+ 	.reg_defaults =		cs4270_reg_defaults,
+ 	.num_reg_defaults =	ARRAY_SIZE(cs4270_reg_defaults),
+ 	.cache_type =		REGCACHE_RBTREE,
++	.write_flag_mask =	CS4270_I2C_INCR,
+ 
+ 	.readable_reg =		cs4270_reg_is_readable,
+ 	.volatile_reg =		cs4270_reg_is_volatile,
+diff --git a/sound/soc/codecs/hdac_hda.c b/sound/soc/codecs/hdac_hda.c
+index ffecdaaa8cf2..f889d94c8e3c 100644
+--- a/sound/soc/codecs/hdac_hda.c
++++ b/sound/soc/codecs/hdac_hda.c
+@@ -38,6 +38,9 @@ static void hdac_hda_dai_close(struct snd_pcm_substream *substream,
+ 			       struct snd_soc_dai *dai);
+ static int hdac_hda_dai_prepare(struct snd_pcm_substream *substream,
+ 				struct snd_soc_dai *dai);
++static int hdac_hda_dai_hw_params(struct snd_pcm_substream *substream,
++				  struct snd_pcm_hw_params *params,
++				  struct snd_soc_dai *dai);
+ static int hdac_hda_dai_hw_free(struct snd_pcm_substream *substream,
+ 				struct snd_soc_dai *dai);
+ static int hdac_hda_dai_set_tdm_slot(struct snd_soc_dai *dai,
+@@ -50,6 +53,7 @@ static const struct snd_soc_dai_ops hdac_hda_dai_ops = {
+ 	.startup = hdac_hda_dai_open,
+ 	.shutdown = hdac_hda_dai_close,
+ 	.prepare = hdac_hda_dai_prepare,
++	.hw_params = hdac_hda_dai_hw_params,
+ 	.hw_free = hdac_hda_dai_hw_free,
+ 	.set_tdm_slot = hdac_hda_dai_set_tdm_slot,
+ };
+@@ -139,6 +143,39 @@ static int hdac_hda_dai_set_tdm_slot(struct snd_soc_dai *dai,
+ 	return 0;
+ }
+ 
++static int hdac_hda_dai_hw_params(struct snd_pcm_substream *substream,
++				  struct snd_pcm_hw_params *params,
++				  struct snd_soc_dai *dai)
++{
++	struct snd_soc_component *component = dai->component;
++	struct hdac_hda_priv *hda_pvt;
++	unsigned int format_val;
++	unsigned int maxbps;
++
++	if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
++		maxbps = dai->driver->playback.sig_bits;
++	else
++		maxbps = dai->driver->capture.sig_bits;
++
++	hda_pvt = snd_soc_component_get_drvdata(component);
++	format_val = snd_hdac_calc_stream_format(params_rate(params),
++						 params_channels(params),
++						 params_format(params),
++						 maxbps,
++						 0);
++	if (!format_val) {
++		dev_err(dai->dev,
++			"invalid format_val, rate=%d, ch=%d, format=%d, maxbps=%d\n",
++			params_rate(params), params_channels(params),
++			params_format(params), maxbps);
++
++		return -EINVAL;
++	}
++
++	hda_pvt->pcm[dai->id].format_val[substream->stream] = format_val;
++	return 0;
++}
++
+ static int hdac_hda_dai_hw_free(struct snd_pcm_substream *substream,
+ 				struct snd_soc_dai *dai)
+ {
+@@ -162,10 +199,9 @@ static int hdac_hda_dai_prepare(struct snd_pcm_substream *substream,
+ 				struct snd_soc_dai *dai)
+ {
+ 	struct snd_soc_component *component = dai->component;
++	struct hda_pcm_stream *hda_stream;
+ 	struct hdac_hda_priv *hda_pvt;
+-	struct snd_pcm_runtime *runtime = substream->runtime;
+ 	struct hdac_device *hdev;
+-	struct hda_pcm_stream *hda_stream;
+ 	unsigned int format_val;
+ 	struct hda_pcm *pcm;
+ 	unsigned int stream;
+@@ -179,19 +215,8 @@ static int hdac_hda_dai_prepare(struct snd_pcm_substream *substream,
+ 
+ 	hda_stream = &pcm->stream[substream->stream];
+ 
+-	format_val = snd_hdac_calc_stream_format(runtime->rate,
+-						 runtime->channels,
+-						 runtime->format,
+-						 hda_stream->maxbps,
+-						 0);
+-	if (!format_val) {
+-		dev_err(&hdev->dev,
+-			"invalid format_val, rate=%d, ch=%d, format=%d\n",
+-			runtime->rate, runtime->channels, runtime->format);
+-		return -EINVAL;
+-	}
+-
+ 	stream = hda_pvt->pcm[dai->id].stream_tag[substream->stream];
++	format_val = hda_pvt->pcm[dai->id].format_val[substream->stream];
+ 
+ 	ret = snd_hda_codec_prepare(&hda_pvt->codec, hda_stream,
+ 				    stream, format_val, substream);
+diff --git a/sound/soc/codecs/hdac_hda.h b/sound/soc/codecs/hdac_hda.h
+index e444ef593360..6b1bd4f428e7 100644
+--- a/sound/soc/codecs/hdac_hda.h
++++ b/sound/soc/codecs/hdac_hda.h
+@@ -8,6 +8,7 @@
+ 
+ struct hdac_hda_pcm {
+ 	int stream_tag[2];
++	unsigned int format_val[2];
+ };
+ 
+ struct hdac_hda_priv {
+diff --git a/sound/soc/codecs/hdmi-codec.c b/sound/soc/codecs/hdmi-codec.c
+index e5b6769b9797..d5f73c837281 100644
+--- a/sound/soc/codecs/hdmi-codec.c
++++ b/sound/soc/codecs/hdmi-codec.c
+@@ -529,73 +529,71 @@ static int hdmi_codec_set_fmt(struct snd_soc_dai *dai,
+ {
+ 	struct hdmi_codec_priv *hcp = snd_soc_dai_get_drvdata(dai);
+ 	struct hdmi_codec_daifmt cf = { 0 };
+-	int ret = 0;
+ 
+ 	dev_dbg(dai->dev, "%s()\n", __func__);
+ 
+-	if (dai->id == DAI_ID_SPDIF) {
+-		cf.fmt = HDMI_SPDIF;
+-	} else {
+-		switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) {
+-		case SND_SOC_DAIFMT_CBM_CFM:
+-			cf.bit_clk_master = 1;
+-			cf.frame_clk_master = 1;
+-			break;
+-		case SND_SOC_DAIFMT_CBS_CFM:
+-			cf.frame_clk_master = 1;
+-			break;
+-		case SND_SOC_DAIFMT_CBM_CFS:
+-			cf.bit_clk_master = 1;
+-			break;
+-		case SND_SOC_DAIFMT_CBS_CFS:
+-			break;
+-		default:
+-			return -EINVAL;
+-		}
++	if (dai->id == DAI_ID_SPDIF)
++		return 0;
++
++	switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) {
++	case SND_SOC_DAIFMT_CBM_CFM:
++		cf.bit_clk_master = 1;
++		cf.frame_clk_master = 1;
++		break;
++	case SND_SOC_DAIFMT_CBS_CFM:
++		cf.frame_clk_master = 1;
++		break;
++	case SND_SOC_DAIFMT_CBM_CFS:
++		cf.bit_clk_master = 1;
++		break;
++	case SND_SOC_DAIFMT_CBS_CFS:
++		break;
++	default:
++		return -EINVAL;
++	}
+ 
+-		switch (fmt & SND_SOC_DAIFMT_INV_MASK) {
+-		case SND_SOC_DAIFMT_NB_NF:
+-			break;
+-		case SND_SOC_DAIFMT_NB_IF:
+-			cf.frame_clk_inv = 1;
+-			break;
+-		case SND_SOC_DAIFMT_IB_NF:
+-			cf.bit_clk_inv = 1;
+-			break;
+-		case SND_SOC_DAIFMT_IB_IF:
+-			cf.frame_clk_inv = 1;
+-			cf.bit_clk_inv = 1;
+-			break;
+-		}
++	switch (fmt & SND_SOC_DAIFMT_INV_MASK) {
++	case SND_SOC_DAIFMT_NB_NF:
++		break;
++	case SND_SOC_DAIFMT_NB_IF:
++		cf.frame_clk_inv = 1;
++		break;
++	case SND_SOC_DAIFMT_IB_NF:
++		cf.bit_clk_inv = 1;
++		break;
++	case SND_SOC_DAIFMT_IB_IF:
++		cf.frame_clk_inv = 1;
++		cf.bit_clk_inv = 1;
++		break;
++	}
+ 
+-		switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) {
+-		case SND_SOC_DAIFMT_I2S:
+-			cf.fmt = HDMI_I2S;
+-			break;
+-		case SND_SOC_DAIFMT_DSP_A:
+-			cf.fmt = HDMI_DSP_A;
+-			break;
+-		case SND_SOC_DAIFMT_DSP_B:
+-			cf.fmt = HDMI_DSP_B;
+-			break;
+-		case SND_SOC_DAIFMT_RIGHT_J:
+-			cf.fmt = HDMI_RIGHT_J;
+-			break;
+-		case SND_SOC_DAIFMT_LEFT_J:
+-			cf.fmt = HDMI_LEFT_J;
+-			break;
+-		case SND_SOC_DAIFMT_AC97:
+-			cf.fmt = HDMI_AC97;
+-			break;
+-		default:
+-			dev_err(dai->dev, "Invalid DAI interface format\n");
+-			return -EINVAL;
+-		}
++	switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) {
++	case SND_SOC_DAIFMT_I2S:
++		cf.fmt = HDMI_I2S;
++		break;
++	case SND_SOC_DAIFMT_DSP_A:
++		cf.fmt = HDMI_DSP_A;
++		break;
++	case SND_SOC_DAIFMT_DSP_B:
++		cf.fmt = HDMI_DSP_B;
++		break;
++	case SND_SOC_DAIFMT_RIGHT_J:
++		cf.fmt = HDMI_RIGHT_J;
++		break;
++	case SND_SOC_DAIFMT_LEFT_J:
++		cf.fmt = HDMI_LEFT_J;
++		break;
++	case SND_SOC_DAIFMT_AC97:
++		cf.fmt = HDMI_AC97;
++		break;
++	default:
++		dev_err(dai->dev, "Invalid DAI interface format\n");
++		return -EINVAL;
+ 	}
+ 
+ 	hcp->daifmt[dai->id] = cf;
+ 
+-	return ret;
++	return 0;
+ }
+ 
+ static int hdmi_codec_digital_mute(struct snd_soc_dai *dai, int mute)
+@@ -792,8 +790,10 @@ static int hdmi_codec_probe(struct platform_device *pdev)
+ 		i++;
+ 	}
+ 
+-	if (hcd->spdif)
++	if (hcd->spdif) {
+ 		hcp->daidrv[i] = hdmi_spdif_dai;
++		hcp->daifmt[DAI_ID_SPDIF].fmt = HDMI_SPDIF;
++	}
+ 
+ 	dev_set_drvdata(dev, hcp);
+ 
+diff --git a/sound/soc/codecs/nau8810.c b/sound/soc/codecs/nau8810.c
+index bfd74b86c9d2..645aa0794123 100644
+--- a/sound/soc/codecs/nau8810.c
++++ b/sound/soc/codecs/nau8810.c
+@@ -411,9 +411,9 @@ static const struct snd_soc_dapm_widget nau8810_dapm_widgets[] = {
+ 	SND_SOC_DAPM_MIXER("Mono Mixer", NAU8810_REG_POWER3,
+ 		NAU8810_MOUTMX_EN_SFT, 0, &nau8810_mono_mixer_controls[0],
+ 		ARRAY_SIZE(nau8810_mono_mixer_controls)),
+-	SND_SOC_DAPM_DAC("DAC", "HiFi Playback", NAU8810_REG_POWER3,
++	SND_SOC_DAPM_DAC("DAC", "Playback", NAU8810_REG_POWER3,
+ 		NAU8810_DAC_EN_SFT, 0),
+-	SND_SOC_DAPM_ADC("ADC", "HiFi Capture", NAU8810_REG_POWER2,
++	SND_SOC_DAPM_ADC("ADC", "Capture", NAU8810_REG_POWER2,
+ 		NAU8810_ADC_EN_SFT, 0),
+ 	SND_SOC_DAPM_PGA("SpkN Out", NAU8810_REG_POWER3,
+ 		NAU8810_NSPK_EN_SFT, 0, NULL, 0),
+diff --git a/sound/soc/codecs/nau8824.c b/sound/soc/codecs/nau8824.c
+index 468d5143e2c4..663a208c2f78 100644
+--- a/sound/soc/codecs/nau8824.c
++++ b/sound/soc/codecs/nau8824.c
+@@ -681,8 +681,8 @@ static const struct snd_soc_dapm_widget nau8824_dapm_widgets[] = {
+ 	SND_SOC_DAPM_ADC("ADCR", NULL, NAU8824_REG_ANALOG_ADC_2,
+ 		NAU8824_ADCR_EN_SFT, 0),
+ 
+-	SND_SOC_DAPM_AIF_OUT("AIFTX", "HiFi Capture", 0, SND_SOC_NOPM, 0, 0),
+-	SND_SOC_DAPM_AIF_IN("AIFRX", "HiFi Playback", 0, SND_SOC_NOPM, 0, 0),
++	SND_SOC_DAPM_AIF_OUT("AIFTX", "Capture", 0, SND_SOC_NOPM, 0, 0),
++	SND_SOC_DAPM_AIF_IN("AIFRX", "Playback", 0, SND_SOC_NOPM, 0, 0),
+ 
+ 	SND_SOC_DAPM_DAC("DACL", NULL, NAU8824_REG_RDAC,
+ 		NAU8824_DACL_EN_SFT, 0),
+@@ -831,6 +831,36 @@ static void nau8824_int_status_clear_all(struct regmap *regmap)
+ 	}
+ }
+ 
++static void nau8824_dapm_disable_pin(struct nau8824 *nau8824, const char *pin)
++{
++	struct snd_soc_dapm_context *dapm = nau8824->dapm;
++	const char *prefix = dapm->component->name_prefix;
++	char prefixed_pin[80];
++
++	if (prefix) {
++		snprintf(prefixed_pin, sizeof(prefixed_pin), "%s %s",
++			 prefix, pin);
++		snd_soc_dapm_disable_pin(dapm, prefixed_pin);
++	} else {
++		snd_soc_dapm_disable_pin(dapm, pin);
++	}
++}
++
++static void nau8824_dapm_enable_pin(struct nau8824 *nau8824, const char *pin)
++{
++	struct snd_soc_dapm_context *dapm = nau8824->dapm;
++	const char *prefix = dapm->component->name_prefix;
++	char prefixed_pin[80];
++
++	if (prefix) {
++		snprintf(prefixed_pin, sizeof(prefixed_pin), "%s %s",
++			 prefix, pin);
++		snd_soc_dapm_force_enable_pin(dapm, prefixed_pin);
++	} else {
++		snd_soc_dapm_force_enable_pin(dapm, pin);
++	}
++}
++
+ static void nau8824_eject_jack(struct nau8824 *nau8824)
+ {
+ 	struct snd_soc_dapm_context *dapm = nau8824->dapm;
+@@ -839,8 +869,8 @@ static void nau8824_eject_jack(struct nau8824 *nau8824)
+ 	/* Clear all interruption status */
+ 	nau8824_int_status_clear_all(regmap);
+ 
+-	snd_soc_dapm_disable_pin(dapm, "SAR");
+-	snd_soc_dapm_disable_pin(dapm, "MICBIAS");
++	nau8824_dapm_disable_pin(nau8824, "SAR");
++	nau8824_dapm_disable_pin(nau8824, "MICBIAS");
+ 	snd_soc_dapm_sync(dapm);
+ 
+ 	/* Enable the insertion interruption, disable the ejection
+@@ -870,8 +900,8 @@ static void nau8824_jdet_work(struct work_struct *work)
+ 	struct regmap *regmap = nau8824->regmap;
+ 	int adc_value, event = 0, event_mask = 0;
+ 
+-	snd_soc_dapm_force_enable_pin(dapm, "MICBIAS");
+-	snd_soc_dapm_force_enable_pin(dapm, "SAR");
++	nau8824_dapm_enable_pin(nau8824, "MICBIAS");
++	nau8824_dapm_enable_pin(nau8824, "SAR");
+ 	snd_soc_dapm_sync(dapm);
+ 
+ 	msleep(100);
+@@ -882,8 +912,8 @@ static void nau8824_jdet_work(struct work_struct *work)
+ 	if (adc_value < HEADSET_SARADC_THD) {
+ 		event |= SND_JACK_HEADPHONE;
+ 
+-		snd_soc_dapm_disable_pin(dapm, "SAR");
+-		snd_soc_dapm_disable_pin(dapm, "MICBIAS");
++		nau8824_dapm_disable_pin(nau8824, "SAR");
++		nau8824_dapm_disable_pin(nau8824, "MICBIAS");
+ 		snd_soc_dapm_sync(dapm);
+ 	} else {
+ 		event |= SND_JACK_HEADSET;
+diff --git a/sound/soc/codecs/rt5682.c b/sound/soc/codecs/rt5682.c
+index a9b91bcfcc09..72ef2a0f6387 100644
+--- a/sound/soc/codecs/rt5682.c
++++ b/sound/soc/codecs/rt5682.c
+@@ -904,13 +904,21 @@ static int rt5682_headset_detect(struct snd_soc_component *component,
+ 		int jack_insert)
+ {
+ 	struct rt5682_priv *rt5682 = snd_soc_component_get_drvdata(component);
+-	struct snd_soc_dapm_context *dapm =
+-		snd_soc_component_get_dapm(component);
+ 	unsigned int val, count;
+ 
+ 	if (jack_insert) {
+-		snd_soc_dapm_force_enable_pin(dapm, "CBJ Power");
+-		snd_soc_dapm_sync(dapm);
++
++		snd_soc_component_update_bits(component, RT5682_PWR_ANLG_1,
++			RT5682_PWR_VREF2 | RT5682_PWR_MB,
++			RT5682_PWR_VREF2 | RT5682_PWR_MB);
++		snd_soc_component_update_bits(component,
++				RT5682_PWR_ANLG_1, RT5682_PWR_FV2, 0);
++		usleep_range(15000, 20000);
++		snd_soc_component_update_bits(component,
++				RT5682_PWR_ANLG_1, RT5682_PWR_FV2, RT5682_PWR_FV2);
++		snd_soc_component_update_bits(component, RT5682_PWR_ANLG_3,
++			RT5682_PWR_CBJ, RT5682_PWR_CBJ);
++
+ 		snd_soc_component_update_bits(component, RT5682_CBJ_CTRL_1,
+ 			RT5682_TRIG_JD_MASK, RT5682_TRIG_JD_HIGH);
+ 
+@@ -938,8 +946,10 @@ static int rt5682_headset_detect(struct snd_soc_component *component,
+ 		rt5682_enable_push_button_irq(component, false);
+ 		snd_soc_component_update_bits(component, RT5682_CBJ_CTRL_1,
+ 			RT5682_TRIG_JD_MASK, RT5682_TRIG_JD_LOW);
+-		snd_soc_dapm_disable_pin(dapm, "CBJ Power");
+-		snd_soc_dapm_sync(dapm);
++		snd_soc_component_update_bits(component, RT5682_PWR_ANLG_1,
++			RT5682_PWR_VREF2 | RT5682_PWR_MB, 0);
++		snd_soc_component_update_bits(component, RT5682_PWR_ANLG_3,
++			RT5682_PWR_CBJ, 0);
+ 
+ 		rt5682->jack_type = 0;
+ 	}
+@@ -1192,7 +1202,7 @@ static int set_filter_clk(struct snd_soc_dapm_widget *w,
+ 	struct snd_soc_component *component =
+ 		snd_soc_dapm_to_component(w->dapm);
+ 	struct rt5682_priv *rt5682 = snd_soc_component_get_drvdata(component);
+-	int ref, val, reg, sft, mask, idx = -EINVAL;
++	int ref, val, reg, idx = -EINVAL;
+ 	static const int div_f[] = {1, 2, 3, 4, 6, 8, 12, 16, 24, 32, 48};
+ 	static const int div_o[] = {1, 2, 4, 6, 8, 12, 16, 24, 32, 48};
+ 
+@@ -1206,15 +1216,10 @@ static int set_filter_clk(struct snd_soc_dapm_widget *w,
+ 
+ 	idx = rt5682_div_sel(rt5682, ref, div_f, ARRAY_SIZE(div_f));
+ 
+-	if (w->shift == RT5682_PWR_ADC_S1F_BIT) {
++	if (w->shift == RT5682_PWR_ADC_S1F_BIT)
+ 		reg = RT5682_PLL_TRACK_3;
+-		sft = RT5682_ADC_OSR_SFT;
+-		mask = RT5682_ADC_OSR_MASK;
+-	} else {
++	else
+ 		reg = RT5682_PLL_TRACK_2;
+-		sft = RT5682_DAC_OSR_SFT;
+-		mask = RT5682_DAC_OSR_MASK;
+-	}
+ 
+ 	snd_soc_component_update_bits(component, reg,
+ 		RT5682_FILTER_CLK_DIV_MASK, idx << RT5682_FILTER_CLK_DIV_SFT);
+@@ -1226,7 +1231,8 @@ static int set_filter_clk(struct snd_soc_dapm_widget *w,
+ 	}
+ 
+ 	snd_soc_component_update_bits(component, RT5682_ADDA_CLK_1,
+-		mask, idx << sft);
++		RT5682_ADC_OSR_MASK | RT5682_DAC_OSR_MASK,
++		(idx << RT5682_ADC_OSR_SFT) | (idx << RT5682_DAC_OSR_SFT));
+ 
+ 	return 0;
+ }
+@@ -1585,8 +1591,6 @@ static const struct snd_soc_dapm_widget rt5682_dapm_widgets[] = {
+ 		0, NULL, 0),
+ 	SND_SOC_DAPM_SUPPLY("Vref1", RT5682_PWR_ANLG_1, RT5682_PWR_VREF1_BIT, 0,
+ 		rt5655_set_verf, SND_SOC_DAPM_PRE_PMU | SND_SOC_DAPM_POST_PMU),
+-	SND_SOC_DAPM_SUPPLY("Vref2", RT5682_PWR_ANLG_1, RT5682_PWR_VREF2_BIT, 0,
+-		rt5655_set_verf, SND_SOC_DAPM_PRE_PMU | SND_SOC_DAPM_POST_PMU),
+ 
+ 	/* ASRC */
+ 	SND_SOC_DAPM_SUPPLY_S("DAC STO1 ASRC", 1, RT5682_PLL_TRACK_1,
+@@ -1621,9 +1625,6 @@ static const struct snd_soc_dapm_widget rt5682_dapm_widgets[] = {
+ 	SND_SOC_DAPM_PGA("BST1 CBJ", SND_SOC_NOPM,
+ 		0, 0, NULL, 0),
+ 
+-	SND_SOC_DAPM_SUPPLY("CBJ Power", RT5682_PWR_ANLG_3,
+-		RT5682_PWR_CBJ_BIT, 0, NULL, 0),
+-
+ 	/* REC Mixer */
+ 	SND_SOC_DAPM_MIXER("RECMIX1L", SND_SOC_NOPM, 0, 0, rt5682_rec1_l_mix,
+ 		ARRAY_SIZE(rt5682_rec1_l_mix)),
+@@ -1786,17 +1787,13 @@ static const struct snd_soc_dapm_route rt5682_dapm_routes[] = {
+ 
+ 	/*Vref*/
+ 	{"MICBIAS1", NULL, "Vref1"},
+-	{"MICBIAS1", NULL, "Vref2"},
+ 	{"MICBIAS2", NULL, "Vref1"},
+-	{"MICBIAS2", NULL, "Vref2"},
+ 
+ 	{"CLKDET SYS", NULL, "CLKDET"},
+ 
+ 	{"IN1P", NULL, "LDO2"},
+ 
+ 	{"BST1 CBJ", NULL, "IN1P"},
+-	{"BST1 CBJ", NULL, "CBJ Power"},
+-	{"CBJ Power", NULL, "Vref2"},
+ 
+ 	{"RECMIX1L", "CBJ Switch", "BST1 CBJ"},
+ 	{"RECMIX1L", NULL, "RECMIX1L Power"},
+@@ -1906,9 +1903,7 @@ static const struct snd_soc_dapm_route rt5682_dapm_routes[] = {
+ 	{"HP Amp", NULL, "Capless"},
+ 	{"HP Amp", NULL, "Charge Pump"},
+ 	{"HP Amp", NULL, "CLKDET SYS"},
+-	{"HP Amp", NULL, "CBJ Power"},
+ 	{"HP Amp", NULL, "Vref1"},
+-	{"HP Amp", NULL, "Vref2"},
+ 	{"HPOL Playback", "Switch", "HP Amp"},
+ 	{"HPOR Playback", "Switch", "HP Amp"},
+ 	{"HPOL", NULL, "HPOL Playback"},
+@@ -2297,16 +2292,13 @@ static int rt5682_set_bias_level(struct snd_soc_component *component,
+ 	switch (level) {
+ 	case SND_SOC_BIAS_PREPARE:
+ 		regmap_update_bits(rt5682->regmap, RT5682_PWR_ANLG_1,
+-			RT5682_PWR_MB | RT5682_PWR_BG,
+-			RT5682_PWR_MB | RT5682_PWR_BG);
++			RT5682_PWR_BG, RT5682_PWR_BG);
+ 		regmap_update_bits(rt5682->regmap, RT5682_PWR_DIG_1,
+ 			RT5682_DIG_GATE_CTRL | RT5682_PWR_LDO,
+ 			RT5682_DIG_GATE_CTRL | RT5682_PWR_LDO);
+ 		break;
+ 
+ 	case SND_SOC_BIAS_STANDBY:
+-		regmap_update_bits(rt5682->regmap, RT5682_PWR_ANLG_1,
+-			RT5682_PWR_MB, RT5682_PWR_MB);
+ 		regmap_update_bits(rt5682->regmap, RT5682_PWR_DIG_1,
+ 			RT5682_DIG_GATE_CTRL, RT5682_DIG_GATE_CTRL);
+ 		break;
+@@ -2314,7 +2306,7 @@ static int rt5682_set_bias_level(struct snd_soc_component *component,
+ 		regmap_update_bits(rt5682->regmap, RT5682_PWR_DIG_1,
+ 			RT5682_DIG_GATE_CTRL | RT5682_PWR_LDO, 0);
+ 		regmap_update_bits(rt5682->regmap, RT5682_PWR_ANLG_1,
+-			RT5682_PWR_MB | RT5682_PWR_BG, 0);
++			RT5682_PWR_BG, 0);
+ 		break;
+ 
+ 	default:
+@@ -2357,6 +2349,8 @@ static int rt5682_resume(struct snd_soc_component *component)
+ 	regcache_cache_only(rt5682->regmap, false);
+ 	regcache_sync(rt5682->regmap);
+ 
++	rt5682_irq(0, rt5682);
++
+ 	return 0;
+ }
+ #else
+diff --git a/sound/soc/codecs/tlv320aic32x4.c b/sound/soc/codecs/tlv320aic32x4.c
+index f03195d2ab2e..45d9f4a09044 100644
+--- a/sound/soc/codecs/tlv320aic32x4.c
++++ b/sound/soc/codecs/tlv320aic32x4.c
+@@ -462,6 +462,8 @@ static const struct snd_soc_dapm_widget aic32x4_dapm_widgets[] = {
+ 	SND_SOC_DAPM_INPUT("IN2_R"),
+ 	SND_SOC_DAPM_INPUT("IN3_L"),
+ 	SND_SOC_DAPM_INPUT("IN3_R"),
++	SND_SOC_DAPM_INPUT("CM_L"),
++	SND_SOC_DAPM_INPUT("CM_R"),
+ };
+ 
+ static const struct snd_soc_dapm_route aic32x4_dapm_routes[] = {
+diff --git a/sound/soc/codecs/tlv320aic3x.c b/sound/soc/codecs/tlv320aic3x.c
+index 6aa0edf8c5ef..cea3ebecdb12 100644
+--- a/sound/soc/codecs/tlv320aic3x.c
++++ b/sound/soc/codecs/tlv320aic3x.c
+@@ -1609,7 +1609,6 @@ static int aic3x_probe(struct snd_soc_component *component)
+ 	struct aic3x_priv *aic3x = snd_soc_component_get_drvdata(component);
+ 	int ret, i;
+ 
+-	INIT_LIST_HEAD(&aic3x->list);
+ 	aic3x->component = component;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(aic3x->supplies); i++) {
+@@ -1692,7 +1691,6 @@ static void aic3x_remove(struct snd_soc_component *component)
+ 	struct aic3x_priv *aic3x = snd_soc_component_get_drvdata(component);
+ 	int i;
+ 
+-	list_del(&aic3x->list);
+ 	for (i = 0; i < ARRAY_SIZE(aic3x->supplies); i++)
+ 		regulator_unregister_notifier(aic3x->supplies[i].consumer,
+ 					      &aic3x->disable_nb[i].nb);
+@@ -1890,6 +1888,7 @@ static int aic3x_i2c_probe(struct i2c_client *i2c,
+ 	if (ret != 0)
+ 		goto err_gpio;
+ 
++	INIT_LIST_HEAD(&aic3x->list);
+ 	list_add(&aic3x->list, &reset_list);
+ 
+ 	return 0;
+@@ -1906,6 +1905,8 @@ static int aic3x_i2c_remove(struct i2c_client *client)
+ {
+ 	struct aic3x_priv *aic3x = i2c_get_clientdata(client);
+ 
++	list_del(&aic3x->list);
++
+ 	if (gpio_is_valid(aic3x->gpio_reset) &&
+ 	    !aic3x_is_shared_reset(aic3x)) {
+ 		gpio_set_value(aic3x->gpio_reset, 0);
+diff --git a/sound/soc/codecs/wm_adsp.c b/sound/soc/codecs/wm_adsp.c
+index 0600e4404f90..eb5b1be77c47 100644
+--- a/sound/soc/codecs/wm_adsp.c
++++ b/sound/soc/codecs/wm_adsp.c
+@@ -3821,11 +3821,13 @@ irqreturn_t wm_adsp2_bus_error(struct wm_adsp *dsp)
+ 	struct regmap *regmap = dsp->regmap;
+ 	int ret = 0;
+ 
++	mutex_lock(&dsp->pwr_lock);
++
+ 	ret = regmap_read(regmap, dsp->base + ADSP2_LOCK_REGION_CTRL, &val);
+ 	if (ret) {
+ 		adsp_err(dsp,
+ 			"Failed to read Region Lock Ctrl register: %d\n", ret);
+-		return IRQ_HANDLED;
++		goto error;
+ 	}
+ 
+ 	if (val & ADSP2_WDT_TIMEOUT_STS_MASK) {
+@@ -3844,7 +3846,7 @@ irqreturn_t wm_adsp2_bus_error(struct wm_adsp *dsp)
+ 			adsp_err(dsp,
+ 				 "Failed to read Bus Err Addr register: %d\n",
+ 				 ret);
+-			return IRQ_HANDLED;
++			goto error;
+ 		}
+ 
+ 		adsp_err(dsp, "bus error address = 0x%x\n",
+@@ -3857,7 +3859,7 @@ irqreturn_t wm_adsp2_bus_error(struct wm_adsp *dsp)
+ 			adsp_err(dsp,
+ 				 "Failed to read Pmem Xmem Err Addr register: %d\n",
+ 				 ret);
+-			return IRQ_HANDLED;
++			goto error;
+ 		}
+ 
+ 		adsp_err(dsp, "xmem error address = 0x%x\n",
+@@ -3870,6 +3872,9 @@ irqreturn_t wm_adsp2_bus_error(struct wm_adsp *dsp)
+ 	regmap_update_bits(regmap, dsp->base + ADSP2_LOCK_REGION_CTRL,
+ 			   ADSP2_CTRL_ERR_EINT, ADSP2_CTRL_ERR_EINT);
+ 
++error:
++	mutex_unlock(&dsp->pwr_lock);
++
+ 	return IRQ_HANDLED;
+ }
+ EXPORT_SYMBOL_GPL(wm_adsp2_bus_error);
+diff --git a/sound/soc/intel/boards/kbl_rt5663_rt5514_max98927.c b/sound/soc/intel/boards/kbl_rt5663_rt5514_max98927.c
+index 7044d8c2b187..879f14257a3e 100644
+--- a/sound/soc/intel/boards/kbl_rt5663_rt5514_max98927.c
++++ b/sound/soc/intel/boards/kbl_rt5663_rt5514_max98927.c
+@@ -405,7 +405,7 @@ static const struct snd_pcm_hw_constraint_list constraints_dmic_channels = {
+ };
+ 
+ static const unsigned int dmic_2ch[] = {
+-	4,
++	2,
+ };
+ 
+ static const struct snd_pcm_hw_constraint_list constraints_dmic_2ch = {
+diff --git a/sound/soc/intel/common/sst-firmware.c b/sound/soc/intel/common/sst-firmware.c
+index 1e067504b604..f830e59f93ea 100644
+--- a/sound/soc/intel/common/sst-firmware.c
++++ b/sound/soc/intel/common/sst-firmware.c
+@@ -1251,11 +1251,15 @@ struct sst_dsp *sst_dsp_new(struct device *dev,
+ 		goto irq_err;
+ 
+ 	err = sst_dma_new(sst);
+-	if (err)
+-		dev_warn(dev, "sst_dma_new failed %d\n", err);
++	if (err)  {
++		dev_err(dev, "sst_dma_new failed %d\n", err);
++		goto dma_err;
++	}
+ 
+ 	return sst;
+ 
++dma_err:
++	free_irq(sst->irq, sst);
+ irq_err:
+ 	if (sst->ops->free)
+ 		sst->ops->free(sst);
+diff --git a/sound/soc/intel/skylake/skl-pcm.c b/sound/soc/intel/skylake/skl-pcm.c
+index 557f80c0bfe5..5cd308d622f6 100644
+--- a/sound/soc/intel/skylake/skl-pcm.c
++++ b/sound/soc/intel/skylake/skl-pcm.c
+@@ -181,6 +181,7 @@ int skl_pcm_link_dma_prepare(struct device *dev, struct skl_pipe_params *params)
+ 	struct hdac_stream *hstream;
+ 	struct hdac_ext_stream *stream;
+ 	struct hdac_ext_link *link;
++	unsigned char stream_tag;
+ 
+ 	hstream = snd_hdac_get_stream(bus, params->stream,
+ 					params->link_dma_id + 1);
+@@ -199,10 +200,13 @@ int skl_pcm_link_dma_prepare(struct device *dev, struct skl_pipe_params *params)
+ 
+ 	snd_hdac_ext_link_stream_setup(stream, format_val);
+ 
+-	list_for_each_entry(link, &bus->hlink_list, list) {
+-		if (link->index == params->link_index)
+-			snd_hdac_ext_link_set_stream_id(link,
+-					hstream->stream_tag);
++	stream_tag = hstream->stream_tag;
++	if (stream->hstream.direction == SNDRV_PCM_STREAM_PLAYBACK) {
++		list_for_each_entry(link, &bus->hlink_list, list) {
++			if (link->index == params->link_index)
++				snd_hdac_ext_link_set_stream_id(link,
++								stream_tag);
++		}
+ 	}
+ 
+ 	stream->link_prepared = 1;
+@@ -645,6 +649,7 @@ static int skl_link_hw_free(struct snd_pcm_substream *substream,
+ 	struct hdac_ext_stream *link_dev =
+ 				snd_soc_dai_get_dma_data(dai, substream);
+ 	struct hdac_ext_link *link;
++	unsigned char stream_tag;
+ 
+ 	dev_dbg(dai->dev, "%s: %s\n", __func__, dai->name);
+ 
+@@ -654,7 +659,11 @@ static int skl_link_hw_free(struct snd_pcm_substream *substream,
+ 	if (!link)
+ 		return -EINVAL;
+ 
+-	snd_hdac_ext_link_clear_stream_id(link, hdac_stream(link_dev)->stream_tag);
++	if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
++		stream_tag = hdac_stream(link_dev)->stream_tag;
++		snd_hdac_ext_link_clear_stream_id(link, stream_tag);
++	}
++
+ 	snd_hdac_ext_stream_release(link_dev, HDAC_EXT_STREAM_TYPE_LINK);
+ 	return 0;
+ }
+diff --git a/sound/soc/rockchip/rockchip_pdm.c b/sound/soc/rockchip/rockchip_pdm.c
+index 400e29edb1c9..8a2e3bbce3a1 100644
+--- a/sound/soc/rockchip/rockchip_pdm.c
++++ b/sound/soc/rockchip/rockchip_pdm.c
+@@ -208,7 +208,9 @@ static int rockchip_pdm_set_fmt(struct snd_soc_dai *cpu_dai,
+ 		return -EINVAL;
+ 	}
+ 
++	pm_runtime_get_sync(cpu_dai->dev);
+ 	regmap_update_bits(pdm->regmap, PDM_CLK_CTRL, mask, val);
++	pm_runtime_put(cpu_dai->dev);
+ 
+ 	return 0;
+ }
+diff --git a/sound/soc/samsung/odroid.c b/sound/soc/samsung/odroid.c
+index e7b371b07230..45c6d7396785 100644
+--- a/sound/soc/samsung/odroid.c
++++ b/sound/soc/samsung/odroid.c
+@@ -64,11 +64,11 @@ static int odroid_card_hw_params(struct snd_pcm_substream *substream,
+ 		return ret;
+ 
+ 	/*
+-	 *  We add 1 to the rclk_freq value in order to avoid too low clock
++	 *  We add 2 to the rclk_freq value in order to avoid too low clock
+ 	 *  frequency values due to the EPLL output frequency not being exact
+ 	 *  multiple of the audio sampling rate.
+ 	 */
+-	rclk_freq = params_rate(params) * rfs + 1;
++	rclk_freq = params_rate(params) * rfs + 2;
+ 
+ 	ret = clk_set_rate(priv->sclk_i2s, rclk_freq);
+ 	if (ret < 0)
+diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
+index 50617db05c46..416c371fa01a 100644
+--- a/sound/soc/soc-core.c
++++ b/sound/soc/soc-core.c
+@@ -2790,6 +2790,7 @@ int snd_soc_register_card(struct snd_soc_card *card)
+ 	card->instantiated = 0;
+ 	mutex_init(&card->mutex);
+ 	mutex_init(&card->dapm_mutex);
++	spin_lock_init(&card->dpcm_lock);
+ 
+ 	return snd_soc_bind_card(card);
+ }
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index 20bad755888b..08ab5fef75dc 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -3840,6 +3840,10 @@ snd_soc_dapm_free_kcontrol(struct snd_soc_card *card,
+ 	int count;
+ 
+ 	devm_kfree(card->dev, (void *)*private_value);
++
++	if (!w_param_text)
++		return;
++
+ 	for (count = 0 ; count < num_params; count++)
+ 		devm_kfree(card->dev, (void *)w_param_text[count]);
+ 	devm_kfree(card->dev, w_param_text);
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index 0c1dd6bd67ab..22946493a11f 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -954,10 +954,13 @@ static int soc_pcm_hw_params(struct snd_pcm_substream *substream,
+ 		codec_params = *params;
+ 
+ 		/* fixup params based on TDM slot masks */
+-		if (codec_dai->tx_mask)
++		if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK &&
++		    codec_dai->tx_mask)
+ 			soc_pcm_codec_params_fixup(&codec_params,
+ 						   codec_dai->tx_mask);
+-		if (codec_dai->rx_mask)
++
++		if (substream->stream == SNDRV_PCM_STREAM_CAPTURE &&
++		    codec_dai->rx_mask)
+ 			soc_pcm_codec_params_fixup(&codec_params,
+ 						   codec_dai->rx_mask);
+ 
+@@ -1209,6 +1212,7 @@ static int dpcm_be_connect(struct snd_soc_pcm_runtime *fe,
+ 		struct snd_soc_pcm_runtime *be, int stream)
+ {
+ 	struct snd_soc_dpcm *dpcm;
++	unsigned long flags;
+ 
+ 	/* only add new dpcms */
+ 	for_each_dpcm_be(fe, stream, dpcm) {
+@@ -1224,8 +1228,10 @@ static int dpcm_be_connect(struct snd_soc_pcm_runtime *fe,
+ 	dpcm->fe = fe;
+ 	be->dpcm[stream].runtime = fe->dpcm[stream].runtime;
+ 	dpcm->state = SND_SOC_DPCM_LINK_STATE_NEW;
++	spin_lock_irqsave(&fe->card->dpcm_lock, flags);
+ 	list_add(&dpcm->list_be, &fe->dpcm[stream].be_clients);
+ 	list_add(&dpcm->list_fe, &be->dpcm[stream].fe_clients);
++	spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 
+ 	dev_dbg(fe->dev, "connected new DPCM %s path %s %s %s\n",
+ 			stream ? "capture" : "playback",  fe->dai_link->name,
+@@ -1271,6 +1277,7 @@ static void dpcm_be_reparent(struct snd_soc_pcm_runtime *fe,
+ void dpcm_be_disconnect(struct snd_soc_pcm_runtime *fe, int stream)
+ {
+ 	struct snd_soc_dpcm *dpcm, *d;
++	unsigned long flags;
+ 
+ 	for_each_dpcm_be_safe(fe, stream, dpcm, d) {
+ 		dev_dbg(fe->dev, "ASoC: BE %s disconnect check for %s\n",
+@@ -1290,8 +1297,10 @@ void dpcm_be_disconnect(struct snd_soc_pcm_runtime *fe, int stream)
+ #ifdef CONFIG_DEBUG_FS
+ 		debugfs_remove(dpcm->debugfs_state);
+ #endif
++		spin_lock_irqsave(&fe->card->dpcm_lock, flags);
+ 		list_del(&dpcm->list_be);
+ 		list_del(&dpcm->list_fe);
++		spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 		kfree(dpcm);
+ 	}
+ }
+@@ -1543,10 +1552,13 @@ int dpcm_process_paths(struct snd_soc_pcm_runtime *fe,
+ void dpcm_clear_pending_state(struct snd_soc_pcm_runtime *fe, int stream)
+ {
+ 	struct snd_soc_dpcm *dpcm;
++	unsigned long flags;
+ 
++	spin_lock_irqsave(&fe->card->dpcm_lock, flags);
+ 	for_each_dpcm_be(fe, stream, dpcm)
+ 		dpcm->be->dpcm[stream].runtime_update =
+ 						SND_SOC_DPCM_UPDATE_NO;
++	spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ }
+ 
+ static void dpcm_be_dai_startup_unwind(struct snd_soc_pcm_runtime *fe,
+@@ -2572,6 +2584,7 @@ static int dpcm_run_update_startup(struct snd_soc_pcm_runtime *fe, int stream)
+ 	struct snd_soc_dpcm *dpcm;
+ 	enum snd_soc_dpcm_trigger trigger = fe->dai_link->trigger[stream];
+ 	int ret;
++	unsigned long flags;
+ 
+ 	dev_dbg(fe->dev, "ASoC: runtime %s open on FE %s\n",
+ 			stream ? "capture" : "playback", fe->dai_link->name);
+@@ -2641,11 +2654,13 @@ close:
+ 	dpcm_be_dai_shutdown(fe, stream);
+ disconnect:
+ 	/* disconnect any non started BEs */
++	spin_lock_irqsave(&fe->card->dpcm_lock, flags);
+ 	for_each_dpcm_be(fe, stream, dpcm) {
+ 		struct snd_soc_pcm_runtime *be = dpcm->be;
+ 		if (be->dpcm[stream].state != SND_SOC_DPCM_STATE_START)
+ 				dpcm->state = SND_SOC_DPCM_LINK_STATE_FREE;
+ 	}
++	spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 
+ 	return ret;
+ }
+@@ -3221,7 +3236,10 @@ int snd_soc_dpcm_can_be_free_stop(struct snd_soc_pcm_runtime *fe,
+ {
+ 	struct snd_soc_dpcm *dpcm;
+ 	int state;
++	int ret = 1;
++	unsigned long flags;
+ 
++	spin_lock_irqsave(&fe->card->dpcm_lock, flags);
+ 	for_each_dpcm_fe(be, stream, dpcm) {
+ 
+ 		if (dpcm->fe == fe)
+@@ -3230,12 +3248,15 @@ int snd_soc_dpcm_can_be_free_stop(struct snd_soc_pcm_runtime *fe,
+ 		state = dpcm->fe->dpcm[stream].state;
+ 		if (state == SND_SOC_DPCM_STATE_START ||
+ 			state == SND_SOC_DPCM_STATE_PAUSED ||
+-			state == SND_SOC_DPCM_STATE_SUSPEND)
+-			return 0;
++			state == SND_SOC_DPCM_STATE_SUSPEND) {
++			ret = 0;
++			break;
++		}
+ 	}
++	spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 
+ 	/* it's safe to free/stop this BE DAI */
+-	return 1;
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(snd_soc_dpcm_can_be_free_stop);
+ 
+@@ -3248,7 +3269,10 @@ int snd_soc_dpcm_can_be_params(struct snd_soc_pcm_runtime *fe,
+ {
+ 	struct snd_soc_dpcm *dpcm;
+ 	int state;
++	int ret = 1;
++	unsigned long flags;
+ 
++	spin_lock_irqsave(&fe->card->dpcm_lock, flags);
+ 	for_each_dpcm_fe(be, stream, dpcm) {
+ 
+ 		if (dpcm->fe == fe)
+@@ -3258,12 +3282,15 @@ int snd_soc_dpcm_can_be_params(struct snd_soc_pcm_runtime *fe,
+ 		if (state == SND_SOC_DPCM_STATE_START ||
+ 			state == SND_SOC_DPCM_STATE_PAUSED ||
+ 			state == SND_SOC_DPCM_STATE_SUSPEND ||
+-			state == SND_SOC_DPCM_STATE_PREPARE)
+-			return 0;
++			state == SND_SOC_DPCM_STATE_PREPARE) {
++			ret = 0;
++			break;
++		}
+ 	}
++	spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ 
+ 	/* it's safe to change hw_params */
+-	return 1;
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(snd_soc_dpcm_can_be_params);
+ 
+@@ -3302,6 +3329,7 @@ static ssize_t dpcm_show_state(struct snd_soc_pcm_runtime *fe,
+ 	struct snd_pcm_hw_params *params = &fe->dpcm[stream].hw_params;
+ 	struct snd_soc_dpcm *dpcm;
+ 	ssize_t offset = 0;
++	unsigned long flags;
+ 
+ 	/* FE state */
+ 	offset += snprintf(buf + offset, size - offset,
+@@ -3329,6 +3357,7 @@ static ssize_t dpcm_show_state(struct snd_soc_pcm_runtime *fe,
+ 		goto out;
+ 	}
+ 
++	spin_lock_irqsave(&fe->card->dpcm_lock, flags);
+ 	for_each_dpcm_be(fe, stream, dpcm) {
+ 		struct snd_soc_pcm_runtime *be = dpcm->be;
+ 		params = &dpcm->hw_params;
+@@ -3349,7 +3378,7 @@ static ssize_t dpcm_show_state(struct snd_soc_pcm_runtime *fe,
+ 				params_channels(params),
+ 				params_rate(params));
+ 	}
+-
++	spin_unlock_irqrestore(&fe->card->dpcm_lock, flags);
+ out:
+ 	return offset;
+ }
+diff --git a/sound/soc/stm/stm32_adfsdm.c b/sound/soc/stm/stm32_adfsdm.c
+index 706ff005234f..24948b95eb19 100644
+--- a/sound/soc/stm/stm32_adfsdm.c
++++ b/sound/soc/stm/stm32_adfsdm.c
+@@ -9,6 +9,7 @@
+ 
+ #include <linux/clk.h>
+ #include <linux/module.h>
++#include <linux/mutex.h>
+ #include <linux/platform_device.h>
+ #include <linux/slab.h>
+ 
+@@ -37,6 +38,8 @@ struct stm32_adfsdm_priv {
+ 	/* PCM buffer */
+ 	unsigned char *pcm_buff;
+ 	unsigned int pos;
++
++	struct mutex lock; /* protect against race condition on iio state */
+ };
+ 
+ static const struct snd_pcm_hardware stm32_adfsdm_pcm_hw = {
+@@ -62,10 +65,12 @@ static void stm32_adfsdm_shutdown(struct snd_pcm_substream *substream,
+ {
+ 	struct stm32_adfsdm_priv *priv = snd_soc_dai_get_drvdata(dai);
+ 
++	mutex_lock(&priv->lock);
+ 	if (priv->iio_active) {
+ 		iio_channel_stop_all_cb(priv->iio_cb);
+ 		priv->iio_active = false;
+ 	}
++	mutex_unlock(&priv->lock);
+ }
+ 
+ static int stm32_adfsdm_dai_prepare(struct snd_pcm_substream *substream,
+@@ -74,13 +79,19 @@ static int stm32_adfsdm_dai_prepare(struct snd_pcm_substream *substream,
+ 	struct stm32_adfsdm_priv *priv = snd_soc_dai_get_drvdata(dai);
+ 	int ret;
+ 
++	mutex_lock(&priv->lock);
++	if (priv->iio_active) {
++		iio_channel_stop_all_cb(priv->iio_cb);
++		priv->iio_active = false;
++	}
++
+ 	ret = iio_write_channel_attribute(priv->iio_ch,
+ 					  substream->runtime->rate, 0,
+ 					  IIO_CHAN_INFO_SAMP_FREQ);
+ 	if (ret < 0) {
+ 		dev_err(dai->dev, "%s: Failed to set %d sampling rate\n",
+ 			__func__, substream->runtime->rate);
+-		return ret;
++		goto out;
+ 	}
+ 
+ 	if (!priv->iio_active) {
+@@ -92,6 +103,9 @@ static int stm32_adfsdm_dai_prepare(struct snd_pcm_substream *substream,
+ 				__func__, ret);
+ 	}
+ 
++out:
++	mutex_unlock(&priv->lock);
++
+ 	return ret;
+ }
+ 
+@@ -290,6 +304,7 @@ MODULE_DEVICE_TABLE(of, stm32_adfsdm_of_match);
+ static int stm32_adfsdm_probe(struct platform_device *pdev)
+ {
+ 	struct stm32_adfsdm_priv *priv;
++	struct snd_soc_component *component;
+ 	int ret;
+ 
+ 	priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
+@@ -298,6 +313,7 @@ static int stm32_adfsdm_probe(struct platform_device *pdev)
+ 
+ 	priv->dev = &pdev->dev;
+ 	priv->dai_drv = stm32_adfsdm_dai;
++	mutex_init(&priv->lock);
+ 
+ 	dev_set_drvdata(&pdev->dev, priv);
+ 
+@@ -316,9 +332,15 @@ static int stm32_adfsdm_probe(struct platform_device *pdev)
+ 	if (IS_ERR(priv->iio_cb))
+ 		return PTR_ERR(priv->iio_cb);
+ 
+-	ret = devm_snd_soc_register_component(&pdev->dev,
+-					      &stm32_adfsdm_soc_platform,
+-					      NULL, 0);
++	component = devm_kzalloc(&pdev->dev, sizeof(*component), GFP_KERNEL);
++	if (!component)
++		return -ENOMEM;
++#ifdef CONFIG_DEBUG_FS
++	component->debugfs_prefix = "pcm";
++#endif
++
++	ret = snd_soc_add_component(&pdev->dev, component,
++				    &stm32_adfsdm_soc_platform, NULL, 0);
+ 	if (ret < 0)
+ 		dev_err(&pdev->dev, "%s: Failed to register PCM platform\n",
+ 			__func__);
+@@ -326,12 +348,20 @@ static int stm32_adfsdm_probe(struct platform_device *pdev)
+ 	return ret;
+ }
+ 
++static int stm32_adfsdm_remove(struct platform_device *pdev)
++{
++	snd_soc_unregister_component(&pdev->dev);
++
++	return 0;
++}
++
+ static struct platform_driver stm32_adfsdm_driver = {
+ 	.driver = {
+ 		   .name = STM32_ADFSDM_DRV_NAME,
+ 		   .of_match_table = stm32_adfsdm_of_match,
+ 		   },
+ 	.probe = stm32_adfsdm_probe,
++	.remove = stm32_adfsdm_remove,
+ };
+ 
+ module_platform_driver(stm32_adfsdm_driver);
+diff --git a/sound/soc/stm/stm32_sai_sub.c b/sound/soc/stm/stm32_sai_sub.c
+index 29a131e0569e..1cf9df4b6f11 100644
+--- a/sound/soc/stm/stm32_sai_sub.c
++++ b/sound/soc/stm/stm32_sai_sub.c
+@@ -70,6 +70,7 @@
+ #define SAI_IEC60958_STATUS_BYTES	24
+ 
+ #define SAI_MCLK_NAME_LEN		32
++#define SAI_RATE_11K			11025
+ 
+ /**
+  * struct stm32_sai_sub_data - private data of SAI sub block (block A or B)
+@@ -100,8 +101,9 @@
+  * @slot_mask: rx or tx active slots mask. set at init or at runtime
+  * @data_size: PCM data width. corresponds to PCM substream width.
+  * @spdif_frm_cnt: S/PDIF playback frame counter
+- * @snd_aes_iec958: iec958 data
++ * @iec958: iec958 data
+  * @ctrl_lock: control lock
++ * @irq_lock: prevent race condition with IRQ
+  */
+ struct stm32_sai_sub_data {
+ 	struct platform_device *pdev;
+@@ -133,6 +135,7 @@ struct stm32_sai_sub_data {
+ 	unsigned int spdif_frm_cnt;
+ 	struct snd_aes_iec958 iec958;
+ 	struct mutex ctrl_lock; /* protect resources accessed by controls */
++	spinlock_t irq_lock; /* used to prevent race condition with IRQ */
+ };
+ 
+ enum stm32_sai_fifo_th {
+@@ -307,6 +310,25 @@ static int stm32_sai_set_clk_div(struct stm32_sai_sub_data *sai,
+ 	return ret;
+ }
+ 
++static int stm32_sai_set_parent_clock(struct stm32_sai_sub_data *sai,
++				      unsigned int rate)
++{
++	struct platform_device *pdev = sai->pdev;
++	struct clk *parent_clk = sai->pdata->clk_x8k;
++	int ret;
++
++	if (!(rate % SAI_RATE_11K))
++		parent_clk = sai->pdata->clk_x11k;
++
++	ret = clk_set_parent(sai->sai_ck, parent_clk);
++	if (ret)
++		dev_err(&pdev->dev, " Error %d setting sai_ck parent clock. %s",
++			ret, ret == -EBUSY ?
++			"Active stream rates conflict\n" : "\n");
++
++	return ret;
++}
++
+ static long stm32_sai_mclk_round_rate(struct clk_hw *hw, unsigned long rate,
+ 				      unsigned long *prate)
+ {
+@@ -474,8 +496,10 @@ static irqreturn_t stm32_sai_isr(int irq, void *devid)
+ 		status = SNDRV_PCM_STATE_XRUN;
+ 	}
+ 
+-	if (status != SNDRV_PCM_STATE_RUNNING)
++	spin_lock(&sai->irq_lock);
++	if (status != SNDRV_PCM_STATE_RUNNING && sai->substream)
+ 		snd_pcm_stop_xrun(sai->substream);
++	spin_unlock(&sai->irq_lock);
+ 
+ 	return IRQ_HANDLED;
+ }
+@@ -486,25 +510,29 @@ static int stm32_sai_set_sysclk(struct snd_soc_dai *cpu_dai,
+ 	struct stm32_sai_sub_data *sai = snd_soc_dai_get_drvdata(cpu_dai);
+ 	int ret;
+ 
+-	if (dir == SND_SOC_CLOCK_OUT) {
++	if (dir == SND_SOC_CLOCK_OUT && sai->sai_mclk) {
+ 		ret = regmap_update_bits(sai->regmap, STM_SAI_CR1_REGX,
+ 					 SAI_XCR1_NODIV,
+ 					 (unsigned int)~SAI_XCR1_NODIV);
+ 		if (ret < 0)
+ 			return ret;
+ 
+-		dev_dbg(cpu_dai->dev, "SAI MCLK frequency is %uHz\n", freq);
+-		sai->mclk_rate = freq;
++		/* If master clock is used, set parent clock now */
++		ret = stm32_sai_set_parent_clock(sai, freq);
++		if (ret)
++			return ret;
+ 
+-		if (sai->sai_mclk) {
+-			ret = clk_set_rate_exclusive(sai->sai_mclk,
+-						     sai->mclk_rate);
+-			if (ret) {
+-				dev_err(cpu_dai->dev,
+-					"Could not set mclk rate\n");
+-				return ret;
+-			}
++		ret = clk_set_rate_exclusive(sai->sai_mclk, freq);
++		if (ret) {
++			dev_err(cpu_dai->dev,
++				ret == -EBUSY ?
++				"Active streams have incompatible rates" :
++				"Could not set mclk rate\n");
++			return ret;
+ 		}
++
++		dev_dbg(cpu_dai->dev, "SAI MCLK frequency is %uHz\n", freq);
++		sai->mclk_rate = freq;
+ 	}
+ 
+ 	return 0;
+@@ -679,8 +707,19 @@ static int stm32_sai_startup(struct snd_pcm_substream *substream,
+ {
+ 	struct stm32_sai_sub_data *sai = snd_soc_dai_get_drvdata(cpu_dai);
+ 	int imr, cr2, ret;
++	unsigned long flags;
+ 
++	spin_lock_irqsave(&sai->irq_lock, flags);
+ 	sai->substream = substream;
++	spin_unlock_irqrestore(&sai->irq_lock, flags);
++
++	if (STM_SAI_PROTOCOL_IS_SPDIF(sai)) {
++		snd_pcm_hw_constraint_mask64(substream->runtime,
++					     SNDRV_PCM_HW_PARAM_FORMAT,
++					     SNDRV_PCM_FMTBIT_S32_LE);
++		snd_pcm_hw_constraint_single(substream->runtime,
++					     SNDRV_PCM_HW_PARAM_CHANNELS, 2);
++	}
+ 
+ 	ret = clk_prepare_enable(sai->sai_ck);
+ 	if (ret < 0) {
+@@ -901,11 +940,13 @@ static int stm32_sai_configure_clock(struct snd_soc_dai *cpu_dai,
+ 	int cr1, mask, div = 0;
+ 	int sai_clk_rate, mclk_ratio, den;
+ 	unsigned int rate = params_rate(params);
++	int ret;
+ 
+-	if (!(rate % 11025))
+-		clk_set_parent(sai->sai_ck, sai->pdata->clk_x11k);
+-	else
+-		clk_set_parent(sai->sai_ck, sai->pdata->clk_x8k);
++	if (!sai->sai_mclk) {
++		ret = stm32_sai_set_parent_clock(sai, rate);
++		if (ret)
++			return ret;
++	}
+ 	sai_clk_rate = clk_get_rate(sai->sai_ck);
+ 
+ 	if (STM_SAI_IS_F4(sai->pdata)) {
+@@ -1053,28 +1094,36 @@ static void stm32_sai_shutdown(struct snd_pcm_substream *substream,
+ 			       struct snd_soc_dai *cpu_dai)
+ {
+ 	struct stm32_sai_sub_data *sai = snd_soc_dai_get_drvdata(cpu_dai);
++	unsigned long flags;
+ 
+ 	regmap_update_bits(sai->regmap, STM_SAI_IMR_REGX, SAI_XIMR_MASK, 0);
+ 
+ 	regmap_update_bits(sai->regmap, STM_SAI_CR1_REGX, SAI_XCR1_NODIV,
+ 			   SAI_XCR1_NODIV);
+ 
+-	clk_disable_unprepare(sai->sai_ck);
++	/* Release mclk rate only if rate was actually set */
++	if (sai->mclk_rate) {
++		clk_rate_exclusive_put(sai->sai_mclk);
++		sai->mclk_rate = 0;
++	}
+ 
+-	clk_rate_exclusive_put(sai->sai_mclk);
++	clk_disable_unprepare(sai->sai_ck);
+ 
++	spin_lock_irqsave(&sai->irq_lock, flags);
+ 	sai->substream = NULL;
++	spin_unlock_irqrestore(&sai->irq_lock, flags);
+ }
+ 
+ static int stm32_sai_pcm_new(struct snd_soc_pcm_runtime *rtd,
+ 			     struct snd_soc_dai *cpu_dai)
+ {
+ 	struct stm32_sai_sub_data *sai = dev_get_drvdata(cpu_dai->dev);
++	struct snd_kcontrol_new knew = iec958_ctls;
+ 
+ 	if (STM_SAI_PROTOCOL_IS_SPDIF(sai)) {
+ 		dev_dbg(&sai->pdev->dev, "%s: register iec controls", __func__);
+-		return snd_ctl_add(rtd->pcm->card,
+-				   snd_ctl_new1(&iec958_ctls, sai));
++		knew.device = rtd->pcm->device;
++		return snd_ctl_add(rtd->pcm->card, snd_ctl_new1(&knew, sai));
+ 	}
+ 
+ 	return 0;
+@@ -1426,6 +1475,7 @@ static int stm32_sai_sub_probe(struct platform_device *pdev)
+ 
+ 	sai->pdev = pdev;
+ 	mutex_init(&sai->ctrl_lock);
++	spin_lock_init(&sai->irq_lock);
+ 	platform_set_drvdata(pdev, sai);
+ 
+ 	sai->pdata = dev_get_drvdata(pdev->dev.parent);
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 5dde107083c6..479196aeb409 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -165,6 +165,7 @@ static int __dead_end_function(struct objtool_file *file, struct symbol *func,
+ 		"fortify_panic",
+ 		"usercopy_abort",
+ 		"machine_real_restart",
++		"rewind_stack_do_exit",
+ 	};
+ 
+ 	if (func->bind == STB_WEAK)



* [gentoo-commits] proj/linux-patches:5.0 commit in: /
@ 2019-05-14 21:01 Mike Pagano
  0 siblings, 0 replies; 28+ messages in thread
From: Mike Pagano @ 2019-05-14 21:01 UTC (permalink / raw
  To: gentoo-commits

commit:     eb28ace601eb7634e8c99180cfe2640f3a09027f
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue May 14 21:01:27 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue May 14 21:01:27 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=eb28ace6

Linux patch 5.0.16

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1015_linux-5.0.16.patch | 2955 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2959 insertions(+)

diff --git a/0000_README b/0000_README
index 0d6cdbe..b19b388 100644
--- a/0000_README
+++ b/0000_README
@@ -103,6 +103,10 @@ Patch:  1014_linux-5.0.15.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.0.15
 
+Patch:  1015_linux-5.0.16.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.0.16
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1015_linux-5.0.16.patch b/1015_linux-5.0.16.patch
new file mode 100644
index 0000000..342f6cf
--- /dev/null
+++ b/1015_linux-5.0.16.patch
@@ -0,0 +1,2955 @@
+diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
+index 9605dbd4b5b5..141a7bb58b80 100644
+--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
++++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
+@@ -484,6 +484,7 @@ What:		/sys/devices/system/cpu/vulnerabilities
+ 		/sys/devices/system/cpu/vulnerabilities/spectre_v2
+ 		/sys/devices/system/cpu/vulnerabilities/spec_store_bypass
+ 		/sys/devices/system/cpu/vulnerabilities/l1tf
++		/sys/devices/system/cpu/vulnerabilities/mds
+ Date:		January 2018
+ Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
+ Description:	Information about CPU vulnerabilities
+@@ -496,8 +497,7 @@ Description:	Information about CPU vulnerabilities
+ 		"Vulnerable"	  CPU is affected and no mitigation in effect
+ 		"Mitigation: $M"  CPU is affected and mitigation $M is in effect
+ 
+-		Details about the l1tf file can be found in
+-		Documentation/admin-guide/l1tf.rst
++		See also: Documentation/admin-guide/hw-vuln/index.rst
+ 
+ What:		/sys/devices/system/cpu/smt
+ 		/sys/devices/system/cpu/smt/active
+diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst
+new file mode 100644
+index 000000000000..ffc064c1ec68
+--- /dev/null
++++ b/Documentation/admin-guide/hw-vuln/index.rst
+@@ -0,0 +1,13 @@
++========================
++Hardware vulnerabilities
++========================
++
++This section describes CPU vulnerabilities and provides an overview of the
++possible mitigations along with guidance for selecting mitigations if they
++are configurable at compile, boot or run time.
++
++.. toctree::
++   :maxdepth: 1
++
++   l1tf
++   mds
+diff --git a/Documentation/admin-guide/hw-vuln/l1tf.rst b/Documentation/admin-guide/hw-vuln/l1tf.rst
+new file mode 100644
+index 000000000000..31653a9f0e1b
+--- /dev/null
++++ b/Documentation/admin-guide/hw-vuln/l1tf.rst
+@@ -0,0 +1,615 @@
++L1TF - L1 Terminal Fault
++========================
++
++L1 Terminal Fault is a hardware vulnerability which allows unprivileged
++speculative access to data which is available in the Level 1 Data Cache
++when the page table entry controlling the virtual address, which is used
++for the access, has the Present bit cleared or other reserved bits set.
++
++Affected processors
++-------------------
++
++This vulnerability affects a wide range of Intel processors. The
++vulnerability is not present on:
++
++   - Processors from AMD, Centaur and other non-Intel vendors
++
++   - Older processor models, where the CPU family is < 6
++
++   - A range of Intel ATOM processors (Cedarview, Cloverview, Lincroft,
++     Penwell, Pineview, Silvermont, Airmont, Merrifield)
++
++   - The Intel XEON PHI family
++
++   - Intel processors which have the ARCH_CAP_RDCL_NO bit set in the
++     IA32_ARCH_CAPABILITIES MSR. If the bit is set the CPU is not affected
++     by the Meltdown vulnerability either. These CPUs should become
++     available by end of 2018.
++
++Whether a processor is affected or not can be read out from the L1TF
++vulnerability file in sysfs. See :ref:`l1tf_sys_info`.
++
++Related CVEs
++------------
++
++The following CVE entries are related to the L1TF vulnerability:
++
++   =============  =================  ==============================
++   CVE-2018-3615  L1 Terminal Fault  SGX related aspects
++   CVE-2018-3620  L1 Terminal Fault  OS, SMM related aspects
++   CVE-2018-3646  L1 Terminal Fault  Virtualization related aspects
++   =============  =================  ==============================
++
++Problem
++-------
++
++If an instruction accesses a virtual address for which the relevant page
++table entry (PTE) has the Present bit cleared or other reserved bits set,
++then speculative execution ignores the invalid PTE and loads the referenced
++data if it is present in the Level 1 Data Cache, as if the page referenced
++by the address bits in the PTE was still present and accessible.
++
++While this is a purely speculative mechanism and the instruction will raise
++a page fault when it is retired eventually, the pure act of loading the
++data and making it available to other speculative instructions opens up the
++opportunity for side channel attacks to unprivileged malicious code,
++similar to the Meltdown attack.
++
++While Meltdown breaks the user space to kernel space protection, L1TF
++   allows attacking any physical memory address in the system and the attack
++   works across all protection domains. It allows attacks on SGX and also
++works from inside virtual machines because the speculation bypasses the
++extended page table (EPT) protection mechanism.
++
++
++Attack scenarios
++----------------
++
++1. Malicious user space
++^^^^^^^^^^^^^^^^^^^^^^^
++
++   Operating Systems store arbitrary information in the address bits of a
++   PTE which is marked non present. This allows a malicious user space
++   application to attack the physical memory to which these PTEs resolve.
++   In some cases user-space can maliciously influence the information
++   encoded in the address bits of the PTE, thus making attacks more
++   deterministic and more practical.
++
++   The Linux kernel contains a mitigation for this attack vector, PTE
++   inversion, which is permanently enabled and has no performance
++   impact. The kernel ensures that the address bits of PTEs, which are not
++   marked present, never point to cacheable physical memory space.
++
++   A system with an up to date kernel is protected against attacks from
++   malicious user space applications.
++
++2. Malicious guest in a virtual machine
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++   The fact that L1TF breaks all domain protections allows malicious guest
++   OSes, which can control the PTEs directly, and malicious guest user
++   space applications, which run on an unprotected guest kernel lacking the
++   PTE inversion mitigation for L1TF, to attack physical host memory.
++
++   A special aspect of L1TF in the context of virtualization is symmetric
++   multi threading (SMT). The Intel implementation of SMT is called
++   HyperThreading. The fact that Hyperthreads on the affected processors
++   share the L1 Data Cache (L1D) is important for this. As the flaw allows
++   only to attack data which is present in L1D, a malicious guest running
++   on one Hyperthread can attack the data which is brought into the L1D by
++   the context which runs on the sibling Hyperthread of the same physical
++   core. This context can be host OS, host user space or a different guest.
++
++   If the processor does not support Extended Page Tables, the attack is
++   only possible when the hypervisor does not sanitize the content of the
++   effective (shadow) page tables.
++
++   While solutions exist to mitigate these attack vectors fully, these
++   mitigations are not enabled by default in the Linux kernel because they
++   can affect performance significantly. The kernel provides several
++   mechanisms which can be utilized to address the problem depending on the
++   deployment scenario. The mitigations, their protection scope and impact
++   are described in the next sections.
++
++   The default mitigations and the rationale for choosing them are explained
++   at the end of this document. See :ref:`default_mitigations`.
++
++.. _l1tf_sys_info:
++
++L1TF system information
++-----------------------
++
++The Linux kernel provides a sysfs interface to enumerate the current L1TF
++status of the system: whether the system is vulnerable, and which
++mitigations are active. The relevant sysfs file is:
++
++/sys/devices/system/cpu/vulnerabilities/l1tf
++
++The possible values in this file are:
++
++  ===========================   ===============================
++  'Not affected'		The processor is not vulnerable
++  'Mitigation: PTE Inversion'	The host protection is active
++  ===========================   ===============================
++
++If KVM/VMX is enabled and the processor is vulnerable then the following
++information is appended to the 'Mitigation: PTE Inversion' part:
++
++  - SMT status:
++
++    =====================  ================
++    'VMX: SMT vulnerable'  SMT is enabled
++    'VMX: SMT disabled'    SMT is disabled
++    =====================  ================
++
++  - L1D Flush mode:
++
++    ================================  ====================================
++    'L1D vulnerable'		      L1D flushing is disabled
++
++    'L1D conditional cache flushes'   L1D flush is conditionally enabled
++
++    'L1D cache flushes'		      L1D flush is unconditionally enabled
++    ================================  ====================================
++
++The resulting grade of protection is discussed in the following sections.
++
++
++Host mitigation mechanism
++-------------------------
++
++The kernel is unconditionally protected against L1TF attacks from malicious
++user space running on the host.
++
++
++Guest mitigation mechanisms
++---------------------------
++
++.. _l1d_flush:
++
++1. L1D flush on VMENTER
++^^^^^^^^^^^^^^^^^^^^^^^
++
++   To make sure that a guest cannot attack data which is present in the L1D
++   the hypervisor flushes the L1D before entering the guest.
++
++   Flushing the L1D evicts not only the data which should not be accessed
++   by a potentially malicious guest, it also flushes the guest
++   data. Flushing the L1D has a performance impact as the processor has to
++   bring the flushed guest data back into the L1D. Depending on the
++   frequency of VMEXIT/VMENTER and the type of computations in the guest
++   performance degradation in the range of 1% to 50% has been observed. For
++   scenarios where guest VMEXIT/VMENTER are rare the performance impact is
++   minimal. Virtio and mechanisms like posted interrupts are designed to
++   confine the VMEXITs to a bare minimum, but specific configurations and
++   application scenarios might still suffer from a high VMEXIT rate.
++
++   The kernel provides two L1D flush modes:
++    - conditional ('cond')
++    - unconditional ('always')
++
++   The conditional mode avoids L1D flushing after VMEXITs which execute
++   only audited code paths before the corresponding VMENTER. These code
++   paths have been verified that they cannot expose secrets or other
++   interesting data to an attacker, but they can leak information about the
++   address space layout of the hypervisor.
++
++   Unconditional mode flushes L1D on all VMENTER invocations and provides
++   maximum protection. It has a higher overhead than the conditional
++   mode. The overhead cannot be quantified correctly as it depends on the
++   workload scenario and the resulting number of VMEXITs.
++
++   The general recommendation is to enable L1D flush on VMENTER. The kernel
++   defaults to conditional mode on affected processors.
++
++   **Note** that L1D flush does not prevent the SMT problem because the
++   sibling thread will also bring back its data into the L1D which makes it
++   attackable again.
++
++   L1D flush can be controlled by the administrator via the kernel command
++   line and sysfs control files. See :ref:`mitigation_control_command_line`
++   and :ref:`mitigation_control_kvm`.
++
++.. _guest_confinement:
++
++2. Guest VCPU confinement to dedicated physical cores
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++   To address the SMT problem, it is possible to make a guest or a group of
++   guests affine to one or more physical cores. The proper mechanism for
++   that is to utilize exclusive cpusets to ensure that no other guest or
++   host tasks can run on these cores.
++
++   If only a single guest or related guests run on sibling SMT threads on
++   the same physical core then they can only attack their own memory and
++   restricted parts of the host memory.
++
++   Host memory is attackable when one of the sibling SMT threads runs in
++   host OS (hypervisor) context and the other in guest context. The amount
++   of valuable information from the host OS context depends on the context
++   which the host OS executes, i.e. interrupts, soft interrupts and kernel
++   threads. The amount of valuable data from these contexts cannot be
++   declared as non-interesting for an attacker without deep inspection of
++   the code.
++
++   **Note** that assigning guests to a fixed set of physical cores affects
++   the ability of the scheduler to do load balancing and might have
++   negative effects on CPU utilization depending on the hosting
++   scenario. Disabling SMT might be a viable alternative for particular
++   scenarios.
++
++   For further information about confining guests to a single or to a group
++   of cores consult the cpusets documentation:
++
++   https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt
++
++.. _interrupt_isolation:
++
++3. Interrupt affinity
++^^^^^^^^^^^^^^^^^^^^^
++
++   Interrupts can be made affine to logical CPUs. This is not universally
++   true because there are types of interrupts which are truly per CPU
++   interrupts, e.g. the local timer interrupt. Aside from that, multi-queue
++   devices affine their interrupts to single CPUs or groups of CPUs per
++   queue without allowing the administrator to control the affinities.
++
++   Moving the interrupts, which can be affinity controlled, away from CPUs
++   which run untrusted guests, reduces the attack vector space.
++
++   Whether the interrupts which are affine to CPUs running untrusted
++   guests provide interesting data for an attacker depends on the system
++   configuration and the scenarios which run on the system. While for some
++   of the interrupts it can be assumed that they won't expose interesting
++   information beyond exposing hints about the host OS memory layout, there
++   is no way to make general assumptions.
++
++   Interrupt affinity can be controlled by the administrator via the
++   /proc/irq/$NR/smp_affinity[_list] files. Limited documentation is
++   available at:
++
++   https://www.kernel.org/doc/Documentation/IRQ-affinity.txt
++
++.. _smt_control:
++
++4. SMT control
++^^^^^^^^^^^^^^
++
++   To prevent the SMT issues of L1TF it might be necessary to disable SMT
++   completely. Disabling SMT can have a significant performance impact, but
++   the impact depends on the hosting scenario and the type of workloads.
++   The impact of disabling SMT also needs to be weighed against the impact
++   of other mitigation solutions like confining guests to dedicated cores.
++
++   The kernel provides a sysfs interface to retrieve the status of SMT and
++   to control it. It also provides a kernel command line interface to
++   control SMT.
++
++   The kernel command line interface consists of the following options:
++
++     =========== ==========================================================
++     nosmt	 Affects the bring up of the secondary CPUs during boot. The
++		 kernel tries to bring all present CPUs online during the
++		 boot process. "nosmt" makes sure that from each physical
++		 core only one - the so called primary (hyper) thread is
++		 activated. Due to a design flaw of Intel processors related
++		 to Machine Check Exceptions the non primary siblings have
++		 to be brought up at least partially and are then shut down
++		 again.  "nosmt" can be undone via the sysfs interface.
++
++     nosmt=force Has the same effect as "nosmt" but it does not allow to
++		 undo the SMT disable via the sysfs interface.
++     =========== ==========================================================
++
++   The sysfs interface provides two files:
++
++   - /sys/devices/system/cpu/smt/control
++   - /sys/devices/system/cpu/smt/active
++
++   /sys/devices/system/cpu/smt/control:
++
++     This file allows reading out the SMT control state and provides the
++     ability to disable or (re)enable SMT. The possible states are:
++
++	==============  ===================================================
++	on		SMT is supported by the CPU and enabled. All
++			logical CPUs can be onlined and offlined without
++			restrictions.
++
++	off		SMT is supported by the CPU and disabled. Only
++			the so called primary SMT threads can be onlined
++			and offlined without restrictions. An attempt to
++			online a non-primary sibling is rejected
++
++	forceoff	Same as 'off' but the state cannot be controlled.
++			Attempts to write to the control file are rejected.
++
++	notsupported	The processor does not support SMT. It's therefore
++			not affected by the SMT implications of L1TF.
++			Attempts to write to the control file are rejected.
++	==============  ===================================================
++
++     The possible states which can be written into this file to control SMT
++     state are:
++
++     - on
++     - off
++     - forceoff
++
++   /sys/devices/system/cpu/smt/active:
++
++     This file reports whether SMT is enabled and active, i.e. if on any
++     physical core two or more sibling threads are online.
++
++   SMT control is also possible at boot time via the l1tf kernel command
++   line parameter in combination with L1D flush control. See
++   :ref:`mitigation_control_command_line`.
++
++5. Disabling EPT
++^^^^^^^^^^^^^^^^
++
++  Disabling EPT for virtual machines provides full mitigation for L1TF even
++  with SMT enabled, because the effective page tables for guests are
++  managed and sanitized by the hypervisor. Though disabling EPT has a
++  significant performance impact especially when the Meltdown mitigation
++  KPTI is enabled.
++
++  EPT can be disabled in the hypervisor via the 'kvm-intel.ept' parameter.
++
++There is ongoing research and development for new mitigation mechanisms to
++address the performance impact of disabling SMT or EPT.
++
++.. _mitigation_control_command_line:
++
++Mitigation control on the kernel command line
++---------------------------------------------
++
++The kernel command line allows controlling the L1TF mitigations at boot
++time with the option "l1tf=". The valid arguments for this option are:
++
++  ============  =============================================================
++  full		Provides all available mitigations for the L1TF
++		vulnerability. Disables SMT and enables all mitigations in
++		the hypervisors, i.e. unconditional L1D flushing
++
++		SMT control and L1D flush control via the sysfs interface
++		is still possible after boot.  Hypervisors will issue a
++		warning when the first VM is started in a potentially
++		insecure configuration, i.e. SMT enabled or L1D flush
++		disabled.
++
++  full,force	Same as 'full', but disables SMT and L1D flush runtime
++		control. Implies the 'nosmt=force' command line option.
++		(i.e. sysfs control of SMT is disabled.)
++
++  flush		Leaves SMT enabled and enables the default hypervisor
++		mitigation, i.e. conditional L1D flushing
++
++		SMT control and L1D flush control via the sysfs interface
++		is still possible after boot.  Hypervisors will issue a
++		warning when the first VM is started in a potentially
++		insecure configuration, i.e. SMT enabled or L1D flush
++		disabled.
++
++  flush,nosmt	Disables SMT and enables the default hypervisor mitigation,
++		i.e. conditional L1D flushing.
++
++		SMT control and L1D flush control via the sysfs interface
++		is still possible after boot.  Hypervisors will issue a
++		warning when the first VM is started in a potentially
++		insecure configuration, i.e. SMT enabled or L1D flush
++		disabled.
++
++  flush,nowarn	Same as 'flush', but hypervisors will not warn when a VM is
++		started in a potentially insecure configuration.
++
++  off		Disables hypervisor mitigations and doesn't emit any
++		warnings.
++		It also drops the swap size and available RAM limit restrictions
++		on both hypervisor and bare metal.
++
++  ============  =============================================================
++
++The default is 'flush'. For details about L1D flushing see :ref:`l1d_flush`.
++
++
++.. _mitigation_control_kvm:
++
++Mitigation control for KVM - module parameter
++-------------------------------------------------------------
++
++The KVM hypervisor mitigation mechanism, flushing the L1D cache when
++entering a guest, can be controlled with a module parameter.
++
++The option/parameter is "kvm-intel.vmentry_l1d_flush=". It takes the
++following arguments:
++
++  ============  ==============================================================
++  always	L1D cache flush on every VMENTER.
++
++  cond		Flush L1D on VMENTER only when the code between VMEXIT and
++		VMENTER can leak host memory which is considered
++		interesting for an attacker. This still can leak host memory
++		which allows e.g. to determine the hosts address space layout.
++
++  never		Disables the mitigation
++  ============  ==============================================================
++
++The parameter can be provided on the kernel command line, as a module
++parameter when loading the modules and at runtime modified via the sysfs
++file:
++
++/sys/module/kvm_intel/parameters/vmentry_l1d_flush
++
++The default is 'cond'. If 'l1tf=full,force' is given on the kernel command
++line, then 'always' is enforced and the kvm-intel.vmentry_l1d_flush
++module parameter is ignored and writes to the sysfs file are rejected.
++
++.. _mitigation_selection:
++
++Mitigation selection guide
++--------------------------
++
++1. No virtualization in use
++^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++   The system is protected by the kernel unconditionally and no further
++   action is required.
++
++2. Virtualization with trusted guests
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++   If the guest comes from a trusted source and the guest OS kernel is
++   guaranteed to have the L1TF mitigations in place the system is fully
++   protected against L1TF and no further action is required.
++
++   To avoid the overhead of the default L1D flushing on VMENTER the
++   administrator can disable the flushing via the kernel command line and
++   sysfs control files. See :ref:`mitigation_control_command_line` and
++   :ref:`mitigation_control_kvm`.
++
++
++3. Virtualization with untrusted guests
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++3.1. SMT not supported or disabled
++""""""""""""""""""""""""""""""""""
++
++  If SMT is not supported by the processor or disabled in the BIOS or by
++  the kernel, it's only required to enforce L1D flushing on VMENTER.
++
++  Conditional L1D flushing is the default behaviour and can be tuned. See
++  :ref:`mitigation_control_command_line` and :ref:`mitigation_control_kvm`.
++
++3.2. EPT not supported or disabled
++""""""""""""""""""""""""""""""""""
++
++  If EPT is not supported by the processor or disabled in the hypervisor,
++  the system is fully protected. SMT can stay enabled and L1D flushing on
++  VMENTER is not required.
++
++  EPT can be disabled in the hypervisor via the 'kvm-intel.ept' parameter.
++
++3.3. SMT and EPT supported and active
++"""""""""""""""""""""""""""""""""""""
++
++  If SMT and EPT are supported and active then various degrees of
++  mitigations can be employed:
++
++  - L1D flushing on VMENTER:
++
++    L1D flushing on VMENTER is the minimal protection requirement, but it
++    is only potent in combination with other mitigation methods.
++
++    Conditional L1D flushing is the default behaviour and can be tuned. See
++    :ref:`mitigation_control_command_line` and :ref:`mitigation_control_kvm`.
++
++  - Guest confinement:
++
++    Confinement of guests to a single or a group of physical cores which
++    are not running any other processes, can reduce the attack surface
++    significantly, but interrupts, soft interrupts and kernel threads can
++    still expose valuable data to a potential attacker. See
++    :ref:`guest_confinement`.
++
++  - Interrupt isolation:
++
++    Isolating the guest CPUs from interrupts can reduce the attack surface
++    further, but still allows a malicious guest to explore a limited amount
++    of host physical memory. This can at least be used to gain knowledge
++    about the host address space layout. The interrupts which have a fixed
++    affinity to the CPUs which run the untrusted guests can depending on
++    the scenario still trigger soft interrupts and schedule kernel threads
++    which might expose valuable information. See
++    :ref:`interrupt_isolation`.
++
++The above three mitigation methods combined can provide protection to a
++certain degree, but the risk of the remaining attack surface has to be
++carefully analyzed. For full protection the following methods are
++available:
++
++  - Disabling SMT:
++
++    Disabling SMT and enforcing the L1D flushing provides the maximum
++    amount of protection. This mitigation is not depending on any of the
++    above mitigation methods.
++
++    SMT control and L1D flushing can be tuned by the command line
++    parameters 'nosmt', 'l1tf', 'kvm-intel.vmentry_l1d_flush' and at run
++    time with the matching sysfs control files. See :ref:`smt_control`,
++    :ref:`mitigation_control_command_line` and
++    :ref:`mitigation_control_kvm`.
++
++  - Disabling EPT:
++
++    Disabling EPT provides the maximum amount of protection as well. It is
++    not depending on any of the above mitigation methods. SMT can stay
++    enabled and L1D flushing is not required, but the performance impact is
++    significant.
++
++    EPT can be disabled in the hypervisor via the 'kvm-intel.ept'
++    parameter.
++
++3.4. Nested virtual machines
++""""""""""""""""""""""""""""
++
++When nested virtualization is in use, three operating systems are involved:
++the bare metal hypervisor, the nested hypervisor and the nested virtual
++machine.  VMENTER operations from the nested hypervisor into the nested
++guest will always be processed by the bare metal hypervisor. If KVM is the
++bare metal hypervisor it will:
++
++ - Flush the L1D cache on every switch from the nested hypervisor to the
++   nested virtual machine, so that the nested hypervisor's secrets are not
++   exposed to the nested virtual machine;
++
++ - Flush the L1D cache on every switch from the nested virtual machine to
++   the nested hypervisor; this is a complex operation, and flushing the L1D
++   cache avoids that the bare metal hypervisor's secrets are exposed to the
++   nested virtual machine;
++
++ - Instruct the nested hypervisor to not perform any L1D cache flush. This
++   is an optimization to avoid double L1D flushing.
++
++
++.. _default_mitigations:
++
++Default mitigations
++-------------------
++
++  The kernel default mitigations for vulnerable processors are:
++
++  - PTE inversion to protect against malicious user space. This is done
++    unconditionally and cannot be controlled. The swap storage is limited
++    to ~16TB.
++
++  - L1D conditional flushing on VMENTER when EPT is enabled for
++    a guest.
++
++  The kernel does not by default enforce the disabling of SMT, which leaves
++  SMT systems vulnerable when running untrusted guests with EPT enabled.
++
++  The rationale for this choice is:
++
++  - Force disabling SMT can break existing setups, especially with
++    unattended updates.
++
++  - If regular users run untrusted guests on their machine, then L1TF is
++    just an add on to other malware which might be embedded in an untrusted
++    guest, e.g. spam-bots or attacks on the local network.
++
++    There is no technical way to prevent a user from running untrusted code
++    on their machines blindly.
++
++  - It's technically extremely unlikely and from today's knowledge even
++    impossible that L1TF can be exploited via the most popular attack
++    mechanisms like JavaScript because these mechanisms have no way to
++    control PTEs. If this were possible and no other mitigation were
++    available, then the default might be different.
++
++  - The administrators of cloud and hosting setups have to carefully
++    analyze the risk for their scenarios and make the appropriate
++    mitigation choices, which might even vary across their deployed
++    machines and also result in other changes of their overall setup.
++    There is no way for the kernel to provide a sensible default for this
++    kind of scenario.
+diff --git a/Documentation/admin-guide/hw-vuln/mds.rst b/Documentation/admin-guide/hw-vuln/mds.rst
+new file mode 100644
+index 000000000000..e3a796c0d3a2
+--- /dev/null
++++ b/Documentation/admin-guide/hw-vuln/mds.rst
+@@ -0,0 +1,308 @@
++MDS - Microarchitectural Data Sampling
++======================================
++
++Microarchitectural Data Sampling is a hardware vulnerability which allows
++unprivileged speculative access to data which is available in various CPU
++internal buffers.
++
++Affected processors
++-------------------
++
++This vulnerability affects a wide range of Intel processors. The
++vulnerability is not present on:
++
++   - Processors from AMD, Centaur and other non-Intel vendors
++
++   - Older processor models, where the CPU family is < 6
++
++   - Some Atoms (Bonnell, Saltwell, Goldmont, GoldmontPlus)
++
++   - Intel processors which have the ARCH_CAP_MDS_NO bit set in the
++     IA32_ARCH_CAPABILITIES MSR.
++
++Whether a processor is affected or not can be read out from the MDS
++vulnerability file in sysfs. See :ref:`mds_sys_info`.
++
++Not all processors are affected by all variants of MDS, but the mitigation
++is identical for all of them so the kernel treats them as a single
++vulnerability.
++
++Related CVEs
++------------
++
++The following CVE entries are related to the MDS vulnerability:
++
++   ==============  =====  ===================================================
++   CVE-2018-12126  MSBDS  Microarchitectural Store Buffer Data Sampling
++   CVE-2018-12130  MFBDS  Microarchitectural Fill Buffer Data Sampling
++   CVE-2018-12127  MLPDS  Microarchitectural Load Port Data Sampling
++   CVE-2019-11091  MDSUM  Microarchitectural Data Sampling Uncacheable Memory
++   ==============  =====  ===================================================
++
++Problem
++-------
++
++When performing store, load, or L1 refill operations, processors write data
++into temporary microarchitectural structures (buffers). The data in the
++buffer can be forwarded to load operations as an optimization.
++
++Under certain conditions, usually a fault/assist caused by a load
++operation, data unrelated to the load memory address can be speculatively
++forwarded from the buffers. Because the load operation causes a fault or
++assist and its result will be discarded, the forwarded data will not cause
++incorrect program execution or state changes. But a malicious operation
++may be able to forward this speculative data to a disclosure gadget which
++in turn allows inferring the value via a cache side channel attack.
++
++Because the buffers are potentially shared between Hyper-Threads,
++cross-Hyper-Thread attacks are possible.
++
++Deeper technical information is available in the MDS specific x86
++architecture section: :ref:`Documentation/x86/mds.rst <mds>`.
++
++
++Attack scenarios
++----------------
++
++Attacks against the MDS vulnerabilities can be mounted from malicious
++non-privileged user space applications running on hosts or guests. Malicious
++guest OSes can obviously mount attacks as well.
++
++Contrary to other speculation based vulnerabilities the MDS vulnerability
++does not allow the attacker to control the memory target address. As a
++consequence the attacks are purely sampling based, but as demonstrated
++with the TLBleed attack, samples can be postprocessed successfully.
++
++Web-Browsers
++^^^^^^^^^^^^
++
++  It's unclear whether attacks through Web-Browsers are possible at
++  all. Exploitation through JavaScript is considered very unlikely,
++  but other widely used web technologies like WebAssembly could possibly
++  be abused.
++
++
++.. _mds_sys_info:
++
++MDS system information
++-----------------------
++
++The Linux kernel provides a sysfs interface to enumerate the current MDS
++status of the system: whether the system is vulnerable, and which
++mitigations are active. The relevant sysfs file is:
++
++/sys/devices/system/cpu/vulnerabilities/mds
++
++The possible values in this file are:
++
++  .. list-table::
++
++     * - 'Not affected'
++       - The processor is not vulnerable
++     * - 'Vulnerable'
++       - The processor is vulnerable, but no mitigation enabled
++     * - 'Vulnerable: Clear CPU buffers attempted, no microcode'
++       - The processor is vulnerable but microcode is not updated.
++
++         The mitigation is enabled on a best effort basis. See :ref:`vmwerv`
++     * - 'Mitigation: Clear CPU buffers'
++       - The processor is vulnerable and the CPU buffer clearing mitigation is
++         enabled.
++
++If the processor is vulnerable then the following information is appended
++to the above information:
++
++    ========================  ============================================
++    'SMT vulnerable'          SMT is enabled
++    'SMT mitigated'           SMT is enabled and mitigated
++    'SMT disabled'            SMT is disabled
++    'SMT Host state unknown'  Kernel runs in a VM, Host SMT state unknown
++    ========================  ============================================
++
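For illustration only, the sysfs string can be split into the mitigation part and the optional SMT part. The helper below is a sketch, not kernel code; the `parse_mds_status` name and the "; "-separated layout are assumptions based on the tables above.

```python
# Illustrative sketch only: split the contents of
# /sys/devices/system/cpu/vulnerabilities/mds into the mitigation
# part and the optional SMT part. The value strings come from the
# tables above; the "; " separator is an assumption.

SMT_STATES = ("SMT vulnerable", "SMT mitigated",
              "SMT disabled", "SMT Host state unknown")

def parse_mds_status(text):
    """Return (mitigation, smt_state or None) for an mds sysfs string."""
    text = text.strip()
    for smt in SMT_STATES:
        suffix = "; " + smt
        if text.endswith(suffix):
            return text[:-len(suffix)], smt
    return text, None
```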
++.. _vmwerv:
++
++Best effort mitigation mode
++^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++  If the processor is vulnerable, but the availability of the microcode based
++  mitigation mechanism is not advertised via CPUID, the kernel selects a best
++  effort mitigation mode.  This mode invokes the mitigation instructions
++  without a guarantee that they clear the CPU buffers.
++
++  This is done to address virtualization scenarios where the host has the
++  microcode update applied, but the hypervisor is not yet updated to expose
++  the CPUID to the guest. If the host has updated microcode the protection
++  takes effect; otherwise a few CPU cycles are wasted pointlessly.
++
++  The state in the mds sysfs file reflects this situation accordingly.
++
++
++Mitigation mechanism
++-------------------------
++
++The kernel detects the affected CPUs and the presence of the microcode
++which is required.
++
++If a CPU is affected and the microcode is available, then the kernel
++enables the mitigation by default. The mitigation can be controlled at boot
++time via a kernel command line option. See
++:ref:`mds_mitigation_control_command_line`.
++
++.. _cpu_buffer_clear:
++
++CPU buffer clearing
++^^^^^^^^^^^^^^^^^^^
++
++  The mitigation for MDS clears the affected CPU buffers on return to user
++  space and when entering a guest.
++
++  If SMT is enabled it also clears the buffers on idle entry when the CPU
++  is only affected by MSBDS and not any other MDS variant, because the
++  other variants cannot be protected against cross Hyper-Thread attacks.
++
++  For CPUs which are only affected by MSBDS the user space, guest and idle
++  transition mitigations are sufficient and SMT is not affected.
++
++.. _virt_mechanism:
++
++Virtualization mitigation
++^^^^^^^^^^^^^^^^^^^^^^^^^
++
++  The protection for host to guest transition depends on the L1TF
++  vulnerability of the CPU:
++
++  - CPU is affected by L1TF:
++
++    If the L1D flush mitigation is enabled and up to date microcode is
++    available, the L1D flush mitigation is automatically protecting the
++    guest transition.
++
++    If the L1D flush mitigation is disabled then the MDS mitigation is
++    invoked explicitly when the host MDS mitigation is enabled.
++
++    For details on L1TF and virtualization see:
++    :ref:`Documentation/admin-guide/hw-vuln/l1tf.rst <mitigation_control_kvm>`.
++
++  - CPU is not affected by L1TF:
++
++    CPU buffers are flushed before entering the guest when the host MDS
++    mitigation is enabled.
++
++  The resulting MDS protection matrix for the host to guest transition:
++
++  ============ ===== ============= ============ =================
++   L1TF         MDS   VMX-L1FLUSH   Host MDS     MDS-State
++
++   Don't care   No    Don't care    N/A          Not affected
++
++   Yes          Yes   Disabled      Off          Vulnerable
++
++   Yes          Yes   Disabled      Full         Mitigated
++
++   Yes          Yes   Enabled       Don't care   Mitigated
++
++   No           Yes   N/A           Off          Vulnerable
++
++   No           Yes   N/A           Full         Mitigated
++  ============ ===== ============= ============ =================
++
++  This only covers the host to guest transition, i.e. prevents leakage from
++  host to guest, but does not protect the guest internally. Guests need to
++  have their own protections.
++
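The matrix above can be sketched as a small lookup function. This is illustrative only; the function name and the boolean encoding of the table columns are assumptions, not kernel code.

```python
# Illustrative sketch of the host-to-guest MDS protection matrix
# above. Boolean arguments mirror the table columns; "Don't care"
# and "N/A" cells correspond to arguments that are ignored.

def mds_guest_state(l1tf_affected, mds_affected,
                    vmx_l1d_flush_enabled, host_mds_full):
    if not mds_affected:
        return "Not affected"
    if l1tf_affected and vmx_l1d_flush_enabled:
        return "Mitigated"  # L1D flush also covers the guest transition
    # L1D flush disabled, or CPU not affected by L1TF:
    return "Mitigated" if host_mds_full else "Vulnerable"
```

Each row of the table corresponds to one combination of arguments, e.g. an L1TF-affected, MDS-affected CPU with VMX-L1FLUSH disabled and host MDS off yields "Vulnerable".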
++.. _xeon_phi:
++
++XEON PHI specific considerations
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++  The XEON PHI processor family is affected by MSBDS which can be exploited
++  cross Hyper-Threads when entering idle states. Some XEON PHI variants allow
++  the use of MWAIT in user space (Ring 3), which opens a potential attack
++  vector for malicious user space. The exposure can be disabled on the
++  kernel command line with the 'ring3mwait=disable' command line option.
++
++  XEON PHI is not affected by the other MDS variants and MSBDS is mitigated
++  before the CPU enters an idle state. As XEON PHI is not affected by L1TF
++  either, disabling SMT is not required for full protection.
++
++.. _mds_smt_control:
++
++SMT control
++^^^^^^^^^^^
++
++  All MDS variants except MSBDS can be attacked cross Hyper-Threads. That
++  means on CPUs which are affected by MFBDS or MLPDS it is necessary to
++  disable SMT for full protection. These are most of the affected CPUs; the
++  exception is XEON PHI, see :ref:`xeon_phi`.
++
++  Disabling SMT can have a significant performance impact, but the impact
++  depends on the type of workloads.
++
++  See the relevant chapter in the L1TF mitigation documentation for details:
++  :ref:`Documentation/admin-guide/hw-vuln/l1tf.rst <smt_control>`.
++
++
++.. _mds_mitigation_control_command_line:
++
++Mitigation control on the kernel command line
++---------------------------------------------
++
++The kernel command line allows controlling the MDS mitigations at boot
++time with the option "mds=". The valid arguments for this option are:
++
++  ============  =============================================================
++  full		If the CPU is vulnerable, enable all available mitigations
++		for the MDS vulnerability, CPU buffer clearing on exit to
++		userspace and when entering a VM. Idle transitions are
++		protected as well if SMT is enabled.
++
++		It does not automatically disable SMT.
++
++  full,nosmt	The same as mds=full, with SMT disabled on vulnerable
++		CPUs.  This is the complete mitigation.
++
++  off		Disables MDS mitigations completely.
++
++  ============  =============================================================
++
++Not specifying this option is equivalent to "mds=full".
++
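As a sketch of the semantics described above (the helper name is hypothetical; only the documented values are recognized, and the default applies when the option is absent):

```python
# Illustrative sketch: determine the effective "mds=" setting from a
# kernel command line string, per the table above. Unknown values are
# ignored here for simplicity; absence of the option means "full".

VALID_MDS_OPTIONS = {"full", "full,nosmt", "off"}

def effective_mds_option(cmdline):
    setting = "full"  # not specifying the option is equivalent to mds=full
    for token in cmdline.split():
        if token.startswith("mds="):
            value = token[len("mds="):]
            if value in VALID_MDS_OPTIONS:
                setting = value
    return setting
```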
++
++Mitigation selection guide
++--------------------------
++
++1. Trusted userspace
++^^^^^^^^^^^^^^^^^^^^
++
++   If all userspace applications are from a trusted source and do not
++   execute untrusted code which is supplied externally, then the mitigation
++   can be disabled.
++
++
++2. Virtualization with trusted guests
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++   The same considerations as for trusted user space above apply.
++
++3. Virtualization with untrusted guests
++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
++
++   The protection depends on the state of the L1TF mitigations.
++   See :ref:`virt_mechanism`.
++
++   If the MDS mitigation is enabled and SMT is disabled, guest to host and
++   guest to guest attacks are prevented.
++
++.. _mds_default_mitigations:
++
++Default mitigations
++-------------------
++
++  The kernel default mitigations for vulnerable processors are:
++
++  - Enable CPU buffer clearing
++
++  The kernel does not by default enforce the disabling of SMT, which leaves
++  SMT systems vulnerable when running untrusted code. The same rationale as
++  for L1TF applies.
++  See :ref:`Documentation/admin-guide/hw-vuln/l1tf.rst <default_mitigations>`.
+diff --git a/Documentation/admin-guide/index.rst b/Documentation/admin-guide/index.rst
+index 0a491676685e..42247516962a 100644
+--- a/Documentation/admin-guide/index.rst
++++ b/Documentation/admin-guide/index.rst
+@@ -17,14 +17,12 @@ etc.
+    kernel-parameters
+    devices
+ 
+-This section describes CPU vulnerabilities and provides an overview of the
+-possible mitigations along with guidance for selecting mitigations if they
+-are configurable at compile, boot or run time.
++This section describes CPU vulnerabilities and their mitigations.
+ 
+ .. toctree::
+    :maxdepth: 1
+ 
+-   l1tf
++   hw-vuln/index
+ 
+ Here is a set of documents aimed at users who are trying to track down
+ problems and bugs in particular.
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 858b6c0b9a15..18cad2b0392a 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -2114,7 +2114,7 @@
+ 
+ 			Default is 'flush'.
+ 
+-			For details see: Documentation/admin-guide/l1tf.rst
++			For details see: Documentation/admin-guide/hw-vuln/l1tf.rst
+ 
+ 	l2cr=		[PPC]
+ 
+@@ -2356,6 +2356,32 @@
+ 			Format: <first>,<last>
+ 			Specifies range of consoles to be captured by the MDA.
+ 
++	mds=		[X86,INTEL]
++			Control mitigation for the Micro-architectural Data
++			Sampling (MDS) vulnerability.
++
++			Certain CPUs are vulnerable to an exploit against CPU
++			internal buffers which can forward information to a
++			disclosure gadget under certain conditions.
++
++			In vulnerable processors, the speculatively
++			forwarded data can be used in a cache side channel
++			attack, to access data to which the attacker does
++			not have direct access.
++
++			This parameter controls the MDS mitigation. The
++			options are:
++
++			full       - Enable MDS mitigation on vulnerable CPUs
++			full,nosmt - Enable MDS mitigation and disable
++				     SMT on vulnerable CPUs
++			off        - Unconditionally disable MDS mitigation
++
++			Not specifying this option is equivalent to
++			mds=full.
++
++			For details see: Documentation/admin-guide/hw-vuln/mds.rst
++
+ 	mem=nn[KMG]	[KNL,BOOT] Force usage of a specific amount of memory
+ 			Amount of memory to be used when the kernel is not able
+ 			to see the whole system memory or for test.
+@@ -2513,6 +2539,40 @@
+ 			in the "bleeding edge" mini2440 support kernel at
+ 			http://repo.or.cz/w/linux-2.6/mini2440.git
+ 
++	mitigations=
++			[X86,PPC,S390] Control optional mitigations for CPU
++			vulnerabilities.  This is a set of curated,
++			arch-independent options, each of which is an
++			aggregation of existing arch-specific options.
++
++			off
++				Disable all optional CPU mitigations.  This
++				improves system performance, but it may also
++				expose users to several CPU vulnerabilities.
++				Equivalent to: nopti [X86,PPC]
++					       nospectre_v1 [PPC]
++					       nobp=0 [S390]
++					       nospectre_v2 [X86,PPC,S390]
++					       spectre_v2_user=off [X86]
++					       spec_store_bypass_disable=off [X86,PPC]
++					       l1tf=off [X86]
++					       mds=off [X86]
++
++			auto (default)
++				Mitigate all CPU vulnerabilities, but leave SMT
++				enabled, even if it's vulnerable.  This is for
++				users who don't want to be surprised by SMT
++				getting disabled across kernel upgrades, or who
++				have other ways of avoiding SMT-based attacks.
++				Equivalent to: (default behavior)
++
++			auto,nosmt
++				Mitigate all CPU vulnerabilities, disabling SMT
++				if needed.  This is for users who always want to
++				be fully mitigated, even if it means losing SMT.
++				Equivalent to: l1tf=flush,nosmt [X86]
++					       mds=full,nosmt [X86]
++
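The MDS-relevant part of the per-option equivalences listed above can be sketched as a simple mapping (illustrative only; the function name is an assumption):

```python
# Illustrative sketch: the MDS-relevant expansion of the aggregate
# "mitigations=" option, per the equivalences listed above.
# "auto" matches the default behavior, i.e. mds=full.

def mds_equivalent(mitigations):
    table = {"off": "mds=off",
             "auto": "mds=full",
             "auto,nosmt": "mds=full,nosmt"}
    return table.get(mitigations, "mds=full")
```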
+ 	mminit_loglevel=
+ 			[KNL] When CONFIG_DEBUG_MEMORY_INIT is set, this
+ 			parameter allows control of the logging verbosity for
+diff --git a/Documentation/admin-guide/l1tf.rst b/Documentation/admin-guide/l1tf.rst
+deleted file mode 100644
+index 9af977384168..000000000000
+--- a/Documentation/admin-guide/l1tf.rst
++++ /dev/null
+@@ -1,614 +0,0 @@
+-L1TF - L1 Terminal Fault
+-========================
+-
+-L1 Terminal Fault is a hardware vulnerability which allows unprivileged
+-speculative access to data which is available in the Level 1 Data Cache
+-when the page table entry controlling the virtual address, which is used
+-for the access, has the Present bit cleared or other reserved bits set.
+-
+-Affected processors
+--------------------
+-
+-This vulnerability affects a wide range of Intel processors. The
+-vulnerability is not present on:
+-
+-   - Processors from AMD, Centaur and other non Intel vendors
+-
+-   - Older processor models, where the CPU family is < 6
+-
+-   - A range of Intel ATOM processors (Cedarview, Cloverview, Lincroft,
+-     Penwell, Pineview, Silvermont, Airmont, Merrifield)
+-
+-   - The Intel XEON PHI family
+-
+-   - Intel processors which have the ARCH_CAP_RDCL_NO bit set in the
+-     IA32_ARCH_CAPABILITIES MSR. If the bit is set the CPU is not affected
+-     by the Meltdown vulnerability either. These CPUs should become
+-     available by end of 2018.
+-
+-Whether a processor is affected or not can be read out from the L1TF
+-vulnerability file in sysfs. See :ref:`l1tf_sys_info`.
+-
+-Related CVEs
+-------------
+-
+-The following CVE entries are related to the L1TF vulnerability:
+-
+-   =============  =================  ==============================
+-   CVE-2018-3615  L1 Terminal Fault  SGX related aspects
+-   CVE-2018-3620  L1 Terminal Fault  OS, SMM related aspects
+-   CVE-2018-3646  L1 Terminal Fault  Virtualization related aspects
+-   =============  =================  ==============================
+-
+-Problem
+--------
+-
+-If an instruction accesses a virtual address for which the relevant page
+-table entry (PTE) has the Present bit cleared or other reserved bits set,
+-then speculative execution ignores the invalid PTE and loads the referenced
+-data if it is present in the Level 1 Data Cache, as if the page referenced
+-by the address bits in the PTE was still present and accessible.
+-
+-While this is a purely speculative mechanism and the instruction will raise
+-a page fault when it is retired eventually, the pure act of loading the
+-data and making it available to other speculative instructions opens up the
+-opportunity for side channel attacks to unprivileged malicious code,
+-similar to the Meltdown attack.
+-
+-While Meltdown breaks the user space to kernel space protection, L1TF
+-allows to attack any physical memory address in the system and the attack
+-works across all protection domains. It allows an attack of SGX and also
+-works from inside virtual machines because the speculation bypasses the
+-extended page table (EPT) protection mechanism.
+-
+-
+-Attack scenarios
+-----------------
+-
+-1. Malicious user space
+-^^^^^^^^^^^^^^^^^^^^^^^
+-
+-   Operating Systems store arbitrary information in the address bits of a
+-   PTE which is marked non present. This allows a malicious user space
+-   application to attack the physical memory to which these PTEs resolve.
+-   In some cases user-space can maliciously influence the information
+-   encoded in the address bits of the PTE, thus making attacks more
+-   deterministic and more practical.
+-
+-   The Linux kernel contains a mitigation for this attack vector, PTE
+-   inversion, which is permanently enabled and has no performance
+-   impact. The kernel ensures that the address bits of PTEs, which are not
+-   marked present, never point to cacheable physical memory space.
+-
+-   A system with an up to date kernel is protected against attacks from
+-   malicious user space applications.
+-
+-2. Malicious guest in a virtual machine
+-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+-
+-   The fact that L1TF breaks all domain protections allows malicious guest
+-   OSes, which can control the PTEs directly, and malicious guest user
+-   space applications, which run on an unprotected guest kernel lacking the
+-   PTE inversion mitigation for L1TF, to attack physical host memory.
+-
+-   A special aspect of L1TF in the context of virtualization is symmetric
+-   multi threading (SMT). The Intel implementation of SMT is called
+-   HyperThreading. The fact that Hyperthreads on the affected processors
+-   share the L1 Data Cache (L1D) is important for this. As the flaw allows
+-   only to attack data which is present in L1D, a malicious guest running
+-   on one Hyperthread can attack the data which is brought into the L1D by
+-   the context which runs on the sibling Hyperthread of the same physical
+-   core. This context can be host OS, host user space or a different guest.
+-
+-   If the processor does not support Extended Page Tables, the attack is
+-   only possible, when the hypervisor does not sanitize the content of the
+-   effective (shadow) page tables.
+-
+-   While solutions exist to mitigate these attack vectors fully, these
+-   mitigations are not enabled by default in the Linux kernel because they
+-   can affect performance significantly. The kernel provides several
+-   mechanisms which can be utilized to address the problem depending on the
+-   deployment scenario. The mitigations, their protection scope and impact
+-   are described in the next sections.
+-
+-   The default mitigations and the rationale for choosing them are explained
+-   at the end of this document. See :ref:`default_mitigations`.
+-
+-.. _l1tf_sys_info:
+-
+-L1TF system information
+------------------------
+-
+-The Linux kernel provides a sysfs interface to enumerate the current L1TF
+-status of the system: whether the system is vulnerable, and which
+-mitigations are active. The relevant sysfs file is:
+-
+-/sys/devices/system/cpu/vulnerabilities/l1tf
+-
+-The possible values in this file are:
+-
+-  ===========================   ===============================
+-  'Not affected'		The processor is not vulnerable
+-  'Mitigation: PTE Inversion'	The host protection is active
+-  ===========================   ===============================
+-
+-If KVM/VMX is enabled and the processor is vulnerable then the following
+-information is appended to the 'Mitigation: PTE Inversion' part:
+-
+-  - SMT status:
+-
+-    =====================  ================
+-    'VMX: SMT vulnerable'  SMT is enabled
+-    'VMX: SMT disabled'    SMT is disabled
+-    =====================  ================
+-
+-  - L1D Flush mode:
+-
+-    ================================  ====================================
+-    'L1D vulnerable'		      L1D flushing is disabled
+-
+-    'L1D conditional cache flushes'   L1D flush is conditionally enabled
+-
+-    'L1D cache flushes'		      L1D flush is unconditionally enabled
+-    ================================  ====================================
+-
+-The resulting grade of protection is discussed in the following sections.
+-
+-
+-Host mitigation mechanism
+--------------------------
+-
+-The kernel is unconditionally protected against L1TF attacks from malicious
+-user space running on the host.
+-
+-
+-Guest mitigation mechanisms
+----------------------------
+-
+-.. _l1d_flush:
+-
+-1. L1D flush on VMENTER
+-^^^^^^^^^^^^^^^^^^^^^^^
+-
+-   To make sure that a guest cannot attack data which is present in the L1D
+-   the hypervisor flushes the L1D before entering the guest.
+-
+-   Flushing the L1D evicts not only the data which should not be accessed
+-   by a potentially malicious guest, it also flushes the guest
+-   data. Flushing the L1D has a performance impact as the processor has to
+-   bring the flushed guest data back into the L1D. Depending on the
+-   frequency of VMEXIT/VMENTER and the type of computations in the guest
+-   performance degradation in the range of 1% to 50% has been observed. For
+-   scenarios where guest VMEXIT/VMENTER are rare the performance impact is
+-   minimal. Virtio and mechanisms like posted interrupts are designed to
+-   confine the VMEXITs to a bare minimum, but specific configurations and
+-   application scenarios might still suffer from a high VMEXIT rate.
+-
+-   The kernel provides two L1D flush modes:
+-    - conditional ('cond')
+-    - unconditional ('always')
+-
+-   The conditional mode avoids L1D flushing after VMEXITs which execute
+-   only audited code paths before the corresponding VMENTER. These code
+-   paths have been verified that they cannot expose secrets or other
+-   interesting data to an attacker, but they can leak information about the
+-   address space layout of the hypervisor.
+-
+-   Unconditional mode flushes L1D on all VMENTER invocations and provides
+-   maximum protection. It has a higher overhead than the conditional
+-   mode. The overhead cannot be quantified correctly as it depends on the
+-   workload scenario and the resulting number of VMEXITs.
+-
+-   The general recommendation is to enable L1D flush on VMENTER. The kernel
+-   defaults to conditional mode on affected processors.
+-
+-   **Note**, that L1D flush does not prevent the SMT problem because the
+-   sibling thread will also bring back its data into the L1D which makes it
+-   attackable again.
+-
+-   L1D flush can be controlled by the administrator via the kernel command
+-   line and sysfs control files. See :ref:`mitigation_control_command_line`
+-   and :ref:`mitigation_control_kvm`.
+-
+-.. _guest_confinement:
+-
+-2. Guest VCPU confinement to dedicated physical cores
+-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+-
+-   To address the SMT problem, it is possible to make a guest or a group of
+-   guests affine to one or more physical cores. The proper mechanism for
+-   that is to utilize exclusive cpusets to ensure that no other guest or
+-   host tasks can run on these cores.
+-
+-   If only a single guest or related guests run on sibling SMT threads on
+-   the same physical core then they can only attack their own memory and
+-   restricted parts of the host memory.
+-
+-   Host memory is attackable, when one of the sibling SMT threads runs in
+-   host OS (hypervisor) context and the other in guest context. The amount
+-   of valuable information from the host OS context depends on the context
+-   which the host OS executes, i.e. interrupts, soft interrupts and kernel
+-   threads. The amount of valuable data from these contexts cannot be
+-   declared as non-interesting for an attacker without deep inspection of
+-   the code.
+-
+-   **Note**, that assigning guests to a fixed set of physical cores affects
+-   the ability of the scheduler to do load balancing and might have
+-   negative effects on CPU utilization depending on the hosting
+-   scenario. Disabling SMT might be a viable alternative for particular
+-   scenarios.
+-
+-   For further information about confining guests to a single or to a group
+-   of cores consult the cpusets documentation:
+-
+-   https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt
+-
+-.. _interrupt_isolation:
+-
+-3. Interrupt affinity
+-^^^^^^^^^^^^^^^^^^^^^
+-
+-   Interrupts can be made affine to logical CPUs. This is not universally
+-   true because there are types of interrupts which are truly per CPU
+-   interrupts, e.g. the local timer interrupt. Aside of that multi queue
+-   devices affine their interrupts to single CPUs or groups of CPUs per
+-   queue without allowing the administrator to control the affinities.
+-
+-   Moving the interrupts, which can be affinity controlled, away from CPUs
+-   which run untrusted guests, reduces the attack vector space.
+-
+-   Whether the interrupts with are affine to CPUs, which run untrusted
+-   guests, provide interesting data for an attacker depends on the system
+-   configuration and the scenarios which run on the system. While for some
+-   of the interrupts it can be assumed that they won't expose interesting
+-   information beyond exposing hints about the host OS memory layout, there
+-   is no way to make general assumptions.
+-
+-   Interrupt affinity can be controlled by the administrator via the
+-   /proc/irq/$NR/smp_affinity[_list] files. Limited documentation is
+-   available at:
+-
+-   https://www.kernel.org/doc/Documentation/IRQ-affinity.txt
+-
+-.. _smt_control:
+-
+-4. SMT control
+-^^^^^^^^^^^^^^
+-
+-   To prevent the SMT issues of L1TF it might be necessary to disable SMT
+-   completely. Disabling SMT can have a significant performance impact, but
+-   the impact depends on the hosting scenario and the type of workloads.
+-   The impact of disabling SMT needs also to be weighted against the impact
+-   of other mitigation solutions like confining guests to dedicated cores.
+-
+-   The kernel provides a sysfs interface to retrieve the status of SMT and
+-   to control it. It also provides a kernel command line interface to
+-   control SMT.
+-
+-   The kernel command line interface consists of the following options:
+-
+-     =========== ==========================================================
+-     nosmt	 Affects the bring up of the secondary CPUs during boot. The
+-		 kernel tries to bring all present CPUs online during the
+-		 boot process. "nosmt" makes sure that from each physical
+-		 core only one - the so called primary (hyper) thread is
+-		 activated. Due to a design flaw of Intel processors related
+-		 to Machine Check Exceptions the non primary siblings have
+-		 to be brought up at least partially and are then shut down
+-		 again.  "nosmt" can be undone via the sysfs interface.
+-
+-     nosmt=force Has the same effect as "nosmt" but it does not allow to
+-		 undo the SMT disable via the sysfs interface.
+-     =========== ==========================================================
+-
+-   The sysfs interface provides two files:
+-
+-   - /sys/devices/system/cpu/smt/control
+-   - /sys/devices/system/cpu/smt/active
+-
+-   /sys/devices/system/cpu/smt/control:
+-
+-     This file allows to read out the SMT control state and provides the
+-     ability to disable or (re)enable SMT. The possible states are:
+-
+-	==============  ===================================================
+-	on		SMT is supported by the CPU and enabled. All
+-			logical CPUs can be onlined and offlined without
+-			restrictions.
+-
+-	off		SMT is supported by the CPU and disabled. Only
+-			the so called primary SMT threads can be onlined
+-			and offlined without restrictions. An attempt to
+-			online a non-primary sibling is rejected
+-
+-	forceoff	Same as 'off' but the state cannot be controlled.
+-			Attempts to write to the control file are rejected.
+-
+-	notsupported	The processor does not support SMT. It's therefore
+-			not affected by the SMT implications of L1TF.
+-			Attempts to write to the control file are rejected.
+-	==============  ===================================================
+-
+-     The possible states which can be written into this file to control SMT
+-     state are:
+-
+-     - on
+-     - off
+-     - forceoff
+-
+-   /sys/devices/system/cpu/smt/active:
+-
+-     This file reports whether SMT is enabled and active, i.e. if on any
+-     physical core two or more sibling threads are online.
+-
+-   SMT control is also possible at boot time via the l1tf kernel command
+-   line parameter in combination with L1D flush control. See
+-   :ref:`mitigation_control_command_line`.
+-
+-5. Disabling EPT
+-^^^^^^^^^^^^^^^^
+-
+-  Disabling EPT for virtual machines provides full mitigation for L1TF even
+-  with SMT enabled, because the effective page tables for guests are
+-  managed and sanitized by the hypervisor. Though disabling EPT has a
+-  significant performance impact especially when the Meltdown mitigation
+-  KPTI is enabled.
+-
+-  EPT can be disabled in the hypervisor via the 'kvm-intel.ept' parameter.
+-
+-There is ongoing research and development for new mitigation mechanisms to
+-address the performance impact of disabling SMT or EPT.
+-
+-.. _mitigation_control_command_line:
+-
+-Mitigation control on the kernel command line
+----------------------------------------------
+-
+-The kernel command line allows to control the L1TF mitigations at boot
+-time with the option "l1tf=". The valid arguments for this option are:
+-
+-  ============  =============================================================
+-  full		Provides all available mitigations for the L1TF
+-		vulnerability. Disables SMT and enables all mitigations in
+-		the hypervisors, i.e. unconditional L1D flushing
+-
+-		SMT control and L1D flush control via the sysfs interface
+-		is still possible after boot.  Hypervisors will issue a
+-		warning when the first VM is started in a potentially
+-		insecure configuration, i.e. SMT enabled or L1D flush
+-		disabled.
+-
+-  full,force	Same as 'full', but disables SMT and L1D flush runtime
+-		control. Implies the 'nosmt=force' command line option.
+-		(i.e. sysfs control of SMT is disabled.)
+-
+-  flush		Leaves SMT enabled and enables the default hypervisor
+-		mitigation, i.e. conditional L1D flushing
+-
+-		SMT control and L1D flush control via the sysfs interface
+-		is still possible after boot.  Hypervisors will issue a
+-		warning when the first VM is started in a potentially
+-		insecure configuration, i.e. SMT enabled or L1D flush
+-		disabled.
+-
+-  flush,nosmt	Disables SMT and enables the default hypervisor mitigation,
+-		i.e. conditional L1D flushing.
+-
+-		SMT control and L1D flush control via the sysfs interface
+-		is still possible after boot.  Hypervisors will issue a
+-		warning when the first VM is started in a potentially
+-		insecure configuration, i.e. SMT enabled or L1D flush
+-		disabled.
+-
+-  flush,nowarn	Same as 'flush', but hypervisors will not warn when a VM is
+-		started in a potentially insecure configuration.
+-
+-  off		Disables hypervisor mitigations and doesn't emit any
+-		warnings.
+-		It also drops the swap size and available RAM limit restrictions
+-		on both hypervisor and bare metal.
+-
+-  ============  =============================================================
+-
+-The default is 'flush'. For details about L1D flushing see :ref:`l1d_flush`.
+-
+-
+-.. _mitigation_control_kvm:
+-
+-Mitigation control for KVM - module parameter
+--------------------------------------------------------------
+-
+-The KVM hypervisor mitigation mechanism, flushing the L1D cache when
+-entering a guest, can be controlled with a module parameter.
+-
+-The option/parameter is "kvm-intel.vmentry_l1d_flush=". It takes the
+-following arguments:
+-
+-  ============  ==============================================================
+-  always	L1D cache flush on every VMENTER.
+-
+-  cond		Flush L1D on VMENTER only when the code between VMEXIT and
+-		VMENTER can leak host memory which is considered
+-		interesting for an attacker. This still can leak host memory
+-		which allows e.g. to determine the hosts address space layout.
+-
+-  never		Disables the mitigation
+-  ============  ==============================================================
+-
+-The parameter can be provided on the kernel command line, as a module
+-parameter when loading the modules and at runtime modified via the sysfs
+-file:
+-
+-/sys/module/kvm_intel/parameters/vmentry_l1d_flush
+-
+-The default is 'cond'. If 'l1tf=full,force' is given on the kernel command
+-line, then 'always' is enforced and the kvm-intel.vmentry_l1d_flush
+-module parameter is ignored and writes to the sysfs file are rejected.
+-
+-
+-Mitigation selection guide
+---------------------------
+-
+-1. No virtualization in use
+-^^^^^^^^^^^^^^^^^^^^^^^^^^^
+-
+-   The system is protected by the kernel unconditionally and no further
+-   action is required.
+-
+-2. Virtualization with trusted guests
+-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+-
+-   If the guest comes from a trusted source and the guest OS kernel is
+-   guaranteed to have the L1TF mitigations in place the system is fully
+-   protected against L1TF and no further action is required.
+-
+-   To avoid the overhead of the default L1D flushing on VMENTER the
+-   administrator can disable the flushing via the kernel command line and
+-   sysfs control files. See :ref:`mitigation_control_command_line` and
+-   :ref:`mitigation_control_kvm`.
+-
+-
+-3. Virtualization with untrusted guests
+-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+-
+-3.1. SMT not supported or disabled
+-""""""""""""""""""""""""""""""""""
+-
+-  If SMT is not supported by the processor or disabled in the BIOS or by
+-  the kernel, it's only required to enforce L1D flushing on VMENTER.
+-
+-  Conditional L1D flushing is the default behaviour and can be tuned. See
+-  :ref:`mitigation_control_command_line` and :ref:`mitigation_control_kvm`.
+-
+-3.2. EPT not supported or disabled
+-""""""""""""""""""""""""""""""""""
+-
+-  If EPT is not supported by the processor or disabled in the hypervisor,
+-  the system is fully protected. SMT can stay enabled and L1D flushing on
+-  VMENTER is not required.
+-
+-  EPT can be disabled in the hypervisor via the 'kvm-intel.ept' parameter.
+-
+-3.3. SMT and EPT supported and active
+-"""""""""""""""""""""""""""""""""""""
+-
+-  If SMT and EPT are supported and active then various degrees of
+-  mitigations can be employed:
+-
+-  - L1D flushing on VMENTER:
+-
+-    L1D flushing on VMENTER is the minimal protection requirement, but it
+-    is only potent in combination with other mitigation methods.
+-
+-    Conditional L1D flushing is the default behaviour and can be tuned. See
+-    :ref:`mitigation_control_command_line` and :ref:`mitigation_control_kvm`.
+-
+-  - Guest confinement:
+-
+-    Confinement of guests to a single or a group of physical cores which
+-    are not running any other processes, can reduce the attack surface
+-    significantly, but interrupts, soft interrupts and kernel threads can
+-    still expose valuable data to a potential attacker. See
+-    :ref:`guest_confinement`.
+-
+-  - Interrupt isolation:
+-
+-    Isolating the guest CPUs from interrupts can reduce the attack surface
+-    further, but still allows a malicious guest to explore a limited amount
+-    of host physical memory. This can at least be used to gain knowledge
+-    about the host address space layout. The interrupts which have a fixed
+-    affinity to the CPUs which run the untrusted guests can depending on
+-    the scenario still trigger soft interrupts and schedule kernel threads
+-    which might expose valuable information. See
+-    :ref:`interrupt_isolation`.
+-
+-The above three mitigation methods combined can provide protection to a
+-certain degree, but the risk of the remaining attack surface has to be
+-carefully analyzed. For full protection the following methods are
+-available:
+-
+-  - Disabling SMT:
+-
+-    Disabling SMT and enforcing the L1D flushing provides the maximum
+-    amount of protection. This mitigation is not depending on any of the
+-    above mitigation methods.
+-
+-    SMT control and L1D flushing can be tuned by the command line
+-    parameters 'nosmt', 'l1tf', 'kvm-intel.vmentry_l1d_flush' and at run
+-    time with the matching sysfs control files. See :ref:`smt_control`,
+-    :ref:`mitigation_control_command_line` and
+-    :ref:`mitigation_control_kvm`.
+-
+-  - Disabling EPT:
+-
+-    Disabling EPT provides the maximum amount of protection as well. It is
+-    not depending on any of the above mitigation methods. SMT can stay
+-    enabled and L1D flushing is not required, but the performance impact is
+-    significant.
+-
+-    EPT can be disabled in the hypervisor via the 'kvm-intel.ept'
+-    parameter.
+-
+-3.4. Nested virtual machines
+-""""""""""""""""""""""""""""
+-
+-When nested virtualization is in use, three operating systems are involved:
+-the bare metal hypervisor, the nested hypervisor and the nested virtual
+-machine.  VMENTER operations from the nested hypervisor into the nested
+-guest will always be processed by the bare metal hypervisor. If KVM is the
+-bare metal hypervisor it will:
+-
+- - Flush the L1D cache on every switch from the nested hypervisor to the
+-   nested virtual machine, so that the nested hypervisor's secrets are not
+-   exposed to the nested virtual machine;
+-
+- - Flush the L1D cache on every switch from the nested virtual machine to
+-   the nested hypervisor; this is a complex operation, and flushing the L1D
+-   cache avoids that the bare metal hypervisor's secrets are exposed to the
+-   nested virtual machine;
+-
+- - Instruct the nested hypervisor to not perform any L1D cache flush. This
+-   is an optimization to avoid double L1D flushing.
+-
+-
+-.. _default_mitigations:
+-
+-Default mitigations
+--------------------
+-
+-  The kernel default mitigations for vulnerable processors are:
+-
+-  - PTE inversion to protect against malicious user space. This is done
+-    unconditionally and cannot be controlled. The swap storage is limited
+-    to ~16TB.
+-
+-  - L1D conditional flushing on VMENTER when EPT is enabled for
+-    a guest.
+-
+-  The kernel does not by default enforce the disabling of SMT, which leaves
+-  SMT systems vulnerable when running untrusted guests with EPT enabled.
+-
+-  The rationale for this choice is:
+-
+-  - Force disabling SMT can break existing setups, especially with
+-    unattended updates.
+-
+-  - If regular users run untrusted guests on their machine, then L1TF is
+-    just an add on to other malware which might be embedded in an untrusted
+-    guest, e.g. spam-bots or attacks on the local network.
+-
+-    There is no technical way to prevent a user from running untrusted code
+-    on their machines blindly.
+-
+-  - It's technically extremely unlikely and from today's knowledge even
+-    impossible that L1TF can be exploited via the most popular attack
+-    mechanisms like JavaScript because these mechanisms have no way to
+-    control PTEs. If this would be possible and not other mitigation would
+-    be possible, then the default might be different.
+-
+-  - The administrators of cloud and hosting setups have to carefully
+-    analyze the risk for their scenarios and make the appropriate
+-    mitigation choices, which might even vary across their deployed
+-    machines and also result in other changes of their overall setup.
+-    There is no way for the kernel to provide a sensible default for this
+-    kind of scenarios.
+diff --git a/Documentation/index.rst b/Documentation/index.rst
+index c858c2e66e36..63864826dcd6 100644
+--- a/Documentation/index.rst
++++ b/Documentation/index.rst
+@@ -101,6 +101,7 @@ implementation.
+    :maxdepth: 2
+ 
+    sh/index
++   x86/index
+ 
+ Filesystem Documentation
+ ------------------------
+diff --git a/Documentation/x86/conf.py b/Documentation/x86/conf.py
+new file mode 100644
+index 000000000000..33c5c3142e20
+--- /dev/null
++++ b/Documentation/x86/conf.py
+@@ -0,0 +1,10 @@
++# -*- coding: utf-8; mode: python -*-
++
++project = "X86 architecture specific documentation"
++
++tags.add("subproject")
++
++latex_documents = [
++    ('index', 'x86.tex', project,
++     'The kernel development community', 'manual'),
++]
+diff --git a/Documentation/x86/index.rst b/Documentation/x86/index.rst
+new file mode 100644
+index 000000000000..ef389dcf1b1d
+--- /dev/null
++++ b/Documentation/x86/index.rst
+@@ -0,0 +1,8 @@
++==========================
++x86 architecture specifics
++==========================
++
++.. toctree::
++   :maxdepth: 1
++
++   mds
+diff --git a/Documentation/x86/mds.rst b/Documentation/x86/mds.rst
+new file mode 100644
+index 000000000000..534e9baa4e1d
+--- /dev/null
++++ b/Documentation/x86/mds.rst
+@@ -0,0 +1,225 @@
++Microarchitectural Data Sampling (MDS) mitigation
++=================================================
++
++.. _mds:
++
++Overview
++--------
++
++Microarchitectural Data Sampling (MDS) is a family of side channel attacks
++on internal buffers in Intel CPUs. The variants are:
++
++ - Microarchitectural Store Buffer Data Sampling (MSBDS) (CVE-2018-12126)
++ - Microarchitectural Fill Buffer Data Sampling (MFBDS) (CVE-2018-12130)
++ - Microarchitectural Load Port Data Sampling (MLPDS) (CVE-2018-12127)
++ - Microarchitectural Data Sampling Uncacheable Memory (MDSUM) (CVE-2019-11091)
++
++MSBDS leaks Store Buffer Entries which can be speculatively forwarded to a
++dependent load (store-to-load forwarding) as an optimization. The forward
++can also happen to a faulting or assisting load operation for a different
++memory address, which can be exploited under certain conditions. Store
++buffers are partitioned between Hyper-Threads so cross thread forwarding is
++not possible. But if a thread enters or exits a sleep state the store
++buffer is repartitioned which can expose data from one thread to the other.
++
++MFBDS leaks Fill Buffer Entries. Fill buffers are used internally to manage
++L1 miss situations and to hold data which is returned or sent in response
++to a memory or I/O operation. Fill buffers can forward data to a load
++operation and also write data to the cache. When the fill buffer is
++deallocated it can retain the stale data of the preceding operations which
++can then be forwarded to a faulting or assisting load operation, which can
++be exploited under certain conditions. Fill buffers are shared between
++Hyper-Threads so cross thread leakage is possible.
++
++MLPDS leaks Load Port Data. Load ports are used to perform load operations
++from memory or I/O. The received data is then forwarded to the register
++file or a subsequent operation. In some implementations the Load Port can
++contain stale data from a previous operation which can be forwarded to
++faulting or assisting loads under certain conditions, which again can be
++exploited eventually. Load ports are shared between Hyper-Threads so cross
++thread leakage is possible.
++
++MDSUM is a special case of MSBDS, MFBDS and MLPDS. An uncacheable load from
++memory that takes a fault or assist can leave data in a microarchitectural
++structure that may later be observed using one of the same methods used by
++MSBDS, MFBDS or MLPDS.
++
++Exposure assumptions
++--------------------
++
++It is assumed that attack code resides in user space or in a guest with one
++exception. The rationale behind this assumption is that the code construct
++needed for exploiting MDS requires:
++
++ - to control the load to trigger a fault or assist
++
++ - to have a disclosure gadget which exposes the speculatively accessed
++   data for consumption through a side channel.
++
++ - to control the pointer through which the disclosure gadget exposes the
++   data
++
++The existence of such a construct in the kernel cannot be excluded with
++100% certainty, but the complexity involved makes it extremely unlikely.
++
++There is one exception, which is untrusted BPF. The functionality of
++untrusted BPF is limited, but it needs to be thoroughly investigated
++whether it can be used to create such a construct.
++
++
++Mitigation strategy
++-------------------
++
++All variants have the same mitigation strategy at least for the single CPU
++thread case (SMT off): Force the CPU to clear the affected buffers.
++
++This is achieved by using the otherwise unused and obsolete VERW
++instruction in combination with a microcode update. The microcode clears
++the affected CPU buffers when the VERW instruction is executed.
++
++For virtualization there are two ways to achieve CPU buffer clearing:
++either the modified VERW instruction or the L1D Flush command. The
++latter is issued when L1TF mitigation is enabled, so the extra
++VERW can be avoided. If the CPU is not affected by L1TF then VERW needs to
++be issued.
++
++If the VERW instruction with the supplied segment selector argument is
++executed on a CPU without the microcode update there is no side effect
++other than a small number of pointlessly wasted CPU cycles.
++
++This does not protect against cross Hyper-Thread attacks except for MSBDS
++which is only exploitable cross Hyper-thread when one of the Hyper-Threads
++enters a C-state.
++
++The kernel provides a function to invoke the buffer clearing:
++
++    mds_clear_cpu_buffers()
++
++The mitigation is invoked on kernel/userspace, hypervisor/guest and C-state
++(idle) transitions.
++
++As a special quirk to address virtualization scenarios where the host has
++the microcode updated, but the hypervisor does not (yet) expose the
++MD_CLEAR CPUID bit to guests, the kernel issues the VERW instruction in the
++hope that it might actually clear the buffers. The state is reflected
++accordingly.
++
++According to current knowledge additional mitigations inside the kernel
++itself are not required because the necessary gadgets to expose the leaked
++data cannot be controlled in a way which allows exploitation from malicious
++user space or VM guests.
++
++Kernel internal mitigation modes
++--------------------------------
++
++ ======= ============================================================
++ off      Mitigation is disabled. Either the CPU is not affected or
++          mds=off is supplied on the kernel command line
++
++ full     Mitigation is enabled. CPU is affected and MD_CLEAR is
++          advertised in CPUID.
++
++ vmwerv	  Mitigation is enabled. CPU is affected and MD_CLEAR is not
++	  advertised in CPUID. That is mainly for virtualization
++	  scenarios where the host has the updated microcode but the
++	  hypervisor does not expose MD_CLEAR in CPUID. It's a best
++	  effort approach without guarantee.
++ ======= ============================================================
++
++If the CPU is affected and mds=off is not supplied on the kernel command
++line then the kernel selects the appropriate mitigation mode depending on
++the availability of the MD_CLEAR CPUID bit.
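The selection logic implied by the table above can be modeled in a few lines of C. This is an illustrative sketch only: the function name `mds_select` and its boolean parameters are hypothetical, not the kernel's actual `mds_select_mitigation()`.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the mode table above, not kernel code. */
enum mds_mode { MDS_OFF, MDS_FULL, MDS_VMWERV };

enum mds_mode mds_select(bool cpu_affected, bool mds_off_cmdline,
			 bool md_clear_in_cpuid)
{
	/* Unaffected CPU, or mds=off on the command line: no mitigation */
	if (!cpu_affected || mds_off_cmdline)
		return MDS_OFF;
	/* MD_CLEAR advertised: full mitigation; otherwise best effort */
	return md_clear_in_cpuid ? MDS_FULL : MDS_VMWERV;
}
```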
++
++Mitigation points
++-----------------
++
++1. Return to user space
++^^^^^^^^^^^^^^^^^^^^^^^
++
++   When transitioning from kernel to user space the CPU buffers are flushed
++   on affected CPUs when the mitigation is not disabled on the kernel
++   command line. The mitigation is enabled through the static key
++   mds_user_clear.
++
++   The mitigation is invoked in prepare_exit_to_usermode() which covers
++   most of the kernel to user space transitions. There are a few exceptions
++   which are not invoking prepare_exit_to_usermode() on return to user
++   space. These exceptions use the paranoid exit code.
++
++   - Non Maskable Interrupt (NMI):
++
++     Access to sensitive data such as keys or credentials in the NMI
++     context is mostly theoretical: the CPU can prefetch or execute a
++     misspeculated code path and thereby fetch data which might end up
++     leaking through a buffer.
++
++     But for mounting other attacks the kernel stack address of the task is
++     already valuable information. So in full mitigation mode, the NMI is
++     mitigated on the return from do_nmi() to provide almost complete
++     coverage.
++
++   - Double fault (#DF):
++
++     A double fault is usually fatal, but the ESPFIX workaround, which can
++     be triggered from user space through modify_ldt(2) is a recoverable
++     double fault. #DF uses the paranoid exit path, so explicit mitigation
++     in the double fault handler is required.
++
++   - Machine Check Exception (#MC):
++
++     Another corner case is a #MC which hits between the CPU buffer clear
++     invocation and the actual return to user. As this still is in kernel
++     space it takes the paranoid exit path which does not clear the CPU
++     buffers. So the #MC handler repopulates the buffers to some
++     extent. Machine checks are not reliably controllable and the window is
++     extremely small, so mitigation would just tick a checkbox that this
++     theoretical corner case is covered. To keep the amount of special
++     cases small, ignore #MC.
++
++   - Debug Exception (#DB):
++
++     This takes the paranoid exit path only when the INT1 breakpoint is in
++     kernel space. #DB on a user space address takes the regular exit path,
++     so no extra mitigation required.
++
++
++2. C-State transition
++^^^^^^^^^^^^^^^^^^^^^
++
++   When a CPU goes idle and enters a C-State the CPU buffers need to be
++   cleared on affected CPUs when SMT is active. This addresses the
++   repartitioning of the store buffer when one of the Hyper-Threads enters
++   a C-State.
++
++   When SMT is inactive, i.e. either the CPU does not support it or all
++   sibling threads are offline, CPU buffer clearing is not required.
++
++   The idle clearing is enabled on CPUs which are only affected by MSBDS
++   and not by any other MDS variant. The other MDS variants cannot be
++   protected against cross Hyper-Thread attacks because the Fill Buffer and
++   the Load Ports are shared. So on CPUs affected by other variants, the
++   idle clearing would be a window dressing exercise and is therefore not
++   activated.
++
++   The invocation is controlled by the static key mds_idle_clear which is
++   switched depending on the chosen mitigation mode and the SMT state of
++   the system.
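The conditions the two paragraphs above describe for the idle clear can be sketched as a tiny C predicate. The helper name `mds_idle_clear_active` and its parameters are hypothetical illustrations, not the kernel's static-key machinery.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical model: the idle (C-state) buffer clear is only useful
 * when the mitigation is on, the CPU is affected solely by MSBDS, and
 * a Hyper-Thread sibling is active to observe the repartitioned store
 * buffer. Not kernel code.
 */
bool mds_idle_clear_active(bool mitigation_on, bool msbds_only,
			   bool smt_active)
{
	return mitigation_on && msbds_only && smt_active;
}
```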
++
++   The buffer clear is only invoked before entering the C-State to
++   prevent stale data from the idling CPU from spilling to the
++   Hyper-Thread sibling after the store buffer has been repartitioned
++   and all entries are available to the non-idle sibling.
++
++   When coming out of idle the store buffer is partitioned again so each
++   sibling has half of it available. The CPU returning from idle could
++   then be speculatively exposed to contents of the sibling. The buffers
++   are flushed either on exit to user space or on VMENTER so malicious
++   code in user space or the guest cannot speculatively access them.
++
++   The mitigation is hooked into all variants of halt()/mwait(), but does
++   not cover the legacy ACPI IO-Port mechanism because the ACPI idle driver
++   has been superseded by the intel_idle driver around 2010 and is
++   preferred on all affected CPUs which are expected to gain the MD_CLEAR
++   functionality in microcode. Aside from that, the IO-Port mechanism is a
++   legacy interface which is only used on older systems which are either
++   not affected or do not receive microcode updates anymore.
+diff --git a/Makefile b/Makefile
+index 11c7f7844507..95670d520786 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 0
+-SUBLEVEL = 15
++SUBLEVEL = 16
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/arch/powerpc/kernel/security.c b/arch/powerpc/kernel/security.c
+index b33bafb8fcea..70568ccbd9fd 100644
+--- a/arch/powerpc/kernel/security.c
++++ b/arch/powerpc/kernel/security.c
+@@ -57,7 +57,7 @@ void setup_barrier_nospec(void)
+ 	enable = security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) &&
+ 		 security_ftr_enabled(SEC_FTR_BNDS_CHK_SPEC_BAR);
+ 
+-	if (!no_nospec)
++	if (!no_nospec && !cpu_mitigations_off())
+ 		enable_barrier_nospec(enable);
+ }
+ 
+@@ -116,7 +116,7 @@ static int __init handle_nospectre_v2(char *p)
+ early_param("nospectre_v2", handle_nospectre_v2);
+ void setup_spectre_v2(void)
+ {
+-	if (no_spectrev2)
++	if (no_spectrev2 || cpu_mitigations_off())
+ 		do_btb_flush_fixups();
+ 	else
+ 		btb_flush_enabled = true;
+@@ -300,7 +300,7 @@ void setup_stf_barrier(void)
+ 
+ 	stf_enabled_flush_types = type;
+ 
+-	if (!no_stf_barrier)
++	if (!no_stf_barrier && !cpu_mitigations_off())
+ 		stf_barrier_enable(enable);
+ }
+ 
+diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
+index 236c1151a3a7..c7ec27ba8926 100644
+--- a/arch/powerpc/kernel/setup_64.c
++++ b/arch/powerpc/kernel/setup_64.c
+@@ -958,7 +958,7 @@ void setup_rfi_flush(enum l1d_flush_type types, bool enable)
+ 
+ 	enabled_flush_types = types;
+ 
+-	if (!no_rfi_flush)
++	if (!no_rfi_flush && !cpu_mitigations_off())
+ 		rfi_flush_enable(enable);
+ }
+ 
+diff --git a/arch/s390/kernel/nospec-branch.c b/arch/s390/kernel/nospec-branch.c
+index bdddaae96559..649135cbedd5 100644
+--- a/arch/s390/kernel/nospec-branch.c
++++ b/arch/s390/kernel/nospec-branch.c
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include <linux/module.h>
+ #include <linux/device.h>
++#include <linux/cpu.h>
+ #include <asm/nospec-branch.h>
+ 
+ static int __init nobp_setup_early(char *str)
+@@ -58,7 +59,7 @@ early_param("nospectre_v2", nospectre_v2_setup_early);
+ 
+ void __init nospec_auto_detect(void)
+ {
+-	if (test_facility(156)) {
++	if (test_facility(156) || cpu_mitigations_off()) {
+ 		/*
+ 		 * The machine supports etokens.
+ 		 * Disable expolines and disable nobp.
+diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
+index 7bc105f47d21..19f650d729f5 100644
+--- a/arch/x86/entry/common.c
++++ b/arch/x86/entry/common.c
+@@ -31,6 +31,7 @@
+ #include <asm/vdso.h>
+ #include <linux/uaccess.h>
+ #include <asm/cpufeature.h>
++#include <asm/nospec-branch.h>
+ 
+ #define CREATE_TRACE_POINTS
+ #include <trace/events/syscalls.h>
+@@ -212,6 +213,8 @@ __visible inline void prepare_exit_to_usermode(struct pt_regs *regs)
+ #endif
+ 
+ 	user_enter_irqoff();
++
++	mds_user_clear_cpu_buffers();
+ }
+ 
+ #define SYSCALL_EXIT_WORK_FLAGS				\
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index 981ff9479648..75f27ee2c263 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -344,6 +344,7 @@
+ /* Intel-defined CPU features, CPUID level 0x00000007:0 (EDX), word 18 */
+ #define X86_FEATURE_AVX512_4VNNIW	(18*32+ 2) /* AVX-512 Neural Network Instructions */
+ #define X86_FEATURE_AVX512_4FMAPS	(18*32+ 3) /* AVX-512 Multiply Accumulation Single precision */
++#define X86_FEATURE_MD_CLEAR		(18*32+10) /* VERW clears CPU buffers */
+ #define X86_FEATURE_TSX_FORCE_ABORT	(18*32+13) /* "" TSX_FORCE_ABORT */
+ #define X86_FEATURE_PCONFIG		(18*32+18) /* Intel PCONFIG */
+ #define X86_FEATURE_SPEC_CTRL		(18*32+26) /* "" Speculation Control (IBRS + IBPB) */
+@@ -382,5 +383,7 @@
+ #define X86_BUG_SPECTRE_V2		X86_BUG(16) /* CPU is affected by Spectre variant 2 attack with indirect branches */
+ #define X86_BUG_SPEC_STORE_BYPASS	X86_BUG(17) /* CPU is affected by speculative store bypass attack */
+ #define X86_BUG_L1TF			X86_BUG(18) /* CPU is affected by L1 Terminal Fault */
++#define X86_BUG_MDS			X86_BUG(19) /* CPU is affected by Microarchitectural data sampling */
+#define X86_BUG_MSBDS_ONLY		X86_BUG(20) /* CPU is only affected by the MSBDS variant of BUG_MDS */
+ 
+ #endif /* _ASM_X86_CPUFEATURES_H */
+diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
+index 058e40fed167..8a0e56e1dcc9 100644
+--- a/arch/x86/include/asm/irqflags.h
++++ b/arch/x86/include/asm/irqflags.h
+@@ -6,6 +6,8 @@
+ 
+ #ifndef __ASSEMBLY__
+ 
++#include <asm/nospec-branch.h>
++
+ /* Provide __cpuidle; we can't safely include <linux/cpu.h> */
+ #define __cpuidle __attribute__((__section__(".cpuidle.text")))
+ 
+@@ -54,11 +56,13 @@ static inline void native_irq_enable(void)
+ 
+ static inline __cpuidle void native_safe_halt(void)
+ {
++	mds_idle_clear_cpu_buffers();
+ 	asm volatile("sti; hlt": : :"memory");
+ }
+ 
+ static inline __cpuidle void native_halt(void)
+ {
++	mds_idle_clear_cpu_buffers();
+ 	asm volatile("hlt": : :"memory");
+ }
+ 
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index ca5bc0eacb95..20f7da552e90 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -2,6 +2,8 @@
+ #ifndef _ASM_X86_MSR_INDEX_H
+ #define _ASM_X86_MSR_INDEX_H
+ 
++#include <linux/bits.h>
++
+ /*
+  * CPU model specific register (MSR) numbers.
+  *
+@@ -40,14 +42,14 @@
+ /* Intel MSRs. Some also available on other CPUs */
+ 
+ #define MSR_IA32_SPEC_CTRL		0x00000048 /* Speculation Control */
+-#define SPEC_CTRL_IBRS			(1 << 0)   /* Indirect Branch Restricted Speculation */
++#define SPEC_CTRL_IBRS			BIT(0)	   /* Indirect Branch Restricted Speculation */
+ #define SPEC_CTRL_STIBP_SHIFT		1	   /* Single Thread Indirect Branch Predictor (STIBP) bit */
+-#define SPEC_CTRL_STIBP			(1 << SPEC_CTRL_STIBP_SHIFT)	/* STIBP mask */
++#define SPEC_CTRL_STIBP			BIT(SPEC_CTRL_STIBP_SHIFT)	/* STIBP mask */
+ #define SPEC_CTRL_SSBD_SHIFT		2	   /* Speculative Store Bypass Disable bit */
+-#define SPEC_CTRL_SSBD			(1 << SPEC_CTRL_SSBD_SHIFT)	/* Speculative Store Bypass Disable */
++#define SPEC_CTRL_SSBD			BIT(SPEC_CTRL_SSBD_SHIFT)	/* Speculative Store Bypass Disable */
+ 
+ #define MSR_IA32_PRED_CMD		0x00000049 /* Prediction Command */
+-#define PRED_CMD_IBPB			(1 << 0)   /* Indirect Branch Prediction Barrier */
++#define PRED_CMD_IBPB			BIT(0)	   /* Indirect Branch Prediction Barrier */
+ 
+ #define MSR_PPIN_CTL			0x0000004e
+ #define MSR_PPIN			0x0000004f
+@@ -69,20 +71,25 @@
+ #define MSR_MTRRcap			0x000000fe
+ 
+ #define MSR_IA32_ARCH_CAPABILITIES	0x0000010a
+-#define ARCH_CAP_RDCL_NO		(1 << 0)   /* Not susceptible to Meltdown */
+-#define ARCH_CAP_IBRS_ALL		(1 << 1)   /* Enhanced IBRS support */
+-#define ARCH_CAP_SKIP_VMENTRY_L1DFLUSH	(1 << 3)   /* Skip L1D flush on vmentry */
+-#define ARCH_CAP_SSB_NO			(1 << 4)   /*
+-						    * Not susceptible to Speculative Store Bypass
+-						    * attack, so no Speculative Store Bypass
+-						    * control required.
+-						    */
++#define ARCH_CAP_RDCL_NO		BIT(0)	/* Not susceptible to Meltdown */
++#define ARCH_CAP_IBRS_ALL		BIT(1)	/* Enhanced IBRS support */
++#define ARCH_CAP_SKIP_VMENTRY_L1DFLUSH	BIT(3)	/* Skip L1D flush on vmentry */
++#define ARCH_CAP_SSB_NO			BIT(4)	/*
++						 * Not susceptible to Speculative Store Bypass
++						 * attack, so no Speculative Store Bypass
++						 * control required.
++						 */
++#define ARCH_CAP_MDS_NO			BIT(5)   /*
++						  * Not susceptible to
++						  * Microarchitectural Data
++						  * Sampling (MDS) vulnerabilities.
++						  */
+ 
+ #define MSR_IA32_FLUSH_CMD		0x0000010b
+-#define L1D_FLUSH			(1 << 0)   /*
+-						    * Writeback and invalidate the
+-						    * L1 data cache.
+-						    */
++#define L1D_FLUSH			BIT(0)	/*
++						 * Writeback and invalidate the
++						 * L1 data cache.
++						 */
+ 
+ #define MSR_IA32_BBL_CR_CTL		0x00000119
+ #define MSR_IA32_BBL_CR_CTL3		0x0000011e
+diff --git a/arch/x86/include/asm/mwait.h b/arch/x86/include/asm/mwait.h
+index 39a2fb29378a..eb0f80ce8524 100644
+--- a/arch/x86/include/asm/mwait.h
++++ b/arch/x86/include/asm/mwait.h
+@@ -6,6 +6,7 @@
+ #include <linux/sched/idle.h>
+ 
+ #include <asm/cpufeature.h>
++#include <asm/nospec-branch.h>
+ 
+ #define MWAIT_SUBSTATE_MASK		0xf
+ #define MWAIT_CSTATE_MASK		0xf
+@@ -40,6 +41,8 @@ static inline void __monitorx(const void *eax, unsigned long ecx,
+ 
+ static inline void __mwait(unsigned long eax, unsigned long ecx)
+ {
++	mds_idle_clear_cpu_buffers();
++
+ 	/* "mwait %eax, %ecx;" */
+ 	asm volatile(".byte 0x0f, 0x01, 0xc9;"
+ 		     :: "a" (eax), "c" (ecx));
+@@ -74,6 +77,8 @@ static inline void __mwait(unsigned long eax, unsigned long ecx)
+ static inline void __mwaitx(unsigned long eax, unsigned long ebx,
+ 			    unsigned long ecx)
+ {
++	/* No MDS buffer clear as this is AMD/HYGON only */
++
+ 	/* "mwaitx %eax, %ebx, %ecx;" */
+ 	asm volatile(".byte 0x0f, 0x01, 0xfb;"
+ 		     :: "a" (eax), "b" (ebx), "c" (ecx));
+@@ -81,6 +86,8 @@ static inline void __mwaitx(unsigned long eax, unsigned long ebx,
+ 
+ static inline void __sti_mwait(unsigned long eax, unsigned long ecx)
+ {
++	mds_idle_clear_cpu_buffers();
++
+ 	trace_hardirqs_on();
+ 	/* "mwait %eax, %ecx;" */
+ 	asm volatile("sti; .byte 0x0f, 0x01, 0xc9;"
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index dad12b767ba0..4e970390110f 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -318,6 +318,56 @@ DECLARE_STATIC_KEY_FALSE(switch_to_cond_stibp);
+ DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
+ DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
+ 
++DECLARE_STATIC_KEY_FALSE(mds_user_clear);
++DECLARE_STATIC_KEY_FALSE(mds_idle_clear);
++
++#include <asm/segment.h>
++
++/**
++ * mds_clear_cpu_buffers - Mitigation for MDS vulnerability
++ *
++ * This uses the otherwise unused and obsolete VERW instruction in
++ * combination with microcode which triggers a CPU buffer flush when the
++ * instruction is executed.
++ */
++static inline void mds_clear_cpu_buffers(void)
++{
++	static const u16 ds = __KERNEL_DS;
++
++	/*
++	 * Has to be the memory-operand variant because only that
++	 * guarantees the CPU buffer flush functionality according to
++	 * documentation. The register-operand variant does not.
++	 * Works with any segment selector, but a valid writable
++	 * data segment is the fastest variant.
++	 *
++	 * "cc" clobber is required because VERW modifies ZF.
++	 */
++	asm volatile("verw %[ds]" : : [ds] "m" (ds) : "cc");
++}
++
++/**
++ * mds_user_clear_cpu_buffers - Mitigation for MDS vulnerability
++ *
++ * Clear CPU buffers if the corresponding static key is enabled
++ */
++static inline void mds_user_clear_cpu_buffers(void)
++{
++	if (static_branch_likely(&mds_user_clear))
++		mds_clear_cpu_buffers();
++}
++
++/**
++ * mds_idle_clear_cpu_buffers - Mitigation for MDS vulnerability
++ *
++ * Clear CPU buffers if the corresponding static key is enabled
++ */
++static inline void mds_idle_clear_cpu_buffers(void)
++{
++	if (static_branch_likely(&mds_idle_clear))
++		mds_clear_cpu_buffers();
++}
++
+ #endif /* __ASSEMBLY__ */
+ 
+ /*
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index 33051436c864..aca1ef8cc79f 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -992,4 +992,10 @@ enum l1tf_mitigations {
+ 
+ extern enum l1tf_mitigations l1tf_mitigation;
+ 
++enum mds_mitigations {
++	MDS_MITIGATION_OFF,
++	MDS_MITIGATION_FULL,
++	MDS_MITIGATION_VMWERV,
++};
++
+ #endif /* _ASM_X86_PROCESSOR_H */
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 482383c2b184..1b2ce0c6c4da 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -37,6 +37,7 @@
+ static void __init spectre_v2_select_mitigation(void);
+ static void __init ssb_select_mitigation(void);
+ static void __init l1tf_select_mitigation(void);
++static void __init mds_select_mitigation(void);
+ 
+ /* The base value of the SPEC_CTRL MSR that always has to be preserved. */
+ u64 x86_spec_ctrl_base;
+@@ -63,6 +64,13 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
+ /* Control unconditional IBPB in switch_mm() */
+ DEFINE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
+ 
++/* Control MDS CPU buffer clear before returning to user space */
++DEFINE_STATIC_KEY_FALSE(mds_user_clear);
++EXPORT_SYMBOL_GPL(mds_user_clear);
++/* Control MDS CPU buffer clear before idling (halt, mwait) */
++DEFINE_STATIC_KEY_FALSE(mds_idle_clear);
++EXPORT_SYMBOL_GPL(mds_idle_clear);
++
+ void __init check_bugs(void)
+ {
+ 	identify_boot_cpu();
+@@ -101,6 +109,10 @@ void __init check_bugs(void)
+ 
+ 	l1tf_select_mitigation();
+ 
++	mds_select_mitigation();
++
++	arch_smt_update();
++
+ #ifdef CONFIG_X86_32
+ 	/*
+ 	 * Check whether we are able to run this kernel safely on SMP.
+@@ -206,6 +218,61 @@ static void x86_amd_ssb_disable(void)
+ 		wrmsrl(MSR_AMD64_LS_CFG, msrval);
+ }
+ 
++#undef pr_fmt
++#define pr_fmt(fmt)	"MDS: " fmt
++
++/* Default mitigation for MDS-affected CPUs */
++static enum mds_mitigations mds_mitigation __ro_after_init = MDS_MITIGATION_FULL;
++static bool mds_nosmt __ro_after_init = false;
++
++static const char * const mds_strings[] = {
++	[MDS_MITIGATION_OFF]	= "Vulnerable",
++	[MDS_MITIGATION_FULL]	= "Mitigation: Clear CPU buffers",
++	[MDS_MITIGATION_VMWERV]	= "Vulnerable: Clear CPU buffers attempted, no microcode",
++};
++
++static void __init mds_select_mitigation(void)
++{
++	if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off()) {
++		mds_mitigation = MDS_MITIGATION_OFF;
++		return;
++	}
++
++	if (mds_mitigation == MDS_MITIGATION_FULL) {
++		if (!boot_cpu_has(X86_FEATURE_MD_CLEAR))
++			mds_mitigation = MDS_MITIGATION_VMWERV;
++
++		static_branch_enable(&mds_user_clear);
++
++		if (!boot_cpu_has(X86_BUG_MSBDS_ONLY) &&
++		    (mds_nosmt || cpu_mitigations_auto_nosmt()))
++			cpu_smt_disable(false);
++	}
++
++	pr_info("%s\n", mds_strings[mds_mitigation]);
++}
++
++static int __init mds_cmdline(char *str)
++{
++	if (!boot_cpu_has_bug(X86_BUG_MDS))
++		return 0;
++
++	if (!str)
++		return -EINVAL;
++
++	if (!strcmp(str, "off"))
++		mds_mitigation = MDS_MITIGATION_OFF;
++	else if (!strcmp(str, "full"))
++		mds_mitigation = MDS_MITIGATION_FULL;
++	else if (!strcmp(str, "full,nosmt")) {
++		mds_mitigation = MDS_MITIGATION_FULL;
++		mds_nosmt = true;
++	}
++
++	return 0;
++}
++early_param("mds", mds_cmdline);
++
+ #undef pr_fmt
+ #define pr_fmt(fmt)     "Spectre V2 : " fmt
+ 
+@@ -440,7 +507,8 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
+ 	char arg[20];
+ 	int ret, i;
+ 
+-	if (cmdline_find_option_bool(boot_command_line, "nospectre_v2"))
++	if (cmdline_find_option_bool(boot_command_line, "nospectre_v2") ||
++	    cpu_mitigations_off())
+ 		return SPECTRE_V2_CMD_NONE;
+ 
+ 	ret = cmdline_find_option(boot_command_line, "spectre_v2", arg, sizeof(arg));
+@@ -574,9 +642,6 @@ specv2_set_mode:
+ 
+ 	/* Set up IBPB and STIBP depending on the general spectre V2 command */
+ 	spectre_v2_user_select_mitigation(cmd);
+-
+-	/* Enable STIBP if appropriate */
+-	arch_smt_update();
+ }
+ 
+ static void update_stibp_msr(void * __unused)
+@@ -610,6 +675,31 @@ static void update_indir_branch_cond(void)
+ 		static_branch_disable(&switch_to_cond_stibp);
+ }
+ 
++#undef pr_fmt
++#define pr_fmt(fmt) fmt
++
++/* Update the static key controlling the MDS CPU buffer clear in idle */
++static void update_mds_branch_idle(void)
++{
++	/*
++	 * Enable the idle clearing if SMT is active on CPUs which are
++	 * affected only by MSBDS and not any other MDS variant.
++	 *
++	 * The other variants cannot be mitigated when SMT is enabled, so
++	 * clearing the buffers on idle just to prevent the Store Buffer
++	 * repartitioning leak would be a window dressing exercise.
++	 */
++	if (!boot_cpu_has_bug(X86_BUG_MSBDS_ONLY))
++		return;
++
++	if (sched_smt_active())
++		static_branch_enable(&mds_idle_clear);
++	else
++		static_branch_disable(&mds_idle_clear);
++}
++
++#define MDS_MSG_SMT "MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.\n"
++
+ void arch_smt_update(void)
+ {
+ 	/* Enhanced IBRS implies STIBP. No update required. */
+@@ -631,6 +721,17 @@ void arch_smt_update(void)
+ 		break;
+ 	}
+ 
++	switch (mds_mitigation) {
++	case MDS_MITIGATION_FULL:
++	case MDS_MITIGATION_VMWERV:
++		if (sched_smt_active() && !boot_cpu_has(X86_BUG_MSBDS_ONLY))
++			pr_warn_once(MDS_MSG_SMT);
++		update_mds_branch_idle();
++		break;
++	case MDS_MITIGATION_OFF:
++		break;
++	}
++
+ 	mutex_unlock(&spec_ctrl_mutex);
+ }
+ 
+@@ -672,7 +773,8 @@ static enum ssb_mitigation_cmd __init ssb_parse_cmdline(void)
+ 	char arg[20];
+ 	int ret, i;
+ 
+-	if (cmdline_find_option_bool(boot_command_line, "nospec_store_bypass_disable")) {
++	if (cmdline_find_option_bool(boot_command_line, "nospec_store_bypass_disable") ||
++	    cpu_mitigations_off()) {
+ 		return SPEC_STORE_BYPASS_CMD_NONE;
+ 	} else {
+ 		ret = cmdline_find_option(boot_command_line, "spec_store_bypass_disable",
+@@ -996,6 +1098,11 @@ static void __init l1tf_select_mitigation(void)
+ 	if (!boot_cpu_has_bug(X86_BUG_L1TF))
+ 		return;
+ 
++	if (cpu_mitigations_off())
++		l1tf_mitigation = L1TF_MITIGATION_OFF;
++	else if (cpu_mitigations_auto_nosmt())
++		l1tf_mitigation = L1TF_MITIGATION_FLUSH_NOSMT;
++
+ 	override_cache_bits(&boot_cpu_data);
+ 
+ 	switch (l1tf_mitigation) {
+@@ -1024,7 +1131,7 @@ static void __init l1tf_select_mitigation(void)
+ 		pr_info("You may make it effective by booting the kernel with mem=%llu parameter.\n",
+ 				half_pa);
+ 		pr_info("However, doing so will make a part of your RAM unusable.\n");
+-		pr_info("Reading https://www.kernel.org/doc/html/latest/admin-guide/l1tf.html might help you decide.\n");
++		pr_info("Reading https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html might help you decide.\n");
+ 		return;
+ 	}
+ 
+@@ -1057,6 +1164,7 @@ static int __init l1tf_cmdline(char *str)
+ early_param("l1tf", l1tf_cmdline);
+ 
+ #undef pr_fmt
++#define pr_fmt(fmt) fmt
+ 
+ #ifdef CONFIG_SYSFS
+ 
+@@ -1095,6 +1203,23 @@ static ssize_t l1tf_show_state(char *buf)
+ }
+ #endif
+ 
++static ssize_t mds_show_state(char *buf)
++{
++	if (!hypervisor_is_type(X86_HYPER_NATIVE)) {
++		return sprintf(buf, "%s; SMT Host state unknown\n",
++			       mds_strings[mds_mitigation]);
++	}
++
++	if (boot_cpu_has(X86_BUG_MSBDS_ONLY)) {
++		return sprintf(buf, "%s; SMT %s\n", mds_strings[mds_mitigation],
++			       (mds_mitigation == MDS_MITIGATION_OFF ? "vulnerable" :
++			        sched_smt_active() ? "mitigated" : "disabled"));
++	}
++
++	return sprintf(buf, "%s; SMT %s\n", mds_strings[mds_mitigation],
++		       sched_smt_active() ? "vulnerable" : "disabled");
++}
++
+ static char *stibp_state(void)
+ {
+ 	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
+@@ -1161,6 +1286,10 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
+ 		if (boot_cpu_has(X86_FEATURE_L1TF_PTEINV))
+ 			return l1tf_show_state(buf);
+ 		break;
++
++	case X86_BUG_MDS:
++		return mds_show_state(buf);
++
+ 	default:
+ 		break;
+ 	}
+@@ -1192,4 +1321,9 @@ ssize_t cpu_show_l1tf(struct device *dev, struct device_attribute *attr, char *b
+ {
+ 	return cpu_show_common(dev, attr, buf, X86_BUG_L1TF);
+ }
++
++ssize_t cpu_show_mds(struct device *dev, struct device_attribute *attr, char *buf)
++{
++	return cpu_show_common(dev, attr, buf, X86_BUG_MDS);
++}
+ #endif
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index cb28e98a0659..132a63dc5a76 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -948,61 +948,77 @@ static void identify_cpu_without_cpuid(struct cpuinfo_x86 *c)
+ #endif
+ }
+ 
+-static const __initconst struct x86_cpu_id cpu_no_speculation[] = {
+-	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_SALTWELL,	X86_FEATURE_ANY },
+-	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_SALTWELL_TABLET,	X86_FEATURE_ANY },
+-	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_BONNELL_MID,	X86_FEATURE_ANY },
+-	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_SALTWELL_MID,	X86_FEATURE_ANY },
+-	{ X86_VENDOR_INTEL,	6, INTEL_FAM6_ATOM_BONNELL,	X86_FEATURE_ANY },
+-	{ X86_VENDOR_CENTAUR,	5 },
+-	{ X86_VENDOR_INTEL,	5 },
+-	{ X86_VENDOR_NSC,	5 },
+-	{ X86_VENDOR_ANY,	4 },
++#define NO_SPECULATION	BIT(0)
++#define NO_MELTDOWN	BIT(1)
++#define NO_SSB		BIT(2)
++#define NO_L1TF		BIT(3)
++#define NO_MDS		BIT(4)
++#define MSBDS_ONLY	BIT(5)
++
++#define VULNWL(_vendor, _family, _model, _whitelist)	\
++	{ X86_VENDOR_##_vendor, _family, _model, X86_FEATURE_ANY, _whitelist }
++
++#define VULNWL_INTEL(model, whitelist)		\
++	VULNWL(INTEL, 6, INTEL_FAM6_##model, whitelist)
++
++#define VULNWL_AMD(family, whitelist)		\
++	VULNWL(AMD, family, X86_MODEL_ANY, whitelist)
++
++#define VULNWL_HYGON(family, whitelist)		\
++	VULNWL(HYGON, family, X86_MODEL_ANY, whitelist)
++
++static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
++	VULNWL(ANY,	4, X86_MODEL_ANY,	NO_SPECULATION),
++	VULNWL(CENTAUR,	5, X86_MODEL_ANY,	NO_SPECULATION),
++	VULNWL(INTEL,	5, X86_MODEL_ANY,	NO_SPECULATION),
++	VULNWL(NSC,	5, X86_MODEL_ANY,	NO_SPECULATION),
++
++	/* Intel Family 6 */
++	VULNWL_INTEL(ATOM_SALTWELL,		NO_SPECULATION),
++	VULNWL_INTEL(ATOM_SALTWELL_TABLET,	NO_SPECULATION),
++	VULNWL_INTEL(ATOM_SALTWELL_MID,		NO_SPECULATION),
++	VULNWL_INTEL(ATOM_BONNELL,		NO_SPECULATION),
++	VULNWL_INTEL(ATOM_BONNELL_MID,		NO_SPECULATION),
++
++	VULNWL_INTEL(ATOM_SILVERMONT,		NO_SSB | NO_L1TF | MSBDS_ONLY),
++	VULNWL_INTEL(ATOM_SILVERMONT_X,		NO_SSB | NO_L1TF | MSBDS_ONLY),
++	VULNWL_INTEL(ATOM_SILVERMONT_MID,	NO_SSB | NO_L1TF | MSBDS_ONLY),
++	VULNWL_INTEL(ATOM_AIRMONT,		NO_SSB | NO_L1TF | MSBDS_ONLY),
++	VULNWL_INTEL(XEON_PHI_KNL,		NO_SSB | NO_L1TF | MSBDS_ONLY),
++	VULNWL_INTEL(XEON_PHI_KNM,		NO_SSB | NO_L1TF | MSBDS_ONLY),
++
++	VULNWL_INTEL(CORE_YONAH,		NO_SSB),
++
++	VULNWL_INTEL(ATOM_AIRMONT_MID,		NO_L1TF | MSBDS_ONLY),
++
++	VULNWL_INTEL(ATOM_GOLDMONT,		NO_MDS | NO_L1TF),
++	VULNWL_INTEL(ATOM_GOLDMONT_X,		NO_MDS | NO_L1TF),
++	VULNWL_INTEL(ATOM_GOLDMONT_PLUS,	NO_MDS | NO_L1TF),
++
++	/* AMD Family 0xf - 0x12 */
++	VULNWL_AMD(0x0f,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS),
++	VULNWL_AMD(0x10,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS),
++	VULNWL_AMD(0x11,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS),
++	VULNWL_AMD(0x12,	NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS),
++
++	/* FAMILY_ANY must be last, otherwise 0x0f - 0x12 matches won't work */
++	VULNWL_AMD(X86_FAMILY_ANY,	NO_MELTDOWN | NO_L1TF | NO_MDS),
++	VULNWL_HYGON(X86_FAMILY_ANY,	NO_MELTDOWN | NO_L1TF | NO_MDS),
+ 	{}
+ };
+ 
+-static const __initconst struct x86_cpu_id cpu_no_meltdown[] = {
+-	{ X86_VENDOR_AMD },
+-	{ X86_VENDOR_HYGON },
+-	{}
+-};
+-
+-/* Only list CPUs which speculate but are non susceptible to SSB */
+-static const __initconst struct x86_cpu_id cpu_no_spec_store_bypass[] = {
+-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_SILVERMONT	},
+-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_AIRMONT		},
+-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_SILVERMONT_X	},
+-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_SILVERMONT_MID	},
+-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_CORE_YONAH		},
+-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_XEON_PHI_KNL		},
+-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_XEON_PHI_KNM		},
+-	{ X86_VENDOR_AMD,	0x12,					},
+-	{ X86_VENDOR_AMD,	0x11,					},
+-	{ X86_VENDOR_AMD,	0x10,					},
+-	{ X86_VENDOR_AMD,	0xf,					},
+-	{}
+-};
++static bool __init cpu_matches(unsigned long which)
++{
++	const struct x86_cpu_id *m = x86_match_cpu(cpu_vuln_whitelist);
+ 
+-static const __initconst struct x86_cpu_id cpu_no_l1tf[] = {
+-	/* in addition to cpu_no_speculation */
+-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_SILVERMONT	},
+-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_SILVERMONT_X	},
+-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_AIRMONT		},
+-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_SILVERMONT_MID	},
+-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_AIRMONT_MID	},
+-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_GOLDMONT	},
+-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_GOLDMONT_X	},
+-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_ATOM_GOLDMONT_PLUS	},
+-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_XEON_PHI_KNL		},
+-	{ X86_VENDOR_INTEL,	6,	INTEL_FAM6_XEON_PHI_KNM		},
+-	{}
+-};
++	return m && !!(m->driver_data & which);
++}
+ 
+ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ {
+ 	u64 ia32_cap = 0;
+ 
+-	if (x86_match_cpu(cpu_no_speculation))
++	if (cpu_matches(NO_SPECULATION))
+ 		return;
+ 
+ 	setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
+@@ -1011,15 +1027,20 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ 	if (cpu_has(c, X86_FEATURE_ARCH_CAPABILITIES))
+ 		rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
+ 
+-	if (!x86_match_cpu(cpu_no_spec_store_bypass) &&
+-	   !(ia32_cap & ARCH_CAP_SSB_NO) &&
++	if (!cpu_matches(NO_SSB) && !(ia32_cap & ARCH_CAP_SSB_NO) &&
+ 	   !cpu_has(c, X86_FEATURE_AMD_SSB_NO))
+ 		setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
+ 
+ 	if (ia32_cap & ARCH_CAP_IBRS_ALL)
+ 		setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);
+ 
+-	if (x86_match_cpu(cpu_no_meltdown))
++	if (!cpu_matches(NO_MDS) && !(ia32_cap & ARCH_CAP_MDS_NO)) {
++		setup_force_cpu_bug(X86_BUG_MDS);
++		if (cpu_matches(MSBDS_ONLY))
++			setup_force_cpu_bug(X86_BUG_MSBDS_ONLY);
++	}
++
++	if (cpu_matches(NO_MELTDOWN))
+ 		return;
+ 
+ 	/* Rogue Data Cache Load? No! */
+@@ -1028,7 +1049,7 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ 
+ 	setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN);
+ 
+-	if (x86_match_cpu(cpu_no_l1tf))
++	if (cpu_matches(NO_L1TF))
+ 		return;
+ 
+ 	setup_force_cpu_bug(X86_BUG_L1TF);
+diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
+index 18bc9b51ac9b..086cf1d1d71d 100644
+--- a/arch/x86/kernel/nmi.c
++++ b/arch/x86/kernel/nmi.c
+@@ -34,6 +34,7 @@
+ #include <asm/x86_init.h>
+ #include <asm/reboot.h>
+ #include <asm/cache.h>
++#include <asm/nospec-branch.h>
+ 
+ #define CREATE_TRACE_POINTS
+ #include <trace/events/nmi.h>
+@@ -533,6 +534,9 @@ nmi_restart:
+ 		write_cr2(this_cpu_read(nmi_cr2));
+ 	if (this_cpu_dec_return(nmi_state))
+ 		goto nmi_restart;
++
++	if (user_mode(regs))
++		mds_user_clear_cpu_buffers();
+ }
+ NOKPROBE_SYMBOL(do_nmi);
+ 
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index 9b7c4ca8f0a7..85fe1870f873 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -58,6 +58,7 @@
+ #include <asm/alternative.h>
+ #include <asm/fpu/xstate.h>
+ #include <asm/trace/mpx.h>
++#include <asm/nospec-branch.h>
+ #include <asm/mpx.h>
+ #include <asm/vm86.h>
+ #include <asm/umip.h>
+@@ -366,6 +367,13 @@ dotraplinkage void do_double_fault(struct pt_regs *regs, long error_code)
+ 		regs->ip = (unsigned long)general_protection;
+ 		regs->sp = (unsigned long)&gpregs->orig_ax;
+ 
++		/*
++		 * This situation can be triggered by userspace via
++		 * modify_ldt(2) and the return does not take the regular
++		 * user space exit, so a CPU buffer clear is required when
++		 * MDS mitigation is enabled.
++		 */
++		mds_user_clear_cpu_buffers();
+ 		return;
+ 	}
+ #endif
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index c07958b59f50..39501e7afdb4 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -410,7 +410,8 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
+ 	/* cpuid 7.0.edx*/
+ 	const u32 kvm_cpuid_7_0_edx_x86_features =
+ 		F(AVX512_4VNNIW) | F(AVX512_4FMAPS) | F(SPEC_CTRL) |
+-		F(SPEC_CTRL_SSBD) | F(ARCH_CAPABILITIES) | F(INTEL_STIBP);
++		F(SPEC_CTRL_SSBD) | F(ARCH_CAPABILITIES) | F(INTEL_STIBP) |
++		F(MD_CLEAR);
+ 
+ 	/* all calls to cpuid_count() should be made on the same cpu */
+ 	get_cpu();
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index da6fdd5434a1..df6e325b288b 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -6356,8 +6356,11 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
+ 	evmcs_rsp = static_branch_unlikely(&enable_evmcs) ?
+ 		(unsigned long)&current_evmcs->host_rsp : 0;
+ 
++	/* L1D Flush includes CPU buffer clear to mitigate MDS */
+ 	if (static_branch_unlikely(&vmx_l1d_should_flush))
+ 		vmx_l1d_flush(vcpu);
++	else if (static_branch_unlikely(&mds_user_clear))
++		mds_clear_cpu_buffers();
+ 
+ 	asm(
+ 		/* Store host registers */
+@@ -6797,8 +6800,8 @@ free_partial_vcpu:
+ 	return ERR_PTR(err);
+ }
+ 
+-#define L1TF_MSG_SMT "L1TF CPU bug present and SMT on, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/l1tf.html for details.\n"
+-#define L1TF_MSG_L1D "L1TF CPU bug present and virtualization mitigation disabled, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/l1tf.html for details.\n"
++#define L1TF_MSG_SMT "L1TF CPU bug present and SMT on, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html for details.\n"
++#define L1TF_MSG_L1D "L1TF CPU bug present and virtualization mitigation disabled, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html for details.\n"
+ 
+ static int vmx_vm_init(struct kvm *kvm)
+ {
+diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
+index 4fee5c3003ed..5890f09bfc19 100644
+--- a/arch/x86/mm/pti.c
++++ b/arch/x86/mm/pti.c
+@@ -35,6 +35,7 @@
+ #include <linux/spinlock.h>
+ #include <linux/mm.h>
+ #include <linux/uaccess.h>
++#include <linux/cpu.h>
+ 
+ #include <asm/cpufeature.h>
+ #include <asm/hypervisor.h>
+@@ -115,7 +116,8 @@ void __init pti_check_boottime_disable(void)
+ 		}
+ 	}
+ 
+-	if (cmdline_find_option_bool(boot_command_line, "nopti")) {
++	if (cmdline_find_option_bool(boot_command_line, "nopti") ||
++	    cpu_mitigations_off()) {
+ 		pti_mode = PTI_FORCE_OFF;
+ 		pti_print_if_insecure("disabled on command line.");
+ 		return;
+diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
+index eb9443d5bae1..2fd6ca1021c2 100644
+--- a/drivers/base/cpu.c
++++ b/drivers/base/cpu.c
+@@ -546,11 +546,18 @@ ssize_t __weak cpu_show_l1tf(struct device *dev,
+ 	return sprintf(buf, "Not affected\n");
+ }
+ 
++ssize_t __weak cpu_show_mds(struct device *dev,
++			    struct device_attribute *attr, char *buf)
++{
++	return sprintf(buf, "Not affected\n");
++}
++
+ static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
+ static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
+ static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
+ static DEVICE_ATTR(spec_store_bypass, 0444, cpu_show_spec_store_bypass, NULL);
+ static DEVICE_ATTR(l1tf, 0444, cpu_show_l1tf, NULL);
++static DEVICE_ATTR(mds, 0444, cpu_show_mds, NULL);
+ 
+ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ 	&dev_attr_meltdown.attr,
+@@ -558,6 +565,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ 	&dev_attr_spectre_v2.attr,
+ 	&dev_attr_spec_store_bypass.attr,
+ 	&dev_attr_l1tf.attr,
++	&dev_attr_mds.attr,
+ 	NULL
+ };
+ 
+diff --git a/include/linux/cpu.h b/include/linux/cpu.h
+index 5041357d0297..57ae83c4d5f4 100644
+--- a/include/linux/cpu.h
++++ b/include/linux/cpu.h
+@@ -57,6 +57,8 @@ extern ssize_t cpu_show_spec_store_bypass(struct device *dev,
+ 					  struct device_attribute *attr, char *buf);
+ extern ssize_t cpu_show_l1tf(struct device *dev,
+ 			     struct device_attribute *attr, char *buf);
++extern ssize_t cpu_show_mds(struct device *dev,
++			    struct device_attribute *attr, char *buf);
+ 
+ extern __printf(4, 5)
+ struct device *cpu_device_create(struct device *parent, void *drvdata,
+@@ -187,4 +189,28 @@ static inline void cpu_smt_disable(bool force) { }
+ static inline void cpu_smt_check_topology(void) { }
+ #endif
+ 
++/*
++ * These are used for a global "mitigations=" cmdline option for toggling
++ * optional CPU mitigations.
++ */
++enum cpu_mitigations {
++	CPU_MITIGATIONS_OFF,
++	CPU_MITIGATIONS_AUTO,
++	CPU_MITIGATIONS_AUTO_NOSMT,
++};
++
++extern enum cpu_mitigations cpu_mitigations;
++
++/* mitigations=off */
++static inline bool cpu_mitigations_off(void)
++{
++	return cpu_mitigations == CPU_MITIGATIONS_OFF;
++}
++
++/* mitigations=auto,nosmt */
++static inline bool cpu_mitigations_auto_nosmt(void)
++{
++	return cpu_mitigations == CPU_MITIGATIONS_AUTO_NOSMT;
++}
++
+ #endif /* _LINUX_CPU_H_ */
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index 6754f3ecfd94..43e741e88691 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -2304,3 +2304,18 @@ void __init boot_cpu_hotplug_init(void)
+ #endif
+ 	this_cpu_write(cpuhp_state.state, CPUHP_ONLINE);
+ }
++
++enum cpu_mitigations cpu_mitigations __ro_after_init = CPU_MITIGATIONS_AUTO;
++
++static int __init mitigations_parse_cmdline(char *arg)
++{
++	if (!strcmp(arg, "off"))
++		cpu_mitigations = CPU_MITIGATIONS_OFF;
++	else if (!strcmp(arg, "auto"))
++		cpu_mitigations = CPU_MITIGATIONS_AUTO;
++	else if (!strcmp(arg, "auto,nosmt"))
++		cpu_mitigations = CPU_MITIGATIONS_AUTO_NOSMT;
++
++	return 0;
++}
++early_param("mitigations", mitigations_parse_cmdline);
+diff --git a/tools/power/x86/turbostat/Makefile b/tools/power/x86/turbostat/Makefile
+index 1598b4fa0b11..045f5f7d68ab 100644
+--- a/tools/power/x86/turbostat/Makefile
++++ b/tools/power/x86/turbostat/Makefile
+@@ -9,7 +9,7 @@ ifeq ("$(origin O)", "command line")
+ endif
+ 
+ turbostat : turbostat.c
+-override CFLAGS +=	-Wall
++override CFLAGS +=	-Wall -I../../../include
+ override CFLAGS +=	-DMSRHEADER='"../../../../arch/x86/include/asm/msr-index.h"'
+ override CFLAGS +=	-DINTEL_FAMILY_HEADER='"../../../../arch/x86/include/asm/intel-family.h"'
+ 
+diff --git a/tools/power/x86/x86_energy_perf_policy/Makefile b/tools/power/x86/x86_energy_perf_policy/Makefile
+index ae7a0e09b722..1fdeef864e7c 100644
+--- a/tools/power/x86/x86_energy_perf_policy/Makefile
++++ b/tools/power/x86/x86_energy_perf_policy/Makefile
+@@ -9,7 +9,7 @@ ifeq ("$(origin O)", "command line")
+ endif
+ 
+ x86_energy_perf_policy : x86_energy_perf_policy.c
+-override CFLAGS +=	-Wall
++override CFLAGS +=	-Wall -I../../../include
+ override CFLAGS +=	-DMSRHEADER='"../../../../arch/x86/include/asm/msr-index.h"'
+ 
+ %: %.c
* [gentoo-commits] proj/linux-patches:5.0 commit in: /
@ 2019-05-16 23:04 Mike Pagano
  0 siblings, 0 replies; 28+ messages in thread
From: Mike Pagano @ 2019-05-16 23:04 UTC (permalink / raw
  To: gentoo-commits

commit:     1b351b52d383de0e7af4080e3983b5dc7e50b032
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu May 16 23:04:35 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu May 16 23:04:35 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=1b351b52

Linux patch 5.0.17

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1016_linux-5.0.17.patch | 5008 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5012 insertions(+)

diff --git a/0000_README b/0000_README
index b19b388..d6075df 100644
--- a/0000_README
+++ b/0000_README
@@ -107,6 +107,10 @@ Patch:  1015_linux-5.0.16.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.0.16
 
+Patch:  1016_linux-5.0.17.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.0.17
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1016_linux-5.0.17.patch b/1016_linux-5.0.17.patch
new file mode 100644
index 0000000..0bdb22b
--- /dev/null
+++ b/1016_linux-5.0.17.patch
@@ -0,0 +1,5008 @@
+diff --git a/Documentation/devicetree/bindings/net/davinci_emac.txt b/Documentation/devicetree/bindings/net/davinci_emac.txt
+index 24c5cdaba8d2..ca83dcc84fb8 100644
+--- a/Documentation/devicetree/bindings/net/davinci_emac.txt
++++ b/Documentation/devicetree/bindings/net/davinci_emac.txt
+@@ -20,6 +20,8 @@ Required properties:
+ Optional properties:
+ - phy-handle: See ethernet.txt file in the same directory.
+               If absent, davinci_emac driver defaults to 100/FULL.
++- nvmem-cells: phandle, reference to an nvmem node for the MAC address
++- nvmem-cell-names: string, should be "mac-address" if nvmem is to be used
+ - ti,davinci-rmii-en: 1 byte, 1 means use RMII
+ - ti,davinci-no-bd-ram: boolean, does EMAC have BD RAM?
+ 
+diff --git a/Documentation/devicetree/bindings/net/ethernet.txt b/Documentation/devicetree/bindings/net/ethernet.txt
+index cfc376bc977a..2974e63ba311 100644
+--- a/Documentation/devicetree/bindings/net/ethernet.txt
++++ b/Documentation/devicetree/bindings/net/ethernet.txt
+@@ -10,8 +10,6 @@ Documentation/devicetree/bindings/phy/phy-bindings.txt.
+   the boot program; should be used in cases where the MAC address assigned to
+   the device by the boot program is different from the "local-mac-address"
+   property;
+-- nvmem-cells: phandle, reference to an nvmem node for the MAC address;
+-- nvmem-cell-names: string, should be "mac-address" if nvmem is to be used;
+ - max-speed: number, specifies maximum speed in Mbit/s supported by the device;
+ - max-frame-size: number, maximum transfer unit (IEEE defined MTU), rather than
+   the maximum frame size (there's contradiction in the Devicetree
+diff --git a/Documentation/devicetree/bindings/net/macb.txt b/Documentation/devicetree/bindings/net/macb.txt
+index 3e17ac1d5d58..1a914116f4c2 100644
+--- a/Documentation/devicetree/bindings/net/macb.txt
++++ b/Documentation/devicetree/bindings/net/macb.txt
+@@ -26,6 +26,10 @@ Required properties:
+ 	Optional elements: 'tsu_clk'
+ - clocks: Phandles to input clocks.
+ 
++Optional properties:
++- nvmem-cells: phandle, reference to an nvmem node for the MAC address
++- nvmem-cell-names: string, should be "mac-address" if nvmem is to be used
++
+ Optional properties for PHY child node:
+ - reset-gpios : Should specify the gpio for phy reset
+ - magic-packet : If present, indicates that the hardware supports waking
+diff --git a/Makefile b/Makefile
+index 95670d520786..6325ac97c7e2 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 0
+-SUBLEVEL = 16
++SUBLEVEL = 17
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
+index e5d56d9b712c..3b353af9c48d 100644
+--- a/arch/arm/Kconfig
++++ b/arch/arm/Kconfig
+@@ -69,7 +69,7 @@ config ARM
+ 	select HAVE_EFFICIENT_UNALIGNED_ACCESS if (CPU_V6 || CPU_V6K || CPU_V7) && MMU
+ 	select HAVE_EXIT_THREAD
+ 	select HAVE_FTRACE_MCOUNT_RECORD if !XIP_KERNEL
+-	select HAVE_FUNCTION_GRAPH_TRACER if !THUMB2_KERNEL
++	select HAVE_FUNCTION_GRAPH_TRACER if !THUMB2_KERNEL && !CC_IS_CLANG
+ 	select HAVE_FUNCTION_TRACER if !XIP_KERNEL
+ 	select HAVE_GCC_PLUGINS
+ 	select HAVE_GENERIC_DMA_COHERENT
+diff --git a/arch/arm/Kconfig.debug b/arch/arm/Kconfig.debug
+index 6d6e0330930b..e388af4594a6 100644
+--- a/arch/arm/Kconfig.debug
++++ b/arch/arm/Kconfig.debug
+@@ -47,8 +47,8 @@ config DEBUG_WX
+ 
+ choice
+ 	prompt "Choose kernel unwinder"
+-	default UNWINDER_ARM if AEABI && !FUNCTION_GRAPH_TRACER
+-	default UNWINDER_FRAME_POINTER if !AEABI || FUNCTION_GRAPH_TRACER
++	default UNWINDER_ARM if AEABI
++	default UNWINDER_FRAME_POINTER if !AEABI
+ 	help
+ 	  This determines which method will be used for unwinding kernel stack
+ 	  traces for panics, oopses, bugs, warnings, perf, /proc/<pid>/stack,
+@@ -65,7 +65,7 @@ config UNWINDER_FRAME_POINTER
+ 
+ config UNWINDER_ARM
+ 	bool "ARM EABI stack unwinder"
+-	depends on AEABI
++	depends on AEABI && !FUNCTION_GRAPH_TRACER
+ 	select ARM_UNWIND
+ 	help
+ 	  This option enables stack unwinding support in the kernel
+diff --git a/arch/arm/kernel/head-nommu.S b/arch/arm/kernel/head-nommu.S
+index ec29de250076..cab89479d15e 100644
+--- a/arch/arm/kernel/head-nommu.S
++++ b/arch/arm/kernel/head-nommu.S
+@@ -133,9 +133,9 @@ __secondary_data:
+  */
+ 	.text
+ __after_proc_init:
+-#ifdef CONFIG_ARM_MPU
+ M_CLASS(movw	r12, #:lower16:BASEADDR_V7M_SCB)
+ M_CLASS(movt	r12, #:upper16:BASEADDR_V7M_SCB)
++#ifdef CONFIG_ARM_MPU
+ M_CLASS(ldr	r3, [r12, 0x50])
+ AR_CLASS(mrc	p15, 0, r3, c0, c1, 4)          @ Read ID_MMFR0
+ 	and	r3, r3, #(MMFR0_PMSA)           @ PMSA field
+diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c
+index 07b298120182..65a51331088e 100644
+--- a/arch/arm64/kernel/ftrace.c
++++ b/arch/arm64/kernel/ftrace.c
+@@ -103,10 +103,15 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
+ 		 * to be revisited if support for multiple ftrace entry points
+ 		 * is added in the future, but for now, the pr_err() below
+ 		 * deals with a theoretical issue only.
++		 *
++		 * Note that PLTs are place relative, and plt_entries_equal()
++		 * checks whether they point to the same target. Here, we need
++		 * to check if the actual opcodes are in fact identical,
++		 * regardless of the offset in memory so use memcmp() instead.
+ 		 */
+ 		trampoline = get_plt_entry(addr, mod->arch.ftrace_trampoline);
+-		if (!plt_entries_equal(mod->arch.ftrace_trampoline,
+-				       &trampoline)) {
++		if (memcmp(mod->arch.ftrace_trampoline, &trampoline,
++			   sizeof(trampoline))) {
+ 			if (plt_entry_is_initialized(mod->arch.ftrace_trampoline)) {
+ 				pr_err("ftrace: far branches to multiple entry points unsupported inside a single module\n");
+ 				return -EINVAL;
+diff --git a/arch/mips/ath79/setup.c b/arch/mips/ath79/setup.c
+index 9728abcb18fa..c04ae685003f 100644
+--- a/arch/mips/ath79/setup.c
++++ b/arch/mips/ath79/setup.c
+@@ -211,12 +211,6 @@ const char *get_system_type(void)
+ 	return ath79_sys_type;
+ }
+ 
+-int get_c0_perfcount_int(void)
+-{
+-	return ATH79_MISC_IRQ(5);
+-}
+-EXPORT_SYMBOL_GPL(get_c0_perfcount_int);
+-
+ unsigned int get_c0_compare_int(void)
+ {
+ 	return CP0_LEGACY_COMPARE_IRQ;
+diff --git a/arch/powerpc/include/asm/book3s/64/pgalloc.h b/arch/powerpc/include/asm/book3s/64/pgalloc.h
+index 9c1173283b96..6c6211a82b9a 100644
+--- a/arch/powerpc/include/asm/book3s/64/pgalloc.h
++++ b/arch/powerpc/include/asm/book3s/64/pgalloc.h
+@@ -81,6 +81,9 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm)
+ 
+ 	pgd = kmem_cache_alloc(PGT_CACHE(PGD_INDEX_SIZE),
+ 			       pgtable_gfp_flags(mm, GFP_KERNEL));
++	if (unlikely(!pgd))
++		return pgd;
++
+ 	/*
+ 	 * Don't scan the PGD for pointers, it contains references to PUDs but
+ 	 * those references are not full pointers and so can't be recognised by
+diff --git a/arch/powerpc/include/asm/reg_booke.h b/arch/powerpc/include/asm/reg_booke.h
+index eb2a33d5df26..e382bd6ede84 100644
+--- a/arch/powerpc/include/asm/reg_booke.h
++++ b/arch/powerpc/include/asm/reg_booke.h
+@@ -41,7 +41,7 @@
+ #if defined(CONFIG_PPC_BOOK3E_64)
+ #define MSR_64BIT	MSR_CM
+ 
+-#define MSR_		(MSR_ME | MSR_CE)
++#define MSR_		(MSR_ME | MSR_RI | MSR_CE)
+ #define MSR_KERNEL	(MSR_ | MSR_64BIT)
+ #define MSR_USER32	(MSR_ | MSR_PR | MSR_EE)
+ #define MSR_USER64	(MSR_USER32 | MSR_64BIT)
+diff --git a/arch/powerpc/kernel/idle_book3s.S b/arch/powerpc/kernel/idle_book3s.S
+index 7f5ac2e8581b..36178000a2f2 100644
+--- a/arch/powerpc/kernel/idle_book3s.S
++++ b/arch/powerpc/kernel/idle_book3s.S
+@@ -170,6 +170,9 @@ core_idle_lock_held:
+ 	bne-	core_idle_lock_held
+ 	blr
+ 
++/* Reuse an unused pt_regs slot for IAMR */
++#define PNV_POWERSAVE_IAMR	_DAR
++
+ /*
+  * Pass requested state in r3:
+  *	r3 - PNV_THREAD_NAP/SLEEP/WINKLE in POWER8
+@@ -200,6 +203,12 @@ pnv_powersave_common:
+ 	/* Continue saving state */
+ 	SAVE_GPR(2, r1)
+ 	SAVE_NVGPRS(r1)
++
++BEGIN_FTR_SECTION
++	mfspr	r5, SPRN_IAMR
++	std	r5, PNV_POWERSAVE_IAMR(r1)
++END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
++
+ 	mfcr	r5
+ 	std	r5,_CCR(r1)
+ 	std	r1,PACAR1(r13)
+@@ -924,6 +933,17 @@ BEGIN_FTR_SECTION
+ END_FTR_SECTION_IFSET(CPU_FTR_HVMODE)
+ 	REST_NVGPRS(r1)
+ 	REST_GPR(2, r1)
++
++BEGIN_FTR_SECTION
++	/* IAMR was saved in pnv_powersave_common() */
++	ld	r5, PNV_POWERSAVE_IAMR(r1)
++	mtspr	SPRN_IAMR, r5
++	/*
++	 * We don't need an isync here because the upcoming mtmsrd is
++	 * execution synchronizing.
++	 */
++END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
++
+ 	ld	r4,PACAKMSR(r13)
+ 	ld	r5,_LINK(r1)
+ 	ld	r6,_CCR(r1)
+diff --git a/arch/x86/include/uapi/asm/vmx.h b/arch/x86/include/uapi/asm/vmx.h
+index f0b0c90dd398..d213ec5c3766 100644
+--- a/arch/x86/include/uapi/asm/vmx.h
++++ b/arch/x86/include/uapi/asm/vmx.h
+@@ -146,6 +146,7 @@
+ 
+ #define VMX_ABORT_SAVE_GUEST_MSR_FAIL        1
+ #define VMX_ABORT_LOAD_HOST_PDPTE_FAIL       2
++#define VMX_ABORT_VMCS_CORRUPTED             3
+ #define VMX_ABORT_LOAD_HOST_MSR_FAIL         4
+ 
+ #endif /* _UAPIVMX_H */
+diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
+index 725624b6c0c0..8fd3cedd9acc 100644
+--- a/arch/x86/kernel/reboot.c
++++ b/arch/x86/kernel/reboot.c
+@@ -81,6 +81,19 @@ static int __init set_bios_reboot(const struct dmi_system_id *d)
+ 	return 0;
+ }
+ 
++/*
++ * Some machines don't handle the default ACPI reboot method and
++ * require the EFI reboot method:
++ */
++static int __init set_efi_reboot(const struct dmi_system_id *d)
++{
++	if (reboot_type != BOOT_EFI && !efi_runtime_disabled()) {
++		reboot_type = BOOT_EFI;
++		pr_info("%s series board detected. Selecting EFI-method for reboot.\n", d->ident);
++	}
++	return 0;
++}
++
+ void __noreturn machine_real_restart(unsigned int type)
+ {
+ 	local_irq_disable();
+@@ -166,6 +179,14 @@ static const struct dmi_system_id reboot_dmi_table[] __initconst = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "AOA110"),
+ 		},
+ 	},
++	{	/* Handle reboot issue on Acer TravelMate X514-51T */
++		.callback = set_efi_reboot,
++		.ident = "Acer TravelMate X514-51T",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "TravelMate X514-51T"),
++		},
++	},
+ 
+ 	/* Apple */
+ 	{	/* Handle problems with rebooting on Apple MacBook5 */
+diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
+index ee3b5c7d662e..c45214c44e61 100644
+--- a/arch/x86/kernel/vmlinux.lds.S
++++ b/arch/x86/kernel/vmlinux.lds.S
+@@ -362,7 +362,7 @@ SECTIONS
+ 	.bss : AT(ADDR(.bss) - LOAD_OFFSET) {
+ 		__bss_start = .;
+ 		*(.bss..page_aligned)
+-		*(.bss)
++		*(BSS_MAIN)
+ 		BSS_DECRYPTED
+ 		. = ALIGN(PAGE_SIZE);
+ 		__bss_stop = .;
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index 3339697de6e5..235687f3388f 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -137,6 +137,7 @@ static inline bool kvm_apic_map_get_logical_dest(struct kvm_apic_map *map,
+ 		if (offset <= max_apic_id) {
+ 			u8 cluster_size = min(max_apic_id - offset + 1, 16U);
+ 
++			offset = array_index_nospec(offset, map->max_apic_id + 1);
+ 			*cluster = &map->phys_map[offset];
+ 			*mask = dest_id & (0xffff >> (16 - cluster_size));
+ 		} else {
+@@ -899,7 +900,8 @@ static inline bool kvm_apic_map_get_dest_lapic(struct kvm *kvm,
+ 		if (irq->dest_id > map->max_apic_id) {
+ 			*bitmap = 0;
+ 		} else {
+-			*dst = &map->phys_map[irq->dest_id];
++			u32 dest_id = array_index_nospec(irq->dest_id, map->max_apic_id + 1);
++			*dst = &map->phys_map[dest_id];
+ 			*bitmap = 1;
+ 		}
+ 		return true;
+diff --git a/arch/x86/kvm/trace.h b/arch/x86/kvm/trace.h
+index 6432d08c7de7..4d47a2631d1f 100644
+--- a/arch/x86/kvm/trace.h
++++ b/arch/x86/kvm/trace.h
+@@ -438,13 +438,13 @@ TRACE_EVENT(kvm_apic_ipi,
+ );
+ 
+ TRACE_EVENT(kvm_apic_accept_irq,
+-	    TP_PROTO(__u32 apicid, __u16 dm, __u8 tm, __u8 vec),
++	    TP_PROTO(__u32 apicid, __u16 dm, __u16 tm, __u8 vec),
+ 	    TP_ARGS(apicid, dm, tm, vec),
+ 
+ 	TP_STRUCT__entry(
+ 		__field(	__u32,		apicid		)
+ 		__field(	__u16,		dm		)
+-		__field(	__u8,		tm		)
++		__field(	__u16,		tm		)
+ 		__field(	__u8,		vec		)
+ 	),
+ 
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 8f8c42b04875..2a16bd887729 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -3790,8 +3790,18 @@ static void nested_vmx_restore_host_state(struct kvm_vcpu *vcpu)
+ 	vmx_set_cr4(vcpu, vmcs_readl(CR4_READ_SHADOW));
+ 
+ 	nested_ept_uninit_mmu_context(vcpu);
+-	vcpu->arch.cr3 = vmcs_readl(GUEST_CR3);
+-	__set_bit(VCPU_EXREG_CR3, (ulong *)&vcpu->arch.regs_avail);
++
++	/*
++	 * This is only valid if EPT is in use, otherwise the vmcs01 GUEST_CR3
++	 * points to shadow pages!  Fortunately we only get here after a WARN_ON
++	 * if EPT is disabled, so a VMabort is perfectly fine.
++	 */
++	if (enable_ept) {
++		vcpu->arch.cr3 = vmcs_readl(GUEST_CR3);
++		__set_bit(VCPU_EXREG_CR3, (ulong *)&vcpu->arch.regs_avail);
++	} else {
++		nested_vmx_abort(vcpu, VMX_ABORT_VMCS_CORRUPTED);
++	}
+ 
+ 	/*
+ 	 * Use ept_save_pdptrs(vcpu) to load the MMU's cached PDPTRs
+@@ -5739,6 +5749,14 @@ __init int nested_vmx_hardware_setup(int (*exit_handlers[])(struct kvm_vcpu *))
+ {
+ 	int i;
+ 
++	/*
++	 * Without EPT it is not possible to restore L1's CR3 and PDPTR on
++	 * VMfail, because they are not available in vmcs01.  Just always
++	 * use hardware checks.
++	 */
++	if (!enable_ept)
++		nested_early_check = 1;
++
+ 	if (!cpu_has_vmx_shadow_vmcs())
+ 		enable_shadow_vmcs = 0;
+ 	if (enable_shadow_vmcs) {
+diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c
+index e3cdc85ce5b6..84304626b1cb 100644
+--- a/arch/x86/mm/dump_pagetables.c
++++ b/arch/x86/mm/dump_pagetables.c
+@@ -259,7 +259,8 @@ static void note_wx(struct pg_state *st)
+ #endif
+ 	/* Account the WX pages */
+ 	st->wx_pages += npages;
+-	WARN_ONCE(1, "x86/mm: Found insecure W+X mapping at address %pS\n",
++	WARN_ONCE(__supported_pte_mask & _PAGE_NX,
++		  "x86/mm: Found insecure W+X mapping at address %pS\n",
+ 		  (void *)st->start_address);
+ }
+ 
+diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
+index 5378d10f1d31..3b76fe954978 100644
+--- a/arch/x86/mm/ioremap.c
++++ b/arch/x86/mm/ioremap.c
+@@ -825,7 +825,7 @@ void __init __early_set_fixmap(enum fixed_addresses idx,
+ 	pte = early_ioremap_pte(addr);
+ 
+ 	/* Sanitize 'prot' against any unsupported bits: */
+-	pgprot_val(flags) &= __default_kernel_pte_mask;
++	pgprot_val(flags) &= __supported_pte_mask;
+ 
+ 	if (pgprot_val(flags))
+ 		set_pte(pte, pfn_pte(phys >> PAGE_SHIFT, flags));
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 72510c470001..356620414cf9 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -5353,7 +5353,7 @@ static unsigned int bfq_update_depths(struct bfq_data *bfqd,
+ 	return min_shallow;
+ }
+ 
+-static int bfq_init_hctx(struct blk_mq_hw_ctx *hctx, unsigned int index)
++static void bfq_depth_updated(struct blk_mq_hw_ctx *hctx)
+ {
+ 	struct bfq_data *bfqd = hctx->queue->elevator->elevator_data;
+ 	struct blk_mq_tags *tags = hctx->sched_tags;
+@@ -5361,6 +5361,11 @@ static int bfq_init_hctx(struct blk_mq_hw_ctx *hctx, unsigned int index)
+ 
+ 	min_shallow = bfq_update_depths(bfqd, &tags->bitmap_tags);
+ 	sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, min_shallow);
++}
++
++static int bfq_init_hctx(struct blk_mq_hw_ctx *hctx, unsigned int index)
++{
++	bfq_depth_updated(hctx);
+ 	return 0;
+ }
+ 
+@@ -5783,6 +5788,7 @@ static struct elevator_type iosched_bfq_mq = {
+ 		.requests_merged	= bfq_requests_merged,
+ 		.request_merged		= bfq_request_merged,
+ 		.has_work		= bfq_has_work,
++		.depth_updated		= bfq_depth_updated,
+ 		.init_hctx		= bfq_init_hctx,
+ 		.init_sched		= bfq_init_queue,
+ 		.exit_sched		= bfq_exit_queue,
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 6930c82ab75f..5b920a82bfe6 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -3131,6 +3131,8 @@ int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr)
+ 		}
+ 		if (ret)
+ 			break;
++		if (q->elevator && q->elevator->type->ops.depth_updated)
++			q->elevator->type->ops.depth_updated(hctx);
+ 	}
+ 
+ 	if (!ret)
+diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
+index 4be4dc3e8aa6..38ec79bb3edd 100644
+--- a/drivers/acpi/nfit/core.c
++++ b/drivers/acpi/nfit/core.c
+@@ -563,6 +563,12 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,
+ 		goto out;
+ 	}
+ 
++	dev_dbg(dev, "%s cmd: %s output length: %d\n", dimm_name,
++			cmd_name, out_obj->buffer.length);
++	print_hex_dump_debug(cmd_name, DUMP_PREFIX_OFFSET, 4, 4,
++			out_obj->buffer.pointer,
++			min_t(u32, 128, out_obj->buffer.length), true);
++
+ 	if (call_pkg) {
+ 		call_pkg->nd_fw_size = out_obj->buffer.length;
+ 		memcpy(call_pkg->nd_payload + call_pkg->nd_size_in,
+@@ -581,12 +587,6 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,
+ 		return 0;
+ 	}
+ 
+-	dev_dbg(dev, "%s cmd: %s output length: %d\n", dimm_name,
+-			cmd_name, out_obj->buffer.length);
+-	print_hex_dump_debug(cmd_name, DUMP_PREFIX_OFFSET, 4, 4,
+-			out_obj->buffer.pointer,
+-			min_t(u32, 128, out_obj->buffer.length), true);
+-
+ 	for (i = 0, offset = 0; i < desc->out_num; i++) {
+ 		u32 out_size = nd_cmd_out_size(nvdimm, cmd, desc, i, buf,
+ 				(u32 *) out_obj->buffer.pointer,
+diff --git a/drivers/char/ipmi/ipmi_si_hardcode.c b/drivers/char/ipmi/ipmi_si_hardcode.c
+index 1e5783961b0d..ab7180c46d8d 100644
+--- a/drivers/char/ipmi/ipmi_si_hardcode.c
++++ b/drivers/char/ipmi/ipmi_si_hardcode.c
+@@ -201,6 +201,8 @@ void __init ipmi_hardcode_init(void)
+ 	char *str;
+ 	char *si_type[SI_MAX_PARMS];
+ 
++	memset(si_type, 0, sizeof(si_type));
++
+ 	/* Parse out the si_type string into its components. */
+ 	str = si_type_str;
+ 	if (*str != '\0') {
+diff --git a/drivers/clocksource/Kconfig b/drivers/clocksource/Kconfig
+index 8dfd3bc448d0..9df90daa9c03 100644
+--- a/drivers/clocksource/Kconfig
++++ b/drivers/clocksource/Kconfig
+@@ -144,6 +144,7 @@ config VT8500_TIMER
+ config NPCM7XX_TIMER
+ 	bool "NPCM7xx timer driver" if COMPILE_TEST
+ 	depends on HAS_IOMEM
++	select TIMER_OF
+ 	select CLKSRC_MMIO
+ 	help
+ 	  Enable 24-bit TIMER0 and TIMER1 counters in the NPCM7xx architecture,
+diff --git a/drivers/clocksource/timer-oxnas-rps.c b/drivers/clocksource/timer-oxnas-rps.c
+index eed6feff8b5f..30c6f4ce672b 100644
+--- a/drivers/clocksource/timer-oxnas-rps.c
++++ b/drivers/clocksource/timer-oxnas-rps.c
+@@ -296,4 +296,4 @@ err_alloc:
+ TIMER_OF_DECLARE(ox810se_rps,
+ 		       "oxsemi,ox810se-rps-timer", oxnas_rps_timer_init);
+ TIMER_OF_DECLARE(ox820_rps,
+-		       "oxsemi,ox820se-rps-timer", oxnas_rps_timer_init);
++		       "oxsemi,ox820-rps-timer", oxnas_rps_timer_init);
+diff --git a/drivers/dma/bcm2835-dma.c b/drivers/dma/bcm2835-dma.c
+index ae10f5614f95..bf5119203637 100644
+--- a/drivers/dma/bcm2835-dma.c
++++ b/drivers/dma/bcm2835-dma.c
+@@ -674,7 +674,7 @@ static struct dma_async_tx_descriptor *bcm2835_dma_prep_slave_sg(
+ 	d = bcm2835_dma_create_cb_chain(chan, direction, false,
+ 					info, extra,
+ 					frames, src, dst, 0, 0,
+-					GFP_KERNEL);
++					GFP_NOWAIT);
+ 	if (!d)
+ 		return NULL;
+ 
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index d1adfdf50fb3..34fbf879411f 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -1379,7 +1379,7 @@ int gpiochip_add_data_with_key(struct gpio_chip *chip, void *data,
+ 
+ 	status = gpiochip_add_irqchip(chip, lock_key, request_key);
+ 	if (status)
+-		goto err_remove_chip;
++		goto err_free_gpiochip_mask;
+ 
+ 	status = of_gpiochip_add(chip);
+ 	if (status)
+@@ -1387,7 +1387,7 @@ int gpiochip_add_data_with_key(struct gpio_chip *chip, void *data,
+ 
+ 	status = gpiochip_init_valid_mask(chip);
+ 	if (status)
+-		goto err_remove_chip;
++		goto err_remove_of_chip;
+ 
+ 	for (i = 0; i < chip->ngpio; i++) {
+ 		struct gpio_desc *desc = &gdev->descs[i];
+@@ -1415,14 +1415,18 @@ int gpiochip_add_data_with_key(struct gpio_chip *chip, void *data,
+ 	if (gpiolib_initialized) {
+ 		status = gpiochip_setup_dev(gdev);
+ 		if (status)
+-			goto err_remove_chip;
++			goto err_remove_acpi_chip;
+ 	}
+ 	return 0;
+ 
+-err_remove_chip:
++err_remove_acpi_chip:
+ 	acpi_gpiochip_remove(chip);
++err_remove_of_chip:
+ 	gpiochip_free_hogs(chip);
+ 	of_gpiochip_remove(chip);
++err_remove_chip:
++	gpiochip_irqchip_remove(chip);
++err_free_gpiochip_mask:
+ 	gpiochip_free_valid_mask(chip);
+ err_remove_irqchip_mask:
+ 	gpiochip_irqchip_free_valid_mask(chip);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index d55dd570a702..27baac26d8e9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -3150,6 +3150,7 @@ static int amdgpu_device_recover_vram(struct amdgpu_device *adev)
+ 
+ 		/* No need to recover an evicted BO */
+ 		if (shadow->tbo.mem.mem_type != TTM_PL_TT ||
++		    shadow->tbo.mem.start == AMDGPU_BO_INVALID_OFFSET ||
+ 		    shadow->parent->tbo.mem.mem_type != TTM_PL_VRAM)
+ 			continue;
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 1f92e7e8e3d3..5af2ea1f201d 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -1308,6 +1308,11 @@ static enum surface_update_type det_surface_update(const struct dc *dc,
+ 		return UPDATE_TYPE_FULL;
+ 	}
+ 
++	if (u->surface->force_full_update) {
++		update_flags->bits.full_update = 1;
++		return UPDATE_TYPE_FULL;
++	}
++
+ 	type = get_plane_info_update_type(u);
+ 	elevate_update_type(&overall_type, type);
+ 
+@@ -1637,6 +1642,14 @@ void dc_commit_updates_for_stream(struct dc *dc,
+ 		}
+ 
+ 		dc_resource_state_copy_construct(state, context);
++
++		for (i = 0; i < dc->res_pool->pipe_count; i++) {
++			struct pipe_ctx *new_pipe = &context->res_ctx.pipe_ctx[i];
++			struct pipe_ctx *old_pipe = &dc->current_state->res_ctx.pipe_ctx[i];
++
++			if (new_pipe->plane_state && new_pipe->plane_state != old_pipe->plane_state)
++				new_pipe->plane_state->force_full_update = true;
++		}
+ 	}
+ 
+ 
+@@ -1680,6 +1693,12 @@ void dc_commit_updates_for_stream(struct dc *dc,
+ 		dc->current_state = context;
+ 		dc_release_state(old);
+ 
++		for (i = 0; i < dc->res_pool->pipe_count; i++) {
++			struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[i];
++
++			if (pipe_ctx->plane_state && pipe_ctx->stream == stream)
++				pipe_ctx->plane_state->force_full_update = false;
++		}
+ 	}
+ 	/*let's use current_state to update watermark etc*/
+ 	if (update_type >= UPDATE_TYPE_FULL)
+diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
+index 4b5bbb13ce7f..7d5656d7e460 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc.h
++++ b/drivers/gpu/drm/amd/display/dc/dc.h
+@@ -496,6 +496,9 @@ struct dc_plane_state {
+ 	struct dc_plane_status status;
+ 	struct dc_context *ctx;
+ 
++	/* HACK: Workaround for forcing full reprogramming under some conditions */
++	bool force_full_update;
++
+ 	/* private to dc_surface.c */
+ 	enum dc_irq_source irq_source;
+ 	struct kref refcount;
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_aux.c b/drivers/gpu/drm/amd/display/dc/dce/dce_aux.c
+index aaeb7faac0c4..e0fff5744b5f 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_aux.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_aux.c
+@@ -189,6 +189,12 @@ static void submit_channel_request(
+ 				1,
+ 				0);
+ 	}
++
++	REG_UPDATE(AUX_INTERRUPT_CONTROL, AUX_SW_DONE_ACK, 1);
++
++	REG_WAIT(AUX_SW_STATUS, AUX_SW_DONE, 0,
++				10, aux110->timeout_period/10);
++
+ 	/* set the delay and the number of bytes to write */
+ 
+ 	/* The length include
+@@ -241,9 +247,6 @@ static void submit_channel_request(
+ 		}
+ 	}
+ 
+-	REG_UPDATE(AUX_INTERRUPT_CONTROL, AUX_SW_DONE_ACK, 1);
+-	REG_WAIT(AUX_SW_STATUS, AUX_SW_DONE, 0,
+-				10, aux110->timeout_period/10);
+ 	REG_UPDATE(AUX_SW_CONTROL, AUX_SW_GO, 1);
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_aux.h b/drivers/gpu/drm/amd/display/dc/dce/dce_aux.h
+index f7caab85dc80..2c6f50b4245a 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_aux.h
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_aux.h
+@@ -69,11 +69,11 @@ enum {	/* This is the timeout as defined in DP 1.2a,
+ 	 * at most within ~240usec. That means,
+ 	 * increasing this timeout will not affect normal operation,
+ 	 * and we'll timeout after
+-	 * SW_AUX_TIMEOUT_PERIOD_MULTIPLIER * AUX_TIMEOUT_PERIOD = 1600usec.
++	 * SW_AUX_TIMEOUT_PERIOD_MULTIPLIER * AUX_TIMEOUT_PERIOD = 2400usec.
+ 	 * This timeout is especially important for
+-	 * resume from S3 and CTS.
++	 * converters, resume from S3, and CTS.
+ 	 */
+-	SW_AUX_TIMEOUT_PERIOD_MULTIPLIER = 4
++	SW_AUX_TIMEOUT_PERIOD_MULTIPLIER = 6
+ };
+ struct aux_engine_dce110 {
+ 	struct aux_engine base;
+diff --git a/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c b/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
+index 64c3cf027518..14223c0ee784 100644
+--- a/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
++++ b/drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
+@@ -1655,6 +1655,8 @@ static void dw_hdmi_clear_overflow(struct dw_hdmi *hdmi)
+ 	 * iteration for others.
+ 	 * The Amlogic Meson GX SoCs (v2.01a) have been identified as needing
+ 	 * the workaround with a single iteration.
++	 * The Rockchip RK3288 SoC (v2.00a) and RK3328/RK3399 SoCs (v2.11a) have
++	 * been identified as needing the workaround with a single iteration.
+ 	 */
+ 
+ 	switch (hdmi->version) {
+@@ -1663,7 +1665,9 @@ static void dw_hdmi_clear_overflow(struct dw_hdmi *hdmi)
+ 		break;
+ 	case 0x131a:
+ 	case 0x132a:
++	case 0x200a:
+ 	case 0x201a:
++	case 0x211a:
+ 	case 0x212a:
+ 		count = 1;
+ 		break;
+diff --git a/drivers/gpu/drm/imx/ipuv3-crtc.c b/drivers/gpu/drm/imx/ipuv3-crtc.c
+index 058b53c0aa7e..1bb3e598cb84 100644
+--- a/drivers/gpu/drm/imx/ipuv3-crtc.c
++++ b/drivers/gpu/drm/imx/ipuv3-crtc.c
+@@ -70,7 +70,7 @@ static void ipu_crtc_disable_planes(struct ipu_crtc *ipu_crtc,
+ 	if (disable_partial)
+ 		ipu_plane_disable(ipu_crtc->plane[1], true);
+ 	if (disable_full)
+-		ipu_plane_disable(ipu_crtc->plane[0], false);
++		ipu_plane_disable(ipu_crtc->plane[0], true);
+ }
+ 
+ static void ipu_crtc_atomic_disable(struct drm_crtc *crtc,
+diff --git a/drivers/gpu/drm/rockchip/cdn-dp-reg.c b/drivers/gpu/drm/rockchip/cdn-dp-reg.c
+index 5a485489a1e2..6c8b14fb1d2f 100644
+--- a/drivers/gpu/drm/rockchip/cdn-dp-reg.c
++++ b/drivers/gpu/drm/rockchip/cdn-dp-reg.c
+@@ -113,7 +113,7 @@ static int cdp_dp_mailbox_write(struct cdn_dp_device *dp, u8 val)
+ 
+ static int cdn_dp_mailbox_validate_receive(struct cdn_dp_device *dp,
+ 					   u8 module_id, u8 opcode,
+-					   u8 req_size)
++					   u16 req_size)
+ {
+ 	u32 mbox_size, i;
+ 	u8 header[4];
+diff --git a/drivers/gpu/drm/sun4i/sun4i_drv.c b/drivers/gpu/drm/sun4i/sun4i_drv.c
+index 9e4c375ccc96..f8bf5bbec2df 100644
+--- a/drivers/gpu/drm/sun4i/sun4i_drv.c
++++ b/drivers/gpu/drm/sun4i/sun4i_drv.c
+@@ -85,6 +85,8 @@ static int sun4i_drv_bind(struct device *dev)
+ 		ret = -ENOMEM;
+ 		goto free_drm;
+ 	}
++
++	dev_set_drvdata(dev, drm);
+ 	drm->dev_private = drv;
+ 	INIT_LIST_HEAD(&drv->frontend_list);
+ 	INIT_LIST_HEAD(&drv->engine_list);
+@@ -144,7 +146,10 @@ static void sun4i_drv_unbind(struct device *dev)
+ 	drm_dev_unregister(drm);
+ 	drm_kms_helper_poll_fini(drm);
+ 	drm_mode_config_cleanup(drm);
++
++	component_unbind_all(dev, NULL);
+ 	of_reserved_mem_device_release(dev);
++
+ 	drm_dev_put(drm);
+ }
+ 
+@@ -393,6 +398,8 @@ static int sun4i_drv_probe(struct platform_device *pdev)
+ 
+ static int sun4i_drv_remove(struct platform_device *pdev)
+ {
++	component_master_del(&pdev->dev, &sun4i_drv_master_ops);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
+index 996cadd83f24..d8e1b3f12904 100644
+--- a/drivers/gpu/drm/ttm/ttm_bo.c
++++ b/drivers/gpu/drm/ttm/ttm_bo.c
+@@ -881,8 +881,10 @@ static int ttm_bo_add_move_fence(struct ttm_buffer_object *bo,
+ 		reservation_object_add_shared_fence(bo->resv, fence);
+ 
+ 		ret = reservation_object_reserve_shared(bo->resv, 1);
+-		if (unlikely(ret))
++		if (unlikely(ret)) {
++			dma_fence_put(fence);
+ 			return ret;
++		}
+ 
+ 		dma_fence_put(bo->moving);
+ 		bo->moving = fence;
+diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.c b/drivers/gpu/drm/virtio/virtgpu_drv.c
+index 2d1aaca49105..f7f32a885af7 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_drv.c
++++ b/drivers/gpu/drm/virtio/virtgpu_drv.c
+@@ -127,10 +127,14 @@ static struct drm_driver driver = {
+ #if defined(CONFIG_DEBUG_FS)
+ 	.debugfs_init = virtio_gpu_debugfs_init,
+ #endif
++	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
++	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
+ 	.gem_prime_export = drm_gem_prime_export,
+ 	.gem_prime_import = drm_gem_prime_import,
+ 	.gem_prime_pin = virtgpu_gem_prime_pin,
+ 	.gem_prime_unpin = virtgpu_gem_prime_unpin,
++	.gem_prime_get_sg_table = virtgpu_gem_prime_get_sg_table,
++	.gem_prime_import_sg_table = virtgpu_gem_prime_import_sg_table,
+ 	.gem_prime_vmap = virtgpu_gem_prime_vmap,
+ 	.gem_prime_vunmap = virtgpu_gem_prime_vunmap,
+ 	.gem_prime_mmap = virtgpu_gem_prime_mmap,
+diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
+index 0c15000f926e..1deb41d42ea4 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
++++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
+@@ -372,6 +372,10 @@ int virtio_gpu_object_wait(struct virtio_gpu_object *bo, bool no_wait);
+ /* virtgpu_prime.c */
+ int virtgpu_gem_prime_pin(struct drm_gem_object *obj);
+ void virtgpu_gem_prime_unpin(struct drm_gem_object *obj);
++struct sg_table *virtgpu_gem_prime_get_sg_table(struct drm_gem_object *obj);
++struct drm_gem_object *virtgpu_gem_prime_import_sg_table(
++	struct drm_device *dev, struct dma_buf_attachment *attach,
++	struct sg_table *sgt);
+ void *virtgpu_gem_prime_vmap(struct drm_gem_object *obj);
+ void virtgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
+ int virtgpu_gem_prime_mmap(struct drm_gem_object *obj,
+diff --git a/drivers/gpu/drm/virtio/virtgpu_prime.c b/drivers/gpu/drm/virtio/virtgpu_prime.c
+index c59ec34c80a5..eb51a78e1199 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_prime.c
++++ b/drivers/gpu/drm/virtio/virtgpu_prime.c
+@@ -39,6 +39,18 @@ void virtgpu_gem_prime_unpin(struct drm_gem_object *obj)
+ 	WARN_ONCE(1, "not implemented");
+ }
+ 
++struct sg_table *virtgpu_gem_prime_get_sg_table(struct drm_gem_object *obj)
++{
++	return ERR_PTR(-ENODEV);
++}
++
++struct drm_gem_object *virtgpu_gem_prime_import_sg_table(
++	struct drm_device *dev, struct dma_buf_attachment *attach,
++	struct sg_table *table)
++{
++	return ERR_PTR(-ENODEV);
++}
++
+ void *virtgpu_gem_prime_vmap(struct drm_gem_object *obj)
+ {
+ 	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
+diff --git a/drivers/gpu/ipu-v3/ipu-dp.c b/drivers/gpu/ipu-v3/ipu-dp.c
+index 9b2b3fa479c4..5e44ff1f2085 100644
+--- a/drivers/gpu/ipu-v3/ipu-dp.c
++++ b/drivers/gpu/ipu-v3/ipu-dp.c
+@@ -195,7 +195,8 @@ int ipu_dp_setup_channel(struct ipu_dp *dp,
+ 		ipu_dp_csc_init(flow, flow->foreground.in_cs, flow->out_cs,
+ 				DP_COM_CONF_CSC_DEF_BOTH);
+ 	} else {
+-		if (flow->foreground.in_cs == flow->out_cs)
++		if (flow->foreground.in_cs == IPUV3_COLORSPACE_UNKNOWN ||
++		    flow->foreground.in_cs == flow->out_cs)
+ 			/*
+ 			 * foreground identical to output, apply color
+ 			 * conversion on background
+@@ -261,6 +262,8 @@ void ipu_dp_disable_channel(struct ipu_dp *dp, bool sync)
+ 	struct ipu_dp_priv *priv = flow->priv;
+ 	u32 reg, csc;
+ 
++	dp->in_cs = IPUV3_COLORSPACE_UNKNOWN;
++
+ 	if (!dp->foreground)
+ 		return;
+ 
+@@ -268,8 +271,9 @@ void ipu_dp_disable_channel(struct ipu_dp *dp, bool sync)
+ 
+ 	reg = readl(flow->base + DP_COM_CONF);
+ 	csc = reg & DP_COM_CONF_CSC_DEF_MASK;
+-	if (csc == DP_COM_CONF_CSC_DEF_FG)
+-		reg &= ~DP_COM_CONF_CSC_DEF_MASK;
++	reg &= ~DP_COM_CONF_CSC_DEF_MASK;
++	if (csc == DP_COM_CONF_CSC_DEF_BOTH || csc == DP_COM_CONF_CSC_DEF_BG)
++		reg |= DP_COM_CONF_CSC_DEF_BG;
+ 
+ 	reg &= ~DP_COM_CONF_FG_EN;
+ 	writel(reg, flow->base + DP_COM_CONF);
+@@ -347,6 +351,8 @@ int ipu_dp_init(struct ipu_soc *ipu, struct device *dev, unsigned long base)
+ 	mutex_init(&priv->mutex);
+ 
+ 	for (i = 0; i < IPUV3_NUM_FLOWS; i++) {
++		priv->flow[i].background.in_cs = IPUV3_COLORSPACE_UNKNOWN;
++		priv->flow[i].foreground.in_cs = IPUV3_COLORSPACE_UNKNOWN;
+ 		priv->flow[i].foreground.foreground = true;
+ 		priv->flow[i].base = priv->base + ipu_dp_flow_base[i];
+ 		priv->flow[i].priv = priv;
+diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
+index ff92a7b2fc89..4f119300ce3f 100644
+--- a/drivers/hid/hid-input.c
++++ b/drivers/hid/hid-input.c
+@@ -677,6 +677,14 @@ static void hidinput_configure_usage(struct hid_input *hidinput, struct hid_fiel
+ 			break;
+ 		}
+ 
++		if ((usage->hid & 0xf0) == 0xb0) {	/* SC - Display */
++			switch (usage->hid & 0xf) {
++			case 0x05: map_key_clear(KEY_SWITCHVIDEOMODE); break;
++			default: goto ignore;
++			}
++			break;
++		}
++
+ 		/*
+ 		 * Some lazy vendors declare 255 usages for System Control,
+ 		 * leading to the creation of ABS_X|Y axis and too many others.
+@@ -908,6 +916,10 @@ static void hidinput_configure_usage(struct hid_input *hidinput, struct hid_fiel
+ 		case 0x074: map_key_clear(KEY_BRIGHTNESS_MAX);		break;
+ 		case 0x075: map_key_clear(KEY_BRIGHTNESS_AUTO);		break;
+ 
++		case 0x079: map_key_clear(KEY_KBDILLUMUP);	break;
++		case 0x07a: map_key_clear(KEY_KBDILLUMDOWN);	break;
++		case 0x07c: map_key_clear(KEY_KBDILLUMTOGGLE);	break;
++
+ 		case 0x082: map_key_clear(KEY_VIDEO_NEXT);	break;
+ 		case 0x083: map_key_clear(KEY_LAST);		break;
+ 		case 0x084: map_key_clear(KEY_ENTER);		break;
+@@ -1042,6 +1054,8 @@ static void hidinput_configure_usage(struct hid_input *hidinput, struct hid_fiel
+ 		case 0x2cb: map_key_clear(KEY_KBDINPUTASSIST_ACCEPT);	break;
+ 		case 0x2cc: map_key_clear(KEY_KBDINPUTASSIST_CANCEL);	break;
+ 
++		case 0x29f: map_key_clear(KEY_SCALE);		break;
++
+ 		default: map_key_clear(KEY_UNKNOWN);
+ 		}
+ 		break;
+diff --git a/drivers/hwmon/occ/sysfs.c b/drivers/hwmon/occ/sysfs.c
+index 743b26ec8e54..f04f502b2e69 100644
+--- a/drivers/hwmon/occ/sysfs.c
++++ b/drivers/hwmon/occ/sysfs.c
+@@ -51,16 +51,16 @@ static ssize_t occ_sysfs_show(struct device *dev,
+ 		val = !!(header->status & OCC_STAT_ACTIVE);
+ 		break;
+ 	case 2:
+-		val = !!(header->status & OCC_EXT_STAT_DVFS_OT);
++		val = !!(header->ext_status & OCC_EXT_STAT_DVFS_OT);
+ 		break;
+ 	case 3:
+-		val = !!(header->status & OCC_EXT_STAT_DVFS_POWER);
++		val = !!(header->ext_status & OCC_EXT_STAT_DVFS_POWER);
+ 		break;
+ 	case 4:
+-		val = !!(header->status & OCC_EXT_STAT_MEM_THROTTLE);
++		val = !!(header->ext_status & OCC_EXT_STAT_MEM_THROTTLE);
+ 		break;
+ 	case 5:
+-		val = !!(header->status & OCC_EXT_STAT_QUICK_DROP);
++		val = !!(header->ext_status & OCC_EXT_STAT_QUICK_DROP);
+ 		break;
+ 	case 6:
+ 		val = header->occ_state;
+diff --git a/drivers/hwmon/pwm-fan.c b/drivers/hwmon/pwm-fan.c
+index 2c944825026f..af34f7ee1b04 100644
+--- a/drivers/hwmon/pwm-fan.c
++++ b/drivers/hwmon/pwm-fan.c
+@@ -254,7 +254,7 @@ static int pwm_fan_probe(struct platform_device *pdev)
+ 
+ 	ret = pwm_fan_of_get_cooling_data(&pdev->dev, ctx);
+ 	if (ret)
+-		return ret;
++		goto err_pwm_disable;
+ 
+ 	ctx->pwm_fan_state = ctx->pwm_fan_max_state;
+ 	if (IS_ENABLED(CONFIG_THERMAL)) {
+diff --git a/drivers/iio/adc/xilinx-xadc-core.c b/drivers/iio/adc/xilinx-xadc-core.c
+index 3f6be5ac049a..1ae86e7359f7 100644
+--- a/drivers/iio/adc/xilinx-xadc-core.c
++++ b/drivers/iio/adc/xilinx-xadc-core.c
+@@ -1290,6 +1290,7 @@ static int xadc_probe(struct platform_device *pdev)
+ 
+ err_free_irq:
+ 	free_irq(xadc->irq, indio_dev);
++	cancel_delayed_work_sync(&xadc->zynq_unmask_work);
+ err_clk_disable_unprepare:
+ 	clk_disable_unprepare(xadc->clk);
+ err_free_samplerate_trigger:
+@@ -1319,8 +1320,8 @@ static int xadc_remove(struct platform_device *pdev)
+ 		iio_triggered_buffer_cleanup(indio_dev);
+ 	}
+ 	free_irq(xadc->irq, indio_dev);
++	cancel_delayed_work_sync(&xadc->zynq_unmask_work);
+ 	clk_disable_unprepare(xadc->clk);
+-	cancel_delayed_work(&xadc->zynq_unmask_work);
+ 	kfree(xadc->data);
+ 	kfree(indio_dev->channels);
+ 
+diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
+index 54031c5b53fa..89dd2380fc81 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
++++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
+@@ -517,7 +517,7 @@ static int hns_roce_set_kernel_sq_size(struct hns_roce_dev *hr_dev,
+ 
+ static int hns_roce_qp_has_sq(struct ib_qp_init_attr *attr)
+ {
+-	if (attr->qp_type == IB_QPT_XRC_TGT)
++	if (attr->qp_type == IB_QPT_XRC_TGT || !attr->cap.max_send_wr)
+ 		return 0;
+ 
+ 	return 1;
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index 497181f5ba09..c6bdd0d16c4b 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -1025,6 +1025,8 @@ static int mlx5_ib_query_device(struct ib_device *ibdev,
+ 		if (MLX5_CAP_GEN(mdev, qp_packet_based))
+ 			resp.flags |=
+ 				MLX5_IB_QUERY_DEV_RESP_PACKET_BASED_CREDIT_MODE;
++
++		resp.flags |= MLX5_IB_QUERY_DEV_RESP_FLAGS_SCAT2CQE_DCT;
+ 	}
+ 
+ 	if (field_avail(typeof(resp), sw_parsing_caps,
+diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
+index 7db778d96ef5..afc88e6e172e 100644
+--- a/drivers/infiniband/hw/mlx5/qp.c
++++ b/drivers/infiniband/hw/mlx5/qp.c
+@@ -1724,13 +1724,16 @@ static void configure_responder_scat_cqe(struct ib_qp_init_attr *init_attr,
+ 
+ 	rcqe_sz = mlx5_ib_get_cqe_size(init_attr->recv_cq);
+ 
+-	if (rcqe_sz == 128) {
+-		MLX5_SET(qpc, qpc, cs_res, MLX5_RES_SCAT_DATA64_CQE);
++	if (init_attr->qp_type == MLX5_IB_QPT_DCT) {
++		if (rcqe_sz == 128)
++			MLX5_SET(dctc, qpc, cs_res, MLX5_RES_SCAT_DATA64_CQE);
++
+ 		return;
+ 	}
+ 
+-	if (init_attr->qp_type != MLX5_IB_QPT_DCT)
+-		MLX5_SET(qpc, qpc, cs_res, MLX5_RES_SCAT_DATA32_CQE);
++	MLX5_SET(qpc, qpc, cs_res,
++		 rcqe_sz == 128 ? MLX5_RES_SCAT_DATA64_CQE :
++				  MLX5_RES_SCAT_DATA32_CQE);
+ }
+ 
+ static void configure_requester_scat_cqe(struct mlx5_ib_dev *dev,
+diff --git a/drivers/input/keyboard/Kconfig b/drivers/input/keyboard/Kconfig
+index a878351f1643..52d7f55fca32 100644
+--- a/drivers/input/keyboard/Kconfig
++++ b/drivers/input/keyboard/Kconfig
+@@ -420,7 +420,7 @@ config KEYBOARD_MPR121
+ 
+ config KEYBOARD_SNVS_PWRKEY
+ 	tristate "IMX SNVS Power Key Driver"
+-	depends on SOC_IMX6SX || SOC_IMX7D
++	depends on ARCH_MXC || COMPILE_TEST
+ 	depends on OF
+ 	help
+ 	  This is the snvs powerkey driver for the Freescale i.MX application
+diff --git a/drivers/input/rmi4/rmi_driver.c b/drivers/input/rmi4/rmi_driver.c
+index fc3ab93b7aea..7fb358f96195 100644
+--- a/drivers/input/rmi4/rmi_driver.c
++++ b/drivers/input/rmi4/rmi_driver.c
+@@ -860,7 +860,7 @@ static int rmi_create_function(struct rmi_device *rmi_dev,
+ 
+ 	error = rmi_register_function(fn);
+ 	if (error)
+-		goto err_put_fn;
++		return error;
+ 
+ 	if (pdt->function_number == 0x01)
+ 		data->f01_container = fn;
+@@ -870,10 +870,6 @@ static int rmi_create_function(struct rmi_device *rmi_dev,
+ 	list_add_tail(&fn->node, &data->function_list);
+ 
+ 	return RMI_SCAN_CONTINUE;
+-
+-err_put_fn:
+-	put_device(&fn->dev);
+-	return error;
+ }
+ 
+ void rmi_enable_irq(struct rmi_device *rmi_dev, bool clear_wake)
+diff --git a/drivers/irqchip/irq-ath79-misc.c b/drivers/irqchip/irq-ath79-misc.c
+index aa7290784636..0390603170b4 100644
+--- a/drivers/irqchip/irq-ath79-misc.c
++++ b/drivers/irqchip/irq-ath79-misc.c
+@@ -22,6 +22,15 @@
+ #define AR71XX_RESET_REG_MISC_INT_ENABLE	4
+ 
+ #define ATH79_MISC_IRQ_COUNT			32
++#define ATH79_MISC_PERF_IRQ			5
++
++static int ath79_perfcount_irq;
++
++int get_c0_perfcount_int(void)
++{
++	return ath79_perfcount_irq;
++}
++EXPORT_SYMBOL_GPL(get_c0_perfcount_int);
+ 
+ static void ath79_misc_irq_handler(struct irq_desc *desc)
+ {
+@@ -113,6 +122,8 @@ static void __init ath79_misc_intc_domain_init(
+ {
+ 	void __iomem *base = domain->host_data;
+ 
++	ath79_perfcount_irq = irq_create_mapping(domain, ATH79_MISC_PERF_IRQ);
++
+ 	/* Disable and clear all interrupts */
+ 	__raw_writel(0, base + AR71XX_RESET_REG_MISC_INT_ENABLE);
+ 	__raw_writel(0, base + AR71XX_RESET_REG_MISC_INT_STATUS);
+diff --git a/drivers/isdn/gigaset/bas-gigaset.c b/drivers/isdn/gigaset/bas-gigaset.c
+index ecdeb89645d0..149b1aca52a2 100644
+--- a/drivers/isdn/gigaset/bas-gigaset.c
++++ b/drivers/isdn/gigaset/bas-gigaset.c
+@@ -958,6 +958,7 @@ static void write_iso_callback(struct urb *urb)
+  */
+ static int starturbs(struct bc_state *bcs)
+ {
++	struct usb_device *udev = bcs->cs->hw.bas->udev;
+ 	struct bas_bc_state *ubc = bcs->hw.bas;
+ 	struct urb *urb;
+ 	int j, k;
+@@ -975,8 +976,8 @@ static int starturbs(struct bc_state *bcs)
+ 			rc = -EFAULT;
+ 			goto error;
+ 		}
+-		usb_fill_int_urb(urb, bcs->cs->hw.bas->udev,
+-				 usb_rcvisocpipe(urb->dev, 3 + 2 * bcs->channel),
++		usb_fill_int_urb(urb, udev,
++				 usb_rcvisocpipe(udev, 3 + 2 * bcs->channel),
+ 				 ubc->isoinbuf + k * BAS_INBUFSIZE,
+ 				 BAS_INBUFSIZE, read_iso_callback, bcs,
+ 				 BAS_FRAMETIME);
+@@ -1006,8 +1007,8 @@ static int starturbs(struct bc_state *bcs)
+ 			rc = -EFAULT;
+ 			goto error;
+ 		}
+-		usb_fill_int_urb(urb, bcs->cs->hw.bas->udev,
+-				 usb_sndisocpipe(urb->dev, 4 + 2 * bcs->channel),
++		usb_fill_int_urb(urb, udev,
++				 usb_sndisocpipe(udev, 4 + 2 * bcs->channel),
+ 				 ubc->isooutbuf->data,
+ 				 sizeof(ubc->isooutbuf->data),
+ 				 write_iso_callback, &ubc->isoouturbs[k],
+diff --git a/drivers/isdn/mISDN/socket.c b/drivers/isdn/mISDN/socket.c
+index 15d3ca37669a..04da3a17cd95 100644
+--- a/drivers/isdn/mISDN/socket.c
++++ b/drivers/isdn/mISDN/socket.c
+@@ -710,10 +710,10 @@ base_sock_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
+ 	struct sock *sk = sock->sk;
+ 	int err = 0;
+ 
+-	if (!maddr || maddr->family != AF_ISDN)
++	if (addr_len < sizeof(struct sockaddr_mISDN))
+ 		return -EINVAL;
+ 
+-	if (addr_len < sizeof(struct sockaddr_mISDN))
++	if (!maddr || maddr->family != AF_ISDN)
+ 		return -EINVAL;
+ 
+ 	lock_sock(sk);
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index 5b68f2d0da60..3ae13c06b200 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -4233,26 +4233,15 @@ static void handle_parity_checks6(struct r5conf *conf, struct stripe_head *sh,
+ 	case check_state_check_result:
+ 		sh->check_state = check_state_idle;
+ 
++		if (s->failed > 1)
++			break;
+ 		/* handle a successful check operation, if parity is correct
+ 		 * we are done.  Otherwise update the mismatch count and repair
+ 		 * parity if !MD_RECOVERY_CHECK
+ 		 */
+ 		if (sh->ops.zero_sum_result == 0) {
+-			/* both parities are correct */
+-			if (!s->failed)
+-				set_bit(STRIPE_INSYNC, &sh->state);
+-			else {
+-				/* in contrast to the raid5 case we can validate
+-				 * parity, but still have a failure to write
+-				 * back
+-				 */
+-				sh->check_state = check_state_compute_result;
+-				/* Returning at this point means that we may go
+-				 * off and bring p and/or q uptodate again so
+-				 * we make sure to check zero_sum_result again
+-				 * to verify if p or q need writeback
+-				 */
+-			}
++			/* Any parity checked was correct */
++			set_bit(STRIPE_INSYNC, &sh->state);
+ 		} else {
+ 			atomic64_add(STRIPE_SECTORS, &conf->mddev->resync_mismatches);
+ 			if (test_bit(MD_RECOVERY_CHECK, &conf->mddev->recovery)) {
+diff --git a/drivers/net/bonding/bond_options.c b/drivers/net/bonding/bond_options.c
+index 4d5d01cb8141..80867bd8f44c 100644
+--- a/drivers/net/bonding/bond_options.c
++++ b/drivers/net/bonding/bond_options.c
+@@ -1098,13 +1098,6 @@ static int bond_option_arp_validate_set(struct bonding *bond,
+ {
+ 	netdev_dbg(bond->dev, "Setting arp_validate to %s (%llu)\n",
+ 		   newval->string, newval->value);
+-
+-	if (bond->dev->flags & IFF_UP) {
+-		if (!newval->value)
+-			bond->recv_probe = NULL;
+-		else if (bond->params.arp_interval)
+-			bond->recv_probe = bond_arp_rcv;
+-	}
+ 	bond->params.arp_validate = newval->value;
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index 6cbe515bfdeb..8a57888e9765 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -2414,12 +2414,12 @@ static int macb_open(struct net_device *dev)
+ 		return err;
+ 	}
+ 
+-	bp->macbgem_ops.mog_init_rings(bp);
+-	macb_init_hw(bp);
+-
+ 	for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue)
+ 		napi_enable(&queue->napi);
+ 
++	bp->macbgem_ops.mog_init_rings(bp);
++	macb_init_hw(bp);
++
+ 	/* schedule a link state check */
+ 	phy_start(dev->phydev);
+ 
+diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+index dfebc30c4841..d3f2408dc9e8 100644
+--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
++++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+@@ -1648,7 +1648,7 @@ static struct sk_buff *dpaa_cleanup_tx_fd(const struct dpaa_priv *priv,
+ 				 qm_sg_entry_get_len(&sgt[0]), dma_dir);
+ 
+ 		/* remaining pages were mapped with skb_frag_dma_map() */
+-		for (i = 1; i < nr_frags; i++) {
++		for (i = 1; i <= nr_frags; i++) {
+ 			WARN_ON(qm_sg_entry_is_ext(&sgt[i]));
+ 
+ 			dma_unmap_page(dev, qm_sg_addr(&sgt[i]),
+diff --git a/drivers/net/ethernet/freescale/ucc_geth_ethtool.c b/drivers/net/ethernet/freescale/ucc_geth_ethtool.c
+index 0beee2cc2ddd..722b6de24816 100644
+--- a/drivers/net/ethernet/freescale/ucc_geth_ethtool.c
++++ b/drivers/net/ethernet/freescale/ucc_geth_ethtool.c
+@@ -252,14 +252,12 @@ uec_set_ringparam(struct net_device *netdev,
+ 		return -EINVAL;
+ 	}
+ 
++	if (netif_running(netdev))
++		return -EBUSY;
++
+ 	ug_info->bdRingLenRx[queue] = ring->rx_pending;
+ 	ug_info->bdRingLenTx[queue] = ring->tx_pending;
+ 
+-	if (netif_running(netdev)) {
+-		/* FIXME: restart automatically */
+-		netdev_info(netdev, "Please re-open the interface\n");
+-	}
+-
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index 931beac3359d..70031e2b2294 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -4370,7 +4370,7 @@ static void mvpp2_phylink_validate(struct net_device *dev,
+ 	case PHY_INTERFACE_MODE_RGMII_ID:
+ 	case PHY_INTERFACE_MODE_RGMII_RXID:
+ 	case PHY_INTERFACE_MODE_RGMII_TXID:
+-		if (port->gop_id == 0)
++		if (port->priv->hw_version == MVPP22 && port->gop_id == 0)
+ 			goto empty_set;
+ 		break;
+ 	default:
+diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
+index 215a45374d7b..0ef95abde6bb 100644
+--- a/drivers/net/ethernet/mscc/ocelot.c
++++ b/drivers/net/ethernet/mscc/ocelot.c
+@@ -613,7 +613,7 @@ static int ocelot_mact_mc_add(struct ocelot_port *port,
+ 			      struct netdev_hw_addr *hw_addr)
+ {
+ 	struct ocelot *ocelot = port->ocelot;
+-	struct netdev_hw_addr *ha = kzalloc(sizeof(*ha), GFP_KERNEL);
++	struct netdev_hw_addr *ha = kzalloc(sizeof(*ha), GFP_ATOMIC);
+ 
+ 	if (!ha)
+ 		return -ENOMEM;
+diff --git a/drivers/net/ethernet/neterion/vxge/vxge-config.c b/drivers/net/ethernet/neterion/vxge/vxge-config.c
+index 7cde387e5ec6..51cd57ab3d95 100644
+--- a/drivers/net/ethernet/neterion/vxge/vxge-config.c
++++ b/drivers/net/ethernet/neterion/vxge/vxge-config.c
+@@ -2366,6 +2366,7 @@ static void *__vxge_hw_blockpool_malloc(struct __vxge_hw_device *devh, u32 size,
+ 				dma_object->addr))) {
+ 			vxge_os_dma_free(devh->pdev, memblock,
+ 				&dma_object->acc_handle);
++			memblock = NULL;
+ 			goto exit;
+ 		}
+ 
+diff --git a/drivers/net/ethernet/qlogic/qed/qed.h b/drivers/net/ethernet/qlogic/qed/qed.h
+index 2d8a77cc156b..f458c9776a89 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed.h
++++ b/drivers/net/ethernet/qlogic/qed/qed.h
+@@ -431,12 +431,16 @@ struct qed_qm_info {
+ 	u8 num_pf_rls;
+ };
+ 
++#define QED_OVERFLOW_BIT	1
++
+ struct qed_db_recovery_info {
+ 	struct list_head list;
+ 
+ 	/* Lock to protect the doorbell recovery mechanism list */
+ 	spinlock_t lock;
++	bool dorq_attn;
+ 	u32 db_recovery_counter;
++	unsigned long overflow;
+ };
+ 
+ struct storm_stats {
+@@ -918,8 +922,7 @@ u16 qed_get_cm_pq_idx_llt_mtc(struct qed_hwfn *p_hwfn, u8 tc);
+ 
+ /* doorbell recovery mechanism */
+ void qed_db_recovery_dp(struct qed_hwfn *p_hwfn);
+-void qed_db_recovery_execute(struct qed_hwfn *p_hwfn,
+-			     enum qed_db_rec_exec db_exec);
++void qed_db_recovery_execute(struct qed_hwfn *p_hwfn);
+ bool qed_edpm_enabled(struct qed_hwfn *p_hwfn);
+ 
+ /* Other Linux specific common definitions */
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev.c b/drivers/net/ethernet/qlogic/qed/qed_dev.c
+index 2ecaaaa4469a..228891e459bc 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_dev.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_dev.c
+@@ -102,11 +102,15 @@ static void qed_db_recovery_dp_entry(struct qed_hwfn *p_hwfn,
+ 
+ /* Doorbell address sanity (address within doorbell bar range) */
+ static bool qed_db_rec_sanity(struct qed_dev *cdev,
+-			      void __iomem *db_addr, void *db_data)
++			      void __iomem *db_addr,
++			      enum qed_db_rec_width db_width,
++			      void *db_data)
+ {
++	u32 width = (db_width == DB_REC_WIDTH_32B) ? 32 : 64;
++
+ 	/* Make sure doorbell address is within the doorbell bar */
+ 	if (db_addr < cdev->doorbells ||
+-	    (u8 __iomem *)db_addr >
++	    (u8 __iomem *)db_addr + width >
+ 	    (u8 __iomem *)cdev->doorbells + cdev->db_size) {
+ 		WARN(true,
+ 		     "Illegal doorbell address: %p. Legal range for doorbell addresses is [%p..%p]\n",
+@@ -159,7 +163,7 @@ int qed_db_recovery_add(struct qed_dev *cdev,
+ 	}
+ 
+ 	/* Sanitize doorbell address */
+-	if (!qed_db_rec_sanity(cdev, db_addr, db_data))
++	if (!qed_db_rec_sanity(cdev, db_addr, db_width, db_data))
+ 		return -EINVAL;
+ 
+ 	/* Obtain hwfn from doorbell address */
+@@ -205,10 +209,6 @@ int qed_db_recovery_del(struct qed_dev *cdev,
+ 		return 0;
+ 	}
+ 
+-	/* Sanitize doorbell address */
+-	if (!qed_db_rec_sanity(cdev, db_addr, db_data))
+-		return -EINVAL;
+-
+ 	/* Obtain hwfn from doorbell address */
+ 	p_hwfn = qed_db_rec_find_hwfn(cdev, db_addr);
+ 
+@@ -300,31 +300,24 @@ void qed_db_recovery_dp(struct qed_hwfn *p_hwfn)
+ 
+ /* Ring the doorbell of a single doorbell recovery entry */
+ static void qed_db_recovery_ring(struct qed_hwfn *p_hwfn,
+-				 struct qed_db_recovery_entry *db_entry,
+-				 enum qed_db_rec_exec db_exec)
+-{
+-	if (db_exec != DB_REC_ONCE) {
+-		/* Print according to width */
+-		if (db_entry->db_width == DB_REC_WIDTH_32B) {
+-			DP_VERBOSE(p_hwfn, QED_MSG_SPQ,
+-				   "%s doorbell address %p data %x\n",
+-				   db_exec == DB_REC_DRY_RUN ?
+-				   "would have rung" : "ringing",
+-				   db_entry->db_addr,
+-				   *(u32 *)db_entry->db_data);
+-		} else {
+-			DP_VERBOSE(p_hwfn, QED_MSG_SPQ,
+-				   "%s doorbell address %p data %llx\n",
+-				   db_exec == DB_REC_DRY_RUN ?
+-				   "would have rung" : "ringing",
+-				   db_entry->db_addr,
+-				   *(u64 *)(db_entry->db_data));
+-		}
++				 struct qed_db_recovery_entry *db_entry)
++{
++	/* Print according to width */
++	if (db_entry->db_width == DB_REC_WIDTH_32B) {
++		DP_VERBOSE(p_hwfn, QED_MSG_SPQ,
++			   "ringing doorbell address %p data %x\n",
++			   db_entry->db_addr,
++			   *(u32 *)db_entry->db_data);
++	} else {
++		DP_VERBOSE(p_hwfn, QED_MSG_SPQ,
++			   "ringing doorbell address %p data %llx\n",
++			   db_entry->db_addr,
++			   *(u64 *)(db_entry->db_data));
+ 	}
+ 
+ 	/* Sanity */
+ 	if (!qed_db_rec_sanity(p_hwfn->cdev, db_entry->db_addr,
+-			       db_entry->db_data))
++			       db_entry->db_width, db_entry->db_data))
+ 		return;
+ 
+ 	/* Flush the write combined buffer. Since there are multiple doorbelling
+@@ -334,14 +327,12 @@ static void qed_db_recovery_ring(struct qed_hwfn *p_hwfn,
+ 	wmb();
+ 
+ 	/* Ring the doorbell */
+-	if (db_exec == DB_REC_REAL_DEAL || db_exec == DB_REC_ONCE) {
+-		if (db_entry->db_width == DB_REC_WIDTH_32B)
+-			DIRECT_REG_WR(db_entry->db_addr,
+-				      *(u32 *)(db_entry->db_data));
+-		else
+-			DIRECT_REG_WR64(db_entry->db_addr,
+-					*(u64 *)(db_entry->db_data));
+-	}
++	if (db_entry->db_width == DB_REC_WIDTH_32B)
++		DIRECT_REG_WR(db_entry->db_addr,
++			      *(u32 *)(db_entry->db_data));
++	else
++		DIRECT_REG_WR64(db_entry->db_addr,
++				*(u64 *)(db_entry->db_data));
+ 
+ 	/* Flush the write combined buffer. Next doorbell may come from a
+ 	 * different entity to the same address...
+@@ -350,29 +341,21 @@ static void qed_db_recovery_ring(struct qed_hwfn *p_hwfn,
+ }
+ 
+ /* Traverse the doorbell recovery entry list and ring all the doorbells */
+-void qed_db_recovery_execute(struct qed_hwfn *p_hwfn,
+-			     enum qed_db_rec_exec db_exec)
++void qed_db_recovery_execute(struct qed_hwfn *p_hwfn)
+ {
+ 	struct qed_db_recovery_entry *db_entry = NULL;
+ 
+-	if (db_exec != DB_REC_ONCE) {
+-		DP_NOTICE(p_hwfn,
+-			  "Executing doorbell recovery. Counter was %d\n",
+-			  p_hwfn->db_recovery_info.db_recovery_counter);
++	DP_NOTICE(p_hwfn, "Executing doorbell recovery. Counter was %d\n",
++		  p_hwfn->db_recovery_info.db_recovery_counter);
+ 
+-		/* Track amount of times recovery was executed */
+-		p_hwfn->db_recovery_info.db_recovery_counter++;
+-	}
++	/* Track amount of times recovery was executed */
++	p_hwfn->db_recovery_info.db_recovery_counter++;
+ 
+ 	/* Protect the list */
+ 	spin_lock_bh(&p_hwfn->db_recovery_info.lock);
+ 	list_for_each_entry(db_entry,
+-			    &p_hwfn->db_recovery_info.list, list_entry) {
+-		qed_db_recovery_ring(p_hwfn, db_entry, db_exec);
+-		if (db_exec == DB_REC_ONCE)
+-			break;
+-	}
+-
++			    &p_hwfn->db_recovery_info.list, list_entry)
++		qed_db_recovery_ring(p_hwfn, db_entry);
+ 	spin_unlock_bh(&p_hwfn->db_recovery_info.lock);
+ }
+ 
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_int.c b/drivers/net/ethernet/qlogic/qed/qed_int.c
+index 92340919d852..a7e95f239317 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_int.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_int.c
+@@ -376,6 +376,9 @@ static int qed_db_rec_flush_queue(struct qed_hwfn *p_hwfn,
+ 	u32 count = QED_DB_REC_COUNT;
+ 	u32 usage = 1;
+ 
++	/* Flush any pending (e)dpms as they may never arrive */
++	qed_wr(p_hwfn, p_ptt, DORQ_REG_DPM_FORCE_ABORT, 0x1);
++
+ 	/* wait for usage to zero or count to run out. This is necessary since
+ 	 * EDPM doorbell transactions can take multiple 64b cycles, and as such
+ 	 * can "split" over the pci. Possibly, the doorbell drop can happen with
+@@ -404,51 +407,74 @@ static int qed_db_rec_flush_queue(struct qed_hwfn *p_hwfn,
+ 
+ int qed_db_rec_handler(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+ {
+-	u32 overflow;
++	u32 attn_ovfl, cur_ovfl;
+ 	int rc;
+ 
+-	overflow = qed_rd(p_hwfn, p_ptt, DORQ_REG_PF_OVFL_STICKY);
+-	DP_NOTICE(p_hwfn, "PF Overflow sticky 0x%x\n", overflow);
+-	if (!overflow) {
+-		qed_db_recovery_execute(p_hwfn, DB_REC_ONCE);
++	attn_ovfl = test_and_clear_bit(QED_OVERFLOW_BIT,
++				       &p_hwfn->db_recovery_info.overflow);
++	cur_ovfl = qed_rd(p_hwfn, p_ptt, DORQ_REG_PF_OVFL_STICKY);
++	if (!cur_ovfl && !attn_ovfl)
+ 		return 0;
+-	}
+ 
+-	if (qed_edpm_enabled(p_hwfn)) {
++	DP_NOTICE(p_hwfn, "PF Overflow sticky: attn %u current %u\n",
++		  attn_ovfl, cur_ovfl);
++
++	if (cur_ovfl && !p_hwfn->db_bar_no_edpm) {
+ 		rc = qed_db_rec_flush_queue(p_hwfn, p_ptt);
+ 		if (rc)
+ 			return rc;
+ 	}
+ 
+-	/* Flush any pending (e)dpm as they may never arrive */
+-	qed_wr(p_hwfn, p_ptt, DORQ_REG_DPM_FORCE_ABORT, 0x1);
+-
+ 	/* Release overflow sticky indication (stop silently dropping everything) */
+ 	qed_wr(p_hwfn, p_ptt, DORQ_REG_PF_OVFL_STICKY, 0x0);
+ 
+ 	/* Repeat all last doorbells (doorbell drop recovery) */
+-	qed_db_recovery_execute(p_hwfn, DB_REC_REAL_DEAL);
++	qed_db_recovery_execute(p_hwfn);
+ 
+ 	return 0;
+ }
+ 
+-static int qed_dorq_attn_cb(struct qed_hwfn *p_hwfn)
++static void qed_dorq_attn_overflow(struct qed_hwfn *p_hwfn)
+ {
+-	u32 int_sts, first_drop_reason, details, address, all_drops_reason;
+ 	struct qed_ptt *p_ptt = p_hwfn->p_dpc_ptt;
++	u32 overflow;
+ 	int rc;
+ 
+-	int_sts = qed_rd(p_hwfn, p_ptt, DORQ_REG_INT_STS);
+-	DP_NOTICE(p_hwfn->cdev, "DORQ attention. int_sts was %x\n", int_sts);
++	overflow = qed_rd(p_hwfn, p_ptt, DORQ_REG_PF_OVFL_STICKY);
++	if (!overflow)
++		goto out;
++
++	/* Run PF doorbell recovery in next periodic handler */
++	set_bit(QED_OVERFLOW_BIT, &p_hwfn->db_recovery_info.overflow);
++
++	if (!p_hwfn->db_bar_no_edpm) {
++		rc = qed_db_rec_flush_queue(p_hwfn, p_ptt);
++		if (rc)
++			goto out;
++	}
++
++	qed_wr(p_hwfn, p_ptt, DORQ_REG_PF_OVFL_STICKY, 0x0);
++out:
++	/* Schedule the handler even if overflow was not detected */
++	qed_periodic_db_rec_start(p_hwfn);
++}
++
++static int qed_dorq_attn_int_sts(struct qed_hwfn *p_hwfn)
++{
++	u32 int_sts, first_drop_reason, details, address, all_drops_reason;
++	struct qed_ptt *p_ptt = p_hwfn->p_dpc_ptt;
+ 
+ 	/* int_sts may be zero since all PFs were interrupted for doorbell
+ 	 * overflow but another one already handled it. Can abort here. If
+ 	 * This PF also requires overflow recovery we will be interrupted again.
+ 	 * The masked almost full indication may also be set. Ignoring.
+ 	 */
++	int_sts = qed_rd(p_hwfn, p_ptt, DORQ_REG_INT_STS);
+ 	if (!(int_sts & ~DORQ_REG_INT_STS_DORQ_FIFO_AFULL))
+ 		return 0;
+ 
++	DP_NOTICE(p_hwfn->cdev, "DORQ attention. int_sts was %x\n", int_sts);
++
+ 	/* check if db_drop or overflow happened */
+ 	if (int_sts & (DORQ_REG_INT_STS_DB_DROP |
+ 		       DORQ_REG_INT_STS_DORQ_FIFO_OVFL_ERR)) {
+@@ -475,11 +501,6 @@ static int qed_dorq_attn_cb(struct qed_hwfn *p_hwfn)
+ 			  GET_FIELD(details, QED_DORQ_ATTENTION_SIZE) * 4,
+ 			  first_drop_reason, all_drops_reason);
+ 
+-		rc = qed_db_rec_handler(p_hwfn, p_ptt);
+-		qed_periodic_db_rec_start(p_hwfn);
+-		if (rc)
+-			return rc;
+-
+ 		/* Clear the doorbell drop details and prepare for next drop */
+ 		qed_wr(p_hwfn, p_ptt, DORQ_REG_DB_DROP_DETAILS_REL, 0);
+ 
+@@ -505,6 +526,25 @@ static int qed_dorq_attn_cb(struct qed_hwfn *p_hwfn)
+ 	return -EINVAL;
+ }
+ 
++static int qed_dorq_attn_cb(struct qed_hwfn *p_hwfn)
++{
++	p_hwfn->db_recovery_info.dorq_attn = true;
++	qed_dorq_attn_overflow(p_hwfn);
++
++	return qed_dorq_attn_int_sts(p_hwfn);
++}
++
++static void qed_dorq_attn_handler(struct qed_hwfn *p_hwfn)
++{
++	if (p_hwfn->db_recovery_info.dorq_attn)
++		goto out;
++
++	/* Call DORQ callback if the attention was missed */
++	qed_dorq_attn_cb(p_hwfn);
++out:
++	p_hwfn->db_recovery_info.dorq_attn = false;
++}
++
+ /* Instead of major changes to the data-structure, we have a some 'special'
+  * identifiers for sources that changed meaning between adapters.
+  */
+@@ -1078,6 +1118,9 @@ static int qed_int_deassertion(struct qed_hwfn  *p_hwfn,
+ 		}
+ 	}
+ 
++	/* Handle missed DORQ attention */
++	qed_dorq_attn_handler(p_hwfn);
++
+ 	/* Clear IGU indication for the deasserted bits */
+ 	DIRECT_REG_WR((u8 __iomem *)p_hwfn->regview +
+ 				    GTT_BAR0_MAP_REG_IGU_CMD +
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_int.h b/drivers/net/ethernet/qlogic/qed/qed_int.h
+index d81a62ebd524..df26bf333893 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_int.h
++++ b/drivers/net/ethernet/qlogic/qed/qed_int.h
+@@ -192,8 +192,8 @@ void qed_int_disable_post_isr_release(struct qed_dev *cdev);
+ 
+ /**
+  * @brief - Doorbell Recovery handler.
+- *          Run DB_REAL_DEAL doorbell recovery in case of PF overflow
+- *          (and flush DORQ if needed), otherwise run DB_REC_ONCE.
++ *          Run doorbell recovery in case of PF overflow (and flush DORQ if
++ *          needed).
+  *
+  * @param p_hwfn
+  * @param p_ptt
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_main.c b/drivers/net/ethernet/qlogic/qed/qed_main.c
+index 6adf5bda9811..26bfcbeebc4c 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_main.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_main.c
+@@ -966,7 +966,7 @@ static void qed_update_pf_params(struct qed_dev *cdev,
+ 	}
+ }
+ 
+-#define QED_PERIODIC_DB_REC_COUNT		100
++#define QED_PERIODIC_DB_REC_COUNT		10
+ #define QED_PERIODIC_DB_REC_INTERVAL_MS		100
+ #define QED_PERIODIC_DB_REC_INTERVAL \
+ 	msecs_to_jiffies(QED_PERIODIC_DB_REC_INTERVAL_MS)
+diff --git a/drivers/net/ethernet/qlogic/qede/qede_ptp.c b/drivers/net/ethernet/qlogic/qede/qede_ptp.c
+index 5f3f42a25361..bddb2b5982dc 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede_ptp.c
++++ b/drivers/net/ethernet/qlogic/qede/qede_ptp.c
+@@ -490,18 +490,17 @@ int qede_ptp_enable(struct qede_dev *edev, bool init_tc)
+ 
+ 	ptp->clock = ptp_clock_register(&ptp->clock_info, &edev->pdev->dev);
+ 	if (IS_ERR(ptp->clock)) {
+-		rc = -EINVAL;
+ 		DP_ERR(edev, "PTP clock registration failed\n");
++		qede_ptp_disable(edev);
++		rc = -EINVAL;
+ 		goto err2;
+ 	}
+ 
+ 	return 0;
+ 
+-err2:
+-	qede_ptp_disable(edev);
+-	ptp->clock = NULL;
+ err1:
+ 	kfree(ptp);
++err2:
+ 	edev->ptp = NULL;
+ 
+ 	return rc;
+diff --git a/drivers/net/ethernet/seeq/sgiseeq.c b/drivers/net/ethernet/seeq/sgiseeq.c
+index 70cce63a6081..696037d5ac3d 100644
+--- a/drivers/net/ethernet/seeq/sgiseeq.c
++++ b/drivers/net/ethernet/seeq/sgiseeq.c
+@@ -735,6 +735,7 @@ static int sgiseeq_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	platform_set_drvdata(pdev, dev);
++	SET_NETDEV_DEV(dev, &pdev->dev);
+ 	sp = netdev_priv(dev);
+ 
+ 	/* Make private data page aligned */
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+index 0f660af01a4b..49a896a16391 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+@@ -1015,6 +1015,8 @@ static struct mac_device_info *sun8i_dwmac_setup(void *ppriv)
+ 	mac->mac = &sun8i_dwmac_ops;
+ 	mac->dma = &sun8i_dwmac_dma_ops;
+ 
++	priv->dev->priv_flags |= IFF_UNICAST_FLT;
++
+ 	/* The loopback bit seems to be re-set when link change
+ 	 * Simply mask it each time
+ 	 * Speed 10/100/1000 are set in BIT(2)/BIT(3)
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index adf79614c2db..ff2426e00682 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -2083,11 +2083,14 @@ bool phy_validate_pause(struct phy_device *phydev,
+ 			struct ethtool_pauseparam *pp)
+ {
+ 	if (!linkmode_test_bit(ETHTOOL_LINK_MODE_Pause_BIT,
+-			       phydev->supported) ||
+-	    (!linkmode_test_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
+-				phydev->supported) &&
+-	     pp->rx_pause != pp->tx_pause))
++			       phydev->supported) && pp->rx_pause)
+ 		return false;
++
++	if (!linkmode_test_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
++			       phydev->supported) &&
++	    pp->rx_pause != pp->tx_pause)
++		return false;
++
+ 	return true;
+ }
+ EXPORT_SYMBOL(phy_validate_pause);
+diff --git a/drivers/net/phy/spi_ks8995.c b/drivers/net/phy/spi_ks8995.c
+index f17b3441779b..d8ea4147dfe7 100644
+--- a/drivers/net/phy/spi_ks8995.c
++++ b/drivers/net/phy/spi_ks8995.c
+@@ -162,6 +162,14 @@ static const struct spi_device_id ks8995_id[] = {
+ };
+ MODULE_DEVICE_TABLE(spi, ks8995_id);
+ 
++static const struct of_device_id ks8895_spi_of_match[] = {
++        { .compatible = "micrel,ks8995" },
++        { .compatible = "micrel,ksz8864" },
++        { .compatible = "micrel,ksz8795" },
++        { },
++ };
++MODULE_DEVICE_TABLE(of, ks8895_spi_of_match);
++
+ static inline u8 get_chip_id(u8 val)
+ {
+ 	return (val >> ID1_CHIPID_S) & ID1_CHIPID_M;
+@@ -529,6 +537,7 @@ static int ks8995_remove(struct spi_device *spi)
+ static struct spi_driver ks8995_driver = {
+ 	.driver = {
+ 		.name	    = "spi-ks8995",
++		.of_match_table = of_match_ptr(ks8895_spi_of_match),
+ 	},
+ 	.probe	  = ks8995_probe,
+ 	.remove	  = ks8995_remove,
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index 448d5439ff6a..8888c097375b 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -596,13 +596,18 @@ static u16 tun_automq_select_queue(struct tun_struct *tun, struct sk_buff *skb)
+ static u16 tun_ebpf_select_queue(struct tun_struct *tun, struct sk_buff *skb)
+ {
+ 	struct tun_prog *prog;
++	u32 numqueues;
+ 	u16 ret = 0;
+ 
++	numqueues = READ_ONCE(tun->numqueues);
++	if (!numqueues)
++		return 0;
++
+ 	prog = rcu_dereference(tun->steering_prog);
+ 	if (prog)
+ 		ret = bpf_prog_run_clear_cb(prog->prog, skb);
+ 
+-	return ret % tun->numqueues;
++	return ret % numqueues;
+ }
+ 
+ static u16 tun_select_queue(struct net_device *dev, struct sk_buff *skb,
+@@ -700,6 +705,8 @@ static void __tun_detach(struct tun_file *tfile, bool clean)
+ 				   tun->tfiles[tun->numqueues - 1]);
+ 		ntfile = rtnl_dereference(tun->tfiles[index]);
+ 		ntfile->queue_index = index;
++		rcu_assign_pointer(tun->tfiles[tun->numqueues - 1],
++				   NULL);
+ 
+ 		--tun->numqueues;
+ 		if (clean) {
+@@ -1082,7 +1089,7 @@ static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	tfile = rcu_dereference(tun->tfiles[txq]);
+ 
+ 	/* Drop packet if interface is not attached */
+-	if (txq >= tun->numqueues)
++	if (!tfile)
+ 		goto drop;
+ 
+ 	if (!rcu_dereference(tun->steering_prog))
+@@ -1305,6 +1312,7 @@ static int tun_xdp_xmit(struct net_device *dev, int n,
+ 
+ 	rcu_read_lock();
+ 
++resample:
+ 	numqueues = READ_ONCE(tun->numqueues);
+ 	if (!numqueues) {
+ 		rcu_read_unlock();
+@@ -1313,6 +1321,8 @@ static int tun_xdp_xmit(struct net_device *dev, int n,
+ 
+ 	tfile = rcu_dereference(tun->tfiles[smp_processor_id() %
+ 					    numqueues]);
++	if (unlikely(!tfile))
++		goto resample;
+ 
+ 	spin_lock(&tfile->tx_ring.producer_lock);
+ 	for (i = 0; i < n; i++) {
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index cd15c32b2e43..9ee4d7402ca2 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -875,6 +875,7 @@ static const struct net_device_ops vrf_netdev_ops = {
+ 	.ndo_init		= vrf_dev_init,
+ 	.ndo_uninit		= vrf_dev_uninit,
+ 	.ndo_start_xmit		= vrf_xmit,
++	.ndo_set_mac_address	= eth_mac_addr,
+ 	.ndo_get_stats64	= vrf_get_stats64,
+ 	.ndo_add_slave		= vrf_add_slave,
+ 	.ndo_del_slave		= vrf_del_slave,
+@@ -1274,6 +1275,7 @@ static void vrf_setup(struct net_device *dev)
+ 	/* default to no qdisc; user can add if desired */
+ 	dev->priv_flags |= IFF_NO_QUEUE;
+ 	dev->priv_flags |= IFF_NO_RX_HANDLER;
++	dev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
+ 
+ 	/* VRF devices do not care about MTU, but if the MTU is set
+ 	 * too low then the ipv4 and ipv6 protocols are disabled
+diff --git a/drivers/net/wireless/marvell/mwl8k.c b/drivers/net/wireless/marvell/mwl8k.c
+index 8e4e9b6919e0..ffc565ac2192 100644
+--- a/drivers/net/wireless/marvell/mwl8k.c
++++ b/drivers/net/wireless/marvell/mwl8k.c
+@@ -441,6 +441,9 @@ static const struct ieee80211_rate mwl8k_rates_50[] = {
+ #define MWL8K_CMD_UPDATE_STADB		0x1123
+ #define MWL8K_CMD_BASTREAM		0x1125
+ 
++#define MWL8K_LEGACY_5G_RATE_OFFSET \
++	(ARRAY_SIZE(mwl8k_rates_24) - ARRAY_SIZE(mwl8k_rates_50))
++
+ static const char *mwl8k_cmd_name(__le16 cmd, char *buf, int bufsize)
+ {
+ 	u16 command = le16_to_cpu(cmd);
+@@ -1016,8 +1019,9 @@ mwl8k_rxd_ap_process(void *_rxd, struct ieee80211_rx_status *status,
+ 
+ 	if (rxd->channel > 14) {
+ 		status->band = NL80211_BAND_5GHZ;
+-		if (!(status->encoding == RX_ENC_HT))
+-			status->rate_idx -= 5;
++		if (!(status->encoding == RX_ENC_HT) &&
++		    status->rate_idx >= MWL8K_LEGACY_5G_RATE_OFFSET)
++			status->rate_idx -= MWL8K_LEGACY_5G_RATE_OFFSET;
+ 	} else {
+ 		status->band = NL80211_BAND_2GHZ;
+ 	}
+@@ -1124,8 +1128,9 @@ mwl8k_rxd_sta_process(void *_rxd, struct ieee80211_rx_status *status,
+ 
+ 	if (rxd->channel > 14) {
+ 		status->band = NL80211_BAND_5GHZ;
+-		if (!(status->encoding == RX_ENC_HT))
+-			status->rate_idx -= 5;
++		if (!(status->encoding == RX_ENC_HT) &&
++		    status->rate_idx >= MWL8K_LEGACY_5G_RATE_OFFSET)
++			status->rate_idx -= MWL8K_LEGACY_5G_RATE_OFFSET;
+ 	} else {
+ 		status->band = NL80211_BAND_2GHZ;
+ 	}
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hw.c
+index f783e4a8083d..0de7551ae14a 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hw.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hw.c
+@@ -1697,6 +1697,7 @@ static void _rtl8723e_read_adapter_info(struct ieee80211_hw *hw,
+ 					rtlhal->oem_id = RT_CID_819X_LENOVO;
+ 					break;
+ 				}
++				break;
+ 			case 0x1025:
+ 				rtlhal->oem_id = RT_CID_819X_ACER;
+ 				break;
+diff --git a/drivers/net/wireless/st/cw1200/scan.c b/drivers/net/wireless/st/cw1200/scan.c
+index 0a9eac93dd01..71e9b91cf15b 100644
+--- a/drivers/net/wireless/st/cw1200/scan.c
++++ b/drivers/net/wireless/st/cw1200/scan.c
+@@ -84,8 +84,11 @@ int cw1200_hw_scan(struct ieee80211_hw *hw,
+ 
+ 	frame.skb = ieee80211_probereq_get(hw, priv->vif->addr, NULL, 0,
+ 		req->ie_len);
+-	if (!frame.skb)
++	if (!frame.skb) {
++		mutex_unlock(&priv->conf_mutex);
++		up(&priv->scan.lock);
+ 		return -ENOMEM;
++	}
+ 
+ 	if (req->ie_len)
+ 		skb_put_data(frame.skb, req->ie, req->ie_len);
+diff --git a/drivers/nfc/st95hf/core.c b/drivers/nfc/st95hf/core.c
+index 2b26f762fbc3..01acb6e53365 100644
+--- a/drivers/nfc/st95hf/core.c
++++ b/drivers/nfc/st95hf/core.c
+@@ -1074,6 +1074,12 @@ static const struct spi_device_id st95hf_id[] = {
+ };
+ MODULE_DEVICE_TABLE(spi, st95hf_id);
+ 
++static const struct of_device_id st95hf_spi_of_match[] = {
++        { .compatible = "st,st95hf" },
++        { },
++};
++MODULE_DEVICE_TABLE(of, st95hf_spi_of_match);
++
+ static int st95hf_probe(struct spi_device *nfc_spi_dev)
+ {
+ 	int ret;
+@@ -1260,6 +1266,7 @@ static struct spi_driver st95hf_driver = {
+ 	.driver = {
+ 		.name = "st95hf",
+ 		.owner = THIS_MODULE,
++		.of_match_table = of_match_ptr(st95hf_spi_of_match),
+ 	},
+ 	.id_table = st95hf_id,
+ 	.probe = st95hf_probe,
+diff --git a/drivers/nvdimm/btt_devs.c b/drivers/nvdimm/btt_devs.c
+index 795ad4ff35ca..e341498876ca 100644
+--- a/drivers/nvdimm/btt_devs.c
++++ b/drivers/nvdimm/btt_devs.c
+@@ -190,14 +190,15 @@ static struct device *__nd_btt_create(struct nd_region *nd_region,
+ 		return NULL;
+ 
+ 	nd_btt->id = ida_simple_get(&nd_region->btt_ida, 0, 0, GFP_KERNEL);
+-	if (nd_btt->id < 0) {
+-		kfree(nd_btt);
+-		return NULL;
+-	}
++	if (nd_btt->id < 0)
++		goto out_nd_btt;
+ 
+ 	nd_btt->lbasize = lbasize;
+-	if (uuid)
++	if (uuid) {
+ 		uuid = kmemdup(uuid, 16, GFP_KERNEL);
++		if (!uuid)
++			goto out_put_id;
++	}
+ 	nd_btt->uuid = uuid;
+ 	dev = &nd_btt->dev;
+ 	dev_set_name(dev, "btt%d.%d", nd_region->id, nd_btt->id);
+@@ -212,6 +213,13 @@ static struct device *__nd_btt_create(struct nd_region *nd_region,
+ 		return NULL;
+ 	}
+ 	return dev;
++
++out_put_id:
++	ida_simple_remove(&nd_region->btt_ida, nd_btt->id);
++
++out_nd_btt:
++	kfree(nd_btt);
++	return NULL;
+ }
+ 
+ struct device *nd_btt_create(struct nd_region *nd_region)
+diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
+index 33a3b23b3db7..e761b29f7160 100644
+--- a/drivers/nvdimm/namespace_devs.c
++++ b/drivers/nvdimm/namespace_devs.c
+@@ -2249,9 +2249,12 @@ static struct device *create_namespace_blk(struct nd_region *nd_region,
+ 	if (!nsblk->uuid)
+ 		goto blk_err;
+ 	memcpy(name, nd_label->name, NSLABEL_NAME_LEN);
+-	if (name[0])
++	if (name[0]) {
+ 		nsblk->alt_name = kmemdup(name, NSLABEL_NAME_LEN,
+ 				GFP_KERNEL);
++		if (!nsblk->alt_name)
++			goto blk_err;
++	}
+ 	res = nsblk_add_resource(nd_region, ndd, nsblk,
+ 			__le64_to_cpu(nd_label->dpa));
+ 	if (!res)
+diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
+index bc2f700feef8..0279eb1da3ef 100644
+--- a/drivers/nvdimm/pmem.c
++++ b/drivers/nvdimm/pmem.c
+@@ -113,13 +113,13 @@ static void write_pmem(void *pmem_addr, struct page *page,
+ 
+ 	while (len) {
+ 		mem = kmap_atomic(page);
+-		chunk = min_t(unsigned int, len, PAGE_SIZE);
++		chunk = min_t(unsigned int, len, PAGE_SIZE - off);
+ 		memcpy_flushcache(pmem_addr, mem + off, chunk);
+ 		kunmap_atomic(mem);
+ 		len -= chunk;
+ 		off = 0;
+ 		page++;
+-		pmem_addr += PAGE_SIZE;
++		pmem_addr += chunk;
+ 	}
+ }
+ 
+@@ -132,7 +132,7 @@ static blk_status_t read_pmem(struct page *page, unsigned int off,
+ 
+ 	while (len) {
+ 		mem = kmap_atomic(page);
+-		chunk = min_t(unsigned int, len, PAGE_SIZE);
++		chunk = min_t(unsigned int, len, PAGE_SIZE - off);
+ 		rem = memcpy_mcsafe(mem + off, pmem_addr, chunk);
+ 		kunmap_atomic(mem);
+ 		if (rem)
+@@ -140,7 +140,7 @@ static blk_status_t read_pmem(struct page *page, unsigned int off,
+ 		len -= chunk;
+ 		off = 0;
+ 		page++;
+-		pmem_addr += PAGE_SIZE;
++		pmem_addr += chunk;
+ 	}
+ 	return BLK_STS_OK;
+ }
+diff --git a/drivers/nvdimm/security.c b/drivers/nvdimm/security.c
+index f8bb746a549f..6bea6852bf27 100644
+--- a/drivers/nvdimm/security.c
++++ b/drivers/nvdimm/security.c
+@@ -22,6 +22,8 @@ static bool key_revalidate = true;
+ module_param(key_revalidate, bool, 0444);
+ MODULE_PARM_DESC(key_revalidate, "Require key validation at init.");
+ 
++static const char zero_key[NVDIMM_PASSPHRASE_LEN];
++
+ static void *key_data(struct key *key)
+ {
+ 	struct encrypted_key_payload *epayload = dereference_key_locked(key);
+@@ -286,8 +288,9 @@ int nvdimm_security_erase(struct nvdimm *nvdimm, unsigned int keyid,
+ {
+ 	struct device *dev = &nvdimm->dev;
+ 	struct nvdimm_bus *nvdimm_bus = walk_to_nvdimm_bus(dev);
+-	struct key *key;
++	struct key *key = NULL;
+ 	int rc;
++	const void *data;
+ 
+ 	/* The bus lock should be held at the top level of the call stack */
+ 	lockdep_assert_held(&nvdimm_bus->reconfig_mutex);
+@@ -319,11 +322,15 @@ int nvdimm_security_erase(struct nvdimm *nvdimm, unsigned int keyid,
+ 		return -EOPNOTSUPP;
+ 	}
+ 
+-	key = nvdimm_lookup_user_key(nvdimm, keyid, NVDIMM_BASE_KEY);
+-	if (!key)
+-		return -ENOKEY;
++	if (keyid != 0) {
++		key = nvdimm_lookup_user_key(nvdimm, keyid, NVDIMM_BASE_KEY);
++		if (!key)
++			return -ENOKEY;
++		data = key_data(key);
++	} else
++		data = zero_key;
+ 
+-	rc = nvdimm->sec.ops->erase(nvdimm, key_data(key), pass_type);
++	rc = nvdimm->sec.ops->erase(nvdimm, data, pass_type);
+ 	dev_dbg(dev, "key: %d erase%s: %s\n", key_serial(key),
+ 			pass_type == NVDIMM_MASTER ? "(master)" : "(user)",
+ 			rc == 0 ? "success" : "fail");
+diff --git a/drivers/of/of_net.c b/drivers/of/of_net.c
+index 810ab0fbcccb..d820f3edd431 100644
+--- a/drivers/of/of_net.c
++++ b/drivers/of/of_net.c
+@@ -7,7 +7,6 @@
+  */
+ #include <linux/etherdevice.h>
+ #include <linux/kernel.h>
+-#include <linux/nvmem-consumer.h>
+ #include <linux/of_net.h>
+ #include <linux/phy.h>
+ #include <linux/export.h>
+diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
+index 9ba4d12c179c..808a182830e5 100644
+--- a/drivers/pci/controller/pci-hyperv.c
++++ b/drivers/pci/controller/pci-hyperv.c
+@@ -1491,6 +1491,21 @@ static void hv_pci_assign_slots(struct hv_pcibus_device *hbus)
+ 	}
+ }
+ 
++/*
++ * Remove entries in sysfs pci slot directory.
++ */
++static void hv_pci_remove_slots(struct hv_pcibus_device *hbus)
++{
++	struct hv_pci_dev *hpdev;
++
++	list_for_each_entry(hpdev, &hbus->children, list_entry) {
++		if (!hpdev->pci_slot)
++			continue;
++		pci_destroy_slot(hpdev->pci_slot);
++		hpdev->pci_slot = NULL;
++	}
++}
++
+ /**
+  * create_root_hv_pci_bus() - Expose a new root PCI bus
+  * @hbus:	Root PCI bus, as understood by this driver
+@@ -1766,6 +1781,10 @@ static void pci_devices_present_work(struct work_struct *work)
+ 		hpdev = list_first_entry(&removed, struct hv_pci_dev,
+ 					 list_entry);
+ 		list_del(&hpdev->list_entry);
++
++		if (hpdev->pci_slot)
++			pci_destroy_slot(hpdev->pci_slot);
++
+ 		put_pcichild(hpdev);
+ 	}
+ 
+@@ -1905,6 +1924,9 @@ static void hv_eject_device_work(struct work_struct *work)
+ 			 sizeof(*ejct_pkt), (unsigned long)&ctxt.pkt,
+ 			 VM_PKT_DATA_INBAND, 0);
+ 
++	/* For the get_pcichild() in hv_pci_eject_device() */
++	put_pcichild(hpdev);
++	/* For the two refs got in new_pcichild_device() */
+ 	put_pcichild(hpdev);
+ 	put_pcichild(hpdev);
+ 	put_hvpcibus(hpdev->hbus);
+@@ -2682,6 +2704,7 @@ static int hv_pci_remove(struct hv_device *hdev)
+ 		pci_lock_rescan_remove();
+ 		pci_stop_root_bus(hbus->pci_bus);
+ 		pci_remove_root_bus(hbus->pci_bus);
++		hv_pci_remove_slots(hbus);
+ 		pci_unlock_rescan_remove();
+ 		hbus->state = hv_pcibus_removed;
+ 	}
+diff --git a/drivers/platform/x86/dell-laptop.c b/drivers/platform/x86/dell-laptop.c
+index 95e6ca116e00..a561f653cf13 100644
+--- a/drivers/platform/x86/dell-laptop.c
++++ b/drivers/platform/x86/dell-laptop.c
+@@ -531,7 +531,7 @@ static void dell_rfkill_query(struct rfkill *rfkill, void *data)
+ 		return;
+ 	}
+ 
+-	dell_fill_request(&buffer, 0, 0x2, 0, 0);
++	dell_fill_request(&buffer, 0x2, 0, 0, 0);
+ 	ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
+ 	hwswitch = buffer.output[1];
+ 
+@@ -562,7 +562,7 @@ static int dell_debugfs_show(struct seq_file *s, void *data)
+ 		return ret;
+ 	status = buffer.output[1];
+ 
+-	dell_fill_request(&buffer, 0, 0x2, 0, 0);
++	dell_fill_request(&buffer, 0x2, 0, 0, 0);
+ 	hwswitch_ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
+ 	if (hwswitch_ret)
+ 		return hwswitch_ret;
+@@ -647,7 +647,7 @@ static void dell_update_rfkill(struct work_struct *ignored)
+ 	if (ret != 0)
+ 		return;
+ 
+-	dell_fill_request(&buffer, 0, 0x2, 0, 0);
++	dell_fill_request(&buffer, 0x2, 0, 0, 0);
+ 	ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
+ 
+ 	if (ret == 0 && (status & BIT(0)))
+diff --git a/drivers/platform/x86/sony-laptop.c b/drivers/platform/x86/sony-laptop.c
+index b205b037fd61..b50f8f73fb47 100644
+--- a/drivers/platform/x86/sony-laptop.c
++++ b/drivers/platform/x86/sony-laptop.c
+@@ -4424,14 +4424,16 @@ sony_pic_read_possible_resource(struct acpi_resource *resource, void *context)
+ 			}
+ 			return AE_OK;
+ 		}
++
++	case ACPI_RESOURCE_TYPE_END_TAG:
++		return AE_OK;
++
+ 	default:
+ 		dprintk("Resource %d isn't an IRQ nor an IO port\n",
+ 			resource->type);
++		return AE_CTRL_TERMINATE;
+ 
+-	case ACPI_RESOURCE_TYPE_END_TAG:
+-		return AE_OK;
+ 	}
+-	return AE_CTRL_TERMINATE;
+ }
+ 
+ static int sony_pic_possible_resources(struct acpi_device *device)
+diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c
+index 726341f2b638..89ce14b35adc 100644
+--- a/drivers/platform/x86/thinkpad_acpi.c
++++ b/drivers/platform/x86/thinkpad_acpi.c
+@@ -79,7 +79,7 @@
+ #include <linux/jiffies.h>
+ #include <linux/workqueue.h>
+ #include <linux/acpi.h>
+-#include <linux/pci_ids.h>
++#include <linux/pci.h>
+ #include <linux/power_supply.h>
+ #include <sound/core.h>
+ #include <sound/control.h>
+@@ -4501,6 +4501,74 @@ static void bluetooth_exit(void)
+ 	bluetooth_shutdown();
+ }
+ 
++static const struct dmi_system_id bt_fwbug_list[] __initconst = {
++	{
++		.ident = "ThinkPad E485",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_BOARD_NAME, "20KU"),
++		},
++	},
++	{
++		.ident = "ThinkPad E585",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_BOARD_NAME, "20KV"),
++		},
++	},
++	{
++		.ident = "ThinkPad A285 - 20MW",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_BOARD_NAME, "20MW"),
++		},
++	},
++	{
++		.ident = "ThinkPad A285 - 20MX",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_BOARD_NAME, "20MX"),
++		},
++	},
++	{
++		.ident = "ThinkPad A485 - 20MU",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_BOARD_NAME, "20MU"),
++		},
++	},
++	{
++		.ident = "ThinkPad A485 - 20MV",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_BOARD_NAME, "20MV"),
++		},
++	},
++	{}
++};
++
++static const struct pci_device_id fwbug_cards_ids[] __initconst = {
++	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x24F3) },
++	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x24FD) },
++	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x2526) },
++	{}
++};
++
++
++static int __init have_bt_fwbug(void)
++{
++	/*
++	 * Some AMD based ThinkPads have a firmware bug that calling
++	 * "GBDC" will cause bluetooth on Intel wireless cards blocked
++	 */
++	if (dmi_check_system(bt_fwbug_list) && pci_dev_present(fwbug_cards_ids)) {
++		vdbg_printk(TPACPI_DBG_INIT | TPACPI_DBG_RFKILL,
++			FW_BUG "disable bluetooth subdriver for Intel cards\n");
++		return 1;
++	} else
++		return 0;
++}
++
+ static int __init bluetooth_init(struct ibm_init_struct *iibm)
+ {
+ 	int res;
+@@ -4513,7 +4581,7 @@ static int __init bluetooth_init(struct ibm_init_struct *iibm)
+ 
+ 	/* bluetooth not supported on 570, 600e/x, 770e, 770x, A21e, A2xm/p,
+ 	   G4x, R30, R31, R40e, R50e, T20-22, X20-21 */
+-	tp_features.bluetooth = hkey_handle &&
++	tp_features.bluetooth = !have_bt_fwbug() && hkey_handle &&
+ 	    acpi_evalf(hkey_handle, &status, "GBDC", "qd");
+ 
+ 	vdbg_printk(TPACPI_DBG_INIT | TPACPI_DBG_RFKILL,
+diff --git a/drivers/s390/block/dasd_eckd.c b/drivers/s390/block/dasd_eckd.c
+index 6e294b4d3635..f89f9d02e788 100644
+--- a/drivers/s390/block/dasd_eckd.c
++++ b/drivers/s390/block/dasd_eckd.c
+@@ -2004,14 +2004,14 @@ static int dasd_eckd_end_analysis(struct dasd_block *block)
+ 	blk_per_trk = recs_per_track(&private->rdc_data, 0, block->bp_block);
+ 
+ raw:
+-	block->blocks = (private->real_cyl *
++	block->blocks = ((unsigned long) private->real_cyl *
+ 			  private->rdc_data.trk_per_cyl *
+ 			  blk_per_trk);
+ 
+ 	dev_info(&device->cdev->dev,
+-		 "DASD with %d KB/block, %d KB total size, %d KB/track, "
++		 "DASD with %u KB/block, %lu KB total size, %u KB/track, "
+ 		 "%s\n", (block->bp_block >> 10),
+-		 ((private->real_cyl *
++		 (((unsigned long) private->real_cyl *
+ 		   private->rdc_data.trk_per_cyl *
+ 		   blk_per_trk * (block->bp_block >> 9)) >> 1),
+ 		 ((blk_per_trk * block->bp_block) >> 10),
+diff --git a/drivers/s390/char/con3270.c b/drivers/s390/char/con3270.c
+index fd2146bcc0ad..e17364e13d2f 100644
+--- a/drivers/s390/char/con3270.c
++++ b/drivers/s390/char/con3270.c
+@@ -629,7 +629,7 @@ con3270_init(void)
+ 		     (void (*)(unsigned long)) con3270_read_tasklet,
+ 		     (unsigned long) condev->read);
+ 
+-	raw3270_add_view(&condev->view, &con3270_fn, 1);
++	raw3270_add_view(&condev->view, &con3270_fn, 1, RAW3270_VIEW_LOCK_IRQ);
+ 
+ 	INIT_LIST_HEAD(&condev->freemem);
+ 	for (i = 0; i < CON3270_STRING_PAGES; i++) {
+diff --git a/drivers/s390/char/fs3270.c b/drivers/s390/char/fs3270.c
+index 8f3a2eeb28dc..8b48ba9c598e 100644
+--- a/drivers/s390/char/fs3270.c
++++ b/drivers/s390/char/fs3270.c
+@@ -463,7 +463,8 @@ fs3270_open(struct inode *inode, struct file *filp)
+ 
+ 	init_waitqueue_head(&fp->wait);
+ 	fp->fs_pid = get_pid(task_pid(current));
+-	rc = raw3270_add_view(&fp->view, &fs3270_fn, minor);
++	rc = raw3270_add_view(&fp->view, &fs3270_fn, minor,
++			      RAW3270_VIEW_LOCK_BH);
+ 	if (rc) {
+ 		fs3270_free_view(&fp->view);
+ 		goto out;
+diff --git a/drivers/s390/char/raw3270.c b/drivers/s390/char/raw3270.c
+index f8cd2935fbfd..63a41b168761 100644
+--- a/drivers/s390/char/raw3270.c
++++ b/drivers/s390/char/raw3270.c
+@@ -920,7 +920,7 @@ raw3270_deactivate_view(struct raw3270_view *view)
+  * Add view to device with minor "minor".
+  */
+ int
+-raw3270_add_view(struct raw3270_view *view, struct raw3270_fn *fn, int minor)
++raw3270_add_view(struct raw3270_view *view, struct raw3270_fn *fn, int minor, int subclass)
+ {
+ 	unsigned long flags;
+ 	struct raw3270 *rp;
+@@ -942,6 +942,7 @@ raw3270_add_view(struct raw3270_view *view, struct raw3270_fn *fn, int minor)
+ 		view->cols = rp->cols;
+ 		view->ascebc = rp->ascebc;
+ 		spin_lock_init(&view->lock);
++		lockdep_set_subclass(&view->lock, subclass);
+ 		list_add(&view->list, &rp->view_list);
+ 		rc = 0;
+ 		spin_unlock_irqrestore(get_ccwdev_lock(rp->cdev), flags);
+diff --git a/drivers/s390/char/raw3270.h b/drivers/s390/char/raw3270.h
+index 114ca7cbf889..3afaa35f7351 100644
+--- a/drivers/s390/char/raw3270.h
++++ b/drivers/s390/char/raw3270.h
+@@ -150,6 +150,8 @@ struct raw3270_fn {
+ struct raw3270_view {
+ 	struct list_head list;
+ 	spinlock_t lock;
++#define RAW3270_VIEW_LOCK_IRQ	0
++#define RAW3270_VIEW_LOCK_BH	1
+ 	atomic_t ref_count;
+ 	struct raw3270 *dev;
+ 	struct raw3270_fn *fn;
+@@ -158,7 +160,7 @@ struct raw3270_view {
+ 	unsigned char *ascebc;		/* ascii -> ebcdic table */
+ };
+ 
+-int raw3270_add_view(struct raw3270_view *, struct raw3270_fn *, int);
++int raw3270_add_view(struct raw3270_view *, struct raw3270_fn *, int, int);
+ int raw3270_activate_view(struct raw3270_view *);
+ void raw3270_del_view(struct raw3270_view *);
+ void raw3270_deactivate_view(struct raw3270_view *);
+diff --git a/drivers/s390/char/tty3270.c b/drivers/s390/char/tty3270.c
+index 2b0c36c2c568..98d7fc152e32 100644
+--- a/drivers/s390/char/tty3270.c
++++ b/drivers/s390/char/tty3270.c
+@@ -980,7 +980,8 @@ static int tty3270_install(struct tty_driver *driver, struct tty_struct *tty)
+ 		return PTR_ERR(tp);
+ 
+ 	rc = raw3270_add_view(&tp->view, &tty3270_fn,
+-			      tty->index + RAW3270_FIRSTMINOR);
++			      tty->index + RAW3270_FIRSTMINOR,
++			      RAW3270_VIEW_LOCK_BH);
+ 	if (rc) {
+ 		tty3270_free_view(tp);
+ 		return rc;
+diff --git a/drivers/s390/crypto/pkey_api.c b/drivers/s390/crypto/pkey_api.c
+index 2f92bbed4bf6..097e890e0d6d 100644
+--- a/drivers/s390/crypto/pkey_api.c
++++ b/drivers/s390/crypto/pkey_api.c
+@@ -51,7 +51,8 @@ static debug_info_t *debug_info;
+ 
+ static void __init pkey_debug_init(void)
+ {
+-	debug_info = debug_register("pkey", 1, 1, 4 * sizeof(long));
++	/* 5 arguments per dbf entry (including the format string ptr) */
++	debug_info = debug_register("pkey", 1, 1, 5 * sizeof(long));
+ 	debug_register_view(debug_info, &debug_sprintf_view);
+ 	debug_set_level(debug_info, 3);
+ }
+diff --git a/drivers/s390/net/ctcm_main.c b/drivers/s390/net/ctcm_main.c
+index 7617d21cb296..f63c5c871d3d 100644
+--- a/drivers/s390/net/ctcm_main.c
++++ b/drivers/s390/net/ctcm_main.c
+@@ -1595,6 +1595,7 @@ static int ctcm_new_device(struct ccwgroup_device *cgdev)
+ 		if (priv->channel[direction] == NULL) {
+ 			if (direction == CTCM_WRITE)
+ 				channel_free(priv->channel[CTCM_READ]);
++			result = -ENODEV;
+ 			goto out_dev;
+ 		}
+ 		priv->channel[direction]->netdev = dev;
+diff --git a/drivers/scsi/aic7xxx/aic7770_osm.c b/drivers/scsi/aic7xxx/aic7770_osm.c
+index 3d401d02c019..bdd177e3d762 100644
+--- a/drivers/scsi/aic7xxx/aic7770_osm.c
++++ b/drivers/scsi/aic7xxx/aic7770_osm.c
+@@ -91,6 +91,7 @@ aic7770_probe(struct device *dev)
+ 	ahc = ahc_alloc(&aic7xxx_driver_template, name);
+ 	if (ahc == NULL)
+ 		return (ENOMEM);
++	ahc->dev = dev;
+ 	error = aic7770_config(ahc, aic7770_ident_table + edev->id.driver_data,
+ 			       eisaBase);
+ 	if (error != 0) {
+diff --git a/drivers/scsi/aic7xxx/aic7xxx.h b/drivers/scsi/aic7xxx/aic7xxx.h
+index 5614921b4041..88b90f9806c9 100644
+--- a/drivers/scsi/aic7xxx/aic7xxx.h
++++ b/drivers/scsi/aic7xxx/aic7xxx.h
+@@ -943,6 +943,7 @@ struct ahc_softc {
+ 	 * Platform specific device information.
+ 	 */
+ 	ahc_dev_softc_t		  dev_softc;
++	struct device		  *dev;
+ 
+ 	/*
+ 	 * Bus specific device information.
+diff --git a/drivers/scsi/aic7xxx/aic7xxx_osm.c b/drivers/scsi/aic7xxx/aic7xxx_osm.c
+index 3c9c17450bb3..d5c4a0d23706 100644
+--- a/drivers/scsi/aic7xxx/aic7xxx_osm.c
++++ b/drivers/scsi/aic7xxx/aic7xxx_osm.c
+@@ -860,8 +860,8 @@ int
+ ahc_dmamem_alloc(struct ahc_softc *ahc, bus_dma_tag_t dmat, void** vaddr,
+ 		 int flags, bus_dmamap_t *mapp)
+ {
+-	*vaddr = pci_alloc_consistent(ahc->dev_softc,
+-				      dmat->maxsize, mapp);
++	/* XXX: check if we really need the GFP_ATOMIC and unwind this mess! */
++	*vaddr = dma_alloc_coherent(ahc->dev, dmat->maxsize, mapp, GFP_ATOMIC);
+ 	if (*vaddr == NULL)
+ 		return ENOMEM;
+ 	return 0;
+@@ -871,8 +871,7 @@ void
+ ahc_dmamem_free(struct ahc_softc *ahc, bus_dma_tag_t dmat,
+ 		void* vaddr, bus_dmamap_t map)
+ {
+-	pci_free_consistent(ahc->dev_softc, dmat->maxsize,
+-			    vaddr, map);
++	dma_free_coherent(ahc->dev, dmat->maxsize, vaddr, map);
+ }
+ 
+ int
+@@ -1123,8 +1122,7 @@ ahc_linux_register_host(struct ahc_softc *ahc, struct scsi_host_template *templa
+ 
+ 	host->transportt = ahc_linux_transport_template;
+ 
+-	retval = scsi_add_host(host,
+-			(ahc->dev_softc ? &ahc->dev_softc->dev : NULL));
++	retval = scsi_add_host(host, ahc->dev);
+ 	if (retval) {
+ 		printk(KERN_WARNING "aic7xxx: scsi_add_host failed\n");
+ 		scsi_host_put(host);
+diff --git a/drivers/scsi/aic7xxx/aic7xxx_osm_pci.c b/drivers/scsi/aic7xxx/aic7xxx_osm_pci.c
+index 0fc14dac7070..717d8d1082ce 100644
+--- a/drivers/scsi/aic7xxx/aic7xxx_osm_pci.c
++++ b/drivers/scsi/aic7xxx/aic7xxx_osm_pci.c
+@@ -250,6 +250,7 @@ ahc_linux_pci_dev_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 		}
+ 	}
+ 	ahc->dev_softc = pci;
++	ahc->dev = &pci->dev;
+ 	error = ahc_pci_config(ahc, entry);
+ 	if (error != 0) {
+ 		ahc_free(ahc);
+diff --git a/drivers/usb/serial/generic.c b/drivers/usb/serial/generic.c
+index 2274d9625f63..0fff4968ea1b 100644
+--- a/drivers/usb/serial/generic.c
++++ b/drivers/usb/serial/generic.c
+@@ -376,6 +376,7 @@ void usb_serial_generic_read_bulk_callback(struct urb *urb)
+ 	struct usb_serial_port *port = urb->context;
+ 	unsigned char *data = urb->transfer_buffer;
+ 	unsigned long flags;
++	bool stopped = false;
+ 	int status = urb->status;
+ 	int i;
+ 
+@@ -383,33 +384,51 @@ void usb_serial_generic_read_bulk_callback(struct urb *urb)
+ 		if (urb == port->read_urbs[i])
+ 			break;
+ 	}
+-	set_bit(i, &port->read_urbs_free);
+ 
+ 	dev_dbg(&port->dev, "%s - urb %d, len %d\n", __func__, i,
+ 							urb->actual_length);
+ 	switch (status) {
+ 	case 0:
++		usb_serial_debug_data(&port->dev, __func__, urb->actual_length,
++							data);
++		port->serial->type->process_read_urb(urb);
+ 		break;
+ 	case -ENOENT:
+ 	case -ECONNRESET:
+ 	case -ESHUTDOWN:
+ 		dev_dbg(&port->dev, "%s - urb stopped: %d\n",
+ 							__func__, status);
+-		return;
++		stopped = true;
++		break;
+ 	case -EPIPE:
+ 		dev_err(&port->dev, "%s - urb stopped: %d\n",
+ 							__func__, status);
+-		return;
++		stopped = true;
++		break;
+ 	default:
+ 		dev_dbg(&port->dev, "%s - nonzero urb status: %d\n",
+ 							__func__, status);
+-		goto resubmit;
++		break;
+ 	}
+ 
+-	usb_serial_debug_data(&port->dev, __func__, urb->actual_length, data);
+-	port->serial->type->process_read_urb(urb);
++	/*
++	 * Make sure URB processing is done before marking as free to avoid
++	 * racing with unthrottle() on another CPU. Matches the barriers
++	 * implied by the test_and_clear_bit() in
++	 * usb_serial_generic_submit_read_urb().
++	 */
++	smp_mb__before_atomic();
++	set_bit(i, &port->read_urbs_free);
++	/*
++	 * Make sure URB is marked as free before checking the throttled flag
++	 * to avoid racing with unthrottle() on another CPU. Matches the
++	 * smp_mb() in unthrottle().
++	 */
++	smp_mb__after_atomic();
++
++	if (stopped)
++		return;
+ 
+-resubmit:
+ 	/* Throttle the device if requested by tty */
+ 	spin_lock_irqsave(&port->lock, flags);
+ 	port->throttled = port->throttle_req;
+@@ -484,6 +503,12 @@ void usb_serial_generic_unthrottle(struct tty_struct *tty)
+ 	port->throttled = port->throttle_req = 0;
+ 	spin_unlock_irq(&port->lock);
+ 
++	/*
++	 * Matches the smp_mb__after_atomic() in
++	 * usb_serial_generic_read_bulk_callback().
++	 */
++	smp_mb();
++
+ 	if (was_throttled)
+ 		usb_serial_generic_submit_read_urbs(port, GFP_KERNEL);
+ }
+diff --git a/drivers/virt/fsl_hypervisor.c b/drivers/virt/fsl_hypervisor.c
+index 8ba726e600e9..1bbd910d4ddb 100644
+--- a/drivers/virt/fsl_hypervisor.c
++++ b/drivers/virt/fsl_hypervisor.c
+@@ -215,6 +215,9 @@ static long ioctl_memcpy(struct fsl_hv_ioctl_memcpy __user *p)
+ 	 * hypervisor.
+ 	 */
+ 	lb_offset = param.local_vaddr & (PAGE_SIZE - 1);
++	if (param.count == 0 ||
++	    param.count > U64_MAX - lb_offset - PAGE_SIZE + 1)
++		return -EINVAL;
+ 	num_pages = (param.count + lb_offset + PAGE_SIZE - 1) >> PAGE_SHIFT;
+ 
+ 	/* Allocate the buffers we need */
+@@ -331,8 +334,8 @@ static long ioctl_dtprop(struct fsl_hv_ioctl_prop __user *p, int set)
+ 	struct fsl_hv_ioctl_prop param;
+ 	char __user *upath, *upropname;
+ 	void __user *upropval;
+-	char *path = NULL, *propname = NULL;
+-	void *propval = NULL;
++	char *path, *propname;
++	void *propval;
+ 	int ret = 0;
+ 
+ 	/* Get the parameters from the user. */
+@@ -344,32 +347,30 @@ static long ioctl_dtprop(struct fsl_hv_ioctl_prop __user *p, int set)
+ 	upropval = (void __user *)(uintptr_t)param.propval;
+ 
+ 	path = strndup_user(upath, FH_DTPROP_MAX_PATHLEN);
+-	if (IS_ERR(path)) {
+-		ret = PTR_ERR(path);
+-		goto out;
+-	}
++	if (IS_ERR(path))
++		return PTR_ERR(path);
+ 
+ 	propname = strndup_user(upropname, FH_DTPROP_MAX_PATHLEN);
+ 	if (IS_ERR(propname)) {
+ 		ret = PTR_ERR(propname);
+-		goto out;
++		goto err_free_path;
+ 	}
+ 
+ 	if (param.proplen > FH_DTPROP_MAX_PROPLEN) {
+ 		ret = -EINVAL;
+-		goto out;
++		goto err_free_propname;
+ 	}
+ 
+ 	propval = kmalloc(param.proplen, GFP_KERNEL);
+ 	if (!propval) {
+ 		ret = -ENOMEM;
+-		goto out;
++		goto err_free_propname;
+ 	}
+ 
+ 	if (set) {
+ 		if (copy_from_user(propval, upropval, param.proplen)) {
+ 			ret = -EFAULT;
+-			goto out;
++			goto err_free_propval;
+ 		}
+ 
+ 		param.ret = fh_partition_set_dtprop(param.handle,
+@@ -388,7 +389,7 @@ static long ioctl_dtprop(struct fsl_hv_ioctl_prop __user *p, int set)
+ 			if (copy_to_user(upropval, propval, param.proplen) ||
+ 			    put_user(param.proplen, &p->proplen)) {
+ 				ret = -EFAULT;
+-				goto out;
++				goto err_free_propval;
+ 			}
+ 		}
+ 	}
+@@ -396,10 +397,12 @@ static long ioctl_dtprop(struct fsl_hv_ioctl_prop __user *p, int set)
+ 	if (put_user(param.ret, &p->ret))
+ 		ret = -EFAULT;
+ 
+-out:
+-	kfree(path);
++err_free_propval:
+ 	kfree(propval);
++err_free_propname:
+ 	kfree(propname);
++err_free_path:
++	kfree(path);
+ 
+ 	return ret;
+ }
+diff --git a/drivers/virt/vboxguest/vboxguest_core.c b/drivers/virt/vboxguest/vboxguest_core.c
+index 1475ed5ffcde..0afef60d0638 100644
+--- a/drivers/virt/vboxguest/vboxguest_core.c
++++ b/drivers/virt/vboxguest/vboxguest_core.c
+@@ -1263,6 +1263,20 @@ static int vbg_ioctl_hgcm_disconnect(struct vbg_dev *gdev,
+ 	return ret;
+ }
+ 
++static bool vbg_param_valid(enum vmmdev_hgcm_function_parameter_type type)
++{
++	switch (type) {
++	case VMMDEV_HGCM_PARM_TYPE_32BIT:
++	case VMMDEV_HGCM_PARM_TYPE_64BIT:
++	case VMMDEV_HGCM_PARM_TYPE_LINADDR:
++	case VMMDEV_HGCM_PARM_TYPE_LINADDR_IN:
++	case VMMDEV_HGCM_PARM_TYPE_LINADDR_OUT:
++		return true;
++	default:
++		return false;
++	}
++}
++
+ static int vbg_ioctl_hgcm_call(struct vbg_dev *gdev,
+ 			       struct vbg_session *session, bool f32bit,
+ 			       struct vbg_ioctl_hgcm_call *call)
+@@ -1298,6 +1312,23 @@ static int vbg_ioctl_hgcm_call(struct vbg_dev *gdev,
+ 	}
+ 	call->hdr.size_out = actual_size;
+ 
++	/* Validate parameter types */
++	if (f32bit) {
++		struct vmmdev_hgcm_function_parameter32 *parm =
++			VBG_IOCTL_HGCM_CALL_PARMS32(call);
++
++		for (i = 0; i < call->parm_count; i++)
++			if (!vbg_param_valid(parm[i].type))
++				return -EINVAL;
++	} else {
++		struct vmmdev_hgcm_function_parameter *parm =
++			VBG_IOCTL_HGCM_CALL_PARMS(call);
++
++		for (i = 0; i < call->parm_count; i++)
++			if (!vbg_param_valid(parm[i].type))
++				return -EINVAL;
++	}
++
+ 	/*
+ 	 * Validate the client id.
+ 	 */
+diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
+index a38b65b97be0..a659e52cf79c 100644
+--- a/drivers/virtio/virtio_ring.c
++++ b/drivers/virtio/virtio_ring.c
+@@ -993,6 +993,7 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
+ 
+ 	if (unlikely(vq->vq.num_free < 1)) {
+ 		pr_debug("Can't add buf len 1 - avail = 0\n");
++		kfree(desc);
+ 		END_USE(vq);
+ 		return -ENOSPC;
+ 	}
+diff --git a/fs/afs/callback.c b/fs/afs/callback.c
+index 1c7955f5cdaf..128f2dbe256a 100644
+--- a/fs/afs/callback.c
++++ b/fs/afs/callback.c
+@@ -203,8 +203,7 @@ void afs_put_cb_interest(struct afs_net *net, struct afs_cb_interest *cbi)
+  */
+ void afs_init_callback_state(struct afs_server *server)
+ {
+-	if (!test_and_clear_bit(AFS_SERVER_FL_NEW, &server->flags))
+-		server->cb_s_break++;
++	server->cb_s_break++;
+ }
+ 
+ /*
+diff --git a/fs/afs/internal.h b/fs/afs/internal.h
+index 8871b9e8645f..465526f495b0 100644
+--- a/fs/afs/internal.h
++++ b/fs/afs/internal.h
+@@ -475,7 +475,6 @@ struct afs_server {
+ 	time64_t		put_time;	/* Time at which last put */
+ 	time64_t		update_at;	/* Time at which to next update the record */
+ 	unsigned long		flags;
+-#define AFS_SERVER_FL_NEW	0		/* New server, don't inc cb_s_break */
+ #define AFS_SERVER_FL_NOT_READY	1		/* The record is not ready for use */
+ #define AFS_SERVER_FL_NOT_FOUND	2		/* VL server says no such server */
+ #define AFS_SERVER_FL_VL_FAIL	3		/* Failed to access VL server */
+@@ -828,7 +827,7 @@ static inline struct afs_cb_interest *afs_get_cb_interest(struct afs_cb_interest
+ 
+ static inline unsigned int afs_calc_vnode_cb_break(struct afs_vnode *vnode)
+ {
+-	return vnode->cb_break + vnode->cb_s_break + vnode->cb_v_break;
++	return vnode->cb_break + vnode->cb_v_break;
+ }
+ 
+ static inline bool afs_cb_is_broken(unsigned int cb_break,
+@@ -836,7 +835,6 @@ static inline bool afs_cb_is_broken(unsigned int cb_break,
+ 				    const struct afs_cb_interest *cbi)
+ {
+ 	return !cbi || cb_break != (vnode->cb_break +
+-				    cbi->server->cb_s_break +
+ 				    vnode->volume->cb_v_break);
+ }
+ 
+diff --git a/fs/afs/server.c b/fs/afs/server.c
+index 642afa2e9783..65b33b6da48b 100644
+--- a/fs/afs/server.c
++++ b/fs/afs/server.c
+@@ -226,7 +226,6 @@ static struct afs_server *afs_alloc_server(struct afs_net *net,
+ 	RCU_INIT_POINTER(server->addresses, alist);
+ 	server->addr_version = alist->version;
+ 	server->uuid = *uuid;
+-	server->flags = (1UL << AFS_SERVER_FL_NEW);
+ 	server->update_at = ktime_get_real_seconds() + afs_server_update_delay;
+ 	rwlock_init(&server->fs_lock);
+ 	INIT_HLIST_HEAD(&server->cb_volumes);
+diff --git a/fs/afs/write.c b/fs/afs/write.c
+index 72efcfcf9f95..0122d7445fba 100644
+--- a/fs/afs/write.c
++++ b/fs/afs/write.c
+@@ -264,6 +264,7 @@ static void afs_kill_pages(struct address_space *mapping,
+ 				first = page->index + 1;
+ 			lock_page(page);
+ 			generic_error_remove_page(mapping, page);
++			unlock_page(page);
+ 		}
+ 
+ 		__pagevec_release(&pv);
+diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
+index f7f9e305aaf8..fd3db2e112d6 100644
+--- a/fs/ceph/inode.c
++++ b/fs/ceph/inode.c
+@@ -1152,6 +1152,19 @@ static int splice_dentry(struct dentry **pdn, struct inode *in)
+ 	return 0;
+ }
+ 
++static int d_name_cmp(struct dentry *dentry, const char *name, size_t len)
++{
++	int ret;
++
++	/* take d_lock to ensure dentry->d_name stability */
++	spin_lock(&dentry->d_lock);
++	ret = dentry->d_name.len - len;
++	if (!ret)
++		ret = memcmp(dentry->d_name.name, name, len);
++	spin_unlock(&dentry->d_lock);
++	return ret;
++}
++
+ /*
+  * Incorporate results into the local cache.  This is either just
+  * one inode, or a directory, dentry, and possibly linked-to inode (e.g.,
+@@ -1401,7 +1414,8 @@ retry_lookup:
+ 		err = splice_dentry(&req->r_dentry, in);
+ 		if (err < 0)
+ 			goto done;
+-	} else if (rinfo->head->is_dentry) {
++	} else if (rinfo->head->is_dentry &&
++		   !d_name_cmp(req->r_dentry, rinfo->dname, rinfo->dname_len)) {
+ 		struct ceph_vino *ptvino = NULL;
+ 
+ 		if ((le32_to_cpu(rinfo->diri.in->cap.caps) & CEPH_CAP_FILE_SHARED) ||
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index c99aab23efea..6e598dd85fb5 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -218,12 +218,14 @@ struct block_device *f2fs_target_device(struct f2fs_sb_info *sbi,
+ 	struct block_device *bdev = sbi->sb->s_bdev;
+ 	int i;
+ 
+-	for (i = 0; i < sbi->s_ndevs; i++) {
+-		if (FDEV(i).start_blk <= blk_addr &&
+-					FDEV(i).end_blk >= blk_addr) {
+-			blk_addr -= FDEV(i).start_blk;
+-			bdev = FDEV(i).bdev;
+-			break;
++	if (f2fs_is_multi_device(sbi)) {
++		for (i = 0; i < sbi->s_ndevs; i++) {
++			if (FDEV(i).start_blk <= blk_addr &&
++			    FDEV(i).end_blk >= blk_addr) {
++				blk_addr -= FDEV(i).start_blk;
++				bdev = FDEV(i).bdev;
++				break;
++			}
+ 		}
+ 	}
+ 	if (bio) {
+@@ -237,6 +239,9 @@ int f2fs_target_device_index(struct f2fs_sb_info *sbi, block_t blkaddr)
+ {
+ 	int i;
+ 
++	if (!f2fs_is_multi_device(sbi))
++		return 0;
++
+ 	for (i = 0; i < sbi->s_ndevs; i++)
+ 		if (FDEV(i).start_blk <= blkaddr && FDEV(i).end_blk >= blkaddr)
+ 			return i;
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 6d9186a6528c..48f1bbf3e87e 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -1364,6 +1364,17 @@ static inline bool time_to_inject(struct f2fs_sb_info *sbi, int type)
+ }
+ #endif
+ 
++/*
++ * Test if the mounted volume is a multi-device volume.
++ *   - For a single regular disk volume, sbi->s_ndevs is 0.
++ *   - For a single zoned disk volume, sbi->s_ndevs is 1.
++ *   - For a multi-device volume, sbi->s_ndevs is always 2 or more.
++ */
++static inline bool f2fs_is_multi_device(struct f2fs_sb_info *sbi)
++{
++	return sbi->s_ndevs > 1;
++}
++
+ /* For write statistics. Suppose sector size is 512 bytes,
+  * and the return value is in kbytes. s is of struct f2fs_sb_info.
+  */
+@@ -3612,7 +3623,7 @@ static inline bool f2fs_force_buffered_io(struct inode *inode,
+ 
+ 	if (f2fs_post_read_required(inode))
+ 		return true;
+-	if (sbi->s_ndevs)
++	if (f2fs_is_multi_device(sbi))
+ 		return true;
+ 	/*
+ 	 * for blkzoned device, fallback direct IO to buffered IO, so
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 30ed43bce110..bc078aea60ae 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -2570,7 +2570,7 @@ static int f2fs_ioc_flush_device(struct file *filp, unsigned long arg)
+ 							sizeof(range)))
+ 		return -EFAULT;
+ 
+-	if (sbi->s_ndevs <= 1 || sbi->s_ndevs - 1 <= range.dev_num ||
++	if (!f2fs_is_multi_device(sbi) || sbi->s_ndevs - 1 <= range.dev_num ||
+ 			__is_large_section(sbi)) {
+ 		f2fs_msg(sbi->sb, KERN_WARNING,
+ 			"Can't flush %u in %d for segs_per_sec %u != 1\n",
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 195cf0f9d9ef..ab764bd106de 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -1346,7 +1346,7 @@ void f2fs_build_gc_manager(struct f2fs_sb_info *sbi)
+ 	sbi->gc_pin_file_threshold = DEF_GC_FAILED_PINNED_FILES;
+ 
+ 	/* give warm/cold data area from slower device */
+-	if (sbi->s_ndevs && !__is_large_section(sbi))
++	if (f2fs_is_multi_device(sbi) && !__is_large_section(sbi))
+ 		SIT_I(sbi)->last_victim[ALLOC_NEXT] =
+ 				GET_SEGNO(sbi, FDEV(0).end_blk) + 1;
+ }
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index b6c8b0696ef6..2b809b54d81b 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -576,7 +576,7 @@ static int submit_flush_wait(struct f2fs_sb_info *sbi, nid_t ino)
+ 	int ret = 0;
+ 	int i;
+ 
+-	if (!sbi->s_ndevs)
++	if (!f2fs_is_multi_device(sbi))
+ 		return __submit_flush_wait(sbi, sbi->sb->s_bdev);
+ 
+ 	for (i = 0; i < sbi->s_ndevs; i++) {
+@@ -644,7 +644,8 @@ int f2fs_issue_flush(struct f2fs_sb_info *sbi, nid_t ino)
+ 		return ret;
+ 	}
+ 
+-	if (atomic_inc_return(&fcc->queued_flush) == 1 || sbi->s_ndevs > 1) {
++	if (atomic_inc_return(&fcc->queued_flush) == 1 ||
++	    f2fs_is_multi_device(sbi)) {
+ 		ret = submit_flush_wait(sbi, ino);
+ 		atomic_dec(&fcc->queued_flush);
+ 
+@@ -750,7 +751,7 @@ int f2fs_flush_device_cache(struct f2fs_sb_info *sbi)
+ {
+ 	int ret = 0, i;
+ 
+-	if (!sbi->s_ndevs)
++	if (!f2fs_is_multi_device(sbi))
+ 		return 0;
+ 
+ 	for (i = 1; i < sbi->s_ndevs; i++) {
+@@ -1359,7 +1360,7 @@ static int __queue_discard_cmd(struct f2fs_sb_info *sbi,
+ 
+ 	trace_f2fs_queue_discard(bdev, blkstart, blklen);
+ 
+-	if (sbi->s_ndevs) {
++	if (f2fs_is_multi_device(sbi)) {
+ 		int devi = f2fs_target_device_index(sbi, blkstart);
+ 
+ 		blkstart -= FDEV(devi).start_blk;
+@@ -1714,7 +1715,7 @@ static int __f2fs_issue_discard_zone(struct f2fs_sb_info *sbi,
+ 	block_t lblkstart = blkstart;
+ 	int devi = 0;
+ 
+-	if (sbi->s_ndevs) {
++	if (f2fs_is_multi_device(sbi)) {
+ 		devi = f2fs_target_device_index(sbi, blkstart);
+ 		blkstart -= FDEV(devi).start_blk;
+ 	}
+@@ -3071,7 +3072,7 @@ static void update_device_state(struct f2fs_io_info *fio)
+ 	struct f2fs_sb_info *sbi = fio->sbi;
+ 	unsigned int devidx;
+ 
+-	if (!sbi->s_ndevs)
++	if (!f2fs_is_multi_device(sbi))
+ 		return;
+ 
+ 	devidx = f2fs_target_device_index(sbi, fio->new_blkaddr);
+diff --git a/fs/kernfs/dir.c b/fs/kernfs/dir.c
+index 4ca0b5c18192..853a69e493f5 100644
+--- a/fs/kernfs/dir.c
++++ b/fs/kernfs/dir.c
+@@ -650,11 +650,10 @@ static struct kernfs_node *__kernfs_new_node(struct kernfs_root *root,
+ 	kn->id.generation = gen;
+ 
+ 	/*
+-	 * set ino first. This barrier is paired with atomic_inc_not_zero in
++	 * set ino first. This RELEASE is paired with atomic_inc_not_zero in
+ 	 * kernfs_find_and_get_node_by_ino
+ 	 */
+-	smp_mb__before_atomic();
+-	atomic_set(&kn->count, 1);
++	atomic_set_release(&kn->count, 1);
+ 	atomic_set(&kn->active, KN_DEACTIVATED_BIAS);
+ 	RB_CLEAR_NODE(&kn->rb);
+ 
+diff --git a/include/linux/efi.h b/include/linux/efi.h
+index a86485ac7c87..de05a4302529 100644
+--- a/include/linux/efi.h
++++ b/include/linux/efi.h
+@@ -1598,7 +1598,12 @@ efi_status_t efi_setup_gop(efi_system_table_t *sys_table_arg,
+ 			   struct screen_info *si, efi_guid_t *proto,
+ 			   unsigned long size);
+ 
+-bool efi_runtime_disabled(void);
++#ifdef CONFIG_EFI
++extern bool efi_runtime_disabled(void);
++#else
++static inline bool efi_runtime_disabled(void) { return true; }
++#endif
++
+ extern void efi_call_virt_check_flags(unsigned long flags, const char *call);
+ 
+ enum efi_secureboot_mode {
+diff --git a/include/linux/elevator.h b/include/linux/elevator.h
+index 2e9e2763bf47..6e8bc53740f0 100644
+--- a/include/linux/elevator.h
++++ b/include/linux/elevator.h
+@@ -31,6 +31,7 @@ struct elevator_mq_ops {
+ 	void (*exit_sched)(struct elevator_queue *);
+ 	int (*init_hctx)(struct blk_mq_hw_ctx *, unsigned int);
+ 	void (*exit_hctx)(struct blk_mq_hw_ctx *, unsigned int);
++	void (*depth_updated)(struct blk_mq_hw_ctx *);
+ 
+ 	bool (*allow_merge)(struct request_queue *, struct request *, struct bio *);
+ 	bool (*bio_merge)(struct blk_mq_hw_ctx *, struct bio *);
+diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
+index cf761ff58224..e41503b2c5a1 100644
+--- a/include/linux/kvm_host.h
++++ b/include/linux/kvm_host.h
+@@ -28,6 +28,7 @@
+ #include <linux/irqbypass.h>
+ #include <linux/swait.h>
+ #include <linux/refcount.h>
++#include <linux/nospec.h>
+ #include <asm/signal.h>
+ 
+ #include <linux/kvm.h>
+@@ -492,10 +493,10 @@ static inline struct kvm_io_bus *kvm_get_bus(struct kvm *kvm, enum kvm_bus idx)
+ 
+ static inline struct kvm_vcpu *kvm_get_vcpu(struct kvm *kvm, int i)
+ {
+-	/* Pairs with smp_wmb() in kvm_vm_ioctl_create_vcpu, in case
+-	 * the caller has read kvm->online_vcpus before (as is the case
+-	 * for kvm_for_each_vcpu, for example).
+-	 */
++	int num_vcpus = atomic_read(&kvm->online_vcpus);
++	i = array_index_nospec(i, num_vcpus);
++
++	/* Pairs with smp_wmb() in kvm_vm_ioctl_create_vcpu.  */
+ 	smp_rmb();
+ 	return kvm->vcpus[i];
+ }
+@@ -579,6 +580,7 @@ void kvm_put_kvm(struct kvm *kvm);
+ 
+ static inline struct kvm_memslots *__kvm_memslots(struct kvm *kvm, int as_id)
+ {
++	as_id = array_index_nospec(as_id, KVM_ADDRESS_SPACE_NUM);
+ 	return srcu_dereference_check(kvm->memslots[as_id], &kvm->srcu,
+ 			lockdep_is_held(&kvm->slots_lock) ||
+ 			!refcount_read(&kvm->users_count));
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index bdb9563c64a0..b8679dcba96f 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -4212,10 +4212,10 @@ static inline bool skb_is_gso_sctp(const struct sk_buff *skb)
+ 	return skb_shinfo(skb)->gso_type & SKB_GSO_SCTP;
+ }
+ 
++/* Note: Should be called only if skb_is_gso(skb) is true */
+ static inline bool skb_is_gso_tcp(const struct sk_buff *skb)
+ {
+-	return skb_is_gso(skb) &&
+-	       skb_shinfo(skb)->gso_type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6);
++	return skb_shinfo(skb)->gso_type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6);
+ }
+ 
+ static inline void skb_gso_reset(struct sk_buff *skb)
+diff --git a/include/net/netfilter/nf_conntrack.h b/include/net/netfilter/nf_conntrack.h
+index 249d0a5b12b8..63fd47e924b9 100644
+--- a/include/net/netfilter/nf_conntrack.h
++++ b/include/net/netfilter/nf_conntrack.h
+@@ -318,6 +318,8 @@ struct nf_conn *nf_ct_tmpl_alloc(struct net *net,
+ 				 gfp_t flags);
+ void nf_ct_tmpl_free(struct nf_conn *tmpl);
+ 
++u32 nf_ct_get_id(const struct nf_conn *ct);
++
+ static inline void
+ nf_ct_set(struct sk_buff *skb, struct nf_conn *ct, enum ip_conntrack_info info)
+ {
+diff --git a/include/uapi/rdma/mlx5-abi.h b/include/uapi/rdma/mlx5-abi.h
+index 87b3198f4b5d..f4d4010b7e3e 100644
+--- a/include/uapi/rdma/mlx5-abi.h
++++ b/include/uapi/rdma/mlx5-abi.h
+@@ -238,6 +238,7 @@ enum mlx5_ib_query_dev_resp_flags {
+ 	MLX5_IB_QUERY_DEV_RESP_FLAGS_CQE_128B_COMP = 1 << 0,
+ 	MLX5_IB_QUERY_DEV_RESP_FLAGS_CQE_128B_PAD  = 1 << 1,
+ 	MLX5_IB_QUERY_DEV_RESP_PACKET_BASED_CREDIT_MODE = 1 << 2,
++	MLX5_IB_QUERY_DEV_RESP_FLAGS_SCAT2CQE_DCT = 1 << 3,
+ };
+ 
+ enum mlx5_ib_tunnel_offloads {
+diff --git a/init/main.c b/init/main.c
+index c86a1c8f19f4..7ae824545265 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -574,6 +574,8 @@ asmlinkage __visible void __init start_kernel(void)
+ 	page_alloc_init();
+ 
+ 	pr_notice("Kernel command line: %s\n", boot_command_line);
++	/* parameters may set static keys */
++	jump_label_init();
+ 	parse_early_param();
+ 	after_dashes = parse_args("Booting kernel",
+ 				  static_command_line, __start___param,
+@@ -583,8 +585,6 @@ asmlinkage __visible void __init start_kernel(void)
+ 		parse_args("Setting init args", after_dashes, NULL, 0, -1, -1,
+ 			   NULL, set_init_arg);
+ 
+-	jump_label_init();
+-
+ 	/*
+ 	 * These use large bootmem allocations and must precede
+ 	 * kmem_cache_init()
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index 11593a03c051..7493f50ee880 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -858,6 +858,7 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages, int online_typ
+ 	 */
+ 	mem = find_memory_block(__pfn_to_section(pfn));
+ 	nid = mem->nid;
++	put_device(&mem->dev);
+ 
+ 	/* associate pfn range with the zone */
+ 	zone = move_pfn_range(online_type, nid, pfn, nr_pages);
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 318ef6ccdb3b..d59be95ba45c 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -3385,6 +3385,9 @@ alloc_flags_nofragment(struct zone *zone, gfp_t gfp_mask)
+ 		alloc_flags |= ALLOC_KSWAPD;
+ 
+ #ifdef CONFIG_ZONE_DMA32
++	if (!zone)
++		return alloc_flags;
++
+ 	if (zone_idx(zone) != ZONE_NORMAL)
+ 		goto out;
+ 
+@@ -7945,7 +7948,10 @@ void *__init alloc_large_system_hash(const char *tablename,
+ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
+ 			 int migratetype, int flags)
+ {
+-	unsigned long pfn, iter, found;
++	unsigned long found;
++	unsigned long iter = 0;
++	unsigned long pfn = page_to_pfn(page);
++	const char *reason = "unmovable page";
+ 
+ 	/*
+ 	 * TODO we could make this much more efficient by not checking every
+@@ -7955,17 +7961,20 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
+ 	 * can still lead to having bootmem allocations in zone_movable.
+ 	 */
+ 
+-	/*
+-	 * CMA allocations (alloc_contig_range) really need to mark isolate
+-	 * CMA pageblocks even when they are not movable in fact so consider
+-	 * them movable here.
+-	 */
+-	if (is_migrate_cma(migratetype) &&
+-			is_migrate_cma(get_pageblock_migratetype(page)))
+-		return false;
++	if (is_migrate_cma_page(page)) {
++		/*
++		 * CMA allocations (alloc_contig_range) really need to mark
++		 * isolate CMA pageblocks even when they are not movable in fact
++		 * so consider them movable here.
++		 */
++		if (is_migrate_cma(migratetype))
++			return false;
++
++		reason = "CMA page";
++		goto unmovable;
++	}
+ 
+-	pfn = page_to_pfn(page);
+-	for (found = 0, iter = 0; iter < pageblock_nr_pages; iter++) {
++	for (found = 0; iter < pageblock_nr_pages; iter++) {
+ 		unsigned long check = pfn + iter;
+ 
+ 		if (!pfn_valid_within(check))
+@@ -8045,7 +8054,7 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
+ unmovable:
+ 	WARN_ON_ONCE(zone_idx(zone) == ZONE_MOVABLE);
+ 	if (flags & REPORT_FAILURE)
+-		dump_page(pfn_to_page(pfn+iter), "unmovable page");
++		dump_page(pfn_to_page(pfn + iter), reason);
+ 	return true;
+ }
+ 
+diff --git a/mm/slab.c b/mm/slab.c
+index 188c4b65255d..f4bbc53008f3 100644
+--- a/mm/slab.c
++++ b/mm/slab.c
+@@ -2371,7 +2371,6 @@ static void *alloc_slabmgmt(struct kmem_cache *cachep,
+ 		/* Slab management obj is off-slab. */
+ 		freelist = kmem_cache_alloc_node(cachep->freelist_cache,
+ 					      local_flags, nodeid);
+-		freelist = kasan_reset_tag(freelist);
+ 		if (!freelist)
+ 			return NULL;
+ 	} else {
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index e979705bbf32..022afabac3f6 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -2199,7 +2199,6 @@ static void shrink_active_list(unsigned long nr_to_scan,
+  *   10TB     320        32GB
+  */
+ static bool inactive_list_is_low(struct lruvec *lruvec, bool file,
+-				 struct mem_cgroup *memcg,
+ 				 struct scan_control *sc, bool actual_reclaim)
+ {
+ 	enum lru_list active_lru = file * LRU_FILE + LRU_ACTIVE;
+@@ -2220,16 +2219,12 @@ static bool inactive_list_is_low(struct lruvec *lruvec, bool file,
+ 	inactive = lruvec_lru_size(lruvec, inactive_lru, sc->reclaim_idx);
+ 	active = lruvec_lru_size(lruvec, active_lru, sc->reclaim_idx);
+ 
+-	if (memcg)
+-		refaults = memcg_page_state(memcg, WORKINGSET_ACTIVATE);
+-	else
+-		refaults = node_page_state(pgdat, WORKINGSET_ACTIVATE);
+-
+ 	/*
+ 	 * When refaults are being observed, it means a new workingset
+ 	 * is being established. Disable active list protection to get
+ 	 * rid of the stale workingset quickly.
+ 	 */
++	refaults = lruvec_page_state(lruvec, WORKINGSET_ACTIVATE);
+ 	if (file && actual_reclaim && lruvec->refaults != refaults) {
+ 		inactive_ratio = 0;
+ 	} else {
+@@ -2250,12 +2245,10 @@ static bool inactive_list_is_low(struct lruvec *lruvec, bool file,
+ }
+ 
+ static unsigned long shrink_list(enum lru_list lru, unsigned long nr_to_scan,
+-				 struct lruvec *lruvec, struct mem_cgroup *memcg,
+-				 struct scan_control *sc)
++				 struct lruvec *lruvec, struct scan_control *sc)
+ {
+ 	if (is_active_lru(lru)) {
+-		if (inactive_list_is_low(lruvec, is_file_lru(lru),
+-					 memcg, sc, true))
++		if (inactive_list_is_low(lruvec, is_file_lru(lru), sc, true))
+ 			shrink_active_list(nr_to_scan, lruvec, sc, lru);
+ 		return 0;
+ 	}
+@@ -2355,7 +2348,7 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
+ 			 * anonymous pages on the LRU in eligible zones.
+ 			 * Otherwise, the small LRU gets thrashed.
+ 			 */
+-			if (!inactive_list_is_low(lruvec, false, memcg, sc, false) &&
++			if (!inactive_list_is_low(lruvec, false, sc, false) &&
+ 			    lruvec_lru_size(lruvec, LRU_INACTIVE_ANON, sc->reclaim_idx)
+ 					>> sc->priority) {
+ 				scan_balance = SCAN_ANON;
+@@ -2373,7 +2366,7 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
+ 	 * lruvec even if it has plenty of old anonymous pages unless the
+ 	 * system is under heavy pressure.
+ 	 */
+-	if (!inactive_list_is_low(lruvec, true, memcg, sc, false) &&
++	if (!inactive_list_is_low(lruvec, true, sc, false) &&
+ 	    lruvec_lru_size(lruvec, LRU_INACTIVE_FILE, sc->reclaim_idx) >> sc->priority) {
+ 		scan_balance = SCAN_FILE;
+ 		goto out;
+@@ -2526,7 +2519,7 @@ static void shrink_node_memcg(struct pglist_data *pgdat, struct mem_cgroup *memc
+ 				nr[lru] -= nr_to_scan;
+ 
+ 				nr_reclaimed += shrink_list(lru, nr_to_scan,
+-							    lruvec, memcg, sc);
++							    lruvec, sc);
+ 			}
+ 		}
+ 
+@@ -2593,7 +2586,7 @@ static void shrink_node_memcg(struct pglist_data *pgdat, struct mem_cgroup *memc
+ 	 * Even if we did not try to evict anon pages at all, we want to
+ 	 * rebalance the anon lru active/inactive ratio.
+ 	 */
+-	if (inactive_list_is_low(lruvec, false, memcg, sc, true))
++	if (inactive_list_is_low(lruvec, false, sc, true))
+ 		shrink_active_list(SWAP_CLUSTER_MAX, lruvec,
+ 				   sc, LRU_ACTIVE_ANON);
+ }
+@@ -2993,12 +2986,8 @@ static void snapshot_refaults(struct mem_cgroup *root_memcg, pg_data_t *pgdat)
+ 		unsigned long refaults;
+ 		struct lruvec *lruvec;
+ 
+-		if (memcg)
+-			refaults = memcg_page_state(memcg, WORKINGSET_ACTIVATE);
+-		else
+-			refaults = node_page_state(pgdat, WORKINGSET_ACTIVATE);
+-
+ 		lruvec = mem_cgroup_lruvec(pgdat, memcg);
++		refaults = lruvec_page_state(lruvec, WORKINGSET_ACTIVATE);
+ 		lruvec->refaults = refaults;
+ 	} while ((memcg = mem_cgroup_iter(root_memcg, memcg, NULL)));
+ }
+@@ -3363,7 +3352,7 @@ static void age_active_anon(struct pglist_data *pgdat,
+ 	do {
+ 		struct lruvec *lruvec = mem_cgroup_lruvec(pgdat, memcg);
+ 
+-		if (inactive_list_is_low(lruvec, false, memcg, sc, true))
++		if (inactive_list_is_low(lruvec, false, sc, true))
+ 			shrink_active_list(SWAP_CLUSTER_MAX, lruvec,
+ 					   sc, LRU_ACTIVE_ANON);
+ 
+diff --git a/net/8021q/vlan_dev.c b/net/8021q/vlan_dev.c
+index b2d9c8f27cd7..1991ce2eb268 100644
+--- a/net/8021q/vlan_dev.c
++++ b/net/8021q/vlan_dev.c
+@@ -368,10 +368,12 @@ static int vlan_dev_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+ 	ifrr.ifr_ifru = ifr->ifr_ifru;
+ 
+ 	switch (cmd) {
++	case SIOCSHWTSTAMP:
++		if (!net_eq(dev_net(dev), &init_net))
++			break;
+ 	case SIOCGMIIPHY:
+ 	case SIOCGMIIREG:
+ 	case SIOCSMIIREG:
+-	case SIOCSHWTSTAMP:
+ 	case SIOCGHWTSTAMP:
+ 		if (netif_device_present(real_dev) && ops->ndo_do_ioctl)
+ 			err = ops->ndo_do_ioctl(real_dev, &ifrr, cmd);
+diff --git a/net/bridge/br_if.c b/net/bridge/br_if.c
+index 41f0a696a65f..0cb0aa0313a8 100644
+--- a/net/bridge/br_if.c
++++ b/net/bridge/br_if.c
+@@ -602,13 +602,15 @@ int br_add_if(struct net_bridge *br, struct net_device *dev,
+ 	call_netdevice_notifiers(NETDEV_JOIN, dev);
+ 
+ 	err = dev_set_allmulti(dev, 1);
+-	if (err)
+-		goto put_back;
++	if (err) {
++		kfree(p);	/* kobject not yet init'd, manually free */
++		goto err1;
++	}
+ 
+ 	err = kobject_init_and_add(&p->kobj, &brport_ktype, &(dev->dev.kobj),
+ 				   SYSFS_BRIDGE_PORT_ATTR);
+ 	if (err)
+-		goto err1;
++		goto err2;
+ 
+ 	err = br_sysfs_addif(p);
+ 	if (err)
+@@ -700,12 +702,9 @@ err3:
+ 	sysfs_remove_link(br->ifobj, p->dev->name);
+ err2:
+ 	kobject_put(&p->kobj);
+-	p = NULL; /* kobject_put frees */
+-err1:
+ 	dev_set_allmulti(dev, -1);
+-put_back:
++err1:
+ 	dev_put(dev);
+-	kfree(p);
+ 	return err;
+ }
+ 
+diff --git a/net/core/fib_rules.c b/net/core/fib_rules.c
+index ffbb827723a2..c49b752ea7eb 100644
+--- a/net/core/fib_rules.c
++++ b/net/core/fib_rules.c
+@@ -756,9 +756,9 @@ int fib_nl_newrule(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 	if (err)
+ 		goto errout;
+ 
+-	if ((nlh->nlmsg_flags & NLM_F_EXCL) &&
+-	    rule_exists(ops, frh, tb, rule)) {
+-		err = -EEXIST;
++	if (rule_exists(ops, frh, tb, rule)) {
++		if (nlh->nlmsg_flags & NLM_F_EXCL)
++			err = -EEXIST;
+ 		goto errout_free;
+ 	}
+ 
+diff --git a/net/core/filter.c b/net/core/filter.c
+index f7d0004fc160..ff07996515f2 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -2789,7 +2789,7 @@ static int bpf_skb_proto_4_to_6(struct sk_buff *skb)
+ 	u32 off = skb_mac_header_len(skb);
+ 	int ret;
+ 
+-	if (!skb_is_gso_tcp(skb))
++	if (skb_is_gso(skb) && !skb_is_gso_tcp(skb))
+ 		return -ENOTSUPP;
+ 
+ 	ret = skb_cow(skb, len_diff);
+@@ -2830,7 +2830,7 @@ static int bpf_skb_proto_6_to_4(struct sk_buff *skb)
+ 	u32 off = skb_mac_header_len(skb);
+ 	int ret;
+ 
+-	if (!skb_is_gso_tcp(skb))
++	if (skb_is_gso(skb) && !skb_is_gso_tcp(skb))
+ 		return -ENOTSUPP;
+ 
+ 	ret = skb_unclone(skb, GFP_ATOMIC);
+@@ -2955,7 +2955,7 @@ static int bpf_skb_net_grow(struct sk_buff *skb, u32 len_diff)
+ 	u32 off = skb_mac_header_len(skb) + bpf_skb_net_base_len(skb);
+ 	int ret;
+ 
+-	if (!skb_is_gso_tcp(skb))
++	if (skb_is_gso(skb) && !skb_is_gso_tcp(skb))
+ 		return -ENOTSUPP;
+ 
+ 	ret = skb_cow(skb, len_diff);
+@@ -2984,7 +2984,7 @@ static int bpf_skb_net_shrink(struct sk_buff *skb, u32 len_diff)
+ 	u32 off = skb_mac_header_len(skb) + bpf_skb_net_base_len(skb);
+ 	int ret;
+ 
+-	if (!skb_is_gso_tcp(skb))
++	if (skb_is_gso(skb) && !skb_is_gso_tcp(skb))
+ 		return -ENOTSUPP;
+ 
+ 	ret = skb_unclone(skb, GFP_ATOMIC);
+diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
+index 9f2840510e63..afc6e025c85c 100644
+--- a/net/core/flow_dissector.c
++++ b/net/core/flow_dissector.c
+@@ -786,7 +786,10 @@ bool __skb_flow_dissect(const struct sk_buff *skb,
+ 		flow_keys.thoff = nhoff;
+ 
+ 		bpf_compute_data_pointers((struct sk_buff *)skb);
++
++		preempt_disable();
+ 		result = BPF_PROG_RUN(attached, skb);
++		preempt_enable();
+ 
+ 		/* Restore state */
+ 		memcpy(cb, &cb_saved, sizeof(cb_saved));
+diff --git a/net/dsa/dsa.c b/net/dsa/dsa.c
+index aee909bcddc4..41d534d3f42b 100644
+--- a/net/dsa/dsa.c
++++ b/net/dsa/dsa.c
+@@ -342,15 +342,22 @@ static int __init dsa_init_module(void)
+ 
+ 	rc = dsa_slave_register_notifier();
+ 	if (rc)
+-		return rc;
++		goto register_notifier_fail;
+ 
+ 	rc = dsa_legacy_register();
+ 	if (rc)
+-		return rc;
++		goto legacy_register_fail;
+ 
+ 	dev_add_pack(&dsa_pack_type);
+ 
+ 	return 0;
++
++legacy_register_fail:
++	dsa_slave_unregister_notifier();
++register_notifier_fail:
++	destroy_workqueue(dsa_owq);
++
++	return rc;
+ }
+ module_init(dsa_init_module);
+ 
+diff --git a/net/ipv4/raw.c b/net/ipv4/raw.c
+index c55a5432cf37..dc91c27bb788 100644
+--- a/net/ipv4/raw.c
++++ b/net/ipv4/raw.c
+@@ -173,6 +173,7 @@ static int icmp_filter(const struct sock *sk, const struct sk_buff *skb)
+ static int raw_v4_input(struct sk_buff *skb, const struct iphdr *iph, int hash)
+ {
+ 	int sdif = inet_sdif(skb);
++	int dif = inet_iif(skb);
+ 	struct sock *sk;
+ 	struct hlist_head *head;
+ 	int delivered = 0;
+@@ -185,8 +186,7 @@ static int raw_v4_input(struct sk_buff *skb, const struct iphdr *iph, int hash)
+ 
+ 	net = dev_net(skb->dev);
+ 	sk = __raw_v4_lookup(net, __sk_head(head), iph->protocol,
+-			     iph->saddr, iph->daddr,
+-			     skb->dev->ifindex, sdif);
++			     iph->saddr, iph->daddr, dif, sdif);
+ 
+ 	while (sk) {
+ 		delivered = 1;
+diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c
+index b2109b74857d..971d60bf9640 100644
+--- a/net/ipv6/sit.c
++++ b/net/ipv6/sit.c
+@@ -1084,7 +1084,7 @@ static void ipip6_tunnel_bind_dev(struct net_device *dev)
+ 	if (!tdev && tunnel->parms.link)
+ 		tdev = __dev_get_by_index(tunnel->net, tunnel->parms.link);
+ 
+-	if (tdev) {
++	if (tdev && !netif_is_l3_master(tdev)) {
+ 		int t_hlen = tunnel->hlen + sizeof(struct iphdr);
+ 
+ 		dev->hard_header_len = tdev->hard_header_len + sizeof(struct iphdr);
+diff --git a/net/mac80211/mesh_pathtbl.c b/net/mac80211/mesh_pathtbl.c
+index 88a6d5e18ccc..ac1f5db52994 100644
+--- a/net/mac80211/mesh_pathtbl.c
++++ b/net/mac80211/mesh_pathtbl.c
+@@ -23,7 +23,7 @@ static void mesh_path_free_rcu(struct mesh_table *tbl, struct mesh_path *mpath);
+ static u32 mesh_table_hash(const void *addr, u32 len, u32 seed)
+ {
+ 	/* Use last four bytes of hw addr as hash index */
+-	return jhash_1word(*(u32 *)(addr+2), seed);
++	return jhash_1word(__get_unaligned_cpu32((u8 *)addr + 2), seed);
+ }
+ 
+ static const struct rhashtable_params mesh_rht_params = {
+diff --git a/net/mac80211/trace_msg.h b/net/mac80211/trace_msg.h
+index 366b9e6f043e..40141df09f25 100644
+--- a/net/mac80211/trace_msg.h
++++ b/net/mac80211/trace_msg.h
+@@ -1,4 +1,9 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
++/*
++ * Portions of this file
++ * Copyright (C) 2019 Intel Corporation
++ */
++
+ #ifdef CONFIG_MAC80211_MESSAGE_TRACING
+ 
+ #if !defined(__MAC80211_MSG_DRIVER_TRACE) || defined(TRACE_HEADER_MULTI_READ)
+@@ -11,7 +16,7 @@
+ #undef TRACE_SYSTEM
+ #define TRACE_SYSTEM mac80211_msg
+ 
+-#define MAX_MSG_LEN	100
++#define MAX_MSG_LEN	120
+ 
+ DECLARE_EVENT_CLASS(mac80211_msg_event,
+ 	TP_PROTO(struct va_format *vaf),
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index 928f13a208b0..714d80e48a10 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -3214,6 +3214,7 @@ static bool ieee80211_amsdu_aggregate(struct ieee80211_sub_if_data *sdata,
+ 	u8 max_subframes = sta->sta.max_amsdu_subframes;
+ 	int max_frags = local->hw.max_tx_fragments;
+ 	int max_amsdu_len = sta->sta.max_amsdu_len;
++	int orig_truesize;
+ 	__be16 len;
+ 	void *data;
+ 	bool ret = false;
+@@ -3254,6 +3255,7 @@ static bool ieee80211_amsdu_aggregate(struct ieee80211_sub_if_data *sdata,
+ 	if (!head || skb_is_gso(head))
+ 		goto out;
+ 
++	orig_truesize = head->truesize;
+ 	orig_len = head->len;
+ 
+ 	if (skb->len + head->len > max_amsdu_len)
+@@ -3311,6 +3313,7 @@ static bool ieee80211_amsdu_aggregate(struct ieee80211_sub_if_data *sdata,
+ 	*frag_tail = skb;
+ 
+ out_recalc:
++	fq->memory_usage += head->truesize - orig_truesize;
+ 	if (head->len != orig_len) {
+ 		flow->backlog += head->len - orig_len;
+ 		tin->backlog_bytes += head->len - orig_len;
+diff --git a/net/netfilter/ipvs/ip_vs_core.c b/net/netfilter/ipvs/ip_vs_core.c
+index 235205c93e14..df112b27246a 100644
+--- a/net/netfilter/ipvs/ip_vs_core.c
++++ b/net/netfilter/ipvs/ip_vs_core.c
+@@ -1647,7 +1647,7 @@ ip_vs_in_icmp(struct netns_ipvs *ipvs, struct sk_buff *skb, int *related,
+ 	if (!cp) {
+ 		int v;
+ 
+-		if (!sysctl_schedule_icmp(ipvs))
++		if (ipip || !sysctl_schedule_icmp(ipvs))
+ 			return NF_ACCEPT;
+ 
+ 		if (!ip_vs_try_to_schedule(ipvs, AF_INET, skb, pd, &v, &cp, &ciph))
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index 9dd4c2048a2b..d7ac2f82bb6d 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -25,6 +25,7 @@
+ #include <linux/slab.h>
+ #include <linux/random.h>
+ #include <linux/jhash.h>
++#include <linux/siphash.h>
+ #include <linux/err.h>
+ #include <linux/percpu.h>
+ #include <linux/moduleparam.h>
+@@ -424,6 +425,40 @@ nf_ct_invert_tuple(struct nf_conntrack_tuple *inverse,
+ }
+ EXPORT_SYMBOL_GPL(nf_ct_invert_tuple);
+ 
++/* Generate a almost-unique pseudo-id for a given conntrack.
++ *
++ * intentionally doesn't re-use any of the seeds used for hash
++ * table location, we assume id gets exposed to userspace.
++ *
++ * Following nf_conn items do not change throughout lifetime
++ * of the nf_conn after it has been committed to main hash table:
++ *
++ * 1. nf_conn address
++ * 2. nf_conn->ext address
++ * 3. nf_conn->master address (normally NULL)
++ * 4. tuple
++ * 5. the associated net namespace
++ */
++u32 nf_ct_get_id(const struct nf_conn *ct)
++{
++	static __read_mostly siphash_key_t ct_id_seed;
++	unsigned long a, b, c, d;
++
++	net_get_random_once(&ct_id_seed, sizeof(ct_id_seed));
++
++	a = (unsigned long)ct;
++	b = (unsigned long)ct->master ^ net_hash_mix(nf_ct_net(ct));
++	c = (unsigned long)ct->ext;
++	d = (unsigned long)siphash(&ct->tuplehash, sizeof(ct->tuplehash),
++				   &ct_id_seed);
++#ifdef CONFIG_64BIT
++	return siphash_4u64((u64)a, (u64)b, (u64)c, (u64)d, &ct_id_seed);
++#else
++	return siphash_4u32((u32)a, (u32)b, (u32)c, (u32)d, &ct_id_seed);
++#endif
++}
++EXPORT_SYMBOL_GPL(nf_ct_get_id);
++
+ static void
+ clean_from_lists(struct nf_conn *ct)
+ {
+@@ -948,12 +983,9 @@ __nf_conntrack_confirm(struct sk_buff *skb)
+ 
+ 	/* set conntrack timestamp, if enabled. */
+ 	tstamp = nf_conn_tstamp_find(ct);
+-	if (tstamp) {
+-		if (skb->tstamp == 0)
+-			__net_timestamp(skb);
++	if (tstamp)
++		tstamp->start = ktime_get_real_ns();
+ 
+-		tstamp->start = ktime_to_ns(skb->tstamp);
+-	}
+ 	/* Since the lookup is lockless, hash insertion must be done after
+ 	 * starting the timer and setting the CONFIRMED bit. The RCU barriers
+ 	 * guarantee that no other CPU can find the conntrack before the above
+diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
+index 1213beb5a714..36619ad8ab8c 100644
+--- a/net/netfilter/nf_conntrack_netlink.c
++++ b/net/netfilter/nf_conntrack_netlink.c
+@@ -29,6 +29,7 @@
+ #include <linux/spinlock.h>
+ #include <linux/interrupt.h>
+ #include <linux/slab.h>
++#include <linux/siphash.h>
+ 
+ #include <linux/netfilter.h>
+ #include <net/netlink.h>
+@@ -485,7 +486,9 @@ nla_put_failure:
+ 
+ static int ctnetlink_dump_id(struct sk_buff *skb, const struct nf_conn *ct)
+ {
+-	if (nla_put_be32(skb, CTA_ID, htonl((unsigned long)ct)))
++	__be32 id = (__force __be32)nf_ct_get_id(ct);
++
++	if (nla_put_be32(skb, CTA_ID, id))
+ 		goto nla_put_failure;
+ 	return 0;
+ 
+@@ -1286,8 +1289,9 @@ static int ctnetlink_del_conntrack(struct net *net, struct sock *ctnl,
+ 	}
+ 
+ 	if (cda[CTA_ID]) {
+-		u_int32_t id = ntohl(nla_get_be32(cda[CTA_ID]));
+-		if (id != (u32)(unsigned long)ct) {
++		__be32 id = nla_get_be32(cda[CTA_ID]);
++
++		if (id != (__force __be32)nf_ct_get_id(ct)) {
+ 			nf_ct_put(ct);
+ 			return -ENOENT;
+ 		}
+@@ -2694,6 +2698,25 @@ nla_put_failure:
+ 
+ static const union nf_inet_addr any_addr;
+ 
++static __be32 nf_expect_get_id(const struct nf_conntrack_expect *exp)
++{
++	static __read_mostly siphash_key_t exp_id_seed;
++	unsigned long a, b, c, d;
++
++	net_get_random_once(&exp_id_seed, sizeof(exp_id_seed));
++
++	a = (unsigned long)exp;
++	b = (unsigned long)exp->helper;
++	c = (unsigned long)exp->master;
++	d = (unsigned long)siphash(&exp->tuple, sizeof(exp->tuple), &exp_id_seed);
++
++#ifdef CONFIG_64BIT
++	return (__force __be32)siphash_4u64((u64)a, (u64)b, (u64)c, (u64)d, &exp_id_seed);
++#else
++	return (__force __be32)siphash_4u32((u32)a, (u32)b, (u32)c, (u32)d, &exp_id_seed);
++#endif
++}
++
+ static int
+ ctnetlink_exp_dump_expect(struct sk_buff *skb,
+ 			  const struct nf_conntrack_expect *exp)
+@@ -2741,7 +2764,7 @@ ctnetlink_exp_dump_expect(struct sk_buff *skb,
+ 	}
+ #endif
+ 	if (nla_put_be32(skb, CTA_EXPECT_TIMEOUT, htonl(timeout)) ||
+-	    nla_put_be32(skb, CTA_EXPECT_ID, htonl((unsigned long)exp)) ||
++	    nla_put_be32(skb, CTA_EXPECT_ID, nf_expect_get_id(exp)) ||
+ 	    nla_put_be32(skb, CTA_EXPECT_FLAGS, htonl(exp->flags)) ||
+ 	    nla_put_be32(skb, CTA_EXPECT_CLASS, htonl(exp->class)))
+ 		goto nla_put_failure;
+@@ -3046,7 +3069,8 @@ static int ctnetlink_get_expect(struct net *net, struct sock *ctnl,
+ 
+ 	if (cda[CTA_EXPECT_ID]) {
+ 		__be32 id = nla_get_be32(cda[CTA_EXPECT_ID]);
+-		if (ntohl(id) != (u32)(unsigned long)exp) {
++
++		if (id != nf_expect_get_id(exp)) {
+ 			nf_ct_expect_put(exp);
+ 			return -ENOENT;
+ 		}
+diff --git a/net/netfilter/nf_conntrack_proto.c b/net/netfilter/nf_conntrack_proto.c
+index 859f5d07a915..78361e462e80 100644
+--- a/net/netfilter/nf_conntrack_proto.c
++++ b/net/netfilter/nf_conntrack_proto.c
+@@ -86,7 +86,7 @@ void nf_l4proto_log_invalid(const struct sk_buff *skb,
+ 	struct va_format vaf;
+ 	va_list args;
+ 
+-	if (net->ct.sysctl_log_invalid != protonum ||
++	if (net->ct.sysctl_log_invalid != protonum &&
+ 	    net->ct.sysctl_log_invalid != IPPROTO_RAW)
+ 		return;
+ 
+diff --git a/net/netfilter/nf_nat_core.c b/net/netfilter/nf_nat_core.c
+index d159e9e7835b..ade527565127 100644
+--- a/net/netfilter/nf_nat_core.c
++++ b/net/netfilter/nf_nat_core.c
+@@ -358,9 +358,14 @@ static void nf_nat_l4proto_unique_tuple(struct nf_conntrack_tuple *tuple,
+ 	case IPPROTO_ICMPV6:
+ 		/* id is same for either direction... */
+ 		keyptr = &tuple->src.u.icmp.id;
+-		min = range->min_proto.icmp.id;
+-		range_size = ntohs(range->max_proto.icmp.id) -
+-			     ntohs(range->min_proto.icmp.id) + 1;
++		if (!(range->flags & NF_NAT_RANGE_PROTO_SPECIFIED)) {
++			min = 0;
++			range_size = 65536;
++		} else {
++			min = ntohs(range->min_proto.icmp.id);
++			range_size = ntohs(range->max_proto.icmp.id) -
++				     ntohs(range->min_proto.icmp.id) + 1;
++		}
+ 		goto find_free_id;
+ #if IS_ENABLED(CONFIG_NF_CT_PROTO_GRE)
+ 	case IPPROTO_GRE:
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index e2aac80f9b7b..25c2b98b9a96 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -1502,7 +1502,7 @@ static int nft_chain_parse_hook(struct net *net,
+ 		if (IS_ERR(type))
+ 			return PTR_ERR(type);
+ 	}
+-	if (!(type->hook_mask & (1 << hook->num)))
++	if (hook->num > NF_MAX_HOOKS || !(type->hook_mask & (1 << hook->num)))
+ 		return -EOPNOTSUPP;
+ 
+ 	if (type->type == NFT_CHAIN_T_NAT &&
+diff --git a/net/netfilter/nfnetlink_log.c b/net/netfilter/nfnetlink_log.c
+index b1f9c5303f02..0b3347570265 100644
+--- a/net/netfilter/nfnetlink_log.c
++++ b/net/netfilter/nfnetlink_log.c
+@@ -540,7 +540,7 @@ __build_packet_message(struct nfnl_log_net *log,
+ 			goto nla_put_failure;
+ 	}
+ 
+-	if (skb->tstamp) {
++	if (hooknum <= NF_INET_FORWARD && skb->tstamp) {
+ 		struct nfulnl_msg_packet_timestamp ts;
+ 		struct timespec64 kts = ktime_to_timespec64(skb->tstamp);
+ 		ts.sec = cpu_to_be64(kts.tv_sec);
+diff --git a/net/netfilter/nfnetlink_queue.c b/net/netfilter/nfnetlink_queue.c
+index 0dcc3592d053..e057b2961d31 100644
+--- a/net/netfilter/nfnetlink_queue.c
++++ b/net/netfilter/nfnetlink_queue.c
+@@ -582,7 +582,7 @@ nfqnl_build_packet_message(struct net *net, struct nfqnl_instance *queue,
+ 	if (nfqnl_put_bridge(entry, skb) < 0)
+ 		goto nla_put_failure;
+ 
+-	if (entskb->tstamp) {
++	if (entry->state.hook <= NF_INET_FORWARD && entskb->tstamp) {
+ 		struct nfqnl_msg_packet_timestamp ts;
+ 		struct timespec64 kts = ktime_to_timespec64(entskb->tstamp);
+ 
+diff --git a/net/netfilter/xt_time.c b/net/netfilter/xt_time.c
+index c13bcd0ab491..8dbb4d48f2ed 100644
+--- a/net/netfilter/xt_time.c
++++ b/net/netfilter/xt_time.c
+@@ -163,19 +163,24 @@ time_mt(const struct sk_buff *skb, struct xt_action_param *par)
+ 	s64 stamp;
+ 
+ 	/*
+-	 * We cannot use get_seconds() instead of __net_timestamp() here.
++	 * We need real time here, but we can neither use skb->tstamp
++	 * nor __net_timestamp().
++	 *
++	 * skb->tstamp and skb->skb_mstamp_ns overlap, however, they
++	 * use different clock types (real vs monotonic).
++	 *
+ 	 * Suppose you have two rules:
+-	 * 	1. match before 13:00
+-	 * 	2. match after 13:00
++	 *	1. match before 13:00
++	 *	2. match after 13:00
++	 *
+ 	 * If you match against processing time (get_seconds) it
+ 	 * may happen that the same packet matches both rules if
+-	 * it arrived at the right moment before 13:00.
++	 * it arrived at the right moment before 13:00, so it would be
++	 * better to check skb->tstamp and set it via __net_timestamp()
++	 * if needed.  This however breaks outgoing packets tx timestamp,
++	 * and causes them to get delayed forever by fq packet scheduler.
+ 	 */
+-	if (skb->tstamp == 0)
+-		__net_timestamp((struct sk_buff *)skb);
+-
+-	stamp = ktime_to_ns(skb->tstamp);
+-	stamp = div_s64(stamp, NSEC_PER_SEC);
++	stamp = get_seconds();
+ 
+ 	if (info->flags & XT_TIME_LOCAL_TZ)
+ 		/* Adjust for local timezone */
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index faa2bc50cfa0..b6c23af4a315 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -4604,14 +4604,29 @@ static void __exit packet_exit(void)
+ 
+ static int __init packet_init(void)
+ {
+-	int rc = proto_register(&packet_proto, 0);
++	int rc;
+ 
+-	if (rc != 0)
++	rc = proto_register(&packet_proto, 0);
++	if (rc)
+ 		goto out;
++	rc = sock_register(&packet_family_ops);
++	if (rc)
++		goto out_proto;
++	rc = register_pernet_subsys(&packet_net_ops);
++	if (rc)
++		goto out_sock;
++	rc = register_netdevice_notifier(&packet_netdev_notifier);
++	if (rc)
++		goto out_pernet;
+ 
+-	sock_register(&packet_family_ops);
+-	register_pernet_subsys(&packet_net_ops);
+-	register_netdevice_notifier(&packet_netdev_notifier);
++	return 0;
++
++out_pernet:
++	unregister_pernet_subsys(&packet_net_ops);
++out_sock:
++	sock_unregister(PF_PACKET);
++out_proto:
++	proto_unregister(&packet_proto);
+ out:
+ 	return rc;
+ }
+diff --git a/net/sched/act_mirred.c b/net/sched/act_mirred.c
+index c8cf4d10c435..971dc03304f4 100644
+--- a/net/sched/act_mirred.c
++++ b/net/sched/act_mirred.c
+@@ -159,6 +159,9 @@ static int tcf_mirred_init(struct net *net, struct nlattr *nla,
+ 	}
+ 	m = to_mirred(*a);
+ 
++	if (ret == ACT_P_CREATED)
++		INIT_LIST_HEAD(&m->tcfm_list);
++
+ 	spin_lock_bh(&m->tcf_lock);
+ 	m->tcf_action = parm->action;
+ 	m->tcfm_eaction = parm->eaction;
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 4dca9161f99b..020477ff91a2 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -734,11 +734,11 @@ static __poll_t tipc_poll(struct file *file, struct socket *sock,
+ 
+ 	switch (sk->sk_state) {
+ 	case TIPC_ESTABLISHED:
+-	case TIPC_CONNECTING:
+ 		if (!tsk->cong_link_cnt && !tsk_conn_cong(tsk))
+ 			revents |= EPOLLOUT;
+ 		/* fall thru' */
+ 	case TIPC_LISTEN:
++	case TIPC_CONNECTING:
+ 		if (!skb_queue_empty(&sk->sk_receive_queue))
+ 			revents |= EPOLLIN | EPOLLRDNORM;
+ 		break;
+@@ -2041,7 +2041,7 @@ static bool tipc_sk_filter_connect(struct tipc_sock *tsk, struct sk_buff *skb)
+ 			if (msg_data_sz(hdr))
+ 				return true;
+ 			/* Empty ACK-, - wake up sleeping connect() and drop */
+-			sk->sk_data_ready(sk);
++			sk->sk_state_change(sk);
+ 			msg_set_dest_droppable(hdr, 1);
+ 			return false;
+ 		}
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index d91a408db113..156ce708b533 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -13596,7 +13596,8 @@ static const struct genl_ops nl80211_ops[] = {
+ 		.policy = nl80211_policy,
+ 		.flags = GENL_UNS_ADMIN_PERM,
+ 		.internal_flags = NL80211_FLAG_NEED_NETDEV_UP |
+-				  NL80211_FLAG_NEED_RTNL,
++				  NL80211_FLAG_NEED_RTNL |
++				  NL80211_FLAG_CLEAR_SKB,
+ 	},
+ 	{
+ 		.cmd = NL80211_CMD_DEAUTHENTICATE,
+@@ -13647,7 +13648,8 @@ static const struct genl_ops nl80211_ops[] = {
+ 		.policy = nl80211_policy,
+ 		.flags = GENL_UNS_ADMIN_PERM,
+ 		.internal_flags = NL80211_FLAG_NEED_NETDEV_UP |
+-				  NL80211_FLAG_NEED_RTNL,
++				  NL80211_FLAG_NEED_RTNL |
++				  NL80211_FLAG_CLEAR_SKB,
+ 	},
+ 	{
+ 		.cmd = NL80211_CMD_UPDATE_CONNECT_PARAMS,
+@@ -13655,7 +13657,8 @@ static const struct genl_ops nl80211_ops[] = {
+ 		.policy = nl80211_policy,
+ 		.flags = GENL_ADMIN_PERM,
+ 		.internal_flags = NL80211_FLAG_NEED_NETDEV_UP |
+-				  NL80211_FLAG_NEED_RTNL,
++				  NL80211_FLAG_NEED_RTNL |
++				  NL80211_FLAG_CLEAR_SKB,
+ 	},
+ 	{
+ 		.cmd = NL80211_CMD_DISCONNECT,
+@@ -13684,7 +13687,8 @@ static const struct genl_ops nl80211_ops[] = {
+ 		.policy = nl80211_policy,
+ 		.flags = GENL_UNS_ADMIN_PERM,
+ 		.internal_flags = NL80211_FLAG_NEED_NETDEV_UP |
+-				  NL80211_FLAG_NEED_RTNL,
++				  NL80211_FLAG_NEED_RTNL |
++				  NL80211_FLAG_CLEAR_SKB,
+ 	},
+ 	{
+ 		.cmd = NL80211_CMD_DEL_PMKSA,
+@@ -14036,7 +14040,8 @@ static const struct genl_ops nl80211_ops[] = {
+ 		.policy = nl80211_policy,
+ 		.flags = GENL_UNS_ADMIN_PERM,
+ 		.internal_flags = NL80211_FLAG_NEED_WIPHY |
+-				  NL80211_FLAG_NEED_RTNL,
++				  NL80211_FLAG_NEED_RTNL |
++				  NL80211_FLAG_CLEAR_SKB,
+ 	},
+ 	{
+ 		.cmd = NL80211_CMD_SET_QOS_MAP,
+@@ -14091,7 +14096,8 @@ static const struct genl_ops nl80211_ops[] = {
+ 		.doit = nl80211_set_pmk,
+ 		.policy = nl80211_policy,
+ 		.internal_flags = NL80211_FLAG_NEED_NETDEV_UP |
+-				  NL80211_FLAG_NEED_RTNL,
++				  NL80211_FLAG_NEED_RTNL |
++				  NL80211_FLAG_CLEAR_SKB,
+ 	},
+ 	{
+ 		.cmd = NL80211_CMD_DEL_PMK,
+diff --git a/net/wireless/reg.c b/net/wireless/reg.c
+index dd58b9909ac9..649c89946dec 100644
+--- a/net/wireless/reg.c
++++ b/net/wireless/reg.c
+@@ -1298,6 +1298,16 @@ reg_intersect_dfs_region(const enum nl80211_dfs_regions dfs_region1,
+ 	return dfs_region1;
+ }
+ 
++static void reg_wmm_rules_intersect(const struct ieee80211_wmm_ac *wmm_ac1,
++				    const struct ieee80211_wmm_ac *wmm_ac2,
++				    struct ieee80211_wmm_ac *intersect)
++{
++	intersect->cw_min = max_t(u16, wmm_ac1->cw_min, wmm_ac2->cw_min);
++	intersect->cw_max = max_t(u16, wmm_ac1->cw_max, wmm_ac2->cw_max);
++	intersect->cot = min_t(u16, wmm_ac1->cot, wmm_ac2->cot);
++	intersect->aifsn = max_t(u8, wmm_ac1->aifsn, wmm_ac2->aifsn);
++}
++
+ /*
+  * Helper for regdom_intersect(), this does the real
+  * mathematical intersection fun
+@@ -1312,6 +1322,8 @@ static int reg_rules_intersect(const struct ieee80211_regdomain *rd1,
+ 	struct ieee80211_freq_range *freq_range;
+ 	const struct ieee80211_power_rule *power_rule1, *power_rule2;
+ 	struct ieee80211_power_rule *power_rule;
++	const struct ieee80211_wmm_rule *wmm_rule1, *wmm_rule2;
++	struct ieee80211_wmm_rule *wmm_rule;
+ 	u32 freq_diff, max_bandwidth1, max_bandwidth2;
+ 
+ 	freq_range1 = &rule1->freq_range;
+@@ -1322,6 +1334,10 @@ static int reg_rules_intersect(const struct ieee80211_regdomain *rd1,
+ 	power_rule2 = &rule2->power_rule;
+ 	power_rule = &intersected_rule->power_rule;
+ 
++	wmm_rule1 = &rule1->wmm_rule;
++	wmm_rule2 = &rule2->wmm_rule;
++	wmm_rule = &intersected_rule->wmm_rule;
++
+ 	freq_range->start_freq_khz = max(freq_range1->start_freq_khz,
+ 					 freq_range2->start_freq_khz);
+ 	freq_range->end_freq_khz = min(freq_range1->end_freq_khz,
+@@ -1365,6 +1381,29 @@ static int reg_rules_intersect(const struct ieee80211_regdomain *rd1,
+ 	intersected_rule->dfs_cac_ms = max(rule1->dfs_cac_ms,
+ 					   rule2->dfs_cac_ms);
+ 
++	if (rule1->has_wmm && rule2->has_wmm) {
++		u8 ac;
++
++		for (ac = 0; ac < IEEE80211_NUM_ACS; ac++) {
++			reg_wmm_rules_intersect(&wmm_rule1->client[ac],
++						&wmm_rule2->client[ac],
++						&wmm_rule->client[ac]);
++			reg_wmm_rules_intersect(&wmm_rule1->ap[ac],
++						&wmm_rule2->ap[ac],
++						&wmm_rule->ap[ac]);
++		}
++
++		intersected_rule->has_wmm = true;
++	} else if (rule1->has_wmm) {
++		*wmm_rule = *wmm_rule1;
++		intersected_rule->has_wmm = true;
++	} else if (rule2->has_wmm) {
++		*wmm_rule = *wmm_rule2;
++		intersected_rule->has_wmm = true;
++	} else {
++		intersected_rule->has_wmm = false;
++	}
++
+ 	if (!is_valid_reg_rule(intersected_rule))
+ 		return -EINVAL;
+ 
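The reg_wmm_rules_intersect() helper added above always keeps the more conservative of each WMM parameter: the larger contention windows and AIFSN, and the smaller channel-occupancy time. A standalone sketch of that merge rule (field names as in the patch, struct layout simplified):

```c
#include <stdint.h>

struct wmm_ac {
	uint16_t cw_min;
	uint16_t cw_max;
	uint16_t cot;
	uint8_t aifsn;
};

/* Take the stricter value of each parameter, as reg_wmm_rules_intersect()
 * does per access category. */
static struct wmm_ac wmm_intersect(struct wmm_ac a, struct wmm_ac b)
{
	struct wmm_ac out;

	out.cw_min = a.cw_min > b.cw_min ? a.cw_min : b.cw_min; /* max */
	out.cw_max = a.cw_max > b.cw_max ? a.cw_max : b.cw_max; /* max */
	out.cot    = a.cot    < b.cot    ? a.cot    : b.cot;    /* min */
	out.aifsn  = a.aifsn  > b.aifsn  ? a.aifsn  : b.aifsn;  /* max */
	return out;
}
```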
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index b005283f0090..bc4aec97723a 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -4586,7 +4586,7 @@ static int selinux_socket_connect_helper(struct socket *sock,
+ 		struct lsm_network_audit net = {0,};
+ 		struct sockaddr_in *addr4 = NULL;
+ 		struct sockaddr_in6 *addr6 = NULL;
+-		unsigned short snum;
++		unsigned short snum = 0;
+ 		u32 sid, perm;
+ 
+ 		/* sctp_connectx(3) calls via selinux_sctp_bind_connect()
+@@ -4609,12 +4609,12 @@ static int selinux_socket_connect_helper(struct socket *sock,
+ 			break;
+ 		default:
+ 			/* Note that SCTP services expect -EINVAL, whereas
+-			 * others expect -EAFNOSUPPORT.
++			 * others must handle this at the protocol level:
++			 * connect(AF_UNSPEC) on a connected socket is
++			 * a documented way to disconnect the socket.
+ 			 */
+ 			if (sksec->sclass == SECCLASS_SCTP_SOCKET)
+ 				return -EINVAL;
+-			else
+-				return -EAFNOSUPPORT;
+ 		}
+ 
+ 		err = sel_netport_sid(sk->sk_protocol, snum, &sid);
+diff --git a/tools/lib/traceevent/event-parse.c b/tools/lib/traceevent/event-parse.c
+index 87494c7c619d..981c6ce2da2c 100644
+--- a/tools/lib/traceevent/event-parse.c
++++ b/tools/lib/traceevent/event-parse.c
+@@ -2233,7 +2233,7 @@ eval_type_str(unsigned long long val, const char *type, int pointer)
+ 		return val & 0xffffffff;
+ 
+ 	if (strcmp(type, "u64") == 0 ||
+-	    strcmp(type, "s64"))
++	    strcmp(type, "s64") == 0)
+ 		return val;
+ 
+ 	if (strcmp(type, "s8") == 0)
+diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c
+index 616408251e25..63750a711123 100644
+--- a/tools/perf/builtin-top.c
++++ b/tools/perf/builtin-top.c
+@@ -1393,6 +1393,7 @@ int cmd_top(int argc, const char **argv)
+ 			 * */
+ 			.overwrite	= 0,
+ 			.sample_time	= true,
++			.sample_time_set = true,
+ 		},
+ 		.max_stack	     = sysctl__max_stack(),
+ 		.annotation_opts     = annotation__default_options,
+diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
+index 2b37f56f0549..e33f20d16c8d 100644
+--- a/tools/perf/util/map.c
++++ b/tools/perf/util/map.c
+@@ -904,10 +904,8 @@ static void __maps__insert_name(struct maps *maps, struct map *map)
+ 		rc = strcmp(m->dso->short_name, map->dso->short_name);
+ 		if (rc < 0)
+ 			p = &(*p)->rb_left;
+-		else if (rc  > 0)
+-			p = &(*p)->rb_right;
+ 		else
+-			return;
++			p = &(*p)->rb_right;
+ 	}
+ 	rb_link_node(&map->rb_node_name, parent, p);
+ 	rb_insert_color(&map->rb_node_name, &maps->names);
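The __maps__insert_name() fix above stops silently dropping a map whose dso short name compares equal to an existing node; equal keys now descend to the right, so duplicates coexist in the tree. The same "equal goes right" descent can be sketched with a plain binary tree (hypothetical node type, not the kernel rbtree API):

```c
#include <string.h>
#include <stddef.h>

struct node {
	const char *name;
	struct node *left, *right;
};

/* Insert allowing duplicate keys: rc == 0 falls through to the right
 * branch, mirroring the fixed comparison in __maps__insert_name(). */
static struct node *insert(struct node *root, struct node *n)
{
	struct node **p = &root;

	while (*p) {
		int rc = strcmp((*p)->name, n->name);

		if (rc < 0)
			p = &(*p)->left;
		else
			p = &(*p)->right;
	}
	*p = n;
	return root;
}

static int count(const struct node *n)
{
	return n ? 1 + count(n->left) + count(n->right) : 0;
}
```

With the old code, inserting a second node named "x" would have returned early and lost it; here all three nodes are kept.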
+diff --git a/tools/testing/nvdimm/test/nfit.c b/tools/testing/nvdimm/test/nfit.c
+index b579f962451d..85ffdcfa596b 100644
+--- a/tools/testing/nvdimm/test/nfit.c
++++ b/tools/testing/nvdimm/test/nfit.c
+@@ -146,6 +146,7 @@ static int dimm_fail_cmd_code[ARRAY_SIZE(handle)];
+ struct nfit_test_sec {
+ 	u8 state;
+ 	u8 ext_state;
++	u8 old_state;
+ 	u8 passphrase[32];
+ 	u8 master_passphrase[32];
+ 	u64 overwrite_end_time;
+@@ -225,6 +226,8 @@ static struct workqueue_struct *nfit_wq;
+ 
+ static struct gen_pool *nfit_pool;
+ 
++static const char zero_key[NVDIMM_PASSPHRASE_LEN];
++
+ static struct nfit_test *to_nfit_test(struct device *dev)
+ {
+ 	struct platform_device *pdev = to_platform_device(dev);
+@@ -1059,8 +1062,7 @@ static int nd_intel_test_cmd_secure_erase(struct nfit_test *t,
+ 	struct device *dev = &t->pdev.dev;
+ 	struct nfit_test_sec *sec = &dimm_sec_info[dimm];
+ 
+-	if (!(sec->state & ND_INTEL_SEC_STATE_ENABLED) ||
+-			(sec->state & ND_INTEL_SEC_STATE_FROZEN)) {
++	if (sec->state & ND_INTEL_SEC_STATE_FROZEN) {
+ 		nd_cmd->status = ND_INTEL_STATUS_INVALID_STATE;
+ 		dev_dbg(dev, "secure erase: wrong security state\n");
+ 	} else if (memcmp(nd_cmd->passphrase, sec->passphrase,
+@@ -1068,6 +1070,12 @@ static int nd_intel_test_cmd_secure_erase(struct nfit_test *t,
+ 		nd_cmd->status = ND_INTEL_STATUS_INVALID_PASS;
+ 		dev_dbg(dev, "secure erase: wrong passphrase\n");
+ 	} else {
++		if (!(sec->state & ND_INTEL_SEC_STATE_ENABLED)
++				&& (memcmp(nd_cmd->passphrase, zero_key,
++					ND_INTEL_PASSPHRASE_SIZE) != 0)) {
++			dev_dbg(dev, "invalid zero key\n");
++			return 0;
++		}
+ 		memset(sec->passphrase, 0, ND_INTEL_PASSPHRASE_SIZE);
+ 		memset(sec->master_passphrase, 0, ND_INTEL_PASSPHRASE_SIZE);
+ 		sec->state = 0;
+@@ -1093,7 +1101,7 @@ static int nd_intel_test_cmd_overwrite(struct nfit_test *t,
+ 		return 0;
+ 	}
+ 
+-	memset(sec->passphrase, 0, ND_INTEL_PASSPHRASE_SIZE);
++	sec->old_state = sec->state;
+ 	sec->state = ND_INTEL_SEC_STATE_OVERWRITE;
+ 	dev_dbg(dev, "overwrite progressing.\n");
+ 	sec->overwrite_end_time = get_jiffies_64() + 5 * HZ;
+@@ -1115,7 +1123,8 @@ static int nd_intel_test_cmd_query_overwrite(struct nfit_test *t,
+ 
+ 	if (time_is_before_jiffies64(sec->overwrite_end_time)) {
+ 		sec->overwrite_end_time = 0;
+-		sec->state = 0;
++		sec->state = sec->old_state;
++		sec->old_state = 0;
+ 		sec->ext_state = ND_INTEL_SEC_ESTATE_ENABLED;
+ 		dev_dbg(dev, "overwrite is complete\n");
+ 	} else
+diff --git a/tools/testing/selftests/net/fib_tests.sh b/tools/testing/selftests/net/fib_tests.sh
+index 1080ff55a788..0d2a5f4f1e63 100755
+--- a/tools/testing/selftests/net/fib_tests.sh
++++ b/tools/testing/selftests/net/fib_tests.sh
+@@ -605,6 +605,39 @@ run_cmd()
+ 	return $rc
+ }
+ 
++check_expected()
++{
++	local out="$1"
++	local expected="$2"
++	local rc=0
++
++	[ "${out}" = "${expected}" ] && return 0
++
++	if [ -z "${out}" ]; then
++		if [ "$VERBOSE" = "1" ]; then
++			printf "\nNo route entry found\n"
++			printf "Expected:\n"
++			printf "    ${expected}\n"
++		fi
++		return 1
++	fi
++
++	# tricky way to convert output to 1-line without ip's
++	# messy '\'; this drops all extra white space
++	out=$(echo ${out})
++	if [ "${out}" != "${expected}" ]; then
++		rc=1
++		if [ "${VERBOSE}" = "1" ]; then
++			printf "    Unexpected route entry. Have:\n"
++			printf "        ${out}\n"
++			printf "    Expected:\n"
++			printf "        ${expected}\n\n"
++		fi
++	fi
++
++	return $rc
++}
++
+ # add route for a prefix, flushing any existing routes first
+ # expected to be the first step of a test
+ add_route6()
+@@ -652,31 +685,7 @@ check_route6()
+ 	pfx=$1
+ 
+ 	out=$($IP -6 ro ls match ${pfx} | sed -e 's/ pref medium//')
+-	[ "${out}" = "${expected}" ] && return 0
+-
+-	if [ -z "${out}" ]; then
+-		if [ "$VERBOSE" = "1" ]; then
+-			printf "\nNo route entry found\n"
+-			printf "Expected:\n"
+-			printf "    ${expected}\n"
+-		fi
+-		return 1
+-	fi
+-
+-	# tricky way to convert output to 1-line without ip's
+-	# messy '\'; this drops all extra white space
+-	out=$(echo ${out})
+-	if [ "${out}" != "${expected}" ]; then
+-		rc=1
+-		if [ "${VERBOSE}" = "1" ]; then
+-			printf "    Unexpected route entry. Have:\n"
+-			printf "        ${out}\n"
+-			printf "    Expected:\n"
+-			printf "        ${expected}\n\n"
+-		fi
+-	fi
+-
+-	return $rc
++	check_expected "${out}" "${expected}"
+ }
+ 
+ route_cleanup()
+@@ -725,7 +734,7 @@ route_setup()
+ 	ip -netns ns2 addr add 172.16.103.2/24 dev veth4
+ 	ip -netns ns2 addr add 172.16.104.1/24 dev dummy1
+ 
+-	set +ex
++	set +e
+ }
+ 
+ # assumption is that basic add of a single path route works
+@@ -960,7 +969,8 @@ ipv6_addr_metric_test()
+ 	run_cmd "$IP li set dev dummy2 down"
+ 	rc=$?
+ 	if [ $rc -eq 0 ]; then
+-		check_route6 ""
++		out=$($IP -6 ro ls match 2001:db8:104::/64)
++		check_expected "${out}" ""
+ 		rc=$?
+ 	fi
+ 	log_test $rc 0 "Prefix route removed on link down"
+@@ -1091,38 +1101,13 @@ check_route()
+ 	local pfx
+ 	local expected="$1"
+ 	local out
+-	local rc=0
+ 
+ 	set -- $expected
+ 	pfx=$1
+ 	[ "${pfx}" = "unreachable" ] && pfx=$2
+ 
+ 	out=$($IP ro ls match ${pfx})
+-	[ "${out}" = "${expected}" ] && return 0
+-
+-	if [ -z "${out}" ]; then
+-		if [ "$VERBOSE" = "1" ]; then
+-			printf "\nNo route entry found\n"
+-			printf "Expected:\n"
+-			printf "    ${expected}\n"
+-		fi
+-		return 1
+-	fi
+-
+-	# tricky way to convert output to 1-line without ip's
+-	# messy '\'; this drops all extra white space
+-	out=$(echo ${out})
+-	if [ "${out}" != "${expected}" ]; then
+-		rc=1
+-		if [ "${VERBOSE}" = "1" ]; then
+-			printf "    Unexpected route entry. Have:\n"
+-			printf "        ${out}\n"
+-			printf "    Expected:\n"
+-			printf "        ${expected}\n\n"
+-		fi
+-	fi
+-
+-	return $rc
++	check_expected "${out}" "${expected}"
+ }
+ 
+ # assumption is that basic add of a single path route works
+@@ -1387,7 +1372,8 @@ ipv4_addr_metric_test()
+ 	run_cmd "$IP li set dev dummy2 down"
+ 	rc=$?
+ 	if [ $rc -eq 0 ]; then
+-		check_route ""
++		out=$($IP ro ls match 172.16.104.0/24)
++		check_expected "${out}" ""
+ 		rc=$?
+ 	fi
+ 	log_test $rc 0 "Prefix route removed on link down"
+diff --git a/tools/testing/selftests/net/run_afpackettests b/tools/testing/selftests/net/run_afpackettests
+index 2dc95fda7ef7..ea5938ec009a 100755
+--- a/tools/testing/selftests/net/run_afpackettests
++++ b/tools/testing/selftests/net/run_afpackettests
+@@ -6,12 +6,14 @@ if [ $(id -u) != 0 ]; then
+ 	exit 0
+ fi
+ 
++ret=0
+ echo "--------------------"
+ echo "running psock_fanout test"
+ echo "--------------------"
+ ./in_netns.sh ./psock_fanout
+ if [ $? -ne 0 ]; then
+ 	echo "[FAIL]"
++	ret=1
+ else
+ 	echo "[PASS]"
+ fi
+@@ -22,6 +24,7 @@ echo "--------------------"
+ ./in_netns.sh ./psock_tpacket
+ if [ $? -ne 0 ]; then
+ 	echo "[FAIL]"
++	ret=1
+ else
+ 	echo "[PASS]"
+ fi
+@@ -32,6 +35,8 @@ echo "--------------------"
+ ./in_netns.sh ./txring_overwrite
+ if [ $? -ne 0 ]; then
+ 	echo "[FAIL]"
++	ret=1
+ else
+ 	echo "[PASS]"
+ fi
++exit $ret
+diff --git a/tools/testing/selftests/net/run_netsocktests b/tools/testing/selftests/net/run_netsocktests
+index b093f39c298c..14e41faf2c57 100755
+--- a/tools/testing/selftests/net/run_netsocktests
++++ b/tools/testing/selftests/net/run_netsocktests
+@@ -7,7 +7,7 @@ echo "--------------------"
+ ./socket
+ if [ $? -ne 0 ]; then
+ 	echo "[FAIL]"
++	exit 1
+ else
+ 	echo "[PASS]"
+ fi
+-
+diff --git a/tools/testing/selftests/netfilter/Makefile b/tools/testing/selftests/netfilter/Makefile
+index c9ff2b47bd1c..a37cb1192c6a 100644
+--- a/tools/testing/selftests/netfilter/Makefile
++++ b/tools/testing/selftests/netfilter/Makefile
+@@ -1,6 +1,6 @@
+ # SPDX-License-Identifier: GPL-2.0
+ # Makefile for netfilter selftests
+ 
+-TEST_PROGS := nft_trans_stress.sh nft_nat.sh
++TEST_PROGS := nft_trans_stress.sh nft_nat.sh conntrack_icmp_related.sh
+ 
+ include ../lib.mk
+diff --git a/tools/testing/selftests/netfilter/conntrack_icmp_related.sh b/tools/testing/selftests/netfilter/conntrack_icmp_related.sh
+new file mode 100755
+index 000000000000..b48e1833bc89
+--- /dev/null
++++ b/tools/testing/selftests/netfilter/conntrack_icmp_related.sh
+@@ -0,0 +1,283 @@
++#!/bin/bash
++#
++# check that ICMP df-needed/pkttoobig icmp are set as related
++# state
++#
++# Setup is:
++#
++# nsclient1 -> nsrouter1 -> nsrouter2 -> nsclient2
++# MTU 1500, except for nsrouter2 <-> nsclient2 link (1280).
++# ping nsclient2 from nsclient1, checking that conntrack did set RELATED
++# 'fragmentation needed' icmp packet.
++#
++# In addition, nsrouter1 will perform IP masquerading, i.e. also
++# check the icmp errors are propagated to the correct host as per
++# nat of "established" icmp-echo "connection".
++
++# Kselftest framework requirement - SKIP code is 4.
++ksft_skip=4
++ret=0
++
++nft --version > /dev/null 2>&1
++if [ $? -ne 0 ];then
++	echo "SKIP: Could not run test without nft tool"
++	exit $ksft_skip
++fi
++
++ip -Version > /dev/null 2>&1
++if [ $? -ne 0 ];then
++	echo "SKIP: Could not run test without ip tool"
++	exit $ksft_skip
++fi
++
++cleanup() {
++	for i in 1 2;do ip netns del nsclient$i;done
++	for i in 1 2;do ip netns del nsrouter$i;done
++}
++
++ipv4() {
++    echo -n 192.168.$1.2
++}
++
++ipv6 () {
++    echo -n dead:$1::2
++}
++
++check_counter()
++{
++	ns=$1
++	name=$2
++	expect=$3
++	local lret=0
++
++	cnt=$(ip netns exec $ns nft list counter inet filter "$name" | grep -q "$expect")
++	if [ $? -ne 0 ]; then
++		echo "ERROR: counter $name in $ns has unexpected value (expected $expect)" 1>&2
++		ip netns exec $ns nft list counter inet filter "$name" 1>&2
++		lret=1
++	fi
++
++	return $lret
++}
++
++check_unknown()
++{
++	expect="packets 0 bytes 0"
++	for n in nsclient1 nsclient2 nsrouter1 nsrouter2; do
++		check_counter $n "unknown" "$expect"
++		if [ $? -ne 0 ] ;then
++			return 1
++		fi
++	done
++
++	return 0
++}
++
++for n in nsclient1 nsclient2 nsrouter1 nsrouter2; do
++  ip netns add $n
++  ip -net $n link set lo up
++done
++
++DEV=veth0
++ip link add $DEV netns nsclient1 type veth peer name eth1 netns nsrouter1
++DEV=veth0
++ip link add $DEV netns nsclient2 type veth peer name eth1 netns nsrouter2
++
++DEV=veth0
++ip link add $DEV netns nsrouter1 type veth peer name eth2 netns nsrouter2
++
++DEV=veth0
++for i in 1 2; do
++    ip -net nsclient$i link set $DEV up
++    ip -net nsclient$i addr add $(ipv4 $i)/24 dev $DEV
++    ip -net nsclient$i addr add $(ipv6 $i)/64 dev $DEV
++done
++
++ip -net nsrouter1 link set eth1 up
++ip -net nsrouter1 link set veth0 up
++
++ip -net nsrouter2 link set eth1 up
++ip -net nsrouter2 link set eth2 up
++
++ip -net nsclient1 route add default via 192.168.1.1
++ip -net nsclient1 -6 route add default via dead:1::1
++
++ip -net nsclient2 route add default via 192.168.2.1
++ip -net nsclient2 route add default via dead:2::1
++
++i=3
++ip -net nsrouter1 addr add 192.168.1.1/24 dev eth1
++ip -net nsrouter1 addr add 192.168.3.1/24 dev veth0
++ip -net nsrouter1 addr add dead:1::1/64 dev eth1
++ip -net nsrouter1 addr add dead:3::1/64 dev veth0
++ip -net nsrouter1 route add default via 192.168.3.10
++ip -net nsrouter1 -6 route add default via dead:3::10
++
++ip -net nsrouter2 addr add 192.168.2.1/24 dev eth1
++ip -net nsrouter2 addr add 192.168.3.10/24 dev eth2
++ip -net nsrouter2 addr add dead:2::1/64 dev eth1
++ip -net nsrouter2 addr add dead:3::10/64 dev eth2
++ip -net nsrouter2 route add default via 192.168.3.1
++ip -net nsrouter2 route add default via dead:3::1
++
++sleep 2
++for i in 4 6; do
++	ip netns exec nsrouter1 sysctl -q net.ipv$i.conf.all.forwarding=1
++	ip netns exec nsrouter2 sysctl -q net.ipv$i.conf.all.forwarding=1
++done
++
++for netns in nsrouter1 nsrouter2; do
++ip netns exec $netns nft -f - <<EOF
++table inet filter {
++	counter unknown { }
++	counter related { }
++	chain forward {
++		type filter hook forward priority 0; policy accept;
++		meta l4proto icmpv6 icmpv6 type "packet-too-big" ct state "related" counter name "related" accept
++		meta l4proto icmp icmp type "destination-unreachable" ct state "related" counter name "related" accept
++		meta l4proto { icmp, icmpv6 } ct state new,established accept
++		counter name "unknown" drop
++	}
++}
++EOF
++done
++
++ip netns exec nsclient1 nft -f - <<EOF
++table inet filter {
++	counter unknown { }
++	counter related { }
++	chain input {
++		type filter hook input priority 0; policy accept;
++		meta l4proto { icmp, icmpv6 } ct state established,untracked accept
++
++		meta l4proto { icmp, icmpv6 } ct state "related" counter name "related" accept
++		counter name "unknown" drop
++	}
++}
++EOF
++
++ip netns exec nsclient2 nft -f - <<EOF
++table inet filter {
++	counter unknown { }
++	counter new { }
++	counter established { }
++
++	chain input {
++		type filter hook input priority 0; policy accept;
++		meta l4proto { icmp, icmpv6 } ct state established,untracked accept
++
++		meta l4proto { icmp, icmpv6 } ct state "new" counter name "new" accept
++		meta l4proto { icmp, icmpv6 } ct state "established" counter name "established" accept
++		counter name "unknown" drop
++	}
++	chain output {
++		type filter hook output priority 0; policy accept;
++		meta l4proto { icmp, icmpv6 } ct state established,untracked accept
++
++		meta l4proto { icmp, icmpv6 } ct state "new" counter name "new"
++		meta l4proto { icmp, icmpv6 } ct state "established" counter name "established"
++		counter name "unknown" drop
++	}
++}
++EOF
++
++
++# make sure NAT core rewrites address of icmp error if nat is used according to
++# conntrack nat information (icmp error will be directed at nsrouter1 address,
++# but it needs to be routed to nsclient1 address).
++ip netns exec nsrouter1 nft -f - <<EOF
++table ip nat {
++	chain postrouting {
++		type nat hook postrouting priority 0; policy accept;
++		ip protocol icmp oifname "veth0" counter masquerade
++	}
++}
++table ip6 nat {
++	chain postrouting {
++		type nat hook postrouting priority 0; policy accept;
++		ip6 nexthdr icmpv6 oifname "veth0" counter masquerade
++	}
++}
++EOF
++
++ip netns exec nsrouter2 ip link set eth1  mtu 1280
++ip netns exec nsclient2 ip link set veth0 mtu 1280
++sleep 1
++
++ip netns exec nsclient1 ping -c 1 -s 1000 -q -M do 192.168.2.2 >/dev/null
++if [ $? -ne 0 ]; then
++	echo "ERROR: netns ip routing/connectivity broken" 1>&2
++	cleanup
++	exit 1
++fi
++ip netns exec nsclient1 ping6 -q -c 1 -s 1000 dead:2::2 >/dev/null
++if [ $? -ne 0 ]; then
++	echo "ERROR: netns ipv6 routing/connectivity broken" 1>&2
++	cleanup
++	exit 1
++fi
++
++check_unknown
++if [ $? -ne 0 ]; then
++	ret=1
++fi
++
++expect="packets 0 bytes 0"
++for netns in nsrouter1 nsrouter2 nsclient1;do
++	check_counter "$netns" "related" "$expect"
++	if [ $? -ne 0 ]; then
++		ret=1
++	fi
++done
++
++expect="packets 2 bytes 2076"
++check_counter nsclient2 "new" "$expect"
++if [ $? -ne 0 ]; then
++	ret=1
++fi
++
++ip netns exec nsclient1 ping -q -c 1 -s 1300 -M do 192.168.2.2 > /dev/null
++if [ $? -eq 0 ]; then
++	echo "ERROR: ping should have failed with PMTU too big error" 1>&2
++	ret=1
++fi
++
++# nsrouter2 should have generated the icmp error, so
++# related counter should be 0 (it's in forward).
++expect="packets 0 bytes 0"
++check_counter "nsrouter2" "related" "$expect"
++if [ $? -ne 0 ]; then
++	ret=1
++fi
++
++# but nsrouter1 should have seen it, same for nsclient1.
++expect="packets 1 bytes 576"
++for netns in nsrouter1 nsclient1;do
++	check_counter "$netns" "related" "$expect"
++	if [ $? -ne 0 ]; then
++		ret=1
++	fi
++done
++
++ip netns exec nsclient1 ping6 -c 1 -s 1300 dead:2::2 > /dev/null
++if [ $? -eq 0 ]; then
++	echo "ERROR: ping6 should have failed with PMTU too big error" 1>&2
++	ret=1
++fi
++
++expect="packets 2 bytes 1856"
++for netns in nsrouter1 nsclient1;do
++	check_counter "$netns" "related" "$expect"
++	if [ $? -ne 0 ]; then
++		ret=1
++	fi
++done
++
++if [ $ret -eq 0 ];then
++	echo "PASS: icmp mtu error had RELATED state"
++else
++	echo "ERROR: icmp error RELATED state test has failed"
++fi
++
++cleanup
++exit $ret
+diff --git a/tools/testing/selftests/netfilter/nft_nat.sh b/tools/testing/selftests/netfilter/nft_nat.sh
+index 8ec76681605c..3194007cf8d1 100755
+--- a/tools/testing/selftests/netfilter/nft_nat.sh
++++ b/tools/testing/selftests/netfilter/nft_nat.sh
+@@ -321,6 +321,7 @@ EOF
+ 
+ test_masquerade6()
+ {
++	local natflags=$1
+ 	local lret=0
+ 
+ 	ip netns exec ns0 sysctl net.ipv6.conf.all.forwarding=1 > /dev/null
+@@ -354,13 +355,13 @@ ip netns exec ns0 nft -f - <<EOF
+ table ip6 nat {
+ 	chain postrouting {
+ 		type nat hook postrouting priority 0; policy accept;
+-		meta oif veth0 masquerade
++		meta oif veth0 masquerade $natflags
+ 	}
+ }
+ EOF
+ 	ip netns exec ns2 ping -q -c 1 dead:1::99 > /dev/null # ping ns2->ns1
+ 	if [ $? -ne 0 ] ; then
+-		echo "ERROR: cannot ping ns1 from ns2 with active ipv6 masquerading"
++		echo "ERROR: cannot ping ns1 from ns2 with active ipv6 masquerade $natflags"
+ 		lret=1
+ 	fi
+ 
+@@ -397,19 +398,26 @@ EOF
+ 		fi
+ 	done
+ 
++	ip netns exec ns2 ping -q -c 1 dead:1::99 > /dev/null # ping ns2->ns1
++	if [ $? -ne 0 ] ; then
++		echo "ERROR: cannot ping ns1 from ns2 with active ipv6 masquerade $natflags (attempt 2)"
++		lret=1
++	fi
++
+ 	ip netns exec ns0 nft flush chain ip6 nat postrouting
+ 	if [ $? -ne 0 ]; then
+ 		echo "ERROR: Could not flush ip6 nat postrouting" 1>&2
+ 		lret=1
+ 	fi
+ 
+-	test $lret -eq 0 && echo "PASS: IPv6 masquerade for ns2"
++	test $lret -eq 0 && echo "PASS: IPv6 masquerade $natflags for ns2"
+ 
+ 	return $lret
+ }
+ 
+ test_masquerade()
+ {
++	local natflags=$1
+ 	local lret=0
+ 
+ 	ip netns exec ns0 sysctl net.ipv4.conf.veth0.forwarding=1 > /dev/null
+@@ -417,7 +425,7 @@ test_masquerade()
+ 
+ 	ip netns exec ns2 ping -q -c 1 10.0.1.99 > /dev/null # ping ns2->ns1
+ 	if [ $? -ne 0 ] ; then
+-		echo "ERROR: canot ping ns1 from ns2"
++		echo "ERROR: cannot ping ns1 from ns2 $natflags"
+ 		lret=1
+ 	fi
+ 
+@@ -443,13 +451,13 @@ ip netns exec ns0 nft -f - <<EOF
+ table ip nat {
+ 	chain postrouting {
+ 		type nat hook postrouting priority 0; policy accept;
+-		meta oif veth0 masquerade
++		meta oif veth0 masquerade $natflags
+ 	}
+ }
+ EOF
+ 	ip netns exec ns2 ping -q -c 1 10.0.1.99 > /dev/null # ping ns2->ns1
+ 	if [ $? -ne 0 ] ; then
+-		echo "ERROR: cannot ping ns1 from ns2 with active ip masquerading"
++		echo "ERROR: cannot ping ns1 from ns2 with active ip masquerade $natflags"
+ 		lret=1
+ 	fi
+ 
+@@ -485,13 +493,19 @@ EOF
+ 		fi
+ 	done
+ 
++	ip netns exec ns2 ping -q -c 1 10.0.1.99 > /dev/null # ping ns2->ns1
++	if [ $? -ne 0 ] ; then
++		echo "ERROR: cannot ping ns1 from ns2 with active ip masquerade $natflags (attempt 2)"
++		lret=1
++	fi
++
+ 	ip netns exec ns0 nft flush chain ip nat postrouting
+ 	if [ $? -ne 0 ]; then
+ 		echo "ERROR: Could not flush nat postrouting" 1>&2
+ 		lret=1
+ 	fi
+ 
+-	test $lret -eq 0 && echo "PASS: IP masquerade for ns2"
++	test $lret -eq 0 && echo "PASS: IP masquerade $natflags for ns2"
+ 
+ 	return $lret
+ }
+@@ -750,8 +764,12 @@ test_local_dnat
+ test_local_dnat6
+ 
+ reset_counters
+-test_masquerade
+-test_masquerade6
++test_masquerade ""
++test_masquerade6 ""
++
++reset_counters
++test_masquerade "fully-random"
++test_masquerade6 "fully-random"
+ 
+ reset_counters
+ test_redirect
+diff --git a/tools/testing/selftests/seccomp/seccomp_bpf.c b/tools/testing/selftests/seccomp/seccomp_bpf.c
+index 1c2509104924..408431744d06 100644
+--- a/tools/testing/selftests/seccomp/seccomp_bpf.c
++++ b/tools/testing/selftests/seccomp/seccomp_bpf.c
+@@ -3089,9 +3089,9 @@ TEST(user_notification_basic)
+ 
+ 	/* Check that we get -ENOSYS with no listener attached */
+ 	if (pid == 0) {
+-		if (user_trap_syscall(__NR_getpid, 0) < 0)
++		if (user_trap_syscall(__NR_getppid, 0) < 0)
+ 			exit(1);
+-		ret = syscall(__NR_getpid);
++		ret = syscall(__NR_getppid);
+ 		exit(ret >= 0 || errno != ENOSYS);
+ 	}
+ 
+@@ -3106,12 +3106,12 @@ TEST(user_notification_basic)
+ 	EXPECT_EQ(seccomp(SECCOMP_SET_MODE_FILTER, 0, &prog), 0);
+ 
+ 	/* Check that the basic notification machinery works */
+-	listener = user_trap_syscall(__NR_getpid,
++	listener = user_trap_syscall(__NR_getppid,
+ 				     SECCOMP_FILTER_FLAG_NEW_LISTENER);
+ 	ASSERT_GE(listener, 0);
+ 
+ 	/* Installing a second listener in the chain should EBUSY */
+-	EXPECT_EQ(user_trap_syscall(__NR_getpid,
++	EXPECT_EQ(user_trap_syscall(__NR_getppid,
+ 				    SECCOMP_FILTER_FLAG_NEW_LISTENER),
+ 		  -1);
+ 	EXPECT_EQ(errno, EBUSY);
+@@ -3120,7 +3120,7 @@ TEST(user_notification_basic)
+ 	ASSERT_GE(pid, 0);
+ 
+ 	if (pid == 0) {
+-		ret = syscall(__NR_getpid);
++		ret = syscall(__NR_getppid);
+ 		exit(ret != USER_NOTIF_MAGIC);
+ 	}
+ 
+@@ -3138,7 +3138,7 @@ TEST(user_notification_basic)
+ 	EXPECT_GT(poll(&pollfd, 1, -1), 0);
+ 	EXPECT_EQ(pollfd.revents, POLLOUT);
+ 
+-	EXPECT_EQ(req.data.nr,  __NR_getpid);
++	EXPECT_EQ(req.data.nr,  __NR_getppid);
+ 
+ 	resp.id = req.id;
+ 	resp.error = 0;
+@@ -3165,7 +3165,7 @@ TEST(user_notification_kill_in_middle)
+ 	struct seccomp_notif req = {};
+ 	struct seccomp_notif_resp resp = {};
+ 
+-	listener = user_trap_syscall(__NR_getpid,
++	listener = user_trap_syscall(__NR_getppid,
+ 				     SECCOMP_FILTER_FLAG_NEW_LISTENER);
+ 	ASSERT_GE(listener, 0);
+ 
+@@ -3177,7 +3177,7 @@ TEST(user_notification_kill_in_middle)
+ 	ASSERT_GE(pid, 0);
+ 
+ 	if (pid == 0) {
+-		ret = syscall(__NR_getpid);
++		ret = syscall(__NR_getppid);
+ 		exit(ret != USER_NOTIF_MAGIC);
+ 	}
+ 
+@@ -3277,7 +3277,7 @@ TEST(user_notification_closed_listener)
+ 	long ret;
+ 	int status, listener;
+ 
+-	listener = user_trap_syscall(__NR_getpid,
++	listener = user_trap_syscall(__NR_getppid,
+ 				     SECCOMP_FILTER_FLAG_NEW_LISTENER);
+ 	ASSERT_GE(listener, 0);
+ 
+@@ -3288,7 +3288,7 @@ TEST(user_notification_closed_listener)
+ 	ASSERT_GE(pid, 0);
+ 	if (pid == 0) {
+ 		close(listener);
+-		ret = syscall(__NR_getpid);
++		ret = syscall(__NR_getppid);
+ 		exit(ret != -1 && errno != ENOSYS);
+ 	}
+ 
+@@ -3311,14 +3311,15 @@ TEST(user_notification_child_pid_ns)
+ 
+ 	ASSERT_EQ(unshare(CLONE_NEWPID), 0);
+ 
+-	listener = user_trap_syscall(__NR_getpid, SECCOMP_FILTER_FLAG_NEW_LISTENER);
++	listener = user_trap_syscall(__NR_getppid,
++				     SECCOMP_FILTER_FLAG_NEW_LISTENER);
+ 	ASSERT_GE(listener, 0);
+ 
+ 	pid = fork();
+ 	ASSERT_GE(pid, 0);
+ 
+ 	if (pid == 0)
+-		exit(syscall(__NR_getpid) != USER_NOTIF_MAGIC);
++		exit(syscall(__NR_getppid) != USER_NOTIF_MAGIC);
+ 
+ 	EXPECT_EQ(ioctl(listener, SECCOMP_IOCTL_NOTIF_RECV, &req), 0);
+ 	EXPECT_EQ(req.pid, pid);
+@@ -3346,7 +3347,8 @@ TEST(user_notification_sibling_pid_ns)
+ 	struct seccomp_notif req = {};
+ 	struct seccomp_notif_resp resp = {};
+ 
+-	listener = user_trap_syscall(__NR_getpid, SECCOMP_FILTER_FLAG_NEW_LISTENER);
++	listener = user_trap_syscall(__NR_getppid,
++				     SECCOMP_FILTER_FLAG_NEW_LISTENER);
+ 	ASSERT_GE(listener, 0);
+ 
+ 	pid = fork();
+@@ -3359,7 +3361,7 @@ TEST(user_notification_sibling_pid_ns)
+ 		ASSERT_GE(pid2, 0);
+ 
+ 		if (pid2 == 0)
+-			exit(syscall(__NR_getpid) != USER_NOTIF_MAGIC);
++			exit(syscall(__NR_getppid) != USER_NOTIF_MAGIC);
+ 
+ 		EXPECT_EQ(waitpid(pid2, &status, 0), pid2);
+ 		EXPECT_EQ(true, WIFEXITED(status));
+@@ -3368,11 +3370,11 @@ TEST(user_notification_sibling_pid_ns)
+ 	}
+ 
+ 	/* Create the sibling ns, and sibling in it. */
+-	EXPECT_EQ(unshare(CLONE_NEWPID), 0);
+-	EXPECT_EQ(errno, 0);
++	ASSERT_EQ(unshare(CLONE_NEWPID), 0);
++	ASSERT_EQ(errno, 0);
+ 
+ 	pid2 = fork();
+-	EXPECT_GE(pid2, 0);
++	ASSERT_GE(pid2, 0);
+ 
+ 	if (pid2 == 0) {
+ 		ASSERT_EQ(ioctl(listener, SECCOMP_IOCTL_NOTIF_RECV, &req), 0);
+@@ -3380,7 +3382,7 @@ TEST(user_notification_sibling_pid_ns)
+ 		 * The pid should be 0, i.e. the task is in some namespace that
+ 		 * we can't "see".
+ 		 */
+-		ASSERT_EQ(req.pid, 0);
++		EXPECT_EQ(req.pid, 0);
+ 
+ 		resp.id = req.id;
+ 		resp.error = 0;
+@@ -3408,14 +3410,15 @@ TEST(user_notification_fault_recv)
+ 	struct seccomp_notif req = {};
+ 	struct seccomp_notif_resp resp = {};
+ 
+-	listener = user_trap_syscall(__NR_getpid, SECCOMP_FILTER_FLAG_NEW_LISTENER);
++	listener = user_trap_syscall(__NR_getppid,
++				     SECCOMP_FILTER_FLAG_NEW_LISTENER);
+ 	ASSERT_GE(listener, 0);
+ 
+ 	pid = fork();
+ 	ASSERT_GE(pid, 0);
+ 
+ 	if (pid == 0)
+-		exit(syscall(__NR_getpid) != USER_NOTIF_MAGIC);
++		exit(syscall(__NR_getppid) != USER_NOTIF_MAGIC);
+ 
+ 	/* Do a bad recv() */
+ 	EXPECT_EQ(ioctl(listener, SECCOMP_IOCTL_NOTIF_RECV, NULL), -1);
+diff --git a/virt/kvm/irqchip.c b/virt/kvm/irqchip.c
+index b1286c4e0712..0bd0683640bd 100644
+--- a/virt/kvm/irqchip.c
++++ b/virt/kvm/irqchip.c
+@@ -144,18 +144,19 @@ static int setup_routing_entry(struct kvm *kvm,
+ {
+ 	struct kvm_kernel_irq_routing_entry *ei;
+ 	int r;
++	u32 gsi = array_index_nospec(ue->gsi, KVM_MAX_IRQ_ROUTES);
+ 
+ 	/*
+ 	 * Do not allow GSI to be mapped to the same irqchip more than once.
+ 	 * Allow only one to one mapping between GSI and non-irqchip routing.
+ 	 */
+-	hlist_for_each_entry(ei, &rt->map[ue->gsi], link)
++	hlist_for_each_entry(ei, &rt->map[gsi], link)
+ 		if (ei->type != KVM_IRQ_ROUTING_IRQCHIP ||
+ 		    ue->type != KVM_IRQ_ROUTING_IRQCHIP ||
+ 		    ue->u.irqchip.irqchip == ei->irqchip.irqchip)
+ 			return -EINVAL;
+ 
+-	e->gsi = ue->gsi;
++	e->gsi = gsi;
+ 	e->type = ue->type;
+ 	r = kvm_set_routing_entry(kvm, e, ue);
+ 	if (r)
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index b4f2d892a1d3..ff68b07e94e9 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -2974,12 +2974,14 @@ static int kvm_ioctl_create_device(struct kvm *kvm,
+ 	struct kvm_device_ops *ops = NULL;
+ 	struct kvm_device *dev;
+ 	bool test = cd->flags & KVM_CREATE_DEVICE_TEST;
++	int type;
+ 	int ret;
+ 
+ 	if (cd->type >= ARRAY_SIZE(kvm_device_ops_table))
+ 		return -ENODEV;
+ 
+-	ops = kvm_device_ops_table[cd->type];
++	type = array_index_nospec(cd->type, ARRAY_SIZE(kvm_device_ops_table));
++	ops = kvm_device_ops_table[type];
+ 	if (ops == NULL)
+ 		return -ENODEV;
+ 
+@@ -2994,7 +2996,7 @@ static int kvm_ioctl_create_device(struct kvm *kvm,
+ 	dev->kvm = kvm;
+ 
+ 	mutex_lock(&kvm->lock);
+-	ret = ops->create(dev, cd->type);
++	ret = ops->create(dev, type);
+ 	if (ret < 0) {
+ 		mutex_unlock(&kvm->lock);
+ 		kfree(dev);


* [gentoo-commits] proj/linux-patches:5.0 commit in: /
@ 2019-05-22 11:04 Mike Pagano
  0 siblings, 0 replies; 28+ messages in thread
From: Mike Pagano @ 2019-05-22 11:04 UTC (permalink / raw
  To: gentoo-commits

commit:     b086570bafb6657b545764f03af0ff43100b38d7
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed May 22 11:04:18 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed May 22 11:04:18 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b086570b

Linux patch 5.0.18

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1017_linux-5.0.18.patch | 5688 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5692 insertions(+)

diff --git a/0000_README b/0000_README
index d6075df..396a4db 100644
--- a/0000_README
+++ b/0000_README
@@ -111,6 +111,10 @@ Patch:  1016_linux-5.0.17.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.0.17
 
+Patch:  1017_linux-5.0.18.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.0.18
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1017_linux-5.0.18.patch b/1017_linux-5.0.18.patch
new file mode 100644
index 0000000..139dfad
--- /dev/null
+++ b/1017_linux-5.0.18.patch
@@ -0,0 +1,5688 @@
+diff --git a/Documentation/x86/mds.rst b/Documentation/x86/mds.rst
+index 534e9baa4e1d..5d4330be200f 100644
+--- a/Documentation/x86/mds.rst
++++ b/Documentation/x86/mds.rst
+@@ -142,45 +142,13 @@ Mitigation points
+    mds_user_clear.
+ 
+    The mitigation is invoked in prepare_exit_to_usermode() which covers
+-   most of the kernel to user space transitions. There are a few exceptions
+-   which are not invoking prepare_exit_to_usermode() on return to user
+-   space. These exceptions use the paranoid exit code.
++   all but one of the kernel to user space transitions.  The exception
++   is when we return from a Non Maskable Interrupt (NMI), which is
++   handled directly in do_nmi().
+ 
+-   - Non Maskable Interrupt (NMI):
+-
+-     Access to sensible data like keys, credentials in the NMI context is
+-     mostly theoretical: The CPU can do prefetching or execute a
+-     misspeculated code path and thereby fetching data which might end up
+-     leaking through a buffer.
+-
+-     But for mounting other attacks the kernel stack address of the task is
+-     already valuable information. So in full mitigation mode, the NMI is
+-     mitigated on the return from do_nmi() to provide almost complete
+-     coverage.
+-
+-   - Double fault (#DF):
+-
+-     A double fault is usually fatal, but the ESPFIX workaround, which can
+-     be triggered from user space through modify_ldt(2) is a recoverable
+-     double fault. #DF uses the paranoid exit path, so explicit mitigation
+-     in the double fault handler is required.
+-
+-   - Machine Check Exception (#MC):
+-
+-     Another corner case is a #MC which hits between the CPU buffer clear
+-     invocation and the actual return to user. As this still is in kernel
+-     space it takes the paranoid exit path which does not clear the CPU
+-     buffers. So the #MC handler repopulates the buffers to some
+-     extent. Machine checks are not reliably controllable and the window is
+-     extremly small so mitigation would just tick a checkbox that this
+-     theoretical corner case is covered. To keep the amount of special
+-     cases small, ignore #MC.
+-
+-   - Debug Exception (#DB):
+-
+-     This takes the paranoid exit path only when the INT1 breakpoint is in
+-     kernel space. #DB on a user space address takes the regular exit path,
+-     so no extra mitigation required.
++   (The reason that NMI is special is that prepare_exit_to_usermode() can
++    enable IRQs.  In NMI context, NMIs are blocked, and we don't want to
++    enable IRQs with NMIs blocked.)
+ 
+ 
+ 2. C-State transition
+diff --git a/Makefile b/Makefile
+index 6325ac97c7e2..bf21b5a86e4b 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 0
+-SUBLEVEL = 17
++SUBLEVEL = 18
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+@@ -642,7 +642,7 @@ ifeq ($(may-sync-config),1)
+ # Read in dependencies to all Kconfig* files, make sure to run syncconfig if
+ # changes are detected. This should be included after arch/$(SRCARCH)/Makefile
+ # because some architectures define CROSS_COMPILE there.
+--include include/config/auto.conf.cmd
++include include/config/auto.conf.cmd
+ 
+ # To avoid any implicit rule to kick in, define an empty command
+ $(KCONFIG_CONFIG): ;
+diff --git a/arch/arm/boot/dts/exynos5260.dtsi b/arch/arm/boot/dts/exynos5260.dtsi
+index 55167850619c..33a085ffc447 100644
+--- a/arch/arm/boot/dts/exynos5260.dtsi
++++ b/arch/arm/boot/dts/exynos5260.dtsi
+@@ -223,7 +223,7 @@
+ 			wakeup-interrupt-controller {
+ 				compatible = "samsung,exynos4210-wakeup-eint";
+ 				interrupt-parent = <&gic>;
+-				interrupts = <GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>;
++				interrupts = <GIC_SPI 48 IRQ_TYPE_LEVEL_HIGH>;
+ 			};
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/exynos5422-odroidxu3-audio.dtsi b/arch/arm/boot/dts/exynos5422-odroidxu3-audio.dtsi
+index e84544b220b9..b90cea8b7368 100644
+--- a/arch/arm/boot/dts/exynos5422-odroidxu3-audio.dtsi
++++ b/arch/arm/boot/dts/exynos5422-odroidxu3-audio.dtsi
+@@ -22,7 +22,7 @@
+ 			"Headphone Jack", "HPL",
+ 			"Headphone Jack", "HPR",
+ 			"Headphone Jack", "MICBIAS",
+-			"IN1", "Headphone Jack",
++			"IN12", "Headphone Jack",
+ 			"Speakers", "SPKL",
+ 			"Speakers", "SPKR";
+ 
+diff --git a/arch/arm/boot/dts/qcom-ipq4019.dtsi b/arch/arm/boot/dts/qcom-ipq4019.dtsi
+index 2d56008d8d6b..049fa8e3f2fd 100644
+--- a/arch/arm/boot/dts/qcom-ipq4019.dtsi
++++ b/arch/arm/boot/dts/qcom-ipq4019.dtsi
+@@ -393,8 +393,8 @@
+ 			#address-cells = <3>;
+ 			#size-cells = <2>;
+ 
+-			ranges = <0x81000000 0 0x40200000 0x40200000 0 0x00100000
+-				  0x82000000 0 0x40300000 0x40300000 0 0x400000>;
++			ranges = <0x81000000 0 0x40200000 0x40200000 0 0x00100000>,
++				 <0x82000000 0 0x40300000 0x40300000 0 0x00d00000>;
+ 
+ 			interrupts = <GIC_SPI 141 IRQ_TYPE_EDGE_RISING>;
+ 			interrupt-names = "msi";
+diff --git a/arch/arm/crypto/aes-neonbs-glue.c b/arch/arm/crypto/aes-neonbs-glue.c
+index 07e31941dc67..617c2c99ebfb 100644
+--- a/arch/arm/crypto/aes-neonbs-glue.c
++++ b/arch/arm/crypto/aes-neonbs-glue.c
+@@ -278,6 +278,8 @@ static int __xts_crypt(struct skcipher_request *req,
+ 	int err;
+ 
+ 	err = skcipher_walk_virt(&walk, req, true);
++	if (err)
++		return err;
+ 
+ 	crypto_cipher_encrypt_one(ctx->tweak_tfm, walk.iv, walk.iv);
+ 
+diff --git a/arch/arm/mach-exynos/firmware.c b/arch/arm/mach-exynos/firmware.c
+index d602e3bf3f96..2eaf2dbb8e81 100644
+--- a/arch/arm/mach-exynos/firmware.c
++++ b/arch/arm/mach-exynos/firmware.c
+@@ -196,6 +196,7 @@ bool __init exynos_secure_firmware_available(void)
+ 		return false;
+ 
+ 	addr = of_get_address(nd, 0, NULL, NULL);
++	of_node_put(nd);
+ 	if (!addr) {
+ 		pr_err("%s: No address specified.\n", __func__);
+ 		return false;
+diff --git a/arch/arm/mach-exynos/suspend.c b/arch/arm/mach-exynos/suspend.c
+index 0850505ac78b..9afb0c69db34 100644
+--- a/arch/arm/mach-exynos/suspend.c
++++ b/arch/arm/mach-exynos/suspend.c
+@@ -639,8 +639,10 @@ void __init exynos_pm_init(void)
+ 
+ 	if (WARN_ON(!of_find_property(np, "interrupt-controller", NULL))) {
+ 		pr_warn("Outdated DT detected, suspend/resume will NOT work\n");
++		of_node_put(np);
+ 		return;
+ 	}
++	of_node_put(np);
+ 
+ 	pm_data = (const struct exynos_pm_data *) match->data;
+ 
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-rockpro64.dts b/arch/arm64/boot/dts/rockchip/rk3399-rockpro64.dts
+index be78172abc09..8ed3104ade94 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-rockpro64.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3399-rockpro64.dts
+@@ -489,7 +489,7 @@
+ 	status = "okay";
+ 
+ 	bt656-supply = <&vcc1v8_dvp>;
+-	audio-supply = <&vcca1v8_codec>;
++	audio-supply = <&vcc_3v0>;
+ 	sdmmc-supply = <&vcc_sdio>;
+ 	gpio1830-supply = <&vcc_3v0>;
+ };
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399.dtsi b/arch/arm64/boot/dts/rockchip/rk3399.dtsi
+index 6cc1c9fa4ea6..1bbf0da4e01d 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399.dtsi
+@@ -333,6 +333,7 @@
+ 		phys = <&emmc_phy>;
+ 		phy-names = "phy_arasan";
+ 		power-domains = <&power RK3399_PD_EMMC>;
++		disable-cqe-dcmd;
+ 		status = "disabled";
+ 	};
+ 
+diff --git a/arch/arm64/crypto/aes-neonbs-glue.c b/arch/arm64/crypto/aes-neonbs-glue.c
+index e7a95a566462..5cc248967387 100644
+--- a/arch/arm64/crypto/aes-neonbs-glue.c
++++ b/arch/arm64/crypto/aes-neonbs-glue.c
+@@ -304,6 +304,8 @@ static int __xts_crypt(struct skcipher_request *req,
+ 	int err;
+ 
+ 	err = skcipher_walk_virt(&walk, req, false);
++	if (err)
++		return err;
+ 
+ 	kernel_neon_begin();
+ 	neon_aes_ecb_encrypt(walk.iv, walk.iv, ctx->twkey, ctx->key.rounds, 1);
+diff --git a/arch/arm64/crypto/ghash-ce-glue.c b/arch/arm64/crypto/ghash-ce-glue.c
+index 067d8937d5af..1ed227bf6106 100644
+--- a/arch/arm64/crypto/ghash-ce-glue.c
++++ b/arch/arm64/crypto/ghash-ce-glue.c
+@@ -418,9 +418,11 @@ static int gcm_encrypt(struct aead_request *req)
+ 		put_unaligned_be32(2, iv + GCM_IV_SIZE);
+ 
+ 		while (walk.nbytes >= (2 * AES_BLOCK_SIZE)) {
+-			int blocks = walk.nbytes / AES_BLOCK_SIZE;
++			const int blocks =
++				walk.nbytes / (2 * AES_BLOCK_SIZE) * 2;
+ 			u8 *dst = walk.dst.virt.addr;
+ 			u8 *src = walk.src.virt.addr;
++			int remaining = blocks;
+ 
+ 			do {
+ 				__aes_arm64_encrypt(ctx->aes_key.key_enc,
+@@ -430,9 +432,9 @@ static int gcm_encrypt(struct aead_request *req)
+ 
+ 				dst += AES_BLOCK_SIZE;
+ 				src += AES_BLOCK_SIZE;
+-			} while (--blocks > 0);
++			} while (--remaining > 0);
+ 
+-			ghash_do_update(walk.nbytes / AES_BLOCK_SIZE, dg,
++			ghash_do_update(blocks, dg,
+ 					walk.dst.virt.addr, &ctx->ghash_key,
+ 					NULL);
+ 
+@@ -553,7 +555,7 @@ static int gcm_decrypt(struct aead_request *req)
+ 		put_unaligned_be32(2, iv + GCM_IV_SIZE);
+ 
+ 		while (walk.nbytes >= (2 * AES_BLOCK_SIZE)) {
+-			int blocks = walk.nbytes / AES_BLOCK_SIZE;
++			int blocks = walk.nbytes / (2 * AES_BLOCK_SIZE) * 2;
+ 			u8 *dst = walk.dst.virt.addr;
+ 			u8 *src = walk.src.virt.addr;
+ 
+diff --git a/arch/arm64/include/asm/arch_timer.h b/arch/arm64/include/asm/arch_timer.h
+index f2a234d6516c..93e07512b4b6 100644
+--- a/arch/arm64/include/asm/arch_timer.h
++++ b/arch/arm64/include/asm/arch_timer.h
+@@ -148,18 +148,47 @@ static inline void arch_timer_set_cntkctl(u32 cntkctl)
+ 	isb();
+ }
+ 
++/*
++ * Ensure that reads of the counter are treated the same as memory reads
++ * for the purposes of ordering by subsequent memory barriers.
++ *
++ * This insanity brought to you by speculative system register reads,
++ * out-of-order memory accesses, sequence locks and Thomas Gleixner.
++ *
++ * http://lists.infradead.org/pipermail/linux-arm-kernel/2019-February/631195.html
++ */
++#define arch_counter_enforce_ordering(val) do {				\
++	u64 tmp, _val = (val);						\
++									\
++	asm volatile(							\
++	"	eor	%0, %1, %1\n"					\
++	"	add	%0, sp, %0\n"					\
++	"	ldr	xzr, [%0]"					\
++	: "=r" (tmp) : "r" (_val));					\
++} while (0)
++
+ static inline u64 arch_counter_get_cntpct(void)
+ {
++	u64 cnt;
++
+ 	isb();
+-	return arch_timer_reg_read_stable(cntpct_el0);
++	cnt = arch_timer_reg_read_stable(cntpct_el0);
++	arch_counter_enforce_ordering(cnt);
++	return cnt;
+ }
+ 
+ static inline u64 arch_counter_get_cntvct(void)
+ {
++	u64 cnt;
++
+ 	isb();
+-	return arch_timer_reg_read_stable(cntvct_el0);
++	cnt = arch_timer_reg_read_stable(cntvct_el0);
++	arch_counter_enforce_ordering(cnt);
++	return cnt;
+ }
+ 
++#undef arch_counter_enforce_ordering
++
+ static inline int arch_timer_arch_init(void)
+ {
+ 	return 0;
+diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
+index f1a7ab18faf3..928f59340598 100644
+--- a/arch/arm64/include/asm/processor.h
++++ b/arch/arm64/include/asm/processor.h
+@@ -57,7 +57,15 @@
+ #define TASK_SIZE_64		(UL(1) << vabits_user)
+ 
+ #ifdef CONFIG_COMPAT
++#ifdef CONFIG_ARM64_64K_PAGES
++/*
++ * With CONFIG_ARM64_64K_PAGES enabled, the last page is occupied
++ * by the compat vectors page.
++ */
+ #define TASK_SIZE_32		UL(0x100000000)
++#else
++#define TASK_SIZE_32		(UL(0x100000000) - PAGE_SIZE)
++#endif /* CONFIG_ARM64_64K_PAGES */
+ #define TASK_SIZE		(test_thread_flag(TIF_32BIT) ? \
+ 				TASK_SIZE_32 : TASK_SIZE_64)
+ #define TASK_SIZE_OF(tsk)	(test_tsk_thread_flag(tsk, TIF_32BIT) ? \
+diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c
+index d7bb6aefae0a..0fa6db521e44 100644
+--- a/arch/arm64/kernel/debug-monitors.c
++++ b/arch/arm64/kernel/debug-monitors.c
+@@ -135,6 +135,7 @@ NOKPROBE_SYMBOL(disable_debug_monitors);
+  */
+ static int clear_os_lock(unsigned int cpu)
+ {
++	write_sysreg(0, osdlr_el1);
+ 	write_sysreg(0, oslar_el1);
+ 	isb();
+ 	return 0;
+diff --git a/arch/arm64/kernel/sys.c b/arch/arm64/kernel/sys.c
+index b44065fb1616..6f91e8116514 100644
+--- a/arch/arm64/kernel/sys.c
++++ b/arch/arm64/kernel/sys.c
+@@ -31,7 +31,7 @@
+ 
+ SYSCALL_DEFINE6(mmap, unsigned long, addr, unsigned long, len,
+ 		unsigned long, prot, unsigned long, flags,
+-		unsigned long, fd, off_t, off)
++		unsigned long, fd, unsigned long, off)
+ {
+ 	if (offset_in_page(off) != 0)
+ 		return -EINVAL;
+diff --git a/arch/arm64/kernel/vdso/gettimeofday.S b/arch/arm64/kernel/vdso/gettimeofday.S
+index c39872a7b03c..e8f60112818f 100644
+--- a/arch/arm64/kernel/vdso/gettimeofday.S
++++ b/arch/arm64/kernel/vdso/gettimeofday.S
+@@ -73,6 +73,13 @@ x_tmp		.req	x8
+ 	movn	x_tmp, #0xff00, lsl #48
+ 	and	\res, x_tmp, \res
+ 	mul	\res, \res, \mult
++	/*
++	 * Fake address dependency from the value computed from the counter
++	 * register to subsequent data page accesses so that the sequence
++	 * locking also orders the read of the counter.
++	 */
++	and	x_tmp, \res, xzr
++	add	vdso_data, vdso_data, x_tmp
+ 	.endm
+ 
+ 	/*
+@@ -147,12 +154,12 @@ ENTRY(__kernel_gettimeofday)
+ 	/* w11 = cs_mono_mult, w12 = cs_shift */
+ 	ldp	w11, w12, [vdso_data, #VDSO_CS_MONO_MULT]
+ 	ldp	x13, x14, [vdso_data, #VDSO_XTIME_CLK_SEC]
+-	seqcnt_check fail=1b
+ 
+ 	get_nsec_per_sec res=x9
+ 	lsl	x9, x9, x12
+ 
+ 	get_clock_shifted_nsec res=x15, cycle_last=x10, mult=x11
++	seqcnt_check fail=1b
+ 	get_ts_realtime res_sec=x10, res_nsec=x11, \
+ 		clock_nsec=x15, xtime_sec=x13, xtime_nsec=x14, nsec_to_sec=x9
+ 
+@@ -211,13 +218,13 @@ realtime:
+ 	/* w11 = cs_mono_mult, w12 = cs_shift */
+ 	ldp	w11, w12, [vdso_data, #VDSO_CS_MONO_MULT]
+ 	ldp	x13, x14, [vdso_data, #VDSO_XTIME_CLK_SEC]
+-	seqcnt_check fail=realtime
+ 
+ 	/* All computations are done with left-shifted nsecs. */
+ 	get_nsec_per_sec res=x9
+ 	lsl	x9, x9, x12
+ 
+ 	get_clock_shifted_nsec res=x15, cycle_last=x10, mult=x11
++	seqcnt_check fail=realtime
+ 	get_ts_realtime res_sec=x10, res_nsec=x11, \
+ 		clock_nsec=x15, xtime_sec=x13, xtime_nsec=x14, nsec_to_sec=x9
+ 	clock_gettime_return, shift=1
+@@ -231,7 +238,6 @@ monotonic:
+ 	ldp	w11, w12, [vdso_data, #VDSO_CS_MONO_MULT]
+ 	ldp	x13, x14, [vdso_data, #VDSO_XTIME_CLK_SEC]
+ 	ldp	x3, x4, [vdso_data, #VDSO_WTM_CLK_SEC]
+-	seqcnt_check fail=monotonic
+ 
+ 	/* All computations are done with left-shifted nsecs. */
+ 	lsl	x4, x4, x12
+@@ -239,6 +245,7 @@ monotonic:
+ 	lsl	x9, x9, x12
+ 
+ 	get_clock_shifted_nsec res=x15, cycle_last=x10, mult=x11
++	seqcnt_check fail=monotonic
+ 	get_ts_realtime res_sec=x10, res_nsec=x11, \
+ 		clock_nsec=x15, xtime_sec=x13, xtime_nsec=x14, nsec_to_sec=x9
+ 
+@@ -253,13 +260,13 @@ monotonic_raw:
+ 	/* w11 = cs_raw_mult, w12 = cs_shift */
+ 	ldp	w12, w11, [vdso_data, #VDSO_CS_SHIFT]
+ 	ldp	x13, x14, [vdso_data, #VDSO_RAW_TIME_SEC]
+-	seqcnt_check fail=monotonic_raw
+ 
+ 	/* All computations are done with left-shifted nsecs. */
+ 	get_nsec_per_sec res=x9
+ 	lsl	x9, x9, x12
+ 
+ 	get_clock_shifted_nsec res=x15, cycle_last=x10, mult=x11
++	seqcnt_check fail=monotonic_raw
+ 	get_ts_clock_raw res_sec=x10, res_nsec=x11, \
+ 		clock_nsec=x15, nsec_to_sec=x9
+ 
+diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
+index 73886a5f1f30..19bc318b2fe9 100644
+--- a/arch/arm64/mm/proc.S
++++ b/arch/arm64/mm/proc.S
+@@ -76,24 +76,25 @@ ENTRY(cpu_do_suspend)
+ 	mrs	x2, tpidr_el0
+ 	mrs	x3, tpidrro_el0
+ 	mrs	x4, contextidr_el1
+-	mrs	x5, cpacr_el1
+-	mrs	x6, tcr_el1
+-	mrs	x7, vbar_el1
+-	mrs	x8, mdscr_el1
+-	mrs	x9, oslsr_el1
+-	mrs	x10, sctlr_el1
++	mrs	x5, osdlr_el1
++	mrs	x6, cpacr_el1
++	mrs	x7, tcr_el1
++	mrs	x8, vbar_el1
++	mrs	x9, mdscr_el1
++	mrs	x10, oslsr_el1
++	mrs	x11, sctlr_el1
+ alternative_if_not ARM64_HAS_VIRT_HOST_EXTN
+-	mrs	x11, tpidr_el1
++	mrs	x12, tpidr_el1
+ alternative_else
+-	mrs	x11, tpidr_el2
++	mrs	x12, tpidr_el2
+ alternative_endif
+-	mrs	x12, sp_el0
++	mrs	x13, sp_el0
+ 	stp	x2, x3, [x0]
+-	stp	x4, xzr, [x0, #16]
+-	stp	x5, x6, [x0, #32]
+-	stp	x7, x8, [x0, #48]
+-	stp	x9, x10, [x0, #64]
+-	stp	x11, x12, [x0, #80]
++	stp	x4, x5, [x0, #16]
++	stp	x6, x7, [x0, #32]
++	stp	x8, x9, [x0, #48]
++	stp	x10, x11, [x0, #64]
++	stp	x12, x13, [x0, #80]
+ 	ret
+ ENDPROC(cpu_do_suspend)
+ 
+@@ -116,8 +117,8 @@ ENTRY(cpu_do_resume)
+ 	msr	cpacr_el1, x6
+ 
+ 	/* Don't change t0sz here, mask those bits when restoring */
+-	mrs	x5, tcr_el1
+-	bfi	x8, x5, TCR_T0SZ_OFFSET, TCR_TxSZ_WIDTH
++	mrs	x7, tcr_el1
++	bfi	x8, x7, TCR_T0SZ_OFFSET, TCR_TxSZ_WIDTH
+ 
+ 	msr	tcr_el1, x8
+ 	msr	vbar_el1, x9
+@@ -141,6 +142,7 @@ alternative_endif
+ 	/*
+ 	 * Restore oslsr_el1 by writing oslar_el1
+ 	 */
++	msr	osdlr_el1, x5
+ 	ubfx	x11, x11, #1, #1
+ 	msr	oslar_el1, x11
+ 	reset_pmuserenr_el0 x0			// Disable PMU access from EL0
+diff --git a/arch/arm64/net/bpf_jit.h b/arch/arm64/net/bpf_jit.h
+index 783de51a6c4e..6c881659ee8a 100644
+--- a/arch/arm64/net/bpf_jit.h
++++ b/arch/arm64/net/bpf_jit.h
+@@ -100,12 +100,6 @@
+ #define A64_STXR(sf, Rt, Rn, Rs) \
+ 	A64_LSX(sf, Rt, Rn, Rs, STORE_EX)
+ 
+-/* Prefetch */
+-#define A64_PRFM(Rn, type, target, policy) \
+-	aarch64_insn_gen_prefetch(Rn, AARCH64_INSN_PRFM_TYPE_##type, \
+-				  AARCH64_INSN_PRFM_TARGET_##target, \
+-				  AARCH64_INSN_PRFM_POLICY_##policy)
+-
+ /* Add/subtract (immediate) */
+ #define A64_ADDSUB_IMM(sf, Rd, Rn, imm12, type) \
+ 	aarch64_insn_gen_add_sub_imm(Rd, Rn, imm12, \
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index 1542df00b23c..8d7ceec7f079 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -739,7 +739,6 @@ emit_cond_jmp:
+ 	case BPF_STX | BPF_XADD | BPF_DW:
+ 		emit_a64_mov_i(1, tmp, off, ctx);
+ 		emit(A64_ADD(1, tmp, tmp, dst), ctx);
+-		emit(A64_PRFM(tmp, PST, L1, STRM), ctx);
+ 		emit(A64_LDXR(isdw, tmp2, tmp), ctx);
+ 		emit(A64_ADD(isdw, tmp2, tmp2, src), ctx);
+ 		emit(A64_STXR(isdw, tmp2, tmp, tmp3), ctx);
+diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
+index ed554b09eb3f..6023021c9b23 100644
+--- a/arch/s390/Kconfig
++++ b/arch/s390/Kconfig
+@@ -148,6 +148,7 @@ config S390
+ 	select HAVE_FUNCTION_TRACER
+ 	select HAVE_FUTEX_CMPXCHG if FUTEX
+ 	select HAVE_GCC_PLUGINS
++	select HAVE_GENERIC_GUP
+ 	select HAVE_KERNEL_BZIP2
+ 	select HAVE_KERNEL_GZIP
+ 	select HAVE_KERNEL_LZ4
+diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
+index 063732414dfb..861f2b63f290 100644
+--- a/arch/s390/include/asm/pgtable.h
++++ b/arch/s390/include/asm/pgtable.h
+@@ -1203,42 +1203,79 @@ static inline pte_t mk_pte(struct page *page, pgprot_t pgprot)
+ #define pmd_index(address) (((address) >> PMD_SHIFT) & (PTRS_PER_PMD-1))
+ #define pte_index(address) (((address) >> PAGE_SHIFT) & (PTRS_PER_PTE-1))
+ 
+-#define pgd_offset(mm, address) ((mm)->pgd + pgd_index(address))
+-#define pgd_offset_k(address) pgd_offset(&init_mm, address)
+-#define pgd_offset_raw(pgd, addr) ((pgd) + pgd_index(addr))
+-
+ #define pmd_deref(pmd) (pmd_val(pmd) & _SEGMENT_ENTRY_ORIGIN)
+ #define pud_deref(pud) (pud_val(pud) & _REGION_ENTRY_ORIGIN)
+ #define p4d_deref(pud) (p4d_val(pud) & _REGION_ENTRY_ORIGIN)
+ #define pgd_deref(pgd) (pgd_val(pgd) & _REGION_ENTRY_ORIGIN)
+ 
+-static inline p4d_t *p4d_offset(pgd_t *pgd, unsigned long address)
++/*
++ * The pgd_offset function *always* adds the index for the top-level
++ * region/segment table. This is done to get a sequence like the
++ * following to work:
++ *	pgdp = pgd_offset(current->mm, addr);
++ *	pgd = READ_ONCE(*pgdp);
++ *	p4dp = p4d_offset(&pgd, addr);
++ *	...
++ * The subsequent p4d_offset, pud_offset and pmd_offset functions
++ * only add an index if they dereferenced the pointer.
++ */
++static inline pgd_t *pgd_offset_raw(pgd_t *pgd, unsigned long address)
+ {
+-	p4d_t *p4d = (p4d_t *) pgd;
++	unsigned long rste;
++	unsigned int shift;
+ 
+-	if ((pgd_val(*pgd) & _REGION_ENTRY_TYPE_MASK) == _REGION_ENTRY_TYPE_R1)
+-		p4d = (p4d_t *) pgd_deref(*pgd);
+-	return p4d + p4d_index(address);
++	/* Get the first entry of the top level table */
++	rste = pgd_val(*pgd);
++	/* Pick up the shift from the table type of the first entry */
++	shift = ((rste & _REGION_ENTRY_TYPE_MASK) >> 2) * 11 + 20;
++	return pgd + ((address >> shift) & (PTRS_PER_PGD - 1));
+ }
+ 
+-static inline pud_t *pud_offset(p4d_t *p4d, unsigned long address)
++#define pgd_offset(mm, address) pgd_offset_raw(READ_ONCE((mm)->pgd), address)
++#define pgd_offset_k(address) pgd_offset(&init_mm, address)
++
++static inline p4d_t *p4d_offset(pgd_t *pgd, unsigned long address)
+ {
+-	pud_t *pud = (pud_t *) p4d;
++	if ((pgd_val(*pgd) & _REGION_ENTRY_TYPE_MASK) >= _REGION_ENTRY_TYPE_R1)
++		return (p4d_t *) pgd_deref(*pgd) + p4d_index(address);
++	return (p4d_t *) pgd;
++}
+ 
+-	if ((p4d_val(*p4d) & _REGION_ENTRY_TYPE_MASK) == _REGION_ENTRY_TYPE_R2)
+-		pud = (pud_t *) p4d_deref(*p4d);
+-	return pud + pud_index(address);
++static inline pud_t *pud_offset(p4d_t *p4d, unsigned long address)
++{
++	if ((p4d_val(*p4d) & _REGION_ENTRY_TYPE_MASK) >= _REGION_ENTRY_TYPE_R2)
++		return (pud_t *) p4d_deref(*p4d) + pud_index(address);
++	return (pud_t *) p4d;
+ }
+ 
+ static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
+ {
+-	pmd_t *pmd = (pmd_t *) pud;
++	if ((pud_val(*pud) & _REGION_ENTRY_TYPE_MASK) >= _REGION_ENTRY_TYPE_R3)
++		return (pmd_t *) pud_deref(*pud) + pmd_index(address);
++	return (pmd_t *) pud;
++}
+ 
+-	if ((pud_val(*pud) & _REGION_ENTRY_TYPE_MASK) == _REGION_ENTRY_TYPE_R3)
+-		pmd = (pmd_t *) pud_deref(*pud);
+-	return pmd + pmd_index(address);
++static inline pte_t *pte_offset(pmd_t *pmd, unsigned long address)
++{
++	return (pte_t *) pmd_deref(*pmd) + pte_index(address);
+ }
+ 
++#define pte_offset_kernel(pmd, address) pte_offset(pmd, address)
++#define pte_offset_map(pmd, address) pte_offset_kernel(pmd, address)
++#define pte_unmap(pte) do { } while (0)
++
++static inline bool gup_fast_permitted(unsigned long start, int nr_pages)
++{
++	unsigned long len, end;
++
++	len = (unsigned long) nr_pages << PAGE_SHIFT;
++	end = start + len;
++	if (end < start)
++		return false;
++	return end <= current->mm->context.asce_limit;
++}
++#define gup_fast_permitted gup_fast_permitted
++
+ #define pfn_pte(pfn,pgprot) mk_pte_phys(__pa((pfn) << PAGE_SHIFT),(pgprot))
+ #define pte_pfn(x) (pte_val(x) >> PAGE_SHIFT)
+ #define pte_page(x) pfn_to_page(pte_pfn(x))
+@@ -1248,12 +1285,6 @@ static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
+ #define p4d_page(p4d) pfn_to_page(p4d_pfn(p4d))
+ #define pgd_page(pgd) pfn_to_page(pgd_pfn(pgd))
+ 
+-/* Find an entry in the lowest level page table.. */
+-#define pte_offset(pmd, addr) ((pte_t *) pmd_deref(*(pmd)) + pte_index(addr))
+-#define pte_offset_kernel(pmd, address) pte_offset(pmd,address)
+-#define pte_offset_map(pmd, address) pte_offset_kernel(pmd, address)
+-#define pte_unmap(pte) do { } while (0)
+-
+ static inline pmd_t pmd_wrprotect(pmd_t pmd)
+ {
+ 	pmd_val(pmd) &= ~_SEGMENT_ENTRY_WRITE;
+diff --git a/arch/s390/mm/Makefile b/arch/s390/mm/Makefile
+index f5880bfd1b0c..3175413186b9 100644
+--- a/arch/s390/mm/Makefile
++++ b/arch/s390/mm/Makefile
+@@ -4,7 +4,7 @@
+ #
+ 
+ obj-y		:= init.o fault.o extmem.o mmap.o vmem.o maccess.o
+-obj-y		+= page-states.o gup.o pageattr.o pgtable.o pgalloc.o
++obj-y		+= page-states.o pageattr.o pgtable.o pgalloc.o
+ 
+ obj-$(CONFIG_CMM)		+= cmm.o
+ obj-$(CONFIG_HUGETLB_PAGE)	+= hugetlbpage.o
+diff --git a/arch/s390/mm/gup.c b/arch/s390/mm/gup.c
+deleted file mode 100644
+index 2809d11c7a28..000000000000
+--- a/arch/s390/mm/gup.c
++++ /dev/null
+@@ -1,300 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- *  Lockless get_user_pages_fast for s390
+- *
+- *  Copyright IBM Corp. 2010
+- *  Author(s): Martin Schwidefsky <schwidefsky@de.ibm.com>
+- */
+-#include <linux/sched.h>
+-#include <linux/mm.h>
+-#include <linux/hugetlb.h>
+-#include <linux/vmstat.h>
+-#include <linux/pagemap.h>
+-#include <linux/rwsem.h>
+-#include <asm/pgtable.h>
+-
+-/*
+- * The performance critical leaf functions are made noinline otherwise gcc
+- * inlines everything into a single function which results in too much
+- * register pressure.
+- */
+-static inline int gup_pte_range(pmd_t *pmdp, pmd_t pmd, unsigned long addr,
+-		unsigned long end, int write, struct page **pages, int *nr)
+-{
+-	struct page *head, *page;
+-	unsigned long mask;
+-	pte_t *ptep, pte;
+-
+-	mask = (write ? _PAGE_PROTECT : 0) | _PAGE_INVALID | _PAGE_SPECIAL;
+-
+-	ptep = ((pte_t *) pmd_deref(pmd)) + pte_index(addr);
+-	do {
+-		pte = *ptep;
+-		barrier();
+-		/* Similar to the PMD case, NUMA hinting must take slow path */
+-		if (pte_protnone(pte))
+-			return 0;
+-		if ((pte_val(pte) & mask) != 0)
+-			return 0;
+-		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
+-		page = pte_page(pte);
+-		head = compound_head(page);
+-		if (!page_cache_get_speculative(head))
+-			return 0;
+-		if (unlikely(pte_val(pte) != pte_val(*ptep))) {
+-			put_page(head);
+-			return 0;
+-		}
+-		VM_BUG_ON_PAGE(compound_head(page) != head, page);
+-		pages[*nr] = page;
+-		(*nr)++;
+-
+-	} while (ptep++, addr += PAGE_SIZE, addr != end);
+-
+-	return 1;
+-}
+-
+-static inline int gup_huge_pmd(pmd_t *pmdp, pmd_t pmd, unsigned long addr,
+-		unsigned long end, int write, struct page **pages, int *nr)
+-{
+-	struct page *head, *page;
+-	unsigned long mask;
+-	int refs;
+-
+-	mask = (write ? _SEGMENT_ENTRY_PROTECT : 0) | _SEGMENT_ENTRY_INVALID;
+-	if ((pmd_val(pmd) & mask) != 0)
+-		return 0;
+-	VM_BUG_ON(!pfn_valid(pmd_val(pmd) >> PAGE_SHIFT));
+-
+-	refs = 0;
+-	head = pmd_page(pmd);
+-	page = head + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
+-	do {
+-		VM_BUG_ON(compound_head(page) != head);
+-		pages[*nr] = page;
+-		(*nr)++;
+-		page++;
+-		refs++;
+-	} while (addr += PAGE_SIZE, addr != end);
+-
+-	if (!page_cache_add_speculative(head, refs)) {
+-		*nr -= refs;
+-		return 0;
+-	}
+-
+-	if (unlikely(pmd_val(pmd) != pmd_val(*pmdp))) {
+-		*nr -= refs;
+-		while (refs--)
+-			put_page(head);
+-		return 0;
+-	}
+-
+-	return 1;
+-}
+-
+-
+-static inline int gup_pmd_range(pud_t *pudp, pud_t pud, unsigned long addr,
+-		unsigned long end, int write, struct page **pages, int *nr)
+-{
+-	unsigned long next;
+-	pmd_t *pmdp, pmd;
+-
+-	pmdp = (pmd_t *) pudp;
+-	if ((pud_val(pud) & _REGION_ENTRY_TYPE_MASK) == _REGION_ENTRY_TYPE_R3)
+-		pmdp = (pmd_t *) pud_deref(pud);
+-	pmdp += pmd_index(addr);
+-	do {
+-		pmd = *pmdp;
+-		barrier();
+-		next = pmd_addr_end(addr, end);
+-		if (pmd_none(pmd))
+-			return 0;
+-		if (unlikely(pmd_large(pmd))) {
+-			/*
+-			 * NUMA hinting faults need to be handled in the GUP
+-			 * slowpath for accounting purposes and so that they
+-			 * can be serialised against THP migration.
+-			 */
+-			if (pmd_protnone(pmd))
+-				return 0;
+-			if (!gup_huge_pmd(pmdp, pmd, addr, next,
+-					  write, pages, nr))
+-				return 0;
+-		} else if (!gup_pte_range(pmdp, pmd, addr, next,
+-					  write, pages, nr))
+-			return 0;
+-	} while (pmdp++, addr = next, addr != end);
+-
+-	return 1;
+-}
+-
+-static int gup_huge_pud(pud_t *pudp, pud_t pud, unsigned long addr,
+-		unsigned long end, int write, struct page **pages, int *nr)
+-{
+-	struct page *head, *page;
+-	unsigned long mask;
+-	int refs;
+-
+-	mask = (write ? _REGION_ENTRY_PROTECT : 0) | _REGION_ENTRY_INVALID;
+-	if ((pud_val(pud) & mask) != 0)
+-		return 0;
+-	VM_BUG_ON(!pfn_valid(pud_pfn(pud)));
+-
+-	refs = 0;
+-	head = pud_page(pud);
+-	page = head + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
+-	do {
+-		VM_BUG_ON_PAGE(compound_head(page) != head, page);
+-		pages[*nr] = page;
+-		(*nr)++;
+-		page++;
+-		refs++;
+-	} while (addr += PAGE_SIZE, addr != end);
+-
+-	if (!page_cache_add_speculative(head, refs)) {
+-		*nr -= refs;
+-		return 0;
+-	}
+-
+-	if (unlikely(pud_val(pud) != pud_val(*pudp))) {
+-		*nr -= refs;
+-		while (refs--)
+-			put_page(head);
+-		return 0;
+-	}
+-
+-	return 1;
+-}
+-
+-static inline int gup_pud_range(p4d_t *p4dp, p4d_t p4d, unsigned long addr,
+-		unsigned long end, int write, struct page **pages, int *nr)
+-{
+-	unsigned long next;
+-	pud_t *pudp, pud;
+-
+-	pudp = (pud_t *) p4dp;
+-	if ((p4d_val(p4d) & _REGION_ENTRY_TYPE_MASK) == _REGION_ENTRY_TYPE_R2)
+-		pudp = (pud_t *) p4d_deref(p4d);
+-	pudp += pud_index(addr);
+-	do {
+-		pud = *pudp;
+-		barrier();
+-		next = pud_addr_end(addr, end);
+-		if (pud_none(pud))
+-			return 0;
+-		if (unlikely(pud_large(pud))) {
+-			if (!gup_huge_pud(pudp, pud, addr, next, write, pages,
+-					  nr))
+-				return 0;
+-		} else if (!gup_pmd_range(pudp, pud, addr, next, write, pages,
+-					  nr))
+-			return 0;
+-	} while (pudp++, addr = next, addr != end);
+-
+-	return 1;
+-}
+-
+-static inline int gup_p4d_range(pgd_t *pgdp, pgd_t pgd, unsigned long addr,
+-		unsigned long end, int write, struct page **pages, int *nr)
+-{
+-	unsigned long next;
+-	p4d_t *p4dp, p4d;
+-
+-	p4dp = (p4d_t *) pgdp;
+-	if ((pgd_val(pgd) & _REGION_ENTRY_TYPE_MASK) == _REGION_ENTRY_TYPE_R1)
+-		p4dp = (p4d_t *) pgd_deref(pgd);
+-	p4dp += p4d_index(addr);
+-	do {
+-		p4d = *p4dp;
+-		barrier();
+-		next = p4d_addr_end(addr, end);
+-		if (p4d_none(p4d))
+-			return 0;
+-		if (!gup_pud_range(p4dp, p4d, addr, next, write, pages, nr))
+-			return 0;
+-	} while (p4dp++, addr = next, addr != end);
+-
+-	return 1;
+-}
+-
+-/*
+- * Like get_user_pages_fast() except its IRQ-safe in that it won't fall
+- * back to the regular GUP.
+- * Note a difference with get_user_pages_fast: this always returns the
+- * number of pages pinned, 0 if no pages were pinned.
+- */
+-int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
+-			  struct page **pages)
+-{
+-	struct mm_struct *mm = current->mm;
+-	unsigned long addr, len, end;
+-	unsigned long next, flags;
+-	pgd_t *pgdp, pgd;
+-	int nr = 0;
+-
+-	start &= PAGE_MASK;
+-	addr = start;
+-	len = (unsigned long) nr_pages << PAGE_SHIFT;
+-	end = start + len;
+-	if ((end <= start) || (end > mm->context.asce_limit))
+-		return 0;
+-	/*
+-	 * local_irq_save() doesn't prevent pagetable teardown, but does
+-	 * prevent the pagetables from being freed on s390.
+-	 *
+-	 * So long as we atomically load page table pointers versus teardown,
+-	 * we can follow the address down to the the page and take a ref on it.
+-	 */
+-	local_irq_save(flags);
+-	pgdp = pgd_offset(mm, addr);
+-	do {
+-		pgd = *pgdp;
+-		barrier();
+-		next = pgd_addr_end(addr, end);
+-		if (pgd_none(pgd))
+-			break;
+-		if (!gup_p4d_range(pgdp, pgd, addr, next, write, pages, &nr))
+-			break;
+-	} while (pgdp++, addr = next, addr != end);
+-	local_irq_restore(flags);
+-
+-	return nr;
+-}
+-
+-/**
+- * get_user_pages_fast() - pin user pages in memory
+- * @start:	starting user address
+- * @nr_pages:	number of pages from start to pin
+- * @write:	whether pages will be written to
+- * @pages:	array that receives pointers to the pages pinned.
+- *		Should be at least nr_pages long.
+- *
+- * Attempt to pin user pages in memory without taking mm->mmap_sem.
+- * If not successful, it will fall back to taking the lock and
+- * calling get_user_pages().
+- *
+- * Returns number of pages pinned. This may be fewer than the number
+- * requested. If nr_pages is 0 or negative, returns 0. If no pages
+- * were pinned, returns -errno.
+- */
+-int get_user_pages_fast(unsigned long start, int nr_pages, int write,
+-			struct page **pages)
+-{
+-	int nr, ret;
+-
+-	might_sleep();
+-	start &= PAGE_MASK;
+-	nr = __get_user_pages_fast(start, nr_pages, write, pages);
+-	if (nr == nr_pages)
+-		return nr;
+-
+-	/* Try to get the remaining pages with get_user_pages */
+-	start += nr << PAGE_SHIFT;
+-	pages += nr;
+-	ret = get_user_pages_unlocked(start, nr_pages - nr, pages,
+-				      write ? FOLL_WRITE : 0);
+-	/* Have to be a bit careful with return values */
+-	if (nr > 0)
+-		ret = (ret < 0) ? nr : ret + nr;
+-	return ret;
+-}
+diff --git a/arch/x86/crypto/crct10dif-pclmul_glue.c b/arch/x86/crypto/crct10dif-pclmul_glue.c
+index cd4df9322501..7bbfe7d35da7 100644
+--- a/arch/x86/crypto/crct10dif-pclmul_glue.c
++++ b/arch/x86/crypto/crct10dif-pclmul_glue.c
+@@ -76,15 +76,14 @@ static int chksum_final(struct shash_desc *desc, u8 *out)
+ 	return 0;
+ }
+ 
+-static int __chksum_finup(__u16 *crcp, const u8 *data, unsigned int len,
+-			u8 *out)
++static int __chksum_finup(__u16 crc, const u8 *data, unsigned int len, u8 *out)
+ {
+ 	if (irq_fpu_usable()) {
+ 		kernel_fpu_begin();
+-		*(__u16 *)out = crc_t10dif_pcl(*crcp, data, len);
++		*(__u16 *)out = crc_t10dif_pcl(crc, data, len);
+ 		kernel_fpu_end();
+ 	} else
+-		*(__u16 *)out = crc_t10dif_generic(*crcp, data, len);
++		*(__u16 *)out = crc_t10dif_generic(crc, data, len);
+ 	return 0;
+ }
+ 
+@@ -93,15 +92,13 @@ static int chksum_finup(struct shash_desc *desc, const u8 *data,
+ {
+ 	struct chksum_desc_ctx *ctx = shash_desc_ctx(desc);
+ 
+-	return __chksum_finup(&ctx->crc, data, len, out);
++	return __chksum_finup(ctx->crc, data, len, out);
+ }
+ 
+ static int chksum_digest(struct shash_desc *desc, const u8 *data,
+ 			 unsigned int length, u8 *out)
+ {
+-	struct chksum_desc_ctx *ctx = shash_desc_ctx(desc);
+-
+-	return __chksum_finup(&ctx->crc, data, length, out);
++	return __chksum_finup(0, data, length, out);
+ }
+ 
+ static struct shash_alg alg = {
+diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
+index d309f30cf7af..5fc76b755510 100644
+--- a/arch/x86/entry/entry_32.S
++++ b/arch/x86/entry/entry_32.S
+@@ -650,6 +650,7 @@ ENTRY(__switch_to_asm)
+ 	pushl	%ebx
+ 	pushl	%edi
+ 	pushl	%esi
++	pushfl
+ 
+ 	/* switch stack */
+ 	movl	%esp, TASK_threadsp(%eax)
+@@ -672,6 +673,7 @@ ENTRY(__switch_to_asm)
+ #endif
+ 
+ 	/* restore callee-saved registers */
++	popfl
+ 	popl	%esi
+ 	popl	%edi
+ 	popl	%ebx
+diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
+index 1f0efdb7b629..4fe27b67d7e2 100644
+--- a/arch/x86/entry/entry_64.S
++++ b/arch/x86/entry/entry_64.S
+@@ -291,6 +291,7 @@ ENTRY(__switch_to_asm)
+ 	pushq	%r13
+ 	pushq	%r14
+ 	pushq	%r15
++	pushfq
+ 
+ 	/* switch stack */
+ 	movq	%rsp, TASK_threadsp(%rdi)
+@@ -313,6 +314,7 @@ ENTRY(__switch_to_asm)
+ #endif
+ 
+ 	/* restore callee-saved registers */
++	popfq
+ 	popq	%r15
+ 	popq	%r14
+ 	popq	%r13
+diff --git a/arch/x86/include/asm/mce.h b/arch/x86/include/asm/mce.h
+index c1a812bd5a27..f7d562bd7dc7 100644
+--- a/arch/x86/include/asm/mce.h
++++ b/arch/x86/include/asm/mce.h
+@@ -209,16 +209,6 @@ static inline void cmci_rediscover(void) {}
+ static inline void cmci_recheck(void) {}
+ #endif
+ 
+-#ifdef CONFIG_X86_MCE_AMD
+-void mce_amd_feature_init(struct cpuinfo_x86 *c);
+-int umc_normaddr_to_sysaddr(u64 norm_addr, u16 nid, u8 umc, u64 *sys_addr);
+-#else
+-static inline void mce_amd_feature_init(struct cpuinfo_x86 *c) { }
+-static inline int umc_normaddr_to_sysaddr(u64 norm_addr, u16 nid, u8 umc, u64 *sys_addr) { return -EINVAL; };
+-#endif
+-
+-static inline void mce_hygon_feature_init(struct cpuinfo_x86 *c) { return mce_amd_feature_init(c); }
+-
+ int mce_available(struct cpuinfo_x86 *c);
+ bool mce_is_memory_error(struct mce *m);
+ bool mce_is_correctable(struct mce *m);
+@@ -338,12 +328,19 @@ extern bool amd_mce_is_memory_error(struct mce *m);
+ extern int mce_threshold_create_device(unsigned int cpu);
+ extern int mce_threshold_remove_device(unsigned int cpu);
+ 
+-#else
++void mce_amd_feature_init(struct cpuinfo_x86 *c);
++int umc_normaddr_to_sysaddr(u64 norm_addr, u16 nid, u8 umc, u64 *sys_addr);
+ 
+-static inline int mce_threshold_create_device(unsigned int cpu) { return 0; };
+-static inline int mce_threshold_remove_device(unsigned int cpu) { return 0; };
+-static inline bool amd_mce_is_memory_error(struct mce *m) { return false; };
++#else
+ 
++static inline int mce_threshold_create_device(unsigned int cpu)		{ return 0; };
++static inline int mce_threshold_remove_device(unsigned int cpu)		{ return 0; };
++static inline bool amd_mce_is_memory_error(struct mce *m)		{ return false; };
++static inline void mce_amd_feature_init(struct cpuinfo_x86 *c)		{ }
++static inline int
++umc_normaddr_to_sysaddr(u64 norm_addr, u16 nid, u8 umc, u64 *sys_addr)	{ return -EINVAL; };
+ #endif
+ 
++static inline void mce_hygon_feature_init(struct cpuinfo_x86 *c)	{ return mce_amd_feature_init(c); }
++
+ #endif /* _ASM_X86_MCE_H */
+diff --git a/arch/x86/include/asm/switch_to.h b/arch/x86/include/asm/switch_to.h
+index 7cf1a270d891..157149d4129c 100644
+--- a/arch/x86/include/asm/switch_to.h
++++ b/arch/x86/include/asm/switch_to.h
+@@ -40,6 +40,7 @@ asmlinkage void ret_from_fork(void);
+  * order of the fields must match the code in __switch_to_asm().
+  */
+ struct inactive_task_frame {
++	unsigned long flags;
+ #ifdef CONFIG_X86_64
+ 	unsigned long r15;
+ 	unsigned long r14;
+diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
+index 89298c83de53..496033b01d26 100644
+--- a/arch/x86/kernel/cpu/mce/amd.c
++++ b/arch/x86/kernel/cpu/mce/amd.c
+@@ -545,6 +545,66 @@ out:
+ 	return offset;
+ }
+ 
++bool amd_filter_mce(struct mce *m)
++{
++	enum smca_bank_types bank_type = smca_get_bank_type(m->bank);
++	struct cpuinfo_x86 *c = &boot_cpu_data;
++	u8 xec = (m->status >> 16) & 0x3F;
++
++	/* See Family 17h Models 10h-2Fh Erratum #1114. */
++	if (c->x86 == 0x17 &&
++	    c->x86_model >= 0x10 && c->x86_model <= 0x2F &&
++	    bank_type == SMCA_IF && xec == 10)
++		return true;
++
++	return false;
++}
++
++/*
++ * Turn off thresholding banks for the following conditions:
++ * - MC4_MISC thresholding is not supported on Family 0x15.
++ * - Prevent possible spurious interrupts from the IF bank on Family 0x17
++ *   Models 0x10-0x2F due to Erratum #1114.
++ */
++void disable_err_thresholding(struct cpuinfo_x86 *c, unsigned int bank)
++{
++	int i, num_msrs;
++	u64 hwcr;
++	bool need_toggle;
++	u32 msrs[NR_BLOCKS];
++
++	if (c->x86 == 0x15 && bank == 4) {
++		msrs[0] = 0x00000413; /* MC4_MISC0 */
++		msrs[1] = 0xc0000408; /* MC4_MISC1 */
++		num_msrs = 2;
++	} else if (c->x86 == 0x17 &&
++		   (c->x86_model >= 0x10 && c->x86_model <= 0x2F)) {
++
++		if (smca_get_bank_type(bank) != SMCA_IF)
++			return;
++
++		msrs[0] = MSR_AMD64_SMCA_MCx_MISC(bank);
++		num_msrs = 1;
++	} else {
++		return;
++	}
++
++	rdmsrl(MSR_K7_HWCR, hwcr);
++
++	/* McStatusWrEn has to be set */
++	need_toggle = !(hwcr & BIT(18));
++	if (need_toggle)
++		wrmsrl(MSR_K7_HWCR, hwcr | BIT(18));
++
++	/* Clear CntP bit safely */
++	for (i = 0; i < num_msrs; i++)
++		msr_clear_bit(msrs[i], 62);
++
++	/* restore old settings */
++	if (need_toggle)
++		wrmsrl(MSR_K7_HWCR, hwcr);
++}
++
+ /* cpu init entry point, called from mce.c with preempt off */
+ void mce_amd_feature_init(struct cpuinfo_x86 *c)
+ {
+@@ -556,6 +616,8 @@ void mce_amd_feature_init(struct cpuinfo_x86 *c)
+ 		if (mce_flags.smca)
+ 			smca_configure(bank, cpu);
+ 
++		disable_err_thresholding(c, bank);
++
+ 		for (block = 0; block < NR_BLOCKS; ++block) {
+ 			address = get_block_address(address, low, high, bank, block);
+ 			if (!address)
+diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
+index 6ce290c506d9..1a7084ba9a3b 100644
+--- a/arch/x86/kernel/cpu/mce/core.c
++++ b/arch/x86/kernel/cpu/mce/core.c
+@@ -1612,36 +1612,6 @@ static int __mcheck_cpu_apply_quirks(struct cpuinfo_x86 *c)
+ 		if (c->x86 == 0x15 && c->x86_model <= 0xf)
+ 			mce_flags.overflow_recov = 1;
+ 
+-		/*
+-		 * Turn off MC4_MISC thresholding banks on those models since
+-		 * they're not supported there.
+-		 */
+-		if (c->x86 == 0x15 &&
+-		    (c->x86_model >= 0x10 && c->x86_model <= 0x1f)) {
+-			int i;
+-			u64 hwcr;
+-			bool need_toggle;
+-			u32 msrs[] = {
+-				0x00000413, /* MC4_MISC0 */
+-				0xc0000408, /* MC4_MISC1 */
+-			};
+-
+-			rdmsrl(MSR_K7_HWCR, hwcr);
+-
+-			/* McStatusWrEn has to be set */
+-			need_toggle = !(hwcr & BIT(18));
+-
+-			if (need_toggle)
+-				wrmsrl(MSR_K7_HWCR, hwcr | BIT(18));
+-
+-			/* Clear CntP bit safely */
+-			for (i = 0; i < ARRAY_SIZE(msrs); i++)
+-				msr_clear_bit(msrs[i], 62);
+-
+-			/* restore old settings */
+-			if (need_toggle)
+-				wrmsrl(MSR_K7_HWCR, hwcr);
+-		}
+ 	}
+ 
+ 	if (c->x86_vendor == X86_VENDOR_INTEL) {
+@@ -1801,6 +1771,14 @@ static void __mcheck_cpu_init_timer(void)
+ 	mce_start_timer(t);
+ }
+ 
++bool filter_mce(struct mce *m)
++{
++	if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD)
++		return amd_filter_mce(m);
++
++	return false;
++}
++
+ /* Handle unconfigured int18 (should never happen) */
+ static void unexpected_machine_check(struct pt_regs *regs, long error_code)
+ {
+diff --git a/arch/x86/kernel/cpu/mce/genpool.c b/arch/x86/kernel/cpu/mce/genpool.c
+index 3395549c51d3..64d1d5a00f39 100644
+--- a/arch/x86/kernel/cpu/mce/genpool.c
++++ b/arch/x86/kernel/cpu/mce/genpool.c
+@@ -99,6 +99,9 @@ int mce_gen_pool_add(struct mce *mce)
+ {
+ 	struct mce_evt_llist *node;
+ 
++	if (filter_mce(mce))
++		return -EINVAL;
++
+ 	if (!mce_evt_pool)
+ 		return -EINVAL;
+ 
+diff --git a/arch/x86/kernel/cpu/mce/internal.h b/arch/x86/kernel/cpu/mce/internal.h
+index af5eab1e65e2..a34b55baa7aa 100644
+--- a/arch/x86/kernel/cpu/mce/internal.h
++++ b/arch/x86/kernel/cpu/mce/internal.h
+@@ -173,4 +173,13 @@ struct mca_msr_regs {
+ 
+ extern struct mca_msr_regs msr_ops;
+ 
++/* Decide whether to add MCE record to MCE event pool or filter it out. */
++extern bool filter_mce(struct mce *m);
++
++#ifdef CONFIG_X86_MCE_AMD
++extern bool amd_filter_mce(struct mce *m);
++#else
++static inline bool amd_filter_mce(struct mce *m)			{ return false; };
++#endif
++
+ #endif /* __X86_MCE_INTERNAL_H__ */
+diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
+index e471d8e6f0b2..70933193878c 100644
+--- a/arch/x86/kernel/process_32.c
++++ b/arch/x86/kernel/process_32.c
+@@ -127,6 +127,13 @@ int copy_thread_tls(unsigned long clone_flags, unsigned long sp,
+ 	struct task_struct *tsk;
+ 	int err;
+ 
++	/*
++	 * For a new task use the RESET flags value since there is no before.
++	 * All the status flags are zero; DF and all the system flags must also
++	 * be 0, specifically IF must be 0 because we context switch to the new
++	 * task with interrupts disabled.
++	 */
++	frame->flags = X86_EFLAGS_FIXED;
+ 	frame->bp = 0;
+ 	frame->ret_addr = (unsigned long) ret_from_fork;
+ 	p->thread.sp = (unsigned long) fork_frame;
+diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
+index 6a62f4af9fcf..026a43be9bd1 100644
+--- a/arch/x86/kernel/process_64.c
++++ b/arch/x86/kernel/process_64.c
+@@ -392,6 +392,14 @@ int copy_thread_tls(unsigned long clone_flags, unsigned long sp,
+ 	childregs = task_pt_regs(p);
+ 	fork_frame = container_of(childregs, struct fork_frame, regs);
+ 	frame = &fork_frame->frame;
++
++	/*
++	 * For a new task use the RESET flags value since there is no before.
++	 * All the status flags are zero; DF and all the system flags must also
++	 * be 0, specifically IF must be 0 because we context switch to the new
++	 * task with interrupts disabled.
++	 */
++	frame->flags = X86_EFLAGS_FIXED;
+ 	frame->bp = 0;
+ 	frame->ret_addr = (unsigned long) ret_from_fork;
+ 	p->thread.sp = (unsigned long) fork_frame;
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index 85fe1870f873..9b7c4ca8f0a7 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -58,7 +58,6 @@
+ #include <asm/alternative.h>
+ #include <asm/fpu/xstate.h>
+ #include <asm/trace/mpx.h>
+-#include <asm/nospec-branch.h>
+ #include <asm/mpx.h>
+ #include <asm/vm86.h>
+ #include <asm/umip.h>
+@@ -367,13 +366,6 @@ dotraplinkage void do_double_fault(struct pt_regs *regs, long error_code)
+ 		regs->ip = (unsigned long)general_protection;
+ 		regs->sp = (unsigned long)&gpregs->orig_ax;
+ 
+-		/*
+-		 * This situation can be triggered by userspace via
+-		 * modify_ldt(2) and the return does not take the regular
+-		 * user space exit, so a CPU buffer clear is required when
+-		 * MDS mitigation is enabled.
+-		 */
+-		mds_user_clear_cpu_buffers();
+ 		return;
+ 	}
+ #endif
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index 235687f3388f..6939eba2001a 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -1453,7 +1453,7 @@ static void apic_timer_expired(struct kvm_lapic *apic)
+ 	if (swait_active(q))
+ 		swake_up_one(q);
+ 
+-	if (apic_lvtt_tscdeadline(apic))
++	if (apic_lvtt_tscdeadline(apic) || ktimer->hv_timer_in_use)
+ 		ktimer->expired_tscdeadline = ktimer->tscdeadline;
+ }
+ 
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 3eeb7183fc09..0bbb21a49082 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -1262,31 +1262,42 @@ static int do_get_msr_feature(struct kvm_vcpu *vcpu, unsigned index, u64 *data)
+ 	return 0;
+ }
+ 
+-bool kvm_valid_efer(struct kvm_vcpu *vcpu, u64 efer)
++static bool __kvm_valid_efer(struct kvm_vcpu *vcpu, u64 efer)
+ {
+-	if (efer & efer_reserved_bits)
+-		return false;
+-
+ 	if (efer & EFER_FFXSR && !guest_cpuid_has(vcpu, X86_FEATURE_FXSR_OPT))
+-			return false;
++		return false;
+ 
+ 	if (efer & EFER_SVME && !guest_cpuid_has(vcpu, X86_FEATURE_SVM))
+-			return false;
++		return false;
+ 
+ 	return true;
++
++}
++bool kvm_valid_efer(struct kvm_vcpu *vcpu, u64 efer)
++{
++	if (efer & efer_reserved_bits)
++		return false;
++
++	return __kvm_valid_efer(vcpu, efer);
+ }
+ EXPORT_SYMBOL_GPL(kvm_valid_efer);
+ 
+-static int set_efer(struct kvm_vcpu *vcpu, u64 efer)
++static int set_efer(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ {
+ 	u64 old_efer = vcpu->arch.efer;
++	u64 efer = msr_info->data;
+ 
+-	if (!kvm_valid_efer(vcpu, efer))
+-		return 1;
++	if (efer & efer_reserved_bits)
++		return false;
+ 
+-	if (is_paging(vcpu)
+-	    && (vcpu->arch.efer & EFER_LME) != (efer & EFER_LME))
+-		return 1;
++	if (!msr_info->host_initiated) {
++		if (!__kvm_valid_efer(vcpu, efer))
++			return 1;
++
++		if (is_paging(vcpu) &&
++		    (vcpu->arch.efer & EFER_LME) != (efer & EFER_LME))
++			return 1;
++	}
+ 
+ 	efer &= ~EFER_LMA;
+ 	efer |= vcpu->arch.efer & EFER_LMA;
+@@ -2456,7 +2467,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 		vcpu->arch.arch_capabilities = data;
+ 		break;
+ 	case MSR_EFER:
+-		return set_efer(vcpu, data);
++		return set_efer(vcpu, msr_info);
+ 	case MSR_K7_HWCR:
+ 		data &= ~(u64)0x40;	/* ignore flush filter disable */
+ 		data &= ~(u64)0x100;	/* ignore ignne emulation enable */
+diff --git a/arch/x86/platform/pvh/enlighten.c b/arch/x86/platform/pvh/enlighten.c
+index 62f5c7045944..1861a2ba0f2b 100644
+--- a/arch/x86/platform/pvh/enlighten.c
++++ b/arch/x86/platform/pvh/enlighten.c
+@@ -44,8 +44,6 @@ void __init __weak mem_map_via_hcall(struct boot_params *ptr __maybe_unused)
+ 
+ static void __init init_pvh_bootparams(bool xen_guest)
+ {
+-	memset(&pvh_bootparams, 0, sizeof(pvh_bootparams));
+-
+ 	if ((pvh_start_info.version > 0) && (pvh_start_info.memmap_entries)) {
+ 		struct hvm_memmap_table_entry *ep;
+ 		int i;
+@@ -103,7 +101,7 @@ static void __init init_pvh_bootparams(bool xen_guest)
+  * If we are trying to boot a Xen PVH guest, it is expected that the kernel
+  * will have been configured to provide the required override for this routine.
+  */
+-void __init __weak xen_pvh_init(void)
++void __init __weak xen_pvh_init(struct boot_params *boot_params)
+ {
+ 	xen_raw_printk("Error: Missing xen PVH initialization\n");
+ 	BUG();
+@@ -112,7 +110,7 @@ void __init __weak xen_pvh_init(void)
+ static void hypervisor_specific_init(bool xen_guest)
+ {
+ 	if (xen_guest)
+-		xen_pvh_init();
++		xen_pvh_init(&pvh_bootparams);
+ }
+ 
+ /*
+@@ -131,6 +129,8 @@ void __init xen_prepare_pvh(void)
+ 		BUG();
+ 	}
+ 
++	memset(&pvh_bootparams, 0, sizeof(pvh_bootparams));
++
+ 	hypervisor_specific_init(xen_guest);
+ 
+ 	init_pvh_bootparams(xen_guest);
+diff --git a/arch/x86/xen/efi.c b/arch/x86/xen/efi.c
+index 1fbb629a9d78..0d3365cb64de 100644
+--- a/arch/x86/xen/efi.c
++++ b/arch/x86/xen/efi.c
+@@ -158,7 +158,7 @@ static enum efi_secureboot_mode xen_efi_get_secureboot(void)
+ 	return efi_secureboot_mode_unknown;
+ }
+ 
+-void __init xen_efi_init(void)
++void __init xen_efi_init(struct boot_params *boot_params)
+ {
+ 	efi_system_table_t *efi_systab_xen;
+ 
+@@ -167,12 +167,12 @@ void __init xen_efi_init(void)
+ 	if (efi_systab_xen == NULL)
+ 		return;
+ 
+-	strncpy((char *)&boot_params.efi_info.efi_loader_signature, "Xen",
+-			sizeof(boot_params.efi_info.efi_loader_signature));
+-	boot_params.efi_info.efi_systab = (__u32)__pa(efi_systab_xen);
+-	boot_params.efi_info.efi_systab_hi = (__u32)(__pa(efi_systab_xen) >> 32);
++	strncpy((char *)&boot_params->efi_info.efi_loader_signature, "Xen",
++			sizeof(boot_params->efi_info.efi_loader_signature));
++	boot_params->efi_info.efi_systab = (__u32)__pa(efi_systab_xen);
++	boot_params->efi_info.efi_systab_hi = (__u32)(__pa(efi_systab_xen) >> 32);
+ 
+-	boot_params.secure_boot = xen_efi_get_secureboot();
++	boot_params->secure_boot = xen_efi_get_secureboot();
+ 
+ 	set_bit(EFI_BOOT, &efi.flags);
+ 	set_bit(EFI_PARAVIRT, &efi.flags);
+diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
+index c54a493e139a..4722ba2966ac 100644
+--- a/arch/x86/xen/enlighten_pv.c
++++ b/arch/x86/xen/enlighten_pv.c
+@@ -1403,7 +1403,7 @@ asmlinkage __visible void __init xen_start_kernel(void)
+ 	/* We need this for printk timestamps */
+ 	xen_setup_runstate_info(0);
+ 
+-	xen_efi_init();
++	xen_efi_init(&boot_params);
+ 
+ 	/* Start the world */
+ #ifdef CONFIG_X86_32
+diff --git a/arch/x86/xen/enlighten_pvh.c b/arch/x86/xen/enlighten_pvh.c
+index 35b7599d2d0b..80a79db72fcf 100644
+--- a/arch/x86/xen/enlighten_pvh.c
++++ b/arch/x86/xen/enlighten_pvh.c
+@@ -13,6 +13,8 @@
+ 
+ #include <xen/interface/memory.h>
+ 
++#include "xen-ops.h"
++
+ /*
+  * PVH variables.
+  *
+@@ -21,17 +23,20 @@
+  */
+ bool xen_pvh __attribute__((section(".data"))) = 0;
+ 
+-void __init xen_pvh_init(void)
++void __init xen_pvh_init(struct boot_params *boot_params)
+ {
+ 	u32 msr;
+ 	u64 pfn;
+ 
+ 	xen_pvh = 1;
++	xen_domain_type = XEN_HVM_DOMAIN;
+ 	xen_start_flags = pvh_start_info.flags;
+ 
+ 	msr = cpuid_ebx(xen_cpuid_base() + 2);
+ 	pfn = __pa(hypercall_page);
+ 	wrmsr_safe(msr, (u32)pfn, (u32)(pfn >> 32));
++
++	xen_efi_init(boot_params);
+ }
+ 
+ void __init mem_map_via_hcall(struct boot_params *boot_params_p)
+diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
+index 0e60bd918695..2f111f47ba98 100644
+--- a/arch/x86/xen/xen-ops.h
++++ b/arch/x86/xen/xen-ops.h
+@@ -122,9 +122,9 @@ static inline void __init xen_init_vga(const struct dom0_vga_console_info *info,
+ void __init xen_init_apic(void);
+ 
+ #ifdef CONFIG_XEN_EFI
+-extern void xen_efi_init(void);
++extern void xen_efi_init(struct boot_params *boot_params);
+ #else
+-static inline void __init xen_efi_init(void)
++static inline void __init xen_efi_init(struct boot_params *boot_params)
+ {
+ }
+ #endif
+diff --git a/crypto/ccm.c b/crypto/ccm.c
+index b242fd0d3262..1bee0105617d 100644
+--- a/crypto/ccm.c
++++ b/crypto/ccm.c
+@@ -458,7 +458,6 @@ static void crypto_ccm_free(struct aead_instance *inst)
+ 
+ static int crypto_ccm_create_common(struct crypto_template *tmpl,
+ 				    struct rtattr **tb,
+-				    const char *full_name,
+ 				    const char *ctr_name,
+ 				    const char *mac_name)
+ {
+@@ -486,7 +485,8 @@ static int crypto_ccm_create_common(struct crypto_template *tmpl,
+ 
+ 	mac = __crypto_hash_alg_common(mac_alg);
+ 	err = -EINVAL;
+-	if (mac->digestsize != 16)
++	if (strncmp(mac->base.cra_name, "cbcmac(", 7) != 0 ||
++	    mac->digestsize != 16)
+ 		goto out_put_mac;
+ 
+ 	inst = kzalloc(sizeof(*inst) + sizeof(*ictx), GFP_KERNEL);
+@@ -509,23 +509,27 @@ static int crypto_ccm_create_common(struct crypto_template *tmpl,
+ 
+ 	ctr = crypto_spawn_skcipher_alg(&ictx->ctr);
+ 
+-	/* Not a stream cipher? */
++	/* The skcipher algorithm must be CTR mode, using 16-byte blocks. */
+ 	err = -EINVAL;
+-	if (ctr->base.cra_blocksize != 1)
++	if (strncmp(ctr->base.cra_name, "ctr(", 4) != 0 ||
++	    crypto_skcipher_alg_ivsize(ctr) != 16 ||
++	    ctr->base.cra_blocksize != 1)
+ 		goto err_drop_ctr;
+ 
+-	/* We want the real thing! */
+-	if (crypto_skcipher_alg_ivsize(ctr) != 16)
++	/* ctr and cbcmac must use the same underlying block cipher. */
++	if (strcmp(ctr->base.cra_name + 4, mac->base.cra_name + 7) != 0)
+ 		goto err_drop_ctr;
+ 
+ 	err = -ENAMETOOLONG;
++	if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
++		     "ccm(%s", ctr->base.cra_name + 4) >= CRYPTO_MAX_ALG_NAME)
++		goto err_drop_ctr;
++
+ 	if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
+ 		     "ccm_base(%s,%s)", ctr->base.cra_driver_name,
+ 		     mac->base.cra_driver_name) >= CRYPTO_MAX_ALG_NAME)
+ 		goto err_drop_ctr;
+ 
+-	memcpy(inst->alg.base.cra_name, full_name, CRYPTO_MAX_ALG_NAME);
+-
+ 	inst->alg.base.cra_flags = ctr->base.cra_flags & CRYPTO_ALG_ASYNC;
+ 	inst->alg.base.cra_priority = (mac->base.cra_priority +
+ 				       ctr->base.cra_priority) / 2;
+@@ -567,7 +571,6 @@ static int crypto_ccm_create(struct crypto_template *tmpl, struct rtattr **tb)
+ 	const char *cipher_name;
+ 	char ctr_name[CRYPTO_MAX_ALG_NAME];
+ 	char mac_name[CRYPTO_MAX_ALG_NAME];
+-	char full_name[CRYPTO_MAX_ALG_NAME];
+ 
+ 	cipher_name = crypto_attr_alg_name(tb[1]);
+ 	if (IS_ERR(cipher_name))
+@@ -581,12 +584,7 @@ static int crypto_ccm_create(struct crypto_template *tmpl, struct rtattr **tb)
+ 		     cipher_name) >= CRYPTO_MAX_ALG_NAME)
+ 		return -ENAMETOOLONG;
+ 
+-	if (snprintf(full_name, CRYPTO_MAX_ALG_NAME, "ccm(%s)", cipher_name) >=
+-	    CRYPTO_MAX_ALG_NAME)
+-		return -ENAMETOOLONG;
+-
+-	return crypto_ccm_create_common(tmpl, tb, full_name, ctr_name,
+-					mac_name);
++	return crypto_ccm_create_common(tmpl, tb, ctr_name, mac_name);
+ }
+ 
+ static struct crypto_template crypto_ccm_tmpl = {
+@@ -599,23 +597,17 @@ static int crypto_ccm_base_create(struct crypto_template *tmpl,
+ 				  struct rtattr **tb)
+ {
+ 	const char *ctr_name;
+-	const char *cipher_name;
+-	char full_name[CRYPTO_MAX_ALG_NAME];
++	const char *mac_name;
+ 
+ 	ctr_name = crypto_attr_alg_name(tb[1]);
+ 	if (IS_ERR(ctr_name))
+ 		return PTR_ERR(ctr_name);
+ 
+-	cipher_name = crypto_attr_alg_name(tb[2]);
+-	if (IS_ERR(cipher_name))
+-		return PTR_ERR(cipher_name);
+-
+-	if (snprintf(full_name, CRYPTO_MAX_ALG_NAME, "ccm_base(%s,%s)",
+-		     ctr_name, cipher_name) >= CRYPTO_MAX_ALG_NAME)
+-		return -ENAMETOOLONG;
++	mac_name = crypto_attr_alg_name(tb[2]);
++	if (IS_ERR(mac_name))
++		return PTR_ERR(mac_name);
+ 
+-	return crypto_ccm_create_common(tmpl, tb, full_name, ctr_name,
+-					cipher_name);
++	return crypto_ccm_create_common(tmpl, tb, ctr_name, mac_name);
+ }
+ 
+ static struct crypto_template crypto_ccm_base_tmpl = {
+diff --git a/crypto/chacha20poly1305.c b/crypto/chacha20poly1305.c
+index fef11446ab1b..af4c9450063d 100644
+--- a/crypto/chacha20poly1305.c
++++ b/crypto/chacha20poly1305.c
+@@ -645,8 +645,8 @@ static int chachapoly_create(struct crypto_template *tmpl, struct rtattr **tb,
+ 
+ 	err = -ENAMETOOLONG;
+ 	if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
+-		     "%s(%s,%s)", name, chacha_name,
+-		     poly_name) >= CRYPTO_MAX_ALG_NAME)
++		     "%s(%s,%s)", name, chacha->base.cra_name,
++		     poly->cra_name) >= CRYPTO_MAX_ALG_NAME)
+ 		goto out_drop_chacha;
+ 	if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
+ 		     "%s(%s,%s)", name, chacha->base.cra_driver_name,
+diff --git a/crypto/chacha_generic.c b/crypto/chacha_generic.c
+index 35b583101f4f..90ec0ec1b4f7 100644
+--- a/crypto/chacha_generic.c
++++ b/crypto/chacha_generic.c
+@@ -52,7 +52,7 @@ static int chacha_stream_xor(struct skcipher_request *req,
+ 		unsigned int nbytes = walk.nbytes;
+ 
+ 		if (nbytes < walk.total)
+-			nbytes = round_down(nbytes, walk.stride);
++			nbytes = round_down(nbytes, CHACHA_BLOCK_SIZE);
+ 
+ 		chacha_docrypt(state, walk.dst.virt.addr, walk.src.virt.addr,
+ 			       nbytes, ctx->nrounds);
+diff --git a/crypto/crct10dif_generic.c b/crypto/crct10dif_generic.c
+index 8e94e29dc6fc..d08048ae5552 100644
+--- a/crypto/crct10dif_generic.c
++++ b/crypto/crct10dif_generic.c
+@@ -65,10 +65,9 @@ static int chksum_final(struct shash_desc *desc, u8 *out)
+ 	return 0;
+ }
+ 
+-static int __chksum_finup(__u16 *crcp, const u8 *data, unsigned int len,
+-			u8 *out)
++static int __chksum_finup(__u16 crc, const u8 *data, unsigned int len, u8 *out)
+ {
+-	*(__u16 *)out = crc_t10dif_generic(*crcp, data, len);
++	*(__u16 *)out = crc_t10dif_generic(crc, data, len);
+ 	return 0;
+ }
+ 
+@@ -77,15 +76,13 @@ static int chksum_finup(struct shash_desc *desc, const u8 *data,
+ {
+ 	struct chksum_desc_ctx *ctx = shash_desc_ctx(desc);
+ 
+-	return __chksum_finup(&ctx->crc, data, len, out);
++	return __chksum_finup(ctx->crc, data, len, out);
+ }
+ 
+ static int chksum_digest(struct shash_desc *desc, const u8 *data,
+ 			 unsigned int length, u8 *out)
+ {
+-	struct chksum_desc_ctx *ctx = shash_desc_ctx(desc);
+-
+-	return __chksum_finup(&ctx->crc, data, length, out);
++	return __chksum_finup(0, data, length, out);
+ }
+ 
+ static struct shash_alg alg = {
+diff --git a/crypto/gcm.c b/crypto/gcm.c
+index e438492db2ca..cfad67042427 100644
+--- a/crypto/gcm.c
++++ b/crypto/gcm.c
+@@ -597,7 +597,6 @@ static void crypto_gcm_free(struct aead_instance *inst)
+ 
+ static int crypto_gcm_create_common(struct crypto_template *tmpl,
+ 				    struct rtattr **tb,
+-				    const char *full_name,
+ 				    const char *ctr_name,
+ 				    const char *ghash_name)
+ {
+@@ -638,7 +637,8 @@ static int crypto_gcm_create_common(struct crypto_template *tmpl,
+ 		goto err_free_inst;
+ 
+ 	err = -EINVAL;
+-	if (ghash->digestsize != 16)
++	if (strcmp(ghash->base.cra_name, "ghash") != 0 ||
++	    ghash->digestsize != 16)
+ 		goto err_drop_ghash;
+ 
+ 	crypto_set_skcipher_spawn(&ctx->ctr, aead_crypto_instance(inst));
+@@ -650,24 +650,24 @@ static int crypto_gcm_create_common(struct crypto_template *tmpl,
+ 
+ 	ctr = crypto_spawn_skcipher_alg(&ctx->ctr);
+ 
+-	/* We only support 16-byte blocks. */
++	/* The skcipher algorithm must be CTR mode, using 16-byte blocks. */
+ 	err = -EINVAL;
+-	if (crypto_skcipher_alg_ivsize(ctr) != 16)
++	if (strncmp(ctr->base.cra_name, "ctr(", 4) != 0 ||
++	    crypto_skcipher_alg_ivsize(ctr) != 16 ||
++	    ctr->base.cra_blocksize != 1)
+ 		goto out_put_ctr;
+ 
+-	/* Not a stream cipher? */
+-	if (ctr->base.cra_blocksize != 1)
++	err = -ENAMETOOLONG;
++	if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
++		     "gcm(%s", ctr->base.cra_name + 4) >= CRYPTO_MAX_ALG_NAME)
+ 		goto out_put_ctr;
+ 
+-	err = -ENAMETOOLONG;
+ 	if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
+ 		     "gcm_base(%s,%s)", ctr->base.cra_driver_name,
+ 		     ghash_alg->cra_driver_name) >=
+ 	    CRYPTO_MAX_ALG_NAME)
+ 		goto out_put_ctr;
+ 
+-	memcpy(inst->alg.base.cra_name, full_name, CRYPTO_MAX_ALG_NAME);
+-
+ 	inst->alg.base.cra_flags = (ghash->base.cra_flags |
+ 				    ctr->base.cra_flags) & CRYPTO_ALG_ASYNC;
+ 	inst->alg.base.cra_priority = (ghash->base.cra_priority +
+@@ -709,7 +709,6 @@ static int crypto_gcm_create(struct crypto_template *tmpl, struct rtattr **tb)
+ {
+ 	const char *cipher_name;
+ 	char ctr_name[CRYPTO_MAX_ALG_NAME];
+-	char full_name[CRYPTO_MAX_ALG_NAME];
+ 
+ 	cipher_name = crypto_attr_alg_name(tb[1]);
+ 	if (IS_ERR(cipher_name))
+@@ -719,12 +718,7 @@ static int crypto_gcm_create(struct crypto_template *tmpl, struct rtattr **tb)
+ 	    CRYPTO_MAX_ALG_NAME)
+ 		return -ENAMETOOLONG;
+ 
+-	if (snprintf(full_name, CRYPTO_MAX_ALG_NAME, "gcm(%s)", cipher_name) >=
+-	    CRYPTO_MAX_ALG_NAME)
+-		return -ENAMETOOLONG;
+-
+-	return crypto_gcm_create_common(tmpl, tb, full_name,
+-					ctr_name, "ghash");
++	return crypto_gcm_create_common(tmpl, tb, ctr_name, "ghash");
+ }
+ 
+ static struct crypto_template crypto_gcm_tmpl = {
+@@ -738,7 +732,6 @@ static int crypto_gcm_base_create(struct crypto_template *tmpl,
+ {
+ 	const char *ctr_name;
+ 	const char *ghash_name;
+-	char full_name[CRYPTO_MAX_ALG_NAME];
+ 
+ 	ctr_name = crypto_attr_alg_name(tb[1]);
+ 	if (IS_ERR(ctr_name))
+@@ -748,12 +741,7 @@ static int crypto_gcm_base_create(struct crypto_template *tmpl,
+ 	if (IS_ERR(ghash_name))
+ 		return PTR_ERR(ghash_name);
+ 
+-	if (snprintf(full_name, CRYPTO_MAX_ALG_NAME, "gcm_base(%s,%s)",
+-		     ctr_name, ghash_name) >= CRYPTO_MAX_ALG_NAME)
+-		return -ENAMETOOLONG;
+-
+-	return crypto_gcm_create_common(tmpl, tb, full_name,
+-					ctr_name, ghash_name);
++	return crypto_gcm_create_common(tmpl, tb, ctr_name, ghash_name);
+ }
+ 
+ static struct crypto_template crypto_gcm_base_tmpl = {
+diff --git a/crypto/lrw.c b/crypto/lrw.c
+index 08a0e458bc3e..cc5c89246193 100644
+--- a/crypto/lrw.c
++++ b/crypto/lrw.c
+@@ -162,8 +162,10 @@ static int xor_tweak(struct skcipher_request *req, bool second_pass)
+ 	}
+ 
+ 	err = skcipher_walk_virt(&w, req, false);
+-	iv = (__be32 *)w.iv;
++	if (err)
++		return err;
+ 
++	iv = (__be32 *)w.iv;
+ 	counter[0] = be32_to_cpu(iv[3]);
+ 	counter[1] = be32_to_cpu(iv[2]);
+ 	counter[2] = be32_to_cpu(iv[1]);
+diff --git a/crypto/salsa20_generic.c b/crypto/salsa20_generic.c
+index 00fce32ae17a..1d7ad0256fd3 100644
+--- a/crypto/salsa20_generic.c
++++ b/crypto/salsa20_generic.c
+@@ -161,7 +161,7 @@ static int salsa20_crypt(struct skcipher_request *req)
+ 
+ 	err = skcipher_walk_virt(&walk, req, false);
+ 
+-	salsa20_init(state, ctx, walk.iv);
++	salsa20_init(state, ctx, req->iv);
+ 
+ 	while (walk.nbytes > 0) {
+ 		unsigned int nbytes = walk.nbytes;
+diff --git a/crypto/skcipher.c b/crypto/skcipher.c
+index de09ff60991e..7063135d993b 100644
+--- a/crypto/skcipher.c
++++ b/crypto/skcipher.c
+@@ -131,8 +131,13 @@ unmap_src:
+ 		memcpy(walk->dst.virt.addr, walk->page, n);
+ 		skcipher_unmap_dst(walk);
+ 	} else if (unlikely(walk->flags & SKCIPHER_WALK_SLOW)) {
+-		if (WARN_ON(err)) {
+-			/* unexpected case; didn't process all bytes */
++		if (err) {
++			/*
++			 * Didn't process all bytes.  Either the algorithm is
++			 * broken, or this was the last step and it turned out
++			 * the message wasn't evenly divisible into blocks but
++			 * the algorithm requires it.
++			 */
+ 			err = -EINVAL;
+ 			goto finish;
+ 		}
+diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
+index 403c4ff15349..e52f1238d2d6 100644
+--- a/drivers/acpi/sleep.c
++++ b/drivers/acpi/sleep.c
+@@ -977,6 +977,8 @@ static int acpi_s2idle_prepare(void)
+ 	if (acpi_sci_irq_valid())
+ 		enable_irq_wake(acpi_sci_irq);
+ 
++	acpi_enable_wakeup_devices(ACPI_STATE_S0);
++
+ 	/* Change the configuration of GPEs to avoid spurious wakeup. */
+ 	acpi_enable_all_wakeup_gpes();
+ 	acpi_os_wait_events_complete();
+@@ -1027,6 +1029,8 @@ static void acpi_s2idle_restore(void)
+ {
+ 	acpi_enable_all_runtime_gpes();
+ 
++	acpi_disable_wakeup_devices(ACPI_STATE_S0);
++
+ 	if (acpi_sci_irq_valid())
+ 		disable_irq_wake(acpi_sci_irq);
+ 
+diff --git a/drivers/char/ipmi/ipmi_ssif.c b/drivers/char/ipmi/ipmi_ssif.c
+index b7a1ae2afaea..05010b1d04e9 100644
+--- a/drivers/char/ipmi/ipmi_ssif.c
++++ b/drivers/char/ipmi/ipmi_ssif.c
+@@ -690,12 +690,16 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
+ 			/* End of read */
+ 			len = ssif_info->multi_len;
+ 			data = ssif_info->data;
+-		} else if (blocknum != ssif_info->multi_pos) {
++		} else if (blocknum + 1 != ssif_info->multi_pos) {
+ 			/*
+ 			 * Out of sequence block, just abort.  Block
+ 			 * numbers start at zero for the second block,
+ 			 * but multi_pos starts at one, so the +1.
+ 			 */
++			if (ssif_info->ssif_debug & SSIF_DEBUG_MSG)
++				dev_dbg(&ssif_info->client->dev,
++					"Received message out of sequence, expected %u, got %u\n",
++					ssif_info->multi_pos - 1, blocknum);
+ 			result = -EIO;
+ 		} else {
+ 			ssif_inc_stat(ssif_info, received_message_parts);
+diff --git a/drivers/crypto/amcc/crypto4xx_alg.c b/drivers/crypto/amcc/crypto4xx_alg.c
+index 4092c2aad8e2..3458c5a085d9 100644
+--- a/drivers/crypto/amcc/crypto4xx_alg.c
++++ b/drivers/crypto/amcc/crypto4xx_alg.c
+@@ -141,9 +141,10 @@ static int crypto4xx_setkey_aes(struct crypto_skcipher *cipher,
+ 	/* Setup SA */
+ 	sa = ctx->sa_in;
+ 
+-	set_dynamic_sa_command_0(sa, SA_NOT_SAVE_HASH, (cm == CRYPTO_MODE_CBC ?
+-				 SA_SAVE_IV : SA_NOT_SAVE_IV),
+-				 SA_LOAD_HASH_FROM_SA, SA_LOAD_IV_FROM_STATE,
++	set_dynamic_sa_command_0(sa, SA_NOT_SAVE_HASH, (cm == CRYPTO_MODE_ECB ?
++				 SA_NOT_SAVE_IV : SA_SAVE_IV),
++				 SA_NOT_LOAD_HASH, (cm == CRYPTO_MODE_ECB ?
++				 SA_LOAD_IV_FROM_SA : SA_LOAD_IV_FROM_STATE),
+ 				 SA_NO_HEADER_PROC, SA_HASH_ALG_NULL,
+ 				 SA_CIPHER_ALG_AES, SA_PAD_TYPE_ZERO,
+ 				 SA_OP_GROUP_BASIC, SA_OPCODE_DECRYPT,
+@@ -162,6 +163,11 @@ static int crypto4xx_setkey_aes(struct crypto_skcipher *cipher,
+ 	memcpy(ctx->sa_out, ctx->sa_in, ctx->sa_len * 4);
+ 	sa = ctx->sa_out;
+ 	sa->sa_command_0.bf.dir = DIR_OUTBOUND;
++	/*
++	 * SA_OPCODE_ENCRYPT is the same value as SA_OPCODE_DECRYPT.
++	 * it's the DIR_(IN|OUT)BOUND that matters
++	 */
++	sa->sa_command_0.bf.opcode = SA_OPCODE_ENCRYPT;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/crypto/amcc/crypto4xx_core.c b/drivers/crypto/amcc/crypto4xx_core.c
+index acf79889d903..a76adb8b6d55 100644
+--- a/drivers/crypto/amcc/crypto4xx_core.c
++++ b/drivers/crypto/amcc/crypto4xx_core.c
+@@ -712,7 +712,23 @@ int crypto4xx_build_pd(struct crypto_async_request *req,
+ 	size_t offset_to_sr_ptr;
+ 	u32 gd_idx = 0;
+ 	int tmp;
+-	bool is_busy;
++	bool is_busy, force_sd;
++
++	/*
++	 * There's a very subtile/disguised "bug" in the hardware that
++	 * gets indirectly mentioned in 18.1.3.5 Encryption/Decryption
++	 * of the hardware spec:
++	 * *drum roll* the AES/(T)DES OFB and CFB modes are listed as
++	 * operation modes for >>> "Block ciphers" <<<.
++	 *
++	 * To workaround this issue and stop the hardware from causing
++	 * "overran dst buffer" on crypttexts that are not a multiple
++	 * of 16 (AES_BLOCK_SIZE), we force the driver to use the
++	 * scatter buffers.
++	 */
++	force_sd = (req_sa->sa_command_1.bf.crypto_mode9_8 == CRYPTO_MODE_CFB
++		|| req_sa->sa_command_1.bf.crypto_mode9_8 == CRYPTO_MODE_OFB)
++		&& (datalen % AES_BLOCK_SIZE);
+ 
+ 	/* figure how many gd are needed */
+ 	tmp = sg_nents_for_len(src, assoclen + datalen);
+@@ -730,7 +746,7 @@ int crypto4xx_build_pd(struct crypto_async_request *req,
+ 	}
+ 
+ 	/* figure how many sd are needed */
+-	if (sg_is_last(dst)) {
++	if (sg_is_last(dst) && force_sd == false) {
+ 		num_sd = 0;
+ 	} else {
+ 		if (datalen > PPC4XX_SD_BUFFER_SIZE) {
+@@ -805,9 +821,10 @@ int crypto4xx_build_pd(struct crypto_async_request *req,
+ 	pd->sa_len = sa_len;
+ 
+ 	pd_uinfo = &dev->pdr_uinfo[pd_entry];
+-	pd_uinfo->async_req = req;
+ 	pd_uinfo->num_gd = num_gd;
+ 	pd_uinfo->num_sd = num_sd;
++	pd_uinfo->dest_va = dst;
++	pd_uinfo->async_req = req;
+ 
+ 	if (iv_len)
+ 		memcpy(pd_uinfo->sr_va->save_iv, iv, iv_len);
+@@ -826,7 +843,6 @@ int crypto4xx_build_pd(struct crypto_async_request *req,
+ 		/* get first gd we are going to use */
+ 		gd_idx = fst_gd;
+ 		pd_uinfo->first_gd = fst_gd;
+-		pd_uinfo->num_gd = num_gd;
+ 		gd = crypto4xx_get_gdp(dev, &gd_dma, gd_idx);
+ 		pd->src = gd_dma;
+ 		/* enable gather */
+@@ -863,17 +879,14 @@ int crypto4xx_build_pd(struct crypto_async_request *req,
+ 		 * Indicate gather array is not used
+ 		 */
+ 		pd_uinfo->first_gd = 0xffffffff;
+-		pd_uinfo->num_gd = 0;
+ 	}
+-	if (sg_is_last(dst)) {
++	if (!num_sd) {
+ 		/*
+ 		 * we know application give us dst a whole piece of memory
+ 		 * no need to use scatter ring.
+ 		 */
+ 		pd_uinfo->using_sd = 0;
+ 		pd_uinfo->first_sd = 0xffffffff;
+-		pd_uinfo->num_sd = 0;
+-		pd_uinfo->dest_va = dst;
+ 		sa->sa_command_0.bf.scatter = 0;
+ 		pd->dest = (u32)dma_map_page(dev->core_dev->device,
+ 					     sg_page(dst), dst->offset,
+@@ -887,9 +900,7 @@ int crypto4xx_build_pd(struct crypto_async_request *req,
+ 		nbytes = datalen;
+ 		sa->sa_command_0.bf.scatter = 1;
+ 		pd_uinfo->using_sd = 1;
+-		pd_uinfo->dest_va = dst;
+ 		pd_uinfo->first_sd = fst_sd;
+-		pd_uinfo->num_sd = num_sd;
+ 		sd = crypto4xx_get_sdp(dev, &sd_dma, sd_idx);
+ 		pd->dest = sd_dma;
+ 		/* setup scatter descriptor */
+diff --git a/drivers/crypto/caam/caamalg_qi2.c b/drivers/crypto/caam/caamalg_qi2.c
+index 425d5d974613..86df04812a1b 100644
+--- a/drivers/crypto/caam/caamalg_qi2.c
++++ b/drivers/crypto/caam/caamalg_qi2.c
+@@ -2855,6 +2855,7 @@ struct caam_hash_state {
+ 	struct caam_request caam_req;
+ 	dma_addr_t buf_dma;
+ 	dma_addr_t ctx_dma;
++	int ctx_dma_len;
+ 	u8 buf_0[CAAM_MAX_HASH_BLOCK_SIZE] ____cacheline_aligned;
+ 	int buflen_0;
+ 	u8 buf_1[CAAM_MAX_HASH_BLOCK_SIZE] ____cacheline_aligned;
+@@ -2928,6 +2929,7 @@ static inline int ctx_map_to_qm_sg(struct device *dev,
+ 				   struct caam_hash_state *state, int ctx_len,
+ 				   struct dpaa2_sg_entry *qm_sg, u32 flag)
+ {
++	state->ctx_dma_len = ctx_len;
+ 	state->ctx_dma = dma_map_single(dev, state->caam_ctx, ctx_len, flag);
+ 	if (dma_mapping_error(dev, state->ctx_dma)) {
+ 		dev_err(dev, "unable to map ctx\n");
+@@ -3019,13 +3021,13 @@ static void split_key_sh_done(void *cbk_ctx, u32 err)
+ }
+ 
+ /* Digest hash size if it is too large */
+-static int hash_digest_key(struct caam_hash_ctx *ctx, const u8 *key_in,
+-			   u32 *keylen, u8 *key_out, u32 digestsize)
++static int hash_digest_key(struct caam_hash_ctx *ctx, u32 *keylen, u8 *key,
++			   u32 digestsize)
+ {
+ 	struct caam_request *req_ctx;
+ 	u32 *desc;
+ 	struct split_key_sh_result result;
+-	dma_addr_t src_dma, dst_dma;
++	dma_addr_t key_dma;
+ 	struct caam_flc *flc;
+ 	dma_addr_t flc_dma;
+ 	int ret = -ENOMEM;
+@@ -3042,17 +3044,10 @@ static int hash_digest_key(struct caam_hash_ctx *ctx, const u8 *key_in,
+ 	if (!flc)
+ 		goto err_flc;
+ 
+-	src_dma = dma_map_single(ctx->dev, (void *)key_in, *keylen,
+-				 DMA_TO_DEVICE);
+-	if (dma_mapping_error(ctx->dev, src_dma)) {
+-		dev_err(ctx->dev, "unable to map key input memory\n");
+-		goto err_src_dma;
+-	}
+-	dst_dma = dma_map_single(ctx->dev, (void *)key_out, digestsize,
+-				 DMA_FROM_DEVICE);
+-	if (dma_mapping_error(ctx->dev, dst_dma)) {
+-		dev_err(ctx->dev, "unable to map key output memory\n");
+-		goto err_dst_dma;
++	key_dma = dma_map_single(ctx->dev, key, *keylen, DMA_BIDIRECTIONAL);
++	if (dma_mapping_error(ctx->dev, key_dma)) {
++		dev_err(ctx->dev, "unable to map key memory\n");
++		goto err_key_dma;
+ 	}
+ 
+ 	desc = flc->sh_desc;
+@@ -3077,14 +3072,14 @@ static int hash_digest_key(struct caam_hash_ctx *ctx, const u8 *key_in,
+ 
+ 	dpaa2_fl_set_final(in_fle, true);
+ 	dpaa2_fl_set_format(in_fle, dpaa2_fl_single);
+-	dpaa2_fl_set_addr(in_fle, src_dma);
++	dpaa2_fl_set_addr(in_fle, key_dma);
+ 	dpaa2_fl_set_len(in_fle, *keylen);
+ 	dpaa2_fl_set_format(out_fle, dpaa2_fl_single);
+-	dpaa2_fl_set_addr(out_fle, dst_dma);
++	dpaa2_fl_set_addr(out_fle, key_dma);
+ 	dpaa2_fl_set_len(out_fle, digestsize);
+ 
+ 	print_hex_dump_debug("key_in@" __stringify(__LINE__)": ",
+-			     DUMP_PREFIX_ADDRESS, 16, 4, key_in, *keylen, 1);
++			     DUMP_PREFIX_ADDRESS, 16, 4, key, *keylen, 1);
+ 	print_hex_dump_debug("shdesc@" __stringify(__LINE__)": ",
+ 			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
+ 			     1);
+@@ -3104,17 +3099,15 @@ static int hash_digest_key(struct caam_hash_ctx *ctx, const u8 *key_in,
+ 		wait_for_completion(&result.completion);
+ 		ret = result.err;
+ 		print_hex_dump_debug("digested key@" __stringify(__LINE__)": ",
+-				     DUMP_PREFIX_ADDRESS, 16, 4, key_in,
++				     DUMP_PREFIX_ADDRESS, 16, 4, key,
+ 				     digestsize, 1);
+ 	}
+ 
+ 	dma_unmap_single(ctx->dev, flc_dma, sizeof(flc->flc) + desc_bytes(desc),
+ 			 DMA_TO_DEVICE);
+ err_flc_dma:
+-	dma_unmap_single(ctx->dev, dst_dma, digestsize, DMA_FROM_DEVICE);
+-err_dst_dma:
+-	dma_unmap_single(ctx->dev, src_dma, *keylen, DMA_TO_DEVICE);
+-err_src_dma:
++	dma_unmap_single(ctx->dev, key_dma, *keylen, DMA_BIDIRECTIONAL);
++err_key_dma:
+ 	kfree(flc);
+ err_flc:
+ 	kfree(req_ctx);
+@@ -3136,12 +3129,10 @@ static int ahash_setkey(struct crypto_ahash *ahash, const u8 *key,
+ 	dev_dbg(ctx->dev, "keylen %d blocksize %d\n", keylen, blocksize);
+ 
+ 	if (keylen > blocksize) {
+-		hashed_key = kmalloc_array(digestsize, sizeof(*hashed_key),
+-					   GFP_KERNEL | GFP_DMA);
++		hashed_key = kmemdup(key, keylen, GFP_KERNEL | GFP_DMA);
+ 		if (!hashed_key)
+ 			return -ENOMEM;
+-		ret = hash_digest_key(ctx, key, &keylen, hashed_key,
+-				      digestsize);
++		ret = hash_digest_key(ctx, &keylen, hashed_key, digestsize);
+ 		if (ret)
+ 			goto bad_free_key;
+ 		key = hashed_key;
+@@ -3166,14 +3157,12 @@ bad_free_key:
+ }
+ 
+ static inline void ahash_unmap(struct device *dev, struct ahash_edesc *edesc,
+-			       struct ahash_request *req, int dst_len)
++			       struct ahash_request *req)
+ {
+ 	struct caam_hash_state *state = ahash_request_ctx(req);
+ 
+ 	if (edesc->src_nents)
+ 		dma_unmap_sg(dev, req->src, edesc->src_nents, DMA_TO_DEVICE);
+-	if (edesc->dst_dma)
+-		dma_unmap_single(dev, edesc->dst_dma, dst_len, DMA_FROM_DEVICE);
+ 
+ 	if (edesc->qm_sg_bytes)
+ 		dma_unmap_single(dev, edesc->qm_sg_dma, edesc->qm_sg_bytes,
+@@ -3188,18 +3177,15 @@ static inline void ahash_unmap(struct device *dev, struct ahash_edesc *edesc,
+ 
+ static inline void ahash_unmap_ctx(struct device *dev,
+ 				   struct ahash_edesc *edesc,
+-				   struct ahash_request *req, int dst_len,
+-				   u32 flag)
++				   struct ahash_request *req, u32 flag)
+ {
+-	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
+-	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+ 	struct caam_hash_state *state = ahash_request_ctx(req);
+ 
+ 	if (state->ctx_dma) {
+-		dma_unmap_single(dev, state->ctx_dma, ctx->ctx_len, flag);
++		dma_unmap_single(dev, state->ctx_dma, state->ctx_dma_len, flag);
+ 		state->ctx_dma = 0;
+ 	}
+-	ahash_unmap(dev, edesc, req, dst_len);
++	ahash_unmap(dev, edesc, req);
+ }
+ 
+ static void ahash_done(void *cbk_ctx, u32 status)
+@@ -3220,16 +3206,13 @@ static void ahash_done(void *cbk_ctx, u32 status)
+ 		ecode = -EIO;
+ 	}
+ 
+-	ahash_unmap(ctx->dev, edesc, req, digestsize);
++	ahash_unmap_ctx(ctx->dev, edesc, req, DMA_FROM_DEVICE);
++	memcpy(req->result, state->caam_ctx, digestsize);
+ 	qi_cache_free(edesc);
+ 
+ 	print_hex_dump_debug("ctx@" __stringify(__LINE__)": ",
+ 			     DUMP_PREFIX_ADDRESS, 16, 4, state->caam_ctx,
+ 			     ctx->ctx_len, 1);
+-	if (req->result)
+-		print_hex_dump_debug("result@" __stringify(__LINE__)": ",
+-				     DUMP_PREFIX_ADDRESS, 16, 4, req->result,
+-				     digestsize, 1);
+ 
+ 	req->base.complete(&req->base, ecode);
+ }
+@@ -3251,7 +3234,7 @@ static void ahash_done_bi(void *cbk_ctx, u32 status)
+ 		ecode = -EIO;
+ 	}
+ 
+-	ahash_unmap_ctx(ctx->dev, edesc, req, ctx->ctx_len, DMA_BIDIRECTIONAL);
++	ahash_unmap_ctx(ctx->dev, edesc, req, DMA_BIDIRECTIONAL);
+ 	switch_buf(state);
+ 	qi_cache_free(edesc);
+ 
+@@ -3284,16 +3267,13 @@ static void ahash_done_ctx_src(void *cbk_ctx, u32 status)
+ 		ecode = -EIO;
+ 	}
+ 
+-	ahash_unmap_ctx(ctx->dev, edesc, req, digestsize, DMA_TO_DEVICE);
++	ahash_unmap_ctx(ctx->dev, edesc, req, DMA_BIDIRECTIONAL);
++	memcpy(req->result, state->caam_ctx, digestsize);
+ 	qi_cache_free(edesc);
+ 
+ 	print_hex_dump_debug("ctx@" __stringify(__LINE__)": ",
+ 			     DUMP_PREFIX_ADDRESS, 16, 4, state->caam_ctx,
+ 			     ctx->ctx_len, 1);
+-	if (req->result)
+-		print_hex_dump_debug("result@" __stringify(__LINE__)": ",
+-				     DUMP_PREFIX_ADDRESS, 16, 4, req->result,
+-				     digestsize, 1);
+ 
+ 	req->base.complete(&req->base, ecode);
+ }
+@@ -3315,7 +3295,7 @@ static void ahash_done_ctx_dst(void *cbk_ctx, u32 status)
+ 		ecode = -EIO;
+ 	}
+ 
+-	ahash_unmap_ctx(ctx->dev, edesc, req, ctx->ctx_len, DMA_FROM_DEVICE);
++	ahash_unmap_ctx(ctx->dev, edesc, req, DMA_FROM_DEVICE);
+ 	switch_buf(state);
+ 	qi_cache_free(edesc);
+ 
+@@ -3453,7 +3433,7 @@ static int ahash_update_ctx(struct ahash_request *req)
+ 
+ 	return ret;
+ unmap_ctx:
+-	ahash_unmap_ctx(ctx->dev, edesc, req, ctx->ctx_len, DMA_BIDIRECTIONAL);
++	ahash_unmap_ctx(ctx->dev, edesc, req, DMA_BIDIRECTIONAL);
+ 	qi_cache_free(edesc);
+ 	return ret;
+ }
+@@ -3485,7 +3465,7 @@ static int ahash_final_ctx(struct ahash_request *req)
+ 	sg_table = &edesc->sgt[0];
+ 
+ 	ret = ctx_map_to_qm_sg(ctx->dev, state, ctx->ctx_len, sg_table,
+-			       DMA_TO_DEVICE);
++			       DMA_BIDIRECTIONAL);
+ 	if (ret)
+ 		goto unmap_ctx;
+ 
+@@ -3504,22 +3484,13 @@ static int ahash_final_ctx(struct ahash_request *req)
+ 	}
+ 	edesc->qm_sg_bytes = qm_sg_bytes;
+ 
+-	edesc->dst_dma = dma_map_single(ctx->dev, req->result, digestsize,
+-					DMA_FROM_DEVICE);
+-	if (dma_mapping_error(ctx->dev, edesc->dst_dma)) {
+-		dev_err(ctx->dev, "unable to map dst\n");
+-		edesc->dst_dma = 0;
+-		ret = -ENOMEM;
+-		goto unmap_ctx;
+-	}
+-
+ 	memset(&req_ctx->fd_flt, 0, sizeof(req_ctx->fd_flt));
+ 	dpaa2_fl_set_final(in_fle, true);
+ 	dpaa2_fl_set_format(in_fle, dpaa2_fl_sg);
+ 	dpaa2_fl_set_addr(in_fle, edesc->qm_sg_dma);
+ 	dpaa2_fl_set_len(in_fle, ctx->ctx_len + buflen);
+ 	dpaa2_fl_set_format(out_fle, dpaa2_fl_single);
+-	dpaa2_fl_set_addr(out_fle, edesc->dst_dma);
++	dpaa2_fl_set_addr(out_fle, state->ctx_dma);
+ 	dpaa2_fl_set_len(out_fle, digestsize);
+ 
+ 	req_ctx->flc = &ctx->flc[FINALIZE];
+@@ -3534,7 +3505,7 @@ static int ahash_final_ctx(struct ahash_request *req)
+ 		return ret;
+ 
+ unmap_ctx:
+-	ahash_unmap_ctx(ctx->dev, edesc, req, digestsize, DMA_FROM_DEVICE);
++	ahash_unmap_ctx(ctx->dev, edesc, req, DMA_BIDIRECTIONAL);
+ 	qi_cache_free(edesc);
+ 	return ret;
+ }
+@@ -3587,7 +3558,7 @@ static int ahash_finup_ctx(struct ahash_request *req)
+ 	sg_table = &edesc->sgt[0];
+ 
+ 	ret = ctx_map_to_qm_sg(ctx->dev, state, ctx->ctx_len, sg_table,
+-			       DMA_TO_DEVICE);
++			       DMA_BIDIRECTIONAL);
+ 	if (ret)
+ 		goto unmap_ctx;
+ 
+@@ -3606,22 +3577,13 @@ static int ahash_finup_ctx(struct ahash_request *req)
+ 	}
+ 	edesc->qm_sg_bytes = qm_sg_bytes;
+ 
+-	edesc->dst_dma = dma_map_single(ctx->dev, req->result, digestsize,
+-					DMA_FROM_DEVICE);
+-	if (dma_mapping_error(ctx->dev, edesc->dst_dma)) {
+-		dev_err(ctx->dev, "unable to map dst\n");
+-		edesc->dst_dma = 0;
+-		ret = -ENOMEM;
+-		goto unmap_ctx;
+-	}
+-
+ 	memset(&req_ctx->fd_flt, 0, sizeof(req_ctx->fd_flt));
+ 	dpaa2_fl_set_final(in_fle, true);
+ 	dpaa2_fl_set_format(in_fle, dpaa2_fl_sg);
+ 	dpaa2_fl_set_addr(in_fle, edesc->qm_sg_dma);
+ 	dpaa2_fl_set_len(in_fle, ctx->ctx_len + buflen + req->nbytes);
+ 	dpaa2_fl_set_format(out_fle, dpaa2_fl_single);
+-	dpaa2_fl_set_addr(out_fle, edesc->dst_dma);
++	dpaa2_fl_set_addr(out_fle, state->ctx_dma);
+ 	dpaa2_fl_set_len(out_fle, digestsize);
+ 
+ 	req_ctx->flc = &ctx->flc[FINALIZE];
+@@ -3636,7 +3598,7 @@ static int ahash_finup_ctx(struct ahash_request *req)
+ 		return ret;
+ 
+ unmap_ctx:
+-	ahash_unmap_ctx(ctx->dev, edesc, req, digestsize, DMA_FROM_DEVICE);
++	ahash_unmap_ctx(ctx->dev, edesc, req, DMA_BIDIRECTIONAL);
+ 	qi_cache_free(edesc);
+ 	return ret;
+ }
+@@ -3705,18 +3667,19 @@ static int ahash_digest(struct ahash_request *req)
+ 		dpaa2_fl_set_addr(in_fle, sg_dma_address(req->src));
+ 	}
+ 
+-	edesc->dst_dma = dma_map_single(ctx->dev, req->result, digestsize,
++	state->ctx_dma_len = digestsize;
++	state->ctx_dma = dma_map_single(ctx->dev, state->caam_ctx, digestsize,
+ 					DMA_FROM_DEVICE);
+-	if (dma_mapping_error(ctx->dev, edesc->dst_dma)) {
+-		dev_err(ctx->dev, "unable to map dst\n");
+-		edesc->dst_dma = 0;
++	if (dma_mapping_error(ctx->dev, state->ctx_dma)) {
++		dev_err(ctx->dev, "unable to map ctx\n");
++		state->ctx_dma = 0;
+ 		goto unmap;
+ 	}
+ 
+ 	dpaa2_fl_set_final(in_fle, true);
+ 	dpaa2_fl_set_len(in_fle, req->nbytes);
+ 	dpaa2_fl_set_format(out_fle, dpaa2_fl_single);
+-	dpaa2_fl_set_addr(out_fle, edesc->dst_dma);
++	dpaa2_fl_set_addr(out_fle, state->ctx_dma);
+ 	dpaa2_fl_set_len(out_fle, digestsize);
+ 
+ 	req_ctx->flc = &ctx->flc[DIGEST];
+@@ -3730,7 +3693,7 @@ static int ahash_digest(struct ahash_request *req)
+ 		return ret;
+ 
+ unmap:
+-	ahash_unmap(ctx->dev, edesc, req, digestsize);
++	ahash_unmap_ctx(ctx->dev, edesc, req, DMA_FROM_DEVICE);
+ 	qi_cache_free(edesc);
+ 	return ret;
+ }
+@@ -3756,27 +3719,39 @@ static int ahash_final_no_ctx(struct ahash_request *req)
+ 	if (!edesc)
+ 		return ret;
+ 
+-	state->buf_dma = dma_map_single(ctx->dev, buf, buflen, DMA_TO_DEVICE);
+-	if (dma_mapping_error(ctx->dev, state->buf_dma)) {
+-		dev_err(ctx->dev, "unable to map src\n");
+-		goto unmap;
++	if (buflen) {
++		state->buf_dma = dma_map_single(ctx->dev, buf, buflen,
++						DMA_TO_DEVICE);
++		if (dma_mapping_error(ctx->dev, state->buf_dma)) {
++			dev_err(ctx->dev, "unable to map src\n");
++			goto unmap;
++		}
+ 	}
+ 
+-	edesc->dst_dma = dma_map_single(ctx->dev, req->result, digestsize,
++	state->ctx_dma_len = digestsize;
++	state->ctx_dma = dma_map_single(ctx->dev, state->caam_ctx, digestsize,
+ 					DMA_FROM_DEVICE);
+-	if (dma_mapping_error(ctx->dev, edesc->dst_dma)) {
+-		dev_err(ctx->dev, "unable to map dst\n");
+-		edesc->dst_dma = 0;
++	if (dma_mapping_error(ctx->dev, state->ctx_dma)) {
++		dev_err(ctx->dev, "unable to map ctx\n");
++		state->ctx_dma = 0;
+ 		goto unmap;
+ 	}
+ 
+ 	memset(&req_ctx->fd_flt, 0, sizeof(req_ctx->fd_flt));
+ 	dpaa2_fl_set_final(in_fle, true);
+-	dpaa2_fl_set_format(in_fle, dpaa2_fl_single);
+-	dpaa2_fl_set_addr(in_fle, state->buf_dma);
+-	dpaa2_fl_set_len(in_fle, buflen);
++	/*
++	 * crypto engine requires the input entry to be present when
++	 * "frame list" FD is used.
++	 * Since engine does not support FMT=2'b11 (unused entry type), leaving
++	 * in_fle zeroized (except for "Final" flag) is the best option.
++	 */
++	if (buflen) {
++		dpaa2_fl_set_format(in_fle, dpaa2_fl_single);
++		dpaa2_fl_set_addr(in_fle, state->buf_dma);
++		dpaa2_fl_set_len(in_fle, buflen);
++	}
+ 	dpaa2_fl_set_format(out_fle, dpaa2_fl_single);
+-	dpaa2_fl_set_addr(out_fle, edesc->dst_dma);
++	dpaa2_fl_set_addr(out_fle, state->ctx_dma);
+ 	dpaa2_fl_set_len(out_fle, digestsize);
+ 
+ 	req_ctx->flc = &ctx->flc[DIGEST];
+@@ -3791,7 +3766,7 @@ static int ahash_final_no_ctx(struct ahash_request *req)
+ 		return ret;
+ 
+ unmap:
+-	ahash_unmap(ctx->dev, edesc, req, digestsize);
++	ahash_unmap_ctx(ctx->dev, edesc, req, DMA_FROM_DEVICE);
+ 	qi_cache_free(edesc);
+ 	return ret;
+ }
+@@ -3871,6 +3846,7 @@ static int ahash_update_no_ctx(struct ahash_request *req)
+ 		}
+ 		edesc->qm_sg_bytes = qm_sg_bytes;
+ 
++		state->ctx_dma_len = ctx->ctx_len;
+ 		state->ctx_dma = dma_map_single(ctx->dev, state->caam_ctx,
+ 						ctx->ctx_len, DMA_FROM_DEVICE);
+ 		if (dma_mapping_error(ctx->dev, state->ctx_dma)) {
+@@ -3919,7 +3895,7 @@ static int ahash_update_no_ctx(struct ahash_request *req)
+ 
+ 	return ret;
+ unmap_ctx:
+-	ahash_unmap_ctx(ctx->dev, edesc, req, ctx->ctx_len, DMA_TO_DEVICE);
++	ahash_unmap_ctx(ctx->dev, edesc, req, DMA_TO_DEVICE);
+ 	qi_cache_free(edesc);
+ 	return ret;
+ }
+@@ -3984,11 +3960,12 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
+ 	}
+ 	edesc->qm_sg_bytes = qm_sg_bytes;
+ 
+-	edesc->dst_dma = dma_map_single(ctx->dev, req->result, digestsize,
++	state->ctx_dma_len = digestsize;
++	state->ctx_dma = dma_map_single(ctx->dev, state->caam_ctx, digestsize,
+ 					DMA_FROM_DEVICE);
+-	if (dma_mapping_error(ctx->dev, edesc->dst_dma)) {
+-		dev_err(ctx->dev, "unable to map dst\n");
+-		edesc->dst_dma = 0;
++	if (dma_mapping_error(ctx->dev, state->ctx_dma)) {
++		dev_err(ctx->dev, "unable to map ctx\n");
++		state->ctx_dma = 0;
+ 		ret = -ENOMEM;
+ 		goto unmap;
+ 	}
+@@ -3999,7 +3976,7 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
+ 	dpaa2_fl_set_addr(in_fle, edesc->qm_sg_dma);
+ 	dpaa2_fl_set_len(in_fle, buflen + req->nbytes);
+ 	dpaa2_fl_set_format(out_fle, dpaa2_fl_single);
+-	dpaa2_fl_set_addr(out_fle, edesc->dst_dma);
++	dpaa2_fl_set_addr(out_fle, state->ctx_dma);
+ 	dpaa2_fl_set_len(out_fle, digestsize);
+ 
+ 	req_ctx->flc = &ctx->flc[DIGEST];
+@@ -4014,7 +3991,7 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
+ 
+ 	return ret;
+ unmap:
+-	ahash_unmap(ctx->dev, edesc, req, digestsize);
++	ahash_unmap_ctx(ctx->dev, edesc, req, DMA_FROM_DEVICE);
+ 	qi_cache_free(edesc);
+ 	return -ENOMEM;
+ }
+@@ -4101,6 +4078,7 @@ static int ahash_update_first(struct ahash_request *req)
+ 			scatterwalk_map_and_copy(next_buf, req->src, to_hash,
+ 						 *next_buflen, 0);
+ 
++		state->ctx_dma_len = ctx->ctx_len;
+ 		state->ctx_dma = dma_map_single(ctx->dev, state->caam_ctx,
+ 						ctx->ctx_len, DMA_FROM_DEVICE);
+ 		if (dma_mapping_error(ctx->dev, state->ctx_dma)) {
+@@ -4144,7 +4122,7 @@ static int ahash_update_first(struct ahash_request *req)
+ 
+ 	return ret;
+ unmap_ctx:
+-	ahash_unmap_ctx(ctx->dev, edesc, req, ctx->ctx_len, DMA_TO_DEVICE);
++	ahash_unmap_ctx(ctx->dev, edesc, req, DMA_TO_DEVICE);
+ 	qi_cache_free(edesc);
+ 	return ret;
+ }
+@@ -4163,6 +4141,7 @@ static int ahash_init(struct ahash_request *req)
+ 	state->final = ahash_final_no_ctx;
+ 
+ 	state->ctx_dma = 0;
++	state->ctx_dma_len = 0;
+ 	state->current_buf = 0;
+ 	state->buf_dma = 0;
+ 	state->buflen_0 = 0;
+diff --git a/drivers/crypto/caam/caamalg_qi2.h b/drivers/crypto/caam/caamalg_qi2.h
+index 9823bdefd029..1998d7c6d4e0 100644
+--- a/drivers/crypto/caam/caamalg_qi2.h
++++ b/drivers/crypto/caam/caamalg_qi2.h
+@@ -160,14 +160,12 @@ struct skcipher_edesc {
+ 
+ /*
+  * ahash_edesc - s/w-extended ahash descriptor
+- * @dst_dma: I/O virtual address of req->result
+  * @qm_sg_dma: I/O virtual address of h/w link table
+  * @src_nents: number of segments in input scatterlist
+  * @qm_sg_bytes: length of dma mapped qm_sg space
+  * @sgt: pointer to h/w link table
+  */
+ struct ahash_edesc {
+-	dma_addr_t dst_dma;
+ 	dma_addr_t qm_sg_dma;
+ 	int src_nents;
+ 	int qm_sg_bytes;
+diff --git a/drivers/crypto/ccp/psp-dev.c b/drivers/crypto/ccp/psp-dev.c
+index b16be8a11d92..e34ad7c2ab68 100644
+--- a/drivers/crypto/ccp/psp-dev.c
++++ b/drivers/crypto/ccp/psp-dev.c
+@@ -972,7 +972,7 @@ void psp_pci_init(void)
+ 	rc = sev_platform_init(&error);
+ 	if (rc) {
+ 		dev_err(sp->dev, "SEV: failed to INIT error %#x\n", error);
+-		goto err;
++		return;
+ 	}
+ 
+ 	dev_info(sp->dev, "SEV API:%d.%d build:%d\n", psp_master->api_major,
+diff --git a/drivers/crypto/ccree/cc_aead.c b/drivers/crypto/ccree/cc_aead.c
+index a3527c00b29a..009ce649ff25 100644
+--- a/drivers/crypto/ccree/cc_aead.c
++++ b/drivers/crypto/ccree/cc_aead.c
+@@ -424,7 +424,7 @@ static int validate_keys_sizes(struct cc_aead_ctx *ctx)
+ /* This function prepers the user key so it can pass to the hmac processing
+  * (copy to intenral buffer or hash in case of key longer than block
+  */
+-static int cc_get_plain_hmac_key(struct crypto_aead *tfm, const u8 *key,
++static int cc_get_plain_hmac_key(struct crypto_aead *tfm, const u8 *authkey,
+ 				 unsigned int keylen)
+ {
+ 	dma_addr_t key_dma_addr = 0;
+@@ -437,6 +437,7 @@ static int cc_get_plain_hmac_key(struct crypto_aead *tfm, const u8 *key,
+ 	unsigned int hashmode;
+ 	unsigned int idx = 0;
+ 	int rc = 0;
++	u8 *key = NULL;
+ 	struct cc_hw_desc desc[MAX_AEAD_SETKEY_SEQ];
+ 	dma_addr_t padded_authkey_dma_addr =
+ 		ctx->auth_state.hmac.padded_authkey_dma_addr;
+@@ -455,11 +456,17 @@ static int cc_get_plain_hmac_key(struct crypto_aead *tfm, const u8 *key,
+ 	}
+ 
+ 	if (keylen != 0) {
++
++		key = kmemdup(authkey, keylen, GFP_KERNEL);
++		if (!key)
++			return -ENOMEM;
++
+ 		key_dma_addr = dma_map_single(dev, (void *)key, keylen,
+ 					      DMA_TO_DEVICE);
+ 		if (dma_mapping_error(dev, key_dma_addr)) {
+ 			dev_err(dev, "Mapping key va=0x%p len=%u for DMA failed\n",
+ 				key, keylen);
++			kzfree(key);
+ 			return -ENOMEM;
+ 		}
+ 		if (keylen > blocksize) {
+@@ -542,6 +549,8 @@ static int cc_get_plain_hmac_key(struct crypto_aead *tfm, const u8 *key,
+ 	if (key_dma_addr)
+ 		dma_unmap_single(dev, key_dma_addr, keylen, DMA_TO_DEVICE);
+ 
++	kzfree(key);
++
+ 	return rc;
+ }
+ 
+diff --git a/drivers/crypto/ccree/cc_buffer_mgr.c b/drivers/crypto/ccree/cc_buffer_mgr.c
+index 3bcb6bce666e..90b4870078fb 100644
+--- a/drivers/crypto/ccree/cc_buffer_mgr.c
++++ b/drivers/crypto/ccree/cc_buffer_mgr.c
+@@ -83,24 +83,17 @@ static void cc_copy_mac(struct device *dev, struct aead_request *req,
+  */
+ static unsigned int cc_get_sgl_nents(struct device *dev,
+ 				     struct scatterlist *sg_list,
+-				     unsigned int nbytes, u32 *lbytes,
+-				     bool *is_chained)
++				     unsigned int nbytes, u32 *lbytes)
+ {
+ 	unsigned int nents = 0;
+ 
+ 	while (nbytes && sg_list) {
+-		if (sg_list->length) {
+-			nents++;
+-			/* get the number of bytes in the last entry */
+-			*lbytes = nbytes;
+-			nbytes -= (sg_list->length > nbytes) ?
+-					nbytes : sg_list->length;
+-			sg_list = sg_next(sg_list);
+-		} else {
+-			sg_list = (struct scatterlist *)sg_page(sg_list);
+-			if (is_chained)
+-				*is_chained = true;
+-		}
++		nents++;
++		/* get the number of bytes in the last entry */
++		*lbytes = nbytes;
++		nbytes -= (sg_list->length > nbytes) ?
++				nbytes : sg_list->length;
++		sg_list = sg_next(sg_list);
+ 	}
+ 	dev_dbg(dev, "nents %d last bytes %d\n", nents, *lbytes);
+ 	return nents;
+@@ -142,7 +135,7 @@ void cc_copy_sg_portion(struct device *dev, u8 *dest, struct scatterlist *sg,
+ {
+ 	u32 nents, lbytes;
+ 
+-	nents = cc_get_sgl_nents(dev, sg, end, &lbytes, NULL);
++	nents = cc_get_sgl_nents(dev, sg, end, &lbytes);
+ 	sg_copy_buffer(sg, nents, (void *)dest, (end - to_skip + 1), to_skip,
+ 		       (direct == CC_SG_TO_BUF));
+ }
+@@ -311,40 +304,10 @@ static void cc_add_sg_entry(struct device *dev, struct buffer_array *sgl_data,
+ 	sgl_data->num_of_buffers++;
+ }
+ 
+-static int cc_dma_map_sg(struct device *dev, struct scatterlist *sg, u32 nents,
+-			 enum dma_data_direction direction)
+-{
+-	u32 i, j;
+-	struct scatterlist *l_sg = sg;
+-
+-	for (i = 0; i < nents; i++) {
+-		if (!l_sg)
+-			break;
+-		if (dma_map_sg(dev, l_sg, 1, direction) != 1) {
+-			dev_err(dev, "dma_map_page() sg buffer failed\n");
+-			goto err;
+-		}
+-		l_sg = sg_next(l_sg);
+-	}
+-	return nents;
+-
+-err:
+-	/* Restore mapped parts */
+-	for (j = 0; j < i; j++) {
+-		if (!sg)
+-			break;
+-		dma_unmap_sg(dev, sg, 1, direction);
+-		sg = sg_next(sg);
+-	}
+-	return 0;
+-}
+-
+ static int cc_map_sg(struct device *dev, struct scatterlist *sg,
+ 		     unsigned int nbytes, int direction, u32 *nents,
+ 		     u32 max_sg_nents, u32 *lbytes, u32 *mapped_nents)
+ {
+-	bool is_chained = false;
+-
+ 	if (sg_is_last(sg)) {
+ 		/* One entry only case -set to DLLI */
+ 		if (dma_map_sg(dev, sg, 1, direction) != 1) {
+@@ -358,35 +321,21 @@ static int cc_map_sg(struct device *dev, struct scatterlist *sg,
+ 		*nents = 1;
+ 		*mapped_nents = 1;
+ 	} else {  /*sg_is_last*/
+-		*nents = cc_get_sgl_nents(dev, sg, nbytes, lbytes,
+-					  &is_chained);
++		*nents = cc_get_sgl_nents(dev, sg, nbytes, lbytes);
+ 		if (*nents > max_sg_nents) {
+ 			*nents = 0;
+ 			dev_err(dev, "Too many fragments. current %d max %d\n",
+ 				*nents, max_sg_nents);
+ 			return -ENOMEM;
+ 		}
+-		if (!is_chained) {
+-			/* In case of mmu the number of mapped nents might
+-			 * be changed from the original sgl nents
+-			 */
+-			*mapped_nents = dma_map_sg(dev, sg, *nents, direction);
+-			if (*mapped_nents == 0) {
+-				*nents = 0;
+-				dev_err(dev, "dma_map_sg() sg buffer failed\n");
+-				return -ENOMEM;
+-			}
+-		} else {
+-			/*In this case the driver maps entry by entry so it
+-			 * must have the same nents before and after map
+-			 */
+-			*mapped_nents = cc_dma_map_sg(dev, sg, *nents,
+-						      direction);
+-			if (*mapped_nents != *nents) {
+-				*nents = *mapped_nents;
+-				dev_err(dev, "dma_map_sg() sg buffer failed\n");
+-				return -ENOMEM;
+-			}
++		/* In case of mmu the number of mapped nents might
++		 * be changed from the original sgl nents
++		 */
++		*mapped_nents = dma_map_sg(dev, sg, *nents, direction);
++		if (*mapped_nents == 0) {
++			*nents = 0;
++			dev_err(dev, "dma_map_sg() sg buffer failed\n");
++			return -ENOMEM;
+ 		}
+ 	}
+ 
+@@ -571,7 +520,6 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req)
+ 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ 	struct cc_drvdata *drvdata = dev_get_drvdata(dev);
+ 	u32 dummy;
+-	bool chained;
+ 	u32 size_to_unmap = 0;
+ 
+ 	if (areq_ctx->mac_buf_dma_addr) {
+@@ -612,6 +560,7 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req)
+ 	if (areq_ctx->gen_ctx.iv_dma_addr) {
+ 		dma_unmap_single(dev, areq_ctx->gen_ctx.iv_dma_addr,
+ 				 hw_iv_size, DMA_BIDIRECTIONAL);
++		kzfree(areq_ctx->gen_ctx.iv);
+ 	}
+ 
+ 	/* Release pool */
+@@ -636,15 +585,14 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req)
+ 		size_to_unmap += crypto_aead_ivsize(tfm);
+ 
+ 	dma_unmap_sg(dev, req->src,
+-		     cc_get_sgl_nents(dev, req->src, size_to_unmap,
+-				      &dummy, &chained),
++		     cc_get_sgl_nents(dev, req->src, size_to_unmap, &dummy),
+ 		     DMA_BIDIRECTIONAL);
+ 	if (req->src != req->dst) {
+ 		dev_dbg(dev, "Unmapping dst sgl: req->dst=%pK\n",
+ 			sg_virt(req->dst));
+ 		dma_unmap_sg(dev, req->dst,
+ 			     cc_get_sgl_nents(dev, req->dst, size_to_unmap,
+-					      &dummy, &chained),
++					      &dummy),
+ 			     DMA_BIDIRECTIONAL);
+ 	}
+ 	if (drvdata->coherent &&
+@@ -717,19 +665,27 @@ static int cc_aead_chain_iv(struct cc_drvdata *drvdata,
+ 	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
+ 	unsigned int hw_iv_size = areq_ctx->hw_iv_size;
+ 	struct device *dev = drvdata_to_dev(drvdata);
++	gfp_t flags = cc_gfp_flags(&req->base);
+ 	int rc = 0;
+ 
+ 	if (!req->iv) {
+ 		areq_ctx->gen_ctx.iv_dma_addr = 0;
++		areq_ctx->gen_ctx.iv = NULL;
+ 		goto chain_iv_exit;
+ 	}
+ 
+-	areq_ctx->gen_ctx.iv_dma_addr = dma_map_single(dev, req->iv,
+-						       hw_iv_size,
+-						       DMA_BIDIRECTIONAL);
++	areq_ctx->gen_ctx.iv = kmemdup(req->iv, hw_iv_size, flags);
++	if (!areq_ctx->gen_ctx.iv)
++		return -ENOMEM;
++
++	areq_ctx->gen_ctx.iv_dma_addr =
++		dma_map_single(dev, areq_ctx->gen_ctx.iv, hw_iv_size,
++			       DMA_BIDIRECTIONAL);
+ 	if (dma_mapping_error(dev, areq_ctx->gen_ctx.iv_dma_addr)) {
+ 		dev_err(dev, "Mapping iv %u B at va=%pK for DMA failed\n",
+ 			hw_iv_size, req->iv);
++		kzfree(areq_ctx->gen_ctx.iv);
++		areq_ctx->gen_ctx.iv = NULL;
+ 		rc = -ENOMEM;
+ 		goto chain_iv_exit;
+ 	}
+@@ -1022,7 +978,6 @@ static int cc_aead_chain_data(struct cc_drvdata *drvdata,
+ 	unsigned int size_for_map = req->assoclen + req->cryptlen;
+ 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ 	u32 sg_index = 0;
+-	bool chained = false;
+ 	bool is_gcm4543 = areq_ctx->is_gcm4543;
+ 	u32 size_to_skip = req->assoclen;
+ 
+@@ -1043,7 +998,7 @@ static int cc_aead_chain_data(struct cc_drvdata *drvdata,
+ 	size_for_map += (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) ?
+ 			authsize : 0;
+ 	src_mapped_nents = cc_get_sgl_nents(dev, req->src, size_for_map,
+-					    &src_last_bytes, &chained);
++					    &src_last_bytes);
+ 	sg_index = areq_ctx->src_sgl->length;
+ 	//check where the data starts
+ 	while (sg_index <= size_to_skip) {
+@@ -1085,7 +1040,7 @@ static int cc_aead_chain_data(struct cc_drvdata *drvdata,
+ 	}
+ 
+ 	dst_mapped_nents = cc_get_sgl_nents(dev, req->dst, size_for_map,
+-					    &dst_last_bytes, &chained);
++					    &dst_last_bytes);
+ 	sg_index = areq_ctx->dst_sgl->length;
+ 	offset = size_to_skip;
+ 
+@@ -1486,7 +1441,7 @@ int cc_map_hash_request_update(struct cc_drvdata *drvdata, void *ctx,
+ 		dev_dbg(dev, " less than one block: curr_buff=%pK *curr_buff_cnt=0x%X copy_to=%pK\n",
+ 			curr_buff, *curr_buff_cnt, &curr_buff[*curr_buff_cnt]);
+ 		areq_ctx->in_nents =
+-			cc_get_sgl_nents(dev, src, nbytes, &dummy, NULL);
++			cc_get_sgl_nents(dev, src, nbytes, &dummy);
+ 		sg_copy_to_buffer(src, areq_ctx->in_nents,
+ 				  &curr_buff[*curr_buff_cnt], nbytes);
+ 		*curr_buff_cnt += nbytes;
+diff --git a/drivers/crypto/ccree/cc_driver.h b/drivers/crypto/ccree/cc_driver.h
+index 5be7fd431b05..fe3d6bbc596e 100644
+--- a/drivers/crypto/ccree/cc_driver.h
++++ b/drivers/crypto/ccree/cc_driver.h
+@@ -170,6 +170,7 @@ struct cc_alg_template {
+ 
+ struct async_gen_req_ctx {
+ 	dma_addr_t iv_dma_addr;
++	u8 *iv;
+ 	enum drv_crypto_direction op_type;
+ };
+ 
+diff --git a/drivers/crypto/ccree/cc_fips.c b/drivers/crypto/ccree/cc_fips.c
+index b4d0a6d983e0..09f708f6418e 100644
+--- a/drivers/crypto/ccree/cc_fips.c
++++ b/drivers/crypto/ccree/cc_fips.c
+@@ -72,20 +72,28 @@ static inline void tee_fips_error(struct device *dev)
+ 		dev_err(dev, "TEE reported error!\n");
+ }
+ 
++/*
++ * This function checks if a cryptocell tee fips error has occurred
++ * and in such a case triggers a system error
++ */
++void cc_tee_handle_fips_error(struct cc_drvdata *p_drvdata)
++{
++	struct device *dev = drvdata_to_dev(p_drvdata);
++
++	if (!cc_get_tee_fips_status(p_drvdata))
++		tee_fips_error(dev);
++}
++
+ /* Deferred service handler, run as interrupt-fired tasklet */
+ static void fips_dsr(unsigned long devarg)
+ {
+ 	struct cc_drvdata *drvdata = (struct cc_drvdata *)devarg;
+-	struct device *dev = drvdata_to_dev(drvdata);
+-	u32 irq, state, val;
++	u32 irq, val;
+ 
+ 	irq = (drvdata->irq & (CC_GPR0_IRQ_MASK));
+ 
+ 	if (irq) {
+-		state = cc_ioread(drvdata, CC_REG(GPR_HOST));
+-
+-		if (state != (CC_FIPS_SYNC_TEE_STATUS | CC_FIPS_SYNC_MODULE_OK))
+-			tee_fips_error(dev);
++		cc_tee_handle_fips_error(drvdata);
+ 	}
+ 
+ 	/* after verifing that there is nothing to do,
+@@ -113,8 +121,7 @@ int cc_fips_init(struct cc_drvdata *p_drvdata)
+ 	dev_dbg(dev, "Initializing fips tasklet\n");
+ 	tasklet_init(&fips_h->tasklet, fips_dsr, (unsigned long)p_drvdata);
+ 
+-	if (!cc_get_tee_fips_status(p_drvdata))
+-		tee_fips_error(dev);
++	cc_tee_handle_fips_error(p_drvdata);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/crypto/ccree/cc_fips.h b/drivers/crypto/ccree/cc_fips.h
+index 645e096a7a82..67d5fbfa09b5 100644
+--- a/drivers/crypto/ccree/cc_fips.h
++++ b/drivers/crypto/ccree/cc_fips.h
+@@ -18,6 +18,7 @@ int cc_fips_init(struct cc_drvdata *p_drvdata);
+ void cc_fips_fini(struct cc_drvdata *drvdata);
+ void fips_handler(struct cc_drvdata *drvdata);
+ void cc_set_ree_fips_status(struct cc_drvdata *drvdata, bool ok);
++void cc_tee_handle_fips_error(struct cc_drvdata *p_drvdata);
+ 
+ #else  /* CONFIG_CRYPTO_FIPS */
+ 
+@@ -30,6 +31,7 @@ static inline void cc_fips_fini(struct cc_drvdata *drvdata) {}
+ static inline void cc_set_ree_fips_status(struct cc_drvdata *drvdata,
+ 					  bool ok) {}
+ static inline void fips_handler(struct cc_drvdata *drvdata) {}
++static inline void cc_tee_handle_fips_error(struct cc_drvdata *p_drvdata) {}
+ 
+ #endif /* CONFIG_CRYPTO_FIPS */
+ 
+diff --git a/drivers/crypto/ccree/cc_hash.c b/drivers/crypto/ccree/cc_hash.c
+index 2c4ddc8fb76b..e44cbf173606 100644
+--- a/drivers/crypto/ccree/cc_hash.c
++++ b/drivers/crypto/ccree/cc_hash.c
+@@ -69,6 +69,7 @@ struct cc_hash_alg {
+ struct hash_key_req_ctx {
+ 	u32 keylen;
+ 	dma_addr_t key_dma_addr;
++	u8 *key;
+ };
+ 
+ /* hash per-session context */
+@@ -730,13 +731,20 @@ static int cc_hash_setkey(struct crypto_ahash *ahash, const u8 *key,
+ 	ctx->key_params.keylen = keylen;
+ 	ctx->key_params.key_dma_addr = 0;
+ 	ctx->is_hmac = true;
++	ctx->key_params.key = NULL;
+ 
+ 	if (keylen) {
++		ctx->key_params.key = kmemdup(key, keylen, GFP_KERNEL);
++		if (!ctx->key_params.key)
++			return -ENOMEM;
++
+ 		ctx->key_params.key_dma_addr =
+-			dma_map_single(dev, (void *)key, keylen, DMA_TO_DEVICE);
++			dma_map_single(dev, (void *)ctx->key_params.key, keylen,
++				       DMA_TO_DEVICE);
+ 		if (dma_mapping_error(dev, ctx->key_params.key_dma_addr)) {
+ 			dev_err(dev, "Mapping key va=0x%p len=%u for DMA failed\n",
+-				key, keylen);
++				ctx->key_params.key, keylen);
++			kzfree(ctx->key_params.key);
+ 			return -ENOMEM;
+ 		}
+ 		dev_dbg(dev, "mapping key-buffer: key_dma_addr=%pad keylen=%u\n",
+@@ -887,6 +895,9 @@ out:
+ 		dev_dbg(dev, "Unmapped key-buffer: key_dma_addr=%pad keylen=%u\n",
+ 			&ctx->key_params.key_dma_addr, ctx->key_params.keylen);
+ 	}
++
++	kzfree(ctx->key_params.key);
++
+ 	return rc;
+ }
+ 
+@@ -913,11 +924,16 @@ static int cc_xcbc_setkey(struct crypto_ahash *ahash,
+ 
+ 	ctx->key_params.keylen = keylen;
+ 
++	ctx->key_params.key = kmemdup(key, keylen, GFP_KERNEL);
++	if (!ctx->key_params.key)
++		return -ENOMEM;
++
+ 	ctx->key_params.key_dma_addr =
+-		dma_map_single(dev, (void *)key, keylen, DMA_TO_DEVICE);
++		dma_map_single(dev, ctx->key_params.key, keylen, DMA_TO_DEVICE);
+ 	if (dma_mapping_error(dev, ctx->key_params.key_dma_addr)) {
+ 		dev_err(dev, "Mapping key va=0x%p len=%u for DMA failed\n",
+ 			key, keylen);
++		kzfree(ctx->key_params.key);
+ 		return -ENOMEM;
+ 	}
+ 	dev_dbg(dev, "mapping key-buffer: key_dma_addr=%pad keylen=%u\n",
+@@ -969,6 +985,8 @@ static int cc_xcbc_setkey(struct crypto_ahash *ahash,
+ 	dev_dbg(dev, "Unmapped key-buffer: key_dma_addr=%pad keylen=%u\n",
+ 		&ctx->key_params.key_dma_addr, ctx->key_params.keylen);
+ 
++	kzfree(ctx->key_params.key);
++
+ 	return rc;
+ }
+ 
+@@ -1621,7 +1639,7 @@ static struct cc_hash_template driver_hash[] = {
+ 			.setkey = cc_hash_setkey,
+ 			.halg = {
+ 				.digestsize = SHA224_DIGEST_SIZE,
+-				.statesize = CC_STATE_SIZE(SHA224_DIGEST_SIZE),
++				.statesize = CC_STATE_SIZE(SHA256_DIGEST_SIZE),
+ 			},
+ 		},
+ 		.hash_mode = DRV_HASH_SHA224,
+@@ -1648,7 +1666,7 @@ static struct cc_hash_template driver_hash[] = {
+ 			.setkey = cc_hash_setkey,
+ 			.halg = {
+ 				.digestsize = SHA384_DIGEST_SIZE,
+-				.statesize = CC_STATE_SIZE(SHA384_DIGEST_SIZE),
++				.statesize = CC_STATE_SIZE(SHA512_DIGEST_SIZE),
+ 			},
+ 		},
+ 		.hash_mode = DRV_HASH_SHA384,
+diff --git a/drivers/crypto/ccree/cc_ivgen.c b/drivers/crypto/ccree/cc_ivgen.c
+index 769458323394..1abec3896a78 100644
+--- a/drivers/crypto/ccree/cc_ivgen.c
++++ b/drivers/crypto/ccree/cc_ivgen.c
+@@ -154,9 +154,6 @@ void cc_ivgen_fini(struct cc_drvdata *drvdata)
+ 	}
+ 
+ 	ivgen_ctx->pool = NULL_SRAM_ADDR;
+-
+-	/* release "this" context */
+-	kfree(ivgen_ctx);
+ }
+ 
+ /*!
+@@ -174,10 +171,12 @@ int cc_ivgen_init(struct cc_drvdata *drvdata)
+ 	int rc;
+ 
+ 	/* Allocate "this" context */
+-	ivgen_ctx = kzalloc(sizeof(*ivgen_ctx), GFP_KERNEL);
++	ivgen_ctx = devm_kzalloc(device, sizeof(*ivgen_ctx), GFP_KERNEL);
+ 	if (!ivgen_ctx)
+ 		return -ENOMEM;
+ 
++	drvdata->ivgen_handle = ivgen_ctx;
++
+ 	/* Allocate pool's header for initial enc. key/IV */
+ 	ivgen_ctx->pool_meta = dma_alloc_coherent(device, CC_IVPOOL_META_SIZE,
+ 						  &ivgen_ctx->pool_meta_dma,
+@@ -196,8 +195,6 @@ int cc_ivgen_init(struct cc_drvdata *drvdata)
+ 		goto out;
+ 	}
+ 
+-	drvdata->ivgen_handle = ivgen_ctx;
+-
+ 	return cc_init_iv_sram(drvdata);
+ 
+ out:
+diff --git a/drivers/crypto/ccree/cc_pm.c b/drivers/crypto/ccree/cc_pm.c
+index 6ff7e75ad90e..638082dff183 100644
+--- a/drivers/crypto/ccree/cc_pm.c
++++ b/drivers/crypto/ccree/cc_pm.c
+@@ -11,6 +11,7 @@
+ #include "cc_ivgen.h"
+ #include "cc_hash.h"
+ #include "cc_pm.h"
++#include "cc_fips.h"
+ 
+ #define POWER_DOWN_ENABLE 0x01
+ #define POWER_DOWN_DISABLE 0x00
+@@ -25,13 +26,13 @@ int cc_pm_suspend(struct device *dev)
+ 	int rc;
+ 
+ 	dev_dbg(dev, "set HOST_POWER_DOWN_EN\n");
+-	cc_iowrite(drvdata, CC_REG(HOST_POWER_DOWN_EN), POWER_DOWN_ENABLE);
+ 	rc = cc_suspend_req_queue(drvdata);
+ 	if (rc) {
+ 		dev_err(dev, "cc_suspend_req_queue (%x)\n", rc);
+ 		return rc;
+ 	}
+ 	fini_cc_regs(drvdata);
++	cc_iowrite(drvdata, CC_REG(HOST_POWER_DOWN_EN), POWER_DOWN_ENABLE);
+ 	cc_clk_off(drvdata);
+ 	return 0;
+ }
+@@ -42,19 +43,21 @@ int cc_pm_resume(struct device *dev)
+ 	struct cc_drvdata *drvdata = dev_get_drvdata(dev);
+ 
+ 	dev_dbg(dev, "unset HOST_POWER_DOWN_EN\n");
+-	cc_iowrite(drvdata, CC_REG(HOST_POWER_DOWN_EN), POWER_DOWN_DISABLE);
+-
++	/* Enables the device source clk */
+ 	rc = cc_clk_on(drvdata);
+ 	if (rc) {
+ 		dev_err(dev, "failed getting clock back on. We're toast.\n");
+ 		return rc;
+ 	}
+ 
++	cc_iowrite(drvdata, CC_REG(HOST_POWER_DOWN_EN), POWER_DOWN_DISABLE);
+ 	rc = init_cc_regs(drvdata, false);
+ 	if (rc) {
+ 		dev_err(dev, "init_cc_regs (%x)\n", rc);
+ 		return rc;
+ 	}
++	/* check if tee fips error occurred during power down */
++	cc_tee_handle_fips_error(drvdata);
+ 
+ 	rc = cc_resume_req_queue(drvdata);
+ 	if (rc) {
+diff --git a/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c b/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c
+index 23305f22072f..204e4ad62c38 100644
+--- a/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c
++++ b/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c
+@@ -250,9 +250,14 @@ static int rk_set_data_start(struct rk_crypto_info *dev)
+ 	u8 *src_last_blk = page_address(sg_page(dev->sg_src)) +
+ 		dev->sg_src->offset + dev->sg_src->length - ivsize;
+ 
+-	/* store the iv that need to be updated in chain mode */
+-	if (ctx->mode & RK_CRYPTO_DEC)
++	/* Store the iv that needs to be updated in chain mode.
++	 * And update the IV buffer to contain the next IV for decryption mode.
++	 */
++	if (ctx->mode & RK_CRYPTO_DEC) {
+ 		memcpy(ctx->iv, src_last_blk, ivsize);
++		sg_pcopy_to_buffer(dev->first, dev->src_nents, req->info,
++				   ivsize, dev->total - ivsize);
++	}
+ 
+ 	err = dev->load_data(dev, dev->sg_src, dev->sg_dst);
+ 	if (!err)
+@@ -288,13 +293,19 @@ static void rk_iv_copyback(struct rk_crypto_info *dev)
+ 	struct ablkcipher_request *req =
+ 		ablkcipher_request_cast(dev->async_req);
+ 	struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req);
++	struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm);
+ 	u32 ivsize = crypto_ablkcipher_ivsize(tfm);
+ 
+-	if (ivsize == DES_BLOCK_SIZE)
+-		memcpy_fromio(req->info, dev->reg + RK_CRYPTO_TDES_IV_0,
+-			      ivsize);
+-	else if (ivsize == AES_BLOCK_SIZE)
+-		memcpy_fromio(req->info, dev->reg + RK_CRYPTO_AES_IV_0, ivsize);
++	/* Update the IV buffer to contain the next IV for encryption mode. */
++	if (!(ctx->mode & RK_CRYPTO_DEC)) {
++		if (dev->aligned) {
++			memcpy(req->info, sg_virt(dev->sg_dst) +
++				dev->sg_dst->length - ivsize, ivsize);
++		} else {
++			memcpy(req->info, dev->addr_vir +
++				dev->count - ivsize, ivsize);
++		}
++	}
+ }
+ 
+ static void rk_update_iv(struct rk_crypto_info *dev)
+diff --git a/drivers/crypto/vmx/aesp8-ppc.pl b/drivers/crypto/vmx/aesp8-ppc.pl
+index d6a9f63d65ba..de78282b8f44 100644
+--- a/drivers/crypto/vmx/aesp8-ppc.pl
++++ b/drivers/crypto/vmx/aesp8-ppc.pl
+@@ -1854,7 +1854,7 @@ Lctr32_enc8x_three:
+ 	stvx_u		$out1,$x10,$out
+ 	stvx_u		$out2,$x20,$out
+ 	addi		$out,$out,0x30
+-	b		Lcbc_dec8x_done
++	b		Lctr32_enc8x_done
+ 
+ .align	5
+ Lctr32_enc8x_two:
+@@ -1866,7 +1866,7 @@ Lctr32_enc8x_two:
+ 	stvx_u		$out0,$x00,$out
+ 	stvx_u		$out1,$x10,$out
+ 	addi		$out,$out,0x20
+-	b		Lcbc_dec8x_done
++	b		Lctr32_enc8x_done
+ 
+ .align	5
+ Lctr32_enc8x_one:
+diff --git a/drivers/dax/device.c b/drivers/dax/device.c
+index 948806e57cee..a89ebd94c670 100644
+--- a/drivers/dax/device.c
++++ b/drivers/dax/device.c
+@@ -325,8 +325,7 @@ static vm_fault_t __dev_dax_pmd_fault(struct dev_dax *dev_dax,
+ 
+ 	*pfn = phys_to_pfn_t(phys, dax_region->pfn_flags);
+ 
+-	return vmf_insert_pfn_pmd(vmf->vma, vmf->address, vmf->pmd, *pfn,
+-			vmf->flags & FAULT_FLAG_WRITE);
++	return vmf_insert_pfn_pmd(vmf, *pfn, vmf->flags & FAULT_FLAG_WRITE);
+ }
+ 
+ #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+@@ -376,8 +375,7 @@ static vm_fault_t __dev_dax_pud_fault(struct dev_dax *dev_dax,
+ 
+ 	*pfn = phys_to_pfn_t(phys, dax_region->pfn_flags);
+ 
+-	return vmf_insert_pfn_pud(vmf->vma, vmf->address, vmf->pud, *pfn,
+-			vmf->flags & FAULT_FLAG_WRITE);
++	return vmf_insert_pfn_pud(vmf, *pfn, vmf->flags & FAULT_FLAG_WRITE);
+ }
+ #else
+ static vm_fault_t __dev_dax_pud_fault(struct dev_dax *dev_dax,
+diff --git a/drivers/edac/mce_amd.c b/drivers/edac/mce_amd.c
+index c605089d899f..397cd51f033a 100644
+--- a/drivers/edac/mce_amd.c
++++ b/drivers/edac/mce_amd.c
+@@ -914,7 +914,7 @@ static inline void amd_decode_err_code(u16 ec)
+ /*
+  * Filter out unwanted MCE signatures here.
+  */
+-static bool amd_filter_mce(struct mce *m)
++static bool ignore_mce(struct mce *m)
+ {
+ 	/*
+ 	 * NB GART TLB error reporting is disabled by default.
+@@ -948,7 +948,7 @@ amd_decode_mce(struct notifier_block *nb, unsigned long val, void *data)
+ 	unsigned int fam = x86_family(m->cpuid);
+ 	int ecc;
+ 
+-	if (amd_filter_mce(m))
++	if (ignore_mce(m))
+ 		return NOTIFY_STOP;
+ 
+ 	pr_emerg(HW_ERR "%s\n", decode_error_status(m));
+diff --git a/drivers/md/bcache/journal.c b/drivers/md/bcache/journal.c
+index b2fd412715b1..d3725c17ce3a 100644
+--- a/drivers/md/bcache/journal.c
++++ b/drivers/md/bcache/journal.c
+@@ -540,11 +540,11 @@ static void journal_reclaim(struct cache_set *c)
+ 				  ca->sb.nr_this_dev);
+ 	}
+ 
+-	bkey_init(k);
+-	SET_KEY_PTRS(k, n);
+-
+-	if (n)
++	if (n) {
++		bkey_init(k);
++		SET_KEY_PTRS(k, n);
+ 		c->journal.blocks_free = c->sb.bucket_size >> c->block_bits;
++	}
+ out:
+ 	if (!journal_full(&c->journal))
+ 		__closure_wake_up(&c->journal.wait);
+@@ -671,6 +671,9 @@ static void journal_write_unlocked(struct closure *cl)
+ 		ca->journal.seq[ca->journal.cur_idx] = w->data->seq;
+ 	}
+ 
++	/* If KEY_PTRS(k) == 0, this jset gets lost in air */
++	BUG_ON(i == 0);
++
+ 	atomic_dec_bug(&fifo_back(&c->journal.pin));
+ 	bch_journal_next(&c->journal);
+ 	journal_reclaim(c);
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index 4dee119c3664..ee36e6b3bcad 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -1516,6 +1516,7 @@ static void cache_set_free(struct closure *cl)
+ 	bch_btree_cache_free(c);
+ 	bch_journal_free(c);
+ 
++	mutex_lock(&bch_register_lock);
+ 	for_each_cache(ca, c, i)
+ 		if (ca) {
+ 			ca->set = NULL;
+@@ -1534,7 +1535,6 @@ static void cache_set_free(struct closure *cl)
+ 	mempool_exit(&c->search);
+ 	kfree(c->devices);
+ 
+-	mutex_lock(&bch_register_lock);
+ 	list_del(&c->list);
+ 	mutex_unlock(&bch_register_lock);
+ 
+diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
+index 15a45ec6518d..b07452e83edb 100644
+--- a/drivers/mmc/core/queue.c
++++ b/drivers/mmc/core/queue.c
+@@ -473,6 +473,7 @@ void mmc_cleanup_queue(struct mmc_queue *mq)
+ 		blk_mq_unquiesce_queue(q);
+ 
+ 	blk_cleanup_queue(q);
++	blk_mq_free_tag_set(&mq->tag_set);
+ 
+ 	/*
+ 	 * A request can be completed before the next request, potentially
+diff --git a/drivers/mmc/host/Kconfig b/drivers/mmc/host/Kconfig
+index a44ec8bb5418..260be2c3c61d 100644
+--- a/drivers/mmc/host/Kconfig
++++ b/drivers/mmc/host/Kconfig
+@@ -92,6 +92,7 @@ config MMC_SDHCI_PCI
+ 	tristate "SDHCI support on PCI bus"
+ 	depends on MMC_SDHCI && PCI
+ 	select MMC_CQHCI
++	select IOSF_MBI if X86
+ 	help
+ 	  This selects the PCI Secure Digital Host Controller Interface.
+ 	  Most controllers found today are PCI devices.
+diff --git a/drivers/mmc/host/sdhci-of-arasan.c b/drivers/mmc/host/sdhci-of-arasan.c
+index c9e3e050ccc8..88dc3f00a5be 100644
+--- a/drivers/mmc/host/sdhci-of-arasan.c
++++ b/drivers/mmc/host/sdhci-of-arasan.c
+@@ -832,7 +832,10 @@ static int sdhci_arasan_probe(struct platform_device *pdev)
+ 		host->mmc_host_ops.start_signal_voltage_switch =
+ 					sdhci_arasan_voltage_switch;
+ 		sdhci_arasan->has_cqe = true;
+-		host->mmc->caps2 |= MMC_CAP2_CQE | MMC_CAP2_CQE_DCMD;
++		host->mmc->caps2 |= MMC_CAP2_CQE;
++
++		if (!of_property_read_bool(np, "disable-cqe-dcmd"))
++			host->mmc->caps2 |= MMC_CAP2_CQE_DCMD;
+ 	}
+ 
+ 	ret = sdhci_arasan_add_host(sdhci_arasan);
+diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c
+index 2a6eba74b94e..ac9a4ee03c66 100644
+--- a/drivers/mmc/host/sdhci-pci-core.c
++++ b/drivers/mmc/host/sdhci-pci-core.c
+@@ -31,6 +31,10 @@
+ #include <linux/mmc/sdhci-pci-data.h>
+ #include <linux/acpi.h>
+ 
++#ifdef CONFIG_X86
++#include <asm/iosf_mbi.h>
++#endif
++
+ #include "cqhci.h"
+ 
+ #include "sdhci.h"
+@@ -451,6 +455,50 @@ static const struct sdhci_pci_fixes sdhci_intel_pch_sdio = {
+ 	.probe_slot	= pch_hc_probe_slot,
+ };
+ 
++#ifdef CONFIG_X86
++
++#define BYT_IOSF_SCCEP			0x63
++#define BYT_IOSF_OCP_NETCTRL0		0x1078
++#define BYT_IOSF_OCP_TIMEOUT_BASE	GENMASK(10, 8)
++
++static void byt_ocp_setting(struct pci_dev *pdev)
++{
++	u32 val = 0;
++
++	if (pdev->device != PCI_DEVICE_ID_INTEL_BYT_EMMC &&
++	    pdev->device != PCI_DEVICE_ID_INTEL_BYT_SDIO &&
++	    pdev->device != PCI_DEVICE_ID_INTEL_BYT_SD &&
++	    pdev->device != PCI_DEVICE_ID_INTEL_BYT_EMMC2)
++		return;
++
++	if (iosf_mbi_read(BYT_IOSF_SCCEP, MBI_CR_READ, BYT_IOSF_OCP_NETCTRL0,
++			  &val)) {
++		dev_err(&pdev->dev, "%s read error\n", __func__);
++		return;
++	}
++
++	if (!(val & BYT_IOSF_OCP_TIMEOUT_BASE))
++		return;
++
++	val &= ~BYT_IOSF_OCP_TIMEOUT_BASE;
++
++	if (iosf_mbi_write(BYT_IOSF_SCCEP, MBI_CR_WRITE, BYT_IOSF_OCP_NETCTRL0,
++			   val)) {
++		dev_err(&pdev->dev, "%s write error\n", __func__);
++		return;
++	}
++
++	dev_dbg(&pdev->dev, "%s completed\n", __func__);
++}
++
++#else
++
++static inline void byt_ocp_setting(struct pci_dev *pdev)
++{
++}
++
++#endif
++
+ enum {
+ 	INTEL_DSM_FNS		=  0,
+ 	INTEL_DSM_V18_SWITCH	=  3,
+@@ -715,6 +763,8 @@ static void byt_probe_slot(struct sdhci_pci_slot *slot)
+ 
+ 	byt_read_dsm(slot);
+ 
++	byt_ocp_setting(slot->chip->pdev);
++
+ 	ops->execute_tuning = intel_execute_tuning;
+ 	ops->start_signal_voltage_switch = intel_start_signal_voltage_switch;
+ 
+@@ -938,7 +988,35 @@ static int byt_sd_probe_slot(struct sdhci_pci_slot *slot)
+ 	return 0;
+ }
+ 
++#ifdef CONFIG_PM_SLEEP
++
++static int byt_resume(struct sdhci_pci_chip *chip)
++{
++	byt_ocp_setting(chip->pdev);
++
++	return sdhci_pci_resume_host(chip);
++}
++
++#endif
++
++#ifdef CONFIG_PM
++
++static int byt_runtime_resume(struct sdhci_pci_chip *chip)
++{
++	byt_ocp_setting(chip->pdev);
++
++	return sdhci_pci_runtime_resume_host(chip);
++}
++
++#endif
++
+ static const struct sdhci_pci_fixes sdhci_intel_byt_emmc = {
++#ifdef CONFIG_PM_SLEEP
++	.resume		= byt_resume,
++#endif
++#ifdef CONFIG_PM
++	.runtime_resume	= byt_runtime_resume,
++#endif
+ 	.allow_runtime_pm = true,
+ 	.probe_slot	= byt_emmc_probe_slot,
+ 	.quirks		= SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC |
+@@ -972,6 +1050,12 @@ static const struct sdhci_pci_fixes sdhci_intel_glk_emmc = {
+ };
+ 
+ static const struct sdhci_pci_fixes sdhci_ni_byt_sdio = {
++#ifdef CONFIG_PM_SLEEP
++	.resume		= byt_resume,
++#endif
++#ifdef CONFIG_PM
++	.runtime_resume	= byt_runtime_resume,
++#endif
+ 	.quirks		= SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC |
+ 			  SDHCI_QUIRK_NO_LED,
+ 	.quirks2	= SDHCI_QUIRK2_HOST_OFF_CARD_ON |
+@@ -983,6 +1067,12 @@ static const struct sdhci_pci_fixes sdhci_ni_byt_sdio = {
+ };
+ 
+ static const struct sdhci_pci_fixes sdhci_intel_byt_sdio = {
++#ifdef CONFIG_PM_SLEEP
++	.resume		= byt_resume,
++#endif
++#ifdef CONFIG_PM
++	.runtime_resume	= byt_runtime_resume,
++#endif
+ 	.quirks		= SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC |
+ 			  SDHCI_QUIRK_NO_LED,
+ 	.quirks2	= SDHCI_QUIRK2_HOST_OFF_CARD_ON |
+@@ -994,6 +1084,12 @@ static const struct sdhci_pci_fixes sdhci_intel_byt_sdio = {
+ };
+ 
+ static const struct sdhci_pci_fixes sdhci_intel_byt_sd = {
++#ifdef CONFIG_PM_SLEEP
++	.resume		= byt_resume,
++#endif
++#ifdef CONFIG_PM
++	.runtime_resume	= byt_runtime_resume,
++#endif
+ 	.quirks		= SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC |
+ 			  SDHCI_QUIRK_NO_LED,
+ 	.quirks2	= SDHCI_QUIRK2_CARD_ON_NEEDS_BUS_ON |
+diff --git a/drivers/mmc/host/sdhci-tegra.c b/drivers/mmc/host/sdhci-tegra.c
+index e6ace31e2a41..084d22d83d14 100644
+--- a/drivers/mmc/host/sdhci-tegra.c
++++ b/drivers/mmc/host/sdhci-tegra.c
+@@ -675,6 +675,7 @@ static void tegra_sdhci_set_uhs_signaling(struct sdhci_host *host,
+ 	bool set_dqs_trim = false;
+ 	bool do_hs400_dll_cal = false;
+ 
++	tegra_host->ddr_signaling = false;
+ 	switch (timing) {
+ 	case MMC_TIMING_UHS_SDR50:
+ 	case MMC_TIMING_UHS_SDR104:
+diff --git a/drivers/mtd/maps/Kconfig b/drivers/mtd/maps/Kconfig
+index e0cf869c8544..544ed1931843 100644
+--- a/drivers/mtd/maps/Kconfig
++++ b/drivers/mtd/maps/Kconfig
+@@ -10,7 +10,7 @@ config MTD_COMPLEX_MAPPINGS
+ 
+ config MTD_PHYSMAP
+ 	tristate "Flash device in physical memory map"
+-	depends on MTD_CFI || MTD_JEDECPROBE || MTD_ROM || MTD_LPDDR
++	depends on MTD_CFI || MTD_JEDECPROBE || MTD_ROM || MTD_RAM || MTD_LPDDR
+ 	help
+ 	  This provides a 'mapping' driver which allows the NOR Flash and
+ 	  ROM driver code to communicate with chips which are mapped
+diff --git a/drivers/mtd/maps/physmap-core.c b/drivers/mtd/maps/physmap-core.c
+index d9a3e4bebe5d..21b556afc305 100644
+--- a/drivers/mtd/maps/physmap-core.c
++++ b/drivers/mtd/maps/physmap-core.c
+@@ -132,6 +132,8 @@ static void physmap_set_addr_gpios(struct physmap_flash_info *info,
+ 
+ 		gpiod_set_value(info->gpios->desc[i], !!(BIT(i) & ofs));
+ 	}
++
++	info->gpio_values = ofs;
+ }
+ 
+ #define win_mask(order)		(BIT(order) - 1)
+diff --git a/drivers/mtd/spi-nor/intel-spi.c b/drivers/mtd/spi-nor/intel-spi.c
+index af0a22019516..d60cbf23d9aa 100644
+--- a/drivers/mtd/spi-nor/intel-spi.c
++++ b/drivers/mtd/spi-nor/intel-spi.c
+@@ -632,6 +632,10 @@ static ssize_t intel_spi_read(struct spi_nor *nor, loff_t from, size_t len,
+ 	while (len > 0) {
+ 		block_size = min_t(size_t, len, INTEL_SPI_FIFO_SZ);
+ 
++		/* Read cannot cross 4K boundary */
++		block_size = min_t(loff_t, from + block_size,
++				   round_up(from + 1, SZ_4K)) - from;
++
+ 		writel(from, ispi->base + FADDR);
+ 
+ 		val = readl(ispi->base + HSFSTS_CTL);
+@@ -685,6 +689,10 @@ static ssize_t intel_spi_write(struct spi_nor *nor, loff_t to, size_t len,
+ 	while (len > 0) {
+ 		block_size = min_t(size_t, len, INTEL_SPI_FIFO_SZ);
+ 
++		/* Write cannot cross 4K boundary */
++		block_size = min_t(loff_t, to + block_size,
++				   round_up(to + 1, SZ_4K)) - to;
++
+ 		writel(to, ispi->base + FADDR);
+ 
+ 		val = readl(ispi->base + HSFSTS_CTL);
+diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c
+index 6d6e9a12150b..7ad08945cece 100644
+--- a/drivers/nvdimm/label.c
++++ b/drivers/nvdimm/label.c
+@@ -753,6 +753,17 @@ static const guid_t *to_abstraction_guid(enum nvdimm_claim_class claim_class,
+ 		return &guid_null;
+ }
+ 
++static void reap_victim(struct nd_mapping *nd_mapping,
++		struct nd_label_ent *victim)
++{
++	struct nvdimm_drvdata *ndd = to_ndd(nd_mapping);
++	u32 slot = to_slot(ndd, victim->label);
++
++	dev_dbg(ndd->dev, "free: %d\n", slot);
++	nd_label_free_slot(ndd, slot);
++	victim->label = NULL;
++}
++
+ static int __pmem_label_update(struct nd_region *nd_region,
+ 		struct nd_mapping *nd_mapping, struct nd_namespace_pmem *nspm,
+ 		int pos, unsigned long flags)
+@@ -760,9 +771,9 @@ static int __pmem_label_update(struct nd_region *nd_region,
+ 	struct nd_namespace_common *ndns = &nspm->nsio.common;
+ 	struct nd_interleave_set *nd_set = nd_region->nd_set;
+ 	struct nvdimm_drvdata *ndd = to_ndd(nd_mapping);
+-	struct nd_label_ent *label_ent, *victim = NULL;
+ 	struct nd_namespace_label *nd_label;
+ 	struct nd_namespace_index *nsindex;
++	struct nd_label_ent *label_ent;
+ 	struct nd_label_id label_id;
+ 	struct resource *res;
+ 	unsigned long *free;
+@@ -831,18 +842,10 @@ static int __pmem_label_update(struct nd_region *nd_region,
+ 	list_for_each_entry(label_ent, &nd_mapping->labels, list) {
+ 		if (!label_ent->label)
+ 			continue;
+-		if (memcmp(nspm->uuid, label_ent->label->uuid,
+-					NSLABEL_UUID_LEN) != 0)
+-			continue;
+-		victim = label_ent;
+-		list_move_tail(&victim->list, &nd_mapping->labels);
+-		break;
+-	}
+-	if (victim) {
+-		dev_dbg(ndd->dev, "free: %d\n", slot);
+-		slot = to_slot(ndd, victim->label);
+-		nd_label_free_slot(ndd, slot);
+-		victim->label = NULL;
++		if (test_and_clear_bit(ND_LABEL_REAP, &label_ent->flags)
++				|| memcmp(nspm->uuid, label_ent->label->uuid,
++					NSLABEL_UUID_LEN) == 0)
++			reap_victim(nd_mapping, label_ent);
+ 	}
+ 
+ 	/* update index */
+diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
+index e761b29f7160..df5bc2329518 100644
+--- a/drivers/nvdimm/namespace_devs.c
++++ b/drivers/nvdimm/namespace_devs.c
+@@ -1247,12 +1247,27 @@ static int namespace_update_uuid(struct nd_region *nd_region,
+ 	for (i = 0; i < nd_region->ndr_mappings; i++) {
+ 		struct nd_mapping *nd_mapping = &nd_region->mapping[i];
+ 		struct nvdimm_drvdata *ndd = to_ndd(nd_mapping);
++		struct nd_label_ent *label_ent;
+ 		struct resource *res;
+ 
+ 		for_each_dpa_resource(ndd, res)
+ 			if (strcmp(res->name, old_label_id.id) == 0)
+ 				sprintf((void *) res->name, "%s",
+ 						new_label_id.id);
++
++		mutex_lock(&nd_mapping->lock);
++		list_for_each_entry(label_ent, &nd_mapping->labels, list) {
++			struct nd_namespace_label *nd_label = label_ent->label;
++			struct nd_label_id label_id;
++
++			if (!nd_label)
++				continue;
++			nd_label_gen_id(&label_id, nd_label->uuid,
++					__le32_to_cpu(nd_label->flags));
++			if (strcmp(old_label_id.id, label_id.id) == 0)
++				set_bit(ND_LABEL_REAP, &label_ent->flags);
++		}
++		mutex_unlock(&nd_mapping->lock);
+ 	}
+ 	kfree(*old_uuid);
+  out:
+diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
+index 379bf4305e61..e8d73db13ee1 100644
+--- a/drivers/nvdimm/nd.h
++++ b/drivers/nvdimm/nd.h
+@@ -113,8 +113,12 @@ struct nd_percpu_lane {
+ 	spinlock_t lock;
+ };
+ 
++enum nd_label_flags {
++	ND_LABEL_REAP,
++};
+ struct nd_label_ent {
+ 	struct list_head list;
++	unsigned long flags;
+ 	struct nd_namespace_label *label;
+ };
+ 
+diff --git a/drivers/power/supply/axp288_charger.c b/drivers/power/supply/axp288_charger.c
+index f8c6da9277b3..00b961890a38 100644
+--- a/drivers/power/supply/axp288_charger.c
++++ b/drivers/power/supply/axp288_charger.c
+@@ -833,6 +833,10 @@ static int axp288_charger_probe(struct platform_device *pdev)
+ 	/* Register charger interrupts */
+ 	for (i = 0; i < CHRG_INTR_END; i++) {
+ 		pirq = platform_get_irq(info->pdev, i);
++		if (pirq < 0) {
++			dev_err(&pdev->dev, "Failed to get IRQ: %d\n", pirq);
++			return pirq;
++		}
+ 		info->irq[i] = regmap_irq_get_virq(info->regmap_irqc, pirq);
+ 		if (info->irq[i] < 0) {
+ 			dev_warn(&info->pdev->dev,
+diff --git a/drivers/power/supply/axp288_fuel_gauge.c b/drivers/power/supply/axp288_fuel_gauge.c
+index 084c8ba9749d..ab0b6e78ca02 100644
+--- a/drivers/power/supply/axp288_fuel_gauge.c
++++ b/drivers/power/supply/axp288_fuel_gauge.c
+@@ -695,6 +695,26 @@ intr_failed:
+  * detection reports one despite it not being there.
+  */
+ static const struct dmi_system_id axp288_fuel_gauge_blacklist[] = {
++	{
++		/* ACEPC T8 Cherry Trail Z8350 mini PC */
++		.matches = {
++			DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "To be filled by O.E.M."),
++			DMI_EXACT_MATCH(DMI_BOARD_NAME, "Cherry Trail CR"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "T8"),
++			/* also match on somewhat unique bios-version */
++			DMI_EXACT_MATCH(DMI_BIOS_VERSION, "1.000"),
++		},
++	},
++	{
++		/* ACEPC T11 Cherry Trail Z8350 mini PC */
++		.matches = {
++			DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "To be filled by O.E.M."),
++			DMI_EXACT_MATCH(DMI_BOARD_NAME, "Cherry Trail CR"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "T11"),
++			/* also match on somewhat unique bios-version */
++			DMI_EXACT_MATCH(DMI_BIOS_VERSION, "1.000"),
++		},
++	},
+ 	{
+ 		/* Intel Cherry Trail Compute Stick, Windows version */
+ 		.matches = {
+diff --git a/drivers/tty/hvc/hvc_riscv_sbi.c b/drivers/tty/hvc/hvc_riscv_sbi.c
+index 75155bde2b88..31f53fa77e4a 100644
+--- a/drivers/tty/hvc/hvc_riscv_sbi.c
++++ b/drivers/tty/hvc/hvc_riscv_sbi.c
+@@ -53,7 +53,6 @@ device_initcall(hvc_sbi_init);
+ static int __init hvc_sbi_console_init(void)
+ {
+ 	hvc_instantiate(0, 0, &hvc_sbi_ops);
+-	add_preferred_console("hvc", 0, NULL);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/tty/vt/keyboard.c b/drivers/tty/vt/keyboard.c
+index 88312c6c92cc..0617e87ab343 100644
+--- a/drivers/tty/vt/keyboard.c
++++ b/drivers/tty/vt/keyboard.c
+@@ -123,6 +123,7 @@ static const int NR_TYPES = ARRAY_SIZE(max_vals);
+ static struct input_handler kbd_handler;
+ static DEFINE_SPINLOCK(kbd_event_lock);
+ static DEFINE_SPINLOCK(led_lock);
++static DEFINE_SPINLOCK(func_buf_lock); /* guard 'func_buf' and friends */
+ static unsigned long key_down[BITS_TO_LONGS(KEY_CNT)];	/* keyboard key bitmap */
+ static unsigned char shift_down[NR_SHIFT];		/* shift state counters.. */
+ static bool dead_key_next;
+@@ -1990,11 +1991,12 @@ int vt_do_kdgkb_ioctl(int cmd, struct kbsentry __user *user_kdgkb, int perm)
+ 	char *p;
+ 	u_char *q;
+ 	u_char __user *up;
+-	int sz;
++	int sz, fnw_sz;
+ 	int delta;
+ 	char *first_free, *fj, *fnw;
+ 	int i, j, k;
+ 	int ret;
++	unsigned long flags;
+ 
+ 	if (!capable(CAP_SYS_TTY_CONFIG))
+ 		perm = 0;
+@@ -2037,7 +2039,14 @@ int vt_do_kdgkb_ioctl(int cmd, struct kbsentry __user *user_kdgkb, int perm)
+ 			goto reterr;
+ 		}
+ 
++		fnw = NULL;
++		fnw_sz = 0;
++		/* race aginst other writers */
++		again:
++		spin_lock_irqsave(&func_buf_lock, flags);
+ 		q = func_table[i];
++
++		/* fj points to the next entry after 'q' */
+ 		first_free = funcbufptr + (funcbufsize - funcbufleft);
+ 		for (j = i+1; j < MAX_NR_FUNC && !func_table[j]; j++)
+ 			;
+@@ -2045,10 +2054,12 @@ int vt_do_kdgkb_ioctl(int cmd, struct kbsentry __user *user_kdgkb, int perm)
+ 			fj = func_table[j];
+ 		else
+ 			fj = first_free;
+-
++		/* buffer usage increases by the new entry */
+ 		delta = (q ? -strlen(q) : 1) + strlen(kbs->kb_string);
++
+ 		if (delta <= funcbufleft) { 	/* it fits in current buf */
+ 		    if (j < MAX_NR_FUNC) {
++			/* make enough space for new entry at 'fj' */
+ 			memmove(fj + delta, fj, first_free - fj);
+ 			for (k = j; k < MAX_NR_FUNC; k++)
+ 			    if (func_table[k])
+@@ -2061,20 +2072,28 @@ int vt_do_kdgkb_ioctl(int cmd, struct kbsentry __user *user_kdgkb, int perm)
+ 		    sz = 256;
+ 		    while (sz < funcbufsize - funcbufleft + delta)
+ 		      sz <<= 1;
+-		    fnw = kmalloc(sz, GFP_KERNEL);
+-		    if(!fnw) {
+-		      ret = -ENOMEM;
+-		      goto reterr;
++		    if (fnw_sz != sz) {
++		      spin_unlock_irqrestore(&func_buf_lock, flags);
++		      kfree(fnw);
++		      fnw = kmalloc(sz, GFP_KERNEL);
++		      fnw_sz = sz;
++		      if (!fnw) {
++			ret = -ENOMEM;
++			goto reterr;
++		      }
++		      goto again;
+ 		    }
+ 
+ 		    if (!q)
+ 		      func_table[i] = fj;
++		    /* copy data before insertion point to new location */
+ 		    if (fj > funcbufptr)
+ 			memmove(fnw, funcbufptr, fj - funcbufptr);
+ 		    for (k = 0; k < j; k++)
+ 		      if (func_table[k])
+ 			func_table[k] = fnw + (func_table[k] - funcbufptr);
+ 
++		    /* copy data after insertion point to new location */
+ 		    if (first_free > fj) {
+ 			memmove(fnw + (fj - funcbufptr) + delta, fj, first_free - fj);
+ 			for (k = j; k < MAX_NR_FUNC; k++)
+@@ -2087,7 +2106,9 @@ int vt_do_kdgkb_ioctl(int cmd, struct kbsentry __user *user_kdgkb, int perm)
+ 		    funcbufleft = funcbufleft - delta + sz - funcbufsize;
+ 		    funcbufsize = sz;
+ 		}
++		/* finally insert item itself */
+ 		strcpy(func_table[i], kbs->kb_string);
++		spin_unlock_irqrestore(&func_buf_lock, flags);
+ 		break;
+ 	}
+ 	ret = 0;
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index b6621a2e916d..ea2f5b14ed8c 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -4152,8 +4152,6 @@ void do_blank_screen(int entering_gfx)
+ 		return;
+ 	}
+ 
+-	if (blank_state != blank_normal_wait)
+-		return;
+ 	blank_state = blank_off;
+ 
+ 	/* don't blank graphics */
+diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
+index 78556447e1d5..ef66db38cedb 100644
+--- a/fs/btrfs/backref.c
++++ b/fs/btrfs/backref.c
+@@ -1454,8 +1454,8 @@ int btrfs_find_all_roots(struct btrfs_trans_handle *trans,
+  * callers (such as fiemap) which want to know whether the extent is
+  * shared but do not need a ref count.
+  *
+- * This attempts to allocate a transaction in order to account for
+- * delayed refs, but continues on even when the alloc fails.
++ * This attempts to attach to the running transaction in order to account for
++ * delayed refs, but continues on even when no running transaction exists.
+  *
+  * Return: 0 if extent is not shared, 1 if it is shared, < 0 on error.
+  */
+@@ -1478,13 +1478,16 @@ int btrfs_check_shared(struct btrfs_root *root, u64 inum, u64 bytenr)
+ 	tmp = ulist_alloc(GFP_NOFS);
+ 	roots = ulist_alloc(GFP_NOFS);
+ 	if (!tmp || !roots) {
+-		ulist_free(tmp);
+-		ulist_free(roots);
+-		return -ENOMEM;
++		ret = -ENOMEM;
++		goto out;
+ 	}
+ 
+-	trans = btrfs_join_transaction(root);
++	trans = btrfs_attach_transaction(root);
+ 	if (IS_ERR(trans)) {
++		if (PTR_ERR(trans) != -ENOENT && PTR_ERR(trans) != -EROFS) {
++			ret = PTR_ERR(trans);
++			goto out;
++		}
+ 		trans = NULL;
+ 		down_read(&fs_info->commit_root_sem);
+ 	} else {
+@@ -1517,6 +1520,7 @@ int btrfs_check_shared(struct btrfs_root *root, u64 inum, u64 bytenr)
+ 	} else {
+ 		up_read(&fs_info->commit_root_sem);
+ 	}
++out:
+ 	ulist_free(tmp);
+ 	ulist_free(roots);
+ 	return ret;
+@@ -1906,13 +1910,19 @@ int iterate_extent_inodes(struct btrfs_fs_info *fs_info,
+ 			extent_item_objectid);
+ 
+ 	if (!search_commit_root) {
+-		trans = btrfs_join_transaction(fs_info->extent_root);
+-		if (IS_ERR(trans))
+-			return PTR_ERR(trans);
++		trans = btrfs_attach_transaction(fs_info->extent_root);
++		if (IS_ERR(trans)) {
++			if (PTR_ERR(trans) != -ENOENT &&
++			    PTR_ERR(trans) != -EROFS)
++				return PTR_ERR(trans);
++			trans = NULL;
++		}
++	}
++
++	if (trans)
+ 		btrfs_get_tree_mod_seq(fs_info, &tree_mod_seq_elem);
+-	} else {
++	else
+ 		down_read(&fs_info->commit_root_sem);
+-	}
+ 
+ 	ret = btrfs_find_all_leafs(trans, fs_info, extent_item_objectid,
+ 				   tree_mod_seq_elem.seq, &refs,
+@@ -1945,7 +1955,7 @@ int iterate_extent_inodes(struct btrfs_fs_info *fs_info,
+ 
+ 	free_leaf_list(refs);
+ out:
+-	if (!search_commit_root) {
++	if (trans) {
+ 		btrfs_put_tree_mod_seq(fs_info, &tree_mod_seq_elem);
+ 		btrfs_end_transaction(trans);
+ 	} else {
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index 5a6c39b44c84..7672932aa5b4 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -2401,6 +2401,16 @@ read_block_for_search(struct btrfs_root *root, struct btrfs_path *p,
+ 	if (tmp) {
+ 		/* first we do an atomic uptodate check */
+ 		if (btrfs_buffer_uptodate(tmp, gen, 1) > 0) {
++			/*
++			 * Do extra check for first_key, eb can be stale due to
++			 * being cached, read from scrub, or have multiple
++			 * parents (shared tree blocks).
++			 */
++			if (btrfs_verify_level_key(fs_info, tmp,
++					parent_level - 1, &first_key, gen)) {
++				free_extent_buffer(tmp);
++				return -EUCLEAN;
++			}
+ 			*eb_ret = tmp;
+ 			return 0;
+ 		}
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index 7a2a2621f0d9..9019265a2bf9 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -1316,6 +1316,12 @@ struct btrfs_root {
+ 	 * manipulation with the read-only status via SUBVOL_SETFLAGS
+ 	 */
+ 	int send_in_progress;
++	/*
++	 * Number of currently running deduplication operations that have a
++	 * destination inode belonging to this root. Protected by the lock
++	 * root_item_lock.
++	 */
++	int dedupe_in_progress;
+ 	struct btrfs_subvolume_writers *subv_writers;
+ 	atomic_t will_be_snapshotted;
+ 	atomic_t snapshot_force_cow;
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 888d72dda794..90a3c50d751b 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -414,9 +414,9 @@ static int btrfs_check_super_csum(struct btrfs_fs_info *fs_info,
+ 	return ret;
+ }
+ 
+-static int verify_level_key(struct btrfs_fs_info *fs_info,
+-			    struct extent_buffer *eb, int level,
+-			    struct btrfs_key *first_key, u64 parent_transid)
++int btrfs_verify_level_key(struct btrfs_fs_info *fs_info,
++			   struct extent_buffer *eb, int level,
++			   struct btrfs_key *first_key, u64 parent_transid)
+ {
+ 	int found_level;
+ 	struct btrfs_key found_key;
+@@ -493,8 +493,8 @@ static int btree_read_extent_buffer_pages(struct btrfs_fs_info *fs_info,
+ 			if (verify_parent_transid(io_tree, eb,
+ 						   parent_transid, 0))
+ 				ret = -EIO;
+-			else if (verify_level_key(fs_info, eb, level,
+-						  first_key, parent_transid))
++			else if (btrfs_verify_level_key(fs_info, eb, level,
++						first_key, parent_transid))
+ 				ret = -EUCLEAN;
+ 			else
+ 				break;
+@@ -1017,13 +1017,18 @@ void readahead_tree_block(struct btrfs_fs_info *fs_info, u64 bytenr)
+ {
+ 	struct extent_buffer *buf = NULL;
+ 	struct inode *btree_inode = fs_info->btree_inode;
++	int ret;
+ 
+ 	buf = btrfs_find_create_tree_block(fs_info, bytenr);
+ 	if (IS_ERR(buf))
+ 		return;
+-	read_extent_buffer_pages(&BTRFS_I(btree_inode)->io_tree,
+-				 buf, WAIT_NONE, 0);
+-	free_extent_buffer(buf);
++
++	ret = read_extent_buffer_pages(&BTRFS_I(btree_inode)->io_tree, buf,
++			WAIT_NONE, 0);
++	if (ret < 0)
++		free_extent_buffer_stale(buf);
++	else
++		free_extent_buffer(buf);
+ }
+ 
+ int reada_tree_block_flagged(struct btrfs_fs_info *fs_info, u64 bytenr,
+@@ -1043,12 +1048,12 @@ int reada_tree_block_flagged(struct btrfs_fs_info *fs_info, u64 bytenr,
+ 	ret = read_extent_buffer_pages(io_tree, buf, WAIT_PAGE_LOCK,
+ 				       mirror_num);
+ 	if (ret) {
+-		free_extent_buffer(buf);
++		free_extent_buffer_stale(buf);
+ 		return ret;
+ 	}
+ 
+ 	if (test_bit(EXTENT_BUFFER_CORRUPT, &buf->bflags)) {
+-		free_extent_buffer(buf);
++		free_extent_buffer_stale(buf);
+ 		return -EIO;
+ 	} else if (extent_buffer_uptodate(buf)) {
+ 		*eb = buf;
+@@ -1102,7 +1107,7 @@ struct extent_buffer *read_tree_block(struct btrfs_fs_info *fs_info, u64 bytenr,
+ 	ret = btree_read_extent_buffer_pages(fs_info, buf, parent_transid,
+ 					     level, first_key);
+ 	if (ret) {
+-		free_extent_buffer(buf);
++		free_extent_buffer_stale(buf);
+ 		return ERR_PTR(ret);
+ 	}
+ 	return buf;
+diff --git a/fs/btrfs/disk-io.h b/fs/btrfs/disk-io.h
+index 987a64bc0c66..67a9fe2d29c7 100644
+--- a/fs/btrfs/disk-io.h
++++ b/fs/btrfs/disk-io.h
+@@ -39,6 +39,9 @@ static inline u64 btrfs_sb_offset(int mirror)
+ struct btrfs_device;
+ struct btrfs_fs_devices;
+ 
++int btrfs_verify_level_key(struct btrfs_fs_info *fs_info,
++			   struct extent_buffer *eb, int level,
++			   struct btrfs_key *first_key, u64 parent_transid);
+ struct extent_buffer *read_tree_block(struct btrfs_fs_info *fs_info, u64 bytenr,
+ 				      u64 parent_transid, int level,
+ 				      struct btrfs_key *first_key);
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index 1b68700bc1c5..a19bbfce449e 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -11192,9 +11192,9 @@ int btrfs_error_unpin_extent_range(struct btrfs_fs_info *fs_info,
+  * held back allocations.
+  */
+ static int btrfs_trim_free_extents(struct btrfs_device *device,
+-				   u64 minlen, u64 *trimmed)
++				   struct fstrim_range *range, u64 *trimmed)
+ {
+-	u64 start = 0, len = 0;
++	u64 start = range->start, len = 0;
+ 	int ret;
+ 
+ 	*trimmed = 0;
+@@ -11237,8 +11237,8 @@ static int btrfs_trim_free_extents(struct btrfs_device *device,
+ 		if (!trans)
+ 			up_read(&fs_info->commit_root_sem);
+ 
+-		ret = find_free_dev_extent_start(trans, device, minlen, start,
+-						 &start, &len);
++		ret = find_free_dev_extent_start(trans, device, range->minlen,
++						 start, &start, &len);
+ 		if (trans) {
+ 			up_read(&fs_info->commit_root_sem);
+ 			btrfs_put_transaction(trans);
+@@ -11251,6 +11251,16 @@ static int btrfs_trim_free_extents(struct btrfs_device *device,
+ 			break;
+ 		}
+ 
++		/* If we are out of the passed range break */
++		if (start > range->start + range->len - 1) {
++			mutex_unlock(&fs_info->chunk_mutex);
++			ret = 0;
++			break;
++		}
++
++		start = max(range->start, start);
++		len = min(range->len, len);
++
+ 		ret = btrfs_issue_discard(device->bdev, start, len, &bytes);
+ 		mutex_unlock(&fs_info->chunk_mutex);
+ 
+@@ -11260,6 +11270,10 @@ static int btrfs_trim_free_extents(struct btrfs_device *device,
+ 		start += len;
+ 		*trimmed += bytes;
+ 
++		/* We've trimmed enough */
++		if (*trimmed >= range->len)
++			break;
++
+ 		if (fatal_signal_pending(current)) {
+ 			ret = -ERESTARTSYS;
+ 			break;
+@@ -11343,8 +11357,7 @@ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
+ 	mutex_lock(&fs_info->fs_devices->device_list_mutex);
+ 	devices = &fs_info->fs_devices->devices;
+ 	list_for_each_entry(device, devices, dev_list) {
+-		ret = btrfs_trim_free_extents(device, range->minlen,
+-					      &group_trimmed);
++		ret = btrfs_trim_free_extents(device, range, &group_trimmed);
+ 		if (ret) {
+ 			dev_failed++;
+ 			dev_ret = ret;
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 1d64a6b8e413..679303bf8e74 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -3275,6 +3275,19 @@ static int btrfs_extent_same(struct inode *src, u64 loff, u64 olen,
+ 	int ret;
+ 	int num_pages = PAGE_ALIGN(BTRFS_MAX_DEDUPE_LEN) >> PAGE_SHIFT;
+ 	u64 i, tail_len, chunk_count;
++	struct btrfs_root *root_dst = BTRFS_I(dst)->root;
++
++	spin_lock(&root_dst->root_item_lock);
++	if (root_dst->send_in_progress) {
++		btrfs_warn_rl(root_dst->fs_info,
++"cannot deduplicate to root %llu while send operations are using it (%d in progress)",
++			      root_dst->root_key.objectid,
++			      root_dst->send_in_progress);
++		spin_unlock(&root_dst->root_item_lock);
++		return -EAGAIN;
++	}
++	root_dst->dedupe_in_progress++;
++	spin_unlock(&root_dst->root_item_lock);
+ 
+ 	/* don't make the dst file partly checksummed */
+ 	if ((BTRFS_I(src)->flags & BTRFS_INODE_NODATASUM) !=
+@@ -3293,7 +3306,7 @@ static int btrfs_extent_same(struct inode *src, u64 loff, u64 olen,
+ 		ret = btrfs_extent_same_range(src, loff, BTRFS_MAX_DEDUPE_LEN,
+ 					      dst, dst_loff);
+ 		if (ret)
+-			return ret;
++			goto out;
+ 
+ 		loff += BTRFS_MAX_DEDUPE_LEN;
+ 		dst_loff += BTRFS_MAX_DEDUPE_LEN;
+@@ -3302,6 +3315,10 @@ static int btrfs_extent_same(struct inode *src, u64 loff, u64 olen,
+ 	if (tail_len > 0)
+ 		ret = btrfs_extent_same_range(src, loff, tail_len, dst,
+ 					      dst_loff);
++out:
++	spin_lock(&root_dst->root_item_lock);
++	root_dst->dedupe_in_progress--;
++	spin_unlock(&root_dst->root_item_lock);
+ 
+ 	return ret;
+ }
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index 7ea2d6b1f170..19b00b1668ed 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -6579,6 +6579,38 @@ commit_trans:
+ 	return btrfs_commit_transaction(trans);
+ }
+ 
++/*
++ * Make sure any existing delalloc is flushed for any root used by a send
++ * operation so that we do not miss any data and we do not race with writeback
++ * finishing and changing a tree while send is using the tree. This could
++ * happen if a subvolume is in RW mode, has delalloc, is turned to RO mode and
++ * a send operation then uses the subvolume.
++ * After flushing delalloc ensure_commit_roots_uptodate() must be called.
++ */
++static int flush_delalloc_roots(struct send_ctx *sctx)
++{
++	struct btrfs_root *root = sctx->parent_root;
++	int ret;
++	int i;
++
++	if (root) {
++		ret = btrfs_start_delalloc_snapshot(root);
++		if (ret)
++			return ret;
++		btrfs_wait_ordered_extents(root, U64_MAX, 0, U64_MAX);
++	}
++
++	for (i = 0; i < sctx->clone_roots_cnt; i++) {
++		root = sctx->clone_roots[i].root;
++		ret = btrfs_start_delalloc_snapshot(root);
++		if (ret)
++			return ret;
++		btrfs_wait_ordered_extents(root, U64_MAX, 0, U64_MAX);
++	}
++
++	return 0;
++}
++
+ static void btrfs_root_dec_send_in_progress(struct btrfs_root* root)
+ {
+ 	spin_lock(&root->root_item_lock);
+@@ -6594,6 +6626,13 @@ static void btrfs_root_dec_send_in_progress(struct btrfs_root* root)
+ 	spin_unlock(&root->root_item_lock);
+ }
+ 
++static void dedupe_in_progress_warn(const struct btrfs_root *root)
++{
++	btrfs_warn_rl(root->fs_info,
++"cannot use root %llu for send while deduplications on it are in progress (%d in progress)",
++		      root->root_key.objectid, root->dedupe_in_progress);
++}
++
+ long btrfs_ioctl_send(struct file *mnt_file, struct btrfs_ioctl_send_args *arg)
+ {
+ 	int ret = 0;
+@@ -6617,6 +6656,11 @@ long btrfs_ioctl_send(struct file *mnt_file, struct btrfs_ioctl_send_args *arg)
+ 	 * making it RW. This also protects against deletion.
+ 	 */
+ 	spin_lock(&send_root->root_item_lock);
++	if (btrfs_root_readonly(send_root) && send_root->dedupe_in_progress) {
++		dedupe_in_progress_warn(send_root);
++		spin_unlock(&send_root->root_item_lock);
++		return -EAGAIN;
++	}
+ 	send_root->send_in_progress++;
+ 	spin_unlock(&send_root->root_item_lock);
+ 
+@@ -6751,6 +6795,13 @@ long btrfs_ioctl_send(struct file *mnt_file, struct btrfs_ioctl_send_args *arg)
+ 				ret = -EPERM;
+ 				goto out;
+ 			}
++			if (clone_root->dedupe_in_progress) {
++				dedupe_in_progress_warn(clone_root);
++				spin_unlock(&clone_root->root_item_lock);
++				srcu_read_unlock(&fs_info->subvol_srcu, index);
++				ret = -EAGAIN;
++				goto out;
++			}
+ 			clone_root->send_in_progress++;
+ 			spin_unlock(&clone_root->root_item_lock);
+ 			srcu_read_unlock(&fs_info->subvol_srcu, index);
+@@ -6785,6 +6836,13 @@ long btrfs_ioctl_send(struct file *mnt_file, struct btrfs_ioctl_send_args *arg)
+ 			ret = -EPERM;
+ 			goto out;
+ 		}
++		if (sctx->parent_root->dedupe_in_progress) {
++			dedupe_in_progress_warn(sctx->parent_root);
++			spin_unlock(&sctx->parent_root->root_item_lock);
++			srcu_read_unlock(&fs_info->subvol_srcu, index);
++			ret = -EAGAIN;
++			goto out;
++		}
+ 		spin_unlock(&sctx->parent_root->root_item_lock);
+ 
+ 		srcu_read_unlock(&fs_info->subvol_srcu, index);
+@@ -6803,6 +6861,10 @@ long btrfs_ioctl_send(struct file *mnt_file, struct btrfs_ioctl_send_args *arg)
+ 			NULL);
+ 	sort_clone_roots = 1;
+ 
++	ret = flush_delalloc_roots(sctx);
++	if (ret)
++		goto out;
++
+ 	ret = ensure_commit_roots_uptodate(sctx);
+ 	if (ret)
+ 		goto out;
+diff --git a/fs/dax.c b/fs/dax.c
+index 827ee143413e..8eb3e8c2b4bd 100644
+--- a/fs/dax.c
++++ b/fs/dax.c
+@@ -1577,8 +1577,7 @@ static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
+ 		}
+ 
+ 		trace_dax_pmd_insert_mapping(inode, vmf, PMD_SIZE, pfn, entry);
+-		result = vmf_insert_pfn_pmd(vma, vmf->address, vmf->pmd, pfn,
+-					    write);
++		result = vmf_insert_pfn_pmd(vmf, pfn, write);
+ 		break;
+ 	case IOMAP_UNWRITTEN:
+ 	case IOMAP_HOLE:
+@@ -1688,8 +1687,7 @@ dax_insert_pfn_mkwrite(struct vm_fault *vmf, pfn_t pfn, unsigned int order)
+ 		ret = vmf_insert_mixed_mkwrite(vmf->vma, vmf->address, pfn);
+ #ifdef CONFIG_FS_DAX_PMD
+ 	else if (order == PMD_ORDER)
+-		ret = vmf_insert_pfn_pmd(vmf->vma, vmf->address, vmf->pmd,
+-			pfn, true);
++		ret = vmf_insert_pfn_pmd(vmf, pfn, FAULT_FLAG_WRITE);
+ #endif
+ 	else
+ 		ret = VM_FAULT_FALLBACK;
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 508a37ec9271..b8fde74ff76d 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -1665,6 +1665,8 @@ static inline void ext4_clear_state_flags(struct ext4_inode_info *ei)
+ #define EXT4_FEATURE_INCOMPAT_INLINE_DATA	0x8000 /* data in inode */
+ #define EXT4_FEATURE_INCOMPAT_ENCRYPT		0x10000
+ 
++extern void ext4_update_dynamic_rev(struct super_block *sb);
++
+ #define EXT4_FEATURE_COMPAT_FUNCS(name, flagname) \
+ static inline bool ext4_has_feature_##name(struct super_block *sb) \
+ { \
+@@ -1673,6 +1675,7 @@ static inline bool ext4_has_feature_##name(struct super_block *sb) \
+ } \
+ static inline void ext4_set_feature_##name(struct super_block *sb) \
+ { \
++	ext4_update_dynamic_rev(sb); \
+ 	EXT4_SB(sb)->s_es->s_feature_compat |= \
+ 		cpu_to_le32(EXT4_FEATURE_COMPAT_##flagname); \
+ } \
+@@ -1690,6 +1693,7 @@ static inline bool ext4_has_feature_##name(struct super_block *sb) \
+ } \
+ static inline void ext4_set_feature_##name(struct super_block *sb) \
+ { \
++	ext4_update_dynamic_rev(sb); \
+ 	EXT4_SB(sb)->s_es->s_feature_ro_compat |= \
+ 		cpu_to_le32(EXT4_FEATURE_RO_COMPAT_##flagname); \
+ } \
+@@ -1707,6 +1711,7 @@ static inline bool ext4_has_feature_##name(struct super_block *sb) \
+ } \
+ static inline void ext4_set_feature_##name(struct super_block *sb) \
+ { \
++	ext4_update_dynamic_rev(sb); \
+ 	EXT4_SB(sb)->s_es->s_feature_incompat |= \
+ 		cpu_to_le32(EXT4_FEATURE_INCOMPAT_##flagname); \
+ } \
+@@ -2675,7 +2680,6 @@ do {									\
+ 
+ #endif
+ 
+-extern void ext4_update_dynamic_rev(struct super_block *sb);
+ extern int ext4_update_compat_feature(handle_t *handle, struct super_block *sb,
+ 					__u32 compat);
+ extern int ext4_update_rocompat_feature(handle_t *handle,
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 252bbbb5a2f4..cd00b19746bd 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -1035,6 +1035,7 @@ static int ext4_ext_split(handle_t *handle, struct inode *inode,
+ 	__le32 border;
+ 	ext4_fsblk_t *ablocks = NULL; /* array of allocated blocks */
+ 	int err = 0;
++	size_t ext_size = 0;
+ 
+ 	/* make decision: where to split? */
+ 	/* FIXME: now decision is simplest: at current extent */
+@@ -1126,6 +1127,10 @@ static int ext4_ext_split(handle_t *handle, struct inode *inode,
+ 		le16_add_cpu(&neh->eh_entries, m);
+ 	}
+ 
++	/* zero out unused area in the extent block */
++	ext_size = sizeof(struct ext4_extent_header) +
++		sizeof(struct ext4_extent) * le16_to_cpu(neh->eh_entries);
++	memset(bh->b_data + ext_size, 0, inode->i_sb->s_blocksize - ext_size);
+ 	ext4_extent_block_csum_set(inode, neh);
+ 	set_buffer_uptodate(bh);
+ 	unlock_buffer(bh);
+@@ -1205,6 +1210,11 @@ static int ext4_ext_split(handle_t *handle, struct inode *inode,
+ 				sizeof(struct ext4_extent_idx) * m);
+ 			le16_add_cpu(&neh->eh_entries, m);
+ 		}
++		/* zero out unused area in the extent block */
++		ext_size = sizeof(struct ext4_extent_header) +
++		   (sizeof(struct ext4_extent) * le16_to_cpu(neh->eh_entries));
++		memset(bh->b_data + ext_size, 0,
++			inode->i_sb->s_blocksize - ext_size);
+ 		ext4_extent_block_csum_set(inode, neh);
+ 		set_buffer_uptodate(bh);
+ 		unlock_buffer(bh);
+@@ -1270,6 +1280,7 @@ static int ext4_ext_grow_indepth(handle_t *handle, struct inode *inode,
+ 	ext4_fsblk_t newblock, goal = 0;
+ 	struct ext4_super_block *es = EXT4_SB(inode->i_sb)->s_es;
+ 	int err = 0;
++	size_t ext_size = 0;
+ 
+ 	/* Try to prepend new index to old one */
+ 	if (ext_depth(inode))
+@@ -1295,9 +1306,11 @@ static int ext4_ext_grow_indepth(handle_t *handle, struct inode *inode,
+ 		goto out;
+ 	}
+ 
++	ext_size = sizeof(EXT4_I(inode)->i_data);
+ 	/* move top-level index/leaf into new block */
+-	memmove(bh->b_data, EXT4_I(inode)->i_data,
+-		sizeof(EXT4_I(inode)->i_data));
++	memmove(bh->b_data, EXT4_I(inode)->i_data, ext_size);
++	/* zero out unused area in the extent block */
++	memset(bh->b_data + ext_size, 0, inode->i_sb->s_blocksize - ext_size);
+ 
+ 	/* set size of new block */
+ 	neh = ext_block_hdr(bh);
+diff --git a/fs/ext4/file.c b/fs/ext4/file.c
+index 98ec11f69cd4..2c5baa5e8291 100644
+--- a/fs/ext4/file.c
++++ b/fs/ext4/file.c
+@@ -264,6 +264,13 @@ ext4_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ 	}
+ 
+ 	ret = __generic_file_write_iter(iocb, from);
++	/*
++	 * Unaligned direct AIO must be the only IO in flight. Otherwise
++	 * overlapping aligned IO after unaligned might result in data
++	 * corruption.
++	 */
++	if (ret == -EIOCBQUEUED && unaligned_aio)
++		ext4_unwritten_wait(inode);
+ 	inode_unlock(inode);
+ 
+ 	if (ret > 0)
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 34d7e0703cc6..878f8b5dd39f 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -5351,7 +5351,6 @@ static int ext4_do_update_inode(handle_t *handle,
+ 		err = ext4_journal_get_write_access(handle, EXT4_SB(sb)->s_sbh);
+ 		if (err)
+ 			goto out_brelse;
+-		ext4_update_dynamic_rev(sb);
+ 		ext4_set_feature_large_file(sb);
+ 		ext4_handle_sync(handle);
+ 		err = ext4_handle_dirty_super(handle, sb);
+@@ -6002,7 +6001,7 @@ int ext4_expand_extra_isize(struct inode *inode,
+ 
+ 	ext4_write_lock_xattr(inode, &no_expand);
+ 
+-	BUFFER_TRACE(iloc.bh, "get_write_access");
++	BUFFER_TRACE(iloc->bh, "get_write_access");
+ 	error = ext4_journal_get_write_access(handle, iloc->bh);
+ 	if (error) {
+ 		brelse(iloc->bh);
+diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
+index 5f24fdc140ad..53d57cdf3c4d 100644
+--- a/fs/ext4/ioctl.c
++++ b/fs/ext4/ioctl.c
+@@ -977,7 +977,7 @@ mext_out:
+ 		if (err == 0)
+ 			err = err2;
+ 		mnt_drop_write_file(filp);
+-		if (!err && (o_group > EXT4_SB(sb)->s_groups_count) &&
++		if (!err && (o_group < EXT4_SB(sb)->s_groups_count) &&
+ 		    ext4_has_group_desc_csum(sb) &&
+ 		    test_opt(sb, INIT_INODE_TABLE))
+ 			err = ext4_register_li_request(sb, o_group);
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index e2248083cdca..459450e59a88 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -1539,7 +1539,7 @@ static int mb_find_extent(struct ext4_buddy *e4b, int block,
+ 		ex->fe_len += 1 << order;
+ 	}
+ 
+-	if (ex->fe_start + ex->fe_len > (1 << (e4b->bd_blkbits + 3))) {
++	if (ex->fe_start + ex->fe_len > EXT4_CLUSTERS_PER_GROUP(e4b->bd_sb)) {
+ 		/* Should never happen! (but apparently sometimes does?!?) */
+ 		WARN_ON(1);
+ 		ext4_error(e4b->bd_sb, "corruption or bug in mb_find_extent "
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 2b928eb07fa2..03c623407648 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -871,12 +871,15 @@ static void dx_release(struct dx_frame *frames)
+ {
+ 	struct dx_root_info *info;
+ 	int i;
++	unsigned int indirect_levels;
+ 
+ 	if (frames[0].bh == NULL)
+ 		return;
+ 
+ 	info = &((struct dx_root *)frames[0].bh->b_data)->info;
+-	for (i = 0; i <= info->indirect_levels; i++) {
++	/* save local copy, "info" may be freed after brelse() */
++	indirect_levels = info->indirect_levels;
++	for (i = 0; i <= indirect_levels; i++) {
+ 		if (frames[i].bh == NULL)
+ 			break;
+ 		brelse(frames[i].bh);
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index e7ae26e36c9c..4d5c0fc9d23a 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -874,6 +874,7 @@ static int add_new_gdb(handle_t *handle, struct inode *inode,
+ 	err = ext4_handle_dirty_metadata(handle, NULL, gdb_bh);
+ 	if (unlikely(err)) {
+ 		ext4_std_error(sb, err);
++		iloc.bh = NULL;
+ 		goto errout;
+ 	}
+ 	brelse(dind);
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index b9bca7298f96..7b22c01b1cdb 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -698,7 +698,7 @@ void __ext4_abort(struct super_block *sb, const char *function,
+ 			jbd2_journal_abort(EXT4_SB(sb)->s_journal, -EIO);
+ 		save_error_info(sb, function, line);
+ 	}
+-	if (test_opt(sb, ERRORS_PANIC)) {
++	if (test_opt(sb, ERRORS_PANIC) && !system_going_down()) {
+ 		if (EXT4_SB(sb)->s_journal &&
+ 		  !(EXT4_SB(sb)->s_journal->j_flags & JBD2_REC_ERR))
+ 			return;
+@@ -2259,7 +2259,6 @@ static int ext4_setup_super(struct super_block *sb, struct ext4_super_block *es,
+ 		es->s_max_mnt_count = cpu_to_le16(EXT4_DFL_MAX_MNT_COUNT);
+ 	le16_add_cpu(&es->s_mnt_count, 1);
+ 	ext4_update_tstamp(es, s_mtime);
+-	ext4_update_dynamic_rev(sb);
+ 	if (sbi->s_journal)
+ 		ext4_set_feature_journal_needs_recovery(sb);
+ 
+@@ -3514,6 +3513,37 @@ int ext4_calculate_overhead(struct super_block *sb)
+ 	return 0;
+ }
+ 
++static void ext4_clamp_want_extra_isize(struct super_block *sb)
++{
++	struct ext4_sb_info *sbi = EXT4_SB(sb);
++	struct ext4_super_block *es = sbi->s_es;
++
++	/* determine the minimum size of new large inodes, if present */
++	if (sbi->s_inode_size > EXT4_GOOD_OLD_INODE_SIZE &&
++	    sbi->s_want_extra_isize == 0) {
++		sbi->s_want_extra_isize = sizeof(struct ext4_inode) -
++						     EXT4_GOOD_OLD_INODE_SIZE;
++		if (ext4_has_feature_extra_isize(sb)) {
++			if (sbi->s_want_extra_isize <
++			    le16_to_cpu(es->s_want_extra_isize))
++				sbi->s_want_extra_isize =
++					le16_to_cpu(es->s_want_extra_isize);
++			if (sbi->s_want_extra_isize <
++			    le16_to_cpu(es->s_min_extra_isize))
++				sbi->s_want_extra_isize =
++					le16_to_cpu(es->s_min_extra_isize);
++		}
++	}
++	/* Check if enough inode space is available */
++	if (EXT4_GOOD_OLD_INODE_SIZE + sbi->s_want_extra_isize >
++							sbi->s_inode_size) {
++		sbi->s_want_extra_isize = sizeof(struct ext4_inode) -
++						       EXT4_GOOD_OLD_INODE_SIZE;
++		ext4_msg(sb, KERN_INFO,
++			 "required extra inode space not available");
++	}
++}
++
+ static void ext4_set_resv_clusters(struct super_block *sb)
+ {
+ 	ext4_fsblk_t resv_clusters;
+@@ -4239,7 +4269,7 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ 				 "data=, fs mounted w/o journal");
+ 			goto failed_mount_wq;
+ 		}
+-		sbi->s_def_mount_opt &= EXT4_MOUNT_JOURNAL_CHECKSUM;
++		sbi->s_def_mount_opt &= ~EXT4_MOUNT_JOURNAL_CHECKSUM;
+ 		clear_opt(sb, JOURNAL_CHECKSUM);
+ 		clear_opt(sb, DATA_FLAGS);
+ 		sbi->s_journal = NULL;
+@@ -4388,30 +4418,7 @@ no_journal:
+ 	} else if (ret)
+ 		goto failed_mount4a;
+ 
+-	/* determine the minimum size of new large inodes, if present */
+-	if (sbi->s_inode_size > EXT4_GOOD_OLD_INODE_SIZE &&
+-	    sbi->s_want_extra_isize == 0) {
+-		sbi->s_want_extra_isize = sizeof(struct ext4_inode) -
+-						     EXT4_GOOD_OLD_INODE_SIZE;
+-		if (ext4_has_feature_extra_isize(sb)) {
+-			if (sbi->s_want_extra_isize <
+-			    le16_to_cpu(es->s_want_extra_isize))
+-				sbi->s_want_extra_isize =
+-					le16_to_cpu(es->s_want_extra_isize);
+-			if (sbi->s_want_extra_isize <
+-			    le16_to_cpu(es->s_min_extra_isize))
+-				sbi->s_want_extra_isize =
+-					le16_to_cpu(es->s_min_extra_isize);
+-		}
+-	}
+-	/* Check if enough inode space is available */
+-	if (EXT4_GOOD_OLD_INODE_SIZE + sbi->s_want_extra_isize >
+-							sbi->s_inode_size) {
+-		sbi->s_want_extra_isize = sizeof(struct ext4_inode) -
+-						       EXT4_GOOD_OLD_INODE_SIZE;
+-		ext4_msg(sb, KERN_INFO, "required extra inode space not"
+-			 "available");
+-	}
++	ext4_clamp_want_extra_isize(sb);
+ 
+ 	ext4_set_resv_clusters(sb);
+ 
+@@ -5195,6 +5202,8 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ 		goto restore_opts;
+ 	}
+ 
++	ext4_clamp_want_extra_isize(sb);
++
+ 	if ((old_opts.s_mount_opt & EXT4_MOUNT_JOURNAL_CHECKSUM) ^
+ 	    test_opt(sb, JOURNAL_CHECKSUM)) {
+ 		ext4_msg(sb, KERN_ERR, "changing journal_checksum "
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index dc82e7757f67..491f9ee4040e 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -1696,7 +1696,7 @@ static int ext4_xattr_set_entry(struct ext4_xattr_info *i,
+ 
+ 	/* No failures allowed past this point. */
+ 
+-	if (!s->not_found && here->e_value_size && here->e_value_offs) {
++	if (!s->not_found && here->e_value_size && !here->e_value_inum) {
+ 		/* Remove the old value. */
+ 		void *first_val = s->base + min_offs;
+ 		size_t offs = le16_to_cpu(here->e_value_offs);
+diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
+index 36855c1f8daf..b16645b417d9 100644
+--- a/fs/fs-writeback.c
++++ b/fs/fs-writeback.c
+@@ -523,8 +523,6 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
+ 
+ 	isw->inode = inode;
+ 
+-	atomic_inc(&isw_nr_in_flight);
+-
+ 	/*
+ 	 * In addition to synchronizing among switchers, I_WB_SWITCH tells
+ 	 * the RCU protected stat update paths to grab the i_page
+@@ -532,6 +530,9 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
+ 	 * Let's continue after I_WB_SWITCH is guaranteed to be visible.
+ 	 */
+ 	call_rcu(&isw->rcu_head, inode_switch_wbs_rcu_fn);
++
++	atomic_inc(&isw_nr_in_flight);
++
+ 	goto out_unlock;
+ 
+ out_free:
+@@ -901,7 +902,11 @@ restart:
+ void cgroup_writeback_umount(void)
+ {
+ 	if (atomic_read(&isw_nr_in_flight)) {
+-		synchronize_rcu();
++		/*
++		 * Use rcu_barrier() to wait for all pending callbacks to
++		 * ensure that all in-flight wb switches are in the workqueue.
++		 */
++		rcu_barrier();
+ 		flush_workqueue(isw_wq);
+ 	}
+ }
+diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
+index a3a3d256fb0e..7a24f91af29e 100644
+--- a/fs/hugetlbfs/inode.c
++++ b/fs/hugetlbfs/inode.c
+@@ -426,9 +426,7 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart,
+ 			u32 hash;
+ 
+ 			index = page->index;
+-			hash = hugetlb_fault_mutex_hash(h, current->mm,
+-							&pseudo_vma,
+-							mapping, index, 0);
++			hash = hugetlb_fault_mutex_hash(h, mapping, index, 0);
+ 			mutex_lock(&hugetlb_fault_mutex_table[hash]);
+ 
+ 			/*
+@@ -625,8 +623,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
+ 		addr = index * hpage_size;
+ 
+ 		/* mutex taken here, fault path and hole punch */
+-		hash = hugetlb_fault_mutex_hash(h, mm, &pseudo_vma, mapping,
+-						index, addr);
++		hash = hugetlb_fault_mutex_hash(h, mapping, index, addr);
+ 		mutex_lock(&hugetlb_fault_mutex_table[hash]);
+ 
+ 		/* See if already present in mapping to avoid alloc/free */
+diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
+index 88f2a49338a1..e9cf88f0bc29 100644
+--- a/fs/jbd2/journal.c
++++ b/fs/jbd2/journal.c
+@@ -1366,6 +1366,10 @@ static int jbd2_write_superblock(journal_t *journal, int write_flags)
+ 	journal_superblock_t *sb = journal->j_superblock;
+ 	int ret;
+ 
++	/* Buffer got discarded which means block device got invalidated */
++	if (!buffer_mapped(bh))
++		return -EIO;
++
+ 	trace_jbd2_write_superblock(journal, write_flags);
+ 	if (!(journal->j_flags & JBD2_BARRIER))
+ 		write_flags &= ~(REQ_FUA | REQ_PREFLUSH);
+@@ -2385,22 +2389,19 @@ static struct kmem_cache *jbd2_journal_head_cache;
+ static atomic_t nr_journal_heads = ATOMIC_INIT(0);
+ #endif
+ 
+-static int jbd2_journal_init_journal_head_cache(void)
++static int __init jbd2_journal_init_journal_head_cache(void)
+ {
+-	int retval;
+-
+-	J_ASSERT(jbd2_journal_head_cache == NULL);
++	J_ASSERT(!jbd2_journal_head_cache);
+ 	jbd2_journal_head_cache = kmem_cache_create("jbd2_journal_head",
+ 				sizeof(struct journal_head),
+ 				0,		/* offset */
+ 				SLAB_TEMPORARY | SLAB_TYPESAFE_BY_RCU,
+ 				NULL);		/* ctor */
+-	retval = 0;
+ 	if (!jbd2_journal_head_cache) {
+-		retval = -ENOMEM;
+ 		printk(KERN_EMERG "JBD2: no memory for journal_head cache\n");
++		return -ENOMEM;
+ 	}
+-	return retval;
++	return 0;
+ }
+ 
+ static void jbd2_journal_destroy_journal_head_cache(void)
+@@ -2646,28 +2647,38 @@ static void __exit jbd2_remove_jbd_stats_proc_entry(void)
+ 
+ struct kmem_cache *jbd2_handle_cache, *jbd2_inode_cache;
+ 
++static int __init jbd2_journal_init_inode_cache(void)
++{
++	J_ASSERT(!jbd2_inode_cache);
++	jbd2_inode_cache = KMEM_CACHE(jbd2_inode, 0);
++	if (!jbd2_inode_cache) {
++		pr_emerg("JBD2: failed to create inode cache\n");
++		return -ENOMEM;
++	}
++	return 0;
++}
++
+ static int __init jbd2_journal_init_handle_cache(void)
+ {
++	J_ASSERT(!jbd2_handle_cache);
+ 	jbd2_handle_cache = KMEM_CACHE(jbd2_journal_handle, SLAB_TEMPORARY);
+-	if (jbd2_handle_cache == NULL) {
++	if (!jbd2_handle_cache) {
+ 		printk(KERN_EMERG "JBD2: failed to create handle cache\n");
+ 		return -ENOMEM;
+ 	}
+-	jbd2_inode_cache = KMEM_CACHE(jbd2_inode, 0);
+-	if (jbd2_inode_cache == NULL) {
+-		printk(KERN_EMERG "JBD2: failed to create inode cache\n");
+-		kmem_cache_destroy(jbd2_handle_cache);
+-		return -ENOMEM;
+-	}
+ 	return 0;
+ }
+ 
++static void jbd2_journal_destroy_inode_cache(void)
++{
++	kmem_cache_destroy(jbd2_inode_cache);
++	jbd2_inode_cache = NULL;
++}
++
+ static void jbd2_journal_destroy_handle_cache(void)
+ {
+ 	kmem_cache_destroy(jbd2_handle_cache);
+ 	jbd2_handle_cache = NULL;
+-	kmem_cache_destroy(jbd2_inode_cache);
+-	jbd2_inode_cache = NULL;
+ }
+ 
+ /*
+@@ -2678,11 +2689,15 @@ static int __init journal_init_caches(void)
+ {
+ 	int ret;
+ 
+-	ret = jbd2_journal_init_revoke_caches();
++	ret = jbd2_journal_init_revoke_record_cache();
++	if (ret == 0)
++		ret = jbd2_journal_init_revoke_table_cache();
+ 	if (ret == 0)
+ 		ret = jbd2_journal_init_journal_head_cache();
+ 	if (ret == 0)
+ 		ret = jbd2_journal_init_handle_cache();
++	if (ret == 0)
++		ret = jbd2_journal_init_inode_cache();
+ 	if (ret == 0)
+ 		ret = jbd2_journal_init_transaction_cache();
+ 	return ret;
+@@ -2690,9 +2705,11 @@ static int __init journal_init_caches(void)
+ 
+ static void jbd2_journal_destroy_caches(void)
+ {
+-	jbd2_journal_destroy_revoke_caches();
++	jbd2_journal_destroy_revoke_record_cache();
++	jbd2_journal_destroy_revoke_table_cache();
+ 	jbd2_journal_destroy_journal_head_cache();
+ 	jbd2_journal_destroy_handle_cache();
++	jbd2_journal_destroy_inode_cache();
+ 	jbd2_journal_destroy_transaction_cache();
+ 	jbd2_journal_destroy_slabs();
+ }
+diff --git a/fs/jbd2/revoke.c b/fs/jbd2/revoke.c
+index a1143e57a718..69b9bc329964 100644
+--- a/fs/jbd2/revoke.c
++++ b/fs/jbd2/revoke.c
+@@ -178,33 +178,41 @@ static struct jbd2_revoke_record_s *find_revoke_record(journal_t *journal,
+ 	return NULL;
+ }
+ 
+-void jbd2_journal_destroy_revoke_caches(void)
++void jbd2_journal_destroy_revoke_record_cache(void)
+ {
+ 	kmem_cache_destroy(jbd2_revoke_record_cache);
+ 	jbd2_revoke_record_cache = NULL;
++}
++
++void jbd2_journal_destroy_revoke_table_cache(void)
++{
+ 	kmem_cache_destroy(jbd2_revoke_table_cache);
+ 	jbd2_revoke_table_cache = NULL;
+ }
+ 
+-int __init jbd2_journal_init_revoke_caches(void)
++int __init jbd2_journal_init_revoke_record_cache(void)
+ {
+ 	J_ASSERT(!jbd2_revoke_record_cache);
+-	J_ASSERT(!jbd2_revoke_table_cache);
+-
+ 	jbd2_revoke_record_cache = KMEM_CACHE(jbd2_revoke_record_s,
+ 					SLAB_HWCACHE_ALIGN|SLAB_TEMPORARY);
+-	if (!jbd2_revoke_record_cache)
+-		goto record_cache_failure;
+ 
++	if (!jbd2_revoke_record_cache) {
++		pr_emerg("JBD2: failed to create revoke_record cache\n");
++		return -ENOMEM;
++	}
++	return 0;
++}
++
++int __init jbd2_journal_init_revoke_table_cache(void)
++{
++	J_ASSERT(!jbd2_revoke_table_cache);
+ 	jbd2_revoke_table_cache = KMEM_CACHE(jbd2_revoke_table_s,
+ 					     SLAB_TEMPORARY);
+-	if (!jbd2_revoke_table_cache)
+-		goto table_cache_failure;
+-	return 0;
+-table_cache_failure:
+-	jbd2_journal_destroy_revoke_caches();
+-record_cache_failure:
++	if (!jbd2_revoke_table_cache) {
++		pr_emerg("JBD2: failed to create revoke_table cache\n");
+ 		return -ENOMEM;
++	}
++	return 0;
+ }
+ 
+ static struct jbd2_revoke_table_s *jbd2_journal_init_revoke_table(int hash_size)
+diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
+index f0d8dabe1ff5..e9dded268a9b 100644
+--- a/fs/jbd2/transaction.c
++++ b/fs/jbd2/transaction.c
+@@ -42,9 +42,11 @@ int __init jbd2_journal_init_transaction_cache(void)
+ 					0,
+ 					SLAB_HWCACHE_ALIGN|SLAB_TEMPORARY,
+ 					NULL);
+-	if (transaction_cache)
+-		return 0;
+-	return -ENOMEM;
++	if (!transaction_cache) {
++		pr_emerg("JBD2: failed to create transaction cache\n");
++		return -ENOMEM;
++	}
++	return 0;
+ }
+ 
+ void jbd2_journal_destroy_transaction_cache(void)
+diff --git a/fs/ocfs2/export.c b/fs/ocfs2/export.c
+index 4bf8d5854b27..af2888d23de3 100644
+--- a/fs/ocfs2/export.c
++++ b/fs/ocfs2/export.c
+@@ -148,16 +148,24 @@ static struct dentry *ocfs2_get_parent(struct dentry *child)
+ 	u64 blkno;
+ 	struct dentry *parent;
+ 	struct inode *dir = d_inode(child);
++	int set;
+ 
+ 	trace_ocfs2_get_parent(child, child->d_name.len, child->d_name.name,
+ 			       (unsigned long long)OCFS2_I(dir)->ip_blkno);
+ 
++	status = ocfs2_nfs_sync_lock(OCFS2_SB(dir->i_sb), 1);
++	if (status < 0) {
++		mlog(ML_ERROR, "getting nfs sync lock(EX) failed %d\n", status);
++		parent = ERR_PTR(status);
++		goto bail;
++	}
++
+ 	status = ocfs2_inode_lock(dir, NULL, 0);
+ 	if (status < 0) {
+ 		if (status != -ENOENT)
+ 			mlog_errno(status);
+ 		parent = ERR_PTR(status);
+-		goto bail;
++		goto unlock_nfs_sync;
+ 	}
+ 
+ 	status = ocfs2_lookup_ino_from_name(dir, "..", 2, &blkno);
+@@ -166,11 +174,31 @@ static struct dentry *ocfs2_get_parent(struct dentry *child)
+ 		goto bail_unlock;
+ 	}
+ 
++	status = ocfs2_test_inode_bit(OCFS2_SB(dir->i_sb), blkno, &set);
++	if (status < 0) {
++		if (status == -EINVAL) {
++			status = -ESTALE;
++		} else
++			mlog(ML_ERROR, "test inode bit failed %d\n", status);
++		parent = ERR_PTR(status);
++		goto bail_unlock;
++	}
++
++	trace_ocfs2_get_dentry_test_bit(status, set);
++	if (!set) {
++		status = -ESTALE;
++		parent = ERR_PTR(status);
++		goto bail_unlock;
++	}
++
+ 	parent = d_obtain_alias(ocfs2_iget(OCFS2_SB(dir->i_sb), blkno, 0, 0));
+ 
+ bail_unlock:
+ 	ocfs2_inode_unlock(dir, 0);
+ 
++unlock_nfs_sync:
++	ocfs2_nfs_sync_unlock(OCFS2_SB(dir->i_sb), 1);
++
+ bail:
+ 	trace_ocfs2_get_parent_end(parent);
+ 
+diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
+index 381e872bfde0..7cd5c150c21d 100644
+--- a/include/linux/huge_mm.h
++++ b/include/linux/huge_mm.h
+@@ -47,10 +47,8 @@ extern bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
+ extern int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+ 			unsigned long addr, pgprot_t newprot,
+ 			int prot_numa);
+-vm_fault_t vmf_insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
+-			pmd_t *pmd, pfn_t pfn, bool write);
+-vm_fault_t vmf_insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
+-			pud_t *pud, pfn_t pfn, bool write);
++vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write);
++vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write);
+ enum transparent_hugepage_flag {
+ 	TRANSPARENT_HUGEPAGE_FLAG,
+ 	TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
+diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
+index 087fd5f48c91..d34112fb3d52 100644
+--- a/include/linux/hugetlb.h
++++ b/include/linux/hugetlb.h
+@@ -123,9 +123,7 @@ void move_hugetlb_state(struct page *oldpage, struct page *newpage, int reason);
+ void free_huge_page(struct page *page);
+ void hugetlb_fix_reserve_counts(struct inode *inode);
+ extern struct mutex *hugetlb_fault_mutex_table;
+-u32 hugetlb_fault_mutex_hash(struct hstate *h, struct mm_struct *mm,
+-				struct vm_area_struct *vma,
+-				struct address_space *mapping,
++u32 hugetlb_fault_mutex_hash(struct hstate *h, struct address_space *mapping,
+ 				pgoff_t idx, unsigned long address);
+ 
+ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud);
+diff --git a/include/linux/jbd2.h b/include/linux/jbd2.h
+index 0f919d5fe84f..2cf6e04b08fc 100644
+--- a/include/linux/jbd2.h
++++ b/include/linux/jbd2.h
+@@ -1318,7 +1318,7 @@ extern void		__wait_on_journal (journal_t *);
+ 
+ /* Transaction cache support */
+ extern void jbd2_journal_destroy_transaction_cache(void);
+-extern int  jbd2_journal_init_transaction_cache(void);
++extern int __init jbd2_journal_init_transaction_cache(void);
+ extern void jbd2_journal_free_transaction(transaction_t *);
+ 
+ /*
+@@ -1446,8 +1446,10 @@ static inline void jbd2_free_inode(struct jbd2_inode *jinode)
+ /* Primary revoke support */
+ #define JOURNAL_REVOKE_DEFAULT_HASH 256
+ extern int	   jbd2_journal_init_revoke(journal_t *, int);
+-extern void	   jbd2_journal_destroy_revoke_caches(void);
+-extern int	   jbd2_journal_init_revoke_caches(void);
++extern void	   jbd2_journal_destroy_revoke_record_cache(void);
++extern void	   jbd2_journal_destroy_revoke_table_cache(void);
++extern int __init jbd2_journal_init_revoke_record_cache(void);
++extern int __init jbd2_journal_init_revoke_table_cache(void);
+ 
+ extern void	   jbd2_journal_destroy_revoke(journal_t *);
+ extern int	   jbd2_journal_revoke (handle_t *, unsigned long long, struct buffer_head *);
+diff --git a/include/linux/mfd/da9063/registers.h b/include/linux/mfd/da9063/registers.h
+index 5d42859cb441..844fc2973392 100644
+--- a/include/linux/mfd/da9063/registers.h
++++ b/include/linux/mfd/da9063/registers.h
+@@ -215,9 +215,9 @@
+ 
+ /* DA9063 Configuration registers */
+ /* OTP */
+-#define	DA9063_REG_OPT_COUNT		0x101
+-#define	DA9063_REG_OPT_ADDR		0x102
+-#define	DA9063_REG_OPT_DATA		0x103
++#define	DA9063_REG_OTP_CONT		0x101
++#define	DA9063_REG_OTP_ADDR		0x102
++#define	DA9063_REG_OTP_DATA		0x103
+ 
+ /* Customer Trim and Configuration */
+ #define	DA9063_REG_T_OFFSET		0x104
+diff --git a/include/linux/mfd/max77620.h b/include/linux/mfd/max77620.h
+index ad2a9a852aea..b4fd5a7c2aaa 100644
+--- a/include/linux/mfd/max77620.h
++++ b/include/linux/mfd/max77620.h
+@@ -136,8 +136,8 @@
+ #define MAX77620_FPS_PERIOD_MIN_US		40
+ #define MAX20024_FPS_PERIOD_MIN_US		20
+ 
+-#define MAX77620_FPS_PERIOD_MAX_US		2560
+-#define MAX20024_FPS_PERIOD_MAX_US		5120
++#define MAX20024_FPS_PERIOD_MAX_US		2560
++#define MAX77620_FPS_PERIOD_MAX_US		5120
+ 
+ #define MAX77620_REG_FPS_GPIO1			0x54
+ #define MAX77620_REG_FPS_GPIO2			0x55
+diff --git a/kernel/fork.c b/kernel/fork.c
+index b69248e6f0e0..95fd41e92031 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -953,6 +953,15 @@ static void mm_init_aio(struct mm_struct *mm)
+ #endif
+ }
+ 
++static __always_inline void mm_clear_owner(struct mm_struct *mm,
++					   struct task_struct *p)
++{
++#ifdef CONFIG_MEMCG
++	if (mm->owner == p)
++		WRITE_ONCE(mm->owner, NULL);
++#endif
++}
++
+ static void mm_init_owner(struct mm_struct *mm, struct task_struct *p)
+ {
+ #ifdef CONFIG_MEMCG
+@@ -1332,6 +1341,7 @@ static struct mm_struct *dup_mm(struct task_struct *tsk)
+ free_pt:
+ 	/* don't put binfmt in mmput, we haven't got module yet */
+ 	mm->binfmt = NULL;
++	mm_init_owner(mm, NULL);
+ 	mmput(mm);
+ 
+ fail_nomem:
+@@ -1663,6 +1673,21 @@ static inline void rcu_copy_process(struct task_struct *p)
+ #endif /* #ifdef CONFIG_TASKS_RCU */
+ }
+ 
++static void __delayed_free_task(struct rcu_head *rhp)
++{
++	struct task_struct *tsk = container_of(rhp, struct task_struct, rcu);
++
++	free_task(tsk);
++}
++
++static __always_inline void delayed_free_task(struct task_struct *tsk)
++{
++	if (IS_ENABLED(CONFIG_MEMCG))
++		call_rcu(&tsk->rcu, __delayed_free_task);
++	else
++		free_task(tsk);
++}
++
+ /*
+  * This creates a new process as a copy of the old one,
+  * but does not actually start it yet.
+@@ -2124,8 +2149,10 @@ bad_fork_cleanup_io:
+ bad_fork_cleanup_namespaces:
+ 	exit_task_namespaces(p);
+ bad_fork_cleanup_mm:
+-	if (p->mm)
++	if (p->mm) {
++		mm_clear_owner(p->mm, p);
+ 		mmput(p->mm);
++	}
+ bad_fork_cleanup_signal:
+ 	if (!(clone_flags & CLONE_THREAD))
+ 		free_signal_struct(p->signal);
+@@ -2156,7 +2183,7 @@ bad_fork_cleanup_count:
+ bad_fork_free:
+ 	p->state = TASK_DEAD;
+ 	put_task_stack(p);
+-	free_task(p);
++	delayed_free_task(p);
+ fork_out:
+ 	spin_lock_irq(&current->sighand->siglock);
+ 	hlist_del_init(&delayed.node);
+diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
+index 50d9af615dc4..115860164c36 100644
+--- a/kernel/locking/rwsem-xadd.c
++++ b/kernel/locking/rwsem-xadd.c
+@@ -130,6 +130,7 @@ static void __rwsem_mark_wake(struct rw_semaphore *sem,
+ {
+ 	struct rwsem_waiter *waiter, *tmp;
+ 	long oldcount, woken = 0, adjustment = 0;
++	struct list_head wlist;
+ 
+ 	/*
+ 	 * Take a peek at the queue head waiter such that we can determine
+@@ -188,18 +189,42 @@ static void __rwsem_mark_wake(struct rw_semaphore *sem,
+ 	 * of the queue. We know that woken will be at least 1 as we accounted
+ 	 * for above. Note we increment the 'active part' of the count by the
+ 	 * number of readers before waking any processes up.
++	 *
++	 * We have to do wakeup in 2 passes to prevent the possibility that
++	 * the reader count may be decremented before it is incremented. It
++	 * is because the to-be-woken waiter may not have slept yet. So it
++	 * may see waiter->task got cleared, finish its critical section and
++	 * do an unlock before the reader count increment.
++	 *
++	 * 1) Collect the read-waiters in a separate list, count them and
++	 *    fully increment the reader count in rwsem.
++	 * 2) For each waiters in the new list, clear waiter->task and
++	 *    put them into wake_q to be woken up later.
+ 	 */
+-	list_for_each_entry_safe(waiter, tmp, &sem->wait_list, list) {
+-		struct task_struct *tsk;
+-
++	list_for_each_entry(waiter, &sem->wait_list, list) {
+ 		if (waiter->type == RWSEM_WAITING_FOR_WRITE)
+ 			break;
+ 
+ 		woken++;
+-		tsk = waiter->task;
++	}
++	list_cut_before(&wlist, &sem->wait_list, &waiter->list);
++
++	adjustment = woken * RWSEM_ACTIVE_READ_BIAS - adjustment;
++	if (list_empty(&sem->wait_list)) {
++		/* hit end of list above */
++		adjustment -= RWSEM_WAITING_BIAS;
++	}
++
++	if (adjustment)
++		atomic_long_add(adjustment, &sem->count);
++
++	/* 2nd pass */
++	list_for_each_entry_safe(waiter, tmp, &wlist, list) {
++		struct task_struct *tsk;
+ 
++		tsk = waiter->task;
+ 		get_task_struct(tsk);
+-		list_del(&waiter->list);
++
+ 		/*
+ 		 * Ensure calling get_task_struct() before setting the reader
+ 		 * waiter to nil such that rwsem_down_read_failed() cannot
+@@ -215,15 +240,6 @@ static void __rwsem_mark_wake(struct rw_semaphore *sem,
+ 		/* wake_q_add() already take the task ref */
+ 		put_task_struct(tsk);
+ 	}
+-
+-	adjustment = woken * RWSEM_ACTIVE_READ_BIAS - adjustment;
+-	if (list_empty(&sem->wait_list)) {
+-		/* hit end of list above */
+-		adjustment -= RWSEM_WAITING_BIAS;
+-	}
+-
+-	if (adjustment)
+-		atomic_long_add(adjustment, &sem->count);
+ }
+ 
+ /*
+diff --git a/lib/iov_iter.c b/lib/iov_iter.c
+index a0d1cd88f903..b396d328a764 100644
+--- a/lib/iov_iter.c
++++ b/lib/iov_iter.c
+@@ -861,8 +861,21 @@ EXPORT_SYMBOL(_copy_from_iter_full_nocache);
+ 
+ static inline bool page_copy_sane(struct page *page, size_t offset, size_t n)
+ {
+-	struct page *head = compound_head(page);
+-	size_t v = n + offset + page_address(page) - page_address(head);
++	struct page *head;
++	size_t v = n + offset;
++
++	/*
++	 * The general case needs to access the page order in order
++	 * to compute the page size.
++	 * However, we mostly deal with order-0 pages and thus can
++	 * avoid a possible cache line miss for requests that fit all
++	 * page orders.
++	 */
++	if (n <= v && v <= PAGE_SIZE)
++		return true;
++
++	head = compound_head(page);
++	v += (page - head) << PAGE_SHIFT;
+ 
+ 	if (likely(n <= v && v <= (PAGE_SIZE << compound_order(head))))
+ 		return true;
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 8b03c698f86e..010051a07a64 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -791,11 +791,13 @@ out_unlock:
+ 		pte_free(mm, pgtable);
+ }
+ 
+-vm_fault_t vmf_insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
+-			pmd_t *pmd, pfn_t pfn, bool write)
++vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write)
+ {
++	unsigned long addr = vmf->address & PMD_MASK;
++	struct vm_area_struct *vma = vmf->vma;
+ 	pgprot_t pgprot = vma->vm_page_prot;
+ 	pgtable_t pgtable = NULL;
++
+ 	/*
+ 	 * If we had pmd_special, we could avoid all these restrictions,
+ 	 * but we need to be consistent with PTEs and architectures that
+@@ -818,7 +820,7 @@ vm_fault_t vmf_insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
+ 
+ 	track_pfn_insert(vma, &pgprot, pfn);
+ 
+-	insert_pfn_pmd(vma, addr, pmd, pfn, pgprot, write, pgtable);
++	insert_pfn_pmd(vma, addr, vmf->pmd, pfn, pgprot, write, pgtable);
+ 	return VM_FAULT_NOPAGE;
+ }
+ EXPORT_SYMBOL_GPL(vmf_insert_pfn_pmd);
+@@ -867,10 +869,12 @@ out_unlock:
+ 	spin_unlock(ptl);
+ }
+ 
+-vm_fault_t vmf_insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
+-			pud_t *pud, pfn_t pfn, bool write)
++vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write)
+ {
++	unsigned long addr = vmf->address & PUD_MASK;
++	struct vm_area_struct *vma = vmf->vma;
+ 	pgprot_t pgprot = vma->vm_page_prot;
++
+ 	/*
+ 	 * If we had pud_special, we could avoid all these restrictions,
+ 	 * but we need to be consistent with PTEs and architectures that
+@@ -887,7 +891,7 @@ vm_fault_t vmf_insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
+ 
+ 	track_pfn_insert(vma, &pgprot, pfn);
+ 
+-	insert_pfn_pud(vma, addr, pud, pfn, pgprot, write);
++	insert_pfn_pud(vma, addr, vmf->pud, pfn, pgprot, write);
+ 	return VM_FAULT_NOPAGE;
+ }
+ EXPORT_SYMBOL_GPL(vmf_insert_pfn_pud);
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index c220315dc533..c161069bfdbc 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -1573,8 +1573,9 @@ static struct page *alloc_surplus_huge_page(struct hstate *h, gfp_t gfp_mask,
+ 	 */
+ 	if (h->surplus_huge_pages >= h->nr_overcommit_huge_pages) {
+ 		SetPageHugeTemporary(page);
++		spin_unlock(&hugetlb_lock);
+ 		put_page(page);
+-		page = NULL;
++		return NULL;
+ 	} else {
+ 		h->surplus_huge_pages++;
+ 		h->surplus_huge_pages_node[page_to_nid(page)]++;
+@@ -3776,8 +3777,7 @@ retry:
+ 			 * handling userfault.  Reacquire after handling
+ 			 * fault to make calling code simpler.
+ 			 */
+-			hash = hugetlb_fault_mutex_hash(h, mm, vma, mapping,
+-							idx, haddr);
++			hash = hugetlb_fault_mutex_hash(h, mapping, idx, haddr);
+ 			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
+ 			ret = handle_userfault(&vmf, VM_UFFD_MISSING);
+ 			mutex_lock(&hugetlb_fault_mutex_table[hash]);
+@@ -3885,21 +3885,14 @@ backout_unlocked:
+ }
+ 
+ #ifdef CONFIG_SMP
+-u32 hugetlb_fault_mutex_hash(struct hstate *h, struct mm_struct *mm,
+-			    struct vm_area_struct *vma,
+-			    struct address_space *mapping,
++u32 hugetlb_fault_mutex_hash(struct hstate *h, struct address_space *mapping,
+ 			    pgoff_t idx, unsigned long address)
+ {
+ 	unsigned long key[2];
+ 	u32 hash;
+ 
+-	if (vma->vm_flags & VM_SHARED) {
+-		key[0] = (unsigned long) mapping;
+-		key[1] = idx;
+-	} else {
+-		key[0] = (unsigned long) mm;
+-		key[1] = address >> huge_page_shift(h);
+-	}
++	key[0] = (unsigned long) mapping;
++	key[1] = idx;
+ 
+ 	hash = jhash2((u32 *)&key, sizeof(key)/sizeof(u32), 0);
+ 
+@@ -3910,9 +3903,7 @@ u32 hugetlb_fault_mutex_hash(struct hstate *h, struct mm_struct *mm,
+  * For uniprocesor systems we always use a single mutex, so just
+  * return 0 and avoid the hashing overhead.
+  */
+-u32 hugetlb_fault_mutex_hash(struct hstate *h, struct mm_struct *mm,
+-			    struct vm_area_struct *vma,
+-			    struct address_space *mapping,
++u32 hugetlb_fault_mutex_hash(struct hstate *h, struct address_space *mapping,
+ 			    pgoff_t idx, unsigned long address)
+ {
+ 	return 0;
+@@ -3957,7 +3948,7 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
+ 	 * get spurious allocation failures if two CPUs race to instantiate
+ 	 * the same page in the page cache.
+ 	 */
+-	hash = hugetlb_fault_mutex_hash(h, mm, vma, mapping, idx, haddr);
++	hash = hugetlb_fault_mutex_hash(h, mapping, idx, haddr);
+ 	mutex_lock(&hugetlb_fault_mutex_table[hash]);
+ 
+ 	entry = huge_ptep_get(ptep);
+diff --git a/mm/mincore.c b/mm/mincore.c
+index 218099b5ed31..c3f058bd0faf 100644
+--- a/mm/mincore.c
++++ b/mm/mincore.c
+@@ -169,6 +169,22 @@ out:
+ 	return 0;
+ }
+ 
++static inline bool can_do_mincore(struct vm_area_struct *vma)
++{
++	if (vma_is_anonymous(vma))
++		return true;
++	if (!vma->vm_file)
++		return false;
++	/*
++	 * Reveal pagecache information only for non-anonymous mappings that
++	 * correspond to the files the calling process could (if tried) open
++	 * for writing; otherwise we'd be including shared non-exclusive
++	 * mappings, which opens a side channel.
++	 */
++	return inode_owner_or_capable(file_inode(vma->vm_file)) ||
++		inode_permission(file_inode(vma->vm_file), MAY_WRITE) == 0;
++}
++
+ /*
+  * Do a chunk of "sys_mincore()". We've already checked
+  * all the arguments, we hold the mmap semaphore: we should
+@@ -189,8 +205,13 @@ static long do_mincore(unsigned long addr, unsigned long pages, unsigned char *v
+ 	vma = find_vma(current->mm, addr);
+ 	if (!vma || addr < vma->vm_start)
+ 		return -ENOMEM;
+-	mincore_walk.mm = vma->vm_mm;
+ 	end = min(vma->vm_end, addr + (pages << PAGE_SHIFT));
++	if (!can_do_mincore(vma)) {
++		unsigned long pages = DIV_ROUND_UP(end - addr, PAGE_SIZE);
++		memset(vec, 1, pages);
++		return pages;
++	}
++	mincore_walk.mm = vma->vm_mm;
+ 	err = walk_page_range(addr, end, &mincore_walk);
+ 	if (err < 0)
+ 		return err;
+diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
+index d59b5a73dfb3..9932d5755e4c 100644
+--- a/mm/userfaultfd.c
++++ b/mm/userfaultfd.c
+@@ -271,8 +271,7 @@ retry:
+ 		 */
+ 		idx = linear_page_index(dst_vma, dst_addr);
+ 		mapping = dst_vma->vm_file->f_mapping;
+-		hash = hugetlb_fault_mutex_hash(h, dst_mm, dst_vma, mapping,
+-								idx, dst_addr);
++		hash = hugetlb_fault_mutex_hash(h, mapping, idx, dst_addr);
+ 		mutex_lock(&hugetlb_fault_mutex_table[hash]);
+ 
+ 		err = -ENOMEM;
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 46f88dc7b7e8..af17265b829c 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -1548,9 +1548,11 @@ static bool hdmi_present_sense_via_verbs(struct hdmi_spec_per_pin *per_pin,
+ 	ret = !repoll || !eld->monitor_present || eld->eld_valid;
+ 
+ 	jack = snd_hda_jack_tbl_get(codec, pin_nid);
+-	if (jack)
++	if (jack) {
+ 		jack->block_report = !ret;
+-
++		jack->pin_sense = (eld->monitor_present && eld->eld_valid) ?
++			AC_PINSENSE_PRESENCE : 0;
++	}
+ 	mutex_unlock(&per_pin->lock);
+ 	return ret;
+ }
+@@ -1660,6 +1662,11 @@ static void hdmi_repoll_eld(struct work_struct *work)
+ 	container_of(to_delayed_work(work), struct hdmi_spec_per_pin, work);
+ 	struct hda_codec *codec = per_pin->codec;
+ 	struct hdmi_spec *spec = codec->spec;
++	struct hda_jack_tbl *jack;
++
++	jack = snd_hda_jack_tbl_get(codec, per_pin->pin_nid);
++	if (jack)
++		jack->jack_dirty = 1;
+ 
+ 	if (per_pin->repoll_count++ > 6)
+ 		per_pin->repoll_count = 0;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 5ce28b4f0218..c50fb33e323c 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -477,12 +477,45 @@ static void alc_auto_setup_eapd(struct hda_codec *codec, bool on)
+ 		set_eapd(codec, *p, on);
+ }
+ 
++static int find_ext_mic_pin(struct hda_codec *codec);
++
++static void alc_headset_mic_no_shutup(struct hda_codec *codec)
++{
++	const struct hda_pincfg *pin;
++	int mic_pin = find_ext_mic_pin(codec);
++	int i;
++
++	/* don't shut up pins when unloading the driver; otherwise it breaks
++	 * the default pin setup at the next load of the driver
++	 */
++	if (codec->bus->shutdown)
++		return;
++
++	snd_array_for_each(&codec->init_pins, i, pin) {
++		/* use read here for syncing after issuing each verb */
++		if (pin->nid != mic_pin)
++			snd_hda_codec_read(codec, pin->nid, 0,
++					AC_VERB_SET_PIN_WIDGET_CONTROL, 0);
++	}
++
++	codec->pins_shutup = 1;
++}
++
+ static void alc_shutup_pins(struct hda_codec *codec)
+ {
+ 	struct alc_spec *spec = codec->spec;
+ 
+-	if (!spec->no_shutup_pins)
+-		snd_hda_shutup_pins(codec);
++	switch (codec->core.vendor_id) {
++	case 0x10ec0286:
++	case 0x10ec0288:
++	case 0x10ec0298:
++		alc_headset_mic_no_shutup(codec);
++		break;
++	default:
++		if (!spec->no_shutup_pins)
++			snd_hda_shutup_pins(codec);
++		break;
++	}
+ }
+ 
+ /* generic shutup callback;
+@@ -803,11 +836,10 @@ static int alc_init(struct hda_codec *codec)
+ 	if (spec->init_hook)
+ 		spec->init_hook(codec);
+ 
++	snd_hda_gen_init(codec);
+ 	alc_fix_pll(codec);
+ 	alc_auto_init_amp(codec, spec->init_amp);
+ 
+-	snd_hda_gen_init(codec);
+-
+ 	snd_hda_apply_fixup(codec, HDA_FIXUP_ACT_INIT);
+ 
+ 	return 0;
+@@ -2924,27 +2956,6 @@ static int alc269_parse_auto_config(struct hda_codec *codec)
+ 	return alc_parse_auto_config(codec, alc269_ignore, ssids);
+ }
+ 
+-static int find_ext_mic_pin(struct hda_codec *codec);
+-
+-static void alc286_shutup(struct hda_codec *codec)
+-{
+-	const struct hda_pincfg *pin;
+-	int i;
+-	int mic_pin = find_ext_mic_pin(codec);
+-	/* don't shut up pins when unloading the driver; otherwise it breaks
+-	 * the default pin setup at the next load of the driver
+-	 */
+-	if (codec->bus->shutdown)
+-		return;
+-	snd_array_for_each(&codec->init_pins, i, pin) {
+-		/* use read here for syncing after issuing each verb */
+-		if (pin->nid != mic_pin)
+-			snd_hda_codec_read(codec, pin->nid, 0,
+-					AC_VERB_SET_PIN_WIDGET_CONTROL, 0);
+-	}
+-	codec->pins_shutup = 1;
+-}
+-
+ static void alc269vb_toggle_power_output(struct hda_codec *codec, int power_up)
+ {
+ 	alc_update_coef_idx(codec, 0x04, 1 << 11, power_up ? (1 << 11) : 0);
+@@ -6931,6 +6942,10 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1462, 0xb120, "MSI Cubi MS-B120", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1462, 0xb171, "Cubi N 8GL (MS-B171)", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1558, 0x1325, "System76 Darter Pro (darp5)", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x8550, "System76 Gazelle (gaze14)", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x8551, "System76 Gazelle (gaze14)", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
++	SND_PCI_QUIRK(0x1558, 0x8560, "System76 Gazelle (gaze14)", ALC269_FIXUP_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1558, 0x8561, "System76 Gazelle (gaze14)", ALC269_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x1036, "Lenovo P520", ALC233_FIXUP_LENOVO_MULTI_CODECS),
+ 	SND_PCI_QUIRK(0x17aa, 0x20f2, "Thinkpad SL410/510", ALC269_FIXUP_SKU_IGNORE),
+ 	SND_PCI_QUIRK(0x17aa, 0x215e, "Thinkpad L512", ALC269_FIXUP_SKU_IGNORE),
+@@ -6973,7 +6988,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x313c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
+ 	SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+ 	SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC),
+-	SND_PCI_QUIRK(0x17aa, 0x3978, "IdeaPad Y410P", ALC269_FIXUP_NO_SHUTUP),
++	SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo B50-70", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+ 	SND_PCI_QUIRK(0x17aa, 0x5013, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x17aa, 0x501a, "Thinkpad", ALC283_FIXUP_INT_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x501e, "Thinkpad L440", ALC292_FIXUP_TPT440_DOCK),
+@@ -7702,7 +7717,6 @@ static int patch_alc269(struct hda_codec *codec)
+ 	case 0x10ec0286:
+ 	case 0x10ec0288:
+ 		spec->codec_variant = ALC269_TYPE_ALC286;
+-		spec->shutup = alc286_shutup;
+ 		break;
+ 	case 0x10ec0298:
+ 		spec->codec_variant = ALC269_TYPE_ALC298;
+diff --git a/sound/soc/codecs/hdac_hdmi.c b/sound/soc/codecs/hdac_hdmi.c
+index b19d7a3e7a2c..a23b1f2844e9 100644
+--- a/sound/soc/codecs/hdac_hdmi.c
++++ b/sound/soc/codecs/hdac_hdmi.c
+@@ -1871,6 +1871,17 @@ static int hdmi_codec_probe(struct snd_soc_component *component)
+ 	/* Imp: Store the card pointer in hda_codec */
+ 	hdmi->card = dapm->card->snd_card;
+ 
++	/*
++	 * Setup a device_link between card device and HDMI codec device.
++	 * The card device is the consumer and the HDMI codec device is
++	 * the supplier. With this setting, we can make sure that the audio
++	 * domain in display power will be always turned on before operating
++	 * on the HDMI audio codec registers.
++	 * Let's use the flag DL_FLAG_AUTOREMOVE_CONSUMER. This can make
++	 * sure the device link is freed when the machine driver is removed.
++	 */
++	device_link_add(component->card->dev, &hdev->dev, DL_FLAG_RPM_ACTIVE |
++			DL_FLAG_AUTOREMOVE_CONSUMER);
+ 	/*
+ 	 * hdac_device core already sets the state to active and calls
+ 	 * get_noresume. So enable runtime and set the device to suspend.
+diff --git a/sound/soc/codecs/max98090.c b/sound/soc/codecs/max98090.c
+index c97f21836c66..f06ae43650a3 100644
+--- a/sound/soc/codecs/max98090.c
++++ b/sound/soc/codecs/max98090.c
+@@ -1209,14 +1209,14 @@ static const struct snd_soc_dapm_widget max98090_dapm_widgets[] = {
+ 		&max98090_right_rcv_mixer_controls[0],
+ 		ARRAY_SIZE(max98090_right_rcv_mixer_controls)),
+ 
+-	SND_SOC_DAPM_MUX("LINMOD Mux", M98090_REG_LOUTR_MIXER,
+-		M98090_LINMOD_SHIFT, 0, &max98090_linmod_mux),
++	SND_SOC_DAPM_MUX("LINMOD Mux", SND_SOC_NOPM, 0, 0,
++		&max98090_linmod_mux),
+ 
+-	SND_SOC_DAPM_MUX("MIXHPLSEL Mux", M98090_REG_HP_CONTROL,
+-		M98090_MIXHPLSEL_SHIFT, 0, &max98090_mixhplsel_mux),
++	SND_SOC_DAPM_MUX("MIXHPLSEL Mux", SND_SOC_NOPM, 0, 0,
++		&max98090_mixhplsel_mux),
+ 
+-	SND_SOC_DAPM_MUX("MIXHPRSEL Mux", M98090_REG_HP_CONTROL,
+-		M98090_MIXHPRSEL_SHIFT, 0, &max98090_mixhprsel_mux),
++	SND_SOC_DAPM_MUX("MIXHPRSEL Mux", SND_SOC_NOPM, 0, 0,
++		&max98090_mixhprsel_mux),
+ 
+ 	SND_SOC_DAPM_PGA("HP Left Out", M98090_REG_OUTPUT_ENABLE,
+ 		M98090_HPLEN_SHIFT, 0, NULL, 0),
+diff --git a/sound/soc/codecs/rt5677-spi.c b/sound/soc/codecs/rt5677-spi.c
+index 84501c2020c7..a2c7ffa5f400 100644
+--- a/sound/soc/codecs/rt5677-spi.c
++++ b/sound/soc/codecs/rt5677-spi.c
+@@ -57,13 +57,15 @@ static DEFINE_MUTEX(spi_mutex);
+  * RT5677_SPI_READ/WRITE_32:	Transfer 4 bytes
+  * RT5677_SPI_READ/WRITE_BURST:	Transfer any multiples of 8 bytes
+  *
+- * For example, reading 260 bytes at 0x60030002 uses the following commands:
+- * 0x60030002 RT5677_SPI_READ_16	2 bytes
++ * Note:
++ * 16 Bit writes and reads are restricted to the address range
++ * 0x18020000 ~ 0x18021000
++ *
++ * For example, reading 256 bytes at 0x60030004 uses the following commands:
+  * 0x60030004 RT5677_SPI_READ_32	4 bytes
+  * 0x60030008 RT5677_SPI_READ_BURST	240 bytes
+  * 0x600300F8 RT5677_SPI_READ_BURST	8 bytes
+  * 0x60030100 RT5677_SPI_READ_32	4 bytes
+- * 0x60030104 RT5677_SPI_READ_16	2 bytes
+  *
+  * Input:
+  * @read: true for read commands; false for write commands
+@@ -78,15 +80,13 @@ static u8 rt5677_spi_select_cmd(bool read, u32 align, u32 remain, u32 *len)
+ {
+ 	u8 cmd;
+ 
+-	if (align == 2 || align == 6 || remain == 2) {
+-		cmd = RT5677_SPI_READ_16;
+-		*len = 2;
+-	} else if (align == 4 || remain <= 6) {
++	if (align == 4 || remain <= 4) {
+ 		cmd = RT5677_SPI_READ_32;
+ 		*len = 4;
+ 	} else {
+ 		cmd = RT5677_SPI_READ_BURST;
+-		*len = min_t(u32, remain & ~7, RT5677_SPI_BURST_LEN);
++		*len = (((remain - 1) >> 3) + 1) << 3;
++		*len = min_t(u32, *len, RT5677_SPI_BURST_LEN);
+ 	}
+ 	return read ? cmd : cmd + 1;
+ }
+@@ -107,7 +107,7 @@ static void rt5677_spi_reverse(u8 *dst, u32 dstlen, const u8 *src, u32 srclen)
+ 	}
+ }
+ 
+-/* Read DSP address space using SPI. addr and len have to be 2-byte aligned. */
++/* Read DSP address space using SPI. addr and len have to be 4-byte aligned. */
+ int rt5677_spi_read(u32 addr, void *rxbuf, size_t len)
+ {
+ 	u32 offset;
+@@ -123,7 +123,7 @@ int rt5677_spi_read(u32 addr, void *rxbuf, size_t len)
+ 	if (!g_spi)
+ 		return -ENODEV;
+ 
+-	if ((addr & 1) || (len & 1)) {
++	if ((addr & 3) || (len & 3)) {
+ 		dev_err(&g_spi->dev, "Bad read align 0x%x(%zu)\n", addr, len);
+ 		return -EACCES;
+ 	}
+@@ -158,13 +158,13 @@ int rt5677_spi_read(u32 addr, void *rxbuf, size_t len)
+ }
+ EXPORT_SYMBOL_GPL(rt5677_spi_read);
+ 
+-/* Write DSP address space using SPI. addr has to be 2-byte aligned.
+- * If len is not 2-byte aligned, an extra byte of zero is written at the end
++/* Write DSP address space using SPI. addr has to be 4-byte aligned.
++ * If len is not 4-byte aligned, then extra zeros are written at the end
+  * as padding.
+  */
+ int rt5677_spi_write(u32 addr, const void *txbuf, size_t len)
+ {
+-	u32 offset, len_with_pad = len;
++	u32 offset;
+ 	int status = 0;
+ 	struct spi_transfer t;
+ 	struct spi_message m;
+@@ -177,22 +177,19 @@ int rt5677_spi_write(u32 addr, const void *txbuf, size_t len)
+ 	if (!g_spi)
+ 		return -ENODEV;
+ 
+-	if (addr & 1) {
++	if (addr & 3) {
+ 		dev_err(&g_spi->dev, "Bad write align 0x%x(%zu)\n", addr, len);
+ 		return -EACCES;
+ 	}
+ 
+-	if (len & 1)
+-		len_with_pad = len + 1;
+-
+ 	memset(&t, 0, sizeof(t));
+ 	t.tx_buf = buf;
+ 	t.speed_hz = RT5677_SPI_FREQ;
+ 	spi_message_init_with_transfers(&m, &t, 1);
+ 
+-	for (offset = 0; offset < len_with_pad;) {
++	for (offset = 0; offset < len;) {
+ 		spi_cmd = rt5677_spi_select_cmd(false, (addr + offset) & 7,
+-				len_with_pad - offset, &t.len);
++				len - offset, &t.len);
+ 
+ 		/* Construct SPI message header */
+ 		buf[0] = spi_cmd;
+diff --git a/sound/soc/fsl/fsl_esai.c b/sound/soc/fsl/fsl_esai.c
+index 3623aa9a6f2e..15202a637197 100644
+--- a/sound/soc/fsl/fsl_esai.c
++++ b/sound/soc/fsl/fsl_esai.c
+@@ -251,7 +251,7 @@ static int fsl_esai_set_dai_sysclk(struct snd_soc_dai *dai, int clk_id,
+ 		break;
+ 	case ESAI_HCKT_EXTAL:
+ 		ecr |= ESAI_ECR_ETI;
+-		/* fall through */
++		break;
+ 	case ESAI_HCKR_EXTAL:
+ 		ecr |= ESAI_ECR_ERI;
+ 		break;
+diff --git a/sound/usb/line6/toneport.c b/sound/usb/line6/toneport.c
+index 19bee725de00..325b07b98b3c 100644
+--- a/sound/usb/line6/toneport.c
++++ b/sound/usb/line6/toneport.c
+@@ -54,8 +54,8 @@ struct usb_line6_toneport {
+ 	/* Firmware version (x 100) */
+ 	u8 firmware_version;
+ 
+-	/* Timer for delayed PCM startup */
+-	struct timer_list timer;
++	/* Work for delayed PCM startup */
++	struct delayed_work pcm_work;
+ 
+ 	/* Device type */
+ 	enum line6_device_type type;
+@@ -241,9 +241,10 @@ static int snd_toneport_source_put(struct snd_kcontrol *kcontrol,
+ 	return 1;
+ }
+ 
+-static void toneport_start_pcm(struct timer_list *t)
++static void toneport_start_pcm(struct work_struct *work)
+ {
+-	struct usb_line6_toneport *toneport = from_timer(toneport, t, timer);
++	struct usb_line6_toneport *toneport =
++		container_of(work, struct usb_line6_toneport, pcm_work.work);
+ 	struct usb_line6 *line6 = &toneport->line6;
+ 
+ 	line6_pcm_acquire(line6->line6pcm, LINE6_STREAM_MONITOR, true);
+@@ -393,7 +394,8 @@ static int toneport_setup(struct usb_line6_toneport *toneport)
+ 	if (toneport_has_led(toneport))
+ 		toneport_update_led(toneport);
+ 
+-	mod_timer(&toneport->timer, jiffies + TONEPORT_PCM_DELAY * HZ);
++	schedule_delayed_work(&toneport->pcm_work,
++			      msecs_to_jiffies(TONEPORT_PCM_DELAY * 1000));
+ 	return 0;
+ }
+ 
+@@ -405,7 +407,7 @@ static void line6_toneport_disconnect(struct usb_line6 *line6)
+ 	struct usb_line6_toneport *toneport =
+ 		(struct usb_line6_toneport *)line6;
+ 
+-	del_timer_sync(&toneport->timer);
++	cancel_delayed_work_sync(&toneport->pcm_work);
+ 
+ 	if (toneport_has_led(toneport))
+ 		toneport_remove_leds(toneport);
+@@ -422,7 +424,7 @@ static int toneport_init(struct usb_line6 *line6,
+ 	struct usb_line6_toneport *toneport =  (struct usb_line6_toneport *) line6;
+ 
+ 	toneport->type = id->driver_info;
+-	timer_setup(&toneport->timer, toneport_start_pcm, 0);
++	INIT_DELAYED_WORK(&toneport->pcm_work, toneport_start_pcm);
+ 
+ 	line6->disconnect = line6_toneport_disconnect;
+ 
+diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
+index e7d441d0e839..5a10b1b7f6b9 100644
+--- a/sound/usb/mixer.c
++++ b/sound/usb/mixer.c
+@@ -2679,6 +2679,8 @@ static int parse_audio_selector_unit(struct mixer_build *state, int unitid,
+ 	kctl = snd_ctl_new1(&mixer_selectunit_ctl, cval);
+ 	if (! kctl) {
+ 		usb_audio_err(state->chip, "cannot malloc kcontrol\n");
++		for (i = 0; i < desc->bNrInPins; i++)
++			kfree(namelist[i]);
+ 		kfree(namelist);
+ 		kfree(cval);
+ 		return -ENOMEM;
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 479196aeb409..2cd57730381b 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -1832,7 +1832,8 @@ static int validate_branch(struct objtool_file *file, struct instruction *first,
+ 			return 1;
+ 		}
+ 
+-		func = insn->func ? insn->func->pfunc : NULL;
++		if (insn->func)
++			func = insn->func->pfunc;
+ 
+ 		if (func && insn->ignore) {
+ 			WARN_FUNC("BUG: why am I validating an ignored function?",
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index ff68b07e94e9..b5238bcba72c 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -1251,7 +1251,7 @@ int kvm_clear_dirty_log_protect(struct kvm *kvm,
+ 	if (!dirty_bitmap)
+ 		return -ENOENT;
+ 
+-	n = kvm_dirty_bitmap_bytes(memslot);
++	n = ALIGN(log->num_pages, BITS_PER_LONG) / 8;
+ 
+ 	if (log->first_page > memslot->npages ||
+ 	    log->num_pages > memslot->npages - log->first_page)


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [gentoo-commits] proj/linux-patches:5.0 commit in: /
@ 2019-05-26 17:08 Mike Pagano
  0 siblings, 0 replies; 28+ messages in thread
From: Mike Pagano @ 2019-05-26 17:08 UTC (permalink / raw)
  To: gentoo-commits

commit:     15741aee1cf7932e3be5a4e02b6d5867bdcb4bd0
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun May 26 17:08:02 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun May 26 17:08:02 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=15741aee

Linux patch 5.0.19

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1018_linux-5.0.19.patch | 5315 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5319 insertions(+)

diff --git a/0000_README b/0000_README
index 396a4db..599546c 100644
--- a/0000_README
+++ b/0000_README
@@ -115,6 +115,10 @@ Patch:  1017_linux-5.0.18.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.0.18
 
+Patch:  1018_linux-5.0.19.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.0.19
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1018_linux-5.0.19.patch b/1018_linux-5.0.19.patch
new file mode 100644
index 0000000..0c8c977
--- /dev/null
+++ b/1018_linux-5.0.19.patch
@@ -0,0 +1,5315 @@
+diff --git a/Documentation/filesystems/porting b/Documentation/filesystems/porting
+index cf43bc4dbf31..a60fa516d4cb 100644
+--- a/Documentation/filesystems/porting
++++ b/Documentation/filesystems/porting
+@@ -638,3 +638,8 @@ in your dentry operations instead.
+ 	inode to d_splice_alias() will also do the right thing (equivalent of
+ 	d_add(dentry, NULL); return NULL;), so that kind of special cases
+ 	also doesn't need a separate treatment.
++--
++[mandatory]
++	DCACHE_RCUACCESS is gone; having an RCU delay on dentry freeing is the
++	default.  DCACHE_NORCU opts out, and only d_alloc_pseudo() has any
++	business doing so.
+diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
+index ba8927c0d45c..a1b8e6d92298 100644
+--- a/Documentation/virtual/kvm/api.txt
++++ b/Documentation/virtual/kvm/api.txt
+@@ -3790,8 +3790,9 @@ The ioctl clears the dirty status of pages in a memory slot, according to
+ the bitmap that is passed in struct kvm_clear_dirty_log's dirty_bitmap
+ field.  Bit 0 of the bitmap corresponds to page "first_page" in the
+ memory slot, and num_pages is the size in bits of the input bitmap.
+-Both first_page and num_pages must be a multiple of 64.  For each bit
+-that is set in the input bitmap, the corresponding page is marked "clean"
++first_page must be a multiple of 64; num_pages must also be a multiple of
++64 unless first_page + num_pages is the size of the memory slot.  For each
++bit that is set in the input bitmap, the corresponding page is marked "clean"
+ in KVM's dirty bitmap, and dirty tracking is re-enabled for that page
+ (for example via write-protection, or by clearing the dirty bit in
+ a page table entry).
+diff --git a/Makefile b/Makefile
+index bf21b5a86e4b..66efffc3fb41 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 0
+-SUBLEVEL = 18
++SUBLEVEL = 19
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/arch/arc/mm/cache.c b/arch/arc/mm/cache.c
+index 4135abec3fb0..63e6e6504699 100644
+--- a/arch/arc/mm/cache.c
++++ b/arch/arc/mm/cache.c
+@@ -113,10 +113,24 @@ static void read_decode_cache_bcr_arcv2(int cpu)
+ 	}
+ 
+ 	READ_BCR(ARC_REG_CLUSTER_BCR, cbcr);
+-	if (cbcr.c)
++	if (cbcr.c) {
+ 		ioc_exists = 1;
+-	else
++
++		/*
++		 * As for today we don't support both IOC and ZONE_HIGHMEM enabled
++		 * simultaneously. This happens because as of today IOC aperture covers
++		 * only ZONE_NORMAL (low mem) and any dma transactions outside this
++		 * region won't be HW coherent.
++		 * If we want to use both IOC and ZONE_HIGHMEM we can use
++		 * bounce_buffer to handle dma transactions to HIGHMEM.
++		 * Also it is possible to modify dma_direct cache ops or increase IOC
++		 * aperture size if we are planning to use HIGHMEM without PAE.
++		 */
++		if (IS_ENABLED(CONFIG_HIGHMEM) || is_pae40_enabled())
++			ioc_enable = 0;
++	} else {
+ 		ioc_enable = 0;
++	}
+ 
+ 	/* HS 2.0 didn't have AUX_VOL */
+ 	if (cpuinfo_arc700[cpu].core.family > 0x51) {
+@@ -1158,19 +1172,6 @@ noinline void __init arc_ioc_setup(void)
+ 	if (!ioc_enable)
+ 		return;
+ 
+-	/*
+-	 * As for today we don't support both IOC and ZONE_HIGHMEM enabled
+-	 * simultaneously. This happens because as of today IOC aperture covers
+-	 * only ZONE_NORMAL (low mem) and any dma transactions outside this
+-	 * region won't be HW coherent.
+-	 * If we want to use both IOC and ZONE_HIGHMEM we can use
+-	 * bounce_buffer to handle dma transactions to HIGHMEM.
+-	 * Also it is possible to modify dma_direct cache ops or increase IOC
+-	 * aperture size if we are planning to use HIGHMEM without PAE.
+-	 */
+-	if (IS_ENABLED(CONFIG_HIGHMEM))
+-		panic("IOC and HIGHMEM can't be used simultaneously");
+-
+ 	/* Flush + invalidate + disable L1 dcache */
+ 	__dc_disable();
+ 
+diff --git a/arch/mips/kernel/perf_event_mipsxx.c b/arch/mips/kernel/perf_event_mipsxx.c
+index 413863508f6f..d67fb64e908c 100644
+--- a/arch/mips/kernel/perf_event_mipsxx.c
++++ b/arch/mips/kernel/perf_event_mipsxx.c
+@@ -64,17 +64,11 @@ struct mips_perf_event {
+ 	#define CNTR_EVEN	0x55555555
+ 	#define CNTR_ODD	0xaaaaaaaa
+ 	#define CNTR_ALL	0xffffffff
+-#ifdef CONFIG_MIPS_MT_SMP
+ 	enum {
+ 		T  = 0,
+ 		V  = 1,
+ 		P  = 2,
+ 	} range;
+-#else
+-	#define T
+-	#define V
+-	#define P
+-#endif
+ };
+ 
+ static struct mips_perf_event raw_event;
+@@ -325,9 +319,7 @@ static void mipsxx_pmu_enable_event(struct hw_perf_event *evt, int idx)
+ {
+ 	struct perf_event *event = container_of(evt, struct perf_event, hw);
+ 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+-#ifdef CONFIG_MIPS_MT_SMP
+ 	unsigned int range = evt->event_base >> 24;
+-#endif /* CONFIG_MIPS_MT_SMP */
+ 
+ 	WARN_ON(idx < 0 || idx >= mipspmu.num_counters);
+ 
+@@ -336,21 +328,15 @@ static void mipsxx_pmu_enable_event(struct hw_perf_event *evt, int idx)
+ 		/* Make sure interrupt enabled. */
+ 		MIPS_PERFCTRL_IE;
+ 
+-#ifdef CONFIG_CPU_BMIPS5000
+-	{
++	if (IS_ENABLED(CONFIG_CPU_BMIPS5000)) {
+ 		/* enable the counter for the calling thread */
+ 		cpuc->saved_ctrl[idx] |=
+ 			(1 << (12 + vpe_id())) | BRCM_PERFCTRL_TC;
+-	}
+-#else
+-#ifdef CONFIG_MIPS_MT_SMP
+-	if (range > V) {
++	} else if (IS_ENABLED(CONFIG_MIPS_MT_SMP) && range > V) {
+ 		/* The counter is processor wide. Set it up to count all TCs. */
+ 		pr_debug("Enabling perf counter for all TCs\n");
+ 		cpuc->saved_ctrl[idx] |= M_TC_EN_ALL;
+-	} else
+-#endif /* CONFIG_MIPS_MT_SMP */
+-	{
++	} else {
+ 		unsigned int cpu, ctrl;
+ 
+ 		/*
+@@ -365,7 +351,6 @@ static void mipsxx_pmu_enable_event(struct hw_perf_event *evt, int idx)
+ 		cpuc->saved_ctrl[idx] |= ctrl;
+ 		pr_debug("Enabling perf counter for CPU%d\n", cpu);
+ 	}
+-#endif /* CONFIG_CPU_BMIPS5000 */
+ 	/*
+ 	 * We do not actually let the counter run. Leave it until start().
+ 	 */
+diff --git a/arch/parisc/boot/compressed/head.S b/arch/parisc/boot/compressed/head.S
+index 5aba20fa48aa..e8b798fd0cf0 100644
+--- a/arch/parisc/boot/compressed/head.S
++++ b/arch/parisc/boot/compressed/head.S
+@@ -22,7 +22,7 @@
+ 	__HEAD
+ 
+ ENTRY(startup)
+-	 .level LEVEL
++	 .level PA_ASM_LEVEL
+ 
+ #define PSW_W_SM	0x200
+ #define PSW_W_BIT       36
+@@ -63,7 +63,7 @@ $bss_loop:
+ 	load32	BOOTADDR(decompress_kernel),%r3
+ 
+ #ifdef CONFIG_64BIT
+-	.level LEVEL
++	.level PA_ASM_LEVEL
+ 	ssm	PSW_W_SM, %r0		/* set W-bit */
+ 	depdi	0, 31, 32, %r3
+ #endif
+@@ -72,7 +72,7 @@ $bss_loop:
+ 
+ startup_continue:
+ #ifdef CONFIG_64BIT
+-	.level LEVEL
++	.level PA_ASM_LEVEL
+ 	rsm	PSW_W_SM, %r0		/* clear W-bit */
+ #endif
+ 
+diff --git a/arch/parisc/include/asm/assembly.h b/arch/parisc/include/asm/assembly.h
+index c17ec0ee6e7c..d85738a7bbe6 100644
+--- a/arch/parisc/include/asm/assembly.h
++++ b/arch/parisc/include/asm/assembly.h
+@@ -61,14 +61,14 @@
+ #define LDCW		ldcw,co
+ #define BL		b,l
+ # ifdef CONFIG_64BIT
+-#  define LEVEL		2.0w
++#  define PA_ASM_LEVEL	2.0w
+ # else
+-#  define LEVEL		2.0
++#  define PA_ASM_LEVEL	2.0
+ # endif
+ #else
+ #define LDCW		ldcw
+ #define BL		bl
+-#define LEVEL		1.1
++#define PA_ASM_LEVEL	1.1
+ #endif
+ 
+ #ifdef __ASSEMBLY__
+diff --git a/arch/parisc/include/asm/cache.h b/arch/parisc/include/asm/cache.h
+index 006fb939cac8..4016fe1c65a9 100644
+--- a/arch/parisc/include/asm/cache.h
++++ b/arch/parisc/include/asm/cache.h
+@@ -44,22 +44,22 @@ void parisc_setup_cache_timing(void);
+ 
+ #define pdtlb(addr)	asm volatile("pdtlb 0(%%sr1,%0)" \
+ 			ALTERNATIVE(ALT_COND_NO_SMP, INSN_PxTLB) \
+-			: : "r" (addr))
++			: : "r" (addr) : "memory")
+ #define pitlb(addr)	asm volatile("pitlb 0(%%sr1,%0)" \
+ 			ALTERNATIVE(ALT_COND_NO_SMP, INSN_PxTLB) \
+ 			ALTERNATIVE(ALT_COND_NO_SPLIT_TLB, INSN_NOP) \
+-			: : "r" (addr))
++			: : "r" (addr) : "memory")
+ #define pdtlb_kernel(addr)  asm volatile("pdtlb 0(%0)"   \
+ 			ALTERNATIVE(ALT_COND_NO_SMP, INSN_PxTLB) \
+-			: : "r" (addr))
++			: : "r" (addr) : "memory")
+ 
+ #define asm_io_fdc(addr) asm volatile("fdc %%r0(%0)" \
+ 			ALTERNATIVE(ALT_COND_NO_DCACHE, INSN_NOP) \
+ 			ALTERNATIVE(ALT_COND_NO_IOC_FDC, INSN_NOP) \
+-			: : "r" (addr))
++			: : "r" (addr) : "memory")
+ #define asm_io_sync()	asm volatile("sync" \
+ 			ALTERNATIVE(ALT_COND_NO_DCACHE, INSN_NOP) \
+-			ALTERNATIVE(ALT_COND_NO_IOC_FDC, INSN_NOP) :: )
++			ALTERNATIVE(ALT_COND_NO_IOC_FDC, INSN_NOP) :::"memory")
+ 
+ #endif /* ! __ASSEMBLY__ */
+ 
+diff --git a/arch/parisc/kernel/head.S b/arch/parisc/kernel/head.S
+index fbb4e43fda05..f56cbab64ac1 100644
+--- a/arch/parisc/kernel/head.S
++++ b/arch/parisc/kernel/head.S
+@@ -22,7 +22,7 @@
+ #include <linux/linkage.h>
+ #include <linux/init.h>
+ 
+-	.level	LEVEL
++	.level	PA_ASM_LEVEL
+ 
+ 	__INITDATA
+ ENTRY(boot_args)
+@@ -258,7 +258,7 @@ stext_pdc_ret:
+ 	ldo		R%PA(fault_vector_11)(%r10),%r10
+ 
+ $is_pa20:
+-	.level		LEVEL /* restore 1.1 || 2.0w */
++	.level		PA_ASM_LEVEL /* restore 1.1 || 2.0w */
+ #endif /*!CONFIG_64BIT*/
+ 	load32		PA(fault_vector_20),%r10
+ 
+diff --git a/arch/parisc/kernel/process.c b/arch/parisc/kernel/process.c
+index 841db71958cd..97c206734e24 100644
+--- a/arch/parisc/kernel/process.c
++++ b/arch/parisc/kernel/process.c
+@@ -193,6 +193,7 @@ int dump_task_fpu (struct task_struct *tsk, elf_fpregset_t *r)
+  */
+ 
+ int running_on_qemu __read_mostly;
++EXPORT_SYMBOL(running_on_qemu);
+ 
+ void __cpuidle arch_cpu_idle_dead(void)
+ {
+diff --git a/arch/parisc/kernel/syscall.S b/arch/parisc/kernel/syscall.S
+index 4f77bd9be66b..93cc36d98875 100644
+--- a/arch/parisc/kernel/syscall.S
++++ b/arch/parisc/kernel/syscall.S
+@@ -48,7 +48,7 @@ registers).
+ 	 */
+ #define KILL_INSN	break	0,0
+ 
+-	.level          LEVEL
++	.level          PA_ASM_LEVEL
+ 
+ 	.text
+ 
+diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c
+index 059187a3ded7..3d1305aa64b6 100644
+--- a/arch/parisc/mm/init.c
++++ b/arch/parisc/mm/init.c
+@@ -512,7 +512,7 @@ static void __init map_pages(unsigned long start_vaddr,
+ 
+ void __init set_kernel_text_rw(int enable_read_write)
+ {
+-	unsigned long start = (unsigned long) _text;
++	unsigned long start = (unsigned long) __init_begin;
+ 	unsigned long end   = (unsigned long) &data_start;
+ 
+ 	map_pages(start, __pa(start), end-start,
+diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
+index 6ee8195a2ffb..4a6dd3ba0b0b 100644
+--- a/arch/powerpc/include/asm/mmu_context.h
++++ b/arch/powerpc/include/asm/mmu_context.h
+@@ -237,7 +237,6 @@ extern void arch_exit_mmap(struct mm_struct *mm);
+ #endif
+ 
+ static inline void arch_unmap(struct mm_struct *mm,
+-			      struct vm_area_struct *vma,
+ 			      unsigned long start, unsigned long end)
+ {
+ 	if (start <= mm->context.vdso_base && mm->context.vdso_base < end)
+diff --git a/arch/powerpc/kvm/book3s_64_vio.c b/arch/powerpc/kvm/book3s_64_vio.c
+index 532ab79734c7..d43e8fe6d424 100644
+--- a/arch/powerpc/kvm/book3s_64_vio.c
++++ b/arch/powerpc/kvm/book3s_64_vio.c
+@@ -543,14 +543,14 @@ long kvmppc_h_put_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
+ 	if (ret != H_SUCCESS)
+ 		return ret;
+ 
++	idx = srcu_read_lock(&vcpu->kvm->srcu);
++
+ 	ret = kvmppc_tce_validate(stt, tce);
+ 	if (ret != H_SUCCESS)
+-		return ret;
++		goto unlock_exit;
+ 
+ 	dir = iommu_tce_direction(tce);
+ 
+-	idx = srcu_read_lock(&vcpu->kvm->srcu);
+-
+ 	if ((dir != DMA_NONE) && kvmppc_tce_to_ua(vcpu->kvm, tce, &ua, NULL)) {
+ 		ret = H_PARAMETER;
+ 		goto unlock_exit;
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index 5a066fc299e1..f17065f2c962 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -3407,7 +3407,9 @@ static int kvmhv_load_hv_regs_and_go(struct kvm_vcpu *vcpu, u64 time_limit,
+ 	vcpu->arch.shregs.sprg2 = mfspr(SPRN_SPRG2);
+ 	vcpu->arch.shregs.sprg3 = mfspr(SPRN_SPRG3);
+ 
+-	mtspr(SPRN_PSSCR, host_psscr);
++	/* Preserve PSSCR[FAKE_SUSPEND] until we've called kvmppc_save_tm_hv */
++	mtspr(SPRN_PSSCR, host_psscr |
++	      (local_paca->kvm_hstate.fake_suspend << PSSCR_FAKE_SUSPEND_LG));
+ 	mtspr(SPRN_HFSCR, host_hfscr);
+ 	mtspr(SPRN_CIABR, host_ciabr);
+ 	mtspr(SPRN_DAWR, host_dawr);
+diff --git a/arch/um/include/asm/mmu_context.h b/arch/um/include/asm/mmu_context.h
+index fca34b2177e2..9f4b4bb78120 100644
+--- a/arch/um/include/asm/mmu_context.h
++++ b/arch/um/include/asm/mmu_context.h
+@@ -22,7 +22,6 @@ static inline int arch_dup_mmap(struct mm_struct *oldmm, struct mm_struct *mm)
+ }
+ extern void arch_exit_mmap(struct mm_struct *mm);
+ static inline void arch_unmap(struct mm_struct *mm,
+-			struct vm_area_struct *vma,
+ 			unsigned long start, unsigned long end)
+ {
+ }
+diff --git a/arch/unicore32/include/asm/mmu_context.h b/arch/unicore32/include/asm/mmu_context.h
+index 5c205a9cb5a6..9f06ea5466dd 100644
+--- a/arch/unicore32/include/asm/mmu_context.h
++++ b/arch/unicore32/include/asm/mmu_context.h
+@@ -88,7 +88,6 @@ static inline int arch_dup_mmap(struct mm_struct *oldmm,
+ }
+ 
+ static inline void arch_unmap(struct mm_struct *mm,
+-			struct vm_area_struct *vma,
+ 			unsigned long start, unsigned long end)
+ {
+ }
+diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
+index 4fe27b67d7e2..b1d59a7c556e 100644
+--- a/arch/x86/entry/entry_64.S
++++ b/arch/x86/entry/entry_64.S
+@@ -881,7 +881,7 @@ apicinterrupt IRQ_WORK_VECTOR			irq_work_interrupt		smp_irq_work_interrupt
+  * @paranoid == 2 is special: the stub will never switch stacks.  This is for
+  * #DF: if the thread stack is somehow unusable, we'll still get a useful OOPS.
+  */
+-.macro idtentry sym do_sym has_error_code:req paranoid=0 shift_ist=-1
++.macro idtentry sym do_sym has_error_code:req paranoid=0 shift_ist=-1 create_gap=0
+ ENTRY(\sym)
+ 	UNWIND_HINT_IRET_REGS offset=\has_error_code*8
+ 
+@@ -901,6 +901,20 @@ ENTRY(\sym)
+ 	jnz	.Lfrom_usermode_switch_stack_\@
+ 	.endif
+ 
++	.if \create_gap == 1
++	/*
++	 * If coming from kernel space, create a 6-word gap to allow the
++	 * int3 handler to emulate a call instruction.
++	 */
++	testb	$3, CS-ORIG_RAX(%rsp)
++	jnz	.Lfrom_usermode_no_gap_\@
++	.rept	6
++	pushq	5*8(%rsp)
++	.endr
++	UNWIND_HINT_IRET_REGS offset=8
++.Lfrom_usermode_no_gap_\@:
++	.endif
++
+ 	.if \paranoid
+ 	call	paranoid_entry
+ 	.else
+@@ -1132,7 +1146,7 @@ apicinterrupt3 HYPERV_STIMER0_VECTOR \
+ #endif /* CONFIG_HYPERV */
+ 
+ idtentry debug			do_debug		has_error_code=0	paranoid=1 shift_ist=DEBUG_STACK
+-idtentry int3			do_int3			has_error_code=0
++idtentry int3			do_int3			has_error_code=0	create_gap=1
+ idtentry stack_segment		do_stack_segment	has_error_code=1
+ 
+ #ifdef CONFIG_XEN_PV
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index 71fb8b7b2954..c87b06ad9f86 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -2090,15 +2090,19 @@ static void intel_pmu_disable_event(struct perf_event *event)
+ 	cpuc->intel_ctrl_host_mask &= ~(1ull << hwc->idx);
+ 	cpuc->intel_cp_status &= ~(1ull << hwc->idx);
+ 
+-	if (unlikely(event->attr.precise_ip))
+-		intel_pmu_pebs_disable(event);
+-
+ 	if (unlikely(hwc->config_base == MSR_ARCH_PERFMON_FIXED_CTR_CTRL)) {
+ 		intel_pmu_disable_fixed(hwc);
+ 		return;
+ 	}
+ 
+ 	x86_pmu_disable_event(event);
++
++	/*
++	 * Needs to be called after x86_pmu_disable_event,
++	 * so we don't trigger the event without PEBS bit set.
++	 */
++	if (unlikely(event->attr.precise_ip))
++		intel_pmu_pebs_disable(event);
+ }
+ 
+ static void intel_pmu_del_event(struct perf_event *event)
+diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
+index 19d18fae6ec6..41019af68adf 100644
+--- a/arch/x86/include/asm/mmu_context.h
++++ b/arch/x86/include/asm/mmu_context.h
+@@ -277,8 +277,8 @@ static inline void arch_bprm_mm_init(struct mm_struct *mm,
+ 	mpx_mm_init(mm);
+ }
+ 
+-static inline void arch_unmap(struct mm_struct *mm, struct vm_area_struct *vma,
+-			      unsigned long start, unsigned long end)
++static inline void arch_unmap(struct mm_struct *mm, unsigned long start,
++			      unsigned long end)
+ {
+ 	/*
+ 	 * mpx_notify_unmap() goes and reads a rarely-hot
+@@ -298,7 +298,7 @@ static inline void arch_unmap(struct mm_struct *mm, struct vm_area_struct *vma,
+ 	 * consistently wrong.
+ 	 */
+ 	if (unlikely(cpu_feature_enabled(X86_FEATURE_MPX)))
+-		mpx_notify_unmap(mm, vma, start, end);
++		mpx_notify_unmap(mm, start, end);
+ }
+ 
+ /*
+diff --git a/arch/x86/include/asm/mpx.h b/arch/x86/include/asm/mpx.h
+index d0b1434fb0b6..143a5c193ed3 100644
+--- a/arch/x86/include/asm/mpx.h
++++ b/arch/x86/include/asm/mpx.h
+@@ -64,12 +64,15 @@ struct mpx_fault_info {
+ };
+ 
+ #ifdef CONFIG_X86_INTEL_MPX
+-int mpx_fault_info(struct mpx_fault_info *info, struct pt_regs *regs);
+-int mpx_handle_bd_fault(void);
++
++extern int mpx_fault_info(struct mpx_fault_info *info, struct pt_regs *regs);
++extern int mpx_handle_bd_fault(void);
++
+ static inline int kernel_managing_mpx_tables(struct mm_struct *mm)
+ {
+ 	return (mm->context.bd_addr != MPX_INVALID_BOUNDS_DIR);
+ }
++
+ static inline void mpx_mm_init(struct mm_struct *mm)
+ {
+ 	/*
+@@ -78,11 +81,10 @@ static inline void mpx_mm_init(struct mm_struct *mm)
+ 	 */
+ 	mm->context.bd_addr = MPX_INVALID_BOUNDS_DIR;
+ }
+-void mpx_notify_unmap(struct mm_struct *mm, struct vm_area_struct *vma,
+-		      unsigned long start, unsigned long end);
+ 
+-unsigned long mpx_unmapped_area_check(unsigned long addr, unsigned long len,
+-		unsigned long flags);
++extern void mpx_notify_unmap(struct mm_struct *mm, unsigned long start, unsigned long end);
++extern unsigned long mpx_unmapped_area_check(unsigned long addr, unsigned long len, unsigned long flags);
++
+ #else
+ static inline int mpx_fault_info(struct mpx_fault_info *info, struct pt_regs *regs)
+ {
+@@ -100,7 +102,6 @@ static inline void mpx_mm_init(struct mm_struct *mm)
+ {
+ }
+ static inline void mpx_notify_unmap(struct mm_struct *mm,
+-				    struct vm_area_struct *vma,
+ 				    unsigned long start, unsigned long end)
+ {
+ }
+diff --git a/arch/x86/include/asm/pgtable_64.h b/arch/x86/include/asm/pgtable_64.h
+index 9c85b54bf03c..0bb566315621 100644
+--- a/arch/x86/include/asm/pgtable_64.h
++++ b/arch/x86/include/asm/pgtable_64.h
+@@ -259,8 +259,7 @@ extern void init_extra_mapping_uc(unsigned long phys, unsigned long size);
+ extern void init_extra_mapping_wb(unsigned long phys, unsigned long size);
+ 
+ #define gup_fast_permitted gup_fast_permitted
+-static inline bool gup_fast_permitted(unsigned long start, int nr_pages,
+-		int write)
++static inline bool gup_fast_permitted(unsigned long start, int nr_pages)
+ {
+ 	unsigned long len, end;
+ 
+diff --git a/arch/x86/include/asm/text-patching.h b/arch/x86/include/asm/text-patching.h
+index e85ff65c43c3..05861cc08787 100644
+--- a/arch/x86/include/asm/text-patching.h
++++ b/arch/x86/include/asm/text-patching.h
+@@ -39,4 +39,32 @@ extern int poke_int3_handler(struct pt_regs *regs);
+ extern void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
+ extern int after_bootmem;
+ 
++static inline void int3_emulate_jmp(struct pt_regs *regs, unsigned long ip)
++{
++	regs->ip = ip;
++}
++
++#define INT3_INSN_SIZE 1
++#define CALL_INSN_SIZE 5
++
++#ifdef CONFIG_X86_64
++static inline void int3_emulate_push(struct pt_regs *regs, unsigned long val)
++{
++	/*
++	 * The int3 handler in entry_64.S adds a gap between the
++	 * stack where the break point happened, and the saving of
++	 * pt_regs. We can extend the original stack because of
++	 * this gap. See the idtentry macro's create_gap option.
++	 */
++	regs->sp -= sizeof(unsigned long);
++	*(unsigned long *)regs->sp = val;
++}
++
++static inline void int3_emulate_call(struct pt_regs *regs, unsigned long func)
++{
++	int3_emulate_push(regs, regs->ip - INT3_INSN_SIZE + CALL_INSN_SIZE);
++	int3_emulate_jmp(regs, func);
++}
++#endif
++
+ #endif /* _ASM_X86_TEXT_PATCHING_H */
+diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
+index 763d4264d16a..2ee4b12a70e8 100644
+--- a/arch/x86/kernel/ftrace.c
++++ b/arch/x86/kernel/ftrace.c
+@@ -29,6 +29,7 @@
+ #include <asm/kprobes.h>
+ #include <asm/ftrace.h>
+ #include <asm/nops.h>
++#include <asm/text-patching.h>
+ 
+ #ifdef CONFIG_DYNAMIC_FTRACE
+ 
+@@ -231,6 +232,7 @@ int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
+ }
+ 
+ static unsigned long ftrace_update_func;
++static unsigned long ftrace_update_func_call;
+ 
+ static int update_ftrace_func(unsigned long ip, void *new)
+ {
+@@ -259,6 +261,8 @@ int ftrace_update_ftrace_func(ftrace_func_t func)
+ 	unsigned char *new;
+ 	int ret;
+ 
++	ftrace_update_func_call = (unsigned long)func;
++
+ 	new = ftrace_call_replace(ip, (unsigned long)func);
+ 	ret = update_ftrace_func(ip, new);
+ 
+@@ -294,13 +298,28 @@ int ftrace_int3_handler(struct pt_regs *regs)
+ 	if (WARN_ON_ONCE(!regs))
+ 		return 0;
+ 
+-	ip = regs->ip - 1;
+-	if (!ftrace_location(ip) && !is_ftrace_caller(ip))
+-		return 0;
++	ip = regs->ip - INT3_INSN_SIZE;
+ 
+-	regs->ip += MCOUNT_INSN_SIZE - 1;
++#ifdef CONFIG_X86_64
++	if (ftrace_location(ip)) {
++		int3_emulate_call(regs, (unsigned long)ftrace_regs_caller);
++		return 1;
++	} else if (is_ftrace_caller(ip)) {
++		if (!ftrace_update_func_call) {
++			int3_emulate_jmp(regs, ip + CALL_INSN_SIZE);
++			return 1;
++		}
++		int3_emulate_call(regs, ftrace_update_func_call);
++		return 1;
++	}
++#else
++	if (ftrace_location(ip) || is_ftrace_caller(ip)) {
++		int3_emulate_jmp(regs, ip + CALL_INSN_SIZE);
++		return 1;
++	}
++#endif
+ 
+-	return 1;
++	return 0;
+ }
+ 
+ static int ftrace_write(unsigned long ip, const char *val, int size)
+@@ -858,6 +877,8 @@ void arch_ftrace_update_trampoline(struct ftrace_ops *ops)
+ 
+ 	func = ftrace_ops_get_func(ops);
+ 
++	ftrace_update_func_call = (unsigned long)func;
++
+ 	/* Do a safe modify in case the trampoline is executing */
+ 	new = ftrace_call_replace(ip, (unsigned long)func);
+ 	ret = update_ftrace_func(ip, new);
+@@ -959,6 +980,7 @@ static int ftrace_mod_jmp(unsigned long ip, void *func)
+ {
+ 	unsigned char *new;
+ 
++	ftrace_update_func_call = 0UL;
+ 	new = ftrace_jmp_replace(ip, (unsigned long)func);
+ 
+ 	return update_ftrace_func(ip, new);
+diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
+index 371c669696d7..610c0f1fbdd7 100644
+--- a/arch/x86/kvm/hyperv.c
++++ b/arch/x86/kvm/hyperv.c
+@@ -1371,7 +1371,16 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *current_vcpu, u64 ingpa,
+ 
+ 		valid_bank_mask = BIT_ULL(0);
+ 		sparse_banks[0] = flush.processor_mask;
+-		all_cpus = flush.flags & HV_FLUSH_ALL_PROCESSORS;
++
++		/*
++		 * Work around possible WS2012 bug: it sends hypercalls
++		 * with processor_mask = 0x0 and HV_FLUSH_ALL_PROCESSORS clear,
++		 * while also expecting us to flush something and crashing if
++		 * we don't. Let's treat processor_mask == 0 same as
++		 * HV_FLUSH_ALL_PROCESSORS.
++		 */
++		all_cpus = (flush.flags & HV_FLUSH_ALL_PROCESSORS) ||
++			flush.processor_mask == 0;
+ 	} else {
+ 		if (unlikely(kvm_read_guest(kvm, ingpa, &flush_ex,
+ 					    sizeof(flush_ex))))
+diff --git a/arch/x86/lib/Makefile b/arch/x86/lib/Makefile
+index 140e61843a07..3cb3af51ec89 100644
+--- a/arch/x86/lib/Makefile
++++ b/arch/x86/lib/Makefile
+@@ -6,6 +6,18 @@
+ # Produces uninteresting flaky coverage.
+ KCOV_INSTRUMENT_delay.o	:= n
+ 
++# Early boot use of cmdline; don't instrument it
++ifdef CONFIG_AMD_MEM_ENCRYPT
++KCOV_INSTRUMENT_cmdline.o := n
++KASAN_SANITIZE_cmdline.o  := n
++
++ifdef CONFIG_FUNCTION_TRACER
++CFLAGS_REMOVE_cmdline.o = -pg
++endif
++
++CFLAGS_cmdline.o := $(call cc-option, -fno-stack-protector)
++endif
++
+ inat_tables_script = $(srctree)/arch/x86/tools/gen-insn-attr-x86.awk
+ inat_tables_maps = $(srctree)/arch/x86/lib/x86-opcode-map.txt
+ quiet_cmd_inat_tables = GEN     $@
+diff --git a/arch/x86/mm/mpx.c b/arch/x86/mm/mpx.c
+index de1851d15699..ea17ff6c8588 100644
+--- a/arch/x86/mm/mpx.c
++++ b/arch/x86/mm/mpx.c
+@@ -881,9 +881,10 @@ static int mpx_unmap_tables(struct mm_struct *mm,
+  * the virtual address region start...end have already been split if
+  * necessary, and the 'vma' is the first vma in this range (start -> end).
+  */
+-void mpx_notify_unmap(struct mm_struct *mm, struct vm_area_struct *vma,
+-		unsigned long start, unsigned long end)
++void mpx_notify_unmap(struct mm_struct *mm, unsigned long start,
++		      unsigned long end)
+ {
++	struct vm_area_struct *vma;
+ 	int ret;
+ 
+ 	/*
+@@ -902,11 +903,12 @@ void mpx_notify_unmap(struct mm_struct *mm, struct vm_area_struct *vma,
+ 	 * which should not occur normally. Being strict about it here
+ 	 * helps ensure that we do not have an exploitable stack overflow.
+ 	 */
+-	do {
++	vma = find_vma(mm, start);
++	while (vma && vma->vm_start < end) {
+ 		if (vma->vm_flags & VM_MPX)
+ 			return;
+ 		vma = vma->vm_next;
+-	} while (vma && vma->vm_start < end);
++	}
+ 
+ 	ret = mpx_unmap_tables(mm, start, end);
+ 	if (ret)
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 5bde73a49399..6ba6d8805697 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -375,7 +375,7 @@ void blk_cleanup_queue(struct request_queue *q)
+ 	blk_exit_queue(q);
+ 
+ 	if (queue_is_mq(q))
+-		blk_mq_free_queue(q);
++		blk_mq_exit_queue(q);
+ 
+ 	percpu_ref_exit(&q->q_usage_counter);
+ 
+diff --git a/block/blk-mq-sysfs.c b/block/blk-mq-sysfs.c
+index 3f9c3f4ac44c..4040e62c3737 100644
+--- a/block/blk-mq-sysfs.c
++++ b/block/blk-mq-sysfs.c
+@@ -10,6 +10,7 @@
+ #include <linux/smp.h>
+ 
+ #include <linux/blk-mq.h>
++#include "blk.h"
+ #include "blk-mq.h"
+ #include "blk-mq-tag.h"
+ 
+@@ -33,6 +34,11 @@ static void blk_mq_hw_sysfs_release(struct kobject *kobj)
+ {
+ 	struct blk_mq_hw_ctx *hctx = container_of(kobj, struct blk_mq_hw_ctx,
+ 						  kobj);
++
++	if (hctx->flags & BLK_MQ_F_BLOCKING)
++		cleanup_srcu_struct(hctx->srcu);
++	blk_free_flush_queue(hctx->fq);
++	sbitmap_free(&hctx->ctx_map);
+ 	free_cpumask_var(hctx->cpumask);
+ 	kfree(hctx->ctxs);
+ 	kfree(hctx);
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 5b920a82bfe6..9957e0fc17fc 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -2270,12 +2270,7 @@ static void blk_mq_exit_hctx(struct request_queue *q,
+ 	if (set->ops->exit_hctx)
+ 		set->ops->exit_hctx(hctx, hctx_idx);
+ 
+-	if (hctx->flags & BLK_MQ_F_BLOCKING)
+-		cleanup_srcu_struct(hctx->srcu);
+-
+ 	blk_mq_remove_cpuhp(hctx);
+-	blk_free_flush_queue(hctx->fq);
+-	sbitmap_free(&hctx->ctx_map);
+ }
+ 
+ static void blk_mq_exit_hw_queues(struct request_queue *q,
+@@ -2904,7 +2899,8 @@ err_exit:
+ }
+ EXPORT_SYMBOL(blk_mq_init_allocated_queue);
+ 
+-void blk_mq_free_queue(struct request_queue *q)
++/* tags can _not_ be used after returning from blk_mq_exit_queue */
++void blk_mq_exit_queue(struct request_queue *q)
+ {
+ 	struct blk_mq_tag_set	*set = q->tag_set;
+ 
+diff --git a/block/blk-mq.h b/block/blk-mq.h
+index a3a684a8c633..39bc1d5d4637 100644
+--- a/block/blk-mq.h
++++ b/block/blk-mq.h
+@@ -36,7 +36,7 @@ struct blk_mq_ctx {
+ 	struct kobject		kobj;
+ } ____cacheline_aligned_in_smp;
+ 
+-void blk_mq_free_queue(struct request_queue *q);
++void blk_mq_exit_queue(struct request_queue *q);
+ int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr);
+ void blk_mq_wake_waiters(struct request_queue *q);
+ bool blk_mq_dispatch_rq_list(struct request_queue *, struct list_head *, bool);
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index d62487d02455..4add909e1a91 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -486,7 +486,7 @@ re_probe:
+ 	if (dev->bus->dma_configure) {
+ 		ret = dev->bus->dma_configure(dev);
+ 		if (ret)
+-			goto dma_failed;
++			goto probe_failed;
+ 	}
+ 
+ 	if (driver_sysfs_add(dev)) {
+@@ -542,14 +542,13 @@ re_probe:
+ 	goto done;
+ 
+ probe_failed:
+-	arch_teardown_dma_ops(dev);
+-dma_failed:
+ 	if (dev->bus)
+ 		blocking_notifier_call_chain(&dev->bus->p->bus_notifier,
+ 					     BUS_NOTIFY_DRIVER_NOT_BOUND, dev);
+ pinctrl_bind_failed:
+ 	device_links_no_driver(dev);
+ 	devres_release_all(dev);
++	arch_teardown_dma_ops(dev);
+ 	driver_sysfs_remove(dev);
+ 	dev->driver = NULL;
+ 	dev_set_drvdata(dev, NULL);
+diff --git a/drivers/block/brd.c b/drivers/block/brd.c
+index c18586fccb6f..17defbf4f332 100644
+--- a/drivers/block/brd.c
++++ b/drivers/block/brd.c
+@@ -96,13 +96,8 @@ static struct page *brd_insert_page(struct brd_device *brd, sector_t sector)
+ 	/*
+ 	 * Must use NOIO because we don't want to recurse back into the
+ 	 * block or filesystem layers from page reclaim.
+-	 *
+-	 * Cannot support DAX and highmem, because our ->direct_access
+-	 * routine for DAX must return memory that is always addressable.
+-	 * If DAX was reworked to use pfns and kmap throughout, this
+-	 * restriction might be able to be lifted.
+ 	 */
+-	gfp_flags = GFP_NOIO | __GFP_ZERO;
++	gfp_flags = GFP_NOIO | __GFP_ZERO | __GFP_HIGHMEM;
+ 	page = alloc_page(gfp_flags);
+ 	if (!page)
+ 		return NULL;
+diff --git a/drivers/clk/hisilicon/clk-hi3660.c b/drivers/clk/hisilicon/clk-hi3660.c
+index f40419959656..794eeff0d5d2 100644
+--- a/drivers/clk/hisilicon/clk-hi3660.c
++++ b/drivers/clk/hisilicon/clk-hi3660.c
+@@ -163,8 +163,12 @@ static const struct hisi_gate_clock hi3660_crgctrl_gate_sep_clks[] = {
+ 	  "clk_isp_snclk_mux", CLK_SET_RATE_PARENT, 0x50, 17, 0, },
+ 	{ HI3660_CLK_GATE_ISP_SNCLK2, "clk_gate_isp_snclk2",
+ 	  "clk_isp_snclk_mux", CLK_SET_RATE_PARENT, 0x50, 18, 0, },
++	/*
++	 * clk_gate_ufs_subsys is a system bus clock, mark it as critical
++	 * clock and keep it on for system suspend and resume.
++	 */
+ 	{ HI3660_CLK_GATE_UFS_SUBSYS, "clk_gate_ufs_subsys", "clk_div_sysbus",
+-	  CLK_SET_RATE_PARENT, 0x50, 21, 0, },
++	  CLK_SET_RATE_PARENT | CLK_IS_CRITICAL, 0x50, 21, 0, },
+ 	{ HI3660_PCLK_GATE_DSI0, "pclk_gate_dsi0", "clk_div_cfgbus",
+ 	  CLK_SET_RATE_PARENT, 0x50, 28, 0, },
+ 	{ HI3660_PCLK_GATE_DSI1, "pclk_gate_dsi1", "clk_div_cfgbus",
+diff --git a/drivers/clk/mediatek/clk-pll.c b/drivers/clk/mediatek/clk-pll.c
+index f54e4015b0b1..18842d660317 100644
+--- a/drivers/clk/mediatek/clk-pll.c
++++ b/drivers/clk/mediatek/clk-pll.c
+@@ -88,6 +88,32 @@ static unsigned long __mtk_pll_recalc_rate(struct mtk_clk_pll *pll, u32 fin,
+ 	return ((unsigned long)vco + postdiv - 1) / postdiv;
+ }
+ 
++static void __mtk_pll_tuner_enable(struct mtk_clk_pll *pll)
++{
++	u32 r;
++
++	if (pll->tuner_en_addr) {
++		r = readl(pll->tuner_en_addr) | BIT(pll->data->tuner_en_bit);
++		writel(r, pll->tuner_en_addr);
++	} else if (pll->tuner_addr) {
++		r = readl(pll->tuner_addr) | AUDPLL_TUNER_EN;
++		writel(r, pll->tuner_addr);
++	}
++}
++
++static void __mtk_pll_tuner_disable(struct mtk_clk_pll *pll)
++{
++	u32 r;
++
++	if (pll->tuner_en_addr) {
++		r = readl(pll->tuner_en_addr) & ~BIT(pll->data->tuner_en_bit);
++		writel(r, pll->tuner_en_addr);
++	} else if (pll->tuner_addr) {
++		r = readl(pll->tuner_addr) & ~AUDPLL_TUNER_EN;
++		writel(r, pll->tuner_addr);
++	}
++}
++
+ static void mtk_pll_set_rate_regs(struct mtk_clk_pll *pll, u32 pcw,
+ 		int postdiv)
+ {
+@@ -96,6 +122,9 @@ static void mtk_pll_set_rate_regs(struct mtk_clk_pll *pll, u32 pcw,
+ 
+ 	pll_en = readl(pll->base_addr + REG_CON0) & CON0_BASE_EN;
+ 
++	/* disable tuner */
++	__mtk_pll_tuner_disable(pll);
++
+ 	/* set postdiv */
+ 	val = readl(pll->pd_addr);
+ 	val &= ~(POSTDIV_MASK << pll->data->pd_shift);
+@@ -122,6 +151,9 @@ static void mtk_pll_set_rate_regs(struct mtk_clk_pll *pll, u32 pcw,
+ 	if (pll->tuner_addr)
+ 		writel(con1 + 1, pll->tuner_addr);
+ 
++	/* restore tuner_en */
++	__mtk_pll_tuner_enable(pll);
++
+ 	if (pll_en)
+ 		udelay(20);
+ }
+@@ -228,13 +260,7 @@ static int mtk_pll_prepare(struct clk_hw *hw)
+ 	r |= pll->data->en_mask;
+ 	writel(r, pll->base_addr + REG_CON0);
+ 
+-	if (pll->tuner_en_addr) {
+-		r = readl(pll->tuner_en_addr) | BIT(pll->data->tuner_en_bit);
+-		writel(r, pll->tuner_en_addr);
+-	} else if (pll->tuner_addr) {
+-		r = readl(pll->tuner_addr) | AUDPLL_TUNER_EN;
+-		writel(r, pll->tuner_addr);
+-	}
++	__mtk_pll_tuner_enable(pll);
+ 
+ 	udelay(20);
+ 
+@@ -258,13 +284,7 @@ static void mtk_pll_unprepare(struct clk_hw *hw)
+ 		writel(r, pll->base_addr + REG_CON0);
+ 	}
+ 
+-	if (pll->tuner_en_addr) {
+-		r = readl(pll->tuner_en_addr) & ~BIT(pll->data->tuner_en_bit);
+-		writel(r, pll->tuner_en_addr);
+-	} else if (pll->tuner_addr) {
+-		r = readl(pll->tuner_addr) & ~AUDPLL_TUNER_EN;
+-		writel(r, pll->tuner_addr);
+-	}
++	__mtk_pll_tuner_disable(pll);
+ 
+ 	r = readl(pll->base_addr + REG_CON0);
+ 	r &= ~CON0_BASE_EN;
+diff --git a/drivers/clk/rockchip/clk-rk3328.c b/drivers/clk/rockchip/clk-rk3328.c
+index 65ab5c2f48b0..f12142d9cea2 100644
+--- a/drivers/clk/rockchip/clk-rk3328.c
++++ b/drivers/clk/rockchip/clk-rk3328.c
+@@ -458,7 +458,7 @@ static struct rockchip_clk_branch rk3328_clk_branches[] __initdata = {
+ 			RK3328_CLKSEL_CON(35), 15, 1, MFLAGS, 8, 7, DFLAGS,
+ 			RK3328_CLKGATE_CON(2), 12, GFLAGS),
+ 	COMPOSITE(SCLK_CRYPTO, "clk_crypto", mux_2plls_p, 0,
+-			RK3328_CLKSEL_CON(20), 7, 1, MFLAGS, 0, 7, DFLAGS,
++			RK3328_CLKSEL_CON(20), 7, 1, MFLAGS, 0, 5, DFLAGS,
+ 			RK3328_CLKGATE_CON(2), 4, GFLAGS),
+ 	COMPOSITE_NOMUX(SCLK_TSADC, "clk_tsadc", "clk_24m", 0,
+ 			RK3328_CLKSEL_CON(22), 0, 10, DFLAGS,
+@@ -550,15 +550,15 @@ static struct rockchip_clk_branch rk3328_clk_branches[] __initdata = {
+ 	GATE(0, "hclk_rkvenc_niu", "hclk_rkvenc", 0,
+ 			RK3328_CLKGATE_CON(25), 1, GFLAGS),
+ 	GATE(ACLK_H265, "aclk_h265", "aclk_rkvenc", 0,
+-			RK3328_CLKGATE_CON(25), 0, GFLAGS),
++			RK3328_CLKGATE_CON(25), 2, GFLAGS),
+ 	GATE(PCLK_H265, "pclk_h265", "hclk_rkvenc", 0,
+-			RK3328_CLKGATE_CON(25), 1, GFLAGS),
++			RK3328_CLKGATE_CON(25), 3, GFLAGS),
+ 	GATE(ACLK_H264, "aclk_h264", "aclk_rkvenc", 0,
+-			RK3328_CLKGATE_CON(25), 0, GFLAGS),
++			RK3328_CLKGATE_CON(25), 4, GFLAGS),
+ 	GATE(HCLK_H264, "hclk_h264", "hclk_rkvenc", 0,
+-			RK3328_CLKGATE_CON(25), 1, GFLAGS),
++			RK3328_CLKGATE_CON(25), 5, GFLAGS),
+ 	GATE(ACLK_AXISRAM, "aclk_axisram", "aclk_rkvenc", CLK_IGNORE_UNUSED,
+-			RK3328_CLKGATE_CON(25), 0, GFLAGS),
++			RK3328_CLKGATE_CON(25), 6, GFLAGS),
+ 
+ 	COMPOSITE(SCLK_VENC_CORE, "sclk_venc_core", mux_4plls_p, 0,
+ 			RK3328_CLKSEL_CON(51), 14, 2, MFLAGS, 8, 5, DFLAGS,
+@@ -663,7 +663,7 @@ static struct rockchip_clk_branch rk3328_clk_branches[] __initdata = {
+ 
+ 	/* PD_GMAC */
+ 	COMPOSITE(ACLK_GMAC, "aclk_gmac", mux_2plls_hdmiphy_p, 0,
+-			RK3328_CLKSEL_CON(35), 6, 2, MFLAGS, 0, 5, DFLAGS,
++			RK3328_CLKSEL_CON(25), 6, 2, MFLAGS, 0, 5, DFLAGS,
+ 			RK3328_CLKGATE_CON(3), 2, GFLAGS),
+ 	COMPOSITE_NOMUX(PCLK_GMAC, "pclk_gmac", "aclk_gmac", 0,
+ 			RK3328_CLKSEL_CON(25), 8, 3, DFLAGS,
+@@ -733,7 +733,7 @@ static struct rockchip_clk_branch rk3328_clk_branches[] __initdata = {
+ 
+ 	/* PD_PERI */
+ 	GATE(0, "aclk_peri_noc", "aclk_peri", CLK_IGNORE_UNUSED, RK3328_CLKGATE_CON(19), 11, GFLAGS),
+-	GATE(ACLK_USB3OTG, "aclk_usb3otg", "aclk_peri", 0, RK3328_CLKGATE_CON(19), 4, GFLAGS),
++	GATE(ACLK_USB3OTG, "aclk_usb3otg", "aclk_peri", 0, RK3328_CLKGATE_CON(19), 14, GFLAGS),
+ 
+ 	GATE(HCLK_SDMMC, "hclk_sdmmc", "hclk_peri", 0, RK3328_CLKGATE_CON(19), 0, GFLAGS),
+ 	GATE(HCLK_SDIO, "hclk_sdio", "hclk_peri", 0, RK3328_CLKGATE_CON(19), 1, GFLAGS),
+@@ -913,7 +913,7 @@ static void __init rk3328_clk_init(struct device_node *np)
+ 				     &rk3328_cpuclk_data, rk3328_cpuclk_rates,
+ 				     ARRAY_SIZE(rk3328_cpuclk_rates));
+ 
+-	rockchip_register_softrst(np, 11, reg_base + RK3328_SOFTRST_CON(0),
++	rockchip_register_softrst(np, 12, reg_base + RK3328_SOFTRST_CON(0),
+ 				  ROCKCHIP_SOFTRST_HIWORD_MASK);
+ 
+ 	rockchip_register_restart_notifier(ctx, RK3328_GLB_SRST_FST, NULL);
+diff --git a/drivers/clk/sunxi-ng/ccu_nkmp.c b/drivers/clk/sunxi-ng/ccu_nkmp.c
+index 9b49adb20d07..69dfc6de1c4e 100644
+--- a/drivers/clk/sunxi-ng/ccu_nkmp.c
++++ b/drivers/clk/sunxi-ng/ccu_nkmp.c
+@@ -167,7 +167,7 @@ static int ccu_nkmp_set_rate(struct clk_hw *hw, unsigned long rate,
+ 			   unsigned long parent_rate)
+ {
+ 	struct ccu_nkmp *nkmp = hw_to_ccu_nkmp(hw);
+-	u32 n_mask, k_mask, m_mask, p_mask;
++	u32 n_mask = 0, k_mask = 0, m_mask = 0, p_mask = 0;
+ 	struct _ccu_nkmp _nkmp;
+ 	unsigned long flags;
+ 	u32 reg;
+@@ -186,10 +186,18 @@ static int ccu_nkmp_set_rate(struct clk_hw *hw, unsigned long rate,
+ 
+ 	ccu_nkmp_find_best(parent_rate, rate, &_nkmp);
+ 
+-	n_mask = GENMASK(nkmp->n.width + nkmp->n.shift - 1, nkmp->n.shift);
+-	k_mask = GENMASK(nkmp->k.width + nkmp->k.shift - 1, nkmp->k.shift);
+-	m_mask = GENMASK(nkmp->m.width + nkmp->m.shift - 1, nkmp->m.shift);
+-	p_mask = GENMASK(nkmp->p.width + nkmp->p.shift - 1, nkmp->p.shift);
++	if (nkmp->n.width)
++		n_mask = GENMASK(nkmp->n.width + nkmp->n.shift - 1,
++				 nkmp->n.shift);
++	if (nkmp->k.width)
++		k_mask = GENMASK(nkmp->k.width + nkmp->k.shift - 1,
++				 nkmp->k.shift);
++	if (nkmp->m.width)
++		m_mask = GENMASK(nkmp->m.width + nkmp->m.shift - 1,
++				 nkmp->m.shift);
++	if (nkmp->p.width)
++		p_mask = GENMASK(nkmp->p.width + nkmp->p.shift - 1,
++				 nkmp->p.shift);
+ 
+ 	spin_lock_irqsave(nkmp->common.lock, flags);
+ 
+diff --git a/drivers/clk/tegra/clk-pll.c b/drivers/clk/tegra/clk-pll.c
+index b50b7460014b..3e67cbcd80da 100644
+--- a/drivers/clk/tegra/clk-pll.c
++++ b/drivers/clk/tegra/clk-pll.c
+@@ -663,8 +663,8 @@ static void _update_pll_mnp(struct tegra_clk_pll *pll,
+ 		pll_override_writel(val, params->pmc_divp_reg, pll);
+ 
+ 		val = pll_override_readl(params->pmc_divnm_reg, pll);
+-		val &= ~(divm_mask(pll) << div_nmp->override_divm_shift) |
+-			~(divn_mask(pll) << div_nmp->override_divn_shift);
++		val &= ~((divm_mask(pll) << div_nmp->override_divm_shift) |
++			(divn_mask(pll) << div_nmp->override_divn_shift));
+ 		val |= (cfg->m << div_nmp->override_divm_shift) |
+ 			(cfg->n << div_nmp->override_divn_shift);
+ 		pll_override_writel(val, params->pmc_divnm_reg, pll);
+diff --git a/drivers/hwtracing/intel_th/msu.c b/drivers/hwtracing/intel_th/msu.c
+index ba7aaf421f36..8ff326c0c406 100644
+--- a/drivers/hwtracing/intel_th/msu.c
++++ b/drivers/hwtracing/intel_th/msu.c
+@@ -84,6 +84,7 @@ struct msc_iter {
+  * @reg_base:		register window base address
+  * @thdev:		intel_th_device pointer
+  * @win_list:		list of windows in multiblock mode
++ * @single_sgt:		single mode buffer
+  * @nr_pages:		total number of pages allocated for this buffer
+  * @single_sz:		amount of data in single mode
+  * @single_wrap:	single mode wrap occurred
+@@ -104,6 +105,7 @@ struct msc {
+ 	struct intel_th_device	*thdev;
+ 
+ 	struct list_head	win_list;
++	struct sg_table		single_sgt;
+ 	unsigned long		nr_pages;
+ 	unsigned long		single_sz;
+ 	unsigned int		single_wrap : 1;
+@@ -617,22 +619,45 @@ static void intel_th_msc_deactivate(struct intel_th_device *thdev)
+  */
+ static int msc_buffer_contig_alloc(struct msc *msc, unsigned long size)
+ {
++	unsigned long nr_pages = size >> PAGE_SHIFT;
+ 	unsigned int order = get_order(size);
+ 	struct page *page;
++	int ret;
+ 
+ 	if (!size)
+ 		return 0;
+ 
++	ret = sg_alloc_table(&msc->single_sgt, 1, GFP_KERNEL);
++	if (ret)
++		goto err_out;
++
++	ret = -ENOMEM;
+ 	page = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);
+ 	if (!page)
+-		return -ENOMEM;
++		goto err_free_sgt;
+ 
+ 	split_page(page, order);
+-	msc->nr_pages = size >> PAGE_SHIFT;
++	sg_set_buf(msc->single_sgt.sgl, page_address(page), size);
++
++	ret = dma_map_sg(msc_dev(msc)->parent->parent, msc->single_sgt.sgl, 1,
++			 DMA_FROM_DEVICE);
++	if (ret < 0)
++		goto err_free_pages;
++
++	msc->nr_pages = nr_pages;
+ 	msc->base = page_address(page);
+-	msc->base_addr = page_to_phys(page);
++	msc->base_addr = sg_dma_address(msc->single_sgt.sgl);
+ 
+ 	return 0;
++
++err_free_pages:
++	__free_pages(page, order);
++
++err_free_sgt:
++	sg_free_table(&msc->single_sgt);
++
++err_out:
++	return ret;
+ }
+ 
+ /**
+@@ -643,6 +668,10 @@ static void msc_buffer_contig_free(struct msc *msc)
+ {
+ 	unsigned long off;
+ 
++	dma_unmap_sg(msc_dev(msc)->parent->parent, msc->single_sgt.sgl,
++		     1, DMA_FROM_DEVICE);
++	sg_free_table(&msc->single_sgt);
++
+ 	for (off = 0; off < msc->nr_pages << PAGE_SHIFT; off += PAGE_SIZE) {
+ 		struct page *page = virt_to_page(msc->base + off);
+ 
+diff --git a/drivers/hwtracing/stm/core.c b/drivers/hwtracing/stm/core.c
+index c7ba8acfd4d5..e55b902560de 100644
+--- a/drivers/hwtracing/stm/core.c
++++ b/drivers/hwtracing/stm/core.c
+@@ -166,11 +166,10 @@ stm_master(struct stm_device *stm, unsigned int idx)
+ static int stp_master_alloc(struct stm_device *stm, unsigned int idx)
+ {
+ 	struct stp_master *master;
+-	size_t size;
+ 
+-	size = ALIGN(stm->data->sw_nchannels, 8) / 8;
+-	size += sizeof(struct stp_master);
+-	master = kzalloc(size, GFP_ATOMIC);
++	master = kzalloc(struct_size(master, chan_map,
++				     BITS_TO_LONGS(stm->data->sw_nchannels)),
++			 GFP_ATOMIC);
+ 	if (!master)
+ 		return -ENOMEM;
+ 
+@@ -218,8 +217,8 @@ stm_output_disclaim(struct stm_device *stm, struct stm_output *output)
+ 	bitmap_release_region(&master->chan_map[0], output->channel,
+ 			      ilog2(output->nr_chans));
+ 
+-	output->nr_chans = 0;
+ 	master->nr_free += output->nr_chans;
++	output->nr_chans = 0;
+ }
+ 
+ /*
+diff --git a/drivers/i2c/busses/i2c-designware-master.c b/drivers/i2c/busses/i2c-designware-master.c
+index bb8e3f149979..d464799e40a3 100644
+--- a/drivers/i2c/busses/i2c-designware-master.c
++++ b/drivers/i2c/busses/i2c-designware-master.c
+@@ -426,8 +426,7 @@ i2c_dw_xfer(struct i2c_adapter *adap, struct i2c_msg msgs[], int num)
+ 
+ 	pm_runtime_get_sync(dev->dev);
+ 
+-	if (dev->suspended) {
+-		dev_err(dev->dev, "Error %s call while suspended\n", __func__);
++	if (dev_WARN_ONCE(dev->dev, dev->suspended, "Transfer while suspended\n")) {
+ 		ret = -ESHUTDOWN;
+ 		goto done_nolock;
+ 	}
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index c6bdd0d16c4b..ca91f90b4ccc 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -1986,11 +1986,12 @@ static int mlx5_ib_mmap_clock_info_page(struct mlx5_ib_dev *dev,
+ 		return -EPERM;
+ 	vma->vm_flags &= ~VM_MAYWRITE;
+ 
+-	if (!dev->mdev->clock_info_page)
++	if (!dev->mdev->clock_info)
+ 		return -EOPNOTSUPP;
+ 
+ 	return rdma_user_mmap_page(&context->ibucontext, vma,
+-				   dev->mdev->clock_info_page, PAGE_SIZE);
++				   virt_to_page(dev->mdev->clock_info),
++				   PAGE_SIZE);
+ }
+ 
+ static int uar_mmap(struct mlx5_ib_dev *dev, enum mlx5_ib_mmap_cmd cmd,
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c
+index d932f99201d1..1851bc5e05ae 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c
+@@ -2402,7 +2402,18 @@ static ssize_t dev_id_show(struct device *dev,
+ {
+ 	struct net_device *ndev = to_net_dev(dev);
+ 
+-	if (ndev->dev_id == ndev->dev_port)
++	/*
++	 * ndev->dev_port will be equal to 0 in old kernel prior to commit
++	 * 9b8b2a323008 ("IB/ipoib: Use dev_port to expose network interface
++	 * port numbers") Zero was chosen as special case for user space
++	 * applications to fallback and query dev_id to check if it has
++	 * different value or not.
++	 *
++	 * Don't print warning in such scenario.
++	 *
++	 * https://github.com/systemd/systemd/blob/master/src/udev/udev-builtin-net_id.c#L358
++	 */
++	if (ndev->dev_port && ndev->dev_id == ndev->dev_port)
+ 		netdev_info_once(ndev,
+ 			"\"%s\" wants to know my dev_id. Should it look at dev_port instead? See Documentation/ABI/testing/sysfs-class-net for more info.\n",
+ 			current->comm);
+diff --git a/drivers/iommu/tegra-smmu.c b/drivers/iommu/tegra-smmu.c
+index 3a5c7dc6dc57..43fe59642930 100644
+--- a/drivers/iommu/tegra-smmu.c
++++ b/drivers/iommu/tegra-smmu.c
+@@ -102,7 +102,6 @@ static inline u32 smmu_readl(struct tegra_smmu *smmu, unsigned long offset)
+ #define  SMMU_TLB_FLUSH_VA_MATCH_ALL     (0 << 0)
+ #define  SMMU_TLB_FLUSH_VA_MATCH_SECTION (2 << 0)
+ #define  SMMU_TLB_FLUSH_VA_MATCH_GROUP   (3 << 0)
+-#define  SMMU_TLB_FLUSH_ASID(x)          (((x) & 0x7f) << 24)
+ #define  SMMU_TLB_FLUSH_VA_SECTION(addr) ((((addr) & 0xffc00000) >> 12) | \
+ 					  SMMU_TLB_FLUSH_VA_MATCH_SECTION)
+ #define  SMMU_TLB_FLUSH_VA_GROUP(addr)   ((((addr) & 0xffffc000) >> 12) | \
+@@ -205,8 +204,12 @@ static inline void smmu_flush_tlb_asid(struct tegra_smmu *smmu,
+ {
+ 	u32 value;
+ 
+-	value = SMMU_TLB_FLUSH_ASID_MATCH | SMMU_TLB_FLUSH_ASID(asid) |
+-		SMMU_TLB_FLUSH_VA_MATCH_ALL;
++	if (smmu->soc->num_asids == 4)
++		value = (asid & 0x3) << 29;
++	else
++		value = (asid & 0x7f) << 24;
++
++	value |= SMMU_TLB_FLUSH_ASID_MATCH | SMMU_TLB_FLUSH_VA_MATCH_ALL;
+ 	smmu_writel(smmu, value, SMMU_TLB_FLUSH);
+ }
+ 
+@@ -216,8 +219,12 @@ static inline void smmu_flush_tlb_section(struct tegra_smmu *smmu,
+ {
+ 	u32 value;
+ 
+-	value = SMMU_TLB_FLUSH_ASID_MATCH | SMMU_TLB_FLUSH_ASID(asid) |
+-		SMMU_TLB_FLUSH_VA_SECTION(iova);
++	if (smmu->soc->num_asids == 4)
++		value = (asid & 0x3) << 29;
++	else
++		value = (asid & 0x7f) << 24;
++
++	value |= SMMU_TLB_FLUSH_ASID_MATCH | SMMU_TLB_FLUSH_VA_SECTION(iova);
+ 	smmu_writel(smmu, value, SMMU_TLB_FLUSH);
+ }
+ 
+@@ -227,8 +234,12 @@ static inline void smmu_flush_tlb_group(struct tegra_smmu *smmu,
+ {
+ 	u32 value;
+ 
+-	value = SMMU_TLB_FLUSH_ASID_MATCH | SMMU_TLB_FLUSH_ASID(asid) |
+-		SMMU_TLB_FLUSH_VA_GROUP(iova);
++	if (smmu->soc->num_asids == 4)
++		value = (asid & 0x3) << 29;
++	else
++		value = (asid & 0x7f) << 24;
++
++	value |= SMMU_TLB_FLUSH_ASID_MATCH | SMMU_TLB_FLUSH_VA_GROUP(iova);
+ 	smmu_writel(smmu, value, SMMU_TLB_FLUSH);
+ }
+ 
+diff --git a/drivers/md/dm-cache-metadata.c b/drivers/md/dm-cache-metadata.c
+index 6fc93834da44..151aa95775be 100644
+--- a/drivers/md/dm-cache-metadata.c
++++ b/drivers/md/dm-cache-metadata.c
+@@ -1167,11 +1167,18 @@ static int __load_discards(struct dm_cache_metadata *cmd,
+ 		if (r)
+ 			return r;
+ 
+-		for (b = 0; b < from_dblock(cmd->discard_nr_blocks); b++) {
++		for (b = 0; ; b++) {
+ 			r = fn(context, cmd->discard_block_size, to_dblock(b),
+ 			       dm_bitset_cursor_get_value(&c));
+ 			if (r)
+ 				break;
++
++			if (b >= (from_dblock(cmd->discard_nr_blocks) - 1))
++				break;
++
++			r = dm_bitset_cursor_next(&c);
++			if (r)
++				break;
+ 		}
+ 
+ 		dm_bitset_cursor_end(&c);
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index dd538e6b2748..df39b07de800 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -949,6 +949,7 @@ static int crypt_integrity_ctr(struct crypt_config *cc, struct dm_target *ti)
+ {
+ #ifdef CONFIG_BLK_DEV_INTEGRITY
+ 	struct blk_integrity *bi = blk_get_integrity(cc->dev->bdev->bd_disk);
++	struct mapped_device *md = dm_table_get_md(ti->table);
+ 
+ 	/* From now we require underlying device with our integrity profile */
+ 	if (!bi || strcasecmp(bi->profile->name, "DM-DIF-EXT-TAG")) {
+@@ -968,7 +969,7 @@ static int crypt_integrity_ctr(struct crypt_config *cc, struct dm_target *ti)
+ 
+ 	if (crypt_integrity_aead(cc)) {
+ 		cc->integrity_tag_size = cc->on_disk_tag_size - cc->integrity_iv_size;
+-		DMINFO("Integrity AEAD, tag size %u, IV size %u.",
++		DMDEBUG("%s: Integrity AEAD, tag size %u, IV size %u.", dm_device_name(md),
+ 		       cc->integrity_tag_size, cc->integrity_iv_size);
+ 
+ 		if (crypto_aead_setauthsize(any_tfm_aead(cc), cc->integrity_tag_size)) {
+@@ -976,7 +977,7 @@ static int crypt_integrity_ctr(struct crypt_config *cc, struct dm_target *ti)
+ 			return -EINVAL;
+ 		}
+ 	} else if (cc->integrity_iv_size)
+-		DMINFO("Additional per-sector space %u bytes for IV.",
++		DMDEBUG("%s: Additional per-sector space %u bytes for IV.", dm_device_name(md),
+ 		       cc->integrity_iv_size);
+ 
+ 	if ((cc->integrity_tag_size + cc->integrity_iv_size) != bi->tag_size) {
+@@ -1890,7 +1891,7 @@ static int crypt_alloc_tfms_skcipher(struct crypt_config *cc, char *ciphermode)
+ 	 * algorithm implementation is used.  Help people debug performance
+ 	 * problems by logging the ->cra_driver_name.
+ 	 */
+-	DMINFO("%s using implementation \"%s\"", ciphermode,
++	DMDEBUG_LIMIT("%s using implementation \"%s\"", ciphermode,
+ 	       crypto_skcipher_alg(any_tfm(cc))->base.cra_driver_name);
+ 	return 0;
+ }
+@@ -1910,7 +1911,7 @@ static int crypt_alloc_tfms_aead(struct crypt_config *cc, char *ciphermode)
+ 		return err;
+ 	}
+ 
+-	DMINFO("%s using implementation \"%s\"", ciphermode,
++	DMDEBUG_LIMIT("%s using implementation \"%s\"", ciphermode,
+ 	       crypto_aead_alg(any_tfm_aead(cc))->base.cra_driver_name);
+ 	return 0;
+ }
+diff --git a/drivers/md/dm-delay.c b/drivers/md/dm-delay.c
+index fddffe251bf6..f496213f8b67 100644
+--- a/drivers/md/dm-delay.c
++++ b/drivers/md/dm-delay.c
+@@ -121,7 +121,8 @@ static void delay_dtr(struct dm_target *ti)
+ {
+ 	struct delay_c *dc = ti->private;
+ 
+-	destroy_workqueue(dc->kdelayd_wq);
++	if (dc->kdelayd_wq)
++		destroy_workqueue(dc->kdelayd_wq);
+ 
+ 	if (dc->read.dev)
+ 		dm_put_device(ti, dc->read.dev);
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index f535fd8ac82d..a4fe187d50d0 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -2568,7 +2568,7 @@ static int calculate_device_limits(struct dm_integrity_c *ic)
+ 		if (last_sector < ic->start || last_sector >= ic->meta_device_sectors)
+ 			return -EINVAL;
+ 	} else {
+-		__u64 meta_size = ic->provided_data_sectors * ic->tag_size;
++		__u64 meta_size = (ic->provided_data_sectors >> ic->sb->log2_sectors_per_block) * ic->tag_size;
+ 		meta_size = (meta_size + ((1U << (ic->log2_buffer_sectors + SECTOR_SHIFT)) - 1))
+ 				>> (ic->log2_buffer_sectors + SECTOR_SHIFT);
+ 		meta_size <<= ic->log2_buffer_sectors;
+@@ -3439,7 +3439,7 @@ try_smaller_buffer:
+ 	DEBUG_print("	journal_sections %u\n", (unsigned)le32_to_cpu(ic->sb->journal_sections));
+ 	DEBUG_print("	journal_entries %u\n", ic->journal_entries);
+ 	DEBUG_print("	log2_interleave_sectors %d\n", ic->sb->log2_interleave_sectors);
+-	DEBUG_print("	device_sectors 0x%llx\n", (unsigned long long)ic->device_sectors);
++	DEBUG_print("	data_device_sectors 0x%llx\n", (unsigned long long)ic->data_device_sectors);
+ 	DEBUG_print("	initial_sectors 0x%x\n", ic->initial_sectors);
+ 	DEBUG_print("	metadata_run 0x%x\n", ic->metadata_run);
+ 	DEBUG_print("	log2_metadata_run %d\n", ic->log2_metadata_run);
+diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
+index 2ee5e357a0a7..cc5173dfd466 100644
+--- a/drivers/md/dm-mpath.c
++++ b/drivers/md/dm-mpath.c
+@@ -882,6 +882,7 @@ static struct pgpath *parse_path(struct dm_arg_set *as, struct path_selector *ps
+ 	if (attached_handler_name || m->hw_handler_name) {
+ 		INIT_DELAYED_WORK(&p->activate_path, activate_path_work);
+ 		r = setup_scsi_dh(p->path.dev->bdev, m, &attached_handler_name, &ti->error);
++		kfree(attached_handler_name);
+ 		if (r) {
+ 			dm_put_device(ti, p->path.dev);
+ 			goto bad;
+@@ -896,7 +897,6 @@ static struct pgpath *parse_path(struct dm_arg_set *as, struct path_selector *ps
+ 
+ 	return p;
+  bad:
+-	kfree(attached_handler_name);
+ 	free_pgpath(p);
+ 	return ERR_PTR(r);
+ }
+diff --git a/drivers/md/dm-zoned-metadata.c b/drivers/md/dm-zoned-metadata.c
+index fa68336560c3..d8334cd45d7c 100644
+--- a/drivers/md/dm-zoned-metadata.c
++++ b/drivers/md/dm-zoned-metadata.c
+@@ -1169,6 +1169,9 @@ static int dmz_init_zones(struct dmz_metadata *zmd)
+ 			goto out;
+ 		}
+ 
++		if (!nr_blkz)
++			break;
++
+ 		/* Process report */
+ 		for (i = 0; i < nr_blkz; i++) {
+ 			ret = dmz_init_zone(zmd, zone, &blkz[i]);
+@@ -1204,6 +1207,8 @@ static int dmz_update_zone(struct dmz_metadata *zmd, struct dm_zone *zone)
+ 	/* Get zone information from disk */
+ 	ret = blkdev_report_zones(zmd->dev->bdev, dmz_start_sect(zmd, zone),
+ 				  &blkz, &nr_blkz, GFP_NOIO);
++	if (!nr_blkz)
++		ret = -EIO;
+ 	if (ret) {
+ 		dmz_dev_err(zmd->dev, "Get zone %u report failed",
+ 			    dmz_id(zmd, zone));
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 05ffffb8b769..295ff09cff4c 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -132,24 +132,6 @@ static inline int speed_max(struct mddev *mddev)
+ 		mddev->sync_speed_max : sysctl_speed_limit_max;
+ }
+ 
+-static void * flush_info_alloc(gfp_t gfp_flags, void *data)
+-{
+-        return kzalloc(sizeof(struct flush_info), gfp_flags);
+-}
+-static void flush_info_free(void *flush_info, void *data)
+-{
+-        kfree(flush_info);
+-}
+-
+-static void * flush_bio_alloc(gfp_t gfp_flags, void *data)
+-{
+-	return kzalloc(sizeof(struct flush_bio), gfp_flags);
+-}
+-static void flush_bio_free(void *flush_bio, void *data)
+-{
+-	kfree(flush_bio);
+-}
+-
+ static struct ctl_table_header *raid_table_header;
+ 
+ static struct ctl_table raid_table[] = {
+@@ -423,54 +405,31 @@ static int md_congested(void *data, int bits)
+ /*
+  * Generic flush handling for md
+  */
+-static void submit_flushes(struct work_struct *ws)
+-{
+-	struct flush_info *fi = container_of(ws, struct flush_info, flush_work);
+-	struct mddev *mddev = fi->mddev;
+-	struct bio *bio = fi->bio;
+-
+-	bio->bi_opf &= ~REQ_PREFLUSH;
+-	md_handle_request(mddev, bio);
+-
+-	mempool_free(fi, mddev->flush_pool);
+-}
+ 
+-static void md_end_flush(struct bio *fbio)
++static void md_end_flush(struct bio *bio)
+ {
+-	struct flush_bio *fb = fbio->bi_private;
+-	struct md_rdev *rdev = fb->rdev;
+-	struct flush_info *fi = fb->fi;
+-	struct bio *bio = fi->bio;
+-	struct mddev *mddev = fi->mddev;
++	struct md_rdev *rdev = bio->bi_private;
++	struct mddev *mddev = rdev->mddev;
+ 
+ 	rdev_dec_pending(rdev, mddev);
+ 
+-	if (atomic_dec_and_test(&fi->flush_pending)) {
+-		if (bio->bi_iter.bi_size == 0) {
+-			/* an empty barrier - all done */
+-			bio_endio(bio);
+-			mempool_free(fi, mddev->flush_pool);
+-		} else {
+-			INIT_WORK(&fi->flush_work, submit_flushes);
+-			queue_work(md_wq, &fi->flush_work);
+-		}
++	if (atomic_dec_and_test(&mddev->flush_pending)) {
++		/* The pre-request flush has finished */
++		queue_work(md_wq, &mddev->flush_work);
+ 	}
+-
+-	mempool_free(fb, mddev->flush_bio_pool);
+-	bio_put(fbio);
++	bio_put(bio);
+ }
+ 
+-void md_flush_request(struct mddev *mddev, struct bio *bio)
++static void md_submit_flush_data(struct work_struct *ws);
++
++static void submit_flushes(struct work_struct *ws)
+ {
++	struct mddev *mddev = container_of(ws, struct mddev, flush_work);
+ 	struct md_rdev *rdev;
+-	struct flush_info *fi;
+-
+-	fi = mempool_alloc(mddev->flush_pool, GFP_NOIO);
+-
+-	fi->bio = bio;
+-	fi->mddev = mddev;
+-	atomic_set(&fi->flush_pending, 1);
+ 
++	mddev->start_flush = ktime_get_boottime();
++	INIT_WORK(&mddev->flush_work, md_submit_flush_data);
++	atomic_set(&mddev->flush_pending, 1);
+ 	rcu_read_lock();
+ 	rdev_for_each_rcu(rdev, mddev)
+ 		if (rdev->raid_disk >= 0 &&
+@@ -480,37 +439,74 @@ void md_flush_request(struct mddev *mddev, struct bio *bio)
+ 			 * we reclaim rcu_read_lock
+ 			 */
+ 			struct bio *bi;
+-			struct flush_bio *fb;
+ 			atomic_inc(&rdev->nr_pending);
+ 			atomic_inc(&rdev->nr_pending);
+ 			rcu_read_unlock();
+-
+-			fb = mempool_alloc(mddev->flush_bio_pool, GFP_NOIO);
+-			fb->fi = fi;
+-			fb->rdev = rdev;
+-
+ 			bi = bio_alloc_mddev(GFP_NOIO, 0, mddev);
+-			bio_set_dev(bi, rdev->bdev);
+ 			bi->bi_end_io = md_end_flush;
+-			bi->bi_private = fb;
++			bi->bi_private = rdev;
++			bio_set_dev(bi, rdev->bdev);
+ 			bi->bi_opf = REQ_OP_WRITE | REQ_PREFLUSH;
+-
+-			atomic_inc(&fi->flush_pending);
++			atomic_inc(&mddev->flush_pending);
+ 			submit_bio(bi);
+-
+ 			rcu_read_lock();
+ 			rdev_dec_pending(rdev, mddev);
+ 		}
+ 	rcu_read_unlock();
++	if (atomic_dec_and_test(&mddev->flush_pending))
++		queue_work(md_wq, &mddev->flush_work);
++}
++
++static void md_submit_flush_data(struct work_struct *ws)
++{
++	struct mddev *mddev = container_of(ws, struct mddev, flush_work);
++	struct bio *bio = mddev->flush_bio;
++
++	/*
++	 * must reset flush_bio before calling into md_handle_request to avoid a
++	 * deadlock, because other bios passed md_handle_request suspend check
++	 * could wait for this and below md_handle_request could wait for those
++	 * bios because of suspend check
++	 */
++	mddev->last_flush = mddev->start_flush;
++	mddev->flush_bio = NULL;
++	wake_up(&mddev->sb_wait);
++
++	if (bio->bi_iter.bi_size == 0) {
++		/* an empty barrier - all done */
++		bio_endio(bio);
++	} else {
++		bio->bi_opf &= ~REQ_PREFLUSH;
++		md_handle_request(mddev, bio);
++	}
++}
+ 
+-	if (atomic_dec_and_test(&fi->flush_pending)) {
+-		if (bio->bi_iter.bi_size == 0) {
++void md_flush_request(struct mddev *mddev, struct bio *bio)
++{
++	ktime_t start = ktime_get_boottime();
++	spin_lock_irq(&mddev->lock);
++	wait_event_lock_irq(mddev->sb_wait,
++			    !mddev->flush_bio ||
++			    ktime_after(mddev->last_flush, start),
++			    mddev->lock);
++	if (!ktime_after(mddev->last_flush, start)) {
++		WARN_ON(mddev->flush_bio);
++		mddev->flush_bio = bio;
++		bio = NULL;
++	}
++	spin_unlock_irq(&mddev->lock);
++
++	if (!bio) {
++		INIT_WORK(&mddev->flush_work, submit_flushes);
++		queue_work(md_wq, &mddev->flush_work);
++	} else {
++		/* flush was performed for some other bio while we waited. */
++		if (bio->bi_iter.bi_size == 0)
+ 			/* an empty barrier - all done */
+ 			bio_endio(bio);
+-			mempool_free(fi, mddev->flush_pool);
+-		} else {
+-			INIT_WORK(&fi->flush_work, submit_flushes);
+-			queue_work(md_wq, &fi->flush_work);
++		else {
++			bio->bi_opf &= ~REQ_PREFLUSH;
++			mddev->pers->make_request(mddev, bio);
+ 		}
+ 	}
+ }
+@@ -560,6 +556,7 @@ void mddev_init(struct mddev *mddev)
+ 	atomic_set(&mddev->openers, 0);
+ 	atomic_set(&mddev->active_io, 0);
+ 	spin_lock_init(&mddev->lock);
++	atomic_set(&mddev->flush_pending, 0);
+ 	init_waitqueue_head(&mddev->sb_wait);
+ 	init_waitqueue_head(&mddev->recovery_wait);
+ 	mddev->reshape_position = MaxSector;
+@@ -2855,8 +2852,10 @@ state_store(struct md_rdev *rdev, const char *buf, size_t len)
+ 			err = 0;
+ 		}
+ 	} else if (cmd_match(buf, "re-add")) {
+-		if (test_bit(Faulty, &rdev->flags) && (rdev->raid_disk == -1) &&
+-			rdev->saved_raid_disk >= 0) {
++		if (!rdev->mddev->pers)
++			err = -EINVAL;
++		else if (test_bit(Faulty, &rdev->flags) && (rdev->raid_disk == -1) &&
++				rdev->saved_raid_disk >= 0) {
+ 			/* clear_bit is performed _after_ all the devices
+ 			 * have their local Faulty bit cleared. If any writes
+ 			 * happen in the meantime in the local node, they
+@@ -5511,22 +5510,6 @@ int md_run(struct mddev *mddev)
+ 		if (err)
+ 			return err;
+ 	}
+-	if (mddev->flush_pool == NULL) {
+-		mddev->flush_pool = mempool_create(NR_FLUSH_INFOS, flush_info_alloc,
+-						flush_info_free, mddev);
+-		if (!mddev->flush_pool) {
+-			err = -ENOMEM;
+-			goto abort;
+-		}
+-	}
+-	if (mddev->flush_bio_pool == NULL) {
+-		mddev->flush_bio_pool = mempool_create(NR_FLUSH_BIOS, flush_bio_alloc,
+-						flush_bio_free, mddev);
+-		if (!mddev->flush_bio_pool) {
+-			err = -ENOMEM;
+-			goto abort;
+-		}
+-	}
+ 
+ 	spin_lock(&pers_lock);
+ 	pers = find_pers(mddev->level, mddev->clevel);
+@@ -5686,11 +5669,8 @@ int md_run(struct mddev *mddev)
+ 	return 0;
+ 
+ abort:
+-	mempool_destroy(mddev->flush_bio_pool);
+-	mddev->flush_bio_pool = NULL;
+-	mempool_destroy(mddev->flush_pool);
+-	mddev->flush_pool = NULL;
+-
++	bioset_exit(&mddev->bio_set);
++	bioset_exit(&mddev->sync_set);
+ 	return err;
+ }
+ EXPORT_SYMBOL_GPL(md_run);
+@@ -5894,14 +5874,6 @@ static void __md_stop(struct mddev *mddev)
+ 		mddev->to_remove = &md_redundancy_group;
+ 	module_put(pers->owner);
+ 	clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+-	if (mddev->flush_bio_pool) {
+-		mempool_destroy(mddev->flush_bio_pool);
+-		mddev->flush_bio_pool = NULL;
+-	}
+-	if (mddev->flush_pool) {
+-		mempool_destroy(mddev->flush_pool);
+-		mddev->flush_pool = NULL;
+-	}
+ }
+ 
+ void md_stop(struct mddev *mddev)
+@@ -9257,7 +9229,7 @@ static void check_sb_changes(struct mddev *mddev, struct md_rdev *rdev)
+ 		 * reshape is happening in the remote node, we need to
+ 		 * update reshape_position and call start_reshape.
+ 		 */
+-		mddev->reshape_position = sb->reshape_position;
++		mddev->reshape_position = le64_to_cpu(sb->reshape_position);
+ 		if (mddev->pers->update_reshape_pos)
+ 			mddev->pers->update_reshape_pos(mddev);
+ 		if (mddev->pers->start_reshape)
+diff --git a/drivers/md/md.h b/drivers/md/md.h
+index c52afb52c776..257cb4c9e22b 100644
+--- a/drivers/md/md.h
++++ b/drivers/md/md.h
+@@ -252,19 +252,6 @@ enum mddev_sb_flags {
+ 	MD_SB_NEED_REWRITE,	/* metadata write needs to be repeated */
+ };
+ 
+-#define NR_FLUSH_INFOS 8
+-#define NR_FLUSH_BIOS 64
+-struct flush_info {
+-	struct bio			*bio;
+-	struct mddev			*mddev;
+-	struct work_struct		flush_work;
+-	atomic_t			flush_pending;
+-};
+-struct flush_bio {
+-	struct flush_info *fi;
+-	struct md_rdev *rdev;
+-};
+-
+ struct mddev {
+ 	void				*private;
+ 	struct md_personality		*pers;
+@@ -470,8 +457,16 @@ struct mddev {
+ 						   * metadata and bitmap writes
+ 						   */
+ 
+-	mempool_t			*flush_pool;
+-	mempool_t			*flush_bio_pool;
++	/* Generic flush handling.
++	 * The last to finish preflush schedules a worker to submit
++	 * the rest of the request (without the REQ_PREFLUSH flag).
++	 */
++	struct bio *flush_bio;
++	atomic_t flush_pending;
++	ktime_t start_flush, last_flush; /* last_flush is when the last completed
++					  * flush was started.
++					  */
++	struct work_struct flush_work;
+ 	struct work_struct event_work;	/* used by dm to report failure event */
+ 	void (*sync_super)(struct mddev *mddev, struct md_rdev *rdev);
+ 	struct md_cluster_info		*cluster_info;
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index 3ae13c06b200..f9c90ab220b9 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -4197,7 +4197,7 @@ static void handle_parity_checks6(struct r5conf *conf, struct stripe_head *sh,
+ 		/* now write out any block on a failed drive,
+ 		 * or P or Q if they were recomputed
+ 		 */
+-		BUG_ON(s->uptodate < disks - 1); /* We don't need Q to recover */
++		dev = NULL;
+ 		if (s->failed == 2) {
+ 			dev = &sh->dev[s->failed_num[1]];
+ 			s->locked++;
+@@ -4222,6 +4222,14 @@ static void handle_parity_checks6(struct r5conf *conf, struct stripe_head *sh,
+ 			set_bit(R5_LOCKED, &dev->flags);
+ 			set_bit(R5_Wantwrite, &dev->flags);
+ 		}
++		if (WARN_ONCE(dev && !test_bit(R5_UPTODATE, &dev->flags),
++			      "%s: disk%td not up to date\n",
++			      mdname(conf->mddev),
++			      dev - (struct r5dev *) &sh->dev)) {
++			clear_bit(R5_LOCKED, &dev->flags);
++			clear_bit(R5_Wantwrite, &dev->flags);
++			s->locked--;
++		}
+ 		clear_bit(STRIPE_DEGRADED, &sh->state);
+ 
+ 		set_bit(STRIPE_INSYNC, &sh->state);
+@@ -4233,15 +4241,26 @@ static void handle_parity_checks6(struct r5conf *conf, struct stripe_head *sh,
+ 	case check_state_check_result:
+ 		sh->check_state = check_state_idle;
+ 
+-		if (s->failed > 1)
+-			break;
+ 		/* handle a successful check operation, if parity is correct
+ 		 * we are done.  Otherwise update the mismatch count and repair
+ 		 * parity if !MD_RECOVERY_CHECK
+ 		 */
+ 		if (sh->ops.zero_sum_result == 0) {
+-			/* Any parity checked was correct */
+-			set_bit(STRIPE_INSYNC, &sh->state);
++			/* both parities are correct */
++			if (!s->failed)
++				set_bit(STRIPE_INSYNC, &sh->state);
++			else {
++				/* in contrast to the raid5 case we can validate
++				 * parity, but still have a failure to write
++				 * back
++				 */
++				sh->check_state = check_state_compute_result;
++				/* Returning at this point means that we may go
++				 * off and bring p and/or q uptodate again so
++				 * we make sure to check zero_sum_result again
++				 * to verify if p or q need writeback
++				 */
++			}
+ 		} else {
+ 			atomic64_add(STRIPE_SECTORS, &conf->mddev->resync_mismatches);
+ 			if (test_bit(MD_RECOVERY_CHECK, &conf->mddev->recovery)) {
+diff --git a/drivers/media/i2c/ov6650.c b/drivers/media/i2c/ov6650.c
+index 5d1b218bb7f0..2d3f7e00b129 100644
+--- a/drivers/media/i2c/ov6650.c
++++ b/drivers/media/i2c/ov6650.c
+@@ -814,6 +814,8 @@ static int ov6650_video_probe(struct i2c_client *client)
+ 	if (ret < 0)
+ 		return ret;
+ 
++	msleep(20);
++
+ 	/*
+ 	 * check and show product ID and manufacturer ID
+ 	 */
+diff --git a/drivers/memory/tegra/mc.c b/drivers/memory/tegra/mc.c
+index 24afc36833bf..1608a482f681 100644
+--- a/drivers/memory/tegra/mc.c
++++ b/drivers/memory/tegra/mc.c
+@@ -280,7 +280,7 @@ static int tegra_mc_setup_latency_allowance(struct tegra_mc *mc)
+ 	u32 value;
+ 
+ 	/* compute the number of MC clock cycles per tick */
+-	tick = mc->tick * clk_get_rate(mc->clk);
++	tick = (unsigned long long)mc->tick * clk_get_rate(mc->clk);
+ 	do_div(tick, NSEC_PER_SEC);
+ 
+ 	value = readl(mc->regs + MC_EMEM_ARB_CFG);
+diff --git a/drivers/net/Makefile b/drivers/net/Makefile
+index 21cde7e78621..0d3ba056cda3 100644
+--- a/drivers/net/Makefile
++++ b/drivers/net/Makefile
+@@ -40,7 +40,7 @@ obj-$(CONFIG_ARCNET) += arcnet/
+ obj-$(CONFIG_DEV_APPLETALK) += appletalk/
+ obj-$(CONFIG_CAIF) += caif/
+ obj-$(CONFIG_CAN) += can/
+-obj-$(CONFIG_NET_DSA) += dsa/
++obj-y += dsa/
+ obj-$(CONFIG_ETHERNET) += ethernet/
+ obj-$(CONFIG_FDDI) += fddi/
+ obj-$(CONFIG_HIPPI) += hippi/
+diff --git a/drivers/net/ethernet/mellanox/mlx4/mcg.c b/drivers/net/ethernet/mellanox/mlx4/mcg.c
+index ffed2d4c9403..9c481823b3e8 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/mcg.c
++++ b/drivers/net/ethernet/mellanox/mlx4/mcg.c
+@@ -1492,7 +1492,7 @@ int mlx4_flow_steer_promisc_add(struct mlx4_dev *dev, u8 port,
+ 	rule.port = port;
+ 	rule.qpn = qpn;
+ 	INIT_LIST_HEAD(&rule.list);
+-	mlx4_err(dev, "going promisc on %x\n", port);
++	mlx4_info(dev, "going promisc on %x\n", port);
+ 
+ 	return  mlx4_flow_attach(dev, &rule, regid_p);
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/Kconfig b/drivers/net/ethernet/mellanox/mlx5/core/Kconfig
+index 37a551436e4a..b7e3b8902e7e 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/Kconfig
++++ b/drivers/net/ethernet/mellanox/mlx5/core/Kconfig
+@@ -8,6 +8,7 @@ config MLX5_CORE
+ 	depends on PCI
+ 	imply PTP_1588_CLOCK
+ 	imply VXLAN
++	imply MLXFW
+ 	default n
+ 	---help---
+ 	  Core driver for low level functionality of the ConnectX-4 and
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+index 253496c4a3db..a908e29ddb7b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+@@ -1802,6 +1802,22 @@ static int mlx5e_flash_device(struct net_device *dev,
+ 	return mlx5e_ethtool_flash_device(priv, flash);
+ }
+ 
++#ifndef CONFIG_MLX5_EN_RXNFC
++/* When CONFIG_MLX5_EN_RXNFC=n we only support ETHTOOL_GRXRINGS
++ * otherwise this function will be defined from en_fs_ethtool.c
++ */
++static int mlx5e_get_rxnfc(struct net_device *dev, struct ethtool_rxnfc *info, u32 *rule_locs)
++{
++	struct mlx5e_priv *priv = netdev_priv(dev);
++
++	if (info->cmd != ETHTOOL_GRXRINGS)
++		return -EOPNOTSUPP;
++	/* ring_count is needed by ethtool -x */
++	info->data = priv->channels.params.num_channels;
++	return 0;
++}
++#endif
++
+ const struct ethtool_ops mlx5e_ethtool_ops = {
+ 	.get_drvinfo       = mlx5e_get_drvinfo,
+ 	.get_link          = ethtool_op_get_link,
+@@ -1820,8 +1836,8 @@ const struct ethtool_ops mlx5e_ethtool_ops = {
+ 	.get_rxfh_indir_size = mlx5e_get_rxfh_indir_size,
+ 	.get_rxfh          = mlx5e_get_rxfh,
+ 	.set_rxfh          = mlx5e_set_rxfh,
+-#ifdef CONFIG_MLX5_EN_RXNFC
+ 	.get_rxnfc         = mlx5e_get_rxnfc,
++#ifdef CONFIG_MLX5_EN_RXNFC
+ 	.set_rxnfc         = mlx5e_set_rxnfc,
+ #endif
+ 	.flash_device      = mlx5e_flash_device,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+index ef9e472daffb..3977f763b6ed 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+@@ -64,9 +64,26 @@ static void mlx5e_rep_indr_unregister_block(struct mlx5e_rep_priv *rpriv,
+ static void mlx5e_rep_get_drvinfo(struct net_device *dev,
+ 				  struct ethtool_drvinfo *drvinfo)
+ {
++	struct mlx5e_priv *priv = netdev_priv(dev);
++	struct mlx5_core_dev *mdev = priv->mdev;
++
+ 	strlcpy(drvinfo->driver, mlx5e_rep_driver_name,
+ 		sizeof(drvinfo->driver));
+ 	strlcpy(drvinfo->version, UTS_RELEASE, sizeof(drvinfo->version));
++	snprintf(drvinfo->fw_version, sizeof(drvinfo->fw_version),
++		 "%d.%d.%04d (%.16s)",
++		 fw_rev_maj(mdev), fw_rev_min(mdev),
++		 fw_rev_sub(mdev), mdev->board_id);
++}
++
++static void mlx5e_uplink_rep_get_drvinfo(struct net_device *dev,
++					 struct ethtool_drvinfo *drvinfo)
++{
++	struct mlx5e_priv *priv = netdev_priv(dev);
++
++	mlx5e_rep_get_drvinfo(dev, drvinfo);
++	strlcpy(drvinfo->bus_info, pci_name(priv->mdev->pdev),
++		sizeof(drvinfo->bus_info));
+ }
+ 
+ static const struct counter_desc sw_rep_stats_desc[] = {
+@@ -374,7 +391,7 @@ static const struct ethtool_ops mlx5e_vf_rep_ethtool_ops = {
+ };
+ 
+ static const struct ethtool_ops mlx5e_uplink_rep_ethtool_ops = {
+-	.get_drvinfo	   = mlx5e_rep_get_drvinfo,
++	.get_drvinfo	   = mlx5e_uplink_rep_get_drvinfo,
+ 	.get_link	   = ethtool_op_get_link,
+ 	.get_strings       = mlx5e_rep_get_strings,
+ 	.get_sset_count    = mlx5e_rep_get_sset_count,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 79f122b45def..abbdd4906984 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -1375,6 +1375,8 @@ static bool mlx5_flow_dests_cmp(struct mlx5_flow_destination *d1,
+ 		if ((d1->type == MLX5_FLOW_DESTINATION_TYPE_VPORT &&
+ 		     d1->vport.num == d2->vport.num &&
+ 		     d1->vport.flags == d2->vport.flags &&
++		     ((d1->vport.flags & MLX5_FLOW_DEST_VPORT_VHCA_ID) ?
++		      (d1->vport.vhca_id == d2->vport.vhca_id) : true) &&
+ 		     ((d1->vport.flags & MLX5_FLOW_DEST_VPORT_REFORMAT_ID) ?
+ 		      (d1->vport.reformat_id == d2->vport.reformat_id) : true)) ||
+ 		    (d1->type == MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE &&
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
+index ca0ee9916e9e..0059b290e095 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
+@@ -535,23 +535,16 @@ void mlx5_init_clock(struct mlx5_core_dev *mdev)
+ 	do_div(ns, NSEC_PER_SEC / HZ);
+ 	clock->overflow_period = ns;
+ 
+-	mdev->clock_info_page = alloc_page(GFP_KERNEL);
+-	if (mdev->clock_info_page) {
+-		mdev->clock_info = kmap(mdev->clock_info_page);
+-		if (!mdev->clock_info) {
+-			__free_page(mdev->clock_info_page);
+-			mlx5_core_warn(mdev, "failed to map clock page\n");
+-		} else {
+-			mdev->clock_info->sign   = 0;
+-			mdev->clock_info->nsec   = clock->tc.nsec;
+-			mdev->clock_info->cycles = clock->tc.cycle_last;
+-			mdev->clock_info->mask   = clock->cycles.mask;
+-			mdev->clock_info->mult   = clock->nominal_c_mult;
+-			mdev->clock_info->shift  = clock->cycles.shift;
+-			mdev->clock_info->frac   = clock->tc.frac;
+-			mdev->clock_info->overflow_period =
+-						clock->overflow_period;
+-		}
++	mdev->clock_info =
++		(struct mlx5_ib_clock_info *)get_zeroed_page(GFP_KERNEL);
++	if (mdev->clock_info) {
++		mdev->clock_info->nsec = clock->tc.nsec;
++		mdev->clock_info->cycles = clock->tc.cycle_last;
++		mdev->clock_info->mask = clock->cycles.mask;
++		mdev->clock_info->mult = clock->nominal_c_mult;
++		mdev->clock_info->shift = clock->cycles.shift;
++		mdev->clock_info->frac = clock->tc.frac;
++		mdev->clock_info->overflow_period = clock->overflow_period;
+ 	}
+ 
+ 	INIT_WORK(&clock->pps_info.out_work, mlx5_pps_out);
+@@ -599,8 +592,7 @@ void mlx5_cleanup_clock(struct mlx5_core_dev *mdev)
+ 	cancel_delayed_work_sync(&clock->overflow_work);
+ 
+ 	if (mdev->clock_info) {
+-		kunmap(mdev->clock_info_page);
+-		__free_page(mdev->clock_info_page);
++		free_page((unsigned long)mdev->clock_info);
+ 		mdev->clock_info = NULL;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
+index 2d9f26a725c2..37bd2dbcd206 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/tunnel_conf.c
+@@ -164,6 +164,7 @@ void nfp_tunnel_keep_alive(struct nfp_app *app, struct sk_buff *skb)
+ 		return;
+ 	}
+ 
++	rcu_read_lock();
+ 	for (i = 0; i < count; i++) {
+ 		ipv4_addr = payload->tun_info[i].ipv4;
+ 		port = be32_to_cpu(payload->tun_info[i].egress_port);
+@@ -179,6 +180,7 @@ void nfp_tunnel_keep_alive(struct nfp_app *app, struct sk_buff *skb)
+ 		neigh_event_send(n, NULL);
+ 		neigh_release(n);
+ 	}
++	rcu_read_unlock();
+ }
+ 
+ static int
+@@ -362,9 +364,10 @@ void nfp_tunnel_request_route(struct nfp_app *app, struct sk_buff *skb)
+ 
+ 	payload = nfp_flower_cmsg_get_data(skb);
+ 
++	rcu_read_lock();
+ 	netdev = nfp_app_repr_get(app, be32_to_cpu(payload->ingress_port));
+ 	if (!netdev)
+-		goto route_fail_warning;
++		goto fail_rcu_unlock;
+ 
+ 	flow.daddr = payload->ipv4_addr;
+ 	flow.flowi4_proto = IPPROTO_UDP;
+@@ -374,21 +377,23 @@ void nfp_tunnel_request_route(struct nfp_app *app, struct sk_buff *skb)
+ 	rt = ip_route_output_key(dev_net(netdev), &flow);
+ 	err = PTR_ERR_OR_ZERO(rt);
+ 	if (err)
+-		goto route_fail_warning;
++		goto fail_rcu_unlock;
+ #else
+-	goto route_fail_warning;
++	goto fail_rcu_unlock;
+ #endif
+ 
+ 	/* Get the neighbour entry for the lookup */
+ 	n = dst_neigh_lookup(&rt->dst, &flow.daddr);
+ 	ip_rt_put(rt);
+ 	if (!n)
+-		goto route_fail_warning;
+-	nfp_tun_write_neigh(n->dev, app, &flow, n, GFP_KERNEL);
++		goto fail_rcu_unlock;
++	nfp_tun_write_neigh(n->dev, app, &flow, n, GFP_ATOMIC);
+ 	neigh_release(n);
++	rcu_read_unlock();
+ 	return;
+ 
+-route_fail_warning:
++fail_rcu_unlock:
++	rcu_read_unlock();
+ 	nfp_flower_cmsg_warn(app, "Requested route not found.\n");
+ }
+ 
+diff --git a/drivers/net/ieee802154/mcr20a.c b/drivers/net/ieee802154/mcr20a.c
+index c589f5ae75bb..8bb53ec8d9cf 100644
+--- a/drivers/net/ieee802154/mcr20a.c
++++ b/drivers/net/ieee802154/mcr20a.c
+@@ -533,6 +533,8 @@ mcr20a_start(struct ieee802154_hw *hw)
+ 	dev_dbg(printdev(lp), "no slotted operation\n");
+ 	ret = regmap_update_bits(lp->regmap_dar, DAR_PHY_CTRL1,
+ 				 DAR_PHY_CTRL1_SLOTTED, 0x0);
++	if (ret < 0)
++		return ret;
+ 
+ 	/* enable irq */
+ 	enable_irq(lp->spi->irq);
+@@ -540,11 +542,15 @@ mcr20a_start(struct ieee802154_hw *hw)
+ 	/* Unmask SEQ interrupt */
+ 	ret = regmap_update_bits(lp->regmap_dar, DAR_PHY_CTRL2,
+ 				 DAR_PHY_CTRL2_SEQMSK, 0x0);
++	if (ret < 0)
++		return ret;
+ 
+ 	/* Start the RX sequence */
+ 	dev_dbg(printdev(lp), "start the RX sequence\n");
+ 	ret = regmap_update_bits(lp->regmap_dar, DAR_PHY_CTRL1,
+ 				 DAR_PHY_CTRL1_XCVSEQ_MASK, MCR20A_XCVSEQ_RX);
++	if (ret < 0)
++		return ret;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ppp/ppp_deflate.c b/drivers/net/ppp/ppp_deflate.c
+index b5edc7f96a39..685e875f5164 100644
+--- a/drivers/net/ppp/ppp_deflate.c
++++ b/drivers/net/ppp/ppp_deflate.c
+@@ -610,12 +610,20 @@ static struct compressor ppp_deflate_draft = {
+ 
+ static int __init deflate_init(void)
+ {
+-        int answer = ppp_register_compressor(&ppp_deflate);
+-        if (answer == 0)
+-                printk(KERN_INFO
+-		       "PPP Deflate Compression module registered\n");
+-	ppp_register_compressor(&ppp_deflate_draft);
+-        return answer;
++	int rc;
++
++	rc = ppp_register_compressor(&ppp_deflate);
++	if (rc)
++		return rc;
++
++	rc = ppp_register_compressor(&ppp_deflate_draft);
++	if (rc) {
++		ppp_unregister_compressor(&ppp_deflate);
++		return rc;
++	}
++
++	pr_info("PPP Deflate Compression module registered\n");
++	return 0;
+ }
+ 
+ static void __exit deflate_cleanup(void)
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 9195f3476b1d..366217263d70 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1122,9 +1122,16 @@ static const struct usb_device_id products[] = {
+ 	{QMI_FIXED_INTF(0x0846, 0x68d3, 8)},	/* Netgear Aircard 779S */
+ 	{QMI_FIXED_INTF(0x12d1, 0x140c, 1)},	/* Huawei E173 */
+ 	{QMI_FIXED_INTF(0x12d1, 0x14ac, 1)},	/* Huawei E1820 */
++	{QMI_FIXED_INTF(0x1435, 0x0918, 3)},	/* Wistron NeWeb D16Q1 */
++	{QMI_FIXED_INTF(0x1435, 0x0918, 4)},	/* Wistron NeWeb D16Q1 */
++	{QMI_FIXED_INTF(0x1435, 0x0918, 5)},	/* Wistron NeWeb D16Q1 */
++	{QMI_FIXED_INTF(0x1435, 0x3185, 4)},	/* Wistron NeWeb M18Q5 */
++	{QMI_FIXED_INTF(0x1435, 0xd111, 4)},	/* M9615A DM11-1 D51QC */
+ 	{QMI_FIXED_INTF(0x1435, 0xd181, 3)},	/* Wistron NeWeb D18Q1 */
+ 	{QMI_FIXED_INTF(0x1435, 0xd181, 4)},	/* Wistron NeWeb D18Q1 */
+ 	{QMI_FIXED_INTF(0x1435, 0xd181, 5)},	/* Wistron NeWeb D18Q1 */
++	{QMI_FIXED_INTF(0x1435, 0xd182, 4)},	/* Wistron NeWeb D18 */
++	{QMI_FIXED_INTF(0x1435, 0xd182, 5)},	/* Wistron NeWeb D18 */
+ 	{QMI_FIXED_INTF(0x1435, 0xd191, 4)},	/* Wistron NeWeb D19Q1 */
+ 	{QMI_QUIRK_SET_DTR(0x1508, 0x1001, 4)},	/* Fibocom NL668 series */
+ 	{QMI_FIXED_INTF(0x16d8, 0x6003, 0)},	/* CMOTech 6003 */
+@@ -1180,6 +1187,7 @@ static const struct usb_device_id products[] = {
+ 	{QMI_FIXED_INTF(0x19d2, 0x0265, 4)},	/* ONDA MT8205 4G LTE */
+ 	{QMI_FIXED_INTF(0x19d2, 0x0284, 4)},	/* ZTE MF880 */
+ 	{QMI_FIXED_INTF(0x19d2, 0x0326, 4)},	/* ZTE MF821D */
++	{QMI_FIXED_INTF(0x19d2, 0x0396, 3)},	/* ZTE ZM8620 */
+ 	{QMI_FIXED_INTF(0x19d2, 0x0412, 4)},	/* Telewell TW-LTE 4G */
+ 	{QMI_FIXED_INTF(0x19d2, 0x1008, 4)},	/* ZTE (Vodafone) K3570-Z */
+ 	{QMI_FIXED_INTF(0x19d2, 0x1010, 4)},	/* ZTE (Vodafone) K3571-Z */
+@@ -1200,7 +1208,9 @@ static const struct usb_device_id products[] = {
+ 	{QMI_FIXED_INTF(0x19d2, 0x1425, 2)},
+ 	{QMI_FIXED_INTF(0x19d2, 0x1426, 2)},	/* ZTE MF91 */
+ 	{QMI_FIXED_INTF(0x19d2, 0x1428, 2)},	/* Telewell TW-LTE 4G v2 */
++	{QMI_FIXED_INTF(0x19d2, 0x1432, 3)},	/* ZTE ME3620 */
+ 	{QMI_FIXED_INTF(0x19d2, 0x2002, 4)},	/* ZTE (Vodafone) K3765-Z */
++	{QMI_FIXED_INTF(0x2001, 0x7e16, 3)},	/* D-Link DWM-221 */
+ 	{QMI_FIXED_INTF(0x2001, 0x7e19, 4)},	/* D-Link DWM-221 B1 */
+ 	{QMI_FIXED_INTF(0x2001, 0x7e35, 4)},	/* D-Link DWM-222 */
+ 	{QMI_FIXED_INTF(0x2020, 0x2031, 4)},	/* Olicard 600 */
+@@ -1240,6 +1250,8 @@ static const struct usb_device_id products[] = {
+ 	{QMI_FIXED_INTF(0x1bc7, 0x1101, 3)},	/* Telit ME910 dual modem */
+ 	{QMI_FIXED_INTF(0x1bc7, 0x1200, 5)},	/* Telit LE920 */
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1201, 2)},	/* Telit LE920, LE920A4 */
++	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1260, 2)},	/* Telit LE910Cx */
++	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1261, 2)},	/* Telit LE910Cx */
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1900, 1)},	/* Telit LN940 series */
+ 	{QMI_FIXED_INTF(0x1c9e, 0x9801, 3)},	/* Telewell TW-3G HSPA+ */
+ 	{QMI_FIXED_INTF(0x1c9e, 0x9803, 4)},	/* Telewell TW-3G HSPA+ */
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/dmi.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/dmi.c
+index 51d76ac45075..188d7961584e 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/dmi.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/dmi.c
+@@ -31,6 +31,10 @@ struct brcmf_dmi_data {
+ 
+ /* NOTE: Please keep all entries sorted alphabetically */
+ 
++static const struct brcmf_dmi_data acepc_t8_data = {
++	BRCM_CC_4345_CHIP_ID, 6, "acepc-t8"
++};
++
+ static const struct brcmf_dmi_data gpd_win_pocket_data = {
+ 	BRCM_CC_4356_CHIP_ID, 2, "gpd-win-pocket"
+ };
+@@ -44,6 +48,28 @@ static const struct brcmf_dmi_data meegopad_t08_data = {
+ };
+ 
+ static const struct dmi_system_id dmi_platform_data[] = {
++	{
++		/* ACEPC T8 Cherry Trail Z8350 mini PC */
++		.matches = {
++			DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "To be filled by O.E.M."),
++			DMI_EXACT_MATCH(DMI_BOARD_NAME, "Cherry Trail CR"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "T8"),
++			/* also match on somewhat unique bios-version */
++			DMI_EXACT_MATCH(DMI_BIOS_VERSION, "1.000"),
++		},
++		.driver_data = (void *)&acepc_t8_data,
++	},
++	{
++		/* ACEPC T11 Cherry Trail Z8350 mini PC, same wifi as the T8 */
++		.matches = {
++			DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "To be filled by O.E.M."),
++			DMI_EXACT_MATCH(DMI_BOARD_NAME, "Cherry Trail CR"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "T11"),
++			/* also match on somewhat unique bios-version */
++			DMI_EXACT_MATCH(DMI_BIOS_VERSION, "1.000"),
++		},
++		.driver_data = (void *)&acepc_t8_data,
++	},
+ 	{
+ 		/* Match for the GPDwin which unfortunately uses somewhat
+ 		 * generic dmi strings, which is why we test for 4 strings.
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
+index 7bd8676508f5..519c7dd47f69 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
+@@ -143,9 +143,9 @@ static inline int iwl_mvm_check_pn(struct iwl_mvm *mvm, struct sk_buff *skb,
+ }
+ 
+ /* iwl_mvm_create_skb Adds the rxb to a new skb */
+-static void iwl_mvm_create_skb(struct sk_buff *skb, struct ieee80211_hdr *hdr,
+-			       u16 len, u8 crypt_len,
+-			       struct iwl_rx_cmd_buffer *rxb)
++static int iwl_mvm_create_skb(struct iwl_mvm *mvm, struct sk_buff *skb,
++			      struct ieee80211_hdr *hdr, u16 len, u8 crypt_len,
++			      struct iwl_rx_cmd_buffer *rxb)
+ {
+ 	struct iwl_rx_packet *pkt = rxb_addr(rxb);
+ 	struct iwl_rx_mpdu_desc *desc = (void *)pkt->data;
+@@ -178,6 +178,20 @@ static void iwl_mvm_create_skb(struct sk_buff *skb, struct ieee80211_hdr *hdr,
+ 	 * present before copying packet data.
+ 	 */
+ 	hdrlen += crypt_len;
++
++	if (WARN_ONCE(headlen < hdrlen,
++		      "invalid packet lengths (hdrlen=%d, len=%d, crypt_len=%d)\n",
++		      hdrlen, len, crypt_len)) {
++		/*
++		 * We warn and trace because we want to be able to see
++		 * it in trace-cmd as well.
++		 */
++		IWL_DEBUG_RX(mvm,
++			     "invalid packet lengths (hdrlen=%d, len=%d, crypt_len=%d)\n",
++			     hdrlen, len, crypt_len);
++		return -EINVAL;
++	}
++
+ 	skb_put_data(skb, hdr, hdrlen);
+ 	skb_put_data(skb, (u8 *)hdr + hdrlen + pad_len, headlen - hdrlen);
+ 
+@@ -190,6 +204,8 @@ static void iwl_mvm_create_skb(struct sk_buff *skb, struct ieee80211_hdr *hdr,
+ 		skb_add_rx_frag(skb, 0, rxb_steal_page(rxb), offset,
+ 				fraglen, rxb->truesize);
+ 	}
++
++	return 0;
+ }
+ 
+ /* iwl_mvm_pass_packet_to_mac80211 - passes the packet for mac80211 */
+@@ -1600,7 +1616,11 @@ void iwl_mvm_rx_mpdu_mq(struct iwl_mvm *mvm, struct napi_struct *napi,
+ 			rx_status->boottime_ns = ktime_get_boot_ns();
+ 	}
+ 
+-	iwl_mvm_create_skb(skb, hdr, len, crypt_len, rxb);
++	if (iwl_mvm_create_skb(mvm, skb, hdr, len, crypt_len, rxb)) {
++		kfree_skb(skb);
++		goto out;
++	}
++
+ 	if (!iwl_mvm_reorder(mvm, napi, queue, sta, skb, desc))
+ 		iwl_mvm_pass_packet_to_mac80211(mvm, napi, skb, queue, sta);
+ out:
+diff --git a/drivers/net/wireless/intersil/p54/p54pci.c b/drivers/net/wireless/intersil/p54/p54pci.c
+index 27a49068d32d..57ad56435dda 100644
+--- a/drivers/net/wireless/intersil/p54/p54pci.c
++++ b/drivers/net/wireless/intersil/p54/p54pci.c
+@@ -554,7 +554,7 @@ static int p54p_probe(struct pci_dev *pdev,
+ 	err = pci_enable_device(pdev);
+ 	if (err) {
+ 		dev_err(&pdev->dev, "Cannot enable new PCI device\n");
+-		return err;
++		goto err_put;
+ 	}
+ 
+ 	mem_addr = pci_resource_start(pdev, 0);
+@@ -639,6 +639,7 @@ static int p54p_probe(struct pci_dev *pdev,
+ 	pci_release_regions(pdev);
+  err_disable_dev:
+ 	pci_disable_device(pdev);
++err_put:
+ 	pci_dev_put(pdev);
+ 	return err;
+ }
+diff --git a/drivers/parisc/led.c b/drivers/parisc/led.c
+index 0c6e8b44b4ed..c60b465f6fe4 100644
+--- a/drivers/parisc/led.c
++++ b/drivers/parisc/led.c
+@@ -568,6 +568,9 @@ int __init register_led_driver(int model, unsigned long cmd_reg, unsigned long d
+ 		break;
+ 
+ 	case DISPLAY_MODEL_LASI:
++		/* Skip to register LED in QEMU */
++		if (running_on_qemu)
++			return 1;
+ 		LED_DATA_REG = data_reg;
+ 		led_func_ptr = led_LASI_driver;
+ 		printk(KERN_INFO "LED display at %lx registered\n", LED_DATA_REG);
+diff --git a/drivers/pci/controller/pcie-rcar.c b/drivers/pci/controller/pcie-rcar.c
+index c8febb009454..6a4e435bd35f 100644
+--- a/drivers/pci/controller/pcie-rcar.c
++++ b/drivers/pci/controller/pcie-rcar.c
+@@ -46,6 +46,7 @@
+ 
+ /* Transfer control */
+ #define PCIETCTLR		0x02000
++#define  DL_DOWN		BIT(3)
+ #define  CFINIT			1
+ #define PCIETSTR		0x02004
+ #define  DATA_LINK_ACTIVE	1
+@@ -94,6 +95,7 @@
+ #define MACCTLR			0x011058
+ #define  SPEED_CHANGE		BIT(24)
+ #define  SCRAMBLE_DISABLE	BIT(27)
++#define PMSR			0x01105c
+ #define MACS2R			0x011078
+ #define MACCGSPSETR		0x011084
+ #define  SPCNGRSN		BIT(31)
+@@ -1130,6 +1132,7 @@ static int rcar_pcie_probe(struct platform_device *pdev)
+ 	pcie = pci_host_bridge_priv(bridge);
+ 
+ 	pcie->dev = dev;
++	platform_set_drvdata(pdev, pcie);
+ 
+ 	err = pci_parse_request_of_pci_ranges(dev, &pcie->resources, NULL);
+ 	if (err)
+@@ -1221,10 +1224,28 @@ err_free_bridge:
+ 	return err;
+ }
+ 
++static int rcar_pcie_resume_noirq(struct device *dev)
++{
++	struct rcar_pcie *pcie = dev_get_drvdata(dev);
++
++	if (rcar_pci_read_reg(pcie, PMSR) &&
++	    !(rcar_pci_read_reg(pcie, PCIETCTLR) & DL_DOWN))
++		return 0;
++
++	/* Re-establish the PCIe link */
++	rcar_pci_write_reg(pcie, CFINIT, PCIETCTLR);
++	return rcar_pcie_wait_for_dl(pcie);
++}
++
++static const struct dev_pm_ops rcar_pcie_pm_ops = {
++	.resume_noirq = rcar_pcie_resume_noirq,
++};
++
+ static struct platform_driver rcar_pcie_driver = {
+ 	.driver = {
+ 		.name = "rcar-pcie",
+ 		.of_match_table = rcar_pcie_of_match,
++		.pm = &rcar_pcie_pm_ops,
+ 		.suppress_bind_attrs = true,
+ 	},
+ 	.probe = rcar_pcie_probe,
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index e91005d0f20c..3f77bab698ce 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -6266,8 +6266,7 @@ static int __init pci_setup(char *str)
+ 			} else if (!strncmp(str, "pcie_scan_all", 13)) {
+ 				pci_add_flags(PCI_SCAN_ALL_PCIE_DEVS);
+ 			} else if (!strncmp(str, "disable_acs_redir=", 18)) {
+-				disable_acs_redir_param =
+-					kstrdup(str + 18, GFP_KERNEL);
++				disable_acs_redir_param = str + 18;
+ 			} else {
+ 				printk(KERN_ERR "PCI: Unknown option `%s'\n",
+ 						str);
+@@ -6278,3 +6277,19 @@ static int __init pci_setup(char *str)
+ 	return 0;
+ }
+ early_param("pci", pci_setup);
++
++/*
++ * 'disable_acs_redir_param' is initialized in pci_setup(), above, to point
++ * to data in the __initdata section which will be freed after the init
++ * sequence is complete. We can't allocate memory in pci_setup() because some
++ * architectures do not have any memory allocation service available during
++ * an early_param() call. So we allocate memory and copy the variable here
++ * before the init section is freed.
++ */
++static int __init pci_realloc_setup_params(void)
++{
++	disable_acs_redir_param = kstrdup(disable_acs_redir_param, GFP_KERNEL);
++
++	return 0;
++}
++pure_initcall(pci_realloc_setup_params);
+diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
+index 224d88634115..17c4ed2021de 100644
+--- a/drivers/pci/pci.h
++++ b/drivers/pci/pci.h
+@@ -596,7 +596,7 @@ void pci_aer_clear_fatal_status(struct pci_dev *dev);
+ void pci_aer_clear_device_status(struct pci_dev *dev);
+ #else
+ static inline void pci_no_aer(void) { }
+-static inline int pci_aer_init(struct pci_dev *d) { return -ENODEV; }
++static inline void pci_aer_init(struct pci_dev *d) { }
+ static inline void pci_aer_exit(struct pci_dev *d) { }
+ static inline void pci_aer_clear_fatal_status(struct pci_dev *dev) { }
+ static inline void pci_aer_clear_device_status(struct pci_dev *dev) { }
+diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
+index 727e3c1ef9a4..38e7017478b5 100644
+--- a/drivers/pci/pcie/aspm.c
++++ b/drivers/pci/pcie/aspm.c
+@@ -196,6 +196,38 @@ static void pcie_clkpm_cap_init(struct pcie_link_state *link, int blacklist)
+ 	link->clkpm_capable = (blacklist) ? 0 : capable;
+ }
+ 
++static bool pcie_retrain_link(struct pcie_link_state *link)
++{
++	struct pci_dev *parent = link->pdev;
++	unsigned long start_jiffies;
++	u16 reg16;
++
++	pcie_capability_read_word(parent, PCI_EXP_LNKCTL, &reg16);
++	reg16 |= PCI_EXP_LNKCTL_RL;
++	pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16);
++	if (parent->clear_retrain_link) {
++		/*
++		 * Due to an erratum in some devices the Retrain Link bit
++		 * needs to be cleared again manually to allow the link
++		 * training to succeed.
++		 */
++		reg16 &= ~PCI_EXP_LNKCTL_RL;
++		pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16);
++	}
++
++	/* Wait for link training end. Break out after waiting for timeout */
++	start_jiffies = jiffies;
++	for (;;) {
++		pcie_capability_read_word(parent, PCI_EXP_LNKSTA, &reg16);
++		if (!(reg16 & PCI_EXP_LNKSTA_LT))
++			break;
++		if (time_after(jiffies, start_jiffies + LINK_RETRAIN_TIMEOUT))
++			break;
++		msleep(1);
++	}
++	return !(reg16 & PCI_EXP_LNKSTA_LT);
++}
++
+ /*
+  * pcie_aspm_configure_common_clock: check if the 2 ends of a link
+  *   could use common clock. If they are, configure them to use the
+@@ -205,7 +237,6 @@ static void pcie_aspm_configure_common_clock(struct pcie_link_state *link)
+ {
+ 	int same_clock = 1;
+ 	u16 reg16, parent_reg, child_reg[8];
+-	unsigned long start_jiffies;
+ 	struct pci_dev *child, *parent = link->pdev;
+ 	struct pci_bus *linkbus = parent->subordinate;
+ 	/*
+@@ -263,21 +294,7 @@ static void pcie_aspm_configure_common_clock(struct pcie_link_state *link)
+ 		reg16 &= ~PCI_EXP_LNKCTL_CCC;
+ 	pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16);
+ 
+-	/* Retrain link */
+-	reg16 |= PCI_EXP_LNKCTL_RL;
+-	pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16);
+-
+-	/* Wait for link training end. Break out after waiting for timeout */
+-	start_jiffies = jiffies;
+-	for (;;) {
+-		pcie_capability_read_word(parent, PCI_EXP_LNKSTA, &reg16);
+-		if (!(reg16 & PCI_EXP_LNKSTA_LT))
+-			break;
+-		if (time_after(jiffies, start_jiffies + LINK_RETRAIN_TIMEOUT))
+-			break;
+-		msleep(1);
+-	}
+-	if (!(reg16 & PCI_EXP_LNKSTA_LT))
++	if (pcie_retrain_link(link))
+ 		return;
+ 
+ 	/* Training failed. Restore common clock configurations */
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index c46a3fcb341e..3bb9bdb884e5 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -535,16 +535,9 @@ static void pci_release_host_bridge_dev(struct device *dev)
+ 	kfree(to_pci_host_bridge(dev));
+ }
+ 
+-struct pci_host_bridge *pci_alloc_host_bridge(size_t priv)
++static void pci_init_host_bridge(struct pci_host_bridge *bridge)
+ {
+-	struct pci_host_bridge *bridge;
+-
+-	bridge = kzalloc(sizeof(*bridge) + priv, GFP_KERNEL);
+-	if (!bridge)
+-		return NULL;
+-
+ 	INIT_LIST_HEAD(&bridge->windows);
+-	bridge->dev.release = pci_release_host_bridge_dev;
+ 
+ 	/*
+ 	 * We assume we can manage these PCIe features.  Some systems may
+@@ -557,6 +550,18 @@ struct pci_host_bridge *pci_alloc_host_bridge(size_t priv)
+ 	bridge->native_shpc_hotplug = 1;
+ 	bridge->native_pme = 1;
+ 	bridge->native_ltr = 1;
++}
++
++struct pci_host_bridge *pci_alloc_host_bridge(size_t priv)
++{
++	struct pci_host_bridge *bridge;
++
++	bridge = kzalloc(sizeof(*bridge) + priv, GFP_KERNEL);
++	if (!bridge)
++		return NULL;
++
++	pci_init_host_bridge(bridge);
++	bridge->dev.release = pci_release_host_bridge_dev;
+ 
+ 	return bridge;
+ }
+@@ -571,7 +576,7 @@ struct pci_host_bridge *devm_pci_alloc_host_bridge(struct device *dev,
+ 	if (!bridge)
+ 		return NULL;
+ 
+-	INIT_LIST_HEAD(&bridge->windows);
++	pci_init_host_bridge(bridge);
+ 	bridge->dev.release = devm_pci_release_host_bridge_dev;
+ 
+ 	return bridge;
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index fba03a7d5c7f..c2c54dc4433e 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -2245,6 +2245,23 @@ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x10f1, quirk_disable_aspm_l0s);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x10f4, quirk_disable_aspm_l0s);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1508, quirk_disable_aspm_l0s);
+ 
++/*
++ * Some Pericom PCIe-to-PCI bridges in reverse mode need the PCIe Retrain
++ * Link bit cleared after starting the link retrain process to allow this
++ * process to finish.
++ *
++ * Affected devices: PI7C9X110, PI7C9X111SL, PI7C9X130.  See also the
++ * Pericom Errata Sheet PI7C9X111SLB_errata_rev1.2_102711.pdf.
++ */
++static void quirk_enable_clear_retrain_link(struct pci_dev *dev)
++{
++	dev->clear_retrain_link = 1;
++	pci_info(dev, "Enable PCIe Retrain Link quirk\n");
++}
++DECLARE_PCI_FIXUP_HEADER(0x12d8, 0xe110, quirk_enable_clear_retrain_link);
++DECLARE_PCI_FIXUP_HEADER(0x12d8, 0xe111, quirk_enable_clear_retrain_link);
++DECLARE_PCI_FIXUP_HEADER(0x12d8, 0xe130, quirk_enable_clear_retrain_link);
++
+ static void fixup_rev1_53c810(struct pci_dev *dev)
+ {
+ 	u32 class = dev->class;
+@@ -3408,6 +3425,7 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0030, quirk_no_bus_reset);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0032, quirk_no_bus_reset);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x003c, quirk_no_bus_reset);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0033, quirk_no_bus_reset);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0034, quirk_no_bus_reset);
+ 
+ /*
+  * Root port on some Cavium CN8xxx chips do not successfully complete a bus
+@@ -4903,6 +4921,7 @@ static void quirk_no_ats(struct pci_dev *pdev)
+ 
+ /* AMD Stoney platform GPU */
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x98e4, quirk_no_ats);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x6900, quirk_no_ats);
+ #endif /* CONFIG_PCI_ATS */
+ 
+ /* Freescale PCIe doesn't support MSI in RC mode */
+@@ -5120,3 +5139,61 @@ SWITCHTEC_QUIRK(0x8573);  /* PFXI 48XG3 */
+ SWITCHTEC_QUIRK(0x8574);  /* PFXI 64XG3 */
+ SWITCHTEC_QUIRK(0x8575);  /* PFXI 80XG3 */
+ SWITCHTEC_QUIRK(0x8576);  /* PFXI 96XG3 */
++
++/*
++ * On Lenovo Thinkpad P50 SKUs with a Nvidia Quadro M1000M, the BIOS does
++ * not always reset the secondary Nvidia GPU between reboots if the system
++ * is configured to use Hybrid Graphics mode.  This results in the GPU
++ * being left in whatever state it was in during the *previous* boot, which
++ * causes spurious interrupts from the GPU, which in turn causes us to
++ * disable the wrong IRQ and end up breaking the touchpad.  Unsurprisingly,
++ * this also completely breaks nouveau.
++ *
++ * Luckily, it seems a simple reset of the Nvidia GPU brings it back to a
++ * clean state and fixes all these issues.
++ *
++ * When the machine is configured in Dedicated display mode, the issue
++ * doesn't occur.  Fortunately the GPU advertises NoReset+ when in this
++ * mode, so we can detect that and avoid resetting it.
++ */
++static void quirk_reset_lenovo_thinkpad_p50_nvgpu(struct pci_dev *pdev)
++{
++	void __iomem *map;
++	int ret;
++
++	if (pdev->subsystem_vendor != PCI_VENDOR_ID_LENOVO ||
++	    pdev->subsystem_device != 0x222e ||
++	    !pdev->reset_fn)
++		return;
++
++	if (pci_enable_device_mem(pdev))
++		return;
++
++	/*
++	 * Based on nvkm_device_ctor() in
++	 * drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
++	 */
++	map = pci_iomap(pdev, 0, 0x23000);
++	if (!map) {
++		pci_err(pdev, "Can't map MMIO space\n");
++		goto out_disable;
++	}
++
++	/*
++	 * Make sure the GPU looks like it's been POSTed before resetting
++	 * it.
++	 */
++	if (ioread32(map + 0x2240c) & 0x2) {
++		pci_info(pdev, FW_BUG "GPU left initialized by EFI, resetting\n");
++		ret = pci_reset_function(pdev);
++		if (ret < 0)
++			pci_err(pdev, "Failed to reset GPU: %d\n", ret);
++	}
++
++	iounmap(map);
++out_disable:
++	pci_disable_device(pdev);
++}
++DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, 0x13b1,
++			      PCI_CLASS_DISPLAY_VGA, 8,
++			      quirk_reset_lenovo_thinkpad_p50_nvgpu);
+diff --git a/drivers/phy/ti/phy-ti-pipe3.c b/drivers/phy/ti/phy-ti-pipe3.c
+index 68ce4a082b9b..693acc167351 100644
+--- a/drivers/phy/ti/phy-ti-pipe3.c
++++ b/drivers/phy/ti/phy-ti-pipe3.c
+@@ -303,7 +303,7 @@ static void ti_pipe3_calibrate(struct ti_pipe3 *phy)
+ 
+ 	val = ti_pipe3_readl(phy->phy_rx, PCIEPHYRX_ANA_PROGRAMMABILITY);
+ 	val &= ~(INTERFACE_MASK | LOSD_MASK | MEM_PLLDIV);
+-	val = (0x1 << INTERFACE_SHIFT | 0xA << LOSD_SHIFT);
++	val |= (0x1 << INTERFACE_SHIFT | 0xA << LOSD_SHIFT);
+ 	ti_pipe3_writel(phy->phy_rx, PCIEPHYRX_ANA_PROGRAMMABILITY, val);
+ 
+ 	val = ti_pipe3_readl(phy->phy_rx, PCIEPHYRX_DIGITAL_MODES);
+diff --git a/drivers/power/supply/cpcap-battery.c b/drivers/power/supply/cpcap-battery.c
+index 08d5037fd052..6887870ba32c 100644
+--- a/drivers/power/supply/cpcap-battery.c
++++ b/drivers/power/supply/cpcap-battery.c
+@@ -221,6 +221,9 @@ static int cpcap_battery_cc_raw_div(struct cpcap_battery_ddata *ddata,
+ 	int avg_current;
+ 	u32 cc_lsb;
+ 
++	if (!divider)
++		return 0;
++
+ 	sample &= 0xffffff;		/* 24-bits, unsigned */
+ 	offset &= 0x7ff;		/* 10-bits, signed */
+ 
+diff --git a/drivers/power/supply/power_supply_sysfs.c b/drivers/power/supply/power_supply_sysfs.c
+index dce24f596160..5358a80d854f 100644
+--- a/drivers/power/supply/power_supply_sysfs.c
++++ b/drivers/power/supply/power_supply_sysfs.c
+@@ -383,15 +383,11 @@ int power_supply_uevent(struct device *dev, struct kobj_uevent_env *env)
+ 	char *prop_buf;
+ 	char *attrname;
+ 
+-	dev_dbg(dev, "uevent\n");
+-
+ 	if (!psy || !psy->desc) {
+ 		dev_dbg(dev, "No power supply yet\n");
+ 		return ret;
+ 	}
+ 
+-	dev_dbg(dev, "POWER_SUPPLY_NAME=%s\n", psy->desc->name);
+-
+ 	ret = add_uevent_var(env, "POWER_SUPPLY_NAME=%s", psy->desc->name);
+ 	if (ret)
+ 		return ret;
+@@ -427,8 +423,6 @@ int power_supply_uevent(struct device *dev, struct kobj_uevent_env *env)
+ 			goto out;
+ 		}
+ 
+-		dev_dbg(dev, "prop %s=%s\n", attrname, prop_buf);
+-
+ 		ret = add_uevent_var(env, "POWER_SUPPLY_%s=%s", attrname, prop_buf);
+ 		kfree(attrname);
+ 		if (ret)
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index e2caf11598c7..fb9fe26fd0fa 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -3360,15 +3360,12 @@ static int regulator_set_voltage_unlocked(struct regulator *regulator,
+ 
+ 	/* for not coupled regulators this will just set the voltage */
+ 	ret = regulator_balance_voltage(rdev, state);
+-	if (ret < 0)
+-		goto out2;
++	if (ret < 0) {
++		voltage->min_uV = old_min_uV;
++		voltage->max_uV = old_max_uV;
++	}
+ 
+ out:
+-	return 0;
+-out2:
+-	voltage->min_uV = old_min_uV;
+-	voltage->max_uV = old_max_uV;
+-
+ 	return ret;
+ }
+ 
+diff --git a/drivers/staging/media/imx/imx-media-csi.c b/drivers/staging/media/imx/imx-media-csi.c
+index be1e9e52b2a0..cbccb9b38503 100644
+--- a/drivers/staging/media/imx/imx-media-csi.c
++++ b/drivers/staging/media/imx/imx-media-csi.c
+@@ -153,9 +153,10 @@ static inline bool requires_passthrough(struct v4l2_fwnode_endpoint *ep,
+ /*
+  * Parses the fwnode endpoint from the source pad of the entity
+  * connected to this CSI. This will either be the entity directly
+- * upstream from the CSI-2 receiver, or directly upstream from the
+- * video mux. The endpoint is needed to determine the bus type and
+- * bus config coming into the CSI.
++ * upstream from the CSI-2 receiver, directly upstream from the
++ * video mux, or directly upstream from the CSI itself. The endpoint
++ * is needed to determine the bus type and bus config coming into
++ * the CSI.
+  */
+ static int csi_get_upstream_endpoint(struct csi_priv *priv,
+ 				     struct v4l2_fwnode_endpoint *ep)
+@@ -171,7 +172,8 @@ static int csi_get_upstream_endpoint(struct csi_priv *priv,
+ 	if (!priv->src_sd)
+ 		return -EPIPE;
+ 
+-	src = &priv->src_sd->entity;
++	sd = priv->src_sd;
++	src = &sd->entity;
+ 
+ 	if (src->function == MEDIA_ENT_F_VID_MUX) {
+ 		/*
+@@ -185,6 +187,14 @@ static int csi_get_upstream_endpoint(struct csi_priv *priv,
+ 			src = &sd->entity;
+ 	}
+ 
++	/*
++	 * If the source is neither the video mux nor the CSI-2 receiver,
++	 * get the source pad directly upstream from CSI itself.
++	 */
++	if (src->function != MEDIA_ENT_F_VID_MUX &&
++	    sd->grp_id != IMX_MEDIA_GRP_ID_CSI2)
++		src = &priv->sd.entity;
++
+ 	/* get source pad of entity directly upstream from src */
+ 	pad = imx_media_find_upstream_pad(priv->md, src, 0);
+ 	if (IS_ERR(pad))
+diff --git a/drivers/staging/media/imx/imx-media-of.c b/drivers/staging/media/imx/imx-media-of.c
+index a01327f6e045..2da81a5af274 100644
+--- a/drivers/staging/media/imx/imx-media-of.c
++++ b/drivers/staging/media/imx/imx-media-of.c
+@@ -143,15 +143,18 @@ int imx_media_create_csi_of_links(struct imx_media_dev *imxmd,
+ 				  struct v4l2_subdev *csi)
+ {
+ 	struct device_node *csi_np = csi->dev->of_node;
+-	struct fwnode_handle *fwnode, *csi_ep;
+-	struct v4l2_fwnode_link link;
+ 	struct device_node *ep;
+-	int ret;
+-
+-	link.local_node = of_fwnode_handle(csi_np);
+-	link.local_port = CSI_SINK_PAD;
+ 
+ 	for_each_child_of_node(csi_np, ep) {
++		struct fwnode_handle *fwnode, *csi_ep;
++		struct v4l2_fwnode_link link;
++		int ret;
++
++		memset(&link, 0, sizeof(link));
++
++		link.local_node = of_fwnode_handle(csi_np);
++		link.local_port = CSI_SINK_PAD;
++
+ 		csi_ep = of_fwnode_handle(ep);
+ 
+ 		fwnode = fwnode_graph_get_remote_endpoint(csi_ep);
+diff --git a/drivers/video/fbdev/efifb.c b/drivers/video/fbdev/efifb.c
+index ba906876cc45..fd02e8a4841d 100644
+--- a/drivers/video/fbdev/efifb.c
++++ b/drivers/video/fbdev/efifb.c
+@@ -476,8 +476,12 @@ static int efifb_probe(struct platform_device *dev)
+ 		 * If the UEFI memory map covers the efifb region, we may only
+ 		 * remap it using the attributes the memory map prescribes.
+ 		 */
+-		mem_flags |= EFI_MEMORY_WT | EFI_MEMORY_WB;
+-		mem_flags &= md.attribute;
++		md.attribute &= EFI_MEMORY_UC | EFI_MEMORY_WC |
++				EFI_MEMORY_WT | EFI_MEMORY_WB;
++		if (md.attribute) {
++			mem_flags |= EFI_MEMORY_WT | EFI_MEMORY_WB;
++			mem_flags &= md.attribute;
++		}
+ 	}
+ 	if (mem_flags & EFI_MEMORY_WC)
+ 		info->screen_base = ioremap_wc(efifb_fix.smem_start,
+diff --git a/drivers/video/fbdev/sm712.h b/drivers/video/fbdev/sm712.h
+index aad1cc4be34a..c7ebf03b8d53 100644
+--- a/drivers/video/fbdev/sm712.h
++++ b/drivers/video/fbdev/sm712.h
+@@ -15,14 +15,10 @@
+ 
+ #define FB_ACCEL_SMI_LYNX 88
+ 
+-#define SCREEN_X_RES      1024
+-#define SCREEN_Y_RES      600
+-#define SCREEN_BPP        16
+-
+-/*Assume SM712 graphics chip has 4MB VRAM */
+-#define SM712_VIDEOMEMORYSIZE	  0x00400000
+-/*Assume SM722 graphics chip has 8MB VRAM */
+-#define SM722_VIDEOMEMORYSIZE	  0x00800000
++#define SCREEN_X_RES          1024
++#define SCREEN_Y_RES_PC       768
++#define SCREEN_Y_RES_NETBOOK  600
++#define SCREEN_BPP            16
+ 
+ #define dac_reg	(0x3c8)
+ #define dac_val	(0x3c9)
+diff --git a/drivers/video/fbdev/sm712fb.c b/drivers/video/fbdev/sm712fb.c
+index 502d0de2feec..f1dcc6766d1e 100644
+--- a/drivers/video/fbdev/sm712fb.c
++++ b/drivers/video/fbdev/sm712fb.c
+@@ -530,6 +530,65 @@ static const struct modeinit vgamode[] = {
+ 			0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x15, 0x03,
+ 		},
+ 	},
++	{	/*  1024 x 768  16Bpp  60Hz */
++		1024, 768, 16, 60,
++		/*  Init_MISC */
++		0xEB,
++		{	/*  Init_SR0_SR4 */
++			0x03, 0x01, 0x0F, 0x03, 0x0E,
++		},
++		{	/*  Init_SR10_SR24 */
++			0xF3, 0xB6, 0xC0, 0xDD, 0x00, 0x0E, 0x17, 0x2C,
++			0x99, 0x02, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
++			0xC4, 0x30, 0x02, 0x01, 0x01,
++		},
++		{	/*  Init_SR30_SR75 */
++			0x38, 0x03, 0x20, 0x09, 0xC0, 0x3A, 0x3A, 0x3A,
++			0x3A, 0x3A, 0x3A, 0x3A, 0x00, 0x00, 0x03, 0xFF,
++			0x00, 0xFC, 0x00, 0x00, 0x20, 0x18, 0x00, 0xFC,
++			0x20, 0x0C, 0x44, 0x20, 0x00, 0x00, 0x00, 0x3A,
++			0x06, 0x68, 0xA7, 0x7F, 0x83, 0x24, 0xFF, 0x03,
++			0x0F, 0x60, 0x59, 0x3A, 0x3A, 0x00, 0x00, 0x3A,
++			0x01, 0x80, 0x7E, 0x1A, 0x1A, 0x00, 0x00, 0x00,
++			0x50, 0x03, 0x74, 0x14, 0x3B, 0x0D, 0x09, 0x02,
++			0x04, 0x45, 0x30, 0x30, 0x40, 0x20,
++		},
++		{	/*  Init_SR80_SR93 */
++			0xFF, 0x07, 0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0x3A,
++			0xF7, 0x00, 0x00, 0x00, 0xFF, 0xFF, 0x3A, 0x3A,
++			0x00, 0x00, 0x00, 0x00,
++		},
++		{	/*  Init_SRA0_SRAF */
++			0x00, 0xFB, 0x9F, 0x01, 0x00, 0xED, 0xED, 0xED,
++			0x7B, 0xFB, 0xFF, 0xFF, 0x97, 0xEF, 0xBF, 0xDF,
++		},
++		{	/*  Init_GR00_GR08 */
++			0x00, 0x00, 0x00, 0x00, 0x00, 0x40, 0x05, 0x0F,
++			0xFF,
++		},
++		{	/*  Init_AR00_AR14 */
++			0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
++			0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F,
++			0x41, 0x00, 0x0F, 0x00, 0x00,
++		},
++		{	/*  Init_CR00_CR18 */
++			0xA3, 0x7F, 0x7F, 0x00, 0x85, 0x16, 0x24, 0xF5,
++			0x00, 0x60, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
++			0x03, 0x09, 0xFF, 0x80, 0x40, 0xFF, 0x00, 0xE3,
++			0xFF,
++		},
++		{	/*  Init_CR30_CR4D */
++			0x00, 0x00, 0x00, 0x00, 0x00, 0x80, 0x02, 0x20,
++			0x00, 0x00, 0x00, 0x40, 0x00, 0xFF, 0xBF, 0xFF,
++			0xA3, 0x7F, 0x00, 0x86, 0x15, 0x24, 0xFF, 0x00,
++			0x01, 0x07, 0xE5, 0x20, 0x7F, 0xFF,
++		},
++		{	/*  Init_CR90_CRA7 */
++			0x55, 0xD9, 0x5D, 0xE1, 0x86, 0x1B, 0x8E, 0x26,
++			0xDA, 0x8D, 0xDE, 0x94, 0x00, 0x00, 0x18, 0x00,
++			0x03, 0x03, 0x03, 0x03, 0x03, 0x03, 0x15, 0x03,
++		},
++	},
+ 	{	/*  mode#5: 1024 x 768  24Bpp  60Hz */
+ 		1024, 768, 24, 60,
+ 		/*  Init_MISC */
+@@ -827,67 +886,80 @@ static inline unsigned int chan_to_field(unsigned int chan,
+ 
+ static int smtc_blank(int blank_mode, struct fb_info *info)
+ {
++	struct smtcfb_info *sfb = info->par;
++
+ 	/* clear DPMS setting */
+ 	switch (blank_mode) {
+ 	case FB_BLANK_UNBLANK:
+ 		/* Screen On: HSync: On, VSync : On */
++
++		switch (sfb->chip_id) {
++		case 0x710:
++		case 0x712:
++			smtc_seqw(0x6a, 0x16);
++			smtc_seqw(0x6b, 0x02);
++			break;
++		case 0x720:
++			smtc_seqw(0x6a, 0x0d);
++			smtc_seqw(0x6b, 0x02);
++			break;
++		}
++
++		smtc_seqw(0x23, (smtc_seqr(0x23) & (~0xc0)));
+ 		smtc_seqw(0x01, (smtc_seqr(0x01) & (~0x20)));
+-		smtc_seqw(0x6a, 0x16);
+-		smtc_seqw(0x6b, 0x02);
+ 		smtc_seqw(0x21, (smtc_seqr(0x21) & 0x77));
+ 		smtc_seqw(0x22, (smtc_seqr(0x22) & (~0x30)));
+-		smtc_seqw(0x23, (smtc_seqr(0x23) & (~0xc0)));
+-		smtc_seqw(0x24, (smtc_seqr(0x24) | 0x01));
+ 		smtc_seqw(0x31, (smtc_seqr(0x31) | 0x03));
++		smtc_seqw(0x24, (smtc_seqr(0x24) | 0x01));
+ 		break;
+ 	case FB_BLANK_NORMAL:
+ 		/* Screen Off: HSync: On, VSync : On   Soft blank */
++		smtc_seqw(0x24, (smtc_seqr(0x24) | 0x01));
++		smtc_seqw(0x31, ((smtc_seqr(0x31) & (~0x07)) | 0x00));
++		smtc_seqw(0x23, (smtc_seqr(0x23) & (~0xc0)));
+ 		smtc_seqw(0x01, (smtc_seqr(0x01) & (~0x20)));
++		smtc_seqw(0x22, (smtc_seqr(0x22) & (~0x30)));
+ 		smtc_seqw(0x6a, 0x16);
+ 		smtc_seqw(0x6b, 0x02);
+-		smtc_seqw(0x22, (smtc_seqr(0x22) & (~0x30)));
+-		smtc_seqw(0x23, (smtc_seqr(0x23) & (~0xc0)));
+-		smtc_seqw(0x24, (smtc_seqr(0x24) | 0x01));
+-		smtc_seqw(0x31, ((smtc_seqr(0x31) & (~0x07)) | 0x00));
+ 		break;
+ 	case FB_BLANK_VSYNC_SUSPEND:
+ 		/* Screen On: HSync: On, VSync : Off */
++		smtc_seqw(0x24, (smtc_seqr(0x24) & (~0x01)));
++		smtc_seqw(0x31, ((smtc_seqr(0x31) & (~0x07)) | 0x00));
++		smtc_seqw(0x23, ((smtc_seqr(0x23) & (~0xc0)) | 0x20));
+ 		smtc_seqw(0x01, (smtc_seqr(0x01) | 0x20));
+-		smtc_seqw(0x20, (smtc_seqr(0x20) & (~0xB0)));
+-		smtc_seqw(0x6a, 0x0c);
+-		smtc_seqw(0x6b, 0x02);
+ 		smtc_seqw(0x21, (smtc_seqr(0x21) | 0x88));
++		smtc_seqw(0x20, (smtc_seqr(0x20) & (~0xB0)));
+ 		smtc_seqw(0x22, ((smtc_seqr(0x22) & (~0x30)) | 0x20));
+-		smtc_seqw(0x23, ((smtc_seqr(0x23) & (~0xc0)) | 0x20));
+-		smtc_seqw(0x24, (smtc_seqr(0x24) & (~0x01)));
+-		smtc_seqw(0x31, ((smtc_seqr(0x31) & (~0x07)) | 0x00));
+ 		smtc_seqw(0x34, (smtc_seqr(0x34) | 0x80));
++		smtc_seqw(0x6a, 0x0c);
++		smtc_seqw(0x6b, 0x02);
+ 		break;
+ 	case FB_BLANK_HSYNC_SUSPEND:
+ 		/* Screen On: HSync: Off, VSync : On */
++		smtc_seqw(0x24, (smtc_seqr(0x24) & (~0x01)));
++		smtc_seqw(0x31, ((smtc_seqr(0x31) & (~0x07)) | 0x00));
++		smtc_seqw(0x23, ((smtc_seqr(0x23) & (~0xc0)) | 0xD8));
+ 		smtc_seqw(0x01, (smtc_seqr(0x01) | 0x20));
+-		smtc_seqw(0x20, (smtc_seqr(0x20) & (~0xB0)));
+-		smtc_seqw(0x6a, 0x0c);
+-		smtc_seqw(0x6b, 0x02);
+ 		smtc_seqw(0x21, (smtc_seqr(0x21) | 0x88));
++		smtc_seqw(0x20, (smtc_seqr(0x20) & (~0xB0)));
+ 		smtc_seqw(0x22, ((smtc_seqr(0x22) & (~0x30)) | 0x10));
+-		smtc_seqw(0x23, ((smtc_seqr(0x23) & (~0xc0)) | 0xD8));
+-		smtc_seqw(0x24, (smtc_seqr(0x24) & (~0x01)));
+-		smtc_seqw(0x31, ((smtc_seqr(0x31) & (~0x07)) | 0x00));
+ 		smtc_seqw(0x34, (smtc_seqr(0x34) | 0x80));
++		smtc_seqw(0x6a, 0x0c);
++		smtc_seqw(0x6b, 0x02);
+ 		break;
+ 	case FB_BLANK_POWERDOWN:
+ 		/* Screen On: HSync: Off, VSync : Off */
++		smtc_seqw(0x24, (smtc_seqr(0x24) & (~0x01)));
++		smtc_seqw(0x31, ((smtc_seqr(0x31) & (~0x07)) | 0x00));
++		smtc_seqw(0x23, ((smtc_seqr(0x23) & (~0xc0)) | 0xD8));
+ 		smtc_seqw(0x01, (smtc_seqr(0x01) | 0x20));
+-		smtc_seqw(0x20, (smtc_seqr(0x20) & (~0xB0)));
+-		smtc_seqw(0x6a, 0x0c);
+-		smtc_seqw(0x6b, 0x02);
+ 		smtc_seqw(0x21, (smtc_seqr(0x21) | 0x88));
++		smtc_seqw(0x20, (smtc_seqr(0x20) & (~0xB0)));
+ 		smtc_seqw(0x22, ((smtc_seqr(0x22) & (~0x30)) | 0x30));
+-		smtc_seqw(0x23, ((smtc_seqr(0x23) & (~0xc0)) | 0xD8));
+-		smtc_seqw(0x24, (smtc_seqr(0x24) & (~0x01)));
+-		smtc_seqw(0x31, ((smtc_seqr(0x31) & (~0x07)) | 0x00));
+ 		smtc_seqw(0x34, (smtc_seqr(0x34) | 0x80));
++		smtc_seqw(0x6a, 0x0c);
++		smtc_seqw(0x6b, 0x02);
+ 		break;
+ 	default:
+ 		return -EINVAL;
+@@ -1145,8 +1217,10 @@ static void sm7xx_set_timing(struct smtcfb_info *sfb)
+ 
+ 		/* init SEQ register SR30 - SR75 */
+ 		for (i = 0; i < SIZE_SR30_SR75; i++)
+-			if ((i + 0x30) != 0x62 && (i + 0x30) != 0x6a &&
+-			    (i + 0x30) != 0x6b)
++			if ((i + 0x30) != 0x30 && (i + 0x30) != 0x62 &&
++			    (i + 0x30) != 0x6a && (i + 0x30) != 0x6b &&
++			    (i + 0x30) != 0x70 && (i + 0x30) != 0x71 &&
++			    (i + 0x30) != 0x74 && (i + 0x30) != 0x75)
+ 				smtc_seqw(i + 0x30,
+ 					  vgamode[j].init_sr30_sr75[i]);
+ 
+@@ -1171,8 +1245,12 @@ static void sm7xx_set_timing(struct smtcfb_info *sfb)
+ 			smtc_crtcw(i, vgamode[j].init_cr00_cr18[i]);
+ 
+ 		/* init CRTC register CR30 - CR4D */
+-		for (i = 0; i < SIZE_CR30_CR4D; i++)
++		for (i = 0; i < SIZE_CR30_CR4D; i++) {
++			if ((i + 0x30) >= 0x3B && (i + 0x30) <= 0x3F)
++				/* side-effect, don't write to CR3B-CR3F */
++				continue;
+ 			smtc_crtcw(i + 0x30, vgamode[j].init_cr30_cr4d[i]);
++		}
+ 
+ 		/* init CRTC register CR90 - CRA7 */
+ 		for (i = 0; i < SIZE_CR90_CRA7; i++)
+@@ -1323,6 +1401,11 @@ static int smtc_map_smem(struct smtcfb_info *sfb,
+ {
+ 	sfb->fb->fix.smem_start = pci_resource_start(pdev, 0);
+ 
++	if (sfb->chip_id == 0x720)
++		/* on SM720, the framebuffer starts at the 1 MB offset */
++		sfb->fb->fix.smem_start += 0x00200000;
++
++	/* XXX: is it safe for SM720 on Big-Endian? */
+ 	if (sfb->fb->var.bits_per_pixel == 32)
+ 		sfb->fb->fix.smem_start += big_addr;
+ 
+@@ -1360,12 +1443,82 @@ static inline void sm7xx_init_hw(void)
+ 	outb_p(0x11, 0x3c5);
+ }
+ 
++static u_long sm7xx_vram_probe(struct smtcfb_info *sfb)
++{
++	u8 vram;
++
++	switch (sfb->chip_id) {
++	case 0x710:
++	case 0x712:
++		/*
++		 * Assume SM712 graphics chip has 4MB VRAM.
++		 *
++		 * FIXME: SM712 can have 2MB VRAM, which is used on earlier
++		 * laptops, such as IBM Thinkpad 240X. This driver would
++		 * probably crash on those machines. If anyone gets one of
++		 * those and is willing to help, run "git blame" and send me
++		 * an E-mail.
++		 */
++		return 0x00400000;
++	case 0x720:
++		outb_p(0x76, 0x3c4);
++		vram = inb_p(0x3c5) >> 6;
++
++		if (vram == 0x00)
++			return 0x00800000;  /* 8 MB */
++		else if (vram == 0x01)
++			return 0x01000000;  /* 16 MB */
++		else if (vram == 0x02)
++			return 0x00400000;  /* illegal, fallback to 4 MB */
++		else if (vram == 0x03)
++			return 0x00400000;  /* 4 MB */
++	}
++	return 0;  /* unknown hardware */
++}
++
++static void sm7xx_resolution_probe(struct smtcfb_info *sfb)
++{
++	/* get mode parameter from smtc_scr_info */
++	if (smtc_scr_info.lfb_width != 0) {
++		sfb->fb->var.xres = smtc_scr_info.lfb_width;
++		sfb->fb->var.yres = smtc_scr_info.lfb_height;
++		sfb->fb->var.bits_per_pixel = smtc_scr_info.lfb_depth;
++		goto final;
++	}
++
++	/*
++	 * No parameter, default resolution is 1024x768-16.
++	 *
++	 * FIXME: earlier laptops, such as IBM Thinkpad 240X, has a 800x600
++	 * panel, also see the comments about Thinkpad 240X above.
++	 */
++	sfb->fb->var.xres = SCREEN_X_RES;
++	sfb->fb->var.yres = SCREEN_Y_RES_PC;
++	sfb->fb->var.bits_per_pixel = SCREEN_BPP;
++
++#ifdef CONFIG_MIPS
++	/*
++	 * Loongson MIPS netbooks use 1024x600 LCD panels, which is the original
++	 * target platform of this driver, but nearly all old x86 laptops have
++	 * 1024x768. Lighting 768 panels using 600's timings would partially
++	 * garble the display, so we don't want that. But it's not possible to
++	 * distinguish them reliably.
++	 *
++	 * So we change the default to 768, but keep 600 as-is on MIPS.
++	 */
++	sfb->fb->var.yres = SCREEN_Y_RES_NETBOOK;
++#endif
++
++final:
++	big_pixel_depth(sfb->fb->var.bits_per_pixel, smtc_scr_info.lfb_depth);
++}
++
+ static int smtcfb_pci_probe(struct pci_dev *pdev,
+ 			    const struct pci_device_id *ent)
+ {
+ 	struct smtcfb_info *sfb;
+ 	struct fb_info *info;
+-	u_long smem_size = 0x00800000;	/* default 8MB */
++	u_long smem_size;
+ 	int err;
+ 	unsigned long mmio_base;
+ 
+@@ -1405,29 +1558,19 @@ static int smtcfb_pci_probe(struct pci_dev *pdev,
+ 
+ 	sm7xx_init_hw();
+ 
+-	/* get mode parameter from smtc_scr_info */
+-	if (smtc_scr_info.lfb_width != 0) {
+-		sfb->fb->var.xres = smtc_scr_info.lfb_width;
+-		sfb->fb->var.yres = smtc_scr_info.lfb_height;
+-		sfb->fb->var.bits_per_pixel = smtc_scr_info.lfb_depth;
+-	} else {
+-		/* default resolution 1024x600 16bit mode */
+-		sfb->fb->var.xres = SCREEN_X_RES;
+-		sfb->fb->var.yres = SCREEN_Y_RES;
+-		sfb->fb->var.bits_per_pixel = SCREEN_BPP;
+-	}
+-
+-	big_pixel_depth(sfb->fb->var.bits_per_pixel, smtc_scr_info.lfb_depth);
+ 	/* Map address and memory detection */
+ 	mmio_base = pci_resource_start(pdev, 0);
+ 	pci_read_config_byte(pdev, PCI_REVISION_ID, &sfb->chip_rev_id);
+ 
++	smem_size = sm7xx_vram_probe(sfb);
++	dev_info(&pdev->dev, "%lu MiB of VRAM detected.\n",
++					smem_size / 1048576);
++
+ 	switch (sfb->chip_id) {
+ 	case 0x710:
+ 	case 0x712:
+ 		sfb->fb->fix.mmio_start = mmio_base + 0x00400000;
+ 		sfb->fb->fix.mmio_len = 0x00400000;
+-		smem_size = SM712_VIDEOMEMORYSIZE;
+ 		sfb->lfb = ioremap(mmio_base, mmio_addr);
+ 		if (!sfb->lfb) {
+ 			dev_err(&pdev->dev,
+@@ -1459,8 +1602,7 @@ static int smtcfb_pci_probe(struct pci_dev *pdev,
+ 	case 0x720:
+ 		sfb->fb->fix.mmio_start = mmio_base;
+ 		sfb->fb->fix.mmio_len = 0x00200000;
+-		smem_size = SM722_VIDEOMEMORYSIZE;
+-		sfb->dp_regs = ioremap(mmio_base, 0x00a00000);
++		sfb->dp_regs = ioremap(mmio_base, 0x00200000 + smem_size);
+ 		sfb->lfb = sfb->dp_regs + 0x00200000;
+ 		sfb->mmio = (smtc_regbaseaddress =
+ 		    sfb->dp_regs + 0x000c0000);
+@@ -1477,6 +1619,9 @@ static int smtcfb_pci_probe(struct pci_dev *pdev,
+ 		goto failed_fb;
+ 	}
+ 
++	/* probe and decide resolution */
++	sm7xx_resolution_probe(sfb);
++
+ 	/* can support 32 bpp */
+ 	if (sfb->fb->var.bits_per_pixel == 15)
+ 		sfb->fb->var.bits_per_pixel = 16;
+@@ -1487,7 +1632,11 @@ static int smtcfb_pci_probe(struct pci_dev *pdev,
+ 	if (err)
+ 		goto failed;
+ 
+-	smtcfb_setmode(sfb);
++	/*
++	 * The screen would be temporarily garbled when sm712fb takes over
++	 * vesafb or VGA text mode. Zero the framebuffer.
++	 */
++	memset_io(sfb->lfb, 0, sfb->fb->fix.smem_len);
+ 
+ 	err = register_framebuffer(info);
+ 	if (err < 0)
+diff --git a/drivers/video/fbdev/udlfb.c b/drivers/video/fbdev/udlfb.c
+index 1d034dddc556..5a0d6fb02bbc 100644
+--- a/drivers/video/fbdev/udlfb.c
++++ b/drivers/video/fbdev/udlfb.c
+@@ -594,8 +594,7 @@ static int dlfb_render_hline(struct dlfb_data *dlfb, struct urb **urb_ptr,
+ 	return 0;
+ }
+ 
+-static int dlfb_handle_damage(struct dlfb_data *dlfb, int x, int y,
+-	       int width, int height, char *data)
++static int dlfb_handle_damage(struct dlfb_data *dlfb, int x, int y, int width, int height)
+ {
+ 	int i, ret;
+ 	char *cmd;
+@@ -607,21 +606,29 @@ static int dlfb_handle_damage(struct dlfb_data *dlfb, int x, int y,
+ 
+ 	start_cycles = get_cycles();
+ 
++	mutex_lock(&dlfb->render_mutex);
++
+ 	aligned_x = DL_ALIGN_DOWN(x, sizeof(unsigned long));
+ 	width = DL_ALIGN_UP(width + (x-aligned_x), sizeof(unsigned long));
+ 	x = aligned_x;
+ 
+ 	if ((width <= 0) ||
+ 	    (x + width > dlfb->info->var.xres) ||
+-	    (y + height > dlfb->info->var.yres))
+-		return -EINVAL;
++	    (y + height > dlfb->info->var.yres)) {
++		ret = -EINVAL;
++		goto unlock_ret;
++	}
+ 
+-	if (!atomic_read(&dlfb->usb_active))
+-		return 0;
++	if (!atomic_read(&dlfb->usb_active)) {
++		ret = 0;
++		goto unlock_ret;
++	}
+ 
+ 	urb = dlfb_get_urb(dlfb);
+-	if (!urb)
+-		return 0;
++	if (!urb) {
++		ret = 0;
++		goto unlock_ret;
++	}
+ 	cmd = urb->transfer_buffer;
+ 
+ 	for (i = y; i < y + height ; i++) {
+@@ -641,7 +648,7 @@ static int dlfb_handle_damage(struct dlfb_data *dlfb, int x, int y,
+ 			*cmd++ = 0xAF;
+ 		/* Send partial buffer remaining before exiting */
+ 		len = cmd - (char *) urb->transfer_buffer;
+-		ret = dlfb_submit_urb(dlfb, urb, len);
++		dlfb_submit_urb(dlfb, urb, len);
+ 		bytes_sent += len;
+ 	} else
+ 		dlfb_urb_completion(urb);
+@@ -655,7 +662,55 @@ error:
+ 		    >> 10)), /* Kcycles */
+ 		   &dlfb->cpu_kcycles_used);
+ 
+-	return 0;
++	ret = 0;
++
++unlock_ret:
++	mutex_unlock(&dlfb->render_mutex);
++	return ret;
++}
++
++static void dlfb_init_damage(struct dlfb_data *dlfb)
++{
++	dlfb->damage_x = INT_MAX;
++	dlfb->damage_x2 = 0;
++	dlfb->damage_y = INT_MAX;
++	dlfb->damage_y2 = 0;
++}
++
++static void dlfb_damage_work(struct work_struct *w)
++{
++	struct dlfb_data *dlfb = container_of(w, struct dlfb_data, damage_work);
++	int x, x2, y, y2;
++
++	spin_lock_irq(&dlfb->damage_lock);
++	x = dlfb->damage_x;
++	x2 = dlfb->damage_x2;
++	y = dlfb->damage_y;
++	y2 = dlfb->damage_y2;
++	dlfb_init_damage(dlfb);
++	spin_unlock_irq(&dlfb->damage_lock);
++
++	if (x < x2 && y < y2)
++		dlfb_handle_damage(dlfb, x, y, x2 - x, y2 - y);
++}
++
++static void dlfb_offload_damage(struct dlfb_data *dlfb, int x, int y, int width, int height)
++{
++	unsigned long flags;
++	int x2 = x + width;
++	int y2 = y + height;
++
++	if (x >= x2 || y >= y2)
++		return;
++
++	spin_lock_irqsave(&dlfb->damage_lock, flags);
++	dlfb->damage_x = min(x, dlfb->damage_x);
++	dlfb->damage_x2 = max(x2, dlfb->damage_x2);
++	dlfb->damage_y = min(y, dlfb->damage_y);
++	dlfb->damage_y2 = max(y2, dlfb->damage_y2);
++	spin_unlock_irqrestore(&dlfb->damage_lock, flags);
++
++	schedule_work(&dlfb->damage_work);
+ }
+ 
+ /*
+@@ -679,7 +734,7 @@ static ssize_t dlfb_ops_write(struct fb_info *info, const char __user *buf,
+ 				(u32)info->var.yres);
+ 
+ 		dlfb_handle_damage(dlfb, 0, start, info->var.xres,
+-			lines, info->screen_base);
++			lines);
+ 	}
+ 
+ 	return result;
+@@ -694,8 +749,8 @@ static void dlfb_ops_copyarea(struct fb_info *info,
+ 
+ 	sys_copyarea(info, area);
+ 
+-	dlfb_handle_damage(dlfb, area->dx, area->dy,
+-			area->width, area->height, info->screen_base);
++	dlfb_offload_damage(dlfb, area->dx, area->dy,
++			area->width, area->height);
+ }
+ 
+ static void dlfb_ops_imageblit(struct fb_info *info,
+@@ -705,8 +760,8 @@ static void dlfb_ops_imageblit(struct fb_info *info,
+ 
+ 	sys_imageblit(info, image);
+ 
+-	dlfb_handle_damage(dlfb, image->dx, image->dy,
+-			image->width, image->height, info->screen_base);
++	dlfb_offload_damage(dlfb, image->dx, image->dy,
++			image->width, image->height);
+ }
+ 
+ static void dlfb_ops_fillrect(struct fb_info *info,
+@@ -716,8 +771,8 @@ static void dlfb_ops_fillrect(struct fb_info *info,
+ 
+ 	sys_fillrect(info, rect);
+ 
+-	dlfb_handle_damage(dlfb, rect->dx, rect->dy, rect->width,
+-			      rect->height, info->screen_base);
++	dlfb_offload_damage(dlfb, rect->dx, rect->dy, rect->width,
++			      rect->height);
+ }
+ 
+ /*
+@@ -739,17 +794,19 @@ static void dlfb_dpy_deferred_io(struct fb_info *info,
+ 	int bytes_identical = 0;
+ 	int bytes_rendered = 0;
+ 
++	mutex_lock(&dlfb->render_mutex);
++
+ 	if (!fb_defio)
+-		return;
++		goto unlock_ret;
+ 
+ 	if (!atomic_read(&dlfb->usb_active))
+-		return;
++		goto unlock_ret;
+ 
+ 	start_cycles = get_cycles();
+ 
+ 	urb = dlfb_get_urb(dlfb);
+ 	if (!urb)
+-		return;
++		goto unlock_ret;
+ 
+ 	cmd = urb->transfer_buffer;
+ 
+@@ -782,6 +839,8 @@ error:
+ 	atomic_add(((unsigned int) ((end_cycles - start_cycles)
+ 		    >> 10)), /* Kcycles */
+ 		   &dlfb->cpu_kcycles_used);
++unlock_ret:
++	mutex_unlock(&dlfb->render_mutex);
+ }
+ 
+ static int dlfb_get_edid(struct dlfb_data *dlfb, char *edid, int len)
+@@ -859,8 +918,7 @@ static int dlfb_ops_ioctl(struct fb_info *info, unsigned int cmd,
+ 		if (area.y > info->var.yres)
+ 			area.y = info->var.yres;
+ 
+-		dlfb_handle_damage(dlfb, area.x, area.y, area.w, area.h,
+-			   info->screen_base);
++		dlfb_handle_damage(dlfb, area.x, area.y, area.w, area.h);
+ 	}
+ 
+ 	return 0;
+@@ -942,6 +1000,10 @@ static void dlfb_ops_destroy(struct fb_info *info)
+ {
+ 	struct dlfb_data *dlfb = info->par;
+ 
++	cancel_work_sync(&dlfb->damage_work);
++
++	mutex_destroy(&dlfb->render_mutex);
++
+ 	if (info->cmap.len != 0)
+ 		fb_dealloc_cmap(&info->cmap);
+ 	if (info->monspecs.modedb)
+@@ -1065,8 +1127,7 @@ static int dlfb_ops_set_par(struct fb_info *info)
+ 			pix_framebuffer[i] = 0x37e6;
+ 	}
+ 
+-	dlfb_handle_damage(dlfb, 0, 0, info->var.xres, info->var.yres,
+-			   info->screen_base);
++	dlfb_handle_damage(dlfb, 0, 0, info->var.xres, info->var.yres);
+ 
+ 	return 0;
+ }
+@@ -1639,6 +1700,11 @@ static int dlfb_usb_probe(struct usb_interface *intf,
+ 	dlfb->ops = dlfb_ops;
+ 	info->fbops = &dlfb->ops;
+ 
++	mutex_init(&dlfb->render_mutex);
++	dlfb_init_damage(dlfb);
++	spin_lock_init(&dlfb->damage_lock);
++	INIT_WORK(&dlfb->damage_work, dlfb_damage_work);
++
+ 	INIT_LIST_HEAD(&info->modelist);
+ 
+ 	if (!dlfb_alloc_urb_list(dlfb, WRITES_IN_FLIGHT, MAX_TRANSFER)) {
+diff --git a/fs/ceph/super.c b/fs/ceph/super.c
+index da2cd8e89062..950919411460 100644
+--- a/fs/ceph/super.c
++++ b/fs/ceph/super.c
+@@ -832,6 +832,12 @@ static void ceph_umount_begin(struct super_block *sb)
+ 	return;
+ }
+ 
++static int ceph_remount(struct super_block *sb, int *flags, char *data)
++{
++	sync_filesystem(sb);
++	return 0;
++}
++
+ static const struct super_operations ceph_super_ops = {
+ 	.alloc_inode	= ceph_alloc_inode,
+ 	.destroy_inode	= ceph_destroy_inode,
+@@ -839,6 +845,7 @@ static const struct super_operations ceph_super_ops = {
+ 	.drop_inode	= ceph_drop_inode,
+ 	.sync_fs        = ceph_sync_fs,
+ 	.put_super	= ceph_put_super,
++	.remount_fs	= ceph_remount,
+ 	.show_options   = ceph_show_options,
+ 	.statfs		= ceph_statfs,
+ 	.umount_begin   = ceph_umount_begin,
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index d5434ac0571b..105ddbad00e5 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -2652,26 +2652,28 @@ smb21_set_oplock_level(struct cifsInodeInfo *cinode, __u32 oplock,
+ 		       unsigned int epoch, bool *purge_cache)
+ {
+ 	char message[5] = {0};
++	unsigned int new_oplock = 0;
+ 
+ 	oplock &= 0xFF;
+ 	if (oplock == SMB2_OPLOCK_LEVEL_NOCHANGE)
+ 		return;
+ 
+-	cinode->oplock = 0;
+ 	if (oplock & SMB2_LEASE_READ_CACHING_HE) {
+-		cinode->oplock |= CIFS_CACHE_READ_FLG;
++		new_oplock |= CIFS_CACHE_READ_FLG;
+ 		strcat(message, "R");
+ 	}
+ 	if (oplock & SMB2_LEASE_HANDLE_CACHING_HE) {
+-		cinode->oplock |= CIFS_CACHE_HANDLE_FLG;
++		new_oplock |= CIFS_CACHE_HANDLE_FLG;
+ 		strcat(message, "H");
+ 	}
+ 	if (oplock & SMB2_LEASE_WRITE_CACHING_HE) {
+-		cinode->oplock |= CIFS_CACHE_WRITE_FLG;
++		new_oplock |= CIFS_CACHE_WRITE_FLG;
+ 		strcat(message, "W");
+ 	}
+-	if (!cinode->oplock)
+-		strcat(message, "None");
++	if (!new_oplock)
++		strncpy(message, "None", sizeof(message));
++
++	cinode->oplock = new_oplock;
+ 	cifs_dbg(FYI, "%s Lease granted on inode %p\n", message,
+ 		 &cinode->vfs_inode);
+ }
+diff --git a/fs/dcache.c b/fs/dcache.c
+index aac41adf4743..c663c602f9ef 100644
+--- a/fs/dcache.c
++++ b/fs/dcache.c
+@@ -344,7 +344,7 @@ static void dentry_free(struct dentry *dentry)
+ 		}
+ 	}
+ 	/* if dentry was never visible to RCU, immediate free is OK */
+-	if (!(dentry->d_flags & DCACHE_RCUACCESS))
++	if (dentry->d_flags & DCACHE_NORCU)
+ 		__d_free(&dentry->d_u.d_rcu);
+ 	else
+ 		call_rcu(&dentry->d_u.d_rcu, __d_free);
+@@ -1701,7 +1701,6 @@ struct dentry *d_alloc(struct dentry * parent, const struct qstr *name)
+ 	struct dentry *dentry = __d_alloc(parent->d_sb, name);
+ 	if (!dentry)
+ 		return NULL;
+-	dentry->d_flags |= DCACHE_RCUACCESS;
+ 	spin_lock(&parent->d_lock);
+ 	/*
+ 	 * don't need child lock because it is not subject
+@@ -1726,7 +1725,7 @@ struct dentry *d_alloc_cursor(struct dentry * parent)
+ {
+ 	struct dentry *dentry = d_alloc_anon(parent->d_sb);
+ 	if (dentry) {
+-		dentry->d_flags |= DCACHE_RCUACCESS | DCACHE_DENTRY_CURSOR;
++		dentry->d_flags |= DCACHE_DENTRY_CURSOR;
+ 		dentry->d_parent = dget(parent);
+ 	}
+ 	return dentry;
+@@ -1739,10 +1738,17 @@ struct dentry *d_alloc_cursor(struct dentry * parent)
+  *
+  * For a filesystem that just pins its dentries in memory and never
+  * performs lookups at all, return an unhashed IS_ROOT dentry.
++ * This is used for pipes, sockets et.al. - the stuff that should
++ * never be anyone's children or parents.  Unlike all other
++ * dentries, these will not have RCU delay between dropping the
++ * last reference and freeing them.
+  */
+ struct dentry *d_alloc_pseudo(struct super_block *sb, const struct qstr *name)
+ {
+-	return __d_alloc(sb, name);
++	struct dentry *dentry = __d_alloc(sb, name);
++	if (likely(dentry))
++		dentry->d_flags |= DCACHE_NORCU;
++	return dentry;
+ }
+ EXPORT_SYMBOL(d_alloc_pseudo);
+ 
+@@ -1911,12 +1917,10 @@ struct dentry *d_make_root(struct inode *root_inode)
+ 
+ 	if (root_inode) {
+ 		res = d_alloc_anon(root_inode->i_sb);
+-		if (res) {
+-			res->d_flags |= DCACHE_RCUACCESS;
++		if (res)
+ 			d_instantiate(res, root_inode);
+-		} else {
++		else
+ 			iput(root_inode);
+-		}
+ 	}
+ 	return res;
+ }
+@@ -2781,9 +2785,7 @@ static void __d_move(struct dentry *dentry, struct dentry *target,
+ 		copy_name(dentry, target);
+ 		target->d_hash.pprev = NULL;
+ 		dentry->d_parent->d_lockref.count++;
+-		if (dentry == old_parent)
+-			dentry->d_flags |= DCACHE_RCUACCESS;
+-		else
++		if (dentry != old_parent) /* wasn't IS_ROOT */
+ 			WARN_ON(!--old_parent->d_lockref.count);
+ 	} else {
+ 		target->d_parent = old_parent;
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index a59c16bd90ac..d2926ac44f83 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -181,7 +181,9 @@ void fuse_finish_open(struct inode *inode, struct file *file)
+ 		file->f_op = &fuse_direct_io_file_operations;
+ 	if (!(ff->open_flags & FOPEN_KEEP_CACHE))
+ 		invalidate_inode_pages2(inode->i_mapping);
+-	if (ff->open_flags & FOPEN_NONSEEKABLE)
++	if (ff->open_flags & FOPEN_STREAM)
++		stream_open(inode, file);
++	else if (ff->open_flags & FOPEN_NONSEEKABLE)
+ 		nonseekable_open(inode, file);
+ 	if (fc->atomic_o_trunc && (file->f_flags & O_TRUNC)) {
+ 		struct fuse_inode *fi = get_fuse_inode(inode);
+@@ -1530,7 +1532,7 @@ __acquires(fc->lock)
+ {
+ 	struct fuse_conn *fc = get_fuse_conn(inode);
+ 	struct fuse_inode *fi = get_fuse_inode(inode);
+-	size_t crop = i_size_read(inode);
++	loff_t crop = i_size_read(inode);
+ 	struct fuse_req *req;
+ 
+ 	while (fi->writectr >= 0 && !list_empty(&fi->queued_writes)) {
+@@ -2987,6 +2989,13 @@ static long fuse_file_fallocate(struct file *file, int mode, loff_t offset,
+ 		}
+ 	}
+ 
++	if (!(mode & FALLOC_FL_KEEP_SIZE) &&
++	    offset + length > i_size_read(inode)) {
++		err = inode_newsize_ok(inode, offset + length);
++		if (err)
++			return err;
++	}
++
+ 	if (!(mode & FALLOC_FL_KEEP_SIZE))
+ 		set_bit(FUSE_I_SIZE_UNSTABLE, &fi->state);
+ 
+diff --git a/fs/nfs/filelayout/filelayout.c b/fs/nfs/filelayout/filelayout.c
+index 61f46facb39c..b3e8ba3bd654 100644
+--- a/fs/nfs/filelayout/filelayout.c
++++ b/fs/nfs/filelayout/filelayout.c
+@@ -904,7 +904,7 @@ fl_pnfs_update_layout(struct inode *ino,
+ 	status = filelayout_check_deviceid(lo, fl, gfp_flags);
+ 	if (status) {
+ 		pnfs_put_lseg(lseg);
+-		lseg = ERR_PTR(status);
++		lseg = NULL;
+ 	}
+ out:
+ 	return lseg;
+diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
+index 02488b50534a..6999e870baa9 100644
+--- a/fs/nfs/nfs4state.c
++++ b/fs/nfs/nfs4state.c
+@@ -159,6 +159,10 @@ int nfs40_discover_server_trunking(struct nfs_client *clp,
+ 		/* Sustain the lease, even if it's empty.  If the clientid4
+ 		 * goes stale it's of no use for trunking discovery. */
+ 		nfs4_schedule_state_renewal(*result);
++
++		/* If the client state need to recover, do it. */
++		if (clp->cl_state)
++			nfs4_schedule_state_manager(clp);
+ 	}
+ out:
+ 	return status;
+diff --git a/fs/nsfs.c b/fs/nsfs.c
+index 60702d677bd4..30d150a4f0c6 100644
+--- a/fs/nsfs.c
++++ b/fs/nsfs.c
+@@ -85,13 +85,12 @@ slow:
+ 	inode->i_fop = &ns_file_operations;
+ 	inode->i_private = ns;
+ 
+-	dentry = d_alloc_pseudo(mnt->mnt_sb, &empty_name);
++	dentry = d_alloc_anon(mnt->mnt_sb);
+ 	if (!dentry) {
+ 		iput(inode);
+ 		return ERR_PTR(-ENOMEM);
+ 	}
+ 	d_instantiate(dentry, inode);
+-	dentry->d_flags |= DCACHE_RCUACCESS;
+ 	dentry->d_fsdata = (void *)ns->ops;
+ 	d = atomic_long_cmpxchg(&ns->stashed, 0, (unsigned long)dentry);
+ 	if (d) {
+diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c
+index 68b3303e4b46..56feaa739979 100644
+--- a/fs/overlayfs/copy_up.c
++++ b/fs/overlayfs/copy_up.c
+@@ -909,14 +909,14 @@ static bool ovl_open_need_copy_up(struct dentry *dentry, int flags)
+ 	return true;
+ }
+ 
+-int ovl_open_maybe_copy_up(struct dentry *dentry, unsigned int file_flags)
++int ovl_maybe_copy_up(struct dentry *dentry, int flags)
+ {
+ 	int err = 0;
+ 
+-	if (ovl_open_need_copy_up(dentry, file_flags)) {
++	if (ovl_open_need_copy_up(dentry, flags)) {
+ 		err = ovl_want_write(dentry);
+ 		if (!err) {
+-			err = ovl_copy_up_flags(dentry, file_flags);
++			err = ovl_copy_up_flags(dentry, flags);
+ 			ovl_drop_write(dentry);
+ 		}
+ 	}
+diff --git a/fs/overlayfs/file.c b/fs/overlayfs/file.c
+index 84dd957efa24..50e4407398d8 100644
+--- a/fs/overlayfs/file.c
++++ b/fs/overlayfs/file.c
+@@ -116,11 +116,10 @@ static int ovl_real_fdget(const struct file *file, struct fd *real)
+ 
+ static int ovl_open(struct inode *inode, struct file *file)
+ {
+-	struct dentry *dentry = file_dentry(file);
+ 	struct file *realfile;
+ 	int err;
+ 
+-	err = ovl_open_maybe_copy_up(dentry, file->f_flags);
++	err = ovl_maybe_copy_up(file_dentry(file), file->f_flags);
+ 	if (err)
+ 		return err;
+ 
+@@ -390,7 +389,7 @@ static long ovl_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ 		if (ret)
+ 			return ret;
+ 
+-		ret = ovl_copy_up_with_data(file_dentry(file));
++		ret = ovl_maybe_copy_up(file_dentry(file), O_WRONLY);
+ 		if (!ret) {
+ 			ret = ovl_real_ioctl(file, cmd, arg);
+ 
+diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h
+index 9c6018287d57..d26efed9f80a 100644
+--- a/fs/overlayfs/overlayfs.h
++++ b/fs/overlayfs/overlayfs.h
+@@ -421,7 +421,7 @@ extern const struct file_operations ovl_file_operations;
+ int ovl_copy_up(struct dentry *dentry);
+ int ovl_copy_up_with_data(struct dentry *dentry);
+ int ovl_copy_up_flags(struct dentry *dentry, int flags);
+-int ovl_open_maybe_copy_up(struct dentry *dentry, unsigned int file_flags);
++int ovl_maybe_copy_up(struct dentry *dentry, int flags);
+ int ovl_copy_xattr(struct dentry *old, struct dentry *new);
+ int ovl_set_attr(struct dentry *upper, struct kstat *stat);
+ struct ovl_fh *ovl_encode_real_fh(struct dentry *real, bool is_upper);
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index f5ed9512d193..ef11c54ad712 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -2550,6 +2550,11 @@ static ssize_t proc_pid_attr_write(struct file * file, const char __user * buf,
+ 		rcu_read_unlock();
+ 		return -EACCES;
+ 	}
++	/* Prevent changes to overridden credentials. */
++	if (current_cred() != current_real_cred()) {
++		rcu_read_unlock();
++		return -EBUSY;
++	}
+ 	rcu_read_unlock();
+ 
+ 	if (count > PAGE_SIZE)
+diff --git a/fs/ufs/util.h b/fs/ufs/util.h
+index 1fd3011ea623..7fd4802222b8 100644
+--- a/fs/ufs/util.h
++++ b/fs/ufs/util.h
+@@ -229,7 +229,7 @@ ufs_get_inode_gid(struct super_block *sb, struct ufs_inode *inode)
+ 	case UFS_UID_44BSD:
+ 		return fs32_to_cpu(sb, inode->ui_u3.ui_44.ui_gid);
+ 	case UFS_UID_EFT:
+-		if (inode->ui_u1.oldids.ui_suid == 0xFFFF)
++		if (inode->ui_u1.oldids.ui_sgid == 0xFFFF)
+ 			return fs32_to_cpu(sb, inode->ui_u3.ui_sun.ui_gid);
+ 		/* Fall through */
+ 	default:
+diff --git a/include/asm-generic/mm_hooks.h b/include/asm-generic/mm_hooks.h
+index 8ac4e68a12f0..6736ed2f632b 100644
+--- a/include/asm-generic/mm_hooks.h
++++ b/include/asm-generic/mm_hooks.h
+@@ -18,7 +18,6 @@ static inline void arch_exit_mmap(struct mm_struct *mm)
+ }
+ 
+ static inline void arch_unmap(struct mm_struct *mm,
+-			struct vm_area_struct *vma,
+ 			unsigned long start, unsigned long end)
+ {
+ }
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index e734f163bd0b..bd8c322fd92a 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -35,6 +35,7 @@ struct bpf_map_ops {
+ 	void (*map_free)(struct bpf_map *map);
+ 	int (*map_get_next_key)(struct bpf_map *map, void *key, void *next_key);
+ 	void (*map_release_uref)(struct bpf_map *map);
++	void *(*map_lookup_elem_sys_only)(struct bpf_map *map, void *key);
+ 
+ 	/* funcs callable from userspace and from eBPF programs */
+ 	void *(*map_lookup_elem)(struct bpf_map *map, void *key);
+@@ -455,7 +456,7 @@ int bpf_prog_array_copy(struct bpf_prog_array __rcu *old_array,
+ 		}					\
+ _out:							\
+ 		rcu_read_unlock();			\
+-		preempt_enable_no_resched();		\
++		preempt_enable();			\
+ 		_ret;					\
+ 	 })
+ 
+diff --git a/include/linux/dcache.h b/include/linux/dcache.h
+index 60996e64c579..6e1e8e6602c6 100644
+--- a/include/linux/dcache.h
++++ b/include/linux/dcache.h
+@@ -176,7 +176,6 @@ struct dentry_operations {
+       * typically using d_splice_alias. */
+ 
+ #define DCACHE_REFERENCED		0x00000040 /* Recently used, don't discard. */
+-#define DCACHE_RCUACCESS		0x00000080 /* Entry has ever been RCU-visible */
+ 
+ #define DCACHE_CANT_MOUNT		0x00000100
+ #define DCACHE_GENOCIDE			0x00000200
+@@ -217,6 +216,7 @@ struct dentry_operations {
+ 
+ #define DCACHE_PAR_LOOKUP		0x10000000 /* being looked up (with parent locked shared) */
+ #define DCACHE_DENTRY_CURSOR		0x20000000
++#define DCACHE_NORCU			0x40000000 /* No RCU delay for freeing */
+ 
+ extern seqlock_t rename_lock;
+ 
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index 4f001619f854..a6d4436c76b5 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -677,7 +677,6 @@ struct mlx5_core_dev {
+ #endif
+ 	struct mlx5_clock        clock;
+ 	struct mlx5_ib_clock_info  *clock_info;
+-	struct page             *clock_info_page;
+ 	struct mlx5_fw_tracer   *tracer;
+ };
+ 
+diff --git a/include/linux/of.h b/include/linux/of.h
+index e240992e5cb6..074913002e39 100644
+--- a/include/linux/of.h
++++ b/include/linux/of.h
+@@ -234,8 +234,8 @@ extern struct device_node *of_find_all_nodes(struct device_node *prev);
+ static inline u64 of_read_number(const __be32 *cell, int size)
+ {
+ 	u64 r = 0;
+-	while (size--)
+-		r = (r << 32) | be32_to_cpu(*(cell++));
++	for (; size--; cell++)
++		r = (r << 32) | be32_to_cpu(*cell);
+ 	return r;
+ }
+ 
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index 65f1d8c2f082..0e5e1ceae27d 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -348,6 +348,8 @@ struct pci_dev {
+ 	unsigned int	hotplug_user_indicators:1; /* SlotCtl indicators
+ 						      controlled exclusively by
+ 						      user sysfs */
++	unsigned int	clear_retrain_link:1;	/* Need to clear Retrain Link
++						   bit manually */
+ 	unsigned int	d3_delay;	/* D3->D0 transition time in ms */
+ 	unsigned int	d3cold_delay;	/* D3cold->D0 transition time in ms */
+ 
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index b8679dcba96f..3b1a8f38a1ef 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -1366,10 +1366,12 @@ static inline void skb_zcopy_clear(struct sk_buff *skb, bool zerocopy)
+ 	struct ubuf_info *uarg = skb_zcopy(skb);
+ 
+ 	if (uarg) {
+-		if (uarg->callback == sock_zerocopy_callback) {
++		if (skb_zcopy_is_nouarg(skb)) {
++			/* no notification callback */
++		} else if (uarg->callback == sock_zerocopy_callback) {
+ 			uarg->zerocopy = uarg->zerocopy && zerocopy;
+ 			sock_zerocopy_put(uarg);
+-		} else if (!skb_zcopy_is_nouarg(skb)) {
++		} else {
+ 			uarg->callback(uarg, zerocopy);
+ 		}
+ 
+@@ -2627,7 +2629,8 @@ static inline int skb_orphan_frags(struct sk_buff *skb, gfp_t gfp_mask)
+ {
+ 	if (likely(!skb_zcopy(skb)))
+ 		return 0;
+-	if (skb_uarg(skb)->callback == sock_zerocopy_callback)
++	if (!skb_zcopy_is_nouarg(skb) &&
++	    skb_uarg(skb)->callback == sock_zerocopy_callback)
+ 		return 0;
+ 	return skb_copy_ubufs(skb, gfp_mask);
+ }
+diff --git a/include/net/ip6_fib.h b/include/net/ip6_fib.h
+index 84097010237c..b5e3add90e99 100644
+--- a/include/net/ip6_fib.h
++++ b/include/net/ip6_fib.h
+@@ -171,7 +171,8 @@ struct fib6_info {
+ 					dst_nocount:1,
+ 					dst_nopolicy:1,
+ 					dst_host:1,
+-					unused:3;
++					fib6_destroying:1,
++					unused:2;
+ 
+ 	struct fib6_nh			fib6_nh;
+ 	struct rcu_head			rcu;
+diff --git a/include/net/xfrm.h b/include/net/xfrm.h
+index 85386becbaea..c9b0b2b5d672 100644
+--- a/include/net/xfrm.h
++++ b/include/net/xfrm.h
+@@ -295,7 +295,8 @@ struct xfrm_replay {
+ };
+ 
+ struct xfrm_if_cb {
+-	struct xfrm_if	*(*decode_session)(struct sk_buff *skb);
++	struct xfrm_if	*(*decode_session)(struct sk_buff *skb,
++					   unsigned short family);
+ };
+ 
+ void xfrm_if_register_cb(const struct xfrm_if_cb *ifcb);
+@@ -1404,6 +1405,23 @@ static inline int xfrm_state_kern(const struct xfrm_state *x)
+ 	return atomic_read(&x->tunnel_users);
+ }
+ 
++static inline bool xfrm_id_proto_valid(u8 proto)
++{
++	switch (proto) {
++	case IPPROTO_AH:
++	case IPPROTO_ESP:
++	case IPPROTO_COMP:
++#if IS_ENABLED(CONFIG_IPV6)
++	case IPPROTO_ROUTING:
++	case IPPROTO_DSTOPTS:
++#endif
++		return true;
++	default:
++		return false;
++	}
++}
++
++/* IPSEC_PROTO_ANY only matches 3 IPsec protocols, 0 could match all. */
+ static inline int xfrm_id_proto_match(u8 proto, u8 userproto)
+ {
+ 	return (!userproto || proto == userproto ||
+diff --git a/include/uapi/linux/fuse.h b/include/uapi/linux/fuse.h
+index b4967d48bfda..5f7c3a221894 100644
+--- a/include/uapi/linux/fuse.h
++++ b/include/uapi/linux/fuse.h
+@@ -226,11 +226,13 @@ struct fuse_file_lock {
+  * FOPEN_KEEP_CACHE: don't invalidate the data cache on open
+  * FOPEN_NONSEEKABLE: the file is not seekable
+  * FOPEN_CACHE_DIR: allow caching this directory
++ * FOPEN_STREAM: the file is stream-like (no file position at all)
+  */
+ #define FOPEN_DIRECT_IO		(1 << 0)
+ #define FOPEN_KEEP_CACHE	(1 << 1)
+ #define FOPEN_NONSEEKABLE	(1 << 2)
+ #define FOPEN_CACHE_DIR		(1 << 3)
++#define FOPEN_STREAM		(1 << 4)
+ 
+ /**
+  * INIT request/reply flags
+diff --git a/include/video/udlfb.h b/include/video/udlfb.h
+index 7d09e54ae54e..58fb5732831a 100644
+--- a/include/video/udlfb.h
++++ b/include/video/udlfb.h
+@@ -48,6 +48,13 @@ struct dlfb_data {
+ 	int base8;
+ 	u32 pseudo_palette[256];
+ 	int blank_mode; /*one of FB_BLANK_ */
++	struct mutex render_mutex;
++	int damage_x;
++	int damage_y;
++	int damage_x2;
++	int damage_y2;
++	spinlock_t damage_lock;
++	struct work_struct damage_work;
+ 	struct fb_ops ops;
+ 	/* blit-only rendering path metrics, exposed through sysfs */
+ 	atomic_t bytes_rendered; /* raw pixel-bytes driver asked to render */
+diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
+index f9274114c88d..be5747a5337a 100644
+--- a/kernel/bpf/hashtab.c
++++ b/kernel/bpf/hashtab.c
+@@ -527,18 +527,30 @@ static u32 htab_map_gen_lookup(struct bpf_map *map, struct bpf_insn *insn_buf)
+ 	return insn - insn_buf;
+ }
+ 
+-static void *htab_lru_map_lookup_elem(struct bpf_map *map, void *key)
++static __always_inline void *__htab_lru_map_lookup_elem(struct bpf_map *map,
++							void *key, const bool mark)
+ {
+ 	struct htab_elem *l = __htab_map_lookup_elem(map, key);
+ 
+ 	if (l) {
+-		bpf_lru_node_set_ref(&l->lru_node);
++		if (mark)
++			bpf_lru_node_set_ref(&l->lru_node);
+ 		return l->key + round_up(map->key_size, 8);
+ 	}
+ 
+ 	return NULL;
+ }
+ 
++static void *htab_lru_map_lookup_elem(struct bpf_map *map, void *key)
++{
++	return __htab_lru_map_lookup_elem(map, key, true);
++}
++
++static void *htab_lru_map_lookup_elem_sys(struct bpf_map *map, void *key)
++{
++	return __htab_lru_map_lookup_elem(map, key, false);
++}
++
+ static u32 htab_lru_map_gen_lookup(struct bpf_map *map,
+ 				   struct bpf_insn *insn_buf)
+ {
+@@ -1215,6 +1227,7 @@ const struct bpf_map_ops htab_lru_map_ops = {
+ 	.map_free = htab_map_free,
+ 	.map_get_next_key = htab_map_get_next_key,
+ 	.map_lookup_elem = htab_lru_map_lookup_elem,
++	.map_lookup_elem_sys_only = htab_lru_map_lookup_elem_sys,
+ 	.map_update_elem = htab_lru_map_update_elem,
+ 	.map_delete_elem = htab_lru_map_delete_elem,
+ 	.map_gen_lookup = htab_lru_map_gen_lookup,
+@@ -1246,7 +1259,6 @@ static void *htab_lru_percpu_map_lookup_elem(struct bpf_map *map, void *key)
+ 
+ int bpf_percpu_hash_copy(struct bpf_map *map, void *key, void *value)
+ {
+-	struct bpf_htab *htab = container_of(map, struct bpf_htab, map);
+ 	struct htab_elem *l;
+ 	void __percpu *pptr;
+ 	int ret = -ENOENT;
+@@ -1262,8 +1274,9 @@ int bpf_percpu_hash_copy(struct bpf_map *map, void *key, void *value)
+ 	l = __htab_map_lookup_elem(map, key);
+ 	if (!l)
+ 		goto out;
+-	if (htab_is_lru(htab))
+-		bpf_lru_node_set_ref(&l->lru_node);
++	/* We do not mark LRU map element here in order to not mess up
++	 * eviction heuristics when user space does a map walk.
++	 */
+ 	pptr = htab_elem_get_ptr(l, map->key_size);
+ 	for_each_possible_cpu(cpu) {
+ 		bpf_long_memcpy(value + off,
+diff --git a/kernel/bpf/inode.c b/kernel/bpf/inode.c
+index 4a8f390a2b82..dc9d7ac8228d 100644
+--- a/kernel/bpf/inode.c
++++ b/kernel/bpf/inode.c
+@@ -518,7 +518,7 @@ out:
+ static struct bpf_prog *__get_prog_inode(struct inode *inode, enum bpf_prog_type type)
+ {
+ 	struct bpf_prog *prog;
+-	int ret = inode_permission(inode, MAY_READ | MAY_WRITE);
++	int ret = inode_permission(inode, MAY_READ);
+ 	if (ret)
+ 		return ERR_PTR(ret);
+ 
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index 84470d1480aa..07d9b76e90ce 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -738,7 +738,10 @@ static int map_lookup_elem(union bpf_attr *attr)
+ 		err = map->ops->map_peek_elem(map, value);
+ 	} else {
+ 		rcu_read_lock();
+-		ptr = map->ops->map_lookup_elem(map, key);
++		if (map->ops->map_lookup_elem_sys_only)
++			ptr = map->ops->map_lookup_elem_sys_only(map, key);
++		else
++			ptr = map->ops->map_lookup_elem(map, key);
+ 		if (IS_ERR(ptr)) {
+ 			err = PTR_ERR(ptr);
+ 		} else if (!ptr) {
+diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
+index 1ccf77f6d346..d4ab9245e016 100644
+--- a/kernel/sched/cpufreq_schedutil.c
++++ b/kernel/sched/cpufreq_schedutil.c
+@@ -771,6 +771,7 @@ out:
+ 	return 0;
+ 
+ fail:
++	kobject_put(&tunables->attr_set.kobj);
+ 	policy->governor_data = NULL;
+ 	sugov_tunables_free(tunables);
+ 
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index 5b3b0c3c8a47..d910e36c34b5 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -1318,9 +1318,6 @@ event_id_read(struct file *filp, char __user *ubuf, size_t cnt, loff_t *ppos)
+ 	char buf[32];
+ 	int len;
+ 
+-	if (*ppos)
+-		return 0;
+-
+ 	if (unlikely(!id))
+ 		return -ENODEV;
+ 
+diff --git a/kernel/trace/trace_probe.c b/kernel/trace/trace_probe.c
+index 9962cb5da8ac..44f078cda0ac 100644
+--- a/kernel/trace/trace_probe.c
++++ b/kernel/trace/trace_probe.c
+@@ -405,13 +405,14 @@ static int traceprobe_parse_probe_arg_body(char *arg, ssize_t *size,
+ 				return -E2BIG;
+ 		}
+ 	}
+-	/*
+-	 * The default type of $comm should be "string", and it can't be
+-	 * dereferenced.
+-	 */
+-	if (!t && strcmp(arg, "$comm") == 0)
++
++	/* Since $comm can not be dereferred, we can find $comm by strcmp */
++	if (strcmp(arg, "$comm") == 0) {
++		/* The type of $comm must be "string", and not an array. */
++		if (parg->count || (t && strcmp(t, "string")))
++			return -EINVAL;
+ 		parg->type = find_fetch_type("string");
+-	else
++	} else
+ 		parg->type = find_fetch_type(t);
+ 	if (!parg->type) {
+ 		pr_info("Unsupported type: %s\n", t);
+diff --git a/lib/Makefile b/lib/Makefile
+index e1b59da71418..d1f312096bec 100644
+--- a/lib/Makefile
++++ b/lib/Makefile
+@@ -17,6 +17,17 @@ KCOV_INSTRUMENT_list_debug.o := n
+ KCOV_INSTRUMENT_debugobjects.o := n
+ KCOV_INSTRUMENT_dynamic_debug.o := n
+ 
++# Early boot use of cmdline, don't instrument it
++ifdef CONFIG_AMD_MEM_ENCRYPT
++KASAN_SANITIZE_string.o := n
++
++ifdef CONFIG_FUNCTION_TRACER
++CFLAGS_REMOVE_string.o = -pg
++endif
++
++CFLAGS_string.o := $(call cc-option, -fno-stack-protector)
++endif
++
+ lib-y := ctype.o string.o vsprintf.o cmdline.o \
+ 	 rbtree.o radix-tree.o timerqueue.o xarray.o \
+ 	 idr.o int_sqrt.o extable.o \
+diff --git a/mm/gup.c b/mm/gup.c
+index 81e0bdefa2cc..1a42b4367c3a 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -1811,7 +1811,7 @@ static void gup_pgd_range(unsigned long addr, unsigned long end,
+  * Check if it's allowed to use __get_user_pages_fast() for the range, or
+  * we need to fall back to the slow version:
+  */
+-bool gup_fast_permitted(unsigned long start, int nr_pages, int write)
++bool gup_fast_permitted(unsigned long start, int nr_pages)
+ {
+ 	unsigned long len, end;
+ 
+@@ -1853,7 +1853,7 @@ int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
+ 	 * block IPIs that come from THPs splitting.
+ 	 */
+ 
+-	if (gup_fast_permitted(start, nr_pages, write)) {
++	if (gup_fast_permitted(start, nr_pages)) {
+ 		local_irq_save(flags);
+ 		gup_pgd_range(start, end, write, pages, &nr);
+ 		local_irq_restore(flags);
+@@ -1895,7 +1895,7 @@ int get_user_pages_fast(unsigned long start, int nr_pages, int write,
+ 	if (unlikely(!access_ok((void __user *)start, len)))
+ 		return -EFAULT;
+ 
+-	if (gup_fast_permitted(start, nr_pages, write)) {
++	if (gup_fast_permitted(start, nr_pages)) {
+ 		local_irq_disable();
+ 		gup_pgd_range(addr, end, write, pages, &nr);
+ 		local_irq_enable();
+diff --git a/mm/mmap.c b/mm/mmap.c
+index da9236a5022e..446698476e4c 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -2736,9 +2736,17 @@ int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
+ 		return -EINVAL;
+ 
+ 	len = PAGE_ALIGN(len);
++	end = start + len;
+ 	if (len == 0)
+ 		return -EINVAL;
+ 
++	/*
++	 * arch_unmap() might do unmaps itself.  It must be called
++	 * and finish any rbtree manipulation before this code
++	 * runs and also starts to manipulate the rbtree.
++	 */
++	arch_unmap(mm, start, end);
++
+ 	/* Find the first overlapping VMA */
+ 	vma = find_vma(mm, start);
+ 	if (!vma)
+@@ -2747,7 +2755,6 @@ int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
+ 	/* we have  start < vma->vm_end  */
+ 
+ 	/* if it doesn't overlap, we have nothing.. */
+-	end = start + len;
+ 	if (vma->vm_start >= end)
+ 		return 0;
+ 
+@@ -2817,12 +2824,6 @@ int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
+ 	/* Detach vmas from rbtree */
+ 	detach_vmas_to_be_unmapped(mm, vma, prev, end);
+ 
+-	/*
+-	 * mpx unmap needs to be called with mmap_sem held for write.
+-	 * It is safe to call it before unmap_region().
+-	 */
+-	arch_unmap(mm, vma, start, end);
+-
+ 	if (downgrade)
+ 		downgrade_write(&mm->mmap_sem);
+ 
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 7277dd393c00..c8e672ac32cb 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -8829,7 +8829,7 @@ static void netdev_wait_allrefs(struct net_device *dev)
+ 
+ 		refcnt = netdev_refcnt_read(dev);
+ 
+-		if (time_after(jiffies, warning_time + 10 * HZ)) {
++		if (refcnt && time_after(jiffies, warning_time + 10 * HZ)) {
+ 			pr_emerg("unregister_netdevice: waiting for %s to become free. Usage count = %d\n",
+ 				 dev->name, refcnt);
+ 			warning_time = jiffies;
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 5ea1bed08ede..fd449017c55e 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -1502,14 +1502,15 @@ static int put_master_ifindex(struct sk_buff *skb, struct net_device *dev)
+ 	return ret;
+ }
+ 
+-static int nla_put_iflink(struct sk_buff *skb, const struct net_device *dev)
++static int nla_put_iflink(struct sk_buff *skb, const struct net_device *dev,
++			  bool force)
+ {
+ 	int ifindex = dev_get_iflink(dev);
+ 
+-	if (dev->ifindex == ifindex)
+-		return 0;
++	if (force || dev->ifindex != ifindex)
++		return nla_put_u32(skb, IFLA_LINK, ifindex);
+ 
+-	return nla_put_u32(skb, IFLA_LINK, ifindex);
++	return 0;
+ }
+ 
+ static noinline_for_stack int nla_put_ifalias(struct sk_buff *skb,
+@@ -1526,6 +1527,8 @@ static int rtnl_fill_link_netnsid(struct sk_buff *skb,
+ 				  const struct net_device *dev,
+ 				  struct net *src_net)
+ {
++	bool put_iflink = false;
++
+ 	if (dev->rtnl_link_ops && dev->rtnl_link_ops->get_link_net) {
+ 		struct net *link_net = dev->rtnl_link_ops->get_link_net(dev);
+ 
+@@ -1534,10 +1537,12 @@ static int rtnl_fill_link_netnsid(struct sk_buff *skb,
+ 
+ 			if (nla_put_s32(skb, IFLA_LINK_NETNSID, id))
+ 				return -EMSGSIZE;
++
++			put_iflink = true;
+ 		}
+ 	}
+ 
+-	return 0;
++	return nla_put_iflink(skb, dev, put_iflink);
+ }
+ 
+ static int rtnl_fill_link_af(struct sk_buff *skb,
+@@ -1623,7 +1628,6 @@ static int rtnl_fill_ifinfo(struct sk_buff *skb,
+ #ifdef CONFIG_RPS
+ 	    nla_put_u32(skb, IFLA_NUM_RX_QUEUES, dev->num_rx_queues) ||
+ #endif
+-	    nla_put_iflink(skb, dev) ||
+ 	    put_master_ifindex(skb, dev) ||
+ 	    nla_put_u8(skb, IFLA_CARRIER, netif_carrier_ok(dev)) ||
+ 	    (dev->qdisc &&
+diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
+index 10e809b296ec..fb065a8937ea 100644
+--- a/net/ipv4/esp4.c
++++ b/net/ipv4/esp4.c
+@@ -226,7 +226,7 @@ static void esp_output_fill_trailer(u8 *tail, int tfclen, int plen, __u8 proto)
+ 	tail[plen - 1] = proto;
+ }
+ 
+-static void esp_output_udp_encap(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *esp)
++static int esp_output_udp_encap(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *esp)
+ {
+ 	int encap_type;
+ 	struct udphdr *uh;
+@@ -234,6 +234,7 @@ static void esp_output_udp_encap(struct xfrm_state *x, struct sk_buff *skb, stru
+ 	__be16 sport, dport;
+ 	struct xfrm_encap_tmpl *encap = x->encap;
+ 	struct ip_esp_hdr *esph = esp->esph;
++	unsigned int len;
+ 
+ 	spin_lock_bh(&x->lock);
+ 	sport = encap->encap_sport;
+@@ -241,11 +242,14 @@ static void esp_output_udp_encap(struct xfrm_state *x, struct sk_buff *skb, stru
+ 	encap_type = encap->encap_type;
+ 	spin_unlock_bh(&x->lock);
+ 
++	len = skb->len + esp->tailen - skb_transport_offset(skb);
++	if (len + sizeof(struct iphdr) >= IP_MAX_MTU)
++		return -EMSGSIZE;
++
+ 	uh = (struct udphdr *)esph;
+ 	uh->source = sport;
+ 	uh->dest = dport;
+-	uh->len = htons(skb->len + esp->tailen
+-		  - skb_transport_offset(skb));
++	uh->len = htons(len);
+ 	uh->check = 0;
+ 
+ 	switch (encap_type) {
+@@ -262,6 +266,8 @@ static void esp_output_udp_encap(struct xfrm_state *x, struct sk_buff *skb, stru
+ 
+ 	*skb_mac_header(skb) = IPPROTO_UDP;
+ 	esp->esph = esph;
++
++	return 0;
+ }
+ 
+ int esp_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *esp)
+@@ -275,8 +281,12 @@ int esp_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *
+ 	int tailen = esp->tailen;
+ 
+ 	/* this is non-NULL only with UDP Encapsulation */
+-	if (x->encap)
+-		esp_output_udp_encap(x, skb, esp);
++	if (x->encap) {
++		int err = esp_output_udp_encap(x, skb, esp);
++
++		if (err < 0)
++			return err;
++	}
+ 
+ 	if (!skb_cloned(skb)) {
+ 		if (tailen <= skb_tailroom(skb)) {
+diff --git a/net/ipv4/esp4_offload.c b/net/ipv4/esp4_offload.c
+index 8756e0e790d2..d3170a8001b2 100644
+--- a/net/ipv4/esp4_offload.c
++++ b/net/ipv4/esp4_offload.c
+@@ -52,13 +52,13 @@ static struct sk_buff *esp4_gro_receive(struct list_head *head,
+ 			goto out;
+ 
+ 		if (sp->len == XFRM_MAX_DEPTH)
+-			goto out;
++			goto out_reset;
+ 
+ 		x = xfrm_state_lookup(dev_net(skb->dev), skb->mark,
+ 				      (xfrm_address_t *)&ip_hdr(skb)->daddr,
+ 				      spi, IPPROTO_ESP, AF_INET);
+ 		if (!x)
+-			goto out;
++			goto out_reset;
+ 
+ 		sp->xvec[sp->len++] = x;
+ 		sp->olen++;
+@@ -66,7 +66,7 @@ static struct sk_buff *esp4_gro_receive(struct list_head *head,
+ 		xo = xfrm_offload(skb);
+ 		if (!xo) {
+ 			xfrm_state_put(x);
+-			goto out;
++			goto out_reset;
+ 		}
+ 	}
+ 
+@@ -82,6 +82,8 @@ static struct sk_buff *esp4_gro_receive(struct list_head *head,
+ 	xfrm_input(skb, IPPROTO_ESP, spi, -2);
+ 
+ 	return ERR_PTR(-EINPROGRESS);
++out_reset:
++	secpath_reset(skb);
+ out:
+ 	skb_push(skb, offset);
+ 	NAPI_GRO_CB(skb)->same_flow = 0;
+diff --git a/net/ipv4/ip_vti.c b/net/ipv4/ip_vti.c
+index 68a21bf75dd0..b6235ca09fa5 100644
+--- a/net/ipv4/ip_vti.c
++++ b/net/ipv4/ip_vti.c
+@@ -659,9 +659,9 @@ static int __init vti_init(void)
+ 	return err;
+ 
+ rtnl_link_failed:
+-	xfrm4_protocol_deregister(&vti_ipcomp4_protocol, IPPROTO_COMP);
+-xfrm_tunnel_failed:
+ 	xfrm4_tunnel_deregister(&ipip_handler, AF_INET);
++xfrm_tunnel_failed:
++	xfrm4_protocol_deregister(&vti_ipcomp4_protocol, IPPROTO_COMP);
+ xfrm_proto_comp_failed:
+ 	xfrm4_protocol_deregister(&vti_ah4_protocol, IPPROTO_AH);
+ xfrm_proto_ah_failed:
+@@ -676,6 +676,7 @@ pernet_dev_failed:
+ static void __exit vti_fini(void)
+ {
+ 	rtnl_link_unregister(&vti_link_ops);
++	xfrm4_tunnel_deregister(&ipip_handler, AF_INET);
+ 	xfrm4_protocol_deregister(&vti_ipcomp4_protocol, IPPROTO_COMP);
+ 	xfrm4_protocol_deregister(&vti_ah4_protocol, IPPROTO_AH);
+ 	xfrm4_protocol_deregister(&vti_esp4_protocol, IPPROTO_ESP);
+diff --git a/net/ipv4/xfrm4_policy.c b/net/ipv4/xfrm4_policy.c
+index d73a6d6652f6..2b144b92ae46 100644
+--- a/net/ipv4/xfrm4_policy.c
++++ b/net/ipv4/xfrm4_policy.c
+@@ -111,7 +111,8 @@ static void
+ _decode_session4(struct sk_buff *skb, struct flowi *fl, int reverse)
+ {
+ 	const struct iphdr *iph = ip_hdr(skb);
+-	u8 *xprth = skb_network_header(skb) + iph->ihl * 4;
++	int ihl = iph->ihl;
++	u8 *xprth = skb_network_header(skb) + ihl * 4;
+ 	struct flowi4 *fl4 = &fl->u.ip4;
+ 	int oif = 0;
+ 
+@@ -122,6 +123,11 @@ _decode_session4(struct sk_buff *skb, struct flowi *fl, int reverse)
+ 	fl4->flowi4_mark = skb->mark;
+ 	fl4->flowi4_oif = reverse ? skb->skb_iif : oif;
+ 
++	fl4->flowi4_proto = iph->protocol;
++	fl4->daddr = reverse ? iph->saddr : iph->daddr;
++	fl4->saddr = reverse ? iph->daddr : iph->saddr;
++	fl4->flowi4_tos = iph->tos;
++
+ 	if (!ip_is_fragment(iph)) {
+ 		switch (iph->protocol) {
+ 		case IPPROTO_UDP:
+@@ -133,7 +139,7 @@ _decode_session4(struct sk_buff *skb, struct flowi *fl, int reverse)
+ 			    pskb_may_pull(skb, xprth + 4 - skb->data)) {
+ 				__be16 *ports;
+ 
+-				xprth = skb_network_header(skb) + iph->ihl * 4;
++				xprth = skb_network_header(skb) + ihl * 4;
+ 				ports = (__be16 *)xprth;
+ 
+ 				fl4->fl4_sport = ports[!!reverse];
+@@ -146,7 +152,7 @@ _decode_session4(struct sk_buff *skb, struct flowi *fl, int reverse)
+ 			    pskb_may_pull(skb, xprth + 2 - skb->data)) {
+ 				u8 *icmp;
+ 
+-				xprth = skb_network_header(skb) + iph->ihl * 4;
++				xprth = skb_network_header(skb) + ihl * 4;
+ 				icmp = xprth;
+ 
+ 				fl4->fl4_icmp_type = icmp[0];
+@@ -159,7 +165,7 @@ _decode_session4(struct sk_buff *skb, struct flowi *fl, int reverse)
+ 			    pskb_may_pull(skb, xprth + 4 - skb->data)) {
+ 				__be32 *ehdr;
+ 
+-				xprth = skb_network_header(skb) + iph->ihl * 4;
++				xprth = skb_network_header(skb) + ihl * 4;
+ 				ehdr = (__be32 *)xprth;
+ 
+ 				fl4->fl4_ipsec_spi = ehdr[0];
+@@ -171,7 +177,7 @@ _decode_session4(struct sk_buff *skb, struct flowi *fl, int reverse)
+ 			    pskb_may_pull(skb, xprth + 8 - skb->data)) {
+ 				__be32 *ah_hdr;
+ 
+-				xprth = skb_network_header(skb) + iph->ihl * 4;
++				xprth = skb_network_header(skb) + ihl * 4;
+ 				ah_hdr = (__be32 *)xprth;
+ 
+ 				fl4->fl4_ipsec_spi = ah_hdr[1];
+@@ -183,7 +189,7 @@ _decode_session4(struct sk_buff *skb, struct flowi *fl, int reverse)
+ 			    pskb_may_pull(skb, xprth + 4 - skb->data)) {
+ 				__be16 *ipcomp_hdr;
+ 
+-				xprth = skb_network_header(skb) + iph->ihl * 4;
++				xprth = skb_network_header(skb) + ihl * 4;
+ 				ipcomp_hdr = (__be16 *)xprth;
+ 
+ 				fl4->fl4_ipsec_spi = htonl(ntohs(ipcomp_hdr[1]));
+@@ -196,7 +202,7 @@ _decode_session4(struct sk_buff *skb, struct flowi *fl, int reverse)
+ 				__be16 *greflags;
+ 				__be32 *gre_hdr;
+ 
+-				xprth = skb_network_header(skb) + iph->ihl * 4;
++				xprth = skb_network_header(skb) + ihl * 4;
+ 				greflags = (__be16 *)xprth;
+ 				gre_hdr = (__be32 *)xprth;
+ 
+@@ -213,10 +219,6 @@ _decode_session4(struct sk_buff *skb, struct flowi *fl, int reverse)
+ 			break;
+ 		}
+ 	}
+-	fl4->flowi4_proto = iph->protocol;
+-	fl4->daddr = reverse ? iph->saddr : iph->daddr;
+-	fl4->saddr = reverse ? iph->daddr : iph->saddr;
+-	fl4->flowi4_tos = iph->tos;
+ }
+ 
+ static void xfrm4_update_pmtu(struct dst_entry *dst, struct sock *sk,
+diff --git a/net/ipv6/esp6_offload.c b/net/ipv6/esp6_offload.c
+index d46b4eb645c2..cb99f6fb79b7 100644
+--- a/net/ipv6/esp6_offload.c
++++ b/net/ipv6/esp6_offload.c
+@@ -74,13 +74,13 @@ static struct sk_buff *esp6_gro_receive(struct list_head *head,
+ 			goto out;
+ 
+ 		if (sp->len == XFRM_MAX_DEPTH)
+-			goto out;
++			goto out_reset;
+ 
+ 		x = xfrm_state_lookup(dev_net(skb->dev), skb->mark,
+ 				      (xfrm_address_t *)&ipv6_hdr(skb)->daddr,
+ 				      spi, IPPROTO_ESP, AF_INET6);
+ 		if (!x)
+-			goto out;
++			goto out_reset;
+ 
+ 		sp->xvec[sp->len++] = x;
+ 		sp->olen++;
+@@ -88,7 +88,7 @@ static struct sk_buff *esp6_gro_receive(struct list_head *head,
+ 		xo = xfrm_offload(skb);
+ 		if (!xo) {
+ 			xfrm_state_put(x);
+-			goto out;
++			goto out_reset;
+ 		}
+ 	}
+ 
+@@ -109,6 +109,8 @@ static struct sk_buff *esp6_gro_receive(struct list_head *head,
+ 	xfrm_input(skb, IPPROTO_ESP, spi, -2);
+ 
+ 	return ERR_PTR(-EINPROGRESS);
++out_reset:
++	secpath_reset(skb);
+ out:
+ 	skb_push(skb, offset);
+ 	NAPI_GRO_CB(skb)->same_flow = 0;
+diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
+index 91247a6fc67f..9915f64b38a0 100644
+--- a/net/ipv6/ip6_fib.c
++++ b/net/ipv6/ip6_fib.c
+@@ -909,6 +909,12 @@ static void fib6_drop_pcpu_from(struct fib6_info *f6i,
+ {
+ 	int cpu;
+ 
++	/* Make sure rt6_make_pcpu_route() wont add other percpu routes
++	 * while we are cleaning them here.
++	 */
++	f6i->fib6_destroying = 1;
++	mb(); /* paired with the cmpxchg() in rt6_make_pcpu_route() */
++
+ 	/* release the reference to this fib entry from
+ 	 * all of its cached pcpu routes
+ 	 */
+@@ -932,6 +938,9 @@ static void fib6_purge_rt(struct fib6_info *rt, struct fib6_node *fn,
+ {
+ 	struct fib6_table *table = rt->fib6_table;
+ 
++	if (rt->rt6i_pcpu)
++		fib6_drop_pcpu_from(rt, table);
++
+ 	if (atomic_read(&rt->fib6_ref) != 1) {
+ 		/* This route is used as dummy address holder in some split
+ 		 * nodes. It is not leaked, but it still holds other resources,
+@@ -953,9 +962,6 @@ static void fib6_purge_rt(struct fib6_info *rt, struct fib6_node *fn,
+ 			fn = rcu_dereference_protected(fn->parent,
+ 				    lockdep_is_held(&table->tb6_lock));
+ 		}
+-
+-		if (rt->rt6i_pcpu)
+-			fib6_drop_pcpu_from(rt, table);
+ 	}
+ }
+ 
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 59c90bba048c..b471afce1330 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -110,8 +110,8 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ 			 int iif, int type, u32 portid, u32 seq,
+ 			 unsigned int flags);
+ static struct rt6_info *rt6_find_cached_rt(struct fib6_info *rt,
+-					   struct in6_addr *daddr,
+-					   struct in6_addr *saddr);
++					   const struct in6_addr *daddr,
++					   const struct in6_addr *saddr);
+ 
+ #ifdef CONFIG_IPV6_ROUTE_INFO
+ static struct fib6_info *rt6_add_route_info(struct net *net,
+@@ -1260,6 +1260,13 @@ static struct rt6_info *rt6_make_pcpu_route(struct net *net,
+ 	prev = cmpxchg(p, NULL, pcpu_rt);
+ 	BUG_ON(prev);
+ 
++	if (rt->fib6_destroying) {
++		struct fib6_info *from;
++
++		from = xchg((__force struct fib6_info **)&pcpu_rt->from, NULL);
++		fib6_info_release(from);
++	}
++
+ 	return pcpu_rt;
+ }
+ 
+@@ -1529,31 +1536,44 @@ out:
+  * Caller has to hold rcu_read_lock()
+  */
+ static struct rt6_info *rt6_find_cached_rt(struct fib6_info *rt,
+-					   struct in6_addr *daddr,
+-					   struct in6_addr *saddr)
++					   const struct in6_addr *daddr,
++					   const struct in6_addr *saddr)
+ {
++	const struct in6_addr *src_key = NULL;
+ 	struct rt6_exception_bucket *bucket;
+-	struct in6_addr *src_key = NULL;
+ 	struct rt6_exception *rt6_ex;
+ 	struct rt6_info *res = NULL;
+ 
+-	bucket = rcu_dereference(rt->rt6i_exception_bucket);
+-
+ #ifdef CONFIG_IPV6_SUBTREES
+ 	/* rt6i_src.plen != 0 indicates rt is in subtree
+ 	 * and exception table is indexed by a hash of
+ 	 * both rt6i_dst and rt6i_src.
+-	 * Otherwise, the exception table is indexed by
+-	 * a hash of only rt6i_dst.
++	 * However, the src addr used to create the hash
++	 * might not be exactly the passed in saddr which
++	 * is a /128 addr from the flow.
++	 * So we need to use f6i->fib6_src to redo lookup
++	 * if the passed in saddr does not find anything.
++	 * (See the logic in ip6_rt_cache_alloc() on how
++	 * rt->rt6i_src is updated.)
+ 	 */
+ 	if (rt->fib6_src.plen)
+ 		src_key = saddr;
++find_ex:
+ #endif
++	bucket = rcu_dereference(rt->rt6i_exception_bucket);
+ 	rt6_ex = __rt6_find_exception_rcu(&bucket, daddr, src_key);
+ 
+ 	if (rt6_ex && !rt6_check_expired(rt6_ex->rt6i))
+ 		res = rt6_ex->rt6i;
+ 
++#ifdef CONFIG_IPV6_SUBTREES
++	/* Use fib6_src as src_key and redo lookup */
++	if (!res && src_key && src_key != &rt->fib6_src.addr) {
++		src_key = &rt->fib6_src.addr;
++		goto find_ex;
++	}
++#endif
++
+ 	return res;
+ }
+ 
+@@ -2614,10 +2634,8 @@ out:
+ u32 ip6_mtu_from_fib6(struct fib6_info *f6i, struct in6_addr *daddr,
+ 		      struct in6_addr *saddr)
+ {
+-	struct rt6_exception_bucket *bucket;
+-	struct rt6_exception *rt6_ex;
+-	struct in6_addr *src_key;
+ 	struct inet6_dev *idev;
++	struct rt6_info *rt;
+ 	u32 mtu = 0;
+ 
+ 	if (unlikely(fib6_metric_locked(f6i, RTAX_MTU))) {
+@@ -2626,18 +2644,10 @@ u32 ip6_mtu_from_fib6(struct fib6_info *f6i, struct in6_addr *daddr,
+ 			goto out;
+ 	}
+ 
+-	src_key = NULL;
+-#ifdef CONFIG_IPV6_SUBTREES
+-	if (f6i->fib6_src.plen)
+-		src_key = saddr;
+-#endif
+-
+-	bucket = rcu_dereference(f6i->rt6i_exception_bucket);
+-	rt6_ex = __rt6_find_exception_rcu(&bucket, daddr, src_key);
+-	if (rt6_ex && !rt6_check_expired(rt6_ex->rt6i))
+-		mtu = dst_metric_raw(&rt6_ex->rt6i->dst, RTAX_MTU);
+-
+-	if (likely(!mtu)) {
++	rt = rt6_find_cached_rt(f6i, daddr, saddr);
++	if (unlikely(rt)) {
++		mtu = dst_metric_raw(&rt->dst, RTAX_MTU);
++	} else {
+ 		struct net_device *dev = fib6_info_nh_dev(f6i);
+ 
+ 		mtu = IPV6_MIN_MTU;
+diff --git a/net/ipv6/xfrm6_tunnel.c b/net/ipv6/xfrm6_tunnel.c
+index bc65db782bfb..d9e5f6808811 100644
+--- a/net/ipv6/xfrm6_tunnel.c
++++ b/net/ipv6/xfrm6_tunnel.c
+@@ -345,7 +345,7 @@ static void __net_exit xfrm6_tunnel_net_exit(struct net *net)
+ 	unsigned int i;
+ 
+ 	xfrm_flush_gc();
+-	xfrm_state_flush(net, IPSEC_PROTO_ANY, false, true);
++	xfrm_state_flush(net, 0, false, true);
+ 
+ 	for (i = 0; i < XFRM6_TUNNEL_SPI_BYADDR_HSIZE; i++)
+ 		WARN_ON_ONCE(!hlist_empty(&xfrm6_tn->spi_byaddr[i]));
+@@ -402,6 +402,10 @@ static void __exit xfrm6_tunnel_fini(void)
+ 	xfrm6_tunnel_deregister(&xfrm6_tunnel_handler, AF_INET6);
+ 	xfrm_unregister_type(&xfrm6_tunnel_type, AF_INET6);
+ 	unregister_pernet_subsys(&xfrm6_tunnel_net_ops);
++	/* Someone maybe has gotten the xfrm6_tunnel_spi.
++	 * So need to wait it.
++	 */
++	rcu_barrier();
+ 	kmem_cache_destroy(xfrm6_tunnel_spi_kmem);
+ }
+ 
+diff --git a/net/key/af_key.c b/net/key/af_key.c
+index 5651c29cb5bd..4af1e1d60b9f 100644
+--- a/net/key/af_key.c
++++ b/net/key/af_key.c
+@@ -1951,8 +1951,10 @@ parse_ipsecrequest(struct xfrm_policy *xp, struct sadb_x_ipsecrequest *rq)
+ 
+ 	if (rq->sadb_x_ipsecrequest_mode == 0)
+ 		return -EINVAL;
++	if (!xfrm_id_proto_valid(rq->sadb_x_ipsecrequest_proto))
++		return -EINVAL;
+ 
+-	t->id.proto = rq->sadb_x_ipsecrequest_proto; /* XXX check proto */
++	t->id.proto = rq->sadb_x_ipsecrequest_proto;
+ 	if ((mode = pfkey_mode_to_xfrm(rq->sadb_x_ipsecrequest_mode)) < 0)
+ 		return -EINVAL;
+ 	t->mode = mode;
+diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
+index 4a6ff1482a9f..02d2e6f11e93 100644
+--- a/net/mac80211/iface.c
++++ b/net/mac80211/iface.c
+@@ -1908,6 +1908,9 @@ void ieee80211_if_remove(struct ieee80211_sub_if_data *sdata)
+ 	list_del_rcu(&sdata->list);
+ 	mutex_unlock(&sdata->local->iflist_mtx);
+ 
++	if (sdata->vif.txq)
++		ieee80211_txq_purge(sdata->local, to_txq_info(sdata->vif.txq));
++
+ 	synchronize_rcu();
+ 
+ 	if (sdata->dev) {
+diff --git a/net/tipc/core.c b/net/tipc/core.c
+index 5b38f5164281..d7b0688c98dd 100644
+--- a/net/tipc/core.c
++++ b/net/tipc/core.c
+@@ -66,6 +66,10 @@ static int __net_init tipc_init_net(struct net *net)
+ 	INIT_LIST_HEAD(&tn->node_list);
+ 	spin_lock_init(&tn->node_list_lock);
+ 
++	err = tipc_socket_init();
++	if (err)
++		goto out_socket;
++
+ 	err = tipc_sk_rht_init(net);
+ 	if (err)
+ 		goto out_sk_rht;
+@@ -92,6 +96,8 @@ out_subscr:
+ out_nametbl:
+ 	tipc_sk_rht_destroy(net);
+ out_sk_rht:
++	tipc_socket_stop();
++out_socket:
+ 	return err;
+ }
+ 
+@@ -102,6 +108,7 @@ static void __net_exit tipc_exit_net(struct net *net)
+ 	tipc_bcast_stop(net);
+ 	tipc_nametbl_stop(net);
+ 	tipc_sk_rht_destroy(net);
++	tipc_socket_stop();
+ }
+ 
+ static struct pernet_operations tipc_net_ops = {
+@@ -129,10 +136,6 @@ static int __init tipc_init(void)
+ 	if (err)
+ 		goto out_netlink_compat;
+ 
+-	err = tipc_socket_init();
+-	if (err)
+-		goto out_socket;
+-
+ 	err = tipc_register_sysctl();
+ 	if (err)
+ 		goto out_sysctl;
+@@ -152,8 +155,6 @@ out_bearer:
+ out_pernet:
+ 	tipc_unregister_sysctl();
+ out_sysctl:
+-	tipc_socket_stop();
+-out_socket:
+ 	tipc_netlink_compat_stop();
+ out_netlink_compat:
+ 	tipc_netlink_stop();
+@@ -168,7 +169,6 @@ static void __exit tipc_exit(void)
+ 	unregister_pernet_subsys(&tipc_net_ops);
+ 	tipc_netlink_stop();
+ 	tipc_netlink_compat_stop();
+-	tipc_socket_stop();
+ 	tipc_unregister_sysctl();
+ 
+ 	pr_info("Deactivated\n");
+diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
+index 15eb5d3d4750..96ab344f17bb 100644
+--- a/net/vmw_vsock/virtio_transport.c
++++ b/net/vmw_vsock/virtio_transport.c
+@@ -702,28 +702,27 @@ static int __init virtio_vsock_init(void)
+ 	if (!virtio_vsock_workqueue)
+ 		return -ENOMEM;
+ 
+-	ret = register_virtio_driver(&virtio_vsock_driver);
++	ret = vsock_core_init(&virtio_transport.transport);
+ 	if (ret)
+ 		goto out_wq;
+ 
+-	ret = vsock_core_init(&virtio_transport.transport);
++	ret = register_virtio_driver(&virtio_vsock_driver);
+ 	if (ret)
+-		goto out_vdr;
++		goto out_vci;
+ 
+ 	return 0;
+ 
+-out_vdr:
+-	unregister_virtio_driver(&virtio_vsock_driver);
++out_vci:
++	vsock_core_exit();
+ out_wq:
+ 	destroy_workqueue(virtio_vsock_workqueue);
+ 	return ret;
+-
+ }
+ 
+ static void __exit virtio_vsock_exit(void)
+ {
+-	vsock_core_exit();
+ 	unregister_virtio_driver(&virtio_vsock_driver);
++	vsock_core_exit();
+ 	destroy_workqueue(virtio_vsock_workqueue);
+ }
+ 
+diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
+index 602715fc9a75..f3f3d06cb6d8 100644
+--- a/net/vmw_vsock/virtio_transport_common.c
++++ b/net/vmw_vsock/virtio_transport_common.c
+@@ -786,12 +786,19 @@ static bool virtio_transport_close(struct vsock_sock *vsk)
+ 
+ void virtio_transport_release(struct vsock_sock *vsk)
+ {
++	struct virtio_vsock_sock *vvs = vsk->trans;
++	struct virtio_vsock_pkt *pkt, *tmp;
+ 	struct sock *sk = &vsk->sk;
+ 	bool remove_sock = true;
+ 
+ 	lock_sock(sk);
+ 	if (sk->sk_type == SOCK_STREAM)
+ 		remove_sock = virtio_transport_close(vsk);
++
++	list_for_each_entry_safe(pkt, tmp, &vvs->rx_queue, list) {
++		list_del(&pkt->list);
++		virtio_transport_free_pkt(pkt);
++	}
+ 	release_sock(sk);
+ 
+ 	if (remove_sock)
+diff --git a/net/xfrm/xfrm_interface.c b/net/xfrm/xfrm_interface.c
+index dbb3c1945b5c..85fec98676d3 100644
+--- a/net/xfrm/xfrm_interface.c
++++ b/net/xfrm/xfrm_interface.c
+@@ -70,17 +70,28 @@ static struct xfrm_if *xfrmi_lookup(struct net *net, struct xfrm_state *x)
+ 	return NULL;
+ }
+ 
+-static struct xfrm_if *xfrmi_decode_session(struct sk_buff *skb)
++static struct xfrm_if *xfrmi_decode_session(struct sk_buff *skb,
++					    unsigned short family)
+ {
+ 	struct xfrmi_net *xfrmn;
+-	int ifindex;
+ 	struct xfrm_if *xi;
++	int ifindex = 0;
+ 
+ 	if (!secpath_exists(skb) || !skb->dev)
+ 		return NULL;
+ 
++	switch (family) {
++	case AF_INET6:
++		ifindex = inet6_sdif(skb);
++		break;
++	case AF_INET:
++		ifindex = inet_sdif(skb);
++		break;
++	}
++	if (!ifindex)
++		ifindex = skb->dev->ifindex;
++
+ 	xfrmn = net_generic(xs_net(xfrm_input_state(skb)), xfrmi_net_id);
+-	ifindex = skb->dev->ifindex;
+ 
+ 	for_each_xfrmi_rcu(xfrmn->xfrmi[0], xi) {
+ 		if (ifindex == xi->dev->ifindex &&
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index 8d1a898d0ba5..a6b58df7a70f 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -3313,7 +3313,7 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb,
+ 	ifcb = xfrm_if_get_cb();
+ 
+ 	if (ifcb) {
+-		xi = ifcb->decode_session(skb);
++		xi = ifcb->decode_session(skb, family);
+ 		if (xi) {
+ 			if_id = xi->p.if_id;
+ 			net = xi->net;
+diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
+index 1bb971f46fc6..178baaa037e5 100644
+--- a/net/xfrm/xfrm_state.c
++++ b/net/xfrm/xfrm_state.c
+@@ -2384,7 +2384,7 @@ void xfrm_state_fini(struct net *net)
+ 
+ 	flush_work(&net->xfrm.state_hash_work);
+ 	flush_work(&xfrm_state_gc_work);
+-	xfrm_state_flush(net, IPSEC_PROTO_ANY, false, true);
++	xfrm_state_flush(net, 0, false, true);
+ 
+ 	WARN_ON(!list_empty(&net->xfrm.state_all));
+ 
+diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
+index a131f9ff979e..6916931b1de1 100644
+--- a/net/xfrm/xfrm_user.c
++++ b/net/xfrm/xfrm_user.c
+@@ -1424,7 +1424,7 @@ static int verify_newpolicy_info(struct xfrm_userpolicy_info *p)
+ 	ret = verify_policy_dir(p->dir);
+ 	if (ret)
+ 		return ret;
+-	if (p->index && ((p->index & XFRM_POLICY_MAX) != p->dir))
++	if (p->index && (xfrm_policy_id2dir(p->index) != p->dir))
+ 		return -EINVAL;
+ 
+ 	return 0;
+@@ -1513,20 +1513,8 @@ static int validate_tmpl(int nr, struct xfrm_user_tmpl *ut, u16 family)
+ 			return -EINVAL;
+ 		}
+ 
+-		switch (ut[i].id.proto) {
+-		case IPPROTO_AH:
+-		case IPPROTO_ESP:
+-		case IPPROTO_COMP:
+-#if IS_ENABLED(CONFIG_IPV6)
+-		case IPPROTO_ROUTING:
+-		case IPPROTO_DSTOPTS:
+-#endif
+-		case IPSEC_PROTO_ANY:
+-			break;
+-		default:
++		if (!xfrm_id_proto_valid(ut[i].id.proto))
+ 			return -EINVAL;
+-		}
+-
+ 	}
+ 
+ 	return 0;
+diff --git a/scripts/gcc-plugins/arm_ssp_per_task_plugin.c b/scripts/gcc-plugins/arm_ssp_per_task_plugin.c
+index 89c47f57d1ce..8c1af9bdcb1b 100644
+--- a/scripts/gcc-plugins/arm_ssp_per_task_plugin.c
++++ b/scripts/gcc-plugins/arm_ssp_per_task_plugin.c
+@@ -36,7 +36,7 @@ static unsigned int arm_pertask_ssp_rtl_execute(void)
+ 		mask = GEN_INT(sext_hwi(sp_mask, GET_MODE_PRECISION(Pmode)));
+ 		masked_sp = gen_reg_rtx(Pmode);
+ 
+-		emit_insn_before(gen_rtx_SET(masked_sp,
++		emit_insn_before(gen_rtx_set(masked_sp,
+ 					     gen_rtx_AND(Pmode,
+ 							 stack_pointer_rtx,
+ 							 mask)),
+diff --git a/security/apparmor/apparmorfs.c b/security/apparmor/apparmorfs.c
+index 3f80a684c232..665853dd517c 100644
+--- a/security/apparmor/apparmorfs.c
++++ b/security/apparmor/apparmorfs.c
+@@ -123,17 +123,22 @@ static int aafs_show_path(struct seq_file *seq, struct dentry *dentry)
+ 	return 0;
+ }
+ 
+-static void aafs_evict_inode(struct inode *inode)
++static void aafs_i_callback(struct rcu_head *head)
+ {
+-	truncate_inode_pages_final(&inode->i_data);
+-	clear_inode(inode);
++	struct inode *inode = container_of(head, struct inode, i_rcu);
+ 	if (S_ISLNK(inode->i_mode))
+ 		kfree(inode->i_link);
++	free_inode_nonrcu(inode);
++}
++
++static void aafs_destroy_inode(struct inode *inode)
++{
++	call_rcu(&inode->i_rcu, aafs_i_callback);
+ }
+ 
+ static const struct super_operations aafs_super_ops = {
+ 	.statfs = simple_statfs,
+-	.evict_inode = aafs_evict_inode,
++	.destroy_inode = aafs_destroy_inode,
+ 	.show_path = aafs_show_path,
+ };
+ 
+diff --git a/security/inode.c b/security/inode.c
+index b7772a9b315e..421dd72b5876 100644
+--- a/security/inode.c
++++ b/security/inode.c
+@@ -27,17 +27,22 @@
+ static struct vfsmount *mount;
+ static int mount_count;
+ 
+-static void securityfs_evict_inode(struct inode *inode)
++static void securityfs_i_callback(struct rcu_head *head)
+ {
+-	truncate_inode_pages_final(&inode->i_data);
+-	clear_inode(inode);
++	struct inode *inode = container_of(head, struct inode, i_rcu);
+ 	if (S_ISLNK(inode->i_mode))
+ 		kfree(inode->i_link);
++	free_inode_nonrcu(inode);
++}
++
++static void securityfs_destroy_inode(struct inode *inode)
++{
++	call_rcu(&inode->i_rcu, securityfs_i_callback);
+ }
+ 
+ static const struct super_operations securityfs_super_operations = {
+ 	.statfs		= simple_statfs,
+-	.evict_inode	= securityfs_evict_inode,
++	.destroy_inode	= securityfs_destroy_inode,
+ };
+ 
+ static int fill_super(struct super_block *sb, void *data, int silent)
+diff --git a/tools/bpf/bpftool/map.c b/tools/bpf/bpftool/map.c
+index 1ef1ee2280a2..227766d9f43b 100644
+--- a/tools/bpf/bpftool/map.c
++++ b/tools/bpf/bpftool/map.c
+@@ -1111,6 +1111,9 @@ static int do_create(int argc, char **argv)
+ 				return -1;
+ 			}
+ 			NEXT_ARG();
++		} else {
++			p_err("unknown arg %s", *argv);
++			return -1;
+ 		}
+ 	}
+ 
+diff --git a/tools/objtool/Makefile b/tools/objtool/Makefile
+index 53f8be0f4a1f..88158239622b 100644
+--- a/tools/objtool/Makefile
++++ b/tools/objtool/Makefile
+@@ -7,11 +7,12 @@ ARCH := x86
+ endif
+ 
+ # always use the host compiler
++HOSTAR	?= ar
+ HOSTCC	?= gcc
+ HOSTLD	?= ld
++AR	 = $(HOSTAR)
+ CC	 = $(HOSTCC)
+ LD	 = $(HOSTLD)
+-AR	 = ar
+ 
+ ifeq ($(srctree),)
+ srctree := $(patsubst %/,%,$(dir $(CURDIR)))
+diff --git a/tools/perf/bench/numa.c b/tools/perf/bench/numa.c
+index 44195514b19e..fa56fde6e8d8 100644
+--- a/tools/perf/bench/numa.c
++++ b/tools/perf/bench/numa.c
+@@ -38,6 +38,10 @@
+ #include <numa.h>
+ #include <numaif.h>
+ 
++#ifndef RUSAGE_THREAD
++# define RUSAGE_THREAD 1
++#endif
++
+ /*
+  * Regular printout to the terminal, supressed if -q is specified:
+  */
+diff --git a/tools/perf/util/cs-etm.c b/tools/perf/util/cs-etm.c
+index 27a374ddf661..947f1bb2fbdf 100644
+--- a/tools/perf/util/cs-etm.c
++++ b/tools/perf/util/cs-etm.c
+@@ -345,11 +345,9 @@ static struct cs_etm_queue *cs_etm__alloc_queue(struct cs_etm_auxtrace *etm,
+ 	if (!etmq->packet)
+ 		goto out_free;
+ 
+-	if (etm->synth_opts.last_branch || etm->sample_branches) {
+-		etmq->prev_packet = zalloc(szp);
+-		if (!etmq->prev_packet)
+-			goto out_free;
+-	}
++	etmq->prev_packet = zalloc(szp);
++	if (!etmq->prev_packet)
++		goto out_free;
+ 
+ 	if (etm->synth_opts.last_branch) {
+ 		size_t sz = sizeof(struct branch_stack);
+diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+index 7c0b975dd2f0..73fc4abee302 100644
+--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
++++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+@@ -58,6 +58,7 @@ enum intel_pt_pkt_state {
+ 	INTEL_PT_STATE_NO_IP,
+ 	INTEL_PT_STATE_ERR_RESYNC,
+ 	INTEL_PT_STATE_IN_SYNC,
++	INTEL_PT_STATE_TNT_CONT,
+ 	INTEL_PT_STATE_TNT,
+ 	INTEL_PT_STATE_TIP,
+ 	INTEL_PT_STATE_TIP_PGD,
+@@ -72,8 +73,9 @@ static inline bool intel_pt_sample_time(enum intel_pt_pkt_state pkt_state)
+ 	case INTEL_PT_STATE_NO_IP:
+ 	case INTEL_PT_STATE_ERR_RESYNC:
+ 	case INTEL_PT_STATE_IN_SYNC:
+-	case INTEL_PT_STATE_TNT:
++	case INTEL_PT_STATE_TNT_CONT:
+ 		return true;
++	case INTEL_PT_STATE_TNT:
+ 	case INTEL_PT_STATE_TIP:
+ 	case INTEL_PT_STATE_TIP_PGD:
+ 	case INTEL_PT_STATE_FUP:
+@@ -888,16 +890,20 @@ static uint64_t intel_pt_next_period(struct intel_pt_decoder *decoder)
+ 	timestamp = decoder->timestamp + decoder->timestamp_insn_cnt;
+ 	masked_timestamp = timestamp & decoder->period_mask;
+ 	if (decoder->continuous_period) {
+-		if (masked_timestamp != decoder->last_masked_timestamp)
++		if (masked_timestamp > decoder->last_masked_timestamp)
+ 			return 1;
+ 	} else {
+ 		timestamp += 1;
+ 		masked_timestamp = timestamp & decoder->period_mask;
+-		if (masked_timestamp != decoder->last_masked_timestamp) {
++		if (masked_timestamp > decoder->last_masked_timestamp) {
+ 			decoder->last_masked_timestamp = masked_timestamp;
+ 			decoder->continuous_period = true;
+ 		}
+ 	}
++
++	if (masked_timestamp < decoder->last_masked_timestamp)
++		return decoder->period_ticks;
++
+ 	return decoder->period_ticks - (timestamp - masked_timestamp);
+ }
+ 
+@@ -926,7 +932,10 @@ static void intel_pt_sample_insn(struct intel_pt_decoder *decoder)
+ 	case INTEL_PT_PERIOD_TICKS:
+ 		timestamp = decoder->timestamp + decoder->timestamp_insn_cnt;
+ 		masked_timestamp = timestamp & decoder->period_mask;
+-		decoder->last_masked_timestamp = masked_timestamp;
++		if (masked_timestamp > decoder->last_masked_timestamp)
++			decoder->last_masked_timestamp = masked_timestamp;
++		else
++			decoder->last_masked_timestamp += decoder->period_ticks;
+ 		break;
+ 	case INTEL_PT_PERIOD_NONE:
+ 	case INTEL_PT_PERIOD_MTC:
+@@ -1254,7 +1263,9 @@ static int intel_pt_walk_tnt(struct intel_pt_decoder *decoder)
+ 				return -ENOENT;
+ 			}
+ 			decoder->tnt.count -= 1;
+-			if (!decoder->tnt.count)
++			if (decoder->tnt.count)
++				decoder->pkt_state = INTEL_PT_STATE_TNT_CONT;
++			else
+ 				decoder->pkt_state = INTEL_PT_STATE_IN_SYNC;
+ 			decoder->tnt.payload <<= 1;
+ 			decoder->state.from_ip = decoder->ip;
+@@ -1285,7 +1296,9 @@ static int intel_pt_walk_tnt(struct intel_pt_decoder *decoder)
+ 
+ 		if (intel_pt_insn.branch == INTEL_PT_BR_CONDITIONAL) {
+ 			decoder->tnt.count -= 1;
+-			if (!decoder->tnt.count)
++			if (decoder->tnt.count)
++				decoder->pkt_state = INTEL_PT_STATE_TNT_CONT;
++			else
+ 				decoder->pkt_state = INTEL_PT_STATE_IN_SYNC;
+ 			if (decoder->tnt.payload & BIT63) {
+ 				decoder->tnt.payload <<= 1;
+@@ -1305,8 +1318,11 @@ static int intel_pt_walk_tnt(struct intel_pt_decoder *decoder)
+ 				return 0;
+ 			}
+ 			decoder->ip += intel_pt_insn.length;
+-			if (!decoder->tnt.count)
++			if (!decoder->tnt.count) {
++				decoder->sample_timestamp = decoder->timestamp;
++				decoder->sample_insn_cnt = decoder->timestamp_insn_cnt;
+ 				return -EAGAIN;
++			}
+ 			decoder->tnt.payload <<= 1;
+ 			continue;
+ 		}
+@@ -2365,6 +2381,7 @@ const struct intel_pt_state *intel_pt_decode(struct intel_pt_decoder *decoder)
+ 			err = intel_pt_walk_trace(decoder);
+ 			break;
+ 		case INTEL_PT_STATE_TNT:
++		case INTEL_PT_STATE_TNT_CONT:
+ 			err = intel_pt_walk_tnt(decoder);
+ 			if (err == -EAGAIN)
+ 				err = intel_pt_walk_trace(decoder);
+diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
+index 4715cfba20dc..93f99c6b7d79 100644
+--- a/tools/testing/selftests/kvm/dirty_log_test.c
++++ b/tools/testing/selftests/kvm/dirty_log_test.c
+@@ -288,8 +288,11 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
+ #endif
+ 	max_gfn = (1ul << (guest_pa_bits - guest_page_shift)) - 1;
+ 	guest_page_size = (1ul << guest_page_shift);
+-	/* 1G of guest page sized pages */
+-	guest_num_pages = (1ul << (30 - guest_page_shift));
++	/*
++	 * A little more than 1G of guest page sized pages.  Cover the
++	 * case where the size is not aligned to 64 pages.
++	 */
++	guest_num_pages = (1ul << (30 - guest_page_shift)) + 3;
+ 	host_page_size = getpagesize();
+ 	host_num_pages = (guest_num_pages * guest_page_size) / host_page_size +
+ 			 !!((guest_num_pages * guest_page_size) % host_page_size);
+@@ -359,7 +362,7 @@ static void run_test(enum vm_guest_mode mode, unsigned long iterations,
+ 		kvm_vm_get_dirty_log(vm, TEST_MEM_SLOT_INDEX, bmap);
+ #ifdef USE_CLEAR_DIRTY_LOG
+ 		kvm_vm_clear_dirty_log(vm, TEST_MEM_SLOT_INDEX, bmap, 0,
+-				       DIV_ROUND_UP(host_num_pages, 64) * 64);
++				       host_num_pages);
+ #endif
+ 		vm_dirty_log_verify(bmap);
+ 		iteration++;
+diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_cpuid.c b/tools/testing/selftests/kvm/x86_64/hyperv_cpuid.c
+index 264425f75806..9a21e912097c 100644
+--- a/tools/testing/selftests/kvm/x86_64/hyperv_cpuid.c
++++ b/tools/testing/selftests/kvm/x86_64/hyperv_cpuid.c
+@@ -141,7 +141,13 @@ int main(int argc, char *argv[])
+ 
+ 	free(hv_cpuid_entries);
+ 
+-	vcpu_ioctl(vm, VCPU_ID, KVM_ENABLE_CAP, &enable_evmcs_cap);
++	rv = _vcpu_ioctl(vm, VCPU_ID, KVM_ENABLE_CAP, &enable_evmcs_cap);
++
++	if (rv) {
++		fprintf(stderr,
++			"Enlightened VMCS is unsupported, skip related test\n");
++		goto vm_free;
++	}
+ 
+ 	hv_cpuid_entries = kvm_get_supported_hv_cpuid(vm);
+ 	if (!hv_cpuid_entries)
+@@ -151,6 +157,7 @@ int main(int argc, char *argv[])
+ 
+ 	free(hv_cpuid_entries);
+ 
++vm_free:
+ 	kvm_vm_free(vm);
+ 
+ 	return 0;
+diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
+index 9c486fad3f9f..6202b4f718ce 100644
+--- a/virt/kvm/arm/arm.c
++++ b/virt/kvm/arm/arm.c
+@@ -949,7 +949,7 @@ int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_level,
+ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
+ 			       const struct kvm_vcpu_init *init)
+ {
+-	unsigned int i;
++	unsigned int i, ret;
+ 	int phys_target = kvm_target_cpu();
+ 
+ 	if (init->target != phys_target)
+@@ -984,9 +984,14 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
+ 	vcpu->arch.target = phys_target;
+ 
+ 	/* Now we know what it is, we can reset it. */
+-	return kvm_reset_vcpu(vcpu);
+-}
++	ret = kvm_reset_vcpu(vcpu);
++	if (ret) {
++		vcpu->arch.target = -1;
++		bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES);
++	}
+ 
++	return ret;
++}
+ 
+ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
+ 					 struct kvm_vcpu_init *init)
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index b5238bcba72c..4cc0d8a46891 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -1241,7 +1241,7 @@ int kvm_clear_dirty_log_protect(struct kvm *kvm,
+ 	if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_USER_MEM_SLOTS)
+ 		return -EINVAL;
+ 
+-	if ((log->first_page & 63) || (log->num_pages & 63))
++	if (log->first_page & 63)
+ 		return -EINVAL;
+ 
+ 	slots = __kvm_memslots(kvm, as_id);
+@@ -1254,8 +1254,9 @@ int kvm_clear_dirty_log_protect(struct kvm *kvm,
+ 	n = ALIGN(log->num_pages, BITS_PER_LONG) / 8;
+ 
+ 	if (log->first_page > memslot->npages ||
+-	    log->num_pages > memslot->npages - log->first_page)
+-			return -EINVAL;
++	    log->num_pages > memslot->npages - log->first_page ||
++	    (log->num_pages < memslot->npages - log->first_page && (log->num_pages & 63)))
++	    return -EINVAL;
+ 
+ 	*flush = false;
+ 	dirty_bitmap_buffer = kvm_second_dirty_bitmap(memslot);



* [gentoo-commits] proj/linux-patches:5.0 commit in: /
@ 2019-05-31 14:03 Mike Pagano
  0 siblings, 0 replies; 28+ messages in thread
From: Mike Pagano @ 2019-05-31 14:03 UTC (permalink / raw
  To: gentoo-commits

commit:     e62379d875e51b478db75a6675ecb6180f41ebbc
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri May 31 14:03:07 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri May 31 14:03:07 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e62379d8

Linux patch 5.0.20

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |     4 +
 1019_linux-5.0.20.patch | 11613 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 11617 insertions(+)

diff --git a/0000_README b/0000_README
index 599546c..cf5191b 100644
--- a/0000_README
+++ b/0000_README
@@ -119,6 +119,10 @@ Patch:  1018_linux-5.0.19.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.0.19
 
+Patch:  1019_linux-5.0.20.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.0.20
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1019_linux-5.0.20.patch b/1019_linux-5.0.20.patch
new file mode 100644
index 0000000..d10f5ca
--- /dev/null
+++ b/1019_linux-5.0.20.patch
@@ -0,0 +1,11613 @@
+diff --git a/Documentation/arm64/silicon-errata.txt b/Documentation/arm64/silicon-errata.txt
+index ddb8ce5333ba..7a7e271be3f1 100644
+--- a/Documentation/arm64/silicon-errata.txt
++++ b/Documentation/arm64/silicon-errata.txt
+@@ -61,6 +61,7 @@ stable kernels.
+ | ARM            | Cortex-A76      | #1188873        | ARM64_ERRATUM_1188873       |
+ | ARM            | Cortex-A76      | #1165522        | ARM64_ERRATUM_1165522       |
+ | ARM            | Cortex-A76      | #1286807        | ARM64_ERRATUM_1286807       |
++| ARM            | Cortex-A76      | #1463225        | ARM64_ERRATUM_1463225       |
+ | ARM            | MMU-500         | #841119,#826419 | N/A                         |
+ |                |                 |                 |                             |
+ | Cavium         | ThunderX ITS    | #22375, #24313  | CAVIUM_ERRATUM_22375        |
+diff --git a/Documentation/devicetree/bindings/phy/qcom-qmp-phy.txt b/Documentation/devicetree/bindings/phy/qcom-qmp-phy.txt
+index 41a1074228ba..6b6ca4456dc7 100644
+--- a/Documentation/devicetree/bindings/phy/qcom-qmp-phy.txt
++++ b/Documentation/devicetree/bindings/phy/qcom-qmp-phy.txt
+@@ -53,7 +53,8 @@ Required properties:
+ 	   one for each entry in reset-names.
+  - reset-names: "phy" for reset of phy block,
+ 		"common" for phy common block reset,
+-		"cfg" for phy's ahb cfg block reset.
++		"cfg" for phy's ahb cfg block reset,
++		"ufsphy" for the PHY reset in the UFS controller.
+ 
+ 		For "qcom,ipq8074-qmp-pcie-phy" must contain:
+ 			"phy", "common".
+@@ -65,7 +66,8 @@ Required properties:
+ 			"phy", "common".
+ 		For "qcom,sdm845-qmp-usb3-uni-phy" must contain:
+ 			"phy", "common".
+-		For "qcom,sdm845-qmp-ufs-phy": no resets are listed.
++		For "qcom,sdm845-qmp-ufs-phy": must contain:
++			"ufsphy".
+ 
+  - vdda-phy-supply: Phandle to a regulator supply to PHY core block.
+  - vdda-pll-supply: Phandle to 1.8V regulator supply to PHY refclk pll block.
+diff --git a/Makefile b/Makefile
+index 66efffc3fb41..25390977536b 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 0
+-SUBLEVEL = 19
++SUBLEVEL = 20
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h
+index 07e27f212dc7..d2453e2d3f1f 100644
+--- a/arch/arm/include/asm/cp15.h
++++ b/arch/arm/include/asm/cp15.h
+@@ -68,6 +68,8 @@
+ #define BPIALL				__ACCESS_CP15(c7, 0, c5, 6)
+ #define ICIALLU				__ACCESS_CP15(c7, 0, c5, 0)
+ 
++#define CNTVCT				__ACCESS_CP15_64(1, c14)
++
+ extern unsigned long cr_alignment;	/* defined in entry-armv.S */
+ 
+ static inline unsigned long get_cr(void)
+diff --git a/arch/arm/vdso/vgettimeofday.c b/arch/arm/vdso/vgettimeofday.c
+index a9dd619c6c29..7bdbf5d5c47d 100644
+--- a/arch/arm/vdso/vgettimeofday.c
++++ b/arch/arm/vdso/vgettimeofday.c
+@@ -18,9 +18,9 @@
+ #include <linux/compiler.h>
+ #include <linux/hrtimer.h>
+ #include <linux/time.h>
+-#include <asm/arch_timer.h>
+ #include <asm/barrier.h>
+ #include <asm/bug.h>
++#include <asm/cp15.h>
+ #include <asm/page.h>
+ #include <asm/unistd.h>
+ #include <asm/vdso_datapage.h>
+@@ -123,7 +123,8 @@ static notrace u64 get_ns(struct vdso_data *vdata)
+ 	u64 cycle_now;
+ 	u64 nsec;
+ 
+-	cycle_now = arch_counter_get_cntvct();
++	isb();
++	cycle_now = read_sysreg(CNTVCT);
+ 
+ 	cycle_delta = (cycle_now - vdata->cs_cycle_last) & vdata->cs_mask;
+ 
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index a4168d366127..4535b2b48fd9 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -518,6 +518,24 @@ config ARM64_ERRATUM_1286807
+ 
+ 	  If unsure, say Y.
+ 
++config ARM64_ERRATUM_1463225
++	bool "Cortex-A76: Software Step might prevent interrupt recognition"
++	default y
++	help
++	  This option adds a workaround for Arm Cortex-A76 erratum 1463225.
++
++	  On the affected Cortex-A76 cores (r0p0 to r3p1), software stepping
++	  of a system call instruction (SVC) can prevent recognition of
++	  subsequent interrupts when software stepping is disabled in the
++	  exception handler of the system call and either kernel debugging
++	  is enabled or VHE is in use.
++
++	  Work around the erratum by triggering a dummy step exception
++	  when handling a system call from a task that is being stepped
++	  in a VHE configuration of the kernel.
++
++	  If unsure, say Y.
++
+ config CAVIUM_ERRATUM_22375
+ 	bool "Cavium erratum 22375, 24313"
+ 	default y
+diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
+index 82e9099834ae..99db8de83734 100644
+--- a/arch/arm64/include/asm/cpucaps.h
++++ b/arch/arm64/include/asm/cpucaps.h
+@@ -60,7 +60,8 @@
+ #define ARM64_HAS_ADDRESS_AUTH_IMP_DEF		39
+ #define ARM64_HAS_GENERIC_AUTH_ARCH		40
+ #define ARM64_HAS_GENERIC_AUTH_IMP_DEF		41
++#define ARM64_WORKAROUND_1463225		42
+ 
+-#define ARM64_NCAPS				42
++#define ARM64_NCAPS				43
+ 
+ #endif /* __ASM_CPUCAPS_H */
+diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
+index de70c1eabf33..74ebe9693714 100644
+--- a/arch/arm64/include/asm/pgtable.h
++++ b/arch/arm64/include/asm/pgtable.h
+@@ -478,6 +478,8 @@ static inline phys_addr_t pmd_page_paddr(pmd_t pmd)
+ 	return __pmd_to_phys(pmd);
+ }
+ 
++static inline void pte_unmap(pte_t *pte) { }
++
+ /* Find an entry in the third-level page table. */
+ #define pte_index(addr)		(((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
+ 
+@@ -486,7 +488,6 @@ static inline phys_addr_t pmd_page_paddr(pmd_t pmd)
+ 
+ #define pte_offset_map(dir,addr)	pte_offset_kernel((dir), (addr))
+ #define pte_offset_map_nested(dir,addr)	pte_offset_kernel((dir), (addr))
+-#define pte_unmap(pte)			do { } while (0)
+ #define pte_unmap_nested(pte)		do { } while (0)
+ 
+ #define pte_set_fixmap(addr)		((pte_t *)set_fixmap_offset(FIX_PTE, addr))
+diff --git a/arch/arm64/include/asm/vdso_datapage.h b/arch/arm64/include/asm/vdso_datapage.h
+index 2b9a63771eda..f89263c8e11a 100644
+--- a/arch/arm64/include/asm/vdso_datapage.h
++++ b/arch/arm64/include/asm/vdso_datapage.h
+@@ -38,6 +38,7 @@ struct vdso_data {
+ 	__u32 tz_minuteswest;	/* Whacky timezone stuff */
+ 	__u32 tz_dsttime;
+ 	__u32 use_syscall;
++	__u32 hrtimer_res;
+ };
+ 
+ #endif /* !__ASSEMBLY__ */
+diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
+index 65b8afc84466..ddcd3ea87b81 100644
+--- a/arch/arm64/kernel/asm-offsets.c
++++ b/arch/arm64/kernel/asm-offsets.c
+@@ -102,7 +102,7 @@ int main(void)
+   DEFINE(CLOCK_REALTIME,	CLOCK_REALTIME);
+   DEFINE(CLOCK_MONOTONIC,	CLOCK_MONOTONIC);
+   DEFINE(CLOCK_MONOTONIC_RAW,	CLOCK_MONOTONIC_RAW);
+-  DEFINE(CLOCK_REALTIME_RES,	MONOTONIC_RES_NSEC);
++  DEFINE(CLOCK_REALTIME_RES,	offsetof(struct vdso_data, hrtimer_res));
+   DEFINE(CLOCK_REALTIME_COARSE,	CLOCK_REALTIME_COARSE);
+   DEFINE(CLOCK_MONOTONIC_COARSE,CLOCK_MONOTONIC_COARSE);
+   DEFINE(CLOCK_COARSE_RES,	LOW_RES_NSEC);
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index 9950bb0cbd52..87019cd73f22 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -464,6 +464,22 @@ out_printmsg:
+ }
+ #endif	/* CONFIG_ARM64_SSBD */
+ 
++#ifdef CONFIG_ARM64_ERRATUM_1463225
++DEFINE_PER_CPU(int, __in_cortex_a76_erratum_1463225_wa);
++
++static bool
++has_cortex_a76_erratum_1463225(const struct arm64_cpu_capabilities *entry,
++			       int scope)
++{
++	u32 midr = read_cpuid_id();
++	/* Cortex-A76 r0p0 - r3p1 */
++	struct midr_range range = MIDR_RANGE(MIDR_CORTEX_A76, 0, 0, 3, 1);
++
++	WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
++	return is_midr_in_range(midr, &range) && is_kernel_in_hyp_mode();
++}
++#endif
++
+ static void __maybe_unused
+ cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
+ {
+@@ -738,6 +754,14 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ 		.capability = ARM64_WORKAROUND_1165522,
+ 		ERRATA_MIDR_RANGE(MIDR_CORTEX_A76, 0, 0, 2, 0),
+ 	},
++#endif
++#ifdef CONFIG_ARM64_ERRATUM_1463225
++	{
++		.desc = "ARM erratum 1463225",
++		.capability = ARM64_WORKAROUND_1463225,
++		.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
++		.matches = has_cortex_a76_erratum_1463225,
++	},
+ #endif
+ 	{
+ 	}
+diff --git a/arch/arm64/kernel/cpu_ops.c b/arch/arm64/kernel/cpu_ops.c
+index ea001241bdd4..00f8b8612b69 100644
+--- a/arch/arm64/kernel/cpu_ops.c
++++ b/arch/arm64/kernel/cpu_ops.c
+@@ -85,6 +85,7 @@ static const char *__init cpu_read_enable_method(int cpu)
+ 				pr_err("%pOF: missing enable-method property\n",
+ 					dn);
+ 		}
++		of_node_put(dn);
+ 	} else {
+ 		enable_method = acpi_get_enable_method(cpu);
+ 		if (!enable_method) {
+diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
+index b09b6f75f759..06941c1fe418 100644
+--- a/arch/arm64/kernel/kaslr.c
++++ b/arch/arm64/kernel/kaslr.c
+@@ -145,15 +145,15 @@ u64 __init kaslr_early_init(u64 dt_phys)
+ 
+ 	if (IS_ENABLED(CONFIG_RANDOMIZE_MODULE_REGION_FULL)) {
+ 		/*
+-		 * Randomize the module region over a 4 GB window covering the
++		 * Randomize the module region over a 2 GB window covering the
+ 		 * kernel. This reduces the risk of modules leaking information
+ 		 * about the address of the kernel itself, but results in
+ 		 * branches between modules and the core kernel that are
+ 		 * resolved via PLTs. (Branches between modules will be
+ 		 * resolved normally.)
+ 		 */
+-		module_range = SZ_4G - (u64)(_end - _stext);
+-		module_alloc_base = max((u64)_end + offset - SZ_4G,
++		module_range = SZ_2G - (u64)(_end - _stext);
++		module_alloc_base = max((u64)_end + offset - SZ_2G,
+ 					(u64)MODULES_VADDR);
+ 	} else {
+ 		/*
+diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
+index f713e2fc4d75..1e418e69b58c 100644
+--- a/arch/arm64/kernel/module.c
++++ b/arch/arm64/kernel/module.c
+@@ -56,7 +56,7 @@ void *module_alloc(unsigned long size)
+ 		 * can simply omit this fallback in that case.
+ 		 */
+ 		p = __vmalloc_node_range(size, MODULE_ALIGN, module_alloc_base,
+-				module_alloc_base + SZ_4G, GFP_KERNEL,
++				module_alloc_base + SZ_2G, GFP_KERNEL,
+ 				PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
+ 				__builtin_return_address(0));
+ 
+diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
+index 5610ac01c1ec..871c739f060a 100644
+--- a/arch/arm64/kernel/syscall.c
++++ b/arch/arm64/kernel/syscall.c
+@@ -8,6 +8,7 @@
+ #include <linux/syscalls.h>
+ 
+ #include <asm/daifflags.h>
++#include <asm/debug-monitors.h>
+ #include <asm/fpsimd.h>
+ #include <asm/syscall.h>
+ #include <asm/thread_info.h>
+@@ -60,6 +61,35 @@ static inline bool has_syscall_work(unsigned long flags)
+ int syscall_trace_enter(struct pt_regs *regs);
+ void syscall_trace_exit(struct pt_regs *regs);
+ 
++#ifdef CONFIG_ARM64_ERRATUM_1463225
++DECLARE_PER_CPU(int, __in_cortex_a76_erratum_1463225_wa);
++
++static void cortex_a76_erratum_1463225_svc_handler(void)
++{
++	u32 reg, val;
++
++	if (!unlikely(test_thread_flag(TIF_SINGLESTEP)))
++		return;
++
++	if (!unlikely(this_cpu_has_cap(ARM64_WORKAROUND_1463225)))
++		return;
++
++	__this_cpu_write(__in_cortex_a76_erratum_1463225_wa, 1);
++	reg = read_sysreg(mdscr_el1);
++	val = reg | DBG_MDSCR_SS | DBG_MDSCR_KDE;
++	write_sysreg(val, mdscr_el1);
++	asm volatile("msr daifclr, #8");
++	isb();
++
++	/* We will have taken a single-step exception by this point */
++
++	write_sysreg(reg, mdscr_el1);
++	__this_cpu_write(__in_cortex_a76_erratum_1463225_wa, 0);
++}
++#else
++static void cortex_a76_erratum_1463225_svc_handler(void) { }
++#endif /* CONFIG_ARM64_ERRATUM_1463225 */
++
+ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
+ 			   const syscall_fn_t syscall_table[])
+ {
+@@ -68,6 +98,7 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
+ 	regs->orig_x0 = regs->regs[0];
+ 	regs->syscallno = scno;
+ 
++	cortex_a76_erratum_1463225_svc_handler();
+ 	local_daif_restore(DAIF_PROCCTX);
+ 	user_exit();
+ 
+diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
+index 2d419006ad43..ec0bb588d755 100644
+--- a/arch/arm64/kernel/vdso.c
++++ b/arch/arm64/kernel/vdso.c
+@@ -232,6 +232,9 @@ void update_vsyscall(struct timekeeper *tk)
+ 	vdso_data->wtm_clock_sec		= tk->wall_to_monotonic.tv_sec;
+ 	vdso_data->wtm_clock_nsec		= tk->wall_to_monotonic.tv_nsec;
+ 
++	/* Read without the seqlock held by clock_getres() */
++	WRITE_ONCE(vdso_data->hrtimer_res, hrtimer_resolution);
++
+ 	if (!use_syscall) {
+ 		/* tkr_mono.cycle_last == tkr_raw.cycle_last */
+ 		vdso_data->cs_cycle_last	= tk->tkr_mono.cycle_last;
+diff --git a/arch/arm64/kernel/vdso/gettimeofday.S b/arch/arm64/kernel/vdso/gettimeofday.S
+index e8f60112818f..856fee6d3512 100644
+--- a/arch/arm64/kernel/vdso/gettimeofday.S
++++ b/arch/arm64/kernel/vdso/gettimeofday.S
+@@ -308,13 +308,14 @@ ENTRY(__kernel_clock_getres)
+ 	ccmp	w0, #CLOCK_MONOTONIC_RAW, #0x4, ne
+ 	b.ne	1f
+ 
+-	ldr	x2, 5f
++	adr	vdso_data, _vdso_data
++	ldr	w2, [vdso_data, #CLOCK_REALTIME_RES]
+ 	b	2f
+ 1:
+ 	cmp	w0, #CLOCK_REALTIME_COARSE
+ 	ccmp	w0, #CLOCK_MONOTONIC_COARSE, #0x4, ne
+ 	b.ne	4f
+-	ldr	x2, 6f
++	ldr	x2, 5f
+ 2:
+ 	cbz	x1, 3f
+ 	stp	xzr, x2, [x1]
+@@ -328,8 +329,6 @@ ENTRY(__kernel_clock_getres)
+ 	svc	#0
+ 	ret
+ 5:
+-	.quad	CLOCK_REALTIME_RES
+-6:
+ 	.quad	CLOCK_COARSE_RES
+ 	.cfi_endproc
+ ENDPROC(__kernel_clock_getres)
+diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
+index 78c0a72f822c..674860e3e478 100644
+--- a/arch/arm64/mm/dma-mapping.c
++++ b/arch/arm64/mm/dma-mapping.c
+@@ -249,6 +249,11 @@ static int __iommu_mmap_attrs(struct device *dev, struct vm_area_struct *vma,
+ 	if (dma_mmap_from_dev_coherent(dev, vma, cpu_addr, size, &ret))
+ 		return ret;
+ 
++	if (!is_vmalloc_addr(cpu_addr)) {
++		unsigned long pfn = page_to_pfn(virt_to_page(cpu_addr));
++		return __swiotlb_mmap_pfn(vma, pfn, size);
++	}
++
+ 	if (attrs & DMA_ATTR_FORCE_CONTIGUOUS) {
+ 		/*
+ 		 * DMA_ATTR_FORCE_CONTIGUOUS allocations are always remapped,
+@@ -272,6 +277,11 @@ static int __iommu_get_sgtable(struct device *dev, struct sg_table *sgt,
+ 	unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
+ 	struct vm_struct *area = find_vm_area(cpu_addr);
+ 
++	if (!is_vmalloc_addr(cpu_addr)) {
++		struct page *page = virt_to_page(cpu_addr);
++		return __swiotlb_get_sgtable_page(sgt, page, size);
++	}
++
+ 	if (attrs & DMA_ATTR_FORCE_CONTIGUOUS) {
+ 		/*
+ 		 * DMA_ATTR_FORCE_CONTIGUOUS allocations are always remapped,
+diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
+index ef46925096f0..d3bdef0b2f60 100644
+--- a/arch/arm64/mm/fault.c
++++ b/arch/arm64/mm/fault.c
+@@ -824,14 +824,47 @@ void __init hook_debug_fault_code(int nr,
+ 	debug_fault_info[nr].name	= name;
+ }
+ 
++#ifdef CONFIG_ARM64_ERRATUM_1463225
++DECLARE_PER_CPU(int, __in_cortex_a76_erratum_1463225_wa);
++
++static int __exception
++cortex_a76_erratum_1463225_debug_handler(struct pt_regs *regs)
++{
++	if (user_mode(regs))
++		return 0;
++
++	if (!__this_cpu_read(__in_cortex_a76_erratum_1463225_wa))
++		return 0;
++
++	/*
++	 * We've taken a dummy step exception from the kernel to ensure
++	 * that interrupts are re-enabled on the syscall path. Return back
++	 * to cortex_a76_erratum_1463225_svc_handler() with debug exceptions
++	 * masked so that we can safely restore the mdscr and get on with
++	 * handling the syscall.
++	 */
++	regs->pstate |= PSR_D_BIT;
++	return 1;
++}
++#else
++static int __exception
++cortex_a76_erratum_1463225_debug_handler(struct pt_regs *regs)
++{
++	return 0;
++}
++#endif /* CONFIG_ARM64_ERRATUM_1463225 */
++
+ asmlinkage int __exception do_debug_exception(unsigned long addr_if_watchpoint,
+-					      unsigned int esr,
+-					      struct pt_regs *regs)
++					       unsigned int esr,
++					       struct pt_regs *regs)
+ {
+ 	const struct fault_info *inf = esr_to_debug_fault_info(esr);
+ 	unsigned long pc = instruction_pointer(regs);
+ 	int rv;
+ 
++	if (cortex_a76_erratum_1463225_debug_handler(regs))
++		return 0;
++
+ 	/*
+ 	 * Tell lockdep we disabled irqs in entry.S. Do nothing if they were
+ 	 * already disabled to preserve the last enabled/disabled addresses.
+diff --git a/arch/powerpc/boot/addnote.c b/arch/powerpc/boot/addnote.c
+index 9d9f6f334d3c..3da3e2b1b51b 100644
+--- a/arch/powerpc/boot/addnote.c
++++ b/arch/powerpc/boot/addnote.c
+@@ -223,7 +223,11 @@ main(int ac, char **av)
+ 	PUT_16(E_PHNUM, np + 2);
+ 
+ 	/* write back */
+-	lseek(fd, (long) 0, SEEK_SET);
++	i = lseek(fd, (long) 0, SEEK_SET);
++	if (i < 0) {
++		perror("lseek");
++		exit(1);
++	}
+ 	i = write(fd, buf, n);
+ 	if (i < 0) {
+ 		perror("write");
+diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
+index 4898e9491a1c..9168a247e24f 100644
+--- a/arch/powerpc/kernel/head_64.S
++++ b/arch/powerpc/kernel/head_64.S
+@@ -970,7 +970,9 @@ start_here_multiplatform:
+ 
+ 	/* Restore parameters passed from prom_init/kexec */
+ 	mr	r3,r31
+-	bl	early_setup		/* also sets r13 and SPRG_PACA */
++	LOAD_REG_ADDR(r12, DOTSYM(early_setup))
++	mtctr	r12
++	bctrl		/* also sets r13 and SPRG_PACA */
+ 
+ 	LOAD_REG_ADDR(r3, start_here_common)
+ 	ld	r4,PACAKMSR(r13)
+diff --git a/arch/powerpc/kernel/watchdog.c b/arch/powerpc/kernel/watchdog.c
+index 3c6ab22a0c4e..af3c15a1d41e 100644
+--- a/arch/powerpc/kernel/watchdog.c
++++ b/arch/powerpc/kernel/watchdog.c
+@@ -77,7 +77,7 @@ static u64 wd_smp_panic_timeout_tb __read_mostly; /* panic other CPUs */
+ 
+ static u64 wd_timer_period_ms __read_mostly;  /* interval between heartbeat */
+ 
+-static DEFINE_PER_CPU(struct timer_list, wd_timer);
++static DEFINE_PER_CPU(struct hrtimer, wd_hrtimer);
+ static DEFINE_PER_CPU(u64, wd_timer_tb);
+ 
+ /* SMP checker bits */
+@@ -293,21 +293,21 @@ out:
+ 	nmi_exit();
+ }
+ 
+-static void wd_timer_reset(unsigned int cpu, struct timer_list *t)
+-{
+-	t->expires = jiffies + msecs_to_jiffies(wd_timer_period_ms);
+-	if (wd_timer_period_ms > 1000)
+-		t->expires = __round_jiffies_up(t->expires, cpu);
+-	add_timer_on(t, cpu);
+-}
+-
+-static void wd_timer_fn(struct timer_list *t)
++static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
+ {
+ 	int cpu = smp_processor_id();
+ 
++	if (!(watchdog_enabled & NMI_WATCHDOG_ENABLED))
++		return HRTIMER_NORESTART;
++
++	if (!cpumask_test_cpu(cpu, &watchdog_cpumask))
++		return HRTIMER_NORESTART;
++
+ 	watchdog_timer_interrupt(cpu);
+ 
+-	wd_timer_reset(cpu, t);
++	hrtimer_forward_now(hrtimer, ms_to_ktime(wd_timer_period_ms));
++
++	return HRTIMER_RESTART;
+ }
+ 
+ void arch_touch_nmi_watchdog(void)
+@@ -323,37 +323,22 @@ void arch_touch_nmi_watchdog(void)
+ }
+ EXPORT_SYMBOL(arch_touch_nmi_watchdog);
+ 
+-static void start_watchdog_timer_on(unsigned int cpu)
+-{
+-	struct timer_list *t = per_cpu_ptr(&wd_timer, cpu);
+-
+-	per_cpu(wd_timer_tb, cpu) = get_tb();
+-
+-	timer_setup(t, wd_timer_fn, TIMER_PINNED);
+-	wd_timer_reset(cpu, t);
+-}
+-
+-static void stop_watchdog_timer_on(unsigned int cpu)
+-{
+-	struct timer_list *t = per_cpu_ptr(&wd_timer, cpu);
+-
+-	del_timer_sync(t);
+-}
+-
+-static int start_wd_on_cpu(unsigned int cpu)
++static void start_watchdog(void *arg)
+ {
++	struct hrtimer *hrtimer = this_cpu_ptr(&wd_hrtimer);
++	int cpu = smp_processor_id();
+ 	unsigned long flags;
+ 
+ 	if (cpumask_test_cpu(cpu, &wd_cpus_enabled)) {
+ 		WARN_ON(1);
+-		return 0;
++		return;
+ 	}
+ 
+ 	if (!(watchdog_enabled & NMI_WATCHDOG_ENABLED))
+-		return 0;
++		return;
+ 
+ 	if (!cpumask_test_cpu(cpu, &watchdog_cpumask))
+-		return 0;
++		return;
+ 
+ 	wd_smp_lock(&flags);
+ 	cpumask_set_cpu(cpu, &wd_cpus_enabled);
+@@ -363,27 +348,40 @@ static int start_wd_on_cpu(unsigned int cpu)
+ 	}
+ 	wd_smp_unlock(&flags);
+ 
+-	start_watchdog_timer_on(cpu);
++	*this_cpu_ptr(&wd_timer_tb) = get_tb();
+ 
+-	return 0;
++	hrtimer_init(hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
++	hrtimer->function = watchdog_timer_fn;
++	hrtimer_start(hrtimer, ms_to_ktime(wd_timer_period_ms),
++		      HRTIMER_MODE_REL_PINNED);
+ }
+ 
+-static int stop_wd_on_cpu(unsigned int cpu)
++static int start_watchdog_on_cpu(unsigned int cpu)
+ {
++	return smp_call_function_single(cpu, start_watchdog, NULL, true);
++}
++
++static void stop_watchdog(void *arg)
++{
++	struct hrtimer *hrtimer = this_cpu_ptr(&wd_hrtimer);
++	int cpu = smp_processor_id();
+ 	unsigned long flags;
+ 
+ 	if (!cpumask_test_cpu(cpu, &wd_cpus_enabled))
+-		return 0; /* Can happen in CPU unplug case */
++		return; /* Can happen in CPU unplug case */
+ 
+-	stop_watchdog_timer_on(cpu);
++	hrtimer_cancel(hrtimer);
+ 
+ 	wd_smp_lock(&flags);
+ 	cpumask_clear_cpu(cpu, &wd_cpus_enabled);
+ 	wd_smp_unlock(&flags);
+ 
+ 	wd_smp_clear_cpu_pending(cpu, get_tb());
++}
+ 
+-	return 0;
++static int stop_watchdog_on_cpu(unsigned int cpu)
++{
++	return smp_call_function_single(cpu, stop_watchdog, NULL, true);
+ }
+ 
+ static void watchdog_calc_timeouts(void)
+@@ -402,7 +400,7 @@ void watchdog_nmi_stop(void)
+ 	int cpu;
+ 
+ 	for_each_cpu(cpu, &wd_cpus_enabled)
+-		stop_wd_on_cpu(cpu);
++		stop_watchdog_on_cpu(cpu);
+ }
+ 
+ void watchdog_nmi_start(void)
+@@ -411,7 +409,7 @@ void watchdog_nmi_start(void)
+ 
+ 	watchdog_calc_timeouts();
+ 	for_each_cpu_and(cpu, cpu_online_mask, &watchdog_cpumask)
+-		start_wd_on_cpu(cpu);
++		start_watchdog_on_cpu(cpu);
+ }
+ 
+ /*
+@@ -423,7 +421,8 @@ int __init watchdog_nmi_probe(void)
+ 
+ 	err = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN,
+ 					"powerpc/watchdog:online",
+-					start_wd_on_cpu, stop_wd_on_cpu);
++					start_watchdog_on_cpu,
++					stop_watchdog_on_cpu);
+ 	if (err < 0) {
+ 		pr_warn("could not be initialized");
+ 		return err;
+diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
+index b5d1c45c1475..2a85a4bcc277 100644
+--- a/arch/powerpc/mm/numa.c
++++ b/arch/powerpc/mm/numa.c
+@@ -1494,6 +1494,9 @@ int start_topology_update(void)
+ {
+ 	int rc = 0;
+ 
++	if (!topology_updates_enabled)
++		return 0;
++
+ 	if (firmware_has_feature(FW_FEATURE_PRRN)) {
+ 		if (!prrn_enabled) {
+ 			prrn_enabled = 1;
+@@ -1527,6 +1530,9 @@ int stop_topology_update(void)
+ {
+ 	int rc = 0;
+ 
++	if (!topology_updates_enabled)
++		return 0;
++
+ 	if (prrn_enabled) {
+ 		prrn_enabled = 0;
+ #ifdef CONFIG_SMP
+@@ -1584,11 +1590,13 @@ static ssize_t topology_write(struct file *file, const char __user *buf,
+ 
+ 	kbuf[read_len] = '\0';
+ 
+-	if (!strncmp(kbuf, "on", 2))
++	if (!strncmp(kbuf, "on", 2)) {
++		topology_updates_enabled = true;
+ 		start_topology_update();
+-	else if (!strncmp(kbuf, "off", 3))
++	} else if (!strncmp(kbuf, "off", 3)) {
+ 		stop_topology_update();
+-	else
++		topology_updates_enabled = false;
++	} else
+ 		return -EINVAL;
+ 
+ 	return count;
+@@ -1603,9 +1611,7 @@ static const struct file_operations topology_ops = {
+ 
+ static int topology_update_init(void)
+ {
+-	/* Do not poll for changes if disabled at boot */
+-	if (topology_updates_enabled)
+-		start_topology_update();
++	start_topology_update();
+ 
+ 	if (vphn_enabled)
+ 		topology_schedule_update();
+diff --git a/arch/powerpc/perf/imc-pmu.c b/arch/powerpc/perf/imc-pmu.c
+index f292a3f284f1..d1009fe3130b 100644
+--- a/arch/powerpc/perf/imc-pmu.c
++++ b/arch/powerpc/perf/imc-pmu.c
+@@ -496,6 +496,11 @@ static int nest_imc_event_init(struct perf_event *event)
+ 	 * Get the base memory addresss for this cpu.
+ 	 */
+ 	chip_id = cpu_to_chip_id(event->cpu);
++
++	/* Return, if chip_id is not valid */
++	if (chip_id < 0)
++		return -ENODEV;
++
+ 	pcni = pmu->mem_info;
+ 	do {
+ 		if (pcni->id == chip_id) {
+@@ -503,7 +508,7 @@ static int nest_imc_event_init(struct perf_event *event)
+ 			break;
+ 		}
+ 		pcni++;
+-	} while (pcni);
++	} while (pcni->vbase != 0);
+ 
+ 	if (!flag)
+ 		return -ENODEV;
+diff --git a/arch/powerpc/platforms/powernv/opal-imc.c b/arch/powerpc/platforms/powernv/opal-imc.c
+index 58a07948c76e..3d27f02695e4 100644
+--- a/arch/powerpc/platforms/powernv/opal-imc.c
++++ b/arch/powerpc/platforms/powernv/opal-imc.c
+@@ -127,7 +127,7 @@ static int imc_get_mem_addr_nest(struct device_node *node,
+ 								nr_chips))
+ 		goto error;
+ 
+-	pmu_ptr->mem_info = kcalloc(nr_chips, sizeof(*pmu_ptr->mem_info),
++	pmu_ptr->mem_info = kcalloc(nr_chips + 1, sizeof(*pmu_ptr->mem_info),
+ 				    GFP_KERNEL);
+ 	if (!pmu_ptr->mem_info)
+ 		goto error;
+diff --git a/arch/s390/kernel/kexec_elf.c b/arch/s390/kernel/kexec_elf.c
+index 5a286b012043..602e7cc26d11 100644
+--- a/arch/s390/kernel/kexec_elf.c
++++ b/arch/s390/kernel/kexec_elf.c
+@@ -19,10 +19,15 @@ static int kexec_file_add_elf_kernel(struct kimage *image,
+ 	struct kexec_buf buf;
+ 	const Elf_Ehdr *ehdr;
+ 	const Elf_Phdr *phdr;
++	Elf_Addr entry;
+ 	int i, ret;
+ 
+ 	ehdr = (Elf_Ehdr *)kernel;
+ 	buf.image = image;
++	if (image->type == KEXEC_TYPE_CRASH)
++		entry = STARTUP_KDUMP_OFFSET;
++	else
++		entry = ehdr->e_entry;
+ 
+ 	phdr = (void *)ehdr + ehdr->e_phoff;
+ 	for (i = 0; i < ehdr->e_phnum; i++, phdr++) {
+@@ -35,7 +40,7 @@ static int kexec_file_add_elf_kernel(struct kimage *image,
+ 		buf.mem = ALIGN(phdr->p_paddr, phdr->p_align);
+ 		buf.memsz = phdr->p_memsz;
+ 
+-		if (phdr->p_paddr == 0) {
++		if (entry - phdr->p_paddr < phdr->p_memsz) {
+ 			data->kernel_buf = buf.buffer;
+ 			data->memsz += STARTUP_NORMAL_OFFSET;
+ 
+diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
+index f2cc7da473e4..ae894ac83fd6 100644
+--- a/arch/s390/mm/pgtable.c
++++ b/arch/s390/mm/pgtable.c
+@@ -410,6 +410,7 @@ static inline pmd_t pmdp_flush_lazy(struct mm_struct *mm,
+ 	return old;
+ }
+ 
++#ifdef CONFIG_PGSTE
+ static pmd_t *pmd_alloc_map(struct mm_struct *mm, unsigned long addr)
+ {
+ 	pgd_t *pgd;
+@@ -427,6 +428,7 @@ static pmd_t *pmd_alloc_map(struct mm_struct *mm, unsigned long addr)
+ 	pmd = pmd_alloc(mm, pud, addr);
+ 	return pmd;
+ }
++#endif
+ 
+ pmd_t pmdp_xchg_direct(struct mm_struct *mm, unsigned long addr,
+ 		       pmd_t *pmdp, pmd_t new)
+diff --git a/arch/sh/include/cpu-sh4/cpu/sh7786.h b/arch/sh/include/cpu-sh4/cpu/sh7786.h
+index 8f9bfbf3cdb1..d6cce65b4871 100644
+--- a/arch/sh/include/cpu-sh4/cpu/sh7786.h
++++ b/arch/sh/include/cpu-sh4/cpu/sh7786.h
+@@ -132,7 +132,7 @@ enum {
+ 
+ static inline u32 sh7786_mm_sel(void)
+ {
+-	return __raw_readl(0xFC400020) & 0x7;
++	return __raw_readl((const volatile void __iomem *)0xFC400020) & 0x7;
+ }
+ 
+ #endif /* __CPU_SH7786_H__ */
+diff --git a/arch/x86/Makefile b/arch/x86/Makefile
+index c0c7291d4ccf..2cf52617a1e7 100644
+--- a/arch/x86/Makefile
++++ b/arch/x86/Makefile
+@@ -47,7 +47,7 @@ export REALMODE_CFLAGS
+ export BITS
+ 
+ ifdef CONFIG_X86_NEED_RELOCS
+-        LDFLAGS_vmlinux := --emit-relocs
++        LDFLAGS_vmlinux := --emit-relocs --discard-none
+ endif
+ 
+ #
+diff --git a/arch/x86/events/intel/cstate.c b/arch/x86/events/intel/cstate.c
+index 56194c571299..4a650eb3d94a 100644
+--- a/arch/x86/events/intel/cstate.c
++++ b/arch/x86/events/intel/cstate.c
+@@ -584,6 +584,8 @@ static const struct x86_cpu_id intel_cstates_match[] __initconst = {
+ 	X86_CSTATES_MODEL(INTEL_FAM6_ATOM_GOLDMONT_X, glm_cstates),
+ 
+ 	X86_CSTATES_MODEL(INTEL_FAM6_ATOM_GOLDMONT_PLUS, glm_cstates),
++
++	X86_CSTATES_MODEL(INTEL_FAM6_ICELAKE_MOBILE, snb_cstates),
+ 	{ },
+ };
+ MODULE_DEVICE_TABLE(x86cpu, intel_cstates_match);
+diff --git a/arch/x86/events/intel/rapl.c b/arch/x86/events/intel/rapl.c
+index 91039ffed633..2413169ce362 100644
+--- a/arch/x86/events/intel/rapl.c
++++ b/arch/x86/events/intel/rapl.c
+@@ -780,6 +780,8 @@ static const struct x86_cpu_id rapl_cpu_match[] __initconst = {
+ 	X86_RAPL_MODEL_MATCH(INTEL_FAM6_ATOM_GOLDMONT_X, hsw_rapl_init),
+ 
+ 	X86_RAPL_MODEL_MATCH(INTEL_FAM6_ATOM_GOLDMONT_PLUS, hsw_rapl_init),
++
++	X86_RAPL_MODEL_MATCH(INTEL_FAM6_ICELAKE_MOBILE,  skl_rapl_init),
+ 	{},
+ };
+ 
+diff --git a/arch/x86/events/msr.c b/arch/x86/events/msr.c
+index 1b9f85abf9bc..ace6c1e752fb 100644
+--- a/arch/x86/events/msr.c
++++ b/arch/x86/events/msr.c
+@@ -89,6 +89,7 @@ static bool test_intel(int idx)
+ 	case INTEL_FAM6_SKYLAKE_X:
+ 	case INTEL_FAM6_KABYLAKE_MOBILE:
+ 	case INTEL_FAM6_KABYLAKE_DESKTOP:
++	case INTEL_FAM6_ICELAKE_MOBILE:
+ 		if (idx == PERF_MSR_SMI || idx == PERF_MSR_PPERF)
+ 			return true;
+ 		break;
+diff --git a/arch/x86/ia32/ia32_signal.c b/arch/x86/ia32/ia32_signal.c
+index 321fe5f5d0e9..4d5fcd47ab75 100644
+--- a/arch/x86/ia32/ia32_signal.c
++++ b/arch/x86/ia32/ia32_signal.c
+@@ -61,9 +61,8 @@
+ } while (0)
+ 
+ #define RELOAD_SEG(seg)		{		\
+-	unsigned int pre = GET_SEG(seg);	\
++	unsigned int pre = (seg) | 3;		\
+ 	unsigned int cur = get_user_seg(seg);	\
+-	pre |= 3;				\
+ 	if (pre != cur)				\
+ 		set_user_seg(seg, pre);		\
+ }
+@@ -72,6 +71,7 @@ static int ia32_restore_sigcontext(struct pt_regs *regs,
+ 				   struct sigcontext_32 __user *sc)
+ {
+ 	unsigned int tmpflags, err = 0;
++	u16 gs, fs, es, ds;
+ 	void __user *buf;
+ 	u32 tmp;
+ 
+@@ -79,16 +79,10 @@ static int ia32_restore_sigcontext(struct pt_regs *regs,
+ 	current->restart_block.fn = do_no_restart_syscall;
+ 
+ 	get_user_try {
+-		/*
+-		 * Reload fs and gs if they have changed in the signal
+-		 * handler.  This does not handle long fs/gs base changes in
+-		 * the handler, but does not clobber them at least in the
+-		 * normal case.
+-		 */
+-		RELOAD_SEG(gs);
+-		RELOAD_SEG(fs);
+-		RELOAD_SEG(ds);
+-		RELOAD_SEG(es);
++		gs = GET_SEG(gs);
++		fs = GET_SEG(fs);
++		ds = GET_SEG(ds);
++		es = GET_SEG(es);
+ 
+ 		COPY(di); COPY(si); COPY(bp); COPY(sp); COPY(bx);
+ 		COPY(dx); COPY(cx); COPY(ip); COPY(ax);
+@@ -106,6 +100,17 @@ static int ia32_restore_sigcontext(struct pt_regs *regs,
+ 		buf = compat_ptr(tmp);
+ 	} get_user_catch(err);
+ 
++	/*
++	 * Reload fs and gs if they have changed in the signal
++	 * handler.  This does not handle long fs/gs base changes in
++	 * the handler, but does not clobber them at least in the
++	 * normal case.
++	 */
++	RELOAD_SEG(gs);
++	RELOAD_SEG(fs);
++	RELOAD_SEG(ds);
++	RELOAD_SEG(es);
++
+ 	err |= fpu__restore_sig(buf, 1);
+ 
+ 	force_iret();
+diff --git a/arch/x86/include/asm/text-patching.h b/arch/x86/include/asm/text-patching.h
+index 05861cc08787..0bbb07eaed6b 100644
+--- a/arch/x86/include/asm/text-patching.h
++++ b/arch/x86/include/asm/text-patching.h
+@@ -39,6 +39,7 @@ extern int poke_int3_handler(struct pt_regs *regs);
+ extern void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
+ extern int after_bootmem;
+ 
++#ifndef CONFIG_UML_X86
+ static inline void int3_emulate_jmp(struct pt_regs *regs, unsigned long ip)
+ {
+ 	regs->ip = ip;
+@@ -65,6 +66,7 @@ static inline void int3_emulate_call(struct pt_regs *regs, unsigned long func)
+ 	int3_emulate_push(regs, regs->ip - INT3_INSN_SIZE + CALL_INSN_SIZE);
+ 	int3_emulate_jmp(regs, func);
+ }
+-#endif
++#endif /* CONFIG_X86_64 */
++#endif /* !CONFIG_UML_X86 */
+ 
+ #endif /* _ASM_X86_TEXT_PATCHING_H */
+diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
+index f3aed639dccd..2b0dd1b9c208 100644
+--- a/arch/x86/include/asm/uaccess.h
++++ b/arch/x86/include/asm/uaccess.h
+@@ -431,10 +431,11 @@ do {									\
+ ({								\
+ 	__label__ __pu_label;					\
+ 	int __pu_err = -EFAULT;					\
+-	__typeof__(*(ptr)) __pu_val;				\
+-	__pu_val = x;						\
++	__typeof__(*(ptr)) __pu_val = (x);			\
++	__typeof__(ptr) __pu_ptr = (ptr);			\
++	__typeof__(size) __pu_size = (size);			\
+ 	__uaccess_begin();					\
+-	__put_user_size(__pu_val, (ptr), (size), __pu_label);	\
++	__put_user_size(__pu_val, __pu_ptr, __pu_size, __pu_label);	\
+ 	__pu_err = 0;						\
+ __pu_label:							\
+ 	__uaccess_end();					\
+diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
+index ebeac487a20c..2db985513917 100644
+--- a/arch/x86/kernel/alternative.c
++++ b/arch/x86/kernel/alternative.c
+@@ -666,15 +666,29 @@ void __init alternative_instructions(void)
+  * handlers seeing an inconsistent instruction while you patch.
+  */
+ void *__init_or_module text_poke_early(void *addr, const void *opcode,
+-					      size_t len)
++				       size_t len)
+ {
+ 	unsigned long flags;
+-	local_irq_save(flags);
+-	memcpy(addr, opcode, len);
+-	local_irq_restore(flags);
+-	sync_core();
+-	/* Could also do a CLFLUSH here to speed up CPU recovery; but
+-	   that causes hangs on some VIA CPUs. */
++
++	if (boot_cpu_has(X86_FEATURE_NX) &&
++	    is_module_text_address((unsigned long)addr)) {
++		/*
++		 * Modules text is marked initially as non-executable, so the
++		 * code cannot be running and speculative code-fetches are
++		 * prevented. Just change the code.
++		 */
++		memcpy(addr, opcode, len);
++	} else {
++		local_irq_save(flags);
++		memcpy(addr, opcode, len);
++		local_irq_restore(flags);
++		sync_core();
++
++		/*
++		 * Could also do a CLFLUSH here to speed up CPU recovery; but
++		 * that causes hangs on some VIA CPUs.
++		 */
++	}
+ 	return addr;
+ }
+ 
+diff --git a/arch/x86/kernel/cpu/hygon.c b/arch/x86/kernel/cpu/hygon.c
+index cf25405444ab..415621ddb8a2 100644
+--- a/arch/x86/kernel/cpu/hygon.c
++++ b/arch/x86/kernel/cpu/hygon.c
+@@ -19,6 +19,8 @@
+ 
+ #include "cpu.h"
+ 
++#define APICID_SOCKET_ID_BIT 6
++
+ /*
+  * nodes_per_socket: Stores the number of nodes per socket.
+  * Refer to CPUID Fn8000_001E_ECX Node Identifiers[10:8]
+@@ -87,6 +89,9 @@ static void hygon_get_topology(struct cpuinfo_x86 *c)
+ 		if (!err)
+ 			c->x86_coreid_bits = get_count_order(c->x86_max_cores);
+ 
++		/* Socket ID is ApicId[6] for these processors. */
++		c->phys_proc_id = c->apicid >> APICID_SOCKET_ID_BIT;
++
+ 		cacheinfo_hygon_init_llc_id(c, cpu, node_id);
+ 	} else if (cpu_has(c, X86_FEATURE_NODEID_MSR)) {
+ 		u64 value;
+diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
+index 1a7084ba9a3b..9e6a94c208e0 100644
+--- a/arch/x86/kernel/cpu/mce/core.c
++++ b/arch/x86/kernel/cpu/mce/core.c
+@@ -712,19 +712,49 @@ bool machine_check_poll(enum mcp_flags flags, mce_banks_t *b)
+ 
+ 		barrier();
+ 		m.status = mce_rdmsrl(msr_ops.status(i));
++
++		/* If this entry is not valid, ignore it */
+ 		if (!(m.status & MCI_STATUS_VAL))
+ 			continue;
+ 
+ 		/*
+-		 * Uncorrected or signalled events are handled by the exception
+-		 * handler when it is enabled, so don't process those here.
+-		 *
+-		 * TBD do the same check for MCI_STATUS_EN here?
++		 * If we are logging everything (at CPU online) or this
++		 * is a corrected error, then we must log it.
+ 		 */
+-		if (!(flags & MCP_UC) &&
+-		    (m.status & (mca_cfg.ser ? MCI_STATUS_S : MCI_STATUS_UC)))
+-			continue;
++		if ((flags & MCP_UC) || !(m.status & MCI_STATUS_UC))
++			goto log_it;
++
++		/*
++		 * Newer Intel systems that support software error
++		 * recovery need to make additional checks. Other
++		 * CPUs should skip over uncorrected errors, but log
++		 * everything else.
++		 */
++		if (!mca_cfg.ser) {
++			if (m.status & MCI_STATUS_UC)
++				continue;
++			goto log_it;
++		}
++
++		/* Log "not enabled" (speculative) errors */
++		if (!(m.status & MCI_STATUS_EN))
++			goto log_it;
++
++		/*
++		 * Log UCNA (SDM: 15.6.3 "UCR Error Classification")
++		 * UC == 1 && PCC == 0 && S == 0
++		 */
++		if (!(m.status & MCI_STATUS_PCC) && !(m.status & MCI_STATUS_S))
++			goto log_it;
++
++		/*
++		 * Skip anything else. Presumption is that our read of this
++		 * bank is racing with a machine check. Leave the log alone
++		 * for do_machine_check() to deal with it.
++		 */
++		continue;
+ 
++log_it:
+ 		error_seen = true;
+ 
+ 		mce_read_aux(&m, i);
+@@ -1451,13 +1481,12 @@ EXPORT_SYMBOL_GPL(mce_notify_irq);
+ static int __mcheck_cpu_mce_banks_init(void)
+ {
+ 	int i;
+-	u8 num_banks = mca_cfg.banks;
+ 
+-	mce_banks = kcalloc(num_banks, sizeof(struct mce_bank), GFP_KERNEL);
++	mce_banks = kcalloc(MAX_NR_BANKS, sizeof(struct mce_bank), GFP_KERNEL);
+ 	if (!mce_banks)
+ 		return -ENOMEM;
+ 
+-	for (i = 0; i < num_banks; i++) {
++	for (i = 0; i < MAX_NR_BANKS; i++) {
+ 		struct mce_bank *b = &mce_banks[i];
+ 
+ 		b->ctl = -1ULL;
+@@ -1471,28 +1500,19 @@ static int __mcheck_cpu_mce_banks_init(void)
+  */
+ static int __mcheck_cpu_cap_init(void)
+ {
+-	unsigned b;
+ 	u64 cap;
++	u8 b;
+ 
+ 	rdmsrl(MSR_IA32_MCG_CAP, cap);
+ 
+ 	b = cap & MCG_BANKCNT_MASK;
+-	if (!mca_cfg.banks)
+-		pr_info("CPU supports %d MCE banks\n", b);
+-
+-	if (b > MAX_NR_BANKS) {
+-		pr_warn("Using only %u machine check banks out of %u\n",
+-			MAX_NR_BANKS, b);
++	if (WARN_ON_ONCE(b > MAX_NR_BANKS))
+ 		b = MAX_NR_BANKS;
+-	}
+ 
+-	/* Don't support asymmetric configurations today */
+-	WARN_ON(mca_cfg.banks != 0 && b != mca_cfg.banks);
+-	mca_cfg.banks = b;
++	mca_cfg.banks = max(mca_cfg.banks, b);
+ 
+ 	if (!mce_banks) {
+ 		int err = __mcheck_cpu_mce_banks_init();
+-
+ 		if (err)
+ 			return err;
+ 	}
+@@ -2459,6 +2479,8 @@ EXPORT_SYMBOL_GPL(mcsafe_key);
+ 
+ static int __init mcheck_late_init(void)
+ {
++	pr_info("Using %d MCE banks\n", mca_cfg.banks);
++
+ 	if (mca_cfg.recovery)
+ 		static_branch_inc(&mcsafe_key);
+ 
+diff --git a/arch/x86/kernel/cpu/mce/inject.c b/arch/x86/kernel/cpu/mce/inject.c
+index 8492ef7d9015..3f82afd0f46f 100644
+--- a/arch/x86/kernel/cpu/mce/inject.c
++++ b/arch/x86/kernel/cpu/mce/inject.c
+@@ -46,8 +46,6 @@
+ static struct mce i_mce;
+ static struct dentry *dfs_inj;
+ 
+-static u8 n_banks;
+-
+ #define MAX_FLAG_OPT_SIZE	4
+ #define NBCFG			0x44
+ 
+@@ -570,9 +568,15 @@ err:
+ static int inj_bank_set(void *data, u64 val)
+ {
+ 	struct mce *m = (struct mce *)data;
++	u8 n_banks;
++	u64 cap;
++
++	/* Get bank count on target CPU so we can handle non-uniform values. */
++	rdmsrl_on_cpu(m->extcpu, MSR_IA32_MCG_CAP, &cap);
++	n_banks = cap & MCG_BANKCNT_MASK;
+ 
+ 	if (val >= n_banks) {
+-		pr_err("Non-existent MCE bank: %llu\n", val);
++		pr_err("MCA bank %llu non-existent on CPU%d\n", val, m->extcpu);
+ 		return -EINVAL;
+ 	}
+ 
+@@ -665,10 +669,6 @@ static struct dfs_node {
+ static int __init debugfs_init(void)
+ {
+ 	unsigned int i;
+-	u64 cap;
+-
+-	rdmsrl(MSR_IA32_MCG_CAP, cap);
+-	n_banks = cap & MCG_BANKCNT_MASK;
+ 
+ 	dfs_inj = debugfs_create_dir("mce-inject", NULL);
+ 	if (!dfs_inj)
+diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c
+index 97f9ada9ceda..fc70d39b804f 100644
+--- a/arch/x86/kernel/cpu/microcode/core.c
++++ b/arch/x86/kernel/cpu/microcode/core.c
+@@ -418,8 +418,9 @@ static int do_microcode_update(const void __user *buf, size_t size)
+ 		if (ustate == UCODE_ERROR) {
+ 			error = -1;
+ 			break;
+-		} else if (ustate == UCODE_OK)
++		} else if (ustate == UCODE_NEW) {
+ 			apply_microcode_on_target(cpu);
++		}
+ 	}
+ 
+ 	return error;
+diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
+index 2ee4b12a70e8..becb075954aa 100644
+--- a/arch/x86/kernel/ftrace.c
++++ b/arch/x86/kernel/ftrace.c
+@@ -748,6 +748,7 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
+ 	unsigned long end_offset;
+ 	unsigned long op_offset;
+ 	unsigned long offset;
++	unsigned long npages;
+ 	unsigned long size;
+ 	unsigned long retq;
+ 	unsigned long *ptr;
+@@ -780,6 +781,7 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
+ 		return 0;
+ 
+ 	*tramp_size = size + RET_SIZE + sizeof(void *);
++	npages = DIV_ROUND_UP(*tramp_size, PAGE_SIZE);
+ 
+ 	/* Copy ftrace_caller onto the trampoline memory */
+ 	ret = probe_kernel_read(trampoline, (void *)start_offset, size);
+@@ -824,6 +826,12 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
+ 	/* ALLOC_TRAMP flags lets us know we created it */
+ 	ops->flags |= FTRACE_OPS_FL_ALLOC_TRAMP;
+ 
++	/*
++	 * Module allocation needs to be completed by making the page
++	 * executable. The page is still writable, which is a security hazard,
++	 * but anyhow ftrace breaks W^X completely.
++	 */
++	set_memory_x((unsigned long)trampoline, npages);
+ 	return (unsigned long)trampoline;
+ fail:
+ 	tramp_free(trampoline, *tramp_size);
+diff --git a/arch/x86/kernel/irq_64.c b/arch/x86/kernel/irq_64.c
+index 0469cd078db1..b50ac9c7397b 100644
+--- a/arch/x86/kernel/irq_64.c
++++ b/arch/x86/kernel/irq_64.c
+@@ -26,9 +26,18 @@ int sysctl_panic_on_stackoverflow;
+ /*
+  * Probabilistic stack overflow check:
+  *
+- * Only check the stack in process context, because everything else
+- * runs on the big interrupt stacks. Checking reliably is too expensive,
+- * so we just check from interrupts.
++ * Regular device interrupts can enter on the following stacks:
++ *
++ * - User stack
++ *
++ * - Kernel task stack
++ *
++ * - Interrupt stack if a device driver reenables interrupts
++ *   which should only happen in really old drivers.
++ *
++ * - Debug IST stack
++ *
++ * All other contexts are invalid.
+  */
+ static inline void stack_overflow_check(struct pt_regs *regs)
+ {
+@@ -53,8 +62,8 @@ static inline void stack_overflow_check(struct pt_regs *regs)
+ 		return;
+ 
+ 	oist = this_cpu_ptr(&orig_ist);
+-	estack_top = (u64)oist->ist[0] - EXCEPTION_STKSZ + STACK_TOP_MARGIN;
+-	estack_bottom = (u64)oist->ist[N_EXCEPTION_STACKS - 1];
++	estack_bottom = (u64)oist->ist[DEBUG_STACK];
++	estack_top = estack_bottom - DEBUG_STKSZ + STACK_TOP_MARGIN;
+ 	if (regs->sp >= estack_top && regs->sp <= estack_bottom)
+ 		return;
+ 
+diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
+index b052e883dd8c..cfa3106faee4 100644
+--- a/arch/x86/kernel/module.c
++++ b/arch/x86/kernel/module.c
+@@ -87,7 +87,7 @@ void *module_alloc(unsigned long size)
+ 	p = __vmalloc_node_range(size, MODULE_ALIGN,
+ 				    MODULES_VADDR + get_module_load_offset(),
+ 				    MODULES_END, GFP_KERNEL,
+-				    PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
++				    PAGE_KERNEL, 0, NUMA_NO_NODE,
+ 				    __builtin_return_address(0));
+ 	if (p && (kasan_module_alloc(p, size) < 0)) {
+ 		vfree(p);
+diff --git a/arch/x86/kernel/signal.c b/arch/x86/kernel/signal.c
+index 08dfd4c1a4f9..c8aa58a2bab9 100644
+--- a/arch/x86/kernel/signal.c
++++ b/arch/x86/kernel/signal.c
+@@ -132,16 +132,6 @@ static int restore_sigcontext(struct pt_regs *regs,
+ 		COPY_SEG_CPL3(cs);
+ 		COPY_SEG_CPL3(ss);
+ 
+-#ifdef CONFIG_X86_64
+-		/*
+-		 * Fix up SS if needed for the benefit of old DOSEMU and
+-		 * CRIU.
+-		 */
+-		if (unlikely(!(uc_flags & UC_STRICT_RESTORE_SS) &&
+-			     user_64bit_mode(regs)))
+-			force_valid_ss(regs);
+-#endif
+-
+ 		get_user_ex(tmpflags, &sc->flags);
+ 		regs->flags = (regs->flags & ~FIX_EFLAGS) | (tmpflags & FIX_EFLAGS);
+ 		regs->orig_ax = -1;		/* disable syscall checks */
+@@ -150,6 +140,15 @@ static int restore_sigcontext(struct pt_regs *regs,
+ 		buf = (void __user *)buf_val;
+ 	} get_user_catch(err);
+ 
++#ifdef CONFIG_X86_64
++	/*
++	 * Fix up SS if needed for the benefit of old DOSEMU and
++	 * CRIU.
++	 */
++	if (unlikely(!(uc_flags & UC_STRICT_RESTORE_SS) && user_64bit_mode(regs)))
++		force_valid_ss(regs);
++#endif
++
+ 	err |= fpu__restore_sig(buf, IS_ENABLED(CONFIG_X86_32));
+ 
+ 	force_iret();
+@@ -461,6 +460,7 @@ static int __setup_rt_frame(int sig, struct ksignal *ksig,
+ {
+ 	struct rt_sigframe __user *frame;
+ 	void __user *fp = NULL;
++	unsigned long uc_flags;
+ 	int err = 0;
+ 
+ 	frame = get_sigframe(&ksig->ka, regs, sizeof(struct rt_sigframe), &fp);
+@@ -473,9 +473,11 @@ static int __setup_rt_frame(int sig, struct ksignal *ksig,
+ 			return -EFAULT;
+ 	}
+ 
++	uc_flags = frame_uc_flags(regs);
++
+ 	put_user_try {
+ 		/* Create the ucontext.  */
+-		put_user_ex(frame_uc_flags(regs), &frame->uc.uc_flags);
++		put_user_ex(uc_flags, &frame->uc.uc_flags);
+ 		put_user_ex(0, &frame->uc.uc_link);
+ 		save_altstack_ex(&frame->uc.uc_stack, regs->sp);
+ 
+@@ -541,6 +543,7 @@ static int x32_setup_rt_frame(struct ksignal *ksig,
+ {
+ #ifdef CONFIG_X86_X32_ABI
+ 	struct rt_sigframe_x32 __user *frame;
++	unsigned long uc_flags;
+ 	void __user *restorer;
+ 	int err = 0;
+ 	void __user *fpstate = NULL;
+@@ -555,9 +558,11 @@ static int x32_setup_rt_frame(struct ksignal *ksig,
+ 			return -EFAULT;
+ 	}
+ 
++	uc_flags = frame_uc_flags(regs);
++
+ 	put_user_try {
+ 		/* Create the ucontext.  */
+-		put_user_ex(frame_uc_flags(regs), &frame->uc.uc_flags);
++		put_user_ex(uc_flags, &frame->uc.uc_flags);
+ 		put_user_ex(0, &frame->uc.uc_link);
+ 		compat_save_altstack_ex(&frame->uc.uc_stack, regs->sp);
+ 		put_user_ex(0, &frame->uc.uc__pad0);
+diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
+index c45214c44e61..5cbce783d4d1 100644
+--- a/arch/x86/kernel/vmlinux.lds.S
++++ b/arch/x86/kernel/vmlinux.lds.S
+@@ -141,11 +141,11 @@ SECTIONS
+ 		*(.text.__x86.indirect_thunk)
+ 		__indirect_thunk_end = .;
+ #endif
+-
+-		/* End of text section */
+-		_etext = .;
+ 	} :text = 0x9090
+ 
++	/* End of text section */
++	_etext = .;
++
+ 	NOTES :text :note
+ 
+ 	EXCEPTION_TABLE(16) :text = 0x9090
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index 2a07e43ee666..847db4bd1dc5 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -2020,7 +2020,11 @@ static void avic_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+ 	if (!kvm_vcpu_apicv_active(vcpu))
+ 		return;
+ 
+-	if (WARN_ON(h_physical_id >= AVIC_MAX_PHYSICAL_ID_COUNT))
++	/*
++	 * Since the host physical APIC id is 8 bits,
++	 * we can support host APIC ID upto 255.
++	 */
++	if (WARN_ON(h_physical_id > AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK))
+ 		return;
+ 
+ 	entry = READ_ONCE(*(svm->avic_physical_id_cache));
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 0bbb21a49082..03b5c5803b5c 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -1288,7 +1288,7 @@ static int set_efer(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ 	u64 efer = msr_info->data;
+ 
+ 	if (efer & efer_reserved_bits)
+-		return false;
++		return 1;
+ 
+ 	if (!msr_info->host_initiated) {
+ 		if (!__kvm_valid_efer(vcpu, efer))
+diff --git a/arch/x86/lib/memcpy_64.S b/arch/x86/lib/memcpy_64.S
+index 3b24dc05251c..9d05572370ed 100644
+--- a/arch/x86/lib/memcpy_64.S
++++ b/arch/x86/lib/memcpy_64.S
+@@ -257,6 +257,7 @@ ENTRY(__memcpy_mcsafe)
+ 	/* Copy successful. Return zero */
+ .L_done_memcpy_trap:
+ 	xorl %eax, %eax
++.L_done:
+ 	ret
+ ENDPROC(__memcpy_mcsafe)
+ EXPORT_SYMBOL_GPL(__memcpy_mcsafe)
+@@ -273,7 +274,7 @@ EXPORT_SYMBOL_GPL(__memcpy_mcsafe)
+ 	addl	%edx, %ecx
+ .E_trailing_bytes:
+ 	mov	%ecx, %eax
+-	ret
++	jmp	.L_done
+ 
+ 	/*
+ 	 * For write fault handling, given the destination is unaligned,
+diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
+index 9d5c75f02295..55233dec5ff4 100644
+--- a/arch/x86/mm/fault.c
++++ b/arch/x86/mm/fault.c
+@@ -359,8 +359,6 @@ static noinline int vmalloc_fault(unsigned long address)
+ 	if (!(address >= VMALLOC_START && address < VMALLOC_END))
+ 		return -1;
+ 
+-	WARN_ON_ONCE(in_nmi());
+-
+ 	/*
+ 	 * Copy kernel mappings over when needed. This can also
+ 	 * happen within a race in page table update. In the later
+diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
+index 0c98b6c1ca49..1213556a20da 100644
+--- a/block/blk-mq-sched.c
++++ b/block/blk-mq-sched.c
+@@ -413,6 +413,14 @@ void blk_mq_sched_insert_requests(struct blk_mq_hw_ctx *hctx,
+ 				  struct list_head *list, bool run_queue_async)
+ {
+ 	struct elevator_queue *e;
++	struct request_queue *q = hctx->queue;
++
++	/*
++	 * blk_mq_sched_insert_requests() is called from flush plug
++	 * context only, and hold one usage counter to prevent queue
++	 * from being released.
++	 */
++	percpu_ref_get(&q->q_usage_counter);
+ 
+ 	e = hctx->queue->elevator;
+ 	if (e && e->type->ops.insert_requests)
+@@ -426,12 +434,14 @@ void blk_mq_sched_insert_requests(struct blk_mq_hw_ctx *hctx,
+ 		if (!hctx->dispatch_busy && !e && !run_queue_async) {
+ 			blk_mq_try_issue_list_directly(hctx, list);
+ 			if (list_empty(list))
+-				return;
++				goto out;
+ 		}
+ 		blk_mq_insert_requests(hctx, ctx, list);
+ 	}
+ 
+ 	blk_mq_run_hw_queue(hctx, run_queue_async);
++ out:
++	percpu_ref_put(&q->q_usage_counter);
+ }
+ 
+ static void blk_mq_sched_free_tags(struct blk_mq_tag_set *set,
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 9957e0fc17fc..27526095319c 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -2287,15 +2287,65 @@ static void blk_mq_exit_hw_queues(struct request_queue *q,
+ 	}
+ }
+ 
++static int blk_mq_hw_ctx_size(struct blk_mq_tag_set *tag_set)
++{
++	int hw_ctx_size = sizeof(struct blk_mq_hw_ctx);
++
++	BUILD_BUG_ON(ALIGN(offsetof(struct blk_mq_hw_ctx, srcu),
++			   __alignof__(struct blk_mq_hw_ctx)) !=
++		     sizeof(struct blk_mq_hw_ctx));
++
++	if (tag_set->flags & BLK_MQ_F_BLOCKING)
++		hw_ctx_size += sizeof(struct srcu_struct);
++
++	return hw_ctx_size;
++}
++
+ static int blk_mq_init_hctx(struct request_queue *q,
+ 		struct blk_mq_tag_set *set,
+ 		struct blk_mq_hw_ctx *hctx, unsigned hctx_idx)
+ {
+-	int node;
++	hctx->queue_num = hctx_idx;
++
++	cpuhp_state_add_instance_nocalls(CPUHP_BLK_MQ_DEAD, &hctx->cpuhp_dead);
++
++	hctx->tags = set->tags[hctx_idx];
++
++	if (set->ops->init_hctx &&
++	    set->ops->init_hctx(hctx, set->driver_data, hctx_idx))
++		goto unregister_cpu_notifier;
+ 
+-	node = hctx->numa_node;
++	if (blk_mq_init_request(set, hctx->fq->flush_rq, hctx_idx,
++				hctx->numa_node))
++		goto exit_hctx;
++	return 0;
++
++ exit_hctx:
++	if (set->ops->exit_hctx)
++		set->ops->exit_hctx(hctx, hctx_idx);
++ unregister_cpu_notifier:
++	blk_mq_remove_cpuhp(hctx);
++	return -1;
++}
++
++static struct blk_mq_hw_ctx *
++blk_mq_alloc_hctx(struct request_queue *q, struct blk_mq_tag_set *set,
++		int node)
++{
++	struct blk_mq_hw_ctx *hctx;
++	gfp_t gfp = GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY;
++
++	hctx = kzalloc_node(blk_mq_hw_ctx_size(set), gfp, node);
++	if (!hctx)
++		goto fail_alloc_hctx;
++
++	if (!zalloc_cpumask_var_node(&hctx->cpumask, gfp, node))
++		goto free_hctx;
++
++	atomic_set(&hctx->nr_active, 0);
+ 	if (node == NUMA_NO_NODE)
+-		node = hctx->numa_node = set->numa_node;
++		node = set->numa_node;
++	hctx->numa_node = node;
+ 
+ 	INIT_DELAYED_WORK(&hctx->run_work, blk_mq_run_work_fn);
+ 	spin_lock_init(&hctx->lock);
+@@ -2303,58 +2353,45 @@ static int blk_mq_init_hctx(struct request_queue *q,
+ 	hctx->queue = q;
+ 	hctx->flags = set->flags & ~BLK_MQ_F_TAG_SHARED;
+ 
+-	cpuhp_state_add_instance_nocalls(CPUHP_BLK_MQ_DEAD, &hctx->cpuhp_dead);
+-
+-	hctx->tags = set->tags[hctx_idx];
+-
+ 	/*
+ 	 * Allocate space for all possible cpus to avoid allocation at
+ 	 * runtime
+ 	 */
+ 	hctx->ctxs = kmalloc_array_node(nr_cpu_ids, sizeof(void *),
+-			GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY, node);
++			gfp, node);
+ 	if (!hctx->ctxs)
+-		goto unregister_cpu_notifier;
++		goto free_cpumask;
+ 
+ 	if (sbitmap_init_node(&hctx->ctx_map, nr_cpu_ids, ilog2(8),
+-				GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY, node))
++				gfp, node))
+ 		goto free_ctxs;
+-
+ 	hctx->nr_ctx = 0;
+ 
+ 	spin_lock_init(&hctx->dispatch_wait_lock);
+ 	init_waitqueue_func_entry(&hctx->dispatch_wait, blk_mq_dispatch_wake);
+ 	INIT_LIST_HEAD(&hctx->dispatch_wait.entry);
+ 
+-	if (set->ops->init_hctx &&
+-	    set->ops->init_hctx(hctx, set->driver_data, hctx_idx))
+-		goto free_bitmap;
+-
+ 	hctx->fq = blk_alloc_flush_queue(q, hctx->numa_node, set->cmd_size,
+-			GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY);
++			gfp);
+ 	if (!hctx->fq)
+-		goto exit_hctx;
+-
+-	if (blk_mq_init_request(set, hctx->fq->flush_rq, hctx_idx, node))
+-		goto free_fq;
++		goto free_bitmap;
+ 
+ 	if (hctx->flags & BLK_MQ_F_BLOCKING)
+ 		init_srcu_struct(hctx->srcu);
++	blk_mq_hctx_kobj_init(hctx);
+ 
+-	return 0;
++	return hctx;
+ 
+- free_fq:
+-	blk_free_flush_queue(hctx->fq);
+- exit_hctx:
+-	if (set->ops->exit_hctx)
+-		set->ops->exit_hctx(hctx, hctx_idx);
+  free_bitmap:
+ 	sbitmap_free(&hctx->ctx_map);
+  free_ctxs:
+ 	kfree(hctx->ctxs);
+- unregister_cpu_notifier:
+-	blk_mq_remove_cpuhp(hctx);
+-	return -1;
++ free_cpumask:
++	free_cpumask_var(hctx->cpumask);
++ free_hctx:
++	kfree(hctx);
++ fail_alloc_hctx:
++	return NULL;
+ }
+ 
+ static void blk_mq_init_cpu_queues(struct request_queue *q,
+@@ -2691,51 +2728,25 @@ struct request_queue *blk_mq_init_sq_queue(struct blk_mq_tag_set *set,
+ }
+ EXPORT_SYMBOL(blk_mq_init_sq_queue);
+ 
+-static int blk_mq_hw_ctx_size(struct blk_mq_tag_set *tag_set)
+-{
+-	int hw_ctx_size = sizeof(struct blk_mq_hw_ctx);
+-
+-	BUILD_BUG_ON(ALIGN(offsetof(struct blk_mq_hw_ctx, srcu),
+-			   __alignof__(struct blk_mq_hw_ctx)) !=
+-		     sizeof(struct blk_mq_hw_ctx));
+-
+-	if (tag_set->flags & BLK_MQ_F_BLOCKING)
+-		hw_ctx_size += sizeof(struct srcu_struct);
+-
+-	return hw_ctx_size;
+-}
+-
+ static struct blk_mq_hw_ctx *blk_mq_alloc_and_init_hctx(
+ 		struct blk_mq_tag_set *set, struct request_queue *q,
+ 		int hctx_idx, int node)
+ {
+ 	struct blk_mq_hw_ctx *hctx;
+ 
+-	hctx = kzalloc_node(blk_mq_hw_ctx_size(set),
+-			GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY,
+-			node);
++	hctx = blk_mq_alloc_hctx(q, set, node);
+ 	if (!hctx)
+-		return NULL;
+-
+-	if (!zalloc_cpumask_var_node(&hctx->cpumask,
+-				GFP_NOIO | __GFP_NOWARN | __GFP_NORETRY,
+-				node)) {
+-		kfree(hctx);
+-		return NULL;
+-	}
+-
+-	atomic_set(&hctx->nr_active, 0);
+-	hctx->numa_node = node;
+-	hctx->queue_num = hctx_idx;
++		goto fail;
+ 
+-	if (blk_mq_init_hctx(q, set, hctx, hctx_idx)) {
+-		free_cpumask_var(hctx->cpumask);
+-		kfree(hctx);
+-		return NULL;
+-	}
+-	blk_mq_hctx_kobj_init(hctx);
++	if (blk_mq_init_hctx(q, set, hctx, hctx_idx))
++		goto free_hctx;
+ 
+ 	return hctx;
++
++ free_hctx:
++	kobject_put(&hctx->kobj);
++ fail:
++	return NULL;
+ }
+ 
+ static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
+diff --git a/block/blk.h b/block/blk.h
+index 848278c52030..a57bc90e44bb 100644
+--- a/block/blk.h
++++ b/block/blk.h
+@@ -75,7 +75,7 @@ static inline bool biovec_phys_mergeable(struct request_queue *q,
+ 
+ 	if (addr1 + vec1->bv_len != addr2)
+ 		return false;
+-	if (xen_domain() && !xen_biovec_phys_mergeable(vec1, vec2))
++	if (xen_domain() && !xen_biovec_phys_mergeable(vec1, vec2->bv_page))
+ 		return false;
+ 	if ((addr1 | mask) != ((addr2 + vec2->bv_len - 1) | mask))
+ 		return false;
+diff --git a/block/genhd.c b/block/genhd.c
+index 1dd8fd6613b8..ef28a5126d21 100644
+--- a/block/genhd.c
++++ b/block/genhd.c
+@@ -531,6 +531,18 @@ void blk_free_devt(dev_t devt)
+ 	}
+ }
+ 
++/**
++ *	We invalidate devt by assigning NULL pointer for devt in idr.
++ */
++void blk_invalidate_devt(dev_t devt)
++{
++	if (MAJOR(devt) == BLOCK_EXT_MAJOR) {
++		spin_lock_bh(&ext_devt_lock);
++		idr_replace(&ext_devt_idr, NULL, blk_mangle_minor(MINOR(devt)));
++		spin_unlock_bh(&ext_devt_lock);
++	}
++}
++
+ static char *bdevt_str(dev_t devt, char *buf)
+ {
+ 	if (MAJOR(devt) <= 0xff && MINOR(devt) <= 0xff) {
+@@ -791,6 +803,13 @@ void del_gendisk(struct gendisk *disk)
+ 
+ 	if (!(disk->flags & GENHD_FL_HIDDEN))
+ 		blk_unregister_region(disk_devt(disk), disk->minors);
++	/*
++	 * Remove gendisk pointer from idr so that it cannot be looked up
++	 * while RCU period before freeing gendisk is running to prevent
++	 * use-after-free issues. Note that the device number stays
++	 * "in-use" until we really free the gendisk.
++	 */
++	blk_invalidate_devt(disk_devt(disk));
+ 
+ 	kobject_put(disk->part0.holder_dir);
+ 	kobject_put(disk->slave_dir);
+diff --git a/block/partition-generic.c b/block/partition-generic.c
+index 8e596a8dff32..aee643ce13d1 100644
+--- a/block/partition-generic.c
++++ b/block/partition-generic.c
+@@ -285,6 +285,13 @@ void delete_partition(struct gendisk *disk, int partno)
+ 	kobject_put(part->holder_dir);
+ 	device_del(part_to_dev(part));
+ 
++	/*
++	 * Remove gendisk pointer from idr so that it cannot be looked up
++	 * while RCU period before freeing gendisk is running to prevent
++	 * use-after-free issues. Note that the device number stays
++	 * "in-use" until we really free the gendisk.
++	 */
++	blk_invalidate_devt(part_devt(part));
+ 	hd_struct_kill(part);
+ }
+ 
+diff --git a/block/sed-opal.c b/block/sed-opal.c
+index e0de4dd448b3..119640897293 100644
+--- a/block/sed-opal.c
++++ b/block/sed-opal.c
+@@ -2095,13 +2095,16 @@ static int opal_erase_locking_range(struct opal_dev *dev,
+ static int opal_enable_disable_shadow_mbr(struct opal_dev *dev,
+ 					  struct opal_mbr_data *opal_mbr)
+ {
++	u8 enable_disable = opal_mbr->enable_disable == OPAL_MBR_ENABLE ?
++		OPAL_TRUE : OPAL_FALSE;
++
+ 	const struct opal_step mbr_steps[] = {
+ 		{ opal_discovery0, },
+ 		{ start_admin1LSP_opal_session, &opal_mbr->key },
+-		{ set_mbr_done, &opal_mbr->enable_disable },
++		{ set_mbr_done, &enable_disable },
+ 		{ end_opal_session, },
+ 		{ start_admin1LSP_opal_session, &opal_mbr->key },
+-		{ set_mbr_enable_disable, &opal_mbr->enable_disable },
++		{ set_mbr_enable_disable, &enable_disable },
+ 		{ end_opal_session, },
+ 		{ NULL, }
+ 	};
+@@ -2221,7 +2224,7 @@ static int __opal_lock_unlock(struct opal_dev *dev,
+ 
+ static int __opal_set_mbr_done(struct opal_dev *dev, struct opal_key *key)
+ {
+-	u8 mbr_done_tf = 1;
++	u8 mbr_done_tf = OPAL_TRUE;
+ 	const struct opal_step mbrdone_step [] = {
+ 		{ opal_discovery0, },
+ 		{ start_admin1LSP_opal_session, key },
+diff --git a/crypto/hmac.c b/crypto/hmac.c
+index e74730224f0a..4b8c8ee8f15c 100644
+--- a/crypto/hmac.c
++++ b/crypto/hmac.c
+@@ -168,6 +168,8 @@ static int hmac_init_tfm(struct crypto_tfm *tfm)
+ 
+ 	parent->descsize = sizeof(struct shash_desc) +
+ 			   crypto_shash_descsize(hash);
++	if (WARN_ON(parent->descsize > HASH_MAX_DESCSIZE))
++		return -EINVAL;
+ 
+ 	ctx->hash = hash;
+ 	return 0;
+diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c
+index e48894e002ba..a46c2c162c03 100644
+--- a/drivers/acpi/arm64/iort.c
++++ b/drivers/acpi/arm64/iort.c
+@@ -1232,18 +1232,24 @@ static bool __init arm_smmu_v3_is_coherent(struct acpi_iort_node *node)
+ /*
+  * set numa proximity domain for smmuv3 device
+  */
+-static void  __init arm_smmu_v3_set_proximity(struct device *dev,
++static int  __init arm_smmu_v3_set_proximity(struct device *dev,
+ 					      struct acpi_iort_node *node)
+ {
+ 	struct acpi_iort_smmu_v3 *smmu;
+ 
+ 	smmu = (struct acpi_iort_smmu_v3 *)node->node_data;
+ 	if (smmu->flags & ACPI_IORT_SMMU_V3_PXM_VALID) {
+-		set_dev_node(dev, acpi_map_pxm_to_node(smmu->pxm));
++		int node = acpi_map_pxm_to_node(smmu->pxm);
++
++		if (node != NUMA_NO_NODE && !node_online(node))
++			return -EINVAL;
++
++		set_dev_node(dev, node);
+ 		pr_info("SMMU-v3[%llx] Mapped to Proximity domain %d\n",
+ 			smmu->base_address,
+ 			smmu->pxm);
+ 	}
++	return 0;
+ }
+ #else
+ #define arm_smmu_v3_set_proximity NULL
+@@ -1318,7 +1324,7 @@ struct iort_dev_config {
+ 	int (*dev_count_resources)(struct acpi_iort_node *node);
+ 	void (*dev_init_resources)(struct resource *res,
+ 				     struct acpi_iort_node *node);
+-	void (*dev_set_proximity)(struct device *dev,
++	int (*dev_set_proximity)(struct device *dev,
+ 				    struct acpi_iort_node *node);
+ };
+ 
+@@ -1369,8 +1375,11 @@ static int __init iort_add_platform_device(struct acpi_iort_node *node,
+ 	if (!pdev)
+ 		return -ENOMEM;
+ 
+-	if (ops->dev_set_proximity)
+-		ops->dev_set_proximity(&pdev->dev, node);
++	if (ops->dev_set_proximity) {
++		ret = ops->dev_set_proximity(&pdev->dev, node);
++		if (ret)
++			goto dev_put;
++	}
+ 
+ 	count = ops->dev_count_resources(node);
+ 
+diff --git a/drivers/acpi/property.c b/drivers/acpi/property.c
+index 77abe0ec4043..bd533f68b1de 100644
+--- a/drivers/acpi/property.c
++++ b/drivers/acpi/property.c
+@@ -1031,6 +1031,14 @@ struct fwnode_handle *acpi_get_next_subnode(const struct fwnode_handle *fwnode,
+ 		const struct acpi_data_node *data = to_acpi_data_node(fwnode);
+ 		struct acpi_data_node *dn;
+ 
++		/*
++		 * We can have a combination of device and data nodes, e.g. with
++		 * hierarchical _DSD properties. Make sure the adev pointer is
++		 * restored before going through data nodes, otherwise we will
++		 * be looking for data_nodes below the last device found instead
++		 * of the common fwnode shared by device_nodes and data_nodes.
++		 */
++		adev = to_acpi_device_node(fwnode);
+ 		if (adev)
+ 			head = &adev->data.subnodes;
+ 		else if (data)
+diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
+index 0992e67e862b..7900debc5ce4 100644
+--- a/drivers/base/power/main.c
++++ b/drivers/base/power/main.c
+@@ -1738,6 +1738,10 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
+ 	if (dev->power.syscore)
+ 		goto Complete;
+ 
++	/* Avoid direct_complete to let wakeup_path propagate. */
++	if (device_may_wakeup(dev) || dev->power.wakeup_path)
++		dev->power.direct_complete = false;
++
+ 	if (dev->power.direct_complete) {
+ 		if (pm_runtime_status_suspended(dev)) {
+ 			pm_runtime_disable(dev);
+diff --git a/drivers/bluetooth/btbcm.c b/drivers/bluetooth/btbcm.c
+index d5d6e6e5da3b..62d3aa2b26f6 100644
+--- a/drivers/bluetooth/btbcm.c
++++ b/drivers/bluetooth/btbcm.c
+@@ -37,6 +37,7 @@
+ #define BDADDR_BCM43430A0 (&(bdaddr_t) {{0xac, 0x1f, 0x12, 0xa0, 0x43, 0x43}})
+ #define BDADDR_BCM4324B3 (&(bdaddr_t) {{0x00, 0x00, 0x00, 0xb3, 0x24, 0x43}})
+ #define BDADDR_BCM4330B1 (&(bdaddr_t) {{0x00, 0x00, 0x00, 0xb1, 0x30, 0x43}})
++#define BDADDR_BCM43341B (&(bdaddr_t) {{0xac, 0x1f, 0x00, 0x1b, 0x34, 0x43}})
+ 
+ int btbcm_check_bdaddr(struct hci_dev *hdev)
+ {
+@@ -82,7 +83,8 @@ int btbcm_check_bdaddr(struct hci_dev *hdev)
+ 	    !bacmp(&bda->bdaddr, BDADDR_BCM20702A1) ||
+ 	    !bacmp(&bda->bdaddr, BDADDR_BCM4324B3) ||
+ 	    !bacmp(&bda->bdaddr, BDADDR_BCM4330B1) ||
+-	    !bacmp(&bda->bdaddr, BDADDR_BCM43430A0)) {
++	    !bacmp(&bda->bdaddr, BDADDR_BCM43430A0) ||
++	    !bacmp(&bda->bdaddr, BDADDR_BCM43341B)) {
+ 		bt_dev_info(hdev, "BCM: Using default device address (%pMR)",
+ 			    &bda->bdaddr);
+ 		set_bit(HCI_QUIRK_INVALID_BDADDR, &hdev->quirks);
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index f036c8f98ea3..97bc17670b7a 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -508,6 +508,8 @@ static int qca_open(struct hci_uart *hu)
+ 		qcadev = serdev_device_get_drvdata(hu->serdev);
+ 		if (qcadev->btsoc_type != QCA_WCN3990) {
+ 			gpiod_set_value_cansleep(qcadev->bt_en, 1);
++			/* Controller needs time to bootup. */
++			msleep(150);
+ 		} else {
+ 			hu->init_speed = qcadev->init_speed;
+ 			hu->oper_speed = qcadev->oper_speed;
+diff --git a/drivers/char/hw_random/omap-rng.c b/drivers/char/hw_random/omap-rng.c
+index b65ff6962899..e9b6ac61fb7f 100644
+--- a/drivers/char/hw_random/omap-rng.c
++++ b/drivers/char/hw_random/omap-rng.c
+@@ -443,6 +443,7 @@ static int omap_rng_probe(struct platform_device *pdev)
+ 	priv->rng.read = omap_rng_do_read;
+ 	priv->rng.init = omap_rng_init;
+ 	priv->rng.cleanup = omap_rng_cleanup;
++	priv->rng.quality = 900;
+ 
+ 	priv->rng.priv = (unsigned long)priv;
+ 	platform_set_drvdata(pdev, priv);
+diff --git a/drivers/char/random.c b/drivers/char/random.c
+index 38c6d1af6d1c..af6e240f98ff 100644
+--- a/drivers/char/random.c
++++ b/drivers/char/random.c
+@@ -777,6 +777,7 @@ static struct crng_state **crng_node_pool __read_mostly;
+ #endif
+ 
+ static void invalidate_batched_entropy(void);
++static void numa_crng_init(void);
+ 
+ static bool trust_cpu __ro_after_init = IS_ENABLED(CONFIG_RANDOM_TRUST_CPU);
+ static int __init parse_trust_cpu(char *arg)
+@@ -805,7 +806,9 @@ static void crng_initialize(struct crng_state *crng)
+ 		}
+ 		crng->state[i] ^= rv;
+ 	}
+-	if (trust_cpu && arch_init) {
++	if (trust_cpu && arch_init && crng == &primary_crng) {
++		invalidate_batched_entropy();
++		numa_crng_init();
+ 		crng_init = 2;
+ 		pr_notice("random: crng done (trusting CPU's manufacturer)\n");
+ 	}
+@@ -2211,8 +2214,8 @@ struct batched_entropy {
+ 		u32 entropy_u32[CHACHA_BLOCK_SIZE / sizeof(u32)];
+ 	};
+ 	unsigned int position;
++	spinlock_t batch_lock;
+ };
+-static rwlock_t batched_entropy_reset_lock = __RW_LOCK_UNLOCKED(batched_entropy_reset_lock);
+ 
+ /*
+  * Get a random word for internal kernel use only. The quality of the random
+@@ -2222,12 +2225,14 @@ static rwlock_t batched_entropy_reset_lock = __RW_LOCK_UNLOCKED(batched_entropy_
+  * wait_for_random_bytes() should be called and return 0 at least once
+  * at any point prior.
+  */
+-static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64);
++static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64) = {
++	.batch_lock	= __SPIN_LOCK_UNLOCKED(batched_entropy_u64.lock),
++};
++
+ u64 get_random_u64(void)
+ {
+ 	u64 ret;
+-	bool use_lock;
+-	unsigned long flags = 0;
++	unsigned long flags;
+ 	struct batched_entropy *batch;
+ 	static void *previous;
+ 
+@@ -2242,28 +2247,25 @@ u64 get_random_u64(void)
+ 
+ 	warn_unseeded_randomness(&previous);
+ 
+-	use_lock = READ_ONCE(crng_init) < 2;
+-	batch = &get_cpu_var(batched_entropy_u64);
+-	if (use_lock)
+-		read_lock_irqsave(&batched_entropy_reset_lock, flags);
++	batch = raw_cpu_ptr(&batched_entropy_u64);
++	spin_lock_irqsave(&batch->batch_lock, flags);
+ 	if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0) {
+ 		extract_crng((u8 *)batch->entropy_u64);
+ 		batch->position = 0;
+ 	}
+ 	ret = batch->entropy_u64[batch->position++];
+-	if (use_lock)
+-		read_unlock_irqrestore(&batched_entropy_reset_lock, flags);
+-	put_cpu_var(batched_entropy_u64);
++	spin_unlock_irqrestore(&batch->batch_lock, flags);
+ 	return ret;
+ }
+ EXPORT_SYMBOL(get_random_u64);
+ 
+-static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32);
++static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32) = {
++	.batch_lock	= __SPIN_LOCK_UNLOCKED(batched_entropy_u32.lock),
++};
+ u32 get_random_u32(void)
+ {
+ 	u32 ret;
+-	bool use_lock;
+-	unsigned long flags = 0;
++	unsigned long flags;
+ 	struct batched_entropy *batch;
+ 	static void *previous;
+ 
+@@ -2272,18 +2274,14 @@ u32 get_random_u32(void)
+ 
+ 	warn_unseeded_randomness(&previous);
+ 
+-	use_lock = READ_ONCE(crng_init) < 2;
+-	batch = &get_cpu_var(batched_entropy_u32);
+-	if (use_lock)
+-		read_lock_irqsave(&batched_entropy_reset_lock, flags);
++	batch = raw_cpu_ptr(&batched_entropy_u32);
++	spin_lock_irqsave(&batch->batch_lock, flags);
+ 	if (batch->position % ARRAY_SIZE(batch->entropy_u32) == 0) {
+ 		extract_crng((u8 *)batch->entropy_u32);
+ 		batch->position = 0;
+ 	}
+ 	ret = batch->entropy_u32[batch->position++];
+-	if (use_lock)
+-		read_unlock_irqrestore(&batched_entropy_reset_lock, flags);
+-	put_cpu_var(batched_entropy_u32);
++	spin_unlock_irqrestore(&batch->batch_lock, flags);
+ 	return ret;
+ }
+ EXPORT_SYMBOL(get_random_u32);
+@@ -2297,12 +2295,19 @@ static void invalidate_batched_entropy(void)
+ 	int cpu;
+ 	unsigned long flags;
+ 
+-	write_lock_irqsave(&batched_entropy_reset_lock, flags);
+ 	for_each_possible_cpu (cpu) {
+-		per_cpu_ptr(&batched_entropy_u32, cpu)->position = 0;
+-		per_cpu_ptr(&batched_entropy_u64, cpu)->position = 0;
++		struct batched_entropy *batched_entropy;
++
++		batched_entropy = per_cpu_ptr(&batched_entropy_u32, cpu);
++		spin_lock_irqsave(&batched_entropy->batch_lock, flags);
++		batched_entropy->position = 0;
++		spin_unlock(&batched_entropy->batch_lock);
++
++		batched_entropy = per_cpu_ptr(&batched_entropy_u64, cpu);
++		spin_lock(&batched_entropy->batch_lock);
++		batched_entropy->position = 0;
++		spin_unlock_irqrestore(&batched_entropy->batch_lock, flags);
+ 	}
+-	write_unlock_irqrestore(&batched_entropy_reset_lock, flags);
+ }
+ 
+ /**
+diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
+index fbeb71953526..05dbfdb9f4af 100644
+--- a/drivers/char/virtio_console.c
++++ b/drivers/char/virtio_console.c
+@@ -75,7 +75,7 @@ struct ports_driver_data {
+ 	/* All the console devices handled by this driver */
+ 	struct list_head consoles;
+ };
+-static struct ports_driver_data pdrvdata;
++static struct ports_driver_data pdrvdata = { .next_vtermno = 1};
+ 
+ static DEFINE_SPINLOCK(pdrvdata_lock);
+ static DECLARE_COMPLETION(early_console_added);
+@@ -1394,6 +1394,7 @@ static int add_port(struct ports_device *portdev, u32 id)
+ 	port->async_queue = NULL;
+ 
+ 	port->cons.ws.ws_row = port->cons.ws.ws_col = 0;
++	port->cons.vtermno = 0;
+ 
+ 	port->host_connected = port->guest_connected = false;
+ 	port->stats = (struct port_stats) { 0 };
+diff --git a/drivers/clk/renesas/r8a774a1-cpg-mssr.c b/drivers/clk/renesas/r8a774a1-cpg-mssr.c
+index 10e852518870..904d4d4ebcad 100644
+--- a/drivers/clk/renesas/r8a774a1-cpg-mssr.c
++++ b/drivers/clk/renesas/r8a774a1-cpg-mssr.c
+@@ -122,8 +122,8 @@ static const struct mssr_mod_clk r8a774a1_mod_clks[] __initconst = {
+ 	DEF_MOD("msiof2",		 209,	R8A774A1_CLK_MSO),
+ 	DEF_MOD("msiof1",		 210,	R8A774A1_CLK_MSO),
+ 	DEF_MOD("msiof0",		 211,	R8A774A1_CLK_MSO),
+-	DEF_MOD("sys-dmac2",		 217,	R8A774A1_CLK_S0D3),
+-	DEF_MOD("sys-dmac1",		 218,	R8A774A1_CLK_S0D3),
++	DEF_MOD("sys-dmac2",		 217,	R8A774A1_CLK_S3D1),
++	DEF_MOD("sys-dmac1",		 218,	R8A774A1_CLK_S3D1),
+ 	DEF_MOD("sys-dmac0",		 219,	R8A774A1_CLK_S0D3),
+ 	DEF_MOD("cmt3",			 300,	R8A774A1_CLK_R),
+ 	DEF_MOD("cmt2",			 301,	R8A774A1_CLK_R),
+@@ -142,8 +142,8 @@ static const struct mssr_mod_clk r8a774a1_mod_clks[] __initconst = {
+ 	DEF_MOD("rwdt",			 402,	R8A774A1_CLK_R),
+ 	DEF_MOD("intc-ex",		 407,	R8A774A1_CLK_CP),
+ 	DEF_MOD("intc-ap",		 408,	R8A774A1_CLK_S0D3),
+-	DEF_MOD("audmac1",		 501,	R8A774A1_CLK_S0D3),
+-	DEF_MOD("audmac0",		 502,	R8A774A1_CLK_S0D3),
++	DEF_MOD("audmac1",		 501,	R8A774A1_CLK_S1D2),
++	DEF_MOD("audmac0",		 502,	R8A774A1_CLK_S1D2),
+ 	DEF_MOD("hscif4",		 516,	R8A774A1_CLK_S3D1),
+ 	DEF_MOD("hscif3",		 517,	R8A774A1_CLK_S3D1),
+ 	DEF_MOD("hscif2",		 518,	R8A774A1_CLK_S3D1),
+diff --git a/drivers/clk/renesas/r8a774c0-cpg-mssr.c b/drivers/clk/renesas/r8a774c0-cpg-mssr.c
+index 10b96895d452..4a0525425c16 100644
+--- a/drivers/clk/renesas/r8a774c0-cpg-mssr.c
++++ b/drivers/clk/renesas/r8a774c0-cpg-mssr.c
+@@ -149,7 +149,7 @@ static const struct mssr_mod_clk r8a774c0_mod_clks[] __initconst = {
+ 	DEF_MOD("intc-ex",		 407,	R8A774C0_CLK_CP),
+ 	DEF_MOD("intc-ap",		 408,	R8A774C0_CLK_S0D3),
+ 
+-	DEF_MOD("audmac0",		 502,	R8A774C0_CLK_S3D4),
++	DEF_MOD("audmac0",		 502,	R8A774C0_CLK_S1D2),
+ 	DEF_MOD("hscif4",		 516,	R8A774C0_CLK_S3D1C),
+ 	DEF_MOD("hscif3",		 517,	R8A774C0_CLK_S3D1C),
+ 	DEF_MOD("hscif2",		 518,	R8A774C0_CLK_S3D1C),
+diff --git a/drivers/clk/renesas/r8a7795-cpg-mssr.c b/drivers/clk/renesas/r8a7795-cpg-mssr.c
+index 86842c9fd314..0825cd0ff286 100644
+--- a/drivers/clk/renesas/r8a7795-cpg-mssr.c
++++ b/drivers/clk/renesas/r8a7795-cpg-mssr.c
+@@ -129,8 +129,8 @@ static struct mssr_mod_clk r8a7795_mod_clks[] __initdata = {
+ 	DEF_MOD("msiof2",		 209,	R8A7795_CLK_MSO),
+ 	DEF_MOD("msiof1",		 210,	R8A7795_CLK_MSO),
+ 	DEF_MOD("msiof0",		 211,	R8A7795_CLK_MSO),
+-	DEF_MOD("sys-dmac2",		 217,	R8A7795_CLK_S0D3),
+-	DEF_MOD("sys-dmac1",		 218,	R8A7795_CLK_S0D3),
++	DEF_MOD("sys-dmac2",		 217,	R8A7795_CLK_S3D1),
++	DEF_MOD("sys-dmac1",		 218,	R8A7795_CLK_S3D1),
+ 	DEF_MOD("sys-dmac0",		 219,	R8A7795_CLK_S0D3),
+ 	DEF_MOD("sceg-pub",		 229,	R8A7795_CLK_CR),
+ 	DEF_MOD("cmt3",			 300,	R8A7795_CLK_R),
+@@ -153,8 +153,8 @@ static struct mssr_mod_clk r8a7795_mod_clks[] __initdata = {
+ 	DEF_MOD("rwdt",			 402,	R8A7795_CLK_R),
+ 	DEF_MOD("intc-ex",		 407,	R8A7795_CLK_CP),
+ 	DEF_MOD("intc-ap",		 408,	R8A7795_CLK_S0D3),
+-	DEF_MOD("audmac1",		 501,	R8A7795_CLK_S0D3),
+-	DEF_MOD("audmac0",		 502,	R8A7795_CLK_S0D3),
++	DEF_MOD("audmac1",		 501,	R8A7795_CLK_S1D2),
++	DEF_MOD("audmac0",		 502,	R8A7795_CLK_S1D2),
+ 	DEF_MOD("drif7",		 508,	R8A7795_CLK_S3D2),
+ 	DEF_MOD("drif6",		 509,	R8A7795_CLK_S3D2),
+ 	DEF_MOD("drif5",		 510,	R8A7795_CLK_S3D2),
+diff --git a/drivers/clk/renesas/r8a7796-cpg-mssr.c b/drivers/clk/renesas/r8a7796-cpg-mssr.c
+index 12c455859f2c..997cd956f12b 100644
+--- a/drivers/clk/renesas/r8a7796-cpg-mssr.c
++++ b/drivers/clk/renesas/r8a7796-cpg-mssr.c
+@@ -126,8 +126,8 @@ static const struct mssr_mod_clk r8a7796_mod_clks[] __initconst = {
+ 	DEF_MOD("msiof2",		 209,	R8A7796_CLK_MSO),
+ 	DEF_MOD("msiof1",		 210,	R8A7796_CLK_MSO),
+ 	DEF_MOD("msiof0",		 211,	R8A7796_CLK_MSO),
+-	DEF_MOD("sys-dmac2",		 217,	R8A7796_CLK_S0D3),
+-	DEF_MOD("sys-dmac1",		 218,	R8A7796_CLK_S0D3),
++	DEF_MOD("sys-dmac2",		 217,	R8A7796_CLK_S3D1),
++	DEF_MOD("sys-dmac1",		 218,	R8A7796_CLK_S3D1),
+ 	DEF_MOD("sys-dmac0",		 219,	R8A7796_CLK_S0D3),
+ 	DEF_MOD("cmt3",			 300,	R8A7796_CLK_R),
+ 	DEF_MOD("cmt2",			 301,	R8A7796_CLK_R),
+@@ -146,8 +146,8 @@ static const struct mssr_mod_clk r8a7796_mod_clks[] __initconst = {
+ 	DEF_MOD("rwdt",			 402,	R8A7796_CLK_R),
+ 	DEF_MOD("intc-ex",		 407,	R8A7796_CLK_CP),
+ 	DEF_MOD("intc-ap",		 408,	R8A7796_CLK_S0D3),
+-	DEF_MOD("audmac1",		 501,	R8A7796_CLK_S0D3),
+-	DEF_MOD("audmac0",		 502,	R8A7796_CLK_S0D3),
++	DEF_MOD("audmac1",		 501,	R8A7796_CLK_S1D2),
++	DEF_MOD("audmac0",		 502,	R8A7796_CLK_S1D2),
+ 	DEF_MOD("drif7",		 508,	R8A7796_CLK_S3D2),
+ 	DEF_MOD("drif6",		 509,	R8A7796_CLK_S3D2),
+ 	DEF_MOD("drif5",		 510,	R8A7796_CLK_S3D2),
+diff --git a/drivers/clk/renesas/r8a77965-cpg-mssr.c b/drivers/clk/renesas/r8a77965-cpg-mssr.c
+index eb1cca58a1e1..afc9c72fa094 100644
+--- a/drivers/clk/renesas/r8a77965-cpg-mssr.c
++++ b/drivers/clk/renesas/r8a77965-cpg-mssr.c
+@@ -123,8 +123,8 @@ static const struct mssr_mod_clk r8a77965_mod_clks[] __initconst = {
+ 	DEF_MOD("msiof2",		209,	R8A77965_CLK_MSO),
+ 	DEF_MOD("msiof1",		210,	R8A77965_CLK_MSO),
+ 	DEF_MOD("msiof0",		211,	R8A77965_CLK_MSO),
+-	DEF_MOD("sys-dmac2",		217,	R8A77965_CLK_S0D3),
+-	DEF_MOD("sys-dmac1",		218,	R8A77965_CLK_S0D3),
++	DEF_MOD("sys-dmac2",		217,	R8A77965_CLK_S3D1),
++	DEF_MOD("sys-dmac1",		218,	R8A77965_CLK_S3D1),
+ 	DEF_MOD("sys-dmac0",		219,	R8A77965_CLK_S0D3),
+ 
+ 	DEF_MOD("cmt3",			300,	R8A77965_CLK_R),
+@@ -146,8 +146,8 @@ static const struct mssr_mod_clk r8a77965_mod_clks[] __initconst = {
+ 	DEF_MOD("intc-ex",		407,	R8A77965_CLK_CP),
+ 	DEF_MOD("intc-ap",		408,	R8A77965_CLK_S0D3),
+ 
+-	DEF_MOD("audmac1",		501,	R8A77965_CLK_S0D3),
+-	DEF_MOD("audmac0",		502,	R8A77965_CLK_S0D3),
++	DEF_MOD("audmac1",		501,	R8A77965_CLK_S1D2),
++	DEF_MOD("audmac0",		502,	R8A77965_CLK_S1D2),
+ 	DEF_MOD("drif7",		508,	R8A77965_CLK_S3D2),
+ 	DEF_MOD("drif6",		509,	R8A77965_CLK_S3D2),
+ 	DEF_MOD("drif5",		510,	R8A77965_CLK_S3D2),
+diff --git a/drivers/clk/renesas/r8a77990-cpg-mssr.c b/drivers/clk/renesas/r8a77990-cpg-mssr.c
+index 9a278c75c918..03f445d47ef6 100644
+--- a/drivers/clk/renesas/r8a77990-cpg-mssr.c
++++ b/drivers/clk/renesas/r8a77990-cpg-mssr.c
+@@ -152,7 +152,7 @@ static const struct mssr_mod_clk r8a77990_mod_clks[] __initconst = {
+ 	DEF_MOD("intc-ex",		 407,	R8A77990_CLK_CP),
+ 	DEF_MOD("intc-ap",		 408,	R8A77990_CLK_S0D3),
+ 
+-	DEF_MOD("audmac0",		 502,	R8A77990_CLK_S3D4),
++	DEF_MOD("audmac0",		 502,	R8A77990_CLK_S1D2),
+ 	DEF_MOD("drif7",		 508,	R8A77990_CLK_S3D2),
+ 	DEF_MOD("drif6",		 509,	R8A77990_CLK_S3D2),
+ 	DEF_MOD("drif5",		 510,	R8A77990_CLK_S3D2),
+diff --git a/drivers/clk/renesas/r8a77995-cpg-mssr.c b/drivers/clk/renesas/r8a77995-cpg-mssr.c
+index eee3874865a9..68707277b17b 100644
+--- a/drivers/clk/renesas/r8a77995-cpg-mssr.c
++++ b/drivers/clk/renesas/r8a77995-cpg-mssr.c
+@@ -133,7 +133,7 @@ static const struct mssr_mod_clk r8a77995_mod_clks[] __initconst = {
+ 	DEF_MOD("rwdt",			 402,	R8A77995_CLK_R),
+ 	DEF_MOD("intc-ex",		 407,	R8A77995_CLK_CP),
+ 	DEF_MOD("intc-ap",		 408,	R8A77995_CLK_S1D2),
+-	DEF_MOD("audmac0",		 502,	R8A77995_CLK_S3D1),
++	DEF_MOD("audmac0",		 502,	R8A77995_CLK_S1D2),
+ 	DEF_MOD("hscif3",		 517,	R8A77995_CLK_S3D1C),
+ 	DEF_MOD("hscif0",		 520,	R8A77995_CLK_S3D1C),
+ 	DEF_MOD("thermal",		 522,	R8A77995_CLK_CP),
+diff --git a/drivers/clk/rockchip/clk-rk3288.c b/drivers/clk/rockchip/clk-rk3288.c
+index 5a67b7869960..355d6a3611db 100644
+--- a/drivers/clk/rockchip/clk-rk3288.c
++++ b/drivers/clk/rockchip/clk-rk3288.c
+@@ -219,7 +219,7 @@ PNAME(mux_hsadcout_p)	= { "hsadc_src", "ext_hsadc" };
+ PNAME(mux_edp_24m_p)	= { "ext_edp_24m", "xin24m" };
+ PNAME(mux_tspout_p)	= { "cpll", "gpll", "npll", "xin27m" };
+ 
+-PNAME(mux_aclk_vcodec_pre_p)	= { "aclk_vepu", "aclk_vdpu" };
++PNAME(mux_aclk_vcodec_pre_p)	= { "aclk_vdpu", "aclk_vepu" };
+ PNAME(mux_usbphy480m_p)		= { "sclk_otgphy1_480m", "sclk_otgphy2_480m",
+ 				    "sclk_otgphy0_480m" };
+ PNAME(mux_hsicphy480m_p)	= { "cpll", "gpll", "usbphy480m_src" };
+@@ -313,13 +313,13 @@ static struct rockchip_clk_branch rk3288_clk_branches[] __initdata = {
+ 	COMPOSITE_NOMUX(0, "aclk_core_mp", "armclk", CLK_IGNORE_UNUSED,
+ 			RK3288_CLKSEL_CON(0), 4, 4, DFLAGS | CLK_DIVIDER_READ_ONLY,
+ 			RK3288_CLKGATE_CON(12), 6, GFLAGS),
+-	COMPOSITE_NOMUX(0, "atclk", "armclk", CLK_IGNORE_UNUSED,
++	COMPOSITE_NOMUX(0, "atclk", "armclk", 0,
+ 			RK3288_CLKSEL_CON(37), 4, 5, DFLAGS | CLK_DIVIDER_READ_ONLY,
+ 			RK3288_CLKGATE_CON(12), 7, GFLAGS),
+ 	COMPOSITE_NOMUX(0, "pclk_dbg_pre", "armclk", CLK_IGNORE_UNUSED,
+ 			RK3288_CLKSEL_CON(37), 9, 5, DFLAGS | CLK_DIVIDER_READ_ONLY,
+ 			RK3288_CLKGATE_CON(12), 8, GFLAGS),
+-	GATE(0, "pclk_dbg", "pclk_dbg_pre", CLK_IGNORE_UNUSED,
++	GATE(0, "pclk_dbg", "pclk_dbg_pre", 0,
+ 			RK3288_CLKGATE_CON(12), 9, GFLAGS),
+ 	GATE(0, "cs_dbg", "pclk_dbg_pre", CLK_IGNORE_UNUSED,
+ 			RK3288_CLKGATE_CON(12), 10, GFLAGS),
+@@ -420,7 +420,7 @@ static struct rockchip_clk_branch rk3288_clk_branches[] __initdata = {
+ 	COMPOSITE(0, "aclk_vdpu", mux_pll_src_cpll_gpll_usb480m_p, 0,
+ 			RK3288_CLKSEL_CON(32), 14, 2, MFLAGS, 8, 5, DFLAGS,
+ 			RK3288_CLKGATE_CON(3), 11, GFLAGS),
+-	MUXGRF(0, "aclk_vcodec_pre", mux_aclk_vcodec_pre_p, 0,
++	MUXGRF(0, "aclk_vcodec_pre", mux_aclk_vcodec_pre_p, CLK_SET_RATE_PARENT,
+ 			RK3288_GRF_SOC_CON(0), 7, 1, MFLAGS),
+ 	GATE(ACLK_VCODEC, "aclk_vcodec", "aclk_vcodec_pre", 0,
+ 		RK3288_CLKGATE_CON(9), 0, GFLAGS),
+@@ -647,7 +647,7 @@ static struct rockchip_clk_branch rk3288_clk_branches[] __initdata = {
+ 	INVERTER(SCLK_HSADC, "sclk_hsadc", "sclk_hsadc_out",
+ 			RK3288_CLKSEL_CON(22), 7, IFLAGS),
+ 
+-	GATE(0, "jtag", "ext_jtag", CLK_IGNORE_UNUSED,
++	GATE(0, "jtag", "ext_jtag", 0,
+ 			RK3288_CLKGATE_CON(4), 14, GFLAGS),
+ 
+ 	COMPOSITE_NODIV(SCLK_USBPHY480M_SRC, "usbphy480m_src", mux_usbphy480m_p, 0,
+@@ -656,7 +656,7 @@ static struct rockchip_clk_branch rk3288_clk_branches[] __initdata = {
+ 	COMPOSITE_NODIV(SCLK_HSICPHY480M, "sclk_hsicphy480m", mux_hsicphy480m_p, 0,
+ 			RK3288_CLKSEL_CON(29), 0, 2, MFLAGS,
+ 			RK3288_CLKGATE_CON(3), 6, GFLAGS),
+-	GATE(0, "hsicphy12m_xin12m", "xin12m", CLK_IGNORE_UNUSED,
++	GATE(0, "hsicphy12m_xin12m", "xin12m", 0,
+ 			RK3288_CLKGATE_CON(13), 9, GFLAGS),
+ 	DIV(0, "hsicphy12m_usbphy", "sclk_hsicphy480m", 0,
+ 			RK3288_CLKSEL_CON(11), 8, 6, DFLAGS),
+@@ -697,7 +697,7 @@ static struct rockchip_clk_branch rk3288_clk_branches[] __initdata = {
+ 	GATE(PCLK_TZPC, "pclk_tzpc", "pclk_cpu", 0, RK3288_CLKGATE_CON(11), 3, GFLAGS),
+ 	GATE(PCLK_UART2, "pclk_uart2", "pclk_cpu", 0, RK3288_CLKGATE_CON(11), 9, GFLAGS),
+ 	GATE(PCLK_EFUSE256, "pclk_efuse_256", "pclk_cpu", 0, RK3288_CLKGATE_CON(11), 10, GFLAGS),
+-	GATE(PCLK_RKPWM, "pclk_rkpwm", "pclk_cpu", CLK_IGNORE_UNUSED, RK3288_CLKGATE_CON(11), 11, GFLAGS),
++	GATE(PCLK_RKPWM, "pclk_rkpwm", "pclk_cpu", 0, RK3288_CLKGATE_CON(11), 11, GFLAGS),
+ 
+ 	/* ddrctrl [DDR Controller PHY clock] gates */
+ 	GATE(0, "nclk_ddrupctl0", "ddrphy", CLK_IGNORE_UNUSED, RK3288_CLKGATE_CON(11), 4, GFLAGS),
+@@ -837,12 +837,9 @@ static const char *const rk3288_critical_clocks[] __initconst = {
+ 	"pclk_alive_niu",
+ 	"pclk_pd_pmu",
+ 	"pclk_pmu_niu",
+-	"pclk_core_niu",
+-	"pclk_ddrupctl0",
+-	"pclk_publ0",
+-	"pclk_ddrupctl1",
+-	"pclk_publ1",
+ 	"pmu_hclk_otg0",
++	/* pwm-regulators on some boards, so handoff-critical later */
++	"pclk_rkpwm",
+ };
+ 
+ static void __iomem *rk3288_cru_base;
+diff --git a/drivers/clk/zynqmp/divider.c b/drivers/clk/zynqmp/divider.c
+index a371c66e72ef..bd9b5fbc443b 100644
+--- a/drivers/clk/zynqmp/divider.c
++++ b/drivers/clk/zynqmp/divider.c
+@@ -31,12 +31,14 @@
+  * struct zynqmp_clk_divider - adjustable divider clock
+  * @hw:		handle between common and hardware-specific interfaces
+  * @flags:	Hardware specific flags
++ * @is_frac:	The divider is a fractional divider
+  * @clk_id:	Id of clock
+  * @div_type:	divisor type (TYPE_DIV1 or TYPE_DIV2)
+  */
+ struct zynqmp_clk_divider {
+ 	struct clk_hw hw;
+ 	u8 flags;
++	bool is_frac;
+ 	u32 clk_id;
+ 	u32 div_type;
+ };
+@@ -116,8 +118,7 @@ static long zynqmp_clk_divider_round_rate(struct clk_hw *hw,
+ 
+ 	bestdiv = zynqmp_divider_get_val(*prate, rate);
+ 
+-	if ((clk_hw_get_flags(hw) & CLK_SET_RATE_PARENT) &&
+-	    (divider->flags & CLK_FRAC))
++	if ((clk_hw_get_flags(hw) & CLK_SET_RATE_PARENT) && divider->is_frac)
+ 		bestdiv = rate % *prate ? 1 : bestdiv;
+ 	*prate = rate * bestdiv;
+ 
+@@ -195,11 +196,13 @@ struct clk_hw *zynqmp_clk_register_divider(const char *name,
+ 
+ 	init.name = name;
+ 	init.ops = &zynqmp_clk_divider_ops;
+-	init.flags = nodes->flag;
++	/* CLK_FRAC is not defined in the common clk framework */
++	init.flags = nodes->flag & ~CLK_FRAC;
+ 	init.parent_names = parents;
+ 	init.num_parents = 1;
+ 
+ 	/* struct clk_divider assignments */
++	div->is_frac = !!(nodes->flag & CLK_FRAC);
+ 	div->flags = nodes->type_flag;
+ 	div->hw.init = &init;
+ 	div->clk_id = clk_id;
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index ef0e33e21b98..97b094963253 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -1103,6 +1103,7 @@ static struct cpufreq_policy *cpufreq_policy_alloc(unsigned int cpu)
+ 				   cpufreq_global_kobject, "policy%u", cpu);
+ 	if (ret) {
+ 		pr_err("%s: failed to init policy->kobj: %d\n", __func__, ret);
++		kobject_put(&policy->kobj);
+ 		goto err_free_real_cpus;
+ 	}
+ 
+diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
+index ffa9adeaba31..9d1d9bf02710 100644
+--- a/drivers/cpufreq/cpufreq_governor.c
++++ b/drivers/cpufreq/cpufreq_governor.c
+@@ -459,6 +459,8 @@ int cpufreq_dbs_governor_init(struct cpufreq_policy *policy)
+ 	/* Failure, so roll back. */
+ 	pr_err("initialization failed (dbs_data kobject init error %d)\n", ret);
+ 
++	kobject_put(&dbs_data->attr_set.kobj);
++
+ 	policy->governor_data = NULL;
+ 
+ 	if (!have_governor_per_policy())
+diff --git a/drivers/cpufreq/imx6q-cpufreq.c b/drivers/cpufreq/imx6q-cpufreq.c
+index 9fedf627e000..3ee55aee5d71 100644
+--- a/drivers/cpufreq/imx6q-cpufreq.c
++++ b/drivers/cpufreq/imx6q-cpufreq.c
+@@ -407,11 +407,11 @@ static int imx6q_cpufreq_probe(struct platform_device *pdev)
+ 		ret = imx6ul_opp_check_speed_grading(cpu_dev);
+ 		if (ret) {
+ 			if (ret == -EPROBE_DEFER)
+-				return ret;
++				goto put_node;
+ 
+ 			dev_err(cpu_dev, "failed to read ocotp: %d\n",
+ 				ret);
+-			return ret;
++			goto put_node;
+ 		}
+ 	} else {
+ 		imx6q_opp_check_speed_grading(cpu_dev);
+diff --git a/drivers/cpufreq/kirkwood-cpufreq.c b/drivers/cpufreq/kirkwood-cpufreq.c
+index c2dd43f3f5d8..8d63a6dc8383 100644
+--- a/drivers/cpufreq/kirkwood-cpufreq.c
++++ b/drivers/cpufreq/kirkwood-cpufreq.c
+@@ -124,13 +124,14 @@ static int kirkwood_cpufreq_probe(struct platform_device *pdev)
+ 	priv.cpu_clk = of_clk_get_by_name(np, "cpu_clk");
+ 	if (IS_ERR(priv.cpu_clk)) {
+ 		dev_err(priv.dev, "Unable to get cpuclk\n");
+-		return PTR_ERR(priv.cpu_clk);
++		err = PTR_ERR(priv.cpu_clk);
++		goto out_node;
+ 	}
+ 
+ 	err = clk_prepare_enable(priv.cpu_clk);
+ 	if (err) {
+ 		dev_err(priv.dev, "Unable to prepare cpuclk\n");
+-		return err;
++		goto out_node;
+ 	}
+ 
+ 	kirkwood_freq_table[0].frequency = clk_get_rate(priv.cpu_clk) / 1000;
+@@ -161,20 +162,22 @@ static int kirkwood_cpufreq_probe(struct platform_device *pdev)
+ 		goto out_ddr;
+ 	}
+ 
+-	of_node_put(np);
+-	np = NULL;
+-
+ 	err = cpufreq_register_driver(&kirkwood_cpufreq_driver);
+-	if (!err)
+-		return 0;
++	if (err) {
++		dev_err(priv.dev, "Failed to register cpufreq driver\n");
++		goto out_powersave;
++	}
+ 
+-	dev_err(priv.dev, "Failed to register cpufreq driver\n");
++	of_node_put(np);
++	return 0;
+ 
++out_powersave:
+ 	clk_disable_unprepare(priv.powersave_clk);
+ out_ddr:
+ 	clk_disable_unprepare(priv.ddr_clk);
+ out_cpu:
+ 	clk_disable_unprepare(priv.cpu_clk);
++out_node:
+ 	of_node_put(np);
+ 
+ 	return err;
+diff --git a/drivers/cpufreq/pasemi-cpufreq.c b/drivers/cpufreq/pasemi-cpufreq.c
+index 75dfbd2a58ea..c7710c149de8 100644
+--- a/drivers/cpufreq/pasemi-cpufreq.c
++++ b/drivers/cpufreq/pasemi-cpufreq.c
+@@ -146,6 +146,7 @@ static int pas_cpufreq_cpu_init(struct cpufreq_policy *policy)
+ 
+ 	cpu = of_get_cpu_node(policy->cpu, NULL);
+ 
++	of_node_put(cpu);
+ 	if (!cpu)
+ 		goto out;
+ 
+diff --git a/drivers/cpufreq/pmac32-cpufreq.c b/drivers/cpufreq/pmac32-cpufreq.c
+index 52f0d91d30c1..9b4ce2eb8222 100644
+--- a/drivers/cpufreq/pmac32-cpufreq.c
++++ b/drivers/cpufreq/pmac32-cpufreq.c
+@@ -552,6 +552,7 @@ static int pmac_cpufreq_init_7447A(struct device_node *cpunode)
+ 	volt_gpio_np = of_find_node_by_name(NULL, "cpu-vcore-select");
+ 	if (volt_gpio_np)
+ 		voltage_gpio = read_gpio(volt_gpio_np);
++	of_node_put(volt_gpio_np);
+ 	if (!voltage_gpio){
+ 		pr_err("missing cpu-vcore-select gpio\n");
+ 		return 1;
+@@ -588,6 +589,7 @@ static int pmac_cpufreq_init_750FX(struct device_node *cpunode)
+ 	if (volt_gpio_np)
+ 		voltage_gpio = read_gpio(volt_gpio_np);
+ 
++	of_node_put(volt_gpio_np);
+ 	pvr = mfspr(SPRN_PVR);
+ 	has_cpu_l2lve = !((pvr & 0xf00) == 0x100);
+ 
+diff --git a/drivers/cpufreq/ppc_cbe_cpufreq.c b/drivers/cpufreq/ppc_cbe_cpufreq.c
+index 41a0f0be3f9f..8414c3a4ea08 100644
+--- a/drivers/cpufreq/ppc_cbe_cpufreq.c
++++ b/drivers/cpufreq/ppc_cbe_cpufreq.c
+@@ -86,6 +86,7 @@ static int cbe_cpufreq_cpu_init(struct cpufreq_policy *policy)
+ 	if (!cbe_get_cpu_pmd_regs(policy->cpu) ||
+ 	    !cbe_get_cpu_mic_tm_regs(policy->cpu)) {
+ 		pr_info("invalid CBE regs pointers for cpufreq\n");
++		of_node_put(cpu);
+ 		return -EINVAL;
+ 	}
+ 
+diff --git a/drivers/crypto/sunxi-ss/sun4i-ss-hash.c b/drivers/crypto/sunxi-ss/sun4i-ss-hash.c
+index a4b5ff2b72f8..f6936bb3b7be 100644
+--- a/drivers/crypto/sunxi-ss/sun4i-ss-hash.c
++++ b/drivers/crypto/sunxi-ss/sun4i-ss-hash.c
+@@ -240,7 +240,10 @@ static int sun4i_hash(struct ahash_request *areq)
+ 		}
+ 	} else {
+ 		/* Since we have the flag final, we can go up to modulo 4 */
+-		end = ((areq->nbytes + op->len) / 4) * 4 - op->len;
++		if (areq->nbytes < 4)
++			end = 0;
++		else
++			end = ((areq->nbytes + op->len) / 4) * 4 - op->len;
+ 	}
+ 
+ 	/* TODO if SGlen % 4 and !op->len then DMA */
+diff --git a/drivers/crypto/vmx/aesp8-ppc.pl b/drivers/crypto/vmx/aesp8-ppc.pl
+index de78282b8f44..9c6b5c1d6a1a 100644
+--- a/drivers/crypto/vmx/aesp8-ppc.pl
++++ b/drivers/crypto/vmx/aesp8-ppc.pl
+@@ -1357,7 +1357,7 @@ Loop_ctr32_enc:
+ 	addi		$idx,$idx,16
+ 	bdnz		Loop_ctr32_enc
+ 
+-	vadduwm		$ivec,$ivec,$one
++	vadduqm		$ivec,$ivec,$one
+ 	 vmr		$dat,$inptail
+ 	 lvx		$inptail,0,$inp
+ 	 addi		$inp,$inp,16
+diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
+index 0ae3de76833b..839621b044f4 100644
+--- a/drivers/devfreq/devfreq.c
++++ b/drivers/devfreq/devfreq.c
+@@ -228,7 +228,7 @@ static struct devfreq_governor *find_devfreq_governor(const char *name)
+  * if is not found. This can happen when both drivers (the governor driver
+  * and the driver that call devfreq_add_device) are built as modules.
+  * devfreq_list_lock should be held by the caller. Returns the matched
+- * governor's pointer.
++ * governor's pointer or an error pointer.
+  */
+ static struct devfreq_governor *try_then_request_governor(const char *name)
+ {
+@@ -254,7 +254,7 @@ static struct devfreq_governor *try_then_request_governor(const char *name)
+ 		/* Restore previous state before return */
+ 		mutex_lock(&devfreq_list_lock);
+ 		if (err)
+-			return NULL;
++			return ERR_PTR(err);
+ 
+ 		governor = find_devfreq_governor(name);
+ 	}
+diff --git a/drivers/dma/at_xdmac.c b/drivers/dma/at_xdmac.c
+index fe69dccfa0c0..37a269420435 100644
+--- a/drivers/dma/at_xdmac.c
++++ b/drivers/dma/at_xdmac.c
+@@ -1606,7 +1606,11 @@ static void at_xdmac_tasklet(unsigned long data)
+ 					struct at_xdmac_desc,
+ 					xfer_node);
+ 		dev_vdbg(chan2dev(&atchan->chan), "%s: desc 0x%p\n", __func__, desc);
+-		BUG_ON(!desc->active_xfer);
++		if (!desc->active_xfer) {
++			dev_err(chan2dev(&atchan->chan), "Xfer not active: exiting");
++			spin_unlock_bh(&atchan->lock);
++			return;
++		}
+ 
+ 		txd = &desc->tx_dma_desc;
+ 
+diff --git a/drivers/dma/pl330.c b/drivers/dma/pl330.c
+index cff1b143fff5..9b7a49fc7697 100644
+--- a/drivers/dma/pl330.c
++++ b/drivers/dma/pl330.c
+@@ -966,6 +966,7 @@ static void _stop(struct pl330_thread *thrd)
+ {
+ 	void __iomem *regs = thrd->dmac->base;
+ 	u8 insn[6] = {0, 0, 0, 0, 0, 0};
++	u32 inten = readl(regs + INTEN);
+ 
+ 	if (_state(thrd) == PL330_STATE_FAULT_COMPLETING)
+ 		UNTIL(thrd, PL330_STATE_FAULTING | PL330_STATE_KILLING);
+@@ -978,10 +979,13 @@ static void _stop(struct pl330_thread *thrd)
+ 
+ 	_emit_KILL(0, insn);
+ 
+-	/* Stop generating interrupts for SEV */
+-	writel(readl(regs + INTEN) & ~(1 << thrd->ev), regs + INTEN);
+-
+ 	_execute_DBGINSN(thrd, insn, is_manager(thrd));
++
++	/* clear the event */
++	if (inten & (1 << thrd->ev))
++		writel(1 << thrd->ev, regs + INTCLR);
++	/* Stop generating interrupts for SEV */
++	writel(inten & ~(1 << thrd->ev), regs + INTEN);
+ }
+ 
+ /* Start doing req 'idx' of thread 'thrd' */
+diff --git a/drivers/dma/tegra210-adma.c b/drivers/dma/tegra210-adma.c
+index b26256f23d67..09b6756366c3 100644
+--- a/drivers/dma/tegra210-adma.c
++++ b/drivers/dma/tegra210-adma.c
+@@ -22,7 +22,6 @@
+ #include <linux/of_device.h>
+ #include <linux/of_dma.h>
+ #include <linux/of_irq.h>
+-#include <linux/pm_clock.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/slab.h>
+ 
+@@ -141,6 +140,7 @@ struct tegra_adma {
+ 	struct dma_device		dma_dev;
+ 	struct device			*dev;
+ 	void __iomem			*base_addr;
++	struct clk			*ahub_clk;
+ 	unsigned int			nr_channels;
+ 	unsigned long			rx_requests_reserved;
+ 	unsigned long			tx_requests_reserved;
+@@ -637,8 +637,9 @@ static int tegra_adma_runtime_suspend(struct device *dev)
+ 	struct tegra_adma *tdma = dev_get_drvdata(dev);
+ 
+ 	tdma->global_cmd = tdma_read(tdma, ADMA_GLOBAL_CMD);
++	clk_disable_unprepare(tdma->ahub_clk);
+ 
+-	return pm_clk_suspend(dev);
++	return 0;
+ }
+ 
+ static int tegra_adma_runtime_resume(struct device *dev)
+@@ -646,10 +647,11 @@ static int tegra_adma_runtime_resume(struct device *dev)
+ 	struct tegra_adma *tdma = dev_get_drvdata(dev);
+ 	int ret;
+ 
+-	ret = pm_clk_resume(dev);
+-	if (ret)
++	ret = clk_prepare_enable(tdma->ahub_clk);
++	if (ret) {
++		dev_err(dev, "ahub clk_enable failed: %d\n", ret);
+ 		return ret;
+-
++	}
+ 	tdma_write(tdma, ADMA_GLOBAL_CMD, tdma->global_cmd);
+ 
+ 	return 0;
+@@ -692,13 +694,11 @@ static int tegra_adma_probe(struct platform_device *pdev)
+ 	if (IS_ERR(tdma->base_addr))
+ 		return PTR_ERR(tdma->base_addr);
+ 
+-	ret = pm_clk_create(&pdev->dev);
+-	if (ret)
+-		return ret;
+-
+-	ret = of_pm_clk_add_clk(&pdev->dev, "d_audio");
+-	if (ret)
+-		goto clk_destroy;
++	tdma->ahub_clk = devm_clk_get(&pdev->dev, "d_audio");
++	if (IS_ERR(tdma->ahub_clk)) {
++		dev_err(&pdev->dev, "Error: Missing ahub controller clock\n");
++		return PTR_ERR(tdma->ahub_clk);
++	}
+ 
+ 	pm_runtime_enable(&pdev->dev);
+ 
+@@ -775,8 +775,6 @@ rpm_put:
+ 	pm_runtime_put_sync(&pdev->dev);
+ rpm_disable:
+ 	pm_runtime_disable(&pdev->dev);
+-clk_destroy:
+-	pm_clk_destroy(&pdev->dev);
+ 
+ 	return ret;
+ }
+@@ -786,6 +784,7 @@ static int tegra_adma_remove(struct platform_device *pdev)
+ 	struct tegra_adma *tdma = platform_get_drvdata(pdev);
+ 	int i;
+ 
++	of_dma_controller_free(pdev->dev.of_node);
+ 	dma_async_device_unregister(&tdma->dma_dev);
+ 
+ 	for (i = 0; i < tdma->nr_channels; ++i)
+@@ -793,7 +792,6 @@ static int tegra_adma_remove(struct platform_device *pdev)
+ 
+ 	pm_runtime_put_sync(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
+-	pm_clk_destroy(&pdev->dev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/extcon/extcon-arizona.c b/drivers/extcon/extcon-arizona.c
+index da0e9bc4262f..9327479c719c 100644
+--- a/drivers/extcon/extcon-arizona.c
++++ b/drivers/extcon/extcon-arizona.c
+@@ -1726,6 +1726,16 @@ static int arizona_extcon_remove(struct platform_device *pdev)
+ 	struct arizona_extcon_info *info = platform_get_drvdata(pdev);
+ 	struct arizona *arizona = info->arizona;
+ 	int jack_irq_rise, jack_irq_fall;
++	bool change;
++
++	regmap_update_bits_check(arizona->regmap, ARIZONA_MIC_DETECT_1,
++				 ARIZONA_MICD_ENA, 0,
++				 &change);
++
++	if (change) {
++		regulator_disable(info->micvdd);
++		pm_runtime_put(info->dev);
++	}
+ 
+ 	gpiod_put(info->micd_pol_gpio);
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+index ee47c11e92ce..4dee2326b29c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+@@ -136,8 +136,9 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f,
+ {
+ 	struct amdgpu_device *adev = ring->adev;
+ 	struct amdgpu_fence *fence;
+-	struct dma_fence *old, **ptr;
++	struct dma_fence __rcu **ptr;
+ 	uint32_t seq;
++	int r;
+ 
+ 	fence = kmem_cache_alloc(amdgpu_fence_slab, GFP_KERNEL);
+ 	if (fence == NULL)
+@@ -153,15 +154,24 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f,
+ 			       seq, flags | AMDGPU_FENCE_FLAG_INT);
+ 
+ 	ptr = &ring->fence_drv.fences[seq & ring->fence_drv.num_fences_mask];
++	if (unlikely(rcu_dereference_protected(*ptr, 1))) {
++		struct dma_fence *old;
++
++		rcu_read_lock();
++		old = dma_fence_get_rcu_safe(ptr);
++		rcu_read_unlock();
++
++		if (old) {
++			r = dma_fence_wait(old, false);
++			dma_fence_put(old);
++			if (r)
++				return r;
++		}
++	}
++
+ 	/* This function can't be called concurrently anyway, otherwise
+ 	 * emitting the fence would mess up the hardware ring buffer.
+ 	 */
+-	old = rcu_dereference_protected(*ptr, 1);
+-	if (old && !dma_fence_is_signaled(old)) {
+-		DRM_INFO("rcu slot is busy\n");
+-		dma_fence_wait(old, false);
+-	}
+-
+ 	rcu_assign_pointer(*ptr, dma_fence_get(&fence->base));
+ 
+ 	*f = &fence->base;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 84ee77786944..864c2faf672b 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -3519,6 +3519,8 @@ static void dm_drm_plane_reset(struct drm_plane *plane)
+ 		plane->state = &amdgpu_state->base;
+ 		plane->state->plane = plane;
+ 		plane->state->rotation = DRM_MODE_ROTATE_0;
++		plane->state->alpha = DRM_BLEND_ALPHA_OPAQUE;
++		plane->state->pixel_blend_mode = DRM_MODE_BLEND_PREMULTI;
+ 	}
+ }
+ 
+@@ -4976,8 +4978,7 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
+ static void amdgpu_dm_crtc_copy_transient_flags(struct drm_crtc_state *crtc_state,
+ 						struct dc_stream_state *stream_state)
+ {
+-	stream_state->mode_changed =
+-		crtc_state->mode_changed || crtc_state->active_changed;
++	stream_state->mode_changed = drm_atomic_crtc_needs_modeset(crtc_state);
+ }
+ 
+ static int amdgpu_dm_atomic_commit(struct drm_device *dev,
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 5af2ea1f201d..c0db7788c464 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -524,6 +524,14 @@ void dc_link_set_preferred_link_settings(struct dc *dc,
+ 	struct dc_stream_state *link_stream;
+ 	struct dc_link_settings store_settings = *link_setting;
+ 
++	link->preferred_link_setting = store_settings;
++
++	/* Retrain with preferred link settings only relevant for
++	 * DP signal type
++	 */
++	if (!dc_is_dp_signal(link->connector_signal))
++		return;
++
+ 	for (i = 0; i < MAX_PIPES; i++) {
+ 		pipe = &dc->current_state->res_ctx.pipe_ctx[i];
+ 		if (pipe->stream && pipe->stream->sink
+@@ -539,7 +547,10 @@ void dc_link_set_preferred_link_settings(struct dc *dc,
+ 
+ 	link_stream = link->dc->current_state->res_ctx.pipe_ctx[i].stream;
+ 
+-	link->preferred_link_setting = store_settings;
++	/* Cannot retrain link if backend is off */
++	if (link_stream->dpms_off)
++		return;
++
+ 	if (link_stream)
+ 		decide_link_settings(link_stream, &store_settings);
+ 
+@@ -1500,6 +1511,7 @@ static void commit_planes_do_stream_update(struct dc *dc,
+ 				continue;
+ 
+ 			if (stream_update->dpms_off) {
++				dc->hwss.pipe_control_lock(dc, pipe_ctx, true);
+ 				if (*stream_update->dpms_off) {
+ 					core_link_disable_stream(pipe_ctx, KEEP_ACQUIRED_RESOURCE);
+ 					dc->hwss.optimize_bandwidth(dc, dc->current_state);
+@@ -1507,6 +1519,7 @@ static void commit_planes_do_stream_update(struct dc *dc,
+ 					dc->hwss.prepare_bandwidth(dc, dc->current_state);
+ 					core_link_enable_stream(dc->current_state, pipe_ctx);
+ 				}
++				dc->hwss.pipe_control_lock(dc, pipe_ctx, false);
+ 			}
+ 
+ 			if (stream_update->abm_level && pipe_ctx->stream_res.abm) {
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+index 76137df74a53..c6aa80d7e639 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+@@ -1266,10 +1266,12 @@ bool dc_remove_plane_from_context(
+ 			 * For head pipe detach surfaces from pipe for tail
+ 			 * pipe just zero it out
+ 			 */
+-			if (!pipe_ctx->top_pipe) {
++			if (!pipe_ctx->top_pipe ||
++				(!pipe_ctx->top_pipe->top_pipe &&
++					pipe_ctx->top_pipe->stream_res.opp != pipe_ctx->stream_res.opp)) {
+ 				pipe_ctx->plane_state = NULL;
+ 				pipe_ctx->bottom_pipe = NULL;
+-			} else  {
++			} else {
+ 				memset(pipe_ctx, 0, sizeof(*pipe_ctx));
+ 			}
+ 		}
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_dscl.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_dscl.c
+index 4a863a5dab41..321af9af95e8 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_dscl.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp_dscl.c
+@@ -406,15 +406,25 @@ void dpp1_dscl_calc_lb_num_partitions(
+ 		int *num_part_y,
+ 		int *num_part_c)
+ {
++	int lb_memory_size, lb_memory_size_c, lb_memory_size_a, num_partitions_a,
++	lb_bpc, memory_line_size_y, memory_line_size_c, memory_line_size_a;
++
+ 	int line_size = scl_data->viewport.width < scl_data->recout.width ?
+ 			scl_data->viewport.width : scl_data->recout.width;
+ 	int line_size_c = scl_data->viewport_c.width < scl_data->recout.width ?
+ 			scl_data->viewport_c.width : scl_data->recout.width;
+-	int lb_bpc = dpp1_dscl_get_lb_depth_bpc(scl_data->lb_params.depth);
+-	int memory_line_size_y = (line_size * lb_bpc + 71) / 72; /* +71 to ceil */
+-	int memory_line_size_c = (line_size_c * lb_bpc + 71) / 72; /* +71 to ceil */
+-	int memory_line_size_a = (line_size + 5) / 6; /* +5 to ceil */
+-	int lb_memory_size, lb_memory_size_c, lb_memory_size_a, num_partitions_a;
++
++	if (line_size == 0)
++		line_size = 1;
++
++	if (line_size_c == 0)
++		line_size_c = 1;
++
++
++	lb_bpc = dpp1_dscl_get_lb_depth_bpc(scl_data->lb_params.depth);
++	memory_line_size_y = (line_size * lb_bpc + 71) / 72; /* +71 to ceil */
++	memory_line_size_c = (line_size_c * lb_bpc + 71) / 72; /* +71 to ceil */
++	memory_line_size_a = (line_size + 5) / 6; /* +5 to ceil */
+ 
+ 	if (lb_config == LB_MEMORY_CONFIG_1) {
+ 		lb_memory_size = 816;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+index a684b38332ac..2ab05a4e8ed4 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+@@ -2658,9 +2658,15 @@ static void dcn10_set_cursor_position(struct pipe_ctx *pipe_ctx)
+ 		.rotation = pipe_ctx->plane_state->rotation,
+ 		.mirror = pipe_ctx->plane_state->horizontal_mirror
+ 	};
+-
+-	pos_cpy.x_hotspot += pipe_ctx->plane_state->dst_rect.x;
+-	pos_cpy.y_hotspot += pipe_ctx->plane_state->dst_rect.y;
++	uint32_t x_plane = pipe_ctx->plane_state->dst_rect.x;
++	uint32_t y_plane = pipe_ctx->plane_state->dst_rect.y;
++	uint32_t x_offset = min(x_plane, pos_cpy.x);
++	uint32_t y_offset = min(y_plane, pos_cpy.y);
++
++	pos_cpy.x -= x_offset;
++	pos_cpy.y -= y_offset;
++	pos_cpy.x_hotspot += (x_plane - x_offset);
++	pos_cpy.y_hotspot += (y_plane - y_offset);
+ 
+ 	if (pipe_ctx->plane_state->address.type
+ 			== PLN_ADDR_TYPE_VIDEO_PROGRESSIVE)
+diff --git a/drivers/gpu/drm/drm_atomic_state_helper.c b/drivers/gpu/drm/drm_atomic_state_helper.c
+index 4985384e51f6..59ffb6b9c745 100644
+--- a/drivers/gpu/drm/drm_atomic_state_helper.c
++++ b/drivers/gpu/drm/drm_atomic_state_helper.c
+@@ -30,6 +30,7 @@
+ #include <drm/drm_connector.h>
+ #include <drm/drm_atomic.h>
+ #include <drm/drm_device.h>
++#include <drm/drm_writeback.h>
+ 
+ #include <linux/slab.h>
+ #include <linux/dma-fence.h>
+@@ -412,6 +413,9 @@ __drm_atomic_helper_connector_destroy_state(struct drm_connector_state *state)
+ 
+ 	if (state->commit)
+ 		drm_crtc_commit_put(state->commit);
++
++	if (state->writeback_job)
++		drm_writeback_cleanup_job(state->writeback_job);
+ }
+ EXPORT_SYMBOL(__drm_atomic_helper_connector_destroy_state);
+ 
+diff --git a/drivers/gpu/drm/drm_drv.c b/drivers/gpu/drm/drm_drv.c
+index 7a59b8b3ed5a..81c8936bc1d3 100644
+--- a/drivers/gpu/drm/drm_drv.c
++++ b/drivers/gpu/drm/drm_drv.c
+@@ -499,7 +499,7 @@ int drm_dev_init(struct drm_device *dev,
+ 	BUG_ON(!parent);
+ 
+ 	kref_init(&dev->ref);
+-	dev->dev = parent;
++	dev->dev = get_device(parent);
+ 	dev->driver = driver;
+ 
+ 	/* no per-device feature limits by default */
+@@ -569,6 +569,7 @@ err_minors:
+ 	drm_minor_free(dev, DRM_MINOR_RENDER);
+ 	drm_fs_inode_free(dev->anon_inode);
+ err_free:
++	put_device(dev->dev);
+ 	mutex_destroy(&dev->master_mutex);
+ 	mutex_destroy(&dev->ctxlist_mutex);
+ 	mutex_destroy(&dev->clientlist_mutex);
+@@ -604,6 +605,8 @@ void drm_dev_fini(struct drm_device *dev)
+ 	drm_minor_free(dev, DRM_MINOR_PRIMARY);
+ 	drm_minor_free(dev, DRM_MINOR_RENDER);
+ 
++	put_device(dev->dev);
++
+ 	mutex_destroy(&dev->master_mutex);
+ 	mutex_destroy(&dev->ctxlist_mutex);
+ 	mutex_destroy(&dev->clientlist_mutex);
+diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
+index 3f20f598cd7c..9c5bc0121ff9 100644
+--- a/drivers/gpu/drm/drm_file.c
++++ b/drivers/gpu/drm/drm_file.c
+@@ -567,6 +567,7 @@ put_back_event:
+ 				file_priv->event_space -= length;
+ 				list_add(&e->link, &file_priv->event_list);
+ 				spin_unlock_irq(&dev->event_lock);
++				wake_up_interruptible(&file_priv->event_wait);
+ 				break;
+ 			}
+ 
+diff --git a/drivers/gpu/drm/drm_writeback.c b/drivers/gpu/drm/drm_writeback.c
+index c20e6fe00cb3..2d75032f8159 100644
+--- a/drivers/gpu/drm/drm_writeback.c
++++ b/drivers/gpu/drm/drm_writeback.c
+@@ -268,6 +268,15 @@ void drm_writeback_queue_job(struct drm_writeback_connector *wb_connector,
+ }
+ EXPORT_SYMBOL(drm_writeback_queue_job);
+ 
++void drm_writeback_cleanup_job(struct drm_writeback_job *job)
++{
++	if (job->fb)
++		drm_framebuffer_put(job->fb);
++
++	kfree(job);
++}
++EXPORT_SYMBOL(drm_writeback_cleanup_job);
++
+ /*
+  * @cleanup_work: deferred cleanup of a writeback job
+  *
+@@ -280,10 +289,9 @@ static void cleanup_work(struct work_struct *work)
+ 	struct drm_writeback_job *job = container_of(work,
+ 						     struct drm_writeback_job,
+ 						     cleanup_work);
+-	drm_framebuffer_put(job->fb);
+-	kfree(job);
+-}
+ 
++	drm_writeback_cleanup_job(job);
++}
+ 
+ /**
+  * drm_writeback_signal_completion - Signal the completion of a writeback job
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.c b/drivers/gpu/drm/etnaviv/etnaviv_drv.c
+index 18c27f795cf6..3156450723ba 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.c
+@@ -515,6 +515,9 @@ static int etnaviv_bind(struct device *dev)
+ 	}
+ 	drm->dev_private = priv;
+ 
++	dev->dma_parms = &priv->dma_parms;
++	dma_set_max_seg_size(dev, SZ_2G);
++
+ 	mutex_init(&priv->gem_lock);
+ 	INIT_LIST_HEAD(&priv->gem_list);
+ 	priv->num_gpus = 0;
+@@ -552,6 +555,8 @@ static void etnaviv_unbind(struct device *dev)
+ 
+ 	component_unbind_all(dev, drm);
+ 
++	dev->dma_parms = NULL;
++
+ 	drm->dev_private = NULL;
+ 	kfree(priv);
+ 
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
+index 4bf698de5996..51b7bdf5748b 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h
++++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
+@@ -43,6 +43,7 @@ struct etnaviv_file_private {
+ 
+ struct etnaviv_drm_private {
+ 	int num_gpus;
++	struct device_dma_parameters dma_parms;
+ 	struct etnaviv_gpu *gpu[ETNA_MAX_PIPES];
+ 
+ 	/* list of GEM objects: */
+diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+index d5f5e56422f5..270da14cba67 100644
+--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+@@ -34,7 +34,7 @@ static int zap_shader_load_mdt(struct msm_gpu *gpu, const char *fwname)
+ {
+ 	struct device *dev = &gpu->pdev->dev;
+ 	const struct firmware *fw;
+-	struct device_node *np;
++	struct device_node *np, *mem_np;
+ 	struct resource r;
+ 	phys_addr_t mem_phys;
+ 	ssize_t mem_size;
+@@ -48,11 +48,13 @@ static int zap_shader_load_mdt(struct msm_gpu *gpu, const char *fwname)
+ 	if (!np)
+ 		return -ENODEV;
+ 
+-	np = of_parse_phandle(np, "memory-region", 0);
+-	if (!np)
++	mem_np = of_parse_phandle(np, "memory-region", 0);
++	of_node_put(np);
++	if (!mem_np)
+ 		return -EINVAL;
+ 
+-	ret = of_address_to_resource(np, 0, &r);
++	ret = of_address_to_resource(mem_np, 0, &r);
++	of_node_put(mem_np);
+ 	if (ret)
+ 		return ret;
+ 
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+index 36158b7d99cd..1aea0fc894b2 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+@@ -1034,13 +1034,13 @@ static void dpu_encoder_virt_mode_set(struct drm_encoder *drm_enc,
+ 			if (!dpu_enc->hw_pp[i]) {
+ 				DPU_ERROR_ENC(dpu_enc, "no pp block assigned"
+ 					     "at idx: %d\n", i);
+-				return;
++				goto error;
+ 			}
+ 
+ 			if (!hw_ctl[i]) {
+ 				DPU_ERROR_ENC(dpu_enc, "no ctl block assigned"
+ 					     "at idx: %d\n", i);
+-				return;
++				goto error;
+ 			}
+ 
+ 			phys->hw_pp = dpu_enc->hw_pp[i];
+@@ -1053,6 +1053,9 @@ static void dpu_encoder_virt_mode_set(struct drm_encoder *drm_enc,
+ 	}
+ 
+ 	dpu_enc->mode_set_complete = true;
++
++error:
++	dpu_rm_release(&dpu_kms->rm, drm_enc);
+ }
+ 
+ static void _dpu_encoder_virt_enable_helper(struct drm_encoder *drm_enc)
+@@ -1558,8 +1561,14 @@ static void _dpu_encoder_kickoff_phys(struct dpu_encoder_virt *dpu_enc,
+ 		if (!ctl)
+ 			continue;
+ 
+-		if (phys->split_role != ENC_ROLE_SLAVE)
++		/*
++		 * This is cleared in frame_done worker, which isn't invoked
++		 * for async commits. So don't set this for async, since it'll
++		 * roll over to the next commit.
++		 */
++		if (!async && phys->split_role != ENC_ROLE_SLAVE)
+ 			set_bit(i, dpu_enc->frame_busy_mask);
++
+ 		if (!phys->ops.needs_single_flush ||
+ 				!phys->ops.needs_single_flush(phys))
+ 			_dpu_encoder_trigger_flush(&dpu_enc->base, phys, 0x0,
+diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
+index 49c04829cf34..fcf7a83f0e6f 100644
+--- a/drivers/gpu/drm/msm/msm_gem_vma.c
++++ b/drivers/gpu/drm/msm/msm_gem_vma.c
+@@ -85,7 +85,7 @@ msm_gem_map_vma(struct msm_gem_address_space *aspace,
+ 
+ 	vma->mapped = true;
+ 
+-	if (aspace->mmu)
++	if (aspace && aspace->mmu)
+ 		ret = aspace->mmu->funcs->map(aspace->mmu, vma->iova, sgt,
+ 				size, prot);
+ 
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/bar/nv50.c b/drivers/gpu/drm/nouveau/nvkm/subdev/bar/nv50.c
+index 157b076a1272..38c9c086754b 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/bar/nv50.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/bar/nv50.c
+@@ -109,7 +109,7 @@ nv50_bar_oneinit(struct nvkm_bar *base)
+ 	struct nvkm_device *device = bar->base.subdev.device;
+ 	static struct lock_class_key bar1_lock;
+ 	static struct lock_class_key bar2_lock;
+-	u64 start, limit;
++	u64 start, limit, size;
+ 	int ret;
+ 
+ 	ret = nvkm_gpuobj_new(device, 0x20000, 0, false, NULL, &bar->mem);
+@@ -127,7 +127,10 @@ nv50_bar_oneinit(struct nvkm_bar *base)
+ 
+ 	/* BAR2 */
+ 	start = 0x0100000000ULL;
+-	limit = start + device->func->resource_size(device, 3);
++	size = device->func->resource_size(device, 3);
++	if (!size)
++		return -ENOMEM;
++	limit = start + size;
+ 
+ 	ret = nvkm_vmm_new(device, start, limit-- - start, NULL, 0,
+ 			   &bar2_lock, "bar2", &bar->bar2_vmm);
+@@ -164,7 +167,10 @@ nv50_bar_oneinit(struct nvkm_bar *base)
+ 
+ 	/* BAR1 */
+ 	start = 0x0000000000ULL;
+-	limit = start + device->func->resource_size(device, 1);
++	size = device->func->resource_size(device, 1);
++	if (!size)
++		return -ENOMEM;
++	limit = start + size;
+ 
+ 	ret = nvkm_vmm_new(device, start, limit-- - start, NULL, 0,
+ 			   &bar1_lock, "bar1", &bar->bar1_vmm);
+diff --git a/drivers/gpu/drm/omapdrm/dss/dsi.c b/drivers/gpu/drm/omapdrm/dss/dsi.c
+index 64fb788b6647..f0fe975ed46c 100644
+--- a/drivers/gpu/drm/omapdrm/dss/dsi.c
++++ b/drivers/gpu/drm/omapdrm/dss/dsi.c
+@@ -1342,12 +1342,9 @@ static int dsi_pll_enable(struct dss_pll *pll)
+ 	 */
+ 	dsi_enable_scp_clk(dsi);
+ 
+-	if (!dsi->vdds_dsi_enabled) {
+-		r = regulator_enable(dsi->vdds_dsi_reg);
+-		if (r)
+-			goto err0;
+-		dsi->vdds_dsi_enabled = true;
+-	}
++	r = regulator_enable(dsi->vdds_dsi_reg);
++	if (r)
++		goto err0;
+ 
+ 	/* XXX PLL does not come out of reset without this... */
+ 	dispc_pck_free_enable(dsi->dss->dispc, 1);
+@@ -1372,36 +1369,25 @@ static int dsi_pll_enable(struct dss_pll *pll)
+ 
+ 	return 0;
+ err1:
+-	if (dsi->vdds_dsi_enabled) {
+-		regulator_disable(dsi->vdds_dsi_reg);
+-		dsi->vdds_dsi_enabled = false;
+-	}
++	regulator_disable(dsi->vdds_dsi_reg);
+ err0:
+ 	dsi_disable_scp_clk(dsi);
+ 	dsi_runtime_put(dsi);
+ 	return r;
+ }
+ 
+-static void dsi_pll_uninit(struct dsi_data *dsi, bool disconnect_lanes)
++static void dsi_pll_disable(struct dss_pll *pll)
+ {
++	struct dsi_data *dsi = container_of(pll, struct dsi_data, pll);
++
+ 	dsi_pll_power(dsi, DSI_PLL_POWER_OFF);
+-	if (disconnect_lanes) {
+-		WARN_ON(!dsi->vdds_dsi_enabled);
+-		regulator_disable(dsi->vdds_dsi_reg);
+-		dsi->vdds_dsi_enabled = false;
+-	}
++
++	regulator_disable(dsi->vdds_dsi_reg);
+ 
+ 	dsi_disable_scp_clk(dsi);
+ 	dsi_runtime_put(dsi);
+ 
+-	DSSDBG("PLL uninit done\n");
+-}
+-
+-static void dsi_pll_disable(struct dss_pll *pll)
+-{
+-	struct dsi_data *dsi = container_of(pll, struct dsi_data, pll);
+-
+-	dsi_pll_uninit(dsi, true);
++	DSSDBG("PLL disable done\n");
+ }
+ 
+ static int dsi_dump_dsi_clocks(struct seq_file *s, void *p)
+@@ -4096,11 +4082,11 @@ static int dsi_display_init_dsi(struct dsi_data *dsi)
+ 
+ 	r = dss_pll_enable(&dsi->pll);
+ 	if (r)
+-		goto err0;
++		return r;
+ 
+ 	r = dsi_configure_dsi_clocks(dsi);
+ 	if (r)
+-		goto err1;
++		goto err0;
+ 
+ 	dss_select_dsi_clk_source(dsi->dss, dsi->module_id,
+ 				  dsi->module_id == 0 ?
+@@ -4108,6 +4094,14 @@ static int dsi_display_init_dsi(struct dsi_data *dsi)
+ 
+ 	DSSDBG("PLL OK\n");
+ 
++	if (!dsi->vdds_dsi_enabled) {
++		r = regulator_enable(dsi->vdds_dsi_reg);
++		if (r)
++			goto err1;
++
++		dsi->vdds_dsi_enabled = true;
++	}
++
+ 	r = dsi_cio_init(dsi);
+ 	if (r)
+ 		goto err2;
+@@ -4136,10 +4130,13 @@ static int dsi_display_init_dsi(struct dsi_data *dsi)
+ err3:
+ 	dsi_cio_uninit(dsi);
+ err2:
+-	dss_select_dsi_clk_source(dsi->dss, dsi->module_id, DSS_CLK_SRC_FCK);
++	regulator_disable(dsi->vdds_dsi_reg);
++	dsi->vdds_dsi_enabled = false;
+ err1:
+-	dss_pll_disable(&dsi->pll);
++	dss_select_dsi_clk_source(dsi->dss, dsi->module_id, DSS_CLK_SRC_FCK);
+ err0:
++	dss_pll_disable(&dsi->pll);
++
+ 	return r;
+ }
+ 
+@@ -4158,7 +4155,12 @@ static void dsi_display_uninit_dsi(struct dsi_data *dsi, bool disconnect_lanes,
+ 
+ 	dss_select_dsi_clk_source(dsi->dss, dsi->module_id, DSS_CLK_SRC_FCK);
+ 	dsi_cio_uninit(dsi);
+-	dsi_pll_uninit(dsi, disconnect_lanes);
++	dss_pll_disable(&dsi->pll);
++
++	if (disconnect_lanes) {
++		regulator_disable(dsi->vdds_dsi_reg);
++		dsi->vdds_dsi_enabled = false;
++	}
+ }
+ 
+ static int dsi_display_enable(struct omap_dss_device *dssdev)
+diff --git a/drivers/gpu/drm/omapdrm/omap_connector.c b/drivers/gpu/drm/omapdrm/omap_connector.c
+index b81302c4bf9e..a45f925cb19a 100644
+--- a/drivers/gpu/drm/omapdrm/omap_connector.c
++++ b/drivers/gpu/drm/omapdrm/omap_connector.c
+@@ -36,18 +36,22 @@ struct omap_connector {
+ };
+ 
+ static void omap_connector_hpd_notify(struct drm_connector *connector,
+-				      struct omap_dss_device *src,
+ 				      enum drm_connector_status status)
+ {
+-	if (status == connector_status_disconnected) {
+-		/*
+-		 * If the source is an HDMI encoder, notify it of disconnection.
+-		 * This is required to let the HDMI encoder reset any internal
+-		 * state related to connection status, such as the CEC address.
+-		 */
+-		if (src && src->type == OMAP_DISPLAY_TYPE_HDMI &&
+-		    src->ops->hdmi.lost_hotplug)
+-			src->ops->hdmi.lost_hotplug(src);
++	struct omap_connector *omap_connector = to_omap_connector(connector);
++	struct omap_dss_device *dssdev;
++
++	if (status != connector_status_disconnected)
++		return;
++
++	/*
++	 * Notify all devics in the pipeline of disconnection. This is required
++	 * to let the HDMI encoders reset their internal state related to
++	 * connection status, such as the CEC address.
++	 */
++	for (dssdev = omap_connector->output; dssdev; dssdev = dssdev->next) {
++		if (dssdev->ops && dssdev->ops->hdmi.lost_hotplug)
++			dssdev->ops->hdmi.lost_hotplug(dssdev);
+ 	}
+ }
+ 
+@@ -67,7 +71,7 @@ static void omap_connector_hpd_cb(void *cb_data,
+ 	if (old_status == status)
+ 		return;
+ 
+-	omap_connector_hpd_notify(connector, omap_connector->hpd, status);
++	omap_connector_hpd_notify(connector, status);
+ 
+ 	drm_kms_helper_hotplug_event(dev);
+ }
+@@ -128,7 +132,7 @@ static enum drm_connector_status omap_connector_detect(
+ 		       ? connector_status_connected
+ 		       : connector_status_disconnected;
+ 
+-		omap_connector_hpd_notify(connector, dssdev->src, status);
++		omap_connector_hpd_notify(connector, status);
+ 	} else {
+ 		switch (omap_connector->display->type) {
+ 		case OMAP_DISPLAY_TYPE_DPI:
+diff --git a/drivers/gpu/drm/panel/panel-orisetech-otm8009a.c b/drivers/gpu/drm/panel/panel-orisetech-otm8009a.c
+index 87fa316e1d7b..58ccf648b70f 100644
+--- a/drivers/gpu/drm/panel/panel-orisetech-otm8009a.c
++++ b/drivers/gpu/drm/panel/panel-orisetech-otm8009a.c
+@@ -248,6 +248,9 @@ static int otm8009a_init_sequence(struct otm8009a *ctx)
+ 	/* Send Command GRAM memory write (no parameters) */
+ 	dcs_write_seq(ctx, MIPI_DCS_WRITE_MEMORY_START);
+ 
++	/* Wait a short while to let the panel be ready before the 1st frame */
++	mdelay(10);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/pl111/pl111_versatile.c b/drivers/gpu/drm/pl111/pl111_versatile.c
+index b9baefdba38a..1c318ad32a8c 100644
+--- a/drivers/gpu/drm/pl111/pl111_versatile.c
++++ b/drivers/gpu/drm/pl111/pl111_versatile.c
+@@ -330,6 +330,7 @@ int pl111_versatile_init(struct device *dev, struct pl111_drm_dev_private *priv)
+ 		ret = vexpress_muxfpga_init();
+ 		if (ret) {
+ 			dev_err(dev, "unable to initialize muxfpga driver\n");
++			of_node_put(np);
+ 			return ret;
+ 		}
+ 
+@@ -337,17 +338,20 @@ int pl111_versatile_init(struct device *dev, struct pl111_drm_dev_private *priv)
+ 		pdev = of_find_device_by_node(np);
+ 		if (!pdev) {
+ 			dev_err(dev, "can't find the sysreg device, deferring\n");
++			of_node_put(np);
+ 			return -EPROBE_DEFER;
+ 		}
+ 		map = dev_get_drvdata(&pdev->dev);
+ 		if (!map) {
+ 			dev_err(dev, "sysreg has not yet probed\n");
+ 			platform_device_put(pdev);
++			of_node_put(np);
+ 			return -EPROBE_DEFER;
+ 		}
+ 	} else {
+ 		map = syscon_node_to_regmap(np);
+ 	}
++	of_node_put(np);
+ 
+ 	if (IS_ERR(map)) {
+ 		dev_err(dev, "no Versatile syscon regmap\n");
+diff --git a/drivers/gpu/drm/rcar-du/rcar_lvds.c b/drivers/gpu/drm/rcar-du/rcar_lvds.c
+index 534a128a869d..ccdfc64e122a 100644
+--- a/drivers/gpu/drm/rcar-du/rcar_lvds.c
++++ b/drivers/gpu/drm/rcar-du/rcar_lvds.c
+@@ -427,9 +427,13 @@ static void rcar_lvds_enable(struct drm_bridge *bridge)
+ 	}
+ 
+ 	if (lvds->info->quirks & RCAR_LVDS_QUIRK_GEN3_LVEN) {
+-		/* Turn on the LVDS PHY. */
++		/*
++		 * Turn on the LVDS PHY. On D3, the LVEN and LVRES bit must be
++		 * set at the same time, so don't write the register yet.
++		 */
+ 		lvdcr0 |= LVDCR0_LVEN;
+-		rcar_lvds_write(lvds, LVDCR0, lvdcr0);
++		if (!(lvds->info->quirks & RCAR_LVDS_QUIRK_PWD))
++			rcar_lvds_write(lvds, LVDCR0, lvdcr0);
+ 	}
+ 
+ 	if (!(lvds->info->quirks & RCAR_LVDS_QUIRK_EXT_PLL)) {
+diff --git a/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c b/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c
+index e3b34a345546..97a0573cc514 100644
+--- a/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c
++++ b/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c
+@@ -357,7 +357,13 @@ static void sun6i_dsi_inst_init(struct sun6i_dsi *dsi,
+ static u16 sun6i_dsi_get_video_start_delay(struct sun6i_dsi *dsi,
+ 					   struct drm_display_mode *mode)
+ {
+-	return mode->vtotal - (mode->vsync_end - mode->vdisplay) + 1;
++	u16 start = clamp(mode->vtotal - mode->vdisplay - 10, 8, 100);
++	u16 delay = mode->vtotal - (mode->vsync_end - mode->vdisplay) + start;
++
++	if (delay > mode->vtotal)
++		delay = delay % mode->vtotal;
++
++	return max_t(u16, delay, 1);
+ }
+ 
+ static void sun6i_dsi_setup_burst(struct sun6i_dsi *dsi,
+diff --git a/drivers/gpu/drm/tinydrm/ili9225.c b/drivers/gpu/drm/tinydrm/ili9225.c
+index 78f7c2d1b449..5d85894607c7 100644
+--- a/drivers/gpu/drm/tinydrm/ili9225.c
++++ b/drivers/gpu/drm/tinydrm/ili9225.c
+@@ -279,7 +279,7 @@ static void ili9225_pipe_disable(struct drm_simple_display_pipe *pipe)
+ 	mipi->enabled = false;
+ }
+ 
+-static int ili9225_dbi_command(struct mipi_dbi *mipi, u8 cmd, u8 *par,
++static int ili9225_dbi_command(struct mipi_dbi *mipi, u8 *cmd, u8 *par,
+ 			       size_t num)
+ {
+ 	struct spi_device *spi = mipi->spi;
+@@ -289,11 +289,11 @@ static int ili9225_dbi_command(struct mipi_dbi *mipi, u8 cmd, u8 *par,
+ 
+ 	gpiod_set_value_cansleep(mipi->dc, 0);
+ 	speed_hz = mipi_dbi_spi_cmd_max_speed(spi, 1);
+-	ret = tinydrm_spi_transfer(spi, speed_hz, NULL, 8, &cmd, 1);
++	ret = tinydrm_spi_transfer(spi, speed_hz, NULL, 8, cmd, 1);
+ 	if (ret || !num)
+ 		return ret;
+ 
+-	if (cmd == ILI9225_WRITE_DATA_TO_GRAM && !mipi->swap_bytes)
++	if (*cmd == ILI9225_WRITE_DATA_TO_GRAM && !mipi->swap_bytes)
+ 		bpw = 16;
+ 
+ 	gpiod_set_value_cansleep(mipi->dc, 1);
+diff --git a/drivers/gpu/drm/tinydrm/mipi-dbi.c b/drivers/gpu/drm/tinydrm/mipi-dbi.c
+index 3a05e56f9b0d..dd091c7eecf9 100644
+--- a/drivers/gpu/drm/tinydrm/mipi-dbi.c
++++ b/drivers/gpu/drm/tinydrm/mipi-dbi.c
+@@ -148,16 +148,42 @@ EXPORT_SYMBOL(mipi_dbi_command_read);
+  */
+ int mipi_dbi_command_buf(struct mipi_dbi *mipi, u8 cmd, u8 *data, size_t len)
+ {
++	u8 *cmdbuf;
+ 	int ret;
+ 
++	/* SPI requires dma-safe buffers */
++	cmdbuf = kmemdup(&cmd, 1, GFP_KERNEL);
++	if (!cmdbuf)
++		return -ENOMEM;
++
+ 	mutex_lock(&mipi->cmdlock);
+-	ret = mipi->command(mipi, cmd, data, len);
++	ret = mipi->command(mipi, cmdbuf, data, len);
+ 	mutex_unlock(&mipi->cmdlock);
+ 
++	kfree(cmdbuf);
++
+ 	return ret;
+ }
+ EXPORT_SYMBOL(mipi_dbi_command_buf);
+ 
++/* This should only be used by mipi_dbi_command() */
++int mipi_dbi_command_stackbuf(struct mipi_dbi *mipi, u8 cmd, u8 *data, size_t len)
++{
++	u8 *buf;
++	int ret;
++
++	buf = kmemdup(data, len, GFP_KERNEL);
++	if (!buf)
++		return -ENOMEM;
++
++	ret = mipi_dbi_command_buf(mipi, cmd, buf, len);
++
++	kfree(buf);
++
++	return ret;
++}
++EXPORT_SYMBOL(mipi_dbi_command_stackbuf);
++
+ /**
+  * mipi_dbi_buf_copy - Copy a framebuffer, transforming it if necessary
+  * @dst: The destination buffer
+@@ -745,18 +771,18 @@ static int mipi_dbi_spi1_transfer(struct mipi_dbi *mipi, int dc,
+ 	return 0;
+ }
+ 
+-static int mipi_dbi_typec1_command(struct mipi_dbi *mipi, u8 cmd,
++static int mipi_dbi_typec1_command(struct mipi_dbi *mipi, u8 *cmd,
+ 				   u8 *parameters, size_t num)
+ {
+-	unsigned int bpw = (cmd == MIPI_DCS_WRITE_MEMORY_START) ? 16 : 8;
++	unsigned int bpw = (*cmd == MIPI_DCS_WRITE_MEMORY_START) ? 16 : 8;
+ 	int ret;
+ 
+-	if (mipi_dbi_command_is_read(mipi, cmd))
++	if (mipi_dbi_command_is_read(mipi, *cmd))
+ 		return -ENOTSUPP;
+ 
+-	MIPI_DBI_DEBUG_COMMAND(cmd, parameters, num);
++	MIPI_DBI_DEBUG_COMMAND(*cmd, parameters, num);
+ 
+-	ret = mipi_dbi_spi1_transfer(mipi, 0, &cmd, 1, 8);
++	ret = mipi_dbi_spi1_transfer(mipi, 0, cmd, 1, 8);
+ 	if (ret || !num)
+ 		return ret;
+ 
+@@ -765,7 +791,7 @@ static int mipi_dbi_typec1_command(struct mipi_dbi *mipi, u8 cmd,
+ 
+ /* MIPI DBI Type C Option 3 */
+ 
+-static int mipi_dbi_typec3_command_read(struct mipi_dbi *mipi, u8 cmd,
++static int mipi_dbi_typec3_command_read(struct mipi_dbi *mipi, u8 *cmd,
+ 					u8 *data, size_t len)
+ {
+ 	struct spi_device *spi = mipi->spi;
+@@ -774,7 +800,7 @@ static int mipi_dbi_typec3_command_read(struct mipi_dbi *mipi, u8 cmd,
+ 	struct spi_transfer tr[2] = {
+ 		{
+ 			.speed_hz = speed_hz,
+-			.tx_buf = &cmd,
++			.tx_buf = cmd,
+ 			.len = 1,
+ 		}, {
+ 			.speed_hz = speed_hz,
+@@ -792,8 +818,8 @@ static int mipi_dbi_typec3_command_read(struct mipi_dbi *mipi, u8 cmd,
+ 	 * Support non-standard 24-bit and 32-bit Nokia read commands which
+ 	 * start with a dummy clock, so we need to read an extra byte.
+ 	 */
+-	if (cmd == MIPI_DCS_GET_DISPLAY_ID ||
+-	    cmd == MIPI_DCS_GET_DISPLAY_STATUS) {
++	if (*cmd == MIPI_DCS_GET_DISPLAY_ID ||
++	    *cmd == MIPI_DCS_GET_DISPLAY_STATUS) {
+ 		if (!(len == 3 || len == 4))
+ 			return -EINVAL;
+ 
+@@ -823,7 +849,7 @@ static int mipi_dbi_typec3_command_read(struct mipi_dbi *mipi, u8 cmd,
+ 			data[i] = (buf[i] << 1) | !!(buf[i + 1] & BIT(7));
+ 	}
+ 
+-	MIPI_DBI_DEBUG_COMMAND(cmd, data, len);
++	MIPI_DBI_DEBUG_COMMAND(*cmd, data, len);
+ 
+ err_free:
+ 	kfree(buf);
+@@ -831,7 +857,7 @@ err_free:
+ 	return ret;
+ }
+ 
+-static int mipi_dbi_typec3_command(struct mipi_dbi *mipi, u8 cmd,
++static int mipi_dbi_typec3_command(struct mipi_dbi *mipi, u8 *cmd,
+ 				   u8 *par, size_t num)
+ {
+ 	struct spi_device *spi = mipi->spi;
+@@ -839,18 +865,18 @@ static int mipi_dbi_typec3_command(struct mipi_dbi *mipi, u8 cmd,
+ 	u32 speed_hz;
+ 	int ret;
+ 
+-	if (mipi_dbi_command_is_read(mipi, cmd))
++	if (mipi_dbi_command_is_read(mipi, *cmd))
+ 		return mipi_dbi_typec3_command_read(mipi, cmd, par, num);
+ 
+-	MIPI_DBI_DEBUG_COMMAND(cmd, par, num);
++	MIPI_DBI_DEBUG_COMMAND(*cmd, par, num);
+ 
+ 	gpiod_set_value_cansleep(mipi->dc, 0);
+ 	speed_hz = mipi_dbi_spi_cmd_max_speed(spi, 1);
+-	ret = tinydrm_spi_transfer(spi, speed_hz, NULL, 8, &cmd, 1);
++	ret = tinydrm_spi_transfer(spi, speed_hz, NULL, 8, cmd, 1);
+ 	if (ret || !num)
+ 		return ret;
+ 
+-	if (cmd == MIPI_DCS_WRITE_MEMORY_START && !mipi->swap_bytes)
++	if (*cmd == MIPI_DCS_WRITE_MEMORY_START && !mipi->swap_bytes)
+ 		bpw = 16;
+ 
+ 	gpiod_set_value_cansleep(mipi->dc, 1);
+diff --git a/drivers/gpu/drm/v3d/v3d_drv.c b/drivers/gpu/drm/v3d/v3d_drv.c
+index f0afcec72c34..30ae1c74edaa 100644
+--- a/drivers/gpu/drm/v3d/v3d_drv.c
++++ b/drivers/gpu/drm/v3d/v3d_drv.c
+@@ -312,14 +312,18 @@ static int v3d_platform_drm_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		goto dev_destroy;
+ 
+-	v3d_irq_init(v3d);
++	ret = v3d_irq_init(v3d);
++	if (ret)
++		goto gem_destroy;
+ 
+ 	ret = drm_dev_register(drm, 0);
+ 	if (ret)
+-		goto gem_destroy;
++		goto irq_disable;
+ 
+ 	return 0;
+ 
++irq_disable:
++	v3d_irq_disable(v3d);
+ gem_destroy:
+ 	v3d_gem_destroy(drm);
+ dev_destroy:
+diff --git a/drivers/gpu/drm/v3d/v3d_drv.h b/drivers/gpu/drm/v3d/v3d_drv.h
+index dcb772a19191..f2937a1da581 100644
+--- a/drivers/gpu/drm/v3d/v3d_drv.h
++++ b/drivers/gpu/drm/v3d/v3d_drv.h
+@@ -311,7 +311,7 @@ void v3d_invalidate_caches(struct v3d_dev *v3d);
+ void v3d_flush_caches(struct v3d_dev *v3d);
+ 
+ /* v3d_irq.c */
+-void v3d_irq_init(struct v3d_dev *v3d);
++int v3d_irq_init(struct v3d_dev *v3d);
+ void v3d_irq_enable(struct v3d_dev *v3d);
+ void v3d_irq_disable(struct v3d_dev *v3d);
+ void v3d_irq_reset(struct v3d_dev *v3d);
+diff --git a/drivers/gpu/drm/v3d/v3d_irq.c b/drivers/gpu/drm/v3d/v3d_irq.c
+index 69338da70ddc..29d746cfce57 100644
+--- a/drivers/gpu/drm/v3d/v3d_irq.c
++++ b/drivers/gpu/drm/v3d/v3d_irq.c
+@@ -156,7 +156,7 @@ v3d_hub_irq(int irq, void *arg)
+ 	return status;
+ }
+ 
+-void
++int
+ v3d_irq_init(struct v3d_dev *v3d)
+ {
+ 	int ret, core;
+@@ -173,13 +173,22 @@ v3d_irq_init(struct v3d_dev *v3d)
+ 	ret = devm_request_irq(v3d->dev, platform_get_irq(v3d->pdev, 0),
+ 			       v3d_hub_irq, IRQF_SHARED,
+ 			       "v3d_hub", v3d);
++	if (ret)
++		goto fail;
++
+ 	ret = devm_request_irq(v3d->dev, platform_get_irq(v3d->pdev, 1),
+ 			       v3d_irq, IRQF_SHARED,
+ 			       "v3d_core0", v3d);
+ 	if (ret)
+-		dev_err(v3d->dev, "IRQ setup failed: %d\n", ret);
++		goto fail;
+ 
+ 	v3d_irq_enable(v3d);
++	return 0;
++
++fail:
++	if (ret != -EPROBE_DEFER)
++		dev_err(v3d->dev, "IRQ setup failed: %d\n", ret);
++	return ret;
+ }
+ 
+ void
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index 860e21ec6a49..63a43726cce0 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -218,13 +218,14 @@ static unsigned hid_lookup_collection(struct hid_parser *parser, unsigned type)
+  * Add a usage to the temporary parser table.
+  */
+ 
+-static int hid_add_usage(struct hid_parser *parser, unsigned usage)
++static int hid_add_usage(struct hid_parser *parser, unsigned usage, u8 size)
+ {
+ 	if (parser->local.usage_index >= HID_MAX_USAGES) {
+ 		hid_err(parser->device, "usage index exceeded\n");
+ 		return -1;
+ 	}
+ 	parser->local.usage[parser->local.usage_index] = usage;
++	parser->local.usage_size[parser->local.usage_index] = size;
+ 	parser->local.collection_index[parser->local.usage_index] =
+ 		parser->collection_stack_ptr ?
+ 		parser->collection_stack[parser->collection_stack_ptr - 1] : 0;
+@@ -486,10 +487,7 @@ static int hid_parser_local(struct hid_parser *parser, struct hid_item *item)
+ 			return 0;
+ 		}
+ 
+-		if (item->size <= 2)
+-			data = (parser->global.usage_page << 16) + data;
+-
+-		return hid_add_usage(parser, data);
++		return hid_add_usage(parser, data, item->size);
+ 
+ 	case HID_LOCAL_ITEM_TAG_USAGE_MINIMUM:
+ 
+@@ -498,9 +496,6 @@ static int hid_parser_local(struct hid_parser *parser, struct hid_item *item)
+ 			return 0;
+ 		}
+ 
+-		if (item->size <= 2)
+-			data = (parser->global.usage_page << 16) + data;
+-
+ 		parser->local.usage_minimum = data;
+ 		return 0;
+ 
+@@ -511,9 +506,6 @@ static int hid_parser_local(struct hid_parser *parser, struct hid_item *item)
+ 			return 0;
+ 		}
+ 
+-		if (item->size <= 2)
+-			data = (parser->global.usage_page << 16) + data;
+-
+ 		count = data - parser->local.usage_minimum;
+ 		if (count + parser->local.usage_index >= HID_MAX_USAGES) {
+ 			/*
+@@ -533,7 +525,7 @@ static int hid_parser_local(struct hid_parser *parser, struct hid_item *item)
+ 		}
+ 
+ 		for (n = parser->local.usage_minimum; n <= data; n++)
+-			if (hid_add_usage(parser, n)) {
++			if (hid_add_usage(parser, n, item->size)) {
+ 				dbg_hid("hid_add_usage failed\n");
+ 				return -1;
+ 			}
+@@ -547,6 +539,22 @@ static int hid_parser_local(struct hid_parser *parser, struct hid_item *item)
+ 	return 0;
+ }
+ 
++/*
++ * Concatenate Usage Pages into Usages where relevant:
++ * As per specification, 6.2.2.8: "When the parser encounters a main item it
++ * concatenates the last declared Usage Page with a Usage to form a complete
++ * usage value."
++ */
++
++static void hid_concatenate_usage_page(struct hid_parser *parser)
++{
++	int i;
++
++	for (i = 0; i < parser->local.usage_index; i++)
++		if (parser->local.usage_size[i] <= 2)
++			parser->local.usage[i] += parser->global.usage_page << 16;
++}
++
+ /*
+  * Process a main item.
+  */
+@@ -556,6 +564,8 @@ static int hid_parser_main(struct hid_parser *parser, struct hid_item *item)
+ 	__u32 data;
+ 	int ret;
+ 
++	hid_concatenate_usage_page(parser);
++
+ 	data = item_udata(item);
+ 
+ 	switch (item->tag) {
+@@ -765,6 +775,8 @@ static int hid_scan_main(struct hid_parser *parser, struct hid_item *item)
+ 	__u32 data;
+ 	int i;
+ 
++	hid_concatenate_usage_page(parser);
++
+ 	data = item_udata(item);
+ 
+ 	switch (item->tag) {
+diff --git a/drivers/hid/hid-logitech-hidpp.c b/drivers/hid/hid-logitech-hidpp.c
+index 199cc256e9d9..e74fa990ba13 100644
+--- a/drivers/hid/hid-logitech-hidpp.c
++++ b/drivers/hid/hid-logitech-hidpp.c
+@@ -836,13 +836,16 @@ static int hidpp_root_get_feature(struct hidpp_device *hidpp, u16 feature,
+ 
+ static int hidpp_root_get_protocol_version(struct hidpp_device *hidpp)
+ {
++	const u8 ping_byte = 0x5a;
++	u8 ping_data[3] = { 0, 0, ping_byte };
+ 	struct hidpp_report response;
+ 	int ret;
+ 
+-	ret = hidpp_send_fap_command_sync(hidpp,
++	ret = hidpp_send_rap_command_sync(hidpp,
++			REPORT_ID_HIDPP_SHORT,
+ 			HIDPP_PAGE_ROOT_IDX,
+ 			CMD_ROOT_GET_PROTOCOL_VERSION,
+-			NULL, 0, &response);
++			ping_data, sizeof(ping_data), &response);
+ 
+ 	if (ret == HIDPP_ERROR_INVALID_SUBID) {
+ 		hidpp->protocol_major = 1;
+@@ -862,8 +865,14 @@ static int hidpp_root_get_protocol_version(struct hidpp_device *hidpp)
+ 	if (ret)
+ 		return ret;
+ 
+-	hidpp->protocol_major = response.fap.params[0];
+-	hidpp->protocol_minor = response.fap.params[1];
++	if (response.rap.params[2] != ping_byte) {
++		hid_err(hidpp->hid_dev, "%s: ping mismatch 0x%02x != 0x%02x\n",
++			__func__, response.rap.params[2], ping_byte);
++		return -EPROTO;
++	}
++
++	hidpp->protocol_major = response.rap.params[0];
++	hidpp->protocol_minor = response.rap.params[1];
+ 
+ 	return ret;
+ }
+@@ -1012,7 +1021,11 @@ static int hidpp_map_battery_level(int capacity)
+ {
+ 	if (capacity < 11)
+ 		return POWER_SUPPLY_CAPACITY_LEVEL_CRITICAL;
+-	else if (capacity < 31)
++	/*
++	 * The spec says this should be < 31 but some devices report 30
++	 * with brand new batteries and Windows reports 30 as "Good".
++	 */
++	else if (capacity < 30)
+ 		return POWER_SUPPLY_CAPACITY_LEVEL_LOW;
+ 	else if (capacity < 81)
+ 		return POWER_SUPPLY_CAPACITY_LEVEL_NORMAL;
+diff --git a/drivers/hwmon/f71805f.c b/drivers/hwmon/f71805f.c
+index 73c681162653..623736d2a7c1 100644
+--- a/drivers/hwmon/f71805f.c
++++ b/drivers/hwmon/f71805f.c
+@@ -96,17 +96,23 @@ superio_select(int base, int ld)
+ 	outb(ld, base + 1);
+ }
+ 
+-static inline void
++static inline int
+ superio_enter(int base)
+ {
++	if (!request_muxed_region(base, 2, DRVNAME))
++		return -EBUSY;
++
+ 	outb(0x87, base);
+ 	outb(0x87, base);
++
++	return 0;
+ }
+ 
+ static inline void
+ superio_exit(int base)
+ {
+ 	outb(0xaa, base);
++	release_region(base, 2);
+ }
+ 
+ /*
+@@ -1561,7 +1567,7 @@ exit:
+ static int __init f71805f_find(int sioaddr, unsigned short *address,
+ 			       struct f71805f_sio_data *sio_data)
+ {
+-	int err = -ENODEV;
++	int err;
+ 	u16 devid;
+ 
+ 	static const char * const names[] = {
+@@ -1569,8 +1575,11 @@ static int __init f71805f_find(int sioaddr, unsigned short *address,
+ 		"F71872F/FG or F71806F/FG",
+ 	};
+ 
+-	superio_enter(sioaddr);
++	err = superio_enter(sioaddr);
++	if (err)
++		return err;
+ 
++	err = -ENODEV;
+ 	devid = superio_inw(sioaddr, SIO_REG_MANID);
+ 	if (devid != SIO_FINTEK_ID)
+ 		goto exit;
+diff --git a/drivers/hwmon/pc87427.c b/drivers/hwmon/pc87427.c
+index dc5a9d5ada51..81a05cd1a512 100644
+--- a/drivers/hwmon/pc87427.c
++++ b/drivers/hwmon/pc87427.c
+@@ -106,6 +106,13 @@ static const char *logdev_str[2] = { DRVNAME " FMC", DRVNAME " HMC" };
+ #define LD_IN		1
+ #define LD_TEMP		1
+ 
++static inline int superio_enter(int sioaddr)
++{
++	if (!request_muxed_region(sioaddr, 2, DRVNAME))
++		return -EBUSY;
++	return 0;
++}
++
+ static inline void superio_outb(int sioaddr, int reg, int val)
+ {
+ 	outb(reg, sioaddr);
+@@ -122,6 +129,7 @@ static inline void superio_exit(int sioaddr)
+ {
+ 	outb(0x02, sioaddr);
+ 	outb(0x02, sioaddr + 1);
++	release_region(sioaddr, 2);
+ }
+ 
+ /*
+@@ -1220,7 +1228,11 @@ static int __init pc87427_find(int sioaddr, struct pc87427_sio_data *sio_data)
+ {
+ 	u16 val;
+ 	u8 cfg, cfg_b;
+-	int i, err = 0;
++	int i, err;
++
++	err = superio_enter(sioaddr);
++	if (err)
++		return err;
+ 
+ 	/* Identify device */
+ 	val = force_id ? force_id : superio_inb(sioaddr, SIOREG_DEVID);
+diff --git a/drivers/hwmon/smsc47b397.c b/drivers/hwmon/smsc47b397.c
+index 6bd200756560..cbdb5c4991ae 100644
+--- a/drivers/hwmon/smsc47b397.c
++++ b/drivers/hwmon/smsc47b397.c
+@@ -72,14 +72,19 @@ static inline void superio_select(int ld)
+ 	superio_outb(0x07, ld);
+ }
+ 
+-static inline void superio_enter(void)
++static inline int superio_enter(void)
+ {
++	if (!request_muxed_region(REG, 2, DRVNAME))
++		return -EBUSY;
++
+ 	outb(0x55, REG);
++	return 0;
+ }
+ 
+ static inline void superio_exit(void)
+ {
+ 	outb(0xAA, REG);
++	release_region(REG, 2);
+ }
+ 
+ #define SUPERIO_REG_DEVID	0x20
+@@ -300,8 +305,12 @@ static int __init smsc47b397_find(void)
+ 	u8 id, rev;
+ 	char *name;
+ 	unsigned short addr;
++	int err;
++
++	err = superio_enter();
++	if (err)
++		return err;
+ 
+-	superio_enter();
+ 	id = force_id ? force_id : superio_inb(SUPERIO_REG_DEVID);
+ 
+ 	switch (id) {
+diff --git a/drivers/hwmon/smsc47m1.c b/drivers/hwmon/smsc47m1.c
+index c7b6a425e2c0..5eeac9853d0a 100644
+--- a/drivers/hwmon/smsc47m1.c
++++ b/drivers/hwmon/smsc47m1.c
+@@ -73,16 +73,21 @@ superio_inb(int reg)
+ /* logical device for fans is 0x0A */
+ #define superio_select() superio_outb(0x07, 0x0A)
+ 
+-static inline void
++static inline int
+ superio_enter(void)
+ {
++	if (!request_muxed_region(REG, 2, DRVNAME))
++		return -EBUSY;
++
+ 	outb(0x55, REG);
++	return 0;
+ }
+ 
+ static inline void
+ superio_exit(void)
+ {
+ 	outb(0xAA, REG);
++	release_region(REG, 2);
+ }
+ 
+ #define SUPERIO_REG_ACT		0x30
+@@ -531,8 +536,12 @@ static int __init smsc47m1_find(struct smsc47m1_sio_data *sio_data)
+ {
+ 	u8 val;
+ 	unsigned short addr;
++	int err;
++
++	err = superio_enter();
++	if (err)
++		return err;
+ 
+-	superio_enter();
+ 	val = force_id ? force_id : superio_inb(SUPERIO_REG_DEVID);
+ 
+ 	/*
+@@ -608,13 +617,14 @@ static int __init smsc47m1_find(struct smsc47m1_sio_data *sio_data)
+ static void smsc47m1_restore(const struct smsc47m1_sio_data *sio_data)
+ {
+ 	if ((sio_data->activate & 0x01) == 0) {
+-		superio_enter();
+-		superio_select();
+-
+-		pr_info("Disabling device\n");
+-		superio_outb(SUPERIO_REG_ACT, sio_data->activate);
+-
+-		superio_exit();
++		if (!superio_enter()) {
++			superio_select();
++			pr_info("Disabling device\n");
++			superio_outb(SUPERIO_REG_ACT, sio_data->activate);
++			superio_exit();
++		} else {
++			pr_warn("Failed to disable device\n");
++		}
+ 	}
+ }
+ 
+diff --git a/drivers/hwmon/vt1211.c b/drivers/hwmon/vt1211.c
+index 3a6bfa51cb94..95d5e8ec8b7f 100644
+--- a/drivers/hwmon/vt1211.c
++++ b/drivers/hwmon/vt1211.c
+@@ -226,15 +226,21 @@ static inline void superio_select(int sio_cip, int ldn)
+ 	outb(ldn, sio_cip + 1);
+ }
+ 
+-static inline void superio_enter(int sio_cip)
++static inline int superio_enter(int sio_cip)
+ {
++	if (!request_muxed_region(sio_cip, 2, DRVNAME))
++		return -EBUSY;
++
+ 	outb(0x87, sio_cip);
+ 	outb(0x87, sio_cip);
++
++	return 0;
+ }
+ 
+ static inline void superio_exit(int sio_cip)
+ {
+ 	outb(0xaa, sio_cip);
++	release_region(sio_cip, 2);
+ }
+ 
+ /* ---------------------------------------------------------------------
+@@ -1282,11 +1288,14 @@ EXIT:
+ 
+ static int __init vt1211_find(int sio_cip, unsigned short *address)
+ {
+-	int err = -ENODEV;
++	int err;
+ 	int devid;
+ 
+-	superio_enter(sio_cip);
++	err = superio_enter(sio_cip);
++	if (err)
++		return err;
+ 
++	err = -ENODEV;
+ 	devid = force_id ? force_id : superio_inb(sio_cip, SIO_VT1211_DEVID);
+ 	if (devid != SIO_VT1211_ID)
+ 		goto EXIT;
+diff --git a/drivers/iio/adc/Kconfig b/drivers/iio/adc/Kconfig
+index 7a3ca4ec0cb7..0a11f6cbc91a 100644
+--- a/drivers/iio/adc/Kconfig
++++ b/drivers/iio/adc/Kconfig
+@@ -747,6 +747,7 @@ config STM32_DFSDM_ADC
+ 	depends on (ARCH_STM32 && OF) || COMPILE_TEST
+ 	select STM32_DFSDM_CORE
+ 	select REGMAP_MMIO
++	select IIO_BUFFER
+ 	select IIO_BUFFER_HW_CONSUMER
+ 	help
+ 	  Select this option to support ADCSigma delta modulator for
+diff --git a/drivers/iio/adc/ad_sigma_delta.c b/drivers/iio/adc/ad_sigma_delta.c
+index 54d9978b2740..a4310600a853 100644
+--- a/drivers/iio/adc/ad_sigma_delta.c
++++ b/drivers/iio/adc/ad_sigma_delta.c
+@@ -62,7 +62,7 @@ int ad_sd_write_reg(struct ad_sigma_delta *sigma_delta, unsigned int reg,
+ 	struct spi_transfer t = {
+ 		.tx_buf		= data,
+ 		.len		= size + 1,
+-		.cs_change	= sigma_delta->bus_locked,
++		.cs_change	= sigma_delta->keep_cs_asserted,
+ 	};
+ 	struct spi_message m;
+ 	int ret;
+@@ -218,6 +218,7 @@ static int ad_sd_calibrate(struct ad_sigma_delta *sigma_delta,
+ 
+ 	spi_bus_lock(sigma_delta->spi->master);
+ 	sigma_delta->bus_locked = true;
++	sigma_delta->keep_cs_asserted = true;
+ 	reinit_completion(&sigma_delta->completion);
+ 
+ 	ret = ad_sigma_delta_set_mode(sigma_delta, mode);
+@@ -235,9 +236,10 @@ static int ad_sd_calibrate(struct ad_sigma_delta *sigma_delta,
+ 		ret = 0;
+ 	}
+ out:
++	sigma_delta->keep_cs_asserted = false;
++	ad_sigma_delta_set_mode(sigma_delta, AD_SD_MODE_IDLE);
+ 	sigma_delta->bus_locked = false;
+ 	spi_bus_unlock(sigma_delta->spi->master);
+-	ad_sigma_delta_set_mode(sigma_delta, AD_SD_MODE_IDLE);
+ 
+ 	return ret;
+ }
+@@ -290,6 +292,7 @@ int ad_sigma_delta_single_conversion(struct iio_dev *indio_dev,
+ 
+ 	spi_bus_lock(sigma_delta->spi->master);
+ 	sigma_delta->bus_locked = true;
++	sigma_delta->keep_cs_asserted = true;
+ 	reinit_completion(&sigma_delta->completion);
+ 
+ 	ad_sigma_delta_set_mode(sigma_delta, AD_SD_MODE_SINGLE);
+@@ -299,9 +302,6 @@ int ad_sigma_delta_single_conversion(struct iio_dev *indio_dev,
+ 	ret = wait_for_completion_interruptible_timeout(
+ 			&sigma_delta->completion, HZ);
+ 
+-	sigma_delta->bus_locked = false;
+-	spi_bus_unlock(sigma_delta->spi->master);
+-
+ 	if (ret == 0)
+ 		ret = -EIO;
+ 	if (ret < 0)
+@@ -322,7 +322,10 @@ out:
+ 		sigma_delta->irq_dis = true;
+ 	}
+ 
++	sigma_delta->keep_cs_asserted = false;
+ 	ad_sigma_delta_set_mode(sigma_delta, AD_SD_MODE_IDLE);
++	sigma_delta->bus_locked = false;
++	spi_bus_unlock(sigma_delta->spi->master);
+ 	mutex_unlock(&indio_dev->mlock);
+ 
+ 	if (ret)
+@@ -359,6 +362,8 @@ static int ad_sd_buffer_postenable(struct iio_dev *indio_dev)
+ 
+ 	spi_bus_lock(sigma_delta->spi->master);
+ 	sigma_delta->bus_locked = true;
++	sigma_delta->keep_cs_asserted = true;
++
+ 	ret = ad_sigma_delta_set_mode(sigma_delta, AD_SD_MODE_CONTINUOUS);
+ 	if (ret)
+ 		goto err_unlock;
+@@ -387,6 +392,7 @@ static int ad_sd_buffer_postdisable(struct iio_dev *indio_dev)
+ 		sigma_delta->irq_dis = true;
+ 	}
+ 
++	sigma_delta->keep_cs_asserted = false;
+ 	ad_sigma_delta_set_mode(sigma_delta, AD_SD_MODE_IDLE);
+ 
+ 	sigma_delta->bus_locked = false;
+diff --git a/drivers/iio/adc/ti-ads7950.c b/drivers/iio/adc/ti-ads7950.c
+index 0ad63592cc3c..1e47bef72bb7 100644
+--- a/drivers/iio/adc/ti-ads7950.c
++++ b/drivers/iio/adc/ti-ads7950.c
+@@ -56,6 +56,9 @@ struct ti_ads7950_state {
+ 	struct spi_message	ring_msg;
+ 	struct spi_message	scan_single_msg;
+ 
++	/* Lock to protect the spi xfer buffers */
++	struct mutex		slock;
++
+ 	struct regulator	*reg;
+ 	unsigned int		vref_mv;
+ 
+@@ -268,6 +271,7 @@ static irqreturn_t ti_ads7950_trigger_handler(int irq, void *p)
+ 	struct ti_ads7950_state *st = iio_priv(indio_dev);
+ 	int ret;
+ 
++	mutex_lock(&st->slock);
+ 	ret = spi_sync(st->spi, &st->ring_msg);
+ 	if (ret < 0)
+ 		goto out;
+@@ -276,6 +280,7 @@ static irqreturn_t ti_ads7950_trigger_handler(int irq, void *p)
+ 					   iio_get_time_ns(indio_dev));
+ 
+ out:
++	mutex_unlock(&st->slock);
+ 	iio_trigger_notify_done(indio_dev->trig);
+ 
+ 	return IRQ_HANDLED;
+@@ -286,7 +291,7 @@ static int ti_ads7950_scan_direct(struct iio_dev *indio_dev, unsigned int ch)
+ 	struct ti_ads7950_state *st = iio_priv(indio_dev);
+ 	int ret, cmd;
+ 
+-	mutex_lock(&indio_dev->mlock);
++	mutex_lock(&st->slock);
+ 
+ 	cmd = TI_ADS7950_CR_WRITE | TI_ADS7950_CR_CHAN(ch) | st->settings;
+ 	st->single_tx = cmd;
+@@ -298,7 +303,7 @@ static int ti_ads7950_scan_direct(struct iio_dev *indio_dev, unsigned int ch)
+ 	ret = st->single_rx;
+ 
+ out:
+-	mutex_unlock(&indio_dev->mlock);
++	mutex_unlock(&st->slock);
+ 
+ 	return ret;
+ }
+@@ -432,16 +437,19 @@ static int ti_ads7950_probe(struct spi_device *spi)
+ 	if (ACPI_COMPANION(&spi->dev))
+ 		st->vref_mv = TI_ADS7950_VA_MV_ACPI_DEFAULT;
+ 
++	mutex_init(&st->slock);
++
+ 	st->reg = devm_regulator_get(&spi->dev, "vref");
+ 	if (IS_ERR(st->reg)) {
+ 		dev_err(&spi->dev, "Failed get get regulator \"vref\"\n");
+-		return PTR_ERR(st->reg);
++		ret = PTR_ERR(st->reg);
++		goto error_destroy_mutex;
+ 	}
+ 
+ 	ret = regulator_enable(st->reg);
+ 	if (ret) {
+ 		dev_err(&spi->dev, "Failed to enable regulator \"vref\"\n");
+-		return ret;
++		goto error_destroy_mutex;
+ 	}
+ 
+ 	ret = iio_triggered_buffer_setup(indio_dev, NULL,
+@@ -463,6 +471,8 @@ error_cleanup_ring:
+ 	iio_triggered_buffer_cleanup(indio_dev);
+ error_disable_reg:
+ 	regulator_disable(st->reg);
++error_destroy_mutex:
++	mutex_destroy(&st->slock);
+ 
+ 	return ret;
+ }
+@@ -475,6 +485,7 @@ static int ti_ads7950_remove(struct spi_device *spi)
+ 	iio_device_unregister(indio_dev);
+ 	iio_triggered_buffer_cleanup(indio_dev);
+ 	regulator_disable(st->reg);
++	mutex_destroy(&st->slock);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/iio/common/ssp_sensors/ssp_iio.c b/drivers/iio/common/ssp_sensors/ssp_iio.c
+index 645f2e3975db..e38f704d88b7 100644
+--- a/drivers/iio/common/ssp_sensors/ssp_iio.c
++++ b/drivers/iio/common/ssp_sensors/ssp_iio.c
+@@ -81,7 +81,7 @@ int ssp_common_process_data(struct iio_dev *indio_dev, void *buf,
+ 			    unsigned int len, int64_t timestamp)
+ {
+ 	__le32 time;
+-	int64_t calculated_time;
++	int64_t calculated_time = 0;
+ 	struct ssp_sensor_data *spd = iio_priv(indio_dev);
+ 
+ 	if (indio_dev->scan_bytes == 0)
+diff --git a/drivers/iio/magnetometer/hmc5843_i2c.c b/drivers/iio/magnetometer/hmc5843_i2c.c
+index 3de7f4426ac4..86abba5827a2 100644
+--- a/drivers/iio/magnetometer/hmc5843_i2c.c
++++ b/drivers/iio/magnetometer/hmc5843_i2c.c
+@@ -58,8 +58,13 @@ static const struct regmap_config hmc5843_i2c_regmap_config = {
+ static int hmc5843_i2c_probe(struct i2c_client *cli,
+ 			     const struct i2c_device_id *id)
+ {
++	struct regmap *regmap = devm_regmap_init_i2c(cli,
++			&hmc5843_i2c_regmap_config);
++	if (IS_ERR(regmap))
++		return PTR_ERR(regmap);
++
+ 	return hmc5843_common_probe(&cli->dev,
+-			devm_regmap_init_i2c(cli, &hmc5843_i2c_regmap_config),
++			regmap,
+ 			id->driver_data, id->name);
+ }
+ 
+diff --git a/drivers/iio/magnetometer/hmc5843_spi.c b/drivers/iio/magnetometer/hmc5843_spi.c
+index 535f03a70d63..79b2b707f90e 100644
+--- a/drivers/iio/magnetometer/hmc5843_spi.c
++++ b/drivers/iio/magnetometer/hmc5843_spi.c
+@@ -58,6 +58,7 @@ static const struct regmap_config hmc5843_spi_regmap_config = {
+ static int hmc5843_spi_probe(struct spi_device *spi)
+ {
+ 	int ret;
++	struct regmap *regmap;
+ 	const struct spi_device_id *id = spi_get_device_id(spi);
+ 
+ 	spi->mode = SPI_MODE_3;
+@@ -67,8 +68,12 @@ static int hmc5843_spi_probe(struct spi_device *spi)
+ 	if (ret)
+ 		return ret;
+ 
++	regmap = devm_regmap_init_spi(spi, &hmc5843_spi_regmap_config);
++	if (IS_ERR(regmap))
++		return PTR_ERR(regmap);
++
+ 	return hmc5843_common_probe(&spi->dev,
+-			devm_regmap_init_spi(spi, &hmc5843_spi_regmap_config),
++			regmap,
+ 			id->driver_data, id->name);
+ }
+ 
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index 81bded0d37d1..cb482f338950 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -1170,18 +1170,31 @@ static inline bool cma_any_addr(const struct sockaddr *addr)
+ 	return cma_zero_addr(addr) || cma_loopback_addr(addr);
+ }
+ 
+-static int cma_addr_cmp(struct sockaddr *src, struct sockaddr *dst)
++static int cma_addr_cmp(const struct sockaddr *src, const struct sockaddr *dst)
+ {
+ 	if (src->sa_family != dst->sa_family)
+ 		return -1;
+ 
+ 	switch (src->sa_family) {
+ 	case AF_INET:
+-		return ((struct sockaddr_in *) src)->sin_addr.s_addr !=
+-		       ((struct sockaddr_in *) dst)->sin_addr.s_addr;
+-	case AF_INET6:
+-		return ipv6_addr_cmp(&((struct sockaddr_in6 *) src)->sin6_addr,
+-				     &((struct sockaddr_in6 *) dst)->sin6_addr);
++		return ((struct sockaddr_in *)src)->sin_addr.s_addr !=
++		       ((struct sockaddr_in *)dst)->sin_addr.s_addr;
++	case AF_INET6: {
++		struct sockaddr_in6 *src_addr6 = (struct sockaddr_in6 *)src;
++		struct sockaddr_in6 *dst_addr6 = (struct sockaddr_in6 *)dst;
++		bool link_local;
++
++		if (ipv6_addr_cmp(&src_addr6->sin6_addr,
++					  &dst_addr6->sin6_addr))
++			return 1;
++		link_local = ipv6_addr_type(&dst_addr6->sin6_addr) &
++			     IPV6_ADDR_LINKLOCAL;
++		/* Link local must match their scope_ids */
++		return link_local ? (src_addr6->sin6_scope_id !=
++				     dst_addr6->sin6_scope_id) :
++				    0;
++	}
++
+ 	default:
+ 		return ib_addr_cmp(&((struct sockaddr_ib *) src)->sib_addr,
+ 				   &((struct sockaddr_ib *) dst)->sib_addr);
+diff --git a/drivers/infiniband/hw/cxgb4/cm.c b/drivers/infiniband/hw/cxgb4/cm.c
+index 25a81fbb0d4d..f1819b527256 100644
+--- a/drivers/infiniband/hw/cxgb4/cm.c
++++ b/drivers/infiniband/hw/cxgb4/cm.c
+@@ -457,6 +457,8 @@ static struct sk_buff *get_skb(struct sk_buff *skb, int len, gfp_t gfp)
+ 		skb_reset_transport_header(skb);
+ 	} else {
+ 		skb = alloc_skb(len, gfp);
++		if (!skb)
++			return NULL;
+ 	}
+ 	t4_set_arp_err_handler(skb, NULL, NULL);
+ 	return skb;
+diff --git a/drivers/infiniband/hw/hfi1/init.c b/drivers/infiniband/hw/hfi1/init.c
+index c532ceb0bb9a..b66c4fe8151a 100644
+--- a/drivers/infiniband/hw/hfi1/init.c
++++ b/drivers/infiniband/hw/hfi1/init.c
+@@ -797,7 +797,8 @@ static int create_workqueues(struct hfi1_devdata *dd)
+ 			ppd->hfi1_wq =
+ 				alloc_workqueue(
+ 				    "hfi%d_%d",
+-				    WQ_SYSFS | WQ_HIGHPRI | WQ_CPU_INTENSIVE,
++				    WQ_SYSFS | WQ_HIGHPRI | WQ_CPU_INTENSIVE |
++				    WQ_MEM_RECLAIM,
+ 				    HFI1_MAX_ACTIVE_WORKQUEUE_ENTRIES,
+ 				    dd->unit, pidx);
+ 			if (!ppd->hfi1_wq)
+diff --git a/drivers/infiniband/hw/hns/hns_roce_ah.c b/drivers/infiniband/hw/hns/hns_roce_ah.c
+index b3c8c45ec1e3..64e0c69b69c5 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_ah.c
++++ b/drivers/infiniband/hw/hns/hns_roce_ah.c
+@@ -70,7 +70,7 @@ struct ib_ah *hns_roce_create_ah(struct ib_pd *ibpd,
+ 			     HNS_ROCE_VLAN_SL_BIT_MASK) <<
+ 			     HNS_ROCE_VLAN_SL_SHIFT;
+ 
+-	ah->av.port_pd = cpu_to_be32(to_hr_pd(ibpd)->pdn |
++	ah->av.port_pd = cpu_to_le32(to_hr_pd(ibpd)->pdn |
+ 				     (rdma_ah_get_port_num(ah_attr) <<
+ 				     HNS_ROCE_PORT_NUM_SHIFT));
+ 	ah->av.gid_index = grh->sgid_index;
+diff --git a/drivers/md/bcache/alloc.c b/drivers/md/bcache/alloc.c
+index 5002838ea476..f8986effcb50 100644
+--- a/drivers/md/bcache/alloc.c
++++ b/drivers/md/bcache/alloc.c
+@@ -327,10 +327,11 @@ static int bch_allocator_thread(void *arg)
+ 		 * possibly issue discards to them, then we add the bucket to
+ 		 * the free list:
+ 		 */
+-		while (!fifo_empty(&ca->free_inc)) {
++		while (1) {
+ 			long bucket;
+ 
+-			fifo_pop(&ca->free_inc, bucket);
++			if (!fifo_pop(&ca->free_inc, bucket))
++				break;
+ 
+ 			if (ca->discard) {
+ 				mutex_unlock(&ca->set->bucket_lock);
+diff --git a/drivers/md/bcache/journal.c b/drivers/md/bcache/journal.c
+index d3725c17ce3a..6c94fa007796 100644
+--- a/drivers/md/bcache/journal.c
++++ b/drivers/md/bcache/journal.c
+@@ -317,6 +317,18 @@ void bch_journal_mark(struct cache_set *c, struct list_head *list)
+ 	}
+ }
+ 
++bool is_discard_enabled(struct cache_set *s)
++{
++	struct cache *ca;
++	unsigned int i;
++
++	for_each_cache(ca, s, i)
++		if (ca->discard)
++			return true;
++
++	return false;
++}
++
+ int bch_journal_replay(struct cache_set *s, struct list_head *list)
+ {
+ 	int ret = 0, keys = 0, entries = 0;
+@@ -330,9 +342,17 @@ int bch_journal_replay(struct cache_set *s, struct list_head *list)
+ 	list_for_each_entry(i, list, list) {
+ 		BUG_ON(i->pin && atomic_read(i->pin) != 1);
+ 
+-		cache_set_err_on(n != i->j.seq, s,
+-"bcache: journal entries %llu-%llu missing! (replaying %llu-%llu)",
+-				 n, i->j.seq - 1, start, end);
++		if (n != i->j.seq) {
++			if (n == start && is_discard_enabled(s))
++				pr_info("bcache: journal entries %llu-%llu may be discarded! (replaying %llu-%llu)",
++					n, i->j.seq - 1, start, end);
++			else {
++				pr_err("bcache: journal entries %llu-%llu missing! (replaying %llu-%llu)",
++					n, i->j.seq - 1, start, end);
++				ret = -EIO;
++				goto err;
++			}
++		}
+ 
+ 		for (k = i->j.start;
+ 		     k < bset_bkey_last(&i->j);
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index ee36e6b3bcad..0148dd931f68 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -1775,13 +1775,15 @@ err:
+ 	return NULL;
+ }
+ 
+-static void run_cache_set(struct cache_set *c)
++static int run_cache_set(struct cache_set *c)
+ {
+ 	const char *err = "cannot allocate memory";
+ 	struct cached_dev *dc, *t;
+ 	struct cache *ca;
+ 	struct closure cl;
+ 	unsigned int i;
++	LIST_HEAD(journal);
++	struct journal_replay *l;
+ 
+ 	closure_init_stack(&cl);
+ 
+@@ -1869,7 +1871,9 @@ static void run_cache_set(struct cache_set *c)
+ 		if (j->version < BCACHE_JSET_VERSION_UUID)
+ 			__uuid_write(c);
+ 
+-		bch_journal_replay(c, &journal);
++		err = "bcache: replay journal failed";
++		if (bch_journal_replay(c, &journal))
++			goto err;
+ 	} else {
+ 		pr_notice("invalidating existing data");
+ 
+@@ -1937,11 +1941,19 @@ static void run_cache_set(struct cache_set *c)
+ 	flash_devs_run(c);
+ 
+ 	set_bit(CACHE_SET_RUNNING, &c->flags);
+-	return;
++	return 0;
+ err:
++	while (!list_empty(&journal)) {
++		l = list_first_entry(&journal, struct journal_replay, list);
++		list_del(&l->list);
++		kfree(l);
++	}
++
+ 	closure_sync(&cl);
+ 	/* XXX: test this, it's broken */
+ 	bch_cache_set_error(c, "%s", err);
++
++	return -EIO;
+ }
+ 
+ static bool can_attach_cache(struct cache *ca, struct cache_set *c)
+@@ -2005,8 +2017,11 @@ found:
+ 	ca->set->cache[ca->sb.nr_this_dev] = ca;
+ 	c->cache_by_alloc[c->caches_loaded++] = ca;
+ 
+-	if (c->caches_loaded == c->sb.nr_in_set)
+-		run_cache_set(c);
++	if (c->caches_loaded == c->sb.nr_in_set) {
++		err = "failed to run cache set";
++		if (run_cache_set(c) < 0)
++			goto err;
++	}
+ 
+ 	return NULL;
+ err:
+diff --git a/drivers/media/common/videobuf2/videobuf2-core.c b/drivers/media/common/videobuf2/videobuf2-core.c
+index 70e8c3366f9c..b68bf44216f4 100644
+--- a/drivers/media/common/videobuf2/videobuf2-core.c
++++ b/drivers/media/common/videobuf2/videobuf2-core.c
+@@ -672,6 +672,11 @@ int vb2_core_reqbufs(struct vb2_queue *q, enum vb2_memory memory,
+ 		return -EBUSY;
+ 	}
+ 
++	if (q->waiting_in_dqbuf && *count) {
++		dprintk(1, "another dup()ped fd is waiting for a buffer\n");
++		return -EBUSY;
++	}
++
+ 	if (*count == 0 || q->num_buffers != 0 ||
+ 	    (q->memory != VB2_MEMORY_UNKNOWN && q->memory != memory)) {
+ 		/*
+@@ -807,6 +812,10 @@ int vb2_core_create_bufs(struct vb2_queue *q, enum vb2_memory memory,
+ 	}
+ 
+ 	if (!q->num_buffers) {
++		if (q->waiting_in_dqbuf && *count) {
++			dprintk(1, "another dup()ped fd is waiting for a buffer\n");
++			return -EBUSY;
++		}
+ 		memset(q->alloc_devs, 0, sizeof(q->alloc_devs));
+ 		q->memory = memory;
+ 		q->waiting_for_buffers = !q->is_output;
+@@ -1638,6 +1647,11 @@ static int __vb2_wait_for_done_vb(struct vb2_queue *q, int nonblocking)
+ 	for (;;) {
+ 		int ret;
+ 
++		if (q->waiting_in_dqbuf) {
++			dprintk(1, "another dup()ped fd is waiting for a buffer\n");
++			return -EBUSY;
++		}
++
+ 		if (!q->streaming) {
+ 			dprintk(1, "streaming off, will not wait for buffers\n");
+ 			return -EINVAL;
+@@ -1665,6 +1679,7 @@ static int __vb2_wait_for_done_vb(struct vb2_queue *q, int nonblocking)
+ 			return -EAGAIN;
+ 		}
+ 
++		q->waiting_in_dqbuf = 1;
+ 		/*
+ 		 * We are streaming and blocking, wait for another buffer to
+ 		 * become ready or for streamoff. Driver's lock is released to
+@@ -1685,6 +1700,7 @@ static int __vb2_wait_for_done_vb(struct vb2_queue *q, int nonblocking)
+ 		 * the locks or return an error if one occurred.
+ 		 */
+ 		call_void_qop(q, wait_finish, q);
++		q->waiting_in_dqbuf = 0;
+ 		if (ret) {
+ 			dprintk(1, "sleep was interrupted\n");
+ 			return ret;
+@@ -2572,6 +2588,12 @@ static size_t __vb2_perform_fileio(struct vb2_queue *q, char __user *data, size_
+ 	if (!data)
+ 		return -EINVAL;
+ 
++	if (q->waiting_in_dqbuf) {
++		dprintk(3, "another dup()ped fd is %s\n",
++			read ? "reading" : "writing");
++		return -EBUSY;
++	}
++
+ 	/*
+ 	 * Initialize emulator on first call.
+ 	 */
+diff --git a/drivers/media/dvb-frontends/m88ds3103.c b/drivers/media/dvb-frontends/m88ds3103.c
+index 123f2a33738b..403f42806455 100644
+--- a/drivers/media/dvb-frontends/m88ds3103.c
++++ b/drivers/media/dvb-frontends/m88ds3103.c
+@@ -309,6 +309,9 @@ static int m88ds3103_set_frontend(struct dvb_frontend *fe)
+ 	u16 u16tmp;
+ 	u32 tuner_frequency_khz, target_mclk;
+ 	s32 s32tmp;
++	static const struct reg_sequence reset_buf[] = {
++		{0x07, 0x80}, {0x07, 0x00}
++	};
+ 
+ 	dev_dbg(&client->dev,
+ 		"delivery_system=%d modulation=%d frequency=%u symbol_rate=%d inversion=%d pilot=%d rolloff=%d\n",
+@@ -321,11 +324,7 @@ static int m88ds3103_set_frontend(struct dvb_frontend *fe)
+ 	}
+ 
+ 	/* reset */
+-	ret = regmap_write(dev->regmap, 0x07, 0x80);
+-	if (ret)
+-		goto err;
+-
+-	ret = regmap_write(dev->regmap, 0x07, 0x00);
++	ret = regmap_multi_reg_write(dev->regmap, reset_buf, 2);
+ 	if (ret)
+ 		goto err;
+ 
+diff --git a/drivers/media/dvb-frontends/si2165.c b/drivers/media/dvb-frontends/si2165.c
+index feacd8da421d..d55d8f169dca 100644
+--- a/drivers/media/dvb-frontends/si2165.c
++++ b/drivers/media/dvb-frontends/si2165.c
+@@ -275,18 +275,20 @@ static u32 si2165_get_fe_clk(struct si2165_state *state)
+ 
+ static int si2165_wait_init_done(struct si2165_state *state)
+ {
+-	int ret = -EINVAL;
++	int ret;
+ 	u8 val = 0;
+ 	int i;
+ 
+ 	for (i = 0; i < 3; ++i) {
+-		si2165_readreg8(state, REG_INIT_DONE, &val);
++		ret = si2165_readreg8(state, REG_INIT_DONE, &val);
++		if (ret < 0)
++			return ret;
+ 		if (val == 0x01)
+ 			return 0;
+ 		usleep_range(1000, 50000);
+ 	}
+ 	dev_err(&state->client->dev, "init_done was not set\n");
+-	return ret;
++	return -EINVAL;
+ }
+ 
+ static int si2165_upload_firmware_block(struct si2165_state *state,
+diff --git a/drivers/media/i2c/ov2659.c b/drivers/media/i2c/ov2659.c
+index 799acce803fe..a1e9a980a445 100644
+--- a/drivers/media/i2c/ov2659.c
++++ b/drivers/media/i2c/ov2659.c
+@@ -1117,8 +1117,10 @@ static int ov2659_set_fmt(struct v4l2_subdev *sd,
+ 		if (ov2659_formats[index].code == mf->code)
+ 			break;
+ 
+-	if (index < 0)
+-		return -EINVAL;
++	if (index < 0) {
++		index = 0;
++		mf->code = ov2659_formats[index].code;
++	}
+ 
+ 	mf->colorspace = V4L2_COLORSPACE_SRGB;
+ 	mf->field = V4L2_FIELD_NONE;
+diff --git a/drivers/media/i2c/ov6650.c b/drivers/media/i2c/ov6650.c
+index 2d3f7e00b129..ba54ceb0b726 100644
+--- a/drivers/media/i2c/ov6650.c
++++ b/drivers/media/i2c/ov6650.c
+@@ -810,9 +810,16 @@ static int ov6650_video_probe(struct i2c_client *client)
+ 	u8		pidh, pidl, midh, midl;
+ 	int		ret;
+ 
++	priv->clk = v4l2_clk_get(&client->dev, NULL);
++	if (IS_ERR(priv->clk)) {
++		ret = PTR_ERR(priv->clk);
++		dev_err(&client->dev, "v4l2_clk request err: %d\n", ret);
++		return ret;
++	}
++
+ 	ret = ov6650_s_power(&priv->subdev, 1);
+ 	if (ret < 0)
+-		return ret;
++		goto eclkput;
+ 
+ 	msleep(20);
+ 
+@@ -849,6 +856,11 @@ static int ov6650_video_probe(struct i2c_client *client)
+ 
+ done:
+ 	ov6650_s_power(&priv->subdev, 0);
++	if (!ret)
++		return 0;
++eclkput:
++	v4l2_clk_put(priv->clk);
++
+ 	return ret;
+ }
+ 
+@@ -991,18 +1003,9 @@ static int ov6650_probe(struct i2c_client *client,
+ 	priv->code	  = MEDIA_BUS_FMT_YUYV8_2X8;
+ 	priv->colorspace  = V4L2_COLORSPACE_JPEG;
+ 
+-	priv->clk = v4l2_clk_get(&client->dev, NULL);
+-	if (IS_ERR(priv->clk)) {
+-		ret = PTR_ERR(priv->clk);
+-		goto eclkget;
+-	}
+-
+ 	ret = ov6650_video_probe(client);
+-	if (ret) {
+-		v4l2_clk_put(priv->clk);
+-eclkget:
++	if (ret)
+ 		v4l2_ctrl_handler_free(&priv->hdl);
+-	}
+ 
+ 	return ret;
+ }
+diff --git a/drivers/media/pci/saa7146/hexium_gemini.c b/drivers/media/pci/saa7146/hexium_gemini.c
+index 5817d9cde4d0..6d8e4afe9673 100644
+--- a/drivers/media/pci/saa7146/hexium_gemini.c
++++ b/drivers/media/pci/saa7146/hexium_gemini.c
+@@ -270,9 +270,8 @@ static int hexium_attach(struct saa7146_dev *dev, struct saa7146_pci_extension_d
+ 	/* enable i2c-port pins */
+ 	saa7146_write(dev, MC1, (MASK_08 | MASK_24 | MASK_10 | MASK_26));
+ 
+-	hexium->i2c_adapter = (struct i2c_adapter) {
+-		.name = "hexium gemini",
+-	};
++	strscpy(hexium->i2c_adapter.name, "hexium gemini",
++		sizeof(hexium->i2c_adapter.name));
+ 	saa7146_i2c_adapter_prepare(dev, &hexium->i2c_adapter, SAA7146_I2C_BUS_BIT_RATE_480);
+ 	if (i2c_add_adapter(&hexium->i2c_adapter) < 0) {
+ 		DEB_S("cannot register i2c-device. skipping.\n");
+diff --git a/drivers/media/pci/saa7146/hexium_orion.c b/drivers/media/pci/saa7146/hexium_orion.c
+index 0a05176c18ab..a794f9e5f990 100644
+--- a/drivers/media/pci/saa7146/hexium_orion.c
++++ b/drivers/media/pci/saa7146/hexium_orion.c
+@@ -231,9 +231,8 @@ static int hexium_probe(struct saa7146_dev *dev)
+ 	saa7146_write(dev, DD1_STREAM_B, 0x00000000);
+ 	saa7146_write(dev, MC2, (MASK_09 | MASK_25 | MASK_10 | MASK_26));
+ 
+-	hexium->i2c_adapter = (struct i2c_adapter) {
+-		.name = "hexium orion",
+-	};
++	strscpy(hexium->i2c_adapter.name, "hexium orion",
++		sizeof(hexium->i2c_adapter.name));
+ 	saa7146_i2c_adapter_prepare(dev, &hexium->i2c_adapter, SAA7146_I2C_BUS_BIT_RATE_480);
+ 	if (i2c_add_adapter(&hexium->i2c_adapter) < 0) {
+ 		DEB_S("cannot register i2c-device. skipping.\n");
+diff --git a/drivers/media/platform/coda/coda-bit.c b/drivers/media/platform/coda/coda-bit.c
+index 8e0194993a52..b9fc58916110 100644
+--- a/drivers/media/platform/coda/coda-bit.c
++++ b/drivers/media/platform/coda/coda-bit.c
+@@ -2006,6 +2006,9 @@ static int coda_prepare_decode(struct coda_ctx *ctx)
+ 	/* Clear decode success flag */
+ 	coda_write(dev, 0, CODA_RET_DEC_PIC_SUCCESS);
+ 
++	/* Clear error return value */
++	coda_write(dev, 0, CODA_RET_DEC_PIC_ERR_MB);
++
+ 	trace_coda_dec_pic_run(ctx, meta);
+ 
+ 	coda_command_async(ctx, CODA_COMMAND_PIC_RUN);
+diff --git a/drivers/media/platform/stm32/stm32-dcmi.c b/drivers/media/platform/stm32/stm32-dcmi.c
+index 6732874114cf..a33d497bd5b7 100644
+--- a/drivers/media/platform/stm32/stm32-dcmi.c
++++ b/drivers/media/platform/stm32/stm32-dcmi.c
+@@ -811,6 +811,9 @@ static int dcmi_try_fmt(struct stm32_dcmi *dcmi, struct v4l2_format *f,
+ 
+ 	sd_fmt = find_format_by_fourcc(dcmi, pix->pixelformat);
+ 	if (!sd_fmt) {
++		if (!dcmi->num_of_sd_formats)
++			return -ENODATA;
++
+ 		sd_fmt = dcmi->sd_formats[dcmi->num_of_sd_formats - 1];
+ 		pix->pixelformat = sd_fmt->fourcc;
+ 	}
+@@ -989,6 +992,9 @@ static int dcmi_set_sensor_format(struct stm32_dcmi *dcmi,
+ 
+ 	sd_fmt = find_format_by_fourcc(dcmi, pix->pixelformat);
+ 	if (!sd_fmt) {
++		if (!dcmi->num_of_sd_formats)
++			return -ENODATA;
++
+ 		sd_fmt = dcmi->sd_formats[dcmi->num_of_sd_formats - 1];
+ 		pix->pixelformat = sd_fmt->fourcc;
+ 	}
+@@ -1645,7 +1651,7 @@ static int dcmi_probe(struct platform_device *pdev)
+ 	dcmi->rstc = devm_reset_control_get_exclusive(&pdev->dev, NULL);
+ 	if (IS_ERR(dcmi->rstc)) {
+ 		dev_err(&pdev->dev, "Could not get reset control\n");
+-		return -ENODEV;
++		return PTR_ERR(dcmi->rstc);
+ 	}
+ 
+ 	/* Get bus characteristics from devicetree */
+@@ -1660,7 +1666,7 @@ static int dcmi_probe(struct platform_device *pdev)
+ 	of_node_put(np);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "Could not parse the endpoint\n");
+-		return -ENODEV;
++		return ret;
+ 	}
+ 
+ 	if (ep.bus_type == V4L2_MBUS_CSI2_DPHY) {
+@@ -1673,8 +1679,9 @@ static int dcmi_probe(struct platform_device *pdev)
+ 
+ 	irq = platform_get_irq(pdev, 0);
+ 	if (irq <= 0) {
+-		dev_err(&pdev->dev, "Could not get irq\n");
+-		return -ENODEV;
++		if (irq != -EPROBE_DEFER)
++			dev_err(&pdev->dev, "Could not get irq\n");
++		return irq;
+ 	}
+ 
+ 	dcmi->res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+@@ -1694,12 +1701,13 @@ static int dcmi_probe(struct platform_device *pdev)
+ 					dev_name(&pdev->dev), dcmi);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "Unable to request irq %d\n", irq);
+-		return -ENODEV;
++		return ret;
+ 	}
+ 
+ 	mclk = devm_clk_get(&pdev->dev, "mclk");
+ 	if (IS_ERR(mclk)) {
+-		dev_err(&pdev->dev, "Unable to get mclk\n");
++		if (PTR_ERR(mclk) != -EPROBE_DEFER)
++			dev_err(&pdev->dev, "Unable to get mclk\n");
+ 		return PTR_ERR(mclk);
+ 	}
+ 
+diff --git a/drivers/media/platform/video-mux.c b/drivers/media/platform/video-mux.c
+index c33900e3c23e..4135165cdabe 100644
+--- a/drivers/media/platform/video-mux.c
++++ b/drivers/media/platform/video-mux.c
+@@ -399,9 +399,14 @@ static int video_mux_probe(struct platform_device *pdev)
+ 	vmux->active = -1;
+ 	vmux->pads = devm_kcalloc(dev, num_pads, sizeof(*vmux->pads),
+ 				  GFP_KERNEL);
++	if (!vmux->pads)
++		return -ENOMEM;
++
+ 	vmux->format_mbus = devm_kcalloc(dev, num_pads,
+ 					 sizeof(*vmux->format_mbus),
+ 					 GFP_KERNEL);
++	if (!vmux->format_mbus)
++		return -ENOMEM;
+ 
+ 	for (i = 0; i < num_pads; i++) {
+ 		vmux->pads[i].flags = (i < num_pads - 1) ? MEDIA_PAD_FL_SINK
+diff --git a/drivers/media/platform/vim2m.c b/drivers/media/platform/vim2m.c
+index 89d9c4c21037..333bd34e47c4 100644
+--- a/drivers/media/platform/vim2m.c
++++ b/drivers/media/platform/vim2m.c
+@@ -984,6 +984,15 @@ static int vim2m_release(struct file *file)
+ 	return 0;
+ }
+ 
++static void vim2m_device_release(struct video_device *vdev)
++{
++	struct vim2m_dev *dev = container_of(vdev, struct vim2m_dev, vfd);
++
++	v4l2_device_unregister(&dev->v4l2_dev);
++	v4l2_m2m_release(dev->m2m_dev);
++	kfree(dev);
++}
++
+ static const struct v4l2_file_operations vim2m_fops = {
+ 	.owner		= THIS_MODULE,
+ 	.open		= vim2m_open,
+@@ -999,7 +1008,7 @@ static const struct video_device vim2m_videodev = {
+ 	.fops		= &vim2m_fops,
+ 	.ioctl_ops	= &vim2m_ioctl_ops,
+ 	.minor		= -1,
+-	.release	= video_device_release_empty,
++	.release	= vim2m_device_release,
+ 	.device_caps	= V4L2_CAP_VIDEO_M2M | V4L2_CAP_STREAMING,
+ };
+ 
+@@ -1020,7 +1029,7 @@ static int vim2m_probe(struct platform_device *pdev)
+ 	struct video_device *vfd;
+ 	int ret;
+ 
+-	dev = devm_kzalloc(&pdev->dev, sizeof(*dev), GFP_KERNEL);
++	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+ 	if (!dev)
+ 		return -ENOMEM;
+ 
+@@ -1028,7 +1037,7 @@ static int vim2m_probe(struct platform_device *pdev)
+ 
+ 	ret = v4l2_device_register(&pdev->dev, &dev->v4l2_dev);
+ 	if (ret)
+-		return ret;
++		goto error_free;
+ 
+ 	atomic_set(&dev->num_inst, 0);
+ 	mutex_init(&dev->dev_mutex);
+@@ -1042,7 +1051,7 @@ static int vim2m_probe(struct platform_device *pdev)
+ 	ret = video_register_device(vfd, VFL_TYPE_GRABBER, 0);
+ 	if (ret) {
+ 		v4l2_err(&dev->v4l2_dev, "Failed to register video device\n");
+-		goto unreg_v4l2;
++		goto error_v4l2;
+ 	}
+ 
+ 	video_set_drvdata(vfd, dev);
+@@ -1055,7 +1064,7 @@ static int vim2m_probe(struct platform_device *pdev)
+ 	if (IS_ERR(dev->m2m_dev)) {
+ 		v4l2_err(&dev->v4l2_dev, "Failed to init mem2mem device\n");
+ 		ret = PTR_ERR(dev->m2m_dev);
+-		goto unreg_dev;
++		goto error_dev;
+ 	}
+ 
+ #ifdef CONFIG_MEDIA_CONTROLLER
+@@ -1069,27 +1078,29 @@ static int vim2m_probe(struct platform_device *pdev)
+ 			vfd, MEDIA_ENT_F_PROC_VIDEO_SCALER);
+ 	if (ret) {
+ 		v4l2_err(&dev->v4l2_dev, "Failed to init mem2mem media controller\n");
+-		goto unreg_m2m;
++		goto error_m2m;
+ 	}
+ 
+ 	ret = media_device_register(&dev->mdev);
+ 	if (ret) {
+ 		v4l2_err(&dev->v4l2_dev, "Failed to register mem2mem media device\n");
+-		goto unreg_m2m_mc;
++		goto error_m2m_mc;
+ 	}
+ #endif
+ 	return 0;
+ 
+ #ifdef CONFIG_MEDIA_CONTROLLER
+-unreg_m2m_mc:
++error_m2m_mc:
+ 	v4l2_m2m_unregister_media_controller(dev->m2m_dev);
+-unreg_m2m:
++error_m2m:
+ 	v4l2_m2m_release(dev->m2m_dev);
+ #endif
+-unreg_dev:
++error_dev:
+ 	video_unregister_device(&dev->vfd);
+-unreg_v4l2:
++error_v4l2:
+ 	v4l2_device_unregister(&dev->v4l2_dev);
++error_free:
++	kfree(dev);
+ 
+ 	return ret;
+ }
+@@ -1105,9 +1116,7 @@ static int vim2m_remove(struct platform_device *pdev)
+ 	v4l2_m2m_unregister_media_controller(dev->m2m_dev);
+ 	media_device_cleanup(&dev->mdev);
+ #endif
+-	v4l2_m2m_release(dev->m2m_dev);
+ 	video_unregister_device(&dev->vfd);
+-	v4l2_device_unregister(&dev->v4l2_dev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/media/platform/vimc/vimc-core.c b/drivers/media/platform/vimc/vimc-core.c
+index ce809d2e3d53..64eb424c15ab 100644
+--- a/drivers/media/platform/vimc/vimc-core.c
++++ b/drivers/media/platform/vimc/vimc-core.c
+@@ -303,6 +303,8 @@ static int vimc_probe(struct platform_device *pdev)
+ 
+ 	dev_dbg(&pdev->dev, "probe");
+ 
++	memset(&vimc->mdev, 0, sizeof(vimc->mdev));
++
+ 	/* Create platform_device for each entity in the topology*/
+ 	vimc->subdevs = devm_kcalloc(&vimc->pdev.dev, vimc->pipe_cfg->num_ents,
+ 				     sizeof(*vimc->subdevs), GFP_KERNEL);
+diff --git a/drivers/media/platform/vimc/vimc-streamer.c b/drivers/media/platform/vimc/vimc-streamer.c
+index fcc897fb247b..392754c18046 100644
+--- a/drivers/media/platform/vimc/vimc-streamer.c
++++ b/drivers/media/platform/vimc/vimc-streamer.c
+@@ -120,7 +120,6 @@ static int vimc_streamer_thread(void *data)
+ 	int i;
+ 
+ 	set_freezable();
+-	set_current_state(TASK_UNINTERRUPTIBLE);
+ 
+ 	for (;;) {
+ 		try_to_freeze();
+@@ -137,6 +136,7 @@ static int vimc_streamer_thread(void *data)
+ 				break;
+ 		}
+ 		//wait for 60hz
++		set_current_state(TASK_UNINTERRUPTIBLE);
+ 		schedule_timeout(HZ / 60);
+ 	}
+ 
+diff --git a/drivers/media/platform/vivid/vivid-vid-cap.c b/drivers/media/platform/vivid/vivid-vid-cap.c
+index c059fc12668a..234dee12256f 100644
+--- a/drivers/media/platform/vivid/vivid-vid-cap.c
++++ b/drivers/media/platform/vivid/vivid-vid-cap.c
+@@ -1003,7 +1003,7 @@ int vivid_vid_cap_s_selection(struct file *file, void *fh, struct v4l2_selection
+ 		v4l2_rect_map_inside(&s->r, &dev->fmt_cap_rect);
+ 		if (dev->bitmap_cap && (compose->width != s->r.width ||
+ 					compose->height != s->r.height)) {
+-			kfree(dev->bitmap_cap);
++			vfree(dev->bitmap_cap);
+ 			dev->bitmap_cap = NULL;
+ 		}
+ 		*compose = s->r;
+diff --git a/drivers/media/radio/wl128x/fmdrv_common.c b/drivers/media/radio/wl128x/fmdrv_common.c
+index 800d69c3f80b..1cf4019689a5 100644
+--- a/drivers/media/radio/wl128x/fmdrv_common.c
++++ b/drivers/media/radio/wl128x/fmdrv_common.c
+@@ -489,7 +489,8 @@ int fmc_send_cmd(struct fmdev *fmdev, u8 fm_op, u16 type, void *payload,
+ 		return -EIO;
+ 	}
+ 	/* Send response data to caller */
+-	if (response != NULL && response_len != NULL && evt_hdr->dlen) {
++	if (response != NULL && response_len != NULL && evt_hdr->dlen &&
++	    evt_hdr->dlen <= payload_len) {
+ 		/* Skip header info and copy only response data */
+ 		skb_pull(skb, sizeof(struct fm_event_msg_hdr));
+ 		memcpy(response, skb->data, evt_hdr->dlen);
+@@ -583,6 +584,8 @@ static void fm_irq_handle_flag_getcmd_resp(struct fmdev *fmdev)
+ 		return;
+ 
+ 	fm_evt_hdr = (void *)skb->data;
++	if (fm_evt_hdr->dlen > sizeof(fmdev->irq_info.flag))
++		return;
+ 
+ 	/* Skip header info and copy only response data */
+ 	skb_pull(skb, sizeof(struct fm_event_msg_hdr));
+@@ -1308,7 +1311,7 @@ static int load_default_rx_configuration(struct fmdev *fmdev)
+ static int fm_power_up(struct fmdev *fmdev, u8 mode)
+ {
+ 	u16 payload;
+-	__be16 asic_id, asic_ver;
++	__be16 asic_id = 0, asic_ver = 0;
+ 	int resp_len, ret;
+ 	u8 fw_name[50];
+ 
+diff --git a/drivers/media/rc/serial_ir.c b/drivers/media/rc/serial_ir.c
+index ffe2c672d105..3998ba29beb6 100644
+--- a/drivers/media/rc/serial_ir.c
++++ b/drivers/media/rc/serial_ir.c
+@@ -773,8 +773,6 @@ static void serial_ir_exit(void)
+ 
+ static int __init serial_ir_init_module(void)
+ {
+-	int result;
+-
+ 	switch (type) {
+ 	case IR_HOMEBREW:
+ 	case IR_IRDEO:
+@@ -802,12 +800,7 @@ static int __init serial_ir_init_module(void)
+ 	if (sense != -1)
+ 		sense = !!sense;
+ 
+-	result = serial_ir_init();
+-	if (!result)
+-		return 0;
+-
+-	serial_ir_exit();
+-	return result;
++	return serial_ir_init();
+ }
+ 
+ static void __exit serial_ir_exit_module(void)
+diff --git a/drivers/media/usb/au0828/au0828-video.c b/drivers/media/usb/au0828/au0828-video.c
+index 7876c897cc1d..222723d946e4 100644
+--- a/drivers/media/usb/au0828/au0828-video.c
++++ b/drivers/media/usb/au0828/au0828-video.c
+@@ -758,6 +758,9 @@ static int au0828_analog_stream_enable(struct au0828_dev *d)
+ 
+ 	dprintk(1, "au0828_analog_stream_enable called\n");
+ 
++	if (test_bit(DEV_DISCONNECTED, &d->dev_state))
++		return -ENODEV;
++
+ 	iface = usb_ifnum_to_if(d->usbdev, 0);
+ 	if (iface && iface->cur_altsetting->desc.bAlternateSetting != 5) {
+ 		dprintk(1, "Changing intf#0 to alt 5\n");
+@@ -839,9 +842,9 @@ int au0828_start_analog_streaming(struct vb2_queue *vq, unsigned int count)
+ 			return rc;
+ 		}
+ 
++		v4l2_device_call_all(&dev->v4l2_dev, 0, video, s_stream, 1);
++
+ 		if (vq->type == V4L2_BUF_TYPE_VIDEO_CAPTURE) {
+-			v4l2_device_call_all(&dev->v4l2_dev, 0, video,
+-						s_stream, 1);
+ 			dev->vid_timeout_running = 1;
+ 			mod_timer(&dev->vid_timeout, jiffies + (HZ / 10));
+ 		} else if (vq->type == V4L2_BUF_TYPE_VBI_CAPTURE) {
+@@ -861,10 +864,11 @@ static void au0828_stop_streaming(struct vb2_queue *vq)
+ 
+ 	dprintk(1, "au0828_stop_streaming called %d\n", dev->streaming_users);
+ 
+-	if (dev->streaming_users-- == 1)
++	if (dev->streaming_users-- == 1) {
+ 		au0828_uninit_isoc(dev);
++		v4l2_device_call_all(&dev->v4l2_dev, 0, video, s_stream, 0);
++	}
+ 
+-	v4l2_device_call_all(&dev->v4l2_dev, 0, video, s_stream, 0);
+ 	dev->vid_timeout_running = 0;
+ 	del_timer_sync(&dev->vid_timeout);
+ 
+@@ -893,8 +897,10 @@ void au0828_stop_vbi_streaming(struct vb2_queue *vq)
+ 	dprintk(1, "au0828_stop_vbi_streaming called %d\n",
+ 		dev->streaming_users);
+ 
+-	if (dev->streaming_users-- == 1)
++	if (dev->streaming_users-- == 1) {
+ 		au0828_uninit_isoc(dev);
++		v4l2_device_call_all(&dev->v4l2_dev, 0, video, s_stream, 0);
++	}
+ 
+ 	spin_lock_irqsave(&dev->slock, flags);
+ 	if (dev->isoc_ctl.vbi_buf != NULL) {
+diff --git a/drivers/media/usb/cpia2/cpia2_v4l.c b/drivers/media/usb/cpia2/cpia2_v4l.c
+index 748739c2b8b2..39b20a6ae350 100644
+--- a/drivers/media/usb/cpia2/cpia2_v4l.c
++++ b/drivers/media/usb/cpia2/cpia2_v4l.c
+@@ -1245,8 +1245,7 @@ static int __init cpia2_init(void)
+ 	LOG("%s v%s\n",
+ 	    ABOUT, CPIA_VERSION);
+ 	check_parameters();
+-	cpia2_usb_init();
+-	return 0;
++	return cpia2_usb_init();
+ }
+ 
+ 
+diff --git a/drivers/media/usb/dvb-usb-v2/dvbsky.c b/drivers/media/usb/dvb-usb-v2/dvbsky.c
+index e28bd8836751..ae0814dd202a 100644
+--- a/drivers/media/usb/dvb-usb-v2/dvbsky.c
++++ b/drivers/media/usb/dvb-usb-v2/dvbsky.c
+@@ -615,16 +615,18 @@ static int dvbsky_init(struct dvb_usb_device *d)
+ 	return 0;
+ }
+ 
+-static void dvbsky_exit(struct dvb_usb_device *d)
++static int dvbsky_frontend_detach(struct dvb_usb_adapter *adap)
+ {
++	struct dvb_usb_device *d = adap_to_d(adap);
+ 	struct dvbsky_state *state = d_to_priv(d);
+-	struct dvb_usb_adapter *adap = &d->adapter[0];
++
++	dev_dbg(&d->udev->dev, "%s: adap=%d\n", __func__, adap->id);
+ 
+ 	dvb_module_release(state->i2c_client_tuner);
+ 	dvb_module_release(state->i2c_client_demod);
+ 	dvb_module_release(state->i2c_client_ci);
+ 
+-	adap->fe[0] = NULL;
++	return 0;
+ }
+ 
+ /* DVB USB Driver stuff */
+@@ -640,11 +642,11 @@ static struct dvb_usb_device_properties dvbsky_s960_props = {
+ 
+ 	.i2c_algo         = &dvbsky_i2c_algo,
+ 	.frontend_attach  = dvbsky_s960_attach,
++	.frontend_detach  = dvbsky_frontend_detach,
+ 	.init             = dvbsky_init,
+ 	.get_rc_config    = dvbsky_get_rc_config,
+ 	.streaming_ctrl   = dvbsky_streaming_ctrl,
+ 	.identify_state	  = dvbsky_identify_state,
+-	.exit             = dvbsky_exit,
+ 	.read_mac_address = dvbsky_read_mac_addr,
+ 
+ 	.num_adapters = 1,
+@@ -667,11 +669,11 @@ static struct dvb_usb_device_properties dvbsky_s960c_props = {
+ 
+ 	.i2c_algo         = &dvbsky_i2c_algo,
+ 	.frontend_attach  = dvbsky_s960c_attach,
++	.frontend_detach  = dvbsky_frontend_detach,
+ 	.init             = dvbsky_init,
+ 	.get_rc_config    = dvbsky_get_rc_config,
+ 	.streaming_ctrl   = dvbsky_streaming_ctrl,
+ 	.identify_state	  = dvbsky_identify_state,
+-	.exit             = dvbsky_exit,
+ 	.read_mac_address = dvbsky_read_mac_addr,
+ 
+ 	.num_adapters = 1,
+@@ -694,11 +696,11 @@ static struct dvb_usb_device_properties dvbsky_t680c_props = {
+ 
+ 	.i2c_algo         = &dvbsky_i2c_algo,
+ 	.frontend_attach  = dvbsky_t680c_attach,
++	.frontend_detach  = dvbsky_frontend_detach,
+ 	.init             = dvbsky_init,
+ 	.get_rc_config    = dvbsky_get_rc_config,
+ 	.streaming_ctrl   = dvbsky_streaming_ctrl,
+ 	.identify_state	  = dvbsky_identify_state,
+-	.exit             = dvbsky_exit,
+ 	.read_mac_address = dvbsky_read_mac_addr,
+ 
+ 	.num_adapters = 1,
+@@ -721,11 +723,11 @@ static struct dvb_usb_device_properties dvbsky_t330_props = {
+ 
+ 	.i2c_algo         = &dvbsky_i2c_algo,
+ 	.frontend_attach  = dvbsky_t330_attach,
++	.frontend_detach  = dvbsky_frontend_detach,
+ 	.init             = dvbsky_init,
+ 	.get_rc_config    = dvbsky_get_rc_config,
+ 	.streaming_ctrl   = dvbsky_streaming_ctrl,
+ 	.identify_state	  = dvbsky_identify_state,
+-	.exit             = dvbsky_exit,
+ 	.read_mac_address = dvbsky_read_mac_addr,
+ 
+ 	.num_adapters = 1,
+@@ -748,11 +750,11 @@ static struct dvb_usb_device_properties mygica_t230c_props = {
+ 
+ 	.i2c_algo         = &dvbsky_i2c_algo,
+ 	.frontend_attach  = dvbsky_mygica_t230c_attach,
++	.frontend_detach  = dvbsky_frontend_detach,
+ 	.init             = dvbsky_init,
+ 	.get_rc_config    = dvbsky_get_rc_config,
+ 	.streaming_ctrl   = dvbsky_streaming_ctrl,
+ 	.identify_state	  = dvbsky_identify_state,
+-	.exit             = dvbsky_exit,
+ 
+ 	.num_adapters = 1,
+ 	.adapter = {
+diff --git a/drivers/media/usb/go7007/go7007-fw.c b/drivers/media/usb/go7007/go7007-fw.c
+index 24f5b615dc7a..dfa9f899d0c2 100644
+--- a/drivers/media/usb/go7007/go7007-fw.c
++++ b/drivers/media/usb/go7007/go7007-fw.c
+@@ -1499,8 +1499,8 @@ static int modet_to_package(struct go7007 *go, __le16 *code, int space)
+ 	return cnt;
+ }
+ 
+-static int do_special(struct go7007 *go, u16 type, __le16 *code, int space,
+-			int *framelen)
++static noinline_for_stack int do_special(struct go7007 *go, u16 type,
++					 __le16 *code, int space, int *framelen)
+ {
+ 	switch (type) {
+ 	case SPECIAL_FRM_HEAD:
+diff --git a/drivers/media/usb/gspca/gspca.c b/drivers/media/usb/gspca/gspca.c
+index 3137f5d89d80..bdb81e93b74d 100644
+--- a/drivers/media/usb/gspca/gspca.c
++++ b/drivers/media/usb/gspca/gspca.c
+@@ -294,7 +294,7 @@ static void fill_frame(struct gspca_dev *gspca_dev,
+ 		/* check the packet status and length */
+ 		st = urb->iso_frame_desc[i].status;
+ 		if (st) {
+-			pr_err("ISOC data error: [%d] len=%d, status=%d\n",
++			gspca_dbg(gspca_dev, D_PACK, "ISOC data error: [%d] len=%d, status=%d\n",
+ 			       i, len, st);
+ 			gspca_dev->last_packet_type = DISCARD_PACKET;
+ 			continue;
+@@ -314,6 +314,8 @@ static void fill_frame(struct gspca_dev *gspca_dev,
+ 	}
+ 
+ resubmit:
++	if (!gspca_dev->streaming)
++		return;
+ 	/* resubmit the URB */
+ 	st = usb_submit_urb(urb, GFP_ATOMIC);
+ 	if (st < 0)
+@@ -330,7 +332,7 @@ static void isoc_irq(struct urb *urb)
+ 	struct gspca_dev *gspca_dev = (struct gspca_dev *) urb->context;
+ 
+ 	gspca_dbg(gspca_dev, D_PACK, "isoc irq\n");
+-	if (!vb2_start_streaming_called(&gspca_dev->queue))
++	if (!gspca_dev->streaming)
+ 		return;
+ 	fill_frame(gspca_dev, urb);
+ }
+@@ -344,7 +346,7 @@ static void bulk_irq(struct urb *urb)
+ 	int st;
+ 
+ 	gspca_dbg(gspca_dev, D_PACK, "bulk irq\n");
+-	if (!vb2_start_streaming_called(&gspca_dev->queue))
++	if (!gspca_dev->streaming)
+ 		return;
+ 	switch (urb->status) {
+ 	case 0:
+@@ -367,6 +369,8 @@ static void bulk_irq(struct urb *urb)
+ 				urb->actual_length);
+ 
+ resubmit:
++	if (!gspca_dev->streaming)
++		return;
+ 	/* resubmit the URB */
+ 	if (gspca_dev->cam.bulk_nurbs != 0) {
+ 		st = usb_submit_urb(urb, GFP_ATOMIC);
+@@ -1630,6 +1634,8 @@ void gspca_disconnect(struct usb_interface *intf)
+ 
+ 	mutex_lock(&gspca_dev->usb_lock);
+ 	gspca_dev->present = false;
++	destroy_urbs(gspca_dev);
++	gspca_input_destroy_urb(gspca_dev);
+ 
+ 	vb2_queue_error(&gspca_dev->queue);
+ 
+diff --git a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
+index 446a999dd2ce..2bab4713bc5b 100644
+--- a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
++++ b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
+@@ -666,6 +666,8 @@ static int ctrl_get_input(struct pvr2_ctrl *cptr,int *vp)
+ 
+ static int ctrl_check_input(struct pvr2_ctrl *cptr,int v)
+ {
++	if (v < 0 || v > PVR2_CVAL_INPUT_MAX)
++		return 0;
+ 	return ((1 << v) & cptr->hdw->input_allowed_mask) != 0;
+ }
+ 
+diff --git a/drivers/media/usb/pvrusb2/pvrusb2-hdw.h b/drivers/media/usb/pvrusb2/pvrusb2-hdw.h
+index 25648add77e5..bd2b7a67b732 100644
+--- a/drivers/media/usb/pvrusb2/pvrusb2-hdw.h
++++ b/drivers/media/usb/pvrusb2/pvrusb2-hdw.h
+@@ -50,6 +50,7 @@
+ #define PVR2_CVAL_INPUT_COMPOSITE 2
+ #define PVR2_CVAL_INPUT_SVIDEO 3
+ #define PVR2_CVAL_INPUT_RADIO 4
++#define PVR2_CVAL_INPUT_MAX PVR2_CVAL_INPUT_RADIO
+ 
+ enum pvr2_config {
+ 	pvr2_config_empty,    /* No configuration */
+diff --git a/drivers/media/v4l2-core/v4l2-fwnode.c b/drivers/media/v4l2-core/v4l2-fwnode.c
+index 9bfedd7596a1..a398b7885399 100644
+--- a/drivers/media/v4l2-core/v4l2-fwnode.c
++++ b/drivers/media/v4l2-core/v4l2-fwnode.c
+@@ -225,6 +225,10 @@ static int v4l2_fwnode_endpoint_parse_csi2_bus(struct fwnode_handle *fwnode,
+ 	if (bus_type == V4L2_MBUS_CSI2_DPHY ||
+ 	    bus_type == V4L2_MBUS_CSI2_CPHY || lanes_used ||
+ 	    have_clk_lane || (flags & ~V4L2_MBUS_CSI2_CONTINUOUS_CLOCK)) {
++		/* Only D-PHY has a clock lane. */
++		unsigned int dfl_data_lane_index =
++			bus_type == V4L2_MBUS_CSI2_DPHY;
++
+ 		bus->flags = flags;
+ 		if (bus_type == V4L2_MBUS_UNKNOWN)
+ 			vep->bus_type = V4L2_MBUS_CSI2_DPHY;
+@@ -233,7 +237,7 @@ static int v4l2_fwnode_endpoint_parse_csi2_bus(struct fwnode_handle *fwnode,
+ 		if (use_default_lane_mapping) {
+ 			bus->clock_lane = 0;
+ 			for (i = 0; i < num_data_lanes; i++)
+-				bus->data_lanes[i] = 1 + i;
++				bus->data_lanes[i] = dfl_data_lane_index + i;
+ 		} else {
+ 			bus->clock_lane = clock_lane;
+ 			for (i = 0; i < num_data_lanes; i++)
+diff --git a/drivers/mmc/core/pwrseq_emmc.c b/drivers/mmc/core/pwrseq_emmc.c
+index efb8a7965dd4..154f4204d58c 100644
+--- a/drivers/mmc/core/pwrseq_emmc.c
++++ b/drivers/mmc/core/pwrseq_emmc.c
+@@ -30,19 +30,14 @@ struct mmc_pwrseq_emmc {
+ 
+ #define to_pwrseq_emmc(p) container_of(p, struct mmc_pwrseq_emmc, pwrseq)
+ 
+-static void __mmc_pwrseq_emmc_reset(struct mmc_pwrseq_emmc *pwrseq)
+-{
+-	gpiod_set_value(pwrseq->reset_gpio, 1);
+-	udelay(1);
+-	gpiod_set_value(pwrseq->reset_gpio, 0);
+-	udelay(200);
+-}
+-
+ static void mmc_pwrseq_emmc_reset(struct mmc_host *host)
+ {
+ 	struct mmc_pwrseq_emmc *pwrseq =  to_pwrseq_emmc(host->pwrseq);
+ 
+-	__mmc_pwrseq_emmc_reset(pwrseq);
++	gpiod_set_value_cansleep(pwrseq->reset_gpio, 1);
++	udelay(1);
++	gpiod_set_value_cansleep(pwrseq->reset_gpio, 0);
++	udelay(200);
+ }
+ 
+ static int mmc_pwrseq_emmc_reset_nb(struct notifier_block *this,
+@@ -50,8 +45,11 @@ static int mmc_pwrseq_emmc_reset_nb(struct notifier_block *this,
+ {
+ 	struct mmc_pwrseq_emmc *pwrseq = container_of(this,
+ 					struct mmc_pwrseq_emmc, reset_nb);
++	gpiod_set_value(pwrseq->reset_gpio, 1);
++	udelay(1);
++	gpiod_set_value(pwrseq->reset_gpio, 0);
++	udelay(200);
+ 
+-	__mmc_pwrseq_emmc_reset(pwrseq);
+ 	return NOTIFY_DONE;
+ }
+ 
+@@ -72,14 +70,18 @@ static int mmc_pwrseq_emmc_probe(struct platform_device *pdev)
+ 	if (IS_ERR(pwrseq->reset_gpio))
+ 		return PTR_ERR(pwrseq->reset_gpio);
+ 
+-	/*
+-	 * register reset handler to ensure emmc reset also from
+-	 * emergency_reboot(), priority 255 is the highest priority
+-	 * so it will be executed before any system reboot handler.
+-	 */
+-	pwrseq->reset_nb.notifier_call = mmc_pwrseq_emmc_reset_nb;
+-	pwrseq->reset_nb.priority = 255;
+-	register_restart_handler(&pwrseq->reset_nb);
++	if (!gpiod_cansleep(pwrseq->reset_gpio)) {
++		/*
++		 * register reset handler to ensure emmc reset also from
++		 * emergency_reboot(), priority 255 is the highest priority
++		 * so it will be executed before any system reboot handler.
++		 */
++		pwrseq->reset_nb.notifier_call = mmc_pwrseq_emmc_reset_nb;
++		pwrseq->reset_nb.priority = 255;
++		register_restart_handler(&pwrseq->reset_nb);
++	} else {
++		dev_notice(dev, "EMMC reset pin tied to a sleepy GPIO driver; reset on emergency-reboot disabled\n");
++	}
+ 
+ 	pwrseq->pwrseq.ops = &mmc_pwrseq_emmc_ops;
+ 	pwrseq->pwrseq.dev = dev;
+diff --git a/drivers/mmc/core/sd.c b/drivers/mmc/core/sd.c
+index d0d9f90e7cdf..cfb8ee24eaba 100644
+--- a/drivers/mmc/core/sd.c
++++ b/drivers/mmc/core/sd.c
+@@ -216,6 +216,14 @@ static int mmc_decode_scr(struct mmc_card *card)
+ 
+ 	if (scr->sda_spec3)
+ 		scr->cmds = UNSTUFF_BITS(resp, 32, 2);
++
++	/* SD Spec says: any SD Card shall set at least bits 0 and 2 */
++	if (!(scr->bus_widths & SD_SCR_BUS_WIDTH_1) ||
++	    !(scr->bus_widths & SD_SCR_BUS_WIDTH_4)) {
++		pr_err("%s: invalid bus width\n", mmc_hostname(card->host));
++		return -EINVAL;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/mmc/host/mmc_spi.c b/drivers/mmc/host/mmc_spi.c
+index 8ade14fb2148..3410e2151579 100644
+--- a/drivers/mmc/host/mmc_spi.c
++++ b/drivers/mmc/host/mmc_spi.c
+@@ -819,6 +819,10 @@ mmc_spi_readblock(struct mmc_spi_host *host, struct spi_transfer *t,
+ 	}
+ 
+ 	status = spi_sync_locked(spi, &host->m);
++	if (status < 0) {
++		dev_dbg(&spi->dev, "read error %d\n", status);
++		return status;
++	}
+ 
+ 	if (host->dma_dev) {
+ 		dma_sync_single_for_cpu(host->dma_dev,
+diff --git a/drivers/mmc/host/sdhci-iproc.c b/drivers/mmc/host/sdhci-iproc.c
+index 9d12c06c7fd6..2feb4ef32035 100644
+--- a/drivers/mmc/host/sdhci-iproc.c
++++ b/drivers/mmc/host/sdhci-iproc.c
+@@ -196,7 +196,8 @@ static const struct sdhci_ops sdhci_iproc_32only_ops = {
+ };
+ 
+ static const struct sdhci_pltfm_data sdhci_iproc_cygnus_pltfm_data = {
+-	.quirks = SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK,
++	.quirks = SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK |
++		  SDHCI_QUIRK_NO_HISPD_BIT,
+ 	.quirks2 = SDHCI_QUIRK2_ACMD23_BROKEN | SDHCI_QUIRK2_HOST_OFF_CARD_ON,
+ 	.ops = &sdhci_iproc_32only_ops,
+ };
+@@ -219,7 +220,8 @@ static const struct sdhci_iproc_data iproc_cygnus_data = {
+ 
+ static const struct sdhci_pltfm_data sdhci_iproc_pltfm_data = {
+ 	.quirks = SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK |
+-		  SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12,
++		  SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12 |
++		  SDHCI_QUIRK_NO_HISPD_BIT,
+ 	.quirks2 = SDHCI_QUIRK2_ACMD23_BROKEN,
+ 	.ops = &sdhci_iproc_ops,
+ };
+diff --git a/drivers/mmc/host/sdhci-of-esdhc.c b/drivers/mmc/host/sdhci-of-esdhc.c
+index 4e669b4edfc1..7e0eae8dafae 100644
+--- a/drivers/mmc/host/sdhci-of-esdhc.c
++++ b/drivers/mmc/host/sdhci-of-esdhc.c
+@@ -694,6 +694,9 @@ static void esdhc_reset(struct sdhci_host *host, u8 mask)
+ 	sdhci_writel(host, host->ier, SDHCI_INT_ENABLE);
+ 	sdhci_writel(host, host->ier, SDHCI_SIGNAL_ENABLE);
+ 
++	if (of_find_compatible_node(NULL, NULL, "fsl,p2020-esdhc"))
++		mdelay(5);
++
+ 	if (mask & SDHCI_RESET_ALL) {
+ 		val = sdhci_readl(host, ESDHC_TBCTL);
+ 		val &= ~ESDHC_TB_EN;
+@@ -1074,6 +1077,11 @@ static int sdhci_esdhc_probe(struct platform_device *pdev)
+ 	if (esdhc->vendor_ver > VENDOR_V_22)
+ 		host->quirks &= ~SDHCI_QUIRK_NO_BUSY_IRQ;
+ 
++	if (of_find_compatible_node(NULL, NULL, "fsl,p2020-esdhc")) {
++		host->quirks2 |= SDHCI_QUIRK_RESET_AFTER_REQUEST;
++		host->quirks2 |= SDHCI_QUIRK_BROKEN_TIMEOUT_VAL;
++	}
++
+ 	if (of_device_is_compatible(np, "fsl,p5040-esdhc") ||
+ 	    of_device_is_compatible(np, "fsl,p5020-esdhc") ||
+ 	    of_device_is_compatible(np, "fsl,p4080-esdhc") ||
+diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+index a6eacf2099c3..9b03d7e404f8 100644
+--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
++++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+@@ -224,28 +224,23 @@ static int ena_setup_tx_resources(struct ena_adapter *adapter, int qid)
+ 	if (!tx_ring->tx_buffer_info) {
+ 		tx_ring->tx_buffer_info = vzalloc(size);
+ 		if (!tx_ring->tx_buffer_info)
+-			return -ENOMEM;
++			goto err_tx_buffer_info;
+ 	}
+ 
+ 	size = sizeof(u16) * tx_ring->ring_size;
+ 	tx_ring->free_tx_ids = vzalloc_node(size, node);
+ 	if (!tx_ring->free_tx_ids) {
+ 		tx_ring->free_tx_ids = vzalloc(size);
+-		if (!tx_ring->free_tx_ids) {
+-			vfree(tx_ring->tx_buffer_info);
+-			return -ENOMEM;
+-		}
++		if (!tx_ring->free_tx_ids)
++			goto err_free_tx_ids;
+ 	}
+ 
+ 	size = tx_ring->tx_max_header_size;
+ 	tx_ring->push_buf_intermediate_buf = vzalloc_node(size, node);
+ 	if (!tx_ring->push_buf_intermediate_buf) {
+ 		tx_ring->push_buf_intermediate_buf = vzalloc(size);
+-		if (!tx_ring->push_buf_intermediate_buf) {
+-			vfree(tx_ring->tx_buffer_info);
+-			vfree(tx_ring->free_tx_ids);
+-			return -ENOMEM;
+-		}
++		if (!tx_ring->push_buf_intermediate_buf)
++			goto err_push_buf_intermediate_buf;
+ 	}
+ 
+ 	/* Req id ring for TX out of order completions */
+@@ -259,6 +254,15 @@ static int ena_setup_tx_resources(struct ena_adapter *adapter, int qid)
+ 	tx_ring->next_to_clean = 0;
+ 	tx_ring->cpu = ena_irq->cpu;
+ 	return 0;
++
++err_push_buf_intermediate_buf:
++	vfree(tx_ring->free_tx_ids);
++	tx_ring->free_tx_ids = NULL;
++err_free_tx_ids:
++	vfree(tx_ring->tx_buffer_info);
++	tx_ring->tx_buffer_info = NULL;
++err_tx_buffer_info:
++	return -ENOMEM;
+ }
+ 
+ /* ena_free_tx_resources - Free I/O Tx Resources per Queue
+@@ -378,6 +382,7 @@ static int ena_setup_rx_resources(struct ena_adapter *adapter,
+ 		rx_ring->free_rx_ids = vzalloc(size);
+ 		if (!rx_ring->free_rx_ids) {
+ 			vfree(rx_ring->rx_buffer_info);
++			rx_ring->rx_buffer_info = NULL;
+ 			return -ENOMEM;
+ 		}
+ 	}
+@@ -2292,7 +2297,7 @@ static void ena_config_host_info(struct ena_com_dev *ena_dev,
+ 	host_info->bdf = (pdev->bus->number << 8) | pdev->devfn;
+ 	host_info->os_type = ENA_ADMIN_OS_LINUX;
+ 	host_info->kernel_ver = LINUX_VERSION_CODE;
+-	strncpy(host_info->kernel_ver_str, utsname()->version,
++	strlcpy(host_info->kernel_ver_str, utsname()->version,
+ 		sizeof(host_info->kernel_ver_str) - 1);
+ 	host_info->os_dist = 0;
+ 	strncpy(host_info->os_dist_str, utsname()->release,
+diff --git a/drivers/net/ethernet/chelsio/cxgb3/l2t.h b/drivers/net/ethernet/chelsio/cxgb3/l2t.h
+index c2fd323c4078..ea75f275023f 100644
+--- a/drivers/net/ethernet/chelsio/cxgb3/l2t.h
++++ b/drivers/net/ethernet/chelsio/cxgb3/l2t.h
+@@ -75,8 +75,8 @@ struct l2t_data {
+ 	struct l2t_entry *rover;	/* starting point for next allocation */
+ 	atomic_t nfree;		/* number of free entries */
+ 	rwlock_t lock;
+-	struct l2t_entry l2tab[0];
+ 	struct rcu_head rcu_head;	/* to handle rcu cleanup */
++	struct l2t_entry l2tab[];
+ };
+ 
+ typedef void (*arp_failure_handler_func)(struct t3cdev * dev,
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+index 6ba9099ca7fe..8bc7a0738adb 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+@@ -6044,15 +6044,24 @@ static int __init cxgb4_init_module(void)
+ 
+ 	ret = pci_register_driver(&cxgb4_driver);
+ 	if (ret < 0)
+-		debugfs_remove(cxgb4_debugfs_root);
++		goto err_pci;
+ 
+ #if IS_ENABLED(CONFIG_IPV6)
+ 	if (!inet6addr_registered) {
+-		register_inet6addr_notifier(&cxgb4_inet6addr_notifier);
+-		inet6addr_registered = true;
++		ret = register_inet6addr_notifier(&cxgb4_inet6addr_notifier);
++		if (ret)
++			pci_unregister_driver(&cxgb4_driver);
++		else
++			inet6addr_registered = true;
+ 	}
+ #endif
+ 
++	if (ret == 0)
++		return ret;
++
++err_pci:
++	debugfs_remove(cxgb4_debugfs_root);
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+index 1ca9a18139ec..0982fb4f131d 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+@@ -2604,6 +2604,7 @@ int dpaa2_eth_set_hash(struct net_device *net_dev, u64 flags)
+ static int dpaa2_eth_set_cls(struct dpaa2_eth_priv *priv)
+ {
+ 	struct device *dev = priv->net_dev->dev.parent;
++	int err;
+ 
+ 	/* Check if we actually support Rx flow classification */
+ 	if (dpaa2_eth_has_legacy_dist(priv)) {
+@@ -2622,9 +2623,13 @@ static int dpaa2_eth_set_cls(struct dpaa2_eth_priv *priv)
+ 		return -EOPNOTSUPP;
+ 	}
+ 
++	err = dpaa2_eth_set_dist_key(priv->net_dev, DPAA2_ETH_RX_DIST_CLS, 0);
++	if (err)
++		return err;
++
+ 	priv->rx_cls_enabled = 1;
+ 
+-	return dpaa2_eth_set_dist_key(priv->net_dev, DPAA2_ETH_RX_DIST_CLS, 0);
++	return 0;
+ }
+ 
+ /* Bind the DPNI to its needed objects and resources: buffer pool, DPIOs,
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h b/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h
+index 691d12174902..3c7a26bb8322 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h
+@@ -102,7 +102,7 @@ struct hclgevf_mbx_arq_ring {
+ 	struct hclgevf_dev *hdev;
+ 	u32 head;
+ 	u32 tail;
+-	u32 count;
++	atomic_t count;
+ 	u16 msg_q[HCLGE_MBX_MAX_ARQ_MSG_NUM][HCLGE_MBX_MAX_ARQ_MSG_SIZE];
+ };
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index 40b69eaf2cb3..a6481bd34c3b 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -2708,7 +2708,7 @@ int hns3_clean_rx_ring(
+ #define RCB_NOF_ALLOC_RX_BUFF_ONCE 16
+ 	struct net_device *netdev = ring->tqp->handle->kinfo.netdev;
+ 	int recv_pkts, recv_bds, clean_count, err;
+-	int unused_count = hns3_desc_unused(ring) - ring->pending_buf;
++	int unused_count = hns3_desc_unused(ring);
+ 	struct sk_buff *skb = ring->skb;
+ 	int num;
+ 
+@@ -2717,6 +2717,7 @@ int hns3_clean_rx_ring(
+ 
+ 	recv_pkts = 0, recv_bds = 0, clean_count = 0;
+ 	num -= unused_count;
++	unused_count -= ring->pending_buf;
+ 
+ 	while (recv_pkts < budget && recv_bds < num) {
+ 		/* Reuse or realloc buffers */
+@@ -3793,12 +3794,13 @@ static int hns3_recover_hw_addr(struct net_device *ndev)
+ 	struct netdev_hw_addr *ha, *tmp;
+ 	int ret = 0;
+ 
++	netif_addr_lock_bh(ndev);
+ 	/* go through and sync uc_addr entries to the device */
+ 	list = &ndev->uc;
+ 	list_for_each_entry_safe(ha, tmp, &list->list, list) {
+ 		ret = hns3_nic_uc_sync(ndev, ha->addr);
+ 		if (ret)
+-			return ret;
++			goto out;
+ 	}
+ 
+ 	/* go through and sync mc_addr entries to the device */
+@@ -3806,9 +3808,11 @@ static int hns3_recover_hw_addr(struct net_device *ndev)
+ 	list_for_each_entry_safe(ha, tmp, &list->list, list) {
+ 		ret = hns3_nic_mc_sync(ndev, ha->addr);
+ 		if (ret)
+-			return ret;
++			goto out;
+ 	}
+ 
++out:
++	netif_addr_unlock_bh(ndev);
+ 	return ret;
+ }
+ 
+@@ -3819,6 +3823,7 @@ static void hns3_remove_hw_addr(struct net_device *netdev)
+ 
+ 	hns3_nic_uc_unsync(netdev, netdev->dev_addr);
+ 
++	netif_addr_lock_bh(netdev);
+ 	/* go through and unsync uc_addr entries to the device */
+ 	list = &netdev->uc;
+ 	list_for_each_entry_safe(ha, tmp, &list->list, list)
+@@ -3829,6 +3834,8 @@ static void hns3_remove_hw_addr(struct net_device *netdev)
+ 	list_for_each_entry_safe(ha, tmp, &list->list, list)
+ 		if (ha->refcount > 1)
+ 			hns3_nic_mc_unsync(netdev, ha->addr);
++
++	netif_addr_unlock_bh(netdev);
+ }
+ 
+ static void hns3_clear_tx_ring(struct hns3_enet_ring *ring)
+@@ -3870,6 +3877,13 @@ static int hns3_clear_rx_ring(struct hns3_enet_ring *ring)
+ 		ring_ptr_move_fw(ring, next_to_use);
+ 	}
+ 
++	/* Free the pending skb in rx ring */
++	if (ring->skb) {
++		dev_kfree_skb_any(ring->skb);
++		ring->skb = NULL;
++		ring->pending_buf = 0;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+index e678b6939da3..36b35c58304b 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c
+@@ -482,6 +482,11 @@ static void hns3_get_stats(struct net_device *netdev,
+ 	struct hnae3_handle *h = hns3_get_handle(netdev);
+ 	u64 *p = data;
+ 
++	if (hns3_nic_resetting(netdev)) {
++		netdev_err(netdev, "dev resetting, could not get stats\n");
++		return;
++	}
++
+ 	if (!h->ae_algo->ops->get_stats || !h->ae_algo->ops->update_stats) {
+ 		netdev_err(netdev, "could not get any statistics\n");
+ 		return;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c
+index e483a6e730e6..87fa4787bb76 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c
+@@ -359,21 +359,26 @@ int hclge_cmd_init(struct hclge_dev *hdev)
+ 	 * reset may happen when lower level reset is being processed.
+ 	 */
+ 	if ((hclge_is_reset_pending(hdev))) {
+-		set_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state);
+-		return -EBUSY;
++		ret = -EBUSY;
++		goto err_cmd_init;
+ 	}
+ 
+ 	ret = hclge_cmd_query_firmware_version(&hdev->hw, &version);
+ 	if (ret) {
+ 		dev_err(&hdev->pdev->dev,
+ 			"firmware version query failed %d\n", ret);
+-		return ret;
++		goto err_cmd_init;
+ 	}
+ 	hdev->fw_version = version;
+ 
+ 	dev_info(&hdev->pdev->dev, "The firmware version is %08x\n", version);
+ 
+ 	return 0;
++
++err_cmd_init:
++	set_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state);
++
++	return ret;
+ }
+ 
+ static void hclge_destroy_queue(struct hclge_cmq_ring *ring)
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c
+index 4e78e8812a04..1e7df81c95ad 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_cmd.c
+@@ -327,7 +327,7 @@ int hclgevf_cmd_init(struct hclgevf_dev *hdev)
+ 	hdev->arq.hdev = hdev;
+ 	hdev->arq.head = 0;
+ 	hdev->arq.tail = 0;
+-	hdev->arq.count = 0;
++	atomic_set(&hdev->arq.count, 0);
+ 	hdev->hw.cmq.csq.next_to_clean = 0;
+ 	hdev->hw.cmq.csq.next_to_use = 0;
+ 	hdev->hw.cmq.crq.next_to_clean = 0;
+@@ -344,8 +344,8 @@ int hclgevf_cmd_init(struct hclgevf_dev *hdev)
+ 	 * reset may happen when lower level reset is being processed.
+ 	 */
+ 	if (hclgevf_is_reset_pending(hdev)) {
+-		set_bit(HCLGEVF_STATE_CMD_DISABLE, &hdev->state);
+-		return -EBUSY;
++		ret = -EBUSY;
++		goto err_cmd_init;
+ 	}
+ 
+ 	/* get firmware version */
+@@ -353,13 +353,18 @@ int hclgevf_cmd_init(struct hclgevf_dev *hdev)
+ 	if (ret) {
+ 		dev_err(&hdev->pdev->dev,
+ 			"failed(%d) to query firmware version\n", ret);
+-		return ret;
++		goto err_cmd_init;
+ 	}
+ 	hdev->fw_version = version;
+ 
+ 	dev_info(&hdev->pdev->dev, "The firmware version is %08x\n", version);
+ 
+ 	return 0;
++
++err_cmd_init:
++	set_bit(HCLGEVF_STATE_CMD_DISABLE, &hdev->state);
++
++	return ret;
+ }
+ 
+ void hclgevf_cmd_uninit(struct hclgevf_dev *hdev)
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index 82103d5fa815..57f658379b16 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -1895,9 +1895,15 @@ static int hclgevf_set_alive(struct hnae3_handle *handle, bool alive)
+ static int hclgevf_client_start(struct hnae3_handle *handle)
+ {
+ 	struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
++	int ret;
++
++	ret = hclgevf_set_alive(handle, true);
++	if (ret)
++		return ret;
+ 
+ 	mod_timer(&hdev->keep_alive_timer, jiffies + 2 * HZ);
+-	return hclgevf_set_alive(handle, true);
++
++	return 0;
+ }
+ 
+ static void hclgevf_client_stop(struct hnae3_handle *handle)
+@@ -1939,6 +1945,10 @@ static void hclgevf_state_uninit(struct hclgevf_dev *hdev)
+ {
+ 	set_bit(HCLGEVF_STATE_DOWN, &hdev->state);
+ 
++	if (hdev->keep_alive_timer.function)
++		del_timer_sync(&hdev->keep_alive_timer);
++	if (hdev->keep_alive_task.func)
++		cancel_work_sync(&hdev->keep_alive_task);
+ 	if (hdev->service_timer.function)
+ 		del_timer_sync(&hdev->service_timer);
+ 	if (hdev->service_task.func)
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
+index 84653f58b2d1..fbba8b83b36c 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_mbx.c
+@@ -207,7 +207,8 @@ void hclgevf_mbx_handler(struct hclgevf_dev *hdev)
+ 			/* we will drop the async msg if we find ARQ as full
+ 			 * and continue with next message
+ 			 */
+-			if (hdev->arq.count >= HCLGE_MBX_MAX_ARQ_MSG_NUM) {
++			if (atomic_read(&hdev->arq.count) >=
++			    HCLGE_MBX_MAX_ARQ_MSG_NUM) {
+ 				dev_warn(&hdev->pdev->dev,
+ 					 "Async Q full, dropping msg(%d)\n",
+ 					 req->msg[1]);
+@@ -219,7 +220,7 @@ void hclgevf_mbx_handler(struct hclgevf_dev *hdev)
+ 			memcpy(&msg_q[0], req->msg,
+ 			       HCLGE_MBX_MAX_ARQ_MSG_SIZE * sizeof(u16));
+ 			hclge_mbx_tail_ptr_move_arq(hdev->arq);
+-			hdev->arq.count++;
++			atomic_inc(&hdev->arq.count);
+ 
+ 			hclgevf_mbx_task_schedule(hdev);
+ 
+@@ -296,7 +297,7 @@ void hclgevf_mbx_async_handler(struct hclgevf_dev *hdev)
+ 		}
+ 
+ 		hclge_mbx_head_ptr_move_arq(hdev->arq);
+-		hdev->arq.count--;
++		atomic_dec(&hdev->arq.count);
+ 		msg_q = hdev->arq.msg_q[hdev->arq.head];
+ 	}
+ }
+diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
+index 7acc61e4f645..c10c9d7eadaa 100644
+--- a/drivers/net/ethernet/intel/e1000e/netdev.c
++++ b/drivers/net/ethernet/intel/e1000e/netdev.c
+@@ -7350,7 +7350,7 @@ static int e1000_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NEVER_SKIP);
+ 
+-	if (pci_dev_run_wake(pdev))
++	if (pci_dev_run_wake(pdev) && hw->mac.type < e1000_pch_cnp)
+ 		pm_runtime_put_noidle(&pdev->dev);
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
+index e4ff531db14a..3a3382613263 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
+@@ -2654,6 +2654,10 @@ void i40e_vlan_stripping_enable(struct i40e_vsi *vsi)
+ 	struct i40e_vsi_context ctxt;
+ 	i40e_status ret;
+ 
++	/* Don't modify stripping options if a port VLAN is active */
++	if (vsi->info.pvid)
++		return;
++
+ 	if ((vsi->info.valid_sections &
+ 	     cpu_to_le16(I40E_AQ_VSI_PROP_VLAN_VALID)) &&
+ 	    ((vsi->info.port_vlan_flags & I40E_AQ_VSI_PVLAN_MODE_MASK) == 0))
+@@ -2684,6 +2688,10 @@ void i40e_vlan_stripping_disable(struct i40e_vsi *vsi)
+ 	struct i40e_vsi_context ctxt;
+ 	i40e_status ret;
+ 
++	/* Don't modify stripping options if a port VLAN is active */
++	if (vsi->info.pvid)
++		return;
++
+ 	if ((vsi->info.valid_sections &
+ 	     cpu_to_le16(I40E_AQ_VSI_PROP_VLAN_VALID)) &&
+ 	    ((vsi->info.port_vlan_flags & I40E_AQ_VSI_PVLAN_EMOD_MASK) ==
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index 2ac23ebfbf31..715c6a9f30f9 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -2449,8 +2449,10 @@ error_param:
+ 				      (u8 *)&stats, sizeof(stats));
+ }
+ 
+-/* If the VF is not trusted restrict the number of MAC/VLAN it can program */
+-#define I40E_VC_MAX_MAC_ADDR_PER_VF 12
++/* If the VF is not trusted restrict the number of MAC/VLAN it can program
++ * MAC filters: 16 for multicast, 1 for MAC, 1 for broadcast
++ */
++#define I40E_VC_MAX_MAC_ADDR_PER_VF (16 + 1 + 1)
+ #define I40E_VC_MAX_VLAN_PER_VF 8
+ 
+ /**
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 8725569d11f0..f8801267502a 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -342,6 +342,10 @@ ice_prepare_for_reset(struct ice_pf *pf)
+ {
+ 	struct ice_hw *hw = &pf->hw;
+ 
++	/* already prepared for reset */
++	if (test_bit(__ICE_PREPARED_FOR_RESET, pf->state))
++		return;
++
+ 	/* Notify VFs of impending reset */
+ 	if (ice_check_sq_alive(hw, &hw->mailboxq))
+ 		ice_vc_notify_reset(pf);
+@@ -416,10 +420,15 @@ static void ice_reset_subtask(struct ice_pf *pf)
+ 	 * for the reset now), poll for reset done, rebuild and return.
+ 	 */
+ 	if (test_bit(__ICE_RESET_OICR_RECV, pf->state)) {
+-		clear_bit(__ICE_GLOBR_RECV, pf->state);
+-		clear_bit(__ICE_CORER_RECV, pf->state);
+-		if (!test_bit(__ICE_PREPARED_FOR_RESET, pf->state))
+-			ice_prepare_for_reset(pf);
++		/* Perform the largest reset requested */
++		if (test_and_clear_bit(__ICE_CORER_RECV, pf->state))
++			reset_type = ICE_RESET_CORER;
++		if (test_and_clear_bit(__ICE_GLOBR_RECV, pf->state))
++			reset_type = ICE_RESET_GLOBR;
++		/* return if no valid reset type requested */
++		if (reset_type == ICE_RESET_INVAL)
++			return;
++		ice_prepare_for_reset(pf);
+ 
+ 		/* make sure we are ready to rebuild */
+ 		if (ice_check_reset(&pf->hw)) {
+@@ -2490,6 +2499,9 @@ static int ice_set_features(struct net_device *netdev,
+ 	struct ice_vsi *vsi = np->vsi;
+ 	int ret = 0;
+ 
++	/* Multiple features can be changed in one call so keep features in
++	 * separate if/else statements to guarantee each feature is checked
++	 */
+ 	if (features & NETIF_F_RXHASH && !(netdev->features & NETIF_F_RXHASH))
+ 		ret = ice_vsi_manage_rss_lut(vsi, true);
+ 	else if (!(features & NETIF_F_RXHASH) &&
+@@ -2502,8 +2514,9 @@ static int ice_set_features(struct net_device *netdev,
+ 	else if (!(features & NETIF_F_HW_VLAN_CTAG_RX) &&
+ 		 (netdev->features & NETIF_F_HW_VLAN_CTAG_RX))
+ 		ret = ice_vsi_manage_vlan_stripping(vsi, false);
+-	else if ((features & NETIF_F_HW_VLAN_CTAG_TX) &&
+-		 !(netdev->features & NETIF_F_HW_VLAN_CTAG_TX))
++
++	if ((features & NETIF_F_HW_VLAN_CTAG_TX) &&
++	    !(netdev->features & NETIF_F_HW_VLAN_CTAG_TX))
+ 		ret = ice_vsi_manage_vlan_insertion(vsi);
+ 	else if (!(features & NETIF_F_HW_VLAN_CTAG_TX) &&
+ 		 (netdev->features & NETIF_F_HW_VLAN_CTAG_TX))
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index 21ccadb720d1..2a000b8f341b 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -3467,6 +3467,9 @@ static int igb_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 			break;
+ 		}
+ 	}
++
++	dev_pm_set_driver_flags(&pdev->dev, DPM_FLAG_NEVER_SKIP);
++
+ 	pm_runtime_put_noidle(&pdev->dev);
+ 	return 0;
+ 
+diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
+index a591583d120e..dd12b73a8853 100644
+--- a/drivers/net/ethernet/ti/cpsw.c
++++ b/drivers/net/ethernet/ti/cpsw.c
+@@ -800,12 +800,17 @@ static int cpsw_purge_all_mc(struct net_device *ndev, const u8 *addr, int num)
+ 
+ static void cpsw_ndo_set_rx_mode(struct net_device *ndev)
+ {
+-	struct cpsw_common *cpsw = ndev_to_cpsw(ndev);
++	struct cpsw_priv *priv = netdev_priv(ndev);
++	struct cpsw_common *cpsw = priv->cpsw;
++	int slave_port = -1;
++
++	if (cpsw->data.dual_emac)
++		slave_port = priv->emac_port + 1;
+ 
+ 	if (ndev->flags & IFF_PROMISC) {
+ 		/* Enable promiscuous mode */
+ 		cpsw_set_promiscious(ndev, true);
+-		cpsw_ale_set_allmulti(cpsw->ale, IFF_ALLMULTI);
++		cpsw_ale_set_allmulti(cpsw->ale, IFF_ALLMULTI, slave_port);
+ 		return;
+ 	} else {
+ 		/* Disable promiscuous mode */
+@@ -813,7 +818,8 @@ static void cpsw_ndo_set_rx_mode(struct net_device *ndev)
+ 	}
+ 
+ 	/* Restore allmulti on vlans if necessary */
+-	cpsw_ale_set_allmulti(cpsw->ale, ndev->flags & IFF_ALLMULTI);
++	cpsw_ale_set_allmulti(cpsw->ale,
++			      ndev->flags & IFF_ALLMULTI, slave_port);
+ 
+ 	/* add/remove mcast address either for real netdev or for vlan */
+ 	__hw_addr_ref_sync_dev(&ndev->mc, ndev, cpsw_add_mc_addr,
+diff --git a/drivers/net/ethernet/ti/cpsw_ale.c b/drivers/net/ethernet/ti/cpsw_ale.c
+index 798c989d5d93..b3d9591b4824 100644
+--- a/drivers/net/ethernet/ti/cpsw_ale.c
++++ b/drivers/net/ethernet/ti/cpsw_ale.c
+@@ -482,24 +482,25 @@ int cpsw_ale_del_vlan(struct cpsw_ale *ale, u16 vid, int port_mask)
+ }
+ EXPORT_SYMBOL_GPL(cpsw_ale_del_vlan);
+ 
+-void cpsw_ale_set_allmulti(struct cpsw_ale *ale, int allmulti)
++void cpsw_ale_set_allmulti(struct cpsw_ale *ale, int allmulti, int port)
+ {
+ 	u32 ale_entry[ALE_ENTRY_WORDS];
+-	int type, idx;
+ 	int unreg_mcast = 0;
+-
+-	/* Only bother doing the work if the setting is actually changing */
+-	if (ale->allmulti == allmulti)
+-		return;
+-
+-	/* Remember the new setting to check against next time */
+-	ale->allmulti = allmulti;
++	int type, idx;
+ 
+ 	for (idx = 0; idx < ale->params.ale_entries; idx++) {
++		int vlan_members;
++
+ 		cpsw_ale_read(ale, idx, ale_entry);
+ 		type = cpsw_ale_get_entry_type(ale_entry);
+ 		if (type != ALE_TYPE_VLAN)
+ 			continue;
++		vlan_members =
++			cpsw_ale_get_vlan_member_list(ale_entry,
++						      ale->vlan_field_bits);
++
++		if (port != -1 && !(vlan_members & BIT(port)))
++			continue;
+ 
+ 		unreg_mcast =
+ 			cpsw_ale_get_vlan_unreg_mcast(ale_entry,
+diff --git a/drivers/net/ethernet/ti/cpsw_ale.h b/drivers/net/ethernet/ti/cpsw_ale.h
+index cd07a3e96d57..1fe196d8a5e4 100644
+--- a/drivers/net/ethernet/ti/cpsw_ale.h
++++ b/drivers/net/ethernet/ti/cpsw_ale.h
+@@ -37,7 +37,6 @@ struct cpsw_ale {
+ 	struct cpsw_ale_params	params;
+ 	struct timer_list	timer;
+ 	unsigned long		ageout;
+-	int			allmulti;
+ 	u32			version;
+ 	/* These bits are different on NetCP NU Switch ALE */
+ 	u32			port_mask_bits;
+@@ -116,7 +115,7 @@ int cpsw_ale_del_mcast(struct cpsw_ale *ale, const u8 *addr, int port_mask,
+ int cpsw_ale_add_vlan(struct cpsw_ale *ale, u16 vid, int port, int untag,
+ 			int reg_mcast, int unreg_mcast);
+ int cpsw_ale_del_vlan(struct cpsw_ale *ale, u16 vid, int port);
+-void cpsw_ale_set_allmulti(struct cpsw_ale *ale, int allmulti);
++void cpsw_ale_set_allmulti(struct cpsw_ale *ale, int allmulti, int port);
+ 
+ int cpsw_ale_control_get(struct cpsw_ale *ale, int port, int control);
+ int cpsw_ale_control_set(struct cpsw_ale *ale, int port,
+diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
+index e0dce373cdd9..3d4a166a49d5 100644
+--- a/drivers/net/hyperv/netvsc.c
++++ b/drivers/net/hyperv/netvsc.c
+@@ -875,12 +875,6 @@ static inline int netvsc_send_pkt(
+ 	} else if (ret == -EAGAIN) {
+ 		netif_tx_stop_queue(txq);
+ 		ndev_ctx->eth_stats.stop_queue++;
+-		if (atomic_read(&nvchan->queue_sends) < 1 &&
+-		    !net_device->tx_disable) {
+-			netif_tx_wake_queue(txq);
+-			ndev_ctx->eth_stats.wake_queue++;
+-			ret = -ENOSPC;
+-		}
+ 	} else {
+ 		netdev_err(ndev,
+ 			   "Unable to send packet pages %u len %u, ret %d\n",
+@@ -888,6 +882,15 @@ static inline int netvsc_send_pkt(
+ 			   ret);
+ 	}
+ 
++	if (netif_tx_queue_stopped(txq) &&
++	    atomic_read(&nvchan->queue_sends) < 1 &&
++	    !net_device->tx_disable) {
++		netif_tx_wake_queue(txq);
++		ndev_ctx->eth_stats.wake_queue++;
++		if (ret == -EAGAIN)
++			ret = -ENOSPC;
++	}
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index ff2426e00682..67a06fa7566b 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -1830,13 +1830,25 @@ EXPORT_SYMBOL(genphy_read_status);
+  */
+ int genphy_soft_reset(struct phy_device *phydev)
+ {
++	u16 res = BMCR_RESET;
+ 	int ret;
+ 
+-	ret = phy_set_bits(phydev, MII_BMCR, BMCR_RESET);
++	if (phydev->autoneg == AUTONEG_ENABLE)
++		res |= BMCR_ANRESTART;
++
++	ret = phy_modify(phydev, MII_BMCR, BMCR_ISOLATE, res);
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	return phy_poll_reset(phydev);
++	ret = phy_poll_reset(phydev);
++	if (ret)
++		return ret;
++
++	/* BMCR may be reset to defaults */
++	if (phydev->autoneg == AUTONEG_DISABLE)
++		ret = genphy_setup_forced(phydev);
++
++	return ret;
+ }
+ EXPORT_SYMBOL(genphy_soft_reset);
+ 
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 366217263d70..d9a6699abe59 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -63,6 +63,7 @@ enum qmi_wwan_flags {
+ 
+ enum qmi_wwan_quirks {
+ 	QMI_WWAN_QUIRK_DTR = 1 << 0,	/* needs "set DTR" request */
++	QMI_WWAN_QUIRK_QUECTEL_DYNCFG = 1 << 1,	/* check num. endpoints */
+ };
+ 
+ struct qmimux_hdr {
+@@ -845,6 +846,16 @@ static const struct driver_info	qmi_wwan_info_quirk_dtr = {
+ 	.data           = QMI_WWAN_QUIRK_DTR,
+ };
+ 
++static const struct driver_info	qmi_wwan_info_quirk_quectel_dyncfg = {
++	.description	= "WWAN/QMI device",
++	.flags		= FLAG_WWAN | FLAG_SEND_ZLP,
++	.bind		= qmi_wwan_bind,
++	.unbind		= qmi_wwan_unbind,
++	.manage_power	= qmi_wwan_manage_power,
++	.rx_fixup       = qmi_wwan_rx_fixup,
++	.data           = QMI_WWAN_QUIRK_DTR | QMI_WWAN_QUIRK_QUECTEL_DYNCFG,
++};
++
+ #define HUAWEI_VENDOR_ID	0x12D1
+ 
+ /* map QMI/wwan function by a fixed interface number */
+@@ -865,6 +876,15 @@ static const struct driver_info	qmi_wwan_info_quirk_dtr = {
+ #define QMI_GOBI_DEVICE(vend, prod) \
+ 	QMI_FIXED_INTF(vend, prod, 0)
+ 
++/* Quectel does not use fixed interface numbers on at least some of their
++ * devices. We need to check the number of endpoints to ensure that we bind to
++ * the correct interface.
++ */
++#define QMI_QUIRK_QUECTEL_DYNCFG(vend, prod) \
++	USB_DEVICE_AND_INTERFACE_INFO(vend, prod, USB_CLASS_VENDOR_SPEC, \
++				      USB_SUBCLASS_VENDOR_SPEC, 0xff), \
++	.driver_info = (unsigned long)&qmi_wwan_info_quirk_quectel_dyncfg
++
+ static const struct usb_device_id products[] = {
+ 	/* 1. CDC ECM like devices match on the control interface */
+ 	{	/* Huawei E392, E398 and possibly others sharing both device id and more... */
+@@ -969,20 +989,9 @@ static const struct usb_device_id products[] = {
+ 		USB_DEVICE_AND_INTERFACE_INFO(0x03f0, 0x581d, USB_CLASS_VENDOR_SPEC, 1, 7),
+ 		.driver_info = (unsigned long)&qmi_wwan_info,
+ 	},
+-	{	/* Quectel EP06/EG06/EM06 */
+-		USB_DEVICE_AND_INTERFACE_INFO(0x2c7c, 0x0306,
+-					      USB_CLASS_VENDOR_SPEC,
+-					      USB_SUBCLASS_VENDOR_SPEC,
+-					      0xff),
+-		.driver_info	    = (unsigned long)&qmi_wwan_info_quirk_dtr,
+-	},
+-	{	/* Quectel EG12/EM12 */
+-		USB_DEVICE_AND_INTERFACE_INFO(0x2c7c, 0x0512,
+-					      USB_CLASS_VENDOR_SPEC,
+-					      USB_SUBCLASS_VENDOR_SPEC,
+-					      0xff),
+-		.driver_info	    = (unsigned long)&qmi_wwan_info_quirk_dtr,
+-	},
++	{QMI_QUIRK_QUECTEL_DYNCFG(0x2c7c, 0x0125)},	/* Quectel EC25, EC20 R2.0  Mini PCIe */
++	{QMI_QUIRK_QUECTEL_DYNCFG(0x2c7c, 0x0306)},	/* Quectel EP06/EG06/EM06 */
++	{QMI_QUIRK_QUECTEL_DYNCFG(0x2c7c, 0x0512)},	/* Quectel EG12/EM12 */
+ 
+ 	/* 3. Combined interface devices matching on interface number */
+ 	{QMI_FIXED_INTF(0x0408, 0xea42, 4)},	/* Yota / Megafon M100-1 */
+@@ -1283,7 +1292,6 @@ static const struct usb_device_id products[] = {
+ 	{QMI_FIXED_INTF(0x03f0, 0x9d1d, 1)},	/* HP lt4120 Snapdragon X5 LTE */
+ 	{QMI_FIXED_INTF(0x22de, 0x9061, 3)},	/* WeTelecom WPD-600N */
+ 	{QMI_QUIRK_SET_DTR(0x1e0e, 0x9001, 5)},	/* SIMCom 7100E, 7230E, 7600E ++ */
+-	{QMI_QUIRK_SET_DTR(0x2c7c, 0x0125, 4)},	/* Quectel EC25, EC20 R2.0  Mini PCIe */
+ 	{QMI_QUIRK_SET_DTR(0x2c7c, 0x0121, 4)},	/* Quectel EC21 Mini PCIe */
+ 	{QMI_QUIRK_SET_DTR(0x2c7c, 0x0191, 4)},	/* Quectel EG91 */
+ 	{QMI_FIXED_INTF(0x2c7c, 0x0296, 4)},	/* Quectel BG96 */
+@@ -1363,27 +1371,12 @@ static bool quectel_ec20_detected(struct usb_interface *intf)
+ 	return false;
+ }
+ 
+-static bool quectel_diag_detected(struct usb_interface *intf)
+-{
+-	struct usb_device *dev = interface_to_usbdev(intf);
+-	struct usb_interface_descriptor intf_desc = intf->cur_altsetting->desc;
+-	u16 id_vendor = le16_to_cpu(dev->descriptor.idVendor);
+-	u16 id_product = le16_to_cpu(dev->descriptor.idProduct);
+-
+-	if (id_vendor != 0x2c7c || intf_desc.bNumEndpoints != 2)
+-		return false;
+-
+-	if (id_product == 0x0306 || id_product == 0x0512)
+-		return true;
+-	else
+-		return false;
+-}
+-
+ static int qmi_wwan_probe(struct usb_interface *intf,
+ 			  const struct usb_device_id *prod)
+ {
+ 	struct usb_device_id *id = (struct usb_device_id *)prod;
+ 	struct usb_interface_descriptor *desc = &intf->cur_altsetting->desc;
++	const struct driver_info *info;
+ 
+ 	/* Workaround to enable dynamic IDs.  This disables usbnet
+ 	 * blacklisting functionality.  Which, if required, can be
+@@ -1417,10 +1410,14 @@ static int qmi_wwan_probe(struct usb_interface *intf,
+ 	 * we need to match on class/subclass/protocol. These values are
+ 	 * identical for the diagnostic- and QMI-interface, but bNumEndpoints is
+ 	 * different. Ignore the current interface if the number of endpoints
+-	 * the number for the diag interface (two).
++	 * equals the number for the diag interface (two).
+ 	 */
+-	if (quectel_diag_detected(intf))
+-		return -ENODEV;
++	info = (void *)&id->driver_info;
++
++	if (info->data & QMI_WWAN_QUIRK_QUECTEL_DYNCFG) {
++		if (desc->bNumEndpoints == 2)
++			return -ENODEV;
++	}
+ 
+ 	return usbnet_probe(intf, id);
+ }
+diff --git a/drivers/net/wireless/ath/wil6210/cfg80211.c b/drivers/net/wireless/ath/wil6210/cfg80211.c
+index 5a44f9d0ff02..c7b5a7786e38 100644
+--- a/drivers/net/wireless/ath/wil6210/cfg80211.c
++++ b/drivers/net/wireless/ath/wil6210/cfg80211.c
+@@ -1274,7 +1274,12 @@ int wil_cfg80211_mgmt_tx(struct wiphy *wiphy, struct wireless_dev *wdev,
+ 			     params->wait);
+ 
+ out:
++	/* when the sent packet was not acked by receiver(ACK=0), rc will
++	 * be -EAGAIN. In this case this function needs to return success,
++	 * the ACK=0 will be reflected in tx_status.
++	 */
+ 	tx_status = (rc == 0);
++	rc = (rc == -EAGAIN) ? 0 : rc;
+ 	cfg80211_mgmt_tx_status(wdev, cookie ? *cookie : 0, buf, len,
+ 				tx_status, GFP_KERNEL);
+ 
+diff --git a/drivers/net/wireless/ath/wil6210/wmi.c b/drivers/net/wireless/ath/wil6210/wmi.c
+index 345f05969190..fc1b4b897cb8 100644
+--- a/drivers/net/wireless/ath/wil6210/wmi.c
++++ b/drivers/net/wireless/ath/wil6210/wmi.c
+@@ -3455,8 +3455,9 @@ int wmi_mgmt_tx(struct wil6210_vif *vif, const u8 *buf, size_t len)
+ 	rc = wmi_call(wil, WMI_SW_TX_REQ_CMDID, vif->mid, cmd, total,
+ 		      WMI_SW_TX_COMPLETE_EVENTID, &evt, sizeof(evt), 2000);
+ 	if (!rc && evt.evt.status != WMI_FW_STATUS_SUCCESS) {
+-		wil_err(wil, "mgmt_tx failed with status %d\n", evt.evt.status);
+-		rc = -EINVAL;
++		wil_dbg_wmi(wil, "mgmt_tx failed with status %d\n",
++			    evt.evt.status);
++		rc = -EAGAIN;
+ 	}
+ 
+ 	kfree(cmd);
+@@ -3508,9 +3509,9 @@ int wmi_mgmt_tx_ext(struct wil6210_vif *vif, const u8 *buf, size_t len,
+ 	rc = wmi_call(wil, WMI_SW_TX_REQ_EXT_CMDID, vif->mid, cmd, total,
+ 		      WMI_SW_TX_COMPLETE_EVENTID, &evt, sizeof(evt), 2000);
+ 	if (!rc && evt.evt.status != WMI_FW_STATUS_SUCCESS) {
+-		wil_err(wil, "mgmt_tx_ext failed with status %d\n",
+-			evt.evt.status);
+-		rc = -EINVAL;
++		wil_dbg_wmi(wil, "mgmt_tx_ext failed with status %d\n",
++			    evt.evt.status);
++		rc = -EAGAIN;
+ 	}
+ 
+ 	kfree(cmd);
+diff --git a/drivers/net/wireless/atmel/at76c50x-usb.c b/drivers/net/wireless/atmel/at76c50x-usb.c
+index e99e766a3028..1cabae424839 100644
+--- a/drivers/net/wireless/atmel/at76c50x-usb.c
++++ b/drivers/net/wireless/atmel/at76c50x-usb.c
+@@ -2585,8 +2585,8 @@ static int __init at76_mod_init(void)
+ 	if (result < 0)
+ 		printk(KERN_ERR DRIVER_NAME
+ 		       ": usb_register failed (status %d)\n", result);
+-
+-	led_trigger_register_simple("at76_usb-tx", &ledtrig_tx);
++	else
++		led_trigger_register_simple("at76_usb-tx", &ledtrig_tx);
+ 	return result;
+ }
+ 
+diff --git a/drivers/net/wireless/broadcom/b43/phy_lp.c b/drivers/net/wireless/broadcom/b43/phy_lp.c
+index 46408a560814..aedee026c5e2 100644
+--- a/drivers/net/wireless/broadcom/b43/phy_lp.c
++++ b/drivers/net/wireless/broadcom/b43/phy_lp.c
+@@ -1835,7 +1835,7 @@ static void lpphy_papd_cal(struct b43_wldev *dev, struct lpphy_tx_gains gains,
+ static void lpphy_papd_cal_txpwr(struct b43_wldev *dev)
+ {
+ 	struct b43_phy_lp *lpphy = dev->phy.lp;
+-	struct lpphy_tx_gains gains, oldgains;
++	struct lpphy_tx_gains oldgains;
+ 	int old_txpctl, old_afe_ovr, old_rf, old_bbmult;
+ 
+ 	lpphy_read_tx_pctl_mode_from_hardware(dev);
+@@ -1849,9 +1849,9 @@ static void lpphy_papd_cal_txpwr(struct b43_wldev *dev)
+ 	lpphy_set_tx_power_control(dev, B43_LPPHY_TXPCTL_OFF);
+ 
+ 	if (dev->dev->chip_id == 0x4325 && dev->dev->chip_rev == 0)
+-		lpphy_papd_cal(dev, gains, 0, 1, 30);
++		lpphy_papd_cal(dev, oldgains, 0, 1, 30);
+ 	else
+-		lpphy_papd_cal(dev, gains, 0, 1, 65);
++		lpphy_papd_cal(dev, oldgains, 0, 1, 65);
+ 
+ 	if (old_afe_ovr)
+ 		lpphy_set_tx_gains(dev, oldgains);
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+index 35301237d435..ded629460fc0 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+@@ -3474,6 +3474,8 @@ brcmf_wowl_nd_results(struct brcmf_if *ifp, const struct brcmf_event_msg *e,
+ 	}
+ 
+ 	netinfo = brcmf_get_netinfo_array(pfn_result);
++	if (netinfo->SSID_len > IEEE80211_MAX_SSID_LEN)
++		netinfo->SSID_len = IEEE80211_MAX_SSID_LEN;
+ 	memcpy(cfg->wowl.nd->ssid.ssid, netinfo->SSID, netinfo->SSID_len);
+ 	cfg->wowl.nd->ssid.ssid_len = netinfo->SSID_len;
+ 	cfg->wowl.nd->n_channels = 1;
+@@ -5374,6 +5376,8 @@ static s32 brcmf_get_assoc_ies(struct brcmf_cfg80211_info *cfg,
+ 		conn_info->req_ie =
+ 		    kmemdup(cfg->extra_buf, conn_info->req_ie_len,
+ 			    GFP_KERNEL);
++		if (!conn_info->req_ie)
++			conn_info->req_ie_len = 0;
+ 	} else {
+ 		conn_info->req_ie_len = 0;
+ 		conn_info->req_ie = NULL;
+@@ -5390,6 +5394,8 @@ static s32 brcmf_get_assoc_ies(struct brcmf_cfg80211_info *cfg,
+ 		conn_info->resp_ie =
+ 		    kmemdup(cfg->extra_buf, conn_info->resp_ie_len,
+ 			    GFP_KERNEL);
++		if (!conn_info->resp_ie)
++			conn_info->resp_ie_len = 0;
+ 	} else {
+ 		conn_info->resp_ie_len = 0;
+ 		conn_info->resp_ie = NULL;
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
+index 860a4372cb56..36a04c1144e5 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
+@@ -464,7 +464,8 @@ void brcmf_rx_frame(struct device *dev, struct sk_buff *skb, bool handle_event)
+ 	} else {
+ 		/* Process special event packets */
+ 		if (handle_event)
+-			brcmf_fweh_process_skb(ifp->drvr, skb);
++			brcmf_fweh_process_skb(ifp->drvr, skb,
++					       BCMILCP_SUBTYPE_VENDOR_LONG);
+ 
+ 		brcmf_netif_rx(ifp, skb);
+ 	}
+@@ -481,7 +482,7 @@ void brcmf_rx_event(struct device *dev, struct sk_buff *skb)
+ 	if (brcmf_rx_hdrpull(drvr, skb, &ifp))
+ 		return;
+ 
+-	brcmf_fweh_process_skb(ifp->drvr, skb);
++	brcmf_fweh_process_skb(ifp->drvr, skb, 0);
+ 	brcmu_pkt_buf_free_skb(skb);
+ }
+ 
+@@ -783,17 +784,17 @@ static void brcmf_del_if(struct brcmf_pub *drvr, s32 bsscfgidx,
+ 			 bool rtnl_locked)
+ {
+ 	struct brcmf_if *ifp;
++	int ifidx;
+ 
+ 	ifp = drvr->iflist[bsscfgidx];
+-	drvr->iflist[bsscfgidx] = NULL;
+ 	if (!ifp) {
+ 		brcmf_err("Null interface, bsscfgidx=%d\n", bsscfgidx);
+ 		return;
+ 	}
+ 	brcmf_dbg(TRACE, "Enter, bsscfgidx=%d, ifidx=%d\n", bsscfgidx,
+ 		  ifp->ifidx);
+-	if (drvr->if2bss[ifp->ifidx] == bsscfgidx)
+-		drvr->if2bss[ifp->ifidx] = BRCMF_BSSIDX_INVALID;
++	ifidx = ifp->ifidx;
++
+ 	if (ifp->ndev) {
+ 		if (bsscfgidx == 0) {
+ 			if (ifp->ndev->netdev_ops == &brcmf_netdev_ops_pri) {
+@@ -821,6 +822,10 @@ static void brcmf_del_if(struct brcmf_pub *drvr, s32 bsscfgidx,
+ 		brcmf_p2p_ifp_removed(ifp, rtnl_locked);
+ 		kfree(ifp);
+ 	}
++
++	drvr->iflist[bsscfgidx] = NULL;
++	if (drvr->if2bss[ifidx] == bsscfgidx)
++		drvr->if2bss[ifidx] = BRCMF_BSSIDX_INVALID;
+ }
+ 
+ void brcmf_remove_interface(struct brcmf_if *ifp, bool rtnl_locked)
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.h
+index 816f80ea925b..ebd66fe0d949 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.h
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.h
+@@ -211,7 +211,7 @@ enum brcmf_fweh_event_code {
+  */
+ #define BRCM_OUI				"\x00\x10\x18"
+ #define BCMILCP_BCM_SUBTYPE_EVENT		1
+-
++#define BCMILCP_SUBTYPE_VENDOR_LONG		32769
+ 
+ /**
+  * struct brcm_ethhdr - broadcom specific ether header.
+@@ -334,10 +334,10 @@ void brcmf_fweh_process_event(struct brcmf_pub *drvr,
+ void brcmf_fweh_p2pdev_setup(struct brcmf_if *ifp, bool ongoing);
+ 
+ static inline void brcmf_fweh_process_skb(struct brcmf_pub *drvr,
+-					  struct sk_buff *skb)
++					  struct sk_buff *skb, u16 stype)
+ {
+ 	struct brcmf_event *event_packet;
+-	u16 usr_stype;
++	u16 subtype, usr_stype;
+ 
+ 	/* only process events when protocol matches */
+ 	if (skb->protocol != cpu_to_be16(ETH_P_LINK_CTL))
+@@ -346,8 +346,16 @@ static inline void brcmf_fweh_process_skb(struct brcmf_pub *drvr,
+ 	if ((skb->len + ETH_HLEN) < sizeof(*event_packet))
+ 		return;
+ 
+-	/* check for BRCM oui match */
+ 	event_packet = (struct brcmf_event *)skb_mac_header(skb);
++
++	/* check subtype if needed */
++	if (unlikely(stype)) {
++		subtype = get_unaligned_be16(&event_packet->hdr.subtype);
++		if (subtype != stype)
++			return;
++	}
++
++	/* check for BRCM oui match */
+ 	if (memcmp(BRCM_OUI, &event_packet->hdr.oui[0],
+ 		   sizeof(event_packet->hdr.oui)))
+ 		return;
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.c
+index 02759ebd207c..d439079193f8 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.c
+@@ -580,24 +580,6 @@ static bool brcmf_fws_ifidx_match(struct sk_buff *skb, void *arg)
+ 	return ifidx == *(int *)arg;
+ }
+ 
+-static void brcmf_fws_psq_flush(struct brcmf_fws_info *fws, struct pktq *q,
+-				int ifidx)
+-{
+-	bool (*matchfn)(struct sk_buff *, void *) = NULL;
+-	struct sk_buff *skb;
+-	int prec;
+-
+-	if (ifidx != -1)
+-		matchfn = brcmf_fws_ifidx_match;
+-	for (prec = 0; prec < q->num_prec; prec++) {
+-		skb = brcmu_pktq_pdeq_match(q, prec, matchfn, &ifidx);
+-		while (skb) {
+-			brcmu_pkt_buf_free_skb(skb);
+-			skb = brcmu_pktq_pdeq_match(q, prec, matchfn, &ifidx);
+-		}
+-	}
+-}
+-
+ static void brcmf_fws_hanger_init(struct brcmf_fws_hanger *hanger)
+ {
+ 	int i;
+@@ -669,6 +651,28 @@ static inline int brcmf_fws_hanger_poppkt(struct brcmf_fws_hanger *h,
+ 	return 0;
+ }
+ 
++static void brcmf_fws_psq_flush(struct brcmf_fws_info *fws, struct pktq *q,
++				int ifidx)
++{
++	bool (*matchfn)(struct sk_buff *, void *) = NULL;
++	struct sk_buff *skb;
++	int prec;
++	u32 hslot;
++
++	if (ifidx != -1)
++		matchfn = brcmf_fws_ifidx_match;
++	for (prec = 0; prec < q->num_prec; prec++) {
++		skb = brcmu_pktq_pdeq_match(q, prec, matchfn, &ifidx);
++		while (skb) {
++			hslot = brcmf_skb_htod_tag_get_field(skb, HSLOT);
++			brcmf_fws_hanger_poppkt(&fws->hanger, hslot, &skb,
++						true);
++			brcmu_pkt_buf_free_skb(skb);
++			skb = brcmu_pktq_pdeq_match(q, prec, matchfn, &ifidx);
++		}
++	}
++}
++
+ static int brcmf_fws_hanger_mark_suppressed(struct brcmf_fws_hanger *h,
+ 					    u32 slot_id)
+ {
+@@ -2194,6 +2198,8 @@ void brcmf_fws_del_interface(struct brcmf_if *ifp)
+ 	brcmf_fws_lock(fws);
+ 	ifp->fws_desc = NULL;
+ 	brcmf_dbg(TRACE, "deleting %s\n", entry->name);
++	brcmf_fws_macdesc_cleanup(fws, &fws->desc.iface[ifp->ifidx],
++				  ifp->ifidx);
+ 	brcmf_fws_macdesc_deinit(entry);
+ 	brcmf_fws_cleanup(fws, ifp->ifidx);
+ 	brcmf_fws_unlock(fws);
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c
+index 4e8397a0cbc8..ee922b052561 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c
+@@ -1116,7 +1116,7 @@ static void brcmf_msgbuf_process_event(struct brcmf_msgbuf *msgbuf, void *buf)
+ 
+ 	skb->protocol = eth_type_trans(skb, ifp->ndev);
+ 
+-	brcmf_fweh_process_skb(ifp->drvr, skb);
++	brcmf_fweh_process_skb(ifp->drvr, skb, 0);
+ 
+ exit:
+ 	brcmu_pkt_buf_free_skb(skb);
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c
+index a4308c6e72d7..44ead0fea7c6 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c
+@@ -160,7 +160,7 @@ struct brcmf_usbdev_info {
+ 
+ 	struct usb_device *usbdev;
+ 	struct device *dev;
+-	struct mutex dev_init_lock;
++	struct completion dev_init_done;
+ 
+ 	int ctl_in_pipe, ctl_out_pipe;
+ 	struct urb *ctl_urb; /* URB for control endpoint */
+@@ -684,12 +684,18 @@ static int brcmf_usb_up(struct device *dev)
+ 
+ static void brcmf_cancel_all_urbs(struct brcmf_usbdev_info *devinfo)
+ {
++	int i;
++
+ 	if (devinfo->ctl_urb)
+ 		usb_kill_urb(devinfo->ctl_urb);
+ 	if (devinfo->bulk_urb)
+ 		usb_kill_urb(devinfo->bulk_urb);
+-	brcmf_usb_free_q(&devinfo->tx_postq, true);
+-	brcmf_usb_free_q(&devinfo->rx_postq, true);
++	if (devinfo->tx_reqs)
++		for (i = 0; i < devinfo->bus_pub.ntxq; i++)
++			usb_kill_urb(devinfo->tx_reqs[i].urb);
++	if (devinfo->rx_reqs)
++		for (i = 0; i < devinfo->bus_pub.nrxq; i++)
++			usb_kill_urb(devinfo->rx_reqs[i].urb);
+ }
+ 
+ static void brcmf_usb_down(struct device *dev)
+@@ -1195,11 +1201,11 @@ static void brcmf_usb_probe_phase2(struct device *dev, int ret,
+ 	if (ret)
+ 		goto error;
+ 
+-	mutex_unlock(&devinfo->dev_init_lock);
++	complete(&devinfo->dev_init_done);
+ 	return;
+ error:
+ 	brcmf_dbg(TRACE, "failed: dev=%s, err=%d\n", dev_name(dev), ret);
+-	mutex_unlock(&devinfo->dev_init_lock);
++	complete(&devinfo->dev_init_done);
+ 	device_release_driver(dev);
+ }
+ 
+@@ -1267,7 +1273,7 @@ static int brcmf_usb_probe_cb(struct brcmf_usbdev_info *devinfo)
+ 		if (ret)
+ 			goto fail;
+ 		/* we are done */
+-		mutex_unlock(&devinfo->dev_init_lock);
++		complete(&devinfo->dev_init_done);
+ 		return 0;
+ 	}
+ 	bus->chip = bus_pub->devid;
+@@ -1327,11 +1333,10 @@ brcmf_usb_probe(struct usb_interface *intf, const struct usb_device_id *id)
+ 
+ 	devinfo->usbdev = usb;
+ 	devinfo->dev = &usb->dev;
+-	/* Take an init lock, to protect for disconnect while still loading.
++	/* Init completion, to protect for disconnect while still loading.
+ 	 * Necessary because of the asynchronous firmware load construction
+ 	 */
+-	mutex_init(&devinfo->dev_init_lock);
+-	mutex_lock(&devinfo->dev_init_lock);
++	init_completion(&devinfo->dev_init_done);
+ 
+ 	usb_set_intfdata(intf, devinfo);
+ 
+@@ -1409,7 +1414,7 @@ brcmf_usb_probe(struct usb_interface *intf, const struct usb_device_id *id)
+ 	return 0;
+ 
+ fail:
+-	mutex_unlock(&devinfo->dev_init_lock);
++	complete(&devinfo->dev_init_done);
+ 	kfree(devinfo);
+ 	usb_set_intfdata(intf, NULL);
+ 	return ret;
+@@ -1424,7 +1429,7 @@ brcmf_usb_disconnect(struct usb_interface *intf)
+ 	devinfo = (struct brcmf_usbdev_info *)usb_get_intfdata(intf);
+ 
+ 	if (devinfo) {
+-		mutex_lock(&devinfo->dev_init_lock);
++		wait_for_completion(&devinfo->dev_init_done);
+ 		/* Make sure that devinfo still exists. Firmware probe routines
+ 		 * may have released the device and cleared the intfdata.
+ 		 */
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/vendor.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/vendor.c
+index 8eff2753abad..d493021f6031 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/vendor.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/vendor.c
+@@ -35,9 +35,10 @@ static int brcmf_cfg80211_vndr_cmds_dcmd_handler(struct wiphy *wiphy,
+ 	struct brcmf_if *ifp;
+ 	const struct brcmf_vndr_dcmd_hdr *cmdhdr = data;
+ 	struct sk_buff *reply;
+-	int ret, payload, ret_len;
++	unsigned int payload, ret_len;
+ 	void *dcmd_buf = NULL, *wr_pointer;
+ 	u16 msglen, maxmsglen = PAGE_SIZE - 0x100;
++	int ret;
+ 
+ 	if (len < sizeof(*cmdhdr)) {
+ 		brcmf_err("vendor command too short: %d\n", len);
+@@ -65,7 +66,7 @@ static int brcmf_cfg80211_vndr_cmds_dcmd_handler(struct wiphy *wiphy,
+ 			brcmf_err("oversize return buffer %d\n", ret_len);
+ 			ret_len = BRCMF_DCMD_MAXLEN;
+ 		}
+-		payload = max(ret_len, len) + 1;
++		payload = max_t(unsigned int, ret_len, len) + 1;
+ 		dcmd_buf = vzalloc(payload);
+ 		if (NULL == dcmd_buf)
+ 			return -ENOMEM;
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+index c596c7b13504..4354c0fedda7 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+@@ -1384,10 +1384,15 @@ out_err:
+ static void iwl_pcie_rx_handle(struct iwl_trans *trans, int queue)
+ {
+ 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
+-	struct iwl_rxq *rxq = &trans_pcie->rxq[queue];
++	struct iwl_rxq *rxq;
+ 	u32 r, i, count = 0;
+ 	bool emergency = false;
+ 
++	if (WARN_ON_ONCE(!trans_pcie->rxq || !trans_pcie->rxq[queue].bd))
++		return;
++
++	rxq = &trans_pcie->rxq[queue];
++
+ restart:
+ 	spin_lock(&rxq->lock);
+ 	/* uCode's read index (stored in shared DRAM) indicates the last Rx
+diff --git a/drivers/net/wireless/marvell/mwifiex/cfg80211.c b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+index 883752f640b4..4bc25dc5dc1d 100644
+--- a/drivers/net/wireless/marvell/mwifiex/cfg80211.c
++++ b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+@@ -4073,16 +4073,20 @@ static int mwifiex_tm_cmd(struct wiphy *wiphy, struct wireless_dev *wdev,
+ 
+ 		if (mwifiex_send_cmd(priv, 0, 0, 0, hostcmd, true)) {
+ 			dev_err(priv->adapter->dev, "Failed to process hostcmd\n");
++			kfree(hostcmd);
+ 			return -EFAULT;
+ 		}
+ 
+ 		/* process hostcmd response*/
+ 		skb = cfg80211_testmode_alloc_reply_skb(wiphy, hostcmd->len);
+-		if (!skb)
++		if (!skb) {
++			kfree(hostcmd);
+ 			return -ENOMEM;
++		}
+ 		err = nla_put(skb, MWIFIEX_TM_ATTR_DATA,
+ 			      hostcmd->len, hostcmd->cmd);
+ 		if (err) {
++			kfree(hostcmd);
+ 			kfree_skb(skb);
+ 			return -EMSGSIZE;
+ 		}
+diff --git a/drivers/net/wireless/marvell/mwifiex/cfp.c b/drivers/net/wireless/marvell/mwifiex/cfp.c
+index bfe84e55df77..f1522fb1c1e8 100644
+--- a/drivers/net/wireless/marvell/mwifiex/cfp.c
++++ b/drivers/net/wireless/marvell/mwifiex/cfp.c
+@@ -531,5 +531,8 @@ u8 mwifiex_adjust_data_rate(struct mwifiex_private *priv,
+ 		rate_index = (rx_rate > MWIFIEX_RATE_INDEX_OFDM0) ?
+ 			      rx_rate - 1 : rx_rate;
+ 
++	if (rate_index >= MWIFIEX_MAX_AC_RX_RATES)
++		rate_index = MWIFIEX_MAX_AC_RX_RATES - 1;
++
+ 	return rate_index;
+ }
+diff --git a/drivers/net/wireless/realtek/rtlwifi/base.c b/drivers/net/wireless/realtek/rtlwifi/base.c
+index ef9b502ce576..a3189294ecb8 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/base.c
++++ b/drivers/net/wireless/realtek/rtlwifi/base.c
+@@ -469,6 +469,11 @@ static void _rtl_init_deferred_work(struct ieee80211_hw *hw)
+ 	/* <2> work queue */
+ 	rtlpriv->works.hw = hw;
+ 	rtlpriv->works.rtl_wq = alloc_workqueue("%s", 0, 0, rtlpriv->cfg->name);
++	if (unlikely(!rtlpriv->works.rtl_wq)) {
++		pr_err("Failed to allocate work queue\n");
++		return;
++	}
++
+ 	INIT_DELAYED_WORK(&rtlpriv->works.watchdog_wq,
+ 			  (void *)rtl_watchdog_wq_callback);
+ 	INIT_DELAYED_WORK(&rtlpriv->works.ips_nic_off_wq,
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/fw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/fw.c
+index 63874512598b..b5f91c994c79 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/fw.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/fw.c
+@@ -622,6 +622,8 @@ void rtl88e_set_fw_rsvdpagepkt(struct ieee80211_hw *hw, bool b_dl_finished)
+ 		      u1rsvdpageloc, 3);
+ 
+ 	skb = dev_alloc_skb(totalpacketlen);
++	if (!skb)
++		return;
+ 	skb_put_data(skb, &reserved_page_packet, totalpacketlen);
+ 
+ 	rtstatus = rtl_cmd_send_packet(hw, skb);
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192c/fw_common.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192c/fw_common.c
+index f3bff66e85d0..81ec0e6e07c1 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192c/fw_common.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192c/fw_common.c
+@@ -646,6 +646,8 @@ void rtl92c_set_fw_rsvdpagepkt(struct ieee80211_hw *hw,
+ 
+ 
+ 	skb = dev_alloc_skb(totalpacketlen);
++	if (!skb)
++		return;
+ 	skb_put_data(skb, &reserved_page_packet, totalpacketlen);
+ 
+ 	if (cmd_send_packet)
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/fw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/fw.c
+index 84a0d0eb72e1..a933490928ba 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/fw.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/fw.c
+@@ -766,6 +766,8 @@ void rtl92ee_set_fw_rsvdpagepkt(struct ieee80211_hw *hw, bool b_dl_finished)
+ 		      u1rsvdpageloc, 3);
+ 
+ 	skb = dev_alloc_skb(totalpacketlen);
++	if (!skb)
++		return;
+ 	skb_put_data(skb, &reserved_page_packet, totalpacketlen);
+ 
+ 	rtstatus = rtl_cmd_send_packet(hw, skb);
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/fw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/fw.c
+index bf9859f74b6f..52f108744e96 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/fw.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/fw.c
+@@ -470,6 +470,8 @@ void rtl8723e_set_fw_rsvdpagepkt(struct ieee80211_hw *hw, bool b_dl_finished)
+ 		      u1rsvdpageloc, 3);
+ 
+ 	skb = dev_alloc_skb(totalpacketlen);
++	if (!skb)
++		return;
+ 	skb_put_data(skb, &reserved_page_packet, totalpacketlen);
+ 
+ 	rtstatus = rtl_cmd_send_packet(hw, skb);
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8723be/fw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8723be/fw.c
+index f2441fbb92f1..307c2bd77f06 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8723be/fw.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8723be/fw.c
+@@ -584,6 +584,8 @@ void rtl8723be_set_fw_rsvdpagepkt(struct ieee80211_hw *hw,
+ 		      u1rsvdpageloc, sizeof(u1rsvdpageloc));
+ 
+ 	skb = dev_alloc_skb(totalpacketlen);
++	if (!skb)
++		return;
+ 	skb_put_data(skb, &reserved_page_packet, totalpacketlen);
+ 
+ 	rtstatus = rtl_cmd_send_packet(hw, skb);
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/fw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/fw.c
+index d868a034659f..d7235f6165fd 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/fw.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/fw.c
+@@ -1645,6 +1645,8 @@ out:
+ 		      &reserved_page_packet_8812[0], totalpacketlen);
+ 
+ 	skb = dev_alloc_skb(totalpacketlen);
++	if (!skb)
++		return;
+ 	skb_put_data(skb, &reserved_page_packet_8812, totalpacketlen);
+ 
+ 	rtstatus = rtl_cmd_send_packet(hw, skb);
+@@ -1781,6 +1783,8 @@ out:
+ 		      &reserved_page_packet_8821[0], totalpacketlen);
+ 
+ 	skb = dev_alloc_skb(totalpacketlen);
++	if (!skb)
++		return;
+ 	skb_put_data(skb, &reserved_page_packet_8821, totalpacketlen);
+ 
+ 	rtstatus = rtl_cmd_send_packet(hw, skb);
+diff --git a/drivers/net/wireless/rsi/rsi_91x_mac80211.c b/drivers/net/wireless/rsi/rsi_91x_mac80211.c
+index e56fc83faf0e..2f604e8bc991 100644
+--- a/drivers/net/wireless/rsi/rsi_91x_mac80211.c
++++ b/drivers/net/wireless/rsi/rsi_91x_mac80211.c
+@@ -188,27 +188,27 @@ bool rsi_is_cipher_wep(struct rsi_common *common)
+  * @adapter: Pointer to the adapter structure.
+  * @band: Operating band to be set.
+  *
+- * Return: None.
++ * Return: int - 0 on success, negative error on failure.
+  */
+-static void rsi_register_rates_channels(struct rsi_hw *adapter, int band)
++static int rsi_register_rates_channels(struct rsi_hw *adapter, int band)
+ {
+ 	struct ieee80211_supported_band *sbands = &adapter->sbands[band];
+ 	void *channels = NULL;
+ 
+ 	if (band == NL80211_BAND_2GHZ) {
+-		channels = kmalloc(sizeof(rsi_2ghz_channels), GFP_KERNEL);
+-		memcpy(channels,
+-		       rsi_2ghz_channels,
+-		       sizeof(rsi_2ghz_channels));
++		channels = kmemdup(rsi_2ghz_channels, sizeof(rsi_2ghz_channels),
++				   GFP_KERNEL);
++		if (!channels)
++			return -ENOMEM;
+ 		sbands->band = NL80211_BAND_2GHZ;
+ 		sbands->n_channels = ARRAY_SIZE(rsi_2ghz_channels);
+ 		sbands->bitrates = rsi_rates;
+ 		sbands->n_bitrates = ARRAY_SIZE(rsi_rates);
+ 	} else {
+-		channels = kmalloc(sizeof(rsi_5ghz_channels), GFP_KERNEL);
+-		memcpy(channels,
+-		       rsi_5ghz_channels,
+-		       sizeof(rsi_5ghz_channels));
++		channels = kmemdup(rsi_5ghz_channels, sizeof(rsi_5ghz_channels),
++				   GFP_KERNEL);
++		if (!channels)
++			return -ENOMEM;
+ 		sbands->band = NL80211_BAND_5GHZ;
+ 		sbands->n_channels = ARRAY_SIZE(rsi_5ghz_channels);
+ 		sbands->bitrates = &rsi_rates[4];
+@@ -227,6 +227,7 @@ static void rsi_register_rates_channels(struct rsi_hw *adapter, int band)
+ 	sbands->ht_cap.mcs.rx_mask[0] = 0xff;
+ 	sbands->ht_cap.mcs.tx_params = IEEE80211_HT_MCS_TX_DEFINED;
+ 	/* sbands->ht_cap.mcs.rx_highest = 0x82; */
++	return 0;
+ }
+ 
+ /**
+@@ -1985,11 +1986,16 @@ int rsi_mac80211_attach(struct rsi_common *common)
+ 	wiphy->available_antennas_rx = 1;
+ 	wiphy->available_antennas_tx = 1;
+ 
+-	rsi_register_rates_channels(adapter, NL80211_BAND_2GHZ);
++	status = rsi_register_rates_channels(adapter, NL80211_BAND_2GHZ);
++	if (status)
++		return status;
+ 	wiphy->bands[NL80211_BAND_2GHZ] =
+ 		&adapter->sbands[NL80211_BAND_2GHZ];
+ 	if (common->num_supp_bands > 1) {
+-		rsi_register_rates_channels(adapter, NL80211_BAND_5GHZ);
++		status = rsi_register_rates_channels(adapter,
++						     NL80211_BAND_5GHZ);
++		if (status)
++			return status;
+ 		wiphy->bands[NL80211_BAND_5GHZ] =
+ 			&adapter->sbands[NL80211_BAND_5GHZ];
+ 	}
+diff --git a/drivers/net/wireless/st/cw1200/main.c b/drivers/net/wireless/st/cw1200/main.c
+index 90dc979f260b..c1608f0bf6d0 100644
+--- a/drivers/net/wireless/st/cw1200/main.c
++++ b/drivers/net/wireless/st/cw1200/main.c
+@@ -345,6 +345,11 @@ static struct ieee80211_hw *cw1200_init_common(const u8 *macaddr,
+ 	mutex_init(&priv->wsm_cmd_mux);
+ 	mutex_init(&priv->conf_mutex);
+ 	priv->workqueue = create_singlethread_workqueue("cw1200_wq");
++	if (!priv->workqueue) {
++		ieee80211_free_hw(hw);
++		return NULL;
++	}
++
+ 	sema_init(&priv->scan.lock, 1);
+ 	INIT_WORK(&priv->scan.work, cw1200_scan_work);
+ 	INIT_DELAYED_WORK(&priv->scan.probe_work, cw1200_probe_work);
+diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
+index 0279eb1da3ef..7733eb240564 100644
+--- a/drivers/nvdimm/pmem.c
++++ b/drivers/nvdimm/pmem.c
+@@ -281,16 +281,22 @@ static long pmem_dax_direct_access(struct dax_device *dax_dev,
+ 	return __pmem_direct_access(pmem, pgoff, nr_pages, kaddr, pfn);
+ }
+ 
++/*
++ * Use the 'no check' versions of copy_from_iter_flushcache() and
++ * copy_to_iter_mcsafe() to bypass HARDENED_USERCOPY overhead. Bounds
++ * checking, both file offset and device offset, is handled by
++ * dax_iomap_actor()
++ */
+ static size_t pmem_copy_from_iter(struct dax_device *dax_dev, pgoff_t pgoff,
+ 		void *addr, size_t bytes, struct iov_iter *i)
+ {
+-	return copy_from_iter_flushcache(addr, bytes, i);
++	return _copy_from_iter_flushcache(addr, bytes, i);
+ }
+ 
+ static size_t pmem_copy_to_iter(struct dax_device *dax_dev, pgoff_t pgoff,
+ 		void *addr, size_t bytes, struct iov_iter *i)
+ {
+-	return copy_to_iter_mcsafe(addr, bytes, i);
++	return _copy_to_iter_mcsafe(addr, bytes, i);
+ }
+ 
+ static const struct dax_operations pmem_dax_ops = {
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 4c4413ad3ceb..5b389fed6d54 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -1551,6 +1551,10 @@ static void nvme_update_disk_info(struct gendisk *disk,
+ 	sector_t capacity = le64_to_cpup(&id->nsze) << (ns->lba_shift - 9);
+ 	unsigned short bs = 1 << ns->lba_shift;
+ 
++	if (ns->lba_shift > PAGE_SHIFT) {
++		/* unsupported block size, set capacity to 0 later */
++		bs = (1 << 9);
++	}
+ 	blk_mq_freeze_queue(disk->queue);
+ 	blk_integrity_unregister(disk);
+ 
+@@ -1561,7 +1565,8 @@ static void nvme_update_disk_info(struct gendisk *disk,
+ 	if (ns->ms && !ns->ext &&
+ 	    (ns->ctrl->ops->flags & NVME_F_METADATA_SUPPORTED))
+ 		nvme_init_integrity(disk, ns->ms, ns->pi_type);
+-	if (ns->ms && !nvme_ns_has_pi(ns) && !blk_get_integrity(disk))
++	if ((ns->ms && !nvme_ns_has_pi(ns) && !blk_get_integrity(disk)) ||
++	    ns->lba_shift > PAGE_SHIFT)
+ 		capacity = 0;
+ 
+ 	set_capacity(disk, capacity);
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 52abc3a6de12..1b1645a77daf 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -922,8 +922,9 @@ static void nvme_rdma_teardown_admin_queue(struct nvme_rdma_ctrl *ctrl,
+ {
+ 	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
+ 	nvme_rdma_stop_queue(&ctrl->queues[0]);
+-	blk_mq_tagset_busy_iter(&ctrl->admin_tag_set, nvme_cancel_request,
+-			&ctrl->ctrl);
++	if (ctrl->ctrl.admin_tagset)
++		blk_mq_tagset_busy_iter(ctrl->ctrl.admin_tagset,
++			nvme_cancel_request, &ctrl->ctrl);
+ 	blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
+ 	nvme_rdma_destroy_admin_queue(ctrl, remove);
+ }
+@@ -934,8 +935,9 @@ static void nvme_rdma_teardown_io_queues(struct nvme_rdma_ctrl *ctrl,
+ 	if (ctrl->ctrl.queue_count > 1) {
+ 		nvme_stop_queues(&ctrl->ctrl);
+ 		nvme_rdma_stop_io_queues(ctrl);
+-		blk_mq_tagset_busy_iter(&ctrl->tag_set, nvme_cancel_request,
+-				&ctrl->ctrl);
++		if (ctrl->ctrl.tagset)
++			blk_mq_tagset_busy_iter(ctrl->ctrl.tagset,
++				nvme_cancel_request, &ctrl->ctrl);
+ 		if (remove)
+ 			nvme_start_queues(&ctrl->ctrl);
+ 		nvme_rdma_destroy_io_queues(ctrl, remove);
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 5f0a00425242..e71b0058c57b 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -1686,7 +1686,9 @@ static void nvme_tcp_teardown_admin_queue(struct nvme_ctrl *ctrl,
+ {
+ 	blk_mq_quiesce_queue(ctrl->admin_q);
+ 	nvme_tcp_stop_queue(ctrl, 0);
+-	blk_mq_tagset_busy_iter(ctrl->admin_tagset, nvme_cancel_request, ctrl);
++	if (ctrl->admin_tagset)
++		blk_mq_tagset_busy_iter(ctrl->admin_tagset,
++			nvme_cancel_request, ctrl);
+ 	blk_mq_unquiesce_queue(ctrl->admin_q);
+ 	nvme_tcp_destroy_admin_queue(ctrl, remove);
+ }
+@@ -1698,7 +1700,9 @@ static void nvme_tcp_teardown_io_queues(struct nvme_ctrl *ctrl,
+ 		return;
+ 	nvme_stop_queues(ctrl);
+ 	nvme_tcp_stop_io_queues(ctrl);
+-	blk_mq_tagset_busy_iter(ctrl->tagset, nvme_cancel_request, ctrl);
++	if (ctrl->tagset)
++		blk_mq_tagset_busy_iter(ctrl->tagset,
++			nvme_cancel_request, ctrl);
+ 	if (remove)
+ 		nvme_start_queues(ctrl);
+ 	nvme_tcp_destroy_io_queues(ctrl, remove);
+diff --git a/drivers/perf/arm-cci.c b/drivers/perf/arm-cci.c
+index 1bfeb160c5b1..14a541c453e5 100644
+--- a/drivers/perf/arm-cci.c
++++ b/drivers/perf/arm-cci.c
+@@ -1692,21 +1692,24 @@ static int cci_pmu_probe(struct platform_device *pdev)
+ 	raw_spin_lock_init(&cci_pmu->hw_events.pmu_lock);
+ 	mutex_init(&cci_pmu->reserve_mutex);
+ 	atomic_set(&cci_pmu->active_events, 0);
+-	cci_pmu->cpu = get_cpu();
+-
+-	ret = cci_pmu_init(cci_pmu, pdev);
+-	if (ret) {
+-		put_cpu();
+-		return ret;
+-	}
+ 
++	cci_pmu->cpu = raw_smp_processor_id();
++	g_cci_pmu = cci_pmu;
+ 	cpuhp_setup_state_nocalls(CPUHP_AP_PERF_ARM_CCI_ONLINE,
+ 				  "perf/arm/cci:online", NULL,
+ 				  cci_pmu_offline_cpu);
+-	put_cpu();
+-	g_cci_pmu = cci_pmu;
++
++	ret = cci_pmu_init(cci_pmu, pdev);
++	if (ret)
++		goto error_pmu_init;
++
+ 	pr_info("ARM %s PMU driver probed", cci_pmu->model->name);
+ 	return 0;
++
++error_pmu_init:
++	cpuhp_remove_state(CPUHP_AP_PERF_ARM_CCI_ONLINE);
++	g_cci_pmu = NULL;
++	return ret;
+ }
+ 
+ static int cci_pmu_remove(struct platform_device *pdev)
+diff --git a/drivers/phy/allwinner/phy-sun4i-usb.c b/drivers/phy/allwinner/phy-sun4i-usb.c
+index 4bbd9ede38c8..cc5af961778d 100644
+--- a/drivers/phy/allwinner/phy-sun4i-usb.c
++++ b/drivers/phy/allwinner/phy-sun4i-usb.c
+@@ -554,6 +554,7 @@ static void sun4i_usb_phy0_id_vbus_det_scan(struct work_struct *work)
+ 	struct sun4i_usb_phy_data *data =
+ 		container_of(work, struct sun4i_usb_phy_data, detect.work);
+ 	struct phy *phy0 = data->phys[0].phy;
++	struct sun4i_usb_phy *phy = phy_get_drvdata(phy0);
+ 	bool force_session_end, id_notify = false, vbus_notify = false;
+ 	int id_det, vbus_det;
+ 
+@@ -610,6 +611,9 @@ static void sun4i_usb_phy0_id_vbus_det_scan(struct work_struct *work)
+ 			mutex_unlock(&phy0->mutex);
+ 		}
+ 
++		/* Enable PHY0 passby for host mode only. */
++		sun4i_usb_phy_passby(phy, !id_det);
++
+ 		/* Re-route PHY0 if necessary */
+ 		if (data->cfg->phy0_dual_route)
+ 			sun4i_usb_phy0_reroute(data, id_det);
+diff --git a/drivers/phy/motorola/Kconfig b/drivers/phy/motorola/Kconfig
+index 82651524ffb9..718f8729701d 100644
+--- a/drivers/phy/motorola/Kconfig
++++ b/drivers/phy/motorola/Kconfig
+@@ -13,7 +13,7 @@ config PHY_CPCAP_USB
+ 
+ config PHY_MAPPHONE_MDM6600
+ 	tristate "Motorola Mapphone MDM6600 modem USB PHY driver"
+-	depends on OF && USB_SUPPORT
++	depends on OF && USB_SUPPORT && GPIOLIB
+ 	select GENERIC_PHY
+ 	help
+ 	  Enable this for MDM6600 USB modem to work on Motorola phones
+diff --git a/drivers/pinctrl/pinctrl-pistachio.c b/drivers/pinctrl/pinctrl-pistachio.c
+index aa5f949ef219..5b0678f310e5 100644
+--- a/drivers/pinctrl/pinctrl-pistachio.c
++++ b/drivers/pinctrl/pinctrl-pistachio.c
+@@ -1367,6 +1367,7 @@ static int pistachio_gpio_register(struct pistachio_pinctrl *pctl)
+ 		if (!of_find_property(child, "gpio-controller", NULL)) {
+ 			dev_err(pctl->dev,
+ 				"No gpio-controller property for bank %u\n", i);
++			of_node_put(child);
+ 			ret = -ENODEV;
+ 			goto err;
+ 		}
+@@ -1374,6 +1375,7 @@ static int pistachio_gpio_register(struct pistachio_pinctrl *pctl)
+ 		irq = irq_of_parse_and_map(child, 0);
+ 		if (irq < 0) {
+ 			dev_err(pctl->dev, "No IRQ for bank %u: %d\n", i, irq);
++			of_node_put(child);
+ 			ret = irq;
+ 			goto err;
+ 		}
+diff --git a/drivers/pinctrl/pinctrl-st.c b/drivers/pinctrl/pinctrl-st.c
+index e66af93f2cbf..195b442a2343 100644
+--- a/drivers/pinctrl/pinctrl-st.c
++++ b/drivers/pinctrl/pinctrl-st.c
+@@ -1170,7 +1170,7 @@ static int st_pctl_dt_parse_groups(struct device_node *np,
+ 	struct property *pp;
+ 	struct st_pinconf *conf;
+ 	struct device_node *pins;
+-	int i = 0, npins = 0, nr_props;
++	int i = 0, npins = 0, nr_props, ret = 0;
+ 
+ 	pins = of_get_child_by_name(np, "st,pins");
+ 	if (!pins)
+@@ -1185,7 +1185,8 @@ static int st_pctl_dt_parse_groups(struct device_node *np,
+ 			npins++;
+ 		} else {
+ 			pr_warn("Invalid st,pins in %pOFn node\n", np);
+-			return -EINVAL;
++			ret = -EINVAL;
++			goto out_put_node;
+ 		}
+ 	}
+ 
+@@ -1195,8 +1196,10 @@ static int st_pctl_dt_parse_groups(struct device_node *np,
+ 	grp->pin_conf = devm_kcalloc(info->dev,
+ 					npins, sizeof(*conf), GFP_KERNEL);
+ 
+-	if (!grp->pins || !grp->pin_conf)
+-		return -ENOMEM;
++	if (!grp->pins || !grp->pin_conf) {
++		ret = -ENOMEM;
++		goto out_put_node;
++	}
+ 
+ 	/* <bank offset mux direction rt_type rt_delay rt_clk> */
+ 	for_each_property_of_node(pins, pp) {
+@@ -1229,9 +1232,11 @@ static int st_pctl_dt_parse_groups(struct device_node *np,
+ 		}
+ 		i++;
+ 	}
++
++out_put_node:
+ 	of_node_put(pins);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int st_pctl_parse_functions(struct device_node *np,
+diff --git a/drivers/pinctrl/samsung/pinctrl-exynos-arm.c b/drivers/pinctrl/samsung/pinctrl-exynos-arm.c
+index 44c6b753f692..85ddf49a5188 100644
+--- a/drivers/pinctrl/samsung/pinctrl-exynos-arm.c
++++ b/drivers/pinctrl/samsung/pinctrl-exynos-arm.c
+@@ -71,6 +71,7 @@ s5pv210_retention_init(struct samsung_pinctrl_drv_data *drvdata,
+ 	}
+ 
+ 	clk_base = of_iomap(np, 0);
++	of_node_put(np);
+ 	if (!clk_base) {
+ 		pr_err("%s: failed to map clock registers\n", __func__);
+ 		return ERR_PTR(-EINVAL);
+diff --git a/drivers/pinctrl/zte/pinctrl-zx.c b/drivers/pinctrl/zte/pinctrl-zx.c
+index caa44dd2880a..3cb69309912b 100644
+--- a/drivers/pinctrl/zte/pinctrl-zx.c
++++ b/drivers/pinctrl/zte/pinctrl-zx.c
+@@ -411,6 +411,7 @@ int zx_pinctrl_init(struct platform_device *pdev,
+ 	}
+ 
+ 	zpctl->aux_base = of_iomap(np, 0);
++	of_node_put(np);
+ 	if (!zpctl->aux_base)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index fb9fe26fd0fa..218b9331475b 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -5101,10 +5101,11 @@ void regulator_unregister(struct regulator_dev *rdev)
+ 		regulator_put(rdev->supply);
+ 	}
+ 
++	flush_work(&rdev->disable_work.work);
++
+ 	mutex_lock(&regulator_list_mutex);
+ 
+ 	debugfs_remove_recursive(rdev->debugfs);
+-	flush_work(&rdev->disable_work.work);
+ 	WARN_ON(rdev->open_count);
+ 	regulator_remove_coupling(rdev);
+ 	unset_regulator_supplies(rdev);
+diff --git a/drivers/regulator/da9055-regulator.c b/drivers/regulator/da9055-regulator.c
+index 588c3d2445cf..acba42d5b57d 100644
+--- a/drivers/regulator/da9055-regulator.c
++++ b/drivers/regulator/da9055-regulator.c
+@@ -515,8 +515,10 @@ static irqreturn_t da9055_ldo5_6_oc_irq(int irq, void *data)
+ {
+ 	struct da9055_regulator *regulator = data;
+ 
++	regulator_lock(regulator->rdev);
+ 	regulator_notifier_call_chain(regulator->rdev,
+ 				      REGULATOR_EVENT_OVER_CURRENT, NULL);
++	regulator_unlock(regulator->rdev);
+ 
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/regulator/da9062-regulator.c b/drivers/regulator/da9062-regulator.c
+index 34a70d9dc450..5224304c10b3 100644
+--- a/drivers/regulator/da9062-regulator.c
++++ b/drivers/regulator/da9062-regulator.c
+@@ -974,8 +974,10 @@ static irqreturn_t da9062_ldo_lim_event(int irq, void *data)
+ 			continue;
+ 
+ 		if (BIT(regl->info->oc_event.lsb) & bits) {
++			regulator_lock(regl->rdev);
+ 			regulator_notifier_call_chain(regl->rdev,
+ 					REGULATOR_EVENT_OVER_CURRENT, NULL);
++			regulator_unlock(regl->rdev);
+ 			handled = IRQ_HANDLED;
+ 		}
+ 	}
+diff --git a/drivers/regulator/da9063-regulator.c b/drivers/regulator/da9063-regulator.c
+index 8cbcd2a3eb20..d3ea73ab5920 100644
+--- a/drivers/regulator/da9063-regulator.c
++++ b/drivers/regulator/da9063-regulator.c
+@@ -615,9 +615,12 @@ static irqreturn_t da9063_ldo_lim_event(int irq, void *data)
+ 		if (regl->info->oc_event.reg != DA9063_REG_STATUS_D)
+ 			continue;
+ 
+-		if (BIT(regl->info->oc_event.lsb) & bits)
++		if (BIT(regl->info->oc_event.lsb) & bits) {
++		        regulator_lock(regl->rdev);
+ 			regulator_notifier_call_chain(regl->rdev,
+ 					REGULATOR_EVENT_OVER_CURRENT, NULL);
++		        regulator_unlock(regl->rdev);
++		}
+ 	}
+ 
+ 	return IRQ_HANDLED;
+diff --git a/drivers/regulator/da9211-regulator.c b/drivers/regulator/da9211-regulator.c
+index 109ee12d4362..4d7fe4819c1c 100644
+--- a/drivers/regulator/da9211-regulator.c
++++ b/drivers/regulator/da9211-regulator.c
+@@ -322,8 +322,10 @@ static irqreturn_t da9211_irq_handler(int irq, void *data)
+ 		goto error_i2c;
+ 
+ 	if (reg_val & DA9211_E_OV_CURR_A) {
++	        regulator_lock(chip->rdev[0]);
+ 		regulator_notifier_call_chain(chip->rdev[0],
+ 			REGULATOR_EVENT_OVER_CURRENT, NULL);
++	        regulator_unlock(chip->rdev[0]);
+ 
+ 		err = regmap_write(chip->regmap, DA9211_REG_EVENT_B,
+ 			DA9211_E_OV_CURR_A);
+@@ -334,8 +336,10 @@ static irqreturn_t da9211_irq_handler(int irq, void *data)
+ 	}
+ 
+ 	if (reg_val & DA9211_E_OV_CURR_B) {
++	        regulator_lock(chip->rdev[1]);
+ 		regulator_notifier_call_chain(chip->rdev[1],
+ 			REGULATOR_EVENT_OVER_CURRENT, NULL);
++	        regulator_unlock(chip->rdev[1]);
+ 
+ 		err = regmap_write(chip->regmap, DA9211_REG_EVENT_B,
+ 			DA9211_E_OV_CURR_B);
+diff --git a/drivers/regulator/lp8755.c b/drivers/regulator/lp8755.c
+index 244822bb63cd..d82d3077f3b8 100644
+--- a/drivers/regulator/lp8755.c
++++ b/drivers/regulator/lp8755.c
+@@ -372,10 +372,13 @@ static irqreturn_t lp8755_irq_handler(int irq, void *data)
+ 	for (icnt = 0; icnt < LP8755_BUCK_MAX; icnt++)
+ 		if ((flag0 & (0x4 << icnt))
+ 		    && (pchip->irqmask & (0x04 << icnt))
+-		    && (pchip->rdev[icnt] != NULL))
++		    && (pchip->rdev[icnt] != NULL)) {
++			regulator_lock(pchip->rdev[icnt]);
+ 			regulator_notifier_call_chain(pchip->rdev[icnt],
+ 						      LP8755_EVENT_PWR_FAULT,
+ 						      NULL);
++			regulator_unlock(pchip->rdev[icnt]);
++		}
+ 
+ 	/* read flag1 register */
+ 	ret = lp8755_read(pchip, 0x0E, &flag1);
+@@ -389,18 +392,24 @@ static irqreturn_t lp8755_irq_handler(int irq, void *data)
+ 	/* send OCP event to all regualtor devices */
+ 	if ((flag1 & 0x01) && (pchip->irqmask & 0x01))
+ 		for (icnt = 0; icnt < LP8755_BUCK_MAX; icnt++)
+-			if (pchip->rdev[icnt] != NULL)
++			if (pchip->rdev[icnt] != NULL) {
++				regulator_lock(pchip->rdev[icnt]);
+ 				regulator_notifier_call_chain(pchip->rdev[icnt],
+ 							      LP8755_EVENT_OCP,
+ 							      NULL);
++				regulator_unlock(pchip->rdev[icnt]);
++			}
+ 
+ 	/* send OVP event to all regualtor devices */
+ 	if ((flag1 & 0x02) && (pchip->irqmask & 0x02))
+ 		for (icnt = 0; icnt < LP8755_BUCK_MAX; icnt++)
+-			if (pchip->rdev[icnt] != NULL)
++			if (pchip->rdev[icnt] != NULL) {
++				regulator_lock(pchip->rdev[icnt]);
+ 				regulator_notifier_call_chain(pchip->rdev[icnt],
+ 							      LP8755_EVENT_OVP,
+ 							      NULL);
++				regulator_unlock(pchip->rdev[icnt]);
++			}
+ 	return IRQ_HANDLED;
+ 
+ err_i2c:
+diff --git a/drivers/regulator/ltc3589.c b/drivers/regulator/ltc3589.c
+index 63f724f260ef..75089b037b72 100644
+--- a/drivers/regulator/ltc3589.c
++++ b/drivers/regulator/ltc3589.c
+@@ -419,16 +419,22 @@ static irqreturn_t ltc3589_isr(int irq, void *dev_id)
+ 
+ 	if (irqstat & LTC3589_IRQSTAT_THERMAL_WARN) {
+ 		event = REGULATOR_EVENT_OVER_TEMP;
+-		for (i = 0; i < LTC3589_NUM_REGULATORS; i++)
++		for (i = 0; i < LTC3589_NUM_REGULATORS; i++) {
++		        regulator_lock(ltc3589->regulators[i]);
+ 			regulator_notifier_call_chain(ltc3589->regulators[i],
+ 						      event, NULL);
++		        regulator_unlock(ltc3589->regulators[i]);
++		}
+ 	}
+ 
+ 	if (irqstat & LTC3589_IRQSTAT_UNDERVOLT_WARN) {
+ 		event = REGULATOR_EVENT_UNDER_VOLTAGE;
+-		for (i = 0; i < LTC3589_NUM_REGULATORS; i++)
++		for (i = 0; i < LTC3589_NUM_REGULATORS; i++) {
++		        regulator_lock(ltc3589->regulators[i]);
+ 			regulator_notifier_call_chain(ltc3589->regulators[i],
+ 						      event, NULL);
++		        regulator_unlock(ltc3589->regulators[i]);
++		}
+ 	}
+ 
+ 	/* Clear warning condition */
+diff --git a/drivers/regulator/ltc3676.c b/drivers/regulator/ltc3676.c
+index 71fd0f2a4b76..cd0f11254c77 100644
+--- a/drivers/regulator/ltc3676.c
++++ b/drivers/regulator/ltc3676.c
+@@ -338,17 +338,23 @@ static irqreturn_t ltc3676_isr(int irq, void *dev_id)
+ 	if (irqstat & LTC3676_IRQSTAT_THERMAL_WARN) {
+ 		dev_warn(dev, "Over-temperature Warning\n");
+ 		event = REGULATOR_EVENT_OVER_TEMP;
+-		for (i = 0; i < LTC3676_NUM_REGULATORS; i++)
++		for (i = 0; i < LTC3676_NUM_REGULATORS; i++) {
++			regulator_lock(ltc3676->regulators[i]);
+ 			regulator_notifier_call_chain(ltc3676->regulators[i],
+ 						      event, NULL);
++			regulator_unlock(ltc3676->regulators[i]);
++		}
+ 	}
+ 
+ 	if (irqstat & LTC3676_IRQSTAT_UNDERVOLT_WARN) {
+ 		dev_info(dev, "Undervoltage Warning\n");
+ 		event = REGULATOR_EVENT_UNDER_VOLTAGE;
+-		for (i = 0; i < LTC3676_NUM_REGULATORS; i++)
++		for (i = 0; i < LTC3676_NUM_REGULATORS; i++) {
++			regulator_lock(ltc3676->regulators[i]);
+ 			regulator_notifier_call_chain(ltc3676->regulators[i],
+ 						      event, NULL);
++			regulator_unlock(ltc3676->regulators[i]);
++		}
+ 	}
+ 
+ 	/* Clear warning condition */
+diff --git a/drivers/regulator/pv88060-regulator.c b/drivers/regulator/pv88060-regulator.c
+index a9446056435f..000c34914fe3 100644
+--- a/drivers/regulator/pv88060-regulator.c
++++ b/drivers/regulator/pv88060-regulator.c
+@@ -276,9 +276,11 @@ static irqreturn_t pv88060_irq_handler(int irq, void *data)
+ 	if (reg_val & PV88060_E_VDD_FLT) {
+ 		for (i = 0; i < PV88060_MAX_REGULATORS; i++) {
+ 			if (chip->rdev[i] != NULL) {
++				regulator_lock(chip->rdev[i]);
+ 				regulator_notifier_call_chain(chip->rdev[i],
+ 					REGULATOR_EVENT_UNDER_VOLTAGE,
+ 					NULL);
++				regulator_unlock(chip->rdev[i]);
+ 			}
+ 		}
+ 
+@@ -293,9 +295,11 @@ static irqreturn_t pv88060_irq_handler(int irq, void *data)
+ 	if (reg_val & PV88060_E_OVER_TEMP) {
+ 		for (i = 0; i < PV88060_MAX_REGULATORS; i++) {
+ 			if (chip->rdev[i] != NULL) {
++				regulator_lock(chip->rdev[i]);
+ 				regulator_notifier_call_chain(chip->rdev[i],
+ 					REGULATOR_EVENT_OVER_TEMP,
+ 					NULL);
++				regulator_unlock(chip->rdev[i]);
+ 			}
+ 		}
+ 
+diff --git a/drivers/regulator/pv88080-regulator.c b/drivers/regulator/pv88080-regulator.c
+index 9a08cb2de501..d99f1b9fa075 100644
+--- a/drivers/regulator/pv88080-regulator.c
++++ b/drivers/regulator/pv88080-regulator.c
+@@ -384,9 +384,11 @@ static irqreturn_t pv88080_irq_handler(int irq, void *data)
+ 	if (reg_val & PV88080_E_VDD_FLT) {
+ 		for (i = 0; i < PV88080_MAX_REGULATORS; i++) {
+ 			if (chip->rdev[i] != NULL) {
++			        regulator_lock(chip->rdev[i]);
+ 				regulator_notifier_call_chain(chip->rdev[i],
+ 					REGULATOR_EVENT_UNDER_VOLTAGE,
+ 					NULL);
++			        regulator_unlock(chip->rdev[i]);
+ 			}
+ 		}
+ 
+@@ -401,9 +403,11 @@ static irqreturn_t pv88080_irq_handler(int irq, void *data)
+ 	if (reg_val & PV88080_E_OVER_TEMP) {
+ 		for (i = 0; i < PV88080_MAX_REGULATORS; i++) {
+ 			if (chip->rdev[i] != NULL) {
++			        regulator_lock(chip->rdev[i]);
+ 				regulator_notifier_call_chain(chip->rdev[i],
+ 					REGULATOR_EVENT_OVER_TEMP,
+ 					NULL);
++			        regulator_unlock(chip->rdev[i]);
+ 			}
+ 		}
+ 
+diff --git a/drivers/regulator/pv88090-regulator.c b/drivers/regulator/pv88090-regulator.c
+index 7a0c15957bd0..b4ff646608f5 100644
+--- a/drivers/regulator/pv88090-regulator.c
++++ b/drivers/regulator/pv88090-regulator.c
+@@ -274,9 +274,11 @@ static irqreturn_t pv88090_irq_handler(int irq, void *data)
+ 	if (reg_val & PV88090_E_VDD_FLT) {
+ 		for (i = 0; i < PV88090_MAX_REGULATORS; i++) {
+ 			if (chip->rdev[i] != NULL) {
++			        regulator_lock(chip->rdev[i]);
+ 				regulator_notifier_call_chain(chip->rdev[i],
+ 					REGULATOR_EVENT_UNDER_VOLTAGE,
+ 					NULL);
++			        regulator_unlock(chip->rdev[i]);
+ 			}
+ 		}
+ 
+@@ -291,9 +293,11 @@ static irqreturn_t pv88090_irq_handler(int irq, void *data)
+ 	if (reg_val & PV88090_E_OVER_TEMP) {
+ 		for (i = 0; i < PV88090_MAX_REGULATORS; i++) {
+ 			if (chip->rdev[i] != NULL) {
++			        regulator_lock(chip->rdev[i]);
+ 				regulator_notifier_call_chain(chip->rdev[i],
+ 					REGULATOR_EVENT_OVER_TEMP,
+ 					NULL);
++			        regulator_unlock(chip->rdev[i]);
+ 			}
+ 		}
+ 
+diff --git a/drivers/regulator/wm831x-dcdc.c b/drivers/regulator/wm831x-dcdc.c
+index 5a5bc4bb08d2..4f5461ad7b62 100644
+--- a/drivers/regulator/wm831x-dcdc.c
++++ b/drivers/regulator/wm831x-dcdc.c
+@@ -183,9 +183,11 @@ static irqreturn_t wm831x_dcdc_uv_irq(int irq, void *data)
+ {
+ 	struct wm831x_dcdc *dcdc = data;
+ 
++	regulator_lock(dcdc->regulator);
+ 	regulator_notifier_call_chain(dcdc->regulator,
+ 				      REGULATOR_EVENT_UNDER_VOLTAGE,
+ 				      NULL);
++	regulator_unlock(dcdc->regulator);
+ 
+ 	return IRQ_HANDLED;
+ }
+@@ -194,9 +196,11 @@ static irqreturn_t wm831x_dcdc_oc_irq(int irq, void *data)
+ {
+ 	struct wm831x_dcdc *dcdc = data;
+ 
++	regulator_lock(dcdc->regulator);
+ 	regulator_notifier_call_chain(dcdc->regulator,
+ 				      REGULATOR_EVENT_OVER_CURRENT,
+ 				      NULL);
++	regulator_unlock(dcdc->regulator);
+ 
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/regulator/wm831x-isink.c b/drivers/regulator/wm831x-isink.c
+index 6dd891d7eee3..11f351191dba 100644
+--- a/drivers/regulator/wm831x-isink.c
++++ b/drivers/regulator/wm831x-isink.c
+@@ -140,9 +140,11 @@ static irqreturn_t wm831x_isink_irq(int irq, void *data)
+ {
+ 	struct wm831x_isink *isink = data;
+ 
++	regulator_lock(isink->regulator);
+ 	regulator_notifier_call_chain(isink->regulator,
+ 				      REGULATOR_EVENT_OVER_CURRENT,
+ 				      NULL);
++	regulator_unlock(isink->regulator);
+ 
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/regulator/wm831x-ldo.c b/drivers/regulator/wm831x-ldo.c
+index e4a6f888484e..fcd038e7cd80 100644
+--- a/drivers/regulator/wm831x-ldo.c
++++ b/drivers/regulator/wm831x-ldo.c
+@@ -51,9 +51,11 @@ static irqreturn_t wm831x_ldo_uv_irq(int irq, void *data)
+ {
+ 	struct wm831x_ldo *ldo = data;
+ 
++	regulator_lock(ldo->regulator);
+ 	regulator_notifier_call_chain(ldo->regulator,
+ 				      REGULATOR_EVENT_UNDER_VOLTAGE,
+ 				      NULL);
++	regulator_unlock(ldo->regulator);
+ 
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/rtc/rtc-88pm860x.c b/drivers/rtc/rtc-88pm860x.c
+index 01ffc0ef8033..fbcf13bbbd8d 100644
+--- a/drivers/rtc/rtc-88pm860x.c
++++ b/drivers/rtc/rtc-88pm860x.c
+@@ -414,7 +414,7 @@ static int pm860x_rtc_remove(struct platform_device *pdev)
+ 	struct pm860x_rtc_info *info = platform_get_drvdata(pdev);
+ 
+ #ifdef VRTC_CALIBRATION
+-	flush_scheduled_work();
++	cancel_delayed_work_sync(&info->calib_work);
+ 	/* disable measurement */
+ 	pm860x_set_bits(info->i2c, PM8607_MEAS_EN2, MEAS2_VRTC, 0);
+ #endif	/* VRTC_CALIBRATION */
+diff --git a/drivers/rtc/rtc-stm32.c b/drivers/rtc/rtc-stm32.c
+index c5908cfea234..8e6c9b3bcc29 100644
+--- a/drivers/rtc/rtc-stm32.c
++++ b/drivers/rtc/rtc-stm32.c
+@@ -788,11 +788,14 @@ static int stm32_rtc_probe(struct platform_device *pdev)
+ 	ret = device_init_wakeup(&pdev->dev, true);
+ 	if (rtc->data->has_wakeirq) {
+ 		rtc->wakeirq_alarm = platform_get_irq(pdev, 1);
+-		if (rtc->wakeirq_alarm <= 0)
+-			ret = rtc->wakeirq_alarm;
+-		else
++		if (rtc->wakeirq_alarm > 0) {
+ 			ret = dev_pm_set_dedicated_wake_irq(&pdev->dev,
+ 							    rtc->wakeirq_alarm);
++		} else {
++			ret = rtc->wakeirq_alarm;
++			if (rtc->wakeirq_alarm == -EPROBE_DEFER)
++				goto err;
++		}
+ 	}
+ 	if (ret)
+ 		dev_warn(&pdev->dev, "alarm can't wake up the system: %d", ret);
+diff --git a/drivers/rtc/rtc-xgene.c b/drivers/rtc/rtc-xgene.c
+index 153820876a82..2f741f455c30 100644
+--- a/drivers/rtc/rtc-xgene.c
++++ b/drivers/rtc/rtc-xgene.c
+@@ -168,6 +168,10 @@ static int xgene_rtc_probe(struct platform_device *pdev)
+ 	if (IS_ERR(pdata->csr_base))
+ 		return PTR_ERR(pdata->csr_base);
+ 
++	pdata->rtc = devm_rtc_allocate_device(&pdev->dev);
++	if (IS_ERR(pdata->rtc))
++		return PTR_ERR(pdata->rtc);
++
+ 	irq = platform_get_irq(pdev, 0);
+ 	if (irq < 0) {
+ 		dev_err(&pdev->dev, "No IRQ resource\n");
+@@ -198,15 +202,15 @@ static int xgene_rtc_probe(struct platform_device *pdev)
+ 		return ret;
+ 	}
+ 
+-	pdata->rtc = devm_rtc_device_register(&pdev->dev, pdev->name,
+-					 &xgene_rtc_ops, THIS_MODULE);
+-	if (IS_ERR(pdata->rtc)) {
+-		clk_disable_unprepare(pdata->clk);
+-		return PTR_ERR(pdata->rtc);
+-	}
+-
+ 	/* HW does not support update faster than 1 seconds */
+ 	pdata->rtc->uie_unsupported = 1;
++	pdata->rtc->ops = &xgene_rtc_ops;
++
++	ret = rtc_register_device(pdata->rtc);
++	if (ret) {
++		clk_disable_unprepare(pdata->clk);
++		return ret;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/s390/cio/cio.h b/drivers/s390/cio/cio.h
+index 9811fd8a0c73..92eabbb5f18d 100644
+--- a/drivers/s390/cio/cio.h
++++ b/drivers/s390/cio/cio.h
+@@ -115,7 +115,7 @@ struct subchannel {
+ 	struct schib_config config;
+ } __attribute__ ((aligned(8)));
+ 
+-DECLARE_PER_CPU(struct irb, cio_irb);
++DECLARE_PER_CPU_ALIGNED(struct irb, cio_irb);
+ 
+ #define to_subchannel(n) container_of(n, struct subchannel, dev)
+ 
+diff --git a/drivers/s390/cio/vfio_ccw_drv.c b/drivers/s390/cio/vfio_ccw_drv.c
+index 0b3b9de45c60..9e84d8a971ad 100644
+--- a/drivers/s390/cio/vfio_ccw_drv.c
++++ b/drivers/s390/cio/vfio_ccw_drv.c
+@@ -40,26 +40,30 @@ int vfio_ccw_sch_quiesce(struct subchannel *sch)
+ 	if (ret != -EBUSY)
+ 		goto out_unlock;
+ 
++	iretry = 255;
+ 	do {
+-		iretry = 255;
+ 
+ 		ret = cio_cancel_halt_clear(sch, &iretry);
+-		while (ret == -EBUSY) {
+-			/*
+-			 * Flush all I/O and wait for
+-			 * cancel/halt/clear completion.
+-			 */
+-			private->completion = &completion;
+-			spin_unlock_irq(sch->lock);
+ 
+-			wait_for_completion_timeout(&completion, 3*HZ);
++		if (ret == -EIO) {
++			pr_err("vfio_ccw: could not quiesce subchannel 0.%x.%04x!\n",
++			       sch->schid.ssid, sch->schid.sch_no);
++			break;
++		}
++
++		/*
++		 * Flush all I/O and wait for
++		 * cancel/halt/clear completion.
++		 */
++		private->completion = &completion;
++		spin_unlock_irq(sch->lock);
+ 
+-			spin_lock_irq(sch->lock);
+-			private->completion = NULL;
+-			flush_workqueue(vfio_ccw_work_q);
+-			ret = cio_cancel_halt_clear(sch, &iretry);
+-		};
++		if (ret == -EBUSY)
++			wait_for_completion_timeout(&completion, 3*HZ);
+ 
++		private->completion = NULL;
++		flush_workqueue(vfio_ccw_work_q);
++		spin_lock_irq(sch->lock);
+ 		ret = cio_disable_subchannel(sch);
+ 	} while (ret == -EBUSY);
+ out_unlock:
+diff --git a/drivers/s390/cio/vfio_ccw_ops.c b/drivers/s390/cio/vfio_ccw_ops.c
+index f673e106c041..dc5ff47de3fe 100644
+--- a/drivers/s390/cio/vfio_ccw_ops.c
++++ b/drivers/s390/cio/vfio_ccw_ops.c
+@@ -130,11 +130,12 @@ static int vfio_ccw_mdev_remove(struct mdev_device *mdev)
+ 
+ 	if ((private->state != VFIO_CCW_STATE_NOT_OPER) &&
+ 	    (private->state != VFIO_CCW_STATE_STANDBY)) {
+-		if (!vfio_ccw_mdev_reset(mdev))
++		if (!vfio_ccw_sch_quiesce(private->sch))
+ 			private->state = VFIO_CCW_STATE_STANDBY;
+ 		/* The state will be NOT_OPER on error. */
+ 	}
+ 
++	cp_free(&private->cp);
+ 	private->mdev = NULL;
+ 	atomic_inc(&private->avail);
+ 
+@@ -158,6 +159,14 @@ static void vfio_ccw_mdev_release(struct mdev_device *mdev)
+ 	struct vfio_ccw_private *private =
+ 		dev_get_drvdata(mdev_parent_dev(mdev));
+ 
++	if ((private->state != VFIO_CCW_STATE_NOT_OPER) &&
++	    (private->state != VFIO_CCW_STATE_STANDBY)) {
++		if (!vfio_ccw_mdev_reset(mdev))
++			private->state = VFIO_CCW_STATE_STANDBY;
++		/* The state will be NOT_OPER on error. */
++	}
++
++	cp_free(&private->cp);
+ 	vfio_unregister_notifier(mdev_dev(mdev), VFIO_IOMMU_NOTIFY,
+ 				 &private->nb);
+ }
+diff --git a/drivers/s390/crypto/zcrypt_api.c b/drivers/s390/crypto/zcrypt_api.c
+index eb93c2d27d0a..df1e847dd36e 100644
+--- a/drivers/s390/crypto/zcrypt_api.c
++++ b/drivers/s390/crypto/zcrypt_api.c
+@@ -657,6 +657,7 @@ static long zcrypt_rsa_modexpo(struct ap_perms *perms,
+ 	trace_s390_zcrypt_req(mex, TP_ICARSAMODEXPO);
+ 
+ 	if (mex->outputdatalength < mex->inputdatalength) {
++		func_code = 0;
+ 		rc = -EINVAL;
+ 		goto out;
+ 	}
+@@ -739,6 +740,7 @@ static long zcrypt_rsa_crt(struct ap_perms *perms,
+ 	trace_s390_zcrypt_req(crt, TP_ICARSACRT);
+ 
+ 	if (crt->outputdatalength < crt->inputdatalength) {
++		func_code = 0;
+ 		rc = -EINVAL;
+ 		goto out;
+ 	}
+@@ -946,6 +948,7 @@ static long zcrypt_send_ep11_cprb(struct ap_perms *perms,
+ 
+ 		targets = kcalloc(target_num, sizeof(*targets), GFP_KERNEL);
+ 		if (!targets) {
++			func_code = 0;
+ 			rc = -ENOMEM;
+ 			goto out;
+ 		}
+@@ -953,6 +956,7 @@ static long zcrypt_send_ep11_cprb(struct ap_perms *perms,
+ 		uptr = (struct ep11_target_dev __force __user *) xcrb->targets;
+ 		if (copy_from_user(targets, uptr,
+ 				   target_num * sizeof(*targets))) {
++			func_code = 0;
+ 			rc = -EFAULT;
+ 			goto out_free;
+ 		}
+diff --git a/drivers/s390/net/qeth_core.h b/drivers/s390/net/qeth_core.h
+index 122059ecad84..614bb0f34e8e 100644
+--- a/drivers/s390/net/qeth_core.h
++++ b/drivers/s390/net/qeth_core.h
+@@ -215,6 +215,12 @@ struct qeth_vnicc_info {
+ 	bool rx_bcast_enabled;
+ };
+ 
++static inline int qeth_is_adp_supported(struct qeth_ipa_info *ipa,
++		enum qeth_ipa_setadp_cmd func)
++{
++	return (ipa->supported_funcs & func);
++}
++
+ static inline int qeth_is_ipa_supported(struct qeth_ipa_info *ipa,
+ 		enum qeth_ipa_funcs func)
+ {
+@@ -228,9 +234,7 @@ static inline int qeth_is_ipa_enabled(struct qeth_ipa_info *ipa,
+ }
+ 
+ #define qeth_adp_supported(c, f) \
+-	qeth_is_ipa_supported(&c->options.adp, f)
+-#define qeth_adp_enabled(c, f) \
+-	qeth_is_ipa_enabled(&c->options.adp, f)
++	qeth_is_adp_supported(&c->options.adp, f)
+ #define qeth_is_supported(c, f) \
+ 	qeth_is_ipa_supported(&c->options.ipa4, f)
+ #define qeth_is_enabled(c, f) \
+diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
+index 89f912213e62..8786805b9d1c 100644
+--- a/drivers/s390/net/qeth_core_main.c
++++ b/drivers/s390/net/qeth_core_main.c
+@@ -1325,7 +1325,7 @@ static void qeth_set_multiple_write_queues(struct qeth_card *card)
+ 	card->qdio.no_out_queues = 4;
+ }
+ 
+-static void qeth_update_from_chp_desc(struct qeth_card *card)
++static int qeth_update_from_chp_desc(struct qeth_card *card)
+ {
+ 	struct ccw_device *ccwdev;
+ 	struct channel_path_desc_fmt0 *chp_dsc;
+@@ -1335,7 +1335,7 @@ static void qeth_update_from_chp_desc(struct qeth_card *card)
+ 	ccwdev = card->data.ccwdev;
+ 	chp_dsc = ccw_device_get_chp_desc(ccwdev, 0);
+ 	if (!chp_dsc)
+-		goto out;
++		return -ENOMEM;
+ 
+ 	card->info.func_level = 0x4100 + chp_dsc->desc;
+ 	if (card->info.type == QETH_CARD_TYPE_IQD)
+@@ -1350,6 +1350,7 @@ out:
+ 	kfree(chp_dsc);
+ 	QETH_DBF_TEXT_(SETUP, 2, "nr:%x", card->qdio.no_out_queues);
+ 	QETH_DBF_TEXT_(SETUP, 2, "lvl:%02x", card->info.func_level);
++	return 0;
+ }
+ 
+ static void qeth_init_qdio_info(struct qeth_card *card)
+@@ -5086,7 +5087,9 @@ int qeth_core_hardsetup_card(struct qeth_card *card, bool *carrier_ok)
+ 
+ 	QETH_DBF_TEXT(SETUP, 2, "hrdsetup");
+ 	atomic_set(&card->force_alloc_skb, 0);
+-	qeth_update_from_chp_desc(card);
++	rc = qeth_update_from_chp_desc(card);
++	if (rc)
++		return rc;
+ retry:
+ 	if (retries < 3)
+ 		QETH_DBF_MESSAGE(2, "Retrying to do IDX activates on device %x.\n",
+@@ -5755,7 +5758,9 @@ static int qeth_core_probe_device(struct ccwgroup_device *gdev)
+ 	}
+ 
+ 	qeth_setup_card(card);
+-	qeth_update_from_chp_desc(card);
++	rc = qeth_update_from_chp_desc(card);
++	if (rc)
++		goto err_chp_desc;
+ 
+ 	card->dev = qeth_alloc_netdev(card);
+ 	if (!card->dev) {
+@@ -5790,6 +5795,7 @@ err_disc:
+ 	qeth_core_free_discipline(card);
+ err_load:
+ 	free_netdev(card->dev);
++err_chp_desc:
+ err_card:
+ 	qeth_core_free_card(card);
+ err_dev:
+diff --git a/drivers/scsi/libsas/sas_expander.c b/drivers/scsi/libsas/sas_expander.c
+index f21c93bbb35c..a761f5019b07 100644
+--- a/drivers/scsi/libsas/sas_expander.c
++++ b/drivers/scsi/libsas/sas_expander.c
+@@ -2024,6 +2024,11 @@ static int sas_rediscover_dev(struct domain_device *dev, int phy_id, bool last)
+ 	if ((SAS_ADDR(sas_addr) == 0) || (res == -ECOMM)) {
+ 		phy->phy_state = PHY_EMPTY;
+ 		sas_unregister_devs_sas_addr(dev, phy_id, last);
++		/*
++		 * Even though the PHY is empty, for convenience we discover
++		 * the PHY to update the PHY info, like negotiated linkrate.
++		 */
++		sas_ex_phy_discover(dev, phy_id);
+ 		return res;
+ 	} else if (SAS_ADDR(sas_addr) == SAS_ADDR(phy->attached_sas_addr) &&
+ 		   dev_type_flutter(type, phy->attached_dev_type)) {
+diff --git a/drivers/scsi/lpfc/lpfc_ct.c b/drivers/scsi/lpfc/lpfc_ct.c
+index 221f8fd87d24..b385b8ece343 100644
+--- a/drivers/scsi/lpfc/lpfc_ct.c
++++ b/drivers/scsi/lpfc/lpfc_ct.c
+@@ -2005,8 +2005,11 @@ lpfc_fdmi_hba_attr_manufacturer(struct lpfc_vport *vport,
+ 	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
+ 	memset(ae, 0, 256);
+ 
++	/* This string MUST be consistent with other FC platforms
++	 * supported by Broadcom.
++	 */
+ 	strncpy(ae->un.AttrString,
+-		"Broadcom Inc.",
++		"Emulex Corporation",
+ 		       sizeof(ae->un.AttrString));
+ 	len = strnlen(ae->un.AttrString,
+ 			  sizeof(ae->un.AttrString));
+@@ -2360,10 +2363,11 @@ lpfc_fdmi_port_attr_fc4type(struct lpfc_vport *vport,
+ 	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
+ 	memset(ae, 0, 32);
+ 
+-	ae->un.AttrTypes[3] = 0x02; /* Type 1 - ELS */
+-	ae->un.AttrTypes[2] = 0x01; /* Type 8 - FCP */
+-	ae->un.AttrTypes[6] = 0x01; /* Type 40 - NVME */
+-	ae->un.AttrTypes[7] = 0x01; /* Type 32 - CT */
++	ae->un.AttrTypes[3] = 0x02; /* Type 0x1 - ELS */
++	ae->un.AttrTypes[2] = 0x01; /* Type 0x8 - FCP */
++	if (vport->nvmei_support || vport->phba->nvmet_support)
++		ae->un.AttrTypes[6] = 0x01; /* Type 0x28 - NVME */
++	ae->un.AttrTypes[7] = 0x01; /* Type 0x20 - CT */
+ 	size = FOURBYTES + 32;
+ 	ad->AttrLen = cpu_to_be16(size);
+ 	ad->AttrType = cpu_to_be16(RPRT_SUPPORTED_FC4_TYPES);
+@@ -2673,9 +2677,11 @@ lpfc_fdmi_port_attr_active_fc4type(struct lpfc_vport *vport,
+ 	ae = (struct lpfc_fdmi_attr_entry *)&ad->AttrValue;
+ 	memset(ae, 0, 32);
+ 
+-	ae->un.AttrTypes[3] = 0x02; /* Type 1 - ELS */
+-	ae->un.AttrTypes[2] = 0x01; /* Type 8 - FCP */
+-	ae->un.AttrTypes[7] = 0x01; /* Type 32 - CT */
++	ae->un.AttrTypes[3] = 0x02; /* Type 0x1 - ELS */
++	ae->un.AttrTypes[2] = 0x01; /* Type 0x8 - FCP */
++	if (vport->phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME)
++		ae->un.AttrTypes[6] = 0x1; /* Type 0x28 - NVME */
++	ae->un.AttrTypes[7] = 0x01; /* Type 0x20 - CT */
+ 	size = FOURBYTES + 32;
+ 	ad->AttrLen = cpu_to_be16(size);
+ 	ad->AttrType = cpu_to_be16(RPRT_ACTIVE_FC4_TYPES);
+diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
+index b183b882d506..8d553cfb85aa 100644
+--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
++++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
+@@ -935,7 +935,11 @@ lpfc_linkdown(struct lpfc_hba *phba)
+ 		}
+ 	}
+ 	lpfc_destroy_vport_work_array(phba, vports);
+-	/* Clean up any firmware default rpi's */
++
++	/* Clean up any SLI3 firmware default rpi's */
++	if (phba->sli_rev > LPFC_SLI_REV3)
++		goto skip_unreg_did;
++
+ 	mb = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+ 	if (mb) {
+ 		lpfc_unreg_did(phba, 0xffff, LPFC_UNREG_ALL_DFLT_RPIS, mb);
+@@ -947,6 +951,7 @@ lpfc_linkdown(struct lpfc_hba *phba)
+ 		}
+ 	}
+ 
++ skip_unreg_did:
+ 	/* Setup myDID for link up if we are in pt2pt mode */
+ 	if (phba->pport->fc_flag & FC_PT2PT) {
+ 		mb = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+@@ -4874,6 +4879,10 @@ lpfc_unreg_rpi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
+ 					 * accept PLOGIs after unreg_rpi_cmpl
+ 					 */
+ 					acc_plogi = 0;
++				} else if (vport->load_flag & FC_UNLOADING) {
++					mbox->ctx_ndlp = NULL;
++					mbox->mbox_cmpl =
++						lpfc_sli_def_mbox_cmpl;
+ 				} else {
+ 					mbox->ctx_ndlp = ndlp;
+ 					mbox->mbox_cmpl =
+@@ -4985,6 +4994,10 @@ lpfc_unreg_default_rpis(struct lpfc_vport *vport)
+ 	LPFC_MBOXQ_t     *mbox;
+ 	int rc;
+ 
++	/* Unreg DID is an SLI3 operation. */
++	if (phba->sli_rev > LPFC_SLI_REV3)
++		return;
++
+ 	mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+ 	if (mbox) {
+ 		lpfc_unreg_did(phba, vport->vpi, LPFC_UNREG_ALL_DFLT_RPIS,
+diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
+index 8c9f79042228..56df8b510186 100644
+--- a/drivers/scsi/lpfc/lpfc_nvme.c
++++ b/drivers/scsi/lpfc/lpfc_nvme.c
+@@ -2471,15 +2471,15 @@ lpfc_nvme_create_localport(struct lpfc_vport *vport)
+ 	if (!cstat)
+ 		return -ENOMEM;
+ 
++	if (!IS_ENABLED(CONFIG_NVME_FC))
++		return ret;
++
+ 	/* localport is allocated from the stack, but the registration
+ 	 * call allocates heap memory as well as the private area.
+ 	 */
+-#if (IS_ENABLED(CONFIG_NVME_FC))
++
+ 	ret = nvme_fc_register_localport(&nfcp_info, &lpfc_nvme_template,
+ 					 &vport->phba->pcidev->dev, &localport);
+-#else
+-	ret = -ENOMEM;
+-#endif
+ 	if (!ret) {
+ 		lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME | LOG_NVME_DISC,
+ 				 "6005 Successfully registered local "
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index 2242e9b3ca12..d3a942971d81 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -2518,8 +2518,8 @@ lpfc_sli_def_mbox_cmpl(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
+ 			} else {
+ 				ndlp->nlp_flag &= ~NLP_UNREG_INP;
+ 			}
++			pmb->ctx_ndlp = NULL;
+ 		}
+-		pmb->ctx_ndlp = NULL;
+ 	}
+ 
+ 	/* Check security permission status on INIT_LINK mailbox command */
+diff --git a/drivers/scsi/qedf/qedf_io.c b/drivers/scsi/qedf/qedf_io.c
+index 6bbc38b1b465..a17c13846d1e 100644
+--- a/drivers/scsi/qedf/qedf_io.c
++++ b/drivers/scsi/qedf/qedf_io.c
+@@ -902,6 +902,7 @@ int qedf_post_io_req(struct qedf_rport *fcport, struct qedf_ioreq *io_req)
+ 	if (!test_bit(QEDF_RPORT_SESSION_READY, &fcport->flags)) {
+ 		QEDF_ERR(&(qedf->dbg_ctx), "Session not offloaded yet.\n");
+ 		kref_put(&io_req->refcount, qedf_release_cmd);
++		return -EINVAL;
+ 	}
+ 
+ 	/* Obtain free SQE */
+diff --git a/drivers/scsi/qedi/qedi_iscsi.c b/drivers/scsi/qedi/qedi_iscsi.c
+index 6d6d6013e35b..bf371e7b957d 100644
+--- a/drivers/scsi/qedi/qedi_iscsi.c
++++ b/drivers/scsi/qedi/qedi_iscsi.c
+@@ -1000,6 +1000,9 @@ static void qedi_ep_disconnect(struct iscsi_endpoint *ep)
+ 	qedi_ep = ep->dd_data;
+ 	qedi = qedi_ep->qedi;
+ 
++	if (qedi_ep->state == EP_STATE_OFLDCONN_START)
++		goto ep_exit_recover;
++
+ 	flush_work(&qedi_ep->offload_work);
+ 
+ 	if (qedi_ep->conn) {
+diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
+index 1a20e5d8f057..51df171b32ed 100644
+--- a/drivers/scsi/qla2xxx/qla_isr.c
++++ b/drivers/scsi/qla2xxx/qla_isr.c
+@@ -3454,7 +3454,7 @@ qla24xx_enable_msix(struct qla_hw_data *ha, struct rsp_que *rsp)
+ 		ql_log(ql_log_fatal, vha, 0x00c8,
+ 		    "Failed to allocate memory for ha->msix_entries.\n");
+ 		ret = -ENOMEM;
+-		goto msix_out;
++		goto free_irqs;
+ 	}
+ 	ha->flags.msix_enabled = 1;
+ 
+@@ -3537,6 +3537,10 @@ msix_register_fail:
+ 
+ msix_out:
+ 	return ret;
++
++free_irqs:
++	pci_free_irq_vectors(ha->pdev);
++	goto msix_out;
+ }
+ 
+ int
+diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
+index d4ac18573d81..4758cd687718 100644
+--- a/drivers/scsi/qla2xxx/qla_target.c
++++ b/drivers/scsi/qla2xxx/qla_target.c
+@@ -680,7 +680,6 @@ done:
+ void qla24xx_do_nack_work(struct scsi_qla_host *vha, struct qla_work_evt *e)
+ {
+ 	fc_port_t *t;
+-	unsigned long flags;
+ 
+ 	switch (e->u.nack.type) {
+ 	case SRB_NACK_PRLI:
+@@ -690,10 +689,8 @@ void qla24xx_do_nack_work(struct scsi_qla_host *vha, struct qla_work_evt *e)
+ 		if (t) {
+ 			ql_log(ql_log_info, vha, 0xd034,
+ 			    "%s create sess success %p", __func__, t);
+-			spin_lock_irqsave(&vha->hw->tgt.sess_lock, flags);
+ 			/* create sess has an extra kref */
+ 			vha->hw->tgt.tgt_ops->put_sess(e->u.nack.fcport);
+-			spin_unlock_irqrestore(&vha->hw->tgt.sess_lock, flags);
+ 		}
+ 		break;
+ 	}
+@@ -705,9 +702,6 @@ void qla24xx_delete_sess_fn(struct work_struct *work)
+ {
+ 	fc_port_t *fcport = container_of(work, struct fc_port, del_work);
+ 	struct qla_hw_data *ha = fcport->vha->hw;
+-	unsigned long flags;
+-
+-	spin_lock_irqsave(&ha->tgt.sess_lock, flags);
+ 
+ 	if (fcport->se_sess) {
+ 		ha->tgt.tgt_ops->shutdown_sess(fcport);
+@@ -715,7 +709,6 @@ void qla24xx_delete_sess_fn(struct work_struct *work)
+ 	} else {
+ 		qlt_unreg_sess(fcport);
+ 	}
+-	spin_unlock_irqrestore(&ha->tgt.sess_lock, flags);
+ }
+ 
+ /*
+@@ -784,8 +777,9 @@ void qlt_fc_port_added(struct scsi_qla_host *vha, fc_port_t *fcport)
+ 		    fcport->port_name, sess->loop_id);
+ 		sess->local = 0;
+ 	}
+-	ha->tgt.tgt_ops->put_sess(sess);
+ 	spin_unlock_irqrestore(&ha->tgt.sess_lock, flags);
++
++	ha->tgt.tgt_ops->put_sess(sess);
+ }
+ 
+ /*
+@@ -4242,9 +4236,7 @@ static void __qlt_do_work(struct qla_tgt_cmd *cmd)
+ 	/*
+ 	 * Drop extra session reference from qla_tgt_handle_cmd_for_atio*(
+ 	 */
+-	spin_lock_irqsave(&ha->tgt.sess_lock, flags);
+ 	ha->tgt.tgt_ops->put_sess(sess);
+-	spin_unlock_irqrestore(&ha->tgt.sess_lock, flags);
+ 	return;
+ 
+ out_term:
+@@ -4261,9 +4253,7 @@ out_term:
+ 	target_free_tag(sess->se_sess, &cmd->se_cmd);
+ 	spin_unlock_irqrestore(qpair->qp_lock_ptr, flags);
+ 
+-	spin_lock_irqsave(&ha->tgt.sess_lock, flags);
+ 	ha->tgt.tgt_ops->put_sess(sess);
+-	spin_unlock_irqrestore(&ha->tgt.sess_lock, flags);
+ }
+ 
+ static void qlt_do_work(struct work_struct *work)
+@@ -4472,9 +4462,7 @@ static int qlt_handle_cmd_for_atio(struct scsi_qla_host *vha,
+ 	if (!cmd) {
+ 		ql_dbg(ql_dbg_io, vha, 0x3062,
+ 		    "qla_target(%d): Allocation of cmd failed\n", vha->vp_idx);
+-		spin_lock_irqsave(&ha->tgt.sess_lock, flags);
+ 		ha->tgt.tgt_ops->put_sess(sess);
+-		spin_unlock_irqrestore(&ha->tgt.sess_lock, flags);
+ 		return -EBUSY;
+ 	}
+ 
+@@ -6318,17 +6306,19 @@ static void qlt_abort_work(struct qla_tgt *tgt,
+ 	}
+ 
+ 	rc = __qlt_24xx_handle_abts(vha, &prm->abts, sess);
+-	ha->tgt.tgt_ops->put_sess(sess);
+ 	spin_unlock_irqrestore(&ha->tgt.sess_lock, flags2);
+ 
++	ha->tgt.tgt_ops->put_sess(sess);
++
+ 	if (rc != 0)
+ 		goto out_term;
+ 	return;
+ 
+ out_term2:
++	spin_unlock_irqrestore(&ha->tgt.sess_lock, flags2);
++
+ 	if (sess)
+ 		ha->tgt.tgt_ops->put_sess(sess);
+-	spin_unlock_irqrestore(&ha->tgt.sess_lock, flags2);
+ 
+ out_term:
+ 	spin_lock_irqsave(&ha->hardware_lock, flags);
+@@ -6388,9 +6378,10 @@ static void qlt_tmr_work(struct qla_tgt *tgt,
+ 	    scsilun_to_int((struct scsi_lun *)&a->u.isp24.fcp_cmnd.lun);
+ 
+ 	rc = qlt_issue_task_mgmt(sess, unpacked_lun, fn, iocb, 0);
+-	ha->tgt.tgt_ops->put_sess(sess);
+ 	spin_unlock_irqrestore(&ha->tgt.sess_lock, flags);
+ 
++	ha->tgt.tgt_ops->put_sess(sess);
++
+ 	if (rc != 0)
+ 		goto out_term;
+ 	return;
+diff --git a/drivers/scsi/qla2xxx/tcm_qla2xxx.c b/drivers/scsi/qla2xxx/tcm_qla2xxx.c
+index 283e6b80abb5..5e3bb49687df 100644
+--- a/drivers/scsi/qla2xxx/tcm_qla2xxx.c
++++ b/drivers/scsi/qla2xxx/tcm_qla2xxx.c
+@@ -359,7 +359,6 @@ static void tcm_qla2xxx_put_sess(struct fc_port *sess)
+ 	if (!sess)
+ 		return;
+ 
+-	assert_spin_locked(&sess->vha->hw->tgt.sess_lock);
+ 	kref_put(&sess->sess_kref, tcm_qla2xxx_release_session);
+ }
+ 
+@@ -374,8 +373,9 @@ static void tcm_qla2xxx_close_session(struct se_session *se_sess)
+ 
+ 	spin_lock_irqsave(&vha->hw->tgt.sess_lock, flags);
+ 	target_sess_cmd_list_set_waiting(se_sess);
+-	tcm_qla2xxx_put_sess(sess);
+ 	spin_unlock_irqrestore(&vha->hw->tgt.sess_lock, flags);
++
++	tcm_qla2xxx_put_sess(sess);
+ }
+ 
+ static u32 tcm_qla2xxx_sess_get_index(struct se_session *se_sess)
+@@ -399,6 +399,8 @@ static int tcm_qla2xxx_write_pending(struct se_cmd *se_cmd)
+ 			cmd->se_cmd.transport_state,
+ 			cmd->se_cmd.t_state,
+ 			cmd->se_cmd.se_cmd_flags);
++		transport_generic_request_failure(&cmd->se_cmd,
++			TCM_CHECK_CONDITION_ABORT_CMD);
+ 		return 0;
+ 	}
+ 	cmd->trc_flags |= TRC_XFR_RDY;
+@@ -858,7 +860,6 @@ static void tcm_qla2xxx_clear_nacl_from_fcport_map(struct fc_port *sess)
+ 
+ static void tcm_qla2xxx_shutdown_sess(struct fc_port *sess)
+ {
+-	assert_spin_locked(&sess->vha->hw->tgt.sess_lock);
+ 	target_sess_cmd_list_set_waiting(sess->se_sess);
+ }
+ 
+diff --git a/drivers/scsi/qla4xxx/ql4_os.c b/drivers/scsi/qla4xxx/ql4_os.c
+index 80289c885c07..9edec8e27b7c 100644
+--- a/drivers/scsi/qla4xxx/ql4_os.c
++++ b/drivers/scsi/qla4xxx/ql4_os.c
+@@ -5930,7 +5930,7 @@ static int get_fw_boot_info(struct scsi_qla_host *ha, uint16_t ddb_index[])
+ 		val = rd_nvram_byte(ha, sec_addr);
+ 		if (val & BIT_7)
+ 			ddb_index[1] = (val & 0x7f);
+-
++		goto exit_boot_info;
+ 	} else if (is_qla80XX(ha)) {
+ 		buf = dma_alloc_coherent(&ha->pdev->dev, size,
+ 					 &buf_dma, GFP_KERNEL);
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index d64553c0a051..ef43f06bc7a7 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -2586,7 +2586,6 @@ sd_read_write_protect_flag(struct scsi_disk *sdkp, unsigned char *buffer)
+ 	int res;
+ 	struct scsi_device *sdp = sdkp->device;
+ 	struct scsi_mode_data data;
+-	int disk_ro = get_disk_ro(sdkp->disk);
+ 	int old_wp = sdkp->write_prot;
+ 
+ 	set_disk_ro(sdkp->disk, 0);
+@@ -2627,7 +2626,7 @@ sd_read_write_protect_flag(struct scsi_disk *sdkp, unsigned char *buffer)
+ 			  "Test WP failed, assume Write Enabled\n");
+ 	} else {
+ 		sdkp->write_prot = ((data.device_specific & 0x80) != 0);
+-		set_disk_ro(sdkp->disk, sdkp->write_prot || disk_ro);
++		set_disk_ro(sdkp->disk, sdkp->write_prot);
+ 		if (sdkp->first_scan || old_wp != sdkp->write_prot) {
+ 			sd_printk(KERN_NOTICE, sdkp, "Write Protect is %s\n",
+ 				  sdkp->write_prot ? "on" : "off");
+diff --git a/drivers/scsi/ufs/ufs-hisi.c b/drivers/scsi/ufs/ufs-hisi.c
+index 452e19f8fb47..c2cee73a8560 100644
+--- a/drivers/scsi/ufs/ufs-hisi.c
++++ b/drivers/scsi/ufs/ufs-hisi.c
+@@ -544,6 +544,10 @@ static int ufs_hisi_init_common(struct ufs_hba *hba)
+ 	ufshcd_set_variant(hba, host);
+ 
+ 	host->rst  = devm_reset_control_get(dev, "rst");
++	if (IS_ERR(host->rst)) {
++		dev_err(dev, "%s: failed to get reset control\n", __func__);
++		return PTR_ERR(host->rst);
++	}
+ 
+ 	ufs_hisi_set_pm_lvl(hba);
+ 
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 2ddf24466a62..c02e70428711 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -6286,19 +6286,19 @@ static u32 ufshcd_find_max_sup_active_icc_level(struct ufs_hba *hba,
+ 		goto out;
+ 	}
+ 
+-	if (hba->vreg_info.vcc)
++	if (hba->vreg_info.vcc && hba->vreg_info.vcc->max_uA)
+ 		icc_level = ufshcd_get_max_icc_level(
+ 				hba->vreg_info.vcc->max_uA,
+ 				POWER_DESC_MAX_ACTV_ICC_LVLS - 1,
+ 				&desc_buf[PWR_DESC_ACTIVE_LVLS_VCC_0]);
+ 
+-	if (hba->vreg_info.vccq)
++	if (hba->vreg_info.vccq && hba->vreg_info.vccq->max_uA)
+ 		icc_level = ufshcd_get_max_icc_level(
+ 				hba->vreg_info.vccq->max_uA,
+ 				icc_level,
+ 				&desc_buf[PWR_DESC_ACTIVE_LVLS_VCCQ_0]);
+ 
+-	if (hba->vreg_info.vccq2)
++	if (hba->vreg_info.vccq2 && hba->vreg_info.vccq2->max_uA)
+ 		icc_level = ufshcd_get_max_icc_level(
+ 				hba->vreg_info.vccq2->max_uA,
+ 				icc_level,
+@@ -7001,6 +7001,15 @@ static int ufshcd_config_vreg_load(struct device *dev, struct ufs_vreg *vreg,
+ 	if (!vreg)
+ 		return 0;
+ 
++	/*
++	 * "set_load" operation shall be required on those regulators
++	 * which specifically configured current limitation. Otherwise
++	 * zero max_uA may cause unexpected behavior when regulator is
++	 * enabled or set as high power mode.
++	 */
++	if (!vreg->max_uA)
++		return 0;
++
+ 	ret = regulator_set_load(vreg->reg, ua);
+ 	if (ret < 0) {
+ 		dev_err(dev, "%s: %s set load (ua=%d) failed, err=%d\n",
+@@ -7047,12 +7056,15 @@ static int ufshcd_config_vreg(struct device *dev,
+ 	name = vreg->name;
+ 
+ 	if (regulator_count_voltages(reg) > 0) {
+-		min_uV = on ? vreg->min_uV : 0;
+-		ret = regulator_set_voltage(reg, min_uV, vreg->max_uV);
+-		if (ret) {
+-			dev_err(dev, "%s: %s set voltage failed, err=%d\n",
++		if (vreg->min_uV && vreg->max_uV) {
++			min_uV = on ? vreg->min_uV : 0;
++			ret = regulator_set_voltage(reg, min_uV, vreg->max_uV);
++			if (ret) {
++				dev_err(dev,
++					"%s: %s set voltage failed, err=%d\n",
+ 					__func__, name, ret);
+-			goto out;
++				goto out;
++			}
+ 		}
+ 
+ 		uA_load = on ? vreg->max_uA : 0;
+diff --git a/drivers/slimbus/qcom-ngd-ctrl.c b/drivers/slimbus/qcom-ngd-ctrl.c
+index 71f094c9ec68..f3585777324c 100644
+--- a/drivers/slimbus/qcom-ngd-ctrl.c
++++ b/drivers/slimbus/qcom-ngd-ctrl.c
+@@ -1342,6 +1342,10 @@ static int of_qcom_slim_ngd_register(struct device *parent,
+ 			return -ENOMEM;
+ 
+ 		ngd->pdev = platform_device_alloc(QCOM_SLIM_NGD_DRV_NAME, id);
++		if (!ngd->pdev) {
++			kfree(ngd);
++			return -ENOMEM;
++		}
+ 		ngd->id = id;
+ 		ngd->pdev->dev.parent = parent;
+ 		ngd->pdev->driver_override = QCOM_SLIM_NGD_DRV_NAME;
+diff --git a/drivers/spi/atmel-quadspi.c b/drivers/spi/atmel-quadspi.c
+index ddc712410812..ec6e9970d775 100644
+--- a/drivers/spi/atmel-quadspi.c
++++ b/drivers/spi/atmel-quadspi.c
+@@ -506,7 +506,8 @@ static int atmel_qspi_remove(struct platform_device *pdev)
+ 
+ static int __maybe_unused atmel_qspi_suspend(struct device *dev)
+ {
+-	struct atmel_qspi *aq = dev_get_drvdata(dev);
++	struct spi_controller *ctrl = dev_get_drvdata(dev);
++	struct atmel_qspi *aq = spi_controller_get_devdata(ctrl);
+ 
+ 	clk_disable_unprepare(aq->clk);
+ 
+@@ -515,7 +516,8 @@ static int __maybe_unused atmel_qspi_suspend(struct device *dev)
+ 
+ static int __maybe_unused atmel_qspi_resume(struct device *dev)
+ {
+-	struct atmel_qspi *aq = dev_get_drvdata(dev);
++	struct spi_controller *ctrl = dev_get_drvdata(dev);
++	struct atmel_qspi *aq = spi_controller_get_devdata(ctrl);
+ 
+ 	clk_prepare_enable(aq->clk);
+ 
+diff --git a/drivers/spi/spi-imx.c b/drivers/spi/spi-imx.c
+index 6ec647bbba77..a81ae29aa68a 100644
+--- a/drivers/spi/spi-imx.c
++++ b/drivers/spi/spi-imx.c
+@@ -1494,7 +1494,7 @@ static int spi_imx_transfer(struct spi_device *spi,
+ 
+ 	/* flush rxfifo before transfer */
+ 	while (spi_imx->devtype_data->rx_available(spi_imx))
+-		spi_imx->rx(spi_imx);
++		readl(spi_imx->base + MXC_CSPIRXDATA);
+ 
+ 	if (spi_imx->slave_mode)
+ 		return spi_imx_pio_transfer_slave(spi, transfer);
+diff --git a/drivers/spi/spi-pxa2xx.c b/drivers/spi/spi-pxa2xx.c
+index 3e82eaad0f2d..41aadb41a20b 100644
+--- a/drivers/spi/spi-pxa2xx.c
++++ b/drivers/spi/spi-pxa2xx.c
+@@ -884,10 +884,14 @@ static unsigned int ssp_get_clk_div(struct driver_data *drv_data, int rate)
+ 
+ 	rate = min_t(int, ssp_clk, rate);
+ 
++	/*
++	 * Calculate the divisor for the SCR (Serial Clock Rate), avoiding
++	 * that the SSP transmission rate can be greater than the device rate
++	 */
+ 	if (ssp->type == PXA25x_SSP || ssp->type == CE4100_SSP)
+-		return (ssp_clk / (2 * rate) - 1) & 0xff;
++		return (DIV_ROUND_UP(ssp_clk, 2 * rate) - 1) & 0xff;
+ 	else
+-		return (ssp_clk / rate - 1) & 0xfff;
++		return (DIV_ROUND_UP(ssp_clk, rate) - 1)  & 0xfff;
+ }
+ 
+ static unsigned int pxa2xx_ssp_get_clk_div(struct driver_data *drv_data,
+diff --git a/drivers/spi/spi-rspi.c b/drivers/spi/spi-rspi.c
+index a4ef641b5227..b0824df3e04b 100644
+--- a/drivers/spi/spi-rspi.c
++++ b/drivers/spi/spi-rspi.c
+@@ -271,7 +271,8 @@ static int rspi_set_config_register(struct rspi_data *rspi, int access_size)
+ 	/* Sets parity, interrupt mask */
+ 	rspi_write8(rspi, 0x00, RSPI_SPCR2);
+ 
+-	/* Sets SPCMD */
++	/* Resets sequencer */
++	rspi_write8(rspi, 0, RSPI_SPSCR);
+ 	rspi->spcmd |= SPCMD_SPB_8_TO_16(access_size);
+ 	rspi_write16(rspi, rspi->spcmd, RSPI_SPCMD0);
+ 
+@@ -315,7 +316,8 @@ static int rspi_rz_set_config_register(struct rspi_data *rspi, int access_size)
+ 	rspi_write8(rspi, 0x00, RSPI_SSLND);
+ 	rspi_write8(rspi, 0x00, RSPI_SPND);
+ 
+-	/* Sets SPCMD */
++	/* Resets sequencer */
++	rspi_write8(rspi, 0, RSPI_SPSCR);
+ 	rspi->spcmd |= SPCMD_SPB_8_TO_16(access_size);
+ 	rspi_write16(rspi, rspi->spcmd, RSPI_SPCMD0);
+ 
+@@ -366,7 +368,8 @@ static int qspi_set_config_register(struct rspi_data *rspi, int access_size)
+ 	/* Sets buffer to allow normal operation */
+ 	rspi_write8(rspi, 0x00, QSPI_SPBFCR);
+ 
+-	/* Sets SPCMD */
++	/* Resets sequencer */
++	rspi_write8(rspi, 0, RSPI_SPSCR);
+ 	rspi_write16(rspi, rspi->spcmd, RSPI_SPCMD0);
+ 
+ 	/* Sets RSPI mode */
+diff --git a/drivers/spi/spi-stm32-qspi.c b/drivers/spi/spi-stm32-qspi.c
+index 3b2a9a6b990d..0b9a8bddb939 100644
+--- a/drivers/spi/spi-stm32-qspi.c
++++ b/drivers/spi/spi-stm32-qspi.c
+@@ -93,6 +93,7 @@ struct stm32_qspi_flash {
+ 
+ struct stm32_qspi {
+ 	struct device *dev;
++	struct spi_controller *ctrl;
+ 	void __iomem *io_base;
+ 	void __iomem *mm_base;
+ 	resource_size_t mm_size;
+@@ -397,6 +398,7 @@ static void stm32_qspi_release(struct stm32_qspi *qspi)
+ 	writel_relaxed(0, qspi->io_base + QSPI_CR);
+ 	mutex_destroy(&qspi->lock);
+ 	clk_disable_unprepare(qspi->clk);
++	spi_master_put(qspi->ctrl);
+ }
+ 
+ static int stm32_qspi_probe(struct platform_device *pdev)
+@@ -413,43 +415,54 @@ static int stm32_qspi_probe(struct platform_device *pdev)
+ 		return -ENOMEM;
+ 
+ 	qspi = spi_controller_get_devdata(ctrl);
++	qspi->ctrl = ctrl;
+ 
+ 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "qspi");
+ 	qspi->io_base = devm_ioremap_resource(dev, res);
+-	if (IS_ERR(qspi->io_base))
+-		return PTR_ERR(qspi->io_base);
++	if (IS_ERR(qspi->io_base)) {
++		ret = PTR_ERR(qspi->io_base);
++		goto err;
++	}
+ 
+ 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "qspi_mm");
+ 	qspi->mm_base = devm_ioremap_resource(dev, res);
+-	if (IS_ERR(qspi->mm_base))
+-		return PTR_ERR(qspi->mm_base);
++	if (IS_ERR(qspi->mm_base)) {
++		ret = PTR_ERR(qspi->mm_base);
++		goto err;
++	}
+ 
+ 	qspi->mm_size = resource_size(res);
+-	if (qspi->mm_size > STM32_QSPI_MAX_MMAP_SZ)
+-		return -EINVAL;
++	if (qspi->mm_size > STM32_QSPI_MAX_MMAP_SZ) {
++		ret = -EINVAL;
++		goto err;
++	}
+ 
+ 	irq = platform_get_irq(pdev, 0);
+ 	ret = devm_request_irq(dev, irq, stm32_qspi_irq, 0,
+ 			       dev_name(dev), qspi);
+ 	if (ret) {
+ 		dev_err(dev, "failed to request irq\n");
+-		return ret;
++		goto err;
+ 	}
+ 
+ 	init_completion(&qspi->data_completion);
+ 
+ 	qspi->clk = devm_clk_get(dev, NULL);
+-	if (IS_ERR(qspi->clk))
+-		return PTR_ERR(qspi->clk);
++	if (IS_ERR(qspi->clk)) {
++		ret = PTR_ERR(qspi->clk);
++		goto err;
++	}
+ 
+ 	qspi->clk_rate = clk_get_rate(qspi->clk);
+-	if (!qspi->clk_rate)
+-		return -EINVAL;
++	if (!qspi->clk_rate) {
++		ret = -EINVAL;
++		goto err;
++	}
+ 
+ 	ret = clk_prepare_enable(qspi->clk);
+ 	if (ret) {
+ 		dev_err(dev, "can not enable the clock\n");
+-		return ret;
++		goto err;
+ 	}
+ 
+ 	rstc = devm_reset_control_get_exclusive(dev, NULL);
+@@ -472,14 +485,11 @@ static int stm32_qspi_probe(struct platform_device *pdev)
+ 	ctrl->dev.of_node = dev->of_node;
+ 
+ 	ret = devm_spi_register_master(dev, ctrl);
+-	if (ret)
+-		goto err_spi_register;
+-
+-	return 0;
++	if (!ret)
++		return 0;
+ 
+-err_spi_register:
++err:
+ 	stm32_qspi_release(qspi);
+-
+ 	return ret;
+ }
+ 
+diff --git a/drivers/spi/spi-tegra114.c b/drivers/spi/spi-tegra114.c
+index a76acedd7e2f..a1888dc6a938 100644
+--- a/drivers/spi/spi-tegra114.c
++++ b/drivers/spi/spi-tegra114.c
+@@ -1067,27 +1067,19 @@ static int tegra_spi_probe(struct platform_device *pdev)
+ 
+ 	spi_irq = platform_get_irq(pdev, 0);
+ 	tspi->irq = spi_irq;
+-	ret = request_threaded_irq(tspi->irq, tegra_spi_isr,
+-			tegra_spi_isr_thread, IRQF_ONESHOT,
+-			dev_name(&pdev->dev), tspi);
+-	if (ret < 0) {
+-		dev_err(&pdev->dev, "Failed to register ISR for IRQ %d\n",
+-					tspi->irq);
+-		goto exit_free_master;
+-	}
+ 
+ 	tspi->clk = devm_clk_get(&pdev->dev, "spi");
+ 	if (IS_ERR(tspi->clk)) {
+ 		dev_err(&pdev->dev, "can not get clock\n");
+ 		ret = PTR_ERR(tspi->clk);
+-		goto exit_free_irq;
++		goto exit_free_master;
+ 	}
+ 
+ 	tspi->rst = devm_reset_control_get_exclusive(&pdev->dev, "spi");
+ 	if (IS_ERR(tspi->rst)) {
+ 		dev_err(&pdev->dev, "can not get reset\n");
+ 		ret = PTR_ERR(tspi->rst);
+-		goto exit_free_irq;
++		goto exit_free_master;
+ 	}
+ 
+ 	tspi->max_buf_size = SPI_FIFO_DEPTH << 2;
+@@ -1095,7 +1087,7 @@ static int tegra_spi_probe(struct platform_device *pdev)
+ 
+ 	ret = tegra_spi_init_dma_param(tspi, true);
+ 	if (ret < 0)
+-		goto exit_free_irq;
++		goto exit_free_master;
+ 	ret = tegra_spi_init_dma_param(tspi, false);
+ 	if (ret < 0)
+ 		goto exit_rx_dma_free;
+@@ -1117,18 +1109,32 @@ static int tegra_spi_probe(struct platform_device *pdev)
+ 		dev_err(&pdev->dev, "pm runtime get failed, e = %d\n", ret);
+ 		goto exit_pm_disable;
+ 	}
++
++	reset_control_assert(tspi->rst);
++	udelay(2);
++	reset_control_deassert(tspi->rst);
+ 	tspi->def_command1_reg  = SPI_M_S;
+ 	tegra_spi_writel(tspi, tspi->def_command1_reg, SPI_COMMAND1);
+ 	pm_runtime_put(&pdev->dev);
++	ret = request_threaded_irq(tspi->irq, tegra_spi_isr,
++				   tegra_spi_isr_thread, IRQF_ONESHOT,
++				   dev_name(&pdev->dev), tspi);
++	if (ret < 0) {
++		dev_err(&pdev->dev, "Failed to register ISR for IRQ %d\n",
++			tspi->irq);
++		goto exit_pm_disable;
++	}
+ 
+ 	master->dev.of_node = pdev->dev.of_node;
+ 	ret = devm_spi_register_master(&pdev->dev, master);
+ 	if (ret < 0) {
+ 		dev_err(&pdev->dev, "can not register to master err %d\n", ret);
+-		goto exit_pm_disable;
++		goto exit_free_irq;
+ 	}
+ 	return ret;
+ 
++exit_free_irq:
++	free_irq(spi_irq, tspi);
+ exit_pm_disable:
+ 	pm_runtime_disable(&pdev->dev);
+ 	if (!pm_runtime_status_suspended(&pdev->dev))
+@@ -1136,8 +1142,6 @@ exit_pm_disable:
+ 	tegra_spi_deinit_dma_param(tspi, false);
+ exit_rx_dma_free:
+ 	tegra_spi_deinit_dma_param(tspi, true);
+-exit_free_irq:
+-	free_irq(spi_irq, tspi);
+ exit_free_master:
+ 	spi_master_put(master);
+ 	return ret;
+diff --git a/drivers/spi/spi-topcliff-pch.c b/drivers/spi/spi-topcliff-pch.c
+index 97d137591b18..4389ab80c23e 100644
+--- a/drivers/spi/spi-topcliff-pch.c
++++ b/drivers/spi/spi-topcliff-pch.c
+@@ -1294,18 +1294,27 @@ static void pch_free_dma_buf(struct pch_spi_board_data *board_dat,
+ 				  dma->rx_buf_virt, dma->rx_buf_dma);
+ }
+ 
+-static void pch_alloc_dma_buf(struct pch_spi_board_data *board_dat,
++static int pch_alloc_dma_buf(struct pch_spi_board_data *board_dat,
+ 			      struct pch_spi_data *data)
+ {
+ 	struct pch_spi_dma_ctrl *dma;
++	int ret;
+ 
+ 	dma = &data->dma;
++	ret = 0;
+ 	/* Get Consistent memory for Tx DMA */
+ 	dma->tx_buf_virt = dma_alloc_coherent(&board_dat->pdev->dev,
+ 				PCH_BUF_SIZE, &dma->tx_buf_dma, GFP_KERNEL);
++	if (!dma->tx_buf_virt)
++		ret = -ENOMEM;
++
+ 	/* Get Consistent memory for Rx DMA */
+ 	dma->rx_buf_virt = dma_alloc_coherent(&board_dat->pdev->dev,
+ 				PCH_BUF_SIZE, &dma->rx_buf_dma, GFP_KERNEL);
++	if (!dma->rx_buf_virt)
++		ret = -ENOMEM;
++
++	return ret;
+ }
+ 
+ static int pch_spi_pd_probe(struct platform_device *plat_dev)
+@@ -1382,7 +1391,9 @@ static int pch_spi_pd_probe(struct platform_device *plat_dev)
+ 
+ 	if (use_dma) {
+ 		dev_info(&plat_dev->dev, "Use DMA for data transfers\n");
+-		pch_alloc_dma_buf(board_dat, data);
++		ret = pch_alloc_dma_buf(board_dat, data);
++		if (ret)
++			goto err_spi_register_master;
+ 	}
+ 
+ 	ret = spi_register_master(master);
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index 9a7def7c3237..0632a32c1105 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -1024,6 +1024,8 @@ static int spi_map_msg(struct spi_controller *ctlr, struct spi_message *msg)
+ 		if (max_tx || max_rx) {
+ 			list_for_each_entry(xfer, &msg->transfers,
+ 					    transfer_list) {
++				if (!xfer->len)
++					continue;
+ 				if (!xfer->tx_buf)
+ 					xfer->tx_buf = ctlr->dummy_tx;
+ 				if (!xfer->rx_buf)
+diff --git a/drivers/ssb/bridge_pcmcia_80211.c b/drivers/ssb/bridge_pcmcia_80211.c
+index f51f150307df..ffa379efff83 100644
+--- a/drivers/ssb/bridge_pcmcia_80211.c
++++ b/drivers/ssb/bridge_pcmcia_80211.c
+@@ -113,16 +113,21 @@ static struct pcmcia_driver ssb_host_pcmcia_driver = {
+ 	.resume		= ssb_host_pcmcia_resume,
+ };
+ 
++static int pcmcia_init_failed;
++
+ /*
+  * These are not module init/exit functions!
+  * The module_pcmcia_driver() helper cannot be used here.
+  */
+ int ssb_host_pcmcia_init(void)
+ {
+-	return pcmcia_register_driver(&ssb_host_pcmcia_driver);
++	pcmcia_init_failed = pcmcia_register_driver(&ssb_host_pcmcia_driver);
++
++	return pcmcia_init_failed;
+ }
+ 
+ void ssb_host_pcmcia_exit(void)
+ {
+-	pcmcia_unregister_driver(&ssb_host_pcmcia_driver);
++	if (!pcmcia_init_failed)
++		pcmcia_unregister_driver(&ssb_host_pcmcia_driver);
+ }
+diff --git a/drivers/staging/media/davinci_vpfe/Kconfig b/drivers/staging/media/davinci_vpfe/Kconfig
+index aea449a8dbf8..76818cc48ddc 100644
+--- a/drivers/staging/media/davinci_vpfe/Kconfig
++++ b/drivers/staging/media/davinci_vpfe/Kconfig
+@@ -1,7 +1,7 @@
+ config VIDEO_DM365_VPFE
+ 	tristate "DM365 VPFE Media Controller Capture Driver"
+ 	depends on VIDEO_V4L2
+-	depends on (ARCH_DAVINCI_DM365 && !VIDEO_DM365_ISIF) || COMPILE_TEST
++	depends on (ARCH_DAVINCI_DM365 && !VIDEO_DM365_ISIF) || (COMPILE_TEST && !ARCH_OMAP1)
+ 	depends on VIDEO_V4L2_SUBDEV_API
+ 	depends on VIDEO_DAVINCI_VPBE_DISPLAY
+ 	select VIDEOBUF2_DMA_CONTIG
+diff --git a/drivers/staging/media/ipu3/ipu3.c b/drivers/staging/media/ipu3/ipu3.c
+index d521b3afb8b1..0b161888ec28 100644
+--- a/drivers/staging/media/ipu3/ipu3.c
++++ b/drivers/staging/media/ipu3/ipu3.c
+@@ -792,7 +792,7 @@ out:
+  * PCI rpm framework checks the existence of driver rpm callbacks.
+  * Place a dummy callback here to avoid rpm going into error state.
+  */
+-static int imgu_rpm_dummy_cb(struct device *dev)
++static __maybe_unused int imgu_rpm_dummy_cb(struct device *dev)
+ {
+ 	return 0;
+ }
+diff --git a/drivers/staging/media/sunxi/cedrus/cedrus.h b/drivers/staging/media/sunxi/cedrus/cedrus.h
+index 3acfdcf83691..726bef649ba6 100644
+--- a/drivers/staging/media/sunxi/cedrus/cedrus.h
++++ b/drivers/staging/media/sunxi/cedrus/cedrus.h
+@@ -28,6 +28,8 @@
+ 
+ #define CEDRUS_CAPABILITY_UNTILED	BIT(0)
+ 
++#define CEDRUS_QUIRK_NO_DMA_OFFSET	BIT(0)
++
+ enum cedrus_codec {
+ 	CEDRUS_CODEC_MPEG2,
+ 
+@@ -91,6 +93,7 @@ struct cedrus_dec_ops {
+ 
+ struct cedrus_variant {
+ 	unsigned int	capabilities;
++	unsigned int	quirks;
+ };
+ 
+ struct cedrus_dev {
+diff --git a/drivers/staging/media/sunxi/cedrus/cedrus_hw.c b/drivers/staging/media/sunxi/cedrus/cedrus_hw.c
+index 300339fee1bc..24a06a1260f0 100644
+--- a/drivers/staging/media/sunxi/cedrus/cedrus_hw.c
++++ b/drivers/staging/media/sunxi/cedrus/cedrus_hw.c
+@@ -177,7 +177,8 @@ int cedrus_hw_probe(struct cedrus_dev *dev)
+ 	 */
+ 
+ #ifdef PHYS_PFN_OFFSET
+-	dev->dev->dma_pfn_offset = PHYS_PFN_OFFSET;
++	if (!(variant->quirks & CEDRUS_QUIRK_NO_DMA_OFFSET))
++		dev->dev->dma_pfn_offset = PHYS_PFN_OFFSET;
+ #endif
+ 
+ 	ret = of_reserved_mem_device_init(dev->dev);
+diff --git a/drivers/staging/mt7621-mmc/sd.c b/drivers/staging/mt7621-mmc/sd.c
+index 4b26ec896a96..38f9ea02ee3a 100644
+--- a/drivers/staging/mt7621-mmc/sd.c
++++ b/drivers/staging/mt7621-mmc/sd.c
+@@ -468,7 +468,11 @@ static unsigned int msdc_command_start(struct msdc_host   *host,
+ 	host->cmd     = cmd;
+ 	host->cmd_rsp = resp;
+ 
+-	init_completion(&host->cmd_done);
++	// The completion should have been consumed by the previous command
++	// response handler, because the mmc requests should be serialized
++	if (completion_done(&host->cmd_done))
++		dev_err(mmc_dev(host->mmc),
++			"previous command was not handled\n");
+ 
+ 	sdr_set_bits(host->base + MSDC_INTEN, wints);
+ 	sdc_send_cmd(rawcmd, cmd->arg);
+@@ -490,7 +494,6 @@ static unsigned int msdc_command_resp(struct msdc_host   *host,
+ 		    MSDC_INT_ACMD19_DONE;
+ 
+ 	BUG_ON(in_interrupt());
+-	//init_completion(&host->cmd_done);
+ 	//sdr_set_bits(host->base + MSDC_INTEN, wints);
+ 
+ 	spin_unlock(&host->lock);
+@@ -593,8 +596,6 @@ static void msdc_dma_setup(struct msdc_host *host, struct msdc_dma *dma,
+ 	struct bd *bd;
+ 	u32 j;
+ 
+-	BUG_ON(sglen > MAX_BD_NUM); /* not support currently */
+-
+ 	gpd = dma->gpd;
+ 	bd  = dma->bd;
+ 
+@@ -674,7 +675,13 @@ static int msdc_do_request(struct mmc_host *mmc, struct mmc_request *mrq)
+ 		//msdc_clr_fifo(host);  /* no need */
+ 
+ 		msdc_dma_on();  /* enable DMA mode first!! */
+-		init_completion(&host->xfer_done);
++
++		// The completion should have been consumed by the previous
++		// xfer response handler, because the mmc requests should be
++		// serialized
++		if (completion_done(&host->cmd_done))
++			dev_err(mmc_dev(host->mmc),
++				"previous transfer was not handled\n");
+ 
+ 		/* start the command first*/
+ 		if (msdc_command_start(host, cmd, CMD_TIMEOUT) != 0)
+@@ -683,6 +690,13 @@ static int msdc_do_request(struct mmc_host *mmc, struct mmc_request *mrq)
+ 		data->sg_count = dma_map_sg(mmc_dev(mmc), data->sg,
+ 					    data->sg_len,
+ 					    mmc_get_dma_dir(data));
++
++		if (data->sg_count == 0) {
++			dev_err(mmc_dev(host->mmc), "failed to map DMA for transfer\n");
++			data->error = -ENOMEM;
++			goto done;
++		}
++
+ 		msdc_dma_setup(host, &host->dma, data->sg,
+ 			       data->sg_count);
+ 
+@@ -693,7 +707,6 @@ static int msdc_do_request(struct mmc_host *mmc, struct mmc_request *mrq)
+ 		/* for read, the data coming too fast, then CRC error
+ 		 *  start DMA no business with CRC.
+ 		 */
+-		//init_completion(&host->xfer_done);
+ 		msdc_dma_start(host);
+ 
+ 		spin_unlock(&host->lock);
+@@ -1688,6 +1701,8 @@ static int msdc_drv_probe(struct platform_device *pdev)
+ 	}
+ 	msdc_init_gpd_bd(host, &host->dma);
+ 
++	init_completion(&host->cmd_done);
++	init_completion(&host->xfer_done);
+ 	INIT_DELAYED_WORK(&host->card_delaywork, msdc_tasklet_card);
+ 	spin_lock_init(&host->lock);
+ 	msdc_init_hw(host);
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c
+index dd4898861b83..eb1e5dcb0d52 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c
+@@ -209,6 +209,9 @@ vchiq_platform_init_state(struct vchiq_state *state)
+ 	struct vchiq_2835_state *platform_state;
+ 
+ 	state->platform_state = kzalloc(sizeof(*platform_state), GFP_KERNEL);
++	if (!state->platform_state)
++		return VCHIQ_ERROR;
++
+ 	platform_state = (struct vchiq_2835_state *)state->platform_state;
+ 
+ 	platform_state->inited = 1;
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
+index 53f5a1cb4636..819813e742d8 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
+@@ -2239,6 +2239,8 @@ vchiq_init_state(struct vchiq_state *state, struct vchiq_slot_zero *slot_zero)
+ 	local->debug[DEBUG_ENTRIES] = DEBUG_MAX;
+ 
+ 	status = vchiq_platform_init_state(state);
++	if (status != VCHIQ_SUCCESS)
++		return VCHIQ_ERROR;
+ 
+ 	/*
+ 		bring up slot handler thread
+diff --git a/drivers/thunderbolt/icm.c b/drivers/thunderbolt/icm.c
+index e3fc920af682..8b7f9131e9d1 100644
+--- a/drivers/thunderbolt/icm.c
++++ b/drivers/thunderbolt/icm.c
+@@ -473,6 +473,11 @@ static void add_switch(struct tb_switch *parent_sw, u64 route,
+ 		goto out;
+ 
+ 	sw->uuid = kmemdup(uuid, sizeof(*uuid), GFP_KERNEL);
++	if (!sw->uuid) {
++		tb_sw_warn(sw, "cannot allocate memory for switch\n");
++		tb_switch_put(sw);
++		goto out;
++	}
+ 	sw->connection_id = connection_id;
+ 	sw->connection_key = connection_key;
+ 	sw->link = link;
+diff --git a/drivers/thunderbolt/property.c b/drivers/thunderbolt/property.c
+index b2f0d6386cee..8c077c4f3b5b 100644
+--- a/drivers/thunderbolt/property.c
++++ b/drivers/thunderbolt/property.c
+@@ -548,6 +548,11 @@ int tb_property_add_data(struct tb_property_dir *parent, const char *key,
+ 
+ 	property->length = size / 4;
+ 	property->value.data = kzalloc(size, GFP_KERNEL);
++	if (!property->value.data) {
++		kfree(property);
++		return -ENOMEM;
++	}
++
+ 	memcpy(property->value.data, buf, buflen);
+ 
+ 	list_add_tail(&property->list, &parent->properties);
+@@ -578,7 +583,12 @@ int tb_property_add_text(struct tb_property_dir *parent, const char *key,
+ 		return -ENOMEM;
+ 
+ 	property->length = size / 4;
+-	property->value.data = kzalloc(size, GFP_KERNEL);
++	property->value.text = kzalloc(size, GFP_KERNEL);
++	if (!property->value.text) {
++		kfree(property);
++		return -ENOMEM;
++	}
++
+ 	strcpy(property->value.text, text);
+ 
+ 	list_add_tail(&property->list, &parent->properties);
+diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
+index cd96994dc094..f569a2673742 100644
+--- a/drivers/thunderbolt/switch.c
++++ b/drivers/thunderbolt/switch.c
+@@ -10,15 +10,13 @@
+ #include <linux/idr.h>
+ #include <linux/nvmem-provider.h>
+ #include <linux/pm_runtime.h>
++#include <linux/sched/signal.h>
+ #include <linux/sizes.h>
+ #include <linux/slab.h>
+ #include <linux/vmalloc.h>
+ 
+ #include "tb.h"
+ 
+-/* Switch authorization from userspace is serialized by this lock */
+-static DEFINE_MUTEX(switch_lock);
+-
+ /* Switch NVM support */
+ 
+ #define NVM_DEVID		0x05
+@@ -254,8 +252,8 @@ static int tb_switch_nvm_write(void *priv, unsigned int offset, void *val,
+ 	struct tb_switch *sw = priv;
+ 	int ret = 0;
+ 
+-	if (mutex_lock_interruptible(&switch_lock))
+-		return -ERESTARTSYS;
++	if (!mutex_trylock(&sw->tb->lock))
++		return restart_syscall();
+ 
+ 	/*
+ 	 * Since writing the NVM image might require some special steps,
+@@ -275,7 +273,7 @@ static int tb_switch_nvm_write(void *priv, unsigned int offset, void *val,
+ 	memcpy(sw->nvm->buf + offset, val, bytes);
+ 
+ unlock:
+-	mutex_unlock(&switch_lock);
++	mutex_unlock(&sw->tb->lock);
+ 
+ 	return ret;
+ }
+@@ -364,10 +362,7 @@ static int tb_switch_nvm_add(struct tb_switch *sw)
+ 	}
+ 	nvm->non_active = nvm_dev;
+ 
+-	mutex_lock(&switch_lock);
+ 	sw->nvm = nvm;
+-	mutex_unlock(&switch_lock);
+-
+ 	return 0;
+ 
+ err_nvm_active:
+@@ -384,10 +379,8 @@ static void tb_switch_nvm_remove(struct tb_switch *sw)
+ {
+ 	struct tb_switch_nvm *nvm;
+ 
+-	mutex_lock(&switch_lock);
+ 	nvm = sw->nvm;
+ 	sw->nvm = NULL;
+-	mutex_unlock(&switch_lock);
+ 
+ 	if (!nvm)
+ 		return;
+@@ -716,8 +709,8 @@ static int tb_switch_set_authorized(struct tb_switch *sw, unsigned int val)
+ {
+ 	int ret = -EINVAL;
+ 
+-	if (mutex_lock_interruptible(&switch_lock))
+-		return -ERESTARTSYS;
++	if (!mutex_trylock(&sw->tb->lock))
++		return restart_syscall();
+ 
+ 	if (sw->authorized)
+ 		goto unlock;
+@@ -760,7 +753,7 @@ static int tb_switch_set_authorized(struct tb_switch *sw, unsigned int val)
+ 	}
+ 
+ unlock:
+-	mutex_unlock(&switch_lock);
++	mutex_unlock(&sw->tb->lock);
+ 	return ret;
+ }
+ 
+@@ -817,15 +810,15 @@ static ssize_t key_show(struct device *dev, struct device_attribute *attr,
+ 	struct tb_switch *sw = tb_to_switch(dev);
+ 	ssize_t ret;
+ 
+-	if (mutex_lock_interruptible(&switch_lock))
+-		return -ERESTARTSYS;
++	if (!mutex_trylock(&sw->tb->lock))
++		return restart_syscall();
+ 
+ 	if (sw->key)
+ 		ret = sprintf(buf, "%*phN\n", TB_SWITCH_KEY_SIZE, sw->key);
+ 	else
+ 		ret = sprintf(buf, "\n");
+ 
+-	mutex_unlock(&switch_lock);
++	mutex_unlock(&sw->tb->lock);
+ 	return ret;
+ }
+ 
+@@ -842,8 +835,8 @@ static ssize_t key_store(struct device *dev, struct device_attribute *attr,
+ 	else if (hex2bin(key, buf, sizeof(key)))
+ 		return -EINVAL;
+ 
+-	if (mutex_lock_interruptible(&switch_lock))
+-		return -ERESTARTSYS;
++	if (!mutex_trylock(&sw->tb->lock))
++		return restart_syscall();
+ 
+ 	if (sw->authorized) {
+ 		ret = -EBUSY;
+@@ -858,7 +851,7 @@ static ssize_t key_store(struct device *dev, struct device_attribute *attr,
+ 		}
+ 	}
+ 
+-	mutex_unlock(&switch_lock);
++	mutex_unlock(&sw->tb->lock);
+ 	return ret;
+ }
+ static DEVICE_ATTR(key, 0600, key_show, key_store);
+@@ -904,8 +897,8 @@ static ssize_t nvm_authenticate_store(struct device *dev,
+ 	bool val;
+ 	int ret;
+ 
+-	if (mutex_lock_interruptible(&switch_lock))
+-		return -ERESTARTSYS;
++	if (!mutex_trylock(&sw->tb->lock))
++		return restart_syscall();
+ 
+ 	/* If NVMem devices are not yet added */
+ 	if (!sw->nvm) {
+@@ -953,7 +946,7 @@ static ssize_t nvm_authenticate_store(struct device *dev,
+ 	}
+ 
+ exit_unlock:
+-	mutex_unlock(&switch_lock);
++	mutex_unlock(&sw->tb->lock);
+ 
+ 	if (ret)
+ 		return ret;
+@@ -967,8 +960,8 @@ static ssize_t nvm_version_show(struct device *dev,
+ 	struct tb_switch *sw = tb_to_switch(dev);
+ 	int ret;
+ 
+-	if (mutex_lock_interruptible(&switch_lock))
+-		return -ERESTARTSYS;
++	if (!mutex_trylock(&sw->tb->lock))
++		return restart_syscall();
+ 
+ 	if (sw->safe_mode)
+ 		ret = -ENODATA;
+@@ -977,7 +970,7 @@ static ssize_t nvm_version_show(struct device *dev,
+ 	else
+ 		ret = sprintf(buf, "%x.%x\n", sw->nvm->major, sw->nvm->minor);
+ 
+-	mutex_unlock(&switch_lock);
++	mutex_unlock(&sw->tb->lock);
+ 
+ 	return ret;
+ }
+@@ -1294,13 +1287,14 @@ int tb_switch_configure(struct tb_switch *sw)
+ 	return tb_plug_events_active(sw, true);
+ }
+ 
+-static void tb_switch_set_uuid(struct tb_switch *sw)
++static int tb_switch_set_uuid(struct tb_switch *sw)
+ {
+ 	u32 uuid[4];
+-	int cap;
++	int cap, ret;
+ 
++	ret = 0;
+ 	if (sw->uuid)
+-		return;
++		return ret;
+ 
+ 	/*
+ 	 * The newer controllers include fused UUID as part of link
+@@ -1308,7 +1302,9 @@ static void tb_switch_set_uuid(struct tb_switch *sw)
+ 	 */
+ 	cap = tb_switch_find_vse_cap(sw, TB_VSE_CAP_LINK_CONTROLLER);
+ 	if (cap > 0) {
+-		tb_sw_read(sw, uuid, TB_CFG_SWITCH, cap + 3, 4);
++		ret = tb_sw_read(sw, uuid, TB_CFG_SWITCH, cap + 3, 4);
++		if (ret)
++			return ret;
+ 	} else {
+ 		/*
+ 		 * ICM generates UUID based on UID and fills the upper
+@@ -1323,6 +1319,9 @@ static void tb_switch_set_uuid(struct tb_switch *sw)
+ 	}
+ 
+ 	sw->uuid = kmemdup(uuid, sizeof(uuid), GFP_KERNEL);
++	if (!sw->uuid)
++		ret = -ENOMEM;
++	return ret;
+ }
+ 
+ static int tb_switch_add_dma_port(struct tb_switch *sw)
+@@ -1372,7 +1371,9 @@ static int tb_switch_add_dma_port(struct tb_switch *sw)
+ 
+ 	if (status) {
+ 		tb_sw_info(sw, "switch flash authentication failed\n");
+-		tb_switch_set_uuid(sw);
++		ret = tb_switch_set_uuid(sw);
++		if (ret)
++			return ret;
+ 		nvm_set_auth_status(sw, status);
+ 	}
+ 
+@@ -1422,7 +1423,9 @@ int tb_switch_add(struct tb_switch *sw)
+ 		}
+ 		tb_sw_dbg(sw, "uid: %#llx\n", sw->uid);
+ 
+-		tb_switch_set_uuid(sw);
++		ret = tb_switch_set_uuid(sw);
++		if (ret)
++			return ret;
+ 
+ 		for (i = 0; i <= sw->config.max_port_number; i++) {
+ 			if (sw->ports[i].disabled) {
+diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
+index 52584c4003e3..f5e0282225d1 100644
+--- a/drivers/thunderbolt/tb.h
++++ b/drivers/thunderbolt/tb.h
+@@ -80,8 +80,7 @@ struct tb_switch_nvm {
+  * @depth: Depth in the chain this switch is connected (ICM only)
+  *
+  * When the switch is being added or removed to the domain (other
+- * switches) you need to have domain lock held. For switch authorization
+- * internal switch_lock is enough.
++ * switches) you need to have domain lock held.
+  */
+ struct tb_switch {
+ 	struct device dev;
+diff --git a/drivers/thunderbolt/xdomain.c b/drivers/thunderbolt/xdomain.c
+index e27dd8beb94b..e0642dcb8b9b 100644
+--- a/drivers/thunderbolt/xdomain.c
++++ b/drivers/thunderbolt/xdomain.c
+@@ -740,6 +740,7 @@ static void enumerate_services(struct tb_xdomain *xd)
+ 	struct tb_service *svc;
+ 	struct tb_property *p;
+ 	struct device *dev;
++	int id;
+ 
+ 	/*
+ 	 * First remove all services that are not available anymore in
+@@ -768,7 +769,12 @@ static void enumerate_services(struct tb_xdomain *xd)
+ 			break;
+ 		}
+ 
+-		svc->id = ida_simple_get(&xd->service_ids, 0, 0, GFP_KERNEL);
++		id = ida_simple_get(&xd->service_ids, 0, 0, GFP_KERNEL);
++		if (id < 0) {
++			kfree(svc);
++			break;
++		}
++		svc->id = id;
+ 		svc->dev.bus = &tb_bus_type;
+ 		svc->dev.type = &tb_service_type;
+ 		svc->dev.parent = &xd->dev;
+diff --git a/drivers/tty/ipwireless/main.c b/drivers/tty/ipwireless/main.c
+index 3475e841ef5c..4c18bbfe1a92 100644
+--- a/drivers/tty/ipwireless/main.c
++++ b/drivers/tty/ipwireless/main.c
+@@ -114,6 +114,10 @@ static int ipwireless_probe(struct pcmcia_device *p_dev, void *priv_data)
+ 
+ 	ipw->common_memory = ioremap(p_dev->resource[2]->start,
+ 				resource_size(p_dev->resource[2]));
++	if (!ipw->common_memory) {
++		ret = -ENOMEM;
++		goto exit1;
++	}
+ 	if (!request_mem_region(p_dev->resource[2]->start,
+ 				resource_size(p_dev->resource[2]),
+ 				IPWIRELESS_PCCARD_NAME)) {
+@@ -134,6 +138,10 @@ static int ipwireless_probe(struct pcmcia_device *p_dev, void *priv_data)
+ 
+ 	ipw->attr_memory = ioremap(p_dev->resource[3]->start,
+ 				resource_size(p_dev->resource[3]));
++	if (!ipw->attr_memory) {
++		ret = -ENOMEM;
++		goto exit3;
++	}
+ 	if (!request_mem_region(p_dev->resource[3]->start,
+ 				resource_size(p_dev->resource[3]),
+ 				IPWIRELESS_PCCARD_NAME)) {
+diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
+index 015b126ce455..a5c8bcb7723b 100644
+--- a/drivers/usb/core/hcd.c
++++ b/drivers/usb/core/hcd.c
+@@ -3001,6 +3001,9 @@ usb_hcd_platform_shutdown(struct platform_device *dev)
+ {
+ 	struct usb_hcd *hcd = platform_get_drvdata(dev);
+ 
++	/* No need for pm_runtime_put(), we're shutting down */
++	pm_runtime_get_sync(&dev->dev);
++
+ 	if (hcd->driver->shutdown)
+ 		hcd->driver->shutdown(hcd);
+ }
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 55c87be5764c..d325dd66f10e 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -5864,7 +5864,10 @@ int usb_reset_device(struct usb_device *udev)
+ 					cintf->needs_binding = 1;
+ 			}
+ 		}
+-		usb_unbind_and_rebind_marked_interfaces(udev);
++
++		/* If the reset failed, hub_wq will unbind drivers later */
++		if (ret == 0)
++			usb_unbind_and_rebind_marked_interfaces(udev);
+ 	}
+ 
+ 	usb_autosuspend_device(udev);
+diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c
+index 55ef3cc2701b..f54127473239 100644
+--- a/drivers/usb/dwc2/gadget.c
++++ b/drivers/usb/dwc2/gadget.c
+@@ -714,13 +714,11 @@ static unsigned int dwc2_gadget_get_chain_limit(struct dwc2_hsotg_ep *hs_ep)
+ 	unsigned int maxsize;
+ 
+ 	if (is_isoc)
+-		maxsize = hs_ep->dir_in ? DEV_DMA_ISOC_TX_NBYTES_LIMIT :
+-					   DEV_DMA_ISOC_RX_NBYTES_LIMIT;
++		maxsize = (hs_ep->dir_in ? DEV_DMA_ISOC_TX_NBYTES_LIMIT :
++					   DEV_DMA_ISOC_RX_NBYTES_LIMIT) *
++					   MAX_DMA_DESC_NUM_HS_ISOC;
+ 	else
+-		maxsize = DEV_DMA_NBYTES_LIMIT;
+-
+-	/* Above size of one descriptor was chosen, multiple it */
+-	maxsize *= MAX_DMA_DESC_NUM_GENERIC;
++		maxsize = DEV_DMA_NBYTES_LIMIT * MAX_DMA_DESC_NUM_GENERIC;
+ 
+ 	return maxsize;
+ }
+@@ -903,7 +901,7 @@ static int dwc2_gadget_fill_isoc_desc(struct dwc2_hsotg_ep *hs_ep,
+ 
+ 	/* Update index of last configured entry in the chain */
+ 	hs_ep->next_desc++;
+-	if (hs_ep->next_desc >= MAX_DMA_DESC_NUM_GENERIC)
++	if (hs_ep->next_desc >= MAX_DMA_DESC_NUM_HS_ISOC)
+ 		hs_ep->next_desc = 0;
+ 
+ 	return 0;
+@@ -935,7 +933,7 @@ static void dwc2_gadget_start_isoc_ddma(struct dwc2_hsotg_ep *hs_ep)
+ 	}
+ 
+ 	/* Initialize descriptor chain by Host Busy status */
+-	for (i = 0; i < MAX_DMA_DESC_NUM_GENERIC; i++) {
++	for (i = 0; i < MAX_DMA_DESC_NUM_HS_ISOC; i++) {
+ 		desc = &hs_ep->desc_list[i];
+ 		desc->status = 0;
+ 		desc->status |= (DEV_DMA_BUFF_STS_HBUSY
+@@ -2122,7 +2120,7 @@ static void dwc2_gadget_complete_isoc_request_ddma(struct dwc2_hsotg_ep *hs_ep)
+ 		dwc2_hsotg_complete_request(hsotg, hs_ep, hs_req, 0);
+ 
+ 		hs_ep->compl_desc++;
+-		if (hs_ep->compl_desc > (MAX_DMA_DESC_NUM_GENERIC - 1))
++		if (hs_ep->compl_desc > (MAX_DMA_DESC_NUM_HS_ISOC - 1))
+ 			hs_ep->compl_desc = 0;
+ 		desc_sts = hs_ep->desc_list[hs_ep->compl_desc].status;
+ 	}
+@@ -3859,6 +3857,7 @@ static int dwc2_hsotg_ep_enable(struct usb_ep *ep,
+ 	unsigned int i, val, size;
+ 	int ret = 0;
+ 	unsigned char ep_type;
++	int desc_num;
+ 
+ 	dev_dbg(hsotg->dev,
+ 		"%s: ep %s: a 0x%02x, attr 0x%02x, mps 0x%04x, intr %d\n",
+@@ -3905,11 +3904,15 @@ static int dwc2_hsotg_ep_enable(struct usb_ep *ep,
+ 	dev_dbg(hsotg->dev, "%s: read DxEPCTL=0x%08x from 0x%08x\n",
+ 		__func__, epctrl, epctrl_reg);
+ 
++	if (using_desc_dma(hsotg) && ep_type == USB_ENDPOINT_XFER_ISOC)
++		desc_num = MAX_DMA_DESC_NUM_HS_ISOC;
++	else
++		desc_num = MAX_DMA_DESC_NUM_GENERIC;
++
+ 	/* Allocate DMA descriptor chain for non-ctrl endpoints */
+ 	if (using_desc_dma(hsotg) && !hs_ep->desc_list) {
+ 		hs_ep->desc_list = dmam_alloc_coherent(hsotg->dev,
+-			MAX_DMA_DESC_NUM_GENERIC *
+-			sizeof(struct dwc2_dma_desc),
++			desc_num * sizeof(struct dwc2_dma_desc),
+ 			&hs_ep->desc_list_dma, GFP_ATOMIC);
+ 		if (!hs_ep->desc_list) {
+ 			ret = -ENOMEM;
+@@ -4051,7 +4054,7 @@ error1:
+ 
+ error2:
+ 	if (ret && using_desc_dma(hsotg) && hs_ep->desc_list) {
+-		dmam_free_coherent(hsotg->dev, MAX_DMA_DESC_NUM_GENERIC *
++		dmam_free_coherent(hsotg->dev, desc_num *
+ 			sizeof(struct dwc2_dma_desc),
+ 			hs_ep->desc_list, hs_ep->desc_list_dma);
+ 		hs_ep->desc_list = NULL;
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index f944cea4056b..72110a8c49d6 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -1600,6 +1600,7 @@ static int dwc3_suspend_common(struct dwc3 *dwc, pm_message_t msg)
+ 		spin_lock_irqsave(&dwc->lock, flags);
+ 		dwc3_gadget_suspend(dwc);
+ 		spin_unlock_irqrestore(&dwc->lock, flags);
++		synchronize_irq(dwc->irq_gadget);
+ 		dwc3_core_exit(dwc);
+ 		break;
+ 	case DWC3_GCTL_PRTCAP_HOST:
+@@ -1632,6 +1633,7 @@ static int dwc3_suspend_common(struct dwc3 *dwc, pm_message_t msg)
+ 			spin_lock_irqsave(&dwc->lock, flags);
+ 			dwc3_gadget_suspend(dwc);
+ 			spin_unlock_irqrestore(&dwc->lock, flags);
++			synchronize_irq(dwc->irq_gadget);
+ 		}
+ 
+ 		dwc3_otg_exit(dwc);
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 9f941cdb0691..1227e8f5a5c8 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -3385,8 +3385,6 @@ int dwc3_gadget_suspend(struct dwc3 *dwc)
+ 	dwc3_disconnect_gadget(dwc);
+ 	__dwc3_gadget_stop(dwc);
+ 
+-	synchronize_irq(dwc->irq_gadget);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index 0f8d16de7a37..768230795bb2 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -1133,7 +1133,8 @@ error_lock:
+ error_mutex:
+ 	mutex_unlock(&epfile->mutex);
+ error:
+-	ffs_free_buffer(io_data);
++	if (ret != -EIOCBQUEUED) /* don't free if there is iocb queued */
++		ffs_free_buffer(io_data);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/video/fbdev/core/fbcmap.c b/drivers/video/fbdev/core/fbcmap.c
+index 68a113594808..2811c4afde01 100644
+--- a/drivers/video/fbdev/core/fbcmap.c
++++ b/drivers/video/fbdev/core/fbcmap.c
+@@ -94,6 +94,8 @@ int fb_alloc_cmap_gfp(struct fb_cmap *cmap, int len, int transp, gfp_t flags)
+ 	int size = len * sizeof(u16);
+ 	int ret = -ENOMEM;
+ 
++	flags |= __GFP_NOWARN;
++
+ 	if (cmap->len != len) {
+ 		fb_dealloc_cmap(cmap);
+ 		if (!len)
+diff --git a/drivers/video/fbdev/core/modedb.c b/drivers/video/fbdev/core/modedb.c
+index 283d9307df21..ac049871704d 100644
+--- a/drivers/video/fbdev/core/modedb.c
++++ b/drivers/video/fbdev/core/modedb.c
+@@ -935,6 +935,9 @@ void fb_var_to_videomode(struct fb_videomode *mode,
+ 	if (var->vmode & FB_VMODE_DOUBLE)
+ 		vtotal *= 2;
+ 
++	if (!htotal || !vtotal)
++		return;
++
+ 	hfreq = pixclock/htotal;
+ 	mode->refresh = hfreq/vtotal;
+ }
+diff --git a/drivers/video/fbdev/efifb.c b/drivers/video/fbdev/efifb.c
+index fd02e8a4841d..9f39f0c360e0 100644
+--- a/drivers/video/fbdev/efifb.c
++++ b/drivers/video/fbdev/efifb.c
+@@ -464,7 +464,8 @@ static int efifb_probe(struct platform_device *dev)
+ 	info->apertures->ranges[0].base = efifb_fix.smem_start;
+ 	info->apertures->ranges[0].size = size_remap;
+ 
+-	if (!efi_mem_desc_lookup(efifb_fix.smem_start, &md)) {
++	if (efi_enabled(EFI_BOOT) &&
++	    !efi_mem_desc_lookup(efifb_fix.smem_start, &md)) {
+ 		if ((efifb_fix.smem_start + efifb_fix.smem_len) >
+ 		    (md.phys_addr + (md.num_pages << EFI_PAGE_SHIFT))) {
+ 			pr_err("efifb: video memory @ 0x%lx spans multiple EFI memory regions\n",
+diff --git a/drivers/w1/w1_io.c b/drivers/w1/w1_io.c
+index 0364d3329c52..3516ce6718d9 100644
+--- a/drivers/w1/w1_io.c
++++ b/drivers/w1/w1_io.c
+@@ -432,8 +432,7 @@ int w1_reset_resume_command(struct w1_master *dev)
+ 	if (w1_reset_bus(dev))
+ 		return -1;
+ 
+-	/* This will make only the last matched slave perform a skip ROM. */
+-	w1_write_8(dev, W1_RESUME_CMD);
++	w1_write_8(dev, dev->slave_count > 1 ? W1_RESUME_CMD : W1_SKIP_ROM);
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(w1_reset_resume_command);
+diff --git a/drivers/xen/biomerge.c b/drivers/xen/biomerge.c
+index f3fbb700f569..05a286d24f14 100644
+--- a/drivers/xen/biomerge.c
++++ b/drivers/xen/biomerge.c
+@@ -4,12 +4,13 @@
+ #include <xen/xen.h>
+ #include <xen/page.h>
+ 
++/* check if @page can be merged with 'vec1' */
+ bool xen_biovec_phys_mergeable(const struct bio_vec *vec1,
+-			       const struct bio_vec *vec2)
++			       const struct page *page)
+ {
+ #if XEN_PAGE_SIZE == PAGE_SIZE
+ 	unsigned long bfn1 = pfn_to_bfn(page_to_pfn(vec1->bv_page));
+-	unsigned long bfn2 = pfn_to_bfn(page_to_pfn(vec2->bv_page));
++	unsigned long bfn2 = pfn_to_bfn(page_to_pfn(page));
+ 
+ 	return bfn1 + PFN_DOWN(vec1->bv_offset + vec1->bv_len) == bfn2;
+ #else
+diff --git a/fs/afs/xattr.c b/fs/afs/xattr.c
+index a2cdf25573e2..706801c6c4c4 100644
+--- a/fs/afs/xattr.c
++++ b/fs/afs/xattr.c
+@@ -69,11 +69,20 @@ static int afs_xattr_get_fid(const struct xattr_handler *handler,
+ 			     void *buffer, size_t size)
+ {
+ 	struct afs_vnode *vnode = AFS_FS_I(inode);
+-	char text[8 + 1 + 8 + 1 + 8 + 1];
++	char text[16 + 1 + 24 + 1 + 8 + 1];
+ 	size_t len;
+ 
+-	len = sprintf(text, "%llx:%llx:%x",
+-		      vnode->fid.vid, vnode->fid.vnode, vnode->fid.unique);
++	/* The volume ID is 64-bit, the vnode ID is 96-bit and the
++	 * uniquifier is 32-bit.
++	 */
++	len = sprintf(text, "%llx:", vnode->fid.vid);
++	if (vnode->fid.vnode_hi)
++		len += sprintf(text + len, "%x%016llx",
++			       vnode->fid.vnode_hi, vnode->fid.vnode);
++	else
++		len += sprintf(text + len, "%llx", vnode->fid.vnode);
++	len += sprintf(text + len, ":%x", vnode->fid.unique);
++
+ 	if (size == 0)
+ 		return len;
+ 	if (len > size)
+diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
+index ef66db38cedb..efe4d4080a21 100644
+--- a/fs/btrfs/backref.c
++++ b/fs/btrfs/backref.c
+@@ -712,7 +712,7 @@ out:
+  * read tree blocks and add keys where required.
+  */
+ static int add_missing_keys(struct btrfs_fs_info *fs_info,
+-			    struct preftrees *preftrees)
++			    struct preftrees *preftrees, bool lock)
+ {
+ 	struct prelim_ref *ref;
+ 	struct extent_buffer *eb;
+@@ -737,12 +737,14 @@ static int add_missing_keys(struct btrfs_fs_info *fs_info,
+ 			free_extent_buffer(eb);
+ 			return -EIO;
+ 		}
+-		btrfs_tree_read_lock(eb);
++		if (lock)
++			btrfs_tree_read_lock(eb);
+ 		if (btrfs_header_level(eb) == 0)
+ 			btrfs_item_key_to_cpu(eb, &ref->key_for_search, 0);
+ 		else
+ 			btrfs_node_key_to_cpu(eb, &ref->key_for_search, 0);
+-		btrfs_tree_read_unlock(eb);
++		if (lock)
++			btrfs_tree_read_unlock(eb);
+ 		free_extent_buffer(eb);
+ 		prelim_ref_insert(fs_info, &preftrees->indirect, ref, NULL);
+ 		cond_resched();
+@@ -1227,7 +1229,7 @@ again:
+ 
+ 	btrfs_release_path(path);
+ 
+-	ret = add_missing_keys(fs_info, &preftrees);
++	ret = add_missing_keys(fs_info, &preftrees, path->skip_locking == 0);
+ 	if (ret)
+ 		goto out;
+ 
+@@ -1288,11 +1290,14 @@ again:
+ 					ret = -EIO;
+ 					goto out;
+ 				}
+-				btrfs_tree_read_lock(eb);
+-				btrfs_set_lock_blocking_rw(eb, BTRFS_READ_LOCK);
++				if (!path->skip_locking) {
++					btrfs_tree_read_lock(eb);
++					btrfs_set_lock_blocking_rw(eb, BTRFS_READ_LOCK);
++				}
+ 				ret = find_extent_in_eb(eb, bytenr,
+ 							*extent_item_pos, &eie, ignore_offset);
+-				btrfs_tree_read_unlock_blocking(eb);
++				if (!path->skip_locking)
++					btrfs_tree_read_unlock_blocking(eb);
+ 				free_extent_buffer(eb);
+ 				if (ret < 0)
+ 					goto out;
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index a19bbfce449e..e4772b90dad0 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -3985,8 +3985,7 @@ static int create_space_info(struct btrfs_fs_info *info, u64 flags)
+ 				    info->space_info_kobj, "%s",
+ 				    alloc_name(space_info->flags));
+ 	if (ret) {
+-		percpu_counter_destroy(&space_info->total_bytes_pinned);
+-		kfree(space_info);
++		kobject_put(&space_info->kobj);
+ 		return ret;
+ 	}
+ 
+@@ -11192,9 +11191,9 @@ int btrfs_error_unpin_extent_range(struct btrfs_fs_info *fs_info,
+  * held back allocations.
+  */
+ static int btrfs_trim_free_extents(struct btrfs_device *device,
+-				   struct fstrim_range *range, u64 *trimmed)
++				   u64 minlen, u64 *trimmed)
+ {
+-	u64 start = range->start, len = 0;
++	u64 start = 0, len = 0;
+ 	int ret;
+ 
+ 	*trimmed = 0;
+@@ -11237,8 +11236,8 @@ static int btrfs_trim_free_extents(struct btrfs_device *device,
+ 		if (!trans)
+ 			up_read(&fs_info->commit_root_sem);
+ 
+-		ret = find_free_dev_extent_start(trans, device, range->minlen,
+-						 start, &start, &len);
++		ret = find_free_dev_extent_start(trans, device, minlen, start,
++						 &start, &len);
+ 		if (trans) {
+ 			up_read(&fs_info->commit_root_sem);
+ 			btrfs_put_transaction(trans);
+@@ -11251,16 +11250,6 @@ static int btrfs_trim_free_extents(struct btrfs_device *device,
+ 			break;
+ 		}
+ 
+-		/* If we are out of the passed range break */
+-		if (start > range->start + range->len - 1) {
+-			mutex_unlock(&fs_info->chunk_mutex);
+-			ret = 0;
+-			break;
+-		}
+-
+-		start = max(range->start, start);
+-		len = min(range->len, len);
+-
+ 		ret = btrfs_issue_discard(device->bdev, start, len, &bytes);
+ 		mutex_unlock(&fs_info->chunk_mutex);
+ 
+@@ -11270,10 +11259,6 @@ static int btrfs_trim_free_extents(struct btrfs_device *device,
+ 		start += len;
+ 		*trimmed += bytes;
+ 
+-		/* We've trimmed enough */
+-		if (*trimmed >= range->len)
+-			break;
+-
+ 		if (fatal_signal_pending(current)) {
+ 			ret = -ERESTARTSYS;
+ 			break;
+@@ -11357,7 +11342,8 @@ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
+ 	mutex_lock(&fs_info->fs_devices->device_list_mutex);
+ 	devices = &fs_info->fs_devices->devices;
+ 	list_for_each_entry(device, devices, dev_list) {
+-		ret = btrfs_trim_free_extents(device, range, &group_trimmed);
++		ret = btrfs_trim_free_extents(device, range->minlen,
++					      &group_trimmed);
+ 		if (ret) {
+ 			dev_failed++;
+ 			dev_ret = ret;
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index d38dc8c31533..7f082b019766 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -2058,6 +2058,18 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ 	int ret = 0, err;
+ 	u64 len;
+ 
++	/*
++	 * If the inode needs a full sync, make sure we use a full range to
++	 * avoid log tree corruption, due to hole detection racing with ordered
++	 * extent completion for adjacent ranges, and assertion failures during
++	 * hole detection.
++	 */
++	if (test_bit(BTRFS_INODE_NEEDS_FULL_SYNC,
++		     &BTRFS_I(inode)->runtime_flags)) {
++		start = 0;
++		end = LLONG_MAX;
++	}
++
+ 	/*
+ 	 * The range length can be represented by u64, we have to do the typecasts
+ 	 * to avoid signed overflow if it's [0, LLONG_MAX] eg. from fsync()
+@@ -2546,10 +2558,8 @@ static int btrfs_punch_hole(struct inode *inode, loff_t offset, loff_t len)
+ 
+ 	ret = btrfs_punch_hole_lock_range(inode, lockstart, lockend,
+ 					  &cached_state);
+-	if (ret) {
+-		inode_unlock(inode);
++	if (ret)
+ 		goto out_only_mutex;
+-	}
+ 
+ 	path = btrfs_alloc_path();
+ 	if (!path) {
+@@ -3132,6 +3142,7 @@ static long btrfs_fallocate(struct file *file, int mode,
+ 			ret = btrfs_qgroup_reserve_data(inode, &data_reserved,
+ 					cur_offset, last_byte - cur_offset);
+ 			if (ret < 0) {
++				cur_offset = last_byte;
+ 				free_extent_map(em);
+ 				break;
+ 			}
+@@ -3181,7 +3192,7 @@ out:
+ 	/* Let go of our reservation. */
+ 	if (ret != 0 && !(mode & FALLOC_FL_ZERO_RANGE))
+ 		btrfs_free_reserved_data_space(inode, data_reserved,
+-				alloc_start, alloc_end - cur_offset);
++				cur_offset, alloc_end - cur_offset);
+ 	extent_changeset_free(data_reserved);
+ 	return ret;
+ }
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index 272b287f8cf0..0395b8233c90 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -4271,27 +4271,36 @@ int btrfs_relocate_block_group(struct btrfs_fs_info *fs_info, u64 group_start)
+ 		mutex_lock(&fs_info->cleaner_mutex);
+ 		ret = relocate_block_group(rc);
+ 		mutex_unlock(&fs_info->cleaner_mutex);
+-		if (ret < 0) {
++		if (ret < 0)
+ 			err = ret;
+-			goto out;
+-		}
+-
+-		if (rc->extents_found == 0)
+-			break;
+-
+-		btrfs_info(fs_info, "found %llu extents", rc->extents_found);
+ 
++		/*
++		 * We may have gotten ENOSPC after we already dirtied some
++		 * extents.  If writeout happens while we're relocating a
++		 * different block group we could end up hitting the
++		 * BUG_ON(rc->stage == UPDATE_DATA_PTRS) in
++		 * btrfs_reloc_cow_block.  Make sure we write everything out
++		 * properly so we don't trip over this problem, and then break
++		 * out of the loop if we hit an error.
++		 */
+ 		if (rc->stage == MOVE_DATA_EXTENTS && rc->found_file_extent) {
+ 			ret = btrfs_wait_ordered_range(rc->data_inode, 0,
+ 						       (u64)-1);
+-			if (ret) {
++			if (ret)
+ 				err = ret;
+-				goto out;
+-			}
+ 			invalidate_mapping_pages(rc->data_inode->i_mapping,
+ 						 0, -1);
+ 			rc->stage = UPDATE_DATA_PTRS;
+ 		}
++
++		if (err < 0)
++			goto out;
++
++		if (rc->extents_found == 0)
++			break;
++
++		btrfs_info(fs_info, "found %llu extents", rc->extents_found);
++
+ 	}
+ 
+ 	WARN_ON(rc->block_group->pinned > 0);
+diff --git a/fs/btrfs/root-tree.c b/fs/btrfs/root-tree.c
+index 65bda0682928..3228d3b3084a 100644
+--- a/fs/btrfs/root-tree.c
++++ b/fs/btrfs/root-tree.c
+@@ -132,16 +132,17 @@ int btrfs_update_root(struct btrfs_trans_handle *trans, struct btrfs_root
+ 		return -ENOMEM;
+ 
+ 	ret = btrfs_search_slot(trans, root, key, path, 0, 1);
+-	if (ret < 0) {
+-		btrfs_abort_transaction(trans, ret);
++	if (ret < 0)
+ 		goto out;
+-	}
+ 
+-	if (ret != 0) {
+-		btrfs_print_leaf(path->nodes[0]);
+-		btrfs_crit(fs_info, "unable to update root key %llu %u %llu",
+-			   key->objectid, key->type, key->offset);
+-		BUG_ON(1);
++	if (ret > 0) {
++		btrfs_crit(fs_info,
++			"unable to find root key (%llu %u %llu) in tree %llu",
++			key->objectid, key->type, key->offset,
++			root->root_key.objectid);
++		ret = -EUCLEAN;
++		btrfs_abort_transaction(trans, ret);
++		goto out;
+ 	}
+ 
+ 	l = path->nodes[0];
+diff --git a/fs/btrfs/sysfs.c b/fs/btrfs/sysfs.c
+index 5a5930e3d32b..2f078b77fe14 100644
+--- a/fs/btrfs/sysfs.c
++++ b/fs/btrfs/sysfs.c
+@@ -825,7 +825,12 @@ int btrfs_sysfs_add_fsid(struct btrfs_fs_devices *fs_devs,
+ 	fs_devs->fsid_kobj.kset = btrfs_kset;
+ 	error = kobject_init_and_add(&fs_devs->fsid_kobj,
+ 				&btrfs_ktype, parent, "%pU", fs_devs->fsid);
+-	return error;
++	if (error) {
++		kobject_put(&fs_devs->fsid_kobj);
++		return error;
++	}
++
++	return 0;
+ }
+ 
+ int btrfs_sysfs_add_mounted(struct btrfs_fs_info *fs_info)
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 7f3b74a55073..1a1b4fa503f8 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -4106,6 +4106,7 @@ fill_holes:
+ 							       *last_extent, 0,
+ 							       0, len, 0, len,
+ 							       0, 0, 0);
++				*last_extent += len;
+ 			}
+ 		}
+ 	}
+diff --git a/fs/char_dev.c b/fs/char_dev.c
+index a279c58fe360..8a63cfa29005 100644
+--- a/fs/char_dev.c
++++ b/fs/char_dev.c
+@@ -159,6 +159,12 @@ __register_chrdev_region(unsigned int major, unsigned int baseminor,
+ 			ret = -EBUSY;
+ 			goto out;
+ 		}
++
++		if (new_min < old_min && new_max > old_max) {
++			ret = -EBUSY;
++			goto out;
++		}
++
+ 	}
+ 
+ 	cd->next = *cp;
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index 10ead04346ee..8c295e37ac23 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -1657,6 +1657,7 @@ static inline bool is_retryable_error(int error)
+ 
+ #define   CIFS_HAS_CREDITS 0x0400    /* already has credits */
+ #define   CIFS_TRANSFORM_REQ 0x0800    /* transform request before sending */
++#define   CIFS_NO_SRV_RSP    0x1000    /* there is no server response */
+ 
+ /* Security Flags: indicate type of session setup needed */
+ #define   CIFSSEC_MAY_SIGN	0x00001
+diff --git a/fs/cifs/cifssmb.c b/fs/cifs/cifssmb.c
+index 551924beb86f..f91e714928d4 100644
+--- a/fs/cifs/cifssmb.c
++++ b/fs/cifs/cifssmb.c
+@@ -2533,7 +2533,7 @@ CIFSSMBLock(const unsigned int xid, struct cifs_tcon *tcon,
+ 
+ 	if (lockType == LOCKING_ANDX_OPLOCK_RELEASE) {
+ 		/* no response expected */
+-		flags = CIFS_ASYNC_OP | CIFS_OBREAK_OP;
++		flags = CIFS_NO_SRV_RSP | CIFS_ASYNC_OP | CIFS_OBREAK_OP;
+ 		pSMB->Timeout = 0;
+ 	} else if (waitFlag) {
+ 		flags = CIFS_BLOCKING_OP; /* blocking operation, no timeout */
+diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c
+index 9544eb99b5a2..95f3be904eed 100644
+--- a/fs/cifs/transport.c
++++ b/fs/cifs/transport.c
+@@ -906,8 +906,11 @@ compound_send_recv(const unsigned int xid, struct cifs_ses *ses,
+ 
+ 	mutex_unlock(&ses->server->srv_mutex);
+ 
+-	if (rc < 0) {
+-		/* Sending failed for some reason - return credits back */
++	/*
++	 * If sending failed for some reason or it is an oplock break that we
++	 * will not receive a response to - return credits back
++	 */
++	if (rc < 0 || (flags & CIFS_NO_SRV_RSP)) {
+ 		for (i = 0; i < num_rqst; i++)
+ 			add_credits(ses->server, credits[i], optype);
+ 		goto out;
+@@ -928,9 +931,6 @@ compound_send_recv(const unsigned int xid, struct cifs_ses *ses,
+ 		smb311_update_preauth_hash(ses, rqst[0].rq_iov,
+ 					   rqst[0].rq_nvec);
+ 
+-	if (timeout == CIFS_ASYNC_OP)
+-		goto out;
+-
+ 	for (i = 0; i < num_rqst; i++) {
+ 		rc = wait_for_response(ses->server, midQ[i]);
+ 		if (rc != 0)
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 878f8b5dd39f..d3ca3b221e21 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -5627,25 +5627,22 @@ int ext4_setattr(struct dentry *dentry, struct iattr *attr)
+ 			up_write(&EXT4_I(inode)->i_data_sem);
+ 			ext4_journal_stop(handle);
+ 			if (error) {
+-				if (orphan)
++				if (orphan && inode->i_nlink)
+ 					ext4_orphan_del(NULL, inode);
+ 				goto err_out;
+ 			}
+ 		}
+-		if (!shrink)
++		if (!shrink) {
+ 			pagecache_isize_extended(inode, oldsize, inode->i_size);
+-
+-		/*
+-		 * Blocks are going to be removed from the inode. Wait
+-		 * for dio in flight.  Temporarily disable
+-		 * dioread_nolock to prevent livelock.
+-		 */
+-		if (orphan) {
+-			if (!ext4_should_journal_data(inode)) {
+-				inode_dio_wait(inode);
+-			} else
+-				ext4_wait_for_tail_page_commit(inode);
++		} else {
++			/*
++			 * Blocks are going to be removed from the inode. Wait
++			 * for dio in flight.
++			 */
++			inode_dio_wait(inode);
+ 		}
++		if (orphan && ext4_should_journal_data(inode))
++			ext4_wait_for_tail_page_commit(inode);
+ 		down_write(&EXT4_I(inode)->i_mmap_sem);
+ 
+ 		rc = ext4_break_layouts(inode);
+diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
+index 4b038f25f256..c925e9ec68f4 100644
+--- a/fs/gfs2/glock.c
++++ b/fs/gfs2/glock.c
+@@ -140,6 +140,7 @@ void gfs2_glock_free(struct gfs2_glock *gl)
+ {
+ 	struct gfs2_sbd *sdp = gl->gl_name.ln_sbd;
+ 
++	BUG_ON(atomic_read(&gl->gl_revokes));
+ 	rhashtable_remove_fast(&gl_hash_table, &gl->gl_node, ht_parms);
+ 	smp_mb();
+ 	wake_up_glock(gl);
+@@ -183,15 +184,19 @@ static int demote_ok(const struct gfs2_glock *gl)
+ 
+ void gfs2_glock_add_to_lru(struct gfs2_glock *gl)
+ {
++	if (!(gl->gl_ops->go_flags & GLOF_LRU))
++		return;
++
+ 	spin_lock(&lru_lock);
+ 
+-	if (!list_empty(&gl->gl_lru))
+-		list_del_init(&gl->gl_lru);
+-	else
++	list_del(&gl->gl_lru);
++	list_add_tail(&gl->gl_lru, &lru_list);
++
++	if (!test_bit(GLF_LRU, &gl->gl_flags)) {
++		set_bit(GLF_LRU, &gl->gl_flags);
+ 		atomic_inc(&lru_count);
++	}
+ 
+-	list_add_tail(&gl->gl_lru, &lru_list);
+-	set_bit(GLF_LRU, &gl->gl_flags);
+ 	spin_unlock(&lru_lock);
+ }
+ 
+@@ -201,7 +206,7 @@ static void gfs2_glock_remove_from_lru(struct gfs2_glock *gl)
+ 		return;
+ 
+ 	spin_lock(&lru_lock);
+-	if (!list_empty(&gl->gl_lru)) {
++	if (test_bit(GLF_LRU, &gl->gl_flags)) {
+ 		list_del_init(&gl->gl_lru);
+ 		atomic_dec(&lru_count);
+ 		clear_bit(GLF_LRU, &gl->gl_flags);
+@@ -1159,8 +1164,7 @@ void gfs2_glock_dq(struct gfs2_holder *gh)
+ 		    !test_bit(GLF_DEMOTE, &gl->gl_flags))
+ 			fast_path = 1;
+ 	}
+-	if (!test_bit(GLF_LFLUSH, &gl->gl_flags) && demote_ok(gl) &&
+-	    (glops->go_flags & GLOF_LRU))
++	if (!test_bit(GLF_LFLUSH, &gl->gl_flags) && demote_ok(gl))
+ 		gfs2_glock_add_to_lru(gl);
+ 
+ 	trace_gfs2_glock_queue(gh, 0);
+@@ -1456,6 +1460,7 @@ __acquires(&lru_lock)
+ 		if (!spin_trylock(&gl->gl_lockref.lock)) {
+ add_back_to_lru:
+ 			list_add(&gl->gl_lru, &lru_list);
++			set_bit(GLF_LRU, &gl->gl_flags);
+ 			atomic_inc(&lru_count);
+ 			continue;
+ 		}
+@@ -1463,7 +1468,6 @@ add_back_to_lru:
+ 			spin_unlock(&gl->gl_lockref.lock);
+ 			goto add_back_to_lru;
+ 		}
+-		clear_bit(GLF_LRU, &gl->gl_flags);
+ 		gl->gl_lockref.count++;
+ 		if (demote_ok(gl))
+ 			handle_callback(gl, LM_ST_UNLOCKED, 0, false);
+@@ -1498,6 +1502,7 @@ static long gfs2_scan_glock_lru(int nr)
+ 		if (!test_bit(GLF_LOCK, &gl->gl_flags)) {
+ 			list_move(&gl->gl_lru, &dispose);
+ 			atomic_dec(&lru_count);
++			clear_bit(GLF_LRU, &gl->gl_flags);
+ 			freed++;
+ 			continue;
+ 		}
+diff --git a/fs/gfs2/incore.h b/fs/gfs2/incore.h
+index e10e0b0a7cd5..e1a33d288121 100644
+--- a/fs/gfs2/incore.h
++++ b/fs/gfs2/incore.h
+@@ -621,6 +621,7 @@ enum {
+ 	SDF_SKIP_DLM_UNLOCK	= 8,
+ 	SDF_FORCE_AIL_FLUSH     = 9,
+ 	SDF_AIL1_IO_ERROR	= 10,
++	SDF_FS_FROZEN           = 11,
+ };
+ 
+ enum gfs2_freeze_state {
+diff --git a/fs/gfs2/lock_dlm.c b/fs/gfs2/lock_dlm.c
+index 31df26ed7854..69bd1597bacf 100644
+--- a/fs/gfs2/lock_dlm.c
++++ b/fs/gfs2/lock_dlm.c
+@@ -31,9 +31,10 @@
+  * @delta is the difference between the current rtt sample and the
+  * running average srtt. We add 1/8 of that to the srtt in order to
+  * update the current srtt estimate. The variance estimate is a bit
+- * more complicated. We subtract the abs value of the @delta from
+- * the current variance estimate and add 1/4 of that to the running
+- * total.
++ * more complicated. We subtract the current variance estimate from
++ * the abs value of the @delta and add 1/4 of that to the running
++ * total.  That's equivalent to 3/4 of the current variance
++ * estimate plus 1/4 of the abs of @delta.
+  *
+  * Note that the index points at the array entry containing the smoothed
+  * mean value, and the variance is always in the following entry
+@@ -49,7 +50,7 @@ static inline void gfs2_update_stats(struct gfs2_lkstats *s, unsigned index,
+ 	s64 delta = sample - s->stats[index];
+ 	s->stats[index] += (delta >> 3);
+ 	index++;
+-	s->stats[index] += ((abs(delta) - s->stats[index]) >> 2);
++	s->stats[index] += (s64)(abs(delta) - s->stats[index]) >> 2;
+ }
+ 
+ /**
+diff --git a/fs/gfs2/log.c b/fs/gfs2/log.c
+index b8830fda51e8..0e04f87a7ddd 100644
+--- a/fs/gfs2/log.c
++++ b/fs/gfs2/log.c
+@@ -606,7 +606,8 @@ void gfs2_add_revoke(struct gfs2_sbd *sdp, struct gfs2_bufdata *bd)
+ 	gfs2_remove_from_ail(bd); /* drops ref on bh */
+ 	bd->bd_bh = NULL;
+ 	sdp->sd_log_num_revoke++;
+-	atomic_inc(&gl->gl_revokes);
++	if (atomic_inc_return(&gl->gl_revokes) == 1)
++		gfs2_glock_hold(gl);
+ 	set_bit(GLF_LFLUSH, &gl->gl_flags);
+ 	list_add(&bd->bd_list, &sdp->sd_log_le_revoke);
+ }
+diff --git a/fs/gfs2/lops.c b/fs/gfs2/lops.c
+index 2295042bc625..f09cd5d8ac63 100644
+--- a/fs/gfs2/lops.c
++++ b/fs/gfs2/lops.c
+@@ -667,8 +667,10 @@ static void revoke_lo_after_commit(struct gfs2_sbd *sdp, struct gfs2_trans *tr)
+ 		bd = list_entry(head->next, struct gfs2_bufdata, bd_list);
+ 		list_del_init(&bd->bd_list);
+ 		gl = bd->bd_gl;
+-		atomic_dec(&gl->gl_revokes);
+-		clear_bit(GLF_LFLUSH, &gl->gl_flags);
++		if (atomic_dec_return(&gl->gl_revokes) == 0) {
++			clear_bit(GLF_LFLUSH, &gl->gl_flags);
++			gfs2_glock_queue_put(gl);
++		}
+ 		kmem_cache_free(gfs2_bufdata_cachep, bd);
+ 	}
+ }
+diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
+index ca71163ff7cf..360206704a14 100644
+--- a/fs/gfs2/super.c
++++ b/fs/gfs2/super.c
+@@ -973,8 +973,7 @@ void gfs2_freeze_func(struct work_struct *work)
+ 	if (error) {
+ 		printk(KERN_INFO "GFS2: couldn't get freeze lock : %d\n", error);
+ 		gfs2_assert_withdraw(sdp, 0);
+-	}
+-	else {
++	} else {
+ 		atomic_set(&sdp->sd_freeze_state, SFS_UNFROZEN);
+ 		error = thaw_super(sb);
+ 		if (error) {
+@@ -987,6 +986,8 @@ void gfs2_freeze_func(struct work_struct *work)
+ 		gfs2_glock_dq_uninit(&freeze_gh);
+ 	}
+ 	deactivate_super(sb);
++	clear_bit_unlock(SDF_FS_FROZEN, &sdp->sd_flags);
++	wake_up_bit(&sdp->sd_flags, SDF_FS_FROZEN);
+ 	return;
+ }
+ 
+@@ -1029,6 +1030,7 @@ static int gfs2_freeze(struct super_block *sb)
+ 		msleep(1000);
+ 	}
+ 	error = 0;
++	set_bit(SDF_FS_FROZEN, &sdp->sd_flags);
+ out:
+ 	mutex_unlock(&sdp->sd_freeze_mutex);
+ 	return error;
+@@ -1053,7 +1055,7 @@ static int gfs2_unfreeze(struct super_block *sb)
+ 
+ 	gfs2_glock_dq_uninit(&sdp->sd_freeze_gh);
+ 	mutex_unlock(&sdp->sd_freeze_mutex);
+-	return 0;
++	return wait_on_bit(&sdp->sd_flags, SDF_FS_FROZEN, TASK_INTERRUPTIBLE);
+ }
+ 
+ /**
+diff --git a/fs/internal.h b/fs/internal.h
+index d410186bc369..d109665b9e50 100644
+--- a/fs/internal.h
++++ b/fs/internal.h
+@@ -80,9 +80,7 @@ extern int sb_prepare_remount_readonly(struct super_block *);
+ 
+ extern void __init mnt_init(void);
+ 
+-extern int __mnt_want_write(struct vfsmount *);
+ extern int __mnt_want_write_file(struct file *);
+-extern void __mnt_drop_write(struct vfsmount *);
+ extern void __mnt_drop_write_file(struct file *);
+ 
+ /*
+diff --git a/fs/nfs/client.c b/fs/nfs/client.c
+index 90d71fda65ce..dfb796eab912 100644
+--- a/fs/nfs/client.c
++++ b/fs/nfs/client.c
+@@ -284,6 +284,7 @@ static struct nfs_client *nfs_match_client(const struct nfs_client_initdata *dat
+ 	struct nfs_client *clp;
+ 	const struct sockaddr *sap = data->addr;
+ 	struct nfs_net *nn = net_generic(data->net, nfs_net_id);
++	int error;
+ 
+ again:
+ 	list_for_each_entry(clp, &nn->nfs_client_list, cl_share_link) {
+@@ -296,9 +297,11 @@ again:
+ 		if (clp->cl_cons_state > NFS_CS_READY) {
+ 			refcount_inc(&clp->cl_count);
+ 			spin_unlock(&nn->nfs_client_lock);
+-			nfs_wait_client_init_complete(clp);
++			error = nfs_wait_client_init_complete(clp);
+ 			nfs_put_client(clp);
+ 			spin_lock(&nn->nfs_client_lock);
++			if (error < 0)
++				return ERR_PTR(error);
+ 			goto again;
+ 		}
+ 
+@@ -407,6 +410,8 @@ struct nfs_client *nfs_get_client(const struct nfs_client_initdata *cl_init)
+ 		clp = nfs_match_client(cl_init);
+ 		if (clp) {
+ 			spin_unlock(&nn->nfs_client_lock);
++			if (IS_ERR(clp))
++				return clp;
+ 			if (new)
+ 				new->rpc_ops->free_client(new);
+ 			return nfs_found_client(cl_init, clp);
+diff --git a/fs/nfs/nfs4file.c b/fs/nfs/nfs4file.c
+index 00d17198ee12..f10b660805fc 100644
+--- a/fs/nfs/nfs4file.c
++++ b/fs/nfs/nfs4file.c
+@@ -187,7 +187,7 @@ static loff_t nfs42_remap_file_range(struct file *src_file, loff_t src_off,
+ 	bool same_inode = false;
+ 	int ret;
+ 
+-	if (remap_flags & ~REMAP_FILE_ADVISORY)
++	if (remap_flags & ~(REMAP_FILE_DEDUP | REMAP_FILE_ADVISORY))
+ 		return -EINVAL;
+ 
+ 	/* check alignment w.r.t. clone_blksize */
+diff --git a/fs/overlayfs/dir.c b/fs/overlayfs/dir.c
+index 82c129bfe58d..93872bb50230 100644
+--- a/fs/overlayfs/dir.c
++++ b/fs/overlayfs/dir.c
+@@ -260,7 +260,7 @@ static int ovl_instantiate(struct dentry *dentry, struct inode *inode,
+ 		 * hashed directory inode aliases.
+ 		 */
+ 		inode = ovl_get_inode(dentry->d_sb, &oip);
+-		if (WARN_ON(IS_ERR(inode)))
++		if (IS_ERR(inode))
+ 			return PTR_ERR(inode);
+ 	} else {
+ 		WARN_ON(ovl_inode_real(inode) != d_inode(newdentry));
+diff --git a/fs/overlayfs/inode.c b/fs/overlayfs/inode.c
+index 3b7ed5d2279c..b48273e846ad 100644
+--- a/fs/overlayfs/inode.c
++++ b/fs/overlayfs/inode.c
+@@ -832,7 +832,7 @@ struct inode *ovl_get_inode(struct super_block *sb,
+ 	int fsid = bylower ? oip->lowerpath->layer->fsid : 0;
+ 	bool is_dir, metacopy = false;
+ 	unsigned long ino = 0;
+-	int err = -ENOMEM;
++	int err = oip->newinode ? -EEXIST : -ENOMEM;
+ 
+ 	if (!realinode)
+ 		realinode = d_inode(lowerdentry);
+@@ -917,6 +917,7 @@ out:
+ 	return inode;
+ 
+ out_err:
++	pr_warn_ratelimited("overlayfs: failed to get inode (%i)\n", err);
+ 	inode = ERR_PTR(err);
+ 	goto out;
+ }
+diff --git a/include/crypto/hash.h b/include/crypto/hash.h
+index 3b31c1b349ae..bc143b410359 100644
+--- a/include/crypto/hash.h
++++ b/include/crypto/hash.h
+@@ -152,7 +152,13 @@ struct shash_desc {
+ };
+ 
+ #define HASH_MAX_DIGESTSIZE	 64
+-#define HASH_MAX_DESCSIZE	360
++
++/*
++ * Worst case is hmac(sha3-224-generic).  Its context is a nested 'shash_desc'
++ * containing a 'struct sha3_state'.
++ */
++#define HASH_MAX_DESCSIZE	(sizeof(struct shash_desc) + 360)
++
+ #define HASH_MAX_STATESIZE	512
+ 
+ #define SHASH_DESC_ON_STACK(shash, ctx)				  \
+diff --git a/include/drm/tinydrm/mipi-dbi.h b/include/drm/tinydrm/mipi-dbi.h
+index b8ba58861986..bcc98bd447f7 100644
+--- a/include/drm/tinydrm/mipi-dbi.h
++++ b/include/drm/tinydrm/mipi-dbi.h
+@@ -42,7 +42,7 @@ struct mipi_dbi {
+ 	struct spi_device *spi;
+ 	bool enabled;
+ 	struct mutex cmdlock;
+-	int (*command)(struct mipi_dbi *mipi, u8 cmd, u8 *param, size_t num);
++	int (*command)(struct mipi_dbi *mipi, u8 *cmd, u8 *param, size_t num);
+ 	const u8 *read_commands;
+ 	struct gpio_desc *dc;
+ 	u16 *tx_buf;
+@@ -79,6 +79,7 @@ u32 mipi_dbi_spi_cmd_max_speed(struct spi_device *spi, size_t len);
+ 
+ int mipi_dbi_command_read(struct mipi_dbi *mipi, u8 cmd, u8 *val);
+ int mipi_dbi_command_buf(struct mipi_dbi *mipi, u8 cmd, u8 *data, size_t len);
++int mipi_dbi_command_stackbuf(struct mipi_dbi *mipi, u8 cmd, u8 *data, size_t len);
+ int mipi_dbi_buf_copy(void *dst, struct drm_framebuffer *fb,
+ 		      struct drm_clip_rect *clip, bool swap);
+ /**
+@@ -96,7 +97,7 @@ int mipi_dbi_buf_copy(void *dst, struct drm_framebuffer *fb,
+ #define mipi_dbi_command(mipi, cmd, seq...) \
+ ({ \
+ 	u8 d[] = { seq }; \
+-	mipi_dbi_command_buf(mipi, cmd, d, ARRAY_SIZE(d)); \
++	mipi_dbi_command_stackbuf(mipi, cmd, d, ARRAY_SIZE(d)); \
+ })
+ 
+ #ifdef CONFIG_DEBUG_FS
+diff --git a/include/linux/bio.h b/include/linux/bio.h
+index 7380b094dcca..4f7618cf9f38 100644
+--- a/include/linux/bio.h
++++ b/include/linux/bio.h
+@@ -211,7 +211,7 @@ static inline void bio_cnt_set(struct bio *bio, unsigned int count)
+ {
+ 	if (count != 1) {
+ 		bio->bi_flags |= (1 << BIO_REFFED);
+-		smp_mb__before_atomic();
++		smp_mb();
+ 	}
+ 	atomic_set(&bio->__bi_cnt, count);
+ }
+diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
+index 120d1d40704b..319c07305500 100644
+--- a/include/linux/cgroup-defs.h
++++ b/include/linux/cgroup-defs.h
+@@ -348,6 +348,11 @@ struct cgroup {
+ 	 * Dying cgroups are cgroups which were deleted by a user,
+ 	 * but are still existing because someone else is holding a reference.
+ 	 * max_descendants is a maximum allowed number of descent cgroups.
++	 *
++	 * nr_descendants and nr_dying_descendants are protected
++	 * by cgroup_mutex and css_set_lock. It's fine to read them holding
++	 * any of cgroup_mutex and css_set_lock; for writing both locks
++	 * should be held.
+ 	 */
+ 	int nr_descendants;
+ 	int nr_dying_descendants;
+diff --git a/include/linux/filter.h b/include/linux/filter.h
+index 3358646a8e7a..42513fa6846c 100644
+--- a/include/linux/filter.h
++++ b/include/linux/filter.h
+@@ -709,6 +709,7 @@ static inline void bpf_prog_unlock_ro(struct bpf_prog *fp)
+ static inline void bpf_jit_binary_lock_ro(struct bpf_binary_header *hdr)
+ {
+ 	set_memory_ro((unsigned long)hdr, hdr->pages);
++	set_memory_x((unsigned long)hdr, hdr->pages);
+ }
+ 
+ static inline void bpf_jit_binary_unlock_ro(struct bpf_binary_header *hdr)
+diff --git a/include/linux/genhd.h b/include/linux/genhd.h
+index 06c0fd594097..69db1affedb0 100644
+--- a/include/linux/genhd.h
++++ b/include/linux/genhd.h
+@@ -610,6 +610,7 @@ struct unixware_disklabel {
+ 
+ extern int blk_alloc_devt(struct hd_struct *part, dev_t *devt);
+ extern void blk_free_devt(dev_t devt);
++extern void blk_invalidate_devt(dev_t devt);
+ extern dev_t blk_lookup_devt(const char *name, int partno);
+ extern char *disk_name (struct gendisk *hd, int partno, char *buf);
+ 
+diff --git a/include/linux/hid.h b/include/linux/hid.h
+index f9707d1dcb58..ac0c70b4ce10 100644
+--- a/include/linux/hid.h
++++ b/include/linux/hid.h
+@@ -417,6 +417,7 @@ struct hid_global {
+ 
+ struct hid_local {
+ 	unsigned usage[HID_MAX_USAGES]; /* usage array */
++	u8 usage_size[HID_MAX_USAGES]; /* usage size array */
+ 	unsigned collection_index[HID_MAX_USAGES]; /* collection index array */
+ 	unsigned usage_index;
+ 	unsigned usage_minimum;
+diff --git a/include/linux/iio/adc/ad_sigma_delta.h b/include/linux/iio/adc/ad_sigma_delta.h
+index 7e84351fa2c0..6e9fb1932dde 100644
+--- a/include/linux/iio/adc/ad_sigma_delta.h
++++ b/include/linux/iio/adc/ad_sigma_delta.h
+@@ -69,6 +69,7 @@ struct ad_sigma_delta {
+ 	bool			irq_dis;
+ 
+ 	bool			bus_locked;
++	bool			keep_cs_asserted;
+ 
+ 	uint8_t			comm;
+ 
+diff --git a/include/linux/mount.h b/include/linux/mount.h
+index 037eed52164b..03138a0f46f9 100644
+--- a/include/linux/mount.h
++++ b/include/linux/mount.h
+@@ -86,6 +86,8 @@ extern bool mnt_may_suid(struct vfsmount *mnt);
+ 
+ struct path;
+ extern struct vfsmount *clone_private_mount(const struct path *path);
++extern int __mnt_want_write(struct vfsmount *);
++extern void __mnt_drop_write(struct vfsmount *);
+ 
+ struct file_system_type;
+ extern struct vfsmount *vfs_kern_mount(struct file_system_type *type,
+diff --git a/include/linux/overflow.h b/include/linux/overflow.h
+index 40b48e2133cb..15eb85de9226 100644
+--- a/include/linux/overflow.h
++++ b/include/linux/overflow.h
+@@ -36,6 +36,12 @@
+ #define type_max(T) ((T)((__type_half_max(T) - 1) + __type_half_max(T)))
+ #define type_min(T) ((T)((T)-type_max(T)-(T)1))
+ 
++/*
++ * Avoids triggering -Wtype-limits compilation warning,
++ * while using unsigned data types to check a < 0.
++ */
++#define is_non_negative(a) ((a) > 0 || (a) == 0)
++#define is_negative(a) (!(is_non_negative(a)))
+ 
+ #ifdef COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW
+ /*
+@@ -227,10 +233,10 @@
+ 	typeof(d) _d = d;						\
+ 	u64 _a_full = _a;						\
+ 	unsigned int _to_shift =					\
+-		_s >= 0 && _s < 8 * sizeof(*d) ? _s : 0;		\
++		is_non_negative(_s) && _s < 8 * sizeof(*d) ? _s : 0;	\
+ 	*_d = (_a_full << _to_shift);					\
+-	(_to_shift != _s || *_d < 0 || _a < 0 ||			\
+-		(*_d >> _to_shift) != _a);				\
++	(_to_shift != _s || is_negative(*_d) || is_negative(_a) ||	\
++	(*_d >> _to_shift) != _a);					\
+ })
+ 
+ /**
+diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
+index 4db8bcacc51a..991d97cf395a 100644
+--- a/include/linux/rcupdate.h
++++ b/include/linux/rcupdate.h
+@@ -890,9 +890,11 @@ static inline void rcu_head_init(struct rcu_head *rhp)
+ static inline bool
+ rcu_head_after_call_rcu(struct rcu_head *rhp, rcu_callback_t f)
+ {
+-	if (READ_ONCE(rhp->func) == f)
++	rcu_callback_t func = READ_ONCE(rhp->func);
++
++	if (func == f)
+ 		return true;
+-	WARN_ON_ONCE(READ_ONCE(rhp->func) != (rcu_callback_t)~0L);
++	WARN_ON_ONCE(func != (rcu_callback_t)~0L);
+ 	return false;
+ }
+ 
+diff --git a/include/linux/smpboot.h b/include/linux/smpboot.h
+index d0884b525001..9d1bc65d226c 100644
+--- a/include/linux/smpboot.h
++++ b/include/linux/smpboot.h
+@@ -29,7 +29,7 @@ struct smpboot_thread_data;
+  * @thread_comm:	The base name of the thread
+  */
+ struct smp_hotplug_thread {
+-	struct task_struct __percpu	**store;
++	struct task_struct		* __percpu *store;
+ 	struct list_head		list;
+ 	int				(*thread_should_run)(unsigned int cpu);
+ 	void				(*thread_fn)(unsigned int cpu);
+diff --git a/include/linux/time64.h b/include/linux/time64.h
+index 05634afba0db..4a45aea0f96e 100644
+--- a/include/linux/time64.h
++++ b/include/linux/time64.h
+@@ -41,6 +41,17 @@ struct itimerspec64 {
+ #define KTIME_MAX			((s64)~((u64)1 << 63))
+ #define KTIME_SEC_MAX			(KTIME_MAX / NSEC_PER_SEC)
+ 
++/*
++ * Limits for settimeofday():
++ *
++ * To prevent setting the time close to the wraparound point time setting
++ * is limited so a reasonable uptime can be accommodated. Uptime of 30 years
++ * should be really sufficient, which means the cutoff is 2232. At that
++ * point the cutoff is just a small part of the larger problem.
++ */
++#define TIME_UPTIME_SEC_MAX		(30LL * 365 * 24 * 3600)
++#define TIME_SETTOD_SEC_MAX		(KTIME_SEC_MAX - TIME_UPTIME_SEC_MAX)
++
+ static inline int timespec64_equal(const struct timespec64 *a,
+ 				   const struct timespec64 *b)
+ {
+@@ -108,6 +119,16 @@ static inline bool timespec64_valid_strict(const struct timespec64 *ts)
+ 	return true;
+ }
+ 
++static inline bool timespec64_valid_settod(const struct timespec64 *ts)
++{
++	if (!timespec64_valid(ts))
++		return false;
++	/* Disallow values which cause overflow issues vs. CLOCK_REALTIME */
++	if ((unsigned long long)ts->tv_sec >= TIME_SETTOD_SEC_MAX)
++		return false;
++	return true;
++}
++
+ /**
+  * timespec64_to_ns - Convert timespec64 to nanoseconds
+  * @ts:		pointer to the timespec64 variable to be converted
+diff --git a/include/media/videobuf2-core.h b/include/media/videobuf2-core.h
+index 4a737b2c610b..bc3bc82778da 100644
+--- a/include/media/videobuf2-core.h
++++ b/include/media/videobuf2-core.h
+@@ -586,6 +586,7 @@ struct vb2_queue {
+ 	unsigned int			start_streaming_called:1;
+ 	unsigned int			error:1;
+ 	unsigned int			waiting_for_buffers:1;
++	unsigned int			waiting_in_dqbuf:1;
+ 	unsigned int			is_multiplanar:1;
+ 	unsigned int			is_output:1;
+ 	unsigned int			copy_timestamp:1;
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index c36dc1e20556..60b7cbc0a6cb 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -270,6 +270,7 @@ enum {
+ 	HCI_FORCE_BREDR_SMP,
+ 	HCI_FORCE_STATIC_ADDR,
+ 	HCI_LL_RPA_RESOLUTION,
++	HCI_CMD_PENDING,
+ 
+ 	__HCI_NUM_FLAGS,
+ };
+diff --git a/include/xen/xen.h b/include/xen/xen.h
+index 0e2156786ad2..e1ba6921bc8e 100644
+--- a/include/xen/xen.h
++++ b/include/xen/xen.h
+@@ -43,7 +43,9 @@ extern struct hvm_start_info pvh_start_info;
+ #endif	/* CONFIG_XEN_DOM0 */
+ 
+ struct bio_vec;
++struct page;
++
+ bool xen_biovec_phys_mergeable(const struct bio_vec *vec1,
+-		const struct bio_vec *vec2);
++		const struct page *page);
+ 
+ #endif	/* _XEN_XEN_H */
+diff --git a/kernel/acct.c b/kernel/acct.c
+index addf7732fb56..81f9831a7859 100644
+--- a/kernel/acct.c
++++ b/kernel/acct.c
+@@ -227,7 +227,7 @@ static int acct_on(struct filename *pathname)
+ 		filp_close(file, NULL);
+ 		return PTR_ERR(internal);
+ 	}
+-	err = mnt_want_write(internal);
++	err = __mnt_want_write(internal);
+ 	if (err) {
+ 		mntput(internal);
+ 		kfree(acct);
+@@ -252,7 +252,7 @@ static int acct_on(struct filename *pathname)
+ 	old = xchg(&ns->bacct, &acct->pin);
+ 	mutex_unlock(&acct->lock);
+ 	pin_kill(old);
+-	mnt_drop_write(mnt);
++	__mnt_drop_write(mnt);
+ 	mntput(mnt);
+ 	return 0;
+ }
+diff --git a/kernel/auditfilter.c b/kernel/auditfilter.c
+index bf309f2592c4..425c67e4f568 100644
+--- a/kernel/auditfilter.c
++++ b/kernel/auditfilter.c
+@@ -1114,22 +1114,24 @@ int audit_rule_change(int type, int seq, void *data, size_t datasz)
+ 	int err = 0;
+ 	struct audit_entry *entry;
+ 
+-	entry = audit_data_to_entry(data, datasz);
+-	if (IS_ERR(entry))
+-		return PTR_ERR(entry);
+-
+ 	switch (type) {
+ 	case AUDIT_ADD_RULE:
++		entry = audit_data_to_entry(data, datasz);
++		if (IS_ERR(entry))
++			return PTR_ERR(entry);
+ 		err = audit_add_rule(entry);
+ 		audit_log_rule_change("add_rule", &entry->rule, !err);
+ 		break;
+ 	case AUDIT_DEL_RULE:
++		entry = audit_data_to_entry(data, datasz);
++		if (IS_ERR(entry))
++			return PTR_ERR(entry);
+ 		err = audit_del_rule(entry);
+ 		audit_log_rule_change("remove_rule", &entry->rule, !err);
+ 		break;
+ 	default:
+-		err = -EINVAL;
+ 		WARN_ON(1);
++		return -EINVAL;
+ 	}
+ 
+ 	if (err || type == AUDIT_DEL_RULE) {
+diff --git a/kernel/auditsc.c b/kernel/auditsc.c
+index b585ceb2f7a2..71e737774611 100644
+--- a/kernel/auditsc.c
++++ b/kernel/auditsc.c
+@@ -837,6 +837,13 @@ static inline void audit_proctitle_free(struct audit_context *context)
+ 	context->proctitle.len = 0;
+ }
+ 
++static inline void audit_free_module(struct audit_context *context)
++{
++	if (context->type == AUDIT_KERN_MODULE) {
++		kfree(context->module.name);
++		context->module.name = NULL;
++	}
++}
+ static inline void audit_free_names(struct audit_context *context)
+ {
+ 	struct audit_names *n, *next;
+@@ -920,6 +927,7 @@ int audit_alloc(struct task_struct *tsk)
+ 
+ static inline void audit_free_context(struct audit_context *context)
+ {
++	audit_free_module(context);
+ 	audit_free_names(context);
+ 	unroll_tree_refs(context, NULL, 0);
+ 	free_tree_refs(context);
+@@ -1237,7 +1245,6 @@ static void show_special(struct audit_context *context, int *call_panic)
+ 		audit_log_format(ab, "name=");
+ 		if (context->module.name) {
+ 			audit_log_untrustedstring(ab, context->module.name);
+-			kfree(context->module.name);
+ 		} else
+ 			audit_log_format(ab, "(null)");
+ 
+@@ -1574,6 +1581,7 @@ void __audit_syscall_exit(int success, long return_code)
+ 	context->in_syscall = 0;
+ 	context->prio = context->state == AUDIT_RECORD_CONTEXT ? ~0ULL : 0;
+ 
++	audit_free_module(context);
+ 	audit_free_names(context);
+ 	unroll_tree_refs(context, NULL, 0);
+ 	audit_free_aux(context);
+diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
+index 191b79948424..1e525d70f833 100644
+--- a/kernel/bpf/devmap.c
++++ b/kernel/bpf/devmap.c
+@@ -164,6 +164,9 @@ static void dev_map_free(struct bpf_map *map)
+ 	bpf_clear_redirect_map(map);
+ 	synchronize_rcu();
+ 
++	/* Make sure prior __dev_map_entry_free() have completed. */
++	rcu_barrier();
++
+ 	/* To ensure all pending flush operations have completed wait for flush
+ 	 * bitmap to indicate all flush_needed bits to be zero on _all_ cpus.
+ 	 * Because the above synchronize_rcu() ensures the map is disconnected
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index f84bf28f36ba..ee77b0f00edd 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -4728,9 +4728,11 @@ static void css_release_work_fn(struct work_struct *work)
+ 		if (cgroup_on_dfl(cgrp))
+ 			cgroup_rstat_flush(cgrp);
+ 
++		spin_lock_irq(&css_set_lock);
+ 		for (tcgrp = cgroup_parent(cgrp); tcgrp;
+ 		     tcgrp = cgroup_parent(tcgrp))
+ 			tcgrp->nr_dying_descendants--;
++		spin_unlock_irq(&css_set_lock);
+ 
+ 		cgroup_idr_remove(&cgrp->root->cgroup_idr, cgrp->id);
+ 		cgrp->id = -1;
+@@ -4948,12 +4950,14 @@ static struct cgroup *cgroup_create(struct cgroup *parent)
+ 	if (ret)
+ 		goto out_psi_free;
+ 
++	spin_lock_irq(&css_set_lock);
+ 	for (tcgrp = cgrp; tcgrp; tcgrp = cgroup_parent(tcgrp)) {
+ 		cgrp->ancestor_ids[tcgrp->level] = tcgrp->id;
+ 
+ 		if (tcgrp != cgrp)
+ 			tcgrp->nr_descendants++;
+ 	}
++	spin_unlock_irq(&css_set_lock);
+ 
+ 	if (notify_on_release(parent))
+ 		set_bit(CGRP_NOTIFY_ON_RELEASE, &cgrp->flags);
+@@ -5238,10 +5242,12 @@ static int cgroup_destroy_locked(struct cgroup *cgrp)
+ 	if (parent && cgroup_is_threaded(cgrp))
+ 		parent->nr_threaded_children--;
+ 
++	spin_lock_irq(&css_set_lock);
+ 	for (tcgrp = cgroup_parent(cgrp); tcgrp; tcgrp = cgroup_parent(tcgrp)) {
+ 		tcgrp->nr_descendants--;
+ 		tcgrp->nr_dying_descendants++;
+ 	}
++	spin_unlock_irq(&css_set_lock);
+ 
+ 	cgroup1_check_for_release(parent);
+ 
+diff --git a/kernel/irq_work.c b/kernel/irq_work.c
+index 6b7cdf17ccf8..73288914ed5e 100644
+--- a/kernel/irq_work.c
++++ b/kernel/irq_work.c
+@@ -56,61 +56,70 @@ void __weak arch_irq_work_raise(void)
+ 	 */
+ }
+ 
+-/*
+- * Enqueue the irq_work @work on @cpu unless it's already pending
+- * somewhere.
+- *
+- * Can be re-enqueued while the callback is still in progress.
+- */
+-bool irq_work_queue_on(struct irq_work *work, int cpu)
++/* Enqueue on current CPU, work must already be claimed and preempt disabled */
++static void __irq_work_queue_local(struct irq_work *work)
+ {
+-	/* All work should have been flushed before going offline */
+-	WARN_ON_ONCE(cpu_is_offline(cpu));
+-
+-#ifdef CONFIG_SMP
+-
+-	/* Arch remote IPI send/receive backend aren't NMI safe */
+-	WARN_ON_ONCE(in_nmi());
++	/* If the work is "lazy", handle it from next tick if any */
++	if (work->flags & IRQ_WORK_LAZY) {
++		if (llist_add(&work->llnode, this_cpu_ptr(&lazy_list)) &&
++		    tick_nohz_tick_stopped())
++			arch_irq_work_raise();
++	} else {
++		if (llist_add(&work->llnode, this_cpu_ptr(&raised_list)))
++			arch_irq_work_raise();
++	}
++}
+ 
++/* Enqueue the irq work @work on the current CPU */
++bool irq_work_queue(struct irq_work *work)
++{
+ 	/* Only queue if not already pending */
+ 	if (!irq_work_claim(work))
+ 		return false;
+ 
+-	if (llist_add(&work->llnode, &per_cpu(raised_list, cpu)))
+-		arch_send_call_function_single_ipi(cpu);
+-
+-#else /* #ifdef CONFIG_SMP */
+-	irq_work_queue(work);
+-#endif /* #else #ifdef CONFIG_SMP */
++	/* Queue the entry and raise the IPI if needed. */
++	preempt_disable();
++	__irq_work_queue_local(work);
++	preempt_enable();
+ 
+ 	return true;
+ }
++EXPORT_SYMBOL_GPL(irq_work_queue);
+ 
+-/* Enqueue the irq work @work on the current CPU */
+-bool irq_work_queue(struct irq_work *work)
++/*
++ * Enqueue the irq_work @work on @cpu unless it's already pending
++ * somewhere.
++ *
++ * Can be re-enqueued while the callback is still in progress.
++ */
++bool irq_work_queue_on(struct irq_work *work, int cpu)
+ {
++#ifndef CONFIG_SMP
++	return irq_work_queue(work);
++
++#else /* CONFIG_SMP: */
++	/* All work should have been flushed before going offline */
++	WARN_ON_ONCE(cpu_is_offline(cpu));
++
+ 	/* Only queue if not already pending */
+ 	if (!irq_work_claim(work))
+ 		return false;
+ 
+-	/* Queue the entry and raise the IPI if needed. */
+ 	preempt_disable();
+-
+-	/* If the work is "lazy", handle it from next tick if any */
+-	if (work->flags & IRQ_WORK_LAZY) {
+-		if (llist_add(&work->llnode, this_cpu_ptr(&lazy_list)) &&
+-		    tick_nohz_tick_stopped())
+-			arch_irq_work_raise();
++	if (cpu != smp_processor_id()) {
++		/* Arch remote IPI send/receive backends aren't NMI safe */
++		WARN_ON_ONCE(in_nmi());
++		if (llist_add(&work->llnode, &per_cpu(raised_list, cpu)))
++			arch_send_call_function_single_ipi(cpu);
+ 	} else {
+-		if (llist_add(&work->llnode, this_cpu_ptr(&raised_list)))
+-			arch_irq_work_raise();
++		__irq_work_queue_local(work);
+ 	}
+-
+ 	preempt_enable();
+ 
+ 	return true;
++#endif /* CONFIG_SMP */
+ }
+-EXPORT_SYMBOL_GPL(irq_work_queue);
++
+ 
+ bool irq_work_needs_cpu(void)
+ {
+diff --git a/kernel/jump_label.c b/kernel/jump_label.c
+index bad96b476eb6..a799b1ac6b2f 100644
+--- a/kernel/jump_label.c
++++ b/kernel/jump_label.c
+@@ -206,6 +206,8 @@ static void __static_key_slow_dec_cpuslocked(struct static_key *key,
+ 					   unsigned long rate_limit,
+ 					   struct delayed_work *work)
+ {
++	int val;
++
+ 	lockdep_assert_cpus_held();
+ 
+ 	/*
+@@ -215,17 +217,20 @@ static void __static_key_slow_dec_cpuslocked(struct static_key *key,
+ 	 * returns is unbalanced, because all other static_key_slow_inc()
+ 	 * instances block while the update is in progress.
+ 	 */
+-	if (!atomic_dec_and_mutex_lock(&key->enabled, &jump_label_mutex)) {
+-		WARN(atomic_read(&key->enabled) < 0,
+-		     "jump label: negative count!\n");
++	val = atomic_fetch_add_unless(&key->enabled, -1, 1);
++	if (val != 1) {
++		WARN(val < 0, "jump label: negative count!\n");
+ 		return;
+ 	}
+ 
+-	if (rate_limit) {
+-		atomic_inc(&key->enabled);
+-		schedule_delayed_work(work, rate_limit);
+-	} else {
+-		jump_label_update(key);
++	jump_label_lock();
++	if (atomic_dec_and_test(&key->enabled)) {
++		if (rate_limit) {
++			atomic_inc(&key->enabled);
++			schedule_delayed_work(work, rate_limit);
++		} else {
++			jump_label_update(key);
++		}
+ 	}
+ 	jump_label_unlock();
+ }
+diff --git a/kernel/module.c b/kernel/module.c
+index 2ad1b5239910..ae1b77da6a20 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -1950,8 +1950,13 @@ void module_enable_ro(const struct module *mod, bool after_init)
+ 		return;
+ 
+ 	frob_text(&mod->core_layout, set_memory_ro);
++	frob_text(&mod->core_layout, set_memory_x);
++
+ 	frob_rodata(&mod->core_layout, set_memory_ro);
++
+ 	frob_text(&mod->init_layout, set_memory_ro);
++	frob_text(&mod->init_layout, set_memory_x);
++
+ 	frob_rodata(&mod->init_layout, set_memory_ro);
+ 
+ 	if (after_init)
+diff --git a/kernel/rcu/rcuperf.c b/kernel/rcu/rcuperf.c
+index b459da70b4fc..5444a7831451 100644
+--- a/kernel/rcu/rcuperf.c
++++ b/kernel/rcu/rcuperf.c
+@@ -501,6 +501,10 @@ rcu_perf_cleanup(void)
+ 
+ 	if (torture_cleanup_begin())
+ 		return;
++	if (!cur_ops) {
++		torture_cleanup_end();
++		return;
++	}
+ 
+ 	if (reader_tasks) {
+ 		for (i = 0; i < nrealreaders; i++)
+@@ -621,6 +625,7 @@ rcu_perf_init(void)
+ 		pr_cont("\n");
+ 		WARN_ON(!IS_MODULE(CONFIG_RCU_PERF_TEST));
+ 		firsterr = -EINVAL;
++		cur_ops = NULL;
+ 		goto unwind;
+ 	}
+ 	if (cur_ops->init)
+diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
+index f6e85faa4ff4..584b0d1da0a3 100644
+--- a/kernel/rcu/rcutorture.c
++++ b/kernel/rcu/rcutorture.c
+@@ -2092,6 +2092,10 @@ rcu_torture_cleanup(void)
+ 			cur_ops->cb_barrier();
+ 		return;
+ 	}
++	if (!cur_ops) {
++		torture_cleanup_end();
++		return;
++	}
+ 
+ 	rcu_torture_barrier_cleanup();
+ 	torture_stop_kthread(rcu_torture_fwd_prog, fwd_prog_task);
+@@ -2257,6 +2261,7 @@ rcu_torture_init(void)
+ 		pr_cont("\n");
+ 		WARN_ON(!IS_MODULE(CONFIG_RCU_TORTURE_TEST));
+ 		firsterr = -EINVAL;
++		cur_ops = NULL;
+ 		goto unwind;
+ 	}
+ 	if (cur_ops->fqs == NULL && fqs_duration != 0) {
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 62cc29364fba..9346e2ce0ac0 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -6503,6 +6503,8 @@ static void cpu_cgroup_attach(struct cgroup_taskset *tset)
+ static int cpu_shares_write_u64(struct cgroup_subsys_state *css,
+ 				struct cftype *cftype, u64 shareval)
+ {
++	if (shareval > scale_load_down(ULONG_MAX))
++		shareval = MAX_SHARES;
+ 	return sched_group_set_shares(css_tg(css), scale_load(shareval));
+ }
+ 
+@@ -6605,8 +6607,10 @@ int tg_set_cfs_quota(struct task_group *tg, long cfs_quota_us)
+ 	period = ktime_to_ns(tg->cfs_bandwidth.period);
+ 	if (cfs_quota_us < 0)
+ 		quota = RUNTIME_INF;
+-	else
++	else if ((u64)cfs_quota_us <= U64_MAX / NSEC_PER_USEC)
+ 		quota = (u64)cfs_quota_us * NSEC_PER_USEC;
++	else
++		return -EINVAL;
+ 
+ 	return tg_set_cfs_bandwidth(tg, period, quota);
+ }
+@@ -6628,6 +6632,9 @@ int tg_set_cfs_period(struct task_group *tg, long cfs_period_us)
+ {
+ 	u64 quota, period;
+ 
++	if ((u64)cfs_period_us > U64_MAX / NSEC_PER_USEC)
++		return -EINVAL;
++
+ 	period = (u64)cfs_period_us * NSEC_PER_USEC;
+ 	quota = tg->cfs_bandwidth.quota;
+ 
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index be55a64748ba..d905c443e10e 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -9456,22 +9456,26 @@ static inline int on_null_domain(struct rq *rq)
+  * - When one of the busy CPUs notice that there may be an idle rebalancing
+  *   needed, they will kick the idle load balancer, which then does idle
+  *   load balancing for all the idle CPUs.
++ * - HK_FLAG_MISC CPUs are used for this task, because HK_FLAG_SCHED is not
++ *   set anywhere yet.
+  */
+ 
+ static inline int find_new_ilb(void)
+ {
+-	int ilb = cpumask_first(nohz.idle_cpus_mask);
++	int ilb;
+ 
+-	if (ilb < nr_cpu_ids && idle_cpu(ilb))
+-		return ilb;
++	for_each_cpu_and(ilb, nohz.idle_cpus_mask,
++			      housekeeping_cpumask(HK_FLAG_MISC)) {
++		if (idle_cpu(ilb))
++			return ilb;
++	}
+ 
+ 	return nr_cpu_ids;
+ }
+ 
+ /*
+- * Kick a CPU to do the nohz balancing, if it is time for it. We pick the
+- * nohz_load_balancer CPU (if there is one) otherwise fallback to any idle
+- * CPU (if there is one).
++ * Kick a CPU to do the nohz balancing, if it is time for it. We pick any
++ * idle CPU in the HK_FLAG_MISC housekeeping set (if there is one).
+  */
+ static void kick_ilb(unsigned int flags)
+ {
+diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
+index e4f398ad9e73..aa7ee3a0bf90 100644
+--- a/kernel/sched/rt.c
++++ b/kernel/sched/rt.c
+@@ -2555,6 +2555,8 @@ int sched_group_set_rt_runtime(struct task_group *tg, long rt_runtime_us)
+ 	rt_runtime = (u64)rt_runtime_us * NSEC_PER_USEC;
+ 	if (rt_runtime_us < 0)
+ 		rt_runtime = RUNTIME_INF;
++	else if ((u64)rt_runtime_us > U64_MAX / NSEC_PER_USEC)
++		return -EINVAL;
+ 
+ 	return tg_set_rt_bandwidth(tg, rt_period, rt_runtime);
+ }
+@@ -2575,6 +2577,9 @@ int sched_group_set_rt_period(struct task_group *tg, u64 rt_period_us)
+ {
+ 	u64 rt_runtime, rt_period;
+ 
++	if (rt_period_us > U64_MAX / NSEC_PER_USEC)
++		return -EINVAL;
++
+ 	rt_period = rt_period_us * NSEC_PER_USEC;
+ 	rt_runtime = tg->rt_bandwidth.rt_runtime;
+ 
+diff --git a/kernel/time/time.c b/kernel/time/time.c
+index 2edb5088a70b..e7a3b9236f0b 100644
+--- a/kernel/time/time.c
++++ b/kernel/time/time.c
+@@ -171,7 +171,7 @@ int do_sys_settimeofday64(const struct timespec64 *tv, const struct timezone *tz
+ 	static int firsttime = 1;
+ 	int error = 0;
+ 
+-	if (tv && !timespec64_valid(tv))
++	if (tv && !timespec64_valid_settod(tv))
+ 		return -EINVAL;
+ 
+ 	error = security_settime64(tv, tz);
+diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
+index ac5dbf2cd4a2..161b2728ac09 100644
+--- a/kernel/time/timekeeping.c
++++ b/kernel/time/timekeeping.c
+@@ -1221,7 +1221,7 @@ int do_settimeofday64(const struct timespec64 *ts)
+ 	unsigned long flags;
+ 	int ret = 0;
+ 
+-	if (!timespec64_valid_strict(ts))
++	if (!timespec64_valid_settod(ts))
+ 		return -EINVAL;
+ 
+ 	raw_spin_lock_irqsave(&timekeeper_lock, flags);
+@@ -1278,7 +1278,7 @@ static int timekeeping_inject_offset(const struct timespec64 *ts)
+ 	/* Make sure the proposed value is valid */
+ 	tmp = timespec64_add(tk_xtime(tk), *ts);
+ 	if (timespec64_compare(&tk->wall_to_monotonic, ts) > 0 ||
+-	    !timespec64_valid_strict(&tmp)) {
++	    !timespec64_valid_settod(&tmp)) {
+ 		ret = -EINVAL;
+ 		goto error;
+ 	}
+@@ -1527,7 +1527,7 @@ void __init timekeeping_init(void)
+ 	unsigned long flags;
+ 
+ 	read_persistent_wall_and_boot_offset(&wall_time, &boot_offset);
+-	if (timespec64_valid_strict(&wall_time) &&
++	if (timespec64_valid_settod(&wall_time) &&
+ 	    timespec64_to_ns(&wall_time) > 0) {
+ 		persistent_clock_exists = true;
+ 	} else if (timespec64_to_ns(&wall_time) != 0) {
+diff --git a/kernel/trace/trace_branch.c b/kernel/trace/trace_branch.c
+index 4ad967453b6f..3ea65cdff30d 100644
+--- a/kernel/trace/trace_branch.c
++++ b/kernel/trace/trace_branch.c
+@@ -205,6 +205,8 @@ void trace_likely_condition(struct ftrace_likely_data *f, int val, int expect)
+ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
+ 			  int expect, int is_constant)
+ {
++	unsigned long flags = user_access_save();
++
+ 	/* A constant is always correct */
+ 	if (is_constant) {
+ 		f->constant++;
+@@ -223,6 +225,8 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
+ 		f->data.correct++;
+ 	else
+ 		f->data.incorrect++;
++
++	user_access_restore(flags);
+ }
+ EXPORT_SYMBOL(ftrace_likely_update);
+ 
+diff --git a/lib/kobject_uevent.c b/lib/kobject_uevent.c
+index 27c6118afd1c..bd26df36757f 100644
+--- a/lib/kobject_uevent.c
++++ b/lib/kobject_uevent.c
+@@ -466,6 +466,13 @@ int kobject_uevent_env(struct kobject *kobj, enum kobject_action action,
+ 	int i = 0;
+ 	int retval = 0;
+ 
++	/*
++	 * Mark "remove" event done regardless of result, because some subsystems
++	 * do not want to re-trigger "remove" event via automatic cleanup.
++	 */
++	if (action == KOBJ_REMOVE)
++		kobj->state_remove_uevent_sent = 1;
++
+ 	pr_debug("kobject: '%s' (%p): %s\n",
+ 		 kobject_name(kobj), kobj, __func__);
+ 
+@@ -567,10 +574,6 @@ int kobject_uevent_env(struct kobject *kobj, enum kobject_action action,
+ 		kobj->state_add_uevent_sent = 1;
+ 		break;
+ 
+-	case KOBJ_REMOVE:
+-		kobj->state_remove_uevent_sent = 1;
+-		break;
+-
+ 	case KOBJ_UNBIND:
+ 		zap_modalias_env(env);
+ 		break;
+diff --git a/lib/sbitmap.c b/lib/sbitmap.c
+index 155fe38756ec..4a7fc4915dfc 100644
+--- a/lib/sbitmap.c
++++ b/lib/sbitmap.c
+@@ -435,7 +435,7 @@ static void sbitmap_queue_update_wake_batch(struct sbitmap_queue *sbq,
+ 		 * to ensure that the batch size is updated before the wait
+ 		 * counts.
+ 		 */
+-		smp_mb__before_atomic();
++		smp_mb();
+ 		for (i = 0; i < SBQ_WAIT_QUEUES; i++)
+ 			atomic_set(&sbq->ws[i].wait_cnt, 1);
+ 	}
+diff --git a/lib/strncpy_from_user.c b/lib/strncpy_from_user.c
+index 58eacd41526c..023ba9f3b99f 100644
+--- a/lib/strncpy_from_user.c
++++ b/lib/strncpy_from_user.c
+@@ -23,10 +23,11 @@
+  * hit it), 'max' is the address space maximum (and we return
+  * -EFAULT if we hit it).
+  */
+-static inline long do_strncpy_from_user(char *dst, const char __user *src, long count, unsigned long max)
++static inline long do_strncpy_from_user(char *dst, const char __user *src,
++					unsigned long count, unsigned long max)
+ {
+ 	const struct word_at_a_time constants = WORD_AT_A_TIME_CONSTANTS;
+-	long res = 0;
++	unsigned long res = 0;
+ 
+ 	/*
+ 	 * Truncate 'max' to the user-specified limit, so that
+diff --git a/lib/strnlen_user.c b/lib/strnlen_user.c
+index 1c1a1b0e38a5..7f2db3fe311f 100644
+--- a/lib/strnlen_user.c
++++ b/lib/strnlen_user.c
+@@ -28,7 +28,7 @@
+ static inline long do_strnlen_user(const char __user *src, unsigned long count, unsigned long max)
+ {
+ 	const struct word_at_a_time constants = WORD_AT_A_TIME_CONSTANTS;
+-	long align, res = 0;
++	unsigned long align, res = 0;
+ 	unsigned long c;
+ 
+ 	/*
+@@ -42,7 +42,7 @@ static inline long do_strnlen_user(const char __user *src, unsigned long count,
+ 	 * Do everything aligned. But that means that we
+ 	 * need to also expand the maximum..
+ 	 */
+-	align = (sizeof(long) - 1) & (unsigned long)src;
++	align = (sizeof(unsigned long) - 1) & (unsigned long)src;
+ 	src -= align;
+ 	max += align;
+ 
+diff --git a/net/batman-adv/distributed-arp-table.c b/net/batman-adv/distributed-arp-table.c
+index b9ffe1826527..0b8d84ece7db 100644
+--- a/net/batman-adv/distributed-arp-table.c
++++ b/net/batman-adv/distributed-arp-table.c
+@@ -1398,7 +1398,6 @@ bool batadv_dat_snoop_incoming_arp_reply(struct batadv_priv *bat_priv,
+ 			   hw_src, &ip_src, hw_dst, &ip_dst,
+ 			   dat_entry->mac_addr,	&dat_entry->ip);
+ 		dropped = true;
+-		goto out;
+ 	}
+ 
+ 	/* Update our internal cache with both the IP addresses the node got
+@@ -1407,6 +1406,9 @@ bool batadv_dat_snoop_incoming_arp_reply(struct batadv_priv *bat_priv,
+ 	batadv_dat_entry_add(bat_priv, ip_src, hw_src, vid);
+ 	batadv_dat_entry_add(bat_priv, ip_dst, hw_dst, vid);
+ 
++	if (dropped)
++		goto out;
++
+ 	/* If BLA is enabled, only forward ARP replies if we have claimed the
+ 	 * source of the ARP reply or if no one else of the same backbone has
+ 	 * already claimed that client. This prevents that different gateways
+diff --git a/net/batman-adv/main.c b/net/batman-adv/main.c
+index d1ed839fd32b..64558df6a119 100644
+--- a/net/batman-adv/main.c
++++ b/net/batman-adv/main.c
+@@ -161,6 +161,7 @@ int batadv_mesh_init(struct net_device *soft_iface)
+ 	spin_lock_init(&bat_priv->tt.commit_lock);
+ 	spin_lock_init(&bat_priv->gw.list_lock);
+ #ifdef CONFIG_BATMAN_ADV_MCAST
++	spin_lock_init(&bat_priv->mcast.mla_lock);
+ 	spin_lock_init(&bat_priv->mcast.want_lists_lock);
+ #endif
+ 	spin_lock_init(&bat_priv->tvlv.container_list_lock);
+diff --git a/net/batman-adv/multicast.c b/net/batman-adv/multicast.c
+index 69244e4598f5..454b9067fbbd 100644
+--- a/net/batman-adv/multicast.c
++++ b/net/batman-adv/multicast.c
+@@ -325,8 +325,6 @@ static void batadv_mcast_mla_list_free(struct hlist_head *mcast_list)
+  * translation table except the ones listed in the given mcast_list.
+  *
+  * If mcast_list is NULL then all are retracted.
+- *
+- * Do not call outside of the mcast worker! (or cancel mcast worker first)
+  */
+ static void batadv_mcast_mla_tt_retract(struct batadv_priv *bat_priv,
+ 					struct hlist_head *mcast_list)
+@@ -334,8 +332,6 @@ static void batadv_mcast_mla_tt_retract(struct batadv_priv *bat_priv,
+ 	struct batadv_hw_addr *mcast_entry;
+ 	struct hlist_node *tmp;
+ 
+-	WARN_ON(delayed_work_pending(&bat_priv->mcast.work));
+-
+ 	hlist_for_each_entry_safe(mcast_entry, tmp, &bat_priv->mcast.mla_list,
+ 				  list) {
+ 		if (mcast_list &&
+@@ -359,8 +355,6 @@ static void batadv_mcast_mla_tt_retract(struct batadv_priv *bat_priv,
+  *
+  * Adds multicast listener announcements from the given mcast_list to the
+  * translation table if they have not been added yet.
+- *
+- * Do not call outside of the mcast worker! (or cancel mcast worker first)
+  */
+ static void batadv_mcast_mla_tt_add(struct batadv_priv *bat_priv,
+ 				    struct hlist_head *mcast_list)
+@@ -368,8 +362,6 @@ static void batadv_mcast_mla_tt_add(struct batadv_priv *bat_priv,
+ 	struct batadv_hw_addr *mcast_entry;
+ 	struct hlist_node *tmp;
+ 
+-	WARN_ON(delayed_work_pending(&bat_priv->mcast.work));
+-
+ 	if (!mcast_list)
+ 		return;
+ 
+@@ -658,7 +650,10 @@ static void batadv_mcast_mla_update(struct work_struct *work)
+ 	priv_mcast = container_of(delayed_work, struct batadv_priv_mcast, work);
+ 	bat_priv = container_of(priv_mcast, struct batadv_priv, mcast);
+ 
++	spin_lock(&bat_priv->mcast.mla_lock);
+ 	__batadv_mcast_mla_update(bat_priv);
++	spin_unlock(&bat_priv->mcast.mla_lock);
++
+ 	batadv_mcast_start_timer(bat_priv);
+ }
+ 
+diff --git a/net/batman-adv/types.h b/net/batman-adv/types.h
+index cbe17da36fcb..2b0ea1cbbc2f 100644
+--- a/net/batman-adv/types.h
++++ b/net/batman-adv/types.h
+@@ -1223,6 +1223,11 @@ struct batadv_priv_mcast {
+ 	/** @bridged: whether the soft interface has a bridge on top */
+ 	unsigned char bridged:1;
+ 
++	/**
++	 * @mla_lock: a lock protecting mla_list and mla_flags
++	 */
++	spinlock_t mla_lock;
++
+ 	/**
+ 	 * @num_want_all_unsnoopables: number of nodes wanting unsnoopable IP
+ 	 *  traffic
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 7352fe85674b..c25c664a2504 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -4337,6 +4337,9 @@ void hci_req_cmd_complete(struct hci_dev *hdev, u16 opcode, u8 status,
+ 		return;
+ 	}
+ 
++	/* If we reach this point this event matches the last command sent */
++	hci_dev_clear_flag(hdev, HCI_CMD_PENDING);
++
+ 	/* If the command succeeded and there's still more commands in
+ 	 * this request the request is not yet complete.
+ 	 */
+@@ -4447,6 +4450,8 @@ static void hci_cmd_work(struct work_struct *work)
+ 
+ 		hdev->sent_cmd = skb_clone(skb, GFP_KERNEL);
+ 		if (hdev->sent_cmd) {
++			if (hci_req_status_pend(hdev))
++				hci_dev_set_flag(hdev, HCI_CMD_PENDING);
+ 			atomic_dec(&hdev->cmd_cnt);
+ 			hci_send_frame(hdev, skb);
+ 			if (test_bit(HCI_RESET, &hdev->flags))
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index ac2826ce162b..ef5ae4c7e286 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -3404,6 +3404,12 @@ static void hci_cmd_complete_evt(struct hci_dev *hdev, struct sk_buff *skb,
+ 	hci_req_cmd_complete(hdev, *opcode, *status, req_complete,
+ 			     req_complete_skb);
+ 
++	if (hci_dev_test_flag(hdev, HCI_CMD_PENDING)) {
++		bt_dev_err(hdev,
++			   "unexpected event for opcode 0x%4.4x", *opcode);
++		return;
++	}
++
+ 	if (atomic_read(&hdev->cmd_cnt) && !skb_queue_empty(&hdev->cmd_q))
+ 		queue_work(hdev->workqueue, &hdev->cmd_work);
+ }
+@@ -3511,6 +3517,12 @@ static void hci_cmd_status_evt(struct hci_dev *hdev, struct sk_buff *skb,
+ 		hci_req_cmd_complete(hdev, *opcode, ev->status, req_complete,
+ 				     req_complete_skb);
+ 
++	if (hci_dev_test_flag(hdev, HCI_CMD_PENDING)) {
++		bt_dev_err(hdev,
++			   "unexpected event for opcode 0x%4.4x", *opcode);
++		return;
++	}
++
+ 	if (atomic_read(&hdev->cmd_cnt) && !skb_queue_empty(&hdev->cmd_q))
+ 		queue_work(hdev->workqueue, &hdev->cmd_work);
+ }
+diff --git a/net/bluetooth/hci_request.c b/net/bluetooth/hci_request.c
+index ca73d36cc149..e9a95ed65491 100644
+--- a/net/bluetooth/hci_request.c
++++ b/net/bluetooth/hci_request.c
+@@ -46,6 +46,11 @@ void hci_req_purge(struct hci_request *req)
+ 	skb_queue_purge(&req->cmd_q);
+ }
+ 
++bool hci_req_status_pend(struct hci_dev *hdev)
++{
++	return hdev->req_status == HCI_REQ_PEND;
++}
++
+ static int req_run(struct hci_request *req, hci_req_complete_t complete,
+ 		   hci_req_complete_skb_t complete_skb)
+ {
+diff --git a/net/bluetooth/hci_request.h b/net/bluetooth/hci_request.h
+index 692cc8b13368..55b2050cc9ff 100644
+--- a/net/bluetooth/hci_request.h
++++ b/net/bluetooth/hci_request.h
+@@ -37,6 +37,7 @@ struct hci_request {
+ 
+ void hci_req_init(struct hci_request *req, struct hci_dev *hdev);
+ void hci_req_purge(struct hci_request *req);
++bool hci_req_status_pend(struct hci_dev *hdev);
+ int hci_req_run(struct hci_request *req, hci_req_complete_t complete);
+ int hci_req_run_skb(struct hci_request *req, hci_req_complete_skb_t complete);
+ void hci_req_add(struct hci_request *req, u16 opcode, u32 plen,
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 687821567287..715ab0e6579c 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -1167,9 +1167,6 @@ static void ieee80211_chswitch_work(struct work_struct *work)
+ 		goto out;
+ 	}
+ 
+-	/* XXX: shouldn't really modify cfg80211-owned data! */
+-	ifmgd->associated->channel = sdata->csa_chandef.chan;
+-
+ 	ifmgd->csa_waiting_bcn = true;
+ 
+ 	ieee80211_sta_reset_beacon_monitor(sdata);
+diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
+index 36619ad8ab8c..8233dfafb339 100644
+--- a/net/netfilter/nf_conntrack_netlink.c
++++ b/net/netfilter/nf_conntrack_netlink.c
+@@ -1254,7 +1254,7 @@ static int ctnetlink_del_conntrack(struct net *net, struct sock *ctnl,
+ 	struct nf_conntrack_tuple tuple;
+ 	struct nf_conn *ct;
+ 	struct nfgenmsg *nfmsg = nlmsg_data(nlh);
+-	u_int8_t u3 = nfmsg->nfgen_family;
++	u_int8_t u3 = nfmsg->version ? nfmsg->nfgen_family : AF_UNSPEC;
+ 	struct nf_conntrack_zone zone;
+ 	int err;
+ 
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 156ce708b533..0044bfb526ab 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -15667,6 +15667,11 @@ void cfg80211_ch_switch_notify(struct net_device *dev,
+ 
+ 	wdev->chandef = *chandef;
+ 	wdev->preset_chandef = *chandef;
++
++	if (wdev->iftype == NL80211_IFTYPE_STATION &&
++	    !WARN_ON(!wdev->current_bss))
++		wdev->current_bss->pub.channel = chandef->chan;
++
+ 	nl80211_ch_switch_notify(rdev, dev, chandef, GFP_KERNEL,
+ 				 NL80211_CMD_CH_SWITCH_NOTIFY, 0);
+ }
+diff --git a/samples/bpf/asm_goto_workaround.h b/samples/bpf/asm_goto_workaround.h
+index 5cd7c1d1a5d5..7409722727ca 100644
+--- a/samples/bpf/asm_goto_workaround.h
++++ b/samples/bpf/asm_goto_workaround.h
+@@ -13,4 +13,5 @@
+ #define asm_volatile_goto(x...) asm volatile("invalid use of asm_volatile_goto")
+ #endif
+ 
++#define volatile(x...) volatile("")
+ #endif
+diff --git a/security/selinux/netlabel.c b/security/selinux/netlabel.c
+index 186e727b737b..6fd9954e1c08 100644
+--- a/security/selinux/netlabel.c
++++ b/security/selinux/netlabel.c
+@@ -288,11 +288,8 @@ int selinux_netlbl_sctp_assoc_request(struct sctp_endpoint *ep,
+ 	int rc;
+ 	struct netlbl_lsm_secattr secattr;
+ 	struct sk_security_struct *sksec = ep->base.sk->sk_security;
+-	struct sockaddr *addr;
+ 	struct sockaddr_in addr4;
+-#if IS_ENABLED(CONFIG_IPV6)
+ 	struct sockaddr_in6 addr6;
+-#endif
+ 
+ 	if (ep->base.sk->sk_family != PF_INET &&
+ 				ep->base.sk->sk_family != PF_INET6)
+@@ -310,16 +307,15 @@ int selinux_netlbl_sctp_assoc_request(struct sctp_endpoint *ep,
+ 	if (ip_hdr(skb)->version == 4) {
+ 		addr4.sin_family = AF_INET;
+ 		addr4.sin_addr.s_addr = ip_hdr(skb)->saddr;
+-		addr = (struct sockaddr *)&addr4;
+-#if IS_ENABLED(CONFIG_IPV6)
+-	} else {
++		rc = netlbl_conn_setattr(ep->base.sk, (void *)&addr4, &secattr);
++	} else if (IS_ENABLED(CONFIG_IPV6) && ip_hdr(skb)->version == 6) {
+ 		addr6.sin6_family = AF_INET6;
+ 		addr6.sin6_addr = ipv6_hdr(skb)->saddr;
+-		addr = (struct sockaddr *)&addr6;
+-#endif
++		rc = netlbl_conn_setattr(ep->base.sk, (void *)&addr6, &secattr);
++	} else {
++		rc = -EAFNOSUPPORT;
+ 	}
+ 
+-	rc = netlbl_conn_setattr(ep->base.sk, addr, &secattr);
+ 	if (rc == 0)
+ 		sksec->nlbl_state = NLBL_LABELED;
+ 
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index b238e903b9d7..a00bd7986646 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -841,7 +841,13 @@ static int snd_hda_codec_dev_free(struct snd_device *device)
+ 	struct hda_codec *codec = device->device_data;
+ 
+ 	codec->in_freeing = 1;
+-	snd_hdac_device_unregister(&codec->core);
++	/*
++	 * snd_hda_codec_device_new() is used by legacy HDA and ASoC driver.
++	 * We can't unregister ASoC device since it will be unregistered in
++	 * snd_hdac_ext_bus_device_remove().
++	 */
++	if (codec->core.type == HDA_DEV_LEGACY)
++		snd_hdac_device_unregister(&codec->core);
+ 	codec_display_power(codec, false);
+ 	put_device(hda_codec_dev(codec));
+ 	return 0;
+diff --git a/sound/soc/codecs/hdmi-codec.c b/sound/soc/codecs/hdmi-codec.c
+index d5f73c837281..7994e8ddc7d2 100644
+--- a/sound/soc/codecs/hdmi-codec.c
++++ b/sound/soc/codecs/hdmi-codec.c
+@@ -439,8 +439,12 @@ static int hdmi_codec_startup(struct snd_pcm_substream *substream,
+ 		if (!ret) {
+ 			ret = snd_pcm_hw_constraint_eld(substream->runtime,
+ 							hcp->eld);
+-			if (ret)
++			if (ret) {
++				mutex_lock(&hcp->current_stream_lock);
++				hcp->current_stream = NULL;
++				mutex_unlock(&hcp->current_stream_lock);
+ 				return ret;
++			}
+ 		}
+ 		/* Select chmap supported */
+ 		hdmi_codec_eld_chmap(hcp);
+diff --git a/sound/soc/fsl/Kconfig b/sound/soc/fsl/Kconfig
+index 2e75b5bc5f1d..f721cd4e3f97 100644
+--- a/sound/soc/fsl/Kconfig
++++ b/sound/soc/fsl/Kconfig
+@@ -173,16 +173,17 @@ config SND_MPC52xx_SOC_EFIKA
+ 
+ endif # SND_POWERPC_SOC
+ 
++config SND_SOC_IMX_PCM_FIQ
++	tristate
++	default y if SND_SOC_IMX_SSI=y && (SND_SOC_FSL_SSI=m || SND_SOC_FSL_SPDIF=m) && (MXC_TZIC || MXC_AVIC)
++	select FIQ
++
+ if SND_IMX_SOC
+ 
+ config SND_SOC_IMX_SSI
+ 	tristate
+ 	select SND_SOC_FSL_UTILS
+ 
+-config SND_SOC_IMX_PCM_FIQ
+-	tristate
+-	select FIQ
+-
+ comment "SoC Audio support for Freescale i.MX boards:"
+ 
+ config SND_MXC_SOC_WM1133_EV1
+diff --git a/sound/soc/fsl/eukrea-tlv320.c b/sound/soc/fsl/eukrea-tlv320.c
+index 191426a6d9ad..30a3d68b5c03 100644
+--- a/sound/soc/fsl/eukrea-tlv320.c
++++ b/sound/soc/fsl/eukrea-tlv320.c
+@@ -118,13 +118,13 @@ static int eukrea_tlv320_probe(struct platform_device *pdev)
+ 		if (ret) {
+ 			dev_err(&pdev->dev,
+ 				"fsl,mux-int-port node missing or invalid.\n");
+-			return ret;
++			goto err;
+ 		}
+ 		ret = of_property_read_u32(np, "fsl,mux-ext-port", &ext_port);
+ 		if (ret) {
+ 			dev_err(&pdev->dev,
+ 				"fsl,mux-ext-port node missing or invalid.\n");
+-			return ret;
++			goto err;
+ 		}
+ 
+ 		/*
+diff --git a/sound/soc/fsl/fsl_sai.c b/sound/soc/fsl/fsl_sai.c
+index 4163f2cfc06f..bfc5b21d0c3f 100644
+--- a/sound/soc/fsl/fsl_sai.c
++++ b/sound/soc/fsl/fsl_sai.c
+@@ -268,12 +268,14 @@ static int fsl_sai_set_dai_fmt_tr(struct snd_soc_dai *cpu_dai,
+ 	case SND_SOC_DAIFMT_CBS_CFS:
+ 		val_cr2 |= FSL_SAI_CR2_BCD_MSTR;
+ 		val_cr4 |= FSL_SAI_CR4_FSD_MSTR;
++		sai->is_slave_mode = false;
+ 		break;
+ 	case SND_SOC_DAIFMT_CBM_CFM:
+ 		sai->is_slave_mode = true;
+ 		break;
+ 	case SND_SOC_DAIFMT_CBS_CFM:
+ 		val_cr2 |= FSL_SAI_CR2_BCD_MSTR;
++		sai->is_slave_mode = false;
+ 		break;
+ 	case SND_SOC_DAIFMT_CBM_CFS:
+ 		val_cr4 |= FSL_SAI_CR4_FSD_MSTR;
+diff --git a/sound/soc/fsl/fsl_utils.c b/sound/soc/fsl/fsl_utils.c
+index 9981668ab590..040d06b89f00 100644
+--- a/sound/soc/fsl/fsl_utils.c
++++ b/sound/soc/fsl/fsl_utils.c
+@@ -71,6 +71,7 @@ int fsl_asoc_get_dma_channel(struct device_node *ssi_np,
+ 	iprop = of_get_property(dma_np, "cell-index", NULL);
+ 	if (!iprop) {
+ 		of_node_put(dma_np);
++		of_node_put(dma_channel_np);
+ 		return -EINVAL;
+ 	}
+ 	*dma_id = be32_to_cpup(iprop);
+diff --git a/sound/soc/intel/boards/kbl_da7219_max98357a.c b/sound/soc/intel/boards/kbl_da7219_max98357a.c
+index 38f6ab74709d..07491a0f8fb8 100644
+--- a/sound/soc/intel/boards/kbl_da7219_max98357a.c
++++ b/sound/soc/intel/boards/kbl_da7219_max98357a.c
+@@ -188,7 +188,7 @@ static int kabylake_da7219_codec_init(struct snd_soc_pcm_runtime *rtd)
+ 
+ 	jack = &ctx->kabylake_headset;
+ 
+-	snd_jack_set_key(jack->jack, SND_JACK_BTN_0, KEY_MEDIA);
++	snd_jack_set_key(jack->jack, SND_JACK_BTN_0, KEY_PLAYPAUSE);
+ 	snd_jack_set_key(jack->jack, SND_JACK_BTN_1, KEY_VOLUMEUP);
+ 	snd_jack_set_key(jack->jack, SND_JACK_BTN_2, KEY_VOLUMEDOWN);
+ 	snd_jack_set_key(jack->jack, SND_JACK_BTN_3, KEY_VOICECOMMAND);
+diff --git a/sound/soc/ti/Kconfig b/sound/soc/ti/Kconfig
+index 4bf3c15d4e51..ee7c202c69b7 100644
+--- a/sound/soc/ti/Kconfig
++++ b/sound/soc/ti/Kconfig
+@@ -21,8 +21,8 @@ config SND_SOC_DAVINCI_ASP
+ 
+ config SND_SOC_DAVINCI_MCASP
+ 	tristate "Multichannel Audio Serial Port (McASP) support"
+-	select SND_SOC_TI_EDMA_PCM if TI_EDMA
+-	select SND_SOC_TI_SDMA_PCM if DMA_OMAP
++	select SND_SOC_TI_EDMA_PCM
++	select SND_SOC_TI_SDMA_PCM
+ 	help
+ 	  Say Y or M here if you want to have support for McASP IP found in
+ 	  various Texas Instruments SoCs like:
+diff --git a/sound/soc/ti/davinci-mcasp.c b/sound/soc/ti/davinci-mcasp.c
+index a10fcb5963c6..570d435e3e8b 100644
+--- a/sound/soc/ti/davinci-mcasp.c
++++ b/sound/soc/ti/davinci-mcasp.c
+@@ -44,6 +44,7 @@
+ 
+ #define MCASP_MAX_AFIFO_DEPTH	64
+ 
++#ifdef CONFIG_PM
+ static u32 context_regs[] = {
+ 	DAVINCI_MCASP_TXFMCTL_REG,
+ 	DAVINCI_MCASP_RXFMCTL_REG,
+@@ -66,6 +67,7 @@ struct davinci_mcasp_context {
+ 	u32	*xrsr_regs; /* for serializer configuration */
+ 	bool	pm_state;
+ };
++#endif
+ 
+ struct davinci_mcasp_ruledata {
+ 	struct davinci_mcasp *mcasp;
+diff --git a/tools/bpf/bpftool/.gitignore b/tools/bpf/bpftool/.gitignore
+index 67167e44b726..8248b8dd89d4 100644
+--- a/tools/bpf/bpftool/.gitignore
++++ b/tools/bpf/bpftool/.gitignore
+@@ -1,5 +1,5 @@
+ *.d
+-bpftool
++/bpftool
+ bpftool*.8
+ bpf-helpers.*
+ FEATURE-DUMP.bpftool
+diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
+index 88cbd110ae58..ddeb46c9eef2 100644
+--- a/tools/lib/bpf/bpf.c
++++ b/tools/lib/bpf/bpf.c
+@@ -45,6 +45,8 @@
+ #  define __NR_bpf 349
+ # elif defined(__s390__)
+ #  define __NR_bpf 351
++# elif defined(__arc__)
++#  define __NR_bpf 280
+ # else
+ #  error __NR_bpf not defined. libbpf does not support your arch.
+ # endif
+diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
+index 8f09de482839..64762a62c008 100644
+--- a/tools/lib/bpf/bpf.h
++++ b/tools/lib/bpf/bpf.h
+@@ -26,6 +26,7 @@
+ #include <linux/bpf.h>
+ #include <stdbool.h>
+ #include <stddef.h>
++#include <stdint.h>
+ 
+ #ifdef __cplusplus
+ extern "C" {
+diff --git a/tools/testing/selftests/bpf/test_libbpf_open.c b/tools/testing/selftests/bpf/test_libbpf_open.c
+index 8fcd1c076add..cbd55f5f8d59 100644
+--- a/tools/testing/selftests/bpf/test_libbpf_open.c
++++ b/tools/testing/selftests/bpf/test_libbpf_open.c
+@@ -11,6 +11,8 @@ static const char *__doc__ =
+ #include <bpf/libbpf.h>
+ #include <getopt.h>
+ 
++#include "bpf_rlimit.h"
++
+ static const struct option long_options[] = {
+ 	{"help",	no_argument,		NULL, 'h' },
+ 	{"debug",	no_argument,		NULL, 'D' },
+diff --git a/tools/testing/selftests/bpf/trace_helpers.c b/tools/testing/selftests/bpf/trace_helpers.c
+index 4cdb63bf0521..9a9fc6c9b70b 100644
+--- a/tools/testing/selftests/bpf/trace_helpers.c
++++ b/tools/testing/selftests/bpf/trace_helpers.c
+@@ -52,6 +52,10 @@ struct ksym *ksym_search(long key)
+ 	int start = 0, end = sym_cnt;
+ 	int result;
+ 
++	/* kallsyms not loaded. return NULL */
++	if (sym_cnt <= 0)
++		return NULL;
++
+ 	while (start < end) {
+ 		size_t mid = start + (end - start) / 2;
+ 
+diff --git a/tools/testing/selftests/cgroup/test_memcontrol.c b/tools/testing/selftests/cgroup/test_memcontrol.c
+index 28d321ba311b..6f339882a6ca 100644
+--- a/tools/testing/selftests/cgroup/test_memcontrol.c
++++ b/tools/testing/selftests/cgroup/test_memcontrol.c
+@@ -26,7 +26,7 @@
+  */
+ static int test_memcg_subtree_control(const char *root)
+ {
+-	char *parent, *child, *parent2, *child2;
++	char *parent, *child, *parent2 = NULL, *child2 = NULL;
+ 	int ret = KSFT_FAIL;
+ 	char buf[PAGE_SIZE];
+ 
+@@ -34,50 +34,54 @@ static int test_memcg_subtree_control(const char *root)
+ 	parent = cg_name(root, "memcg_test_0");
+ 	child = cg_name(root, "memcg_test_0/memcg_test_1");
+ 	if (!parent || !child)
+-		goto cleanup;
++		goto cleanup_free;
+ 
+ 	if (cg_create(parent))
+-		goto cleanup;
++		goto cleanup_free;
+ 
+ 	if (cg_write(parent, "cgroup.subtree_control", "+memory"))
+-		goto cleanup;
++		goto cleanup_parent;
+ 
+ 	if (cg_create(child))
+-		goto cleanup;
++		goto cleanup_parent;
+ 
+ 	if (cg_read_strstr(child, "cgroup.controllers", "memory"))
+-		goto cleanup;
++		goto cleanup_child;
+ 
+ 	/* Create two nested cgroups without enabling memory controller */
+ 	parent2 = cg_name(root, "memcg_test_1");
+ 	child2 = cg_name(root, "memcg_test_1/memcg_test_1");
+ 	if (!parent2 || !child2)
+-		goto cleanup;
++		goto cleanup_free2;
+ 
+ 	if (cg_create(parent2))
+-		goto cleanup;
++		goto cleanup_free2;
+ 
+ 	if (cg_create(child2))
+-		goto cleanup;
++		goto cleanup_parent2;
+ 
+ 	if (cg_read(child2, "cgroup.controllers", buf, sizeof(buf)))
+-		goto cleanup;
++		goto cleanup_all;
+ 
+ 	if (!cg_read_strstr(child2, "cgroup.controllers", "memory"))
+-		goto cleanup;
++		goto cleanup_all;
+ 
+ 	ret = KSFT_PASS;
+ 
+-cleanup:
+-	cg_destroy(child);
+-	cg_destroy(parent);
+-	free(parent);
+-	free(child);
+-
++cleanup_all:
+ 	cg_destroy(child2);
++cleanup_parent2:
+ 	cg_destroy(parent2);
++cleanup_free2:
+ 	free(parent2);
+ 	free(child2);
++cleanup_child:
++	cg_destroy(child);
++cleanup_parent:
++	cg_destroy(parent);
++cleanup_free:
++	free(parent);
++	free(child);
+ 
+ 	return ret;
+ }



* [gentoo-commits] proj/linux-patches:5.0 commit in: /
@ 2019-06-04 11:10 Mike Pagano
  0 siblings, 0 replies; 28+ messages in thread
From: Mike Pagano @ 2019-06-04 11:10 UTC (permalink / raw)
  To: gentoo-commits

commit:     111b09445ca154f9feee0743aa1a84f9250a2dab
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jun  4 11:10:42 2019 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jun  4 11:10:42 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=111b0944

Linux patch 5.0.21

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1020_linux-5.0.21.patch | 1443 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1447 insertions(+)

diff --git a/0000_README b/0000_README
index cf5191b..1fe5b3d 100644
--- a/0000_README
+++ b/0000_README
@@ -123,6 +123,10 @@ Patch:  1019_linux-5.0.20.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.0.20
 
+Patch:  1020_linux-5.0.21.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.0.21
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1020_linux-5.0.21.patch b/1020_linux-5.0.21.patch
new file mode 100644
index 0000000..47e7232
--- /dev/null
+++ b/1020_linux-5.0.21.patch
@@ -0,0 +1,1443 @@
+diff --git a/Makefile b/Makefile
+index 25390977536b..93701ca8f3a6 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 0
+-SUBLEVEL = 20
++SUBLEVEL = 21
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+ 
+diff --git a/drivers/crypto/vmx/ghash.c b/drivers/crypto/vmx/ghash.c
+index dd8b8716467a..2d1a8cd35509 100644
+--- a/drivers/crypto/vmx/ghash.c
++++ b/drivers/crypto/vmx/ghash.c
+@@ -1,22 +1,14 @@
++// SPDX-License-Identifier: GPL-2.0
+ /**
+  * GHASH routines supporting VMX instructions on the Power 8
+  *
+- * Copyright (C) 2015 International Business Machines Inc.
+- *
+- * This program is free software; you can redistribute it and/or modify
+- * it under the terms of the GNU General Public License as published by
+- * the Free Software Foundation; version 2 only.
+- *
+- * This program is distributed in the hope that it will be useful,
+- * but WITHOUT ANY WARRANTY; without even the implied warranty of
+- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+- * GNU General Public License for more details.
+- *
+- * You should have received a copy of the GNU General Public License
+- * along with this program; if not, write to the Free Software
+- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
++ * Copyright (C) 2015, 2019 International Business Machines Inc.
+  *
+  * Author: Marcelo Henrique Cerri <mhcerri@br.ibm.com>
++ *
++ * Extended by Daniel Axtens <dja@axtens.net> to replace the fallback
++ * mechanism. The new approach is based on arm64 code, which is:
++ *   Copyright (C) 2014 - 2018 Linaro Ltd. <ard.biesheuvel@linaro.org>
+  */
+ 
+ #include <linux/types.h>
+@@ -39,71 +31,25 @@ void gcm_ghash_p8(u64 Xi[2], const u128 htable[16],
+ 		  const u8 *in, size_t len);
+ 
+ struct p8_ghash_ctx {
++	/* key used by vector asm */
+ 	u128 htable[16];
+-	struct crypto_shash *fallback;
++	/* key used by software fallback */
++	be128 key;
+ };
+ 
+ struct p8_ghash_desc_ctx {
+ 	u64 shash[2];
+ 	u8 buffer[GHASH_DIGEST_SIZE];
+ 	int bytes;
+-	struct shash_desc fallback_desc;
+ };
+ 
+-static int p8_ghash_init_tfm(struct crypto_tfm *tfm)
+-{
+-	const char *alg = "ghash-generic";
+-	struct crypto_shash *fallback;
+-	struct crypto_shash *shash_tfm = __crypto_shash_cast(tfm);
+-	struct p8_ghash_ctx *ctx = crypto_tfm_ctx(tfm);
+-
+-	fallback = crypto_alloc_shash(alg, 0, CRYPTO_ALG_NEED_FALLBACK);
+-	if (IS_ERR(fallback)) {
+-		printk(KERN_ERR
+-		       "Failed to allocate transformation for '%s': %ld\n",
+-		       alg, PTR_ERR(fallback));
+-		return PTR_ERR(fallback);
+-	}
+-
+-	crypto_shash_set_flags(fallback,
+-			       crypto_shash_get_flags((struct crypto_shash
+-						       *) tfm));
+-
+-	/* Check if the descsize defined in the algorithm is still enough. */
+-	if (shash_tfm->descsize < sizeof(struct p8_ghash_desc_ctx)
+-	    + crypto_shash_descsize(fallback)) {
+-		printk(KERN_ERR
+-		       "Desc size of the fallback implementation (%s) does not match the expected value: %lu vs %u\n",
+-		       alg,
+-		       shash_tfm->descsize - sizeof(struct p8_ghash_desc_ctx),
+-		       crypto_shash_descsize(fallback));
+-		return -EINVAL;
+-	}
+-	ctx->fallback = fallback;
+-
+-	return 0;
+-}
+-
+-static void p8_ghash_exit_tfm(struct crypto_tfm *tfm)
+-{
+-	struct p8_ghash_ctx *ctx = crypto_tfm_ctx(tfm);
+-
+-	if (ctx->fallback) {
+-		crypto_free_shash(ctx->fallback);
+-		ctx->fallback = NULL;
+-	}
+-}
+-
+ static int p8_ghash_init(struct shash_desc *desc)
+ {
+-	struct p8_ghash_ctx *ctx = crypto_tfm_ctx(crypto_shash_tfm(desc->tfm));
+ 	struct p8_ghash_desc_ctx *dctx = shash_desc_ctx(desc);
+ 
+ 	dctx->bytes = 0;
+ 	memset(dctx->shash, 0, GHASH_DIGEST_SIZE);
+-	dctx->fallback_desc.tfm = ctx->fallback;
+-	dctx->fallback_desc.flags = desc->flags;
+-	return crypto_shash_init(&dctx->fallback_desc);
++	return 0;
+ }
+ 
+ static int p8_ghash_setkey(struct crypto_shash *tfm, const u8 *key,
+@@ -121,7 +67,51 @@ static int p8_ghash_setkey(struct crypto_shash *tfm, const u8 *key,
+ 	disable_kernel_vsx();
+ 	pagefault_enable();
+ 	preempt_enable();
+-	return crypto_shash_setkey(ctx->fallback, key, keylen);
++
++	memcpy(&ctx->key, key, GHASH_BLOCK_SIZE);
++
++	return 0;
++}
++
++static inline void __ghash_block(struct p8_ghash_ctx *ctx,
++				 struct p8_ghash_desc_ctx *dctx)
++{
++	if (!IN_INTERRUPT) {
++		preempt_disable();
++		pagefault_disable();
++		enable_kernel_vsx();
++		gcm_ghash_p8(dctx->shash, ctx->htable,
++				dctx->buffer, GHASH_DIGEST_SIZE);
++		disable_kernel_vsx();
++		pagefault_enable();
++		preempt_enable();
++	} else {
++		crypto_xor((u8 *)dctx->shash, dctx->buffer, GHASH_BLOCK_SIZE);
++		gf128mul_lle((be128 *)dctx->shash, &ctx->key);
++	}
++}
++
++static inline void __ghash_blocks(struct p8_ghash_ctx *ctx,
++				  struct p8_ghash_desc_ctx *dctx,
++				  const u8 *src, unsigned int srclen)
++{
++	if (!IN_INTERRUPT) {
++		preempt_disable();
++		pagefault_disable();
++		enable_kernel_vsx();
++		gcm_ghash_p8(dctx->shash, ctx->htable,
++				src, srclen);
++		disable_kernel_vsx();
++		pagefault_enable();
++		preempt_enable();
++	} else {
++		while (srclen >= GHASH_BLOCK_SIZE) {
++			crypto_xor((u8 *)dctx->shash, src, GHASH_BLOCK_SIZE);
++			gf128mul_lle((be128 *)dctx->shash, &ctx->key);
++			srclen -= GHASH_BLOCK_SIZE;
++			src += GHASH_BLOCK_SIZE;
++		}
++	}
+ }
+ 
+ static int p8_ghash_update(struct shash_desc *desc,
+@@ -131,49 +121,33 @@ static int p8_ghash_update(struct shash_desc *desc,
+ 	struct p8_ghash_ctx *ctx = crypto_tfm_ctx(crypto_shash_tfm(desc->tfm));
+ 	struct p8_ghash_desc_ctx *dctx = shash_desc_ctx(desc);
+ 
+-	if (IN_INTERRUPT) {
+-		return crypto_shash_update(&dctx->fallback_desc, src,
+-					   srclen);
+-	} else {
+-		if (dctx->bytes) {
+-			if (dctx->bytes + srclen < GHASH_DIGEST_SIZE) {
+-				memcpy(dctx->buffer + dctx->bytes, src,
+-				       srclen);
+-				dctx->bytes += srclen;
+-				return 0;
+-			}
++	if (dctx->bytes) {
++		if (dctx->bytes + srclen < GHASH_DIGEST_SIZE) {
+ 			memcpy(dctx->buffer + dctx->bytes, src,
+-			       GHASH_DIGEST_SIZE - dctx->bytes);
+-			preempt_disable();
+-			pagefault_disable();
+-			enable_kernel_vsx();
+-			gcm_ghash_p8(dctx->shash, ctx->htable,
+-				     dctx->buffer, GHASH_DIGEST_SIZE);
+-			disable_kernel_vsx();
+-			pagefault_enable();
+-			preempt_enable();
+-			src += GHASH_DIGEST_SIZE - dctx->bytes;
+-			srclen -= GHASH_DIGEST_SIZE - dctx->bytes;
+-			dctx->bytes = 0;
+-		}
+-		len = srclen & ~(GHASH_DIGEST_SIZE - 1);
+-		if (len) {
+-			preempt_disable();
+-			pagefault_disable();
+-			enable_kernel_vsx();
+-			gcm_ghash_p8(dctx->shash, ctx->htable, src, len);
+-			disable_kernel_vsx();
+-			pagefault_enable();
+-			preempt_enable();
+-			src += len;
+-			srclen -= len;
+-		}
+-		if (srclen) {
+-			memcpy(dctx->buffer, src, srclen);
+-			dctx->bytes = srclen;
++				srclen);
++			dctx->bytes += srclen;
++			return 0;
+ 		}
+-		return 0;
++		memcpy(dctx->buffer + dctx->bytes, src,
++			GHASH_DIGEST_SIZE - dctx->bytes);
++
++		__ghash_block(ctx, dctx);
++
++		src += GHASH_DIGEST_SIZE - dctx->bytes;
++		srclen -= GHASH_DIGEST_SIZE - dctx->bytes;
++		dctx->bytes = 0;
++	}
++	len = srclen & ~(GHASH_DIGEST_SIZE - 1);
++	if (len) {
++		__ghash_blocks(ctx, dctx, src, len);
++		src += len;
++		srclen -= len;
+ 	}
++	if (srclen) {
++		memcpy(dctx->buffer, src, srclen);
++		dctx->bytes = srclen;
++	}
++	return 0;
+ }
+ 
+ static int p8_ghash_final(struct shash_desc *desc, u8 *out)
+@@ -182,25 +156,14 @@ static int p8_ghash_final(struct shash_desc *desc, u8 *out)
+ 	struct p8_ghash_ctx *ctx = crypto_tfm_ctx(crypto_shash_tfm(desc->tfm));
+ 	struct p8_ghash_desc_ctx *dctx = shash_desc_ctx(desc);
+ 
+-	if (IN_INTERRUPT) {
+-		return crypto_shash_final(&dctx->fallback_desc, out);
+-	} else {
+-		if (dctx->bytes) {
+-			for (i = dctx->bytes; i < GHASH_DIGEST_SIZE; i++)
+-				dctx->buffer[i] = 0;
+-			preempt_disable();
+-			pagefault_disable();
+-			enable_kernel_vsx();
+-			gcm_ghash_p8(dctx->shash, ctx->htable,
+-				     dctx->buffer, GHASH_DIGEST_SIZE);
+-			disable_kernel_vsx();
+-			pagefault_enable();
+-			preempt_enable();
+-			dctx->bytes = 0;
+-		}
+-		memcpy(out, dctx->shash, GHASH_DIGEST_SIZE);
+-		return 0;
++	if (dctx->bytes) {
++		for (i = dctx->bytes; i < GHASH_DIGEST_SIZE; i++)
++			dctx->buffer[i] = 0;
++		__ghash_block(ctx, dctx);
++		dctx->bytes = 0;
+ 	}
++	memcpy(out, dctx->shash, GHASH_DIGEST_SIZE);
++	return 0;
+ }
+ 
+ struct shash_alg p8_ghash_alg = {
+@@ -215,11 +178,8 @@ struct shash_alg p8_ghash_alg = {
+ 		 .cra_name = "ghash",
+ 		 .cra_driver_name = "p8_ghash",
+ 		 .cra_priority = 1000,
+-		 .cra_flags = CRYPTO_ALG_NEED_FALLBACK,
+ 		 .cra_blocksize = GHASH_BLOCK_SIZE,
+ 		 .cra_ctxsize = sizeof(struct p8_ghash_ctx),
+ 		 .cra_module = THIS_MODULE,
+-		 .cra_init = p8_ghash_init_tfm,
+-		 .cra_exit = p8_ghash_exit_tfm,
+ 	},
+ };
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index f89fc6ea6078..4eeece3576e1 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -3123,13 +3123,18 @@ static int bond_slave_netdev_event(unsigned long event,
+ 	case NETDEV_CHANGE:
+ 		/* For 802.3ad mode only:
+ 		 * Getting invalid Speed/Duplex values here will put slave
+-		 * in weird state. So mark it as link-fail for the time
+-		 * being and let link-monitoring (miimon) set it right when
+-		 * correct speeds/duplex are available.
++		 * in weird state. Mark it as link-fail if the link was
++		 * previously up or link-down if it hasn't yet come up, and
++		 * let link-monitoring (miimon) set it right when correct
++		 * speeds/duplex are available.
+ 		 */
+ 		if (bond_update_speed_duplex(slave) &&
+-		    BOND_MODE(bond) == BOND_MODE_8023AD)
+-			slave->link = BOND_LINK_FAIL;
++		    BOND_MODE(bond) == BOND_MODE_8023AD) {
++			if (slave->last_link_up)
++				slave->link = BOND_LINK_FAIL;
++			else
++				slave->link = BOND_LINK_DOWN;
++		}
+ 
+ 		if (BOND_MODE(bond) == BOND_MODE_8023AD)
+ 			bond_3ad_adapter_speed_duplex_changed(slave);
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 6cba05a80892..5a81ce42b808 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -892,7 +892,7 @@ static uint64_t _mv88e6xxx_get_ethtool_stat(struct mv88e6xxx_chip *chip,
+ 			err = mv88e6xxx_port_read(chip, port, s->reg + 1, &reg);
+ 			if (err)
+ 				return U64_MAX;
+-			high = reg;
++			low |= ((u32)reg) << 16;
+ 		}
+ 		break;
+ 	case STATS_TYPE_BANK1:
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index c6ddbc0e084e..300dbfdd4ae8 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -1636,6 +1636,8 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
+ 		skb = bnxt_copy_skb(bnapi, data_ptr, len, dma_addr);
+ 		bnxt_reuse_rx_data(rxr, cons, data);
+ 		if (!skb) {
++			if (agg_bufs)
++				bnxt_reuse_rx_agg_bufs(cpr, cp_cons, agg_bufs);
+ 			rc = -ENOMEM;
+ 			goto next_rx;
+ 		}
+@@ -6336,7 +6338,7 @@ static int bnxt_alloc_ctx_mem(struct bnxt *bp)
+ 	if (!ctx || (ctx->flags & BNXT_CTX_FLAG_INITED))
+ 		return 0;
+ 
+-	if (bp->flags & BNXT_FLAG_ROCE_CAP) {
++	if ((bp->flags & BNXT_FLAG_ROCE_CAP) && !is_kdump_kernel()) {
+ 		pg_lvl = 2;
+ 		extra_qps = 65536;
+ 		extra_srqs = 8192;
+@@ -7504,22 +7506,23 @@ static void bnxt_clear_int_mode(struct bnxt *bp)
+ 	bp->flags &= ~BNXT_FLAG_USING_MSIX;
+ }
+ 
+-int bnxt_reserve_rings(struct bnxt *bp)
++int bnxt_reserve_rings(struct bnxt *bp, bool irq_re_init)
+ {
+ 	int tcs = netdev_get_num_tc(bp->dev);
+-	bool reinit_irq = false;
++	bool irq_cleared = false;
+ 	int rc;
+ 
+ 	if (!bnxt_need_reserve_rings(bp))
+ 		return 0;
+ 
+-	if (BNXT_NEW_RM(bp) && (bnxt_get_num_msix(bp) != bp->total_irqs)) {
++	if (irq_re_init && BNXT_NEW_RM(bp) &&
++	    bnxt_get_num_msix(bp) != bp->total_irqs) {
+ 		bnxt_ulp_irq_stop(bp);
+ 		bnxt_clear_int_mode(bp);
+-		reinit_irq = true;
++		irq_cleared = true;
+ 	}
+ 	rc = __bnxt_reserve_rings(bp);
+-	if (reinit_irq) {
++	if (irq_cleared) {
+ 		if (!rc)
+ 			rc = bnxt_init_int_mode(bp);
+ 		bnxt_ulp_irq_restart(bp, rc);
+@@ -8418,7 +8421,7 @@ static int __bnxt_open_nic(struct bnxt *bp, bool irq_re_init, bool link_re_init)
+ 			return rc;
+ 		}
+ 	}
+-	rc = bnxt_reserve_rings(bp);
++	rc = bnxt_reserve_rings(bp, irq_re_init);
+ 	if (rc)
+ 		return rc;
+ 	if ((bp->flags & BNXT_FLAG_RFS) &&
+@@ -10276,7 +10279,7 @@ static int bnxt_set_dflt_rings(struct bnxt *bp, bool sh)
+ 
+ 	if (sh)
+ 		bp->flags |= BNXT_FLAG_SHARED_RINGS;
+-	dflt_rings = netif_get_num_default_rss_queues();
++	dflt_rings = is_kdump_kernel() ? 1 : netif_get_num_default_rss_queues();
+ 	/* Reduce default rings on multi-port cards so that total default
+ 	 * rings do not exceed CPU count.
+ 	 */
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+index 2fb653e0048d..c09b20b08395 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+@@ -20,6 +20,7 @@
+ 
+ #include <linux/interrupt.h>
+ #include <linux/rhashtable.h>
++#include <linux/crash_dump.h>
+ #include <net/devlink.h>
+ #include <net/dst_metadata.h>
+ #include <net/switchdev.h>
+@@ -1367,7 +1368,8 @@ struct bnxt {
+ #define BNXT_CHIP_TYPE_NITRO_A0(bp) ((bp)->flags & BNXT_FLAG_CHIP_NITRO_A0)
+ #define BNXT_RX_PAGE_MODE(bp)	((bp)->flags & BNXT_FLAG_RX_PAGE_MODE)
+ #define BNXT_SUPPORTS_TPA(bp)	(!BNXT_CHIP_TYPE_NITRO_A0(bp) &&	\
+-				 !(bp->flags & BNXT_FLAG_CHIP_P5))
++				 !(bp->flags & BNXT_FLAG_CHIP_P5) &&	\
++				 !is_kdump_kernel())
+ 
+ /* Chip class phase 5 */
+ #define BNXT_CHIP_P5(bp)			\
+@@ -1776,7 +1778,7 @@ unsigned int bnxt_get_avail_stat_ctxs_for_en(struct bnxt *bp);
+ unsigned int bnxt_get_max_func_cp_rings(struct bnxt *bp);
+ unsigned int bnxt_get_avail_cp_rings_for_en(struct bnxt *bp);
+ int bnxt_get_avail_msix(struct bnxt *bp, int num);
+-int bnxt_reserve_rings(struct bnxt *bp);
++int bnxt_reserve_rings(struct bnxt *bp, bool irq_re_init);
+ void bnxt_tx_disable(struct bnxt *bp);
+ void bnxt_tx_enable(struct bnxt *bp);
+ int bnxt_hwrm_set_pause(struct bnxt *);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+index adabbe94a259..e1460e391952 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+@@ -788,7 +788,7 @@ static int bnxt_set_channels(struct net_device *dev,
+ 			 */
+ 		}
+ 	} else {
+-		rc = bnxt_reserve_rings(bp);
++		rc = bnxt_reserve_rings(bp, true);
+ 	}
+ 
+ 	return rc;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
+index ea45a9b8179e..7dd3f445afb6 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
+@@ -150,7 +150,7 @@ static int bnxt_req_msix_vecs(struct bnxt_en_dev *edev, int ulp_id,
+ 			bnxt_close_nic(bp, true, false);
+ 			rc = bnxt_open_nic(bp, true, false);
+ 		} else {
+-			rc = bnxt_reserve_rings(bp);
++			rc = bnxt_reserve_rings(bp, true);
+ 		}
+ 	}
+ 	if (rc) {
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
+index c116f96956fe..f2aba5b160c2 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_flower.c
+@@ -228,6 +228,9 @@ static void cxgb4_process_flow_match(struct net_device *dev,
+ 		fs->val.ivlan = vlan_tci;
+ 		fs->mask.ivlan = vlan_tci_mask;
+ 
++		fs->val.ivlan_vld = 1;
++		fs->mask.ivlan_vld = 1;
++
+ 		/* Chelsio adapters use ivlan_vld bit to match vlan packets
+ 		 * as 802.1Q. Also, when vlan tag is present in packets,
+ 		 * ethtype match is used then to match on ethtype of inner
+@@ -238,8 +241,6 @@ static void cxgb4_process_flow_match(struct net_device *dev,
+ 		 * ethtype value with ethtype of inner header.
+ 		 */
+ 		if (fs->val.ethtype == ETH_P_8021Q) {
+-			fs->val.ivlan_vld = 1;
+-			fs->mask.ivlan_vld = 1;
+ 			fs->val.ethtype = 0;
+ 			fs->mask.ethtype = 0;
+ 		}
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
+index 2b03f6187a24..29d3399c4995 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
+@@ -7139,10 +7139,21 @@ int t4_fixup_host_params(struct adapter *adap, unsigned int page_size,
+ 			 unsigned int cache_line_size)
+ {
+ 	unsigned int page_shift = fls(page_size) - 1;
++	unsigned int sge_hps = page_shift - 10;
+ 	unsigned int stat_len = cache_line_size > 64 ? 128 : 64;
+ 	unsigned int fl_align = cache_line_size < 32 ? 32 : cache_line_size;
+ 	unsigned int fl_align_log = fls(fl_align) - 1;
+ 
++	t4_write_reg(adap, SGE_HOST_PAGE_SIZE_A,
++		     HOSTPAGESIZEPF0_V(sge_hps) |
++		     HOSTPAGESIZEPF1_V(sge_hps) |
++		     HOSTPAGESIZEPF2_V(sge_hps) |
++		     HOSTPAGESIZEPF3_V(sge_hps) |
++		     HOSTPAGESIZEPF4_V(sge_hps) |
++		     HOSTPAGESIZEPF5_V(sge_hps) |
++		     HOSTPAGESIZEPF6_V(sge_hps) |
++		     HOSTPAGESIZEPF7_V(sge_hps));
++
+ 	if (is_t4(adap->params.chip)) {
+ 		t4_set_reg_field(adap, SGE_CONTROL_A,
+ 				 INGPADBOUNDARY_V(INGPADBOUNDARY_M) |
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index a96ad20ee484..878ccce1dfcd 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -3556,7 +3556,7 @@ failed_init:
+ 	if (fep->reg_phy)
+ 		regulator_disable(fep->reg_phy);
+ failed_reset:
+-	pm_runtime_put(&pdev->dev);
++	pm_runtime_put_noidle(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
+ failed_regulator:
+ 	clk_disable_unprepare(fep->clk_ahb);
+diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
+index 8433fb9c3eee..ea0236a2e18b 100644
+--- a/drivers/net/ethernet/marvell/mvneta.c
++++ b/drivers/net/ethernet/marvell/mvneta.c
+@@ -4619,7 +4619,7 @@ static int mvneta_probe(struct platform_device *pdev)
+ 	err = register_netdev(dev);
+ 	if (err < 0) {
+ 		dev_err(&pdev->dev, "failed to register\n");
+-		goto err_free_stats;
++		goto err_netdev;
+ 	}
+ 
+ 	netdev_info(dev, "Using %s mac address %pM\n", mac_from,
+@@ -4630,14 +4630,12 @@ static int mvneta_probe(struct platform_device *pdev)
+ 	return 0;
+ 
+ err_netdev:
+-	unregister_netdev(dev);
+ 	if (pp->bm_priv) {
+ 		mvneta_bm_pool_destroy(pp->bm_priv, pp->pool_long, 1 << pp->id);
+ 		mvneta_bm_pool_destroy(pp->bm_priv, pp->pool_short,
+ 				       1 << pp->id);
+ 		mvneta_bm_put(pp->bm_priv);
+ 	}
+-err_free_stats:
+ 	free_percpu(pp->stats);
+ err_free_ports:
+ 	free_percpu(pp->ports);
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index 70031e2b2294..f063ba69eb17 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -1412,7 +1412,7 @@ static inline void mvpp2_xlg_max_rx_size_set(struct mvpp2_port *port)
+ /* Set defaults to the MVPP2 port */
+ static void mvpp2_defaults_set(struct mvpp2_port *port)
+ {
+-	int tx_port_num, val, queue, ptxq, lrxq;
++	int tx_port_num, val, queue, lrxq;
+ 
+ 	if (port->priv->hw_version == MVPP21) {
+ 		/* Update TX FIFO MIN Threshold */
+@@ -1433,11 +1433,9 @@ static void mvpp2_defaults_set(struct mvpp2_port *port)
+ 	mvpp2_write(port->priv, MVPP2_TXP_SCHED_FIXED_PRIO_REG, 0);
+ 
+ 	/* Close bandwidth for all queues */
+-	for (queue = 0; queue < MVPP2_MAX_TXQ; queue++) {
+-		ptxq = mvpp2_txq_phys(port->id, queue);
++	for (queue = 0; queue < MVPP2_MAX_TXQ; queue++)
+ 		mvpp2_write(port->priv,
+-			    MVPP2_TXQ_SCHED_TOKEN_CNTR_REG(ptxq), 0);
+-	}
++			    MVPP2_TXQ_SCHED_TOKEN_CNTR_REG(queue), 0);
+ 
+ 	/* Set refill period to 1 usec, refill tokens
+ 	 * and bucket size to maximum
+@@ -2293,7 +2291,7 @@ static void mvpp2_txq_deinit(struct mvpp2_port *port,
+ 	txq->descs_dma         = 0;
+ 
+ 	/* Set minimum bandwidth for disabled TXQs */
+-	mvpp2_write(port->priv, MVPP2_TXQ_SCHED_TOKEN_CNTR_REG(txq->id), 0);
++	mvpp2_write(port->priv, MVPP2_TXQ_SCHED_TOKEN_CNTR_REG(txq->log_id), 0);
+ 
+ 	/* Set Tx descriptors queue starting address and size */
+ 	thread = mvpp2_cpu_to_thread(port->priv, get_cpu());
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 2d269acdbc8e..631a600bec4d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -3789,6 +3789,12 @@ static netdev_features_t mlx5e_fix_features(struct net_device *netdev,
+ 			netdev_warn(netdev, "Disabling LRO, not supported in legacy RQ\n");
+ 	}
+ 
++	if (MLX5E_GET_PFLAG(params, MLX5E_PFLAG_RX_CQE_COMPRESS)) {
++		features &= ~NETIF_F_RXHASH;
++		if (netdev->features & NETIF_F_RXHASH)
++			netdev_warn(netdev, "Disabling rxhash, not supported when CQE compress is active\n");
++	}
++
+ 	mutex_unlock(&priv->state_lock);
+ 
+ 	return features;
+@@ -3915,6 +3921,9 @@ int mlx5e_hwstamp_set(struct mlx5e_priv *priv, struct ifreq *ifr)
+ 	memcpy(&priv->tstamp, &config, sizeof(config));
+ 	mutex_unlock(&priv->state_lock);
+ 
++	/* might need to fix some features */
++	netdev_update_features(priv->netdev);
++
+ 	return copy_to_user(ifr->ifr_data, &config,
+ 			    sizeof(config)) ? -EFAULT : 0;
+ }
+@@ -4744,6 +4753,10 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev)
+ 	if (!priv->channels.params.scatter_fcs_en)
+ 		netdev->features  &= ~NETIF_F_RXFCS;
+ 
++	/* prefer CQE compression over rxhash */
++	if (MLX5E_GET_PFLAG(&priv->channels.params, MLX5E_PFLAG_RX_CQE_COMPRESS))
++		netdev->features &= ~NETIF_F_RXHASH;
++
+ #define FT_CAP(f) MLX5_CAP_FLOWTABLE(mdev, flow_table_properties_nic_receive.f)
+ 	if (FT_CAP(flow_modify_en) &&
+ 	    FT_CAP(modify_root) &&
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index abbdd4906984..158b941ae911 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -2247,7 +2247,7 @@ static struct mlx5_flow_root_namespace
+ 		cmds = mlx5_fs_cmd_get_default_ipsec_fpga_cmds(table_type);
+ 
+ 	/* Create the root namespace */
+-	root_ns = kvzalloc(sizeof(*root_ns), GFP_KERNEL);
++	root_ns = kzalloc(sizeof(*root_ns), GFP_KERNEL);
+ 	if (!root_ns)
+ 		return NULL;
+ 
+@@ -2390,6 +2390,7 @@ static void cleanup_egress_acls_root_ns(struct mlx5_core_dev *dev)
+ 		cleanup_root_ns(steering->esw_egress_root_ns[i]);
+ 
+ 	kfree(steering->esw_egress_root_ns);
++	steering->esw_egress_root_ns = NULL;
+ }
+ 
+ static void cleanup_ingress_acls_root_ns(struct mlx5_core_dev *dev)
+@@ -2404,6 +2405,7 @@ static void cleanup_ingress_acls_root_ns(struct mlx5_core_dev *dev)
+ 		cleanup_root_ns(steering->esw_ingress_root_ns[i]);
+ 
+ 	kfree(steering->esw_ingress_root_ns);
++	steering->esw_ingress_root_ns = NULL;
+ }
+ 
+ void mlx5_cleanup_fs(struct mlx5_core_dev *dev)
+@@ -2572,6 +2574,7 @@ cleanup_root_ns:
+ 	for (i--; i >= 0; i--)
+ 		cleanup_root_ns(steering->esw_egress_root_ns[i]);
+ 	kfree(steering->esw_egress_root_ns);
++	steering->esw_egress_root_ns = NULL;
+ 	return err;
+ }
+ 
+@@ -2599,6 +2602,7 @@ cleanup_root_ns:
+ 	for (i--; i >= 0; i--)
+ 		cleanup_root_ns(steering->esw_ingress_root_ns[i]);
+ 	kfree(steering->esw_ingress_root_ns);
++	steering->esw_ingress_root_ns = NULL;
+ 	return err;
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_erp.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_erp.c
+index 2941967e1cc5..2e5ebcd01b4b 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_erp.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_erp.c
+@@ -1169,13 +1169,12 @@ mlxsw_sp_acl_erp_delta_fill(const struct mlxsw_sp_acl_erp_key *parent_key,
+ 			return -EINVAL;
+ 	}
+ 	if (si == -1) {
+-		/* The masks are the same, this cannot happen.
+-		 * That means the caller is broken.
++		/* The masks are the same, this can happen in case eRPs with
++		 * the same mask were created in both A-TCAM and C-TCAM.
++		 * The only possible condition under which this can happen
++		 * is identical rule insertion. Delta is not possible here.
+ 		 */
+-		WARN_ON(1);
+-		*delta_start = 0;
+-		*delta_mask = 0;
+-		return 0;
++		return -EINVAL;
+ 	}
+ 	pmask = (unsigned char) parent_key->mask[__MASK_IDX(si)];
+ 	mask = (unsigned char) key->mask[__MASK_IDX(si)];
+diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
+index 365cddbfc684..cb65f6a48eba 100644
+--- a/drivers/net/ethernet/realtek/r8169.c
++++ b/drivers/net/ethernet/realtek/r8169.c
+@@ -6814,6 +6814,8 @@ static int rtl8169_resume(struct device *device)
+ 	struct net_device *dev = dev_get_drvdata(device);
+ 	struct rtl8169_private *tp = netdev_priv(dev);
+ 
++	rtl_rar_set(tp, dev->dev_addr);
++
+ 	clk_prepare_enable(tp->clk);
+ 
+ 	if (netif_running(dev))
+@@ -6847,6 +6849,7 @@ static int rtl8169_runtime_resume(struct device *device)
+ {
+ 	struct net_device *dev = dev_get_drvdata(device);
+ 	struct rtl8169_private *tp = netdev_priv(dev);
++
+ 	rtl_rar_set(tp, dev->dev_addr);
+ 
+ 	if (!tp->TxDescArray)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
+index 3c749c327cbd..e09522c5509a 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
+@@ -460,7 +460,7 @@ stmmac_get_pauseparam(struct net_device *netdev,
+ 	} else {
+ 		if (!linkmode_test_bit(ETHTOOL_LINK_MODE_Pause_BIT,
+ 				       netdev->phydev->supported) ||
+-		    linkmode_test_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
++		    !linkmode_test_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
+ 				      netdev->phydev->supported))
+ 			return;
+ 	}
+@@ -491,7 +491,7 @@ stmmac_set_pauseparam(struct net_device *netdev,
+ 	} else {
+ 		if (!linkmode_test_bit(ETHTOOL_LINK_MODE_Pause_BIT,
+ 				       phy->supported) ||
+-		    linkmode_test_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
++		    !linkmode_test_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
+ 				      phy->supported))
+ 			return -EOPNOTSUPP;
+ 	}
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index f0e0593e54f3..8841c5de8979 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -2190,6 +2190,10 @@ static int stmmac_init_dma_engine(struct stmmac_priv *priv)
+ 	if (priv->plat->axi)
+ 		stmmac_axi(priv, priv->ioaddr, priv->plat->axi);
+ 
++	/* DMA CSR Channel configuration */
++	for (chan = 0; chan < dma_csr_ch; chan++)
++		stmmac_init_chan(priv, priv->ioaddr, priv->plat->dma_cfg, chan);
++
+ 	/* DMA RX Channel Configuration */
+ 	for (chan = 0; chan < rx_channels_count; chan++) {
+ 		rx_q = &priv->rx_queue[chan];
+@@ -2215,10 +2219,6 @@ static int stmmac_init_dma_engine(struct stmmac_priv *priv)
+ 				       tx_q->tx_tail_addr, chan);
+ 	}
+ 
+-	/* DMA CSR Channel configuration */
+-	for (chan = 0; chan < dma_csr_ch; chan++)
+-		stmmac_init_chan(priv, priv->ioaddr, priv->plat->dma_cfg, chan);
+-
+ 	return ret;
+ }
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
+index bdd351597b55..093a223fe408 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
+@@ -267,7 +267,8 @@ int stmmac_mdio_reset(struct mii_bus *bus)
+ 			of_property_read_u32_array(np,
+ 				"snps,reset-delays-us", data->delays, 3);
+ 
+-			if (gpio_request(data->reset_gpio, "mdio-reset"))
++			if (devm_gpio_request(priv->device, data->reset_gpio,
++					      "mdio-reset"))
+ 				return 0;
+ 		}
+ 
+diff --git a/drivers/net/phy/marvell10g.c b/drivers/net/phy/marvell10g.c
+index 6bac602094bd..8438f2f40d3d 100644
+--- a/drivers/net/phy/marvell10g.c
++++ b/drivers/net/phy/marvell10g.c
+@@ -29,6 +29,9 @@
+ #define MDIO_AN_10GBT_CTRL_ADV_NBT_MASK	0x01e0
+ 
+ enum {
++	MV_PMA_BOOT		= 0xc050,
++	MV_PMA_BOOT_FATAL	= BIT(0),
++
+ 	MV_PCS_BASE_T		= 0x0000,
+ 	MV_PCS_BASE_R		= 0x1000,
+ 	MV_PCS_1000BASEX	= 0x2000,
+@@ -228,6 +231,16 @@ static int mv3310_probe(struct phy_device *phydev)
+ 	    (phydev->c45_ids.devices_in_package & mmd_mask) != mmd_mask)
+ 		return -ENODEV;
+ 
++	ret = phy_read_mmd(phydev, MDIO_MMD_PMAPMD, MV_PMA_BOOT);
++	if (ret < 0)
++		return ret;
++
++	if (ret & MV_PMA_BOOT_FATAL) {
++		dev_warn(&phydev->mdio.dev,
++			 "PHY failed to boot firmware, status=%04x\n", ret);
++		return -ENODEV;
++	}
++
+ 	priv = devm_kzalloc(&phydev->mdio.dev, sizeof(*priv), GFP_KERNEL);
+ 	if (!priv)
+ 		return -ENOMEM;
+diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c
+index 504282af27e5..921cc0571bd0 100644
+--- a/drivers/net/usb/usbnet.c
++++ b/drivers/net/usb/usbnet.c
+@@ -506,6 +506,7 @@ static int rx_submit (struct usbnet *dev, struct urb *urb, gfp_t flags)
+ 
+ 	if (netif_running (dev->net) &&
+ 	    netif_device_present (dev->net) &&
++	    test_bit(EVENT_DEV_OPEN, &dev->flags) &&
+ 	    !test_bit (EVENT_RX_HALT, &dev->flags) &&
+ 	    !test_bit (EVENT_DEV_ASLEEP, &dev->flags)) {
+ 		switch (retval = usb_submit_urb (urb, GFP_ATOMIC)) {
+@@ -1431,6 +1432,11 @@ netdev_tx_t usbnet_start_xmit (struct sk_buff *skb,
+ 		spin_unlock_irqrestore(&dev->txq.lock, flags);
+ 		goto drop;
+ 	}
++	if (netif_queue_stopped(net)) {
++		usb_autopm_put_interface_async(dev->intf);
++		spin_unlock_irqrestore(&dev->txq.lock, flags);
++		goto drop;
++	}
+ 
+ #ifdef CONFIG_PM
+ 	/* if this triggers the device is still a sleep */
+diff --git a/drivers/xen/xen-pciback/pciback_ops.c b/drivers/xen/xen-pciback/pciback_ops.c
+index ea4a08b83fa0..787966f44589 100644
+--- a/drivers/xen/xen-pciback/pciback_ops.c
++++ b/drivers/xen/xen-pciback/pciback_ops.c
+@@ -127,8 +127,6 @@ void xen_pcibk_reset_device(struct pci_dev *dev)
+ 		if (pci_is_enabled(dev))
+ 			pci_disable_device(dev);
+ 
+-		pci_write_config_word(dev, PCI_COMMAND, 0);
+-
+ 		dev->is_busmaster = 0;
+ 	} else {
+ 		pci_read_config_word(dev, PCI_COMMAND, &cmd);
+diff --git a/include/linux/siphash.h b/include/linux/siphash.h
+index fa7a6b9cedbf..bf21591a9e5e 100644
+--- a/include/linux/siphash.h
++++ b/include/linux/siphash.h
+@@ -21,6 +21,11 @@ typedef struct {
+ 	u64 key[2];
+ } siphash_key_t;
+ 
++static inline bool siphash_key_is_zero(const siphash_key_t *key)
++{
++	return !(key->key[0] | key->key[1]);
++}
++
+ u64 __siphash_aligned(const void *data, size_t len, const siphash_key_t *key);
+ #ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+ u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t *key);
+diff --git a/include/net/netns/ipv4.h b/include/net/netns/ipv4.h
+index 104a6669e344..7698460a3dd1 100644
+--- a/include/net/netns/ipv4.h
++++ b/include/net/netns/ipv4.h
+@@ -9,6 +9,7 @@
+ #include <linux/uidgid.h>
+ #include <net/inet_frag.h>
+ #include <linux/rcupdate.h>
++#include <linux/siphash.h>
+ 
+ struct tcpm_hash_bucket;
+ struct ctl_table_header;
+@@ -217,5 +218,6 @@ struct netns_ipv4 {
+ 	unsigned int	ipmr_seq;	/* protected by rtnl_mutex */
+ 
+ 	atomic_t	rt_genid;
++	siphash_key_t	ip_id_key;
+ };
+ #endif
+diff --git a/include/uapi/linux/tipc_config.h b/include/uapi/linux/tipc_config.h
+index 4b2c93b1934c..4955e1a9f1bc 100644
+--- a/include/uapi/linux/tipc_config.h
++++ b/include/uapi/linux/tipc_config.h
+@@ -307,8 +307,10 @@ static inline int TLV_SET(void *tlv, __u16 type, void *data, __u16 len)
+ 	tlv_ptr = (struct tlv_desc *)tlv;
+ 	tlv_ptr->tlv_type = htons(type);
+ 	tlv_ptr->tlv_len  = htons(tlv_len);
+-	if (len && data)
+-		memcpy(TLV_DATA(tlv_ptr), data, tlv_len);
++	if (len && data) {
++		memcpy(TLV_DATA(tlv_ptr), data, len);
++		memset(TLV_DATA(tlv_ptr) + len, 0, TLV_SPACE(len) - tlv_len);
++	}
+ 	return TLV_SPACE(len);
+ }
+ 
+@@ -405,8 +407,10 @@ static inline int TCM_SET(void *msg, __u16 cmd, __u16 flags,
+ 	tcm_hdr->tcm_len   = htonl(msg_len);
+ 	tcm_hdr->tcm_type  = htons(cmd);
+ 	tcm_hdr->tcm_flags = htons(flags);
+-	if (data_len && data)
++	if (data_len && data) {
+ 		memcpy(TCM_DATA(msg), data, data_len);
++		memset(TCM_DATA(msg) + data_len, 0, TCM_SPACE(data_len) - msg_len);
++	}
+ 	return TCM_SPACE(data_len);
+ }
+ 
+diff --git a/net/core/dev.c b/net/core/dev.c
+index c8e672ac32cb..a8d017035ae9 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -5804,7 +5804,6 @@ static struct sk_buff *napi_frags_skb(struct napi_struct *napi)
+ 	skb_reset_mac_header(skb);
+ 	skb_gro_reset_offset(skb);
+ 
+-	eth = skb_gro_header_fast(skb, 0);
+ 	if (unlikely(skb_gro_header_hard(skb, hlen))) {
+ 		eth = skb_gro_header_slow(skb, hlen, 0);
+ 		if (unlikely(!eth)) {
+@@ -5814,6 +5813,7 @@ static struct sk_buff *napi_frags_skb(struct napi_struct *napi)
+ 			return NULL;
+ 		}
+ 	} else {
++		eth = (const struct ethhdr *)skb->data;
+ 		gro_pull_from_frag0(skb, hlen);
+ 		NAPI_GRO_CB(skb)->frag0 += hlen;
+ 		NAPI_GRO_CB(skb)->frag0_len -= hlen;
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 40796b8bf820..e5bfd42fd083 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -1001,7 +1001,11 @@ struct ubuf_info *sock_zerocopy_realloc(struct sock *sk, size_t size,
+ 			uarg->len++;
+ 			uarg->bytelen = bytelen;
+ 			atomic_set(&sk->sk_zckey, ++next);
+-			sock_zerocopy_get(uarg);
++
++			/* no extra ref when appending to datagram (MSG_MORE) */
++			if (sk->sk_type == SOCK_STREAM)
++				sock_zerocopy_get(uarg);
++
+ 			return uarg;
+ 		}
+ 	}
+diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c
+index 765b2b32c4a4..1e79e1bca13c 100644
+--- a/net/ipv4/igmp.c
++++ b/net/ipv4/igmp.c
+@@ -187,6 +187,17 @@ static void ip_ma_put(struct ip_mc_list *im)
+ 	     pmc != NULL;					\
+ 	     pmc = rtnl_dereference(pmc->next_rcu))
+ 
++static void ip_sf_list_clear_all(struct ip_sf_list *psf)
++{
++	struct ip_sf_list *next;
++
++	while (psf) {
++		next = psf->sf_next;
++		kfree(psf);
++		psf = next;
++	}
++}
++
+ #ifdef CONFIG_IP_MULTICAST
+ 
+ /*
+@@ -632,6 +643,13 @@ static void igmpv3_clear_zeros(struct ip_sf_list **ppsf)
+ 	}
+ }
+ 
++static void kfree_pmc(struct ip_mc_list *pmc)
++{
++	ip_sf_list_clear_all(pmc->sources);
++	ip_sf_list_clear_all(pmc->tomb);
++	kfree(pmc);
++}
++
+ static void igmpv3_send_cr(struct in_device *in_dev)
+ {
+ 	struct ip_mc_list *pmc, *pmc_prev, *pmc_next;
+@@ -668,7 +686,7 @@ static void igmpv3_send_cr(struct in_device *in_dev)
+ 			else
+ 				in_dev->mc_tomb = pmc_next;
+ 			in_dev_put(pmc->interface);
+-			kfree(pmc);
++			kfree_pmc(pmc);
+ 		} else
+ 			pmc_prev = pmc;
+ 	}
+@@ -1213,14 +1231,18 @@ static void igmpv3_del_delrec(struct in_device *in_dev, struct ip_mc_list *im)
+ 		im->interface = pmc->interface;
+ 		if (im->sfmode == MCAST_INCLUDE) {
+ 			im->tomb = pmc->tomb;
++			pmc->tomb = NULL;
++
+ 			im->sources = pmc->sources;
++			pmc->sources = NULL;
++
+ 			for (psf = im->sources; psf; psf = psf->sf_next)
+ 				psf->sf_crcount = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv;
+ 		} else {
+ 			im->crcount = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv;
+ 		}
+ 		in_dev_put(pmc->interface);
+-		kfree(pmc);
++		kfree_pmc(pmc);
+ 	}
+ 	spin_unlock_bh(&im->lock);
+ }
+@@ -1241,21 +1263,18 @@ static void igmpv3_clear_delrec(struct in_device *in_dev)
+ 		nextpmc = pmc->next;
+ 		ip_mc_clear_src(pmc);
+ 		in_dev_put(pmc->interface);
+-		kfree(pmc);
++		kfree_pmc(pmc);
+ 	}
+ 	/* clear dead sources, too */
+ 	rcu_read_lock();
+ 	for_each_pmc_rcu(in_dev, pmc) {
+-		struct ip_sf_list *psf, *psf_next;
++		struct ip_sf_list *psf;
+ 
+ 		spin_lock_bh(&pmc->lock);
+ 		psf = pmc->tomb;
+ 		pmc->tomb = NULL;
+ 		spin_unlock_bh(&pmc->lock);
+-		for (; psf; psf = psf_next) {
+-			psf_next = psf->sf_next;
+-			kfree(psf);
+-		}
++		ip_sf_list_clear_all(psf);
+ 	}
+ 	rcu_read_unlock();
+ }
+@@ -2133,7 +2152,7 @@ static int ip_mc_add_src(struct in_device *in_dev, __be32 *pmca, int sfmode,
+ 
+ static void ip_mc_clear_src(struct ip_mc_list *pmc)
+ {
+-	struct ip_sf_list *psf, *nextpsf, *tomb, *sources;
++	struct ip_sf_list *tomb, *sources;
+ 
+ 	spin_lock_bh(&pmc->lock);
+ 	tomb = pmc->tomb;
+@@ -2145,14 +2164,8 @@ static void ip_mc_clear_src(struct ip_mc_list *pmc)
+ 	pmc->sfcount[MCAST_EXCLUDE] = 1;
+ 	spin_unlock_bh(&pmc->lock);
+ 
+-	for (psf = tomb; psf; psf = nextpsf) {
+-		nextpsf = psf->sf_next;
+-		kfree(psf);
+-	}
+-	for (psf = sources; psf; psf = nextpsf) {
+-		nextpsf = psf->sf_next;
+-		kfree(psf);
+-	}
++	ip_sf_list_clear_all(tomb);
++	ip_sf_list_clear_all(sources);
+ }
+ 
+ /* Join a multicast group
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index e8bb2e85c5a4..ac770940adb9 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -883,7 +883,7 @@ static int __ip_append_data(struct sock *sk,
+ 	int csummode = CHECKSUM_NONE;
+ 	struct rtable *rt = (struct rtable *)cork->dst;
+ 	unsigned int wmem_alloc_delta = 0;
+-	bool paged, extra_uref;
++	bool paged, extra_uref = false;
+ 	u32 tskey = 0;
+ 
+ 	skb = skb_peek_tail(queue);
+@@ -923,7 +923,7 @@ static int __ip_append_data(struct sock *sk,
+ 		uarg = sock_zerocopy_realloc(sk, length, skb_zcopy(skb));
+ 		if (!uarg)
+ 			return -ENOBUFS;
+-		extra_uref = true;
++		extra_uref = !skb;	/* only extra ref if !MSG_MORE */
+ 		if (rt->dst.dev->features & NETIF_F_SG &&
+ 		    csummode == CHECKSUM_PARTIAL) {
+ 			paged = true;
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 3c89ca325947..b66f78fad98c 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -500,15 +500,17 @@ EXPORT_SYMBOL(ip_idents_reserve);
+ 
+ void __ip_select_ident(struct net *net, struct iphdr *iph, int segs)
+ {
+-	static u32 ip_idents_hashrnd __read_mostly;
+ 	u32 hash, id;
+ 
+-	net_get_random_once(&ip_idents_hashrnd, sizeof(ip_idents_hashrnd));
++	/* Note the following code is not safe, but this is okay. */
++	if (unlikely(siphash_key_is_zero(&net->ipv4.ip_id_key)))
++		get_random_bytes(&net->ipv4.ip_id_key,
++				 sizeof(net->ipv4.ip_id_key));
+ 
+-	hash = jhash_3words((__force u32)iph->daddr,
++	hash = siphash_3u32((__force u32)iph->daddr,
+ 			    (__force u32)iph->saddr,
+-			    iph->protocol ^ net_hash_mix(net),
+-			    ip_idents_hashrnd);
++			    iph->protocol,
++			    &net->ipv4.ip_id_key);
+ 	id = ip_idents_reserve(hash, segs);
+ 	iph->id = htons(id);
+ }
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index e71227390bec..de16c2e343ef 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -1269,7 +1269,7 @@ static int __ip6_append_data(struct sock *sk,
+ 	int csummode = CHECKSUM_NONE;
+ 	unsigned int maxnonfragsize, headersize;
+ 	unsigned int wmem_alloc_delta = 0;
+-	bool paged, extra_uref;
++	bool paged, extra_uref = false;
+ 
+ 	skb = skb_peek_tail(queue);
+ 	if (!skb) {
+@@ -1338,7 +1338,7 @@ emsgsize:
+ 		uarg = sock_zerocopy_realloc(sk, length, skb_zcopy(skb));
+ 		if (!uarg)
+ 			return -ENOBUFS;
+-		extra_uref = true;
++		extra_uref = !skb;	/* only extra ref if !MSG_MORE */
+ 		if (rt->dst.dev->features & NETIF_F_SG &&
+ 		    csummode == CHECKSUM_PARTIAL) {
+ 			paged = true;
+diff --git a/net/ipv6/output_core.c b/net/ipv6/output_core.c
+index 4fe7c90962dd..868ae23dbae1 100644
+--- a/net/ipv6/output_core.c
++++ b/net/ipv6/output_core.c
+@@ -10,15 +10,25 @@
+ #include <net/secure_seq.h>
+ #include <linux/netfilter.h>
+ 
+-static u32 __ipv6_select_ident(struct net *net, u32 hashrnd,
++static u32 __ipv6_select_ident(struct net *net,
+ 			       const struct in6_addr *dst,
+ 			       const struct in6_addr *src)
+ {
++	const struct {
++		struct in6_addr dst;
++		struct in6_addr src;
++	} __aligned(SIPHASH_ALIGNMENT) combined = {
++		.dst = *dst,
++		.src = *src,
++	};
+ 	u32 hash, id;
+ 
+-	hash = __ipv6_addr_jhash(dst, hashrnd);
+-	hash = __ipv6_addr_jhash(src, hash);
+-	hash ^= net_hash_mix(net);
++	/* Note the following code is not safe, but this is okay. */
++	if (unlikely(siphash_key_is_zero(&net->ipv4.ip_id_key)))
++		get_random_bytes(&net->ipv4.ip_id_key,
++				 sizeof(net->ipv4.ip_id_key));
++
++	hash = siphash(&combined, sizeof(combined), &net->ipv4.ip_id_key);
+ 
+ 	/* Treat id of 0 as unset and if we get 0 back from ip_idents_reserve,
+ 	 * set the hight order instead thus minimizing possible future
+@@ -41,7 +51,6 @@ static u32 __ipv6_select_ident(struct net *net, u32 hashrnd,
+  */
+ __be32 ipv6_proxy_select_ident(struct net *net, struct sk_buff *skb)
+ {
+-	static u32 ip6_proxy_idents_hashrnd __read_mostly;
+ 	struct in6_addr buf[2];
+ 	struct in6_addr *addrs;
+ 	u32 id;
+@@ -53,11 +62,7 @@ __be32 ipv6_proxy_select_ident(struct net *net, struct sk_buff *skb)
+ 	if (!addrs)
+ 		return 0;
+ 
+-	net_get_random_once(&ip6_proxy_idents_hashrnd,
+-			    sizeof(ip6_proxy_idents_hashrnd));
+-
+-	id = __ipv6_select_ident(net, ip6_proxy_idents_hashrnd,
+-				 &addrs[1], &addrs[0]);
++	id = __ipv6_select_ident(net, &addrs[1], &addrs[0]);
+ 	return htonl(id);
+ }
+ EXPORT_SYMBOL_GPL(ipv6_proxy_select_ident);
+@@ -66,12 +71,9 @@ __be32 ipv6_select_ident(struct net *net,
+ 			 const struct in6_addr *daddr,
+ 			 const struct in6_addr *saddr)
+ {
+-	static u32 ip6_idents_hashrnd __read_mostly;
+ 	u32 id;
+ 
+-	net_get_random_once(&ip6_idents_hashrnd, sizeof(ip6_idents_hashrnd));
+-
+-	id = __ipv6_select_ident(net, ip6_idents_hashrnd, daddr, saddr);
++	id = __ipv6_select_ident(net, daddr, saddr);
+ 	return htonl(id);
+ }
+ EXPORT_SYMBOL(ipv6_select_ident);
+diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c
+index 5a426226c762..5cb14eabfc65 100644
+--- a/net/ipv6/raw.c
++++ b/net/ipv6/raw.c
+@@ -287,7 +287,9 @@ static int rawv6_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+ 			/* Binding to link-local address requires an interface */
+ 			if (!sk->sk_bound_dev_if)
+ 				goto out_unlock;
++		}
+ 
++		if (sk->sk_bound_dev_if) {
+ 			err = -ENODEV;
+ 			dev = dev_get_by_index_rcu(sock_net(sk),
+ 						   sk->sk_bound_dev_if);
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index b471afce1330..457a27016e74 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -2448,6 +2448,12 @@ static struct rt6_info *__ip6_route_redirect(struct net *net,
+ 	struct fib6_info *rt;
+ 	struct fib6_node *fn;
+ 
++	/* l3mdev_update_flow overrides oif if the device is enslaved; in
++	 * this case we must match on the real ingress device, so reset it
++	 */
++	if (fl6->flowi6_flags & FLOWI_FLAG_SKIP_NH_OIF)
++		fl6->flowi6_oif = skb->dev->ifindex;
++
+ 	/* Get the "current" route for this destination and
+ 	 * check if the redirect has come from appropriate router.
+ 	 *
+diff --git a/net/llc/llc_output.c b/net/llc/llc_output.c
+index 94425e421213..9e4b6bcf6920 100644
+--- a/net/llc/llc_output.c
++++ b/net/llc/llc_output.c
+@@ -72,6 +72,8 @@ int llc_build_and_send_ui_pkt(struct llc_sap *sap, struct sk_buff *skb,
+ 	rc = llc_mac_hdr_init(skb, skb->dev->dev_addr, dmac);
+ 	if (likely(!rc))
+ 		rc = dev_queue_xmit(skb);
++	else
++		kfree_skb(skb);
+ 	return rc;
+ }
+ 
+diff --git a/net/sched/act_api.c b/net/sched/act_api.c
+index d4b8355737d8..9d4ed81a33b9 100644
+--- a/net/sched/act_api.c
++++ b/net/sched/act_api.c
+@@ -766,7 +766,7 @@ int tcf_action_dump(struct sk_buff *skb, struct tc_action *actions[],
+ 
+ 	for (i = 0; i < TCA_ACT_MAX_PRIO && actions[i]; i++) {
+ 		a = actions[i];
+-		nest = nla_nest_start(skb, a->order);
++		nest = nla_nest_start(skb, i + 1);
+ 		if (nest == NULL)
+ 			goto nla_put_failure;
+ 		err = tcf_action_dump_1(skb, a, bind, ref);
+@@ -1283,7 +1283,6 @@ tca_action_gd(struct net *net, struct nlattr *nla, struct nlmsghdr *n,
+ 			ret = PTR_ERR(act);
+ 			goto err;
+ 		}
+-		act->order = i;
+ 		attr_size += tcf_action_fill_size(act);
+ 		actions[i - 1] = act;
+ 	}
+diff --git a/net/tipc/core.c b/net/tipc/core.c
+index d7b0688c98dd..3ecca3b88bf8 100644
+--- a/net/tipc/core.c
++++ b/net/tipc/core.c
+@@ -66,10 +66,6 @@ static int __net_init tipc_init_net(struct net *net)
+ 	INIT_LIST_HEAD(&tn->node_list);
+ 	spin_lock_init(&tn->node_list_lock);
+ 
+-	err = tipc_socket_init();
+-	if (err)
+-		goto out_socket;
+-
+ 	err = tipc_sk_rht_init(net);
+ 	if (err)
+ 		goto out_sk_rht;
+@@ -79,9 +75,6 @@ static int __net_init tipc_init_net(struct net *net)
+ 		goto out_nametbl;
+ 
+ 	INIT_LIST_HEAD(&tn->dist_queue);
+-	err = tipc_topsrv_start(net);
+-	if (err)
+-		goto out_subscr;
+ 
+ 	err = tipc_bcast_init(net);
+ 	if (err)
+@@ -90,25 +83,19 @@ static int __net_init tipc_init_net(struct net *net)
+ 	return 0;
+ 
+ out_bclink:
+-	tipc_bcast_stop(net);
+-out_subscr:
+ 	tipc_nametbl_stop(net);
+ out_nametbl:
+ 	tipc_sk_rht_destroy(net);
+ out_sk_rht:
+-	tipc_socket_stop();
+-out_socket:
+ 	return err;
+ }
+ 
+ static void __net_exit tipc_exit_net(struct net *net)
+ {
+-	tipc_topsrv_stop(net);
+ 	tipc_net_stop(net);
+ 	tipc_bcast_stop(net);
+ 	tipc_nametbl_stop(net);
+ 	tipc_sk_rht_destroy(net);
+-	tipc_socket_stop();
+ }
+ 
+ static struct pernet_operations tipc_net_ops = {
+@@ -118,6 +105,11 @@ static struct pernet_operations tipc_net_ops = {
+ 	.size = sizeof(struct tipc_net),
+ };
+ 
++static struct pernet_operations tipc_topsrv_net_ops = {
++	.init = tipc_topsrv_init_net,
++	.exit = tipc_topsrv_exit_net,
++};
++
+ static int __init tipc_init(void)
+ {
+ 	int err;
+@@ -144,6 +136,14 @@ static int __init tipc_init(void)
+ 	if (err)
+ 		goto out_pernet;
+ 
++	err = tipc_socket_init();
++	if (err)
++		goto out_socket;
++
++	err = register_pernet_subsys(&tipc_topsrv_net_ops);
++	if (err)
++		goto out_pernet_topsrv;
++
+ 	err = tipc_bearer_setup();
+ 	if (err)
+ 		goto out_bearer;
+@@ -151,6 +151,10 @@ static int __init tipc_init(void)
+ 	pr_info("Started in single node mode\n");
+ 	return 0;
+ out_bearer:
++	unregister_pernet_subsys(&tipc_topsrv_net_ops);
++out_pernet_topsrv:
++	tipc_socket_stop();
++out_socket:
+ 	unregister_pernet_subsys(&tipc_net_ops);
+ out_pernet:
+ 	tipc_unregister_sysctl();
+@@ -166,6 +170,8 @@ out_netlink:
+ static void __exit tipc_exit(void)
+ {
+ 	tipc_bearer_cleanup();
++	unregister_pernet_subsys(&tipc_topsrv_net_ops);
++	tipc_socket_stop();
+ 	unregister_pernet_subsys(&tipc_net_ops);
+ 	tipc_netlink_stop();
+ 	tipc_netlink_compat_stop();
+diff --git a/net/tipc/subscr.h b/net/tipc/subscr.h
+index d793b4343885..aa015c233898 100644
+--- a/net/tipc/subscr.h
++++ b/net/tipc/subscr.h
+@@ -77,8 +77,9 @@ void tipc_sub_report_overlap(struct tipc_subscription *sub,
+ 			     u32 found_lower, u32 found_upper,
+ 			     u32 event, u32 port, u32 node,
+ 			     u32 scope, int must);
+-int tipc_topsrv_start(struct net *net);
+-void tipc_topsrv_stop(struct net *net);
++
++int __net_init tipc_topsrv_init_net(struct net *net);
++void __net_exit tipc_topsrv_exit_net(struct net *net);
+ 
+ void tipc_sub_put(struct tipc_subscription *subscription);
+ void tipc_sub_get(struct tipc_subscription *subscription);
+diff --git a/net/tipc/topsrv.c b/net/tipc/topsrv.c
+index f5edb213d760..00f25640877a 100644
+--- a/net/tipc/topsrv.c
++++ b/net/tipc/topsrv.c
+@@ -637,7 +637,7 @@ static void tipc_topsrv_work_stop(struct tipc_topsrv *s)
+ 	destroy_workqueue(s->send_wq);
+ }
+ 
+-int tipc_topsrv_start(struct net *net)
++static int tipc_topsrv_start(struct net *net)
+ {
+ 	struct tipc_net *tn = tipc_net(net);
+ 	const char name[] = "topology_server";
+@@ -671,7 +671,7 @@ int tipc_topsrv_start(struct net *net)
+ 	return ret;
+ }
+ 
+-void tipc_topsrv_stop(struct net *net)
++static void tipc_topsrv_stop(struct net *net)
+ {
+ 	struct tipc_topsrv *srv = tipc_topsrv(net);
+ 	struct socket *lsock = srv->listener;
+@@ -696,3 +696,13 @@ void tipc_topsrv_stop(struct net *net)
+ 	idr_destroy(&srv->conn_idr);
+ 	kfree(srv);
+ }
++
++int __net_init tipc_topsrv_init_net(struct net *net)
++{
++	return tipc_topsrv_start(net);
++}
++
++void __net_exit tipc_topsrv_exit_net(struct net *net)
++{
++	tipc_topsrv_stop(net);
++}
+diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
+index 7d5136ecee78..84f6b6906bcc 100644
+--- a/net/tls/tls_device.c
++++ b/net/tls/tls_device.c
+@@ -923,12 +923,6 @@ void tls_device_offload_cleanup_rx(struct sock *sk)
+ 	if (!netdev)
+ 		goto out;
+ 
+-	if (!(netdev->features & NETIF_F_HW_TLS_RX)) {
+-		pr_err_ratelimited("%s: device is missing NETIF_F_HW_TLS_RX cap\n",
+-				   __func__);
+-		goto out;
+-	}
+-
+ 	netdev->tlsdev_ops->tls_dev_del(netdev, tls_ctx,
+ 					TLS_OFFLOAD_CTX_DIR_RX);
+ 
+@@ -987,7 +981,8 @@ static int tls_dev_event(struct notifier_block *this, unsigned long event,
+ {
+ 	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
+ 
+-	if (!(dev->features & (NETIF_F_HW_TLS_RX | NETIF_F_HW_TLS_TX)))
++	if (!dev->tlsdev_ops &&
++	    !(dev->features & (NETIF_F_HW_TLS_RX | NETIF_F_HW_TLS_TX)))
+ 		return NOTIFY_DONE;
+ 
+ 	switch (event) {




Thread overview: 28+ messages
2019-04-17  7:32 [gentoo-commits] proj/linux-patches:5.0 commit in: / Alice Ferrazzi
2019-06-04 11:10 Mike Pagano
2019-05-31 14:03 Mike Pagano
2019-05-26 17:08 Mike Pagano
2019-05-22 11:04 Mike Pagano
2019-05-16 23:04 Mike Pagano
2019-05-14 21:01 Mike Pagano
2019-05-10 19:43 Mike Pagano
2019-05-08 10:07 Mike Pagano
2019-05-05 13:40 Mike Pagano
2019-05-05 13:39 Mike Pagano
2019-05-04 18:29 Mike Pagano
2019-05-02 10:12 Mike Pagano
2019-04-27 17:38 Mike Pagano
2019-04-20 11:12 Mike Pagano
2019-04-19 19:28 Mike Pagano
2019-04-05 21:47 Mike Pagano
2019-04-03 11:09 Mike Pagano
2019-04-03 11:00 Mike Pagano
2019-03-27 12:20 Mike Pagano
2019-03-27 10:23 Mike Pagano
2019-03-23 20:25 Mike Pagano
2019-03-19 17:01 Mike Pagano
2019-03-13 22:10 Mike Pagano
2019-03-10 14:12 Mike Pagano
2019-03-08 14:36 Mike Pagano
2019-03-04 13:16 Mike Pagano
2019-03-04 13:11 Mike Pagano
