From: "Mike Pagano" <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:3.18 commit in: /
Date: Fri, 16 Jan 2015 18:31:14 +0000 (UTC)
Message-ID: <1421433075.9e45c43b5f94229af08b5fc1702e8414e66dddf3.mpagano@gentoo>
commit: 9e45c43b5f94229af08b5fc1702e8414e66dddf3
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Jan 16 18:31:15 2015 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Jan 16 18:31:15 2015 +0000
URL: http://sources.gentoo.org/gitweb/?p=proj/linux-patches.git;a=commit;h=9e45c43b
Linux 3.18.3 patch. Removal of redundant patch for nouveau.
---
0000_README | 8 +-
1002_linux-3.18.3.patch | 4593 +++++++++++++++++++++++++++++++++++++
2800_nouveau-spin-is-locked.patch | 31 -
3 files changed, 4597 insertions(+), 35 deletions(-)
diff --git a/0000_README b/0000_README
index d04c7db..0df447d 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch: 1001_linux-3.18.2.patch
From: http://www.kernel.org
Desc: Linux 3.18.2
+Patch: 1002_linux-3.18.3.patch
+From: http://www.kernel.org
+Desc: Linux 3.18.3
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
@@ -75,10 +79,6 @@ Patch: 2710_i915-drm-disallow-pin-ioctl-for-kms-drivers.patch
From: http://www.kernel.org
Desc: drm/i915: Patch to disallow pin ioctl completely for kms drivers. See bug #532926.
-Patch: 2800_nouveau-spin-is-locked.patch
-From: http://www.kernel.org
-Desc: nouveau: Do not BUG_ON(!spin_is_locked()) on UP.
-
Patch: 4200_fbcondecor-3.16.patch
From: http://www.mepiscommunity.org/fbcondecor
Desc: Bootsplash ported by Uladzimir Bely (bug #513334)
diff --git a/1002_linux-3.18.3.patch b/1002_linux-3.18.3.patch
new file mode 100644
index 0000000..bfa35ef
--- /dev/null
+++ b/1002_linux-3.18.3.patch
@@ -0,0 +1,4593 @@
+diff --git a/Documentation/devicetree/bindings/i2c/i2c-designware.txt b/Documentation/devicetree/bindings/i2c/i2c-designware.txt
+index 5199b0c8cf7a..fee26dc3e858 100644
+--- a/Documentation/devicetree/bindings/i2c/i2c-designware.txt
++++ b/Documentation/devicetree/bindings/i2c/i2c-designware.txt
+@@ -14,10 +14,10 @@ Optional properties :
+ - i2c-sda-hold-time-ns : should contain the SDA hold time in nanoseconds.
+ This option is only supported in hardware blocks version 1.11a or newer.
+
+- - i2c-scl-falling-time : should contain the SCL falling time in nanoseconds.
++ - i2c-scl-falling-time-ns : should contain the SCL falling time in nanoseconds.
+ This value which is by default 300ns is used to compute the tLOW period.
+
+- - i2c-sda-falling-time : should contain the SDA falling time in nanoseconds.
++ - i2c-sda-falling-time-ns : should contain the SDA falling time in nanoseconds.
+ This value which is by default 300ns is used to compute the tHIGH period.
+
+ Example :
+diff --git a/Documentation/ramoops.txt b/Documentation/ramoops.txt
+index 69b3cac4749d..5d8675615e59 100644
+--- a/Documentation/ramoops.txt
++++ b/Documentation/ramoops.txt
+@@ -14,11 +14,19 @@ survive after a restart.
+
+ 1. Ramoops concepts
+
+-Ramoops uses a predefined memory area to store the dump. The start and size of
+-the memory area are set using two variables:
++Ramoops uses a predefined memory area to store the dump. The start and size
++and type of the memory area are set using three variables:
+ * "mem_address" for the start
+ * "mem_size" for the size. The memory size will be rounded down to a
+ power of two.
++ * "mem_type" to specifiy if the memory type (default is pgprot_writecombine).
++
++Typically the default value of mem_type=0 should be used as that sets the pstore
++mapping to pgprot_writecombine. Setting mem_type=1 attempts to use
++pgprot_noncached, which only works on some platforms. This is because pstore
++depends on atomic operations. At least on ARM, pgprot_noncached causes the
++memory to be mapped strongly ordered, and atomic operations on strongly ordered
++memory are implementation defined, and won't work on many ARMs such as omaps.
+
+ The memory area is divided into "record_size" chunks (also rounded down to
+ power of two) and each oops/panic writes a "record_size" chunk of
+@@ -55,6 +63,7 @@ Setting the ramoops parameters can be done in 2 different manners:
+ static struct ramoops_platform_data ramoops_data = {
+ .mem_size = <...>,
+ .mem_address = <...>,
++ .mem_type = <...>,
+ .record_size = <...>,
+ .dump_oops = <...>,
+ .ecc = <...>,
+diff --git a/Makefile b/Makefile
+index 8f73b417dc1a..91cfe8d5ee06 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 3
+ PATCHLEVEL = 18
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Diseased Newt
+
+diff --git a/arch/arm/boot/dts/am437x-sk-evm.dts b/arch/arm/boot/dts/am437x-sk-evm.dts
+index 87aa4f3b8b3d..53bbfc90b26a 100644
+--- a/arch/arm/boot/dts/am437x-sk-evm.dts
++++ b/arch/arm/boot/dts/am437x-sk-evm.dts
+@@ -100,7 +100,7 @@
+ };
+
+ lcd0: display {
+- compatible = "osddisplays,osd057T0559-34ts", "panel-dpi";
++ compatible = "newhaven,nhd-4.3-480272ef-atxl", "panel-dpi";
+ label = "lcd";
+
+ pinctrl-names = "default";
+@@ -112,11 +112,11 @@
+ clock-frequency = <9000000>;
+ hactive = <480>;
+ vactive = <272>;
+- hfront-porch = <8>;
+- hback-porch = <43>;
+- hsync-len = <4>;
+- vback-porch = <12>;
+- vfront-porch = <4>;
++ hfront-porch = <2>;
++ hback-porch = <2>;
++ hsync-len = <41>;
++ vfront-porch = <2>;
++ vback-porch = <2>;
+ vsync-len = <10>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+@@ -320,8 +320,7 @@
+
+ lcd_pins: lcd_pins {
+ pinctrl-single,pins = <
+- /* GPIO 5_8 to select LCD / HDMI */
+- 0x238 (PIN_OUTPUT_PULLUP | MUX_MODE7)
++ 0x1c (PIN_OUTPUT_PULLDOWN | MUX_MODE7) /* gpcm_ad7.gpio1_7 */
+ >;
+ };
+ };
+diff --git a/arch/arm/boot/dts/dra7.dtsi b/arch/arm/boot/dts/dra7.dtsi
+index 9cc98436a982..666e796847d8 100644
+--- a/arch/arm/boot/dts/dra7.dtsi
++++ b/arch/arm/boot/dts/dra7.dtsi
+@@ -653,7 +653,7 @@
+ };
+
+ wdt2: wdt@4ae14000 {
+- compatible = "ti,omap4-wdt";
++ compatible = "ti,omap3-wdt";
+ reg = <0x4ae14000 0x80>;
+ interrupts = <GIC_SPI 75 IRQ_TYPE_LEVEL_HIGH>;
+ ti,hwmods = "wd_timer2";
+diff --git a/arch/arm/boot/dts/s3c6410-mini6410.dts b/arch/arm/boot/dts/s3c6410-mini6410.dts
+index 57e00f9bce99..a25debb50401 100644
+--- a/arch/arm/boot/dts/s3c6410-mini6410.dts
++++ b/arch/arm/boot/dts/s3c6410-mini6410.dts
+@@ -198,10 +198,6 @@
+ status = "okay";
+ };
+
+-&pwm {
+- status = "okay";
+-};
+-
+ &pinctrl0 {
+ gpio_leds: gpio-leds {
+ samsung,pins = "gpk-4", "gpk-5", "gpk-6", "gpk-7";
+diff --git a/arch/arm/boot/dts/s3c64xx.dtsi b/arch/arm/boot/dts/s3c64xx.dtsi
+index ff5bdaac987a..0ccb414cd268 100644
+--- a/arch/arm/boot/dts/s3c64xx.dtsi
++++ b/arch/arm/boot/dts/s3c64xx.dtsi
+@@ -172,7 +172,6 @@
+ clocks = <&clocks PCLK_PWM>;
+ samsung,pwm-outputs = <0>, <1>;
+ #pwm-cells = <3>;
+- status = "disabled";
+ };
+
+ pinctrl0: pinctrl@7f008000 {
+diff --git a/arch/arm/configs/multi_v7_defconfig b/arch/arm/configs/multi_v7_defconfig
+index 9d7a32f93fcf..37560f19d346 100644
+--- a/arch/arm/configs/multi_v7_defconfig
++++ b/arch/arm/configs/multi_v7_defconfig
+@@ -320,6 +320,7 @@ CONFIG_USB=y
+ CONFIG_USB_XHCI_HCD=y
+ CONFIG_USB_XHCI_MVEBU=y
+ CONFIG_USB_EHCI_HCD=y
++CONFIG_USB_EHCI_EXYNOS=y
+ CONFIG_USB_EHCI_TEGRA=y
+ CONFIG_USB_EHCI_HCD_PLATFORM=y
+ CONFIG_USB_ISP1760_HCD=y
+@@ -445,4 +446,4 @@ CONFIG_DEBUG_FS=y
+ CONFIG_MAGIC_SYSRQ=y
+ CONFIG_LOCKUP_DETECTOR=y
+ CONFIG_CRYPTO_DEV_TEGRA_AES=y
+-CONFIG_GENERIC_CPUFREQ_CPU0=y
++CONFIG_CPUFREQ_DT=y
+diff --git a/arch/arm/configs/shmobile_defconfig b/arch/arm/configs/shmobile_defconfig
+index d7346ad51043..bfe79d5b8213 100644
+--- a/arch/arm/configs/shmobile_defconfig
++++ b/arch/arm/configs/shmobile_defconfig
+@@ -176,5 +176,5 @@ CONFIG_CPU_FREQ_GOV_USERSPACE=y
+ CONFIG_CPU_FREQ_GOV_ONDEMAND=y
+ CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
+ CONFIG_CPU_THERMAL=y
+-CONFIG_GENERIC_CPUFREQ_CPU0=y
++CONFIG_CPUFREQ_DT=y
+ CONFIG_REGULATOR_DA9210=y
+diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
+index c03106378b49..306e1ac2c8e3 100644
+--- a/arch/arm/kernel/setup.c
++++ b/arch/arm/kernel/setup.c
+@@ -1043,6 +1043,15 @@ static int c_show(struct seq_file *m, void *v)
+ seq_printf(m, "model name\t: %s rev %d (%s)\n",
+ cpu_name, cpuid & 15, elf_platform);
+
++#if defined(CONFIG_SMP)
++ seq_printf(m, "BogoMIPS\t: %lu.%02lu\n",
++ per_cpu(cpu_data, i).loops_per_jiffy / (500000UL/HZ),
++ (per_cpu(cpu_data, i).loops_per_jiffy / (5000UL/HZ)) % 100);
++#else
++ seq_printf(m, "BogoMIPS\t: %lu.%02lu\n",
++ loops_per_jiffy / (500000/HZ),
++ (loops_per_jiffy / (5000/HZ)) % 100);
++#endif
+ /* dump out the processor features */
+ seq_puts(m, "Features\t: ");
+
+diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
+index 13396d3d600e..a8e32aaf0383 100644
+--- a/arch/arm/kernel/smp.c
++++ b/arch/arm/kernel/smp.c
+@@ -387,8 +387,17 @@ asmlinkage void secondary_start_kernel(void)
+
+ void __init smp_cpus_done(unsigned int max_cpus)
+ {
+- printk(KERN_INFO "SMP: Total of %d processors activated.\n",
+- num_online_cpus());
++ int cpu;
++ unsigned long bogosum = 0;
++
++ for_each_online_cpu(cpu)
++ bogosum += per_cpu(cpu_data, cpu).loops_per_jiffy;
++
++ printk(KERN_INFO "SMP: Total of %d processors activated "
++ "(%lu.%02lu BogoMIPS).\n",
++ num_online_cpus(),
++ bogosum / (500000/HZ),
++ (bogosum / (5000/HZ)) % 100);
+
+ hyp_mode_check();
+ }
+diff --git a/arch/arm/mach-omap2/pm44xx.c b/arch/arm/mach-omap2/pm44xx.c
+index 503097c72b82..e7f823b960c2 100644
+--- a/arch/arm/mach-omap2/pm44xx.c
++++ b/arch/arm/mach-omap2/pm44xx.c
+@@ -160,26 +160,6 @@ static inline int omap4_init_static_deps(void)
+ struct clockdomain *ducati_clkdm, *l3_2_clkdm;
+ int ret = 0;
+
+- if (omap_rev() == OMAP4430_REV_ES1_0) {
+- WARN(1, "Power Management not supported on OMAP4430 ES1.0\n");
+- return -ENODEV;
+- }
+-
+- pr_err("Power Management for TI OMAP4.\n");
+- /*
+- * OMAP4 chip PM currently works only with certain (newer)
+- * versions of bootloaders. This is due to missing code in the
+- * kernel to properly reset and initialize some devices.
+- * http://www.spinics.net/lists/arm-kernel/msg218641.html
+- */
+- pr_warn("OMAP4 PM: u-boot >= v2012.07 is required for full PM support\n");
+-
+- ret = pwrdm_for_each(pwrdms_setup, NULL);
+- if (ret) {
+- pr_err("Failed to setup powerdomains\n");
+- return ret;
+- }
+-
+ /*
+ * The dynamic dependency between MPUSS -> MEMIF and
+ * MPUSS -> L4_PER/L3_* and DUCATI -> L3_* doesn't work as
+@@ -272,6 +252,15 @@ int __init omap4_pm_init(void)
+
+ pr_info("Power Management for TI OMAP4+ devices.\n");
+
++ /*
++ * OMAP4 chip PM currently works only with certain (newer)
++ * versions of bootloaders. This is due to missing code in the
++ * kernel to properly reset and initialize some devices.
++ * http://www.spinics.net/lists/arm-kernel/msg218641.html
++ */
++ if (cpu_is_omap44xx())
++ pr_warn("OMAP4 PM: u-boot >= v2012.07 is required for full PM support\n");
++
+ ret = pwrdm_for_each(pwrdms_setup, NULL);
+ if (ret) {
+ pr_err("Failed to setup powerdomains.\n");
+diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
+index 95c49ebc660d..1d85a7c5a850 100644
+--- a/arch/arm64/kernel/efi.c
++++ b/arch/arm64/kernel/efi.c
+@@ -327,6 +327,7 @@ void __init efi_idmap_init(void)
+
+ /* boot time idmap_pg_dir is incomplete, so fill in missing parts */
+ efi_setup_idmap();
++ early_memunmap(memmap.map, memmap.map_end - memmap.map);
+ }
+
+ static int __init remap_region(efi_memory_desc_t *md, void **new)
+@@ -381,7 +382,6 @@ static int __init arm64_enter_virtual_mode(void)
+ }
+
+ mapsize = memmap.map_end - memmap.map;
+- early_memunmap(memmap.map, mapsize);
+
+ if (efi_runtime_disabled()) {
+ pr_info("EFI runtime services will be disabled.\n");
+diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
+index f9620154bfb0..64c4f0800ee3 100644
+--- a/arch/arm64/kernel/setup.c
++++ b/arch/arm64/kernel/setup.c
+@@ -394,6 +394,7 @@ void __init setup_arch(char **cmdline_p)
+ request_standard_resources();
+
+ efi_idmap_init();
++ early_ioremap_reset();
+
+ unflatten_device_tree();
+
+diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
+index a564b440416a..ede186cdd452 100644
+--- a/arch/arm64/kernel/sleep.S
++++ b/arch/arm64/kernel/sleep.S
+@@ -147,14 +147,12 @@ cpu_resume_after_mmu:
+ ret
+ ENDPROC(cpu_resume_after_mmu)
+
+- .data
+ ENTRY(cpu_resume)
+ bl el2_setup // if in EL2 drop to EL1 cleanly
+ #ifdef CONFIG_SMP
+ mrs x1, mpidr_el1
+- adr x4, mpidr_hash_ptr
+- ldr x5, [x4]
+- add x8, x4, x5 // x8 = struct mpidr_hash phys address
++ adrp x8, mpidr_hash
++ add x8, x8, #:lo12:mpidr_hash // x8 = struct mpidr_hash phys address
+ /* retrieve mpidr_hash members to compute the hash */
+ ldr x2, [x8, #MPIDR_HASH_MASK]
+ ldp w3, w4, [x8, #MPIDR_HASH_SHIFTS]
+@@ -164,14 +162,15 @@ ENTRY(cpu_resume)
+ #else
+ mov x7, xzr
+ #endif
+- adr x0, sleep_save_sp
++ adrp x0, sleep_save_sp
++ add x0, x0, #:lo12:sleep_save_sp
+ ldr x0, [x0, #SLEEP_SAVE_SP_PHYS]
+ ldr x0, [x0, x7, lsl #3]
+ /* load sp from context */
+ ldr x2, [x0, #CPU_CTX_SP]
+- adr x1, sleep_idmap_phys
++ adrp x1, sleep_idmap_phys
+ /* load physical address of identity map page table in x1 */
+- ldr x1, [x1]
++ ldr x1, [x1, #:lo12:sleep_idmap_phys]
+ mov sp, x2
+ /*
+ * cpu_do_resume expects x0 to contain context physical address
+@@ -180,26 +179,3 @@ ENTRY(cpu_resume)
+ bl cpu_do_resume // PC relative jump, MMU off
+ b cpu_resume_mmu // Resume MMU, never returns
+ ENDPROC(cpu_resume)
+-
+- .align 3
+-mpidr_hash_ptr:
+- /*
+- * offset of mpidr_hash symbol from current location
+- * used to obtain run-time mpidr_hash address with MMU off
+- */
+- .quad mpidr_hash - .
+-/*
+- * physical address of identity mapped page tables
+- */
+- .type sleep_idmap_phys, #object
+-ENTRY(sleep_idmap_phys)
+- .quad 0
+-/*
+- * struct sleep_save_sp {
+- * phys_addr_t *save_ptr_stash;
+- * phys_addr_t save_ptr_stash_phys;
+- * };
+- */
+- .type sleep_save_sp, #object
+-ENTRY(sleep_save_sp)
+- .space SLEEP_SAVE_SP_SZ // struct sleep_save_sp
+diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
+index 13ad4dbb1615..2d6b6065fe7f 100644
+--- a/arch/arm64/kernel/suspend.c
++++ b/arch/arm64/kernel/suspend.c
+@@ -5,6 +5,7 @@
+ #include <asm/debug-monitors.h>
+ #include <asm/pgtable.h>
+ #include <asm/memory.h>
++#include <asm/mmu_context.h>
+ #include <asm/smp_plat.h>
+ #include <asm/suspend.h>
+ #include <asm/tlbflush.h>
+@@ -98,7 +99,18 @@ int __cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
+ */
+ ret = __cpu_suspend_enter(arg, fn);
+ if (ret == 0) {
+- cpu_switch_mm(mm->pgd, mm);
++ /*
++ * We are resuming from reset with TTBR0_EL1 set to the
++ * idmap to enable the MMU; restore the active_mm mappings in
++ * TTBR0_EL1 unless the active_mm == &init_mm, in which case
++ * the thread entered __cpu_suspend with TTBR0_EL1 set to
++ * reserved TTBR0 page tables and should be restored as such.
++ */
++ if (mm == &init_mm)
++ cpu_set_reserved_ttbr0();
++ else
++ cpu_switch_mm(mm->pgd, mm);
++
+ flush_tlb_all();
+
+ /*
+@@ -126,8 +138,8 @@ int __cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
+ return ret;
+ }
+
+-extern struct sleep_save_sp sleep_save_sp;
+-extern phys_addr_t sleep_idmap_phys;
++struct sleep_save_sp sleep_save_sp;
++phys_addr_t sleep_idmap_phys;
+
+ static int __init cpu_suspend_init(void)
+ {
+diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
+index c998279bd85b..a68ee15964b3 100644
+--- a/arch/powerpc/include/asm/reg.h
++++ b/arch/powerpc/include/asm/reg.h
+@@ -118,8 +118,10 @@
+ #define __MSR (MSR_ME | MSR_RI | MSR_IR | MSR_DR | MSR_ISF |MSR_HV)
+ #ifdef __BIG_ENDIAN__
+ #define MSR_ __MSR
++#define MSR_IDLE (MSR_ME | MSR_SF | MSR_HV)
+ #else
+ #define MSR_ (__MSR | MSR_LE)
++#define MSR_IDLE (MSR_ME | MSR_SF | MSR_HV | MSR_LE)
+ #endif
+ #define MSR_KERNEL (MSR_ | MSR_64BIT)
+ #define MSR_USER32 (MSR_ | MSR_PR | MSR_EE)
+diff --git a/arch/powerpc/include/asm/syscall.h b/arch/powerpc/include/asm/syscall.h
+index 6240698fee9a..ff21b7a2f0cc 100644
+--- a/arch/powerpc/include/asm/syscall.h
++++ b/arch/powerpc/include/asm/syscall.h
+@@ -90,6 +90,10 @@ static inline void syscall_set_arguments(struct task_struct *task,
+
+ static inline int syscall_get_arch(void)
+ {
+- return is_32bit_task() ? AUDIT_ARCH_PPC : AUDIT_ARCH_PPC64;
++ int arch = is_32bit_task() ? AUDIT_ARCH_PPC : AUDIT_ARCH_PPC64;
++#ifdef __LITTLE_ENDIAN__
++ arch |= __AUDIT_ARCH_LE;
++#endif
++ return arch;
+ }
+ #endif /* _ASM_SYSCALL_H */
+diff --git a/arch/powerpc/kernel/idle_power7.S b/arch/powerpc/kernel/idle_power7.S
+index c0754bbf8118..283c603716a0 100644
+--- a/arch/powerpc/kernel/idle_power7.S
++++ b/arch/powerpc/kernel/idle_power7.S
+@@ -101,7 +101,23 @@ _GLOBAL(power7_powersave_common)
+ std r9,_MSR(r1)
+ std r1,PACAR1(r13)
+
+-_GLOBAL(power7_enter_nap_mode)
++ /*
++ * Go to real mode to do the nap, as required by the architecture.
++ * Also, we need to be in real mode before setting hwthread_state,
++ * because as soon as we do that, another thread can switch
++ * the MMU context to the guest.
++ */
++ LOAD_REG_IMMEDIATE(r5, MSR_IDLE)
++ li r6, MSR_RI
++ andc r6, r9, r6
++ LOAD_REG_ADDR(r7, power7_enter_nap_mode)
++ mtmsrd r6, 1 /* clear RI before setting SRR0/1 */
++ mtspr SPRN_SRR0, r7
++ mtspr SPRN_SRR1, r5
++ rfid
++
++ .globl power7_enter_nap_mode
++power7_enter_nap_mode:
+ #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
+ /* Tell KVM we're napping */
+ li r4,KVM_HWTHREAD_IN_NAP
+diff --git a/arch/powerpc/kernel/mce_power.c b/arch/powerpc/kernel/mce_power.c
+index aa9aff3d6ad3..b6f123ab90ed 100644
+--- a/arch/powerpc/kernel/mce_power.c
++++ b/arch/powerpc/kernel/mce_power.c
+@@ -79,7 +79,7 @@ static long mce_handle_derror(uint64_t dsisr, uint64_t slb_error_bits)
+ }
+ if (dsisr & P7_DSISR_MC_TLB_MULTIHIT_MFTLB) {
+ if (cur_cpu_spec && cur_cpu_spec->flush_tlb)
+- cur_cpu_spec->flush_tlb(TLBIEL_INVAL_PAGE);
++ cur_cpu_spec->flush_tlb(TLBIEL_INVAL_SET);
+ /* reset error bits */
+ dsisr &= ~P7_DSISR_MC_TLB_MULTIHIT_MFTLB;
+ }
+@@ -110,7 +110,7 @@ static long mce_handle_common_ierror(uint64_t srr1)
+ break;
+ case P7_SRR1_MC_IFETCH_TLB_MULTIHIT:
+ if (cur_cpu_spec && cur_cpu_spec->flush_tlb) {
+- cur_cpu_spec->flush_tlb(TLBIEL_INVAL_PAGE);
++ cur_cpu_spec->flush_tlb(TLBIEL_INVAL_SET);
+ handled = 1;
+ }
+ break;
+diff --git a/arch/powerpc/kernel/udbg_16550.c b/arch/powerpc/kernel/udbg_16550.c
+index 6e7c4923b5ea..411116c38da4 100644
+--- a/arch/powerpc/kernel/udbg_16550.c
++++ b/arch/powerpc/kernel/udbg_16550.c
+@@ -69,8 +69,12 @@ static void udbg_uart_putc(char c)
+
+ static int udbg_uart_getc_poll(void)
+ {
+- if (!udbg_uart_in || !(udbg_uart_in(UART_LSR) & LSR_DR))
++ if (!udbg_uart_in)
++ return -1;
++
++ if (!(udbg_uart_in(UART_LSR) & LSR_DR))
+ return udbg_uart_in(UART_RBR);
++
+ return -1;
+ }
+
+diff --git a/arch/powerpc/perf/hv-24x7.c b/arch/powerpc/perf/hv-24x7.c
+index dba34088da28..d073e0679a0c 100644
+--- a/arch/powerpc/perf/hv-24x7.c
++++ b/arch/powerpc/perf/hv-24x7.c
+@@ -217,11 +217,14 @@ static bool is_physical_domain(int domain)
+ domain == HV_24X7_PERF_DOMAIN_PHYSICAL_CORE;
+ }
+
++DEFINE_PER_CPU(char, hv_24x7_reqb[4096]) __aligned(4096);
++DEFINE_PER_CPU(char, hv_24x7_resb[4096]) __aligned(4096);
++
+ static unsigned long single_24x7_request(u8 domain, u32 offset, u16 ix,
+ u16 lpar, u64 *res,
+ bool success_expected)
+ {
+- unsigned long ret = -ENOMEM;
++ unsigned long ret;
+
+ /*
+ * request_buffer and result_buffer are not required to be 4k aligned,
+@@ -243,13 +246,11 @@ static unsigned long single_24x7_request(u8 domain, u32 offset, u16 ix,
+ BUILD_BUG_ON(sizeof(*request_buffer) > 4096);
+ BUILD_BUG_ON(sizeof(*result_buffer) > 4096);
+
+- request_buffer = kmem_cache_zalloc(hv_page_cache, GFP_USER);
+- if (!request_buffer)
+- goto out;
++ request_buffer = (void *)get_cpu_var(hv_24x7_reqb);
++ result_buffer = (void *)get_cpu_var(hv_24x7_resb);
+
+- result_buffer = kmem_cache_zalloc(hv_page_cache, GFP_USER);
+- if (!result_buffer)
+- goto out_free_request_buffer;
++ memset(request_buffer, 0, 4096);
++ memset(result_buffer, 0, 4096);
+
+ *request_buffer = (struct reqb) {
+ .buf = {
+@@ -278,15 +279,11 @@ static unsigned long single_24x7_request(u8 domain, u32 offset, u16 ix,
+ domain, offset, ix, lpar, ret, ret,
+ result_buffer->buf.detailed_rc,
+ result_buffer->buf.failing_request_ix);
+- goto out_free_result_buffer;
++ goto out;
+ }
+
+ *res = be64_to_cpu(result_buffer->result);
+
+-out_free_result_buffer:
+- kfree(result_buffer);
+-out_free_request_buffer:
+- kfree(request_buffer);
+ out:
+ return ret;
+ }
+diff --git a/arch/s390/kvm/gaccess.c b/arch/s390/kvm/gaccess.c
+index 0f961a1c64b3..6dc0ad9c7050 100644
+--- a/arch/s390/kvm/gaccess.c
++++ b/arch/s390/kvm/gaccess.c
+@@ -229,10 +229,12 @@ static void ipte_lock_simple(struct kvm_vcpu *vcpu)
+ goto out;
+ ic = &vcpu->kvm->arch.sca->ipte_control;
+ do {
+- old = ACCESS_ONCE(*ic);
++ old = *ic;
++ barrier();
+ while (old.k) {
+ cond_resched();
+- old = ACCESS_ONCE(*ic);
++ old = *ic;
++ barrier();
+ }
+ new = old;
+ new.k = 1;
+@@ -251,7 +253,9 @@ static void ipte_unlock_simple(struct kvm_vcpu *vcpu)
+ goto out;
+ ic = &vcpu->kvm->arch.sca->ipte_control;
+ do {
+- new = old = ACCESS_ONCE(*ic);
++ old = *ic;
++ barrier();
++ new = old;
+ new.k = 0;
+ } while (cmpxchg(&ic->val, old.val, new.val) != old.val);
+ wake_up(&vcpu->kvm->arch.ipte_wq);
+@@ -265,10 +269,12 @@ static void ipte_lock_siif(struct kvm_vcpu *vcpu)
+
+ ic = &vcpu->kvm->arch.sca->ipte_control;
+ do {
+- old = ACCESS_ONCE(*ic);
++ old = *ic;
++ barrier();
+ while (old.kg) {
+ cond_resched();
+- old = ACCESS_ONCE(*ic);
++ old = *ic;
++ barrier();
+ }
+ new = old;
+ new.k = 1;
+@@ -282,7 +288,9 @@ static void ipte_unlock_siif(struct kvm_vcpu *vcpu)
+
+ ic = &vcpu->kvm->arch.sca->ipte_control;
+ do {
+- new = old = ACCESS_ONCE(*ic);
++ old = *ic;
++ barrier();
++ new = old;
+ new.kh--;
+ if (!new.kh)
+ new.k = 0;
+diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
+index a39838457f01..4fc3fed636dc 100644
+--- a/arch/s390/kvm/interrupt.c
++++ b/arch/s390/kvm/interrupt.c
+@@ -270,7 +270,7 @@ static int __must_check __deliver_prog_irq(struct kvm_vcpu *vcpu,
+ break;
+ case PGM_MONITOR:
+ rc = put_guest_lc(vcpu, pgm_info->mon_class_nr,
+- (u64 *)__LC_MON_CLASS_NR);
++ (u16 *)__LC_MON_CLASS_NR);
+ rc |= put_guest_lc(vcpu, pgm_info->mon_code,
+ (u64 *)__LC_MON_CODE);
+ break;
+diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c
+index 72bb2dd8b9cd..9c565b6b4ccb 100644
+--- a/arch/s390/kvm/priv.c
++++ b/arch/s390/kvm/priv.c
+@@ -791,7 +791,7 @@ int kvm_s390_handle_lctl(struct kvm_vcpu *vcpu)
+ break;
+ reg = (reg + 1) % 16;
+ } while (1);
+-
++ kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu);
+ return 0;
+ }
+
+@@ -863,7 +863,7 @@ static int handle_lctlg(struct kvm_vcpu *vcpu)
+ break;
+ reg = (reg + 1) % 16;
+ } while (1);
+-
++ kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu);
+ return 0;
+ }
+
+diff --git a/arch/x86/crypto/Makefile b/arch/x86/crypto/Makefile
+index fd0f848938cc..5a4a089e8b1f 100644
+--- a/arch/x86/crypto/Makefile
++++ b/arch/x86/crypto/Makefile
+@@ -26,7 +26,6 @@ obj-$(CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL) += ghash-clmulni-intel.o
+
+ obj-$(CONFIG_CRYPTO_CRC32C_INTEL) += crc32c-intel.o
+ obj-$(CONFIG_CRYPTO_SHA1_SSSE3) += sha1-ssse3.o
+-obj-$(CONFIG_CRYPTO_SHA1_MB) += sha-mb/
+ obj-$(CONFIG_CRYPTO_CRC32_PCLMUL) += crc32-pclmul.o
+ obj-$(CONFIG_CRYPTO_SHA256_SSSE3) += sha256-ssse3.o
+ obj-$(CONFIG_CRYPTO_SHA512_SSSE3) += sha512-ssse3.o
+@@ -46,6 +45,7 @@ endif
+ ifeq ($(avx2_supported),yes)
+ obj-$(CONFIG_CRYPTO_CAMELLIA_AESNI_AVX2_X86_64) += camellia-aesni-avx2.o
+ obj-$(CONFIG_CRYPTO_SERPENT_AVX2_X86_64) += serpent-avx2.o
++ obj-$(CONFIG_CRYPTO_SHA1_MB) += sha-mb/
+ endif
+
+ aes-i586-y := aes-i586-asm_32.o aes_glue.o
+diff --git a/arch/x86/crypto/aes_ctrby8_avx-x86_64.S b/arch/x86/crypto/aes_ctrby8_avx-x86_64.S
+index 2df2a0298f5a..a916c4a61165 100644
+--- a/arch/x86/crypto/aes_ctrby8_avx-x86_64.S
++++ b/arch/x86/crypto/aes_ctrby8_avx-x86_64.S
+@@ -208,7 +208,7 @@ ddq_add_8:
+
+ .if (klen == KEY_128)
+ .if (load_keys)
+- vmovdqa 3*16(p_keys), xkeyA
++ vmovdqa 3*16(p_keys), xkey4
+ .endif
+ .else
+ vmovdqa 3*16(p_keys), xkeyA
+@@ -224,7 +224,7 @@ ddq_add_8:
+ add $(16*by), p_in
+
+ .if (klen == KEY_128)
+- vmovdqa 4*16(p_keys), xkey4
++ vmovdqa 4*16(p_keys), xkeyB
+ .else
+ .if (load_keys)
+ vmovdqa 4*16(p_keys), xkey4
+@@ -234,7 +234,12 @@ ddq_add_8:
+ .set i, 0
+ .rept by
+ club XDATA, i
+- vaesenc xkeyA, var_xdata, var_xdata /* key 3 */
++ /* key 3 */
++ .if (klen == KEY_128)
++ vaesenc xkey4, var_xdata, var_xdata
++ .else
++ vaesenc xkeyA, var_xdata, var_xdata
++ .endif
+ .set i, (i +1)
+ .endr
+
+@@ -243,13 +248,18 @@ ddq_add_8:
+ .set i, 0
+ .rept by
+ club XDATA, i
+- vaesenc xkey4, var_xdata, var_xdata /* key 4 */
++ /* key 4 */
++ .if (klen == KEY_128)
++ vaesenc xkeyB, var_xdata, var_xdata
++ .else
++ vaesenc xkey4, var_xdata, var_xdata
++ .endif
+ .set i, (i +1)
+ .endr
+
+ .if (klen == KEY_128)
+ .if (load_keys)
+- vmovdqa 6*16(p_keys), xkeyB
++ vmovdqa 6*16(p_keys), xkey8
+ .endif
+ .else
+ vmovdqa 6*16(p_keys), xkeyB
+@@ -267,12 +277,17 @@ ddq_add_8:
+ .set i, 0
+ .rept by
+ club XDATA, i
+- vaesenc xkeyB, var_xdata, var_xdata /* key 6 */
++ /* key 6 */
++ .if (klen == KEY_128)
++ vaesenc xkey8, var_xdata, var_xdata
++ .else
++ vaesenc xkeyB, var_xdata, var_xdata
++ .endif
+ .set i, (i +1)
+ .endr
+
+ .if (klen == KEY_128)
+- vmovdqa 8*16(p_keys), xkey8
++ vmovdqa 8*16(p_keys), xkeyB
+ .else
+ .if (load_keys)
+ vmovdqa 8*16(p_keys), xkey8
+@@ -288,7 +303,7 @@ ddq_add_8:
+
+ .if (klen == KEY_128)
+ .if (load_keys)
+- vmovdqa 9*16(p_keys), xkeyA
++ vmovdqa 9*16(p_keys), xkey12
+ .endif
+ .else
+ vmovdqa 9*16(p_keys), xkeyA
+@@ -297,7 +312,12 @@ ddq_add_8:
+ .set i, 0
+ .rept by
+ club XDATA, i
+- vaesenc xkey8, var_xdata, var_xdata /* key 8 */
++ /* key 8 */
++ .if (klen == KEY_128)
++ vaesenc xkeyB, var_xdata, var_xdata
++ .else
++ vaesenc xkey8, var_xdata, var_xdata
++ .endif
+ .set i, (i +1)
+ .endr
+
+@@ -306,7 +326,12 @@ ddq_add_8:
+ .set i, 0
+ .rept by
+ club XDATA, i
+- vaesenc xkeyA, var_xdata, var_xdata /* key 9 */
++ /* key 9 */
++ .if (klen == KEY_128)
++ vaesenc xkey12, var_xdata, var_xdata
++ .else
++ vaesenc xkeyA, var_xdata, var_xdata
++ .endif
+ .set i, (i +1)
+ .endr
+
+@@ -412,7 +437,6 @@ ddq_add_8:
+ /* main body of aes ctr load */
+
+ .macro do_aes_ctrmain key_len
+-
+ cmp $16, num_bytes
+ jb .Ldo_return2\key_len
+
+diff --git a/arch/x86/include/asm/vsyscall.h b/arch/x86/include/asm/vsyscall.h
+index 2a46ca720afc..2874be9aef0a 100644
+--- a/arch/x86/include/asm/vsyscall.h
++++ b/arch/x86/include/asm/vsyscall.h
+@@ -34,7 +34,7 @@ static inline unsigned int __getcpu(void)
+ native_read_tscp(&p);
+ } else {
+ /* Load per CPU data from GDT */
+- asm("lsl %1,%0" : "=r" (p) : "r" (__PER_CPU_SEG));
++ asm volatile ("lsl %1,%0" : "=r" (p) : "r" (__PER_CPU_SEG));
+ }
+
+ return p;
+diff --git a/arch/x86/kernel/cpu/perf_event_intel_uncore.c b/arch/x86/kernel/cpu/perf_event_intel_uncore.c
+index 9762dbd9f3f7..e98f68cfea02 100644
+--- a/arch/x86/kernel/cpu/perf_event_intel_uncore.c
++++ b/arch/x86/kernel/cpu/perf_event_intel_uncore.c
+@@ -276,6 +276,17 @@ static struct intel_uncore_box *uncore_alloc_box(struct intel_uncore_type *type,
+ return box;
+ }
+
++/*
++ * Using uncore_pmu_event_init pmu event_init callback
++ * as a detection point for uncore events.
++ */
++static int uncore_pmu_event_init(struct perf_event *event);
++
++static bool is_uncore_event(struct perf_event *event)
++{
++ return event->pmu->event_init == uncore_pmu_event_init;
++}
++
+ static int
+ uncore_collect_events(struct intel_uncore_box *box, struct perf_event *leader, bool dogrp)
+ {
+@@ -290,13 +301,18 @@ uncore_collect_events(struct intel_uncore_box *box, struct perf_event *leader, b
+ return -EINVAL;
+
+ n = box->n_events;
+- box->event_list[n] = leader;
+- n++;
++
++ if (is_uncore_event(leader)) {
++ box->event_list[n] = leader;
++ n++;
++ }
++
+ if (!dogrp)
+ return n;
+
+ list_for_each_entry(event, &leader->sibling_list, group_entry) {
+- if (event->state <= PERF_EVENT_STATE_OFF)
++ if (!is_uncore_event(event) ||
++ event->state <= PERF_EVENT_STATE_OFF)
+ continue;
+
+ if (n >= max_count)
+diff --git a/arch/x86/kernel/cpu/perf_event_intel_uncore.h b/arch/x86/kernel/cpu/perf_event_intel_uncore.h
+index 18eb78bbdd10..863d9b02563e 100644
+--- a/arch/x86/kernel/cpu/perf_event_intel_uncore.h
++++ b/arch/x86/kernel/cpu/perf_event_intel_uncore.h
+@@ -17,7 +17,7 @@
+ #define UNCORE_PCI_DEV_TYPE(data) ((data >> 8) & 0xff)
+ #define UNCORE_PCI_DEV_IDX(data) (data & 0xff)
+ #define UNCORE_EXTRA_PCI_DEV 0xff
+-#define UNCORE_EXTRA_PCI_DEV_MAX 2
++#define UNCORE_EXTRA_PCI_DEV_MAX 3
+
+ /* support up to 8 sockets */
+ #define UNCORE_SOCKET_MAX 8
+diff --git a/arch/x86/kernel/cpu/perf_event_intel_uncore_snbep.c b/arch/x86/kernel/cpu/perf_event_intel_uncore_snbep.c
+index f9ed429d6e4f..ab474faa262b 100644
+--- a/arch/x86/kernel/cpu/perf_event_intel_uncore_snbep.c
++++ b/arch/x86/kernel/cpu/perf_event_intel_uncore_snbep.c
+@@ -887,6 +887,7 @@ void snbep_uncore_cpu_init(void)
+ enum {
+ SNBEP_PCI_QPI_PORT0_FILTER,
+ SNBEP_PCI_QPI_PORT1_FILTER,
++ HSWEP_PCI_PCU_3,
+ };
+
+ static int snbep_qpi_hw_config(struct intel_uncore_box *box, struct perf_event *event)
+@@ -2022,6 +2023,17 @@ void hswep_uncore_cpu_init(void)
+ {
+ if (hswep_uncore_cbox.num_boxes > boot_cpu_data.x86_max_cores)
+ hswep_uncore_cbox.num_boxes = boot_cpu_data.x86_max_cores;
++
++ /* Detect 6-8 core systems with only two SBOXes */
++ if (uncore_extra_pci_dev[0][HSWEP_PCI_PCU_3]) {
++ u32 capid4;
++
++ pci_read_config_dword(uncore_extra_pci_dev[0][HSWEP_PCI_PCU_3],
++ 0x94, &capid4);
++ if (((capid4 >> 6) & 0x3) == 0)
++ hswep_uncore_sbox.num_boxes = 2;
++ }
++
+ uncore_msr_uncores = hswep_msr_uncores;
+ }
+
+@@ -2279,6 +2291,11 @@ static DEFINE_PCI_DEVICE_TABLE(hswep_uncore_pci_ids) = {
+ .driver_data = UNCORE_PCI_DEV_DATA(UNCORE_EXTRA_PCI_DEV,
+ SNBEP_PCI_QPI_PORT1_FILTER),
+ },
++ { /* PCU.3 (for Capability registers) */
++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x2fc0),
++ .driver_data = UNCORE_PCI_DEV_DATA(UNCORE_EXTRA_PCI_DEV,
++ HSWEP_PCI_PCU_3),
++ },
+ { /* end: all zeroes */ }
+ };
+
+diff --git a/arch/x86/kernel/xsave.c b/arch/x86/kernel/xsave.c
+index 4c540c4719d8..0de1fae2bdf0 100644
+--- a/arch/x86/kernel/xsave.c
++++ b/arch/x86/kernel/xsave.c
+@@ -738,3 +738,4 @@ void *get_xsave_addr(struct xsave_struct *xsave, int xstate)
+
+ return (void *)xsave + xstate_comp_offsets[feature];
+ }
++EXPORT_SYMBOL_GPL(get_xsave_addr);
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index 976e3a57f9ea..88f92014ba6b 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -319,6 +319,10 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
+ F(BMI2) | F(ERMS) | f_invpcid | F(RTM) | f_mpx | F(RDSEED) |
+ F(ADX) | F(SMAP);
+
++ /* cpuid 0xD.1.eax */
++ const u32 kvm_supported_word10_x86_features =
++ F(XSAVEOPT) | F(XSAVEC) | F(XGETBV1);
++
+ /* all calls to cpuid_count() should be made on the same cpu */
+ get_cpu();
+
+@@ -455,13 +459,18 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
+ entry->eax &= supported;
+ entry->edx &= supported >> 32;
+ entry->flags |= KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
++ if (!supported)
++ break;
++
+ for (idx = 1, i = 1; idx < 64; ++idx) {
+ u64 mask = ((u64)1 << idx);
+ if (*nent >= maxnent)
+ goto out;
+
+ do_cpuid_1_ent(&entry[i], function, idx);
+- if (entry[i].eax == 0 || !(supported & mask))
++ if (idx == 1)
++ entry[i].eax &= kvm_supported_word10_x86_features;
++ else if (entry[i].eax == 0 || !(supported & mask))
+ continue;
+ entry[i].flags |=
+ KVM_CPUID_FLAG_SIGNIFCANT_INDEX;
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index 9f8a2faf5040..22e7ed9e6d8e 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -2128,7 +2128,7 @@ static int em_ret_far(struct x86_emulate_ctxt *ctxt)
+ /* Outer-privilege level return is not implemented */
+ if (ctxt->mode >= X86EMUL_MODE_PROT16 && (cs & 3) > cpl)
+ return X86EMUL_UNHANDLEABLE;
+- rc = __load_segment_descriptor(ctxt, (u16)cs, VCPU_SREG_CS, 0, false,
++ rc = __load_segment_descriptor(ctxt, (u16)cs, VCPU_SREG_CS, cpl, false,
+ &new_desc);
+ if (rc != X86EMUL_CONTINUE)
+ return rc;
+diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
+index 978f402006ee..9c12e63c653f 100644
+--- a/arch/x86/kvm/mmu.c
++++ b/arch/x86/kvm/mmu.c
+@@ -4449,7 +4449,7 @@ void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm)
+ * zap all shadow pages.
+ */
+ if (unlikely(kvm_current_mmio_generation(kvm) == 0)) {
+- printk_ratelimited(KERN_INFO "kvm: zapping shadow pages for mmio generation wraparound\n");
++ printk_ratelimited(KERN_DEBUG "kvm: zapping shadow pages for mmio generation wraparound\n");
+ kvm_mmu_invalidate_zap_all_pages(kvm);
+ }
+ }
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 0033df32a745..506488cfa385 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -3128,15 +3128,89 @@ static int kvm_vcpu_ioctl_x86_set_debugregs(struct kvm_vcpu *vcpu,
+ return 0;
+ }
+
++#define XSTATE_COMPACTION_ENABLED (1ULL << 63)
++
++static void fill_xsave(u8 *dest, struct kvm_vcpu *vcpu)
++{
++ struct xsave_struct *xsave = &vcpu->arch.guest_fpu.state->xsave;
++ u64 xstate_bv = xsave->xsave_hdr.xstate_bv;
++ u64 valid;
++
++ /*
++ * Copy legacy XSAVE area, to avoid complications with CPUID
++ * leaves 0 and 1 in the loop below.
++ */
++ memcpy(dest, xsave, XSAVE_HDR_OFFSET);
++
++ /* Set XSTATE_BV */
++ *(u64 *)(dest + XSAVE_HDR_OFFSET) = xstate_bv;
++
++ /*
++ * Copy each region from the possibly compacted offset to the
++ * non-compacted offset.
++ */
++ valid = xstate_bv & ~XSTATE_FPSSE;
++ while (valid) {
++ u64 feature = valid & -valid;
++ int index = fls64(feature) - 1;
++ void *src = get_xsave_addr(xsave, feature);
++
++ if (src) {
++ u32 size, offset, ecx, edx;
++ cpuid_count(XSTATE_CPUID, index,
++ &size, &offset, &ecx, &edx);
++ memcpy(dest + offset, src, size);
++ }
++
++ valid -= feature;
++ }
++}
++
++static void load_xsave(struct kvm_vcpu *vcpu, u8 *src)
++{
++ struct xsave_struct *xsave = &vcpu->arch.guest_fpu.state->xsave;
++ u64 xstate_bv = *(u64 *)(src + XSAVE_HDR_OFFSET);
++ u64 valid;
++
++ /*
++ * Copy legacy XSAVE area, to avoid complications with CPUID
++ * leaves 0 and 1 in the loop below.
++ */
++ memcpy(xsave, src, XSAVE_HDR_OFFSET);
++
++ /* Set XSTATE_BV and possibly XCOMP_BV. */
++ xsave->xsave_hdr.xstate_bv = xstate_bv;
++ if (cpu_has_xsaves)
++ xsave->xsave_hdr.xcomp_bv = host_xcr0 | XSTATE_COMPACTION_ENABLED;
++
++ /*
++ * Copy each region from the non-compacted offset to the
++ * possibly compacted offset.
++ */
++ valid = xstate_bv & ~XSTATE_FPSSE;
++ while (valid) {
++ u64 feature = valid & -valid;
++ int index = fls64(feature) - 1;
++ void *dest = get_xsave_addr(xsave, feature);
++
++ if (dest) {
++ u32 size, offset, ecx, edx;
++ cpuid_count(XSTATE_CPUID, index,
++ &size, &offset, &ecx, &edx);
++ memcpy(dest, src + offset, size);
++ } else
++ WARN_ON_ONCE(1);
++
++ valid -= feature;
++ }
++}
++
+ static void kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu,
+ struct kvm_xsave *guest_xsave)
+ {
+ if (cpu_has_xsave) {
+- memcpy(guest_xsave->region,
+- &vcpu->arch.guest_fpu.state->xsave,
+- vcpu->arch.guest_xstate_size);
+- *(u64 *)&guest_xsave->region[XSAVE_HDR_OFFSET / sizeof(u32)] &=
+- vcpu->arch.guest_supported_xcr0 | XSTATE_FPSSE;
++ memset(guest_xsave, 0, sizeof(struct kvm_xsave));
++ fill_xsave((u8 *) guest_xsave->region, vcpu);
+ } else {
+ memcpy(guest_xsave->region,
+ &vcpu->arch.guest_fpu.state->fxsave,
+@@ -3160,8 +3234,7 @@ static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu,
+ */
+ if (xstate_bv & ~kvm_supported_xcr0())
+ return -EINVAL;
+- memcpy(&vcpu->arch.guest_fpu.state->xsave,
+- guest_xsave->region, vcpu->arch.guest_xstate_size);
++ load_xsave(vcpu, (u8 *)guest_xsave->region);
+ } else {
+ if (xstate_bv & ~XSTATE_FPSSE)
+ return -EINVAL;
+@@ -6873,6 +6946,9 @@ int fx_init(struct kvm_vcpu *vcpu)
+ return err;
+
+ fpu_finit(&vcpu->arch.guest_fpu);
++ if (cpu_has_xsaves)
++ vcpu->arch.guest_fpu.state->xsave.xsave_hdr.xcomp_bv =
++ host_xcr0 | XSTATE_COMPACTION_ENABLED;
+
+ /*
+ * Ensure guest xcr0 is valid for loading
+diff --git a/arch/x86/vdso/vma.c b/arch/x86/vdso/vma.c
+index 970463b566cf..208c2206df46 100644
+--- a/arch/x86/vdso/vma.c
++++ b/arch/x86/vdso/vma.c
+@@ -54,12 +54,17 @@ subsys_initcall(init_vdso);
+
+ struct linux_binprm;
+
+-/* Put the vdso above the (randomized) stack with another randomized offset.
+- This way there is no hole in the middle of address space.
+- To save memory make sure it is still in the same PTE as the stack top.
+- This doesn't give that many random bits.
+-
+- Only used for the 64-bit and x32 vdsos. */
++/*
++ * Put the vdso above the (randomized) stack with another randomized
++ * offset. This way there is no hole in the middle of address space.
++ * To save memory make sure it is still in the same PTE as the stack
++ * top. This doesn't give that many random bits.
++ *
++ * Note that this algorithm is imperfect: the distribution of the vdso
++ * start address within a PMD is biased toward the end.
++ *
++ * Only used for the 64-bit and x32 vdsos.
++ */
+ static unsigned long vdso_addr(unsigned long start, unsigned len)
+ {
+ #ifdef CONFIG_X86_32
+@@ -67,22 +72,30 @@ static unsigned long vdso_addr(unsigned long start, unsigned len)
+ #else
+ unsigned long addr, end;
+ unsigned offset;
+- end = (start + PMD_SIZE - 1) & PMD_MASK;
++
++ /*
++ * Round up the start address. It can start out unaligned as a result
++ * of stack start randomization.
++ */
++ start = PAGE_ALIGN(start);
++
++ /* Round the lowest possible end address up to a PMD boundary. */
++ end = (start + len + PMD_SIZE - 1) & PMD_MASK;
+ if (end >= TASK_SIZE_MAX)
+ end = TASK_SIZE_MAX;
+ end -= len;
+- /* This loses some more bits than a modulo, but is cheaper */
+- offset = get_random_int() & (PTRS_PER_PTE - 1);
+- addr = start + (offset << PAGE_SHIFT);
+- if (addr >= end)
+- addr = end;
++
++ if (end > start) {
++ offset = get_random_int() % (((end - start) >> PAGE_SHIFT) + 1);
++ addr = start + (offset << PAGE_SHIFT);
++ } else {
++ addr = start;
++ }
+
+ /*
+- * page-align it here so that get_unmapped_area doesn't
+- * align it wrongfully again to the next page. addr can come in 4K
+- * unaligned here as a result of stack start randomization.
++ * Forcibly align the final address in case we have a hardware
++ * issue that requires alignment for performance reasons.
+ */
+- addr = PAGE_ALIGN(addr);
+ addr = align_vdso_addr(addr);
+
+ return addr;
+diff --git a/arch/xtensa/include/asm/highmem.h b/arch/xtensa/include/asm/highmem.h
+index 2c7901edffaf..01cef6b40829 100644
+--- a/arch/xtensa/include/asm/highmem.h
++++ b/arch/xtensa/include/asm/highmem.h
+@@ -25,7 +25,7 @@
+ #define PKMAP_NR(virt) (((virt) - PKMAP_BASE) >> PAGE_SHIFT)
+ #define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))
+
+-#define kmap_prot PAGE_KERNEL
++#define kmap_prot PAGE_KERNEL_EXEC
+
+ #if DCACHE_WAY_SIZE > PAGE_SIZE
+ #define get_pkmap_color get_pkmap_color
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 0421b53e6431..93f9152fc271 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -525,6 +525,9 @@ void blk_cleanup_queue(struct request_queue *q)
+ del_timer_sync(&q->backing_dev_info.laptop_mode_wb_timer);
+ blk_sync_queue(q);
+
++ if (q->mq_ops)
++ blk_mq_free_queue(q);
++
+ spin_lock_irq(lock);
+ if (q->queue_lock != &q->__queue_lock)
+ q->queue_lock = &q->__queue_lock;
+diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-cpumap.c
+index 1065d7c65fa1..72e5ed691e37 100644
+--- a/block/blk-mq-cpumap.c
++++ b/block/blk-mq-cpumap.c
+@@ -90,7 +90,7 @@ unsigned int *blk_mq_make_queue_map(struct blk_mq_tag_set *set)
+ unsigned int *map;
+
+ /* If cpus are offline, map them to first hctx */
+- map = kzalloc_node(sizeof(*map) * num_possible_cpus(), GFP_KERNEL,
++ map = kzalloc_node(sizeof(*map) * nr_cpu_ids, GFP_KERNEL,
+ set->numa_node);
+ if (!map)
+ return NULL;
+diff --git a/block/blk-mq-sysfs.c b/block/blk-mq-sysfs.c
+index 371d8800b48a..1630a20d5dcf 100644
+--- a/block/blk-mq-sysfs.c
++++ b/block/blk-mq-sysfs.c
+@@ -390,16 +390,15 @@ static void blk_mq_sysfs_init(struct request_queue *q)
+ {
+ struct blk_mq_hw_ctx *hctx;
+ struct blk_mq_ctx *ctx;
+- int i, j;
++ int i;
+
+ kobject_init(&q->mq_kobj, &blk_mq_ktype);
+
+- queue_for_each_hw_ctx(q, hctx, i) {
++ queue_for_each_hw_ctx(q, hctx, i)
+ kobject_init(&hctx->kobj, &blk_mq_hw_ktype);
+
+- hctx_for_each_ctx(hctx, ctx, j)
+- kobject_init(&ctx->kobj, &blk_mq_ctx_ktype);
+- }
++ queue_for_each_ctx(q, ctx, i)
++ kobject_init(&ctx->kobj, &blk_mq_ctx_ktype);
+ }
+
+ /* see blk_register_queue() */
+diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
+index 8317175a3009..ff18dab6b585 100644
+--- a/block/blk-mq-tag.c
++++ b/block/blk-mq-tag.c
+@@ -137,6 +137,7 @@ static inline bool hctx_may_queue(struct blk_mq_hw_ctx *hctx,
+ static int __bt_get_word(struct blk_align_bitmap *bm, unsigned int last_tag)
+ {
+ int tag, org_last_tag, end;
++ bool wrap = last_tag != 0;
+
+ org_last_tag = last_tag;
+ end = bm->depth;
+@@ -148,15 +149,16 @@ restart:
+ * We started with an offset, start from 0 to
+ * exhaust the map.
+ */
+- if (org_last_tag && last_tag) {
+- end = last_tag;
++ if (wrap) {
++ wrap = false;
++ end = org_last_tag;
+ last_tag = 0;
+ goto restart;
+ }
+ return -1;
+ }
+ last_tag = tag + 1;
+- } while (test_and_set_bit_lock(tag, &bm->word));
++ } while (test_and_set_bit(tag, &bm->word));
+
+ return tag;
+ }
+@@ -340,11 +342,10 @@ static void bt_clear_tag(struct blk_mq_bitmap_tags *bt, unsigned int tag)
+ struct bt_wait_state *bs;
+ int wait_cnt;
+
+- /*
+- * The unlock memory barrier need to order access to req in free
+- * path and clearing tag bit
+- */
+- clear_bit_unlock(TAG_TO_BIT(bt, tag), &bt->map[index].word);
++ clear_bit(TAG_TO_BIT(bt, tag), &bt->map[index].word);
++
++ /* Ensure that the wait list checks occur after clear_bit(). */
++ smp_mb();
+
+ bs = bt_wake_ptr(bt);
+ if (!bs)
+diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
+index 1fac43408911..935ea2aa0730 100644
+--- a/block/blk-sysfs.c
++++ b/block/blk-sysfs.c
+@@ -492,17 +492,15 @@ static void blk_free_queue_rcu(struct rcu_head *rcu_head)
+ * Currently, its primary task it to free all the &struct request
+ * structures that were allocated to the queue and the queue itself.
+ *
+- * Caveat:
+- * Hopefully the low level driver will have finished any
+- * outstanding requests first...
++ * Note:
++ * The low level driver must have finished any outstanding requests first
++ * via blk_cleanup_queue().
+ **/
+ static void blk_release_queue(struct kobject *kobj)
+ {
+ struct request_queue *q =
+ container_of(kobj, struct request_queue, kobj);
+
+- blk_sync_queue(q);
+-
+ blkcg_exit_queue(q);
+
+ if (q->elevator) {
+@@ -517,9 +515,7 @@ static void blk_release_queue(struct kobject *kobj)
+ if (q->queue_tags)
+ __blk_queue_free_tags(q);
+
+- if (q->mq_ops)
+- blk_mq_free_queue(q);
+- else
++ if (!q->mq_ops)
+ blk_free_flush_queue(q->fq);
+
+ blk_trace_shutdown(q);
+diff --git a/block/genhd.c b/block/genhd.c
+index bd3060684ab2..0a536dc05f3b 100644
+--- a/block/genhd.c
++++ b/block/genhd.c
+@@ -1070,9 +1070,16 @@ int disk_expand_part_tbl(struct gendisk *disk, int partno)
+ struct disk_part_tbl *old_ptbl = disk->part_tbl;
+ struct disk_part_tbl *new_ptbl;
+ int len = old_ptbl ? old_ptbl->len : 0;
+- int target = partno + 1;
++ int i, target;
+ size_t size;
+- int i;
++
++ /*
++ * check for int overflow, since we can get here from blkpg_ioctl()
++ * with a user passed 'partno'.
++ */
++ target = partno + 1;
++ if (target < 0)
++ return -EINVAL;
+
+ /* disk_max_parts() is zero during initialization, ignore if so */
+ if (disk_max_parts(disk) && target > disk_max_parts(disk))
+diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c
+index 7db193160766..93b71420a046 100644
+--- a/drivers/acpi/device_pm.c
++++ b/drivers/acpi/device_pm.c
+@@ -257,7 +257,7 @@ int acpi_bus_init_power(struct acpi_device *device)
+
+ device->power.state = ACPI_STATE_UNKNOWN;
+ if (!acpi_device_is_present(device))
+- return 0;
++ return -ENXIO;
+
+ result = acpi_device_get_power(device, &state);
+ if (result)
+diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
+index 0476e90b2091..c9ea3dfb4974 100644
+--- a/drivers/acpi/scan.c
++++ b/drivers/acpi/scan.c
+@@ -909,7 +909,7 @@ static void acpi_free_power_resources_lists(struct acpi_device *device)
+ if (device->wakeup.flags.valid)
+ acpi_power_resources_list_free(&device->wakeup.resources);
+
+- if (!device->flags.power_manageable)
++ if (!device->power.flags.power_resources)
+ return;
+
+ for (i = ACPI_STATE_D0; i <= ACPI_STATE_D3_HOT; i++) {
+@@ -1631,10 +1631,8 @@ static void acpi_bus_get_power_flags(struct acpi_device *device)
+ device->power.flags.power_resources)
+ device->power.states[ACPI_STATE_D3_COLD].flags.os_accessible = 1;
+
+- if (acpi_bus_init_power(device)) {
+- acpi_free_power_resources_lists(device);
++ if (acpi_bus_init_power(device))
+ device->flags.power_manageable = 0;
+- }
+ }
+
+ static void acpi_bus_get_flags(struct acpi_device *device)
+@@ -2202,13 +2200,18 @@ static void acpi_bus_attach(struct acpi_device *device)
+ /* Skip devices that are not present. */
+ if (!acpi_device_is_present(device)) {
+ device->flags.visited = false;
++ device->flags.power_manageable = 0;
+ return;
+ }
+ if (device->handler)
+ goto ok;
+
+ if (!device->flags.initialized) {
+- acpi_bus_update_power(device, NULL);
++ device->flags.power_manageable =
++ device->power.states[ACPI_STATE_D0].flags.valid;
++ if (acpi_bus_init_power(device))
++ device->flags.power_manageable = 0;
++
+ device->flags.initialized = true;
+ }
+ device->flags.visited = false;
+diff --git a/drivers/acpi/video.c b/drivers/acpi/video.c
+index 9d75ead2a1f9..41322591fb43 100644
+--- a/drivers/acpi/video.c
++++ b/drivers/acpi/video.c
+@@ -155,6 +155,7 @@ struct acpi_video_bus {
+ u8 dos_setting;
+ struct acpi_video_enumerated_device *attached_array;
+ u8 attached_count;
++ u8 child_count;
+ struct acpi_video_bus_cap cap;
+ struct acpi_video_bus_flags flags;
+ struct list_head video_device_list;
+@@ -504,6 +505,23 @@ static struct dmi_system_id video_dmi_table[] __initdata = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "HP ENVY 15 Notebook PC"),
+ },
+ },
++
++ {
++ .callback = video_disable_native_backlight,
++ .ident = "SAMSUNG 870Z5E/880Z5E/680Z5E",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "870Z5E/880Z5E/680Z5E"),
++ },
++ },
++ {
++ .callback = video_disable_native_backlight,
++ .ident = "SAMSUNG 370R4E/370R4V/370R5E/3570RE/370R5V",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "370R4E/370R4V/370R5E/3570RE/370R5V"),
++ },
++ },
+ {}
+ };
+
+@@ -1159,8 +1177,12 @@ static bool acpi_video_device_in_dod(struct acpi_video_device *device)
+ struct acpi_video_bus *video = device->video;
+ int i;
+
+- /* If we have a broken _DOD, no need to test */
+- if (!video->attached_count)
++ /*
++ * If we have a broken _DOD or we have more than 8 output devices
++ * under the graphics controller node that we can't proper deal with
++ * in the operation region code currently, no need to test.
++ */
++ if (!video->attached_count || video->child_count > 8)
+ return true;
+
+ for (i = 0; i < video->attached_count; i++) {
+@@ -1413,6 +1435,7 @@ acpi_video_bus_get_devices(struct acpi_video_bus *video,
+ dev_err(&dev->dev, "Can't attach device\n");
+ break;
+ }
++ video->child_count++;
+ }
+ return status;
+ }
+diff --git a/drivers/base/bus.c b/drivers/base/bus.c
+index 83e910a57563..876bae5ade33 100644
+--- a/drivers/base/bus.c
++++ b/drivers/base/bus.c
+@@ -254,13 +254,15 @@ static ssize_t store_drivers_probe(struct bus_type *bus,
+ const char *buf, size_t count)
+ {
+ struct device *dev;
++ int err = -EINVAL;
+
+ dev = bus_find_device_by_name(bus, NULL, buf);
+ if (!dev)
+ return -ENODEV;
+- if (bus_rescan_devices_helper(dev, NULL) != 0)
+- return -EINVAL;
+- return count;
++ if (bus_rescan_devices_helper(dev, NULL) == 0)
++ err = count;
++ put_device(dev);
++ return err;
+ }
+
+ static struct device *next_device(struct klist_iter *i)
+diff --git a/drivers/block/drbd/drbd_req.c b/drivers/block/drbd/drbd_req.c
+index 5a01c53dddeb..3b797cd5a407 100644
+--- a/drivers/block/drbd/drbd_req.c
++++ b/drivers/block/drbd/drbd_req.c
+@@ -1545,6 +1545,7 @@ int drbd_merge_bvec(struct request_queue *q, struct bvec_merge_data *bvm, struct
+ struct request_queue * const b =
+ device->ldev->backing_bdev->bd_disk->queue;
+ if (b->merge_bvec_fn) {
++ bvm->bi_bdev = device->ldev->backing_bdev;
+ backing_limit = b->merge_bvec_fn(b, bvm, bvec);
+ limit = min(limit, backing_limit);
+ }
+@@ -1628,7 +1629,7 @@ void request_timer_fn(unsigned long data)
+ time_after(now, req_peer->pre_send_jif + ent) &&
+ !time_in_range(now, connection->last_reconnect_jif, connection->last_reconnect_jif + ent)) {
+ drbd_warn(device, "Remote failed to finish a request within ko-count * timeout\n");
+- _drbd_set_state(_NS(device, conn, C_TIMEOUT), CS_VERBOSE | CS_HARD, NULL);
++ _conn_request_state(connection, NS(conn, C_TIMEOUT), CS_VERBOSE | CS_HARD);
+ }
+ if (dt && oldest_submit_jif != now &&
+ time_after(now, oldest_submit_jif + dt) &&
+diff --git a/drivers/bluetooth/ath3k.c b/drivers/bluetooth/ath3k.c
+index d85ced27ebd5..086240cd29c3 100644
+--- a/drivers/bluetooth/ath3k.c
++++ b/drivers/bluetooth/ath3k.c
+@@ -105,6 +105,7 @@ static const struct usb_device_id ath3k_table[] = {
+ { USB_DEVICE(0x13d3, 0x3375) },
+ { USB_DEVICE(0x13d3, 0x3393) },
+ { USB_DEVICE(0x13d3, 0x3402) },
++ { USB_DEVICE(0x13d3, 0x3408) },
+ { USB_DEVICE(0x13d3, 0x3432) },
+
+ /* Atheros AR5BBU12 with sflash firmware */
+@@ -156,6 +157,7 @@ static const struct usb_device_id ath3k_blist_tbl[] = {
+ { USB_DEVICE(0x13d3, 0x3375), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x13d3, 0x3393), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x13d3, 0x3402), .driver_info = BTUSB_ATH3012 },
++ { USB_DEVICE(0x13d3, 0x3408), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x13d3, 0x3432), .driver_info = BTUSB_ATH3012 },
+
+ /* Atheros AR5BBU22 with sflash firmware */
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index edfc17bfcd44..091c813df8e9 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -182,6 +182,7 @@ static const struct usb_device_id blacklist_table[] = {
+ { USB_DEVICE(0x13d3, 0x3375), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x13d3, 0x3393), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x13d3, 0x3402), .driver_info = BTUSB_ATH3012 },
++ { USB_DEVICE(0x13d3, 0x3408), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x13d3, 0x3432), .driver_info = BTUSB_ATH3012 },
+
+ /* Atheros AR5BBU12 with sflash firmware */
+diff --git a/drivers/char/i8k.c b/drivers/char/i8k.c
+index 34174d01462e..471f985e38d2 100644
+--- a/drivers/char/i8k.c
++++ b/drivers/char/i8k.c
+@@ -711,6 +711,14 @@ static struct dmi_system_id i8k_dmi_table[] __initdata = {
+ .driver_data = (void *)&i8k_config_data[DELL_LATITUDE_D520],
+ },
+ {
++ .ident = "Dell Latitude E6440",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Latitude E6440"),
++ },
++ .driver_data = (void *)&i8k_config_data[DELL_LATITUDE_E6540],
++ },
++ {
+ .ident = "Dell Latitude E6540",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+diff --git a/drivers/gpu/drm/nouveau/core/core/event.c b/drivers/gpu/drm/nouveau/core/core/event.c
+index ff2b434b3db4..760947e380c9 100644
+--- a/drivers/gpu/drm/nouveau/core/core/event.c
++++ b/drivers/gpu/drm/nouveau/core/core/event.c
+@@ -26,7 +26,7 @@
+ void
+ nvkm_event_put(struct nvkm_event *event, u32 types, int index)
+ {
+- BUG_ON(!spin_is_locked(&event->refs_lock));
++ assert_spin_locked(&event->refs_lock);
+ while (types) {
+ int type = __ffs(types); types &= ~(1 << type);
+ if (--event->refs[index * event->types_nr + type] == 0) {
+@@ -39,7 +39,7 @@ nvkm_event_put(struct nvkm_event *event, u32 types, int index)
+ void
+ nvkm_event_get(struct nvkm_event *event, u32 types, int index)
+ {
+- BUG_ON(!spin_is_locked(&event->refs_lock));
++ assert_spin_locked(&event->refs_lock);
+ while (types) {
+ int type = __ffs(types); types &= ~(1 << type);
+ if (++event->refs[index * event->types_nr + type] == 1) {
+diff --git a/drivers/gpu/drm/nouveau/core/core/notify.c b/drivers/gpu/drm/nouveau/core/core/notify.c
+index d1bcde55e9d7..839a32577680 100644
+--- a/drivers/gpu/drm/nouveau/core/core/notify.c
++++ b/drivers/gpu/drm/nouveau/core/core/notify.c
+@@ -98,7 +98,7 @@ nvkm_notify_send(struct nvkm_notify *notify, void *data, u32 size)
+ struct nvkm_event *event = notify->event;
+ unsigned long flags;
+
+- BUG_ON(!spin_is_locked(&event->list_lock));
++ assert_spin_locked(&event->list_lock);
+ BUG_ON(size != notify->size);
+
+ spin_lock_irqsave(&event->refs_lock, flags);
+diff --git a/drivers/gpu/drm/nouveau/nouveau_ttm.c b/drivers/gpu/drm/nouveau/nouveau_ttm.c
+index 753a6def61e7..3d1cfcb96b6b 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_ttm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_ttm.c
+@@ -28,6 +28,7 @@
+ #include "nouveau_ttm.h"
+ #include "nouveau_gem.h"
+
++#include "drm_legacy.h"
+ static int
+ nouveau_vram_manager_init(struct ttm_mem_type_manager *man, unsigned long psize)
+ {
+@@ -281,7 +282,7 @@ nouveau_ttm_mmap(struct file *filp, struct vm_area_struct *vma)
+ struct nouveau_drm *drm = nouveau_drm(file_priv->minor->dev);
+
+ if (unlikely(vma->vm_pgoff < DRM_FILE_PAGE_OFFSET))
+- return -EINVAL;
++ return drm_legacy_mmap(filp, vma);
+
+ return ttm_bo_mmap(filp, vma, &drm->ttm.bdev);
+ }
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index 3402033fa52a..dfaccfca0688 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -1809,6 +1809,7 @@ static const struct hid_device_id hid_have_special_driver[] = {
+ { HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_ERGO_525V) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_EASYPEN_I405X) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_MOUSEPEN_I608X) },
++ { HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_MOUSEPEN_I608X_2) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_EASYPEN_M610X) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_LABTEC, USB_DEVICE_ID_LABTEC_WIRELESS_KEYBOARD) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_LCPOWER, USB_DEVICE_ID_LCPOWER_LC1000 ) },
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 7c863738e419..0e28190480d7 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -300,6 +300,7 @@
+ #define USB_DEVICE_ID_ELAN_TOUCHSCREEN 0x0089
+ #define USB_DEVICE_ID_ELAN_TOUCHSCREEN_009B 0x009b
+ #define USB_DEVICE_ID_ELAN_TOUCHSCREEN_0103 0x0103
++#define USB_DEVICE_ID_ELAN_TOUCHSCREEN_010c 0x010c
+ #define USB_DEVICE_ID_ELAN_TOUCHSCREEN_016F 0x016f
+
+ #define USB_VENDOR_ID_ELECOM 0x056e
+@@ -525,6 +526,7 @@
+ #define USB_DEVICE_ID_KYE_GPEN_560 0x5003
+ #define USB_DEVICE_ID_KYE_EASYPEN_I405X 0x5010
+ #define USB_DEVICE_ID_KYE_MOUSEPEN_I608X 0x5011
++#define USB_DEVICE_ID_KYE_MOUSEPEN_I608X_2 0x501a
+ #define USB_DEVICE_ID_KYE_EASYPEN_M610X 0x5013
+
+ #define USB_VENDOR_ID_LABTEC 0x1020
+diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
+index 725f22ca47fc..8df8ceb47659 100644
+--- a/drivers/hid/hid-input.c
++++ b/drivers/hid/hid-input.c
+@@ -312,6 +312,9 @@ static const struct hid_device_id hid_battery_quirks[] = {
+ USB_DEVICE_ID_APPLE_ALU_WIRELESS_2011_ANSI),
+ HID_BATTERY_QUIRK_PERCENT | HID_BATTERY_QUIRK_FEATURE },
+ { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE,
++ USB_DEVICE_ID_APPLE_ALU_WIRELESS_2011_ISO),
++ HID_BATTERY_QUIRK_PERCENT | HID_BATTERY_QUIRK_FEATURE },
++ { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE,
+ USB_DEVICE_ID_APPLE_ALU_WIRELESS_ANSI),
+ HID_BATTERY_QUIRK_PERCENT | HID_BATTERY_QUIRK_FEATURE },
+ {}
+diff --git a/drivers/hid/hid-kye.c b/drivers/hid/hid-kye.c
+index b92bf01a1ae8..158fcf577fae 100644
+--- a/drivers/hid/hid-kye.c
++++ b/drivers/hid/hid-kye.c
+@@ -323,6 +323,7 @@ static __u8 *kye_report_fixup(struct hid_device *hdev, __u8 *rdesc,
+ }
+ break;
+ case USB_DEVICE_ID_KYE_MOUSEPEN_I608X:
++ case USB_DEVICE_ID_KYE_MOUSEPEN_I608X_2:
+ if (*rsize == MOUSEPEN_I608X_RDESC_ORIG_SIZE) {
+ rdesc = mousepen_i608x_rdesc_fixed;
+ *rsize = sizeof(mousepen_i608x_rdesc_fixed);
+@@ -415,6 +416,7 @@ static int kye_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ switch (id->product) {
+ case USB_DEVICE_ID_KYE_EASYPEN_I405X:
+ case USB_DEVICE_ID_KYE_MOUSEPEN_I608X:
++ case USB_DEVICE_ID_KYE_MOUSEPEN_I608X_2:
+ case USB_DEVICE_ID_KYE_EASYPEN_M610X:
+ ret = kye_tablet_enable(hdev);
+ if (ret) {
+@@ -446,6 +448,8 @@ static const struct hid_device_id kye_devices[] = {
+ { HID_USB_DEVICE(USB_VENDOR_ID_KYE,
+ USB_DEVICE_ID_KYE_MOUSEPEN_I608X) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_KYE,
++ USB_DEVICE_ID_KYE_MOUSEPEN_I608X_2) },
++ { HID_USB_DEVICE(USB_VENDOR_ID_KYE,
+ USB_DEVICE_ID_KYE_EASYPEN_M610X) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_KYE,
+ USB_DEVICE_ID_GENIUS_GILA_GAMING_MOUSE) },
+diff --git a/drivers/hid/hid-roccat-pyra.c b/drivers/hid/hid-roccat-pyra.c
+index 1a07e07d99a0..47d7e74231e5 100644
+--- a/drivers/hid/hid-roccat-pyra.c
++++ b/drivers/hid/hid-roccat-pyra.c
+@@ -35,6 +35,8 @@ static struct class *pyra_class;
+ static void profile_activated(struct pyra_device *pyra,
+ unsigned int new_profile)
+ {
++ if (new_profile >= ARRAY_SIZE(pyra->profile_settings))
++ return;
+ pyra->actual_profile = new_profile;
+ pyra->actual_cpi = pyra->profile_settings[pyra->actual_profile].y_cpi;
+ }
+@@ -257,9 +259,11 @@ static ssize_t pyra_sysfs_write_settings(struct file *fp,
+ if (off != 0 || count != PYRA_SIZE_SETTINGS)
+ return -EINVAL;
+
+- mutex_lock(&pyra->pyra_lock);
+-
+ settings = (struct pyra_settings const *)buf;
++ if (settings->startup_profile >= ARRAY_SIZE(pyra->profile_settings))
++ return -EINVAL;
++
++ mutex_lock(&pyra->pyra_lock);
+
+ retval = pyra_set_settings(usb_dev, settings);
+ if (retval) {
+diff --git a/drivers/hid/i2c-hid/i2c-hid.c b/drivers/hid/i2c-hid/i2c-hid.c
+index 747d54421e73..80e33e0abc52 100644
+--- a/drivers/hid/i2c-hid/i2c-hid.c
++++ b/drivers/hid/i2c-hid/i2c-hid.c
+@@ -137,6 +137,7 @@ struct i2c_hid {
+ * descriptor. */
+ unsigned int bufsize; /* i2c buffer size */
+ char *inbuf; /* Input buffer */
++ char *rawbuf; /* Raw Input buffer */
+ char *cmdbuf; /* Command buffer */
+ char *argsbuf; /* Command arguments buffer */
+
+@@ -369,7 +370,7 @@ static int i2c_hid_hwreset(struct i2c_client *client)
+ static void i2c_hid_get_input(struct i2c_hid *ihid)
+ {
+ int ret, ret_size;
+- int size = le16_to_cpu(ihid->hdesc.wMaxInputLength);
++ int size = ihid->bufsize;
+
+ ret = i2c_master_recv(ihid->client, ihid->inbuf, size);
+ if (ret != size) {
+@@ -504,9 +505,11 @@ static void i2c_hid_find_max_report(struct hid_device *hid, unsigned int type,
+ static void i2c_hid_free_buffers(struct i2c_hid *ihid)
+ {
+ kfree(ihid->inbuf);
++ kfree(ihid->rawbuf);
+ kfree(ihid->argsbuf);
+ kfree(ihid->cmdbuf);
+ ihid->inbuf = NULL;
++ ihid->rawbuf = NULL;
+ ihid->cmdbuf = NULL;
+ ihid->argsbuf = NULL;
+ ihid->bufsize = 0;
+@@ -522,10 +525,11 @@ static int i2c_hid_alloc_buffers(struct i2c_hid *ihid, size_t report_size)
+ report_size; /* report */
+
+ ihid->inbuf = kzalloc(report_size, GFP_KERNEL);
++ ihid->rawbuf = kzalloc(report_size, GFP_KERNEL);
+ ihid->argsbuf = kzalloc(args_len, GFP_KERNEL);
+ ihid->cmdbuf = kzalloc(sizeof(union command) + args_len, GFP_KERNEL);
+
+- if (!ihid->inbuf || !ihid->argsbuf || !ihid->cmdbuf) {
++ if (!ihid->inbuf || !ihid->rawbuf || !ihid->argsbuf || !ihid->cmdbuf) {
+ i2c_hid_free_buffers(ihid);
+ return -ENOMEM;
+ }
+@@ -552,12 +556,12 @@ static int i2c_hid_get_raw_report(struct hid_device *hid,
+
+ ret = i2c_hid_get_report(client,
+ report_type == HID_FEATURE_REPORT ? 0x03 : 0x01,
+- report_number, ihid->inbuf, ask_count);
++ report_number, ihid->rawbuf, ask_count);
+
+ if (ret < 0)
+ return ret;
+
+- ret_count = ihid->inbuf[0] | (ihid->inbuf[1] << 8);
++ ret_count = ihid->rawbuf[0] | (ihid->rawbuf[1] << 8);
+
+ if (ret_count <= 2)
+ return 0;
+@@ -566,7 +570,7 @@ static int i2c_hid_get_raw_report(struct hid_device *hid,
+
+ /* The query buffer contains the size, dropping it in the reply */
+ count = min(count, ret_count - 2);
+- memcpy(buf, ihid->inbuf + 2, count);
++ memcpy(buf, ihid->rawbuf + 2, count);
+
+ return count;
+ }
+@@ -702,12 +706,7 @@ static int i2c_hid_start(struct hid_device *hid)
+
+ static void i2c_hid_stop(struct hid_device *hid)
+ {
+- struct i2c_client *client = hid->driver_data;
+- struct i2c_hid *ihid = i2c_get_clientdata(client);
+-
+ hid->claimed = 0;
+-
+- i2c_hid_free_buffers(ihid);
+ }
+
+ static int i2c_hid_open(struct hid_device *hid)
+diff --git a/drivers/hid/usbhid/hid-quirks.c b/drivers/hid/usbhid/hid-quirks.c
+index 552671ee7c5d..4477eb7457de 100644
+--- a/drivers/hid/usbhid/hid-quirks.c
++++ b/drivers/hid/usbhid/hid-quirks.c
+@@ -73,6 +73,7 @@ static const struct hid_blacklist {
+ { USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ELAN_TOUCHSCREEN, HID_QUIRK_ALWAYS_POLL },
+ { USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ELAN_TOUCHSCREEN_009B, HID_QUIRK_ALWAYS_POLL },
+ { USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ELAN_TOUCHSCREEN_0103, HID_QUIRK_ALWAYS_POLL },
++ { USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ELAN_TOUCHSCREEN_010c, HID_QUIRK_ALWAYS_POLL },
+ { USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ELAN_TOUCHSCREEN_016F, HID_QUIRK_ALWAYS_POLL },
+ { USB_VENDOR_ID_ELO, USB_DEVICE_ID_ELO_TS2700, HID_QUIRK_NOGET },
+ { USB_VENDOR_ID_FORMOSA, USB_DEVICE_ID_FORMOSA_IR_RECEIVER, HID_QUIRK_NO_INIT_REPORTS },
+@@ -122,6 +123,7 @@ static const struct hid_blacklist {
+ { USB_VENDOR_ID_CHICONY, USB_DEVICE_ID_CHICONY_WIRELESS, HID_QUIRK_MULTI_INPUT },
+ { USB_VENDOR_ID_SIGMA_MICRO, USB_DEVICE_ID_SIGMA_MICRO_KEYBOARD, HID_QUIRK_NO_INIT_REPORTS },
+ { USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_MOUSEPEN_I608X, HID_QUIRK_MULTI_INPUT },
++ { USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_MOUSEPEN_I608X_2, HID_QUIRK_MULTI_INPUT },
+ { USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_EASYPEN_M610X, HID_QUIRK_MULTI_INPUT },
+ { USB_VENDOR_ID_NTRIG, USB_DEVICE_ID_NTRIG_DUOSENSE, HID_QUIRK_NO_INIT_REPORTS },
+ { USB_VENDOR_ID_SEMICO, USB_DEVICE_ID_SEMICO_USB_KEYKOARD, HID_QUIRK_NO_INIT_REPORTS },
+diff --git a/drivers/hid/wacom_sys.c b/drivers/hid/wacom_sys.c
+index 8593047bb726..b6bcd251c4a8 100644
+--- a/drivers/hid/wacom_sys.c
++++ b/drivers/hid/wacom_sys.c
+@@ -70,22 +70,15 @@ static int wacom_raw_event(struct hid_device *hdev, struct hid_report *report,
+ static int wacom_open(struct input_dev *dev)
+ {
+ struct wacom *wacom = input_get_drvdata(dev);
+- int retval;
+-
+- mutex_lock(&wacom->lock);
+- retval = hid_hw_open(wacom->hdev);
+- mutex_unlock(&wacom->lock);
+
+- return retval;
++ return hid_hw_open(wacom->hdev);
+ }
+
+ static void wacom_close(struct input_dev *dev)
+ {
+ struct wacom *wacom = input_get_drvdata(dev);
+
+- mutex_lock(&wacom->lock);
+ hid_hw_close(wacom->hdev);
+- mutex_unlock(&wacom->lock);
+ }
+
+ /*
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 586b2405b0d4..7cf998cdd011 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -3026,6 +3026,7 @@ const struct hid_device_id wacom_ids[] = {
+ { USB_DEVICE_WACOM(0x4004) },
+ { USB_DEVICE_WACOM(0x5000) },
+ { USB_DEVICE_WACOM(0x5002) },
++ { USB_DEVICE_LENOVO(0x6004) },
+
+ { USB_DEVICE_WACOM(HID_ANY_ID) },
+ { }
+diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
+index a2d1a9612c86..d36ce6835fb7 100644
+--- a/drivers/hv/channel_mgmt.c
++++ b/drivers/hv/channel_mgmt.c
+@@ -216,9 +216,16 @@ static void vmbus_process_rescind_offer(struct work_struct *work)
+ unsigned long flags;
+ struct vmbus_channel *primary_channel;
+ struct vmbus_channel_relid_released msg;
++ struct device *dev;
++
++ if (channel->device_obj) {
++ dev = get_device(&channel->device_obj->device);
++ if (dev) {
++ vmbus_device_unregister(channel->device_obj);
++ put_device(dev);
++ }
++ }
+
+- if (channel->device_obj)
+- vmbus_device_unregister(channel->device_obj);
+ memset(&msg, 0, sizeof(struct vmbus_channel_relid_released));
+ msg.child_relid = channel->offermsg.child_relid;
+ msg.header.msgtype = CHANNELMSG_RELID_RELEASED;
+diff --git a/drivers/input/mouse/alps.c b/drivers/input/mouse/alps.c
+index d125a019383f..54ff03791940 100644
+--- a/drivers/input/mouse/alps.c
++++ b/drivers/input/mouse/alps.c
+@@ -919,18 +919,21 @@ static void alps_get_finger_coordinate_v7(struct input_mt_pos *mt,
+
+ static int alps_get_mt_count(struct input_mt_pos *mt)
+ {
+- int i;
++ int i, fingers = 0;
+
+- for (i = 0; i < MAX_TOUCHES && mt[i].x != 0 && mt[i].y != 0; i++)
+- /* empty */;
++ for (i = 0; i < MAX_TOUCHES; i++) {
++ if (mt[i].x != 0 || mt[i].y != 0)
++ fingers++;
++ }
+
+- return i;
++ return fingers;
+ }
+
+ static int alps_decode_packet_v7(struct alps_fields *f,
+ unsigned char *p,
+ struct psmouse *psmouse)
+ {
++ struct alps_data *priv = psmouse->private;
+ unsigned char pkt_id;
+
+ pkt_id = alps_get_packet_id_v7(p);
+@@ -938,19 +941,52 @@ static int alps_decode_packet_v7(struct alps_fields *f,
+ return 0;
+ if (pkt_id == V7_PACKET_ID_UNKNOWN)
+ return -1;
++ /*
++ * NEW packets are send to indicate a discontinuity in the finger
++ * coordinate reporting. Specifically a finger may have moved from
++ * slot 0 to 1 or vice versa. INPUT_MT_TRACK takes care of this for
++ * us.
++ *
++ * NEW packets have 3 problems:
++ * 1) They do not contain middle / right button info (on non clickpads)
++ * this can be worked around by preserving the old button state
++ * 2) They do not contain an accurate fingercount, and they are
++ * typically send when the number of fingers changes. We cannot use
++ * the old finger count as that may mismatch with the amount of
++ * touch coordinates we've available in the NEW packet
++ * 3) Their x data for the second touch is inaccurate leading to
++ * a possible jump of the x coordinate by 16 units when the first
++ * non NEW packet comes in
++ * Since problems 2 & 3 cannot be worked around, just ignore them.
++ */
++ if (pkt_id == V7_PACKET_ID_NEW)
++ return 1;
+
+ alps_get_finger_coordinate_v7(f->mt, p, pkt_id);
+
+- if (pkt_id == V7_PACKET_ID_TWO || pkt_id == V7_PACKET_ID_MULTI) {
+- f->left = (p[0] & 0x80) >> 7;
++ if (pkt_id == V7_PACKET_ID_TWO)
++ f->fingers = alps_get_mt_count(f->mt);
++ else /* pkt_id == V7_PACKET_ID_MULTI */
++ f->fingers = 3 + (p[5] & 0x03);
++
++ f->left = (p[0] & 0x80) >> 7;
++ if (priv->flags & ALPS_BUTTONPAD) {
++ if (p[0] & 0x20)
++ f->fingers++;
++ if (p[0] & 0x10)
++ f->fingers++;
++ } else {
+ f->right = (p[0] & 0x20) >> 5;
+ f->middle = (p[0] & 0x10) >> 4;
+ }
+
+- if (pkt_id == V7_PACKET_ID_TWO)
+- f->fingers = alps_get_mt_count(f->mt);
+- else if (pkt_id == V7_PACKET_ID_MULTI)
+- f->fingers = 3 + (p[5] & 0x03);
++ /* Sometimes a single touch is reported in mt[1] rather then mt[0] */
++ if (f->fingers == 1 && f->mt[0].x == 0 && f->mt[0].y == 0) {
++ f->mt[0].x = f->mt[1].x;
++ f->mt[0].y = f->mt[1].y;
++ f->mt[1].x = 0;
++ f->mt[1].y = 0;
++ }
+
+ return 0;
+ }
+diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
+index a27d6cb1a793..b2b9c9264131 100644
+--- a/drivers/iommu/intel-iommu.c
++++ b/drivers/iommu/intel-iommu.c
+@@ -1983,7 +1983,7 @@ static int __domain_mapping(struct dmar_domain *domain, unsigned long iov_pfn,
+ {
+ struct dma_pte *first_pte = NULL, *pte = NULL;
+ phys_addr_t uninitialized_var(pteval);
+- unsigned long sg_res;
++ unsigned long sg_res = 0;
+ unsigned int largepage_lvl = 0;
+ unsigned long lvl_pages = 0;
+
+@@ -1994,10 +1994,8 @@ static int __domain_mapping(struct dmar_domain *domain, unsigned long iov_pfn,
+
+ prot &= DMA_PTE_READ | DMA_PTE_WRITE | DMA_PTE_SNP;
+
+- if (sg)
+- sg_res = 0;
+- else {
+- sg_res = nr_pages + 1;
++ if (!sg) {
++ sg_res = nr_pages;
+ pteval = ((phys_addr_t)phys_pfn << VTD_PAGE_SHIFT) | prot;
+ }
+
+@@ -4267,6 +4265,10 @@ static int intel_iommu_attach_device(struct iommu_domain *domain,
+ domain_remove_one_dev_info(old_domain, dev);
+ else
+ domain_remove_dev_info(old_domain);
++
++ if (!domain_type_is_vm_or_si(old_domain) &&
++ list_empty(&old_domain->devices))
++ domain_exit(old_domain);
+ }
+ }
+
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index 9c66e5997fc8..c1b0d52bfcb0 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -2917,8 +2917,11 @@ static int fetch_block(struct stripe_head *sh, struct stripe_head_state *s,
+ (sh->raid_conf->level <= 5 && s->failed && fdev[0]->towrite &&
+ (!test_bit(R5_Insync, &dev->flags) || test_bit(STRIPE_PREREAD_ACTIVE, &sh->state)) &&
+ !test_bit(R5_OVERWRITE, &fdev[0]->flags)) ||
+- (sh->raid_conf->level == 6 && s->failed && s->to_write &&
+- s->to_write - s->non_overwrite < sh->raid_conf->raid_disks - 2 &&
++ ((sh->raid_conf->level == 6 ||
++ sh->sector >= sh->raid_conf->mddev->recovery_cp)
++ && s->failed && s->to_write &&
++ (s->to_write - s->non_overwrite <
++ sh->raid_conf->raid_disks - sh->raid_conf->max_degraded) &&
+ (!test_bit(R5_Insync, &dev->flags) || test_bit(STRIPE_PREREAD_ACTIVE, &sh->state))))) {
+ /* we would like to get this block, possibly by computing it,
+ * otherwise read it if the backing disk is insync
+diff --git a/drivers/misc/genwqe/card_utils.c b/drivers/misc/genwqe/card_utils.c
+index 7cb3b7e41739..1ca94e6fa8fb 100644
+--- a/drivers/misc/genwqe/card_utils.c
++++ b/drivers/misc/genwqe/card_utils.c
+@@ -590,6 +590,8 @@ int genwqe_user_vmap(struct genwqe_dev *cd, struct dma_mapping *m, void *uaddr,
+ m->nr_pages,
+ 1, /* write by caller */
+ m->page_list); /* ptrs to pages */
++ if (rc < 0)
++ goto fail_get_user_pages;
+
+ /* assumption: get_user_pages can be killed by signals. */
+ if (rc < m->nr_pages) {
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index ada1a3ea3a87..7625bd791fca 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -1319,6 +1319,8 @@ static void sdhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
+
+ sdhci_runtime_pm_get(host);
+
++ present = mmc_gpio_get_cd(host->mmc);
++
+ spin_lock_irqsave(&host->lock, flags);
+
+ WARN_ON(host->mrq != NULL);
+@@ -1347,7 +1349,6 @@ static void sdhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
+ * zero: cd-gpio is used, and card is removed
+ * one: cd-gpio is used, and card is present
+ */
+- present = mmc_gpio_get_cd(host->mmc);
+ if (present < 0) {
+ /* If polling, assume that the card is always present. */
+ if (host->quirks & SDHCI_QUIRK_BROKEN_CARD_DETECTION)
+@@ -2072,15 +2073,18 @@ static void sdhci_card_event(struct mmc_host *mmc)
+ {
+ struct sdhci_host *host = mmc_priv(mmc);
+ unsigned long flags;
++ int present;
+
+ /* First check if client has provided their own card event */
+ if (host->ops->card_event)
+ host->ops->card_event(host);
+
++ present = sdhci_do_get_cd(host);
++
+ spin_lock_irqsave(&host->lock, flags);
+
+ /* Check host->mrq first in case we are runtime suspended */
+- if (host->mrq && !sdhci_do_get_cd(host)) {
++ if (host->mrq && !present) {
+ pr_err("%s: Card removed during transfer!\n",
+ mmc_hostname(host->mmc));
+ pr_err("%s: Resetting controller.\n",
+diff --git a/drivers/mtd/devices/m25p80.c b/drivers/mtd/devices/m25p80.c
+index ed827cf894e4..dd8f66ccd2d6 100644
+--- a/drivers/mtd/devices/m25p80.c
++++ b/drivers/mtd/devices/m25p80.c
+@@ -300,11 +300,11 @@ static const struct spi_device_id m25p_ids[] = {
+ {"m45pe10"}, {"m45pe80"}, {"m45pe16"},
+ {"m25pe20"}, {"m25pe80"}, {"m25pe16"},
+ {"m25px16"}, {"m25px32"}, {"m25px32-s0"}, {"m25px32-s1"},
+- {"m25px64"},
++ {"m25px64"}, {"m25px80"},
+ {"w25x10"}, {"w25x20"}, {"w25x40"}, {"w25x80"},
+ {"w25x16"}, {"w25x32"}, {"w25q32"}, {"w25q32dw"},
+- {"w25x64"}, {"w25q64"}, {"w25q128"}, {"w25q80"},
+- {"w25q80bl"}, {"w25q128"}, {"w25q256"}, {"cat25c11"},
++ {"w25x64"}, {"w25q64"}, {"w25q80"}, {"w25q80bl"},
++ {"w25q128"}, {"w25q256"}, {"cat25c11"},
+ {"cat25c03"}, {"cat25c09"}, {"cat25c17"}, {"cat25128"},
+ { },
+ };
+diff --git a/drivers/mtd/nand/omap2.c b/drivers/mtd/nand/omap2.c
+index 3b357e920a0c..10d07dd20f7c 100644
+--- a/drivers/mtd/nand/omap2.c
++++ b/drivers/mtd/nand/omap2.c
+@@ -1741,13 +1741,6 @@ static int omap_nand_probe(struct platform_device *pdev)
+ goto return_error;
+ }
+
+- /* check for small page devices */
+- if ((mtd->oobsize < 64) && (pdata->ecc_opt != OMAP_ECC_HAM1_CODE_HW)) {
+- dev_err(&info->pdev->dev, "small page devices are not supported\n");
+- err = -EINVAL;
+- goto return_error;
+- }
+-
+ /* re-populate low-level callbacks based on xfer modes */
+ switch (pdata->xfer_type) {
+ case NAND_OMAP_PREFETCH_POLLED:
+diff --git a/drivers/mtd/tests/torturetest.c b/drivers/mtd/tests/torturetest.c
+index eeab96973cf0..b55bc52a1340 100644
+--- a/drivers/mtd/tests/torturetest.c
++++ b/drivers/mtd/tests/torturetest.c
+@@ -264,7 +264,9 @@ static int __init tort_init(void)
+ int i;
+ void *patt;
+
+- mtdtest_erase_good_eraseblocks(mtd, bad_ebs, eb, ebcnt);
++ err = mtdtest_erase_good_eraseblocks(mtd, bad_ebs, eb, ebcnt);
++ if (err)
++ goto out;
+
+ /* Check if the eraseblocks contain only 0xFF bytes */
+ if (check) {
+diff --git a/drivers/mtd/ubi/upd.c b/drivers/mtd/ubi/upd.c
+index ec2c2dc1c1ca..2a1b6e037e1a 100644
+--- a/drivers/mtd/ubi/upd.c
++++ b/drivers/mtd/ubi/upd.c
+@@ -133,6 +133,10 @@ int ubi_start_update(struct ubi_device *ubi, struct ubi_volume *vol,
+ ubi_assert(!vol->updating && !vol->changing_leb);
+ vol->updating = 1;
+
++ vol->upd_buf = vmalloc(ubi->leb_size);
++ if (!vol->upd_buf)
++ return -ENOMEM;
++
+ err = set_update_marker(ubi, vol);
+ if (err)
+ return err;
+@@ -152,14 +156,12 @@ int ubi_start_update(struct ubi_device *ubi, struct ubi_volume *vol,
+ err = clear_update_marker(ubi, vol, 0);
+ if (err)
+ return err;
++
++ vfree(vol->upd_buf);
+ vol->updating = 0;
+ return 0;
+ }
+
+- vol->upd_buf = vmalloc(ubi->leb_size);
+- if (!vol->upd_buf)
+- return -ENOMEM;
+-
+ vol->upd_ebs = div_u64(bytes + vol->usable_leb_size - 1,
+ vol->usable_leb_size);
+ vol->upd_bytes = bytes;
+diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
+index 6654f191868e..b9686c1472d2 100644
+--- a/drivers/mtd/ubi/wl.c
++++ b/drivers/mtd/ubi/wl.c
+@@ -1212,7 +1212,6 @@ static int wear_leveling_worker(struct ubi_device *ubi, struct ubi_work *wrk,
+
+ err = do_sync_erase(ubi, e1, vol_id, lnum, 0);
+ if (err) {
+- kmem_cache_free(ubi_wl_entry_slab, e1);
+ if (e2)
+ kmem_cache_free(ubi_wl_entry_slab, e2);
+ goto out_ro;
+@@ -1226,10 +1225,8 @@ static int wear_leveling_worker(struct ubi_device *ubi, struct ubi_work *wrk,
+ dbg_wl("PEB %d (LEB %d:%d) was put meanwhile, erase",
+ e2->pnum, vol_id, lnum);
+ err = do_sync_erase(ubi, e2, vol_id, lnum, 0);
+- if (err) {
+- kmem_cache_free(ubi_wl_entry_slab, e2);
++ if (err)
+ goto out_ro;
+- }
+ }
+
+ dbg_wl("done");
+@@ -1265,10 +1262,9 @@ out_not_moved:
+
+ ubi_free_vid_hdr(ubi, vid_hdr);
+ err = do_sync_erase(ubi, e2, vol_id, lnum, torture);
+- if (err) {
+- kmem_cache_free(ubi_wl_entry_slab, e2);
++ if (err)
+ goto out_ro;
+- }
++
+ mutex_unlock(&ubi->move_mutex);
+ return 0;
+
+diff --git a/drivers/net/can/usb/peak_usb/pcan_usb_core.c b/drivers/net/can/usb/peak_usb/pcan_usb_core.c
+index 644e6ab8a489..dc807e10f802 100644
+--- a/drivers/net/can/usb/peak_usb/pcan_usb_core.c
++++ b/drivers/net/can/usb/peak_usb/pcan_usb_core.c
+@@ -735,7 +735,7 @@ static int peak_usb_create_dev(struct peak_usb_adapter *peak_usb_adapter,
+ dev->cmd_buf = kmalloc(PCAN_USB_MAX_CMD_LEN, GFP_KERNEL);
+ if (!dev->cmd_buf) {
+ err = -ENOMEM;
+- goto lbl_set_intf_data;
++ goto lbl_free_candev;
+ }
+
+ dev->udev = usb_dev;
+@@ -775,7 +775,7 @@ static int peak_usb_create_dev(struct peak_usb_adapter *peak_usb_adapter,
+ err = register_candev(netdev);
+ if (err) {
+ dev_err(&intf->dev, "couldn't register CAN device: %d\n", err);
+- goto lbl_free_cmd_buf;
++ goto lbl_restore_intf_data;
+ }
+
+ if (dev->prev_siblings)
+@@ -788,14 +788,14 @@ static int peak_usb_create_dev(struct peak_usb_adapter *peak_usb_adapter,
+ if (dev->adapter->dev_init) {
+ err = dev->adapter->dev_init(dev);
+ if (err)
+- goto lbl_free_cmd_buf;
++ goto lbl_unregister_candev;
+ }
+
+ /* set bus off */
+ if (dev->adapter->dev_set_bus) {
+ err = dev->adapter->dev_set_bus(dev, 0);
+ if (err)
+- goto lbl_free_cmd_buf;
++ goto lbl_unregister_candev;
+ }
+
+ /* get device number early */
+@@ -807,11 +807,14 @@ static int peak_usb_create_dev(struct peak_usb_adapter *peak_usb_adapter,
+
+ return 0;
+
+-lbl_free_cmd_buf:
+- kfree(dev->cmd_buf);
++lbl_unregister_candev:
++ unregister_candev(netdev);
+
+-lbl_set_intf_data:
++lbl_restore_intf_data:
+ usb_set_intfdata(intf, dev->prev_siblings);
++ kfree(dev->cmd_buf);
++
++lbl_free_candev:
+ free_candev(netdev);
+
+ return err;
+diff --git a/drivers/net/can/usb/peak_usb/pcan_usb_pro.c b/drivers/net/can/usb/peak_usb/pcan_usb_pro.c
+index 263dd921edc4..f7f796a2c50b 100644
+--- a/drivers/net/can/usb/peak_usb/pcan_usb_pro.c
++++ b/drivers/net/can/usb/peak_usb/pcan_usb_pro.c
+@@ -333,8 +333,6 @@ static int pcan_usb_pro_send_req(struct peak_usb_device *dev, int req_id,
+ if (!(dev->state & PCAN_USB_STATE_CONNECTED))
+ return 0;
+
+- memset(req_addr, '\0', req_size);
+-
+ req_type = USB_TYPE_VENDOR | USB_RECIP_OTHER;
+
+ switch (req_id) {
+@@ -345,6 +343,7 @@ static int pcan_usb_pro_send_req(struct peak_usb_device *dev, int req_id,
+ default:
+ p = usb_rcvctrlpipe(dev->udev, 0);
+ req_type |= USB_DIR_IN;
++ memset(req_addr, '\0', req_size);
+ break;
+ }
+
+diff --git a/drivers/net/wireless/ath/ath5k/qcu.c b/drivers/net/wireless/ath/ath5k/qcu.c
+index 0583c69d26db..ddaad712c59a 100644
+--- a/drivers/net/wireless/ath/ath5k/qcu.c
++++ b/drivers/net/wireless/ath/ath5k/qcu.c
+@@ -225,13 +225,7 @@ ath5k_hw_setup_tx_queue(struct ath5k_hw *ah, enum ath5k_tx_queue queue_type,
+ } else {
+ switch (queue_type) {
+ case AR5K_TX_QUEUE_DATA:
+- for (queue = AR5K_TX_QUEUE_ID_DATA_MIN;
+- ah->ah_txq[queue].tqi_type !=
+- AR5K_TX_QUEUE_INACTIVE; queue++) {
+-
+- if (queue > AR5K_TX_QUEUE_ID_DATA_MAX)
+- return -EINVAL;
+- }
++ queue = queue_info->tqi_subtype;
+ break;
+ case AR5K_TX_QUEUE_UAPSD:
+ queue = AR5K_TX_QUEUE_ID_UAPSD;
+diff --git a/drivers/net/wireless/ath/ath9k/hw.h b/drivers/net/wireless/ath/ath9k/hw.h
+index 975074fc11bc..e8e8dd28bade 100644
+--- a/drivers/net/wireless/ath/ath9k/hw.h
++++ b/drivers/net/wireless/ath/ath9k/hw.h
+@@ -217,8 +217,8 @@
+ #define AH_WOW_BEACON_MISS BIT(3)
+
+ enum ath_hw_txq_subtype {
+- ATH_TXQ_AC_BE = 0,
+- ATH_TXQ_AC_BK = 1,
++ ATH_TXQ_AC_BK = 0,
++ ATH_TXQ_AC_BE = 1,
+ ATH_TXQ_AC_VI = 2,
+ ATH_TXQ_AC_VO = 3,
+ };
+diff --git a/drivers/net/wireless/ath/ath9k/mac.c b/drivers/net/wireless/ath/ath9k/mac.c
+index 275205ab5f15..3e58bfa0c1fd 100644
+--- a/drivers/net/wireless/ath/ath9k/mac.c
++++ b/drivers/net/wireless/ath/ath9k/mac.c
+@@ -311,14 +311,7 @@ int ath9k_hw_setuptxqueue(struct ath_hw *ah, enum ath9k_tx_queue type,
+ q = ATH9K_NUM_TX_QUEUES - 3;
+ break;
+ case ATH9K_TX_QUEUE_DATA:
+- for (q = 0; q < ATH9K_NUM_TX_QUEUES; q++)
+- if (ah->txq[q].tqi_type ==
+- ATH9K_TX_QUEUE_INACTIVE)
+- break;
+- if (q == ATH9K_NUM_TX_QUEUES) {
+- ath_err(common, "No available TX queue\n");
+- return -1;
+- }
++ q = qinfo->tqi_subtype;
+ break;
+ default:
+ ath_err(common, "Invalid TX queue type: %u\n", type);
+diff --git a/drivers/net/wireless/brcm80211/brcmfmac/msgbuf.c b/drivers/net/wireless/brcm80211/brcmfmac/msgbuf.c
+index 8079a9ddcba9..0c9671f2f01a 100644
+--- a/drivers/net/wireless/brcm80211/brcmfmac/msgbuf.c
++++ b/drivers/net/wireless/brcm80211/brcmfmac/msgbuf.c
+@@ -1081,8 +1081,17 @@ brcmf_msgbuf_rx_skb(struct brcmf_msgbuf *msgbuf, struct sk_buff *skb,
+ {
+ struct brcmf_if *ifp;
+
++ /* The ifidx is the idx to map to matching netdev/ifp. When receiving
++ * events this is easy because it contains the bssidx which maps
++ * 1-on-1 to the netdev/ifp. But for data frames the ifidx is rcvd.
++ * bssidx 1 is used for p2p0 and no data can be received or
++ * transmitted on it. Therefor bssidx is ifidx + 1 if ifidx > 0
++ */
++ if (ifidx)
++ (ifidx)++;
+ ifp = msgbuf->drvr->iflist[ifidx];
+ if (!ifp || !ifp->ndev) {
++ brcmf_err("Received pkt for invalid ifidx %d\n", ifidx);
+ brcmu_pkt_buf_free_skb(skb);
+ return;
+ }
+diff --git a/drivers/net/wireless/iwlwifi/dvm/commands.h b/drivers/net/wireless/iwlwifi/dvm/commands.h
+index 751ae1d10b7f..7a34e4d158d1 100644
+--- a/drivers/net/wireless/iwlwifi/dvm/commands.h
++++ b/drivers/net/wireless/iwlwifi/dvm/commands.h
+@@ -966,21 +966,21 @@ struct iwl_rem_sta_cmd {
+
+
+ /* WiFi queues mask */
+-#define IWL_SCD_BK_MSK cpu_to_le32(BIT(0))
+-#define IWL_SCD_BE_MSK cpu_to_le32(BIT(1))
+-#define IWL_SCD_VI_MSK cpu_to_le32(BIT(2))
+-#define IWL_SCD_VO_MSK cpu_to_le32(BIT(3))
+-#define IWL_SCD_MGMT_MSK cpu_to_le32(BIT(3))
++#define IWL_SCD_BK_MSK BIT(0)
++#define IWL_SCD_BE_MSK BIT(1)
++#define IWL_SCD_VI_MSK BIT(2)
++#define IWL_SCD_VO_MSK BIT(3)
++#define IWL_SCD_MGMT_MSK BIT(3)
+
+ /* PAN queues mask */
+-#define IWL_PAN_SCD_BK_MSK cpu_to_le32(BIT(4))
+-#define IWL_PAN_SCD_BE_MSK cpu_to_le32(BIT(5))
+-#define IWL_PAN_SCD_VI_MSK cpu_to_le32(BIT(6))
+-#define IWL_PAN_SCD_VO_MSK cpu_to_le32(BIT(7))
+-#define IWL_PAN_SCD_MGMT_MSK cpu_to_le32(BIT(7))
+-#define IWL_PAN_SCD_MULTICAST_MSK cpu_to_le32(BIT(8))
++#define IWL_PAN_SCD_BK_MSK BIT(4)
++#define IWL_PAN_SCD_BE_MSK BIT(5)
++#define IWL_PAN_SCD_VI_MSK BIT(6)
++#define IWL_PAN_SCD_VO_MSK BIT(7)
++#define IWL_PAN_SCD_MGMT_MSK BIT(7)
++#define IWL_PAN_SCD_MULTICAST_MSK BIT(8)
+
+-#define IWL_AGG_TX_QUEUE_MSK cpu_to_le32(0xffc00)
++#define IWL_AGG_TX_QUEUE_MSK 0xffc00
+
+ #define IWL_DROP_ALL BIT(1)
+
+@@ -1005,12 +1005,17 @@ struct iwl_rem_sta_cmd {
+ * 1: Dump multiple MSDU according to PS, INVALID STA, TTL, TID disable.
+ * 2: Dump all FIFO
+ */
+-struct iwl_txfifo_flush_cmd {
++struct iwl_txfifo_flush_cmd_v3 {
+ __le32 queue_control;
+ __le16 flush_control;
+ __le16 reserved;
+ } __packed;
+
++struct iwl_txfifo_flush_cmd_v2 {
++ __le16 queue_control;
++ __le16 flush_control;
++} __packed;
++
+ /*
+ * REPLY_WEP_KEY = 0x20
+ */
+diff --git a/drivers/net/wireless/iwlwifi/dvm/lib.c b/drivers/net/wireless/iwlwifi/dvm/lib.c
+index 2191621d69c1..cfe1293692fc 100644
+--- a/drivers/net/wireless/iwlwifi/dvm/lib.c
++++ b/drivers/net/wireless/iwlwifi/dvm/lib.c
+@@ -137,37 +137,38 @@ int iwlagn_manage_ibss_station(struct iwl_priv *priv,
+ */
+ int iwlagn_txfifo_flush(struct iwl_priv *priv, u32 scd_q_msk)
+ {
+- struct iwl_txfifo_flush_cmd flush_cmd;
+- struct iwl_host_cmd cmd = {
+- .id = REPLY_TXFIFO_FLUSH,
+- .len = { sizeof(struct iwl_txfifo_flush_cmd), },
+- .data = { &flush_cmd, },
++ struct iwl_txfifo_flush_cmd_v3 flush_cmd_v3 = {
++ .flush_control = cpu_to_le16(IWL_DROP_ALL),
++ };
++ struct iwl_txfifo_flush_cmd_v2 flush_cmd_v2 = {
++ .flush_control = cpu_to_le16(IWL_DROP_ALL),
+ };
+
+- memset(&flush_cmd, 0, sizeof(flush_cmd));
++ u32 queue_control = IWL_SCD_VO_MSK | IWL_SCD_VI_MSK |
++ IWL_SCD_BE_MSK | IWL_SCD_BK_MSK | IWL_SCD_MGMT_MSK;
+
+- flush_cmd.queue_control = IWL_SCD_VO_MSK | IWL_SCD_VI_MSK |
+- IWL_SCD_BE_MSK | IWL_SCD_BK_MSK |
+- IWL_SCD_MGMT_MSK;
+ if ((priv->valid_contexts != BIT(IWL_RXON_CTX_BSS)))
+- flush_cmd.queue_control |= IWL_PAN_SCD_VO_MSK |
+- IWL_PAN_SCD_VI_MSK |
+- IWL_PAN_SCD_BE_MSK |
+- IWL_PAN_SCD_BK_MSK |
+- IWL_PAN_SCD_MGMT_MSK |
+- IWL_PAN_SCD_MULTICAST_MSK;
++ queue_control |= IWL_PAN_SCD_VO_MSK | IWL_PAN_SCD_VI_MSK |
++ IWL_PAN_SCD_BE_MSK | IWL_PAN_SCD_BK_MSK |
++ IWL_PAN_SCD_MGMT_MSK |
++ IWL_PAN_SCD_MULTICAST_MSK;
+
+ if (priv->nvm_data->sku_cap_11n_enable)
+- flush_cmd.queue_control |= IWL_AGG_TX_QUEUE_MSK;
++ queue_control |= IWL_AGG_TX_QUEUE_MSK;
+
+ if (scd_q_msk)
+- flush_cmd.queue_control = cpu_to_le32(scd_q_msk);
+-
+- IWL_DEBUG_INFO(priv, "queue control: 0x%x\n",
+- flush_cmd.queue_control);
+- flush_cmd.flush_control = cpu_to_le16(IWL_DROP_ALL);
+-
+- return iwl_dvm_send_cmd(priv, &cmd);
++ queue_control = scd_q_msk;
++
++ IWL_DEBUG_INFO(priv, "queue control: 0x%x\n", queue_control);
++ flush_cmd_v3.queue_control = cpu_to_le32(queue_control);
++ flush_cmd_v2.queue_control = cpu_to_le16((u16)queue_control);
++
++ if (IWL_UCODE_API(priv->fw->ucode_ver) > 2)
++ return iwl_dvm_send_cmd_pdu(priv, REPLY_TXFIFO_FLUSH, 0,
++ sizeof(flush_cmd_v3),
++ &flush_cmd_v3);
++ return iwl_dvm_send_cmd_pdu(priv, REPLY_TXFIFO_FLUSH, 0,
++ sizeof(flush_cmd_v2), &flush_cmd_v2);
+ }
+
+ void iwlagn_dev_txfifo_flush(struct iwl_priv *priv)
+diff --git a/drivers/net/wireless/iwlwifi/mvm/fw-api.h b/drivers/net/wireless/iwlwifi/mvm/fw-api.h
+index c62575d86bcd..5bd902c976e7 100644
+--- a/drivers/net/wireless/iwlwifi/mvm/fw-api.h
++++ b/drivers/net/wireless/iwlwifi/mvm/fw-api.h
+@@ -1589,7 +1589,7 @@ enum iwl_sf_scenario {
+ #define SF_NUM_TIMEOUT_TYPES 2 /* Aging timer and Idle timer */
+
+ /* smart FIFO default values */
+-#define SF_W_MARK_SISO 4096
++#define SF_W_MARK_SISO 6144
+ #define SF_W_MARK_MIMO2 8192
+ #define SF_W_MARK_MIMO3 6144
+ #define SF_W_MARK_LEGACY 4096
+diff --git a/drivers/net/wireless/iwlwifi/pcie/drv.c b/drivers/net/wireless/iwlwifi/pcie/drv.c
+index 6ced8549eb3a..05cba8c05d3f 100644
+--- a/drivers/net/wireless/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/iwlwifi/pcie/drv.c
+@@ -367,7 +367,11 @@ static const struct pci_device_id iwl_hw_card_ids[] = {
+
+ /* 3165 Series */
+ {IWL_PCI_DEVICE(0x3165, 0x4010, iwl3165_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x3165, 0x4012, iwl3165_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x3165, 0x4110, iwl3165_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x3165, 0x4210, iwl3165_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x3165, 0x4410, iwl3165_2ac_cfg)},
++ {IWL_PCI_DEVICE(0x3165, 0x4510, iwl3165_2ac_cfg)},
+
+ /* 7265 Series */
+ {IWL_PCI_DEVICE(0x095A, 0x5010, iwl7265_2ac_cfg)},
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index c8ca98c2b480..3010ffc9029d 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -216,14 +216,17 @@ int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type,
+ res->flags |= IORESOURCE_SIZEALIGN;
+ if (res->flags & IORESOURCE_IO) {
+ l &= PCI_BASE_ADDRESS_IO_MASK;
++ sz &= PCI_BASE_ADDRESS_IO_MASK;
+ mask = PCI_BASE_ADDRESS_IO_MASK & (u32) IO_SPACE_LIMIT;
+ } else {
+ l &= PCI_BASE_ADDRESS_MEM_MASK;
++ sz &= PCI_BASE_ADDRESS_MEM_MASK;
+ mask = (u32)PCI_BASE_ADDRESS_MEM_MASK;
+ }
+ } else {
+ res->flags |= (l & IORESOURCE_ROM_ENABLE);
+ l &= PCI_ROM_ADDRESS_MASK;
++ sz &= PCI_ROM_ADDRESS_MASK;
+ mask = (u32)PCI_ROM_ADDRESS_MASK;
+ }
+
+diff --git a/drivers/regulator/s2mps11.c b/drivers/regulator/s2mps11.c
+index adab82d5279f..697be114e21a 100644
+--- a/drivers/regulator/s2mps11.c
++++ b/drivers/regulator/s2mps11.c
+@@ -479,7 +479,7 @@ static struct regulator_ops s2mps14_reg_ops = {
+ .enable_mask = S2MPS14_ENABLE_MASK \
+ }
+
+-#define regulator_desc_s2mps14_buck(num, min, step) { \
++#define regulator_desc_s2mps14_buck(num, min, step, min_sel) { \
+ .name = "BUCK"#num, \
+ .id = S2MPS14_BUCK##num, \
+ .ops = &s2mps14_reg_ops, \
+@@ -488,7 +488,7 @@ static struct regulator_ops s2mps14_reg_ops = {
+ .min_uV = min, \
+ .uV_step = step, \
+ .n_voltages = S2MPS14_BUCK_N_VOLTAGES, \
+- .linear_min_sel = S2MPS14_BUCK1235_START_SEL, \
++ .linear_min_sel = min_sel, \
+ .ramp_delay = S2MPS14_BUCK_RAMP_DELAY, \
+ .vsel_reg = S2MPS14_REG_B1CTRL2 + (num - 1) * 2, \
+ .vsel_mask = S2MPS14_BUCK_VSEL_MASK, \
+@@ -522,11 +522,16 @@ static const struct regulator_desc s2mps14_regulators[] = {
+ regulator_desc_s2mps14_ldo(23, MIN_800_MV, STEP_25_MV),
+ regulator_desc_s2mps14_ldo(24, MIN_1800_MV, STEP_25_MV),
+ regulator_desc_s2mps14_ldo(25, MIN_1800_MV, STEP_25_MV),
+- regulator_desc_s2mps14_buck(1, MIN_600_MV, STEP_6_25_MV),
+- regulator_desc_s2mps14_buck(2, MIN_600_MV, STEP_6_25_MV),
+- regulator_desc_s2mps14_buck(3, MIN_600_MV, STEP_6_25_MV),
+- regulator_desc_s2mps14_buck(4, MIN_1400_MV, STEP_12_5_MV),
+- regulator_desc_s2mps14_buck(5, MIN_600_MV, STEP_6_25_MV),
++ regulator_desc_s2mps14_buck(1, MIN_600_MV, STEP_6_25_MV,
++ S2MPS14_BUCK1235_START_SEL),
++ regulator_desc_s2mps14_buck(2, MIN_600_MV, STEP_6_25_MV,
++ S2MPS14_BUCK1235_START_SEL),
++ regulator_desc_s2mps14_buck(3, MIN_600_MV, STEP_6_25_MV,
++ S2MPS14_BUCK1235_START_SEL),
++ regulator_desc_s2mps14_buck(4, MIN_1400_MV, STEP_12_5_MV,
++ S2MPS14_BUCK4_START_SEL),
++ regulator_desc_s2mps14_buck(5, MIN_600_MV, STEP_6_25_MV,
++ S2MPS14_BUCK1235_START_SEL),
+ };
+
+ static int s2mps14_pmic_enable_ext_control(struct s2mps11_info *s2mps11,
+diff --git a/drivers/rtc/rtc-isl12057.c b/drivers/rtc/rtc-isl12057.c
+index 455b601d731d..8c3f60737df8 100644
+--- a/drivers/rtc/rtc-isl12057.c
++++ b/drivers/rtc/rtc-isl12057.c
+@@ -88,7 +88,7 @@ static void isl12057_rtc_regs_to_tm(struct rtc_time *tm, u8 *regs)
+ tm->tm_min = bcd2bin(regs[ISL12057_REG_RTC_MN]);
+
+ if (regs[ISL12057_REG_RTC_HR] & ISL12057_REG_RTC_HR_MIL) { /* AM/PM */
+- tm->tm_hour = bcd2bin(regs[ISL12057_REG_RTC_HR] & 0x0f);
++ tm->tm_hour = bcd2bin(regs[ISL12057_REG_RTC_HR] & 0x1f);
+ if (regs[ISL12057_REG_RTC_HR] & ISL12057_REG_RTC_HR_PM)
+ tm->tm_hour += 12;
+ } else { /* 24 hour mode */
+@@ -97,7 +97,7 @@ static void isl12057_rtc_regs_to_tm(struct rtc_time *tm, u8 *regs)
+
+ tm->tm_mday = bcd2bin(regs[ISL12057_REG_RTC_DT]);
+ tm->tm_wday = bcd2bin(regs[ISL12057_REG_RTC_DW]) - 1; /* starts at 1 */
+- tm->tm_mon = bcd2bin(regs[ISL12057_REG_RTC_MO]) - 1; /* starts at 1 */
++ tm->tm_mon = bcd2bin(regs[ISL12057_REG_RTC_MO] & 0x1f) - 1; /* ditto */
+ tm->tm_year = bcd2bin(regs[ISL12057_REG_RTC_YR]) + 100;
+ }
+
+diff --git a/drivers/rtc/rtc-omap.c b/drivers/rtc/rtc-omap.c
+index 21142e6574a9..828cb9983cc2 100644
+--- a/drivers/rtc/rtc-omap.c
++++ b/drivers/rtc/rtc-omap.c
+@@ -416,6 +416,8 @@ static int __init omap_rtc_probe(struct platform_device *pdev)
+ rtc_writel(KICK1_VALUE, OMAP_RTC_KICK1_REG);
+ }
+
++ device_init_wakeup(&pdev->dev, true);
++
+ rtc = devm_rtc_device_register(&pdev->dev, pdev->name,
+ &omap_rtc_ops, THIS_MODULE);
+ if (IS_ERR(rtc)) {
+@@ -431,8 +433,10 @@ static int __init omap_rtc_probe(struct platform_device *pdev)
+ rtc_write(0, OMAP_RTC_INTERRUPTS_REG);
+
+ /* enable RTC functional clock */
+- if (id_entry->driver_data & OMAP_RTC_HAS_32KCLK_EN)
+- rtc_writel(OMAP_RTC_OSC_32KCLK_EN, OMAP_RTC_OSC_REG);
++ if (id_entry->driver_data & OMAP_RTC_HAS_32KCLK_EN) {
++ reg = rtc_read(OMAP_RTC_OSC_REG);
++ rtc_writel(reg | OMAP_RTC_OSC_32KCLK_EN, OMAP_RTC_OSC_REG);
++ }
+
+ /* clear old status */
+ reg = rtc_read(OMAP_RTC_STATUS_REG);
+@@ -482,8 +486,6 @@ static int __init omap_rtc_probe(struct platform_device *pdev)
+ * is write-only, and always reads as zero...)
+ */
+
+- device_init_wakeup(&pdev->dev, true);
+-
+ if (new_ctrl & (u8) OMAP_RTC_CTRL_SPLIT)
+ pr_info("%s: split power mode\n", pdev->name);
+
+@@ -493,6 +495,7 @@ static int __init omap_rtc_probe(struct platform_device *pdev)
+ return 0;
+
+ fail0:
++ device_init_wakeup(&pdev->dev, false);
+ if (id_entry->driver_data & OMAP_RTC_HAS_KICKER)
+ rtc_writel(0, OMAP_RTC_KICK0_REG);
+ pm_runtime_put_sync(&pdev->dev);
+diff --git a/drivers/rtc/rtc-sirfsoc.c b/drivers/rtc/rtc-sirfsoc.c
+index 76e38007ba90..24ba97d3286e 100644
+--- a/drivers/rtc/rtc-sirfsoc.c
++++ b/drivers/rtc/rtc-sirfsoc.c
+@@ -286,14 +286,6 @@ static int sirfsoc_rtc_probe(struct platform_device *pdev)
+ rtc_div = ((32768 / RTC_HZ) / 2) - 1;
+ sirfsoc_rtc_iobrg_writel(rtc_div, rtcdrv->rtc_base + RTC_DIV);
+
+- rtcdrv->rtc = devm_rtc_device_register(&pdev->dev, pdev->name,
+- &sirfsoc_rtc_ops, THIS_MODULE);
+- if (IS_ERR(rtcdrv->rtc)) {
+- err = PTR_ERR(rtcdrv->rtc);
+- dev_err(&pdev->dev, "can't register RTC device\n");
+- return err;
+- }
+-
+ /* 0x3 -> RTC_CLK */
+ sirfsoc_rtc_iobrg_writel(SIRFSOC_RTC_CLK,
+ rtcdrv->rtc_base + RTC_CLOCK_SWITCH);
+@@ -308,6 +300,14 @@ static int sirfsoc_rtc_probe(struct platform_device *pdev)
+ rtcdrv->overflow_rtc =
+ sirfsoc_rtc_iobrg_readl(rtcdrv->rtc_base + RTC_SW_VALUE);
+
++ rtcdrv->rtc = devm_rtc_device_register(&pdev->dev, pdev->name,
++ &sirfsoc_rtc_ops, THIS_MODULE);
++ if (IS_ERR(rtcdrv->rtc)) {
++ err = PTR_ERR(rtcdrv->rtc);
++ dev_err(&pdev->dev, "can't register RTC device\n");
++ return err;
++ }
++
+ rtcdrv->irq = platform_get_irq(pdev, 0);
+ err = devm_request_irq(
+ &pdev->dev,
+diff --git a/drivers/spi/spi-sh-msiof.c b/drivers/spi/spi-sh-msiof.c
+index 3f365402fcc0..14052936b1c5 100644
+--- a/drivers/spi/spi-sh-msiof.c
++++ b/drivers/spi/spi-sh-msiof.c
+@@ -480,6 +480,8 @@ static int sh_msiof_spi_setup(struct spi_device *spi)
+ struct device_node *np = spi->master->dev.of_node;
+ struct sh_msiof_spi_priv *p = spi_master_get_devdata(spi->master);
+
++ pm_runtime_get_sync(&p->pdev->dev);
++
+ if (!np) {
+ /*
+ * Use spi->controller_data for CS (same strategy as spi_gpio),
+@@ -498,6 +500,9 @@ static int sh_msiof_spi_setup(struct spi_device *spi)
+ if (spi->cs_gpio >= 0)
+ gpio_set_value(spi->cs_gpio, !(spi->mode & SPI_CS_HIGH));
+
++
++ pm_runtime_put_sync(&p->pdev->dev);
++
+ return 0;
+ }
+
+diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
+index 2e900a98c3e3..47ca0f3b8c85 100644
+--- a/drivers/tty/n_tty.c
++++ b/drivers/tty/n_tty.c
+@@ -321,7 +321,8 @@ static void n_tty_check_unthrottle(struct tty_struct *tty)
+
+ static inline void put_tty_queue(unsigned char c, struct n_tty_data *ldata)
+ {
+- *read_buf_addr(ldata, ldata->read_head++) = c;
++ *read_buf_addr(ldata, ldata->read_head) = c;
++ ldata->read_head++;
+ }
+
+ /**
+diff --git a/drivers/tty/serial/men_z135_uart.c b/drivers/tty/serial/men_z135_uart.c
+index 30e9e60bc5cd..517cd073dc08 100644
+--- a/drivers/tty/serial/men_z135_uart.c
++++ b/drivers/tty/serial/men_z135_uart.c
+@@ -809,6 +809,7 @@ static void men_z135_remove(struct mcb_device *mdev)
+
+ static const struct mcb_device_id men_z135_ids[] = {
+ { .device = 0x87 },
++ { }
+ };
+ MODULE_DEVICE_TABLE(mcb, men_z135_ids);
+
+diff --git a/drivers/tty/serial/samsung.c b/drivers/tty/serial/samsung.c
+index c78f43a481ce..587d63bcbd0e 100644
+--- a/drivers/tty/serial/samsung.c
++++ b/drivers/tty/serial/samsung.c
+@@ -559,11 +559,15 @@ static void s3c24xx_serial_pm(struct uart_port *port, unsigned int level,
+ unsigned int old)
+ {
+ struct s3c24xx_uart_port *ourport = to_ourport(port);
++ int timeout = 10000;
+
+ ourport->pm_level = level;
+
+ switch (level) {
+ case 3:
++ while (--timeout && !s3c24xx_serial_txempty_nofifo(port))
++ udelay(100);
++
+ if (!IS_ERR(ourport->baudclk))
+ clk_disable_unprepare(ourport->baudclk);
+
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 077d58ac3dcb..64d9c3daa856 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -1197,10 +1197,11 @@ next_desc:
+ } else {
+ control_interface = usb_ifnum_to_if(usb_dev, union_header->bMasterInterface0);
+ data_interface = usb_ifnum_to_if(usb_dev, (data_interface_num = union_header->bSlaveInterface0));
+- if (!control_interface || !data_interface) {
+- dev_dbg(&intf->dev, "no interfaces\n");
+- return -ENODEV;
+- }
++ }
++
++ if (!control_interface || !data_interface) {
++ dev_dbg(&intf->dev, "no interfaces\n");
++ return -ENODEV;
+ }
+
+ if (data_interface_num != call_interface_num)
+@@ -1475,6 +1476,7 @@ alloc_fail8:
+ &dev_attr_wCountryCodes);
+ device_remove_file(&acm->control->dev,
+ &dev_attr_iCountryCodeRelDate);
++ kfree(acm->country_codes);
+ }
+ device_remove_file(&acm->control->dev, &dev_attr_bmCapabilities);
+ alloc_fail7:
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index 96fafed92b76..0ffb4ed0a945 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -103,6 +103,9 @@ static const struct usb_device_id usb_quirk_list[] = {
+ { USB_DEVICE(0x04f3, 0x009b), .driver_info =
+ USB_QUIRK_DEVICE_QUALIFIER },
+
++ { USB_DEVICE(0x04f3, 0x010c), .driver_info =
++ USB_QUIRK_DEVICE_QUALIFIER },
++
+ { USB_DEVICE(0x04f3, 0x016f), .driver_info =
+ USB_QUIRK_DEVICE_QUALIFIER },
+
+diff --git a/drivers/usb/gadget/udc/at91_udc.c b/drivers/usb/gadget/udc/at91_udc.c
+index 9968f5331fe4..0716c1994e28 100644
+--- a/drivers/usb/gadget/udc/at91_udc.c
++++ b/drivers/usb/gadget/udc/at91_udc.c
+@@ -870,12 +870,10 @@ static void clk_on(struct at91_udc *udc)
+ return;
+ udc->clocked = 1;
+
+- if (IS_ENABLED(CONFIG_COMMON_CLK)) {
+- clk_set_rate(udc->uclk, 48000000);
+- clk_prepare_enable(udc->uclk);
+- }
+- clk_prepare_enable(udc->iclk);
+- clk_prepare_enable(udc->fclk);
++ if (IS_ENABLED(CONFIG_COMMON_CLK))
++ clk_enable(udc->uclk);
++ clk_enable(udc->iclk);
++ clk_enable(udc->fclk);
+ }
+
+ static void clk_off(struct at91_udc *udc)
+@@ -884,10 +882,10 @@ static void clk_off(struct at91_udc *udc)
+ return;
+ udc->clocked = 0;
+ udc->gadget.speed = USB_SPEED_UNKNOWN;
+- clk_disable_unprepare(udc->fclk);
+- clk_disable_unprepare(udc->iclk);
++ clk_disable(udc->fclk);
++ clk_disable(udc->iclk);
+ if (IS_ENABLED(CONFIG_COMMON_CLK))
+- clk_disable_unprepare(udc->uclk);
++ clk_disable(udc->uclk);
+ }
+
+ /*
+@@ -1780,14 +1778,24 @@ static int at91udc_probe(struct platform_device *pdev)
+ }
+
+ /* don't do anything until we have both gadget driver and VBUS */
++ if (IS_ENABLED(CONFIG_COMMON_CLK)) {
++ clk_set_rate(udc->uclk, 48000000);
++ retval = clk_prepare(udc->uclk);
++ if (retval)
++ goto fail1;
++ }
++ retval = clk_prepare(udc->fclk);
++ if (retval)
++ goto fail1a;
++
+ retval = clk_prepare_enable(udc->iclk);
+ if (retval)
+- goto fail1;
++ goto fail1b;
+ at91_udp_write(udc, AT91_UDP_TXVC, AT91_UDP_TXVC_TXVDIS);
+ at91_udp_write(udc, AT91_UDP_IDR, 0xffffffff);
+ /* Clear all pending interrupts - UDP may be used by bootloader. */
+ at91_udp_write(udc, AT91_UDP_ICR, 0xffffffff);
+- clk_disable_unprepare(udc->iclk);
++ clk_disable(udc->iclk);
+
+ /* request UDC and maybe VBUS irqs */
+ udc->udp_irq = platform_get_irq(pdev, 0);
+@@ -1795,7 +1803,7 @@ static int at91udc_probe(struct platform_device *pdev)
+ 0, driver_name, udc);
+ if (retval < 0) {
+ DBG("request irq %d failed\n", udc->udp_irq);
+- goto fail1;
++ goto fail1c;
+ }
+ if (gpio_is_valid(udc->board.vbus_pin)) {
+ retval = gpio_request(udc->board.vbus_pin, "udc_vbus");
+@@ -1848,6 +1856,13 @@ fail3:
+ gpio_free(udc->board.vbus_pin);
+ fail2:
+ free_irq(udc->udp_irq, udc);
++fail1c:
++ clk_unprepare(udc->iclk);
++fail1b:
++ clk_unprepare(udc->fclk);
++fail1a:
++ if (IS_ENABLED(CONFIG_COMMON_CLK))
++ clk_unprepare(udc->uclk);
+ fail1:
+ if (IS_ENABLED(CONFIG_COMMON_CLK) && !IS_ERR(udc->uclk))
+ clk_put(udc->uclk);
+@@ -1896,6 +1911,11 @@ static int __exit at91udc_remove(struct platform_device *pdev)
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ release_mem_region(res->start, resource_size(res));
+
++ if (IS_ENABLED(CONFIG_COMMON_CLK))
++ clk_unprepare(udc->uclk);
++ clk_unprepare(udc->fclk);
++ clk_unprepare(udc->iclk);
++
+ clk_put(udc->iclk);
+ clk_put(udc->fclk);
+ if (IS_ENABLED(CONFIG_COMMON_CLK))
+diff --git a/drivers/usb/renesas_usbhs/mod_gadget.c b/drivers/usb/renesas_usbhs/mod_gadget.c
+index 2d17c10a0428..294d43c387b2 100644
+--- a/drivers/usb/renesas_usbhs/mod_gadget.c
++++ b/drivers/usb/renesas_usbhs/mod_gadget.c
+@@ -602,6 +602,9 @@ static int usbhsg_ep_disable(struct usb_ep *ep)
+ struct usbhsg_uep *uep = usbhsg_ep_to_uep(ep);
+ struct usbhs_pipe *pipe = usbhsg_uep_to_pipe(uep);
+
++ if (!pipe)
++ return -EINVAL;
++
+ usbhsg_pipe_disable(uep);
+ usbhs_pipe_free(pipe);
+
+diff --git a/drivers/usb/serial/qcserial.c b/drivers/usb/serial/qcserial.c
+index b2aa003bf411..cb3e14780a7e 100644
+--- a/drivers/usb/serial/qcserial.c
++++ b/drivers/usb/serial/qcserial.c
+@@ -27,12 +27,15 @@ enum qcserial_layouts {
+ QCSERIAL_G2K = 0, /* Gobi 2000 */
+ QCSERIAL_G1K = 1, /* Gobi 1000 */
+ QCSERIAL_SWI = 2, /* Sierra Wireless */
++ QCSERIAL_HWI = 3, /* Huawei */
+ };
+
+ #define DEVICE_G1K(v, p) \
+ USB_DEVICE(v, p), .driver_info = QCSERIAL_G1K
+ #define DEVICE_SWI(v, p) \
+ USB_DEVICE(v, p), .driver_info = QCSERIAL_SWI
++#define DEVICE_HWI(v, p) \
++ USB_DEVICE(v, p), .driver_info = QCSERIAL_HWI
+
+ static const struct usb_device_id id_table[] = {
+ /* Gobi 1000 devices */
+@@ -157,6 +160,9 @@ static const struct usb_device_id id_table[] = {
+ {DEVICE_SWI(0x413c, 0x81a8)}, /* Dell Wireless 5808 Gobi(TM) 4G LTE Mobile Broadband Card */
+ {DEVICE_SWI(0x413c, 0x81a9)}, /* Dell Wireless 5808e Gobi(TM) 4G LTE Mobile Broadband Card */
+
++ /* Huawei devices */
++ {DEVICE_HWI(0x03f0, 0x581d)}, /* HP lt4112 LTE/HSPA+ Gobi 4G Modem (Huawei me906e) */
++
+ { } /* Terminating entry */
+ };
+ MODULE_DEVICE_TABLE(usb, id_table);
+@@ -287,6 +293,33 @@ static int qcprobe(struct usb_serial *serial, const struct usb_device_id *id)
+ break;
+ }
+ break;
++ case QCSERIAL_HWI:
++ /*
++ * Huawei layout:
++ * 0: AT-capable modem port
++ * 1: DM/DIAG
++ * 2: AT-capable modem port
++ * 3: CCID-compatible PCSC interface
++ * 4: QMI/net
++ * 5: NMEA
++ */
++ switch (ifnum) {
++ case 0:
++ case 2:
++ dev_dbg(dev, "Modem port found\n");
++ break;
++ case 1:
++ dev_dbg(dev, "DM/DIAG interface found\n");
++ break;
++ case 5:
++ dev_dbg(dev, "NMEA GPS interface found\n");
++ break;
++ default:
++ /* don't claim any unsupported interface */
++ altsetting = -1;
++ break;
++ }
++ break;
+ default:
+ dev_err(dev, "unsupported device layout type: %lu\n",
+ id->driver_info);
+diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
+index ebd8f218a788..9df5d6ec7eec 100644
+--- a/drivers/xen/swiotlb-xen.c
++++ b/drivers/xen/swiotlb-xen.c
+@@ -96,8 +96,6 @@ static inline phys_addr_t xen_bus_to_phys(dma_addr_t baddr)
+ dma_addr_t dma = (dma_addr_t)pfn << PAGE_SHIFT;
+ phys_addr_t paddr = dma;
+
+- BUG_ON(paddr != dma); /* truncation has occurred, should never happen */
+-
+ paddr |= baddr & ~PAGE_MASK;
+
+ return paddr;
+@@ -447,11 +445,11 @@ static void xen_unmap_single(struct device *hwdev, dma_addr_t dev_addr,
+
+ BUG_ON(dir == DMA_NONE);
+
+- xen_dma_unmap_page(hwdev, paddr, size, dir, attrs);
++ xen_dma_unmap_page(hwdev, dev_addr, size, dir, attrs);
+
+ /* NOTE: We use dev_addr here, not paddr! */
+ if (is_xen_swiotlb_buffer(dev_addr)) {
+- swiotlb_tbl_unmap_single(hwdev, paddr, size, dir);
++ swiotlb_tbl_unmap_single(hwdev, dev_addr, size, dir);
+ return;
+ }
+
+@@ -495,14 +493,14 @@ xen_swiotlb_sync_single(struct device *hwdev, dma_addr_t dev_addr,
+ BUG_ON(dir == DMA_NONE);
+
+ if (target == SYNC_FOR_CPU)
+- xen_dma_sync_single_for_cpu(hwdev, paddr, size, dir);
++ xen_dma_sync_single_for_cpu(hwdev, dev_addr, size, dir);
+
+ /* NOTE: We use dev_addr here, not paddr! */
+ if (is_xen_swiotlb_buffer(dev_addr))
+ swiotlb_tbl_sync_single(hwdev, paddr, size, dir, target);
+
+ if (target == SYNC_FOR_DEVICE)
+- xen_dma_sync_single_for_cpu(hwdev, paddr, size, dir);
++ xen_dma_sync_single_for_device(hwdev, dev_addr, size, dir);
+
+ if (dir != DMA_FROM_DEVICE)
+ return;
+diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c
+index 054577bddaf2..de4e70fb3cbb 100644
+--- a/fs/btrfs/delayed-inode.c
++++ b/fs/btrfs/delayed-inode.c
+@@ -1857,6 +1857,14 @@ int btrfs_delayed_delete_inode_ref(struct inode *inode)
+ {
+ struct btrfs_delayed_node *delayed_node;
+
++ /*
++ * we don't do delayed inode updates during log recovery because it
++ * leads to enospc problems. This means we also can't do
++ * delayed inode refs
++ */
++ if (BTRFS_I(inode)->root->fs_info->log_root_recovering)
++ return -EAGAIN;
++
+ delayed_node = btrfs_get_or_create_delayed_node(inode);
+ if (IS_ERR(delayed_node))
+ return PTR_ERR(delayed_node);
+diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
+index 18c06bbaf136..481529b879fe 100644
+--- a/fs/ceph/addr.c
++++ b/fs/ceph/addr.c
+@@ -673,7 +673,7 @@ static int ceph_writepages_start(struct address_space *mapping,
+ int rc = 0;
+ unsigned wsize = 1 << inode->i_blkbits;
+ struct ceph_osd_request *req = NULL;
+- int do_sync;
++ int do_sync = 0;
+ u64 truncate_size, snap_size;
+ u32 truncate_seq;
+
+diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
+index ef9bef118342..2d609a5fbfea 100644
+--- a/fs/fs-writeback.c
++++ b/fs/fs-writeback.c
+@@ -479,12 +479,28 @@ __writeback_single_inode(struct inode *inode, struct writeback_control *wbc)
+ * write_inode()
+ */
+ spin_lock(&inode->i_lock);
+- /* Clear I_DIRTY_PAGES if we've written out all dirty pages */
+- if (!mapping_tagged(mapping, PAGECACHE_TAG_DIRTY))
+- inode->i_state &= ~I_DIRTY_PAGES;
++
+ dirty = inode->i_state & I_DIRTY;
+- inode->i_state &= ~(I_DIRTY_SYNC | I_DIRTY_DATASYNC);
++ inode->i_state &= ~I_DIRTY;
++
++ /*
++ * Paired with smp_mb() in __mark_inode_dirty(). This allows
++ * __mark_inode_dirty() to test i_state without grabbing i_lock -
++ * either they see the I_DIRTY bits cleared or we see the dirtied
++ * inode.
++ *
++ * I_DIRTY_PAGES is always cleared together above even if @mapping
++ * still has dirty pages. The flag is reinstated after smp_mb() if
++ * necessary. This guarantees that either __mark_inode_dirty()
++ * sees clear I_DIRTY_PAGES or we see PAGECACHE_TAG_DIRTY.
++ */
++ smp_mb();
++
++ if (mapping_tagged(mapping, PAGECACHE_TAG_DIRTY))
++ inode->i_state |= I_DIRTY_PAGES;
++
+ spin_unlock(&inode->i_lock);
++
+ /* Don't write the inode if only I_DIRTY_PAGES was set */
+ if (dirty & (I_DIRTY_SYNC | I_DIRTY_DATASYNC)) {
+ int err = write_inode(inode, wbc);
+@@ -1148,12 +1164,11 @@ void __mark_inode_dirty(struct inode *inode, int flags)
+ }
+
+ /*
+- * make sure that changes are seen by all cpus before we test i_state
+- * -- mikulas
++ * Paired with smp_mb() in __writeback_single_inode() for the
++ * following lockless i_state test. See there for details.
+ */
+ smp_mb();
+
+- /* avoid the locking if we can */
+ if ((inode->i_state & flags) == flags)
+ return;
+
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index e9c3afe4b5d3..d66e3ad1de48 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -1711,15 +1711,14 @@ static int copy_cred(struct svc_cred *target, struct svc_cred *source)
+ return 0;
+ }
+
+-static long long
++static int
+ compare_blob(const struct xdr_netobj *o1, const struct xdr_netobj *o2)
+ {
+- long long res;
+-
+- res = o1->len - o2->len;
+- if (res)
+- return res;
+- return (long long)memcmp(o1->data, o2->data, o1->len);
++ if (o1->len < o2->len)
++ return -1;
++ if (o1->len > o2->len)
++ return 1;
++ return memcmp(o1->data, o2->data, o1->len);
+ }
+
+ static int same_name(const char *n1, const char *n2)
+@@ -1907,7 +1906,7 @@ add_clp_to_name_tree(struct nfs4_client *new_clp, struct rb_root *root)
+ static struct nfs4_client *
+ find_clp_in_name_tree(struct xdr_netobj *name, struct rb_root *root)
+ {
+- long long cmp;
++ int cmp;
+ struct rb_node *node = root->rb_node;
+ struct nfs4_client *clp;
+
+@@ -3891,11 +3890,11 @@ nfs4_set_delegation(struct nfs4_client *clp, struct svc_fh *fh,
+ status = nfs4_setlease(dp);
+ goto out;
+ }
+- atomic_inc(&fp->fi_delegees);
+ if (fp->fi_had_conflict) {
+ status = -EAGAIN;
+ goto out_unlock;
+ }
++ atomic_inc(&fp->fi_delegees);
+ hash_delegation_locked(dp, fp);
+ status = 0;
+ out_unlock:
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
+index eeea7a90eb87..2a77603d7cfd 100644
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -1795,9 +1795,12 @@ static __be32 nfsd4_encode_components_esc(struct xdr_stream *xdr, char sep,
+ }
+ else
+ end++;
++ if (found_esc)
++ end = next;
++
+ str = end;
+ }
+- pathlen = htonl(xdr->buf->len - pathlen_offset);
++ pathlen = htonl(count);
+ write_bytes_to_xdr_buf(xdr->buf, pathlen_offset, &pathlen, 4);
+ return 0;
+ }
+diff --git a/fs/nilfs2/inode.c b/fs/nilfs2/inode.c
+index e1fa69b341b9..8b5969538f39 100644
+--- a/fs/nilfs2/inode.c
++++ b/fs/nilfs2/inode.c
+@@ -49,6 +49,8 @@ struct nilfs_iget_args {
+ int for_gc;
+ };
+
++static int nilfs_iget_test(struct inode *inode, void *opaque);
++
+ void nilfs_inode_add_blocks(struct inode *inode, int n)
+ {
+ struct nilfs_root *root = NILFS_I(inode)->i_root;
+@@ -348,6 +350,17 @@ const struct address_space_operations nilfs_aops = {
+ .is_partially_uptodate = block_is_partially_uptodate,
+ };
+
++static int nilfs_insert_inode_locked(struct inode *inode,
++ struct nilfs_root *root,
++ unsigned long ino)
++{
++ struct nilfs_iget_args args = {
++ .ino = ino, .root = root, .cno = 0, .for_gc = 0
++ };
++
++ return insert_inode_locked4(inode, ino, nilfs_iget_test, &args);
++}
++
+ struct inode *nilfs_new_inode(struct inode *dir, umode_t mode)
+ {
+ struct super_block *sb = dir->i_sb;
+@@ -383,7 +396,7 @@ struct inode *nilfs_new_inode(struct inode *dir, umode_t mode)
+ if (S_ISREG(mode) || S_ISDIR(mode) || S_ISLNK(mode)) {
+ err = nilfs_bmap_read(ii->i_bmap, NULL);
+ if (err < 0)
+- goto failed_bmap;
++ goto failed_after_creation;
+
+ set_bit(NILFS_I_BMAP, &ii->i_state);
+ /* No lock is needed; iget() ensures it. */
+@@ -399,21 +412,24 @@ struct inode *nilfs_new_inode(struct inode *dir, umode_t mode)
+ spin_lock(&nilfs->ns_next_gen_lock);
+ inode->i_generation = nilfs->ns_next_generation++;
+ spin_unlock(&nilfs->ns_next_gen_lock);
+- insert_inode_hash(inode);
++ if (nilfs_insert_inode_locked(inode, root, ino) < 0) {
++ err = -EIO;
++ goto failed_after_creation;
++ }
+
+ err = nilfs_init_acl(inode, dir);
+ if (unlikely(err))
+- goto failed_acl; /* never occur. When supporting
++ goto failed_after_creation; /* never occur. When supporting
+ nilfs_init_acl(), proper cancellation of
+ above jobs should be considered */
+
+ return inode;
+
+- failed_acl:
+- failed_bmap:
++ failed_after_creation:
+ clear_nlink(inode);
++ unlock_new_inode(inode);
+ iput(inode); /* raw_inode will be deleted through
+- generic_delete_inode() */
++ nilfs_evict_inode() */
+ goto failed;
+
+ failed_ifile_create_inode:
+@@ -461,8 +477,8 @@ int nilfs_read_inode_common(struct inode *inode,
+ inode->i_atime.tv_nsec = le32_to_cpu(raw_inode->i_mtime_nsec);
+ inode->i_ctime.tv_nsec = le32_to_cpu(raw_inode->i_ctime_nsec);
+ inode->i_mtime.tv_nsec = le32_to_cpu(raw_inode->i_mtime_nsec);
+- if (inode->i_nlink == 0 && inode->i_mode == 0)
+- return -EINVAL; /* this inode is deleted */
++ if (inode->i_nlink == 0)
++ return -ESTALE; /* this inode is deleted */
+
+ inode->i_blocks = le64_to_cpu(raw_inode->i_blocks);
+ ii->i_flags = le32_to_cpu(raw_inode->i_flags);
+diff --git a/fs/nilfs2/namei.c b/fs/nilfs2/namei.c
+index 9de78f08989e..0f84b257932c 100644
+--- a/fs/nilfs2/namei.c
++++ b/fs/nilfs2/namei.c
+@@ -51,9 +51,11 @@ static inline int nilfs_add_nondir(struct dentry *dentry, struct inode *inode)
+ int err = nilfs_add_link(dentry, inode);
+ if (!err) {
+ d_instantiate(dentry, inode);
++ unlock_new_inode(inode);
+ return 0;
+ }
+ inode_dec_link_count(inode);
++ unlock_new_inode(inode);
+ iput(inode);
+ return err;
+ }
+@@ -182,6 +184,7 @@ out:
+ out_fail:
+ drop_nlink(inode);
+ nilfs_mark_inode_dirty(inode);
++ unlock_new_inode(inode);
+ iput(inode);
+ goto out;
+ }
+@@ -201,11 +204,15 @@ static int nilfs_link(struct dentry *old_dentry, struct inode *dir,
+ inode_inc_link_count(inode);
+ ihold(inode);
+
+- err = nilfs_add_nondir(dentry, inode);
+- if (!err)
++ err = nilfs_add_link(dentry, inode);
++ if (!err) {
++ d_instantiate(dentry, inode);
+ err = nilfs_transaction_commit(dir->i_sb);
+- else
++ } else {
++ inode_dec_link_count(inode);
++ iput(inode);
+ nilfs_transaction_abort(dir->i_sb);
++ }
+
+ return err;
+ }
+@@ -243,6 +250,7 @@ static int nilfs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
+
+ nilfs_mark_inode_dirty(inode);
+ d_instantiate(dentry, inode);
++ unlock_new_inode(inode);
+ out:
+ if (!err)
+ err = nilfs_transaction_commit(dir->i_sb);
+@@ -255,6 +263,7 @@ out_fail:
+ drop_nlink(inode);
+ drop_nlink(inode);
+ nilfs_mark_inode_dirty(inode);
++ unlock_new_inode(inode);
+ iput(inode);
+ out_dir:
+ drop_nlink(dir);
+diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c
+index 1ef547e49373..c71174a0b1b5 100644
+--- a/fs/ocfs2/aops.c
++++ b/fs/ocfs2/aops.c
+@@ -894,7 +894,7 @@ void ocfs2_unlock_and_free_pages(struct page **pages, int num_pages)
+ }
+ }
+
+-static void ocfs2_free_write_ctxt(struct ocfs2_write_ctxt *wc)
++static void ocfs2_unlock_pages(struct ocfs2_write_ctxt *wc)
+ {
+ int i;
+
+@@ -915,7 +915,11 @@ static void ocfs2_free_write_ctxt(struct ocfs2_write_ctxt *wc)
+ page_cache_release(wc->w_target_page);
+ }
+ ocfs2_unlock_and_free_pages(wc->w_pages, wc->w_num_pages);
++}
+
++static void ocfs2_free_write_ctxt(struct ocfs2_write_ctxt *wc)
++{
++ ocfs2_unlock_pages(wc);
+ brelse(wc->w_di_bh);
+ kfree(wc);
+ }
+@@ -2042,11 +2046,19 @@ out_write_size:
+ ocfs2_update_inode_fsync_trans(handle, inode, 1);
+ ocfs2_journal_dirty(handle, wc->w_di_bh);
+
++ /* unlock pages before dealloc since it needs acquiring j_trans_barrier
++ * lock, or it will cause a deadlock since journal commit threads holds
++ * this lock and will ask for the page lock when flushing the data.
++ * put it here to preserve the unlock order.
++ */
++ ocfs2_unlock_pages(wc);
++
+ ocfs2_commit_trans(osb, handle);
+
+ ocfs2_run_deallocs(osb, &wc->w_dealloc);
+
+- ocfs2_free_write_ctxt(wc);
++ brelse(wc->w_di_bh);
++ kfree(wc);
+
+ return copied;
+ }
+diff --git a/fs/ocfs2/namei.c b/fs/ocfs2/namei.c
+index b931e04e3388..914c121ec890 100644
+--- a/fs/ocfs2/namei.c
++++ b/fs/ocfs2/namei.c
+@@ -94,6 +94,14 @@ static int ocfs2_create_symlink_data(struct ocfs2_super *osb,
+ struct inode *inode,
+ const char *symname);
+
++static int ocfs2_double_lock(struct ocfs2_super *osb,
++ struct buffer_head **bh1,
++ struct inode *inode1,
++ struct buffer_head **bh2,
++ struct inode *inode2,
++ int rename);
++
++static void ocfs2_double_unlock(struct inode *inode1, struct inode *inode2);
+ /* An orphan dir name is an 8 byte value, printed as a hex string */
+ #define OCFS2_ORPHAN_NAMELEN ((int)(2 * sizeof(u64)))
+
+@@ -678,8 +686,10 @@ static int ocfs2_link(struct dentry *old_dentry,
+ {
+ handle_t *handle;
+ struct inode *inode = old_dentry->d_inode;
++ struct inode *old_dir = old_dentry->d_parent->d_inode;
+ int err;
+ struct buffer_head *fe_bh = NULL;
++ struct buffer_head *old_dir_bh = NULL;
+ struct buffer_head *parent_fe_bh = NULL;
+ struct ocfs2_dinode *fe = NULL;
+ struct ocfs2_super *osb = OCFS2_SB(dir->i_sb);
+@@ -696,19 +706,33 @@ static int ocfs2_link(struct dentry *old_dentry,
+
+ dquot_initialize(dir);
+
+- err = ocfs2_inode_lock_nested(dir, &parent_fe_bh, 1, OI_LS_PARENT);
++ err = ocfs2_double_lock(osb, &old_dir_bh, old_dir,
++ &parent_fe_bh, dir, 0);
+ if (err < 0) {
+ if (err != -ENOENT)
+ mlog_errno(err);
+ return err;
+ }
+
++ /* make sure both dirs have bhs
++ * get an extra ref on old_dir_bh if old==new */
++ if (!parent_fe_bh) {
++ if (old_dir_bh) {
++ parent_fe_bh = old_dir_bh;
++ get_bh(parent_fe_bh);
++ } else {
++ mlog(ML_ERROR, "%s: no old_dir_bh!\n", osb->uuid_str);
++ err = -EIO;
++ goto out;
++ }
++ }
++
+ if (!dir->i_nlink) {
+ err = -ENOENT;
+ goto out;
+ }
+
+- err = ocfs2_lookup_ino_from_name(dir, old_dentry->d_name.name,
++ err = ocfs2_lookup_ino_from_name(old_dir, old_dentry->d_name.name,
+ old_dentry->d_name.len, &old_de_ino);
+ if (err) {
+ err = -ENOENT;
+@@ -801,10 +825,11 @@ out_unlock_inode:
+ ocfs2_inode_unlock(inode, 1);
+
+ out:
+- ocfs2_inode_unlock(dir, 1);
++ ocfs2_double_unlock(old_dir, dir);
+
+ brelse(fe_bh);
+ brelse(parent_fe_bh);
++ brelse(old_dir_bh);
+
+ ocfs2_free_dir_lookup_result(&lookup);
+
+@@ -1072,14 +1097,15 @@ static int ocfs2_check_if_ancestor(struct ocfs2_super *osb,
+ }
+
+ /*
+- * The only place this should be used is rename!
++ * The only place this should be used is rename and link!
+ * if they have the same id, then the 1st one is the only one locked.
+ */
+ static int ocfs2_double_lock(struct ocfs2_super *osb,
+ struct buffer_head **bh1,
+ struct inode *inode1,
+ struct buffer_head **bh2,
+- struct inode *inode2)
++ struct inode *inode2,
++ int rename)
+ {
+ int status;
+ int inode1_is_ancestor, inode2_is_ancestor;
+@@ -1127,7 +1153,7 @@ static int ocfs2_double_lock(struct ocfs2_super *osb,
+ }
+ /* lock id2 */
+ status = ocfs2_inode_lock_nested(inode2, bh2, 1,
+- OI_LS_RENAME1);
++ rename == 1 ? OI_LS_RENAME1 : OI_LS_PARENT);
+ if (status < 0) {
+ if (status != -ENOENT)
+ mlog_errno(status);
+@@ -1136,7 +1162,8 @@ static int ocfs2_double_lock(struct ocfs2_super *osb,
+ }
+
+ /* lock id1 */
+- status = ocfs2_inode_lock_nested(inode1, bh1, 1, OI_LS_RENAME2);
++ status = ocfs2_inode_lock_nested(inode1, bh1, 1,
++ rename == 1 ? OI_LS_RENAME2 : OI_LS_PARENT);
+ if (status < 0) {
+ /*
+ * An error return must mean that no cluster locks
+@@ -1252,7 +1279,7 @@ static int ocfs2_rename(struct inode *old_dir,
+
+ /* if old and new are the same, this'll just do one lock. */
+ status = ocfs2_double_lock(osb, &old_dir_bh, old_dir,
+- &new_dir_bh, new_dir);
++ &new_dir_bh, new_dir, 1);
+ if (status < 0) {
+ mlog_errno(status);
+ goto bail;
+diff --git a/fs/pstore/ram.c b/fs/pstore/ram.c
+index 3b5744306ed8..5fa34243b1ae 100644
+--- a/fs/pstore/ram.c
++++ b/fs/pstore/ram.c
+@@ -61,6 +61,11 @@ module_param(mem_size, ulong, 0400);
+ MODULE_PARM_DESC(mem_size,
+ "size of reserved RAM used to store oops/panic logs");
+
++static unsigned int mem_type;
++module_param(mem_type, uint, 0600);
++MODULE_PARM_DESC(mem_type,
++ "set to 1 to try to use unbuffered memory (default 0)");
++
+ static int dump_oops = 1;
+ module_param(dump_oops, int, 0600);
+ MODULE_PARM_DESC(dump_oops,
+@@ -79,6 +84,7 @@ struct ramoops_context {
+ struct persistent_ram_zone *fprz;
+ phys_addr_t phys_addr;
+ unsigned long size;
++ unsigned int memtype;
+ size_t record_size;
+ size_t console_size;
+ size_t ftrace_size;
+@@ -358,7 +364,8 @@ static int ramoops_init_przs(struct device *dev, struct ramoops_context *cxt,
+ size_t sz = cxt->record_size;
+
+ cxt->przs[i] = persistent_ram_new(*paddr, sz, 0,
+- &cxt->ecc_info);
++ &cxt->ecc_info,
++ cxt->memtype);
+ if (IS_ERR(cxt->przs[i])) {
+ err = PTR_ERR(cxt->przs[i]);
+ dev_err(dev, "failed to request mem region (0x%zx@0x%llx): %d\n",
+@@ -388,7 +395,7 @@ static int ramoops_init_prz(struct device *dev, struct ramoops_context *cxt,
+ return -ENOMEM;
+ }
+
+- *prz = persistent_ram_new(*paddr, sz, sig, &cxt->ecc_info);
++ *prz = persistent_ram_new(*paddr, sz, sig, &cxt->ecc_info, cxt->memtype);
+ if (IS_ERR(*prz)) {
+ int err = PTR_ERR(*prz);
+
+@@ -435,6 +442,7 @@ static int ramoops_probe(struct platform_device *pdev)
+
+ cxt->size = pdata->mem_size;
+ cxt->phys_addr = pdata->mem_address;
++ cxt->memtype = pdata->mem_type;
+ cxt->record_size = pdata->record_size;
+ cxt->console_size = pdata->console_size;
+ cxt->ftrace_size = pdata->ftrace_size;
+@@ -564,6 +572,7 @@ static void ramoops_register_dummy(void)
+
+ dummy_data->mem_size = mem_size;
+ dummy_data->mem_address = mem_address;
++ dummy_data->mem_type = 0;
+ dummy_data->record_size = record_size;
+ dummy_data->console_size = ramoops_console_size;
+ dummy_data->ftrace_size = ramoops_ftrace_size;
+diff --git a/fs/pstore/ram_core.c b/fs/pstore/ram_core.c
+index 9d7b9a83699e..76c3f80efdfa 100644
+--- a/fs/pstore/ram_core.c
++++ b/fs/pstore/ram_core.c
+@@ -380,7 +380,8 @@ void persistent_ram_zap(struct persistent_ram_zone *prz)
+ persistent_ram_update_header_ecc(prz);
+ }
+
+-static void *persistent_ram_vmap(phys_addr_t start, size_t size)
++static void *persistent_ram_vmap(phys_addr_t start, size_t size,
++ unsigned int memtype)
+ {
+ struct page **pages;
+ phys_addr_t page_start;
+@@ -392,7 +393,10 @@ static void *persistent_ram_vmap(phys_addr_t start, size_t size)
+ page_start = start - offset_in_page(start);
+ page_count = DIV_ROUND_UP(size + offset_in_page(start), PAGE_SIZE);
+
+- prot = pgprot_noncached(PAGE_KERNEL);
++ if (memtype)
++ prot = pgprot_noncached(PAGE_KERNEL);
++ else
++ prot = pgprot_writecombine(PAGE_KERNEL);
+
+ pages = kmalloc_array(page_count, sizeof(struct page *), GFP_KERNEL);
+ if (!pages) {
+@@ -411,8 +415,11 @@ static void *persistent_ram_vmap(phys_addr_t start, size_t size)
+ return vaddr;
+ }
+
+-static void *persistent_ram_iomap(phys_addr_t start, size_t size)
++static void *persistent_ram_iomap(phys_addr_t start, size_t size,
++ unsigned int memtype)
+ {
++ void *va;
++
+ if (!request_mem_region(start, size, "persistent_ram")) {
+ pr_err("request mem region (0x%llx@0x%llx) failed\n",
+ (unsigned long long)size, (unsigned long long)start);
+@@ -422,19 +429,24 @@ static void *persistent_ram_iomap(phys_addr_t start, size_t size)
+ buffer_start_add = buffer_start_add_locked;
+ buffer_size_add = buffer_size_add_locked;
+
+- return ioremap(start, size);
++ if (memtype)
++ va = ioremap(start, size);
++ else
++ va = ioremap_wc(start, size);
++
++ return va;
+ }
+
+ static int persistent_ram_buffer_map(phys_addr_t start, phys_addr_t size,
+- struct persistent_ram_zone *prz)
++ struct persistent_ram_zone *prz, int memtype)
+ {
+ prz->paddr = start;
+ prz->size = size;
+
+ if (pfn_valid(start >> PAGE_SHIFT))
+- prz->vaddr = persistent_ram_vmap(start, size);
++ prz->vaddr = persistent_ram_vmap(start, size, memtype);
+ else
+- prz->vaddr = persistent_ram_iomap(start, size);
++ prz->vaddr = persistent_ram_iomap(start, size, memtype);
+
+ if (!prz->vaddr) {
+ pr_err("%s: Failed to map 0x%llx pages at 0x%llx\n", __func__,
+@@ -500,7 +512,8 @@ void persistent_ram_free(struct persistent_ram_zone *prz)
+ }
+
+ struct persistent_ram_zone *persistent_ram_new(phys_addr_t start, size_t size,
+- u32 sig, struct persistent_ram_ecc_info *ecc_info)
++ u32 sig, struct persistent_ram_ecc_info *ecc_info,
++ unsigned int memtype)
+ {
+ struct persistent_ram_zone *prz;
+ int ret = -ENOMEM;
+@@ -511,7 +524,7 @@ struct persistent_ram_zone *persistent_ram_new(phys_addr_t start, size_t size,
+ goto err;
+ }
+
+- ret = persistent_ram_buffer_map(start, size, prz);
++ ret = persistent_ram_buffer_map(start, size, prz, memtype);
+ if (ret)
+ goto err;
+
+diff --git a/fs/reiserfs/super.c b/fs/reiserfs/super.c
+index f1376c92cf74..b27ef3541490 100644
+--- a/fs/reiserfs/super.c
++++ b/fs/reiserfs/super.c
+@@ -2161,6 +2161,9 @@ error_unlocked:
+ reiserfs_write_unlock(s);
+ }
+
++ if (sbi->commit_wq)
++ destroy_workqueue(sbi->commit_wq);
++
+ cancel_delayed_work_sync(&REISERFS_SB(s)->old_work);
+
+ reiserfs_free_bitmap_cache(s);
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index b46461116cd2..5ab2da9811c1 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -1936,7 +1936,7 @@ extern int expand_downwards(struct vm_area_struct *vma,
+ #if VM_GROWSUP
+ extern int expand_upwards(struct vm_area_struct *vma, unsigned long address);
+ #else
+- #define expand_upwards(vma, address) do { } while (0)
++ #define expand_upwards(vma, address) (0)
+ #endif
+
+ /* Look up the first VMA which satisfies addr < vm_end, NULL if none. */
+diff --git a/include/linux/pstore_ram.h b/include/linux/pstore_ram.h
+index 9974975d40db..4af3fdc85b01 100644
+--- a/include/linux/pstore_ram.h
++++ b/include/linux/pstore_ram.h
+@@ -53,7 +53,8 @@ struct persistent_ram_zone {
+ };
+
+ struct persistent_ram_zone *persistent_ram_new(phys_addr_t start, size_t size,
+- u32 sig, struct persistent_ram_ecc_info *ecc_info);
++ u32 sig, struct persistent_ram_ecc_info *ecc_info,
++ unsigned int memtype);
+ void persistent_ram_free(struct persistent_ram_zone *prz);
+ void persistent_ram_zap(struct persistent_ram_zone *prz);
+
+@@ -76,6 +77,7 @@ ssize_t persistent_ram_ecc_string(struct persistent_ram_zone *prz,
+ struct ramoops_platform_data {
+ unsigned long mem_size;
+ unsigned long mem_address;
++ unsigned int mem_type;
+ unsigned long record_size;
+ unsigned long console_size;
+ unsigned long ftrace_size;
+diff --git a/include/linux/writeback.h b/include/linux/writeback.h
+index a219be961c0a..00048339c23e 100644
+--- a/include/linux/writeback.h
++++ b/include/linux/writeback.h
+@@ -177,7 +177,6 @@ int write_cache_pages(struct address_space *mapping,
+ struct writeback_control *wbc, writepage_t writepage,
+ void *data);
+ int do_writepages(struct address_space *mapping, struct writeback_control *wbc);
+-void set_page_dirty_balance(struct page *page);
+ void writeback_set_ratelimit(void);
+ void tag_pages_for_writeback(struct address_space *mapping,
+ pgoff_t start, pgoff_t end);
+diff --git a/include/net/mac80211.h b/include/net/mac80211.h
+index 0ad1f47d2dc7..a9de1da73c01 100644
+--- a/include/net/mac80211.h
++++ b/include/net/mac80211.h
+@@ -1227,8 +1227,7 @@ struct ieee80211_vif *wdev_to_ieee80211_vif(struct wireless_dev *wdev);
+ *
+ * @IEEE80211_KEY_FLAG_GENERATE_IV: This flag should be set by the
+ * driver to indicate that it requires IV generation for this
+- * particular key. Setting this flag does not necessarily mean that SKBs
+- * will have sufficient tailroom for ICV or MIC.
++ * particular key.
+ * @IEEE80211_KEY_FLAG_GENERATE_MMIC: This flag should be set by
+ * the driver for a TKIP key if it requires Michael MIC
+ * generation in software.
+@@ -1240,9 +1239,7 @@ struct ieee80211_vif *wdev_to_ieee80211_vif(struct wireless_dev *wdev);
+ * @IEEE80211_KEY_FLAG_PUT_IV_SPACE: This flag should be set by the driver
+ * if space should be prepared for the IV, but the IV
+ * itself should not be generated. Do not set together with
+- * @IEEE80211_KEY_FLAG_GENERATE_IV on the same key. Setting this flag does
+- * not necessarily mean that SKBs will have sufficient tailroom for ICV or
+- * MIC.
++ * @IEEE80211_KEY_FLAG_GENERATE_IV on the same key.
+ * @IEEE80211_KEY_FLAG_RX_MGMT: This key will be used to decrypt received
+ * management frames. The flag can help drivers that have a hardware
+ * crypto implementation that doesn't deal with management frames
+diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
+index 0a68d5ae584e..a7d67bc14906 100644
+--- a/include/trace/events/sched.h
++++ b/include/trace/events/sched.h
+@@ -100,7 +100,7 @@ static inline long __trace_sched_switch_state(struct task_struct *p)
+ /*
+ * For all intents and purposes a preempted task is a running task.
+ */
+- if (task_preempt_count(p) & PREEMPT_ACTIVE)
++ if (preempt_count() & PREEMPT_ACTIVE)
+ state = TASK_RUNNING | TASK_STATE_MAX;
+ #endif
+
+diff --git a/include/uapi/linux/audit.h b/include/uapi/linux/audit.h
+index d4dbef14d4df..584bb0113e25 100644
+--- a/include/uapi/linux/audit.h
++++ b/include/uapi/linux/audit.h
+@@ -365,7 +365,9 @@ enum {
+ #define AUDIT_ARCH_PARISC (EM_PARISC)
+ #define AUDIT_ARCH_PARISC64 (EM_PARISC|__AUDIT_ARCH_64BIT)
+ #define AUDIT_ARCH_PPC (EM_PPC)
++/* do not define AUDIT_ARCH_PPCLE since it is not supported by audit */
+ #define AUDIT_ARCH_PPC64 (EM_PPC64|__AUDIT_ARCH_64BIT)
++#define AUDIT_ARCH_PPC64LE (EM_PPC64|__AUDIT_ARCH_64BIT|__AUDIT_ARCH_LE)
+ #define AUDIT_ARCH_S390 (EM_S390)
+ #define AUDIT_ARCH_S390X (EM_S390|__AUDIT_ARCH_64BIT)
+ #define AUDIT_ARCH_SH (EM_SH)
+diff --git a/include/uapi/linux/hyperv.h b/include/uapi/linux/hyperv.h
+index 0a8e6badb29b..bb1cb73c927a 100644
+--- a/include/uapi/linux/hyperv.h
++++ b/include/uapi/linux/hyperv.h
+@@ -134,6 +134,7 @@ struct hv_start_fcopy {
+
+ struct hv_do_fcopy {
+ struct hv_fcopy_hdr hdr;
++ __u32 pad;
+ __u64 offset;
+ __u32 size;
+ __u8 data[DATA_FRAGMENT];
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 1cd5eef1fcdd..2ab023803945 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -7435,11 +7435,11 @@ SYSCALL_DEFINE5(perf_event_open,
+
+ if (move_group) {
+ synchronize_rcu();
+- perf_install_in_context(ctx, group_leader, event->cpu);
++ perf_install_in_context(ctx, group_leader, group_leader->cpu);
+ get_ctx(ctx);
+ list_for_each_entry(sibling, &group_leader->sibling_list,
+ group_entry) {
+- perf_install_in_context(ctx, sibling, event->cpu);
++ perf_install_in_context(ctx, sibling, sibling->cpu);
+ get_ctx(ctx);
+ }
+ }
+diff --git a/kernel/exit.c b/kernel/exit.c
+index 5d30019ff953..2116aace6c85 100644
+--- a/kernel/exit.c
++++ b/kernel/exit.c
+@@ -1302,9 +1302,15 @@ static int wait_task_continued(struct wait_opts *wo, struct task_struct *p)
+ static int wait_consider_task(struct wait_opts *wo, int ptrace,
+ struct task_struct *p)
+ {
++ /*
++ * We can race with wait_task_zombie() from another thread.
++ * Ensure that EXIT_ZOMBIE -> EXIT_DEAD/EXIT_TRACE transition
++ * can't confuse the checks below.
++ */
++ int exit_state = ACCESS_ONCE(p->exit_state);
+ int ret;
+
+- if (unlikely(p->exit_state == EXIT_DEAD))
++ if (unlikely(exit_state == EXIT_DEAD))
+ return 0;
+
+ ret = eligible_child(wo, p);
+@@ -1325,7 +1331,7 @@ static int wait_consider_task(struct wait_opts *wo, int ptrace,
+ return 0;
+ }
+
+- if (unlikely(p->exit_state == EXIT_TRACE)) {
++ if (unlikely(exit_state == EXIT_TRACE)) {
+ /*
+ * ptrace == 0 means we are the natural parent. In this case
+ * we should clear notask_error, debugger will notify us.
+@@ -1352,7 +1358,7 @@ static int wait_consider_task(struct wait_opts *wo, int ptrace,
+ }
+
+ /* slay zombie? */
+- if (p->exit_state == EXIT_ZOMBIE) {
++ if (exit_state == EXIT_ZOMBIE) {
+ /* we don't reap group leaders with subthreads */
+ if (!delay_group_leader(p)) {
+ /*
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 89e7283015a6..efdca2f08222 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -1623,8 +1623,10 @@ void wake_up_if_idle(int cpu)
+ struct rq *rq = cpu_rq(cpu);
+ unsigned long flags;
+
+- if (!is_idle_task(rq->curr))
+- return;
++ rcu_read_lock();
++
++ if (!is_idle_task(rcu_dereference(rq->curr)))
++ goto out;
+
+ if (set_nr_if_polling(rq->idle)) {
+ trace_sched_wake_idle_without_ipi(cpu);
+@@ -1635,6 +1637,9 @@ void wake_up_if_idle(int cpu)
+ /* Else cpu is not in idle, do nothing here */
+ raw_spin_unlock_irqrestore(&rq->lock, flags);
+ }
++
++out:
++ rcu_read_unlock();
+ }
+
+ bool cpus_share_cache(int this_cpu, int that_cpu)
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index 28fa9d9e9201..40a97c3d8aba 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -575,24 +575,7 @@ void init_dl_task_timer(struct sched_dl_entity *dl_se)
+ static
+ int dl_runtime_exceeded(struct rq *rq, struct sched_dl_entity *dl_se)
+ {
+- int dmiss = dl_time_before(dl_se->deadline, rq_clock(rq));
+- int rorun = dl_se->runtime <= 0;
+-
+- if (!rorun && !dmiss)
+- return 0;
+-
+- /*
+- * If we are beyond our current deadline and we are still
+- * executing, then we have already used some of the runtime of
+- * the next instance. Thus, if we do not account that, we are
+- * stealing bandwidth from the system at each deadline miss!
+- */
+- if (dmiss) {
+- dl_se->runtime = rorun ? dl_se->runtime : 0;
+- dl_se->runtime -= rq_clock(rq) - dl_se->deadline;
+- }
+-
+- return 1;
++ return (dl_se->runtime <= 0);
+ }
+
+ extern bool sched_rt_bandwidth_account(struct rt_rq *rt_rq);
+@@ -831,10 +814,10 @@ enqueue_dl_entity(struct sched_dl_entity *dl_se,
+ * parameters of the task might need updating. Otherwise,
+ * we want a replenishment of its runtime.
+ */
+- if (!dl_se->dl_new && flags & ENQUEUE_REPLENISH)
+- replenish_dl_entity(dl_se, pi_se);
+- else
++ if (dl_se->dl_new || flags & ENQUEUE_WAKEUP)
+ update_dl_entity(dl_se, pi_se);
++ else if (flags & ENQUEUE_REPLENISH)
++ replenish_dl_entity(dl_se, pi_se);
+
+ __enqueue_dl_entity(dl_se);
+ }
+diff --git a/mm/memory.c b/mm/memory.c
+index d5f2ae9c4a23..7f86cf6252bd 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -2150,17 +2150,24 @@ reuse:
+ if (!dirty_page)
+ return ret;
+
+- /*
+- * Yes, Virginia, this is actually required to prevent a race
+- * with clear_page_dirty_for_io() from clearing the page dirty
+- * bit after it clear all dirty ptes, but before a racing
+- * do_wp_page installs a dirty pte.
+- *
+- * do_shared_fault is protected similarly.
+- */
+ if (!page_mkwrite) {
+- wait_on_page_locked(dirty_page);
+- set_page_dirty_balance(dirty_page);
++ struct address_space *mapping;
++ int dirtied;
++
++ lock_page(dirty_page);
++ dirtied = set_page_dirty(dirty_page);
++ VM_BUG_ON_PAGE(PageAnon(dirty_page), dirty_page);
++ mapping = dirty_page->mapping;
++ unlock_page(dirty_page);
++
++ if (dirtied && mapping) {
++ /*
++ * Some device drivers do not set page.mapping
++ * but still dirty their pages
++ */
++ balance_dirty_pages_ratelimited(mapping);
++ }
++
+ /* file_update_time outside page_lock */
+ if (vma->vm_file)
+ file_update_time(vma->vm_file);
+@@ -2606,7 +2613,7 @@ static inline int check_stack_guard_page(struct vm_area_struct *vma, unsigned lo
+ if (prev && prev->vm_end == address)
+ return prev->vm_flags & VM_GROWSDOWN ? 0 : -ENOMEM;
+
+- expand_downwards(vma, address - PAGE_SIZE);
++ return expand_downwards(vma, address - PAGE_SIZE);
+ }
+ if ((vma->vm_flags & VM_GROWSUP) && address + PAGE_SIZE == vma->vm_end) {
+ struct vm_area_struct *next = vma->vm_next;
+@@ -2615,7 +2622,7 @@ static inline int check_stack_guard_page(struct vm_area_struct *vma, unsigned lo
+ if (next && next->vm_start == address + PAGE_SIZE)
+ return next->vm_flags & VM_GROWSUP ? 0 : -ENOMEM;
+
+- expand_upwards(vma, address + PAGE_SIZE);
++ return expand_upwards(vma, address + PAGE_SIZE);
+ }
+ return 0;
+ }
+diff --git a/mm/mmap.c b/mm/mmap.c
+index ae919891a087..1620adbbd77f 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -2099,14 +2099,17 @@ static int acct_stack_growth(struct vm_area_struct *vma, unsigned long size, uns
+ {
+ struct mm_struct *mm = vma->vm_mm;
+ struct rlimit *rlim = current->signal->rlim;
+- unsigned long new_start;
++ unsigned long new_start, actual_size;
+
+ /* address space limit tests */
+ if (!may_expand_vm(mm, grow))
+ return -ENOMEM;
+
+ /* Stack limit test */
+- if (size > ACCESS_ONCE(rlim[RLIMIT_STACK].rlim_cur))
++ actual_size = size;
++ if (size && (vma->vm_flags & (VM_GROWSUP | VM_GROWSDOWN)))
++ actual_size -= PAGE_SIZE;
++ if (actual_size > ACCESS_ONCE(rlim[RLIMIT_STACK].rlim_cur))
+ return -ENOMEM;
+
+ /* mlock limit tests */
+diff --git a/mm/page-writeback.c b/mm/page-writeback.c
+index 19ceae87522d..437174a2aaa3 100644
+--- a/mm/page-writeback.c
++++ b/mm/page-writeback.c
+@@ -1541,16 +1541,6 @@ pause:
+ bdi_start_background_writeback(bdi);
+ }
+
+-void set_page_dirty_balance(struct page *page)
+-{
+- if (set_page_dirty(page)) {
+- struct address_space *mapping = page_mapping(page);
+-
+- if (mapping)
+- balance_dirty_pages_ratelimited(mapping);
+- }
+-}
+-
+ static DEFINE_PER_CPU(int, bdp_ratelimits);
+
+ /*
+@@ -2123,32 +2113,25 @@ EXPORT_SYMBOL(account_page_dirtied);
+ * page dirty in that case, but not all the buffers. This is a "bottom-up"
+ * dirtying, whereas __set_page_dirty_buffers() is a "top-down" dirtying.
+ *
+- * Most callers have locked the page, which pins the address_space in memory.
+- * But zap_pte_range() does not lock the page, however in that case the
+- * mapping is pinned by the vma's ->vm_file reference.
+- *
+- * We take care to handle the case where the page was truncated from the
+- * mapping by re-checking page_mapping() inside tree_lock.
++ * The caller must ensure this doesn't race with truncation. Most will simply
++ * hold the page lock, but e.g. zap_pte_range() calls with the page mapped and
++ * the pte lock held, which also locks out truncation.
+ */
+ int __set_page_dirty_nobuffers(struct page *page)
+ {
+ if (!TestSetPageDirty(page)) {
+ struct address_space *mapping = page_mapping(page);
+- struct address_space *mapping2;
+ unsigned long flags;
+
+ if (!mapping)
+ return 1;
+
+ spin_lock_irqsave(&mapping->tree_lock, flags);
+- mapping2 = page_mapping(page);
+- if (mapping2) { /* Race with truncate? */
+- BUG_ON(mapping2 != mapping);
+- WARN_ON_ONCE(!PagePrivate(page) && !PageUptodate(page));
+- account_page_dirtied(page, mapping);
+- radix_tree_tag_set(&mapping->page_tree,
+- page_index(page), PAGECACHE_TAG_DIRTY);
+- }
++ BUG_ON(page_mapping(page) != mapping);
++ WARN_ON_ONCE(!PagePrivate(page) && !PageUptodate(page));
++ account_page_dirtied(page, mapping);
++ radix_tree_tag_set(&mapping->page_tree, page_index(page),
++ PAGECACHE_TAG_DIRTY);
+ spin_unlock_irqrestore(&mapping->tree_lock, flags);
+ if (mapping->host) {
+ /* !PageAnon && !swapper_space */
+@@ -2305,12 +2288,10 @@ int clear_page_dirty_for_io(struct page *page)
+ /*
+ * We carefully synchronise fault handlers against
+ * installing a dirty pte and marking the page dirty
+- * at this point. We do this by having them hold the
+- * page lock at some point after installing their
+- * pte, but before marking the page dirty.
+- * Pages are always locked coming in here, so we get
+- * the desired exclusion. See mm/memory.c:do_wp_page()
+- * for more comments.
++ * at this point. We do this by having them hold the
++ * page lock while dirtying the page, and pages are
++ * always locked coming in here, so we get the desired
++ * exclusion.
+ */
+ if (TestClearPageDirty(page)) {
+ dec_zone_page_state(page, NR_FILE_DIRTY);
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index dcb47074ae03..e3b0a54a44aa 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -2904,18 +2904,20 @@ static bool prepare_kswapd_sleep(pg_data_t *pgdat, int order, long remaining,
+ return false;
+
+ /*
+- * There is a potential race between when kswapd checks its watermarks
+- * and a process gets throttled. There is also a potential race if
+- * processes get throttled, kswapd wakes, a large process exits therby
+- * balancing the zones that causes kswapd to miss a wakeup. If kswapd
+- * is going to sleep, no process should be sleeping on pfmemalloc_wait
+- * so wake them now if necessary. If necessary, processes will wake
+- * kswapd and get throttled again
++ * The throttled processes are normally woken up in balance_pgdat() as
++ * soon as pfmemalloc_watermark_ok() is true. But there is a potential
++ * race between when kswapd checks the watermarks and a process gets
++ * throttled. There is also a potential race if processes get
++ * throttled, kswapd wakes, a large process exits thereby balancing the
++ * zones, which causes kswapd to exit balance_pgdat() before reaching
++ * the wake up checks. If kswapd is going to sleep, no process should
++ * be sleeping on pfmemalloc_wait, so wake them now if necessary. If
++ * the wake up is premature, processes will wake kswapd and get
++ * throttled again. The difference from wake ups in balance_pgdat() is
++ * that here we are under prepare_to_wait().
+ */
+- if (waitqueue_active(&pgdat->pfmemalloc_wait)) {
+- wake_up(&pgdat->pfmemalloc_wait);
+- return false;
+- }
++ if (waitqueue_active(&pgdat->pfmemalloc_wait))
++ wake_up_all(&pgdat->pfmemalloc_wait);
+
+ return pgdat_balanced(pgdat, order, classzone_idx);
+ }
+diff --git a/net/bluetooth/6lowpan.c b/net/bluetooth/6lowpan.c
+index c2e0d14433df..cfbb39e6fdfd 100644
+--- a/net/bluetooth/6lowpan.c
++++ b/net/bluetooth/6lowpan.c
+@@ -591,17 +591,13 @@ static netdev_tx_t bt_xmit(struct sk_buff *skb, struct net_device *netdev)
+ int err = 0;
+ bdaddr_t addr;
+ u8 addr_type;
+- struct sk_buff *tmpskb;
+
+ /* We must take a copy of the skb before we modify/replace the ipv6
+ * header as the header could be used elsewhere
+ */
+- tmpskb = skb_unshare(skb, GFP_ATOMIC);
+- if (!tmpskb) {
+- kfree_skb(skb);
++ skb = skb_unshare(skb, GFP_ATOMIC);
++ if (!skb)
+ return NET_XMIT_DROP;
+- }
+- skb = tmpskb;
+
+ /* Return values from setup_header()
+ * <0 - error, packet is dropped
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index b9517bd17190..b45eb243a5ee 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -415,7 +415,7 @@ static void le_conn_timeout(struct work_struct *work)
+ * happen with broken hardware or if low duty cycle was used
+ * (which doesn't have a timeout of its own).
+ */
+- if (test_bit(HCI_ADVERTISING, &hdev->dev_flags)) {
++ if (conn->role == HCI_ROLE_SLAVE) {
+ u8 enable = 0x00;
+ hci_send_cmd(hdev, HCI_OP_LE_SET_ADV_ENABLE, sizeof(enable),
+ &enable);
+@@ -517,7 +517,7 @@ int hci_conn_del(struct hci_conn *conn)
+ /* Unacked frames */
+ hdev->acl_cnt += conn->sent;
+ } else if (conn->type == LE_LINK) {
+- cancel_delayed_work_sync(&conn->le_conn_timeout);
++ cancel_delayed_work(&conn->le_conn_timeout);
+
+ if (hdev->le_pkts)
+ hdev->le_cnt += conn->sent;
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 8b0a2a6de419..e5124a9ea6f6 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -205,6 +205,8 @@ static void hci_cc_reset(struct hci_dev *hdev, struct sk_buff *skb)
+ hdev->le_scan_type = LE_SCAN_PASSIVE;
+
+ hdev->ssp_debug_mode = 0;
++
++ hci_bdaddr_list_clear(&hdev->le_white_list);
+ }
+
+ static void hci_cc_write_local_name(struct hci_dev *hdev, struct sk_buff *skb)
+@@ -237,7 +239,8 @@ static void hci_cc_read_local_name(struct hci_dev *hdev, struct sk_buff *skb)
+ if (rp->status)
+ return;
+
+- if (test_bit(HCI_SETUP, &hdev->dev_flags))
++ if (test_bit(HCI_SETUP, &hdev->dev_flags) ||
++ test_bit(HCI_CONFIG, &hdev->dev_flags))
+ memcpy(hdev->dev_name, rp->name, HCI_MAX_NAME_LENGTH);
+ }
+
+@@ -492,7 +495,8 @@ static void hci_cc_read_local_version(struct hci_dev *hdev, struct sk_buff *skb)
+ if (rp->status)
+ return;
+
+- if (test_bit(HCI_SETUP, &hdev->dev_flags)) {
++ if (test_bit(HCI_SETUP, &hdev->dev_flags) ||
++ test_bit(HCI_CONFIG, &hdev->dev_flags)) {
+ hdev->hci_ver = rp->hci_ver;
+ hdev->hci_rev = __le16_to_cpu(rp->hci_rev);
+ hdev->lmp_ver = rp->lmp_ver;
+@@ -511,7 +515,8 @@ static void hci_cc_read_local_commands(struct hci_dev *hdev,
+ if (rp->status)
+ return;
+
+- if (test_bit(HCI_SETUP, &hdev->dev_flags))
++ if (test_bit(HCI_SETUP, &hdev->dev_flags) ||
++ test_bit(HCI_CONFIG, &hdev->dev_flags))
+ memcpy(hdev->commands, rp->commands, sizeof(hdev->commands));
+ }
+
+@@ -2139,7 +2144,12 @@ static void hci_conn_request_evt(struct hci_dev *hdev, struct sk_buff *skb)
+ return;
+ }
+
+- if (!test_bit(HCI_CONNECTABLE, &hdev->dev_flags) &&
++ /* Require HCI_CONNECTABLE or a whitelist entry to accept the
++ * connection. These features are only touched through mgmt so
++ * only do the checks if HCI_MGMT is set.
++ */
++ if (test_bit(HCI_MGMT, &hdev->dev_flags) &&
++ !test_bit(HCI_CONNECTABLE, &hdev->dev_flags) &&
+ !hci_bdaddr_list_lookup(&hdev->whitelist, &ev->bdaddr,
+ BDADDR_BREDR)) {
+ hci_reject_conn(hdev, &ev->bdaddr);
+diff --git a/net/mac80211/key.c b/net/mac80211/key.c
+index d66c6443164c..94368404744b 100644
+--- a/net/mac80211/key.c
++++ b/net/mac80211/key.c
+@@ -131,7 +131,9 @@ static int ieee80211_key_enable_hw_accel(struct ieee80211_key *key)
+ if (!ret) {
+ key->flags |= KEY_FLAG_UPLOADED_TO_HARDWARE;
+
+- if (!(key->conf.flags & IEEE80211_KEY_FLAG_GENERATE_MMIC))
++ if (!((key->conf.flags & IEEE80211_KEY_FLAG_GENERATE_MMIC) ||
++ (key->conf.flags & IEEE80211_KEY_FLAG_GENERATE_IV) ||
++ (key->conf.flags & IEEE80211_KEY_FLAG_PUT_IV_SPACE)))
+ sdata->crypto_tx_tailroom_needed_cnt--;
+
+ WARN_ON((key->conf.flags & IEEE80211_KEY_FLAG_PUT_IV_SPACE) &&
+@@ -179,7 +181,9 @@ static void ieee80211_key_disable_hw_accel(struct ieee80211_key *key)
+ sta = key->sta;
+ sdata = key->sdata;
+
+- if (!(key->conf.flags & IEEE80211_KEY_FLAG_GENERATE_MMIC))
++ if (!((key->conf.flags & IEEE80211_KEY_FLAG_GENERATE_MMIC) ||
++ (key->conf.flags & IEEE80211_KEY_FLAG_GENERATE_IV) ||
++ (key->conf.flags & IEEE80211_KEY_FLAG_PUT_IV_SPACE)))
+ increment_tailroom_need_count(sdata);
+
+ ret = drv_set_key(key->local, DISABLE_KEY, sdata,
+@@ -875,7 +879,9 @@ void ieee80211_remove_key(struct ieee80211_key_conf *keyconf)
+ if (key->flags & KEY_FLAG_UPLOADED_TO_HARDWARE) {
+ key->flags &= ~KEY_FLAG_UPLOADED_TO_HARDWARE;
+
+- if (!(key->conf.flags & IEEE80211_KEY_FLAG_GENERATE_MMIC))
++ if (!((key->conf.flags & IEEE80211_KEY_FLAG_GENERATE_MMIC) ||
++ (key->conf.flags & IEEE80211_KEY_FLAG_GENERATE_IV) ||
++ (key->conf.flags & IEEE80211_KEY_FLAG_PUT_IV_SPACE)))
+ increment_tailroom_need_count(key->sdata);
+ }
+
+diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c
+index 290af97bf6f9..2a81e77c4477 100644
+--- a/net/sunrpc/xdr.c
++++ b/net/sunrpc/xdr.c
+@@ -606,7 +606,7 @@ void xdr_truncate_encode(struct xdr_stream *xdr, size_t len)
+ struct kvec *head = buf->head;
+ struct kvec *tail = buf->tail;
+ int fraglen;
+- int new, old;
++ int new;
+
+ if (len > buf->len) {
+ WARN_ON_ONCE(1);
+@@ -628,8 +628,8 @@ void xdr_truncate_encode(struct xdr_stream *xdr, size_t len)
+ buf->len -= fraglen;
+
+ new = buf->page_base + buf->page_len;
+- old = new + fraglen;
+- xdr->page_ptr -= (old >> PAGE_SHIFT) - (new >> PAGE_SHIFT);
++
++ xdr->page_ptr = buf->pages + (new >> PAGE_SHIFT);
+
+ if (buf->page_len && buf->len == len) {
+ xdr->p = page_address(*xdr->page_ptr);
+diff --git a/scripts/kernel-doc b/scripts/kernel-doc
+index 70bea942b413..9922e66883a5 100755
+--- a/scripts/kernel-doc
++++ b/scripts/kernel-doc
+@@ -1753,7 +1753,7 @@ sub dump_struct($$) {
+ # strip kmemcheck_bitfield_{begin,end}.*;
+ $members =~ s/kmemcheck_bitfield_.*?;//gos;
+ # strip attributes
+- $members =~ s/__aligned\s*\(.+\)//gos;
++ $members =~ s/__aligned\s*\([^;]*\)//gos;
+
+ create_parameterlist($members, ';', $file);
+ check_sections($file, $declaration_name, "struct", $sectcheck, $struct_actual, $nested);
+diff --git a/sound/firewire/fireworks/fireworks_transaction.c b/sound/firewire/fireworks/fireworks_transaction.c
+index 255dabc6fc33..2a85e4209f0b 100644
+--- a/sound/firewire/fireworks/fireworks_transaction.c
++++ b/sound/firewire/fireworks/fireworks_transaction.c
+@@ -124,7 +124,7 @@ copy_resp_to_buf(struct snd_efw *efw, void *data, size_t length, int *rcode)
+ spin_lock_irq(&efw->lock);
+
+ t = (struct snd_efw_transaction *)data;
+- length = min_t(size_t, t->length * sizeof(t->length), length);
++ length = min_t(size_t, be32_to_cpu(t->length) * sizeof(u32), length);
+
+ if (efw->push_ptr < efw->pull_ptr)
+ capacity = (unsigned int)(efw->pull_ptr - efw->push_ptr);
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index 15e0089492f7..e708368d208f 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -338,8 +338,10 @@ int snd_hda_get_sub_nodes(struct hda_codec *codec, hda_nid_t nid,
+ unsigned int parm;
+
+ parm = snd_hda_param_read(codec, nid, AC_PAR_NODE_COUNT);
+- if (parm == -1)
++ if (parm == -1) {
++ *start_id = 0;
+ return 0;
++ }
+ *start_id = (parm >> 16) & 0x7fff;
+ return (int)(parm & 0x7fff);
+ }
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 9dc9cf8c90e9..edb6e6124a23 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -3351,6 +3351,7 @@ static const struct hda_codec_preset snd_hda_preset_hdmi[] = {
+ { .id = 0x10de0067, .name = "MCP67 HDMI", .patch = patch_nvhdmi_2ch },
+ { .id = 0x10de0070, .name = "GPU 70 HDMI/DP", .patch = patch_nvhdmi },
+ { .id = 0x10de0071, .name = "GPU 71 HDMI/DP", .patch = patch_nvhdmi },
++{ .id = 0x10de0072, .name = "GPU 72 HDMI/DP", .patch = patch_nvhdmi },
+ { .id = 0x10de8001, .name = "MCP73 HDMI", .patch = patch_nvhdmi_2ch },
+ { .id = 0x11069f80, .name = "VX900 HDMI/DP", .patch = patch_via_hdmi },
+ { .id = 0x11069f81, .name = "VX900 HDMI/DP", .patch = patch_via_hdmi },
+@@ -3410,6 +3411,7 @@ MODULE_ALIAS("snd-hda-codec-id:10de0060");
+ MODULE_ALIAS("snd-hda-codec-id:10de0067");
+ MODULE_ALIAS("snd-hda-codec-id:10de0070");
+ MODULE_ALIAS("snd-hda-codec-id:10de0071");
++MODULE_ALIAS("snd-hda-codec-id:10de0072");
+ MODULE_ALIAS("snd-hda-codec-id:10de8001");
+ MODULE_ALIAS("snd-hda-codec-id:11069f80");
+ MODULE_ALIAS("snd-hda-codec-id:11069f81");
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index c5ad83e4e0c7..c879c3709eae 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -319,10 +319,12 @@ static void alc_fill_eapd_coef(struct hda_codec *codec)
+ break;
+ case 0x10ec0233:
+ case 0x10ec0255:
++ case 0x10ec0256:
+ case 0x10ec0282:
+ case 0x10ec0283:
+ case 0x10ec0286:
+ case 0x10ec0288:
++ case 0x10ec0298:
+ alc_update_coef_idx(codec, 0x10, 1<<9, 0);
+ break;
+ case 0x10ec0285:
+@@ -2657,7 +2659,9 @@ enum {
+ ALC269_TYPE_ALC284,
+ ALC269_TYPE_ALC285,
+ ALC269_TYPE_ALC286,
++ ALC269_TYPE_ALC298,
+ ALC269_TYPE_ALC255,
++ ALC269_TYPE_ALC256,
+ };
+
+ /*
+@@ -2684,7 +2688,9 @@ static int alc269_parse_auto_config(struct hda_codec *codec)
+ case ALC269_TYPE_ALC282:
+ case ALC269_TYPE_ALC283:
+ case ALC269_TYPE_ALC286:
++ case ALC269_TYPE_ALC298:
+ case ALC269_TYPE_ALC255:
++ case ALC269_TYPE_ALC256:
+ ssids = alc269_ssids;
+ break;
+ default:
+@@ -4790,6 +4796,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1028, 0x0638, "Dell Inspiron 5439", ALC290_FIXUP_MONO_SPEAKERS_HSJACK),
+ SND_PCI_QUIRK(0x1028, 0x064a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x064b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1028, 0x06c7, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x06d9, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x06da, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+@@ -5377,9 +5384,15 @@ static int patch_alc269(struct hda_codec *codec)
+ spec->codec_variant = ALC269_TYPE_ALC286;
+ spec->shutup = alc286_shutup;
+ break;
++ case 0x10ec0298:
++ spec->codec_variant = ALC269_TYPE_ALC298;
++ break;
+ case 0x10ec0255:
+ spec->codec_variant = ALC269_TYPE_ALC255;
+ break;
++ case 0x10ec0256:
++ spec->codec_variant = ALC269_TYPE_ALC256;
++ break;
+ }
+
+ if (snd_hda_codec_read(codec, 0x51, 0, AC_VERB_PARAMETERS, 0) == 0x10ec5505) {
+@@ -6315,6 +6328,7 @@ static const struct hda_codec_preset snd_hda_preset_realtek[] = {
+ { .id = 0x10ec0233, .name = "ALC233", .patch = patch_alc269 },
+ { .id = 0x10ec0235, .name = "ALC233", .patch = patch_alc269 },
+ { .id = 0x10ec0255, .name = "ALC255", .patch = patch_alc269 },
++ { .id = 0x10ec0256, .name = "ALC256", .patch = patch_alc269 },
+ { .id = 0x10ec0260, .name = "ALC260", .patch = patch_alc260 },
+ { .id = 0x10ec0262, .name = "ALC262", .patch = patch_alc262 },
+ { .id = 0x10ec0267, .name = "ALC267", .patch = patch_alc268 },
+@@ -6334,6 +6348,7 @@ static const struct hda_codec_preset snd_hda_preset_realtek[] = {
+ { .id = 0x10ec0290, .name = "ALC290", .patch = patch_alc269 },
+ { .id = 0x10ec0292, .name = "ALC292", .patch = patch_alc269 },
+ { .id = 0x10ec0293, .name = "ALC293", .patch = patch_alc269 },
++ { .id = 0x10ec0298, .name = "ALC298", .patch = patch_alc269 },
+ { .id = 0x10ec0861, .rev = 0x100340, .name = "ALC660",
+ .patch = patch_alc861 },
+ { .id = 0x10ec0660, .name = "ALC660-VD", .patch = patch_alc861vd },
+diff --git a/sound/pci/hda/patch_sigmatel.c b/sound/pci/hda/patch_sigmatel.c
+index 4f6413e01c13..605d14003d25 100644
+--- a/sound/pci/hda/patch_sigmatel.c
++++ b/sound/pci/hda/patch_sigmatel.c
+@@ -568,9 +568,9 @@ static void stac_store_hints(struct hda_codec *codec)
+ spec->gpio_mask;
+ }
+ if (get_int_hint(codec, "gpio_dir", &spec->gpio_dir))
+- spec->gpio_mask &= spec->gpio_mask;
+- if (get_int_hint(codec, "gpio_data", &spec->gpio_data))
+ spec->gpio_dir &= spec->gpio_mask;
++ if (get_int_hint(codec, "gpio_data", &spec->gpio_data))
++ spec->gpio_data &= spec->gpio_mask;
+ if (get_int_hint(codec, "eapd_mask", &spec->eapd_mask))
+ spec->eapd_mask &= spec->gpio_mask;
+ if (get_int_hint(codec, "gpio_mute", &spec->gpio_mute))
+diff --git a/sound/soc/codecs/max98090.c b/sound/soc/codecs/max98090.c
+index 1229554f1464..d492d6ea656e 100644
+--- a/sound/soc/codecs/max98090.c
++++ b/sound/soc/codecs/max98090.c
+@@ -1395,8 +1395,8 @@ static const struct snd_soc_dapm_route max98090_dapm_routes[] = {
+ {"STENL Mux", "Sidetone Left", "DMICL"},
+ {"STENR Mux", "Sidetone Right", "ADCR"},
+ {"STENR Mux", "Sidetone Right", "DMICR"},
+- {"DACL", "NULL", "STENL Mux"},
+- {"DACR", "NULL", "STENL Mux"},
++ {"DACL", NULL, "STENL Mux"},
++ {"DACR", NULL, "STENL Mux"},
+
+ {"AIFINL", NULL, "SHDN"},
+ {"AIFINR", NULL, "SHDN"},
+diff --git a/sound/soc/codecs/pcm512x-i2c.c b/sound/soc/codecs/pcm512x-i2c.c
+index 4d62230bd378..d0547fa275fc 100644
+--- a/sound/soc/codecs/pcm512x-i2c.c
++++ b/sound/soc/codecs/pcm512x-i2c.c
+@@ -24,8 +24,13 @@ static int pcm512x_i2c_probe(struct i2c_client *i2c,
+ const struct i2c_device_id *id)
+ {
+ struct regmap *regmap;
++ struct regmap_config config = pcm512x_regmap;
+
+- regmap = devm_regmap_init_i2c(i2c, &pcm512x_regmap);
++ /* msb needs to be set to enable auto-increment of addresses */
++ config.read_flag_mask = 0x80;
++ config.write_flag_mask = 0x80;
++
++ regmap = devm_regmap_init_i2c(i2c, &config);
+ if (IS_ERR(regmap))
+ return PTR_ERR(regmap);
+
+diff --git a/sound/soc/codecs/sigmadsp.c b/sound/soc/codecs/sigmadsp.c
+index f2de7e049bc6..81a38dd9af1f 100644
+--- a/sound/soc/codecs/sigmadsp.c
++++ b/sound/soc/codecs/sigmadsp.c
+@@ -159,6 +159,13 @@ int _process_sigma_firmware(struct device *dev,
+ goto done;
+ }
+
++ if (ssfw_head->version != 1) {
++ dev_err(dev,
++ "Failed to load firmware: Invalid version %d. Supported firmware versions: 1\n",
++ ssfw_head->version);
++ goto done;
++ }
++
+ crc = crc32(0, fw->data + sizeof(*ssfw_head),
+ fw->size - sizeof(*ssfw_head));
+ pr_debug("%s: crc=%x\n", __func__, crc);
+diff --git a/sound/soc/codecs/tlv320aic31xx.c b/sound/soc/codecs/tlv320aic31xx.c
+index 145fe5b253d4..93de5dd0a7b9 100644
+--- a/sound/soc/codecs/tlv320aic31xx.c
++++ b/sound/soc/codecs/tlv320aic31xx.c
+@@ -911,12 +911,13 @@ static int aic31xx_set_dai_sysclk(struct snd_soc_dai *codec_dai,
+ }
+ aic31xx->p_div = i;
+
+- for (i = 0; aic31xx_divs[i].mclk_p != freq/aic31xx->p_div; i++) {
+- if (i == ARRAY_SIZE(aic31xx_divs)) {
+- dev_err(aic31xx->dev, "%s: Unsupported frequency %d\n",
+- __func__, freq);
+- return -EINVAL;
+- }
++ for (i = 0; i < ARRAY_SIZE(aic31xx_divs) &&
++ aic31xx_divs[i].mclk_p != freq/aic31xx->p_div; i++)
++ ;
++ if (i == ARRAY_SIZE(aic31xx_divs)) {
++ dev_err(aic31xx->dev, "%s: Unsupported frequency %d\n",
++ __func__, freq);
++ return -EINVAL;
+ }
+
+ /* set clock on MCLK, BCLK, or GPIO1 as PLL input */
+diff --git a/sound/soc/dwc/designware_i2s.c b/sound/soc/dwc/designware_i2s.c
+index e961388e6e9c..10e1b8ca42ed 100644
+--- a/sound/soc/dwc/designware_i2s.c
++++ b/sound/soc/dwc/designware_i2s.c
+@@ -263,6 +263,19 @@ static void dw_i2s_shutdown(struct snd_pcm_substream *substream,
+ snd_soc_dai_set_dma_data(dai, substream, NULL);
+ }
+
++static int dw_i2s_prepare(struct snd_pcm_substream *substream,
++ struct snd_soc_dai *dai)
++{
++ struct dw_i2s_dev *dev = snd_soc_dai_get_drvdata(dai);
++
++ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
++ i2s_write_reg(dev->i2s_base, TXFFR, 1);
++ else
++ i2s_write_reg(dev->i2s_base, RXFFR, 1);
++
++ return 0;
++}
++
+ static int dw_i2s_trigger(struct snd_pcm_substream *substream,
+ int cmd, struct snd_soc_dai *dai)
+ {
+@@ -294,6 +307,7 @@ static struct snd_soc_dai_ops dw_i2s_dai_ops = {
+ .startup = dw_i2s_startup,
+ .shutdown = dw_i2s_shutdown,
+ .hw_params = dw_i2s_hw_params,
++ .prepare = dw_i2s_prepare,
+ .trigger = dw_i2s_trigger,
+ };
+
+diff --git a/sound/soc/fsl/eukrea-tlv320.c b/sound/soc/fsl/eukrea-tlv320.c
+index eb093d5b85c4..54790461f39e 100644
+--- a/sound/soc/fsl/eukrea-tlv320.c
++++ b/sound/soc/fsl/eukrea-tlv320.c
+@@ -105,7 +105,7 @@ static int eukrea_tlv320_probe(struct platform_device *pdev)
+ int ret;
+ int int_port = 0, ext_port;
+ struct device_node *np = pdev->dev.of_node;
+- struct device_node *ssi_np, *codec_np;
++ struct device_node *ssi_np = NULL, *codec_np = NULL;
+
+ eukrea_tlv320.dev = &pdev->dev;
+ if (np) {
+diff --git a/sound/usb/caiaq/audio.c b/sound/usb/caiaq/audio.c
+index 272844746135..327f8642ca80 100644
+--- a/sound/usb/caiaq/audio.c
++++ b/sound/usb/caiaq/audio.c
+@@ -816,7 +816,7 @@ int snd_usb_caiaq_audio_init(struct snd_usb_caiaqdev *cdev)
+ return -EINVAL;
+ }
+
+- if (cdev->n_streams < 2) {
++ if (cdev->n_streams < 1) {
+ dev_err(dev, "bogus number of streams: %d\n", cdev->n_streams);
+ return -EINVAL;
+ }
+diff --git a/sound/usb/mixer_maps.c b/sound/usb/mixer_maps.c
+index d1d72ff50347..621bc9ebb55e 100644
+--- a/sound/usb/mixer_maps.c
++++ b/sound/usb/mixer_maps.c
+@@ -328,8 +328,11 @@ static struct usbmix_name_map gamecom780_map[] = {
+ {}
+ };
+
+-static const struct usbmix_name_map kef_x300a_map[] = {
+- { 10, NULL }, /* firmware locks up (?) when we try to access this FU */
++/* some (all?) SCMS USB3318 devices are affected by a firmware lock up
++ * when anything attempts to access FU 10 (control)
++ */
++static const struct usbmix_name_map scms_usb3318_map[] = {
++ { 10, NULL },
+ { 0 }
+ };
+
+@@ -425,8 +428,14 @@ static struct usbmix_ctl_map usbmix_ctl_maps[] = {
+ .map = ebox44_map,
+ },
+ {
++ /* KEF X300A */
+ .id = USB_ID(0x27ac, 0x1000),
+- .map = kef_x300a_map,
++ .map = scms_usb3318_map,
++ },
++ {
++ /* Arcam rPAC */
++ .id = USB_ID(0x25c4, 0x0003),
++ .map = scms_usb3318_map,
+ },
+ { 0 } /* terminator */
+ };
+diff --git a/tools/perf/util/event.h b/tools/perf/util/event.h
+index 5699e7e2a790..50a7b115698c 100644
+--- a/tools/perf/util/event.h
++++ b/tools/perf/util/event.h
+@@ -214,6 +214,7 @@ struct events_stats {
+ u32 nr_invalid_chains;
+ u32 nr_unknown_id;
+ u32 nr_unprocessable_samples;
++ u32 nr_unordered_events;
+ };
+
+ struct attr_event {
+diff --git a/tools/perf/util/session.c b/tools/perf/util/session.c
+index 6702ac28754b..80dbba095f30 100644
+--- a/tools/perf/util/session.c
++++ b/tools/perf/util/session.c
+@@ -521,15 +521,11 @@ int perf_session_queue_event(struct perf_session *s, union perf_event *event,
+ return -ETIME;
+
+ if (timestamp < oe->last_flush) {
+- WARN_ONCE(1, "Timestamp below last timeslice flush\n");
+-
+- pr_oe_time(timestamp, "out of order event");
++ pr_oe_time(timestamp, "out of order event\n");
+ pr_oe_time(oe->last_flush, "last flush, last_flush_type %d\n",
+ oe->last_flush_type);
+
+- /* We could get out of order messages after forced flush. */
+- if (oe->last_flush_type != OE_FLUSH__HALF)
+- return -EINVAL;
++ s->stats.nr_unordered_events++;
+ }
+
+ new = ordered_events__new(oe, timestamp, event);
+@@ -1057,6 +1053,9 @@ static void perf_session__warn_about_errors(const struct perf_session *session,
+ "Do you have a KVM guest running and not using 'perf kvm'?\n",
+ session->stats.nr_unprocessable_samples);
+ }
++
++ if (session->stats.nr_unordered_events != 0)
++ ui__warning("%u out of order events recorded.\n", session->stats.nr_unordered_events);
+ }
+
+ volatile int session_done;
+diff --git a/tools/testing/ktest/ktest.pl b/tools/testing/ktest/ktest.pl
+index bf1398180785..dcb1e9ac949c 100755
+--- a/tools/testing/ktest/ktest.pl
++++ b/tools/testing/ktest/ktest.pl
+@@ -3571,7 +3571,9 @@ sub test_this_config {
+ undef %configs;
+ assign_configs \%configs, $output_config;
+
+- return $config if (!defined($configs{$config}));
++ if (!defined($configs{$config}) || $configs{$config} =~ /^#/) {
++ return $config;
++ }
+
+ doprint "disabling config $config did not change .config\n";
+
diff --git a/2800_nouveau-spin-is-locked.patch b/2800_nouveau-spin-is-locked.patch
deleted file mode 100644
index 4cd72c9..0000000
--- a/2800_nouveau-spin-is-locked.patch
+++ /dev/null
@@ -1,31 +0,0 @@
---- a/drivers/gpu/drm/nouveau/core/core/event.c 2015-01-12 14:01:30.999164123 -0500
-+++ b/drivers/gpu/drm/nouveau/core/core/event.c 2015-01-12 14:03:11.229163330 -0500
-@@ -26,7 +26,7 @@
- void
- nvkm_event_put(struct nvkm_event *event, u32 types, int index)
- {
-- BUG_ON(!spin_is_locked(&event->refs_lock));
-+ assert_spin_locked(&event->refs_lock);
- while (types) {
- int type = __ffs(types); types &= ~(1 << type);
- if (--event->refs[index * event->types_nr + type] == 0) {
-@@ -39,7 +39,7 @@ nvkm_event_put(struct nvkm_event *event,
- void
- nvkm_event_get(struct nvkm_event *event, u32 types, int index)
- {
-- BUG_ON(!spin_is_locked(&event->refs_lock));
-+ assert_spin_locked(&event->refs_lock);
- while (types) {
- int type = __ffs(types); types &= ~(1 << type);
- if (++event->refs[index * event->types_nr + type] == 1) {
---- a/drivers/gpu/drm/nouveau/core/core/notify.c 2015-01-12 14:01:38.299164065 -0500
-+++ b/drivers/gpu/drm/nouveau/core/core/notify.c 2015-01-12 14:03:45.739163057 -0500
-@@ -98,7 +98,7 @@ nvkm_notify_send(struct nvkm_notify *not
- struct nvkm_event *event = notify->event;
- unsigned long flags;
-
-- BUG_ON(!spin_is_locked(&event->list_lock));
-+ assert_spin_locked(&event->list_lock);
- BUG_ON(size != notify->size);
-
- spin_lock_irqsave(&event->refs_lock, flags);
2015-01-16 18:31 Mike Pagano [this message]
-- strict thread matches above, loose matches on Subject: below --
2017-04-18 10:20 [gentoo-commits] proj/linux-patches:3.18 commit in: / Mike Pagano
2017-03-02 16:33 Mike Pagano
2017-03-02 16:33 Mike Pagano
2017-02-08 11:16 Mike Pagano
2017-01-18 21:46 Mike Pagano
2017-01-18 21:38 Mike Pagano
2016-12-09 0:21 Mike Pagano
2016-11-30 15:42 Mike Pagano
2016-11-25 22:57 Mike Pagano
2016-11-01 9:36 Alice Ferrazzi
2016-10-12 19:51 Mike Pagano
2016-09-18 12:43 Mike Pagano
2016-08-22 23:27 Mike Pagano
2016-08-10 12:54 Mike Pagano
2016-07-31 15:27 Mike Pagano
2016-07-15 14:46 Mike Pagano
2016-07-13 23:28 Mike Pagano
2016-07-01 20:50 Mike Pagano
2016-06-23 11:44 Mike Pagano
2016-06-08 11:20 Mike Pagano
2016-05-24 12:03 Mike Pagano
2016-05-12 0:10 Mike Pagano
2016-04-20 11:21 Mike Pagano
2016-04-06 11:21 Mike Pagano
2016-03-17 22:50 Mike Pagano
2016-03-05 21:08 Mike Pagano
2016-02-16 16:34 Mike Pagano
2016-01-31 15:36 Mike Pagano
2016-01-20 14:35 Mike Pagano
2015-12-17 17:12 Mike Pagano
2015-11-03 18:39 Mike Pagano
2015-10-30 18:38 Mike Pagano
2015-10-03 17:45 Mike Pagano
2015-09-10 0:15 Mike Pagano
2015-08-21 12:57 Mike Pagano
2015-07-30 12:43 Mike Pagano
2015-07-22 10:13 Mike Pagano
2015-07-10 23:44 Mike Pagano
2015-06-19 15:22 Mike Pagano
2015-05-22 0:48 Mike Pagano
2015-05-13 14:27 Mike Pagano
2015-04-29 17:31 Mike Pagano
2015-04-27 17:18 Mike Pagano
2015-04-05 0:05 Mike Pagano
2015-03-28 19:46 Mike Pagano
2015-03-24 23:19 Mike Pagano
2015-03-21 20:02 Mike Pagano
2015-03-07 14:28 Mike Pagano
2015-02-27 13:26 Mike Pagano
2015-02-14 20:32 Mike Pagano
2015-02-13 1:35 Mike Pagano
2015-02-11 14:35 Mike Pagano
2015-02-07 1:07 Mike Pagano
2015-01-30 11:01 Mike Pagano
2015-01-28 23:56 Anthony G. Basile
2015-01-28 23:55 Anthony G. Basile
2015-01-28 22:18 Anthony G. Basile
2015-01-16 0:28 Mike Pagano
2015-01-09 13:39 Mike Pagano
2015-01-05 14:39 Mike Pagano
2015-01-04 19:03 Mike Pagano
2015-01-02 19:06 Mike Pagano
2015-01-01 14:15 Mike Pagano
2014-12-16 19:44 Mike Pagano
2014-12-09 20:49 Mike Pagano
2014-11-26 0:36 Mike Pagano