From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by finch.gentoo.org (Postfix) with ESMTPS id 67519138334 for ; Tue, 12 Feb 2019 20:52:49 +0000 (UTC)
Received: from pigeon.gentoo.org (localhost [127.0.0.1]) by pigeon.gentoo.org (Postfix) with SMTP id 98275E0893; Tue, 12 Feb 2019 20:52:48 +0000 (UTC)
Received: from smtp.gentoo.org (smtp.gentoo.org [140.211.166.183]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by pigeon.gentoo.org (Postfix) with ESMTPS id 4242BE0893 for ; Tue, 12 Feb 2019 20:52:48 +0000 (UTC)
Received: from oystercatcher.gentoo.org (unknown [IPv6:2a01:4f8:202:4333:225:90ff:fed9:fc84]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.gentoo.org (Postfix) with ESMTPS id 8009F33FEFB for ; Tue, 12 Feb 2019 20:52:46 +0000 (UTC)
Received: from localhost.localdomain (localhost [IPv6:::1]) by oystercatcher.gentoo.org (Postfix) with ESMTP id 291234E3 for ; Tue, 12 Feb 2019 20:52:45 +0000 (UTC)
From: "Mike Pagano" 
To: gentoo-commits@lists.gentoo.org
Content-Transfer-Encoding: 8bit
Content-type: text/plain; charset=UTF-8
Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano" 
Message-ID: <1550004733.c566bab0d5862845eb159c00bf3151d058839166.mpagano@gentoo>
Subject: [gentoo-commits] proj/linux-patches:4.14 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1098_linux-4.14.99.patch
X-VCS-Directories: /
X-VCS-Committer: mpagano
X-VCS-Committer-Name: Mike Pagano
X-VCS-Revision: c566bab0d5862845eb159c00bf3151d058839166
X-VCS-Branch: 4.14
Date: Tue, 12 Feb 2019 20:52:45 +0000 (UTC)
Precedence: bulk
List-Post: 
List-Help: 
List-Unsubscribe: 
List-Subscribe: 
List-Id: Gentoo Linux mail 
X-BeenThere: gentoo-commits@lists.gentoo.org
X-Auto-Response-Suppress: DR,
 RN, NRN, OOF, AutoReply
X-Archives-Salt: 1f11ac88-5c4e-498b-969c-38ffb456a2b9
X-Archives-Hash: 15b14e9a705c27b35de535f09174beac

commit:     c566bab0d5862845eb159c00bf3151d058839166
Author:     Mike Pagano gentoo org>
AuthorDate: Tue Feb 12 20:52:13 2019 +0000
Commit:     Mike Pagano gentoo org>
CommitDate: Tue Feb 12 20:52:13 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c566bab0

proj/linux-patches: Linux patch 4.14.99

Signed-off-by: Mike Pagano gentoo.org>

 0000_README | 4 +
 1098_linux-4.14.99.patch | 6309 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 6313 insertions(+)

diff --git a/0000_README b/0000_README
index 01b8ba9..6e5e8ac 100644
--- a/0000_README
+++ b/0000_README
@@ -435,6 +435,10 @@ Patch: 1097_4.14.98.patch
 From: http://www.kernel.org
 Desc: Linux 4.14.98
 
+Patch: 1098_4.14.99.patch
+From: http://www.kernel.org
+Desc: Linux 4.14.99
+
 Patch: 1500_XATTR_USER_PREFIX.patch
 From: https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc: Support for namespace user.pax.* on tmpfs.

diff --git a/1098_linux-4.14.99.patch b/1098_linux-4.14.99.patch
new file mode 100644
index 0000000..d261bf3
--- /dev/null
+++ b/1098_linux-4.14.99.patch
@@ -0,0 +1,6309 @@
+diff --git a/Makefile b/Makefile
+index 7f561ef954f2..3b10c8b542e2 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 14
+-SUBLEVEL = 98
++SUBLEVEL = 99
+ EXTRAVERSION =
+ NAME = Petit Gorille
+
+diff --git a/arch/arm/boot/dts/gemini-dlink-dir-685.dts b/arch/arm/boot/dts/gemini-dlink-dir-685.dts
+index e75e2d44371c..d6f752ab07bb 100644
+--- a/arch/arm/boot/dts/gemini-dlink-dir-685.dts
++++ b/arch/arm/boot/dts/gemini-dlink-dir-685.dts
+@@ -128,20 +128,16 @@
+ read-only;
+ };
+ /*
+- * Between the boot loader and the rootfs is the kernel
+- * in a custom Storlink format flashed from the boot
+- * menu. The rootfs is in squashfs format.
++ * This firmware image contains the kernel catenated ++ * with the squashfs root filesystem. For some reason ++ * this is called "upgrade" on the vendor system. + */ +- partition@1800c0 { +- label = "rootfs"; +- reg = <0x001800c0 0x01dbff40>; +- read-only; +- }; +- partition@1f40000 { ++ partition@40000 { + label = "upgrade"; +- reg = <0x01f40000 0x00040000>; ++ reg = <0x00040000 0x01f40000>; + read-only; + }; ++ /* RGDB, Residental Gateway Database? */ + partition@1f80000 { + label = "rgdb"; + reg = <0x01f80000 0x00040000>; +diff --git a/arch/arm/boot/dts/mmp2.dtsi b/arch/arm/boot/dts/mmp2.dtsi +index 766bbb8495b6..47e5b63339d1 100644 +--- a/arch/arm/boot/dts/mmp2.dtsi ++++ b/arch/arm/boot/dts/mmp2.dtsi +@@ -220,12 +220,15 @@ + status = "disabled"; + }; + +- twsi2: i2c@d4025000 { ++ twsi2: i2c@d4031000 { + compatible = "mrvl,mmp-twsi"; +- reg = <0xd4025000 0x1000>; +- interrupts = <58>; ++ reg = <0xd4031000 0x1000>; ++ interrupt-parent = <&intcmux17>; ++ interrupts = <0>; + clocks = <&soc_clocks MMP2_CLK_TWSI1>; + resets = <&soc_clocks MMP2_CLK_TWSI1>; ++ #address-cells = <1>; ++ #size-cells = <0>; + status = "disabled"; + }; + +diff --git a/arch/arm/boot/dts/omap4-sdp.dts b/arch/arm/boot/dts/omap4-sdp.dts +index 280d92d42bf1..bfad6aadfe88 100644 +--- a/arch/arm/boot/dts/omap4-sdp.dts ++++ b/arch/arm/boot/dts/omap4-sdp.dts +@@ -33,6 +33,7 @@ + gpio = <&gpio2 16 GPIO_ACTIVE_HIGH>; /* gpio line 48 */ + enable-active-high; + regulator-boot-on; ++ startup-delay-us = <25000>; + }; + + vbat: fixedregulator-vbat { +diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c +index e61af0600133..5e31c62127a0 100644 +--- a/arch/arm/kernel/smp.c ++++ b/arch/arm/kernel/smp.c +@@ -691,6 +691,21 @@ void smp_send_stop(void) + pr_warn("SMP: failed to stop secondary CPUs\n"); + } + ++/* In case panic() and panic() called at the same time on CPU1 and CPU2, ++ * and CPU 1 calls panic_smp_self_stop() before crash_smp_send_stop() ++ * CPU1 can't receive the ipi irqs from CPU2, CPU1 
will be always online, ++ * kdump fails. So split out the panic_smp_self_stop() and add ++ * set_cpu_online(smp_processor_id(), false). ++ */ ++void panic_smp_self_stop(void) ++{ ++ pr_debug("CPU %u will stop doing anything useful since another CPU has paniced\n", ++ smp_processor_id()); ++ set_cpu_online(smp_processor_id(), false); ++ while (1) ++ cpu_relax(); ++} ++ + /* + * not supported here + */ +diff --git a/arch/arm/mach-omap2/omap_hwmod.c b/arch/arm/mach-omap2/omap_hwmod.c +index 2dbd63239c54..45c8f2ef4e23 100644 +--- a/arch/arm/mach-omap2/omap_hwmod.c ++++ b/arch/arm/mach-omap2/omap_hwmod.c +@@ -2497,7 +2497,7 @@ static int __init _init(struct omap_hwmod *oh, void *data) + * a stub; implementing this properly requires iclk autoidle usecounting in + * the clock code. No return value. + */ +-static void __init _setup_iclk_autoidle(struct omap_hwmod *oh) ++static void _setup_iclk_autoidle(struct omap_hwmod *oh) + { + struct omap_hwmod_ocp_if *os; + +@@ -2528,7 +2528,7 @@ static void __init _setup_iclk_autoidle(struct omap_hwmod *oh) + * reset. Returns 0 upon success or a negative error code upon + * failure. + */ +-static int __init _setup_reset(struct omap_hwmod *oh) ++static int _setup_reset(struct omap_hwmod *oh) + { + int r; + +@@ -2589,7 +2589,7 @@ static int __init _setup_reset(struct omap_hwmod *oh) + * + * No return value. 
+ */ +-static void __init _setup_postsetup(struct omap_hwmod *oh) ++static void _setup_postsetup(struct omap_hwmod *oh) + { + u8 postsetup_state; + +diff --git a/arch/arm/mach-pxa/cm-x300.c b/arch/arm/mach-pxa/cm-x300.c +index 868448d2cd82..38ab30869821 100644 +--- a/arch/arm/mach-pxa/cm-x300.c ++++ b/arch/arm/mach-pxa/cm-x300.c +@@ -547,7 +547,7 @@ static struct pxa3xx_u2d_platform_data cm_x300_u2d_platform_data = { + .exit = cm_x300_u2d_exit, + }; + +-static void cm_x300_init_u2d(void) ++static void __init cm_x300_init_u2d(void) + { + pxa3xx_set_u2d_info(&cm_x300_u2d_platform_data); + } +diff --git a/arch/arm/mach-pxa/littleton.c b/arch/arm/mach-pxa/littleton.c +index fae38fdc8d8e..5cd6b4bd31e0 100644 +--- a/arch/arm/mach-pxa/littleton.c ++++ b/arch/arm/mach-pxa/littleton.c +@@ -183,7 +183,7 @@ static struct pxafb_mach_info littleton_lcd_info = { + .lcd_conn = LCD_COLOR_TFT_16BPP, + }; + +-static void littleton_init_lcd(void) ++static void __init littleton_init_lcd(void) + { + pxa_set_fb_info(NULL, &littleton_lcd_info); + } +diff --git a/arch/arm/mach-pxa/zeus.c b/arch/arm/mach-pxa/zeus.c +index ecbcaee5a2d5..c293ea0a7eaf 100644 +--- a/arch/arm/mach-pxa/zeus.c ++++ b/arch/arm/mach-pxa/zeus.c +@@ -558,7 +558,7 @@ static struct pxaohci_platform_data zeus_ohci_platform_data = { + .flags = ENABLE_PORT_ALL | POWER_SENSE_LOW, + }; + +-static void zeus_register_ohci(void) ++static void __init zeus_register_ohci(void) + { + /* Port 2 is shared between host and client interface. 
*/ + UP2OCR = UP2OCR_HXOE | UP2OCR_HXS | UP2OCR_DMPDE | UP2OCR_DPPDE; +diff --git a/arch/arm64/include/asm/io.h b/arch/arm64/include/asm/io.h +index 35b2e50f17fb..49bb9a020a09 100644 +--- a/arch/arm64/include/asm/io.h ++++ b/arch/arm64/include/asm/io.h +@@ -106,7 +106,23 @@ static inline u64 __raw_readq(const volatile void __iomem *addr) + } + + /* IO barriers */ +-#define __iormb() rmb() ++#define __iormb(v) \ ++({ \ ++ unsigned long tmp; \ ++ \ ++ rmb(); \ ++ \ ++ /* \ ++ * Create a dummy control dependency from the IO read to any \ ++ * later instructions. This ensures that a subsequent call to \ ++ * udelay() will be ordered due to the ISB in get_cycles(). \ ++ */ \ ++ asm volatile("eor %0, %1, %1\n" \ ++ "cbnz %0, ." \ ++ : "=r" (tmp) : "r" ((unsigned long)(v)) \ ++ : "memory"); \ ++}) ++ + #define __iowmb() wmb() + + #define mmiowb() do { } while (0) +@@ -131,10 +147,10 @@ static inline u64 __raw_readq(const volatile void __iomem *addr) + * following Normal memory access. Writes are ordered relative to any prior + * Normal memory access. 
+ */ +-#define readb(c) ({ u8 __v = readb_relaxed(c); __iormb(); __v; }) +-#define readw(c) ({ u16 __v = readw_relaxed(c); __iormb(); __v; }) +-#define readl(c) ({ u32 __v = readl_relaxed(c); __iormb(); __v; }) +-#define readq(c) ({ u64 __v = readq_relaxed(c); __iormb(); __v; }) ++#define readb(c) ({ u8 __v = readb_relaxed(c); __iormb(__v); __v; }) ++#define readw(c) ({ u16 __v = readw_relaxed(c); __iormb(__v); __v; }) ++#define readl(c) ({ u32 __v = readl_relaxed(c); __iormb(__v); __v; }) ++#define readq(c) ({ u64 __v = readq_relaxed(c); __iormb(__v); __v; }) + + #define writeb(v,c) ({ __iowmb(); writeb_relaxed((v),(c)); }) + #define writew(v,c) ({ __iowmb(); writew_relaxed((v),(c)); }) +@@ -185,9 +201,9 @@ extern void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size); + /* + * io{read,write}{16,32,64}be() macros + */ +-#define ioread16be(p) ({ __u16 __v = be16_to_cpu((__force __be16)__raw_readw(p)); __iormb(); __v; }) +-#define ioread32be(p) ({ __u32 __v = be32_to_cpu((__force __be32)__raw_readl(p)); __iormb(); __v; }) +-#define ioread64be(p) ({ __u64 __v = be64_to_cpu((__force __be64)__raw_readq(p)); __iormb(); __v; }) ++#define ioread16be(p) ({ __u16 __v = be16_to_cpu((__force __be16)__raw_readw(p)); __iormb(__v); __v; }) ++#define ioread32be(p) ({ __u32 __v = be32_to_cpu((__force __be32)__raw_readl(p)); __iormb(__v); __v; }) ++#define ioread64be(p) ({ __u64 __v = be64_to_cpu((__force __be64)__raw_readq(p)); __iormb(__v); __v; }) + + #define iowrite16be(v,p) ({ __iowmb(); __raw_writew((__force __u16)cpu_to_be16(v), p); }) + #define iowrite32be(v,p) ({ __iowmb(); __raw_writel((__force __u32)cpu_to_be32(v), p); }) +diff --git a/arch/arm64/kernel/entry-ftrace.S b/arch/arm64/kernel/entry-ftrace.S +index e1be42e11ff5..5a10e3a3e843 100644 +--- a/arch/arm64/kernel/entry-ftrace.S ++++ b/arch/arm64/kernel/entry-ftrace.S +@@ -79,7 +79,6 @@ + .macro mcount_get_lr reg + ldr \reg, [x29] + ldr \reg, [\reg, #8] +- mcount_adjust_addr \reg, \reg + .endm + + .macro 
mcount_get_lr_addr reg +diff --git a/arch/mips/boot/dts/img/boston.dts b/arch/mips/boot/dts/img/boston.dts +index f7aad80c69ab..bebb0fa21369 100644 +--- a/arch/mips/boot/dts/img/boston.dts ++++ b/arch/mips/boot/dts/img/boston.dts +@@ -141,6 +141,12 @@ + #size-cells = <2>; + #interrupt-cells = <1>; + ++ eg20t_phub@2,0,0 { ++ compatible = "pci8086,8801"; ++ reg = <0x00020000 0 0 0 0>; ++ intel,eg20t-prefetch = <0>; ++ }; ++ + eg20t_mac@2,0,1 { + compatible = "pci8086,8802"; + reg = <0x00020100 0 0 0 0>; +diff --git a/arch/mips/include/uapi/asm/inst.h b/arch/mips/include/uapi/asm/inst.h +index c05dcf5ab414..273ef58f4d43 100644 +--- a/arch/mips/include/uapi/asm/inst.h ++++ b/arch/mips/include/uapi/asm/inst.h +@@ -369,8 +369,8 @@ enum mm_32a_minor_op { + mm_ext_op = 0x02c, + mm_pool32axf_op = 0x03c, + mm_srl32_op = 0x040, ++ mm_srlv32_op = 0x050, + mm_sra_op = 0x080, +- mm_srlv32_op = 0x090, + mm_rotr_op = 0x0c0, + mm_lwxs_op = 0x118, + mm_addu32_op = 0x150, +diff --git a/arch/mips/ralink/Kconfig b/arch/mips/ralink/Kconfig +index f26736b7080b..fae36f0371d3 100644 +--- a/arch/mips/ralink/Kconfig ++++ b/arch/mips/ralink/Kconfig +@@ -39,6 +39,7 @@ choice + + config SOC_MT7620 + bool "MT7620/8" ++ select CPU_MIPSR2_IRQ_VI + select HW_HAS_PCI + + config SOC_MT7621 +diff --git a/arch/powerpc/include/asm/fadump.h b/arch/powerpc/include/asm/fadump.h +index 1e7a33592e29..15bc07a31c46 100644 +--- a/arch/powerpc/include/asm/fadump.h ++++ b/arch/powerpc/include/asm/fadump.h +@@ -200,7 +200,7 @@ struct fad_crash_memory_ranges { + unsigned long long size; + }; + +-extern int is_fadump_boot_memory_area(u64 addr, ulong size); ++extern int is_fadump_memory_area(u64 addr, ulong size); + extern int early_init_dt_scan_fw_dump(unsigned long node, + const char *uname, int depth, void *data); + extern int fadump_reserve_mem(void); +diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h +index 565cead12be2..cf26e62b268d 100644 +--- 
a/arch/powerpc/include/asm/uaccess.h ++++ b/arch/powerpc/include/asm/uaccess.h +@@ -54,7 +54,7 @@ + #endif + + #define access_ok(type, addr, size) \ +- (__chk_user_ptr(addr), \ ++ (__chk_user_ptr(addr), (void)(type), \ + __access_ok((__force unsigned long)(addr), (size), get_fs())) + + /* +diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c +index 5a6470383ca3..62d7ef6508de 100644 +--- a/arch/powerpc/kernel/fadump.c ++++ b/arch/powerpc/kernel/fadump.c +@@ -117,13 +117,19 @@ int __init early_init_dt_scan_fw_dump(unsigned long node, + + /* + * If fadump is registered, check if the memory provided +- * falls within boot memory area. ++ * falls within boot memory area and reserved memory area. + */ +-int is_fadump_boot_memory_area(u64 addr, ulong size) ++int is_fadump_memory_area(u64 addr, ulong size) + { ++ u64 d_start = fw_dump.reserve_dump_area_start; ++ u64 d_end = d_start + fw_dump.reserve_dump_area_size; ++ + if (!fw_dump.dump_registered) + return 0; + ++ if (((addr + size) > d_start) && (addr <= d_end)) ++ return 1; ++ + return (addr + size) > RMA_START && addr <= fw_dump.boot_memory_size; + } + +diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c +index ecb45361095b..a35995a6b34a 100644 +--- a/arch/powerpc/kvm/powerpc.c ++++ b/arch/powerpc/kvm/powerpc.c +@@ -540,8 +540,11 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) + #ifdef CONFIG_PPC_BOOK3S_64 + case KVM_CAP_SPAPR_TCE: + case KVM_CAP_SPAPR_TCE_64: +- /* fallthrough */ ++ r = 1; ++ break; + case KVM_CAP_SPAPR_TCE_VFIO: ++ r = !!cpu_has_feature(CPU_FTR_HVMODE); ++ break; + case KVM_CAP_PPC_RTAS: + case KVM_CAP_PPC_FIXUP_HCALL: + case KVM_CAP_PPC_ENABLE_HCALL: +diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c +index 6e1e39035380..52863deed65d 100644 +--- a/arch/powerpc/mm/fault.c ++++ b/arch/powerpc/mm/fault.c +@@ -215,7 +215,9 @@ static int mm_fault_error(struct pt_regs *regs, unsigned long addr, int fault) + static bool bad_kernel_fault(bool 
is_exec, unsigned long error_code, + unsigned long address) + { +- if (is_exec && (error_code & (DSISR_NOEXEC_OR_G | DSISR_KEYFAULT))) { ++ /* NX faults set DSISR_PROTFAULT on the 8xx, DSISR_NOEXEC_OR_G on others */ ++ if (is_exec && (error_code & (DSISR_NOEXEC_OR_G | DSISR_KEYFAULT | ++ DSISR_PROTFAULT))) { + printk_ratelimited(KERN_CRIT "kernel tried to execute" + " exec-protected page (%lx) -" + "exploit attempt? (uid: %d)\n", +diff --git a/arch/powerpc/perf/isa207-common.c b/arch/powerpc/perf/isa207-common.c +index 2efee3f196f5..cf9c35aa0cf4 100644 +--- a/arch/powerpc/perf/isa207-common.c ++++ b/arch/powerpc/perf/isa207-common.c +@@ -228,8 +228,13 @@ void isa207_get_mem_weight(u64 *weight) + u64 mmcra = mfspr(SPRN_MMCRA); + u64 exp = MMCRA_THR_CTR_EXP(mmcra); + u64 mantissa = MMCRA_THR_CTR_MANT(mmcra); ++ u64 sier = mfspr(SPRN_SIER); ++ u64 val = (sier & ISA207_SIER_TYPE_MASK) >> ISA207_SIER_TYPE_SHIFT; + +- *weight = mantissa << (2 * exp); ++ if (val == 0 || val == 7) ++ *weight = 0; ++ else ++ *weight = mantissa << (2 * exp); + } + + int isa207_get_constraint(u64 event, unsigned long *maskp, unsigned long *valp) +diff --git a/arch/powerpc/platforms/pseries/dlpar.c b/arch/powerpc/platforms/pseries/dlpar.c +index e9149d05d30b..f4e6565dd7a9 100644 +--- a/arch/powerpc/platforms/pseries/dlpar.c ++++ b/arch/powerpc/platforms/pseries/dlpar.c +@@ -284,6 +284,8 @@ int dlpar_detach_node(struct device_node *dn) + if (rc) + return rc; + ++ of_node_put(dn); ++ + return 0; + } + +diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c +index 1d48ab424bd9..93e09f108ca1 100644 +--- a/arch/powerpc/platforms/pseries/hotplug-memory.c ++++ b/arch/powerpc/platforms/pseries/hotplug-memory.c +@@ -441,8 +441,11 @@ static bool lmb_is_removable(struct of_drconf_cell *lmb) + phys_addr = lmb->base_addr; + + #ifdef CONFIG_FA_DUMP +- /* Don't hot-remove memory that falls in fadump boot memory area */ +- if 
(is_fadump_boot_memory_area(phys_addr, block_sz)) ++ /* ++ * Don't hot-remove memory that falls in fadump boot memory area ++ * and memory that is reserved for capturing old kernel memory. ++ */ ++ if (is_fadump_memory_area(phys_addr, block_sz)) + return false; + #endif + +diff --git a/arch/s390/include/uapi/asm/zcrypt.h b/arch/s390/include/uapi/asm/zcrypt.h +index 137ef473584e..b9fb42089760 100644 +--- a/arch/s390/include/uapi/asm/zcrypt.h ++++ b/arch/s390/include/uapi/asm/zcrypt.h +@@ -161,8 +161,8 @@ struct ica_xcRB { + * @cprb_len: CPRB header length [0x0020] + * @cprb_ver_id: CPRB version id. [0x04] + * @pad_000: Alignment pad bytes +- * @flags: Admin cmd [0x80] or functional cmd [0x00] +- * @func_id: Function id / subtype [0x5434] ++ * @flags: Admin bit [0x80], Special bit [0x20] ++ * @func_id: Function id / subtype [0x5434] "T4" + * @source_id: Source id [originator id] + * @target_id: Target id [usage/ctrl domain id] + * @ret_code: Return code +diff --git a/arch/um/include/asm/pgtable.h b/arch/um/include/asm/pgtable.h +index 7485398d0737..9c04562310b3 100644 +--- a/arch/um/include/asm/pgtable.h ++++ b/arch/um/include/asm/pgtable.h +@@ -197,12 +197,17 @@ static inline pte_t pte_mkold(pte_t pte) + + static inline pte_t pte_wrprotect(pte_t pte) + { +- pte_clear_bits(pte, _PAGE_RW); ++ if (likely(pte_get_bits(pte, _PAGE_RW))) ++ pte_clear_bits(pte, _PAGE_RW); ++ else ++ return pte; + return(pte_mknewprot(pte)); + } + + static inline pte_t pte_mkread(pte_t pte) + { ++ if (unlikely(pte_get_bits(pte, _PAGE_USER))) ++ return pte; + pte_set_bits(pte, _PAGE_USER); + return(pte_mknewprot(pte)); + } +@@ -221,6 +226,8 @@ static inline pte_t pte_mkyoung(pte_t pte) + + static inline pte_t pte_mkwrite(pte_t pte) + { ++ if (unlikely(pte_get_bits(pte, _PAGE_RW))) ++ return pte; + pte_set_bits(pte, _PAGE_RW); + return(pte_mknewprot(pte)); + } +diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c +index 7bb80151bfff..1cb5ff3ee728 100644 +--- 
a/arch/x86/events/intel/core.c ++++ b/arch/x86/events/intel/core.c +@@ -3419,6 +3419,11 @@ static void free_excl_cntrs(int cpu) + } + + static void intel_pmu_cpu_dying(int cpu) ++{ ++ fini_debug_store_on_cpu(cpu); ++} ++ ++static void intel_pmu_cpu_dead(int cpu) + { + struct cpu_hw_events *cpuc = &per_cpu(cpu_hw_events, cpu); + struct intel_shared_regs *pc; +@@ -3431,8 +3436,6 @@ static void intel_pmu_cpu_dying(int cpu) + } + + free_excl_cntrs(cpu); +- +- fini_debug_store_on_cpu(cpu); + } + + static void intel_pmu_sched_task(struct perf_event_context *ctx, +@@ -3521,6 +3524,7 @@ static __initconst const struct x86_pmu core_pmu = { + .cpu_prepare = intel_pmu_cpu_prepare, + .cpu_starting = intel_pmu_cpu_starting, + .cpu_dying = intel_pmu_cpu_dying, ++ .cpu_dead = intel_pmu_cpu_dead, + }; + + static struct attribute *intel_pmu_attrs[]; +@@ -3560,6 +3564,8 @@ static __initconst const struct x86_pmu intel_pmu = { + .cpu_prepare = intel_pmu_cpu_prepare, + .cpu_starting = intel_pmu_cpu_starting, + .cpu_dying = intel_pmu_cpu_dying, ++ .cpu_dead = intel_pmu_cpu_dead, ++ + .guest_get_msrs = intel_guest_get_msrs, + .sched_task = intel_pmu_sched_task, + }; +diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c +index a68aba8a482f..6b66285c6ced 100644 +--- a/arch/x86/events/intel/uncore_snbep.c ++++ b/arch/x86/events/intel/uncore_snbep.c +@@ -1221,6 +1221,8 @@ static struct pci_driver snbep_uncore_pci_driver = { + .id_table = snbep_uncore_pci_ids, + }; + ++#define NODE_ID_MASK 0x7 ++ + /* + * build pci bus to socket mapping + */ +@@ -1242,7 +1244,7 @@ static int snbep_pci2phy_map_init(int devid, int nodeid_loc, int idmap_loc, bool + err = pci_read_config_dword(ubox_dev, nodeid_loc, &config); + if (err) + break; +- nodeid = config; ++ nodeid = config & NODE_ID_MASK; + /* get the Node ID mapping */ + err = pci_read_config_dword(ubox_dev, idmap_loc, &config); + if (err) +diff --git a/arch/x86/include/asm/fpu/internal.h 
b/arch/x86/include/asm/fpu/internal.h +index 69dcdf195b61..fa2c93cb42a2 100644 +--- a/arch/x86/include/asm/fpu/internal.h ++++ b/arch/x86/include/asm/fpu/internal.h +@@ -106,6 +106,9 @@ extern void fpstate_sanitize_xstate(struct fpu *fpu); + #define user_insn(insn, output, input...) \ + ({ \ + int err; \ ++ \ ++ might_fault(); \ ++ \ + asm volatile(ASM_STAC "\n" \ + "1:" #insn "\n\t" \ + "2: " ASM_CLAC "\n" \ +diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c +index 004e60470a77..ec7aedba3d74 100644 +--- a/arch/x86/kernel/cpu/bugs.c ++++ b/arch/x86/kernel/cpu/bugs.c +@@ -68,7 +68,7 @@ void __init check_bugs(void) + * identify_boot_cpu() initialized SMT support information, let the + * core code know. + */ +- cpu_smt_check_topology_early(); ++ cpu_smt_check_topology(); + + if (!IS_ENABLED(CONFIG_SMP)) { + pr_info("CPU: "); +diff --git a/arch/x86/kernel/cpu/mcheck/mce.c b/arch/x86/kernel/cpu/mcheck/mce.c +index 98e4e4dc4a3b..54874e2b1d32 100644 +--- a/arch/x86/kernel/cpu/mcheck/mce.c ++++ b/arch/x86/kernel/cpu/mcheck/mce.c +@@ -773,6 +773,7 @@ static int mce_no_way_out(struct mce *m, char **msg, unsigned long *validp, + quirk_no_way_out(i, m, regs); + + if (mce_severity(m, mca_cfg.tolerant, &tmp, true) >= MCE_PANIC_SEVERITY) { ++ m->bank = i; + mce_read_aux(m, i); + *msg = tmp; + return 1; +diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c +index 4dc79d139810..656ac12f5439 100644 +--- a/arch/x86/kvm/svm.c ++++ b/arch/x86/kvm/svm.c +@@ -5319,6 +5319,13 @@ static bool svm_cpu_has_accelerated_tpr(void) + + static bool svm_has_emulated_msr(int index) + { ++ switch (index) { ++ case MSR_IA32_MCG_EXT_CTL: ++ return false; ++ default: ++ break; ++ } ++ + return true; + } + +diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c +index 16bb8e35605e..1f5de4314291 100644 +--- a/arch/x86/kvm/vmx.c ++++ b/arch/x86/kvm/vmx.c +@@ -27,6 +27,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -7708,6 +7709,7 @@ static void 
free_nested(struct vcpu_vmx *vmx) + if (!vmx->nested.vmxon) + return; + ++ hrtimer_cancel(&vmx->nested.preemption_timer); + vmx->nested.vmxon = false; + free_vpid(vmx->nested.vpid02); + vmx->nested.posted_intr_nv = -1; +@@ -10119,7 +10121,7 @@ static int vmx_vm_init(struct kvm *kvm) + * Warn upon starting the first VM in a potentially + * insecure environment. + */ +- if (cpu_smt_control == CPU_SMT_ENABLED) ++ if (sched_smt_active()) + pr_warn_once(L1TF_MSG_SMT); + if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_NEVER) + pr_warn_once(L1TF_MSG_L1D); +diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c +index 867c22f8d59b..b0e7621ddf01 100644 +--- a/arch/x86/kvm/x86.c ++++ b/arch/x86/kvm/x86.c +@@ -4611,6 +4611,13 @@ int kvm_read_guest_virt(struct kvm_vcpu *vcpu, + { + u32 access = (kvm_x86_ops->get_cpl(vcpu) == 3) ? PFERR_USER_MASK : 0; + ++ /* ++ * FIXME: this should call handle_emulation_failure if X86EMUL_IO_NEEDED ++ * is returned, but our callers are not ready for that and they blindly ++ * call kvm_inject_page_fault. Ensure that they at least do not leak ++ * uninitialized kernel stack memory into cr2 and error code. 
++ */ ++ memset(exception, 0, sizeof(*exception)); + return kvm_read_guest_virt_helper(addr, val, bytes, vcpu, access, + exception); + } +diff --git a/arch/x86/pci/broadcom_bus.c b/arch/x86/pci/broadcom_bus.c +index 526536c81ddc..ca1e8e6dccc8 100644 +--- a/arch/x86/pci/broadcom_bus.c ++++ b/arch/x86/pci/broadcom_bus.c +@@ -50,8 +50,8 @@ static void __init cnb20le_res(u8 bus, u8 slot, u8 func) + word1 = read_pci_config_16(bus, slot, func, 0xc0); + word2 = read_pci_config_16(bus, slot, func, 0xc2); + if (word1 != word2) { +- res.start = (word1 << 16) | 0x0000; +- res.end = (word2 << 16) | 0xffff; ++ res.start = ((resource_size_t) word1 << 16) | 0x0000; ++ res.end = ((resource_size_t) word2 << 16) | 0xffff; + res.flags = IORESOURCE_MEM; + update_res(info, res.start, res.end, res.flags, 0); + } +diff --git a/crypto/Kconfig b/crypto/Kconfig +index 5579eb88d460..84f99f8eca4b 100644 +--- a/crypto/Kconfig ++++ b/crypto/Kconfig +@@ -930,7 +930,8 @@ config CRYPTO_AES_TI + 8 for decryption), this implementation only uses just two S-boxes of + 256 bytes each, and attempts to eliminate data dependent latencies by + prefetching the entire table into the cache at the start of each +- block. ++ block. Interrupts are also disabled to avoid races where cachelines ++ are evicted when the CPU is interrupted to do something else. 
+ + config CRYPTO_AES_586 + tristate "AES cipher algorithms (i586)" +diff --git a/crypto/aes_ti.c b/crypto/aes_ti.c +index 03023b2290e8..1ff9785b30f5 100644 +--- a/crypto/aes_ti.c ++++ b/crypto/aes_ti.c +@@ -269,6 +269,7 @@ static void aesti_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) + const u32 *rkp = ctx->key_enc + 4; + int rounds = 6 + ctx->key_length / 4; + u32 st0[4], st1[4]; ++ unsigned long flags; + int round; + + st0[0] = ctx->key_enc[0] ^ get_unaligned_le32(in); +@@ -276,6 +277,12 @@ static void aesti_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) + st0[2] = ctx->key_enc[2] ^ get_unaligned_le32(in + 8); + st0[3] = ctx->key_enc[3] ^ get_unaligned_le32(in + 12); + ++ /* ++ * Temporarily disable interrupts to avoid races where cachelines are ++ * evicted when the CPU is interrupted to do something else. ++ */ ++ local_irq_save(flags); ++ + st0[0] ^= __aesti_sbox[ 0] ^ __aesti_sbox[128]; + st0[1] ^= __aesti_sbox[32] ^ __aesti_sbox[160]; + st0[2] ^= __aesti_sbox[64] ^ __aesti_sbox[192]; +@@ -300,6 +307,8 @@ static void aesti_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) + put_unaligned_le32(subshift(st1, 1) ^ rkp[5], out + 4); + put_unaligned_le32(subshift(st1, 2) ^ rkp[6], out + 8); + put_unaligned_le32(subshift(st1, 3) ^ rkp[7], out + 12); ++ ++ local_irq_restore(flags); + } + + static void aesti_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) +@@ -308,6 +317,7 @@ static void aesti_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) + const u32 *rkp = ctx->key_dec + 4; + int rounds = 6 + ctx->key_length / 4; + u32 st0[4], st1[4]; ++ unsigned long flags; + int round; + + st0[0] = ctx->key_dec[0] ^ get_unaligned_le32(in); +@@ -315,6 +325,12 @@ static void aesti_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) + st0[2] = ctx->key_dec[2] ^ get_unaligned_le32(in + 8); + st0[3] = ctx->key_dec[3] ^ get_unaligned_le32(in + 12); + ++ /* ++ * Temporarily disable interrupts to avoid races where cachelines are ++ * evicted 
when the CPU is interrupted to do something else. ++ */ ++ local_irq_save(flags); ++ + st0[0] ^= __aesti_inv_sbox[ 0] ^ __aesti_inv_sbox[128]; + st0[1] ^= __aesti_inv_sbox[32] ^ __aesti_inv_sbox[160]; + st0[2] ^= __aesti_inv_sbox[64] ^ __aesti_inv_sbox[192]; +@@ -339,6 +355,8 @@ static void aesti_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) + put_unaligned_le32(inv_subshift(st1, 1) ^ rkp[5], out + 4); + put_unaligned_le32(inv_subshift(st1, 2) ^ rkp[6], out + 8); + put_unaligned_le32(inv_subshift(st1, 3) ^ rkp[7], out + 12); ++ ++ local_irq_restore(flags); + } + + static struct crypto_alg aes_alg = { +diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c +index f14695e744d0..5889f6407fea 100644 +--- a/drivers/acpi/apei/ghes.c ++++ b/drivers/acpi/apei/ghes.c +@@ -675,6 +675,8 @@ static void __ghes_panic(struct ghes *ghes) + { + __ghes_print_estatus(KERN_EMERG, ghes->generic, ghes->estatus); + ++ ghes_clear_estatus(ghes); ++ + /* reboot to log the error! */ + if (!panic_timeout) + panic_timeout = ghes_panic_timeout; +diff --git a/drivers/acpi/spcr.c b/drivers/acpi/spcr.c +index 324b35bfe781..f567fa5f0148 100644 +--- a/drivers/acpi/spcr.c ++++ b/drivers/acpi/spcr.c +@@ -148,6 +148,13 @@ int __init parse_spcr(bool earlycon) + } + + switch (table->baud_rate) { ++ case 0: ++ /* ++ * SPCR 1.04 defines 0 as a preconfigured state of UART. ++ * Assume firmware or bootloader configures console correctly. 
++ */ ++ baud_rate = 0; ++ break; + case 3: + baud_rate = 9600; + break; +@@ -196,6 +203,10 @@ int __init parse_spcr(bool earlycon) + * UART so don't attempt to change to the baud rate state + * in the table because driver cannot calculate the dividers + */ ++ baud_rate = 0; ++ } ++ ++ if (!baud_rate) { + snprintf(opts, sizeof(opts), "%s,%s,0x%llx", uart, iotype, + table->serial_port.address); + } else { +diff --git a/drivers/ata/sata_rcar.c b/drivers/ata/sata_rcar.c +index 537d11869069..3e82a4ac239e 100644 +--- a/drivers/ata/sata_rcar.c ++++ b/drivers/ata/sata_rcar.c +@@ -880,7 +880,9 @@ static int sata_rcar_probe(struct platform_device *pdev) + int ret = 0; + + irq = platform_get_irq(pdev, 0); +- if (irq <= 0) ++ if (irq < 0) ++ return irq; ++ if (!irq) + return -EINVAL; + + priv = devm_kzalloc(&pdev->dev, sizeof(struct sata_rcar_priv), +diff --git a/drivers/base/bus.c b/drivers/base/bus.c +index 1cf1460f8c90..3464c49dad0d 100644 +--- a/drivers/base/bus.c ++++ b/drivers/base/bus.c +@@ -616,8 +616,10 @@ static void remove_probe_files(struct bus_type *bus) + static ssize_t uevent_store(struct device_driver *drv, const char *buf, + size_t count) + { +- kobject_synth_uevent(&drv->p->kobj, buf, count); +- return count; ++ int rc; ++ ++ rc = kobject_synth_uevent(&drv->p->kobj, buf, count); ++ return rc ? rc : count; + } + static DRIVER_ATTR_WO(uevent); + +@@ -833,8 +835,10 @@ static void klist_devices_put(struct klist_node *n) + static ssize_t bus_uevent_store(struct bus_type *bus, + const char *buf, size_t count) + { +- kobject_synth_uevent(&bus->p->subsys.kobj, buf, count); +- return count; ++ int rc; ++ ++ rc = kobject_synth_uevent(&bus->p->subsys.kobj, buf, count); ++ return rc ? 
rc : count; + } + static BUS_ATTR(uevent, S_IWUSR, NULL, bus_uevent_store); + +diff --git a/drivers/base/core.c b/drivers/base/core.c +index fc5bbb2519fe..1c67bf24bc23 100644 +--- a/drivers/base/core.c ++++ b/drivers/base/core.c +@@ -991,8 +991,14 @@ out: + static ssize_t uevent_store(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) + { +- if (kobject_synth_uevent(&dev->kobj, buf, count)) ++ int rc; ++ ++ rc = kobject_synth_uevent(&dev->kobj, buf, count); ++ ++ if (rc) { + dev_err(dev, "uevent: failed to send synthetic uevent\n"); ++ return rc; ++ } + + return count; + } +diff --git a/drivers/base/dd.c b/drivers/base/dd.c +index 55fc31f6fe7f..d928cc6d0638 100644 +--- a/drivers/base/dd.c ++++ b/drivers/base/dd.c +@@ -813,9 +813,6 @@ static void __device_release_driver(struct device *dev, struct device *parent) + + drv = dev->driver; + if (drv) { +- if (driver_allows_async_probing(drv)) +- async_synchronize_full(); +- + while (device_links_busy(dev)) { + device_unlock(dev); + if (parent) +@@ -920,6 +917,9 @@ void driver_detach(struct device_driver *drv) + struct device_private *dev_prv; + struct device *dev; + ++ if (driver_allows_async_probing(drv)) ++ async_synchronize_full(); ++ + for (;;) { + spin_lock(&drv->p->klist_devices.k_lock); + if (list_empty(&drv->p->klist_devices.k_list)) { +diff --git a/drivers/base/power/opp/core.c b/drivers/base/power/opp/core.c +index d4862775b9f6..d5e7e8cc4f22 100644 +--- a/drivers/base/power/opp/core.c ++++ b/drivers/base/power/opp/core.c +@@ -192,12 +192,12 @@ unsigned long dev_pm_opp_get_max_volt_latency(struct device *dev) + if (IS_ERR(opp_table)) + return 0; + +- count = opp_table->regulator_count; +- + /* Regulator may not be required for the device */ +- if (!count) ++ if (!opp_table->regulators) + goto put_opp_table; + ++ count = opp_table->regulator_count; ++ + uV = kmalloc_array(count, sizeof(*uV), GFP_KERNEL); + if (!uV) + goto put_opp_table; +@@ -921,6 +921,9 @@ static bool 
_opp_supported_by_regulators(struct dev_pm_opp *opp, + struct regulator *reg; + int i; + ++ if (!opp_table->regulators) ++ return true; ++ + for (i = 0; i < opp_table->regulator_count; i++) { + reg = opp_table->regulators[i]; + +@@ -1226,7 +1229,7 @@ static int _allocate_set_opp_data(struct opp_table *opp_table) + struct dev_pm_set_opp_data *data; + int len, count = opp_table->regulator_count; + +- if (WARN_ON(!count)) ++ if (WARN_ON(!opp_table->regulators)) + return -EINVAL; + + /* space for set_opp_data */ +diff --git a/drivers/block/drbd/drbd_nl.c b/drivers/block/drbd/drbd_nl.c +index a12f77e6891e..ad13ec66c8e4 100644 +--- a/drivers/block/drbd/drbd_nl.c ++++ b/drivers/block/drbd/drbd_nl.c +@@ -668,14 +668,15 @@ drbd_set_role(struct drbd_device *const device, enum drbd_role new_role, int for + if (rv == SS_TWO_PRIMARIES) { + /* Maybe the peer is detected as dead very soon... + retry at most once more in this case. */ +- int timeo; +- rcu_read_lock(); +- nc = rcu_dereference(connection->net_conf); +- timeo = nc ? (nc->ping_timeo + 1) * HZ / 10 : 1; +- rcu_read_unlock(); +- schedule_timeout_interruptible(timeo); +- if (try < max_tries) ++ if (try < max_tries) { ++ int timeo; + try = max_tries - 1; ++ rcu_read_lock(); ++ nc = rcu_dereference(connection->net_conf); ++ timeo = nc ? 
(nc->ping_timeo + 1) * HZ / 10 : 1; ++ rcu_read_unlock(); ++ schedule_timeout_interruptible(timeo); ++ } + continue; + } + if (rv < SS_SUCCESS) { +diff --git a/drivers/block/drbd/drbd_receiver.c b/drivers/block/drbd/drbd_receiver.c +index 796eaf347dc0..1aad373da50e 100644 +--- a/drivers/block/drbd/drbd_receiver.c ++++ b/drivers/block/drbd/drbd_receiver.c +@@ -3361,7 +3361,7 @@ static enum drbd_conns drbd_sync_handshake(struct drbd_peer_device *peer_device, + enum drbd_conns rv = C_MASK; + enum drbd_disk_state mydisk; + struct net_conf *nc; +- int hg, rule_nr, rr_conflict, tentative; ++ int hg, rule_nr, rr_conflict, tentative, always_asbp; + + mydisk = device->state.disk; + if (mydisk == D_NEGOTIATING) +@@ -3412,8 +3412,12 @@ static enum drbd_conns drbd_sync_handshake(struct drbd_peer_device *peer_device, + + rcu_read_lock(); + nc = rcu_dereference(peer_device->connection->net_conf); ++ always_asbp = nc->always_asbp; ++ rr_conflict = nc->rr_conflict; ++ tentative = nc->tentative; ++ rcu_read_unlock(); + +- if (hg == 100 || (hg == -100 && nc->always_asbp)) { ++ if (hg == 100 || (hg == -100 && always_asbp)) { + int pcount = (device->state.role == R_PRIMARY) + + (peer_role == R_PRIMARY); + int forced = (hg == -100); +@@ -3452,9 +3456,6 @@ static enum drbd_conns drbd_sync_handshake(struct drbd_peer_device *peer_device, + "Sync from %s node\n", + (hg < 0) ? 
"peer" : "this"); + } +- rr_conflict = nc->rr_conflict; +- tentative = nc->tentative; +- rcu_read_unlock(); + + if (hg == -100) { + /* FIXME this log message is not correct if we end up here +@@ -4138,7 +4139,7 @@ static int receive_uuids(struct drbd_connection *connection, struct packet_info + kfree(device->p_uuid); + device->p_uuid = p_uuid; + +- if (device->state.conn < C_CONNECTED && ++ if ((device->state.conn < C_CONNECTED || device->state.pdsk == D_DISKLESS) && + device->state.disk < D_INCONSISTENT && + device->state.role == R_PRIMARY && + (device->ed_uuid & ~((u64)1)) != (p_uuid[UI_CURRENT] & ~((u64)1))) { +diff --git a/drivers/block/sunvdc.c b/drivers/block/sunvdc.c +index ad9749463d4f..ed4d6276e94f 100644 +--- a/drivers/block/sunvdc.c ++++ b/drivers/block/sunvdc.c +@@ -41,6 +41,8 @@ MODULE_VERSION(DRV_MODULE_VERSION); + #define WAITING_FOR_GEN_CMD 0x04 + #define WAITING_FOR_ANY -1 + ++#define VDC_MAX_RETRIES 10 ++ + static struct workqueue_struct *sunvdc_wq; + + struct vdc_req_entry { +@@ -427,6 +429,7 @@ static int __vdc_tx_trigger(struct vdc_port *port) + .end_idx = dr->prod, + }; + int err, delay; ++ int retries = 0; + + hdr.seq = dr->snd_nxt; + delay = 1; +@@ -439,6 +442,8 @@ static int __vdc_tx_trigger(struct vdc_port *port) + udelay(delay); + if ((delay <<= 1) > 128) + delay = 128; ++ if (retries++ > VDC_MAX_RETRIES) ++ break; + } while (err == -EAGAIN); + + if (err == -ENOTCONN) +diff --git a/drivers/block/swim3.c b/drivers/block/swim3.c +index 0d7527c6825a..2f7acdb830c3 100644 +--- a/drivers/block/swim3.c ++++ b/drivers/block/swim3.c +@@ -1027,7 +1027,11 @@ static void floppy_release(struct gendisk *disk, fmode_t mode) + struct swim3 __iomem *sw = fs->swim3; + + mutex_lock(&swim3_mutex); +- if (fs->ref_count > 0 && --fs->ref_count == 0) { ++ if (fs->ref_count > 0) ++ --fs->ref_count; ++ else if (fs->ref_count == -1) ++ fs->ref_count = 0; ++ if (fs->ref_count == 0) { + swim3_action(fs, MOTOR_OFF); + out_8(&sw->control_bic, 0xff); + swim3_select(fs, 
RELAX); +diff --git a/drivers/cdrom/gdrom.c b/drivers/cdrom/gdrom.c +index ae3a7537cf0f..72cd96a8eb19 100644 +--- a/drivers/cdrom/gdrom.c ++++ b/drivers/cdrom/gdrom.c +@@ -889,6 +889,7 @@ static void __exit exit_gdrom(void) + platform_device_unregister(pd); + platform_driver_unregister(&gdrom_driver); + kfree(gd.toc); ++ kfree(gd.cd_info); + } + + module_init(init_gdrom); +diff --git a/drivers/clk/imgtec/clk-boston.c b/drivers/clk/imgtec/clk-boston.c +index 15af423cc0c9..f5d54a64d33c 100644 +--- a/drivers/clk/imgtec/clk-boston.c ++++ b/drivers/clk/imgtec/clk-boston.c +@@ -73,27 +73,32 @@ static void __init clk_boston_setup(struct device_node *np) + hw = clk_hw_register_fixed_rate(NULL, "input", NULL, 0, in_freq); + if (IS_ERR(hw)) { + pr_err("failed to register input clock: %ld\n", PTR_ERR(hw)); +- return; ++ goto error; + } + onecell->hws[BOSTON_CLK_INPUT] = hw; + + hw = clk_hw_register_fixed_rate(NULL, "sys", "input", 0, sys_freq); + if (IS_ERR(hw)) { + pr_err("failed to register sys clock: %ld\n", PTR_ERR(hw)); +- return; ++ goto error; + } + onecell->hws[BOSTON_CLK_SYS] = hw; + + hw = clk_hw_register_fixed_rate(NULL, "cpu", "input", 0, cpu_freq); + if (IS_ERR(hw)) { + pr_err("failed to register cpu clock: %ld\n", PTR_ERR(hw)); +- return; ++ goto error; + } + onecell->hws[BOSTON_CLK_CPU] = hw; + + err = of_clk_add_hw_provider(np, of_clk_hw_onecell_get, onecell); + if (err) + pr_err("failed to add DT provider: %d\n", err); ++ ++ return; ++ ++error: ++ kfree(onecell); + } + + /* +diff --git a/drivers/clk/imx/clk-imx6sl.c b/drivers/clk/imx/clk-imx6sl.c +index 9642cdf0fb88..c264a744fae8 100644 +--- a/drivers/clk/imx/clk-imx6sl.c ++++ b/drivers/clk/imx/clk-imx6sl.c +@@ -17,6 +17,8 @@ + + #include "clk.h" + ++#define CCDR 0x4 ++#define BM_CCM_CCDR_MMDC_CH0_MASK (1 << 17) + #define CCSR 0xc + #define BM_CCSR_PLL1_SW_CLK_SEL (1 << 2) + #define CACRR 0x10 +@@ -414,6 +416,10 @@ static void __init imx6sl_clocks_init(struct device_node *ccm_node) + clks[IMX6SL_CLK_USDHC3] = 
imx_clk_gate2("usdhc3", "usdhc3_podf", base + 0x80, 6); + clks[IMX6SL_CLK_USDHC4] = imx_clk_gate2("usdhc4", "usdhc4_podf", base + 0x80, 8); + ++ /* Ensure the MMDC CH0 handshake is bypassed */ ++ writel_relaxed(readl_relaxed(base + CCDR) | ++ BM_CCM_CCDR_MMDC_CH0_MASK, base + CCDR); ++ + imx_check_clocks(clks, ARRAY_SIZE(clks)); + + clk_data.clks = clks; +diff --git a/drivers/clk/sunxi-ng/ccu-sun8i-a33.c b/drivers/clk/sunxi-ng/ccu-sun8i-a33.c +index 13eb5b23c5e7..c40d572a7602 100644 +--- a/drivers/clk/sunxi-ng/ccu-sun8i-a33.c ++++ b/drivers/clk/sunxi-ng/ccu-sun8i-a33.c +@@ -366,10 +366,10 @@ static SUNXI_CCU_MP_WITH_MUX_GATE(spi1_clk, "spi1", mod0_default_parents, 0x0a4, + static const char * const i2s_parents[] = { "pll-audio-8x", "pll-audio-4x", + "pll-audio-2x", "pll-audio" }; + static SUNXI_CCU_MUX_WITH_GATE(i2s0_clk, "i2s0", i2s_parents, +- 0x0b0, 16, 2, BIT(31), 0); ++ 0x0b0, 16, 2, BIT(31), CLK_SET_RATE_PARENT); + + static SUNXI_CCU_MUX_WITH_GATE(i2s1_clk, "i2s1", i2s_parents, +- 0x0b4, 16, 2, BIT(31), 0); ++ 0x0b4, 16, 2, BIT(31), CLK_SET_RATE_PARENT); + + /* TODO: the parent for most of the USB clocks is not known */ + static SUNXI_CCU_GATE(usb_phy0_clk, "usb-phy0", "osc24M", +@@ -446,7 +446,7 @@ static SUNXI_CCU_M_WITH_GATE(ve_clk, "ve", "pll-ve", + static SUNXI_CCU_GATE(ac_dig_clk, "ac-dig", "pll-audio", + 0x140, BIT(31), CLK_SET_RATE_PARENT); + static SUNXI_CCU_GATE(ac_dig_4x_clk, "ac-dig-4x", "pll-audio-4x", +- 0x140, BIT(30), 0); ++ 0x140, BIT(30), CLK_SET_RATE_PARENT); + static SUNXI_CCU_GATE(avs_clk, "avs", "osc24M", + 0x144, BIT(31), 0); + +diff --git a/drivers/cpuidle/cpuidle-big_little.c b/drivers/cpuidle/cpuidle-big_little.c +index db2ede565f1a..b44476a1b7ad 100644 +--- a/drivers/cpuidle/cpuidle-big_little.c ++++ b/drivers/cpuidle/cpuidle-big_little.c +@@ -167,6 +167,7 @@ static int __init bl_idle_init(void) + { + int ret; + struct device_node *root = of_find_node_by_path("/"); ++ const struct of_device_id *match_id; + + if (!root) + return 
-ENODEV; +@@ -174,7 +175,11 @@ static int __init bl_idle_init(void) + /* + * Initialize the driver just for a compliant set of machines + */ +- if (!of_match_node(compatible_machine_match, root)) ++ match_id = of_match_node(compatible_machine_match, root); ++ ++ of_node_put(root); ++ ++ if (!match_id) + return -ENODEV; + + if (!mcpm_is_available()) +diff --git a/drivers/crypto/ux500/cryp/cryp_core.c b/drivers/crypto/ux500/cryp/cryp_core.c +index 790f7cadc1ed..efebc484e371 100644 +--- a/drivers/crypto/ux500/cryp/cryp_core.c ++++ b/drivers/crypto/ux500/cryp/cryp_core.c +@@ -555,7 +555,7 @@ static int cryp_set_dma_transfer(struct cryp_ctx *ctx, + desc = dmaengine_prep_slave_sg(channel, + ctx->device->dma.sg_src, + ctx->device->dma.sg_src_len, +- direction, DMA_CTRL_ACK); ++ DMA_MEM_TO_DEV, DMA_CTRL_ACK); + break; + + case DMA_FROM_DEVICE: +@@ -579,7 +579,7 @@ static int cryp_set_dma_transfer(struct cryp_ctx *ctx, + desc = dmaengine_prep_slave_sg(channel, + ctx->device->dma.sg_dst, + ctx->device->dma.sg_dst_len, +- direction, ++ DMA_DEV_TO_MEM, + DMA_CTRL_ACK | + DMA_PREP_INTERRUPT); + +diff --git a/drivers/crypto/ux500/hash/hash_core.c b/drivers/crypto/ux500/hash/hash_core.c +index 9acccad26928..17c8e2b28c42 100644 +--- a/drivers/crypto/ux500/hash/hash_core.c ++++ b/drivers/crypto/ux500/hash/hash_core.c +@@ -165,7 +165,7 @@ static int hash_set_dma_transfer(struct hash_ctx *ctx, struct scatterlist *sg, + __func__); + desc = dmaengine_prep_slave_sg(channel, + ctx->device->dma.sg, ctx->device->dma.sg_len, +- direction, DMA_CTRL_ACK | DMA_PREP_INTERRUPT); ++ DMA_MEM_TO_DEV, DMA_CTRL_ACK | DMA_PREP_INTERRUPT); + if (!desc) { + dev_err(ctx->device->dev, + "%s: dmaengine_prep_slave_sg() failed!\n", __func__); +diff --git a/drivers/dma/bcm2835-dma.c b/drivers/dma/bcm2835-dma.c +index 6204cc32d09c..6ba53bbd0e16 100644 +--- a/drivers/dma/bcm2835-dma.c ++++ b/drivers/dma/bcm2835-dma.c +@@ -415,38 +415,32 @@ static void bcm2835_dma_fill_cb_chain_with_sg( + } + } + +-static int 
bcm2835_dma_abort(void __iomem *chan_base) ++static int bcm2835_dma_abort(struct bcm2835_chan *c) + { +- unsigned long cs; ++ void __iomem *chan_base = c->chan_base; + long int timeout = 10000; + +- cs = readl(chan_base + BCM2835_DMA_CS); +- if (!(cs & BCM2835_DMA_ACTIVE)) ++ /* ++ * A zero control block address means the channel is idle. ++ * (The ACTIVE flag in the CS register is not a reliable indicator.) ++ */ ++ if (!readl(chan_base + BCM2835_DMA_ADDR)) + return 0; + + /* Write 0 to the active bit - Pause the DMA */ + writel(0, chan_base + BCM2835_DMA_CS); + + /* Wait for any current AXI transfer to complete */ +- while ((cs & BCM2835_DMA_ISPAUSED) && --timeout) { ++ while ((readl(chan_base + BCM2835_DMA_CS) & ++ BCM2835_DMA_WAITING_FOR_WRITES) && --timeout) + cpu_relax(); +- cs = readl(chan_base + BCM2835_DMA_CS); +- } + +- /* We'll un-pause when we set of our next DMA */ ++ /* Peripheral might be stuck and fail to signal AXI write responses */ + if (!timeout) +- return -ETIMEDOUT; +- +- if (!(cs & BCM2835_DMA_ACTIVE)) +- return 0; +- +- /* Terminate the control block chain */ +- writel(0, chan_base + BCM2835_DMA_NEXTCB); +- +- /* Abort the whole DMA */ +- writel(BCM2835_DMA_ABORT | BCM2835_DMA_ACTIVE, +- chan_base + BCM2835_DMA_CS); ++ dev_err(c->vc.chan.device->dev, ++ "failed to complete outstanding writes\n"); + ++ writel(BCM2835_DMA_RESET, chan_base + BCM2835_DMA_CS); + return 0; + } + +@@ -485,8 +479,15 @@ static irqreturn_t bcm2835_dma_callback(int irq, void *data) + + spin_lock_irqsave(&c->vc.lock, flags); + +- /* Acknowledge interrupt */ +- writel(BCM2835_DMA_INT, c->chan_base + BCM2835_DMA_CS); ++ /* ++ * Clear the INT flag to receive further interrupts. Keep the channel ++ * active in case the descriptor is cyclic or in case the client has ++ * already terminated the descriptor and issued a new one. (May happen ++ * if this IRQ handler is threaded.) If the channel is finished, it ++ * will remain idle despite the ACTIVE flag being set. 
++ */ ++ writel(BCM2835_DMA_INT | BCM2835_DMA_ACTIVE, ++ c->chan_base + BCM2835_DMA_CS); + + d = c->desc; + +@@ -494,11 +495,7 @@ static irqreturn_t bcm2835_dma_callback(int irq, void *data) + if (d->cyclic) { + /* call the cyclic callback */ + vchan_cyclic_callback(&d->vd); +- +- /* Keep the DMA engine running */ +- writel(BCM2835_DMA_ACTIVE, +- c->chan_base + BCM2835_DMA_CS); +- } else { ++ } else if (!readl(c->chan_base + BCM2835_DMA_ADDR)) { + vchan_cookie_complete(&c->desc->vd); + bcm2835_dma_start_desc(c); + } +@@ -796,7 +793,6 @@ static int bcm2835_dma_terminate_all(struct dma_chan *chan) + struct bcm2835_chan *c = to_bcm2835_dma_chan(chan); + struct bcm2835_dmadev *d = to_bcm2835_dma_dev(c->vc.chan.device); + unsigned long flags; +- int timeout = 10000; + LIST_HEAD(head); + + spin_lock_irqsave(&c->vc.lock, flags); +@@ -806,27 +802,11 @@ static int bcm2835_dma_terminate_all(struct dma_chan *chan) + list_del_init(&c->node); + spin_unlock(&d->lock); + +- /* +- * Stop DMA activity: we assume the callback will not be called +- * after bcm_dma_abort() returns (even if it does, it will see +- * c->desc is NULL and exit.) 
+- */ ++ /* stop DMA activity */ + if (c->desc) { + bcm2835_dma_desc_free(&c->desc->vd); + c->desc = NULL; +- bcm2835_dma_abort(c->chan_base); +- +- /* Wait for stopping */ +- while (--timeout) { +- if (!(readl(c->chan_base + BCM2835_DMA_CS) & +- BCM2835_DMA_ACTIVE)) +- break; +- +- cpu_relax(); +- } +- +- if (!timeout) +- dev_err(d->ddev.dev, "DMA transfer could not be terminated\n"); ++ bcm2835_dma_abort(c); + } + + vchan_get_all_descriptors(&c->vc, &head); +diff --git a/drivers/dma/imx-dma.c b/drivers/dma/imx-dma.c +index f681df8f0ed3..cb37730f9272 100644 +--- a/drivers/dma/imx-dma.c ++++ b/drivers/dma/imx-dma.c +@@ -623,7 +623,7 @@ static void imxdma_tasklet(unsigned long data) + { + struct imxdma_channel *imxdmac = (void *)data; + struct imxdma_engine *imxdma = imxdmac->imxdma; +- struct imxdma_desc *desc; ++ struct imxdma_desc *desc, *next_desc; + unsigned long flags; + + spin_lock_irqsave(&imxdma->lock, flags); +@@ -653,10 +653,10 @@ static void imxdma_tasklet(unsigned long data) + list_move_tail(imxdmac->ld_active.next, &imxdmac->ld_free); + + if (!list_empty(&imxdmac->ld_queue)) { +- desc = list_first_entry(&imxdmac->ld_queue, struct imxdma_desc, +- node); ++ next_desc = list_first_entry(&imxdmac->ld_queue, ++ struct imxdma_desc, node); + list_move_tail(imxdmac->ld_queue.next, &imxdmac->ld_active); +- if (imxdma_xfer_desc(desc) < 0) ++ if (imxdma_xfer_desc(next_desc) < 0) + dev_warn(imxdma->dev, "%s: channel: %d couldn't xfer desc\n", + __func__, imxdmac->channel); + } +diff --git a/drivers/dma/xilinx/zynqmp_dma.c b/drivers/dma/xilinx/zynqmp_dma.c +index 5cc8ed31f26b..6d86d05e53aa 100644 +--- a/drivers/dma/xilinx/zynqmp_dma.c ++++ b/drivers/dma/xilinx/zynqmp_dma.c +@@ -159,7 +159,7 @@ struct zynqmp_dma_desc_ll { + u32 ctrl; + u64 nxtdscraddr; + u64 rsvd; +-}; __aligned(64) ++}; + + /** + * struct zynqmp_dma_desc_sw - Per Transaction structure +diff --git a/drivers/firmware/efi/vars.c b/drivers/firmware/efi/vars.c +index 9336ffdf6e2c..fceaafd67ec6 100644 
+--- a/drivers/firmware/efi/vars.c ++++ b/drivers/firmware/efi/vars.c +@@ -318,7 +318,12 @@ EXPORT_SYMBOL_GPL(efivar_variable_is_removable); + static efi_status_t + check_var_size(u32 attributes, unsigned long size) + { +- const struct efivar_operations *fops = __efivars->ops; ++ const struct efivar_operations *fops; ++ ++ if (!__efivars) ++ return EFI_UNSUPPORTED; ++ ++ fops = __efivars->ops; + + if (!fops->query_variable_store) + return EFI_UNSUPPORTED; +@@ -329,7 +334,12 @@ check_var_size(u32 attributes, unsigned long size) + static efi_status_t + check_var_size_nonblocking(u32 attributes, unsigned long size) + { +- const struct efivar_operations *fops = __efivars->ops; ++ const struct efivar_operations *fops; ++ ++ if (!__efivars) ++ return EFI_UNSUPPORTED; ++ ++ fops = __efivars->ops; + + if (!fops->query_variable_store) + return EFI_UNSUPPORTED; +@@ -429,13 +439,18 @@ static void dup_variable_bug(efi_char16_t *str16, efi_guid_t *vendor_guid, + int efivar_init(int (*func)(efi_char16_t *, efi_guid_t, unsigned long, void *), + void *data, bool duplicates, struct list_head *head) + { +- const struct efivar_operations *ops = __efivars->ops; ++ const struct efivar_operations *ops; + unsigned long variable_name_size = 1024; + efi_char16_t *variable_name; + efi_status_t status; + efi_guid_t vendor_guid; + int err = 0; + ++ if (!__efivars) ++ return -EFAULT; ++ ++ ops = __efivars->ops; ++ + variable_name = kzalloc(variable_name_size, GFP_KERNEL); + if (!variable_name) { + printk(KERN_ERR "efivars: Memory allocation failed.\n"); +@@ -583,12 +598,14 @@ static void efivar_entry_list_del_unlock(struct efivar_entry *entry) + */ + int __efivar_entry_delete(struct efivar_entry *entry) + { +- const struct efivar_operations *ops = __efivars->ops; + efi_status_t status; + +- status = ops->set_variable(entry->var.VariableName, +- &entry->var.VendorGuid, +- 0, 0, NULL); ++ if (!__efivars) ++ return -EINVAL; ++ ++ status = __efivars->ops->set_variable(entry->var.VariableName, ++ 
&entry->var.VendorGuid, ++ 0, 0, NULL); + + return efi_status_to_err(status); + } +@@ -607,12 +624,17 @@ EXPORT_SYMBOL_GPL(__efivar_entry_delete); + */ + int efivar_entry_delete(struct efivar_entry *entry) + { +- const struct efivar_operations *ops = __efivars->ops; ++ const struct efivar_operations *ops; + efi_status_t status; + + if (down_interruptible(&efivars_lock)) + return -EINTR; + ++ if (!__efivars) { ++ up(&efivars_lock); ++ return -EINVAL; ++ } ++ ops = __efivars->ops; + status = ops->set_variable(entry->var.VariableName, + &entry->var.VendorGuid, + 0, 0, NULL); +@@ -650,13 +672,19 @@ EXPORT_SYMBOL_GPL(efivar_entry_delete); + int efivar_entry_set(struct efivar_entry *entry, u32 attributes, + unsigned long size, void *data, struct list_head *head) + { +- const struct efivar_operations *ops = __efivars->ops; ++ const struct efivar_operations *ops; + efi_status_t status; + efi_char16_t *name = entry->var.VariableName; + efi_guid_t vendor = entry->var.VendorGuid; + + if (down_interruptible(&efivars_lock)) + return -EINTR; ++ ++ if (!__efivars) { ++ up(&efivars_lock); ++ return -EINVAL; ++ } ++ ops = __efivars->ops; + if (head && efivar_entry_find(name, vendor, head, false)) { + up(&efivars_lock); + return -EEXIST; +@@ -687,12 +715,17 @@ static int + efivar_entry_set_nonblocking(efi_char16_t *name, efi_guid_t vendor, + u32 attributes, unsigned long size, void *data) + { +- const struct efivar_operations *ops = __efivars->ops; ++ const struct efivar_operations *ops; + efi_status_t status; + + if (down_trylock(&efivars_lock)) + return -EBUSY; + ++ if (!__efivars) { ++ up(&efivars_lock); ++ return -EINVAL; ++ } ++ + status = check_var_size_nonblocking(attributes, + size + ucs2_strsize(name, 1024)); + if (status != EFI_SUCCESS) { +@@ -700,6 +733,7 @@ efivar_entry_set_nonblocking(efi_char16_t *name, efi_guid_t vendor, + return -ENOSPC; + } + ++ ops = __efivars->ops; + status = ops->set_variable_nonblocking(name, &vendor, attributes, + size, data); + +@@ -727,9 
+761,13 @@ efivar_entry_set_nonblocking(efi_char16_t *name, efi_guid_t vendor, + int efivar_entry_set_safe(efi_char16_t *name, efi_guid_t vendor, u32 attributes, + bool block, unsigned long size, void *data) + { +- const struct efivar_operations *ops = __efivars->ops; ++ const struct efivar_operations *ops; + efi_status_t status; + ++ if (!__efivars) ++ return -EINVAL; ++ ++ ops = __efivars->ops; + if (!ops->query_variable_store) + return -ENOSYS; + +@@ -829,13 +867,18 @@ EXPORT_SYMBOL_GPL(efivar_entry_find); + */ + int efivar_entry_size(struct efivar_entry *entry, unsigned long *size) + { +- const struct efivar_operations *ops = __efivars->ops; ++ const struct efivar_operations *ops; + efi_status_t status; + + *size = 0; + + if (down_interruptible(&efivars_lock)) + return -EINTR; ++ if (!__efivars) { ++ up(&efivars_lock); ++ return -EINVAL; ++ } ++ ops = __efivars->ops; + status = ops->get_variable(entry->var.VariableName, + &entry->var.VendorGuid, NULL, size, NULL); + up(&efivars_lock); +@@ -861,12 +904,14 @@ EXPORT_SYMBOL_GPL(efivar_entry_size); + int __efivar_entry_get(struct efivar_entry *entry, u32 *attributes, + unsigned long *size, void *data) + { +- const struct efivar_operations *ops = __efivars->ops; + efi_status_t status; + +- status = ops->get_variable(entry->var.VariableName, +- &entry->var.VendorGuid, +- attributes, size, data); ++ if (!__efivars) ++ return -EINVAL; ++ ++ status = __efivars->ops->get_variable(entry->var.VariableName, ++ &entry->var.VendorGuid, ++ attributes, size, data); + + return efi_status_to_err(status); + } +@@ -882,14 +927,19 @@ EXPORT_SYMBOL_GPL(__efivar_entry_get); + int efivar_entry_get(struct efivar_entry *entry, u32 *attributes, + unsigned long *size, void *data) + { +- const struct efivar_operations *ops = __efivars->ops; + efi_status_t status; + + if (down_interruptible(&efivars_lock)) + return -EINTR; +- status = ops->get_variable(entry->var.VariableName, +- &entry->var.VendorGuid, +- attributes, size, data); ++ ++ if 
(!__efivars) { ++ up(&efivars_lock); ++ return -EINVAL; ++ } ++ ++ status = __efivars->ops->get_variable(entry->var.VariableName, ++ &entry->var.VendorGuid, ++ attributes, size, data); + up(&efivars_lock); + + return efi_status_to_err(status); +@@ -921,7 +971,7 @@ EXPORT_SYMBOL_GPL(efivar_entry_get); + int efivar_entry_set_get_size(struct efivar_entry *entry, u32 attributes, + unsigned long *size, void *data, bool *set) + { +- const struct efivar_operations *ops = __efivars->ops; ++ const struct efivar_operations *ops; + efi_char16_t *name = entry->var.VariableName; + efi_guid_t *vendor = &entry->var.VendorGuid; + efi_status_t status; +@@ -940,6 +990,11 @@ int efivar_entry_set_get_size(struct efivar_entry *entry, u32 attributes, + if (down_interruptible(&efivars_lock)) + return -EINTR; + ++ if (!__efivars) { ++ err = -EINVAL; ++ goto out; ++ } ++ + /* + * Ensure that the available space hasn't shrunk below the safe level + */ +@@ -956,6 +1011,8 @@ int efivar_entry_set_get_size(struct efivar_entry *entry, u32 attributes, + } + } + ++ ops = __efivars->ops; ++ + status = ops->set_variable(name, vendor, attributes, *size, data); + if (status != EFI_SUCCESS) { + err = efi_status_to_err(status); +diff --git a/drivers/fpga/altera-cvp.c b/drivers/fpga/altera-cvp.c +index 00e73d28077c..b7558acd1a66 100644 +--- a/drivers/fpga/altera-cvp.c ++++ b/drivers/fpga/altera-cvp.c +@@ -404,6 +404,7 @@ static int altera_cvp_probe(struct pci_dev *pdev, + { + struct altera_cvp_conf *conf; + u16 cmd, val; ++ u32 regval; + int ret; + + /* +@@ -417,6 +418,14 @@ static int altera_cvp_probe(struct pci_dev *pdev, + return -ENODEV; + } + ++ pci_read_config_dword(pdev, VSE_CVP_STATUS, ®val); ++ if (!(regval & VSE_CVP_STATUS_CVP_EN)) { ++ dev_err(&pdev->dev, ++ "CVP is disabled for this device: CVP_STATUS Reg 0x%x\n", ++ regval); ++ return -ENODEV; ++ } ++ + conf = devm_kzalloc(&pdev->dev, sizeof(*conf), GFP_KERNEL); + if (!conf) + return -ENOMEM; +diff --git a/drivers/gpu/drm/drm_atomic_helper.c 
b/drivers/gpu/drm/drm_atomic_helper.c +index 1f08d597b87a..d05ed0521e20 100644 +--- a/drivers/gpu/drm/drm_atomic_helper.c ++++ b/drivers/gpu/drm/drm_atomic_helper.c +@@ -2899,7 +2899,7 @@ EXPORT_SYMBOL(drm_atomic_helper_suspend); + int drm_atomic_helper_commit_duplicated_state(struct drm_atomic_state *state, + struct drm_modeset_acquire_ctx *ctx) + { +- int i; ++ int i, ret; + struct drm_plane *plane; + struct drm_plane_state *new_plane_state; + struct drm_connector *connector; +@@ -2918,7 +2918,11 @@ int drm_atomic_helper_commit_duplicated_state(struct drm_atomic_state *state, + for_each_new_connector_in_state(state, connector, new_conn_state, i) + state->connectors[i].old_state = connector->state; + +- return drm_atomic_commit(state); ++ ret = drm_atomic_commit(state); ++ ++ state->acquire_ctx = NULL; ++ ++ return ret; + } + EXPORT_SYMBOL(drm_atomic_helper_commit_duplicated_state); + +diff --git a/drivers/gpu/drm/drm_bufs.c b/drivers/gpu/drm/drm_bufs.c +index 1ee84dd802d4..0f05b8d8fefa 100644 +--- a/drivers/gpu/drm/drm_bufs.c ++++ b/drivers/gpu/drm/drm_bufs.c +@@ -36,6 +36,8 @@ + #include + #include "drm_legacy.h" + ++#include ++ + static struct drm_map_list *drm_find_matching_map(struct drm_device *dev, + struct drm_local_map *map) + { +@@ -1417,6 +1419,7 @@ int drm_legacy_freebufs(struct drm_device *dev, void *data, + idx, dma->buf_count - 1); + return -EINVAL; + } ++ idx = array_index_nospec(idx, dma->buf_count); + buf = dma->buflist[idx]; + if (buf->file_priv != file_priv) { + DRM_ERROR("Process %d freeing buffer not owned\n", +diff --git a/drivers/gpu/drm/rockchip/cdn-dp-reg.c b/drivers/gpu/drm/rockchip/cdn-dp-reg.c +index b14d211f6c21..0ed7e91471f6 100644 +--- a/drivers/gpu/drm/rockchip/cdn-dp-reg.c ++++ b/drivers/gpu/drm/rockchip/cdn-dp-reg.c +@@ -147,7 +147,7 @@ static int cdn_dp_mailbox_validate_receive(struct cdn_dp_device *dp, + } + + static int cdn_dp_mailbox_read_receive(struct cdn_dp_device *dp, +- u8 *buff, u8 buff_size) ++ u8 *buff, u16 buff_size) 
+ { + u32 i; + int ret; +diff --git a/drivers/gpu/drm/vc4/vc4_plane.c b/drivers/gpu/drm/vc4/vc4_plane.c +index 5bd3c2ef0067..6277a3f2d5d1 100644 +--- a/drivers/gpu/drm/vc4/vc4_plane.c ++++ b/drivers/gpu/drm/vc4/vc4_plane.c +@@ -347,12 +347,14 @@ static int vc4_plane_setup_clipping_and_scaling(struct drm_plane_state *state) + vc4_get_scaling_mode(vc4_state->src_h[1], + vc4_state->crtc_h); + +- /* YUV conversion requires that horizontal scaling be enabled, +- * even on a plane that's otherwise 1:1. Looks like only PPF +- * works in that case, so let's pick that one. ++ /* YUV conversion requires that horizontal scaling be enabled ++ * on the UV plane even if vc4_get_scaling_mode() returned ++ * VC4_SCALING_NONE (which can happen when the down-scaling ++ * ratio is 0.5). Let's force it to VC4_SCALING_PPF in this ++ * case. + */ +- if (vc4_state->is_unity) +- vc4_state->x_scaling[0] = VC4_SCALING_PPF; ++ if (vc4_state->x_scaling[1] == VC4_SCALING_NONE) ++ vc4_state->x_scaling[1] = VC4_SCALING_PPF; + } else { + vc4_state->is_yuv = false; + vc4_state->x_scaling[1] = VC4_SCALING_NONE; +diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c +index 2524ff116f00..81c7ab10c083 100644 +--- a/drivers/gpu/drm/vgem/vgem_drv.c ++++ b/drivers/gpu/drm/vgem/vgem_drv.c +@@ -472,31 +472,31 @@ static int __init vgem_init(void) + if (!vgem_device) + return -ENOMEM; + +- ret = drm_dev_init(&vgem_device->drm, &vgem_driver, NULL); +- if (ret) +- goto out_free; +- + vgem_device->platform = + platform_device_register_simple("vgem", -1, NULL, 0); + if (IS_ERR(vgem_device->platform)) { + ret = PTR_ERR(vgem_device->platform); +- goto out_fini; ++ goto out_free; + } + + dma_coerce_mask_and_coherent(&vgem_device->platform->dev, + DMA_BIT_MASK(64)); ++ ret = drm_dev_init(&vgem_device->drm, &vgem_driver, ++ &vgem_device->platform->dev); ++ if (ret) ++ goto out_unregister; + + /* Final step: expose the device/driver to userspace */ + ret = drm_dev_register(&vgem_device->drm, 
0); + if (ret) +- goto out_unregister; ++ goto out_fini; + + return 0; + +-out_unregister: +- platform_device_unregister(vgem_device->platform); + out_fini: + drm_dev_fini(&vgem_device->drm); ++out_unregister: ++ platform_device_unregister(vgem_device->platform); + out_free: + kfree(vgem_device); + return ret; +diff --git a/drivers/gpu/ipu-v3/ipu-image-convert.c b/drivers/gpu/ipu-v3/ipu-image-convert.c +index 524a717ab28e..a5e33d58e02f 100644 +--- a/drivers/gpu/ipu-v3/ipu-image-convert.c ++++ b/drivers/gpu/ipu-v3/ipu-image-convert.c +@@ -1518,7 +1518,7 @@ unlock: + EXPORT_SYMBOL_GPL(ipu_image_convert_queue); + + /* Abort any active or pending conversions for this context */ +-void ipu_image_convert_abort(struct ipu_image_convert_ctx *ctx) ++static void __ipu_image_convert_abort(struct ipu_image_convert_ctx *ctx) + { + struct ipu_image_convert_chan *chan = ctx->chan; + struct ipu_image_convert_priv *priv = chan->priv; +@@ -1545,7 +1545,7 @@ void ipu_image_convert_abort(struct ipu_image_convert_ctx *ctx) + + need_abort = (run_count || active_run); + +- ctx->aborting = need_abort; ++ ctx->aborting = true; + + spin_unlock_irqrestore(&chan->irqlock, flags); + +@@ -1566,7 +1566,11 @@ void ipu_image_convert_abort(struct ipu_image_convert_ctx *ctx) + dev_warn(priv->ipu->dev, "%s: timeout\n", __func__); + force_abort(ctx); + } ++} + ++void ipu_image_convert_abort(struct ipu_image_convert_ctx *ctx) ++{ ++ __ipu_image_convert_abort(ctx); + ctx->aborting = false; + } + EXPORT_SYMBOL_GPL(ipu_image_convert_abort); +@@ -1580,7 +1584,7 @@ void ipu_image_convert_unprepare(struct ipu_image_convert_ctx *ctx) + bool put_res; + + /* make sure no runs are hanging around */ +- ipu_image_convert_abort(ctx); ++ __ipu_image_convert_abort(ctx); + + dev_dbg(priv->ipu->dev, "%s: task %u: removing ctx %p\n", __func__, + chan->ic_task, ctx); +diff --git a/drivers/hid/hid-lenovo.c b/drivers/hid/hid-lenovo.c +index 643b6eb54442..eacc76d2ab96 100644 +--- a/drivers/hid/hid-lenovo.c ++++ 
b/drivers/hid/hid-lenovo.c +@@ -743,7 +743,9 @@ static int lenovo_probe_tpkbd(struct hid_device *hdev) + data_pointer->led_mute.brightness_get = lenovo_led_brightness_get_tpkbd; + data_pointer->led_mute.brightness_set = lenovo_led_brightness_set_tpkbd; + data_pointer->led_mute.dev = dev; +- led_classdev_register(dev, &data_pointer->led_mute); ++ ret = led_classdev_register(dev, &data_pointer->led_mute); ++ if (ret < 0) ++ goto err; + + data_pointer->led_micmute.name = name_micmute; + data_pointer->led_micmute.brightness_get = +@@ -751,7 +753,11 @@ static int lenovo_probe_tpkbd(struct hid_device *hdev) + data_pointer->led_micmute.brightness_set = + lenovo_led_brightness_set_tpkbd; + data_pointer->led_micmute.dev = dev; +- led_classdev_register(dev, &data_pointer->led_micmute); ++ ret = led_classdev_register(dev, &data_pointer->led_micmute); ++ if (ret < 0) { ++ led_classdev_unregister(&data_pointer->led_mute); ++ goto err; ++ } + + lenovo_features_set_tpkbd(hdev); + +diff --git a/drivers/hwmon/lm80.c b/drivers/hwmon/lm80.c +index 08e3945a6fbf..0e30fa00204c 100644 +--- a/drivers/hwmon/lm80.c ++++ b/drivers/hwmon/lm80.c +@@ -360,9 +360,11 @@ static ssize_t set_fan_div(struct device *dev, struct device_attribute *attr, + struct i2c_client *client = data->client; + unsigned long min, val; + u8 reg; +- int err = kstrtoul(buf, 10, &val); +- if (err < 0) +- return err; ++ int rv; ++ ++ rv = kstrtoul(buf, 10, &val); ++ if (rv < 0) ++ return rv; + + /* Save fan_min */ + mutex_lock(&data->update_lock); +@@ -390,8 +392,11 @@ static ssize_t set_fan_div(struct device *dev, struct device_attribute *attr, + return -EINVAL; + } + +- reg = (lm80_read_value(client, LM80_REG_FANDIV) & +- ~(3 << (2 * (nr + 1)))) | (data->fan_div[nr] << (2 * (nr + 1))); ++ rv = lm80_read_value(client, LM80_REG_FANDIV); ++ if (rv < 0) ++ return rv; ++ reg = (rv & ~(3 << (2 * (nr + 1)))) ++ | (data->fan_div[nr] << (2 * (nr + 1))); + lm80_write_value(client, LM80_REG_FANDIV, reg); + + /* Restore fan_min */ 
+@@ -623,6 +628,7 @@ static int lm80_probe(struct i2c_client *client, + struct device *dev = &client->dev; + struct device *hwmon_dev; + struct lm80_data *data; ++ int rv; + + data = devm_kzalloc(dev, sizeof(struct lm80_data), GFP_KERNEL); + if (!data) +@@ -635,8 +641,14 @@ static int lm80_probe(struct i2c_client *client, + lm80_init_client(client); + + /* A few vars need to be filled upon startup */ +- data->fan[f_min][0] = lm80_read_value(client, LM80_REG_FAN_MIN(1)); +- data->fan[f_min][1] = lm80_read_value(client, LM80_REG_FAN_MIN(2)); ++ rv = lm80_read_value(client, LM80_REG_FAN_MIN(1)); ++ if (rv < 0) ++ return rv; ++ data->fan[f_min][0] = rv; ++ rv = lm80_read_value(client, LM80_REG_FAN_MIN(2)); ++ if (rv < 0) ++ return rv; ++ data->fan[f_min][1] = rv; + + hwmon_dev = devm_hwmon_device_register_with_groups(dev, client->name, + data, lm80_groups); +diff --git a/drivers/i2c/busses/i2c-axxia.c b/drivers/i2c/busses/i2c-axxia.c +index deea13838648..30d80ce0bde3 100644 +--- a/drivers/i2c/busses/i2c-axxia.c ++++ b/drivers/i2c/busses/i2c-axxia.c +@@ -296,22 +296,7 @@ static irqreturn_t axxia_i2c_isr(int irq, void *_dev) + i2c_int_disable(idev, MST_STATUS_TFL); + } + +- if (status & MST_STATUS_SCC) { +- /* Stop completed */ +- i2c_int_disable(idev, ~MST_STATUS_TSS); +- complete(&idev->msg_complete); +- } else if (status & MST_STATUS_SNS) { +- /* Transfer done */ +- i2c_int_disable(idev, ~MST_STATUS_TSS); +- if (i2c_m_rd(idev->msg) && idev->msg_xfrd < idev->msg->len) +- axxia_i2c_empty_rx_fifo(idev); +- complete(&idev->msg_complete); +- } else if (status & MST_STATUS_TSS) { +- /* Transfer timeout */ +- idev->msg_err = -ETIMEDOUT; +- i2c_int_disable(idev, ~MST_STATUS_TSS); +- complete(&idev->msg_complete); +- } else if (unlikely(status & MST_STATUS_ERR)) { ++ if (unlikely(status & MST_STATUS_ERR)) { + /* Transfer error */ + i2c_int_disable(idev, ~0); + if (status & MST_STATUS_AL) +@@ -328,6 +313,21 @@ static irqreturn_t axxia_i2c_isr(int irq, void *_dev) + 
+ readl(idev->base + MST_TX_BYTES_XFRD),
+ readl(idev->base + MST_TX_XFER));
+ complete(&idev->msg_complete);
++ } else if (status & MST_STATUS_SCC) {
++ /* Stop completed */
++ i2c_int_disable(idev, ~MST_STATUS_TSS);
++ complete(&idev->msg_complete);
++ } else if (status & MST_STATUS_SNS) {
++ /* Transfer done */
++ i2c_int_disable(idev, ~MST_STATUS_TSS);
++ if (i2c_m_rd(idev->msg) && idev->msg_xfrd < idev->msg->len)
++ axxia_i2c_empty_rx_fifo(idev);
++ complete(&idev->msg_complete);
++ } else if (status & MST_STATUS_TSS) {
++ /* Transfer timeout */
++ idev->msg_err = -ETIMEDOUT;
++ i2c_int_disable(idev, ~MST_STATUS_TSS);
++ complete(&idev->msg_complete);
+ }
+
+ out:
+diff --git a/drivers/i2c/busses/i2c-sh_mobile.c b/drivers/i2c/busses/i2c-sh_mobile.c
+index 6f2aaeb7c4fa..338344e76e02 100644
+--- a/drivers/i2c/busses/i2c-sh_mobile.c
++++ b/drivers/i2c/busses/i2c-sh_mobile.c
+@@ -836,6 +836,7 @@ static const struct of_device_id sh_mobile_i2c_dt_ids[] = {
+ { .compatible = "renesas,rcar-gen2-iic", .data = &fast_clock_dt_config },
+ { .compatible = "renesas,iic-r8a7795", .data = &fast_clock_dt_config },
+ { .compatible = "renesas,rcar-gen3-iic", .data = &fast_clock_dt_config },
++ { .compatible = "renesas,iic-r8a77990", .data = &fast_clock_dt_config },
+ { .compatible = "renesas,iic-sh73a0", .data = &fast_clock_dt_config },
+ { .compatible = "renesas,rmobile-iic", .data = &default_dt_config },
+ {},
+diff --git a/drivers/iio/accel/kxcjk-1013.c b/drivers/iio/accel/kxcjk-1013.c
+index 3f968c46e667..784636800361 100644
+--- a/drivers/iio/accel/kxcjk-1013.c
++++ b/drivers/iio/accel/kxcjk-1013.c
+@@ -1393,6 +1393,7 @@ static const struct acpi_device_id kx_acpi_match[] = {
+ {"KXCJ1008", KXCJ91008},
+ {"KXCJ9000", KXCJ91008},
+ {"KIOX000A", KXCJ91008},
++ {"KIOX010A", KXCJ91008}, /* KXCJ91008 inside the display of a 2-in-1 */
+ {"KXTJ1009", KXTJ21009},
+ {"SMO8500", KXCJ91008},
+ { },
+diff --git a/drivers/iio/adc/meson_saradc.c b/drivers/iio/adc/meson_saradc.c
+index 11484cb38b84..2515badf8b28 100644
+--- a/drivers/iio/adc/meson_saradc.c
++++ b/drivers/iio/adc/meson_saradc.c
+@@ -583,8 +583,11 @@ static int meson_sar_adc_clk_init(struct iio_dev *indio_dev,
+ struct clk_init_data init;
+ const char *clk_parents[1];
+
+- init.name = devm_kasprintf(&indio_dev->dev, GFP_KERNEL, "%pOF#adc_div",
+- indio_dev->dev.of_node);
++ init.name = devm_kasprintf(&indio_dev->dev, GFP_KERNEL, "%s#adc_div",
++ dev_name(indio_dev->dev.parent));
++ if (!init.name)
++ return -ENOMEM;
++
+ init.flags = 0;
+ init.ops = &clk_divider_ops;
+ clk_parents[0] = __clk_get_name(priv->clkin);
+@@ -602,8 +605,11 @@ static int meson_sar_adc_clk_init(struct iio_dev *indio_dev,
+ if (WARN_ON(IS_ERR(priv->adc_div_clk)))
+ return PTR_ERR(priv->adc_div_clk);
+
+- init.name = devm_kasprintf(&indio_dev->dev, GFP_KERNEL, "%pOF#adc_en",
+- indio_dev->dev.of_node);
++ init.name = devm_kasprintf(&indio_dev->dev, GFP_KERNEL, "%s#adc_en",
++ dev_name(indio_dev->dev.parent));
++ if (!init.name)
++ return -ENOMEM;
++
+ init.flags = CLK_SET_RATE_PARENT;
+ init.ops = &clk_gate_ops;
+ clk_parents[0] = __clk_get_name(priv->adc_div_clk);
+diff --git a/drivers/infiniband/hw/hfi1/rc.c b/drivers/infiniband/hw/hfi1/rc.c
+index 818bac1a4056..d3b8cb92fd6d 100644
+--- a/drivers/infiniband/hw/hfi1/rc.c
++++ b/drivers/infiniband/hw/hfi1/rc.c
+@@ -1162,6 +1162,7 @@ void hfi1_rc_send_complete(struct rvt_qp *qp, struct hfi1_opa_header *opah)
+ if (cmp_psn(wqe->lpsn, qp->s_sending_psn) >= 0 &&
+ cmp_psn(qp->s_sending_psn, qp->s_sending_hpsn) <= 0)
+ break;
++ rvt_qp_wqe_unreserve(qp, wqe);
+ s_last = qp->s_last;
+ trace_hfi1_qp_send_completion(qp, wqe, s_last);
+ if (++s_last >= qp->s_size)
+@@ -1214,6 +1215,7 @@ static struct rvt_swqe *do_rc_completion(struct rvt_qp *qp,
+ u32 s_last;
+
+ rvt_put_swqe(wqe);
++ rvt_qp_wqe_unreserve(qp, wqe);
+ s_last = qp->s_last;
+ trace_hfi1_qp_send_completion(qp, wqe, s_last);
+ if (++s_last >= qp->s_size)
+diff --git a/drivers/infiniband/hw/hfi1/ruc.c b/drivers/infiniband/hw/hfi1/ruc.c
+index 5866ccc0fc21..e8aaae4bd911 100644
+--- a/drivers/infiniband/hw/hfi1/ruc.c
++++ b/drivers/infiniband/hw/hfi1/ruc.c
+@@ -440,6 +440,8 @@ send:
+ goto op_err;
+ if (!ret)
+ goto rnr_nak;
++ if (wqe->length > qp->r_len)
++ goto inv_err;
+ break;
+
+ case IB_WR_RDMA_WRITE_WITH_IMM:
+@@ -607,7 +609,10 @@ op_err:
+ goto err;
+
+ inv_err:
+- send_status = IB_WC_REM_INV_REQ_ERR;
++ send_status =
++ sqp->ibqp.qp_type == IB_QPT_RC ?
++ IB_WC_REM_INV_REQ_ERR :
++ IB_WC_SUCCESS;
+ wc.status = IB_WC_LOC_QP_OP_ERR;
+ goto err;
+
+diff --git a/drivers/infiniband/hw/qib/qib_ruc.c b/drivers/infiniband/hw/qib/qib_ruc.c
+index 53efbb0b40c4..dd812ad0d09f 100644
+--- a/drivers/infiniband/hw/qib/qib_ruc.c
++++ b/drivers/infiniband/hw/qib/qib_ruc.c
+@@ -425,6 +425,8 @@ again:
+ goto op_err;
+ if (!ret)
+ goto rnr_nak;
++ if (wqe->length > qp->r_len)
++ goto inv_err;
+ break;
+
+ case IB_WR_RDMA_WRITE_WITH_IMM:
+@@ -585,7 +587,10 @@ op_err:
+ goto err;
+
+ inv_err:
+- send_status = IB_WC_REM_INV_REQ_ERR;
++ send_status =
++ sqp->ibqp.qp_type == IB_QPT_RC ?
++ IB_WC_REM_INV_REQ_ERR :
++ IB_WC_SUCCESS;
+ wc.status = IB_WC_LOC_QP_OP_ERR;
+ goto err;
+
+diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
+index efa6cd2500b9..766103ea237e 100644
+--- a/drivers/iommu/amd_iommu.c
++++ b/drivers/iommu/amd_iommu.c
+@@ -442,7 +442,14 @@ static int iommu_init_device(struct device *dev)
+
+ dev_data->alias = get_alias(dev);
+
+- if (dev_is_pci(dev) && pci_iommuv2_capable(to_pci_dev(dev))) {
++ /*
++ * By default we use passthrough mode for IOMMUv2 capable device.
++ * But if amd_iommu=force_isolation is set (e.g. to debug DMA to
++ * invalid address), we ignore the capability for the device so
++ * it'll be forced to go into translation mode.
++ */
++ if ((iommu_pass_through || !amd_iommu_force_isolation) &&
++ dev_is_pci(dev) && pci_iommuv2_capable(to_pci_dev(dev))) {
+ struct amd_iommu *iommu;
+
+ iommu = amd_iommu_rlookup_table[dev_data->devid];
+diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
+index 26e99c03390f..09eb258a9a7d 100644
+--- a/drivers/iommu/arm-smmu-v3.c
++++ b/drivers/iommu/arm-smmu-v3.c
+@@ -730,7 +730,13 @@ static void queue_inc_cons(struct arm_smmu_queue *q)
+ u32 cons = (Q_WRP(q, q->cons) | Q_IDX(q, q->cons)) + 1;
+
+ q->cons = Q_OVF(q, q->cons) | Q_WRP(q, cons) | Q_IDX(q, cons);
+- writel(q->cons, q->cons_reg);
++
++ /*
++ * Ensure that all CPU accesses (reads and writes) to the queue
++ * are complete before we update the cons pointer.
++ */
++ mb();
++ writel_relaxed(q->cons, q->cons_reg);
+ }
+
+ static int queue_sync_prod(struct arm_smmu_queue *q)
+diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
+index 15b5856475fc..01a6a0ea2a4f 100644
+--- a/drivers/iommu/arm-smmu.c
++++ b/drivers/iommu/arm-smmu.c
+@@ -117,6 +117,7 @@ enum arm_smmu_implementation {
+ GENERIC_SMMU,
+ ARM_MMU500,
+ CAVIUM_SMMUV2,
++ QCOM_SMMUV2,
+ };
+
+ /* Until ACPICA headers cover IORT rev. C */
+@@ -1910,6 +1911,7 @@ ARM_SMMU_MATCH_DATA(smmu_generic_v2, ARM_SMMU_V2, GENERIC_SMMU);
+ ARM_SMMU_MATCH_DATA(arm_mmu401, ARM_SMMU_V1_64K, GENERIC_SMMU);
+ ARM_SMMU_MATCH_DATA(arm_mmu500, ARM_SMMU_V2, ARM_MMU500);
+ ARM_SMMU_MATCH_DATA(cavium_smmuv2, ARM_SMMU_V2, CAVIUM_SMMUV2);
++ARM_SMMU_MATCH_DATA(qcom_smmuv2, ARM_SMMU_V2, QCOM_SMMUV2);
+
+ static const struct of_device_id arm_smmu_of_match[] = {
+ { .compatible = "arm,smmu-v1", .data = &smmu_generic_v1 },
+@@ -1918,6 +1920,7 @@ static const struct of_device_id arm_smmu_of_match[] = {
+ { .compatible = "arm,mmu-401", .data = &arm_mmu401 },
+ { .compatible = "arm,mmu-500", .data = &arm_mmu500 },
+ { .compatible = "cavium,smmu-v2", .data = &cavium_smmuv2 },
++ { .compatible = "qcom,smmu-v2", .data = &qcom_smmuv2 },
+ { },
+ };
+ MODULE_DEVICE_TABLE(of, arm_smmu_of_match);
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index 7638ca03fb1f..d8ecc90ed1b5 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -87,9 +87,14 @@ struct its_baser {
+ * The ITS structure - contains most of the infrastructure, with the
+ * top-level MSI domain, the command queue, the collections, and the
+ * list of devices writing to it.
++ *
++ * dev_alloc_lock has to be taken for device allocations, while the
++ * spinlock must be taken to parse data structures such as the device
++ * list.
+ */
+ struct its_node {
+ raw_spinlock_t lock;
++ struct mutex dev_alloc_lock;
+ struct list_head entry;
+ void __iomem *base;
+ phys_addr_t phys_base;
+@@ -138,6 +143,7 @@ struct its_device {
+ void *itt;
+ u32 nr_ites;
+ u32 device_id;
++ bool shared;
+ };
+
+ static struct {
+@@ -2109,6 +2115,7 @@ static int its_msi_prepare(struct irq_domain *domain, struct device *dev,
+ struct its_device *its_dev;
+ struct msi_domain_info *msi_info;
+ u32 dev_id;
++ int err = 0;
+
+ /*
+ * We ignore "dev" entierely, and rely on the dev_id that has
+@@ -2131,6 +2138,7 @@ static int its_msi_prepare(struct irq_domain *domain, struct device *dev,
+ return -EINVAL;
+ }
+
++ mutex_lock(&its->dev_alloc_lock);
+ its_dev = its_find_device(its, dev_id);
+ if (its_dev) {
+ /*
+@@ -2138,18 +2146,22 @@ static int its_msi_prepare(struct irq_domain *domain, struct device *dev,
+ * another alias (PCI bridge of some sort). No need to
+ * create the device.
+ */
++ its_dev->shared = true;
+ pr_debug("Reusing ITT for devID %x\n", dev_id);
+ goto out;
+ }
+
+ its_dev = its_create_device(its, dev_id, nvec, true);
+- if (!its_dev)
+- return -ENOMEM;
++ if (!its_dev) {
++ err = -ENOMEM;
++ goto out;
++ }
+
+ pr_debug("ITT %d entries, %d bits\n", nvec, ilog2(nvec));
+ out:
++ mutex_unlock(&its->dev_alloc_lock);
+ info->scratchpad[0].ptr = its_dev;
+- return 0;
++ return err;
+ }
+
+ static struct msi_domain_ops its_msi_domain_ops = {
+@@ -2252,6 +2264,7 @@ static void its_irq_domain_free(struct irq_domain *domain, unsigned int virq,
+ {
+ struct irq_data *d = irq_domain_get_irq_data(domain, virq);
+ struct its_device *its_dev = irq_data_get_irq_chip_data(d);
++ struct its_node *its = its_dev->its;
+ int i;
+
+ for (i = 0; i < nr_irqs; i++) {
+@@ -2266,8 +2279,14 @@ static void its_irq_domain_free(struct irq_domain *domain, unsigned int virq,
+ irq_domain_reset_irq_data(data);
+ }
+
+- /* If all interrupts have been freed, start mopping the floor */
+- if (bitmap_empty(its_dev->event_map.lpi_map,
++ mutex_lock(&its->dev_alloc_lock);
++
++ /*
++ * If all interrupts have been freed, start mopping the
++ * floor. This is conditionned on the device not being shared.
++ */
++ if (!its_dev->shared &&
++ bitmap_empty(its_dev->event_map.lpi_map,
+ its_dev->event_map.nr_lpis)) {
+ its_lpi_free_chunks(its_dev->event_map.lpi_map,
+ its_dev->event_map.lpi_base,
+@@ -2279,6 +2298,8 @@ static void its_irq_domain_free(struct irq_domain *domain, unsigned int virq,
+ its_free_device(its_dev);
+ }
+
++ mutex_unlock(&its->dev_alloc_lock);
++
+ irq_domain_free_irqs_parent(domain, virq, nr_irqs);
+ }
+
+@@ -2966,6 +2987,7 @@ static int __init its_probe_one(struct resource *res,
+ }
+
+ raw_spin_lock_init(&its->lock);
++ mutex_init(&its->dev_alloc_lock);
+ INIT_LIST_HEAD(&its->entry);
+ INIT_LIST_HEAD(&its->its_device_list);
+ typer = gic_read_typer(its_base + GITS_TYPER);
+diff --git a/drivers/isdn/hisax/hfc_pci.c b/drivers/isdn/hisax/hfc_pci.c
+index f9ca35cc32b1..b42d27a4c950 100644
+--- a/drivers/isdn/hisax/hfc_pci.c
++++ b/drivers/isdn/hisax/hfc_pci.c
+@@ -1169,11 +1169,13 @@ HFCPCI_l1hw(struct PStack *st, int pr, void *arg)
+ if (cs->debug & L1_DEB_LAPD)
+ debugl1(cs, "-> PH_REQUEST_PULL");
+ #endif
++ spin_lock_irqsave(&cs->lock, flags);
+ if (!cs->tx_skb) {
+ test_and_clear_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
+ st->l1.l1l2(st, PH_PULL | CONFIRM, NULL);
+ } else
+ test_and_set_bit(FLG_L1_PULL_REQ, &st->l1.Flags);
++ spin_unlock_irqrestore(&cs->lock, flags);
+ break;
+ case (HW_RESET | REQUEST):
+ spin_lock_irqsave(&cs->lock, flags);
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index 52ddfa0fca94..2ce079a0b0bd 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -1190,7 +1190,9 @@ static void raid10_read_request(struct mddev *mddev, struct bio *bio,
+ struct bio *split = bio_split(bio, max_sectors,
+ gfp, conf->bio_split);
+ bio_chain(split, bio);
++ allow_barrier(conf);
+ generic_make_request(bio);
++ wait_barrier(conf);
+ bio = split;
+ r10_bio->master_bio = bio;
+ r10_bio->sectors = max_sectors;
+@@ -1479,7 +1481,9 @@ retry_write:
+ struct bio *split = bio_split(bio, r10_bio->sectors,
+ GFP_NOIO, conf->bio_split);
+ bio_chain(split, bio);
++ allow_barrier(conf);
+ generic_make_request(bio);
++ wait_barrier(conf);
+ bio = split;
+ r10_bio->master_bio = bio;
+ }
+diff --git a/drivers/media/i2c/ad9389b.c b/drivers/media/i2c/ad9389b.c
+index a056d6cdaaaa..f0b200ae2127 100644
+--- a/drivers/media/i2c/ad9389b.c
++++ b/drivers/media/i2c/ad9389b.c
+@@ -590,7 +590,7 @@ static const struct v4l2_dv_timings_cap ad9389b_timings_cap = {
+ .type = V4L2_DV_BT_656_1120,
+ /* keep this initialization for compatibility with GCC < 4.4.6 */
+ .reserved = { 0 },
+- V4L2_INIT_BT_TIMINGS(0, 1920, 0, 1200, 25000000, 170000000,
++ V4L2_INIT_BT_TIMINGS(640, 1920, 350, 1200, 25000000, 170000000,
+ V4L2_DV_BT_STD_CEA861 | V4L2_DV_BT_STD_DMT |
+ V4L2_DV_BT_STD_GTF | V4L2_DV_BT_STD_CVT,
+ V4L2_DV_BT_CAP_PROGRESSIVE | V4L2_DV_BT_CAP_REDUCED_BLANKING |
+diff --git a/drivers/media/i2c/adv7511.c b/drivers/media/i2c/adv7511.c
+index 2817bafc67bf..80c20404334a 100644
+--- a/drivers/media/i2c/adv7511.c
++++ b/drivers/media/i2c/adv7511.c
+@@ -142,7 +142,7 @@ static const struct v4l2_dv_timings_cap adv7511_timings_cap = {
+ .type = V4L2_DV_BT_656_1120,
+ /* keep this initialization for compatibility with GCC < 4.4.6 */
+ .reserved = { 0 },
+- V4L2_INIT_BT_TIMINGS(0, ADV7511_MAX_WIDTH, 0, ADV7511_MAX_HEIGHT,
++ V4L2_INIT_BT_TIMINGS(640, ADV7511_MAX_WIDTH, 350, ADV7511_MAX_HEIGHT,
+ ADV7511_MIN_PIXELCLOCK, ADV7511_MAX_PIXELCLOCK,
+ V4L2_DV_BT_STD_CEA861 | V4L2_DV_BT_STD_DMT |
+ V4L2_DV_BT_STD_GTF | V4L2_DV_BT_STD_CVT,
+diff --git a/drivers/media/i2c/adv7604.c b/drivers/media/i2c/adv7604.c
+index f289b8aca1da..d2108aad3c65 100644
+--- a/drivers/media/i2c/adv7604.c
++++ b/drivers/media/i2c/adv7604.c
+@@ -778,7 +778,7 @@ static const struct v4l2_dv_timings_cap adv7604_timings_cap_analog = {
+ .type = V4L2_DV_BT_656_1120,
+ /* keep this initialization for compatibility with GCC < 4.4.6 */
+ .reserved = { 0 },
+- V4L2_INIT_BT_TIMINGS(0, 1920, 0, 1200, 25000000, 170000000,
++ V4L2_INIT_BT_TIMINGS(640, 1920, 350, 1200, 25000000, 170000000,
+ V4L2_DV_BT_STD_CEA861 | V4L2_DV_BT_STD_DMT |
+ V4L2_DV_BT_STD_GTF | V4L2_DV_BT_STD_CVT,
+ V4L2_DV_BT_CAP_PROGRESSIVE | V4L2_DV_BT_CAP_REDUCED_BLANKING |
+@@ -789,7 +789,7 @@ static const struct v4l2_dv_timings_cap adv76xx_timings_cap_digital = {
+ .type = V4L2_DV_BT_656_1120,
+ /* keep this initialization for compatibility with GCC < 4.4.6 */
+ .reserved = { 0 },
+- V4L2_INIT_BT_TIMINGS(0, 1920, 0, 1200, 25000000, 225000000,
++ V4L2_INIT_BT_TIMINGS(640, 1920, 350, 1200, 25000000, 225000000,
+ V4L2_DV_BT_STD_CEA861 | V4L2_DV_BT_STD_DMT |
+ V4L2_DV_BT_STD_GTF | V4L2_DV_BT_STD_CVT,
+ V4L2_DV_BT_CAP_PROGRESSIVE | V4L2_DV_BT_CAP_REDUCED_BLANKING |
+diff --git a/drivers/media/i2c/adv7842.c b/drivers/media/i2c/adv7842.c
+index 65f34e7e146f..f9c23173c9fa 100644
+--- a/drivers/media/i2c/adv7842.c
++++ b/drivers/media/i2c/adv7842.c
+@@ -676,7 +676,7 @@ static const struct v4l2_dv_timings_cap adv7842_timings_cap_analog = {
+ .type = V4L2_DV_BT_656_1120,
+ /* keep this initialization for compatibility with GCC < 4.4.6 */
+ .reserved = { 0 },
+- V4L2_INIT_BT_TIMINGS(0, 1920, 0, 1200, 25000000, 170000000,
++ V4L2_INIT_BT_TIMINGS(640, 1920, 350, 1200, 25000000, 170000000,
+ V4L2_DV_BT_STD_CEA861 | V4L2_DV_BT_STD_DMT |
+ V4L2_DV_BT_STD_GTF | V4L2_DV_BT_STD_CVT,
+ V4L2_DV_BT_CAP_PROGRESSIVE | V4L2_DV_BT_CAP_REDUCED_BLANKING |
+@@ -687,7 +687,7 @@ static const struct v4l2_dv_timings_cap adv7842_timings_cap_digital = {
+ .type = V4L2_DV_BT_656_1120,
+ /* keep this initialization for compatibility with GCC < 4.4.6 */
+ .reserved = { 0 },
+- V4L2_INIT_BT_TIMINGS(0, 1920, 0, 1200, 25000000, 225000000,
++ V4L2_INIT_BT_TIMINGS(640, 1920, 350, 1200, 25000000, 225000000,
+ V4L2_DV_BT_STD_CEA861 | V4L2_DV_BT_STD_DMT |
+ V4L2_DV_BT_STD_GTF | V4L2_DV_BT_STD_CVT,
+ V4L2_DV_BT_CAP_PROGRESSIVE | V4L2_DV_BT_CAP_REDUCED_BLANKING |
+diff --git a/drivers/media/i2c/tc358743.c b/drivers/media/i2c/tc358743.c
+index e6f5c363ccab..c9647e24a4a3 100644
+--- a/drivers/media/i2c/tc358743.c
++++ b/drivers/media/i2c/tc358743.c
+@@ -70,7 +70,7 @@ static const struct v4l2_dv_timings_cap tc358743_timings_cap = {
+ /* keep this initialization for compatibility with GCC < 4.4.6 */
+ .reserved = { 0 },
+ /* Pixel clock from REF_01 p. 20. Min/max height/width are unknown */
+- V4L2_INIT_BT_TIMINGS(1, 10000, 1, 10000, 0, 165000000,
++ V4L2_INIT_BT_TIMINGS(640, 1920, 350, 1200, 13000000, 165000000,
+ V4L2_DV_BT_STD_CEA861 | V4L2_DV_BT_STD_DMT |
+ V4L2_DV_BT_STD_GTF | V4L2_DV_BT_STD_CVT,
+ V4L2_DV_BT_CAP_PROGRESSIVE |
+diff --git a/drivers/media/i2c/ths8200.c b/drivers/media/i2c/ths8200.c
+index 498ad2368cbc..f5ee28058ea2 100644
+--- a/drivers/media/i2c/ths8200.c
++++ b/drivers/media/i2c/ths8200.c
+@@ -49,7 +49,7 @@ static const struct v4l2_dv_timings_cap ths8200_timings_cap = {
+ .type = V4L2_DV_BT_656_1120,
+ /* keep this initialization for compatibility with GCC < 4.4.6 */
+ .reserved = { 0 },
+- V4L2_INIT_BT_TIMINGS(0, 1920, 0, 1080, 25000000, 148500000,
++ V4L2_INIT_BT_TIMINGS(640, 1920, 350, 1080, 25000000, 148500000,
+ V4L2_DV_BT_STD_CEA861, V4L2_DV_BT_CAP_PROGRESSIVE)
+ };
+
+diff --git a/drivers/media/platform/coda/coda-bit.c b/drivers/media/platform/coda/coda-bit.c
+index 291c40933935..3457a5f1c8a8 100644
+--- a/drivers/media/platform/coda/coda-bit.c
++++ b/drivers/media/platform/coda/coda-bit.c
+@@ -953,16 +953,15 @@ static int coda_start_encoding(struct coda_ctx *ctx)
+ else
+ coda_write(dev, CODA_STD_H264,
+ CODA_CMD_ENC_SEQ_COD_STD);
+- if (ctx->params.h264_deblk_enabled) {
+- value = ((ctx->params.h264_deblk_alpha &
+- CODA_264PARAM_DEBLKFILTEROFFSETALPHA_MASK) <<
+- CODA_264PARAM_DEBLKFILTEROFFSETALPHA_OFFSET) |
+- ((ctx->params.h264_deblk_beta &
+- CODA_264PARAM_DEBLKFILTEROFFSETBETA_MASK) <<
+- CODA_264PARAM_DEBLKFILTEROFFSETBETA_OFFSET);
+- } else {
+- value = 1 << CODA_264PARAM_DISABLEDEBLK_OFFSET;
+- }
++ value = ((ctx->params.h264_disable_deblocking_filter_idc &
++ CODA_264PARAM_DISABLEDEBLK_MASK) <<
++ CODA_264PARAM_DISABLEDEBLK_OFFSET) |
++ ((ctx->params.h264_slice_alpha_c0_offset_div2 &
++ CODA_264PARAM_DEBLKFILTEROFFSETALPHA_MASK) <<
++ CODA_264PARAM_DEBLKFILTEROFFSETALPHA_OFFSET) |
++ ((ctx->params.h264_slice_beta_offset_div2 &
++ CODA_264PARAM_DEBLKFILTEROFFSETBETA_MASK) <<
++ CODA_264PARAM_DEBLKFILTEROFFSETBETA_OFFSET);
+ coda_write(dev, value, CODA_CMD_ENC_SEQ_264_PARA);
+ break;
+ case V4L2_PIX_FMT_JPEG:
+diff --git a/drivers/media/platform/coda/coda-common.c b/drivers/media/platform/coda/coda-common.c
+index 99d138d3f87f..2e1472fadc2c 100644
+--- a/drivers/media/platform/coda/coda-common.c
++++ b/drivers/media/platform/coda/coda-common.c
+@@ -1675,14 +1675,13 @@ static int coda_s_ctrl(struct v4l2_ctrl *ctrl)
+ ctx->params.h264_max_qp = ctrl->val;
+ break;
+ case V4L2_CID_MPEG_VIDEO_H264_LOOP_FILTER_ALPHA:
+- ctx->params.h264_deblk_alpha = ctrl->val;
++ ctx->params.h264_slice_alpha_c0_offset_div2 = ctrl->val;
+ break;
+ case V4L2_CID_MPEG_VIDEO_H264_LOOP_FILTER_BETA:
+- ctx->params.h264_deblk_beta = ctrl->val;
++ ctx->params.h264_slice_beta_offset_div2 = ctrl->val;
+ break;
+ case V4L2_CID_MPEG_VIDEO_H264_LOOP_FILTER_MODE:
+- ctx->params.h264_deblk_enabled = (ctrl->val ==
+- V4L2_MPEG_VIDEO_H264_LOOP_FILTER_MODE_ENABLED);
++ ctx->params.h264_disable_deblocking_filter_idc = ctrl->val;
+ break;
+ case V4L2_CID_MPEG_VIDEO_H264_PROFILE:
+ /* TODO: switch between baseline and constrained baseline */
+@@ -1764,13 +1763,13 @@ static void coda_encode_ctrls(struct coda_ctx *ctx)
+ v4l2_ctrl_new_std(&ctx->ctrls, &coda_ctrl_ops,
+ V4L2_CID_MPEG_VIDEO_H264_MAX_QP, 0, 51, 1, 51);
+ v4l2_ctrl_new_std(&ctx->ctrls, &coda_ctrl_ops,
+- V4L2_CID_MPEG_VIDEO_H264_LOOP_FILTER_ALPHA, 0, 15, 1, 0);
++ V4L2_CID_MPEG_VIDEO_H264_LOOP_FILTER_ALPHA, -6, 6, 1, 0);
+ v4l2_ctrl_new_std(&ctx->ctrls, &coda_ctrl_ops,
+- V4L2_CID_MPEG_VIDEO_H264_LOOP_FILTER_BETA, 0, 15, 1, 0);
++ V4L2_CID_MPEG_VIDEO_H264_LOOP_FILTER_BETA, -6, 6, 1, 0);
+ v4l2_ctrl_new_std_menu(&ctx->ctrls, &coda_ctrl_ops,
+ V4L2_CID_MPEG_VIDEO_H264_LOOP_FILTER_MODE,
+- V4L2_MPEG_VIDEO_H264_LOOP_FILTER_MODE_DISABLED, 0x0,
+- V4L2_MPEG_VIDEO_H264_LOOP_FILTER_MODE_ENABLED);
++ V4L2_MPEG_VIDEO_H264_LOOP_FILTER_MODE_DISABLED_AT_SLICE_BOUNDARY,
++ 0x0, V4L2_MPEG_VIDEO_H264_LOOP_FILTER_MODE_ENABLED);
+ v4l2_ctrl_new_std_menu(&ctx->ctrls, &coda_ctrl_ops,
+ V4L2_CID_MPEG_VIDEO_H264_PROFILE,
+ V4L2_MPEG_VIDEO_H264_PROFILE_BASELINE, 0x0,
+diff --git a/drivers/media/platform/coda/coda.h b/drivers/media/platform/coda/coda.h
+index c5f504d8cf67..389a882cc3da 100644
+--- a/drivers/media/platform/coda/coda.h
++++ b/drivers/media/platform/coda/coda.h
+@@ -114,9 +114,9 @@ struct coda_params {
+ u8 h264_inter_qp;
+ u8 h264_min_qp;
+ u8 h264_max_qp;
+- u8 h264_deblk_enabled;
+- u8 h264_deblk_alpha;
+- u8 h264_deblk_beta;
++ u8 h264_disable_deblocking_filter_idc;
++ s8 h264_slice_alpha_c0_offset_div2;
++ s8 h264_slice_beta_offset_div2;
+ u8 h264_profile_idc;
+ u8 h264_level_idc;
+ u8 mpeg4_intra_qp;
+diff --git a/drivers/media/platform/coda/coda_regs.h b/drivers/media/platform/coda/coda_regs.h
+index 38df5fd9a2fa..546f5762357c 100644
+--- a/drivers/media/platform/coda/coda_regs.h
++++ b/drivers/media/platform/coda/coda_regs.h
+@@ -292,7 +292,7 @@
+ #define CODA_264PARAM_DEBLKFILTEROFFSETALPHA_OFFSET 8
+ #define CODA_264PARAM_DEBLKFILTEROFFSETALPHA_MASK 0x0f
+ #define CODA_264PARAM_DISABLEDEBLK_OFFSET 6
+-#define CODA_264PARAM_DISABLEDEBLK_MASK 0x01
++#define CODA_264PARAM_DISABLEDEBLK_MASK 0x03
+ #define CODA_264PARAM_CONSTRAINEDINTRAPREDFLAG_OFFSET 5
+ #define CODA_264PARAM_CONSTRAINEDINTRAPREDFLAG_MASK 0x01
+ #define CODA_264PARAM_CHROMAQPOFFSET_OFFSET 0
+diff --git a/drivers/media/platform/davinci/vpbe.c b/drivers/media/platform/davinci/vpbe.c
+index 7f6462562579..1d3c13e36904 100644
+--- a/drivers/media/platform/davinci/vpbe.c
++++ b/drivers/media/platform/davinci/vpbe.c
+@@ -739,7 +739,7 @@ static int vpbe_initialize(struct device *dev, struct vpbe_device *vpbe_dev)
+ if (ret) {
+ v4l2_err(&vpbe_dev->v4l2_dev, "Failed to set default output %s",
+ def_output);
+- return ret;
++ goto fail_kfree_amp;
+ }
+
+ printk(KERN_NOTICE "Setting default mode to %s\n", def_mode);
+@@ -747,12 +747,15 @@ static int vpbe_initialize(struct device *dev, struct vpbe_device *vpbe_dev)
+ if (ret) {
+ v4l2_err(&vpbe_dev->v4l2_dev, "Failed to set default mode %s",
+ def_mode);
+- return ret;
++ goto fail_kfree_amp;
+ }
+ vpbe_dev->initialized = 1;
+ /* TBD handling of bootargs for default output and mode */
+ return 0;
+
++fail_kfree_amp:
++ mutex_lock(&vpbe_dev->lock);
++ kfree(vpbe_dev->amp);
+ fail_kfree_encoders:
+ kfree(vpbe_dev->encoders);
+ fail_dev_unregister:
+diff --git a/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc_pm.c b/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc_pm.c
+index 3e73e9db781f..7c025045ea90 100644
+--- a/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc_pm.c
++++ b/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc_pm.c
+@@ -41,25 +41,27 @@ int mtk_vcodec_init_enc_pm(struct mtk_vcodec_dev *mtkdev)
+ node = of_parse_phandle(dev->of_node, "mediatek,larb", 0);
+ if (!node) {
+ mtk_v4l2_err("no mediatek,larb found");
+- return -1;
++ return -ENODEV;
+ }
+ pdev = of_find_device_by_node(node);
++ of_node_put(node);
+ if (!pdev) {
+ mtk_v4l2_err("no mediatek,larb device found");
+- return -1;
++ return -ENODEV;
+ }
+ pm->larbvenc = &pdev->dev;
+
+ node = of_parse_phandle(dev->of_node, "mediatek,larb", 1);
+ if (!node) {
+ mtk_v4l2_err("no mediatek,larb found");
+- return -1;
++ return -ENODEV;
+ }
+
+ pdev = of_find_device_by_node(node);
++ of_node_put(node);
+ if (!pdev) {
+ mtk_v4l2_err("no mediatek,larb device found");
+- return -1;
++ return -ENODEV;
+ }
+
+ pm->larbvenclt = &pdev->dev;
+diff --git a/drivers/memstick/core/memstick.c b/drivers/memstick/core/memstick.c
+index 76382c858c35..1246d69ba187 100644
+--- a/drivers/memstick/core/memstick.c
++++ b/drivers/memstick/core/memstick.c
+@@ -18,6 +18,7 @@
+ #include
+ #include
+ #include
++#include
+
+ #define DRIVER_NAME "memstick"
+
+@@ -436,6 +437,7 @@ static void memstick_check(struct work_struct *work)
+ struct memstick_dev *card;
+
+ dev_dbg(&host->dev, "memstick_check started\n");
++ pm_runtime_get_noresume(host->dev.parent);
+ mutex_lock(&host->lock);
+ if (!host->card) {
+ if (memstick_power_on(host))
+@@ -479,6 +481,7 @@ out_power_off:
+ host->set_param(host, MEMSTICK_POWER, MEMSTICK_POWER_OFF);
+
+ mutex_unlock(&host->lock);
++ pm_runtime_put(host->dev.parent);
+ dev_dbg(&host->dev, "memstick_check finished\n");
+ }
+
+diff --git a/drivers/mmc/host/bcm2835.c b/drivers/mmc/host/bcm2835.c
+index 0d3b7473bc21..5301302fb531 100644
+--- a/drivers/mmc/host/bcm2835.c
++++ b/drivers/mmc/host/bcm2835.c
+@@ -286,6 +286,7 @@ static void bcm2835_reset(struct mmc_host *mmc)
+
+ if (host->dma_chan)
+ dmaengine_terminate_sync(host->dma_chan);
++ host->dma_chan = NULL;
+ bcm2835_reset_internal(host);
+ }
+
+@@ -772,6 +773,8 @@ static void bcm2835_finish_command(struct bcm2835_host *host)
+
+ if (!(sdhsts & SDHSTS_CRC7_ERROR) ||
+ (host->cmd->opcode != MMC_SEND_OP_COND)) {
++ u32 edm, fsm;
++
+ if (sdhsts & SDHSTS_CMD_TIME_OUT) {
+ host->cmd->error = -ETIMEDOUT;
+ } else {
+@@ -780,6 +783,13 @@ static void bcm2835_finish_command(struct bcm2835_host *host)
+ bcm2835_dumpregs(host);
+ host->cmd->error = -EILSEQ;
+ }
++ edm = readl(host->ioaddr + SDEDM);
++ fsm = edm & SDEDM_FSM_MASK;
++ if (fsm == SDEDM_FSM_READWAIT ||
++ fsm == SDEDM_FSM_WRITESTART1)
++ /* Kick the FSM out of its wait */
++ writel(edm | SDEDM_FORCE_DATA_MODE,
++ host->ioaddr + SDEDM);
+ bcm2835_finish_request(host);
+ return;
+ }
+@@ -837,6 +847,8 @@ static void bcm2835_timeout(struct work_struct *work)
+ dev_err(dev, "timeout waiting for hardware interrupt.\n");
+ bcm2835_dumpregs(host);
+
++ bcm2835_reset(host->mmc);
++
+ if (host->data) {
+ host->data->error = -ETIMEDOUT;
+ bcm2835_finish_data(host);
+diff --git a/drivers/mmc/host/sdhci-of-esdhc.c b/drivers/mmc/host/sdhci-of-esdhc.c
+index 8332f56e6c0d..7b7d077e40fd 100644
+--- a/drivers/mmc/host/sdhci-of-esdhc.c
++++ b/drivers/mmc/host/sdhci-of-esdhc.c
+@@ -481,8 +481,12 @@ static void esdhc_clock_enable(struct sdhci_host *host, bool enable)
+ /* Wait max 20 ms */
+ timeout = ktime_add_ms(ktime_get(), 20);
+ val = ESDHC_CLOCK_STABLE;
+- while (!(sdhci_readl(host, ESDHC_PRSSTAT) & val)) {
+- if (ktime_after(ktime_get(), timeout)) {
++ while (1) {
++ bool timedout = ktime_after(ktime_get(), timeout);
++
++ if (sdhci_readl(host, ESDHC_PRSSTAT) & val)
++ break;
++ if (timedout) {
+ pr_err("%s: Internal clock never stabilised.\n",
+ mmc_hostname(host->mmc));
+ break;
+@@ -558,8 +562,12 @@ static void esdhc_of_set_clock(struct sdhci_host *host, unsigned int clock)
+
+ /* Wait max 20 ms */
+ timeout = ktime_add_ms(ktime_get(), 20);
+- while (!(sdhci_readl(host, ESDHC_PRSSTAT) & ESDHC_CLOCK_STABLE)) {
+- if (ktime_after(ktime_get(), timeout)) {
++ while (1) {
++ bool timedout = ktime_after(ktime_get(), timeout);
++
++ if (sdhci_readl(host, ESDHC_PRSSTAT) & ESDHC_CLOCK_STABLE)
++ break;
++ if (timedout) {
+ pr_err("%s: Internal clock never stabilised.\n",
+ mmc_hostname(host->mmc));
+ return;
+diff --git a/drivers/mmc/host/sdhci-xenon-phy.c b/drivers/mmc/host/sdhci-xenon-phy.c
+index ec8794335241..82051f2b7191 100644
+--- a/drivers/mmc/host/sdhci-xenon-phy.c
++++ b/drivers/mmc/host/sdhci-xenon-phy.c
+@@ -357,9 +357,13 @@ static int xenon_emmc_phy_enable_dll(struct sdhci_host *host)
+
+ /* Wait max 32 ms */
+ timeout = ktime_add_ms(ktime_get(), 32);
+- while (!(sdhci_readw(host, XENON_SLOT_EXT_PRESENT_STATE) &
+- XENON_DLL_LOCK_STATE)) {
+- if (ktime_after(ktime_get(), timeout)) {
++ while (1) {
++ bool timedout = ktime_after(ktime_get(), timeout);
++
++ if (sdhci_readw(host, XENON_SLOT_EXT_PRESENT_STATE) &
++ XENON_DLL_LOCK_STATE)
++ break;
++ if (timedout) {
+ dev_err(mmc_dev(host->mmc), "Wait for DLL Lock time-out\n");
+ return -ETIMEDOUT;
+ }
+diff --git a/drivers/mmc/host/sdhci-xenon.c b/drivers/mmc/host/sdhci-xenon.c
+index 4d0791f6ec23..a0b5089b3274 100644
+--- a/drivers/mmc/host/sdhci-xenon.c
++++ b/drivers/mmc/host/sdhci-xenon.c
+@@ -34,9 +34,13 @@ static int xenon_enable_internal_clk(struct sdhci_host *host)
+ sdhci_writel(host, reg, SDHCI_CLOCK_CONTROL);
+ /* Wait max 20 ms */
+ timeout = ktime_add_ms(ktime_get(), 20);
+- while (!((reg = sdhci_readw(host, SDHCI_CLOCK_CONTROL))
+- & SDHCI_CLOCK_INT_STABLE)) {
+- if (ktime_after(ktime_get(), timeout)) {
++ while (1) {
++ bool timedout = ktime_after(ktime_get(), timeout);
++
++ reg = sdhci_readw(host, SDHCI_CLOCK_CONTROL);
++ if (reg & SDHCI_CLOCK_INT_STABLE)
++ break;
++ if (timedout) {
+ dev_err(mmc_dev(host->mmc), "Internal clock never stabilised.\n");
+ return -ETIMEDOUT;
+ }
+diff --git a/drivers/net/ethernet/broadcom/bcmsysport.c b/drivers/net/ethernet/broadcom/bcmsysport.c
+index 45462557e51c..ed3edb17fd09 100644
+--- a/drivers/net/ethernet/broadcom/bcmsysport.c
++++ b/drivers/net/ethernet/broadcom/bcmsysport.c
+@@ -519,7 +519,6 @@ static void bcm_sysport_get_wol(struct net_device *dev,
+ struct ethtool_wolinfo *wol)
+ {
+ struct bcm_sysport_priv *priv = netdev_priv(dev);
+- u32 reg;
+
+ wol->supported = WAKE_MAGIC | WAKE_MAGICSECURE;
+ wol->wolopts = priv->wolopts;
+@@ -527,11 +526,7 @@ static void bcm_sysport_get_wol(struct net_device *dev,
+ if (!(priv->wolopts & WAKE_MAGICSECURE))
+ return;
+
+- /* Return the programmed SecureOn password */
+- reg = umac_readl(priv, UMAC_PSW_MS);
+- put_unaligned_be16(reg, &wol->sopass[0]);
+- reg = umac_readl(priv, UMAC_PSW_LS);
+- put_unaligned_be32(reg, &wol->sopass[2]);
++ memcpy(wol->sopass, priv->sopass, sizeof(priv->sopass));
+ }
+
+ static int bcm_sysport_set_wol(struct net_device *dev,
+@@ -547,13 +542,8 @@ static int bcm_sysport_set_wol(struct net_device *dev,
+ if (wol->wolopts & ~supported)
+ return -EINVAL;
+
+- /* Program the SecureOn password */
+- if (wol->wolopts & WAKE_MAGICSECURE) {
+- umac_writel(priv, get_unaligned_be16(&wol->sopass[0]),
+- UMAC_PSW_MS);
+- umac_writel(priv, get_unaligned_be32(&wol->sopass[2]),
+- UMAC_PSW_LS);
+- }
++ if (wol->wolopts & WAKE_MAGICSECURE)
++ memcpy(priv->sopass, wol->sopass, sizeof(priv->sopass));
+
+ /* Flag the device and relevant IRQ as wakeup capable */
+ if (wol->wolopts) {
+@@ -2221,12 +2211,17 @@ static int bcm_sysport_suspend_to_wol(struct bcm_sysport_priv *priv)
+ unsigned int timeout = 1000;
+ u32 reg;
+
+- /* Password has already been programmed */
+ reg = umac_readl(priv, UMAC_MPD_CTRL);
+ reg |= MPD_EN;
+ reg &= ~PSW_EN;
+- if (priv->wolopts & WAKE_MAGICSECURE)
++ if (priv->wolopts & WAKE_MAGICSECURE) {
++ /* Program the SecureOn password */
++ umac_writel(priv, get_unaligned_be16(&priv->sopass[0]),
++ UMAC_PSW_MS);
++ umac_writel(priv, get_unaligned_be32(&priv->sopass[2]),
++ UMAC_PSW_LS);
+ reg |= PSW_EN;
++ }
+ umac_writel(priv, reg, UMAC_MPD_CTRL);
+
+ /* Make sure RBUF entered WoL mode as result */
+diff --git a/drivers/net/ethernet/broadcom/bcmsysport.h b/drivers/net/ethernet/broadcom/bcmsysport.h
+index 86ae751ccb5c..3df4a48b8eac 100644
+--- a/drivers/net/ethernet/broadcom/bcmsysport.h
++++ b/drivers/net/ethernet/broadcom/bcmsysport.h
+@@ -11,6 +11,7 @@
+ #ifndef __BCM_SYSPORT_H
+ #define __BCM_SYSPORT_H
+
++#include
+ #include
+
+ /* Receive/transmit descriptor format */
+@@ -754,6 +755,7 @@ struct bcm_sysport_priv {
+ unsigned int crc_fwd:1;
+ u16 rev;
+ u32 wolopts;
++ u8 sopass[SOPASS_MAX];
+ unsigned int wol_irq_disabled:1;
+
+ /* MIB related fields */
+diff --git a/drivers/net/ethernet/cisco/enic/enic_main.c b/drivers/net/ethernet/cisco/enic/enic_main.c
+index 03f4fee1bbc9..ced348e15a63 100644
+--- a/drivers/net/ethernet/cisco/enic/enic_main.c
++++ b/drivers/net/ethernet/cisco/enic/enic_main.c
+@@ -1393,7 +1393,8 @@ static void enic_rq_indicate_buf(struct vnic_rq *rq,
csum is correct or is zero. + */ + if ((netdev->features & NETIF_F_RXCSUM) && !csum_not_calc && +- tcp_udp_csum_ok && ipv4_csum_ok && outer_csum_ok) { ++ tcp_udp_csum_ok && outer_csum_ok && ++ (ipv4_csum_ok || ipv6)) { + skb->ip_summed = CHECKSUM_UNNECESSARY; + skb->csum_level = encap; + } +diff --git a/drivers/net/ethernet/freescale/fman/fman_memac.c b/drivers/net/ethernet/freescale/fman/fman_memac.c +index c0296880feba..75ce773c21a6 100644 +--- a/drivers/net/ethernet/freescale/fman/fman_memac.c ++++ b/drivers/net/ethernet/freescale/fman/fman_memac.c +@@ -927,7 +927,7 @@ int memac_add_hash_mac_address(struct fman_mac *memac, enet_addr_t *eth_addr) + hash = get_mac_addr_hash_code(addr) & HASH_CTRL_ADDR_MASK; + + /* Create element to be added to the driver hash table */ +- hash_entry = kmalloc(sizeof(*hash_entry), GFP_KERNEL); ++ hash_entry = kmalloc(sizeof(*hash_entry), GFP_ATOMIC); + if (!hash_entry) + return -ENOMEM; + hash_entry->addr = addr; +diff --git a/drivers/net/ethernet/freescale/fman/fman_tgec.c b/drivers/net/ethernet/freescale/fman/fman_tgec.c +index 4b0f3a50b293..e575259d20f4 100644 +--- a/drivers/net/ethernet/freescale/fman/fman_tgec.c ++++ b/drivers/net/ethernet/freescale/fman/fman_tgec.c +@@ -551,7 +551,7 @@ int tgec_add_hash_mac_address(struct fman_mac *tgec, enet_addr_t *eth_addr) + hash = (crc >> TGEC_HASH_MCAST_SHIFT) & TGEC_HASH_ADR_MSK; + + /* Create element to be added to the driver hash table */ +- hash_entry = kmalloc(sizeof(*hash_entry), GFP_KERNEL); ++ hash_entry = kmalloc(sizeof(*hash_entry), GFP_ATOMIC); + if (!hash_entry) + return -ENOMEM; + hash_entry->addr = addr; +diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c +index 904b42becd45..5d47a51e74eb 100644 +--- a/drivers/net/ethernet/intel/i40e/i40e_main.c ++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c +@@ -9772,6 +9772,9 @@ static int i40e_config_netdev(struct i40e_vsi *vsi) + ether_addr_copy(netdev->dev_addr, mac_addr); + 
ether_addr_copy(netdev->perm_addr, mac_addr); + ++ /* i40iw_net_event() reads 16 bytes from neigh->primary_key */ ++ netdev->neigh_priv_len = sizeof(u32) * 4; ++ + netdev->priv_flags |= IFF_UNICAST_FLT; + netdev->priv_flags |= IFF_SUPP_NOFCS; + /* Setup netdev TC information */ +diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c +index 1c027f9d9af5..8892ea5cbb01 100644 +--- a/drivers/net/ethernet/intel/igb/igb_main.c ++++ b/drivers/net/ethernet/intel/igb/igb_main.c +@@ -7950,9 +7950,11 @@ static int __igb_shutdown(struct pci_dev *pdev, bool *enable_wake, + rtnl_unlock(); + + #ifdef CONFIG_PM +- retval = pci_save_state(pdev); +- if (retval) +- return retval; ++ if (!runtime) { ++ retval = pci_save_state(pdev); ++ if (retval) ++ return retval; ++ } + #endif + + status = rd32(E1000_STATUS); +diff --git a/drivers/net/ethernet/marvell/skge.c b/drivers/net/ethernet/marvell/skge.c +index eef35bf3e849..5d00be3aac73 100644 +--- a/drivers/net/ethernet/marvell/skge.c ++++ b/drivers/net/ethernet/marvell/skge.c +@@ -152,8 +152,10 @@ static void skge_get_regs(struct net_device *dev, struct ethtool_regs *regs, + memset(p, 0, regs->len); + memcpy_fromio(p, io, B3_RAM_ADDR); + +- memcpy_fromio(p + B3_RI_WTO_R1, io + B3_RI_WTO_R1, +- regs->len - B3_RI_WTO_R1); ++ if (regs->len > B3_RI_WTO_R1) { ++ memcpy_fromio(p + B3_RI_WTO_R1, io + B3_RI_WTO_R1, ++ regs->len - B3_RI_WTO_R1); ++ } + } + + /* Wake on Lan only supported on Yukon chips with rev 1 or above */ +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +index bf34264c734b..14bab8a5550d 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +@@ -1605,7 +1605,7 @@ static void mlx5e_close_cq(struct mlx5e_cq *cq) + + static int mlx5e_get_cpu(struct mlx5e_priv *priv, int ix) + { +- return cpumask_first(priv->mdev->priv.irq_info[ix].mask); ++ return 
cpumask_first(priv->mdev->priv.irq_info[ix + MLX5_EQ_VEC_COMP_BASE].mask); + } + + static int mlx5e_open_tx_cqs(struct mlx5e_channel *c, +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c +index 8b7b52c7512e..eec7c2ef067a 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c +@@ -646,6 +646,8 @@ static u32 mlx5e_get_fcs(const struct sk_buff *skb) + return __get_unaligned_cpu32(fcs_bytes); + } + ++#define short_frame(size) ((size) <= ETH_ZLEN + ETH_FCS_LEN) ++ + static inline void mlx5e_handle_csum(struct net_device *netdev, + struct mlx5_cqe64 *cqe, + struct mlx5e_rq *rq, +@@ -661,6 +663,17 @@ static inline void mlx5e_handle_csum(struct net_device *netdev, + return; + } + ++ /* CQE csum doesn't cover padding octets in short ethernet ++ * frames. And the pad field is appended prior to calculating ++ * and appending the FCS field. ++ * ++ * Detecting these padded frames requires to verify and parse ++ * IP headers, so we simply force all those small frames to be ++ * CHECKSUM_UNNECESSARY even if they are not padded. 
++ */ ++ if (short_frame(skb->len)) ++ goto csum_unnecessary; ++ + if (is_first_ethertype_ip(skb)) { + skb->ip_summed = CHECKSUM_COMPLETE; + skb->csum = csum_unfold((__force __sum16)cqe->check_sum); +@@ -672,6 +685,7 @@ static inline void mlx5e_handle_csum(struct net_device *netdev, + return; + } + ++csum_unnecessary: + if (likely((cqe->hds_ip_ext & CQE_L3_OK) && + (cqe->hds_ip_ext & CQE_L4_OK))) { + skb->ip_summed = CHECKSUM_UNNECESSARY; +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c +index e99f1382a4f0..558fc6a05e2a 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c +@@ -619,18 +619,19 @@ u64 mlx5_read_internal_timer(struct mlx5_core_dev *dev) + static int mlx5_irq_set_affinity_hint(struct mlx5_core_dev *mdev, int i) + { + struct mlx5_priv *priv = &mdev->priv; +- int irq = pci_irq_vector(mdev->pdev, MLX5_EQ_VEC_COMP_BASE + i); ++ int vecidx = MLX5_EQ_VEC_COMP_BASE + i; ++ int irq = pci_irq_vector(mdev->pdev, vecidx); + +- if (!zalloc_cpumask_var(&priv->irq_info[i].mask, GFP_KERNEL)) { ++ if (!zalloc_cpumask_var(&priv->irq_info[vecidx].mask, GFP_KERNEL)) { + mlx5_core_warn(mdev, "zalloc_cpumask_var failed"); + return -ENOMEM; + } + + cpumask_set_cpu(cpumask_local_spread(i, priv->numa_node), +- priv->irq_info[i].mask); ++ priv->irq_info[vecidx].mask); + + if (IS_ENABLED(CONFIG_SMP) && +- irq_set_affinity_hint(irq, priv->irq_info[i].mask)) ++ irq_set_affinity_hint(irq, priv->irq_info[vecidx].mask)) + mlx5_core_warn(mdev, "irq_set_affinity_hint failed, irq 0x%.4x", irq); + + return 0; +@@ -638,11 +639,12 @@ static int mlx5_irq_set_affinity_hint(struct mlx5_core_dev *mdev, int i) + + static void mlx5_irq_clear_affinity_hint(struct mlx5_core_dev *mdev, int i) + { ++ int vecidx = MLX5_EQ_VEC_COMP_BASE + i; + struct mlx5_priv *priv = &mdev->priv; +- int irq = pci_irq_vector(mdev->pdev, MLX5_EQ_VEC_COMP_BASE + i); ++ int irq = 
pci_irq_vector(mdev->pdev, vecidx); + + irq_set_affinity_hint(irq, NULL); +- free_cpumask_var(priv->irq_info[i].mask); ++ free_cpumask_var(priv->irq_info[vecidx].mask); + } + + static int mlx5_irq_set_affinity_hints(struct mlx5_core_dev *mdev) +diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c +index cf65b2ee8b95..7892e6b8d2e8 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c ++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c +@@ -3907,6 +3907,25 @@ void mlxsw_sp_port_dev_put(struct mlxsw_sp_port *mlxsw_sp_port) + dev_put(mlxsw_sp_port->dev); + } + ++static void ++mlxsw_sp_port_lag_uppers_cleanup(struct mlxsw_sp_port *mlxsw_sp_port, ++ struct net_device *lag_dev) ++{ ++ struct net_device *br_dev = netdev_master_upper_dev_get(lag_dev); ++ struct net_device *upper_dev; ++ struct list_head *iter; ++ ++ if (netif_is_bridge_port(lag_dev)) ++ mlxsw_sp_port_bridge_leave(mlxsw_sp_port, lag_dev, br_dev); ++ ++ netdev_for_each_upper_dev_rcu(lag_dev, upper_dev, iter) { ++ if (!netif_is_bridge_port(upper_dev)) ++ continue; ++ br_dev = netdev_master_upper_dev_get(upper_dev); ++ mlxsw_sp_port_bridge_leave(mlxsw_sp_port, upper_dev, br_dev); ++ } ++} ++ + static int mlxsw_sp_lag_create(struct mlxsw_sp *mlxsw_sp, u16 lag_id) + { + char sldr_pl[MLXSW_REG_SLDR_LEN]; +@@ -4094,6 +4113,10 @@ static void mlxsw_sp_port_lag_leave(struct mlxsw_sp_port *mlxsw_sp_port, + + /* Any VLANs configured on the port are no longer valid */ + mlxsw_sp_port_vlan_flush(mlxsw_sp_port); ++ /* Make the LAG and its directly linked uppers leave bridges they ++ * are memeber in ++ */ ++ mlxsw_sp_port_lag_uppers_cleanup(mlxsw_sp_port, lag_dev); + + if (lag->ref_count == 1) + mlxsw_sp_lag_destroy(mlxsw_sp, lag_id); +diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c +index 9052e93e1925..f33fb95c4189 100644 +--- 
a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c ++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c +@@ -291,30 +291,6 @@ mlxsw_sp_bridge_port_destroy(struct mlxsw_sp_bridge_port *bridge_port) + kfree(bridge_port); + } + +-static bool +-mlxsw_sp_bridge_port_should_destroy(const struct mlxsw_sp_bridge_port * +- bridge_port) +-{ +- struct net_device *dev = bridge_port->dev; +- struct mlxsw_sp *mlxsw_sp; +- +- if (is_vlan_dev(dev)) +- mlxsw_sp = mlxsw_sp_lower_get(vlan_dev_real_dev(dev)); +- else +- mlxsw_sp = mlxsw_sp_lower_get(dev); +- +- /* In case ports were pulled from out of a bridged LAG, then +- * it's possible the reference count isn't zero, yet the bridge +- * port should be destroyed, as it's no longer an upper of ours. +- */ +- if (!mlxsw_sp && list_empty(&bridge_port->vlans_list)) +- return true; +- else if (bridge_port->ref_count == 0) +- return true; +- else +- return false; +-} +- + static struct mlxsw_sp_bridge_port * + mlxsw_sp_bridge_port_get(struct mlxsw_sp_bridge *bridge, + struct net_device *brport_dev) +@@ -352,8 +328,7 @@ static void mlxsw_sp_bridge_port_put(struct mlxsw_sp_bridge *bridge, + { + struct mlxsw_sp_bridge_device *bridge_device; + +- bridge_port->ref_count--; +- if (!mlxsw_sp_bridge_port_should_destroy(bridge_port)) ++ if (--bridge_port->ref_count != 0) + return; + bridge_device = bridge_port->bridge_device; + mlxsw_sp_bridge_port_destroy(bridge_port); +diff --git a/drivers/net/ethernet/sun/niu.c b/drivers/net/ethernet/sun/niu.c +index e92f41d20a2c..411a69bea1d4 100644 +--- a/drivers/net/ethernet/sun/niu.c ++++ b/drivers/net/ethernet/sun/niu.c +@@ -8119,6 +8119,8 @@ static int niu_pci_vpd_scan_props(struct niu *np, u32 start, u32 end) + start += 3; + + prop_len = niu_pci_eeprom_read(np, start + 4); ++ if (prop_len < 0) ++ return prop_len; + err = niu_pci_vpd_get_propname(np, start + 5, namebuf, 64); + if (err < 0) + return err; +@@ -8163,8 +8165,12 @@ static int niu_pci_vpd_scan_props(struct niu *np, u32 
start, u32 end) + netif_printk(np, probe, KERN_DEBUG, np->dev, + "VPD_SCAN: Reading in property [%s] len[%d]\n", + namebuf, prop_len); +- for (i = 0; i < prop_len; i++) +- *prop_buf++ = niu_pci_eeprom_read(np, off + i); ++ for (i = 0; i < prop_len; i++) { ++ err = niu_pci_eeprom_read(np, off + i); ++ if (err >= 0) ++ *prop_buf = err; ++ ++prop_buf; ++ } + } + + start += len; +diff --git a/drivers/net/phy/dp83640.c b/drivers/net/phy/dp83640.c +index 26fbbd3ffe33..afebdc2f0b94 100644 +--- a/drivers/net/phy/dp83640.c ++++ b/drivers/net/phy/dp83640.c +@@ -893,14 +893,14 @@ static void decode_txts(struct dp83640_private *dp83640, + struct phy_txts *phy_txts) + { + struct skb_shared_hwtstamps shhwtstamps; ++ struct dp83640_skb_info *skb_info; + struct sk_buff *skb; +- u64 ns; + u8 overflow; ++ u64 ns; + + /* We must already have the skb that triggered this. */ +- ++again: + skb = skb_dequeue(&dp83640->tx_queue); +- + if (!skb) { + pr_debug("have timestamp but tx_queue empty\n"); + return; +@@ -915,6 +915,11 @@ static void decode_txts(struct dp83640_private *dp83640, + } + return; + } ++ skb_info = (struct dp83640_skb_info *)skb->cb; ++ if (time_after(jiffies, skb_info->tmo)) { ++ kfree_skb(skb); ++ goto again; ++ } + + ns = phy2txts(phy_txts); + memset(&shhwtstamps, 0, sizeof(shhwtstamps)); +@@ -1466,6 +1471,7 @@ static bool dp83640_rxtstamp(struct phy_device *phydev, + static void dp83640_txtstamp(struct phy_device *phydev, + struct sk_buff *skb, int type) + { ++ struct dp83640_skb_info *skb_info = (struct dp83640_skb_info *)skb->cb; + struct dp83640_private *dp83640 = phydev->priv; + + switch (dp83640->hwts_tx_en) { +@@ -1478,6 +1484,7 @@ static void dp83640_txtstamp(struct phy_device *phydev, + /* fall through */ + case HWTSTAMP_TX_ON: + skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS; ++ skb_info->tmo = jiffies + SKB_TIMESTAMP_TIMEOUT; + skb_queue_tail(&dp83640->tx_queue, skb); + break; + +diff --git a/drivers/net/usb/smsc95xx.c b/drivers/net/usb/smsc95xx.c +index 
2f65975a121f..fc48da1c702d 100644 +--- a/drivers/net/usb/smsc95xx.c ++++ b/drivers/net/usb/smsc95xx.c +@@ -1295,6 +1295,7 @@ static int smsc95xx_bind(struct usbnet *dev, struct usb_interface *intf) + dev->net->features |= NETIF_F_RXCSUM; + + dev->net->hw_features = NETIF_F_IP_CSUM | NETIF_F_RXCSUM; ++ set_bit(EVENT_NO_IP_ALIGN, &dev->flags); + + smsc95xx_init_mac_address(dev); + +diff --git a/drivers/net/wireless/ath/ath9k/ath9k.h b/drivers/net/wireless/ath/ath9k/ath9k.h +index cf076719c27e..f9339b5c3624 100644 +--- a/drivers/net/wireless/ath/ath9k/ath9k.h ++++ b/drivers/net/wireless/ath/ath9k/ath9k.h +@@ -272,7 +272,7 @@ struct ath_node { + #endif + u8 key_idx[4]; + +- u32 ackto; ++ int ackto; + struct list_head list; + }; + +diff --git a/drivers/net/wireless/ath/ath9k/dynack.c b/drivers/net/wireless/ath/ath9k/dynack.c +index 7334c9b09e82..6e236a485431 100644 +--- a/drivers/net/wireless/ath/ath9k/dynack.c ++++ b/drivers/net/wireless/ath/ath9k/dynack.c +@@ -29,9 +29,13 @@ + * ath_dynack_ewma - EWMA (Exponentially Weighted Moving Average) calculation + * + */ +-static inline u32 ath_dynack_ewma(u32 old, u32 new) ++static inline int ath_dynack_ewma(int old, int new) + { +- return (new * (EWMA_DIV - EWMA_LEVEL) + old * EWMA_LEVEL) / EWMA_DIV; ++ if (old > 0) ++ return (new * (EWMA_DIV - EWMA_LEVEL) + ++ old * EWMA_LEVEL) / EWMA_DIV; ++ else ++ return new; + } + + /** +@@ -82,10 +86,10 @@ static inline bool ath_dynack_bssidmask(struct ath_hw *ah, const u8 *mac) + */ + static void ath_dynack_compute_ackto(struct ath_hw *ah) + { +- struct ath_node *an; +- u32 to = 0; +- struct ath_dynack *da = &ah->dynack; + struct ath_common *common = ath9k_hw_common(ah); ++ struct ath_dynack *da = &ah->dynack; ++ struct ath_node *an; ++ int to = 0; + + list_for_each_entry(an, &da->nodes, list) + if (an->ackto > to) +@@ -144,7 +148,8 @@ static void ath_dynack_compute_to(struct ath_hw *ah) + an->ackto = ath_dynack_ewma(an->ackto, + ackto); + ath_dbg(ath9k_hw_common(ah), DYNACK, +- "%pM 
to %u\n", dst, an->ackto); ++ "%pM to %d [%u]\n", dst, ++ an->ackto, ackto); + if (time_is_before_jiffies(da->lto)) { + ath_dynack_compute_ackto(ah); + da->lto = jiffies + COMPUTE_TO; +@@ -166,10 +171,12 @@ static void ath_dynack_compute_to(struct ath_hw *ah) + * @ah: ath hw + * @skb: socket buffer + * @ts: tx status info ++ * @sta: station pointer + * + */ + void ath_dynack_sample_tx_ts(struct ath_hw *ah, struct sk_buff *skb, +- struct ath_tx_status *ts) ++ struct ath_tx_status *ts, ++ struct ieee80211_sta *sta) + { + u8 ridx; + struct ieee80211_hdr *hdr; +@@ -177,7 +184,7 @@ void ath_dynack_sample_tx_ts(struct ath_hw *ah, struct sk_buff *skb, + struct ath_common *common = ath9k_hw_common(ah); + struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb); + +- if ((info->flags & IEEE80211_TX_CTL_NO_ACK) || !da->enabled) ++ if (!da->enabled || (info->flags & IEEE80211_TX_CTL_NO_ACK)) + return; + + spin_lock_bh(&da->qlock); +@@ -187,11 +194,19 @@ void ath_dynack_sample_tx_ts(struct ath_hw *ah, struct sk_buff *skb, + /* late ACK */ + if (ts->ts_status & ATH9K_TXERR_XRETRY) { + if (ieee80211_is_assoc_req(hdr->frame_control) || +- ieee80211_is_assoc_resp(hdr->frame_control)) { ++ ieee80211_is_assoc_resp(hdr->frame_control) || ++ ieee80211_is_auth(hdr->frame_control)) { + ath_dbg(common, DYNACK, "late ack\n"); ++ + ath9k_hw_setslottime(ah, (LATEACK_TO - 3) / 2); + ath9k_hw_set_ack_timeout(ah, LATEACK_TO); + ath9k_hw_set_cts_timeout(ah, LATEACK_TO); ++ if (sta) { ++ struct ath_node *an; ++ ++ an = (struct ath_node *)sta->drv_priv; ++ an->ackto = -1; ++ } + da->lto = jiffies + LATEACK_DELAY; + } + +@@ -251,7 +266,7 @@ void ath_dynack_sample_ack_ts(struct ath_hw *ah, struct sk_buff *skb, + struct ath_common *common = ath9k_hw_common(ah); + struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data; + +- if (!ath_dynack_bssidmask(ah, hdr->addr1) || !da->enabled) ++ if (!da->enabled || !ath_dynack_bssidmask(ah, hdr->addr1)) + return; + + spin_lock_bh(&da->qlock); +diff --git 
a/drivers/net/wireless/ath/ath9k/dynack.h b/drivers/net/wireless/ath/ath9k/dynack.h +index 6d7bef976742..cf60224d40df 100644 +--- a/drivers/net/wireless/ath/ath9k/dynack.h ++++ b/drivers/net/wireless/ath/ath9k/dynack.h +@@ -86,7 +86,8 @@ void ath_dynack_node_deinit(struct ath_hw *ah, struct ath_node *an); + void ath_dynack_init(struct ath_hw *ah); + void ath_dynack_sample_ack_ts(struct ath_hw *ah, struct sk_buff *skb, u32 ts); + void ath_dynack_sample_tx_ts(struct ath_hw *ah, struct sk_buff *skb, +- struct ath_tx_status *ts); ++ struct ath_tx_status *ts, ++ struct ieee80211_sta *sta); + #else + static inline void ath_dynack_init(struct ath_hw *ah) {} + static inline void ath_dynack_node_init(struct ath_hw *ah, +@@ -97,7 +98,8 @@ static inline void ath_dynack_sample_ack_ts(struct ath_hw *ah, + struct sk_buff *skb, u32 ts) {} + static inline void ath_dynack_sample_tx_ts(struct ath_hw *ah, + struct sk_buff *skb, +- struct ath_tx_status *ts) {} ++ struct ath_tx_status *ts, ++ struct ieee80211_sta *sta) {} + #endif + + #endif /* DYNACK_H */ +diff --git a/drivers/net/wireless/ath/ath9k/xmit.c b/drivers/net/wireless/ath/ath9k/xmit.c +index fa64c1cc94ae..458c4f53ba5d 100644 +--- a/drivers/net/wireless/ath/ath9k/xmit.c ++++ b/drivers/net/wireless/ath/ath9k/xmit.c +@@ -621,7 +621,7 @@ static void ath_tx_complete_aggr(struct ath_softc *sc, struct ath_txq *txq, + if (bf == bf->bf_lastbf) + ath_dynack_sample_tx_ts(sc->sc_ah, + bf->bf_mpdu, +- ts); ++ ts, sta); + } + + ath_tx_complete_buf(sc, bf, txq, &bf_head, sta, ts, +@@ -765,7 +765,8 @@ static void ath_tx_process_buffer(struct ath_softc *sc, struct ath_txq *txq, + memcpy(info->control.rates, bf->rates, + sizeof(info->control.rates)); + ath_tx_rc_status(sc, bf, ts, 1, txok ? 
0 : 1, txok); +- ath_dynack_sample_tx_ts(sc->sc_ah, bf->bf_mpdu, ts); ++ ath_dynack_sample_tx_ts(sc->sc_ah, bf->bf_mpdu, ts, ++ sta); + } + ath_tx_complete_buf(sc, bf, txq, bf_head, sta, ts, txok); + } else +diff --git a/drivers/net/wireless/st/cw1200/scan.c b/drivers/net/wireless/st/cw1200/scan.c +index cc2ce60f4f09..f22c8ae15ad8 100644 +--- a/drivers/net/wireless/st/cw1200/scan.c ++++ b/drivers/net/wireless/st/cw1200/scan.c +@@ -78,6 +78,10 @@ int cw1200_hw_scan(struct ieee80211_hw *hw, + if (req->n_ssids > WSM_SCAN_MAX_NUM_OF_SSIDS) + return -EINVAL; + ++ /* will be unlocked in cw1200_scan_work() */ ++ down(&priv->scan.lock); ++ mutex_lock(&priv->conf_mutex); ++ + frame.skb = ieee80211_probereq_get(hw, priv->vif->addr, NULL, 0, + req->ie_len); + if (!frame.skb) +@@ -86,19 +90,15 @@ int cw1200_hw_scan(struct ieee80211_hw *hw, + if (req->ie_len) + skb_put_data(frame.skb, req->ie, req->ie_len); + +- /* will be unlocked in cw1200_scan_work() */ +- down(&priv->scan.lock); +- mutex_lock(&priv->conf_mutex); +- + ret = wsm_set_template_frame(priv, &frame); + if (!ret) { + /* Host want to be the probe responder. 
*/ + ret = wsm_set_probe_responder(priv, true); + } + if (ret) { ++ dev_kfree_skb(frame.skb); + mutex_unlock(&priv->conf_mutex); + up(&priv->scan.lock); +- dev_kfree_skb(frame.skb); + return ret; + } + +@@ -120,10 +120,9 @@ int cw1200_hw_scan(struct ieee80211_hw *hw, + ++priv->scan.n_ssids; + } + +- mutex_unlock(&priv->conf_mutex); +- + if (frame.skb) + dev_kfree_skb(frame.skb); ++ mutex_unlock(&priv->conf_mutex); + queue_work(priv->workqueue, &priv->scan.work); + return 0; + } +diff --git a/drivers/pci/switch/switchtec.c b/drivers/pci/switch/switchtec.c +index 620f5b995a12..e3aefdafae89 100644 +--- a/drivers/pci/switch/switchtec.c ++++ b/drivers/pci/switch/switchtec.c +@@ -1064,6 +1064,7 @@ static int ioctl_event_ctl(struct switchtec_dev *stdev, + { + int ret; + int nr_idxs; ++ unsigned int event_flags; + struct switchtec_ioctl_event_ctl ctl; + + if (copy_from_user(&ctl, uctl, sizeof(ctl))) +@@ -1085,7 +1086,9 @@ static int ioctl_event_ctl(struct switchtec_dev *stdev, + else + return -EINVAL; + ++ event_flags = ctl.flags; + for (ctl.index = 0; ctl.index < nr_idxs; ctl.index++) { ++ ctl.flags = event_flags; + ret = event_ctl(stdev, &ctl); + if (ret < 0) + return ret; +diff --git a/drivers/phy/allwinner/phy-sun4i-usb.c b/drivers/phy/allwinner/phy-sun4i-usb.c +index afedb8cd1990..d1ccff527756 100644 +--- a/drivers/phy/allwinner/phy-sun4i-usb.c ++++ b/drivers/phy/allwinner/phy-sun4i-usb.c +@@ -125,6 +125,7 @@ struct sun4i_usb_phy_cfg { + bool dedicated_clocks; + bool enable_pmu_unk1; + bool phy0_dual_route; ++ int missing_phys; + }; + + struct sun4i_usb_phy_data { +@@ -645,6 +646,9 @@ static struct phy *sun4i_usb_phy_xlate(struct device *dev, + if (args->args[0] >= data->cfg->num_phys) + return ERR_PTR(-ENODEV); + ++ if (data->cfg->missing_phys & BIT(args->args[0])) ++ return ERR_PTR(-ENODEV); ++ + return data->phys[args->args[0]].phy; + } + +@@ -740,6 +744,9 @@ static int sun4i_usb_phy_probe(struct platform_device *pdev) + struct sun4i_usb_phy *phy = data->phys + i; 
+ char name[16]; + ++ if (data->cfg->missing_phys & BIT(i)) ++ continue; ++ + snprintf(name, sizeof(name), "usb%d_vbus", i); + phy->vbus = devm_regulator_get_optional(dev, name); + if (IS_ERR(phy->vbus)) { +diff --git a/drivers/pinctrl/bcm/pinctrl-bcm2835.c b/drivers/pinctrl/bcm/pinctrl-bcm2835.c +index ff782445dfb7..e72bf2502eca 100644 +--- a/drivers/pinctrl/bcm/pinctrl-bcm2835.c ++++ b/drivers/pinctrl/bcm/pinctrl-bcm2835.c +@@ -92,7 +92,7 @@ struct bcm2835_pinctrl { + struct gpio_chip gpio_chip; + struct pinctrl_gpio_range gpio_range; + +- spinlock_t irq_lock[BCM2835_NUM_BANKS]; ++ raw_spinlock_t irq_lock[BCM2835_NUM_BANKS]; + }; + + /* pins are just named GPIO0..GPIO53 */ +@@ -471,10 +471,10 @@ static void bcm2835_gpio_irq_enable(struct irq_data *data) + unsigned bank = GPIO_REG_OFFSET(gpio); + unsigned long flags; + +- spin_lock_irqsave(&pc->irq_lock[bank], flags); ++ raw_spin_lock_irqsave(&pc->irq_lock[bank], flags); + set_bit(offset, &pc->enabled_irq_map[bank]); + bcm2835_gpio_irq_config(pc, gpio, true); +- spin_unlock_irqrestore(&pc->irq_lock[bank], flags); ++ raw_spin_unlock_irqrestore(&pc->irq_lock[bank], flags); + } + + static void bcm2835_gpio_irq_disable(struct irq_data *data) +@@ -486,12 +486,12 @@ static void bcm2835_gpio_irq_disable(struct irq_data *data) + unsigned bank = GPIO_REG_OFFSET(gpio); + unsigned long flags; + +- spin_lock_irqsave(&pc->irq_lock[bank], flags); ++ raw_spin_lock_irqsave(&pc->irq_lock[bank], flags); + bcm2835_gpio_irq_config(pc, gpio, false); + /* Clear events that were latched prior to clearing event sources */ + bcm2835_gpio_set_bit(pc, GPEDS0, gpio); + clear_bit(offset, &pc->enabled_irq_map[bank]); +- spin_unlock_irqrestore(&pc->irq_lock[bank], flags); ++ raw_spin_unlock_irqrestore(&pc->irq_lock[bank], flags); + } + + static int __bcm2835_gpio_irq_set_type_disabled(struct bcm2835_pinctrl *pc, +@@ -594,7 +594,7 @@ static int bcm2835_gpio_irq_set_type(struct irq_data *data, unsigned int type) + unsigned long flags; + int ret; 
+ +- spin_lock_irqsave(&pc->irq_lock[bank], flags); ++ raw_spin_lock_irqsave(&pc->irq_lock[bank], flags); + + if (test_bit(offset, &pc->enabled_irq_map[bank])) + ret = __bcm2835_gpio_irq_set_type_enabled(pc, gpio, type); +@@ -606,7 +606,7 @@ static int bcm2835_gpio_irq_set_type(struct irq_data *data, unsigned int type) + else + irq_set_handler_locked(data, handle_level_irq); + +- spin_unlock_irqrestore(&pc->irq_lock[bank], flags); ++ raw_spin_unlock_irqrestore(&pc->irq_lock[bank], flags); + + return ret; + } +@@ -1021,7 +1021,7 @@ static int bcm2835_pinctrl_probe(struct platform_device *pdev) + for_each_set_bit(offset, &events, 32) + bcm2835_gpio_wr(pc, GPEDS0 + i * 4, BIT(offset)); + +- spin_lock_init(&pc->irq_lock[i]); ++ raw_spin_lock_init(&pc->irq_lock[i]); + } + + err = gpiochip_add_data(&pc->gpio_chip, pc); +diff --git a/drivers/pinctrl/meson/pinctrl-meson8.c b/drivers/pinctrl/meson/pinctrl-meson8.c +index 970f6f14502c..591b01657378 100644 +--- a/drivers/pinctrl/meson/pinctrl-meson8.c ++++ b/drivers/pinctrl/meson/pinctrl-meson8.c +@@ -808,7 +808,9 @@ static const char * const gpio_groups[] = { + "BOOT_5", "BOOT_6", "BOOT_7", "BOOT_8", "BOOT_9", + "BOOT_10", "BOOT_11", "BOOT_12", "BOOT_13", "BOOT_14", + "BOOT_15", "BOOT_16", "BOOT_17", "BOOT_18", ++}; + ++static const char * const gpio_aobus_groups[] = { + "GPIOAO_0", "GPIOAO_1", "GPIOAO_2", "GPIOAO_3", + "GPIOAO_4", "GPIOAO_5", "GPIOAO_6", "GPIOAO_7", + "GPIOAO_8", "GPIOAO_9", "GPIOAO_10", "GPIOAO_11", +@@ -1030,6 +1032,7 @@ static struct meson_pmx_func meson8_cbus_functions[] = { + }; + + static struct meson_pmx_func meson8_aobus_functions[] = { ++ FUNCTION(gpio_aobus), + FUNCTION(uart_ao), + FUNCTION(remote), + FUNCTION(i2c_slave_ao), +diff --git a/drivers/pinctrl/meson/pinctrl-meson8b.c b/drivers/pinctrl/meson/pinctrl-meson8b.c +index 71f216b5b0b9..a6fff215e60f 100644 +--- a/drivers/pinctrl/meson/pinctrl-meson8b.c ++++ b/drivers/pinctrl/meson/pinctrl-meson8b.c +@@ -649,16 +649,18 @@ static const char * 
const gpio_groups[] = { + "BOOT_10", "BOOT_11", "BOOT_12", "BOOT_13", "BOOT_14", + "BOOT_15", "BOOT_16", "BOOT_17", "BOOT_18", + +- "GPIOAO_0", "GPIOAO_1", "GPIOAO_2", "GPIOAO_3", +- "GPIOAO_4", "GPIOAO_5", "GPIOAO_6", "GPIOAO_7", +- "GPIOAO_8", "GPIOAO_9", "GPIOAO_10", "GPIOAO_11", +- "GPIOAO_12", "GPIOAO_13", "GPIO_BSD_EN", "GPIO_TEST_N", +- + "DIF_0_P", "DIF_0_N", "DIF_1_P", "DIF_1_N", + "DIF_2_P", "DIF_2_N", "DIF_3_P", "DIF_3_N", + "DIF_4_P", "DIF_4_N" + }; + ++static const char * const gpio_aobus_groups[] = { ++ "GPIOAO_0", "GPIOAO_1", "GPIOAO_2", "GPIOAO_3", ++ "GPIOAO_4", "GPIOAO_5", "GPIOAO_6", "GPIOAO_7", ++ "GPIOAO_8", "GPIOAO_9", "GPIOAO_10", "GPIOAO_11", ++ "GPIOAO_12", "GPIOAO_13", "GPIO_BSD_EN", "GPIO_TEST_N" ++}; ++ + static const char * const sd_a_groups[] = { + "sd_d0_a", "sd_d1_a", "sd_d2_a", "sd_d3_a", "sd_clk_a", + "sd_cmd_a" +@@ -874,6 +876,7 @@ static struct meson_pmx_func meson8b_cbus_functions[] = { + }; + + static struct meson_pmx_func meson8b_aobus_functions[] = { ++ FUNCTION(gpio_aobus), + FUNCTION(uart_ao), + FUNCTION(uart_ao_b), + FUNCTION(i2c_slave_ao), +diff --git a/drivers/pinctrl/pinctrl-sx150x.c b/drivers/pinctrl/pinctrl-sx150x.c +index 70a0228f4e7f..2d0f4f760326 100644 +--- a/drivers/pinctrl/pinctrl-sx150x.c ++++ b/drivers/pinctrl/pinctrl-sx150x.c +@@ -1166,7 +1166,6 @@ static int sx150x_probe(struct i2c_client *client, + } + + /* Register GPIO controller */ +- pctl->gpio.label = devm_kstrdup(dev, client->name, GFP_KERNEL); + pctl->gpio.base = -1; + pctl->gpio.ngpio = pctl->data->npins; + pctl->gpio.get_direction = sx150x_gpio_get_direction; +@@ -1180,6 +1179,10 @@ static int sx150x_probe(struct i2c_client *client, + pctl->gpio.of_node = dev->of_node; + #endif + pctl->gpio.can_sleep = true; ++ pctl->gpio.label = devm_kstrdup(dev, client->name, GFP_KERNEL); ++ if (!pctl->gpio.label) ++ return -ENOMEM; ++ + /* + * Setting multiple pins is not safe when all pins are not + * handled by the same regmap register. 
The oscio pin (present +@@ -1200,13 +1203,15 @@ static int sx150x_probe(struct i2c_client *client, + + /* Add Interrupt support if an irq is specified */ + if (client->irq > 0) { +- pctl->irq_chip.name = devm_kstrdup(dev, client->name, +- GFP_KERNEL); + pctl->irq_chip.irq_mask = sx150x_irq_mask; + pctl->irq_chip.irq_unmask = sx150x_irq_unmask; + pctl->irq_chip.irq_set_type = sx150x_irq_set_type; + pctl->irq_chip.irq_bus_lock = sx150x_irq_bus_lock; + pctl->irq_chip.irq_bus_sync_unlock = sx150x_irq_bus_sync_unlock; ++ pctl->irq_chip.name = devm_kstrdup(dev, client->name, ++ GFP_KERNEL); ++ if (!pctl->irq_chip.name) ++ return -ENOMEM; + + pctl->irq.masked = ~0; + pctl->irq.sense = 0; +diff --git a/drivers/platform/chrome/cros_ec_proto.c b/drivers/platform/chrome/cros_ec_proto.c +index e7bbdf947bbc..2ac4a7178470 100644 +--- a/drivers/platform/chrome/cros_ec_proto.c ++++ b/drivers/platform/chrome/cros_ec_proto.c +@@ -551,6 +551,7 @@ static int get_keyboard_state_event(struct cros_ec_device *ec_dev) + + int cros_ec_get_next_event(struct cros_ec_device *ec_dev, bool *wake_event) + { ++ u8 event_type; + u32 host_event; + int ret; + +@@ -570,11 +571,22 @@ int cros_ec_get_next_event(struct cros_ec_device *ec_dev, bool *wake_event) + return ret; + + if (wake_event) { ++ event_type = ec_dev->event_data.event_type; + host_event = cros_ec_get_host_event(ec_dev); + +- /* Consider non-host_event as wake event */ +- *wake_event = !host_event || +- !!(host_event & ec_dev->host_event_wake_mask); ++ /* ++ * Sensor events need to be parsed by the sensor sub-device. ++ * Defer them, and don't report the wakeup here. ++ */ ++ if (event_type == EC_MKBP_EVENT_SENSOR_FIFO) ++ *wake_event = false; ++ /* Masked host-events should not count as wake events. */ ++ else if (host_event && ++ !(host_event & ec_dev->host_event_wake_mask)) ++ *wake_event = false; ++ /* Consider all other events as wake events. 
*/ ++ else ++ *wake_event = true; + } + + return ret; +diff --git a/drivers/ptp/ptp_chardev.c b/drivers/ptp/ptp_chardev.c +index a421d6c551b6..ecb41eacd74b 100644 +--- a/drivers/ptp/ptp_chardev.c ++++ b/drivers/ptp/ptp_chardev.c +@@ -228,7 +228,9 @@ long ptp_ioctl(struct posix_clock *pc, unsigned int cmd, unsigned long arg) + pct->sec = ts.tv_sec; + pct->nsec = ts.tv_nsec; + pct++; +- ptp->info->gettime64(ptp->info, &ts); ++ err = ptp->info->gettime64(ptp->info, &ts); ++ if (err) ++ goto out; + pct->sec = ts.tv_sec; + pct->nsec = ts.tv_nsec; + pct++; +@@ -281,6 +283,7 @@ long ptp_ioctl(struct posix_clock *pc, unsigned int cmd, unsigned long arg) + break; + } + ++out: + kfree(sysoff); + return err; + } +diff --git a/drivers/ptp/ptp_clock.c b/drivers/ptp/ptp_clock.c +index 7eacc1c4b3b1..c64903a5978f 100644 +--- a/drivers/ptp/ptp_clock.c ++++ b/drivers/ptp/ptp_clock.c +@@ -253,8 +253,10 @@ struct ptp_clock *ptp_clock_register(struct ptp_clock_info *info, + ptp->dev = device_create_with_groups(ptp_class, parent, ptp->devid, + ptp, ptp->pin_attr_groups, + "ptp%d", ptp->index); +- if (IS_ERR(ptp->dev)) ++ if (IS_ERR(ptp->dev)) { ++ err = PTR_ERR(ptp->dev); + goto no_device; ++ } + + /* Register a new PPS source. 
*/ + if (info->pps) { +@@ -265,6 +267,7 @@ struct ptp_clock *ptp_clock_register(struct ptp_clock_info *info, + pps.owner = info->owner; + ptp->pps_source = pps_register_source(&pps, PTP_PPS_DEFAULTS); + if (!ptp->pps_source) { ++ err = -EINVAL; + pr_err("failed to register pps source\n"); + goto no_pps; + } +diff --git a/drivers/s390/crypto/zcrypt_error.h b/drivers/s390/crypto/zcrypt_error.h +index 13df60209ed3..9499cd3a05f8 100644 +--- a/drivers/s390/crypto/zcrypt_error.h ++++ b/drivers/s390/crypto/zcrypt_error.h +@@ -65,6 +65,7 @@ struct error_hdr { + #define REP82_ERROR_FORMAT_FIELD 0x29 + #define REP82_ERROR_INVALID_COMMAND 0x30 + #define REP82_ERROR_MALFORMED_MSG 0x40 ++#define REP82_ERROR_INVALID_SPECIAL_CMD 0x41 + #define REP82_ERROR_INVALID_DOMAIN_PRECHECK 0x42 + #define REP82_ERROR_RESERVED_FIELDO 0x50 /* old value */ + #define REP82_ERROR_WORD_ALIGNMENT 0x60 +@@ -103,6 +104,7 @@ static inline int convert_error(struct zcrypt_queue *zq, + case REP88_ERROR_MESSAGE_MALFORMD: + case REP82_ERROR_INVALID_DOMAIN_PRECHECK: + case REP82_ERROR_INVALID_DOMAIN_PENDING: ++ case REP82_ERROR_INVALID_SPECIAL_CMD: + // REP88_ERROR_INVALID_KEY // '82' CEX2A + // REP88_ERROR_OPERAND // '84' CEX2A + // REP88_ERROR_OPERAND_EVEN_MOD // '85' CEX2A +diff --git a/drivers/scsi/aic94xx/aic94xx_init.c b/drivers/scsi/aic94xx/aic94xx_init.c +index 4a4746cc6745..eb5ee0ec5a2f 100644 +--- a/drivers/scsi/aic94xx/aic94xx_init.c ++++ b/drivers/scsi/aic94xx/aic94xx_init.c +@@ -281,7 +281,7 @@ static ssize_t asd_show_dev_rev(struct device *dev, + return snprintf(buf, PAGE_SIZE, "%s\n", + asd_dev_rev[asd_ha->revision_id]); + } +-static DEVICE_ATTR(revision, S_IRUGO, asd_show_dev_rev, NULL); ++static DEVICE_ATTR(aic_revision, S_IRUGO, asd_show_dev_rev, NULL); + + static ssize_t asd_show_dev_bios_build(struct device *dev, + struct device_attribute *attr,char *buf) +@@ -478,7 +478,7 @@ static int asd_create_dev_attrs(struct asd_ha_struct *asd_ha) + { + int err; + +- err = 
device_create_file(&asd_ha->pcidev->dev, &dev_attr_revision); ++ err = device_create_file(&asd_ha->pcidev->dev, &dev_attr_aic_revision); + if (err) + return err; + +@@ -500,13 +500,13 @@ err_update_bios: + err_biosb: + device_remove_file(&asd_ha->pcidev->dev, &dev_attr_bios_build); + err_rev: +- device_remove_file(&asd_ha->pcidev->dev, &dev_attr_revision); ++ device_remove_file(&asd_ha->pcidev->dev, &dev_attr_aic_revision); + return err; + } + + static void asd_remove_dev_attrs(struct asd_ha_struct *asd_ha) + { +- device_remove_file(&asd_ha->pcidev->dev, &dev_attr_revision); ++ device_remove_file(&asd_ha->pcidev->dev, &dev_attr_aic_revision); + device_remove_file(&asd_ha->pcidev->dev, &dev_attr_bios_build); + device_remove_file(&asd_ha->pcidev->dev, &dev_attr_pcba_sn); + device_remove_file(&asd_ha->pcidev->dev, &dev_attr_update_bios); +diff --git a/drivers/scsi/cxlflash/main.c b/drivers/scsi/cxlflash/main.c +index 737314cac8d8..b37149e48c5c 100644 +--- a/drivers/scsi/cxlflash/main.c ++++ b/drivers/scsi/cxlflash/main.c +@@ -3659,6 +3659,7 @@ static int cxlflash_probe(struct pci_dev *pdev, + host->max_cmd_len = CXLFLASH_MAX_CDB_LEN; + + cfg = shost_priv(host); ++ cfg->state = STATE_PROBING; + cfg->host = host; + rc = alloc_mem(cfg); + if (rc) { +@@ -3741,6 +3742,7 @@ out: + return rc; + + out_remove: ++ cfg->state = STATE_PROBED; + cxlflash_remove(pdev); + goto out; + } +diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c +index 91783dbdf10c..57cddbc4a977 100644 +--- a/drivers/scsi/lpfc/lpfc_els.c ++++ b/drivers/scsi/lpfc/lpfc_els.c +@@ -242,6 +242,8 @@ lpfc_prep_els_iocb(struct lpfc_vport *vport, uint8_t expectRsp, + icmd->ulpCommand = CMD_ELS_REQUEST64_CR; + if (elscmd == ELS_CMD_FLOGI) + icmd->ulpTimeout = FF_DEF_RATOV * 2; ++ else if (elscmd == ELS_CMD_LOGO) ++ icmd->ulpTimeout = phba->fc_ratov; + else + icmd->ulpTimeout = phba->fc_ratov * 2; + } else { +@@ -2674,16 +2676,15 @@ lpfc_cmpl_els_logo(struct lpfc_hba *phba, struct lpfc_iocbq 
*cmdiocb, + goto out; + } + ++ /* The LOGO will not be retried on failure. A LOGO was ++ * issued to the remote rport and a ACC or RJT or no Answer are ++ * all acceptable. Note the failure and move forward with ++ * discovery. The PLOGI will retry. ++ */ + if (irsp->ulpStatus) { +- /* Check for retry */ +- if (lpfc_els_retry(phba, cmdiocb, rspiocb)) { +- /* ELS command is being retried */ +- skip_recovery = 1; +- goto out; +- } + /* LOGO failed */ + lpfc_printf_vlog(vport, KERN_ERR, LOG_ELS, +- "2756 LOGO failure DID:%06X Status:x%x/x%x\n", ++ "2756 LOGO failure, No Retry DID:%06X Status:x%x/x%x\n", + ndlp->nlp_DID, irsp->ulpStatus, + irsp->un.ulpWord[4]); + /* Do not call DSM for lpfc_els_abort'ed ELS cmds */ +@@ -2729,7 +2730,8 @@ out: + * For any other port type, the rpi is unregistered as an implicit + * LOGO. + */ +- if ((ndlp->nlp_type & NLP_FCP_TARGET) && (skip_recovery == 0)) { ++ if (ndlp->nlp_type & (NLP_FCP_TARGET | NLP_NVME_TARGET) && ++ skip_recovery == 0) { + lpfc_cancel_retry_delay_tmo(vport, ndlp); + spin_lock_irqsave(shost->host_lock, flags); + ndlp->nlp_flag |= NLP_NPR_2B_DISC; +@@ -2762,6 +2764,8 @@ out: + * will be stored into the context1 field of the IOCB for the completion + * callback function to the LOGO ELS command. + * ++ * Callers of this routine are expected to unregister the RPI first ++ * + * Return code + * 0 - successfully issued logo + * 1 - failed to issue logo +@@ -2803,22 +2807,6 @@ lpfc_issue_els_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + "Issue LOGO: did:x%x", + ndlp->nlp_DID, 0, 0); + +- /* +- * If we are issuing a LOGO, we may try to recover the remote NPort +- * by issuing a PLOGI later. Even though we issue ELS cmds by the +- * VPI, if we have a valid RPI, and that RPI gets unreg'ed while +- * that ELS command is in-flight, the HBA returns a IOERR_INVALID_RPI +- * for that ELS cmd. To avoid this situation, lets get rid of the +- * RPI right now, before any ELS cmds are sent. 
+- */ +- spin_lock_irq(shost->host_lock); +- ndlp->nlp_flag |= NLP_ISSUE_LOGO; +- spin_unlock_irq(shost->host_lock); +- if (lpfc_unreg_rpi(vport, ndlp)) { +- lpfc_els_free_iocb(phba, elsiocb); +- return 0; +- } +- + phba->fc_stat.elsXmitLOGO++; + elsiocb->iocb_cmpl = lpfc_cmpl_els_logo; + spin_lock_irq(shost->host_lock); +@@ -2826,7 +2814,6 @@ lpfc_issue_els_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + ndlp->nlp_flag &= ~NLP_ISSUE_LOGO; + spin_unlock_irq(shost->host_lock); + rc = lpfc_sli_issue_iocb(phba, LPFC_ELS_RING, elsiocb, 0); +- + if (rc == IOCB_ERROR) { + spin_lock_irq(shost->host_lock); + ndlp->nlp_flag &= ~NLP_LOGO_SND; +@@ -2834,6 +2821,11 @@ lpfc_issue_els_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + lpfc_els_free_iocb(phba, elsiocb); + return 1; + } ++ ++ spin_lock_irq(shost->host_lock); ++ ndlp->nlp_prev_state = ndlp->nlp_state; ++ spin_unlock_irq(shost->host_lock); ++ lpfc_nlp_set_state(vport, ndlp, NLP_STE_LOGO_ISSUE); + return 0; + } + +@@ -5696,6 +5688,9 @@ error: + stat = (struct ls_rjt *)(pcmd + sizeof(uint32_t)); + stat->un.b.lsRjtRsnCode = LSRJT_UNABLE_TPC; + ++ if (shdr_add_status == ADD_STATUS_OPERATION_ALREADY_ACTIVE) ++ stat->un.b.lsRjtRsnCodeExp = LSEXP_CMD_IN_PROGRESS; ++ + elsiocb->iocb_cmpl = lpfc_cmpl_els_rsp; + phba->fc_stat.elsXmitLSRJT++; + rc = lpfc_sli_issue_iocb(phba, LPFC_ELS_RING, elsiocb, 0); +@@ -9480,7 +9475,8 @@ lpfc_sli_abts_recover_port(struct lpfc_vport *vport, + "rport in state 0x%x\n", ndlp->nlp_state); + return; + } +- lpfc_printf_log(phba, KERN_INFO, LOG_SLI, ++ lpfc_printf_log(phba, KERN_ERR, ++ LOG_ELS | LOG_FCP_ERROR | LOG_NVME_IOERR, + "3094 Start rport recovery on shost id 0x%x " + "fc_id 0x%06x vpi 0x%x rpi 0x%x state 0x%x " + "flags 0x%x\n", +@@ -9493,8 +9489,8 @@ lpfc_sli_abts_recover_port(struct lpfc_vport *vport, + */ + spin_lock_irqsave(shost->host_lock, flags); + ndlp->nlp_fcp_info &= ~NLP_FCP_2_DEVICE; ++ ndlp->nlp_flag |= NLP_ISSUE_LOGO; + 
spin_unlock_irqrestore(shost->host_lock, flags); +- lpfc_issue_els_logo(vport, ndlp, 0); +- lpfc_nlp_set_state(vport, ndlp, NLP_STE_LOGO_ISSUE); ++ lpfc_unreg_rpi(vport, ndlp); + } + +diff --git a/drivers/scsi/lpfc/lpfc_nportdisc.c b/drivers/scsi/lpfc/lpfc_nportdisc.c +index d489f6827cc1..36fb549eb4e8 100644 +--- a/drivers/scsi/lpfc/lpfc_nportdisc.c ++++ b/drivers/scsi/lpfc/lpfc_nportdisc.c +@@ -801,7 +801,9 @@ lpfc_disc_set_adisc(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) + struct Scsi_Host *shost = lpfc_shost_from_vport(vport); + + if (!(ndlp->nlp_flag & NLP_RPI_REGISTERED)) { ++ spin_lock_irq(shost->host_lock); + ndlp->nlp_flag &= ~NLP_NPR_ADISC; ++ spin_unlock_irq(shost->host_lock); + return 0; + } + +@@ -816,7 +818,10 @@ lpfc_disc_set_adisc(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) + return 1; + } + } ++ ++ spin_lock_irq(shost->host_lock); + ndlp->nlp_flag &= ~NLP_NPR_ADISC; ++ spin_unlock_irq(shost->host_lock); + lpfc_unreg_rpi(vport, ndlp); + return 0; + } +diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c +index ae5e579ac473..b28efddab7b1 100644 +--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c ++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c +@@ -8260,6 +8260,7 @@ static void scsih_remove(struct pci_dev *pdev) + + /* release all the volumes */ + _scsih_ir_shutdown(ioc); ++ sas_remove_host(shost); + list_for_each_entry_safe(raid_device, next, &ioc->raid_device_list, + list) { + if (raid_device->starget) { +@@ -8296,7 +8297,6 @@ static void scsih_remove(struct pci_dev *pdev) + ioc->sas_hba.num_phys = 0; + } + +- sas_remove_host(shost); + mpt3sas_base_detach(ioc); + spin_lock(&gioc_lock); + list_del(&ioc->list); +diff --git a/drivers/scsi/mpt3sas/mpt3sas_transport.c b/drivers/scsi/mpt3sas/mpt3sas_transport.c +index 63dd9bc21ff2..66d9f04c4c0b 100644 +--- a/drivers/scsi/mpt3sas/mpt3sas_transport.c ++++ b/drivers/scsi/mpt3sas/mpt3sas_transport.c +@@ -846,10 +846,13 @@ mpt3sas_transport_port_remove(struct 
MPT3SAS_ADAPTER *ioc, u64 sas_address, + mpt3sas_port->remote_identify.sas_address, + mpt3sas_phy->phy_id); + mpt3sas_phy->phy_belongs_to_port = 0; +- sas_port_delete_phy(mpt3sas_port->port, mpt3sas_phy->phy); ++ if (!ioc->remove_host) ++ sas_port_delete_phy(mpt3sas_port->port, ++ mpt3sas_phy->phy); + list_del(&mpt3sas_phy->port_siblings); + } +- sas_port_delete(mpt3sas_port->port); ++ if (!ioc->remove_host) ++ sas_port_delete(mpt3sas_port->port); + kfree(mpt3sas_port); + } + +diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c +index bc15999f1c7c..5ec2898d21cd 100644 +--- a/drivers/scsi/smartpqi/smartpqi_init.c ++++ b/drivers/scsi/smartpqi/smartpqi_init.c +@@ -653,6 +653,7 @@ struct bmic_host_wellness_driver_version { + u8 driver_version_tag[2]; + __le16 driver_version_length; + char driver_version[32]; ++ u8 dont_write_tag[2]; + u8 end_tag[2]; + }; + +@@ -682,6 +683,8 @@ static int pqi_write_driver_version_to_host_wellness( + strncpy(buffer->driver_version, "Linux " DRIVER_VERSION, + sizeof(buffer->driver_version) - 1); + buffer->driver_version[sizeof(buffer->driver_version) - 1] = '\0'; ++ buffer->dont_write_tag[0] = 'D'; ++ buffer->dont_write_tag[1] = 'W'; + buffer->end_tag[0] = 'Z'; + buffer->end_tag[1] = 'Z'; + +@@ -1181,6 +1184,9 @@ static void pqi_get_volume_status(struct pqi_ctrl_info *ctrl_info, + if (rc) + goto out; + ++ if (vpd->page_code != CISS_VPD_LV_STATUS) ++ goto out; ++ + page_length = offsetof(struct ciss_vpd_logical_volume_status, + volume_status) + vpd->page_length; + if (page_length < sizeof(*vpd)) +diff --git a/drivers/scsi/smartpqi/smartpqi_sis.c b/drivers/scsi/smartpqi/smartpqi_sis.c +index 5141bd4c9f06..ca7dfb3a520f 100644 +--- a/drivers/scsi/smartpqi/smartpqi_sis.c ++++ b/drivers/scsi/smartpqi/smartpqi_sis.c +@@ -59,7 +59,7 @@ + + #define SIS_CTRL_KERNEL_UP 0x80 + #define SIS_CTRL_KERNEL_PANIC 0x100 +-#define SIS_CTRL_READY_TIMEOUT_SECS 30 ++#define SIS_CTRL_READY_TIMEOUT_SECS 180 + #define 
SIS_CTRL_READY_RESUME_TIMEOUT_SECS 90 + #define SIS_CTRL_READY_POLL_INTERVAL_MSECS 10 + +diff --git a/drivers/soc/bcm/brcmstb/common.c b/drivers/soc/bcm/brcmstb/common.c +index 22e98a90468c..2f5ec424a390 100644 +--- a/drivers/soc/bcm/brcmstb/common.c ++++ b/drivers/soc/bcm/brcmstb/common.c +@@ -31,13 +31,17 @@ static const struct of_device_id brcmstb_machine_match[] = { + + bool soc_is_brcmstb(void) + { ++ const struct of_device_id *match; + struct device_node *root; + + root = of_find_node_by_path("/"); + if (!root) + return false; + +- return of_match_node(brcmstb_machine_match, root) != NULL; ++ match = of_match_node(brcmstb_machine_match, root); ++ of_node_put(root); ++ ++ return match != NULL; + } + + static const struct of_device_id sun_top_ctrl_match[] = { +diff --git a/drivers/soc/tegra/common.c b/drivers/soc/tegra/common.c +index cd8f41351add..7bfb154d6fa5 100644 +--- a/drivers/soc/tegra/common.c ++++ b/drivers/soc/tegra/common.c +@@ -22,11 +22,15 @@ static const struct of_device_id tegra_machine_match[] = { + + bool soc_is_tegra(void) + { ++ const struct of_device_id *match; + struct device_node *root; + + root = of_find_node_by_path("/"); + if (!root) + return false; + +- return of_match_node(tegra_machine_match, root) != NULL; ++ match = of_match_node(tegra_machine_match, root); ++ of_node_put(root); ++ ++ return match != NULL; + } +diff --git a/drivers/staging/iio/adc/ad7280a.c b/drivers/staging/iio/adc/ad7280a.c +index f85dde9805e0..f17f700ea04f 100644 +--- a/drivers/staging/iio/adc/ad7280a.c ++++ b/drivers/staging/iio/adc/ad7280a.c +@@ -256,7 +256,9 @@ static int ad7280_read(struct ad7280_state *st, unsigned int devaddr, + if (ret) + return ret; + +- __ad7280_read32(st, &tmp); ++ ret = __ad7280_read32(st, &tmp); ++ if (ret) ++ return ret; + + if (ad7280_check_crc(st, tmp)) + return -EIO; +@@ -294,7 +296,9 @@ static int ad7280_read_channel(struct ad7280_state *st, unsigned int devaddr, + + ad7280_delay(st); + +- __ad7280_read32(st, &tmp); ++ ret = 
__ad7280_read32(st, &tmp); ++ if (ret) ++ return ret; + + if (ad7280_check_crc(st, tmp)) + return -EIO; +@@ -327,7 +331,9 @@ static int ad7280_read_all_channels(struct ad7280_state *st, unsigned int cnt, + ad7280_delay(st); + + for (i = 0; i < cnt; i++) { +- __ad7280_read32(st, &tmp); ++ ret = __ad7280_read32(st, &tmp); ++ if (ret) ++ return ret; + + if (ad7280_check_crc(st, tmp)) + return -EIO; +@@ -370,7 +376,10 @@ static int ad7280_chain_setup(struct ad7280_state *st) + return ret; + + for (n = 0; n <= AD7280A_MAX_CHAIN; n++) { +- __ad7280_read32(st, &val); ++ ret = __ad7280_read32(st, &val); ++ if (ret) ++ return ret; ++ + if (val == 0) + return n - 1; + +diff --git a/drivers/staging/iio/adc/ad7780.c b/drivers/staging/iio/adc/ad7780.c +index dec3ba6eba8a..52613f6a9dd8 100644 +--- a/drivers/staging/iio/adc/ad7780.c ++++ b/drivers/staging/iio/adc/ad7780.c +@@ -87,12 +87,16 @@ static int ad7780_read_raw(struct iio_dev *indio_dev, + long m) + { + struct ad7780_state *st = iio_priv(indio_dev); ++ int voltage_uv; + + switch (m) { + case IIO_CHAN_INFO_RAW: + return ad_sigma_delta_single_conversion(indio_dev, chan, val); + case IIO_CHAN_INFO_SCALE: +- *val = st->int_vref_mv * st->gain; ++ voltage_uv = regulator_get_voltage(st->reg); ++ if (voltage_uv < 0) ++ return voltage_uv; ++ *val = (voltage_uv / 1000) * st->gain; + *val2 = chan->scan_type.realbits - 1; + return IIO_VAL_FRACTIONAL_LOG2; + case IIO_CHAN_INFO_OFFSET: +diff --git a/drivers/staging/iio/resolver/ad2s90.c b/drivers/staging/iio/resolver/ad2s90.c +index b2270908f26f..cbee9ad00f0d 100644 +--- a/drivers/staging/iio/resolver/ad2s90.c ++++ b/drivers/staging/iio/resolver/ad2s90.c +@@ -86,7 +86,12 @@ static int ad2s90_probe(struct spi_device *spi) + /* need 600ns between CS and the first falling edge of SCLK */ + spi->max_speed_hz = 830000; + spi->mode = SPI_MODE_3; +- spi_setup(spi); ++ ret = spi_setup(spi); ++ ++ if (ret < 0) { ++ dev_err(&spi->dev, "spi_setup failed!\n"); ++ return ret; ++ } + + return 0; + } 
+diff --git a/drivers/staging/pi433/pi433_if.c b/drivers/staging/pi433/pi433_if.c +index 93c01680f016..5be40bdc191b 100644 +--- a/drivers/staging/pi433/pi433_if.c ++++ b/drivers/staging/pi433/pi433_if.c +@@ -1210,6 +1210,10 @@ static int pi433_probe(struct spi_device *spi) + + /* create cdev */ + device->cdev = cdev_alloc(); ++ if (!device->cdev) { ++ dev_dbg(device->dev, "allocation of cdev failed"); ++ goto cdev_failed; ++ } + device->cdev->owner = THIS_MODULE; + cdev_init(device->cdev, &pi433_fops); + retval = cdev_add(device->cdev, device->devt, 1); +diff --git a/drivers/staging/speakup/spk_ttyio.c b/drivers/staging/speakup/spk_ttyio.c +index 4d7d8f2f66ea..71edd3cfe684 100644 +--- a/drivers/staging/speakup/spk_ttyio.c ++++ b/drivers/staging/speakup/spk_ttyio.c +@@ -246,7 +246,8 @@ static void spk_ttyio_send_xchar(char ch) + return; + } + +- speakup_tty->ops->send_xchar(speakup_tty, ch); ++ if (speakup_tty->ops->send_xchar) ++ speakup_tty->ops->send_xchar(speakup_tty, ch); + mutex_unlock(&speakup_tty_mutex); + } + +@@ -258,7 +259,8 @@ static void spk_ttyio_tiocmset(unsigned int set, unsigned int clear) + return; + } + +- speakup_tty->ops->tiocmset(speakup_tty, set, clear); ++ if (speakup_tty->ops->tiocmset) ++ speakup_tty->ops->tiocmset(speakup_tty, set, clear); + mutex_unlock(&speakup_tty_mutex); + } + +diff --git a/drivers/thermal/broadcom/bcm2835_thermal.c b/drivers/thermal/broadcom/bcm2835_thermal.c +index 23ad4f9f2143..24b006a95142 100644 +--- a/drivers/thermal/broadcom/bcm2835_thermal.c ++++ b/drivers/thermal/broadcom/bcm2835_thermal.c +@@ -27,6 +27,8 @@ + #include + #include + ++#include "../thermal_hwmon.h" ++ + #define BCM2835_TS_TSENSCTL 0x00 + #define BCM2835_TS_TSENSSTAT 0x04 + +@@ -275,6 +277,15 @@ static int bcm2835_thermal_probe(struct platform_device *pdev) + + platform_set_drvdata(pdev, tz); + ++ /* ++ * Thermal_zone doesn't enable hwmon as default, ++ * enable it here ++ */ ++ tz->tzp->no_hwmon = false; ++ err = thermal_add_hwmon_sysfs(tz); ++ 
if (err) ++ goto err_tz; ++ + bcm2835_thermal_debugfs(pdev); + + return 0; +diff --git a/drivers/thermal/thermal-generic-adc.c b/drivers/thermal/thermal-generic-adc.c +index 73f55d6a1721..ad601e5b4175 100644 +--- a/drivers/thermal/thermal-generic-adc.c ++++ b/drivers/thermal/thermal-generic-adc.c +@@ -26,7 +26,7 @@ struct gadc_thermal_info { + + static int gadc_thermal_adc_to_temp(struct gadc_thermal_info *gti, int val) + { +- int temp, adc_hi, adc_lo; ++ int temp, temp_hi, temp_lo, adc_hi, adc_lo; + int i; + + for (i = 0; i < gti->nlookup_table; i++) { +@@ -36,13 +36,17 @@ static int gadc_thermal_adc_to_temp(struct gadc_thermal_info *gti, int val) + + if (i == 0) { + temp = gti->lookup_table[0]; +- } else if (i >= (gti->nlookup_table - 1)) { ++ } else if (i >= gti->nlookup_table) { + temp = gti->lookup_table[2 * (gti->nlookup_table - 1)]; + } else { + adc_hi = gti->lookup_table[2 * i - 1]; + adc_lo = gti->lookup_table[2 * i + 1]; +- temp = gti->lookup_table[2 * i]; +- temp -= ((val - adc_lo) * 1000) / (adc_hi - adc_lo); ++ ++ temp_hi = gti->lookup_table[2 * i - 2]; ++ temp_lo = gti->lookup_table[2 * i]; ++ ++ temp = temp_hi + mult_frac(temp_lo - temp_hi, val - adc_hi, ++ adc_lo - adc_hi); + } + + return temp; +diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c +index 2b1b0ba393a4..17d6079c7642 100644 +--- a/drivers/thermal/thermal_core.c ++++ b/drivers/thermal/thermal_core.c +@@ -454,16 +454,20 @@ static void update_temperature(struct thermal_zone_device *tz) + tz->last_temperature, tz->temperature); + } + +-static void thermal_zone_device_reset(struct thermal_zone_device *tz) ++static void thermal_zone_device_init(struct thermal_zone_device *tz) + { + struct thermal_instance *pos; +- + tz->temperature = THERMAL_TEMP_INVALID; +- tz->passive = 0; + list_for_each_entry(pos, &tz->thermal_instances, tz_node) + pos->initialized = false; + } + ++static void thermal_zone_device_reset(struct thermal_zone_device *tz) ++{ ++ tz->passive = 0; ++ 
thermal_zone_device_init(tz); ++} ++ + void thermal_zone_device_update(struct thermal_zone_device *tz, + enum thermal_notify_event event) + { +@@ -1503,7 +1507,7 @@ static int thermal_pm_notify(struct notifier_block *nb, + case PM_POST_SUSPEND: + atomic_set(&in_suspend, 0); + list_for_each_entry(tz, &thermal_tz_list, node) { +- thermal_zone_device_reset(tz); ++ thermal_zone_device_init(tz); + thermal_zone_device_update(tz, + THERMAL_EVENT_UNSPECIFIED); + } +diff --git a/drivers/thermal/thermal_hwmon.h b/drivers/thermal/thermal_hwmon.h +index c798fdb2ae43..f97f76691bd0 100644 +--- a/drivers/thermal/thermal_hwmon.h ++++ b/drivers/thermal/thermal_hwmon.h +@@ -34,13 +34,13 @@ + int thermal_add_hwmon_sysfs(struct thermal_zone_device *tz); + void thermal_remove_hwmon_sysfs(struct thermal_zone_device *tz); + #else +-static int ++static inline int + thermal_add_hwmon_sysfs(struct thermal_zone_device *tz) + { + return 0; + } + +-static void ++static inline void + thermal_remove_hwmon_sysfs(struct thermal_zone_device *tz) + { + } +diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c +index 4986b4aebe80..790375b5eeb2 100644 +--- a/drivers/tty/serial/8250/8250_pci.c ++++ b/drivers/tty/serial/8250/8250_pci.c +@@ -3425,6 +3425,11 @@ static int + serial_pci_guess_board(struct pci_dev *dev, struct pciserial_board *board) + { + int num_iomem, num_port, first_port = -1, i; ++ int rc; ++ ++ rc = serial_pci_is_class_communication(dev); ++ if (rc) ++ return rc; + + /* + * Should we try to make guesses for multiport serial devices later? 
+@@ -3652,10 +3657,6 @@ pciserial_init_one(struct pci_dev *dev, const struct pci_device_id *ent) + + board = &pci_boards[ent->driver_data]; + +- rc = serial_pci_is_class_communication(dev); +- if (rc) +- return rc; +- + rc = serial_pci_is_blacklisted(dev); + if (rc) + return rc; +diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c +index fd64ac2c1a74..716c33b2a11c 100644 +--- a/drivers/tty/serial/fsl_lpuart.c ++++ b/drivers/tty/serial/fsl_lpuart.c +@@ -1482,6 +1482,8 @@ lpuart_set_termios(struct uart_port *port, struct ktermios *termios, + else + cr1 &= ~UARTCR1_PT; + } ++ } else { ++ cr1 &= ~UARTCR1_PE; + } + + /* ask the core to calculate the divisor */ +@@ -1694,6 +1696,8 @@ lpuart32_set_termios(struct uart_port *port, struct ktermios *termios, + else + ctrl &= ~UARTCTRL_PT; + } ++ } else { ++ ctrl &= ~UARTCTRL_PE; + } + + /* ask the core to calculate the divisor */ +diff --git a/drivers/tty/serial/samsung.c b/drivers/tty/serial/samsung.c +index 57baa84ccf86..f4b8e4e17a86 100644 +--- a/drivers/tty/serial/samsung.c ++++ b/drivers/tty/serial/samsung.c +@@ -1343,11 +1343,14 @@ static void s3c24xx_serial_set_termios(struct uart_port *port, + wr_regl(port, S3C2410_ULCON, ulcon); + wr_regl(port, S3C2410_UBRDIV, quot); + ++ port->status &= ~UPSTAT_AUTOCTS; ++ + umcon = rd_regl(port, S3C2410_UMCON); + if (termios->c_cflag & CRTSCTS) { + umcon |= S3C2410_UMCOM_AFC; + /* Disable RTS when RX FIFO contains 63 bytes */ + umcon &= ~S3C2412_UMCON_AFC_8; ++ port->status = UPSTAT_AUTOCTS; + } else { + umcon &= ~S3C2410_UMCOM_AFC; + } +diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c +index 94ac6c6e8fb8..51a58c367953 100644 +--- a/drivers/tty/serial/serial_core.c ++++ b/drivers/tty/serial/serial_core.c +@@ -143,6 +143,9 @@ static void uart_start(struct tty_struct *tty) + struct uart_port *port; + unsigned long flags; + ++ if (!state) ++ return; ++ + port = uart_port_lock(state, flags); + __uart_start(tty); + 
uart_port_unlock(port, flags); +@@ -2415,6 +2418,9 @@ static void uart_poll_put_char(struct tty_driver *driver, int line, char ch) + struct uart_state *state = drv->state + line; + struct uart_port *port; + ++ if (!state) ++ return; ++ + port = uart_port_ref(state); + if (!port) + return; +diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c +index a073cb5be013..4a4e666a8e09 100644 +--- a/drivers/usb/core/hub.c ++++ b/drivers/usb/core/hub.c +@@ -1110,6 +1110,16 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type) + USB_PORT_FEAT_ENABLE); + } + ++ /* ++ * Add debounce if USB3 link is in polling/link training state. ++ * Link will automatically transition to Enabled state after ++ * link training completes. ++ */ ++ if (hub_is_superspeed(hdev) && ++ ((portstatus & USB_PORT_STAT_LINK_STATE) == ++ USB_SS_PORT_LS_POLLING)) ++ need_debounce_delay = true; ++ + /* Clear status-change flags; we'll debounce later */ + if (portchange & USB_PORT_STAT_C_CONNECTION) { + need_debounce_delay = true; +diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c +index 727bf3c9f53b..2f96d2d0addd 100644 +--- a/drivers/usb/dwc3/gadget.c ++++ b/drivers/usb/dwc3/gadget.c +@@ -890,8 +890,6 @@ static void __dwc3_prepare_one_trb(struct dwc3_ep *dep, struct dwc3_trb *trb, + struct usb_gadget *gadget = &dwc->gadget; + enum usb_device_speed speed = gadget->speed; + +- dwc3_ep_inc_enq(dep); +- + trb->size = DWC3_TRB_SIZE_LENGTH(length); + trb->bpl = lower_32_bits(dma); + trb->bph = upper_32_bits(dma); +@@ -961,16 +959,20 @@ static void __dwc3_prepare_one_trb(struct dwc3_ep *dep, struct dwc3_trb *trb, + usb_endpoint_type(dep->endpoint.desc)); + } + +- /* always enable Continue on Short Packet */ ++ /* ++ * Enable Continue on Short Packet ++ * when endpoint is not a stream capable ++ */ + if (usb_endpoint_dir_out(dep->endpoint.desc)) { +- trb->ctrl |= DWC3_TRB_CTRL_CSP; ++ if (!dep->stream_capable) ++ trb->ctrl |= DWC3_TRB_CTRL_CSP; + + if (short_not_ok) 
+ trb->ctrl |= DWC3_TRB_CTRL_ISP_IMI; + } + + if ((!no_interrupt && !chain) || +- (dwc3_calc_trbs_left(dep) == 0)) ++ (dwc3_calc_trbs_left(dep) == 1)) + trb->ctrl |= DWC3_TRB_CTRL_IOC; + + if (chain) +@@ -981,6 +983,8 @@ static void __dwc3_prepare_one_trb(struct dwc3_ep *dep, struct dwc3_trb *trb, + + trb->ctrl |= DWC3_TRB_CTRL_HWO; + ++ dwc3_ep_inc_enq(dep); ++ + trace_dwc3_prepare_trb(dep, trb); + } + +@@ -1110,7 +1114,7 @@ static void dwc3_prepare_one_trb_linear(struct dwc3_ep *dep, + unsigned int maxp = usb_endpoint_maxp(dep->endpoint.desc); + unsigned int rem = length % maxp; + +- if (rem && usb_endpoint_dir_out(dep->endpoint.desc)) { ++ if ((!length || rem) && usb_endpoint_dir_out(dep->endpoint.desc)) { + struct dwc3 *dwc = dep->dwc; + struct dwc3_trb *trb; + +diff --git a/drivers/usb/dwc3/trace.h b/drivers/usb/dwc3/trace.h +index 6504b116da04..62ec20a26013 100644 +--- a/drivers/usb/dwc3/trace.h ++++ b/drivers/usb/dwc3/trace.h +@@ -262,9 +262,11 @@ DECLARE_EVENT_CLASS(dwc3_log_trb, + s = "2x "; + break; + case 3: ++ default: + s = "3x "; + break; + } ++ break; + default: + s = ""; + } s; }), +diff --git a/drivers/usb/gadget/udc/net2272.c b/drivers/usb/gadget/udc/net2272.c +index 8f85a51bd2b3..e0759a826b60 100644 +--- a/drivers/usb/gadget/udc/net2272.c ++++ b/drivers/usb/gadget/udc/net2272.c +@@ -2096,7 +2096,7 @@ static irqreturn_t net2272_irq(int irq, void *_dev) + #if defined(PLX_PCI_RDK2) + /* see if PCI int for us by checking irqstat */ + intcsr = readl(dev->rdk2.fpga_base_addr + RDK2_IRQSTAT); +- if (!intcsr & (1 << NET2272_PCI_IRQ)) { ++ if (!(intcsr & (1 << NET2272_PCI_IRQ))) { + spin_unlock(&dev->lock); + return IRQ_NONE; + } +diff --git a/drivers/usb/mtu3/mtu3_core.c b/drivers/usb/mtu3/mtu3_core.c +index 947579842ad7..95978e3b363e 100644 +--- a/drivers/usb/mtu3/mtu3_core.c ++++ b/drivers/usb/mtu3/mtu3_core.c +@@ -564,8 +564,10 @@ static void mtu3_regs_init(struct mtu3 *mtu) + if (mtu->is_u3_ip) { + /* disable LGO_U1/U2 by default */ + 
mtu3_clrbits(mbase, U3D_LINK_POWER_CONTROL, +- SW_U1_ACCEPT_ENABLE | SW_U2_ACCEPT_ENABLE | + SW_U1_REQUEST_ENABLE | SW_U2_REQUEST_ENABLE); ++ /* enable accept LGO_U1/U2 link command from host */ ++ mtu3_setbits(mbase, U3D_LINK_POWER_CONTROL, ++ SW_U1_ACCEPT_ENABLE | SW_U2_ACCEPT_ENABLE); + /* device responses to u3_exit from host automatically */ + mtu3_clrbits(mbase, U3D_LTSSM_CTRL, SOFT_U3_EXIT_EN); + /* automatically build U2 link when U3 detect fail */ +diff --git a/drivers/usb/mtu3/mtu3_gadget_ep0.c b/drivers/usb/mtu3/mtu3_gadget_ep0.c +index 958d74dd2b78..7997cf5f06fc 100644 +--- a/drivers/usb/mtu3/mtu3_gadget_ep0.c ++++ b/drivers/usb/mtu3/mtu3_gadget_ep0.c +@@ -335,9 +335,9 @@ static int ep0_handle_feature_dev(struct mtu3 *mtu, + + lpc = mtu3_readl(mbase, U3D_LINK_POWER_CONTROL); + if (set) +- lpc |= SW_U1_ACCEPT_ENABLE; ++ lpc |= SW_U1_REQUEST_ENABLE; + else +- lpc &= ~SW_U1_ACCEPT_ENABLE; ++ lpc &= ~SW_U1_REQUEST_ENABLE; + mtu3_writel(mbase, U3D_LINK_POWER_CONTROL, lpc); + + mtu->u1_enable = !!set; +@@ -350,9 +350,9 @@ static int ep0_handle_feature_dev(struct mtu3 *mtu, + + lpc = mtu3_readl(mbase, U3D_LINK_POWER_CONTROL); + if (set) +- lpc |= SW_U2_ACCEPT_ENABLE; ++ lpc |= SW_U2_REQUEST_ENABLE; + else +- lpc &= ~SW_U2_ACCEPT_ENABLE; ++ lpc &= ~SW_U2_REQUEST_ENABLE; + mtu3_writel(mbase, U3D_LINK_POWER_CONTROL, lpc); + + mtu->u2_enable = !!set; +diff --git a/drivers/usb/musb/musb_dsps.c b/drivers/usb/musb/musb_dsps.c +index dbb482b7e0ba..b7d460adaa61 100644 +--- a/drivers/usb/musb/musb_dsps.c ++++ b/drivers/usb/musb/musb_dsps.c +@@ -242,8 +242,13 @@ static int dsps_check_status(struct musb *musb, void *unused) + + switch (musb->xceiv->otg->state) { + case OTG_STATE_A_WAIT_VRISE: +- dsps_mod_timer_optional(glue); +- break; ++ if (musb->port_mode == MUSB_HOST) { ++ musb->xceiv->otg->state = OTG_STATE_A_WAIT_BCON; ++ dsps_mod_timer_optional(glue); ++ break; ++ } ++ /* fall through */ ++ + case OTG_STATE_A_WAIT_BCON: + /* keep VBUS on for host-only mode */ + if 
(musb->port_mode == MUSB_PORT_MODE_HOST) { +diff --git a/drivers/usb/musb/musb_gadget.c b/drivers/usb/musb/musb_gadget.c +index 87f932d4b72c..1e431634589d 100644 +--- a/drivers/usb/musb/musb_gadget.c ++++ b/drivers/usb/musb/musb_gadget.c +@@ -477,13 +477,10 @@ void musb_g_tx(struct musb *musb, u8 epnum) + } + + if (request) { +- u8 is_dma = 0; +- bool short_packet = false; + + trace_musb_req_tx(req); + + if (dma && (csr & MUSB_TXCSR_DMAENAB)) { +- is_dma = 1; + csr |= MUSB_TXCSR_P_WZC_BITS; + csr &= ~(MUSB_TXCSR_DMAENAB | MUSB_TXCSR_P_UNDERRUN | + MUSB_TXCSR_TXPKTRDY | MUSB_TXCSR_AUTOSET); +@@ -501,16 +498,8 @@ void musb_g_tx(struct musb *musb, u8 epnum) + */ + if ((request->zero && request->length) + && (request->length % musb_ep->packet_sz == 0) +- && (request->actual == request->length)) +- short_packet = true; ++ && (request->actual == request->length)) { + +- if ((musb_dma_inventra(musb) || musb_dma_ux500(musb)) && +- (is_dma && (!dma->desired_mode || +- (request->actual & +- (musb_ep->packet_sz - 1))))) +- short_packet = true; +- +- if (short_packet) { + /* + * On DMA completion, FIFO may not be + * available yet... 
+diff --git a/drivers/usb/musb/musbhsdma.c b/drivers/usb/musb/musbhsdma.c +index 3620073da58c..512108e22d2b 100644 +--- a/drivers/usb/musb/musbhsdma.c ++++ b/drivers/usb/musb/musbhsdma.c +@@ -320,12 +320,10 @@ static irqreturn_t dma_controller_irq(int irq, void *private_data) + channel->status = MUSB_DMA_STATUS_FREE; + + /* completed */ +- if ((devctl & MUSB_DEVCTL_HM) +- && (musb_channel->transmit) +- && ((channel->desired_mode == 0) +- || (channel->actual_len & +- (musb_channel->max_packet_sz - 1))) +- ) { ++ if (musb_channel->transmit && ++ (!channel->desired_mode || ++ (channel->actual_len % ++ musb_channel->max_packet_sz))) { + u8 epnum = musb_channel->epnum; + int offset = musb->io.ep_offset(epnum, + MUSB_TXCSR); +@@ -337,11 +335,14 @@ static irqreturn_t dma_controller_irq(int irq, void *private_data) + */ + musb_ep_select(mbase, epnum); + txcsr = musb_readw(mbase, offset); +- txcsr &= ~(MUSB_TXCSR_DMAENAB ++ if (channel->desired_mode == 1) { ++ txcsr &= ~(MUSB_TXCSR_DMAENAB + | MUSB_TXCSR_AUTOSET); +- musb_writew(mbase, offset, txcsr); +- /* Send out the packet */ +- txcsr &= ~MUSB_TXCSR_DMAMODE; ++ musb_writew(mbase, offset, txcsr); ++ /* Send out the packet */ ++ txcsr &= ~MUSB_TXCSR_DMAMODE; ++ txcsr |= MUSB_TXCSR_DMAENAB; ++ } + txcsr |= MUSB_TXCSR_TXPKTRDY; + musb_writew(mbase, offset, txcsr); + } +diff --git a/drivers/usb/phy/phy-am335x.c b/drivers/usb/phy/phy-am335x.c +index 7e5aece769da..cb1382a52765 100644 +--- a/drivers/usb/phy/phy-am335x.c ++++ b/drivers/usb/phy/phy-am335x.c +@@ -60,9 +60,6 @@ static int am335x_phy_probe(struct platform_device *pdev) + if (ret) + return ret; + +- ret = usb_add_phy_dev(&am_phy->usb_phy_gen.phy); +- if (ret) +- return ret; + am_phy->usb_phy_gen.phy.init = am335x_init; + am_phy->usb_phy_gen.phy.shutdown = am335x_shutdown; + +@@ -81,7 +78,7 @@ static int am335x_phy_probe(struct platform_device *pdev) + device_set_wakeup_enable(dev, false); + phy_ctrl_power(am_phy->phy_ctrl, am_phy->id, am_phy->dr_mode, false); + +- 
return 0; ++ return usb_add_phy_dev(&am_phy->usb_phy_gen.phy); + } + + static int am335x_phy_remove(struct platform_device *pdev) +diff --git a/drivers/video/fbdev/clps711x-fb.c b/drivers/video/fbdev/clps711x-fb.c +index ff561073ee4e..42f909618f04 100644 +--- a/drivers/video/fbdev/clps711x-fb.c ++++ b/drivers/video/fbdev/clps711x-fb.c +@@ -287,14 +287,17 @@ static int clps711x_fb_probe(struct platform_device *pdev) + } + + ret = of_get_fb_videomode(disp, &cfb->mode, OF_USE_NATIVE_MODE); +- if (ret) ++ if (ret) { ++ of_node_put(disp); + goto out_fb_release; ++ } + + of_property_read_u32(disp, "ac-prescale", &cfb->ac_prescale); + cfb->cmap_invert = of_property_read_bool(disp, "cmap-invert"); + + ret = of_property_read_u32(disp, "bits-per-pixel", + &info->var.bits_per_pixel); ++ of_node_put(disp); + if (ret) + goto out_fb_release; + +diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c +index 04612f938bab..85787119bfbf 100644 +--- a/drivers/video/fbdev/core/fbcon.c ++++ b/drivers/video/fbdev/core/fbcon.c +@@ -3041,7 +3041,7 @@ static int fbcon_fb_unbind(int idx) + for (i = first_fb_vc; i <= last_fb_vc; i++) { + if (con2fb_map[i] != idx && + con2fb_map[i] != -1) { +- new_idx = i; ++ new_idx = con2fb_map[i]; + break; + } + } +diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c +index 11d73b5fc885..302cce7185e3 100644 +--- a/drivers/video/fbdev/core/fbmem.c ++++ b/drivers/video/fbdev/core/fbmem.c +@@ -435,7 +435,9 @@ static void fb_do_show_logo(struct fb_info *info, struct fb_image *image, + image->dx += image->width + 8; + } + } else if (rotate == FB_ROTATE_UD) { +- for (x = 0; x < num; x++) { ++ u32 dx = image->dx; ++ ++ for (x = 0; x < num && image->dx <= dx; x++) { + info->fbops->fb_imageblit(info, image); + image->dx -= image->width + 8; + } +@@ -447,7 +449,9 @@ static void fb_do_show_logo(struct fb_info *info, struct fb_image *image, + image->dy += image->height + 8; + } + } else if (rotate == FB_ROTATE_CCW) 
{ +- for (x = 0; x < num; x++) { ++ u32 dy = image->dy; ++ ++ for (x = 0; x < num && image->dy <= dy; x++) { + info->fbops->fb_imageblit(info, image); + image->dy -= image->height + 8; + } +diff --git a/drivers/watchdog/renesas_wdt.c b/drivers/watchdog/renesas_wdt.c +index 831ef83f6de1..c4a17d72d025 100644 +--- a/drivers/watchdog/renesas_wdt.c ++++ b/drivers/watchdog/renesas_wdt.c +@@ -74,12 +74,17 @@ static int rwdt_init_timeout(struct watchdog_device *wdev) + static int rwdt_start(struct watchdog_device *wdev) + { + struct rwdt_priv *priv = watchdog_get_drvdata(wdev); ++ u8 val; + + pm_runtime_get_sync(wdev->parent); + +- rwdt_write(priv, 0, RWTCSRB); +- rwdt_write(priv, priv->cks, RWTCSRA); ++ /* Stop the timer before we modify any register */ ++ val = readb_relaxed(priv->base + RWTCSRA) & ~RWTCSRA_TME; ++ rwdt_write(priv, val, RWTCSRA); ++ + rwdt_init_timeout(wdev); ++ rwdt_write(priv, priv->cks, RWTCSRA); ++ rwdt_write(priv, 0, RWTCSRB); + + while (readb_relaxed(priv->base + RWTCSRA) & RWTCSRA_WRFLG) + cpu_relax(); +diff --git a/fs/binfmt_script.c b/fs/binfmt_script.c +index 7cde3f46ad26..d0078cbb718b 100644 +--- a/fs/binfmt_script.c ++++ b/fs/binfmt_script.c +@@ -42,10 +42,14 @@ static int load_script(struct linux_binprm *bprm) + fput(bprm->file); + bprm->file = NULL; + +- bprm->buf[BINPRM_BUF_SIZE - 1] = '\0'; +- if ((cp = strchr(bprm->buf, '\n')) == NULL) +- cp = bprm->buf+BINPRM_BUF_SIZE-1; ++ for (cp = bprm->buf+2;; cp++) { ++ if (cp >= bprm->buf + BINPRM_BUF_SIZE) ++ return -ENOEXEC; ++ if (!*cp || (*cp == '\n')) ++ break; ++ } + *cp = '\0'; ++ + while (cp > bprm->buf) { + cp--; + if ((*cp == ' ') || (*cp == '\t')) +diff --git a/fs/cifs/readdir.c b/fs/cifs/readdir.c +index ef24b4527459..68183872bf8b 100644 +--- a/fs/cifs/readdir.c ++++ b/fs/cifs/readdir.c +@@ -655,7 +655,14 @@ find_cifs_entry(const unsigned int xid, struct cifs_tcon *tcon, loff_t pos, + /* scan and find it */ + int i; + char *cur_ent; +- char *end_of_smb = cfile->srch_inf.ntwrk_buf_start 
+ ++ char *end_of_smb; ++ ++ if (cfile->srch_inf.ntwrk_buf_start == NULL) { ++ cifs_dbg(VFS, "ntwrk_buf_start is NULL during readdir\n"); ++ return -EIO; ++ } ++ ++ end_of_smb = cfile->srch_inf.ntwrk_buf_start + + server->ops->calc_smb_size( + cfile->srch_inf.ntwrk_buf_start); + +diff --git a/fs/dlm/ast.c b/fs/dlm/ast.c +index 07fed838d8fd..15fa4239ae9f 100644 +--- a/fs/dlm/ast.c ++++ b/fs/dlm/ast.c +@@ -290,6 +290,8 @@ void dlm_callback_suspend(struct dlm_ls *ls) + flush_workqueue(ls->ls_callback_wq); + } + ++#define MAX_CB_QUEUE 25 ++ + void dlm_callback_resume(struct dlm_ls *ls) + { + struct dlm_lkb *lkb, *safe; +@@ -300,15 +302,23 @@ void dlm_callback_resume(struct dlm_ls *ls) + if (!ls->ls_callback_wq) + return; + ++more: + mutex_lock(&ls->ls_cb_mutex); + list_for_each_entry_safe(lkb, safe, &ls->ls_cb_delay, lkb_cb_list) { + list_del_init(&lkb->lkb_cb_list); + queue_work(ls->ls_callback_wq, &lkb->lkb_cb_work); + count++; ++ if (count == MAX_CB_QUEUE) ++ break; + } + mutex_unlock(&ls->ls_cb_mutex); + + if (count) + log_rinfo(ls, "dlm_callback_resume %d", count); ++ if (count == MAX_CB_QUEUE) { ++ count = 0; ++ cond_resched(); ++ goto more; ++ } + } + +diff --git a/fs/eventpoll.c b/fs/eventpoll.c +index 2fabd19cdeea..c291bf61afb9 100644 +--- a/fs/eventpoll.c ++++ b/fs/eventpoll.c +@@ -1167,7 +1167,7 @@ static int ep_poll_callback(wait_queue_entry_t *wait, unsigned mode, int sync, v + * semantics). All the events that happen during that period of time are + * chained in ep->ovflist and requeued later on. 
+ */ +- if (unlikely(ep->ovflist != EP_UNACTIVE_PTR)) { ++ if (ep->ovflist != EP_UNACTIVE_PTR) { + if (epi->next == EP_UNACTIVE_PTR) { + epi->next = ep->ovflist; + ep->ovflist = epi; +diff --git a/fs/f2fs/acl.c b/fs/f2fs/acl.c +index 436b3a1464d9..5e4860b8bbfc 100644 +--- a/fs/f2fs/acl.c ++++ b/fs/f2fs/acl.c +@@ -349,12 +349,14 @@ static int f2fs_acl_create(struct inode *dir, umode_t *mode, + return PTR_ERR(p); + + clone = f2fs_acl_clone(p, GFP_NOFS); +- if (!clone) +- goto no_mem; ++ if (!clone) { ++ ret = -ENOMEM; ++ goto release_acl; ++ } + + ret = f2fs_acl_create_masq(clone, mode); + if (ret < 0) +- goto no_mem_clone; ++ goto release_clone; + + if (ret == 0) + posix_acl_release(clone); +@@ -368,11 +370,11 @@ static int f2fs_acl_create(struct inode *dir, umode_t *mode, + + return 0; + +-no_mem_clone: ++release_clone: + posix_acl_release(clone); +-no_mem: ++release_acl: + posix_acl_release(p); +- return -ENOMEM; ++ return ret; + } + + int f2fs_init_acl(struct inode *inode, struct inode *dir, struct page *ipage, +diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c +index c68b319b07aa..3d37124eb63e 100644 +--- a/fs/f2fs/data.c ++++ b/fs/f2fs/data.c +@@ -1880,6 +1880,7 @@ static int prepare_write_begin(struct f2fs_sb_info *sbi, + bool locked = false; + struct extent_info ei = {0,0,0}; + int err = 0; ++ int flag; + + /* + * we already allocated all the blocks, so we don't need to get +@@ -1889,9 +1890,15 @@ static int prepare_write_begin(struct f2fs_sb_info *sbi, + !is_inode_flag_set(inode, FI_NO_PREALLOC)) + return 0; + ++ /* f2fs_lock_op avoids race between write CP and convert_inline_page */ ++ if (f2fs_has_inline_data(inode) && pos + len > MAX_INLINE_DATA(inode)) ++ flag = F2FS_GET_BLOCK_DEFAULT; ++ else ++ flag = F2FS_GET_BLOCK_PRE_AIO; ++ + if (f2fs_has_inline_data(inode) || + (pos & PAGE_MASK) >= i_size_read(inode)) { +- __do_map_lock(sbi, F2FS_GET_BLOCK_PRE_AIO, true); ++ __do_map_lock(sbi, flag, true); + locked = true; + } + restart: +@@ -1929,6 +1936,7 @@ restart: 
+ f2fs_put_dnode(&dn); + __do_map_lock(sbi, F2FS_GET_BLOCK_PRE_AIO, + true); ++ WARN_ON(flag != F2FS_GET_BLOCK_PRE_AIO); + locked = true; + goto restart; + } +@@ -1942,7 +1950,7 @@ out: + f2fs_put_dnode(&dn); + unlock_out: + if (locked) +- __do_map_lock(sbi, F2FS_GET_BLOCK_PRE_AIO, false); ++ __do_map_lock(sbi, flag, false); + return err; + } + +diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h +index 3f1a44696036..634165fb64f1 100644 +--- a/fs/f2fs/f2fs.h ++++ b/fs/f2fs/f2fs.h +@@ -2294,10 +2294,19 @@ static inline bool is_dot_dotdot(const struct qstr *str) + + static inline bool f2fs_may_extent_tree(struct inode *inode) + { +- if (!test_opt(F2FS_I_SB(inode), EXTENT_CACHE) || ++ struct f2fs_sb_info *sbi = F2FS_I_SB(inode); ++ ++ if (!test_opt(sbi, EXTENT_CACHE) || + is_inode_flag_set(inode, FI_NO_EXTENT)) + return false; + ++ /* ++ * for recovered files during mount do not create extents ++ * if shrinker is not registered. ++ */ ++ if (list_empty(&sbi->s_list)) ++ return false; ++ + return S_ISREG(inode->i_mode); + } + +diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c +index 7d3189f1941c..5f549bc4e097 100644 +--- a/fs/f2fs/file.c ++++ b/fs/f2fs/file.c +@@ -205,6 +205,9 @@ static int f2fs_do_sync_file(struct file *file, loff_t start, loff_t end, + + trace_f2fs_sync_file_enter(inode); + ++ if (S_ISDIR(inode->i_mode)) ++ goto go_write; ++ + /* if fdatasync is triggered, let's do in-place-update */ + if (datasync || get_dirty_pages(inode) <= SM_I(sbi)->min_fsync_blocks) + set_inode_flag(inode, FI_NEED_IPU); +diff --git a/fs/f2fs/shrinker.c b/fs/f2fs/shrinker.c +index 5c60fc28ec75..ec71d2e29a15 100644 +--- a/fs/f2fs/shrinker.c ++++ b/fs/f2fs/shrinker.c +@@ -138,6 +138,6 @@ void f2fs_leave_shrinker(struct f2fs_sb_info *sbi) + f2fs_shrink_extent_tree(sbi, __count_extent_cache(sbi)); + + spin_lock(&f2fs_list_lock); +- list_del(&sbi->s_list); ++ list_del_init(&sbi->s_list); + spin_unlock(&f2fs_list_lock); + } +diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c +index 
f7280c44cd4b..63fd33383413 100644 +--- a/fs/fuse/dev.c ++++ b/fs/fuse/dev.c +@@ -1691,7 +1691,6 @@ static int fuse_retrieve(struct fuse_conn *fc, struct inode *inode, + req->in.h.nodeid = outarg->nodeid; + req->in.numargs = 2; + req->in.argpages = 1; +- req->page_descs[0].offset = offset; + req->end = fuse_retrieve_end; + + index = outarg->offset >> PAGE_SHIFT; +@@ -1706,6 +1705,7 @@ static int fuse_retrieve(struct fuse_conn *fc, struct inode *inode, + + this_num = min_t(unsigned, num, PAGE_SIZE - offset); + req->pages[req->num_pages] = page; ++ req->page_descs[req->num_pages].offset = offset; + req->page_descs[req->num_pages].length = this_num; + req->num_pages++; + +@@ -2024,8 +2024,10 @@ static ssize_t fuse_dev_splice_write(struct pipe_inode_info *pipe, + + ret = fuse_dev_do_write(fud, &cs, len); + ++ pipe_lock(pipe); + for (idx = 0; idx < nbuf; idx++) + pipe_buf_release(pipe, &bufs[idx]); ++ pipe_unlock(pipe); + + out: + kfree(bufs); +diff --git a/fs/fuse/file.c b/fs/fuse/file.c +index 52514a64dcd6..19ea122a7d03 100644 +--- a/fs/fuse/file.c ++++ b/fs/fuse/file.c +@@ -1777,7 +1777,7 @@ static bool fuse_writepage_in_flight(struct fuse_req *new_req, + spin_unlock(&fc->lock); + + dec_wb_stat(&bdi->wb, WB_WRITEBACK); +- dec_node_page_state(page, NR_WRITEBACK_TEMP); ++ dec_node_page_state(new_req->pages[0], NR_WRITEBACK_TEMP); + wb_writeout_inc(&bdi->wb); + fuse_writepage_free(fc, new_req); + fuse_request_free(new_req); +diff --git a/fs/nfs/super.c b/fs/nfs/super.c +index 38de09b08e96..3c4aeb83e1c4 100644 +--- a/fs/nfs/super.c ++++ b/fs/nfs/super.c +@@ -2401,8 +2401,7 @@ static int nfs_compare_mount_options(const struct super_block *s, const struct n + goto Ebusy; + if (a->acdirmax != b->acdirmax) + goto Ebusy; +- if (b->auth_info.flavor_len > 0 && +- clnt_a->cl_auth->au_flavor != clnt_b->cl_auth->au_flavor) ++ if (clnt_a->cl_auth->au_flavor != clnt_b->cl_auth->au_flavor) + goto Ebusy; + return 1; + Ebusy: +diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c 
+index 3cef6bfa09d4..94128643ec1a 100644 +--- a/fs/nfsd/nfs4state.c ++++ b/fs/nfsd/nfs4state.c +@@ -1472,8 +1472,10 @@ free_session_slots(struct nfsd4_session *ses) + { + int i; + +- for (i = 0; i < ses->se_fchannel.maxreqs; i++) ++ for (i = 0; i < ses->se_fchannel.maxreqs; i++) { ++ free_svc_cred(&ses->se_slots[i]->sl_cred); + kfree(ses->se_slots[i]); ++ } + } + + /* +@@ -2331,14 +2333,18 @@ nfsd4_store_cache_entry(struct nfsd4_compoundres *resp) + + dprintk("--> %s slot %p\n", __func__, slot); + ++ slot->sl_flags |= NFSD4_SLOT_INITIALIZED; + slot->sl_opcnt = resp->opcnt; + slot->sl_status = resp->cstate.status; ++ free_svc_cred(&slot->sl_cred); ++ copy_cred(&slot->sl_cred, &resp->rqstp->rq_cred); + +- slot->sl_flags |= NFSD4_SLOT_INITIALIZED; +- if (nfsd4_not_cached(resp)) { +- slot->sl_datalen = 0; ++ if (!nfsd4_cache_this(resp)) { ++ slot->sl_flags &= ~NFSD4_SLOT_CACHED; + return; + } ++ slot->sl_flags |= NFSD4_SLOT_CACHED; ++ + base = resp->cstate.data_offset; + slot->sl_datalen = buf->len - base; + if (read_bytes_from_xdr_buf(buf, base, slot->sl_data, slot->sl_datalen)) +@@ -2365,8 +2371,16 @@ nfsd4_enc_sequence_replay(struct nfsd4_compoundargs *args, + op = &args->ops[resp->opcnt - 1]; + nfsd4_encode_operation(resp, op); + +- /* Return nfserr_retry_uncached_rep in next operation. 
*/ +- if (args->opcnt > 1 && !(slot->sl_flags & NFSD4_SLOT_CACHETHIS)) { ++ if (slot->sl_flags & NFSD4_SLOT_CACHED) ++ return op->status; ++ if (args->opcnt == 1) { ++ /* ++ * The original operation wasn't a solo sequence--we ++ * always cache those--so this retry must not match the ++ * original: ++ */ ++ op->status = nfserr_seq_false_retry; ++ } else { + op = &args->ops[resp->opcnt++]; + op->status = nfserr_retry_uncached_rep; + nfsd4_encode_operation(resp, op); +@@ -3030,6 +3044,34 @@ static bool nfsd4_request_too_big(struct svc_rqst *rqstp, + return xb->len > session->se_fchannel.maxreq_sz; + } + ++static bool replay_matches_cache(struct svc_rqst *rqstp, ++ struct nfsd4_sequence *seq, struct nfsd4_slot *slot) ++{ ++ struct nfsd4_compoundargs *argp = rqstp->rq_argp; ++ ++ if ((bool)(slot->sl_flags & NFSD4_SLOT_CACHETHIS) != ++ (bool)seq->cachethis) ++ return false; ++ /* ++ * If there's an error then the reply can have fewer ops than ++ * the call. But if we cached a reply with *more* ops than the ++ * call you're sending us now, then this new call is clearly not ++ * really a replay of the old one: ++ */ ++ if (slot->sl_opcnt < argp->opcnt) ++ return false; ++ /* This is the only check explicitly called by spec: */ ++ if (!same_creds(&rqstp->rq_cred, &slot->sl_cred)) ++ return false; ++ /* ++ * There may be more comparisons we could actually do, but the ++ * spec doesn't require us to catch every case where the calls ++ * don't match (that would require caching the call as well as ++ * the reply), so we don't bother. 
++ */ ++ return true; ++} ++ + __be32 + nfsd4_sequence(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate, + union nfsd4_op_u *u) +@@ -3089,6 +3131,9 @@ nfsd4_sequence(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate, + status = nfserr_seq_misordered; + if (!(slot->sl_flags & NFSD4_SLOT_INITIALIZED)) + goto out_put_session; ++ status = nfserr_seq_false_retry; ++ if (!replay_matches_cache(rqstp, seq, slot)) ++ goto out_put_session; + cstate->slot = slot; + cstate->session = session; + cstate->clp = clp; +diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c +index 6493df6b1bd5..4b8ebcc6b183 100644 +--- a/fs/nfsd/nfsctl.c ++++ b/fs/nfsd/nfsctl.c +@@ -1126,6 +1126,8 @@ static ssize_t write_v4_end_grace(struct file *file, char *buf, size_t size) + case 'Y': + case 'y': + case '1': ++ if (nn->nfsd_serv) ++ return -EBUSY; + nfsd4_end_grace(nn); + break; + default: +diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h +index 005c911b34ac..86aa92d200e1 100644 +--- a/fs/nfsd/state.h ++++ b/fs/nfsd/state.h +@@ -169,11 +169,13 @@ static inline struct nfs4_delegation *delegstateid(struct nfs4_stid *s) + struct nfsd4_slot { + u32 sl_seqid; + __be32 sl_status; ++ struct svc_cred sl_cred; + u32 sl_datalen; + u16 sl_opcnt; + #define NFSD4_SLOT_INUSE (1 << 0) + #define NFSD4_SLOT_CACHETHIS (1 << 1) + #define NFSD4_SLOT_INITIALIZED (1 << 2) ++#define NFSD4_SLOT_CACHED (1 << 3) + u8 sl_flags; + char sl_data[]; + }; +diff --git a/fs/nfsd/xdr4.h b/fs/nfsd/xdr4.h +index aa4375eac475..f47c392cbd57 100644 +--- a/fs/nfsd/xdr4.h ++++ b/fs/nfsd/xdr4.h +@@ -651,9 +651,18 @@ static inline bool nfsd4_is_solo_sequence(struct nfsd4_compoundres *resp) + return resp->opcnt == 1 && args->ops[0].opnum == OP_SEQUENCE; + } + +-static inline bool nfsd4_not_cached(struct nfsd4_compoundres *resp) ++/* ++ * The session reply cache only needs to cache replies that the client ++ * actually asked us to. 
But it's almost free for us to cache compounds ++ * consisting of only a SEQUENCE op, so we may as well cache those too. ++ * Also, the protocol doesn't give us a convenient response in the case ++ * of a replay of a solo SEQUENCE op that wasn't cached ++ * (RETRY_UNCACHED_REP can only be returned in the second op of a ++ * compound). ++ */ ++static inline bool nfsd4_cache_this(struct nfsd4_compoundres *resp) + { +- return !(resp->cstate.slot->sl_flags & NFSD4_SLOT_CACHETHIS) ++ return (resp->cstate.slot->sl_flags & NFSD4_SLOT_CACHETHIS) + || nfsd4_is_solo_sequence(resp); + } + +diff --git a/fs/ocfs2/Makefile b/fs/ocfs2/Makefile +index 99ee093182cb..cc9b32b9db7c 100644 +--- a/fs/ocfs2/Makefile ++++ b/fs/ocfs2/Makefile +@@ -1,5 +1,5 @@ + # SPDX-License-Identifier: GPL-2.0 +-ccflags-y := -Ifs/ocfs2 ++ccflags-y := -I$(src) + + obj-$(CONFIG_OCFS2_FS) += \ + ocfs2.o \ +diff --git a/fs/ocfs2/buffer_head_io.c b/fs/ocfs2/buffer_head_io.c +index 1d098c3c00e0..9f8250df99f1 100644 +--- a/fs/ocfs2/buffer_head_io.c ++++ b/fs/ocfs2/buffer_head_io.c +@@ -152,7 +152,6 @@ int ocfs2_read_blocks_sync(struct ocfs2_super *osb, u64 block, + #endif + } + +- clear_buffer_uptodate(bh); + get_bh(bh); /* for end_buffer_read_sync() */ + bh->b_end_io = end_buffer_read_sync; + submit_bh(REQ_OP_READ, 0, bh); +@@ -306,7 +305,6 @@ int ocfs2_read_blocks(struct ocfs2_caching_info *ci, u64 block, int nr, + continue; + } + +- clear_buffer_uptodate(bh); + get_bh(bh); /* for end_buffer_read_sync() */ + if (validate) + set_buffer_needs_validate(bh); +diff --git a/fs/ocfs2/dlm/Makefile b/fs/ocfs2/dlm/Makefile +index bd1aab1f49a4..ef2854422a6e 100644 +--- a/fs/ocfs2/dlm/Makefile ++++ b/fs/ocfs2/dlm/Makefile +@@ -1,4 +1,4 @@ +-ccflags-y := -Ifs/ocfs2 ++ccflags-y := -I$(src)/.. 
+ + obj-$(CONFIG_OCFS2_FS_O2CB) += ocfs2_dlm.o + +diff --git a/fs/ocfs2/dlmfs/Makefile b/fs/ocfs2/dlmfs/Makefile +index eed3db8c5b49..33431a0296a3 100644 +--- a/fs/ocfs2/dlmfs/Makefile ++++ b/fs/ocfs2/dlmfs/Makefile +@@ -1,4 +1,4 @@ +-ccflags-y := -Ifs/ocfs2 ++ccflags-y := -I$(src)/.. + + obj-$(CONFIG_OCFS2_FS) += ocfs2_dlmfs.o + +diff --git a/fs/udf/inode.c b/fs/udf/inode.c +index 8dacf4f57414..28b9d7cca29b 100644 +--- a/fs/udf/inode.c ++++ b/fs/udf/inode.c +@@ -1357,6 +1357,12 @@ reread: + + iinfo->i_alloc_type = le16_to_cpu(fe->icbTag.flags) & + ICBTAG_FLAG_AD_MASK; ++ if (iinfo->i_alloc_type != ICBTAG_FLAG_AD_SHORT && ++ iinfo->i_alloc_type != ICBTAG_FLAG_AD_LONG && ++ iinfo->i_alloc_type != ICBTAG_FLAG_AD_IN_ICB) { ++ ret = -EIO; ++ goto out; ++ } + iinfo->i_unique = 0; + iinfo->i_lenEAttr = 0; + iinfo->i_lenExtents = 0; +diff --git a/include/linux/cpu.h b/include/linux/cpu.h +index 2a378d261914..c7712e042aba 100644 +--- a/include/linux/cpu.h ++++ b/include/linux/cpu.h +@@ -188,12 +188,10 @@ enum cpuhp_smt_control { + #if defined(CONFIG_SMP) && defined(CONFIG_HOTPLUG_SMT) + extern enum cpuhp_smt_control cpu_smt_control; + extern void cpu_smt_disable(bool force); +-extern void cpu_smt_check_topology_early(void); + extern void cpu_smt_check_topology(void); + #else + # define cpu_smt_control (CPU_SMT_ENABLED) + static inline void cpu_smt_disable(bool force) { } +-static inline void cpu_smt_check_topology_early(void) { } + static inline void cpu_smt_check_topology(void) { } + #endif + +diff --git a/include/linux/genl_magic_struct.h b/include/linux/genl_magic_struct.h +index 5972e4969197..eeae59d3ceb7 100644 +--- a/include/linux/genl_magic_struct.h ++++ b/include/linux/genl_magic_struct.h +@@ -191,6 +191,7 @@ static inline void ct_assert_unique_operations(void) + { + switch (0) { + #include GENL_MAGIC_INCLUDE_FILE ++ case 0: + ; + } + } +@@ -209,6 +210,7 @@ static inline void ct_assert_unique_top_level_attributes(void) + { + switch (0) { + #include 
GENL_MAGIC_INCLUDE_FILE ++ case 0: + ; + } + } +@@ -218,7 +220,8 @@ static inline void ct_assert_unique_top_level_attributes(void) + static inline void ct_assert_unique_ ## s_name ## _attributes(void) \ + { \ + switch (0) { \ +- s_fields \ ++ s_fields \ ++ case 0: \ + ; \ + } \ + } +diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h +index b6962ae6237e..4f7f19c1dc0a 100644 +--- a/include/linux/kvm_host.h ++++ b/include/linux/kvm_host.h +@@ -685,7 +685,8 @@ int kvm_write_guest(struct kvm *kvm, gpa_t gpa, const void *data, + int kvm_write_guest_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc, + void *data, unsigned long len); + int kvm_write_guest_offset_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc, +- void *data, int offset, unsigned long len); ++ void *data, unsigned int offset, ++ unsigned long len); + int kvm_gfn_to_hva_cache_init(struct kvm *kvm, struct gfn_to_hva_cache *ghc, + gpa_t gpa, unsigned long len); + int kvm_clear_guest_page(struct kvm *kvm, gfn_t gfn, int offset, int len); +diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h +index fb677e4f902d..88f0c530fe9c 100644 +--- a/include/linux/mlx5/driver.h ++++ b/include/linux/mlx5/driver.h +@@ -1195,7 +1195,7 @@ enum { + static inline const struct cpumask * + mlx5_get_vector_affinity_hint(struct mlx5_core_dev *dev, int vector) + { +- return dev->priv.irq_info[vector].mask; ++ return dev->priv.irq_info[vector + MLX5_EQ_VEC_COMP_BASE].mask; + } + + #endif /* MLX5_DRIVER_H */ +diff --git a/include/sound/compress_driver.h b/include/sound/compress_driver.h +index 9924bc9cbc7c..392bac18398b 100644 +--- a/include/sound/compress_driver.h ++++ b/include/sound/compress_driver.h +@@ -186,7 +186,11 @@ static inline void snd_compr_drain_notify(struct snd_compr_stream *stream) + if (snd_BUG_ON(!stream)) + return; + +- stream->runtime->state = SNDRV_PCM_STATE_SETUP; ++ if (stream->direction == SND_COMPRESS_PLAYBACK) ++ stream->runtime->state = SNDRV_PCM_STATE_SETUP; ++ else ++ 
stream->runtime->state = SNDRV_PCM_STATE_PREPARED; ++ + wake_up(&stream->runtime->sleep); + } + +diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c +index 109c32c56de7..21bbfc09e395 100644 +--- a/kernel/cgroup/cgroup.c ++++ b/kernel/cgroup/cgroup.c +@@ -1692,7 +1692,7 @@ static int parse_cgroup_root_flags(char *data, unsigned int *root_flags) + + *root_flags = 0; + +- if (!data) ++ if (!data || *data == '\0') + return 0; + + while ((token = strsep(&data, ",")) != NULL) { +diff --git a/kernel/cpu.c b/kernel/cpu.c +index 5c907d96e3dd..0171754db32b 100644 +--- a/kernel/cpu.c ++++ b/kernel/cpu.c +@@ -356,9 +356,6 @@ void __weak arch_smt_update(void) { } + + #ifdef CONFIG_HOTPLUG_SMT + enum cpuhp_smt_control cpu_smt_control __read_mostly = CPU_SMT_ENABLED; +-EXPORT_SYMBOL_GPL(cpu_smt_control); +- +-static bool cpu_smt_available __read_mostly; + + void __init cpu_smt_disable(bool force) + { +@@ -376,25 +373,11 @@ void __init cpu_smt_disable(bool force) + + /* + * The decision whether SMT is supported can only be done after the full +- * CPU identification. Called from architecture code before non boot CPUs +- * are brought up. +- */ +-void __init cpu_smt_check_topology_early(void) +-{ +- if (!topology_smt_supported()) +- cpu_smt_control = CPU_SMT_NOT_SUPPORTED; +-} +- +-/* +- * If SMT was disabled by BIOS, detect it here, after the CPUs have been +- * brought online. This ensures the smt/l1tf sysfs entries are consistent +- * with reality. cpu_smt_available is set to true during the bringup of non +- * boot CPUs when a SMT sibling is detected. Note, this may overwrite +- * cpu_smt_control's previous setting. ++ * CPU identification. Called from architecture code. 
+ */ + void __init cpu_smt_check_topology(void) + { +- if (!cpu_smt_available) ++ if (!topology_smt_supported()) + cpu_smt_control = CPU_SMT_NOT_SUPPORTED; + } + +@@ -407,18 +390,10 @@ early_param("nosmt", smt_cmdline_disable); + + static inline bool cpu_smt_allowed(unsigned int cpu) + { +- if (topology_is_primary_thread(cpu)) ++ if (cpu_smt_control == CPU_SMT_ENABLED) + return true; + +- /* +- * If the CPU is not a 'primary' thread and the booted_once bit is +- * set then the processor has SMT support. Store this information +- * for the late check of SMT support in cpu_smt_check_topology(). +- */ +- if (per_cpu(cpuhp_state, cpu).booted_once) +- cpu_smt_available = true; +- +- if (cpu_smt_control == CPU_SMT_ENABLED) ++ if (topology_is_primary_thread(cpu)) + return true; + + /* +diff --git a/kernel/debug/debug_core.c b/kernel/debug/debug_core.c +index 65c0f1363788..94aa9ae0007a 100644 +--- a/kernel/debug/debug_core.c ++++ b/kernel/debug/debug_core.c +@@ -535,6 +535,8 @@ return_normal: + arch_kgdb_ops.correct_hw_break(); + if (trace_on) + tracing_on(); ++ kgdb_info[cpu].debuggerinfo = NULL; ++ kgdb_info[cpu].task = NULL; + kgdb_info[cpu].exception_state &= + ~(DCPU_WANT_MASTER | DCPU_IS_SLAVE); + kgdb_info[cpu].enter_kgdb--; +@@ -667,6 +669,8 @@ kgdb_restore: + if (trace_on) + tracing_on(); + ++ kgdb_info[cpu].debuggerinfo = NULL; ++ kgdb_info[cpu].task = NULL; + kgdb_info[cpu].exception_state &= + ~(DCPU_WANT_MASTER | DCPU_IS_SLAVE); + kgdb_info[cpu].enter_kgdb--; +diff --git a/kernel/debug/kdb/kdb_bt.c b/kernel/debug/kdb/kdb_bt.c +index 7921ae4fca8d..7e2379aa0a1e 100644 +--- a/kernel/debug/kdb/kdb_bt.c ++++ b/kernel/debug/kdb/kdb_bt.c +@@ -186,7 +186,16 @@ kdb_bt(int argc, const char **argv) + kdb_printf("btc: cpu status: "); + kdb_parse("cpu\n"); + for_each_online_cpu(cpu) { +- sprintf(buf, "btt 0x%px\n", KDB_TSK(cpu)); ++ void *kdb_tsk = KDB_TSK(cpu); ++ ++ /* If a CPU failed to round up we could be here */ ++ if (!kdb_tsk) { ++ kdb_printf("WARNING: no task for 
cpu %ld\n", ++ cpu); ++ continue; ++ } ++ ++ sprintf(buf, "btt 0x%px\n", kdb_tsk); + kdb_parse(buf); + touch_nmi_watchdog(); + } +diff --git a/kernel/debug/kdb/kdb_debugger.c b/kernel/debug/kdb/kdb_debugger.c +index 15e1a7af5dd0..53a0df6e4d92 100644 +--- a/kernel/debug/kdb/kdb_debugger.c ++++ b/kernel/debug/kdb/kdb_debugger.c +@@ -118,13 +118,6 @@ int kdb_stub(struct kgdb_state *ks) + kdb_bp_remove(); + KDB_STATE_CLEAR(DOING_SS); + KDB_STATE_SET(PAGER); +- /* zero out any offline cpu data */ +- for_each_present_cpu(i) { +- if (!cpu_online(i)) { +- kgdb_info[i].debuggerinfo = NULL; +- kgdb_info[i].task = NULL; +- } +- } + if (ks->err_code == DIE_OOPS || reason == KDB_REASON_OOPS) { + ks->pass_exception = 1; + KDB_FLAG_SET(CATASTROPHIC); +diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c +index c573c7339223..8b311340b241 100644 +--- a/kernel/events/ring_buffer.c ++++ b/kernel/events/ring_buffer.c +@@ -719,6 +719,9 @@ struct ring_buffer *rb_alloc(int nr_pages, long watermark, int cpu, int flags) + size = sizeof(struct ring_buffer); + size += nr_pages * sizeof(void *); + ++ if (order_base_2(size) >= MAX_ORDER) ++ goto fail; ++ + rb = kzalloc(size, GFP_KERNEL); + if (!rb) + goto fail; +diff --git a/kernel/futex.c b/kernel/futex.c +index 046cd780d057..abe04a2bb5b9 100644 +--- a/kernel/futex.c ++++ b/kernel/futex.c +@@ -2811,35 +2811,39 @@ retry_private: + * and BUG when futex_unlock_pi() interleaves with this. + * + * Therefore acquire wait_lock while holding hb->lock, but drop the +- * latter before calling rt_mutex_start_proxy_lock(). This still fully +- * serializes against futex_unlock_pi() as that does the exact same +- * lock handoff sequence. ++ * latter before calling __rt_mutex_start_proxy_lock(). This ++ * interleaves with futex_unlock_pi() -- which does a similar lock ++ * handoff -- such that the latter can observe the futex_q::pi_state ++ * before __rt_mutex_start_proxy_lock() is done. 
+ */ + raw_spin_lock_irq(&q.pi_state->pi_mutex.wait_lock); + spin_unlock(q.lock_ptr); ++ /* ++ * __rt_mutex_start_proxy_lock() unconditionally enqueues the @rt_waiter ++ * such that futex_unlock_pi() is guaranteed to observe the waiter when ++ * it sees the futex_q::pi_state. ++ */ + ret = __rt_mutex_start_proxy_lock(&q.pi_state->pi_mutex, &rt_waiter, current); + raw_spin_unlock_irq(&q.pi_state->pi_mutex.wait_lock); + + if (ret) { + if (ret == 1) + ret = 0; +- +- spin_lock(q.lock_ptr); +- goto no_block; ++ goto cleanup; + } + +- + if (unlikely(to)) + hrtimer_start_expires(&to->timer, HRTIMER_MODE_ABS); + + ret = rt_mutex_wait_proxy_lock(&q.pi_state->pi_mutex, to, &rt_waiter); + ++cleanup: + spin_lock(q.lock_ptr); + /* +- * If we failed to acquire the lock (signal/timeout), we must ++ * If we failed to acquire the lock (deadlock/signal/timeout), we must + * first acquire the hb->lock before removing the lock from the +- * rt_mutex waitqueue, such that we can keep the hb and rt_mutex +- * wait lists consistent. ++ * rt_mutex waitqueue, such that we can keep the hb and rt_mutex wait ++ * lists consistent. + * + * In particular; it is important that futex_unlock_pi() can not + * observe this inconsistency. +@@ -2963,6 +2967,10 @@ retry: + * there is no point where we hold neither; and therefore + * wake_futex_pi() must observe a state consistent with what we + * observed. ++ * ++ * In particular; this forces __rt_mutex_start_proxy() to ++ * complete such that we're guaranteed to observe the ++ * rt_waiter. Also see the WARN in wake_futex_pi(). + */ + raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock); + spin_unlock(&hb->lock); +diff --git a/kernel/hung_task.c b/kernel/hung_task.c +index 32b479468e4d..f9aaf4994062 100644 +--- a/kernel/hung_task.c ++++ b/kernel/hung_task.c +@@ -33,7 +33,7 @@ int __read_mostly sysctl_hung_task_check_count = PID_MAX_LIMIT; + * is disabled during the critical section. It also controls the size of + * the RCU grace period. 
So it needs to be upper-bound. + */ +-#define HUNG_TASK_BATCHING 1024 ++#define HUNG_TASK_LOCK_BREAK (HZ / 10) + + /* + * Zero means infinite timeout - no checking done: +@@ -103,8 +103,11 @@ static void check_hung_task(struct task_struct *t, unsigned long timeout) + + trace_sched_process_hang(t); + +- if (!sysctl_hung_task_warnings && !sysctl_hung_task_panic) +- return; ++ if (sysctl_hung_task_panic) { ++ console_verbose(); ++ hung_task_show_lock = true; ++ hung_task_call_panic = true; ++ } + + /* + * Ok, the task did not get scheduled for more than 2 minutes, +@@ -126,11 +129,6 @@ static void check_hung_task(struct task_struct *t, unsigned long timeout) + } + + touch_nmi_watchdog(); +- +- if (sysctl_hung_task_panic) { +- hung_task_show_lock = true; +- hung_task_call_panic = true; +- } + } + + /* +@@ -164,7 +162,7 @@ static bool rcu_lock_break(struct task_struct *g, struct task_struct *t) + static void check_hung_uninterruptible_tasks(unsigned long timeout) + { + int max_count = sysctl_hung_task_check_count; +- int batch_count = HUNG_TASK_BATCHING; ++ unsigned long last_break = jiffies; + struct task_struct *g, *t; + + /* +@@ -179,10 +177,10 @@ static void check_hung_uninterruptible_tasks(unsigned long timeout) + for_each_process_thread(g, t) { + if (!max_count--) + goto unlock; +- if (!--batch_count) { +- batch_count = HUNG_TASK_BATCHING; ++ if (time_after(jiffies, last_break + HUNG_TASK_LOCK_BREAK)) { + if (!rcu_lock_break(g, t)) + goto unlock; ++ last_break = jiffies; + } + /* use "==" to skip the TASK_KILLABLE tasks waiting on NFS */ + if (t->state == TASK_UNINTERRUPTIBLE) +diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c +index 4ad35718f123..71c554a9e17f 100644 +--- a/kernel/locking/rtmutex.c ++++ b/kernel/locking/rtmutex.c +@@ -1726,12 +1726,33 @@ void rt_mutex_proxy_unlock(struct rt_mutex *lock, + rt_mutex_set_owner(lock, NULL); + } + ++/** ++ * __rt_mutex_start_proxy_lock() - Start lock acquisition for another task ++ * @lock: the rt_mutex 
to take ++ * @waiter: the pre-initialized rt_mutex_waiter ++ * @task: the task to prepare ++ * ++ * Starts the rt_mutex acquire; it enqueues the @waiter and does deadlock ++ * detection. It does not wait, see rt_mutex_wait_proxy_lock() for that. ++ * ++ * NOTE: does _NOT_ remove the @waiter on failure; must either call ++ * rt_mutex_wait_proxy_lock() or rt_mutex_cleanup_proxy_lock() after this. ++ * ++ * Returns: ++ * 0 - task blocked on lock ++ * 1 - acquired the lock for task, caller should wake it up ++ * <0 - error ++ * ++ * Special API call for PI-futex support. ++ */ + int __rt_mutex_start_proxy_lock(struct rt_mutex *lock, + struct rt_mutex_waiter *waiter, + struct task_struct *task) + { + int ret; + ++ lockdep_assert_held(&lock->wait_lock); ++ + if (try_to_take_rt_mutex(lock, task, NULL)) + return 1; + +@@ -1749,9 +1770,6 @@ int __rt_mutex_start_proxy_lock(struct rt_mutex *lock, + ret = 0; + } + +- if (unlikely(ret)) +- remove_waiter(lock, waiter); +- + debug_rt_mutex_print_deadlock(waiter); + + return ret; +@@ -1763,12 +1781,18 @@ int __rt_mutex_start_proxy_lock(struct rt_mutex *lock, + * @waiter: the pre-initialized rt_mutex_waiter + * @task: the task to prepare + * ++ * Starts the rt_mutex acquire; it enqueues the @waiter and does deadlock ++ * detection. It does not wait, see rt_mutex_wait_proxy_lock() for that. ++ * ++ * NOTE: unlike __rt_mutex_start_proxy_lock this _DOES_ remove the @waiter ++ * on failure. ++ * + * Returns: + * 0 - task blocked on lock + * 1 - acquired the lock for task, caller should wake it up + * <0 - error + * +- * Special API call for FUTEX_REQUEUE_PI support. ++ * Special API call for PI-futex support. 
+ */ + int rt_mutex_start_proxy_lock(struct rt_mutex *lock, + struct rt_mutex_waiter *waiter, +@@ -1778,6 +1802,8 @@ int rt_mutex_start_proxy_lock(struct rt_mutex *lock, + + raw_spin_lock_irq(&lock->wait_lock); + ret = __rt_mutex_start_proxy_lock(lock, waiter, task); ++ if (unlikely(ret)) ++ remove_waiter(lock, waiter); + raw_spin_unlock_irq(&lock->wait_lock); + + return ret; +@@ -1845,7 +1871,8 @@ int rt_mutex_wait_proxy_lock(struct rt_mutex *lock, + * @lock: the rt_mutex we were woken on + * @waiter: the pre-initialized rt_mutex_waiter + * +- * Attempt to clean up after a failed rt_mutex_wait_proxy_lock(). ++ * Attempt to clean up after a failed __rt_mutex_start_proxy_lock() or ++ * rt_mutex_wait_proxy_lock(). + * + * Unless we acquired the lock; we're still enqueued on the wait-list and can + * in fact still be granted ownership until we're removed. Therefore we can +diff --git a/kernel/module.c b/kernel/module.c +index 2a44c515f0d7..94528b891027 100644 +--- a/kernel/module.c ++++ b/kernel/module.c +@@ -1201,8 +1201,10 @@ static ssize_t store_uevent(struct module_attribute *mattr, + struct module_kobject *mk, + const char *buffer, size_t count) + { +- kobject_synth_uevent(&mk->kobj, buffer, count); +- return count; ++ int rc; ++ ++ rc = kobject_synth_uevent(&mk->kobj, buffer, count); ++ return rc ? 
rc : count; + } + + struct module_attribute module_uevent = +diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c +index f33b24080b1c..4d54c1fe9623 100644 +--- a/kernel/sched/fair.c ++++ b/kernel/sched/fair.c +@@ -5651,6 +5651,7 @@ find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu) + + #ifdef CONFIG_SCHED_SMT + DEFINE_STATIC_KEY_FALSE(sched_smt_present); ++EXPORT_SYMBOL_GPL(sched_smt_present); + + static inline void set_idle_cores(int cpu, int val) + { +diff --git a/kernel/smp.c b/kernel/smp.c +index 2d1da290f144..c94dd85c8d41 100644 +--- a/kernel/smp.c ++++ b/kernel/smp.c +@@ -584,8 +584,6 @@ void __init smp_init(void) + num_nodes, (num_nodes > 1 ? "s" : ""), + num_cpus, (num_cpus > 1 ? "s" : "")); + +- /* Final decision about SMT support */ +- cpu_smt_check_topology(); + /* Any cleanup work */ + smp_cpus_done(setup_max_cpus); + } +diff --git a/kernel/sysctl.c b/kernel/sysctl.c +index d330b1ce3b94..3ad00bf90b3d 100644 +--- a/kernel/sysctl.c ++++ b/kernel/sysctl.c +@@ -2708,6 +2708,8 @@ static int __do_proc_doulongvec_minmax(void *data, struct ctl_table *table, int + bool neg; + + left -= proc_skip_spaces(&p); ++ if (!left) ++ break; + + err = proc_get_long(&p, &left, &val, &neg, + proc_wspace_sep, +diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c +index 2cafb49aa65e..1ce7c404d0b0 100644 +--- a/kernel/time/timekeeping.c ++++ b/kernel/time/timekeeping.c +@@ -41,7 +41,9 @@ + static struct { + seqcount_t seq; + struct timekeeper timekeeper; +-} tk_core ____cacheline_aligned; ++} tk_core ____cacheline_aligned = { ++ .seq = SEQCNT_ZERO(tk_core.seq), ++}; + + static DEFINE_RAW_SPINLOCK(timekeeper_lock); + static struct timekeeper shadow_timekeeper; +diff --git a/lib/seq_buf.c b/lib/seq_buf.c +index 11f2ae0f9099..6aabb609dd87 100644 +--- a/lib/seq_buf.c ++++ b/lib/seq_buf.c +@@ -144,9 +144,13 @@ int seq_buf_puts(struct seq_buf *s, const char *str) + + WARN_ON(s->size == 0); + ++ /* Add 1 to len for the trailing null byte 
which must be there */ ++ len += 1; ++ + if (seq_buf_can_fit(s, len)) { + memcpy(s->buffer + s->len, str, len); +- s->len += len; ++ /* Don't count the trailing null byte against the capacity */ ++ s->len += len - 1; + return 0; + } + seq_buf_set_overflow(s); +diff --git a/mm/percpu-km.c b/mm/percpu-km.c +index 0d88d7bd5706..c22d959105b6 100644 +--- a/mm/percpu-km.c ++++ b/mm/percpu-km.c +@@ -50,6 +50,7 @@ static struct pcpu_chunk *pcpu_create_chunk(gfp_t gfp) + const int nr_pages = pcpu_group_sizes[0] >> PAGE_SHIFT; + struct pcpu_chunk *chunk; + struct page *pages; ++ unsigned long flags; + int i; + + chunk = pcpu_alloc_chunk(gfp); +@@ -68,9 +69,9 @@ static struct pcpu_chunk *pcpu_create_chunk(gfp_t gfp) + chunk->data = pages; + chunk->base_addr = page_address(pages) - pcpu_group_offsets[0]; + +- spin_lock_irq(&pcpu_lock); ++ spin_lock_irqsave(&pcpu_lock, flags); + pcpu_chunk_populated(chunk, 0, nr_pages, false); +- spin_unlock_irq(&pcpu_lock); ++ spin_unlock_irqrestore(&pcpu_lock, flags); + + pcpu_stats_chunk_alloc(); + trace_percpu_create_chunk(chunk->base_addr); +diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c +index 01f211e31f47..363dc85bbc5c 100644 +--- a/net/bluetooth/hci_event.c ++++ b/net/bluetooth/hci_event.c +@@ -5212,6 +5212,12 @@ static bool hci_get_cmd_complete(struct hci_dev *hdev, u16 opcode, + return true; + } + ++ /* Check if request ended in Command Status - no way to retreive ++ * any extra parameters in this case. 
++ */ ++ if (hdr->evt == HCI_EV_CMD_STATUS) ++ return false; ++ + if (hdr->evt != HCI_EV_CMD_COMPLETE) { + BT_DBG("Last event is not cmd complete (0x%2.2x)", hdr->evt); + return false; +diff --git a/net/dccp/ccid.h b/net/dccp/ccid.h +index 6eb837a47b5c..baaaeb2b2c42 100644 +--- a/net/dccp/ccid.h ++++ b/net/dccp/ccid.h +@@ -202,7 +202,7 @@ static inline void ccid_hc_tx_packet_recv(struct ccid *ccid, struct sock *sk, + static inline int ccid_hc_tx_parse_options(struct ccid *ccid, struct sock *sk, + u8 pkt, u8 opt, u8 *val, u8 len) + { +- if (ccid->ccid_ops->ccid_hc_tx_parse_options == NULL) ++ if (!ccid || !ccid->ccid_ops->ccid_hc_tx_parse_options) + return 0; + return ccid->ccid_ops->ccid_hc_tx_parse_options(sk, pkt, opt, val, len); + } +@@ -214,7 +214,7 @@ static inline int ccid_hc_tx_parse_options(struct ccid *ccid, struct sock *sk, + static inline int ccid_hc_rx_parse_options(struct ccid *ccid, struct sock *sk, + u8 pkt, u8 opt, u8 *val, u8 len) + { +- if (ccid->ccid_ops->ccid_hc_rx_parse_options == NULL) ++ if (!ccid || !ccid->ccid_ops->ccid_hc_rx_parse_options) + return 0; + return ccid->ccid_ops->ccid_hc_rx_parse_options(sk, pkt, opt, val, len); + } +diff --git a/net/dsa/slave.c b/net/dsa/slave.c +index 242e74b9d454..b14d530a32b1 100644 +--- a/net/dsa/slave.c ++++ b/net/dsa/slave.c +@@ -156,10 +156,14 @@ static void dsa_slave_change_rx_flags(struct net_device *dev, int change) + struct dsa_slave_priv *p = netdev_priv(dev); + struct net_device *master = dsa_master_netdev(p); + +- if (change & IFF_ALLMULTI) +- dev_set_allmulti(master, dev->flags & IFF_ALLMULTI ? 1 : -1); +- if (change & IFF_PROMISC) +- dev_set_promiscuity(master, dev->flags & IFF_PROMISC ? 1 : -1); ++ if (dev->flags & IFF_UP) { ++ if (change & IFF_ALLMULTI) ++ dev_set_allmulti(master, ++ dev->flags & IFF_ALLMULTI ? 1 : -1); ++ if (change & IFF_PROMISC) ++ dev_set_promiscuity(master, ++ dev->flags & IFF_PROMISC ? 
1 : -1); ++ } + } + + static void dsa_slave_set_rx_mode(struct net_device *dev) +diff --git a/net/ipv6/xfrm6_tunnel.c b/net/ipv6/xfrm6_tunnel.c +index 4e438bc7ee87..c28e3eaad7c2 100644 +--- a/net/ipv6/xfrm6_tunnel.c ++++ b/net/ipv6/xfrm6_tunnel.c +@@ -144,6 +144,9 @@ static u32 __xfrm6_tunnel_alloc_spi(struct net *net, xfrm_address_t *saddr) + index = __xfrm6_tunnel_spi_check(net, spi); + if (index >= 0) + goto alloc_spi; ++ ++ if (spi == XFRM6_TUNNEL_SPI_MAX) ++ break; + } + for (spi = XFRM6_TUNNEL_SPI_MIN; spi < xfrm6_tn->spi; spi++) { + index = __xfrm6_tunnel_spi_check(net, spi); +diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c +index 9e19ddbcb06e..c7ac1a480b1d 100644 +--- a/net/mac80211/rx.c ++++ b/net/mac80211/rx.c +@@ -141,6 +141,9 @@ ieee80211_rx_radiotap_hdrlen(struct ieee80211_local *local, + /* allocate extra bitmaps */ + if (status->chains) + len += 4 * hweight8(status->chains); ++ /* vendor presence bitmap */ ++ if (status->flag & RX_FLAG_RADIOTAP_VENDOR_DATA) ++ len += 4; + + if (ieee80211_have_rx_timestamp(status)) { + len = ALIGN(len, 8); +@@ -182,8 +185,6 @@ ieee80211_rx_radiotap_hdrlen(struct ieee80211_local *local, + if (status->flag & RX_FLAG_RADIOTAP_VENDOR_DATA) { + struct ieee80211_vendor_radiotap *rtap = (void *)skb->data; + +- /* vendor presence bitmap */ +- len += 4; + /* alignment for fixed 6-byte vendor data header */ + len = ALIGN(len, 2); + /* vendor data header */ +diff --git a/net/rds/bind.c b/net/rds/bind.c +index 48257d3a4201..4f1427c3452d 100644 +--- a/net/rds/bind.c ++++ b/net/rds/bind.c +@@ -62,10 +62,10 @@ struct rds_sock *rds_find_bound(__be32 addr, __be16 port) + + rcu_read_lock(); + rs = rhashtable_lookup(&bind_hash_table, &key, ht_parms); +- if (rs && !sock_flag(rds_rs_to_sk(rs), SOCK_DEAD)) +- rds_sock_addref(rs); +- else ++ if (rs && (sock_flag(rds_rs_to_sk(rs), SOCK_DEAD) || ++ !refcount_inc_not_zero(&rds_rs_to_sk(rs)->sk_refcnt))) + rs = NULL; ++ + rcu_read_unlock(); + + rdsdebug("returning rs %p for %pI4:%u\n", rs, 
&addr, +diff --git a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c +index abcf48026d99..b74cde2fd214 100644 +--- a/net/rxrpc/recvmsg.c ++++ b/net/rxrpc/recvmsg.c +@@ -588,6 +588,7 @@ error_requeue_call: + } + error_no_call: + release_sock(&rx->sk); ++error_trace: + trace_rxrpc_recvmsg(call, rxrpc_recvmsg_return, 0, 0, 0, ret); + return ret; + +@@ -596,7 +597,7 @@ wait_interrupted: + wait_error: + finish_wait(sk_sleep(&rx->sk), &wait); + call = NULL; +- goto error_no_call; ++ goto error_trace; + } + + /** +diff --git a/scripts/decode_stacktrace.sh b/scripts/decode_stacktrace.sh +index 64220e36ce3b..98a7d63a723e 100755 +--- a/scripts/decode_stacktrace.sh ++++ b/scripts/decode_stacktrace.sh +@@ -78,7 +78,7 @@ parse_symbol() { + fi + + # Strip out the base of the path +- code=${code//$basepath/""} ++ code=${code//^$basepath/""} + + # In the case of inlines, move everything to same line + code=${code//$'\n'/' '} +diff --git a/scripts/gdb/linux/proc.py b/scripts/gdb/linux/proc.py +index 086d27223c0c..0aebd7565b03 100644 +--- a/scripts/gdb/linux/proc.py ++++ b/scripts/gdb/linux/proc.py +@@ -41,7 +41,7 @@ class LxVersion(gdb.Command): + + def invoke(self, arg, from_tty): + # linux_banner should contain a newline +- gdb.write(gdb.parse_and_eval("linux_banner").string()) ++ gdb.write(gdb.parse_and_eval("(char *)linux_banner").string()) + + LxVersion() + +diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c +index 18bc8738e989..e36a673833ae 100644 +--- a/scripts/mod/modpost.c ++++ b/scripts/mod/modpost.c +@@ -1215,6 +1215,30 @@ static int secref_whitelist(const struct sectioncheck *mismatch, + return 1; + } + ++static inline int is_arm_mapping_symbol(const char *str) ++{ ++ return str[0] == '$' && strchr("axtd", str[1]) ++ && (str[2] == '\0' || str[2] == '.'); ++} ++ ++/* ++ * If there's no name there, ignore it; likewise, ignore it if it's ++ * one of the magic symbols emitted used by current ARM tools. 
++ * ++ * Otherwise if find_symbols_between() returns those symbols, they'll ++ * fail the whitelist tests and cause lots of false alarms ... fixable ++ * only by merging __exit and __init sections into __text, bloating ++ * the kernel (which is especially evil on embedded platforms). ++ */ ++static inline int is_valid_name(struct elf_info *elf, Elf_Sym *sym) ++{ ++ const char *name = elf->strtab + sym->st_name; ++ ++ if (!name || !strlen(name)) ++ return 0; ++ return !is_arm_mapping_symbol(name); ++} ++ + /** + * Find symbol based on relocation record info. + * In some cases the symbol supplied is a valid symbol so +@@ -1240,6 +1264,8 @@ static Elf_Sym *find_elf_symbol(struct elf_info *elf, Elf64_Sword addr, + continue; + if (ELF_ST_TYPE(sym->st_info) == STT_SECTION) + continue; ++ if (!is_valid_name(elf, sym)) ++ continue; + if (sym->st_value == addr) + return sym; + /* Find a symbol nearby - addr are maybe negative */ +@@ -1258,30 +1284,6 @@ static Elf_Sym *find_elf_symbol(struct elf_info *elf, Elf64_Sword addr, + return NULL; + } + +-static inline int is_arm_mapping_symbol(const char *str) +-{ +- return str[0] == '$' && strchr("axtd", str[1]) +- && (str[2] == '\0' || str[2] == '.'); +-} +- +-/* +- * If there's no name there, ignore it; likewise, ignore it if it's +- * one of the magic symbols emitted used by current ARM tools. +- * +- * Otherwise if find_symbols_between() returns those symbols, they'll +- * fail the whitelist tests and cause lots of false alarms ... fixable +- * only by merging __exit and __init sections into __text, bloating +- * the kernel (which is especially evil on embedded platforms). +- */ +-static inline int is_valid_name(struct elf_info *elf, Elf_Sym *sym) +-{ +- const char *name = elf->strtab + sym->st_name; +- +- if (!name || !strlen(name)) +- return 0; +- return !is_arm_mapping_symbol(name); +-} +- + /* + * Find symbols before or equal addr and after addr - in the section sec. 
+ * If we find two symbols with equal offset prefer one with a valid name. +diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c +index c8fd5c10b7c6..0d5ce7190b17 100644 +--- a/security/smack/smack_lsm.c ++++ b/security/smack/smack_lsm.c +@@ -4356,6 +4356,12 @@ static int smack_key_permission(key_ref_t key_ref, + int request = 0; + int rc; + ++ /* ++ * Validate requested permissions ++ */ ++ if (perm & ~KEY_NEED_ALL) ++ return -EINVAL; ++ + keyp = key_ref_to_ptr(key_ref); + if (keyp == NULL) + return -EINVAL; +@@ -4375,10 +4381,10 @@ static int smack_key_permission(key_ref_t key_ref, + ad.a.u.key_struct.key = keyp->serial; + ad.a.u.key_struct.key_desc = keyp->description; + #endif +- if (perm & KEY_NEED_READ) +- request = MAY_READ; ++ if (perm & (KEY_NEED_READ | KEY_NEED_SEARCH | KEY_NEED_VIEW)) ++ request |= MAY_READ; + if (perm & (KEY_NEED_WRITE | KEY_NEED_LINK | KEY_NEED_SETATTR)) +- request = MAY_WRITE; ++ request |= MAY_WRITE; + rc = smk_access(tkp, keyp->security, request, &ad); + rc = smk_bu_note("key access", tkp, keyp->security, request, rc); + return rc; +diff --git a/sound/pci/hda/hda_bind.c b/sound/pci/hda/hda_bind.c +index d361bb77ca00..8db1890605f6 100644 +--- a/sound/pci/hda/hda_bind.c ++++ b/sound/pci/hda/hda_bind.c +@@ -109,7 +109,8 @@ static int hda_codec_driver_probe(struct device *dev) + err = snd_hda_codec_build_controls(codec); + if (err < 0) + goto error_module; +- if (codec->card->registered) { ++ /* only register after the bus probe finished; otherwise it's racy */ ++ if (!codec->bus->bus_probing && codec->card->registered) { + err = snd_card_register(codec->card); + if (err < 0) + goto error_module; +diff --git a/sound/pci/hda/hda_codec.h b/sound/pci/hda/hda_codec.h +index 681c360f29f9..3812238e00d5 100644 +--- a/sound/pci/hda/hda_codec.h ++++ b/sound/pci/hda/hda_codec.h +@@ -68,6 +68,7 @@ struct hda_bus { + unsigned int response_reset:1; /* controller was reset */ + unsigned int in_reset:1; /* during reset operation */ + 
unsigned int no_response_fallback:1; /* don't fallback at RIRB error */ ++ unsigned int bus_probing :1; /* during probing process */ + + int primary_dig_out_type; /* primary digital out PCM type */ + unsigned int mixer_assigned; /* codec addr for mixer name */ +diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c +index d8e80b6f5a6b..afa591cf840a 100644 +--- a/sound/pci/hda/hda_intel.c ++++ b/sound/pci/hda/hda_intel.c +@@ -2236,6 +2236,7 @@ static int azx_probe_continue(struct azx *chip) + int val; + int err; + ++ to_hda_bus(bus)->bus_probing = 1; + hda->probe_continued = 1; + + /* bind with i915 if needed */ +@@ -2341,6 +2342,7 @@ i915_power_fail: + if (err < 0) + hda->init_failed = 1; + complete_all(&hda->probe_wait); ++ to_hda_bus(bus)->bus_probing = 0; + return err; + } + +diff --git a/sound/soc/fsl/Kconfig b/sound/soc/fsl/Kconfig +index 37f9b6201918..4087deeda7cf 100644 +--- a/sound/soc/fsl/Kconfig ++++ b/sound/soc/fsl/Kconfig +@@ -221,7 +221,7 @@ config SND_SOC_PHYCORE_AC97 + + config SND_SOC_EUKREA_TLV320 + tristate "Eukrea TLV320" +- depends on ARCH_MXC && I2C ++ depends on ARCH_MXC && !ARM64 && I2C + select SND_SOC_TLV320AIC23_I2C + select SND_SOC_IMX_AUDMUX + select SND_SOC_IMX_SSI +diff --git a/sound/soc/intel/atom/sst/sst_loader.c b/sound/soc/intel/atom/sst/sst_loader.c +index 33917146d9c4..054b1d514e8a 100644 +--- a/sound/soc/intel/atom/sst/sst_loader.c ++++ b/sound/soc/intel/atom/sst/sst_loader.c +@@ -354,14 +354,14 @@ static int sst_request_fw(struct intel_sst_drv *sst) + const struct firmware *fw; + + retval = request_firmware(&fw, sst->firmware_name, sst->dev); +- if (fw == NULL) { +- dev_err(sst->dev, "fw is returning as null\n"); +- return -EINVAL; +- } + if (retval) { + dev_err(sst->dev, "request fw failed %d\n", retval); + return retval; + } ++ if (fw == NULL) { ++ dev_err(sst->dev, "fw is returning as null\n"); ++ return -EINVAL; ++ } + mutex_lock(&sst->sst_lock); + retval = sst_cache_and_parse_fw(sst, fw); + 
mutex_unlock(&sst->sst_lock); +diff --git a/tools/hv/hv_kvp_daemon.c b/tools/hv/hv_kvp_daemon.c +index 3965186b375a..62c9a503ae05 100644 +--- a/tools/hv/hv_kvp_daemon.c ++++ b/tools/hv/hv_kvp_daemon.c +@@ -1172,6 +1172,7 @@ static int kvp_set_ip_info(char *if_name, struct hv_kvp_ipaddr_value *new_val) + FILE *file; + char cmd[PATH_MAX]; + char *mac_addr; ++ int str_len; + + /* + * Set the configuration for the specified interface with +@@ -1295,8 +1296,18 @@ static int kvp_set_ip_info(char *if_name, struct hv_kvp_ipaddr_value *new_val) + * invoke the external script to do its magic. + */ + +- snprintf(cmd, sizeof(cmd), KVP_SCRIPTS_PATH "%s %s", +- "hv_set_ifconfig", if_file); ++ str_len = snprintf(cmd, sizeof(cmd), KVP_SCRIPTS_PATH "%s %s", ++ "hv_set_ifconfig", if_file); ++ /* ++ * This is a little overcautious, but it's necessary to suppress some ++ * false warnings from gcc 8.0.1. ++ */ ++ if (str_len <= 0 || (unsigned int)str_len >= sizeof(cmd)) { ++ syslog(LOG_ERR, "Cmd '%s' (len=%d) may be too long", ++ cmd, str_len); ++ return HV_E_FAIL; ++ } ++ + if (system(cmd)) { + syslog(LOG_ERR, "Failed to execute cmd '%s'; error: %d %s", + cmd, errno, strerror(errno)); +diff --git a/tools/perf/arch/x86/util/kvm-stat.c b/tools/perf/arch/x86/util/kvm-stat.c +index b32409a0e546..081353d7b095 100644 +--- a/tools/perf/arch/x86/util/kvm-stat.c ++++ b/tools/perf/arch/x86/util/kvm-stat.c +@@ -156,7 +156,7 @@ int cpu_isa_init(struct perf_kvm_stat *kvm, const char *cpuid) + if (strstr(cpuid, "Intel")) { + kvm->exit_reasons = vmx_exit_reasons; + kvm->exit_reasons_isa = "VMX"; +- } else if (strstr(cpuid, "AMD")) { ++ } else if (strstr(cpuid, "AMD") || strstr(cpuid, "Hygon")) { + kvm->exit_reasons = svm_exit_reasons; + kvm->exit_reasons_isa = "SVM"; + } else +diff --git a/tools/perf/tests/attr.py b/tools/perf/tests/attr.py +index ff9b60b99f52..44090a9a19f3 100644 +--- a/tools/perf/tests/attr.py ++++ b/tools/perf/tests/attr.py +@@ -116,7 +116,7 @@ class Event(dict): + if not 
self.has_key(t) or not other.has_key(t): + continue + if not data_equal(self[t], other[t]): +- log.warning("expected %s=%s, got %s" % (t, self[t], other[t])) ++ log.warning("expected %s=%s, got %s" % (t, self[t], other[t])) + + # Test file description needs to have following sections: + # [config] +diff --git a/tools/perf/tests/evsel-tp-sched.c b/tools/perf/tests/evsel-tp-sched.c +index 699561fa512c..67bcbf876776 100644 +--- a/tools/perf/tests/evsel-tp-sched.c ++++ b/tools/perf/tests/evsel-tp-sched.c +@@ -17,7 +17,7 @@ static int perf_evsel__test_field(struct perf_evsel *evsel, const char *name, + return -1; + } + +- is_signed = !!(field->flags | FIELD_IS_SIGNED); ++ is_signed = !!(field->flags & FIELD_IS_SIGNED); + if (should_be_signed && !is_signed) { + pr_debug("%s: \"%s\" signedness(%d) is wrong, should be %d\n", + evsel->name, name, is_signed, should_be_signed); +diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c +index 1ceb332575bd..696f2654826b 100644 +--- a/tools/perf/util/header.c ++++ b/tools/perf/util/header.c +@@ -3132,7 +3132,7 @@ perf_event__synthesize_event_update_unit(struct perf_tool *tool, + if (ev == NULL) + return -ENOMEM; + +- strncpy(ev->data, evsel->unit, size); ++ strlcpy(ev->data, evsel->unit, size + 1); + err = process(tool, (union perf_event *)ev, NULL, NULL); + free(ev); + return err; +diff --git a/tools/perf/util/probe-file.c b/tools/perf/util/probe-file.c +index cdf8d83a484c..6ab9230ce8ee 100644 +--- a/tools/perf/util/probe-file.c ++++ b/tools/perf/util/probe-file.c +@@ -424,7 +424,7 @@ static int probe_cache__open(struct probe_cache *pcache, const char *target, + + if (target && build_id_cache__cached(target)) { + /* This is a cached buildid */ +- strncpy(sbuildid, target, SBUILD_ID_SIZE); ++ strlcpy(sbuildid, target, SBUILD_ID_SIZE); + dir_name = build_id_cache__linkname(sbuildid, NULL, 0); + goto found; + } +diff --git a/tools/testing/selftests/bpf/test_progs.c b/tools/testing/selftests/bpf/test_progs.c +index 
11ee25cea227..1903fb4f45d8 100644 +--- a/tools/testing/selftests/bpf/test_progs.c ++++ b/tools/testing/selftests/bpf/test_progs.c +@@ -43,10 +43,10 @@ static struct { + struct iphdr iph; + struct tcphdr tcp; + } __packed pkt_v4 = { +- .eth.h_proto = bpf_htons(ETH_P_IP), ++ .eth.h_proto = __bpf_constant_htons(ETH_P_IP), + .iph.ihl = 5, + .iph.protocol = 6, +- .iph.tot_len = bpf_htons(MAGIC_BYTES), ++ .iph.tot_len = __bpf_constant_htons(MAGIC_BYTES), + .tcp.urg_ptr = 123, + }; + +@@ -56,9 +56,9 @@ static struct { + struct ipv6hdr iph; + struct tcphdr tcp; + } __packed pkt_v6 = { +- .eth.h_proto = bpf_htons(ETH_P_IPV6), ++ .eth.h_proto = __bpf_constant_htons(ETH_P_IPV6), + .iph.nexthdr = 6, +- .iph.payload_len = bpf_htons(MAGIC_BYTES), ++ .iph.payload_len = __bpf_constant_htons(MAGIC_BYTES), + .tcp.urg_ptr = 123, + }; + +diff --git a/virt/kvm/arm/mmio.c b/virt/kvm/arm/mmio.c +index dac7ceb1a677..08443a15e6be 100644 +--- a/virt/kvm/arm/mmio.c ++++ b/virt/kvm/arm/mmio.c +@@ -117,6 +117,12 @@ int kvm_handle_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run) + vcpu_set_reg(vcpu, vcpu->arch.mmio_decode.rt, data); + } + ++ /* ++ * The MMIO instruction is emulated and should not be re-executed ++ * in the guest. ++ */ ++ kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu)); ++ + return 0; + } + +@@ -144,11 +150,6 @@ static int decode_hsr(struct kvm_vcpu *vcpu, bool *is_write, int *len) + vcpu->arch.mmio_decode.sign_extend = sign_extend; + vcpu->arch.mmio_decode.rt = rt; + +- /* +- * The MMIO instruction is emulated and should not be re-executed +- * in the guest. 
+- */ +- kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu)); + return 0; + } + +diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c +index 4f35f0dfe681..9b79818758dc 100644 +--- a/virt/kvm/kvm_main.c ++++ b/virt/kvm/kvm_main.c +@@ -1962,7 +1962,8 @@ int kvm_gfn_to_hva_cache_init(struct kvm *kvm, struct gfn_to_hva_cache *ghc, + EXPORT_SYMBOL_GPL(kvm_gfn_to_hva_cache_init); + + int kvm_write_guest_offset_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc, +- void *data, int offset, unsigned long len) ++ void *data, unsigned int offset, ++ unsigned long len) + { + struct kvm_memslots *slots = kvm_memslots(kvm); + int r; +@@ -2911,8 +2912,10 @@ static int kvm_ioctl_create_device(struct kvm *kvm, + if (ops->init) + ops->init(dev); + ++ kvm_get_kvm(kvm); + ret = anon_inode_getfd(ops->name, &kvm_device_fops, dev, O_RDWR | O_CLOEXEC); + if (ret < 0) { ++ kvm_put_kvm(kvm); + mutex_lock(&kvm->lock); + list_del(&dev->vm_node); + mutex_unlock(&kvm->lock); +@@ -2920,7 +2923,6 @@ static int kvm_ioctl_create_device(struct kvm *kvm, + return ret; + } + +- kvm_get_kvm(kvm); + cd->fd = ret; + return 0; + }