From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by finch.gentoo.org (Postfix) with ESMTPS id 696801382C5
	for ; Mon, 5 Mar 2018 02:24:23 +0000 (UTC)
Received: from pigeon.gentoo.org (localhost [127.0.0.1])
	by pigeon.gentoo.org (Postfix) with SMTP id 6E5E3E0825;
	Mon, 5 Mar 2018 02:24:22 +0000 (UTC)
Received: from smtp.gentoo.org (smtp.gentoo.org [140.211.166.183])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by pigeon.gentoo.org (Postfix) with ESMTPS id 261B0E0825
	for ; Mon, 5 Mar 2018 02:24:21 +0000 (UTC)
Received: from oystercatcher.gentoo.org (unknown [IPv6:2a01:4f8:202:4333:225:90ff:fed9:fc84])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by smtp.gentoo.org (Postfix) with ESMTPS id CC1B6335C06
	for ; Mon, 5 Mar 2018 02:24:19 +0000 (UTC)
Received: from localhost.localdomain (localhost [IPv6:::1])
	by oystercatcher.gentoo.org (Postfix) with ESMTP id 6416E1F0
	for ; Mon, 5 Mar 2018 02:24:18 +0000 (UTC)
From: "Alice Ferrazzi" 
To: gentoo-commits@lists.gentoo.org
Content-Transfer-Encoding: 8bit
Content-type: text/plain; charset=UTF-8
Reply-To: gentoo-dev@lists.gentoo.org, "Alice Ferrazzi" 
Message-ID: <1520216643.79fc437a44836427496cec54d9d3d36f320b0fc6.alicef@gentoo>
Subject: [gentoo-commits] proj/linux-patches:4.14 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1023_linux-4.14.24.patch
X-VCS-Directories: /
X-VCS-Committer: alicef
X-VCS-Committer-Name: Alice Ferrazzi
X-VCS-Revision: 79fc437a44836427496cec54d9d3d36f320b0fc6
X-VCS-Branch: 4.14
Date: Mon, 5 Mar 2018 02:24:18 +0000 (UTC)
Precedence: bulk
List-Post: 
List-Help: 
List-Unsubscribe: 
List-Subscribe: 
List-Id: Gentoo Linux mail 
X-BeenThere: gentoo-commits@lists.gentoo.org
X-Archives-Salt: 8633249a-91e3-48d2-bf61-58d02ac535a6
X-Archives-Hash: 1c9349ae4555baed3a209d28916140d5

commit:     79fc437a44836427496cec54d9d3d36f320b0fc6
Author:     Alice Ferrazzi gentoo org>
AuthorDate: Mon Mar 5 02:24:03 2018 +0000
Commit:     Alice Ferrazzi gentoo org>
CommitDate: Mon Mar 5 02:24:03 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=79fc437a

linux kernel 4.14.24

 0000_README              |    4 +
 1023_linux-4.14.24.patch | 4370 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4374 insertions(+)

diff --git a/0000_README b/0000_README
index 6827bed..da94971 100644
--- a/0000_README
+++ b/0000_README
@@ -135,6 +135,10 @@ Patch:  1022_linux-4.14.23.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.14.23
 
+Patch:  1023_linux-4.14.24.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.14.24
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.
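A note on the 0000_README hunk above: the linux-patches branches track every patch as a three-line Patch:/From:/Desc: stanza, and the numeric filename prefix gives the apply order (1022, then 1023, and so on). Purely as an illustration of that record format, and not a tool shipped by the project, a minimal C reader for the stanza list could look like this (the filename and field handling are assumptions):

/*
 * Sketch only: list the "Patch:" stanzas from 0000_README in apply order.
 * Assumes the three-line Patch:/From:/Desc: format shown in the hunk above;
 * this is not part of the linux-patches repository.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("0000_README", "r");
	char line[512], patch[256] = "";

	if (!f) {
		perror("0000_README");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		/* "Patch:" opens a stanza; remember the file name. */
		if (sscanf(line, "Patch: %255s", patch) == 1)
			continue;
		/* "Desc:" closes it; print one summary line. */
		if (patch[0] && strncmp(line, "Desc:", 5) == 0) {
			printf("%s ->%s", patch, line + 5);
			patch[0] = '\0';
		}
	}
	fclose(f);
	return 0;
}

The full 4.14.24 patch follows.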
diff --git a/1023_linux-4.14.24.patch b/1023_linux-4.14.24.patch new file mode 100644 index 0000000..2c3cebe --- /dev/null +++ b/1023_linux-4.14.24.patch @@ -0,0 +1,4370 @@ +diff --git a/Makefile b/Makefile +index 169f3199274f..38acc6047d7d 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,7 +1,7 @@ + # SPDX-License-Identifier: GPL-2.0 + VERSION = 4 + PATCHLEVEL = 14 +-SUBLEVEL = 23 ++SUBLEVEL = 24 + EXTRAVERSION = + NAME = Petit Gorille + +diff --git a/arch/arm/boot/dts/ls1021a-qds.dts b/arch/arm/boot/dts/ls1021a-qds.dts +index 940875316d0f..67b4de0e3439 100644 +--- a/arch/arm/boot/dts/ls1021a-qds.dts ++++ b/arch/arm/boot/dts/ls1021a-qds.dts +@@ -215,7 +215,7 @@ + reg = <0x2a>; + VDDA-supply = <®_3p3v>; + VDDIO-supply = <®_3p3v>; +- clocks = <&sys_mclk 1>; ++ clocks = <&sys_mclk>; + }; + }; + }; +diff --git a/arch/arm/boot/dts/ls1021a-twr.dts b/arch/arm/boot/dts/ls1021a-twr.dts +index a8b148ad1dd2..44715c8ef756 100644 +--- a/arch/arm/boot/dts/ls1021a-twr.dts ++++ b/arch/arm/boot/dts/ls1021a-twr.dts +@@ -187,7 +187,7 @@ + reg = <0x0a>; + VDDA-supply = <®_3p3v>; + VDDIO-supply = <®_3p3v>; +- clocks = <&sys_mclk 1>; ++ clocks = <&sys_mclk>; + }; + }; + +diff --git a/arch/arm/lib/csumpartialcopyuser.S b/arch/arm/lib/csumpartialcopyuser.S +index 1712f132b80d..b83fdc06286a 100644 +--- a/arch/arm/lib/csumpartialcopyuser.S ++++ b/arch/arm/lib/csumpartialcopyuser.S +@@ -85,7 +85,11 @@ + .pushsection .text.fixup,"ax" + .align 4 + 9001: mov r4, #-EFAULT ++#ifdef CONFIG_CPU_SW_DOMAIN_PAN ++ ldr r5, [sp, #9*4] @ *err_ptr ++#else + ldr r5, [sp, #8*4] @ *err_ptr ++#endif + str r4, [r5] + ldmia sp, {r1, r2} @ retrieve dst, len + add r2, r2, r1 +diff --git a/arch/arm64/boot/dts/marvell/armada-8040-mcbin.dts b/arch/arm64/boot/dts/marvell/armada-8040-mcbin.dts +index 2b6b792dab93..e6ee7443b530 100644 +--- a/arch/arm64/boot/dts/marvell/armada-8040-mcbin.dts ++++ b/arch/arm64/boot/dts/marvell/armada-8040-mcbin.dts +@@ -228,8 +228,11 @@ + + &cpm_eth0 { + status = "okay"; ++ /* Network PHY */ + phy = <&phy0>; + phy-mode = "10gbase-kr"; ++ /* Generic PHY, providing serdes lanes */ ++ phys = <&cpm_comphy4 0>; + }; + + &cpm_sata0 { +@@ -263,15 +266,21 @@ + + &cps_eth0 { + status = "okay"; ++ /* Network PHY */ + phy = <&phy8>; + phy-mode = "10gbase-kr"; ++ /* Generic PHY, providing serdes lanes */ ++ phys = <&cps_comphy4 0>; + }; + + &cps_eth1 { + /* CPS Lane 0 - J5 (Gigabit RJ45) */ + status = "okay"; ++ /* Network PHY */ + phy = <&ge_phy>; + phy-mode = "sgmii"; ++ /* Generic PHY, providing serdes lanes */ ++ phys = <&cps_comphy0 1>; + }; + + &cps_pinctrl { +diff --git a/arch/arm64/boot/dts/marvell/armada-cp110-master.dtsi b/arch/arm64/boot/dts/marvell/armada-cp110-master.dtsi +index 32690107c1cc..9a7b63cd63a3 100644 +--- a/arch/arm64/boot/dts/marvell/armada-cp110-master.dtsi ++++ b/arch/arm64/boot/dts/marvell/armada-cp110-master.dtsi +@@ -111,6 +111,44 @@ + }; + }; + ++ cpm_comphy: phy@120000 { ++ compatible = "marvell,comphy-cp110"; ++ reg = <0x120000 0x6000>; ++ marvell,system-controller = <&cpm_syscon0>; ++ #address-cells = <1>; ++ #size-cells = <0>; ++ ++ cpm_comphy0: phy@0 { ++ reg = <0>; ++ #phy-cells = <1>; ++ }; ++ ++ cpm_comphy1: phy@1 { ++ reg = <1>; ++ #phy-cells = <1>; ++ }; ++ ++ cpm_comphy2: phy@2 { ++ reg = <2>; ++ #phy-cells = <1>; ++ }; ++ ++ cpm_comphy3: phy@3 { ++ reg = <3>; ++ #phy-cells = <1>; ++ }; ++ ++ cpm_comphy4: phy@4 { ++ reg = <4>; ++ #phy-cells = <1>; ++ }; ++ ++ cpm_comphy5: phy@5 { ++ reg = <5>; ++ #phy-cells = <1>; ++ }; ++ }; ++ + cpm_mdio: mdio@12a200 { + #address-cells = <1>; + 
#size-cells = <0>; +diff --git a/arch/arm64/boot/dts/marvell/armada-cp110-slave.dtsi b/arch/arm64/boot/dts/marvell/armada-cp110-slave.dtsi +index 14e47c5c3816..faf28633a309 100644 +--- a/arch/arm64/boot/dts/marvell/armada-cp110-slave.dtsi ++++ b/arch/arm64/boot/dts/marvell/armada-cp110-slave.dtsi +@@ -111,6 +111,44 @@ + }; + }; + ++ cps_comphy: phy@120000 { ++ compatible = "marvell,comphy-cp110"; ++ reg = <0x120000 0x6000>; ++ marvell,system-controller = <&cps_syscon0>; ++ #address-cells = <1>; ++ #size-cells = <0>; ++ ++ cps_comphy0: phy@0 { ++ reg = <0>; ++ #phy-cells = <1>; ++ }; ++ ++ cps_comphy1: phy@1 { ++ reg = <1>; ++ #phy-cells = <1>; ++ }; ++ ++ cps_comphy2: phy@2 { ++ reg = <2>; ++ #phy-cells = <1>; ++ }; ++ ++ cps_comphy3: phy@3 { ++ reg = <3>; ++ #phy-cells = <1>; ++ }; ++ ++ cps_comphy4: phy@4 { ++ reg = <4>; ++ #phy-cells = <1>; ++ }; ++ ++ cps_comphy5: phy@5 { ++ reg = <5>; ++ #phy-cells = <1>; ++ }; ++ }; ++ + cps_mdio: mdio@12a200 { + #address-cells = <1>; + #size-cells = <0>; +diff --git a/arch/arm64/boot/dts/renesas/ulcb.dtsi b/arch/arm64/boot/dts/renesas/ulcb.dtsi +index 1b868df2393f..e95d99265af9 100644 +--- a/arch/arm64/boot/dts/renesas/ulcb.dtsi ++++ b/arch/arm64/boot/dts/renesas/ulcb.dtsi +@@ -145,7 +145,6 @@ + &avb { + pinctrl-0 = <&avb_pins>; + pinctrl-names = "default"; +- renesas,no-ether-link; + phy-handle = <&phy0>; + status = "okay"; + +diff --git a/arch/ia64/kernel/time.c b/arch/ia64/kernel/time.c +index aa7be020a904..c954523d00fe 100644 +--- a/arch/ia64/kernel/time.c ++++ b/arch/ia64/kernel/time.c +@@ -88,7 +88,7 @@ void vtime_flush(struct task_struct *tsk) + } + + if (ti->softirq_time) { +- delta = cycle_to_nsec(ti->softirq_time)); ++ delta = cycle_to_nsec(ti->softirq_time); + account_system_index_time(tsk, delta, CPUTIME_SOFTIRQ); + } + +diff --git a/arch/mips/lib/Makefile b/arch/mips/lib/Makefile +index 78c2affeabf8..e84e12655fa8 100644 +--- a/arch/mips/lib/Makefile ++++ b/arch/mips/lib/Makefile +@@ -16,4 +16,5 @@ obj-$(CONFIG_CPU_R3000) += r3k_dump_tlb.o + obj-$(CONFIG_CPU_TX39XX) += r3k_dump_tlb.o + + # libgcc-style stuff needed in the kernel +-obj-y += ashldi3.o ashrdi3.o bswapsi.o bswapdi.o cmpdi2.o lshrdi3.o ucmpdi2.o ++obj-y += ashldi3.o ashrdi3.o bswapsi.o bswapdi.o cmpdi2.o lshrdi3.o multi3.o \ ++ ucmpdi2.o +diff --git a/arch/mips/lib/libgcc.h b/arch/mips/lib/libgcc.h +index 28002ed90c2c..199a7f96282f 100644 +--- a/arch/mips/lib/libgcc.h ++++ b/arch/mips/lib/libgcc.h +@@ -10,10 +10,18 @@ typedef int word_type __attribute__ ((mode (__word__))); + struct DWstruct { + int high, low; + }; ++ ++struct TWstruct { ++ long long high, low; ++}; + #elif defined(__LITTLE_ENDIAN) + struct DWstruct { + int low, high; + }; ++ ++struct TWstruct { ++ long long low, high; ++}; + #else + #error I feel sick. + #endif +@@ -23,4 +31,13 @@ typedef union { + long long ll; + } DWunion; + ++#if defined(CONFIG_64BIT) && defined(CONFIG_CPU_MIPSR6) ++typedef int ti_type __attribute__((mode(TI))); ++ ++typedef union { ++ struct TWstruct s; ++ ti_type ti; ++} TWunion; ++#endif ++ + #endif /* __ASM_LIBGCC_H */ +diff --git a/arch/mips/lib/multi3.c b/arch/mips/lib/multi3.c +new file mode 100644 +index 000000000000..111ad475aa0c +--- /dev/null ++++ b/arch/mips/lib/multi3.c +@@ -0,0 +1,54 @@ ++// SPDX-License-Identifier: GPL-2.0 ++#include ++ ++#include "libgcc.h" ++ ++/* ++ * GCC 7 suboptimally generates __multi3 calls for mips64r6, so for that ++ * specific case only we'll implement it here. 
++ * ++ * See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82981 ++ */ ++#if defined(CONFIG_64BIT) && defined(CONFIG_CPU_MIPSR6) && (__GNUC__ == 7) ++ ++/* multiply 64-bit values, low 64-bits returned */ ++static inline long long notrace dmulu(long long a, long long b) ++{ ++ long long res; ++ ++ asm ("dmulu %0,%1,%2" : "=r" (res) : "r" (a), "r" (b)); ++ return res; ++} ++ ++/* multiply 64-bit unsigned values, high 64-bits of 128-bit result returned */ ++static inline long long notrace dmuhu(long long a, long long b) ++{ ++ long long res; ++ ++ asm ("dmuhu %0,%1,%2" : "=r" (res) : "r" (a), "r" (b)); ++ return res; ++} ++ ++/* multiply 128-bit values, low 128-bits returned */ ++ti_type notrace __multi3(ti_type a, ti_type b) ++{ ++ TWunion res, aa, bb; ++ ++ aa.ti = a; ++ bb.ti = b; ++ ++ /* ++ * a * b = (a.lo * b.lo) ++ * + 2^64 * (a.hi * b.lo + a.lo * b.hi) ++ * [+ 2^128 * (a.hi * b.hi)] ++ */ ++ res.s.low = dmulu(aa.s.low, bb.s.low); ++ res.s.high = dmuhu(aa.s.low, bb.s.low); ++ res.s.high += dmulu(aa.s.high, bb.s.low); ++ res.s.high += dmulu(aa.s.low, bb.s.high); ++ ++ return res.ti; ++} ++EXPORT_SYMBOL(__multi3); ++ ++#endif /* 64BIT && CPU_MIPSR6 && GCC7 */ +diff --git a/arch/parisc/include/asm/thread_info.h b/arch/parisc/include/asm/thread_info.h +index c980a02a52bc..598c8d60fa5e 100644 +--- a/arch/parisc/include/asm/thread_info.h ++++ b/arch/parisc/include/asm/thread_info.h +@@ -35,7 +35,12 @@ struct thread_info { + + /* thread information allocation */ + ++#ifdef CONFIG_IRQSTACKS ++#define THREAD_SIZE_ORDER 2 /* PA-RISC requires at least 16k stack */ ++#else + #define THREAD_SIZE_ORDER 3 /* PA-RISC requires at least 32k stack */ ++#endif ++ + /* Be sure to hunt all references to this down when you change the size of + * the kernel stack */ + #define THREAD_SIZE (PAGE_SIZE << THREAD_SIZE_ORDER) +diff --git a/arch/powerpc/platforms/pseries/dlpar.c b/arch/powerpc/platforms/pseries/dlpar.c +index e45b5f10645a..e9149d05d30b 100644 +--- a/arch/powerpc/platforms/pseries/dlpar.c ++++ b/arch/powerpc/platforms/pseries/dlpar.c +@@ -586,11 +586,26 @@ static ssize_t dlpar_show(struct class *class, struct class_attribute *attr, + + static CLASS_ATTR_RW(dlpar); + +-static int __init pseries_dlpar_init(void) ++int __init dlpar_workqueue_init(void) + { ++ if (pseries_hp_wq) ++ return 0; ++ + pseries_hp_wq = alloc_workqueue("pseries hotplug workqueue", +- WQ_UNBOUND, 1); ++ WQ_UNBOUND, 1); ++ ++ return pseries_hp_wq ? 
0 : -ENOMEM; ++} ++ ++static int __init dlpar_sysfs_init(void) ++{ ++ int rc; ++ ++ rc = dlpar_workqueue_init(); ++ if (rc) ++ return rc; ++ + return sysfs_create_file(kernel_kobj, &class_attr_dlpar.attr); + } +-machine_device_initcall(pseries, pseries_dlpar_init); ++machine_device_initcall(pseries, dlpar_sysfs_init); + +diff --git a/arch/powerpc/platforms/pseries/pseries.h b/arch/powerpc/platforms/pseries/pseries.h +index 4470a3194311..1ae1d9f4dbe9 100644 +--- a/arch/powerpc/platforms/pseries/pseries.h ++++ b/arch/powerpc/platforms/pseries/pseries.h +@@ -98,4 +98,6 @@ static inline unsigned long cmo_get_page_size(void) + return CMO_PageSize; + } + ++int dlpar_workqueue_init(void); ++ + #endif /* _PSERIES_PSERIES_H */ +diff --git a/arch/powerpc/platforms/pseries/ras.c b/arch/powerpc/platforms/pseries/ras.c +index 4923ffe230cf..5e1ef9150182 100644 +--- a/arch/powerpc/platforms/pseries/ras.c ++++ b/arch/powerpc/platforms/pseries/ras.c +@@ -48,6 +48,28 @@ static irqreturn_t ras_epow_interrupt(int irq, void *dev_id); + static irqreturn_t ras_error_interrupt(int irq, void *dev_id); + + ++/* ++ * Enable the hotplug interrupt late because processing them may touch other ++ * devices or systems (e.g. hugepages) that have not been initialized at the ++ * subsys stage. ++ */ ++int __init init_ras_hotplug_IRQ(void) ++{ ++ struct device_node *np; ++ ++ /* Hotplug Events */ ++ np = of_find_node_by_path("/event-sources/hot-plug-events"); ++ if (np != NULL) { ++ if (dlpar_workqueue_init() == 0) ++ request_event_sources_irqs(np, ras_hotplug_interrupt, ++ "RAS_HOTPLUG"); ++ of_node_put(np); ++ } ++ ++ return 0; ++} ++machine_late_initcall(pseries, init_ras_hotplug_IRQ); ++ + /* + * Initialize handlers for the set of interrupts caused by hardware errors + * and power system events. 
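The dlpar/ras hunks above split the old pseries_dlpar_init() in two: dlpar_workqueue_init() is now idempotent (it returns 0 immediately once pseries_hp_wq exists), so both the device-stage sysfs initcall and the late RAS hotplug initcall can call it, and whichever runs first does the allocation. Boot-time initcalls run sequentially, so the bare pointer check is enough there. A standalone C sketch of that shared-guard pattern, with hypothetical names and malloc() standing in for alloc_workqueue():

#include <stdio.h>
#include <stdlib.h>

static void *hp_wq;			/* stands in for pseries_hp_wq */

/* Idempotent: whichever init path runs first allocates, the other no-ops. */
static int hp_workqueue_init(void)
{
	if (hp_wq)
		return 0;
	hp_wq = malloc(64);		/* stands in for alloc_workqueue() */
	return hp_wq ? 0 : -1;		/* the kernel code returns -ENOMEM */
}

/* like dlpar_sysfs_init(), a device-stage initcall */
static int sysfs_path_init(void)
{
	return hp_workqueue_init();
}

/* like init_ras_hotplug_IRQ(), a late initcall */
static int ras_irq_path_init(void)
{
	if (hp_workqueue_init() == 0)
		printf("workqueue ready, hotplug IRQ can be requested\n");
	return 0;
}

int main(void)
{
	sysfs_path_init();		/* initcalls run in order... */
	ras_irq_path_init();		/* ...so the second call reuses hp_wq */
	free(hp_wq);
	return 0;
}

The ras.c diff continues below.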
+@@ -66,14 +88,6 @@ static int __init init_ras_IRQ(void) + of_node_put(np); + } + +- /* Hotplug Events */ +- np = of_find_node_by_path("/event-sources/hot-plug-events"); +- if (np != NULL) { +- request_event_sources_irqs(np, ras_hotplug_interrupt, +- "RAS_HOTPLUG"); +- of_node_put(np); +- } +- + /* EPOW Events */ + np = of_find_node_by_path("/event-sources/epow-events"); + if (np != NULL) { +diff --git a/arch/sh/boards/mach-se/770x/setup.c b/arch/sh/boards/mach-se/770x/setup.c +index 77c35350ee77..b7fa7a87e946 100644 +--- a/arch/sh/boards/mach-se/770x/setup.c ++++ b/arch/sh/boards/mach-se/770x/setup.c +@@ -9,6 +9,7 @@ + */ + #include + #include ++#include + #include + #include + #include +@@ -115,6 +116,11 @@ static struct platform_device heartbeat_device = { + #if defined(CONFIG_CPU_SUBTYPE_SH7710) ||\ + defined(CONFIG_CPU_SUBTYPE_SH7712) + /* SH771X Ethernet driver */ ++static struct sh_eth_plat_data sh_eth_plat = { ++ .phy = PHY_ID, ++ .phy_interface = PHY_INTERFACE_MODE_MII, ++}; ++ + static struct resource sh_eth0_resources[] = { + [0] = { + .start = SH_ETH0_BASE, +@@ -132,7 +138,7 @@ static struct platform_device sh_eth0_device = { + .name = "sh771x-ether", + .id = 0, + .dev = { +- .platform_data = PHY_ID, ++ .platform_data = &sh_eth_plat, + }, + .num_resources = ARRAY_SIZE(sh_eth0_resources), + .resource = sh_eth0_resources, +@@ -155,7 +161,7 @@ static struct platform_device sh_eth1_device = { + .name = "sh771x-ether", + .id = 1, + .dev = { +- .platform_data = PHY_ID, ++ .platform_data = &sh_eth_plat, + }, + .num_resources = ARRAY_SIZE(sh_eth1_resources), + .resource = sh_eth1_resources, +diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c +index 1c2558430cf0..56457cb73448 100644 +--- a/arch/x86/events/intel/core.c ++++ b/arch/x86/events/intel/core.c +@@ -3847,6 +3847,8 @@ static struct attribute *intel_pmu_attrs[] = { + + __init int intel_pmu_init(void) + { ++ struct attribute **extra_attr = NULL; ++ struct attribute **to_free = NULL; + union cpuid10_edx edx; + union cpuid10_eax eax; + union cpuid10_ebx ebx; +@@ -3854,7 +3856,6 @@ __init int intel_pmu_init(void) + unsigned int unused; + struct extra_reg *er; + int version, i; +- struct attribute **extra_attr = NULL; + char *name; + + if (!cpu_has(&boot_cpu_data, X86_FEATURE_ARCH_PERFMON)) { +@@ -4294,6 +4295,7 @@ __init int intel_pmu_init(void) + extra_attr = boot_cpu_has(X86_FEATURE_RTM) ? + hsw_format_attr : nhm_format_attr; + extra_attr = merge_attr(extra_attr, skl_format_attr); ++ to_free = extra_attr; + x86_pmu.cpu_events = get_hsw_events_attrs(); + intel_pmu_pebs_data_source_skl( + boot_cpu_data.x86_model == INTEL_FAM6_SKYLAKE_X); +@@ -4401,6 +4403,7 @@ __init int intel_pmu_init(void) + pr_cont("full-width counters, "); + } + ++ kfree(to_free); + return 0; + } + +diff --git a/arch/x86/include/asm/asm.h b/arch/x86/include/asm/asm.h +index 219faaec51df..386a6900e206 100644 +--- a/arch/x86/include/asm/asm.h ++++ b/arch/x86/include/asm/asm.h +@@ -136,6 +136,7 @@ + #endif + + #ifndef __ASSEMBLY__ ++#ifndef __BPF__ + /* + * This output constraint should be used for any inline asm which has a "call" + * instruction. 
Otherwise the asm may be inserted before the frame pointer +@@ -145,5 +146,6 @@ + register unsigned long current_stack_pointer asm(_ASM_SP); + #define ASM_CALL_CONSTRAINT "+r" (current_stack_pointer) + #endif ++#endif + + #endif /* _ASM_X86_ASM_H */ +diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c +index e84cb4c75cd0..c54361a22f59 100644 +--- a/arch/x86/kernel/setup.c ++++ b/arch/x86/kernel/setup.c +@@ -928,9 +928,6 @@ void __init setup_arch(char **cmdline_p) + set_bit(EFI_BOOT, &efi.flags); + set_bit(EFI_64BIT, &efi.flags); + } +- +- if (efi_enabled(EFI_BOOT)) +- efi_memblock_x86_reserve_range(); + #endif + + x86_init.oem.arch_setup(); +@@ -984,6 +981,8 @@ void __init setup_arch(char **cmdline_p) + + parse_early_param(); + ++ if (efi_enabled(EFI_BOOT)) ++ efi_memblock_x86_reserve_range(); + #ifdef CONFIG_MEMORY_HOTPLUG + /* + * Memory used by the kernel cannot be hot-removed because Linux +diff --git a/arch/x86/kernel/stacktrace.c b/arch/x86/kernel/stacktrace.c +index 60244bfaf88f..4565f31bd398 100644 +--- a/arch/x86/kernel/stacktrace.c ++++ b/arch/x86/kernel/stacktrace.c +@@ -160,8 +160,12 @@ int save_stack_trace_tsk_reliable(struct task_struct *tsk, + { + int ret; + ++ /* ++ * If the task doesn't have a stack (e.g., a zombie), the stack is ++ * "reliably" empty. ++ */ + if (!try_get_task_stack(tsk)) +- return -EINVAL; ++ return 0; + + ret = __save_stack_trace_reliable(trace, tsk); + +diff --git a/arch/x86/platform/intel-mid/device_libs/platform_bt.c b/arch/x86/platform/intel-mid/device_libs/platform_bt.c +index dc036e511f48..5a0483e7bf66 100644 +--- a/arch/x86/platform/intel-mid/device_libs/platform_bt.c ++++ b/arch/x86/platform/intel-mid/device_libs/platform_bt.c +@@ -60,7 +60,7 @@ static int __init tng_bt_sfi_setup(struct bt_sfi_data *ddata) + return 0; + } + +-static const struct bt_sfi_data tng_bt_sfi_data __initdata = { ++static struct bt_sfi_data tng_bt_sfi_data __initdata = { + .setup = tng_bt_sfi_setup, + }; + +diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c +index d669e9d89001..c9081c6671f0 100644 +--- a/arch/x86/xen/enlighten.c ++++ b/arch/x86/xen/enlighten.c +@@ -1,8 +1,12 @@ ++#ifdef CONFIG_XEN_BALLOON_MEMORY_HOTPLUG ++#include ++#endif + #include + #include + + #include + #include ++#include + + #include + #include +@@ -331,3 +335,80 @@ void xen_arch_unregister_cpu(int num) + } + EXPORT_SYMBOL(xen_arch_unregister_cpu); + #endif ++ ++#ifdef CONFIG_XEN_BALLOON_MEMORY_HOTPLUG ++void __init arch_xen_balloon_init(struct resource *hostmem_resource) ++{ ++ struct xen_memory_map memmap; ++ int rc; ++ unsigned int i, last_guest_ram; ++ phys_addr_t max_addr = PFN_PHYS(max_pfn); ++ struct e820_table *xen_e820_table; ++ const struct e820_entry *entry; ++ struct resource *res; ++ ++ if (!xen_initial_domain()) ++ return; ++ ++ xen_e820_table = kmalloc(sizeof(*xen_e820_table), GFP_KERNEL); ++ if (!xen_e820_table) ++ return; ++ ++ memmap.nr_entries = ARRAY_SIZE(xen_e820_table->entries); ++ set_xen_guest_handle(memmap.buffer, xen_e820_table->entries); ++ rc = HYPERVISOR_memory_op(XENMEM_machine_memory_map, &memmap); ++ if (rc) { ++ pr_warn("%s: Can't read host e820 (%d)\n", __func__, rc); ++ goto out; ++ } ++ ++ last_guest_ram = 0; ++ for (i = 0; i < memmap.nr_entries; i++) { ++ if (xen_e820_table->entries[i].addr >= max_addr) ++ break; ++ if (xen_e820_table->entries[i].type == E820_TYPE_RAM) ++ last_guest_ram = i; ++ } ++ ++ entry = &xen_e820_table->entries[last_guest_ram]; ++ if (max_addr >= entry->addr + entry->size) ++ goto out; /* No unallocated host 
RAM. */ ++ ++ hostmem_resource->start = max_addr; ++ hostmem_resource->end = entry->addr + entry->size; ++ ++ /* ++ * Mark non-RAM regions between the end of dom0 RAM and end of host RAM ++ * as unavailable. The rest of that region can be used for hotplug-based ++ * ballooning. ++ */ ++ for (; i < memmap.nr_entries; i++) { ++ entry = &xen_e820_table->entries[i]; ++ ++ if (entry->type == E820_TYPE_RAM) ++ continue; ++ ++ if (entry->addr >= hostmem_resource->end) ++ break; ++ ++ res = kzalloc(sizeof(*res), GFP_KERNEL); ++ if (!res) ++ goto out; ++ ++ res->name = "Unavailable host RAM"; ++ res->start = entry->addr; ++ res->end = (entry->addr + entry->size < hostmem_resource->end) ? ++ entry->addr + entry->size : hostmem_resource->end; ++ rc = insert_resource(hostmem_resource, res); ++ if (rc) { ++ pr_warn("%s: Can't insert [%llx - %llx) (%d)\n", ++ __func__, res->start, res->end, rc); ++ kfree(res); ++ goto out; ++ } ++ } ++ ++ out: ++ kfree(xen_e820_table); ++} ++#endif /* CONFIG_XEN_BALLOON_MEMORY_HOTPLUG */ +diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c +index 899a22a02e95..f896c2975545 100644 +--- a/arch/x86/xen/enlighten_pv.c ++++ b/arch/x86/xen/enlighten_pv.c +@@ -88,6 +88,8 @@ + #include "multicalls.h" + #include "pmu.h" + ++#include "../kernel/cpu/cpu.h" /* get_cpu_cap() */ ++ + void *xen_initial_gdt; + + static int xen_cpu_up_prepare_pv(unsigned int cpu); +@@ -1257,6 +1259,7 @@ asmlinkage __visible void __init xen_start_kernel(void) + __userpte_alloc_gfp &= ~__GFP_HIGHMEM; + + /* Work out if we support NX */ ++ get_cpu_cap(&boot_cpu_data); + x86_configure_nx(); + + /* Get mfn list */ +diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c +index d0b943a6b117..042e9c422b21 100644 +--- a/arch/x86/xen/mmu_pv.c ++++ b/arch/x86/xen/mmu_pv.c +@@ -1902,6 +1902,18 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn) + /* Graft it onto L4[511][510] */ + copy_page(level2_kernel_pgt, l2); + ++ /* ++ * Zap execute permission from the ident map. Due to the sharing of ++ * L1 entries we need to do this in the L2. ++ */ ++ if (__supported_pte_mask & _PAGE_NX) { ++ for (i = 0; i < PTRS_PER_PMD; ++i) { ++ if (pmd_none(level2_ident_pgt[i])) ++ continue; ++ level2_ident_pgt[i] = pmd_set_flags(level2_ident_pgt[i], _PAGE_NX); ++ } ++ } ++ + /* Copy the initial P->M table mappings if necessary. 
*/ + i = pgd_index(xen_start_info->mfn_list); + if (i && i < pgd_index(__START_KERNEL_map)) +diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c +index c114ca767b3b..6e0d2086eacb 100644 +--- a/arch/x86/xen/setup.c ++++ b/arch/x86/xen/setup.c +@@ -808,7 +808,6 @@ char * __init xen_memory_setup(void) + addr = xen_e820_table.entries[0].addr; + size = xen_e820_table.entries[0].size; + while (i < xen_e820_table.nr_entries) { +- bool discard = false; + + chunk_size = size; + type = xen_e820_table.entries[i].type; +@@ -824,11 +823,10 @@ char * __init xen_memory_setup(void) + xen_add_extra_mem(pfn_s, n_pfns); + xen_max_p2m_pfn = pfn_s + n_pfns; + } else +- discard = true; ++ type = E820_TYPE_UNUSABLE; + } + +- if (!discard) +- xen_align_and_add_e820_region(addr, chunk_size, type); ++ xen_align_and_add_e820_region(addr, chunk_size, type); + + addr += chunk_size; + size -= chunk_size; +diff --git a/block/blk-core.c b/block/blk-core.c +index f3750389e351..95b7ea996ac2 100644 +--- a/block/blk-core.c ++++ b/block/blk-core.c +@@ -531,6 +531,13 @@ static void __blk_drain_queue(struct request_queue *q, bool drain_all) + } + } + ++void blk_drain_queue(struct request_queue *q) ++{ ++ spin_lock_irq(q->queue_lock); ++ __blk_drain_queue(q, true); ++ spin_unlock_irq(q->queue_lock); ++} ++ + /** + * blk_queue_bypass_start - enter queue bypass mode + * @q: queue of interest +@@ -655,8 +662,6 @@ void blk_cleanup_queue(struct request_queue *q) + */ + blk_freeze_queue(q); + spin_lock_irq(lock); +- if (!q->mq_ops) +- __blk_drain_queue(q, true); + queue_flag_set(QUEUE_FLAG_DEAD, q); + spin_unlock_irq(lock); + +diff --git a/block/blk-mq.c b/block/blk-mq.c +index 98a18609755e..b60798a30ea2 100644 +--- a/block/blk-mq.c ++++ b/block/blk-mq.c +@@ -159,6 +159,8 @@ void blk_freeze_queue(struct request_queue *q) + * exported to drivers as the only user for unfreeze is blk_mq. 
+ */ + blk_freeze_queue_start(q); ++ if (!q->mq_ops) ++ blk_drain_queue(q); + blk_mq_freeze_queue_wait(q); + } + +diff --git a/block/blk.h b/block/blk.h +index 85be8b232b37..b2c287c2c6a3 100644 +--- a/block/blk.h ++++ b/block/blk.h +@@ -362,4 +362,6 @@ static inline void blk_queue_bounce(struct request_queue *q, struct bio **bio) + } + #endif /* CONFIG_BOUNCE */ + ++extern void blk_drain_queue(struct request_queue *q); ++ + #endif /* BLK_INTERNAL_H */ +diff --git a/crypto/af_alg.c b/crypto/af_alg.c +index 53b7fa4cf4ab..4e4640bb82b9 100644 +--- a/crypto/af_alg.c ++++ b/crypto/af_alg.c +@@ -693,7 +693,7 @@ void af_alg_free_areq_sgls(struct af_alg_async_req *areq) + unsigned int i; + + list_for_each_entry_safe(rsgl, tmp, &areq->rsgl_list, list) { +- ctx->rcvused -= rsgl->sg_num_bytes; ++ atomic_sub(rsgl->sg_num_bytes, &ctx->rcvused); + af_alg_free_sg(&rsgl->sgl); + list_del(&rsgl->list); + if (rsgl != &areq->first_rsgl) +@@ -1192,7 +1192,7 @@ int af_alg_get_rsgl(struct sock *sk, struct msghdr *msg, int flags, + + areq->last_rsgl = rsgl; + len += err; +- ctx->rcvused += err; ++ atomic_add(err, &ctx->rcvused); + rsgl->sg_num_bytes = err; + iov_iter_advance(&msg->msg_iter, err); + } +diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c +index 782cb8fec323..f138af18b500 100644 +--- a/crypto/algif_aead.c ++++ b/crypto/algif_aead.c +@@ -571,7 +571,7 @@ static int aead_accept_parent_nokey(void *private, struct sock *sk) + INIT_LIST_HEAD(&ctx->tsgl_list); + ctx->len = len; + ctx->used = 0; +- ctx->rcvused = 0; ++ atomic_set(&ctx->rcvused, 0); + ctx->more = 0; + ctx->merge = 0; + ctx->enc = 0; +diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c +index 7a3e663d54d5..90bc4e0f0785 100644 +--- a/crypto/algif_skcipher.c ++++ b/crypto/algif_skcipher.c +@@ -391,7 +391,7 @@ static int skcipher_accept_parent_nokey(void *private, struct sock *sk) + INIT_LIST_HEAD(&ctx->tsgl_list); + ctx->len = len; + ctx->used = 0; +- ctx->rcvused = 0; ++ atomic_set(&ctx->rcvused, 0); + ctx->more = 0; + ctx->merge = 0; + ctx->enc = 0; +diff --git a/drivers/crypto/inside-secure/safexcel.c b/drivers/crypto/inside-secure/safexcel.c +index 89ba9e85c0f3..4bcef78a08aa 100644 +--- a/drivers/crypto/inside-secure/safexcel.c ++++ b/drivers/crypto/inside-secure/safexcel.c +@@ -607,6 +607,7 @@ static inline void safexcel_handle_result_descriptor(struct safexcel_crypto_priv + ndesc = ctx->handle_result(priv, ring, sreq->req, + &should_complete, &ret); + if (ndesc < 0) { ++ kfree(sreq); + dev_err(priv->dev, "failed to handle result (%d)", ndesc); + return; + } +diff --git a/drivers/crypto/inside-secure/safexcel_cipher.c b/drivers/crypto/inside-secure/safexcel_cipher.c +index 5438552bc6d7..fcc0a606d748 100644 +--- a/drivers/crypto/inside-secure/safexcel_cipher.c ++++ b/drivers/crypto/inside-secure/safexcel_cipher.c +@@ -14,6 +14,7 @@ + + #include + #include ++#include + + #include "safexcel.h" + +@@ -33,6 +34,10 @@ struct safexcel_cipher_ctx { + unsigned int key_len; + }; + ++struct safexcel_cipher_req { ++ bool needs_inv; ++}; ++ + static void safexcel_cipher_token(struct safexcel_cipher_ctx *ctx, + struct crypto_async_request *async, + struct safexcel_command_desc *cdesc, +@@ -126,9 +131,9 @@ static int safexcel_context_control(struct safexcel_cipher_ctx *ctx, + return 0; + } + +-static int safexcel_handle_result(struct safexcel_crypto_priv *priv, int ring, +- struct crypto_async_request *async, +- bool *should_complete, int *ret) ++static int safexcel_handle_req_result(struct safexcel_crypto_priv *priv, int ring, ++ struct 
crypto_async_request *async, ++ bool *should_complete, int *ret) + { + struct skcipher_request *req = skcipher_request_cast(async); + struct safexcel_result_desc *rdesc; +@@ -265,7 +270,6 @@ static int safexcel_aes_send(struct crypto_async_request *async, + spin_unlock_bh(&priv->ring[ring].egress_lock); + + request->req = &req->base; +- ctx->base.handle_result = safexcel_handle_result; + + *commands = n_cdesc; + *results = n_rdesc; +@@ -341,8 +345,6 @@ static int safexcel_handle_inv_result(struct safexcel_crypto_priv *priv, + + ring = safexcel_select_ring(priv); + ctx->base.ring = ring; +- ctx->base.needs_inv = false; +- ctx->base.send = safexcel_aes_send; + + spin_lock_bh(&priv->ring[ring].queue_lock); + enq_ret = crypto_enqueue_request(&priv->ring[ring].queue, async); +@@ -359,6 +361,26 @@ static int safexcel_handle_inv_result(struct safexcel_crypto_priv *priv, + return ndesc; + } + ++static int safexcel_handle_result(struct safexcel_crypto_priv *priv, int ring, ++ struct crypto_async_request *async, ++ bool *should_complete, int *ret) ++{ ++ struct skcipher_request *req = skcipher_request_cast(async); ++ struct safexcel_cipher_req *sreq = skcipher_request_ctx(req); ++ int err; ++ ++ if (sreq->needs_inv) { ++ sreq->needs_inv = false; ++ err = safexcel_handle_inv_result(priv, ring, async, ++ should_complete, ret); ++ } else { ++ err = safexcel_handle_req_result(priv, ring, async, ++ should_complete, ret); ++ } ++ ++ return err; ++} ++ + static int safexcel_cipher_send_inv(struct crypto_async_request *async, + int ring, struct safexcel_request *request, + int *commands, int *results) +@@ -368,8 +390,6 @@ static int safexcel_cipher_send_inv(struct crypto_async_request *async, + struct safexcel_crypto_priv *priv = ctx->priv; + int ret; + +- ctx->base.handle_result = safexcel_handle_inv_result; +- + ret = safexcel_invalidate_cache(async, &ctx->base, priv, + ctx->base.ctxr_dma, ring, request); + if (unlikely(ret)) +@@ -381,28 +401,46 @@ static int safexcel_cipher_send_inv(struct crypto_async_request *async, + return 0; + } + ++static int safexcel_send(struct crypto_async_request *async, ++ int ring, struct safexcel_request *request, ++ int *commands, int *results) ++{ ++ struct skcipher_request *req = skcipher_request_cast(async); ++ struct safexcel_cipher_req *sreq = skcipher_request_ctx(req); ++ int ret; ++ ++ if (sreq->needs_inv) ++ ret = safexcel_cipher_send_inv(async, ring, request, ++ commands, results); ++ else ++ ret = safexcel_aes_send(async, ring, request, ++ commands, results); ++ return ret; ++} ++ + static int safexcel_cipher_exit_inv(struct crypto_tfm *tfm) + { + struct safexcel_cipher_ctx *ctx = crypto_tfm_ctx(tfm); + struct safexcel_crypto_priv *priv = ctx->priv; +- struct skcipher_request req; ++ SKCIPHER_REQUEST_ON_STACK(req, __crypto_skcipher_cast(tfm)); ++ struct safexcel_cipher_req *sreq = skcipher_request_ctx(req); + struct safexcel_inv_result result = {}; + int ring = ctx->base.ring; + +- memset(&req, 0, sizeof(struct skcipher_request)); ++ memset(req, 0, sizeof(struct skcipher_request)); + + /* create invalidation request */ + init_completion(&result.completion); +- skcipher_request_set_callback(&req, CRYPTO_TFM_REQ_MAY_BACKLOG, +- safexcel_inv_complete, &result); ++ skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG, ++ safexcel_inv_complete, &result); + +- skcipher_request_set_tfm(&req, __crypto_skcipher_cast(tfm)); +- ctx = crypto_tfm_ctx(req.base.tfm); ++ skcipher_request_set_tfm(req, __crypto_skcipher_cast(tfm)); ++ ctx = crypto_tfm_ctx(req->base.tfm); + 
ctx->base.exit_inv = true; +- ctx->base.send = safexcel_cipher_send_inv; ++ sreq->needs_inv = true; + + spin_lock_bh(&priv->ring[ring].queue_lock); +- crypto_enqueue_request(&priv->ring[ring].queue, &req.base); ++ crypto_enqueue_request(&priv->ring[ring].queue, &req->base); + spin_unlock_bh(&priv->ring[ring].queue_lock); + + if (!priv->ring[ring].need_dequeue) +@@ -424,19 +462,21 @@ static int safexcel_aes(struct skcipher_request *req, + enum safexcel_cipher_direction dir, u32 mode) + { + struct safexcel_cipher_ctx *ctx = crypto_tfm_ctx(req->base.tfm); ++ struct safexcel_cipher_req *sreq = skcipher_request_ctx(req); + struct safexcel_crypto_priv *priv = ctx->priv; + int ret, ring; + ++ sreq->needs_inv = false; + ctx->direction = dir; + ctx->mode = mode; + + if (ctx->base.ctxr) { +- if (ctx->base.needs_inv) +- ctx->base.send = safexcel_cipher_send_inv; ++ if (ctx->base.needs_inv) { ++ sreq->needs_inv = true; ++ ctx->base.needs_inv = false; ++ } + } else { + ctx->base.ring = safexcel_select_ring(priv); +- ctx->base.send = safexcel_aes_send; +- + ctx->base.ctxr = dma_pool_zalloc(priv->context_pool, + EIP197_GFP_FLAGS(req->base), + &ctx->base.ctxr_dma); +@@ -476,6 +516,11 @@ static int safexcel_skcipher_cra_init(struct crypto_tfm *tfm) + alg.skcipher.base); + + ctx->priv = tmpl->priv; ++ ctx->base.send = safexcel_send; ++ ctx->base.handle_result = safexcel_handle_result; ++ ++ crypto_skcipher_set_reqsize(__crypto_skcipher_cast(tfm), ++ sizeof(struct safexcel_cipher_req)); + + return 0; + } +diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c +index 0626b33d2886..d626aa485a76 100644 +--- a/drivers/crypto/inside-secure/safexcel_hash.c ++++ b/drivers/crypto/inside-secure/safexcel_hash.c +@@ -32,6 +32,7 @@ struct safexcel_ahash_req { + bool last_req; + bool finish; + bool hmac; ++ bool needs_inv; + + int nents; + +@@ -121,9 +122,9 @@ static void safexcel_context_control(struct safexcel_ahash_ctx *ctx, + } + } + +-static int safexcel_handle_result(struct safexcel_crypto_priv *priv, int ring, +- struct crypto_async_request *async, +- bool *should_complete, int *ret) ++static int safexcel_handle_req_result(struct safexcel_crypto_priv *priv, int ring, ++ struct crypto_async_request *async, ++ bool *should_complete, int *ret) + { + struct safexcel_result_desc *rdesc; + struct ahash_request *areq = ahash_request_cast(async); +@@ -169,9 +170,9 @@ static int safexcel_handle_result(struct safexcel_crypto_priv *priv, int ring, + return 1; + } + +-static int safexcel_ahash_send(struct crypto_async_request *async, int ring, +- struct safexcel_request *request, int *commands, +- int *results) ++static int safexcel_ahash_send_req(struct crypto_async_request *async, int ring, ++ struct safexcel_request *request, ++ int *commands, int *results) + { + struct ahash_request *areq = ahash_request_cast(async); + struct crypto_ahash *ahash = crypto_ahash_reqtfm(areq); +@@ -310,7 +311,6 @@ static int safexcel_ahash_send(struct crypto_async_request *async, int ring, + + req->processed += len; + request->req = &areq->base; +- ctx->base.handle_result = safexcel_handle_result; + + *commands = n_cdesc; + *results = 1; +@@ -394,8 +394,6 @@ static int safexcel_handle_inv_result(struct safexcel_crypto_priv *priv, + + ring = safexcel_select_ring(priv); + ctx->base.ring = ring; +- ctx->base.needs_inv = false; +- ctx->base.send = safexcel_ahash_send; + + spin_lock_bh(&priv->ring[ring].queue_lock); + enq_ret = crypto_enqueue_request(&priv->ring[ring].queue, async); +@@ -412,6 
+410,26 @@ static int safexcel_handle_inv_result(struct safexcel_crypto_priv *priv, + return 1; + } + ++static int safexcel_handle_result(struct safexcel_crypto_priv *priv, int ring, ++ struct crypto_async_request *async, ++ bool *should_complete, int *ret) ++{ ++ struct ahash_request *areq = ahash_request_cast(async); ++ struct safexcel_ahash_req *req = ahash_request_ctx(areq); ++ int err; ++ ++ if (req->needs_inv) { ++ req->needs_inv = false; ++ err = safexcel_handle_inv_result(priv, ring, async, ++ should_complete, ret); ++ } else { ++ err = safexcel_handle_req_result(priv, ring, async, ++ should_complete, ret); ++ } ++ ++ return err; ++} ++ + static int safexcel_ahash_send_inv(struct crypto_async_request *async, + int ring, struct safexcel_request *request, + int *commands, int *results) +@@ -420,7 +438,6 @@ static int safexcel_ahash_send_inv(struct crypto_async_request *async, + struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(areq)); + int ret; + +- ctx->base.handle_result = safexcel_handle_inv_result; + ret = safexcel_invalidate_cache(async, &ctx->base, ctx->priv, + ctx->base.ctxr_dma, ring, request); + if (unlikely(ret)) +@@ -432,28 +449,46 @@ static int safexcel_ahash_send_inv(struct crypto_async_request *async, + return 0; + } + ++static int safexcel_ahash_send(struct crypto_async_request *async, ++ int ring, struct safexcel_request *request, ++ int *commands, int *results) ++{ ++ struct ahash_request *areq = ahash_request_cast(async); ++ struct safexcel_ahash_req *req = ahash_request_ctx(areq); ++ int ret; ++ ++ if (req->needs_inv) ++ ret = safexcel_ahash_send_inv(async, ring, request, ++ commands, results); ++ else ++ ret = safexcel_ahash_send_req(async, ring, request, ++ commands, results); ++ return ret; ++} ++ + static int safexcel_ahash_exit_inv(struct crypto_tfm *tfm) + { + struct safexcel_ahash_ctx *ctx = crypto_tfm_ctx(tfm); + struct safexcel_crypto_priv *priv = ctx->priv; +- struct ahash_request req; ++ AHASH_REQUEST_ON_STACK(req, __crypto_ahash_cast(tfm)); ++ struct safexcel_ahash_req *rctx = ahash_request_ctx(req); + struct safexcel_inv_result result = {}; + int ring = ctx->base.ring; + +- memset(&req, 0, sizeof(struct ahash_request)); ++ memset(req, 0, sizeof(struct ahash_request)); + + /* create invalidation request */ + init_completion(&result.completion); +- ahash_request_set_callback(&req, CRYPTO_TFM_REQ_MAY_BACKLOG, ++ ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG, + safexcel_inv_complete, &result); + +- ahash_request_set_tfm(&req, __crypto_ahash_cast(tfm)); +- ctx = crypto_tfm_ctx(req.base.tfm); ++ ahash_request_set_tfm(req, __crypto_ahash_cast(tfm)); ++ ctx = crypto_tfm_ctx(req->base.tfm); + ctx->base.exit_inv = true; +- ctx->base.send = safexcel_ahash_send_inv; ++ rctx->needs_inv = true; + + spin_lock_bh(&priv->ring[ring].queue_lock); +- crypto_enqueue_request(&priv->ring[ring].queue, &req.base); ++ crypto_enqueue_request(&priv->ring[ring].queue, &req->base); + spin_unlock_bh(&priv->ring[ring].queue_lock); + + if (!priv->ring[ring].need_dequeue) +@@ -501,14 +536,16 @@ static int safexcel_ahash_enqueue(struct ahash_request *areq) + struct safexcel_crypto_priv *priv = ctx->priv; + int ret, ring; + +- ctx->base.send = safexcel_ahash_send; ++ req->needs_inv = false; + + if (req->processed && ctx->digest == CONTEXT_CONTROL_DIGEST_PRECOMPUTED) + ctx->base.needs_inv = safexcel_ahash_needs_inv_get(areq); + + if (ctx->base.ctxr) { +- if (ctx->base.needs_inv) +- ctx->base.send = safexcel_ahash_send_inv; ++ if (ctx->base.needs_inv) { ++ 
ctx->base.needs_inv = false; ++ req->needs_inv = true; ++ } + } else { + ctx->base.ring = safexcel_select_ring(priv); + ctx->base.ctxr = dma_pool_zalloc(priv->context_pool, +@@ -642,6 +679,8 @@ static int safexcel_ahash_cra_init(struct crypto_tfm *tfm) + struct safexcel_alg_template, alg.ahash); + + ctx->priv = tmpl->priv; ++ ctx->base.send = safexcel_ahash_send; ++ ctx->base.handle_result = safexcel_handle_result; + + crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm), + sizeof(struct safexcel_ahash_req)); +diff --git a/drivers/dma/fsl-edma.c b/drivers/dma/fsl-edma.c +index 6775f2c74e25..c7568869284e 100644 +--- a/drivers/dma/fsl-edma.c ++++ b/drivers/dma/fsl-edma.c +@@ -863,11 +863,11 @@ static void fsl_edma_irq_exit( + } + } + +-static void fsl_disable_clocks(struct fsl_edma_engine *fsl_edma) ++static void fsl_disable_clocks(struct fsl_edma_engine *fsl_edma, int nr_clocks) + { + int i; + +- for (i = 0; i < DMAMUX_NR; i++) ++ for (i = 0; i < nr_clocks; i++) + clk_disable_unprepare(fsl_edma->muxclk[i]); + } + +@@ -904,25 +904,25 @@ static int fsl_edma_probe(struct platform_device *pdev) + + res = platform_get_resource(pdev, IORESOURCE_MEM, 1 + i); + fsl_edma->muxbase[i] = devm_ioremap_resource(&pdev->dev, res); +- if (IS_ERR(fsl_edma->muxbase[i])) ++ if (IS_ERR(fsl_edma->muxbase[i])) { ++ /* on error: disable all previously enabled clks */ ++ fsl_disable_clocks(fsl_edma, i); + return PTR_ERR(fsl_edma->muxbase[i]); ++ } + + sprintf(clkname, "dmamux%d", i); + fsl_edma->muxclk[i] = devm_clk_get(&pdev->dev, clkname); + if (IS_ERR(fsl_edma->muxclk[i])) { + dev_err(&pdev->dev, "Missing DMAMUX block clock.\n"); ++ /* on error: disable all previously enabled clks */ ++ fsl_disable_clocks(fsl_edma, i); + return PTR_ERR(fsl_edma->muxclk[i]); + } + + ret = clk_prepare_enable(fsl_edma->muxclk[i]); +- if (ret) { +- /* disable only clks which were enabled on error */ +- for (; i >= 0; i--) +- clk_disable_unprepare(fsl_edma->muxclk[i]); +- +- dev_err(&pdev->dev, "DMAMUX clk block failed.\n"); +- return ret; +- } ++ if (ret) ++ /* on error: disable all previously enabled clks */ ++ fsl_disable_clocks(fsl_edma, i); + + } + +@@ -976,7 +976,7 @@ static int fsl_edma_probe(struct platform_device *pdev) + if (ret) { + dev_err(&pdev->dev, + "Can't register Freescale eDMA engine. (%d)\n", ret); +- fsl_disable_clocks(fsl_edma); ++ fsl_disable_clocks(fsl_edma, DMAMUX_NR); + return ret; + } + +@@ -985,7 +985,7 @@ static int fsl_edma_probe(struct platform_device *pdev) + dev_err(&pdev->dev, + "Can't register Freescale eDMA of_dma. 
(%d)\n", ret); + dma_async_device_unregister(&fsl_edma->dma_dev); +- fsl_disable_clocks(fsl_edma); ++ fsl_disable_clocks(fsl_edma, DMAMUX_NR); + return ret; + } + +@@ -1015,7 +1015,7 @@ static int fsl_edma_remove(struct platform_device *pdev) + fsl_edma_cleanup_vchan(&fsl_edma->dma_dev); + of_dma_controller_free(np); + dma_async_device_unregister(&fsl_edma->dma_dev); +- fsl_disable_clocks(fsl_edma); ++ fsl_disable_clocks(fsl_edma, DMAMUX_NR); + + return 0; + } +diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c +index 46485692db48..059db50109bc 100644 +--- a/drivers/gpu/drm/i915/intel_display.c ++++ b/drivers/gpu/drm/i915/intel_display.c +@@ -13240,7 +13240,7 @@ intel_primary_plane_create(struct drm_i915_private *dev_priv, enum pipe pipe) + primary->frontbuffer_bit = INTEL_FRONTBUFFER_PRIMARY(pipe); + primary->check_plane = intel_check_primary_plane; + +- if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv)) { ++ if (INTEL_GEN(dev_priv) >= 10) { + intel_primary_formats = skl_primary_formats; + num_formats = ARRAY_SIZE(skl_primary_formats); + modifiers = skl_format_modifiers_ccs; +diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pci/base.c b/drivers/gpu/drm/nouveau/nvkm/subdev/pci/base.c +index a4cb82495cee..245c946ea661 100644 +--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pci/base.c ++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pci/base.c +@@ -136,6 +136,13 @@ nvkm_pci_init(struct nvkm_subdev *subdev) + return ret; + + pci->irq = pdev->irq; ++ ++ /* Ensure MSI interrupts are armed, for the case where there are ++ * already interrupts pending (for whatever reason) at load time. ++ */ ++ if (pci->msi) ++ pci->func->msi_rearm(pci); ++ + return ret; + } + +diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc.c b/drivers/gpu/drm/ttm/ttm_page_alloc.c +index 871599826773..91f9263f3c3b 100644 +--- a/drivers/gpu/drm/ttm/ttm_page_alloc.c ++++ b/drivers/gpu/drm/ttm/ttm_page_alloc.c +@@ -821,6 +821,8 @@ int ttm_page_alloc_init(struct ttm_mem_global *glob, unsigned max_pages) + pr_info("Initializing pool allocator\n"); + + _manager = kzalloc(sizeof(*_manager), GFP_KERNEL); ++ if (!_manager) ++ return -ENOMEM; + + ttm_page_pool_init_locked(&_manager->wc_pool, GFP_HIGHUSER, "wc"); + +diff --git a/drivers/infiniband/core/core_priv.h b/drivers/infiniband/core/core_priv.h +index a1d687a664f8..66f0268f37a6 100644 +--- a/drivers/infiniband/core/core_priv.h ++++ b/drivers/infiniband/core/core_priv.h +@@ -314,7 +314,7 @@ static inline int ib_mad_enforce_security(struct ib_mad_agent_private *map, + } + #endif + +-struct ib_device *__ib_device_get_by_index(u32 ifindex); ++struct ib_device *ib_device_get_by_index(u32 ifindex); + /* RDMA device netlink */ + void nldev_init(void); + void nldev_exit(void); +diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c +index b4b28ff8b7dc..d7d042a20ab4 100644 +--- a/drivers/infiniband/core/device.c ++++ b/drivers/infiniband/core/device.c +@@ -134,7 +134,7 @@ static int ib_device_check_mandatory(struct ib_device *device) + return 0; + } + +-struct ib_device *__ib_device_get_by_index(u32 index) ++static struct ib_device *__ib_device_get_by_index(u32 index) + { + struct ib_device *device; + +@@ -145,6 +145,22 @@ struct ib_device *__ib_device_get_by_index(u32 index) + return NULL; + } + ++/* ++ * Caller is responsible to return refrerence count by calling put_device() ++ */ ++struct ib_device *ib_device_get_by_index(u32 index) ++{ ++ struct ib_device *device; ++ ++ down_read(&lists_rwsem); ++ device = 
__ib_device_get_by_index(index); ++ if (device) ++ get_device(&device->dev); ++ ++ up_read(&lists_rwsem); ++ return device; ++} ++ + static struct ib_device *__ib_device_get_by_name(const char *name) + { + struct ib_device *device; +diff --git a/drivers/infiniband/core/nldev.c b/drivers/infiniband/core/nldev.c +index 9a05245a1acf..0dcd1aa6f683 100644 +--- a/drivers/infiniband/core/nldev.c ++++ b/drivers/infiniband/core/nldev.c +@@ -142,27 +142,34 @@ static int nldev_get_doit(struct sk_buff *skb, struct nlmsghdr *nlh, + + index = nla_get_u32(tb[RDMA_NLDEV_ATTR_DEV_INDEX]); + +- device = __ib_device_get_by_index(index); ++ device = ib_device_get_by_index(index); + if (!device) + return -EINVAL; + + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); +- if (!msg) +- return -ENOMEM; ++ if (!msg) { ++ err = -ENOMEM; ++ goto err; ++ } + + nlh = nlmsg_put(msg, NETLINK_CB(skb).portid, nlh->nlmsg_seq, + RDMA_NL_GET_TYPE(RDMA_NL_NLDEV, RDMA_NLDEV_CMD_GET), + 0, 0); + + err = fill_dev_info(msg, device); +- if (err) { +- nlmsg_free(msg); +- return err; +- } ++ if (err) ++ goto err_free; + + nlmsg_end(msg, nlh); + ++ put_device(&device->dev); + return rdma_nl_unicast(msg, NETLINK_CB(skb).portid); ++ ++err_free: ++ nlmsg_free(msg); ++err: ++ put_device(&device->dev); ++ return err; + } + + static int _nldev_get_dumpit(struct ib_device *device, +@@ -220,31 +227,40 @@ static int nldev_port_get_doit(struct sk_buff *skb, struct nlmsghdr *nlh, + return -EINVAL; + + index = nla_get_u32(tb[RDMA_NLDEV_ATTR_DEV_INDEX]); +- device = __ib_device_get_by_index(index); ++ device = ib_device_get_by_index(index); + if (!device) + return -EINVAL; + + port = nla_get_u32(tb[RDMA_NLDEV_ATTR_PORT_INDEX]); +- if (!rdma_is_port_valid(device, port)) +- return -EINVAL; ++ if (!rdma_is_port_valid(device, port)) { ++ err = -EINVAL; ++ goto err; ++ } + + msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); +- if (!msg) +- return -ENOMEM; ++ if (!msg) { ++ err = -ENOMEM; ++ goto err; ++ } + + nlh = nlmsg_put(msg, NETLINK_CB(skb).portid, nlh->nlmsg_seq, + RDMA_NL_GET_TYPE(RDMA_NL_NLDEV, RDMA_NLDEV_CMD_GET), + 0, 0); + + err = fill_port_info(msg, device, port); +- if (err) { +- nlmsg_free(msg); +- return err; +- } ++ if (err) ++ goto err_free; + + nlmsg_end(msg, nlh); ++ put_device(&device->dev); + + return rdma_nl_unicast(msg, NETLINK_CB(skb).portid); ++ ++err_free: ++ nlmsg_free(msg); ++err: ++ put_device(&device->dev); ++ return err; + } + + static int nldev_port_get_dumpit(struct sk_buff *skb, +@@ -265,7 +281,7 @@ static int nldev_port_get_dumpit(struct sk_buff *skb, + return -EINVAL; + + ifindex = nla_get_u32(tb[RDMA_NLDEV_ATTR_DEV_INDEX]); +- device = __ib_device_get_by_index(ifindex); ++ device = ib_device_get_by_index(ifindex); + if (!device) + return -EINVAL; + +@@ -299,7 +315,9 @@ static int nldev_port_get_dumpit(struct sk_buff *skb, + nlmsg_end(skb, nlh); + } + +-out: cb->args[0] = idx; ++out: ++ put_device(&device->dev); ++ cb->args[0] = idx; + return skb->len; + } + +diff --git a/drivers/infiniband/hw/mlx4/mr.c b/drivers/infiniband/hw/mlx4/mr.c +index e6f77f63da75..e80a7f764a74 100644 +--- a/drivers/infiniband/hw/mlx4/mr.c ++++ b/drivers/infiniband/hw/mlx4/mr.c +@@ -406,7 +406,6 @@ struct ib_mr *mlx4_ib_alloc_mr(struct ib_pd *pd, + goto err_free_mr; + + mr->max_pages = max_num_sg; +- + err = mlx4_mr_enable(dev->dev, &mr->mmr); + if (err) + goto err_free_pl; +@@ -417,6 +416,7 @@ struct ib_mr *mlx4_ib_alloc_mr(struct ib_pd *pd, + return &mr->ibmr; + + err_free_pl: ++ mr->ibmr.device = pd->device; + mlx4_free_priv_pages(mr); + 
err_free_mr: + (void) mlx4_mr_free(dev->dev, &mr->mmr); +diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c +index 37bbc543847a..231b043e2806 100644 +--- a/drivers/infiniband/hw/mlx5/mr.c ++++ b/drivers/infiniband/hw/mlx5/mr.c +@@ -1637,6 +1637,7 @@ struct ib_mr *mlx5_ib_alloc_mr(struct ib_pd *pd, + MLX5_SET(mkc, mkc, access_mode, mr->access_mode); + MLX5_SET(mkc, mkc, umr_en, 1); + ++ mr->ibmr.device = pd->device; + err = mlx5_core_create_mkey(dev->mdev, &mr->mmkey, in, inlen); + if (err) + goto err_destroy_psv; +diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c +index ed34d5a581fa..d7162f2b7979 100644 +--- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c ++++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_qp.c +@@ -406,6 +406,13 @@ static void pvrdma_free_qp(struct pvrdma_qp *qp) + atomic_dec(&qp->refcnt); + wait_event(qp->wait, !atomic_read(&qp->refcnt)); + ++ if (!qp->is_kernel) { ++ if (qp->rumem) ++ ib_umem_release(qp->rumem); ++ if (qp->sumem) ++ ib_umem_release(qp->sumem); ++ } ++ + pvrdma_page_dir_cleanup(dev, &qp->pdir); + + kfree(qp); +diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c +index dcc77014018d..f6935811ef3f 100644 +--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c ++++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c +@@ -903,8 +903,8 @@ static int path_rec_start(struct net_device *dev, + return 0; + } + +-static void neigh_add_path(struct sk_buff *skb, u8 *daddr, +- struct net_device *dev) ++static struct ipoib_neigh *neigh_add_path(struct sk_buff *skb, u8 *daddr, ++ struct net_device *dev) + { + struct ipoib_dev_priv *priv = ipoib_priv(dev); + struct rdma_netdev *rn = netdev_priv(dev); +@@ -918,7 +918,15 @@ static void neigh_add_path(struct sk_buff *skb, u8 *daddr, + spin_unlock_irqrestore(&priv->lock, flags); + ++dev->stats.tx_dropped; + dev_kfree_skb_any(skb); +- return; ++ return NULL; ++ } ++ ++ /* To avoid race condition, make sure that the ++ * neigh will be added only once. 
++ */ ++ if (unlikely(!list_empty(&neigh->list))) { ++ spin_unlock_irqrestore(&priv->lock, flags); ++ return neigh; + } + + path = __path_find(dev, daddr + 4); +@@ -957,7 +965,7 @@ static void neigh_add_path(struct sk_buff *skb, u8 *daddr, + path->ah->last_send = rn->send(dev, skb, path->ah->ah, + IPOIB_QPN(daddr)); + ipoib_neigh_put(neigh); +- return; ++ return NULL; + } + } else { + neigh->ah = NULL; +@@ -974,7 +982,7 @@ static void neigh_add_path(struct sk_buff *skb, u8 *daddr, + + spin_unlock_irqrestore(&priv->lock, flags); + ipoib_neigh_put(neigh); +- return; ++ return NULL; + + err_path: + ipoib_neigh_free(neigh); +@@ -984,6 +992,8 @@ static void neigh_add_path(struct sk_buff *skb, u8 *daddr, + + spin_unlock_irqrestore(&priv->lock, flags); + ipoib_neigh_put(neigh); ++ ++ return NULL; + } + + static void unicast_arp_send(struct sk_buff *skb, struct net_device *dev, +@@ -1092,8 +1102,9 @@ static int ipoib_start_xmit(struct sk_buff *skb, struct net_device *dev) + case htons(ETH_P_TIPC): + neigh = ipoib_neigh_get(dev, phdr->hwaddr); + if (unlikely(!neigh)) { +- neigh_add_path(skb, phdr->hwaddr, dev); +- return NETDEV_TX_OK; ++ neigh = neigh_add_path(skb, phdr->hwaddr, dev); ++ if (likely(!neigh)) ++ return NETDEV_TX_OK; + } + break; + case htons(ETH_P_ARP): +diff --git a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c +index 93e149efc1f5..9b3f47ae2016 100644 +--- a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c ++++ b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c +@@ -816,7 +816,10 @@ void ipoib_mcast_send(struct net_device *dev, u8 *daddr, struct sk_buff *skb) + spin_lock_irqsave(&priv->lock, flags); + if (!neigh) { + neigh = ipoib_neigh_alloc(daddr, dev); +- if (neigh) { ++ /* Make sure that the neigh will be added only ++ * once to mcast list. 
++ */ ++ if (neigh && list_empty(&neigh->list)) { + kref_get(&mcast->ah->ref); + neigh->ah = mcast->ah; + list_add_tail(&neigh->list, &mcast->neigh_list); +diff --git a/drivers/input/misc/xen-kbdfront.c b/drivers/input/misc/xen-kbdfront.c +index 6bf56bb5f8d9..d91f3b1c5375 100644 +--- a/drivers/input/misc/xen-kbdfront.c ++++ b/drivers/input/misc/xen-kbdfront.c +@@ -326,8 +326,6 @@ static int xenkbd_probe(struct xenbus_device *dev, + 0, width, 0, 0); + input_set_abs_params(mtouch, ABS_MT_POSITION_Y, + 0, height, 0, 0); +- input_set_abs_params(mtouch, ABS_MT_PRESSURE, +- 0, 255, 0, 0); + + ret = input_mt_init_slots(mtouch, num_cont, INPUT_MT_DIRECT); + if (ret) { +diff --git a/drivers/leds/led-core.c b/drivers/leds/led-core.c +index ef1360445413..9ce6b32f52a1 100644 +--- a/drivers/leds/led-core.c ++++ b/drivers/leds/led-core.c +@@ -189,6 +189,7 @@ void led_blink_set(struct led_classdev *led_cdev, + { + del_timer_sync(&led_cdev->blink_timer); + ++ clear_bit(LED_BLINK_SW, &led_cdev->work_flags); + clear_bit(LED_BLINK_ONESHOT, &led_cdev->work_flags); + clear_bit(LED_BLINK_ONESHOT_STOP, &led_cdev->work_flags); + +diff --git a/drivers/mtd/nand/brcmnand/brcmnand.c b/drivers/mtd/nand/brcmnand/brcmnand.c +index edf24c148fa6..2a978d9832a7 100644 +--- a/drivers/mtd/nand/brcmnand/brcmnand.c ++++ b/drivers/mtd/nand/brcmnand/brcmnand.c +@@ -1763,7 +1763,7 @@ static int brcmnand_read(struct mtd_info *mtd, struct nand_chip *chip, + err = brcmstb_nand_verify_erased_page(mtd, chip, buf, + addr); + /* erased page bitflips corrected */ +- if (err > 0) ++ if (err >= 0) + return err; + } + +diff --git a/drivers/mtd/nand/gpmi-nand/gpmi-nand.c b/drivers/mtd/nand/gpmi-nand/gpmi-nand.c +index 50f8d4a1b983..d4d824ef64e9 100644 +--- a/drivers/mtd/nand/gpmi-nand/gpmi-nand.c ++++ b/drivers/mtd/nand/gpmi-nand/gpmi-nand.c +@@ -1067,9 +1067,6 @@ static int gpmi_ecc_read_page(struct mtd_info *mtd, struct nand_chip *chip, + return ret; + } + +- /* handle the block mark swapping */ +- block_mark_swapping(this, payload_virt, auxiliary_virt); +- + /* Loop over status bytes, accumulating ECC status. */ + status = auxiliary_virt + nfc_geo->auxiliary_status_offset; + +@@ -1158,6 +1155,9 @@ static int gpmi_ecc_read_page(struct mtd_info *mtd, struct nand_chip *chip, + max_bitflips = max_t(unsigned int, max_bitflips, *status); + } + ++ /* handle the block mark swapping */ ++ block_mark_swapping(this, buf, auxiliary_virt); ++ + if (oob_required) { + /* + * It's time to deliver the OOB bytes. 
See gpmi_ecc_read_oob() +diff --git a/drivers/net/can/flexcan.c b/drivers/net/can/flexcan.c +index c4d1140116ea..ed8a2a7ce500 100644 +--- a/drivers/net/can/flexcan.c ++++ b/drivers/net/can/flexcan.c +@@ -526,7 +526,7 @@ static int flexcan_start_xmit(struct sk_buff *skb, struct net_device *dev) + data = be32_to_cpup((__be32 *)&cf->data[0]); + flexcan_write(data, &priv->tx_mb->data[0]); + } +- if (cf->can_dlc > 3) { ++ if (cf->can_dlc > 4) { + data = be32_to_cpup((__be32 *)&cf->data[4]); + flexcan_write(data, &priv->tx_mb->data[1]); + } +diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c +index c6bd5e24005d..67df5053dc30 100644 +--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c ++++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c +@@ -1565,7 +1565,7 @@ static int ena_rss_configure(struct ena_adapter *adapter) + + static int ena_up_complete(struct ena_adapter *adapter) + { +- int rc, i; ++ int rc; + + rc = ena_rss_configure(adapter); + if (rc) +@@ -1584,17 +1584,6 @@ static int ena_up_complete(struct ena_adapter *adapter) + + ena_napi_enable_all(adapter); + +- /* Enable completion queues interrupt */ +- for (i = 0; i < adapter->num_queues; i++) +- ena_unmask_interrupt(&adapter->tx_ring[i], +- &adapter->rx_ring[i]); +- +- /* schedule napi in case we had pending packets +- * from the last time we disable napi +- */ +- for (i = 0; i < adapter->num_queues; i++) +- napi_schedule(&adapter->ena_napi[i].napi); +- + return 0; + } + +@@ -1731,7 +1720,7 @@ static int ena_create_all_io_rx_queues(struct ena_adapter *adapter) + + static int ena_up(struct ena_adapter *adapter) + { +- int rc; ++ int rc, i; + + netdev_dbg(adapter->netdev, "%s\n", __func__); + +@@ -1774,6 +1763,17 @@ static int ena_up(struct ena_adapter *adapter) + + set_bit(ENA_FLAG_DEV_UP, &adapter->flags); + ++ /* Enable completion queues interrupt */ ++ for (i = 0; i < adapter->num_queues; i++) ++ ena_unmask_interrupt(&adapter->tx_ring[i], ++ &adapter->rx_ring[i]); ++ ++ /* schedule napi in case we had pending packets ++ * from the last time we disable napi ++ */ ++ for (i = 0; i < adapter->num_queues; i++) ++ napi_schedule(&adapter->ena_napi[i].napi); ++ + return rc; + + err_up: +diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_hw.h b/drivers/net/ethernet/aquantia/atlantic/aq_hw.h +index 0207927dc8a6..4ebd53b3c7da 100644 +--- a/drivers/net/ethernet/aquantia/atlantic/aq_hw.h ++++ b/drivers/net/ethernet/aquantia/atlantic/aq_hw.h +@@ -85,7 +85,9 @@ struct aq_hw_ops { + void (*destroy)(struct aq_hw_s *self); + + int (*get_hw_caps)(struct aq_hw_s *self, +- struct aq_hw_caps_s *aq_hw_caps); ++ struct aq_hw_caps_s *aq_hw_caps, ++ unsigned short device, ++ unsigned short subsystem_device); + + int (*hw_ring_tx_xmit)(struct aq_hw_s *self, struct aq_ring_s *aq_ring, + unsigned int frags); +diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c +index 483e97691eea..c93e5613d4cc 100644 +--- a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c ++++ b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c +@@ -222,7 +222,7 @@ static struct net_device *aq_nic_ndev_alloc(void) + + struct aq_nic_s *aq_nic_alloc_cold(const struct net_device_ops *ndev_ops, + const struct ethtool_ops *et_ops, +- struct device *dev, ++ struct pci_dev *pdev, + struct aq_pci_func_s *aq_pci_func, + unsigned int port, + const struct aq_hw_ops *aq_hw_ops) +@@ -242,7 +242,7 @@ struct aq_nic_s *aq_nic_alloc_cold(const struct net_device_ops *ndev_ops, + ndev->netdev_ops = 
ndev_ops; + ndev->ethtool_ops = et_ops; + +- SET_NETDEV_DEV(ndev, dev); ++ SET_NETDEV_DEV(ndev, &pdev->dev); + + ndev->if_port = port; + self->ndev = ndev; +@@ -254,7 +254,8 @@ struct aq_nic_s *aq_nic_alloc_cold(const struct net_device_ops *ndev_ops, + + self->aq_hw = self->aq_hw_ops.create(aq_pci_func, self->port, + &self->aq_hw_ops); +- err = self->aq_hw_ops.get_hw_caps(self->aq_hw, &self->aq_hw_caps); ++ err = self->aq_hw_ops.get_hw_caps(self->aq_hw, &self->aq_hw_caps, ++ pdev->device, pdev->subsystem_device); + if (err < 0) + goto err_exit; + +diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_nic.h b/drivers/net/ethernet/aquantia/atlantic/aq_nic.h +index 4309983acdd6..3c9f8db03d5f 100644 +--- a/drivers/net/ethernet/aquantia/atlantic/aq_nic.h ++++ b/drivers/net/ethernet/aquantia/atlantic/aq_nic.h +@@ -71,7 +71,7 @@ struct aq_nic_cfg_s { + + struct aq_nic_s *aq_nic_alloc_cold(const struct net_device_ops *ndev_ops, + const struct ethtool_ops *et_ops, +- struct device *dev, ++ struct pci_dev *pdev, + struct aq_pci_func_s *aq_pci_func, + unsigned int port, + const struct aq_hw_ops *aq_hw_ops); +diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c +index cadaa646c89f..58c29d04b186 100644 +--- a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c ++++ b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c +@@ -51,7 +51,8 @@ struct aq_pci_func_s *aq_pci_func_alloc(struct aq_hw_ops *aq_hw_ops, + pci_set_drvdata(pdev, self); + self->pdev = pdev; + +- err = aq_hw_ops->get_hw_caps(NULL, &self->aq_hw_caps); ++ err = aq_hw_ops->get_hw_caps(NULL, &self->aq_hw_caps, pdev->device, ++ pdev->subsystem_device); + if (err < 0) + goto err_exit; + +@@ -59,7 +60,7 @@ struct aq_pci_func_s *aq_pci_func_alloc(struct aq_hw_ops *aq_hw_ops, + + for (port = 0; port < self->ports; ++port) { + struct aq_nic_s *aq_nic = aq_nic_alloc_cold(ndev_ops, eth_ops, +- &pdev->dev, self, ++ pdev, self, + port, aq_hw_ops); + + if (!aq_nic) { +diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_a0.c b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_a0.c +index 07b3c49a16a4..b0abd187cead 100644 +--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_a0.c ++++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_a0.c +@@ -18,9 +18,20 @@ + #include "hw_atl_a0_internal.h" + + static int hw_atl_a0_get_hw_caps(struct aq_hw_s *self, +- struct aq_hw_caps_s *aq_hw_caps) ++ struct aq_hw_caps_s *aq_hw_caps, ++ unsigned short device, ++ unsigned short subsystem_device) + { + memcpy(aq_hw_caps, &hw_atl_a0_hw_caps_, sizeof(*aq_hw_caps)); ++ ++ if (device == HW_ATL_DEVICE_ID_D108 && subsystem_device == 0x0001) ++ aq_hw_caps->link_speed_msk &= ~HW_ATL_A0_RATE_10G; ++ ++ if (device == HW_ATL_DEVICE_ID_D109 && subsystem_device == 0x0001) { ++ aq_hw_caps->link_speed_msk &= ~HW_ATL_A0_RATE_10G; ++ aq_hw_caps->link_speed_msk &= ~HW_ATL_A0_RATE_5G; ++ } ++ + return 0; + } + +diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c +index ec68c20efcbd..36fddb199160 100644 +--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c ++++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_b0.c +@@ -16,11 +16,23 @@ + #include "hw_atl_utils.h" + #include "hw_atl_llh.h" + #include "hw_atl_b0_internal.h" ++#include "hw_atl_llh_internal.h" + + static int hw_atl_b0_get_hw_caps(struct aq_hw_s *self, +- struct aq_hw_caps_s *aq_hw_caps) ++ struct aq_hw_caps_s *aq_hw_caps, ++ unsigned 
short device, ++ unsigned short subsystem_device) + { + memcpy(aq_hw_caps, &hw_atl_b0_hw_caps_, sizeof(*aq_hw_caps)); ++ ++ if (device == HW_ATL_DEVICE_ID_D108 && subsystem_device == 0x0001) ++ aq_hw_caps->link_speed_msk &= ~HW_ATL_B0_RATE_10G; ++ ++ if (device == HW_ATL_DEVICE_ID_D109 && subsystem_device == 0x0001) { ++ aq_hw_caps->link_speed_msk &= ~HW_ATL_B0_RATE_10G; ++ aq_hw_caps->link_speed_msk &= ~HW_ATL_B0_RATE_5G; ++ } ++ + return 0; + } + +@@ -357,6 +369,7 @@ static int hw_atl_b0_hw_init(struct aq_hw_s *self, + }; + + int err = 0; ++ u32 val; + + self->aq_nic_cfg = aq_nic_cfg; + +@@ -374,6 +387,16 @@ static int hw_atl_b0_hw_init(struct aq_hw_s *self, + hw_atl_b0_hw_rss_set(self, &aq_nic_cfg->aq_rss); + hw_atl_b0_hw_rss_hash_set(self, &aq_nic_cfg->aq_rss); + ++ /* Force limit MRRS on RDM/TDM to 2K */ ++ val = aq_hw_read_reg(self, pci_reg_control6_adr); ++ aq_hw_write_reg(self, pci_reg_control6_adr, (val & ~0x707) | 0x404); ++ ++ /* TX DMA total request limit. B0 hardware is not capable of ++ * handling more than (8K-MRRS) incoming DMA data. ++ * Value 24 is in 256-byte units ++ */ ++ aq_hw_write_reg(self, tx_dma_total_req_limit_adr, 24); ++ + err = aq_hw_err_from_flags(self); + if (err < 0) + goto err_exit; +diff --git a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh_internal.h b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh_internal.h +index 5527fc0e5942..93450ec930e8 100644 +--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh_internal.h ++++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_llh_internal.h +@@ -2343,6 +2343,9 @@ + #define tx_dma_desc_base_addrmsw_adr(descriptor) \ + (0x00007c04u + (descriptor) * 0x40) + ++/* tx dma total request limit */ ++#define tx_dma_total_req_limit_adr 0x00007b20u ++ + /* tx interrupt moderation control register definitions + * Preprocessor definitions for TX Interrupt Moderation Control Register + * Base Address: 0x00008980 +@@ -2369,6 +2372,9 @@ + /* default value of bitfield reg_res_dsbl */ + #define pci_reg_res_dsbl_default 0x1 + ++/* PCI core control register */ ++#define pci_reg_control6_adr 0x1014u ++ + /* global microprocessor scratch pad definitions */ + #define glb_cpu_scratch_scp_adr(scratch_scp) (0x00000300u + (scratch_scp) * 0x4) + +diff --git a/drivers/net/ethernet/arc/emac_main.c b/drivers/net/ethernet/arc/emac_main.c +index 3241af1ce718..5b422be56165 100644 +--- a/drivers/net/ethernet/arc/emac_main.c ++++ b/drivers/net/ethernet/arc/emac_main.c +@@ -210,39 +210,48 @@ static int arc_emac_rx(struct net_device *ndev, int budget) + continue; + } + +- pktlen = info & LEN_MASK; +- stats->rx_packets++; +- stats->rx_bytes += pktlen; +- skb = rx_buff->skb; +- skb_put(skb, pktlen); +- skb->dev = ndev; +- skb->protocol = eth_type_trans(skb, ndev); +- +- dma_unmap_single(&ndev->dev, dma_unmap_addr(rx_buff, addr), +- dma_unmap_len(rx_buff, len), DMA_FROM_DEVICE); +- +- /* Prepare the BD for next cycle */ +- rx_buff->skb = netdev_alloc_skb_ip_align(ndev, +- EMAC_BUFFER_SIZE); +- if (unlikely(!rx_buff->skb)) { ++ /* Prepare the BD for the next cycle. Call netif_receive_skb() ++ * only if a new skb was allocated and mapped, to avoid holes ++ * in the RX fifo.
++ */ ++ skb = netdev_alloc_skb_ip_align(ndev, EMAC_BUFFER_SIZE); ++ if (unlikely(!skb)) { ++ if (net_ratelimit()) ++ netdev_err(ndev, "cannot allocate skb\n"); ++ /* Return ownership to EMAC */ ++ rxbd->info = cpu_to_le32(FOR_EMAC | EMAC_BUFFER_SIZE); + stats->rx_errors++; +- /* Because receive_skb is below, increment rx_dropped */ + stats->rx_dropped++; + continue; + } + +- /* receive_skb only if new skb was allocated to avoid holes */ +- netif_receive_skb(skb); +- +- addr = dma_map_single(&ndev->dev, (void *)rx_buff->skb->data, ++ addr = dma_map_single(&ndev->dev, (void *)skb->data, + EMAC_BUFFER_SIZE, DMA_FROM_DEVICE); + if (dma_mapping_error(&ndev->dev, addr)) { + if (net_ratelimit()) +- netdev_err(ndev, "cannot dma map\n"); +- dev_kfree_skb(rx_buff->skb); ++ netdev_err(ndev, "cannot map dma buffer\n"); ++ dev_kfree_skb(skb); ++ /* Return ownership to EMAC */ ++ rxbd->info = cpu_to_le32(FOR_EMAC | EMAC_BUFFER_SIZE); + stats->rx_errors++; ++ stats->rx_dropped++; + continue; + } ++ ++ /* unmap previously mapped skb */ ++ dma_unmap_single(&ndev->dev, dma_unmap_addr(rx_buff, addr), ++ dma_unmap_len(rx_buff, len), DMA_FROM_DEVICE); ++ ++ pktlen = info & LEN_MASK; ++ stats->rx_packets++; ++ stats->rx_bytes += pktlen; ++ skb_put(rx_buff->skb, pktlen); ++ rx_buff->skb->dev = ndev; ++ rx_buff->skb->protocol = eth_type_trans(rx_buff->skb, ndev); ++ ++ netif_receive_skb(rx_buff->skb); ++ ++ rx_buff->skb = skb; + dma_unmap_addr_set(rx_buff, addr, addr); + dma_unmap_len_set(rx_buff, len, EMAC_BUFFER_SIZE); + +diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c +index 1216c1f1e052..6465414dad74 100644 +--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c ++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c +@@ -3030,7 +3030,7 @@ int bnx2x_nic_unload(struct bnx2x *bp, int unload_mode, bool keep_link) + + del_timer_sync(&bp->timer); + +- if (IS_PF(bp)) { ++ if (IS_PF(bp) && !BP_NOMCP(bp)) { + /* Set ALWAYS_ALIVE bit in shmem */ + bp->fw_drv_pulse_wr_seq |= DRV_PULSE_ALWAYS_ALIVE; + bnx2x_drv_pulse(bp); +@@ -3116,7 +3116,7 @@ int bnx2x_nic_unload(struct bnx2x *bp, int unload_mode, bool keep_link) + bp->cnic_loaded = false; + + /* Clear driver version indication in shmem */ +- if (IS_PF(bp)) ++ if (IS_PF(bp) && !BP_NOMCP(bp)) + bnx2x_update_mng_version(bp); + + /* Check if there are pending parity attentions. If there are - set +diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c +index c12b4d3e946e..e855a271db48 100644 +--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c ++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c +@@ -9578,6 +9578,15 @@ static int bnx2x_init_shmem(struct bnx2x *bp) + + do { + bp->common.shmem_base = REG_RD(bp, MISC_REG_SHARED_MEM_ADDR); ++ ++ /* If we read all 0xFFs, it means we are in a PCI error state and ++ * should bail out to avoid crashes on the adapter's FW reads.
++ */ ++ if (bp->common.shmem_base == 0xFFFFFFFF) { ++ bp->flags |= NO_MCP_FLAG; ++ return -ENODEV; ++ } ++ + if (bp->common.shmem_base) { + val = SHMEM_RD(bp, validity_map[BP_PORT(bp)]); + if (val & SHR_MEM_VALIDITY_MB) +@@ -14315,7 +14324,10 @@ static pci_ers_result_t bnx2x_io_slot_reset(struct pci_dev *pdev) + BNX2X_ERR("IO slot reset --> driver unload\n"); + + /* MCP should have been reset; Need to wait for validity */ +- bnx2x_init_shmem(bp); ++ if (bnx2x_init_shmem(bp)) { ++ rtnl_unlock(); ++ return PCI_ERS_RESULT_DISCONNECT; ++ } + + if (IS_PF(bp) && SHMEM2_HAS(bp, drv_capabilities_flag)) { + u32 v; +diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c +index 5ee18660bc33..c9617675f934 100644 +--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c ++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c +@@ -70,7 +70,7 @@ static int bnxt_vf_ndo_prep(struct bnxt *bp, int vf_id) + netdev_err(bp->dev, "vf ndo called though sriov is disabled\n"); + return -EINVAL; + } +- if (vf_id >= bp->pf.max_vfs) { ++ if (vf_id >= bp->pf.active_vfs) { + netdev_err(bp->dev, "Invalid VF id %d\n", vf_id); + return -EINVAL; + } +diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c +index 7dd3d131043a..6a185344b378 100644 +--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c ++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c +@@ -327,7 +327,7 @@ static int bnxt_hwrm_cfa_flow_alloc(struct bnxt *bp, struct bnxt_tc_flow *flow, + } + + /* If all IP and L4 fields are wildcarded then this is an L2 flow */ +- if (is_wildcard(&l3_mask, sizeof(l3_mask)) && ++ if (is_wildcard(l3_mask, sizeof(*l3_mask)) && + is_wildcard(&flow->l4_mask, sizeof(flow->l4_mask))) { + flow_flags |= CFA_FLOW_ALLOC_REQ_FLAGS_FLOWTYPE_L2; + } else { +diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c +index aef3fcf2f5b9..48738eb27806 100644 +--- a/drivers/net/ethernet/broadcom/tg3.c ++++ b/drivers/net/ethernet/broadcom/tg3.c +@@ -10052,6 +10052,16 @@ static int tg3_reset_hw(struct tg3 *tp, bool reset_phy) + + tw32(GRC_MODE, tp->grc_mode | val); + ++ /* On one of the AMD platforms, MRRS is restricted to 4000 because of ++ * a south bridge limitation. As a workaround, the driver sets MRRS ++ * to 2048 instead of the default 4096. ++ */ ++ if (tp->pdev->subsystem_vendor == PCI_VENDOR_ID_DELL && ++ tp->pdev->subsystem_device == TG3PCI_SUBDEVICE_ID_DELL_5762) { ++ val = tr32(TG3PCI_DEV_STATUS_CTRL) & ~MAX_READ_REQ_MASK; ++ tw32(TG3PCI_DEV_STATUS_CTRL, val | MAX_READ_REQ_SIZE_2048); ++ } ++ + /* Setup the timer prescalar register. Clock is always 66Mhz.
*/ + val = tr32(GRC_MISC_CFG); + val &= ~0xff; +@@ -14229,7 +14239,8 @@ static int tg3_change_mtu(struct net_device *dev, int new_mtu) + */ + if (tg3_asic_rev(tp) == ASIC_REV_57766 || + tg3_asic_rev(tp) == ASIC_REV_5717 || +- tg3_asic_rev(tp) == ASIC_REV_5719) ++ tg3_asic_rev(tp) == ASIC_REV_5719 || ++ tg3_asic_rev(tp) == ASIC_REV_5720) + reset_phy = true; + + err = tg3_restart_hw(tp, reset_phy); +diff --git a/drivers/net/ethernet/broadcom/tg3.h b/drivers/net/ethernet/broadcom/tg3.h +index c2d02d02d1e6..b057f71aed48 100644 +--- a/drivers/net/ethernet/broadcom/tg3.h ++++ b/drivers/net/ethernet/broadcom/tg3.h +@@ -96,6 +96,7 @@ + #define TG3PCI_SUBDEVICE_ID_DELL_JAGUAR 0x0106 + #define TG3PCI_SUBDEVICE_ID_DELL_MERLOT 0x0109 + #define TG3PCI_SUBDEVICE_ID_DELL_SLIM_MERLOT 0x010a ++#define TG3PCI_SUBDEVICE_ID_DELL_5762 0x07f0 + #define TG3PCI_SUBVENDOR_ID_COMPAQ PCI_VENDOR_ID_COMPAQ + #define TG3PCI_SUBDEVICE_ID_COMPAQ_BANSHEE 0x007c + #define TG3PCI_SUBDEVICE_ID_COMPAQ_BANSHEE_2 0x009a +@@ -281,6 +282,9 @@ + #define TG3PCI_STD_RING_PROD_IDX 0x00000098 /* 64-bit */ + #define TG3PCI_RCV_RET_RING_CON_IDX 0x000000a0 /* 64-bit */ + /* 0xa8 --> 0xb8 unused */ ++#define TG3PCI_DEV_STATUS_CTRL 0x000000b4 ++#define MAX_READ_REQ_SIZE_2048 0x00004000 ++#define MAX_READ_REQ_MASK 0x00007000 + #define TG3PCI_DUAL_MAC_CTRL 0x000000b8 + #define DUAL_MAC_CTRL_CH_MASK 0x00000003 + #define DUAL_MAC_CTRL_ID 0x00000004 +diff --git a/drivers/net/ethernet/freescale/gianfar_ptp.c b/drivers/net/ethernet/freescale/gianfar_ptp.c +index 544114281ea7..9f8d4f8e57e3 100644 +--- a/drivers/net/ethernet/freescale/gianfar_ptp.c ++++ b/drivers/net/ethernet/freescale/gianfar_ptp.c +@@ -319,11 +319,10 @@ static int ptp_gianfar_adjtime(struct ptp_clock_info *ptp, s64 delta) + now = tmr_cnt_read(etsects); + now += delta; + tmr_cnt_write(etsects, now); ++ set_fipers(etsects); + + spin_unlock_irqrestore(&etsects->lock, flags); + +- set_fipers(etsects); +- + return 0; + } + +diff --git a/drivers/net/ethernet/intel/e1000/e1000.h b/drivers/net/ethernet/intel/e1000/e1000.h +index d7bdea79e9fa..8fd2458060a0 100644 +--- a/drivers/net/ethernet/intel/e1000/e1000.h ++++ b/drivers/net/ethernet/intel/e1000/e1000.h +@@ -331,7 +331,8 @@ struct e1000_adapter { + enum e1000_state_t { + __E1000_TESTING, + __E1000_RESETTING, +- __E1000_DOWN ++ __E1000_DOWN, ++ __E1000_DISABLED + }; + + #undef pr_fmt +diff --git a/drivers/net/ethernet/intel/e1000/e1000_main.c b/drivers/net/ethernet/intel/e1000/e1000_main.c +index 1982f7917a8d..3dd4aeb2706d 100644 +--- a/drivers/net/ethernet/intel/e1000/e1000_main.c ++++ b/drivers/net/ethernet/intel/e1000/e1000_main.c +@@ -945,7 +945,7 @@ static int e1000_init_hw_struct(struct e1000_adapter *adapter, + static int e1000_probe(struct pci_dev *pdev, const struct pci_device_id *ent) + { + struct net_device *netdev; +- struct e1000_adapter *adapter; ++ struct e1000_adapter *adapter = NULL; + struct e1000_hw *hw; + + static int cards_found; +@@ -955,6 +955,7 @@ static int e1000_probe(struct pci_dev *pdev, const struct pci_device_id *ent) + u16 tmp = 0; + u16 eeprom_apme_mask = E1000_EEPROM_APME; + int bars, need_ioport; ++ bool disable_dev = false; + + /* do not allocate ioport bars when not needed */ + need_ioport = e1000_is_need_ioport(pdev); +@@ -1259,11 +1260,13 @@ static int e1000_probe(struct pci_dev *pdev, const struct pci_device_id *ent) + iounmap(hw->ce4100_gbe_mdio_base_virt); + iounmap(hw->hw_addr); + err_ioremap: ++ disable_dev = !test_and_set_bit(__E1000_DISABLED, &adapter->flags); + free_netdev(netdev); + 
err_alloc_etherdev: + pci_release_selected_regions(pdev, bars); + err_pci_reg: +- pci_disable_device(pdev); ++ if (!adapter || disable_dev) ++ pci_disable_device(pdev); + return err; + } + +@@ -1281,6 +1284,7 @@ static void e1000_remove(struct pci_dev *pdev) + struct net_device *netdev = pci_get_drvdata(pdev); + struct e1000_adapter *adapter = netdev_priv(netdev); + struct e1000_hw *hw = &adapter->hw; ++ bool disable_dev; + + e1000_down_and_stop(adapter); + e1000_release_manageability(adapter); +@@ -1299,9 +1303,11 @@ static void e1000_remove(struct pci_dev *pdev) + iounmap(hw->flash_address); + pci_release_selected_regions(pdev, adapter->bars); + ++ disable_dev = !test_and_set_bit(__E1000_DISABLED, &adapter->flags); + free_netdev(netdev); + +- pci_disable_device(pdev); ++ if (disable_dev) ++ pci_disable_device(pdev); + } + + /** +@@ -5156,7 +5162,8 @@ static int __e1000_shutdown(struct pci_dev *pdev, bool *enable_wake) + if (netif_running(netdev)) + e1000_free_irq(adapter); + +- pci_disable_device(pdev); ++ if (!test_and_set_bit(__E1000_DISABLED, &adapter->flags)) ++ pci_disable_device(pdev); + + return 0; + } +@@ -5200,6 +5207,10 @@ static int e1000_resume(struct pci_dev *pdev) + pr_err("Cannot enable PCI device from suspend\n"); + return err; + } ++ ++ /* flush memory to make sure state is correct */ ++ smp_mb__before_atomic(); ++ clear_bit(__E1000_DISABLED, &adapter->flags); + pci_set_master(pdev); + + pci_enable_wake(pdev, PCI_D3hot, 0); +@@ -5274,7 +5285,9 @@ static pci_ers_result_t e1000_io_error_detected(struct pci_dev *pdev, + + if (netif_running(netdev)) + e1000_down(adapter); +- pci_disable_device(pdev); ++ ++ if (!test_and_set_bit(__E1000_DISABLED, &adapter->flags)) ++ pci_disable_device(pdev); + + /* Request a slot slot reset. */ + return PCI_ERS_RESULT_NEED_RESET; +@@ -5302,6 +5315,10 @@ static pci_ers_result_t e1000_io_slot_reset(struct pci_dev *pdev) + pr_err("Cannot re-enable PCI device after reset.\n"); + return PCI_ERS_RESULT_DISCONNECT; + } ++ ++ /* flush memory to make sure state is correct */ ++ smp_mb__before_atomic(); ++ clear_bit(__E1000_DISABLED, &adapter->flags); + pci_set_master(pdev); + + pci_enable_wake(pdev, PCI_D3hot, 0); +diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c +index b2cde9b16d82..b1cde1b051a4 100644 +--- a/drivers/net/ethernet/intel/i40e/i40e_main.c ++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c +@@ -1553,11 +1553,18 @@ static int i40e_set_mac(struct net_device *netdev, void *p) + else + netdev_info(netdev, "set new mac address %pM\n", addr->sa_data); + ++ /* Copy the address first, so that we avoid a possible race with ++ * .set_rx_mode(). If we copy after changing the address in the filter ++ * list, we might open ourselves to a narrow race window where ++ * .set_rx_mode could delete our dev_addr filter and prevent traffic ++ * from passing. ++ */ ++ ether_addr_copy(netdev->dev_addr, addr->sa_data); ++ + spin_lock_bh(&vsi->mac_filter_hash_lock); + i40e_del_mac_filter(vsi, netdev->dev_addr); + i40e_add_mac_filter(vsi, addr->sa_data); + spin_unlock_bh(&vsi->mac_filter_hash_lock); +- ether_addr_copy(netdev->dev_addr, addr->sa_data); + if (vsi->type == I40E_VSI_MAIN) { + i40e_status ret; + +@@ -1739,6 +1746,14 @@ static int i40e_addr_unsync(struct net_device *netdev, const u8 *addr) + struct i40e_netdev_priv *np = netdev_priv(netdev); + struct i40e_vsi *vsi = np->vsi; + ++ /* Under some circumstances, we might receive a request to delete ++ * our own device address from our uc list. 
Because we store the ++ * device address in the VSI's MAC/VLAN filter list, we need to ignore ++ * such requests and not delete our device address from this list. ++ */ ++ if (ether_addr_equal(addr, netdev->dev_addr)) ++ return 0; ++ + i40e_del_mac_filter(vsi, addr); + + return 0; +diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c +index 3c07ff171ddc..542c00b1c823 100644 +--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c ++++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c +@@ -3048,10 +3048,30 @@ bool __i40e_chk_linearize(struct sk_buff *skb) + /* Walk through fragments adding latest fragment, testing it, and + * then removing stale fragments from the sum. + */ +- stale = &skb_shinfo(skb)->frags[0]; +- for (;;) { ++ for (stale = &skb_shinfo(skb)->frags[0];; stale++) { ++ int stale_size = skb_frag_size(stale); ++ + sum += skb_frag_size(frag++); + ++ /* The stale fragment may present us with a smaller ++ * descriptor than the actual fragment size. To account ++ * for that we need to remove all the data on the front and ++ * figure out what the remainder would be in the last ++ * descriptor associated with the fragment. ++ */ ++ if (stale_size > I40E_MAX_DATA_PER_TXD) { ++ int align_pad = -(stale->page_offset) & ++ (I40E_MAX_READ_REQ_SIZE - 1); ++ ++ sum -= align_pad; ++ stale_size -= align_pad; ++ ++ do { ++ sum -= I40E_MAX_DATA_PER_TXD_ALIGNED; ++ stale_size -= I40E_MAX_DATA_PER_TXD_ALIGNED; ++ } while (stale_size > I40E_MAX_DATA_PER_TXD); ++ } ++ + /* if sum is negative we failed to make sufficient progress */ + if (sum < 0) + return true; +@@ -3059,7 +3079,7 @@ bool __i40e_chk_linearize(struct sk_buff *skb) + if (!nr_frags--) + break; + +- sum -= skb_frag_size(stale++); ++ sum -= stale_size; + } + + return false; +diff --git a/drivers/net/ethernet/intel/i40evf/i40e_txrx.c b/drivers/net/ethernet/intel/i40evf/i40e_txrx.c +index 07a4e6e13925..7368b0dc3af8 100644 +--- a/drivers/net/ethernet/intel/i40evf/i40e_txrx.c ++++ b/drivers/net/ethernet/intel/i40evf/i40e_txrx.c +@@ -2014,10 +2014,30 @@ bool __i40evf_chk_linearize(struct sk_buff *skb) + /* Walk through fragments adding latest fragment, testing it, and + * then removing stale fragments from the sum. + */ +- stale = &skb_shinfo(skb)->frags[0]; +- for (;;) { ++ for (stale = &skb_shinfo(skb)->frags[0];; stale++) { ++ int stale_size = skb_frag_size(stale); ++ + sum += skb_frag_size(frag++); + ++ /* The stale fragment may present us with a smaller ++ * descriptor than the actual fragment size. To account ++ * for that we need to remove all the data on the front and ++ * figure out what the remainder would be in the last ++ * descriptor associated with the fragment. 
++ */ ++ if (stale_size > I40E_MAX_DATA_PER_TXD) { ++ int align_pad = -(stale->page_offset) & ++ (I40E_MAX_READ_REQ_SIZE - 1); ++ ++ sum -= align_pad; ++ stale_size -= align_pad; ++ ++ do { ++ sum -= I40E_MAX_DATA_PER_TXD_ALIGNED; ++ stale_size -= I40E_MAX_DATA_PER_TXD_ALIGNED; ++ } while (stale_size > I40E_MAX_DATA_PER_TXD); ++ } ++ + /* if sum is negative we failed to make sufficient progress */ + if (sum < 0) + return true; +@@ -2025,7 +2045,7 @@ bool __i40evf_chk_linearize(struct sk_buff *skb) + if (!nr_frags--) + break; + +- sum -= skb_frag_size(stale++); ++ sum -= stale_size; + } + + return false; +diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c +index 5e81a7263654..3fd71cf5cd60 100644 +--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c ++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c +@@ -1959,11 +1959,12 @@ static int mtk_hw_init(struct mtk_eth *eth) + /* set GE2 TUNE */ + regmap_write(eth->pctl, GPIO_BIAS_CTRL, 0x0); + +- /* GE1, Force 1000M/FD, FC ON */ +- mtk_w32(eth, MAC_MCR_FIXED_LINK, MTK_MAC_MCR(0)); +- +- /* GE2, Force 1000M/FD, FC ON */ +- mtk_w32(eth, MAC_MCR_FIXED_LINK, MTK_MAC_MCR(1)); ++ /* Set linkdown as the default for each GMAC. Its own MCR will be set ++ * up with a more appropriate value when the mtk_phy_link_adjust ++ * callback is invoked. ++ */ ++ for (i = 0; i < MTK_MAC_COUNT; i++) ++ mtk_w32(eth, 0, MTK_MAC_MCR(i)); + + /* Indicates CDM to parse the MTK special tag from CPU + * which also is working out for untag packets. +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c b/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c +index 51c4cc00a186..9d64d0759ee9 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c +@@ -259,6 +259,7 @@ int mlx5e_dcbnl_ieee_setets_core(struct mlx5e_priv *priv, struct ieee_ets *ets) + static int mlx5e_dbcnl_validate_ets(struct net_device *netdev, + struct ieee_ets *ets) + { ++ bool have_ets_tc = false; + int bw_sum = 0; + int i; + +@@ -273,11 +274,14 @@ static int mlx5e_dbcnl_validate_ets(struct net_device *netdev, + } + + /* Validate Bandwidth Sum */ +- for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) +- if (ets->tc_tsa[i] == IEEE_8021QAZ_TSA_ETS) ++ for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) { ++ if (ets->tc_tsa[i] == IEEE_8021QAZ_TSA_ETS) { ++ have_ets_tc = true; + bw_sum += ets->tc_tx_bw[i]; ++ } ++ } + +- if (bw_sum != 0 && bw_sum != 100) { ++ if (have_ets_tc && bw_sum != 100) { + netdev_err(netdev, + "Failed to validate ETS: BW sum is illegal\n"); + return -EINVAL; +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c +index fc606bfd1d6e..eb91de86202b 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c +@@ -776,7 +776,7 @@ int mlx5_start_eqs(struct mlx5_core_dev *dev) + return err; + } + +-int mlx5_stop_eqs(struct mlx5_core_dev *dev) ++void mlx5_stop_eqs(struct mlx5_core_dev *dev) + { + struct mlx5_eq_table *table = &dev->priv.eq_table; + int err; +@@ -785,22 +785,26 @@ int mlx5_stop_eqs(struct mlx5_core_dev *dev) + if (MLX5_CAP_GEN(dev, pg)) { + err = mlx5_destroy_unmap_eq(dev, &table->pfault_eq); + if (err) +- return err; ++ mlx5_core_err(dev, "failed to destroy page fault eq, err(%d)\n", ++ err); + } + #endif + + err = mlx5_destroy_unmap_eq(dev, &table->pages_eq); + if (err) +- return err; ++ mlx5_core_err(dev, "failed to destroy pages eq, err(%d)\n", ++ err); + +- 
mlx5_destroy_unmap_eq(dev, &table->async_eq); ++ err = mlx5_destroy_unmap_eq(dev, &table->async_eq); ++ if (err) ++ mlx5_core_err(dev, "failed to destroy async eq, err(%d)\n", ++ err); + mlx5_cmd_use_polling(dev); + + err = mlx5_destroy_unmap_eq(dev, &table->cmd_eq); + if (err) +- mlx5_cmd_use_events(dev); +- +- return err; ++ mlx5_core_err(dev, "failed to destroy command eq, err(%d)\n", ++ err); + } + + int mlx5_core_eq_query(struct mlx5_core_dev *dev, struct mlx5_eq *eq, +diff --git a/drivers/net/ethernet/mellanox/mlxsw/pci.c b/drivers/net/ethernet/mellanox/mlxsw/pci.c +index 23f7d828cf67..6ef20e5cc77d 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/pci.c ++++ b/drivers/net/ethernet/mellanox/mlxsw/pci.c +@@ -1643,7 +1643,12 @@ static int mlxsw_pci_sw_reset(struct mlxsw_pci *mlxsw_pci, + return 0; + } + +- wmb(); /* reset needs to be written before we read control register */ ++ /* Reset needs to be written before we read control register, and ++ * we must wait for the HW to become responsive once again ++ */ ++ wmb(); ++ msleep(MLXSW_PCI_SW_RESET_WAIT_MSECS); ++ + end = jiffies + msecs_to_jiffies(MLXSW_PCI_SW_RESET_TIMEOUT_MSECS); + do { + u32 val = mlxsw_pci_read32(mlxsw_pci, FW_READY); +diff --git a/drivers/net/ethernet/mellanox/mlxsw/pci_hw.h b/drivers/net/ethernet/mellanox/mlxsw/pci_hw.h +index a6441208e9d9..fb082ad21b00 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/pci_hw.h ++++ b/drivers/net/ethernet/mellanox/mlxsw/pci_hw.h +@@ -59,6 +59,7 @@ + #define MLXSW_PCI_SW_RESET 0xF0010 + #define MLXSW_PCI_SW_RESET_RST_BIT BIT(0) + #define MLXSW_PCI_SW_RESET_TIMEOUT_MSECS 5000 ++#define MLXSW_PCI_SW_RESET_WAIT_MSECS 100 + #define MLXSW_PCI_FW_READY 0xA1844 + #define MLXSW_PCI_FW_READY_MASK 0xFFFF + #define MLXSW_PCI_FW_READY_MAGIC 0x5E +diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c +index e118b5f23996..8d53a593fb27 100644 +--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c ++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c +@@ -568,6 +568,7 @@ nfp_net_aux_irq_request(struct nfp_net *nn, u32 ctrl_offset, + return err; + } + nn_writeb(nn, ctrl_offset, entry->entry); ++ nfp_net_irq_unmask(nn, entry->entry); + + return 0; + } +@@ -582,6 +583,7 @@ static void nfp_net_aux_irq_free(struct nfp_net *nn, u32 ctrl_offset, + unsigned int vector_idx) + { + nn_writeb(nn, ctrl_offset, 0xff); ++ nn_pci_flush(nn); + free_irq(nn->irq_entries[vector_idx].vector, nn); + } + +diff --git a/drivers/net/ethernet/stmicro/stmmac/common.h b/drivers/net/ethernet/stmicro/stmmac/common.h +index e82b4b70b7be..627fec210e2f 100644 +--- a/drivers/net/ethernet/stmicro/stmmac/common.h ++++ b/drivers/net/ethernet/stmicro/stmmac/common.h +@@ -409,7 +409,7 @@ struct stmmac_desc_ops { + /* get timestamp value */ + u64(*get_timestamp) (void *desc, u32 ats); + /* get rx timestamp status */ +- int (*get_rx_timestamp_status) (void *desc, u32 ats); ++ int (*get_rx_timestamp_status)(void *desc, void *next_desc, u32 ats); + /* Display ring */ + void (*display_ring)(void *head, unsigned int size, bool rx); + /* set MSS via context descriptor */ +diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.c +index 4b286e27c4ca..7e089bf906b4 100644 +--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.c ++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.c +@@ -258,7 +258,8 @@ static int dwmac4_rx_check_timestamp(void *desc) + return ret; + } + +-static int 
dwmac4_wrback_get_rx_timestamp_status(void *desc, u32 ats) ++static int dwmac4_wrback_get_rx_timestamp_status(void *desc, void *next_desc, ++ u32 ats) + { + struct dma_desc *p = (struct dma_desc *)desc; + int ret = -EINVAL; +@@ -270,7 +271,7 @@ static int dwmac4_wrback_get_rx_timestamp_status(void *desc, u32 ats) + + /* Check if timestamp is OK from context descriptor */ + do { +- ret = dwmac4_rx_check_timestamp(desc); ++ ret = dwmac4_rx_check_timestamp(next_desc); + if (ret < 0) + goto exit; + i++; +diff --git a/drivers/net/ethernet/stmicro/stmmac/enh_desc.c b/drivers/net/ethernet/stmicro/stmmac/enh_desc.c +index 7546b3664113..2a828a312814 100644 +--- a/drivers/net/ethernet/stmicro/stmmac/enh_desc.c ++++ b/drivers/net/ethernet/stmicro/stmmac/enh_desc.c +@@ -400,7 +400,8 @@ static u64 enh_desc_get_timestamp(void *desc, u32 ats) + return ns; + } + +-static int enh_desc_get_rx_timestamp_status(void *desc, u32 ats) ++static int enh_desc_get_rx_timestamp_status(void *desc, void *next_desc, ++ u32 ats) + { + if (ats) { + struct dma_extended_desc *p = (struct dma_extended_desc *)desc; +diff --git a/drivers/net/ethernet/stmicro/stmmac/norm_desc.c b/drivers/net/ethernet/stmicro/stmmac/norm_desc.c +index f817f8f36569..db4cee57bb24 100644 +--- a/drivers/net/ethernet/stmicro/stmmac/norm_desc.c ++++ b/drivers/net/ethernet/stmicro/stmmac/norm_desc.c +@@ -265,7 +265,7 @@ static u64 ndesc_get_timestamp(void *desc, u32 ats) + return ns; + } + +-static int ndesc_get_rx_timestamp_status(void *desc, u32 ats) ++static int ndesc_get_rx_timestamp_status(void *desc, void *next_desc, u32 ats) + { + struct dma_desc *p = (struct dma_desc *)desc; + +diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c +index 721b61655261..08c19ebd5306 100644 +--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c ++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c +@@ -34,6 +34,7 @@ static u32 stmmac_config_sub_second_increment(void __iomem *ioaddr, + { + u32 value = readl(ioaddr + PTP_TCR); + unsigned long data; ++ u32 reg_value; + + /* For GMAC3.x, 4.x versions, convert the ptp_clock to nano second + * formula = (1/ptp_clock) * 1000000000 +@@ -50,10 +51,11 @@ static u32 stmmac_config_sub_second_increment(void __iomem *ioaddr, + + data &= PTP_SSIR_SSINC_MASK; + ++ reg_value = data; + if (gmac4) +- data = data << GMAC4_PTP_SSIR_SSINC_SHIFT; ++ reg_value <<= GMAC4_PTP_SSIR_SSINC_SHIFT; + +- writel(data, ioaddr + PTP_SSIR); ++ writel(reg_value, ioaddr + PTP_SSIR); + + return data; + } +diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +index 0ad12c81a9e4..d0cc73795056 100644 +--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c ++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +@@ -489,7 +489,7 @@ static void stmmac_get_rx_hwtstamp(struct stmmac_priv *priv, struct dma_desc *p, + desc = np; + + /* Check if timestamp is available */ +- if (priv->hw->desc->get_rx_timestamp_status(desc, priv->adv_ts)) { ++ if (priv->hw->desc->get_rx_timestamp_status(p, np, priv->adv_ts)) { + ns = priv->hw->desc->get_timestamp(desc, priv->adv_ts); + netdev_dbg(priv->dev, "get valid RX hw timestamp %llu\n", ns); + shhwtstamp = skb_hwtstamps(skb); +diff --git a/drivers/net/macvlan.c b/drivers/net/macvlan.c +index fb1c9e095d0c..176fc0906bfe 100644 +--- a/drivers/net/macvlan.c ++++ b/drivers/net/macvlan.c +@@ -1441,9 +1441,14 @@ int macvlan_common_newlink(struct net *src_net, struct net_device *dev, + 
return 0; + + unregister_netdev: ++ /* macvlan_uninit would free the macvlan port */ + unregister_netdevice(dev); ++ return err; + destroy_macvlan_port: +- if (create) ++ /* the macvlan port may be freed by macvlan_uninit when registration ++ * fails, so we destroy the macvlan port only when it's still valid. ++ */ ++ if (create && macvlan_port_get_rtnl(dev)) + macvlan_port_destroy(port->dev); + return err; + } +diff --git a/drivers/net/phy/mdio-sun4i.c b/drivers/net/phy/mdio-sun4i.c +index 135296508a7e..6425ce04d3f9 100644 +--- a/drivers/net/phy/mdio-sun4i.c ++++ b/drivers/net/phy/mdio-sun4i.c +@@ -118,8 +118,10 @@ static int sun4i_mdio_probe(struct platform_device *pdev) + + data->regulator = devm_regulator_get(&pdev->dev, "phy"); + if (IS_ERR(data->regulator)) { +- if (PTR_ERR(data->regulator) == -EPROBE_DEFER) +- return -EPROBE_DEFER; ++ if (PTR_ERR(data->regulator) == -EPROBE_DEFER) { ++ ret = -EPROBE_DEFER; ++ goto err_out_free_mdiobus; ++ } + + dev_info(&pdev->dev, "no regulator found\n"); + data->regulator = NULL; +diff --git a/drivers/net/phy/mdio-xgene.c b/drivers/net/phy/mdio-xgene.c +index bfd3090fb055..07c6048200c6 100644 +--- a/drivers/net/phy/mdio-xgene.c ++++ b/drivers/net/phy/mdio-xgene.c +@@ -194,8 +194,11 @@ static int xgene_mdio_reset(struct xgene_mdio_pdata *pdata) + } + + ret = xgene_enet_ecc_init(pdata); +- if (ret) ++ if (ret) { ++ if (pdata->dev->of_node) ++ clk_disable_unprepare(pdata->clk); + return ret; ++ } + xgene_gmac_reset(pdata); + + return 0; +@@ -388,8 +391,10 @@ static int xgene_mdio_probe(struct platform_device *pdev) + return ret; + + mdio_bus = mdiobus_alloc(); +- if (!mdio_bus) +- return -ENOMEM; ++ if (!mdio_bus) { ++ ret = -ENOMEM; ++ goto out_clk; ++ } + + mdio_bus->name = "APM X-Gene MDIO bus"; + +@@ -418,7 +423,7 @@ static int xgene_mdio_probe(struct platform_device *pdev) + mdio_bus->phy_mask = ~0; + ret = mdiobus_register(mdio_bus); + if (ret) +- goto out; ++ goto out_mdiobus; + + acpi_walk_namespace(ACPI_TYPE_DEVICE, ACPI_HANDLE(dev), 1, + acpi_register_phy, NULL, mdio_bus, NULL); +@@ -426,16 +431,20 @@ static int xgene_mdio_probe(struct platform_device *pdev) + } + + if (ret) +- goto out; ++ goto out_mdiobus; + + pdata->mdio_bus = mdio_bus; + xgene_mdio_status = true; + + return 0; + +-out: ++out_mdiobus: + mdiobus_free(mdio_bus); + ++out_clk: ++ if (dev->of_node) ++ clk_disable_unprepare(pdata->clk); ++ + return ret; + } + +diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c +index 8d9f02b7a71f..b1632294174f 100644 +--- a/drivers/net/usb/qmi_wwan.c ++++ b/drivers/net/usb/qmi_wwan.c +@@ -1100,6 +1100,7 @@ static const struct usb_device_id products[] = { + {QMI_FIXED_INTF(0x05c6, 0x9084, 4)}, + {QMI_FIXED_INTF(0x05c6, 0x920d, 0)}, + {QMI_FIXED_INTF(0x05c6, 0x920d, 5)}, ++ {QMI_QUIRK_SET_DTR(0x05c6, 0x9625, 4)}, /* YUGA CLM920-NC5 */ + {QMI_FIXED_INTF(0x0846, 0x68a2, 8)}, + {QMI_FIXED_INTF(0x12d1, 0x140c, 1)}, /* Huawei E173 */ + {QMI_FIXED_INTF(0x12d1, 0x14ac, 1)}, /* Huawei E1820 */ +@@ -1211,6 +1212,7 @@ static const struct usb_device_id products[] = { + {QMI_FIXED_INTF(0x2357, 0x9000, 4)}, /* TP-LINK MA260 */ + {QMI_QUIRK_SET_DTR(0x1bc7, 0x1040, 2)}, /* Telit LE922A */ + {QMI_FIXED_INTF(0x1bc7, 0x1100, 3)}, /* Telit ME910 */ ++ {QMI_FIXED_INTF(0x1bc7, 0x1101, 3)}, /* Telit ME910 dual modem */ + {QMI_FIXED_INTF(0x1bc7, 0x1200, 5)}, /* Telit LE920 */ + {QMI_QUIRK_SET_DTR(0x1bc7, 0x1201, 2)}, /* Telit LE920, LE920A4 */ + {QMI_FIXED_INTF(0x1c9e, 0x9801, 3)}, /* Telewell TW-3G HSPA+ */ +diff --git a/drivers/net/vxlan.c 
b/drivers/net/vxlan.c +index 9e9202b50e73..bb44f0c6891f 100644 +--- a/drivers/net/vxlan.c ++++ b/drivers/net/vxlan.c +@@ -2155,6 +2155,13 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev, + } + + ndst = &rt->dst; ++ if (skb_dst(skb)) { ++ int mtu = dst_mtu(ndst) - VXLAN_HEADROOM; ++ ++ skb_dst(skb)->ops->update_pmtu(skb_dst(skb), NULL, ++ skb, mtu); ++ } ++ + tos = ip_tunnel_ecn_encap(tos, old_iph, skb); + ttl = ttl ? : ip4_dst_hoplimit(&rt->dst); + err = vxlan_build_skb(skb, ndst, sizeof(struct iphdr), +@@ -2190,6 +2197,13 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev, + goto out_unlock; + } + ++ if (skb_dst(skb)) { ++ int mtu = dst_mtu(ndst) - VXLAN6_HEADROOM; ++ ++ skb_dst(skb)->ops->update_pmtu(skb_dst(skb), NULL, ++ skb, mtu); ++ } ++ + tos = ip_tunnel_ecn_encap(tos, old_iph, skb); + ttl = ttl ? : ip6_dst_hoplimit(ndst); + skb_scrub_packet(skb, xnet); +diff --git a/drivers/net/wireless/ath/wcn36xx/main.c b/drivers/net/wireless/ath/wcn36xx/main.c +index b83f01d6e3dd..af37c19dbfd7 100644 +--- a/drivers/net/wireless/ath/wcn36xx/main.c ++++ b/drivers/net/wireless/ath/wcn36xx/main.c +@@ -384,6 +384,18 @@ static int wcn36xx_config(struct ieee80211_hw *hw, u32 changed) + } + } + ++ if (changed & IEEE80211_CONF_CHANGE_PS) { ++ list_for_each_entry(tmp, &wcn->vif_list, list) { ++ vif = wcn36xx_priv_to_vif(tmp); ++ if (hw->conf.flags & IEEE80211_CONF_PS) { ++ if (vif->bss_conf.ps) /* ps allowed ? */ ++ wcn36xx_pmc_enter_bmps_state(wcn, vif); ++ } else { ++ wcn36xx_pmc_exit_bmps_state(wcn, vif); ++ } ++ } ++ } ++ + mutex_unlock(&wcn->conf_mutex); + + return 0; +@@ -747,17 +759,6 @@ static void wcn36xx_bss_info_changed(struct ieee80211_hw *hw, + vif_priv->dtim_period = bss_conf->dtim_period; + } + +- if (changed & BSS_CHANGED_PS) { +- wcn36xx_dbg(WCN36XX_DBG_MAC, +- "mac bss PS set %d\n", +- bss_conf->ps); +- if (bss_conf->ps) { +- wcn36xx_pmc_enter_bmps_state(wcn, vif); +- } else { +- wcn36xx_pmc_exit_bmps_state(wcn, vif); +- } +- } +- + if (changed & BSS_CHANGED_BSSID) { + wcn36xx_dbg(WCN36XX_DBG_MAC, "mac bss changed_bssid %pM\n", + bss_conf->bssid); +diff --git a/drivers/net/wireless/ath/wcn36xx/pmc.c b/drivers/net/wireless/ath/wcn36xx/pmc.c +index 589fe5f70971..1976b80c235f 100644 +--- a/drivers/net/wireless/ath/wcn36xx/pmc.c ++++ b/drivers/net/wireless/ath/wcn36xx/pmc.c +@@ -45,8 +45,10 @@ int wcn36xx_pmc_exit_bmps_state(struct wcn36xx *wcn, + struct wcn36xx_vif *vif_priv = wcn36xx_vif_to_priv(vif); + + if (WCN36XX_BMPS != vif_priv->pw_state) { +- wcn36xx_err("Not in BMPS mode, no need to exit from BMPS mode!\n"); +- return -EINVAL; ++ /* Unbalanced call or last BMPS enter failed */ ++ wcn36xx_dbg(WCN36XX_DBG_PMC, ++ "Not in BMPS mode, no need to exit\n"); ++ return -EALREADY; + } + wcn36xx_smd_exit_bmps(wcn, vif); + vif_priv->pw_state = WCN36XX_FULL_POWER; +diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c +index 052e67bce6b3..710efe7b65f9 100644 +--- a/drivers/net/wireless/mac80211_hwsim.c ++++ b/drivers/net/wireless/mac80211_hwsim.c +@@ -3220,7 +3220,7 @@ static int hwsim_get_radio_nl(struct sk_buff *msg, struct genl_info *info) + if (!net_eq(wiphy_net(data->hw->wiphy), genl_info_net(info))) + continue; + +- skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL); ++ skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_ATOMIC); + if (!skb) { + res = -ENOMEM; + goto out_err; +diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c +index 391432e2725d..c980cdbd6e53 100644 +--- a/drivers/net/xen-netfront.c 
++++ b/drivers/net/xen-netfront.c +@@ -1326,6 +1326,7 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev) + + netif_carrier_off(netdev); + ++ xenbus_switch_state(dev, XenbusStateInitialising); + return netdev; + + exit: +diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c +index 0655f45643d9..dd956311a85a 100644 +--- a/drivers/nvme/host/core.c ++++ b/drivers/nvme/host/core.c +@@ -1515,7 +1515,8 @@ static void nvme_set_queue_limits(struct nvme_ctrl *ctrl, + blk_queue_max_hw_sectors(q, ctrl->max_hw_sectors); + blk_queue_max_segments(q, min_t(u32, max_segments, USHRT_MAX)); + } +- if (ctrl->quirks & NVME_QUIRK_STRIPE_SIZE) ++ if ((ctrl->quirks & NVME_QUIRK_STRIPE_SIZE) && ++ is_power_of_2(ctrl->max_hw_sectors)) + blk_queue_chunk_sectors(q, ctrl->max_hw_sectors); + blk_queue_virt_boundary(q, ctrl->page_size - 1); + if (ctrl->vwc & NVME_CTRL_VWC_PRESENT) +diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c +index 555c976cc2ee..8cd42544c90e 100644 +--- a/drivers/nvme/host/fabrics.c ++++ b/drivers/nvme/host/fabrics.c +@@ -74,6 +74,7 @@ static struct nvmf_host *nvmf_host_default(void) + return NULL; + + kref_init(&host->ref); ++ uuid_gen(&host->id); + snprintf(host->nqn, NVMF_NQN_SIZE, + "nqn.2014-08.org.nvmexpress:uuid:%pUb", &host->id); + +diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c +index 3148d760d825..7deb7b5d8683 100644 +--- a/drivers/nvme/host/fc.c ++++ b/drivers/nvme/host/fc.c +@@ -2876,7 +2876,6 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts, + + /* initiate nvme ctrl ref counting teardown */ + nvme_uninit_ctrl(&ctrl->ctrl); +- nvme_put_ctrl(&ctrl->ctrl); + + /* Remove core ctrl ref. */ + nvme_put_ctrl(&ctrl->ctrl); +diff --git a/drivers/of/of_mdio.c b/drivers/of/of_mdio.c +index 98258583abb0..8c1819230ed2 100644 +--- a/drivers/of/of_mdio.c ++++ b/drivers/of/of_mdio.c +@@ -228,7 +228,12 @@ int of_mdiobus_register(struct mii_bus *mdio, struct device_node *np) + rc = of_mdiobus_register_phy(mdio, child, addr); + else + rc = of_mdiobus_register_device(mdio, child, addr); +- if (rc) ++ ++ if (rc == -ENODEV) ++ dev_err(&mdio->dev, ++ "MDIO device at address %d is missing.\n", ++ addr); ++ else if (rc) + goto unregister; + } + +@@ -252,7 +257,7 @@ int of_mdiobus_register(struct mii_bus *mdio, struct device_node *np) + + if (of_mdiobus_child_is_phy(child)) { + rc = of_mdiobus_register_phy(mdio, child, addr); +- if (rc) ++ if (rc && rc != -ENODEV) + goto unregister; + } + } +diff --git a/drivers/phy/motorola/phy-cpcap-usb.c b/drivers/phy/motorola/phy-cpcap-usb.c +index accaaaccb662..6601ad0dfb3a 100644 +--- a/drivers/phy/motorola/phy-cpcap-usb.c ++++ b/drivers/phy/motorola/phy-cpcap-usb.c +@@ -310,7 +310,7 @@ static int cpcap_usb_init_irq(struct platform_device *pdev, + int irq, error; + + irq = platform_get_irq_byname(pdev, name); +- if (!irq) ++ if (irq < 0) + return -ENODEV; + + error = devm_request_threaded_irq(ddata->dev, irq, NULL, +diff --git a/drivers/s390/block/dasd_3990_erp.c b/drivers/s390/block/dasd_3990_erp.c +index c94b606e0df8..ee14d8e45c97 100644 +--- a/drivers/s390/block/dasd_3990_erp.c ++++ b/drivers/s390/block/dasd_3990_erp.c +@@ -2803,6 +2803,16 @@ dasd_3990_erp_action(struct dasd_ccw_req * cqr) + erp = dasd_3990_erp_handle_match_erp(cqr, erp); + } + ++ ++ /* ++ * For path verification work we need to stick with the path that was ++ * originally chosen so that the per path configuration data is ++ * assigned correctly. 
++ */ ++ if (test_bit(DASD_CQR_VERIFY_PATH, &erp->flags) && cqr->lpm) { ++ erp->lpm = cqr->lpm; ++ } ++ + if (device->features & DASD_FEATURE_ERPLOG) { + /* print current erp_chain */ + dev_err(&device->cdev->dev, +diff --git a/drivers/scsi/aacraid/aacraid.h b/drivers/scsi/aacraid/aacraid.h +index 403a639574e5..b0b290f7b8dc 100644 +--- a/drivers/scsi/aacraid/aacraid.h ++++ b/drivers/scsi/aacraid/aacraid.h +@@ -1724,6 +1724,7 @@ struct aac_dev + #define FIB_CONTEXT_FLAG_NATIVE_HBA (0x00000010) + #define FIB_CONTEXT_FLAG_NATIVE_HBA_TMF (0x00000020) + #define FIB_CONTEXT_FLAG_SCSI_CMD (0x00000040) ++#define FIB_CONTEXT_FLAG_EH_RESET (0x00000080) + + /* + * Define the command values +diff --git a/drivers/scsi/aacraid/linit.c b/drivers/scsi/aacraid/linit.c +index c9252b138c1f..509fe23fafe1 100644 +--- a/drivers/scsi/aacraid/linit.c ++++ b/drivers/scsi/aacraid/linit.c +@@ -1037,7 +1037,7 @@ static int aac_eh_bus_reset(struct scsi_cmnd* cmd) + info = &aac->hba_map[bus][cid]; + if (bus >= AAC_MAX_BUSES || cid >= AAC_MAX_TARGETS || + info->devtype != AAC_DEVTYPE_NATIVE_RAW) { +- fib->flags |= FIB_CONTEXT_FLAG_TIMED_OUT; ++ fib->flags |= FIB_CONTEXT_FLAG_EH_RESET; + cmd->SCp.phase = AAC_OWNER_ERROR_HANDLER; + } + } +diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c +index c17ccb913fde..a3e480e7a257 100644 +--- a/drivers/scsi/storvsc_drv.c ++++ b/drivers/scsi/storvsc_drv.c +@@ -952,10 +952,11 @@ static void storvsc_handle_error(struct vmscsi_request *vm_srb, + case TEST_UNIT_READY: + break; + default: +- set_host_byte(scmnd, DID_TARGET_FAILURE); ++ set_host_byte(scmnd, DID_ERROR); + } + break; + case SRB_STATUS_INVALID_LUN: ++ set_host_byte(scmnd, DID_NO_CONNECT); + do_work = true; + process_err_fn = storvsc_remove_lun; + break; +diff --git a/drivers/spi/spi-atmel.c b/drivers/spi/spi-atmel.c +index f95da364c283..669470971023 100644 +--- a/drivers/spi/spi-atmel.c ++++ b/drivers/spi/spi-atmel.c +@@ -1661,12 +1661,12 @@ static int atmel_spi_remove(struct platform_device *pdev) + pm_runtime_get_sync(&pdev->dev); + + /* reset the hardware and block queue progress */ +- spin_lock_irq(&as->lock); + if (as->use_dma) { + atmel_spi_stop_dma(master); + atmel_spi_release_dma(master); + } + ++ spin_lock_irq(&as->lock); + spi_writel(as, CR, SPI_BIT(SWRST)); + spi_writel(as, CR, SPI_BIT(SWRST)); /* AT91SAM9263 Rev B workaround */ + spi_readl(as, SR); +diff --git a/drivers/staging/android/ion/Kconfig b/drivers/staging/android/ion/Kconfig +index a517b2d29f1b..8f6494158d3d 100644 +--- a/drivers/staging/android/ion/Kconfig ++++ b/drivers/staging/android/ion/Kconfig +@@ -37,7 +37,7 @@ config ION_CHUNK_HEAP + + config ION_CMA_HEAP + bool "Ion CMA heap support" +- depends on ION && CMA ++ depends on ION && DMA_CMA + help + Choose this option to enable CMA heaps with Ion. This heap is backed + by the Contiguous Memory Allocator (CMA). 
If your system has these +diff --git a/drivers/staging/android/ion/ion_cma_heap.c b/drivers/staging/android/ion/ion_cma_heap.c +index dd5545d9990a..86196ffd2faf 100644 +--- a/drivers/staging/android/ion/ion_cma_heap.c ++++ b/drivers/staging/android/ion/ion_cma_heap.c +@@ -39,9 +39,15 @@ static int ion_cma_allocate(struct ion_heap *heap, struct ion_buffer *buffer, + struct ion_cma_heap *cma_heap = to_cma_heap(heap); + struct sg_table *table; + struct page *pages; ++ unsigned long size = PAGE_ALIGN(len); ++ unsigned long nr_pages = size >> PAGE_SHIFT; ++ unsigned long align = get_order(size); + int ret; + +- pages = cma_alloc(cma_heap->cma, len, 0, GFP_KERNEL); ++ if (align > CONFIG_CMA_ALIGNMENT) ++ align = CONFIG_CMA_ALIGNMENT; ++ ++ pages = cma_alloc(cma_heap->cma, nr_pages, align, GFP_KERNEL); + if (!pages) + return -ENOMEM; + +@@ -53,7 +59,7 @@ static int ion_cma_allocate(struct ion_heap *heap, struct ion_buffer *buffer, + if (ret) + goto free_mem; + +- sg_set_page(table->sgl, pages, len, 0); ++ sg_set_page(table->sgl, pages, size, 0); + + buffer->priv_virt = pages; + buffer->sg_table = table; +@@ -62,7 +68,7 @@ static int ion_cma_allocate(struct ion_heap *heap, struct ion_buffer *buffer, + free_mem: + kfree(table); + err: +- cma_release(cma_heap->cma, pages, buffer->size); ++ cma_release(cma_heap->cma, pages, nr_pages); + return -ENOMEM; + } + +@@ -70,9 +76,10 @@ static void ion_cma_free(struct ion_buffer *buffer) + { + struct ion_cma_heap *cma_heap = to_cma_heap(buffer->heap); + struct page *pages = buffer->priv_virt; ++ unsigned long nr_pages = PAGE_ALIGN(buffer->size) >> PAGE_SHIFT; + + /* release memory */ +- cma_release(cma_heap->cma, pages, buffer->size); ++ cma_release(cma_heap->cma, pages, nr_pages); + /* release sg table */ + sg_free_table(buffer->sg_table); + kfree(buffer->sg_table); +diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c +index f77e499afddd..065f0b607373 100644 +--- a/drivers/xen/balloon.c ++++ b/drivers/xen/balloon.c +@@ -257,10 +257,25 @@ static void release_memory_resource(struct resource *resource) + kfree(resource); + } + ++/* ++ * Host memory not allocated to dom0. We can use this range for hotplug-based ++ * ballooning. ++ * ++ * It's a type-less resource. Setting IORESOURCE_MEM will make resource ++ * management algorithms (arch_remove_reservations()) look into guest e820, ++ * which we don't want. ++ */ ++static struct resource hostmem_resource = { ++ .name = "Host RAM", ++}; ++ ++void __attribute__((weak)) __init arch_xen_balloon_init(struct resource *res) ++{} ++ + static struct resource *additional_memory_resource(phys_addr_t size) + { +- struct resource *res; +- int ret; ++ struct resource *res, *res_hostmem; ++ int ret = -ENOMEM; + + res = kzalloc(sizeof(*res), GFP_KERNEL); + if (!res) +@@ -269,13 +284,42 @@ static struct resource *additional_memory_resource(phys_addr_t size) + res->name = "System RAM"; + res->flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY; + +- ret = allocate_resource(&iomem_resource, res, +- size, 0, -1, +- PAGES_PER_SECTION * PAGE_SIZE, NULL, NULL); +- if (ret < 0) { +- pr_err("Cannot allocate new System RAM resource\n"); +- kfree(res); +- return NULL; ++ res_hostmem = kzalloc(sizeof(*res), GFP_KERNEL); ++ if (res_hostmem) { ++ /* Try to grab a range from hostmem */ ++ res_hostmem->name = "Host memory"; ++ ret = allocate_resource(&hostmem_resource, res_hostmem, ++ size, 0, -1, ++ PAGES_PER_SECTION * PAGE_SIZE, NULL, NULL); ++ } ++ ++ if (!ret) { ++ /* ++ * Insert this resource into iomem. 
Because hostmem_resource ++ * tracks the portion of the guest e820 marked as UNUSABLE, no one else ++ * should try to use it. ++ */ ++ res->start = res_hostmem->start; ++ res->end = res_hostmem->end; ++ ret = insert_resource(&iomem_resource, res); ++ if (ret < 0) { ++ pr_err("Can't insert iomem_resource [%llx - %llx]\n", ++ res->start, res->end); ++ release_memory_resource(res_hostmem); ++ res_hostmem = NULL; ++ res->start = res->end = 0; ++ } ++ } ++ ++ if (ret) { ++ ret = allocate_resource(&iomem_resource, res, ++ size, 0, -1, ++ PAGES_PER_SECTION * PAGE_SIZE, NULL, NULL); ++ if (ret < 0) { ++ pr_err("Cannot allocate new System RAM resource\n"); ++ kfree(res); ++ return NULL; ++ } + } + + #ifdef CONFIG_SPARSEMEM +@@ -287,6 +331,7 @@ static struct resource *additional_memory_resource(phys_addr_t size) + pr_err("New System RAM resource outside addressable RAM (%lu > %lu)\n", + pfn, limit); + release_memory_resource(res); ++ release_memory_resource(res_hostmem); + return NULL; + } + } +@@ -765,6 +810,8 @@ static int __init balloon_init(void) + set_online_page_callback(&xen_online_page); + register_memory_notifier(&xen_memory_nb); + register_sysctl_table(xen_root); ++ ++ arch_xen_balloon_init(&hostmem_resource); + #endif + + #ifdef CONFIG_XEN_PV +diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c +index 57efbd3b053b..bd56653b9bbc 100644 +--- a/drivers/xen/gntdev.c ++++ b/drivers/xen/gntdev.c +@@ -380,10 +380,8 @@ static int unmap_grant_pages(struct grant_map *map, int offset, int pages) + } + range = 0; + while (range < pages) { +- if (map->unmap_ops[offset+range].handle == -1) { +- range--; ++ if (map->unmap_ops[offset+range].handle == -1) + break; +- } + range++; + } + err = __unmap_grant_pages(map, offset, range); +@@ -1073,8 +1071,10 @@ static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma) + out_unlock_put: + mutex_unlock(&priv->lock); + out_put_map: +- if (use_ptemod) ++ if (use_ptemod) { + map->vma = NULL; ++ unmap_grant_pages(map, 0, map->count); ++ } + gntdev_put_map(priv, map); + return err; + } +diff --git a/fs/afs/write.c b/fs/afs/write.c +index 106e43db1115..926d4d68f791 100644 +--- a/fs/afs/write.c ++++ b/fs/afs/write.c +@@ -282,7 +282,7 @@ int afs_write_end(struct file *file, struct address_space *mapping, + ret = afs_fill_page(vnode, key, pos + copied, + len - copied, page); + if (ret < 0) +- return ret; ++ goto out; + } + SetPageUptodate(page); + } +@@ -290,10 +290,12 @@ int afs_write_end(struct file *file, struct address_space *mapping, + set_page_dirty(page); + if (PageDirty(page)) + _debug("dirtied"); ++ ret = copied; ++ ++out: + unlock_page(page); + put_page(page); +- +- return copied; ++ return ret; + } + + /* +diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c +index 4006b2a1233d..bc534fafacf9 100644 +--- a/fs/btrfs/volumes.c ++++ b/fs/btrfs/volumes.c +@@ -236,7 +236,6 @@ static struct btrfs_device *__alloc_device(void) + kfree(dev); + return ERR_PTR(-ENOMEM); + } +- bio_get(dev->flush_bio); + + INIT_LIST_HEAD(&dev->dev_list); + INIT_LIST_HEAD(&dev->dev_alloc_list); +diff --git a/fs/exec.c b/fs/exec.c +index acec119fcc31..0da4d748b4e6 100644 +--- a/fs/exec.c ++++ b/fs/exec.c +@@ -1216,15 +1216,14 @@ static int de_thread(struct task_struct *tsk) + return -EAGAIN; + } + +-char *get_task_comm(char *buf, struct task_struct *tsk) ++char *__get_task_comm(char *buf, size_t buf_size, struct task_struct *tsk) + { +- /* buf must be at least sizeof(tsk->comm) in size */ + task_lock(tsk); +- strncpy(buf, tsk->comm, sizeof(tsk->comm)); ++ strncpy(buf, tsk->comm, 
buf_size); + task_unlock(tsk); + return buf; + } +-EXPORT_SYMBOL_GPL(get_task_comm); ++EXPORT_SYMBOL_GPL(__get_task_comm); + + /* + * These functions flushes out all traces of the currently running executable +diff --git a/fs/super.c b/fs/super.c +index 994db21f59bf..79d7fc5e0ddd 100644 +--- a/fs/super.c ++++ b/fs/super.c +@@ -522,7 +522,11 @@ struct super_block *sget_userns(struct file_system_type *type, + hlist_add_head(&s->s_instances, &type->fs_supers); + spin_unlock(&sb_lock); + get_filesystem(type); +- register_shrinker(&s->s_shrink); ++ err = register_shrinker(&s->s_shrink); ++ if (err) { ++ deactivate_locked_super(s); ++ s = ERR_PTR(err); ++ } + return s; + } + +diff --git a/fs/xfs/xfs_qm.c b/fs/xfs/xfs_qm.c +index 010a13a201aa..659ed6f8c484 100644 +--- a/fs/xfs/xfs_qm.c ++++ b/fs/xfs/xfs_qm.c +@@ -48,7 +48,7 @@ + STATIC int xfs_qm_init_quotainos(xfs_mount_t *); + STATIC int xfs_qm_init_quotainfo(xfs_mount_t *); + +- ++STATIC void xfs_qm_destroy_quotainos(xfs_quotainfo_t *qi); + STATIC void xfs_qm_dqfree_one(struct xfs_dquot *dqp); + /* + * We use the batch lookup interface to iterate over the dquots as it +@@ -695,9 +695,17 @@ xfs_qm_init_quotainfo( + qinf->qi_shrinker.scan_objects = xfs_qm_shrink_scan; + qinf->qi_shrinker.seeks = DEFAULT_SEEKS; + qinf->qi_shrinker.flags = SHRINKER_NUMA_AWARE; +- register_shrinker(&qinf->qi_shrinker); ++ ++ error = register_shrinker(&qinf->qi_shrinker); ++ if (error) ++ goto out_free_inos; ++ + return 0; + ++out_free_inos: ++ mutex_destroy(&qinf->qi_quotaofflock); ++ mutex_destroy(&qinf->qi_tree_lock); ++ xfs_qm_destroy_quotainos(qinf); + out_free_lru: + list_lru_destroy(&qinf->qi_lru); + out_free_qinf: +@@ -706,7 +714,6 @@ xfs_qm_init_quotainfo( + return error; + } + +- + /* + * Gets called when unmounting a filesystem or when all quotas get + * turned off. 
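Both the fs/super.c and fs/xfs/xfs_qm.c hunks above apply the same pattern: register_shrinker() can fail (it allocates internal per-node deferred counters), so callers now check its return value and tear down whatever they had already built. A minimal sketch of that pattern, with hypothetical names (my_shrinker, my_count and my_scan are illustrative only, not part of this patch):

	#include <linux/shrinker.h>

	static unsigned long my_count(struct shrinker *shrink,
				      struct shrink_control *sc)
	{
		return 0;		/* nothing cached yet */
	}

	static unsigned long my_scan(struct shrinker *shrink,
				     struct shrink_control *sc)
	{
		return SHRINK_STOP;	/* nothing to reclaim */
	}

	static struct shrinker my_shrinker = {
		.count_objects	= my_count,
		.scan_objects	= my_scan,
		.seeks		= DEFAULT_SEEKS,
	};

	static int my_cache_init(void)
	{
		/* may fail allocating its counters, so unwind on error */
		int err = register_shrinker(&my_shrinker);

		if (err)
			return err;
		return 0;
	}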
+@@ -723,19 +730,8 @@ xfs_qm_destroy_quotainfo( + + unregister_shrinker(&qi->qi_shrinker); + list_lru_destroy(&qi->qi_lru); +- +- if (qi->qi_uquotaip) { +- IRELE(qi->qi_uquotaip); +- qi->qi_uquotaip = NULL; /* paranoia */ +- } +- if (qi->qi_gquotaip) { +- IRELE(qi->qi_gquotaip); +- qi->qi_gquotaip = NULL; +- } +- if (qi->qi_pquotaip) { +- IRELE(qi->qi_pquotaip); +- qi->qi_pquotaip = NULL; +- } ++ xfs_qm_destroy_quotainos(qi); ++ mutex_destroy(&qi->qi_tree_lock); + mutex_destroy(&qi->qi_quotaofflock); + kmem_free(qi); + mp->m_quotainfo = NULL; +@@ -1599,6 +1595,24 @@ xfs_qm_init_quotainos( + return error; + } + ++STATIC void ++xfs_qm_destroy_quotainos( ++ xfs_quotainfo_t *qi) ++{ ++ if (qi->qi_uquotaip) { ++ IRELE(qi->qi_uquotaip); ++ qi->qi_uquotaip = NULL; /* paranoia */ ++ } ++ if (qi->qi_gquotaip) { ++ IRELE(qi->qi_gquotaip); ++ qi->qi_gquotaip = NULL; ++ } ++ if (qi->qi_pquotaip) { ++ IRELE(qi->qi_pquotaip); ++ qi->qi_pquotaip = NULL; ++ } ++} ++ + STATIC void + xfs_qm_dqfree_one( + struct xfs_dquot *dqp) +diff --git a/include/crypto/if_alg.h b/include/crypto/if_alg.h +index aeec003a566b..ac0eae8372ab 100644 +--- a/include/crypto/if_alg.h ++++ b/include/crypto/if_alg.h +@@ -18,6 +18,7 @@ + #include + #include + #include ++#include + #include + + #include +@@ -155,7 +156,7 @@ struct af_alg_ctx { + struct af_alg_completion completion; + + size_t used; +- size_t rcvused; ++ atomic_t rcvused; + + bool more; + bool merge; +@@ -228,7 +229,7 @@ static inline int af_alg_rcvbuf(struct sock *sk) + struct af_alg_ctx *ctx = ask->private; + + return max_t(int, max_t(int, sk->sk_rcvbuf & PAGE_MASK, PAGE_SIZE) - +- ctx->rcvused, 0); ++ atomic_read(&ctx->rcvused), 0); + } + + /** +diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h +index ae15864c8708..8f9fc6e5539a 100644 +--- a/include/linux/mlx5/driver.h ++++ b/include/linux/mlx5/driver.h +@@ -1017,7 +1017,7 @@ int mlx5_create_map_eq(struct mlx5_core_dev *dev, struct mlx5_eq *eq, u8 vecidx, + enum mlx5_eq_type type); + int mlx5_destroy_unmap_eq(struct mlx5_core_dev *dev, struct mlx5_eq *eq); + int mlx5_start_eqs(struct mlx5_core_dev *dev); +-int mlx5_stop_eqs(struct mlx5_core_dev *dev); ++void mlx5_stop_eqs(struct mlx5_core_dev *dev); + int mlx5_vector2eqn(struct mlx5_core_dev *dev, int vector, int *eqn, + unsigned int *irqn); + int mlx5_core_attach_mcg(struct mlx5_core_dev *dev, union ib_gid *mgid, u32 qpn); +diff --git a/include/linux/sched.h b/include/linux/sched.h +index fdf74f27acf1..41354690e4e3 100644 +--- a/include/linux/sched.h ++++ b/include/linux/sched.h +@@ -1502,7 +1502,11 @@ static inline void set_task_comm(struct task_struct *tsk, const char *from) + __set_task_comm(tsk, from, false); + } + +-extern char *get_task_comm(char *to, struct task_struct *tsk); ++extern char *__get_task_comm(char *to, size_t len, struct task_struct *tsk); ++#define get_task_comm(buf, tsk) ({ \ ++ BUILD_BUG_ON(sizeof(buf) != TASK_COMM_LEN); \ ++ __get_task_comm(buf, sizeof(buf), tsk); \ ++}) + + #ifdef CONFIG_SMP + void scheduler_ipi(void); +diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h +index 236bfe5b2ffe..6073e8bae025 100644 +--- a/include/net/sch_generic.h ++++ b/include/net/sch_generic.h +@@ -273,7 +273,6 @@ struct tcf_chain { + + struct tcf_block { + struct list_head chain_list; +- struct work_struct work; + }; + + static inline void qdisc_cb_private_validate(const struct sk_buff *skb, int sz) +diff --git a/include/net/xfrm.h b/include/net/xfrm.h +index e015e164bac0..db99efb2d1d0 100644 +--- a/include/net/xfrm.h 
++++ b/include/net/xfrm.h +@@ -1570,6 +1570,9 @@ int xfrm_init_state(struct xfrm_state *x); + int xfrm_prepare_input(struct xfrm_state *x, struct sk_buff *skb); + int xfrm_input(struct sk_buff *skb, int nexthdr, __be32 spi, int encap_type); + int xfrm_input_resume(struct sk_buff *skb, int nexthdr); ++int xfrm_trans_queue(struct sk_buff *skb, ++ int (*finish)(struct net *, struct sock *, ++ struct sk_buff *)); + int xfrm_output_resume(struct sk_buff *skb, int err); + int xfrm_output(struct sock *sk, struct sk_buff *skb); + int xfrm_inner_extract_output(struct xfrm_state *x, struct sk_buff *skb); +diff --git a/include/uapi/linux/libc-compat.h b/include/uapi/linux/libc-compat.h +index 282875cf8056..8254c937c9f4 100644 +--- a/include/uapi/linux/libc-compat.h ++++ b/include/uapi/linux/libc-compat.h +@@ -168,46 +168,99 @@ + + /* If we did not see any headers from any supported C libraries, + * or we are being included in the kernel, then define everything +- * that we need. */ ++ * that we need. Check for previous __UAPI_* definitions to give ++ * unsupported C libraries a way to opt out of any kernel definition. */ + #else /* !defined(__GLIBC__) */ + + /* Definitions for if.h */ ++#ifndef __UAPI_DEF_IF_IFCONF + #define __UAPI_DEF_IF_IFCONF 1 ++#endif ++#ifndef __UAPI_DEF_IF_IFMAP + #define __UAPI_DEF_IF_IFMAP 1 ++#endif ++#ifndef __UAPI_DEF_IF_IFNAMSIZ + #define __UAPI_DEF_IF_IFNAMSIZ 1 ++#endif ++#ifndef __UAPI_DEF_IF_IFREQ + #define __UAPI_DEF_IF_IFREQ 1 ++#endif + /* Everything up to IFF_DYNAMIC, matches net/if.h until glibc 2.23 */ ++#ifndef __UAPI_DEF_IF_NET_DEVICE_FLAGS + #define __UAPI_DEF_IF_NET_DEVICE_FLAGS 1 ++#endif + /* For the future if glibc adds IFF_LOWER_UP, IFF_DORMANT and IFF_ECHO */ ++#ifndef __UAPI_DEF_IF_NET_DEVICE_FLAGS_LOWER_UP_DORMANT_ECHO + #define __UAPI_DEF_IF_NET_DEVICE_FLAGS_LOWER_UP_DORMANT_ECHO 1 ++#endif + + /* Definitions for in.h */ ++#ifndef __UAPI_DEF_IN_ADDR + #define __UAPI_DEF_IN_ADDR 1 ++#endif ++#ifndef __UAPI_DEF_IN_IPPROTO + #define __UAPI_DEF_IN_IPPROTO 1 ++#endif ++#ifndef __UAPI_DEF_IN_PKTINFO + #define __UAPI_DEF_IN_PKTINFO 1 ++#endif ++#ifndef __UAPI_DEF_IP_MREQ + #define __UAPI_DEF_IP_MREQ 1 ++#endif ++#ifndef __UAPI_DEF_SOCKADDR_IN + #define __UAPI_DEF_SOCKADDR_IN 1 ++#endif ++#ifndef __UAPI_DEF_IN_CLASS + #define __UAPI_DEF_IN_CLASS 1 ++#endif + + /* Definitions for in6.h */ ++#ifndef __UAPI_DEF_IN6_ADDR + #define __UAPI_DEF_IN6_ADDR 1 ++#endif ++#ifndef __UAPI_DEF_IN6_ADDR_ALT + #define __UAPI_DEF_IN6_ADDR_ALT 1 ++#endif ++#ifndef __UAPI_DEF_SOCKADDR_IN6 + #define __UAPI_DEF_SOCKADDR_IN6 1 ++#endif ++#ifndef __UAPI_DEF_IPV6_MREQ + #define __UAPI_DEF_IPV6_MREQ 1 ++#endif ++#ifndef __UAPI_DEF_IPPROTO_V6 + #define __UAPI_DEF_IPPROTO_V6 1 ++#endif ++#ifndef __UAPI_DEF_IPV6_OPTIONS + #define __UAPI_DEF_IPV6_OPTIONS 1 ++#endif ++#ifndef __UAPI_DEF_IN6_PKTINFO + #define __UAPI_DEF_IN6_PKTINFO 1 ++#endif ++#ifndef __UAPI_DEF_IP6_MTUINFO + #define __UAPI_DEF_IP6_MTUINFO 1 ++#endif + + /* Definitions for ipx.h */ ++#ifndef __UAPI_DEF_SOCKADDR_IPX + #define __UAPI_DEF_SOCKADDR_IPX 1 ++#endif ++#ifndef __UAPI_DEF_IPX_ROUTE_DEFINITION + #define __UAPI_DEF_IPX_ROUTE_DEFINITION 1 ++#endif ++#ifndef __UAPI_DEF_IPX_INTERFACE_DEFINITION + #define __UAPI_DEF_IPX_INTERFACE_DEFINITION 1 ++#endif ++#ifndef __UAPI_DEF_IPX_CONFIG_DATA + #define __UAPI_DEF_IPX_CONFIG_DATA 1 ++#endif ++#ifndef __UAPI_DEF_IPX_ROUTE_DEF + #define __UAPI_DEF_IPX_ROUTE_DEF 1 ++#endif + + /* Definitions for xattr.h */ ++#ifndef __UAPI_DEF_XATTR + #define __UAPI_DEF_XATTR 1 ++#endif + + 
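/*
 * Illustration (not part of the upstream patch): the __UAPI_DEF_*
 * guards added above give C libraries other than glibc an opt-out.
 * A libc that ships its own struct in6_addr can now pre-define the
 * guard to 0 before pulling in the kernel header, e.g. in a
 * hypothetical libc wrapper header:
 */
#define __UAPI_DEF_IN6_ADDR 0   /* "this libc provides the type itself" */
#include <linux/in6.h>          /* kernel header now skips its copy */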
#endif /* __GLIBC__ */ + +diff --git a/include/uapi/linux/netfilter/nf_conntrack_common.h b/include/uapi/linux/netfilter/nf_conntrack_common.h +index 3fea7709a441..57ccfb32e87f 100644 +--- a/include/uapi/linux/netfilter/nf_conntrack_common.h ++++ b/include/uapi/linux/netfilter/nf_conntrack_common.h +@@ -36,7 +36,7 @@ enum ip_conntrack_info { + + #define NF_CT_STATE_INVALID_BIT (1 << 0) + #define NF_CT_STATE_BIT(ctinfo) (1 << ((ctinfo) % IP_CT_IS_REPLY + 1)) +-#define NF_CT_STATE_UNTRACKED_BIT (1 << (IP_CT_UNTRACKED + 1)) ++#define NF_CT_STATE_UNTRACKED_BIT (1 << 6) + + /* Bitset representing status of connection. */ + enum ip_conntrack_status { +diff --git a/include/xen/balloon.h b/include/xen/balloon.h +index 4914b93a23f2..61f410fd74e4 100644 +--- a/include/xen/balloon.h ++++ b/include/xen/balloon.h +@@ -44,3 +44,8 @@ static inline void xen_balloon_init(void) + { + } + #endif ++ ++#ifdef CONFIG_XEN_BALLOON_MEMORY_HOTPLUG ++struct resource; ++void arch_xen_balloon_init(struct resource *hostmem_resource); ++#endif +diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c +index dbd7b322a86b..1890be7ea9cd 100644 +--- a/kernel/bpf/sockmap.c ++++ b/kernel/bpf/sockmap.c +@@ -588,8 +588,15 @@ static void sock_map_free(struct bpf_map *map) + + write_lock_bh(&sock->sk_callback_lock); + psock = smap_psock_sk(sock); +- smap_list_remove(psock, &stab->sock_map[i]); +- smap_release_sock(psock, sock); ++ /* This check handles a racing sock event that can get the ++ * sk_callback_lock before this case but after xchg happens ++ * causing the refcnt to hit zero and sock user data (psock) ++ * to be null and queued for garbage collection. ++ */ ++ if (likely(psock)) { ++ smap_list_remove(psock, &stab->sock_map[i]); ++ smap_release_sock(psock, sock); ++ } + write_unlock_bh(&sock->sk_callback_lock); + } + rcu_read_unlock(); +diff --git a/kernel/cgroup/cgroup-v1.c b/kernel/cgroup/cgroup-v1.c +index 024085daab1a..a2c05d2476ac 100644 +--- a/kernel/cgroup/cgroup-v1.c ++++ b/kernel/cgroup/cgroup-v1.c +@@ -123,7 +123,11 @@ int cgroup_transfer_tasks(struct cgroup *to, struct cgroup *from) + */ + do { + css_task_iter_start(&from->self, 0, &it); +- task = css_task_iter_next(&it); ++ ++ do { ++ task = css_task_iter_next(&it); ++ } while (task && (task->flags & PF_EXITING)); ++ + if (task) + get_task_struct(task); + css_task_iter_end(&it); +diff --git a/kernel/irq/debug.h b/kernel/irq/debug.h +index 17f05ef8f575..e4d3819a91cc 100644 +--- a/kernel/irq/debug.h ++++ b/kernel/irq/debug.h +@@ -12,6 +12,11 @@ + + static inline void print_irq_desc(unsigned int irq, struct irq_desc *desc) + { ++ static DEFINE_RATELIMIT_STATE(ratelimit, 5 * HZ, 5); ++ ++ if (!__ratelimit(&ratelimit)) ++ return; ++ + printk("irq %d, desc: %p, depth: %d, count: %d, unhandled: %d\n", + irq, desc, desc->depth, desc->irq_count, desc->irqs_unhandled); + printk("->handle_irq(): %p, ", desc->handle_irq); +diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c +index 052773df9f03..d00e85ac10d6 100644 +--- a/kernel/time/hrtimer.c ++++ b/kernel/time/hrtimer.c +@@ -1106,7 +1106,12 @@ static void __hrtimer_init(struct hrtimer *timer, clockid_t clock_id, + + cpu_base = raw_cpu_ptr(&hrtimer_bases); + +- if (clock_id == CLOCK_REALTIME && mode != HRTIMER_MODE_ABS) ++ /* ++ * POSIX magic: Relative CLOCK_REALTIME timers are not affected by ++ * clock modifications, so they needs to become CLOCK_MONOTONIC to ++ * ensure POSIX compliance. 
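/*
 * Illustration (not part of the upstream patch): the rate limiting
 * added to print_irq_desc() above. DEFINE_RATELIMIT_STATE(name,
 * interval, burst) allows at most `burst` messages per `interval`
 * jiffies, and __ratelimit() returns false once that budget is
 * spent. Hypothetical example:
 */
#include <linux/ratelimit.h>

static void example_report(int irq)
{
        static DEFINE_RATELIMIT_STATE(rs, 5 * HZ, 5);

        if (!__ratelimit(&rs))
                return;         /* silently drop the excess */
        printk("irq %d misbehaving\n", irq);
}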
++ */ ++ if (clock_id == CLOCK_REALTIME && mode & HRTIMER_MODE_REL) + clock_id = CLOCK_MONOTONIC; + + base = hrtimer_clockid_to_base(clock_id); +diff --git a/lib/mpi/longlong.h b/lib/mpi/longlong.h +index 57fd45ab7af1..08c60d10747f 100644 +--- a/lib/mpi/longlong.h ++++ b/lib/mpi/longlong.h +@@ -671,7 +671,23 @@ do { \ + ************** MIPS/64 ************** + ***************************************/ + #if (defined(__mips) && __mips >= 3) && W_TYPE_SIZE == 64 +-#if (__GNUC__ >= 5) || (__GNUC__ >= 4 && __GNUC_MINOR__ >= 4) ++#if defined(__mips_isa_rev) && __mips_isa_rev >= 6 ++/* ++ * GCC ends up emitting a __multi3 intrinsic call for MIPS64r6 with the plain C ++ * code below, so we special case MIPS64r6 until the compiler can do better. ++ */ ++#define umul_ppmm(w1, w0, u, v) \ ++do { \ ++ __asm__ ("dmulu %0,%1,%2" \ ++ : "=d" ((UDItype)(w0)) \ ++ : "d" ((UDItype)(u)), \ ++ "d" ((UDItype)(v))); \ ++ __asm__ ("dmuhu %0,%1,%2" \ ++ : "=d" ((UDItype)(w1)) \ ++ : "d" ((UDItype)(u)), \ ++ "d" ((UDItype)(v))); \ ++} while (0) ++#elif (__GNUC__ >= 5) || (__GNUC__ >= 4 && __GNUC_MINOR__ >= 4) + #define umul_ppmm(w1, w0, u, v) \ + do { \ + typedef unsigned int __ll_UTItype __attribute__((mode(TI))); \ +diff --git a/mm/frame_vector.c b/mm/frame_vector.c +index 297c7238f7d4..c64dca6e27c2 100644 +--- a/mm/frame_vector.c ++++ b/mm/frame_vector.c +@@ -62,8 +62,10 @@ int get_vaddr_frames(unsigned long start, unsigned int nr_frames, + * get_user_pages_longterm() and disallow it for filesystem-dax + * mappings. + */ +- if (vma_is_fsdax(vma)) +- return -EOPNOTSUPP; ++ if (vma_is_fsdax(vma)) { ++ ret = -EOPNOTSUPP; ++ goto out; ++ } + + if (!(vma->vm_flags & (VM_IO | VM_PFNMAP))) { + vec->got_ref = true; +diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c +index 045331204097..1933654007c4 100644 +--- a/net/ipv4/ip_gre.c ++++ b/net/ipv4/ip_gre.c +@@ -1274,6 +1274,7 @@ static const struct net_device_ops erspan_netdev_ops = { + static void ipgre_tap_setup(struct net_device *dev) + { + ether_setup(dev); ++ dev->max_mtu = 0; + dev->netdev_ops = &gre_tap_netdev_ops; + dev->priv_flags &= ~IFF_TX_SKB_SHARING; + dev->priv_flags |= IFF_LIVE_ADDR_CHANGE; +diff --git a/net/ipv4/xfrm4_input.c b/net/ipv4/xfrm4_input.c +index e50b7fea57ee..bcfc00e88756 100644 +--- a/net/ipv4/xfrm4_input.c ++++ b/net/ipv4/xfrm4_input.c +@@ -23,6 +23,12 @@ int xfrm4_extract_input(struct xfrm_state *x, struct sk_buff *skb) + return xfrm4_extract_header(skb); + } + ++static int xfrm4_rcv_encap_finish2(struct net *net, struct sock *sk, ++ struct sk_buff *skb) ++{ ++ return dst_input(skb); ++} ++ + static inline int xfrm4_rcv_encap_finish(struct net *net, struct sock *sk, + struct sk_buff *skb) + { +@@ -33,7 +39,11 @@ static inline int xfrm4_rcv_encap_finish(struct net *net, struct sock *sk, + iph->tos, skb->dev)) + goto drop; + } +- return dst_input(skb); ++ ++ if (xfrm_trans_queue(skb, xfrm4_rcv_encap_finish2)) ++ goto drop; ++ ++ return 0; + drop: + kfree_skb(skb); + return NET_RX_DROP; +diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c +index 5b4870caf268..e8ab306794d8 100644 +--- a/net/ipv6/ip6_gre.c ++++ b/net/ipv6/ip6_gre.c +@@ -1335,6 +1335,7 @@ static void ip6gre_tap_setup(struct net_device *dev) + + ether_setup(dev); + ++ dev->max_mtu = 0; + dev->netdev_ops = &ip6gre_tap_netdev_ops; + dev->needs_free_netdev = true; + dev->priv_destructor = ip6gre_dev_free; +diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c +index 3f46121ad139..1161fd5630c1 100644 +--- a/net/ipv6/ip6_tunnel.c ++++ b/net/ipv6/ip6_tunnel.c +@@ -1131,8 +1131,13 
@@ int ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev, __u8 dsfield, + max_headroom += 8; + mtu -= 8; + } +- if (mtu < IPV6_MIN_MTU) +- mtu = IPV6_MIN_MTU; ++ if (skb->protocol == htons(ETH_P_IPV6)) { ++ if (mtu < IPV6_MIN_MTU) ++ mtu = IPV6_MIN_MTU; ++ } else if (mtu < 576) { ++ mtu = 576; ++ } ++ + if (skb_dst(skb) && !t->parms.collect_md) + skb_dst(skb)->ops->update_pmtu(skb_dst(skb), NULL, skb, mtu); + if (skb->len - t->tun_hlen - eth_hlen > mtu && !skb_is_gso(skb)) { +@@ -1679,11 +1684,11 @@ int ip6_tnl_change_mtu(struct net_device *dev, int new_mtu) + { + struct ip6_tnl *tnl = netdev_priv(dev); + +- if (tnl->parms.proto == IPPROTO_IPIP) { +- if (new_mtu < ETH_MIN_MTU) ++ if (tnl->parms.proto == IPPROTO_IPV6) { ++ if (new_mtu < IPV6_MIN_MTU) + return -EINVAL; + } else { +- if (new_mtu < IPV6_MIN_MTU) ++ if (new_mtu < ETH_MIN_MTU) + return -EINVAL; + } + if (new_mtu > 0xFFF8 - dev->hard_header_len) +diff --git a/net/ipv6/route.c b/net/ipv6/route.c +index ca8d3266e92e..a4a865c8a23c 100644 +--- a/net/ipv6/route.c ++++ b/net/ipv6/route.c +@@ -1755,6 +1755,7 @@ struct dst_entry *icmp6_dst_alloc(struct net_device *dev, + } + + rt->dst.flags |= DST_HOST; ++ rt->dst.input = ip6_input; + rt->dst.output = ip6_output; + rt->rt6i_gateway = fl6->daddr; + rt->rt6i_dst.addr = fl6->daddr; +diff --git a/net/ipv6/xfrm6_input.c b/net/ipv6/xfrm6_input.c +index fe04e23af986..841f4a07438e 100644 +--- a/net/ipv6/xfrm6_input.c ++++ b/net/ipv6/xfrm6_input.c +@@ -32,6 +32,14 @@ int xfrm6_rcv_spi(struct sk_buff *skb, int nexthdr, __be32 spi, + } + EXPORT_SYMBOL(xfrm6_rcv_spi); + ++static int xfrm6_transport_finish2(struct net *net, struct sock *sk, ++ struct sk_buff *skb) ++{ ++ if (xfrm_trans_queue(skb, ip6_rcv_finish)) ++ __kfree_skb(skb); ++ return -1; ++} ++ + int xfrm6_transport_finish(struct sk_buff *skb, int async) + { + struct xfrm_offload *xo = xfrm_offload(skb); +@@ -56,7 +64,7 @@ int xfrm6_transport_finish(struct sk_buff *skb, int async) + + NF_HOOK(NFPROTO_IPV6, NF_INET_PRE_ROUTING, + dev_net(skb->dev), NULL, skb, skb->dev, NULL, +- ip6_rcv_finish); ++ xfrm6_transport_finish2); + return -1; + } + +diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c +index 70e9d2ca8bbe..4daafb07602f 100644 +--- a/net/mac80211/rx.c ++++ b/net/mac80211/rx.c +@@ -3632,6 +3632,8 @@ static bool ieee80211_accept_frame(struct ieee80211_rx_data *rx) + } + return true; + case NL80211_IFTYPE_MESH_POINT: ++ if (ether_addr_equal(sdata->vif.addr, hdr->addr2)) ++ return false; + if (multicast) + return true; + return ether_addr_equal(sdata->vif.addr, hdr->addr1); +diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c +index 64e1ee091225..5b504aa653f5 100644 +--- a/net/netfilter/nf_tables_api.c ++++ b/net/netfilter/nf_tables_api.c +@@ -2072,7 +2072,7 @@ static int nf_tables_dump_rules(struct sk_buff *skb, + continue; + + list_for_each_entry_rcu(chain, &table->chains, list) { +- if (ctx && ctx->chain[0] && ++ if (ctx && ctx->chain && + strcmp(ctx->chain, chain->name) != 0) + continue; + +@@ -4596,8 +4596,10 @@ static int nf_tables_dump_obj_done(struct netlink_callback *cb) + { + struct nft_obj_filter *filter = cb->data; + +- kfree(filter->table); +- kfree(filter); ++ if (filter) { ++ kfree(filter->table); ++ kfree(filter); ++ } + + return 0; + } +diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c +index ecbb019efcbd..934c239cf98d 100644 +--- a/net/sched/cls_api.c ++++ b/net/sched/cls_api.c +@@ -197,21 +197,26 @@ static struct tcf_chain *tcf_chain_create(struct tcf_block *block, + + static void 
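/*
 * Illustration (not part of the upstream patch): the clamping rule
 * the ip6_tnl fixes above implement, factored into a hypothetical
 * helper. IPv6 payloads must never be told an MTU below IPV6_MIN_MTU
 * (1280), while other payloads (IPv4) only get the historical
 * 576-byte floor:
 */
static unsigned int clamp_tnl_mtu(__be16 proto, unsigned int mtu)
{
        if (proto == htons(ETH_P_IPV6))
                return max_t(unsigned int, mtu, IPV6_MIN_MTU);
        return max_t(unsigned int, mtu, 576);
}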
tcf_chain_flush(struct tcf_chain *chain) + { +- struct tcf_proto *tp; ++ struct tcf_proto *tp = rtnl_dereference(chain->filter_chain); + + if (chain->p_filter_chain) + RCU_INIT_POINTER(*chain->p_filter_chain, NULL); +- while ((tp = rtnl_dereference(chain->filter_chain)) != NULL) { ++ while (tp) { + RCU_INIT_POINTER(chain->filter_chain, tp->next); +- tcf_chain_put(chain); + tcf_proto_destroy(tp); ++ tp = rtnl_dereference(chain->filter_chain); ++ tcf_chain_put(chain); + } + } + + static void tcf_chain_destroy(struct tcf_chain *chain) + { ++ struct tcf_block *block = chain->block; ++ + list_del(&chain->list); + kfree(chain); ++ if (list_empty(&block->chain_list)) ++ kfree(block); + } + + static void tcf_chain_hold(struct tcf_chain *chain) +@@ -275,22 +280,8 @@ int tcf_block_get(struct tcf_block **p_block, + } + EXPORT_SYMBOL(tcf_block_get); + +-static void tcf_block_put_final(struct work_struct *work) +-{ +- struct tcf_block *block = container_of(work, struct tcf_block, work); +- struct tcf_chain *chain, *tmp; +- +- rtnl_lock(); +- /* Only chain 0 should be still here. */ +- list_for_each_entry_safe(chain, tmp, &block->chain_list, list) +- tcf_chain_put(chain); +- rtnl_unlock(); +- kfree(block); +-} +- + /* XXX: Standalone actions are not allowed to jump to any chain, and bound +- * actions should be all removed after flushing. However, filters are now +- * destroyed in tc filter workqueue with RTNL lock, they can not race here. ++ * actions should be all removed after flushing. + */ + void tcf_block_put(struct tcf_block *block) + { +@@ -299,15 +290,22 @@ void tcf_block_put(struct tcf_block *block) + if (!block) + return; + +- list_for_each_entry_safe(chain, tmp, &block->chain_list, list) ++ /* Hold a refcnt for all chains, so that they don't disappear ++ * while we are iterating. ++ */ ++ list_for_each_entry(chain, &block->chain_list, list) ++ tcf_chain_hold(chain); ++ ++ list_for_each_entry(chain, &block->chain_list, list) + tcf_chain_flush(chain); + +- INIT_WORK(&block->work, tcf_block_put_final); +- /* Wait for RCU callbacks to release the reference count and make +- * sure their works have been queued before this. +- */ +- rcu_barrier(); +- tcf_queue_work(&block->work); ++ /* At this point, all the chains should have refcnt >= 1. */ ++ list_for_each_entry_safe(chain, tmp, &block->chain_list, list) ++ tcf_chain_put(chain); ++ ++ /* Finally, put chain 0 and allow block to be freed. */ ++ chain = list_first_entry(&block->chain_list, struct tcf_chain, list); ++ tcf_chain_put(chain); + } + EXPORT_SYMBOL(tcf_block_put); + +diff --git a/net/sctp/socket.c b/net/sctp/socket.c +index 3c8b92667866..6b3a862706de 100644 +--- a/net/sctp/socket.c ++++ b/net/sctp/socket.c +@@ -3494,6 +3494,8 @@ static int sctp_setsockopt_hmac_ident(struct sock *sk, + + if (optlen < sizeof(struct sctp_hmacalgo)) + return -EINVAL; ++ optlen = min_t(unsigned int, optlen, sizeof(struct sctp_hmacalgo) + ++ SCTP_AUTH_NUM_HMACS * sizeof(u16)); + + hmacs = memdup_user(optval, optlen); + if (IS_ERR(hmacs)) +@@ -3532,6 +3534,11 @@ static int sctp_setsockopt_auth_key(struct sock *sk, + + if (optlen <= sizeof(struct sctp_authkey)) + return -EINVAL; ++ /* authkey->sca_keylength is u16, so optlen can't be bigger than ++ * this. 
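/*
 * Illustration (not part of the upstream patch): the bounding pattern
 * the sctp setsockopt fixes around here apply. optlen is
 * user-controlled and flows straight into memdup_user(), so an
 * unclamped value lets userspace force an arbitrarily large kernel
 * allocation. Because the embedded key length is a u16, nothing above
 * USHRT_MAX plus the header can ever be valid, so clamping first is
 * safe. Hypothetical shape:
 */
static int example_set_key(char __user *optval, unsigned int optlen)
{
        struct sctp_authkey *authkey;

        if (optlen <= sizeof(struct sctp_authkey))
                return -EINVAL;
        optlen = min_t(unsigned int, optlen,
                       USHRT_MAX + sizeof(struct sctp_authkey));
        authkey = memdup_user(optval, optlen);
        if (IS_ERR(authkey))
                return PTR_ERR(authkey);
        /* ... validate authkey->sca_keylength against optlen ... */
        kfree(authkey);
        return 0;
}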
++ */ ++ optlen = min_t(unsigned int, optlen, USHRT_MAX + ++ sizeof(struct sctp_authkey)); + + authkey = memdup_user(optval, optlen); + if (IS_ERR(authkey)) +@@ -3889,6 +3896,9 @@ static int sctp_setsockopt_reset_streams(struct sock *sk, + + if (optlen < sizeof(*params)) + return -EINVAL; ++ /* srs_number_streams is u16, so optlen can't be bigger than this. */ ++ optlen = min_t(unsigned int, optlen, USHRT_MAX + ++ sizeof(__u16) * sizeof(*params)); + + params = memdup_user(optval, optlen); + if (IS_ERR(params)) +@@ -4947,7 +4957,7 @@ static int sctp_getsockopt_autoclose(struct sock *sk, int len, char __user *optv + len = sizeof(int); + if (put_user(len, optlen)) + return -EFAULT; +- if (copy_to_user(optval, &sctp_sk(sk)->autoclose, sizeof(int))) ++ if (copy_to_user(optval, &sctp_sk(sk)->autoclose, len)) + return -EFAULT; + return 0; + } +@@ -5578,6 +5588,9 @@ static int sctp_getsockopt_local_addrs(struct sock *sk, int len, + err = -EFAULT; + goto out; + } ++ /* XXX: We should have accounted for sizeof(struct sctp_getaddrs) too, ++ * but we can't change it anymore. ++ */ + if (put_user(bytes_copied, optlen)) + err = -EFAULT; + out: +@@ -6014,7 +6027,7 @@ static int sctp_getsockopt_maxseg(struct sock *sk, int len, + params.assoc_id = 0; + } else if (len >= sizeof(struct sctp_assoc_value)) { + len = sizeof(struct sctp_assoc_value); +- if (copy_from_user(¶ms, optval, sizeof(params))) ++ if (copy_from_user(¶ms, optval, len)) + return -EFAULT; + } else + return -EINVAL; +@@ -6184,7 +6197,9 @@ static int sctp_getsockopt_active_key(struct sock *sk, int len, + + if (len < sizeof(struct sctp_authkeyid)) + return -EINVAL; +- if (copy_from_user(&val, optval, sizeof(struct sctp_authkeyid))) ++ ++ len = sizeof(struct sctp_authkeyid); ++ if (copy_from_user(&val, optval, len)) + return -EFAULT; + + asoc = sctp_id2assoc(sk, val.scact_assoc_id); +@@ -6196,7 +6211,6 @@ static int sctp_getsockopt_active_key(struct sock *sk, int len, + else + val.scact_keynumber = ep->active_key_id; + +- len = sizeof(struct sctp_authkeyid); + if (put_user(len, optlen)) + return -EFAULT; + if (copy_to_user(optval, &val, len)) +@@ -6222,7 +6236,7 @@ static int sctp_getsockopt_peer_auth_chunks(struct sock *sk, int len, + if (len < sizeof(struct sctp_authchunks)) + return -EINVAL; + +- if (copy_from_user(&val, optval, sizeof(struct sctp_authchunks))) ++ if (copy_from_user(&val, optval, sizeof(val))) + return -EFAULT; + + to = p->gauth_chunks; +@@ -6267,7 +6281,7 @@ static int sctp_getsockopt_local_auth_chunks(struct sock *sk, int len, + if (len < sizeof(struct sctp_authchunks)) + return -EINVAL; + +- if (copy_from_user(&val, optval, sizeof(struct sctp_authchunks))) ++ if (copy_from_user(&val, optval, sizeof(val))) + return -EFAULT; + + to = p->gauth_chunks; +diff --git a/net/sctp/ulpqueue.c b/net/sctp/ulpqueue.c +index a71be33f3afe..e36ec5dd64c6 100644 +--- a/net/sctp/ulpqueue.c ++++ b/net/sctp/ulpqueue.c +@@ -1084,29 +1084,21 @@ void sctp_ulpq_partial_delivery(struct sctp_ulpq *ulpq, + void sctp_ulpq_renege(struct sctp_ulpq *ulpq, struct sctp_chunk *chunk, + gfp_t gfp) + { +- struct sctp_association *asoc; +- __u16 needed, freed; +- +- asoc = ulpq->asoc; ++ struct sctp_association *asoc = ulpq->asoc; ++ __u32 freed = 0; ++ __u16 needed; + +- if (chunk) { +- needed = ntohs(chunk->chunk_hdr->length); +- needed -= sizeof(struct sctp_data_chunk); +- } else +- needed = SCTP_DEFAULT_MAXWINDOW; +- +- freed = 0; ++ needed = ntohs(chunk->chunk_hdr->length) - ++ sizeof(struct sctp_data_chunk); + + if 
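/*
 * Illustration (not part of the upstream patch): the getsockopt
 * convention the fixes above converge on. Fix `len` once, then use
 * that same value for the copy in, the reported length and the copy
 * out, so the three can never disagree and no more than `len` bytes
 * ever cross the user/kernel boundary. Hypothetical shape:
 */
static int example_get(char __user *optval, int __user *optlen)
{
        struct sctp_authkeyid val;
        int len = sizeof(struct sctp_authkeyid);

        if (copy_from_user(&val, optval, len))
                return -EFAULT;
        /* ... look up the association and fill val ... */
        if (put_user(len, optlen))
                return -EFAULT;
        if (copy_to_user(optval, &val, len))
                return -EFAULT;
        return 0;
}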
(skb_queue_empty(&asoc->base.sk->sk_receive_queue)) { + freed = sctp_ulpq_renege_order(ulpq, needed); +- if (freed < needed) { ++ if (freed < needed) + freed += sctp_ulpq_renege_frags(ulpq, needed - freed); +- } + } + /* If able to free enough room, accept this chunk. */ +- if (chunk && (freed >= needed)) { +- int retval; +- retval = sctp_ulpq_tail_data(ulpq, chunk, gfp); ++ if (freed >= needed) { ++ int retval = sctp_ulpq_tail_data(ulpq, chunk, gfp); + /* + * Enter partial delivery if chunk has not been + * delivered; otherwise, drain the reassembly queue. +diff --git a/net/tipc/bearer.c b/net/tipc/bearer.c +index 47ec121574ce..c8001471da6c 100644 +--- a/net/tipc/bearer.c ++++ b/net/tipc/bearer.c +@@ -324,6 +324,7 @@ static int tipc_enable_bearer(struct net *net, const char *name, + if (res) { + pr_warn("Bearer <%s> rejected, enable failure (%d)\n", + name, -res); ++ kfree(b); + return -EINVAL; + } + +@@ -347,8 +348,10 @@ static int tipc_enable_bearer(struct net *net, const char *name, + if (skb) + tipc_bearer_xmit_skb(net, bearer_id, skb, &b->bcast_addr); + +- if (tipc_mon_create(net, bearer_id)) ++ if (tipc_mon_create(net, bearer_id)) { ++ bearer_disable(net, b); + return -ENOMEM; ++ } + + pr_info("Enabled bearer <%s>, discovery domain %s, priority %u\n", + name, +diff --git a/net/tipc/monitor.c b/net/tipc/monitor.c +index 9e109bb1a207..0fcfb3916dcf 100644 +--- a/net/tipc/monitor.c ++++ b/net/tipc/monitor.c +@@ -633,9 +633,13 @@ void tipc_mon_delete(struct net *net, int bearer_id) + { + struct tipc_net *tn = tipc_net(net); + struct tipc_monitor *mon = tipc_monitor(net, bearer_id); +- struct tipc_peer *self = get_self(net, bearer_id); ++ struct tipc_peer *self; + struct tipc_peer *peer, *tmp; + ++ if (!mon) ++ return; ++ ++ self = get_self(net, bearer_id); + write_lock_bh(&mon->lock); + tn->monitors[bearer_id] = NULL; + list_for_each_entry_safe(peer, tmp, &self->list, list) { +diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c +index 81bef0676e1d..ea28aa505302 100644 +--- a/net/wireless/nl80211.c ++++ b/net/wireless/nl80211.c +@@ -11301,7 +11301,8 @@ static int nl80211_nan_add_func(struct sk_buff *skb, + break; + case NL80211_NAN_FUNC_FOLLOW_UP: + if (!tb[NL80211_NAN_FUNC_FOLLOW_UP_ID] || +- !tb[NL80211_NAN_FUNC_FOLLOW_UP_REQ_ID]) { ++ !tb[NL80211_NAN_FUNC_FOLLOW_UP_REQ_ID] || ++ !tb[NL80211_NAN_FUNC_FOLLOW_UP_DEST]) { + err = -EINVAL; + goto out; + } +diff --git a/net/xfrm/xfrm_input.c b/net/xfrm/xfrm_input.c +index da6447389ffb..3f6f6f8c9fa5 100644 +--- a/net/xfrm/xfrm_input.c ++++ b/net/xfrm/xfrm_input.c +@@ -8,15 +8,29 @@ + * + */ + ++#include ++#include + #include + #include + #include ++#include + #include + #include + #include + #include + #include + ++struct xfrm_trans_tasklet { ++ struct tasklet_struct tasklet; ++ struct sk_buff_head queue; ++}; ++ ++struct xfrm_trans_cb { ++ int (*finish)(struct net *net, struct sock *sk, struct sk_buff *skb); ++}; ++ ++#define XFRM_TRANS_SKB_CB(__skb) ((struct xfrm_trans_cb *)&((__skb)->cb[0])) ++ + static struct kmem_cache *secpath_cachep __read_mostly; + + static DEFINE_SPINLOCK(xfrm_input_afinfo_lock); +@@ -25,6 +39,8 @@ static struct xfrm_input_afinfo const __rcu *xfrm_input_afinfo[AF_INET6 + 1]; + static struct gro_cells gro_cells; + static struct net_device xfrm_napi_dev; + ++static DEFINE_PER_CPU(struct xfrm_trans_tasklet, xfrm_trans_tasklet); ++ + int xfrm_input_register_afinfo(const struct xfrm_input_afinfo *afinfo) + { + int err = 0; +@@ -477,9 +493,41 @@ int xfrm_input_resume(struct sk_buff *skb, int nexthdr) + } + 
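/*
 * Illustration (not part of the upstream patch): the tipc_mon_delete()
 * guard above. Once setup can fail halfway (tipc_enable_bearer() now
 * tears the bearer down when tipc_mon_create() fails), the teardown
 * side must tolerate pieces that were never created. Generic,
 * hypothetical shape:
 */
static void example_delete(struct net *net, int bearer_id)
{
        struct tipc_monitor *mon = tipc_monitor(net, bearer_id);

        if (!mon)
                return;         /* monitor was never brought up */
        /* ... only now is it safe to free per-monitor state ... */
}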
EXPORT_SYMBOL(xfrm_input_resume); + ++static void xfrm_trans_reinject(unsigned long data) ++{ ++ struct xfrm_trans_tasklet *trans = (void *)data; ++ struct sk_buff_head queue; ++ struct sk_buff *skb; ++ ++ __skb_queue_head_init(&queue); ++ skb_queue_splice_init(&trans->queue, &queue); ++ ++ while ((skb = __skb_dequeue(&queue))) ++ XFRM_TRANS_SKB_CB(skb)->finish(dev_net(skb->dev), NULL, skb); ++} ++ ++int xfrm_trans_queue(struct sk_buff *skb, ++ int (*finish)(struct net *, struct sock *, ++ struct sk_buff *)) ++{ ++ struct xfrm_trans_tasklet *trans; ++ ++ trans = this_cpu_ptr(&xfrm_trans_tasklet); ++ ++ if (skb_queue_len(&trans->queue) >= netdev_max_backlog) ++ return -ENOBUFS; ++ ++ XFRM_TRANS_SKB_CB(skb)->finish = finish; ++ skb_queue_tail(&trans->queue, skb); ++ tasklet_schedule(&trans->tasklet); ++ return 0; ++} ++EXPORT_SYMBOL(xfrm_trans_queue); ++ + void __init xfrm_input_init(void) + { + int err; ++ int i; + + init_dummy_netdev(&xfrm_napi_dev); + err = gro_cells_init(&gro_cells, &xfrm_napi_dev); +@@ -490,4 +538,13 @@ void __init xfrm_input_init(void) + sizeof(struct sec_path), + 0, SLAB_HWCACHE_ALIGN|SLAB_PANIC, + NULL); ++ ++ for_each_possible_cpu(i) { ++ struct xfrm_trans_tasklet *trans; ++ ++ trans = &per_cpu(xfrm_trans_tasklet, i); ++ __skb_queue_head_init(&trans->queue); ++ tasklet_init(&trans->tasklet, xfrm_trans_reinject, ++ (unsigned long)trans); ++ } + } +diff --git a/sound/soc/codecs/nau8825.c b/sound/soc/codecs/nau8825.c +index 714ce17da717..e853a6dfd33b 100644 +--- a/sound/soc/codecs/nau8825.c ++++ b/sound/soc/codecs/nau8825.c +@@ -905,6 +905,7 @@ static int nau8825_adc_event(struct snd_soc_dapm_widget *w, + + switch (event) { + case SND_SOC_DAPM_POST_PMU: ++ msleep(125); + regmap_update_bits(nau8825->regmap, NAU8825_REG_ENA_CTRL, + NAU8825_ENABLE_ADC, NAU8825_ENABLE_ADC); + break; +diff --git a/sound/soc/sh/rcar/adg.c b/sound/soc/sh/rcar/adg.c +index 938baff86ef2..2684a2ba33cd 100644 +--- a/sound/soc/sh/rcar/adg.c ++++ b/sound/soc/sh/rcar/adg.c +@@ -216,7 +216,7 @@ int rsnd_adg_set_cmd_timsel_gen2(struct rsnd_mod *cmd_mod, + NULL, &val, NULL); + + val = val << shift; +- mask = 0xffff << shift; ++ mask = 0x0f1f << shift; + + rsnd_mod_bset(adg_mod, CMDOUT_TIMSEL, mask, val); + +@@ -244,7 +244,7 @@ int rsnd_adg_set_src_timesel_gen2(struct rsnd_mod *src_mod, + + in = in << shift; + out = out << shift; +- mask = 0xffff << shift; ++ mask = 0x0f1f << shift; + + switch (id / 2) { + case 0: +@@ -374,7 +374,7 @@ int rsnd_adg_ssi_clk_try_start(struct rsnd_mod *ssi_mod, unsigned int rate) + ckr = 0x80000000; + } + +- rsnd_mod_bset(adg_mod, BRGCKR, 0x80FF0000, adg->ckr | ckr); ++ rsnd_mod_bset(adg_mod, BRGCKR, 0x80770000, adg->ckr | ckr); + rsnd_mod_write(adg_mod, BRRA, adg->rbga); + rsnd_mod_write(adg_mod, BRRB, adg->rbgb); +
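/*
 * Illustration (not part of the upstream patch): the per-CPU deferral
 * pattern xfrm_trans_queue() above implements. Packets go onto a
 * per-CPU queue and are reinjected from tasklet (softirq) context,
 * which bounds stack depth for nested IPsec decapsulation and applies
 * netdev_max_backlog as backpressure. Hypothetical sketch (queue and
 * tasklet are assumed to be initialised per CPU at boot, as in
 * xfrm_input_init() above):
 */
struct example_tasklet {
        struct tasklet_struct tasklet;
        struct sk_buff_head queue;
};
static DEFINE_PER_CPU(struct example_tasklet, example_tasklet);

static int example_defer(struct sk_buff *skb)
{
        struct example_tasklet *t = this_cpu_ptr(&example_tasklet);

        if (skb_queue_len(&t->queue) >= netdev_max_backlog)
                return -ENOBUFS;        /* shed load instead of OOM */
        skb_queue_tail(&t->queue, skb);
        tasklet_schedule(&t->tasklet);
        return 0;
}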