From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Mike Pagano"
To: gentoo-commits@lists.gentoo.org
Content-Transfer-Encoding: 8bit
Content-type: text/plain; charset=UTF-8
Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano"
Message-ID: <1560611395.c5ae9b74d246be7052cf5a2302e5fc0c935ac8a8.mpagano@gentoo>
Subject: [gentoo-commits] proj/linux-patches:5.1 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1009_linux-5.1.10.patch
X-VCS-Directories: /
X-VCS-Committer: mpagano
X-VCS-Committer-Name: Mike Pagano
X-VCS-Revision: c5ae9b74d246be7052cf5a2302e5fc0c935ac8a8
X-VCS-Branch: 5.1
Date: Sat, 15 Jun 2019 15:10:14 +0000 (UTC)
Precedence: bulk
List-Id: Gentoo Linux mail
X-BeenThere: gentoo-commits@lists.gentoo.org
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-Archives-Salt: 0955b037-ee81-4e55-804c-50baa9e4796f
X-Archives-Hash: 7d0613108ccd4cdb4409bc9d14d7cb53

commit:     c5ae9b74d246be7052cf5a2302e5fc0c935ac8a8
Author:     Mike Pagano gentoo org>
AuthorDate: Sat Jun 15 15:09:55 2019 +0000
Commit:     Mike Pagano gentoo org>
CommitDate: Sat Jun 15 15:09:55 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c5ae9b74

Linux patch 5.1.10

Signed-off-by: Mike Pagano gentoo.org>

 0000_README             |    4 +
 1009_linux-5.1.10.patch | 4366 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4370 insertions(+)

diff --git a/0000_README b/0000_README
index c7d01be..6a502ad 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch:  1008_linux-5.1.9.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.1.9

+Patch:  1009_linux-5.1.10.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.1.10
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.
diff --git a/1009_linux-5.1.10.patch b/1009_linux-5.1.10.patch
new file mode 100644
index 0000000..90072d1
--- /dev/null
+++ b/1009_linux-5.1.10.patch
@@ -0,0 +1,4366 @@
+diff --git a/Documentation/devicetree/bindings/input/touchscreen/goodix.txt b/Documentation/devicetree/bindings/input/touchscreen/goodix.txt
+index 8cf0b4d38a7e..109cc0cebaa2 100644
+--- a/Documentation/devicetree/bindings/input/touchscreen/goodix.txt
++++ b/Documentation/devicetree/bindings/input/touchscreen/goodix.txt
+@@ -3,6 +3,7 @@ Device tree bindings for Goodix GT9xx series touchscreen controller
+ Required properties:
+
+ - compatible : Should be "goodix,gt1151"
++ or "goodix,gt5663"
+ or "goodix,gt5688"
+ or "goodix,gt911"
+ or "goodix,gt9110"
+diff --git a/Makefile b/Makefile
+index 2884a8d3b6d6..e7d1973d9c26 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 1
+-SUBLEVEL = 9
++SUBLEVEL = 10
+ EXTRAVERSION =
+ NAME = Shy Crocodile
+
+diff --git a/arch/arm/boot/dts/exynos5420-arndale-octa.dts b/arch/arm/boot/dts/exynos5420-arndale-octa.dts
+index 3447160e1fbf..a0e27e1c0feb 100644
+--- a/arch/arm/boot/dts/exynos5420-arndale-octa.dts
++++ b/arch/arm/boot/dts/exynos5420-arndale-octa.dts
+@@ -107,6 +107,7 @@
+ regulator-name = "PVDD_APIO_1V8";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
++ regulator-always-on;
+ };
+
+ ldo3_reg: LDO3 {
+@@ -145,6 +146,7 @@
+ regulator-name = "PVDD_ABB_1V8";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
++ regulator-always-on;
+ };
+
+ ldo9_reg: LDO9 {
+diff --git a/arch/arm/boot/dts/imx50.dtsi b/arch/arm/boot/dts/imx50.dtsi
+index ee1e3e8bf4ec..4a68e30cc668 100644
+--- a/arch/arm/boot/dts/imx50.dtsi
++++ b/arch/arm/boot/dts/imx50.dtsi
+@@ -411,7 +411,7 @@
+ reg = <0x63fb0000 0x4000>;
+ interrupts = <6>;
+ clocks = <&clks IMX5_CLK_SDMA_GATE>,
+- <&clks IMX5_CLK_SDMA_GATE>;
++ <&clks IMX5_CLK_AHB>;
+ clock-names = "ipg", "ahb";
+ #dma-cells = <3>;
+ fsl,sdma-ram-script-name = "imx/sdma/sdma-imx50.bin";
+diff --git a/arch/arm/boot/dts/imx51.dtsi b/arch/arm/boot/dts/imx51.dtsi
+index a5ee25cedc10..0a4b9a5d9a9c 100644
+--- a/arch/arm/boot/dts/imx51.dtsi
++++ b/arch/arm/boot/dts/imx51.dtsi
+@@ -489,7 +489,7 @@
+ reg = <0x83fb0000 0x4000>;
+ interrupts = <6>;
+ clocks = <&clks IMX5_CLK_SDMA_GATE>,
+- <&clks IMX5_CLK_SDMA_GATE>;
++ <&clks IMX5_CLK_AHB>;
+ clock-names = "ipg", "ahb";
+ #dma-cells = <3>;
+ fsl,sdma-ram-script-name = "imx/sdma/sdma-imx51.bin";
+diff --git a/arch/arm/boot/dts/imx53.dtsi b/arch/arm/boot/dts/imx53.dtsi
+index b3300300aabe..9b672ed2486d 100644
+--- a/arch/arm/boot/dts/imx53.dtsi
++++ b/arch/arm/boot/dts/imx53.dtsi
+@@ -702,7 +702,7 @@
+ reg = <0x63fb0000 0x4000>;
+ interrupts = <6>;
+ clocks = <&clks IMX5_CLK_SDMA_GATE>,
+- <&clks IMX5_CLK_SDMA_GATE>;
++ <&clks IMX5_CLK_AHB>;
+ clock-names = "ipg", "ahb";
+ #dma-cells = <3>;
+ fsl,sdma-ram-script-name = "imx/sdma/sdma-imx53.bin";
+diff --git a/arch/arm/boot/dts/imx6qdl.dtsi b/arch/arm/boot/dts/imx6qdl.dtsi
+index fe17a3405edc..2df39c308c83 100644
+--- a/arch/arm/boot/dts/imx6qdl.dtsi
++++ b/arch/arm/boot/dts/imx6qdl.dtsi
+@@ -918,7 +918,7 @@
+ compatible = "fsl,imx6q-sdma", "fsl,imx35-sdma";
+ reg = <0x020ec000 0x4000>;
+ interrupts = <0 2 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&clks IMX6QDL_CLK_SDMA>,
++ clocks = <&clks IMX6QDL_CLK_IPG>,
+ <&clks IMX6QDL_CLK_SDMA>;
+ clock-names = "ipg", "ahb";
+ #dma-cells = <3>;
+diff --git a/arch/arm/boot/dts/imx6sl.dtsi b/arch/arm/boot/dts/imx6sl.dtsi
+index 4b4813f176cd..1f2a4ed99ed3 100644
+--- a/arch/arm/boot/dts/imx6sl.dtsi
++++ b/arch/arm/boot/dts/imx6sl.dtsi
+@@ -741,7 +741,7 @@
+ reg = <0x020ec000 0x4000>;
+ interrupts = <0 2 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clks IMX6SL_CLK_SDMA>,
+- <&clks IMX6SL_CLK_SDMA>;
++ <&clks IMX6SL_CLK_AHB>;
+ clock-names = "ipg", "ahb";
+ #dma-cells = <3>;
+ /* imx6sl reuses imx6q sdma firmware */
+diff --git a/arch/arm/boot/dts/imx6sll.dtsi b/arch/arm/boot/dts/imx6sll.dtsi
+index 62847c68330b..ed598d72038c 100644
+--- a/arch/arm/boot/dts/imx6sll.dtsi
++++ b/arch/arm/boot/dts/imx6sll.dtsi
+@@ -621,7 +621,7 @@
+ compatible = "fsl,imx6sll-sdma", "fsl,imx35-sdma";
+ reg = <0x020ec000 0x4000>;
+ interrupts = ;
+- clocks = <&clks IMX6SLL_CLK_SDMA>,
++ clocks = <&clks IMX6SLL_CLK_IPG>,
+ <&clks IMX6SLL_CLK_SDMA>;
+ clock-names = "ipg", "ahb";
+ #dma-cells = <3>;
+diff --git a/arch/arm/boot/dts/imx6sx.dtsi b/arch/arm/boot/dts/imx6sx.dtsi
+index 5b16e65f7696..fc5a8fc74091 100644
+--- a/arch/arm/boot/dts/imx6sx.dtsi
++++ b/arch/arm/boot/dts/imx6sx.dtsi
+@@ -820,7 +820,7 @@
+ compatible = "fsl,imx6sx-sdma", "fsl,imx6q-sdma";
+ reg = <0x020ec000 0x4000>;
+ interrupts = ;
+- clocks = <&clks IMX6SX_CLK_SDMA>,
++ clocks = <&clks IMX6SX_CLK_IPG>,
+ <&clks IMX6SX_CLK_SDMA>;
+ clock-names = "ipg", "ahb";
+ #dma-cells = <3>;
+diff --git a/arch/arm/boot/dts/imx6ul.dtsi b/arch/arm/boot/dts/imx6ul.dtsi
+index 62ed30c781ed..facd65602c2d 100644
+--- a/arch/arm/boot/dts/imx6ul.dtsi
++++ b/arch/arm/boot/dts/imx6ul.dtsi
+@@ -708,7 +708,7 @@
+ "fsl,imx35-sdma";
+ reg = <0x020ec000 0x4000>;
+ interrupts = ;
+- clocks = <&clks IMX6UL_CLK_SDMA>,
++ clocks = <&clks IMX6UL_CLK_IPG>,
+ <&clks IMX6UL_CLK_SDMA>;
+ clock-names = "ipg", "ahb";
+ #dma-cells = <3>;
+diff --git a/arch/arm/boot/dts/imx7s.dtsi b/arch/arm/boot/dts/imx7s.dtsi
+index e88f53a4c7f4..2f45ef527e6c 100644
+--- a/arch/arm/boot/dts/imx7s.dtsi
++++ b/arch/arm/boot/dts/imx7s.dtsi
+@@ -1067,8 +1067,8 @@
+ compatible = "fsl,imx7d-sdma", "fsl,imx35-sdma";
+ reg = <0x30bd0000 0x10000>;
+ interrupts = ;
+- clocks = <&clks IMX7D_SDMA_CORE_CLK>,
+- <&clks IMX7D_AHB_CHANNEL_ROOT_CLK>;
++ clocks = <&clks IMX7D_IPG_ROOT_CLK>,
++ <&clks IMX7D_SDMA_CORE_CLK>;
+ clock-names = "ipg", "ahb";
+ #dma-cells = <3>;
+ fsl,sdma-ram-script-name = "imx/sdma/sdma-imx7d.bin";
+diff --git a/arch/arm/include/asm/hardirq.h b/arch/arm/include/asm/hardirq.h
+index cba23eaa6072..7a88f160b1fb 100644
+--- a/arch/arm/include/asm/hardirq.h
++++ b/arch/arm/include/asm/hardirq.h
+@@ -6,6 +6,7 @@
+ #include
+ #include
+
++/* number of IPIS _not_ including IPI_CPU_BACKTRACE */
+ #define NR_IPI 7
+
+ typedef struct {
+diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
+index facd4240ca02..c93fe0f256de 100644
+--- a/arch/arm/kernel/smp.c
++++ b/arch/arm/kernel/smp.c
+@@ -70,6 +70,10 @@ enum ipi_msg_type {
+ IPI_CPU_STOP,
+ IPI_IRQ_WORK,
+ IPI_COMPLETION,
++ /*
++ * CPU_BACKTRACE is special and not included in NR_IPI
++ * or tracable with trace_ipi_*
++ */
+ IPI_CPU_BACKTRACE,
+ /*
+ * SGI8-15 can be reserved by secure firmware, and thus may
+@@ -797,7 +801,7 @@ core_initcall(register_cpufreq_notifier);
+
+ static void raise_nmi(cpumask_t *mask)
+ {
+- smp_cross_call(mask, IPI_CPU_BACKTRACE);
++ __smp_cross_call(mask, IPI_CPU_BACKTRACE);
+ }
+
+ void arch_trigger_cpumask_backtrace(const cpumask_t *mask, bool exclude_self)
+diff --git a/arch/arm/mach-exynos/suspend.c b/arch/arm/mach-exynos/suspend.c
+index 9afb0c69db34..5bc2d76deccb 100644
+--- a/arch/arm/mach-exynos/suspend.c
++++ b/arch/arm/mach-exynos/suspend.c
+@@ -444,8 +444,27 @@ early_wakeup:
+
+ static void exynos5420_prepare_pm_resume(void)
+ {
++ unsigned int mpidr, cluster;
++
++ mpidr = read_cpuid_mpidr();
++ cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
++
+ if (IS_ENABLED(CONFIG_EXYNOS5420_MCPM))
+ WARN_ON(mcpm_cpu_powered_up());
++
++ if (IS_ENABLED(CONFIG_HW_PERF_EVENTS) && cluster != 0) {
++ /*
++ * When system is resumed on the LITTLE/KFC core (cluster 1),
++ * the DSCR is not properly updated until the power is turned
++ * on also for the cluster 0. Enable it for a while to
++ * propagate the SPNIDEN and SPIDEN signals from Secure JTAG
++ * block and avoid undefined instruction issue on CP14 reset.
++ */
++ pmu_raw_writel(S5P_CORE_LOCAL_PWR_EN,
++ EXYNOS_COMMON_CONFIGURATION(0));
++ pmu_raw_writel(0,
++ EXYNOS_COMMON_CONFIGURATION(0));
++ }
+ }
+
+ static void exynos5420_pm_resume(void)
+diff --git a/arch/arm/mach-omap2/pm33xx-core.c b/arch/arm/mach-omap2/pm33xx-core.c
+index 724cf5774a6c..c93b6efd565f 100644
+--- a/arch/arm/mach-omap2/pm33xx-core.c
++++ b/arch/arm/mach-omap2/pm33xx-core.c
+@@ -51,10 +51,12 @@ static int amx3_common_init(void)
+
+ /* CEFUSE domain can be turned off post bootup */
+ cefuse_pwrdm = pwrdm_lookup("cefuse_pwrdm");
+- if (cefuse_pwrdm)
+- omap_set_pwrdm_state(cefuse_pwrdm, PWRDM_POWER_OFF);
+- else
++ if (!cefuse_pwrdm)
+ pr_err("PM: Failed to get cefuse_pwrdm\n");
++ else if (omap_type() != OMAP2_DEVICE_TYPE_GP)
++ pr_info("PM: Leaving EFUSE power domain active\n");
++ else
++ omap_set_pwrdm_state(cefuse_pwrdm, PWRDM_POWER_OFF);
+
+ return 0;
+ }
+diff --git a/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c b/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c
+index dc526ef2e9b3..ee949255ced3 100644
+--- a/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c
++++ b/arch/arm/mach-shmobile/regulator-quirk-rcar-gen2.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0
+ /*
+- * R-Car Generation 2 da9063/da9210 regulator quirk
++ * R-Car Generation 2 da9063(L)/da9210 regulator quirk
+ *
+ * Certain Gen2 development boards have an da9063 and one or more da9210
+ * regulators. All of these regulators have their interrupt request lines
+@@ -65,6 +65,7 @@ static struct i2c_msg da9210_msg = {
+
+ static const struct of_device_id rcar_gen2_quirk_match[] = {
+ { .compatible = "dlg,da9063", .data = &da9063_msg },
++ { .compatible = "dlg,da9063l", .data = &da9063_msg },
+ { .compatible = "dlg,da9210", .data = &da9210_msg },
+ {},
+ };
+@@ -147,6 +148,7 @@ static int __init rcar_gen2_regulator_quirk(void)
+
+ if (!of_machine_is_compatible("renesas,koelsch") &&
+ !of_machine_is_compatible("renesas,lager") &&
++ !of_machine_is_compatible("renesas,porter") &&
+ !of_machine_is_compatible("renesas,stout") &&
+ !of_machine_is_compatible("renesas,gose"))
+ return -ENODEV;
+@@ -210,7 +212,7 @@ static int __init rcar_gen2_regulator_quirk(void)
+ goto err_free;
+ }
+
+- pr_info("IRQ2 is asserted, installing da9063/da9210 regulator quirk\n");
++ pr_info("IRQ2 is asserted, installing regulator quirk\n");
+
+ bus_register_notifier(&i2c_bus_type, &regulator_quirk_nb);
+ return 0;
+diff --git a/arch/arm64/boot/dts/freescale/imx8mq.dtsi b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
+index 9155bd4784eb..aa051af23bd0 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mq.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
+@@ -240,7 +240,7 @@
+ };
+
+ iomuxc_gpr: syscon@30340000 {
+- compatible = "fsl,imx8mq-iomuxc-gpr", "syscon";
++ compatible = "fsl,imx8mq-iomuxc-gpr", "fsl,imx6q-iomuxc-gpr", "syscon";
+ reg = <0x30340000 0x10000>;
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/qcs404-evb.dtsi b/arch/arm64/boot/dts/qcom/qcs404-evb.dtsi
+index 50b3589c7f15..536f735243d2 100644
+--- a/arch/arm64/boot/dts/qcom/qcs404-evb.dtsi
++++ b/arch/arm64/boot/dts/qcom/qcs404-evb.dtsi
+@@ -37,18 +37,18 @@
+ pms405-regulators {
+ compatible = "qcom,rpm-pms405-regulators";
+
+- vdd-s1-supply = <&vph_pwr>;
+- vdd-s2-supply = <&vph_pwr>;
+- vdd-s3-supply = <&vph_pwr>;
+- vdd-s4-supply = <&vph_pwr>;
+- vdd-s5-supply = <&vph_pwr>;
+- vdd-l1-l2-supply = <&vreg_s5_1p35>;
+- vdd-l3-l8-supply = <&vreg_s5_1p35>;
+- vdd-l4-supply = <&vreg_s5_1p35>;
+- vdd-l5-l6-supply = <&vreg_s4_1p8>;
+- vdd-l7-supply = <&vph_pwr>;
+- vdd-l9-supply = <&vreg_s5_1p35>;
+- vdd-l10-l11-l12-l13-supply = <&vph_pwr>;
++ vdd_s1-supply = <&vph_pwr>;
++ vdd_s2-supply = <&vph_pwr>;
++ vdd_s3-supply = <&vph_pwr>;
++ vdd_s4-supply = <&vph_pwr>;
++ vdd_s5-supply = <&vph_pwr>;
++ vdd_l1_l2-supply = <&vreg_s5_1p35>;
++ vdd_l3_l8-supply = <&vreg_s5_1p35>;
++ vdd_l4-supply = <&vreg_s5_1p35>;
++ vdd_l5_l6-supply = <&vreg_s4_1p8>;
++ vdd_l7-supply = <&vph_pwr>;
++ vdd_l9-supply = <&vreg_s5_1p35>;
++ vdd_l10_l11_l12_l13-supply = <&vph_pwr>;
+
+ vreg_s4_1p8: s4 {
+ regulator-min-microvolt = <1728000>;
+@@ -56,8 +56,8 @@
+ };
+
+ vreg_s5_1p35: s5 {
+- regulator-min-microvolt = <>;
+- regulator-max-microvolt = <>;
++ regulator-min-microvolt = <1352000>;
++ regulator-max-microvolt = <1352000>;
+ };
+
+ vreg_l1_1p3: l1 {
+diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
+index 2d9c39033c1a..32fb03503b0b 100644
+--- a/arch/arm64/configs/defconfig
++++ b/arch/arm64/configs/defconfig
+@@ -222,10 +222,10 @@ CONFIG_BLK_DEV_SD=y
+ CONFIG_SCSI_SAS_ATA=y
+ CONFIG_SCSI_HISI_SAS=y
+ CONFIG_SCSI_HISI_SAS_PCI=y
+-CONFIG_SCSI_UFSHCD=m
+-CONFIG_SCSI_UFSHCD_PLATFORM=m
++CONFIG_SCSI_UFSHCD=y
++CONFIG_SCSI_UFSHCD_PLATFORM=y
+ CONFIG_SCSI_UFS_QCOM=m
+-CONFIG_SCSI_UFS_HISI=m
++CONFIG_SCSI_UFS_HISI=y
+ CONFIG_ATA=y
+ CONFIG_SATA_AHCI=y
+ CONFIG_SATA_AHCI_PLATFORM=y
+diff --git a/arch/mips/kernel/prom.c b/arch/mips/kernel/prom.c
+index 93b8e0b4332f..b9d6c6ec4177 100644
+--- a/arch/mips/kernel/prom.c
++++ b/arch/mips/kernel/prom.c
+@@ -41,7 +41,19 @@ char *mips_get_machine_name(void)
+ #ifdef CONFIG_USE_OF
+ void __init early_init_dt_add_memory_arch(u64 base, u64 size)
+ {
+- return add_memory_region(base, size, BOOT_MEM_RAM);
++ if (base >= PHYS_ADDR_MAX) {
++ pr_warn("Trying to add an invalid memory region, skipped\n");
++ return;
++ }
++
++ /* Truncate the passed memory region instead of type casting */
++ if (base + size - 1 >= PHYS_ADDR_MAX || base + size < base) {
++ pr_warn("Truncate memory region %llx @ %llx to size %llx\n",
++ size, base, PHYS_ADDR_MAX - base);
++ size = PHYS_ADDR_MAX - base;
++ }
++
++ add_memory_region(base, size, BOOT_MEM_RAM);
+ }
+
+ int __init early_init_dt_reserve_memory_arch(phys_addr_t base,
+diff --git a/arch/powerpc/include/asm/drmem.h b/arch/powerpc/include/asm/drmem.h
+index 7c1d8e74b25d..7f3279b014db 100644
+--- a/arch/powerpc/include/asm/drmem.h
++++ b/arch/powerpc/include/asm/drmem.h
+@@ -17,6 +17,9 @@ struct drmem_lmb {
+ u32 drc_index;
+ u32 aa_index;
+ u32 flags;
++#ifdef CONFIG_MEMORY_HOTPLUG
++ int nid;
++#endif
+ };
+
+ struct drmem_lmb_info {
+@@ -104,4 +107,22 @@ static inline void invalidate_lmb_associativity_index(struct drmem_lmb *lmb)
+ lmb->aa_index = 0xffffffff;
+ }
+
++#ifdef CONFIG_MEMORY_HOTPLUG
++static inline void lmb_set_nid(struct drmem_lmb *lmb)
++{
++ lmb->nid = memory_add_physaddr_to_nid(lmb->base_addr);
++}
++static inline void lmb_clear_nid(struct drmem_lmb *lmb)
++{
++ lmb->nid = -1;
++}
++#else
++static inline void lmb_set_nid(struct drmem_lmb *lmb)
++{
++}
++static inline void lmb_clear_nid(struct drmem_lmb *lmb)
++{
++}
++#endif
++
+ #endif /* _ASM_POWERPC_LMB_H */
+diff --git a/arch/powerpc/mm/drmem.c b/arch/powerpc/mm/drmem.c
+index 3f1803672c9b..641891df2046 100644
+--- a/arch/powerpc/mm/drmem.c
++++ b/arch/powerpc/mm/drmem.c
+@@ -366,8 +366,10 @@ static void __init init_drmem_v1_lmbs(const __be32 *prop)
+ if (!drmem_info->lmbs)
+ return;
+
+- for_each_drmem_lmb(lmb)
++ for_each_drmem_lmb(lmb) {
+ read_drconf_v1_cell(lmb, &prop);
++ lmb_set_nid(lmb);
++ }
+ }
+
+ static void __init init_drmem_v2_lmbs(const __be32 *prop)
+@@ -412,6 +414,8 @@ static void __init init_drmem_v2_lmbs(const __be32 *prop)
+
+ lmb->aa_index = dr_cell.aa_index;
+ lmb->flags = dr_cell.flags;
++
++ lmb_set_nid(lmb);
+ }
+ }
+ }
+diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c
+index d291b618a559..47087832f8b2 100644
+--- a/arch/powerpc/platforms/pseries/hotplug-memory.c
++++ b/arch/powerpc/platforms/pseries/hotplug-memory.c
+@@ -379,7 +379,7 @@ static int dlpar_add_lmb(struct drmem_lmb *);
+ static int dlpar_remove_lmb(struct drmem_lmb *lmb)
+ {
+ unsigned long block_sz;
+- int nid, rc;
++ int rc;
+
+ if (!lmb_is_removable(lmb))
+ return -EINVAL;
+@@ -389,14 +389,14 @@ static int dlpar_remove_lmb(struct drmem_lmb *lmb)
+ return rc;
+
+ block_sz = pseries_memory_block_size();
+- nid = memory_add_physaddr_to_nid(lmb->base_addr);
+
+- __remove_memory(nid, lmb->base_addr, block_sz);
++ __remove_memory(lmb->nid, lmb->base_addr, block_sz);
+
+ /* Update memory regions for memory remove */
+ memblock_remove(lmb->base_addr, block_sz);
+
+ invalidate_lmb_associativity_index(lmb);
++ lmb_clear_nid(lmb);
+ lmb->flags &= ~DRCONF_MEM_ASSIGNED;
+
+ return 0;
+@@ -653,7 +653,7 @@ static int dlpar_memory_remove_by_ic(u32 lmbs_to_remove, u32 drc_index)
+ static int dlpar_add_lmb(struct drmem_lmb *lmb)
+ {
+ unsigned long block_sz;
+- int nid, rc;
++ int rc;
+
+ if (lmb->flags & DRCONF_MEM_ASSIGNED)
+ return -EINVAL;
+@@ -664,13 +664,11 @@ static int dlpar_add_lmb(struct drmem_lmb *lmb)
+ return rc;
+ }
+
++ lmb_set_nid(lmb);
+ block_sz = memory_block_size_bytes();
+
+- /* Find the node id for this address */
+- nid = memory_add_physaddr_to_nid(lmb->base_addr);
+-
+ /* Add the memory */
+- rc = __add_memory(nid, lmb->base_addr, block_sz);
++ rc = __add_memory(lmb->nid, lmb->base_addr, block_sz);
+ if (rc) {
+ invalidate_lmb_associativity_index(lmb);
+ return rc;
+@@ -678,8 +676,9 @@ static int dlpar_add_lmb(struct drmem_lmb *lmb)
+
+ rc = dlpar_online_lmb(lmb);
+ if (rc) {
+- __remove_memory(nid, lmb->base_addr, block_sz);
++ __remove_memory(lmb->nid, lmb->base_addr, block_sz);
+ invalidate_lmb_associativity_index(lmb);
++ lmb_clear_nid(lmb);
+ } else {
+ lmb->flags |= DRCONF_MEM_ASSIGNED;
+ }
+diff --git a/arch/um/kernel/time.c b/arch/um/kernel/time.c
+index 052de4c8acb2..0c572a48158e 100644
+--- a/arch/um/kernel/time.c
++++ b/arch/um/kernel/time.c
+@@ -56,7 +56,7 @@ static int itimer_one_shot(struct clock_event_device *evt)
+ static struct clock_event_device timer_clockevent = {
+ .name = "posix-timer",
+ .rating = 250,
+- .cpumask = cpu_all_mask,
++ .cpumask = cpu_possible_mask,
+ .features = CLOCK_EVT_FEAT_PERIODIC |
+ CLOCK_EVT_FEAT_ONESHOT,
+ .set_state_shutdown = itimer_shutdown,
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index d35f4775d5f1..82dad001d1ea 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -3189,7 +3189,7 @@ static int intel_pmu_hw_config(struct perf_event *event)
+ return ret;
+
+ if (event->attr.precise_ip) {
+- if (!(event->attr.freq || event->attr.wakeup_events)) {
++ if (!(event->attr.freq || (event->attr.wakeup_events && !event->attr.watermark))) {
+ event->hw.flags |= PERF_X86_EVENT_AUTO_RELOAD;
+ if (!(event->attr.sample_type &
+ ~intel_pmu_large_pebs_flags(event)))
+diff --git a/arch/x86/pci/irq.c b/arch/x86/pci/irq.c
+index 52e55108404e..d3a73f9335e1 100644
+--- a/arch/x86/pci/irq.c
++++ b/arch/x86/pci/irq.c
+@@ -1119,6 +1119,8 @@ static const struct dmi_system_id pciirq_dmi_table[] __initconst = {
+
+ void __init pcibios_irq_init(void)
+ {
++ struct irq_routing_table *rtable = NULL;
++
+ DBG(KERN_DEBUG "PCI: IRQ init\n");
+
+ if (raw_pci_ops == NULL)
+@@ -1129,8 +1131,10 @@ void __init pcibios_irq_init(void)
+ pirq_table = pirq_find_routing_table();
+
+ #ifdef CONFIG_PCI_BIOS
+- if (!pirq_table && (pci_probe & PCI_BIOS_IRQ_SCAN))
++ if (!pirq_table && (pci_probe & PCI_BIOS_IRQ_SCAN)) {
+ pirq_table = pcibios_get_irq_routing_table();
++ rtable = pirq_table;
++ }
+ #endif
+ if (pirq_table) {
+ pirq_peer_trick();
+@@ -1145,8 +1149,10 @@ void __init pcibios_irq_init(void)
+ * If we're using the I/O APIC, avoid using the PCI IRQ
+ * routing table
+ */
+- if (io_apic_assign_pci_irqs)
++ if (io_apic_assign_pci_irqs) {
++ kfree(rtable);
+ pirq_table = NULL;
++ }
+ }
+
+ x86_init.pci.fixup_irqs();
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 5ba1e0d841b4..679d608347ea 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -2545,6 +2545,8 @@ static void bfq_arm_slice_timer(struct bfq_data *bfqd)
+ if (BFQQ_SEEKY(bfqq) && bfqq->wr_coeff == 1 &&
+ bfq_symmetric_scenario(bfqd))
+ sl = min_t(u64, sl, BFQ_MIN_TT);
++ else if (bfqq->wr_coeff > 1)
++ sl = max_t(u32, sl, 20ULL * NSEC_PER_MSEC);
+
+ bfqd->last_idling_start = ktime_get();
+ hrtimer_start(&bfqd->idle_slice_timer, ns_to_ktime(sl),
+diff --git a/block/blk-core.c b/block/blk-core.c
+index b375cfea024c..2dd94b3e9ece 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -237,7 +237,6 @@ void blk_sync_queue(struct request_queue *q)
+ struct blk_mq_hw_ctx *hctx;
+ int i;
+
+- cancel_delayed_work_sync(&q->requeue_work);
+ queue_for_each_hw_ctx(q, hctx, i)
+ cancel_delayed_work_sync(&hctx->run_work);
+ }
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 8a41cc5974fe..11efca3534ad 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -2666,6 +2666,8 @@ void blk_mq_release(struct request_queue *q)
+ struct blk_mq_hw_ctx *hctx;
+ unsigned int i;
+
++ cancel_delayed_work_sync(&q->requeue_work);
++
+ /* hctx kobj stays in hctx */
+ queue_for_each_hw_ctx(q, hctx, i) {
+ if (!hctx)
+diff --git a/drivers/clk/rockchip/clk-rk3288.c b/drivers/clk/rockchip/clk-rk3288.c
+index 355d6a3611db..18460e0993bc 100644
+--- a/drivers/clk/rockchip/clk-rk3288.c
++++ b/drivers/clk/rockchip/clk-rk3288.c
+@@ -856,6 +856,9 @@ static const int rk3288_saved_cru_reg_ids[] = {
+ RK3288_CLKSEL_CON(10),
+ RK3288_CLKSEL_CON(33),
+ RK3288_CLKSEL_CON(37),
++
++ /* We turn aclk_dmac1 on for suspend; this will restore it */
++ RK3288_CLKGATE_CON(10),
+ };
+
+ static u32 rk3288_saved_cru_regs[ARRAY_SIZE(rk3288_saved_cru_reg_ids)];
+@@ -871,6 +874,14 @@ static int rk3288_clk_suspend(void)
+ readl_relaxed(rk3288_cru_base + reg_id);
+ }
+
++ /*
++ * Going into deep sleep (specifically setting PMU_CLR_DMA in
++ * RK3288_PMU_PWRMODE_CON1) appears to fail unless
++ * "aclk_dmac1" is on.
++ */
++ writel_relaxed(1 << (12 + 16),
++ rk3288_cru_base + RK3288_CLKGATE_CON(10));
++
+ /*
+ * Switch PLLs other than DPLL (for SDRAM) to slow mode to
+ * avoid crashes on resume. The Mask ROM on the system will
+diff --git a/drivers/dma/idma64.c b/drivers/dma/idma64.c
+index 0baf9797cc09..83796a33dc16 100644
+--- a/drivers/dma/idma64.c
++++ b/drivers/dma/idma64.c
+@@ -592,7 +592,7 @@ static int idma64_probe(struct idma64_chip *chip)
+ idma64->dma.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+ idma64->dma.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
+
+- idma64->dma.dev = chip->dev;
++ idma64->dma.dev = chip->sysdev;
+
+ dma_set_max_seg_size(idma64->dma.dev, IDMA64C_CTLH_BLOCK_TS_MASK);
+
+@@ -632,6 +632,7 @@ static int idma64_platform_probe(struct platform_device *pdev)
+ {
+ struct idma64_chip *chip;
+ struct device *dev = &pdev->dev;
++ struct device *sysdev = dev->parent;
+ struct resource *mem;
+ int ret;
+
+@@ -648,11 +649,12 @@ static int idma64_platform_probe(struct platform_device *pdev)
+ if (IS_ERR(chip->regs))
+ return PTR_ERR(chip->regs);
+
+- ret = dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
++ ret = dma_coerce_mask_and_coherent(sysdev, DMA_BIT_MASK(64));
+ if (ret)
+ return ret;
+
+ chip->dev = dev;
++ chip->sysdev = sysdev;
+
+ ret = idma64_probe(chip);
+ if (ret)
+diff --git a/drivers/dma/idma64.h b/drivers/dma/idma64.h
+index 6b816878e5e7..baa32e1425de 100644
+--- a/drivers/dma/idma64.h
++++ b/drivers/dma/idma64.h
+@@ -216,12 +216,14 @@ static inline void idma64_writel(struct idma64 *idma64, int offset, u32 value)
+ /**
+ * struct idma64_chip - representation of iDMA 64-bit controller hardware
+ * @dev: struct device of the DMA controller
++ * @sysdev: struct device of the physical device that does DMA
+ * @irq: irq line
+ * @regs: memory mapped I/O space
+ * @idma64: struct idma64 that is filed by idma64_probe()
+ */
+ struct idma64_chip {
+ struct device *dev;
++ struct device *sysdev;
+ int irq;
+ void __iomem *regs;
+ struct idma64 *idma64;
+diff --git a/drivers/edac/Kconfig b/drivers/edac/Kconfig
+index 47eb4d13ed5f..5e2e0348d460 100644
+--- a/drivers/edac/Kconfig
++++ b/drivers/edac/Kconfig
+@@ -263,8 +263,8 @@ config EDAC_PND2
+ micro-server but may appear on others in the future.
+
+ config EDAC_MPC85XX
+- tristate "Freescale MPC83xx / MPC85xx"
+- depends on FSL_SOC
++ bool "Freescale MPC83xx / MPC85xx"
++ depends on FSL_SOC && EDAC=y
+ help
+ Support for error detection and correction on the Freescale
+ MPC8349, MPC8560, MPC8540, MPC8548, T4240
+diff --git a/drivers/gpio/gpio-omap.c b/drivers/gpio/gpio-omap.c
+index 7f33024b6d83..fafd79438bbf 100644
+--- a/drivers/gpio/gpio-omap.c
++++ b/drivers/gpio/gpio-omap.c
+@@ -353,6 +353,22 @@ static void omap_clear_gpio_debounce(struct gpio_bank *bank, unsigned offset)
+ }
+ }
+
++/*
++ * Off mode wake-up capable GPIOs in bank(s) that are in the wakeup domain.
++ * See TRM section for GPIO for "Wake-Up Generation" for the list of GPIOs
++ * in wakeup domain. If bank->non_wakeup_gpios is not configured, assume none
++ * are capable waking up the system from off mode.
++ */
++static bool omap_gpio_is_off_wakeup_capable(struct gpio_bank *bank, u32 gpio_mask)
++{
++ u32 no_wake = bank->non_wakeup_gpios;
++
++ if (no_wake)
++ return !!(~no_wake & gpio_mask);
++
++ return false;
++}
++
+ static inline void omap_set_gpio_trigger(struct gpio_bank *bank, int gpio,
+ unsigned trigger)
+ {
+@@ -384,13 +400,7 @@ static inline void omap_set_gpio_trigger(struct gpio_bank *bank, int gpio,
+ }
+
+ /* This part needs to be executed always for OMAP{34xx, 44xx} */
+- if (!bank->regs->irqctrl) {
+- /* On omap24xx proceed only when valid GPIO bit is set */
+- if (bank->non_wakeup_gpios) {
+- if (!(bank->non_wakeup_gpios & gpio_bit))
+- goto exit;
+- }
+-
++ if (!bank->regs->irqctrl && !omap_gpio_is_off_wakeup_capable(bank, gpio)) {
+ /*
+ * Log the edge gpio and manually trigger the IRQ
+ * after resume if the input level changes
+@@ -403,7 +413,6 @@ static inline void omap_set_gpio_trigger(struct gpio_bank *bank, int gpio,
+ bank->enabled_non_wakeup_gpios &= ~gpio_bit;
+ }
+
+-exit:
+ bank->level_mask =
+ readl_relaxed(bank->base + bank->regs->leveldetect0) |
+ readl_relaxed(bank->base + bank->regs->leveldetect1);
+@@ -1444,7 +1453,10 @@ static void omap_gpio_restore_context(struct gpio_bank *bank);
+ static void omap_gpio_idle(struct gpio_bank *bank, bool may_lose_context)
+ {
+ struct device *dev = bank->chip.parent;
+- u32 l1 = 0, l2 = 0;
++ void __iomem *base = bank->base;
++ u32 nowake;
++
++ bank->saved_datain = readl_relaxed(base + bank->regs->datain);
+
+ if (bank->funcs.idle_enable_level_quirk)
+ bank->funcs.idle_enable_level_quirk(bank);
+@@ -1456,20 +1468,15 @@ static void omap_gpio_idle(struct gpio_bank *bank, bool may_lose_context)
+ goto update_gpio_context_count;
+
+ /*
+- * If going to OFF, remove triggering for all
++ * If going to OFF, remove triggering for all wkup domain
+ * non-wakeup GPIOs. Otherwise spurious IRQs will be
+ * generated. See OMAP2420 Errata item 1.101.
+ */
+- bank->saved_datain = readl_relaxed(bank->base +
+- bank->regs->datain);
+- l1 = bank->context.fallingdetect;
+- l2 = bank->context.risingdetect;
+-
+- l1 &= ~bank->enabled_non_wakeup_gpios;
+- l2 &= ~bank->enabled_non_wakeup_gpios;
+-
+- writel_relaxed(l1, bank->base + bank->regs->fallingdetect);
+- writel_relaxed(l2, bank->base + bank->regs->risingdetect);
++ if (!bank->loses_context && bank->enabled_non_wakeup_gpios) {
++ nowake = bank->enabled_non_wakeup_gpios;
++ omap_gpio_rmw(base, bank->regs->fallingdetect, nowake, ~nowake);
++ omap_gpio_rmw(base, bank->regs->risingdetect, nowake, ~nowake);
++ }
+
+ bank->workaround_enabled = true;
+
+@@ -1518,6 +1525,12 @@ static void omap_gpio_unidle(struct gpio_bank *bank)
+ return;
+ }
+ }
++ } else {
++ /* Restore changes done for OMAP2420 errata 1.101 */
++ writel_relaxed(bank->context.fallingdetect,
++ bank->base + bank->regs->fallingdetect);
++ writel_relaxed(bank->context.risingdetect,
++ bank->base + bank->regs->risingdetect);
+ }
+
+ if (!bank->workaround_enabled)
+diff --git a/drivers/gpio/gpio-vf610.c b/drivers/gpio/gpio-vf610.c
+index 541fa6ac399d..7e9451f47efe 100644
+--- a/drivers/gpio/gpio-vf610.c
++++ b/drivers/gpio/gpio-vf610.c
+@@ -29,6 +29,7 @@ struct fsl_gpio_soc_data {
+
+ struct vf610_gpio_port {
+ struct gpio_chip gc;
++ struct irq_chip ic;
+ void __iomem *base;
+ void __iomem *gpio_base;
+ const struct fsl_gpio_soc_data *sdata;
+@@ -60,8 +61,6 @@ struct vf610_gpio_port {
+ #define PORT_INT_EITHER_EDGE 0xb
+ #define PORT_INT_LOGIC_ONE 0xc
+
+-static struct irq_chip vf610_gpio_irq_chip;
+-
+ static const struct fsl_gpio_soc_data imx_data = {
+ .have_paddr = true,
+ };
+@@ -237,15 +236,6 @@ static int vf610_gpio_irq_set_wake(struct irq_data *d, u32 enable)
+ return 0;
+ }
+
+-static struct irq_chip vf610_gpio_irq_chip = {
+- .name = "gpio-vf610",
+- .irq_ack = vf610_gpio_irq_ack,
+- .irq_mask = vf610_gpio_irq_mask,
+- .irq_unmask = vf610_gpio_irq_unmask,
+- .irq_set_type = vf610_gpio_irq_set_type,
+- .irq_set_wake = vf610_gpio_irq_set_wake,
+-};
+-
+ static int vf610_gpio_probe(struct platform_device *pdev)
+ {
+ struct device *dev = &pdev->dev;
+@@ -253,6 +243,7 @@ static int vf610_gpio_probe(struct platform_device *pdev)
+ struct vf610_gpio_port *port;
+ struct resource *iores;
+ struct gpio_chip *gc;
++ struct irq_chip *ic;
+ int i;
+ int ret;
+
+@@ -316,6 +307,14 @@ static int vf610_gpio_probe(struct platform_device *pdev)
+ gc->direction_output = vf610_gpio_direction_output;
+ gc->set = vf610_gpio_set;
+
++ ic = &port->ic;
++ ic->name = "gpio-vf610";
++ ic->irq_ack = vf610_gpio_irq_ack;
++ ic->irq_mask = vf610_gpio_irq_mask;
++ ic->irq_unmask = vf610_gpio_irq_unmask;
++ ic->irq_set_type = vf610_gpio_irq_set_type;
++ ic->irq_set_wake = vf610_gpio_irq_set_wake;
++
+ ret = gpiochip_add_data(gc, port);
+ if (ret < 0)
+ return ret;
+@@ -327,14 +326,13 @@ static int vf610_gpio_probe(struct platform_device *pdev)
+ /* Clear the interrupt status register for all GPIO's */
+ vf610_gpio_writel(~0, port->base + PORT_ISFR);
+
+- ret = gpiochip_irqchip_add(gc, &vf610_gpio_irq_chip, 0,
+- handle_edge_irq, IRQ_TYPE_NONE);
++ ret = gpiochip_irqchip_add(gc, ic, 0, handle_edge_irq, IRQ_TYPE_NONE);
+ if (ret) {
+ dev_err(dev, "failed to add irqchip\n");
+ gpiochip_remove(gc);
+ return ret;
+ }
+- gpiochip_set_chained_irqchip(gc, &vf610_gpio_irq_chip, port->irq,
++ gpiochip_set_chained_irqchip(gc, ic, port->irq,
+ vf610_gpio_irq_handler);
+
+ return 0;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+index 419e8de8c0f4..6072636da388 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+@@ -1399,6 +1399,15 @@ static enum dc_status enable_link_dp(
+ /* get link settings for video mode timing */
+ decide_link_settings(stream, &link_settings);
+
++ /* If link settings are different than current and link already enabled
++ * then need to disable before programming to new rate.
++ */
++ if (link->link_status.link_active &&
++ (link->cur_link_settings.lane_count != link_settings.lane_count ||
++ link->cur_link_settings.link_rate != link_settings.link_rate)) {
++ dp_disable_link_phy(link, pipe_ctx->stream->signal);
++ }
++
+ pipe_ctx->stream_res.pix_clk_params.requested_sym_clk =
+ link_settings.link_rate * LINK_RATE_REF_FREQ_IN_KHZ;
+ state->dccg->funcs->update_clocks(state->dccg, state, false);
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp.c
+index cd1ebe57ed59..1951f9276e41 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_dpp.c
+@@ -392,6 +392,10 @@ void dpp1_cnv_setup (
+ default:
+ break;
+ }
++
++ /* Set default color space based on format if none is given. */
++ color_space = input_color_space ? input_color_space : color_space;
++
+ REG_SET(CNVC_SURFACE_PIXEL_FORMAT, 0,
+ CNVC_SURFACE_PIXEL_FORMAT, pixel_format);
+ REG_UPDATE(FORMAT_CONTROL, FORMAT_CONTROL__ALPHA_EN, alpha_en);
+@@ -403,7 +407,7 @@ void dpp1_cnv_setup (
+ for (i = 0; i < 12; i++)
+ tbl_entry.regval[i] = input_csc_color_matrix.matrix[i];
+
+- tbl_entry.color_space = input_color_space;
++ tbl_entry.color_space = color_space;
+
+ if (color_space >= COLOR_SPACE_YCBCR601)
+ select = INPUT_CSC_SELECT_ICSC;
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+index 5b551a544e82..1fac86d3032d 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+@@ -1936,7 +1936,7 @@ static void update_dpp(struct dpp *dpp, struct dc_plane_state *plane_state)
+ plane_state->format,
+ EXPANSION_MODE_ZERO,
+ plane_state->input_csc_color_matrix,
+- COLOR_SPACE_YCBCR601_LIMITED);
++ plane_state->color_space);
+
+ //set scale and bias registers
+ build_prescale_params(&bns_params, plane_state);
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+index ec2ca71e1323..c532e9c9e491 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+@@ -748,11 +748,11 @@ static void adv7511_mode_set(struct adv7511 *adv7511,
+ vsync_polarity = 1;
+ }
+
+- if (mode->vrefresh <= 24000)
++ if (drm_mode_vrefresh(mode) <= 24)
+ low_refresh_rate = ADV7511_LOW_REFRESH_RATE_24HZ;
+- else if (mode->vrefresh <= 25000)
++ else if (drm_mode_vrefresh(mode) <= 25)
+ low_refresh_rate = ADV7511_LOW_REFRESH_RATE_25HZ;
+- else if (mode->vrefresh <= 30000)
++ else if (drm_mode_vrefresh(mode) <= 30)
+ low_refresh_rate = ADV7511_LOW_REFRESH_RATE_30HZ;
+ else
+ low_refresh_rate = ADV7511_LOW_REFRESH_RATE_NONE;
+diff --git a/drivers/gpu/drm/drm_ioctl.c b/drivers/gpu/drm/drm_ioctl.c
+index 687943df58e1..ab5692104ea0 100644
+--- a/drivers/gpu/drm/drm_ioctl.c
++++ b/drivers/gpu/drm/drm_ioctl.c
+@@ -508,13 +508,6 @@ int drm_version(struct drm_device *dev, void *data,
+ return err;
+ }
+
+-static inline bool
+-drm_render_driver_and_ioctl(const struct drm_device *dev, u32 flags)
+-{
+- return drm_core_check_feature(dev, DRIVER_RENDER) &&
+- (flags & DRM_RENDER_ALLOW);
+-}
+-
+ /**
+ * drm_ioctl_permit - Check ioctl permissions against caller
+ *
+@@ -529,19 +522,14 @@ drm_render_driver_and_ioctl(const struct drm_device *dev, u32 flags)
+ */
+ int drm_ioctl_permit(u32 flags, struct drm_file *file_priv)
+ {
+- const struct drm_device *dev = file_priv->minor->dev;
+-
+ /* ROOT_ONLY is only for CAP_SYS_ADMIN */
+ if (unlikely((flags & DRM_ROOT_ONLY) && !capable(CAP_SYS_ADMIN)))
+ return -EACCES;
+
+- /* AUTH is only for master ... */
+- if (unlikely((flags & DRM_AUTH) && drm_is_primary_client(file_priv))) {
+- /* authenticated ones, or render capable on DRM_RENDER_ALLOW.
*/ +- if (!file_priv->authenticated && +- !drm_render_driver_and_ioctl(dev, flags)) +- return -EACCES; +- } ++ /* AUTH is only for authenticated or render client */ ++ if (unlikely((flags & DRM_AUTH) && !drm_is_render_client(file_priv) && ++ !file_priv->authenticated)) ++ return -EACCES; + + /* MASTER is only for master or control clients */ + if (unlikely((flags & DRM_MASTER) && +diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c +index 18ca651ab942..23de4d1b7b1a 100644 +--- a/drivers/gpu/drm/msm/msm_gem.c ++++ b/drivers/gpu/drm/msm/msm_gem.c +@@ -805,7 +805,8 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m) + seq_puts(m, " vmas:"); + + list_for_each_entry(vma, &msm_obj->vmas, list) +- seq_printf(m, " [%s: %08llx,%s,inuse=%d]", vma->aspace->name, ++ seq_printf(m, " [%s: %08llx,%s,inuse=%d]", ++ vma->aspace != NULL ? vma->aspace->name : NULL, + vma->iova, vma->mapped ? "mapped" : "unmapped", + vma->inuse); + +diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig +index db28012dbf54..00cd9ab8948d 100644 +--- a/drivers/gpu/drm/nouveau/Kconfig ++++ b/drivers/gpu/drm/nouveau/Kconfig +@@ -17,20 +17,9 @@ config DRM_NOUVEAU + select INPUT if ACPI && X86 + select THERMAL if ACPI && X86 + select ACPI_VIDEO if ACPI && X86 +- help +- Choose this option for open-source NVIDIA support. +- +-config NOUVEAU_LEGACY_CTX_SUPPORT +- bool "Nouveau legacy context support" +- depends on DRM_NOUVEAU + select DRM_VM +- default y + help +- There was a version of the nouveau DDX that relied on legacy +- ctx ioctls not erroring out. But that was back in time a long +- ways, so offer a way to disable it now. For uapi compat with +- old nouveau ddx this should be on by default, but modern distros +- should consider turning it off. ++ Choose this option for open-source NVIDIA support. 
+ + config NOUVEAU_PLATFORM_DRIVER + bool "Nouveau (NVIDIA) SoC GPUs" +diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.h b/drivers/gpu/drm/nouveau/dispnv50/disp.h +index 2216c58620c2..7c41b0599d1a 100644 +--- a/drivers/gpu/drm/nouveau/dispnv50/disp.h ++++ b/drivers/gpu/drm/nouveau/dispnv50/disp.h +@@ -41,6 +41,7 @@ struct nv50_disp_interlock { + NV50_DISP_INTERLOCK__SIZE + } type; + u32 data; ++ u32 wimm; + }; + + void corec37d_ntfy_init(struct nouveau_bo *, u32); +diff --git a/drivers/gpu/drm/nouveau/dispnv50/head.c b/drivers/gpu/drm/nouveau/dispnv50/head.c +index 2e7a0c347ddb..06ee23823a68 100644 +--- a/drivers/gpu/drm/nouveau/dispnv50/head.c ++++ b/drivers/gpu/drm/nouveau/dispnv50/head.c +@@ -306,7 +306,7 @@ nv50_head_atomic_check(struct drm_crtc *crtc, struct drm_crtc_state *state) + asyh->set.or = head->func->or != NULL; + } + +- if (asyh->state.mode_changed) ++ if (asyh->state.mode_changed || asyh->state.connectors_changed) + nv50_head_atomic_check_mode(head, asyh); + + if (asyh->state.color_mgmt_changed || +@@ -413,6 +413,7 @@ nv50_head_atomic_duplicate_state(struct drm_crtc *crtc) + asyh->ovly = armh->ovly; + asyh->dither = armh->dither; + asyh->procamp = armh->procamp; ++ asyh->or = armh->or; + asyh->dp = armh->dp; + asyh->clr.mask = 0; + asyh->set.mask = 0; +diff --git a/drivers/gpu/drm/nouveau/dispnv50/wimmc37b.c b/drivers/gpu/drm/nouveau/dispnv50/wimmc37b.c +index 9103b8494279..f7dbd965e4e7 100644 +--- a/drivers/gpu/drm/nouveau/dispnv50/wimmc37b.c ++++ b/drivers/gpu/drm/nouveau/dispnv50/wimmc37b.c +@@ -75,6 +75,7 @@ wimmc37b_init_(const struct nv50_wimm_func *func, struct nouveau_drm *drm, + return ret; + } + ++ wndw->interlock.wimm = wndw->interlock.data; + wndw->immd = func; + return 0; + } +diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndw.c b/drivers/gpu/drm/nouveau/dispnv50/wndw.c +index b95181027b31..471a39a077e5 100644 +--- a/drivers/gpu/drm/nouveau/dispnv50/wndw.c ++++ b/drivers/gpu/drm/nouveau/dispnv50/wndw.c +@@ -149,7 +149,7 @@ 
nv50_wndw_flush_set(struct nv50_wndw *wndw, u32 *interlock, + if (asyw->set.point) { + if (asyw->set.point = false, asyw->set.mask) + interlock[wndw->interlock.type] |= wndw->interlock.data; +- interlock[NV50_DISP_INTERLOCK_WIMM] |= wndw->interlock.data; ++ interlock[NV50_DISP_INTERLOCK_WIMM] |= wndw->interlock.wimm; + + wndw->immd->point(wndw, asyw); + wndw->immd->update(wndw, interlock); +diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c +index 6ab9033f49da..5020265bfbd9 100644 +--- a/drivers/gpu/drm/nouveau/nouveau_drm.c ++++ b/drivers/gpu/drm/nouveau/nouveau_drm.c +@@ -1094,11 +1094,8 @@ nouveau_driver_fops = { + static struct drm_driver + driver_stub = { + .driver_features = +- DRIVER_GEM | DRIVER_MODESET | DRIVER_PRIME | DRIVER_RENDER +-#if defined(CONFIG_NOUVEAU_LEGACY_CTX_SUPPORT) +- | DRIVER_KMS_LEGACY_CONTEXT +-#endif +- , ++ DRIVER_GEM | DRIVER_MODESET | DRIVER_PRIME | DRIVER_RENDER | ++ DRIVER_KMS_LEGACY_CONTEXT, + + .open = nouveau_drm_open, + .postclose = nouveau_drm_postclose, +diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c +index 5f301e632599..818d21bd28d3 100644 +--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c ++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c +@@ -365,8 +365,15 @@ nvkm_dp_train(struct nvkm_dp *dp, u32 dataKBps) + * and it's better to have a failed modeset than that. + */ + for (cfg = nvkm_dp_rates; cfg->rate; cfg++) { +- if (cfg->nr <= outp_nr && cfg->nr <= outp_bw) +- failsafe = cfg; ++ if (cfg->nr <= outp_nr && cfg->nr <= outp_bw) { ++ /* Try to respect sink limits too when selecting ++ * lowest link configuration. 
++ */ ++ if (!failsafe || ++ (cfg->nr <= sink_nr && cfg->bw <= sink_bw)) ++ failsafe = cfg; ++ } ++ + if (failsafe && cfg[1].rate < dataKBps) + break; + } +diff --git a/drivers/gpu/drm/pl111/pl111_display.c b/drivers/gpu/drm/pl111/pl111_display.c +index 754f6b25f265..6d9f78612dee 100644 +--- a/drivers/gpu/drm/pl111/pl111_display.c ++++ b/drivers/gpu/drm/pl111/pl111_display.c +@@ -531,14 +531,15 @@ pl111_init_clock_divider(struct drm_device *drm) + dev_err(drm->dev, "CLCD: unable to get clcdclk.\n"); + return PTR_ERR(parent); + } ++ ++ spin_lock_init(&priv->tim2_lock); ++ + /* If the clock divider is broken, use the parent directly */ + if (priv->variant->broken_clockdivider) { + priv->clk = parent; + return 0; + } + parent_name = __clk_get_name(parent); +- +- spin_lock_init(&priv->tim2_lock); + div->init = &init; + + ret = devm_clk_hw_register(drm->dev, div); +diff --git a/drivers/input/touchscreen/goodix.c b/drivers/input/touchscreen/goodix.c +index f57d82220a88..dd029e689903 100644 +--- a/drivers/input/touchscreen/goodix.c ++++ b/drivers/input/touchscreen/goodix.c +@@ -216,6 +216,7 @@ static const struct goodix_chip_data *goodix_get_chip_data(u16 id) + { + switch (id) { + case 1151: ++ case 5663: + case 5688: + return >1x_chip_data; + +@@ -945,6 +946,7 @@ MODULE_DEVICE_TABLE(acpi, goodix_acpi_match); + #ifdef CONFIG_OF + static const struct of_device_id goodix_of_match[] = { + { .compatible = "goodix,gt1151" }, ++ { .compatible = "goodix,gt5663" }, + { .compatible = "goodix,gt5688" }, + { .compatible = "goodix,gt911" }, + { .compatible = "goodix,gt9110" }, +diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c +index d3880010c6cf..d8b73da6447d 100644 +--- a/drivers/iommu/arm-smmu-v3.c ++++ b/drivers/iommu/arm-smmu-v3.c +@@ -2454,13 +2454,9 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass) + /* Clear CR0 and sync (disables SMMU and queue processing) */ + reg = readl_relaxed(smmu->base + ARM_SMMU_CR0); + if (reg & 
CR0_SMMUEN) { +- if (is_kdump_kernel()) { +- arm_smmu_update_gbpa(smmu, GBPA_ABORT, 0); +- arm_smmu_device_disable(smmu); +- return -EBUSY; +- } +- + dev_warn(smmu->dev, "SMMU currently enabled! Resetting...\n"); ++ WARN_ON(is_kdump_kernel() && !disable_bypass); ++ arm_smmu_update_gbpa(smmu, GBPA_ABORT, 0); + } + + ret = arm_smmu_device_disable(smmu); +@@ -2553,6 +2549,8 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass) + return ret; + } + ++ if (is_kdump_kernel()) ++ enables &= ~(CR0_EVTQEN | CR0_PRIQEN); + + /* Enable the SMMU interface, or ensure bypass */ + if (!bypass || disable_bypass) { +diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c +index 28cb713d728c..0feb3f70da16 100644 +--- a/drivers/iommu/intel-iommu.c ++++ b/drivers/iommu/intel-iommu.c +@@ -3496,7 +3496,13 @@ domains_done: + + #ifdef CONFIG_INTEL_IOMMU_SVM + if (pasid_supported(iommu) && ecap_prs(iommu->ecap)) { ++ /* ++ * Call dmar_alloc_hwirq() with dmar_global_lock held, ++ * could cause possible lock race condition. 
++ */ ++ up_write(&dmar_global_lock); + ret = intel_svm_enable_prq(iommu); ++ down_write(&dmar_global_lock); + if (ret) + goto free_iommu; + } +@@ -3730,6 +3736,7 @@ static void intel_unmap(struct device *dev, dma_addr_t dev_addr, size_t size) + unsigned long iova_pfn; + struct intel_iommu *iommu; + struct page *freelist; ++ struct pci_dev *pdev = NULL; + + if (iommu_no_mapping(dev)) + return; +@@ -3745,11 +3752,14 @@ static void intel_unmap(struct device *dev, dma_addr_t dev_addr, size_t size) + start_pfn = mm_to_dma_pfn(iova_pfn); + last_pfn = start_pfn + nrpages - 1; + ++ if (dev_is_pci(dev)) ++ pdev = to_pci_dev(dev); ++ + dev_dbg(dev, "Device unmapping: pfn %lx-%lx\n", start_pfn, last_pfn); + + freelist = domain_unmap(domain, start_pfn, last_pfn); + +- if (intel_iommu_strict) { ++ if (intel_iommu_strict || (pdev && pdev->untrusted)) { + iommu_flush_iotlb_psi(iommu, domain, start_pfn, + nrpages, !freelist, 0); + /* free iova */ +@@ -4055,9 +4065,7 @@ static void __init init_no_remapping_devices(void) + + /* This IOMMU has *only* gfx devices. 
Either bypass it or + set the gfx_mapped flag, as appropriate */ +- if (dmar_map_gfx) { +- intel_iommu_gfx_mapped = 1; +- } else { ++ if (!dmar_map_gfx) { + drhd->ignored = 1; + for_each_active_dev_scope(drhd->devices, + drhd->devices_cnt, i, dev) +@@ -4896,6 +4904,9 @@ int __init intel_iommu_init(void) + goto out_free_reserved_range; + } + ++ if (dmar_map_gfx) ++ intel_iommu_gfx_mapped = 1; ++ + init_no_remapping_devices(); + + ret = init_dmars(); +diff --git a/drivers/mailbox/stm32-ipcc.c b/drivers/mailbox/stm32-ipcc.c +index 210fe504f5ae..f91dfb1327c7 100644 +--- a/drivers/mailbox/stm32-ipcc.c ++++ b/drivers/mailbox/stm32-ipcc.c +@@ -8,9 +8,9 @@ + #include + #include + #include ++#include + #include + #include +-#include + #include + #include + +@@ -240,9 +240,11 @@ static int stm32_ipcc_probe(struct platform_device *pdev) + + /* irq */ + for (i = 0; i < IPCC_IRQ_NUM; i++) { +- ipcc->irqs[i] = of_irq_get_byname(dev->of_node, irq_name[i]); ++ ipcc->irqs[i] = platform_get_irq_byname(pdev, irq_name[i]); + if (ipcc->irqs[i] < 0) { +- dev_err(dev, "no IRQ specified %s\n", irq_name[i]); ++ if (ipcc->irqs[i] != -EPROBE_DEFER) ++ dev_err(dev, "no IRQ specified %s\n", ++ irq_name[i]); + ret = ipcc->irqs[i]; + goto err_clk; + } +@@ -263,9 +265,10 @@ static int stm32_ipcc_probe(struct platform_device *pdev) + + /* wakeup */ + if (of_property_read_bool(np, "wakeup-source")) { +- ipcc->wkp = of_irq_get_byname(dev->of_node, "wakeup"); ++ ipcc->wkp = platform_get_irq_byname(pdev, "wakeup"); + if (ipcc->wkp < 0) { +- dev_err(dev, "could not get wakeup IRQ\n"); ++ if (ipcc->wkp != -EPROBE_DEFER) ++ dev_err(dev, "could not get wakeup IRQ\n"); + ret = ipcc->wkp; + goto err_clk; + } +diff --git a/drivers/media/platform/atmel/atmel-isc.c b/drivers/media/platform/atmel/atmel-isc.c +index 50178968b8a6..5b0e96bd8c7e 100644 +--- a/drivers/media/platform/atmel/atmel-isc.c ++++ b/drivers/media/platform/atmel/atmel-isc.c +@@ -2065,8 +2065,11 @@ static int isc_parse_dt(struct device *dev, 
struct isc_device *isc) + break; + } + +- subdev_entity->asd = devm_kzalloc(dev, +- sizeof(*subdev_entity->asd), GFP_KERNEL); ++ /* asd will be freed by the subsystem once it's added to the ++ * notifier list ++ */ ++ subdev_entity->asd = kzalloc(sizeof(*subdev_entity->asd), ++ GFP_KERNEL); + if (!subdev_entity->asd) { + of_node_put(rem); + ret = -ENOMEM; +@@ -2210,6 +2213,7 @@ static int atmel_isc_probe(struct platform_device *pdev) + subdev_entity->asd); + if (ret) { + fwnode_handle_put(subdev_entity->asd->match.fwnode); ++ kfree(subdev_entity->asd); + goto cleanup_subdev; + } + +diff --git a/drivers/media/v4l2-core/v4l2-ctrls.c b/drivers/media/v4l2-core/v4l2-ctrls.c +index b79d3bbd8350..54d66dbc2a31 100644 +--- a/drivers/media/v4l2-core/v4l2-ctrls.c ++++ b/drivers/media/v4l2-core/v4l2-ctrls.c +@@ -3899,18 +3899,19 @@ void v4l2_ctrl_request_complete(struct media_request *req, + } + EXPORT_SYMBOL(v4l2_ctrl_request_complete); + +-void v4l2_ctrl_request_setup(struct media_request *req, ++int v4l2_ctrl_request_setup(struct media_request *req, + struct v4l2_ctrl_handler *main_hdl) + { + struct media_request_object *obj; + struct v4l2_ctrl_handler *hdl; + struct v4l2_ctrl_ref *ref; ++ int ret = 0; + + if (!req || !main_hdl) +- return; ++ return 0; + + if (WARN_ON(req->state != MEDIA_REQUEST_STATE_QUEUED)) +- return; ++ return -EBUSY; + + /* + * Note that it is valid if nothing was found. 
It means +@@ -3919,10 +3920,10 @@ void v4l2_ctrl_request_setup(struct media_request *req, + */ + obj = media_request_object_find(req, &req_ops, main_hdl); + if (!obj) +- return; ++ return 0; + if (obj->completed) { + media_request_object_put(obj); +- return; ++ return -EBUSY; + } + hdl = container_of(obj, struct v4l2_ctrl_handler, req_obj); + +@@ -3990,12 +3991,15 @@ void v4l2_ctrl_request_setup(struct media_request *req, + update_from_auto_cluster(master); + } + +- try_or_set_cluster(NULL, master, true, 0); +- ++ ret = try_or_set_cluster(NULL, master, true, 0); + v4l2_ctrl_unlock(master); ++ ++ if (ret) ++ break; + } + + media_request_object_put(obj); ++ return ret; + } + EXPORT_SYMBOL(v4l2_ctrl_request_setup); + +diff --git a/drivers/media/v4l2-core/v4l2-fwnode.c b/drivers/media/v4l2-core/v4l2-fwnode.c +index 7495f8323147..ccefa55813ad 100644 +--- a/drivers/media/v4l2-core/v4l2-fwnode.c ++++ b/drivers/media/v4l2-core/v4l2-fwnode.c +@@ -163,7 +163,7 @@ static int v4l2_fwnode_endpoint_parse_csi2_bus(struct fwnode_handle *fwnode, + } + + if (use_default_lane_mapping) +- pr_debug("using default lane mapping\n"); ++ pr_debug("no lane mapping given, using defaults\n"); + } + + rval = fwnode_property_read_u32_array(fwnode, "data-lanes", NULL, 0); +@@ -175,6 +175,10 @@ static int v4l2_fwnode_endpoint_parse_csi2_bus(struct fwnode_handle *fwnode, + num_data_lanes); + + have_data_lanes = true; ++ if (use_default_lane_mapping) { ++ pr_debug("data-lanes property exists; disabling default mapping\n"); ++ use_default_lane_mapping = false; ++ } + } + + for (i = 0; i < num_data_lanes; i++) { +diff --git a/drivers/mfd/intel-lpss.c b/drivers/mfd/intel-lpss.c +index 50bffc3382d7..ff3fba16e735 100644 +--- a/drivers/mfd/intel-lpss.c ++++ b/drivers/mfd/intel-lpss.c +@@ -273,6 +273,9 @@ static void intel_lpss_init_dev(const struct intel_lpss *lpss) + { + u32 value = LPSS_PRIV_SSP_REG_DIS_DMA_FIN; + ++ /* Set the device in reset state */ ++ writel(0, lpss->priv + LPSS_PRIV_RESETS); ++ + 
intel_lpss_deassert_reset(lpss); + + intel_lpss_set_remap_addr(lpss); +diff --git a/drivers/mfd/tps65912-spi.c b/drivers/mfd/tps65912-spi.c +index 3bd75061f777..f78be039e463 100644 +--- a/drivers/mfd/tps65912-spi.c ++++ b/drivers/mfd/tps65912-spi.c +@@ -27,6 +27,7 @@ static const struct of_device_id tps65912_spi_of_match_table[] = { + { .compatible = "ti,tps65912", }, + { /* sentinel */ } + }; ++MODULE_DEVICE_TABLE(of, tps65912_spi_of_match_table); + + static int tps65912_spi_probe(struct spi_device *spi) + { +diff --git a/drivers/mfd/twl6040.c b/drivers/mfd/twl6040.c +index 7c3c5fd5fcd0..86052c5c6069 100644 +--- a/drivers/mfd/twl6040.c ++++ b/drivers/mfd/twl6040.c +@@ -322,8 +322,19 @@ int twl6040_power(struct twl6040 *twl6040, int on) + } + } + ++ /* ++ * Register access can produce errors after power-up unless we ++ * wait at least 8ms based on measurements on duovero. ++ */ ++ usleep_range(10000, 12000); ++ + /* Sync with the HW */ +- regcache_sync(twl6040->regmap); ++ ret = regcache_sync(twl6040->regmap); ++ if (ret) { ++ dev_err(twl6040->dev, "Failed to sync with the HW: %i\n", ++ ret); ++ goto out; ++ } + + /* Default PLL configuration after power up */ + twl6040->pll = TWL6040_SYSCLK_SEL_LPPLL; +diff --git a/drivers/misc/pci_endpoint_test.c b/drivers/misc/pci_endpoint_test.c +index 29582fe57151..733274a061dc 100644 +--- a/drivers/misc/pci_endpoint_test.c ++++ b/drivers/misc/pci_endpoint_test.c +@@ -662,6 +662,7 @@ static int pci_endpoint_test_probe(struct pci_dev *pdev, + data = (struct pci_endpoint_test_data *)ent->driver_data; + if (data) { + test_reg_bar = data->test_reg_bar; ++ test->test_reg_bar = test_reg_bar; + test->alignment = data->alignment; + irq_type = data->irq_type; + } +diff --git a/drivers/mmc/host/mmci.c b/drivers/mmc/host/mmci.c +index 387ff14587b8..e27978c47db7 100644 +--- a/drivers/mmc/host/mmci.c ++++ b/drivers/mmc/host/mmci.c +@@ -1550,9 +1550,10 @@ static irqreturn_t mmci_irq(int irq, void *dev_id) + } + + /* +- * Don't poll for busy 
completion in irq context. ++ * Busy detection has been handled by mmci_cmd_irq() above. ++ * Clear the status bit to prevent polling in IRQ context. + */ +- if (host->variant->busy_detect && host->busy_status) ++ if (host->variant->busy_detect_flag) + status &= ~host->variant->busy_detect_flag; + + ret = 1; +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +index deda606c51e7..6d4d5a470163 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +@@ -5942,8 +5942,11 @@ int hclge_add_uc_addr_common(struct hclge_vport *vport, + } + + /* check if we just hit the duplicate */ +- if (!ret) +- ret = -EINVAL; ++ if (!ret) { ++ dev_warn(&hdev->pdev->dev, "VF %d mac(%pM) exists\n", ++ vport->vport_id, addr); ++ return 0; ++ } + + dev_err(&hdev->pdev->dev, + "PF failed to add unicast entry(%pM) in the MAC table\n", +diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c +index ac9fcb097689..133f5e008822 100644 +--- a/drivers/net/ethernet/intel/i40e/i40e_main.c ++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c +@@ -6854,10 +6854,12 @@ static int i40e_setup_tc(struct net_device *netdev, void *type_data) + struct i40e_pf *pf = vsi->back; + u8 enabled_tc = 0, num_tc, hw; + bool need_reset = false; ++ int old_queue_pairs; + int ret = -EINVAL; + u16 mode; + int i; + ++ old_queue_pairs = vsi->num_queue_pairs; + num_tc = mqprio_qopt->qopt.num_tc; + hw = mqprio_qopt->qopt.hw; + mode = mqprio_qopt->mode; +@@ -6958,6 +6960,7 @@ config_tc: + } + ret = i40e_configure_queue_channels(vsi); + if (ret) { ++ vsi->num_queue_pairs = old_queue_pairs; + netdev_info(netdev, + "Failed configuring queue channels\n"); + need_reset = true; +diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c +index 6ec73864019c..b562476b1251 100644 +--- 
a/drivers/net/ethernet/intel/ice/ice_main.c ++++ b/drivers/net/ethernet/intel/ice/ice_main.c +@@ -528,6 +528,9 @@ void ice_print_link_msg(struct ice_vsi *vsi, bool isup) + case ICE_FC_RX_PAUSE: + fc = "RX"; + break; ++ case ICE_FC_NONE: ++ fc = "None"; ++ break; + default: + fc = "Unknown"; + break; +diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c +index 09d1c314b68f..c5f6cfecc042 100644 +--- a/drivers/net/ethernet/intel/ice/ice_switch.c ++++ b/drivers/net/ethernet/intel/ice/ice_switch.c +@@ -643,21 +643,37 @@ static void ice_fill_sw_info(struct ice_hw *hw, struct ice_fltr_info *fi) + fi->fltr_act == ICE_FWD_TO_VSI_LIST || + fi->fltr_act == ICE_FWD_TO_Q || + fi->fltr_act == ICE_FWD_TO_QGRP)) { +- fi->lb_en = true; +- /* Do not set lan_en to TRUE if ++ /* Setting LB for prune actions will result in replicated ++ * packets to the internal switch that will be dropped. ++ */ ++ if (fi->lkup_type != ICE_SW_LKUP_VLAN) ++ fi->lb_en = true; ++ ++ /* Set lan_en to TRUE if + * 1. The switch is a VEB AND + * 2 +- * 2.1 The lookup is MAC with unicast addr for MAC, OR +- * 2.2 The lookup is MAC_VLAN with unicast addr for MAC ++ * 2.1 The lookup is VLAN, OR ++ * 2.2 The lookup is default port mode, OR ++ * 2.3 The lookup is MAC with mcast or bcast addr for MAC, OR ++ * 2.4 The lookup is MAC_VLAN with mcast or bcast addr for MAC. ++ * ++ * OR + * +- * In all other cases, the LAN enable has to be set to true. ++ * The switch is a VEPA. ++ * ++ * In all other cases, the LAN enable has to be set to false. 
+ */ +- if (!(hw->evb_veb && +- ((fi->lkup_type == ICE_SW_LKUP_MAC && +- is_unicast_ether_addr(fi->l_data.mac.mac_addr)) || +- (fi->lkup_type == ICE_SW_LKUP_MAC_VLAN && +- is_unicast_ether_addr(fi->l_data.mac_vlan.mac_addr))))) ++ if (hw->evb_veb) { ++ if (fi->lkup_type == ICE_SW_LKUP_VLAN || ++ fi->lkup_type == ICE_SW_LKUP_DFLT || ++ (fi->lkup_type == ICE_SW_LKUP_MAC && ++ !is_unicast_ether_addr(fi->l_data.mac.mac_addr)) || ++ (fi->lkup_type == ICE_SW_LKUP_MAC_VLAN && ++ !is_unicast_ether_addr(fi->l_data.mac.mac_addr))) ++ fi->lan_en = true; ++ } else { + fi->lan_en = true; ++ } + } + } + +diff --git a/drivers/net/thunderbolt.c b/drivers/net/thunderbolt.c +index c48c3a1eb1f8..fcf31335a8b6 100644 +--- a/drivers/net/thunderbolt.c ++++ b/drivers/net/thunderbolt.c +@@ -1282,6 +1282,7 @@ static int __maybe_unused tbnet_suspend(struct device *dev) + tbnet_tear_down(net, true); + } + ++ tb_unregister_protocol_handler(&net->handler); + return 0; + } + +@@ -1290,6 +1291,8 @@ static int __maybe_unused tbnet_resume(struct device *dev) + struct tb_service *svc = tb_to_service(dev); + struct tbnet *net = tb_service_get_drvdata(svc); + ++ tb_register_protocol_handler(&net->handler); ++ + netif_carrier_off(net->dev); + if (netif_running(net->dev)) { + netif_device_attach(net->dev); +diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c +index a90cf5d63aac..372d3f4a106a 100644 +--- a/drivers/nvme/host/pci.c ++++ b/drivers/nvme/host/pci.c +@@ -1271,6 +1271,7 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved) + struct nvme_dev *dev = nvmeq->dev; + struct request *abort_req; + struct nvme_command cmd; ++ bool shutdown = false; + u32 csts = readl(dev->bar + NVME_REG_CSTS); + + /* If PCI error recovery process is happening, we cannot reset or +@@ -1307,12 +1308,14 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved) + * shutdown, so we return BLK_EH_DONE. 
+ */ + switch (dev->ctrl.state) { ++ case NVME_CTRL_DELETING: ++ shutdown = true; + case NVME_CTRL_CONNECTING: + case NVME_CTRL_RESETTING: + dev_warn_ratelimited(dev->ctrl.device, + "I/O %d QID %d timeout, disable controller\n", + req->tag, nvmeq->qid); +- nvme_dev_disable(dev, false); ++ nvme_dev_disable(dev, shutdown); + nvme_req(req)->flags |= NVME_REQ_CANCELLED; + return BLK_EH_DONE; + default: +@@ -2438,8 +2441,11 @@ static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown) + * must flush all entered requests to their failed completion to avoid + * deadlocking blk-mq hot-cpu notifier. + */ +- if (shutdown) ++ if (shutdown) { + nvme_start_queues(&dev->ctrl); ++ if (dev->ctrl.admin_q && !blk_queue_dying(dev->ctrl.admin_q)) ++ blk_mq_unquiesce_queue(dev->ctrl.admin_q); ++ } + mutex_unlock(&dev->shutdown_lock); + } + +diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c +index f24008b66826..53dc37574b5d 100644 +--- a/drivers/nvmem/core.c ++++ b/drivers/nvmem/core.c +@@ -1166,7 +1166,7 @@ EXPORT_SYMBOL_GPL(nvmem_cell_put); + static void nvmem_shift_read_buffer_in_place(struct nvmem_cell *cell, void *buf) + { + u8 *p, *b; +- int i, bit_offset = cell->bit_offset; ++ int i, extra, bit_offset = cell->bit_offset; + + p = b = buf; + if (bit_offset) { +@@ -1181,11 +1181,16 @@ static void nvmem_shift_read_buffer_in_place(struct nvmem_cell *cell, void *buf) + p = b; + *b++ >>= bit_offset; + } +- +- /* result fits in less bytes */ +- if (cell->bytes != DIV_ROUND_UP(cell->nbits, BITS_PER_BYTE)) +- *p-- = 0; ++ } else { ++ /* point to the msb */ ++ p += cell->bytes - 1; + } ++ ++ /* result fits in less bytes */ ++ extra = cell->bytes - DIV_ROUND_UP(cell->nbits, BITS_PER_BYTE); ++ while (--extra >= 0) ++ *p-- = 0; ++ + /* clear msb bits if any leftover in the last byte */ + *p &= GENMASK((cell->nbits%BITS_PER_BYTE) - 1, 0); + } +diff --git a/drivers/nvmem/sunxi_sid.c b/drivers/nvmem/sunxi_sid.c +index 570a2e354f30..ef3d776bb16e 100644 +--- 
a/drivers/nvmem/sunxi_sid.c ++++ b/drivers/nvmem/sunxi_sid.c +@@ -222,8 +222,10 @@ static const struct sunxi_sid_cfg sun50i_a64_cfg = { + static const struct of_device_id sunxi_sid_of_match[] = { + { .compatible = "allwinner,sun4i-a10-sid", .data = &sun4i_a10_cfg }, + { .compatible = "allwinner,sun7i-a20-sid", .data = &sun7i_a20_cfg }, ++ { .compatible = "allwinner,sun8i-a83t-sid", .data = &sun50i_a64_cfg }, + { .compatible = "allwinner,sun8i-h3-sid", .data = &sun8i_h3_cfg }, + { .compatible = "allwinner,sun50i-a64-sid", .data = &sun50i_a64_cfg }, ++ { .compatible = "allwinner,sun50i-h5-sid", .data = &sun50i_a64_cfg }, + {/* sentinel */}, + }; + MODULE_DEVICE_TABLE(of, sunxi_sid_of_match); +diff --git a/drivers/pci/controller/dwc/pci-keystone.c b/drivers/pci/controller/dwc/pci-keystone.c +index 14f2b0b4ed5e..ba6907af9dcb 100644 +--- a/drivers/pci/controller/dwc/pci-keystone.c ++++ b/drivers/pci/controller/dwc/pci-keystone.c +@@ -705,6 +705,7 @@ static void ks_pcie_setup_interrupts(struct keystone_pcie *ks_pcie) + ks_pcie_enable_error_irq(ks_pcie); + } + ++#ifdef CONFIG_ARM + /* + * When a PCI device does not exist during config cycles, keystone host gets a + * bus error instead of returning 0xffffffff. 
This handler always returns 0 +@@ -724,6 +725,7 @@ static int ks_pcie_fault(unsigned long addr, unsigned int fsr, + + return 0; + } ++#endif + + static int __init ks_pcie_init_id(struct keystone_pcie *ks_pcie) + { +@@ -766,12 +768,14 @@ static int __init ks_pcie_host_init(struct pcie_port *pp) + if (ret < 0) + return ret; + ++#ifdef CONFIG_ARM + /* + * PCIe access errors that result into OCP errors are caught by ARM as + * "External aborts" + */ + hook_fault_code(17, ks_pcie_fault, SIGBUS, 0, + "Asynchronous external abort"); ++#endif + + return 0; + } +@@ -873,6 +877,10 @@ static int ks_pcie_enable_phy(struct keystone_pcie *ks_pcie) + int num_lanes = ks_pcie->num_lanes; + + for (i = 0; i < num_lanes; i++) { ++ ret = phy_reset(ks_pcie->phy[i]); ++ if (ret < 0) ++ goto err_phy; ++ + ret = phy_init(ks_pcie->phy[i]); + if (ret < 0) + goto err_phy; +diff --git a/drivers/pci/controller/dwc/pcie-designware-ep.c b/drivers/pci/controller/dwc/pcie-designware-ep.c +index 24f5a775ad34..e3cce5d203f3 100644 +--- a/drivers/pci/controller/dwc/pcie-designware-ep.c ++++ b/drivers/pci/controller/dwc/pcie-designware-ep.c +@@ -397,6 +397,7 @@ int dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, u8 func_no, + { + struct dw_pcie *pci = to_dw_pcie_from_ep(ep); + struct pci_epc *epc = ep->epc; ++ unsigned int aligned_offset; + u16 msg_ctrl, msg_data; + u32 msg_addr_lower, msg_addr_upper, reg; + u64 msg_addr; +@@ -422,13 +423,15 @@ int dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, u8 func_no, + reg = ep->msi_cap + PCI_MSI_DATA_32; + msg_data = dw_pcie_readw_dbi(pci, reg); + } +- msg_addr = ((u64) msg_addr_upper) << 32 | msg_addr_lower; ++ aligned_offset = msg_addr_lower & (epc->mem->page_size - 1); ++ msg_addr = ((u64)msg_addr_upper) << 32 | ++ (msg_addr_lower & ~aligned_offset); + ret = dw_pcie_ep_map_addr(epc, func_no, ep->msi_mem_phys, msg_addr, + epc->mem->page_size); + if (ret) + return ret; + +- writel(msg_data | (interrupt_num - 1), ep->msi_mem); ++ writel(msg_data | (interrupt_num 
- 1), ep->msi_mem + aligned_offset); + + dw_pcie_ep_unmap_addr(epc, func_no, ep->msi_mem_phys); + +diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c +index 25087d3c9a82..214e883aa0a1 100644 +--- a/drivers/pci/controller/dwc/pcie-designware-host.c ++++ b/drivers/pci/controller/dwc/pcie-designware-host.c +@@ -303,20 +303,24 @@ void dw_pcie_free_msi(struct pcie_port *pp) + + irq_domain_remove(pp->msi_domain); + irq_domain_remove(pp->irq_domain); ++ ++ if (pp->msi_page) ++ __free_page(pp->msi_page); + } + + void dw_pcie_msi_init(struct pcie_port *pp) + { + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); + struct device *dev = pci->dev; +- struct page *page; + u64 msi_target; + +- page = alloc_page(GFP_KERNEL); +- pp->msi_data = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE); ++ pp->msi_page = alloc_page(GFP_KERNEL); ++ pp->msi_data = dma_map_page(dev, pp->msi_page, 0, PAGE_SIZE, ++ DMA_FROM_DEVICE); + if (dma_mapping_error(dev, pp->msi_data)) { + dev_err(dev, "Failed to map MSI data\n"); +- __free_page(page); ++ __free_page(pp->msi_page); ++ pp->msi_page = NULL; + return; + } + msi_target = (u64)pp->msi_data; +@@ -439,7 +443,7 @@ int dw_pcie_host_init(struct pcie_port *pp) + if (ret) + pci->num_viewport = 2; + +- if (IS_ENABLED(CONFIG_PCI_MSI) && pci_msi_enabled()) { ++ if (pci_msi_enabled()) { + /* + * If a specific SoC driver needs to change the + * default number of vectors, it needs to implement +@@ -477,7 +481,7 @@ int dw_pcie_host_init(struct pcie_port *pp) + if (pp->ops->host_init) { + ret = pp->ops->host_init(pp); + if (ret) +- goto error; ++ goto err_free_msi; + } + + pp->root_bus_nr = pp->busn->start; +@@ -491,7 +495,7 @@ int dw_pcie_host_init(struct pcie_port *pp) + + ret = pci_scan_root_bus_bridge(bridge); + if (ret) +- goto error; ++ goto err_free_msi; + + bus = bridge->bus; + +@@ -507,6 +511,9 @@ int dw_pcie_host_init(struct pcie_port *pp) + pci_bus_add_devices(bus); + return 0; + 
++err_free_msi: ++ if (pci_msi_enabled() && !pp->ops->msi_host_init) ++ dw_pcie_free_msi(pp); + error: + pci_free_host_bridge(bridge); + return ret; +@@ -646,17 +653,19 @@ void dw_pcie_setup_rc(struct pcie_port *pp) + + dw_pcie_setup(pci); + +- num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL; +- +- /* Initialize IRQ Status array */ +- for (ctrl = 0; ctrl < num_ctrls; ctrl++) { +- pp->irq_mask[ctrl] = ~0; +- dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_MASK + +- (ctrl * MSI_REG_CTRL_BLOCK_SIZE), +- 4, pp->irq_mask[ctrl]); +- dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + +- (ctrl * MSI_REG_CTRL_BLOCK_SIZE), +- 4, ~0); ++ if (!pp->ops->msi_host_init) { ++ num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL; ++ ++ /* Initialize IRQ Status array */ ++ for (ctrl = 0; ctrl < num_ctrls; ctrl++) { ++ pp->irq_mask[ctrl] = ~0; ++ dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_MASK + ++ (ctrl * MSI_REG_CTRL_BLOCK_SIZE), ++ 4, pp->irq_mask[ctrl]); ++ dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE + ++ (ctrl * MSI_REG_CTRL_BLOCK_SIZE), ++ 4, ~0); ++ } + } + + /* Setup RC BARs */ +diff --git a/drivers/pci/controller/dwc/pcie-designware.h b/drivers/pci/controller/dwc/pcie-designware.h +index 377f4c0b52da..6fb0a1879932 100644 +--- a/drivers/pci/controller/dwc/pcie-designware.h ++++ b/drivers/pci/controller/dwc/pcie-designware.h +@@ -179,6 +179,7 @@ struct pcie_port { + struct irq_domain *irq_domain; + struct irq_domain *msi_domain; + dma_addr_t msi_data; ++ struct page *msi_page; + u32 num_vectors; + u32 irq_mask[MAX_MSI_CTRLS]; + raw_spinlock_t lock; +diff --git a/drivers/pci/controller/pcie-rcar.c b/drivers/pci/controller/pcie-rcar.c +index 6a4e435bd35f..9b9c677ad3a0 100644 +--- a/drivers/pci/controller/pcie-rcar.c ++++ b/drivers/pci/controller/pcie-rcar.c +@@ -892,7 +892,7 @@ static int rcar_pcie_enable_msi(struct rcar_pcie *pcie) + { + struct device *dev = pcie->dev; + struct rcar_msi *msi = &pcie->msi; +- unsigned long base; ++ phys_addr_t base; + int err, i; + + 
mutex_init(&msi->lock); +@@ -931,10 +931,14 @@ static int rcar_pcie_enable_msi(struct rcar_pcie *pcie) + + /* setup MSI data target */ + msi->pages = __get_free_pages(GFP_KERNEL, 0); ++ if (!msi->pages) { ++ err = -ENOMEM; ++ goto err; ++ } + base = virt_to_phys((void *)msi->pages); + +- rcar_pci_write_reg(pcie, base | MSIFE, PCIEMSIALR); +- rcar_pci_write_reg(pcie, 0, PCIEMSIAUR); ++ rcar_pci_write_reg(pcie, lower_32_bits(base) | MSIFE, PCIEMSIALR); ++ rcar_pci_write_reg(pcie, upper_32_bits(base), PCIEMSIAUR); + + /* enable all MSI interrupts */ + rcar_pci_write_reg(pcie, 0xffffffff, PCIEMSIIER); +diff --git a/drivers/pci/controller/pcie-xilinx.c b/drivers/pci/controller/pcie-xilinx.c +index 9bd1a35cd5d8..5bf3af3b28e6 100644 +--- a/drivers/pci/controller/pcie-xilinx.c ++++ b/drivers/pci/controller/pcie-xilinx.c +@@ -336,14 +336,19 @@ static const struct irq_domain_ops msi_domain_ops = { + * xilinx_pcie_enable_msi - Enable MSI support + * @port: PCIe port information + */ +-static void xilinx_pcie_enable_msi(struct xilinx_pcie_port *port) ++static int xilinx_pcie_enable_msi(struct xilinx_pcie_port *port) + { + phys_addr_t msg_addr; + + port->msi_pages = __get_free_pages(GFP_KERNEL, 0); ++ if (!port->msi_pages) ++ return -ENOMEM; ++ + msg_addr = virt_to_phys((void *)port->msi_pages); + pcie_write(port, 0x0, XILINX_PCIE_REG_MSIBASE1); + pcie_write(port, msg_addr, XILINX_PCIE_REG_MSIBASE2); ++ ++ return 0; + } + + /* INTx Functions */ +@@ -498,6 +503,7 @@ static int xilinx_pcie_init_irq_domain(struct xilinx_pcie_port *port) + struct device *dev = port->dev; + struct device_node *node = dev->of_node; + struct device_node *pcie_intc_node; ++ int ret; + + /* Setup INTx */ + pcie_intc_node = of_get_next_child(node, NULL); +@@ -526,7 +532,9 @@ static int xilinx_pcie_init_irq_domain(struct xilinx_pcie_port *port) + return -ENODEV; + } + +- xilinx_pcie_enable_msi(port); ++ ret = xilinx_pcie_enable_msi(port); ++ if (ret) ++ return ret; + } + + return 0; +diff --git 
a/drivers/pci/hotplug/rpadlpar_core.c b/drivers/pci/hotplug/rpadlpar_core.c +index e2356a9c7088..182f9e3443ee 100644 +--- a/drivers/pci/hotplug/rpadlpar_core.c ++++ b/drivers/pci/hotplug/rpadlpar_core.c +@@ -51,6 +51,7 @@ static struct device_node *find_vio_slot_node(char *drc_name) + if (rc == 0) + break; + } ++ of_node_put(parent); + + return dn; + } +@@ -71,6 +72,7 @@ static struct device_node *find_php_slot_pci_node(char *drc_name, + return np; + } + ++/* Returns a device_node with its reference count incremented */ + static struct device_node *find_dlpar_node(char *drc_name, int *node_type) + { + struct device_node *dn; +@@ -306,6 +308,7 @@ int dlpar_add_slot(char *drc_name) + rc = dlpar_add_phb(drc_name, dn); + break; + } ++ of_node_put(dn); + + printk(KERN_INFO "%s: slot %s added\n", DLPAR_MODULE_NAME, drc_name); + exit: +@@ -439,6 +442,7 @@ int dlpar_remove_slot(char *drc_name) + rc = dlpar_remove_pci_slot(drc_name, dn); + break; + } ++ of_node_put(dn); + vm_unmap_aliases(); + + printk(KERN_INFO "%s: slot %s removed\n", DLPAR_MODULE_NAME, drc_name); +diff --git a/drivers/pci/switch/switchtec.c b/drivers/pci/switch/switchtec.c +index e22766c79fe9..c2fa830b8ef5 100644 +--- a/drivers/pci/switch/switchtec.c ++++ b/drivers/pci/switch/switchtec.c +@@ -1162,7 +1162,8 @@ static int mask_event(struct switchtec_dev *stdev, int eid, int idx) + if (!(hdr & SWITCHTEC_EVENT_OCCURRED && hdr & SWITCHTEC_EVENT_EN_IRQ)) + return 0; + +- if (eid == SWITCHTEC_IOCTL_EVENT_LINK_STATE) ++ if (eid == SWITCHTEC_IOCTL_EVENT_LINK_STATE || ++ eid == SWITCHTEC_IOCTL_EVENT_MRPC_COMP) + return 0; + + dev_dbg(&stdev->dev, "%s: %d %d %x\n", __func__, eid, idx, hdr); +diff --git a/drivers/pinctrl/intel/pinctrl-intel.c b/drivers/pinctrl/intel/pinctrl-intel.c +index 3b1818184207..70638b74f9d6 100644 +--- a/drivers/pinctrl/intel/pinctrl-intel.c ++++ b/drivers/pinctrl/intel/pinctrl-intel.c +@@ -1466,7 +1466,7 @@ static bool intel_pinctrl_should_save(struct intel_pinctrl *pctrl, unsigned int + 
return false; + } + +-int intel_pinctrl_suspend(struct device *dev) ++int intel_pinctrl_suspend_noirq(struct device *dev) + { + struct intel_pinctrl *pctrl = dev_get_drvdata(dev); + struct intel_community_context *communities; +@@ -1505,7 +1505,7 @@ int intel_pinctrl_suspend(struct device *dev) + + return 0; + } +-EXPORT_SYMBOL_GPL(intel_pinctrl_suspend); ++EXPORT_SYMBOL_GPL(intel_pinctrl_suspend_noirq); + + static void intel_gpio_irq_init(struct intel_pinctrl *pctrl) + { +@@ -1527,7 +1527,7 @@ static void intel_gpio_irq_init(struct intel_pinctrl *pctrl) + } + } + +-int intel_pinctrl_resume(struct device *dev) ++int intel_pinctrl_resume_noirq(struct device *dev) + { + struct intel_pinctrl *pctrl = dev_get_drvdata(dev); + const struct intel_community_context *communities; +@@ -1589,7 +1589,7 @@ int intel_pinctrl_resume(struct device *dev) + + return 0; + } +-EXPORT_SYMBOL_GPL(intel_pinctrl_resume); ++EXPORT_SYMBOL_GPL(intel_pinctrl_resume_noirq); + #endif + + MODULE_AUTHOR("Mathias Nyman "); +diff --git a/drivers/pinctrl/intel/pinctrl-intel.h b/drivers/pinctrl/intel/pinctrl-intel.h +index b8a07d37d18f..a8e958f1dcf5 100644 +--- a/drivers/pinctrl/intel/pinctrl-intel.h ++++ b/drivers/pinctrl/intel/pinctrl-intel.h +@@ -177,13 +177,14 @@ int intel_pinctrl_probe_by_hid(struct platform_device *pdev); + int intel_pinctrl_probe_by_uid(struct platform_device *pdev); + + #ifdef CONFIG_PM_SLEEP +-int intel_pinctrl_suspend(struct device *dev); +-int intel_pinctrl_resume(struct device *dev); ++int intel_pinctrl_suspend_noirq(struct device *dev); ++int intel_pinctrl_resume_noirq(struct device *dev); + #endif + +-#define INTEL_PINCTRL_PM_OPS(_name) \ +-const struct dev_pm_ops _name = { \ +- SET_LATE_SYSTEM_SLEEP_PM_OPS(intel_pinctrl_suspend, intel_pinctrl_resume) \ ++#define INTEL_PINCTRL_PM_OPS(_name) \ ++const struct dev_pm_ops _name = { \ ++ SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(intel_pinctrl_suspend_noirq, \ ++ intel_pinctrl_resume_noirq) \ + } + + #endif /* PINCTRL_INTEL_H */ +diff 
--git a/drivers/platform/chrome/cros_ec_proto.c b/drivers/platform/chrome/cros_ec_proto.c +index 97a068dff192..3bb954997ebc 100644 +--- a/drivers/platform/chrome/cros_ec_proto.c ++++ b/drivers/platform/chrome/cros_ec_proto.c +@@ -56,6 +56,17 @@ static int send_command(struct cros_ec_device *ec_dev, + else + xfer_fxn = ec_dev->cmd_xfer; + ++ if (!xfer_fxn) { ++ /* ++ * This error can happen if a communication error happened and ++ * the EC is trying to use protocol v2, on an underlying ++ * communication mechanism that does not support v2. ++ */ ++ dev_err_once(ec_dev->dev, ++ "missing EC transfer API, cannot send command\n"); ++ return -EIO; ++ } ++ + ret = (*xfer_fxn)(ec_dev, msg); + if (msg->result == EC_RES_IN_PROGRESS) { + int i; +diff --git a/drivers/platform/x86/intel_pmc_ipc.c b/drivers/platform/x86/intel_pmc_ipc.c +index 7964ba22ef8d..d37cbd1cf58c 100644 +--- a/drivers/platform/x86/intel_pmc_ipc.c ++++ b/drivers/platform/x86/intel_pmc_ipc.c +@@ -771,13 +771,17 @@ static int ipc_create_pmc_devices(void) + if (ret) { + dev_err(ipcdev.dev, "Failed to add punit platform device\n"); + platform_device_unregister(ipcdev.tco_dev); ++ return ret; + } + + if (!ipcdev.telem_res_inval) { + ret = ipc_create_telemetry_device(); +- if (ret) ++ if (ret) { + dev_warn(ipcdev.dev, + "Failed to add telemetry platform device\n"); ++ platform_device_unregister(ipcdev.punit_dev); ++ platform_device_unregister(ipcdev.tco_dev); ++ } + } + + return ret; +diff --git a/drivers/power/supply/cpcap-battery.c b/drivers/power/supply/cpcap-battery.c +index 6887870ba32c..453baa8f7d73 100644 +--- a/drivers/power/supply/cpcap-battery.c ++++ b/drivers/power/supply/cpcap-battery.c +@@ -82,7 +82,7 @@ struct cpcap_battery_config { + }; + + struct cpcap_coulomb_counter_data { +- s32 sample; /* 24-bits */ ++ s32 sample; /* 24 or 32 bits */ + s32 accumulator; + s16 offset; /* 10-bits */ + }; +@@ -213,7 +213,7 @@ static int cpcap_battery_get_current(struct cpcap_battery_ddata *ddata) + * TI or ST 
coulomb counter in the PMIC. + */ + static int cpcap_battery_cc_raw_div(struct cpcap_battery_ddata *ddata, +- u32 sample, s32 accumulator, ++ s32 sample, s32 accumulator, + s16 offset, u32 divider) + { + s64 acc; +@@ -224,7 +224,6 @@ static int cpcap_battery_cc_raw_div(struct cpcap_battery_ddata *ddata, + if (!divider) + return 0; + +- sample &= 0xffffff; /* 24-bits, unsigned */ + offset &= 0x7ff; /* 10-bits, signed */ + + switch (ddata->vendor) { +@@ -259,7 +258,7 @@ static int cpcap_battery_cc_raw_div(struct cpcap_battery_ddata *ddata, + + /* 3600000μAms = 1μAh */ + static int cpcap_battery_cc_to_uah(struct cpcap_battery_ddata *ddata, +- u32 sample, s32 accumulator, ++ s32 sample, s32 accumulator, + s16 offset) + { + return cpcap_battery_cc_raw_div(ddata, sample, +@@ -268,7 +267,7 @@ static int cpcap_battery_cc_to_uah(struct cpcap_battery_ddata *ddata, + } + + static int cpcap_battery_cc_to_ua(struct cpcap_battery_ddata *ddata, +- u32 sample, s32 accumulator, ++ s32 sample, s32 accumulator, + s16 offset) + { + return cpcap_battery_cc_raw_div(ddata, sample, +@@ -312,6 +311,8 @@ cpcap_battery_read_accumulated(struct cpcap_battery_ddata *ddata, + /* Sample value CPCAP_REG_CCS1 & 2 */ + ccd->sample = (buf[1] & 0x0fff) << 16; + ccd->sample |= buf[0]; ++ if (ddata->vendor == CPCAP_VENDOR_TI) ++ ccd->sample = sign_extend32(24, ccd->sample); + + /* Accumulator value CPCAP_REG_CCA1 & 2 */ + ccd->accumulator = ((s16)buf[3]) << 16; +diff --git a/drivers/power/supply/max14656_charger_detector.c b/drivers/power/supply/max14656_charger_detector.c +index b91b1d2999dc..d19307f791c6 100644 +--- a/drivers/power/supply/max14656_charger_detector.c ++++ b/drivers/power/supply/max14656_charger_detector.c +@@ -280,6 +280,13 @@ static int max14656_probe(struct i2c_client *client, + + INIT_DELAYED_WORK(&chip->irq_work, max14656_irq_worker); + ++ chip->detect_psy = devm_power_supply_register(dev, ++ &chip->psy_desc, &psy_cfg); ++ if (IS_ERR(chip->detect_psy)) { ++ dev_err(dev, 
"power_supply_register failed\n"); ++ return -EINVAL; ++ } ++ + ret = devm_request_irq(dev, chip->irq, max14656_irq, + IRQF_TRIGGER_FALLING, + MAX14656_NAME, chip); +@@ -289,13 +296,6 @@ static int max14656_probe(struct i2c_client *client, + } + enable_irq_wake(chip->irq); + +- chip->detect_psy = devm_power_supply_register(dev, +- &chip->psy_desc, &psy_cfg); +- if (IS_ERR(chip->detect_psy)) { +- dev_err(dev, "power_supply_register failed\n"); +- return -EINVAL; +- } +- + schedule_delayed_work(&chip->irq_work, msecs_to_jiffies(2000)); + + return 0; +diff --git a/drivers/pwm/core.c b/drivers/pwm/core.c +index 3149204567f3..8c9200a0df5e 100644 +--- a/drivers/pwm/core.c ++++ b/drivers/pwm/core.c +@@ -311,10 +311,12 @@ int pwmchip_add_with_polarity(struct pwm_chip *chip, + if (IS_ENABLED(CONFIG_OF)) + of_pwmchip_add(chip); + +- pwmchip_sysfs_export(chip); +- + out: + mutex_unlock(&pwm_lock); ++ ++ if (!ret) ++ pwmchip_sysfs_export(chip); ++ + return ret; + } + EXPORT_SYMBOL_GPL(pwmchip_add_with_polarity); +@@ -348,7 +350,7 @@ int pwmchip_remove(struct pwm_chip *chip) + unsigned int i; + int ret = 0; + +- pwmchip_sysfs_unexport_children(chip); ++ pwmchip_sysfs_unexport(chip); + + mutex_lock(&pwm_lock); + +@@ -368,8 +370,6 @@ int pwmchip_remove(struct pwm_chip *chip) + + free_pwms(chip); + +- pwmchip_sysfs_unexport(chip); +- + out: + mutex_unlock(&pwm_lock); + return ret; +diff --git a/drivers/pwm/pwm-meson.c b/drivers/pwm/pwm-meson.c +index c1ed641b3e26..f6e738ad7bd9 100644 +--- a/drivers/pwm/pwm-meson.c ++++ b/drivers/pwm/pwm-meson.c +@@ -111,6 +111,10 @@ struct meson_pwm { + const struct meson_pwm_data *data; + void __iomem *base; + u8 inverter_mask; ++ /* ++ * Protects register (write) access to the REG_MISC_AB register ++ * that is shared between the two PWMs. 
++ */ + spinlock_t lock; + }; + +@@ -235,6 +239,7 @@ static void meson_pwm_enable(struct meson_pwm *meson, + { + u32 value, clk_shift, clk_enable, enable; + unsigned int offset; ++ unsigned long flags; + + switch (id) { + case 0: +@@ -255,6 +260,8 @@ static void meson_pwm_enable(struct meson_pwm *meson, + return; + } + ++ spin_lock_irqsave(&meson->lock, flags); ++ + value = readl(meson->base + REG_MISC_AB); + value &= ~(MISC_CLK_DIV_MASK << clk_shift); + value |= channel->pre_div << clk_shift; +@@ -267,11 +274,14 @@ static void meson_pwm_enable(struct meson_pwm *meson, + value = readl(meson->base + REG_MISC_AB); + value |= enable; + writel(value, meson->base + REG_MISC_AB); ++ ++ spin_unlock_irqrestore(&meson->lock, flags); + } + + static void meson_pwm_disable(struct meson_pwm *meson, unsigned int id) + { + u32 value, enable; ++ unsigned long flags; + + switch (id) { + case 0: +@@ -286,9 +296,13 @@ static void meson_pwm_disable(struct meson_pwm *meson, unsigned int id) + return; + } + ++ spin_lock_irqsave(&meson->lock, flags); ++ + value = readl(meson->base + REG_MISC_AB); + value &= ~enable; + writel(value, meson->base + REG_MISC_AB); ++ ++ spin_unlock_irqrestore(&meson->lock, flags); + } + + static int meson_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm, +@@ -296,19 +310,16 @@ static int meson_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm, + { + struct meson_pwm_channel *channel = pwm_get_chip_data(pwm); + struct meson_pwm *meson = to_meson_pwm(chip); +- unsigned long flags; + int err = 0; + + if (!state) + return -EINVAL; + +- spin_lock_irqsave(&meson->lock, flags); +- + if (!state->enabled) { + meson_pwm_disable(meson, pwm->hwpwm); + channel->state.enabled = false; + +- goto unlock; ++ return 0; + } + + if (state->period != channel->state.period || +@@ -329,7 +340,7 @@ static int meson_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm, + err = meson_pwm_calc(meson, channel, pwm->hwpwm, + state->duty_cycle, state->period); + if (err < 
0) +- goto unlock; ++ return err; + + channel->state.polarity = state->polarity; + channel->state.period = state->period; +@@ -341,9 +352,7 @@ static int meson_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm, + channel->state.enabled = true; + } + +-unlock: +- spin_unlock_irqrestore(&meson->lock, flags); +- return err; ++ return 0; + } + + static void meson_pwm_get_state(struct pwm_chip *chip, struct pwm_device *pwm, +diff --git a/drivers/pwm/pwm-tiehrpwm.c b/drivers/pwm/pwm-tiehrpwm.c +index f7b8a86fa5c5..ad4a40c0f27c 100644 +--- a/drivers/pwm/pwm-tiehrpwm.c ++++ b/drivers/pwm/pwm-tiehrpwm.c +@@ -382,6 +382,8 @@ static void ehrpwm_pwm_disable(struct pwm_chip *chip, struct pwm_device *pwm) + } + + /* Update shadow register first before modifying active register */ ++ ehrpwm_modify(pc->mmio_base, AQSFRC, AQSFRC_RLDCSF_MASK, ++ AQSFRC_RLDCSF_ZRO); + ehrpwm_modify(pc->mmio_base, AQCSFRC, aqcsfrc_mask, aqcsfrc_val); + /* + * Changes to immediate action on Action Qualifier. This puts +diff --git a/drivers/pwm/sysfs.c b/drivers/pwm/sysfs.c +index ceb233dd6048..13d9bd54dfce 100644 +--- a/drivers/pwm/sysfs.c ++++ b/drivers/pwm/sysfs.c +@@ -409,19 +409,6 @@ void pwmchip_sysfs_export(struct pwm_chip *chip) + } + + void pwmchip_sysfs_unexport(struct pwm_chip *chip) +-{ +- struct device *parent; +- +- parent = class_find_device(&pwm_class, NULL, chip, +- pwmchip_sysfs_match); +- if (parent) { +- /* for class_find_device() */ +- put_device(parent); +- device_unregister(parent); +- } +-} +- +-void pwmchip_sysfs_unexport_children(struct pwm_chip *chip) + { + struct device *parent; + unsigned int i; +@@ -439,6 +426,7 @@ void pwmchip_sysfs_unexport_children(struct pwm_chip *chip) + } + + put_device(parent); ++ device_unregister(parent); + } + + static int __init pwm_sysfs_init(void) +diff --git a/drivers/rapidio/rio_cm.c b/drivers/rapidio/rio_cm.c +index cf45829585cb..b29fc258eeba 100644 +--- a/drivers/rapidio/rio_cm.c ++++ b/drivers/rapidio/rio_cm.c +@@ -2147,6 +2147,14 @@ 
static int riocm_add_mport(struct device *dev, + mutex_init(&cm->rx_lock); + riocm_rx_fill(cm, RIOCM_RX_RING_SIZE); + cm->rx_wq = create_workqueue(DRV_NAME "/rxq"); ++ if (!cm->rx_wq) { ++ riocm_error("failed to allocate IBMBOX_%d on %s", ++ cmbox, mport->name); ++ rio_release_outb_mbox(mport, cmbox); ++ kfree(cm); ++ return -ENOMEM; ++ } ++ + INIT_WORK(&cm->rx_work, rio_ibmsg_handler); + + cm->tx_slot = 0; +diff --git a/drivers/scsi/qla2xxx/qla_gs.c b/drivers/scsi/qla2xxx/qla_gs.c +index c6fdad12428e..2e377814d7c1 100644 +--- a/drivers/scsi/qla2xxx/qla_gs.c ++++ b/drivers/scsi/qla2xxx/qla_gs.c +@@ -3031,6 +3031,8 @@ static void qla24xx_async_gpsc_sp_done(void *s, int res) + "Async done-%s res %x, WWPN %8phC \n", + sp->name, res, fcport->port_name); + ++ fcport->flags &= ~(FCF_ASYNC_SENT | FCF_ASYNC_ACTIVE); ++ + if (res == QLA_FUNCTION_TIMEOUT) + return; + +@@ -4370,6 +4372,7 @@ int qla24xx_async_gnnid(scsi_qla_host_t *vha, fc_port_t *fcport) + + done_free_sp: + sp->free(sp); ++ fcport->flags &= ~FCF_ASYNC_SENT; + done: + return rval; + } +diff --git a/drivers/soc/mediatek/mtk-pmic-wrap.c b/drivers/soc/mediatek/mtk-pmic-wrap.c +index 8236a6c87e19..2f632e8790f7 100644 +--- a/drivers/soc/mediatek/mtk-pmic-wrap.c ++++ b/drivers/soc/mediatek/mtk-pmic-wrap.c +@@ -1281,7 +1281,7 @@ static bool pwrap_is_pmic_cipher_ready(struct pmic_wrapper *wrp) + static int pwrap_init_cipher(struct pmic_wrapper *wrp) + { + int ret; +- u32 rdata; ++ u32 rdata = 0; + + pwrap_writel(wrp, 0x1, PWRAP_CIPHER_SWRST); + pwrap_writel(wrp, 0x0, PWRAP_CIPHER_SWRST); +diff --git a/drivers/soc/renesas/renesas-soc.c b/drivers/soc/renesas/renesas-soc.c +index 4af96e668a2f..3299cf5365f3 100644 +--- a/drivers/soc/renesas/renesas-soc.c ++++ b/drivers/soc/renesas/renesas-soc.c +@@ -335,6 +335,9 @@ static int __init renesas_soc_init(void) + /* R-Car M3-W ES1.1 incorrectly identifies as ES2.0 */ + if ((product & 0x7fff) == 0x5210) + product ^= 0x11; ++ /* R-Car M3-W ES1.3 incorrectly identifies as ES2.1 */ 
++ if ((product & 0x7fff) == 0x5211) ++ product ^= 0x12; + if (soc->id && ((product >> 8) & 0xff) != soc->id) { + pr_warn("SoC mismatch (product = 0x%x)\n", product); + return -ENODEV; +diff --git a/drivers/soc/rockchip/grf.c b/drivers/soc/rockchip/grf.c +index 96882ffde67e..3b81e1d75a97 100644 +--- a/drivers/soc/rockchip/grf.c ++++ b/drivers/soc/rockchip/grf.c +@@ -66,9 +66,11 @@ static const struct rockchip_grf_info rk3228_grf __initconst = { + }; + + #define RK3288_GRF_SOC_CON0 0x244 ++#define RK3288_GRF_SOC_CON2 0x24c + + static const struct rockchip_grf_value rk3288_defaults[] __initconst = { + { "jtag switching", RK3288_GRF_SOC_CON0, HIWORD_UPDATE(0, 1, 12) }, ++ { "pwm select", RK3288_GRF_SOC_CON2, HIWORD_UPDATE(1, 1, 0) }, + }; + + static const struct rockchip_grf_info rk3288_grf __initconst = { +diff --git a/drivers/soc/tegra/pmc.c b/drivers/soc/tegra/pmc.c +index 0df258518693..2fba8cdbe3bd 100644 +--- a/drivers/soc/tegra/pmc.c ++++ b/drivers/soc/tegra/pmc.c +@@ -1999,7 +1999,7 @@ static int tegra_pmc_probe(struct platform_device *pdev) + if (IS_ENABLED(CONFIG_DEBUG_FS)) { + err = tegra_powergate_debugfs_init(); + if (err < 0) +- return err; ++ goto cleanup_sysfs; + } + + err = register_restart_handler(&tegra_pmc_restart_handler); +@@ -2030,6 +2030,9 @@ cleanup_restart_handler: + unregister_restart_handler(&tegra_pmc_restart_handler); + cleanup_debugfs: + debugfs_remove(pmc->debugfs); ++cleanup_sysfs: ++ device_remove_file(&pdev->dev, &dev_attr_reset_reason); ++ device_remove_file(&pdev->dev, &dev_attr_reset_level); + return err; + } + +diff --git a/drivers/spi/spi-pxa2xx.c b/drivers/spi/spi-pxa2xx.c +index d2076f2f468f..a1a63b617ae1 100644 +--- a/drivers/spi/spi-pxa2xx.c ++++ b/drivers/spi/spi-pxa2xx.c +@@ -1491,12 +1491,7 @@ static int pxa2xx_spi_get_port_id(struct acpi_device *adev) + + static bool pxa2xx_spi_idma_filter(struct dma_chan *chan, void *param) + { +- struct device *dev = param; +- +- if (dev != chan->device->dev->parent) +- return false; +- 
+- return true; ++ return param == chan->device->dev; + } + + #endif /* CONFIG_PCI */ +diff --git a/drivers/staging/media/rockchip/vpu/rockchip_vpu_drv.c b/drivers/staging/media/rockchip/vpu/rockchip_vpu_drv.c +index 962412c79b91..d489b5dd54d7 100644 +--- a/drivers/staging/media/rockchip/vpu/rockchip_vpu_drv.c ++++ b/drivers/staging/media/rockchip/vpu/rockchip_vpu_drv.c +@@ -481,15 +481,18 @@ static int rockchip_vpu_probe(struct platform_device *pdev) + return 0; + err_video_dev_unreg: + if (vpu->vfd_enc) { ++ v4l2_m2m_unregister_media_controller(vpu->m2m_dev); + video_unregister_device(vpu->vfd_enc); + video_device_release(vpu->vfd_enc); + } + err_m2m_rel: ++ media_device_cleanup(&vpu->mdev); + v4l2_m2m_release(vpu->m2m_dev); + err_v4l2_unreg: + v4l2_device_unregister(&vpu->v4l2_dev); + err_clk_unprepare: + clk_bulk_unprepare(vpu->variant->num_clocks, vpu->clocks); ++ pm_runtime_dont_use_autosuspend(vpu->dev); + pm_runtime_disable(vpu->dev); + return ret; + } +@@ -501,15 +504,16 @@ static int rockchip_vpu_remove(struct platform_device *pdev) + v4l2_info(&vpu->v4l2_dev, "Removing %s\n", pdev->name); + + media_device_unregister(&vpu->mdev); +- v4l2_m2m_unregister_media_controller(vpu->m2m_dev); +- v4l2_m2m_release(vpu->m2m_dev); +- media_device_cleanup(&vpu->mdev); + if (vpu->vfd_enc) { ++ v4l2_m2m_unregister_media_controller(vpu->m2m_dev); + video_unregister_device(vpu->vfd_enc); + video_device_release(vpu->vfd_enc); + } ++ media_device_cleanup(&vpu->mdev); ++ v4l2_m2m_release(vpu->m2m_dev); + v4l2_device_unregister(&vpu->v4l2_dev); + clk_bulk_unprepare(vpu->variant->num_clocks, vpu->clocks); ++ pm_runtime_dont_use_autosuspend(vpu->dev); + pm_runtime_disable(vpu->dev); + return 0; + } +diff --git a/drivers/thermal/qcom/tsens.c b/drivers/thermal/qcom/tsens.c +index f1ec9bbe4717..54b2c0e3c4f4 100644 +--- a/drivers/thermal/qcom/tsens.c ++++ b/drivers/thermal/qcom/tsens.c +@@ -160,7 +160,8 @@ static int tsens_probe(struct platform_device *pdev) + if 
(tmdev->ops->calibrate) { + ret = tmdev->ops->calibrate(tmdev); + if (ret < 0) { +- dev_err(dev, "tsens calibration failed\n"); ++ if (ret != -EPROBE_DEFER) ++ dev_err(dev, "tsens calibration failed\n"); + return ret; + } + } +diff --git a/drivers/thermal/rcar_gen3_thermal.c b/drivers/thermal/rcar_gen3_thermal.c +index 88fa41cf16e8..0c9e7a5bacdf 100644 +--- a/drivers/thermal/rcar_gen3_thermal.c ++++ b/drivers/thermal/rcar_gen3_thermal.c +@@ -331,6 +331,9 @@ MODULE_DEVICE_TABLE(of, rcar_gen3_thermal_dt_ids); + static int rcar_gen3_thermal_remove(struct platform_device *pdev) + { + struct device *dev = &pdev->dev; ++ struct rcar_gen3_thermal_priv *priv = dev_get_drvdata(dev); ++ ++ rcar_thermal_irq_set(priv, false); + + pm_runtime_put(dev); + pm_runtime_disable(dev); +diff --git a/drivers/tty/serial/8250/8250_dw.c b/drivers/tty/serial/8250/8250_dw.c +index d31b975dd3fd..284e8d052fc3 100644 +--- a/drivers/tty/serial/8250/8250_dw.c ++++ b/drivers/tty/serial/8250/8250_dw.c +@@ -365,7 +365,7 @@ static bool dw8250_fallback_dma_filter(struct dma_chan *chan, void *param) + + static bool dw8250_idma_filter(struct dma_chan *chan, void *param) + { +- return param == chan->device->dev->parent; ++ return param == chan->device->dev; + } + + /* +@@ -434,7 +434,7 @@ static void dw8250_quirks(struct uart_port *p, struct dw8250_data *data) + data->uart_16550_compatible = true; + } + +- /* Platforms with iDMA */ ++ /* Platforms with iDMA 64-bit */ + if (platform_get_resource_byname(to_platform_device(p->dev), + IORESOURCE_MEM, "lpss_priv")) { + data->dma.rx_param = p->dev->parent; +diff --git a/drivers/usb/host/ohci-da8xx.c b/drivers/usb/host/ohci-da8xx.c +index ca8a94f15ac0..113401b7d70d 100644 +--- a/drivers/usb/host/ohci-da8xx.c ++++ b/drivers/usb/host/ohci-da8xx.c +@@ -206,12 +206,23 @@ static int ohci_da8xx_regulator_event(struct notifier_block *nb, + return 0; + } + +-static irqreturn_t ohci_da8xx_oc_handler(int irq, void *data) ++static irqreturn_t ohci_da8xx_oc_thread(int irq, 
void *data) + { + struct da8xx_ohci_hcd *da8xx_ohci = data; ++ struct device *dev = da8xx_ohci->hcd->self.controller; ++ int ret; + +- if (gpiod_get_value(da8xx_ohci->oc_gpio)) +- gpiod_set_value(da8xx_ohci->vbus_gpio, 0); ++ if (gpiod_get_value_cansleep(da8xx_ohci->oc_gpio)) { ++ if (da8xx_ohci->vbus_gpio) { ++ gpiod_set_value_cansleep(da8xx_ohci->vbus_gpio, 0); ++ } else if (da8xx_ohci->vbus_reg) { ++ ret = regulator_disable(da8xx_ohci->vbus_reg); ++ if (ret) ++ dev_err(dev, ++ "Failed to disable regulator: %d\n", ++ ret); ++ } ++ } + + return IRQ_HANDLED; + } +@@ -438,8 +449,9 @@ static int ohci_da8xx_probe(struct platform_device *pdev) + if (oc_irq < 0) + goto err; + +- error = devm_request_irq(dev, oc_irq, ohci_da8xx_oc_handler, +- IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING, ++ error = devm_request_threaded_irq(dev, oc_irq, NULL, ++ ohci_da8xx_oc_thread, IRQF_TRIGGER_RISING | ++ IRQF_TRIGGER_FALLING | IRQF_ONESHOT, + "OHCI over-current indicator", da8xx_ohci); + if (error) + goto err; +diff --git a/drivers/usb/typec/tcpm/fusb302.c b/drivers/usb/typec/tcpm/fusb302.c +index e9344997329c..d8b128ed08af 100644 +--- a/drivers/usb/typec/tcpm/fusb302.c ++++ b/drivers/usb/typec/tcpm/fusb302.c +@@ -634,6 +634,8 @@ static int fusb302_set_toggling(struct fusb302_chip *chip, + return ret; + chip->intr_togdone = false; + } else { ++ /* Datasheet says vconn MUST be off when toggling */ ++ WARN(chip->vconn_on, "Vconn is on during toggle start"); + /* unmask TOGDONE interrupt */ + ret = fusb302_i2c_clear_bits(chip, FUSB_REG_MASKA, + FUSB_REG_MASKA_TOGDONE); +diff --git a/drivers/vfio/pci/vfio_pci_nvlink2.c b/drivers/vfio/pci/vfio_pci_nvlink2.c +index 32f695ffe128..50fe3c4f7feb 100644 +--- a/drivers/vfio/pci/vfio_pci_nvlink2.c ++++ b/drivers/vfio/pci/vfio_pci_nvlink2.c +@@ -472,6 +472,8 @@ int vfio_pci_ibm_npu2_init(struct vfio_pci_device *vdev) + return 0; + + free_exit: ++ if (data->base) ++ memunmap(data->base); + kfree(data); + + return ret; +diff --git 
a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c +index a3030cdf3c18..aa0e14d25ac3 100644 +--- a/drivers/vfio/vfio.c ++++ b/drivers/vfio/vfio.c +@@ -34,6 +34,7 @@ + #include + #include + #include ++#include + + #define DRIVER_VERSION "0.3" + #define DRIVER_AUTHOR "Alex Williamson " +@@ -904,30 +905,17 @@ void *vfio_device_data(struct vfio_device *device) + } + EXPORT_SYMBOL_GPL(vfio_device_data); + +-/* Given a referenced group, check if it contains the device */ +-static bool vfio_dev_present(struct vfio_group *group, struct device *dev) +-{ +- struct vfio_device *device; +- +- device = vfio_group_get_device(group, dev); +- if (!device) +- return false; +- +- vfio_device_put(device); +- return true; +-} +- + /* + * Decrement the device reference count and wait for the device to be + * removed. Open file descriptors for the device... */ + void *vfio_del_group_dev(struct device *dev) + { ++ DEFINE_WAIT_FUNC(wait, woken_wake_function); + struct vfio_device *device = dev_get_drvdata(dev); + struct vfio_group *group = device->group; + void *device_data = device->device_data; + struct vfio_unbound_dev *unbound; + unsigned int i = 0; +- long ret; + bool interrupted = false; + + /* +@@ -964,6 +952,8 @@ void *vfio_del_group_dev(struct device *dev) + * interval with counter to allow the driver to take escalating + * measures to release the device if it has the ability to do so. 
+ */ ++ add_wait_queue(&vfio.release_q, &wait); ++ + do { + device = vfio_group_get_device(group, dev); + if (!device) +@@ -975,12 +965,10 @@ void *vfio_del_group_dev(struct device *dev) + vfio_device_put(device); + + if (interrupted) { +- ret = wait_event_timeout(vfio.release_q, +- !vfio_dev_present(group, dev), HZ * 10); ++ wait_woken(&wait, TASK_UNINTERRUPTIBLE, HZ * 10); + } else { +- ret = wait_event_interruptible_timeout(vfio.release_q, +- !vfio_dev_present(group, dev), HZ * 10); +- if (ret == -ERESTARTSYS) { ++ wait_woken(&wait, TASK_INTERRUPTIBLE, HZ * 10); ++ if (signal_pending(current)) { + interrupted = true; + dev_warn(dev, + "Device is currently in use, task" +@@ -989,8 +977,10 @@ void *vfio_del_group_dev(struct device *dev) + current->comm, task_pid_nr(current)); + } + } +- } while (ret <= 0); + ++ } while (1); ++ ++ remove_wait_queue(&vfio.release_q, &wait); + /* + * In order to support multiple devices per group, devices can be + * plucked from the group while other devices in the group are still +diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c +index c59b23f6e9ba..a9c69ae30878 100644 +--- a/drivers/video/fbdev/core/fbcon.c ++++ b/drivers/video/fbdev/core/fbcon.c +@@ -1069,7 +1069,7 @@ static void fbcon_init(struct vc_data *vc, int init) + + cap = info->flags; + +- if (console_loglevel <= CONSOLE_LOGLEVEL_QUIET) ++ if (logo_shown < 0 && console_loglevel <= CONSOLE_LOGLEVEL_QUIET) + logo_shown = FBCON_LOGO_DONTSHOW; + + if (vc != svc || logo_shown == FBCON_LOGO_DONTSHOW || +diff --git a/drivers/video/fbdev/hgafb.c b/drivers/video/fbdev/hgafb.c +index 463028543173..59e1cae57948 100644 +--- a/drivers/video/fbdev/hgafb.c ++++ b/drivers/video/fbdev/hgafb.c +@@ -285,6 +285,8 @@ static int hga_card_detect(void) + hga_vram_len = 0x08000; + + hga_vram = ioremap(0xb0000, hga_vram_len); ++ if (!hga_vram) ++ goto error; + + if (request_region(0x3b0, 12, "hgafb")) + release_io_ports = 1; +diff --git 
a/drivers/video/fbdev/imsttfb.c b/drivers/video/fbdev/imsttfb.c +index 4b9615e4ce74..35bba3c2036d 100644 +--- a/drivers/video/fbdev/imsttfb.c ++++ b/drivers/video/fbdev/imsttfb.c +@@ -1515,6 +1515,11 @@ static int imsttfb_probe(struct pci_dev *pdev, const struct pci_device_id *ent) + info->fix.smem_start = addr; + info->screen_base = (__u8 *)ioremap(addr, par->ramdac == IBM ? + 0x400000 : 0x800000); ++ if (!info->screen_base) { ++ release_mem_region(addr, size); ++ framebuffer_release(info); ++ return -ENOMEM; ++ } + info->fix.mmio_start = addr + 0x800000; + par->dc_regs = ioremap(addr + 0x800000, 0x1000); + par->cmap_regs_phys = addr + 0x840000; +diff --git a/drivers/watchdog/Kconfig b/drivers/watchdog/Kconfig +index 242eea859637..fa325b49672e 100644 +--- a/drivers/watchdog/Kconfig ++++ b/drivers/watchdog/Kconfig +@@ -2028,6 +2028,7 @@ comment "Watchdog Pretimeout Governors" + + config WATCHDOG_PRETIMEOUT_GOV + bool "Enable watchdog pretimeout governors" ++ depends on WATCHDOG_CORE + help + The option allows to select watchdog pretimeout governors. 
+ +diff --git a/drivers/watchdog/imx2_wdt.c b/drivers/watchdog/imx2_wdt.c +index 2b52514eaa86..7e7bdcbbc741 100644 +--- a/drivers/watchdog/imx2_wdt.c ++++ b/drivers/watchdog/imx2_wdt.c +@@ -178,8 +178,10 @@ static void __imx2_wdt_set_timeout(struct watchdog_device *wdog, + static int imx2_wdt_set_timeout(struct watchdog_device *wdog, + unsigned int new_timeout) + { +- __imx2_wdt_set_timeout(wdog, new_timeout); ++ unsigned int actual; + ++ actual = min(new_timeout, wdog->max_hw_heartbeat_ms * 1000); ++ __imx2_wdt_set_timeout(wdog, actual); + wdog->timeout = new_timeout; + return 0; + } +diff --git a/fs/configfs/dir.c b/fs/configfs/dir.c +index 39843fa7e11b..920d350df37b 100644 +--- a/fs/configfs/dir.c ++++ b/fs/configfs/dir.c +@@ -1755,12 +1755,19 @@ int configfs_register_group(struct config_group *parent_group, + + inode_lock_nested(d_inode(parent), I_MUTEX_PARENT); + ret = create_default_group(parent_group, group); +- if (!ret) { +- spin_lock(&configfs_dirent_lock); +- configfs_dir_set_ready(group->cg_item.ci_dentry->d_fsdata); +- spin_unlock(&configfs_dirent_lock); +- } ++ if (ret) ++ goto err_out; ++ ++ spin_lock(&configfs_dirent_lock); ++ configfs_dir_set_ready(group->cg_item.ci_dentry->d_fsdata); ++ spin_unlock(&configfs_dirent_lock); ++ inode_unlock(d_inode(parent)); ++ return 0; ++err_out: + inode_unlock(d_inode(parent)); ++ mutex_lock(&subsys->su_mutex); ++ unlink_group(group); ++ mutex_unlock(&subsys->su_mutex); + return ret; + } + EXPORT_SYMBOL(configfs_register_group); +diff --git a/fs/dax.c b/fs/dax.c +index 83009875308c..f74386293632 100644 +--- a/fs/dax.c ++++ b/fs/dax.c +@@ -814,7 +814,7 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index, + goto unlock_pmd; + + flush_cache_page(vma, address, pfn); +- pmd = pmdp_huge_clear_flush(vma, address, pmdp); ++ pmd = pmdp_invalidate(vma, address, pmdp); + pmd = pmd_wrprotect(pmd); + pmd = pmd_mkclean(pmd); + set_pmd_at(vma->vm_mm, address, pmdp, pmd); +diff --git 
a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c +index a98e1b02279e..935ebdb9cf47 100644 +--- a/fs/f2fs/checkpoint.c ++++ b/fs/f2fs/checkpoint.c +@@ -1009,13 +1009,11 @@ retry: + if (inode) { + unsigned long cur_ino = inode->i_ino; + +- if (is_dir) +- F2FS_I(inode)->cp_task = current; ++ F2FS_I(inode)->cp_task = current; + + filemap_fdatawrite(inode->i_mapping); + +- if (is_dir) +- F2FS_I(inode)->cp_task = NULL; ++ F2FS_I(inode)->cp_task = NULL; + + iput(inode); + /* We need to give cpu to another writers. */ +diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c +index d87dfa5aa112..9d3c11e09a03 100644 +--- a/fs/f2fs/data.c ++++ b/fs/f2fs/data.c +@@ -2038,7 +2038,8 @@ out: + } + + unlock_page(page); +- if (!S_ISDIR(inode->i_mode) && !IS_NOQUOTA(inode)) ++ if (!S_ISDIR(inode->i_mode) && !IS_NOQUOTA(inode) && ++ !F2FS_I(inode)->cp_task) + f2fs_balance_fs(sbi, need_balance_fs); + + if (unlikely(f2fs_cp_error(sbi))) { +diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h +index 7bea1bc6589f..e2cf567dcbd7 100644 +--- a/fs/f2fs/f2fs.h ++++ b/fs/f2fs/f2fs.h +@@ -1789,6 +1789,7 @@ enospc: + return -ENOSPC; + } + ++void f2fs_msg(struct super_block *sb, const char *level, const char *fmt, ...); + static inline void dec_valid_block_count(struct f2fs_sb_info *sbi, + struct inode *inode, + block_t count) +@@ -1797,13 +1798,21 @@ static inline void dec_valid_block_count(struct f2fs_sb_info *sbi, + + spin_lock(&sbi->stat_lock); + f2fs_bug_on(sbi, sbi->total_valid_block_count < (block_t) count); +- f2fs_bug_on(sbi, inode->i_blocks < sectors); + sbi->total_valid_block_count -= (block_t)count; + if (sbi->reserved_blocks && + sbi->current_reserved_blocks < sbi->reserved_blocks) + sbi->current_reserved_blocks = min(sbi->reserved_blocks, + sbi->current_reserved_blocks + count); + spin_unlock(&sbi->stat_lock); ++ if (unlikely(inode->i_blocks < sectors)) { ++ f2fs_msg(sbi->sb, KERN_WARNING, ++ "Inconsistent i_blocks, ino:%lu, iblocks:%llu, sectors:%llu", ++ inode->i_ino, ++ (unsigned long 
long)inode->i_blocks, ++ (unsigned long long)sectors); ++ set_sbi_flag(sbi, SBI_NEED_FSCK); ++ return; ++ } + f2fs_i_blocks_write(inode, count, false, true); + } + +@@ -2020,7 +2029,6 @@ static inline void dec_valid_node_count(struct f2fs_sb_info *sbi, + + f2fs_bug_on(sbi, !sbi->total_valid_block_count); + f2fs_bug_on(sbi, !sbi->total_valid_node_count); +- f2fs_bug_on(sbi, !is_inode && !inode->i_blocks); + + sbi->total_valid_node_count--; + sbi->total_valid_block_count--; +@@ -2030,10 +2038,19 @@ static inline void dec_valid_node_count(struct f2fs_sb_info *sbi, + + spin_unlock(&sbi->stat_lock); + +- if (is_inode) ++ if (is_inode) { + dquot_free_inode(inode); +- else ++ } else { ++ if (unlikely(inode->i_blocks == 0)) { ++ f2fs_msg(sbi->sb, KERN_WARNING, ++ "Inconsistent i_blocks, ino:%lu, iblocks:%llu", ++ inode->i_ino, ++ (unsigned long long)inode->i_blocks); ++ set_sbi_flag(sbi, SBI_NEED_FSCK); ++ return; ++ } + f2fs_i_blocks_write(inode, 1, false, true); ++ } + } + + static inline unsigned int valid_node_count(struct f2fs_sb_info *sbi) +@@ -2570,7 +2587,9 @@ static inline void *inline_xattr_addr(struct inode *inode, struct page *page) + + static inline int inline_xattr_size(struct inode *inode) + { +- return get_inline_xattr_addrs(inode) * sizeof(__le32); ++ if (f2fs_has_inline_xattr(inode)) ++ return get_inline_xattr_addrs(inode) * sizeof(__le32); ++ return 0; + } + + static inline int f2fs_has_inline_data(struct inode *inode) +@@ -2817,7 +2836,6 @@ static inline void f2fs_update_iostat(struct f2fs_sb_info *sbi, + + bool f2fs_is_valid_blkaddr(struct f2fs_sb_info *sbi, + block_t blkaddr, int type); +-void f2fs_msg(struct super_block *sb, const char *level, const char *fmt, ...); + static inline void verify_blkaddr(struct f2fs_sb_info *sbi, + block_t blkaddr, int type) + { +diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c +index ab764bd106de..a66a8752e5f6 100644 +--- a/fs/f2fs/gc.c ++++ b/fs/f2fs/gc.c +@@ -1175,6 +1175,7 @@ static int do_garbage_collect(struct 
f2fs_sb_info *sbi, + "type [%d, %d] in SSA and SIT", + segno, type, GET_SUM_TYPE((&sum->footer))); + set_sbi_flag(sbi, SBI_NEED_FSCK); ++ f2fs_stop_checkpoint(sbi, false); + goto skip; + } + +diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c +index bb6a152310ef..404d2462a0fe 100644 +--- a/fs/f2fs/inline.c ++++ b/fs/f2fs/inline.c +@@ -420,6 +420,14 @@ static int f2fs_move_inline_dirents(struct inode *dir, struct page *ipage, + stat_dec_inline_dir(dir); + clear_inode_flag(dir, FI_INLINE_DENTRY); + ++ /* ++ * should retrieve reserved space which was used to keep ++ * inline_dentry's structure for backward compatibility. ++ */ ++ if (!f2fs_sb_has_flexible_inline_xattr(F2FS_I_SB(dir)) && ++ !f2fs_has_inline_xattr(dir)) ++ F2FS_I(dir)->i_inline_xattr_size = 0; ++ + f2fs_i_depth_write(dir, 1); + if (i_size_read(dir) < PAGE_SIZE) + f2fs_i_size_write(dir, PAGE_SIZE); +@@ -501,6 +509,15 @@ static int f2fs_move_rehashed_dirents(struct inode *dir, struct page *ipage, + + stat_dec_inline_dir(dir); + clear_inode_flag(dir, FI_INLINE_DENTRY); ++ ++ /* ++ * should retrieve reserved space which was used to keep ++ * inline_dentry's structure for backward compatibility. ++ */ ++ if (!f2fs_sb_has_flexible_inline_xattr(F2FS_I_SB(dir)) && ++ !f2fs_has_inline_xattr(dir)) ++ F2FS_I(dir)->i_inline_xattr_size = 0; ++ + kvfree(backup_dentry); + return 0; + recover: +diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c +index e7f2e8759315..b53952a15ffa 100644 +--- a/fs/f2fs/inode.c ++++ b/fs/f2fs/inode.c +@@ -177,8 +177,8 @@ bool f2fs_inode_chksum_verify(struct f2fs_sb_info *sbi, struct page *page) + + if (provided != calculated) + f2fs_msg(sbi->sb, KERN_WARNING, +- "checksum invalid, ino = %x, %x vs. %x", +- ino_of_node(page), provided, calculated); ++ "checksum invalid, nid = %lu, ino_of_node = %x, %x vs. 
%x", ++ page->index, ino_of_node(page), provided, calculated); + + return provided == calculated; + } +@@ -488,6 +488,7 @@ make_now: + return inode; + + bad_inode: ++ f2fs_inode_synced(inode); + iget_failed(inode); + trace_f2fs_iget_exit(inode, ret); + return ERR_PTR(ret); +diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c +index 3f99ab288695..e29d5f6735ae 100644 +--- a/fs/f2fs/node.c ++++ b/fs/f2fs/node.c +@@ -1179,8 +1179,14 @@ int f2fs_remove_inode_page(struct inode *inode) + f2fs_put_dnode(&dn); + return -EIO; + } +- f2fs_bug_on(F2FS_I_SB(inode), +- inode->i_blocks != 0 && inode->i_blocks != 8); ++ ++ if (unlikely(inode->i_blocks != 0 && inode->i_blocks != 8)) { ++ f2fs_msg(F2FS_I_SB(inode)->sb, KERN_WARNING, ++ "Inconsistent i_blocks, ino:%lu, iblocks:%llu", ++ inode->i_ino, ++ (unsigned long long)inode->i_blocks); ++ set_sbi_flag(F2FS_I_SB(inode), SBI_NEED_FSCK); ++ } + + /* will put inode & node pages */ + err = truncate_node(&dn); +@@ -1275,9 +1281,10 @@ static int read_node_page(struct page *page, int op_flags) + int err; + + if (PageUptodate(page)) { +-#ifdef CONFIG_F2FS_CHECK_FS +- f2fs_bug_on(sbi, !f2fs_inode_chksum_verify(sbi, page)); +-#endif ++ if (!f2fs_inode_chksum_verify(sbi, page)) { ++ ClearPageUptodate(page); ++ return -EBADMSG; ++ } + return LOCKED_PAGE; + } + +@@ -2076,6 +2083,9 @@ static bool add_free_nid(struct f2fs_sb_info *sbi, + if (unlikely(nid == 0)) + return false; + ++ if (unlikely(f2fs_check_nid_range(sbi, nid))) ++ return false; ++ + i = f2fs_kmem_cache_alloc(free_nid_slab, GFP_NOFS); + i->nid = nid; + i->state = FREE_NID; +diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c +index e3883db868d8..b14c718139a9 100644 +--- a/fs/f2fs/recovery.c ++++ b/fs/f2fs/recovery.c +@@ -325,8 +325,10 @@ static int find_fsync_dnodes(struct f2fs_sb_info *sbi, struct list_head *head, + break; + } + +- if (!is_recoverable_dnode(page)) ++ if (!is_recoverable_dnode(page)) { ++ f2fs_put_page(page, 1); + break; ++ } + + if (!is_fsync_dnode(page)) + goto next; 
+@@ -338,8 +340,10 @@ static int find_fsync_dnodes(struct f2fs_sb_info *sbi, struct list_head *head, + if (!check_only && + IS_INODE(page) && is_dent_dnode(page)) { + err = f2fs_recover_inode_page(sbi, page); +- if (err) ++ if (err) { ++ f2fs_put_page(page, 1); + break; ++ } + quota_inode = true; + } + +@@ -355,6 +359,7 @@ static int find_fsync_dnodes(struct f2fs_sb_info *sbi, struct list_head *head, + err = 0; + goto next; + } ++ f2fs_put_page(page, 1); + break; + } + } +@@ -370,6 +375,7 @@ next: + "%s: detect looped node chain, " + "blkaddr:%u, next:%u", + __func__, blkaddr, next_blkaddr_of_node(page)); ++ f2fs_put_page(page, 1); + err = -EINVAL; + break; + } +@@ -380,7 +386,6 @@ next: + + f2fs_ra_meta_pages_cond(sbi, blkaddr); + } +- f2fs_put_page(page, 1); + return err; + } + +@@ -546,7 +551,15 @@ retry_dn: + goto err; + + f2fs_bug_on(sbi, ni.ino != ino_of_node(page)); +- f2fs_bug_on(sbi, ofs_of_node(dn.node_page) != ofs_of_node(page)); ++ ++ if (ofs_of_node(dn.node_page) != ofs_of_node(page)) { ++ f2fs_msg(sbi->sb, KERN_WARNING, ++ "Inconsistent ofs_of_node, ino:%lu, ofs:%u, %u", ++ inode->i_ino, ofs_of_node(dn.node_page), ++ ofs_of_node(page)); ++ err = -EFAULT; ++ goto err; ++ } + + for (; start < end; start++, dn.ofs_in_node++) { + block_t src, dest; +@@ -666,8 +679,10 @@ static int recover_data(struct f2fs_sb_info *sbi, struct list_head *inode_list, + */ + if (IS_INODE(page)) { + err = recover_inode(entry->inode, page); +- if (err) ++ if (err) { ++ f2fs_put_page(page, 1); + break; ++ } + } + if (entry->last_dentry == blkaddr) { + err = recover_dentry(entry->inode, page, dir_list); +diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c +index ddfa2eb7ec58..3d6efbfe180f 100644 +--- a/fs/f2fs/segment.c ++++ b/fs/f2fs/segment.c +@@ -3188,13 +3188,18 @@ int f2fs_inplace_write_data(struct f2fs_io_info *fio) + { + int err; + struct f2fs_sb_info *sbi = fio->sbi; ++ unsigned int segno; + + fio->new_blkaddr = fio->old_blkaddr; + /* i/o temperature is needed for passing 
down write hints */ + __get_segment_type(fio); + +- f2fs_bug_on(sbi, !IS_DATASEG(get_seg_entry(sbi, +- GET_SEGNO(sbi, fio->new_blkaddr))->type)); ++ segno = GET_SEGNO(sbi, fio->new_blkaddr); ++ ++ if (!IS_DATASEG(get_seg_entry(sbi, segno)->type)) { ++ set_sbi_flag(sbi, SBI_NEED_FSCK); ++ return -EFAULT; ++ } + + stat_inc_inplace_blocks(fio->sbi); + +diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h +index 5c7ed0442d6e..6f48e0763279 100644 +--- a/fs/f2fs/segment.h ++++ b/fs/f2fs/segment.h +@@ -672,7 +672,6 @@ static inline void verify_block_addr(struct f2fs_io_info *fio, block_t blk_addr) + static inline int check_block_count(struct f2fs_sb_info *sbi, + int segno, struct f2fs_sit_entry *raw_sit) + { +-#ifdef CONFIG_F2FS_CHECK_FS + bool is_valid = test_bit_le(0, raw_sit->valid_map) ? true : false; + int valid_blocks = 0; + int cur_pos = 0, next_pos; +@@ -699,7 +698,7 @@ static inline int check_block_count(struct f2fs_sb_info *sbi, + set_sbi_flag(sbi, SBI_NEED_FSCK); + return -EINVAL; + } +-#endif ++ + /* check segment usage, and check boundary of a given segment number */ + if (unlikely(GET_SIT_VBLOCKS(raw_sit) > sbi->blocks_per_seg + || segno > TOTAL_SEGS(sbi) - 1)) { +diff --git a/fs/fat/file.c b/fs/fat/file.c +index b3bed32946b1..0e3ed79fcc3f 100644 +--- a/fs/fat/file.c ++++ b/fs/fat/file.c +@@ -193,12 +193,17 @@ static int fat_file_release(struct inode *inode, struct file *filp) + int fat_file_fsync(struct file *filp, loff_t start, loff_t end, int datasync) + { + struct inode *inode = filp->f_mapping->host; +- int res, err; ++ int err; ++ ++ err = __generic_file_fsync(filp, start, end, datasync); ++ if (err) ++ return err; + +- res = generic_file_fsync(filp, start, end, datasync); + err = sync_mapping_buffers(MSDOS_SB(inode->i_sb)->fat_inode->i_mapping); ++ if (err) ++ return err; + +- return res ? 
res : err; ++ return blkdev_issue_flush(inode->i_sb->s_bdev, GFP_KERNEL, NULL); + } + + +diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c +index 9971a35cf1ef..1e2efd873201 100644 +--- a/fs/fuse/dev.c ++++ b/fs/fuse/dev.c +@@ -1749,7 +1749,7 @@ static int fuse_retrieve(struct fuse_conn *fc, struct inode *inode, + offset = outarg->offset & ~PAGE_MASK; + file_size = i_size_read(inode); + +- num = outarg->size; ++ num = min(outarg->size, fc->max_write); + if (outarg->offset > file_size) + num = 0; + else if (outarg->offset + num > file_size) +diff --git a/fs/io_uring.c b/fs/io_uring.c +index 30a5687a17b6..28269a0c5037 100644 +--- a/fs/io_uring.c ++++ b/fs/io_uring.c +@@ -2330,10 +2330,11 @@ static int io_sq_offload_start(struct io_ring_ctx *ctx, + ctx->sq_thread_idle = HZ; + + if (p->flags & IORING_SETUP_SQ_AFF) { +- int cpu = array_index_nospec(p->sq_thread_cpu, +- nr_cpu_ids); ++ int cpu = p->sq_thread_cpu; + + ret = -EINVAL; ++ if (cpu >= nr_cpu_ids) ++ goto err; + if (!cpu_online(cpu)) + goto err; + +diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c +index 3de42a729093..a3a3455826aa 100644 +--- a/fs/nfsd/nfs4xdr.c ++++ b/fs/nfsd/nfs4xdr.c +@@ -2420,8 +2420,10 @@ nfsd4_encode_fattr(struct xdr_stream *xdr, struct svc_fh *fhp, + __be32 status; + int err; + struct nfs4_acl *acl = NULL; ++#ifdef CONFIG_NFSD_V4_SECURITY_LABEL + void *context = NULL; + int contextlen; ++#endif + bool contextsupport = false; + struct nfsd4_compoundres *resp = rqstp->rq_resp; + u32 minorversion = resp->cstate.minorversion; +@@ -2906,12 +2908,14 @@ out_acl: + *p++ = cpu_to_be32(NFS4_CHANGE_TYPE_IS_TIME_METADATA); + } + ++#ifdef CONFIG_NFSD_V4_SECURITY_LABEL + if (bmval2 & FATTR4_WORD2_SECURITY_LABEL) { + status = nfsd4_encode_security_label(xdr, rqstp, context, + contextlen); + if (status) + goto out; + } ++#endif + + attrlen = htonl(xdr->buf->len - attrlen_offset - 4); + write_bytes_to_xdr_buf(xdr->buf, attrlen_offset, &attrlen, 4); +diff --git a/fs/nfsd/vfs.h b/fs/nfsd/vfs.h +index 
a7e107309f76..db351247892d 100644 +--- a/fs/nfsd/vfs.h ++++ b/fs/nfsd/vfs.h +@@ -120,8 +120,11 @@ void nfsd_put_raparams(struct file *file, struct raparms *ra); + + static inline int fh_want_write(struct svc_fh *fh) + { +- int ret = mnt_want_write(fh->fh_export->ex_path.mnt); ++ int ret; + ++ if (fh->fh_want_write) ++ return 0; ++ ret = mnt_want_write(fh->fh_export->ex_path.mnt); + if (!ret) + fh->fh_want_write = true; + return ret; +diff --git a/fs/overlayfs/file.c b/fs/overlayfs/file.c +index 50e4407398d8..540a8b845145 100644 +--- a/fs/overlayfs/file.c ++++ b/fs/overlayfs/file.c +@@ -11,6 +11,7 @@ + #include + #include + #include ++#include + #include "overlayfs.h" + + static char ovl_whatisit(struct inode *inode, struct inode *realinode) +@@ -29,10 +30,11 @@ static struct file *ovl_open_realfile(const struct file *file, + struct inode *inode = file_inode(file); + struct file *realfile; + const struct cred *old_cred; ++ int flags = file->f_flags | O_NOATIME | FMODE_NONOTIFY; + + old_cred = ovl_override_creds(inode->i_sb); +- realfile = open_with_fake_path(&file->f_path, file->f_flags | O_NOATIME, +- realinode, current_cred()); ++ realfile = open_with_fake_path(&file->f_path, flags, realinode, ++ current_cred()); + revert_creds(old_cred); + + pr_debug("open(%p[%pD2/%c], 0%o) -> (%p, 0%o)\n", +@@ -50,7 +52,7 @@ static int ovl_change_flags(struct file *file, unsigned int flags) + int err; + + /* No atime modificaton on underlying */ +- flags |= O_NOATIME; ++ flags |= O_NOATIME | FMODE_NONOTIFY; + + /* If some flag changed that cannot be changed then something's amiss */ + if (WARN_ON((file->f_flags ^ flags) & ~OVL_SETFL_MASK)) +@@ -144,11 +146,47 @@ static int ovl_release(struct inode *inode, struct file *file) + + static loff_t ovl_llseek(struct file *file, loff_t offset, int whence) + { +- struct inode *realinode = ovl_inode_real(file_inode(file)); ++ struct inode *inode = file_inode(file); ++ struct fd real; ++ const struct cred *old_cred; ++ ssize_t ret; ++ ++ 
/* ++ * The two special cases below do not need to involve real fs, ++ * so we can optimizing concurrent callers. ++ */ ++ if (offset == 0) { ++ if (whence == SEEK_CUR) ++ return file->f_pos; ++ ++ if (whence == SEEK_SET) ++ return vfs_setpos(file, 0, 0); ++ } ++ ++ ret = ovl_real_fdget(file, &real); ++ if (ret) ++ return ret; ++ ++ /* ++ * Overlay file f_pos is the master copy that is preserved ++ * through copy up and modified on read/write, but only real ++ * fs knows how to SEEK_HOLE/SEEK_DATA and real fs may impose ++ * limitations that are more strict than ->s_maxbytes for specific ++ * files, so we use the real file to perform seeks. ++ */ ++ inode_lock(inode); ++ real.file->f_pos = file->f_pos; ++ ++ old_cred = ovl_override_creds(inode->i_sb); ++ ret = vfs_llseek(real.file, offset, whence); ++ revert_creds(old_cred); ++ ++ file->f_pos = real.file->f_pos; ++ inode_unlock(inode); ++ ++ fdput(real); + +- return generic_file_llseek_size(file, offset, whence, +- realinode->i_sb->s_maxbytes, +- i_size_read(realinode)); ++ return ret; + } + + static void ovl_file_accessed(struct file *file) +@@ -371,10 +409,68 @@ static long ovl_real_ioctl(struct file *file, unsigned int cmd, + return ret; + } + +-static long ovl_ioctl(struct file *file, unsigned int cmd, unsigned long arg) ++static unsigned int ovl_get_inode_flags(struct inode *inode) ++{ ++ unsigned int flags = READ_ONCE(inode->i_flags); ++ unsigned int ovl_iflags = 0; ++ ++ if (flags & S_SYNC) ++ ovl_iflags |= FS_SYNC_FL; ++ if (flags & S_APPEND) ++ ovl_iflags |= FS_APPEND_FL; ++ if (flags & S_IMMUTABLE) ++ ovl_iflags |= FS_IMMUTABLE_FL; ++ if (flags & S_NOATIME) ++ ovl_iflags |= FS_NOATIME_FL; ++ ++ return ovl_iflags; ++} ++ ++static long ovl_ioctl_set_flags(struct file *file, unsigned long arg) + { + long ret; + struct inode *inode = file_inode(file); ++ unsigned int flags; ++ unsigned int old_flags; ++ ++ if (!inode_owner_or_capable(inode)) ++ return -EACCES; ++ ++ if (get_user(flags, (int __user *) arg)) ++ 
return -EFAULT; ++ ++ ret = mnt_want_write_file(file); ++ if (ret) ++ return ret; ++ ++ inode_lock(inode); ++ ++ /* Check the capability before cred override */ ++ ret = -EPERM; ++ old_flags = ovl_get_inode_flags(inode); ++ if (((flags ^ old_flags) & (FS_APPEND_FL | FS_IMMUTABLE_FL)) && ++ !capable(CAP_LINUX_IMMUTABLE)) ++ goto unlock; ++ ++ ret = ovl_maybe_copy_up(file_dentry(file), O_WRONLY); ++ if (ret) ++ goto unlock; ++ ++ ret = ovl_real_ioctl(file, FS_IOC_SETFLAGS, arg); ++ ++ ovl_copyflags(ovl_inode_real(inode), inode); ++unlock: ++ inode_unlock(inode); ++ ++ mnt_drop_write_file(file); ++ ++ return ret; ++ ++} ++ ++static long ovl_ioctl(struct file *file, unsigned int cmd, unsigned long arg) ++{ ++ long ret; + + switch (cmd) { + case FS_IOC_GETFLAGS: +@@ -382,23 +478,7 @@ static long ovl_ioctl(struct file *file, unsigned int cmd, unsigned long arg) + break; + + case FS_IOC_SETFLAGS: +- if (!inode_owner_or_capable(inode)) +- return -EACCES; +- +- ret = mnt_want_write_file(file); +- if (ret) +- return ret; +- +- ret = ovl_maybe_copy_up(file_dentry(file), O_WRONLY); +- if (!ret) { +- ret = ovl_real_ioctl(file, cmd, arg); +- +- inode_lock(inode); +- ovl_copyflags(ovl_inode_real(inode), inode); +- inode_unlock(inode); +- } +- +- mnt_drop_write_file(file); ++ ret = ovl_ioctl_set_flags(file, arg); + break; + + default: +diff --git a/include/linux/pwm.h b/include/linux/pwm.h +index b628abfffacc..eaa5c6e3fc9f 100644 +--- a/include/linux/pwm.h ++++ b/include/linux/pwm.h +@@ -596,7 +596,6 @@ static inline void pwm_remove_table(struct pwm_lookup *table, size_t num) + #ifdef CONFIG_PWM_SYSFS + void pwmchip_sysfs_export(struct pwm_chip *chip); + void pwmchip_sysfs_unexport(struct pwm_chip *chip); +-void pwmchip_sysfs_unexport_children(struct pwm_chip *chip); + #else + static inline void pwmchip_sysfs_export(struct pwm_chip *chip) + { +@@ -605,10 +604,6 @@ static inline void pwmchip_sysfs_export(struct pwm_chip *chip) + static inline void pwmchip_sysfs_unexport(struct 
pwm_chip *chip) + { + } +- +-static inline void pwmchip_sysfs_unexport_children(struct pwm_chip *chip) +-{ +-} + #endif /* CONFIG_PWM_SYSFS */ + + #endif /* __LINUX_PWM_H */ +diff --git a/include/media/v4l2-ctrls.h b/include/media/v4l2-ctrls.h +index e5cae37ced2d..200f8a66ecaa 100644 +--- a/include/media/v4l2-ctrls.h ++++ b/include/media/v4l2-ctrls.h +@@ -1127,7 +1127,7 @@ __poll_t v4l2_ctrl_poll(struct file *file, struct poll_table_struct *wait); + * applying control values in a request is only applicable to memory-to-memory + * devices. + */ +-void v4l2_ctrl_request_setup(struct media_request *req, ++int v4l2_ctrl_request_setup(struct media_request *req, + struct v4l2_ctrl_handler *parent); + + /** +diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h +index 05b1b96f4d9e..094e61e07030 100644 +--- a/include/net/bluetooth/hci_core.h ++++ b/include/net/bluetooth/hci_core.h +@@ -190,9 +190,6 @@ struct adv_info { + + #define HCI_MAX_SHORT_NAME_LENGTH 10 + +-/* Min encryption key size to match with SMP */ +-#define HCI_MIN_ENC_KEY_SIZE 7 +- + /* Default LE RPA expiry time, 15 minutes */ + #define HCI_DEFAULT_RPA_TIMEOUT (15 * 60) + +diff --git a/init/initramfs.c b/init/initramfs.c +index 4749e1115eef..c322e1099f43 100644 +--- a/init/initramfs.c ++++ b/init/initramfs.c +@@ -612,13 +612,12 @@ static int __init populate_rootfs(void) + printk(KERN_INFO "Trying to unpack rootfs image as initramfs...\n"); + err = unpack_to_rootfs((char *)initrd_start, + initrd_end - initrd_start); +- if (!err) { +- free_initrd(); ++ if (!err) + goto done; +- } else { +- clean_rootfs(); +- unpack_to_rootfs(__initramfs_start, __initramfs_size); +- } ++ ++ clean_rootfs(); ++ unpack_to_rootfs(__initramfs_start, __initramfs_size); ++ + printk(KERN_INFO "rootfs image is not initramfs (%s)" + "; looks like an initrd\n", err); + fd = ksys_open("/initrd.image", +@@ -632,7 +631,6 @@ static int __init populate_rootfs(void) + written, initrd_end - initrd_start); + + 
ksys_close(fd); +- free_initrd(); + } + done: + /* empty statement */; +@@ -642,9 +640,9 @@ static int __init populate_rootfs(void) + initrd_end - initrd_start); + if (err) + printk(KERN_EMERG "Initramfs unpacking failed: %s\n", err); +- free_initrd(); + #endif + } ++ free_initrd(); + flush_delayed_fput(); + return 0; + } +diff --git a/ipc/mqueue.c b/ipc/mqueue.c +index aea30530c472..127ba1e8950b 100644 +--- a/ipc/mqueue.c ++++ b/ipc/mqueue.c +@@ -436,7 +436,8 @@ static void mqueue_evict_inode(struct inode *inode) + struct user_struct *user; + unsigned long mq_bytes, mq_treesize; + struct ipc_namespace *ipc_ns; +- struct msg_msg *msg; ++ struct msg_msg *msg, *nmsg; ++ LIST_HEAD(tmp_msg); + + clear_inode(inode); + +@@ -447,10 +448,15 @@ static void mqueue_evict_inode(struct inode *inode) + info = MQUEUE_I(inode); + spin_lock(&info->lock); + while ((msg = msg_get(info)) != NULL) +- free_msg(msg); ++ list_add_tail(&msg->m_list, &tmp_msg); + kfree(info->node_cache); + spin_unlock(&info->lock); + ++ list_for_each_entry_safe(msg, nmsg, &tmp_msg, m_list) { ++ list_del(&msg->m_list); ++ free_msg(msg); ++ } ++ + /* Total amount of bytes accounted for the mqueue */ + mq_treesize = info->attr.mq_maxmsg * sizeof(struct msg_msg) + + min_t(unsigned int, info->attr.mq_maxmsg, MQ_PRIO_MAX) * +diff --git a/ipc/msgutil.c b/ipc/msgutil.c +index 84598025a6ad..e65593742e2b 100644 +--- a/ipc/msgutil.c ++++ b/ipc/msgutil.c +@@ -18,6 +18,7 @@ + #include + #include + #include ++#include + + #include "util.h" + +@@ -64,6 +65,9 @@ static struct msg_msg *alloc_msg(size_t len) + pseg = &msg->next; + while (len > 0) { + struct msg_msgseg *seg; ++ ++ cond_resched(); ++ + alen = min(len, DATALEN_SEG); + seg = kmalloc(sizeof(*seg) + alen, GFP_KERNEL_ACCOUNT); + if (seg == NULL) +@@ -176,6 +180,8 @@ void free_msg(struct msg_msg *msg) + kfree(msg); + while (seg != NULL) { + struct msg_msgseg *tmp = seg->next; ++ ++ cond_resched(); + kfree(seg); + seg = tmp; + } +diff --git a/kernel/bpf/verifier.c 
b/kernel/bpf/verifier.c +index 09d5d972c9ff..950fac024fbb 100644 +--- a/kernel/bpf/verifier.c ++++ b/kernel/bpf/verifier.c +@@ -7296,7 +7296,7 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env) + insn->dst_reg, + shift); + insn_buf[cnt++] = BPF_ALU64_IMM(BPF_AND, insn->dst_reg, +- (1 << size * 8) - 1); ++ (1ULL << size * 8) - 1); + } + } + +diff --git a/kernel/sys.c b/kernel/sys.c +index 12df0e5434b8..bdbfe8d37418 100644 +--- a/kernel/sys.c ++++ b/kernel/sys.c +@@ -1924,7 +1924,7 @@ static int validate_prctl_map(struct prctl_mm_map *prctl_map) + ((unsigned long)prctl_map->__m1 __op \ + (unsigned long)prctl_map->__m2) ? 0 : -EINVAL + error = __prctl_check_order(start_code, <, end_code); +- error |= __prctl_check_order(start_data, <, end_data); ++ error |= __prctl_check_order(start_data,<=, end_data); + error |= __prctl_check_order(start_brk, <=, brk); + error |= __prctl_check_order(arg_start, <=, arg_end); + error |= __prctl_check_order(env_start, <=, env_end); +diff --git a/kernel/sysctl.c b/kernel/sysctl.c +index c9ec050bcf46..387efbaf464a 100644 +--- a/kernel/sysctl.c ++++ b/kernel/sysctl.c +@@ -2874,8 +2874,10 @@ static int __do_proc_doulongvec_minmax(void *data, struct ctl_table *table, int + if (neg) + continue; + val = convmul * val / convdiv; +- if ((min && val < *min) || (max && val > *max)) +- continue; ++ if ((min && val < *min) || (max && val > *max)) { ++ err = -EINVAL; ++ break; ++ } + *i = val; + } else { + val = convdiv * (*i) / convmul; +diff --git a/kernel/time/ntp.c b/kernel/time/ntp.c +index 92a90014a925..f43d47c8c3b6 100644 +--- a/kernel/time/ntp.c ++++ b/kernel/time/ntp.c +@@ -690,7 +690,7 @@ static inline void process_adjtimex_modes(const struct __kernel_timex *txc, + time_constant = max(time_constant, 0l); + } + +- if (txc->modes & ADJ_TAI && txc->constant > 0) ++ if (txc->modes & ADJ_TAI && txc->constant >= 0) + *time_tai = txc->constant; + + if (txc->modes & ADJ_OFFSET) +diff --git a/mm/Kconfig b/mm/Kconfig +index 
25c71eb8a7db..2e6d24d783f7 100644 +--- a/mm/Kconfig ++++ b/mm/Kconfig +@@ -694,12 +694,12 @@ config DEV_PAGEMAP_OPS + + config HMM + bool ++ select MMU_NOTIFIER + select MIGRATE_VMA_HELPER + + config HMM_MIRROR + bool "HMM mirror CPU page table into a device page table" + depends on ARCH_HAS_HMM +- select MMU_NOTIFIER + select HMM + help + Select HMM_MIRROR if you want to mirror range of the CPU page table of a +diff --git a/mm/cma.c b/mm/cma.c +index bb2d333ffcb3..5e36d7418031 100644 +--- a/mm/cma.c ++++ b/mm/cma.c +@@ -106,8 +106,10 @@ static int __init cma_activate_area(struct cma *cma) + + cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL); + +- if (!cma->bitmap) ++ if (!cma->bitmap) { ++ cma->count = 0; + return -ENOMEM; ++ } + + WARN_ON_ONCE(!pfn_valid(pfn)); + zone = page_zone(pfn_to_page(pfn)); +@@ -367,23 +369,26 @@ err: + #ifdef CONFIG_CMA_DEBUG + static void cma_debug_show_areas(struct cma *cma) + { +- unsigned long next_zero_bit, next_set_bit; ++ unsigned long next_zero_bit, next_set_bit, nr_zero; + unsigned long start = 0; +- unsigned int nr_zero, nr_total = 0; ++ unsigned long nr_part, nr_total = 0; ++ unsigned long nbits = cma_bitmap_maxno(cma); + + mutex_lock(&cma->lock); + pr_info("number of available pages: "); + for (;;) { +- next_zero_bit = find_next_zero_bit(cma->bitmap, cma->count, start); +- if (next_zero_bit >= cma->count) ++ next_zero_bit = find_next_zero_bit(cma->bitmap, nbits, start); ++ if (next_zero_bit >= nbits) + break; +- next_set_bit = find_next_bit(cma->bitmap, cma->count, next_zero_bit); ++ next_set_bit = find_next_bit(cma->bitmap, nbits, next_zero_bit); + nr_zero = next_set_bit - next_zero_bit; +- pr_cont("%s%u@%lu", nr_total ? "+" : "", nr_zero, next_zero_bit); +- nr_total += nr_zero; ++ nr_part = nr_zero << cma->order_per_bit; ++ pr_cont("%s%lu@%lu", nr_total ? 
"+" : "", nr_part, ++ next_zero_bit); ++ nr_total += nr_part; + start = next_zero_bit + nr_zero; + } +- pr_cont("=> %u free of %lu total pages\n", nr_total, cma->count); ++ pr_cont("=> %lu free of %lu total pages\n", nr_total, cma->count); + mutex_unlock(&cma->lock); + } + #else +diff --git a/mm/cma_debug.c b/mm/cma_debug.c +index 8d7b2fd52225..a7dd9e8e10d5 100644 +--- a/mm/cma_debug.c ++++ b/mm/cma_debug.c +@@ -56,7 +56,7 @@ static int cma_maxchunk_get(void *data, u64 *val) + mutex_lock(&cma->lock); + for (;;) { + start = find_next_zero_bit(cma->bitmap, bitmap_maxno, end); +- if (start >= cma->count) ++ if (start >= bitmap_maxno) + break; + end = find_next_bit(cma->bitmap, bitmap_maxno, start); + maxchunk = max(end - start, maxchunk); +diff --git a/mm/compaction.c b/mm/compaction.c +index 368445cc71cf..2d7bb9eb07cd 100644 +--- a/mm/compaction.c ++++ b/mm/compaction.c +@@ -1164,7 +1164,9 @@ static bool suitable_migration_target(struct compact_control *cc, + static inline unsigned int + freelist_scan_limit(struct compact_control *cc) + { +- return (COMPACT_CLUSTER_MAX >> cc->fast_search_fail) + 1; ++ unsigned short shift = BITS_PER_LONG - 1; ++ ++ return (COMPACT_CLUSTER_MAX >> min(shift, cc->fast_search_fail)) + 1; + } + + /* +diff --git a/mm/hugetlb.c b/mm/hugetlb.c +index 5baf1f00ad42..5b4f00be325d 100644 +--- a/mm/hugetlb.c ++++ b/mm/hugetlb.c +@@ -1258,12 +1258,23 @@ void free_huge_page(struct page *page) + ClearPagePrivate(page); + + /* +- * A return code of zero implies that the subpool will be under its +- * minimum size if the reservation is not restored after page is free. +- * Therefore, force restore_reserve operation. ++ * If PagePrivate() was set on page, page allocation consumed a ++ * reservation. If the page was associated with a subpool, there ++ * would have been a page reserved in the subpool before allocation ++ * via hugepage_subpool_get_pages(). 
Since we are 'restoring' the ++ * reservtion, do not call hugepage_subpool_put_pages() as this will ++ * remove the reserved page from the subpool. + */ +- if (hugepage_subpool_put_pages(spool, 1) == 0) +- restore_reserve = true; ++ if (!restore_reserve) { ++ /* ++ * A return code of zero implies that the subpool will be ++ * under its minimum size if the reservation is not restored ++ * after page is free. Therefore, force restore_reserve ++ * operation. ++ */ ++ if (hugepage_subpool_put_pages(spool, 1) == 0) ++ restore_reserve = true; ++ } + + spin_lock(&hugetlb_lock); + clear_page_huge_active(page); +diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c +index b236069ff0d8..547e48addced 100644 +--- a/mm/memory_hotplug.c ++++ b/mm/memory_hotplug.c +@@ -561,20 +561,6 @@ int __remove_pages(struct zone *zone, unsigned long phys_start_pfn, + if (is_dev_zone(zone)) { + if (altmap) + map_offset = vmem_altmap_offset(altmap); +- } else { +- resource_size_t start, size; +- +- start = phys_start_pfn << PAGE_SHIFT; +- size = nr_pages * PAGE_SIZE; +- +- ret = release_mem_region_adjustable(&iomem_resource, start, +- size); +- if (ret) { +- resource_size_t endres = start + size - 1; +- +- pr_warn("Unable to release resource <%pa-%pa> (%d)\n", +- &start, &endres, ret); +- } + } + + clear_zone_contiguous(zone); +@@ -714,7 +700,7 @@ static void node_states_check_changes_online(unsigned long nr_pages, + if (zone_idx(zone) <= ZONE_NORMAL && !node_state(nid, N_NORMAL_MEMORY)) + arg->status_change_nid_normal = nid; + #ifdef CONFIG_HIGHMEM +- if (zone_idx(zone) <= N_HIGH_MEMORY && !node_state(nid, N_HIGH_MEMORY)) ++ if (zone_idx(zone) <= ZONE_HIGHMEM && !node_state(nid, N_HIGH_MEMORY)) + arg->status_change_nid_high = nid; + #endif + } +@@ -1843,6 +1829,26 @@ void try_offline_node(int nid) + } + EXPORT_SYMBOL(try_offline_node); + ++static void __release_memory_resource(resource_size_t start, ++ resource_size_t size) ++{ ++ int ret; ++ ++ /* ++ * When removing memory in the same 
granularity as it was added, ++ * this function never fails. It might only fail if resources ++ * have to be adjusted or split. We'll ignore the error, as ++ * removing of memory cannot fail. ++ */ ++ ret = release_mem_region_adjustable(&iomem_resource, start, size); ++ if (ret) { ++ resource_size_t endres = start + size - 1; ++ ++ pr_warn("Unable to release resource <%pa-%pa> (%d)\n", ++ &start, &endres, ret); ++ } ++} ++ + /** + * remove_memory + * @nid: the node ID +@@ -1877,6 +1883,7 @@ void __ref __remove_memory(int nid, u64 start, u64 size) + memblock_remove(start, size); + + arch_remove_memory(nid, start, size, NULL); ++ __release_memory_resource(start, size); + + try_offline_node(nid); + +diff --git a/mm/mprotect.c b/mm/mprotect.c +index 028c724dcb1a..ab40f3d04aa3 100644 +--- a/mm/mprotect.c ++++ b/mm/mprotect.c +@@ -39,7 +39,6 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd, + unsigned long addr, unsigned long end, pgprot_t newprot, + int dirty_accountable, int prot_numa) + { +- struct mm_struct *mm = vma->vm_mm; + pte_t *pte, oldpte; + spinlock_t *ptl; + unsigned long pages = 0; +@@ -136,7 +135,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd, + newpte = swp_entry_to_pte(entry); + if (pte_swp_soft_dirty(oldpte)) + newpte = pte_swp_mksoft_dirty(newpte); +- set_pte_at(mm, addr, pte, newpte); ++ set_pte_at(vma->vm_mm, addr, pte, newpte); + + pages++; + } +@@ -150,7 +149,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd, + */ + make_device_private_entry_read(&entry); + newpte = swp_entry_to_pte(entry); +- set_pte_at(mm, addr, pte, newpte); ++ set_pte_at(vma->vm_mm, addr, pte, newpte); + + pages++; + } +diff --git a/mm/page_alloc.c b/mm/page_alloc.c +index c02cff1ed56e..475ca5b1a824 100644 +--- a/mm/page_alloc.c ++++ b/mm/page_alloc.c +@@ -6244,13 +6244,15 @@ static unsigned long __init zone_spanned_pages_in_node(int nid, + unsigned long *zone_end_pfn, + unsigned 
long *ignored) + { ++ unsigned long zone_low = arch_zone_lowest_possible_pfn[zone_type]; ++ unsigned long zone_high = arch_zone_highest_possible_pfn[zone_type]; + /* When hotadd a new node from cpu_up(), the node should be empty */ + if (!node_start_pfn && !node_end_pfn) + return 0; + + /* Get the start and end of the zone */ +- *zone_start_pfn = arch_zone_lowest_possible_pfn[zone_type]; +- *zone_end_pfn = arch_zone_highest_possible_pfn[zone_type]; ++ *zone_start_pfn = clamp(node_start_pfn, zone_low, zone_high); ++ *zone_end_pfn = clamp(node_end_pfn, zone_low, zone_high); + adjust_zone_range_for_zone_movable(nid, zone_type, + node_start_pfn, node_end_pfn, + zone_start_pfn, zone_end_pfn); +diff --git a/mm/percpu.c b/mm/percpu.c +index 68dd2e7e73b5..d38bd83fbe96 100644 +--- a/mm/percpu.c ++++ b/mm/percpu.c +@@ -988,7 +988,8 @@ static int pcpu_alloc_area(struct pcpu_chunk *chunk, int alloc_bits, + /* + * Search to find a fit. + */ +- end = start + alloc_bits + PCPU_BITMAP_BLOCK_BITS; ++ end = min_t(int, start + alloc_bits + PCPU_BITMAP_BLOCK_BITS, ++ pcpu_chunk_map_bits(chunk)); + bit_off = bitmap_find_next_zero_area(chunk->alloc_map, end, start, + alloc_bits, align_mask); + if (bit_off >= end) +@@ -1738,6 +1739,7 @@ void free_percpu(void __percpu *ptr) + struct pcpu_chunk *chunk; + unsigned long flags; + int off; ++ bool need_balance = false; + + if (!ptr) + return; +@@ -1759,7 +1761,7 @@ void free_percpu(void __percpu *ptr) + + list_for_each_entry(pos, &pcpu_slot[pcpu_nr_slots - 1], list) + if (pos != chunk) { +- pcpu_schedule_balance_work(); ++ need_balance = true; + break; + } + } +@@ -1767,6 +1769,9 @@ void free_percpu(void __percpu *ptr) + trace_percpu_free_percpu(chunk->base_addr, off, ptr); + + spin_unlock_irqrestore(&pcpu_lock, flags); ++ ++ if (need_balance) ++ pcpu_schedule_balance_work(); + } + EXPORT_SYMBOL_GPL(free_percpu); + +diff --git a/mm/rmap.c b/mm/rmap.c +index b30c7c71d1d9..76c8dfd3ae1c 100644 +--- a/mm/rmap.c ++++ b/mm/rmap.c +@@ -928,7 +928,7 
@@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma, + continue; + + flush_cache_page(vma, address, page_to_pfn(page)); +- entry = pmdp_huge_clear_flush(vma, address, pmd); ++ entry = pmdp_invalidate(vma, address, pmd); + entry = pmd_wrprotect(entry); + entry = pmd_mkclean(entry); + set_pmd_at(vma->vm_mm, address, pmd, entry); +diff --git a/mm/slab.c b/mm/slab.c +index 9142ee992493..fbbef79e1ad5 100644 +--- a/mm/slab.c ++++ b/mm/slab.c +@@ -4328,8 +4328,12 @@ static int leaks_show(struct seq_file *m, void *p) + * whole processing. + */ + do { +- set_store_user_clean(cachep); + drain_cpu_caches(cachep); ++ /* ++ * drain_cpu_caches() could make kmemleak_object and ++ * debug_objects_cache dirty, so reset afterwards. ++ */ ++ set_store_user_clean(cachep); + + x[1] = 0; + +diff --git a/net/batman-adv/distributed-arp-table.c b/net/batman-adv/distributed-arp-table.c +index 8d290da0d596..9ba7b9bb198a 100644 +--- a/net/batman-adv/distributed-arp-table.c ++++ b/net/batman-adv/distributed-arp-table.c +@@ -667,7 +667,7 @@ batadv_dat_select_candidates(struct batadv_priv *bat_priv, __be32 ip_dst, + } + + /** +- * batadv_dat_send_data() - send a payload to the selected candidates ++ * batadv_dat_forward_data() - copy and send payload to the selected candidates + * @bat_priv: the bat priv with all the soft interface information + * @skb: payload to send + * @ip: the DHT key +@@ -680,9 +680,9 @@ batadv_dat_select_candidates(struct batadv_priv *bat_priv, __be32 ip_dst, + * Return: true if the packet is sent to at least one candidate, false + * otherwise. 
+ */ +-static bool batadv_dat_send_data(struct batadv_priv *bat_priv, +- struct sk_buff *skb, __be32 ip, +- unsigned short vid, int packet_subtype) ++static bool batadv_dat_forward_data(struct batadv_priv *bat_priv, ++ struct sk_buff *skb, __be32 ip, ++ unsigned short vid, int packet_subtype) + { + int i; + bool ret = false; +@@ -1277,8 +1277,8 @@ bool batadv_dat_snoop_outgoing_arp_request(struct batadv_priv *bat_priv, + ret = true; + } else { + /* Send the request to the DHT */ +- ret = batadv_dat_send_data(bat_priv, skb, ip_dst, vid, +- BATADV_P_DAT_DHT_GET); ++ ret = batadv_dat_forward_data(bat_priv, skb, ip_dst, vid, ++ BATADV_P_DAT_DHT_GET); + } + out: + if (dat_entry) +@@ -1392,8 +1392,10 @@ void batadv_dat_snoop_outgoing_arp_reply(struct batadv_priv *bat_priv, + /* Send the ARP reply to the candidates for both the IP addresses that + * the node obtained from the ARP reply + */ +- batadv_dat_send_data(bat_priv, skb, ip_src, vid, BATADV_P_DAT_DHT_PUT); +- batadv_dat_send_data(bat_priv, skb, ip_dst, vid, BATADV_P_DAT_DHT_PUT); ++ batadv_dat_forward_data(bat_priv, skb, ip_src, vid, ++ BATADV_P_DAT_DHT_PUT); ++ batadv_dat_forward_data(bat_priv, skb, ip_dst, vid, ++ BATADV_P_DAT_DHT_PUT); + } + + /** +@@ -1710,8 +1712,10 @@ static void batadv_dat_put_dhcp(struct batadv_priv *bat_priv, u8 *chaddr, + batadv_dat_entry_add(bat_priv, yiaddr, chaddr, vid); + batadv_dat_entry_add(bat_priv, ip_dst, hw_dst, vid); + +- batadv_dat_send_data(bat_priv, skb, yiaddr, vid, BATADV_P_DAT_DHT_PUT); +- batadv_dat_send_data(bat_priv, skb, ip_dst, vid, BATADV_P_DAT_DHT_PUT); ++ batadv_dat_forward_data(bat_priv, skb, yiaddr, vid, ++ BATADV_P_DAT_DHT_PUT); ++ batadv_dat_forward_data(bat_priv, skb, ip_dst, vid, ++ BATADV_P_DAT_DHT_PUT); + + consume_skb(skb); + +diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c +index 3cf0764d5793..bd4978ce8c45 100644 +--- a/net/bluetooth/hci_conn.c ++++ b/net/bluetooth/hci_conn.c +@@ -1276,14 +1276,6 @@ int hci_conn_check_link_mode(struct 
hci_conn *conn) + !test_bit(HCI_CONN_ENCRYPT, &conn->flags)) + return 0; + +- /* The minimum encryption key size needs to be enforced by the +- * host stack before establishing any L2CAP connections. The +- * specification in theory allows a minimum of 1, but to align +- * BR/EDR and LE transports, a minimum of 7 is chosen. +- */ +- if (conn->enc_key_size < HCI_MIN_ENC_KEY_SIZE) +- return 0; +- + return 1; + } + +diff --git a/net/netfilter/nf_conntrack_h323_asn1.c b/net/netfilter/nf_conntrack_h323_asn1.c +index 1601275efe2d..4c2ef42e189c 100644 +--- a/net/netfilter/nf_conntrack_h323_asn1.c ++++ b/net/netfilter/nf_conntrack_h323_asn1.c +@@ -172,7 +172,7 @@ static int nf_h323_error_boundary(struct bitstr *bs, size_t bytes, size_t bits) + if (bits % BITS_PER_BYTE > 0) + bytes++; + +- if (*bs->cur + bytes > *bs->end) ++ if (bs->cur + bytes > bs->end) + return 1; + + return 0; +diff --git a/net/netfilter/nf_flow_table_core.c b/net/netfilter/nf_flow_table_core.c +index 7aabfd4b1e50..a9e4f74b1ff6 100644 +--- a/net/netfilter/nf_flow_table_core.c ++++ b/net/netfilter/nf_flow_table_core.c +@@ -185,14 +185,25 @@ static const struct rhashtable_params nf_flow_offload_rhash_params = { + + int flow_offload_add(struct nf_flowtable *flow_table, struct flow_offload *flow) + { +- flow->timeout = (u32)jiffies; ++ int err; + +- rhashtable_insert_fast(&flow_table->rhashtable, +- &flow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].node, +- nf_flow_offload_rhash_params); +- rhashtable_insert_fast(&flow_table->rhashtable, +- &flow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].node, +- nf_flow_offload_rhash_params); ++ err = rhashtable_insert_fast(&flow_table->rhashtable, ++ &flow->tuplehash[0].node, ++ nf_flow_offload_rhash_params); ++ if (err < 0) ++ return err; ++ ++ err = rhashtable_insert_fast(&flow_table->rhashtable, ++ &flow->tuplehash[1].node, ++ nf_flow_offload_rhash_params); ++ if (err < 0) { ++ rhashtable_remove_fast(&flow_table->rhashtable, ++ &flow->tuplehash[0].node, ++ 
nf_flow_offload_rhash_params); ++ return err; ++ } ++ ++ flow->timeout = (u32)jiffies; + return 0; + } + EXPORT_SYMBOL_GPL(flow_offload_add); +diff --git a/net/netfilter/nf_flow_table_ip.c b/net/netfilter/nf_flow_table_ip.c +index 1d291a51cd45..46022a2867d7 100644 +--- a/net/netfilter/nf_flow_table_ip.c ++++ b/net/netfilter/nf_flow_table_ip.c +@@ -181,6 +181,9 @@ static int nf_flow_tuple_ip(struct sk_buff *skb, const struct net_device *dev, + iph->protocol != IPPROTO_UDP) + return -1; + ++ if (iph->ttl <= 1) ++ return -1; ++ + thoff = iph->ihl * 4; + if (!pskb_may_pull(skb, thoff + sizeof(*ports))) + return -1; +@@ -411,6 +414,9 @@ static int nf_flow_tuple_ipv6(struct sk_buff *skb, const struct net_device *dev, + ip6h->nexthdr != IPPROTO_UDP) + return -1; + ++ if (ip6h->hop_limit <= 1) ++ return -1; ++ + thoff = sizeof(*ip6h); + if (!pskb_may_pull(skb, thoff + sizeof(*ports))) + return -1; +diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c +index 1606eaa5ae0d..aa5e7b00a581 100644 +--- a/net/netfilter/nf_tables_api.c ++++ b/net/netfilter/nf_tables_api.c +@@ -1190,6 +1190,9 @@ static int nft_dump_stats(struct sk_buff *skb, struct nft_stats __percpu *stats) + u64 pkts, bytes; + int cpu; + ++ if (!stats) ++ return 0; ++ + memset(&total, 0, sizeof(total)); + for_each_possible_cpu(cpu) { + cpu_stats = per_cpu_ptr(stats, cpu); +@@ -1247,6 +1250,7 @@ static int nf_tables_fill_chain_info(struct sk_buff *skb, struct net *net, + if (nft_is_base_chain(chain)) { + const struct nft_base_chain *basechain = nft_base_chain(chain); + const struct nf_hook_ops *ops = &basechain->ops; ++ struct nft_stats __percpu *stats; + struct nlattr *nest; + + nest = nla_nest_start(skb, NFTA_CHAIN_HOOK); +@@ -1268,8 +1272,9 @@ static int nf_tables_fill_chain_info(struct sk_buff *skb, struct net *net, + if (nla_put_string(skb, NFTA_CHAIN_TYPE, basechain->type->name)) + goto nla_put_failure; + +- if (rcu_access_pointer(basechain->stats) && +- nft_dump_stats(skb, 
rcu_dereference(basechain->stats))) ++ stats = rcu_dereference_check(basechain->stats, ++ lockdep_commit_lock_is_held(net)); ++ if (nft_dump_stats(skb, stats)) + goto nla_put_failure; + } + +diff --git a/net/netfilter/nft_flow_offload.c b/net/netfilter/nft_flow_offload.c +index 6e6b9adf7d38..ff50bc1b144f 100644 +--- a/net/netfilter/nft_flow_offload.c ++++ b/net/netfilter/nft_flow_offload.c +@@ -113,6 +113,7 @@ static void nft_flow_offload_eval(const struct nft_expr *expr, + if (ret < 0) + goto err_flow_add; + ++ dst_release(route.tuple[!dir].dst); + return; + + err_flow_add: +diff --git a/sound/core/seq/seq_ports.c b/sound/core/seq/seq_ports.c +index 24d90abfc64d..da31aa8e216e 100644 +--- a/sound/core/seq/seq_ports.c ++++ b/sound/core/seq/seq_ports.c +@@ -550,10 +550,10 @@ static void delete_and_unsubscribe_port(struct snd_seq_client *client, + list_del_init(list); + grp->exclusive = 0; + write_unlock_irq(&grp->list_lock); +- up_write(&grp->list_mutex); + + if (!empty) + unsubscribe_port(client, port, grp, &subs->info, ack); ++ up_write(&grp->list_mutex); + } + + /* connect two ports */ +diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c +index 2ec91085fa3e..789308f54785 100644 +--- a/sound/pci/hda/hda_intel.c ++++ b/sound/pci/hda/hda_intel.c +@@ -1788,9 +1788,6 @@ static int azx_first_init(struct azx *chip) + chip->msi = 0; + } + +- if (azx_acquire_irq(chip, 0) < 0) +- return -EBUSY; +- + pci_set_master(pci); + synchronize_irq(bus->irq); + +@@ -1904,6 +1901,9 @@ static int azx_first_init(struct azx *chip) + return -ENODEV; + } + ++ if (azx_acquire_irq(chip, 0) < 0) ++ return -EBUSY; ++ + strcpy(card->driver, "HDA-Intel"); + strlcpy(card->shortname, driver_short_names[chip->driver_type], + sizeof(card->shortname)); +diff --git a/tools/objtool/check.c b/tools/objtool/check.c +index 2cd57730381b..ecf5fc77f50b 100644 +--- a/tools/objtool/check.c ++++ b/tools/objtool/check.c +@@ -28,6 +28,8 @@ + #include + #include + ++#define FAKE_JUMP_OFFSET -1 ++ + 
struct alternative { + struct list_head list; + struct instruction *insn; +@@ -501,7 +503,7 @@ static int add_jump_destinations(struct objtool_file *file) + insn->type != INSN_JUMP_UNCONDITIONAL) + continue; + +- if (insn->ignore) ++ if (insn->ignore || insn->offset == FAKE_JUMP_OFFSET) + continue; + + rela = find_rela_by_dest_range(insn->sec, insn->offset, +@@ -670,10 +672,10 @@ static int handle_group_alt(struct objtool_file *file, + clear_insn_state(&fake_jump->state); + + fake_jump->sec = special_alt->new_sec; +- fake_jump->offset = -1; ++ fake_jump->offset = FAKE_JUMP_OFFSET; + fake_jump->type = INSN_JUMP_UNCONDITIONAL; + fake_jump->jump_dest = list_next_entry(last_orig_insn, list); +- fake_jump->ignore = true; ++ fake_jump->func = orig_insn->func; + } + + if (!special_alt->new_len) {