From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits)) (No client certificate requested) by finch.gentoo.org (Postfix) with ESMTPS id 00011158091 for ; Tue, 14 Jun 2022 17:12:52 +0000 (UTC) Received: from pigeon.gentoo.org (localhost [127.0.0.1]) by pigeon.gentoo.org (Postfix) with SMTP id 808C1E085A; Tue, 14 Jun 2022 17:12:51 +0000 (UTC) Received: from smtp.gentoo.org (mail.gentoo.org [IPv6:2001:470:ea4a:1:5054:ff:fec7:86e4]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by pigeon.gentoo.org (Postfix) with ESMTPS id 78350E085A for ; Tue, 14 Jun 2022 17:12:50 +0000 (UTC) Received: from oystercatcher.gentoo.org (unknown [IPv6:2a01:4f8:202:4333:225:90ff:fed9:fc84]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits)) (No client certificate requested) by smtp.gentoo.org (Postfix) with ESMTPS id F01363419E2 for ; Tue, 14 Jun 2022 17:12:48 +0000 (UTC) Received: from localhost.localdomain (localhost [IPv6:::1]) by oystercatcher.gentoo.org (Postfix) with ESMTP id B21E6105 for ; Tue, 14 Jun 2022 17:12:47 +0000 (UTC) From: "Mike Pagano" To: gentoo-commits@lists.gentoo.org Content-Transfer-Encoding: 8bit Content-type: text/plain; charset=UTF-8 Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano" Message-ID: <1655226752.1552bbdaf299377e4f6983026eb2880c681d19fe.mpagano@gentoo> Subject: [gentoo-commits] proj/linux-patches:5.4 commit in: / X-VCS-Repository: proj/linux-patches X-VCS-Files: 0000_README 1197_linux-5.4.198.patch X-VCS-Directories: / X-VCS-Committer: mpagano X-VCS-Committer-Name: Mike Pagano X-VCS-Revision: 1552bbdaf299377e4f6983026eb2880c681d19fe X-VCS-Branch: 5.4 Date: Tue, 14 Jun 2022 17:12:47 +0000 
(UTC) Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-Id: Gentoo Linux mail X-BeenThere: gentoo-commits@lists.gentoo.org X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply X-Archives-Salt: af5e6c69-aba8-4391-9170-9619aecf1b8d X-Archives-Hash: 7e2fcf1fb47080d1d3781521625855c7 commit: 1552bbdaf299377e4f6983026eb2880c681d19fe Author: Mike Pagano gentoo org> AuthorDate: Tue Jun 14 17:12:32 2022 +0000 Commit: Mike Pagano gentoo org> CommitDate: Tue Jun 14 17:12:32 2022 +0000 URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=1552bbda Linux patch 5.4.198 Signed-off-by: Mike Pagano gentoo.org> 0000_README | 4 + 1197_linux-5.4.198.patch | 11742 +++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 11746 insertions(+) diff --git a/0000_README b/0000_README index b35c40bb..6200ccee 100644 --- a/0000_README +++ b/0000_README @@ -831,6 +831,10 @@ Patch: 1196_linux-5.4.197.patch From: http://www.kernel.org Desc: Linux 5.4.197 +Patch: 1197_linux-5.4.198.patch +From: http://www.kernel.org +Desc: Linux 5.4.198 + Patch: 1500_XATTR_USER_PREFIX.patch From: https://bugs.gentoo.org/show_bug.cgi?id=470644 Desc: Support for namespace user.pax.* on tmpfs. diff --git a/1197_linux-5.4.198.patch b/1197_linux-5.4.198.patch new file mode 100644 index 00000000..624244f5 --- /dev/null +++ b/1197_linux-5.4.198.patch @@ -0,0 +1,11742 @@ +diff --git a/Documentation/ABI/testing/sysfs-ata b/Documentation/ABI/testing/sysfs-ata +index 9ab0ef1dd1c72..299e0d1dc1619 100644 +--- a/Documentation/ABI/testing/sysfs-ata ++++ b/Documentation/ABI/testing/sysfs-ata +@@ -107,13 +107,14 @@ Description: + described in ATA8 7.16 and 7.17. Only valid if + the device is not a PM. + +- pio_mode: (RO) Transfer modes supported by the device when +- in PIO mode. Mostly used by PATA device. ++ pio_mode: (RO) PIO transfer mode used by the device. ++ Mostly used by PATA devices. + +- xfer_mode: (RO) Current transfer mode ++ xfer_mode: (RO) Current transfer mode. 
Mostly used by ++ PATA devices. + +- dma_mode: (RO) Transfer modes supported by the device when +- in DMA mode. Mostly used by PATA device. ++ dma_mode: (RO) DMA transfer mode used by the device. ++ Mostly used by PATA devices. + + class: (RO) Device class. Can be "ata" for disk, + "atapi" for packet device, "pmp" for PM, or +diff --git a/Documentation/conf.py b/Documentation/conf.py +index a8fe845832bce..38c1f7618b5e8 100644 +--- a/Documentation/conf.py ++++ b/Documentation/conf.py +@@ -98,7 +98,7 @@ finally: + # + # This is also used if you do content translation via gettext catalogs. + # Usually you set "language" from the command line for these cases. +-language = None ++language = 'en' + + # There are two options for replacing |today|: either, you set today to some + # non-false value, then it is used: +diff --git a/Documentation/devicetree/bindings/gpio/gpio-altera.txt b/Documentation/devicetree/bindings/gpio/gpio-altera.txt +index 146e554b3c676..2a80e272cd666 100644 +--- a/Documentation/devicetree/bindings/gpio/gpio-altera.txt ++++ b/Documentation/devicetree/bindings/gpio/gpio-altera.txt +@@ -9,8 +9,9 @@ Required properties: + - The second cell is reserved and is currently unused. + - gpio-controller : Marks the device node as a GPIO controller. + - interrupt-controller: Mark the device node as an interrupt controller +-- #interrupt-cells : Should be 1. The interrupt type is fixed in the hardware. ++- #interrupt-cells : Should be 2. The interrupt type is fixed in the hardware. + - The first cell is the GPIO offset number within the GPIO controller. ++ - The second cell is the interrupt trigger type and level flags. + - interrupts: Specify the interrupt. + - altr,interrupt-type: Specifies the interrupt trigger type the GPIO + hardware is synthesized. 
This field is required if the Altera GPIO controller +@@ -38,6 +39,6 @@ gpio_altr: gpio@ff200000 { + altr,interrupt-type = ; + #gpio-cells = <2>; + gpio-controller; +- #interrupt-cells = <1>; ++ #interrupt-cells = <2>; + interrupt-controller; + }; +diff --git a/Documentation/hwmon/hwmon-kernel-api.rst b/Documentation/hwmon/hwmon-kernel-api.rst +index c41eb61081036..23f27fe78e379 100644 +--- a/Documentation/hwmon/hwmon-kernel-api.rst ++++ b/Documentation/hwmon/hwmon-kernel-api.rst +@@ -72,7 +72,7 @@ hwmon_device_register_with_info is the most comprehensive and preferred means + to register a hardware monitoring device. It creates the standard sysfs + attributes in the hardware monitoring core, letting the driver focus on reading + from and writing to the chip instead of having to bother with sysfs attributes. +-The parent device parameter cannot be NULL with non-NULL chip info. Its ++The parent device parameter as well as the chip parameter must not be NULL. Its + parameters are described in more detail below. 
+ + devm_hwmon_device_register_with_info is similar to +diff --git a/Makefile b/Makefile +index 57e27af9fc0c0..1c99e688da213 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,7 +1,7 @@ + # SPDX-License-Identifier: GPL-2.0 + VERSION = 5 + PATCHLEVEL = 4 +-SUBLEVEL = 197 ++SUBLEVEL = 198 + EXTRAVERSION = + NAME = Kleptomaniac Octopus + +diff --git a/arch/arm/boot/dts/bcm2835-rpi-b.dts b/arch/arm/boot/dts/bcm2835-rpi-b.dts +index 2b69957e0113e..1838e0fa0ff59 100644 +--- a/arch/arm/boot/dts/bcm2835-rpi-b.dts ++++ b/arch/arm/boot/dts/bcm2835-rpi-b.dts +@@ -53,18 +53,17 @@ + "GPIO18", + "NC", /* GPIO19 */ + "NC", /* GPIO20 */ +- "GPIO21", ++ "CAM_GPIO0", + "GPIO22", + "GPIO23", + "GPIO24", + "GPIO25", + "NC", /* GPIO26 */ +- "CAM_GPIO0", +- /* Binary number representing build/revision */ +- "CONFIG0", +- "CONFIG1", +- "CONFIG2", +- "CONFIG3", ++ "GPIO27", ++ "GPIO28", ++ "GPIO29", ++ "GPIO30", ++ "GPIO31", + "NC", /* GPIO32 */ + "NC", /* GPIO33 */ + "NC", /* GPIO34 */ +diff --git a/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts b/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts +index f65448c01e317..34a85ad9f03c2 100644 +--- a/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts ++++ b/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts +@@ -74,16 +74,18 @@ + "GPIO27", + "SDA0", + "SCL0", +- "NC", /* GPIO30 */ +- "NC", /* GPIO31 */ +- "NC", /* GPIO32 */ +- "NC", /* GPIO33 */ +- "NC", /* GPIO34 */ +- "NC", /* GPIO35 */ +- "NC", /* GPIO36 */ +- "NC", /* GPIO37 */ +- "NC", /* GPIO38 */ +- "NC", /* GPIO39 */ ++ /* Used by BT module */ ++ "CTS0", ++ "RTS0", ++ "TXD0", ++ "RXD0", ++ /* Used by Wifi */ ++ "SD1_CLK", ++ "SD1_CMD", ++ "SD1_DATA0", ++ "SD1_DATA1", ++ "SD1_DATA2", ++ "SD1_DATA3", + "CAM_GPIO1", /* GPIO40 */ + "WL_ON", /* GPIO41 */ + "NC", /* GPIO42 */ +diff --git a/arch/arm/boot/dts/bcm2837-rpi-3-b-plus.dts b/arch/arm/boot/dts/bcm2837-rpi-3-b-plus.dts +index 74ed6d0478070..d9f63fc59f165 100644 +--- a/arch/arm/boot/dts/bcm2837-rpi-3-b-plus.dts ++++ b/arch/arm/boot/dts/bcm2837-rpi-3-b-plus.dts +@@ -43,7 
+43,7 @@ + #gpio-cells = <2>; + gpio-line-names = "BT_ON", + "WL_ON", +- "STATUS_LED_R", ++ "PWR_LED_R", + "LAN_RUN", + "", + "CAM_GPIO0", +diff --git a/arch/arm/boot/dts/bcm2837-rpi-cm3-io3.dts b/arch/arm/boot/dts/bcm2837-rpi-cm3-io3.dts +index 588d9411ceb61..3dfce4312dfc4 100644 +--- a/arch/arm/boot/dts/bcm2837-rpi-cm3-io3.dts ++++ b/arch/arm/boot/dts/bcm2837-rpi-cm3-io3.dts +@@ -63,8 +63,8 @@ + "GPIO43", + "GPIO44", + "GPIO45", +- "GPIO46", +- "GPIO47", ++ "SMPS_SCL", ++ "SMPS_SDA", + /* Used by eMMC */ + "SD_CLK_R", + "SD_CMD_R", +diff --git a/arch/arm/boot/dts/exynos5250-smdk5250.dts b/arch/arm/boot/dts/exynos5250-smdk5250.dts +index fa5dd992e3273..c7e350ea03fe3 100644 +--- a/arch/arm/boot/dts/exynos5250-smdk5250.dts ++++ b/arch/arm/boot/dts/exynos5250-smdk5250.dts +@@ -128,7 +128,7 @@ + samsung,i2c-max-bus-freq = <20000>; + + eeprom@50 { +- compatible = "samsung,s524ad0xd1"; ++ compatible = "samsung,s524ad0xd1", "atmel,24c128"; + reg = <0x50>; + }; + +@@ -287,7 +287,7 @@ + samsung,i2c-max-bus-freq = <20000>; + + eeprom@51 { +- compatible = "samsung,s524ad0xd1"; ++ compatible = "samsung,s524ad0xd1", "atmel,24c128"; + reg = <0x51>; + }; + +diff --git a/arch/arm/boot/dts/ox820.dtsi b/arch/arm/boot/dts/ox820.dtsi +index 90846a7655b49..dde4364892bf0 100644 +--- a/arch/arm/boot/dts/ox820.dtsi ++++ b/arch/arm/boot/dts/ox820.dtsi +@@ -287,7 +287,7 @@ + clocks = <&armclk>; + }; + +- gic: gic@1000 { ++ gic: interrupt-controller@1000 { + compatible = "arm,arm11mp-gic"; + interrupt-controller; + #interrupt-cells = <3>; +diff --git a/arch/arm/boot/dts/suniv-f1c100s.dtsi b/arch/arm/boot/dts/suniv-f1c100s.dtsi +index 6100d3b75f613..def8301014487 100644 +--- a/arch/arm/boot/dts/suniv-f1c100s.dtsi ++++ b/arch/arm/boot/dts/suniv-f1c100s.dtsi +@@ -104,8 +104,10 @@ + + wdt: watchdog@1c20ca0 { + compatible = "allwinner,suniv-f1c100s-wdt", +- "allwinner,sun4i-a10-wdt"; ++ "allwinner,sun6i-a31-wdt"; + reg = <0x01c20ca0 0x20>; ++ interrupts = <16>; ++ clocks = <&osc32k>; + }; + + 
uart0: serial@1c25000 { +diff --git a/arch/arm/mach-hisi/platsmp.c b/arch/arm/mach-hisi/platsmp.c +index da7a09c1dae56..1cd1d9b0aabf9 100644 +--- a/arch/arm/mach-hisi/platsmp.c ++++ b/arch/arm/mach-hisi/platsmp.c +@@ -67,14 +67,17 @@ static void __init hi3xxx_smp_prepare_cpus(unsigned int max_cpus) + } + ctrl_base = of_iomap(np, 0); + if (!ctrl_base) { ++ of_node_put(np); + pr_err("failed to map address\n"); + return; + } + if (of_property_read_u32(np, "smp-offset", &offset) < 0) { ++ of_node_put(np); + pr_err("failed to find smp-offset property\n"); + return; + } + ctrl_base += offset; ++ of_node_put(np); + } + } + +@@ -160,6 +163,7 @@ static int hip01_boot_secondary(unsigned int cpu, struct task_struct *idle) + if (WARN_ON(!node)) + return -1; + ctrl_base = of_iomap(node, 0); ++ of_node_put(node); + + /* set the secondary core boot from DDR */ + remap_reg_value = readl_relaxed(ctrl_base + REG_SC_CTRL); +diff --git a/arch/arm/mach-mediatek/Kconfig b/arch/arm/mach-mediatek/Kconfig +index 9e0f592d87d8e..35a3430c7942d 100644 +--- a/arch/arm/mach-mediatek/Kconfig ++++ b/arch/arm/mach-mediatek/Kconfig +@@ -30,6 +30,7 @@ config MACH_MT7623 + config MACH_MT7629 + bool "MediaTek MT7629 SoCs support" + default ARCH_MEDIATEK ++ select HAVE_ARM_ARCH_TIMER + + config MACH_MT8127 + bool "MediaTek MT8127 SoCs support" +diff --git a/arch/arm/mach-omap1/clock.c b/arch/arm/mach-omap1/clock.c +index bd5be82101f32..d89bda12bf3cd 100644 +--- a/arch/arm/mach-omap1/clock.c ++++ b/arch/arm/mach-omap1/clock.c +@@ -41,7 +41,7 @@ static DEFINE_SPINLOCK(clockfw_lock); + unsigned long omap1_uart_recalc(struct clk *clk) + { + unsigned int val = __raw_readl(clk->enable_reg); +- return val & clk->enable_bit ? 48000000 : 12000000; ++ return val & 1 << clk->enable_bit ? 
48000000 : 12000000; + } + + unsigned long omap1_sossi_recalc(struct clk *clk) +diff --git a/arch/arm/mach-pxa/cm-x300.c b/arch/arm/mach-pxa/cm-x300.c +index 425855f456f2b..719e6395797cb 100644 +--- a/arch/arm/mach-pxa/cm-x300.c ++++ b/arch/arm/mach-pxa/cm-x300.c +@@ -355,13 +355,13 @@ static struct platform_device cm_x300_spi_gpio = { + static struct gpiod_lookup_table cm_x300_spi_gpiod_table = { + .dev_id = "spi_gpio", + .table = { +- GPIO_LOOKUP("gpio-pxa", GPIO_LCD_SCL, ++ GPIO_LOOKUP("pca9555.1", GPIO_LCD_SCL - GPIO_LCD_BASE, + "sck", GPIO_ACTIVE_HIGH), +- GPIO_LOOKUP("gpio-pxa", GPIO_LCD_DIN, ++ GPIO_LOOKUP("pca9555.1", GPIO_LCD_DIN - GPIO_LCD_BASE, + "mosi", GPIO_ACTIVE_HIGH), +- GPIO_LOOKUP("gpio-pxa", GPIO_LCD_DOUT, ++ GPIO_LOOKUP("pca9555.1", GPIO_LCD_DOUT - GPIO_LCD_BASE, + "miso", GPIO_ACTIVE_HIGH), +- GPIO_LOOKUP("gpio-pxa", GPIO_LCD_CS, ++ GPIO_LOOKUP("pca9555.1", GPIO_LCD_CS - GPIO_LCD_BASE, + "cs", GPIO_ACTIVE_HIGH), + { }, + }, +diff --git a/arch/arm/mach-pxa/magician.c b/arch/arm/mach-pxa/magician.c +index e1a394ac3eea7..8f2d4faa26120 100644 +--- a/arch/arm/mach-pxa/magician.c ++++ b/arch/arm/mach-pxa/magician.c +@@ -675,7 +675,7 @@ static struct platform_device bq24022 = { + static struct gpiod_lookup_table bq24022_gpiod_table = { + .dev_id = "gpio-regulator", + .table = { +- GPIO_LOOKUP("gpio-pxa", EGPIO_MAGICIAN_BQ24022_ISET2, ++ GPIO_LOOKUP("htc-egpio-0", EGPIO_MAGICIAN_BQ24022_ISET2 - MAGICIAN_EGPIO_BASE, + NULL, GPIO_ACTIVE_HIGH), + GPIO_LOOKUP("gpio-pxa", GPIO30_MAGICIAN_BQ24022_nCHARGE_EN, + "enable", GPIO_ACTIVE_LOW), +diff --git a/arch/arm/mach-pxa/tosa.c b/arch/arm/mach-pxa/tosa.c +index f537ff1c3ba7e..3fbcaa3b4e182 100644 +--- a/arch/arm/mach-pxa/tosa.c ++++ b/arch/arm/mach-pxa/tosa.c +@@ -295,9 +295,9 @@ static struct gpiod_lookup_table tosa_mci_gpio_table = { + .table = { + GPIO_LOOKUP("gpio-pxa", TOSA_GPIO_nSD_DETECT, + "cd", GPIO_ACTIVE_LOW), +- GPIO_LOOKUP("gpio-pxa", TOSA_GPIO_SD_WP, ++ GPIO_LOOKUP("sharp-scoop.0", 
TOSA_GPIO_SD_WP - TOSA_SCOOP_GPIO_BASE, + "wp", GPIO_ACTIVE_LOW), +- GPIO_LOOKUP("gpio-pxa", TOSA_GPIO_PWR_ON, ++ GPIO_LOOKUP("sharp-scoop.0", TOSA_GPIO_PWR_ON - TOSA_SCOOP_GPIO_BASE, + "power", GPIO_ACTIVE_HIGH), + { }, + }, +diff --git a/arch/arm/mach-vexpress/dcscb.c b/arch/arm/mach-vexpress/dcscb.c +index 46a903c88c6a0..f553cde614f92 100644 +--- a/arch/arm/mach-vexpress/dcscb.c ++++ b/arch/arm/mach-vexpress/dcscb.c +@@ -143,6 +143,7 @@ static int __init dcscb_init(void) + if (!node) + return -ENODEV; + dcscb_base = of_iomap(node, 0); ++ of_node_put(node); + if (!dcscb_base) + return -EADDRNOTAVAIL; + cfg = readl_relaxed(dcscb_base + DCS_CFG_R); +diff --git a/arch/arm64/Kconfig.platforms b/arch/arm64/Kconfig.platforms +index 9dccf4db319b1..90202e5608d1e 100644 +--- a/arch/arm64/Kconfig.platforms ++++ b/arch/arm64/Kconfig.platforms +@@ -225,6 +225,7 @@ config ARCH_STRATIX10 + + config ARCH_SYNQUACER + bool "Socionext SynQuacer SoC Family" ++ select IRQ_FASTEOI_HIERARCHY_HANDLERS + + config ARCH_TEGRA + bool "NVIDIA Tegra SoC Family" +diff --git a/arch/arm64/boot/dts/qcom/ipq8074.dtsi b/arch/arm64/boot/dts/qcom/ipq8074.dtsi +index 67ee5f5601046..7822592664ffb 100644 +--- a/arch/arm64/boot/dts/qcom/ipq8074.dtsi ++++ b/arch/arm64/boot/dts/qcom/ipq8074.dtsi +@@ -482,7 +482,7 @@ + clocks { + sleep_clk: sleep_clk { + compatible = "fixed-clock"; +- clock-frequency = <32000>; ++ clock-frequency = <32768>; + #clock-cells = <0>; + }; + +diff --git a/arch/arm64/boot/dts/rockchip/rk3399.dtsi b/arch/arm64/boot/dts/rockchip/rk3399.dtsi +index 95942d917de53..4496f7e1c68f8 100644 +--- a/arch/arm64/boot/dts/rockchip/rk3399.dtsi ++++ b/arch/arm64/boot/dts/rockchip/rk3399.dtsi +@@ -1447,6 +1447,7 @@ + reg = <0xf780 0x24>; + clocks = <&sdhci>; + clock-names = "emmcclk"; ++ drive-impedance-ohm = <50>; + #phy-cells = <0>; + status = "disabled"; + }; +@@ -1457,7 +1458,6 @@ + clock-names = "refclk"; + #phy-cells = <1>; + resets = <&cru SRST_PCIEPHY>; +- drive-impedance-ohm = <50>; + 
reset-names = "phy"; + status = "disabled"; + }; +diff --git a/arch/arm64/kernel/sys_compat.c b/arch/arm64/kernel/sys_compat.c +index 3c18c2454089b..51274bab25653 100644 +--- a/arch/arm64/kernel/sys_compat.c ++++ b/arch/arm64/kernel/sys_compat.c +@@ -115,6 +115,6 @@ long compat_arm_syscall(struct pt_regs *regs, int scno) + (compat_thumb_mode(regs) ? 2 : 4); + + arm64_notify_die("Oops - bad compat syscall(2)", regs, +- SIGILL, ILL_ILLTRP, addr, scno); ++ SIGILL, ILL_ILLTRP, addr, 0); + return 0; + } +diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c +index 17a8d1484f9b9..9f71ca4414825 100644 +--- a/arch/arm64/net/bpf_jit_comp.c ++++ b/arch/arm64/net/bpf_jit_comp.c +@@ -973,6 +973,7 @@ skip_init_ctx: + bpf_jit_binary_free(header); + prog->bpf_func = NULL; + prog->jited = 0; ++ prog->jited_len = 0; + goto out_off; + } + bpf_jit_binary_lock_ro(header); +diff --git a/arch/m68k/Kconfig.cpu b/arch/m68k/Kconfig.cpu +index 60ac1cd8b96fb..6bc7fc14163f8 100644 +--- a/arch/m68k/Kconfig.cpu ++++ b/arch/m68k/Kconfig.cpu +@@ -309,7 +309,7 @@ comment "Processor Specific Options" + + config M68KFPU_EMU + bool "Math emulation support" +- depends on MMU ++ depends on M68KCLASSIC && FPU + help + At some point in the future, this will cause floating-point math + instructions to be emulated by the kernel on machines that lack a +diff --git a/arch/m68k/Kconfig.machine b/arch/m68k/Kconfig.machine +index b88a980f56f8a..f0527b155c057 100644 +--- a/arch/m68k/Kconfig.machine ++++ b/arch/m68k/Kconfig.machine +@@ -320,6 +320,7 @@ comment "Machine Options" + + config UBOOT + bool "Support for U-Boot command line parameters" ++ depends on COLDFIRE + help + If you say Y here kernel will try to collect command + line parameters from the initial u-boot stack. 
+diff --git a/arch/m68k/include/asm/pgtable_no.h b/arch/m68k/include/asm/pgtable_no.h +index c18165b0d9043..6b02484665691 100644 +--- a/arch/m68k/include/asm/pgtable_no.h ++++ b/arch/m68k/include/asm/pgtable_no.h +@@ -42,7 +42,8 @@ extern void paging_init(void); + * ZERO_PAGE is a global shared page that is always zero: used + * for zero-mapped memory areas etc.. + */ +-#define ZERO_PAGE(vaddr) (virt_to_page(0)) ++extern void *empty_zero_page; ++#define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page)) + + /* + * All 32bit addresses are effectively valid for vmalloc... +diff --git a/arch/mips/include/asm/mach-ip27/cpu-feature-overrides.h b/arch/mips/include/asm/mach-ip27/cpu-feature-overrides.h +index 136d6d464e320..93c69fc7bbd8c 100644 +--- a/arch/mips/include/asm/mach-ip27/cpu-feature-overrides.h ++++ b/arch/mips/include/asm/mach-ip27/cpu-feature-overrides.h +@@ -28,7 +28,6 @@ + #define cpu_has_6k_cache 0 + #define cpu_has_8k_cache 0 + #define cpu_has_tx39_cache 0 +-#define cpu_has_fpu 1 + #define cpu_has_nofpuex 0 + #define cpu_has_32fpr 1 + #define cpu_has_counter 1 +diff --git a/arch/mips/kernel/mips-cpc.c b/arch/mips/kernel/mips-cpc.c +index 69e3e0b556bf7..1b0d4bb617a9c 100644 +--- a/arch/mips/kernel/mips-cpc.c ++++ b/arch/mips/kernel/mips-cpc.c +@@ -27,6 +27,7 @@ phys_addr_t __weak mips_cpc_default_phys_base(void) + cpc_node = of_find_compatible_node(of_root, NULL, "mti,mips-cpc"); + if (cpc_node) { + err = of_address_to_resource(cpc_node, 0, &res); ++ of_node_put(cpc_node); + if (!err) + return res.start; + } +diff --git a/arch/openrisc/include/asm/timex.h b/arch/openrisc/include/asm/timex.h +index d52b4e536e3f9..5487fa93dd9be 100644 +--- a/arch/openrisc/include/asm/timex.h ++++ b/arch/openrisc/include/asm/timex.h +@@ -23,6 +23,7 @@ static inline cycles_t get_cycles(void) + { + return mfspr(SPR_TTCR); + } ++#define get_cycles get_cycles + + /* This isn't really used any more */ + #define CLOCK_TICK_RATE 1000 +diff --git a/arch/openrisc/kernel/head.S 
b/arch/openrisc/kernel/head.S +index b0dc974f9a743..ffbbf639b7f95 100644 +--- a/arch/openrisc/kernel/head.S ++++ b/arch/openrisc/kernel/head.S +@@ -521,6 +521,15 @@ _start: + l.ori r3,r0,0x1 + l.mtspr r0,r3,SPR_SR + ++ /* ++ * Start the TTCR as early as possible, so that the RNG can make use of ++ * measurements of boot time from the earliest opportunity. Especially ++ * important is that the TTCR does not return zero by the time we reach ++ * rand_initialize(). ++ */ ++ l.movhi r3,hi(SPR_TTMR_CR) ++ l.mtspr r0,r3,SPR_TTMR ++ + CLEAR_GPR(r1) + CLEAR_GPR(r2) + CLEAR_GPR(r3) +diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h +index 0d8f9246ce153..d92353a96f811 100644 +--- a/arch/powerpc/include/asm/page.h ++++ b/arch/powerpc/include/asm/page.h +@@ -216,6 +216,9 @@ static inline bool pfn_valid(unsigned long pfn) + #define __pa(x) ((unsigned long)(x) - VIRT_PHYS_OFFSET) + #else + #ifdef CONFIG_PPC64 ++ ++#define VIRTUAL_WARN_ON(x) WARN_ON(IS_ENABLED(CONFIG_DEBUG_VIRTUAL) && (x)) ++ + /* + * gcc miscompiles (unsigned long)(&static_var) - PAGE_OFFSET + * with -mcmodel=medium, so we use & and | instead of - and + on 64-bit. 
+@@ -223,13 +226,13 @@ static inline bool pfn_valid(unsigned long pfn) + */ + #define __va(x) \ + ({ \ +- VIRTUAL_BUG_ON((unsigned long)(x) >= PAGE_OFFSET); \ ++ VIRTUAL_WARN_ON((unsigned long)(x) >= PAGE_OFFSET); \ + (void *)(unsigned long)((phys_addr_t)(x) | PAGE_OFFSET); \ + }) + + #define __pa(x) \ + ({ \ +- VIRTUAL_BUG_ON((unsigned long)(x) < PAGE_OFFSET); \ ++ VIRTUAL_WARN_ON((unsigned long)(x) < PAGE_OFFSET); \ + (unsigned long)(x) & 0x0fffffffffffffffUL; \ + }) + +diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c +index 0455dc1b27977..69d64f406204f 100644 +--- a/arch/powerpc/kernel/fadump.c ++++ b/arch/powerpc/kernel/fadump.c +@@ -835,7 +835,6 @@ static int fadump_alloc_mem_ranges(struct fadump_mrange_info *mrange_info) + sizeof(struct fadump_memory_range)); + return 0; + } +- + static inline int fadump_add_mem_range(struct fadump_mrange_info *mrange_info, + u64 base, u64 end) + { +@@ -854,7 +853,12 @@ static inline int fadump_add_mem_range(struct fadump_mrange_info *mrange_info, + start = mem_ranges[mrange_info->mem_range_cnt - 1].base; + size = mem_ranges[mrange_info->mem_range_cnt - 1].size; + +- if ((start + size) == base) ++ /* ++ * Boot memory area needs separate PT_LOAD segment(s) as it ++ * is moved to a different location at the time of crash. ++ * So, fold only if the region is not boot memory area. 
++ */ ++ if ((start + size) == base && start >= fw_dump.boot_mem_top) + is_adjacent = true; + } + if (!is_adjacent) { +diff --git a/arch/powerpc/kernel/idle.c b/arch/powerpc/kernel/idle.c +index a36fd053c3dba..0615ba86baef3 100644 +--- a/arch/powerpc/kernel/idle.c ++++ b/arch/powerpc/kernel/idle.c +@@ -37,7 +37,7 @@ static int __init powersave_off(char *arg) + { + ppc_md.power_save = NULL; + cpuidle_disable = IDLE_POWERSAVE_OFF; +- return 0; ++ return 1; + } + __setup("powersave=off", powersave_off); + +diff --git a/arch/powerpc/kernel/ptrace.c b/arch/powerpc/kernel/ptrace.c +index 8c92febf5f443..63bfc5250b67e 100644 +--- a/arch/powerpc/kernel/ptrace.c ++++ b/arch/powerpc/kernel/ptrace.c +@@ -3014,8 +3014,13 @@ long arch_ptrace(struct task_struct *child, long request, + + flush_fp_to_thread(child); + if (fpidx < (PT_FPSCR - PT_FPR0)) +- memcpy(&tmp, &child->thread.TS_FPR(fpidx), +- sizeof(long)); ++ if (IS_ENABLED(CONFIG_PPC32)) { ++ // On 32-bit the index we are passed refers to 32-bit words ++ tmp = ((u32 *)child->thread.fp_state.fpr)[fpidx]; ++ } else { ++ memcpy(&tmp, &child->thread.TS_FPR(fpidx), ++ sizeof(long)); ++ } + else + tmp = child->thread.fp_state.fpscr; + } +@@ -3047,8 +3052,13 @@ long arch_ptrace(struct task_struct *child, long request, + + flush_fp_to_thread(child); + if (fpidx < (PT_FPSCR - PT_FPR0)) +- memcpy(&child->thread.TS_FPR(fpidx), &data, +- sizeof(long)); ++ if (IS_ENABLED(CONFIG_PPC32)) { ++ // On 32-bit the index we are passed refers to 32-bit words ++ ((u32 *)child->thread.fp_state.fpr)[fpidx] = data; ++ } else { ++ memcpy(&child->thread.TS_FPR(fpidx), &data, ++ sizeof(long)); ++ } + else + child->thread.fp_state.fpscr = data; + ret = 0; +@@ -3398,4 +3408,7 @@ void __init pt_regs_check(void) + offsetof(struct user_pt_regs, result)); + + BUILD_BUG_ON(sizeof(struct user_pt_regs) > sizeof(struct pt_regs)); ++ ++ // ptrace_get/put_fpr() rely on PPC32 and VSX being incompatible ++ BUILD_BUG_ON(IS_ENABLED(CONFIG_PPC32) && 
IS_ENABLED(CONFIG_VSX)); + } +diff --git a/arch/powerpc/perf/isa207-common.c b/arch/powerpc/perf/isa207-common.c +index 944180f55a3c6..25eda98f3b1bd 100644 +--- a/arch/powerpc/perf/isa207-common.c ++++ b/arch/powerpc/perf/isa207-common.c +@@ -326,7 +326,8 @@ int isa207_get_constraint(u64 event, unsigned long *maskp, unsigned long *valp) + if (event_is_threshold(event) && is_thresh_cmp_valid(event)) { + mask |= CNST_THRESH_MASK; + value |= CNST_THRESH_VAL(event >> EVENT_THRESH_SHIFT); +- } ++ } else if (event_is_threshold(event)) ++ return -1; + } else { + /* + * Special case for PM_MRK_FAB_RSP_MATCH and PM_MRK_FAB_RSP_MATCH_CYC, +diff --git a/arch/powerpc/platforms/4xx/cpm.c b/arch/powerpc/platforms/4xx/cpm.c +index ae8b812c92029..2481e78c04234 100644 +--- a/arch/powerpc/platforms/4xx/cpm.c ++++ b/arch/powerpc/platforms/4xx/cpm.c +@@ -327,6 +327,6 @@ late_initcall(cpm_init); + static int __init cpm_powersave_off(char *arg) + { + cpm.powersave_off = 1; +- return 0; ++ return 1; + } + __setup("powersave=off", cpm_powersave_off); +diff --git a/arch/powerpc/platforms/8xx/cpm1.c b/arch/powerpc/platforms/8xx/cpm1.c +index 0f65c51271db9..ec6dc2d7a9db3 100644 +--- a/arch/powerpc/platforms/8xx/cpm1.c ++++ b/arch/powerpc/platforms/8xx/cpm1.c +@@ -292,6 +292,7 @@ cpm_setbrg(uint brg, uint rate) + out_be32(bp, (((BRG_UART_CLK_DIV16 / rate) - 1) << 1) | + CPM_BRG_EN | CPM_BRG_DIV16); + } ++EXPORT_SYMBOL(cpm_setbrg); + + struct cpm_ioport16 { + __be16 dir, par, odr_sor, dat, intr; +diff --git a/arch/powerpc/platforms/powernv/opal-fadump.c b/arch/powerpc/platforms/powernv/opal-fadump.c +index d361d37d975f3..f5cea068f0bdc 100644 +--- a/arch/powerpc/platforms/powernv/opal-fadump.c ++++ b/arch/powerpc/platforms/powernv/opal-fadump.c +@@ -60,7 +60,7 @@ void __init opal_fadump_dt_scan(struct fw_dump *fadump_conf, u64 node) + addr = be64_to_cpu(addr); + pr_debug("Kernel metadata addr: %llx\n", addr); + opal_fdm_active = (void *)addr; +- if (opal_fdm_active->registered_regions == 0) ++ 
if (be16_to_cpu(opal_fdm_active->registered_regions) == 0) + return; + + ret = opal_mpipl_query_tag(OPAL_MPIPL_TAG_BOOT_MEM, &addr); +@@ -95,17 +95,17 @@ static int opal_fadump_unregister(struct fw_dump *fadump_conf); + static void opal_fadump_update_config(struct fw_dump *fadump_conf, + const struct opal_fadump_mem_struct *fdm) + { +- pr_debug("Boot memory regions count: %d\n", fdm->region_cnt); ++ pr_debug("Boot memory regions count: %d\n", be16_to_cpu(fdm->region_cnt)); + + /* + * The destination address of the first boot memory region is the + * destination address of boot memory regions. + */ +- fadump_conf->boot_mem_dest_addr = fdm->rgn[0].dest; ++ fadump_conf->boot_mem_dest_addr = be64_to_cpu(fdm->rgn[0].dest); + pr_debug("Destination address of boot memory regions: %#016llx\n", + fadump_conf->boot_mem_dest_addr); + +- fadump_conf->fadumphdr_addr = fdm->fadumphdr_addr; ++ fadump_conf->fadumphdr_addr = be64_to_cpu(fdm->fadumphdr_addr); + } + + /* +@@ -126,9 +126,9 @@ static void opal_fadump_get_config(struct fw_dump *fadump_conf, + fadump_conf->boot_memory_size = 0; + + pr_debug("Boot memory regions:\n"); +- for (i = 0; i < fdm->region_cnt; i++) { +- base = fdm->rgn[i].src; +- size = fdm->rgn[i].size; ++ for (i = 0; i < be16_to_cpu(fdm->region_cnt); i++) { ++ base = be64_to_cpu(fdm->rgn[i].src); ++ size = be64_to_cpu(fdm->rgn[i].size); + pr_debug("\t[%03d] base: 0x%lx, size: 0x%lx\n", i, base, size); + + fadump_conf->boot_mem_addr[i] = base; +@@ -143,7 +143,7 @@ static void opal_fadump_get_config(struct fw_dump *fadump_conf, + * Start address of reserve dump area (permanent reservation) for + * re-registering FADump after dump capture. 
+ */ +- fadump_conf->reserve_dump_area_start = fdm->rgn[0].dest; ++ fadump_conf->reserve_dump_area_start = be64_to_cpu(fdm->rgn[0].dest); + + /* + * Rarely, but it can so happen that system crashes before all +@@ -155,13 +155,14 @@ static void opal_fadump_get_config(struct fw_dump *fadump_conf, + * Hope the memory that could not be preserved only has pages + * that are usually filtered out while saving the vmcore. + */ +- if (fdm->region_cnt > fdm->registered_regions) { ++ if (be16_to_cpu(fdm->region_cnt) > be16_to_cpu(fdm->registered_regions)) { + pr_warn("Not all memory regions were saved!!!\n"); + pr_warn(" Unsaved memory regions:\n"); +- i = fdm->registered_regions; +- while (i < fdm->region_cnt) { ++ i = be16_to_cpu(fdm->registered_regions); ++ while (i < be16_to_cpu(fdm->region_cnt)) { + pr_warn("\t[%03d] base: 0x%llx, size: 0x%llx\n", +- i, fdm->rgn[i].src, fdm->rgn[i].size); ++ i, be64_to_cpu(fdm->rgn[i].src), ++ be64_to_cpu(fdm->rgn[i].size)); + i++; + } + +@@ -170,7 +171,7 @@ static void opal_fadump_get_config(struct fw_dump *fadump_conf, + } + + fadump_conf->boot_mem_top = (fadump_conf->boot_memory_size + hole_size); +- fadump_conf->boot_mem_regs_cnt = fdm->region_cnt; ++ fadump_conf->boot_mem_regs_cnt = be16_to_cpu(fdm->region_cnt); + opal_fadump_update_config(fadump_conf, fdm); + } + +@@ -178,35 +179,38 @@ static void opal_fadump_get_config(struct fw_dump *fadump_conf, + static void opal_fadump_init_metadata(struct opal_fadump_mem_struct *fdm) + { + fdm->version = OPAL_FADUMP_VERSION; +- fdm->region_cnt = 0; +- fdm->registered_regions = 0; +- fdm->fadumphdr_addr = 0; ++ fdm->region_cnt = cpu_to_be16(0); ++ fdm->registered_regions = cpu_to_be16(0); ++ fdm->fadumphdr_addr = cpu_to_be64(0); + } + + static u64 opal_fadump_init_mem_struct(struct fw_dump *fadump_conf) + { + u64 addr = fadump_conf->reserve_dump_area_start; ++ u16 reg_cnt; + int i; + + opal_fdm = __va(fadump_conf->kernel_metadata); + opal_fadump_init_metadata(opal_fdm); + + /* Boot memory 
regions */ ++ reg_cnt = be16_to_cpu(opal_fdm->region_cnt); + for (i = 0; i < fadump_conf->boot_mem_regs_cnt; i++) { +- opal_fdm->rgn[i].src = fadump_conf->boot_mem_addr[i]; +- opal_fdm->rgn[i].dest = addr; +- opal_fdm->rgn[i].size = fadump_conf->boot_mem_sz[i]; ++ opal_fdm->rgn[i].src = cpu_to_be64(fadump_conf->boot_mem_addr[i]); ++ opal_fdm->rgn[i].dest = cpu_to_be64(addr); ++ opal_fdm->rgn[i].size = cpu_to_be64(fadump_conf->boot_mem_sz[i]); + +- opal_fdm->region_cnt++; ++ reg_cnt++; + addr += fadump_conf->boot_mem_sz[i]; + } ++ opal_fdm->region_cnt = cpu_to_be16(reg_cnt); + + /* + * Kernel metadata is passed to f/w and retrieved in capture kerenl. + * So, use it to save fadump header address instead of calculating it. + */ +- opal_fdm->fadumphdr_addr = (opal_fdm->rgn[0].dest + +- fadump_conf->boot_memory_size); ++ opal_fdm->fadumphdr_addr = cpu_to_be64(be64_to_cpu(opal_fdm->rgn[0].dest) + ++ fadump_conf->boot_memory_size); + + opal_fadump_update_config(fadump_conf, opal_fdm); + +@@ -269,18 +273,21 @@ static u64 opal_fadump_get_bootmem_min(void) + static int opal_fadump_register(struct fw_dump *fadump_conf) + { + s64 rc = OPAL_PARAMETER; ++ u16 registered_regs; + int i, err = -EIO; + +- for (i = 0; i < opal_fdm->region_cnt; i++) { ++ registered_regs = be16_to_cpu(opal_fdm->registered_regions); ++ for (i = 0; i < be16_to_cpu(opal_fdm->region_cnt); i++) { + rc = opal_mpipl_update(OPAL_MPIPL_ADD_RANGE, +- opal_fdm->rgn[i].src, +- opal_fdm->rgn[i].dest, +- opal_fdm->rgn[i].size); ++ be64_to_cpu(opal_fdm->rgn[i].src), ++ be64_to_cpu(opal_fdm->rgn[i].dest), ++ be64_to_cpu(opal_fdm->rgn[i].size)); + if (rc != OPAL_SUCCESS) + break; + +- opal_fdm->registered_regions++; ++ registered_regs++; + } ++ opal_fdm->registered_regions = cpu_to_be16(registered_regs); + + switch (rc) { + case OPAL_SUCCESS: +@@ -291,7 +298,8 @@ static int opal_fadump_register(struct fw_dump *fadump_conf) + case OPAL_RESOURCE: + /* If MAX regions limit in f/w is hit, warn and proceed. 
*/ + pr_warn("%d regions could not be registered for MPIPL as MAX limit is reached!\n", +- (opal_fdm->region_cnt - opal_fdm->registered_regions)); ++ (be16_to_cpu(opal_fdm->region_cnt) - ++ be16_to_cpu(opal_fdm->registered_regions))); + fadump_conf->dump_registered = 1; + err = 0; + break; +@@ -312,7 +320,7 @@ static int opal_fadump_register(struct fw_dump *fadump_conf) + * If some regions were registered before OPAL_MPIPL_ADD_RANGE + * OPAL call failed, unregister all regions. + */ +- if ((err < 0) && (opal_fdm->registered_regions > 0)) ++ if ((err < 0) && (be16_to_cpu(opal_fdm->registered_regions) > 0)) + opal_fadump_unregister(fadump_conf); + + return err; +@@ -328,7 +336,7 @@ static int opal_fadump_unregister(struct fw_dump *fadump_conf) + return -EIO; + } + +- opal_fdm->registered_regions = 0; ++ opal_fdm->registered_regions = cpu_to_be16(0); + fadump_conf->dump_registered = 0; + return 0; + } +@@ -563,19 +571,20 @@ static void opal_fadump_region_show(struct fw_dump *fadump_conf, + else + fdm_ptr = opal_fdm; + +- for (i = 0; i < fdm_ptr->region_cnt; i++) { ++ for (i = 0; i < be16_to_cpu(fdm_ptr->region_cnt); i++) { + /* + * Only regions that are registered for MPIPL + * would have dump data. + */ + if ((fadump_conf->dump_active) && +- (i < fdm_ptr->registered_regions)) +- dumped_bytes = fdm_ptr->rgn[i].size; ++ (i < be16_to_cpu(fdm_ptr->registered_regions))) ++ dumped_bytes = be64_to_cpu(fdm_ptr->rgn[i].size); + + seq_printf(m, "DUMP: Src: %#016llx, Dest: %#016llx, ", +- fdm_ptr->rgn[i].src, fdm_ptr->rgn[i].dest); ++ be64_to_cpu(fdm_ptr->rgn[i].src), ++ be64_to_cpu(fdm_ptr->rgn[i].dest)); + seq_printf(m, "Size: %#llx, Dumped: %#llx bytes\n", +- fdm_ptr->rgn[i].size, dumped_bytes); ++ be64_to_cpu(fdm_ptr->rgn[i].size), dumped_bytes); + } + + /* Dump is active. Show reserved area start address. 
*/ +@@ -624,6 +633,7 @@ void __init opal_fadump_dt_scan(struct fw_dump *fadump_conf, u64 node) + { + const __be32 *prop; + unsigned long dn; ++ __be64 be_addr; + u64 addr = 0; + int i, len; + s64 ret; +@@ -680,13 +690,13 @@ void __init opal_fadump_dt_scan(struct fw_dump *fadump_conf, u64 node) + if (!prop) + return; + +- ret = opal_mpipl_query_tag(OPAL_MPIPL_TAG_KERNEL, &addr); +- if ((ret != OPAL_SUCCESS) || !addr) { ++ ret = opal_mpipl_query_tag(OPAL_MPIPL_TAG_KERNEL, &be_addr); ++ if ((ret != OPAL_SUCCESS) || !be_addr) { + pr_err("Failed to get Kernel metadata (%lld)\n", ret); + return; + } + +- addr = be64_to_cpu(addr); ++ addr = be64_to_cpu(be_addr); + pr_debug("Kernel metadata addr: %llx\n", addr); + + opal_fdm_active = __va(addr); +@@ -697,14 +707,14 @@ void __init opal_fadump_dt_scan(struct fw_dump *fadump_conf, u64 node) + } + + /* Kernel regions not registered with f/w for MPIPL */ +- if (opal_fdm_active->registered_regions == 0) { ++ if (be16_to_cpu(opal_fdm_active->registered_regions) == 0) { + opal_fdm_active = NULL; + return; + } + +- ret = opal_mpipl_query_tag(OPAL_MPIPL_TAG_CPU, &addr); +- if (addr) { +- addr = be64_to_cpu(addr); ++ ret = opal_mpipl_query_tag(OPAL_MPIPL_TAG_CPU, &be_addr); ++ if (be_addr) { ++ addr = be64_to_cpu(be_addr); + pr_debug("CPU metadata addr: %llx\n", addr); + opal_cpu_metadata = __va(addr); + } +diff --git a/arch/powerpc/platforms/powernv/opal-fadump.h b/arch/powerpc/platforms/powernv/opal-fadump.h +index f1e9ecf548c5d..3f715efb0aa6e 100644 +--- a/arch/powerpc/platforms/powernv/opal-fadump.h ++++ b/arch/powerpc/platforms/powernv/opal-fadump.h +@@ -31,14 +31,14 @@ + * OPAL FADump kernel metadata + * + * The address of this structure will be registered with f/w for retrieving +- * and processing during crash dump. ++ * in the capture kernel to process the crash dump. 
+ */ + struct opal_fadump_mem_struct { + u8 version; + u8 reserved[3]; +- u16 region_cnt; /* number of regions */ +- u16 registered_regions; /* Regions registered for MPIPL */ +- u64 fadumphdr_addr; ++ __be16 region_cnt; /* number of regions */ ++ __be16 registered_regions; /* Regions registered for MPIPL */ ++ __be64 fadumphdr_addr; + struct opal_mpipl_region rgn[FADUMP_MAX_MEM_REGS]; + } __packed; + +@@ -135,7 +135,7 @@ static inline void opal_fadump_read_regs(char *bufp, unsigned int regs_cnt, + for (i = 0; i < regs_cnt; i++, bufp += reg_entry_size) { + reg_entry = (struct hdat_fadump_reg_entry *)bufp; + val = (cpu_endian ? be64_to_cpu(reg_entry->reg_val) : +- reg_entry->reg_val); ++ (u64)(reg_entry->reg_val)); + opal_fadump_set_regval_regnum(regs, + be32_to_cpu(reg_entry->reg_type), + be32_to_cpu(reg_entry->reg_num), +diff --git a/arch/powerpc/platforms/powernv/ultravisor.c b/arch/powerpc/platforms/powernv/ultravisor.c +index e4a00ad06f9d3..67c8c4b2d8b17 100644 +--- a/arch/powerpc/platforms/powernv/ultravisor.c ++++ b/arch/powerpc/platforms/powernv/ultravisor.c +@@ -55,6 +55,7 @@ static int __init uv_init(void) + return -ENODEV; + + uv_memcons = memcons_init(node, "memcons"); ++ of_node_put(node); + if (!uv_memcons) + return -ENOENT; + +diff --git a/arch/powerpc/sysdev/dart_iommu.c b/arch/powerpc/sysdev/dart_iommu.c +index 6b4a34b36d987..8ff9bcfe4b8d4 100644 +--- a/arch/powerpc/sysdev/dart_iommu.c ++++ b/arch/powerpc/sysdev/dart_iommu.c +@@ -403,9 +403,10 @@ void __init iommu_init_early_dart(struct pci_controller_ops *controller_ops) + } + + /* Initialize the DART HW */ +- if (dart_init(dn) != 0) ++ if (dart_init(dn) != 0) { ++ of_node_put(dn); + return; +- ++ } + /* + * U4 supports a DART bypass, we use it for 64-bit capable devices to + * improve performance. 
However, that only works for devices connected +@@ -418,6 +419,7 @@ void __init iommu_init_early_dart(struct pci_controller_ops *controller_ops) + + /* Setup pci_dma ops */ + set_pci_dma_ops(&dma_iommu_ops); ++ of_node_put(dn); + } + + #ifdef CONFIG_PM +diff --git a/arch/powerpc/sysdev/fsl_rio.c b/arch/powerpc/sysdev/fsl_rio.c +index 07c164f7f8cfe..3f9f78621cf3c 100644 +--- a/arch/powerpc/sysdev/fsl_rio.c ++++ b/arch/powerpc/sysdev/fsl_rio.c +@@ -505,8 +505,10 @@ int fsl_rio_setup(struct platform_device *dev) + if (rc) { + dev_err(&dev->dev, "Can't get %pOF property 'reg'\n", + rmu_node); ++ of_node_put(rmu_node); + goto err_rmu; + } ++ of_node_put(rmu_node); + rmu_regs_win = ioremap(rmu_regs.start, resource_size(&rmu_regs)); + if (!rmu_regs_win) { + dev_err(&dev->dev, "Unable to map rmu register window\n"); +diff --git a/arch/powerpc/sysdev/xics/icp-opal.c b/arch/powerpc/sysdev/xics/icp-opal.c +index 68fd2540b0931..7fa520efcefa0 100644 +--- a/arch/powerpc/sysdev/xics/icp-opal.c ++++ b/arch/powerpc/sysdev/xics/icp-opal.c +@@ -195,6 +195,7 @@ int icp_opal_init(void) + + printk("XICS: Using OPAL ICP fallbacks\n"); + ++ of_node_put(np); + return 0; + } + +diff --git a/arch/s390/crypto/aes_s390.c b/arch/s390/crypto/aes_s390.c +index 9803e96d29247..558cfe570ccf8 100644 +--- a/arch/s390/crypto/aes_s390.c ++++ b/arch/s390/crypto/aes_s390.c +@@ -861,7 +861,7 @@ static inline void _gcm_sg_unmap_and_advance(struct gcm_sg_walk *gw, + unsigned int nbytes) + { + gw->walk_bytes_remain -= nbytes; +- scatterwalk_unmap(&gw->walk); ++ scatterwalk_unmap(gw->walk_ptr); + scatterwalk_advance(&gw->walk, nbytes); + scatterwalk_done(&gw->walk, 0, gw->walk_bytes_remain); + gw->walk_ptr = NULL; +@@ -936,7 +936,7 @@ static int gcm_out_walk_go(struct gcm_sg_walk *gw, unsigned int minbytesneeded) + goto out; + } + +- scatterwalk_unmap(&gw->walk); ++ scatterwalk_unmap(gw->walk_ptr); + gw->walk_ptr = NULL; + + gw->ptr = gw->buf; +diff --git a/arch/s390/include/asm/preempt.h 
b/arch/s390/include/asm/preempt.h +index b5ea9e14c017a..3dcd8ab3db73b 100644 +--- a/arch/s390/include/asm/preempt.h ++++ b/arch/s390/include/asm/preempt.h +@@ -52,10 +52,17 @@ static inline bool test_preempt_need_resched(void) + + static inline void __preempt_count_add(int val) + { +- if (__builtin_constant_p(val) && (val >= -128) && (val <= 127)) +- __atomic_add_const(val, &S390_lowcore.preempt_count); +- else +- __atomic_add(val, &S390_lowcore.preempt_count); ++ /* ++ * With some obscure config options and CONFIG_PROFILE_ALL_BRANCHES ++ * enabled, gcc 12 fails to handle __builtin_constant_p(). ++ */ ++ if (!IS_ENABLED(CONFIG_PROFILE_ALL_BRANCHES)) { ++ if (__builtin_constant_p(val) && (val >= -128) && (val <= 127)) { ++ __atomic_add_const(val, &S390_lowcore.preempt_count); ++ return; ++ } ++ } ++ __atomic_add(val, &S390_lowcore.preempt_count); + } + + static inline void __preempt_count_sub(int val) +diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c +index 5e5a4e1f0e6cf..19ee8355b2a7f 100644 +--- a/arch/s390/mm/gmap.c ++++ b/arch/s390/mm/gmap.c +@@ -2579,6 +2579,18 @@ static int __s390_enable_skey_pte(pte_t *pte, unsigned long addr, + return 0; + } + ++/* ++ * Give a chance to schedule after setting a key to 256 pages. ++ * We only hold the mm lock, which is a rwsem and the kvm srcu. ++ * Both can sleep. 
++ */ ++static int __s390_enable_skey_pmd(pmd_t *pmd, unsigned long addr, ++ unsigned long next, struct mm_walk *walk) ++{ ++ cond_resched(); ++ return 0; ++} ++ + static int __s390_enable_skey_hugetlb(pte_t *pte, unsigned long addr, + unsigned long hmask, unsigned long next, + struct mm_walk *walk) +@@ -2601,12 +2613,14 @@ static int __s390_enable_skey_hugetlb(pte_t *pte, unsigned long addr, + end = start + HPAGE_SIZE - 1; + __storage_key_init_range(start, end); + set_bit(PG_arch_1, &page->flags); ++ cond_resched(); + return 0; + } + + static const struct mm_walk_ops enable_skey_walk_ops = { + .hugetlb_entry = __s390_enable_skey_hugetlb, + .pte_entry = __s390_enable_skey_pte, ++ .pmd_entry = __s390_enable_skey_pmd, + }; + + int s390_enable_skey(void) +diff --git a/arch/um/drivers/chan_user.c b/arch/um/drivers/chan_user.c +index 6040817c036f3..25727ed648b72 100644 +--- a/arch/um/drivers/chan_user.c ++++ b/arch/um/drivers/chan_user.c +@@ -220,7 +220,7 @@ static int winch_tramp(int fd, struct tty_port *port, int *fd_out, + unsigned long *stack_out) + { + struct winch_data data; +- int fds[2], n, err; ++ int fds[2], n, err, pid; + char c; + + err = os_pipe(fds, 1, 1); +@@ -238,8 +238,9 @@ static int winch_tramp(int fd, struct tty_port *port, int *fd_out, + * problem with /dev/net/tun, which if held open by this + * thread, prevents the TUN/TAP device from being reused. 
+ */ +- err = run_helper_thread(winch_thread, &data, CLONE_FILES, stack_out); +- if (err < 0) { ++ pid = run_helper_thread(winch_thread, &data, CLONE_FILES, stack_out); ++ if (pid < 0) { ++ err = pid; + printk(UM_KERN_ERR "fork of winch_thread failed - errno = %d\n", + -err); + goto out_close; +@@ -263,7 +264,7 @@ static int winch_tramp(int fd, struct tty_port *port, int *fd_out, + goto out_close; + } + +- return err; ++ return pid; + + out_close: + close(fds[1]); +diff --git a/arch/um/include/asm/thread_info.h b/arch/um/include/asm/thread_info.h +index 4c19ce4c49f18..66ab6a07330b2 100644 +--- a/arch/um/include/asm/thread_info.h ++++ b/arch/um/include/asm/thread_info.h +@@ -63,6 +63,7 @@ static inline struct thread_info *current_thread_info(void) + #define TIF_RESTORE_SIGMASK 7 + #define TIF_NOTIFY_RESUME 8 + #define TIF_SECCOMP 9 /* secure computing */ ++#define TIF_SINGLESTEP 10 /* single stepping userspace */ + + #define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE) + #define _TIF_SIGPENDING (1 << TIF_SIGPENDING) +@@ -70,5 +71,6 @@ static inline struct thread_info *current_thread_info(void) + #define _TIF_MEMDIE (1 << TIF_MEMDIE) + #define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT) + #define _TIF_SECCOMP (1 << TIF_SECCOMP) ++#define _TIF_SINGLESTEP (1 << TIF_SINGLESTEP) + + #endif +diff --git a/arch/um/kernel/exec.c b/arch/um/kernel/exec.c +index e8fd5d540b05d..7f7a74c82abb6 100644 +--- a/arch/um/kernel/exec.c ++++ b/arch/um/kernel/exec.c +@@ -44,7 +44,7 @@ void start_thread(struct pt_regs *regs, unsigned long eip, unsigned long esp) + { + PT_REGS_IP(regs) = eip; + PT_REGS_SP(regs) = esp; +- current->ptrace &= ~PT_DTRACE; ++ clear_thread_flag(TIF_SINGLESTEP); + #ifdef SUBARCH_EXECVE1 + SUBARCH_EXECVE1(regs->regs); + #endif +diff --git a/arch/um/kernel/process.c b/arch/um/kernel/process.c +index 17045e7211bfd..d71dd7725bef1 100644 +--- a/arch/um/kernel/process.c ++++ b/arch/um/kernel/process.c +@@ -380,7 +380,7 @@ int singlestepping(void * t) + { + struct 
task_struct *task = t ? t : current; + +- if (!(task->ptrace & PT_DTRACE)) ++ if (!test_thread_flag(TIF_SINGLESTEP)) + return 0; + + if (task->thread.singlestep_syscall) +diff --git a/arch/um/kernel/ptrace.c b/arch/um/kernel/ptrace.c +index b425f47bddbb3..d37802ced5636 100644 +--- a/arch/um/kernel/ptrace.c ++++ b/arch/um/kernel/ptrace.c +@@ -12,7 +12,7 @@ + + void user_enable_single_step(struct task_struct *child) + { +- child->ptrace |= PT_DTRACE; ++ set_tsk_thread_flag(child, TIF_SINGLESTEP); + child->thread.singlestep_syscall = 0; + + #ifdef SUBARCH_SET_SINGLESTEPPING +@@ -22,7 +22,7 @@ void user_enable_single_step(struct task_struct *child) + + void user_disable_single_step(struct task_struct *child) + { +- child->ptrace &= ~PT_DTRACE; ++ clear_tsk_thread_flag(child, TIF_SINGLESTEP); + child->thread.singlestep_syscall = 0; + + #ifdef SUBARCH_SET_SINGLESTEPPING +@@ -121,7 +121,7 @@ static void send_sigtrap(struct uml_pt_regs *regs, int error_code) + } + + /* +- * XXX Check PT_DTRACE vs TIF_SINGLESTEP for singlestepping check and ++ * XXX Check TIF_SINGLESTEP for singlestepping check and + * PT_PTRACED vs TIF_SYSCALL_TRACE for syscall tracing check + */ + int syscall_trace_enter(struct pt_regs *regs) +@@ -145,7 +145,7 @@ void syscall_trace_leave(struct pt_regs *regs) + audit_syscall_exit(regs); + + /* Fake a debug trap */ +- if (ptraced & PT_DTRACE) ++ if (test_thread_flag(TIF_SINGLESTEP)) + send_sigtrap(®s->regs, 0); + + if (!test_thread_flag(TIF_SYSCALL_TRACE)) +diff --git a/arch/um/kernel/signal.c b/arch/um/kernel/signal.c +index 3d57c71c532e4..01628195ae520 100644 +--- a/arch/um/kernel/signal.c ++++ b/arch/um/kernel/signal.c +@@ -53,7 +53,7 @@ static void handle_signal(struct ksignal *ksig, struct pt_regs *regs) + unsigned long sp; + int err; + +- if ((current->ptrace & PT_DTRACE) && (current->ptrace & PT_PTRACED)) ++ if (test_thread_flag(TIF_SINGLESTEP) && (current->ptrace & PT_PTRACED)) + singlestep = 1; + + /* Did we come from a system call? 
*/ +@@ -128,7 +128,7 @@ void do_signal(struct pt_regs *regs) + * on the host. The tracing thread will check this flag and + * PTRACE_SYSCALL if necessary. + */ +- if (current->ptrace & PT_DTRACE) ++ if (test_thread_flag(TIF_SINGLESTEP)) + current->thread.singlestep_syscall = + is_syscall(PT_REGS_IP(¤t->thread.regs)); + +diff --git a/arch/x86/entry/vdso/vma.c b/arch/x86/entry/vdso/vma.c +index f5937742b2901..3613cfb83c6dc 100644 +--- a/arch/x86/entry/vdso/vma.c ++++ b/arch/x86/entry/vdso/vma.c +@@ -323,7 +323,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp) + static __init int vdso_setup(char *s) + { + vdso64_enabled = simple_strtoul(s, NULL, 0); +- return 0; ++ return 1; + } + __setup("vdso=", vdso_setup); + +diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c +index b7baaa9733173..2e930d8c04d95 100644 +--- a/arch/x86/events/amd/ibs.c ++++ b/arch/x86/events/amd/ibs.c +@@ -312,6 +312,16 @@ static int perf_ibs_init(struct perf_event *event) + hwc->config_base = perf_ibs->msr; + hwc->config = config; + ++ /* ++ * rip recorded by IbsOpRip will not be consistent with rsp and rbp ++ * recorded as part of interrupt regs. Thus we need to use rip from ++ * interrupt regs while unwinding call stack. Setting _EARLY flag ++ * makes sure we unwind call-stack before perf sample rip is set to ++ * IbsOpRip. ++ */ ++ if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN) ++ event->attr.sample_type |= __PERF_SAMPLE_CALLCHAIN_EARLY; ++ + return 0; + } + +@@ -683,6 +693,14 @@ fail: + data.raw = &raw; + } + ++ /* ++ * rip recorded by IbsOpRip will not be consistent with rsp and rbp ++ * recorded as part of interrupt regs. Thus we need to use rip from ++ * interrupt regs while unwinding call stack. 
++ */ ++ if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN) ++ data.callchain = perf_callchain(event, iregs); ++ + throttle = perf_event_overflow(event, &data, ®s); + out: + if (throttle) { +diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c +index b33540e1efa88..f2976204e8b5d 100644 +--- a/arch/x86/events/intel/core.c ++++ b/arch/x86/events/intel/core.c +@@ -250,7 +250,7 @@ static struct event_constraint intel_icl_event_constraints[] = { + INTEL_EVENT_CONSTRAINT_RANGE(0x03, 0x0a, 0xf), + INTEL_EVENT_CONSTRAINT_RANGE(0x1f, 0x28, 0xf), + INTEL_EVENT_CONSTRAINT(0x32, 0xf), /* SW_PREFETCH_ACCESS.* */ +- INTEL_EVENT_CONSTRAINT_RANGE(0x48, 0x54, 0xf), ++ INTEL_EVENT_CONSTRAINT_RANGE(0x48, 0x56, 0xf), + INTEL_EVENT_CONSTRAINT_RANGE(0x60, 0x8b, 0xf), + INTEL_UEVENT_CONSTRAINT(0x04a3, 0xff), /* CYCLE_ACTIVITY.STALLS_TOTAL */ + INTEL_UEVENT_CONSTRAINT(0x10a3, 0xff), /* CYCLE_ACTIVITY.CYCLES_MEM_ANY */ +diff --git a/arch/x86/include/asm/acenv.h b/arch/x86/include/asm/acenv.h +index 9aff97f0de7fd..d937c55e717e6 100644 +--- a/arch/x86/include/asm/acenv.h ++++ b/arch/x86/include/asm/acenv.h +@@ -13,7 +13,19 @@ + + /* Asm macros */ + +-#define ACPI_FLUSH_CPU_CACHE() wbinvd() ++/* ++ * ACPI_FLUSH_CPU_CACHE() flushes caches on entering sleep states. ++ * It is required to prevent data loss. ++ * ++ * While running inside virtual machine, the kernel can bypass cache flushing. ++ * Changing sleep state in a virtual machine doesn't affect the host system ++ * sleep state and cannot lead to data loss. 
++ */ ++#define ACPI_FLUSH_CPU_CACHE() \ ++do { \ ++ if (!cpu_feature_enabled(X86_FEATURE_HYPERVISOR)) \ ++ wbinvd(); \ ++} while (0) + + int __acpi_acquire_global_lock(unsigned int *lock); + int __acpi_release_global_lock(unsigned int *lock); +diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h +index 59bf91c57aa85..619c1f80a2abe 100644 +--- a/arch/x86/include/asm/cpufeature.h ++++ b/arch/x86/include/asm/cpufeature.h +@@ -49,7 +49,7 @@ extern const char * const x86_power_flags[32]; + extern const char * const x86_bug_flags[NBUGINTS*32]; + + #define test_cpu_cap(c, bit) \ +- test_bit(bit, (unsigned long *)((c)->x86_capability)) ++ arch_test_bit(bit, (unsigned long *)((c)->x86_capability)) + + /* + * There are 32 bits/features in each mask word. The high bits +diff --git a/arch/x86/include/asm/suspend_32.h b/arch/x86/include/asm/suspend_32.h +index fdbd9d7b7bca1..3b97aa9215430 100644 +--- a/arch/x86/include/asm/suspend_32.h ++++ b/arch/x86/include/asm/suspend_32.h +@@ -21,7 +21,6 @@ struct saved_context { + #endif + unsigned long cr0, cr2, cr3, cr4; + u64 misc_enable; +- bool misc_enable_saved; + struct saved_msrs saved_msrs; + struct desc_ptr gdt_desc; + struct desc_ptr idt; +@@ -30,6 +29,7 @@ struct saved_context { + unsigned long tr; + unsigned long safety; + unsigned long return_address; ++ bool misc_enable_saved; + } __attribute__((packed)); + + /* routines for saving/restoring kernel state */ +diff --git a/arch/x86/include/asm/suspend_64.h b/arch/x86/include/asm/suspend_64.h +index 35bb35d28733e..54df06687d834 100644 +--- a/arch/x86/include/asm/suspend_64.h ++++ b/arch/x86/include/asm/suspend_64.h +@@ -14,9 +14,13 @@ + * Image of the saved processor state, used by the low level ACPI suspend to + * RAM code and by the low level hibernation code. 
+ * +- * If you modify it, fix arch/x86/kernel/acpi/wakeup_64.S and make sure that +- * __save/__restore_processor_state(), defined in arch/x86/kernel/suspend_64.c, +- * still work as required. ++ * If you modify it, check how it is used in arch/x86/kernel/acpi/wakeup_64.S ++ * and make sure that __save/__restore_processor_state(), defined in ++ * arch/x86/power/cpu.c, still work as required. ++ * ++ * Because the structure is packed, make sure to avoid unaligned members. For ++ * optimisation purposes but also because tools like kmemleak only search for ++ * pointers that are aligned. + */ + struct saved_context { + struct pt_regs regs; +@@ -36,7 +40,6 @@ struct saved_context { + + unsigned long cr0, cr2, cr3, cr4; + u64 misc_enable; +- bool misc_enable_saved; + struct saved_msrs saved_msrs; + unsigned long efer; + u16 gdt_pad; /* Unused */ +@@ -48,6 +51,7 @@ struct saved_context { + unsigned long tr; + unsigned long safety; + unsigned long return_address; ++ bool misc_enable_saved; + } __attribute__((packed)); + + #define loaddebug(thread,register) \ +diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c +index 4e4476b832be2..68c7340325233 100644 +--- a/arch/x86/kernel/apic/apic.c ++++ b/arch/x86/kernel/apic/apic.c +@@ -168,7 +168,7 @@ static __init int setup_apicpmtimer(char *s) + { + apic_calibrate_pmtmr = 1; + notsc_setup(NULL); +- return 0; ++ return 1; + } + __setup("apicpmtimer", setup_apicpmtimer); + #endif +diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c +index 11d5c5950e2d3..44688917d51fd 100644 +--- a/arch/x86/kernel/cpu/intel.c ++++ b/arch/x86/kernel/cpu/intel.c +@@ -97,7 +97,7 @@ static bool ring3mwait_disabled __read_mostly; + static int __init ring3mwait_disable(char *__unused) + { + ring3mwait_disabled = true; +- return 0; ++ return 1; + } + __setup("ring3mwait=disable", ring3mwait_disable); + +diff --git a/arch/x86/kernel/step.c b/arch/x86/kernel/step.c +index 60d2c3798ba28..2f97d1a1032f3 100644 +--- 
a/arch/x86/kernel/step.c ++++ b/arch/x86/kernel/step.c +@@ -175,8 +175,7 @@ void set_task_blockstep(struct task_struct *task, bool on) + * + * NOTE: this means that set/clear TIF_BLOCKSTEP is only safe if + * task is current or it can't be running, otherwise we can race +- * with __switch_to_xtra(). We rely on ptrace_freeze_traced() but +- * PTRACE_KILL is not safe. ++ * with __switch_to_xtra(). We rely on ptrace_freeze_traced(). + */ + local_irq_disable(); + debugctl = get_debugctlmsr(); +diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c +index f7476ce23b6e0..42e31358a9d32 100644 +--- a/arch/x86/kernel/sys_x86_64.c ++++ b/arch/x86/kernel/sys_x86_64.c +@@ -70,9 +70,6 @@ static int __init control_va_addr_alignment(char *str) + if (*str == 0) + return 1; + +- if (*str == '=') +- str++; +- + if (!strcmp(str, "32")) + va_align.flags = ALIGN_VA_32; + else if (!strcmp(str, "64")) +@@ -82,11 +79,11 @@ static int __init control_va_addr_alignment(char *str) + else if (!strcmp(str, "on")) + va_align.flags = ALIGN_VA_32 | ALIGN_VA_64; + else +- return 0; ++ pr_warn("invalid option value: 'align_va_addr=%s'\n", str); + + return 1; + } +-__setup("align_va_addr", control_va_addr_alignment); ++__setup("align_va_addr=", control_va_addr_alignment); + + SYSCALL_DEFINE6(mmap, unsigned long, addr, unsigned long, len, + unsigned long, prot, unsigned long, flags, +diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c +index 3041015b05f71..9f61ae64b7277 100644 +--- a/arch/x86/kvm/vmx/nested.c ++++ b/arch/x86/kvm/vmx/nested.c +@@ -3746,12 +3746,12 @@ static void prepare_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12, + /* update exit information fields: */ + vmcs12->vm_exit_reason = exit_reason; + vmcs12->exit_qualification = exit_qualification; +- vmcs12->vm_exit_intr_info = exit_intr_info; +- +- vmcs12->idt_vectoring_info_field = 0; +- vmcs12->vm_exit_instruction_len = vmcs_read32(VM_EXIT_INSTRUCTION_LEN); +- vmcs12->vmx_instruction_info = 
vmcs_read32(VMX_INSTRUCTION_INFO); + ++ /* ++ * On VM-Exit due to a failed VM-Entry, the VMCS isn't marked launched ++ * and only EXIT_REASON and EXIT_QUALIFICATION are updated, all other ++ * exit info fields are unmodified. ++ */ + if (!(vmcs12->vm_exit_reason & VMX_EXIT_REASONS_FAILED_VMENTRY)) { + vmcs12->launch_state = 1; + +@@ -3763,8 +3763,13 @@ static void prepare_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12, + * Transfer the event that L0 or L1 may wanted to inject into + * L2 to IDT_VECTORING_INFO_FIELD. + */ ++ vmcs12->idt_vectoring_info_field = 0; + vmcs12_save_pending_event(vcpu, vmcs12); + ++ vmcs12->vm_exit_intr_info = exit_intr_info; ++ vmcs12->vm_exit_instruction_len = vmcs_read32(VM_EXIT_INSTRUCTION_LEN); ++ vmcs12->vmx_instruction_info = vmcs_read32(VMX_INSTRUCTION_INFO); ++ + /* + * According to spec, there's no need to store the guest's + * MSRs if the exit is due to a VM-entry failure that occurs +diff --git a/arch/x86/lib/delay.c b/arch/x86/lib/delay.c +index c126571e5e2ee..3d1cfad36ba21 100644 +--- a/arch/x86/lib/delay.c ++++ b/arch/x86/lib/delay.c +@@ -43,8 +43,8 @@ static void delay_loop(unsigned long loops) + " jnz 2b \n" + "3: dec %0 \n" + +- : /* we don't need output */ +- :"a" (loops) ++ : "+a" (loops) ++ : + ); + } + +diff --git a/arch/x86/mm/pat.c b/arch/x86/mm/pat.c +index 35b2e35c22035..c7c4e2f8c6a5c 100644 +--- a/arch/x86/mm/pat.c ++++ b/arch/x86/mm/pat.c +@@ -75,7 +75,7 @@ int pat_debug_enable; + static int __init pat_debug_setup(char *str) + { + pat_debug_enable = 1; +- return 0; ++ return 1; + } + __setup("debugpat", pat_debug_setup); + +diff --git a/arch/x86/um/ldt.c b/arch/x86/um/ldt.c +index 3ee234b6234dd..255a44dd415a9 100644 +--- a/arch/x86/um/ldt.c ++++ b/arch/x86/um/ldt.c +@@ -23,9 +23,11 @@ static long write_ldt_entry(struct mm_id *mm_idp, int func, + { + long res; + void *stub_addr; ++ ++ BUILD_BUG_ON(sizeof(*desc) % sizeof(long)); ++ + res = syscall_stub_data(mm_idp, (unsigned long *)desc, +- (sizeof(*desc) + 
sizeof(long) - 1) & +- ~(sizeof(long) - 1), ++ sizeof(*desc) / sizeof(long), + addr, &stub_addr); + if (!res) { + unsigned long args[] = { func, +diff --git a/arch/xtensa/kernel/ptrace.c b/arch/xtensa/kernel/ptrace.c +index 145742d70a9f2..998b4249065a6 100644 +--- a/arch/xtensa/kernel/ptrace.c ++++ b/arch/xtensa/kernel/ptrace.c +@@ -225,12 +225,12 @@ const struct user_regset_view *task_user_regset_view(struct task_struct *task) + + void user_enable_single_step(struct task_struct *child) + { +- child->ptrace |= PT_SINGLESTEP; ++ set_tsk_thread_flag(child, TIF_SINGLESTEP); + } + + void user_disable_single_step(struct task_struct *child) + { +- child->ptrace &= ~PT_SINGLESTEP; ++ clear_tsk_thread_flag(child, TIF_SINGLESTEP); + } + + /* +diff --git a/arch/xtensa/kernel/signal.c b/arch/xtensa/kernel/signal.c +index dae83cddd6ca2..cf2bd960b30d4 100644 +--- a/arch/xtensa/kernel/signal.c ++++ b/arch/xtensa/kernel/signal.c +@@ -465,7 +465,7 @@ static void do_signal(struct pt_regs *regs) + /* Set up the stack frame */ + ret = setup_frame(&ksig, sigmask_to_save(), regs); + signal_setup_done(ret, &ksig, 0); +- if (current->ptrace & PT_SINGLESTEP) ++ if (test_thread_flag(TIF_SINGLESTEP)) + task_pt_regs(current)->icountlevel = 1; + + return; +@@ -491,7 +491,7 @@ static void do_signal(struct pt_regs *regs) + /* If there's no signal to deliver, we just restore the saved mask. 
*/ + restore_saved_sigmask(); + +- if (current->ptrace & PT_SINGLESTEP) ++ if (test_thread_flag(TIF_SINGLESTEP)) + task_pt_regs(current)->icountlevel = 1; + return; + } +diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c +index c17eb794f0aef..09d721b1f6acf 100644 +--- a/block/bfq-cgroup.c ++++ b/block/bfq-cgroup.c +@@ -536,6 +536,7 @@ static void bfq_pd_init(struct blkg_policy_data *pd) + */ + bfqg->bfqd = bfqd; + bfqg->active_entities = 0; ++ bfqg->online = true; + bfqg->rq_pos_tree = RB_ROOT; + } + +@@ -564,28 +565,11 @@ static void bfq_group_set_parent(struct bfq_group *bfqg, + entity->sched_data = &parent->sched_data; + } + +-static struct bfq_group *bfq_lookup_bfqg(struct bfq_data *bfqd, +- struct blkcg *blkcg) ++static void bfq_link_bfqg(struct bfq_data *bfqd, struct bfq_group *bfqg) + { +- struct blkcg_gq *blkg; +- +- blkg = blkg_lookup(blkcg, bfqd->queue); +- if (likely(blkg)) +- return blkg_to_bfqg(blkg); +- return NULL; +-} +- +-struct bfq_group *bfq_find_set_group(struct bfq_data *bfqd, +- struct blkcg *blkcg) +-{ +- struct bfq_group *bfqg, *parent; ++ struct bfq_group *parent; + struct bfq_entity *entity; + +- bfqg = bfq_lookup_bfqg(bfqd, blkcg); +- +- if (unlikely(!bfqg)) +- return NULL; +- + /* + * Update chain of bfq_groups as we might be handling a leaf group + * which, along with some of its relatives, has not been hooked yet +@@ -602,8 +586,24 @@ struct bfq_group *bfq_find_set_group(struct bfq_data *bfqd, + bfq_group_set_parent(curr_bfqg, parent); + } + } ++} + +- return bfqg; ++struct bfq_group *bfq_bio_bfqg(struct bfq_data *bfqd, struct bio *bio) ++{ ++ struct blkcg_gq *blkg = bio->bi_blkg; ++ struct bfq_group *bfqg; ++ ++ while (blkg) { ++ bfqg = blkg_to_bfqg(blkg); ++ if (bfqg->online) { ++ bio_associate_blkg_from_css(bio, &blkg->blkcg->css); ++ return bfqg; ++ } ++ blkg = blkg->parent; ++ } ++ bio_associate_blkg_from_css(bio, ++ &bfqg_to_blkg(bfqd->root_group)->blkcg->css); ++ return bfqd->root_group; + } + + /** +@@ -679,25 +679,15 @@ void 
bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq, + * Move bic to blkcg, assuming that bfqd->lock is held; which makes + * sure that the reference to cgroup is valid across the call (see + * comments in bfq_bic_update_cgroup on this issue) +- * +- * NOTE: an alternative approach might have been to store the current +- * cgroup in bfqq and getting a reference to it, reducing the lookup +- * time here, at the price of slightly more complex code. + */ +-static struct bfq_group *__bfq_bic_change_cgroup(struct bfq_data *bfqd, +- struct bfq_io_cq *bic, +- struct blkcg *blkcg) ++static void *__bfq_bic_change_cgroup(struct bfq_data *bfqd, ++ struct bfq_io_cq *bic, ++ struct bfq_group *bfqg) + { + struct bfq_queue *async_bfqq = bic_to_bfqq(bic, 0); + struct bfq_queue *sync_bfqq = bic_to_bfqq(bic, 1); +- struct bfq_group *bfqg; + struct bfq_entity *entity; + +- bfqg = bfq_find_set_group(bfqd, blkcg); +- +- if (unlikely(!bfqg)) +- bfqg = bfqd->root_group; +- + if (async_bfqq) { + entity = &async_bfqq->entity; + +@@ -708,9 +698,39 @@ static struct bfq_group *__bfq_bic_change_cgroup(struct bfq_data *bfqd, + } + + if (sync_bfqq) { +- entity = &sync_bfqq->entity; +- if (entity->sched_data != &bfqg->sched_data) +- bfq_bfqq_move(bfqd, sync_bfqq, bfqg); ++ if (!sync_bfqq->new_bfqq && !bfq_bfqq_coop(sync_bfqq)) { ++ /* We are the only user of this bfqq, just move it */ ++ if (sync_bfqq->entity.sched_data != &bfqg->sched_data) ++ bfq_bfqq_move(bfqd, sync_bfqq, bfqg); ++ } else { ++ struct bfq_queue *bfqq; ++ ++ /* ++ * The queue was merged to a different queue. Check ++ * that the merge chain still belongs to the same ++ * cgroup. ++ */ ++ for (bfqq = sync_bfqq; bfqq; bfqq = bfqq->new_bfqq) ++ if (bfqq->entity.sched_data != ++ &bfqg->sched_data) ++ break; ++ if (bfqq) { ++ /* ++ * Some queue changed cgroup so the merge is ++ * not valid anymore. 
We cannot easily just ++ * cancel the merge (by clearing new_bfqq) as ++ * there may be other processes using this ++ * queue and holding refs to all queues below ++ * sync_bfqq->new_bfqq. Similarly if the merge ++ * already happened, we need to detach from ++ * bfqq now so that we cannot merge bio to a ++ * request from the old cgroup. ++ */ ++ bfq_put_cooperator(sync_bfqq); ++ bfq_release_process_ref(bfqd, sync_bfqq); ++ bic_set_bfqq(bic, NULL, 1); ++ } ++ } + } + + return bfqg; +@@ -719,20 +739,24 @@ static struct bfq_group *__bfq_bic_change_cgroup(struct bfq_data *bfqd, + void bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio) + { + struct bfq_data *bfqd = bic_to_bfqd(bic); +- struct bfq_group *bfqg = NULL; ++ struct bfq_group *bfqg = bfq_bio_bfqg(bfqd, bio); + uint64_t serial_nr; + +- rcu_read_lock(); +- serial_nr = __bio_blkcg(bio)->css.serial_nr; ++ serial_nr = bfqg_to_blkg(bfqg)->blkcg->css.serial_nr; + + /* + * Check whether blkcg has changed. The condition may trigger + * spuriously on a newly created cic but there's no harm. + */ + if (unlikely(!bfqd) || likely(bic->blkcg_serial_nr == serial_nr)) +- goto out; ++ return; + +- bfqg = __bfq_bic_change_cgroup(bfqd, bic, __bio_blkcg(bio)); ++ /* ++ * New cgroup for this process. Make sure it is linked to bfq internal ++ * cgroup hierarchy. ++ */ ++ bfq_link_bfqg(bfqd, bfqg); ++ __bfq_bic_change_cgroup(bfqd, bic, bfqg); + /* + * Update blkg_path for bfq_log_* functions. 
We cache this + * path, and update it here, for the following +@@ -785,8 +809,6 @@ void bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio) + */ + blkg_path(bfqg_to_blkg(bfqg), bfqg->blkg_path, sizeof(bfqg->blkg_path)); + bic->blkcg_serial_nr = serial_nr; +-out: +- rcu_read_unlock(); + } + + /** +@@ -914,6 +936,7 @@ static void bfq_pd_offline(struct blkg_policy_data *pd) + + put_async_queues: + bfq_put_async_queues(bfqd, bfqg); ++ bfqg->online = false; + + spin_unlock_irqrestore(&bfqd->lock, flags); + /* +@@ -1402,7 +1425,7 @@ void bfq_end_wr_async(struct bfq_data *bfqd) + bfq_end_wr_async_queues(bfqd, bfqd->root_group); + } + +-struct bfq_group *bfq_find_set_group(struct bfq_data *bfqd, struct blkcg *blkcg) ++struct bfq_group *bfq_bio_bfqg(struct bfq_data *bfqd, struct bio *bio) + { + return bfqd->root_group; + } +diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c +index d46806182b051..962701d3f46bd 100644 +--- a/block/bfq-iosched.c ++++ b/block/bfq-iosched.c +@@ -2227,10 +2227,17 @@ static bool bfq_bio_merge(struct request_queue *q, struct bio *bio, + + spin_lock_irq(&bfqd->lock); + +- if (bic) ++ if (bic) { ++ /* ++ * Make sure cgroup info is uptodate for current process before ++ * considering the merge. 
++ */ ++ bfq_bic_update_cgroup(bic, bio); ++ + bfqd->bio_bfqq = bic_to_bfqq(bic, op_is_sync(bio->bi_opf)); +- else ++ } else { + bfqd->bio_bfqq = NULL; ++ } + bfqd->bio_bic = bic; + + ret = blk_mq_sched_try_merge(q, bio, nr_segs, &free); +@@ -2260,8 +2267,6 @@ static int bfq_request_merge(struct request_queue *q, struct request **req, + return ELEVATOR_NO_MERGE; + } + +-static struct bfq_queue *bfq_init_rq(struct request *rq); +- + static void bfq_request_merged(struct request_queue *q, struct request *req, + enum elv_merge type) + { +@@ -2270,7 +2275,7 @@ static void bfq_request_merged(struct request_queue *q, struct request *req, + blk_rq_pos(req) < + blk_rq_pos(container_of(rb_prev(&req->rb_node), + struct request, rb_node))) { +- struct bfq_queue *bfqq = bfq_init_rq(req); ++ struct bfq_queue *bfqq = RQ_BFQQ(req); + struct bfq_data *bfqd; + struct request *prev, *next_rq; + +@@ -2322,8 +2327,8 @@ static void bfq_request_merged(struct request_queue *q, struct request *req, + static void bfq_requests_merged(struct request_queue *q, struct request *rq, + struct request *next) + { +- struct bfq_queue *bfqq = bfq_init_rq(rq), +- *next_bfqq = bfq_init_rq(next); ++ struct bfq_queue *bfqq = RQ_BFQQ(rq), ++ *next_bfqq = RQ_BFQQ(next); + + if (!bfqq) + return; +@@ -2502,6 +2507,14 @@ bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq) + if (process_refs == 0 || new_process_refs == 0) + return NULL; + ++ /* ++ * Make sure merged queues belong to the same parent. Parents could ++ * have changed since the time we decided the two queues are suitable ++ * for merging. 
++ */ ++ if (new_bfqq->entity.parent != bfqq->entity.parent) ++ return NULL; ++ + bfq_log_bfqq(bfqq->bfqd, bfqq, "scheduling merge with queue %d", + new_bfqq->pid); + +@@ -4914,7 +4927,7 @@ void bfq_put_queue(struct bfq_queue *bfqq) + bfqg_and_blkg_put(bfqg); + } + +-static void bfq_put_cooperator(struct bfq_queue *bfqq) ++void bfq_put_cooperator(struct bfq_queue *bfqq) + { + struct bfq_queue *__bfqq, *next; + +@@ -5145,14 +5158,7 @@ static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd, + struct bfq_queue *bfqq; + struct bfq_group *bfqg; + +- rcu_read_lock(); +- +- bfqg = bfq_find_set_group(bfqd, __bio_blkcg(bio)); +- if (!bfqg) { +- bfqq = &bfqd->oom_bfqq; +- goto out; +- } +- ++ bfqg = bfq_bio_bfqg(bfqd, bio); + if (!is_sync) { + async_bfqq = bfq_async_queue_prio(bfqd, bfqg, ioprio_class, + ioprio); +@@ -5196,7 +5202,6 @@ static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd, + out: + bfqq->ref++; /* get a process reference to this queue */ + bfq_log_bfqq(bfqd, bfqq, "get_queue, at end: %p, %d", bfqq, bfqq->ref); +- rcu_read_unlock(); + return bfqq; + } + +@@ -5499,6 +5504,8 @@ static inline void bfq_update_insert_stats(struct request_queue *q, + unsigned int cmd_flags) {} + #endif /* CONFIG_BFQ_CGROUP_DEBUG */ + ++static struct bfq_queue *bfq_init_rq(struct request *rq); ++ + static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq, + bool at_head) + { +@@ -5509,17 +5516,14 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq, + unsigned int cmd_flags; + + spin_lock_irq(&bfqd->lock); ++ bfqq = bfq_init_rq(rq); + if (blk_mq_sched_try_insert_merge(q, rq)) { + spin_unlock_irq(&bfqd->lock); + return; + } + +- spin_unlock_irq(&bfqd->lock); +- + blk_mq_sched_request_inserted(rq); + +- spin_lock_irq(&bfqd->lock); +- bfqq = bfq_init_rq(rq); + if (!bfqq || at_head || blk_rq_is_passthrough(rq)) { + if (at_head) + list_add(&rq->queuelist, &bfqd->dispatch); +diff --git a/block/bfq-iosched.h 
b/block/bfq-iosched.h +index de98fdfe9ea17..f6cc2b4180086 100644 +--- a/block/bfq-iosched.h ++++ b/block/bfq-iosched.h +@@ -896,6 +896,8 @@ struct bfq_group { + + /* reference counter (see comments in bfq_bic_update_cgroup) */ + int ref; ++ /* Is bfq_group still online? */ ++ bool online; + + struct bfq_entity entity; + struct bfq_sched_data sched_data; +@@ -949,6 +951,7 @@ void bfq_weights_tree_remove(struct bfq_data *bfqd, + void bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq, + bool compensate, enum bfqq_expiration reason); + void bfq_put_queue(struct bfq_queue *bfqq); ++void bfq_put_cooperator(struct bfq_queue *bfqq); + void bfq_end_wr_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg); + void bfq_release_process_ref(struct bfq_data *bfqd, struct bfq_queue *bfqq); + void bfq_schedule_dispatch(struct bfq_data *bfqd); +@@ -975,8 +978,7 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq, + void bfq_init_entity(struct bfq_entity *entity, struct bfq_group *bfqg); + void bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio); + void bfq_end_wr_async(struct bfq_data *bfqd); +-struct bfq_group *bfq_find_set_group(struct bfq_data *bfqd, +- struct blkcg *blkcg); ++struct bfq_group *bfq_bio_bfqg(struct bfq_data *bfqd, struct bio *bio); + struct blkcg_gq *bfqg_to_blkg(struct bfq_group *bfqg); + struct bfq_group *bfqq_group(struct bfq_queue *bfqq); + struct bfq_group *bfq_create_group_hierarchy(struct bfq_data *bfqd, int node); +diff --git a/block/bio.c b/block/bio.c +index 40004a3631a80..08dbdc32ceaa8 100644 +--- a/block/bio.c ++++ b/block/bio.c +@@ -2179,7 +2179,7 @@ void bio_clone_blkg_association(struct bio *dst, struct bio *src) + rcu_read_lock(); + + if (src->bi_blkg) +- __bio_associate_blkg(dst, src->bi_blkg); ++ bio_associate_blkg_from_css(dst, &bio_blkcg(src)->css); + + rcu_read_unlock(); + } +diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c +index 71a82528d4bfe..a4156b3b33c31 100644 +--- 
a/block/blk-iolatency.c ++++ b/block/blk-iolatency.c +@@ -86,7 +86,17 @@ struct iolatency_grp; + struct blk_iolatency { + struct rq_qos rqos; + struct timer_list timer; +- atomic_t enabled; ++ ++ /* ++ * ->enabled is the master enable switch gating the throttling logic and ++ * inflight tracking. The number of cgroups which have iolat enabled is ++ * tracked in ->enable_cnt, and ->enable is flipped on/off accordingly ++ * from ->enable_work with the request_queue frozen. For details, See ++ * blkiolatency_enable_work_fn(). ++ */ ++ bool enabled; ++ atomic_t enable_cnt; ++ struct work_struct enable_work; + }; + + static inline struct blk_iolatency *BLKIOLATENCY(struct rq_qos *rqos) +@@ -94,11 +104,6 @@ static inline struct blk_iolatency *BLKIOLATENCY(struct rq_qos *rqos) + return container_of(rqos, struct blk_iolatency, rqos); + } + +-static inline bool blk_iolatency_enabled(struct blk_iolatency *blkiolat) +-{ +- return atomic_read(&blkiolat->enabled) > 0; +-} +- + struct child_latency_info { + spinlock_t lock; + +@@ -463,7 +468,7 @@ static void blkcg_iolatency_throttle(struct rq_qos *rqos, struct bio *bio) + struct blkcg_gq *blkg = bio->bi_blkg; + bool issue_as_root = bio_issue_as_root_blkg(bio); + +- if (!blk_iolatency_enabled(blkiolat)) ++ if (!blkiolat->enabled) + return; + + while (blkg && blkg->parent) { +@@ -593,7 +598,6 @@ static void blkcg_iolatency_done_bio(struct rq_qos *rqos, struct bio *bio) + u64 window_start; + u64 now = ktime_to_ns(ktime_get()); + bool issue_as_root = bio_issue_as_root_blkg(bio); +- bool enabled = false; + int inflight = 0; + + blkg = bio->bi_blkg; +@@ -604,8 +608,7 @@ static void blkcg_iolatency_done_bio(struct rq_qos *rqos, struct bio *bio) + if (!iolat) + return; + +- enabled = blk_iolatency_enabled(iolat->blkiolat); +- if (!enabled) ++ if (!iolat->blkiolat->enabled) + return; + + while (blkg && blkg->parent) { +@@ -643,6 +646,7 @@ static void blkcg_iolatency_exit(struct rq_qos *rqos) + struct blk_iolatency *blkiolat = 
BLKIOLATENCY(rqos); + + del_timer_sync(&blkiolat->timer); ++ flush_work(&blkiolat->enable_work); + blkcg_deactivate_policy(rqos->q, &blkcg_policy_iolatency); + kfree(blkiolat); + } +@@ -714,6 +718,44 @@ next: + rcu_read_unlock(); + } + ++/** ++ * blkiolatency_enable_work_fn - Enable or disable iolatency on the device ++ * @work: enable_work of the blk_iolatency of interest ++ * ++ * iolatency needs to keep track of the number of in-flight IOs per cgroup. This ++ * is relatively expensive as it involves walking up the hierarchy twice for ++ * every IO. Thus, if iolatency is not enabled in any cgroup for the device, we ++ * want to disable the in-flight tracking. ++ * ++ * We have to make sure that the counting is balanced - we don't want to leak ++ * the in-flight counts by disabling accounting in the completion path while IOs ++ * are in flight. This is achieved by ensuring that no IO is in flight by ++ * freezing the queue while flipping ->enabled. As this requires a sleepable ++ * context, ->enabled flipping is punted to this work function. ++ */ ++static void blkiolatency_enable_work_fn(struct work_struct *work) ++{ ++ struct blk_iolatency *blkiolat = container_of(work, struct blk_iolatency, ++ enable_work); ++ bool enabled; ++ ++ /* ++ * There can only be one instance of this function running for @blkiolat ++ * and it's guaranteed to be executed at least once after the latest ++ * ->enabled_cnt modification. Acting on the latest ->enable_cnt is ++ * sufficient. ++ * ++ * Also, we know @blkiolat is safe to access as ->enable_work is flushed ++ * in blkcg_iolatency_exit(). 
++ */ ++ enabled = atomic_read(&blkiolat->enable_cnt); ++ if (enabled != blkiolat->enabled) { ++ blk_mq_freeze_queue(blkiolat->rqos.q); ++ blkiolat->enabled = enabled; ++ blk_mq_unfreeze_queue(blkiolat->rqos.q); ++ } ++} ++ + int blk_iolatency_init(struct request_queue *q) + { + struct blk_iolatency *blkiolat; +@@ -739,17 +781,15 @@ int blk_iolatency_init(struct request_queue *q) + } + + timer_setup(&blkiolat->timer, blkiolatency_timer_fn, 0); ++ INIT_WORK(&blkiolat->enable_work, blkiolatency_enable_work_fn); + + return 0; + } + +-/* +- * return 1 for enabling iolatency, return -1 for disabling iolatency, otherwise +- * return 0. +- */ +-static int iolatency_set_min_lat_nsec(struct blkcg_gq *blkg, u64 val) ++static void iolatency_set_min_lat_nsec(struct blkcg_gq *blkg, u64 val) + { + struct iolatency_grp *iolat = blkg_to_lat(blkg); ++ struct blk_iolatency *blkiolat = iolat->blkiolat; + u64 oldval = iolat->min_lat_nsec; + + iolat->min_lat_nsec = val; +@@ -757,13 +797,15 @@ static int iolatency_set_min_lat_nsec(struct blkcg_gq *blkg, u64 val) + iolat->cur_win_nsec = min_t(u64, iolat->cur_win_nsec, + BLKIOLATENCY_MAX_WIN_SIZE); + +- if (!oldval && val) +- return 1; ++ if (!oldval && val) { ++ if (atomic_inc_return(&blkiolat->enable_cnt) == 1) ++ schedule_work(&blkiolat->enable_work); ++ } + if (oldval && !val) { + blkcg_clear_delay(blkg); +- return -1; ++ if (atomic_dec_return(&blkiolat->enable_cnt) == 0) ++ schedule_work(&blkiolat->enable_work); + } +- return 0; + } + + static void iolatency_clear_scaling(struct blkcg_gq *blkg) +@@ -795,7 +837,6 @@ static ssize_t iolatency_set_limit(struct kernfs_open_file *of, char *buf, + u64 lat_val = 0; + u64 oldval; + int ret; +- int enable = 0; + + ret = blkg_conf_prep(blkcg, &blkcg_policy_iolatency, buf, &ctx); + if (ret) +@@ -830,41 +871,12 @@ static ssize_t iolatency_set_limit(struct kernfs_open_file *of, char *buf, + blkg = ctx.blkg; + oldval = iolat->min_lat_nsec; + +- enable = iolatency_set_min_lat_nsec(blkg, lat_val); +- 
if (enable) { +- if (!blk_get_queue(blkg->q)) { +- ret = -ENODEV; +- goto out; +- } +- +- blkg_get(blkg); +- } +- +- if (oldval != iolat->min_lat_nsec) { ++ iolatency_set_min_lat_nsec(blkg, lat_val); ++ if (oldval != iolat->min_lat_nsec) + iolatency_clear_scaling(blkg); +- } +- + ret = 0; + out: + blkg_conf_finish(&ctx); +- if (ret == 0 && enable) { +- struct iolatency_grp *tmp = blkg_to_lat(blkg); +- struct blk_iolatency *blkiolat = tmp->blkiolat; +- +- blk_mq_freeze_queue(blkg->q); +- +- if (enable == 1) +- atomic_inc(&blkiolat->enabled); +- else if (enable == -1) +- atomic_dec(&blkiolat->enabled); +- else +- WARN_ON_ONCE(1); +- +- blk_mq_unfreeze_queue(blkg->q); +- +- blkg_put(blkg); +- blk_put_queue(blkg->q); +- } + return ret ?: nbytes; + } + +@@ -1005,14 +1017,8 @@ static void iolatency_pd_offline(struct blkg_policy_data *pd) + { + struct iolatency_grp *iolat = pd_to_lat(pd); + struct blkcg_gq *blkg = lat_to_blkg(iolat); +- struct blk_iolatency *blkiolat = iolat->blkiolat; +- int ret; + +- ret = iolatency_set_min_lat_nsec(blkg, 0); +- if (ret == 1) +- atomic_inc(&blkiolat->enabled); +- if (ret == -1) +- atomic_dec(&blkiolat->enabled); ++ iolatency_set_min_lat_nsec(blkg, 0); + iolatency_clear_scaling(blkg); + } + +diff --git a/crypto/cryptd.c b/crypto/cryptd.c +index 927760b316a4d..43a1a855886bd 100644 +--- a/crypto/cryptd.c ++++ b/crypto/cryptd.c +@@ -39,6 +39,10 @@ struct cryptd_cpu_queue { + }; + + struct cryptd_queue { ++ /* ++ * Protected by disabling BH to allow enqueueing from softinterrupt and ++ * dequeuing from kworker (cryptd_queue_worker()). 
++ */ + struct cryptd_cpu_queue __percpu *cpu_queue; + }; + +@@ -125,28 +129,28 @@ static void cryptd_fini_queue(struct cryptd_queue *queue) + static int cryptd_enqueue_request(struct cryptd_queue *queue, + struct crypto_async_request *request) + { +- int cpu, err; ++ int err; + struct cryptd_cpu_queue *cpu_queue; + refcount_t *refcnt; + +- cpu = get_cpu(); ++ local_bh_disable(); + cpu_queue = this_cpu_ptr(queue->cpu_queue); + err = crypto_enqueue_request(&cpu_queue->queue, request); + + refcnt = crypto_tfm_ctx(request->tfm); + + if (err == -ENOSPC) +- goto out_put_cpu; ++ goto out; + +- queue_work_on(cpu, cryptd_wq, &cpu_queue->work); ++ queue_work_on(smp_processor_id(), cryptd_wq, &cpu_queue->work); + + if (!refcount_read(refcnt)) +- goto out_put_cpu; ++ goto out; + + refcount_inc(refcnt); + +-out_put_cpu: +- put_cpu(); ++out: ++ local_bh_enable(); + + return err; + } +@@ -162,15 +166,10 @@ static void cryptd_queue_worker(struct work_struct *work) + cpu_queue = container_of(work, struct cryptd_cpu_queue, work); + /* + * Only handle one request at a time to avoid hogging crypto workqueue. +- * preempt_disable/enable is used to prevent being preempted by +- * cryptd_enqueue_request(). local_bh_disable/enable is used to prevent +- * cryptd_enqueue_request() being accessed from software interrupts. 
+ */ + local_bh_disable(); +- preempt_disable(); + backlog = crypto_get_backlog(&cpu_queue->queue); + req = crypto_dequeue_request(&cpu_queue->queue); +- preempt_enable(); + local_bh_enable(); + + if (!req) +diff --git a/drivers/acpi/property.c b/drivers/acpi/property.c +index a4b7cdd0c8446..1b0aeb8320448 100644 +--- a/drivers/acpi/property.c ++++ b/drivers/acpi/property.c +@@ -430,6 +430,16 @@ void acpi_init_properties(struct acpi_device *adev) + acpi_extract_apple_properties(adev); + } + ++static void acpi_free_device_properties(struct list_head *list) ++{ ++ struct acpi_device_properties *props, *tmp; ++ ++ list_for_each_entry_safe(props, tmp, list, list) { ++ list_del(&props->list); ++ kfree(props); ++ } ++} ++ + static void acpi_destroy_nondev_subnodes(struct list_head *list) + { + struct acpi_data_node *dn, *next; +@@ -442,22 +452,18 @@ static void acpi_destroy_nondev_subnodes(struct list_head *list) + wait_for_completion(&dn->kobj_done); + list_del(&dn->sibling); + ACPI_FREE((void *)dn->data.pointer); ++ acpi_free_device_properties(&dn->data.properties); + kfree(dn); + } + } + + void acpi_free_properties(struct acpi_device *adev) + { +- struct acpi_device_properties *props, *tmp; +- + acpi_destroy_nondev_subnodes(&adev->data.subnodes); + ACPI_FREE((void *)adev->data.pointer); + adev->data.of_compatible = NULL; + adev->data.pointer = NULL; +- list_for_each_entry_safe(props, tmp, &adev->data.properties, list) { +- list_del(&props->list); +- kfree(props); +- } ++ acpi_free_device_properties(&adev->data.properties); + } + + /** +diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c +index b0e23e3fe0d56..34966128293b1 100644 +--- a/drivers/acpi/sleep.c ++++ b/drivers/acpi/sleep.c +@@ -374,6 +374,18 @@ static const struct dmi_system_id acpisleep_dmi_table[] __initconst = { + DMI_MATCH(DMI_PRODUCT_NAME, "20GGA00L00"), + }, + }, ++ /* ++ * ASUS B1400CEAE hangs on resume from suspend (see ++ * https://bugzilla.kernel.org/show_bug.cgi?id=215742). 
++ */ ++ { ++ .callback = init_default_s3, ++ .ident = "ASUS B1400CEAE", ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), ++ DMI_MATCH(DMI_PRODUCT_NAME, "ASUS EXPERTBOOK B1400CEAE"), ++ }, ++ }, + {}, + }; + +diff --git a/drivers/ata/libata-transport.c b/drivers/ata/libata-transport.c +index 12a505bb9c5b1..c4f36312b8a42 100644 +--- a/drivers/ata/libata-transport.c ++++ b/drivers/ata/libata-transport.c +@@ -196,7 +196,7 @@ static struct { + { XFER_PIO_0, "XFER_PIO_0" }, + { XFER_PIO_SLOW, "XFER_PIO_SLOW" } + }; +-ata_bitfield_name_match(xfer,ata_xfer_names) ++ata_bitfield_name_search(xfer, ata_xfer_names) + + /* + * ATA Port attributes +diff --git a/drivers/ata/pata_octeon_cf.c b/drivers/ata/pata_octeon_cf.c +index ac3b1fda820ff..c240d8cbfd417 100644 +--- a/drivers/ata/pata_octeon_cf.c ++++ b/drivers/ata/pata_octeon_cf.c +@@ -888,12 +888,14 @@ static int octeon_cf_probe(struct platform_device *pdev) + int i; + res_dma = platform_get_resource(dma_dev, IORESOURCE_MEM, 0); + if (!res_dma) { ++ put_device(&dma_dev->dev); + of_node_put(dma_node); + return -EINVAL; + } + cf_port->dma_base = (u64)devm_ioremap_nocache(&pdev->dev, res_dma->start, + resource_size(res_dma)); + if (!cf_port->dma_base) { ++ put_device(&dma_dev->dev); + of_node_put(dma_node); + return -EINVAL; + } +@@ -903,6 +905,7 @@ static int octeon_cf_probe(struct platform_device *pdev) + irq = i; + irq_handler = octeon_cf_interrupt; + } ++ put_device(&dma_dev->dev); + } + of_node_put(dma_node); + } +diff --git a/drivers/base/bus.c b/drivers/base/bus.c +index a1d1e82563244..7d7d28f498edd 100644 +--- a/drivers/base/bus.c ++++ b/drivers/base/bus.c +@@ -620,7 +620,7 @@ int bus_add_driver(struct device_driver *drv) + if (drv->bus->p->drivers_autoprobe) { + error = driver_attach(drv); + if (error) +- goto out_unregister; ++ goto out_del_list; + } + module_add_driver(drv->owner, drv); + +@@ -647,6 +647,8 @@ int bus_add_driver(struct device_driver *drv) + + return 0; + ++out_del_list: ++ 
klist_del(&priv->knode_bus); + out_unregister: + kobject_put(&priv->kobj); + /* drv->p is freed in driver_release() */ +diff --git a/drivers/base/dd.c b/drivers/base/dd.c +index 26cd4ce3ac75f..6f85280fef8d3 100644 +--- a/drivers/base/dd.c ++++ b/drivers/base/dd.c +@@ -873,6 +873,7 @@ out_unlock: + static int __device_attach(struct device *dev, bool allow_async) + { + int ret = 0; ++ bool async = false; + + device_lock(dev); + if (dev->p->dead) { +@@ -911,7 +912,7 @@ static int __device_attach(struct device *dev, bool allow_async) + */ + dev_dbg(dev, "scheduling asynchronous probe\n"); + get_device(dev); +- async_schedule_dev(__device_attach_async_helper, dev); ++ async = true; + } else { + pm_request_idle(dev); + } +@@ -921,6 +922,8 @@ static int __device_attach(struct device *dev, bool allow_async) + } + out_unlock: + device_unlock(dev); ++ if (async) ++ async_schedule_dev(__device_attach_async_helper, dev); + return ret; + } + +diff --git a/drivers/base/node.c b/drivers/base/node.c +index 62a052990bb9b..666eb55c0774e 100644 +--- a/drivers/base/node.c ++++ b/drivers/base/node.c +@@ -641,6 +641,7 @@ static int register_node(struct node *node, int num) + */ + void unregister_node(struct node *node) + { ++ compaction_unregister_node(node); + hugetlb_unregister_node(node); /* no-op, if memoryless node */ + node_remove_accesses(node); + node_remove_caches(node); +diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c +index ba10fa24fa1f1..5ece2fd70d9cf 100644 +--- a/drivers/block/drbd/drbd_main.c ++++ b/drivers/block/drbd/drbd_main.c +@@ -3709,9 +3709,8 @@ const char *cmdname(enum drbd_packet cmd) + * when we want to support more than + * one PRO_VERSION */ + static const char *cmdnames[] = { ++ + [P_DATA] = "Data", +- [P_WSAME] = "WriteSame", +- [P_TRIM] = "Trim", + [P_DATA_REPLY] = "DataReply", + [P_RS_DATA_REPLY] = "RSDataReply", + [P_BARRIER] = "Barrier", +@@ -3722,7 +3721,6 @@ const char *cmdname(enum drbd_packet cmd) + [P_DATA_REQUEST] = 
"DataRequest", + [P_RS_DATA_REQUEST] = "RSDataRequest", + [P_SYNC_PARAM] = "SyncParam", +- [P_SYNC_PARAM89] = "SyncParam89", + [P_PROTOCOL] = "ReportProtocol", + [P_UUIDS] = "ReportUUIDs", + [P_SIZES] = "ReportSizes", +@@ -3730,6 +3728,7 @@ const char *cmdname(enum drbd_packet cmd) + [P_SYNC_UUID] = "ReportSyncUUID", + [P_AUTH_CHALLENGE] = "AuthChallenge", + [P_AUTH_RESPONSE] = "AuthResponse", ++ [P_STATE_CHG_REQ] = "StateChgRequest", + [P_PING] = "Ping", + [P_PING_ACK] = "PingAck", + [P_RECV_ACK] = "RecvAck", +@@ -3740,24 +3739,26 @@ const char *cmdname(enum drbd_packet cmd) + [P_NEG_DREPLY] = "NegDReply", + [P_NEG_RS_DREPLY] = "NegRSDReply", + [P_BARRIER_ACK] = "BarrierAck", +- [P_STATE_CHG_REQ] = "StateChgRequest", + [P_STATE_CHG_REPLY] = "StateChgReply", + [P_OV_REQUEST] = "OVRequest", + [P_OV_REPLY] = "OVReply", + [P_OV_RESULT] = "OVResult", + [P_CSUM_RS_REQUEST] = "CsumRSRequest", + [P_RS_IS_IN_SYNC] = "CsumRSIsInSync", ++ [P_SYNC_PARAM89] = "SyncParam89", + [P_COMPRESSED_BITMAP] = "CBitmap", + [P_DELAY_PROBE] = "DelayProbe", + [P_OUT_OF_SYNC] = "OutOfSync", +- [P_RETRY_WRITE] = "RetryWrite", + [P_RS_CANCEL] = "RSCancel", + [P_CONN_ST_CHG_REQ] = "conn_st_chg_req", + [P_CONN_ST_CHG_REPLY] = "conn_st_chg_reply", + [P_RETRY_WRITE] = "retry_write", + [P_PROTOCOL_UPDATE] = "protocol_update", ++ [P_TRIM] = "Trim", + [P_RS_THIN_REQ] = "rs_thin_req", + [P_RS_DEALLOCATED] = "rs_deallocated", ++ [P_WSAME] = "WriteSame", ++ [P_ZEROES] = "Zeroes", + + /* enum drbd_packet, but not commands - obsoleted flags: + * P_MAY_IGNORE +diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c +index 25e81b1a59a54..09323b0510f0b 100644 +--- a/drivers/block/nbd.c ++++ b/drivers/block/nbd.c +@@ -865,11 +865,15 @@ static int wait_for_reconnect(struct nbd_device *nbd) + struct nbd_config *config = nbd->config; + if (!config->dead_conn_timeout) + return 0; +- if (test_bit(NBD_RT_DISCONNECTED, &config->runtime_flags)) ++ ++ if (!wait_event_timeout(config->conn_wait, ++ 
test_bit(NBD_RT_DISCONNECTED, ++ &config->runtime_flags) || ++ atomic_read(&config->live_connections) > 0, ++ config->dead_conn_timeout)) + return 0; +- return wait_event_timeout(config->conn_wait, +- atomic_read(&config->live_connections) > 0, +- config->dead_conn_timeout) > 0; ++ ++ return !test_bit(NBD_RT_DISCONNECTED, &config->runtime_flags); + } + + static int nbd_handle_cmd(struct nbd_cmd *cmd, int index) +@@ -1340,7 +1344,7 @@ static int nbd_start_device_ioctl(struct nbd_device *nbd, struct block_device *b + static void nbd_clear_sock_ioctl(struct nbd_device *nbd, + struct block_device *bdev) + { +- sock_shutdown(nbd); ++ nbd_clear_sock(nbd); + __invalidate_device(bdev, true); + nbd_bdev_reset(bdev); + if (test_and_clear_bit(NBD_RT_HAS_CONFIG_REF, +@@ -1453,15 +1457,20 @@ static struct nbd_config *nbd_alloc_config(void) + { + struct nbd_config *config; + ++ if (!try_module_get(THIS_MODULE)) ++ return ERR_PTR(-ENODEV); ++ + config = kzalloc(sizeof(struct nbd_config), GFP_NOFS); +- if (!config) +- return NULL; ++ if (!config) { ++ module_put(THIS_MODULE); ++ return ERR_PTR(-ENOMEM); ++ } ++ + atomic_set(&config->recv_threads, 0); + init_waitqueue_head(&config->recv_wq); + init_waitqueue_head(&config->conn_wait); + config->blksize = NBD_DEF_BLKSIZE; + atomic_set(&config->live_connections, 0); +- try_module_get(THIS_MODULE); + return config; + } + +@@ -1488,12 +1497,13 @@ static int nbd_open(struct block_device *bdev, fmode_t mode) + mutex_unlock(&nbd->config_lock); + goto out; + } +- config = nbd->config = nbd_alloc_config(); +- if (!config) { +- ret = -ENOMEM; ++ config = nbd_alloc_config(); ++ if (IS_ERR(config)) { ++ ret = PTR_ERR(config); + mutex_unlock(&nbd->config_lock); + goto out; + } ++ nbd->config = config; + refcount_set(&nbd->config_refs, 1); + refcount_inc(&nbd->refs); + mutex_unlock(&nbd->config_lock); +@@ -1915,13 +1925,14 @@ again: + nbd_put(nbd); + return -EINVAL; + } +- config = nbd->config = nbd_alloc_config(); +- if (!nbd->config) { ++ 
config = nbd_alloc_config(); ++ if (IS_ERR(config)) { + mutex_unlock(&nbd->config_lock); + nbd_put(nbd); + printk(KERN_ERR "nbd: couldn't allocate config\n"); +- return -ENOMEM; ++ return PTR_ERR(config); + } ++ nbd->config = config; + refcount_set(&nbd->config_refs, 1); + set_bit(NBD_RT_BOUND, &config->runtime_flags); + +@@ -2014,6 +2025,7 @@ static void nbd_disconnect_and_put(struct nbd_device *nbd) + mutex_lock(&nbd->config_lock); + nbd_disconnect(nbd); + sock_shutdown(nbd); ++ wake_up(&nbd->config->conn_wait); + /* + * Make sure recv thread has finished, so it does not drop the last + * config ref and try to destroy the workqueue from inside the work +@@ -2441,6 +2453,12 @@ static void __exit nbd_cleanup(void) + struct nbd_device *nbd; + LIST_HEAD(del_list); + ++ /* ++ * Unregister netlink interface prior to waiting ++ * for the completion of netlink commands. ++ */ ++ genl_unregister_family(&nbd_genl_family); ++ + nbd_dbg_close(); + + mutex_lock(&nbd_index_mutex); +@@ -2450,13 +2468,15 @@ static void __exit nbd_cleanup(void) + while (!list_empty(&del_list)) { + nbd = list_first_entry(&del_list, struct nbd_device, list); + list_del_init(&nbd->list); ++ if (refcount_read(&nbd->config_refs)) ++ printk(KERN_ERR "nbd: possibly leaking nbd_config (ref %d)\n", ++ refcount_read(&nbd->config_refs)); + if (refcount_read(&nbd->refs) != 1) + printk(KERN_ERR "nbd: possibly leaking a device\n"); + nbd_put(nbd); + } + + idr_destroy(&nbd_index_idr); +- genl_unregister_family(&nbd_genl_family); + unregister_blkdev(NBD_MAJOR, "nbd"); + } + +diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c +index 2a5cd502feae7..9b3ea86c20e5e 100644 +--- a/drivers/block/virtio_blk.c ++++ b/drivers/block/virtio_blk.c +@@ -976,11 +976,12 @@ static int virtblk_probe(struct virtio_device *vdev) + blk_queue_io_opt(q, blk_size * opt_io_size); + + if (virtio_has_feature(vdev, VIRTIO_BLK_F_DISCARD)) { +- q->limits.discard_granularity = blk_size; +- + virtio_cread(vdev, struct 
virtio_blk_config, + discard_sector_alignment, &v); +- q->limits.discard_alignment = v ? v << SECTOR_SHIFT : 0; ++ if (v) ++ q->limits.discard_granularity = v << SECTOR_SHIFT; ++ else ++ q->limits.discard_granularity = blk_size; + + virtio_cread(vdev, struct virtio_blk_config, + max_discard_sectors, &v); +diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c +index 469ca73de4ce7..44aeceaccfa48 100644 +--- a/drivers/bus/ti-sysc.c ++++ b/drivers/bus/ti-sysc.c +@@ -2724,7 +2724,9 @@ static int sysc_remove(struct platform_device *pdev) + struct sysc *ddata = platform_get_drvdata(pdev); + int error; + +- cancel_delayed_work_sync(&ddata->idle_work); ++ /* Device can still be enabled, see deferred idle quirk in probe */ ++ if (cancel_delayed_work_sync(&ddata->idle_work)) ++ ti_sysc_idle(&ddata->idle_work.work); + + error = pm_runtime_get_sync(ddata->dev); + if (error < 0) { +diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c +index ad2e6d55d4a59..736970312bbc9 100644 +--- a/drivers/char/ipmi/ipmi_msghandler.c ++++ b/drivers/char/ipmi/ipmi_msghandler.c +@@ -11,8 +11,8 @@ + * Copyright 2002 MontaVista Software Inc. + */ + +-#define pr_fmt(fmt) "%s" fmt, "IPMI message handler: " +-#define dev_fmt pr_fmt ++#define pr_fmt(fmt) "IPMI message handler: " fmt ++#define dev_fmt(fmt) pr_fmt(fmt) + + #include + #include +diff --git a/drivers/char/ipmi/ipmi_ssif.c b/drivers/char/ipmi/ipmi_ssif.c +index bb42a1c92cae5..60fb6c62f224b 100644 +--- a/drivers/char/ipmi/ipmi_ssif.c ++++ b/drivers/char/ipmi/ipmi_ssif.c +@@ -845,6 +845,14 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result, + break; + + case SSIF_GETTING_EVENTS: ++ if (!msg) { ++ /* Should never happen, but just in case. */ ++ dev_warn(&ssif_info->client->dev, ++ "No message set while getting events\n"); ++ ipmi_ssif_unlock_cond(ssif_info, flags); ++ break; ++ } ++ + if ((result < 0) || (len < 3) || (msg->rsp[2] != 0)) { + /* Error getting event, probably done. 
*/ + msg->done(msg); +@@ -869,6 +877,14 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result, + break; + + case SSIF_GETTING_MESSAGES: ++ if (!msg) { ++ /* Should never happen, but just in case. */ ++ dev_warn(&ssif_info->client->dev, ++ "No message set while getting messages\n"); ++ ipmi_ssif_unlock_cond(ssif_info, flags); ++ break; ++ } ++ + if ((result < 0) || (len < 3) || (msg->rsp[2] != 0)) { + /* Error getting event, probably done. */ + msg->done(msg); +@@ -892,6 +908,13 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result, + deliver_recv_msg(ssif_info, msg); + } + break; ++ ++ default: ++ /* Should never happen, but just in case. */ ++ dev_warn(&ssif_info->client->dev, ++ "Invalid state in message done handling: %d\n", ++ ssif_info->ssif_state); ++ ipmi_ssif_unlock_cond(ssif_info, flags); + } + + flags = ipmi_ssif_lock_cond(ssif_info, &oflags); +diff --git a/drivers/clocksource/timer-oxnas-rps.c b/drivers/clocksource/timer-oxnas-rps.c +index 56c0cc32d0ac6..d514b44e67dd1 100644 +--- a/drivers/clocksource/timer-oxnas-rps.c ++++ b/drivers/clocksource/timer-oxnas-rps.c +@@ -236,7 +236,7 @@ static int __init oxnas_rps_timer_init(struct device_node *np) + } + + rps->irq = irq_of_parse_and_map(np, 0); +- if (rps->irq < 0) { ++ if (!rps->irq) { + ret = -EINVAL; + goto err_iomap; + } +diff --git a/drivers/clocksource/timer-riscv.c b/drivers/clocksource/timer-riscv.c +index 4b04ffbe5e7e9..e3be5c2f57b8e 100644 +--- a/drivers/clocksource/timer-riscv.c ++++ b/drivers/clocksource/timer-riscv.c +@@ -26,7 +26,7 @@ static int riscv_clock_next_event(unsigned long delta, + + static DEFINE_PER_CPU(struct clock_event_device, riscv_clock_event) = { + .name = "riscv_timer_clockevent", +- .features = CLOCK_EVT_FEAT_ONESHOT, ++ .features = CLOCK_EVT_FEAT_ONESHOT | CLOCK_EVT_FEAT_C3STOP, + .rating = 100, + .set_next_event = riscv_clock_next_event, + }; +diff --git a/drivers/clocksource/timer-sp804.c b/drivers/clocksource/timer-sp804.c +index 
9c841980eed13..c9aa0498fb840 100644 +--- a/drivers/clocksource/timer-sp804.c ++++ b/drivers/clocksource/timer-sp804.c +@@ -215,6 +215,11 @@ static int __init sp804_of_init(struct device_node *np) + struct clk *clk1, *clk2; + const char *name = of_get_property(np, "compatible", NULL); + ++ if (initialized) { ++ pr_debug("%pOF: skipping further SP804 timer device\n", np); ++ return 0; ++ } ++ + base = of_iomap(np, 0); + if (!base) + return -ENXIO; +@@ -223,11 +228,6 @@ static int __init sp804_of_init(struct device_node *np) + writel(0, base + TIMER_CTRL); + writel(0, base + TIMER_2_BASE + TIMER_CTRL); + +- if (initialized || !of_device_is_available(np)) { +- ret = -EINVAL; +- goto err; +- } +- + clk1 = of_clk_get(np, 0); + if (IS_ERR(clk1)) + clk1 = NULL; +diff --git a/drivers/crypto/marvell/cipher.c b/drivers/crypto/marvell/cipher.c +index 84ceddfee76b4..708dc63b2f099 100644 +--- a/drivers/crypto/marvell/cipher.c ++++ b/drivers/crypto/marvell/cipher.c +@@ -610,7 +610,6 @@ struct skcipher_alg mv_cesa_ecb_des3_ede_alg = { + .decrypt = mv_cesa_ecb_des3_ede_decrypt, + .min_keysize = DES3_EDE_KEY_SIZE, + .max_keysize = DES3_EDE_KEY_SIZE, +- .ivsize = DES3_EDE_BLOCK_SIZE, + .base = { + .cra_name = "ecb(des3_ede)", + .cra_driver_name = "mv-ecb-des3-ede", +diff --git a/drivers/devfreq/rk3399_dmc.c b/drivers/devfreq/rk3399_dmc.c +index 027769e39f9b8..a491dcfa1dd07 100644 +--- a/drivers/devfreq/rk3399_dmc.c ++++ b/drivers/devfreq/rk3399_dmc.c +@@ -485,6 +485,8 @@ static int rk3399_dmcfreq_remove(struct platform_device *pdev) + { + struct rk3399_dmcfreq *dmcfreq = dev_get_drvdata(&pdev->dev); + ++ devfreq_event_disable_edev(dmcfreq->edev); ++ + /* + * Before remove the opp table we need to unregister the opp notifier. 
+ */ +diff --git a/drivers/dma/stm32-mdma.c b/drivers/dma/stm32-mdma.c +index a05355d1292e8..c902c24806404 100644 +--- a/drivers/dma/stm32-mdma.c ++++ b/drivers/dma/stm32-mdma.c +@@ -40,7 +40,6 @@ + STM32_MDMA_SHIFT(mask)) + + #define STM32_MDMA_GISR0 0x0000 /* MDMA Int Status Reg 1 */ +-#define STM32_MDMA_GISR1 0x0004 /* MDMA Int Status Reg 2 */ + + /* MDMA Channel x interrupt/status register */ + #define STM32_MDMA_CISR(x) (0x40 + 0x40 * (x)) /* x = 0..62 */ +@@ -196,7 +195,7 @@ + + #define STM32_MDMA_MAX_BUF_LEN 128 + #define STM32_MDMA_MAX_BLOCK_LEN 65536 +-#define STM32_MDMA_MAX_CHANNELS 63 ++#define STM32_MDMA_MAX_CHANNELS 32 + #define STM32_MDMA_MAX_REQUESTS 256 + #define STM32_MDMA_MAX_BURST 128 + #define STM32_MDMA_VERY_HIGH_PRIORITY 0x11 +@@ -1351,21 +1350,11 @@ static irqreturn_t stm32_mdma_irq_handler(int irq, void *devid) + + /* Find out which channel generates the interrupt */ + status = readl_relaxed(dmadev->base + STM32_MDMA_GISR0); +- if (status) { +- id = __ffs(status); +- } else { +- status = readl_relaxed(dmadev->base + STM32_MDMA_GISR1); +- if (!status) { +- dev_dbg(mdma2dev(dmadev), "spurious it\n"); +- return IRQ_NONE; +- } +- id = __ffs(status); +- /* +- * As GISR0 provides status for channel id from 0 to 31, +- * so GISR1 provides status for channel id from 32 to 62 +- */ +- id += 32; ++ if (!status) { ++ dev_dbg(mdma2dev(dmadev), "spurious it\n"); ++ return IRQ_NONE; + } ++ id = __ffs(status); + + chan = &dmadev->chan[id]; + if (!chan) { +diff --git a/drivers/dma/xilinx/zynqmp_dma.c b/drivers/dma/xilinx/zynqmp_dma.c +index 84009c5e0f330..b61d0c79dffb6 100644 +--- a/drivers/dma/xilinx/zynqmp_dma.c ++++ b/drivers/dma/xilinx/zynqmp_dma.c +@@ -232,7 +232,7 @@ struct zynqmp_dma_chan { + bool is_dmacoherent; + struct tasklet_struct tasklet; + bool idle; +- u32 desc_size; ++ size_t desc_size; + bool err; + u32 bus_width; + u32 src_burst_len; +@@ -489,7 +489,8 @@ static int zynqmp_dma_alloc_chan_resources(struct dma_chan *dchan) + } + + 
chan->desc_pool_v = dma_alloc_coherent(chan->dev, +- (2 * chan->desc_size * ZYNQMP_DMA_NUM_DESCS), ++ (2 * ZYNQMP_DMA_DESC_SIZE(chan) * ++ ZYNQMP_DMA_NUM_DESCS), + &chan->desc_pool_p, GFP_KERNEL); + if (!chan->desc_pool_v) + return -ENOMEM; +diff --git a/drivers/extcon/extcon.c b/drivers/extcon/extcon.c +index 5c9e156cd0862..6b905c3d30f4f 100644 +--- a/drivers/extcon/extcon.c ++++ b/drivers/extcon/extcon.c +@@ -1230,19 +1230,14 @@ int extcon_dev_register(struct extcon_dev *edev) + edev->dev.type = &edev->extcon_dev_type; + } + +- ret = device_register(&edev->dev); +- if (ret) { +- put_device(&edev->dev); +- goto err_dev; +- } +- + spin_lock_init(&edev->lock); +- edev->nh = devm_kcalloc(&edev->dev, edev->max_supported, +- sizeof(*edev->nh), GFP_KERNEL); +- if (!edev->nh) { +- ret = -ENOMEM; +- device_unregister(&edev->dev); +- goto err_dev; ++ if (edev->max_supported) { ++ edev->nh = kcalloc(edev->max_supported, sizeof(*edev->nh), ++ GFP_KERNEL); ++ if (!edev->nh) { ++ ret = -ENOMEM; ++ goto err_alloc_nh; ++ } + } + + for (index = 0; index < edev->max_supported; index++) +@@ -1253,6 +1248,12 @@ int extcon_dev_register(struct extcon_dev *edev) + dev_set_drvdata(&edev->dev, edev); + edev->state = 0; + ++ ret = device_register(&edev->dev); ++ if (ret) { ++ put_device(&edev->dev); ++ goto err_dev; ++ } ++ + mutex_lock(&extcon_dev_list_lock); + list_add(&edev->entry, &extcon_dev_list); + mutex_unlock(&extcon_dev_list_lock); +@@ -1260,6 +1261,9 @@ int extcon_dev_register(struct extcon_dev *edev) + return 0; + + err_dev: ++ if (edev->max_supported) ++ kfree(edev->nh); ++err_alloc_nh: + if (edev->max_supported) + kfree(edev->extcon_dev_type.groups); + err_alloc_groups: +@@ -1320,6 +1324,7 @@ void extcon_dev_unregister(struct extcon_dev *edev) + if (edev->max_supported) { + kfree(edev->extcon_dev_type.groups); + kfree(edev->cables); ++ kfree(edev->nh); + } + + put_device(&edev->dev); +diff --git a/drivers/firmware/arm_scmi/base.c b/drivers/firmware/arm_scmi/base.c +index 
f986ee8919f03..2be32e86445f4 100644 +--- a/drivers/firmware/arm_scmi/base.c ++++ b/drivers/firmware/arm_scmi/base.c +@@ -164,7 +164,7 @@ static int scmi_base_implementation_list_get(const struct scmi_handle *handle, + break; + + loop_num_ret = le32_to_cpu(*num_ret); +- if (tot_num_ret + loop_num_ret > MAX_PROTOCOLS_IMP) { ++ if (loop_num_ret > MAX_PROTOCOLS_IMP - tot_num_ret) { + dev_err(dev, "No. of Protocol > MAX_PROTOCOLS_IMP"); + break; + } +diff --git a/drivers/firmware/dmi-sysfs.c b/drivers/firmware/dmi-sysfs.c +index b6180023eba7c..2858e05636e98 100644 +--- a/drivers/firmware/dmi-sysfs.c ++++ b/drivers/firmware/dmi-sysfs.c +@@ -603,7 +603,7 @@ static void __init dmi_sysfs_register_handle(const struct dmi_header *dh, + "%d-%d", dh->type, entry->instance); + + if (*ret) { +- kfree(entry); ++ kobject_put(&entry->kobj); + return; + } + +diff --git a/drivers/firmware/stratix10-svc.c b/drivers/firmware/stratix10-svc.c +index b2b4ba240fb11..08c422380a00d 100644 +--- a/drivers/firmware/stratix10-svc.c ++++ b/drivers/firmware/stratix10-svc.c +@@ -934,17 +934,17 @@ EXPORT_SYMBOL_GPL(stratix10_svc_allocate_memory); + void stratix10_svc_free_memory(struct stratix10_svc_chan *chan, void *kaddr) + { + struct stratix10_svc_data_mem *pmem; +- size_t size = 0; + + list_for_each_entry(pmem, &svc_data_mem, node) + if (pmem->vaddr == kaddr) { +- size = pmem->size; +- break; ++ gen_pool_free(chan->ctrl->genpool, ++ (unsigned long)kaddr, pmem->size); ++ pmem->vaddr = NULL; ++ list_del(&pmem->node); ++ return; + } + +- gen_pool_free(chan->ctrl->genpool, (unsigned long)kaddr, size); +- pmem->vaddr = NULL; +- list_del(&pmem->node); ++ list_del(&svc_data_mem); + } + EXPORT_SYMBOL_GPL(stratix10_svc_free_memory); + +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c +index fddeea2b17e50..7eeb98fe50ed7 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c +@@ -114,7 +114,7 @@ static int 
amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, union drm_amdgpu_cs + int ret; + + if (cs->in.num_chunks == 0) +- return 0; ++ return -EINVAL; + + chunk_array = kmalloc_array(cs->in.num_chunks, sizeof(uint64_t), GFP_KERNEL); + if (!chunk_array) +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c +index 3a6115ad01965..f3250db7f9c27 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c +@@ -568,8 +568,7 @@ int amdgpu_ucode_create_bo(struct amdgpu_device *adev) + + void amdgpu_ucode_free_bo(struct amdgpu_device *adev) + { +- if (adev->firmware.load_type != AMDGPU_FW_LOAD_DIRECT) +- amdgpu_bo_free_kernel(&adev->firmware.fw_buf, ++ amdgpu_bo_free_kernel(&adev->firmware.fw_buf, + &adev->firmware.fw_buf_mc, + &adev->firmware.fw_buf_ptr); + } +diff --git a/drivers/gpu/drm/amd/amdgpu/kv_dpm.c b/drivers/gpu/drm/amd/amdgpu/kv_dpm.c +index 4b3faaccecb94..c8a5a5698edd9 100644 +--- a/drivers/gpu/drm/amd/amdgpu/kv_dpm.c ++++ b/drivers/gpu/drm/amd/amdgpu/kv_dpm.c +@@ -1609,19 +1609,7 @@ static int kv_update_samu_dpm(struct amdgpu_device *adev, bool gate) + + static u8 kv_get_acp_boot_level(struct amdgpu_device *adev) + { +- u8 i; +- struct amdgpu_clock_voltage_dependency_table *table = +- &adev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table; +- +- for (i = 0; i < table->count; i++) { +- if (table->entries[i].clk >= 0) /* XXX */ +- break; +- } +- +- if (i >= table->count) +- i = table->count - 1; +- +- return i; ++ return 0; + } + + static void kv_update_acp_boot_level(struct amdgpu_device *adev) +diff --git a/drivers/gpu/drm/amd/amdgpu/si_dpm.c b/drivers/gpu/drm/amd/amdgpu/si_dpm.c +index 4cb4c891120b2..9931d5c17cfb6 100644 +--- a/drivers/gpu/drm/amd/amdgpu/si_dpm.c ++++ b/drivers/gpu/drm/amd/amdgpu/si_dpm.c +@@ -7250,17 +7250,15 @@ static int si_parse_power_table(struct amdgpu_device *adev) + if (!adev->pm.dpm.ps) + return -ENOMEM; + power_state_offset = (u8 
*)state_array->states; +- for (i = 0; i < state_array->ucNumEntries; i++) { ++ for (adev->pm.dpm.num_ps = 0, i = 0; i < state_array->ucNumEntries; i++) { + u8 *idx; + power_state = (union pplib_power_state *)power_state_offset; + non_clock_array_index = power_state->v2.nonClockInfoIndex; + non_clock_info = (struct _ATOM_PPLIB_NONCLOCK_INFO *) + &non_clock_info_array->nonClockInfo[non_clock_array_index]; + ps = kzalloc(sizeof(struct si_ps), GFP_KERNEL); +- if (ps == NULL) { +- kfree(adev->pm.dpm.ps); ++ if (ps == NULL) + return -ENOMEM; +- } + adev->pm.dpm.ps[i].ps_priv = ps; + si_parse_pplib_non_clock_info(adev, &adev->pm.dpm.ps[i], + non_clock_info, +@@ -7282,8 +7280,8 @@ static int si_parse_power_table(struct amdgpu_device *adev) + k++; + } + power_state_offset += 2 + power_state->v2.ucNumDPMLevels; ++ adev->pm.dpm.num_ps++; + } +- adev->pm.dpm.num_ps = state_array->ucNumEntries; + + /* fill in the vce power states */ + for (i = 0; i < adev->pm.dpm.num_of_vce_states; i++) { +diff --git a/drivers/gpu/drm/arm/display/komeda/komeda_plane.c b/drivers/gpu/drm/arm/display/komeda/komeda_plane.c +index 98e915e325ddf..bc3f42e915e91 100644 +--- a/drivers/gpu/drm/arm/display/komeda/komeda_plane.c ++++ b/drivers/gpu/drm/arm/display/komeda/komeda_plane.c +@@ -264,6 +264,10 @@ static int komeda_plane_add(struct komeda_kms_dev *kms, + + formats = komeda_get_layer_fourcc_list(&mdev->fmt_tbl, + layer->layer_type, &n_formats); ++ if (!formats) { ++ kfree(kplane); ++ return -ENOMEM; ++ } + + err = drm_universal_plane_init(&kms->base, plane, + get_possible_crtcs(kms, c->pipeline), +@@ -274,8 +278,10 @@ static int komeda_plane_add(struct komeda_kms_dev *kms, + + komeda_put_fourcc_list(formats); + +- if (err) +- goto cleanup; ++ if (err) { ++ kfree(kplane); ++ return err; ++ } + + drm_plane_helper_add(plane, &komeda_plane_helper_funcs); + +diff --git a/drivers/gpu/drm/arm/malidp_crtc.c b/drivers/gpu/drm/arm/malidp_crtc.c +index 587d94798f5c2..af729094260c4 100644 +--- 
a/drivers/gpu/drm/arm/malidp_crtc.c ++++ b/drivers/gpu/drm/arm/malidp_crtc.c +@@ -483,7 +483,10 @@ static void malidp_crtc_reset(struct drm_crtc *crtc) + if (crtc->state) + malidp_crtc_destroy_state(crtc, crtc->state); + +- __drm_atomic_helper_crtc_reset(crtc, &state->base); ++ if (state) ++ __drm_atomic_helper_crtc_reset(crtc, &state->base); ++ else ++ __drm_atomic_helper_crtc_reset(crtc, NULL); + } + + static int malidp_crtc_enable_vblank(struct drm_crtc *crtc) +diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c +index 9e13e466e72c0..e7bf32f234d71 100644 +--- a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c ++++ b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c +@@ -1225,6 +1225,7 @@ static int adv7511_probe(struct i2c_client *i2c, const struct i2c_device_id *id) + return 0; + + err_unregister_cec: ++ cec_unregister_adapter(adv7511->cec_adap); + i2c_unregister_device(adv7511->i2c_cec); + if (adv7511->cec_clk) + clk_disable_unprepare(adv7511->cec_clk); +diff --git a/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c b/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c +index 1f26890a8da6e..c6a51d1c7ec9e 100644 +--- a/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c ++++ b/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c +@@ -1630,8 +1630,19 @@ static ssize_t analogix_dpaux_transfer(struct drm_dp_aux *aux, + struct drm_dp_aux_msg *msg) + { + struct analogix_dp_device *dp = to_dp(aux); ++ int ret; ++ ++ pm_runtime_get_sync(dp->dev); ++ ++ ret = analogix_dp_detect_hpd(dp); ++ if (ret) ++ goto out; + +- return analogix_dp_transfer(dp, msg); ++ ret = analogix_dp_transfer(dp, msg); ++out: ++ pm_runtime_put(dp->dev); ++ ++ return ret; + } + + struct analogix_dp_device * +@@ -1696,8 +1707,10 @@ analogix_dp_probe(struct device *dev, struct analogix_dp_plat_data *plat_data) + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + + dp->reg_base = devm_ioremap_resource(&pdev->dev, res); +- if (IS_ERR(dp->reg_base)) +- 
return ERR_CAST(dp->reg_base); ++ if (IS_ERR(dp->reg_base)) { ++ ret = PTR_ERR(dp->reg_base); ++ goto err_disable_clk; ++ } + + dp->force_hpd = of_property_read_bool(dev->of_node, "force-hpd"); + +@@ -1709,7 +1722,8 @@ analogix_dp_probe(struct device *dev, struct analogix_dp_plat_data *plat_data) + if (IS_ERR(dp->hpd_gpiod)) { + dev_err(dev, "error getting HDP GPIO: %ld\n", + PTR_ERR(dp->hpd_gpiod)); +- return ERR_CAST(dp->hpd_gpiod); ++ ret = PTR_ERR(dp->hpd_gpiod); ++ goto err_disable_clk; + } + + if (dp->hpd_gpiod) { +@@ -1729,7 +1743,8 @@ analogix_dp_probe(struct device *dev, struct analogix_dp_plat_data *plat_data) + + if (dp->irq == -ENXIO) { + dev_err(&pdev->dev, "failed to get irq\n"); +- return ERR_PTR(-ENODEV); ++ ret = -ENODEV; ++ goto err_disable_clk; + } + + ret = devm_request_threaded_irq(&pdev->dev, dp->irq, +@@ -1738,11 +1753,15 @@ analogix_dp_probe(struct device *dev, struct analogix_dp_plat_data *plat_data) + irq_flags, "analogix-dp", dp); + if (ret) { + dev_err(&pdev->dev, "failed to request irq\n"); +- return ERR_PTR(ret); ++ goto err_disable_clk; + } + disable_irq(dp->irq); + + return dp; ++ ++err_disable_clk: ++ clk_disable_unprepare(dp->clock); ++ return ERR_PTR(ret); + } + EXPORT_SYMBOL_GPL(analogix_dp_probe); + +diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c +index aeeab1b57aad3..2dc6dd6230d76 100644 +--- a/drivers/gpu/drm/drm_edid.c ++++ b/drivers/gpu/drm/drm_edid.c +@@ -1702,9 +1702,6 @@ struct edid *drm_do_get_edid(struct drm_connector *connector, + + connector_bad_edid(connector, edid, edid[0x7e] + 1); + +- edid[EDID_LENGTH-1] += edid[0x7e] - valid_extensions; +- edid[0x7e] = valid_extensions; +- + new = kmalloc_array(valid_extensions + 1, EDID_LENGTH, + GFP_KERNEL); + if (!new) +@@ -1721,6 +1718,9 @@ struct edid *drm_do_get_edid(struct drm_connector *connector, + base += EDID_LENGTH; + } + ++ new[EDID_LENGTH - 1] += new[0x7e] - valid_extensions; ++ new[0x7e] = valid_extensions; ++ + kfree(edid); + edid = new; + } 
+diff --git a/drivers/gpu/drm/drm_plane.c b/drivers/gpu/drm/drm_plane.c +index d6ad60ab0d389..6bdebcca56905 100644 +--- a/drivers/gpu/drm/drm_plane.c ++++ b/drivers/gpu/drm/drm_plane.c +@@ -186,6 +186,13 @@ int drm_universal_plane_init(struct drm_device *dev, struct drm_plane *plane, + if (WARN_ON(config->num_total_plane >= 32)) + return -EINVAL; + ++ /* ++ * First driver to need more than 64 formats needs to fix this. Each ++ * format is encoded as a bit and the current code only supports a u64. ++ */ ++ if (WARN_ON(format_count > 64)) ++ return -EINVAL; ++ + WARN_ON(drm_drv_uses_atomic_modeset(dev) && + (!funcs->atomic_destroy_state || + !funcs->atomic_duplicate_state)); +@@ -207,13 +214,6 @@ int drm_universal_plane_init(struct drm_device *dev, struct drm_plane *plane, + return -ENOMEM; + } + +- /* +- * First driver to need more than 64 formats needs to fix this. Each +- * format is encoded as a bit and the current code only supports a u64. +- */ +- if (WARN_ON(format_count > 64)) +- return -EINVAL; +- + if (format_modifiers) { + const uint64_t *temp_modifiers = format_modifiers; + while (*temp_modifiers++ != DRM_FORMAT_MOD_INVALID) +diff --git a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c +index 707f5c1a58740..790cbb20aaeba 100644 +--- a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c ++++ b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c +@@ -289,6 +289,12 @@ void etnaviv_iommu_unmap_gem(struct etnaviv_iommu_context *context, + + mutex_lock(&context->lock); + ++ /* Bail if the mapping has been reaped by another thread */ ++ if (!mapping->context) { ++ mutex_unlock(&context->lock); ++ return; ++ } ++ + /* If the vram node is on the mm, unmap and remove the node */ + if (mapping->vram_node.mm == &context->mm) + etnaviv_iommu_remove_mapping(context, mapping); +diff --git a/drivers/gpu/drm/gma500/psb_intel_display.c b/drivers/gpu/drm/gma500/psb_intel_display.c +index 4256410535f06..65e67e12a0a1a 100644 +--- a/drivers/gpu/drm/gma500/psb_intel_display.c 
++++ b/drivers/gpu/drm/gma500/psb_intel_display.c +@@ -532,14 +532,15 @@ void psb_intel_crtc_init(struct drm_device *dev, int pipe, + + struct drm_crtc *psb_intel_get_crtc_from_pipe(struct drm_device *dev, int pipe) + { +- struct drm_crtc *crtc = NULL; ++ struct drm_crtc *crtc; + + list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) { + struct gma_crtc *gma_crtc = to_gma_crtc(crtc); ++ + if (gma_crtc->pipe == pipe) +- break; ++ return crtc; + } +- return crtc; ++ return NULL; + } + + int gma_connector_clones(struct drm_device *dev, int type_mask) +diff --git a/drivers/gpu/drm/imx/ipuv3-crtc.c b/drivers/gpu/drm/imx/ipuv3-crtc.c +index 2256c9789fc2c..f19264e91d4db 100644 +--- a/drivers/gpu/drm/imx/ipuv3-crtc.c ++++ b/drivers/gpu/drm/imx/ipuv3-crtc.c +@@ -68,7 +68,7 @@ static void ipu_crtc_disable_planes(struct ipu_crtc *ipu_crtc, + drm_atomic_crtc_state_for_each_plane(plane, old_crtc_state) { + if (plane == &ipu_crtc->plane[0]->base) + disable_full = true; +- if (&ipu_crtc->plane[1] && plane == &ipu_crtc->plane[1]->base) ++ if (ipu_crtc->plane[1] && plane == &ipu_crtc->plane[1]->base) + disable_partial = true; + } + +diff --git a/drivers/gpu/drm/mediatek/mtk_cec.c b/drivers/gpu/drm/mediatek/mtk_cec.c +index cb29b649fcdba..12bf937694977 100644 +--- a/drivers/gpu/drm/mediatek/mtk_cec.c ++++ b/drivers/gpu/drm/mediatek/mtk_cec.c +@@ -84,7 +84,7 @@ static void mtk_cec_mask(struct mtk_cec *cec, unsigned int offset, + u32 tmp = readl(cec->regs + offset) & ~mask; + + tmp |= val & mask; +- writel(val, cec->regs + offset); ++ writel(tmp, cec->regs + offset); + } + + void mtk_cec_set_hpd_event(struct device *dev, +diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +index df2656e579917..a3ae6c1d341bf 100644 +--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c ++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +@@ -891,6 +891,7 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev) + BUG_ON(!node); + + ret = a6xx_gmu_init(a6xx_gpu, node); ++ 
of_node_put(node); + if (ret) { + a6xx_destroy(&(a6xx_gpu->base.base)); + return ERR_PTR(ret); +diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c +index 72f487692adbb..c08c67338d73d 100644 +--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c ++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c +@@ -599,8 +599,10 @@ static void _dpu_kms_hw_destroy(struct dpu_kms *dpu_kms) + for (i = 0; i < dpu_kms->catalog->vbif_count; i++) { + u32 vbif_idx = dpu_kms->catalog->vbif[i].id; + +- if ((vbif_idx < VBIF_MAX) && dpu_kms->hw_vbif[vbif_idx]) ++ if ((vbif_idx < VBIF_MAX) && dpu_kms->hw_vbif[vbif_idx]) { + dpu_hw_vbif_destroy(dpu_kms->hw_vbif[vbif_idx]); ++ dpu_kms->hw_vbif[vbif_idx] = NULL; ++ } + } + } + +diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c +index 395146884a222..03d60eb092577 100644 +--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c ++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c +@@ -534,9 +534,15 @@ int mdp5_crtc_setup_pipeline(struct drm_crtc *crtc, + if (ret) + return ret; + +- mdp5_mixer_release(new_crtc_state->state, old_mixer); ++ ret = mdp5_mixer_release(new_crtc_state->state, old_mixer); ++ if (ret) ++ return ret; ++ + if (old_r_mixer) { +- mdp5_mixer_release(new_crtc_state->state, old_r_mixer); ++ ret = mdp5_mixer_release(new_crtc_state->state, old_r_mixer); ++ if (ret) ++ return ret; ++ + if (!need_right_mixer) + pipeline->r_mixer = NULL; + } +@@ -903,8 +909,10 @@ static int mdp5_crtc_cursor_set(struct drm_crtc *crtc, + + ret = msm_gem_get_and_pin_iova(cursor_bo, kms->aspace, + &mdp5_crtc->cursor.iova); +- if (ret) ++ if (ret) { ++ drm_gem_object_put(cursor_bo); + return -EINVAL; ++ } + + pm_runtime_get_sync(&pdev->dev); + +diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c +index 77823ccdd0f8f..39d0082eedcca 100644 +--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c ++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c +@@ -698,9 
+698,9 @@ struct msm_kms *mdp5_kms_init(struct drm_device *dev) + pdev = mdp5_kms->pdev; + + irq = irq_of_parse_and_map(pdev->dev.of_node, 0); +- if (irq < 0) { +- ret = irq; +- DRM_DEV_ERROR(&pdev->dev, "failed to get irq: %d\n", ret); ++ if (!irq) { ++ ret = -EINVAL; ++ DRM_DEV_ERROR(&pdev->dev, "failed to get irq\n"); + goto fail; + } + +diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.c +index 954db683ae444..2536def2a0005 100644 +--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.c ++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.c +@@ -116,21 +116,28 @@ int mdp5_mixer_assign(struct drm_atomic_state *s, struct drm_crtc *crtc, + return 0; + } + +-void mdp5_mixer_release(struct drm_atomic_state *s, struct mdp5_hw_mixer *mixer) ++int mdp5_mixer_release(struct drm_atomic_state *s, struct mdp5_hw_mixer *mixer) + { + struct mdp5_global_state *global_state = mdp5_get_global_state(s); +- struct mdp5_hw_mixer_state *new_state = &global_state->hwmixer; ++ struct mdp5_hw_mixer_state *new_state; + + if (!mixer) +- return; ++ return 0; ++ ++ if (IS_ERR(global_state)) ++ return PTR_ERR(global_state); ++ ++ new_state = &global_state->hwmixer; + + if (WARN_ON(!new_state->hwmixer_to_crtc[mixer->idx])) +- return; ++ return -EINVAL; + + DBG("%s: release from crtc %s", mixer->name, + new_state->hwmixer_to_crtc[mixer->idx]->name); + + new_state->hwmixer_to_crtc[mixer->idx] = NULL; ++ ++ return 0; + } + + void mdp5_mixer_destroy(struct mdp5_hw_mixer *mixer) +diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.h b/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.h +index 43c9ba43ce185..545ee223b9d74 100644 +--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.h ++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_mixer.h +@@ -30,7 +30,7 @@ void mdp5_mixer_destroy(struct mdp5_hw_mixer *lm); + int mdp5_mixer_assign(struct drm_atomic_state *s, struct drm_crtc *crtc, + uint32_t caps, struct mdp5_hw_mixer **mixer, + struct mdp5_hw_mixer **r_mixer); +-void 
mdp5_mixer_release(struct drm_atomic_state *s, +- struct mdp5_hw_mixer *mixer); ++int mdp5_mixer_release(struct drm_atomic_state *s, ++ struct mdp5_hw_mixer *mixer); + + #endif /* __MDP5_LM_H__ */ +diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.c +index ba6695963aa66..a4f5cb90f3e80 100644 +--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.c ++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.c +@@ -119,18 +119,23 @@ int mdp5_pipe_assign(struct drm_atomic_state *s, struct drm_plane *plane, + return 0; + } + +-void mdp5_pipe_release(struct drm_atomic_state *s, struct mdp5_hw_pipe *hwpipe) ++int mdp5_pipe_release(struct drm_atomic_state *s, struct mdp5_hw_pipe *hwpipe) + { + struct msm_drm_private *priv = s->dev->dev_private; + struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(priv->kms)); + struct mdp5_global_state *state = mdp5_get_global_state(s); +- struct mdp5_hw_pipe_state *new_state = &state->hwpipe; ++ struct mdp5_hw_pipe_state *new_state; + + if (!hwpipe) +- return; ++ return 0; ++ ++ if (IS_ERR(state)) ++ return PTR_ERR(state); ++ ++ new_state = &state->hwpipe; + + if (WARN_ON(!new_state->hwpipe_to_plane[hwpipe->idx])) +- return; ++ return -EINVAL; + + DBG("%s: release from plane %s", hwpipe->name, + new_state->hwpipe_to_plane[hwpipe->idx]->name); +@@ -141,6 +146,8 @@ void mdp5_pipe_release(struct drm_atomic_state *s, struct mdp5_hw_pipe *hwpipe) + } + + new_state->hwpipe_to_plane[hwpipe->idx] = NULL; ++ ++ return 0; + } + + void mdp5_pipe_destroy(struct mdp5_hw_pipe *hwpipe) +diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.h b/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.h +index 9b26d0761bd4f..cca67938cab21 100644 +--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.h ++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_pipe.h +@@ -37,7 +37,7 @@ int mdp5_pipe_assign(struct drm_atomic_state *s, struct drm_plane *plane, + uint32_t caps, uint32_t blkcfg, + struct mdp5_hw_pipe **hwpipe, + struct mdp5_hw_pipe **r_hwpipe); +-void 
mdp5_pipe_release(struct drm_atomic_state *s, struct mdp5_hw_pipe *hwpipe); ++int mdp5_pipe_release(struct drm_atomic_state *s, struct mdp5_hw_pipe *hwpipe); + + struct mdp5_hw_pipe *mdp5_pipe_init(enum mdp5_pipe pipe, + uint32_t reg_offset, uint32_t caps); +diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c +index da07993339702..0dc23c86747e8 100644 +--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c ++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c +@@ -393,12 +393,24 @@ static int mdp5_plane_atomic_check_with_state(struct drm_crtc_state *crtc_state, + mdp5_state->r_hwpipe = NULL; + + +- mdp5_pipe_release(state->state, old_hwpipe); +- mdp5_pipe_release(state->state, old_right_hwpipe); ++ ret = mdp5_pipe_release(state->state, old_hwpipe); ++ if (ret) ++ return ret; ++ ++ ret = mdp5_pipe_release(state->state, old_right_hwpipe); ++ if (ret) ++ return ret; ++ + } + } else { +- mdp5_pipe_release(state->state, mdp5_state->hwpipe); +- mdp5_pipe_release(state->state, mdp5_state->r_hwpipe); ++ ret = mdp5_pipe_release(state->state, mdp5_state->hwpipe); ++ if (ret) ++ return ret; ++ ++ ret = mdp5_pipe_release(state->state, mdp5_state->r_hwpipe); ++ if (ret) ++ return ret; ++ + mdp5_state->hwpipe = mdp5_state->r_hwpipe = NULL; + } + +diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c +index 423c4ae2be10d..743142e15b4c1 100644 +--- a/drivers/gpu/drm/msm/dsi/dsi_host.c ++++ b/drivers/gpu/drm/msm/dsi/dsi_host.c +@@ -1348,10 +1348,10 @@ static int dsi_cmds2buf_tx(struct msm_dsi_host *msm_host, + dsi_get_bpp(msm_host->format) / 8; + + len = dsi_cmd_dma_add(msm_host, msg); +- if (!len) { ++ if (len < 0) { + pr_err("%s: failed to add cmd type = 0x%x\n", + __func__, msg->type); +- return -EINVAL; ++ return len; + } + + /* for video mode, do not send cmds more than +@@ -1370,10 +1370,14 @@ static int dsi_cmds2buf_tx(struct msm_dsi_host *msm_host, + } + + ret = dsi_cmd_dma_tx(msm_host, len); +- if (ret < 
len) { +- pr_err("%s: cmd dma tx failed, type=0x%x, data0=0x%x, len=%d\n", +- __func__, msg->type, (*(u8 *)(msg->tx_buf)), len); +- return -ECOMM; ++ if (ret < 0) { ++ pr_err("%s: cmd dma tx failed, type=0x%x, data0=0x%x, len=%d, ret=%d\n", ++ __func__, msg->type, (*(u8 *)(msg->tx_buf)), len, ret); ++ return ret; ++ } else if (ret < len) { ++ pr_err("%s: cmd dma tx failed, type=0x%x, data0=0x%x, ret=%d len=%d\n", ++ __func__, msg->type, (*(u8 *)(msg->tx_buf)), ret, len); ++ return -EIO; + } + + return len; +@@ -2099,9 +2103,12 @@ int msm_dsi_host_cmd_rx(struct mipi_dsi_host *host, + } + + ret = dsi_cmds2buf_tx(msm_host, msg); +- if (ret < msg->tx_len) { ++ if (ret < 0) { + pr_err("%s: Read cmd Tx failed, %d\n", __func__, ret); + return ret; ++ } else if (ret < msg->tx_len) { ++ pr_err("%s: Read cmd Tx failed, too short: %d\n", __func__, ret); ++ return -ECOMM; + } + + /* +diff --git a/drivers/gpu/drm/msm/hdmi/hdmi.c b/drivers/gpu/drm/msm/hdmi/hdmi.c +index 1a7e77373407f..e4c9ff934e5b8 100644 +--- a/drivers/gpu/drm/msm/hdmi/hdmi.c ++++ b/drivers/gpu/drm/msm/hdmi/hdmi.c +@@ -142,6 +142,10 @@ static struct hdmi *msm_hdmi_init(struct platform_device *pdev) + /* HDCP needs physical address of hdmi register */ + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, + config->mmio_name); ++ if (!res) { ++ ret = -EINVAL; ++ goto fail; ++ } + hdmi->mmio_phy_addr = res->start; + + hdmi->qfprom_mmio = msm_ioremap(pdev, +@@ -311,9 +315,9 @@ int msm_hdmi_modeset_init(struct hdmi *hdmi, + } + + hdmi->irq = irq_of_parse_and_map(pdev->dev.of_node, 0); +- if (hdmi->irq < 0) { +- ret = hdmi->irq; +- DRM_DEV_ERROR(dev->dev, "failed to get irq: %d\n", ret); ++ if (!hdmi->irq) { ++ ret = -EINVAL; ++ DRM_DEV_ERROR(dev->dev, "failed to get irq\n"); + goto fail; + } + +diff --git a/drivers/gpu/drm/msm/msm_gem_prime.c b/drivers/gpu/drm/msm/msm_gem_prime.c +index d7c8948427fe0..705a834ba1e66 100644 +--- a/drivers/gpu/drm/msm/msm_gem_prime.c ++++ b/drivers/gpu/drm/msm/msm_gem_prime.c +@@ 
-17,7 +17,7 @@ struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj) + int npages = obj->size >> PAGE_SHIFT; + + if (WARN_ON(!msm_obj->pages)) /* should have already pinned! */ +- return NULL; ++ return ERR_PTR(-ENOMEM); + + return drm_prime_pages_to_sg(msm_obj->pages, npages); + } +diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/clk/base.c b/drivers/gpu/drm/nouveau/nvkm/subdev/clk/base.c +index 40e564524b7a9..93a49cbfb81d6 100644 +--- a/drivers/gpu/drm/nouveau/nvkm/subdev/clk/base.c ++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/clk/base.c +@@ -135,10 +135,10 @@ nvkm_cstate_find_best(struct nvkm_clk *clk, struct nvkm_pstate *pstate, + + list_for_each_entry_from_reverse(cstate, &pstate->list, head) { + if (nvkm_cstate_valid(clk, cstate, max_volt, clk->temp)) +- break; ++ return cstate; + } + +- return cstate; ++ return NULL; + } + + static struct nvkm_cstate * +@@ -169,6 +169,8 @@ nvkm_cstate_prog(struct nvkm_clk *clk, struct nvkm_pstate *pstate, int cstatei) + if (!list_empty(&pstate->list)) { + cstate = nvkm_cstate_get(clk, pstate, cstatei); + cstate = nvkm_cstate_find_best(clk, pstate, cstate); ++ if (!cstate) ++ return -EINVAL; + } else { + cstate = &pstate->base; + } +diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c +index bc63f4cecf5d5..ca6ccd69424e0 100644 +--- a/drivers/gpu/drm/radeon/radeon_connectors.c ++++ b/drivers/gpu/drm/radeon/radeon_connectors.c +@@ -477,6 +477,8 @@ static struct drm_display_mode *radeon_fp_native_mode(struct drm_encoder *encode + native_mode->vdisplay != 0 && + native_mode->clock != 0) { + mode = drm_mode_duplicate(dev, native_mode); ++ if (!mode) ++ return NULL; + mode->type = DRM_MODE_TYPE_PREFERRED | DRM_MODE_TYPE_DRIVER; + drm_mode_set_name(mode); + +@@ -491,6 +493,8 @@ static struct drm_display_mode *radeon_fp_native_mode(struct drm_encoder *encode + * simpler. 
+ */ + mode = drm_cvt_mode(dev, native_mode->hdisplay, native_mode->vdisplay, 60, true, false, false); ++ if (!mode) ++ return NULL; + mode->type = DRM_MODE_TYPE_PREFERRED | DRM_MODE_TYPE_DRIVER; + DRM_DEBUG_KMS("Adding cvt approximation of native panel mode %s\n", mode->name); + } +diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c +index 84e3decb17b1f..2e4e1933a43c1 100644 +--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c ++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c +@@ -1848,10 +1848,10 @@ static int vop_bind(struct device *dev, struct device *master, void *data) + vop_win_init(vop); + + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); +- vop->len = resource_size(res); + vop->regs = devm_ioremap_resource(dev, res); + if (IS_ERR(vop->regs)) + return PTR_ERR(vop->regs); ++ vop->len = resource_size(res); + + vop->regsbak = devm_kzalloc(dev, vop->len, GFP_KERNEL); + if (!vop->regsbak) +diff --git a/drivers/gpu/drm/tilcdc/tilcdc_external.c b/drivers/gpu/drm/tilcdc/tilcdc_external.c +index 43d756b7810ee..67e23317c7ded 100644 +--- a/drivers/gpu/drm/tilcdc/tilcdc_external.c ++++ b/drivers/gpu/drm/tilcdc/tilcdc_external.c +@@ -58,11 +58,13 @@ struct drm_connector *tilcdc_encoder_find_connector(struct drm_device *ddev, + int tilcdc_add_component_encoder(struct drm_device *ddev) + { + struct tilcdc_drm_private *priv = ddev->dev_private; +- struct drm_encoder *encoder; ++ struct drm_encoder *encoder = NULL, *iter; + +- list_for_each_entry(encoder, &ddev->mode_config.encoder_list, head) +- if (encoder->possible_crtcs & (1 << priv->crtc->index)) ++ list_for_each_entry(iter, &ddev->mode_config.encoder_list, head) ++ if (iter->possible_crtcs & (1 << priv->crtc->index)) { ++ encoder = iter; + break; ++ } + + if (!encoder) { + dev_err(ddev->dev, "%s: No suitable encoder found\n", __func__); +diff --git a/drivers/gpu/drm/vc4/vc4_txp.c b/drivers/gpu/drm/vc4/vc4_txp.c +index bf720206727f0..0d9263f65d95b 100644 +--- 
a/drivers/gpu/drm/vc4/vc4_txp.c ++++ b/drivers/gpu/drm/vc4/vc4_txp.c +@@ -285,12 +285,18 @@ static void vc4_txp_connector_atomic_commit(struct drm_connector *conn, + if (WARN_ON(i == ARRAY_SIZE(drm_fmts))) + return; + +- ctrl = TXP_GO | TXP_VSTART_AT_EOF | TXP_EI | ++ ctrl = TXP_GO | TXP_EI | + VC4_SET_FIELD(0xf, TXP_BYTE_ENABLE) | + VC4_SET_FIELD(txp_fmts[i], TXP_FORMAT); + + if (fb->format->has_alpha) + ctrl |= TXP_ALPHA_ENABLE; ++ else ++ /* ++ * If TXP_ALPHA_ENABLE isn't set and TXP_ALPHA_INVERT is, the ++ * hardware will force the output padding to be 0xff. ++ */ ++ ctrl |= TXP_ALPHA_INVERT; + + gem = drm_fb_cma_get_gem_obj(fb, 0); + TXP_WRITE(TXP_DST_PTR, gem->paddr + fb->offsets[0]); +diff --git a/drivers/gpu/drm/virtio/virtgpu_display.c b/drivers/gpu/drm/virtio/virtgpu_display.c +index e622485ae8267..7e34307eb075e 100644 +--- a/drivers/gpu/drm/virtio/virtgpu_display.c ++++ b/drivers/gpu/drm/virtio/virtgpu_display.c +@@ -174,6 +174,8 @@ static int virtio_gpu_conn_get_modes(struct drm_connector *connector) + DRM_DEBUG("add mode: %dx%d\n", width, height); + mode = drm_cvt_mode(connector->dev, width, height, 60, + false, false, false); ++ if (!mode) ++ return count; + mode->type |= DRM_MODE_TYPE_PREFERRED; + drm_mode_probed_add(connector, mode); + count++; +diff --git a/drivers/hid/hid-bigbenff.c b/drivers/hid/hid-bigbenff.c +index 74ad8bf98bfd5..e8c5e3ac9fff1 100644 +--- a/drivers/hid/hid-bigbenff.c ++++ b/drivers/hid/hid-bigbenff.c +@@ -347,6 +347,12 @@ static int bigben_probe(struct hid_device *hid, + bigben->report = list_entry(report_list->next, + struct hid_report, list); + ++ if (list_empty(&hid->inputs)) { ++ hid_err(hid, "no inputs found\n"); ++ error = -ENODEV; ++ goto error_hw_stop; ++ } ++ + hidinput = list_first_entry(&hid->inputs, struct hid_input, list); + set_bit(FF_RUMBLE, hidinput->input->ffbit); + +diff --git a/drivers/hid/hid-elan.c b/drivers/hid/hid-elan.c +index 0e8f424025fea..838673303f77f 100644 +--- a/drivers/hid/hid-elan.c ++++ 
b/drivers/hid/hid-elan.c +@@ -188,7 +188,6 @@ static int elan_input_configured(struct hid_device *hdev, struct hid_input *hi) + ret = input_mt_init_slots(input, ELAN_MAX_FINGERS, INPUT_MT_POINTER); + if (ret) { + hid_err(hdev, "Failed to init elan MT slots: %d\n", ret); +- input_free_device(input); + return ret; + } + +@@ -200,7 +199,6 @@ static int elan_input_configured(struct hid_device *hdev, struct hid_input *hi) + hid_err(hdev, "Failed to register elan input device: %d\n", + ret); + input_mt_destroy_slots(input); +- input_free_device(input); + return ret; + } + +diff --git a/drivers/hid/hid-led.c b/drivers/hid/hid-led.c +index c2c66ceca1327..7d82f8d426bbc 100644 +--- a/drivers/hid/hid-led.c ++++ b/drivers/hid/hid-led.c +@@ -366,7 +366,7 @@ static const struct hidled_config hidled_configs[] = { + .type = DREAM_CHEEKY, + .name = "Dream Cheeky Webmail Notifier", + .short_name = "dream_cheeky", +- .max_brightness = 31, ++ .max_brightness = 63, + .num_leds = 1, + .report_size = 9, + .report_type = RAW_REQUEST, +diff --git a/drivers/hwmon/hwmon.c b/drivers/hwmon/hwmon.c +index a2175394cd253..c73b93b9bb87d 100644 +--- a/drivers/hwmon/hwmon.c ++++ b/drivers/hwmon/hwmon.c +@@ -715,11 +715,12 @@ EXPORT_SYMBOL_GPL(hwmon_device_register_with_groups); + + /** + * hwmon_device_register_with_info - register w/ hwmon +- * @dev: the parent device +- * @name: hwmon name attribute +- * @drvdata: driver data to attach to created device +- * @chip: pointer to hwmon chip information ++ * @dev: the parent device (mandatory) ++ * @name: hwmon name attribute (mandatory) ++ * @drvdata: driver data to attach to created device (optional) ++ * @chip: pointer to hwmon chip information (mandatory) + * @extra_groups: pointer to list of additional non-standard attribute groups ++ * (optional) + * + * hwmon_device_unregister() must be called when the device is no + * longer needed. 
+@@ -732,13 +733,10 @@ hwmon_device_register_with_info(struct device *dev, const char *name, + const struct hwmon_chip_info *chip, + const struct attribute_group **extra_groups) + { +- if (!name) +- return ERR_PTR(-EINVAL); +- +- if (chip && (!chip->ops || !chip->ops->is_visible || !chip->info)) ++ if (!dev || !name || !chip) + return ERR_PTR(-EINVAL); + +- if (chip && !dev) ++ if (!chip->ops || !chip->ops->is_visible || !chip->info) + return ERR_PTR(-EINVAL); + + return __hwmon_device_register(dev, name, drvdata, chip, extra_groups); +diff --git a/drivers/hwtracing/coresight/coresight-cpu-debug.c b/drivers/hwtracing/coresight/coresight-cpu-debug.c +index 96544b348c273..ebe34fd6adb0a 100644 +--- a/drivers/hwtracing/coresight/coresight-cpu-debug.c ++++ b/drivers/hwtracing/coresight/coresight-cpu-debug.c +@@ -379,9 +379,10 @@ static int debug_notifier_call(struct notifier_block *self, + int cpu; + struct debug_drvdata *drvdata; + +- mutex_lock(&debug_lock); ++ /* Bail out if we can't acquire the mutex or the functionality is off */ ++ if (!mutex_trylock(&debug_lock)) ++ return NOTIFY_DONE; + +- /* Bail out if the functionality is disabled */ + if (!debug_enable) + goto skip_dump; + +@@ -400,7 +401,7 @@ static int debug_notifier_call(struct notifier_block *self, + + skip_dump: + mutex_unlock(&debug_lock); +- return 0; ++ return NOTIFY_DONE; + } + + static struct notifier_block debug_notifier = { +diff --git a/drivers/i2c/busses/i2c-at91-master.c b/drivers/i2c/busses/i2c-at91-master.c +index a3fcc35ffd3b6..f74d5ad2f1faa 100644 +--- a/drivers/i2c/busses/i2c-at91-master.c ++++ b/drivers/i2c/busses/i2c-at91-master.c +@@ -609,6 +609,7 @@ static int at91_twi_xfer(struct i2c_adapter *adap, struct i2c_msg *msg, int num) + unsigned int_addr_flag = 0; + struct i2c_msg *m_start = msg; + bool is_read; ++ u8 *dma_buf = NULL; + + dev_dbg(&adap->dev, "at91_xfer: processing %d messages:\n", num); + +@@ -656,7 +657,17 @@ static int at91_twi_xfer(struct i2c_adapter *adap, struct 
i2c_msg *msg, int num) + dev->msg = m_start; + dev->recv_len_abort = false; + ++ if (dev->use_dma) { ++ dma_buf = i2c_get_dma_safe_msg_buf(m_start, 1); ++ if (!dma_buf) { ++ ret = -ENOMEM; ++ goto out; ++ } ++ dev->buf = dma_buf; ++ } ++ + ret = at91_do_twi_transfer(dev); ++ i2c_put_dma_safe_msg_buf(dma_buf, m_start, !ret); + + ret = (ret < 0) ? ret : num; + out: +diff --git a/drivers/i2c/busses/i2c-cadence.c b/drivers/i2c/busses/i2c-cadence.c +index 17f0dd1f891e2..8a3a0991bc1c5 100644 +--- a/drivers/i2c/busses/i2c-cadence.c ++++ b/drivers/i2c/busses/i2c-cadence.c +@@ -506,7 +506,7 @@ static void cdns_i2c_master_reset(struct i2c_adapter *adap) + static int cdns_i2c_process_msg(struct cdns_i2c *id, struct i2c_msg *msg, + struct i2c_adapter *adap) + { +- unsigned long time_left; ++ unsigned long time_left, msg_timeout; + u32 reg; + + id->p_msg = msg; +@@ -531,8 +531,16 @@ static int cdns_i2c_process_msg(struct cdns_i2c *id, struct i2c_msg *msg, + else + cdns_i2c_msend(id); + ++ /* Minimal time to execute this message */ ++ msg_timeout = msecs_to_jiffies((1000 * msg->len * BITS_PER_BYTE) / id->i2c_clk); ++ /* Plus some wiggle room */ ++ msg_timeout += msecs_to_jiffies(500); ++ ++ if (msg_timeout < adap->timeout) ++ msg_timeout = adap->timeout; ++ + /* Wait for the signal of completion */ +- time_left = wait_for_completion_timeout(&id->xfer_done, adap->timeout); ++ time_left = wait_for_completion_timeout(&id->xfer_done, msg_timeout); + if (time_left == 0) { + cdns_i2c_master_reset(adap); + dev_err(id->adap.dev.parent, +diff --git a/drivers/iio/adc/ad7124.c b/drivers/iio/adc/ad7124.c +index 635cc1e7b1234..793a803919c52 100644 +--- a/drivers/iio/adc/ad7124.c ++++ b/drivers/iio/adc/ad7124.c +@@ -142,7 +142,6 @@ static const struct iio_chan_spec ad7124_channel_template = { + .sign = 'u', + .realbits = 24, + .storagebits = 32, +- .shift = 8, + .endianness = IIO_BE, + }, + }; +diff --git a/drivers/iio/adc/sc27xx_adc.c b/drivers/iio/adc/sc27xx_adc.c +index 
a6c046575ec3a..5b79c8b9ccde1 100644 +--- a/drivers/iio/adc/sc27xx_adc.c ++++ b/drivers/iio/adc/sc27xx_adc.c +@@ -36,8 +36,8 @@ + + /* Bits and mask definition for SC27XX_ADC_CH_CFG register */ + #define SC27XX_ADC_CHN_ID_MASK GENMASK(4, 0) +-#define SC27XX_ADC_SCALE_MASK GENMASK(10, 8) +-#define SC27XX_ADC_SCALE_SHIFT 8 ++#define SC27XX_ADC_SCALE_MASK GENMASK(10, 9) ++#define SC27XX_ADC_SCALE_SHIFT 9 + + /* Bits definitions for SC27XX_ADC_INT_EN registers */ + #define SC27XX_ADC_IRQ_EN BIT(0) +@@ -103,14 +103,14 @@ static struct sc27xx_adc_linear_graph small_scale_graph = { + 100, 341, + }; + +-static const struct sc27xx_adc_linear_graph big_scale_graph_calib = { +- 4200, 856, +- 3600, 733, ++static const struct sc27xx_adc_linear_graph sc2731_big_scale_graph_calib = { ++ 4200, 850, ++ 3600, 728, + }; + +-static const struct sc27xx_adc_linear_graph small_scale_graph_calib = { +- 1000, 833, +- 100, 80, ++static const struct sc27xx_adc_linear_graph sc2731_small_scale_graph_calib = { ++ 1000, 838, ++ 100, 84, + }; + + static int sc27xx_adc_get_calib_data(u32 calib_data, int calib_adc) +@@ -130,11 +130,11 @@ static int sc27xx_adc_scale_calibration(struct sc27xx_adc_data *data, + size_t len; + + if (big_scale) { +- calib_graph = &big_scale_graph_calib; ++ calib_graph = &sc2731_big_scale_graph_calib; + graph = &big_scale_graph; + cell_name = "big_scale_calib"; + } else { +- calib_graph = &small_scale_graph_calib; ++ calib_graph = &sc2731_small_scale_graph_calib; + graph = &small_scale_graph; + cell_name = "small_scale_calib"; + } +diff --git a/drivers/iio/adc/stmpe-adc.c b/drivers/iio/adc/stmpe-adc.c +index bd72727fc417a..35ae801c4d35f 100644 +--- a/drivers/iio/adc/stmpe-adc.c ++++ b/drivers/iio/adc/stmpe-adc.c +@@ -61,7 +61,7 @@ struct stmpe_adc { + static int stmpe_read_voltage(struct stmpe_adc *info, + struct iio_chan_spec const *chan, int *val) + { +- long ret; ++ unsigned long ret; + + mutex_lock(&info->lock); + +@@ -79,7 +79,7 @@ static int stmpe_read_voltage(struct 
stmpe_adc *info, + + ret = wait_for_completion_timeout(&info->completion, STMPE_ADC_TIMEOUT); + +- if (ret <= 0) { ++ if (ret == 0) { + stmpe_reg_write(info->stmpe, STMPE_REG_ADC_INT_STA, + STMPE_ADC_CH(info->channel)); + mutex_unlock(&info->lock); +@@ -96,7 +96,7 @@ static int stmpe_read_voltage(struct stmpe_adc *info, + static int stmpe_read_temp(struct stmpe_adc *info, + struct iio_chan_spec const *chan, int *val) + { +- long ret; ++ unsigned long ret; + + mutex_lock(&info->lock); + +@@ -114,7 +114,7 @@ static int stmpe_read_temp(struct stmpe_adc *info, + + ret = wait_for_completion_timeout(&info->completion, STMPE_ADC_TIMEOUT); + +- if (ret <= 0) { ++ if (ret == 0) { + mutex_unlock(&info->lock); + return -ETIMEDOUT; + } +diff --git a/drivers/iio/common/st_sensors/st_sensors_core.c b/drivers/iio/common/st_sensors/st_sensors_core.c +index 364683783ae52..c25b0bc89b0c2 100644 +--- a/drivers/iio/common/st_sensors/st_sensors_core.c ++++ b/drivers/iio/common/st_sensors/st_sensors_core.c +@@ -76,16 +76,18 @@ st_sensors_match_odr_error: + + int st_sensors_set_odr(struct iio_dev *indio_dev, unsigned int odr) + { +- int err; ++ int err = 0; + struct st_sensor_odr_avl odr_out = {0, 0}; + struct st_sensor_data *sdata = iio_priv(indio_dev); + ++ mutex_lock(&sdata->odr_lock); ++ + if (!sdata->sensor_settings->odr.mask) +- return 0; ++ goto unlock_mutex; + + err = st_sensors_match_odr(sdata->sensor_settings, odr, &odr_out); + if (err < 0) +- goto st_sensors_match_odr_error; ++ goto unlock_mutex; + + if ((sdata->sensor_settings->odr.addr == + sdata->sensor_settings->pw.addr) && +@@ -108,7 +110,9 @@ int st_sensors_set_odr(struct iio_dev *indio_dev, unsigned int odr) + if (err >= 0) + sdata->odr = odr_out.hz; + +-st_sensors_match_odr_error: ++unlock_mutex: ++ mutex_unlock(&sdata->odr_lock); ++ + return err; + } + EXPORT_SYMBOL(st_sensors_set_odr); +@@ -384,6 +388,8 @@ int st_sensors_init_sensor(struct iio_dev *indio_dev, + struct st_sensors_platform_data *of_pdata; + int err = 0; 
+ ++ mutex_init(&sdata->odr_lock); ++ + /* If OF/DT pdata exists, it will take precedence of anything else */ + of_pdata = st_sensors_of_probe(indio_dev->dev.parent, pdata); + if (of_pdata) +@@ -575,18 +581,24 @@ int st_sensors_read_info_raw(struct iio_dev *indio_dev, + err = -EBUSY; + goto out; + } else { ++ mutex_lock(&sdata->odr_lock); + err = st_sensors_set_enable(indio_dev, true); +- if (err < 0) ++ if (err < 0) { ++ mutex_unlock(&sdata->odr_lock); + goto out; ++ } + + msleep((sdata->sensor_settings->bootime * 1000) / sdata->odr); + err = st_sensors_read_axis_data(indio_dev, ch, val); +- if (err < 0) ++ if (err < 0) { ++ mutex_unlock(&sdata->odr_lock); + goto out; ++ } + + *val = *val >> ch->scan_type.shift; + + err = st_sensors_set_enable(indio_dev, false); ++ mutex_unlock(&sdata->odr_lock); + } + out: + mutex_unlock(&indio_dev->mlock); +diff --git a/drivers/iio/dummy/iio_simple_dummy.c b/drivers/iio/dummy/iio_simple_dummy.c +index 6cb02299a2152..18cfe1cb7a408 100644 +--- a/drivers/iio/dummy/iio_simple_dummy.c ++++ b/drivers/iio/dummy/iio_simple_dummy.c +@@ -568,10 +568,9 @@ static struct iio_sw_device *iio_dummy_probe(const char *name) + struct iio_sw_device *swd; + + swd = kzalloc(sizeof(*swd), GFP_KERNEL); +- if (!swd) { +- ret = -ENOMEM; +- goto error_kzalloc; +- } ++ if (!swd) ++ return ERR_PTR(-ENOMEM); ++ + /* + * Allocate an IIO device. 
+ * +@@ -583,7 +582,7 @@ static struct iio_sw_device *iio_dummy_probe(const char *name) + indio_dev = iio_device_alloc(sizeof(*st)); + if (!indio_dev) { + ret = -ENOMEM; +- goto error_ret; ++ goto error_free_swd; + } + + st = iio_priv(indio_dev); +@@ -614,6 +613,10 @@ static struct iio_sw_device *iio_dummy_probe(const char *name) + * indio_dev->name = spi_get_device_id(spi)->name; + */ + indio_dev->name = kstrdup(name, GFP_KERNEL); ++ if (!indio_dev->name) { ++ ret = -ENOMEM; ++ goto error_free_device; ++ } + + /* Provide description of available channels */ + indio_dev->channels = iio_dummy_channels; +@@ -630,7 +633,7 @@ static struct iio_sw_device *iio_dummy_probe(const char *name) + + ret = iio_simple_dummy_events_register(indio_dev); + if (ret < 0) +- goto error_free_device; ++ goto error_free_name; + + ret = iio_simple_dummy_configure_buffer(indio_dev); + if (ret < 0) +@@ -647,11 +650,12 @@ error_unconfigure_buffer: + iio_simple_dummy_unconfigure_buffer(indio_dev); + error_unregister_events: + iio_simple_dummy_events_unregister(indio_dev); ++error_free_name: ++ kfree(indio_dev->name); + error_free_device: + iio_device_free(indio_dev); +-error_ret: ++error_free_swd: + kfree(swd); +-error_kzalloc: + return ERR_PTR(ret); + } + +diff --git a/drivers/infiniband/hw/hfi1/file_ops.c b/drivers/infiniband/hw/hfi1/file_ops.c +index 89e1dfd07a1bf..8c7ba7bad42b9 100644 +--- a/drivers/infiniband/hw/hfi1/file_ops.c ++++ b/drivers/infiniband/hw/hfi1/file_ops.c +@@ -308,6 +308,8 @@ static ssize_t hfi1_write_iter(struct kiocb *kiocb, struct iov_iter *from) + unsigned long dim = from->nr_segs; + int idx; + ++ if (!HFI1_CAP_IS_KSET(SDMA)) ++ return -EINVAL; + idx = srcu_read_lock(&fd->pq_srcu); + pq = srcu_dereference(fd->pq, &fd->pq_srcu); + if (!cq || !pq) { +diff --git a/drivers/infiniband/hw/hfi1/init.c b/drivers/infiniband/hw/hfi1/init.c +index 85583f51124e2..d698c26282ea1 100644 +--- a/drivers/infiniband/hw/hfi1/init.c ++++ b/drivers/infiniband/hw/hfi1/init.c +@@ -543,7 
+543,7 @@ void set_link_ipg(struct hfi1_pportdata *ppd) + u16 shift, mult; + u64 src; + u32 current_egress_rate; /* Mbits /sec */ +- u32 max_pkt_time; ++ u64 max_pkt_time; + /* + * max_pkt_time is the maximum packet egress time in units + * of the fabric clock period 1/(805 MHz). +diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c +index 248be21acdbed..2a684fc6056e1 100644 +--- a/drivers/infiniband/hw/hfi1/sdma.c ++++ b/drivers/infiniband/hw/hfi1/sdma.c +@@ -1329,11 +1329,13 @@ void sdma_clean(struct hfi1_devdata *dd, size_t num_engines) + kvfree(sde->tx_ring); + sde->tx_ring = NULL; + } +- spin_lock_irq(&dd->sde_map_lock); +- sdma_map_free(rcu_access_pointer(dd->sdma_map)); +- RCU_INIT_POINTER(dd->sdma_map, NULL); +- spin_unlock_irq(&dd->sde_map_lock); +- synchronize_rcu(); ++ if (rcu_access_pointer(dd->sdma_map)) { ++ spin_lock_irq(&dd->sde_map_lock); ++ sdma_map_free(rcu_access_pointer(dd->sdma_map)); ++ RCU_INIT_POINTER(dd->sdma_map, NULL); ++ spin_unlock_irq(&dd->sde_map_lock); ++ synchronize_rcu(); ++ } + kfree(dd->per_sdma); + dd->per_sdma = NULL; + +diff --git a/drivers/infiniband/sw/rdmavt/qp.c b/drivers/infiniband/sw/rdmavt/qp.c +index 48e8612c1bc8d..e97c13967174c 100644 +--- a/drivers/infiniband/sw/rdmavt/qp.c ++++ b/drivers/infiniband/sw/rdmavt/qp.c +@@ -2812,7 +2812,7 @@ void rvt_qp_iter(struct rvt_dev_info *rdi, + EXPORT_SYMBOL(rvt_qp_iter); + + /* +- * This should be called with s_lock held. ++ * This should be called with s_lock and r_lock held. 
+ */ + void rvt_send_complete(struct rvt_qp *qp, struct rvt_swqe *wqe, + enum ib_wc_status status) +@@ -3171,7 +3171,9 @@ send_comp: + rvp->n_loop_pkts++; + flush_send: + sqp->s_rnr_retry = sqp->s_rnr_retry_cnt; ++ spin_lock(&sqp->r_lock); + rvt_send_complete(sqp, wqe, send_status); ++ spin_unlock(&sqp->r_lock); + if (local_ops) { + atomic_dec(&sqp->local_ops_pending); + local_ops = 0; +@@ -3225,7 +3227,9 @@ serr: + spin_unlock_irqrestore(&qp->r_lock, flags); + serr_no_r_lock: + spin_lock_irqsave(&sqp->s_lock, flags); ++ spin_lock(&sqp->r_lock); + rvt_send_complete(sqp, wqe, send_status); ++ spin_unlock(&sqp->r_lock); + if (sqp->ibqp.qp_type == IB_QPT_RC) { + int lastwqe; + +diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c +index a4d6e0b7901e9..87702478eb99b 100644 +--- a/drivers/infiniband/sw/rxe/rxe_req.c ++++ b/drivers/infiniband/sw/rxe/rxe_req.c +@@ -680,7 +680,7 @@ next_wqe: + opcode = next_opcode(qp, wqe, wqe->wr.opcode); + if (unlikely(opcode < 0)) { + wqe->status = IB_WC_LOC_QP_OP_ERR; +- goto exit; ++ goto err; + } + + mask = rxe_opcode[opcode].mask; +diff --git a/drivers/input/misc/sparcspkr.c b/drivers/input/misc/sparcspkr.c +index fe43e5557ed72..cdcb7737c46aa 100644 +--- a/drivers/input/misc/sparcspkr.c ++++ b/drivers/input/misc/sparcspkr.c +@@ -205,6 +205,7 @@ static int bbc_beep_probe(struct platform_device *op) + + info = &state->u.bbc; + info->clock_freq = of_getintprop_default(dp, "clock-frequency", 0); ++ of_node_put(dp); + if (!info->clock_freq) + goto out_free; + +diff --git a/drivers/input/mouse/bcm5974.c b/drivers/input/mouse/bcm5974.c +index 59a14505b9cd1..ca150618d32f1 100644 +--- a/drivers/input/mouse/bcm5974.c ++++ b/drivers/input/mouse/bcm5974.c +@@ -942,17 +942,22 @@ static int bcm5974_probe(struct usb_interface *iface, + if (!dev->tp_data) + goto err_free_bt_buffer; + +- if (dev->bt_urb) ++ if (dev->bt_urb) { + usb_fill_int_urb(dev->bt_urb, udev, + usb_rcvintpipe(udev, cfg->bt_ep), + dev->bt_data, 
dev->cfg.bt_datalen, + bcm5974_irq_button, dev, 1); + ++ dev->bt_urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP; ++ } ++ + usb_fill_int_urb(dev->tp_urb, udev, + usb_rcvintpipe(udev, cfg->tp_ep), + dev->tp_data, dev->cfg.tp_datalen, + bcm5974_irq_trackpad, dev, 1); + ++ dev->tp_urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP; ++ + /* create bcm5974 device */ + usb_make_path(udev, dev->phys, sizeof(dev->phys)); + strlcat(dev->phys, "/input0", sizeof(dev->phys)); +diff --git a/drivers/input/touchscreen/stmfts.c b/drivers/input/touchscreen/stmfts.c +index be1dd504d5b1d..20bc2279a2f24 100644 +--- a/drivers/input/touchscreen/stmfts.c ++++ b/drivers/input/touchscreen/stmfts.c +@@ -337,13 +337,15 @@ static int stmfts_input_open(struct input_dev *dev) + struct stmfts_data *sdata = input_get_drvdata(dev); + int err; + +- err = pm_runtime_get_sync(&sdata->client->dev); +- if (err < 0) +- goto out; ++ err = pm_runtime_resume_and_get(&sdata->client->dev); ++ if (err) ++ return err; + + err = i2c_smbus_write_byte(sdata->client, STMFTS_MS_MT_SENSE_ON); +- if (err) +- goto out; ++ if (err) { ++ pm_runtime_put_sync(&sdata->client->dev); ++ return err; ++ } + + mutex_lock(&sdata->mutex); + sdata->running = true; +@@ -366,9 +368,7 @@ static int stmfts_input_open(struct input_dev *dev) + "failed to enable touchkey\n"); + } + +-out: +- pm_runtime_put_noidle(&sdata->client->dev); +- return err; ++ return 0; + } + + static void stmfts_input_close(struct input_dev *dev) +diff --git a/drivers/iommu/amd_iommu_init.c b/drivers/iommu/amd_iommu_init.c +index 7502fa84e2537..82d0083104182 100644 +--- a/drivers/iommu/amd_iommu_init.c ++++ b/drivers/iommu/amd_iommu_init.c +@@ -83,7 +83,7 @@ + #define ACPI_DEVFLAG_LINT1 0x80 + #define ACPI_DEVFLAG_ATSDIS 0x10000000 + +-#define LOOP_TIMEOUT 100000 ++#define LOOP_TIMEOUT 2000000 + /* + * ACPI table definitions + * +diff --git a/drivers/iommu/msm_iommu.c b/drivers/iommu/msm_iommu.c +index be99d408cf35d..cba0097eba39c 100644 +--- 
a/drivers/iommu/msm_iommu.c ++++ b/drivers/iommu/msm_iommu.c +@@ -636,16 +636,19 @@ static void insert_iommu_master(struct device *dev, + static int qcom_iommu_of_xlate(struct device *dev, + struct of_phandle_args *spec) + { +- struct msm_iommu_dev *iommu; ++ struct msm_iommu_dev *iommu = NULL, *iter; + unsigned long flags; + int ret = 0; + + spin_lock_irqsave(&msm_iommu_lock, flags); +- list_for_each_entry(iommu, &qcom_iommu_devices, dev_node) +- if (iommu->dev->of_node == spec->np) ++ list_for_each_entry(iter, &qcom_iommu_devices, dev_node) { ++ if (iter->dev->of_node == spec->np) { ++ iommu = iter; + break; ++ } ++ } + +- if (!iommu || iommu->dev->of_node != spec->np) { ++ if (!iommu) { + ret = -ENODEV; + goto fail; + } +diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c +index c2f6c78fee444..18d7c818a174c 100644 +--- a/drivers/iommu/mtk_iommu.c ++++ b/drivers/iommu/mtk_iommu.c +@@ -769,8 +769,7 @@ static int mtk_iommu_remove(struct platform_device *pdev) + iommu_device_sysfs_remove(&data->iommu); + iommu_device_unregister(&data->iommu); + +- if (iommu_present(&platform_bus_type)) +- bus_set_iommu(&platform_bus_type, NULL); ++ list_del(&data->list); + + clk_disable_unprepare(data->bclk); + devm_free_irq(&pdev->dev, data->irq, data); +diff --git a/drivers/irqchip/irq-armada-370-xp.c b/drivers/irqchip/irq-armada-370-xp.c +index 5849ac5a2ad3b..0fd428db3aa4d 100644 +--- a/drivers/irqchip/irq-armada-370-xp.c ++++ b/drivers/irqchip/irq-armada-370-xp.c +@@ -392,7 +392,16 @@ static void armada_xp_mpic_smp_cpu_init(void) + + static void armada_xp_mpic_perf_init(void) + { +- unsigned long cpuid = cpu_logical_map(smp_processor_id()); ++ unsigned long cpuid; ++ ++ /* ++ * This Performance Counter Overflow interrupt is specific for ++ * Armada 370 and XP. It is not available on Armada 375, 38x and 39x. 
++ */ ++ if (!of_machine_is_compatible("marvell,armada-370-xp")) ++ return; ++ ++ cpuid = cpu_logical_map(smp_processor_id()); + + /* Enable Performance Counter Overflow interrupts */ + writel(ARMADA_370_XP_INT_CAUSE_PERF(cpuid), +diff --git a/drivers/irqchip/irq-aspeed-i2c-ic.c b/drivers/irqchip/irq-aspeed-i2c-ic.c +index 8d591c179f812..3d3210828e9bf 100644 +--- a/drivers/irqchip/irq-aspeed-i2c-ic.c ++++ b/drivers/irqchip/irq-aspeed-i2c-ic.c +@@ -79,8 +79,8 @@ static int __init aspeed_i2c_ic_of_init(struct device_node *node, + } + + i2c_ic->parent_irq = irq_of_parse_and_map(node, 0); +- if (i2c_ic->parent_irq < 0) { +- ret = i2c_ic->parent_irq; ++ if (!i2c_ic->parent_irq) { ++ ret = -EINVAL; + goto err_iounmap; + } + +diff --git a/drivers/irqchip/irq-sni-exiu.c b/drivers/irqchip/irq-sni-exiu.c +index abd011fcecf4a..c7db617e1a2f6 100644 +--- a/drivers/irqchip/irq-sni-exiu.c ++++ b/drivers/irqchip/irq-sni-exiu.c +@@ -37,11 +37,26 @@ struct exiu_irq_data { + u32 spi_base; + }; + +-static void exiu_irq_eoi(struct irq_data *d) ++static void exiu_irq_ack(struct irq_data *d) + { + struct exiu_irq_data *data = irq_data_get_irq_chip_data(d); + + writel(BIT(d->hwirq), data->base + EIREQCLR); ++} ++ ++static void exiu_irq_eoi(struct irq_data *d) ++{ ++ struct exiu_irq_data *data = irq_data_get_irq_chip_data(d); ++ ++ /* ++ * Level triggered interrupts are latched and must be cleared during ++ * EOI or the interrupt will be jammed on. Of course if a level ++ * triggered interrupt is still asserted then the write will not clear ++ * the interrupt. 
++ */ ++ if (irqd_is_level_type(d)) ++ writel(BIT(d->hwirq), data->base + EIREQCLR); ++ + irq_chip_eoi_parent(d); + } + +@@ -91,10 +106,13 @@ static int exiu_irq_set_type(struct irq_data *d, unsigned int type) + writel_relaxed(val, data->base + EILVL); + + val = readl_relaxed(data->base + EIEDG); +- if (type == IRQ_TYPE_LEVEL_LOW || type == IRQ_TYPE_LEVEL_HIGH) ++ if (type == IRQ_TYPE_LEVEL_LOW || type == IRQ_TYPE_LEVEL_HIGH) { + val &= ~BIT(d->hwirq); +- else ++ irq_set_handler_locked(d, handle_fasteoi_irq); ++ } else { + val |= BIT(d->hwirq); ++ irq_set_handler_locked(d, handle_fasteoi_ack_irq); ++ } + writel_relaxed(val, data->base + EIEDG); + + writel_relaxed(BIT(d->hwirq), data->base + EIREQCLR); +@@ -104,6 +122,7 @@ static int exiu_irq_set_type(struct irq_data *d, unsigned int type) + + static struct irq_chip exiu_irq_chip = { + .name = "EXIU", ++ .irq_ack = exiu_irq_ack, + .irq_eoi = exiu_irq_eoi, + .irq_enable = exiu_irq_enable, + .irq_mask = exiu_irq_mask, +diff --git a/drivers/irqchip/irq-xtensa-mx.c b/drivers/irqchip/irq-xtensa-mx.c +index 27933338f7b36..8c581c985aa7d 100644 +--- a/drivers/irqchip/irq-xtensa-mx.c ++++ b/drivers/irqchip/irq-xtensa-mx.c +@@ -151,14 +151,25 @@ static struct irq_chip xtensa_mx_irq_chip = { + .irq_set_affinity = xtensa_mx_irq_set_affinity, + }; + ++static void __init xtensa_mx_init_common(struct irq_domain *root_domain) ++{ ++ unsigned int i; ++ ++ irq_set_default_host(root_domain); ++ secondary_init_irq(); ++ ++ /* Initialize default IRQ routing to CPU 0 */ ++ for (i = 0; i < XCHAL_NUM_EXTINTERRUPTS; ++i) ++ set_er(1, MIROUT(i)); ++} ++ + int __init xtensa_mx_init_legacy(struct device_node *interrupt_parent) + { + struct irq_domain *root_domain = + irq_domain_add_legacy(NULL, NR_IRQS - 1, 1, 0, + &xtensa_mx_irq_domain_ops, + &xtensa_mx_irq_chip); +- irq_set_default_host(root_domain); +- secondary_init_irq(); ++ xtensa_mx_init_common(root_domain); + return 0; + } + +@@ -168,8 +179,7 @@ static int __init xtensa_mx_init(struct 
device_node *np, + struct irq_domain *root_domain = + irq_domain_add_linear(np, NR_IRQS, &xtensa_mx_irq_domain_ops, + &xtensa_mx_irq_chip); +- irq_set_default_host(root_domain); +- secondary_init_irq(); ++ xtensa_mx_init_common(root_domain); + return 0; + } + IRQCHIP_DECLARE(xtensa_mx_irq_chip, "cdns,xtensa-mx", xtensa_mx_init); +diff --git a/drivers/macintosh/Kconfig b/drivers/macintosh/Kconfig +index 574e122ae1050..b5a534206eddd 100644 +--- a/drivers/macintosh/Kconfig ++++ b/drivers/macintosh/Kconfig +@@ -44,6 +44,7 @@ config ADB_IOP + config ADB_CUDA + bool "Support for Cuda/Egret based Macs and PowerMacs" + depends on (ADB || PPC_PMAC) && !PPC_PMAC64 ++ select RTC_LIB + help + This provides support for Cuda/Egret based Macintosh and + Power Macintosh systems. This includes most m68k based Macs, +@@ -57,6 +58,7 @@ config ADB_CUDA + config ADB_PMU + bool "Support for PMU based PowerMacs and PowerBooks" + depends on PPC_PMAC || MAC ++ select RTC_LIB + help + On PowerBooks, iBooks, and recent iMacs and Power Macintoshes, the + PMU is an embedded microprocessor whose primary function is to +@@ -67,6 +69,10 @@ config ADB_PMU + this device; you should do so if your machine is one of those + mentioned above. 
+ ++config ADB_PMU_EVENT ++ def_bool y ++ depends on ADB_PMU && INPUT=y ++ + config ADB_PMU_LED + bool "Support for the Power/iBook front LED" + depends on PPC_PMAC && ADB_PMU +diff --git a/drivers/macintosh/Makefile b/drivers/macintosh/Makefile +index 49819b1b6f201..712edcb3e0b08 100644 +--- a/drivers/macintosh/Makefile ++++ b/drivers/macintosh/Makefile +@@ -12,7 +12,8 @@ obj-$(CONFIG_MAC_EMUMOUSEBTN) += mac_hid.o + obj-$(CONFIG_INPUT_ADBHID) += adbhid.o + obj-$(CONFIG_ANSLCD) += ans-lcd.o + +-obj-$(CONFIG_ADB_PMU) += via-pmu.o via-pmu-event.o ++obj-$(CONFIG_ADB_PMU) += via-pmu.o ++obj-$(CONFIG_ADB_PMU_EVENT) += via-pmu-event.o + obj-$(CONFIG_ADB_PMU_LED) += via-pmu-led.o + obj-$(CONFIG_PMAC_BACKLIGHT) += via-pmu-backlight.o + obj-$(CONFIG_ADB_CUDA) += via-cuda.o +diff --git a/drivers/macintosh/via-pmu.c b/drivers/macintosh/via-pmu.c +index 21d532a78fa47..d8b6ac2ec313f 100644 +--- a/drivers/macintosh/via-pmu.c ++++ b/drivers/macintosh/via-pmu.c +@@ -1464,7 +1464,7 @@ next: + pmu_pass_intr(data, len); + /* len == 6 is probably a bad check. But how do I + * know what PMU versions send what events here? 
*/ +- if (len == 6) { ++ if (IS_ENABLED(CONFIG_ADB_PMU_EVENT) && len == 6) { + via_pmu_event(PMU_EVT_POWER, !!(data[1]&8)); + via_pmu_event(PMU_EVT_LID, data[1]&1); + } +diff --git a/drivers/mailbox/mailbox.c b/drivers/mailbox/mailbox.c +index 3e7d4b20ab34f..4229b9b5da98f 100644 +--- a/drivers/mailbox/mailbox.c ++++ b/drivers/mailbox/mailbox.c +@@ -82,11 +82,11 @@ static void msg_submit(struct mbox_chan *chan) + exit: + spin_unlock_irqrestore(&chan->lock, flags); + +- /* kick start the timer immediately to avoid delays */ + if (!err && (chan->txdone_method & TXDONE_BY_POLL)) { +- /* but only if not already active */ +- if (!hrtimer_active(&chan->mbox->poll_hrt)) +- hrtimer_start(&chan->mbox->poll_hrt, 0, HRTIMER_MODE_REL); ++ /* kick start the timer immediately to avoid delays */ ++ spin_lock_irqsave(&chan->mbox->poll_hrt_lock, flags); ++ hrtimer_start(&chan->mbox->poll_hrt, 0, HRTIMER_MODE_REL); ++ spin_unlock_irqrestore(&chan->mbox->poll_hrt_lock, flags); + } + } + +@@ -120,20 +120,26 @@ static enum hrtimer_restart txdone_hrtimer(struct hrtimer *hrtimer) + container_of(hrtimer, struct mbox_controller, poll_hrt); + bool txdone, resched = false; + int i; ++ unsigned long flags; + + for (i = 0; i < mbox->num_chans; i++) { + struct mbox_chan *chan = &mbox->chans[i]; + + if (chan->active_req && chan->cl) { +- resched = true; + txdone = chan->mbox->ops->last_tx_done(chan); + if (txdone) + tx_tick(chan, 0); ++ else ++ resched = true; + } + } + + if (resched) { +- hrtimer_forward_now(hrtimer, ms_to_ktime(mbox->txpoll_period)); ++ spin_lock_irqsave(&mbox->poll_hrt_lock, flags); ++ if (!hrtimer_is_queued(hrtimer)) ++ hrtimer_forward_now(hrtimer, ms_to_ktime(mbox->txpoll_period)); ++ spin_unlock_irqrestore(&mbox->poll_hrt_lock, flags); ++ + return HRTIMER_RESTART; + } + return HRTIMER_NORESTART; +@@ -500,6 +506,7 @@ int mbox_controller_register(struct mbox_controller *mbox) + hrtimer_init(&mbox->poll_hrt, CLOCK_MONOTONIC, + HRTIMER_MODE_REL); + mbox->poll_hrt.function = 
txdone_hrtimer; ++ spin_lock_init(&mbox->poll_hrt_lock); + } + + for (i = 0; i < mbox->num_chans; i++) { +diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c +index 4045ae748f17e..f5d24620d8182 100644 +--- a/drivers/md/bcache/request.c ++++ b/drivers/md/bcache/request.c +@@ -1119,6 +1119,12 @@ static void detached_dev_do_request(struct bcache_device *d, struct bio *bio) + * which would call closure_get(&dc->disk.cl) + */ + ddip = kzalloc(sizeof(struct detached_dev_io_private), GFP_NOIO); ++ if (!ddip) { ++ bio->bi_status = BLK_STS_RESOURCE; ++ bio->bi_end_io(bio); ++ return; ++ } ++ + ddip->d = d; + ddip->start_time = jiffies; + ddip->bi_end_io = bio->bi_end_io; +diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c +index d7eef5292ae2f..a95e20c3d0d4f 100644 +--- a/drivers/md/md-bitmap.c ++++ b/drivers/md/md-bitmap.c +@@ -642,14 +642,6 @@ re_read: + daemon_sleep = le32_to_cpu(sb->daemon_sleep) * HZ; + write_behind = le32_to_cpu(sb->write_behind); + sectors_reserved = le32_to_cpu(sb->sectors_reserved); +- /* Setup nodes/clustername only if bitmap version is +- * cluster-compatible +- */ +- if (sb->version == cpu_to_le32(BITMAP_MAJOR_CLUSTERED)) { +- nodes = le32_to_cpu(sb->nodes); +- strlcpy(bitmap->mddev->bitmap_info.cluster_name, +- sb->cluster_name, 64); +- } + + /* verify that the bitmap-specific fields are valid */ + if (sb->magic != cpu_to_le32(BITMAP_MAGIC)) +@@ -671,6 +663,16 @@ re_read: + goto out; + } + ++ /* ++ * Setup nodes/clustername only if bitmap version is ++ * cluster-compatible ++ */ ++ if (sb->version == cpu_to_le32(BITMAP_MAJOR_CLUSTERED)) { ++ nodes = le32_to_cpu(sb->nodes); ++ strlcpy(bitmap->mddev->bitmap_info.cluster_name, ++ sb->cluster_name, 64); ++ } ++ + /* keep the array size field of the bitmap superblock up to date */ + sb->sync_size = cpu_to_le64(bitmap->mddev->resync_max_sectors); + +@@ -703,9 +705,9 @@ re_read: + + out: + kunmap_atomic(sb); +- /* Assigning chunksize is required for "re_read" */ +- 
bitmap->mddev->bitmap_info.chunksize = chunksize; + if (err == 0 && nodes && (bitmap->cluster_slot < 0)) { ++ /* Assigning chunksize is required for "re_read" */ ++ bitmap->mddev->bitmap_info.chunksize = chunksize; + err = md_setup_cluster(bitmap->mddev, nodes); + if (err) { + pr_warn("%s: Could not setup cluster service (%d)\n", +@@ -716,18 +718,18 @@ out: + goto re_read; + } + +- + out_no_sb: +- if (test_bit(BITMAP_STALE, &bitmap->flags)) +- bitmap->events_cleared = bitmap->mddev->events; +- bitmap->mddev->bitmap_info.chunksize = chunksize; +- bitmap->mddev->bitmap_info.daemon_sleep = daemon_sleep; +- bitmap->mddev->bitmap_info.max_write_behind = write_behind; +- bitmap->mddev->bitmap_info.nodes = nodes; +- if (bitmap->mddev->bitmap_info.space == 0 || +- bitmap->mddev->bitmap_info.space > sectors_reserved) +- bitmap->mddev->bitmap_info.space = sectors_reserved; +- if (err) { ++ if (err == 0) { ++ if (test_bit(BITMAP_STALE, &bitmap->flags)) ++ bitmap->events_cleared = bitmap->mddev->events; ++ bitmap->mddev->bitmap_info.chunksize = chunksize; ++ bitmap->mddev->bitmap_info.daemon_sleep = daemon_sleep; ++ bitmap->mddev->bitmap_info.max_write_behind = write_behind; ++ bitmap->mddev->bitmap_info.nodes = nodes; ++ if (bitmap->mddev->bitmap_info.space == 0 || ++ bitmap->mddev->bitmap_info.space > sectors_reserved) ++ bitmap->mddev->bitmap_info.space = sectors_reserved; ++ } else { + md_bitmap_print_sb(bitmap); + if (bitmap->cluster_slot < 0) + md_cluster_stop(bitmap->mddev); +diff --git a/drivers/md/md.c b/drivers/md/md.c +index c178b2f406de3..11fd3b32b5621 100644 +--- a/drivers/md/md.c ++++ b/drivers/md/md.c +@@ -2532,14 +2532,16 @@ static void sync_sbs(struct mddev *mddev, int nospares) + + static bool does_sb_need_changing(struct mddev *mddev) + { +- struct md_rdev *rdev; ++ struct md_rdev *rdev = NULL, *iter; + struct mdp_superblock_1 *sb; + int role; + + /* Find a good rdev */ +- rdev_for_each(rdev, mddev) +- if ((rdev->raid_disk >= 0) && !test_bit(Faulty, 
&rdev->flags)) ++ rdev_for_each(iter, mddev) ++ if ((iter->raid_disk >= 0) && !test_bit(Faulty, &iter->flags)) { ++ rdev = iter; + break; ++ } + + /* No good device found. */ + if (!rdev) +@@ -7775,17 +7777,22 @@ EXPORT_SYMBOL(md_register_thread); + + void md_unregister_thread(struct md_thread **threadp) + { +- struct md_thread *thread = *threadp; +- if (!thread) +- return; +- pr_debug("interrupting MD-thread pid %d\n", task_pid_nr(thread->tsk)); +- /* Locking ensures that mddev_unlock does not wake_up a ++ struct md_thread *thread; ++ ++ /* ++ * Locking ensures that mddev_unlock does not wake_up a + * non-existent thread + */ + spin_lock(&pers_lock); ++ thread = *threadp; ++ if (!thread) { ++ spin_unlock(&pers_lock); ++ return; ++ } + *threadp = NULL; + spin_unlock(&pers_lock); + ++ pr_debug("interrupting MD-thread pid %d\n", task_pid_nr(thread->tsk)); + kthread_stop(thread->tsk); + kfree(thread); + } +@@ -9529,16 +9536,18 @@ static int read_rdev(struct mddev *mddev, struct md_rdev *rdev) + + void md_reload_sb(struct mddev *mddev, int nr) + { +- struct md_rdev *rdev; ++ struct md_rdev *rdev = NULL, *iter; + int err; + + /* Find the rdev */ +- rdev_for_each_rcu(rdev, mddev) { +- if (rdev->desc_nr == nr) ++ rdev_for_each_rcu(iter, mddev) { ++ if (iter->desc_nr == nr) { ++ rdev = iter; + break; ++ } + } + +- if (!rdev || rdev->desc_nr != nr) { ++ if (!rdev) { + pr_warn("%s: %d Could not find rdev with nr %d\n", __func__, __LINE__, nr); + return; + } +diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c +index 322386ff5d225..0ead5a7887f14 100644 +--- a/drivers/md/raid0.c ++++ b/drivers/md/raid0.c +@@ -143,21 +143,6 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf) + pr_debug("md/raid0:%s: FINAL %d zones\n", + mdname(mddev), conf->nr_strip_zones); + +- if (conf->nr_strip_zones == 1) { +- conf->layout = RAID0_ORIG_LAYOUT; +- } else if (mddev->layout == RAID0_ORIG_LAYOUT || +- mddev->layout == RAID0_ALT_MULTIZONE_LAYOUT) { +- 
conf->layout = mddev->layout; +- } else if (default_layout == RAID0_ORIG_LAYOUT || +- default_layout == RAID0_ALT_MULTIZONE_LAYOUT) { +- conf->layout = default_layout; +- } else { +- pr_err("md/raid0:%s: cannot assemble multi-zone RAID0 with default_layout setting\n", +- mdname(mddev)); +- pr_err("md/raid0: please set raid0.default_layout to 1 or 2\n"); +- err = -ENOTSUPP; +- goto abort; +- } + /* + * now since we have the hard sector sizes, we can make sure + * chunk size is a multiple of that sector size +@@ -288,6 +273,22 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf) + (unsigned long long)smallest->sectors); + } + ++ if (conf->nr_strip_zones == 1 || conf->strip_zone[1].nb_dev == 1) { ++ conf->layout = RAID0_ORIG_LAYOUT; ++ } else if (mddev->layout == RAID0_ORIG_LAYOUT || ++ mddev->layout == RAID0_ALT_MULTIZONE_LAYOUT) { ++ conf->layout = mddev->layout; ++ } else if (default_layout == RAID0_ORIG_LAYOUT || ++ default_layout == RAID0_ALT_MULTIZONE_LAYOUT) { ++ conf->layout = default_layout; ++ } else { ++ pr_err("md/raid0:%s: cannot assemble multi-zone RAID0 with default_layout setting\n", ++ mdname(mddev)); ++ pr_err("md/raid0: please set raid0.default_layout to 1 or 2\n"); ++ err = -EOPNOTSUPP; ++ goto abort; ++ } ++ + pr_debug("md/raid0:%s: done.\n", mdname(mddev)); + *private_conf = conf; + +diff --git a/drivers/media/cec/cec-adap.c b/drivers/media/cec/cec-adap.c +index 56857ac0a0be2..c665f7d20c448 100644 +--- a/drivers/media/cec/cec-adap.c ++++ b/drivers/media/cec/cec-adap.c +@@ -1263,7 +1263,7 @@ static int cec_config_log_addr(struct cec_adapter *adap, + * While trying to poll the physical address was reset + * and the adapter was unconfigured, so bail out. 
+ */ +- if (!adap->is_configuring) ++ if (adap->phys_addr == CEC_PHYS_ADDR_INVALID) + return -EINTR; + + if (err) +@@ -1321,7 +1321,6 @@ static void cec_adap_unconfigure(struct cec_adapter *adap) + adap->phys_addr != CEC_PHYS_ADDR_INVALID) + WARN_ON(adap->ops->adap_log_addr(adap, CEC_LOG_ADDR_INVALID)); + adap->log_addrs.log_addr_mask = 0; +- adap->is_configuring = false; + adap->is_configured = false; + memset(adap->phys_addrs, 0xff, sizeof(adap->phys_addrs)); + cec_flush(adap); +@@ -1514,9 +1513,10 @@ unconfigure: + for (i = 0; i < las->num_log_addrs; i++) + las->log_addr[i] = CEC_LOG_ADDR_INVALID; + cec_adap_unconfigure(adap); ++ adap->is_configuring = false; + adap->kthread_config = NULL; +- mutex_unlock(&adap->lock); + complete(&adap->config_completion); ++ mutex_unlock(&adap->lock); + return 0; + } + +diff --git a/drivers/media/i2c/ov7670.c b/drivers/media/i2c/ov7670.c +index b42b289faaef4..154776d0069ea 100644 +--- a/drivers/media/i2c/ov7670.c ++++ b/drivers/media/i2c/ov7670.c +@@ -2000,7 +2000,6 @@ static int ov7670_remove(struct i2c_client *client) + v4l2_async_unregister_subdev(sd); + v4l2_ctrl_handler_free(&info->hdl); + media_entity_cleanup(&info->sd.entity); +- ov7670_power_off(sd); + return 0; + } + +diff --git a/drivers/media/pci/cx23885/cx23885-core.c b/drivers/media/pci/cx23885/cx23885-core.c +index ead0acb7807c8..6747ecb4911b1 100644 +--- a/drivers/media/pci/cx23885/cx23885-core.c ++++ b/drivers/media/pci/cx23885/cx23885-core.c +@@ -2154,7 +2154,7 @@ static int cx23885_initdev(struct pci_dev *pci_dev, + err = pci_set_dma_mask(pci_dev, 0xffffffff); + if (err) { + pr_err("%s/0: Oops: no 32bit PCI DMA ???\n", dev->name); +- goto fail_ctrl; ++ goto fail_dma_set_mask; + } + + err = request_irq(pci_dev->irq, cx23885_irq, +@@ -2162,7 +2162,7 @@ static int cx23885_initdev(struct pci_dev *pci_dev, + if (err < 0) { + pr_err("%s: can't get IRQ %d\n", + dev->name, pci_dev->irq); +- goto fail_irq; ++ goto fail_dma_set_mask; + } + + switch (dev->board) { +@@ 
-2184,7 +2184,7 @@ static int cx23885_initdev(struct pci_dev *pci_dev, + + return 0; + +-fail_irq: ++fail_dma_set_mask: + cx23885_dev_unregister(dev); + fail_ctrl: + v4l2_ctrl_handler_free(hdl); +diff --git a/drivers/media/pci/cx25821/cx25821-core.c b/drivers/media/pci/cx25821/cx25821-core.c +index 44839a6461e88..534829e352d1d 100644 +--- a/drivers/media/pci/cx25821/cx25821-core.c ++++ b/drivers/media/pci/cx25821/cx25821-core.c +@@ -1340,11 +1340,11 @@ static void cx25821_finidev(struct pci_dev *pci_dev) + struct cx25821_dev *dev = get_cx25821(v4l2_dev); + + cx25821_shutdown(dev); +- pci_disable_device(pci_dev); + + /* unregister stuff */ + if (pci_dev->irq) + free_irq(pci_dev->irq, dev); ++ pci_disable_device(pci_dev); + + cx25821_dev_unregister(dev); + v4l2_device_unregister(v4l2_dev); +diff --git a/drivers/media/platform/aspeed-video.c b/drivers/media/platform/aspeed-video.c +index c87eddb1c93f7..c3f0b143330a5 100644 +--- a/drivers/media/platform/aspeed-video.c ++++ b/drivers/media/platform/aspeed-video.c +@@ -1688,6 +1688,7 @@ static int aspeed_video_probe(struct platform_device *pdev) + + rc = aspeed_video_setup_video(video); + if (rc) { ++ aspeed_video_free_buf(video, &video->jpeg); + clk_unprepare(video->vclk); + clk_unprepare(video->eclk); + return rc; +@@ -1715,8 +1716,7 @@ static int aspeed_video_remove(struct platform_device *pdev) + + v4l2_device_unregister(v4l2_dev); + +- dma_free_coherent(video->dev, VE_JPEG_HEADER_SIZE, video->jpeg.virt, +- video->jpeg.dma); ++ aspeed_video_free_buf(video, &video->jpeg); + + of_reserved_mem_device_release(dev); + +diff --git a/drivers/media/platform/coda/coda-common.c b/drivers/media/platform/coda/coda-common.c +index 0adc54832657b..ebe5e44b6fd38 100644 +--- a/drivers/media/platform/coda/coda-common.c ++++ b/drivers/media/platform/coda/coda-common.c +@@ -1192,7 +1192,8 @@ static int coda_enum_frameintervals(struct file *file, void *fh, + struct v4l2_frmivalenum *f) + { + struct coda_ctx *ctx = fh_to_ctx(fh); +- int 
i; ++ struct coda_q_data *q_data; ++ const struct coda_codec *codec; + + if (f->index) + return -EINVAL; +@@ -1201,12 +1202,19 @@ static int coda_enum_frameintervals(struct file *file, void *fh, + if (!ctx->vdoa && f->pixel_format == V4L2_PIX_FMT_YUYV) + return -EINVAL; + +- for (i = 0; i < CODA_MAX_FORMATS; i++) { +- if (f->pixel_format == ctx->cvd->src_formats[i] || +- f->pixel_format == ctx->cvd->dst_formats[i]) +- break; ++ if (coda_format_normalize_yuv(f->pixel_format) == V4L2_PIX_FMT_YUV420) { ++ q_data = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE); ++ codec = coda_find_codec(ctx->dev, f->pixel_format, ++ q_data->fourcc); ++ } else { ++ codec = coda_find_codec(ctx->dev, V4L2_PIX_FMT_YUV420, ++ f->pixel_format); + } +- if (i == CODA_MAX_FORMATS) ++ if (!codec) ++ return -EINVAL; ++ ++ if (f->width < MIN_W || f->width > codec->max_w || ++ f->height < MIN_H || f->height > codec->max_h) + return -EINVAL; + + f->type = V4L2_FRMIVAL_TYPE_CONTINUOUS; +@@ -2164,8 +2172,8 @@ static void coda_encode_ctrls(struct coda_ctx *ctx) + V4L2_CID_MPEG_VIDEO_H264_CHROMA_QP_INDEX_OFFSET, -12, 12, 1, 0); + v4l2_ctrl_new_std_menu(&ctx->ctrls, &coda_ctrl_ops, + V4L2_CID_MPEG_VIDEO_H264_PROFILE, +- V4L2_MPEG_VIDEO_H264_PROFILE_BASELINE, 0x0, +- V4L2_MPEG_VIDEO_H264_PROFILE_BASELINE); ++ V4L2_MPEG_VIDEO_H264_PROFILE_CONSTRAINED_BASELINE, 0x0, ++ V4L2_MPEG_VIDEO_H264_PROFILE_CONSTRAINED_BASELINE); + if (ctx->dev->devtype->product == CODA_HX4 || + ctx->dev->devtype->product == CODA_7541) { + v4l2_ctrl_new_std_menu(&ctx->ctrls, &coda_ctrl_ops, +@@ -2179,12 +2187,15 @@ static void coda_encode_ctrls(struct coda_ctx *ctx) + if (ctx->dev->devtype->product == CODA_960) { + v4l2_ctrl_new_std_menu(&ctx->ctrls, &coda_ctrl_ops, + V4L2_CID_MPEG_VIDEO_H264_LEVEL, +- V4L2_MPEG_VIDEO_H264_LEVEL_4_0, +- ~((1 << V4L2_MPEG_VIDEO_H264_LEVEL_2_0) | ++ V4L2_MPEG_VIDEO_H264_LEVEL_4_2, ++ ~((1 << V4L2_MPEG_VIDEO_H264_LEVEL_1_0) | ++ (1 << V4L2_MPEG_VIDEO_H264_LEVEL_2_0) | + (1 << 
V4L2_MPEG_VIDEO_H264_LEVEL_3_0) | + (1 << V4L2_MPEG_VIDEO_H264_LEVEL_3_1) | + (1 << V4L2_MPEG_VIDEO_H264_LEVEL_3_2) | +- (1 << V4L2_MPEG_VIDEO_H264_LEVEL_4_0)), ++ (1 << V4L2_MPEG_VIDEO_H264_LEVEL_4_0) | ++ (1 << V4L2_MPEG_VIDEO_H264_LEVEL_4_1) | ++ (1 << V4L2_MPEG_VIDEO_H264_LEVEL_4_2)), + V4L2_MPEG_VIDEO_H264_LEVEL_4_0); + } + v4l2_ctrl_new_std(&ctx->ctrls, &coda_ctrl_ops, +@@ -2246,7 +2257,7 @@ static void coda_decode_ctrls(struct coda_ctx *ctx) + ctx->h264_profile_ctrl = v4l2_ctrl_new_std_menu(&ctx->ctrls, + &coda_ctrl_ops, V4L2_CID_MPEG_VIDEO_H264_PROFILE, + V4L2_MPEG_VIDEO_H264_PROFILE_HIGH, +- ~((1 << V4L2_MPEG_VIDEO_H264_PROFILE_BASELINE) | ++ ~((1 << V4L2_MPEG_VIDEO_H264_PROFILE_CONSTRAINED_BASELINE) | + (1 << V4L2_MPEG_VIDEO_H264_PROFILE_MAIN) | + (1 << V4L2_MPEG_VIDEO_H264_PROFILE_HIGH)), + V4L2_MPEG_VIDEO_H264_PROFILE_HIGH); +diff --git a/drivers/media/platform/exynos4-is/fimc-is.c b/drivers/media/platform/exynos4-is/fimc-is.c +index 64148b7e0d986..9bb14bb2e4987 100644 +--- a/drivers/media/platform/exynos4-is/fimc-is.c ++++ b/drivers/media/platform/exynos4-is/fimc-is.c +@@ -141,7 +141,7 @@ static int fimc_is_enable_clocks(struct fimc_is *is) + dev_err(&is->pdev->dev, "clock %s enable failed\n", + fimc_is_clocks[i]); + for (--i; i >= 0; i--) +- clk_disable(is->clocks[i]); ++ clk_disable_unprepare(is->clocks[i]); + return ret; + } + pr_debug("enabled clock: %s\n", fimc_is_clocks[i]); +diff --git a/drivers/media/platform/exynos4-is/fimc-isp-video.h b/drivers/media/platform/exynos4-is/fimc-isp-video.h +index edcb3a5e3cb90..2dd4ddbc748a1 100644 +--- a/drivers/media/platform/exynos4-is/fimc-isp-video.h ++++ b/drivers/media/platform/exynos4-is/fimc-isp-video.h +@@ -32,7 +32,7 @@ static inline int fimc_isp_video_device_register(struct fimc_isp *isp, + return 0; + } + +-void fimc_isp_video_device_unregister(struct fimc_isp *isp, ++static inline void fimc_isp_video_device_unregister(struct fimc_isp *isp, + enum v4l2_buf_type type) + { + } +diff --git 
a/drivers/media/platform/qcom/venus/hfi.c b/drivers/media/platform/qcom/venus/hfi.c +index 3d8b1284d1f35..68964a80fe619 100644 +--- a/drivers/media/platform/qcom/venus/hfi.c ++++ b/drivers/media/platform/qcom/venus/hfi.c +@@ -104,6 +104,9 @@ int hfi_core_deinit(struct venus_core *core, bool blocking) + mutex_lock(&core->lock); + } + ++ if (!core->ops) ++ goto unlock; ++ + ret = core->ops->core_deinit(core); + + if (!ret) +diff --git a/drivers/media/platform/sti/delta/delta-v4l2.c b/drivers/media/platform/sti/delta/delta-v4l2.c +index 2791107e641bc..29732b49a2cdb 100644 +--- a/drivers/media/platform/sti/delta/delta-v4l2.c ++++ b/drivers/media/platform/sti/delta/delta-v4l2.c +@@ -1862,7 +1862,7 @@ static int delta_probe(struct platform_device *pdev) + if (ret) { + dev_err(delta->dev, "%s failed to initialize firmware ipc channel\n", + DELTA_PREFIX); +- goto err; ++ goto err_pm_disable; + } + + /* register all available decoders */ +@@ -1876,7 +1876,7 @@ static int delta_probe(struct platform_device *pdev) + if (ret) { + dev_err(delta->dev, "%s failed to register V4L2 device\n", + DELTA_PREFIX); +- goto err; ++ goto err_pm_disable; + } + + delta->work_queue = create_workqueue(DELTA_NAME); +@@ -1901,6 +1901,8 @@ err_work_queue: + destroy_workqueue(delta->work_queue); + err_v4l2: + v4l2_device_unregister(&delta->v4l2_dev); ++err_pm_disable: ++ pm_runtime_disable(dev); + err: + return ret; + } +diff --git a/drivers/media/platform/vsp1/vsp1_rpf.c b/drivers/media/platform/vsp1/vsp1_rpf.c +index 85587c1b6a373..75083cb234fe3 100644 +--- a/drivers/media/platform/vsp1/vsp1_rpf.c ++++ b/drivers/media/platform/vsp1/vsp1_rpf.c +@@ -291,11 +291,11 @@ static void rpf_configure_partition(struct vsp1_entity *entity, + + crop.left * fmtinfo->bpp[0] / 8; + + if (format->num_planes > 1) { ++ unsigned int bpl = format->plane_fmt[1].bytesperline; + unsigned int offset; + +- offset = crop.top * format->plane_fmt[1].bytesperline +- + crop.left / fmtinfo->hsub +- * fmtinfo->bpp[1] / 8; ++ 
offset = crop.top / fmtinfo->vsub * bpl ++ + crop.left / fmtinfo->hsub * fmtinfo->bpp[1] / 8; + mem.addr[1] += offset; + mem.addr[2] += offset; + } +diff --git a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c +index 2f00679f65a0a..11e7fcfc3f195 100644 +--- a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c ++++ b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c +@@ -2570,6 +2570,11 @@ struct pvr2_hdw *pvr2_hdw_create(struct usb_interface *intf, + } while (0); + mutex_unlock(&pvr2_unit_mtx); + ++ INIT_WORK(&hdw->workpoll, pvr2_hdw_worker_poll); ++ ++ if (hdw->unit_number == -1) ++ goto fail; ++ + cnt1 = 0; + cnt2 = scnprintf(hdw->name+cnt1,sizeof(hdw->name)-cnt1,"pvrusb2"); + cnt1 += cnt2; +@@ -2581,8 +2586,6 @@ struct pvr2_hdw *pvr2_hdw_create(struct usb_interface *intf, + if (cnt1 >= sizeof(hdw->name)) cnt1 = sizeof(hdw->name)-1; + hdw->name[cnt1] = 0; + +- INIT_WORK(&hdw->workpoll,pvr2_hdw_worker_poll); +- + pvr2_trace(PVR2_TRACE_INIT,"Driver unit number is %d, name is %s", + hdw->unit_number,hdw->name); + +diff --git a/drivers/media/usb/uvc/uvc_v4l2.c b/drivers/media/usb/uvc/uvc_v4l2.c +index 3126ee9e965c9..96ef64b6a232b 100644 +--- a/drivers/media/usb/uvc/uvc_v4l2.c ++++ b/drivers/media/usb/uvc/uvc_v4l2.c +@@ -859,29 +859,31 @@ static int uvc_ioctl_enum_input(struct file *file, void *fh, + struct uvc_video_chain *chain = handle->chain; + const struct uvc_entity *selector = chain->selector; + struct uvc_entity *iterm = NULL; ++ struct uvc_entity *it; + u32 index = input->index; +- int pin = 0; + + if (selector == NULL || + (chain->dev->quirks & UVC_QUIRK_IGNORE_SELECTOR_UNIT)) { + if (index != 0) + return -EINVAL; +- list_for_each_entry(iterm, &chain->entities, chain) { +- if (UVC_ENTITY_IS_ITERM(iterm)) ++ list_for_each_entry(it, &chain->entities, chain) { ++ if (UVC_ENTITY_IS_ITERM(it)) { ++ iterm = it; + break; ++ } + } +- pin = iterm->id; + } else if (index < selector->bNrInPins) { +- pin = selector->baSourceID[index]; +- 
list_for_each_entry(iterm, &chain->entities, chain) { +- if (!UVC_ENTITY_IS_ITERM(iterm)) ++ list_for_each_entry(it, &chain->entities, chain) { ++ if (!UVC_ENTITY_IS_ITERM(it)) + continue; +- if (iterm->id == pin) ++ if (it->id == selector->baSourceID[index]) { ++ iterm = it; + break; ++ } + } + } + +- if (iterm == NULL || iterm->id != pin) ++ if (iterm == NULL) + return -EINVAL; + + memset(input, 0, sizeof(*input)); +diff --git a/drivers/mfd/davinci_voicecodec.c b/drivers/mfd/davinci_voicecodec.c +index e5c8bc998eb4e..965820481f1e1 100644 +--- a/drivers/mfd/davinci_voicecodec.c ++++ b/drivers/mfd/davinci_voicecodec.c +@@ -46,14 +46,12 @@ static int __init davinci_vc_probe(struct platform_device *pdev) + } + clk_enable(davinci_vc->clk); + +- res = platform_get_resource(pdev, IORESOURCE_MEM, 0); +- +- fifo_base = (dma_addr_t)res->start; +- davinci_vc->base = devm_ioremap_resource(&pdev->dev, res); ++ davinci_vc->base = devm_platform_get_and_ioremap_resource(pdev, 0, &res); + if (IS_ERR(davinci_vc->base)) { + ret = PTR_ERR(davinci_vc->base); + goto fail; + } ++ fifo_base = (dma_addr_t)res->start; + + davinci_vc->regmap = devm_regmap_init_mmio(&pdev->dev, + davinci_vc->base, +diff --git a/drivers/mfd/ipaq-micro.c b/drivers/mfd/ipaq-micro.c +index a1d9be82734de..88387c7e74433 100644 +--- a/drivers/mfd/ipaq-micro.c ++++ b/drivers/mfd/ipaq-micro.c +@@ -407,7 +407,7 @@ static int __init micro_probe(struct platform_device *pdev) + micro_reset_comm(micro); + + irq = platform_get_irq(pdev, 0); +- if (!irq) ++ if (irq < 0) + return -EINVAL; + ret = devm_request_irq(&pdev->dev, irq, micro_serial_isr, + IRQF_SHARED, "ipaq-micro", +diff --git a/drivers/misc/cardreader/rtsx_usb.c b/drivers/misc/cardreader/rtsx_usb.c +index a328cab110143..4aef33d07cc36 100644 +--- a/drivers/misc/cardreader/rtsx_usb.c ++++ b/drivers/misc/cardreader/rtsx_usb.c +@@ -667,6 +667,7 @@ static int rtsx_usb_probe(struct usb_interface *intf, + return 0; + + out_init_fail: ++ usb_set_intfdata(ucr->pusb_intf, 
NULL); + usb_free_coherent(ucr->pusb_dev, IOBUF_SIZE, ucr->iobuf, + ucr->iobuf_dma); + return ret; +diff --git a/drivers/misc/lkdtm/usercopy.c b/drivers/misc/lkdtm/usercopy.c +index e172719dd86d0..4617c63b10260 100644 +--- a/drivers/misc/lkdtm/usercopy.c ++++ b/drivers/misc/lkdtm/usercopy.c +@@ -30,12 +30,12 @@ static const unsigned char test_text[] = "This is a test.\n"; + */ + static noinline unsigned char *trick_compiler(unsigned char *stack) + { +- return stack + 0; ++ return stack + unconst; + } + + static noinline unsigned char *do_usercopy_stack_callee(int value) + { +- unsigned char buf[32]; ++ unsigned char buf[128]; + int i; + + /* Exercise stack to avoid everything living in registers. */ +@@ -43,7 +43,12 @@ static noinline unsigned char *do_usercopy_stack_callee(int value) + buf[i] = value & 0xff; + } + +- return trick_compiler(buf); ++ /* ++ * Put the target buffer in the middle of stack allocation ++ * so that we don't step on future stack users regardless ++ * of stack growth direction. 
++ */ ++ return trick_compiler(&buf[(128/2)-32]); + } + + static noinline void do_usercopy_stack(bool to_user, bool bad_frame) +@@ -66,6 +71,12 @@ static noinline void do_usercopy_stack(bool to_user, bool bad_frame) + bad_stack -= sizeof(unsigned long); + } + ++#ifdef ARCH_HAS_CURRENT_STACK_POINTER ++ pr_info("stack : %px\n", (void *)current_stack_pointer); ++#endif ++ pr_info("good_stack: %px-%px\n", good_stack, good_stack + sizeof(good_stack)); ++ pr_info("bad_stack : %px-%px\n", bad_stack, bad_stack + sizeof(good_stack)); ++ + user_addr = vm_mmap(NULL, 0, PAGE_SIZE, + PROT_READ | PROT_WRITE | PROT_EXEC, + MAP_ANONYMOUS | MAP_PRIVATE, 0); +diff --git a/drivers/misc/ocxl/file.c b/drivers/misc/ocxl/file.c +index 4d1b44de14921..c742ab02ae186 100644 +--- a/drivers/misc/ocxl/file.c ++++ b/drivers/misc/ocxl/file.c +@@ -558,7 +558,9 @@ int ocxl_file_register_afu(struct ocxl_afu *afu) + + err_unregister: + ocxl_sysfs_unregister_afu(info); // safe to call even if register failed ++ free_minor(info); + device_unregister(&info->dev); ++ return rc; + err_put: + ocxl_afu_put(afu); + free_minor(info); +diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c +index 709f117fd5772..482e01ece0b7f 100644 +--- a/drivers/mmc/core/block.c ++++ b/drivers/mmc/core/block.c +@@ -1492,8 +1492,7 @@ void mmc_blk_cqe_recovery(struct mmc_queue *mq) + err = mmc_cqe_recovery(host); + if (err) + mmc_blk_reset(mq->blkdata, host, MMC_BLK_CQE_RECOVERY); +- else +- mmc_blk_reset_success(mq->blkdata, MMC_BLK_CQE_RECOVERY); ++ mmc_blk_reset_success(mq->blkdata, MMC_BLK_CQE_RECOVERY); + + pr_debug("%s: CQE recovery done\n", mmc_hostname(host)); + } +diff --git a/drivers/mmc/host/jz4740_mmc.c b/drivers/mmc/host/jz4740_mmc.c +index f816c06ef9160..a316c912a118f 100644 +--- a/drivers/mmc/host/jz4740_mmc.c ++++ b/drivers/mmc/host/jz4740_mmc.c +@@ -224,6 +224,26 @@ static int jz4740_mmc_acquire_dma_channels(struct jz4740_mmc_host *host) + return PTR_ERR(host->dma_rx); + } + ++ /* ++ * Limit the 
maximum segment size in any SG entry according to ++ * the parameters of the DMA engine device. ++ */ ++ if (host->dma_tx) { ++ struct device *dev = host->dma_tx->device->dev; ++ unsigned int max_seg_size = dma_get_max_seg_size(dev); ++ ++ if (max_seg_size < host->mmc->max_seg_size) ++ host->mmc->max_seg_size = max_seg_size; ++ } ++ ++ if (host->dma_rx) { ++ struct device *dev = host->dma_rx->device->dev; ++ unsigned int max_seg_size = dma_get_max_seg_size(dev); ++ ++ if (max_seg_size < host->mmc->max_seg_size) ++ host->mmc->max_seg_size = max_seg_size; ++ } ++ + return 0; + } + +diff --git a/drivers/mtd/chips/cfi_cmdset_0002.c b/drivers/mtd/chips/cfi_cmdset_0002.c +index 9c98ddef0097d..006221284d0ae 100644 +--- a/drivers/mtd/chips/cfi_cmdset_0002.c ++++ b/drivers/mtd/chips/cfi_cmdset_0002.c +@@ -59,6 +59,10 @@ + #define CFI_SR_WBASB BIT(3) + #define CFI_SR_SLSB BIT(1) + ++enum cfi_quirks { ++ CFI_QUIRK_DQ_TRUE_DATA = BIT(0), ++}; ++ + static int cfi_amdstd_read (struct mtd_info *, loff_t, size_t, size_t *, u_char *); + static int cfi_amdstd_write_words(struct mtd_info *, loff_t, size_t, size_t *, const u_char *); + #if !FORCE_WORD_WRITE +@@ -432,6 +436,15 @@ static void fixup_s29ns512p_sectors(struct mtd_info *mtd) + mtd->name); + } + ++static void fixup_quirks(struct mtd_info *mtd) ++{ ++ struct map_info *map = mtd->priv; ++ struct cfi_private *cfi = map->fldrv_priv; ++ ++ if (cfi->mfr == CFI_MFR_AMD && cfi->id == 0x0c01) ++ cfi->quirks |= CFI_QUIRK_DQ_TRUE_DATA; ++} ++ + /* Used to fix CFI-Tables of chips without Extended Query Tables */ + static struct cfi_fixup cfi_nopri_fixup_table[] = { + { CFI_MFR_SST, 0x234a, fixup_sst39vf }, /* SST39VF1602 */ +@@ -470,6 +483,7 @@ static struct cfi_fixup cfi_fixup_table[] = { + #if !FORCE_WORD_WRITE + { CFI_MFR_ANY, CFI_ID_ANY, fixup_use_write_buffers }, + #endif ++ { CFI_MFR_ANY, CFI_ID_ANY, fixup_quirks }, + { 0, 0, NULL } + }; + static struct cfi_fixup jedec_fixup_table[] = { +@@ -798,21 +812,25 @@ static struct 
mtd_info *cfi_amdstd_setup(struct mtd_info *mtd) + } + + /* +- * Return true if the chip is ready. ++ * Return true if the chip is ready and has the correct value. + * + * Ready is one of: read mode, query mode, erase-suspend-read mode (in any + * non-suspended sector) and is indicated by no toggle bits toggling. + * ++ * Error are indicated by toggling bits or bits held with the wrong value, ++ * or with bits toggling. ++ * + * Note that anything more complicated than checking if no bits are toggling + * (including checking DQ5 for an error status) is tricky to get working + * correctly and is therefore not done (particularly with interleaved chips + * as each chip must be checked independently of the others). + */ + static int __xipram chip_ready(struct map_info *map, struct flchip *chip, +- unsigned long addr) ++ unsigned long addr, map_word *expected) + { + struct cfi_private *cfi = map->fldrv_priv; + map_word d, t; ++ int ret; + + if (cfi_use_status_reg(cfi)) { + map_word ready = CMD(CFI_SR_DRB); +@@ -822,57 +840,32 @@ static int __xipram chip_ready(struct map_info *map, struct flchip *chip, + */ + cfi_send_gen_cmd(0x70, cfi->addr_unlock1, chip->start, map, cfi, + cfi->device_type, NULL); +- d = map_read(map, addr); ++ t = map_read(map, addr); + +- return map_word_andequal(map, d, ready, ready); ++ return map_word_andequal(map, t, ready, ready); + } + + d = map_read(map, addr); + t = map_read(map, addr); + +- return map_word_equal(map, d, t); ++ ret = map_word_equal(map, d, t); ++ ++ if (!ret || !expected) ++ return ret; ++ ++ return map_word_equal(map, t, *expected); + } + +-/* +- * Return true if the chip is ready and has the correct value. +- * +- * Ready is one of: read mode, query mode, erase-suspend-read mode (in any +- * non-suspended sector) and it is indicated by no bits toggling. +- * +- * Error are indicated by toggling bits or bits held with the wrong value, +- * or with bits toggling. 
+- * +- * Note that anything more complicated than checking if no bits are toggling +- * (including checking DQ5 for an error status) is tricky to get working +- * correctly and is therefore not done (particularly with interleaved chips +- * as each chip must be checked independently of the others). +- * +- */ + static int __xipram chip_good(struct map_info *map, struct flchip *chip, +- unsigned long addr, map_word expected) ++ unsigned long addr, map_word *expected) + { + struct cfi_private *cfi = map->fldrv_priv; +- map_word oldd, curd; +- +- if (cfi_use_status_reg(cfi)) { +- map_word ready = CMD(CFI_SR_DRB); +- +- /* +- * For chips that support status register, check device +- * ready bit +- */ +- cfi_send_gen_cmd(0x70, cfi->addr_unlock1, chip->start, map, cfi, +- cfi->device_type, NULL); +- curd = map_read(map, addr); +- +- return map_word_andequal(map, curd, ready, ready); +- } ++ map_word *datum = expected; + +- oldd = map_read(map, addr); +- curd = map_read(map, addr); ++ if (cfi->quirks & CFI_QUIRK_DQ_TRUE_DATA) ++ datum = NULL; + +- return map_word_equal(map, oldd, curd) && +- map_word_equal(map, curd, expected); ++ return chip_ready(map, chip, addr, datum); + } + + static int get_chip(struct map_info *map, struct flchip *chip, unsigned long adr, int mode) +@@ -889,7 +882,7 @@ static int get_chip(struct map_info *map, struct flchip *chip, unsigned long adr + + case FL_STATUS: + for (;;) { +- if (chip_ready(map, chip, adr)) ++ if (chip_ready(map, chip, adr, NULL)) + break; + + if (time_after(jiffies, timeo)) { +@@ -927,7 +920,7 @@ static int get_chip(struct map_info *map, struct flchip *chip, unsigned long adr + chip->state = FL_ERASE_SUSPENDING; + chip->erase_suspended = 1; + for (;;) { +- if (chip_ready(map, chip, adr)) ++ if (chip_ready(map, chip, adr, NULL)) + break; + + if (time_after(jiffies, timeo)) { +@@ -1459,7 +1452,7 @@ static int do_otp_lock(struct map_info *map, struct flchip *chip, loff_t adr, + /* wait for chip to become ready */ + timeo = 
jiffies + msecs_to_jiffies(2); + for (;;) { +- if (chip_ready(map, chip, adr)) ++ if (chip_ready(map, chip, adr, NULL)) + break; + + if (time_after(jiffies, timeo)) { +@@ -1695,7 +1688,7 @@ static int __xipram do_write_oneword_once(struct map_info *map, + * "chip_good" to avoid the failure due to scheduling. + */ + if (time_after(jiffies, timeo) && +- !chip_good(map, chip, adr, datum)) { ++ !chip_good(map, chip, adr, &datum)) { + xip_enable(map, chip, adr); + printk(KERN_WARNING "MTD %s(): software timeout\n", __func__); + xip_disable(map, chip, adr); +@@ -1703,7 +1696,7 @@ static int __xipram do_write_oneword_once(struct map_info *map, + break; + } + +- if (chip_good(map, chip, adr, datum)) { ++ if (chip_good(map, chip, adr, &datum)) { + if (cfi_check_err_status(map, chip, adr)) + ret = -EIO; + break; +@@ -1975,14 +1968,14 @@ static int __xipram do_write_buffer_wait(struct map_info *map, + * "chip_good" to avoid the failure due to scheduling. + */ + if (time_after(jiffies, timeo) && +- !chip_good(map, chip, adr, datum)) { ++ !chip_good(map, chip, adr, &datum)) { + pr_err("MTD %s(): software timeout, address:0x%.8lx.\n", + __func__, adr); + ret = -EIO; + break; + } + +- if (chip_good(map, chip, adr, datum)) { ++ if (chip_good(map, chip, adr, &datum)) { + if (cfi_check_err_status(map, chip, adr)) + ret = -EIO; + break; +@@ -2191,7 +2184,7 @@ static int cfi_amdstd_panic_wait(struct map_info *map, struct flchip *chip, + * If the driver thinks the chip is idle, and no toggle bits + * are changing, then the chip is actually idle for sure. 
+ */ +- if (chip->state == FL_READY && chip_ready(map, chip, adr)) ++ if (chip->state == FL_READY && chip_ready(map, chip, adr, NULL)) + return 0; + + /* +@@ -2208,7 +2201,7 @@ static int cfi_amdstd_panic_wait(struct map_info *map, struct flchip *chip, + + /* wait for the chip to become ready */ + for (i = 0; i < jiffies_to_usecs(timeo); i++) { +- if (chip_ready(map, chip, adr)) ++ if (chip_ready(map, chip, adr, NULL)) + return 0; + + udelay(1); +@@ -2272,13 +2265,13 @@ retry: + map_write(map, datum, adr); + + for (i = 0; i < jiffies_to_usecs(uWriteTimeout); i++) { +- if (chip_ready(map, chip, adr)) ++ if (chip_ready(map, chip, adr, NULL)) + break; + + udelay(1); + } + +- if (!chip_good(map, chip, adr, datum) || ++ if (!chip_ready(map, chip, adr, &datum) || + cfi_check_err_status(map, chip, adr)) { + /* reset on all failures. */ + map_write(map, CMD(0xF0), chip->start); +@@ -2420,6 +2413,7 @@ static int __xipram do_erase_chip(struct map_info *map, struct flchip *chip) + DECLARE_WAITQUEUE(wait, current); + int ret = 0; + int retry_cnt = 0; ++ map_word datum = map_word_ff(map); + + adr = cfi->addr_unlock1; + +@@ -2474,7 +2468,7 @@ static int __xipram do_erase_chip(struct map_info *map, struct flchip *chip) + chip->erase_suspended = 0; + } + +- if (chip_good(map, chip, adr, map_word_ff(map))) { ++ if (chip_ready(map, chip, adr, &datum)) { + if (cfi_check_err_status(map, chip, adr)) + ret = -EIO; + break; +@@ -2519,6 +2513,7 @@ static int __xipram do_erase_oneblock(struct map_info *map, struct flchip *chip, + DECLARE_WAITQUEUE(wait, current); + int ret = 0; + int retry_cnt = 0; ++ map_word datum = map_word_ff(map); + + adr += chip->start; + +@@ -2573,7 +2568,7 @@ static int __xipram do_erase_oneblock(struct map_info *map, struct flchip *chip, + chip->erase_suspended = 0; + } + +- if (chip_good(map, chip, adr, map_word_ff(map))) { ++ if (chip_ready(map, chip, adr, &datum)) { + if (cfi_check_err_status(map, chip, adr)) + ret = -EIO; + break; +@@ -2767,7 +2762,7 @@ static 
int __maybe_unused do_ppb_xxlock(struct map_info *map, + */ + timeo = jiffies + msecs_to_jiffies(2000); /* 2s max (un)locking */ + for (;;) { +- if (chip_ready(map, chip, adr)) ++ if (chip_ready(map, chip, adr, NULL)) + break; + + if (time_after(jiffies, timeo)) { +diff --git a/drivers/mtd/ubi/vmt.c b/drivers/mtd/ubi/vmt.c +index 1bc7b3a056046..6ea95ade4ca6b 100644 +--- a/drivers/mtd/ubi/vmt.c ++++ b/drivers/mtd/ubi/vmt.c +@@ -309,7 +309,6 @@ out_mapping: + ubi->volumes[vol_id] = NULL; + ubi->vol_count -= 1; + spin_unlock(&ubi->volumes_lock); +- ubi_eba_destroy_table(eba_tbl); + out_acc: + spin_lock(&ubi->volumes_lock); + ubi->rsvd_pebs -= vol->reserved_pebs; +diff --git a/drivers/net/can/xilinx_can.c b/drivers/net/can/xilinx_can.c +index 008d3d492bd1c..be3811311db2d 100644 +--- a/drivers/net/can/xilinx_can.c ++++ b/drivers/net/can/xilinx_can.c +@@ -239,7 +239,7 @@ static const struct can_bittiming_const xcan_bittiming_const_canfd = { + }; + + /* AXI CANFD Data Bittiming constants as per AXI CANFD 1.0 specs */ +-static struct can_bittiming_const xcan_data_bittiming_const_canfd = { ++static const struct can_bittiming_const xcan_data_bittiming_const_canfd = { + .name = DRIVER_NAME, + .tseg1_min = 1, + .tseg1_max = 16, +@@ -265,7 +265,7 @@ static const struct can_bittiming_const xcan_bittiming_const_canfd2 = { + }; + + /* AXI CANFD 2.0 Data Bittiming constants as per AXI CANFD 2.0 spec */ +-static struct can_bittiming_const xcan_data_bittiming_const_canfd2 = { ++static const struct can_bittiming_const xcan_data_bittiming_const_canfd2 = { + .name = DRIVER_NAME, + .tseg1_min = 1, + .tseg1_max = 32, +diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c +index 0c191d395f8f3..b546002e5fd41 100644 +--- a/drivers/net/dsa/lantiq_gswip.c ++++ b/drivers/net/dsa/lantiq_gswip.c +@@ -1958,8 +1958,10 @@ static int gswip_gphy_fw_list(struct gswip_priv *priv, + for_each_available_child_of_node(gphy_fw_list_np, gphy_fw_np) { + err = gswip_gphy_fw_probe(priv, 
&priv->gphy_fw[i], + gphy_fw_np, i); +- if (err) ++ if (err) { ++ of_node_put(gphy_fw_np); + goto remove_gphy; ++ } + i++; + } + +diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c +index 87d28ef82559e..b336ed071fa89 100644 +--- a/drivers/net/dsa/mv88e6xxx/chip.c ++++ b/drivers/net/dsa/mv88e6xxx/chip.c +@@ -2910,6 +2910,7 @@ static int mv88e6xxx_mdios_register(struct mv88e6xxx_chip *chip, + */ + child = of_get_child_by_name(np, "mdio"); + err = mv88e6xxx_mdio_register(chip, child, false); ++ of_node_put(child); + if (err) + return err; + +diff --git a/drivers/net/ethernet/altera/altera_tse_main.c b/drivers/net/ethernet/altera/altera_tse_main.c +index 1f8c3b669dc14..f36536114790b 100644 +--- a/drivers/net/ethernet/altera/altera_tse_main.c ++++ b/drivers/net/ethernet/altera/altera_tse_main.c +@@ -163,7 +163,8 @@ static int altera_tse_mdio_create(struct net_device *dev, unsigned int id) + mdio = mdiobus_alloc(); + if (mdio == NULL) { + netdev_err(dev, "Error allocating MDIO bus\n"); +- return -ENOMEM; ++ ret = -ENOMEM; ++ goto put_node; + } + + mdio->name = ALTERA_TSE_RESOURCE_NAME; +@@ -180,6 +181,7 @@ static int altera_tse_mdio_create(struct net_device *dev, unsigned int id) + mdio->id); + goto out_free_mdio; + } ++ of_node_put(mdio_node); + + if (netif_msg_drv(priv)) + netdev_info(dev, "MDIO bus %s: created\n", mdio->id); +@@ -189,6 +191,8 @@ static int altera_tse_mdio_create(struct net_device *dev, unsigned int id) + out_free_mdio: + mdiobus_free(mdio); + mdio = NULL; ++put_node: ++ of_node_put(mdio_node); + return ret; + } + +diff --git a/drivers/net/ethernet/broadcom/Makefile b/drivers/net/ethernet/broadcom/Makefile +index 7046ad6d3d0e3..ac50da49ca770 100644 +--- a/drivers/net/ethernet/broadcom/Makefile ++++ b/drivers/net/ethernet/broadcom/Makefile +@@ -16,3 +16,8 @@ obj-$(CONFIG_BGMAC_BCMA) += bgmac-bcma.o bgmac-bcma-mdio.o + obj-$(CONFIG_BGMAC_PLATFORM) += bgmac-platform.o + obj-$(CONFIG_SYSTEMPORT) += bcmsysport.o + 
obj-$(CONFIG_BNXT) += bnxt/ ++ ++# FIXME: temporarily silence -Warray-bounds on non W=1+ builds ++ifndef KBUILD_EXTRA_WARN ++CFLAGS_tg3.o += -Wno-array-bounds ++endif +diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c +index 47a920128760e..cf5c2b9465eba 100644 +--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c ++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c +@@ -1157,9 +1157,9 @@ static int ixgbe_update_vf_xcast_mode(struct ixgbe_adapter *adapter, + + switch (xcast_mode) { + case IXGBEVF_XCAST_MODE_NONE: +- disable = IXGBE_VMOLR_BAM | IXGBE_VMOLR_ROMPE | ++ disable = IXGBE_VMOLR_ROMPE | + IXGBE_VMOLR_MPE | IXGBE_VMOLR_UPE | IXGBE_VMOLR_VPE; +- enable = 0; ++ enable = IXGBE_VMOLR_BAM; + break; + case IXGBEVF_XCAST_MODE_MULTI: + disable = IXGBE_VMOLR_MPE | IXGBE_VMOLR_UPE | IXGBE_VMOLR_VPE; +@@ -1181,9 +1181,9 @@ static int ixgbe_update_vf_xcast_mode(struct ixgbe_adapter *adapter, + return -EPERM; + } + +- disable = 0; ++ disable = IXGBE_VMOLR_VPE; + enable = IXGBE_VMOLR_BAM | IXGBE_VMOLR_ROMPE | +- IXGBE_VMOLR_MPE | IXGBE_VMOLR_UPE | IXGBE_VMOLR_VPE; ++ IXGBE_VMOLR_MPE | IXGBE_VMOLR_UPE; + break; + default: + return -EOPNOTSUPP; +diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c +index 3351d4f9363af..5dce4cd60f58d 100644 +--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c ++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c +@@ -1962,6 +1962,9 @@ static int mtk_hwlro_get_fdir_entry(struct net_device *dev, + struct ethtool_rx_flow_spec *fsp = + (struct ethtool_rx_flow_spec *)&cmd->fs; + ++ if (fsp->location >= ARRAY_SIZE(mac->hwlro_ip)) ++ return -EINVAL; ++ + /* only tcp dst ipv4 is meaningful, others are meaningless */ + fsp->flow_type = TCP_V4_FLOW; + fsp->h_u.tcp_ip4_spec.ip4dst = ntohl(mac->hwlro_ip[fsp->location]); +diff --git a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c +index 
dd029d91bbc2d..b711148a9d503 100644 +--- a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c ++++ b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c +@@ -2083,7 +2083,7 @@ static int mlx4_en_get_module_eeprom(struct net_device *dev, + en_err(priv, + "mlx4_get_module_info i(%d) offset(%d) bytes_to_read(%d) - FAILED (0x%x)\n", + i, offset, ee->len - i, ret); +- return 0; ++ return ret; + } + + i += ret; +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c +index 97359417c6e7f..f8144ce7e476d 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c +@@ -673,6 +673,9 @@ static void mlx5_fw_tracer_handle_traces(struct work_struct *work) + if (!tracer->owner) + return; + ++ if (unlikely(!tracer->str_db.loaded)) ++ goto arm; ++ + block_count = tracer->buff.size / TRACER_BLOCK_SIZE_BYTE; + start_offset = tracer->buff.consumer_index * TRACER_BLOCK_SIZE_BYTE; + +@@ -730,6 +733,7 @@ static void mlx5_fw_tracer_handle_traces(struct work_struct *work) + &tmp_trace_block[TRACES_PER_BLOCK - 1]); + } + ++arm: + mlx5_fw_tracer_arm(dev); + } + +@@ -1084,8 +1088,7 @@ static int fw_tracer_event(struct notifier_block *nb, unsigned long action, void + queue_work(tracer->work_queue, &tracer->ownership_change_work); + break; + case MLX5_TRACER_SUBTYPE_TRACES_AVAILABLE: +- if (likely(tracer->str_db.loaded)) +- queue_work(tracer->work_queue, &tracer->handle_traces_work); ++ queue_work(tracer->work_queue, &tracer->handle_traces_work); + break; + default: + mlx5_core_dbg(dev, "FWTracer: Event with unrecognized subtype: sub_type %d\n", +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +index 73291051808f9..35630b538c826 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +@@ -4638,6 +4638,11 @@ static int 
mlx5e_xdp_set(struct net_device *netdev, struct bpf_prog *prog) + + unlock: + mutex_unlock(&priv->state_lock); ++ ++ /* Need to fix some features. */ ++ if (!err) ++ netdev_update_features(netdev); ++ + return err; + } + +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c +index 5baf2c666d293..41087c0618c11 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c +@@ -1450,9 +1450,22 @@ static struct mlx5_flow_rule *find_flow_rule(struct fs_fte *fte, + return NULL; + } + +-static bool check_conflicting_actions(u32 action1, u32 action2) ++static bool check_conflicting_actions_vlan(const struct mlx5_fs_vlan *vlan0, ++ const struct mlx5_fs_vlan *vlan1) + { +- u32 xored_actions = action1 ^ action2; ++ return vlan0->ethtype != vlan1->ethtype || ++ vlan0->vid != vlan1->vid || ++ vlan0->prio != vlan1->prio; ++} ++ ++static bool check_conflicting_actions(const struct mlx5_flow_act *act1, ++ const struct mlx5_flow_act *act2) ++{ ++ u32 action1 = act1->action; ++ u32 action2 = act2->action; ++ u32 xored_actions; ++ ++ xored_actions = action1 ^ action2; + + /* if one rule only wants to count, it's ok */ + if (action1 == MLX5_FLOW_CONTEXT_ACTION_COUNT || +@@ -1469,6 +1482,22 @@ static bool check_conflicting_actions(u32 action1, u32 action2) + MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH_2)) + return true; + ++ if (action1 & MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT && ++ act1->pkt_reformat != act2->pkt_reformat) ++ return true; ++ ++ if (action1 & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR && ++ act1->modify_hdr != act2->modify_hdr) ++ return true; ++ ++ if (action1 & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH && ++ check_conflicting_actions_vlan(&act1->vlan[0], &act2->vlan[0])) ++ return true; ++ ++ if (action1 & MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH_2 && ++ check_conflicting_actions_vlan(&act1->vlan[1], &act2->vlan[1])) ++ return true; ++ + return false; + } + +@@ -1476,7 +1505,7 
@@ static int check_conflicting_ftes(struct fs_fte *fte, + const struct mlx5_flow_context *flow_context, + const struct mlx5_flow_act *flow_act) + { +- if (check_conflicting_actions(flow_act->action, fte->action.action)) { ++ if (check_conflicting_actions(flow_act, &fte->action)) { + mlx5_core_warn(get_dev(&fte->node), + "Found two FTEs with conflicting actions\n"); + return -EEXIST; +@@ -1937,16 +1966,16 @@ void mlx5_del_flow_rules(struct mlx5_flow_handle *handle) + down_write_ref_node(&fte->node, false); + for (i = handle->num_rules - 1; i >= 0; i--) + tree_remove_node(&handle->rule[i]->node, true); +- if (fte->dests_size) { +- if (fte->modify_mask) +- modify_fte(fte); +- up_write_ref_node(&fte->node, false); +- } else if (list_empty(&fte->node.children)) { ++ if (list_empty(&fte->node.children)) { + del_hw_fte(&fte->node); + /* Avoid double call to del_hw_fte */ + fte->node.del_hw_func = NULL; + up_write_ref_node(&fte->node, false); + tree_put_node(&fte->node, false); ++ } else if (fte->dests_size) { ++ if (fte->modify_mask) ++ modify_fte(fte); ++ up_write_ref_node(&fte->node, false); + } else { + up_write_ref_node(&fte->node, false); + } +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c +index 348f02e336f68..d643685067541 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c +@@ -43,11 +43,10 @@ static int set_miss_action(struct mlx5_flow_root_namespace *ns, + err = mlx5dr_table_set_miss_action(ft->fs_dr_table.dr_table, action); + if (err && action) { + err = mlx5dr_action_destroy(action); +- if (err) { +- action = NULL; +- mlx5_core_err(ns->dev, "Failed to destroy action (%d)\n", +- err); +- } ++ if (err) ++ mlx5_core_err(ns->dev, ++ "Failed to destroy action (%d)\n", err); ++ action = NULL; + } + ft->fs_dr_table.miss_action = action; + if (old_miss_action) { +diff --git 
a/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c +index 21296fa7f7fbf..bf51ed94952c5 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c ++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c +@@ -227,8 +227,6 @@ static int mlxsw_sp_dcbnl_ieee_setets(struct net_device *dev, + static int mlxsw_sp_dcbnl_app_validate(struct net_device *dev, + struct dcb_app *app) + { +- int prio; +- + if (app->priority >= IEEE_8021QAZ_MAX_TCS) { + netdev_err(dev, "APP entry with priority value %u is invalid\n", + app->priority); +@@ -242,17 +240,6 @@ static int mlxsw_sp_dcbnl_app_validate(struct net_device *dev, + app->protocol); + return -EINVAL; + } +- +- /* Warn about any DSCP APP entries with the same PID. */ +- prio = fls(dcb_ieee_getapp_mask(dev, app)); +- if (prio--) { +- if (prio < app->priority) +- netdev_warn(dev, "Choosing priority %d for DSCP %d in favor of previously-active value of %d\n", +- app->priority, app->protocol, prio); +- else if (prio > app->priority) +- netdev_warn(dev, "Ignoring new priority %d for DSCP %d in favor of current value of %d\n", +- app->priority, app->protocol, prio); +- } + break; + + case IEEE_8021QAZ_APP_SEL_ETHERTYPE: +diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c +index 89e578e25ff8f..10857914c552b 100644 +--- a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c ++++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c +@@ -266,8 +266,6 @@ nfp_net_get_link_ksettings(struct net_device *netdev, + + /* Init to unknowns */ + ethtool_link_ksettings_add_link_mode(cmd, supported, FIBRE); +- ethtool_link_ksettings_add_link_mode(cmd, supported, Pause); +- ethtool_link_ksettings_add_link_mode(cmd, advertising, Pause); + cmd->base.port = PORT_OTHER; + cmd->base.speed = SPEED_UNKNOWN; + cmd->base.duplex = DUPLEX_UNKNOWN; +@@ -275,6 +273,8 @@ nfp_net_get_link_ksettings(struct net_device *netdev, + port = 
nfp_port_from_netdev(netdev); + eth_port = nfp_port_get_eth_port(port); + if (eth_port) { ++ ethtool_link_ksettings_add_link_mode(cmd, supported, Pause); ++ ethtool_link_ksettings_add_link_mode(cmd, advertising, Pause); + cmd->base.autoneg = eth_port->aneg != NFP_ANEG_DISABLED ? + AUTONEG_ENABLE : AUTONEG_DISABLE; + nfp_net_set_fec_link_mode(eth_port, cmd); +diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c +index b0a439248ff69..05c24db507a2c 100644 +--- a/drivers/net/phy/mdio_bus.c ++++ b/drivers/net/phy/mdio_bus.c +@@ -753,7 +753,6 @@ int __init mdio_bus_init(void) + + return ret; + } +-EXPORT_SYMBOL_GPL(mdio_bus_init); + + #if IS_ENABLED(CONFIG_PHYLIB) + void mdio_bus_exit(void) +diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c +index 18cc5e4280e83..721153dcfd15a 100644 +--- a/drivers/net/phy/micrel.c ++++ b/drivers/net/phy/micrel.c +@@ -282,7 +282,7 @@ static int kszphy_config_reset(struct phy_device *phydev) + } + } + +- if (priv->led_mode >= 0) ++ if (priv->type && priv->led_mode >= 0) + kszphy_setup_led(phydev, priv->type->led_mode_reg, priv->led_mode); + + return 0; +@@ -298,10 +298,10 @@ static int kszphy_config_init(struct phy_device *phydev) + + type = priv->type; + +- if (type->has_broadcast_disable) ++ if (type && type->has_broadcast_disable) + kszphy_broadcast_disable(phydev); + +- if (type->has_nand_tree_disable) ++ if (type && type->has_nand_tree_disable) + kszphy_nand_tree_disable(phydev); + + return kszphy_config_reset(phydev); +@@ -939,7 +939,7 @@ static int kszphy_probe(struct phy_device *phydev) + + priv->type = type; + +- if (type->led_mode_reg) { ++ if (type && type->led_mode_reg) { + ret = of_property_read_u32(np, "micrel,led-mode", + &priv->led_mode); + if (ret) +@@ -960,7 +960,8 @@ static int kszphy_probe(struct phy_device *phydev) + unsigned long rate = clk_get_rate(clk); + bool rmii_ref_clk_sel_25_mhz; + +- priv->rmii_ref_clk_sel = type->has_rmii_ref_clk_sel; ++ if (type) ++ priv->rmii_ref_clk_sel = 
type->has_rmii_ref_clk_sel; + rmii_ref_clk_sel_25_mhz = of_property_read_bool(np, + "micrel,rmii-reference-clock-select-25-mhz"); + +diff --git a/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c b/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c +index b0a4ca3559fd8..abed1effd95ca 100644 +--- a/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c ++++ b/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c +@@ -5615,7 +5615,7 @@ unsigned int ar9003_get_paprd_scale_factor(struct ath_hw *ah, + + static u8 ar9003_get_eepmisc(struct ath_hw *ah) + { +- return ah->eeprom.map4k.baseEepHeader.eepMisc; ++ return ah->eeprom.ar9300_eep.baseEepHeader.opCapFlags.eepMisc; + } + + const struct eeprom_ops eep_ar9300_ops = { +diff --git a/drivers/net/wireless/ath/ath9k/ar9003_phy.h b/drivers/net/wireless/ath/ath9k/ar9003_phy.h +index a171dbb29fbb6..ad949eb02f3d2 100644 +--- a/drivers/net/wireless/ath/ath9k/ar9003_phy.h ++++ b/drivers/net/wireless/ath/ath9k/ar9003_phy.h +@@ -720,7 +720,7 @@ + #define AR_CH0_TOP2 (AR_SREV_9300(ah) ? 0x1628c : \ + (AR_SREV_9462(ah) ? 0x16290 : 0x16284)) + #define AR_CH0_TOP2_XPABIASLVL (AR_SREV_9561(ah) ? 0x1e00 : 0xf000) +-#define AR_CH0_TOP2_XPABIASLVL_S 12 ++#define AR_CH0_TOP2_XPABIASLVL_S (AR_SREV_9561(ah) ? 9 : 12) + + #define AR_CH0_XTAL (AR_SREV_9300(ah) ? 0x16294 : \ + ((AR_SREV_9462(ah) || AR_SREV_9565(ah)) ? 
0x16298 : \ +diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c +index 628f45c8c06f2..eeaf63de71bfd 100644 +--- a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c ++++ b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c +@@ -1005,6 +1005,14 @@ static bool ath9k_rx_prepare(struct ath9k_htc_priv *priv, + goto rx_next; + } + ++ if (rxstatus->rs_keyix >= ATH_KEYMAX && ++ rxstatus->rs_keyix != ATH9K_RXKEYIX_INVALID) { ++ ath_dbg(common, ANY, ++ "Invalid keyix, dropping (keyix: %d)\n", ++ rxstatus->rs_keyix); ++ goto rx_next; ++ } ++ + /* Get the RX status information */ + + memset(rx_status, 0, sizeof(struct ieee80211_rx_status)); +diff --git a/drivers/net/wireless/ath/carl9170/tx.c b/drivers/net/wireless/ath/carl9170/tx.c +index 2407931440edb..dfab6be1080cb 100644 +--- a/drivers/net/wireless/ath/carl9170/tx.c ++++ b/drivers/net/wireless/ath/carl9170/tx.c +@@ -1557,6 +1557,9 @@ static struct carl9170_vif_info *carl9170_pick_beaconing_vif(struct ar9170 *ar) + goto out; + } + } while (ar->beacon_enabled && i--); ++ ++ /* no entry found in list */ ++ return NULL; + } + + out: +diff --git a/drivers/net/wireless/broadcom/b43/phy_n.c b/drivers/net/wireless/broadcom/b43/phy_n.c +index 32ce1b42ce08b..0ef62ef77af64 100644 +--- a/drivers/net/wireless/broadcom/b43/phy_n.c ++++ b/drivers/net/wireless/broadcom/b43/phy_n.c +@@ -582,7 +582,7 @@ static void b43_nphy_adjust_lna_gain_table(struct b43_wldev *dev) + u16 data[4]; + s16 gain[2]; + u16 minmax[2]; +- static const u16 lna_gain[4] = { -2, 10, 19, 25 }; ++ static const s16 lna_gain[4] = { -2, 10, 19, 25 }; + + if (nphy->hang_avoid) + b43_nphy_stay_in_carrier_search(dev, 1); +diff --git a/drivers/net/wireless/broadcom/b43legacy/phy.c b/drivers/net/wireless/broadcom/b43legacy/phy.c +index a659259bc51aa..6e76055e136d2 100644 +--- a/drivers/net/wireless/broadcom/b43legacy/phy.c ++++ b/drivers/net/wireless/broadcom/b43legacy/phy.c +@@ -1123,7 +1123,7 @@ void 
b43legacy_phy_lo_b_measure(struct b43legacy_wldev *dev) + struct b43legacy_phy *phy = &dev->phy; + u16 regstack[12] = { 0 }; + u16 mls; +- u16 fval; ++ s16 fval; + int i; + int j; + +diff --git a/drivers/net/wireless/intel/ipw2x00/libipw_tx.c b/drivers/net/wireless/intel/ipw2x00/libipw_tx.c +index d9baa2fa603b2..e4c60caa6543c 100644 +--- a/drivers/net/wireless/intel/ipw2x00/libipw_tx.c ++++ b/drivers/net/wireless/intel/ipw2x00/libipw_tx.c +@@ -383,7 +383,7 @@ netdev_tx_t libipw_xmit(struct sk_buff *skb, struct net_device *dev) + + /* Each fragment may need to have room for encryption + * pre/postfix */ +- if (host_encrypt) ++ if (host_encrypt && crypt && crypt->ops) + bytes_per_frag -= crypt->ops->extra_mpdu_prefix_len + + crypt->ops->extra_mpdu_postfix_len; + +diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/power.c b/drivers/net/wireless/intel/iwlwifi/mvm/power.c +index 22136e4832ea6..b2a6e9b7d0a10 100644 +--- a/drivers/net/wireless/intel/iwlwifi/mvm/power.c ++++ b/drivers/net/wireless/intel/iwlwifi/mvm/power.c +@@ -626,6 +626,9 @@ static void iwl_mvm_power_get_vifs_iterator(void *_data, u8 *mac, + struct iwl_power_vifs *power_iterator = _data; + bool active = mvmvif->phy_ctxt && mvmvif->phy_ctxt->id < NUM_PHY_CTX; + ++ if (!mvmvif->uploaded) ++ return; ++ + switch (ieee80211_vif_type_p2p(vif)) { + case NL80211_IFTYPE_P2P_DEVICE: + break; +diff --git a/drivers/net/wireless/marvell/mwifiex/11h.c b/drivers/net/wireless/marvell/mwifiex/11h.c +index 238accfe4f41d..c4176e357b22c 100644 +--- a/drivers/net/wireless/marvell/mwifiex/11h.c ++++ b/drivers/net/wireless/marvell/mwifiex/11h.c +@@ -303,5 +303,7 @@ void mwifiex_dfs_chan_sw_work_queue(struct work_struct *work) + + mwifiex_dbg(priv->adapter, MSG, + "indicating channel switch completion to kernel\n"); ++ mutex_lock(&priv->wdev.mtx); + cfg80211_ch_switch_notify(priv->netdev, &priv->dfs_chandef); ++ mutex_unlock(&priv->wdev.mtx); + } +diff --git a/drivers/net/wireless/realtek/rtl818x/rtl8180/dev.c 
b/drivers/net/wireless/realtek/rtl818x/rtl8180/dev.c +index d5f65372356bf..0b305badae989 100644 +--- a/drivers/net/wireless/realtek/rtl818x/rtl8180/dev.c ++++ b/drivers/net/wireless/realtek/rtl818x/rtl8180/dev.c +@@ -460,8 +460,10 @@ static void rtl8180_tx(struct ieee80211_hw *dev, + struct rtl8180_priv *priv = dev->priv; + struct rtl8180_tx_ring *ring; + struct rtl8180_tx_desc *entry; ++ unsigned int prio = 0; + unsigned long flags; +- unsigned int idx, prio, hw_prio; ++ unsigned int idx, hw_prio; ++ + dma_addr_t mapping; + u32 tx_flags; + u8 rc_flags; +@@ -470,7 +472,9 @@ static void rtl8180_tx(struct ieee80211_hw *dev, + /* do arithmetic and then convert to le16 */ + u16 frame_duration = 0; + +- prio = skb_get_queue_mapping(skb); ++ /* rtl8180/rtl8185 only has one useable tx queue */ ++ if (dev->queues > IEEE80211_AC_BK) ++ prio = skb_get_queue_mapping(skb); + ring = &priv->tx_ring[prio]; + + mapping = pci_map_single(priv->pdev, skb->data, +diff --git a/drivers/net/wireless/realtek/rtlwifi/usb.c b/drivers/net/wireless/realtek/rtlwifi/usb.c +index bad06939a247c..9bcb187d37dcd 100644 +--- a/drivers/net/wireless/realtek/rtlwifi/usb.c ++++ b/drivers/net/wireless/realtek/rtlwifi/usb.c +@@ -1013,7 +1013,7 @@ int rtl_usb_probe(struct usb_interface *intf, + hw = ieee80211_alloc_hw(sizeof(struct rtl_priv) + + sizeof(struct rtl_usb_priv), &rtl_ops); + if (!hw) { +- WARN_ONCE(true, "rtl_usb: ieee80211 alloc failed\n"); ++ pr_warn("rtl_usb: ieee80211 alloc failed\n"); + return -ENOMEM; + } + rtlpriv = hw->priv; +diff --git a/drivers/nfc/st21nfca/se.c b/drivers/nfc/st21nfca/se.c +index a7ab6dab0f32d..ccaace2a5b0e5 100644 +--- a/drivers/nfc/st21nfca/se.c ++++ b/drivers/nfc/st21nfca/se.c +@@ -241,7 +241,7 @@ int st21nfca_hci_se_io(struct nfc_hci_dev *hdev, u32 se_idx, + } + EXPORT_SYMBOL(st21nfca_hci_se_io); + +-static void st21nfca_se_wt_timeout(struct timer_list *t) ++static void st21nfca_se_wt_work(struct work_struct *work) + { + /* + * No answer from the secure element +@@ 
-254,8 +254,9 @@ static void st21nfca_se_wt_timeout(struct timer_list *t) + */ + /* hardware reset managed through VCC_UICC_OUT power supply */ + u8 param = 0x01; +- struct st21nfca_hci_info *info = from_timer(info, t, +- se_info.bwi_timer); ++ struct st21nfca_hci_info *info = container_of(work, ++ struct st21nfca_hci_info, ++ se_info.timeout_work); + + pr_debug("\n"); + +@@ -273,6 +274,13 @@ static void st21nfca_se_wt_timeout(struct timer_list *t) + info->se_info.cb(info->se_info.cb_context, NULL, 0, -ETIME); + } + ++static void st21nfca_se_wt_timeout(struct timer_list *t) ++{ ++ struct st21nfca_hci_info *info = from_timer(info, t, se_info.bwi_timer); ++ ++ schedule_work(&info->se_info.timeout_work); ++} ++ + static void st21nfca_se_activation_timeout(struct timer_list *t) + { + struct st21nfca_hci_info *info = from_timer(info, t, +@@ -311,7 +319,7 @@ int st21nfca_connectivity_event_received(struct nfc_hci_dev *hdev, u8 host, + * AID 81 5 to 16 + * PARAMETERS 82 0 to 255 + */ +- if (skb->len < NFC_MIN_AID_LENGTH + 2 && ++ if (skb->len < NFC_MIN_AID_LENGTH + 2 || + skb->data[0] != NFC_EVT_TRANSACTION_AID_TAG) + return -EPROTO; + +@@ -323,22 +331,29 @@ int st21nfca_connectivity_event_received(struct nfc_hci_dev *hdev, u8 host, + transaction->aid_len = skb->data[1]; + + /* Checking if the length of the AID is valid */ +- if (transaction->aid_len > sizeof(transaction->aid)) ++ if (transaction->aid_len > sizeof(transaction->aid)) { ++ devm_kfree(dev, transaction); + return -EINVAL; ++ } + + memcpy(transaction->aid, &skb->data[2], + transaction->aid_len); + + /* Check next byte is PARAMETERS tag (82) */ + if (skb->data[transaction->aid_len + 2] != +- NFC_EVT_TRANSACTION_PARAMS_TAG) ++ NFC_EVT_TRANSACTION_PARAMS_TAG) { ++ devm_kfree(dev, transaction); + return -EPROTO; ++ } + + transaction->params_len = skb->data[transaction->aid_len + 3]; + + /* Total size is allocated (skb->len - 2) minus fixed array members */ +- if (transaction->params_len > ((skb->len - 2) - 
sizeof(struct nfc_evt_transaction))) ++ if (transaction->params_len > ((skb->len - 2) - ++ sizeof(struct nfc_evt_transaction))) { ++ devm_kfree(dev, transaction); + return -EINVAL; ++ } + + memcpy(transaction->params, skb->data + + transaction->aid_len + 4, transaction->params_len); +@@ -365,6 +380,7 @@ int st21nfca_apdu_reader_event_received(struct nfc_hci_dev *hdev, + switch (event) { + case ST21NFCA_EVT_TRANSMIT_DATA: + del_timer_sync(&info->se_info.bwi_timer); ++ cancel_work_sync(&info->se_info.timeout_work); + info->se_info.bwi_active = false; + r = nfc_hci_send_event(hdev, ST21NFCA_DEVICE_MGNT_GATE, + ST21NFCA_EVT_SE_END_OF_APDU_TRANSFER, NULL, 0); +@@ -394,6 +410,7 @@ void st21nfca_se_init(struct nfc_hci_dev *hdev) + struct st21nfca_hci_info *info = nfc_hci_get_clientdata(hdev); + + init_completion(&info->se_info.req_completion); ++ INIT_WORK(&info->se_info.timeout_work, st21nfca_se_wt_work); + /* initialize timers */ + timer_setup(&info->se_info.bwi_timer, st21nfca_se_wt_timeout, 0); + info->se_info.bwi_active = false; +@@ -421,6 +438,7 @@ void st21nfca_se_deinit(struct nfc_hci_dev *hdev) + if (info->se_info.se_active) + del_timer_sync(&info->se_info.se_active_timer); + ++ cancel_work_sync(&info->se_info.timeout_work); + info->se_info.bwi_active = false; + info->se_info.se_active = false; + } +diff --git a/drivers/nfc/st21nfca/st21nfca.h b/drivers/nfc/st21nfca/st21nfca.h +index 5e0de0fef1d4e..0e4a93d11efb7 100644 +--- a/drivers/nfc/st21nfca/st21nfca.h ++++ b/drivers/nfc/st21nfca/st21nfca.h +@@ -141,6 +141,7 @@ struct st21nfca_se_info { + + se_io_cb_t cb; + void *cb_context; ++ struct work_struct timeout_work; + }; + + struct st21nfca_hci_info { +diff --git a/drivers/nvdimm/security.c b/drivers/nvdimm/security.c +index 35d265014e1ec..0e23d8c277925 100644 +--- a/drivers/nvdimm/security.c ++++ b/drivers/nvdimm/security.c +@@ -379,11 +379,6 @@ static int security_overwrite(struct nvdimm *nvdimm, unsigned int keyid) + || !nvdimm->sec.flags) + return -EOPNOTSUPP; 
+ +- if (dev->driver == NULL) { +- dev_dbg(dev, "Unable to overwrite while DIMM active.\n"); +- return -EINVAL; +- } +- + rc = check_security_state(nvdimm); + if (rc) + return rc; +diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c +index af516c35afe6f..10fe7a7a2163c 100644 +--- a/drivers/nvme/host/pci.c ++++ b/drivers/nvme/host/pci.c +@@ -1674,6 +1674,7 @@ static int nvme_alloc_admin_tags(struct nvme_dev *dev) + dev->ctrl.admin_q = blk_mq_init_queue(&dev->admin_tagset); + if (IS_ERR(dev->ctrl.admin_q)) { + blk_mq_free_tag_set(&dev->admin_tagset); ++ dev->ctrl.admin_q = NULL; + return -ENOMEM; + } + if (!blk_get_queue(dev->ctrl.admin_q)) { +diff --git a/drivers/of/overlay.c b/drivers/of/overlay.c +index 1688f576ee8ac..8420ef42d89ea 100644 +--- a/drivers/of/overlay.c ++++ b/drivers/of/overlay.c +@@ -170,9 +170,7 @@ static int overlay_notify(struct overlay_changeset *ovcs, + + ret = blocking_notifier_call_chain(&overlay_notify_chain, + action, &nd); +- if (ret == NOTIFY_OK || ret == NOTIFY_STOP) +- return 0; +- if (ret) { ++ if (notifier_to_errno(ret)) { + ret = notifier_to_errno(ret); + pr_err("overlay changeset %s notifier error %d, target: %pOF\n", + of_overlay_action_name[action], ret, nd.target); +diff --git a/drivers/pci/controller/dwc/pci-imx6.c b/drivers/pci/controller/dwc/pci-imx6.c +index acfbd34032a86..b34b52b364d5f 100644 +--- a/drivers/pci/controller/dwc/pci-imx6.c ++++ b/drivers/pci/controller/dwc/pci-imx6.c +@@ -413,6 +413,11 @@ static void imx6_pcie_assert_core_reset(struct imx6_pcie *imx6_pcie) + dev_err(dev, "failed to disable vpcie regulator: %d\n", + ret); + } ++ ++ /* Some boards don't have PCIe reset GPIO. 
*/ ++ if (gpio_is_valid(imx6_pcie->reset_gpio)) ++ gpio_set_value_cansleep(imx6_pcie->reset_gpio, ++ imx6_pcie->gpio_active_high); + } + + static unsigned int imx6_pcie_grp_offset(const struct imx6_pcie *imx6_pcie) +@@ -535,15 +540,6 @@ static void imx6_pcie_deassert_core_reset(struct imx6_pcie *imx6_pcie) + /* allow the clocks to stabilize */ + usleep_range(200, 500); + +- /* Some boards don't have PCIe reset GPIO. */ +- if (gpio_is_valid(imx6_pcie->reset_gpio)) { +- gpio_set_value_cansleep(imx6_pcie->reset_gpio, +- imx6_pcie->gpio_active_high); +- msleep(100); +- gpio_set_value_cansleep(imx6_pcie->reset_gpio, +- !imx6_pcie->gpio_active_high); +- } +- + switch (imx6_pcie->drvdata->variant) { + case IMX8MQ: + reset_control_deassert(imx6_pcie->pciephy_reset); +@@ -586,6 +582,15 @@ static void imx6_pcie_deassert_core_reset(struct imx6_pcie *imx6_pcie) + break; + } + ++ /* Some boards don't have PCIe reset GPIO. */ ++ if (gpio_is_valid(imx6_pcie->reset_gpio)) { ++ msleep(100); ++ gpio_set_value_cansleep(imx6_pcie->reset_gpio, ++ !imx6_pcie->gpio_active_high); ++ /* Wait for 100ms after PERST# deassertion (PCIe r5.0, 6.6.1) */ ++ msleep(100); ++ } ++ + return; + + err_ref_clk: +diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c +index a8eab4e67af10..17f411772f0ca 100644 +--- a/drivers/pci/controller/dwc/pcie-qcom.c ++++ b/drivers/pci/controller/dwc/pcie-qcom.c +@@ -1343,22 +1343,21 @@ static int qcom_pcie_probe(struct platform_device *pdev) + } + + ret = phy_init(pcie->phy); +- if (ret) { +- pm_runtime_disable(&pdev->dev); ++ if (ret) + goto err_pm_runtime_put; +- } + + platform_set_drvdata(pdev, pcie); + + ret = dw_pcie_host_init(pp); + if (ret) { + dev_err(dev, "cannot initialize host\n"); +- pm_runtime_disable(&pdev->dev); +- goto err_pm_runtime_put; ++ goto err_phy_exit; + } + + return 0; + ++err_phy_exit: ++ phy_exit(pcie->phy); + err_pm_runtime_put: + pm_runtime_put(dev); + pm_runtime_disable(dev); +diff --git 
a/drivers/pci/controller/pcie-cadence-ep.c b/drivers/pci/controller/pcie-cadence-ep.c +index def7820cb8247..5e23d575e200a 100644 +--- a/drivers/pci/controller/pcie-cadence-ep.c ++++ b/drivers/pci/controller/pcie-cadence-ep.c +@@ -178,8 +178,7 @@ static int cdns_pcie_ep_map_addr(struct pci_epc *epc, u8 fn, phys_addr_t addr, + struct cdns_pcie *pcie = &ep->pcie; + u32 r; + +- r = find_first_zero_bit(&ep->ob_region_map, +- sizeof(ep->ob_region_map) * BITS_PER_LONG); ++ r = find_first_zero_bit(&ep->ob_region_map, BITS_PER_LONG); + if (r >= ep->max_regions - 1) { + dev_err(&epc->dev, "no free outbound region\n"); + return -EINVAL; +diff --git a/drivers/pci/controller/pcie-rockchip-ep.c b/drivers/pci/controller/pcie-rockchip-ep.c +index d743b0a489886..b82edefffd15f 100644 +--- a/drivers/pci/controller/pcie-rockchip-ep.c ++++ b/drivers/pci/controller/pcie-rockchip-ep.c +@@ -263,8 +263,7 @@ static int rockchip_pcie_ep_map_addr(struct pci_epc *epc, u8 fn, + struct rockchip_pcie *pcie = &ep->rockchip; + u32 r; + +- r = find_first_zero_bit(&ep->ob_region_map, +- sizeof(ep->ob_region_map) * BITS_PER_LONG); ++ r = find_first_zero_bit(&ep->ob_region_map, BITS_PER_LONG); + /* + * Region 0 is reserved for configuration space and shouldn't + * be used elsewhere per TRM, so leave it out. +diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c +index d539eb379743e..ec741f92246d6 100644 +--- a/drivers/pci/pci.c ++++ b/drivers/pci/pci.c +@@ -2613,6 +2613,8 @@ static const struct dmi_system_id bridge_d3_blacklist[] = { + DMI_MATCH(DMI_BOARD_VENDOR, "Gigabyte Technology Co., Ltd."), + DMI_MATCH(DMI_BOARD_NAME, "X299 DESIGNARE EX-CF"), + }, ++ }, ++ { + /* + * Downstream device is not accessible after putting a root port + * into D3cold and back into D0 on Elo i2. +@@ -4915,18 +4917,18 @@ static int pci_dev_reset_slot_function(struct pci_dev *dev, int probe) + + static void pci_dev_lock(struct pci_dev *dev) + { +- pci_cfg_access_lock(dev); + /* block PM suspend, driver probe, etc. 
*/ + device_lock(&dev->dev); ++ pci_cfg_access_lock(dev); + } + + /* Return 1 on successful lock, 0 on contention */ + static int pci_dev_trylock(struct pci_dev *dev) + { +- if (pci_cfg_access_trylock(dev)) { +- if (device_trylock(&dev->dev)) ++ if (device_trylock(&dev->dev)) { ++ if (pci_cfg_access_trylock(dev)) + return 1; +- pci_cfg_access_unlock(dev); ++ device_unlock(&dev->dev); + } + + return 0; +@@ -4934,8 +4936,8 @@ static int pci_dev_trylock(struct pci_dev *dev) + + static void pci_dev_unlock(struct pci_dev *dev) + { +- device_unlock(&dev->dev); + pci_cfg_access_unlock(dev); ++ device_unlock(&dev->dev); + } + + static void pci_dev_save_and_disable(struct pci_dev *dev) +diff --git a/drivers/pcmcia/Kconfig b/drivers/pcmcia/Kconfig +index e004d8da03dcb..73df71a142536 100644 +--- a/drivers/pcmcia/Kconfig ++++ b/drivers/pcmcia/Kconfig +@@ -151,7 +151,7 @@ config TCIC + + config PCMCIA_ALCHEMY_DEVBOARD + tristate "Alchemy Db/Pb1xxx PCMCIA socket services" +- depends on MIPS_ALCHEMY && PCMCIA ++ depends on MIPS_DB1XXX && PCMCIA + help + Enable this driver of you want PCMCIA support on your Alchemy + Db1000, Db/Pb1100, Db/Pb1500, Db/Pb1550, Db/Pb1200, DB1300 +diff --git a/drivers/phy/qualcomm/phy-qcom-qmp.c b/drivers/phy/qualcomm/phy-qcom-qmp.c +index 5ddbf9a1f328b..21d40c6658545 100644 +--- a/drivers/phy/qualcomm/phy-qcom-qmp.c ++++ b/drivers/phy/qualcomm/phy-qcom-qmp.c +@@ -1517,7 +1517,7 @@ static int qcom_qmp_phy_enable(struct phy *phy) + qcom_qmp_phy_configure(pcs, cfg->regs, cfg->pcs_tbl, cfg->pcs_tbl_num); + ret = reset_control_deassert(qmp->ufs_reset); + if (ret) +- goto err_lane_rst; ++ goto err_pcs_ready; + + /* + * Pull out PHY from POWER DOWN state. 
+@@ -1860,6 +1860,11 @@ static const struct phy_ops qcom_qmp_ufs_ops = { + .owner = THIS_MODULE, + }; + ++static void qcom_qmp_reset_control_put(void *data) ++{ ++ reset_control_put(data); ++} ++ + static + int qcom_qmp_phy_create(struct device *dev, struct device_node *np, int id) + { +@@ -1929,7 +1934,7 @@ int qcom_qmp_phy_create(struct device *dev, struct device_node *np, int id) + * all phys that don't need this. + */ + snprintf(prop_name, sizeof(prop_name), "pipe%d", id); +- qphy->pipe_clk = of_clk_get_by_name(np, prop_name); ++ qphy->pipe_clk = devm_get_clk_from_child(dev, np, prop_name); + if (IS_ERR(qphy->pipe_clk)) { + if (qmp->cfg->type == PHY_TYPE_PCIE || + qmp->cfg->type == PHY_TYPE_USB3) { +@@ -1951,6 +1956,10 @@ int qcom_qmp_phy_create(struct device *dev, struct device_node *np, int id) + dev_err(dev, "failed to get lane%d reset\n", id); + return PTR_ERR(qphy->lane_rst); + } ++ ret = devm_add_action_or_reset(dev, qcom_qmp_reset_control_put, ++ qphy->lane_rst); ++ if (ret) ++ return ret; + } + + if (qmp->cfg->type == PHY_TYPE_UFS) +diff --git a/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c b/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c +index f56add78d58ce..359b2ecfcbdb3 100644 +--- a/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c ++++ b/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c +@@ -773,7 +773,7 @@ static int armada_37xx_irqchip_register(struct platform_device *pdev, + for (i = 0; i < nr_irq_parent; i++) { + int irq = irq_of_parse_and_map(np, i); + +- if (irq < 0) ++ if (!irq) + continue; + + gpiochip_set_chained_irqchip(gc, irqchip, irq, +diff --git a/drivers/pwm/pwm-lp3943.c b/drivers/pwm/pwm-lp3943.c +index bf3f14fb5f244..05e4120fd7022 100644 +--- a/drivers/pwm/pwm-lp3943.c ++++ b/drivers/pwm/pwm-lp3943.c +@@ -125,6 +125,7 @@ static int lp3943_pwm_config(struct pwm_chip *chip, struct pwm_device *pwm, + if (err) + return err; + ++ duty_ns = min(duty_ns, period_ns); + val = (u8)(duty_ns * LP3943_MAX_DUTY / period_ns); + + return lp3943_write_byte(lp3943, 
reg_duty, val); +diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c +index 7fd793d8536cd..ae2addadb36f2 100644 +--- a/drivers/regulator/core.c ++++ b/drivers/regulator/core.c +@@ -1988,10 +1988,13 @@ struct regulator *_regulator_get(struct device *dev, const char *id, + rdev->exclusive = 1; + + ret = _regulator_is_enabled(rdev); +- if (ret > 0) ++ if (ret > 0) { + rdev->use_count = 1; +- else ++ regulator->enable_count = 1; ++ } else { + rdev->use_count = 0; ++ regulator->enable_count = 0; ++ } + } + + device_link_add(dev, &rdev->dev, DL_FLAG_STATELESS); +diff --git a/drivers/regulator/pfuze100-regulator.c b/drivers/regulator/pfuze100-regulator.c +index 44b1da7cc3744..f873d97100e28 100644 +--- a/drivers/regulator/pfuze100-regulator.c ++++ b/drivers/regulator/pfuze100-regulator.c +@@ -528,6 +528,7 @@ static int pfuze_parse_regulators_dt(struct pfuze_chip *chip) + parent = of_get_child_by_name(np, "regulators"); + if (!parent) { + dev_err(dev, "regulators node not found\n"); ++ of_node_put(np); + return -EINVAL; + } + +@@ -557,6 +558,7 @@ static int pfuze_parse_regulators_dt(struct pfuze_chip *chip) + } + + of_node_put(parent); ++ of_node_put(np); + if (ret < 0) { + dev_err(dev, "Error parsing regulator init data: %d\n", + ret); +diff --git a/drivers/rpmsg/qcom_smd.c b/drivers/rpmsg/qcom_smd.c +index 19903de6268db..a4db9f6100d2f 100644 +--- a/drivers/rpmsg/qcom_smd.c ++++ b/drivers/rpmsg/qcom_smd.c +@@ -1388,9 +1388,9 @@ static int qcom_smd_parse_edge(struct device *dev, + edge->name = node->name; + + irq = irq_of_parse_and_map(node, 0); +- if (irq < 0) { ++ if (!irq) { + dev_err(dev, "required smd interrupt missing\n"); +- ret = irq; ++ ret = -EINVAL; + goto put_node; + } + +diff --git a/drivers/rtc/rtc-mt6397.c b/drivers/rtc/rtc-mt6397.c +index b216bdcba0da4..dd3901b0a4ed2 100644 +--- a/drivers/rtc/rtc-mt6397.c ++++ b/drivers/rtc/rtc-mt6397.c +@@ -331,6 +331,8 @@ static int mtk_rtc_probe(struct platform_device *pdev) + return -ENOMEM; + + res = 
platform_get_resource(pdev, IORESOURCE_MEM, 0); ++ if (!res) ++ return -EINVAL; + rtc->addr_base = res->start; + + rtc->irq = platform_get_irq(pdev, 0); +diff --git a/drivers/scsi/dc395x.c b/drivers/scsi/dc395x.c +index 5fb06930912a0..c4a6609d8fae1 100644 +--- a/drivers/scsi/dc395x.c ++++ b/drivers/scsi/dc395x.c +@@ -3664,10 +3664,19 @@ static struct DeviceCtlBlk *device_alloc(struct AdapterCtlBlk *acb, + #endif + if (dcb->target_lun != 0) { + /* Copy settings */ +- struct DeviceCtlBlk *p; +- list_for_each_entry(p, &acb->dcb_list, list) +- if (p->target_id == dcb->target_id) ++ struct DeviceCtlBlk *p = NULL, *iter; ++ ++ list_for_each_entry(iter, &acb->dcb_list, list) ++ if (iter->target_id == dcb->target_id) { ++ p = iter; + break; ++ } ++ ++ if (!p) { ++ kfree(dcb); ++ return NULL; ++ } ++ + dprintkdbg(DBG_1, + "device_alloc: <%02i-%i> copy from <%02i-%i>\n", + dcb->target_id, dcb->target_lun, +diff --git a/drivers/scsi/fcoe/fcoe_ctlr.c b/drivers/scsi/fcoe/fcoe_ctlr.c +index 07a0dadc75bf5..7ce2a0434e1e5 100644 +--- a/drivers/scsi/fcoe/fcoe_ctlr.c ++++ b/drivers/scsi/fcoe/fcoe_ctlr.c +@@ -1966,7 +1966,7 @@ EXPORT_SYMBOL(fcoe_ctlr_recv_flogi); + * + * Returns: u64 fc world wide name + */ +-u64 fcoe_wwn_from_mac(unsigned char mac[MAX_ADDR_LEN], ++u64 fcoe_wwn_from_mac(unsigned char mac[ETH_ALEN], + unsigned int scheme, unsigned int port) + { + u64 wwn; +diff --git a/drivers/scsi/megaraid.c b/drivers/scsi/megaraid.c +index ff6d4aa924213..8b1ba690039b7 100644 +--- a/drivers/scsi/megaraid.c ++++ b/drivers/scsi/megaraid.c +@@ -4635,7 +4635,7 @@ static int __init megaraid_init(void) + * major number allocation. 
+ */ + major = register_chrdev(0, "megadev_legacy", &megadev_fops); +- if (!major) { ++ if (major < 0) { + printk(KERN_WARNING + "megaraid: failed to register char device\n"); + } +diff --git a/drivers/scsi/myrb.c b/drivers/scsi/myrb.c +index 539ac8ce4fcd7..35b32920a94a0 100644 +--- a/drivers/scsi/myrb.c ++++ b/drivers/scsi/myrb.c +@@ -1241,7 +1241,8 @@ static void myrb_cleanup(struct myrb_hba *cb) + myrb_unmap(cb); + + if (cb->mmio_base) { +- cb->disable_intr(cb->io_base); ++ if (cb->disable_intr) ++ cb->disable_intr(cb->io_base); + iounmap(cb->mmio_base); + } + if (cb->irq) +@@ -3516,9 +3517,13 @@ static struct myrb_hba *myrb_detect(struct pci_dev *pdev, + mutex_init(&cb->dcmd_mutex); + mutex_init(&cb->dma_mutex); + cb->pdev = pdev; ++ cb->host = shost; + +- if (pci_enable_device(pdev)) +- goto failure; ++ if (pci_enable_device(pdev)) { ++ dev_err(&pdev->dev, "Failed to enable PCI device\n"); ++ scsi_host_put(shost); ++ return NULL; ++ } + + if (privdata->hw_init == DAC960_PD_hw_init || + privdata->hw_init == DAC960_P_hw_init) { +diff --git a/drivers/scsi/ufs/ufs-qcom.c b/drivers/scsi/ufs/ufs-qcom.c +index 4f066e3b19af1..7c9664c0c4c4f 100644 +--- a/drivers/scsi/ufs/ufs-qcom.c ++++ b/drivers/scsi/ufs/ufs-qcom.c +@@ -781,8 +781,11 @@ static void ufs_qcom_dev_ref_clk_ctrl(struct ufs_qcom_host *host, bool enable) + + writel_relaxed(temp, host->dev_ref_clk_ctrl_mmio); + +- /* ensure that ref_clk is enabled/disabled before we return */ +- wmb(); ++ /* ++ * Make sure the write to ref_clk reaches the destination and ++ * not stored in a Write Buffer (WB). 
++ */ ++ readl(host->dev_ref_clk_ctrl_mmio); + + /* + * If we call hibern8 exit after this, we need to make sure that +diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c +index ebf7ae1ef70d4..670f4c7934f85 100644 +--- a/drivers/scsi/ufs/ufshcd.c ++++ b/drivers/scsi/ufs/ufshcd.c +@@ -118,8 +118,13 @@ int ufshcd_dump_regs(struct ufs_hba *hba, size_t offset, size_t len, + if (!regs) + return -ENOMEM; + +- for (pos = 0; pos < len; pos += 4) ++ for (pos = 0; pos < len; pos += 4) { ++ if (offset == 0 && ++ pos >= REG_UIC_ERROR_CODE_PHY_ADAPTER_LAYER && ++ pos <= REG_UIC_ERROR_CODE_DME) ++ continue; + regs[pos / 4] = ufshcd_readl(hba, offset + pos); ++ } + + ufshcd_hex_dump(prefix, regs, len); + kfree(regs); +diff --git a/drivers/soc/qcom/smp2p.c b/drivers/soc/qcom/smp2p.c +index 42e0b8f647aef..d42bcca3b98e2 100644 +--- a/drivers/soc/qcom/smp2p.c ++++ b/drivers/soc/qcom/smp2p.c +@@ -420,6 +420,7 @@ static int smp2p_parse_ipc(struct qcom_smp2p *smp2p) + } + + smp2p->ipc_regmap = syscon_node_to_regmap(syscon); ++ of_node_put(syscon); + if (IS_ERR(smp2p->ipc_regmap)) + return PTR_ERR(smp2p->ipc_regmap); + +diff --git a/drivers/soc/qcom/smsm.c b/drivers/soc/qcom/smsm.c +index c428d0f78816e..6564f15c53190 100644 +--- a/drivers/soc/qcom/smsm.c ++++ b/drivers/soc/qcom/smsm.c +@@ -359,6 +359,7 @@ static int smsm_parse_ipc(struct qcom_smsm *smsm, unsigned host_id) + return 0; + + host->ipc_regmap = syscon_node_to_regmap(syscon); ++ of_node_put(syscon); + if (IS_ERR(host->ipc_regmap)) + return PTR_ERR(host->ipc_regmap); + +diff --git a/drivers/soc/rockchip/grf.c b/drivers/soc/rockchip/grf.c +index 494cf2b5bf7b6..343ff61ccccbb 100644 +--- a/drivers/soc/rockchip/grf.c ++++ b/drivers/soc/rockchip/grf.c +@@ -148,12 +148,14 @@ static int __init rockchip_grf_init(void) + return -ENODEV; + if (!match || !match->data) { + pr_err("%s: missing grf data\n", __func__); ++ of_node_put(np); + return -EINVAL; + } + + grf_info = match->data; + + grf = syscon_node_to_regmap(np); ++ 
of_node_put(np); + if (IS_ERR(grf)) { + pr_err("%s: could not get grf syscon\n", __func__); + return PTR_ERR(grf); +diff --git a/drivers/spi/spi-img-spfi.c b/drivers/spi/spi-img-spfi.c +index e9ef80983b791..5a6b02843f2bc 100644 +--- a/drivers/spi/spi-img-spfi.c ++++ b/drivers/spi/spi-img-spfi.c +@@ -771,7 +771,7 @@ static int img_spfi_resume(struct device *dev) + int ret; + + ret = pm_runtime_get_sync(dev); +- if (ret) { ++ if (ret < 0) { + pm_runtime_put_noidle(dev); + return ret; + } +diff --git a/drivers/spi/spi-rspi.c b/drivers/spi/spi-rspi.c +index 7222c7689c3c4..0524741d73b90 100644 +--- a/drivers/spi/spi-rspi.c ++++ b/drivers/spi/spi-rspi.c +@@ -1044,14 +1044,11 @@ static struct dma_chan *rspi_request_dma_chan(struct device *dev, + } + + memset(&cfg, 0, sizeof(cfg)); ++ cfg.dst_addr = port_addr + RSPI_SPDR; ++ cfg.src_addr = port_addr + RSPI_SPDR; ++ cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE; ++ cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE; + cfg.direction = dir; +- if (dir == DMA_MEM_TO_DEV) { +- cfg.dst_addr = port_addr; +- cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE; +- } else { +- cfg.src_addr = port_addr; +- cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE; +- } + + ret = dmaengine_slave_config(chan, &cfg); + if (ret) { +@@ -1082,12 +1079,12 @@ static int rspi_request_dma(struct device *dev, struct spi_controller *ctlr, + } + + ctlr->dma_tx = rspi_request_dma_chan(dev, DMA_MEM_TO_DEV, dma_tx_id, +- res->start + RSPI_SPDR); ++ res->start); + if (!ctlr->dma_tx) + return -ENODEV; + + ctlr->dma_rx = rspi_request_dma_chan(dev, DMA_DEV_TO_MEM, dma_rx_id, +- res->start + RSPI_SPDR); ++ res->start); + if (!ctlr->dma_rx) { + dma_release_channel(ctlr->dma_tx); + ctlr->dma_tx = NULL; +diff --git a/drivers/spi/spi-stm32-qspi.c b/drivers/spi/spi-stm32-qspi.c +index ea77d915216a2..8070b74202170 100644 +--- a/drivers/spi/spi-stm32-qspi.c ++++ b/drivers/spi/spi-stm32-qspi.c +@@ -293,7 +293,8 @@ static int stm32_qspi_wait_cmd(struct stm32_qspi *qspi, + if 
(!op->data.nbytes) + goto wait_nobusy; + +- if (readl_relaxed(qspi->io_base + QSPI_SR) & SR_TCF) ++ if ((readl_relaxed(qspi->io_base + QSPI_SR) & SR_TCF) || ++ qspi->fmode == CCR_FMODE_APM) + goto out; + + reinit_completion(&qspi->data_completion); +diff --git a/drivers/spi/spi-ti-qspi.c b/drivers/spi/spi-ti-qspi.c +index 6b6ef89442837..4bbad00244ab8 100644 +--- a/drivers/spi/spi-ti-qspi.c ++++ b/drivers/spi/spi-ti-qspi.c +@@ -401,6 +401,7 @@ static int ti_qspi_dma_xfer(struct ti_qspi *qspi, dma_addr_t dma_dst, + enum dma_ctrl_flags flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT; + struct dma_async_tx_descriptor *tx; + int ret; ++ unsigned long time_left; + + tx = dmaengine_prep_dma_memcpy(chan, dma_dst, dma_src, len, flags); + if (!tx) { +@@ -420,9 +421,9 @@ static int ti_qspi_dma_xfer(struct ti_qspi *qspi, dma_addr_t dma_dst, + } + + dma_async_issue_pending(chan); +- ret = wait_for_completion_timeout(&qspi->transfer_complete, ++ time_left = wait_for_completion_timeout(&qspi->transfer_complete, + msecs_to_jiffies(len)); +- if (ret <= 0) { ++ if (time_left == 0) { + dmaengine_terminate_sync(chan); + dev_err(qspi->dev, "DMA wait_for_completion_timeout\n"); + return -ETIMEDOUT; +diff --git a/drivers/staging/fieldbus/anybuss/host.c b/drivers/staging/fieldbus/anybuss/host.c +index f69dc49304571..b7a91bdef6f41 100644 +--- a/drivers/staging/fieldbus/anybuss/host.c ++++ b/drivers/staging/fieldbus/anybuss/host.c +@@ -1384,7 +1384,7 @@ anybuss_host_common_probe(struct device *dev, + goto err_device; + return cd; + err_device: +- device_unregister(&cd->client->dev); ++ put_device(&cd->client->dev); + err_kthread: + kthread_stop(cd->qthread); + err_reset: +diff --git a/drivers/staging/greybus/audio_codec.c b/drivers/staging/greybus/audio_codec.c +index 3259bf02ba25e..2418fbf1d2ab9 100644 +--- a/drivers/staging/greybus/audio_codec.c ++++ b/drivers/staging/greybus/audio_codec.c +@@ -620,8 +620,8 @@ static int gbcodec_mute_stream(struct snd_soc_dai *dai, int mute, int stream) + 
break; + } + if (!data) { +- dev_err(dai->dev, "%s:%s DATA connection missing\n", +- dai->name, module->name); ++ dev_err(dai->dev, "%s DATA connection missing\n", ++ dai->name); + mutex_unlock(&codec->lock); + return -ENODEV; + } +diff --git a/drivers/staging/rtl8192e/rtllib_softmac.c b/drivers/staging/rtl8192e/rtllib_softmac.c +index 4ff8fd694c600..0154f5791b121 100644 +--- a/drivers/staging/rtl8192e/rtllib_softmac.c ++++ b/drivers/staging/rtl8192e/rtllib_softmac.c +@@ -651,9 +651,9 @@ static void rtllib_beacons_stop(struct rtllib_device *ieee) + spin_lock_irqsave(&ieee->beacon_lock, flags); + + ieee->beacon_txing = 0; +- del_timer_sync(&ieee->beacon_timer); + + spin_unlock_irqrestore(&ieee->beacon_lock, flags); ++ del_timer_sync(&ieee->beacon_timer); + + } + +diff --git a/drivers/staging/rtl8192u/ieee80211/ieee80211_softmac.c b/drivers/staging/rtl8192u/ieee80211/ieee80211_softmac.c +index 33a6af7aad225..a869694337f72 100644 +--- a/drivers/staging/rtl8192u/ieee80211/ieee80211_softmac.c ++++ b/drivers/staging/rtl8192u/ieee80211/ieee80211_softmac.c +@@ -528,9 +528,9 @@ static void ieee80211_beacons_stop(struct ieee80211_device *ieee) + spin_lock_irqsave(&ieee->beacon_lock, flags); + + ieee->beacon_txing = 0; +- del_timer_sync(&ieee->beacon_timer); + + spin_unlock_irqrestore(&ieee->beacon_lock, flags); ++ del_timer_sync(&ieee->beacon_timer); + } + + void ieee80211_stop_send_beacons(struct ieee80211_device *ieee) +diff --git a/drivers/staging/rtl8712/usb_intf.c b/drivers/staging/rtl8712/usb_intf.c +index 49188ab046123..f7c1258eaa394 100644 +--- a/drivers/staging/rtl8712/usb_intf.c ++++ b/drivers/staging/rtl8712/usb_intf.c +@@ -539,13 +539,13 @@ static int r871xu_drv_init(struct usb_interface *pusb_intf, + } else { + AutoloadFail = false; + } +- if (((mac[0] == 0xff) && (mac[1] == 0xff) && ++ if ((!AutoloadFail) || ++ ((mac[0] == 0xff) && (mac[1] == 0xff) && + (mac[2] == 0xff) && (mac[3] == 0xff) && + (mac[4] == 0xff) && (mac[5] == 0xff)) || + ((mac[0] == 0x00) && 
(mac[1] == 0x00) && + (mac[2] == 0x00) && (mac[3] == 0x00) && +- (mac[4] == 0x00) && (mac[5] == 0x00)) || +- (!AutoloadFail)) { ++ (mac[4] == 0x00) && (mac[5] == 0x00))) { + mac[0] = 0x00; + mac[1] = 0xe0; + mac[2] = 0x4c; +diff --git a/drivers/staging/rtl8712/usb_ops.c b/drivers/staging/rtl8712/usb_ops.c +index e64845e6adf3d..af9966d03979c 100644 +--- a/drivers/staging/rtl8712/usb_ops.c ++++ b/drivers/staging/rtl8712/usb_ops.c +@@ -29,7 +29,8 @@ static u8 usb_read8(struct intf_hdl *intfhdl, u32 addr) + u16 wvalue; + u16 index; + u16 len; +- __le32 data; ++ int status; ++ __le32 data = 0; + struct intf_priv *intfpriv = intfhdl->pintfpriv; + + request = 0x05; +@@ -37,8 +38,10 @@ static u8 usb_read8(struct intf_hdl *intfhdl, u32 addr) + index = 0; + wvalue = (u16)(addr & 0x0000ffff); + len = 1; +- r8712_usbctrl_vendorreq(intfpriv, request, wvalue, index, &data, len, +- requesttype); ++ status = r8712_usbctrl_vendorreq(intfpriv, request, wvalue, index, ++ &data, len, requesttype); ++ if (status < 0) ++ return 0; + return (u8)(le32_to_cpu(data) & 0x0ff); + } + +@@ -49,7 +52,8 @@ static u16 usb_read16(struct intf_hdl *intfhdl, u32 addr) + u16 wvalue; + u16 index; + u16 len; +- __le32 data; ++ int status; ++ __le32 data = 0; + struct intf_priv *intfpriv = intfhdl->pintfpriv; + + request = 0x05; +@@ -57,8 +61,10 @@ static u16 usb_read16(struct intf_hdl *intfhdl, u32 addr) + index = 0; + wvalue = (u16)(addr & 0x0000ffff); + len = 2; +- r8712_usbctrl_vendorreq(intfpriv, request, wvalue, index, &data, len, +- requesttype); ++ status = r8712_usbctrl_vendorreq(intfpriv, request, wvalue, index, ++ &data, len, requesttype); ++ if (status < 0) ++ return 0; + return (u16)(le32_to_cpu(data) & 0xffff); + } + +@@ -69,7 +75,8 @@ static u32 usb_read32(struct intf_hdl *intfhdl, u32 addr) + u16 wvalue; + u16 index; + u16 len; +- __le32 data; ++ int status; ++ __le32 data = 0; + struct intf_priv *intfpriv = intfhdl->pintfpriv; + + request = 0x05; +@@ -77,8 +84,10 @@ static u32 
usb_read32(struct intf_hdl *intfhdl, u32 addr) + index = 0; + wvalue = (u16)(addr & 0x0000ffff); + len = 4; +- r8712_usbctrl_vendorreq(intfpriv, request, wvalue, index, &data, len, +- requesttype); ++ status = r8712_usbctrl_vendorreq(intfpriv, request, wvalue, index, ++ &data, len, requesttype); ++ if (status < 0) ++ return 0; + return le32_to_cpu(data); + } + +diff --git a/drivers/thermal/broadcom/sr-thermal.c b/drivers/thermal/broadcom/sr-thermal.c +index 475ce29007713..85ab9edd580cc 100644 +--- a/drivers/thermal/broadcom/sr-thermal.c ++++ b/drivers/thermal/broadcom/sr-thermal.c +@@ -60,6 +60,9 @@ static int sr_thermal_probe(struct platform_device *pdev) + return -ENOMEM; + + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); ++ if (!res) ++ return -ENOENT; ++ + sr_thermal->regs = (void __iomem *)devm_memremap(&pdev->dev, res->start, + resource_size(res), + MEMREMAP_WB); +diff --git a/drivers/tty/goldfish.c b/drivers/tty/goldfish.c +index c8c5cdfc5e199..abc84d84f6386 100644 +--- a/drivers/tty/goldfish.c ++++ b/drivers/tty/goldfish.c +@@ -407,6 +407,7 @@ static int goldfish_tty_probe(struct platform_device *pdev) + err_tty_register_device_failed: + free_irq(irq, qtty); + err_dec_line_count: ++ tty_port_destroy(&qtty->port); + goldfish_tty_current_line_count--; + if (goldfish_tty_current_line_count == 0) + goldfish_tty_delete_driver(); +@@ -428,6 +429,7 @@ static int goldfish_tty_remove(struct platform_device *pdev) + iounmap(qtty->base); + qtty->base = NULL; + free_irq(qtty->irq, pdev); ++ tty_port_destroy(&qtty->port); + goldfish_tty_current_line_count--; + if (goldfish_tty_current_line_count == 0) + goldfish_tty_delete_driver(); +diff --git a/drivers/tty/serial/8250/8250_fintek.c b/drivers/tty/serial/8250/8250_fintek.c +index e24161004ddc1..9b1cddbfc75c0 100644 +--- a/drivers/tty/serial/8250/8250_fintek.c ++++ b/drivers/tty/serial/8250/8250_fintek.c +@@ -197,12 +197,12 @@ static int fintek_8250_rs485_config(struct uart_port *port, + if (!pdata) + return 
-EINVAL; + +- /* Hardware do not support same RTS level on send and receive */ +- if (!(rs485->flags & SER_RS485_RTS_ON_SEND) == +- !(rs485->flags & SER_RS485_RTS_AFTER_SEND)) +- return -EINVAL; + + if (rs485->flags & SER_RS485_ENABLED) { ++ /* Hardware do not support same RTS level on send and receive */ ++ if (!(rs485->flags & SER_RS485_RTS_ON_SEND) == ++ !(rs485->flags & SER_RS485_RTS_AFTER_SEND)) ++ return -EINVAL; + memset(rs485->padding, 0, sizeof(rs485->padding)); + config |= RS485_URA; + } else { +diff --git a/drivers/tty/serial/digicolor-usart.c b/drivers/tty/serial/digicolor-usart.c +index 4446c13629b1c..e06967ca62fa6 100644 +--- a/drivers/tty/serial/digicolor-usart.c ++++ b/drivers/tty/serial/digicolor-usart.c +@@ -309,6 +309,8 @@ static void digicolor_uart_set_termios(struct uart_port *port, + case CS8: + default: + config |= UA_CONFIG_CHAR_LEN; ++ termios->c_cflag &= ~CSIZE; ++ termios->c_cflag |= CS8; + break; + } + +diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c +index 13e705b53217d..4bdc12908146e 100644 +--- a/drivers/tty/serial/fsl_lpuart.c ++++ b/drivers/tty/serial/fsl_lpuart.c +@@ -233,8 +233,6 @@ + /* IMX lpuart has four extra unused regs located at the beginning */ + #define IMX_REG_OFF 0x10 + +-static DEFINE_IDA(fsl_lpuart_ida); +- + enum lpuart_type { + VF610_LPUART, + LS1021A_LPUART, +@@ -269,7 +267,6 @@ struct lpuart_port { + int rx_dma_rng_buf_len; + unsigned int dma_tx_nents; + wait_queue_head_t dma_wait; +- bool id_allocated; + }; + + struct lpuart_soc_data { +@@ -2450,23 +2447,18 @@ static int lpuart_probe(struct platform_device *pdev) + + ret = of_alias_get_id(np, "serial"); + if (ret < 0) { +- ret = ida_simple_get(&fsl_lpuart_ida, 0, UART_NR, GFP_KERNEL); +- if (ret < 0) { +- dev_err(&pdev->dev, "port line is full, add device failed\n"); +- return ret; +- } +- sport->id_allocated = true; ++ dev_err(&pdev->dev, "failed to get alias id, errno %d\n", ret); ++ return ret; + } + if (ret >= 
ARRAY_SIZE(lpuart_ports)) { + dev_err(&pdev->dev, "serial%d out of range\n", ret); +- ret = -EINVAL; +- goto failed_out_of_range; ++ return -EINVAL; + } + sport->port.line = ret; + + ret = lpuart_enable_clks(sport); + if (ret) +- goto failed_clock_enable; ++ return ret; + sport->port.uartclk = lpuart_get_baud_clk_rate(sport); + + lpuart_ports[sport->port.line] = sport; +@@ -2516,10 +2508,6 @@ static int lpuart_probe(struct platform_device *pdev) + failed_attach_port: + failed_irq_request: + lpuart_disable_clks(sport); +-failed_clock_enable: +-failed_out_of_range: +- if (sport->id_allocated) +- ida_simple_remove(&fsl_lpuart_ida, sport->port.line); + return ret; + } + +@@ -2529,9 +2517,6 @@ static int lpuart_remove(struct platform_device *pdev) + + uart_remove_one_port(&lpuart_reg, &sport->port); + +- if (sport->id_allocated) +- ida_simple_remove(&fsl_lpuart_ida, sport->port.line); +- + lpuart_disable_clks(sport); + + if (sport->dma_tx_chan) +@@ -2663,7 +2648,6 @@ static int __init lpuart_serial_init(void) + + static void __exit lpuart_serial_exit(void) + { +- ida_destroy(&fsl_lpuart_ida); + platform_driver_unregister(&lpuart_driver); + uart_unregister_driver(&lpuart_reg); + } +diff --git a/drivers/tty/serial/icom.c b/drivers/tty/serial/icom.c +index 624f3d541c687..d047380259b53 100644 +--- a/drivers/tty/serial/icom.c ++++ b/drivers/tty/serial/icom.c +@@ -1499,7 +1499,7 @@ static int icom_probe(struct pci_dev *dev, + retval = pci_read_config_dword(dev, PCI_COMMAND, &command_reg); + if (retval) { + dev_err(&dev->dev, "PCI Config read FAILED\n"); +- return retval; ++ goto probe_exit0; + } + + pci_write_config_dword(dev, PCI_COMMAND, +diff --git a/drivers/tty/serial/meson_uart.c b/drivers/tty/serial/meson_uart.c +index fbc5bc022a392..849ce8c1ef392 100644 +--- a/drivers/tty/serial/meson_uart.c ++++ b/drivers/tty/serial/meson_uart.c +@@ -256,6 +256,14 @@ static const char *meson_uart_type(struct uart_port *port) + return (port->type == PORT_MESON) ? 
"meson_uart" : NULL; + } + ++/* ++ * This function is called only from probe() using a temporary io mapping ++ * in order to perform a reset before setting up the device. Since the ++ * temporarily mapped region was successfully requested, there can be no ++ * console on this port at this time. Hence it is not necessary for this ++ * function to acquire the port->lock. (Since there is no console on this ++ * port at this time, the port->lock is not initialized yet.) ++ */ + static void meson_uart_reset(struct uart_port *port) + { + u32 val; +@@ -270,9 +278,12 @@ static void meson_uart_reset(struct uart_port *port) + + static int meson_uart_startup(struct uart_port *port) + { ++ unsigned long flags; + u32 val; + int ret = 0; + ++ spin_lock_irqsave(&port->lock, flags); ++ + val = readl(port->membase + AML_UART_CONTROL); + val |= AML_UART_CLEAR_ERR; + writel(val, port->membase + AML_UART_CONTROL); +@@ -288,6 +299,8 @@ static int meson_uart_startup(struct uart_port *port) + val = (AML_UART_RECV_IRQ(1) | AML_UART_XMIT_IRQ(port->fifosize / 2)); + writel(val, port->membase + AML_UART_MISC); + ++ spin_unlock_irqrestore(&port->lock, flags); ++ + ret = request_irq(port->irq, meson_uart_interrupt, 0, + port->name, port); + +diff --git a/drivers/tty/serial/msm_serial.c b/drivers/tty/serial/msm_serial.c +index 5129c2dfbe079..aac96659694d6 100644 +--- a/drivers/tty/serial/msm_serial.c ++++ b/drivers/tty/serial/msm_serial.c +@@ -1579,6 +1579,7 @@ static inline struct uart_port *msm_get_port_from_line(unsigned int line) + static void __msm_console_write(struct uart_port *port, const char *s, + unsigned int count, bool is_uartdm) + { ++ unsigned long flags; + int i; + int num_newlines = 0; + bool replaced = false; +@@ -1596,6 +1597,8 @@ static void __msm_console_write(struct uart_port *port, const char *s, + num_newlines++; + count += num_newlines; + ++ local_irq_save(flags); ++ + if (port->sysrq) + locked = 0; + else if (oops_in_progress) +@@ -1641,6 +1644,8 @@ static void 
__msm_console_write(struct uart_port *port, const char *s, + + if (locked) + spin_unlock(&port->lock); ++ ++ local_irq_restore(flags); + } + + static void msm_console_write(struct console *co, const char *s, +diff --git a/drivers/tty/serial/owl-uart.c b/drivers/tty/serial/owl-uart.c +index c55c8507713c3..e87953f8a7685 100644 +--- a/drivers/tty/serial/owl-uart.c ++++ b/drivers/tty/serial/owl-uart.c +@@ -695,6 +695,7 @@ static int owl_uart_probe(struct platform_device *pdev) + owl_port->port.uartclk = clk_get_rate(owl_port->clk); + if (owl_port->port.uartclk == 0) { + dev_err(&pdev->dev, "clock rate is zero\n"); ++ clk_disable_unprepare(owl_port->clk); + return -EINVAL; + } + owl_port->port.flags = UPF_BOOT_AUTOCONF | UPF_IOREMAP | UPF_LOW_LATENCY; +diff --git a/drivers/tty/serial/pch_uart.c b/drivers/tty/serial/pch_uart.c +index c16234bca78fb..77f18445bb988 100644 +--- a/drivers/tty/serial/pch_uart.c ++++ b/drivers/tty/serial/pch_uart.c +@@ -635,22 +635,6 @@ static int push_rx(struct eg20t_port *priv, const unsigned char *buf, + return 0; + } + +-static int pop_tx_x(struct eg20t_port *priv, unsigned char *buf) +-{ +- int ret = 0; +- struct uart_port *port = &priv->port; +- +- if (port->x_char) { +- dev_dbg(priv->port.dev, "%s:X character send %02x (%lu)\n", +- __func__, port->x_char, jiffies); +- buf[0] = port->x_char; +- port->x_char = 0; +- ret = 1; +- } +- +- return ret; +-} +- + static int dma_push_rx(struct eg20t_port *priv, int size) + { + int room; +@@ -900,9 +884,10 @@ static unsigned int handle_tx(struct eg20t_port *priv) + + fifo_size = max(priv->fifo_size, 1); + tx_empty = 1; +- if (pop_tx_x(priv, xmit->buf)) { +- pch_uart_hal_write(priv, xmit->buf, 1); ++ if (port->x_char) { ++ pch_uart_hal_write(priv, &port->x_char, 1); + port->icount.tx++; ++ port->x_char = 0; + tx_empty = 0; + fifo_size--; + } +@@ -957,9 +942,11 @@ static unsigned int dma_handle_tx(struct eg20t_port *priv) + } + + fifo_size = max(priv->fifo_size, 1); +- if (pop_tx_x(priv, xmit->buf)) 
{ +- pch_uart_hal_write(priv, xmit->buf, 1); ++ ++ if (port->x_char) { ++ pch_uart_hal_write(priv, &port->x_char, 1); + port->icount.tx++; ++ port->x_char = 0; + fifo_size--; + } + +diff --git a/drivers/tty/serial/rda-uart.c b/drivers/tty/serial/rda-uart.c +index ff9a27d48bca8..877d86ff68190 100644 +--- a/drivers/tty/serial/rda-uart.c ++++ b/drivers/tty/serial/rda-uart.c +@@ -262,6 +262,8 @@ static void rda_uart_set_termios(struct uart_port *port, + /* Fall through */ + case CS7: + ctrl &= ~RDA_UART_DBITS_8; ++ termios->c_cflag &= ~CSIZE; ++ termios->c_cflag |= CS7; + break; + default: + ctrl |= RDA_UART_DBITS_8; +diff --git a/drivers/tty/serial/sa1100.c b/drivers/tty/serial/sa1100.c +index 8e618129e65c9..ff4b44bdf6b67 100644 +--- a/drivers/tty/serial/sa1100.c ++++ b/drivers/tty/serial/sa1100.c +@@ -454,6 +454,8 @@ sa1100_set_termios(struct uart_port *port, struct ktermios *termios, + baud = uart_get_baud_rate(port, termios, old, 0, port->uartclk/16); + quot = uart_get_divisor(port, baud); + ++ del_timer_sync(&sport->timer); ++ + spin_lock_irqsave(&sport->port.lock, flags); + + sport->port.read_status_mask &= UTSR0_TO_SM(UTSR0_TFS); +@@ -484,8 +486,6 @@ sa1100_set_termios(struct uart_port *port, struct ktermios *termios, + UTSR1_TO_SM(UTSR1_ROR); + } + +- del_timer_sync(&sport->timer); +- + /* + * Update the per-port timeout. 
+ */ +diff --git a/drivers/tty/serial/serial_txx9.c b/drivers/tty/serial/serial_txx9.c +index 8507f18900d09..2783baa5dfe59 100644 +--- a/drivers/tty/serial/serial_txx9.c ++++ b/drivers/tty/serial/serial_txx9.c +@@ -648,6 +648,8 @@ serial_txx9_set_termios(struct uart_port *port, struct ktermios *termios, + case CS6: /* not supported */ + case CS8: + cval |= TXX9_SILCR_UMODE_8BIT; ++ termios->c_cflag &= ~CSIZE; ++ termios->c_cflag |= CS8; + break; + } + +diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c +index ecff9b2088087..c066bb7f07b01 100644 +--- a/drivers/tty/serial/sh-sci.c ++++ b/drivers/tty/serial/sh-sci.c +@@ -2395,8 +2395,12 @@ static void sci_set_termios(struct uart_port *port, struct ktermios *termios, + int best_clk = -1; + unsigned long flags; + +- if ((termios->c_cflag & CSIZE) == CS7) ++ if ((termios->c_cflag & CSIZE) == CS7) { + smr_val |= SCSMR_CHR; ++ } else { ++ termios->c_cflag &= ~CSIZE; ++ termios->c_cflag |= CS8; ++ } + if (termios->c_cflag & PARENB) + smr_val |= SCSMR_PE; + if (termios->c_cflag & PARODD) +diff --git a/drivers/tty/serial/sifive.c b/drivers/tty/serial/sifive.c +index 6a2dc823ea828..7015632c49905 100644 +--- a/drivers/tty/serial/sifive.c ++++ b/drivers/tty/serial/sifive.c +@@ -667,12 +667,16 @@ static void sifive_serial_set_termios(struct uart_port *port, + int rate; + char nstop; + +- if ((termios->c_cflag & CSIZE) != CS8) ++ if ((termios->c_cflag & CSIZE) != CS8) { + dev_err_once(ssp->port.dev, "only 8-bit words supported\n"); ++ termios->c_cflag &= ~CSIZE; ++ termios->c_cflag |= CS8; ++ } + if (termios->c_iflag & (INPCK | PARMRK)) + dev_err_once(ssp->port.dev, "parity checking not supported\n"); + if (termios->c_iflag & BRKINT) + dev_err_once(ssp->port.dev, "BREAK detection not supported\n"); ++ termios->c_iflag &= ~(INPCK|PARMRK|BRKINT); + + /* Set number of stop bits */ + nstop = (termios->c_cflag & CSTOPB) ? 
2 : 1; +@@ -973,7 +977,7 @@ static int sifive_serial_probe(struct platform_device *pdev) + /* Set up clock divider */ + ssp->clkin_rate = clk_get_rate(ssp->clk); + ssp->baud_rate = SIFIVE_DEFAULT_BAUD_RATE; +- ssp->port.uartclk = ssp->baud_rate * 16; ++ ssp->port.uartclk = ssp->clkin_rate; + __ssp_update_div(ssp); + + platform_set_drvdata(pdev, ssp); +diff --git a/drivers/tty/serial/st-asc.c b/drivers/tty/serial/st-asc.c +index 7971997cdead7..ce35e3a131b16 100644 +--- a/drivers/tty/serial/st-asc.c ++++ b/drivers/tty/serial/st-asc.c +@@ -540,10 +540,14 @@ static void asc_set_termios(struct uart_port *port, struct ktermios *termios, + /* set character length */ + if ((cflag & CSIZE) == CS7) { + ctrl_val |= ASC_CTL_MODE_7BIT_PAR; ++ cflag |= PARENB; + } else { + ctrl_val |= (cflag & PARENB) ? ASC_CTL_MODE_8BIT_PAR : + ASC_CTL_MODE_8BIT; ++ cflag &= ~CSIZE; ++ cflag |= CS8; + } ++ termios->c_cflag = cflag; + + /* set stop bit */ + ctrl_val |= (cflag & CSTOPB) ? ASC_CTL_STOP_2BIT : ASC_CTL_STOP_1BIT; +diff --git a/drivers/tty/serial/stm32-usart.c b/drivers/tty/serial/stm32-usart.c +index d517b911cd042..d5a084ffde892 100644 +--- a/drivers/tty/serial/stm32-usart.c ++++ b/drivers/tty/serial/stm32-usart.c +@@ -745,13 +745,22 @@ static void stm32_set_termios(struct uart_port *port, struct ktermios *termios, + * CS8 or (CS7 + parity), 8 bits word aka [M1:M0] = 0b00 + * M0 and M1 already cleared by cr1 initialization. 
+ */ +- if (bits == 9) ++ if (bits == 9) { + cr1 |= USART_CR1_M0; +- else if ((bits == 7) && cfg->has_7bits_data) ++ } else if ((bits == 7) && cfg->has_7bits_data) { + cr1 |= USART_CR1_M1; +- else if (bits != 8) ++ } else if (bits != 8) { + dev_dbg(port->dev, "Unsupported data bits config: %u bits\n" + , bits); ++ cflag &= ~CSIZE; ++ cflag |= CS8; ++ termios->c_cflag = cflag; ++ bits = 8; ++ if (cflag & PARENB) { ++ bits++; ++ cr1 |= USART_CR1_M0; ++ } ++ } + + if (ofs->rtor != UNDEF_REG && (stm32_port->rx_ch || + stm32_port->fifoen)) { +diff --git a/drivers/tty/synclink_gt.c b/drivers/tty/synclink_gt.c +index ff345a8e0fcc6..b72471373c71d 100644 +--- a/drivers/tty/synclink_gt.c ++++ b/drivers/tty/synclink_gt.c +@@ -1752,6 +1752,8 @@ static int hdlcdev_init(struct slgt_info *info) + */ + static void hdlcdev_exit(struct slgt_info *info) + { ++ if (!info->netdev) ++ return; + unregister_hdlc_device(info->netdev); + free_netdev(info->netdev); + info->netdev = NULL; +diff --git a/drivers/tty/tty_buffer.c b/drivers/tty/tty_buffer.c +index bb148dbfbb88f..47f2370ad85cb 100644 +--- a/drivers/tty/tty_buffer.c ++++ b/drivers/tty/tty_buffer.c +@@ -172,7 +172,8 @@ static struct tty_buffer *tty_buffer_alloc(struct tty_port *port, size_t size) + have queued and recycle that ? 
*/ + if (atomic_read(&port->buf.mem_used) > port->buf.mem_limit) + return NULL; +- p = kmalloc(sizeof(struct tty_buffer) + 2 * size, GFP_ATOMIC); ++ p = kmalloc(sizeof(struct tty_buffer) + 2 * size, ++ GFP_ATOMIC | __GFP_NOWARN); + if (p == NULL) + return NULL; + +diff --git a/drivers/usb/core/hcd-pci.c b/drivers/usb/core/hcd-pci.c +index 9e26b0143a59a..db16efe293e0b 100644 +--- a/drivers/usb/core/hcd-pci.c ++++ b/drivers/usb/core/hcd-pci.c +@@ -604,10 +604,10 @@ const struct dev_pm_ops usb_hcd_pci_pm_ops = { + .suspend_noirq = hcd_pci_suspend_noirq, + .resume_noirq = hcd_pci_resume_noirq, + .resume = hcd_pci_resume, +- .freeze = check_root_hub_suspended, ++ .freeze = hcd_pci_suspend, + .freeze_noirq = check_root_hub_suspended, + .thaw_noirq = NULL, +- .thaw = NULL, ++ .thaw = hcd_pci_resume, + .poweroff = hcd_pci_suspend, + .poweroff_noirq = hcd_pci_suspend_noirq, + .restore_noirq = hcd_pci_resume_noirq, +diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c +index 39203f2ce6a19..fde211519a973 100644 +--- a/drivers/usb/core/hcd.c ++++ b/drivers/usb/core/hcd.c +@@ -2657,6 +2657,7 @@ int usb_add_hcd(struct usb_hcd *hcd, + { + int retval; + struct usb_device *rhdev; ++ struct usb_hcd *shared_hcd; + + if (!hcd->skip_phy_initialization && usb_hcd_is_primary_hcd(hcd)) { + hcd->phy_roothub = usb_phy_roothub_alloc(hcd->self.sysdev); +@@ -2813,13 +2814,26 @@ int usb_add_hcd(struct usb_hcd *hcd, + goto err_hcd_driver_start; + } + ++ /* starting here, usbcore will pay attention to the shared HCD roothub */ ++ shared_hcd = hcd->shared_hcd; ++ if (!usb_hcd_is_primary_hcd(hcd) && shared_hcd && HCD_DEFER_RH_REGISTER(shared_hcd)) { ++ retval = register_root_hub(shared_hcd); ++ if (retval != 0) ++ goto err_register_root_hub; ++ ++ if (shared_hcd->uses_new_polling && HCD_POLL_RH(shared_hcd)) ++ usb_hcd_poll_rh_status(shared_hcd); ++ } ++ + /* starting here, usbcore will pay attention to this root hub */ +- retval = register_root_hub(hcd); +- if (retval != 0) +- goto 
err_register_root_hub; ++ if (!HCD_DEFER_RH_REGISTER(hcd)) { ++ retval = register_root_hub(hcd); ++ if (retval != 0) ++ goto err_register_root_hub; + +- if (hcd->uses_new_polling && HCD_POLL_RH(hcd)) +- usb_hcd_poll_rh_status(hcd); ++ if (hcd->uses_new_polling && HCD_POLL_RH(hcd)) ++ usb_hcd_poll_rh_status(hcd); ++ } + + return retval; + +@@ -2862,6 +2876,7 @@ EXPORT_SYMBOL_GPL(usb_add_hcd); + void usb_remove_hcd(struct usb_hcd *hcd) + { + struct usb_device *rhdev = hcd->self.root_hub; ++ bool rh_registered; + + dev_info(hcd->self.controller, "remove, state %x\n", hcd->state); + +@@ -2872,6 +2887,7 @@ void usb_remove_hcd(struct usb_hcd *hcd) + + dev_dbg(hcd->self.controller, "roothub graceful disconnect\n"); + spin_lock_irq (&hcd_root_hub_lock); ++ rh_registered = hcd->rh_registered; + hcd->rh_registered = 0; + spin_unlock_irq (&hcd_root_hub_lock); + +@@ -2881,7 +2897,8 @@ void usb_remove_hcd(struct usb_hcd *hcd) + cancel_work_sync(&hcd->died_work); + + mutex_lock(&usb_bus_idr_lock); +- usb_disconnect(&rhdev); /* Sets rhdev to NULL */ ++ if (rh_registered) ++ usb_disconnect(&rhdev); /* Sets rhdev to NULL */ + mutex_unlock(&usb_bus_idr_lock); + + /* +diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c +index d5f233fa6f3b4..f8f2de7899a94 100644 +--- a/drivers/usb/core/quirks.c ++++ b/drivers/usb/core/quirks.c +@@ -511,6 +511,9 @@ static const struct usb_device_id usb_quirk_list[] = { + /* DJI CineSSD */ + { USB_DEVICE(0x2ca3, 0x0031), .driver_info = USB_QUIRK_NO_LPM }, + ++ /* DELL USB GEN2 */ ++ { USB_DEVICE(0x413c, 0xb062), .driver_info = USB_QUIRK_NO_LPM | USB_QUIRK_RESET_RESUME }, ++ + /* VCOM device */ + { USB_DEVICE(0x4296, 0x7570), .driver_info = USB_QUIRK_CONFIG_INTF_STRINGS }, + +diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c +index 379bbf27c7ce8..8fd6eefc671c7 100644 +--- a/drivers/usb/dwc2/gadget.c ++++ b/drivers/usb/dwc2/gadget.c +@@ -4486,7 +4486,6 @@ static int dwc2_hsotg_udc_start(struct usb_gadget *gadget, + + 
WARN_ON(hsotg->driver); + +- driver->driver.bus = NULL; + hsotg->driver = driver; + hsotg->gadget.dev.of_node = hsotg->dev->of_node; + hsotg->gadget.speed = USB_SPEED_UNKNOWN; +diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c +index 99964f96ff747..955bf820f4102 100644 +--- a/drivers/usb/dwc3/dwc3-pci.c ++++ b/drivers/usb/dwc3/dwc3-pci.c +@@ -211,7 +211,7 @@ static void dwc3_pci_resume_work(struct work_struct *work) + int ret; + + ret = pm_runtime_get_sync(&dwc3->dev); +- if (ret) { ++ if (ret < 0) { + pm_runtime_put_sync_autosuspend(&dwc3->dev); + return; + } +diff --git a/drivers/usb/host/isp116x-hcd.c b/drivers/usb/host/isp116x-hcd.c +index a87c0b26279e7..00a4e12a1f158 100644 +--- a/drivers/usb/host/isp116x-hcd.c ++++ b/drivers/usb/host/isp116x-hcd.c +@@ -1541,10 +1541,12 @@ static int isp116x_remove(struct platform_device *pdev) + + iounmap(isp116x->data_reg); + res = platform_get_resource(pdev, IORESOURCE_MEM, 1); +- release_mem_region(res->start, 2); ++ if (res) ++ release_mem_region(res->start, 2); + iounmap(isp116x->addr_reg); + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); +- release_mem_region(res->start, 2); ++ if (res) ++ release_mem_region(res->start, 2); + + usb_put_hcd(hcd); + return 0; +diff --git a/drivers/usb/host/oxu210hp-hcd.c b/drivers/usb/host/oxu210hp-hcd.c +index 65985247fc00f..f05b6f2b08656 100644 +--- a/drivers/usb/host/oxu210hp-hcd.c ++++ b/drivers/usb/host/oxu210hp-hcd.c +@@ -3906,8 +3906,10 @@ static int oxu_bus_suspend(struct usb_hcd *hcd) + } + } + ++ spin_unlock_irq(&oxu->lock); + /* turn off now-idle HC */ + del_timer_sync(&oxu->watchdog); ++ spin_lock_irq(&oxu->lock); + ehci_halt(oxu); + hcd->state = HC_STATE_SUSPENDED; + +diff --git a/drivers/usb/musb/omap2430.c b/drivers/usb/musb/omap2430.c +index 5c93226e0e20a..8def19fc50250 100644 +--- a/drivers/usb/musb/omap2430.c ++++ b/drivers/usb/musb/omap2430.c +@@ -433,6 +433,7 @@ static int omap2430_probe(struct platform_device *pdev) + control_node = 
of_parse_phandle(np, "ctrl-module", 0); + if (control_node) { + control_pdev = of_find_device_by_node(control_node); ++ of_node_put(control_node); + if (!control_pdev) { + dev_err(&pdev->dev, "Failed to get control device\n"); + ret = -EINVAL; +diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c +index 1ba4a72047dcb..62f79fd5257bc 100644 +--- a/drivers/usb/serial/option.c ++++ b/drivers/usb/serial/option.c +@@ -1137,6 +1137,8 @@ static const struct usb_device_id option_ids[] = { + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0, 0) }, + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, 0x0620, 0xff, 0xff, 0x30) }, /* EM160R-GL */ + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, 0x0620, 0xff, 0, 0) }, ++ { USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, 0x0700, 0xff), /* BG95 */ ++ .driver_info = RSVD(3) | ZLP }, + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0xff, 0x30) }, + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0, 0) }, + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500Q, 0xff, 0xff, 0x10), +diff --git a/drivers/usb/storage/karma.c b/drivers/usb/storage/karma.c +index 05cec81dcd3f2..38ddfedef629c 100644 +--- a/drivers/usb/storage/karma.c ++++ b/drivers/usb/storage/karma.c +@@ -174,24 +174,25 @@ static void rio_karma_destructor(void *extra) + + static int rio_karma_init(struct us_data *us) + { +- int ret = 0; + struct karma_data *data = kzalloc(sizeof(struct karma_data), GFP_NOIO); + + if (!data) +- goto out; ++ return -ENOMEM; + + data->recv = kmalloc(RIO_RECV_LEN, GFP_NOIO); + if (!data->recv) { + kfree(data); +- goto out; ++ return -ENOMEM; + } + + us->extra = data; + us->extra_destructor = rio_karma_destructor; +- ret = rio_karma_send_command(RIO_ENTER_STORAGE, us); +- data->in_storage = (ret == 0); +-out: +- return ret; ++ if (rio_karma_send_command(RIO_ENTER_STORAGE, us)) ++ return -EIO; ++ ++ 
data->in_storage = 1; ++ ++ return 0; + } + + static struct scsi_host_template karma_host_template; +diff --git a/drivers/usb/usbip/stub_dev.c b/drivers/usb/usbip/stub_dev.c +index d8d3892e5a69a..3c6d452e3bf40 100644 +--- a/drivers/usb/usbip/stub_dev.c ++++ b/drivers/usb/usbip/stub_dev.c +@@ -393,7 +393,6 @@ static int stub_probe(struct usb_device *udev) + + err_port: + dev_set_drvdata(&udev->dev, NULL); +- usb_put_dev(udev); + + /* we already have busid_priv, just lock busid_lock */ + spin_lock(&busid_priv->busid_lock); +@@ -408,6 +407,7 @@ call_put_busid_priv: + put_busid_priv(busid_priv); + + sdev_free: ++ usb_put_dev(udev); + stub_device_free(sdev); + + return rc; +diff --git a/drivers/usb/usbip/stub_rx.c b/drivers/usb/usbip/stub_rx.c +index e2b0195322340..d3d360ff0d24e 100644 +--- a/drivers/usb/usbip/stub_rx.c ++++ b/drivers/usb/usbip/stub_rx.c +@@ -138,7 +138,9 @@ static int tweak_set_configuration_cmd(struct urb *urb) + req = (struct usb_ctrlrequest *) urb->setup_packet; + config = le16_to_cpu(req->wValue); + ++ usb_lock_device(sdev->udev); + err = usb_set_configuration(sdev->udev, config); ++ usb_unlock_device(sdev->udev); + if (err && err != -ENODEV) + dev_err(&sdev->udev->dev, "can't set config #%d, error %d\n", + config, err); +diff --git a/drivers/vhost/vringh.c b/drivers/vhost/vringh.c +index 4653de001e261..264cbe385a63b 100644 +--- a/drivers/vhost/vringh.c ++++ b/drivers/vhost/vringh.c +@@ -264,7 +264,7 @@ __vringh_iov(struct vringh *vrh, u16 i, + gfp_t gfp, + int (*copy)(void *dst, const void *src, size_t len)) + { +- int err, count = 0, up_next, desc_max; ++ int err, count = 0, indirect_count = 0, up_next, desc_max; + struct vring_desc desc, *descs; + struct vringh_range range = { -1ULL, 0 }, slowrange; + bool slow = false; +@@ -321,7 +321,12 @@ __vringh_iov(struct vringh *vrh, u16 i, + continue; + } + +- if (count++ == vrh->vring.num) { ++ if (up_next == -1) ++ count++; ++ else ++ indirect_count++; ++ ++ if (count > vrh->vring.num || indirect_count 
> desc_max) { + vringh_bad("Descriptor loop in %p", descs); + err = -ELOOP; + goto fail; +@@ -383,6 +388,7 @@ __vringh_iov(struct vringh *vrh, u16 i, + i = return_from_indirect(vrh, &up_next, + &descs, &desc_max); + slow = false; ++ indirect_count = 0; + } else + break; + } +diff --git a/drivers/video/fbdev/amba-clcd.c b/drivers/video/fbdev/amba-clcd.c +index 7de43be6ef2c2..3b7a7c74bf0a5 100644 +--- a/drivers/video/fbdev/amba-clcd.c ++++ b/drivers/video/fbdev/amba-clcd.c +@@ -774,12 +774,15 @@ static int clcdfb_of_vram_setup(struct clcd_fb *fb) + return -ENODEV; + + fb->fb.screen_base = of_iomap(memory, 0); +- if (!fb->fb.screen_base) ++ if (!fb->fb.screen_base) { ++ of_node_put(memory); + return -ENOMEM; ++ } + + fb->fb.fix.smem_start = of_translate_address(memory, + of_get_address(memory, 0, &size, NULL)); + fb->fb.fix.smem_len = size; ++ of_node_put(memory); + + return 0; + } +diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c +index 75b7705140673..1decded4845f7 100644 +--- a/drivers/video/fbdev/core/fbcon.c ++++ b/drivers/video/fbdev/core/fbcon.c +@@ -3286,6 +3286,9 @@ static void fbcon_register_existing_fbs(struct work_struct *work) + + console_lock(); + ++ deferred_takeover = false; ++ logo_shown = FBCON_LOGO_DONTSHOW; ++ + for_each_registered_fb(i) + fbcon_fb_registered(registered_fb[i]); + +@@ -3303,8 +3306,6 @@ static int fbcon_output_notifier(struct notifier_block *nb, + pr_info("fbcon: Taking over console\n"); + + dummycon_unregister_output_notifier(&fbcon_output_nb); +- deferred_takeover = false; +- logo_shown = FBCON_LOGO_DONTSHOW; + + /* We may get called in atomic context */ + schedule_work(&fbcon_deferred_takeover_work); +diff --git a/drivers/video/fbdev/pxa3xx-gcu.c b/drivers/video/fbdev/pxa3xx-gcu.c +index 74ffb446e00c9..7c4694d70dac1 100644 +--- a/drivers/video/fbdev/pxa3xx-gcu.c ++++ b/drivers/video/fbdev/pxa3xx-gcu.c +@@ -651,6 +651,7 @@ static int pxa3xx_gcu_probe(struct platform_device *pdev) + for (i = 0; i < 8; 
i++) { + ret = pxa3xx_gcu_add_buffer(dev, priv); + if (ret) { ++ pxa3xx_gcu_free_buffers(dev, priv); + dev_err(dev, "failed to allocate DMA memory\n"); + goto err_disable_clk; + } +@@ -667,15 +668,15 @@ static int pxa3xx_gcu_probe(struct platform_device *pdev) + SHARED_SIZE, irq); + return 0; + +-err_free_dma: +- dma_free_coherent(dev, SHARED_SIZE, +- priv->shared, priv->shared_phys); ++err_disable_clk: ++ clk_disable_unprepare(priv->clk); + + err_misc_deregister: + misc_deregister(&priv->misc_dev); + +-err_disable_clk: +- clk_disable_unprepare(priv->clk); ++err_free_dma: ++ dma_free_coherent(dev, SHARED_SIZE, ++ priv->shared, priv->shared_phys); + + return ret; + } +@@ -688,6 +689,7 @@ static int pxa3xx_gcu_remove(struct platform_device *pdev) + pxa3xx_gcu_wait_idle(priv); + misc_deregister(&priv->misc_dev); + dma_free_coherent(dev, SHARED_SIZE, priv->shared, priv->shared_phys); ++ clk_disable_unprepare(priv->clk); + pxa3xx_gcu_free_buffers(dev, priv); + + return 0; +diff --git a/drivers/watchdog/ts4800_wdt.c b/drivers/watchdog/ts4800_wdt.c +index c137ad2bd5c31..0ea554c7cda57 100644 +--- a/drivers/watchdog/ts4800_wdt.c ++++ b/drivers/watchdog/ts4800_wdt.c +@@ -125,13 +125,16 @@ static int ts4800_wdt_probe(struct platform_device *pdev) + ret = of_property_read_u32_index(np, "syscon", 1, ®); + if (ret < 0) { + dev_err(dev, "no offset in syscon\n"); ++ of_node_put(syscon_np); + return ret; + } + + /* allocate memory for watchdog struct */ + wdt = devm_kzalloc(dev, sizeof(*wdt), GFP_KERNEL); +- if (!wdt) ++ if (!wdt) { ++ of_node_put(syscon_np); + return -ENOMEM; ++ } + + /* set regmap and offset to know where to write */ + wdt->feed_offset = reg; +diff --git a/drivers/watchdog/wdat_wdt.c b/drivers/watchdog/wdat_wdt.c +index 88c5e6361aa05..fddbb39433bee 100644 +--- a/drivers/watchdog/wdat_wdt.c ++++ b/drivers/watchdog/wdat_wdt.c +@@ -462,6 +462,7 @@ static int wdat_wdt_probe(struct platform_device *pdev) + return ret; + + watchdog_set_nowayout(&wdat->wdd, nowayout); 
++ watchdog_stop_on_reboot(&wdat->wdd); + return devm_watchdog_register_device(dev, &wdat->wdd); + } + +diff --git a/drivers/xen/xlate_mmu.c b/drivers/xen/xlate_mmu.c +index 7b1077f0abcb0..c8aa4f5f85db1 100644 +--- a/drivers/xen/xlate_mmu.c ++++ b/drivers/xen/xlate_mmu.c +@@ -261,7 +261,6 @@ int __init xen_xlate_map_ballooned_pages(xen_pfn_t **gfns, void **virt, + + return 0; + } +-EXPORT_SYMBOL_GPL(xen_xlate_map_ballooned_pages); + + struct remap_pfn { + struct mm_struct *mm; +diff --git a/fs/afs/dir.c b/fs/afs/dir.c +index 8c39533d122a5..3a355a209919b 100644 +--- a/fs/afs/dir.c ++++ b/fs/afs/dir.c +@@ -415,8 +415,11 @@ static int afs_dir_iterate_block(struct afs_vnode *dvnode, + } + + /* skip if starts before the current position */ +- if (offset < curr) ++ if (offset < curr) { ++ if (next > curr) ++ ctx->pos = blkoff + next * sizeof(union afs_xdr_dirent); + continue; ++ } + + /* found the next entry */ + if (!dir_emit(ctx, dire->u.name, nlen, +diff --git a/fs/binfmt_flat.c b/fs/binfmt_flat.c +index 196f9f64d075c..c999bc0c0691f 100644 +--- a/fs/binfmt_flat.c ++++ b/fs/binfmt_flat.c +@@ -422,6 +422,30 @@ static void old_reloc(unsigned long rl) + + /****************************************************************************/ + ++static inline u32 __user *skip_got_header(u32 __user *rp) ++{ ++ if (IS_ENABLED(CONFIG_RISCV)) { ++ /* ++ * RISC-V has a 16 byte GOT PLT header for elf64-riscv ++ * and 8 byte GOT PLT header for elf32-riscv. ++ * Skip the whole GOT PLT header, since it is reserved ++ * for the dynamic linker (ld.so). 
++ */ ++ u32 rp_val0, rp_val1; ++ ++ if (get_user(rp_val0, rp)) ++ return rp; ++ if (get_user(rp_val1, rp + 1)) ++ return rp; ++ ++ if (rp_val0 == 0xffffffff && rp_val1 == 0xffffffff) ++ rp += 4; ++ else if (rp_val0 == 0xffffffff) ++ rp += 2; ++ } ++ return rp; ++} ++ + static int load_flat_file(struct linux_binprm *bprm, + struct lib_info *libinfo, int id, unsigned long *extra_stack) + { +@@ -769,7 +793,8 @@ static int load_flat_file(struct linux_binprm *bprm, + * image. + */ + if (flags & FLAT_FLAG_GOTPIC) { +- for (rp = (u32 __user *)datapos; ; rp++) { ++ rp = skip_got_header((u32 __user *) datapos); ++ for (; ; rp++) { + u32 addr, rp_val; + if (get_user(rp_val, rp)) + return -EFAULT; +diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c +index f18c6d97932ed..a4b3e6f6bf021 100644 +--- a/fs/btrfs/disk-io.c ++++ b/fs/btrfs/disk-io.c +@@ -2927,7 +2927,7 @@ int open_ctree(struct super_block *sb, + ~BTRFS_FEATURE_INCOMPAT_SUPP; + if (features) { + btrfs_err(fs_info, +- "cannot mount because of unsupported optional features (%llx)", ++ "cannot mount because of unsupported optional features (0x%llx)", + features); + err = -EINVAL; + goto fail_csum; +@@ -2965,7 +2965,7 @@ int open_ctree(struct super_block *sb, + ~BTRFS_FEATURE_COMPAT_RO_SUPP; + if (!sb_rdonly(sb) && features) { + btrfs_err(fs_info, +- "cannot mount read-write because of unsupported optional features (%llx)", ++ "cannot mount read-write because of unsupported optional features (0x%llx)", + features); + err = -EINVAL; + goto fail_csum; +diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c +index 8898682c91038..c7706a769de12 100644 +--- a/fs/btrfs/volumes.c ++++ b/fs/btrfs/volumes.c +@@ -7383,12 +7383,12 @@ int btrfs_read_chunk_tree(struct btrfs_fs_info *fs_info) + * do another round of validation checks. 
+ */ + if (total_dev != fs_info->fs_devices->total_devices) { +- btrfs_err(fs_info, +- "super_num_devices %llu mismatch with num_devices %llu found here", ++ btrfs_warn(fs_info, ++"super block num_devices %llu mismatch with DEV_ITEM count %llu, will be repaired on next transaction commit", + btrfs_super_num_devices(fs_info->super_copy), + total_dev); +- ret = -EINVAL; +- goto error; ++ fs_info->fs_devices->total_devices = total_dev; ++ btrfs_set_super_num_devices(fs_info->super_copy, total_dev); + } + if (btrfs_super_total_bytes(fs_info->super_copy) < + fs_info->fs_devices->total_rw_bytes) { +diff --git a/fs/ceph/xattr.c b/fs/ceph/xattr.c +index cb18ee637cb7b..4bcf0226818dc 100644 +--- a/fs/ceph/xattr.c ++++ b/fs/ceph/xattr.c +@@ -316,6 +316,14 @@ static ssize_t ceph_vxattrcb_snap_btime(struct ceph_inode_info *ci, char *val, + } + #define XATTR_RSTAT_FIELD(_type, _name) \ + XATTR_NAME_CEPH(_type, _name, VXATTR_FLAG_RSTAT) ++#define XATTR_RSTAT_FIELD_UPDATABLE(_type, _name) \ ++ { \ ++ .name = CEPH_XATTR_NAME(_type, _name), \ ++ .name_size = sizeof (CEPH_XATTR_NAME(_type, _name)), \ ++ .getxattr_cb = ceph_vxattrcb_ ## _type ## _ ## _name, \ ++ .exists_cb = NULL, \ ++ .flags = VXATTR_FLAG_RSTAT, \ ++ } + #define XATTR_LAYOUT_FIELD(_type, _name, _field) \ + { \ + .name = CEPH_XATTR_NAME2(_type, _name, _field), \ +@@ -353,7 +361,7 @@ static struct ceph_vxattr ceph_dir_vxattrs[] = { + XATTR_RSTAT_FIELD(dir, rfiles), + XATTR_RSTAT_FIELD(dir, rsubdirs), + XATTR_RSTAT_FIELD(dir, rbytes), +- XATTR_RSTAT_FIELD(dir, rctime), ++ XATTR_RSTAT_FIELD_UPDATABLE(dir, rctime), + { + .name = "ceph.dir.pin", + .name_size = sizeof("ceph.dir.pin"), +diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h +index 9c0e348cb00f7..414936989255a 100644 +--- a/fs/cifs/cifsglob.h ++++ b/fs/cifs/cifsglob.h +@@ -1930,11 +1930,13 @@ extern mempool_t *cifs_mid_poolp; + + /* Operations for different SMB versions */ + #define SMB1_VERSION_STRING "1.0" ++#define SMB20_VERSION_STRING "2.0" ++#ifdef 
CONFIG_CIFS_ALLOW_INSECURE_LEGACY + extern struct smb_version_operations smb1_operations; + extern struct smb_version_values smb1_values; +-#define SMB20_VERSION_STRING "2.0" + extern struct smb_version_operations smb20_operations; + extern struct smb_version_values smb20_values; ++#endif /* CIFS_ALLOW_INSECURE_LEGACY */ + #define SMB21_VERSION_STRING "2.1" + extern struct smb_version_operations smb21_operations; + extern struct smb_version_values smb21_values; +diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c +index 7985fe25850b7..57164563eec69 100644 +--- a/fs/cifs/smb2ops.c ++++ b/fs/cifs/smb2ops.c +@@ -3487,11 +3487,13 @@ smb3_set_oplock_level(struct cifsInodeInfo *cinode, __u32 oplock, + } + } + ++#ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY + static bool + smb2_is_read_op(__u32 oplock) + { + return oplock == SMB2_OPLOCK_LEVEL_II; + } ++#endif /* CIFS_ALLOW_INSECURE_LEGACY */ + + static bool + smb21_is_read_op(__u32 oplock) +@@ -4573,7 +4575,7 @@ out: + return rc; + } + +- ++#ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY + struct smb_version_operations smb20_operations = { + .compare_fids = smb2_compare_fids, + .setup_request = smb2_setup_request, +@@ -4670,6 +4672,7 @@ struct smb_version_operations smb20_operations = { + .fiemap = smb3_fiemap, + .llseek = smb3_llseek, + }; ++#endif /* CIFS_ALLOW_INSECURE_LEGACY */ + + struct smb_version_operations smb21_operations = { + .compare_fids = smb2_compare_fids, +@@ -4987,6 +4990,7 @@ struct smb_version_operations smb311_operations = { + .llseek = smb3_llseek, + }; + ++#ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY + struct smb_version_values smb20_values = { + .version_string = SMB20_VERSION_STRING, + .protocol_id = SMB20_PROT_ID, +@@ -5007,6 +5011,7 @@ struct smb_version_values smb20_values = { + .signing_required = SMB2_NEGOTIATE_SIGNING_REQUIRED, + .create_lease_size = sizeof(struct create_lease), + }; ++#endif /* ALLOW_INSECURE_LEGACY */ + + struct smb_version_values smb21_values = { + .version_string = SMB21_VERSION_STRING, 
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c +index e068f82ffeddf..0857eb7a95e28 100644 +--- a/fs/cifs/smb2pdu.c ++++ b/fs/cifs/smb2pdu.c +@@ -356,6 +356,9 @@ smb2_reconnect(__le16 smb2_command, struct cifs_tcon *tcon) + rc = -EHOSTDOWN; + mutex_unlock(&tcon->ses->session_mutex); + goto failed; ++ } else if (rc) { ++ mutex_unlock(&ses->session_mutex); ++ goto out; + } + } + if (rc || !tcon->need_reconnect) { +diff --git a/fs/dax.c b/fs/dax.c +index 12953e892bb25..bcb7c6b43fb2b 100644 +--- a/fs/dax.c ++++ b/fs/dax.c +@@ -819,7 +819,8 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index, + if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp)) + goto unlock_pmd; + +- flush_cache_page(vma, address, pfn); ++ flush_cache_range(vma, address, ++ address + HPAGE_PMD_SIZE); + pmd = pmdp_invalidate(vma, address, pmdp); + pmd = pmd_wrprotect(pmd); + pmd = pmd_mkclean(pmd); +diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c +index 53500b555bfa8..4ae8becdb51db 100644 +--- a/fs/dlm/lock.c ++++ b/fs/dlm/lock.c +@@ -1551,6 +1551,7 @@ static int _remove_from_waiters(struct dlm_lkb *lkb, int mstype, + lkb->lkb_wait_type = 0; + lkb->lkb_flags &= ~DLM_IFL_OVERLAP_CANCEL; + lkb->lkb_wait_count--; ++ unhold_lkb(lkb); + goto out_del; + } + +@@ -1577,6 +1578,7 @@ static int _remove_from_waiters(struct dlm_lkb *lkb, int mstype, + log_error(ls, "remwait error %x reply %d wait_type %d overlap", + lkb->lkb_id, mstype, lkb->lkb_wait_type); + lkb->lkb_wait_count--; ++ unhold_lkb(lkb); + lkb->lkb_wait_type = 0; + } + +@@ -5312,11 +5314,16 @@ int dlm_recover_waiters_post(struct dlm_ls *ls) + lkb->lkb_flags &= ~DLM_IFL_OVERLAP_UNLOCK; + lkb->lkb_flags &= ~DLM_IFL_OVERLAP_CANCEL; + lkb->lkb_wait_type = 0; +- lkb->lkb_wait_count = 0; ++ /* drop all wait_count references we still ++ * hold a reference for this iteration. 
++ */ ++ while (lkb->lkb_wait_count) { ++ lkb->lkb_wait_count--; ++ unhold_lkb(lkb); ++ } + mutex_lock(&ls->ls_waiters_mutex); + list_del_init(&lkb->lkb_wait_reply); + mutex_unlock(&ls->ls_waiters_mutex); +- unhold_lkb(lkb); /* for waiters list */ + + if (oc || ou) { + /* do an unlock or cancel instead of resending */ +diff --git a/fs/dlm/plock.c b/fs/dlm/plock.c +index c38b2b8ffd1d3..a10d2bcfe75a8 100644 +--- a/fs/dlm/plock.c ++++ b/fs/dlm/plock.c +@@ -23,11 +23,11 @@ struct plock_op { + struct list_head list; + int done; + struct dlm_plock_info info; ++ int (*callback)(struct file_lock *fl, int result); + }; + + struct plock_xop { + struct plock_op xop; +- int (*callback)(struct file_lock *fl, int result); + void *fl; + void *file; + struct file_lock flc; +@@ -129,19 +129,18 @@ int dlm_posix_lock(dlm_lockspace_t *lockspace, u64 number, struct file *file, + /* fl_owner is lockd which doesn't distinguish + processes on the nfs client */ + op->info.owner = (__u64) fl->fl_pid; +- xop->callback = fl->fl_lmops->lm_grant; ++ op->callback = fl->fl_lmops->lm_grant; + locks_init_lock(&xop->flc); + locks_copy_lock(&xop->flc, fl); + xop->fl = fl; + xop->file = file; + } else { + op->info.owner = (__u64)(long) fl->fl_owner; +- xop->callback = NULL; + } + + send_op(op); + +- if (xop->callback == NULL) { ++ if (!op->callback) { + rv = wait_event_interruptible(recv_wq, (op->done != 0)); + if (rv == -ERESTARTSYS) { + log_debug(ls, "dlm_posix_lock: wait killed %llx", +@@ -203,7 +202,7 @@ static int dlm_plock_callback(struct plock_op *op) + file = xop->file; + flc = &xop->flc; + fl = xop->fl; +- notify = xop->callback; ++ notify = op->callback; + + if (op->info.rv) { + notify(fl, op->info.rv); +@@ -436,10 +435,9 @@ static ssize_t dev_write(struct file *file, const char __user *u, size_t count, + if (op->info.fsid == info.fsid && + op->info.number == info.number && + op->info.owner == info.owner) { +- struct plock_xop *xop = (struct plock_xop *)op; + list_del_init(&op->list); + 
memcpy(&op->info, &info, sizeof(info)); +- if (xop->callback) ++ if (op->callback) + do_callback = 1; + else + op->done = 1; +diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c +index 8f665aa1d706e..62384ae77a78f 100644 +--- a/fs/ext4/inline.c ++++ b/fs/ext4/inline.c +@@ -2013,6 +2013,18 @@ int ext4_convert_inline_data(struct inode *inode) + if (!ext4_has_inline_data(inode)) { + ext4_clear_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA); + return 0; ++ } else if (!ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA)) { ++ /* ++ * Inode has inline data but EXT4_STATE_MAY_INLINE_DATA is ++ * cleared. This means we are in the middle of moving of ++ * inline data to delay allocated block. Just force writeout ++ * here to finish conversion. ++ */ ++ error = filemap_flush(inode->i_mapping); ++ if (error) ++ return error; ++ if (!ext4_has_inline_data(inode)) ++ return 0; + } + + needed_blocks = ext4_writepage_trans_blocks(inode); +diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c +index 00686fbe3c27d..1cac574911a79 100644 +--- a/fs/ext4/inode.c ++++ b/fs/ext4/inode.c +@@ -5668,6 +5668,7 @@ int ext4_setattr(struct dentry *dentry, struct iattr *attr) + if (attr->ia_valid & ATTR_SIZE) { + handle_t *handle; + loff_t oldsize = inode->i_size; ++ loff_t old_disksize; + int shrink = (attr->ia_size < inode->i_size); + + if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))) { +@@ -5723,6 +5724,7 @@ int ext4_setattr(struct dentry *dentry, struct iattr *attr) + inode->i_ctime = inode->i_mtime; + } + down_write(&EXT4_I(inode)->i_data_sem); ++ old_disksize = EXT4_I(inode)->i_disksize; + EXT4_I(inode)->i_disksize = attr->ia_size; + rc = ext4_mark_inode_dirty(handle, inode); + if (!error) +@@ -5734,6 +5736,8 @@ int ext4_setattr(struct dentry *dentry, struct iattr *attr) + */ + if (!error) + i_size_write(inode, attr->ia_size); ++ else ++ EXT4_I(inode)->i_disksize = old_disksize; + up_write(&EXT4_I(inode)->i_data_sem); + ext4_journal_stop(handle); + if (error) +diff --git a/fs/ext4/namei.c 
b/fs/ext4/namei.c +index f10307215d583..b01059bb562c0 100644 +--- a/fs/ext4/namei.c ++++ b/fs/ext4/namei.c +@@ -273,9 +273,9 @@ static struct dx_frame *dx_probe(struct ext4_filename *fname, + struct dx_hash_info *hinfo, + struct dx_frame *frame); + static void dx_release(struct dx_frame *frames); +-static int dx_make_map(struct inode *dir, struct ext4_dir_entry_2 *de, +- unsigned blocksize, struct dx_hash_info *hinfo, +- struct dx_map_entry map[]); ++static int dx_make_map(struct inode *dir, struct buffer_head *bh, ++ struct dx_hash_info *hinfo, ++ struct dx_map_entry *map_tail); + static void dx_sort_map(struct dx_map_entry *map, unsigned count); + static struct ext4_dir_entry_2 *dx_move_dirents(char *from, char *to, + struct dx_map_entry *offsets, int count, unsigned blocksize); +@@ -750,12 +750,14 @@ static struct dx_frame * + dx_probe(struct ext4_filename *fname, struct inode *dir, + struct dx_hash_info *hinfo, struct dx_frame *frame_in) + { +- unsigned count, indirect; ++ unsigned count, indirect, level, i; + struct dx_entry *at, *entries, *p, *q, *m; + struct dx_root *root; + struct dx_frame *frame = frame_in; + struct dx_frame *ret_err = ERR_PTR(ERR_BAD_DX_DIR); + u32 hash; ++ ext4_lblk_t block; ++ ext4_lblk_t blocks[EXT4_HTREE_LEVEL]; + + memset(frame_in, 0, EXT4_HTREE_LEVEL * sizeof(frame_in[0])); + frame->bh = ext4_read_dirblock(dir, 0, INDEX); +@@ -811,6 +813,8 @@ dx_probe(struct ext4_filename *fname, struct inode *dir, + } + + dxtrace(printk("Look up %x", hash)); ++ level = 0; ++ blocks[0] = 0; + while (1) { + count = dx_get_count(entries); + if (!count || count > dx_get_limit(entries)) { +@@ -852,15 +856,27 @@ dx_probe(struct ext4_filename *fname, struct inode *dir, + dx_get_block(at))); + frame->entries = entries; + frame->at = at; +- if (!indirect--) ++ ++ block = dx_get_block(at); ++ for (i = 0; i <= level; i++) { ++ if (blocks[i] == block) { ++ ext4_warning_inode(dir, ++ "dx entry: tree cycle block %u points back to block %u", ++ blocks[level], 
block); ++ goto fail; ++ } ++ } ++ if (++level > indirect) + return frame; ++ blocks[level] = block; + frame++; +- frame->bh = ext4_read_dirblock(dir, dx_get_block(at), INDEX); ++ frame->bh = ext4_read_dirblock(dir, block, INDEX); + if (IS_ERR(frame->bh)) { + ret_err = (struct dx_frame *) frame->bh; + frame->bh = NULL; + goto fail; + } ++ + entries = ((struct dx_node *) frame->bh->b_data)->entries; + + if (dx_get_limit(entries) != dx_node_limit(dir)) { +@@ -1205,15 +1221,23 @@ static inline int search_dirblock(struct buffer_head *bh, + * Create map of hash values, offsets, and sizes, stored at end of block. + * Returns number of entries mapped. + */ +-static int dx_make_map(struct inode *dir, struct ext4_dir_entry_2 *de, +- unsigned blocksize, struct dx_hash_info *hinfo, ++static int dx_make_map(struct inode *dir, struct buffer_head *bh, ++ struct dx_hash_info *hinfo, + struct dx_map_entry *map_tail) + { + int count = 0; +- char *base = (char *) de; ++ struct ext4_dir_entry_2 *de = (struct ext4_dir_entry_2 *)bh->b_data; ++ unsigned int buflen = bh->b_size; ++ char *base = bh->b_data; + struct dx_hash_info h = *hinfo; + +- while ((char *) de < base + blocksize) { ++ if (ext4_has_metadata_csum(dir->i_sb)) ++ buflen -= sizeof(struct ext4_dir_entry_tail); ++ ++ while ((char *) de < base + buflen) { ++ if (ext4_check_dir_entry(dir, NULL, de, bh, base, buflen, ++ ((char *)de) - base)) ++ return -EFSCORRUPTED; + if (de->name_len && de->inode) { + ext4fs_dirhash(dir, de->name, de->name_len, &h); + map_tail--; +@@ -1223,8 +1247,7 @@ static int dx_make_map(struct inode *dir, struct ext4_dir_entry_2 *de, + count++; + cond_resched(); + } +- /* XXX: do we need to check rec_len == 0 case? 
-Chris */ +- de = ext4_next_entry(de, blocksize); ++ de = ext4_next_entry(de, dir->i_sb->s_blocksize); + } + return count; + } +@@ -1848,8 +1871,11 @@ static struct ext4_dir_entry_2 *do_split(handle_t *handle, struct inode *dir, + + /* create map in the end of data2 block */ + map = (struct dx_map_entry *) (data2 + blocksize); +- count = dx_make_map(dir, (struct ext4_dir_entry_2 *) data1, +- blocksize, hinfo, map); ++ count = dx_make_map(dir, *bh, hinfo, map); ++ if (count < 0) { ++ err = count; ++ goto journal_error; ++ } + map -= count; + dx_sort_map(map, count); + /* Ensure that neither split block is over half full */ +@@ -3442,6 +3468,9 @@ static struct buffer_head *ext4_get_first_dir_block(handle_t *handle, + struct buffer_head *bh; + + if (!ext4_has_inline_data(inode)) { ++ struct ext4_dir_entry_2 *de; ++ unsigned int offset; ++ + /* The first directory block must not be a hole, so + * treat it as DIRENT_HTREE + */ +@@ -3450,9 +3479,30 @@ static struct buffer_head *ext4_get_first_dir_block(handle_t *handle, + *retval = PTR_ERR(bh); + return NULL; + } +- *parent_de = ext4_next_entry( +- (struct ext4_dir_entry_2 *)bh->b_data, +- inode->i_sb->s_blocksize); ++ ++ de = (struct ext4_dir_entry_2 *) bh->b_data; ++ if (ext4_check_dir_entry(inode, NULL, de, bh, bh->b_data, ++ bh->b_size, 0) || ++ le32_to_cpu(de->inode) != inode->i_ino || ++ strcmp(".", de->name)) { ++ EXT4_ERROR_INODE(inode, "directory missing '.'"); ++ brelse(bh); ++ *retval = -EFSCORRUPTED; ++ return NULL; ++ } ++ offset = ext4_rec_len_from_disk(de->rec_len, ++ inode->i_sb->s_blocksize); ++ de = ext4_next_entry(de, inode->i_sb->s_blocksize); ++ if (ext4_check_dir_entry(inode, NULL, de, bh, bh->b_data, ++ bh->b_size, offset) || ++ le32_to_cpu(de->inode) == 0 || strcmp("..", de->name)) { ++ EXT4_ERROR_INODE(inode, "directory missing '..'"); ++ brelse(bh); ++ *retval = -EFSCORRUPTED; ++ return NULL; ++ } ++ *parent_de = de; ++ + return bh; + } + +diff --git a/fs/ext4/super.c b/fs/ext4/super.c +index 
c13879bd21683..eba2506f43991 100644 +--- a/fs/ext4/super.c ++++ b/fs/ext4/super.c +@@ -1703,6 +1703,7 @@ static const struct mount_opts { + MOPT_EXT4_ONLY | MOPT_CLEAR}, + {Opt_warn_on_error, EXT4_MOUNT_WARN_ON_ERROR, MOPT_SET}, + {Opt_nowarn_on_error, EXT4_MOUNT_WARN_ON_ERROR, MOPT_CLEAR}, ++ {Opt_commit, 0, MOPT_NO_EXT2}, + {Opt_nojournal_checksum, EXT4_MOUNT_JOURNAL_CHECKSUM, + MOPT_EXT4_ONLY | MOPT_CLEAR}, + {Opt_journal_checksum, EXT4_MOUNT_JOURNAL_CHECKSUM, +diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c +index 54f0d2c4c7d87..44c5110e18f04 100644 +--- a/fs/f2fs/checkpoint.c ++++ b/fs/f2fs/checkpoint.c +@@ -149,7 +149,7 @@ static bool __is_bitmap_valid(struct f2fs_sb_info *sbi, block_t blkaddr, + f2fs_err(sbi, "Inconsistent error blkaddr:%u, sit bitmap:%d", + blkaddr, exist); + set_sbi_flag(sbi, SBI_NEED_FSCK); +- WARN_ON(1); ++ dump_stack(); + } + return exist; + } +@@ -187,7 +187,7 @@ bool f2fs_is_valid_blkaddr(struct f2fs_sb_info *sbi, + f2fs_warn(sbi, "access invalid blkaddr:%u", + blkaddr); + set_sbi_flag(sbi, SBI_NEED_FSCK); +- WARN_ON(1); ++ dump_stack(); + return false; + } else { + return __is_bitmap_valid(sbi, blkaddr, type); +diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h +index 5645502c156df..c73a1638c18b4 100644 +--- a/fs/f2fs/f2fs.h ++++ b/fs/f2fs/f2fs.h +@@ -2100,11 +2100,17 @@ static inline void dec_valid_node_count(struct f2fs_sb_info *sbi, + { + spin_lock(&sbi->stat_lock); + +- f2fs_bug_on(sbi, !sbi->total_valid_block_count); +- f2fs_bug_on(sbi, !sbi->total_valid_node_count); ++ if (unlikely(!sbi->total_valid_block_count || ++ !sbi->total_valid_node_count)) { ++ f2fs_warn(sbi, "dec_valid_node_count: inconsistent block counts, total_valid_block:%u, total_valid_node:%u", ++ sbi->total_valid_block_count, ++ sbi->total_valid_node_count); ++ set_sbi_flag(sbi, SBI_NEED_FSCK); ++ } else { ++ sbi->total_valid_block_count--; ++ sbi->total_valid_node_count--; ++ } + +- sbi->total_valid_node_count--; +- sbi->total_valid_block_count--; + if 
(sbi->reserved_blocks && + sbi->current_reserved_blocks < sbi->reserved_blocks) + sbi->current_reserved_blocks++; +diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c +index 516007bb1ced1..ef08ef0170306 100644 +--- a/fs/f2fs/file.c ++++ b/fs/f2fs/file.c +@@ -1320,11 +1320,19 @@ static int f2fs_do_zero_range(struct dnode_of_data *dn, pgoff_t start, + ret = -ENOSPC; + break; + } +- if (dn->data_blkaddr != NEW_ADDR) { +- f2fs_invalidate_blocks(sbi, dn->data_blkaddr); +- dn->data_blkaddr = NEW_ADDR; +- f2fs_set_data_blkaddr(dn); ++ ++ if (dn->data_blkaddr == NEW_ADDR) ++ continue; ++ ++ if (!f2fs_is_valid_blkaddr(sbi, dn->data_blkaddr, ++ DATA_GENERIC_ENHANCE)) { ++ ret = -EFSCORRUPTED; ++ break; + } ++ ++ f2fs_invalidate_blocks(sbi, dn->data_blkaddr); ++ dn->data_blkaddr = NEW_ADDR; ++ f2fs_set_data_blkaddr(dn); + } + + f2fs_update_extent_cache_range(dn, start, 0, index - start); +@@ -1600,6 +1608,10 @@ static long f2fs_fallocate(struct file *file, int mode, + + inode_lock(inode); + ++ ret = file_modified(file); ++ if (ret) ++ goto out; ++ + if (mode & FALLOC_FL_PUNCH_HOLE) { + if (offset >= inode->i_size) + goto out; +diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c +index 264c19e177797..b5536570707c2 100644 +--- a/fs/f2fs/inode.c ++++ b/fs/f2fs/inode.c +@@ -689,8 +689,22 @@ retry: + f2fs_lock_op(sbi); + err = f2fs_remove_inode_page(inode); + f2fs_unlock_op(sbi); +- if (err == -ENOENT) ++ if (err == -ENOENT) { + err = 0; ++ ++ /* ++ * in fuzzed image, another node may has the same ++ * block address as inode's, if it was truncated ++ * previously, truncation of inode node will fail. 
++ */ ++ if (is_inode_flag_set(inode, FI_DIRTY_INODE)) { ++ f2fs_warn(F2FS_I_SB(inode), ++ "f2fs_evict_inode: inconsistent node id, ino:%lu", ++ inode->i_ino); ++ f2fs_inode_synced(inode); ++ set_sbi_flag(sbi, SBI_NEED_FSCK); ++ } ++ } + } + + /* give more chances, if ENOMEM case */ +diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c +index 78c54bb7898df..7759323bd7751 100644 +--- a/fs/f2fs/segment.c ++++ b/fs/f2fs/segment.c +@@ -352,16 +352,19 @@ void f2fs_drop_inmem_page(struct inode *inode, struct page *page) + struct f2fs_sb_info *sbi = F2FS_I_SB(inode); + struct list_head *head = &fi->inmem_pages; + struct inmem_pages *cur = NULL; ++ struct inmem_pages *tmp; + + f2fs_bug_on(sbi, !IS_ATOMIC_WRITTEN_PAGE(page)); + + mutex_lock(&fi->inmem_lock); +- list_for_each_entry(cur, head, list) { +- if (cur->page == page) ++ list_for_each_entry(tmp, head, list) { ++ if (tmp->page == page) { ++ cur = tmp; + break; ++ } + } + +- f2fs_bug_on(sbi, list_empty(head) || cur->page != page); ++ f2fs_bug_on(sbi, !cur); + list_del(&cur->list); + mutex_unlock(&fi->inmem_lock); + +diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h +index 15b343f656093..f39620f475425 100644 +--- a/fs/f2fs/segment.h ++++ b/fs/f2fs/segment.h +@@ -542,11 +542,10 @@ static inline int reserved_sections(struct f2fs_sb_info *sbi) + return GET_SEC_FROM_SEG(sbi, (unsigned int)reserved_segments(sbi)); + } + +-static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi) ++static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi, ++ unsigned int node_blocks, unsigned int dent_blocks) + { +- unsigned int node_blocks = get_pages(sbi, F2FS_DIRTY_NODES) + +- get_pages(sbi, F2FS_DIRTY_DENTS); +- unsigned int dent_blocks = get_pages(sbi, F2FS_DIRTY_DENTS); ++ + unsigned int segno, left_blocks; + int i; + +@@ -572,19 +571,28 @@ static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi) + static inline bool has_not_enough_free_secs(struct f2fs_sb_info *sbi, + int freed, int needed) + { +- int 
node_secs = get_blocktype_secs(sbi, F2FS_DIRTY_NODES); +- int dent_secs = get_blocktype_secs(sbi, F2FS_DIRTY_DENTS); +- int imeta_secs = get_blocktype_secs(sbi, F2FS_DIRTY_IMETA); ++ unsigned int total_node_blocks = get_pages(sbi, F2FS_DIRTY_NODES) + ++ get_pages(sbi, F2FS_DIRTY_DENTS) + ++ get_pages(sbi, F2FS_DIRTY_IMETA); ++ unsigned int total_dent_blocks = get_pages(sbi, F2FS_DIRTY_DENTS); ++ unsigned int node_secs = total_node_blocks / BLKS_PER_SEC(sbi); ++ unsigned int dent_secs = total_dent_blocks / BLKS_PER_SEC(sbi); ++ unsigned int node_blocks = total_node_blocks % BLKS_PER_SEC(sbi); ++ unsigned int dent_blocks = total_dent_blocks % BLKS_PER_SEC(sbi); ++ unsigned int free, need_lower, need_upper; + + if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING))) + return false; + +- if (free_sections(sbi) + freed == reserved_sections(sbi) + needed && +- has_curseg_enough_space(sbi)) ++ free = free_sections(sbi) + freed; ++ need_lower = node_secs + dent_secs + reserved_sections(sbi) + needed; ++ need_upper = need_lower + (node_blocks ? 1 : 0) + (dent_blocks ? 
1 : 0); ++ ++ if (free > need_upper) + return false; +- return (free_sections(sbi) + freed) <= +- (node_secs + 2 * dent_secs + imeta_secs + +- reserved_sections(sbi) + needed); ++ else if (free <= need_lower) ++ return true; ++ return !has_curseg_enough_space(sbi, node_blocks, dent_blocks); + } + + static inline bool f2fs_is_checkpoint_ready(struct f2fs_sb_info *sbi) +diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c +index 6bd8a944902ef..232c99e4a1ee9 100644 +--- a/fs/f2fs/super.c ++++ b/fs/f2fs/super.c +@@ -2080,7 +2080,8 @@ int f2fs_quota_sync(struct super_block *sb, int type) + if (!sb_has_quota_active(sb, cnt)) + continue; + +- inode_lock(dqopt->files[cnt]); ++ if (!f2fs_sb_has_quota_ino(sbi)) ++ inode_lock(dqopt->files[cnt]); + + /* + * do_quotactl +@@ -2099,7 +2100,8 @@ int f2fs_quota_sync(struct super_block *sb, int type) + up_read(&sbi->quota_sem); + f2fs_unlock_op(sbi); + +- inode_unlock(dqopt->files[cnt]); ++ if (!f2fs_sb_has_quota_ino(sbi)) ++ inode_unlock(dqopt->files[cnt]); + + if (ret) + break; +diff --git a/fs/fat/fatent.c b/fs/fat/fatent.c +index 3647c65a0f482..0191eb1dc7f66 100644 +--- a/fs/fat/fatent.c ++++ b/fs/fat/fatent.c +@@ -93,7 +93,8 @@ static int fat12_ent_bread(struct super_block *sb, struct fat_entry *fatent, + err_brelse: + brelse(bhs[0]); + err: +- fat_msg(sb, KERN_ERR, "FAT read failed (blocknr %llu)", (llu)blocknr); ++ fat_msg_ratelimit(sb, KERN_ERR, "FAT read failed (blocknr %llu)", ++ (llu)blocknr); + return -EIO; + } + +@@ -106,8 +107,8 @@ static int fat_ent_bread(struct super_block *sb, struct fat_entry *fatent, + fatent->fat_inode = MSDOS_SB(sb)->fat_inode; + fatent->bhs[0] = sb_bread(sb, blocknr); + if (!fatent->bhs[0]) { +- fat_msg(sb, KERN_ERR, "FAT read failed (blocknr %llu)", +- (llu)blocknr); ++ fat_msg_ratelimit(sb, KERN_ERR, "FAT read failed (blocknr %llu)", ++ (llu)blocknr); + return -EIO; + } + fatent->nr_bhs = 1; +diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c +index 22e9c88f3960a..5b3a288e0f14b 100644 +--- 
a/fs/fs-writeback.c ++++ b/fs/fs-writeback.c +@@ -1650,11 +1650,12 @@ static long writeback_sb_inodes(struct super_block *sb, + }; + unsigned long start_time = jiffies; + long write_chunk; +- long wrote = 0; /* count both pages and inodes */ ++ long total_wrote = 0; /* count both pages and inodes */ + + while (!list_empty(&wb->b_io)) { + struct inode *inode = wb_inode(wb->b_io.prev); + struct bdi_writeback *tmp_wb; ++ long wrote; + + if (inode->i_sb != sb) { + if (work->sb) { +@@ -1730,7 +1731,9 @@ static long writeback_sb_inodes(struct super_block *sb, + + wbc_detach_inode(&wbc); + work->nr_pages -= write_chunk - wbc.nr_to_write; +- wrote += write_chunk - wbc.nr_to_write; ++ wrote = write_chunk - wbc.nr_to_write - wbc.pages_skipped; ++ wrote = wrote < 0 ? 0 : wrote; ++ total_wrote += wrote; + + if (need_resched()) { + /* +@@ -1752,7 +1755,7 @@ static long writeback_sb_inodes(struct super_block *sb, + tmp_wb = inode_to_wb_and_lock_list(inode); + spin_lock(&inode->i_lock); + if (!(inode->i_state & I_DIRTY_ALL)) +- wrote++; ++ total_wrote++; + requeue_inode(inode, tmp_wb, &wbc); + inode_sync_complete(inode); + spin_unlock(&inode->i_lock); +@@ -1766,14 +1769,14 @@ static long writeback_sb_inodes(struct super_block *sb, + * bail out to wb_writeback() often enough to check + * background threshold and other termination conditions. + */ +- if (wrote) { ++ if (total_wrote) { + if (time_is_before_jiffies(start_time + HZ / 10UL)) + break; + if (work->nr_pages <= 0) + break; + } + } +- return wrote; ++ return total_wrote; + } + + static long __writeback_inodes_wb(struct bdi_writeback *wb, +diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c +index 5c73751adb2d3..53cd7b2bb580b 100644 +--- a/fs/iomap/buffered-io.c ++++ b/fs/iomap/buffered-io.c +@@ -535,7 +535,8 @@ iomap_write_failed(struct inode *inode, loff_t pos, unsigned len) + * write started inside the existing inode size. 
+ */ + if (pos + len > i_size) +- truncate_pagecache_range(inode, max(pos, i_size), pos + len); ++ truncate_pagecache_range(inode, max(pos, i_size), ++ pos + len - 1); + } + + static int +diff --git a/fs/jffs2/fs.c b/fs/jffs2/fs.c +index ad1eba809e7e1..ee2282b8c7a73 100644 +--- a/fs/jffs2/fs.c ++++ b/fs/jffs2/fs.c +@@ -603,6 +603,7 @@ out_root: + jffs2_free_raw_node_refs(c); + kvfree(c->blocks); + jffs2_clear_xattr_subsystem(c); ++ jffs2_sum_exit(c); + out_inohash: + kfree(c->inocache_list); + out_wbuf: +diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c +index 79f3440e204b6..d3cb27487c706 100644 +--- a/fs/jfs/jfs_dmap.c ++++ b/fs/jfs/jfs_dmap.c +@@ -385,7 +385,8 @@ int dbFree(struct inode *ip, s64 blkno, s64 nblocks) + } + + /* write the last buffer. */ +- write_metapage(mp); ++ if (mp) ++ write_metapage(mp); + + IREAD_UNLOCK(ipbmap); + +diff --git a/fs/kernfs/dir.c b/fs/kernfs/dir.c +index 7d4af6cea2a63..99ee657596b5f 100644 +--- a/fs/kernfs/dir.c ++++ b/fs/kernfs/dir.c +@@ -19,7 +19,15 @@ + + DEFINE_MUTEX(kernfs_mutex); + static DEFINE_SPINLOCK(kernfs_rename_lock); /* kn->parent and ->name */ +-static char kernfs_pr_cont_buf[PATH_MAX]; /* protected by rename_lock */ ++/* ++ * Don't use rename_lock to piggy back on pr_cont_buf. We don't want to ++ * call pr_cont() while holding rename_lock. Because sometimes pr_cont() ++ * will perform wakeups when releasing console_sem. Holding rename_lock ++ * will introduce deadlock if the scheduler reads the kernfs_name in the ++ * wakeup path. 
++ */ ++static DEFINE_SPINLOCK(kernfs_pr_cont_lock); ++static char kernfs_pr_cont_buf[PATH_MAX]; /* protected by pr_cont_lock */ + static DEFINE_SPINLOCK(kernfs_idr_lock); /* root->ino_idr */ + + #define rb_to_kn(X) rb_entry((X), struct kernfs_node, rb) +@@ -230,12 +238,12 @@ void pr_cont_kernfs_name(struct kernfs_node *kn) + { + unsigned long flags; + +- spin_lock_irqsave(&kernfs_rename_lock, flags); ++ spin_lock_irqsave(&kernfs_pr_cont_lock, flags); + +- kernfs_name_locked(kn, kernfs_pr_cont_buf, sizeof(kernfs_pr_cont_buf)); ++ kernfs_name(kn, kernfs_pr_cont_buf, sizeof(kernfs_pr_cont_buf)); + pr_cont("%s", kernfs_pr_cont_buf); + +- spin_unlock_irqrestore(&kernfs_rename_lock, flags); ++ spin_unlock_irqrestore(&kernfs_pr_cont_lock, flags); + } + + /** +@@ -249,10 +257,10 @@ void pr_cont_kernfs_path(struct kernfs_node *kn) + unsigned long flags; + int sz; + +- spin_lock_irqsave(&kernfs_rename_lock, flags); ++ spin_lock_irqsave(&kernfs_pr_cont_lock, flags); + +- sz = kernfs_path_from_node_locked(kn, NULL, kernfs_pr_cont_buf, +- sizeof(kernfs_pr_cont_buf)); ++ sz = kernfs_path_from_node(kn, NULL, kernfs_pr_cont_buf, ++ sizeof(kernfs_pr_cont_buf)); + if (sz < 0) { + pr_cont("(error)"); + goto out; +@@ -266,7 +274,7 @@ void pr_cont_kernfs_path(struct kernfs_node *kn) + pr_cont("%s", kernfs_pr_cont_buf); + + out: +- spin_unlock_irqrestore(&kernfs_rename_lock, flags); ++ spin_unlock_irqrestore(&kernfs_pr_cont_lock, flags); + } + + /** +@@ -870,13 +878,12 @@ static struct kernfs_node *kernfs_walk_ns(struct kernfs_node *parent, + + lockdep_assert_held(&kernfs_mutex); + +- /* grab kernfs_rename_lock to piggy back on kernfs_pr_cont_buf */ +- spin_lock_irq(&kernfs_rename_lock); ++ spin_lock_irq(&kernfs_pr_cont_lock); + + len = strlcpy(kernfs_pr_cont_buf, path, sizeof(kernfs_pr_cont_buf)); + + if (len >= sizeof(kernfs_pr_cont_buf)) { +- spin_unlock_irq(&kernfs_rename_lock); ++ spin_unlock_irq(&kernfs_pr_cont_lock); + return NULL; + } + +@@ -888,7 +895,7 @@ static struct 
kernfs_node *kernfs_walk_ns(struct kernfs_node *parent, + parent = kernfs_find_ns(parent, name, ns); + } + +- spin_unlock_irq(&kernfs_rename_lock); ++ spin_unlock_irq(&kernfs_pr_cont_lock); + + return parent; + } +diff --git a/fs/nfs/file.c b/fs/nfs/file.c +index 73415970af381..3233da79d49a4 100644 +--- a/fs/nfs/file.c ++++ b/fs/nfs/file.c +@@ -394,11 +394,8 @@ static int nfs_write_end(struct file *file, struct address_space *mapping, + return status; + NFS_I(mapping->host)->write_io += copied; + +- if (nfs_ctx_key_to_expire(ctx, mapping->host)) { +- status = nfs_wb_all(mapping->host); +- if (status < 0) +- return status; +- } ++ if (nfs_ctx_key_to_expire(ctx, mapping->host)) ++ nfs_wb_all(mapping->host); + + return copied; + } +diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c +index cf3b00751ff65..ba4a03a69fbf0 100644 +--- a/fs/nfs/nfs4proc.c ++++ b/fs/nfs/nfs4proc.c +@@ -3041,6 +3041,10 @@ static int _nfs4_open_and_get_state(struct nfs4_opendata *opendata, + } + + out: ++ if (opendata->lgp) { ++ nfs4_lgopen_release(opendata->lgp); ++ opendata->lgp = NULL; ++ } + if (!opendata->cancelled) + nfs4_sequence_free_slot(&opendata->o_res.seq_res); + return ret; +diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c +index 0471b6e0da16f..2fe48982fbb48 100644 +--- a/fs/nfs/pnfs.c ++++ b/fs/nfs/pnfs.c +@@ -1961,6 +1961,7 @@ lookup_again: + lo = pnfs_find_alloc_layout(ino, ctx, gfp_flags); + if (lo == NULL) { + spin_unlock(&ino->i_lock); ++ lseg = ERR_PTR(-ENOMEM); + trace_pnfs_update_layout(ino, pos, count, iomode, lo, lseg, + PNFS_UPDATE_LAYOUT_NOMEM); + goto out; +@@ -2090,6 +2091,7 @@ lookup_again: + + lgp = pnfs_alloc_init_layoutget_args(ino, ctx, &stateid, &arg, gfp_flags); + if (!lgp) { ++ lseg = ERR_PTR(-ENOMEM); + trace_pnfs_update_layout(ino, pos, count, iomode, lo, NULL, + PNFS_UPDATE_LAYOUT_NOMEM); + nfs_layoutget_end(lo); +diff --git a/fs/nfs/write.c b/fs/nfs/write.c +index 30d8e7bc1cef3..10ce264a64567 100644 +--- a/fs/nfs/write.c ++++ b/fs/nfs/write.c +@@ -692,11 +692,7 
@@ static int nfs_writepage_locked(struct page *page, + err = nfs_do_writepage(page, wbc, &pgio); + pgio.pg_error = 0; + nfs_pageio_complete(&pgio); +- if (err < 0) +- return err; +- if (nfs_error_is_fatal(pgio.pg_error)) +- return pgio.pg_error; +- return 0; ++ return err; + } + + int nfs_writepage(struct page *page, struct writeback_control *wbc) +@@ -747,9 +743,6 @@ int nfs_writepages(struct address_space *mapping, struct writeback_control *wbc) + + if (err < 0) + goto out_err; +- err = pgio.pg_error; +- if (nfs_error_is_fatal(err)) +- goto out_err; + return 0; + out_err: + return err; +@@ -1429,7 +1422,7 @@ static void nfs_async_write_error(struct list_head *head, int error) + while (!list_empty(head)) { + req = nfs_list_entry(head->next); + nfs_list_remove_request(req); +- if (nfs_error_is_fatal(error)) ++ if (nfs_error_is_fatal_on_server(error)) + nfs_write_error(req, error); + else + nfs_redirty_request(req); +diff --git a/fs/notify/fdinfo.c b/fs/notify/fdinfo.c +index 1e2bfd26b3521..7df9ad4d84338 100644 +--- a/fs/notify/fdinfo.c ++++ b/fs/notify/fdinfo.c +@@ -84,16 +84,9 @@ static void inotify_fdinfo(struct seq_file *m, struct fsnotify_mark *mark) + inode_mark = container_of(mark, struct inotify_inode_mark, fsn_mark); + inode = igrab(fsnotify_conn_inode(mark->connector)); + if (inode) { +- /* +- * IN_ALL_EVENTS represents all of the mask bits +- * that we expose to userspace. There is at +- * least one bit (FS_EVENT_ON_CHILD) which is +- * used only internally to the kernel. 
+- */ +- u32 mask = mark->mask & IN_ALL_EVENTS; +- seq_printf(m, "inotify wd:%x ino:%lx sdev:%x mask:%x ignored_mask:%x ", ++ seq_printf(m, "inotify wd:%x ino:%lx sdev:%x mask:%x ignored_mask:0 ", + inode_mark->wd, inode->i_ino, inode->i_sb->s_dev, +- mask, mark->ignored_mask); ++ inotify_mark_user_mask(mark)); + show_mark_fhandle(m, inode); + seq_putc(m, '\n'); + iput(inode); +diff --git a/fs/notify/inotify/inotify.h b/fs/notify/inotify/inotify.h +index 3f246f7b8a92b..8b8bf52dd08b0 100644 +--- a/fs/notify/inotify/inotify.h ++++ b/fs/notify/inotify/inotify.h +@@ -22,6 +22,18 @@ static inline struct inotify_event_info *INOTIFY_E(struct fsnotify_event *fse) + return container_of(fse, struct inotify_event_info, fse); + } + ++/* ++ * INOTIFY_USER_FLAGS represents all of the mask bits that we expose to ++ * userspace. There is at least one bit (FS_EVENT_ON_CHILD) which is ++ * used only internally to the kernel. ++ */ ++#define INOTIFY_USER_MASK (IN_ALL_EVENTS | IN_ONESHOT | IN_EXCL_UNLINK) ++ ++static inline __u32 inotify_mark_user_mask(struct fsnotify_mark *fsn_mark) ++{ ++ return fsn_mark->mask & INOTIFY_USER_MASK; ++} ++ + extern void inotify_ignored_and_remove_idr(struct fsnotify_mark *fsn_mark, + struct fsnotify_group *group); + extern int inotify_handle_event(struct fsnotify_group *group, +diff --git a/fs/notify/inotify/inotify_user.c b/fs/notify/inotify/inotify_user.c +index 81ffc8629fc4b..b949b2c02f4be 100644 +--- a/fs/notify/inotify/inotify_user.c ++++ b/fs/notify/inotify/inotify_user.c +@@ -86,7 +86,7 @@ static inline __u32 inotify_arg_to_mask(u32 arg) + mask = (FS_IN_IGNORED | FS_EVENT_ON_CHILD | FS_UNMOUNT); + + /* mask off the flags used to open the fd */ +- mask |= (arg & (IN_ALL_EVENTS | IN_ONESHOT | IN_EXCL_UNLINK)); ++ mask |= (arg & INOTIFY_USER_MASK); + + return mask; + } +diff --git a/fs/notify/mark.c b/fs/notify/mark.c +index 1d96216dffd19..fdf8e03bf3df7 100644 +--- a/fs/notify/mark.c ++++ b/fs/notify/mark.c +@@ -426,7 +426,7 @@ void 
fsnotify_free_mark(struct fsnotify_mark *mark) + void fsnotify_destroy_mark(struct fsnotify_mark *mark, + struct fsnotify_group *group) + { +- mutex_lock_nested(&group->mark_mutex, SINGLE_DEPTH_NESTING); ++ mutex_lock(&group->mark_mutex); + fsnotify_detach_mark(mark); + mutex_unlock(&group->mark_mutex); + fsnotify_free_mark(mark); +@@ -738,7 +738,7 @@ void fsnotify_clear_marks_by_group(struct fsnotify_group *group, + * move marks to free to to_free list in one go and then free marks in + * to_free list one by one. + */ +- mutex_lock_nested(&group->mark_mutex, SINGLE_DEPTH_NESTING); ++ mutex_lock(&group->mark_mutex); + list_for_each_entry_safe(mark, lmark, &group->marks_list, g_list) { + if ((1U << mark->connector->type) & type_mask) + list_move(&mark->g_list, &to_free); +@@ -747,7 +747,7 @@ void fsnotify_clear_marks_by_group(struct fsnotify_group *group, + + clear: + while (1) { +- mutex_lock_nested(&group->mark_mutex, SINGLE_DEPTH_NESTING); ++ mutex_lock(&group->mark_mutex); + if (list_empty(head)) { + mutex_unlock(&group->mark_mutex); + break; +diff --git a/fs/ocfs2/dlmfs/userdlm.c b/fs/ocfs2/dlmfs/userdlm.c +index 3df5be25bfb1f..d23bc720753ed 100644 +--- a/fs/ocfs2/dlmfs/userdlm.c ++++ b/fs/ocfs2/dlmfs/userdlm.c +@@ -435,6 +435,11 @@ again: + } + + spin_lock(&lockres->l_lock); ++ if (lockres->l_flags & USER_LOCK_IN_TEARDOWN) { ++ spin_unlock(&lockres->l_lock); ++ status = -EAGAIN; ++ goto bail; ++ } + + /* We only compare against the currently granted level + * here. 
If the lock is blocked waiting on a downconvert, +@@ -601,7 +606,7 @@ int user_dlm_destroy_lock(struct user_lock_res *lockres) + spin_lock(&lockres->l_lock); + if (lockres->l_flags & USER_LOCK_IN_TEARDOWN) { + spin_unlock(&lockres->l_lock); +- return 0; ++ goto bail; + } + + lockres->l_flags |= USER_LOCK_IN_TEARDOWN; +@@ -615,12 +620,17 @@ int user_dlm_destroy_lock(struct user_lock_res *lockres) + } + + if (lockres->l_ro_holders || lockres->l_ex_holders) { ++ lockres->l_flags &= ~USER_LOCK_IN_TEARDOWN; + spin_unlock(&lockres->l_lock); + goto bail; + } + + status = 0; + if (!(lockres->l_flags & USER_LOCK_ATTACHED)) { ++ /* ++ * lock is never requested, leave USER_LOCK_IN_TEARDOWN set ++ * to avoid new lock request coming in. ++ */ + spin_unlock(&lockres->l_lock); + goto bail; + } +@@ -631,6 +641,10 @@ int user_dlm_destroy_lock(struct user_lock_res *lockres) + + status = ocfs2_dlm_unlock(conn, &lockres->l_lksb, DLM_LKF_VALBLK); + if (status) { ++ spin_lock(&lockres->l_lock); ++ lockres->l_flags &= ~USER_LOCK_IN_TEARDOWN; ++ lockres->l_flags &= ~USER_LOCK_BUSY; ++ spin_unlock(&lockres->l_lock); + user_log_dlm_error("ocfs2_dlm_unlock", status, lockres); + goto bail; + } +diff --git a/fs/proc/generic.c b/fs/proc/generic.c +index 8c3dbe13e647c..372b4dad4863e 100644 +--- a/fs/proc/generic.c ++++ b/fs/proc/generic.c +@@ -446,6 +446,9 @@ static struct proc_dir_entry *__proc_create(struct proc_dir_entry **parent, + proc_set_user(ent, (*parent)->uid, (*parent)->gid); + + ent->proc_dops = &proc_misc_dentry_ops; ++ /* Revalidate everything under /proc/${pid}/net */ ++ if ((*parent)->proc_dops == &proc_net_dentry_ops) ++ pde_force_lookup(ent); + + out: + return ent; +diff --git a/fs/proc/proc_net.c b/fs/proc/proc_net.c +index 313b7c751867f..9cd5b47199cba 100644 +--- a/fs/proc/proc_net.c ++++ b/fs/proc/proc_net.c +@@ -343,6 +343,9 @@ static __net_init int proc_net_ns_init(struct net *net) + + proc_set_user(netd, uid, gid); + ++ /* Seed dentry revalidation for /proc/${pid}/net */ 
++ pde_force_lookup(netd); ++ + err = -EEXIST; + net_statd = proc_net_mkdir(net, "stat", netd); + if (!net_statd) +diff --git a/include/drm/drm_edid.h b/include/drm/drm_edid.h +index b9719418c3d26..f40a97417b682 100644 +--- a/include/drm/drm_edid.h ++++ b/include/drm/drm_edid.h +@@ -116,7 +116,7 @@ struct detailed_data_monitor_range { + u8 supported_scalings; + u8 preferred_refresh; + } __attribute__((packed)) cvt; +- } formula; ++ } __attribute__((packed)) formula; + } __attribute__((packed)); + + struct detailed_data_wpindex { +@@ -149,7 +149,7 @@ struct detailed_non_pixel { + struct detailed_data_wpindex color; + struct std_timing timings[6]; + struct cvt_timing cvt[4]; +- } data; ++ } __attribute__((packed)) data; + } __attribute__((packed)); + + #define EDID_DETAIL_EST_TIMINGS 0xf7 +@@ -167,7 +167,7 @@ struct detailed_timing { + union { + struct detailed_pixel_timing pixel_data; + struct detailed_non_pixel other_data; +- } data; ++ } __attribute__((packed)) data; + } __attribute__((packed)); + + #define DRM_EDID_INPUT_SERRATION_VSYNC (1 << 0) +diff --git a/include/linux/bpf.h b/include/linux/bpf.h +index a73ca7c9c7d0e..5705cda3c4c4d 100644 +--- a/include/linux/bpf.h ++++ b/include/linux/bpf.h +@@ -929,6 +929,8 @@ void bpf_offload_dev_netdev_unregister(struct bpf_offload_dev *offdev, + struct net_device *netdev); + bool bpf_offload_dev_match(struct bpf_prog *prog, struct net_device *netdev); + ++void unpriv_ebpf_notify(int new_state); ++ + #if defined(CONFIG_NET) && defined(CONFIG_BPF_SYSCALL) + int bpf_prog_offload_init(struct bpf_prog *prog, union bpf_attr *attr); + +diff --git a/include/linux/efi.h b/include/linux/efi.h +index c82ef0eba4f84..f9b9f9a2fd4a5 100644 +--- a/include/linux/efi.h ++++ b/include/linux/efi.h +@@ -165,6 +165,8 @@ struct capsule_info { + size_t page_bytes_remain; + }; + ++int efi_capsule_setup_info(struct capsule_info *cap_info, void *kbuff, ++ size_t hdr_bytes); + int __efi_capsule_setup_info(struct capsule_info *cap_info); + + /* 
+diff --git a/include/linux/iio/common/st_sensors.h b/include/linux/iio/common/st_sensors.h +index 686be532f4cb7..7816bf070f835 100644 +--- a/include/linux/iio/common/st_sensors.h ++++ b/include/linux/iio/common/st_sensors.h +@@ -228,6 +228,7 @@ struct st_sensor_settings { + * @hw_irq_trigger: if we're using the hardware interrupt on the sensor. + * @hw_timestamp: Latest timestamp from the interrupt handler, when in use. + * @buffer_data: Data used by buffer part. ++ * @odr_lock: Local lock for preventing concurrent ODR accesses/changes + */ + struct st_sensor_data { + struct device *dev; +@@ -253,6 +254,8 @@ struct st_sensor_data { + s64 hw_timestamp; + + char buffer_data[ST_SENSORS_MAX_BUFFER_SIZE] ____cacheline_aligned; ++ ++ struct mutex odr_lock; + }; + + #ifdef CONFIG_IIO_BUFFER +diff --git a/include/linux/mailbox_controller.h b/include/linux/mailbox_controller.h +index 36d6ce673503c..6fee33cb52f58 100644 +--- a/include/linux/mailbox_controller.h ++++ b/include/linux/mailbox_controller.h +@@ -83,6 +83,7 @@ struct mbox_controller { + const struct of_phandle_args *sp); + /* Internal to API */ + struct hrtimer poll_hrt; ++ spinlock_t poll_hrt_lock; + struct list_head node; + }; + +diff --git a/include/linux/mtd/cfi.h b/include/linux/mtd/cfi.h +index c98a211086880..f3c149073c213 100644 +--- a/include/linux/mtd/cfi.h ++++ b/include/linux/mtd/cfi.h +@@ -286,6 +286,7 @@ struct cfi_private { + map_word sector_erase_cmd; + unsigned long chipshift; /* Because they're of the same type */ + const char *im_name; /* inter_module name for cmdset_setup */ ++ unsigned long quirks; + struct flchip chips[0]; /* per-chip data structure for each chip */ + }; + +diff --git a/include/linux/nodemask.h b/include/linux/nodemask.h +index 27e7fa36f707f..a8d79f5b9a52d 100644 +--- a/include/linux/nodemask.h ++++ b/include/linux/nodemask.h +@@ -42,11 +42,11 @@ + * void nodes_shift_right(dst, src, n) Shift right + * void nodes_shift_left(dst, src, n) Shift left + * +- * int first_node(mask) 
Number lowest set bit, or MAX_NUMNODES +- * int next_node(node, mask) Next node past 'node', or MAX_NUMNODES +- * int next_node_in(node, mask) Next node past 'node', or wrap to first, ++ * unsigned int first_node(mask) Number lowest set bit, or MAX_NUMNODES ++ * unsigend int next_node(node, mask) Next node past 'node', or MAX_NUMNODES ++ * unsigned int next_node_in(node, mask) Next node past 'node', or wrap to first, + * or MAX_NUMNODES +- * int first_unset_node(mask) First node not set in mask, or ++ * unsigned int first_unset_node(mask) First node not set in mask, or + * MAX_NUMNODES + * + * nodemask_t nodemask_of_node(node) Return nodemask with bit 'node' set +@@ -153,7 +153,7 @@ static inline void __nodes_clear(nodemask_t *dstp, unsigned int nbits) + + #define node_test_and_set(node, nodemask) \ + __node_test_and_set((node), &(nodemask)) +-static inline int __node_test_and_set(int node, nodemask_t *addr) ++static inline bool __node_test_and_set(int node, nodemask_t *addr) + { + return test_and_set_bit(node, addr->bits); + } +@@ -200,7 +200,7 @@ static inline void __nodes_complement(nodemask_t *dstp, + + #define nodes_equal(src1, src2) \ + __nodes_equal(&(src1), &(src2), MAX_NUMNODES) +-static inline int __nodes_equal(const nodemask_t *src1p, ++static inline bool __nodes_equal(const nodemask_t *src1p, + const nodemask_t *src2p, unsigned int nbits) + { + return bitmap_equal(src1p->bits, src2p->bits, nbits); +@@ -208,7 +208,7 @@ static inline int __nodes_equal(const nodemask_t *src1p, + + #define nodes_intersects(src1, src2) \ + __nodes_intersects(&(src1), &(src2), MAX_NUMNODES) +-static inline int __nodes_intersects(const nodemask_t *src1p, ++static inline bool __nodes_intersects(const nodemask_t *src1p, + const nodemask_t *src2p, unsigned int nbits) + { + return bitmap_intersects(src1p->bits, src2p->bits, nbits); +@@ -216,20 +216,20 @@ static inline int __nodes_intersects(const nodemask_t *src1p, + + #define nodes_subset(src1, src2) \ + __nodes_subset(&(src1), 
&(src2), MAX_NUMNODES) +-static inline int __nodes_subset(const nodemask_t *src1p, ++static inline bool __nodes_subset(const nodemask_t *src1p, + const nodemask_t *src2p, unsigned int nbits) + { + return bitmap_subset(src1p->bits, src2p->bits, nbits); + } + + #define nodes_empty(src) __nodes_empty(&(src), MAX_NUMNODES) +-static inline int __nodes_empty(const nodemask_t *srcp, unsigned int nbits) ++static inline bool __nodes_empty(const nodemask_t *srcp, unsigned int nbits) + { + return bitmap_empty(srcp->bits, nbits); + } + + #define nodes_full(nodemask) __nodes_full(&(nodemask), MAX_NUMNODES) +-static inline int __nodes_full(const nodemask_t *srcp, unsigned int nbits) ++static inline bool __nodes_full(const nodemask_t *srcp, unsigned int nbits) + { + return bitmap_full(srcp->bits, nbits); + } +@@ -260,15 +260,15 @@ static inline void __nodes_shift_left(nodemask_t *dstp, + > MAX_NUMNODES, then the silly min_ts could be dropped. */ + + #define first_node(src) __first_node(&(src)) +-static inline int __first_node(const nodemask_t *srcp) ++static inline unsigned int __first_node(const nodemask_t *srcp) + { +- return min_t(int, MAX_NUMNODES, find_first_bit(srcp->bits, MAX_NUMNODES)); ++ return min_t(unsigned int, MAX_NUMNODES, find_first_bit(srcp->bits, MAX_NUMNODES)); + } + + #define next_node(n, src) __next_node((n), &(src)) +-static inline int __next_node(int n, const nodemask_t *srcp) ++static inline unsigned int __next_node(int n, const nodemask_t *srcp) + { +- return min_t(int,MAX_NUMNODES,find_next_bit(srcp->bits, MAX_NUMNODES, n+1)); ++ return min_t(unsigned int, MAX_NUMNODES, find_next_bit(srcp->bits, MAX_NUMNODES, n+1)); + } + + /* +@@ -276,7 +276,7 @@ static inline int __next_node(int n, const nodemask_t *srcp) + * the first node in src if needed. Returns MAX_NUMNODES if src is empty. 
+ */ + #define next_node_in(n, src) __next_node_in((n), &(src)) +-int __next_node_in(int node, const nodemask_t *srcp); ++unsigned int __next_node_in(int node, const nodemask_t *srcp); + + static inline void init_nodemask_of_node(nodemask_t *mask, int node) + { +@@ -296,9 +296,9 @@ static inline void init_nodemask_of_node(nodemask_t *mask, int node) + }) + + #define first_unset_node(mask) __first_unset_node(&(mask)) +-static inline int __first_unset_node(const nodemask_t *maskp) ++static inline unsigned int __first_unset_node(const nodemask_t *maskp) + { +- return min_t(int,MAX_NUMNODES, ++ return min_t(unsigned int, MAX_NUMNODES, + find_first_zero_bit(maskp->bits, MAX_NUMNODES)); + } + +@@ -375,14 +375,13 @@ static inline void __nodes_fold(nodemask_t *dstp, const nodemask_t *origp, + } + + #if MAX_NUMNODES > 1 +-#define for_each_node_mask(node, mask) \ +- for ((node) = first_node(mask); \ +- (node) < MAX_NUMNODES; \ +- (node) = next_node((node), (mask))) ++#define for_each_node_mask(node, mask) \ ++ for ((node) = first_node(mask); \ ++ (node >= 0) && (node) < MAX_NUMNODES; \ ++ (node) = next_node((node), (mask))) + #else /* MAX_NUMNODES == 1 */ +-#define for_each_node_mask(node, mask) \ +- if (!nodes_empty(mask)) \ +- for ((node) = 0; (node) < 1; (node)++) ++#define for_each_node_mask(node, mask) \ ++ for ((node) = 0; (node) < 1 && !nodes_empty(mask); (node)++) + #endif /* MAX_NUMNODES */ + + /* +@@ -435,11 +434,11 @@ static inline int num_node_state(enum node_states state) + + #define first_online_node first_node(node_states[N_ONLINE]) + #define first_memory_node first_node(node_states[N_MEMORY]) +-static inline int next_online_node(int nid) ++static inline unsigned int next_online_node(int nid) + { + return next_node(nid, node_states[N_ONLINE]); + } +-static inline int next_memory_node(int nid) ++static inline unsigned int next_memory_node(int nid) + { + return next_node(nid, node_states[N_MEMORY]); + } +diff --git a/include/linux/ptrace.h 
b/include/linux/ptrace.h +index 2a9df80ea8876..ae7dbdfa3d832 100644 +--- a/include/linux/ptrace.h ++++ b/include/linux/ptrace.h +@@ -30,7 +30,6 @@ extern int ptrace_access_vm(struct task_struct *tsk, unsigned long addr, + + #define PT_SEIZED 0x00010000 /* SEIZE used, enable new behavior */ + #define PT_PTRACED 0x00000001 +-#define PT_DTRACE 0x00000002 /* delayed trace (used on m68k, i386) */ + + #define PT_OPT_FLAG_SHIFT 3 + /* PT_TRACE_* event enable flags */ +@@ -47,12 +46,6 @@ extern int ptrace_access_vm(struct task_struct *tsk, unsigned long addr, + #define PT_EXITKILL (PTRACE_O_EXITKILL << PT_OPT_FLAG_SHIFT) + #define PT_SUSPEND_SECCOMP (PTRACE_O_SUSPEND_SECCOMP << PT_OPT_FLAG_SHIFT) + +-/* single stepping state bits (used on ARM and PA-RISC) */ +-#define PT_SINGLESTEP_BIT 31 +-#define PT_SINGLESTEP (1<flags & (1U << HCD_FLAG_WAKEUP_PENDING)) + #define HCD_RH_RUNNING(hcd) ((hcd)->flags & (1U << HCD_FLAG_RH_RUNNING)) + #define HCD_DEAD(hcd) ((hcd)->flags & (1U << HCD_FLAG_DEAD)) ++#define HCD_DEFER_RH_REGISTER(hcd) ((hcd)->flags & (1U << HCD_FLAG_DEFER_RH_REGISTER)) + + /* + * Specifies if interfaces are authorized by default +diff --git a/include/net/if_inet6.h b/include/net/if_inet6.h +index a01981d7108f9..f6d614926e9e9 100644 +--- a/include/net/if_inet6.h ++++ b/include/net/if_inet6.h +@@ -64,6 +64,14 @@ struct inet6_ifaddr { + + struct hlist_node addr_lst; + struct list_head if_list; ++ /* ++ * Used to safely traverse idev->addr_list in process context ++ * if the idev->lock needed to protect idev->addr_list cannot be held. ++ * In that case, add the items to this list temporarily and iterate ++ * without holding idev->lock. ++ * See addrconf_ifdown and dev_forward_change. 
++ */ ++ struct list_head if_list_aux; + + struct list_head tmp_list; + struct inet6_ifaddr *ifpub; +diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h +index ae69059ba76d4..1ee396ce0eda8 100644 +--- a/include/net/sch_generic.h ++++ b/include/net/sch_generic.h +@@ -160,37 +160,17 @@ static inline bool qdisc_run_begin(struct Qdisc *qdisc) + if (spin_trylock(&qdisc->seqlock)) + goto nolock_empty; + +- /* Paired with smp_mb__after_atomic() to make sure +- * STATE_MISSED checking is synchronized with clearing +- * in pfifo_fast_dequeue(). ++ /* No need to insist if the MISSED flag was already set. ++ * Note that test_and_set_bit() also gives us memory ordering ++ * guarantees wrt potential earlier enqueue() and below ++ * spin_trylock(), both of which are necessary to prevent races + */ +- smp_mb__before_atomic(); +- +- /* If the MISSED flag is set, it means other thread has +- * set the MISSED flag before second spin_trylock(), so +- * we can return false here to avoid multi cpus doing +- * the set_bit() and second spin_trylock() concurrently. +- */ +- if (test_bit(__QDISC_STATE_MISSED, &qdisc->state)) ++ if (test_and_set_bit(__QDISC_STATE_MISSED, &qdisc->state)) + return false; + +- /* Set the MISSED flag before the second spin_trylock(), +- * if the second spin_trylock() return false, it means +- * other cpu holding the lock will do dequeuing for us +- * or it will see the MISSED flag set after releasing +- * lock and reschedule the net_tx_action() to do the +- * dequeuing. +- */ +- set_bit(__QDISC_STATE_MISSED, &qdisc->state); +- +- /* spin_trylock() only has load-acquire semantic, so use +- * smp_mb__after_atomic() to ensure STATE_MISSED is set +- * before doing the second spin_trylock(). +- */ +- smp_mb__after_atomic(); +- +- /* Retry again in case other CPU may not see the new flag +- * after it releases the lock at the end of qdisc_run_end(). 
++ /* Try to take the lock again to make sure that we will either ++ * grab it or the CPU that still has it will see MISSED set ++ * when testing it in qdisc_run_end() + */ + if (!spin_trylock(&qdisc->seqlock)) + return false; +@@ -214,6 +194,12 @@ static inline void qdisc_run_end(struct Qdisc *qdisc) + if (qdisc->flags & TCQ_F_NOLOCK) { + spin_unlock(&qdisc->seqlock); + ++ /* spin_unlock() only has store-release semantic. The unlock ++ * and test_bit() ordering is a store-load ordering, so a full ++ * memory barrier is needed here. ++ */ ++ smp_mb(); ++ + if (unlikely(test_bit(__QDISC_STATE_MISSED, + &qdisc->state))) { + clear_bit(__QDISC_STATE_MISSED, &qdisc->state); +diff --git a/include/scsi/libfcoe.h b/include/scsi/libfcoe.h +index fac8e89aed81d..310e0dbffda99 100644 +--- a/include/scsi/libfcoe.h ++++ b/include/scsi/libfcoe.h +@@ -249,7 +249,8 @@ int fcoe_ctlr_recv_flogi(struct fcoe_ctlr *, struct fc_lport *, + struct fc_frame *); + + /* libfcoe funcs */ +-u64 fcoe_wwn_from_mac(unsigned char mac[MAX_ADDR_LEN], unsigned int, unsigned int); ++u64 fcoe_wwn_from_mac(unsigned char mac[ETH_ALEN], unsigned int scheme, ++ unsigned int port); + int fcoe_libfc_config(struct fc_lport *, struct fcoe_ctlr *, + const struct libfc_function_template *, int init_fcp); + u32 fcoe_fc_crc(struct fc_frame *fp); +diff --git a/include/sound/jack.h b/include/sound/jack.h +index 9eb2b5ec1ec41..78f3619f3de94 100644 +--- a/include/sound/jack.h ++++ b/include/sound/jack.h +@@ -62,6 +62,7 @@ struct snd_jack { + const char *id; + #ifdef CONFIG_SND_JACK_INPUT_DEV + struct input_dev *input_dev; ++ struct mutex input_dev_lock; + int registered; + int type; + char name[100]; +diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h +index 059b6e45a0283..839bb07b93a71 100644 +--- a/include/trace/events/rxrpc.h ++++ b/include/trace/events/rxrpc.h +@@ -1511,7 +1511,7 @@ TRACE_EVENT(rxrpc_call_reset, + __entry->call_serial = call->rx_serial; + __entry->conn_serial = 
call->conn->hi_serial; + __entry->tx_seq = call->tx_hard_ack; +- __entry->rx_seq = call->ackr_seen; ++ __entry->rx_seq = call->rx_hard_ack; + ), + + TP_printk("c=%08x %08x:%08x r=%08x/%08x tx=%08x rx=%08x", +diff --git a/include/trace/events/vmscan.h b/include/trace/events/vmscan.h +index a5ab2973e8dc3..57184c02e3b93 100644 +--- a/include/trace/events/vmscan.h ++++ b/include/trace/events/vmscan.h +@@ -283,7 +283,7 @@ TRACE_EVENT(mm_vmscan_lru_isolate, + __field(unsigned long, nr_scanned) + __field(unsigned long, nr_skipped) + __field(unsigned long, nr_taken) +- __field(isolate_mode_t, isolate_mode) ++ __field(unsigned int, isolate_mode) + __field(int, lru) + ), + +@@ -294,7 +294,7 @@ TRACE_EVENT(mm_vmscan_lru_isolate, + __entry->nr_scanned = nr_scanned; + __entry->nr_skipped = nr_skipped; + __entry->nr_taken = nr_taken; +- __entry->isolate_mode = isolate_mode; ++ __entry->isolate_mode = (__force unsigned int)isolate_mode; + __entry->lru = lru; + ), + +diff --git a/init/Kconfig b/init/Kconfig +index e6216dc2a1d1c..74f44b753d61d 100644 +--- a/init/Kconfig ++++ b/init/Kconfig +@@ -33,6 +33,15 @@ config CC_CAN_LINK + config CC_HAS_ASM_GOTO + def_bool $(success,$(srctree)/scripts/gcc-goto.sh $(CC)) + ++config CC_HAS_ASM_GOTO_TIED_OUTPUT ++ depends on CC_HAS_ASM_GOTO_OUTPUT ++ # Detect buggy gcc and clang, fixed in gcc-11 clang-14. 
++ def_bool $(success,echo 'int foo(int *x) { asm goto (".long (%l[bar]) - .\n": "+m"(*x) ::: bar); return *x; bar: return 0; }' | $CC -x c - -c -o /dev/null) ++ ++config CC_HAS_ASM_GOTO_OUTPUT ++ depends on CC_HAS_ASM_GOTO ++ def_bool $(success,echo 'int foo(int x) { asm goto ("": "=r"(x) ::: bar); return x; bar: return 0; }' | $(CC) -x c - -c -o /dev/null) ++ + config TOOLS_SUPPORT_RELR + def_bool $(success,env "CC=$(CC)" "LD=$(LD)" "NM=$(NM)" "OBJCOPY=$(OBJCOPY)" $(srctree)/scripts/tools-support-relr.sh) + +diff --git a/ipc/mqueue.c b/ipc/mqueue.c +index 2ea0c08188e67..12519bf5f330d 100644 +--- a/ipc/mqueue.c ++++ b/ipc/mqueue.c +@@ -45,6 +45,7 @@ + + struct mqueue_fs_context { + struct ipc_namespace *ipc_ns; ++ bool newns; /* Set if newly created ipc namespace */ + }; + + #define MQUEUE_MAGIC 0x19800202 +@@ -365,6 +366,14 @@ static int mqueue_get_tree(struct fs_context *fc) + { + struct mqueue_fs_context *ctx = fc->fs_private; + ++ /* ++ * With a newly created ipc namespace, we don't need to do a search ++ * for an ipc namespace match, but we still need to set s_fs_info. ++ */ ++ if (ctx->newns) { ++ fc->s_fs_info = ctx->ipc_ns; ++ return get_tree_nodev(fc, mqueue_fill_super); ++ } + return get_tree_keyed(fc, mqueue_fill_super, ctx->ipc_ns); + } + +@@ -392,6 +401,10 @@ static int mqueue_init_fs_context(struct fs_context *fc) + return 0; + } + ++/* ++ * mq_init_ns() is currently the only caller of mq_create_mount(). ++ * So the ns parameter is always a newly created ipc namespace. 
++ */ + static struct vfsmount *mq_create_mount(struct ipc_namespace *ns) + { + struct mqueue_fs_context *ctx; +@@ -403,6 +416,7 @@ static struct vfsmount *mq_create_mount(struct ipc_namespace *ns) + return ERR_CAST(fc); + + ctx = fc->fs_private; ++ ctx->newns = true; + put_ipc_ns(ctx->ipc_ns); + ctx->ipc_ns = get_ipc_ns(ns); + put_user_ns(fc->user_ns); +diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c +index 49c7a09d688d7..768ffd6037875 100644 +--- a/kernel/bpf/stackmap.c ++++ b/kernel/bpf/stackmap.c +@@ -117,7 +117,6 @@ static struct bpf_map *stack_map_alloc(union bpf_attr *attr) + return ERR_PTR(-E2BIG); + + cost = n_buckets * sizeof(struct stack_map_bucket *) + sizeof(*smap); +- cost += n_buckets * (value_size + sizeof(struct stack_map_bucket)); + err = bpf_map_charge_init(&mem, cost); + if (err) + return ERR_PTR(err); +diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c +index 4dc3bbfd3e3f3..1c133f610f592 100644 +--- a/kernel/dma/debug.c ++++ b/kernel/dma/debug.c +@@ -450,7 +450,7 @@ void debug_dma_dump_mappings(struct device *dev) + * At any time debug_dma_assert_idle() can be called to trigger a + * warning if any cachelines in the given page are in the active set. 
+ */ +-static RADIX_TREE(dma_active_cacheline, GFP_NOWAIT); ++static RADIX_TREE(dma_active_cacheline, GFP_ATOMIC); + static DEFINE_SPINLOCK(radix_lock); + #define ACTIVE_CACHELINE_MAX_OVERLAP ((1 << RADIX_TREE_MAX_TAGS) - 1) + #define CACHELINE_PER_PAGE_SHIFT (PAGE_SHIFT - L1_CACHE_SHIFT) +diff --git a/kernel/ptrace.c b/kernel/ptrace.c +index d99f73f83bf5f..aab480e24bd60 100644 +--- a/kernel/ptrace.c ++++ b/kernel/ptrace.c +@@ -1219,9 +1219,8 @@ int ptrace_request(struct task_struct *child, long request, + return ptrace_resume(child, request, data); + + case PTRACE_KILL: +- if (child->exit_state) /* already dead */ +- return 0; +- return ptrace_resume(child, request, SIGKILL); ++ send_sig_info(SIGKILL, SEND_SIG_NOINFO, child); ++ return 0; + + #ifdef CONFIG_HAVE_ARCH_TRACEHOOK + case PTRACE_GETREGSET: +diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c +index 87d9fad9d01d6..d2a68ae7596ec 100644 +--- a/kernel/sched/fair.c ++++ b/kernel/sched/fair.c +@@ -4485,8 +4485,8 @@ static int tg_unthrottle_up(struct task_group *tg, void *data) + + cfs_rq->throttle_count--; + if (!cfs_rq->throttle_count) { +- cfs_rq->throttled_clock_task_time += rq_clock_task(rq) - +- cfs_rq->throttled_clock_task; ++ cfs_rq->throttled_clock_pelt_time += rq_clock_pelt(rq) - ++ cfs_rq->throttled_clock_pelt; + + /* Add cfs_rq with already running entity in the list */ + if (cfs_rq->nr_running >= 1) +@@ -4503,7 +4503,7 @@ static int tg_throttle_down(struct task_group *tg, void *data) + + /* group is entering throttled state, stop time */ + if (!cfs_rq->throttle_count) { +- cfs_rq->throttled_clock_task = rq_clock_task(rq); ++ cfs_rq->throttled_clock_pelt = rq_clock_pelt(rq); + list_del_leaf_cfs_rq(cfs_rq); + } + cfs_rq->throttle_count++; +@@ -4932,7 +4932,7 @@ static void sync_throttle(struct task_group *tg, int cpu) + pcfs_rq = tg->parent->cfs_rq[cpu]; + + cfs_rq->throttle_count = pcfs_rq->throttle_count; +- cfs_rq->throttled_clock_task = rq_clock_task(cpu_rq(cpu)); ++ 
cfs_rq->throttled_clock_pelt = rq_clock_pelt(cpu_rq(cpu)); + } + + /* conditionally throttle active cfs_rq's from put_prev_entity() */ +diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h +index afff644da0650..43e2a47489fae 100644 +--- a/kernel/sched/pelt.h ++++ b/kernel/sched/pelt.h +@@ -127,9 +127,9 @@ static inline u64 rq_clock_pelt(struct rq *rq) + static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq) + { + if (unlikely(cfs_rq->throttle_count)) +- return cfs_rq->throttled_clock_task - cfs_rq->throttled_clock_task_time; ++ return cfs_rq->throttled_clock_pelt - cfs_rq->throttled_clock_pelt_time; + +- return rq_clock_pelt(rq_of(cfs_rq)) - cfs_rq->throttled_clock_task_time; ++ return rq_clock_pelt(rq_of(cfs_rq)) - cfs_rq->throttled_clock_pelt_time; + } + #else + static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq) +diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h +index fe755c1a0af95..b8a3db59e3267 100644 +--- a/kernel/sched/sched.h ++++ b/kernel/sched/sched.h +@@ -570,8 +570,8 @@ struct cfs_rq { + s64 runtime_remaining; + + u64 throttled_clock; +- u64 throttled_clock_task; +- u64 throttled_clock_task_time; ++ u64 throttled_clock_pelt; ++ u64 throttled_clock_pelt_time; + int throttled; + int throttle_count; + struct list_head throttled_list; +diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c +index 56619766e9103..55da88f18342f 100644 +--- a/kernel/trace/trace.c ++++ b/kernel/trace/trace.c +@@ -2537,7 +2537,7 @@ trace_event_buffer_lock_reserve(struct ring_buffer **current_rb, + } + EXPORT_SYMBOL_GPL(trace_event_buffer_lock_reserve); + +-static DEFINE_SPINLOCK(tracepoint_iter_lock); ++static DEFINE_RAW_SPINLOCK(tracepoint_iter_lock); + static DEFINE_MUTEX(tracepoint_printk_mutex); + + static void output_printk(struct trace_event_buffer *fbuffer) +@@ -2558,14 +2558,14 @@ static void output_printk(struct trace_event_buffer *fbuffer) + + event = &fbuffer->trace_file->event_call->event; + +- spin_lock_irqsave(&tracepoint_iter_lock, flags); ++ 
raw_spin_lock_irqsave(&tracepoint_iter_lock, flags); + trace_seq_init(&iter->seq); + iter->ent = fbuffer->entry; + event_call->event.funcs->trace(iter, 0, event); + trace_seq_putc(&iter->seq, 0); + printk("%s", iter->seq.buffer); + +- spin_unlock_irqrestore(&tracepoint_iter_lock, flags); ++ raw_spin_unlock_irqrestore(&tracepoint_iter_lock, flags); + } + + int tracepoint_printk_sysctl(struct ctl_table *table, int write, +@@ -5638,12 +5638,18 @@ static void tracing_set_nop(struct trace_array *tr) + tr->current_trace = &nop_trace; + } + ++static bool tracer_options_updated; ++ + static void add_tracer_options(struct trace_array *tr, struct tracer *t) + { + /* Only enable if the directory has been created already. */ + if (!tr->dir) + return; + ++ /* Only create trace option files after update_tracer_options finish */ ++ if (!tracer_options_updated) ++ return; ++ + create_trace_option_files(tr, t); + } + +@@ -8391,6 +8397,7 @@ static void __update_tracer_options(struct trace_array *tr) + static void update_tracer_options(struct trace_array *tr) + { + mutex_lock(&trace_types_lock); ++ tracer_options_updated = true; + __update_tracer_options(tr); + mutex_unlock(&trace_types_lock); + } +diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c +index 413da11260f89..83e1810556853 100644 +--- a/kernel/trace/trace_events_hist.c ++++ b/kernel/trace/trace_events_hist.c +@@ -2695,8 +2695,11 @@ static int init_var_ref(struct hist_field *ref_field, + return err; + free: + kfree(ref_field->system); ++ ref_field->system = NULL; + kfree(ref_field->event_name); ++ ref_field->event_name = NULL; + kfree(ref_field->name); ++ ref_field->name = NULL; + + goto out; + } +diff --git a/lib/nodemask.c b/lib/nodemask.c +index 3aa454c54c0de..e22647f5181b3 100644 +--- a/lib/nodemask.c ++++ b/lib/nodemask.c +@@ -3,9 +3,9 @@ + #include + #include + +-int __next_node_in(int node, const nodemask_t *srcp) ++unsigned int __next_node_in(int node, const nodemask_t *srcp) + { +- int 
ret = __next_node(node, srcp); ++ unsigned int ret = __next_node(node, srcp); + + if (ret == MAX_NUMNODES) + ret = __first_node(srcp); +diff --git a/mm/compaction.c b/mm/compaction.c +index d686887856fee..0758afd6325da 100644 +--- a/mm/compaction.c ++++ b/mm/compaction.c +@@ -1709,6 +1709,8 @@ static unsigned long fast_find_migrateblock(struct compact_control *cc) + + update_fast_start_pfn(cc, free_pfn); + pfn = pageblock_start_pfn(free_pfn); ++ if (pfn < cc->zone->zone_start_pfn) ++ pfn = cc->zone->zone_start_pfn; + cc->fast_search_fail = 0; + found_block = true; + set_pageblock_skip(freepage); +diff --git a/mm/hugetlb.c b/mm/hugetlb.c +index 20da6ede77041..b6f029a1059f1 100644 +--- a/mm/hugetlb.c ++++ b/mm/hugetlb.c +@@ -5033,7 +5033,14 @@ int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep) + pud_clear(pud); + put_page(virt_to_page(ptep)); + mm_dec_nr_pmds(mm); +- *addr = ALIGN(*addr, HPAGE_SIZE * PTRS_PER_PTE) - HPAGE_SIZE; ++ /* ++ * This update of passed address optimizes loops sequentially ++ * processing addresses in increments of huge page size (PMD_SIZE ++ * in this case). By clearing the pud, a PUD_SIZE area is unmapped. ++ * Update address to the 'last page' in the cleared area so that ++ * calling loop can move to first page past this area. 
++ */ ++ *addr |= PUD_SIZE - PMD_SIZE; + return 1; + } + #define want_pmd_share() (1) +diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c +index 2c616c1c62958..fbfb12e430101 100644 +--- a/net/bluetooth/sco.c ++++ b/net/bluetooth/sco.c +@@ -563,19 +563,24 @@ static int sco_sock_connect(struct socket *sock, struct sockaddr *addr, int alen + addr->sa_family != AF_BLUETOOTH) + return -EINVAL; + +- if (sk->sk_state != BT_OPEN && sk->sk_state != BT_BOUND) +- return -EBADFD; ++ lock_sock(sk); ++ if (sk->sk_state != BT_OPEN && sk->sk_state != BT_BOUND) { ++ err = -EBADFD; ++ goto done; ++ } + +- if (sk->sk_type != SOCK_SEQPACKET) +- return -EINVAL; ++ if (sk->sk_type != SOCK_SEQPACKET) { ++ err = -EINVAL; ++ goto done; ++ } + + hdev = hci_get_route(&sa->sco_bdaddr, &sco_pi(sk)->src, BDADDR_BREDR); +- if (!hdev) +- return -EHOSTUNREACH; ++ if (!hdev) { ++ err = -EHOSTUNREACH; ++ goto done; ++ } + hci_dev_lock(hdev); + +- lock_sock(sk); +- + /* Set destination address and psm */ + bacpy(&sco_pi(sk)->dst, &sa->sco_bdaddr); + +diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c +index 5b38d03f6d79a..614410a6db44b 100644 +--- a/net/ipv4/ip_gre.c ++++ b/net/ipv4/ip_gre.c +@@ -602,21 +602,20 @@ static netdev_tx_t ipgre_xmit(struct sk_buff *skb, + } + + if (dev->header_ops) { +- const int pull_len = tunnel->hlen + sizeof(struct iphdr); +- + if (skb_cow_head(skb, 0)) + goto free_skb; + + tnl_params = (const struct iphdr *)skb->data; + +- if (pull_len > skb_transport_offset(skb)) +- goto free_skb; +- + /* Pull skb since ip_tunnel_xmit() needs skb->data pointing + * to gre header. 
+ */ +- skb_pull(skb, pull_len); ++ skb_pull(skb, tunnel->hlen + sizeof(struct iphdr)); + skb_reset_mac_header(skb); ++ ++ if (skb->ip_summed == CHECKSUM_PARTIAL && ++ skb_checksum_start(skb) < skb->data) ++ goto free_skb; + } else { + if (skb_cow_head(skb, dev->needed_headroom)) + goto free_skb; +diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c +index b0e6fc2c5e108..0808110451a0f 100644 +--- a/net/ipv4/tcp_input.c ++++ b/net/ipv4/tcp_input.c +@@ -2578,12 +2578,15 @@ static void tcp_mtup_probe_success(struct sock *sk) + { + struct tcp_sock *tp = tcp_sk(sk); + struct inet_connection_sock *icsk = inet_csk(sk); ++ u64 val; + +- /* FIXME: breaks with very large cwnd */ + tp->prior_ssthresh = tcp_current_ssthresh(sk); +- tp->snd_cwnd = tp->snd_cwnd * +- tcp_mss_to_mtu(sk, tp->mss_cache) / +- icsk->icsk_mtup.probe_size; ++ ++ val = (u64)tp->snd_cwnd * tcp_mss_to_mtu(sk, tp->mss_cache); ++ do_div(val, icsk->icsk_mtup.probe_size); ++ WARN_ON_ONCE((u32)val != val); ++ tp->snd_cwnd = max_t(u32, 1U, val); ++ + tp->snd_cwnd_cnt = 0; + tp->snd_cwnd_stamp = tcp_jiffies32; + tp->snd_ssthresh = tcp_current_ssthresh(sk); +diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c +index 67493ec6318ad..739fc69cdcc62 100644 +--- a/net/ipv4/tcp_output.c ++++ b/net/ipv4/tcp_output.c +@@ -3869,8 +3869,8 @@ int tcp_rtx_synack(const struct sock *sk, struct request_sock *req) + tcp_rsk(req)->txhash = net_tx_rndhash(); + res = af_ops->send_synack(sk, NULL, &fl, req, NULL, TCP_SYNACK_NORMAL); + if (!res) { +- __TCP_INC_STATS(sock_net(sk), TCP_MIB_RETRANSSEGS); +- __NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPSYNRETRANS); ++ TCP_INC_STATS(sock_net(sk), TCP_MIB_RETRANSSEGS); ++ NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPSYNRETRANS); + if (unlikely(tcp_passive_fastopen(sk))) + tcp_sk(sk)->total_retrans++; + trace_tcp_retransmit_synack(sk, req); +diff --git a/net/ipv4/xfrm4_protocol.c b/net/ipv4/xfrm4_protocol.c +index 8a4285712808e..9031b7732fece 100644 +--- a/net/ipv4/xfrm4_protocol.c ++++ 
b/net/ipv4/xfrm4_protocol.c +@@ -298,4 +298,3 @@ void __init xfrm4_protocol_init(void) + { + xfrm_input_register_afinfo(&xfrm4_input_afinfo); + } +-EXPORT_SYMBOL(xfrm4_protocol_init); +diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c +index 92b32d131e1c3..e29553e4f4ee3 100644 +--- a/net/ipv6/addrconf.c ++++ b/net/ipv6/addrconf.c +@@ -789,6 +789,7 @@ static void dev_forward_change(struct inet6_dev *idev) + { + struct net_device *dev; + struct inet6_ifaddr *ifa; ++ LIST_HEAD(tmp_addr_list); + + if (!idev) + return; +@@ -807,14 +808,24 @@ static void dev_forward_change(struct inet6_dev *idev) + } + } + ++ read_lock_bh(&idev->lock); + list_for_each_entry(ifa, &idev->addr_list, if_list) { + if (ifa->flags&IFA_F_TENTATIVE) + continue; ++ list_add_tail(&ifa->if_list_aux, &tmp_addr_list); ++ } ++ read_unlock_bh(&idev->lock); ++ ++ while (!list_empty(&tmp_addr_list)) { ++ ifa = list_first_entry(&tmp_addr_list, ++ struct inet6_ifaddr, if_list_aux); ++ list_del(&ifa->if_list_aux); + if (idev->cnf.forwarding) + addrconf_join_anycast(ifa); + else + addrconf_leave_anycast(ifa); + } ++ + inet6_netconf_notify_devconf(dev_net(dev), RTM_NEWNETCONF, + NETCONFA_FORWARDING, + dev->ifindex, &idev->cnf); +@@ -3713,7 +3724,8 @@ static int addrconf_ifdown(struct net_device *dev, int how) + unsigned long event = how ? 
NETDEV_UNREGISTER : NETDEV_DOWN; + struct net *net = dev_net(dev); + struct inet6_dev *idev; +- struct inet6_ifaddr *ifa, *tmp; ++ struct inet6_ifaddr *ifa; ++ LIST_HEAD(tmp_addr_list); + bool keep_addr = false; + bool was_ready; + int state, i; +@@ -3805,16 +3817,23 @@ restart: + write_lock_bh(&idev->lock); + } + +- list_for_each_entry_safe(ifa, tmp, &idev->addr_list, if_list) { ++ list_for_each_entry(ifa, &idev->addr_list, if_list) ++ list_add_tail(&ifa->if_list_aux, &tmp_addr_list); ++ write_unlock_bh(&idev->lock); ++ ++ while (!list_empty(&tmp_addr_list)) { + struct fib6_info *rt = NULL; + bool keep; + ++ ifa = list_first_entry(&tmp_addr_list, ++ struct inet6_ifaddr, if_list_aux); ++ list_del(&ifa->if_list_aux); ++ + addrconf_del_dad_work(ifa); + + keep = keep_addr && (ifa->flags & IFA_F_PERMANENT) && + !addr_is_local(&ifa->addr); + +- write_unlock_bh(&idev->lock); + spin_lock_bh(&ifa->lock); + + if (keep) { +@@ -3845,15 +3864,14 @@ restart: + addrconf_leave_solict(ifa->idev, &ifa->addr); + } + +- write_lock_bh(&idev->lock); + if (!keep) { ++ write_lock_bh(&idev->lock); + list_del_rcu(&ifa->if_list); ++ write_unlock_bh(&idev->lock); + in6_ifa_put(ifa); + } + } + +- write_unlock_bh(&idev->lock); +- + /* Step 5: Discard anycast and multicast list */ + if (how) { + ipv6_ac_destroy_dev(idev); +@@ -4184,7 +4202,8 @@ static void addrconf_dad_completed(struct inet6_ifaddr *ifp, bool bump_id, + send_rs = send_mld && + ipv6_accept_ra(ifp->idev) && + ifp->idev->cnf.rtr_solicits != 0 && +- (dev->flags&IFF_LOOPBACK) == 0; ++ (dev->flags & IFF_LOOPBACK) == 0 && ++ (dev->type != ARPHRD_TUNNEL); + read_unlock_bh(&ifp->idev->lock); + + /* While dad is in progress mld report's source address is in6_addrany. 
+diff --git a/net/ipv6/seg6_hmac.c b/net/ipv6/seg6_hmac.c +index ffcfcd2b128f3..a4cad71c42047 100644 +--- a/net/ipv6/seg6_hmac.c ++++ b/net/ipv6/seg6_hmac.c +@@ -401,7 +401,6 @@ int __init seg6_hmac_init(void) + { + return seg6_hmac_init_algo(); + } +-EXPORT_SYMBOL(seg6_hmac_init); + + int __net_init seg6_hmac_net_init(struct net *net) + { +diff --git a/net/key/af_key.c b/net/key/af_key.c +index dd064d5eff6ed..32fe99cd01fc8 100644 +--- a/net/key/af_key.c ++++ b/net/key/af_key.c +@@ -2830,10 +2830,12 @@ static int pfkey_process(struct sock *sk, struct sk_buff *skb, const struct sadb + void *ext_hdrs[SADB_EXT_MAX]; + int err; + +- err = pfkey_broadcast(skb_clone(skb, GFP_KERNEL), GFP_KERNEL, +- BROADCAST_PROMISC_ONLY, NULL, sock_net(sk)); +- if (err) +- return err; ++ /* Non-zero return value of pfkey_broadcast() does not always signal ++ * an error and even on an actual error we may still want to process ++ * the message so rather ignore the return value. ++ */ ++ pfkey_broadcast(skb_clone(skb, GFP_KERNEL), GFP_KERNEL, ++ BROADCAST_PROMISC_ONLY, NULL, sock_net(sk)); + + memset(ext_hdrs, 0, sizeof(ext_hdrs)); + err = parse_exthdrs(skb, hdr, ext_hdrs); +diff --git a/net/mac80211/chan.c b/net/mac80211/chan.c +index 9c94baaf693cb..15f47918cbacd 100644 +--- a/net/mac80211/chan.c ++++ b/net/mac80211/chan.c +@@ -1639,12 +1639,9 @@ int ieee80211_vif_use_reserved_context(struct ieee80211_sub_if_data *sdata) + + if (new_ctx->replace_state == IEEE80211_CHANCTX_REPLACE_NONE) { + if (old_ctx) +- err = ieee80211_vif_use_reserved_reassign(sdata); +- else +- err = ieee80211_vif_use_reserved_assign(sdata); ++ return ieee80211_vif_use_reserved_reassign(sdata); + +- if (err) +- return err; ++ return ieee80211_vif_use_reserved_assign(sdata); + } + + /* +diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h +index e574fbf6745a4..7747a6f46d299 100644 +--- a/net/mac80211/ieee80211_i.h ++++ b/net/mac80211/ieee80211_i.h +@@ -1082,6 +1082,9 @@ struct tpt_led_trigger { + * a 
scan complete for an aborted scan. + * @SCAN_HW_CANCELLED: Set for our scan work function when the scan is being + * cancelled. ++ * @SCAN_BEACON_WAIT: Set whenever we're passive scanning because of radar/no-IR ++ * and could send a probe request after receiving a beacon. ++ * @SCAN_BEACON_DONE: Beacon received, we can now send a probe request + */ + enum { + SCAN_SW_SCANNING, +@@ -1090,6 +1093,8 @@ enum { + SCAN_COMPLETED, + SCAN_ABORTED, + SCAN_HW_CANCELLED, ++ SCAN_BEACON_WAIT, ++ SCAN_BEACON_DONE, + }; + + /** +diff --git a/net/mac80211/scan.c b/net/mac80211/scan.c +index 4d31d9688dc23..344b2c22e75b5 100644 +--- a/net/mac80211/scan.c ++++ b/net/mac80211/scan.c +@@ -252,6 +252,16 @@ void ieee80211_scan_rx(struct ieee80211_local *local, struct sk_buff *skb) + if (likely(!sdata1 && !sdata2)) + return; + ++ if (test_and_clear_bit(SCAN_BEACON_WAIT, &local->scanning)) { ++ /* ++ * we were passive scanning because of radar/no-IR, but ++ * the beacon/proberesp rx gives us an opportunity to upgrade ++ * to active scan ++ */ ++ set_bit(SCAN_BEACON_DONE, &local->scanning); ++ ieee80211_queue_delayed_work(&local->hw, &local->scan_work, 0); ++ } ++ + if (ieee80211_is_probe_resp(mgmt->frame_control)) { + struct cfg80211_scan_request *scan_req; + struct cfg80211_sched_scan_request *sched_scan_req; +@@ -753,6 +763,8 @@ static int __ieee80211_start_scan(struct ieee80211_sub_if_data *sdata, + IEEE80211_CHAN_RADAR)) || + !req->n_ssids) { + next_delay = IEEE80211_PASSIVE_CHANNEL_TIME; ++ if (req->n_ssids) ++ set_bit(SCAN_BEACON_WAIT, &local->scanning); + } else { + ieee80211_scan_state_send_probe(local, &next_delay); + next_delay = IEEE80211_CHANNEL_TIME; +@@ -945,6 +957,8 @@ static void ieee80211_scan_state_set_channel(struct ieee80211_local *local, + !scan_req->n_ssids) { + *next_delay = IEEE80211_PASSIVE_CHANNEL_TIME; + local->next_scan_state = SCAN_DECISION; ++ if (scan_req->n_ssids) ++ set_bit(SCAN_BEACON_WAIT, &local->scanning); + return; + } + +@@ -1037,6 +1051,8 @@ void 
ieee80211_scan_work(struct work_struct *work) + goto out; + } + ++ clear_bit(SCAN_BEACON_WAIT, &local->scanning); ++ + /* + * as long as no delay is required advance immediately + * without scheduling a new work +@@ -1047,6 +1063,10 @@ void ieee80211_scan_work(struct work_struct *work) + goto out_complete; + } + ++ if (test_and_clear_bit(SCAN_BEACON_DONE, &local->scanning) && ++ local->next_scan_state == SCAN_DECISION) ++ local->next_scan_state = SCAN_SEND_PROBE; ++ + switch (local->next_scan_state) { + case SCAN_DECISION: + /* if no more bands/channels left, complete scan */ +diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c +index 545da270e8020..58a7d89719b1d 100644 +--- a/net/netfilter/nf_tables_api.c ++++ b/net/netfilter/nf_tables_api.c +@@ -2267,27 +2267,31 @@ struct nft_expr *nft_expr_init(const struct nft_ctx *ctx, + + err = nf_tables_expr_parse(ctx, nla, &info); + if (err < 0) +- goto err1; ++ goto err_expr_parse; ++ ++ err = -EOPNOTSUPP; ++ if (!(info.ops->type->flags & NFT_EXPR_STATEFUL)) ++ goto err_expr_stateful; + + err = -ENOMEM; + expr = kzalloc(info.ops->size, GFP_KERNEL); + if (expr == NULL) +- goto err2; ++ goto err_expr_stateful; + + err = nf_tables_newexpr(ctx, &info, expr); + if (err < 0) +- goto err3; ++ goto err_expr_new; + + return expr; +-err3: ++err_expr_new: + kfree(expr); +-err2: ++err_expr_stateful: + owner = info.ops->type->owner; + if (info.ops->type->release_ops) + info.ops->type->release_ops(info.ops); + + module_put(owner); +-err1: ++err_expr_parse: + return ERR_PTR(err); + } + +@@ -6566,6 +6570,9 @@ static void nft_commit_release(struct nft_trans *trans) + nf_tables_chain_destroy(&trans->ctx); + break; + case NFT_MSG_DELRULE: ++ if (trans->ctx.chain->flags & NFT_CHAIN_HW_OFFLOAD) ++ nft_flow_rule_destroy(nft_trans_flow_rule(trans)); ++ + nf_tables_rule_destroy(&trans->ctx, nft_trans_rule(trans)); + break; + case NFT_MSG_DELSET: +@@ -6887,6 +6894,9 @@ static int nf_tables_commit(struct net *net, struct 
sk_buff *skb) + nf_tables_rule_notify(&trans->ctx, + nft_trans_rule(trans), + NFT_MSG_NEWRULE); ++ if (trans->ctx.chain->flags & NFT_CHAIN_HW_OFFLOAD) ++ nft_flow_rule_destroy(nft_trans_flow_rule(trans)); ++ + nft_trans_destroy(trans); + break; + case NFT_MSG_DELRULE: +diff --git a/net/netfilter/nft_dynset.c b/net/netfilter/nft_dynset.c +index 6fdea0e57db8a..6bcc18124e5bd 100644 +--- a/net/netfilter/nft_dynset.c ++++ b/net/netfilter/nft_dynset.c +@@ -204,9 +204,6 @@ static int nft_dynset_init(const struct nft_ctx *ctx, + return PTR_ERR(priv->expr); + + err = -EOPNOTSUPP; +- if (!(priv->expr->ops->type->flags & NFT_EXPR_STATEFUL)) +- goto err1; +- + if (priv->expr->ops->type->flags & NFT_EXPR_GC) { + if (set->flags & NFT_SET_TIMEOUT) + goto err1; +diff --git a/net/netfilter/nft_nat.c b/net/netfilter/nft_nat.c +index 17c0f75dfcdb7..0c5bc3c37ecf4 100644 +--- a/net/netfilter/nft_nat.c ++++ b/net/netfilter/nft_nat.c +@@ -283,7 +283,8 @@ static void nft_nat_inet_eval(const struct nft_expr *expr, + { + const struct nft_nat *priv = nft_expr_priv(expr); + +- if (priv->family == nft_pf(pkt)) ++ if (priv->family == nft_pf(pkt) || ++ priv->family == NFPROTO_INET) + nft_nat_eval(expr, regs, pkt); + } + +diff --git a/net/nfc/core.c b/net/nfc/core.c +index 63701a980ee12..2d4729d1f0eb9 100644 +--- a/net/nfc/core.c ++++ b/net/nfc/core.c +@@ -1159,6 +1159,7 @@ void nfc_unregister_device(struct nfc_dev *dev) + if (dev->rfkill) { + rfkill_unregister(dev->rfkill); + rfkill_destroy(dev->rfkill); ++ dev->rfkill = NULL; + } + dev->shutting_down = true; + device_unlock(&dev->dev); +diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h +index 9fe264bec70ce..cb174f699665b 100644 +--- a/net/rxrpc/ar-internal.h ++++ b/net/rxrpc/ar-internal.h +@@ -665,20 +665,21 @@ struct rxrpc_call { + + spinlock_t input_lock; /* Lock for packet input to this call */ + +- /* receive-phase ACK management */ ++ /* Receive-phase ACK management (ACKs we send). 
*/ + u8 ackr_reason; /* reason to ACK */ + rxrpc_serial_t ackr_serial; /* serial of packet being ACK'd */ +- rxrpc_serial_t ackr_first_seq; /* first sequence number received */ +- rxrpc_seq_t ackr_prev_seq; /* previous sequence number received */ +- rxrpc_seq_t ackr_consumed; /* Highest packet shown consumed */ +- rxrpc_seq_t ackr_seen; /* Highest packet shown seen */ ++ rxrpc_seq_t ackr_highest_seq; /* Higest sequence number received */ ++ atomic_t ackr_nr_unacked; /* Number of unacked packets */ ++ atomic_t ackr_nr_consumed; /* Number of packets needing hard ACK */ + + /* ping management */ + rxrpc_serial_t ping_serial; /* Last ping sent */ + ktime_t ping_time; /* Time last ping sent */ + +- /* transmission-phase ACK management */ ++ /* Transmission-phase ACK management (ACKs we've received). */ + ktime_t acks_latest_ts; /* Timestamp of latest ACK received */ ++ rxrpc_seq_t acks_first_seq; /* first sequence number received */ ++ rxrpc_seq_t acks_prev_seq; /* Highest previousPacket received */ + rxrpc_seq_t acks_lowest_nak; /* Lowest NACK in the buffer (or ==tx_hard_ack) */ + rxrpc_seq_t acks_lost_top; /* tx_top at the time lost-ack ping sent */ + rxrpc_serial_t acks_lost_ping; /* Serial number of probe ACK */ +diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c +index 80e15310f1b29..8574e7066d94c 100644 +--- a/net/rxrpc/call_event.c ++++ b/net/rxrpc/call_event.c +@@ -407,7 +407,8 @@ recheck_state: + goto recheck_state; + } + +- if (test_and_clear_bit(RXRPC_CALL_EV_RESEND, &call->events)) { ++ if (test_and_clear_bit(RXRPC_CALL_EV_RESEND, &call->events) && ++ call->state != RXRPC_CALL_CLIENT_RECV_REPLY) { + rxrpc_resend(call, now); + goto recheck_state; + } +diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c +index 916d1f455b218..5cf64cf8debf7 100644 +--- a/net/rxrpc/input.c ++++ b/net/rxrpc/input.c +@@ -413,8 +413,8 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb) + { + struct rxrpc_skb_priv *sp = rxrpc_skb(skb); + enum 
rxrpc_call_state state; +- unsigned int j, nr_subpackets; +- rxrpc_serial_t serial = sp->hdr.serial, ack_serial = 0; ++ unsigned int j, nr_subpackets, nr_unacked = 0; ++ rxrpc_serial_t serial = sp->hdr.serial, ack_serial = serial; + rxrpc_seq_t seq0 = sp->hdr.seq, hard_ack; + bool immediate_ack = false, jumbo_bad = false; + u8 ack = 0; +@@ -454,7 +454,6 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb) + !rxrpc_receiving_reply(call)) + goto unlock; + +- call->ackr_prev_seq = seq0; + hard_ack = READ_ONCE(call->rx_hard_ack); + + nr_subpackets = sp->nr_subpackets; +@@ -535,6 +534,9 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb) + ack_serial = serial; + } + ++ if (after(seq0, call->ackr_highest_seq)) ++ call->ackr_highest_seq = seq0; ++ + /* Queue the packet. We use a couple of memory barriers here as need + * to make sure that rx_top is perceived to be set after the buffer + * pointer and that the buffer pointer is set after the annotation and +@@ -568,6 +570,8 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb) + sp = NULL; + } + ++ nr_unacked++; ++ + if (last) { + set_bit(RXRPC_CALL_RX_LAST, &call->flags); + if (!ack) { +@@ -587,9 +591,14 @@ static void rxrpc_input_data(struct rxrpc_call *call, struct sk_buff *skb) + } + call->rx_expect_next = seq + 1; + } ++ if (!ack) ++ ack_serial = serial; + } + + ack: ++ if (atomic_add_return(nr_unacked, &call->ackr_nr_unacked) > 2 && !ack) ++ ack = RXRPC_ACK_IDLE; ++ + if (ack) + rxrpc_propose_ACK(call, ack, ack_serial, + immediate_ack, true, +@@ -808,7 +817,7 @@ static void rxrpc_input_soft_acks(struct rxrpc_call *call, u8 *acks, + static bool rxrpc_is_ack_valid(struct rxrpc_call *call, + rxrpc_seq_t first_pkt, rxrpc_seq_t prev_pkt) + { +- rxrpc_seq_t base = READ_ONCE(call->ackr_first_seq); ++ rxrpc_seq_t base = READ_ONCE(call->acks_first_seq); + + if (after(first_pkt, base)) + return true; /* The window advanced */ +@@ -816,7 +825,7 @@ static 
bool rxrpc_is_ack_valid(struct rxrpc_call *call, + if (before(first_pkt, base)) + return false; /* firstPacket regressed */ + +- if (after_eq(prev_pkt, call->ackr_prev_seq)) ++ if (after_eq(prev_pkt, call->acks_prev_seq)) + return true; /* previousPacket hasn't regressed. */ + + /* Some rx implementations put a serial number in previousPacket. */ +@@ -891,8 +900,8 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb) + /* Discard any out-of-order or duplicate ACKs (outside lock). */ + if (!rxrpc_is_ack_valid(call, first_soft_ack, prev_pkt)) { + trace_rxrpc_rx_discard_ack(call->debug_id, ack_serial, +- first_soft_ack, call->ackr_first_seq, +- prev_pkt, call->ackr_prev_seq); ++ first_soft_ack, call->acks_first_seq, ++ prev_pkt, call->acks_prev_seq); + return; + } + +@@ -907,14 +916,14 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb) + /* Discard any out-of-order or duplicate ACKs (inside lock). */ + if (!rxrpc_is_ack_valid(call, first_soft_ack, prev_pkt)) { + trace_rxrpc_rx_discard_ack(call->debug_id, ack_serial, +- first_soft_ack, call->ackr_first_seq, +- prev_pkt, call->ackr_prev_seq); ++ first_soft_ack, call->acks_first_seq, ++ prev_pkt, call->acks_prev_seq); + goto out; + } + call->acks_latest_ts = skb->tstamp; + +- call->ackr_first_seq = first_soft_ack; +- call->ackr_prev_seq = prev_pkt; ++ call->acks_first_seq = first_soft_ack; ++ call->acks_prev_seq = prev_pkt; + + /* Parse rwind and mtu sizes if provided. 
*/ + if (buf.info.rxMTU) +diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c +index a4a6f8ee07201..6202d2e32914a 100644 +--- a/net/rxrpc/output.c ++++ b/net/rxrpc/output.c +@@ -74,11 +74,18 @@ static size_t rxrpc_fill_out_ack(struct rxrpc_connection *conn, + u8 reason) + { + rxrpc_serial_t serial; ++ unsigned int tmp; + rxrpc_seq_t hard_ack, top, seq; + int ix; + u32 mtu, jmax; + u8 *ackp = pkt->acks; + ++ tmp = atomic_xchg(&call->ackr_nr_unacked, 0); ++ tmp |= atomic_xchg(&call->ackr_nr_consumed, 0); ++ if (!tmp && (reason == RXRPC_ACK_DELAY || ++ reason == RXRPC_ACK_IDLE)) ++ return 0; ++ + /* Barrier against rxrpc_input_data(). */ + serial = call->ackr_serial; + hard_ack = READ_ONCE(call->rx_hard_ack); +@@ -89,7 +96,7 @@ static size_t rxrpc_fill_out_ack(struct rxrpc_connection *conn, + pkt->ack.bufferSpace = htons(8); + pkt->ack.maxSkew = htons(0); + pkt->ack.firstPacket = htonl(hard_ack + 1); +- pkt->ack.previousPacket = htonl(call->ackr_prev_seq); ++ pkt->ack.previousPacket = htonl(call->ackr_highest_seq); + pkt->ack.serial = htonl(serial); + pkt->ack.reason = reason; + pkt->ack.nAcks = top - hard_ack; +@@ -180,6 +187,10 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, bool ping, + n = rxrpc_fill_out_ack(conn, call, pkt, &hard_ack, &top, reason); + + spin_unlock_bh(&call->lock); ++ if (n == 0) { ++ kfree(pkt); ++ return 0; ++ } + + iov[0].iov_base = pkt; + iov[0].iov_len = sizeof(pkt->whdr) + sizeof(pkt->ack) + n; +@@ -227,13 +238,6 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, bool ping, + ntohl(pkt->ack.serial), + false, true, + rxrpc_propose_ack_retry_tx); +- } else { +- spin_lock_bh(&call->lock); +- if (after(hard_ack, call->ackr_consumed)) +- call->ackr_consumed = hard_ack; +- if (after(top, call->ackr_seen)) +- call->ackr_seen = top; +- spin_unlock_bh(&call->lock); + } + + rxrpc_set_keepalive(call); +diff --git a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c +index 4f48e3bdd4b4b..c75789ebc514d 100644 +--- a/net/rxrpc/recvmsg.c ++++ 
b/net/rxrpc/recvmsg.c +@@ -212,11 +212,9 @@ static void rxrpc_rotate_rx_window(struct rxrpc_call *call) + rxrpc_end_rx_phase(call, serial); + } else { + /* Check to see if there's an ACK that needs sending. */ +- if (after_eq(hard_ack, call->ackr_consumed + 2) || +- after_eq(top, call->ackr_seen + 2) || +- (hard_ack == top && after(hard_ack, call->ackr_consumed))) +- rxrpc_propose_ACK(call, RXRPC_ACK_DELAY, serial, +- true, true, ++ if (atomic_inc_return(&call->ackr_nr_consumed) > 2) ++ rxrpc_propose_ACK(call, RXRPC_ACK_IDLE, serial, ++ true, false, + rxrpc_propose_ack_rotate_rx); + if (call->ackr_reason && call->ackr_reason != RXRPC_ACK_DELAY) + rxrpc_send_ack_packet(call, false, NULL); +diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c +index 1a340eb0abf7c..22f020099214d 100644 +--- a/net/rxrpc/sendmsg.c ++++ b/net/rxrpc/sendmsg.c +@@ -463,6 +463,12 @@ static int rxrpc_send_data(struct rxrpc_sock *rx, + + success: + ret = copied; ++ if (READ_ONCE(call->state) == RXRPC_CALL_COMPLETE) { ++ read_lock_bh(&call->state_lock); ++ if (call->error < 0) ++ ret = call->error; ++ read_unlock_bh(&call->state_lock); ++ } + out: + call->tx_pending = skb; + _leave(" = %d", ret); +diff --git a/net/rxrpc/sysctl.c b/net/rxrpc/sysctl.c +index 18dade4e6f9a0..8fc4190725058 100644 +--- a/net/rxrpc/sysctl.c ++++ b/net/rxrpc/sysctl.c +@@ -12,7 +12,7 @@ + + static struct ctl_table_header *rxrpc_sysctl_reg_table; + static const unsigned int four = 4; +-static const unsigned int thirtytwo = 32; ++static const unsigned int max_backlog = RXRPC_BACKLOG_MAX - 1; + static const unsigned int n_65535 = 65535; + static const unsigned int n_max_acks = RXRPC_RXTX_BUFF_SIZE - 1; + static const unsigned long one_jiffy = 1; +@@ -97,7 +97,7 @@ static struct ctl_table rxrpc_sysctl_table[] = { + .mode = 0644, + .proc_handler = proc_dointvec_minmax, + .extra1 = (void *)&four, +- .extra2 = (void *)&thirtytwo, ++ .extra2 = (void *)&max_backlog, + }, + { + .procname = "rx_window_size", +diff --git 
a/net/sctp/input.c b/net/sctp/input.c +index 9616b600a8766..c306cb25f5246 100644 +--- a/net/sctp/input.c ++++ b/net/sctp/input.c +@@ -92,6 +92,7 @@ int sctp_rcv(struct sk_buff *skb) + struct sctp_chunk *chunk; + union sctp_addr src; + union sctp_addr dest; ++ int bound_dev_if; + int family; + struct sctp_af *af; + struct net *net = dev_net(skb->dev); +@@ -169,7 +170,8 @@ int sctp_rcv(struct sk_buff *skb) + * If a frame arrives on an interface and the receiving socket is + * bound to another interface, via SO_BINDTODEVICE, treat it as OOTB + */ +- if (sk->sk_bound_dev_if && (sk->sk_bound_dev_if != af->skb_iif(skb))) { ++ bound_dev_if = READ_ONCE(sk->sk_bound_dev_if); ++ if (bound_dev_if && (bound_dev_if != af->skb_iif(skb))) { + if (transport) { + sctp_transport_put(transport); + asoc = NULL; +diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c +index a5a8cca46bd5f..394491692a078 100644 +--- a/net/smc/af_smc.c ++++ b/net/smc/af_smc.c +@@ -877,9 +877,9 @@ static int smc_connect(struct socket *sock, struct sockaddr *addr, + if (rc && rc != -EINPROGRESS) + goto out; + +- sock_hold(&smc->sk); /* sock put in passive closing */ + if (smc->use_fallback) + goto out; ++ sock_hold(&smc->sk); /* sock put in passive closing */ + if (flags & O_NONBLOCK) { + if (schedule_work(&smc->connect_work)) + smc->connect_nonblock = 1; +diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c +index 7ef37054071f8..cb8740d156336 100644 +--- a/net/sunrpc/xdr.c ++++ b/net/sunrpc/xdr.c +@@ -608,7 +608,11 @@ static __be32 *xdr_get_next_encode_buffer(struct xdr_stream *xdr, + */ + xdr->p = (void *)p + frag2bytes; + space_left = xdr->buf->buflen - xdr->buf->len; +- xdr->end = (void *)p + min_t(int, space_left, PAGE_SIZE); ++ if (space_left - nbytes >= PAGE_SIZE) ++ xdr->end = (void *)p + PAGE_SIZE; ++ else ++ xdr->end = (void *)p + space_left - frag1bytes; ++ + xdr->buf->page_len += frag2bytes; + xdr->buf->len += nbytes; + return p; +diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c 
b/net/sunrpc/xprtrdma/rpc_rdma.c +index c091417bd799e..60aaed9457e44 100644 +--- a/net/sunrpc/xprtrdma/rpc_rdma.c ++++ b/net/sunrpc/xprtrdma/rpc_rdma.c +@@ -1042,6 +1042,7 @@ static bool + rpcrdma_is_bcall(struct rpcrdma_xprt *r_xprt, struct rpcrdma_rep *rep) + #if defined(CONFIG_SUNRPC_BACKCHANNEL) + { ++ struct rpc_xprt *xprt = &r_xprt->rx_xprt; + struct xdr_stream *xdr = &rep->rr_stream; + __be32 *p; + +@@ -1065,6 +1066,10 @@ rpcrdma_is_bcall(struct rpcrdma_xprt *r_xprt, struct rpcrdma_rep *rep) + if (*p != cpu_to_be32(RPC_CALL)) + return false; + ++ /* No bc service. */ ++ if (xprt->bc_serv == NULL) ++ return false; ++ + /* Now that we are sure this is a backchannel call, + * advance to the RPC header. + */ +diff --git a/net/tipc/bearer.c b/net/tipc/bearer.c +index 8bd2454cc89dc..577f71dd63fb4 100644 +--- a/net/tipc/bearer.c ++++ b/net/tipc/bearer.c +@@ -248,9 +248,8 @@ static int tipc_enable_bearer(struct net *net, const char *name, + u32 i; + + if (!bearer_name_validate(name, &b_names)) { +- errstr = "illegal name"; + NL_SET_ERR_MSG(extack, "Illegal name"); +- goto rejected; ++ return res; + } + + if (prio > TIPC_MAX_LINK_PRI && prio != TIPC_MEDIA_LINK_PRI) { +diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c +index 05470ca91bd94..f33e90bd0683b 100644 +--- a/net/unix/af_unix.c ++++ b/net/unix/af_unix.c +@@ -440,7 +440,7 @@ static int unix_dgram_peer_wake_me(struct sock *sk, struct sock *other) + * -ECONNREFUSED. Otherwise, if we haven't queued any skbs + * to other and its full, we will hang waiting for POLLOUT. 
+ */ +- if (unix_recvq_full(other) && !sock_flag(other, SOCK_DEAD)) ++ if (unix_recvq_full_lockless(other) && !sock_flag(other, SOCK_DEAD)) + return 1; + + if (connected) +diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c +index d3e2b97d5d051..8459f5b6002e1 100644 +--- a/net/wireless/nl80211.c ++++ b/net/wireless/nl80211.c +@@ -3240,6 +3240,7 @@ static int nl80211_send_iface(struct sk_buff *msg, u32 portid, u32 seq, int flag + wdev_lock(wdev); + switch (wdev->iftype) { + case NL80211_IFTYPE_AP: ++ case NL80211_IFTYPE_P2P_GO: + if (wdev->ssid_len && + nla_put(msg, NL80211_ATTR_SSID, wdev->ssid_len, wdev->ssid)) + goto nla_put_failure_locked; +diff --git a/scripts/faddr2line b/scripts/faddr2line +index 6c6439f69a725..0e6268d598835 100755 +--- a/scripts/faddr2line ++++ b/scripts/faddr2line +@@ -44,17 +44,6 @@ + set -o errexit + set -o nounset + +-READELF="${CROSS_COMPILE:-}readelf" +-ADDR2LINE="${CROSS_COMPILE:-}addr2line" +-SIZE="${CROSS_COMPILE:-}size" +-NM="${CROSS_COMPILE:-}nm" +- +-command -v awk >/dev/null 2>&1 || die "awk isn't installed" +-command -v ${READELF} >/dev/null 2>&1 || die "readelf isn't installed" +-command -v ${ADDR2LINE} >/dev/null 2>&1 || die "addr2line isn't installed" +-command -v ${SIZE} >/dev/null 2>&1 || die "size isn't installed" +-command -v ${NM} >/dev/null 2>&1 || die "nm isn't installed" +- + usage() { + echo "usage: faddr2line [--list] ..." >&2 + exit 1 +@@ -69,6 +58,14 @@ die() { + exit 1 + } + ++READELF="${CROSS_COMPILE:-}readelf" ++ADDR2LINE="${CROSS_COMPILE:-}addr2line" ++AWK="awk" ++ ++command -v ${AWK} >/dev/null 2>&1 || die "${AWK} isn't installed" ++command -v ${READELF} >/dev/null 2>&1 || die "${READELF} isn't installed" ++command -v ${ADDR2LINE} >/dev/null 2>&1 || die "${ADDR2LINE} isn't installed" ++ + # Try to figure out the source directory prefix so we can remove it from the + # addr2line output. HACK ALERT: This assumes that start_kernel() is in + # init/main.c! This only works for vmlinux. 
Otherwise it falls back to +@@ -76,7 +73,7 @@ die() { + find_dir_prefix() { + local objfile=$1 + +- local start_kernel_addr=$(${READELF} -sW $objfile | awk '$8 == "start_kernel" {printf "0x%s", $2}') ++ local start_kernel_addr=$(${READELF} --symbols --wide $objfile | ${AWK} '$8 == "start_kernel" {printf "0x%s", $2}') + [[ -z $start_kernel_addr ]] && return + + local file_line=$(${ADDR2LINE} -e $objfile $start_kernel_addr) +@@ -97,86 +94,133 @@ __faddr2line() { + local dir_prefix=$3 + local print_warnings=$4 + +- local func=${func_addr%+*} ++ local sym_name=${func_addr%+*} + local offset=${func_addr#*+} + offset=${offset%/*} +- local size= +- [[ $func_addr =~ "/" ]] && size=${func_addr#*/} ++ local user_size= ++ [[ $func_addr =~ "/" ]] && user_size=${func_addr#*/} + +- if [[ -z $func ]] || [[ -z $offset ]] || [[ $func = $func_addr ]]; then ++ if [[ -z $sym_name ]] || [[ -z $offset ]] || [[ $sym_name = $func_addr ]]; then + warn "bad func+offset $func_addr" + DONE=1 + return + fi + + # Go through each of the object's symbols which match the func name. +- # In rare cases there might be duplicates. +- file_end=$(${SIZE} -Ax $objfile | awk '$1 == ".text" {print $2}') +- while read symbol; do +- local fields=($symbol) +- local sym_base=0x${fields[0]} +- local sym_type=${fields[1]} +- local sym_end=${fields[3]} +- +- # calculate the size +- local sym_size=$(($sym_end - $sym_base)) ++ # In rare cases there might be duplicates, in which case we print all ++ # matches. ++ while read line; do ++ local fields=($line) ++ local sym_addr=0x${fields[1]} ++ local sym_elf_size=${fields[2]} ++ local sym_sec=${fields[6]} ++ ++ # Get the section size: ++ local sec_size=$(${READELF} --section-headers --wide $objfile | ++ sed 's/\[ /\[/' | ++ ${AWK} -v sec=$sym_sec '$1 == "[" sec "]" { print "0x" $6; exit }') ++ ++ if [[ -z $sec_size ]]; then ++ warn "bad section size: section: $sym_sec" ++ DONE=1 ++ return ++ fi ++ ++ # Calculate the symbol size. 
++ # ++ # Unfortunately we can't use the ELF size, because kallsyms ++ # also includes the padding bytes in its size calculation. For ++ # kallsyms, the size calculation is the distance between the ++ # symbol and the next symbol in a sorted list. ++ local sym_size ++ local cur_sym_addr ++ local found=0 ++ while read line; do ++ local fields=($line) ++ cur_sym_addr=0x${fields[1]} ++ local cur_sym_elf_size=${fields[2]} ++ local cur_sym_name=${fields[7]:-} ++ ++ if [[ $cur_sym_addr = $sym_addr ]] && ++ [[ $cur_sym_elf_size = $sym_elf_size ]] && ++ [[ $cur_sym_name = $sym_name ]]; then ++ found=1 ++ continue ++ fi ++ ++ if [[ $found = 1 ]]; then ++ sym_size=$(($cur_sym_addr - $sym_addr)) ++ [[ $sym_size -lt $sym_elf_size ]] && continue; ++ found=2 ++ break ++ fi ++ done < <(${READELF} --symbols --wide $objfile | ${AWK} -v sec=$sym_sec '$7 == sec' | sort --key=2) ++ ++ if [[ $found = 0 ]]; then ++ warn "can't find symbol: sym_name: $sym_name sym_sec: $sym_sec sym_addr: $sym_addr sym_elf_size: $sym_elf_size" ++ DONE=1 ++ return ++ fi ++ ++ # If nothing was found after the symbol, assume it's the last ++ # symbol in the section. 
++ [[ $found = 1 ]] && sym_size=$(($sec_size - $sym_addr)) ++ + if [[ -z $sym_size ]] || [[ $sym_size -le 0 ]]; then +- warn "bad symbol size: base: $sym_base end: $sym_end" ++ warn "bad symbol size: sym_addr: $sym_addr cur_sym_addr: $cur_sym_addr" + DONE=1 + return + fi ++ + sym_size=0x$(printf %x $sym_size) + +- # calculate the address +- local addr=$(($sym_base + $offset)) ++ # Calculate the section address from user-supplied offset: ++ local addr=$(($sym_addr + $offset)) + if [[ -z $addr ]] || [[ $addr = 0 ]]; then +- warn "bad address: $sym_base + $offset" ++ warn "bad address: $sym_addr + $offset" + DONE=1 + return + fi + addr=0x$(printf %x $addr) + +- # weed out non-function symbols +- if [[ $sym_type != t ]] && [[ $sym_type != T ]]; then +- [[ $print_warnings = 1 ]] && +- echo "skipping $func address at $addr due to non-function symbol of type '$sym_type'" +- continue +- fi +- +- # if the user provided a size, make sure it matches the symbol's size +- if [[ -n $size ]] && [[ $size -ne $sym_size ]]; then ++ # If the user provided a size, make sure it matches the symbol's size: ++ if [[ -n $user_size ]] && [[ $user_size -ne $sym_size ]]; then + [[ $print_warnings = 1 ]] && +- echo "skipping $func address at $addr due to size mismatch ($size != $sym_size)" ++ echo "skipping $sym_name address at $addr due to size mismatch ($user_size != $sym_size)" + continue; + fi + +- # make sure the provided offset is within the symbol's range ++ # Make sure the provided offset is within the symbol's range: + if [[ $offset -gt $sym_size ]]; then + [[ $print_warnings = 1 ]] && +- echo "skipping $func address at $addr due to size mismatch ($offset > $sym_size)" ++ echo "skipping $sym_name address at $addr due to size mismatch ($offset > $sym_size)" + continue + fi + +- # separate multiple entries with a blank line ++ # In case of duplicates or multiple addresses specified on the ++ # cmdline, separate multiple entries with a blank line: + [[ $FIRST = 0 ]] && echo + FIRST=0 + 
+- # pass real address to addr2line +- echo "$func+$offset/$sym_size:" +- local file_lines=$(${ADDR2LINE} -fpie $objfile $addr | sed "s; $dir_prefix\(\./\)*; ;") +- [[ -z $file_lines ]] && return ++ echo "$sym_name+$offset/$sym_size:" + ++ # Pass section address to addr2line and strip absolute paths ++ # from the output: ++ local output=$(${ADDR2LINE} -fpie $objfile $addr | sed "s; $dir_prefix\(\./\)*; ;") ++ [[ -z $output ]] && continue ++ ++ # Default output (non --list): + if [[ $LIST = 0 ]]; then +- echo "$file_lines" | while read -r line ++ echo "$output" | while read -r line + do + echo $line + done + DONE=1; +- return ++ continue + fi + +- # show each line with context +- echo "$file_lines" | while read -r line ++ # For --list, show each line with its corresponding source code: ++ echo "$output" | while read -r line + do + echo + echo $line +@@ -184,12 +228,12 @@ __faddr2line() { + n1=$[$n-5] + n2=$[$n+5] + f=$(echo $line | sed 's/.*at \(.\+\):.*/\1/g') +- awk 'NR>=strtonum("'$n1'") && NR<=strtonum("'$n2'") { if (NR=='$n') printf(">%d<", NR); else printf(" %d ", NR); printf("\t%s\n", $0)}' $f ++ ${AWK} 'NR>=strtonum("'$n1'") && NR<=strtonum("'$n2'") { if (NR=='$n') printf(">%d<", NR); else printf(" %d ", NR); printf("\t%s\n", $0)}' $f + done + + DONE=1 + +- done < <(${NM} -n $objfile | awk -v fn=$func -v end=$file_end '$3 == fn { found=1; line=$0; start=$1; next } found == 1 { found=0; print line, "0x"$1 } END {if (found == 1) print line, end; }') ++ done < <(${READELF} --symbols --wide $objfile | ${AWK} -v fn=$sym_name '$4 == "FUNC" && $8 == fn') + } + + [[ $# -lt 2 ]] && usage +diff --git a/scripts/gdb/linux/config.py b/scripts/gdb/linux/config.py +index 90e1565b19671..8843ab3cbaddc 100644 +--- a/scripts/gdb/linux/config.py ++++ b/scripts/gdb/linux/config.py +@@ -24,9 +24,9 @@ class LxConfigDump(gdb.Command): + filename = arg + + try: +- py_config_ptr = gdb.parse_and_eval("kernel_config_data + 8") +- py_config_size = gdb.parse_and_eval( +- 
"sizeof(kernel_config_data) - 1 - 8 * 2") ++ py_config_ptr = gdb.parse_and_eval("&kernel_config_data") ++ py_config_ptr_end = gdb.parse_and_eval("&kernel_config_data_end") ++ py_config_size = py_config_ptr_end - py_config_ptr + except gdb.error as e: + raise gdb.GdbError("Can't find config, enable CONFIG_IKCONFIG?") + +diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c +index 13cda6aa26880..59011ddf8bb80 100644 +--- a/scripts/mod/modpost.c ++++ b/scripts/mod/modpost.c +@@ -1283,7 +1283,8 @@ static int secref_whitelist(const struct sectioncheck *mismatch, + + static inline int is_arm_mapping_symbol(const char *str) + { +- return str[0] == '$' && strchr("axtd", str[1]) ++ return str[0] == '$' && ++ (str[1] == 'a' || str[1] == 'd' || str[1] == 't' || str[1] == 'x') + && (str[2] == '\0' || str[2] == '.'); + } + +@@ -1998,7 +1999,7 @@ static char *remove_dot(char *s) + + if (n && s[n]) { + size_t m = strspn(s + n + 1, "0123456789"); +- if (m && (s[n + m] == '.' || s[n + m] == 0)) ++ if (m && (s[n + m + 1] == '.' 
|| s[n + m + 1] == 0)) + s[n] = 0; + } + return s; +diff --git a/security/integrity/platform_certs/keyring_handler.h b/security/integrity/platform_certs/keyring_handler.h +index 2462bfa08fe34..cd06bd6072be2 100644 +--- a/security/integrity/platform_certs/keyring_handler.h ++++ b/security/integrity/platform_certs/keyring_handler.h +@@ -30,3 +30,11 @@ efi_element_handler_t get_handler_for_db(const efi_guid_t *sig_type); + efi_element_handler_t get_handler_for_dbx(const efi_guid_t *sig_type); + + #endif ++ ++#ifndef UEFI_QUIRK_SKIP_CERT ++#define UEFI_QUIRK_SKIP_CERT(vendor, product) \ ++ .matches = { \ ++ DMI_MATCH(DMI_BOARD_VENDOR, vendor), \ ++ DMI_MATCH(DMI_PRODUCT_NAME, product), \ ++ }, ++#endif +diff --git a/security/integrity/platform_certs/load_uefi.c b/security/integrity/platform_certs/load_uefi.c +index f0c908241966a..452011428d119 100644 +--- a/security/integrity/platform_certs/load_uefi.c ++++ b/security/integrity/platform_certs/load_uefi.c +@@ -3,6 +3,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -11,6 +12,31 @@ + #include "../integrity.h" + #include "keyring_handler.h" + ++/* ++ * On T2 Macs reading the db and dbx efi variables to load UEFI Secure Boot ++ * certificates causes occurrence of a page fault in Apple's firmware and ++ * a crash disabling EFI runtime services. The following quirk skips reading ++ * these variables. 
++ */ ++static const struct dmi_system_id uefi_skip_cert[] = { ++ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro15,1") }, ++ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro15,2") }, ++ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro15,3") }, ++ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro15,4") }, ++ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro16,1") }, ++ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro16,2") }, ++ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro16,3") }, ++ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookPro16,4") }, ++ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookAir8,1") }, ++ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookAir8,2") }, ++ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacBookAir9,1") }, ++ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacMini8,1") }, ++ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "MacPro7,1") }, ++ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "iMac20,1") }, ++ { UEFI_QUIRK_SKIP_CERT("Apple Inc.", "iMac20,2") }, ++ { } ++}; ++ + /* + * Look to see if a UEFI variable called MokIgnoreDB exists and return true if + * it does. 
+@@ -78,6 +104,13 @@ static int __init load_uefi_certs(void) + unsigned long dbsize = 0, dbxsize = 0, moksize = 0; + efi_status_t status; + int rc = 0; ++ const struct dmi_system_id *dmi_id; ++ ++ dmi_id = dmi_first_match(uefi_skip_cert); ++ if (dmi_id) { ++ pr_err("Reading UEFI Secure Boot Certs is not supported on T2 Macs.\n"); ++ return false; ++ } + + if (!efi.get_variable) + return false; +diff --git a/sound/core/jack.c b/sound/core/jack.c +index b00ae6f39f054..e7ac82d468216 100644 +--- a/sound/core/jack.c ++++ b/sound/core/jack.c +@@ -34,8 +34,11 @@ static int snd_jack_dev_disconnect(struct snd_device *device) + #ifdef CONFIG_SND_JACK_INPUT_DEV + struct snd_jack *jack = device->device_data; + +- if (!jack->input_dev) ++ mutex_lock(&jack->input_dev_lock); ++ if (!jack->input_dev) { ++ mutex_unlock(&jack->input_dev_lock); + return 0; ++ } + + /* If the input device is registered with the input subsystem + * then we need to use a different deallocator. */ +@@ -44,6 +47,7 @@ static int snd_jack_dev_disconnect(struct snd_device *device) + else + input_free_device(jack->input_dev); + jack->input_dev = NULL; ++ mutex_unlock(&jack->input_dev_lock); + #endif /* CONFIG_SND_JACK_INPUT_DEV */ + return 0; + } +@@ -82,8 +86,11 @@ static int snd_jack_dev_register(struct snd_device *device) + snprintf(jack->name, sizeof(jack->name), "%s %s", + card->shortname, jack->id); + +- if (!jack->input_dev) ++ mutex_lock(&jack->input_dev_lock); ++ if (!jack->input_dev) { ++ mutex_unlock(&jack->input_dev_lock); + return 0; ++ } + + jack->input_dev->name = jack->name; + +@@ -108,6 +115,7 @@ static int snd_jack_dev_register(struct snd_device *device) + if (err == 0) + jack->registered = 1; + ++ mutex_unlock(&jack->input_dev_lock); + return err; + } + #endif /* CONFIG_SND_JACK_INPUT_DEV */ +@@ -228,9 +236,11 @@ int snd_jack_new(struct snd_card *card, const char *id, int type, + return -ENOMEM; + } + +- /* don't creat input device for phantom jack */ +- if (!phantom_jack) { + #ifdef 
CONFIG_SND_JACK_INPUT_DEV ++ mutex_init(&jack->input_dev_lock); ++ ++ /* don't create input device for phantom jack */ ++ if (!phantom_jack) { + int i; + + jack->input_dev = input_allocate_device(); +@@ -248,8 +258,8 @@ int snd_jack_new(struct snd_card *card, const char *id, int type, + input_set_capability(jack->input_dev, EV_SW, + jack_switch_types[i]); + +-#endif /* CONFIG_SND_JACK_INPUT_DEV */ + } ++#endif /* CONFIG_SND_JACK_INPUT_DEV */ + + err = snd_device_new(card, SNDRV_DEV_JACK, jack, &ops); + if (err < 0) +@@ -289,10 +299,14 @@ EXPORT_SYMBOL(snd_jack_new); + void snd_jack_set_parent(struct snd_jack *jack, struct device *parent) + { + WARN_ON(jack->registered); +- if (!jack->input_dev) ++ mutex_lock(&jack->input_dev_lock); ++ if (!jack->input_dev) { ++ mutex_unlock(&jack->input_dev_lock); + return; ++ } + + jack->input_dev->dev.parent = parent; ++ mutex_unlock(&jack->input_dev_lock); + } + EXPORT_SYMBOL(snd_jack_set_parent); + +@@ -340,6 +354,8 @@ EXPORT_SYMBOL(snd_jack_set_key); + + /** + * snd_jack_report - Report the current status of a jack ++ * Note: This function uses mutexes and should be called from a ++ * context which can sleep (such as a workqueue). 
+ * + * @jack: The jack to report status for + * @status: The current status of the jack +@@ -359,8 +375,11 @@ void snd_jack_report(struct snd_jack *jack, int status) + status & jack_kctl->mask_bits); + + #ifdef CONFIG_SND_JACK_INPUT_DEV +- if (!jack->input_dev) ++ mutex_lock(&jack->input_dev_lock); ++ if (!jack->input_dev) { ++ mutex_unlock(&jack->input_dev_lock); + return; ++ } + + for (i = 0; i < ARRAY_SIZE(jack->key); i++) { + int testbit = SND_JACK_BTN_0 >> i; +@@ -379,6 +398,7 @@ void snd_jack_report(struct snd_jack *jack, int status) + } + + input_sync(jack->input_dev); ++ mutex_unlock(&jack->input_dev_lock); + #endif /* CONFIG_SND_JACK_INPUT_DEV */ + } + EXPORT_SYMBOL(snd_jack_report); +diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c +index 5e2fadb264e4d..c0b6881b06729 100644 +--- a/sound/pci/hda/patch_conexant.c ++++ b/sound/pci/hda/patch_conexant.c +@@ -1012,6 +1012,13 @@ static int patch_conexant_auto(struct hda_codec *codec) + snd_hda_pick_fixup(codec, cxt5051_fixup_models, + cxt5051_fixups, cxt_fixups); + break; ++ case 0x14f15098: ++ codec->pin_amp_workaround = 1; ++ spec->gen.mixer_nid = 0x22; ++ spec->gen.add_stereo_mix_input = HDA_HINT_STEREO_MIX_AUTO; ++ snd_hda_pick_fixup(codec, cxt5066_fixup_models, ++ cxt5066_fixups, cxt_fixups); ++ break; + case 0x14f150f2: + codec->power_save_node = 1; + /* Fall through */ +diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c +index 78b5a0f22a415..8a221866ab01b 100644 +--- a/sound/pci/hda/patch_realtek.c ++++ b/sound/pci/hda/patch_realtek.c +@@ -1932,6 +1932,7 @@ enum { + ALC1220_FIXUP_CLEVO_PB51ED_PINS, + ALC887_FIXUP_ASUS_AUDIO, + ALC887_FIXUP_ASUS_HMIC, ++ ALCS1200A_FIXUP_MIC_VREF, + }; + + static void alc889_fixup_coef(struct hda_codec *codec, +@@ -2477,6 +2478,14 @@ static const struct hda_fixup alc882_fixups[] = { + .chained = true, + .chain_id = ALC887_FIXUP_ASUS_AUDIO, + }, ++ [ALCS1200A_FIXUP_MIC_VREF] = { ++ .type = HDA_FIXUP_PINCTLS, ++ .v.pins = 
(const struct hda_pintbl[]) { ++ { 0x18, PIN_VREF50 }, /* rear mic */ ++ { 0x19, PIN_VREF50 }, /* front mic */ ++ {} ++ } ++ }, + }; + + static const struct snd_pci_quirk alc882_fixup_tbl[] = { +@@ -2514,6 +2523,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = { + SND_PCI_QUIRK(0x1043, 0x835f, "Asus Eee 1601", ALC888_FIXUP_EEE1601), + SND_PCI_QUIRK(0x1043, 0x84bc, "ASUS ET2700", ALC887_FIXUP_ASUS_BASS), + SND_PCI_QUIRK(0x1043, 0x8691, "ASUS ROG Ranger VIII", ALC882_FIXUP_GPIO3), ++ SND_PCI_QUIRK(0x1043, 0x8797, "ASUS TUF B550M-PLUS", ALCS1200A_FIXUP_MIC_VREF), + SND_PCI_QUIRK(0x104d, 0x9043, "Sony Vaio VGC-LN51JGB", ALC882_FIXUP_NO_PRIMARY_HP), + SND_PCI_QUIRK(0x104d, 0x9044, "Sony VAIO AiO", ALC882_FIXUP_NO_PRIMARY_HP), + SND_PCI_QUIRK(0x104d, 0x9047, "Sony Vaio TT", ALC889_FIXUP_VAIO_TT), +diff --git a/sound/soc/codecs/Kconfig b/sound/soc/codecs/Kconfig +index 466dc67799f4c..dfc536cd9d2fc 100644 +--- a/sound/soc/codecs/Kconfig ++++ b/sound/soc/codecs/Kconfig +@@ -759,7 +759,6 @@ config SND_SOC_MAX98095 + + config SND_SOC_MAX98357A + tristate "Maxim MAX98357A CODEC" +- depends on GPIOLIB + + config SND_SOC_MAX98371 + tristate +diff --git a/sound/soc/codecs/rk3328_codec.c b/sound/soc/codecs/rk3328_codec.c +index 514ebe16bbfad..4e71ecf54af7b 100644 +--- a/sound/soc/codecs/rk3328_codec.c ++++ b/sound/soc/codecs/rk3328_codec.c +@@ -479,7 +479,7 @@ static int rk3328_platform_probe(struct platform_device *pdev) + ret = clk_prepare_enable(rk3328->pclk); + if (ret < 0) { + dev_err(&pdev->dev, "failed to enable acodec pclk\n"); +- return ret; ++ goto err_unprepare_mclk; + } + + base = devm_platform_ioremap_resource(pdev, 0); +diff --git a/sound/soc/codecs/rt5514.c b/sound/soc/codecs/rt5514.c +index 7081142a355e1..c444a56df95ba 100644 +--- a/sound/soc/codecs/rt5514.c ++++ b/sound/soc/codecs/rt5514.c +@@ -419,7 +419,7 @@ static int rt5514_dsp_voice_wake_up_put(struct snd_kcontrol *kcontrol, + } + } + +- return 0; ++ return 1; + } + + static const struct 
snd_kcontrol_new rt5514_snd_controls[] = { +diff --git a/sound/soc/codecs/rt5645.c b/sound/soc/codecs/rt5645.c +index c83f7f5da96b7..a66e93a3af745 100644 +--- a/sound/soc/codecs/rt5645.c ++++ b/sound/soc/codecs/rt5645.c +@@ -4074,9 +4074,14 @@ static int rt5645_i2c_remove(struct i2c_client *i2c) + if (i2c->irq) + free_irq(i2c->irq, rt5645); + ++ /* ++ * Since the rt5645_btn_check_callback() can queue jack_detect_work, ++ * the timer need to be delted first ++ */ ++ del_timer_sync(&rt5645->btn_check_timer); ++ + cancel_delayed_work_sync(&rt5645->jack_detect_work); + cancel_delayed_work_sync(&rt5645->rcclock_work); +- del_timer_sync(&rt5645->btn_check_timer); + + regulator_bulk_disable(ARRAY_SIZE(rt5645->supplies), rt5645->supplies); + +diff --git a/sound/soc/codecs/tscs454.c b/sound/soc/codecs/tscs454.c +index c3587af9985c0..3d981441b8d1a 100644 +--- a/sound/soc/codecs/tscs454.c ++++ b/sound/soc/codecs/tscs454.c +@@ -3128,18 +3128,17 @@ static int set_aif_sample_format(struct snd_soc_component *component, + unsigned int width; + int ret; + +- switch (format) { +- case SNDRV_PCM_FORMAT_S16_LE: ++ switch (snd_pcm_format_width(format)) { ++ case 16: + width = FV_WL_16; + break; +- case SNDRV_PCM_FORMAT_S20_3LE: ++ case 20: + width = FV_WL_20; + break; +- case SNDRV_PCM_FORMAT_S24_3LE: ++ case 24: + width = FV_WL_24; + break; +- case SNDRV_PCM_FORMAT_S24_LE: +- case SNDRV_PCM_FORMAT_S32_LE: ++ case 32: + width = FV_WL_32; + break; + default: +@@ -3337,6 +3336,7 @@ static const struct snd_soc_component_driver soc_component_dev_tscs454 = { + .num_dapm_routes = ARRAY_SIZE(tscs454_intercon), + .controls = tscs454_snd_controls, + .num_controls = ARRAY_SIZE(tscs454_snd_controls), ++ .endianness = 1, + }; + + #define TSCS454_RATES SNDRV_PCM_RATE_8000_96000 +diff --git a/sound/soc/codecs/wm2000.c b/sound/soc/codecs/wm2000.c +index 72e165cc64439..97ece3114b3dc 100644 +--- a/sound/soc/codecs/wm2000.c ++++ b/sound/soc/codecs/wm2000.c +@@ -536,7 +536,7 @@ static int 
wm2000_anc_transition(struct wm2000_priv *wm2000,
+ {
+ 	struct i2c_client *i2c = wm2000->i2c;
+ 	int i, j;
+-	int ret;
++	int ret = 0;
+
+ 	if (wm2000->anc_mode == mode)
+ 		return 0;
+@@ -566,13 +566,13 @@ static int wm2000_anc_transition(struct wm2000_priv *wm2000,
+ 			ret = anc_transitions[i].step[j](i2c,
+ 							anc_transitions[i].analogue);
+ 			if (ret != 0)
+-				return ret;
++				break;
+ 		}
+
+ 	if (anc_transitions[i].dest == ANC_OFF)
+ 		clk_disable_unprepare(wm2000->mclk);
+
+-	return 0;
++	return ret;
+ }
+
+ static int wm2000_anc_set_mode(struct wm2000_priv *wm2000)
+diff --git a/sound/soc/fsl/fsl_sai.h b/sound/soc/fsl/fsl_sai.h
+index 677ecfc1ec68f..afaef20272342 100644
+--- a/sound/soc/fsl/fsl_sai.h
++++ b/sound/soc/fsl/fsl_sai.h
+@@ -67,8 +67,8 @@
+ #define FSL_SAI_xCR3(tx, ofs)	(tx ? FSL_SAI_TCR3(ofs) : FSL_SAI_RCR3(ofs))
+ #define FSL_SAI_xCR4(tx, ofs)	(tx ? FSL_SAI_TCR4(ofs) : FSL_SAI_RCR4(ofs))
+ #define FSL_SAI_xCR5(tx, ofs)	(tx ? FSL_SAI_TCR5(ofs) : FSL_SAI_RCR5(ofs))
+-#define FSL_SAI_xDR(tx, ofs)	(tx ? FSL_SAI_TDR(ofs) : FSL_SAI_RDR(ofs))
+-#define FSL_SAI_xFR(tx, ofs)	(tx ? FSL_SAI_TFR(ofs) : FSL_SAI_RFR(ofs))
++#define FSL_SAI_xDR0(tx)	(tx ? FSL_SAI_TDR0 : FSL_SAI_RDR0)
++#define FSL_SAI_xFR0(tx)	(tx ? FSL_SAI_TFR0 : FSL_SAI_RFR0)
+ #define FSL_SAI_xMR(tx)	(tx ? FSL_SAI_TMR : FSL_SAI_RMR)
+
+ /* SAI Transmit/Receive Control Register */
+diff --git a/sound/soc/fsl/imx-sgtl5000.c b/sound/soc/fsl/imx-sgtl5000.c
+index 15e8b9343c354..7106d56a3346c 100644
+--- a/sound/soc/fsl/imx-sgtl5000.c
++++ b/sound/soc/fsl/imx-sgtl5000.c
+@@ -120,19 +120,19 @@ static int imx_sgtl5000_probe(struct platform_device *pdev)
+ 	data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
+ 	if (!data) {
+ 		ret = -ENOMEM;
+-		goto fail;
++		goto put_device;
+ 	}
+
+ 	comp = devm_kzalloc(&pdev->dev, 3 * sizeof(*comp), GFP_KERNEL);
+ 	if (!comp) {
+ 		ret = -ENOMEM;
+-		goto fail;
++		goto put_device;
+ 	}
+
+ 	data->codec_clk = clk_get(&codec_dev->dev, NULL);
+ 	if (IS_ERR(data->codec_clk)) {
+ 		ret = PTR_ERR(data->codec_clk);
+-		goto fail;
++		goto put_device;
+ 	}
+
+ 	data->clk_frequency = clk_get_rate(data->codec_clk);
+@@ -158,10 +158,10 @@ static int imx_sgtl5000_probe(struct platform_device *pdev)
+ 	data->card.dev = &pdev->dev;
+ 	ret = snd_soc_of_parse_card_name(&data->card, "model");
+ 	if (ret)
+-		goto fail;
++		goto put_device;
+ 	ret = snd_soc_of_parse_audio_routing(&data->card, "audio-routing");
+ 	if (ret)
+-		goto fail;
++		goto put_device;
+ 	data->card.num_links = 1;
+ 	data->card.owner = THIS_MODULE;
+ 	data->card.dai_link = &data->dai;
+@@ -176,7 +176,7 @@ static int imx_sgtl5000_probe(struct platform_device *pdev)
+ 		if (ret != -EPROBE_DEFER)
+ 			dev_err(&pdev->dev, "snd_soc_register_card failed (%d)\n",
+ 				ret);
+-		goto fail;
++		goto put_device;
+ 	}
+
+ 	of_node_put(ssi_np);
+@@ -184,6 +184,8 @@ static int imx_sgtl5000_probe(struct platform_device *pdev)
+
+ 	return 0;
+
++put_device:
++	put_device(&codec_dev->dev);
+ fail:
+ 	if (data && !IS_ERR(data->codec_clk))
+ 		clk_put(data->codec_clk);
+diff --git a/sound/soc/mediatek/mt2701/mt2701-wm8960.c b/sound/soc/mediatek/mt2701/mt2701-wm8960.c
+index 8c4c89e4c616f..b9ad42112ea18 100644
+--- a/sound/soc/mediatek/mt2701/mt2701-wm8960.c
++++ b/sound/soc/mediatek/mt2701/mt2701-wm8960.c
+@@ -129,7 +129,8 @@ static int mt2701_wm8960_machine_probe(struct platform_device *pdev)
+ 	if (!codec_node) {
+ 		dev_err(&pdev->dev,
+ 			"Property 'audio-codec' missing or invalid\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto put_platform_node;
+ 	}
+ 	for_each_card_prelinks(card, i, dai_link) {
+ 		if (dai_link->codecs->name)
+@@ -140,7 +141,7 @@ static int mt2701_wm8960_machine_probe(struct platform_device *pdev)
+ 	ret = snd_soc_of_parse_audio_routing(card, "audio-routing");
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "failed to parse audio-routing: %d\n", ret);
+-		return ret;
++		goto put_codec_node;
+ 	}
+
+ 	ret = devm_snd_soc_register_card(&pdev->dev, card);
+@@ -148,6 +149,10 @@ static int mt2701_wm8960_machine_probe(struct platform_device *pdev)
+ 		dev_err(&pdev->dev, "%s snd_soc_register_card fail %d\n",
+ 			__func__, ret);
+
++put_codec_node:
++	of_node_put(codec_node);
++put_platform_node:
++	of_node_put(platform_node);
+ 	return ret;
+ }
+
+diff --git a/sound/soc/mediatek/mt8173/mt8173-max98090.c b/sound/soc/mediatek/mt8173/mt8173-max98090.c
+index de1410c2c446f..32df181801146 100644
+--- a/sound/soc/mediatek/mt8173/mt8173-max98090.c
++++ b/sound/soc/mediatek/mt8173/mt8173-max98090.c
+@@ -167,7 +167,8 @@ static int mt8173_max98090_dev_probe(struct platform_device *pdev)
+ 	if (!codec_node) {
+ 		dev_err(&pdev->dev,
+ 			"Property 'audio-codec' missing or invalid\n");
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto put_platform_node;
+ 	}
+ 	for_each_card_prelinks(card, i, dai_link) {
+ 		if (dai_link->codecs->name)
+@@ -182,6 +183,8 @@ static int mt8173_max98090_dev_probe(struct platform_device *pdev)
+ 			__func__, ret);
+
+ 	of_node_put(codec_node);
++
++put_platform_node:
+ 	of_node_put(platform_node);
+ 	return ret;
+ }
+diff --git a/sound/soc/mxs/mxs-saif.c b/sound/soc/mxs/mxs-saif.c
+index cb1b525cbe9de..c899a05e896f3 100644
+--- a/sound/soc/mxs/mxs-saif.c
++++ b/sound/soc/mxs/mxs-saif.c
+@@ -767,6 +767,7 @@ static int mxs_saif_probe(struct platform_device *pdev)
+ 		saif->master_id = saif->id;
+ 	} else {
+ 		ret = of_alias_get_id(master, "saif");
++		of_node_put(master);
+ 		if (ret < 0)
+ 			return ret;
+ 		else
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index 1c09dfb0c0f09..56c9c4189f269 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -3421,7 +3421,6 @@ int snd_soc_dapm_put_volsw(struct snd_kcontrol *kcontrol,
+ 		update.val = val;
+ 		card->update = &update;
+ 	}
+-	change |= reg_change;
+
+ 	ret = soc_dapm_mixer_update_power(card, kcontrol, connect,
+ 					  rconnect);
+@@ -3527,7 +3526,6 @@ int snd_soc_dapm_put_enum_double(struct snd_kcontrol *kcontrol,
+ 		update.val = val;
+ 		card->update = &update;
+ 	}
+-	change |= reg_change;
+
+ 	ret = soc_dapm_mux_update_power(card, kcontrol, item[0], e);
+
+diff --git a/tools/perf/builtin-c2c.c b/tools/perf/builtin-c2c.c
+index f2e9d2b1b913d..29d460c301767 100644
+--- a/tools/perf/builtin-c2c.c
++++ b/tools/perf/builtin-c2c.c
+@@ -953,8 +953,8 @@ percent_rmt_hitm_cmp(struct perf_hpp_fmt *fmt __maybe_unused,
+ 	double per_left;
+ 	double per_right;
+
+-	per_left = PERCENT(left, lcl_hitm);
+-	per_right = PERCENT(right, lcl_hitm);
++	per_left = PERCENT(left, rmt_hitm);
++	per_right = PERCENT(right, rmt_hitm);
+
+ 	return per_left - per_right;
+ }
+@@ -2733,9 +2733,7 @@ static int perf_c2c__report(int argc, const char **argv)
+ 		   "the input file to process"),
+ 	OPT_INCR('N', "node-info", &c2c.node_info,
+ 		 "show extra node info in report (repeat for more info)"),
+-#ifdef HAVE_SLANG_SUPPORT
+ 	OPT_BOOLEAN(0, "stdio", &c2c.use_stdio, "Use the stdio interface"),
+-#endif
+ 	OPT_BOOLEAN(0, "stats", &c2c.stats_only,
+ 		    "Display only statistic tables (implies --stdio)"),
+ 	OPT_BOOLEAN(0, "full-symbols", &c2c.symbol_full,
+@@ -2762,6 +2760,10 @@ static int perf_c2c__report(int argc, const char **argv)
+ 	if (argc)
+ 		usage_with_options(report_c2c_usage, options);
+
++#ifndef HAVE_SLANG_SUPPORT
++	c2c.use_stdio = true;
++#endif
++
+ 	if (c2c.stats_only)
+ 		c2c.use_stdio = true;
+
+diff --git a/tools/perf/pmu-events/jevents.c b/tools/perf/pmu-events/jevents.c
+index 47f57f5829d3a..a4244bf242e6a 100644
+--- a/tools/perf/pmu-events/jevents.c
++++ b/tools/perf/pmu-events/jevents.c
+@@ -567,7 +567,7 @@ int json_events(const char *fn,
+ 		} else if (json_streq(map, field, "ExtSel")) {
+ 			char *code = NULL;
+ 			addfield(map, &code, "", "", val);
+-			eventcode |= strtoul(code, NULL, 0) << 21;
++			eventcode |= strtoul(code, NULL, 0) << 8;
+ 			free(code);
+ 		} else if (json_streq(map, field, "EventName")) {
+ 			addfield(map, &name, "", "", val);
+diff --git a/tools/perf/util/data.h b/tools/perf/util/data.h
+index 259868a390198..252d990712496 100644
+--- a/tools/perf/util/data.h
++++ b/tools/perf/util/data.h
+@@ -3,6 +3,7 @@
+ #define __PERF_DATA_H
+
+ #include
++#include
+
+ enum perf_data_mode {
+ 	PERF_DATA_MODE_WRITE,
+diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
+index 988326b67a916..8bf6b01b35608 100644
+--- a/tools/power/x86/turbostat/turbostat.c
++++ b/tools/power/x86/turbostat/turbostat.c
+@@ -3865,6 +3865,7 @@ rapl_dram_energy_units_probe(int model, double rapl_energy_units)
+ 	case INTEL_FAM6_HASWELL_X:	/* HSX */
+ 	case INTEL_FAM6_BROADWELL_X:	/* BDX */
+ 	case INTEL_FAM6_XEON_PHI_KNL:	/* KNL */
++	case INTEL_FAM6_ICELAKE_X:	/* ICX */
+ 		return (rapl_dram_energy_units = 15.3 / 1000000);
+ 	default:
+ 		return (rapl_energy_units);
+diff --git a/tools/testing/selftests/bpf/progs/btf_dump_test_case_syntax.c b/tools/testing/selftests/bpf/progs/btf_dump_test_case_syntax.c
+index d4a02fe44a126..0620580a5c16c 100644
+--- a/tools/testing/selftests/bpf/progs/btf_dump_test_case_syntax.c
++++ b/tools/testing/selftests/bpf/progs/btf_dump_test_case_syntax.c
+@@ -94,7 +94,7 @@ typedef void (* (*signal_t)(int, void (*)(int)))(int);
+
+ typedef char * (*fn_ptr_arr1_t[10])(int **);
+
+-typedef char * (* const (* const fn_ptr_arr2_t[5])())(char * (*)(int));
++typedef char * (* (* const fn_ptr_arr2_t[5])())(char * (*)(int));
+
+ struct struct_w_typedefs {
+ 	int_t a;
+diff --git a/tools/testing/selftests/netfilter/nft_nat.sh b/tools/testing/selftests/netfilter/nft_nat.sh
+index d7e07f4c3d7fc..4e15e81673104 100755
+--- a/tools/testing/selftests/netfilter/nft_nat.sh
++++ b/tools/testing/selftests/netfilter/nft_nat.sh
+@@ -374,6 +374,45 @@ EOF
+ 	return $lret
+ }
+
++test_local_dnat_portonly()
++{
++	local family=$1
++	local daddr=$2
++	local lret=0
++	local sr_s
++	local sr_r
++
++ip netns exec "$ns0" nft -f /dev/stdin <