From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80])
	(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
	 key-exchange X25519 server-signature RSA-PSS (2048 bits))
	(No client certificate requested)
	by finch.gentoo.org (Postfix) with ESMTPS id F06E4158086
	for ; Wed, 22 Dec 2021 14:06:26 +0000 (UTC)
Received: from pigeon.gentoo.org (localhost [127.0.0.1])
	by pigeon.gentoo.org (Postfix) with SMTP id 3E79E2BC014;
	Wed, 22 Dec 2021 14:06:25 +0000 (UTC)
Received: from smtp.gentoo.org (woodpecker.gentoo.org [IPv6:2001:470:ea4a:1:5054:ff:fec7:86e4])
	(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
	 key-exchange X25519 server-signature RSA-PSS (4096 bits))
	(No client certificate requested)
	by pigeon.gentoo.org (Postfix) with ESMTPS id CD4912BC014
	for ; Wed, 22 Dec 2021 14:06:23 +0000 (UTC)
Received: from oystercatcher.gentoo.org (unknown [IPv6:2a01:4f8:202:4333:225:90ff:fed9:fc84])
	(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
	 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
	(No client certificate requested)
	by smtp.gentoo.org (Postfix) with ESMTPS id C09EA342DCC
	for ; Wed, 22 Dec 2021 14:06:21 +0000 (UTC)
Received: from localhost.localdomain (localhost [IPv6:::1])
	by oystercatcher.gentoo.org (Postfix) with ESMTP id 22803253
	for ; Wed, 22 Dec 2021 14:06:20 +0000 (UTC)
From: "Mike Pagano"
To: gentoo-commits@lists.gentoo.org
Content-Transfer-Encoding: 8bit
Content-type: text/plain; charset=UTF-8
Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano"
Message-ID: <1640181968.c267aa3e43d90cb318844229333ae06cd918a712.mpagano@gentoo>
Subject: [gentoo-commits] proj/linux-patches:5.4 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1167_linux-5.4.168.patch
X-VCS-Directories: /
X-VCS-Committer: mpagano
X-VCS-Committer-Name: Mike Pagano
X-VCS-Revision: c267aa3e43d90cb318844229333ae06cd918a712
X-VCS-Branch: 5.4
Date: Wed, 22 Dec 2021 14:06:20 +0000 (UTC)
Precedence: bulk
List-Post:
List-Help:
List-Unsubscribe:
List-Subscribe:
List-Id: Gentoo Linux mail
X-BeenThere: gentoo-commits@lists.gentoo.org
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-Archives-Salt: 08c79859-86cc-4bed-9d4a-cc4b61b5c490
X-Archives-Hash: c0bd70869f42f1f7c093ae601ba94832

commit:     c267aa3e43d90cb318844229333ae06cd918a712
Author:     Mike Pagano gentoo org>
AuthorDate: Wed Dec 22 14:06:08 2021 +0000
Commit:     Mike Pagano gentoo org>
CommitDate: Wed Dec 22 14:06:08 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c267aa3e

Linux patch 5.4.168

Signed-off-by: Mike Pagano gentoo.org>

 0000_README              |    4 +
 1167_linux-5.4.168.patch | 2322 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2326 insertions(+)

diff --git a/0000_README b/0000_README
index 03c7ef31..e5039e81 100644
--- a/0000_README
+++ b/0000_README
@@ -711,6 +711,10 @@ Patch: 1166_linux-5.4.167.patch
 From: http://www.kernel.org
 Desc: Linux 5.4.167
 
+Patch: 1167_linux-5.4.168.patch
+From: http://www.kernel.org
+Desc: Linux 5.4.168
+
 Patch: 1500_XATTR_USER_PREFIX.patch
 From: https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1167_linux-5.4.168.patch b/1167_linux-5.4.168.patch new file mode 100644 index 00000000..529751cb --- /dev/null +++ b/1167_linux-5.4.168.patch @@ -0,0 +1,2322 @@ +diff --git a/Makefile b/Makefile +index 1045f7fc08503..c23f5b17d239f 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,7 +1,7 @@ + # SPDX-License-Identifier: GPL-2.0 + VERSION = 5 + PATCHLEVEL = 4 +-SUBLEVEL = 167 ++SUBLEVEL = 168 + EXTRAVERSION = + NAME = Kleptomaniac Octopus + +diff --git a/arch/arm/boot/dts/imx6ull-pinfunc.h b/arch/arm/boot/dts/imx6ull-pinfunc.h +index eb025a9d47592..7328d4ef8559f 100644 +--- a/arch/arm/boot/dts/imx6ull-pinfunc.h ++++ b/arch/arm/boot/dts/imx6ull-pinfunc.h +@@ -82,6 +82,6 @@ + #define MX6ULL_PAD_CSI_DATA04__ESAI_TX_FS 0x01F4 0x0480 0x0000 0x9 0x0 + #define MX6ULL_PAD_CSI_DATA05__ESAI_TX_CLK 0x01F8 0x0484 0x0000 0x9 0x0 + #define MX6ULL_PAD_CSI_DATA06__ESAI_TX5_RX0 0x01FC 0x0488 0x0000 0x9 0x0 +-#define MX6ULL_PAD_CSI_DATA07__ESAI_T0 0x0200 0x048C 0x0000 0x9 0x0 ++#define MX6ULL_PAD_CSI_DATA07__ESAI_TX0 0x0200 0x048C 0x0000 0x9 0x0 + + #endif /* __DTS_IMX6ULL_PINFUNC_H */ +diff --git a/arch/arm/boot/dts/socfpga_arria10_socdk_qspi.dts b/arch/arm/boot/dts/socfpga_arria10_socdk_qspi.dts +index b4c0a76a4d1af..4c2fcfcc7baed 100644 +--- a/arch/arm/boot/dts/socfpga_arria10_socdk_qspi.dts ++++ b/arch/arm/boot/dts/socfpga_arria10_socdk_qspi.dts +@@ -12,7 +12,7 @@ + flash0: n25q00@0 { + #address-cells = <1>; + #size-cells = <1>; +- compatible = "n25q00aa"; ++ compatible = "micron,mt25qu02g", "jedec,spi-nor"; + reg = <0>; + spi-max-frequency = <100000000>; + +diff --git a/arch/arm/boot/dts/socfpga_arria5_socdk.dts b/arch/arm/boot/dts/socfpga_arria5_socdk.dts +index 90e676e7019f2..1b02d46496a85 100644 +--- a/arch/arm/boot/dts/socfpga_arria5_socdk.dts ++++ b/arch/arm/boot/dts/socfpga_arria5_socdk.dts +@@ -119,7 +119,7 @@ + flash: flash@0 { + #address-cells = <1>; + #size-cells = <1>; +- compatible = "n25q256a"; ++ compatible = "micron,n25q256a", "jedec,spi-nor"; + reg = <0>; + spi-max-frequency = <100000000>; + +diff --git a/arch/arm/boot/dts/socfpga_cyclone5_socdk.dts b/arch/arm/boot/dts/socfpga_cyclone5_socdk.dts +index 6f138b2b26163..51bb436784e24 100644 +--- a/arch/arm/boot/dts/socfpga_cyclone5_socdk.dts ++++ b/arch/arm/boot/dts/socfpga_cyclone5_socdk.dts +@@ -124,7 +124,7 @@ + flash0: n25q00@0 { + #address-cells = <1>; + #size-cells = <1>; +- compatible = "n25q00"; ++ compatible = "micron,mt25qu02g", "jedec,spi-nor"; + reg = <0>; /* chip select */ + spi-max-frequency = <100000000>; + +diff --git a/arch/arm/boot/dts/socfpga_cyclone5_sockit.dts b/arch/arm/boot/dts/socfpga_cyclone5_sockit.dts +index c155ff02eb6e0..cae9ddd5ed38b 100644 +--- a/arch/arm/boot/dts/socfpga_cyclone5_sockit.dts ++++ b/arch/arm/boot/dts/socfpga_cyclone5_sockit.dts +@@ -169,7 +169,7 @@ + flash: flash@0 { + #address-cells = <1>; + #size-cells = <1>; +- compatible = "n25q00"; ++ compatible = "micron,mt25qu02g", "jedec,spi-nor"; + reg = <0>; + spi-max-frequency = <100000000>; + +diff --git a/arch/arm/boot/dts/socfpga_cyclone5_socrates.dts b/arch/arm/boot/dts/socfpga_cyclone5_socrates.dts +index 8d5d3996f6f27..ca18b959e6559 100644 +--- a/arch/arm/boot/dts/socfpga_cyclone5_socrates.dts ++++ b/arch/arm/boot/dts/socfpga_cyclone5_socrates.dts +@@ -80,7 +80,7 @@ + flash: flash@0 { + #address-cells = <1>; + #size-cells = <1>; +- compatible = "n25q256a"; ++ compatible = "micron,n25q256a", "jedec,spi-nor"; + reg = <0>; + spi-max-frequency = <100000000>; + m25p,fast-read; +diff --git a/arch/arm/boot/dts/socfpga_cyclone5_sodia.dts 
b/arch/arm/boot/dts/socfpga_cyclone5_sodia.dts +index 99a71757cdf46..3f7aa7bf0863a 100644 +--- a/arch/arm/boot/dts/socfpga_cyclone5_sodia.dts ++++ b/arch/arm/boot/dts/socfpga_cyclone5_sodia.dts +@@ -116,7 +116,7 @@ + flash0: n25q512a@0 { + #address-cells = <1>; + #size-cells = <1>; +- compatible = "n25q512a"; ++ compatible = "micron,n25q512a", "jedec,spi-nor"; + reg = <0>; + spi-max-frequency = <100000000>; + +diff --git a/arch/arm/boot/dts/socfpga_cyclone5_vining_fpga.dts b/arch/arm/boot/dts/socfpga_cyclone5_vining_fpga.dts +index a060718758b67..25874e1b9c829 100644 +--- a/arch/arm/boot/dts/socfpga_cyclone5_vining_fpga.dts ++++ b/arch/arm/boot/dts/socfpga_cyclone5_vining_fpga.dts +@@ -224,7 +224,7 @@ + n25q128@0 { + #address-cells = <1>; + #size-cells = <1>; +- compatible = "n25q128"; ++ compatible = "micron,n25q128", "jedec,spi-nor"; + reg = <0>; /* chip select */ + spi-max-frequency = <100000000>; + m25p,fast-read; +@@ -241,7 +241,7 @@ + n25q00@1 { + #address-cells = <1>; + #size-cells = <1>; +- compatible = "n25q00"; ++ compatible = "micron,mt25qu02g", "jedec,spi-nor"; + reg = <1>; /* chip select */ + spi-max-frequency = <100000000>; + m25p,fast-read; +diff --git a/arch/arm64/boot/dts/rockchip/rk3399-khadas-edge.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-khadas-edge.dtsi +index e87a04477440e..292ca70c512b5 100644 +--- a/arch/arm64/boot/dts/rockchip/rk3399-khadas-edge.dtsi ++++ b/arch/arm64/boot/dts/rockchip/rk3399-khadas-edge.dtsi +@@ -685,7 +685,6 @@ + &sdhci { + bus-width = <8>; + mmc-hs400-1_8v; +- mmc-hs400-enhanced-strobe; + non-removable; + status = "okay"; + }; +diff --git a/arch/arm64/boot/dts/rockchip/rk3399-leez-p710.dts b/arch/arm64/boot/dts/rockchip/rk3399-leez-p710.dts +index 73be38a537960..a72e77c261ef3 100644 +--- a/arch/arm64/boot/dts/rockchip/rk3399-leez-p710.dts ++++ b/arch/arm64/boot/dts/rockchip/rk3399-leez-p710.dts +@@ -49,7 +49,7 @@ + regulator-boot-on; + regulator-min-microvolt = <3300000>; + regulator-max-microvolt = <3300000>; +- vim-supply = <&vcc3v3_sys>; ++ vin-supply = <&vcc3v3_sys>; + }; + + vcc3v3_sys: vcc3v3-sys { +diff --git a/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4.dts b/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4.dts +index 1ae1ebd4efdd0..da3b031d4befa 100644 +--- a/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4.dts ++++ b/arch/arm64/boot/dts/rockchip/rk3399-rock-pi-4.dts +@@ -452,7 +452,7 @@ + status = "okay"; + + bt656-supply = <&vcc_3v0>; +- audio-supply = <&vcc_3v0>; ++ audio-supply = <&vcc1v8_codec>; + sdmmc-supply = <&vcc_sdio>; + gpio1830-supply = <&vcc_3v0>; + }; +diff --git a/arch/s390/kernel/machine_kexec_file.c b/arch/s390/kernel/machine_kexec_file.c +index e7435f3a3d2d2..76cd09879eaf4 100644 +--- a/arch/s390/kernel/machine_kexec_file.c ++++ b/arch/s390/kernel/machine_kexec_file.c +@@ -277,6 +277,7 @@ int arch_kexec_apply_relocations_add(struct purgatory_info *pi, + { + Elf_Rela *relas; + int i, r_type; ++ int ret; + + relas = (void *)pi->ehdr + relsec->sh_offset; + +@@ -311,7 +312,11 @@ int arch_kexec_apply_relocations_add(struct purgatory_info *pi, + addr = section->sh_addr + relas[i].r_offset; + + r_type = ELF64_R_TYPE(relas[i].r_info); +- arch_kexec_do_relocs(r_type, loc, val, addr); ++ ret = arch_kexec_do_relocs(r_type, loc, val, addr); ++ if (ret) { ++ pr_err("Unknown rela relocation: %d\n", r_type); ++ return -ENOEXEC; ++ } + } + return 0; + } +diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c +index 464efedc778b0..2b247014ba452 100644 +--- a/drivers/ata/libata-scsi.c ++++ b/drivers/ata/libata-scsi.c +@@ 
-3164,8 +3164,19 @@ static unsigned int ata_scsi_pass_thru(struct ata_queued_cmd *qc) + goto invalid_fld; + } + +- if (ata_is_ncq(tf->protocol) && (cdb[2 + cdb_offset] & 0x3) == 0) +- tf->protocol = ATA_PROT_NCQ_NODATA; ++ if ((cdb[2 + cdb_offset] & 0x3) == 0) { ++ /* ++ * When T_LENGTH is zero (No data is transferred), dir should ++ * be DMA_NONE. ++ */ ++ if (scmd->sc_data_direction != DMA_NONE) { ++ fp = 2 + cdb_offset; ++ goto invalid_fld; ++ } ++ ++ if (ata_is_ncq(tf->protocol)) ++ tf->protocol = ATA_PROT_NCQ_NODATA; ++ } + + /* enable LBA */ + tf->flags |= ATA_TFLAG_LBA; +diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c +index baf10b73675e2..774af5ce70dad 100644 +--- a/drivers/block/xen-blkfront.c ++++ b/drivers/block/xen-blkfront.c +@@ -1565,9 +1565,12 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id) + unsigned long flags; + struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)dev_id; + struct blkfront_info *info = rinfo->dev_info; ++ unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS; + +- if (unlikely(info->connected != BLKIF_STATE_CONNECTED)) ++ if (unlikely(info->connected != BLKIF_STATE_CONNECTED)) { ++ xen_irq_lateeoi(irq, XEN_EOI_FLAG_SPURIOUS); + return IRQ_HANDLED; ++ } + + spin_lock_irqsave(&rinfo->ring_lock, flags); + again: +@@ -1583,6 +1586,8 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id) + unsigned long id; + unsigned int op; + ++ eoiflag = 0; ++ + RING_COPY_RESPONSE(&rinfo->ring, i, &bret); + id = bret.id; + +@@ -1698,6 +1703,8 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id) + + spin_unlock_irqrestore(&rinfo->ring_lock, flags); + ++ xen_irq_lateeoi(irq, eoiflag); ++ + return IRQ_HANDLED; + + err: +@@ -1705,6 +1712,8 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id) + + spin_unlock_irqrestore(&rinfo->ring_lock, flags); + ++ /* No EOI in order to avoid further interrupts. */ ++ + pr_alert("%s disabled for further use\n", info->gd->disk_name); + return IRQ_HANDLED; + } +@@ -1744,8 +1753,8 @@ static int setup_blkring(struct xenbus_device *dev, + if (err) + goto fail; + +- err = bind_evtchn_to_irqhandler(rinfo->evtchn, blkif_interrupt, 0, +- "blkif", rinfo); ++ err = bind_evtchn_to_irqhandler_lateeoi(rinfo->evtchn, blkif_interrupt, ++ 0, "blkif", rinfo); + if (err <= 0) { + xenbus_dev_fatal(dev, err, + "bind_evtchn_to_irqhandler failed"); +diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c +index 6ff87cd867121..e4e1b4e94a67b 100644 +--- a/drivers/clk/clk.c ++++ b/drivers/clk/clk.c +@@ -3299,6 +3299,14 @@ static int __clk_core_init(struct clk_core *core) + + clk_prepare_lock(); + ++ /* ++ * Set hw->core after grabbing the prepare_lock to synchronize with ++ * callers of clk_core_fill_parent_index() where we treat hw->core ++ * being NULL as the clk not being registered yet. This is crucial so ++ * that clks aren't parented until their parent is fully registered. 
++ */ ++ core->hw->core = core; ++ + ret = clk_pm_runtime_get(core); + if (ret) + goto unlock; +@@ -3452,8 +3460,10 @@ static int __clk_core_init(struct clk_core *core) + out: + clk_pm_runtime_put(core); + unlock: +- if (ret) ++ if (ret) { + hlist_del_init(&core->child_node); ++ core->hw->core = NULL; ++ } + + clk_prepare_unlock(); + +@@ -3699,7 +3709,6 @@ __clk_register(struct device *dev, struct device_node *np, struct clk_hw *hw) + core->num_parents = init->num_parents; + core->min_rate = 0; + core->max_rate = ULONG_MAX; +- hw->core = core; + + ret = clk_core_populate_parent_map(core, init); + if (ret) +@@ -3717,7 +3726,7 @@ __clk_register(struct device *dev, struct device_node *np, struct clk_hw *hw) + goto fail_create_clk; + } + +- clk_core_link_consumer(hw->core, hw->clk); ++ clk_core_link_consumer(core, hw->clk); + + ret = __clk_core_init(core); + if (!ret) +diff --git a/drivers/dma/st_fdma.c b/drivers/dma/st_fdma.c +index 67087dbe2f9fa..f7393c19a1ba3 100644 +--- a/drivers/dma/st_fdma.c ++++ b/drivers/dma/st_fdma.c +@@ -873,4 +873,4 @@ MODULE_LICENSE("GPL v2"); + MODULE_DESCRIPTION("STMicroelectronics FDMA engine driver"); + MODULE_AUTHOR("Ludovic.barre "); + MODULE_AUTHOR("Peter Griffin "); +-MODULE_ALIAS("platform: " DRIVER_NAME); ++MODULE_ALIAS("platform:" DRIVER_NAME); +diff --git a/drivers/firmware/scpi_pm_domain.c b/drivers/firmware/scpi_pm_domain.c +index 51201600d789b..800673910b511 100644 +--- a/drivers/firmware/scpi_pm_domain.c ++++ b/drivers/firmware/scpi_pm_domain.c +@@ -16,7 +16,6 @@ struct scpi_pm_domain { + struct generic_pm_domain genpd; + struct scpi_ops *ops; + u32 domain; +- char name[30]; + }; + + /* +@@ -110,8 +109,13 @@ static int scpi_pm_domain_probe(struct platform_device *pdev) + + scpi_pd->domain = i; + scpi_pd->ops = scpi_ops; +- sprintf(scpi_pd->name, "%pOFn.%d", np, i); +- scpi_pd->genpd.name = scpi_pd->name; ++ scpi_pd->genpd.name = devm_kasprintf(dev, GFP_KERNEL, ++ "%pOFn.%d", np, i); ++ if (!scpi_pd->genpd.name) { ++ dev_err(dev, "Failed to allocate genpd name:%pOFn.%d\n", ++ np, i); ++ continue; ++ } + scpi_pd->genpd.power_off = scpi_pd_power_off; + scpi_pd->genpd.power_on = scpi_pd_power_on; + +diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c +index 06cdc22b5501d..5906a8951a6c6 100644 +--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c ++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c +@@ -2906,8 +2906,8 @@ static void gfx_v9_0_init_pg(struct amdgpu_device *adev) + AMD_PG_SUPPORT_CP | + AMD_PG_SUPPORT_GDS | + AMD_PG_SUPPORT_RLC_SMU_HS)) { +- WREG32(mmRLC_JUMP_TABLE_RESTORE, +- adev->gfx.rlc.cp_table_gpu_addr >> 8); ++ WREG32_SOC15(GC, 0, mmRLC_JUMP_TABLE_RESTORE, ++ adev->gfx.rlc.cp_table_gpu_addr >> 8); + gfx_v9_0_init_gfx_power_gating(adev); + } + } +diff --git a/drivers/iio/adc/stm32-adc.c b/drivers/iio/adc/stm32-adc.c +index 94fde39d9ff7a..d1bbd2b197fc6 100644 +--- a/drivers/iio/adc/stm32-adc.c ++++ b/drivers/iio/adc/stm32-adc.c +@@ -933,6 +933,7 @@ pwr_dwn: + + static void stm32h7_adc_unprepare(struct stm32_adc *adc) + { ++ stm32_adc_writel(adc, STM32H7_ADC_PCSEL, 0); + stm32h7_adc_disable(adc); + stm32h7_adc_enter_pwr_down(adc); + } +diff --git a/drivers/input/touchscreen/of_touchscreen.c b/drivers/input/touchscreen/of_touchscreen.c +index e16ec4c7043a4..2962c3747adc3 100644 +--- a/drivers/input/touchscreen/of_touchscreen.c ++++ b/drivers/input/touchscreen/of_touchscreen.c +@@ -81,8 +81,8 @@ void touchscreen_parse_properties(struct input_dev *input, bool multitouch, + touchscreen_get_prop_u32(dev, 
"touchscreen-size-x", + input_abs_get_max(input, + axis) + 1, +- &maximum) | +- touchscreen_get_prop_u32(dev, "touchscreen-fuzz-x", ++ &maximum); ++ data_present |= touchscreen_get_prop_u32(dev, "touchscreen-fuzz-x", + input_abs_get_fuzz(input, axis), + &fuzz); + if (data_present) +@@ -95,8 +95,8 @@ void touchscreen_parse_properties(struct input_dev *input, bool multitouch, + touchscreen_get_prop_u32(dev, "touchscreen-size-y", + input_abs_get_max(input, + axis) + 1, +- &maximum) | +- touchscreen_get_prop_u32(dev, "touchscreen-fuzz-y", ++ &maximum); ++ data_present |= touchscreen_get_prop_u32(dev, "touchscreen-fuzz-y", + input_abs_get_fuzz(input, axis), + &fuzz); + if (data_present) +@@ -106,11 +106,11 @@ void touchscreen_parse_properties(struct input_dev *input, bool multitouch, + data_present = touchscreen_get_prop_u32(dev, + "touchscreen-max-pressure", + input_abs_get_max(input, axis), +- &maximum) | +- touchscreen_get_prop_u32(dev, +- "touchscreen-fuzz-pressure", +- input_abs_get_fuzz(input, axis), +- &fuzz); ++ &maximum); ++ data_present |= touchscreen_get_prop_u32(dev, ++ "touchscreen-fuzz-pressure", ++ input_abs_get_fuzz(input, axis), ++ &fuzz); + if (data_present) + touchscreen_set_params(input, axis, 0, maximum, fuzz); + +diff --git a/drivers/md/persistent-data/dm-btree-remove.c b/drivers/md/persistent-data/dm-btree-remove.c +index 9e4d1212f4c16..63f2baed3c8a6 100644 +--- a/drivers/md/persistent-data/dm-btree-remove.c ++++ b/drivers/md/persistent-data/dm-btree-remove.c +@@ -423,9 +423,9 @@ static int rebalance_children(struct shadow_spine *s, + + memcpy(n, dm_block_data(child), + dm_bm_block_size(dm_tm_get_bm(info->tm))); +- dm_tm_unlock(info->tm, child); + + dm_tm_dec(info->tm, dm_block_location(child)); ++ dm_tm_unlock(info->tm, child); + return 0; + } + +diff --git a/drivers/media/usb/dvb-usb-v2/mxl111sf.c b/drivers/media/usb/dvb-usb-v2/mxl111sf.c +index 55b4ae7037a4e..5fbce81b64c77 100644 +--- a/drivers/media/usb/dvb-usb-v2/mxl111sf.c ++++ b/drivers/media/usb/dvb-usb-v2/mxl111sf.c +@@ -931,8 +931,6 @@ static int mxl111sf_init(struct dvb_usb_device *d) + .len = sizeof(eeprom), .buf = eeprom }, + }; + +- mutex_init(&state->msg_lock); +- + ret = get_chip_info(state); + if (mxl_fail(ret)) + pr_err("failed to get chip info during probe"); +@@ -1074,6 +1072,14 @@ static int mxl111sf_get_stream_config_dvbt(struct dvb_frontend *fe, + return 0; + } + ++static int mxl111sf_probe(struct dvb_usb_device *dev) ++{ ++ struct mxl111sf_state *state = d_to_priv(dev); ++ ++ mutex_init(&state->msg_lock); ++ return 0; ++} ++ + static struct dvb_usb_device_properties mxl111sf_props_dvbt = { + .driver_name = KBUILD_MODNAME, + .owner = THIS_MODULE, +@@ -1083,6 +1089,7 @@ static struct dvb_usb_device_properties mxl111sf_props_dvbt = { + .generic_bulk_ctrl_endpoint = 0x02, + .generic_bulk_ctrl_endpoint_response = 0x81, + ++ .probe = mxl111sf_probe, + .i2c_algo = &mxl111sf_i2c_algo, + .frontend_attach = mxl111sf_frontend_attach_dvbt, + .tuner_attach = mxl111sf_attach_tuner, +@@ -1124,6 +1131,7 @@ static struct dvb_usb_device_properties mxl111sf_props_atsc = { + .generic_bulk_ctrl_endpoint = 0x02, + .generic_bulk_ctrl_endpoint_response = 0x81, + ++ .probe = mxl111sf_probe, + .i2c_algo = &mxl111sf_i2c_algo, + .frontend_attach = mxl111sf_frontend_attach_atsc, + .tuner_attach = mxl111sf_attach_tuner, +@@ -1165,6 +1173,7 @@ static struct dvb_usb_device_properties mxl111sf_props_mh = { + .generic_bulk_ctrl_endpoint = 0x02, + .generic_bulk_ctrl_endpoint_response = 0x81, + ++ .probe = mxl111sf_probe, + .i2c_algo 
= &mxl111sf_i2c_algo, + .frontend_attach = mxl111sf_frontend_attach_mh, + .tuner_attach = mxl111sf_attach_tuner, +@@ -1233,6 +1242,7 @@ static struct dvb_usb_device_properties mxl111sf_props_atsc_mh = { + .generic_bulk_ctrl_endpoint = 0x02, + .generic_bulk_ctrl_endpoint_response = 0x81, + ++ .probe = mxl111sf_probe, + .i2c_algo = &mxl111sf_i2c_algo, + .frontend_attach = mxl111sf_frontend_attach_atsc_mh, + .tuner_attach = mxl111sf_attach_tuner, +@@ -1311,6 +1321,7 @@ static struct dvb_usb_device_properties mxl111sf_props_mercury = { + .generic_bulk_ctrl_endpoint = 0x02, + .generic_bulk_ctrl_endpoint_response = 0x81, + ++ .probe = mxl111sf_probe, + .i2c_algo = &mxl111sf_i2c_algo, + .frontend_attach = mxl111sf_frontend_attach_mercury, + .tuner_attach = mxl111sf_attach_tuner, +@@ -1381,6 +1392,7 @@ static struct dvb_usb_device_properties mxl111sf_props_mercury_mh = { + .generic_bulk_ctrl_endpoint = 0x02, + .generic_bulk_ctrl_endpoint_response = 0x81, + ++ .probe = mxl111sf_probe, + .i2c_algo = &mxl111sf_i2c_algo, + .frontend_attach = mxl111sf_frontend_attach_mercury_mh, + .tuner_attach = mxl111sf_attach_tuner, +diff --git a/drivers/net/ethernet/broadcom/bcmsysport.c b/drivers/net/ethernet/broadcom/bcmsysport.c +index 470d12e308814..5a2094a281e15 100644 +--- a/drivers/net/ethernet/broadcom/bcmsysport.c ++++ b/drivers/net/ethernet/broadcom/bcmsysport.c +@@ -1277,11 +1277,11 @@ static netdev_tx_t bcm_sysport_xmit(struct sk_buff *skb, + struct bcm_sysport_priv *priv = netdev_priv(dev); + struct device *kdev = &priv->pdev->dev; + struct bcm_sysport_tx_ring *ring; ++ unsigned long flags, desc_flags; + struct bcm_sysport_cb *cb; + struct netdev_queue *txq; + u32 len_status, addr_lo; + unsigned int skb_len; +- unsigned long flags; + dma_addr_t mapping; + u16 queue; + int ret; +@@ -1339,8 +1339,10 @@ static netdev_tx_t bcm_sysport_xmit(struct sk_buff *skb, + ring->desc_count--; + + /* Ports are latched, so write upper address first */ ++ spin_lock_irqsave(&priv->desc_lock, desc_flags); + tdma_writel(priv, len_status, TDMA_WRITE_PORT_HI(ring->index)); + tdma_writel(priv, addr_lo, TDMA_WRITE_PORT_LO(ring->index)); ++ spin_unlock_irqrestore(&priv->desc_lock, desc_flags); + + /* Check ring space and update SW control flow */ + if (ring->desc_count == 0) +@@ -1970,6 +1972,7 @@ static int bcm_sysport_open(struct net_device *dev) + } + + /* Initialize both hardware and software ring */ ++ spin_lock_init(&priv->desc_lock); + for (i = 0; i < dev->num_tx_queues; i++) { + ret = bcm_sysport_init_tx_ring(priv, i); + if (ret) { +diff --git a/drivers/net/ethernet/broadcom/bcmsysport.h b/drivers/net/ethernet/broadcom/bcmsysport.h +index 6d80735fbc7f4..57336ca3f4277 100644 +--- a/drivers/net/ethernet/broadcom/bcmsysport.h ++++ b/drivers/net/ethernet/broadcom/bcmsysport.h +@@ -742,6 +742,7 @@ struct bcm_sysport_priv { + int wol_irq; + + /* Transmit rings */ ++ spinlock_t desc_lock; + struct bcm_sysport_tx_ring *tx_rings; + + /* Receive queue */ +diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c +index c11244a9b7e69..3df25b231ab5c 100644 +--- a/drivers/net/ethernet/intel/igb/igb_main.c ++++ b/drivers/net/ethernet/intel/igb/igb_main.c +@@ -7374,6 +7374,20 @@ static int igb_set_vf_mac_filter(struct igb_adapter *adapter, const int vf, + struct vf_mac_filter *entry = NULL; + int ret = 0; + ++ if ((vf_data->flags & IGB_VF_FLAG_PF_SET_MAC) && ++ !vf_data->trusted) { ++ dev_warn(&pdev->dev, ++ "VF %d requested MAC filter but is administratively denied\n", ++ vf); ++ return 
-EINVAL; ++ } ++ if (!is_valid_ether_addr(addr)) { ++ dev_warn(&pdev->dev, ++ "VF %d attempted to set invalid MAC filter\n", ++ vf); ++ return -EINVAL; ++ } ++ + switch (info) { + case E1000_VF_MAC_FILTER_CLR: + /* remove all unicast MAC filters related to the current VF */ +@@ -7387,20 +7401,6 @@ static int igb_set_vf_mac_filter(struct igb_adapter *adapter, const int vf, + } + break; + case E1000_VF_MAC_FILTER_ADD: +- if ((vf_data->flags & IGB_VF_FLAG_PF_SET_MAC) && +- !vf_data->trusted) { +- dev_warn(&pdev->dev, +- "VF %d requested MAC filter but is administratively denied\n", +- vf); +- return -EINVAL; +- } +- if (!is_valid_ether_addr(addr)) { +- dev_warn(&pdev->dev, +- "VF %d attempted to set invalid MAC filter\n", +- vf); +- return -EINVAL; +- } +- + /* try to find empty slot in the list */ + list_for_each(pos, &adapter->vf_macs.l) { + entry = list_entry(pos, struct vf_mac_filter, l); +diff --git a/drivers/net/ethernet/intel/igbvf/netdev.c b/drivers/net/ethernet/intel/igbvf/netdev.c +index 77cb2ab7dab40..1082e49ea0560 100644 +--- a/drivers/net/ethernet/intel/igbvf/netdev.c ++++ b/drivers/net/ethernet/intel/igbvf/netdev.c +@@ -2887,6 +2887,7 @@ static int igbvf_probe(struct pci_dev *pdev, const struct pci_device_id *ent) + return 0; + + err_hw_init: ++ netif_napi_del(&adapter->rx_ring->napi); + kfree(adapter->tx_ring); + kfree(adapter->rx_ring); + err_sw_init: +diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c +index 9c42f741ed5ef..74728c0a44a81 100644 +--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c ++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c +@@ -3405,6 +3405,9 @@ static s32 ixgbe_reset_hw_X550em(struct ixgbe_hw *hw) + /* flush pending Tx transactions */ + ixgbe_clear_tx_pending(hw); + ++ /* set MDIO speed before talking to the PHY in case it's the 1st time */ ++ ixgbe_set_mdio_speed(hw); ++ + /* PHY ops must be identified and initialized prior to reset */ + status = hw->phy.ops.init(hw); + if (status == IXGBE_ERR_SFP_NOT_SUPPORTED || +diff --git a/drivers/net/netdevsim/bpf.c b/drivers/net/netdevsim/bpf.c +index 2b74425822ab1..e0a4acc6144bf 100644 +--- a/drivers/net/netdevsim/bpf.c ++++ b/drivers/net/netdevsim/bpf.c +@@ -510,6 +510,7 @@ nsim_bpf_map_alloc(struct netdevsim *ns, struct bpf_offloaded_map *offmap) + goto err_free; + key = nmap->entry[i].key; + *key = i; ++ memset(nmap->entry[i].value, 0, offmap->map.value_size); + } + } + +diff --git a/drivers/net/wireless/marvell/mwifiex/cmdevt.c b/drivers/net/wireless/marvell/mwifiex/cmdevt.c +index e8788c35a453d..ec04515bd9dfa 100644 +--- a/drivers/net/wireless/marvell/mwifiex/cmdevt.c ++++ b/drivers/net/wireless/marvell/mwifiex/cmdevt.c +@@ -322,9 +322,9 @@ static int mwifiex_dnld_sleep_confirm_cmd(struct mwifiex_adapter *adapter) + + adapter->seq_num++; + sleep_cfm_buf->seq_num = +- cpu_to_le16((HostCmd_SET_SEQ_NO_BSS_INFO ++ cpu_to_le16(HostCmd_SET_SEQ_NO_BSS_INFO + (adapter->seq_num, priv->bss_num, +- priv->bss_type))); ++ priv->bss_type)); + + mwifiex_dbg(adapter, CMD, + "cmd: DNLD_CMD: %#x, act %#x, len %d, seqno %#x\n", +diff --git a/drivers/net/wireless/marvell/mwifiex/fw.h b/drivers/net/wireless/marvell/mwifiex/fw.h +index 8b9d0809daf62..076ea1c4b921d 100644 +--- a/drivers/net/wireless/marvell/mwifiex/fw.h ++++ b/drivers/net/wireless/marvell/mwifiex/fw.h +@@ -512,10 +512,10 @@ enum mwifiex_channel_flags { + + #define RF_ANTENNA_AUTO 0xFFFF + +-#define HostCmd_SET_SEQ_NO_BSS_INFO(seq, num, type) { \ +- (((seq) & 0x00ff) | \ +- (((num) & 0x000f) << 8)) | \ +- 
(((type) & 0x000f) << 12); } ++#define HostCmd_SET_SEQ_NO_BSS_INFO(seq, num, type) \ ++ ((((seq) & 0x00ff) | \ ++ (((num) & 0x000f) << 8)) | \ ++ (((type) & 0x000f) << 12)) + + #define HostCmd_GET_SEQ_NO(seq) \ + ((seq) & HostCmd_SEQ_NUM_MASK) +diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h +index 32fe131ba366d..f7e746f1c9fb3 100644 +--- a/drivers/net/xen-netback/common.h ++++ b/drivers/net/xen-netback/common.h +@@ -203,6 +203,7 @@ struct xenvif_queue { /* Per-queue data for xenvif */ + unsigned int rx_queue_max; + unsigned int rx_queue_len; + unsigned long last_rx_time; ++ unsigned int rx_slots_needed; + bool stalled; + + struct xenvif_copy_state rx_copy; +diff --git a/drivers/net/xen-netback/rx.c b/drivers/net/xen-netback/rx.c +index 48e2006f96ce6..7f68067c01745 100644 +--- a/drivers/net/xen-netback/rx.c ++++ b/drivers/net/xen-netback/rx.c +@@ -33,28 +33,36 @@ + #include + #include + +-static bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue) ++/* ++ * Update the needed ring page slots for the first SKB queued. ++ * Note that any call sequence outside the RX thread calling this function ++ * needs to wake up the RX thread via a call of xenvif_kick_thread() ++ * afterwards in order to avoid a race with putting the thread to sleep. ++ */ ++static void xenvif_update_needed_slots(struct xenvif_queue *queue, ++ const struct sk_buff *skb) + { +- RING_IDX prod, cons; +- struct sk_buff *skb; +- int needed; +- unsigned long flags; +- +- spin_lock_irqsave(&queue->rx_queue.lock, flags); ++ unsigned int needed = 0; + +- skb = skb_peek(&queue->rx_queue); +- if (!skb) { +- spin_unlock_irqrestore(&queue->rx_queue.lock, flags); +- return false; ++ if (skb) { ++ needed = DIV_ROUND_UP(skb->len, XEN_PAGE_SIZE); ++ if (skb_is_gso(skb)) ++ needed++; ++ if (skb->sw_hash) ++ needed++; + } + +- needed = DIV_ROUND_UP(skb->len, XEN_PAGE_SIZE); +- if (skb_is_gso(skb)) +- needed++; +- if (skb->sw_hash) +- needed++; ++ WRITE_ONCE(queue->rx_slots_needed, needed); ++} + +- spin_unlock_irqrestore(&queue->rx_queue.lock, flags); ++static bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue) ++{ ++ RING_IDX prod, cons; ++ unsigned int needed; ++ ++ needed = READ_ONCE(queue->rx_slots_needed); ++ if (!needed) ++ return false; + + do { + prod = queue->rx.sring->req_prod; +@@ -80,13 +88,19 @@ void xenvif_rx_queue_tail(struct xenvif_queue *queue, struct sk_buff *skb) + + spin_lock_irqsave(&queue->rx_queue.lock, flags); + +- __skb_queue_tail(&queue->rx_queue, skb); +- +- queue->rx_queue_len += skb->len; +- if (queue->rx_queue_len > queue->rx_queue_max) { ++ if (queue->rx_queue_len >= queue->rx_queue_max) { + struct net_device *dev = queue->vif->dev; + + netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id)); ++ kfree_skb(skb); ++ queue->vif->dev->stats.rx_dropped++; ++ } else { ++ if (skb_queue_empty(&queue->rx_queue)) ++ xenvif_update_needed_slots(queue, skb); ++ ++ __skb_queue_tail(&queue->rx_queue, skb); ++ ++ queue->rx_queue_len += skb->len; + } + + spin_unlock_irqrestore(&queue->rx_queue.lock, flags); +@@ -100,6 +114,8 @@ static struct sk_buff *xenvif_rx_dequeue(struct xenvif_queue *queue) + + skb = __skb_dequeue(&queue->rx_queue); + if (skb) { ++ xenvif_update_needed_slots(queue, skb_peek(&queue->rx_queue)); ++ + queue->rx_queue_len -= skb->len; + if (queue->rx_queue_len < queue->rx_queue_max) { + struct netdev_queue *txq; +@@ -134,6 +150,7 @@ static void xenvif_rx_queue_drop_expired(struct xenvif_queue *queue) + break; + xenvif_rx_dequeue(queue); + 
kfree_skb(skb); ++ queue->vif->dev->stats.rx_dropped++; + } + } + +@@ -474,27 +491,31 @@ void xenvif_rx_action(struct xenvif_queue *queue) + xenvif_rx_copy_flush(queue); + } + +-static bool xenvif_rx_queue_stalled(struct xenvif_queue *queue) ++static RING_IDX xenvif_rx_queue_slots(const struct xenvif_queue *queue) + { + RING_IDX prod, cons; + + prod = queue->rx.sring->req_prod; + cons = queue->rx.req_cons; + ++ return prod - cons; ++} ++ ++static bool xenvif_rx_queue_stalled(const struct xenvif_queue *queue) ++{ ++ unsigned int needed = READ_ONCE(queue->rx_slots_needed); ++ + return !queue->stalled && +- prod - cons < 1 && ++ xenvif_rx_queue_slots(queue) < needed && + time_after(jiffies, + queue->last_rx_time + queue->vif->stall_timeout); + } + + static bool xenvif_rx_queue_ready(struct xenvif_queue *queue) + { +- RING_IDX prod, cons; +- +- prod = queue->rx.sring->req_prod; +- cons = queue->rx.req_cons; ++ unsigned int needed = READ_ONCE(queue->rx_slots_needed); + +- return queue->stalled && prod - cons >= 1; ++ return queue->stalled && xenvif_rx_queue_slots(queue) >= needed; + } + + bool xenvif_have_rx_work(struct xenvif_queue *queue, bool test_kthread) +diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c +index d6f44343213cc..d2b3381f71825 100644 +--- a/drivers/net/xen-netfront.c ++++ b/drivers/net/xen-netfront.c +@@ -142,6 +142,9 @@ struct netfront_queue { + struct sk_buff *rx_skbs[NET_RX_RING_SIZE]; + grant_ref_t gref_rx_head; + grant_ref_t grant_rx_ref[NET_RX_RING_SIZE]; ++ ++ unsigned int rx_rsp_unconsumed; ++ spinlock_t rx_cons_lock; + }; + + struct netfront_info { +@@ -364,12 +367,13 @@ static int xennet_open(struct net_device *dev) + return 0; + } + +-static void xennet_tx_buf_gc(struct netfront_queue *queue) ++static bool xennet_tx_buf_gc(struct netfront_queue *queue) + { + RING_IDX cons, prod; + unsigned short id; + struct sk_buff *skb; + bool more_to_do; ++ bool work_done = false; + const struct device *dev = &queue->info->netdev->dev; + + BUG_ON(!netif_carrier_ok(queue->info->netdev)); +@@ -386,6 +390,8 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue) + for (cons = queue->tx.rsp_cons; cons != prod; cons++) { + struct xen_netif_tx_response txrsp; + ++ work_done = true; ++ + RING_COPY_RESPONSE(&queue->tx, cons, &txrsp); + if (txrsp.status == XEN_NETIF_RSP_NULL) + continue; +@@ -429,11 +435,13 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue) + + xennet_maybe_wake_tx(queue); + +- return; ++ return work_done; + + err: + queue->info->broken = true; + dev_alert(dev, "Disabled for further use\n"); ++ ++ return work_done; + } + + struct xennet_gnttab_make_txreq { +@@ -753,6 +761,16 @@ static int xennet_close(struct net_device *dev) + return 0; + } + ++static void xennet_set_rx_rsp_cons(struct netfront_queue *queue, RING_IDX val) ++{ ++ unsigned long flags; ++ ++ spin_lock_irqsave(&queue->rx_cons_lock, flags); ++ queue->rx.rsp_cons = val; ++ queue->rx_rsp_unconsumed = RING_HAS_UNCONSUMED_RESPONSES(&queue->rx); ++ spin_unlock_irqrestore(&queue->rx_cons_lock, flags); ++} ++ + static void xennet_move_rx_slot(struct netfront_queue *queue, struct sk_buff *skb, + grant_ref_t ref) + { +@@ -804,7 +822,7 @@ static int xennet_get_extras(struct netfront_queue *queue, + xennet_move_rx_slot(queue, skb, ref); + } while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE); + +- queue->rx.rsp_cons = cons; ++ xennet_set_rx_rsp_cons(queue, cons); + return err; + } + +@@ -884,7 +902,7 @@ next: + } + + if (unlikely(err)) +- queue->rx.rsp_cons = cons + slots; ++ 
xennet_set_rx_rsp_cons(queue, cons + slots); + + return err; + } +@@ -938,7 +956,8 @@ static int xennet_fill_frags(struct netfront_queue *queue, + __pskb_pull_tail(skb, pull_to - skb_headlen(skb)); + } + if (unlikely(skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS)) { +- queue->rx.rsp_cons = ++cons + skb_queue_len(list); ++ xennet_set_rx_rsp_cons(queue, ++ ++cons + skb_queue_len(list)); + kfree_skb(nskb); + return -ENOENT; + } +@@ -951,7 +970,7 @@ static int xennet_fill_frags(struct netfront_queue *queue, + kfree_skb(nskb); + } + +- queue->rx.rsp_cons = cons; ++ xennet_set_rx_rsp_cons(queue, cons); + + return 0; + } +@@ -1072,7 +1091,9 @@ err: + + if (unlikely(xennet_set_skb_gso(skb, gso))) { + __skb_queue_head(&tmpq, skb); +- queue->rx.rsp_cons += skb_queue_len(&tmpq); ++ xennet_set_rx_rsp_cons(queue, ++ queue->rx.rsp_cons + ++ skb_queue_len(&tmpq)); + goto err; + } + } +@@ -1096,7 +1117,8 @@ err: + + __skb_queue_tail(&rxq, skb); + +- i = ++queue->rx.rsp_cons; ++ i = queue->rx.rsp_cons + 1; ++ xennet_set_rx_rsp_cons(queue, i); + work_done++; + } + +@@ -1258,40 +1280,79 @@ static int xennet_set_features(struct net_device *dev, + return 0; + } + +-static irqreturn_t xennet_tx_interrupt(int irq, void *dev_id) ++static bool xennet_handle_tx(struct netfront_queue *queue, unsigned int *eoi) + { +- struct netfront_queue *queue = dev_id; + unsigned long flags; + +- if (queue->info->broken) +- return IRQ_HANDLED; ++ if (unlikely(queue->info->broken)) ++ return false; + + spin_lock_irqsave(&queue->tx_lock, flags); +- xennet_tx_buf_gc(queue); ++ if (xennet_tx_buf_gc(queue)) ++ *eoi = 0; + spin_unlock_irqrestore(&queue->tx_lock, flags); + ++ return true; ++} ++ ++static irqreturn_t xennet_tx_interrupt(int irq, void *dev_id) ++{ ++ unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS; ++ ++ if (likely(xennet_handle_tx(dev_id, &eoiflag))) ++ xen_irq_lateeoi(irq, eoiflag); ++ + return IRQ_HANDLED; + } + +-static irqreturn_t xennet_rx_interrupt(int irq, void *dev_id) ++static bool xennet_handle_rx(struct netfront_queue *queue, unsigned int *eoi) + { +- struct netfront_queue *queue = dev_id; +- struct net_device *dev = queue->info->netdev; ++ unsigned int work_queued; ++ unsigned long flags; + +- if (queue->info->broken) +- return IRQ_HANDLED; ++ if (unlikely(queue->info->broken)) ++ return false; ++ ++ spin_lock_irqsave(&queue->rx_cons_lock, flags); ++ work_queued = RING_HAS_UNCONSUMED_RESPONSES(&queue->rx); ++ if (work_queued > queue->rx_rsp_unconsumed) { ++ queue->rx_rsp_unconsumed = work_queued; ++ *eoi = 0; ++ } else if (unlikely(work_queued < queue->rx_rsp_unconsumed)) { ++ const struct device *dev = &queue->info->netdev->dev; ++ ++ spin_unlock_irqrestore(&queue->rx_cons_lock, flags); ++ dev_alert(dev, "RX producer index going backwards\n"); ++ dev_alert(dev, "Disabled for further use\n"); ++ queue->info->broken = true; ++ return false; ++ } ++ spin_unlock_irqrestore(&queue->rx_cons_lock, flags); + +- if (likely(netif_carrier_ok(dev) && +- RING_HAS_UNCONSUMED_RESPONSES(&queue->rx))) ++ if (likely(netif_carrier_ok(queue->info->netdev) && work_queued)) + napi_schedule(&queue->napi); + ++ return true; ++} ++ ++static irqreturn_t xennet_rx_interrupt(int irq, void *dev_id) ++{ ++ unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS; ++ ++ if (likely(xennet_handle_rx(dev_id, &eoiflag))) ++ xen_irq_lateeoi(irq, eoiflag); ++ + return IRQ_HANDLED; + } + + static irqreturn_t xennet_interrupt(int irq, void *dev_id) + { +- xennet_tx_interrupt(irq, dev_id); +- xennet_rx_interrupt(irq, dev_id); ++ unsigned int eoiflag = 
XEN_EOI_FLAG_SPURIOUS; ++ ++ if (xennet_handle_tx(dev_id, &eoiflag) && ++ xennet_handle_rx(dev_id, &eoiflag)) ++ xen_irq_lateeoi(irq, eoiflag); ++ + return IRQ_HANDLED; + } + +@@ -1525,9 +1586,10 @@ static int setup_netfront_single(struct netfront_queue *queue) + if (err < 0) + goto fail; + +- err = bind_evtchn_to_irqhandler(queue->tx_evtchn, +- xennet_interrupt, +- 0, queue->info->netdev->name, queue); ++ err = bind_evtchn_to_irqhandler_lateeoi(queue->tx_evtchn, ++ xennet_interrupt, 0, ++ queue->info->netdev->name, ++ queue); + if (err < 0) + goto bind_fail; + queue->rx_evtchn = queue->tx_evtchn; +@@ -1555,18 +1617,18 @@ static int setup_netfront_split(struct netfront_queue *queue) + + snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name), + "%s-tx", queue->name); +- err = bind_evtchn_to_irqhandler(queue->tx_evtchn, +- xennet_tx_interrupt, +- 0, queue->tx_irq_name, queue); ++ err = bind_evtchn_to_irqhandler_lateeoi(queue->tx_evtchn, ++ xennet_tx_interrupt, 0, ++ queue->tx_irq_name, queue); + if (err < 0) + goto bind_tx_fail; + queue->tx_irq = err; + + snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name), + "%s-rx", queue->name); +- err = bind_evtchn_to_irqhandler(queue->rx_evtchn, +- xennet_rx_interrupt, +- 0, queue->rx_irq_name, queue); ++ err = bind_evtchn_to_irqhandler_lateeoi(queue->rx_evtchn, ++ xennet_rx_interrupt, 0, ++ queue->rx_irq_name, queue); + if (err < 0) + goto bind_rx_fail; + queue->rx_irq = err; +@@ -1668,6 +1730,7 @@ static int xennet_init_queue(struct netfront_queue *queue) + + spin_lock_init(&queue->tx_lock); + spin_lock_init(&queue->rx_lock); ++ spin_lock_init(&queue->rx_cons_lock); + + timer_setup(&queue->rx_refill_timer, rx_refill_timeout, 0); + +diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c +index d0cc6c0d74d6b..7dc10c2b4785d 100644 +--- a/drivers/pci/msi.c ++++ b/drivers/pci/msi.c +@@ -827,9 +827,6 @@ static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries, + goto out_disable; + } + +- /* Ensure that all table entries are masked. */ +- msix_mask_all(base, tsize); +- + ret = msix_setup_entries(dev, base, entries, nvec, affd); + if (ret) + goto out_disable; +@@ -852,6 +849,16 @@ static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries, + /* Set MSI-X enabled bits and unmask the function */ + pci_intx_for_msi(dev, 0); + dev->msix_enabled = 1; ++ ++ /* ++ * Ensure that all table entries are masked to prevent ++ * stale entries from firing in a crash kernel. ++ * ++ * Done late to deal with a broken Marvell NVME device ++ * which takes the MSI-X mask bits into account even ++ * when MSI-X is disabled, which prevents MSI delivery. ++ */ ++ msix_mask_all(base, tsize); + pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL, 0); + + pcibios_free_irq(dev); +@@ -878,7 +885,7 @@ out_free: + free_msi_irqs(dev); + + out_disable: +- pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_ENABLE, 0); ++ pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL | PCI_MSIX_FLAGS_ENABLE, 0); + + return ret; + } +diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c +index 44181a2cbf18d..408166bd20f33 100644 +--- a/drivers/scsi/scsi_debug.c ++++ b/drivers/scsi/scsi_debug.c +@@ -2296,11 +2296,11 @@ static int resp_mode_select(struct scsi_cmnd *scp, + __func__, param_len, res); + md_len = mselect6 ? (arr[0] + 1) : (get_unaligned_be16(arr + 0) + 2); + bd_len = mselect6 ? arr[3] : get_unaligned_be16(arr + 6); +- if (md_len > 2) { ++ off = bd_len + (mselect6 ? 
4 : 8); ++ if (md_len > 2 || off >= res) { + mk_sense_invalid_fld(scp, SDEB_IN_DATA, 0, -1); + return check_condition_result; + } +- off = bd_len + (mselect6 ? 4 : 8); + mpage = arr[off] & 0x3f; + ps = !!(arr[off] & 0x80); + if (ps) { +diff --git a/drivers/soc/tegra/fuse/fuse-tegra.c b/drivers/soc/tegra/fuse/fuse-tegra.c +index 3eb44e65b3261..1a54bac512b69 100644 +--- a/drivers/soc/tegra/fuse/fuse-tegra.c ++++ b/drivers/soc/tegra/fuse/fuse-tegra.c +@@ -172,7 +172,7 @@ static struct platform_driver tegra_fuse_driver = { + }; + builtin_platform_driver(tegra_fuse_driver); + +-bool __init tegra_fuse_read_spare(unsigned int spare) ++u32 __init tegra_fuse_read_spare(unsigned int spare) + { + unsigned int offset = fuse->soc->info->spare + spare * 4; + +diff --git a/drivers/soc/tegra/fuse/fuse.h b/drivers/soc/tegra/fuse/fuse.h +index 7230cb3305033..6996cfc7cbca3 100644 +--- a/drivers/soc/tegra/fuse/fuse.h ++++ b/drivers/soc/tegra/fuse/fuse.h +@@ -53,7 +53,7 @@ struct tegra_fuse { + void tegra_init_revision(void); + void tegra_init_apbmisc(void); + +-bool __init tegra_fuse_read_spare(unsigned int spare); ++u32 __init tegra_fuse_read_spare(unsigned int spare); + u32 __init tegra_fuse_read_early(unsigned int offset); + + #ifdef CONFIG_ARCH_TEGRA_2x_SOC +diff --git a/drivers/tty/hvc/hvc_xen.c b/drivers/tty/hvc/hvc_xen.c +index 15da02aeee948..2d2d04c071401 100644 +--- a/drivers/tty/hvc/hvc_xen.c ++++ b/drivers/tty/hvc/hvc_xen.c +@@ -37,6 +37,8 @@ struct xencons_info { + struct xenbus_device *xbdev; + struct xencons_interface *intf; + unsigned int evtchn; ++ XENCONS_RING_IDX out_cons; ++ unsigned int out_cons_same; + struct hvc_struct *hvc; + int irq; + int vtermno; +@@ -138,6 +140,8 @@ static int domU_read_console(uint32_t vtermno, char *buf, int len) + XENCONS_RING_IDX cons, prod; + int recv = 0; + struct xencons_info *xencons = vtermno_to_xencons(vtermno); ++ unsigned int eoiflag = 0; ++ + if (xencons == NULL) + return -EINVAL; + intf = xencons->intf; +@@ -157,7 +161,27 @@ static int domU_read_console(uint32_t vtermno, char *buf, int len) + mb(); /* read ring before consuming */ + intf->in_cons = cons; + +- notify_daemon(xencons); ++ /* ++ * When to mark interrupt having been spurious: ++ * - there was no new data to be read, and ++ * - the backend did not consume some output bytes, and ++ * - the previous round with no read data didn't see consumed bytes ++ * (we might have a race with an interrupt being in flight while ++ * updating xencons->out_cons, so account for that by allowing one ++ * round without any visible reason) ++ */ ++ if (intf->out_cons != xencons->out_cons) { ++ xencons->out_cons = intf->out_cons; ++ xencons->out_cons_same = 0; ++ } ++ if (recv) { ++ notify_daemon(xencons); ++ } else if (xencons->out_cons_same++ > 1) { ++ eoiflag = XEN_EOI_FLAG_SPURIOUS; ++ } ++ ++ xen_irq_lateeoi(xencons->irq, eoiflag); ++ + return recv; + } + +@@ -386,7 +410,7 @@ static int xencons_connect_backend(struct xenbus_device *dev, + if (ret) + return ret; + info->evtchn = evtchn; +- irq = bind_evtchn_to_irq(evtchn); ++ irq = bind_interdomain_evtchn_to_irq_lateeoi(dev->otherend_id, evtchn); + if (irq < 0) + return irq; + info->irq = irq; +@@ -550,7 +574,7 @@ static int __init xen_hvc_init(void) + return r; + + info = vtermno_to_xencons(HVC_COOKIE); +- info->irq = bind_evtchn_to_irq(info->evtchn); ++ info->irq = bind_evtchn_to_irq_lateeoi(info->evtchn); + } + if (info->irq < 0) + info->irq = 0; /* NO_IRQ */ +diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c +index 
e170c5b4d6f0c..a118c44c70e1e 100644 +--- a/drivers/usb/core/quirks.c ++++ b/drivers/usb/core/quirks.c +@@ -435,6 +435,9 @@ static const struct usb_device_id usb_quirk_list[] = { + { USB_DEVICE(0x1532, 0x0116), .driver_info = + USB_QUIRK_LINEAR_UFRAME_INTR_BINTERVAL }, + ++ /* Lenovo USB-C to Ethernet Adapter RTL8153-04 */ ++ { USB_DEVICE(0x17ef, 0x720c), .driver_info = USB_QUIRK_NO_LPM }, ++ + /* Lenovo Powered USB-C Travel Hub (4X90S92381, RTL8153 GigE) */ + { USB_DEVICE(0x17ef, 0x721e), .driver_info = USB_QUIRK_NO_LPM }, + +diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c +index d2980e30f3417..c5acf5c39fb18 100644 +--- a/drivers/usb/gadget/composite.c ++++ b/drivers/usb/gadget/composite.c +@@ -1649,14 +1649,14 @@ composite_setup(struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl) + u8 endp; + + if (w_length > USB_COMP_EP0_BUFSIZ) { +- if (ctrl->bRequestType == USB_DIR_OUT) { +- goto done; +- } else { ++ if (ctrl->bRequestType & USB_DIR_IN) { + /* Cast away the const, we are going to overwrite on purpose. */ + __le16 *temp = (__le16 *)&ctrl->wLength; + + *temp = cpu_to_le16(USB_COMP_EP0_BUFSIZ); + w_length = USB_COMP_EP0_BUFSIZ; ++ } else { ++ goto done; + } + } + +diff --git a/drivers/usb/gadget/legacy/dbgp.c b/drivers/usb/gadget/legacy/dbgp.c +index 355bc7dab9d5f..6bcbad3825802 100644 +--- a/drivers/usb/gadget/legacy/dbgp.c ++++ b/drivers/usb/gadget/legacy/dbgp.c +@@ -346,14 +346,14 @@ static int dbgp_setup(struct usb_gadget *gadget, + u16 len = 0; + + if (length > DBGP_REQ_LEN) { +- if (ctrl->bRequestType == USB_DIR_OUT) { +- return err; +- } else { ++ if (ctrl->bRequestType & USB_DIR_IN) { + /* Cast away the const, we are going to overwrite on purpose. */ + __le16 *temp = (__le16 *)&ctrl->wLength; + + *temp = cpu_to_le16(DBGP_REQ_LEN); + length = DBGP_REQ_LEN; ++ } else { ++ return err; + } + } + +diff --git a/drivers/usb/gadget/legacy/inode.c b/drivers/usb/gadget/legacy/inode.c +index f0aff79f544c3..5f1e15172403e 100644 +--- a/drivers/usb/gadget/legacy/inode.c ++++ b/drivers/usb/gadget/legacy/inode.c +@@ -1336,14 +1336,14 @@ gadgetfs_setup (struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl) + u16 w_length = le16_to_cpu(ctrl->wLength); + + if (w_length > RBUF_SIZE) { +- if (ctrl->bRequestType == USB_DIR_OUT) { +- return value; +- } else { ++ if (ctrl->bRequestType & USB_DIR_IN) { + /* Cast away the const, we are going to overwrite on purpose. 
*/ + __le16 *temp = (__le16 *)&ctrl->wLength; + + *temp = cpu_to_le16(RBUF_SIZE); + w_length = RBUF_SIZE; ++ } else { ++ return value; + } + } + +diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c +index beee3543950fe..ded05c39e4d1c 100644 +--- a/drivers/usb/host/xhci-pci.c ++++ b/drivers/usb/host/xhci-pci.c +@@ -65,6 +65,8 @@ + #define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_4 0x161e + #define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_5 0x15d6 + #define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_6 0x15d7 ++#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_7 0x161c ++#define PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_8 0x161f + + #define PCI_DEVICE_ID_ASMEDIA_1042_XHCI 0x1042 + #define PCI_DEVICE_ID_ASMEDIA_1042A_XHCI 0x1142 +@@ -303,7 +305,9 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci) + pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_3 || + pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_4 || + pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_5 || +- pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_6)) ++ pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_6 || ++ pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_7 || ++ pdev->device == PCI_DEVICE_ID_AMD_YELLOW_CARP_XHCI_8)) + xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW; + + if (xhci->quirks & XHCI_RESET_ON_RESUME) +diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c +index 004b6598706b1..50f289b124d0d 100644 +--- a/drivers/usb/serial/cp210x.c ++++ b/drivers/usb/serial/cp210x.c +@@ -1552,6 +1552,8 @@ static int cp2105_gpioconf_init(struct usb_serial *serial) + + /* 2 banks of GPIO - One for the pins taken from each serial port */ + if (intf_num == 0) { ++ priv->gc.ngpio = 2; ++ + if (mode.eci == CP210X_PIN_MODE_MODEM) { + /* mark all GPIOs of this interface as reserved */ + priv->gpio_altfunc = 0xff; +@@ -1562,8 +1564,9 @@ static int cp2105_gpioconf_init(struct usb_serial *serial) + priv->gpio_pushpull = (u8)((le16_to_cpu(config.gpio_mode) & + CP210X_ECI_GPIO_MODE_MASK) >> + CP210X_ECI_GPIO_MODE_OFFSET); +- priv->gc.ngpio = 2; + } else if (intf_num == 1) { ++ priv->gc.ngpio = 3; ++ + if (mode.sci == CP210X_PIN_MODE_MODEM) { + /* mark all GPIOs of this interface as reserved */ + priv->gpio_altfunc = 0xff; +@@ -1574,7 +1577,6 @@ static int cp2105_gpioconf_init(struct usb_serial *serial) + priv->gpio_pushpull = (u8)((le16_to_cpu(config.gpio_mode) & + CP210X_SCI_GPIO_MODE_MASK) >> + CP210X_SCI_GPIO_MODE_OFFSET); +- priv->gc.ngpio = 3; + } else { + return -ENODEV; + } +diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c +index 74203ed5479fa..2397d83434931 100644 +--- a/drivers/usb/serial/option.c ++++ b/drivers/usb/serial/option.c +@@ -1219,6 +1219,14 @@ static const struct usb_device_id option_ids[] = { + .driver_info = NCTRL(2) | RSVD(3) }, + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1063, 0xff), /* Telit LN920 (ECM) */ + .driver_info = NCTRL(0) | RSVD(1) }, ++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1070, 0xff), /* Telit FN990 (rmnet) */ ++ .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) }, ++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1071, 0xff), /* Telit FN990 (MBIM) */ ++ .driver_info = NCTRL(0) | RSVD(1) }, ++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1072, 0xff), /* Telit FN990 (RNDIS) */ ++ .driver_info = NCTRL(2) | RSVD(3) }, ++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1073, 0xff), /* Telit FN990 (ECM) */ ++ .driver_info = NCTRL(0) | RSVD(1) }, + { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910), + .driver_info = NCTRL(0) | RSVD(1) | 
RSVD(3) }, + { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM), +diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c +index e442d400dbb2e..deb72fd7ec504 100644 +--- a/drivers/virtio/virtio_ring.c ++++ b/drivers/virtio/virtio_ring.c +@@ -263,7 +263,7 @@ size_t virtio_max_dma_size(struct virtio_device *vdev) + size_t max_segment_size = SIZE_MAX; + + if (vring_use_dma_api(vdev)) +- max_segment_size = dma_max_mapping_size(&vdev->dev); ++ max_segment_size = dma_max_mapping_size(vdev->dev.parent); + + return max_segment_size; + } +diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c +index 60378f3baaae1..34487bf1d7914 100644 +--- a/fs/fuse/dir.c ++++ b/fs/fuse/dir.c +@@ -1032,7 +1032,7 @@ int fuse_reverse_inval_entry(struct super_block *sb, u64 parent_nodeid, + if (!parent) + return -ENOENT; + +- inode_lock(parent); ++ inode_lock_nested(parent, I_MUTEX_PARENT); + if (!S_ISDIR(parent->i_mode)) + goto unlock; + +diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c +index 3283cc2a4e42c..a48fcd4180c74 100644 +--- a/fs/nfsd/nfs4state.c ++++ b/fs/nfsd/nfs4state.c +@@ -1041,6 +1041,11 @@ hash_delegation_locked(struct nfs4_delegation *dp, struct nfs4_file *fp) + return 0; + } + ++static bool delegation_hashed(struct nfs4_delegation *dp) ++{ ++ return !(list_empty(&dp->dl_perfile)); ++} ++ + static bool + unhash_delegation_locked(struct nfs4_delegation *dp) + { +@@ -1048,7 +1053,7 @@ unhash_delegation_locked(struct nfs4_delegation *dp) + + lockdep_assert_held(&state_lock); + +- if (list_empty(&dp->dl_perfile)) ++ if (!delegation_hashed(dp)) + return false; + + dp->dl_stid.sc_type = NFS4_CLOSED_DELEG_STID; +@@ -4406,7 +4411,7 @@ static void nfsd4_cb_recall_prepare(struct nfsd4_callback *cb) + * queued for a lease break. Don't queue it again. + */ + spin_lock(&state_lock); +- if (dp->dl_time == 0) { ++ if (delegation_hashed(dp) && dp->dl_time == 0) { + dp->dl_time = get_seconds(); + list_add_tail(&dp->dl_recall_lru, &nn->del_recall_lru); + } +diff --git a/fs/overlayfs/dir.c b/fs/overlayfs/dir.c +index 876de87f604cd..8c89eaea1583d 100644 +--- a/fs/overlayfs/dir.c ++++ b/fs/overlayfs/dir.c +@@ -113,8 +113,7 @@ kill_whiteout: + goto out; + } + +-static int ovl_mkdir_real(struct inode *dir, struct dentry **newdentry, +- umode_t mode) ++int ovl_mkdir_real(struct inode *dir, struct dentry **newdentry, umode_t mode) + { + int err; + struct dentry *d, *dentry = *newdentry; +diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h +index 6934bcf030f0b..a8e9da5f01eb5 100644 +--- a/fs/overlayfs/overlayfs.h ++++ b/fs/overlayfs/overlayfs.h +@@ -409,6 +409,7 @@ struct ovl_cattr { + + #define OVL_CATTR(m) (&(struct ovl_cattr) { .mode = (m) }) + ++int ovl_mkdir_real(struct inode *dir, struct dentry **newdentry, umode_t mode); + struct dentry *ovl_create_real(struct inode *dir, struct dentry *newdentry, + struct ovl_cattr *attr); + int ovl_cleanup(struct inode *dir, struct dentry *dentry); +diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c +index f036d7544d4a6..f5cf0938f298d 100644 +--- a/fs/overlayfs/super.c ++++ b/fs/overlayfs/super.c +@@ -650,10 +650,14 @@ retry: + goto retry; + } + +- work = ovl_create_real(dir, work, OVL_CATTR(attr.ia_mode)); +- err = PTR_ERR(work); +- if (IS_ERR(work)) +- goto out_err; ++ err = ovl_mkdir_real(dir, &work, attr.ia_mode); ++ if (err) ++ goto out_dput; ++ ++ /* Weird filesystem returning with hashed negative (kernfs)? */ ++ err = -EINVAL; ++ if (d_really_is_negative(work)) ++ goto out_dput; + + /* + * Try to remove POSIX ACL xattrs from workdir. 
We are good if: +diff --git a/include/net/tc_act/tc_tunnel_key.h b/include/net/tc_act/tc_tunnel_key.h +index 0689d9bcdf841..f6a0f09ccc5f9 100644 +--- a/include/net/tc_act/tc_tunnel_key.h ++++ b/include/net/tc_act/tc_tunnel_key.h +@@ -52,7 +52,10 @@ static inline struct ip_tunnel_info *tcf_tunnel_info(const struct tc_action *a) + { + #ifdef CONFIG_NET_CLS_ACT + struct tcf_tunnel_key *t = to_tunnel_key(a); +- struct tcf_tunnel_key_params *params = rtnl_dereference(t->params); ++ struct tcf_tunnel_key_params *params; ++ ++ params = rcu_dereference_protected(t->params, ++ lockdep_is_held(&a->tcfa_lock)); + + return ¶ms->tcft_enc_metadata->u.tun_info; + #else +@@ -69,7 +72,7 @@ tcf_tunnel_info_copy(const struct tc_action *a) + if (tun) { + size_t tun_size = sizeof(*tun) + tun->options_len; + struct ip_tunnel_info *tun_copy = kmemdup(tun, tun_size, +- GFP_KERNEL); ++ GFP_ATOMIC); + + return tun_copy; + } +diff --git a/kernel/audit.c b/kernel/audit.c +index 05ae208ad4423..d67fce9e3f8b8 100644 +--- a/kernel/audit.c ++++ b/kernel/audit.c +@@ -712,7 +712,7 @@ static int kauditd_send_queue(struct sock *sk, u32 portid, + { + int rc = 0; + struct sk_buff *skb; +- static unsigned int failed = 0; ++ unsigned int failed = 0; + + /* NOTE: kauditd_thread takes care of all our locking, we just use + * the netlink info passed to us (e.g. sk and portid) */ +@@ -729,32 +729,30 @@ static int kauditd_send_queue(struct sock *sk, u32 portid, + continue; + } + ++retry: + /* grab an extra skb reference in case of error */ + skb_get(skb); + rc = netlink_unicast(sk, skb, portid, 0); + if (rc < 0) { +- /* fatal failure for our queue flush attempt? */ ++ /* send failed - try a few times unless fatal error */ + if (++failed >= retry_limit || + rc == -ECONNREFUSED || rc == -EPERM) { +- /* yes - error processing for the queue */ + sk = NULL; + if (err_hook) + (*err_hook)(skb); +- if (!skb_hook) +- goto out; +- /* keep processing with the skb_hook */ ++ if (rc == -EAGAIN) ++ rc = 0; ++ /* continue to drain the queue */ + continue; + } else +- /* no - requeue to preserve ordering */ +- skb_queue_head(queue, skb); ++ goto retry; + } else { +- /* it worked - drop the extra reference and continue */ ++ /* skb sent - drop the extra reference and continue */ + consume_skb(skb); + failed = 0; + } + } + +-out: + return (rc >= 0 ? 0 : rc); + } + +@@ -1557,7 +1555,8 @@ static int __net_init audit_net_init(struct net *net) + audit_panic("cannot initialize netlink socket in namespace"); + return -ENOMEM; + } +- aunet->sk->sk_sndtimeo = MAX_SCHEDULE_TIMEOUT; ++ /* limit the timeout in case auditd is blocked/stopped */ ++ aunet->sk->sk_sndtimeo = HZ / 10; + + return 0; + } +diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c +index 11ae2747701b5..7777c35e0a171 100644 +--- a/kernel/rcu/tree.c ++++ b/kernel/rcu/tree.c +@@ -1602,7 +1602,7 @@ static void rcu_gp_fqs(bool first_time) + struct rcu_node *rnp = rcu_get_root(); + + WRITE_ONCE(rcu_state.gp_activity, jiffies); +- rcu_state.n_force_qs++; ++ WRITE_ONCE(rcu_state.n_force_qs, rcu_state.n_force_qs + 1); + if (first_time) { + /* Collect dyntick-idle snapshots. */ + force_qs_rnp(dyntick_save_progress_counter); +@@ -2207,7 +2207,7 @@ static void rcu_do_batch(struct rcu_data *rdp) + /* Reset ->qlen_last_fqs_check trigger if enough CBs have drained. 
*/ + if (count == 0 && rdp->qlen_last_fqs_check != 0) { + rdp->qlen_last_fqs_check = 0; +- rdp->n_force_qs_snap = rcu_state.n_force_qs; ++ rdp->n_force_qs_snap = READ_ONCE(rcu_state.n_force_qs); + } else if (count < rdp->qlen_last_fqs_check - qhimark) + rdp->qlen_last_fqs_check = count; + +@@ -2535,10 +2535,10 @@ static void __call_rcu_core(struct rcu_data *rdp, struct rcu_head *head, + } else { + /* Give the grace period a kick. */ + rdp->blimit = DEFAULT_MAX_RCU_BLIMIT; +- if (rcu_state.n_force_qs == rdp->n_force_qs_snap && ++ if (READ_ONCE(rcu_state.n_force_qs) == rdp->n_force_qs_snap && + rcu_segcblist_first_pend_cb(&rdp->cblist) != head) + rcu_force_quiescent_state(); +- rdp->n_force_qs_snap = rcu_state.n_force_qs; ++ rdp->n_force_qs_snap = READ_ONCE(rcu_state.n_force_qs); + rdp->qlen_last_fqs_check = rcu_segcblist_n_cbs(&rdp->cblist); + } + } +@@ -3029,7 +3029,7 @@ int rcutree_prepare_cpu(unsigned int cpu) + /* Set up local state, ensuring consistent view of global state. */ + raw_spin_lock_irqsave_rcu_node(rnp, flags); + rdp->qlen_last_fqs_check = 0; +- rdp->n_force_qs_snap = rcu_state.n_force_qs; ++ rdp->n_force_qs_snap = READ_ONCE(rcu_state.n_force_qs); + rdp->blimit = blimit; + if (rcu_segcblist_empty(&rdp->cblist) && /* No early-boot CBs? */ + !rcu_segcblist_is_offloaded(&rdp->cblist)) +diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c +index 4fc2af4367a7b..36ed8bad3909e 100644 +--- a/kernel/time/timekeeping.c ++++ b/kernel/time/timekeeping.c +@@ -1236,8 +1236,7 @@ int do_settimeofday64(const struct timespec64 *ts) + timekeeping_forward_now(tk); + + xt = tk_xtime(tk); +- ts_delta.tv_sec = ts->tv_sec - xt.tv_sec; +- ts_delta.tv_nsec = ts->tv_nsec - xt.tv_nsec; ++ ts_delta = timespec64_sub(*ts, xt); + + if (timespec64_compare(&tk->wall_to_monotonic, &ts_delta) > 0) { + ret = -EINVAL; +diff --git a/net/core/skbuff.c b/net/core/skbuff.c +index 7dba091bc8617..ac083685214e0 100644 +--- a/net/core/skbuff.c ++++ b/net/core/skbuff.c +@@ -768,7 +768,7 @@ void skb_dump(const char *level, const struct sk_buff *skb, bool full_pkt) + ntohs(skb->protocol), skb->pkt_type, skb->skb_iif); + + if (dev) +- printk("%sdev name=%s feat=0x%pNF\n", ++ printk("%sdev name=%s feat=%pNF\n", + level, dev->name, &dev->features); + if (sk) + printk("%ssk family=%hu type=%u proto=%u\n", +diff --git a/net/ipv4/inet_diag.c b/net/ipv4/inet_diag.c +index 4f71aca156662..f8f79672cc5f3 100644 +--- a/net/ipv4/inet_diag.c ++++ b/net/ipv4/inet_diag.c +@@ -200,6 +200,7 @@ int inet_sk_diag_fill(struct sock *sk, struct inet_connection_sock *icsk, + r->idiag_state = sk->sk_state; + r->idiag_timer = 0; + r->idiag_retrans = 0; ++ r->idiag_expires = 0; + + if (inet_diag_msg_attrs_fill(sk, skb, r, ext, user_ns, net_admin)) + goto errout; +@@ -240,20 +241,17 @@ int inet_sk_diag_fill(struct sock *sk, struct inet_connection_sock *icsk, + r->idiag_timer = 1; + r->idiag_retrans = icsk->icsk_retransmits; + r->idiag_expires = +- jiffies_to_msecs(icsk->icsk_timeout - jiffies); ++ jiffies_delta_to_msecs(icsk->icsk_timeout - jiffies); + } else if (icsk->icsk_pending == ICSK_TIME_PROBE0) { + r->idiag_timer = 4; + r->idiag_retrans = icsk->icsk_probes_out; + r->idiag_expires = +- jiffies_to_msecs(icsk->icsk_timeout - jiffies); ++ jiffies_delta_to_msecs(icsk->icsk_timeout - jiffies); + } else if (timer_pending(&sk->sk_timer)) { + r->idiag_timer = 2; + r->idiag_retrans = icsk->icsk_probes_out; + r->idiag_expires = +- jiffies_to_msecs(sk->sk_timer.expires - jiffies); +- } else { +- r->idiag_timer = 0; +- r->idiag_expires = 0; 
++ jiffies_delta_to_msecs(sk->sk_timer.expires - jiffies); + } + + if ((ext & (1 << (INET_DIAG_INFO - 1))) && handler->idiag_info_size) { +@@ -338,16 +336,13 @@ static int inet_twsk_diag_fill(struct sock *sk, + r = nlmsg_data(nlh); + BUG_ON(tw->tw_state != TCP_TIME_WAIT); + +- tmo = tw->tw_timer.expires - jiffies; +- if (tmo < 0) +- tmo = 0; +- + inet_diag_msg_common_fill(r, sk); + r->idiag_retrans = 0; + + r->idiag_state = tw->tw_substate; + r->idiag_timer = 3; +- r->idiag_expires = jiffies_to_msecs(tmo); ++ tmo = tw->tw_timer.expires - jiffies; ++ r->idiag_expires = jiffies_delta_to_msecs(tmo); + r->idiag_rqueue = 0; + r->idiag_wqueue = 0; + r->idiag_uid = 0; +@@ -381,7 +376,7 @@ static int inet_req_diag_fill(struct sock *sk, struct sk_buff *skb, + offsetof(struct sock, sk_cookie)); + + tmo = inet_reqsk(sk)->rsk_timer.expires - jiffies; +- r->idiag_expires = (tmo >= 0) ? jiffies_to_msecs(tmo) : 0; ++ r->idiag_expires = jiffies_delta_to_msecs(tmo); + r->idiag_rqueue = 0; + r->idiag_wqueue = 0; + r->idiag_uid = 0; +diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c +index 7f9cae4c49e7e..16e75a996b749 100644 +--- a/net/ipv6/sit.c ++++ b/net/ipv6/sit.c +@@ -1876,7 +1876,6 @@ static int __net_init sit_init_net(struct net *net) + return 0; + + err_reg_dev: +- ipip6_dev_free(sitn->fb_tunnel_dev); + free_netdev(sitn->fb_tunnel_dev); + err_alloc_dev: + return err; +diff --git a/net/mac80211/agg-rx.c b/net/mac80211/agg-rx.c +index 4d1c335e06e57..49ec9bfb6c8e6 100644 +--- a/net/mac80211/agg-rx.c ++++ b/net/mac80211/agg-rx.c +@@ -9,7 +9,7 @@ + * Copyright 2007, Michael Wu + * Copyright 2007-2010, Intel Corporation + * Copyright(c) 2015-2017 Intel Deutschland GmbH +- * Copyright (C) 2018 Intel Corporation ++ * Copyright (C) 2018-2021 Intel Corporation + */ + + /** +@@ -191,7 +191,8 @@ static void ieee80211_add_addbaext(struct ieee80211_sub_if_data *sdata, + sband = ieee80211_get_sband(sdata); + if (!sband) + return; +- he_cap = ieee80211_get_he_iftype_cap(sband, sdata->vif.type); ++ he_cap = ieee80211_get_he_iftype_cap(sband, ++ ieee80211_vif_type_p2p(&sdata->vif)); + if (!he_cap) + return; + +@@ -292,7 +293,8 @@ void ___ieee80211_start_rx_ba_session(struct sta_info *sta, + goto end; + } + +- if (!sta->sta.ht_cap.ht_supported) { ++ if (!sta->sta.ht_cap.ht_supported && ++ sta->sdata->vif.bss_conf.chandef.chan->band != NL80211_BAND_6GHZ) { + ht_dbg(sta->sdata, + "STA %pM erroneously requests BA session on tid %d w/o QoS\n", + sta->sta.addr, tid); +diff --git a/net/mac80211/agg-tx.c b/net/mac80211/agg-tx.c +index b11883d268759..f140c2b94b2c6 100644 +--- a/net/mac80211/agg-tx.c ++++ b/net/mac80211/agg-tx.c +@@ -9,7 +9,7 @@ + * Copyright 2007, Michael Wu + * Copyright 2007-2010, Intel Corporation + * Copyright(c) 2015-2017 Intel Deutschland GmbH +- * Copyright (C) 2018 - 2019 Intel Corporation ++ * Copyright (C) 2018 - 2021 Intel Corporation + */ + + #include +@@ -106,7 +106,7 @@ static void ieee80211_send_addba_request(struct ieee80211_sub_if_data *sdata, + mgmt->u.action.u.addba_req.start_seq_num = + cpu_to_le16(start_seq_num << 4); + +- ieee80211_tx_skb(sdata, skb); ++ ieee80211_tx_skb_tid(sdata, skb, tid); + } + + void ieee80211_send_bar(struct ieee80211_vif *vif, u8 *ra, u16 tid, u16 ssn) +@@ -213,6 +213,8 @@ ieee80211_agg_start_txq(struct sta_info *sta, int tid, bool enable) + struct ieee80211_txq *txq = sta->sta.txq[tid]; + struct txq_info *txqi; + ++ lockdep_assert_held(&sta->ampdu_mlme.mtx); ++ + if (!txq) + return; + +@@ -290,7 +292,6 @@ static void ieee80211_remove_tid_tx(struct sta_info *sta, int 
tid) + ieee80211_assign_tid_tx(sta, tid, NULL); + + ieee80211_agg_splice_finish(sta->sdata, tid); +- ieee80211_agg_start_txq(sta, tid, false); + + kfree_rcu(tid_tx, rcu_head); + } +@@ -448,6 +449,42 @@ static void sta_addba_resp_timer_expired(struct timer_list *t) + ieee80211_stop_tx_ba_session(&sta->sta, tid); + } + ++static void ieee80211_send_addba_with_timeout(struct sta_info *sta, ++ struct tid_ampdu_tx *tid_tx) ++{ ++ struct ieee80211_sub_if_data *sdata = sta->sdata; ++ struct ieee80211_local *local = sta->local; ++ u8 tid = tid_tx->tid; ++ u16 buf_size; ++ ++ /* activate the timer for the recipient's addBA response */ ++ mod_timer(&tid_tx->addba_resp_timer, jiffies + ADDBA_RESP_INTERVAL); ++ ht_dbg(sdata, "activated addBA response timer on %pM tid %d\n", ++ sta->sta.addr, tid); ++ ++ spin_lock_bh(&sta->lock); ++ sta->ampdu_mlme.last_addba_req_time[tid] = jiffies; ++ sta->ampdu_mlme.addba_req_num[tid]++; ++ spin_unlock_bh(&sta->lock); ++ ++ if (sta->sta.he_cap.has_he) { ++ buf_size = local->hw.max_tx_aggregation_subframes; ++ } else { ++ /* ++ * We really should use what the driver told us it will ++ * transmit as the maximum, but certain APs (e.g. the ++ * LinkSys WRT120N with FW v1.0.07 build 002 Jun 18 2012) ++ * will crash when we use a lower number. ++ */ ++ buf_size = IEEE80211_MAX_AMPDU_BUF_HT; ++ } ++ ++ /* send AddBA request */ ++ ieee80211_send_addba_request(sdata, sta->sta.addr, tid, ++ tid_tx->dialog_token, tid_tx->ssn, ++ buf_size, tid_tx->timeout); ++} ++ + void ieee80211_tx_ba_session_handle_start(struct sta_info *sta, int tid) + { + struct tid_ampdu_tx *tid_tx; +@@ -462,7 +499,6 @@ void ieee80211_tx_ba_session_handle_start(struct sta_info *sta, int tid) + .timeout = 0, + }; + int ret; +- u16 buf_size; + + tid_tx = rcu_dereference_protected_tid_tx(sta, tid); + +@@ -485,6 +521,7 @@ void ieee80211_tx_ba_session_handle_start(struct sta_info *sta, int tid) + + params.ssn = sta->tid_seq[tid] >> 4; + ret = drv_ampdu_action(local, sdata, ¶ms); ++ tid_tx->ssn = params.ssn; + if (ret) { + ht_dbg(sdata, + "BA request denied - HW unavailable for %pM tid %d\n", +@@ -501,32 +538,7 @@ void ieee80211_tx_ba_session_handle_start(struct sta_info *sta, int tid) + return; + } + +- /* activate the timer for the recipient's addBA response */ +- mod_timer(&tid_tx->addba_resp_timer, jiffies + ADDBA_RESP_INTERVAL); +- ht_dbg(sdata, "activated addBA response timer on %pM tid %d\n", +- sta->sta.addr, tid); +- +- spin_lock_bh(&sta->lock); +- sta->ampdu_mlme.last_addba_req_time[tid] = jiffies; +- sta->ampdu_mlme.addba_req_num[tid]++; +- spin_unlock_bh(&sta->lock); +- +- if (sta->sta.he_cap.has_he) { +- buf_size = local->hw.max_tx_aggregation_subframes; +- } else { +- /* +- * We really should use what the driver told us it will +- * transmit as the maximum, but certain APs (e.g. the +- * LinkSys WRT120N with FW v1.0.07 build 002 Jun 18 2012) +- * will crash when we use a lower number. 
+- */ +- buf_size = IEEE80211_MAX_AMPDU_BUF_HT; +- } +- +- /* send AddBA request */ +- ieee80211_send_addba_request(sdata, sta->sta.addr, tid, +- tid_tx->dialog_token, params.ssn, +- buf_size, tid_tx->timeout); ++ ieee80211_send_addba_with_timeout(sta, tid_tx); + } + + /* +@@ -571,7 +583,8 @@ int ieee80211_start_tx_ba_session(struct ieee80211_sta *pubsta, u16 tid, + "Requested to start BA session on reserved tid=%d", tid)) + return -EINVAL; + +- if (!pubsta->ht_cap.ht_supported) ++ if (!pubsta->ht_cap.ht_supported && ++ sta->sdata->vif.bss_conf.chandef.chan->band != NL80211_BAND_6GHZ) + return -EINVAL; + + if (WARN_ON_ONCE(!local->ops->ampdu_action)) +@@ -860,6 +873,7 @@ void ieee80211_stop_tx_ba_cb(struct sta_info *sta, int tid, + { + struct ieee80211_sub_if_data *sdata = sta->sdata; + bool send_delba = false; ++ bool start_txq = false; + + ht_dbg(sdata, "Stopping Tx BA session for %pM tid %d\n", + sta->sta.addr, tid); +@@ -877,10 +891,14 @@ void ieee80211_stop_tx_ba_cb(struct sta_info *sta, int tid, + send_delba = true; + + ieee80211_remove_tid_tx(sta, tid); ++ start_txq = true; + + unlock_sta: + spin_unlock_bh(&sta->lock); + ++ if (start_txq) ++ ieee80211_agg_start_txq(sta, tid, false); ++ + if (send_delba) + ieee80211_send_delba(sdata, sta->sta.addr, tid, + WLAN_BACK_INITIATOR, WLAN_REASON_QSTA_NOT_USE); +diff --git a/net/mac80211/driver-ops.h b/net/mac80211/driver-ops.h +index 2c9b3eb8b6525..f4c7e0af896b1 100644 +--- a/net/mac80211/driver-ops.h ++++ b/net/mac80211/driver-ops.h +@@ -1202,8 +1202,11 @@ static inline void drv_wake_tx_queue(struct ieee80211_local *local, + { + struct ieee80211_sub_if_data *sdata = vif_to_sdata(txq->txq.vif); + +- if (local->in_reconfig) ++ /* In reconfig don't transmit now, but mark for waking later */ ++ if (local->in_reconfig) { ++ set_bit(IEEE80211_TXQ_STOP_NETIF_TX, &txq->flags); + return; ++ } + + if (!check_sdata_in_driver(sdata)) + return; +diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c +index ccaf2389ccc1d..5c727af01143f 100644 +--- a/net/mac80211/mlme.c ++++ b/net/mac80211/mlme.c +@@ -2418,11 +2418,18 @@ static void ieee80211_sta_tx_wmm_ac_notify(struct ieee80211_sub_if_data *sdata, + u16 tx_time) + { + struct ieee80211_if_managed *ifmgd = &sdata->u.mgd; +- u16 tid = ieee80211_get_tid(hdr); +- int ac = ieee80211_ac_from_tid(tid); +- struct ieee80211_sta_tx_tspec *tx_tspec = &ifmgd->tx_tspec[ac]; ++ u16 tid; ++ int ac; ++ struct ieee80211_sta_tx_tspec *tx_tspec; + unsigned long now = jiffies; + ++ if (!ieee80211_is_data_qos(hdr->frame_control)) ++ return; ++ ++ tid = ieee80211_get_tid(hdr); ++ ac = ieee80211_ac_from_tid(tid); ++ tx_tspec = &ifmgd->tx_tspec[ac]; ++ + if (likely(!tx_tspec->admitted_time)) + return; + +diff --git a/net/mac80211/sta_info.h b/net/mac80211/sta_info.h +index 2eb73be9b9865..be0df78d4a799 100644 +--- a/net/mac80211/sta_info.h ++++ b/net/mac80211/sta_info.h +@@ -180,6 +180,7 @@ struct tid_ampdu_tx { + u8 stop_initiator; + bool tx_stop; + u16 buf_size; ++ u16 ssn; + + u16 failed_bar_ssn; + bool bar_pending; +diff --git a/net/mac80211/util.c b/net/mac80211/util.c +index decd46b383938..c1c117fdf3184 100644 +--- a/net/mac80211/util.c ++++ b/net/mac80211/util.c +@@ -1227,6 +1227,8 @@ _ieee802_11_parse_elems_crc(const u8 *start, size_t len, bool action, + elems->max_idle_period_ie = (void *)pos; + break; + case WLAN_EID_EXTENSION: ++ if (!elen) ++ break; + if (pos[0] == WLAN_EID_EXT_HE_MU_EDCA && + elen >= (sizeof(*elems->mu_edca_param_set) + 1)) { + elems->mu_edca_param_set = (void *)&pos[1]; +diff --git 
a/net/packet/af_packet.c b/net/packet/af_packet.c +index 0ffbf3d17911a..6062bd5bf132b 100644 +--- a/net/packet/af_packet.c ++++ b/net/packet/af_packet.c +@@ -4453,9 +4453,10 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u, + } + + out_free_pg_vec: +- bitmap_free(rx_owner_map); +- if (pg_vec) ++ if (pg_vec) { ++ bitmap_free(rx_owner_map); + free_pg_vec(pg_vec, order, req->tp_block_nr); ++ } + out: + return err; + } +diff --git a/net/rds/connection.c b/net/rds/connection.c +index c85bd6340eaa7..92ff40e7a66cf 100644 +--- a/net/rds/connection.c ++++ b/net/rds/connection.c +@@ -253,6 +253,7 @@ static struct rds_connection *__rds_conn_create(struct net *net, + * should end up here, but if it + * does, reset/destroy the connection. + */ ++ kfree(conn->c_path); + kmem_cache_free(rds_conn_slab, conn); + conn = ERR_PTR(-EOPNOTSUPP); + goto out; +diff --git a/net/sched/act_sample.c b/net/sched/act_sample.c +index 74450b0f69fc5..214f4efdd9920 100644 +--- a/net/sched/act_sample.c ++++ b/net/sched/act_sample.c +@@ -265,14 +265,12 @@ tcf_sample_get_group(const struct tc_action *a, + struct tcf_sample *s = to_sample(a); + struct psample_group *group; + +- spin_lock_bh(&s->tcf_lock); + group = rcu_dereference_protected(s->psample_group, + lockdep_is_held(&s->tcf_lock)); + if (group) { + psample_group_take(group); + *destructor = tcf_psample_group_put; + } +- spin_unlock_bh(&s->tcf_lock); + + return group; + } +diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c +index 7f20fd37e01e0..a4c61205462ac 100644 +--- a/net/sched/cls_api.c ++++ b/net/sched/cls_api.c +@@ -3436,7 +3436,7 @@ static void tcf_sample_get_group(struct flow_action_entry *entry, + int tc_setup_flow_action(struct flow_action *flow_action, + const struct tcf_exts *exts, bool rtnl_held) + { +- const struct tc_action *act; ++ struct tc_action *act; + int i, j, k, err = 0; + + if (!exts) +@@ -3450,6 +3450,7 @@ int tc_setup_flow_action(struct flow_action *flow_action, + struct flow_action_entry *entry; + + entry = &flow_action->entries[j]; ++ spin_lock_bh(&act->tcfa_lock); + if (is_tcf_gact_ok(act)) { + entry->id = FLOW_ACTION_ACCEPT; + } else if (is_tcf_gact_shot(act)) { +@@ -3490,13 +3491,13 @@ int tc_setup_flow_action(struct flow_action *flow_action, + break; + default: + err = -EOPNOTSUPP; +- goto err_out; ++ goto err_out_locked; + } + } else if (is_tcf_tunnel_set(act)) { + entry->id = FLOW_ACTION_TUNNEL_ENCAP; + err = tcf_tunnel_encap_get_tunnel(entry, act); + if (err) +- goto err_out; ++ goto err_out_locked; + } else if (is_tcf_tunnel_release(act)) { + entry->id = FLOW_ACTION_TUNNEL_DECAP; + } else if (is_tcf_pedit(act)) { +@@ -3510,7 +3511,7 @@ int tc_setup_flow_action(struct flow_action *flow_action, + break; + default: + err = -EOPNOTSUPP; +- goto err_out; ++ goto err_out_locked; + } + entry->mangle.htype = tcf_pedit_htype(act, k); + entry->mangle.mask = tcf_pedit_mask(act, k); +@@ -3561,15 +3562,17 @@ int tc_setup_flow_action(struct flow_action *flow_action, + entry->mpls_mangle.ttl = tcf_mpls_ttl(act); + break; + default: +- goto err_out; ++ err = -EOPNOTSUPP; ++ goto err_out_locked; + } + } else if (is_tcf_skbedit_ptype(act)) { + entry->id = FLOW_ACTION_PTYPE; + entry->ptype = tcf_skbedit_ptype(act); + } else { + err = -EOPNOTSUPP; +- goto err_out; ++ goto err_out_locked; + } ++ spin_unlock_bh(&act->tcfa_lock); + + if (!is_tcf_pedit(act)) + j++; +@@ -3583,6 +3586,9 @@ err_out: + tc_cleanup_flow_action(flow_action); + + return err; ++err_out_locked: ++ spin_unlock_bh(&act->tcfa_lock); ++ goto err_out; + } + 
EXPORT_SYMBOL(tc_setup_flow_action); + +diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c +index e8eebe40e0ae9..0eb4d4a568f77 100644 +--- a/net/sched/sch_cake.c ++++ b/net/sched/sch_cake.c +@@ -2724,7 +2724,7 @@ static int cake_init(struct Qdisc *sch, struct nlattr *opt, + q->tins = kvcalloc(CAKE_MAX_TINS, sizeof(struct cake_tin_data), + GFP_KERNEL); + if (!q->tins) +- goto nomem; ++ return -ENOMEM; + + for (i = 0; i < CAKE_MAX_TINS; i++) { + struct cake_tin_data *b = q->tins + i; +@@ -2754,10 +2754,6 @@ static int cake_init(struct Qdisc *sch, struct nlattr *opt, + q->min_netlen = ~0; + q->min_adjlen = ~0; + return 0; +- +-nomem: +- cake_destroy(sch); +- return -ENOMEM; + } + + static int cake_dump(struct Qdisc *sch, struct sk_buff *skb) +diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c +index fa3b20e5f4608..06684ac346abd 100644 +--- a/net/smc/af_smc.c ++++ b/net/smc/af_smc.c +@@ -183,7 +183,9 @@ static int smc_release(struct socket *sock) + /* cleanup for a dangling non-blocking connect */ + if (smc->connect_nonblock && sk->sk_state == SMC_INIT) + tcp_abort(smc->clcsock->sk, ECONNABORTED); +- flush_work(&smc->connect_work); ++ ++ if (cancel_work_sync(&smc->connect_work)) ++ sock_put(&smc->sk); /* sock_hold in smc_connect for passive closing */ + + if (sk->sk_state == SMC_LISTEN) + /* smc_close_non_accepted() is called and acquires +diff --git a/scripts/recordmcount.pl b/scripts/recordmcount.pl +index f459ae883a0a6..a4ca050815aba 100755 +--- a/scripts/recordmcount.pl ++++ b/scripts/recordmcount.pl +@@ -252,7 +252,7 @@ if ($arch eq "x86_64") { + + } elsif ($arch eq "s390" && $bits == 64) { + if ($cc =~ /-DCC_USING_HOTPATCH/) { +- $mcount_regex = "^\\s*([0-9a-fA-F]+):\\s*c0 04 00 00 00 00\\s*brcl\\s*0,[0-9a-f]+ <([^\+]*)>\$"; ++ $mcount_regex = "^\\s*([0-9a-fA-F]+):\\s*c0 04 00 00 00 00\\s*(bcrl\\s*0,|jgnop\\s*)[0-9a-f]+ <([^\+]*)>\$"; + $mcount_adjust = 0; + } else { + $mcount_regex = "^\\s*([0-9a-fA-F]+):\\s*R_390_(PC|PLT)32DBL\\s+_mcount\\+0x2\$"; +diff --git a/tools/testing/selftests/kvm/kvm_create_max_vcpus.c b/tools/testing/selftests/kvm/kvm_create_max_vcpus.c +index 231d79e57774e..cfe75536d8a55 100644 +--- a/tools/testing/selftests/kvm/kvm_create_max_vcpus.c ++++ b/tools/testing/selftests/kvm/kvm_create_max_vcpus.c +@@ -12,6 +12,7 @@ + #include + #include + #include ++#include + + #include "test_util.h" + +@@ -43,10 +44,39 @@ int main(int argc, char *argv[]) + { + int kvm_max_vcpu_id = kvm_check_cap(KVM_CAP_MAX_VCPU_ID); + int kvm_max_vcpus = kvm_check_cap(KVM_CAP_MAX_VCPUS); ++ /* ++ * Number of file descriptors reqired, KVM_CAP_MAX_VCPUS for vCPU fds + ++ * an arbitrary number for everything else. ++ */ ++ int nr_fds_wanted = kvm_max_vcpus + 100; ++ struct rlimit rl; + + printf("KVM_CAP_MAX_VCPU_ID: %d\n", kvm_max_vcpu_id); + printf("KVM_CAP_MAX_VCPUS: %d\n", kvm_max_vcpus); + ++ /* ++ * Check that we're allowed to open nr_fds_wanted file descriptors and ++ * try raising the limits if needed. 
++ */ ++ TEST_ASSERT(!getrlimit(RLIMIT_NOFILE, &rl), "getrlimit() failed!"); ++ ++ if (rl.rlim_cur < nr_fds_wanted) { ++ rl.rlim_cur = nr_fds_wanted; ++ if (rl.rlim_max < nr_fds_wanted) { ++ int old_rlim_max = rl.rlim_max; ++ rl.rlim_max = nr_fds_wanted; ++ ++ int r = setrlimit(RLIMIT_NOFILE, &rl); ++ if (r < 0) { ++ printf("RLIMIT_NOFILE hard limit is too low (%d, wanted %d)\n", ++ old_rlim_max, nr_fds_wanted); ++ exit(KSFT_SKIP); ++ } ++ } else { ++ TEST_ASSERT(!setrlimit(RLIMIT_NOFILE, &rl), "setrlimit() failed!"); ++ } ++ } ++ + /* + * Upstream KVM prior to 4.8 does not support KVM_CAP_MAX_VCPU_ID. + * Userspace is supposed to use KVM_CAP_MAX_VCPUS as the maximum ID +diff --git a/tools/testing/selftests/net/fcnal-test.sh b/tools/testing/selftests/net/fcnal-test.sh +index 782a8da5d9500..157822331954d 100755 +--- a/tools/testing/selftests/net/fcnal-test.sh ++++ b/tools/testing/selftests/net/fcnal-test.sh +@@ -1491,8 +1491,9 @@ ipv4_addr_bind_vrf() + for a in ${NSA_IP} ${VRF_IP} + do + log_start ++ show_hint "Socket not bound to VRF, but address is in VRF" + run_cmd nettest -s -R -P icmp -l ${a} -b +- log_test_addr ${a} $? 0 "Raw socket bind to local address" ++ log_test_addr ${a} $? 1 "Raw socket bind to local address" + + log_start + run_cmd nettest -s -R -P icmp -l ${a} -d ${NSA_DEV} -b +@@ -1884,7 +1885,7 @@ ipv6_ping_vrf() + log_start + show_hint "Fails since VRF device does not support linklocal or multicast" + run_cmd ${ping6} -c1 -w1 ${a} +- log_test_addr ${a} $? 2 "ping out, VRF bind" ++ log_test_addr ${a} $? 1 "ping out, VRF bind" + done + + for a in ${NSB_IP6} ${NSB_LO_IP6} ${NSB_LINKIP6}%${NSA_DEV} ${MCAST}%${NSA_DEV} +@@ -2890,11 +2891,14 @@ ipv6_addr_bind_novrf() + run_cmd nettest -6 -s -l ${a} -d ${NSA_DEV} -t1 -b + log_test_addr ${a} $? 0 "TCP socket bind to local address after device bind" + ++ # Sadly, the kernel allows binding a socket to a device and then ++ # binding to an address not on the device. So this test passes ++ # when it really should not + a=${NSA_LO_IP6} + log_start +- show_hint "Should fail with 'Cannot assign requested address'" +- run_cmd nettest -6 -s -l ${a} -d ${NSA_DEV} -t1 -b +- log_test_addr ${a} $? 1 "TCP socket bind to out of scope local address" ++ show_hint "Tecnically should fail since address is not on device but kernel allows" ++ run_cmd nettest -6 -s -l ${a} -I ${NSA_DEV} -t1 -b ++ log_test_addr ${a} $? 0 "TCP socket bind to out of scope local address" + } + + ipv6_addr_bind_vrf() +@@ -2935,10 +2939,15 @@ ipv6_addr_bind_vrf() + run_cmd nettest -6 -s -l ${a} -d ${NSA_DEV} -t1 -b + log_test_addr ${a} $? 0 "TCP socket bind to local address with device bind" + ++ # Sadly, the kernel allows binding a socket to a device and then ++ # binding to an address not on the device. The only restriction ++ # is that the address is valid in the L3 domain. So this test ++ # passes when it really should not + a=${VRF_IP6} + log_start +- run_cmd nettest -6 -s -l ${a} -d ${NSA_DEV} -t1 -b +- log_test_addr ${a} $? 1 "TCP socket bind to VRF address with device bind" ++ show_hint "Tecnically should fail since address is not on device but kernel allows" ++ run_cmd nettest -6 -s -l ${a} -I ${NSA_DEV} -t1 -b ++ log_test_addr ${a} $? 
0 "TCP socket bind to VRF address with device bind" + + a=${NSA_LO_IP6} + log_start +diff --git a/tools/testing/selftests/net/forwarding/forwarding.config.sample b/tools/testing/selftests/net/forwarding/forwarding.config.sample +index e2adb533c8fcb..e71c61ee4cc67 100644 +--- a/tools/testing/selftests/net/forwarding/forwarding.config.sample ++++ b/tools/testing/selftests/net/forwarding/forwarding.config.sample +@@ -13,6 +13,8 @@ NETIFS[p5]=veth4 + NETIFS[p6]=veth5 + NETIFS[p7]=veth6 + NETIFS[p8]=veth7 ++NETIFS[p9]=veth8 ++NETIFS[p10]=veth9 + + ############################################################################## + # Defines