From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by finch.gentoo.org (Postfix) with ESMTPS id 32B86138359 for ; Wed, 14 Oct 2020 20:37:50 +0000 (UTC)
Received: from pigeon.gentoo.org (localhost [127.0.0.1]) by pigeon.gentoo.org (Postfix) with SMTP id 81C54E08ED; Wed, 14 Oct 2020 20:37:49 +0000 (UTC)
Received: from smtp.gentoo.org (smtp.gentoo.org [140.211.166.183]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by pigeon.gentoo.org (Postfix) with ESMTPS id 48266E08ED for ; Wed, 14 Oct 2020 20:37:49 +0000 (UTC)
Received: from oystercatcher.gentoo.org (unknown [IPv6:2a01:4f8:202:4333:225:90ff:fed9:fc84]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.gentoo.org (Postfix) with ESMTPS id C19DE335DC5 for ; Wed, 14 Oct 2020 20:37:47 +0000 (UTC)
Received: from localhost.localdomain (localhost [IPv6:::1]) by oystercatcher.gentoo.org (Postfix) with ESMTP id 7397338E for ; Wed, 14 Oct 2020 20:37:46 +0000 (UTC)
From: "Mike Pagano"
To: gentoo-commits@lists.gentoo.org
Content-Transfer-Encoding: 8bit
Content-type: text/plain; charset=UTF-8
Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano"
Message-ID: <1602707852.3798f3b4d9e1b6f339f75f3cc8970c66c28b24ab.mpagano@gentoo>
Subject: [gentoo-commits] proj/linux-patches:5.4 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1070_linux-5.4.71.patch
X-VCS-Directories: /
X-VCS-Committer: mpagano
X-VCS-Committer-Name: Mike Pagano
X-VCS-Revision: 3798f3b4d9e1b6f339f75f3cc8970c66c28b24ab
X-VCS-Branch: 5.4
Date: Wed, 14 Oct 2020 20:37:46 +0000 (UTC)
Precedence: bulk
List-Post:
List-Help:
List-Unsubscribe:
List-Subscribe:
List-Id: Gentoo Linux mail
X-BeenThere: gentoo-commits@lists.gentoo.org
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-Archives-Salt: a6d9b417-0b6a-4b0f-a9cd-c7fcba95ce7c
X-Archives-Hash: cba3ae6166a44040698b5d4683c2906f

commit: 3798f3b4d9e1b6f339f75f3cc8970c66c28b24ab
Author: Mike Pagano gentoo org>
AuthorDate: Wed Oct 14 20:37:32 2020 +0000
Commit: Mike Pagano gentoo org>
CommitDate: Wed Oct 14 20:37:32 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3798f3b4

Linux patch 5.4.71

Signed-off-by: Mike Pagano gentoo.org>

 0000_README | 4 +
 1070_linux-5.4.71.patch | 3471 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3475 insertions(+)

diff --git a/0000_README b/0000_README
index f195c0d..ed12598 100644
--- a/0000_README
+++ b/0000_README
@@ -323,6 +323,10 @@ Patch: 1069_linux-5.4.70.patch
 From: http://www.kernel.org
 Desc: Linux 5.4.70
 
+Patch: 1070_linux-5.4.71.patch
+From: http://www.kernel.org
+Desc: Linux 5.4.71
+
 Patch: 1500_XATTR_USER_PREFIX.patch
 From: https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1070_linux-5.4.71.patch b/1070_linux-5.4.71.patch new file mode 100644 index 0000000..62177d5 --- /dev/null +++ b/1070_linux-5.4.71.patch @@ -0,0 +1,3471 @@ +diff --git a/Makefile b/Makefile +index e409fd909560f..f342e64c8c1d1 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,7 +1,7 @@ + # SPDX-License-Identifier: GPL-2.0 + VERSION = 5 + PATCHLEVEL = 4 +-SUBLEVEL = 70 ++SUBLEVEL = 71 + EXTRAVERSION = + NAME = Kleptomaniac Octopus + +diff --git a/arch/arm64/boot/dts/altera/socfpga_stratix10_socdk.dts b/arch/arm64/boot/dts/altera/socfpga_stratix10_socdk.dts +index 66e4ffb4e929d..2c8c2b322c727 100644 +--- a/arch/arm64/boot/dts/altera/socfpga_stratix10_socdk.dts ++++ b/arch/arm64/boot/dts/altera/socfpga_stratix10_socdk.dts +@@ -155,6 +155,7 @@ + }; + + &qspi { ++ status = "okay"; + flash@0 { + #address-cells = <1>; + #size-cells = <1>; +diff --git a/drivers/base/dd.c b/drivers/base/dd.c +index 1d5dd37f3abe4..84e757860ebb9 100644 +--- a/drivers/base/dd.c ++++ b/drivers/base/dd.c +@@ -518,7 +518,8 @@ static int really_probe(struct device *dev, struct device_driver *drv) + drv->bus->name, __func__, drv->name, dev_name(dev)); + if (!list_empty(&dev->devres_head)) { + dev_crit(dev, "Resources present before probing\n"); +- return -EBUSY; ++ ret = -EBUSY; ++ goto done; + } + + re_probe: +@@ -639,7 +640,7 @@ pinctrl_bind_failed: + ret = 0; + done: + atomic_dec(&probe_count); +- wake_up(&probe_waitqueue); ++ wake_up_all(&probe_waitqueue); + return ret; + } + +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c +index f15ded1ce9057..c6a1dfe79e809 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c +@@ -967,6 +967,7 @@ static int amdgpu_ttm_tt_pin_userptr(struct ttm_tt *ttm) + + release_sg: + kfree(ttm->sg); ++ ttm->sg = NULL; + return r; + } + +diff --git a/drivers/gpu/drm/nouveau/nouveau_mem.c b/drivers/gpu/drm/nouveau/nouveau_mem.c +index c002f89685073..9682f30ab6f68 100644 +--- a/drivers/gpu/drm/nouveau/nouveau_mem.c ++++ b/drivers/gpu/drm/nouveau/nouveau_mem.c +@@ -176,6 +176,8 @@ void + nouveau_mem_del(struct ttm_mem_reg *reg) + { + struct nouveau_mem *mem = nouveau_mem(reg); ++ if (!mem) ++ return; + nouveau_mem_fini(mem); + kfree(reg->mm_node); + reg->mm_node = NULL; +diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c +index 9a80c3c7e8af2..c40eef4e7a985 100644 +--- a/drivers/i2c/busses/i2c-i801.c ++++ b/drivers/i2c/busses/i2c-i801.c +@@ -1891,6 +1891,7 @@ static int i801_probe(struct pci_dev *dev, const struct pci_device_id *id) + + pci_set_drvdata(dev, priv); + ++ dev_pm_set_driver_flags(&dev->dev, DPM_FLAG_NEVER_SKIP); + pm_runtime_set_autosuspend_delay(&dev->dev, 1000); + pm_runtime_use_autosuspend(&dev->dev); + pm_runtime_put_autosuspend(&dev->dev); +diff --git a/drivers/i2c/busses/i2c-meson.c b/drivers/i2c/busses/i2c-meson.c +index 1e2647f9a2a72..6661481f125cd 100644 +--- a/drivers/i2c/busses/i2c-meson.c ++++ b/drivers/i2c/busses/i2c-meson.c +@@ -5,6 +5,7 @@ + * Copyright (C) 2014 Beniamino Galvani + */ + ++#include + #include + #include + #include +@@ -32,12 +33,17 @@ + #define REG_CTRL_ACK_IGNORE BIT(1) + #define REG_CTRL_STATUS BIT(2) + #define REG_CTRL_ERROR BIT(3) +-#define REG_CTRL_CLKDIV_SHIFT 12 +-#define REG_CTRL_CLKDIV_MASK GENMASK(21, 12) +-#define REG_CTRL_CLKDIVEXT_SHIFT 28 +-#define REG_CTRL_CLKDIVEXT_MASK GENMASK(29, 28) ++#define REG_CTRL_CLKDIV GENMASK(21, 12) ++#define REG_CTRL_CLKDIVEXT GENMASK(29, 28) ++ ++#define REG_SLV_ADDR GENMASK(7, 0) 
++#define REG_SLV_SDA_FILTER GENMASK(10, 8) ++#define REG_SLV_SCL_FILTER GENMASK(13, 11) ++#define REG_SLV_SCL_LOW GENMASK(27, 16) ++#define REG_SLV_SCL_LOW_EN BIT(28) + + #define I2C_TIMEOUT_MS 500 ++#define FILTER_DELAY 15 + + enum { + TOKEN_END = 0, +@@ -132,19 +138,24 @@ static void meson_i2c_set_clk_div(struct meson_i2c *i2c, unsigned int freq) + unsigned long clk_rate = clk_get_rate(i2c->clk); + unsigned int div; + +- div = DIV_ROUND_UP(clk_rate, freq * i2c->data->div_factor); ++ div = DIV_ROUND_UP(clk_rate, freq); ++ div -= FILTER_DELAY; ++ div = DIV_ROUND_UP(div, i2c->data->div_factor); + + /* clock divider has 12 bits */ +- if (div >= (1 << 12)) { ++ if (div > GENMASK(11, 0)) { + dev_err(i2c->dev, "requested bus frequency too low\n"); +- div = (1 << 12) - 1; ++ div = GENMASK(11, 0); + } + +- meson_i2c_set_mask(i2c, REG_CTRL, REG_CTRL_CLKDIV_MASK, +- (div & GENMASK(9, 0)) << REG_CTRL_CLKDIV_SHIFT); ++ meson_i2c_set_mask(i2c, REG_CTRL, REG_CTRL_CLKDIV, ++ FIELD_PREP(REG_CTRL_CLKDIV, div & GENMASK(9, 0))); ++ ++ meson_i2c_set_mask(i2c, REG_CTRL, REG_CTRL_CLKDIVEXT, ++ FIELD_PREP(REG_CTRL_CLKDIVEXT, div >> 10)); + +- meson_i2c_set_mask(i2c, REG_CTRL, REG_CTRL_CLKDIVEXT_MASK, +- (div >> 10) << REG_CTRL_CLKDIVEXT_SHIFT); ++ /* Disable HIGH/LOW mode */ ++ meson_i2c_set_mask(i2c, REG_SLAVE_ADDR, REG_SLV_SCL_LOW_EN, 0); + + dev_dbg(i2c->dev, "%s: clk %lu, freq %u, div %u\n", __func__, + clk_rate, freq, div); +@@ -273,7 +284,10 @@ static void meson_i2c_do_start(struct meson_i2c *i2c, struct i2c_msg *msg) + token = (msg->flags & I2C_M_RD) ? TOKEN_SLAVE_ADDR_READ : + TOKEN_SLAVE_ADDR_WRITE; + +- writel(msg->addr << 1, i2c->regs + REG_SLAVE_ADDR); ++ ++ meson_i2c_set_mask(i2c, REG_SLAVE_ADDR, REG_SLV_ADDR, ++ FIELD_PREP(REG_SLV_ADDR, msg->addr << 1)); ++ + meson_i2c_add_token(i2c, TOKEN_START); + meson_i2c_add_token(i2c, token); + } +@@ -432,6 +446,10 @@ static int meson_i2c_probe(struct platform_device *pdev) + return ret; + } + ++ /* Disable filtering */ ++ meson_i2c_set_mask(i2c, REG_SLAVE_ADDR, ++ REG_SLV_SDA_FILTER | REG_SLV_SCL_FILTER, 0); ++ + meson_i2c_set_clk_div(i2c, timings.bus_freq_hz); + + return 0; +diff --git a/drivers/i2c/busses/i2c-owl.c b/drivers/i2c/busses/i2c-owl.c +index b6b5a495118b6..a567fd2b295e1 100644 +--- a/drivers/i2c/busses/i2c-owl.c ++++ b/drivers/i2c/busses/i2c-owl.c +@@ -179,6 +179,9 @@ static irqreturn_t owl_i2c_interrupt(int irq, void *_dev) + fifostat = readl(i2c_dev->base + OWL_I2C_REG_FIFOSTAT); + if (fifostat & OWL_I2C_FIFOSTAT_RNB) { + i2c_dev->err = -ENXIO; ++ /* Clear NACK error bit by writing "1" */ ++ owl_i2c_update_reg(i2c_dev->base + OWL_I2C_REG_FIFOSTAT, ++ OWL_I2C_FIFOSTAT_RNB, true); + goto stop; + } + +@@ -186,6 +189,9 @@ static irqreturn_t owl_i2c_interrupt(int irq, void *_dev) + stat = readl(i2c_dev->base + OWL_I2C_REG_STAT); + if (stat & OWL_I2C_STAT_BEB) { + i2c_dev->err = -EIO; ++ /* Clear BUS error bit by writing "1" */ ++ owl_i2c_update_reg(i2c_dev->base + OWL_I2C_REG_STAT, ++ OWL_I2C_STAT_BEB, true); + goto stop; + } + +diff --git a/drivers/input/misc/ati_remote2.c b/drivers/input/misc/ati_remote2.c +index 305f0160506a0..8a36d78fed63a 100644 +--- a/drivers/input/misc/ati_remote2.c ++++ b/drivers/input/misc/ati_remote2.c +@@ -68,7 +68,7 @@ static int ati_remote2_get_channel_mask(char *buffer, + { + pr_debug("%s()\n", __func__); + +- return sprintf(buffer, "0x%04x", *(unsigned int *)kp->arg); ++ return sprintf(buffer, "0x%04x\n", *(unsigned int *)kp->arg); + } + + static int ati_remote2_set_mode_mask(const char *val, +@@ -84,7 +84,7 @@ 
static int ati_remote2_get_mode_mask(char *buffer, + { + pr_debug("%s()\n", __func__); + +- return sprintf(buffer, "0x%02x", *(unsigned int *)kp->arg); ++ return sprintf(buffer, "0x%02x\n", *(unsigned int *)kp->arg); + } + + static unsigned int channel_mask = ATI_REMOTE2_MAX_CHANNEL_MASK; +diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c +index 2ffec65df3889..1147626f0d253 100644 +--- a/drivers/iommu/intel-iommu.c ++++ b/drivers/iommu/intel-iommu.c +@@ -2560,14 +2560,14 @@ static struct dmar_domain *dmar_insert_one_dev_info(struct intel_iommu *iommu, + } + + /* Setup the PASID entry for requests without PASID: */ +- spin_lock(&iommu->lock); ++ spin_lock_irqsave(&iommu->lock, flags); + if (hw_pass_through && domain_type_is_si(domain)) + ret = intel_pasid_setup_pass_through(iommu, domain, + dev, PASID_RID2PASID); + else + ret = intel_pasid_setup_second_level(iommu, domain, + dev, PASID_RID2PASID); +- spin_unlock(&iommu->lock); ++ spin_unlock_irqrestore(&iommu->lock, flags); + if (ret) { + dev_err(dev, "Setup RID2PASID failed\n"); + dmar_remove_one_dev_info(dev); +diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c +index 9c0ccb3744c28..81b8d5ede484e 100644 +--- a/drivers/mmc/core/queue.c ++++ b/drivers/mmc/core/queue.c +@@ -184,7 +184,7 @@ static void mmc_queue_setup_discard(struct request_queue *q, + q->limits.discard_granularity = card->pref_erase << 9; + /* granularity must not be greater than max. discard */ + if (card->pref_erase > max_discard) +- q->limits.discard_granularity = 0; ++ q->limits.discard_granularity = SECTOR_SIZE; + if (mmc_can_secure_erase_trim(card)) + blk_queue_flag_set(QUEUE_FLAG_SECERASE, q); + } +diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c +index 0d7a173f8e61c..6862c2ef24424 100644 +--- a/drivers/net/bonding/bond_main.c ++++ b/drivers/net/bonding/bond_main.c +@@ -1148,6 +1148,7 @@ static void bond_setup_by_slave(struct net_device *bond_dev, + + bond_dev->type = slave_dev->type; + bond_dev->hard_header_len = slave_dev->hard_header_len; ++ bond_dev->needed_headroom = slave_dev->needed_headroom; + bond_dev->addr_len = slave_dev->addr_len; + + memcpy(bond_dev->broadcast, slave_dev->broadcast, +diff --git a/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c b/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c +index d375e438d8054..4fa9d485e2096 100644 +--- a/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c ++++ b/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c +@@ -1222,7 +1222,7 @@ static int octeon_mgmt_open(struct net_device *netdev) + */ + if (netdev->phydev) { + netif_carrier_off(netdev); +- phy_start_aneg(netdev->phydev); ++ phy_start(netdev->phydev); + } + + netif_wake_queue(netdev); +@@ -1250,8 +1250,10 @@ static int octeon_mgmt_stop(struct net_device *netdev) + napi_disable(&p->napi); + netif_stop_queue(netdev); + +- if (netdev->phydev) ++ if (netdev->phydev) { ++ phy_stop(netdev->phydev); + phy_disconnect(netdev->phydev); ++ } + + netif_carrier_off(netdev); + +diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c +index 222ae76809aa1..cd95d6af8fc1b 100644 +--- a/drivers/net/ethernet/intel/iavf/iavf_main.c ++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c +@@ -3773,7 +3773,6 @@ err_dma: + return err; + } + +-#ifdef CONFIG_PM + /** + * iavf_suspend - Power management suspend routine + * @pdev: PCI device information struct +@@ -3781,11 +3780,10 @@ err_dma: + * + * Called when the system (VM) is entering sleep/suspend. 
+ **/ +-static int iavf_suspend(struct pci_dev *pdev, pm_message_t state) ++static int __maybe_unused iavf_suspend(struct device *dev_d) + { +- struct net_device *netdev = pci_get_drvdata(pdev); ++ struct net_device *netdev = dev_get_drvdata(dev_d); + struct iavf_adapter *adapter = netdev_priv(netdev); +- int retval = 0; + + netif_device_detach(netdev); + +@@ -3803,12 +3801,6 @@ static int iavf_suspend(struct pci_dev *pdev, pm_message_t state) + + clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section); + +- retval = pci_save_state(pdev); +- if (retval) +- return retval; +- +- pci_disable_device(pdev); +- + return 0; + } + +@@ -3818,24 +3810,13 @@ static int iavf_suspend(struct pci_dev *pdev, pm_message_t state) + * + * Called when the system (VM) is resumed from sleep/suspend. + **/ +-static int iavf_resume(struct pci_dev *pdev) ++static int __maybe_unused iavf_resume(struct device *dev_d) + { +- struct iavf_adapter *adapter = pci_get_drvdata(pdev); +- struct net_device *netdev = adapter->netdev; ++ struct pci_dev *pdev = to_pci_dev(dev_d); ++ struct net_device *netdev = pci_get_drvdata(pdev); ++ struct iavf_adapter *adapter = netdev_priv(netdev); + u32 err; + +- pci_set_power_state(pdev, PCI_D0); +- pci_restore_state(pdev); +- /* pci_restore_state clears dev->state_saved so call +- * pci_save_state to restore it. +- */ +- pci_save_state(pdev); +- +- err = pci_enable_device_mem(pdev); +- if (err) { +- dev_err(&pdev->dev, "Cannot enable PCI device from suspend.\n"); +- return err; +- } + pci_set_master(pdev); + + rtnl_lock(); +@@ -3859,7 +3840,6 @@ static int iavf_resume(struct pci_dev *pdev) + return err; + } + +-#endif /* CONFIG_PM */ + /** + * iavf_remove - Device Removal Routine + * @pdev: PCI device information struct +@@ -3961,16 +3941,15 @@ static void iavf_remove(struct pci_dev *pdev) + pci_disable_device(pdev); + } + ++static SIMPLE_DEV_PM_OPS(iavf_pm_ops, iavf_suspend, iavf_resume); ++ + static struct pci_driver iavf_driver = { +- .name = iavf_driver_name, +- .id_table = iavf_pci_tbl, +- .probe = iavf_probe, +- .remove = iavf_remove, +-#ifdef CONFIG_PM +- .suspend = iavf_suspend, +- .resume = iavf_resume, +-#endif +- .shutdown = iavf_shutdown, ++ .name = iavf_driver_name, ++ .id_table = iavf_pci_tbl, ++ .probe = iavf_probe, ++ .remove = iavf_remove, ++ .driver.pm = &iavf_pm_ops, ++ .shutdown = iavf_shutdown, + }; + + /** +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c +index b6a3370068f1c..7089ffcc4e512 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c +@@ -69,12 +69,10 @@ enum { + MLX5_CMD_DELIVERY_STAT_CMD_DESCR_ERR = 0x10, + }; + +-static struct mlx5_cmd_work_ent *alloc_cmd(struct mlx5_cmd *cmd, +- struct mlx5_cmd_msg *in, +- struct mlx5_cmd_msg *out, +- void *uout, int uout_size, +- mlx5_cmd_cbk_t cbk, +- void *context, int page_queue) ++static struct mlx5_cmd_work_ent * ++cmd_alloc_ent(struct mlx5_cmd *cmd, struct mlx5_cmd_msg *in, ++ struct mlx5_cmd_msg *out, void *uout, int uout_size, ++ mlx5_cmd_cbk_t cbk, void *context, int page_queue) + { + gfp_t alloc_flags = cbk ? 
GFP_ATOMIC : GFP_KERNEL; + struct mlx5_cmd_work_ent *ent; +@@ -83,6 +81,7 @@ static struct mlx5_cmd_work_ent *alloc_cmd(struct mlx5_cmd *cmd, + if (!ent) + return ERR_PTR(-ENOMEM); + ++ ent->idx = -EINVAL; + ent->in = in; + ent->out = out; + ent->uout = uout; +@@ -91,10 +90,16 @@ static struct mlx5_cmd_work_ent *alloc_cmd(struct mlx5_cmd *cmd, + ent->context = context; + ent->cmd = cmd; + ent->page_queue = page_queue; ++ refcount_set(&ent->refcnt, 1); + + return ent; + } + ++static void cmd_free_ent(struct mlx5_cmd_work_ent *ent) ++{ ++ kfree(ent); ++} ++ + static u8 alloc_token(struct mlx5_cmd *cmd) + { + u8 token; +@@ -109,7 +114,7 @@ static u8 alloc_token(struct mlx5_cmd *cmd) + return token; + } + +-static int alloc_ent(struct mlx5_cmd *cmd) ++static int cmd_alloc_index(struct mlx5_cmd *cmd) + { + unsigned long flags; + int ret; +@@ -123,7 +128,7 @@ static int alloc_ent(struct mlx5_cmd *cmd) + return ret < cmd->max_reg_cmds ? ret : -ENOMEM; + } + +-static void free_ent(struct mlx5_cmd *cmd, int idx) ++static void cmd_free_index(struct mlx5_cmd *cmd, int idx) + { + unsigned long flags; + +@@ -132,6 +137,22 @@ static void free_ent(struct mlx5_cmd *cmd, int idx) + spin_unlock_irqrestore(&cmd->alloc_lock, flags); + } + ++static void cmd_ent_get(struct mlx5_cmd_work_ent *ent) ++{ ++ refcount_inc(&ent->refcnt); ++} ++ ++static void cmd_ent_put(struct mlx5_cmd_work_ent *ent) ++{ ++ if (!refcount_dec_and_test(&ent->refcnt)) ++ return; ++ ++ if (ent->idx >= 0) ++ cmd_free_index(ent->cmd, ent->idx); ++ ++ cmd_free_ent(ent); ++} ++ + static struct mlx5_cmd_layout *get_inst(struct mlx5_cmd *cmd, int idx) + { + return cmd->cmd_buf + (idx << cmd->log_stride); +@@ -219,11 +240,6 @@ static void poll_timeout(struct mlx5_cmd_work_ent *ent) + ent->ret = -ETIMEDOUT; + } + +-static void free_cmd(struct mlx5_cmd_work_ent *ent) +-{ +- kfree(ent); +-} +- + static int verify_signature(struct mlx5_cmd_work_ent *ent) + { + struct mlx5_cmd_mailbox *next = ent->out->next; +@@ -842,6 +858,7 @@ static void cb_timeout_handler(struct work_struct *work) + mlx5_command_str(msg_to_opcode(ent->in)), + msg_to_opcode(ent->in)); + mlx5_cmd_comp_handler(dev, 1UL << ent->idx, true); ++ cmd_ent_put(ent); /* for the cmd_ent_get() took on schedule delayed work */ + } + + static void free_msg(struct mlx5_core_dev *dev, struct mlx5_cmd_msg *msg); +@@ -865,14 +882,14 @@ static void cmd_work_handler(struct work_struct *work) + sem = ent->page_queue ? 
&cmd->pages_sem : &cmd->sem; + down(sem); + if (!ent->page_queue) { +- alloc_ret = alloc_ent(cmd); ++ alloc_ret = cmd_alloc_index(cmd); + if (alloc_ret < 0) { + mlx5_core_err(dev, "failed to allocate command entry\n"); + if (ent->callback) { + ent->callback(-EAGAIN, ent->context); + mlx5_free_cmd_msg(dev, ent->out); + free_msg(dev, ent->in); +- free_cmd(ent); ++ cmd_ent_put(ent); + } else { + ent->ret = -EAGAIN; + complete(&ent->done); +@@ -908,8 +925,8 @@ static void cmd_work_handler(struct work_struct *work) + ent->ts1 = ktime_get_ns(); + cmd_mode = cmd->mode; + +- if (ent->callback) +- schedule_delayed_work(&ent->cb_timeout_work, cb_timeout); ++ if (ent->callback && schedule_delayed_work(&ent->cb_timeout_work, cb_timeout)) ++ cmd_ent_get(ent); + set_bit(MLX5_CMD_ENT_STATE_PENDING_COMP, &ent->state); + + /* Skip sending command to fw if internal error */ +@@ -923,13 +940,10 @@ static void cmd_work_handler(struct work_struct *work) + MLX5_SET(mbox_out, ent->out, syndrome, drv_synd); + + mlx5_cmd_comp_handler(dev, 1UL << ent->idx, true); +- /* no doorbell, no need to keep the entry */ +- free_ent(cmd, ent->idx); +- if (ent->callback) +- free_cmd(ent); + return; + } + ++ cmd_ent_get(ent); /* for the _real_ FW event on completion */ + /* ring doorbell after the descriptor is valid */ + mlx5_core_dbg(dev, "writing 0x%x to command doorbell\n", 1 << ent->idx); + wmb(); +@@ -1029,11 +1043,16 @@ static int mlx5_cmd_invoke(struct mlx5_core_dev *dev, struct mlx5_cmd_msg *in, + if (callback && page_queue) + return -EINVAL; + +- ent = alloc_cmd(cmd, in, out, uout, uout_size, callback, context, +- page_queue); ++ ent = cmd_alloc_ent(cmd, in, out, uout, uout_size, ++ callback, context, page_queue); + if (IS_ERR(ent)) + return PTR_ERR(ent); + ++ /* put for this ent is when consumed, depending on the use case ++ * 1) (!callback) blocking flow: by caller after wait_func completes ++ * 2) (callback) flow: by mlx5_cmd_comp_handler() when ent is handled ++ */ ++ + ent->token = token; + ent->polling = force_polling; + +@@ -1052,12 +1071,10 @@ static int mlx5_cmd_invoke(struct mlx5_core_dev *dev, struct mlx5_cmd_msg *in, + } + + if (callback) +- goto out; ++ goto out; /* mlx5_cmd_comp_handler() will put(ent) */ + + err = wait_func(dev, ent); +- if (err == -ETIMEDOUT) +- goto out; +- if (err == -ECANCELED) ++ if (err == -ETIMEDOUT || err == -ECANCELED) + goto out_free; + + ds = ent->ts2 - ent->ts1; +@@ -1075,7 +1092,7 @@ static int mlx5_cmd_invoke(struct mlx5_core_dev *dev, struct mlx5_cmd_msg *in, + *status = ent->status; + + out_free: +- free_cmd(ent); ++ cmd_ent_put(ent); + out: + return err; + } +@@ -1490,14 +1507,19 @@ static void mlx5_cmd_comp_handler(struct mlx5_core_dev *dev, u64 vec, bool force + if (!forced) { + mlx5_core_err(dev, "Command completion arrived after timeout (entry idx = %d).\n", + ent->idx); +- free_ent(cmd, ent->idx); +- free_cmd(ent); ++ cmd_ent_put(ent); + } + continue; + } + +- if (ent->callback) +- cancel_delayed_work(&ent->cb_timeout_work); ++ if (ent->callback && cancel_delayed_work(&ent->cb_timeout_work)) ++ cmd_ent_put(ent); /* timeout work was canceled */ ++ ++ if (!forced || /* Real FW completion */ ++ pci_channel_offline(dev->pdev) || /* FW is inaccessible */ ++ dev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR) ++ cmd_ent_put(ent); ++ + if (ent->page_queue) + sem = &cmd->pages_sem; + else +@@ -1519,10 +1541,6 @@ static void mlx5_cmd_comp_handler(struct mlx5_core_dev *dev, u64 vec, bool force + ent->ret, deliv_status_to_str(ent->status), ent->status); + } + +- /* only real 
completion will free the entry slot */ +- if (!forced) +- free_ent(cmd, ent->idx); +- + if (ent->callback) { + ds = ent->ts2 - ent->ts1; + if (ent->op < ARRAY_SIZE(cmd->stats)) { +@@ -1550,10 +1568,13 @@ static void mlx5_cmd_comp_handler(struct mlx5_core_dev *dev, u64 vec, bool force + free_msg(dev, ent->in); + + err = err ? err : ent->status; +- if (!forced) +- free_cmd(ent); ++ /* final consumer is done, release ent */ ++ cmd_ent_put(ent); + callback(err, context); + } else { ++ /* release wait_func() so mlx5_cmd_invoke() ++ * can make the final ent_put() ++ */ + complete(&ent->done); + } + up(sem); +@@ -1563,8 +1584,11 @@ static void mlx5_cmd_comp_handler(struct mlx5_core_dev *dev, u64 vec, bool force + + void mlx5_cmd_trigger_completions(struct mlx5_core_dev *dev) + { ++ struct mlx5_cmd *cmd = &dev->cmd; ++ unsigned long bitmask; + unsigned long flags; + u64 vector; ++ int i; + + /* wait for pending handlers to complete */ + mlx5_eq_synchronize_cmd_irq(dev); +@@ -1573,11 +1597,20 @@ void mlx5_cmd_trigger_completions(struct mlx5_core_dev *dev) + if (!vector) + goto no_trig; + ++ bitmask = vector; ++ /* we must increment the allocated entries refcount before triggering the completions ++ * to guarantee pending commands will not get freed in the meanwhile. ++ * For that reason, it also has to be done inside the alloc_lock. ++ */ ++ for_each_set_bit(i, &bitmask, (1 << cmd->log_sz)) ++ cmd_ent_get(cmd->ent_arr[i]); + vector |= MLX5_TRIGGERED_CMD_COMP; + spin_unlock_irqrestore(&dev->cmd.alloc_lock, flags); + + mlx5_core_dbg(dev, "vector 0x%llx\n", vector); + mlx5_cmd_comp_handler(dev, vector, true); ++ for_each_set_bit(i, &bitmask, (1 << cmd->log_sz)) ++ cmd_ent_put(cmd->ent_arr[i]); + return; + + no_trig: +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h +index 98304c42e4952..b5c8afe8cd10d 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h +@@ -92,7 +92,12 @@ struct page_pool; + #define MLX5_MPWRQ_PAGES_PER_WQE BIT(MLX5_MPWRQ_WQE_PAGE_ORDER) + + #define MLX5_MTT_OCTW(npages) (ALIGN(npages, 8) / 2) +-#define MLX5E_REQUIRED_WQE_MTTS (ALIGN(MLX5_MPWRQ_PAGES_PER_WQE, 8)) ++/* Add another page to MLX5E_REQUIRED_WQE_MTTS as a buffer between ++ * WQEs, This page will absorb write overflow by the hardware, when ++ * receiving packets larger than MTU. These oversize packets are ++ * dropped by the driver at a later stage. 
++ */ ++#define MLX5E_REQUIRED_WQE_MTTS (ALIGN(MLX5_MPWRQ_PAGES_PER_WQE + 1, 8)) + #define MLX5E_LOG_ALIGNED_MPWQE_PPW (ilog2(MLX5E_REQUIRED_WQE_MTTS)) + #define MLX5E_REQUIRED_MTTS(wqes) (wqes * MLX5E_REQUIRED_WQE_MTTS) + #define MLX5E_MAX_RQ_NUM_MTTS \ +@@ -694,6 +699,7 @@ struct mlx5e_rq { + u32 rqn; + struct mlx5_core_dev *mdev; + struct mlx5_core_mkey umr_mkey; ++ struct mlx5e_dma_info wqe_overflow; + + /* XDP read-mostly */ + struct xdp_rxq_info xdp_rxq; +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c +index 73d3dc07331f1..713dc210f710c 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c +@@ -217,6 +217,9 @@ static int __mlx5e_add_vlan_rule(struct mlx5e_priv *priv, + break; + } + ++ if (WARN_ONCE(*rule_p, "VLAN rule already exists type %d", rule_type)) ++ return 0; ++ + *rule_p = mlx5_add_flow_rules(ft, spec, &flow_act, &dest, 1); + + if (IS_ERR(*rule_p)) { +@@ -397,8 +400,7 @@ static void mlx5e_add_vlan_rules(struct mlx5e_priv *priv) + for_each_set_bit(i, priv->fs.vlan.active_svlans, VLAN_N_VID) + mlx5e_add_vlan_rule(priv, MLX5E_VLAN_RULE_TYPE_MATCH_STAG_VID, i); + +- if (priv->fs.vlan.cvlan_filter_disabled && +- !(priv->netdev->flags & IFF_PROMISC)) ++ if (priv->fs.vlan.cvlan_filter_disabled) + mlx5e_add_any_vid_rules(priv); + } + +@@ -415,8 +417,12 @@ static void mlx5e_del_vlan_rules(struct mlx5e_priv *priv) + for_each_set_bit(i, priv->fs.vlan.active_svlans, VLAN_N_VID) + mlx5e_del_vlan_rule(priv, MLX5E_VLAN_RULE_TYPE_MATCH_STAG_VID, i); + +- if (priv->fs.vlan.cvlan_filter_disabled && +- !(priv->netdev->flags & IFF_PROMISC)) ++ WARN_ON_ONCE(!(test_bit(MLX5E_STATE_DESTROYING, &priv->state))); ++ ++ /* must be called after DESTROY bit is set and ++ * set_rx_mode is called and flushed ++ */ ++ if (priv->fs.vlan.cvlan_filter_disabled) + mlx5e_del_any_vid_rules(priv); + } + +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +index ee0d78f801af5..8b8581f71e793 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +@@ -266,12 +266,17 @@ static int mlx5e_rq_alloc_mpwqe_info(struct mlx5e_rq *rq, + + static int mlx5e_create_umr_mkey(struct mlx5_core_dev *mdev, + u64 npages, u8 page_shift, +- struct mlx5_core_mkey *umr_mkey) ++ struct mlx5_core_mkey *umr_mkey, ++ dma_addr_t filler_addr) + { +- int inlen = MLX5_ST_SZ_BYTES(create_mkey_in); ++ struct mlx5_mtt *mtt; ++ int inlen; + void *mkc; + u32 *in; + int err; ++ int i; ++ ++ inlen = MLX5_ST_SZ_BYTES(create_mkey_in) + sizeof(*mtt) * npages; + + in = kvzalloc(inlen, GFP_KERNEL); + if (!in) +@@ -291,6 +296,18 @@ static int mlx5e_create_umr_mkey(struct mlx5_core_dev *mdev, + MLX5_SET(mkc, mkc, translations_octword_size, + MLX5_MTT_OCTW(npages)); + MLX5_SET(mkc, mkc, log_page_size, page_shift); ++ MLX5_SET(create_mkey_in, in, translations_octword_actual_size, ++ MLX5_MTT_OCTW(npages)); ++ ++ /* Initialize the mkey with all MTTs pointing to a default ++ * page (filler_addr). When the channels are activated, UMR ++ * WQEs will redirect the RX WQEs to the actual memory from ++ * the RQ's pool, while the gaps (wqe_overflow) remain mapped ++ * to the default page. 
++ */ ++ mtt = MLX5_ADDR_OF(create_mkey_in, in, klm_pas_mtt); ++ for (i = 0 ; i < npages ; i++) ++ mtt[i].ptag = cpu_to_be64(filler_addr); + + err = mlx5_core_create_mkey(mdev, umr_mkey, in, inlen); + +@@ -302,7 +319,8 @@ static int mlx5e_create_rq_umr_mkey(struct mlx5_core_dev *mdev, struct mlx5e_rq + { + u64 num_mtts = MLX5E_REQUIRED_MTTS(mlx5_wq_ll_get_size(&rq->mpwqe.wq)); + +- return mlx5e_create_umr_mkey(mdev, num_mtts, PAGE_SHIFT, &rq->umr_mkey); ++ return mlx5e_create_umr_mkey(mdev, num_mtts, PAGE_SHIFT, &rq->umr_mkey, ++ rq->wqe_overflow.addr); + } + + static inline u64 mlx5e_get_mpwqe_offset(struct mlx5e_rq *rq, u16 wqe_ix) +@@ -370,6 +388,28 @@ static void mlx5e_rq_err_cqe_work(struct work_struct *recover_work) + mlx5e_reporter_rq_cqe_err(rq); + } + ++static int mlx5e_alloc_mpwqe_rq_drop_page(struct mlx5e_rq *rq) ++{ ++ rq->wqe_overflow.page = alloc_page(GFP_KERNEL); ++ if (!rq->wqe_overflow.page) ++ return -ENOMEM; ++ ++ rq->wqe_overflow.addr = dma_map_page(rq->pdev, rq->wqe_overflow.page, 0, ++ PAGE_SIZE, rq->buff.map_dir); ++ if (dma_mapping_error(rq->pdev, rq->wqe_overflow.addr)) { ++ __free_page(rq->wqe_overflow.page); ++ return -ENOMEM; ++ } ++ return 0; ++} ++ ++static void mlx5e_free_mpwqe_rq_drop_page(struct mlx5e_rq *rq) ++{ ++ dma_unmap_page(rq->pdev, rq->wqe_overflow.addr, PAGE_SIZE, ++ rq->buff.map_dir); ++ __free_page(rq->wqe_overflow.page); ++} ++ + static int mlx5e_alloc_rq(struct mlx5e_channel *c, + struct mlx5e_params *params, + struct mlx5e_xsk_param *xsk, +@@ -434,6 +474,10 @@ static int mlx5e_alloc_rq(struct mlx5e_channel *c, + if (err) + goto err_rq_wq_destroy; + ++ err = mlx5e_alloc_mpwqe_rq_drop_page(rq); ++ if (err) ++ goto err_rq_wq_destroy; ++ + rq->mpwqe.wq.db = &rq->mpwqe.wq.db[MLX5_RCV_DBR]; + + wq_sz = mlx5_wq_ll_get_size(&rq->mpwqe.wq); +@@ -474,7 +518,7 @@ static int mlx5e_alloc_rq(struct mlx5e_channel *c, + + err = mlx5e_create_rq_umr_mkey(mdev, rq); + if (err) +- goto err_rq_wq_destroy; ++ goto err_rq_drop_page; + rq->mkey_be = cpu_to_be32(rq->umr_mkey.key); + + err = mlx5e_rq_alloc_mpwqe_info(rq, c); +@@ -622,6 +666,8 @@ err_free: + case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ: + kvfree(rq->mpwqe.info); + mlx5_core_destroy_mkey(mdev, &rq->umr_mkey); ++err_rq_drop_page: ++ mlx5e_free_mpwqe_rq_drop_page(rq); + break; + default: /* MLX5_WQ_TYPE_CYCLIC */ + kvfree(rq->wqe.frags); +@@ -649,6 +695,7 @@ static void mlx5e_free_rq(struct mlx5e_rq *rq) + case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ: + kvfree(rq->mpwqe.info); + mlx5_core_destroy_mkey(rq->mdev, &rq->umr_mkey); ++ mlx5e_free_mpwqe_rq_drop_page(rq); + break; + default: /* MLX5_WQ_TYPE_CYCLIC */ + kvfree(rq->wqe.frags); +@@ -4281,6 +4328,21 @@ void mlx5e_del_vxlan_port(struct net_device *netdev, struct udp_tunnel_info *ti) + mlx5e_vxlan_queue_work(priv, be16_to_cpu(ti->port), 0); + } + ++static bool mlx5e_gre_tunnel_inner_proto_offload_supported(struct mlx5_core_dev *mdev, ++ struct sk_buff *skb) ++{ ++ switch (skb->inner_protocol) { ++ case htons(ETH_P_IP): ++ case htons(ETH_P_IPV6): ++ case htons(ETH_P_TEB): ++ return true; ++ case htons(ETH_P_MPLS_UC): ++ case htons(ETH_P_MPLS_MC): ++ return MLX5_CAP_ETH(mdev, tunnel_stateless_mpls_over_gre); ++ } ++ return false; ++} ++ + static netdev_features_t mlx5e_tunnel_features_check(struct mlx5e_priv *priv, + struct sk_buff *skb, + netdev_features_t features) +@@ -4303,7 +4365,9 @@ static netdev_features_t mlx5e_tunnel_features_check(struct mlx5e_priv *priv, + + switch (proto) { + case IPPROTO_GRE: +- return features; ++ if 
(mlx5e_gre_tunnel_inner_proto_offload_supported(priv->mdev, skb)) ++ return features; ++ break; + case IPPROTO_IPIP: + case IPPROTO_IPV6: + if (mlx5e_tunnel_proto_supported(priv->mdev, IPPROTO_IPIP)) +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c +index 373981a659c7c..6fd9749203944 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c +@@ -115,7 +115,7 @@ static int request_irqs(struct mlx5_core_dev *dev, int nvec) + return 0; + + err_request_irq: +- for (; i >= 0; i--) { ++ while (i--) { + struct mlx5_irq *irq = mlx5_irq_get(dev, i); + int irqn = pci_irq_vector(dev->pdev, i); + +diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c +index 295b27112d367..ec0d5a4a60a98 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c ++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c +@@ -290,13 +290,14 @@ mlxsw_sp_acl_tcam_group_add(struct mlxsw_sp_acl_tcam *tcam, + int err; + + group->tcam = tcam; +- mutex_init(&group->lock); + INIT_LIST_HEAD(&group->region_list); + + err = mlxsw_sp_acl_tcam_group_id_get(tcam, &group->id); + if (err) + return err; + ++ mutex_init(&group->lock); ++ + return 0; + } + +diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c +index 903212ad9bb2f..66c97049f52b7 100644 +--- a/drivers/net/ethernet/realtek/r8169_main.c ++++ b/drivers/net/ethernet/realtek/r8169_main.c +@@ -4701,7 +4701,7 @@ static void rtl_hw_start_8168f_1(struct rtl8169_private *tp) + { 0x08, 0x0001, 0x0002 }, + { 0x09, 0x0000, 0x0080 }, + { 0x19, 0x0000, 0x0224 }, +- { 0x00, 0x0000, 0x0004 }, ++ { 0x00, 0x0000, 0x0008 }, + { 0x0c, 0x3df0, 0x0200 }, + }; + +@@ -4718,7 +4718,7 @@ static void rtl_hw_start_8411(struct rtl8169_private *tp) + { 0x06, 0x00c0, 0x0020 }, + { 0x0f, 0xffff, 0x5200 }, + { 0x19, 0x0000, 0x0224 }, +- { 0x00, 0x0000, 0x0004 }, ++ { 0x00, 0x0000, 0x0008 }, + { 0x0c, 0x3df0, 0x0200 }, + }; + +diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c +index 907ae1359a7c1..30cdabf64ccc1 100644 +--- a/drivers/net/ethernet/renesas/ravb_main.c ++++ b/drivers/net/ethernet/renesas/ravb_main.c +@@ -1336,51 +1336,6 @@ static inline int ravb_hook_irq(unsigned int irq, irq_handler_t handler, + return error; + } + +-/* MDIO bus init function */ +-static int ravb_mdio_init(struct ravb_private *priv) +-{ +- struct platform_device *pdev = priv->pdev; +- struct device *dev = &pdev->dev; +- int error; +- +- /* Bitbang init */ +- priv->mdiobb.ops = &bb_ops; +- +- /* MII controller setting */ +- priv->mii_bus = alloc_mdio_bitbang(&priv->mdiobb); +- if (!priv->mii_bus) +- return -ENOMEM; +- +- /* Hook up MII support for ethtool */ +- priv->mii_bus->name = "ravb_mii"; +- priv->mii_bus->parent = dev; +- snprintf(priv->mii_bus->id, MII_BUS_ID_SIZE, "%s-%x", +- pdev->name, pdev->id); +- +- /* Register MDIO bus */ +- error = of_mdiobus_register(priv->mii_bus, dev->of_node); +- if (error) +- goto out_free_bus; +- +- return 0; +- +-out_free_bus: +- free_mdio_bitbang(priv->mii_bus); +- return error; +-} +- +-/* MDIO bus release function */ +-static int ravb_mdio_release(struct ravb_private *priv) +-{ +- /* Unregister mdio bus */ +- mdiobus_unregister(priv->mii_bus); +- +- /* Free bitbang info */ +- free_mdio_bitbang(priv->mii_bus); +- +- return 0; +-} +- + /* Network device open function for Ethernet AVB 
*/ + static int ravb_open(struct net_device *ndev) + { +@@ -1389,13 +1344,6 @@ static int ravb_open(struct net_device *ndev) + struct device *dev = &pdev->dev; + int error; + +- /* MDIO bus init */ +- error = ravb_mdio_init(priv); +- if (error) { +- netdev_err(ndev, "failed to initialize MDIO\n"); +- return error; +- } +- + napi_enable(&priv->napi[RAVB_BE]); + napi_enable(&priv->napi[RAVB_NC]); + +@@ -1473,7 +1421,6 @@ out_free_irq: + out_napi_off: + napi_disable(&priv->napi[RAVB_NC]); + napi_disable(&priv->napi[RAVB_BE]); +- ravb_mdio_release(priv); + return error; + } + +@@ -1783,8 +1730,6 @@ static int ravb_close(struct net_device *ndev) + ravb_ring_free(ndev, RAVB_BE); + ravb_ring_free(ndev, RAVB_NC); + +- ravb_mdio_release(priv); +- + return 0; + } + +@@ -1936,6 +1881,51 @@ static const struct net_device_ops ravb_netdev_ops = { + .ndo_set_features = ravb_set_features, + }; + ++/* MDIO bus init function */ ++static int ravb_mdio_init(struct ravb_private *priv) ++{ ++ struct platform_device *pdev = priv->pdev; ++ struct device *dev = &pdev->dev; ++ int error; ++ ++ /* Bitbang init */ ++ priv->mdiobb.ops = &bb_ops; ++ ++ /* MII controller setting */ ++ priv->mii_bus = alloc_mdio_bitbang(&priv->mdiobb); ++ if (!priv->mii_bus) ++ return -ENOMEM; ++ ++ /* Hook up MII support for ethtool */ ++ priv->mii_bus->name = "ravb_mii"; ++ priv->mii_bus->parent = dev; ++ snprintf(priv->mii_bus->id, MII_BUS_ID_SIZE, "%s-%x", ++ pdev->name, pdev->id); ++ ++ /* Register MDIO bus */ ++ error = of_mdiobus_register(priv->mii_bus, dev->of_node); ++ if (error) ++ goto out_free_bus; ++ ++ return 0; ++ ++out_free_bus: ++ free_mdio_bitbang(priv->mii_bus); ++ return error; ++} ++ ++/* MDIO bus release function */ ++static int ravb_mdio_release(struct ravb_private *priv) ++{ ++ /* Unregister mdio bus */ ++ mdiobus_unregister(priv->mii_bus); ++ ++ /* Free bitbang info */ ++ free_mdio_bitbang(priv->mii_bus); ++ ++ return 0; ++} ++ + static const struct of_device_id ravb_match_table[] = { + { .compatible = "renesas,etheravb-r8a7790", .data = (void *)RCAR_GEN2 }, + { .compatible = "renesas,etheravb-r8a7794", .data = (void *)RCAR_GEN2 }, +@@ -2176,6 +2166,13 @@ static int ravb_probe(struct platform_device *pdev) + eth_hw_addr_random(ndev); + } + ++ /* MDIO bus init */ ++ error = ravb_mdio_init(priv); ++ if (error) { ++ dev_err(&pdev->dev, "failed to initialize MDIO\n"); ++ goto out_dma_free; ++ } ++ + netif_napi_add(ndev, &priv->napi[RAVB_BE], ravb_poll, 64); + netif_napi_add(ndev, &priv->napi[RAVB_NC], ravb_poll, 64); + +@@ -2197,6 +2194,8 @@ static int ravb_probe(struct platform_device *pdev) + out_napi_del: + netif_napi_del(&priv->napi[RAVB_NC]); + netif_napi_del(&priv->napi[RAVB_BE]); ++ ravb_mdio_release(priv); ++out_dma_free: + dma_free_coherent(ndev->dev.parent, priv->desc_bat_size, priv->desc_bat, + priv->desc_bat_dma); + +@@ -2228,6 +2227,7 @@ static int ravb_remove(struct platform_device *pdev) + unregister_netdev(ndev); + netif_napi_del(&priv->napi[RAVB_NC]); + netif_napi_del(&priv->napi[RAVB_BE]); ++ ravb_mdio_release(priv); + pm_runtime_disable(&pdev->dev); + free_netdev(ndev); + platform_set_drvdata(pdev, NULL); +diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c +index 1a768837ca728..ce1346c14b05a 100644 +--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c ++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c +@@ -662,23 +662,16 @@ static int stmmac_ethtool_op_set_eee(struct net_device *dev, + struct stmmac_priv *priv = 
netdev_priv(dev); + int ret; + +- if (!edata->eee_enabled) { ++ if (!priv->dma_cap.eee) ++ return -EOPNOTSUPP; ++ ++ if (!edata->eee_enabled) + stmmac_disable_eee_mode(priv); +- } else { +- /* We are asking for enabling the EEE but it is safe +- * to verify all by invoking the eee_init function. +- * In case of failure it will return an error. +- */ +- edata->eee_enabled = stmmac_eee_init(priv); +- if (!edata->eee_enabled) +- return -EOPNOTSUPP; +- } + + ret = phylink_ethtool_set_eee(priv->phylink, edata); + if (ret) + return ret; + +- priv->eee_enabled = edata->eee_enabled; + priv->tx_lpi_timer = edata->tx_lpi_timer; + return 0; + } +diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c +index 4c86a73db475a..f233a6933a976 100644 +--- a/drivers/net/macsec.c ++++ b/drivers/net/macsec.c +@@ -1080,6 +1080,7 @@ static rx_handler_result_t macsec_handle_frame(struct sk_buff **pskb) + struct macsec_rx_sa *rx_sa; + struct macsec_rxh_data *rxd; + struct macsec_dev *macsec; ++ unsigned int len; + sci_t sci; + u32 pn; + bool cbit; +@@ -1236,9 +1237,10 @@ deliver: + macsec_rxsc_put(rx_sc); + + skb_orphan(skb); ++ len = skb->len; + ret = gro_cells_receive(&macsec->gro_cells, skb); + if (ret == NET_RX_SUCCESS) +- count_rx(dev, skb->len); ++ count_rx(dev, len); + else + macsec->secy.netdev->stats.rx_dropped++; + +diff --git a/drivers/net/phy/Kconfig b/drivers/net/phy/Kconfig +index fe602648b99f5..dcf2051ef2c04 100644 +--- a/drivers/net/phy/Kconfig ++++ b/drivers/net/phy/Kconfig +@@ -193,6 +193,7 @@ config MDIO_THUNDER + depends on 64BIT + depends on PCI + select MDIO_CAVIUM ++ select MDIO_DEVRES + help + This driver supports the MDIO interfaces found on Cavium + ThunderX SoCs when the MDIO bus device appears as a PCI +diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c +index 04845a4017f93..606fee99221b8 100644 +--- a/drivers/net/team/team.c ++++ b/drivers/net/team/team.c +@@ -287,7 +287,7 @@ inst_rollback: + for (i--; i >= 0; i--) + __team_option_inst_del_option(team, dst_opts[i]); + +- i = option_count - 1; ++ i = option_count; + alloc_rollback: + for (i--; i >= 0; i--) + kfree(dst_opts[i]); +@@ -2111,6 +2111,7 @@ static void team_setup_by_port(struct net_device *dev, + dev->header_ops = port_dev->header_ops; + dev->type = port_dev->type; + dev->hard_header_len = port_dev->hard_header_len; ++ dev->needed_headroom = port_dev->needed_headroom; + dev->addr_len = port_dev->addr_len; + dev->mtu = port_dev->mtu; + memcpy(dev->broadcast, port_dev->broadcast, port_dev->addr_len); +diff --git a/drivers/net/usb/ax88179_178a.c b/drivers/net/usb/ax88179_178a.c +index df2f7cc6dc03a..8e37e1f58c4b9 100644 +--- a/drivers/net/usb/ax88179_178a.c ++++ b/drivers/net/usb/ax88179_178a.c +@@ -1719,6 +1719,7 @@ static const struct driver_info belkin_info = { + .status = ax88179_status, + .link_reset = ax88179_link_reset, + .reset = ax88179_reset, ++ .stop = ax88179_stop, + .flags = FLAG_ETHER | FLAG_FRAMING_AX, + .rx_fixup = ax88179_rx_fixup, + .tx_fixup = ax88179_tx_fixup, +diff --git a/drivers/net/usb/rtl8150.c b/drivers/net/usb/rtl8150.c +index 13e51ccf02147..491625c1c3084 100644 +--- a/drivers/net/usb/rtl8150.c ++++ b/drivers/net/usb/rtl8150.c +@@ -274,12 +274,20 @@ static int write_mii_word(rtl8150_t * dev, u8 phy, __u8 indx, u16 reg) + return 1; + } + +-static inline void set_ethernet_addr(rtl8150_t * dev) ++static void set_ethernet_addr(rtl8150_t *dev) + { +- u8 node_id[6]; ++ u8 node_id[ETH_ALEN]; ++ int ret; ++ ++ ret = get_registers(dev, IDR, sizeof(node_id), node_id); + +- get_registers(dev, IDR, 
sizeof(node_id), node_id); +- memcpy(dev->netdev->dev_addr, node_id, sizeof(node_id)); ++ if (ret == sizeof(node_id)) { ++ ether_addr_copy(dev->netdev->dev_addr, node_id); ++ } else { ++ eth_hw_addr_random(dev->netdev); ++ netdev_notice(dev->netdev, "Assigned a random MAC address: %pM\n", ++ dev->netdev->dev_addr); ++ } + } + + static int rtl8150_set_mac_address(struct net_device *netdev, void *p) +diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c +index 030d30603c295..99e1a7bc06886 100644 +--- a/drivers/net/virtio_net.c ++++ b/drivers/net/virtio_net.c +@@ -63,6 +63,11 @@ static const unsigned long guest_offloads[] = { + VIRTIO_NET_F_GUEST_CSUM + }; + ++#define GUEST_OFFLOAD_LRO_MASK ((1ULL << VIRTIO_NET_F_GUEST_TSO4) | \ ++ (1ULL << VIRTIO_NET_F_GUEST_TSO6) | \ ++ (1ULL << VIRTIO_NET_F_GUEST_ECN) | \ ++ (1ULL << VIRTIO_NET_F_GUEST_UFO)) ++ + struct virtnet_stat_desc { + char desc[ETH_GSTRING_LEN]; + size_t offset; +@@ -2572,7 +2577,8 @@ static int virtnet_set_features(struct net_device *dev, + if (features & NETIF_F_LRO) + offloads = vi->guest_offloads_capable; + else +- offloads = 0; ++ offloads = vi->guest_offloads_capable & ++ ~GUEST_OFFLOAD_LRO_MASK; + + err = virtnet_set_guest_offloads(vi, offloads); + if (err) +diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c +index 207ed6d49ad7c..ce69aaea581a5 100644 +--- a/drivers/nvme/host/core.c ++++ b/drivers/nvme/host/core.c +@@ -2932,8 +2932,10 @@ static int nvme_dev_open(struct inode *inode, struct file *file) + } + + nvme_get_ctrl(ctrl); +- if (!try_module_get(ctrl->ops->module)) ++ if (!try_module_get(ctrl->ops->module)) { ++ nvme_put_ctrl(ctrl); + return -EINVAL; ++ } + + file->private_data = ctrl; + return 0; +diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c +index 6d7a813e7183a..e159b78b5f3b4 100644 +--- a/drivers/nvme/host/tcp.c ++++ b/drivers/nvme/host/tcp.c +@@ -861,12 +861,11 @@ static int nvme_tcp_try_send_data(struct nvme_tcp_request *req) + else + flags |= MSG_MORE; + +- /* can't zcopy slab pages */ +- if (unlikely(PageSlab(page))) { +- ret = sock_no_sendpage(queue->sock, page, offset, len, ++ if (sendpage_ok(page)) { ++ ret = kernel_sendpage(queue->sock, page, offset, len, + flags); + } else { +- ret = kernel_sendpage(queue->sock, page, offset, len, ++ ret = sock_no_sendpage(queue->sock, page, offset, len, + flags); + } + if (ret <= 0) +diff --git a/drivers/platform/olpc/olpc-ec.c b/drivers/platform/olpc/olpc-ec.c +index 190e4a6186ef7..f64b82824db28 100644 +--- a/drivers/platform/olpc/olpc-ec.c ++++ b/drivers/platform/olpc/olpc-ec.c +@@ -439,7 +439,9 @@ static int olpc_ec_probe(struct platform_device *pdev) + &config); + if (IS_ERR(ec->dcon_rdev)) { + dev_err(&pdev->dev, "failed to register DCON regulator\n"); +- return PTR_ERR(ec->dcon_rdev); ++ err = PTR_ERR(ec->dcon_rdev); ++ kfree(ec); ++ return err; + } + + ec->dbgfs_dir = olpc_ec_setup_debugfs(); +diff --git a/drivers/platform/x86/Kconfig b/drivers/platform/x86/Kconfig +index 1cab993205142..000d5693fae74 100644 +--- a/drivers/platform/x86/Kconfig ++++ b/drivers/platform/x86/Kconfig +@@ -269,6 +269,7 @@ config FUJITSU_LAPTOP + depends on BACKLIGHT_CLASS_DEVICE + depends on ACPI_VIDEO || ACPI_VIDEO = n + select INPUT_SPARSEKMAP ++ select NEW_LEDS + select LEDS_CLASS + ---help--- + This is a driver for laptops built by Fujitsu: +diff --git a/drivers/platform/x86/intel-vbtn.c b/drivers/platform/x86/intel-vbtn.c +index 3393ee95077f6..5c103614a409a 100644 +--- a/drivers/platform/x86/intel-vbtn.c ++++ 
b/drivers/platform/x86/intel-vbtn.c +@@ -15,9 +15,13 @@ + #include + #include + ++/* Returned when NOT in tablet mode on some HP Stream x360 11 models */ ++#define VGBS_TABLET_MODE_FLAG_ALT 0x10 + /* When NOT in tablet mode, VGBS returns with the flag 0x40 */ +-#define TABLET_MODE_FLAG 0x40 +-#define DOCK_MODE_FLAG 0x80 ++#define VGBS_TABLET_MODE_FLAG 0x40 ++#define VGBS_DOCK_MODE_FLAG 0x80 ++ ++#define VGBS_TABLET_MODE_FLAGS (VGBS_TABLET_MODE_FLAG | VGBS_TABLET_MODE_FLAG_ALT) + + MODULE_LICENSE("GPL"); + MODULE_AUTHOR("AceLan Kao"); +@@ -148,26 +152,60 @@ static void detect_tablet_mode(struct platform_device *device) + if (ACPI_FAILURE(status)) + return; + +- m = !(vgbs & TABLET_MODE_FLAG); ++ m = !(vgbs & VGBS_TABLET_MODE_FLAGS); + input_report_switch(priv->input_dev, SW_TABLET_MODE, m); +- m = (vgbs & DOCK_MODE_FLAG) ? 1 : 0; ++ m = (vgbs & VGBS_DOCK_MODE_FLAG) ? 1 : 0; + input_report_switch(priv->input_dev, SW_DOCK, m); + } + ++/* ++ * There are several laptops (non 2-in-1) models out there which support VGBS, ++ * but simply always return 0, which we translate to SW_TABLET_MODE=1. This in ++ * turn causes userspace (libinput) to suppress events from the builtin ++ * keyboard and touchpad, making the laptop essentially unusable. ++ * ++ * Since the problem of wrongly reporting SW_TABLET_MODE=1 in combination ++ * with libinput, leads to a non-usable system. Where as OTOH many people will ++ * not even notice when SW_TABLET_MODE is not being reported, a DMI based allow ++ * list is used here. This list mainly matches on the chassis-type of 2-in-1s. ++ * ++ * There are also some 2-in-1s which use the intel-vbtn ACPI interface to report ++ * SW_TABLET_MODE with a chassis-type of 8 ("Portable") or 10 ("Notebook"), ++ * these are matched on a per model basis, since many normal laptops with a ++ * possible broken VGBS ACPI-method also use these chassis-types. ++ */ ++static const struct dmi_system_id dmi_switches_allow_list[] = { ++ { ++ .matches = { ++ DMI_EXACT_MATCH(DMI_CHASSIS_TYPE, "31" /* Convertible */), ++ }, ++ }, ++ { ++ .matches = { ++ DMI_EXACT_MATCH(DMI_CHASSIS_TYPE, "32" /* Detachable */), ++ }, ++ }, ++ { ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), ++ DMI_MATCH(DMI_PRODUCT_NAME, "Venue 11 Pro 7130"), ++ }, ++ }, ++ { ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), ++ DMI_MATCH(DMI_PRODUCT_NAME, "HP Stream x360 Convertible PC 11"), ++ }, ++ }, ++ {} /* Array terminator */ ++}; ++ + static bool intel_vbtn_has_switches(acpi_handle handle) + { +- const char *chassis_type = dmi_get_system_info(DMI_CHASSIS_TYPE); + unsigned long long vgbs; + acpi_status status; + +- /* +- * Some normal laptops have a VGBS method despite being non-convertible +- * and their VGBS method always returns 0, causing detect_tablet_mode() +- * to report SW_TABLET_MODE=1 to userspace, which causes issues. +- * These laptops have a DMI chassis_type of 9 ("Laptop"), do not report +- * switches on any devices with a DMI chassis_type of 9. 
+- */ +- if (chassis_type && strcmp(chassis_type, "9") == 0) ++ if (!dmi_check_system(dmi_switches_allow_list)) + return false; + + status = acpi_evaluate_integer(handle, "VGBS", NULL, &vgbs); +diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c +index da794dcfdd928..abcb336a515a1 100644 +--- a/drivers/platform/x86/thinkpad_acpi.c ++++ b/drivers/platform/x86/thinkpad_acpi.c +@@ -2587,7 +2587,7 @@ static void hotkey_compare_and_issue_event(struct tp_nvram_state *oldn, + */ + static int hotkey_kthread(void *data) + { +- struct tp_nvram_state s[2]; ++ struct tp_nvram_state s[2] = { 0 }; + u32 poll_mask, event_mask; + unsigned int si, so; + unsigned long t; +@@ -6863,8 +6863,10 @@ static int __init tpacpi_query_bcl_levels(acpi_handle handle) + list_for_each_entry(child, &device->children, node) { + acpi_status status = acpi_evaluate_object(child->handle, "_BCL", + NULL, &buffer); +- if (ACPI_FAILURE(status)) ++ if (ACPI_FAILURE(status)) { ++ buffer.length = ACPI_ALLOCATE_BUFFER; + continue; ++ } + + obj = (union acpi_object *)buffer.pointer; + if (!obj || (obj->type != ACPI_TYPE_PACKAGE)) { +diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c +index 36ca2cf419bfe..57ab79fbcee95 100644 +--- a/drivers/vhost/vhost.c ++++ b/drivers/vhost/vhost.c +@@ -1299,6 +1299,11 @@ static bool vq_access_ok(struct vhost_virtqueue *vq, unsigned int num, + struct vring_used __user *used) + + { ++ /* If an IOTLB device is present, the vring addresses are ++ * GIOVAs. Access validation occurs at prefetch time. */ ++ if (vq->iotlb) ++ return true; ++ + return access_ok(desc, vhost_get_desc_size(vq, num)) && + access_ok(avail, vhost_get_avail_size(vq, num)) && + access_ok(used, vhost_get_used_size(vq, num)); +@@ -1394,10 +1399,6 @@ bool vhost_vq_access_ok(struct vhost_virtqueue *vq) + if (!vq_log_access_ok(vq, vq->log_base)) + return false; + +- /* Access validation occurs at prefetch time with IOTLB */ +- if (vq->iotlb) +- return true; +- + return vq_access_ok(vq, vq->num, vq->desc, vq->avail, vq->used); + } + EXPORT_SYMBOL_GPL(vhost_vq_access_ok); +@@ -1544,8 +1545,7 @@ static long vhost_vring_set_addr(struct vhost_dev *d, + /* Also validate log access for used ring if enabled. 
*/ + if ((a.flags & (0x1 << VHOST_VRING_F_LOG)) && + !log_access_ok(vq->log_base, a.log_guest_addr, +- sizeof *vq->used + +- vq->num * sizeof *vq->used->ring)) ++ vhost_get_used_size(vq, vq->num))) + return -EINVAL; + } + +diff --git a/drivers/video/console/newport_con.c b/drivers/video/console/newport_con.c +index 2d2ee17052e83..f45de374f165f 100644 +--- a/drivers/video/console/newport_con.c ++++ b/drivers/video/console/newport_con.c +@@ -36,12 +36,6 @@ + + #define FONT_DATA ((unsigned char *)font_vga_8x16.data) + +-/* borrowed from fbcon.c */ +-#define REFCOUNT(fd) (((int *)(fd))[-1]) +-#define FNTSIZE(fd) (((int *)(fd))[-2]) +-#define FNTCHARCNT(fd) (((int *)(fd))[-3]) +-#define FONT_EXTRA_WORDS 3 +- + static unsigned char *font_data[MAX_NR_CONSOLES]; + + static struct newport_regs *npregs; +@@ -523,6 +517,7 @@ static int newport_set_font(int unit, struct console_font *op) + FNTSIZE(new_data) = size; + FNTCHARCNT(new_data) = op->charcount; + REFCOUNT(new_data) = 0; /* usage counter */ ++ FNTSUM(new_data) = 0; + + p = new_data; + for (i = 0; i < op->charcount; i++) { +diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c +index dc7f5c4f0607e..4cf71ee0965a6 100644 +--- a/drivers/video/fbdev/core/fbcon.c ++++ b/drivers/video/fbdev/core/fbcon.c +@@ -2292,6 +2292,9 @@ static int fbcon_get_font(struct vc_data *vc, struct console_font *font) + + if (font->width <= 8) { + j = vc->vc_font.height; ++ if (font->charcount * j > FNTSIZE(fontdata)) ++ return -EINVAL; ++ + for (i = 0; i < font->charcount; i++) { + memcpy(data, fontdata, j); + memset(data + j, 0, 32 - j); +@@ -2300,6 +2303,9 @@ static int fbcon_get_font(struct vc_data *vc, struct console_font *font) + } + } else if (font->width <= 16) { + j = vc->vc_font.height * 2; ++ if (font->charcount * j > FNTSIZE(fontdata)) ++ return -EINVAL; ++ + for (i = 0; i < font->charcount; i++) { + memcpy(data, fontdata, j); + memset(data + j, 0, 64 - j); +@@ -2307,6 +2313,9 @@ static int fbcon_get_font(struct vc_data *vc, struct console_font *font) + fontdata += j; + } + } else if (font->width <= 24) { ++ if (font->charcount * (vc->vc_font.height * sizeof(u32)) > FNTSIZE(fontdata)) ++ return -EINVAL; ++ + for (i = 0; i < font->charcount; i++) { + for (j = 0; j < vc->vc_font.height; j++) { + *data++ = fontdata[0]; +@@ -2319,6 +2328,9 @@ static int fbcon_get_font(struct vc_data *vc, struct console_font *font) + } + } else { + j = vc->vc_font.height * 4; ++ if (font->charcount * j > FNTSIZE(fontdata)) ++ return -EINVAL; ++ + for (i = 0; i < font->charcount; i++) { + memcpy(data, fontdata, j); + memset(data + j, 0, 128 - j); +diff --git a/drivers/video/fbdev/core/fbcon.h b/drivers/video/fbdev/core/fbcon.h +index 78bb14c03643e..9315b360c8981 100644 +--- a/drivers/video/fbdev/core/fbcon.h ++++ b/drivers/video/fbdev/core/fbcon.h +@@ -152,13 +152,6 @@ static inline int attr_col_ec(int shift, struct vc_data *vc, + #define attr_bgcol_ec(bgshift, vc, info) attr_col_ec(bgshift, vc, info, 0) + #define attr_fgcol_ec(fgshift, vc, info) attr_col_ec(fgshift, vc, info, 1) + +-/* Font */ +-#define REFCOUNT(fd) (((int *)(fd))[-1]) +-#define FNTSIZE(fd) (((int *)(fd))[-2]) +-#define FNTCHARCNT(fd) (((int *)(fd))[-3]) +-#define FNTSUM(fd) (((int *)(fd))[-4]) +-#define FONT_EXTRA_WORDS 4 +- + /* + * Scroll Method + */ +diff --git a/drivers/video/fbdev/core/fbcon_rotate.c b/drivers/video/fbdev/core/fbcon_rotate.c +index c0d445294aa7c..ac72d4f85f7d0 100644 +--- a/drivers/video/fbdev/core/fbcon_rotate.c ++++ b/drivers/video/fbdev/core/fbcon_rotate.c +@@ 
-14,6 +14,7 @@ + #include + #include + #include ++#include + #include + #include "fbcon.h" + #include "fbcon_rotate.h" +diff --git a/drivers/video/fbdev/core/tileblit.c b/drivers/video/fbdev/core/tileblit.c +index eb664dbf96f66..adff8d6ffe6f9 100644 +--- a/drivers/video/fbdev/core/tileblit.c ++++ b/drivers/video/fbdev/core/tileblit.c +@@ -13,6 +13,7 @@ + #include + #include + #include ++#include + #include + #include "fbcon.h" + +diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h +index 9a690c10afaa0..23b4f38e23928 100644 +--- a/fs/btrfs/ctree.h ++++ b/fs/btrfs/ctree.h +@@ -2956,6 +2956,8 @@ int btrfs_fdatawrite_range(struct inode *inode, loff_t start, loff_t end); + loff_t btrfs_remap_file_range(struct file *file_in, loff_t pos_in, + struct file *file_out, loff_t pos_out, + loff_t len, unsigned int remap_flags); ++int btrfs_check_can_nocow(struct btrfs_inode *inode, loff_t pos, ++ size_t *write_bytes); + + /* tree-defrag.c */ + int btrfs_defrag_leaves(struct btrfs_trans_handle *trans, +diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c +index 7658f3193175b..388449101705e 100644 +--- a/fs/btrfs/extent-tree.c ++++ b/fs/btrfs/extent-tree.c +@@ -1305,8 +1305,10 @@ static int btrfs_issue_discard(struct block_device *bdev, u64 start, u64 len, + int btrfs_discard_extent(struct btrfs_fs_info *fs_info, u64 bytenr, + u64 num_bytes, u64 *actual_bytes) + { +- int ret; ++ int ret = 0; + u64 discarded_bytes = 0; ++ u64 end = bytenr + num_bytes; ++ u64 cur = bytenr; + struct btrfs_bio *bbio = NULL; + + +@@ -1315,15 +1317,23 @@ int btrfs_discard_extent(struct btrfs_fs_info *fs_info, u64 bytenr, + * associated to its stripes that don't go away while we are discarding. + */ + btrfs_bio_counter_inc_blocked(fs_info); +- /* Tell the block device(s) that the sectors can be discarded */ +- ret = btrfs_map_block(fs_info, BTRFS_MAP_DISCARD, bytenr, &num_bytes, +- &bbio, 0); +- /* Error condition is -ENOMEM */ +- if (!ret) { +- struct btrfs_bio_stripe *stripe = bbio->stripes; ++ while (cur < end) { ++ struct btrfs_bio_stripe *stripe; + int i; + ++ num_bytes = end - cur; ++ /* Tell the block device(s) that the sectors can be discarded */ ++ ret = btrfs_map_block(fs_info, BTRFS_MAP_DISCARD, cur, ++ &num_bytes, &bbio, 0); ++ /* ++ * Error can be -ENOMEM, -ENOENT (no such chunk mapping) or ++ * -EOPNOTSUPP. For any such error, @num_bytes is not updated, ++ * thus we can't continue anyway. ++ */ ++ if (ret < 0) ++ goto out; + ++ stripe = bbio->stripes; + for (i = 0; i < bbio->num_stripes; i++, stripe++) { + u64 bytes; + struct request_queue *req_q; +@@ -1340,10 +1350,19 @@ int btrfs_discard_extent(struct btrfs_fs_info *fs_info, u64 bytenr, + stripe->physical, + stripe->length, + &bytes); +- if (!ret) ++ if (!ret) { + discarded_bytes += bytes; +- else if (ret != -EOPNOTSUPP) +- break; /* Logic errors or -ENOMEM, or -EIO but I don't know how that could happen JDM */ ++ } else if (ret != -EOPNOTSUPP) { ++ /* ++ * Logic errors or -ENOMEM, or -EIO, but ++ * unlikely to happen. ++ * ++ * And since there are two loops, explicitly ++ * go to out to avoid confusion. 
++ */ ++ btrfs_put_bbio(bbio); ++ goto out; ++ } + + /* + * Just in case we get back EOPNOTSUPP for some reason, +@@ -1353,7 +1372,9 @@ int btrfs_discard_extent(struct btrfs_fs_info *fs_info, u64 bytenr, + ret = 0; + } + btrfs_put_bbio(bbio); ++ cur += num_bytes; + } ++out: + btrfs_bio_counter_dec(fs_info); + + if (actual_bytes) +diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c +index 4e4ddd5629e55..4126513e2429c 100644 +--- a/fs/btrfs/file.c ++++ b/fs/btrfs/file.c +@@ -1546,8 +1546,8 @@ lock_and_cleanup_extent_if_need(struct btrfs_inode *inode, struct page **pages, + return ret; + } + +-static noinline int check_can_nocow(struct btrfs_inode *inode, loff_t pos, +- size_t *write_bytes) ++int btrfs_check_can_nocow(struct btrfs_inode *inode, loff_t pos, ++ size_t *write_bytes) + { + struct btrfs_fs_info *fs_info = inode->root->fs_info; + struct btrfs_root *root = inode->root; +@@ -1647,7 +1647,7 @@ static noinline ssize_t btrfs_buffered_write(struct kiocb *iocb, + if (ret < 0) { + if ((BTRFS_I(inode)->flags & (BTRFS_INODE_NODATACOW | + BTRFS_INODE_PREALLOC)) && +- check_can_nocow(BTRFS_I(inode), pos, ++ btrfs_check_can_nocow(BTRFS_I(inode), pos, + &write_bytes) > 0) { + /* + * For nodata cow case, no need to reserve +@@ -1919,13 +1919,28 @@ static ssize_t btrfs_file_write_iter(struct kiocb *iocb, + pos = iocb->ki_pos; + count = iov_iter_count(from); + if (iocb->ki_flags & IOCB_NOWAIT) { ++ size_t nocow_bytes = count; ++ + /* + * We will allocate space in case nodatacow is not set, + * so bail + */ + if (!(BTRFS_I(inode)->flags & (BTRFS_INODE_NODATACOW | + BTRFS_INODE_PREALLOC)) || +- check_can_nocow(BTRFS_I(inode), pos, &count) <= 0) { ++ btrfs_check_can_nocow(BTRFS_I(inode), pos, ++ &nocow_bytes) <= 0) { ++ inode_unlock(inode); ++ return -EAGAIN; ++ } ++ ++ /* check_can_nocow() locks the snapshot lock on success */ ++ btrfs_end_write_no_snapshotting(root); ++ /* ++ * There are holes in the range or parts of the range that must ++ * be COWed (shared extents, RO block groups, etc), so just bail ++ * out. 
++ */ ++ if (nocow_bytes < count) { + inode_unlock(inode); + return -EAGAIN; + } +diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c +index 182e93a5b11d5..67b49b94c9cd6 100644 +--- a/fs/btrfs/inode.c ++++ b/fs/btrfs/inode.c +@@ -5133,11 +5133,13 @@ int btrfs_truncate_block(struct inode *inode, loff_t from, loff_t len, + struct extent_state *cached_state = NULL; + struct extent_changeset *data_reserved = NULL; + char *kaddr; ++ bool only_release_metadata = false; + u32 blocksize = fs_info->sectorsize; + pgoff_t index = from >> PAGE_SHIFT; + unsigned offset = from & (blocksize - 1); + struct page *page; + gfp_t mask = btrfs_alloc_write_mask(mapping); ++ size_t write_bytes = blocksize; + int ret = 0; + u64 block_start; + u64 block_end; +@@ -5149,11 +5151,27 @@ int btrfs_truncate_block(struct inode *inode, loff_t from, loff_t len, + block_start = round_down(from, blocksize); + block_end = block_start + blocksize - 1; + +- ret = btrfs_delalloc_reserve_space(inode, &data_reserved, +- block_start, blocksize); +- if (ret) +- goto out; + ++ ret = btrfs_check_data_free_space(inode, &data_reserved, block_start, ++ blocksize); ++ if (ret < 0) { ++ if ((BTRFS_I(inode)->flags & (BTRFS_INODE_NODATACOW | ++ BTRFS_INODE_PREALLOC)) && ++ btrfs_check_can_nocow(BTRFS_I(inode), block_start, ++ &write_bytes) > 0) { ++ /* For nocow case, no need to reserve data space */ ++ only_release_metadata = true; ++ } else { ++ goto out; ++ } ++ } ++ ret = btrfs_delalloc_reserve_metadata(BTRFS_I(inode), blocksize); ++ if (ret < 0) { ++ if (!only_release_metadata) ++ btrfs_free_reserved_data_space(inode, data_reserved, ++ block_start, blocksize); ++ goto out; ++ } + again: + page = find_or_create_page(mapping, index, mask); + if (!page) { +@@ -5222,14 +5240,26 @@ again: + set_page_dirty(page); + unlock_extent_cached(io_tree, block_start, block_end, &cached_state); + ++ if (only_release_metadata) ++ set_extent_bit(&BTRFS_I(inode)->io_tree, block_start, ++ block_end, EXTENT_NORESERVE, NULL, NULL, ++ GFP_NOFS); ++ + out_unlock: +- if (ret) +- btrfs_delalloc_release_space(inode, data_reserved, block_start, +- blocksize, true); ++ if (ret) { ++ if (only_release_metadata) ++ btrfs_delalloc_release_metadata(BTRFS_I(inode), ++ blocksize, true); ++ else ++ btrfs_delalloc_release_space(inode, data_reserved, ++ block_start, blocksize, true); ++ } + btrfs_delalloc_release_extents(BTRFS_I(inode), blocksize); + unlock_page(page); + put_page(page); + out: ++ if (only_release_metadata) ++ btrfs_end_write_no_snapshotting(BTRFS_I(inode)->root); + extent_changeset_free(data_reserved); + return ret; + } +diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c +index 6ad216e8178e8..b0e5dfb9be7ab 100644 +--- a/fs/btrfs/send.c ++++ b/fs/btrfs/send.c +@@ -1257,12 +1257,21 @@ static int __iterate_backrefs(u64 ino, u64 offset, u64 root, void *ctx_) + */ + if (found->root == bctx->sctx->send_root) { + /* +- * TODO for the moment we don't accept clones from the inode +- * that is currently send. We may change this when +- * BTRFS_IOC_CLONE_RANGE supports cloning from and to the same +- * file. ++ * If the source inode was not yet processed we can't issue a ++ * clone operation, as the source extent does not exist yet at ++ * the destination of the stream. + */ +- if (ino >= bctx->cur_objectid) ++ if (ino > bctx->cur_objectid) ++ return 0; ++ /* ++ * We clone from the inode currently being sent as long as the ++ * source extent is already processed, otherwise we could try ++ * to clone from an extent that does not exist yet at the ++ * destination of the stream. 
++ */ ++ if (ino == bctx->cur_objectid && ++ offset + bctx->extent_len > ++ bctx->sctx->cur_inode_next_write_offset) + return 0; + } + +diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c +index 4ecd6663dfb51..e798caee978e7 100644 +--- a/fs/btrfs/volumes.c ++++ b/fs/btrfs/volumes.c +@@ -5676,12 +5676,13 @@ void btrfs_put_bbio(struct btrfs_bio *bbio) + * replace. + */ + static int __btrfs_map_block_for_discard(struct btrfs_fs_info *fs_info, +- u64 logical, u64 length, ++ u64 logical, u64 *length_ret, + struct btrfs_bio **bbio_ret) + { + struct extent_map *em; + struct map_lookup *map; + struct btrfs_bio *bbio; ++ u64 length = *length_ret; + u64 offset; + u64 stripe_nr; + u64 stripe_nr_end; +@@ -5714,7 +5715,8 @@ static int __btrfs_map_block_for_discard(struct btrfs_fs_info *fs_info, + } + + offset = logical - em->start; +- length = min_t(u64, em->len - offset, length); ++ length = min_t(u64, em->start + em->len - logical, length); ++ *length_ret = length; + + stripe_len = map->stripe_len; + /* +@@ -6129,7 +6131,7 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info, + + if (op == BTRFS_MAP_DISCARD) + return __btrfs_map_block_for_discard(fs_info, logical, +- *length, bbio_ret); ++ length, bbio_ret); + + ret = btrfs_get_io_geometry(fs_info, op, logical, *length, &geom); + if (ret < 0) +diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c +index 64ad466695c55..9a89e5f7c4da3 100644 +--- a/fs/cifs/smb2ops.c ++++ b/fs/cifs/smb2ops.c +@@ -1179,7 +1179,7 @@ smb2_set_ea(const unsigned int xid, struct cifs_tcon *tcon, + rqst[1].rq_iov = si_iov; + rqst[1].rq_nvec = 1; + +- len = sizeof(ea) + ea_name_len + ea_value_len + 1; ++ len = sizeof(*ea) + ea_name_len + ea_value_len + 1; + ea = kzalloc(len, GFP_KERNEL); + if (ea == NULL) { + rc = -ENOMEM; +diff --git a/fs/io_uring.c b/fs/io_uring.c +index 2a539b794f3b0..4127ea027a14d 100644 +--- a/fs/io_uring.c ++++ b/fs/io_uring.c +@@ -340,7 +340,7 @@ struct io_kiocb { + u64 user_data; + u32 result; + u32 sequence; +- struct task_struct *task; ++ struct files_struct *files; + + struct fs_struct *fs; + +@@ -514,12 +514,14 @@ static inline void io_queue_async_work(struct io_ring_ctx *ctx, + } + } + +- req->task = current; ++ if (req->work.func == io_sq_wq_submit_work) { ++ req->files = current->files; + +- spin_lock_irqsave(&ctx->task_lock, flags); +- list_add(&req->task_list, &ctx->task_list); +- req->work_task = NULL; +- spin_unlock_irqrestore(&ctx->task_lock, flags); ++ spin_lock_irqsave(&ctx->task_lock, flags); ++ list_add(&req->task_list, &ctx->task_list); ++ req->work_task = NULL; ++ spin_unlock_irqrestore(&ctx->task_lock, flags); ++ } + + queue_work(ctx->sqo_wq[rw], &req->work); + } +@@ -668,6 +670,7 @@ static struct io_kiocb *io_get_req(struct io_ring_ctx *ctx, + state->cur_req++; + } + ++ INIT_LIST_HEAD(&req->task_list); + req->file = NULL; + req->ctx = ctx; + req->flags = 0; +@@ -2247,6 +2250,12 @@ restart: + + if (!ret) { + req->work_task = current; ++ ++ /* ++ * Pairs with the smp_store_mb() (B) in ++ * io_cancel_async_work(). 
++ */ ++ smp_mb(); /* A */ + if (req->flags & REQ_F_CANCEL) { + ret = -ECANCELED; + goto end_req; +@@ -2266,13 +2275,11 @@ restart: + break; + cond_resched(); + } while (1); +-end_req: +- if (!list_empty(&req->task_list)) { +- spin_lock_irq(&ctx->task_lock); +- list_del_init(&req->task_list); +- spin_unlock_irq(&ctx->task_lock); +- } + } ++end_req: ++ spin_lock_irq(&ctx->task_lock); ++ list_del_init(&req->task_list); ++ spin_unlock_irq(&ctx->task_lock); + + /* drop submission reference */ + io_put_req(req); +@@ -2382,6 +2389,8 @@ static bool io_add_to_prev_work(struct async_list *list, struct io_kiocb *req) + if (ret) { + struct io_ring_ctx *ctx = req->ctx; + ++ req->files = current->files; ++ + spin_lock_irq(&ctx->task_lock); + list_add(&req->task_list, &ctx->task_list); + req->work_task = NULL; +@@ -3712,19 +3721,28 @@ static int io_uring_fasync(int fd, struct file *file, int on) + } + + static void io_cancel_async_work(struct io_ring_ctx *ctx, +- struct task_struct *task) ++ struct files_struct *files) + { ++ struct io_kiocb *req; ++ + if (list_empty(&ctx->task_list)) + return; + + spin_lock_irq(&ctx->task_lock); +- while (!list_empty(&ctx->task_list)) { +- struct io_kiocb *req; + +- req = list_first_entry(&ctx->task_list, struct io_kiocb, task_list); +- list_del_init(&req->task_list); +- req->flags |= REQ_F_CANCEL; +- if (req->work_task && (!task || req->task == task)) ++ list_for_each_entry(req, &ctx->task_list, task_list) { ++ if (files && req->files != files) ++ continue; ++ ++ /* ++ * The below executes an smp_mb(), which matches with the ++ * smp_mb() (A) in io_sq_wq_submit_work() such that either ++ * we store REQ_F_CANCEL flag to req->flags or we see the ++ * req->work_task setted in io_sq_wq_submit_work(). ++ */ ++ smp_store_mb(req->flags, req->flags | REQ_F_CANCEL); /* B */ ++ ++ if (req->work_task) + send_sig(SIGINT, req->work_task, 1); + } + spin_unlock_irq(&ctx->task_lock); +@@ -3749,7 +3767,7 @@ static int io_uring_flush(struct file *file, void *data) + struct io_ring_ctx *ctx = file->private_data; + + if (fatal_signal_pending(current) || (current->flags & PF_EXITING)) +- io_cancel_async_work(ctx, current); ++ io_cancel_async_work(ctx, data); + + return 0; + } +diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h +index f050039ca2c07..e5e2425875953 100644 +--- a/include/asm-generic/vmlinux.lds.h ++++ b/include/asm-generic/vmlinux.lds.h +@@ -599,7 +599,7 @@ + #define BTF \ + .BTF : AT(ADDR(.BTF) - LOAD_OFFSET) { \ + __start_BTF = .; \ +- *(.BTF) \ ++ KEEP(*(.BTF)) \ + __stop_BTF = .; \ + } + #else +diff --git a/include/linux/font.h b/include/linux/font.h +index 51b91c8b69d58..59faa80f586df 100644 +--- a/include/linux/font.h ++++ b/include/linux/font.h +@@ -59,4 +59,17 @@ extern const struct font_desc *get_default_font(int xres, int yres, + /* Max. 
length for the name of a predefined font */ + #define MAX_FONT_NAME 32 + ++/* Extra word getters */ ++#define REFCOUNT(fd) (((int *)(fd))[-1]) ++#define FNTSIZE(fd) (((int *)(fd))[-2]) ++#define FNTCHARCNT(fd) (((int *)(fd))[-3]) ++#define FNTSUM(fd) (((int *)(fd))[-4]) ++ ++#define FONT_EXTRA_WORDS 4 ++ ++struct font_data { ++ unsigned int extra[FONT_EXTRA_WORDS]; ++ const unsigned char data[]; ++} __packed; ++ + #endif /* _VIDEO_FONT_H */ +diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h +index bc45ea1efbf79..c941b73773216 100644 +--- a/include/linux/khugepaged.h ++++ b/include/linux/khugepaged.h +@@ -15,6 +15,7 @@ extern int __khugepaged_enter(struct mm_struct *mm); + extern void __khugepaged_exit(struct mm_struct *mm); + extern int khugepaged_enter_vma_merge(struct vm_area_struct *vma, + unsigned long vm_flags); ++extern void khugepaged_min_free_kbytes_update(void); + #ifdef CONFIG_SHMEM + extern void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr); + #else +@@ -85,6 +86,10 @@ static inline void collapse_pte_mapped_thp(struct mm_struct *mm, + unsigned long addr) + { + } ++ ++static inline void khugepaged_min_free_kbytes_update(void) ++{ ++} + #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ + + #endif /* _LINUX_KHUGEPAGED_H */ +diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h +index 897829651204b..6b4f86dfca382 100644 +--- a/include/linux/mlx5/driver.h ++++ b/include/linux/mlx5/driver.h +@@ -769,6 +769,8 @@ struct mlx5_cmd_work_ent { + u64 ts2; + u16 op; + bool polling; ++ /* Track the max comp handlers */ ++ refcount_t refcnt; + }; + + struct mlx5_pas { +diff --git a/include/linux/net.h b/include/linux/net.h +index 9cafb5f353a97..47dd7973ae9bc 100644 +--- a/include/linux/net.h ++++ b/include/linux/net.h +@@ -21,6 +21,7 @@ + #include + #include + #include ++#include + + #include + +@@ -288,6 +289,21 @@ do { \ + #define net_get_random_once_wait(buf, nbytes) \ + get_random_once_wait((buf), (nbytes)) + ++/* ++ * E.g. XFS meta- & log-data is in slab pages, or bcache meta ++ * data pages, or other high order pages allocated by ++ * __get_free_pages() without __GFP_COMP, which have a page_count ++ * of 0 and/or have PageSlab() set. We cannot use send_page for ++ * those, as that does get_page(); put_page(); and would cause ++ * either a VM_BUG directly, or __page_cache_release a page that ++ * would actually still be referenced by someone, leading to some ++ * obscure delayed Oops somewhere else. 
++ */ ++static inline bool sendpage_ok(struct page *page) ++{ ++ return !PageSlab(page) && page_count(page) >= 1; ++} ++ + int kernel_sendmsg(struct socket *sock, struct msghdr *msg, struct kvec *vec, + size_t num, size_t len); + int kernel_sendmsg_locked(struct sock *sk, struct msghdr *msg, +diff --git a/include/net/act_api.h b/include/net/act_api.h +index 59d05feecfb8a..05b568b92e59d 100644 +--- a/include/net/act_api.h ++++ b/include/net/act_api.h +@@ -156,8 +156,6 @@ int tcf_idr_search(struct tc_action_net *tn, struct tc_action **a, u32 index); + int tcf_idr_create(struct tc_action_net *tn, u32 index, struct nlattr *est, + struct tc_action **a, const struct tc_action_ops *ops, + int bind, bool cpustats); +-void tcf_idr_insert(struct tc_action_net *tn, struct tc_action *a); +- + void tcf_idr_cleanup(struct tc_action_net *tn, u32 index); + int tcf_idr_check_alloc(struct tc_action_net *tn, u32 *index, + struct tc_action **a, int bind); +diff --git a/include/net/xfrm.h b/include/net/xfrm.h +index 12aa6e15e43f6..c00b9ae71ae40 100644 +--- a/include/net/xfrm.h ++++ b/include/net/xfrm.h +@@ -1773,21 +1773,17 @@ static inline unsigned int xfrm_replay_state_esn_len(struct xfrm_replay_state_es + static inline int xfrm_replay_clone(struct xfrm_state *x, + struct xfrm_state *orig) + { +- x->replay_esn = kzalloc(xfrm_replay_state_esn_len(orig->replay_esn), ++ ++ x->replay_esn = kmemdup(orig->replay_esn, ++ xfrm_replay_state_esn_len(orig->replay_esn), + GFP_KERNEL); + if (!x->replay_esn) + return -ENOMEM; +- +- x->replay_esn->bmp_len = orig->replay_esn->bmp_len; +- x->replay_esn->replay_window = orig->replay_esn->replay_window; +- +- x->preplay_esn = kmemdup(x->replay_esn, +- xfrm_replay_state_esn_len(x->replay_esn), ++ x->preplay_esn = kmemdup(orig->preplay_esn, ++ xfrm_replay_state_esn_len(orig->preplay_esn), + GFP_KERNEL); +- if (!x->preplay_esn) { +- kfree(x->replay_esn); ++ if (!x->preplay_esn) + return -ENOMEM; +- } + + return 0; + } +diff --git a/kernel/bpf/sysfs_btf.c b/kernel/bpf/sysfs_btf.c +index 3b495773de5ae..11b3380887fa0 100644 +--- a/kernel/bpf/sysfs_btf.c ++++ b/kernel/bpf/sysfs_btf.c +@@ -30,15 +30,15 @@ static struct kobject *btf_kobj; + + static int __init btf_vmlinux_init(void) + { +- if (!__start_BTF) ++ bin_attr_btf_vmlinux.size = __stop_BTF - __start_BTF; ++ ++ if (!__start_BTF || bin_attr_btf_vmlinux.size == 0) + return 0; + + btf_kobj = kobject_create_and_add("btf", kernel_kobj); + if (!btf_kobj) + return -ENOMEM; + +- bin_attr_btf_vmlinux.size = __stop_BTF - __start_BTF; +- + return sysfs_create_bin_file(btf_kobj, &bin_attr_btf_vmlinux); + } + +diff --git a/kernel/events/core.c b/kernel/events/core.c +index 47646050efa0c..09e1cc22221fe 100644 +--- a/kernel/events/core.c ++++ b/kernel/events/core.c +@@ -97,7 +97,7 @@ static void remote_function(void *data) + * retry due to any failures in smp_call_function_single(), such as if the + * task_cpu() goes offline concurrently. + * +- * returns @func return value or -ESRCH when the process isn't running ++ * returns @func return value or -ESRCH or -ENXIO when the process isn't running + */ + static int + task_function_call(struct task_struct *p, remote_function_f func, void *info) +@@ -113,7 +113,8 @@ task_function_call(struct task_struct *p, remote_function_f func, void *info) + for (;;) { + ret = smp_call_function_single(task_cpu(p), remote_function, + &data, 1); +- ret = !ret ? 
data.ret : -EAGAIN; ++ if (!ret) ++ ret = data.ret; + + if (ret != -EAGAIN) + break; +diff --git a/kernel/umh.c b/kernel/umh.c +index 3474d6aa55d83..b8c524dcc76f1 100644 +--- a/kernel/umh.c ++++ b/kernel/umh.c +@@ -14,6 +14,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -75,6 +76,14 @@ static int call_usermodehelper_exec_async(void *data) + flush_signal_handlers(current, 1); + spin_unlock_irq(&current->sighand->siglock); + ++ /* ++ * Initial kernel threads share ther FS with init, in order to ++ * get the init root directory. But we've now created a new ++ * thread that is going to execve a user process and has its own ++ * 'struct fs_struct'. Reset umask to the default. ++ */ ++ current->fs->umask = 0022; ++ + /* + * Our parent (unbound workqueue) runs with elevated scheduling + * priority. Avoid propagating that into the userspace child. +diff --git a/lib/fonts/font_10x18.c b/lib/fonts/font_10x18.c +index 532f0ff89a962..0e2deac97da0d 100644 +--- a/lib/fonts/font_10x18.c ++++ b/lib/fonts/font_10x18.c +@@ -8,8 +8,8 @@ + + #define FONTDATAMAX 9216 + +-static const unsigned char fontdata_10x18[FONTDATAMAX] = { +- ++static struct font_data fontdata_10x18 = { ++ { 0, 0, FONTDATAMAX, 0 }, { + /* 0 0x00 '^@' */ + 0x00, 0x00, /* 0000000000 */ + 0x00, 0x00, /* 0000000000 */ +@@ -5129,8 +5129,7 @@ static const unsigned char fontdata_10x18[FONTDATAMAX] = { + 0x00, 0x00, /* 0000000000 */ + 0x00, 0x00, /* 0000000000 */ + 0x00, 0x00, /* 0000000000 */ +- +-}; ++} }; + + + const struct font_desc font_10x18 = { +@@ -5138,7 +5137,7 @@ const struct font_desc font_10x18 = { + .name = "10x18", + .width = 10, + .height = 18, +- .data = fontdata_10x18, ++ .data = fontdata_10x18.data, + #ifdef __sparc__ + .pref = 5, + #else +diff --git a/lib/fonts/font_6x10.c b/lib/fonts/font_6x10.c +index 09b2cc03435b9..87da8acd07db0 100644 +--- a/lib/fonts/font_6x10.c ++++ b/lib/fonts/font_6x10.c +@@ -1,8 +1,10 @@ + // SPDX-License-Identifier: GPL-2.0 + #include + +-static const unsigned char fontdata_6x10[] = { ++#define FONTDATAMAX 2560 + ++static struct font_data fontdata_6x10 = { ++ { 0, 0, FONTDATAMAX, 0 }, { + /* 0 0x00 '^@' */ + 0x00, /* 00000000 */ + 0x00, /* 00000000 */ +@@ -3074,14 +3076,13 @@ static const unsigned char fontdata_6x10[] = { + 0x00, /* 00000000 */ + 0x00, /* 00000000 */ + 0x00, /* 00000000 */ +- +-}; ++} }; + + const struct font_desc font_6x10 = { + .idx = FONT6x10_IDX, + .name = "6x10", + .width = 6, + .height = 10, +- .data = fontdata_6x10, ++ .data = fontdata_6x10.data, + .pref = 0, + }; +diff --git a/lib/fonts/font_6x11.c b/lib/fonts/font_6x11.c +index d7136c33f1f01..5e975dfa10a53 100644 +--- a/lib/fonts/font_6x11.c ++++ b/lib/fonts/font_6x11.c +@@ -9,8 +9,8 @@ + + #define FONTDATAMAX (11*256) + +-static const unsigned char fontdata_6x11[FONTDATAMAX] = { +- ++static struct font_data fontdata_6x11 = { ++ { 0, 0, FONTDATAMAX, 0 }, { + /* 0 0x00 '^@' */ + 0x00, /* 00000000 */ + 0x00, /* 00000000 */ +@@ -3338,8 +3338,7 @@ static const unsigned char fontdata_6x11[FONTDATAMAX] = { + 0x00, /* 00000000 */ + 0x00, /* 00000000 */ + 0x00, /* 00000000 */ +- +-}; ++} }; + + + const struct font_desc font_vga_6x11 = { +@@ -3347,7 +3346,7 @@ const struct font_desc font_vga_6x11 = { + .name = "ProFont6x11", + .width = 6, + .height = 11, +- .data = fontdata_6x11, ++ .data = fontdata_6x11.data, + /* Try avoiding this font if possible unless on MAC */ + .pref = -2000, + }; +diff --git a/lib/fonts/font_7x14.c b/lib/fonts/font_7x14.c +index 89752d0b23e8b..86d298f385058 100644 +--- 
a/lib/fonts/font_7x14.c ++++ b/lib/fonts/font_7x14.c +@@ -8,8 +8,8 @@ + + #define FONTDATAMAX 3584 + +-static const unsigned char fontdata_7x14[FONTDATAMAX] = { +- ++static struct font_data fontdata_7x14 = { ++ { 0, 0, FONTDATAMAX, 0 }, { + /* 0 0x00 '^@' */ + 0x00, /* 0000000 */ + 0x00, /* 0000000 */ +@@ -4105,8 +4105,7 @@ static const unsigned char fontdata_7x14[FONTDATAMAX] = { + 0x00, /* 0000000 */ + 0x00, /* 0000000 */ + 0x00, /* 0000000 */ +- +-}; ++} }; + + + const struct font_desc font_7x14 = { +@@ -4114,6 +4113,6 @@ const struct font_desc font_7x14 = { + .name = "7x14", + .width = 7, + .height = 14, +- .data = fontdata_7x14, ++ .data = fontdata_7x14.data, + .pref = 0, + }; +diff --git a/lib/fonts/font_8x16.c b/lib/fonts/font_8x16.c +index b7ab1f5fbdb8a..37cedd36ca5ef 100644 +--- a/lib/fonts/font_8x16.c ++++ b/lib/fonts/font_8x16.c +@@ -10,8 +10,8 @@ + + #define FONTDATAMAX 4096 + +-static const unsigned char fontdata_8x16[FONTDATAMAX] = { +- ++static struct font_data fontdata_8x16 = { ++ { 0, 0, FONTDATAMAX, 0 }, { + /* 0 0x00 '^@' */ + 0x00, /* 00000000 */ + 0x00, /* 00000000 */ +@@ -4619,8 +4619,7 @@ static const unsigned char fontdata_8x16[FONTDATAMAX] = { + 0x00, /* 00000000 */ + 0x00, /* 00000000 */ + 0x00, /* 00000000 */ +- +-}; ++} }; + + + const struct font_desc font_vga_8x16 = { +@@ -4628,7 +4627,7 @@ const struct font_desc font_vga_8x16 = { + .name = "VGA8x16", + .width = 8, + .height = 16, +- .data = fontdata_8x16, ++ .data = fontdata_8x16.data, + .pref = 0, + }; + EXPORT_SYMBOL(font_vga_8x16); +diff --git a/lib/fonts/font_8x8.c b/lib/fonts/font_8x8.c +index 2328ebc8bab5d..8ab695538395d 100644 +--- a/lib/fonts/font_8x8.c ++++ b/lib/fonts/font_8x8.c +@@ -9,8 +9,8 @@ + + #define FONTDATAMAX 2048 + +-static const unsigned char fontdata_8x8[FONTDATAMAX] = { +- ++static struct font_data fontdata_8x8 = { ++ { 0, 0, FONTDATAMAX, 0 }, { + /* 0 0x00 '^@' */ + 0x00, /* 00000000 */ + 0x00, /* 00000000 */ +@@ -2570,8 +2570,7 @@ static const unsigned char fontdata_8x8[FONTDATAMAX] = { + 0x00, /* 00000000 */ + 0x00, /* 00000000 */ + 0x00, /* 00000000 */ +- +-}; ++} }; + + + const struct font_desc font_vga_8x8 = { +@@ -2579,6 +2578,6 @@ const struct font_desc font_vga_8x8 = { + .name = "VGA8x8", + .width = 8, + .height = 8, +- .data = fontdata_8x8, ++ .data = fontdata_8x8.data, + .pref = 0, + }; +diff --git a/lib/fonts/font_acorn_8x8.c b/lib/fonts/font_acorn_8x8.c +index 0ff0e85d4481b..069b3e80c4344 100644 +--- a/lib/fonts/font_acorn_8x8.c ++++ b/lib/fonts/font_acorn_8x8.c +@@ -3,7 +3,10 @@ + + #include + +-static const unsigned char acorndata_8x8[] = { ++#define FONTDATAMAX 2048 ++ ++static struct font_data acorndata_8x8 = { ++{ 0, 0, FONTDATAMAX, 0 }, { + /* 00 */ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* ^@ */ + /* 01 */ 0x7e, 0x81, 0xa5, 0x81, 0xbd, 0x99, 0x81, 0x7e, /* ^A */ + /* 02 */ 0x7e, 0xff, 0xbd, 0xff, 0xc3, 0xe7, 0xff, 0x7e, /* ^B */ +@@ -260,14 +263,14 @@ static const unsigned char acorndata_8x8[] = { + /* FD */ 0x38, 0x04, 0x18, 0x20, 0x3c, 0x00, 0x00, 0x00, + /* FE */ 0x00, 0x00, 0x3c, 0x3c, 0x3c, 0x3c, 0x00, 0x00, + /* FF */ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 +-}; ++} }; + + const struct font_desc font_acorn_8x8 = { + .idx = ACORN8x8_IDX, + .name = "Acorn8x8", + .width = 8, + .height = 8, +- .data = acorndata_8x8, ++ .data = acorndata_8x8.data, + #ifdef CONFIG_ARCH_ACORN + .pref = 20, + #else +diff --git a/lib/fonts/font_mini_4x6.c b/lib/fonts/font_mini_4x6.c +index 838caa1cfef70..1449876c6a270 100644 +--- a/lib/fonts/font_mini_4x6.c ++++ 
b/lib/fonts/font_mini_4x6.c +@@ -43,8 +43,8 @@ __END__; + + #define FONTDATAMAX 1536 + +-static const unsigned char fontdata_mini_4x6[FONTDATAMAX] = { +- ++static struct font_data fontdata_mini_4x6 = { ++ { 0, 0, FONTDATAMAX, 0 }, { + /*{*/ + /* Char 0: ' ' */ + 0xee, /*= [*** ] */ +@@ -2145,14 +2145,14 @@ static const unsigned char fontdata_mini_4x6[FONTDATAMAX] = { + 0xee, /*= [*** ] */ + 0x00, /*= [ ] */ + /*}*/ +-}; ++} }; + + const struct font_desc font_mini_4x6 = { + .idx = MINI4x6_IDX, + .name = "MINI4x6", + .width = 4, + .height = 6, +- .data = fontdata_mini_4x6, ++ .data = fontdata_mini_4x6.data, + .pref = 3, + }; + +diff --git a/lib/fonts/font_pearl_8x8.c b/lib/fonts/font_pearl_8x8.c +index b15d3c342c5bb..32d65551e7ed2 100644 +--- a/lib/fonts/font_pearl_8x8.c ++++ b/lib/fonts/font_pearl_8x8.c +@@ -14,8 +14,8 @@ + + #define FONTDATAMAX 2048 + +-static const unsigned char fontdata_pearl8x8[FONTDATAMAX] = { +- ++static struct font_data fontdata_pearl8x8 = { ++ { 0, 0, FONTDATAMAX, 0 }, { + /* 0 0x00 '^@' */ + 0x00, /* 00000000 */ + 0x00, /* 00000000 */ +@@ -2575,14 +2575,13 @@ static const unsigned char fontdata_pearl8x8[FONTDATAMAX] = { + 0x00, /* 00000000 */ + 0x00, /* 00000000 */ + 0x00, /* 00000000 */ +- +-}; ++} }; + + const struct font_desc font_pearl_8x8 = { + .idx = PEARL8x8_IDX, + .name = "PEARL8x8", + .width = 8, + .height = 8, +- .data = fontdata_pearl8x8, ++ .data = fontdata_pearl8x8.data, + .pref = 2, + }; +diff --git a/lib/fonts/font_sun12x22.c b/lib/fonts/font_sun12x22.c +index 955d6eee3959d..641a6b4dca424 100644 +--- a/lib/fonts/font_sun12x22.c ++++ b/lib/fonts/font_sun12x22.c +@@ -3,8 +3,8 @@ + + #define FONTDATAMAX 11264 + +-static const unsigned char fontdata_sun12x22[FONTDATAMAX] = { +- ++static struct font_data fontdata_sun12x22 = { ++ { 0, 0, FONTDATAMAX, 0 }, { + /* 0 0x00 '^@' */ + 0x00, 0x00, /* 000000000000 */ + 0x00, 0x00, /* 000000000000 */ +@@ -6148,8 +6148,7 @@ static const unsigned char fontdata_sun12x22[FONTDATAMAX] = { + 0x00, 0x00, /* 000000000000 */ + 0x00, 0x00, /* 000000000000 */ + 0x00, 0x00, /* 000000000000 */ +- +-}; ++} }; + + + const struct font_desc font_sun_12x22 = { +@@ -6157,7 +6156,7 @@ const struct font_desc font_sun_12x22 = { + .name = "SUN12x22", + .width = 12, + .height = 22, +- .data = fontdata_sun12x22, ++ .data = fontdata_sun12x22.data, + #ifdef __sparc__ + .pref = 5, + #else +diff --git a/lib/fonts/font_sun8x16.c b/lib/fonts/font_sun8x16.c +index 03d71e53954ab..193fe6d988e08 100644 +--- a/lib/fonts/font_sun8x16.c ++++ b/lib/fonts/font_sun8x16.c +@@ -3,7 +3,8 @@ + + #define FONTDATAMAX 4096 + +-static const unsigned char fontdata_sun8x16[FONTDATAMAX] = { ++static struct font_data fontdata_sun8x16 = { ++{ 0, 0, FONTDATAMAX, 0 }, { + /* */ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00, + /* */ 0x00,0x00,0x7e,0x81,0xa5,0x81,0x81,0xbd,0x99,0x81,0x81,0x7e,0x00,0x00,0x00,0x00, + /* */ 0x00,0x00,0x7e,0xff,0xdb,0xff,0xff,0xc3,0xe7,0xff,0xff,0x7e,0x00,0x00,0x00,0x00, +@@ -260,14 +261,14 @@ static const unsigned char fontdata_sun8x16[FONTDATAMAX] = { + /* */ 0x00,0x70,0xd8,0x30,0x60,0xc8,0xf8,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00, + /* */ 0x00,0x00,0x00,0x00,0x7c,0x7c,0x7c,0x7c,0x7c,0x7c,0x7c,0x00,0x00,0x00,0x00,0x00, + /* */ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00, +-}; ++} }; + + const struct font_desc font_sun_8x16 = { + .idx = SUN8x16_IDX, + .name = "SUN8x16", + .width = 8, + .height = 16, +- .data = fontdata_sun8x16, ++ .data = fontdata_sun8x16.data, + 
#ifdef __sparc__ + .pref = 10, + #else +diff --git a/lib/fonts/font_ter16x32.c b/lib/fonts/font_ter16x32.c +index 3f0cf1ccdf3a4..91b9c283bd9cc 100644 +--- a/lib/fonts/font_ter16x32.c ++++ b/lib/fonts/font_ter16x32.c +@@ -4,8 +4,8 @@ + + #define FONTDATAMAX 16384 + +-static const unsigned char fontdata_ter16x32[FONTDATAMAX] = { +- ++static struct font_data fontdata_ter16x32 = { ++ { 0, 0, FONTDATAMAX, 0 }, { + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x7f, 0xfc, 0x7f, 0xfc, + 0x70, 0x1c, 0x70, 0x1c, 0x70, 0x1c, 0x70, 0x1c, +@@ -2054,8 +2054,7 @@ static const unsigned char fontdata_ter16x32[FONTDATAMAX] = { + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* 255 */ +- +-}; ++} }; + + + const struct font_desc font_ter_16x32 = { +@@ -2063,7 +2062,7 @@ const struct font_desc font_ter_16x32 = { + .name = "TER16x32", + .width = 16, + .height = 32, +- .data = fontdata_ter16x32, ++ .data = fontdata_ter16x32.data, + #ifdef __sparc__ + .pref = 5, + #else +diff --git a/mm/khugepaged.c b/mm/khugepaged.c +index 9ec618d5ea557..f0d7e6483ba32 100644 +--- a/mm/khugepaged.c ++++ b/mm/khugepaged.c +@@ -54,6 +54,9 @@ enum scan_result { + #define CREATE_TRACE_POINTS + #include + ++static struct task_struct *khugepaged_thread __read_mostly; ++static DEFINE_MUTEX(khugepaged_mutex); ++ + /* default scan 8*512 pte (or vmas) every 30 second */ + static unsigned int khugepaged_pages_to_scan __read_mostly; + static unsigned int khugepaged_pages_collapsed; +@@ -832,6 +835,18 @@ static struct page *khugepaged_alloc_hugepage(bool *wait) + + static bool khugepaged_prealloc_page(struct page **hpage, bool *wait) + { ++ /* ++ * If the hpage allocated earlier was briefly exposed in page cache ++ * before collapse_file() failed, it is possible that racing lookups ++ * have not yet completed, and would then be unpleasantly surprised by ++ * finding the hpage reused for the same mapping at a different offset. ++ * Just release the previous allocation if there is any danger of that. 
++ */ ++ if (*hpage && page_count(*hpage) > 1) { ++ put_page(*hpage); ++ *hpage = NULL; ++ } ++ + if (!*hpage) + *hpage = khugepaged_alloc_hugepage(wait); + +@@ -2165,8 +2180,6 @@ static void set_recommended_min_free_kbytes(void) + + int start_stop_khugepaged(void) + { +- static struct task_struct *khugepaged_thread __read_mostly; +- static DEFINE_MUTEX(khugepaged_mutex); + int err = 0; + + mutex_lock(&khugepaged_mutex); +@@ -2193,3 +2206,11 @@ fail: + mutex_unlock(&khugepaged_mutex); + return err; + } ++ ++void khugepaged_min_free_kbytes_update(void) ++{ ++ mutex_lock(&khugepaged_mutex); ++ if (khugepaged_enabled() && khugepaged_thread) ++ set_recommended_min_free_kbytes(); ++ mutex_unlock(&khugepaged_mutex); ++} +diff --git a/mm/page_alloc.c b/mm/page_alloc.c +index 373ca57807589..aff0bb4629bdf 100644 +--- a/mm/page_alloc.c ++++ b/mm/page_alloc.c +@@ -68,6 +68,7 @@ + #include + #include + #include ++#include + + #include + #include +@@ -7870,6 +7871,8 @@ int __meminit init_per_zone_wmark_min(void) + setup_min_slab_ratio(); + #endif + ++ khugepaged_min_free_kbytes_update(); ++ + return 0; + } + postcore_initcall(init_per_zone_wmark_min) +diff --git a/net/core/skbuff.c b/net/core/skbuff.c +index 08d9915d50c0f..466d6273da9f2 100644 +--- a/net/core/skbuff.c ++++ b/net/core/skbuff.c +@@ -5515,7 +5515,7 @@ int skb_mpls_push(struct sk_buff *skb, __be32 mpls_lse, __be16 mpls_proto, + lse->label_stack_entry = mpls_lse; + skb_postpush_rcsum(skb, lse, MPLS_HLEN); + +- if (ethernet) ++ if (ethernet && mac_len >= ETH_HLEN) + skb_mod_eth_type(skb, eth_hdr(skb), mpls_proto); + skb->protocol = mpls_proto; + +@@ -5555,7 +5555,7 @@ int skb_mpls_pop(struct sk_buff *skb, __be16 next_proto, int mac_len, + skb_reset_mac_header(skb); + skb_set_network_header(skb, mac_len); + +- if (ethernet) { ++ if (ethernet && mac_len >= ETH_HLEN) { + struct ethhdr *hdr; + + /* use mpls_hdr() to get ethertype to account for VLANs. */ +diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c +index 2ffa33b5ef404..97f2b11ce2034 100644 +--- a/net/ipv4/tcp.c ++++ b/net/ipv4/tcp.c +@@ -971,7 +971,8 @@ ssize_t do_tcp_sendpages(struct sock *sk, struct page *page, int offset, + long timeo = sock_sndtimeo(sk, flags & MSG_DONTWAIT); + + if (IS_ENABLED(CONFIG_DEBUG_VM) && +- WARN_ONCE(PageSlab(page), "page must not be a Slab one")) ++ WARN_ONCE(!sendpage_ok(page), ++ "page must not be a Slab one and have page_count > 0")) + return -EINVAL; + + /* Wait for a connection to finish. 
One exception is TCP Fast Open +diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c +index 35f963690a70e..87a5037a9cb3e 100644 +--- a/net/ipv4/tcp_ipv4.c ++++ b/net/ipv4/tcp_ipv4.c +@@ -1719,12 +1719,12 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb) + + __skb_pull(skb, hdrlen); + if (skb_try_coalesce(tail, skb, &fragstolen, &delta)) { +- thtail->window = th->window; +- + TCP_SKB_CB(tail)->end_seq = TCP_SKB_CB(skb)->end_seq; + +- if (after(TCP_SKB_CB(skb)->ack_seq, TCP_SKB_CB(tail)->ack_seq)) ++ if (likely(!before(TCP_SKB_CB(skb)->ack_seq, TCP_SKB_CB(tail)->ack_seq))) { + TCP_SKB_CB(tail)->ack_seq = TCP_SKB_CB(skb)->ack_seq; ++ thtail->window = th->window; ++ } + + /* We have to update both TCP_SKB_CB(tail)->tcp_flags and + * thtail->fin, so that the fast path in tcp_rcv_established() +diff --git a/net/openvswitch/conntrack.c b/net/openvswitch/conntrack.c +index c86e404cd65bb..d06d7d58eaf27 100644 +--- a/net/openvswitch/conntrack.c ++++ b/net/openvswitch/conntrack.c +@@ -905,15 +905,19 @@ static int ovs_ct_nat(struct net *net, struct sw_flow_key *key, + } + err = ovs_ct_nat_execute(skb, ct, ctinfo, &info->range, maniptype); + +- if (err == NF_ACCEPT && +- ct->status & IPS_SRC_NAT && ct->status & IPS_DST_NAT) { +- if (maniptype == NF_NAT_MANIP_SRC) +- maniptype = NF_NAT_MANIP_DST; +- else +- maniptype = NF_NAT_MANIP_SRC; +- +- err = ovs_ct_nat_execute(skb, ct, ctinfo, &info->range, +- maniptype); ++ if (err == NF_ACCEPT && ct->status & IPS_DST_NAT) { ++ if (ct->status & IPS_SRC_NAT) { ++ if (maniptype == NF_NAT_MANIP_SRC) ++ maniptype = NF_NAT_MANIP_DST; ++ else ++ maniptype = NF_NAT_MANIP_SRC; ++ ++ err = ovs_ct_nat_execute(skb, ct, ctinfo, &info->range, ++ maniptype); ++ } else if (CTINFO2DIR(ctinfo) == IP_CT_DIR_ORIGINAL) { ++ err = ovs_ct_nat_execute(skb, ct, ctinfo, NULL, ++ NF_NAT_MANIP_SRC); ++ } + } + + /* Mark NAT done if successful and update the flow key. 
*/ +diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c +index 06fcff2ebbba1..b5864683f200d 100644 +--- a/net/rxrpc/conn_event.c ++++ b/net/rxrpc/conn_event.c +@@ -341,18 +341,18 @@ static int rxrpc_process_event(struct rxrpc_connection *conn, + return ret; + + spin_lock(&conn->channel_lock); +- spin_lock(&conn->state_lock); ++ spin_lock_bh(&conn->state_lock); + + if (conn->state == RXRPC_CONN_SERVICE_CHALLENGING) { + conn->state = RXRPC_CONN_SERVICE; +- spin_unlock(&conn->state_lock); ++ spin_unlock_bh(&conn->state_lock); + for (loop = 0; loop < RXRPC_MAXCALLS; loop++) + rxrpc_call_is_secure( + rcu_dereference_protected( + conn->channels[loop].call, + lockdep_is_held(&conn->channel_lock))); + } else { +- spin_unlock(&conn->state_lock); ++ spin_unlock_bh(&conn->state_lock); + } + + spin_unlock(&conn->channel_lock); +diff --git a/net/rxrpc/key.c b/net/rxrpc/key.c +index 0c98313dd7a8c..85a9ff8cd236a 100644 +--- a/net/rxrpc/key.c ++++ b/net/rxrpc/key.c +@@ -903,7 +903,7 @@ int rxrpc_request_key(struct rxrpc_sock *rx, char __user *optval, int optlen) + + _enter(""); + +- if (optlen <= 0 || optlen > PAGE_SIZE - 1) ++ if (optlen <= 0 || optlen > PAGE_SIZE - 1 || rx->securities) + return -EINVAL; + + description = memdup_user_nul(optval, optlen); +@@ -941,7 +941,7 @@ int rxrpc_server_keyring(struct rxrpc_sock *rx, char __user *optval, + if (IS_ERR(description)) + return PTR_ERR(description); + +- key = request_key_net(&key_type_keyring, description, sock_net(&rx->sk), NULL); ++ key = request_key(&key_type_keyring, description, NULL); + if (IS_ERR(key)) { + kfree(description); + _leave(" = %ld", PTR_ERR(key)); +@@ -1073,7 +1073,7 @@ static long rxrpc_read(const struct key *key, + + switch (token->security_index) { + case RXRPC_SECURITY_RXKAD: +- toksize += 9 * 4; /* viceid, kvno, key*2 + len, begin, ++ toksize += 8 * 4; /* viceid, kvno, key*2, begin, + * end, primary, tktlen */ + toksize += RND(token->kad->ticket_len); + break; +@@ -1108,7 +1108,8 @@ static long rxrpc_read(const struct key *key, + break; + + default: /* we have a ticket we can't encode */ +- BUG(); ++ pr_err("Unsupported key token type (%u)\n", ++ token->security_index); + continue; + } + +@@ -1139,6 +1140,14 @@ static long rxrpc_read(const struct key *key, + memcpy((u8 *)xdr + _l, &zero, 4 - (_l & 3)); \ + xdr += (_l + 3) >> 2; \ + } while(0) ++#define ENCODE_BYTES(l, s) \ ++ do { \ ++ u32 _l = (l); \ ++ memcpy(xdr, (s), _l); \ ++ if (_l & 3) \ ++ memcpy((u8 *)xdr + _l, &zero, 4 - (_l & 3)); \ ++ xdr += (_l + 3) >> 2; \ ++ } while(0) + #define ENCODE64(x) \ + do { \ + __be64 y = cpu_to_be64(x); \ +@@ -1166,7 +1175,7 @@ static long rxrpc_read(const struct key *key, + case RXRPC_SECURITY_RXKAD: + ENCODE(token->kad->vice_id); + ENCODE(token->kad->kvno); +- ENCODE_DATA(8, token->kad->session_key); ++ ENCODE_BYTES(8, token->kad->session_key); + ENCODE(token->kad->start); + ENCODE(token->kad->expiry); + ENCODE(token->kad->primary_flag); +@@ -1216,7 +1225,6 @@ static long rxrpc_read(const struct key *key, + break; + + default: +- BUG(); + break; + } + +diff --git a/net/sched/act_api.c b/net/sched/act_api.c +index 69d4676a402f5..4a5ef2adb2e57 100644 +--- a/net/sched/act_api.c ++++ b/net/sched/act_api.c +@@ -303,6 +303,8 @@ static int tcf_del_walker(struct tcf_idrinfo *idrinfo, struct sk_buff *skb, + + mutex_lock(&idrinfo->lock); + idr_for_each_entry_ul(idr, p, tmp, id) { ++ if (IS_ERR(p)) ++ continue; + ret = tcf_idr_release_unsafe(p); + if (ret == ACT_P_DELETED) { + module_put(ops->owner); +@@ -451,17 +453,6 @@ err1: + } + 
EXPORT_SYMBOL(tcf_idr_create); + +-void tcf_idr_insert(struct tc_action_net *tn, struct tc_action *a) +-{ +- struct tcf_idrinfo *idrinfo = tn->idrinfo; +- +- mutex_lock(&idrinfo->lock); +- /* Replace ERR_PTR(-EBUSY) allocated by tcf_idr_check_alloc */ +- WARN_ON(!IS_ERR(idr_replace(&idrinfo->action_idr, a, a->tcfa_index))); +- mutex_unlock(&idrinfo->lock); +-} +-EXPORT_SYMBOL(tcf_idr_insert); +- + /* Cleanup idr index that was allocated but not initialized. */ + + void tcf_idr_cleanup(struct tc_action_net *tn, u32 index) +@@ -839,6 +830,26 @@ static const struct nla_policy tcf_action_policy[TCA_ACT_MAX + 1] = { + [TCA_ACT_OPTIONS] = { .type = NLA_NESTED }, + }; + ++static void tcf_idr_insert_many(struct tc_action *actions[]) ++{ ++ int i; ++ ++ for (i = 0; i < TCA_ACT_MAX_PRIO; i++) { ++ struct tc_action *a = actions[i]; ++ struct tcf_idrinfo *idrinfo; ++ ++ if (!a) ++ continue; ++ idrinfo = a->idrinfo; ++ mutex_lock(&idrinfo->lock); ++ /* Replace ERR_PTR(-EBUSY) allocated by tcf_idr_check_alloc if ++ * it is just created, otherwise this is just a nop. ++ */ ++ idr_replace(&idrinfo->action_idr, a, a->tcfa_index); ++ mutex_unlock(&idrinfo->lock); ++ } ++} ++ + struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp, + struct nlattr *nla, struct nlattr *est, + char *name, int ovr, int bind, +@@ -921,6 +932,13 @@ struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp, + if (err < 0) + goto err_mod; + ++ if (TC_ACT_EXT_CMP(a->tcfa_action, TC_ACT_GOTO_CHAIN) && ++ !rcu_access_pointer(a->goto_chain)) { ++ tcf_action_destroy_1(a, bind); ++ NL_SET_ERR_MSG(extack, "can't use goto chain with NULL chain"); ++ return ERR_PTR(-EINVAL); ++ } ++ + if (!name && tb[TCA_ACT_COOKIE]) + tcf_set_action_cookie(&a->act_cookie, cookie); + +@@ -931,13 +949,6 @@ struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp, + if (err != ACT_P_CREATED) + module_put(a_o->owner); + +- if (TC_ACT_EXT_CMP(a->tcfa_action, TC_ACT_GOTO_CHAIN) && +- !rcu_access_pointer(a->goto_chain)) { +- tcf_action_destroy_1(a, bind); +- NL_SET_ERR_MSG(extack, "can't use goto chain with NULL chain"); +- return ERR_PTR(-EINVAL); +- } +- + return a; + + err_mod: +@@ -981,6 +992,11 @@ int tcf_action_init(struct net *net, struct tcf_proto *tp, struct nlattr *nla, + actions[i - 1] = act; + } + ++ /* We have to commit them all together, because if any error happened in ++ * between, we could not handle the failure gracefully. 
++ */ ++ tcf_idr_insert_many(actions); ++ + *attr_size = tcf_action_full_attrs_size(sz); + return i - 1; + +diff --git a/net/sched/act_bpf.c b/net/sched/act_bpf.c +index 04b7bd4ec7513..b7d83d01841b1 100644 +--- a/net/sched/act_bpf.c ++++ b/net/sched/act_bpf.c +@@ -361,9 +361,7 @@ static int tcf_bpf_init(struct net *net, struct nlattr *nla, + if (goto_ch) + tcf_chain_put_by_act(goto_ch); + +- if (res == ACT_P_CREATED) { +- tcf_idr_insert(tn, *act); +- } else { ++ if (res != ACT_P_CREATED) { + /* make sure the program being replaced is no longer executing */ + synchronize_rcu(); + tcf_bpf_cfg_cleanup(&old); +diff --git a/net/sched/act_connmark.c b/net/sched/act_connmark.c +index 1a8f2f85ea1ab..b00105e623341 100644 +--- a/net/sched/act_connmark.c ++++ b/net/sched/act_connmark.c +@@ -139,7 +139,6 @@ static int tcf_connmark_init(struct net *net, struct nlattr *nla, + ci->net = net; + ci->zone = parm->zone; + +- tcf_idr_insert(tn, *a); + ret = ACT_P_CREATED; + } else if (ret > 0) { + ci = to_connmark(*a); +diff --git a/net/sched/act_csum.c b/net/sched/act_csum.c +index 428b1ae00123d..fa1b1fd10c441 100644 +--- a/net/sched/act_csum.c ++++ b/net/sched/act_csum.c +@@ -110,9 +110,6 @@ static int tcf_csum_init(struct net *net, struct nlattr *nla, + if (params_new) + kfree_rcu(params_new, rcu); + +- if (ret == ACT_P_CREATED) +- tcf_idr_insert(tn, *a); +- + return ret; + put_chain: + if (goto_ch) +diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c +index e32c4732ddf83..6119c31dcd072 100644 +--- a/net/sched/act_ct.c ++++ b/net/sched/act_ct.c +@@ -740,8 +740,6 @@ static int tcf_ct_init(struct net *net, struct nlattr *nla, + tcf_chain_put_by_act(goto_ch); + if (params) + call_rcu(&params->rcu, tcf_ct_params_free); +- if (res == ACT_P_CREATED) +- tcf_idr_insert(tn, *a); + + return res; + +diff --git a/net/sched/act_ctinfo.c b/net/sched/act_ctinfo.c +index a91fcee810efa..9722204399a42 100644 +--- a/net/sched/act_ctinfo.c ++++ b/net/sched/act_ctinfo.c +@@ -269,9 +269,6 @@ static int tcf_ctinfo_init(struct net *net, struct nlattr *nla, + if (cp_new) + kfree_rcu(cp_new, rcu); + +- if (ret == ACT_P_CREATED) +- tcf_idr_insert(tn, *a); +- + return ret; + + put_chain: +diff --git a/net/sched/act_gact.c b/net/sched/act_gact.c +index 324f1d1f6d477..faf68a44b8451 100644 +--- a/net/sched/act_gact.c ++++ b/net/sched/act_gact.c +@@ -139,8 +139,6 @@ static int tcf_gact_init(struct net *net, struct nlattr *nla, + if (goto_ch) + tcf_chain_put_by_act(goto_ch); + +- if (ret == ACT_P_CREATED) +- tcf_idr_insert(tn, *a); + return ret; + release_idr: + tcf_idr_release(*a, bind); +diff --git a/net/sched/act_ife.c b/net/sched/act_ife.c +index 778371bac93e2..488d10476e850 100644 +--- a/net/sched/act_ife.c ++++ b/net/sched/act_ife.c +@@ -626,9 +626,6 @@ static int tcf_ife_init(struct net *net, struct nlattr *nla, + if (p) + kfree_rcu(p, rcu); + +- if (ret == ACT_P_CREATED) +- tcf_idr_insert(tn, *a); +- + return ret; + metadata_parse_err: + if (goto_ch) +diff --git a/net/sched/act_ipt.c b/net/sched/act_ipt.c +index 214a03d405cf9..02b0cb67643e9 100644 +--- a/net/sched/act_ipt.c ++++ b/net/sched/act_ipt.c +@@ -189,8 +189,6 @@ static int __tcf_ipt_init(struct net *net, unsigned int id, struct nlattr *nla, + ipt->tcfi_t = t; + ipt->tcfi_hook = hook; + spin_unlock_bh(&ipt->tcf_lock); +- if (ret == ACT_P_CREATED) +- tcf_idr_insert(tn, *a); + return ret; + + err3: +diff --git a/net/sched/act_mirred.c b/net/sched/act_mirred.c +index 27f624971121b..8327ef9793ef8 100644 +--- a/net/sched/act_mirred.c ++++ b/net/sched/act_mirred.c +@@ -194,8 +194,6 
@@ static int tcf_mirred_init(struct net *net, struct nlattr *nla, + spin_lock(&mirred_list_lock); + list_add(&m->tcfm_list, &mirred_list); + spin_unlock(&mirred_list_lock); +- +- tcf_idr_insert(tn, *a); + } + + return ret; +diff --git a/net/sched/act_mpls.c b/net/sched/act_mpls.c +index f786775699b51..f496c99d98266 100644 +--- a/net/sched/act_mpls.c ++++ b/net/sched/act_mpls.c +@@ -273,8 +273,6 @@ static int tcf_mpls_init(struct net *net, struct nlattr *nla, + if (p) + kfree_rcu(p, rcu); + +- if (ret == ACT_P_CREATED) +- tcf_idr_insert(tn, *a); + return ret; + put_chain: + if (goto_ch) +diff --git a/net/sched/act_nat.c b/net/sched/act_nat.c +index ea4c5359e7dfb..8b5d2360a4dc2 100644 +--- a/net/sched/act_nat.c ++++ b/net/sched/act_nat.c +@@ -93,9 +93,6 @@ static int tcf_nat_init(struct net *net, struct nlattr *nla, struct nlattr *est, + if (goto_ch) + tcf_chain_put_by_act(goto_ch); + +- if (ret == ACT_P_CREATED) +- tcf_idr_insert(tn, *a); +- + return ret; + release_idr: + tcf_idr_release(*a, bind); +diff --git a/net/sched/act_pedit.c b/net/sched/act_pedit.c +index b5bc631b96b7e..ff4f2437b5925 100644 +--- a/net/sched/act_pedit.c ++++ b/net/sched/act_pedit.c +@@ -237,8 +237,6 @@ static int tcf_pedit_init(struct net *net, struct nlattr *nla, + spin_unlock_bh(&p->tcf_lock); + if (goto_ch) + tcf_chain_put_by_act(goto_ch); +- if (ret == ACT_P_CREATED) +- tcf_idr_insert(tn, *a); + return ret; + + put_chain: +diff --git a/net/sched/act_police.c b/net/sched/act_police.c +index 89c04c52af3da..8fd23a8b88a5e 100644 +--- a/net/sched/act_police.c ++++ b/net/sched/act_police.c +@@ -201,8 +201,6 @@ static int tcf_police_init(struct net *net, struct nlattr *nla, + if (new) + kfree_rcu(new, rcu); + +- if (ret == ACT_P_CREATED) +- tcf_idr_insert(tn, *a); + return ret; + + failure: +diff --git a/net/sched/act_sample.c b/net/sched/act_sample.c +index 514456a0b9a83..74450b0f69fc5 100644 +--- a/net/sched/act_sample.c ++++ b/net/sched/act_sample.c +@@ -116,8 +116,6 @@ static int tcf_sample_init(struct net *net, struct nlattr *nla, + if (goto_ch) + tcf_chain_put_by_act(goto_ch); + +- if (ret == ACT_P_CREATED) +- tcf_idr_insert(tn, *a); + return ret; + put_chain: + if (goto_ch) +diff --git a/net/sched/act_simple.c b/net/sched/act_simple.c +index 6120e56117ca1..6b0617de71c00 100644 +--- a/net/sched/act_simple.c ++++ b/net/sched/act_simple.c +@@ -156,8 +156,6 @@ static int tcf_simp_init(struct net *net, struct nlattr *nla, + goto release_idr; + } + +- if (ret == ACT_P_CREATED) +- tcf_idr_insert(tn, *a); + return ret; + put_chain: + if (goto_ch) +diff --git a/net/sched/act_skbedit.c b/net/sched/act_skbedit.c +index f98b2791ecec4..f736da513d53a 100644 +--- a/net/sched/act_skbedit.c ++++ b/net/sched/act_skbedit.c +@@ -214,8 +214,6 @@ static int tcf_skbedit_init(struct net *net, struct nlattr *nla, + if (goto_ch) + tcf_chain_put_by_act(goto_ch); + +- if (ret == ACT_P_CREATED) +- tcf_idr_insert(tn, *a); + return ret; + put_chain: + if (goto_ch) +diff --git a/net/sched/act_skbmod.c b/net/sched/act_skbmod.c +index 888437f97ba62..e858a0a9c0457 100644 +--- a/net/sched/act_skbmod.c ++++ b/net/sched/act_skbmod.c +@@ -190,8 +190,6 @@ static int tcf_skbmod_init(struct net *net, struct nlattr *nla, + if (goto_ch) + tcf_chain_put_by_act(goto_ch); + +- if (ret == ACT_P_CREATED) +- tcf_idr_insert(tn, *a); + return ret; + put_chain: + if (goto_ch) +diff --git a/net/sched/act_tunnel_key.c b/net/sched/act_tunnel_key.c +index d55669e147417..bdaa04a9a7fa4 100644 +--- a/net/sched/act_tunnel_key.c ++++ b/net/sched/act_tunnel_key.c +@@ 
-392,9 +392,6 @@ static int tunnel_key_init(struct net *net, struct nlattr *nla, + if (goto_ch) + tcf_chain_put_by_act(goto_ch); + +- if (ret == ACT_P_CREATED) +- tcf_idr_insert(tn, *a); +- + return ret; + + put_chain: +diff --git a/net/sched/act_vlan.c b/net/sched/act_vlan.c +index 08aaf719a70fb..3c26042f4ea6d 100644 +--- a/net/sched/act_vlan.c ++++ b/net/sched/act_vlan.c +@@ -228,8 +228,6 @@ static int tcf_vlan_init(struct net *net, struct nlattr *nla, + if (p) + kfree_rcu(p, rcu); + +- if (ret == ACT_P_CREATED) +- tcf_idr_insert(tn, *a); + return ret; + put_chain: + if (goto_ch) +diff --git a/net/sctp/auth.c b/net/sctp/auth.c +index 4278764d82b82..1d898ee4018c9 100644 +--- a/net/sctp/auth.c ++++ b/net/sctp/auth.c +@@ -494,6 +494,7 @@ int sctp_auth_init_hmacs(struct sctp_endpoint *ep, gfp_t gfp) + out_err: + /* Clean up any successful allocations */ + sctp_auth_destroy_hmacs(ep->auth_hmacs); ++ ep->auth_hmacs = NULL; + return -ENOMEM; + } + +diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c +index 70b203e5d5fd0..515d295309a86 100644 +--- a/net/tls/tls_sw.c ++++ b/net/tls/tls_sw.c +@@ -2137,10 +2137,15 @@ void tls_sw_release_resources_tx(struct sock *sk) + struct tls_context *tls_ctx = tls_get_ctx(sk); + struct tls_sw_context_tx *ctx = tls_sw_ctx_tx(tls_ctx); + struct tls_rec *rec, *tmp; ++ int pending; + + /* Wait for any pending async encryptions to complete */ +- smp_store_mb(ctx->async_notify, true); +- if (atomic_read(&ctx->encrypt_pending)) ++ spin_lock_bh(&ctx->encrypt_compl_lock); ++ ctx->async_notify = true; ++ pending = atomic_read(&ctx->encrypt_pending); ++ spin_unlock_bh(&ctx->encrypt_compl_lock); ++ ++ if (pending) + crypto_wait_req(-EINPROGRESS, &ctx->async_wait); + + tls_tx_records(sk, -1); +diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c +index ec559dbad56ea..7bc4f37655237 100644 +--- a/net/wireless/nl80211.c ++++ b/net/wireless/nl80211.c +@@ -3975,6 +3975,9 @@ static int nl80211_del_key(struct sk_buff *skb, struct genl_info *info) + if (err) + return err; + ++ if (key.idx < 0) ++ return -EINVAL; ++ + if (info->attrs[NL80211_ATTR_MAC]) + mac_addr = nla_data(info->attrs[NL80211_ATTR_MAC]); + +diff --git a/net/xfrm/xfrm_interface.c b/net/xfrm/xfrm_interface.c +index 0ab2b35c95deb..00af31d3e7744 100644 +--- a/net/xfrm/xfrm_interface.c ++++ b/net/xfrm/xfrm_interface.c +@@ -293,7 +293,7 @@ xfrmi_xmit2(struct sk_buff *skb, struct net_device *dev, struct flowi *fl) + } + + mtu = dst_mtu(dst); +- if (!skb->ignore_df && skb->len > mtu) { ++ if (skb->len > mtu) { + skb_dst_update_pmtu_no_confirm(skb, mtu); + + if (skb->protocol == htons(ETH_P_IPV6)) { +diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c +index f3423562d9336..aaea8cb7459d8 100644 +--- a/net/xfrm/xfrm_state.c ++++ b/net/xfrm/xfrm_state.c +@@ -1016,7 +1016,8 @@ static void xfrm_state_look_at(struct xfrm_policy *pol, struct xfrm_state *x, + */ + if (x->km.state == XFRM_STATE_VALID) { + if ((x->sel.family && +- !xfrm_selector_match(&x->sel, fl, x->sel.family)) || ++ (x->sel.family != family || ++ !xfrm_selector_match(&x->sel, fl, family))) || + !security_xfrm_state_pol_flow_match(x, pol, fl)) + return; + +@@ -1029,7 +1030,9 @@ static void xfrm_state_look_at(struct xfrm_policy *pol, struct xfrm_state *x, + *acq_in_progress = 1; + } else if (x->km.state == XFRM_STATE_ERROR || + x->km.state == XFRM_STATE_EXPIRED) { +- if (xfrm_selector_match(&x->sel, fl, x->sel.family) && ++ if ((!x->sel.family || ++ (x->sel.family == family && ++ xfrm_selector_match(&x->sel, fl, family))) && + 
security_xfrm_state_pol_flow_match(x, pol, fl)) + *error = -ESRCH; + } +@@ -1069,7 +1072,7 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr, + tmpl->mode == x->props.mode && + tmpl->id.proto == x->id.proto && + (tmpl->id.spi == x->id.spi || !tmpl->id.spi)) +- xfrm_state_look_at(pol, x, fl, encap_family, ++ xfrm_state_look_at(pol, x, fl, family, + &best, &acquire_in_progress, &error); + } + if (best || acquire_in_progress) +@@ -1086,7 +1089,7 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr, + tmpl->mode == x->props.mode && + tmpl->id.proto == x->id.proto && + (tmpl->id.spi == x->id.spi || !tmpl->id.spi)) +- xfrm_state_look_at(pol, x, fl, encap_family, ++ xfrm_state_look_at(pol, x, fl, family, + &best, &acquire_in_progress, &error); + } + +@@ -1438,6 +1441,30 @@ out: + EXPORT_SYMBOL(xfrm_state_add); + + #ifdef CONFIG_XFRM_MIGRATE ++static inline int clone_security(struct xfrm_state *x, struct xfrm_sec_ctx *security) ++{ ++ struct xfrm_user_sec_ctx *uctx; ++ int size = sizeof(*uctx) + security->ctx_len; ++ int err; ++ ++ uctx = kmalloc(size, GFP_KERNEL); ++ if (!uctx) ++ return -ENOMEM; ++ ++ uctx->exttype = XFRMA_SEC_CTX; ++ uctx->len = size; ++ uctx->ctx_doi = security->ctx_doi; ++ uctx->ctx_alg = security->ctx_alg; ++ uctx->ctx_len = security->ctx_len; ++ memcpy(uctx + 1, security->ctx_str, security->ctx_len); ++ err = security_xfrm_state_alloc(x, uctx); ++ kfree(uctx); ++ if (err) ++ return err; ++ ++ return 0; ++} ++ + static struct xfrm_state *xfrm_state_clone(struct xfrm_state *orig, + struct xfrm_encap_tmpl *encap) + { +@@ -1494,6 +1521,10 @@ static struct xfrm_state *xfrm_state_clone(struct xfrm_state *orig, + goto error; + } + ++ if (orig->security) ++ if (clone_security(x, orig->security)) ++ goto error; ++ + if (orig->coaddr) { + x->coaddr = kmemdup(orig->coaddr, sizeof(*x->coaddr), + GFP_KERNEL); +@@ -1507,6 +1538,7 @@ static struct xfrm_state *xfrm_state_clone(struct xfrm_state *orig, + } + + memcpy(&x->mark, &orig->mark, sizeof(x->mark)); ++ memcpy(&x->props.smark, &orig->props.smark, sizeof(x->props.smark)); + + if (xfrm_init_state(x) < 0) + goto error; +@@ -1518,7 +1550,7 @@ static struct xfrm_state *xfrm_state_clone(struct xfrm_state *orig, + x->tfcpad = orig->tfcpad; + x->replay_maxdiff = orig->replay_maxdiff; + x->replay_maxage = orig->replay_maxage; +- x->curlft.add_time = orig->curlft.add_time; ++ memcpy(&x->curlft, &orig->curlft, sizeof(x->curlft)); + x->km.state = orig->km.state; + x->km.seq = orig->km.seq; + x->replay = orig->replay; +diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c +index 1f60124eb19bb..a30d62186f5e9 100644 +--- a/tools/perf/builtin-top.c ++++ b/tools/perf/builtin-top.c +@@ -683,7 +683,9 @@ repeat: + delay_msecs = top->delay_secs * MSEC_PER_SEC; + set_term_quiet_input(&save); + /* trash return*/ +- getc(stdin); ++ clearerr(stdin); ++ if (poll(&stdin_poll, 1, 0) > 0) ++ getc(stdin); + + while (!done) { + perf_top__print_sym_table(top); +diff --git a/tools/perf/tests/topology.c b/tools/perf/tests/topology.c +index 4a800499d7c35..22daf2bdf5faf 100644 +--- a/tools/perf/tests/topology.c ++++ b/tools/perf/tests/topology.c +@@ -33,10 +33,8 @@ static int session_write_header(char *path) + { + struct perf_session *session; + struct perf_data data = { +- .file = { +- .path = path, +- }, +- .mode = PERF_DATA_MODE_WRITE, ++ .path = path, ++ .mode = PERF_DATA_MODE_WRITE, + }; + + session = perf_session__new(&data, false, NULL); +@@ -63,10 +61,8 @@ static int check_cpu_topology(char *path, 
struct perf_cpu_map *map) + { + struct perf_session *session; + struct perf_data data = { +- .file = { +- .path = path, +- }, +- .mode = PERF_DATA_MODE_READ, ++ .path = path, ++ .mode = PERF_DATA_MODE_READ, + }; + int i; +